Every news story, narrative account, op-ed, and letter is colored by the bias of its author's experiences, beliefs, and current knowledge. Media bias, fact-checking, and social media screening have been hot topics at some of the most important Artificial Intelligence conferences, attracting workshops, keynotes, and panel discussions on how to identify and prevent the spread of misinformation online. Since the 2016 presidential election, the number of searches for "Fake News", as measured by Google Trends, has increased roughly 25-fold. 'Fake News' is the logical end state of the commercial market for attention: as revenue is tied ever more closely to audience engagement, whether through advertisements or behind a paywall, news becomes entertainment, and the most psychologically satisfying content wins.

We therefore believe there needs to be a tool that helps a user judge how much to trust a particular source on a specific topic. To validate sources, we designed and implemented a data-driven infrastructure, beginning through the lens of a limited set of impactful media events and a large number of stories characterizing those events over an extended period of time. We used natural language processing, visualization strategies, and algorithmic reasoning to help uncover flaws and potential triggers in the narrative choices made by particular individuals and organizations. Using algorithms such as Logistic Regression, KNN, SVM, Perceptron, and Random Forest, we classified our articles into three bins (highly biased, maybe biased, and unbiased) with an accuracy of 79.20 percent. Notably, our model can predict whether an article is biased with a high recall of 89 percent. We have handed our code back to our mentors at the AI for Good Foundation, who will set up a new project team to take the project forward and tailor it for commercial use.
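The three-bin classification described above can be sketched as a standard text-classification pipeline. The snippet below is a minimal illustration, not our actual system: the toy corpus, labels, and TF-IDF feature choice are assumptions made for the example, and Logistic Regression stands in for the full set of classifiers we compared.

```python
# Illustrative sketch of a three-class bias classifier.
# Assumptions: TF-IDF features over article text; the tiny corpus and
# labels below are made up for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in corpus; real training data would be labeled news articles.
texts = [
    "The senator's disastrous plan will ruin everything we hold dear",
    "Critics slam the reckless, shameful policy as a total catastrophe",
    "Some observers suggest the policy may have unintended drawbacks",
    "The proposal has drawn mixed reactions from analysts",
    "The bill passed the senate by a vote of 61 to 39 on Tuesday",
    "Officials announced the budget figures at a press conference",
]
labels = [
    "highly_biased", "highly_biased",
    "maybe_biased", "maybe_biased",
    "unbiased", "unbiased",
]

# Logistic Regression is one of the classifiers we evaluated; KNN, SVM,
# Perceptron, and Random Forest drop in via the same pipeline interface.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

prediction = model.predict(["Critics slam the shameful plan"])[0]
print(prediction)
```

Swapping the final pipeline step is how the different algorithms can be compared under identical features, which makes accuracy and recall numbers directly comparable across models.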