Combatting Populist Rhetoric on Twitter: The First Draft

Background

Our original problem was how to combat populism in the media, and how this populism is amplified by shares and conversations on social media. Populism is a difficult phenomenon to define, as it is a broad concept with many facets. We resolved to focus on the recent development of “post-truth” politics: facts being ignored in politics in favor of emotional appeals.

The political climate has been tumultuous in the past year, with upsets like Brexit and the unexpected election of Trump rocking the political and economic landscape. Both campaigns were driven largely by non-factual information: PolitiFact (a politics fact-checking website) determined that 70 percent of Trump’s “factual” statements during his campaign fell into the categories of “mostly false”, “false” and “pants on fire” untruths (PolitiFact, 2016). The Brexit campaign infamously advertised that leaving the EU would save £350 million a week for the National Health Service, a claim that turned out to be false (Helm, 2016). Publications like The New York Times and The Guardian have written think-pieces on the issue, which indicates that the phenomenon is increasingly being brought to the public’s attention (Davies, 2016; Viner, 2016).

Reading articles about lying politicians has limited influence on individuals’ lives. People can of course be more aware, but that may not shield them effectively from the bombardment of emotional appeals. We want to create a service that helps people dissect the messages targeted at them in real time.

One of the most vulnerable services we identified is Twitter. Because Twitter is a fast-paced information channel, misinformation can often go unnoticed by the user. Politicians are increasingly using this medium to reach their voters with emotional appeals (Trump is infamous for his Twitter rants (Shabad, 2016)). We want to create a service that helps users fact-check politicians’ tweets in real time.

Another problem we identified is how to remain impartial. Media houses are largely regarded as partisan toward either Republicans or Democrats. We want to seek the truth in a non-partisan manner. Wikipedia came to mind as the ultimate non-partisan service, edited and fact-checked by people everywhere. We decided that we would seek to emulate Wikipedia’s model: bringing facts to the people, from the people.

Currently, users solve this problem by Googling. Sites such as PolitiFact and FactCheck.org can be used to check statements made by politicians. This, however, requires dedication and time from the user. We want to remove that investment of time and dedication, and make fact-checking instant. Our end users can download a plugin for their Twitter application that updates in real time. Users from all over the world will be able to fact-check tweets and give them a truth rating. This gives the end user an instant answer to whether the tweet they are viewing is reliable. It will require moderators to keep the information non-partisan.

The key parties in our solution are the users (both contributors and non-contributors), Twitter (we will use their API), and of course the politicians. Additionally, the contributors will be using third-party media resources to fact-check the tweets. Perhaps we could cooperate with PolitiFact and FactCheck.org to bring their expertise to Twitter.

In the short run we hope to affect the users. They will receive more reliable signals about whether the claims politicians are supplying are true. This will hopefully lead to more informed voting behaviour, and more decisions based on actual facts.

In the long run we could potentially affect the behaviour of politicians. If they start getting called out on the untruths they speak, perhaps non-factual emotional appeals will no longer pay off, and the world will have more reliable politicians.

In the following sections we will go through the concept, value, societal goals, and limitations in more detail.

 

Our Concept

What

An add-on for web browsers (Windows, macOS), with

  • Fact-checking functions associated with social media (Twitter)
  • A user-generated community (wiki, discussion board)

Who it’s for

Internet and mobile users who

  • Want to dig into the truth but are not sure where to start
  • Want easy access to the source of information
  • Wish to contribute and/or spread the facts

How it works

  1. Install the add-on in your web browser
  2. Click the ‘Fact-Check’ button
  3. See visual indicators (Green: close to fact / Red: close to false information or rumors)
  4. Click ‘Raising Voice’ to send feedback and/or contribute new information to the wiki

*Algorithm-based, providing links

*But aims to become increasingly user-generated as more people contribute
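The algorithm-based first step described above can be sketched as follows. This is only an illustrative sketch: the claim database, its field names, and the substring-matching rule are hypothetical placeholders, not a real fact-checking API; a production version would need far more robust claim matching.

```python
def fact_check(tweet, claim_db):
    """Return ('green'|'red'|'unknown', [source links]) for a tweet."""
    text = tweet.lower()
    for claim in claim_db:
        # Naive matching: look for a known checked claim inside the tweet.
        if claim["phrase"] in text:
            colour = "green" if claim["true"] else "red"
            return colour, claim["sources"]
    return "unknown", []  # no verdict yet; the community can fill the gap

# Toy database with one already-debunked claim (source from our references).
claim_db = [
    {"phrase": "350 million for the nhs",
     "true": False,
     "sources": ["https://www.theguardian.com/politics/2016/sep/10/brexit-camp-abandons-350-million-pound-nhs-pledge"]},
]

indicator, links = fact_check(
    "Leaving the EU will save £350 million for the NHS every week!",
    claim_db,
)
# A red indicator with a link to the debunking article would be shown.
```

The point of the sketch is the fallback path: when the algorithm finds nothing, the tweet is handed to the user-generated community described in the notes above.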

What are other similar concepts

How do we differentiate

  • Focus more on user-generated information and discussion (collaborative intelligence)
  • Support other social media
  • Encourage user participation by providing features and events such as ‘this week’s fact-finder’, ‘cracking the urban legend’ etc.
  • Provide intuitive user interfaces

Diagram

https://docs.google.com/drawings/d/17VYVb47W5fKu1K-N3S2x0MELSBsIsO269eTL7Y0kAAo/edit

(UI mockups coming for final draft)

 

Value Proposition: Know What to Question

We identified our main customer segment as Twitter users who follow politicians. These users’ Twitter feeds are flooded with tweets that influence their decision making, political views and general understanding of world events. Social media, and Twitter in particular, has created an arena for political figures to push their views with fast, short messages that are not always fact-checked by their readers. The user faces a problem of too much information and too little time for fact-checking.

But knowing whether a tweet is true is essential for the user to gain a fact-based, rational and objective world view, and e.g. to be able to vote for the most suitable candidate in elections. A Twitter user who follows political figures would gain a lot from knowing when a tweet is not credible and should be questioned.

Our product lets a Twitter user know what to question by displaying a small indicator of a tweet’s ‘truth value’. The truth value is formed collectively by the community, simply by upvoting or downvoting the tweet. The user gains a fast visual cue of how the community rates a particular tweet, which immediately tells the user what other people think about its truthfulness. This also helps lessen and moderate the flood of questionable tweets.
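A minimal sketch of how the community’s up- and downvotes could be folded into the indicator shown next to a tweet. The thresholds here (a 10-vote minimum before showing a verdict, and 40%/60% bands) are illustrative assumptions, not a finalized design.

```python
def truth_indicator(upvotes, downvotes, min_votes=10):
    """Map community votes to the colour shown next to a tweet."""
    total = upvotes + downvotes
    if total < min_votes:
        return "grey"      # too few votes to show any verdict yet
    share_true = upvotes / total
    if share_true >= 0.6:
        return "green"     # community leans "true"
    if share_true <= 0.4:
        return "red"       # community leans "false"
    return "yellow"        # contested: worth questioning either way
```

The minimum-vote threshold matters for moderation: a tweet with two votes says more about two users than about the community, so it stays unrated until enough people have weighed in.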

There is value also to other parties (like politicians, governments, big media houses, and citizens who do not use Twitter), in both the short and long run. These will be examined in the final draft.

 

Societal goals

Our ultimate vision is a society that thinks about and questions everything it encounters, which helps it make better decisions. In the short run, we hope to work toward that by encouraging youngsters on social media to check tweets’ credibility and make healthier decisions about whether to follow the source of a tweet. In the long run, however, we want to include other social platforms, which are becoming a vital source of news.

We are mainly targeting people who belong to the millennial generation and Generation Z: young adult social media users who are becoming interested in politics, aged roughly between 17 and 35.

Once people start having no tolerance for emotion-based statements on social media, we know we have done a good job. Nonetheless, “good” is hard to quantify, hence we consider a larger user base to be a good indicator. For instance, the ratio of people who use the service to those who vote in an election can quantify whether the service is useful. Still, a large number of users does not necessarily make the tool factual, which is discussed in detail in the next section.

We hope to create a ripple effect that starts from the people who stop buying into post-truth tweets. This would decrease the popularity of populist politicians, who in turn would no longer rely on post-truth statements to gain acceptance. This would also create healthier competition between political parties, based solely on factual arguments.

 

Societal Limitations

Our solution is, of course, subject to certain limitations. Vital to our solution is achieving a sufficient user base, since the content is going to be user-generated. As there is already a wide range of plugins and extra features on the market, it might be hard to stand out and gain trust among potential users. To separate our product from the others, and above all from the fake ones, we must invest in marketing and emphasize that our company remains impartial and the service content is fully user-generated. Additionally, the literature on designing information technology proposes that, to earn users’ trust, technology must account for human values in a principled and comprehensive manner throughout the design process, since people tend to trust other people rather than technology (Friedman et al., 2000). This means we must operate transparently; our aim must be clear to everyone.

Our specific target group is young adults who are accustomed to using social media and are aware of politics. They are a relatively easy group to reach, since they are skilled computer users and are able to question content on the internet. Moreover, they might already be seeking ways to separate trustworthy sources from the rest.

What about people who do not know how to use the internet and social media properly, such as the elderly? They could well be interested in the correctness of the news they read, but firstly they are not necessarily aware of the need to question the news, and secondly, even if they are aware, they may not know what to do about it. This group might be hard or even impossible to reach. Then there are people who do not follow politics or simply do not care about post-truth tweets. However, they might become interested in our plugin if there were enough hype around the theme of populism and our product had an established position as a reliable wiki plugin.

Our goals are to affect the behaviour of politicians by making them realize that emotional appeals are not worthwhile, and secondly, to have a society that questions things and makes better decisions based on facts. But how far can we really get with a plugin? How much can our solution really change? What if the effect is the contrary: people stop questioning the things they encounter and only trust technical solutions like plugins?


 

Sources

PolitiFact. (2016). “Donald Trump’s File.” Available at: http://www.politifact.com/personalities/donald-trump/ [Accessed 23.11.2016]

Helm, T. (10th September, 2016). “Brexit camp abandons £350m-a-week NHS funding pledge”. The Guardian. Available at: https://www.theguardian.com/politics/2016/sep/10/brexit-camp-abandons-350-million-pound-nhs-pledge [Accessed 23.11.2016]

Davies, W. (24th August, 2016). “The Age of Post-Truth Politics”. The New York Times. Available at: http://www.nytimes.com/2016/08/24/opinion/campaign-stops/the-age-of-post-truth-politics.html [Accessed 23.11.2016]

Friedman, B., Kahn, P. H. & Howe, D. C. (2000). “Trust Online”. Communications of the ACM. Vol. 43(12), p. 34-40.

Viner, K. (12th July, 2016). “How technology disrupted the truth”. The Guardian. Available at: https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth [Accessed 23.11.2016]

Shabad, R. (22nd November, 2016). “Trump meeting with New York Times back on after Trump Twitter rant”. CBS News. Available at: http://www.cbsnews.com/news/donald-trump-twitter-rant-new-york-times-cancels-meeting/ [Accessed 23.11.2016]

 

Other Teams’ Ads

http://confirmed-biased.tumblr.com/

  • The ad is intriguing, however it’s not clear what the product is.
  • “See what they see…” is nice, might give a hint but maybe we are biased because we heard about the product. Who are we? Who are they? This is a good tagline, but maybe a two-sentence explainer would have served the purpose.
  • Something to do with phones, is it an app? Is it a plugin? What does it do?
  • It’s a call to action, so written in user’s language
  • The images are inspirational (references to artworks), generates emotion
  • Not fact-based, no proof on why solution would be the best, no facts given
  • The imagery stands out, however I would not follow the link because I don’t know what it is
  • It doesn’t clearly address a societal problem
  • There is a call to action, but there could be a link (or a QR code), and clear expression to where it leads
  • Design perspective: It looks like early-day Photoshop or Illustrator CD boxes…? Not sure what to concentrate on, or in what context these well-known paintings are associated with each other. A slightly lower-chroma background color would be better.

https://nlvksos.wordpress.com/

There was no ad here until Sunday

  • The ad doesn’t explain what social media bubbles are, not everyone knows what it is
  • The name of the solution is not clear, you could use a bigger font or something to make it stand out
  • You don’t explain where to download your solution or take this further (improve your call to action)
  • The picture is nice

https://someeducation4all.wordpress.com/

  • The ad is very clear: learn things for free. We think this pretty much targets all requirements.
  • However, we can think of so many solutions that solve this problem: coursera, codecademy, khanacademy, and duolingo. What is the differentiating factor here? Would be nice to show that.
  • The problem is, how does this stand out?
  • The call to action is clear, there is a QR code to follow, which is great
  • It’s not very inspirational, it’s quite boring, just an ad among others
  • The imagery used (light bulb) is quite boring and predictable.. how could it be made to stand out more?
  • However the graphics are nice and clear and the colors are neutral, which is good for an education platform.
  • Doesn’t generate much emotion, which we think is good as education is more a cerebral thing and not emotional
  • It doesn’t really address a societal problem (unless the problem is that education is too expensive…?)
  • Call to action is clear “Join us!”
  • Quantify: how much does education usually cost?
  • Differentiate! 🙂
  • Design perspective: Good choice of color, font and composition. However, maybe too ‘right’ – and have seen many similar posters like this.

http://ethicalpublishing.tumblr.com/

  • This is a very clear ad, great job.
  • There are some anti-bullying campaigns online (#antibullying, #stopbullying, etc)
  • However this one has a clear message about not bullying through posting embarrassing photos.
  • The hashtag is not clearly referring to photos, so maybe that could use some work?
  • Is written in consumer language, speaking directly to user (“did you really think…”)
  • The picture is quite neutral, so it doesn’t generate much emotion.
  • Part of the team were not so emotionally compelled to follow the link or use the hashtag.
  • Also, part of the team thinks the image used kind of contradicts the message. It is more intriguing than a typical image of a person saddened by bullying, but maybe you could show that this embarrassing pose is okay to be seen in real life but not okay to post on social media.
  • However it’s clearly actionable through the link and hashtag
  • Make it fact-based: how many people suffer from online bullying?
  • Design perspective: Composition-wise, it would have been slightly better with extra space on top and the line “PLEASE DON’T POST~” positioned there in the center, or on top of the image.

http://itssocialmediathings.tumblr.com/

  • The purpose is clear, however the product is not
  • Is it an app? Is it a plugin?
  • The UI could be shown instead of text on the screen (the text is also quite hard to read when it’s tilted, and the graphical hierarchy of text could use some work)
  • The call to action is sort of clear (go to our facebook and twitter), however the customer could directly be asked to do something (“download our app” for example)
  • Not very compelling, doesn’t generate emotion
  • Yes I’m constantly online, but what does that have to do with this hand and this phone? Maybe the image could be used to generate more emotion and to get the viewer to relate to the problem?
  • Make it fact-based. You could say how many hours an average person spends online, would be compelling.
  • There are others like this, such as Franz
  • Design perspective: Too much text, packed too closely together. Not sure where to concentrate.

http://the-happy-echo.blogspot.fi/

  • The ad is very clear on the concept and the product
  • It’s a filter for social media
  • Clear call to action (follow this QR code)
  • The imagery used is very descriptive (pushing the negativity aside), good job
  • However the emotion this triggers in me is annoyance, this sounds like censorship, yet this means that I am not in your target customer group
  • I’m not sure what the societal problem you are addressing is. Is it negativity? Is it bullying?
  • Why is negativity a bad thing? Doesn’t everyone feel negative sometimes? Isn’t that just real life? Why try to paint an unrealistic view of life? Why should I use this?
  • Design perspective: Curtain (especially black curtains) are usually being used, in films to advertisements, as a visual representation of concealment – to hide something or trying to avoid the attention. So visually this poster is expressing sort of completely opposite message from what they are trying to deliver from their service.

http://politicsisboring.tumblr.com/

  • The ad is compelling and funny (nice wordplay)
  • The colors grab my attention
  • The ad is a bit cryptic, is it a social media campaign?
  • It’s good to use a “celebrity’s” face, it will speak to young people
  • You could more clearly address the problem: POLITICS, since it doesn’t have this keyword anywhere
  • Perhaps a smaller copy text under the images and big text could clarify the concept more
  • A QR code would also make the link easier to follow
  • The look makes it stand out
  • The societal problem being addressed is good, and I can’t think of other solutions that address it (just make the problem clearer)
  • Design perspective: Probably better if they remove the YouTube UI from the bottom image – it does not look well refined at the moment. The word “OR” should be slightly bigger, or positioned differently, as it does not stand out much at the moment.

http://somekurssi2016.tumblr.com/

  • Really nice graphics
  • Compelling image, I can imagine myself in this situation (and experience this situation regularly)
  • Nice font, nice graphs
  • Clear hashtag call to action
  • However, is the hashtag everything you’re advertising? What’s the rest of the solution? Is it only a social media campaign?
  • I know there are some apps you can download for your computer that will control the brightness by time of day, is this like that? Differentiate yourself.
  • The problem you are addressing is real and experienced by a lot of people, good job.
  • Design perspective: Straight-forward and easy to understand. However, it does not have any visual indication or description that this is an app or web service. So I first thought this was some sort of medical service advertisement – visually it does look like they are selling something very serious (dark background, static font style, the word “fight”).

https://privacyandsnss.wordpress.com/

  • The product is clear, it’s a plugin
  • However, why should I be concerned about this problem?
  • Quantify: what data are browser providers getting of me that I would not want them to know?
  • Call to action is clear (install now), however the font contrast is bad, it’s not very visible
  • The imagery used is not very compelling, it’s quite cliché and doesn’t really stand out
  • Could you use an image that would bring more emotions from the viewer?
  • Is this a large societal problem? Why?
  • Design perspective: Using an already well-known image, aka “Uncle Sam” — it does visually stand out. However, we’ve seen many posters and advertisements that use Uncle Sam already. You could go with something more original.

http://social-media-misinformation.tumblr.com/

  • The purpose is clear and written in the user’s language, directly addressing them: “Do not believe everything you see on the internet”
  • There’s a clear call to action: download our app
  • The QR code makes it easy to access
  • Could be more fact based: what % of articles are fake? Quantify.
  • How does it work? How do you vet your information?
  • The picture is funny and generates emotion
  • Justify that this is a societal problem! How does believing hoaxes affect society?
  • Design perspective: Visually appealing and stands-out. Good visual representation of their idea and goal.

http://cs-e5610.tumblr.com/

  • The solution and product is very clear
  • There’s a clear call to action: it’s available in these places
  • The using of recommenders was clever
  • How does it work? Where do you get your information?
  • The Hillary joke is funny
  • You could make it more visually appealing, now there’s many different fonts and colors…
  • There are similar solutions (klikinsäästäjät on facebook etc), how do you differentiate?
  • Why would I want to save a click? Will it save time? Will it make me smarter? Convince me.
  • Quantify: how many of links are fakes? Why is this?
  • Design perspective: Too much text with the same size, fonts and colors. Not sure which ‘catchy tagline’ the reader should focus on immediately when looking at this poster. Using a slightly lower-chroma color for the relatively less important texts (e.g., the text on the web browser, “Save your clicks at~”, “Also available on~”), or making them smaller than the tagline, would have made it visually much more intuitive.

First Thoughts on Our Solution

Needs

The problem of consumers not knowing what media to trust. Is the media they are consuming biased? Is the media telling the truth? The gatekeepers of media can be sorted into 3 categories: traditional big news organizations like BBC and CNN, countries’ governments (who may plant or influence information), and of course social media platforms like Facebook and Twitter.

Other stakeholders may be companies looking to influence the creation of media, through investing into political campaigns, or lobbying for their own agenda in news publications. This way companies are looking to influence consumer behaviour through the channels of news organizations or platforms that the customers trust to be unbiased.

When consumers trust media that is not fact-based, they end up making voting (and other) choices that are not based in reality.


 

Approach

Our solution would be a plugin for browsers that users can download (like AdBlock), which would enable users to see which information is fact-based (ideally with a grading system green-yellow-red, or something similar).

Additionally, our solution would include a system that alerts the user when their news feed is being highly tailored toward their interests (as on Twitter), so they could then select a more neutral news feed to avoid confirmation bias.
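The tailoring alert could work along these lines. This is a sketch under stated assumptions: we assume feed items have already been labeled with topics somehow (itself a hard problem), and the 0.8 threshold and top-2 cutoff are arbitrary illustrative choices.

```python
from collections import Counter

def bubble_alert(feed_topics, threshold=0.8, top_n=2):
    """Warn if the top_n topics cover more than `threshold` of the feed."""
    counts = Counter(feed_topics)
    top_share = sum(n for _, n in counts.most_common(top_n)) / len(feed_topics)
    return top_share > threshold

# A feed dominated by one topic trips the alert; a varied feed does not.
feed = ["politics"] * 8 + ["sports", "science"]
```

When the alert fires, the plugin could offer the neutral or deliberately mixed feed described above instead of the tailored one.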

The business problem is keeping this solution completely neutral. This means funding from completely unbiased contributors, and those are difficult to find, as everyone has an agenda. Ideally this would work best as a non-profit open-source-driven project.

In the spirit of open source, the community can also contribute by adding sources or assessing whether claims are backed by evidence. For example, if a piece of news is inaccurate, a user can say so by linking to an article that proves it.
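One way to enforce that rule is to make evidence a precondition: a challenge to a piece of news only counts if it links to at least one supporting article. The record structure and field names below are hypothetical illustrations, not a finalized schema.

```python
def valid_disputes(disputes):
    """Keep only disputes that cite at least one source link."""
    return [d for d in disputes if d.get("sources")]

# Alice's dispute carries evidence and counts; Bob's bare claim does not.
disputes = [
    {"user": "alice", "claim": "the figure is outdated",
     "sources": ["https://www.factcheck.org/"]},
    {"user": "bob", "claim": "this is fake",
     "sources": []},
]
```

Filtering out unsourced disputes keeps the wiki side of the service closer to facts than to shouting matches, which matches the non-partisan goal stated earlier.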


 

Benefits

Our approach would encourage the content consumers to pay more attention to what they are reading. Is the source credible? Is there a confirmation bias happening (due to filters)? Are the facts stated based on actual evidence?

This would make our users more informed and aware consumers of media, who are more likely to make voting and other choices based on facts, not populist propaganda. More informed citizens mean smarter voters, which makes for a better world for everyone.

This would teach consumers media literacy skills, and eventually they could learn to doubt claims that lack evidence and sources that seem sketchy.

 

Alternatives

Politifact – Collects fact ratings on politicians. Has been accused of being biased toward certain candidates (the company that owns Politifact has donated to Hillary Clinton, but then again, so has Donald Trump).

FactCheck.org – Non-partisan “consumer advocate for voters that aims to reduce the level of deception and confusion in U.S. politics”. Has highly sourced articles on specific political candidates and figures. Has gained recognition and won numerous awards for its contributions to political journalism.

Snopes.com – A website covering mostly urban legends, Internet rumors and e-mail forwards. Also fact checks some political rumors. In 2012, FactCheck.org reviewed a sample of Snopes’ responses to political rumors, and found them to be free from bias in all cases.

Google’s truth algorithm – Has wide reach through Google’s user base. Aims to place ratings on Google’s search results. Probably has the best chance of influencing what article people click on.


Our solution would provide a truly unbiased source of information that people could overlay on the articles they read. In this way, it would be more useful than Google’s truth algorithm (which is useful while searching for articles, but not while reading them).

Additionally, it would be useful if our solution could somehow show the existence and prevalence of filters in the user’s news feed, and offer the option of a neutral sorting algorithm, or one that deliberately picks articles with different viewpoints. However, the technical realization of this solution is very difficult.

Direction of Attention and Political Agendas

Attention is the new currency of social media and news. Research suggests that people’s average attention span has decreased from a reported 12 seconds in 2000 to 8 seconds today, which means our attention spans are now shorter than a goldfish’s (McSpadden, 2015). A short attention span means that those in charge of directing our attention, and those able to capture it, have a huge influence on our thoughts and opinions. In this way, attention economics also relates to our research question’s field: politics. Academic research has found that people who consume more news media have a greater probability of being civically and politically engaged (Wihbey, 2015). The public is increasingly using social platforms such as Facebook and Twitter to get their news. The Obama presidential campaigns in 2008 and 2012 and the Arab Spring in 2011 catalyzed academic interest in the relationship between digital networks and political action. The data are relatively new and scarce, and thus far from conclusive.

A study published in 2012 by the journal Nature, “A 61 Million Person Experiment in Social Influence and Political Mobilization” suggested that messages on users’ Facebook feeds could significantly influence voting patterns. Certain messages promoted by friends “increased turnout directly by about 60 000 voters and indirectly through social contagion by another 280 000 voters”. The study also found that close friends with real world ties influenced each other more than casual Facebook friends. (Bond et al., 2012)

The study raised concerns about Facebook’s influence on election results. If Facebook’s algorithms decide what you pay attention to, they can influence how you vote. Sifry of Mother Jones argued that it is unethical for Facebook to study user behaviour without clearly informing users about it. While Facebook has stated that it does not target its “vote” prompt based on political affiliations, Sifry argued that transparency was not Facebook’s hallmark in conducting the experiment (Sifry, 2014). This raises serious questions about how our political ideas may be manipulated by the companies that control our attention. What if companies like Twitter or Facebook are actively steering our attention, and through it our thoughts? Zittrain of the New Republic even goes as far as to suggest that Mark Zuckerberg himself could want to personally influence the outcome of a heavily contested election, and could do so by targeting “vote” prompts at users whose personal data identify them as likely voters for a favored party.

Zittrain calls this hypothetical manipulation “digital gerrymandering”, made possible by the platforms we have all grown to use daily. He points out that no company has promised to be neutral. For example, in 2012 Google prominently displayed a censored logo on its front page to make known its support of net neutrality and its opposition to the SOPA (Stop Online Piracy Act) bill. A number of other influential platforms, such as Reddit, Imgur, WordPress.org and Mozilla, protested the SOPA bill by having their websites “go dark” (Sniderman, 2012). SOPA was eventually stopped, and the influence of the platforms probably played a sizable part in the outcome (through their own ties with governmental organizations, or by being able to mobilize citizens to fight against SOPA). While internet censorship is generally not considered positive (and censorship, according to the protesting companies, was what SOPA would indirectly have enabled), this example illustrates just how much influence big tech platforms have over the general climate of opinion. However, Zittrain concludes that introducing legislation to prevent digital gerrymandering would not be productive, as it would go against American citizens’ First Amendment rights: “Meddling with how a company gives information to its users, especially when no one’s arguing that the information in question is false, is asking for trouble.”

Google has taken initiative on the problem of attention being directed to untrue news (usually in the hope of garnering clicks and through that, ad revenue). Google introduced a tag on its search results to show the reader their truth rating before clicking. “Google has a vast repository—the so-called “Knowledge Vault”—of more than two billion facts pulled from the Internet, which its IT professionals could put to use to form the new algorithm. It would work by checking website pages/content—matching with its database, and cross-referencing relevant facts. Websites with high volumes of inaccurate or false data would, naturally, rank lower in search results. This way, trustworthy content would find its deserved place in the top results.” (Perez, 2016) This means that theoretically, this sorting algorithm would rely on mathematics and would have no room for human bias. However, this raises the problem that now Google is in charge of directing our attention, and even evoking our trust in sources, and could hypothetically use it to strengthen its own agendas. Should we trust our own minds, and our curiosity that leads us to click on articles, or should we trust Facebook and Google to make those decisions for us?

Perez of TechCrunch points out that Facebook promised a similar solution to its clickbait problem in September, but has yet to produce it (Perez, 2016). Facebook was accused of placing biased articles (in favor of the Democratic Party) on its Trending page, which aimed to show trending news articles. While Facebook denied having a problem, it fired all of its human editors and instead fully automated its Trending feature. The algorithm aimed to promote trending articles without bias. However, within 72 hours of its implementation, the top story on Trending was about how Fox News icon Megyn Kelly was a pro-Clinton “traitor” who had been fired. The story was blatantly untrue (Newitz, 2016). This shows that even the tech companies themselves are not fully in control of how their algorithms work and of their real-world consequences, and that algorithms can be just as biased as humans.

Tech companies are not the only party directing our attention and, through it, our political ideologies. Governments are known to use their countries’ media to perpetuate a certain viewpoint of the current political climate, a practice known as “information war”. In information wars, the parties aim to control the flow of information and to steer the general attitude of the population toward one more favourable to the current government (or a certain party). The documentary World Order challenges the Western view of how Russia aims to act on the world’s geopolitical scene (Motturi, 2016).

The South Korean government was revealed to have a protocol for dealing with vessel casualties (e.g. the sinking of MV Sewol in April 2014). The guideline ordered the development or release of news topics that would distract the public from the ongoing disaster. The author argues that the South Korean government is more concerned with “covering up” the issue than with trying to resolve it (Park, 2014). South Korea has been ranked by its citizens as having one of the least trustworthy media environments in the world: it was ranked #66 out of 199 countries, its lowest press freedom ranking in recent years, and the position keeps declining, as Korea was ranked #31 in 2006. The ranking was based on suspicion toward journalism and on perceived governmental control over the political perspective and bias of national broadcast media. People in the Asia-Pacific region also tend to be relatively defensive toward press coverage of sensitive topics such as military tension or national security. Only 70% of South Koreans responded that the press should be allowed to criticize the government’s policy, lower than the global average of 80%; other Asia-Pacific countries are also around the 80% figure. One of the most sensitive news topics among South Koreans is national security, with only 37% of respondents saying that they accept sensitive national-security-related news being broadcast (against a worldwide average of 40%).

Results varied by age: 52% of citizens aged 18-29 said that they approved of national security issues being covered in the media, while among citizens aged 50+ the figure was 19% (Cha, 2016). A recently released report by the Korea Press Foundation presented ten indexes indicating the issues of South Korea’s press ecosystem. South Korean media ranked 22nd out of 26 countries, making it the 4th least trusted information source among its own citizens. Only 10% of Koreans under 35 said they trust the news (Korea Press Foundation).

The South Korean professor Sang-Jin Cheon, author of the 2014 book “The Era of Conspiracy”, commented that the rise of conspiracy theories indicates that a society’s democracy is under threat. He states that conspiracy is rooted in systematized irresponsibility between social strata. Sun-Hyun Park of the Korean web publication eToday proposes that the only solution is communication between social strata to build trustworthiness (Park, 2016). Perhaps this solution would be viable in other countries experiencing distrust of the media, such as the United States, where citizens’ trust in mass media fell from 55% in 1999 to 40% in 2015. The decline has been sharper among 18-49-year-olds (36% trust in 2015) than among those over 50 (45% in 2015). Trust among Democrats (55% in 2015) was also higher than among Republicans (32% in 2015). The study speculates that the same forces behind the drop in trust in government and U.S. institutions may be behind the drop in trust in media. Some venerable news organizations have also been caught making serious mistakes, such as the scandal involving former NBC Nightly News anchor Brian Williams, who confessed to having exaggerated or “misremembered” some of his firsthand accounts of news events (Riffkin, 2015).

For reference, Finland’s press freedom score is 11 out of 100 (with 0 being the best), a ranking based on Finland’s variety of editorially independent print, broadcast and online news outlets (Freedom House).

In our opinion, the best way to practice attention hygiene while reading political news is to use common sense rather than relying on third parties to tell us what is trustworthy. Yle (Finland’s publicly funded news organization) describes a few approaches to consuming media content that increase the likelihood of correctly identifying reliable information (Motturi, 2016):


1. Practice source criticism

Spotting misinformation and propaganda may be difficult. A few questions that you can ask yourself are: What or where is the conflict? Who are the parties? Which party benefits from the distribution of the information I am reading? Where is this information from?

2. Acknowledge how social media works

Algorithms choose what information to show you. Our news feeds are personalized according to our own interests, and are thus a biased source of information.

3. Recognize confirmation bias

Humans tend to believe and remember news that confirms their existing beliefs. In this sense, our perception of reality can be biased, and reading highly emotion-based articles may play into this. It is good to sometimes examine where your opinions come from, and what they are based on.
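Point 2 above, that algorithms choose what we see, can be made concrete with a toy sketch. This is not any real platform’s ranking algorithm: the tags, posts, and overlap-based scoring below are invented purely to illustrate why an engagement-optimized feed drifts toward what the user already agrees with.

```python
# Toy illustration of feed personalization: posts whose topics
# overlap with what the user already liked are ranked higher,
# so opposing viewpoints sink out of sight over time.
# All data here is hypothetical.

def engagement_score(post_tags, liked_tags):
    """Count of topics shared between a post and the user's likes."""
    return len(set(post_tags) & set(liked_tags))

def personalize(feed, liked_tags):
    """Reorder the feed by predicted engagement, highest first."""
    return sorted(feed,
                  key=lambda post: engagement_score(post["tags"], liked_tags),
                  reverse=True)

user_likes = ["anti-eu", "immigration"]
feed = [
    {"title": "Why the EU benefits farmers", "tags": ["pro-eu", "economy"]},
    {"title": "Leave campaign rally draws crowds", "tags": ["anti-eu", "immigration"]},
]
print(personalize(feed, user_likes)[0]["title"])  # the anti-EU post ranks first
```

Even this crude version shows the mechanism behind point 3 as well: the feed feeds confirmation bias by construction, because engagement with a topic only increases how much of that topic is shown.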


By paying attention to where we are paying attention, and by critically examining our own opinions and sources of information, we can strive toward a less biased viewpoint on the world. Different media platforms will always be inherently biased, as humans are inherently biased (and, in the case of Facebook, apparently algorithms are too). By using multiple media platforms and staying aware of the sorting algorithms selecting our news for us, we can stay skeptical and aware of the possible biases being shown to us.



It’s not black and white

It’s grey. Through a typical teen-style video (gaming in the background, popping pictures, Q&A, fast-paced narration, references to popular conspiracies, etc.) we’ll show that populism is not always associated with post-truth. Ideas proposed by populists are not automatically bad just because of who proposes them. In a perfect world, everyone makes their own decisions based on actual facts they read and find from first-hand sources.

<3: Sharbel, Solip, Anna, Minja & Olli

[P o p] u l i s m

Social media can amplify populism.

Don’t let emotions guide your decision.

Check the facts.

“Every time that you get on stage

I picture the genius of this age

I vote to make us the greatest again ooh, oh

The credibility seems low

Yeah but Trump gets my vote


Is it weird that I feel

intelligent whenever you talk?

Is it real that your wall

is going to protect us all?

Is it real that I‘d feel

jobless in case you would be gone?

Isn’t it so insane

that I’d buy into this campaign?


Every time emotional posts spread

I don’t need to be using my head

I follow blindly just what “feels right” ohh, oh

Well the facts might seem false

Yeah but Brexit’s my vote


Is it real what I see

On every social media post?

Well I cannot really doubt,

there’s a picture underneath it there’s a quote.

Shall I participate

Even though I know it’s too late?

I will affect the result

But I’m too old it won’t matter for me, mate.


Every time that you get addressed

I get defensive in my head

I follow blindly, just what feels right ohh, oh

Oh yeah…

Every time emotional posts spread

I don’t need to be using my head

  (I -don’t-need-to use- my head)

I follow blindly, just what feels right ohh, oh

The credibility seems low


But for the emotions I go…”

<3: Olli, Sharbel, Solip, Anna & Minja

Social Groups

We are examining populism in social media. While the definition of populism is quite broad, we are using the term to describe the use of non-factual information and emotional manipulation to appeal to the people.

The relevant social groups or “sides” to our problem are the consumers of information and the creators of it. Our focal point is populism in politics, so the primary groups of the information creators are politicians and the media, which includes journalists and influencers.


Both sides can be split into groups by age, geography, ethnicity, gender, income, education level and political affiliation. We are interested in whether these factors affect how people use and react to populism, and whether they realise it when they encounter it. Do different groups employ populism in social media in different ways?

Our hypothesis is that certain age groups are more prone to populism than others. Millennials have been using social media for the majority of their lives and understand its implications better: they know that not all sources are credible, and are more skeptical of the information they read. This differs from people in their sixties and seventies, who may not understand the nature of social media as well. They are used to the older era of big media outlets and longer news cycles, when the media sphere was less saturated, outlets had more time to check facts, and credibility mattered more. Now that news outlets are fragmented and news is consumed more like entertainment (in outlets such as Buzzfeed and Reddit), emotions are appealed to and the nature of information is different. Fact-checking is difficult when an outlet needs to be first out with the news to gain clicks (and, through the clicks, advertising revenue). Older generations are not used to the fact that the articles they read may have been written in a matter of minutes and that stated “facts” may be mere assumptions, so they may not recognize when their emotions are being manipulated; they have not cultivated the same source criticism.

One effect of this may also be the excessive doubtfulness of younger generations. They have learned to be skeptical to the point where they may not believe anything, and may not latch onto any calls to action.

One event that supports this assumption is Brexit. The Leave campaign was heavily populist, with people being pushed to vote Leave without fully understanding its implications (farmers losing important subsidies [1], the difficulty of emigrating, etc.). People’s emotions were appealed to, and decisions were not based on facts. Many people expressed regret over their vote after the fact: an estimated 1.2 million Leave voters regret their choice [2].

People’s opinions were manipulated with ads that appealed to emotion. For example, an ad picturing the NHS (Britain’s National Health Service) inside and outside the EU contrasted a bleak system within the EU with a more efficient one outside of it [3].

The ad relied on the slogan that £350 million was spent by Britain every week to remain in the EU, and that this £350 million could instead be invested in healthcare. The same slogan was displayed on the infamous “Brexit bus” [4]. However, after the campaign, Leave campaign leader Nigel Farage admitted that the slogan was not factual (as investing this £350 million into the NHS was not realistic) and regretted using it [5].

But who voted for the Leave side? Who did these populist, non-factual ads appeal to?

[Image: The demographics that drove Brexit [6]]
Statistics show that older voters made up the majority of Leave voters. Interestingly, younger people voted overwhelmingly to remain, but turned out in fewer numbers (which could be evidence for the over-skeptical hypothesis). Leave voters also had a lower degree of education than Remain voters. This could mean that less educated people are less equipped to deal with the onslaught of information in today’s media, and lack the source criticism that may be instilled at universities. Statistics also showed that Leave voters tended to come from lower-income areas and from areas where large numbers of people did not hold a passport (indicating that they had not been abroad recently). Evidently, prejudice against immigrants also played a large part in Leave voters’ decisions. Britons voting Leave did not identify with immigrants and thus felt more comfortable ostracizing them, believing that voting Leave would change the current immigration situation in the UK.


Populism is not only a problem in the Western world. In South Korea, the NIS (National Intelligence Service) purposely ordered its intelligence officers to post negative comments on social media against the opposing political group during the 2012 presidential election. The opposing group was the more liberal Democratic Party, while the president in power, Lee Myeong-Bak, was from the more conservative Saenuri Party. After the election, the South Korean police alleged that the NIS had actively posted articles and comments on politically sensitive topics on social media [7, 8].

The result of the 2012 South Korean presidential election divided the country in half: voters aged 50+ strongly supported Park Geun-hye of the Saenuri Party (right-wing, conservative), while young voters voted for Moon Jae-in of the Democratic Party (left-wing, socially liberal). Statistics also indicated that less educated and low-income workers were more favorable toward Park [9, 10]. After a very close race, Park became the first woman to be elected president of South Korea, winning 51.6% of the vote to Moon’s 48.0%.

However, in a survey taken just after the police’s allegation, some voters who had voted for Park answered that they might have voted otherwise [11]. This indicates that social media, and the negative comments toward a certain political group and ideology, affected the election. In a survey taken in 2015, 64.3% answered that the NIS’s social media activities did affect the election. More than 50% of interviewees in their 20s to 40s answered that Park might not have become president of South Korea without governmental intervention; more than 50% of interviewees aged 50 and over, however, thought the opposite [12]. This incident, also known as the “manipulation of public opinion by the NIS during the 2012 South Korean presidential election”, is still widely censored in public media by the NIS and the left-wing party to this date [13]. South Korean prosecutors claim that they cannot establish the influence of the NIS’s social media activities on the result of the election. However, opposing political groups and civil activists claim that this was a systematic operation, carried out by elements of the government against the citizens of South Korea without their prior knowledge, designed to sway the public vote [13]. President Park maintains that she neither ordered nor benefited from such a campaign [14].

From these two examples, we can see that outright lying and emotional manipulation seem to be targeted more toward conservative voters. We need more data to determine whether this is a coincidence or a pattern. The current U.S. election also seems to support this hypothesis: Donald Trump, the Republican candidate, resorts to outright lying more often than the Democratic candidate, Hillary Clinton [15, 16].

This hypothetical pattern could arise because conservative voters tend to be older and less educated (at least in the Brexit and South Korean cases), or because people with more conservative values are somehow less critical of the information they choose to believe. Whatever the reasons, the problem is rampant and hurts voters of every party, as they are making choices about whom to vote for based on non-factual information.

<3: Minja, Olli, Sharbel, Solip & Anna

References:

  1. Sinclair, H. (2016). Brexit: Farmers who  backed Leave now regret vote over subsidy fears. Independent, [online]. Available at: http://www.independent.co.uk/news/uk/farmers-brexit-regret-bregret-funding-common-agricultural-policy-a7163996.html [Accessed 29 Sep. 2016].
  2. Dearden, L. (2016). Brexit research suggests 1.2 million Leave voters regret their choice in reversal that could change result. Independent, [online]. Available at: http://www.independent.co.uk/news/uk/politics/brexit-news-second-eu-referendum-leave-voters-regret-bregret-choice-in-millions-a7113336.html [Accessed 29 Sep. 2016].
  3. Patriotic Populist (2016). Vote Leave Campaign Advert – Which NHS Would You Choose?. Available at: https://www.youtube.com/watch?v=yIYq5xMW98I [Accessed 30 Sep. 2016].
  4. Mirror (2016). We send the EU £350 million a week, let’s fund our NHS instead. [image] Available at: http://i4.mirror.co.uk/incoming/article7943774.ece/ALTERNATES/s615b/JS89532410.jpg [Accessed 30 Sep. 2016].
  5. Allegretti, A. (2016). Nigel Farage Admits £350m Saving for NHS in EU Contributions Slogan “Was A Mistake”. Huffington Post, [online]. Available at: http://www.huffingtonpost.co.uk/entry/nigel-farage-good-morning-britain-eu-referendum-brexit-350-nhs_uk_576d0aa3e4b08d2c5638fc17 [Accessed 30 Sep. 2016].
  6. Financial Times (2016). The demographics that drove Brexit [image]. Available at: http://blogs.ft.com/ftdata/2016/06/24/brexit-demographic-divide-eu-referendum-results/ [Accessed 29 Sep. 2016].
  7. Naver (2013). Available at: http://news.naver.com/main/read.nhn?mode=LSD&mid=sec&sid1=102&oid=079&aid=0002429303 [Accessed 1 Oct. 2016].
  8. Joy, A. (2013). Infographic: How South Korean Intelligence Interfered in Election. koreaBANG, [online]. Available at: http://www.koreabang.com/2013/features/infographic-how-south-korean-intelligence-manipulated-election.html [Accessed 1 Oct. 2016].
  9. Busan (2012).[online]. Available at: http://news20.busan.com/controller/newsController.jsp?newsId=20121121000160 [Accessed 1 Oct. 2016].
  10. Vop (2013).[online]. Available at: http://www.vop.co.kr/A00000595637.html [Accessed 1 Oct. 2016].
  11. Polinews (2013). [online]. Available at: http://www.polinews.co.kr/news/article.html?no=183788 [Accessed 1 Oct. 2016].
  12. Fact TV (2015). [online]. Available at: http://www.polinews.co.kr/news/article.html?no=183788 [Accessed 1 Oct. 2016].
  13. Woo, C. (2014). How South Korean Agents Used Social Media to Manipulate Public Opinion and Subvert Democracy, and How the Public is Reacting. Monitor, [online]. Available at: http://www.monitor.upeace.org/archive.cfm?id_article=1051 [Accessed 1 Oct. 2016].
  14. Sang-Hun, C. (2013) Prosecutors Detail Attempt to Sway South Korean Election. The New York Times, [online]. Available at: http://www.nytimes.com/2013/11/22/world/asia/prosecutors-detail-bid-to-sway-south-korean-election.html?_r=3& [Accessed 1 Oct. 2016].
  15. Politifact (2016). Donald Trump’s file [online]. Available at: http://www.monitor.upeace.org/archive.cfm?id_article=1051 [Accessed 2 Oct. 2016].
  16. Politifact (2016). Hillary Clinton’s file [online]. Available at: http://politifact.com/personalities/hillary-clinton/ [Accessed 2 Oct. 2016].

Introduction to the topic

The first assignment was about creating a video describing the societal problem we will be working with and here it comes…

Populism has always been employed by people seeking to influence or steer the general public. With social media, the reach and speed by which populism affects us are amplified.

Most of us have heard of what is being called the “post-truth era” in politics, which favors emotion over facts. While social media promises access to more information, and thus more informed citizens, an unforeseen consequence has been greater access to untrue information. Brexit was a prime example of people failing to check the facts and being emotionally manipulated, and hence making uninformed decisions.

We aim to take a thorough look at the distortion of truth and the use of populism in today’s political landscape. Can we trust the news, content, and advertisements we see? Can we come up with tools to help people cope with the mass of information thrown at them?

<3: Anna, Minja, Olli, Sharbel & Solip