A Balance Between Moderation and Freedom of Expression

Artificial Intelligence & AI & Machine Learning. Image: Mike MacKenzie, Some Rights Reserved
Social Network Apps
Image: Tracy Le Blanc, All Rights Reserved

With ever-increasing numbers of users thanks to widespread internet access, online communities have become important places to exchange ideas and engage socially. Yet features such as anonymity, pseudonymity and freedom of speech also expose these communities to abusive behaviour from users (Papegnies et al., 2017). The digitalisation of businesses and services has added pressure and risk to the screening protocols and processes of both content producers and platforms.

It is the responsibility of online media platforms to act on this issue through moderation and make online communities safe spaces. This includes protecting users from hostile content such as hate speech, nudity, terrorist material, spam and scams.

Online platforms and internet companies have benefited from artificial intelligence and machine learning in providing a better user experience. However, AI tools used in automated content moderation still pose threats to freedom of expression and can be biased against cultural and context-specific content. This is why automated content moderation cannot be entirely independent and must still work in synergy with human moderators.

The Genesis of Automated Content Moderation: From Manual to Automatic

Content moderation has been quietly operating since the creation of the world wide web, through spam detection and hash matching. With the rise of social media, demand for content moderation grew, with much of the work carried out in cities such as Manila, Philippines and Bangalore, India. An estimated 100,000 content moderators worldwide screen inappropriate and unwanted content such as violent media, pornography and hate speech.

Human Content Moderator
Image: Hitesh Choudhary, All Rights Reserved

Chen et al. state that “the manual review tasks of identifying offensive contents are labor intensive, time consuming, and thus not sustainable and scalable in reality” (Chen et al., 2012, p. 71). With high demand and the ever-increasing volume of content posted to social media platforms daily, this is where artificial intelligence is claimed to contribute much of the work by assisting human moderators.

On March 20, 2018, a workshop titled The Turn to Artificial Intelligence in Governing Communication Online was held in Berlin, organised jointly by the Alexander von Humboldt Institute for Internet and Society (HIIG) and Access Now, a non-governmental organisation that defends the digital rights of at-risk users. Workshop participants concluded that the development of automated content moderation was largely due to public pressure on platforms to actively engage against unwanted content. They emphasised that online regulation was not seen as enforceable, particularly in light of online activity around political events such as Brexit and the election of Donald Trump, which occurred around the time of the conference.

The Turn to Artificial Intelligence in Governing Communication Online Workshop in Berlin. Image: Nick Feamster, All Rights Reserved.

How can automation replicate the work of human moderators digitally?

Automated Content Moderation consists of:

  • Artificial intelligence technologies
  • Machine learning systems

Artificial intelligence tools are able to identify toxic content through filtering techniques, flagging posts and comments based on keywords. Content containing profanity and abusive words will most likely be classified as insulting (Chavan & Shylaja, 2015) and therefore toxic.
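To make the keyword-filtering idea concrete, here is a minimal sketch of a blocklist-based flagger. The word list and helper function are hypothetical illustrations, not any platform's actual rules.

```python
import re

# Hypothetical blocklist for illustration; real platforms maintain far larger,
# language-specific lists.
BLOCKLIST = {"idiot", "stupid", "trash"}

def flag_comment(text: str) -> bool:
    """Return True if the comment contains any blocklisted keyword."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & BLOCKLIST)

print(flag_comment("You are an idiot"))      # True  -> flagged for review
print(flag_comment("Great photo, thanks!"))  # False -> passes the filter
```

Keyword matching of this kind is cheap and fast, which is exactly why it scales, but it has no sense of context, a limitation returned to later in this piece.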

Google, one of the leading innovators and users of automated content moderation, has created an artificial intelligence moderation tool that recognises offensive speech by classifying how “toxic” a comment or post is in online discussions. This writing experiment scores phrases between 0 and 1 to indicate how likely they are to be perceived as toxic. Compared with manual human content reviewers, AI tools in automated systems offer an advantage in scale and cost.
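This experiment is exposed publicly through Google's Perspective API. The sketch below shows roughly what requesting such a 0-to-1 toxicity score can look like; the endpoint, attribute name and placeholder API key reflect the public documentation as I understand it and should be treated as assumptions rather than material from this article.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; a real key comes from Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "you are such an idiot"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

# The response nests the 0-1 score under attributeScores.TOXICITY.summaryScore.
response = requests.post(URL, json=payload, timeout=10)
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity: {score:.2f}")  # e.g. 0.92 means very likely perceived as toxic
```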

Machine learning methods approach text through automated categorisation, sorting it into predefined labels (Dinakar et al., 2011). Such systems are commonly built with support-vector machines trained to categorise text (Dinakar et al., 2011). This makes identifying and taking down content faster.
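A rough sketch of that categorisation pipeline follows, assuming a tiny hand-labelled dataset and scikit-learn's TF-IDF vectoriser paired with a linear support-vector machine; the training examples are invented for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented training examples; a real system would use thousands of
# human-labelled comments per category.
comments = [
    "you are pathetic and nobody likes you",   # toxic
    "go away, idiot",                          # toxic
    "thanks for sharing, really helpful",      # ok
    "lovely photo from your trip",             # ok
]
labels = ["toxic", "toxic", "ok", "ok"]

# TF-IDF converts each comment into a weighted word-frequency vector,
# and the linear SVM learns a boundary separating the two labels.
classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(comments, labels)

print(classifier.predict(["nobody likes your stupid posts"]))  # likely ['toxic']
```

Because the model predicts a label in milliseconds, flagged posts can be queued for removal or human review far faster than manual screening alone.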

The company Smart Moderation provides services that use artificial intelligence and automation to moderate inappropriate content on your personal media platforms.

Management of Information
In terms of information management, automated content moderation has managed and taken down content that breaches the terms of service of media platforms. It has also helped platforms comply with government regulations such as the German Network Enforcement Act (NetzDG), which requires platforms to take down manifestly unlawful content, such as hate speech, within 24 hours of it being reported.

Manual human moderation still plays a vital part alongside automated content moderation. A report titled Content Moderation: The Future is Bionic, published by Accenture, notes that AI's strength lies in real-time evaluation of mass data across various dimensions, while humans are stronger at evaluating content that falls into grey areas, including culturally or contextually sensitive material.

AI and Human Cooperation in Content Moderation
Image: Accenture, All Rights Reserved.

The Economic effects of Automated Content Moderation

Following Facebook’s Cambridge Analytica scandal, and a month after testifying before Congress, Mark Zuckerberg explained at Facebook’s F8 2018 developer conference how effective their AI systems had been in enforcing community standards. These systems automate the process of moderating content at speed, detecting a range of categories from nudity and hate speech to fake accounts and suicide prevention, by learning from examples within the system.


Watch Mark Zuckerberg speak at the F8 Facebook Developer Conference about how AI tools were deployed, from tackling fake accounts and threats prior to elections to reviewing sensitive ads.

It all sounds promising that Facebook is taking a more serious approach to its privacy and security issues by using AI in its content moderation. The announcement of these automation tools plays a major role in reviving Facebook’s trust and reputation among users, 87 million of whom had their personal data compromised. However, this regained trust and credibility, courtesy of the automation system, also helps to recover some of the $118 billion in stock value lost following the scandal.

Economically speaking, online media platforms stand to benefit from automated content moderation. With low costs and the ability to operate at speed, platforms with high volumes of user-generated content can save considerable money and time with automated systems.

The Journal of Engineering (Atlanta) has featured WebPurify, a profanity-filtering, image- and video-moderation service, as one of the industry leaders offering an automated intelligent moderation (AIM) service for images. With 10 years of moderation experience, WebPurify offers this as a fast, low-cost solution at $0.0015 per image, delivering real-time automated moderation (The Journal of Engineering, 2017).

In an economic sense, low-skilled manual content moderators are at risk of being replaced by artificial intelligence and machine learning. In the near future, as algorithms are refined and risk factors eliminated, new positions, such as investigators with analytical thinking, cultural understanding and highly trained market, legal and regulatory knowledge, will supersede low-skilled roles. On the plus side, AI tools can mitigate the trauma and post-traumatic stress that human moderators are exposed to by taking over much of the reviewing and flagging of horrific content.

The Political effects of Automated Content Moderation

In response to extremist groups exploiting social media platforms to post violent videos, which are among the most effective means of spreading fear and terror, the UK government publicly announced plans to spend £600,000 of public funding on a collaboration with ASI Data Science for stricter examination of extremist content.

The blocking tool is able to recognise ISIS-related propaganda with an accuracy rate of over 94%. It runs an algorithm that can distinguish extremist online content from news and media that report on terrorist propaganda. The aim is to disrupt terrorist plans and protect online users from exposure to such violent images. Home Secretary Amber Rudd has stated that “This government has been taking the lead worldwide in making sure that vile terrorist content is stamped out”. With the use of automated content moderation, this goal is far more likely to be achieved.

The Social effects of Automated Content Moderation

All forms of automation carry their own opportunities and risks. Participants in The Turn to Artificial Intelligence in Governing Communication Online workshop held in Berlin fear that automation may shift moderation from reactive to proactive. That means machine learning systems may soon analyse all uploaded content, not just flagged content that triggers the system.

A major concern is that, when it comes to decision-making, AI tools may mistakenly deem culturally and contextually appropriate content offensive. These learning systems can reproduce biases against marginalised groups that already face discrimination, against non-English-speaking minorities, and against satirical or ironic posts. Algorithmic training still has a long way to go in making accurate judgements.

What can this mean for everyday users?

As a student and an ordinary user of media platforms, I fear that expressing a thought or idea in the wrong way may trigger automated censorship. If over-blocking occurs, automated content moderation can infringe one’s right to freedom of expression, particularly for those who use media platforms to express themselves.

Content moderation has to strike a fine balance between protecting users from offensive and violent content and preserving the right to free speech. Leading media platforms and services reap the benefits of automated content moderation owing to its ability to operate at large scale and low cost. AI helps keep the regulations and terms of service of these platforms in check. This inherently comes at the expense of low-skilled manual moderators.

 

 

Reference List

ABC News. (2018, April). Facebook says up to 87m people affected in Cambridge Analytica data-mining scandal. ABC. Retrieved from http://www.abc.net.au/news/2018-04-05/facebook-raises-cambridge-analytica-estimates/9620652

Accenture (n.d.). Content Moderation: The Future is Bionic. Retrieved from https://www.accenture.com/cz-en/_acnmedia/PDF-47/Accenture-Webscale-New-Content-Moderation-POV.pdf

Alexander von Humboldt Institute For Internet And Society (2018). Workshop: The turn to artificial intelligence in governing communication online. Retrieved from https://www.hiig.de/en/events/workshop-artificial-intelligence-governing-communication-online/

BBC News. (2017, December). How extremists and terror groups hijacked social media. BBC. Retrieved from https://www.bbc.co.uk/bbcthree/article/16b6c718-17d4-426d-add4-625af822e8d2

Chen, Y., Zhou, Y., Zhu, S., & Xu, H. (2012). Detecting offensive language in social media to protect adolescent online safety. Paper presented at SocialCom-PASSAT 2012, 71-80. doi:10.1109/SocialCom-PASSAT.2012.55

Chavan, V. S., & Shylaja, S. S. (2015). Machine learning approach for detection of cyber-aggressive comments by peers on social media network. Paper presented at ICACCI 2015, 2354-2358. doi:10.1109/ICACCI.2015.7275970

Dinakar, K., Reichart, R., & Lieberman, H. (2011). Modeling the detection of textual cyberbullying. Papers from the 2011 ICWSM Workshop, WS-11-02, 11-17.

Gollatz, K., Beer, F., Katzenbach, C. (2018). The Turn to Artificial Intelligence in Governing Communication Online Workshop Report. Retrieved from https://www.hiig.de/wp-content/uploads/2018/09/Workshop-Report-2018-Turn-to-AI.pdf

Greenfield, Patrick. (2018, February). Home Office unveils AI program to tackle Isis online propaganda. The Guardian. Retrieved from https://www.theguardian.com/uk-news/2018/feb/13/home-office-unveils-ai-program-to-tackle-isis-online-propaganda

Lee, Dave. (2018, February). UK unveils extremism blocking tool. BBC News. Retrieved from https://www.bbc.com/news/technology-43037899

Papegnies, E., Labatut, V., Dufour, R., & Linarès, G. (2017). Graph-based features for automatic online abuse detection. Lecture Notes in Computer Science, 10583, 70-81. doi:10.1007/978-3-319-68456-7_6

Rohleder, Bernhard. (2018, February). Germany set out to delete hate speech online. Instead, it made things worse. The Washington Post. Retrieved from https://www.washingtonpost.com/news/theworldpost/wp/2018/02/20/netzdg/?utm_term=.7ba8e1b9d224

Solon, Olivia. (2018, July). Does Facebook’s plummeting stock spell disaster for the social network?. The Guardian. Retrieved from https://www.theguardian.com/technology/2018/jul/26/facebook-stock-price-falling-what-does-it-mean-analysis

Terdiman, Daniel. (2018, May). Here’s How Facebook Uses AI To Detect Many Kinds Of Bad Content. Fast Company. Retrieved from https://www.fastcompany.com/40566786/heres-how-facebook-uses-ai-to-detect-many-kinds-of-bad-content

Terdiman, Daniel. (2018, April). Here’s How Facebook Can Regain Trust At Its F8 Conference. Fast Company. Retrieved from https://www.fastcompany.com/40564264/heres-how-facebook-can-regain-trust-at-its-f8-conference

WebPurify launches automated intelligent moderation service. (2017, March 27). Journal of Engineering. Retrieved from http://ezproxy.library.usyd.edu.au/login?url=https://search-proquest-com.ezproxy1.library.usyd.edu.au/docview/1880163720?accountid=14757

Nicole Zhong
USYD student, Arts + Sci, from Sydney, Australia. Currently studying digital culture.
