Censorship or Censors**t? The Regulation of Social Media Posts

What should the obligation of social platforms to remove hate speech and illegal speech be?

Image representing online hate. Image source: Pexels. CC0 licensing

Introduction

The use of hate speech and illegal speech online is undoubtedly harmful to individuals and groups of people. However, the vast scale of social media and the countless circumstances in which people interact make the removal of hate speech exceptionally challenging. Besides, whose responsibility is it to ensure this happens: individuals, governments, or social platforms? Can speech be mislabelled as hate speech? There are many difficulties in navigating the implementation of hate speech regulation in Australia. Nonetheless, it is necessary for the ethical functioning of social platforms.

What is Hate Speech?

The point at which speech crosses into ‘hate speech’ and ‘illegal speech’ is disputable because of the elusive nature of language and motive. The most distinctive attribute of hate speech is the use of negative language directed at another person or group of people. Definitions of hate speech also take into account the context in which a person is speaking. However it is interpreted, hate speech disrupts the ‘public good’ and makes it harder to sustain what is good and fair in society (Waldron, 2012).

The Academy of the Social Sciences in Australia (ASSA) defines hate speech as “speech or expression which is capable of instilling or inciting hatred of, or prejudice towards, a person or group of people on a specified ground.” In Australia, it is illegal to discriminate against people or incite hatred on the basis of colour, race, ethnicity and national origin, as set out in the Racial Discrimination Act 1975. Only some states have legislation that covers hatred on other grounds, such as religion and sexual orientation.

This image represents appropriate steps taken by journalists to ensure hate speech is not published:

Image representing 5 point test to eliminate hate speech. Image: EJN, CC0 licensing

Online Hate Speech Regulation Around the World

This list consists of select examples and is non-exhaustive:

Reduction in hate speech on social media sites in Germany following the German Justice Minister’s call for change:

Reduction in hate speech infographic. Image: bmjv (Federal Ministry of Justice and Consumer Protection, Germany). All Rights Reserved.

The EU’s Code of Conduct for IT companies sets the framework for a 24-hour deadline for the removal of hate speech posts and similar illegal content. Germany’s law acts in accordance with this 24-hour period and goes further, setting fines of up to 50 million euros for social networks and 5 million euros for individuals. Unlawful posts that are not “manifestly unlawful” must be removed within seven days.

 

Introducing Regulations to Australia

Since its inception, the internet has been a place somewhat detached from reality: a prospering playground for anonymous action. Even when an account is connected to a real person, the behind-the-screen nature of social media gives users a sense of protection and distance. Online social media has a seemingly limitless reach, equipping those who commit hate crimes with great influence and a great capacity to harm. Just as hateful action has no place in public society in Australia, social media sites are no place for hate crimes. Social media sites have an obligation to maintain a stable community space.

In Japan, videos of racist rallies and marches uploaded to social media fuelled prejudice against ethnic Koreans, and the most effective limitation appeared to be the Japanese government regulating the content spreading across social media (Kotani, 2017). Regulation is necessary to ensure that hate speech does not roam free. The EU Commissioner for the Security Union, Julian King, said:

“You wouldn’t get away with handing out fliers inciting terrorism on the streets of our cities – and it shouldn’t be possible to do it on the internet, either. While we have made progress on removing terrorist content online through voluntary efforts, it has not been enough. We need to prevent it from being uploaded and, where it does appear, ensure it is taken down as quickly as possible – before it can do serious damage.”

Campaign instituted by the European Commission. Image: European Commission. All Rights Reserved

 

Far-reaching, consistent speech regulation is not something that individuals alone can enforce. One person can only exert so much authority or influence over another. The perpetrator of hate speech presumably has just as much power and authority on a social media platform as any individual who attempts to assert justice. It must be the platforms and national governments who enforce the regulation of hate speech, implementing consequences and boundaries to abide by. Simpson (2013) argues that the value of restricting hate speech is best promoted through legal restriction.

The authority that governing bodies hold, above what individuals can achieve by taking matters into their own hands, is profound. Governments can implement consequences and mediate differing opinions. Take, for example, the Elonis case of threatening and hateful posts on Facebook: with no regulation in place there can be no consequence for those guilty of a hate crime. I would argue that Facebook failed in its duty to maintain a stable community by allowing threatening posts to persist to the extent that they did.

This video covers the US Supreme Court hearing arguments in the case of Anthony Elonis, who was convicted over his Facebook posts:

 

Rejecting Hate Speech Regulations in Australia

Whilst illegal speech does emerge on social media sites, there is also the chance that ordinary speech is misunderstood or mislabelled as hate speech. This is partly owing to the ambiguous boundary of what is or is not hate speech. When speech is undeservedly censored, free speech is threatened. Free speech is essential, especially when it goes against the mainstream, as it allows individuals to criticise the government and push it to evolve (Gelber, 2010).

In order to rid social media of hate speech at the time-constrained rate outlined by the EU, artificial intelligence must be utilised. However, AI has its limitations. Whilst AI can search for keywords, it cannot navigate the context of language use and thus inevitably generates false positives and false negatives. Avoiding wrongful AI detection is difficult, as users may have no insight into what the platform considers right and wrong.

Image: Screenshot retrieved from Quora
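To illustrate why keyword matching struggles with context, here is a minimal sketch of a keyword-based filter. The blocklist, example posts and the flag_post function are hypothetical and for illustration only; they do not represent any real platform’s detection system.

```python
# Minimal sketch of keyword-based filtering and why it misfires.
# The blocklist and example posts are hypothetical assumptions;
# no real platform's system is represented here.

FLAGGED_KEYWORDS = {"vermin", "go back to where you came from"}  # hypothetical blocklist

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocked keyword, ignoring case."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in FLAGGED_KEYWORDS)

posts = [
    # Genuine abuse: correctly flagged.
    "You are vermin - go back to where you came from.",
    # Counter-speech quoting the abuse in order to condemn it: false positive.
    "A politician called migrants 'vermin' today. This rhetoric must stop.",
    # Coded hostility containing no blocked keyword: false negative.
    "We all know what kind of people are ruining this suburb...",
]

for post in posts:
    print(flag_post(post), "-", post)
```

The second post is flagged even though it condemns the slur, and the third passes even though it is hostile, which is exactly the context problem described above.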

 

If social media platforms are expected to remove all potentially hateful speech, it can be expected that the perceived severity of genuine hate speech is diminished. With more cases of hate speech at hand, and no firm definition of ‘hate speech’, convicting ‘hate crimes’ could be more challenging than ever. In 1987 Germany prosecuted 1,447 individuals for right-wing related hate crimes; recent prosecutions for all hate crime run at around 100 a year (Cohen, 2014). Massaro (1991) finds that prosecution for hate speech also has a strong left-leaning bias. The efforts of social media platforms could all be in vain, while also accidentally censoring what need not be censored.

So is there much point in censorship? If conviction is so unstable and the prevailing opinions are perhaps biased, should individuals take matters into their own hands to defend what is right? Nadine Strossen suggests that social media’s attempts to censor hate speech have been as unsuccessful as governments’ attempts. Perhaps it is better to make our own decisions than to leave them to corporations and governments.

Watch The Atlantic’s video below:

How would this regulation affect Australian internet users?

If social media were to remove hate and illegal speech in the same manner as is currently enforced in Germany, Australians could expect to have their content checked thoroughly. This would most likely be achieved through AI keyword scanning and similar automated detection. As the monetary ramifications for social platforms are so large, the incentive to do a thorough job is high. I would therefore assume that social platforms would rather accidentally remove what is legal than miss what is illegal and suffer the consequences.
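A rough, back-of-the-envelope sketch of that incentive, using hypothetical figures (and simplifying by treating Germany’s maximum fine as the stake on a single missed post), shows why a rational platform would set its removal threshold extremely low:

```python
# Hypothetical expected-cost comparison: when the penalty for leaving illegal
# content up dwarfs the cost of wrongly removing legal content, removal becomes
# the "safe" choice even for posts that are almost certainly legal.
# All figures are illustrative assumptions, not real platform economics.

FINE_FOR_MISSED_ILLEGAL = 50_000_000  # euros; Germany's maximum fine, simplified to a per-post stake
COST_OF_WRONGFUL_REMOVAL = 100        # assumed appeal/reputation cost per wrongly removed post

def should_remove(p_illegal: float) -> bool:
    """Remove when the expected fine for keeping the post exceeds the
    expected cost of wrongly taking it down."""
    expected_cost_keep = p_illegal * FINE_FOR_MISSED_ILLEGAL
    expected_cost_remove = (1 - p_illegal) * COST_OF_WRONGFUL_REMOVAL
    return expected_cost_keep > expected_cost_remove

for p in (0.000001, 0.00001, 0.001):
    print(f"P(illegal) = {p:.6f} -> remove: {should_remove(p)}")
```

Under these assumed numbers, any post judged to have more than roughly a one-in-500,000 chance of being illegal would be removed, which is the over-removal the paragraph above anticipates.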

Conclusion

As Cohen (2014) observes, liberal democracies across the world share a commitment to freedom of speech, but most also have regulation that limits discriminatory and hateful speech. Social media sites have a moral obligation to moderate their platforms. The removal of what is obviously hate speech is a necessary action to ensure that society online is kept stable and safe.

The precautions that Australia must take to ensure that hate speech removal is fair and effective include:

  1. Appropriate ramifications for social platforms’ failure to meet criteria, e.g. Germany’s 50 million euro fine
  2. Human-approved removal of content detected by AI keyword filtering
  3. Manual filtering of content, as AI detection is not flawless
  4. Legislation with a detailed, contextual definition of what is ‘hate speech’ and what is ‘free speech’
  5. Procedures for the elimination of bias when convicting hate crime

 

Reference List

Academy of the Social Sciences in Australia (2006). Hate speech, free speech and human rights in Australia. Retrieved from https://www.assa.edu.au/event/hate-speech-free-speech-and-human-rights-in-australia/

AustLII. Racial Discrimination Act 1975. Retrieved from http://www8.austlii.edu.au/cgi-bin/viewdb/au/legis/cth/consol_act/rda1975202/

BBC Technology (2018, January 1). Germany starts enforcing hate speech law. BBC. Retrieved from https://www.bbc.com/news/technology-42510868

Bundesministerium der Justiz und für Verbraucherschutz (2017). Percentage of social media hate speech deleted after user reports. [image] Retrieved from https://www.dw.com/en/german-justice-minister-defends-controversial-anti-hate-speech-legislation/a-38900261

Cohen, R. (2014) Regulating Hate Speech: Nothing Customary About It. Chicago Journal of International Law, 15(1), 229-255.

CNN (2014). Supreme Court hears arguments on free speech, social media. Retrieved from https://www.youtube.com/watch?v=3aPzjQHq5ws [Accessed 13 Oct. 2018]

DW (2016, October 26). Germany’s Maas threatens social media firms with sanctions over hate speech. DW. Retrieved from https://www.dw.com/en/germanys-maas-threatens-social-media-firms-with-sanctions-over-hate-speech/a-36157430

Ethical Journalism Network (2015). Hate Speech: A Five Point Test for Journalists. [image] Retrieved from https://ethicaljournalismnetwork.org/resources/infographics/5-point-test-for-hate-speech-english

European Commission (2018). State of the Union 2018: Commission proposes new rules to get terrorist content off the web [press release]. Retrieved from http://europa.eu/rapid/press-release_IP-18-5561_en.htm

European Commission (2018) Countering illegal hate speech online #NoPlace4Hate [image] Retrieved from http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=54300

European Commission (2016) Code of Conduct on Countering Illegal Hate Speech Online. Retrieved from https://ec.europa.eu/info/sites/info/files/code_of_conduct_on_countering_illegal_hate_speech_online_en.pdf [Accessed 13 Oct. 2018]

European Commission (2016) List of actions by the Commission to advance LGBTI equality Retrieved from  https://ec.europa.eu/info/sites/info/files/lgbti-actionlist-dg-just_en.pdf

European Commission (2008) Acts Adopted under title IV of the EU Treaty. Official Journal of the European Union, 328(55), 1-4. Retrieved from https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2008:328:0055:0058:en:PDF

Gelber, K. (2010) Freedom of political speech, hate speech and the argument from democracy. Contemporary Political Theory, 9(3), 304-324.

Gollatz, B., Beer, F., & Katzenbach, C. (2018) The Turn to Artificial Intelligence in Governing Communication Online Workshop Report. Retrieved from https://www.hiig.de/wp-content/uploads/2018/09/Workshop-Report-2018-Turn-to-AI.pdf

Massaro, T. (1991) Equality and Freedom of Expression: The Hate Speech Dilemma (Arizona Legal Studies Discussion Paper). University of Arizona.

Simpson, R. (2013). Dignity, Harm, and Hate Speech. Law and Philosophy, 32(6), 701-728.

The Atlantic (2018). Social Media and Hate Speech: Who Gets to Decide?. Retrieved from https://www.youtube.com/watch?time_continue=1&v=bghTL5gU6fs [Accessed 13 Oct. 2018].

United States Courts (2014) Facts and Case Summary – Elonis v. U.S. Retrieved from http://www.uscourts.gov/educational-resources/educational-activities/facts-and-case-summary-elonis-v-us

 

 

 
