Hate Speech and Networks

Università Ca' Foscari Venezia

Summary
This document is a past paper on hate speech, examining its presence and spread on online networks, specifically focusing on social media platforms. It analyzes the influence of social media platforms, their approaches in handling the issue, and the different aspects of hate speech, including the legal framework.
FUNDAMENTALS OF IT LAW [ET7004]
HATE SPEECH ONLINE
GROUP 7: Destiny Obidike, Giorgio Nicastro, Timo Pirotta Ardrizzo, Rares Stefan Neagu, Camilla Nicodemo, Federico Pavan, Nicola Nanti, Francesca Passarelli, Giorgia Pisanu, Alessia Piovaticci
Università Ca' Foscari Venezia - BSc Digital Management

What is Hate Speech?

Hate speech can be defined as any kind of communication, in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group based on who they are: their religion, ethnicity, nationality, race, gender or another identity factor.

In Italy, the public debate on the line between freedom of speech and hate speech in the new media has developed in recent years, mostly in the wake of offensive and sexist comments directed against men and women in public positions. In response not only to hate speech on the Internet but, more generally, to questionable behaviour online, various politicians and authorities have called for regulation of the Internet and for stricter laws against hate crimes. A number of bills have been proposed, but none has passed, owing to widespread opposition from MPs, journalists and opinion leaders who worry that excessive regulation of Internet use might infringe on freedom of speech.

Spread of Hate Speech Through Networks

Online networks are fertile ground for hate speech, both because engagement-driven algorithms prompt users to interact with provocative content and because of network effects. Social media algorithms can create "filter bubbles" and "echo chambers" that reinforce users' existing beliefs and biases. Such algorithms surface similar content and lead users towards like-minded people, creating online communities that may thrive on hate speech and spill it over to a much wider audience.
The anonymity offered by social media can also help hate speech spread, encouraging people to express hateful views they would not dare to voice publicly. The network effect is important here: hateful messages are easily disseminated, with real-time access to a worldwide audience. Moreover, such content is "sticky": it can resurface and draw views long after its original removal. Social media companies are struggling to strike a balance between free expression and the moderation of hate speech, though their attempts to eradicate hateful material continue. Hate speech can have strong offline impacts as well, encouraging behaviours that lead to real-world discrimination and violence. There is no single solution to this problem, but stricter regulation, technological fixes, education, and the promotion of positive speech through community moderation can each help alleviate it.

Influence of Social Media Platforms

Social media platforms allow users to produce content, provided they accept their terms of use. Each platform, as a privately owned company, defines its own terms of use, or so-called "community standards". These are important in relation to regulating hate speech. Below are some of the ways the biggest social media platforms act when hate speech is involved.

Facebook

Under Facebook's hate speech policy, attacks must be directed at people rather than concepts: "I hate Christians" would get flagged, but "I hate Christianity" would not. Reports are checked manually rather than systematically, and repeated reports of the same content eventually trigger action. Purveyors of hate speech exploit grey areas that allow them to recast harmful messages as patriotic ones and dodge efforts to get them removed. Facebook tends to treat hate speech as a personal problem that can be solved by blocking, instead of dealing with it as a systemic issue.
YouTube

Its rules allow criticism of governments, but hate speech targeting racial or religious groups is flagged. The platform requires users to block, flag and report content to trigger review, while comments are moderated and managed separately from the video itself. Because enforcement depends heavily on user reports, harmful content often stays on the platform much longer than it should.

Twitter

On Twitter, hate speech falls broadly under abusive behaviour and is subject to strict review processes. Enforcement combines machine-learning algorithms with human review, and the platform most commonly suspends contentious accounts rather than banning them entirely. Its more stringent policies and broader conception of hate speech set it apart, although the reliance on temporary suspensions raises doubts about effectiveness.

The Case of Caroline Criado-Perez

Caroline Criado-Perez faced online abuse after successfully campaigning for a woman's face to appear on UK banknotes. Ms Criado-Perez, who had appeared in the media to campaign for women to feature on banknotes, said that the abusive tweets began the day it was announced that author Jane Austen would appear on the newly designed £10 note. She reported them to the police after receiving "about 50 abusive tweets an hour for about 12 hours" and said that she had stumbled into a nest of men who coordinate attacks on women. Isabella Sorley and John Nimmo admitted at Westminster Magistrates' Court sending the messages over a public communication network. Their tweets included: "Fuck off and die you worthless piece of crap" and "go kill yourself". Isabella Sorley was sentenced to twelve weeks in jail and John Nimmo was jailed for eight weeks.
After the trial, Caroline Criado-Perez said she had suffered life-changing psychological effects from the abuse she received on Twitter. She also said that her case was just a small drop in the ocean compared with the amount of abuse that women get online, and that few victims see any form of justice. John Nimmo also targeted Stella Creasy, Labour MP for Walthamstow, with the message "the things I could do to u (smiley face)", and called her a "dumb blond bitch". The abuse received by Ms Criado-Perez and Ms Creasy also prompted Twitter to update its rules and confirm that it would introduce an in-tweet "report abuse" button on all platforms.

Social Perspective

Hate speech consists of words with a negative connotation, targeting individuals and groups based on their race, religion, gender, and so on. Psychologically, it produces anxiety, depression and post-traumatic stress disorder, and it erodes self-esteem; chronic exposure can manifest in physical health problems such as insomnia or heart disease. At a social level, it promotes polarisation, a decline in trust, and the normalisation of prejudice, which in turn keeps fuelling the discrimination and alienation of marginalised groups. At the community level, hate speech fractures social bonds, incites violence and silences victims, leading to reduced civic participation and the underrepresentation of targeted groups in public discourse. At the level of society as a whole, it instigates fear, cultural degradation and division, while it becomes increasingly hard to strike a balance between freedom of speech and regulation. Repeated hateful words also create a toxic and polarising atmosphere that strikes at the heart of social cohesion and inclusiveness. Confronting hate speech demands education to combat its normalisation, laws to set bounds around toxic discourse, community programmes to rebuild trust, and psychological support for victims.
Such measures can help create a culture of respect, inclusiveness and resilience, considerably lessening hate speech's devastating effects on individuals and society.

Hate Speech in American Legislation

American law protects hate speech under the First Amendment. The courts justify this by stating that the First Amendment actively prohibits the government from censoring public debate on issues of social importance, even if such discourse offends sensibilities or uses hate to invoke grief, anger or fear. Current First Amendment case law implies that hate speech becomes a criminal offence only if it incites violence or involves a specific threat of violence against a person or a group, conditions that are only rarely met. The First Amendment tradition accepts that state attempts to restrict the propagation of hate speech would invariably stifle the debates and conversations which are essential in a democracy. Instead, it is we the people who are best placed to counteract hate speech, either directly through argument, protest, inquiry and jokes, or indirectly by quieting down or walking away. Most of the time there is no ban on hate speech. However, the following types of speech are banned:

Incitement to violent or lawless action
Fighting words: speech that is likely to incite violence or a riot and would be detrimental to public order
Harassment and defamation

When discussing hate speech, however, these more traditional laws, which mostly regulate hate speech directed at individuals in public places, have less relevance on the Internet and particularly on social networks. In the recent past there have been demands for stricter measures aimed at dealing with hate speech, especially on social media networks such as Facebook, Twitter and YouTube.
European Legislation

The European Union is based on the principles of respect for human dignity, liberty, democracy, equality, the rule of law, and human rights, including the rights of persons belonging to minorities. Hatred and intolerance in any form are incompatible with these basic rights and freedoms.

European Convention on Human Rights (ECHR)

Adopted in 1950 by the Council of Europe, the European Convention on Human Rights is the central instrument governing the limits placed on free speech in Europe. While Article 10 of the ECHR guarantees freedom of expression, Article 10(2) allows restrictions that are prescribed by law and necessary in a democratic society, a provision on which limits to hate speech can be based.

Council Framework Decision 2008/913/JHA on Combating Certain Forms and Expressions of Racism and Xenophobia by Means of Criminal Law

This framework decision was adopted by the EU in 2008 to harmonise the laws of the various EU Member States, criminalising racist and xenophobic speech, incitement to violence, and hate speech. It also criminalised the public condoning, denial or gross trivialisation of genocide, and it urges the criminalisation of online hate speech, thereby giving equal treatment to traditional media and the Internet.

Digital Services Act (DSA)

The Digital Services Act, which entered into force in November 2022, is one of the EU's most comprehensive efforts to regulate online content, including hate speech. The DSA aims to create a safer online space by imposing stricter rules on very large online platforms (defined as platforms with over 45 million users in the EU) to combat illegal content, including hate speech. Key features of the DSA include:

Hate Speech and Illegal Content Removal: Online platforms are obliged to remove illegal content such as hate speech promptly; manifestly illegal content should be taken down within 24 hours of notification or flagging.
Proactive Monitoring: Platforms must adopt mechanisms that can detect and remove illegal content, including hate speech.

Transparency: The DSA requires platforms to be transparent about how they moderate content.

Risk Assessments: Very Large Online Platforms, the largest of the platforms, are obliged to perform risk assessments of the impact of their services, especially regarding the diffusion of harmful content such as hate speech.

Technological Solutions and Challenges in Detecting and Reducing Hate Speech

The most important instruments in combating online hate speech are artificial intelligence and machine learning. These are complex algorithmic approaches that analyse text, audio and imagery to detect harmful content. Supervised learning frameworks and neural networks, such as BERT, underlie the classification of hate speech through the recognition of patterns in annotated datasets. Natural language processing (NLP) reinforces detection by analysing context, sentiment and intent; multimedia hate speech in video and graphics is handled through tools such as speech recognition and image analysis.

Despite these advances, big challenges remain. Sarcasm, cultural references and the coded language used in hate speech usually confound AI. Biases built into the training data can result in vastly differing detection capabilities across languages and regions. The rapid evolution of slang and symbols used for spreading hate usually outpaces updates to the detection models. Operationally, scalability is a challenge, since platforms process huge amounts of content in real time. Ethical concerns include balancing the moderation of hate speech with protecting free expression and ensuring that AI systems remain transparent and unbiased. False positives result in over-censorship, while false negatives allow harmful content to persist.
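To make the supervised-learning idea concrete, here is a deliberately tiny sketch of a text classifier trained on annotated examples. It is a toy, not a production system: it uses a bag-of-words naive Bayes model in plain Python rather than a transformer such as BERT, and the six labelled messages (the labels "hate" and "ok" are our own invention, with two of the hateful examples drawn from the tweets quoted earlier in this paper) stand in for a real annotated dataset.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase a message and split it into word tokens."""
    return text.lower().split()

def train(labelled_messages):
    """Fit a naive Bayes model from (text, label) pairs.

    Returns per-label word counts, label counts, and the vocabulary.
    """
    word_counts = {"hate": Counter(), "ok": Counter()}
    label_counts = Counter()
    vocab = set()
    for text, label in labelled_messages:
        label_counts[label] += 1
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Return the most probable label using Laplace-smoothed log-probabilities."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # log prior
        n_words = sum(word_counts[label].values())
        for tok in tokenize(text):
            # Laplace smoothing: unseen words must not zero out the probability
            score += math.log((word_counts[label][tok] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Tiny hypothetical annotated dataset; the labels are invented for this sketch
data = [
    ("go kill yourself", "hate"),
    ("you worthless piece of crap", "hate"),
    ("i hate all of them they should die", "hate"),
    ("have a great day everyone", "ok"),
    ("thanks for sharing this article", "ok"),
    ("looking forward to the weekend", "ok"),
]
model = train(data)
print(classify("you should die", *model))        # prints "hate"
print(classify("great article thanks", *model))  # prints "ok"
```

Even this toy illustrates why the challenges discussed above are hard: the model only counts words it has seen, so sarcasm, coded language or newly coined slang that never appeared in the annotated training data is simply invisible to it.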
In the near future, adaptive AI systems will evolve with new trends, including multimodal approaches that combine text, visual and audio analysis, and collaboration among governments, companies and researchers. All these efforts, together with ethical frameworks and legal regulations that set boundaries, are important in the quest to make online spaces safer and more inclusive.

Conclusion

Hate speech online remains a challenging issue, driven by algorithms, anonymity, and the vast reach of digital networks. This project explored its damaging effects on individuals and society while analysing how measures like the EU's Digital Services Act and platform-specific policies aim to tackle it. But gaps remain: a lack of uniformity, biased algorithms, and the tension between freely expressing ideas and protecting users from harm. It is therefore evident that a number of measures need to be taken. Platforms, for their part, should invest in robust AI moderation systems able to interpret content in context, including satire and the rapidly evolving trends of hate speech, while also working to mitigate the biases built into their algorithms. Closer cooperation between governments and tech companies would help set clear regulations and enhance transparency: platforms should openly share their moderation practices, performance metrics, and the limitations of their systems to build public trust. Education is another major aspect of this fight: it equips individuals to identify and counter hate speech and to build supportive communities that push for more positive online environments. In addition, platforms can encourage constructive dialogue by promoting positive-speech campaigns and creating tools that facilitate respectful and serious conversations.
References

https://www.theguardian.com/uk-news/2014/jan/24/two-jailed-twitter-abuse-feminist-campaigner
https://www.bbc.com/news/uk-25641941
https://www.bbc.com/news/uk-23485610
https://www.ala.org/advocacy/intfreedom/hate
https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/combating-hate-speech-and-hate-crime_en
Goh, E. Y., & Yang, H. (2024). Harnessing Artificial Intelligence to Combat Online Hate: Exploring the Challenges and Opportunities of Large Language Models in Hate Speech Detection.
Zhang, S., & Wang, H. (2019). Hate Speech Detection: Challenges and Solutions.
Agata, L., & Nina. WE CAN! Taking Action against Hate Speech through Counter and Alternative Narratives.
NYU School of Global Public Health, The Consequences of Hate Speech: https://publichealth.nyu.edu/
Brookings Institution, How hateful rhetoric connects to real-world violence: https://www.brookings.edu/