IEEE OC Computer Society Distinguished Lecture#2 by Dr Oboler (on Aug 30)




Lecture#2: The Technical Challenge of Hate Speech, Incitement and Extremism in Social Media

(Lecture#1 "Software Engineering for the Research Environment" by Dr Oboler is on Aug 29 @ 4pm CALiT2)

Despite its pervasiveness in society, social media has been a largely unregulated domain. There have been growing concerns over the use of social media for the purposes of hate speech, incitement and the promotion of violent extremism. These issues have not been sufficiently addressed by social media companies, and governments, particularly in Europe, are increasingly looking to regulation. The agreement on a "Code of Conduct on illegal online hate speech" signed by the European Commission and Facebook, Twitter, YouTube and Microsoft at the end of May 2016 is a sign of this shift.

This talk will introduce the problem of hate speech, incitement and extremism in social media. The technical challenges in measuring and responding to these problems will then be discussed, along with some of the technical solutions which have been created. The ethical obligations of software engineers in light of these problems will also be considered.

  Location

  • 14988 Sand Canyon Ave. Studio 4
  • Gigasavvy
  • Irvine, California
  • United States 92618

  • Asad Abu-Tarif
    Ph.D., MBA
    Chair, IEEE Orange County Computer Society

  • Co-sponsored by Venue Sponsor: Gigasavvy
  • Starts 13 August 2016 12:00 AM
  • Ends 30 August 2016 12:00 PM
  • All times are America/Los_Angeles
  • No Admission Charge


Dr. Andre Oboler
Dr. Andre Oboler of Online Hate Prevention Institute


The Technical Challenge of Hate Speech, Incitement and Extremism in Social Media

The problems of hate speech, incitement and the promotion of violent extremism are a growing challenge to social media companies, law enforcement, intelligence agencies and researchers.

Without knowing what the public is reporting to the social media platforms, how can governments judge whether social media platforms are responding adequately? This issue has come up in cases like the murder of Lee Rigby (the Telegraph reports: "Facebook 'could have prevented Lee Rigby murder'"; Sky News: "Facebook Failed To Flag Up Rigby Killer's Message"). It has also been a hot topic in the US Congress, e.g. ABC News reports, "Officials: Facebook, Twitter Not Reporting ISIS Messages". In Israel, Internal Security Minister Gilad Erdan recently said Facebook has blood on its hands for not preventing recent killings. He is quoted by Al-Monitor as saying, "[The Facebook posts] should have been monitored in time, and [the murder] should have been averted. Facebook has all the tools to do this. It is the only entity that, at this stage, can monitor such a tremendous quantity of materials. It does it all the time for marketing purposes. The time has come for Facebook to do the same thing to save lives."

Social media companies know far more than they used to both about their users and the content on their platforms. Nevertheless, there are numerous challenges to identifying such content including: the volume of content created in social media; the use of videos, images, coded language, and local references which prevent effective text analysis; the changing nature of the expression over time; and legal and technical limitations which can prevent or delay data sharing. Many of these issues can be understood as open technical challenges.
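One of the challenges above, the use of coded language and local references defeating text analysis, can be illustrated with a minimal sketch. The blocklisted term and example posts below are hypothetical placeholders, not real data, and this naive approach is shown only to demonstrate why exact keyword matching is insufficient:

```python
# Hypothetical sketch: why naive keyword matching fails on coded language.
# The blocklist entry and posts below are illustrative placeholders only.

BLOCKLIST = {"example_slur"}  # a static list of exact banned terms

def naive_filter(post: str) -> bool:
    """Flag a post only if it contains an exact blocklisted word."""
    words = set(post.lower().split())
    return bool(words & BLOCKLIST)

# An explicit post is caught...
print(naive_filter("some example_slur here"))    # True
# ...but a coded variant and non-text content slip through.
print(naive_filter("some 3xample_slur here"))    # False (leetspeak variant)
print(naive_filter("see the attached image"))    # False (image, no text)
```

The same evasion applies to misspellings, euphemisms, and in-group references that change over time, which is why purely lexical approaches fall behind.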

Beyond the ethical obligations of software engineers to act for the public good, these challenges are no longer an "extra" which tech companies, or computing professionals, can engage with if they want to be more ethical. They have become an obstacle which puts people's lives at risk, and puts technological progress at serious risk from government intervention. The wrong response to this problem would lead to over-reach and a surveillance state that would be just as reviled by the public.

The approach my organisation uses relies on crowdsourcing, artificial intelligence and cloud computing. It enables content to be evaluated by people, overcoming many of the problems listed above, but then quality-controls the crowd's responses through AI. It allows empirical results to be gathered, such as those reflected in a report we produced for the Israeli Government on antisemitism in social media. What the report shows clearly is that the existing approaches by the social media companies are not responding well enough. There are challenges for them as well in better identifying and responding to reports, even with full access to the data.
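The general idea of quality-controlling a crowd's verdicts can be sketched as a weighted vote over reviewer reliability. This is a minimal illustrative model only; the function name, labels, weights and threshold are assumptions for the sketch, not the Online Hate Prevention Institute's actual system:

```python
# Hypothetical sketch: crowd classification with a simple automated
# quality-control step (weighted majority by reviewer reliability).
# All names, labels and thresholds here are illustrative assumptions.
from collections import defaultdict

def crowd_verdict(votes, reliability, threshold=0.6):
    """votes: list of (reviewer_id, label) pairs.
    reliability: reviewer_id -> weight in [0, 1], learned over time.
    Returns the winning label if weighted agreement is strong enough,
    otherwise escalates the item for expert review."""
    score = defaultdict(float)
    total = 0.0
    for reviewer, label in votes:
        w = reliability.get(reviewer, 0.5)  # unknown reviewers get 0.5
        score[label] += w
        total += w
    label, weight = max(score.items(), key=lambda kv: kv[1])
    confidence = weight / total
    # Only accept the crowd's verdict when weighted agreement is strong.
    return label if confidence >= threshold else "needs_expert_review"

votes = [("a", "hate_speech"), ("b", "hate_speech"), ("c", "not_violating")]
reliability = {"a": 0.9, "b": 0.8, "c": 0.3}
print(crowd_verdict(votes, reliability))  # hate_speech (0.85 agreement)
```

Weighting by reviewer reliability lets an automated layer discount noisy or bad-faith reports while still drawing on human judgement for content, such as images and coded language, that machines classify poorly.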


Dr Andre Oboler is CEO of the Online Hate Prevention Institute an Australian charity combating racism, bigotry and extremism in social media. He also serves as an expert on the Australian Government's Delegation to the International Holocaust Remembrance Alliance, co-chair of the Working Group on Antisemitism on the Internet and in the Media for the Global Forum to Combat Antisemitism, and as a Vice Chair of the IEEE Computer Society's Member and Geographic Activities Board. Dr Oboler holds a PhD in Computer Science from Lancaster University (UK), a Juris Doctor from Monash University (Australia) and completed a Post Doctoral Fellowship in Political Science at Bar-Ilan University (Israel). His research interests include empirical software engineering, process improvement, hate speech in social media and the social implications of technology.

Web: Online Hate Prevention Institute; personal website

Address:Victoria, Australia



6:00 - 7:00pm Networking and dinner

7:00 - 8:00pm Presentation by Dr Andre Oboler

8:00 - 8:30pm Q&A with Dr Oboler


IEEE OC Computer Society thanks Gigasavvy for sponsoring the venue.