Helping to build a healthy information sphere

International Human Rights Programme / Partner story


According to a UNESCO survey published in September 2023, in almost every country people turn to social media as their main source of news. Yet despite this heavy use, more than 85 per cent of people are worried about the impact of online disinformation, and 87 per cent believe it has already harmed their country’s politics.

Indeed, disinformation, amplified by algorithmic manipulation and often laced with bias and hate speech, has exploded on social media platforms in recent years. “Disinformation is threatening the very basis of a rights-based order, which is leading to an erosion of trust in the democratic world,” says Patricia Melendez, programme officer for Oak’s International Human Rights Programme (IHRP). “This is why the IHRP supports organisations working to build a healthy information sphere.”

Establishing the disinformation knowledge base
The International Panel on the Information Environment (IPIE) was officially launched in May 2023. In its short existence, it has already come to occupy a unique space as the only authoritative global body dedicated to assessing research on the information environment. The IPIE works with hundreds of affiliate researchers across the world, in ten languages, to provide an impartial and independent assessment of the information sphere. “This kind of validation provides policy makers with the information and analysis they need to make informed decisions,” says IPIE chair Dr Phil Howard.

Global content regulation is in its infancy, and legislation such as the European Union’s landmark Digital Services Act, while an advance, still leaves significant gaps. In an IPIE survey of its researchers, two-thirds said that the failure to hold social media companies to account for inadequate content moderation was a significant hurdle, while a third of researchers globally named social media companies among the most serious threats to healthy public discourse.

This is why the IPIE intends to establish a panel of researchers, activists, and policy makers to inform discussions going forward. “This will help maximise the work of regulators in holding the high tech giants to account,” says Phil.

Making disinformation unprofitable
Advertising is key to internet business models, so we also support organisations working to reduce the commercial incentives that drive disinformation. The Global Disinformation Index (GDI) serves governments, not-for-profit organisations, online platforms, and media through its neutral, independent, and transparent index of a website’s risk of disinforming readers. “All internet stakeholders need to be aware of who is monetising harmful content,” says GDI co-founder, Clare Melford.

This is why GDI uses cutting-edge artificial intelligence (AI), combined with thorough analyses of journalistic practice, to best serve and inform advertisers, the ad tech industry, search and social media companies, and researchers.

One of the GDI’s key tools is the Dynamic Exclusion List. Licensed to online ad agencies, brand safety companies, and other corporations, the list enables them to make informed decisions about where to place ads and so avoid damaging their brands.

As of June 2023, the GDI had assessed more than 700,000 websites, in over 40 languages and 150 countries. There are currently 3,300 websites on the Dynamic Exclusion List and, according to Clare, they have collectively lost an estimated USD 200 million per annum in ad revenues, a drop of 80 per cent since GDI began commercial licensing in 2020. “We don’t claim total credit for this,” says Clare. “Many other organisations have played a part. But the dramatic fall in ad revenues on the most disinforming websites shows that, given the choice, advertisers choose quality. Following the money works.”

Looking to the future, GDI is already using advanced AI technology to detect disinformation at an even greater scale. GDI sees itself as part of a free-market solution to the problem of disinforming content, providing evidence-based data to help companies with their risk assessments, much as credit ratings agencies do in the financial sector.

“Our vision is that one day, GDI will be joined by a thriving marketplace of other source-rating organisations, to give tech companies a choice, as we work together to fight the war against disinformation,” says Clare.

Holding tech giants to account
In many developing countries, big tech companies do not treat the workers behind their platforms in line with legal standards. We support organisations that seek to secure an open, transparent, and accountable digital information sphere.

Foxglove is a not-for-profit organisation that works to make technology fair for everyone. “It is a myth that social media platforms were created by well-meaning tech billionaires in their garages or in their dorm rooms,” says Foxglove director, Martha Dark. “They were created by an army of badly paid staff, outsourced and toiling in appalling conditions, often in impoverished parts of the world.”

Kenya is the main hub for Facebook’s and TikTok’s content moderation operations for east and southern Africa. Social media moderators are the internet’s essential workers: without them, platforms would be flooded with toxic content and disinformation that would render them unusable. In March 2023, Foxglove supported an initial 43 Facebook content moderators in Kenya in suing Meta, Facebook’s parent company, and two outsourcing companies for unlawful redundancy. That number has since grown to 185, more than two-thirds of the 260 moderators laid off in total.

As well as seeking safe and fair working conditions for hundreds of content moderators, Foxglove supports their access to proper mental healthcare. “This is sadly necessary,” says Martha, “given the traumas of having to view disturbing material such as beheadings and child abuse for nine hours per day.”

Foxglove also supports 184 content moderators in Kenya who were sacked by Sama, Facebook’s outsourcing company. The case has gone well so far: in May 2023, a judge issued an interim ruling that the workers were unlawfully sacked, that Facebook had to pay withheld salaries, and that Facebook was the ‘true employer’ of the workers, despite its outsourcing model.

In a landmark case supported by Foxglove, the son of an Ethiopian professor from the Tigray region in the north of the country filed a lawsuit against Meta. He alleged that his father was gunned down in November 2021 because Facebook’s algorithm prioritised hateful and inciting content targeting his father.

Foxglove is seeking structural changes to Facebook’s business model and demands that it implement measures to stem the flow of disinformation in the context of the war in Ethiopia. During the Capitol Hill riot in January 2021, Facebook changed its algorithms in a matter of hours, demonstrating what is possible. Foxglove is also calling for Facebook to employ moderators who speak the languages required to keep the platform safe for users and workers alike.

2024: a year of challenges
At Oak, we believe that everyone has a right to reliable, trustworthy, and accurate information. But developments in information technology, including the algorithmic curation of news, are amplifying conspiracy theories and hate speech, and enabling malign interference in electoral processes. In 2024, up to two billion voters will go to the polls in the EU, India, Indonesia, Mexico, the UK, and the US, so the stakes, in terms of real-world harms and eroded trust in democratic institutions, are high.

This is why our International Human Rights Programme supports the IPIE, the Global Disinformation Index, and Foxglove. “The IHRP’s new partnerships have provided a solid empirical basis on which to make a tangible difference, and have pointed the way forward with some creative and innovative initiatives,” says Adrian Arena, director of Oak’s IHRP. If you want to know more about the IHRP’s strategy, please visit our website.