
Tackling disinformation: A learning guide

Tech reporting can push big tech to tackle disinformation

Tech reporting can play a role in tackling disinformation by holding technology and social media giants to account, believes the Mozilla Foundation's Odanga Madung.


To hold Big Tech accountable, journalists outside of the West should be involved in tech reporting

False narratives and conspiracy theories transcend borders, eroding trust in electoral processes, as successive analyses and investigations show. From baseless claims of election fraud to foreign influence campaigns that exacerbate domestic divisions, the integrity of democracy and trust in institutions hang in the balance. Moreover, the proliferation of artificial intelligence is amplifying disinformation efforts, distorting the very fabric of reality. Yet somehow, despite lurching from scandal to scandal, tech companies have continued to generate massive profits and enrich their shareholders.

Against this background, there could hardly be a worse time to skimp on combating harmful content online. More than 83 countries are holding national elections in 2024. This includes five of the world's 10 most populous countries (Bangladesh, India, Indonesia, Mexico and Pakistan, all from the Global South), sending nearly a third of the world's population to the polls.

Because of proximity and economic power, US and Chinese institutions wield disproportionate influence over tech platforms, giving these countries a significant say in shaping platforms' priorities. Meanwhile, Global Majority countries are left grappling with questions about how to hold tech giants accountable for the harms they exacerbate. After all, what leverage do they have? Within these regions, most tech giants barely employ representative numbers of staff or contractors, which limits their knowledge of local contexts and of how to roll out their technologies in these regions.

Few content moderators for languages other than English

Meta, which owns Instagram, Facebook and WhatsApp, had fewer than 500 content moderators dedicated to the whole of sub-Saharan Africa; 200 of them have since been laid off. When push comes to shove, this can have catastrophic results, as it did in Ethiopia, where Meta's failure to tackle hate speech on Facebook helped fuel the conflict in Tigray.

Yes, the centralization of power is something to be concerned about! But, more recently, this problem has gained an additional dimension: the fact that the majority of these immensely powerful ecosystems are run on a whim by a clique of siloed US-based billionaires. While this issue is well documented, the ousting of OpenAI's co-founder and CEO that plunged the tech company into chaos made clear just how deeply rooted this dimension of centralization is, and what the potential consequences might be.

As Shoshana Zuboff, the author of "The Age of Surveillance Capitalism," so succinctly puts it in a 2022 paper: "It is astonishing to consider that our emergent information civilization is wholly dependent upon these 'spaces,' yet they remain for sale or rent by any individual, corporation, politician, billionaire, megalomaniac or billionaire megalomaniac, with no law to constrain their action, unlike almost any other form of property."


Powerful tech platforms wield enormous economic, social and political power

Reframing this era of big tech's dominance as that of an oligarchy enables us to better understand why problems like the amplification of mis- and disinformation persist to this day, despite the attention and funding that information disorders receive.

Lean startup principles, on which many of these companies were founded, allowed them to build products without thinking much about the effects those products would have on society. It's what made their founders billionaires in a very short time and at relatively young ages. But the consequence is that the algorithms forming the backbone of their products became notorious for prioritizing content and information that is polarizing and harmful.
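To see the incentive problem concretely, here is a toy illustration, with invented posts and weights, of how a feed ranked purely on engagement signals ends up rewarding outrage. It is a sketch of the general mechanism, not any platform's actual ranking code.

# Toy feed ranking: score posts purely by engagement signals.
# The posts and weights are made up for illustration.
posts = [
    {"text": "Local council publishes budget report",
     "likes": 12, "shares": 2, "replies": 3},
    {"text": "THEY are rigging the election - share before it's deleted!",
     "likes": 340, "shares": 410, "replies": 520},
]

def engagement_score(post):
    # Shares and replies weighted above likes, a common pattern; if outrage
    # reliably drives reactions, outrage rises to the top of the feed.
    return post["likes"] + 3 * post["shares"] + 2 * post["replies"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["text"])

Nothing in the scoring function cares whether a post is true or harmful, which is precisely the indifference described above.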

Big tech needs to be held accountable

Real institutional accountability measures are needed to address the challenges we see arising from this manifestation of surveillance capitalism within our societies. We need to not only examine how these companies design products but also understand their ownership structures, organizational hierarchies and incentives. We also need to ask how maximizing shareholder value competes with the values of our societies or exacerbates harm within them.

This will allow us to begin to understand why, for example, platforms are increasingly uninterested in distributing news to their audiences or putting safeguards in place ahead of an election: such measures simply aren't profitable for them.


Politicians in Nigeria paid social media influencers to spread disinformation ahead of the country's 2023 election

It is for this very reason that industries like disinformation-for-hire, in which these platforms' features are manipulated to spread harmful information for profit, will never go away. For instance, in Kenya, the trending algorithm of X, formerly Twitter, was easily manipulated by a thriving disinformation-for-hire industry that spread propaganda and stifled dissent. Similar discoveries were made in Nigeria ahead of its 2023 elections.
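A toy model shows why volume-based trending is so easy to game; the hashtags and account counts below are invented, and real trending systems weigh many more signals than raw post counts.

from collections import Counter

# 300 genuine users each post once about election day ...
organic = [(f"user{i}", "#ElectionDay") for i in range(300)]
# ... while 50 rented accounts post the paid hashtag 16 times each.
coordinated = [(f"bot{i % 50}", "#CandidateXFraud") for i in range(800)]
tweets = organic + coordinated

# Naive trending: rank hashtags by raw post volume. The paid tag wins.
print(Counter(tag for _, tag in tweets).most_common(2))

# Counting unique authors per hashtag instead blunts the simplest attack:
# 300 real authors now beat 50 bots.
print(Counter(tag for author, tag in set(tweets)).most_common(2))

Deduplicating by author defeats only the crudest version of the scheme, which is why commercial operations rent large networks of real-looking accounts instead.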

The consequences of these actions are far-reaching, eroding trust in information sources and undermining the democratic process. But if you were to ask X about it now, the only response you'd get is a poop emoji. That response is what regulators and accountability actors ought to pay attention to. It's a clear sign of how confident these companies are that they can get away with negligence.

Such problems are not exclusive to a particular platform. Rather, most of these platforms mimic each other's bad behavior. X isn't the only company that has laid off many of its trust and safety personnel in recent times. Meta has laid off some 21,000 employees since November 2022, raising concerns about the impact on trust and safety, while many YouTube employees working on misinformation policy were affected by layoffs at parent company Google. There's evidence that many of the resources cut at these companies had been dedicated to dealing with the contexts of Global Majority countries.


After being fired by an outsourcing company, former Facebook content moderators decided to take Meta to court

Tapping into large-scale data from social media platforms allows researchers and local organizations to understand things such as the impact of online polarization or how misinformation affects voting patterns. But tech giants have now, almost across the board, made it nearly impossible to access data from their platforms, either by restricting their APIs (the interfaces that let external programs retrieve platform data) or by making them prohibitively expensive to use. This makes it harder to hold tech firms accountable even as the landscape of threats facing users grows.
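For a sense of what is being cut off, here is a minimal sketch of the kind of programmatic collection researchers rely on. The endpoint, parameters and token are hypothetical stand-ins modeled on typical social media search APIs, not any specific platform's interface.

import requests

API_URL = "https://api.example-platform.com/v2/posts/search"  # hypothetical endpoint
TOKEN = "RESEARCHER_ACCESS_TOKEN"  # the kind of credential now priced out of reach

def fetch_election_posts(query: str, max_pages: int = 5) -> list[dict]:
    """Page through search results for posts matching an election-related query."""
    posts, cursor = [], None
    for _ in range(max_pages):
        params = {"q": query, "limit": 100}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            API_URL,
            params=params,
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        # A 402 or 403 here is what restricted access looks like in practice.
        resp.raise_for_status()
        data = resp.json()
        posts.extend(data.get("results", []))
        cursor = data.get("next_cursor")
        if not cursor:
            break
    return posts

When such endpoints are removed or repriced, every analysis built on top of them, from polarization studies to election monitoring, stops working at once.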

It therefore doesn't matter how many fact-checking programs tech giants tout: the reality is that platforms are probably less prepared for the threats facing their users in the 2024 super election year than they were in 2016, when the problematic nature of their industry first went mainstream.

AI a threat, but also potentially part of the solution

Although artificial intelligence is often touted as a massive disinformation threat, it could also be part of a potential solution. Advanced models like the chatbot GPT-4 have shown promising results in moderating content. Self-reported data from social media firms also shows how such platforms are increasingly leaning on automated detection tools to filter disinformation and hate speech, which cuts the amount of content that human reviewers must look at. But there's also significant risk from leaning too much on AI, whether it's accidentally taking down legitimate speech or missing key linguistic and local contexts. 
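As a concrete sketch of what LLM-assisted moderation can look like, the snippet below asks a model to assign one coarse policy label to a post. The labels, prompt and classify_post helper are illustrative assumptions rather than any platform's actual pipeline, and it presumes the openai Python package with an API key in the environment.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["hate_speech", "election_disinformation", "harassment", "none"]

def classify_post(text: str, language_hint: str = "unknown") -> str:
    """Ask the model for exactly one coarse policy label."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep classification output as stable as possible
        messages=[
            {"role": "system",
             "content": ("You are a content moderation assistant. "
                         f"Classify the user's post into exactly one of: {LABELS}. "
                         "Reply with the label only.")},
            {"role": "user",
             "content": f"Language hint: {language_hint}\nPost: {text}"},
        ],
    )
    label = response.choices[0].message.content.strip()
    # Anything the model can't label cleanly goes to human review, which is
    # exactly where thin staffing for non-English languages bites hardest.
    return label if label in LABELS else "needs_human_review"

The fallback to human review is the crux: automated filtering only cuts reviewer workload safely if someone with the right linguistic and local knowledge is still there to catch what the model misses.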

Problematically, the tech giants that dominate social media are also riding the AI innovation wave. As a result, observers see the same problems persisting as this technology is adopted by consumers: exploitative labor systems, promotion of harmful bias and lack of transparency.

Finding a way out of this complex web of challenges is not easy. While regulation can play a role, it has its limitations.


There is an audience for tech reporting in Africa

'Invest in tech journalism'

I believe investing in tech journalism can be part of the solution: technology plays such an integral role in consumer and political affairs that there is a latent audience hungry for tech-related stories. And I don't mean run-of-the-mill tech reporting that geeks out about the latest gadgets.

Technology at its core is constantly reshaping and restructuring society. We need efforts that focus on training journalists to recognize how tech sits at the heart of many of their stories, and what impact it has on the injustices they cover. Publications such as Rest of World, Semafor and The Continent have shown that there is audience traction to be gained with their dedicated coverage of these issues across Africa.

This approach can empower the engine of accountability and provide some leverage against the global hegemony of tech platforms. A good example is how coverage by local tech journalists of the plight of content moderators in Kenya enabled the moderators not only to gain narrative momentum with the public but also to seek justice in Kenyan courts. Through this, they became the first group of Africans to take tech giants such as TikTok and Meta to court. It's only through collective action that we can hope to navigate the complex terrain of the digital age and safeguard our democratic values.

This article by Odanga Madung was a guest contribution. Odanga is a platform integrity researcher at the Mozilla Foundation and a journalist based in Nairobi, Kenya.

This article is part of Tackling Disinformation: A Learning Guide produced by DW Akademie.

The Learning Guide includes explainers, videos and articles aimed at helping those already working in the field or directly impacted by the issues, such as media professionals, civil society actors, DW Akademie partners and experts.

It offers insights for evaluating media development activities and rethinking approaches to disinformation, alongside practical solutions and expert advice, with a focus on the Global South and Eastern Europe.
