The online information environment
The internet has transformed the way people consume, produce, and disseminate information about the world. A new report from The Royal Society highlights key challenges for creating a healthy online information environment.
Key findings
- Although misinformation content is prevalent online, the extent of its impact is questionable.
For example, the Society’s survey of members of the British public found that the vast majority of respondents believe the COVID-19 vaccines are safe, that human activity is responsible for climate change, and that 5G technology is not harmful. The majority believe the internet has improved the public’s understanding of science, report that they are likely to fact-check suspicious scientific claims they read online, and say they feel confident challenging friends and family on scientific misinformation.
- Echo chambers (where people encounter information that reinforces their existing beliefs, online and offline) are less widespread than commonly assumed, and there is little evidence to support the filter bubble hypothesis (where algorithms cause people to encounter only information that reinforces their existing beliefs).
- Uncertainty is a core aspect of the scientific method, but significant dispute amongst experts can spill over to the wider public, according to the report. This can be particularly challenging when the uncertainty is prolonged and the topic has no clear authority. Such prolonged uncertainty creates information ‘deserts’ online, with platforms unable to clearly guide users to trustworthy sources. For example, during the COVID-19 pandemic, organisations such as the World Health Organization and the National Health Service were able to act as authoritative voices online. With topics such as 5G telecommunications, however, it has been more difficult for platforms to quickly identify trustworthy sources of evidence and advice.
- The concept of a single ‘anti-vax’ movement is misleading and does not capture the range of reasons why some people are reluctant to be vaccinated.
- Technology can play an important though limited role in addressing misinformation content online.
In particular, it can be useful in areas such as rapid detection of harmful misinformation content. Provenance-enhancing technology, which provides information on the origins of online content and how it may have been altered, shows promise and will become increasingly important as misinformation content grows more sophisticated. Even now, expertly manipulated content appears to be difficult to detect: survey experiments conducted for the report indicate that most people struggle to identify deepfake video content even when prompted.
- Incentives for content production and consumption are the most significant factor to consider when evaluating the online information environment. These incentives operate at both macro and micro levels (affecting platforms and individual users alike) and are described in the report in terms of whether content exists for public benefit (eg helping others) or private benefit (eg generating financial profit).
Understanding how to mitigate the role of these incentives in the spread of misinformation content requires further consideration of the economic and legal aspects of the online information environment.
Recommendations
As part of its online harms strategy, the UK Government must combat misinformation which risks societal harm as well as personal harm, especially where a healthy environment for scientific communication is at stake.
When considering the potential damage caused by unchecked scientific misinformation online, the framing of ‘harm’ adopted by the UK Government has focused primarily on harm caused to individuals rather than to society as a whole. This limitation risks excluding, for example, misinformation about climate change. While the commissioned YouGov survey suggests that levels of climate change denialism in the UK are very low, there is evidence to suggest that misinformation encouraging climate ‘inactivism’ is on the rise.
The consequences of societally harmful misinformation, including its influence on decision-makers and public support for necessary policy changes, could feasibly contribute to physical or psychological harm to individuals in future (eg through failure to mitigate climate catastrophe).
This view is complemented by our YouGov survey, which suggests that the public are more likely to consider misinformation about climate change harmful than misinformation about 5G technology (a subject frequently cited in discussions of online harms).
There needs to be a recognition that misinformation affecting group societal interests can cause individual harm, especially to infants and future generations who do not have a voice. We recommend that societal harms to current and future generations, such as those arising from misinformation about climate change, be given serious consideration within the UK Government’s strategy to combat online harms.
Governments and social media platforms should not rely on content removal as a solution to online scientific misinformation.
Society benefits from honest and open discussion on the veracity of scientific claims. These discussions are an important part of the scientific process and should be protected. When they risk causing harm to individuals or wider society, it is right to seek mitigating measures, and this has often led to calls for online platforms to remove content and ban accounts. However, whilst removal may be effective and essential for illegal content (eg hate speech, terrorist content, child sexual abuse material), there is little evidence that it works for scientific misinformation; measures addressing the amplification of misinformation may prove more effective.
In addition, a causal link between online misinformation and offline harm is difficult to demonstrate, and there is a risk that content removal may cause more harm than good by driving misinformation content (and people who may act upon it) towards harder-to-address corners of the internet.
Deciding what is and is not scientific misinformation is highly resource-intensive and not always immediately possible, as some scientific topics lack consensus or a trusted authority from which platforms can seek advice. What may be feasible and affordable for established social media platforms may be impractical or prohibitively expensive for emerging platforms experiencing similar levels of engagement (eg views, uploads, users).
This is an abridged and edited version of The Royal Society’s report, The Online Information Environment. The full report is available on the Society’s website.
Follow us on Twitter: @risksEmerging