Key findings from the literature review

The full literature reviews can be accessed here: Literature Review Part One PDF and Literature Review Part Two PDF.

To paraphrase classical historian Mary Beard, western democracy is a 2,000-year-old experiment. In 2019, the significant technological disruption that is digital media is having a powerful effect on the results. Yet what is the nature of that effect? Does our collective written and published knowledge tell us what benefits and opportunities digital media offers in building a stronger, more inclusive and participatory democracy, and what threats or risks it poses? And what, if anything, does the empirical evidence tell us about how to optimise the opportunities and reduce the risks to our democracy from digital media?

The answer to these questions remains elusive. While our literature review was not exhaustive, this research confirms that there is, at present, a troubling dearth of scientific, empirical, evidence-based research that tests or aims to validate “workable solutions” to the seven key threats to democracy we’ve identified in this project.

While some empirical evidence exists, notably in the area of designing new platforms and affordances with pro-social intent, the significant majority of the research relating to the threats we identified is based on expert opinion and normative approaches. That is, it presents theoretically sound arguments about the way things “ought to be” if democracy is to be “reclaimed” from incivility and a rogue form of capitalism in the digital age.

In the expert-opinion literature, the following four themes were identified:

1. Policy / Legal Solutions

For example: adapt existing legislation; create new legislation; institute new oversight bodies or inter-governmental agencies; or improve regulations on content moderation.

2. More Corporate Transparency

Currently, the lack of transparency around moderation practices presents challenges to accountability, governance, and the ability to apply public and legal pressure. Expanding empirical research to improve moderation processes requires private intermediaries to make these processes and practices accessible to researchers.

3. Better Design

Platform design can influence the way individuals, organisations and institutions make decisions around platform uses and objectives. Pro-social and democratic values must be encoded into the infrastructure of the internet, including algorithms. At present, the normative values embedded in these global private intermediaries – e.g., openness, connectedness, free speech – are not culture-neutral norms. Recognising this is the first step towards designing more deliberative spaces, pro-social tools and online environments.

4. Improve Content Moderation

Calls range from the standardisation of industry-wide “best practices” to more transparency and researcher access. These actions would require greater corporate transparency; corporate grievance mechanisms that are transparent, accessible and in accordance with international human rights law; a multi-stakeholder, inclusive governance approach; and making content moderation an organisational priority rather than a departmental silo.

This absence of tested solutions is not evidence that the proposals do not work, only that they are untested. This leads us to conclude there is a critical need for investment in more research. People in government, civil society, NGOs, and private enterprise need to commit to researchers and projects that will do pre- and post-testing of the solutions that stakeholders are recommending.

Such research will not only measure effect and enable us to extend what’s working to other places or contexts, but also ensure that future normative prescriptions are informed by evidence beyond the anecdotal (or by budgeting restrictions).

It is especially critical that people in the New Zealand government measure whether or not what is being done is working to build a more inclusive and participatory democracy. New Zealand would break significant ground in that regard.

When people in government and civil society seek recommendations for solutions, they need to mitigate the risk that experts reproduce “solutions” that fit the professional discourses in which they’re embedded. To do this, it is important that people in government ask multi-stakeholder group participants:

  1. What evidence, if any, do they have for the suggestions made?

  2. What experiences inform these recommendations, and why do they identify them as workable solutions over others?

  3. How do they imagine testing their effectiveness?

Given the current lack of evidence, it is critical that the values, experiences, and outcomes that underlie recommendations are made transparent and visible.