We’re entrusting ever more aspects of our lives, identities and relationships to the digital realm, but what level of trust do we feel? And are we making conscious choices about where we place it?
COVID-19 has highlighted our expanding dependency on personal data to support public wellbeing, and drawn attention to questions such as who uses it and how. A row broke out recently in the UK when it emerged that the government had shared test-and-trace data with the police.
Those with low trust in applications of personal data try to protect themselves by keeping a low profile online – but how safe are they?
“Even people who are most scrupulous about their personal data are unaware of how much of a trail they leave,” says Bjorn Wahlstrom, who runs Current Consulting, a corporate investigation agency operating in Hong Kong and China. “For instance, a person can easily be traced to their hometown through shared family photos combined with satellite data, even if they aren’t active on social media.”
Beyond the risks associated with data relating to individuals, there are the risks to society of information, or misinformation, shared with individuals. The prevalence of online misinformation – and of false claims about misinformation itself – has weakened trust in the voting process in the United States, to the extent that millions of Americans doubted or rejected the result of the recent presidential election.
As trust ebbs, it’s perhaps unsurprising that there are more calls for the digital realm to be both protected and policed. The Economist reports that Facebook took down 33,600 pieces of content in response to legal requests, and used artificial intelligence to identify and remove 99% of child nudity posts before they were seen and reported.
These examples call on us all to think about what systems and structures we need for a trustworthy digital realm. Who should be accountable? Are tech giants well placed to police this public space? Are they protecting public interests by removing fake news and harmful content, or are they, as the New York Post has described it, carrying out acts of modern totalitarianism?
Moreover, to what extent can governments address these complex questions? Policy debates are taking place in countries across the world, and yet the digital realm stretches across borders in multiple ways, from infrastructure, to content creation, to users and their interactions.
This conundrum led participants in a workshop I facilitated to discuss whether we need a separate world order to govern the digital realm. The workshop was an online, transborder event, jointly organised by my innovation agency Flux Compass and the Institute for Public Sector Transformation at Bern University of Applied Sciences, with delegates from Switzerland, Shenzhen, Hong Kong and Singapore working in research, tech businesses, public sector transformation and organisational change.
Our objective was to identify policy recommendations that the OECD could put to governments, as part of its #govaftershock initiative to support each country’s capacity to anticipate, understand, and govern complex and changing circumstances, while promoting international collaboration.
The conversation moved on to ways to enable public engagement and inclusive governance. How can we really understand what people want? One current method is to look at how people are using the web, but this has severe limitations, observed Dr Anna Jobin, one of the speakers who helped set the scene: usage can only indicate preference if there are real choices to make. To what extent are individuals aware of making choices, and do they understand the implications of their behaviours?
Another question for inclusive governance is how policy procedures can be designed to include not only diverse user groups, but also those who are not active in the digital space – whether by choice or through digital exclusion: lack of infrastructure such as a network, lack of access to a device, insufficient information in minority languages, and so on.
What fundamental changes are needed to enable inclusive engagement on policy-making?
This is where the question of a new digital world order was raised. Sovereign boundaries don’t apply to communities online, participants observed, while multiple jurisdictions are an obstacle to governance: how can we enable the digital world to be less country-based, even at the fundamental level of internet gateways and country codes?
Dr Severine Arsene, another speaker, recalled the Declaration of the Independence of Cyberspace, written in 1996, declaring “the global social space we are building to be naturally independent of the tyrannies [of governments]”. This Declaration envisaged Cyberspace as a realm beyond national borders “that all may enter without privilege or prejudice”, and was sadly not cognisant of the extent to which society’s inequalities would be embedded and reflected in its development.
In particular, the conversation focused on the dominance of business interests and their focus on profit, as opposed to the creation of shared, public value. Current governance models simply aren’t doing enough to promote citizens’ interests: a more open approach to policy-making is required, in which citizens are empowered as part of collaborative, participative and transparent processes. Civil institutions need to be strengthened to promote ways for citizens to co-create their digital future, and conflicts of interest need to be recognised and addressed, with better checks and balances. Critically, more time is needed to consider some of the challenges: the perception of a global digital race is engendering a ‘move fast and break things’ approach which lacks sufficient caution and necessarily leads to non-inclusive decision-making.
Collaborative, inclusive design for the public good is critical for the future of our ‘digital lives’ – but let’s not forget the fundamental question: how much of our lives do we want to be digital? Where are we drawing the border between the human and digital? Is that a choice we’re making consciously? And crucially, who is ‘we’?