Research Note

The Digital Identity Accountability Gap

Our second Key Finding highlights the urgent need for a clear, widely accepted definition of digital identity that prioritises individual rights and privacy. The lack of a universal definition for digital identity, outlined in Key Finding 1 and published last week, creates significant challenges in policy-making, systems design, and everyday use. Definitions vary widely based on personal interests, market forces, and technological trends, and the resulting ambiguity allows for harm to slip through the cracks — sometimes accidentally, sometimes not. For those living in highly digitised societies, the consequences are evident: trust in digital identity systems is undermined by their inherent contradictions and failures.

If you or your organisation are interested in collaborating on a case study, or if you have any questions about this work, we’d love to hear from you, via email or Signal.


[1]When digital technologies are implemented in societies, they reshape both power structures and the opportunities available through new digital systems. Digital identity systems, in particular, carry inherent ambiguities: unclear and often contradictory definitions, together with weaknesses in their conceptual models and evaluations, significantly influence their real-world outcomes. Within digital security, vulnerabilities are typically identified through adversarial security practices. These practices involve security experts acting as attackers to discover flaws in a system’s design and implementation, detailing these vulnerabilities, and forecasting the potential consequences of exploitation. Proposed fixes are then typically tested through repeated adversarial analysis.

The need for cybersecurity is obvious. Yet, despite the explosive growth of the cybersecurity industry over the last twenty years, there remains no equivalent adversarial socio-technical security practice to analyse digital infrastructure before it is implemented. Consequently, deeper conceptual flaws beyond cybersecurity’s immediate scope often remain unchallenged and unaddressed[2]. This gap is particularly dangerous given that digital identity itself lacks a coherent, universally accepted definition, as detailed in our first finding. This definitional ambiguity is not merely academic; it enables specific mechanisms that create accountability vacuums. These mechanisms include the obfuscation of responsibility through algorithmic decision-making, the shifting of legal burdens onto individuals through consent-based models of 'user sovereignty', and the creation of centralized points of failure that amplify the potential for fraud and abuse. In the context of digital identity systems, these oversights have severe implications. The most tangible effects of these accountability gaps include shifting responsibility from providers to users, appointing individuals as involuntary identity managers, and limiting citizens’ ability to negotiate equitably with identity or service providers. As policymakers and technology vendors continue to communicate ineffectively, these fundamental socio-economic and legislative consequences remain unresolved.


In an August 2020 keynote, Estonian President Kersti Kaljulaid described digital identity as foundational for social cohesion within a connected society: “How can you ask people to apply proper cyber hygiene, if they do not have a way to identify themselves to each other while they act and transact online? Everything starts from governments, who are the only entities, who have the legal space control, who can actually create digital identities, which are respected by all parties and they should work internationally.”[3] Estonia’s computer-addicted government has spent twenty years arguing the case for a mandated state digital identity both at home and abroad. Digital identity proponents have positioned Estonia’s scheme as the only means to efficiently govern a complex modern society. Years after helming the digitisation of Estonia’s government and financial sector, Toomas Hendrik Ilves stood before the UN and positioned Estonia’s 2012 digital election as a success: “Twenty-one years after restoring our independence, Estonia is an example where a combination of responsible free enterprise, E-governance, international partnerships and eco-friendly policies can put you in the fast lane of development. [...] [Estonia’s digitised public service] has increased the possibility to exercise fundamental rights and freedoms and improve inclusive and responsible governance.”[4]

This utopian advocacy for the digitised society is not new and not unique to Estonia, but the country’s early and successful implementation of a national identity scheme includes then-novel fundamentals that have subsequently been adopted and normalised throughout the world. Within Estonia’s digital society, citizens are able to transact with the government in highly personal contexts, such as receiving medical care and prescription refills, participating in an election, or enrolling their children into childcare and schooling services. In all cases, this is presented as effortless and secure, despite the tremendous amount of data generated and stored on citizens with each transaction. However, despite their conceptual aspirations of user empowerment, efficiency, innovation or transparency, these fundamentals create significant reconfigurations of power relations between users, service providers and identity providers.

To combat criticisms of surveillance, Ilves’ digitisation office proposed the concepts of state accountability, where state workers are surveilled while accessing stored data on citizens and disciplined for improper use, and so-called user data sovereignty, where the citizen is designated with the responsibility to set and manage individual access rights to their personal information for service providers.[5] Although widely marketed as a set of policy and systems innovations, these two concepts are an acknowledgement of the inescapable surveillance capabilities of a digital state, made possible by combining the promises of frictionless access to citizen records and the implementation of an all-encompassing and data-rich digital identity. Having been unable to cryptographically prevent the weaponised design[6] potential inherent in such a system, the proposed solutions are instead socio-technical, where the acknowledged interplay between the state as an identity vendor, service providers, and users produces emergent threats wholly beyond the system itself.[7]

It goes without saying that vendor accountability is an unconvincing policy whose enforcement remains subject to the discretion of relevant authorities. One prominent example is the Australian government’s Robodebt scheme, a bipartisan[8] digital identity system[9] intended to algorithmically eliminate welfare fraud. Ironically, the system itself unlawfully[10] issued hundreds of thousands of debt notices based on flawed algorithms without adequate proof. Between July 2016 and October 2018 alone, data from the Department of Human Services revealed that 2,030 people died after receiving a Robodebt notice, with nearly a third of them classified as "vulnerable". The scheme inflicted significant financial distress, emotional trauma, and resulted in several suicides[11]. Despite the harm, which led to a Royal Commission and the government being ordered to pay A$1.2 billion in a class-action settlement, no civil servant or government minister was held accountable, illustrating the systemic failure of accountability in digital governance.

Similar dynamics have emerged in Tanzania with the deployment of the National Identity Authority (NIDA). By 2021, nearly three-quarters of the eligible population had enrolled in Tanzania’s national identity scheme. However, systemic design flaws and insufficient legislative protections created significant barriers to essential civic services. Citizens were forced into burdensome individual negotiations with service providers and tasked with managing their own identity security. These dynamics reinforced existing inequalities, leaving citizens vulnerable to exclusion with little accountability or avenues for redress.

The deployment of digital identity in democratic elections is another hallmark of this accountability gap that, despite its flaws, remains unchallenged. Once again, Estonia’s digital state is considered a pioneer of electronic and internet-based voting and governance[12]. However, a 2014 study by an international team of security researchers identified critical social engineering and ‘false verification’ attacks, rendering vote casting systems vulnerable to real-time manipulation[13]. In a damning disclosure, the researchers warned that the flaws were so deeply rooted in the voting system’s architecture that the only correct course of action was to discontinue internet-based voting until the system could be fundamentally re-engineered to prevent potential large-scale electoral fraud.[14]

The problem of democratic accountability extends beyond Estonia. For example, despite the attention around Estonia’s digital civil society, the country is far from the first to host digital elections – India has relied on them since 1982[15], adopting digitised electoral roll enrolment in 2023[16]. Proponents hail these systems as milestones in transparency for the world’s largest democracy, yet India’s electronic polling has continually faced calls for full reconciliation of electronic tallies with paper verifications.[17]

In many cases, those demands remain unmet, casting a deeply opaque digital shadow on suspected electoral fraud. On January 5, 2025, New Delhi Chief Minister Atishi publicly alleged a large-scale voter fraud scheme involving thousands of suspicious applications filed under names of individuals who denied submitting them, with local officials reportedly deleting records without proper verification.[18] This latest allegation is one of hundreds of real or suspected cases of fraud within the country’s e-voting systems, and comes after the Electoral Office spent the past five years linking voter rolls to India’s national Aadhaar ID system, purportedly to combat electoral fraud.[19] In a recent Mumbai rally, Congress Party leader Rahul Gandhi went so far as to assert that Prime Minister Narendra Modi “can’t win polls without EVMs, ED, CBI, and I-T,”[20] accusing the ruling party of undermining democratic institutions by relying on compromised voting infrastructure.


While the concept of user data sovereignty is widespread and often described as users having control over their own data, this control is effectively ceded the instant the data is transferred to the service provider. Service providers frequently make non-negotiable demands for personally identifiable information, insisting that users disclose sensitive data as a prerequisite for accessing their services.[21] If users refuse to comply with these demands, they are typically denied access, leaving them with little choice but to acquiesce if they wish to use the service. This undermines the stated aspiration of user sovereignty, as it strips users of meaningful control over their personal data once it is handed over. Instead of empowering users, the system reinforces the dominance of service providers who set unilateral terms, effectively eroding user agency through coercion and lack of post-interaction enforcement. The proposed solution, vendor accountability, is an idealistic appeal to power that highlights the inadequacy of user sovereignty in addressing these power disparities. When a digital identity is deployed into a digitised society, it creates an opportunity to further centralise power away from users, and towards identity and service providers.

This combined power imbalance and vacuum of accountability is most prevalent in consumer finance, where the notion of user control over personal data is increasingly undermined. As part of broad anti-fraud efforts, banks continue to adopt newer forms of digital identity verification, including fingerprint scanning, facial recognition, and voice recognition.[22] The use of biometrics is positioned as essential for giving users more control over their identity and finances, yet it requires users to surrender sensitive personal information. Once this biometric data is provided, control is effectively ceded. Regardless of whether the biometric is held by the user or even by a fraud-detection third party, the power dynamic has shifted towards the user’s non-negotiable surrender of sensitive information.[23]

At the same time, biometric technologies are used with increasing frequency to force customers to shoulder greater financial risk. In instances of unauthorised transactions, banks have claimed that successful biometric verification is evidence that an account holder has approved a transaction, and subsequently refused to reimburse victims of fraud. This effectively shifts the burden of proof onto the consumer, who must demonstrate that their immutable biometric data was somehow compromised – a daunting and often impossible task. In many jurisdictions, such claims flagrantly disregard existing consumer protection legislation, yet governments have been slow to address this behaviour. The implementation of biometric verification not only erodes consumer protections but also amplifies the existing power imbalance between service providers, users and the state. It underscores how the concept of user sovereignty fails to address these disparities, as users are compelled to relinquish control over their most personal data without genuine recourse or negotiation.


In research interviews, participants described examples of how digital identities erode state or corporate accountability. The details varied widely, but shared common characteristics: digital identity became a surface to project one’s goals onto, and the nature of this first principle could both obfuscate actors’ intent and liability and allow for second-order consequences that leave advocacy organisations, policy makers, or even citizens themselves racing to catch up. One participant, formerly employed to lead digitisation efforts within financial institutions, spoke openly about a culture that encouraged the development of user-centric identities through over-datafication and profiling, emboldened by a cavalier and dismissive culture around digital identity:

“What I saw was terrifying, a wild west with no rules. The goal of employees was to stay out of prison. Don’t break any laws that you know about. Ask for forgiveness if you didn’t know what you were doing was illegal. That culture keeps me up at night. It terrifies me. If we continue to create policy and regulation based off of what we have done for the past 20 years, it’s going to be... Actually, it’s already a massive global problem. I think it’s going to get even worse.”[24]

Another participant, an academic and civil rights activist, described a similar void of accountability within the context of Aadhaar, a biometrics and demographics-derived mandatory digital identity scheme[25] for India’s 1.43 billion citizens.[26] While Aadhaar has reached over 1.2 billion enrolments and is linked to over 1,600 government schemes, its implementation has been plagued with issues. A 2019 study by Dalberg found that while many users reported benefits, 0.8% of people were denied essential services like food rations due to Aadhaar-related failures, affecting millions.[27] The same study noted that marginalised groups, such as homeless and transgender people, have significantly lower enrolment rates (30% and 27% respectively lack Aadhaar)[28]. This exclusion from the foundational ID system effectively bars them from critical welfare.

“The government thought that the problem in the food subsidy programme was that, before I can go and withdraw my rations, you go and pretend to be me and take my rations. And so, they proposed to use biometric authentication [Aadhaar] to ensure that only I can get my ration and not you.
Where this technology is supposed to stop fraud, it actually ends up empowering corruption within the bureaucracy. The authority responsible for distributing food subsidies checks my Aadhaar identity and tells me, ‘Oh, the authentication has failed.’ But in fact, it’s gone through. The fraud point is centralised and lucrative, and Aadhaar makes this possible. Instead of holding fraudulent operators accountable, the entire population is being punished with this crazy technology and being made to pay the price for somebody else’s faults.”[29]

This criticism is not specific to Aadhaar. Instead, it illustrates how the use of digital identity as an enforcer of accountability creates a new set of ambiguities that remain unaccounted for, such as when “ration shop owners were asked to take photographs of [people for whom fingerprint authentication failed] before giving them food rations”[30] or when the UIDAI failed to file “a single case against anyone” as a result of the ‘Aadhaar leaks’ scandal (in theory “punishable by up to three years in prison”).[31] This is especially true of India’s various welfare programmes, where bureaucratic failures and corruption are well documented.[32] Here, recipients now navigate an apparatus whose corruption is centralised and emboldened by a new impunity made possible by the system’s design. At the same time, within the same system, these citizens have lost the ability to negotiate with the state.

Over time, the potential consequences of digital identity for power relations have become more widely understood, leading to rising resistance to digital identity itself. In 2019, following the World Food Programme’s biometric- and blockchain-powered Building Blocks pilot project in Jordan two years earlier, Houthi authorities resisted attempts at “ventriloquis[ing] for the poor,”[33] protesting the deployment of similar schemes within Yemen and identifying the central role biometric digital identity plays in challenging local sovereignty.[34] As a result, the final design of the aid project granted the Houthis control over data storage and access – a decision by the World Food Programme that may have its own unforeseen consequences.

In Western contexts, humanitarian or crisis-based identity systems follow a similar trajectory, shifting the burden of proof from identity and service providers to individual users. In Digital identity as platform-mediated surveillance, Silvia Masiero highlights how the Eurodac system, which “univocally identifies asylum seekers in European countries through their fingerprints,” was in 2015 made interoperable with national police authority databases across Europe.[35] In the US, medical records from abortion clinics and client data from domestic violence organisations are routinely used by immigration enforcement officers during dragnet operations targeting suspected undocumented migrants[36], and often the integrity or accuracy of such policing operations remains completely unchecked.[37] In this case, even a heavy regulatory effort targeting digital identity would fail to address the core problem: it is difficult to argue that even a plurality of abortion clinic users would wish to carry this digital identity and corresponding medical data as an attribute in a centralised wallet, or in a decentralised one held on a device easily stolen or seized. Instead, the only path forward that avoids such abuses is to fully anonymise and segregate the data beyond the reach of any digital identity. This is at odds with the desires of digital identity proponents, particularly those who embrace the Ilves-style rhetoric of the responsible, all-knowing, digitised social state.


Finally, digital identity erodes state sovereignty itself. Operating through a radical redistribution of responsibilities between institutions, the shifts in accountability created by the deployment of digital identity controlled by third parties parallel the disempowerment born from the loss of control over other forms of infrastructure through privatisation. The 20th century is filled with examples of the erosion of state stability and social cohesion through rampant privatisation. In the modern digitised society, control of the underlying infrastructure on which a digital system is built is of equal importance, but is rarely included in analyses of state digital sovereignty. This has immediate consequences. Since 2022, Ukraine has relied on Starlink for satellite internet for citizens and defence, and this dependency has been leveraged against the state by Elon Musk.[38] In Brussels, the dream of the smartphone-based European digital identity (eIDAS) will have to rely heavily on Amazon AWS servers[39] and the Apple and Google smartphone duopoly, creating a sovereign dependency on foreign companies that are regularly sued by European regulators.

Everywhere digital identity is deployed, complex questions around sovereignty and accountability are raised. In response to each new wave of systemic failure, new regulatory frameworks and technical standards are invariably proposed as definitive solutions. Yet these proposals often arise from the same misguided thinking and flawed first principles that created the initial problems, promising user control and security while reinforcing the very power imbalances they claim to solve[40]. This cycle of failure, followed by promises of a technical or legislative fix, serves to obscure the fundamental nature of the problem.

Such accountability failures remain unresolved, even in emerging policy and technical specifications. A recent technical analysis of the eIDAS network found that, in practice, many solutions fail to follow modern security guidelines because “solution providers trade security for simplicity.”[41] In other words, regardless of the technology or the motivations of its designers, there is yet to exist a framework capable of eliminating the core issues of power imbalance and the potential for systemic harm enabled by digital identity systems. Our research shows the opposite: these complex dynamics around legitimacy, control, power and care act as a disconnect, creating hidden accountability gaps that allow the most egregious failures to be ignored by most, if not all, actors in the field.🞻


A flagship adversarial analysis of past, present and future digital identity systems.
The Digital Identity Event Horizon is a groundbreaking report that exposes how digital identity creates brittle societies.

  1. Key Points

    • No adversarial framework exists to test digital identity infrastructure before deployment, leaving critical flaws unaddressed.
    • Ambiguity around accountability shifts responsibility onto users, who must act as identity managers without meaningful recourse.
    • Utopian narratives like Estonia’s digital state mask structural power imbalances and normalise state surveillance.
    • Vendor and state accountability measures are often performative, failing to prevent systemic harm, as seen in Robodebt and Aadhaar.
    • Biometric identity systems increase user risk while eroding legal protections and consumer rights.
    • In practice, ‘user sovereignty’ means coerced compliance with service provider terms under threat of exclusion.
    • Digital identity systems entrench inequality, undermine civic trust, and disempower both users and nation-states.
    • Without genuine accountability, digital identity enables coercion, fraud, and systemic abuse at scale.
    ↩︎
  2. Echoes from History II: The Dangers of eIDAS
    Christopher Allen, Blockchain Commons
    21 November 2023 ↩︎

  3. Opening Speech by President Kersti Kaljulaid
    Latitude59
    2020 ↩︎

  4. Information Technology Can Transform Countries, Estonian President Tells UN Debate
    UN News
    26 September 2012 ↩︎

  5. President of Estonia: How do we improve security and e-governance?
    e-Estonia
    8 September 2016 ↩︎

  6. On Weaponised Design
    Cade Diehm, The New Design Congress
    16 February 2018 ↩︎

  7. Entanglements and Exploits: Sociotechnical Security as an Analytic Framework
    Matt Goerzen, Elizabeth Anne Watkins, and Gabrielle Lim, 9th USENIX Workshop on Free and Open Communications on the Internet
    2019 ↩︎

  8. Australia’s Robodebt Scheme: A Tragic Case of Public Policy Failure
    Chirrag, Blavatnik School of Government
    26 July 2023 ↩︎

  9. Labor Falls Short on Robodebt Royal Commission Measures
    Josh Adams, Green Left
    5 December 2023 ↩︎

  10. The Federal Court Approves a $112 Million Settlement for the Failures of the Robodebt System
    Human Rights Law Centre
    11 June 2021 ↩︎

  11. After robodebt: Restoring trust in government integrity and accountability
    Mark Dreyfus, The Monthly
    1 February 2024 ↩︎

  12. Estonia 2024 Digital Decade Country Report | Shaping Europe’s Digital Future
    European Commission
    22 July 2024 ↩︎

  13. Estonian Electronic Voting System Vulnerable to Attacks, Researchers Say
    Lucian Constantin, PCWorld
    12 May 2014 ↩︎

  14. Security Analysis of the Estonian Internet Voting System
    Drew Springall et al., Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (CCS’14)
    2014 ↩︎

  15. “Pioneer Of Digital Democracy”: India 1st Country In The World To Use EVMs
    Pallava Bagla, NDTV
    17 May 2024 ↩︎

  16. From EVMs To Blockchain: How Technology Is Revolutionising India’s Electoral Process
    Anjan Pathak, BW Businessworld
    3 June 2024 ↩︎

  17. India’s Voting Machines Are Raising Too Many Questions
    Andy Mukherjee, Bloomberg
    11 April 2024 ↩︎

  18. Large Scale Fraud Taking Place in Voter Additions and Deletions in New Delhi Assembly. Delhi CM Atishi Ji Writes This Letter to Hon’ble CEC Presenting Evidence and Seeking Time to Meet
    Arvind Kejriwal [@ArvindKejriwal], Twitter
    6 January 2025 ↩︎

  19. Link Your Aadhaar and EPIC | Digital Governance
    Vikaspedia
    Accessed 4 April 2025 ↩︎

  20. Modi Can’t Win Polls without EVMs, ED, CBI, IT: Rahul Gandhi
    Purnima Sah & Abhinay Deshpande, The Hindu
    17 March 2024 ↩︎

  21. Digital Identity Infrastructures: A Critical Approach of Self-Sovereign Identity
    Alexandra Giannopoulou, Digital Society 2, no. 2
    2023 ↩︎

  22. Determinants of Consumer Adoption of Biometric Technologies in Mobile Financial Applications
    Anna Iwona Piotrowska, Economics and Business Review 10, no. 1
    2024 ↩︎

  23. Banks Find AML “Ineffective”, Propose Access To Social Media
    L0la L33tz, The Rage
    23 July 2024 ↩︎

  24. Research participant
    Corporate security researcher/former financial systems lead ↩︎

  25. Vision & Mission
    Unique Identification Authority of India, Government of India
    Accessed 4 April 2025 ↩︎

  26. Understanding Aadhaar, India’s National Identification Initiative
    Vikram K Malkani, Indian Century Roundtable
    2023 ↩︎

  27. State of Aadhaar Report 2019
    Swetha Totapally, et al., Dalberg
    2019 ↩︎

  28. Unseen and Unrecognised: The Indians Excluded from Aadhaar
    Arya Raje and Ganesh Pandey, Haqdarshak
    24 August 2023 ↩︎

  29. Research participant
    Academic and civil rights activist ↩︎

  30. Dissent on Aadhaar: Big Data Meets Big Brother
    Reetika Khera, ed., Orient BlackSwan, Hyderabad, Telangana
    2019 ↩︎

  31. ibid. ↩︎

  32. ibid. ↩︎

  33. Lineaments of Biopower: The Bureaucratic and Technological Paradoxes of Aadhaar
    Keith Breckenridge, South Asia: Journal of South Asian Studies 42, no. 3
    2019 ↩︎

  34. Digitisation and Sovereignty in Humanitarian Space: Technologies, Territories and Tensions
    Aaron Martin et al., Geopolitics 28, no. 3
    2023 ↩︎

  35. Digital Identity as Platform-mediated Surveillance
    Silvia Masiero, Big Data & Society 10, no. 1
    2023 ↩︎

  36. ICE Is Grabbing Data From Schools and Abortion Clinics
    Dhruv Mehrotra, Wired
    3 April 2023 ↩︎

  37. IRS Nears Deal with Ice to Share Data of Undocumented Immigrants – Report
    Olivia Empson, The Guardian
    23 March 2025 ↩︎

  38. Here’s a Look at Musk’s Contact with Putin and Why It Matters
    David Klepper and Lisa Mascaro, AP News
    25 October 2024 ↩︎

  39. Customer Checklist for EIDAS Regulation Now Available
    Borja Larrumbide and Daniel Fuertes, Amazon AWS Security Blog
    9 May 2023 ↩︎

  40. eIDAS 2.0 Sets a Dangerous Precedent for Web Security
    Christoph Schmon, Marta Staskiewicz, and Théa Hallak, Electronic Frontier Foundation
    5 December 2022 ↩︎

  41. eIDAS Interoperability and Cross-Border Compliance Issues
    Marko Hölbl, Boštjan Kežmah and Marko Kompara, Mathematics 11, no. 2
    2023 ↩︎