A Loved Child Has Many Names
Our identity is many things to many people. But increasingly, we are also many things to many systems. Can we come to know ourselves beyond a growing industry that compels a continuous production and exteriorisation of identities?
In Scandinavia there is a proverb: a loved child has many names. This wholesome saying simply means that lively kids with a knack for charm are given many nicknames. Look a little deeper, and what is implied is that identity is made in relation: the loved child in question is establishing independent relationships with all sorts of people and therefore given names that express these special bonds, referencing the moments, details, and memories when they were formed. (For my part, I was often called “Cleo,” “Jayapapaya,” and “princess wetfoot” as a child, with the latter referring to my tendency to fall into the canals, swamp lands, and lakes just outside Copenhagen).
Our identity is many things to many people. But increasingly, we are also many things to many systems. Think not just of passports or national IDs, but also online logins, apps, bank accounts, employee cards, loyalty cards, Apple ID, World ID, DIDs, ENS, ICNS, and so on. But far from being an expression of just how loved you are in this world, the names, numbers, and addresses captured in these technological systems are of a different sort entirely. They define a you as known by apps, data brokers, advertising agencies, government agencies, and a plethora of other strangers and their algorithms.
We are in the midst of an exponential expansion of digital personas that coincides with two other major societal symptoms: conflictual identity politics and a profound loneliness and exhaustion that is engulfing so many people’s inner world. There is a growing sense that the identity that is formed through our immediate personal relationships simply bears less weight, has less power than the profiles and identities assembled via digital interactions and their capacity to vastly proliferate bits of us across global networks.
So much of social life has shifted to digital apps and infrastructures, and these have been built in a manner that enables mass data capture (surveillance) and its commercialization. This confluence of digital infrastructure and a surveillance business model has brought about an industry of “the self.” It is tempting to read this development as nefarious, but there is a banality at play here with technological affordances stacking up in such a way as to make it nearly impossible for any ordinary marketing director, app creator, or organization to NOT seek continuously more user engagement and assemble profiles and identities of as many people as possible. This has placed the production of digital selves in high demand, but with the silent consequence of a mental health crisis, minds nudged into a state of victimized adolescence: perpetual posting pumping both stock prices and egos through self-harming social validation. But burnout is now bursting this identity bubble, and the question is what sort of technical transformation is needed now to recover, grow up, and move on.
This coincidence and conundrum is the subject matter of this text. Appreciating it requires delving into the technological and politico-economic affordances that have led here, from the ways subjectivity is shaped by its technological work on the world to how technology now shapes our very identities. It is from the latter that we must now imagine and invent interventions which may have the capacity to transform the core affordances of information technologies and their current grip on and exploitation of the self.
Work-Related and Control-Related Technologies
Technology, in its most fundamental expression, simply means how things are done: technologists develop new ways of doing things that enable greater reach, ease, speed, power, pleasure, precision, or predictability. But the how matters in its own right. A letter is not the same as a text message. A text in turn differs from a Twitter or Insta DM. Each has different experiences and impacts that make them qualitatively different for not just the sender and receiver, but also for society at large. Sending a letter requires some paper, a stamp, and a postal service, while sending a DM requires a chain of lithium miners, chip manufacturers, app developers, and internet providers. Some have a strange need to assert that such technological achievements mean that humans have conquered the physical world to do their bidding. But technology is nothing less than a collaboration with the properties of the world: how we do things also shapes who we are and our relationships with one another, rearranging entire societies in the process.
Ursula Franklin’s distinction between work-related and control-related technologies provides a good contrasting pair to illustrate how technologies and societies are entangled. Work-related technologies enable people to complete a task: a keyboard, a loom, a mixing deck. They create the opportunity for new types of production, transforming the material of the world and the capacities of the worker in tandem. But as these tools become incorporated into broader production processes, control-related technologies are also increasingly inserted. Software engineer Ellen Ullman describes a time when she helped install new software in an office complex and watched her boss suddenly realize that the new networked word processing would enable him to track the keystrokes of his secretary to check how much she was working. What is at stake now is no longer the capacity of the human worker, but the control of them through technology.
Technology is nothing less than a collaboration with the properties of the world: how we do things also shapes who we are and our relationships with one another, rearranging entire societies in the process.
A trained metallurgist before she was a theorist of technology, Franklin articulates this distinction between work and control through an exploration of what she calls “prescriptive” technologies, using the historical example of ancient Chinese bronze casting. Examining “ding” vessels from 1200 BC, she realized that metals were not the only raw material being molded there: so were the very structures of society. The ding production process necessitated distinct, separate, and specialized skills, from creating the initial design, to producing the life-size cast, to the forge where the entire alloy had to be poured in one single go. Pouring a literal ton of molten alloy safely and effectively demanded a highly disciplined and hierarchical organization. With the production of these objects came a highly disciplined society, the prescriptive production processes patterning social relations that then became the prevalent norm far beyond the forge. Prescriptive technologies specialize and split people into distinct functions in an overall production process. Their primary concern shifted from merely enabling the work at hand to subjugating people, processes, and materials to the requirements of a final product.
Many centuries later, industrialization in Europe would take prescriptive, control-related technologies to the next level: in the factory, in large-scale agriculture, and in the post-Fordist office. But this is a long story for another time, one in which people took on many new names, powers, and sufferings.
Prelude to the Information Age
In modern history, identity systems have become firmly control-related: registering the identities of people in order to establish territorial control; enforcing taxation; assigning rights, permissions, credit; or signifying belonging to a group, community, or organization. This has led to stratified and rigid social roles and class identities: a factory worker, a barber, an intellectual.
But with the commercialization of personal data, something has fundamentally shifted: identity technologies are no longer simply control-related but also work-related. Everyone’s diverse identities have become the raw material that people themselves work upon: no longer the metallic or fluid material of the earth, but ourselves, with all the data that leaks into and feeds a new information economy.
What follows are some notes on how this shift has happened, unpacking the affordances of networked, algorithmic, biometric, and cryptographic technologies. These notes are vaguely but not strictly chronological, and are far from comprehensive. Instead, they should be read as a sketch, outlining a story of how “identity” and identification shifted from being control-related to become the very material that is being worked on through the affordances and political economies of digital technologies—and what to do next.
First note: Networked technologies
Every morning, I reach for my phone. It’s one of the first things that I do. I know it is not a healthy habit. And one day, I’ll kick it. But, as with every addict, there is obviously something I am getting out of it, something I am reaching for. And, like many addicts, what I am reaching for is connection. Phones offer a whole bundle of this need, packing in the power to connect with most of my meaningful relationships, with disparate locations and their images, stories, and sentiments. They are the interfaces to vast networks of people, places, and things.
What, then, are the affordances of network technologies like the smartphone that I reach for and enter into so habitually? What kinds of social relationships and identities are formed with and through them? Networks are meant for connecting and they have a compulsion to scale: a network of three provides participants little value, whereas a network of three billion constructs vast possibilities of connection, information, relationships, audiences, markets, and transactions. If Franklin’s “prescriptive technologies” effectively standardize and split people and materials into specialized roles and disparate processes, “network technologies” connect them up again. They enable both the spread of prescriptive processes across the globe at an unprecedented scale and the possibility of their undoing. The rise of globalization as a neoliberal economic paradigm, for example, went hand-in-hand with the world wide web of communication, but so did the new forms of collective action that sought to disrupt it.
The anti-globalization movement in the early 2000s made use of the internet to connect disparate activists, trade unions, environmental campaigners, and human rights organizations into a movement of millions across the world, constructing a “multitude” with a shared vision for rights and local autonomy against globalized capital. Such network capacities repeated themselves in the Arab Spring, Occupy, Indignados, and more recently the Sudanese Revolution. In Tunisia, for example, this allowed protestors to find each other and organize. Networks enabled free flows of information, rapidly creating new affinities between people beyond the prescriptive roles assigned to them—building new forms of collective identity in the process.
One of the early global activist networks of the world wide web was Indymedia, a federated collection of websites set up by programmers and media producers. The idea was as innovative as it was simple: to create digital spaces where citizens, activists, and thinkers could comment on political events happening around them to counter the dominant ideologies of mainstream media. And yet this grass-roots movement in independent media also served as an incubator for another major technological innovation whose long-term significance was, at the time, unforeseen: a handful of Indymedia engineers went on to create Twitter, a “micro-blogging” site built for instant reporting from street demonstrations.
The invention of Twitter also brought a new primitive to the internet: the status update. This new primitive effectively shifted the internet from being a Web1 knowledge repository to the real-time information networks that characterize Web2. What was not predicted at the time, however, was how status updates would end up focusing less on current events than on highly personal disclosures of the self.
Perhaps counter-intuitively for a generation that grew up hoping that the internet would topple top-down hierarchies and enable people-power from below, it turned out that networks can also do the exact opposite. Network technologies can become a means for concentrating power at an unprecedented scale. In the process of connecting people and things, networks have the power to standardize and aggressively scale protocols, interfaces, and logics. A fact about networks hidden in plain sight is that they do not just enable connections between people but render people accessible to advantaged players, producing fields ripe for extraction and commercialization. With networks like Twitter, born of street activism and alternative media, that is precisely what happened. By connecting people, networks opened up the self as a new geopolitical and commercial battleground.
Second Note: Algorithmic Technologies
Federated networks like Indymedia were eventually supplanted by platforms powered by algorithms. Social movement media became simply social media, increasingly centered on personal updates about people’s intimate lives and relationships. An industry of the self rapidly formed around this growing culture of transparency, converging towards compulsive self-disclosure: the talents of the world’s brightest engineers and designers were focused on ensuring that people keep posting, producing addictive algorithms and features to ensure a continuous pipeline of personal and behavioral data. Leah Pearlman, the co-creator of Facebook’s Like button, even got herself hooked, while Aza Raskin, the creator of infinite scroll, said with regret: “It’s as if they’re taking behavioral cocaine and just sprinkling it all over your interface and that’s the thing that keeps you like coming back and back and back.”
But what is it people are coming back and back for exactly? Content, entertainment, and attention, sure, or maybe just dopamine peaks and distraction. But also a sense of self, however new and intensified, through infinite info-feed feedback telling us we exist.
Interactive content prods people for reactions, likes, and dislikes, and then broadcasts who we are to ourselves and others. Where state identity systems serve bureaucratic requirements of taxation and the governance of bodies in a real economy, digital identities are lured into existence in a manner more reminiscent of speculative finance, a veritable “identity bubble” occurring as a result. Identity politics has become identity wars as we are nudged to disclose and define our identities in ever finer intersectional audience segments, at the behest of an industry of the self that profits from this weaponization and overproduction of identification.
Where state identity systems serve bureaucratic requirements of taxation and the governance of bodies in a real economy, digital identities are lured into existence in a manner more reminiscent of speculative finance, a veritable “identity bubble” occurring as a result.
The surveillance capitalist business model of this new industry of the self was jump-started back in 1995 by research grants from the US National Security Agency to the two researchers who would soon go on to create Google: Google would essentially provide the rails for mass surveillance, while the government relaxed privacy regulations, granting Google the raw data resources to build a business model and generating a haystack of Big Data for the NSA. As General Keith Alexander famously put it, the NSA needed that data haystack in order to find their “needle.” A new form of surveillance was born in which algorithms would surface anything that was “not hay,” thereby discovering the needle.
Today, the algorithmic production of profiles ranges from fine-grained segmentation in targeted advertising or political messaging to generating profiles of people to be killed in drone strikes or military operations. This new model of mass surveillance is less concerned with what a suspect has done than with what they might do in the future, not so much catching criminals as generating them. The “Lavender” program developed and currently used by the Israeli army, for example, processes masses of data to generate “kill targets” based on a calculated probability that they might be Hamas fighters. Initially used as an “auxiliary tool” to generate intel that would then be investigated, the Lavender system was rapidly adopted as an automatic generator of kill orders, producing lists of people with no further investigation. In the words of an Israeli officer: “Once you go automatic, target generation goes crazy.” There is a devastating banality to this evil: amid the administrative overhead of an appified life, it is nearly impossible to know whether one’s own behavioral data traces have trained the calculating machines currently generating kill lists in Gaza.
The real living individual of flesh and blood is of little interest to the algorithm. The “loved child” is now known not so much by the people around them as by the bots that scrape and analyze their clicks and likes, the child abstracted into calculable bits and analyzed in relation to billions of other data points in Big Data pools. This is a relational identity taken to the nth degree, where vast amounts of relational data are continuously analyzed to discover patterns, surface anomalies, and produce probable “needles” (that is to say, profiles and targets) on demand.
Third Note: Biometric Technologies
The recent resurgence of AI has been capturing all the headlines, but accompanying it is a comparable resurgence of biometric identification. Biometrics are a response to the machine deliriums unleashed through the sudden broad availability of machine learning algorithms. Algorithms, having learned our online habits, are now deployed by companies, criminals, and security contractors alike to automate and thereby supercharge their simulations. Bots swarm the internet, faking identities, accounts, posts, and content, while the rise of agentic AIs has caused people to speculate that it will soon become impossible to distinguish between interactions with real humans and artificial entities. In response to this flight into algorithmic fantasy, companies are turning to biometrics as a potential anchor, seeking to ground digital identities in biological certainty.
Where biometrics used to be the remit of the state, scanning faces at borders and fingerprinting prisoners, today everyone’s everyday devices conduct their own biometric border checks: from the softly spoken, seamless security features of Apple device logins to the loud, counter-dystopian marketing propaganda of World, formerly Worldcoin, which seeks to scan the eyeballs of the world in exchange for a speculative UBI token. The irony is that the very companies causing the problem in the first place are now launching and selling the solutions: OpenAI CEO Sam Altman’s founding of World is a case in point.
World is primarily a cryptocurrency and blockchain that markets itself as “the real human network,” which is to say, a financial network with a “proof of human[ness].” This promises to be a solution to the scams and mass unemployment that the rise of AI would unleash upon the world, but one which nonetheless fuses biometric data collection with crypto-opportunism. “Community” members can exchange their iris scans, proving their unique humanness, in return for a crypto token. As most of the crypto industry has realized, economic incentives “work” in the sense that many people in the world will do strange things for a speculative chance to make some gains, including scanning their eyeballs and handing over their biometrics to a company owned by a few people in a faraway country (Kenya was one of the first countries that World targeted). If the intent is indeed to solve the social problems that arise from the availability of large language models (LLMs) rather than gain market power, deeper collaboration with existing organizations such as trade unions might have been considered.
With that said, the bot problem is ubiquitous enough that it doesn’t take a looming AGI to convince people that some sort of authentication of online actions is needed. Luckily, biometrics are not the only way to anchor the present machine deliriums in a material reality. Cryptography, as a collection of primitives, enables a different sort of approach to grounding our interactions in verifiable reality.
Fourth Note: On Cryptographic Technologies
Cryptographic techniques are mathematical means for revealing and concealing, verifying, and authenticating. They are lightweight, comprising puzzles and maths, but also extremely powerful, placing them at the center of geopolitics and conflict. Recognizing this power, cryptographer Phillip Rogaway called on practitioners to recognize their role as not simply technical but as entailing a moral and political imperative: “Cryptography rearranges power: it configures who can do what, from what. This makes cryptography an inherently political tool, and it confers on the field an intrinsically moral dimension.” The technology has therefore also spurred activist culture and action, from the “crypto wars” of the 1970s, when cryptographers took on the US government to advocate for the public right to use cryptography, to the cypherpunks’ early warnings that the internet would become a means for mass surveillance.
Cryptographic techniques have affordances that can enable a radically different approach to digital identity because they can be used to focus on rights rather than on capturing identity per se. Cryptographic key pairs—where a public key is used to encrypt messages and a private key is used to decrypt them—are a fundamental primitive of many cryptographic schemas. A key simply grants access and usage rights to its holder; it does not need to know anything about a person in order to work. This emphasis on codifying and building rights rather than identity leaves the question of identity, namely who we are in our relationships to others, up to social life rather than to a technical system that otherwise can and will be exploited. In the context of the industry of the self, cryptography protects privacy and prevents the self from being coded into data and traded on open markets.
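A minimal sketch can make the rights-not-identity point concrete. The toy example below builds an RSA-style key pair from tiny primes (all numbers are illustrative; real systems use vetted libraries such as OpenSSL or libsodium and keys of 2048 bits or more). Notice that nothing in either key encodes anything about a person: holding the private key simply confers the right to decrypt.

```python
# Toy RSA-style key pair with tiny primes -- illustration only, not secure.

def make_keypair():
    p, q = 61, 53                  # two small primes (kept secret)
    n = p * q                      # modulus, part of both keys
    phi = (p - 1) * (q - 1)        # Euler's totient of n
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # private exponent: e * d == 1 (mod phi)
    return (e, n), (d, n)          # (public key, private key)

def encrypt(public_key, message):
    e, n = public_key
    return pow(message, e, n)      # anyone with the public key can encrypt

def decrypt(private_key, ciphertext):
    d, n = private_key
    return pow(ciphertext, d, n)   # only the private-key holder can decrypt

public, private = make_keypair()
ciphertext = encrypt(public, 42)
assert decrypt(private, ciphertext) == 42
```

The key pair is pure number theory: it grants a capability (decryption) to whoever holds the private exponent, with no reference to a name, face, or profile.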
Meanwhile, the “crypto wars” continue in new attempts by governments to prevent, outlaw, or backdoor the use of strong encryption by the public. “Crypto-politics” puts states and companies in a conundrum. On the one hand, they have a compulsion to know all they can about their citizens, customers, competitors, and enemies to gain geopolitical or commercial advantage. On the other hand, this very pursuit risks leaving their own systems insecure and exposing their citizens and customers to spying from competitors, enemies, fraudsters, and scammers in turn. This paradox of networked technologies has informed important data protection principles including purpose limitation and privacy by design which are now enshrined in regulations like the General Data Protection Regulation (GDPR). These principles recognize that technological capacities, once built, can be picked up and used strategically by benevolent as well as malicious actors. From the GDPR to backdoors in smartphones and the recent UK and EU attempts to undermine end-to-end encryption by scanning content on all devices, governments continue to swing between protecting digital identity and privacy and undermining them.
Zero-knowledge proof cryptography has generated major hype among the technically savvy because it supposedly solves this conundrum, providing the best of both worlds. “Zero-knowledge” refers to the fact that the technique enables someone to prove something to be true without having to reveal any evidence. For example, a person might prove that they are over 18 without needing to show their passport or reveal their actual age. But this supposed silver bullet is quickly becoming an excuse for companies and governments to continue their same business of control and value extraction while nevertheless claiming to protect privacy: World markets its use of zero-knowledge to convince people it is safe to hand over their biometrics to a private corporation seeking to create a worldwide central identity and finance database. One of the inventors of zero-knowledge proofs, Shafi Goldwasser, promotes the use of zero-knowledge proof cryptography to enable the continued training of algorithms for automated policing despite increasing regulatory protections for privacy.
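To see how proving without revealing can work at all, here is a toy version of the Schnorr identification protocol, one of the simplest zero-knowledge proofs. The statement proved here is knowledge of a secret exponent (not an age check, which real systems build from richer credential schemes); all parameters are illustrative assumptions, far too small for real security.

```python
# Toy Schnorr identification protocol -- a minimal zero-knowledge sketch.
# The prover convinces the verifier that she knows a secret x with
# y = g^x mod p, without ever revealing x. Parameters are tiny for
# illustration; real deployments use 256-bit-plus groups.
import random

p = 2267           # a small prime modulus
g = 2              # group element used as the base
x = 1234           # the prover's secret
y = pow(g, x, p)   # public value, known to the verifier

def prove_round():
    # 1. Prover commits to a fresh random nonce r.
    r = random.randrange(1, p - 1)
    commitment = pow(g, r, p)
    # 2. Verifier replies with a random challenge.
    challenge = random.randrange(0, 2)
    # 3. Prover's response mixes r and x; on its own it leaks nothing about x.
    response = (r + challenge * x) % (p - 1)
    # 4. Verifier checks: g^response == commitment * y^challenge (mod p).
    return pow(g, response, p) == (commitment * pow(y, challenge, p)) % p

# Repeating the round many times makes cheating exponentially unlikely.
assert all(prove_round() for _ in range(20))
```

Each round the verifier learns only that the check passed; the secret `x` never crosses the wire, which is the whole point of the "zero-knowledge" label.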
But the affordances of zero-knowledge cryptography are not limited to the desires of malicious actors. Despite its growing use for mechanisms of control, zero-knowledge cryptography also enables selective disclosure—potentially one of the most important design principles for recuperating the freedom of the self from an externalized economy of identity.
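Selective disclosure itself can be sketched with nothing fancier than salted hash commitments, as below. This is an illustrative simplification: production schemes (for instance BBS+ signatures or SD-JWT) add issuer signatures and unlinkability, and all names and values here are hypothetical.

```python
# Minimal selective-disclosure sketch using salted hash commitments.
# Illustrative only: real credential schemes add issuer signatures
# and cryptographic blinding on top of this basic idea.
import hashlib
import secrets

def commit(attributes):
    # Commit to each attribute with a random salt; only digests are shared.
    salted = {k: (secrets.token_hex(16), v) for k, v in attributes.items()}
    digests = {k: hashlib.sha256(f"{salt}:{v}".encode()).hexdigest()
               for k, (salt, v) in salted.items()}
    return salted, digests

def disclose(salted, key):
    # Reveal one attribute and its salt; all the others stay hidden.
    return key, salted[key]

def verify(digests, key, salt_and_value):
    salt, value = salt_and_value
    digest = hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
    return digest == digests[key]

me = {"name": "Cleo", "age": "34", "city": "Copenhagen"}
salted, digests = commit(me)          # the digests could be signed by an issuer
key, proof = disclose(salted, "age")  # choose to reveal only the age
assert verify(digests, key, proof)    # verifier learns the age, nothing else
```

The holder decides, interaction by interaction, which committed attribute to open, mirroring the contextual micro-decisions of everyday social life.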
From Loved Child to Loathed Adult—and Liberation
Writer and educator Vanessa Machado de Oliveira describes a Guarani (an Indigenous ethnic group of Brazil) coming-of-age ritual: the young person is celebrated by their community with a feast before being ceremoniously buried with only their head above ground. Over the course of four days, the person remains there, enduring not just the weather but insults, spitting, and ridicule from their community. She goes on to describe: “The buried young person was not supposed to respond. They had to accept everything in silence – but not as a form of submission, quite the opposite. The young person was supposed to find their grounding in the living land that held them for four days. In this way, their sense of intrinsic worth would not be grounded in human interactions – including the opinions of other members of their own community – but in the sense that we are held by the land itself.”
The ritual was designed to cut the psychological and emotional umbilical cord such that a person’s sense of self is no longer tied to the whims of external validation. Instead, the young person would become grounded, quite literally, in the earth and themselves. In this way, the person enters adulthood connected to a larger world beyond their ego and the judgments of others. What might such a passage from adolescence into adulthood look like living with digital technologies? Is it possible to ground our sense of selves in our own interior rather than Likes and content feeds?
There is a 600 billion-dollar data industry that benefits from the need for social validation via channels that are easily financialized: every new user and engagement metric pumps numbers into pitch decks and valuations. This industry demands that the self continuously exteriorize itself into machine-readable forms: enter your details there, tell us your preferences, set up a new account, verify your account, verify your identity, KYC, login, express yourself, show your eye, your face, your smile, your credentials, your networks, your credit score, enter your pin, verify, authenticate.
In the physical world, people move fluidly between ways of being that are formed in relation to people and contexts. In the digital realm, the integrity of our inner selves and their formation through such situated and contextual relationships is completely shattered.
This immense admin overload and sleaze of systemic stalking has only recently become the norm. Before the persistent pings of the past fifteen years or so, most people spent more time within the quiet of their own thoughts or in the company of people they knew to varying degrees, without having to fully define and identify themselves. You didn’t have to be someone; instead, you spent most of the time just being.
Rather than continuously projecting opinions, preferences, and personal details that make up your profile, the norm in the physical world is rather more selective and private. In the physical world, people quietly and subconsciously perform selective disclosure, making ongoing micro-decisions about what they reveal to whom: what is appropriate to disclose to a mother versus a lover versus an employer. In the physical world, people move fluidly between ways of being that are formed in relation to people and contexts. Revealing, concealing, and discovering different aspects of ourselves and others, our many names. In the digital realm, the integrity of our inner selves and their formation through such situated and contextual relationships is completely shattered. The majority of the time, most people have no clue who or what is analyzing their behaviour, with hundreds if not thousands of companies scraping their behavioral patterns and personal information.
The most important ethics, design, and engineering topics today are indeed anonymity and selective disclosure. Like the Guarani person buried neck deep in the ground, anonymity alleviates the need for constant external validation, instead giving space and time for our interior to make itself known to us. Selective disclosure meanwhile makes sure that we can engage meaningfully with the world without needing full identification, revealing, concealing, learning, and growing in a way that is contextually sensitive, thereby lowering the threat level for ourselves and others. It is through such learning with others that liberation from the confines of identification can happen, to ground oneself in something bigger.