AI is an exciting and rapidly evolving technology that has the potential to transform our world in so many ways. But let's be real, AI is not perfect. In fact, there are some limitations to AI that are worth discussing.
First of all, AI is only as good as the data it's trained on. If the training data contains biases or inaccuracies, the AI system will reflect these same biases and inaccuracies in its outputs. This can lead to unfair or incorrect results and decisions, which is definitely a problem.
Another thing about AI is that it just doesn't have that human touch. AI systems can only make decisions based on the data and algorithms provided to them, and can't account for unique or complex situations that often require a human's intuition and creativity. This can sometimes lead to suboptimal outcomes, and let's be honest, we don't want that.
And then there's the issue of transparency. Sometimes, AI algorithms can be difficult to understand or explain, which can be a problem when it comes to making important decisions based on AI outputs. This can be especially concerning when it comes to sensitive issues like employment, criminal justice, or healthcare.
Last but not least, there are some ethical concerns with AI as well. As AI systems become more advanced, they raise questions about….
blah.
blah.
blah:
There is a sheen to our world.
You know what it is, this sheen. The Sheen. It forms a glossy coat that covers everything: every stock photo, every LinkedIn post, every Dribbble illustration, every commercial, every bathetic line uttered by a hero in a blockbuster movie (“ummm, yeah… so that just happened!”).
It’s a mixture of the safe, the formulaic, the blandly humorous and the superficially insightful. It’s what is acceptable, baseline - what fits within the mold. It makes everything feel, look, and act the same.
The Sheen can offer new knowledge, but it doesn’t challenge fundamentals. It avoids upsetting established categories. It reins in the political, the bleak, the subversive and the unpredictable.
Every true crime podcast, every triple-A video game, every reality TV show is coated, no, suffused with the Sheen. Anything covered in the Sheen can evoke, but only mildly; it can elicit titillation or surprise, but not so much that it upsets a framework of prescribed rationality. What the Sheen covers remains digestible enough that a wide, safely held swath of the population will not be turned off, upset, or confused. And those who are turned off, upset, or confused are so in a way that is itself prescribed - the centrality of the hegemonic ideology is reinforced when one is offended by the banally titillating.
Think of this sheen as a hyperobject, a conceptual object that layers on the invisible matter of culture, language, aesthetics and desire, creating a semiotic package, one that announces itself so loudly we can no longer hear it: it is normal. It is generic. It is normal, generic taste, fashion, style, behaviour, and most of all: thought. It is the mixture of all of this, the distilled generic version of us and how we are, presented back to ourselves, magnified. We see the Sheen and say “yes, this is us”, and in doing so we reaffirm the Sheen, add new layers, thicken it.
As normative as the Sheen is, it is not human. It is a representation of the human. It points to what we agree that the human is, such that we can continue (‘progress’) in a way that affords a predictable trajectory, one that doesn’t upset power balances, economics, systems of epistemology and ontology, decorum or linguistic norm.
The Sheen is not human, but it is AI.
To fathom this rather ambiguously poetic point, it’s helpful to conceptualise AI in terms of how it ‘is’ in the world. But rather than approaching how ‘it is’ from a technical perspective, I’ll utilise certain philosophical concepts, which are better placed to analyse how AI interweaves with the societal and the human.
AI is trained on a massive corpus of text drawn from the internet and other sources. Let’s call this corpus ‘content’. Content is all the ‘stuff’ from which anyone or anything can express themselves. It is the source of expression - how meaning is made. I’m borrowing here from the philosophers Manuel DeLanda and Deleuze and Guattari, and from the linguist Saussure, while at the same time pulling away from their exact frameworks.
Prior to the existence of the potent ‘neural nets’ that make up the current crop of AIs, digital technology wasn’t ‘trained on’ any content. All it did was store and allow access to ‘content’. It could represent digital information in stratified ways according to basic organising rules. Consider the file system on your computer or a list of products on a website.
This content wasn’t expressed in any original ways outside of how it was stored, or stratified, using basic rules, usually involving metadata. Basic recommendation algorithms could present content in new orders based on feedback from the user, but they could not express this content differently. That is, they couldn’t remix the content (say, YouTube videos) into new content. We could say such systems were stratifying various ‘wholes’ of content rather than being able to pull apart these ‘wholes’ into ‘parts’ and rearrange the parts to form new wholes. They certainly weren’t able to somehow find the raw matter of new content itself.
As noted, however, content is used to ‘express’ meaning. We can think of content as stimulating expression. Content is the raw material, expression is the folding of content into new meaning.
So we might say that computers, prior to AI, could ‘express’ content in highly limited ways - they could only regroup ‘wholes’ into large ‘bundles’ - such as in playlists, folders or feeds. This form of expression, if it is an expression, is highly arbitrary and has little meaning outside of the preferences of the user.
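As a rough illustration (the data and field names here are invented), this is roughly what that pre-AI kind of ‘expression’ amounts to in code: whole items are filtered and regrouped into a bundle according to metadata and user preference, but nothing inside any item is ever opened up or recombined.

```python
# A minimal sketch, with hypothetical data: pre-AI 'expression' as the
# regrouping of whole items into bundles (a playlist), driven by metadata.

videos = [
    {"title": "Cooking pasta", "tags": ["food"], "likes": 120},
    {"title": "Guitar lesson", "tags": ["music"], "likes": 85},
    {"title": "Sourdough basics", "tags": ["food"], "likes": 200},
]

def build_playlist(items, tag):
    """Regroup existing wholes into a bundle; nothing inside an item changes."""
    return sorted(
        (item for item in items if tag in item["tags"]),
        key=lambda item: item["likes"],
        reverse=True,
    )

print([v["title"] for v in build_playlist(videos, "food")])
# -> ['Sourdough basics', 'Cooking pasta']
```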
However, AI is a bit different. AI is able to express content in ways that break apart these wholes. That is to say, it doesn’t just filter and regroup multiple ‘wholes’ of content but instead is able to use individual ‘parts’ (words) in new ‘wholes’ (syntax).
Summarising: AI analyses the parts of content and rearranges those parts - not just bundles of wholes - to express new wholes, rich veins of apparent meaning. Again, it accomplishes this through analysis of the parts, rather than just the particular qualities of wholes.
But. But here is the crux of the argument. The only source of ‘content’ an AI has is what is encoded in its LLM (Large Language Model) - the model formed by weighting and categorising its corpus through training. That training consists largely of self-supervised learning, in which a neural network is scored on how well it predicts the next word in human-produced text, and is then refined on human-labelled examples and feedback, with the results of each evaluation folded back into the weights of the LLM. This is the ‘analysis of parts’.
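To make that ‘weighting of statistical probabilities’ a little more concrete, here is a drastically simplified sketch - a toy word-counting model rather than a real neural network, with an invented corpus - of the principle at work: the only thing the model ‘knows’ is how often words follow other words in its training text.

```python
# A toy stand-in for LLM training: the model's only 'content' is the
# statistical regularity of words in its corpus. Real LLMs learn these
# regularities with deep neural networks rather than a lookup table, but the
# principle - weighting the probability of the next word given the previous
# ones - is the same. The corpus here is invented for illustration.
from collections import defaultdict, Counter

corpus = "the light is red the light is green the table is rough".split()

# Count how often each word follows each other word (a bigram 'model').
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    """Turn raw counts into probabilities: the 'weights' learned from the text."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("is"))
# -> {'red': 0.33..., 'green': 0.33..., 'rough': 0.33...}
```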
Humans, however, have a much richer source of ‘content’: experience prior to any sort of meaning derived from representation, categorisation or analysis.
To explore this, let’s take another step into philosophy. Specifically, semiotics.
Charles Sanders Peirce was a semiotician, among other things, and taxonomised the many aspects of signification. In other words, he explored how we make meaning from that which we experience, and broke this process down into various sub-processes and aspects. He saw the process of signification associated with our experience of everything in the world: perceivable qualities and sensory input, objects, and interpretations.
As part of this, he described the concepts of Firstness, Secondness and Thirdness. These three processes involve how we experience ‘signs’ — how we relate to ‘signs’ in the world and the things they represent - but they can easily be transmuted into the nature of experience itself.
He noted that Firstness was associated with qualities - like colour. Firstness isn’t representation - it is the pure quality of a feeling. He considered it a sort of ‘unmediated access’ - think sensation, or qualities, like red, or rough, or loud. It is unrelated to anything else except itself.
Secondness is the actual experience of Firstness in the world. Say, seeing a red light, or feeling a rough tabletop. This is the relation of a quality/sensation to something (some ‘thing’) experienced. Qualities or sensations, after all, manifest in some way in the world, outside of purely abstract conceptions of their experience.
Thirdness is the meaning of the mediated quality - the categorisation. Say, for example, the thought “I need to stop my car” upon seeing a red light. It’s how Firstness and Secondness are interpreted. This is the mediated, categorised idea of experience. Thirdness holds that a thing in the world has a meaning beyond what it is in itself.
Humans, beings in-the-world, have experiences of the first two, Firstness and Secondness, and are able to reflect on them to create Thirdness. In this Thirdness, we are able to represent the meaning of the world in a mediated way.
AI, however, is limited to this Thirdness: mediation or representation of facts and experiences of the world, without firsthand experience of the represented. Put plainly, AI doesn’t experience; it represents. This is done through the creation of its LLM and its categorisation of any input we provide to it.
We could drill down into an AI’s ‘experience’ and call its ‘Firstness’, for example, the conductivity of its semiconductor, and its Secondness the 0s and 1s of its binary logic - but that would likely be assigning far too much subjectivity to the AI. An AI has no sensory subjectivity. It’s only when some sort of learning model is applied that it emulates a kind of subjectivity of Thirdness.
Fundamentally, AI cannot add to its content by ‘experiencing’ a colour (Firstness) or a coloured thing (Secondness); it can only rely on representations of those qualities and facts via the mediation of being ‘trained’ on that data. It could, however, represent yellow as an input labelled ‘yellow’ and then be trained to recognise this quality. But this is not Firstness - it is very different from ‘experiencing’ yellow, or a yellow thing, as we do, where the experience is unmediated; it is the actuality of the experience rather than a categorisation of that experience.
Again, put simply, AI can’t just experience yellow or a yellow thing, it needs to be trained to recognise the category of yellow or the yellow thing.
Consider: an ancient human from 100,000 years ago or any animal could experience yellow or a yellow thing without linguistic categories. An AI could not - it needs them represented, categorised.
But. AI can still ‘express’. It cannot experience new ‘content’ in the way we do, but it can express. This is different from previous ways computation worked.
AI’s ‘expression’ can occur only because of how the ‘content’ it’s trained on has been coded. That is, it has been coded through variable weighting of the statistical probabilities of words in syntactical space - a process emulative of Thirdness.
So the AI has nothing else aside from the text within itself to reference. The coding is determined by the regularities within the text the AI was trained on - the way language is used in that text determines the way the AI applies language. The only content comes via the human: the corpus and the human-guided training.
AI expresses probabilistic outputs of what it has been trained on - so AI outputs are marked not by references to that which is in the world, but by differences within itself. There is a kind of ‘newness’, in that the particular organisation of the syntax may be original - the ‘whole’ - but the meaning is inherently immanent. The parts of the wholes that make up the content of an AI can certainly ‘point to’ something in the world, but what is being ‘pointed to’ cannot be experienced by an AI.
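Continuing the toy sketch from above (the probability table below is invented for illustration), generation is nothing more than sampling from weights the model holds within itself; the output may ‘point to’ red lights or rough tables, but the model consults only its own internal differences, never the world.

```python
# A toy sketch of generation: a 'new whole' is produced by recombining parts
# ('words') purely according to internal probabilities. Nothing outside the
# weights is ever consulted. The weights here are invented for illustration.
import random

weights = {
    "the": {"light": 0.6, "table": 0.4},
    "light": {"is": 1.0},
    "table": {"is": 1.0},
    "is": {"red": 0.4, "green": 0.3, "rough": 0.3},
}

def generate(start, length=4, seed=0):
    """Sample a sequence of words from the model's internal probabilities."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = weights.get(words[-1])
        if not options:
            break
        choices, probs = zip(*options.items())
        words.append(random.choices(choices, weights=probs)[0])
    return " ".join(words)

print(generate("the"))  # e.g. 'the light is green'
```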
There is no entry point for content other than us. We are the sole ingress: we create the content that AI is trained on, and we generate the conversational inputs that it runs through its LLM to generate its own output. Our experience of Firstness and Secondness, our being-in-the-world, and all the embodied thinking, gestalt reasoning, time- and space-based metaphors, and many other processes that result from being bodies in the world are the sources of this ‘content’.
To be clear, I don’t believe there is any reason an intelligence created by humans could never have the same kinds of capacities or subjectivities as a human. Arguments that claim something mystical, ineffable and exclusive about humans are essentialist and anthropocentric. There is, to me, no reason that an AI could not be largely indistinguishable from a human. But the AI we have is not it - far from it.
For now, only organisms borne of organic bodies via biological evolution can add ‘content’ to themselves, by the simple fact of them having and being bodies in the world. Humans and other animals can accord meaning to movement in space and time prior to language, indeed prior to cognitive thought. Our pre-cognitive minds, our feelings, our senses and fleshy bodies engage in ways that interweave with our societal, cultural, linguistic and environmental situatedness to create new content. This is us engaging in the world of content - the brute Firstness and Secondness of the world, from which a Thirdness emerges.
But if AI has just us as the ‘content’ from which it can express, what is the result?
Distilled us. The Sheen.
At best, what we have in AI is a fascinating information retrieval and assembly system built from the most baseline version of ourselves. AI can only reproduce what’s already latent within us, what we think we are and know.
The danger is that we fall in love with the basic idea of us, that we distill it, that the same mediocrity of the generic becomes the content that AI feeds on. If AI relies on us for content, and reaffirms its expression by virtue of our attention and approval, we will, in essence, reinforce ever more distilled pre-existing content via a feedback loop.
We have all heard the drumbeat about us being joy-seeking dopamine addicts, endlessly scrolling on our phones. But the question is far more interesting than this - it is not our general obsession with digital devices that is interesting but what content and expressions have us obsessed. If we were enlightening ourselves with constant learning and art, would we think of this ‘addiction’ the same way?
This is the question of AI and our engagement with technology. What are we serving ourselves? We need to ask whether our need for the baseline version of our desires to be distilled ever further into that which gets us what we want - often repurposed as what we need - prevents us from separating our wants from what we could want. In other words, our capacity to distinguish new ways of getting intellectual and emotional value from our media ecosystem is diminished because we are acclimated to concoctions at once incredibly generic and incredibly stimulating, such that we may not imagine why we would want anything else.
This cycle of establishment and reinforcement of predictable, safe content is why we are flooded with the formulaic, the generic: The Sheen. The Sheen is why we are flooded with billions of identical LinkedIn posts. It’s why most novels are minor variations on one another. Why video games are largely identical in terms of aesthetics, narrative and gameplay. It’s why there are thousands of crime dramas that are nearly exactly the same in form.
Yet consider: with each conventional crime drama produced, the particular plot points, aesthetics and characterisation get more precise and incisive, tickling that exact part of our brain that detects why we love these types of shows. They certainly don’t all achieve this particular effect, but the ones that don’t better inform the ones that do. This is the Darwinian demand of capitalism on art. This same process occurs in our engagement with AI.
Yet this isn’t the only way forward. Enormous amounts of generic AI ‘expressions’ could become so ubiquitous and so implicitly identifiable as to become ‘white noise’.
We don’t marvel at a stock photo. We barely see it. Might that become true of most written language we see? Or most illustrations? The question will then become: how does one transgress this white noise to burst with unique meaning?
Either way we receive AI, the answer to mitigating these pernicious effects is to rely on our experience of being as the cornerstone of our ‘content’. Art, humour and even theory tend not to stem, in the first instance, from signification or categorisation, but rather from the primacy of being - Firstness and Secondness - and from challenges and questions to the manner in which we experience these ways of being, and the meaning we gather from these experiences. This doesn’t mean just embracing these experiences - but often challenging our experience and the assumptions inherent in it.
One can look at the art of an abstract expressionist like Rothko, and experience the Firstness of colour and the primacy of being that exists within such an experience.
This is the weird, the challenging, the unsettling, the confusing, the beautiful, the transgressive, the feared and the loved. This isn’t transcendental - again, there’s no reason that an artificial consciousness couldn’t achieve such an experience. But it’s something, for now, that we need to lean into, to ensure that that which expresses but doesn’t experience doesn’t pigeonhole us into a version of ourselves that keeps us safe, predictable and, ultimately, boring. Which, if we’re not careful, is what AI will do.