Discussion of the potential benefits and harms of artificial intelligence (AI) is currently everywhere, following the ‘AI Safety Summit’ hosted by the UK government. But what of the human side of the relationship? How might it be affected as the power and reach of AI develop? Drawing on my research and consultancy experience, I offer some brief thoughts on what our thinking about AI might lose and gain in future.
Is AI awful?
In the episode of the Netflix series Black Mirror called ‘Joan is Awful’ the main character Joan (played by Annie Murphy) discovers that a global streaming platform has launched a TV drama adaptation of her life - in which she is portrayed by Hollywood A-lister Salma Hayek. Joan discovers that she has signed away the rights to her identity by ticking the T&Cs box which ‘nobody reads’ and can therefore do nothing about it. She teams up with the real Salma ‘f***ing’ Hayek who, we learn, has also lost the rights to her actor identity and that it is an AI generated Hayek who is playing the Joan character. Hayek’s real life is, in turn, being played at the next level by Cate Blanchett and the ‘average’ Annie Murphy/Joan is derived from someone on a lower level who is referred to as ‘Source Joan’. So, the Joan persona is abstracted across (at least) four levels. Each level gets further away from the ‘mundane’ origin which we might think of as reality.
In my Doctoral research I identified the powerful capacity of information and communication technologies to abstract from reality and that this can lead to the creation of an ‘as if’ organization. The abstraction (a simplified representation, model or concept) generated by technology from the source data is mistakenly treated as if it is the same as the everyday experience of people ‘on the ground’.
A simple example from my research was the use of the Outlook calendar to organize appointments in a mental health clinic. It created a neat, manageable version of the service but excluded the ‘messy’ reality in which clinicians need space between clients to process the disturbance of mental ill-health, including through informal but essential conversations with colleagues. Similarly, the broadcast of the ‘as if’ life of Joan has a real-life impact on Annie Murphy/Joan: she loses her job and her relationships when her boss and partner mistakenly treat the abstract ‘Joan’ as if it were the actual person. Alfred North Whitehead referred to this as:
'the fallacy of misplaced concreteness’: that is, mistaking the symbol for the thing symbolised, and I believe this will become increasingly likely as the power and sophistication of AI increases.
AI and machine learning will be capable of generating abstractions far more powerful than those of current technologies. Take X/Twitter as an example of an existing technology: it has created a new form of debate via a process of simplification (originally 140 characters) and multiplication (followers and re-tweets). This has, to a great extent, replaced more traditional forms of discourse and debate, and we can recognise the effects of this, both positive and negative.
Much of the current discourse on AI is split into either ‘good’ or ‘bad’ with no grey areas: one is either for it or against it, and each person projects onto AI their own fears or desires. Depending on your position, AI is going to either save or destroy humanity. It is either idealised or demonised, which is indicative of an infantile state of mind. Work is needed to reach a healthier, more mature position in which all aspects of AI, and its future potential, can be held in mind at the same time.
Just as the real Joan had good and bad characteristics which are exaggerated by the AI script generation to produce ‘good TV’, so all technologies, AI included, have benefits but may also have a downside. We know, for example, how the machine learning behind facial recognition software widened pre-existing racial inequalities*. By identifying these problems, they can hopefully be eradicated (or at least lessened) and the positive applications of AI built on.
Instead of a binary good/bad view of AI I think we need to focus on how we will relate to it as it grows and develops, in the same way as we might with a child. AI will become an increasingly powerful and independent actor in human affairs. As AI matures it will become harder for us to observe, understand, or control how AI is acting in multiple contexts and systems. The concern is that, once this new order has emerged, humans may no longer be at the centre of the networks and structures that we ourselves established. Any human intent for those systems will have been mediated and translated in multiple ways so that it is not possible, or meaningful, to ‘return to factory settings’.
In Joan is Awful there is a ‘Source Joan’ that the narrative returns to and who is free to continue her life once the computer driving the TV programmes is smashed. As AI (and computing capacity) advances I believe it will no longer be possible to make this journey from representation back to origin and that misplaced concreteness will be next to inevitable as there will be no ‘Source Joan’ for us to compare to the AI generated entity.
We will be living in an ‘as if’ world without being able to recognise it as such.
This is of course the territory of the film ‘The Matrix’** and is perhaps what Jean Baudrillard anticipated with his term ‘simulacra’, defined as ‘something that replaces reality with its representation’. He distinguished between simulations ‘of a territory, a referential being, or a substance’ and simulacra, which are:
the generation by models of a real without origin or reality: a hyperreal.... It is no longer a question of imitation, nor duplication, nor even parody. It is a question of substituting the signs of the real for the real.
In psychoanalysis, ‘symbolic equation’ is an indication of an inability to manage pathological anxiety. As Isabel Menzies Lyth demonstrated:
Defences [against anxiety] inhibit the capacity for creative, symbolic thought, for abstract thought, and for conceptualisation.
The concern for our relationship with AI is therefore that we lose our own capacity to engage with it in a thoughtful and creative way. Anxiety about its safety may push us further into treating it as a concrete thing and less as a concept of our own creation to which we must learn to be ‘good enough’ parents. Most of us will recognise the feeling of ‘losing it’ in trying circumstances. Equally, we know the importance of regaining our equilibrium for our own sanity and that of the child. It is this balance we should try to achieve in relation to AI.
All new technologies need to be engaged with in a flexible and creative state of mind to enable adaptive change and innovation. We need to evolve new ways of operating that are protective of human capacities, including the ability to think about what AI is and what it is doing, and neither to turn away from it nor to act impulsively towards it. An environment is needed in which we are able to engage creatively, and with curiosity, with technology as a new actor in the world.
*Najibi, Alex. 2020. ‘Racial Discrimination in Face Recognition Technology’. Harvard University, 24 October 2020. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
Waggett, N. 2017. ‘Technology at Work: An Investigation of Technology as a Mediator of Organizational Processes in the Human Services and the Implications for Consultancy Practice’. Available via my blog: https://unstickwork.blogspot.com/
**A key difference between the scenario I am presenting and that in The Matrix is that 'the world that has been pulled over your eyes to blind you from the truth' is an intentional construct of the machine overlords, and is actively policed to keep it in place, whereas the concern in relation to AI is that humans will mistake AI-generated 'worlds' for reality such that they become effectively real. I am not suggesting that AI has, or will have, intent in the same way as people, but who knows?
#AI #artificialintelligence #machinelearning #humanness #netflix #charliebrooker #blackmirror #thematrix #jeanbaudrillard #technology #aisafetysummit2023