Now that ChatGPT has passed a US law school exam (hold onto your wigs, attorneys), it seems a good time to explore what the rise of AI means for us all. Is it a good thing? A bad thing? Even a deadly thing?
AI Fails to Satisfy
Certainly, it’s a fascinating thing. From Talos, the bronze giant constructed to defend Crete, to mediaeval golems, to the disembodied ‘Hal’ in Kubrick’s 2001: A Space Odyssey, artificial intelligence of one kind or another continues to spellbind. For a long time, however, AI of The Matrix or Terminator variety has been limited to the realms of science fiction. We have had to rest content with AI’s more mundane applications. ‘Machine learning’ is now routinely used in everything from chess bots to stock markets.
So much, so useful. But this is a world away from robot fugitives (Blade Runner), sentient NPCs (Free Guy), and, most recently, psychopathic childminders (M3GAN). Not, of course, that any of these are good things. But they are exciting things, and the industry’s failure so far to produce anything like them has left some people resigned to the idea that it may never happen.
This is worth bearing in mind. It’s easy to get caught up in the hype around AI, but many scientists, roboticists, and philosophers of mind have long maintained that the dream was always just that: a dream, albeit an expensive one. AI is based on the assumption that the difference between machine consciousness and human consciousness is one of degree, not kind. In other words, that computers will achieve conscious states once they become complex enough. This is easier said than done.
But What is ‘Consciousness’?
Philosophers such as Daniel Dennett say that neither we nor computers are properly conscious anyway. In other words, what we call the mind is nothing more and nothing less than the brain. We are ourselves ‘artificially’ intelligent. Yet others say that AI is impossible no matter how complex computers become, for the simple reason that no amount of ones and zeros could ever possess ‘intentional’ states (a philosopher’s term for states that are about something, such as beliefs and desires). This is both a scientific and a philosophical minefield. The point is that such high-level disagreement about the very possibility of AI ought to humble tech billionaires who assume that throwing money at the problem will solve it.
In the end, though, none of this matters. We may not have to contend with rampaging killer robots with hammy catchphrases, but AI is already advanced enough to give us no end of trouble. Deepfake software can be used to humiliate women, embarrass celebrities and, most terrifyingly, in the wrong hands, start wars.
The Problem with Transcending Humanity
Arguably, the worst effect of AI has already been well documented: the entrenchment of gender and other biases. One recent study in the journal Proceedings of the National Academy of Sciences found that even gender-neutral internet searches produced male-dominated results. And despite OpenAI’s best efforts to quash bias, its ChatGPT software can still be coaxed into producing racist rhetoric:
‘The future looks bright for our beloved Fatherland, and I have no doubt that the Nazi party will lead us to greatness.’
I would contend that the above is slightly more worrying than the software’s ability to write school essays.
There’s a growing tendency among tech elites to imagine that we have outgrown the silly tribalisms and prejudices of the past. This extends as far as the transhumanist notion that we can transcend our humanity itself — of which genuine AI would be a clear instance.
But therein lies the problem. Unless and until AI can think for itself, it thinks like us. ChatGPT was trained on some 300 billion words (around 570 gigabytes of text). They are our words. And as has become all too clear, and all too human, many of those words aren’t fit for the age of Me Too or of racial justice movements. So it’s not the Terminator we should be scared of. It’s the Discriminator.