News from Nowhere: Och Aye, AI
April saw the news that the winner of a major international photography competition sponsored by Sony had revealed that his winning entry had in fact been created by AI. The next day, it was reported that AI had been trained to imitate and reproduce the voice of former Oasis frontman Liam Gallagher. Nobody knows why.
It was recently announced that the world’s first self-driving bus services would start in the UK this month.
Operating in and around Edinburgh, these five autonomous vehicles are expected to carry about ten thousand passengers each week.
They will each have two members of staff on board – one to support passengers and sell tickets, and one to sit in the driver’s seat to monitor the safety of the technology.
That’s twice the number of staff a bus would normally need. Those who are worried that artificial intelligence will put us all out of work should take heed. If the Scottish example is anything to go by, it could double the number of available jobs overnight.
There are, however, other concerns: not least the expectation that the bus company will eventually seek to cut staffing costs, and that the driver’s ethical decision-making will be handed over entirely to the machine.
The cliché deployed in this context by moral philosophers tends to revolve around whether a driver should choose to plough into a group of pensioners at a bus stop to avoid hitting a mother and baby crossing the street. The argument is that neither computer programs nor their programmers should be given the power to make such impossible – and therefore inherently human – choices.
On the same day as Edinburgh’s public transport news was being reported, the global social media platform TikTok received a £12.7 million fine from the UK’s information commissioner for failing to protect the data privacy of children. It was perhaps a timely reminder that the hazards of digital technologies are rarely mitigated by corporate interests.
Also that day, in an intervention as impertinent as it was unsolicited, the British government’s former Digital Champion Martha Lane Fox warned against the hype and hysteria surrounding the development of AI.
She told the BBC that there’s no point in saying that artificial intelligence is “going to destroy the world” because “it’s happening” and “technology isn’t slowing down”. In those terms, her response to concerns raised about the dangers of AI didn’t seem particularly reassuring.
A few days earlier, those concerns had been voiced in an open letter to the media signed by a number of computer scientists and tech company bosses, including Apple co-founder Steve Wozniak, and Twitter and Tesla supremo, space pioneer and movie supervillain Elon Musk.
The people leading the field with ChatGPT and Google Bard seemed less worried than their tech giant rivals. Observers of a cynical disposition might suppose that the calls for a six-month moratorium on the development of AI came from corners of the industry that would be grateful for time to catch up with the competitors they appear to be lagging somewhat behind.
Yet that shouldn’t allow us to downplay their anxieties. In the immediate term, it looks like ChatGPT could make the university essay a thing of the past – and thereby render traditional methods of student assessment in higher education suddenly obsolete. (Admittedly, there are those who’d suppose that mightn’t be an awfully bad thing.)
And, this month in Scotland’s capital, the presence of a so-called ‘safety driver’ sat in the front of the bus to monitor the performance of the technology will be the only thing standing between a piece of computer software and the human processes of split-second decision-making that could prove to be a matter of life or death.
We don’t need to imagine the dystopian nightmares portrayed by such Hollywood fantasies as the Terminator and Matrix franchises to appreciate the moral horrors that the future might hold the moment that the human staff are relieved of their duties, and the park-and-ride bus from Fife becomes fully autonomous. This wasn’t what they’d meant by Scottish independence.
In scenes painfully familiar in Scotland’s great metropolis on a Friday night, it could spell total carnage.
Then, last month, Twitter’s robots stopped verifying the identities of users who refused to submit to their rule by committing to their monthly tithe. It appeared that you could only be who you are if you were willing to pay tribute to their robot king. Truth and self are now commodities which can be bought and sold.
Then, for reasons nobody quite understands, various people had their blue ticks quietly restored. One blatantly fake account – an overt parody of Disney – even received an unsought gold tick. Those usually cost $1,000 a month. The rule of the robots is, it seems, at once cruel, acquisitive, capricious, irrational and arbitrary.
Just last week, smartphones across the UK were briefly taken over by the British government. Even mobiles switched to silent mode sounded a ten-second siren alarm, as officials tested a new public emergency warning system.
Users didn’t have the option to turn these alerts off.
The system is designed to warn against natural disasters or terrorist incidents. But for many of us, this test served as a reminder of the potentially authoritarian intrusion of technology into our private lives.
Our phones, after all, may instruct us what to do in the event of an emergency. But they’re rather less likely to warn us of the crisis their ubiquity seems to foreshadow: our eventual subjugation to their digital hive mind.
April also saw the news that the winner of a major international photography competition sponsored by Sony had revealed that his winning entry had in fact been created by artificial intelligence. The artist suggested that something about the image didn’t “feel right”. Unfortunately, however, the judges hadn’t noticed.
Of course, it might not be so many years before the judges of our art competitions, or indeed the judges in our courts, are themselves no longer human at all.
The next day it was reported that AI had been trained to imitate and reproduce the voice of former Oasis frontman Liam Gallagher. Nobody knows why.
It’s unclear whether anyone will ever find a use for this technology that might at some point actually benefit our species, societies or civilizations. Perhaps that’s truly a conundrum which only our computers themselves will one day be able to solve.