Press release
Published: 08 March 2023

Comment: Why we shouldn’t worry about AI taking our jobs (yet)

Expert commentary by Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Surrey Institute for People-Centred AI

The recent media frenzy that followed OpenAI's launch of the ChatGPT[1] platform in late 2022 has rekindled the age-old debate about technology displacing human work.

The argument goes back many decades, arguably centuries (consider the 19th-century Luddites' rejection of textile machinery), resurfacing whenever a new technology is seen to threaten existing ways of working.

So, should we be worried?

Part of the reason that ChatGPT has captured so much attention is that it has allowed non-technical people, indeed the general public, to engage with a state-of-the-art AI system in a way that is intuitive and interesting. Essentially anyone can input some text – a question, a request, a statement – and ChatGPT will reply with text that makes sense (most of the time). We're used to the idea of automation in manual/blue-collar work, but displacing creatives…?

So, will this technology displace human work?

Probably not, at least in the short term. People are already finding problems with machine-authored content. There are numerous examples of quite plausible nonsense emerging from the machines: Meta's Galactica lasted less than a week before it was pulled, as the sheer believability of its output proved to be a problem. If you're running a media outlet using an AI writer, you'll need a human editor supervising its output if you want to avoid embarrassing glitches. Digital artists have already started legal action for infringement of copyright in work used to train some image generators[2].

Writers are already starting to use ChatGPT as part of their workflow, generating first drafts that are then edited by the human writer, saving valuable time. We adapt. I'm more worried about the misuses to which such technologies are being put. It took only a few days before trolls and scammers were using VALL-E to clone people's voices[3]. ChatGPT has been shown to be capable of creating malware[4].

The believability of AI-generated text has become a problem in its own right – OpenAI has just launched a new AI that seeks to detect AI-written text to help address some of these issues[5]. New filters are being added to ensure that the AIs aren't producing dangerous or unpleasant material. The fact that humans are endlessly inventive, if not always in a good way, should give us hope that we're not about to be displaced.

Not, at least, until we develop true artificial general intelligence; then all bets are off.

###

Notes to editors

[1] https://openai.com/blog/chatgpt/

[2] https://arstechnica.com/information-technology/2023/01/artists-file-class-action-lawsuit-against-ai-image-generator-companies/

[3] https://metro.co.uk/2023/01/31/ai-voice-generator-used-to-deepfake-celebrities-spewing-racist-abuse-18195060

[4] https://research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/

[5] https://platform.openai.com/ai-text-classifier

Media Contacts
External Communications and PR team
Phone: +44 (0)1483 684380 / 688914 / 684378
Email: mediarelations@surrey.ac.uk
Out of hours: +44 (0)7773 479911