
ChatGPT - Short-term friend, long-term foe?

18 April, 2023

It is only a matter of months since ChatGPT came on the scene, and it’s difficult to read a newspaper or journal at the moment without it being mentioned. The excitement is justified, and the benefits are huge, but what are the longer-term risks?

Lee Faulkner
 



When the Euroview editor asked me if I was interested in writing about ChatGPT, my first reaction was “I can’t - I know nothing about it other than what I read in the press”. But as a firm believer that technology should be for the ordinary person, rather than an excuse for nerds to self-congratulate and win awards from each other, I came to the conclusion that it might actually be better to have an “IT lay person”, like me, write about it rather than an “expert” (whoever they might be). So I said “yes”.

 

But before I start…

I need to set out the scope of my thinking: ChatGPT and Artificial Intelligence (AI) in general have a huge potential for increased efficiency, speed and the removal of the mundane from life - those benefits are not really in doubt. Nor is it in doubt that there are myriad ethical and political issues surrounding AI, and the potential it has for replacing humans in a lot of what we do today. Those issues are vital and fascinating but they’re not my main focus here - what concerns me most is the possibility of ChatGPT changing the way we think and communicate. Humans have a unique ability to listen to arguments, judge them and then make decisions - the key for me is whether that uniqueness is under threat.

 

What is ChatGPT?

Strictly speaking, it is the "GPT" in ChatGPT that stands for "Generative Pre-trained Transformer". When I try to unravel what that means in real English I think (but I don't know for sure) that the "Generative" is because it generates answers to questions, the "Pre-trained" is because it has learned from vast amounts of text that others have produced before, and the "Transformer" because that is the name of the underlying neural-network design, which rejigs what it's been trained on to give you the answer it thinks you want.

 

It was developed by an outfit called OpenAI, and their latest version is the very-recently-launched GPT-4. Their website calls ChatGPT an “AI-powered language model”; it has “been trained on a massive amount of text data from the internet” and “can generate human-like responses to a given prompt”. It claims it can “answer questions, converse on a variety of topics, and generate creative writing pieces”.
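
For readers who like to peek behind the curtain, this is roughly what "a given prompt" looks like when a developer asks the question through OpenAI's programming interface rather than the chat window. It is only a minimal sketch, using OpenAI's Python library as it looked in early 2023 - the API key and the question are placeholders of my own invention:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - you need your own key

# Send one "prompt" to the model and print its "human-like response"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain what an actuary does in two sentences."}],
)
print(response.choices[0].message.content)
```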

 

Google has its own version called Bard, which is still experimental - you can join a waiting list to try it out if you wish. Microsoft is an investor in OpenAI, and their collaboration enables both entities to independently commercialise the resulting technologies; Microsoft 365 Copilot will probably make the biggest headlines, as it offers to take a lot of the grunt work out of the office tasks many of us do.

 

Why might ChatGPT be useful?

Much of the excitement about ChatGPT has, of course, been generated by the novelty of the things it can do. The main benefit is to give people, whether they be business people, students, writers or artists, the tools to do routine, boring work very quickly indeed so that they have more time to do what humans do best - think, design, create, beautify.

 

If you ask ChatGPT what it’s good at, it claims it can answer questions, generate text, complete sentences, write stories, translate, debug code… I could go on. If ChatGPT were a person you would probably want to slap them for their arrogance but, alas, it doesn’t have any feelings you can hurt. It says it can “learn and adapt to new information, allowing it to improve its performance over time”, a bit like an open-minded and educated human perhaps? Seemingly the more we use it the better and faster it will become, the reverse of what usually happens with machines and humans - the more we use them the more clapped out or burned out they become.

 

I can definitely see the benefit of a tool that can write tedious reports and PowerPoint presentations and, as the recipient of such tedium, I can equally see the benefit of it summarising those reports and presentations so you don't have to read them in full. What it can't do is tell you what's NOT in a report, which is often the most enlightening part of all - can it spot lying by omission?

 

Why might it be problematic or even dangerous?

I asked ChatGPT what it thought its own dangers were; it replied:

 

  • Misinformation
  • Bias
  • Malicious use in the wrong hands
  • Privacy (invasion and increased hackability thereof)
  • Dependency (leading to loss of critical thinking skills, and the fomentation of social isolation)

 

The other immediately obvious risks are cyberbullying (since it allows users to communicate anonymously), phishing from scammers, and providing a platform for sexual predators to lure minors and groom them for exploitation. These shouldn’t be beyond the wit of ChatGPT’s creators to deal with though.

 

Certain tasks, and hitherto normal functions, are also at risk: the opportunities for plagiarism and cheating in exams are greatly increased and might lead to the return of "pen and paper" exams. Plagiarism will be hard to spot because ChatGPT samples its words with an element of randomness - give it the same input more than once and you will usually get a differently worded answer each time.
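
That variability is not magic - it is a dial. In OpenAI's programming interface the randomness goes by the name "temperature"; turn it down to zero and the answers become close to repeatable. A minimal sketch, again using the Python library as it looked in early 2023, with a made-up prompt:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = [{"role": "user", "content": "Write one sentence about Taipei."}]

# With the default temperature, two identical requests usually come
# back worded differently - the plagiarist's friend.
for _ in range(2):
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=prompt)
    print(reply.choices[0].message.content)

# temperature=0 makes the output close to deterministic, which is
# what an examiner hunting for copied work would prefer.
fixed = openai.ChatCompletion.create(
    model="gpt-3.5-turbo", messages=prompt, temperature=0
)
print(fixed.choices[0].message.content)
```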

 

Defamation, always a potential risk, has already become a real one: a mayor in Australia, Brian Hood, has threatened to sue OpenAI over ChatGPT's false claims that he had served prison time for involvement in a bribery scandal (in fact he was the person who notified the authorities about the bribes). I sympathise with him - how is anyone going to prove that what they wrote came from their own pen rather than from ChatGPT? I can imagine a situation where "it was the computer wot dunnit" becomes an almost-irrefutable defence in a libel suit.

 

Is ChatGPT intelligent?

As ChatGPT has been "trained" on a massive amount of data that already exists, the responses or opinions it gives are, effectively, only recycled or reformatted responses that others have already written or opinions that have already been opined. ChatGPT will, presumably, be regularly updated with new thought, new writing and new opinions (at the moment its training data stops in 2021, so it knows nothing published after that), but it will still always be searching backwards. Perhaps the repackaging of what other people have already thought and written can, in and of itself, be thought-generating and original - an automated "seeing things from a different angle or perspective" perhaps? But can it ever generate new thinking or new opinions? ChatGPT can only ever pull from a finite resource set - what has happened before - but the scope of the human brain is infinite. ChatGPT can generate an almost-infinite set of responses from a finite set of source materials, but that's not the same as having an infinite power of thought.

 

ChatGPT will reproduce the biases and prejudices inherent in what it collects - so it will stick with the collective zeitgeist from time zero till today, and presumably average it out in some way, but it can’t create a new zeitgeist.

 

When comparing humans with ChatGPT there are parallels in what actuaries do - we look back at what happened before, use our judgement about whether, or how much, the factors that caused that experience are still relevant, and then make assumptions about the future. AI will do the first bit but it can't do the second. Its processing of past experience is similar to ours - we, too, have certainty only about what's happened before - but judgement is not something that can be developed purely from the past: you need feeling and imagination too, and machines have neither.

 

Even if you wanted to credit ChatGPT with real intelligence, it (currently) wouldn't actually be that bright - in studies where students sat tests using ChatGPT, it achieved only medium-level grades. Mediocre content, test results and performance are all going to be replicated and fomented by ChatGPT.

 

“Prompt engineering”

There is a growing literature on the ins and outs of making the best use of these tools, and in particular on how to structure inputs to get the most interesting and useful outcomes. This is known as "prompt engineering", and there are suggestions on the OpenAI website about which inputs are most likely to generate useful outputs.
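
To make the idea concrete, here is what the literature means by an "engineered" prompt. Both the vague and the structured versions below are my own invented examples, sent through the same Python library as before - a sketch, not a recipe:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A vague prompt - the model has to guess what you want.
vague = "Tell me about pensions."

# An "engineered" prompt - role, audience, length and format all spelled out.
engineered = (
    "You are an actuary writing for a general business audience. "
    "In no more than 150 words, explain the difference between "
    "defined-benefit and defined-contribution pensions as a bulleted list."
)

for prompt in (vague, engineered):
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content, "\n---")
```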

 

Some might say that this is just a question of learning a new skill, a bit like learning a new programming language, and that it won’t have any impact on our behaviour, but I beg to differ. For me, this is potentially the most alarming aspect of ChatGPT - we will have to structure the way we think to mirror the way that machines “think” (i.e. collect, process, prioritise) - that could stultify our communication very quickly. We have some evidence of this today from the widespread use of “business-speak” - people never used to ask “where’s the low hanging fruit?” or “are we all singing from the same hymn sheet?” but they do now because the business world has conditioned us to mimic and respond to the garbage that others spew out every day. I loathe the proliferation of business-speak with a passion - it anaesthetises proper communication and kills original thought, squeezing us all into dangerous corporate groupthink - the same will happen to everyday dialogue if we start communicating in ways that maximise the quality of a chatbot’s responses. Business-speak tries to make things sound complicated to impress others, but it is ultimately lazy, and so is prompt engineering.

 

There is also the risk of "hallucination", where ChatGPT simply makes things up - perhaps because the question isn't specific enough, so it goes with what it knows, since it can't know what it doesn't know. The all-too-common parallel is a politician answering a question you didn't ask because they can't cope with, don't know, or don't want to answer the one you did ask.

 

So, friend or foe?

ChatGPT absolutely isn't a fad - the world of AI is here to stay in all its manifestations. It is already being used extensively by many in their daily lives and studies, and we might all be forced into using it, if nothing else from our collective FOMO (Fear Of Missing Out). The playfulness we might feel when we first use it will probably wear off, giving way to increasing debate about its usefulness and limitations.

 

The problem with ChatGPT, unlike other tech products, is that we can’t blame the developers for what happens next - the zero entry barrier to its use puts the onus on all of us to decide what we want to use it for and what AI future we aspire to having. The developers have just put it out there, and we’re all free to find ways to utilise it and explore its boundaries or to drown in it.

 

Our politicians and thought leaders need to educate themselves quickly and become AI-literate, otherwise the risks will grow faster than the benefits.

 

My main concerns are longer term: what will it do to original thought and human communication in all its forms? There are precedents that back up my worries - there are people now who cannot tell the time from a clock face because they've been brought up entirely on a digital diet, and navigating by the sun or by landmarks has been almost completely replaced by Google Maps.

 

Artificial Intelligence can be useful, but not if it kills the innate human version.

 

Lee Faulkner is a Fellow of the Institute and Faculty of Actuaries, the UK’s actuarial body, and has more than 30 years’ experience in the world of financial services in Asia, Europe and Latin America. He is a Taiwan Gold Card holder and now lives in Taipei.
