By Stuart Dorman, Chief Innovation Officer at Sabio Group
AT THE mention of GPT-3, you’d be forgiven for thinking that George Lucas had invented a new droid character in his latest offering of the Star Wars franchise.
But instead of playing a leading role in Obi-Wan Kenobi, which of course is streaming now on Disney+, GPT-3 is instead set to make its mark on the world in its own way.
You see, GPT-3 – or Generative Pre-trained Transformer-3, to give it its proper name – is a relatively new type of artificial intelligence (AI) that is rapidly evolving and emerging as a candidate to become the next major general-purpose technology.
It’s known as a ‘Foundational AI’, and at a very basic level it’s already been introduced into some of our everyday tasks – think predictive texts for example.
But, with around 80% of all AI research now focused on Foundational AI, it’s set to inject itself deeply into many human endeavours – from “writing code to drug discovery”, according to The Economist.
So what exactly is GPT-3? And what is the potential of Foundational AI?
GPT-3 has been around since mid-2020, having been created by OpenAI, a San Francisco-based artificial intelligence research laboratory. It is the third-generation language model in OpenAI's GPT series, containing 175 billion parameters. It works by using deep learning to produce human-like text of such high quality that it can be difficult to determine whether it was written by a human or a machine.
At the time, it was the first Foundational AI model ever created, but it is not the only one in existence today – having since been joined by “cousins” such as LaMDA, DALL-E and ERNIE to name a few from across the globe.
Months after the development of GPT-3, Microsoft announced that it had licensed ‘exclusive’ use of GPT-3’s underlying model. But now, thanks to the successful evolution of the model and indeed Foundational AI as a concept, it has caught the attention of some other large players in the market, who are sitting up and taking notice of its potential – such as Google, who launched PaLM in April this year.
Why are we only just talking about it?
Substantial advances in hardware engineering have led to the Foundational AI model being touted for broader commercial use. In fact, some have suggested it may even be an industrial revolution in waiting in its own right – thus having the same impact as Coal in 1765, Gas in 1870, Electronics and Nuclear in 1969 and the Internet and Renewable Energy in 2000.
It has the potential for huge economic impacts – and is a clear phase change in AI capability.
What’s more, this capability is becoming available to more and more people and organisations. As computer processing power increases rapidly year-on-year and data processing costs fall (the AI itself doesn’t need as much time and energy to execute its analyses of data), it is becoming cheaper to run and therefore more accessible.
In a nutshell, by reaching more people, it increases its potential and – as you can imagine – the prize for winning that particular race is mindboggling…
Controversy is never far away…
However, as with anything that has the potential to change the world, the Foundational AI model has not been short of controversy.
Just this month (June 2022), it was brought into mainstream focus when a Google engineer – working on LaMDA, one of the GPT-3 ‘cousins’ I mentioned earlier – was put on leave for claiming the AI had become ‘sentient’.
Now, I don’t believe that we are on the verge of witnessing a bot, or any AI, becoming a sentient ‘being’ that can take in its surroundings, have feelings and be ‘alive’.
But the fact of the matter is this: we are now nearing the point where AI technology is so good that it will be impossible for a human to distinguish whether they’re having a conversation with a bot or with another living, breathing human being. Passing the Turing test just got one step – or a few steps – closer…
For us at Sabio, Foundational AI is fascinating and has huge potential to deliver value to the world of CX. Implementing this technology in a place where millions of conversations take place every day (digital messaging interactions and contact centres) feels like a logical home for this technology.
Organisations can finally put to use the gargantuan amounts of data from call recordings and chat conversations collected over many years. For example, that data could be used to train these new models to automatically generate responses to questions instead of interpreting the customer’s intent and matching a pre-defined response – which is mostly how today’s implementations of conversational AI work.
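To make that contrast concrete, here is a minimal, deliberately simplified sketch. The intent-matching half reflects how many of today's conversational AI deployments work; the generative half is a stand-in for a call to a large language model (all names, keywords and responses here are hypothetical examples, not any real product's API):

```python
import re

# --- Today's common pattern: classify the intent, return a pre-defined response ---
# (Hypothetical intents and canned responses for illustration only.)
INTENT_RESPONSES = {
    "opening_hours": "We're open 9am-5pm, Monday to Friday.",
    "reset_password": "You can reset your password from the login page.",
}

INTENT_KEYWORDS = {
    "opening_hours": {"open", "hours", "closing"},
    "reset_password": {"password", "reset", "login"},
}

def intent_match(utterance: str) -> str:
    """Pick the intent whose keyword set overlaps most with the utterance,
    then return that intent's pre-defined response."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    best = max(INTENT_KEYWORDS, key=lambda i: len(words & INTENT_KEYWORDS[i]))
    return INTENT_RESPONSES[best]

# --- The generative pattern: a large language model drafts the reply itself ---
def generative_reply(utterance: str, history: list[str]) -> str:
    """Placeholder for a call to a GPT-3-style completion model conditioned on
    the conversation history. A local stub is used here so the sketch stays
    self-contained; a real system would send the prompt to the model."""
    prompt = "\n".join(history + [f"Customer: {utterance}", "Agent:"])
    return f"[model-generated reply conditioned on {len(prompt)} chars of context]"

print(intent_match("What time are you open?"))
print(generative_reply("What time are you open?", history=[]))
```

The key difference: the first approach can only ever return an answer someone has pre-written, while the second composes a response from the conversation itself – which is why historical call recordings and chat transcripts suddenly become such valuable training material.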
This has the potential to scale AI to an entirely new level – not just surfaced through voice or chat bots, but used to improve employee experiences. It could suggest the best possible answers to customer enquiries, whilst simultaneously removing or reducing the need to type on a keyboard, driving huge gains in productivity.
I’ll be exploring this in more detail in my next blog on this subject.
In the meantime, if you’re interested in continuing the AI conversation then you can join me alongside colleagues at Google Cloud and Twilio for our ‘Disrupt on the Road’ event – ‘The Future of AI in CX’ – at Google CSG in London on Thursday, July 7th.
Learn more and register here.