Artificial Intelligence dominated the headlines last year – from natural language processing going mainstream with ChatGPT, fear of generative AI threatening livelihoods, and digital doppelgangers sparking Hollywood strikes, to world leaders trying to wrap their heads around its regulation. But what’s next in AI?
We invited national business and tech journalist Nick Huber to our annual client event at the end of January to give us his take on what’s next in AI. Here are our stand-out snippets from a jam-packed talk.
1. We are quite possibly at peak hype.
It’s impossible to accurately predict what will happen with generative AI. But looking back at the story pitches he’s received over the last year, and the global conversation on AI overall, Nick told us most of what he’s hearing is still predominantly speculation, potential and ambitions. Organisations excitedly talk about their AI plans, but there’s still very little concrete evidence of success – or of returns on AI investment.
Nick showed Gartner’s 2023 AI Hype Cycle, which has generative AI exactly where the noise suggests it is: the “peak of inflated expectations”. That means we’re potentially just 2-5 years away from the “plateau of productivity”, but also close to tumbling into the “trough of disillusionment”.
Nick reminded us that with new tech, we tend to overestimate the short-term impact and underestimate the long-term impact – which is good news for regulators.
2. The first attempts at AI regulation are emerging.
One thing the global hype (and the concerns it triggered) has done is prompt the first attempts at legislation to govern the use of AI. Nick is watching the progress of the EU’s AI Act with interest. It’s the first comprehensive piece of regulation in this space. The act defines different risk levels, which provide clues on where regulation will focus and how:
- ‘Limited risk’ – the user is told that the tech they’re using is AI and they decide whether to continue interacting with it.
- ‘High risk’ – AI systems that negatively impact safety or fundamental rights.
- ‘Unacceptable risk’ – such as biometric identification of people.
The ‘limited risk’ category covers most of the mainstream generative AI applications we’ve seen so far: help writing emails, first drafts of documents, chatbots.
3. There’s no end to the AI buzzwords.
As the hype continues for a little longer, new AI tech and variations are emerging. Nick’s generative AI trends to watch in 2024 include:
- ‘Multimodal’ AI that can handle text, images, and sound.
- ‘Agentic AI’ which can pursue complex goals with limited human intervention.
- Tech such as ‘retrieval augmented generation’ to make generative AI more reliable and less likely to ‘hallucinate’.
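For readers curious what retrieval augmented generation actually involves, here is a minimal, purely illustrative sketch in Python. A toy “retriever” ranks a few made-up stand-in documents by word overlap with the question and prepends the best match to the prompt – real systems use vector embeddings and an actual language-model call, both omitted here, and every document and function name below is hypothetical.

```python
# Toy illustration of retrieval augmented generation (RAG):
# retrieve relevant source text first, then ground the model's
# prompt in it, so answers rely less on the model's memory alone.

# A tiny stand-in knowledge base (hypothetical content).
DOCUMENTS = [
    "The EU AI Act defines risk tiers limited high and unacceptable.",
    "Retrieval augmented generation grounds model answers in source text.",
    "Agentic AI pursues complex goals with limited human intervention.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query (a real
    retriever would use embeddings and a vector index)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend the retrieved context; the combined prompt would then
    be sent to a language model."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What risk tiers does the EU AI Act define?"))
```

The point of the pattern is that the model is asked to answer from the supplied context rather than from whatever it memorised in training, which is why it tends to reduce hallucination.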
As with all change, it’s our attitude toward a new technology that will make or break us. Those who learn about and embrace AI tools today can give themselves a competitive advantage moving forward. But a healthy dose of scepticism is useful too, to separate the real developments from the hype*.
* Nick recommends Hard Fork from the New York Times for a balanced view.
Photo credit: Mark Waugh Photographer