
7 generative AI trends that will shape the GCC in 2024

Given the rapid adoption of tools like ChatGPT throughout the GCC, let’s explore the anticipated transformations and their implications for a collective Smart Future

As we end another year, let us take part in the time-honoured tradition of making predictions about the next. When it comes to AI, the breakneck pace of development in 2023, particularly in generative AI, means we should expect quite a bit of action in 2024. Given how quickly tools like ChatGPT have been adopted across the GCC, let’s take a look at what we can expect to change, and what it means for a shared Smart Future.

Here are seven trends we expect to emerge around generative AI in 2024. 

1. Cost reduction

Providers face substantial costs for large language models (LLMs), both in securing access to training data (including human feedback on LLM performance) and in the underlying compute required to train the models themselves. To remain competitive, these providers will need to keep innovating at blistering speed while offering services at rates acceptable to LLM consumers. The economic pressures at play will drive a great deal of activity towards reducing costs, both by making existing models cheaper to run and by making newer, more advanced models cheaper to train.

2. Multi-modal models

Although the name might suggest that LLMs are solely oriented towards ingesting and generating language, LLMs are going multi-modal. The remarkable capabilities of generative AI to produce images, audio and video are increasingly being combined with LLMs. Not only will you be able to interact with the LLM of your choice via natural language; you will be able to upload images alongside text, speak to the model through voice chat, and have the model generate multi-modal outputs in return. This flexibility of inputs and outputs is not only useful on its face, but opens the door to models that reason in progressively better and more human-like ways.
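To make this concrete, here is a minimal sketch of mixed text-and-image input using OpenAI’s Python SDK; the model name and image URL are illustrative placeholders, so check your provider’s current documentation before relying on the specifics.

```python
# Minimal sketch: sending text plus an image to a multi-modal model via
# OpenAI's Python SDK (v1.x). Model name and URL below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed multi-modal model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What trend does this chart show?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```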

3. Autonomous agents

In 2024, look out for more offerings in which generative AI is put to work interpreting human instructions and carrying out complex tasks by breaking them down into sub-tasks and chaining together sequences of actions and reasoning steps. These autonomous agents (e.g., AutoGPT and BabyAGI) will initially be confined to fault- and risk-tolerant use cases until they have proven themselves sufficiently robust. This creates an asymmetry, and with it a cybersecurity risk: threat actors will be more willing to adopt such untested tools, and may gain an advantage over defenders, who must be more cautious.
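At heart, the plan-and-execute loop behind agents of this kind is simple. The sketch below is schematic: `call_llm` is a placeholder for any chat-completion client, and real agents layer memory, tool use and safety checks on top.

```python
# Schematic plan-and-execute loop behind autonomous agents such as
# AutoGPT. `call_llm` is a placeholder for any chat-completion client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    # Ask the model to decompose the goal into ordered sub-tasks.
    plan = call_llm(f"Break this goal into numbered sub-tasks:\n{goal}")
    subtasks = [line for line in plan.splitlines() if line.strip()]

    results: list[str] = []
    for step, task in enumerate(subtasks[:max_steps], start=1):
        # Execute each sub-task, feeding earlier results back as context.
        context = "\n".join(results)
        results.append(
            call_llm(f"Context so far:\n{context}\n\nNow do step {step}: {task}")
        )
    return results
```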

4. ‘Thinking’ vs ‘knowing’ 

While a great deal of innovation has come from the ability to fine-tune LLMs and to train them with reinforcement learning from human feedback (RLHF), many tasks will simply require an LLM that can “reason” sufficiently well, coupled with more robust means of accessing the right information. In 2024, expect a tug-of-war between retrieval-augmented generation (RAG) and fine-tuning as organisations weigh a system’s learned capabilities (how it thinks) against the volume and quality of data it can access (what it knows). As LLM providers continue to improve baseline model performance, companies putting LLMs into production will begin to tilt towards spending more effort on intelligently extending access to data via RAG and similar methods.
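For readers unfamiliar with the pattern, here is a bare-bones sketch of RAG: retrieve the documents most relevant to a question, then hand them to the model as context. The `embed` and `call_llm` functions are placeholders for whatever embedding model and chat client you use, and a production system would swap the flat list for a vector database.

```python
# Bare-bones RAG sketch. `embed` and `call_llm` are placeholders for a
# real embedding model and chat-completion client.
import math

def embed(text: str) -> list[float]:
    raise NotImplementedError("use your embedding model of choice")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("use your chat model of choice")

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def answer(question: str, documents: list[str], k: int = 3) -> str:
    # Rank documents by similarity to the question; keep the top k.
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q_vec),
                    reverse=True)
    context = "\n---\n".join(ranked[:k])
    # The model supplies the "thinking"; the retrieved context supplies
    # the "knowing".
    return call_llm(
        f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    )
```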

Sohrob Kazerounian, Distinguished AI Researcher, Vectra AI

5. More integration options 

Right now, the region’s organisations are exploring ways to integrate generative AI, such as LLMs, into their technology stacks. AI providers will therefore spend 2024 continuing to improve their integration options, looking to provide ways to customise their offerings and connect them to customers’ core business applications. For example, throughout the year it will become easier to specify output formats for an LLM and to validate those outputs, which will be extraordinarily important for regulatory compliance. Tasks like calling arbitrary functions and plugins will become easier for developers, who will no longer need to reinvent these patterns.
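Output validation of the kind described above can already be approximated today. The sketch below uses the pydantic library to check that an LLM’s reply conforms to a schema before anything downstream consumes it; the schema fields and the `call_llm` placeholder are illustrative assumptions, not any specific vendor’s API.

```python
# Illustrative output validation with pydantic (v2). The schema and the
# `call_llm` placeholder are assumptions, not a specific vendor API.
from pydantic import BaseModel, ValidationError

class ComplianceRecord(BaseModel):
    customer_id: str
    consent_given: bool
    jurisdiction: str

def call_llm(prompt: str) -> str:
    raise NotImplementedError("use your chat model of choice")

raw = call_llm(
    "Extract customer_id, consent_given and jurisdiction from the "
    "document below. Reply with JSON only.\n\n<document text here>"
)
try:
    record = ComplianceRecord.model_validate_json(raw)  # enforce the schema
    print("Validated:", record)
except ValidationError as err:
    # Reject or retry rather than passing malformed output downstream,
    # the kind of guardrail compliance teams will expect.
    print("LLM output failed validation:", err)
```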

6. Laws and society

AI adoption has been strong in the GCC. But while governments such as those of Saudi Arabia and the United Arab Emirates have issued implementation guidance as part of their various economic Vision programmes, formal legislative frameworks have yet to emerge. We can expect this to change in the coming year as the explosive adoption of AI rubs up against strict regional rules on data privacy and residency. This will spark renewed discussion of the ethical and socio-political questions around AI. Companies looking to adopt generative AI should watch the regulatory landscape closely.

7. Keeping the ‘mis’ and ‘dis’ out of information 

Perhaps the greatest fear surrounding AI is its potential to pollute the information ecosystem. Generative AI models are widely available, and while some commercial products place ethical “wrappers” around the raw LLM, those who figure out how to game the API, or even build their own models from scratch, will have powerful tools in their hands. The potential for fake audio and video, or mass-produced fiction masquerading as news, will then become a very real threat to societies everywhere. Not only will such content sow distrust among neighbours; it may compromise the health and safety of entire communities. Governments, industry and civil society will need to work together to develop entirely new tools to combat these types of disinformation.

Years of AI

We have years of AI to come. But what we do next year — how we adopt it, train it, use it, govern it, and even how we defend against it — will set the stage for the acts to follow. It is up to us what shape the play takes, but if we are thoughtful, we can have the future we want: safe, prosperous, and sustainable.