
How to scale AI strategically

Effective implementation and growth of artificial intelligence

Sid Bhatia, Regional Vice President and General Manager for META, Dataiku, shares insights into the applications of AI in the Middle East, the challenges organisations face when deploying AI projects, and recommendations for effective scaling. He also sheds light on Dataiku’s innovative approach to generative AI with LLM Mesh and how it is empowering organisations in the rapidly evolving AI landscape.

From conversations you are having with customers, what are the use cases/applications that are best suited to AI?

We’ve noticed a pattern across the Middle East where many customers start with customer-centric use cases as far as AI is concerned. These include customer segmentation, customer 360, cross-selling and upselling opportunities, sentiment analysis, and customer retention. Another category is risk mitigation, with a focus on anomaly detection and predictive maintenance, particularly in the manufacturing and oil and gas industries. Once there is clarity on the use cases, it’s essential to standardise them on a data science and machine learning platform so they can be prioritised, implemented effectively, and deliver maximum value to the organisation.
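To make the anomaly detection use case concrete, here is a minimal, illustrative sketch using simple z-scores on hypothetical vibration readings. This is not Dataiku's implementation; production predictive-maintenance systems use far richer models, but the underlying idea of flagging readings that deviate sharply from the norm is the same.

```python
# Minimal anomaly detection sketch via z-scores (hypothetical sensor data).
from statistics import mean, stdev

def detect_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [x for x in readings if abs(x - mu) > threshold * sigma]

# One faulty reading stands out against otherwise stable vibration levels.
vibration = [0.9, 1.0, 1.1, 0.95, 1.05, 1.0, 9.8]
print(detect_anomalies(vibration))  # [9.8]
```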

What are some of the biggest challenges organisations face when deploying AI projects?

There are several challenges to consider. First, there’s a talent gap: organisations often lack data scientists and need to upskill their existing workforce, drawing in business analysts, Python champions, and others.
Secondly, there’s a need for speed: delivering analytics to the organisation faster requires a pre-integrated framework. This framework should empower users to connect to diverse data sources, efficiently transform and structure the data, develop prototypes, operationalise them, and monitor models, all within a single interface.
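The "single interface" idea above can be sketched as a pipeline object that chains the connect, transform, and scoring stages behind one API. The class and step names here are purely illustrative, not Dataiku's actual API.

```python
# Toy sketch of a pre-integrated pipeline: one object chains
# connect -> transform -> score, so users work in a single interface.
class Pipeline:
    def __init__(self):
        self.steps = []

    def add_step(self, name, fn):
        self.steps.append((name, fn))
        return self  # allow fluent chaining

    def run(self, data):
        for name, fn in self.steps:
            data = fn(data)
        return data

pipe = (Pipeline()
        .add_step("connect", lambda _: [3, 1, 2])    # stand-in for a data source
        .add_step("transform", sorted)               # clean/structure the data
        .add_step("score", lambda rows: sum(rows)))  # stand-in for a model
print(pipe.run(None))  # 6
```

A real platform would add monitoring and deployment stages behind the same interface; the point is that each stage is swappable without changing the calling code.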

Building trust with the business teams is also crucial for AI project success. Other challenges include data quality, infrastructure, integration, ethical concerns, justifying the business case, and ensuring compliance.

What recommendations do you have for organisations looking to scale AI effectively?

To kickstart the journey effectively, I recommend beginning with a thorough key stakeholder mapping. Identify those C-level executives who are not only aware of AI but can also provide the necessary sponsorship. Gaining buy-in from higher-ups within the organisation is pivotal.

Next, delve into the core value drivers for the organisation. These drivers might revolve around boosting revenue, cost reduction, process optimisation, new product or service launches, customer retention, or enhancing satisfaction. Once the groundwork is established, narrow it down to the top three most promising use cases for the organisation. Among these top three, prioritise a use case with the potential to create an immediate impact. Concurrently, assess the organisation’s internal readiness for AI implementation. This may involve investing in training and certifications, and fostering awareness of AI.

With this foundation in place, initiate the first pilot project. Don’t be deterred by the possibility of occasional setbacks; if a particular AI project encounters challenges, use these experiences to iterate and refine the approach.

What are the applications of generative AI, and how is Dataiku enabling organisations?

Our approach to generative AI at Dataiku is truly distinctive. Recently, during the Everyday AI conference in New York, we unveiled our innovation known as LLM Mesh. This stands for Large Language Model Mesh and is our response to the broad spectrum of generative AI applications.

It’s of paramount importance for any organisation to establish robust governance in this domain. What sets LLM Mesh apart is its role as the backbone for crafting various generative AI applications, and it offers several valuable capabilities. First, it empowers organisations to make an informed choice: while numerous vendors provide API services, it’s crucial to align AI initiatives with the most relevant use cases. LLM Mesh facilitates this by decoupling applications from the services layer. This decoupling enables the exploration of multiple models and the selection of the one that best aligns with performance, cost, and security requirements before deployment.
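The decoupling pattern can be illustrated with a small gateway sketch: the application talks to one interface, and the backing model can be swapped on cost, performance, or security grounds without any application changes. This is a hypothetical illustration of the pattern, not Dataiku's LLM Mesh API.

```python
# Illustrative sketch of decoupling an application from the LLM provider:
# the app calls one interface; the backing model is chosen separately.
class LLMGateway:
    def __init__(self):
        self.providers = {}
        self.active = None

    def register(self, name, complete_fn):
        self.providers[name] = complete_fn

    def select(self, name):
        self.active = name  # e.g. chosen after benchmarking cost and quality

    def complete(self, prompt):
        return self.providers[self.active](prompt)

gateway = LLMGateway()
gateway.register("vendor_a", lambda p: f"[A] {p}")  # stand-ins for real API calls
gateway.register("vendor_b", lambda p: f"[B] {p}")
gateway.select("vendor_b")                          # swap models without app changes
print(gateway.complete("Summarise the report"))     # [B] Summarise the report
```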

Generative AI can be a significant financial undertaking, given the potential for numerous repeated prompts. Thus, it’s imperative to comprehend the cost implications and implement strategies to control expenses. Without cost containment, expenditures in this area can quickly spiral.
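One simple cost-containment strategy for repeated prompts is caching: an identical prompt should not trigger a second paid API call. The sketch below is illustrative only, with a made-up per-call price and a stand-in for the real API.

```python
# Hedged sketch: caching repeated prompts to contain spend (prices are made up).
class CostAwareClient:
    def __init__(self, call_fn, price_per_call=0.002):
        self.call_fn = call_fn
        self.price_per_call = price_per_call
        self.cache = {}
        self.spend = 0.0

    def complete(self, prompt):
        if prompt in self.cache:            # repeated prompt: no new API cost
            return self.cache[prompt]
        self.spend += self.price_per_call   # only novel prompts incur spend
        result = self.call_fn(prompt)
        self.cache[prompt] = result
        return result

client = CostAwareClient(lambda p: p.upper())  # stand-in for a paid API
for _ in range(100):
    client.complete("same question")           # 100 identical prompts, 1 paid call
print(round(client.spend, 4))  # 0.002
```

Real deployments layer on per-team quotas and token-level accounting, but even this simple cache shows why spend visibility must sit between the application and the provider.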

LLM Mesh addresses this by providing cost transparency and by keeping organisations in compliance at all times. This is particularly critical when repeated prompts could touch upon corporate intellectual property (IP) or customer personally identifiable information (PII). Any data leakage in this context could lead to reputational damage, revenue loss, and regulatory issues.
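A basic form of that safeguard is screening prompts for PII before they leave the organisation. The sketch below redacts a couple of common patterns; the patterns and labels are illustrative only, and real compliance tooling is far more thorough than this.

```python
# Minimal sketch of redacting PII from prompts before sending them to an LLM.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # crude card-number pattern
}

def redact(prompt):
    """Replace detected PII spans with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label.upper()}>", prompt)
    return prompt

print(redact("Contact jane.doe@example.com about card 4111 1111 1111 1111"))
# Contact <EMAIL> about card <CARD>
```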