
Five key considerations for creating successful AI strategies

This guide examines five areas to focus on to get maximum value from AI initiatives

With the amount of hype surrounding AI and generative AI over the last year, people would be forgiven for rushing to adopt the first AI solution they looked at. But AI projects involve data, and massive amounts of it. That data goes on a complex journey before it is usable by AI, and organisations need the tools and processes in place to make the journey successful. Beyond the data itself, there is much else to consider: usability, getting value, proving ROI and managing sustainability.


Fred Lherault, CTO Emerging, Pure Storage

Design for flexibility — don’t get tied down

In a fast-paced industry, technology professionals are no strangers to moving quickly. The pace is exciting, but in the world of AI the tools, models and methods in use today are different from those of even a year ago. Anticipating this, and building flexibility into planning cycles, systems and processes, is essential to getting value from any AI project.

Don't build a solution that strictly fits today's requirements: it could quickly become obsolete, and potentially fail, if it can't adapt to the constant changes in the AI space. If an investment was made two years ago, what is happening with it now? Is it still usable and driving business value? Build in as much flexibility as possible, for the unknown and unexpected, so the organisation can adapt as needed.

Incorporate flexible consumption models

This fast pace of change makes large-scale investments risky to plan. The market is moving much faster than many companies' traditional purchasing cycles, so how technology is consumed is another factor that needs to be flexible.

Whenever possible, avoid being tied down by CapEx investments, which carry more risk when it comes to AI. The resources may sit unused for months, and may need to serve different requirements as the project, use cases and ecosystem evolve.

A flexible consumption model lets organisations flex capacity up and down as required, and change requirements as projects evolve. It will also be backed by Service Level Agreements (SLAs), reassuring customers that vendors are there to support them.

Incorporate sustainability into preparation

Infrastructure managers used to care about three dimensions: capacity, performance and cost. Power efficiency is now the fourth, and it is as important as the other three in storage decision making, especially for power-hungry AI.

The power and cooling demands of AI projects are extremely high. We hear about it from every customer: they need to drive efficiency in every area they can while using AI. Data centre designs from ten years ago aren't fit for purpose in this regard. They were never designed to be used this way, and organisations need many times the kW-per-rack capacity they did years, or even months, ago. Every kW saved on storage and networking is a kW that can be used for compute resources to run AI.

Add to this rising electricity costs to power and cool data centres, as well as growing restrictions on building new ones, and organisations should be looking for the most power-efficient technology available in the smallest footprint.
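The rack-density arithmetic above can be sketched in a few lines. All of the figures below are illustrative assumptions for the sake of the calculation, not vendor data: a 500 kW AI deployment, legacy 10 kW racks versus modern 40 kW racks, and an 8 kW storage array replaced by a 3 kW one.

```python
import math

def kw_freed_for_compute(old_storage_kw: float, new_storage_kw: float) -> float:
    """Every kW saved on storage or networking is a kW reclaimed for AI compute."""
    return old_storage_kw - new_storage_kw

def racks_needed(total_it_load_kw: float, kw_per_rack: float) -> int:
    """Racks required for a given IT load at a given per-rack power density."""
    return math.ceil(total_it_load_kw / kw_per_rack)

# Illustrative: a 500 kW AI deployment in legacy 10 kW racks
# versus modern high-density 40 kW racks.
print(racks_needed(500, 10))  # 50 racks
print(racks_needed(500, 40))  # 13 racks

# Illustrative: swapping an 8 kW storage array for a 3 kW one
# frees 5 kW per rack for GPUs.
print(kw_freed_for_compute(8, 3))  # 5.0
```

The same three-line model works for any footprint comparison: plug in the per-rack density a facility can actually deliver and the gap becomes visible immediately.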

Build the right tools and platforms to improve how data scientists spend their time

Training AI is resource intensive and generally requires expensive GPUs, so organisations are well aware of the need to optimise GPU utilisation. There is, however, another resource that is just as important to optimise: data scientists' time, since they spend more time loading, cleaning and experimenting with data than they do training and scoring AI models.

To make the data preparation part of the process smooth and efficient, the AI platform team will need to build the necessary tools and platforms. Like developers, data scientists want instant access to resources, instant results and self-service so they can work quickly and efficiently. AI platform engineers will need to build an as-a-Service AI tooling platform that delivers this, further bolstering the earlier argument for a flexible consumption model.

Another consideration is where the data sits: on-premises or in the cloud. GPUs in the public cloud are extremely expensive. They can be useful for an experiment, but for intensive or long-term work the cloud is rarely the right place to run the workload.
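The rent-versus-buy judgement comes down to a simple break-even calculation. The prices below are placeholders chosen purely for illustration (a $30,000 on-prem GPU server, a $3/hour cloud GPU instance, $0.50/hour to power and cool the on-prem box), not real vendor rates:

```python
def breakeven_gpu_hours(onprem_cost: float,
                        cloud_rate_per_hour: float,
                        onprem_run_cost_per_hour: float = 0.0) -> float:
    """GPU-hours of work after which buying beats renting.

    All prices are illustrative placeholders, not real vendor rates.
    """
    return onprem_cost / (cloud_rate_per_hour - onprem_run_cost_per_hour)

# Illustrative assumptions: $30,000 server, $3.00/hr cloud rate,
# $0.50/hr on-prem power and cooling.
hours = breakeven_gpu_hours(30_000, 3.00, 0.50)
print(hours)       # 12000.0 GPU-hours
print(hours / 24)  # 500.0 days of continuous use
```

For a short experiment the cloud wins comfortably; for a GPU kept busy around the clock for well over a year, ownership pulls ahead, which is the intuition behind keeping intensive, long-term AI work on-premises.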

Build ROI models

This should go without saying, but no one should rush into an AI project without building a solid business case with metrics and ROI. Those who hurry to adopt because of the hype risk wasting their efforts. As ever, this comes back to business strategy: what is important for the business, internally and for customers, and what is physically possible with the resources available, both computing and skilled people.

Organisations need to put metrics together before they start experimenting: define the parameters and success factors, just as with any other business issue.
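Even a rough business case can be reduced to two numbers agreed up front. The figures below are invented for illustration (a £120,000 pilot expected to return £15,000 a month in savings); the point is to fix the formula before the experiment starts:

```python
import math

def simple_roi(total_benefit: float, total_cost: float) -> float:
    """Net benefit as a fraction of cost: 0.5 means a 50% return."""
    return (total_benefit - total_cost) / total_cost

def payback_months(total_cost: float, monthly_benefit: float) -> int:
    """Whole months until cumulative benefit covers the cost."""
    return math.ceil(total_cost / monthly_benefit)

# Illustrative: a £120,000 pilot expected to save £15,000 a month,
# measured over a period in which it returns £180,000.
print(simple_roi(180_000, 120_000))     # 0.5, i.e. 50% return
print(payback_months(120_000, 15_000))  # 8 months
```

Agreeing the inputs (what counts as cost, what counts as benefit, over what period) is the hard part; the arithmetic itself is trivial, which is exactly why there is no excuse for skipping it.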

Planning for an unknown future

Of course, every organisation is planning without knowing the future, and nowhere is this more true than in AI. The pace of change, the skills shortage and the complex landscape mean that what works now is likely to change, and organisations need to plan, prepare and build in flexibility to meet those changing needs. Certain requirements will never go away, including ease of use, sustainability and proving ROI. If AI managers can plan for and incorporate technology which ticks these boxes, they are well on the way to successful AI implementation.