Hyperstack - Thought Leadership

THE UNTOLD COST OF GENERATIVE AI: HOW TO OVERCOME HIDDEN COSTS AND CHALLENGES

Written by Emelia Beeson | Oct 5, 2023 1:16:46 PM

Generative AI is revolutionising every industry, from healthcare to entertainment, yet we still don’t fully understand its true potential – or the challenges that come with it. The technology brings its own set of constraints, and as businesses increasingly adopt AI technologies, understanding the full scope of what’s involved is crucial.

From the initial investment to ongoing operational costs, businesses need to understand how to allocate and budget compute resources effectively if they are to implement generative AI successfully.

Generative AI: A Brief Overview

In layman’s terms, generative AI is exactly what it sounds like: AI that generates content, such as Midjourney images, ChatGPT responses, and AI-generated voices and videos. The term refers to machine learning models capable of creating new data based on existing data sets. These models, such as GANs (Generative Adversarial Networks) and LSTMs (Long Short-Term Memory networks), require substantial computational power and data storage.

This is where cloud computing, especially GPU cloud computing, becomes an essential component of their operation, offering the scalability and flexibility needed for these complex tasks.

However, the computational and storage requirements are often underestimated. To give you an idea of the compute overheads involved in AI development, Statista estimates that over 70% of global corporate investment in AI goes on infrastructure – a global spend of roughly $64.4 billion on computing in 2022 alone.

AI Resource Demands

The computational demands of generative AI are enormous. Deploying ChatGPT into every search conducted on Google, for instance, has been estimated to require 512,821 A100 HGX servers – at eight GPUs per server, that’s 4,102,568 A100 GPUs.

Generative AI training requires high-grade central processing units (CPUs) and graphics processing units (GPUs). GPUs drastically speed up the training process because they handle many operations in parallel, significantly reducing the time and energy required for data processing. For example, inference alone can be up to 237 times faster on the A100 GPU than on traditional CPUs – and the A100 is a whole generation behind the H100.
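To make that parallelism concrete, here is a minimal sketch that times the same large matrix multiplication on CPU and GPU. It assumes PyTorch is installed and a CUDA-capable GPU is available; the speed-up you see will depend entirely on your hardware and workload, so treat it as an illustration rather than a benchmark.

```python
# Rough sketch: timing the same matrix multiplication on CPU and GPU.
# Assumes PyTorch and a CUDA-capable GPU; results vary with hardware and size.
import time
import torch

def time_matmul(device: str, size: int = 4096, repeats: int = 10) -> float:
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # ensure setup has finished before timing
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work to complete
    return (time.perf_counter() - start) / repeats

cpu_time = time_matmul("cpu")
if torch.cuda.is_available():
    gpu_time = time_matmul("cuda")
    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s  speed-up: {cpu_time / gpu_time:.0f}x")
else:
    print(f"CPU: {cpu_time:.3f}s (no GPU available for comparison)")
```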

Raw speed lets a machine execute tasks in less time, but it's only part of the equation – efficiency is also a key consideration when allocating compute resources. An efficient system uses less energy and fewer resources to perform the same tasks, thereby reducing operational expenses in AI.

For example, Hyperstack’s networking and storage are fully optimised for GPU Cloud workloads, and its servers employ sustainable cooling systems that allow the hardware to run at peak performance without overheating or wasting energy. This is particularly important for businesses that need to run their AI models 24/7, as even small efficiencies can translate into significant cost savings over time.

Moreover, efficiency extends beyond just hardware. It also involves software optimisation. Efficient algorithms can perform tasks using fewer computational resources, which means that you can do more with less. For instance, some machine learning algorithms are designed to minimise the number of calculations needed to arrive at a solution, thereby reducing the computational load.
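One widely used example of this kind of software optimisation is mixed-precision training, which runs most operations in 16-bit rather than 32-bit arithmetic to cut memory traffic and compute. The sketch below is illustrative only: the model, optimiser and data are stand-ins, and it assumes PyTorch with an optional CUDA GPU.

```python
# Illustrative sketch: mixed-precision training in PyTorch, one of many
# software-level optimisations that reduce computational load.
# The model, loss and optimiser below are placeholders for your own setup.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 10).to(device)          # stand-in model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

def train_step(inputs: torch.Tensor, targets: torch.Tensor) -> float:
    optimizer.zero_grad()
    # Run the forward pass in half precision where it is safe to do so.
    with torch.autocast(device_type=device, dtype=torch.float16,
                        enabled=(device == "cuda")):
        loss = loss_fn(model(inputs), targets)
    # Scale the loss to avoid underflow in half-precision gradients.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```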

While speed is undoubtedly important, efficiency is the key to sustainable and cost-effective AI operations. By focusing on both, businesses can ensure they are getting the most out of their compute resources, allowing them to scale their operations more effectively. Additionally, the use of cloud resources allows for greater flexibility, enabling companies to adjust their computational needs based on project requirements and avoid unnecessary AI costs.

The Cost of Generative AI

Implementing generative AI is not cheap. The costs can range from thousands to millions, and even billions, of dollars, depending on the scale and complexity of the project. One estimate suggests that ChatGPT could cost over $700,000 per day to operate – roughly $21 million per month. These costs include data storage, computational power, and the human resources needed for implementation and maintenance. It's crucial to factor in these costs when planning an AI project.
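A simple back-of-envelope model can make these numbers less abstract before committing to a project. The sketch below uses entirely hypothetical figures for GPU count, hourly pricing, storage and staffing – substitute your own provider’s rates before drawing any conclusions.

```python
# Back-of-envelope monthly cost estimate for running a generative AI service.
# All figures below are hypothetical placeholders, not real price quotes.

def monthly_cost(num_gpus: int,
                 gpu_hourly_rate: float,
                 utilisation: float,
                 storage_tb: float,
                 storage_rate_per_tb: float,
                 staff_monthly: float) -> float:
    hours_per_month = 24 * 30
    compute = num_gpus * gpu_hourly_rate * hours_per_month * utilisation
    storage = storage_tb * storage_rate_per_tb
    return compute + storage + staff_monthly

# Example: 64 GPUs at a hypothetical $2.00/hour, 80% utilised,
# 50 TB of storage at $20/TB/month, plus $40k/month in staffing.
estimate = monthly_cost(num_gpus=64, gpu_hourly_rate=2.00, utilisation=0.8,
                        storage_tb=50, storage_rate_per_tb=20,
                        staff_monthly=40_000)
print(f"Estimated monthly cost: ${estimate:,.0f}")
```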

However, budget overruns are common in AI implementations, often due to underestimated resource demands. To rub salt in the wound, pricing from legacy cloud providers remains famously opaque. They package up add-ons to the point where a seemingly good deal turns into vendor lock-in with an exorbitant or even unfeasible price tag, especially for newer companies.

Therefore, a well-thought-out budget that includes contingencies for unexpected costs is essential for the successful deployment of generative AI. Companies should also consider the long-term operational costs, including regular updates and maintenance, to ensure that the project remains viable in the long run.

Limitations of Generative AI

While generative AI holds immense potential, it's not without limitations. The technology is still in its nascent stage, and there are concerns about data privacy, model interpretability, and ethical considerations.

While Europe is ahead of the curve on privacy concerns and regulation, the region’s adoption of AI is lagging due to a lack of infrastructure investment and a limited understanding of the complexities involved in implementation.

Businesses need to be aware of these challenges and prepare strategies to mitigate them. This includes regular audits of data usage and computational needs, as well as ethical reviews to ensure that the AI models align with societal norms and regulations. Companies should also be prepared for the possibility of data breaches and have contingency plans in place to address such issues.

Energy Consumption

Generative AI is a significant energy consumer. Training, in particular, requires a considerable amount of electricity, adding to both the overall cost and the ESG impact.

Hyperstack’s network of servers runs entirely on hydropower, and even the back-up generators run on biodiesel, reducing the overall environmental impact of AI. In terms of cost-efficiency, innovations in hardware and software are making it possible to achieve the same computational results with less energy. NVIDIA’s H100 GPUs, for example, are 26x more energy efficient than CPUs when measured across inferencing benchmarks. This not only reduces costs but also aligns with the growing emphasis on sustainable business practices.

Choosing an energy-efficient GPU cloud provider like Hyperstack can be a win-win situation for both cost reduction and ESG.

How AI Can Be Used to Reduce Costs

AI itself can be a solution to some of the challenges it poses. It can automate many of the routine tasks associated with data management, reducing labour costs. These cost-saving measures are not just theoretical; they are being implemented in real-world scenarios. Companies are using AI to automate customer service, optimise supply chain logistics, and even predict maintenance needs for machinery, all of which contribute to significant cost savings.

If you’re already taking steps to implement generative AI, it would be short-sighted to overlook other applications of AI that could offset some of the initial investment required.

Cloud Capacity Scale-Planning

Effective cloud capacity planning is crucial for implementing generative AI. Businesses need to assess their current and future needs to avoid cost overruns.

Organisations must make data-driven decisions about whether to deploy AI workloads in a shared public cloud, an on-premise environment, or combine the two in a hybrid cloud environment. This involves analysing the performance, security, and cost implications of each option. By doing so, businesses can optimise their cloud resources, ensuring that they are neither under-utilising nor over-committing, both of which can lead to unnecessary expenses.
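As a starting point for that analysis, even a simple break-even comparison between renting GPU capacity and buying hardware outright can be useful. The sketch below uses placeholder prices only; a real decision would also weigh power, staffing, depreciation, data-transfer and availability considerations.

```python
# Hypothetical break-even comparison: cloud GPU rental vs. on-premise purchase.
# All prices are placeholders; substitute your own quotes before relying on this.

CLOUD_RATE_PER_GPU_HOUR = 2.50      # hypothetical on-demand price
ONPREM_COST_PER_GPU = 30_000        # hypothetical purchase price per GPU
ONPREM_RUNNING_PER_GPU_HOUR = 0.40  # hypothetical power/cooling/hosting cost

def cumulative_cost(hours: int, gpus: int) -> tuple[float, float]:
    cloud = gpus * hours * CLOUD_RATE_PER_GPU_HOUR
    onprem = gpus * (ONPREM_COST_PER_GPU + hours * ONPREM_RUNNING_PER_GPU_HOUR)
    return cloud, onprem

# Compare cumulative costs for an 8-GPU workload over increasing horizons.
for months in (6, 12, 24, 36):
    hours = months * 30 * 24
    cloud, onprem = cumulative_cost(hours, gpus=8)
    cheaper = "cloud" if cloud < onprem else "on-premise"
    print(f"{months:>2} months: cloud ${cloud:,.0f} vs on-prem ${onprem:,.0f} -> {cheaper}")
```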

Once a business has audited its own requirements, it then needs to plan its scaling alongside the availability of infrastructure. In many cases, large-scale AI operations need to reserve capacity in advance, as demand for GPU resources can outpace supply, leading to an overall lack of availability in the market:

"I think it's not controversial at all to say that, at least in the short term, demand is outstripping supply, and that's true for everybody," 

AWS CEO Adam Selipsky, referring to the H100s that are designed for Generative AI.

These limitations are not insurmountable, but they require careful scale-planning and consideration – even with NVIDIA’s awe-inspiring shipment of some 550,000 H100 GPUs in 2023, lead times for the H100 remain lengthy, with many large-scale projects needing to deploy multiple clusters for a single environment.

Advanced analytics tools can also forecast future resource needs, enabling companies to plan their cloud capacity more effectively.
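Even a very simple trend extrapolation over historical usage can support this kind of planning. The sketch below fits a straight line to hypothetical monthly GPU-hour figures with NumPy and projects the next quarter; real capacity planning would use richer models and real telemetry.

```python
# Minimal forecasting sketch: fit a linear trend to hypothetical monthly
# GPU-hour usage and project the next three months of demand.
import numpy as np

# Hypothetical historical usage (GPU-hours per month).
usage = np.array([5_200, 5_900, 6_400, 7_100, 7_800, 8_600])
months = np.arange(len(usage))

# Fit a first-degree polynomial (a straight line) to the usage trend.
slope, intercept = np.polyfit(months, usage, deg=1)

for ahead in range(1, 4):
    forecast = slope * (len(usage) - 1 + ahead) + intercept
    print(f"Month +{ahead}: ~{forecast:,.0f} GPU-hours forecast")
```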

AI and Business Strategy

Generative AI should align with your overall business strategy. Focus on applications that offer clear returns on investment (ROI) and significant profit margins. Tailored cost-optimisation methods can help strike the right balance between performance and cost. This involves identifying the key performance indicators (KPIs) that are most relevant to your business and optimising your AI models accordingly. By doing so, you can ensure that your investment in AI and cloud resources yields the maximum possible returns, thereby justifying the costs involved.

Strategic planning is essential for the successful implementation of generative AI, and businesses should consult with experts in the field to ensure that their projects are both feasible and aligned with their long-term goals.

Generative AI is a groundbreaking technology with the potential to transform various industries. However, it comes with its own set of challenges, particularly in the realm of cloud computing. By understanding these challenges and leveraging solutions like Hyperstack’s GPU Cloud, businesses can unlock the full potential of generative AI in a cost-effective and efficient manner. As the technology continues to evolve, it's crucial for businesses to stay up to date on the latest trends and innovations, ensuring they are well-positioned to capitalise on the opportunities that generative AI offers.

With the right Cloud partnership, proper planning, and strategic investment, the challenges can be overcome, paving the way for a new era of innovation and growth.

Sign up to Hyperstack today to reduce your compute overheads by up to 75%, or book a meeting directly with our team to reserve in advance and discover how we can help your business implement AI on a larger scale.