A pragmatic approach to generative AI
Eighteen months into the generative AI boom, some may wonder if the luster is wearing off. In April, Axios called gen AI a “solution in search of a problem.” A month later, a Gartner survey in the US, UK and Germany found that around half of all respondents had difficulty assessing the organizational value of AI, despite generative solutions being the No. 1 form of deployment. Meanwhile, Apple and Meta are reportedly withholding key AI features from Europe due to compliance issues.
Between the regulatory issues and ROI questions, it’s tempting to wonder whether generative AI might turn out to be the tech industry’s newest gadget – more NFT than Netflix, if you will. But the problem isn’t the technology; it’s the mentality. What we need is an alternative approach.
Not all AI is the same, and too many companies are jumping on the AI bandwagon, especially for generative use cases. Practitioners will only unlock the true potential of AI – including generative applications – if they adopt an engineering-first mindset and cultivate the domain knowledge to back it up. Then, and only then, can they build a roadmap to concrete long-term value.
Not all AI is the same
Broadly speaking, enterprise AI splits into generative and analytical applications. Generative AI has gotten all the recent attention thanks to its uncanny ability to create written content, computer code, realistic images and even video in response to user prompts. Analytical AI has been commercialized for much longer: it is the AI companies use to run their operations, spot trends and make decisions based on large volumes of data.
Analytical and generative AI can of course overlap. Within a given stack you can find all kinds of integrated applications – a generative solution on the front end, for example, that surfaces ‘traditional’ AI-powered analytics and visualizes the answer. Yet the two are fundamentally different. Analytical AI helps you operate; it’s reactive. Generative AI helps you create; it’s proactive.
Too many stakeholders gloss over this dichotomy, but it matters in the all-important value conversation. AI-powered analytics have long proven their ROI: they turn the data organizations already collect into insight, and the outcomes – from customer segmentation to predictive maintenance to supply chain optimization – deliver established value.
Generative AI? That’s a different ball game. We see a lot of experimentation and investment, but not necessarily a commensurate return. For example, a company’s engineers may be 30% more effective when they use a generative AI tool to write code, but if that doesn’t lead to shorter product-to-market cycles or higher net promoter scores, it’s difficult to quantify the real value. Leaders must break down the value chain into its modular components and ask the hard questions to map generative use cases to real value.
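To see why a headline productivity figure can dissolve on its way to the bottom line, consider a minimal back-of-the-envelope sketch (in Python, with purely hypothetical numbers): a 30% coding speedup only shortens the part of the product-to-market cycle actually spent writing code.

```python
# Back-of-the-envelope model (hypothetical numbers): how much does a 30%
# coding speedup shorten the overall product-to-market cycle?

def cycle_time_reduction(coding_share: float, coding_speedup: float) -> float:
    """Fractional reduction in total cycle time when only the coding portion
    of the cycle gets faster (an Amdahl's-law-style estimate)."""
    new_coding_time = coding_share * (1 - coding_speedup)
    new_total = (1 - coding_share) + new_coding_time
    return 1 - new_total

# Assume coding is 25% of the end-to-end cycle and gen AI makes it 30% faster.
reduction = cycle_time_reduction(coding_share=0.25, coding_speedup=0.30)
print(f"Overall cycle-time reduction: {reduction:.1%}")  # -> 7.5%
```

Under those illustrative assumptions, a 30% gain at the keyboard shrinks the overall cycle by only about 7.5% – which is exactly why the saved hours have to be traced to faster releases, fewer defects or higher NPS before they count as ROI.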
The bandwagon problem
The ROI problem for generative AI is as much a bandwagon problem: many stakeholders begin their search for an AI solution with only a generative implementation in mind. Business leaders are trying to force AI – and especially generative solutions – onto problems they don’t have. They create use cases just to get in on the game, often at the behest of their boards, because they don’t want to be left behind.
It’s time to take a step back. Leaders must remember two things.
First, it’s important to separate the use cases. Is the drive for a generative solution better served, in whole or in part, by an analytical one? Often an organization only needs analytical AI – for fraud detection or risk management, for example – not a GPT-style assistant that turns every employee into the latest prompt wizard.
Second, it is equally important to integrate AI only where it makes sense. It must solve immediate problems that the company can realize value by solving; otherwise it is a solution without a problem – like handing the orchestra a drum kit for an arrangement with no percussion.
Why domain knowledge is essential
Bandwagon skeptics who appreciate the nuances of AI can take a pragmatic, engineering-led approach that delivers real value. The biggest problem with AI, both generative and analytical, is a lack of understanding of the context or business domain in which practitioners work.
You can generate a block of code, but without understanding where that code fits, you can’t solve any real challenge. Think of it this way: an enterprise that deploys an AI model has moved a newcomer onto the street, while its engineers already know the neighborhood. The company must invest significant resources in training its newest resident – after all, the model is there to solve an acute problem, not to knock on doors at random.
When done correctly, generative models can deliver substantial long-term value. AI can generate code against well-defined requirements and context – guardrails built as part of a broader investment in domain knowledge – while engineers retain the context to modify and debug the output. It can accelerate productivity, make practitioners’ jobs easier and, if clearly mapped across the value chain, generate quantifiable ROI.
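As a rough illustration of what such guardrails might look like in practice, here is a minimal sketch – all names are hypothetical, not any specific product’s API – of wrapping a code-generation request in domain context, explicit constraints and a mandatory human review step:

```python
# Hypothetical sketch: carrying domain knowledge into a code-generation request.
from dataclasses import dataclass, field

@dataclass
class DomainGuardrails:
    """Illustrative container for the domain context a code-gen prompt should carry."""
    business_context: str                                       # what the system does and for whom
    constraints: list[str] = field(default_factory=list)        # e.g. compliance rules, coding standards
    required_reviews: list[str] = field(default_factory=list)   # human checks before anything ships

def build_constrained_prompt(task: str, guardrails: DomainGuardrails) -> str:
    """Prepend business context and constraints so generated code targets the
    enterprise's actual requirements rather than a vacuum."""
    rules = "\n".join(f"- {c}" for c in guardrails.constraints)
    return (
        f"Business context: {guardrails.business_context}\n"
        f"Constraints:\n{rules}\n"
        f"Task: {task}\n"
    )

guardrails = DomainGuardrails(
    business_context="Claims-processing service for a regulated insurer",
    constraints=["No PII in logs", "Follow the team's error-handling conventions"],
    required_reviews=["Domain-expert code review", "Unit tests pass in CI"],
)
print(build_constrained_prompt("Add a validator for claim submission dates", guardrails))
```

The specifics will vary by organization; the point is that the domain knowledge lives in the workflow – context in, review out – rather than in any individual prompt.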
That’s why it’s essential to have the discipline to invest in this domain knowledge from the start. Leaders must build this into any AI investment plan if they want actionable long-term results. Sacrificing depth for speed can lead to piecemeal solutions that ultimately don’t help, or only help for a short time. Those who want AI for the long term must make the effort to build context from the bottom up.
A roadmap for discipline
For business leaders, the roadmap to value-driven AI starts with asking the right question: what problem in my business does AI really need to solve? Disciplined practitioners bring an engineering mindset that asks the right questions, considers the deeper problems and seeks focused solutions from the very beginning. Done right, analytical or generative AI can accelerate a team’s effectiveness by removing the mundane, boring parts of their roles. But generative AI needs the right guardrails and industry-specific training so that implementations don’t stray off course.
Approached this way, gen AI will not go the way of the metaverse. Its primitive beginnings can grow from superficial use cases into real value because enterprises will have invested the resources to create context. If not, the costs of failure are already becoming apparent: companies pile on additional computing, storage and networking costs, only to find they have delivered no measurable cost savings or revenue gains.
But for those who adopt an engineering-first mindset and don’t take shortcuts, this alternative approach can indeed provide a solution. A pragmatic approach to AI starts with asking the right questions and investing in domain knowledge. It ends with targeted solutions that deliver quantifiable long-term value.