Three Types of AI. One Strategy Question.

AI Strategy · 12 min read · by Girish Koliki
There are three distinct types of AI doing very different jobs in production right now. Most businesses treat them as one thing. That is why their AI strategy sounds confident and delivers nothing specific.

Ask ten business leaders what their AI strategy is and you will hear the same word repeated back at you with total confidence: "AI." We are investing in AI. We are rolling out AI. We are hiring for AI.

That is like saying your transport strategy is "vehicles." A bicycle, a cargo ship, and a helicopter are all vehicles. They solve completely different problems. Treating them as interchangeable would be obviously ridiculous, and yet that is what most companies do with AI every day.

There are three distinct types of AI in production right now: traditional machine learning, generative AI, and agentic AI. They have different strengths, different cost profiles, different failure modes, and different infrastructure requirements. The companies getting real results from AI know which type solves which problem. The rest are spending money on capability they cannot map to outcomes.

88% Organisations using AI in at least one business function (McKinsey, 2025)
~5% Generating real value from AI at scale (BCG, 2025)
42% Large organisations running AI in production (Dataiku, 2026)

§ Traditional Machine Learning: The Workhorse Nobody Talks About

Traditional ML is the oldest and least glamorous of the three. It is also the most widely deployed in production, by a large margin.

This is the AI that predicts which customers will churn, classifies support tickets by urgency, detects fraud in financial transactions, and forecasts demand in supply chains. It learns patterns from historical data and applies them to new inputs. No text generation, no creative output, no autonomy. Just pattern recognition and prediction, running quietly inside systems that most employees never see.

ML models like XGBoost, random forests, and logistic regression still power the core decision-making in banking, insurance, retail, manufacturing, and healthcare. A CES 2026 report made the point bluntly: the models that ship are not the ones generating research papers. They are the ones that work reliably, deploy easily, and deliver consistent results under real-world conditions.

Computer vision is a good example. In factories, hospitals, and retail environments, ML-powered vision systems run quality inspection, patient monitoring, and inventory management on edge devices. These are not large language models. They are lightweight, task-specific models optimised for speed and power efficiency. Dell's 2026 edge AI predictions identified computer vision as the leading edge AI use case, with organisations moving from proof-of-concept to production scale.

The key characteristics of traditional ML for decision-makers: it needs structured, clean data to work well. It is interpretable in ways that LLMs are not, which matters in regulated industries where you need to explain why a decision was made. And it is cheap to run at inference time compared to generative models. A well-tuned XGBoost model running fraud detection costs a fraction of what an LLM call costs per prediction.
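To make the cost point concrete, here is a deliberately tiny sketch of the kind of model this section describes: a logistic regression trained on synthetic transaction features using only the standard library. The features, numbers, and training loop are all illustrative inventions, not a real fraud model (in practice you would reach for XGBoost or scikit-learn), but they show why inference is cheap: scoring a transaction is a handful of multiplies, with no API call and no GPU.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    z = max(min(z, 30.0), -30.0)  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic transactions: [amount z-score, new-device flag, txns in last hour]
def make_txn(is_fraud):
    if is_fraud:
        return [random.gauss(2.5, 0.5), 1.0, random.gauss(6.0, 1.0)], 1
    return [random.gauss(0.0, 0.5), 0.0, random.gauss(1.0, 0.5)], 0

data = [make_txn(i % 4 == 0) for i in range(400)]

# Train a tiny logistic regression with plain SGD (stdlib only).
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.05
for _ in range(200):
    for x, y in data:
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def fraud_score(x):
    """Probability-like score in [0, 1]; inference is a few multiplications."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

print(fraud_score([2.8, 1.0, 7.0]))  # high-risk pattern scores high
print(fraud_score([0.1, 0.0, 1.0]))  # ordinary transaction scores low
```

The same per-prediction operation count holds for a production gradient-boosted model: a few hundred comparisons per tree, microseconds per transaction, versus a priced-per-token network round-trip for an LLM call.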

Traditional ML is not exciting. It is not on the cover of magazines. But if you pulled it out of most enterprises tomorrow, core operations would break within hours.

Where traditional ML earns its keep

  • Fraud detection and anomaly scoring in financial services
  • Demand forecasting and inventory optimisation in retail and logistics
  • Predictive maintenance on factory floors and industrial equipment
  • Customer churn prediction and lifetime value modelling
  • Computer vision for quality control, safety monitoring, and medical imaging
  • Credit scoring and risk assessment in lending

§ Generative AI: The One Everyone Knows by Name

Generative AI is what most people mean when they say "AI" in 2026. It is ChatGPT, Claude, Gemini, Midjourney, and the wave of tools built on large language models and diffusion models. It creates new content: text, images, code, audio, video.

The business value is real, but it is different from ML. Generative AI handles unstructured, language-heavy work. Summarising documents. Drafting customer communications. Generating code. Answering questions from a knowledge base using retrieval-augmented generation (RAG). Translating between languages. Turning a product spec into marketing copy.
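The retrieval step in RAG can be sketched in a few lines. This toy version scores knowledge-base chunks by word overlap rather than vector embeddings, and the knowledge-base contents are invented for illustration, but the shape is the same as a production pipeline: retrieve the most relevant chunks, then build a prompt that grounds the model's answer in them.

```python
# Toy RAG retrieval: score chunks by word overlap with the query.
# Real systems use vector embeddings, but the pipeline shape is identical.

KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days of approval.",
    "Premium support is available on the Enterprise plan only.",
    "Passwords must be rotated every 90 days.",
]

def score(query, chunk):
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c)  # how many query words appear in the chunk

def retrieve(query, k=1):
    ranked = sorted(KNOWLEDGE_BASE, key=lambda ch: score(query, ch), reverse=True)
    return ranked[:k]

def grounded_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(grounded_prompt("How long do refunds take?"))
```

Grounding the prompt in retrieved text is what lets a general-purpose model answer from your knowledge base instead of from its training data.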

McKinsey's surveys show adoption climbing fast: 78% of organisations reported using AI in at least one function in early 2025, rising to 88% by the end of the year, and much of that growth is generative AI being embedded into existing workflows. Companies report up to 30% workload reduction after integrating generative AI into production processes. That is not a marginal improvement.

But generative AI has a specific cost and complexity profile that traditional ML does not. LLM inference is expensive. Every API call costs money, and costs scale with usage in a way that a deployed XGBoost model does not. Latency is higher. Outputs are probabilistic, which means the same input can produce different outputs, and those outputs can be wrong in confident-sounding ways. Hallucination is still a live production risk that requires guardrails, human review, or structured validation.
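The "guardrails or structured validation" mentioned above can be as simple as refusing to act on model output that fails a schema check. Here is a minimal sketch: `llm_call` is a stand-in for a real LLM API (it returns a canned string here), and the field names are invented for illustration. The pattern is the point — parse, validate, retry, and escalate to a human rather than propagating a malformed or hallucinated response.

```python
import json

def llm_call(prompt):
    # Placeholder for a real LLM API call; returns a canned response here.
    return '{"customer_id": "C-1042", "churn_risk": "high", "reason": "billing dispute"}'

REQUIRED = {"customer_id": str, "churn_risk": str, "reason": str}
ALLOWED_RISK = {"low", "medium", "high"}

def extract_with_guardrails(prompt, retries=2):
    """Validate the model's output before anything downstream acts on it."""
    for _ in range(retries + 1):
        raw = llm_call(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry rather than propagate
        if (all(isinstance(data.get(k), t) for k, t in REQUIRED.items())
                and data["churn_risk"] in ALLOWED_RISK):
            return data
    raise ValueError("LLM output failed validation; escalate to human review")

result = extract_with_guardrails("Summarise this support ticket as JSON ...")
print(result["churn_risk"])
```

This is what "probabilistic outputs" means operationally: the validation layer, not the model, is what makes the output safe to wire into downstream systems.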

There is also a growing split between cloud-hosted models and on-device deployment. Small language models (SLMs) are gaining ground for tasks that need to run locally: on phones, laptops, and edge devices. Models like Llama 3.2 at 1B parameters can handle summarisation, light Q&A, and formatting tasks on-device, with better latency and privacy than cloud round-trips. The trade-off is capability. Frontier reasoning and long conversations still need larger models running in the cloud.

For engineering leaders, the practical question is not "should we use generative AI?" The question is "where does generative AI add value that a cheaper, faster ML model cannot?" If the task involves structured data and a well-defined prediction, traditional ML is almost always a better fit. If the task involves unstructured language, creative generation, or complex reasoning over text, that is where generative AI earns its cost.

Where generative AI fits

  • Summarising and extracting information from documents, emails, and reports
  • Code generation, review, and debugging across engineering workflows
  • Customer-facing chatbots and knowledge assistants backed by RAG
  • Content creation for marketing, product descriptions, and internal communications
  • Translation and localisation at scale
  • Turning unstructured data into structured outputs (e.g. extracting fields from contracts)

§ Agentic AI: The Newest Layer, and the One Most Likely to Be Oversold

Agentic AI is the current hype cycle's main character. Gartner predicts 40% of enterprise applications will embed task-specific AI agents by the end of 2026, up from less than 5% in 2025. The market is projected to reach $52 billion by 2030. Every major vendor is racing to ship agent capabilities.

The concept is straightforward. Instead of AI that responds to a single prompt and returns a single output, agentic AI can break a goal into steps, execute those steps across multiple tools and data sources, and adjust its plan based on what it finds along the way. It moves AI from responding to acting.

A traditional ML model predicts which customer is likely to churn. A generative AI model drafts the retention email. An agentic system does both, plus decides which retention strategy to use, personalises the message, schedules the send, and logs the outcome. It closes the loop.
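That closed loop can be sketched as a minimal agent: predict, decide, act, and log each step. Every tool below is a stub with invented data (a real system would call an ML model, an LLM, and a CRM API), but the control flow is the distinguishing feature of agentic AI: the system decides whether to act, not just what to output.

```python
# Minimal agent loop sketch: predict -> decide -> act -> log.
# All three tools are stubs standing in for real services.

def predict_churn(customer_id):          # stands in for a traditional ML model
    return {"C-1042": 0.87}.get(customer_id, 0.1)

def draft_retention_email(customer_id):  # stands in for a generative model call
    return f"Hi {customer_id}, we'd love to keep you. Here's 20% off."

def schedule_send(email):                # stands in for a CRM/email API
    return "scheduled"

def retention_agent(customer_id, threshold=0.5):
    log = []                             # audit trail of every step taken
    risk = predict_churn(customer_id)
    log.append(("predict", risk))
    if risk < threshold:
        log.append(("decide", "no action"))
        return log
    email = draft_retention_email(customer_id)
    log.append(("draft", email))
    log.append(("send", schedule_send(email)))
    return log

for step, outcome in retention_agent("C-1042"):
    print(step, "->", outcome)
```

Note that the ML layer and the generative layer are both still present inside the loop; the agent is the orchestration around them, not a replacement for them.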

In production today, agent-based systems are handling customer service tickets end-to-end, managing inventory decisions by combining demand forecasts with supply constraints, and coordinating multi-step workflows across enterprise tools. Open protocols like MCP and A2A are reducing the integration cost that used to make this kind of orchestration prohibitively complex.

But there is a meaningful gap between the ambition and the reality. Fewer than one in four organisations have successfully scaled AI agents to production, according to McKinsey. Most are still in pilot. The failure pattern is consistent: teams layer agents onto broken processes and expect the agent to compensate. It does not. An agent that orchestrates a workflow with undefined ownership, messy data, and manual exceptions will produce chaos faster than a human doing the same work manually.

Agentic AI also introduces a governance problem that traditional ML and generative AI do not. When an ML model makes a prediction, a human still decides what to do. When a generative model drafts something, a human reviews it before it ships. When an agent acts autonomously, the decision-to-action loop can close before anyone checks. That requires a different kind of oversight: clear escalation paths, audit trails, and defined boundaries for what the agent can and cannot do without approval.
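The "defined boundaries" above amount to an explicit policy over actions. This sketch shows one hypothetical shape for such a policy (the action names and structure are invented, not any particular framework): some actions run autonomously, some pause for human sign-off, and anything outside the boundary is refused, with every decision written to an audit trail.

```python
# Sketch of an action boundary for an agent. Hypothetical policy, not a framework.

AUTONOMOUS = {"send_email", "log_outcome"}            # agent may act alone
NEEDS_APPROVAL = {"issue_refund", "change_contract"}  # human must sign off

audit_trail = []  # every decision is recorded, approved or not

def execute(action, payload, approved_by=None):
    if action in AUTONOMOUS:
        audit_trail.append((action, payload, "auto"))
        return "done"
    if action in NEEDS_APPROVAL:
        if approved_by is None:
            audit_trail.append((action, payload, "escalated"))
            return "pending approval"
        audit_trail.append((action, payload, f"approved:{approved_by}"))
        return "done"
    # Anything not explicitly allowed is refused outright.
    raise PermissionError(f"action '{action}' is outside the agent's boundary")

print(execute("send_email", {"to": "C-1042"}))
print(execute("issue_refund", {"amount": 50}))
```

The escalation path and the audit trail are the governance artefacts; the agent's intelligence is irrelevant if neither exists.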

For business owners and engineering leaders, the honest assessment is this: agentic AI is real, it is production-ready for well-defined workflows, and it will become a standard part of enterprise architecture. But it sits on top of the other two layers. Agents that lack good ML models to make predictions, or good generative models to interpret and communicate, are agents with nothing useful to orchestrate.

40% Enterprise apps expected to embed AI agents by end of 2026 (Gartner)
<25% Organisations that have scaled AI agents to production (McKinsey)
$52B Projected agentic AI market by 2030

§ How the Three Types Work Together

In a well-built system, these are not competing options. They are layers.

Consider a retail company. Traditional ML models run demand forecasting and fraud detection behind the scenes. Generative AI powers the customer support chatbot and writes product descriptions at scale. Agentic systems coordinate between the two: an agent monitors inventory predictions from the ML model, identifies a low-stock product, checks supplier availability through an API, generates a purchase order using a language model, routes it for approval, and logs the outcome.

Each layer does what it is best at. The ML model handles the structured prediction. The generative model handles the language. The agent handles the orchestration and action. Remove any one layer and the system degrades.
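The retail flow above can be compressed into a few stub functions, one per layer. The SKUs, thresholds, and return values are invented for illustration; the structure is what matters — the agent consumes the ML layer's forecast, uses the generative layer to produce the document, and routes the result for human approval rather than acting unilaterally.

```python
# Layered sketch of the retail example: each function stubs one layer.

def forecast_stock(sku):                 # ML layer: stubbed demand forecast
    return {"SKU-7": 3}.get(sku, 50)     # predicted units remaining

def supplier_available(sku):             # plain API call, no AI needed
    return True

def draft_purchase_order(sku, qty):      # generative layer: stubbed LLM call
    return f"PO: {qty} units of {sku}, standard terms."

def restock_agent(sku, reorder_point=10, reorder_qty=100):
    if forecast_stock(sku) >= reorder_point:
        return None                      # healthy stock: nothing to do
    if not supplier_available(sku):
        return "escalate: no supplier"
    po = draft_purchase_order(sku, reorder_qty)
    return f"routed for approval -> {po}"

print(restock_agent("SKU-7"))            # low stock: drafts and routes a PO
print(restock_agent("SKU-9"))            # healthy stock: no action
```

Swap out either stub and the agent degrades exactly as the paragraph above describes: no forecast means nothing to act on, no generative layer means no document to route.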

The same pattern applies in healthcare. ML models analyse imaging and patient data. Generative models summarise records and support clinical documentation. Agents coordinate referral workflows, schedule follow-ups, and flag exceptions for human review.

The companies pulling ahead are not the ones that picked the most advanced type of AI. They are the ones that mapped each type to the right problem and built the connective tissue between them.

§ What This Means If You Are Making Decisions

If you are an engineering leader or business owner trying to figure out where to invest, the first step is specificity. Stop treating AI as a single line item. Break it down by type and map each type to the business problem it solves.

If your problem is prediction on structured data, you probably need traditional ML. It is cheaper, faster, more interpretable, and more mature than the alternatives. Do not use an LLM to do what a gradient-boosted model can do for a fraction of the cost.

If your problem is understanding, generating, or transforming unstructured content, generative AI is the right tool. But budget for the inference costs, build guardrails for hallucination, and decide early whether you need cloud-scale reasoning or whether a smaller model running locally is good enough.

If your problem is coordinating multi-step workflows across systems, agentic AI is where you should be looking. But only if the underlying data, processes, and decision rights are already well-defined. Agents do not fix broken workflows. They amplify them.

And if your AI strategy document uses the word "AI" fifty times without once specifying which type, that is worth fixing before the next budget cycle.

A note from fusecup

At fusecup, we help engineering leaders and business owners figure out which type of AI fits which problem, and build the systems that connect them. Whether you are starting with your first ML model or architecting an agent-based workflow, we are happy to talk it through. No agenda, no pitch. Just a practical conversation about where you are and what would actually move the needle.

§ References

  1. McKinsey, The State of AI (2025). AI adoption across business functions, scaling challenges, and enterprise maturity data.
  2. BCG, To Unlock the Full Value of AI, Invest in Your People (2025). Research on the ~5% of companies generating value from AI at scale.
  3. Gartner, 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026 (August 2025). gartner.com
  4. Dell Technologies, The Power of Small: Edge AI Predictions for 2026 (January 2026). dell.com
  5. Axelera AI, CES 2026: From AI Hype to Inference Reality at the Edge (January 2026). community.axelera.ai
  6. Dataiku, Enterprise Machine Learning Platforms: A Buyer's Guide for 2026 (March 2026). dataiku.com
  7. Machine Learning Mastery, 7 Agentic AI Trends to Watch in 2026 (January 2026). machinelearningmastery.com