An Aforza point of view on McKinsey’s Building the foundations for agentic AI at scale


McKinsey’s latest piece, Building the foundations for agentic AI at scale, lands on a conclusion most technology leaders already feel in their bones: data is the backbone of agentic AI.

The article reports that nearly two-thirds of enterprises have experimented with agents, but fewer than one in ten have scaled them. Eight in ten cite data limitations as the reason. The diagnosis is right. The architectural prescriptions (modular layers, governed semantics, observable pipelines) are sound.

There is one element of the article we want to engage with more sharply, because it is the assumption that quietly stops the most ambitious commercial transformations in Consumer Products from ever beginning:
the idea that data quality must be in place before agents can run.

🔗 McKinsey & Company – Building the foundations for agentic AI at scale

The data-first mental model is what’s holding CPG back

Consumer Products organisations have been told for the better part of a decade that the route to commercial AI runs through the data warehouse. Get the master data right. Reconcile the trade promotion ledger. Stand up a customer 360. Then, and only then, can you think about intelligent execution at the shelf.

It is a logical sequence on a slide. It is a transformation killer in practice.

The reason is structural, not cultural. The data that matters most in CPG (real shelf state, real distributor sell-through, real promotional uptake, real outlet-level distribution) is not sitting in a warehouse waiting to be governed. It is generated at the moment of execution: when a rep walks into a store, when an image of a fixture is captured, when a perfect store audit is completed, when an order is placed at a wholesaler.

McKinsey’s Exhibit 3 shows Consumer Goods and Retail running 81% gen AI adoption across functions, with the heaviest concentration in marketing and sales (45%). That is consistent with what we see. The capability is being deployed. The bottleneck is no longer access to AI. It is the belief that the field data underneath it has to be perfect first.

And that field data is precisely the data that organisations cannot fix in advance, because it does not exist until somebody captures it.


Reference: McKinsey & Company – Building the foundations for agentic AI at scale – Exhibit 3

Quality is a byproduct of execution, not a prerequisite for it

There is a different mental model emerging from the CPG companies that are scaling agentic capability fastest. We heard it expressed clearly by a commercial leader at one of our customers, a major beverages business, when we asked what advice they would give to peers approaching the same transformation.

“The biggest lesson I’d want to share with anyone looking at this is that you don’t need perfect data to start. That belief stops more transformations than it should. We put capability in the hands of reps and let the system drive data quality upward over time. The real-time shelf photo, processed by the visual intelligence layer, lifted the floor immediately because you’re working from what’s actually on the shelf right now, not from what someone entered manually last week. Data quality becomes a byproduct of using the platform well, not a prerequisite for it. That’s a fundamentally different mental model from how most organisations approach data programmes, and it’s one I think more businesses should adopt.”

Commercial leader, global beverages company

This is not a contradiction of McKinsey’s argument. It is a sequencing point. The seven architectural principles in the article (among them ingestion as a product, shared semantics, governance by default, observable behaviour, and controlled execution) are the right principles. But the order of operations matters. In CPG, you do not build the architecture first and then turn on execution. You turn on execution at the point of pain (the store visit, the audit, the promotion check) and the architecture earns its quality from the field upward.

A shelf photo, processed by a vision model in real time, gives you an objectively true picture of distribution and share of shelf the moment it is taken. That is higher-fidelity data than any quarterly retail audit. The act of running the agentic capability is what creates the clean signal. Apply the same logic to image-driven order capture, to deductions matching, to promotional execution. The platform is the data quality engine.
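To make the arithmetic behind that claim concrete, here is a minimal illustrative sketch. It is not Aforza’s actual pipeline, and the names (`Detection`, `share_of_shelf`, the SKUs) are hypothetical; it simply shows how structured detections from a single shelf photo yield a share-of-shelf figure the moment the photo is processed, with no manual entry in the loop.

```python
# Illustrative only: these names and thresholds are assumptions, not Aforza's API.
from dataclasses import dataclass

@dataclass
class Detection:
    """One product facing found by a vision model in a shelf photo."""
    sku: str
    brand: str
    confidence: float

def share_of_shelf(detections, brand, min_confidence=0.5):
    """Fraction of confidently detected facings belonging to `brand`."""
    kept = [d for d in detections if d.confidence >= min_confidence]
    if not kept:
        return 0.0
    ours = sum(1 for d in kept if d.brand == brand)
    return ours / len(kept)

# One photo's worth of detections, available the moment it is taken.
photo = [
    Detection("IRN-330", "Our Brand", 0.97),
    Detection("IRN-500", "Our Brand", 0.91),
    Detection("COLA-330", "Competitor", 0.88),
    Detection("COLA-330", "Competitor", 0.40),  # low confidence, excluded
]
print(share_of_shelf(photo, "Our Brand"))  # 2 of 3 confident facings
```

The design point is that the metric is computed from what the camera actually saw, so its freshness and fidelity improve every time the capability is used.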


Reference: McKinsey & Company – Building the foundations for agentic AI at scale – Exhibit 1

Look at the McKinsey data again with this lens. Data limitations sit at 80%, second on the list. But the top constraint, at 86%, is operating model and talent. The third, at 80%, is lagging adoption and ineffective change management. Two of the three biggest blockers to scaling AI are about how organisations work, not what is in the warehouse. The companies that move first put capability in front of users and let the operating model catch up. The data follows.

What this looks like in practice

This is not a theoretical position. It is how Aforza customers are running today. AG Barr, the UK’s largest independent soft drinks company, deployed Ava’s visual intelligence and intelligent stock recommendations into the field without waiting for a perfect upstream data model. The system processes shelf images in real time and feeds clean, structured outputs back into Salesforce, where they become the trusted record of what is actually happening in store. The data foundation got stronger because the platform was running, not before it.

This is the practical realisation of McKinsey’s principle that modern architectures should analyse model outputs to strengthen the data itself. Gen AI applications, the article notes, can generate labels, usage patterns, and context that improve quality and support future models. We agree. We would go further: in CPG commercial execution, that is not a nice-to-have feature of the architecture. It is the whole strategy. The agentic layer is the data quality layer.

The architectural work McKinsey describes still needs to be done. Semantic layers, knowledge graphs, AI gateways, governed data products: all of it matters, particularly as multi-agent workflows scale across trade promotion, distributor management, retail execution, and deduction recovery. But the mistake is to treat that work as the gate. Run the capability where the pain is sharpest. Let the field data flow back. Build the architecture in parallel, not in series.

Meet Ava

This is where Aforza has done something no other CPG platform has. The Ava Library is the industry’s first packaged library of agentic AI use cases purpose-built for consumer goods. Not a toolkit. Not a set of APIs waiting for a systems integrator to assemble. A living library of pre-built agents that already know how to review a store visit for perfect store compliance, validate a retailer deduction against the originating trade promotion, surface the next best action for a key account, draft a promotional plan from post-event ROI, reconcile distributor sell-out data, and dozens of other jobs that sit at the heart of the CPG commercial day.

🔗 You can download a copy of the Ava Library here

This matters because it directly addresses the common barriers identified as blocking AI adoption in CPG. Security concerns recede because every agent in the Library runs inside the customer’s existing Salesforce platform, on infrastructure their CIO has already approved. The expertise gap closes because the agents are pre-built for CPG use cases; commercial teams consume them, they don’t build them. And the ROI question answers itself when every agent is tied to a specific commercial job with a measurable outcome attached.

The question worth asking

If your organisation is one of the 80% citing data limitations as the constraint on scaling agentic AI, the question is not whether the diagnosis is right. It almost certainly is. The question is whether your data programme is structured to fix the problem before deployment, or to fix it through deployment.

In CPG, the second answer is the one that produces both the better data and the faster impact. The companies that understand this are already moving. The ones still building the perfect foundation will find, in eighteen months, that their competitors built theirs by running it.

Commercial Excellence Exchange

For more insights like this, join the Commercial Excellence Exchange, our community for leaders across Commercial Excellence, Sales Excellence, Field Effectiveness, Sales Effectiveness & Commercial Performance in the Consumer Goods industry.