AI now enters almost every business conversation by default. Boards want a point of view on it. Leadership teams want a plan for it. Business units want to show they are doing something with it. And vendors are more than happy to reinforce the sense that the future belongs to the companies moving fastest. The real challenge is not AI activity alone, but AI value realization.
That creates a familiar kind of pressure. Many organizations are not just trying to understand what AI can do. They are trying to understand what it means for their competitive position. Are they moving early enough? Are others already further ahead? Are they investing enough? Are they being too cautious? Or worse, are they falling behind while the market moves on without them?
This is where AI FOMO enters the picture.
The feeling is understandable. McKinsey’s 2025 global survey, The State of AI, found that AI use is now widespread, with 88 percent of respondents saying their organizations regularly use AI in at least one business function. But the same report shows that most organizations are still early: nearly two-thirds have not yet begun scaling AI across the enterprise, and only 39 percent report any enterprise-level EBIT impact. In other words, AI activity is broad, but meaningful value capture remains uneven.
That is exactly why so many companies feel pressure to keep up. But the conclusion many draw from that pressure is often the wrong one. They assume that if AI matters strategically, the route to value must be through more platforms, more pilots, more tooling, and more spend.
In reality, AI value does not primarily come from platforms and technology. It comes from building the right environment for AI to work: the right organizational design, the right governance, the right ownership, and the right culture.
Once that becomes clear, the implication is hard to miss. Competitive advantage in AI will come less from more money and more from smarter money.
That should be read as good news. It means the answer to AI pressure is not panic. It is not an arms race for tools. It is not a reflex to outspend everyone else in the room. The better response is to invest intelligently in the conditions that allow AI to create real, repeatable value.
The Pressure to Keep Up Is Real, but It Can Lead to the Wrong Question
A lot of companies are still orienting themselves around a simple comparison question: are we ahead or behind?
It is easy to see why. Industry reports highlight performance gaps. Public case studies make progress look faster and more settled than it usually is. Boards want reassurance that the company is not underreacting. Leadership teams feel pressure to show momentum. And because AI is discussed so publicly, comparison becomes almost unavoidable.
But while the anxiety is real, the question itself is not especially useful.
The problem with asking whether you are ahead or behind is that it collapses very different realities into one vague sense of urgency. It encourages external benchmarking before internal diagnosis. It makes visible activity look like maturity. It can push companies toward symbolic progress: more announcements, more experimentation, more spending, more motion. None of that guarantees value.
A more useful question is harder and more practical at the same time:
Are we becoming more capable of turning AI into measurable, trusted, repeatable value?
That question shifts attention away from appearances and toward capability. It asks not how much AI can be seen from the outside, but whether the organization is getting better at making AI work in ways that actually matter.
This distinction matters because many companies do not have an AI shortage. They have a value realization problem.
Adoption Creates Visibility. It Does Not Automatically Create Value
One reason AI benchmarking becomes misleading is that the language of progress is often too imprecise. Organizations talk about use, adoption, scale, maturity, and value as though they are different names for the same thing. They are not.
A company can be using AI without having integrated it meaningfully. It can integrate AI in pockets without being able to scale it. It can scale certain tools without seeing clear business impact. And it can show a great deal of AI activity without being much closer to sustained competitive advantage.
This is one place where a more critical reading of the current AI conversation helps. McKinsey is useful in showing that the gap in outcomes is real. But Jing Hu, in her Substack post “Explain the McKinsey 2025 AI report,” makes a valuable counterpoint: broad adoption headlines and “high performer” narratives can easily be read too simplistically. Her point is not that the performance gap is imaginary. It is that visible AI use should not be mistaken for deep organizational readiness or durable value creation.
That caution matters. It keeps us from treating “everyone is doing AI” as proof that everyone is getting meaningful returns from it.
The important distinction, then, is not between companies that have AI and companies that do not. It is between companies that are visibly active and companies that are structurally capable of turning AI into value.
Why AI Value Realization Depends on the Environment Around the Technology
When organizations struggle to realize returns from AI, the instinct is often to look for an answer in the technology stack. Maybe the models are not good enough yet. Maybe the data infrastructure is not mature enough. Maybe the tooling needs to improve. Maybe the wrong platform was chosen.
Sometimes those things are true. But in most enterprise settings, the limiting factor is not the raw availability of technology. It is the environment in which the technology is being introduced.
AI does not create value in a vacuum. It creates value inside an operating context. That context determines whether promising use cases remain isolated, whether cross-functional work actually happens, whether ownership is clear, whether governance enables action, and whether teams are prepared to redesign work rather than simply layering new tools onto old complexity.
This is where McKinsey’s 2025 report is most useful. Its strongest insight is not simply that AI use has broadened. It is that the organizations seeing the greatest benefit are the ones redesigning workflows, scaling faster, embedding AI into more business functions, and using management practices that align with broader transformation capability across strategy, talent, operating model, technology, data, and adoption. McKinsey also identifies workflow redesign as one of the strongest contributors to meaningful business impact.
That is exactly where organizational design matters.
If decision rights are fragmented, initiatives slow down. If responsibilities sit ambiguously between digital, operations, customer service, compliance, and business leadership, momentum gets lost between functions. If success metrics are vague, projects continue without clarity about whether they are worth scaling. If AI is treated as a side program rather than part of the operating model, it remains visible but disconnected.
Governance matters in the same way. The common mistake is to see governance as the thing that arrives after innovation, once the “real work” is done. In practice, governance is one of the things that determines whether work can move beyond pilot mode at all. When it is built in too late, it becomes friction. When it is designed from the start, it becomes part of the mechanism that makes AI usable, scalable, and trustworthy.
That is why governance should be treated as part of the growth equation, not just part of the risk conversation. If you want to explore that point in more detail, see AI Governance: From Risk to Competitive Advantage.
Culture Is Not the Soft Side of AI. It Is Part of the Execution Layer
Culture matters because it determines whether the formal design of the organization actually works in practice.
A company can define ownership clearly on paper and still fail if teams continue to protect their silos. It can install governance structures and still fail if those structures are seen as obstacles rather than enablers. It can invest in AI tools and still fail if leaders do not create the trust required for people to change how they work. It can talk about transformation and still fall short if managers reward short-term visibility more than responsible integration.
This is why culture should not be treated as a soft or secondary topic in AI strategy. It shapes execution.
In practice, culture influences whether people surface concerns early, whether they collaborate across functions, whether they trust outputs enough to work differently, and whether leaders can create clarity in situations that are still evolving. It influences whether AI is approached as a capability to build or merely as a set of tools to deploy.
If organizational design makes AI scalable on paper, culture is what makes that design work in reality.
This is also where the “smarter money” argument becomes concrete. Once you accept that value is created by the environment around the technology, the investment question changes. The real issue is not whether you can spend more. It is whether you are funding the conditions that make value possible.
A useful companion read here is How Ethical AI Builds Trust – and Competitive Advantage.
Why More Money Is Often Less Important Than Better Allocation
Once AI becomes strategically important, spending becomes a proxy for seriousness. The bigger the budget, the stronger the commitment seems to be. That logic is easy to understand, and in some cases greater investment is necessary. But as a general principle, it is too blunt to be helpful.
What matters is not simply whether an organization is spending more. What matters is where the money is going.
A company can increase its AI budget and still invest mostly in visibility rather than effectiveness. It can fund a growing portfolio of use cases without strengthening the conditions needed to scale them. It can spend heavily on platforms and still struggle with weak ownership, poor workflow integration, inconsistent adoption, and unresolved accountability.
This is where Jing Hu’s critique becomes especially useful. In her reading of McKinsey’s 2025 survey, she points out that only a small minority of companies fall into McKinsey’s “high performer” group. Her argument is not that AI success is impossible. It is that the wrong lesson is often drawn from these numbers. The fact that a small subset is pulling ahead does not mean every company should respond with more visible AI activity or bigger budgets. It means companies need to understand what actually enables value realization.
That is why the gap in AI outcomes is better understood as a gap in capability than as a simple gap in spend.
Smarter money looks different from reactive money. It does not chase every signal of market urgency. It goes into the conditions that make value repeatable.
It goes into clearer decision-making structures, so initiatives do not stall between functions. It goes into governance that is designed early enough to support scale rather than slow it down. It goes into redesigning operating processes, so AI becomes part of how work is actually done. It goes into leadership alignment, so teams are not pulled in different directions. It goes into change capability, so adoption is more than superficial compliance. And it goes into the cultural conditions that make collaboration, trust, and accountability possible.
That is the real strategic shift. AI advantage does not come primarily from doing more. It comes from becoming better at making AI work.
If you want to take that discussion one step further, see AI Strategy, where Verged lays out how to identify the right AI opportunities, where to invest, and how to connect AI activity to measurable business and customer outcomes.
In Customer Engagement, Weak AI Design Shows Up Fast
For Verged, this matters most clearly in customer engagement.
Customer engagement is one of the places where the gap between AI ambition and organizational reality becomes visible very quickly, because it sits at the intersection of value creation, operational delivery, trust, and governance. This is where AI does not just affect internal efficiency. It shapes actual customer experience, service quality, responsiveness, personalization, escalation, and brand perception.
That makes customer engagement an unusually revealing environment.
If ownership is unclear, customer journeys become fragmented. If governance is too vague, scaling becomes risky or inconsistent. If teams are not aligned, personalization, service logic, and operational workflows begin pulling in different directions. If the culture is weak, frontline teams resist adoption or work around the intended process. If leadership focuses too narrowly on efficiency, the organization may reduce cost-to-serve while quietly weakening trust or experience quality.
In other words, customer engagement makes abstract AI problems concrete.
It shows whether the business can connect technology to outcomes across the full chain from design to delivery. It reveals whether AI is improving the customer relationship, improving the economics of service, and improving decision quality in ways that can be governed and sustained.
For readers who want a broader responsible-AI perspective, see the AI Briefing Series on Responsible AI.
The Better Way Forward for AI Value Realization
If there is one shift leadership teams should make, it is this: move away from asking whether you are doing enough AI and toward asking whether you are investing in the conditions that make AI valuable.
That is a different conversation from the one many companies are having today.
It means looking past the surface indicators of progress and asking tougher questions. Where is AI creating real customer and business value today, and where is it merely creating activity? Where does ownership become unclear? Where do processes prevent scale? Where is governance too late, too heavy, or too disconnected from the work? Where is trust missing? Where is culture making adoption harder than the technology itself?
These are not secondary questions to be addressed once the tools are in place. They are the questions that determine whether the tools will matter.
AI value does not primarily come from platforms and technology. It comes from building the right environment for AI to work. And once that becomes the focus, the broader implication follows naturally: competitiveness and advantage will come less from more money and more from smarter money.
That is not an argument for moving slowly. It is an argument for moving with more discipline. The organizations that succeed will not necessarily be the ones with the most visible AI activity. They will be the ones that invest intelligently in organizational design, governance, ownership, and culture, especially in the parts of the business where value, trust, and execution are tightly connected.
For companies feeling the pressure of AI FOMO, that should be the most reassuring insight of all. The way forward is not to spend blindly. It is to build the conditions under which AI can actually deliver.