AI Agents Will Replace Software Engineers? Why This Narrative Is Getting Ahead of Reality
May 15, 2026 / 16 min read / by Team VE
The Timeline Has Shifted
In June 2025, Sam Altman published a short essay that read less like speculation and more like a status update: “The takeoff has started.” Altman’s public persona reinforces the effect. He does not present himself as a showman. Slim, soft-spoken, often dressed without ceremony, he tends to frame even ambitious claims in calm, almost technical language, which is precisely why statements like this carry weight. They do not sound like hype; they sound like someone trying to describe what he believes is already underway.
In The Gentle Singularity, he laid out a near-term sequence in which AI systems move from assisting with cognitive work to generating new ideas and eventually operating in the physical world. The timeline he described did not stretch across decades. It sits uncomfortably close to the present.
Across the Atlantic, DeepMind’s Demis Hassabis projects a different aura. A former chess prodigy turned neuroscientist, Hassabis speaks with the measure of someone more comfortable inside complex systems than public narratives. In an interview with The Guardian, he offered a more measured timeline. Systems approaching general intelligence, he said, could arrive within five to ten years, possibly sooner, while also stressing how difficult it is to define or verify that threshold in practice.
Where Altman compresses the future, Hassabis tends to qualify it. Where Altman sketches trajectories, Hassabis dwells on definitions. And yet the direction is the same. Neither is talking about a distant horizon anymore.
Then there is a third voice, albeit in a different register. In March 2025, Dario Amodei’s Anthropic warned US officials that “powerful AI systems” could arrive as early as late 2026 or 2027. There was no narrative framing and no philosophical context. It was just a compressed timeline, delivered in the language of policy.
Three different personalities, three different tones, but one shared signal: the future is being pulled closer. If this were the entire story, it might read as a convergence of ideas, independent views gradually aligning. But look beyond the statements themselves, and the industry is behaving in ways that complicate that interpretation.
In February 2026, Reuters reported that the world’s largest technology companies are expected to spend roughly $650 billion on AI infrastructure in a single year. At the same time, McKinsey’s latest global survey on AI shows that organizations are not just experimenting with AI systems. They are increasing investment, embedding AI into core business functions, and restructuring around its expected impact.
The superintelligence timelines start to read differently once placed next to these numbers. An industry that is spending at such a massive scale, renaming entire divisions around “superintelligence,” and competing for talent with nine-figure offers is not simply observing the future. It is acting as if the outcome is close enough to justify immediate, aggressive positioning.
In such an environment, timelines stop behaving like a neutral forecast and tend to become part of the system. A shorter horizon creates urgency as it pulls capital forward, compresses hiring cycles, and makes delay look like risk. A more measured horizon signals scientific caution, regulatory awareness, and distance from the more aggressive claims.
Both can be sincerely believed, but neither is untouched by incentives, which leaves a different question hanging in the air. The question is not when superintelligence will arrive but whether the race to describe it has already become part of the race to build it.
If the debate over superintelligence timelines were purely technical, the differences between leading voices would cluster around evidence. You would expect disagreements about scaling limits, model architectures, or the interpretation of benchmarks. Instead, the divergence appears in how different leaders choose to describe the same future, and in the assumptions they embed within that description.
Sam Altman does not frame superintelligence as a distant milestone waiting to be reached at some undefined point. He frames it as a process that has already begun. When he writes that “the takeoff has started,” the significance lies more in what that statement does to the reader’s sense of time. It collapses the distance between the present and the future by removing the conceptual space that allows organizations to treat superintelligence as a theoretical concern, positioning it instead as an unfolding reality. In doing so, it subtly shifts the burden of proof. Skepticism no longer challenges a speculative claim; it challenges a trajectory that is presented as visible, cumulative, and already in motion.
Set against that is the approach taken by Demis Hassabis, whose language reflects a different intellectual instinct. Despite acknowledging that systems approaching general intelligence could emerge within five to ten years, he returns repeatedly to the difficulty of defining what such a system would constitute in practice. This is not hesitation in the conventional sense. It is an insistence that the category itself remains unstable. The effect is to keep the timeline close while resisting the temptation to turn that proximity into a clean narrative. Where Altman reduces uncertainty by emphasizing momentum, Hassabis preserves it by questioning the terms on which progress is being measured.
Similarly, when Anthropic addressed US policymakers in 2025, the language shifted again, becoming more compressed and more direct. In a policy context, what matters is not how the future is defined, but how soon it could impose risks that require preparation. It is at this point that Dario Amodei’s observation becomes difficult to ignore. In Machines of Loving Grace, he notes that discussions of AI progress can drift into something that resembles persuasion, particularly when the people describing the future are also actively engaged in building it. This is not an accusation of bad faith so much as a recognition of structural tension.
When the same actors occupy the roles of researcher, executive, and public narrator, the boundary between describing what is happening and advocating for a particular interpretation of it becomes inherently unstable. Seen together, these positions do not resolve a simple disagreement about timing. They reveal a more complex pattern in which different ways of speaking about the future align with different institutional needs and intellectual traditions.
Altman’s language emphasizes inevitability by creating a sense of forward momentum that is difficult to ignore. Hassabis maintains proximity while preserving conceptual ambiguity, reinforcing research-driven caution that resists oversimplification. Anthropic’s policy-facing statements compress the timeline further, translating uncertainty into risk. Amodei, while standing slightly apart, draws attention to the difficulty of disentangling analysis from advocacy in a field where both are deeply intertwined.
If the language of superintelligence suggests confidence, the capital flowing into the sector makes that confidence tangible. When Reuters reported that the world’s largest technology companies are expected to invest hundreds of billions in AI infrastructure in a single year, the figure was difficult to contextualize because it has no real precedent in enterprise technology. It is not a gradual build-out of capability, nor a cautious expansion into an emerging field. It is a concentrated, front-loaded commitment that assumes the returns will justify the speed.
At that scale, investment stops looking like a response to progress and begins to resemble a bet on inevitability. Companies do not deploy capital in hundreds of billions to explore uncertain possibilities. They do it when they believe that the underlying trajectory is strong enough that arriving early matters more than being cautious. The question shifts from whether the technology will mature to who will control it when it does.
This shift becomes more visible when attention moves from infrastructure to talent. When Sam Altman claimed that competitors including Meta were offering compensation packages worth as much as $100 million to recruit top researchers, the figure drew attention because of its scale, but the more important signal lay beneath it. The pool of individuals capable of advancing frontier AI is extremely small, and the advantage of securing even a handful of them is considered large enough to justify such extraordinary cost. Talent acquisition has moved from being a supporting activity to being a central lever in determining who moves fastest.
The same logic extends into how companies choose to describe themselves. Meta’s decision to reorganize its AI efforts under a division explicitly named “Superintelligence Labs” is not simply an internal branding exercise. It is a public declaration of intent, one that signals to employees, investors, and competitors that the company sees itself as participating directly in the race toward that outcome. Naming, in this context, shapes how the organization is perceived and how its ambitions are understood.
Individually, each of these developments can be explained in isolation. Large infrastructure spending can be framed as defensive positioning in a competitive market. Aggressive hiring can be understood as a natural result of scarcity. Strategic naming can be dismissed as internal alignment. But taken together, they point to a more important shift. The industry is no longer treating superintelligence as a distant possibility. It is building the financial, technical, and organizational base required to pursue it.
At this level of spending, the issue is less about forecasting and more about commitment. Once companies allocate hundreds of billions of dollars to compute, chips, data centers, energy contracts, and rare technical talent, they create a new competitive baseline. Rivals cannot simply wait for scientific certainty because the infrastructure race has already begun. Suppliers, investors, startups, and enterprise buyers begin reading these commitments as signals of where the market is going.
That is what makes the money consequential. It does not prove that superintelligence is near, but it proves that the world’s most powerful technology companies are preparing as if major capability gains are close enough to justify immediate action. Once that preparation starts at this scale, the forecast becomes less important than the behavior it has already produced.
If the timeline debate were purely strategic, the easiest way to dismiss it would be to assume that the more aggressive forecasts are simply overstated. This interpretation is tempting, particularly given the incentives surrounding capital, talent, and positioning. But it may also be incomplete. There is a second possibility: the optimists may not be wrong at all; they may simply be early.
The last three years have already shown how quickly expectations can be revised. Systems that were once described as narrow have begun to perform across domains. Capabilities that were treated as research milestones have entered production environments. Each step may not have delivered a clean transition to general intelligence, but it has narrowed the distance in ways that were not widely anticipated.
From this perspective, the compression of timelines does not necessarily reflect exaggeration. The people making these forecasts are operating with information that is not fully visible to the market. Internal benchmarks, experimental models, and early-stage systems rarely enter public discourse in complete form. What appears to be overconfidence externally may feel like pattern recognition internally.
This possibility complicates the argument. If the optimistic timelines are even partially correct, then the behavior of the industry begins to look less like strategic positioning and more like rational acceleration. The scale of investment, the intensity of hiring, and the urgency of messaging would be responses to a trajectory that is moving faster than expected.
At this point, the distinction between prediction and strategy becomes harder to draw. The same actions that can be interpreted as narrative-driven also make sense as preparation. This is where the tension begins to sharpen. If the optimists are wrong, the industry is overcommitting based on a narrative that reinforces itself. If they are right, then the industry may still be underprepared for the speed of change it is trying to anticipate.
Either way, the timeline debate cannot be treated as a detached question. Whether it reflects strategy or insight, it is already driving behavior.
The tension between prediction and positioning is not unique to artificial intelligence. It tends to emerge whenever technology sits at the intersection of scientific uncertainty, strategic competition, and large pools of capital.
One of the closest parallels is the early phase of the space race. In the late 1950s and early 1960s, the question was not simply whether humans could reach the Moon, but how quickly it could be done and who would get there first. Public timelines became part of the competition itself. When President John F. Kennedy declared in 1961 that the United States would land a man on the Moon before the decade was out, the statement was a mechanism for mobilizing political will, funding, and scientific coordination. NASA’s own historical records show how that declaration reshaped priorities and accelerated investment, turning an uncertain ambition into a national objective with a defined timeline. The forecast did not merely reflect the effort; it helped drive it.
A similar pattern appeared during the early years of the internet. In the late 1990s, expectations about how quickly digital businesses would transform commerce led to a surge of investment that far outpaced immediate capability, culminating in the dot-com bubble. Many of those bets proved premature in the short term, but the underlying trajectory was not entirely wrong. As later retrospectives have noted, the infrastructure, talent, and experimentation funded during that period laid the foundation for the dominant technology companies that followed.
In both cases, predictions about the future did not simply reflect progress. They influenced how quickly progress was pursued, how much capital was deployed, and how risk was interpreted. Artificial intelligence now sits in a similar position, though at a different scale and with broader implications.
When the stakes are high enough, and the future is uncertain enough, the act of predicting what comes next begins to shape how quickly it arrives.
By this point, superintelligence timelines cannot be treated as simple projections of technical progress. They may still be grounded in real capability gains, internal benchmarks, and scientific judgment, but they now operate inside a much larger system of incentives. The companies making these forecasts are also raising capital, buying compute, hiring scarce researchers, negotiating with policymakers, and shaping how the market understands what comes next.
This makes the forecasts consequential rather than dishonest. In a field where the same organizations are building the systems, funding the infrastructure, and explaining the future to the public, the boundary between analysis and positioning becomes naturally unstable. A timeline can be sincere and strategic at the same time.
For enterprises, this is the most useful reading. The practical question is not whether Sam Altman, Demis Hassabis, or Anthropic has the most correct date. The practical question is what their timelines are already forcing into motion. If the companies closest to frontier AI are investing, hiring, reorganizing, and lobbying as if major capability gains are near, ordinary businesses cannot treat AI planning as a side experiment.
The impact will not wait for a clean AGI declaration. It will show up earlier, in smaller and more operational ways. Hiring briefs will change. Teams will need people who can work with AI systems, not merely use them casually. Vendor selection will become more demanding. Documentation, data access, workflow design, quality control, and accountability will matter more because AI increases both output and risk. Leaders will need to know which tasks can be automated, which require human judgment, and which should be redesigned completely.
This is where forecasts become operating signals. They tell businesses that the competitive environment is already being rearranged around the expectation of a more capable AI. Waiting for certainty may feel disciplined, but in practice it can become its own form of risk. By the time the timeline becomes obvious, the companies that treated it seriously may already have rebuilt the way they hire, train, manage, and deliver work.
For those outside the small circle of companies building frontier AI systems, the debate over superintelligence timelines often feels like a distant, almost abstract question. It is easy to treat it as something to watch rather than something that demands immediate interpretation. After all, if even the people closest to the technology cannot agree on when it will arrive, then waiting for clarity can appear to be a reasonable position.
But that reading underestimates what the timeline debate is already doing. Even in the absence of consensus, these forecasts are shaping decisions in real time. When leaders describe the future as close, capital moves more quickly, talent reallocates toward the perceived frontier, and organizations begin to restructure around the capabilities they expect to need. When others emphasize uncertainty or caution, a different set of behaviors follows, with greater attention to governance, risk, and staged adoption.
In this way, the timeline does not need to be correct to be consequential. It only needs to be credible enough to influence behavior, which is already visible. Companies are redesigning workflows to accommodate AI systems that are still evolving. Hiring patterns are shifting toward roles that sit at the intersection of domain expertise and machine capability. Investment decisions are being made on the assumption that the underlying trajectory will continue, even if its exact destination remains unclear.
None of these changes depend on superintelligence arriving on a specific date. They depend on the belief that something significant is close enough to prepare for. This is what makes the current moment different from earlier waves of technological optimism. The narrative is not trailing the investment. It is moving alongside it, reinforcing it, and in some cases accelerating it. The more the future is described as imminent, the harder it becomes for institutions to justify inaction, and the cost of being late begins to outweigh the risk of moving early.
The question, then, is not whether Altman’s compressed timeline, Hassabis’s cautious one, or Anthropic’s policy-facing warning proves most correct. The more useful question is what these timelines are already doing to the world around them. They are moving capital, redirecting talent, changing boardroom priorities, and forcing companies to ask whether their people, systems, and decisions are ready for an AI-shaped operating environment. Superintelligence may arrive earlier than expected, later than expected, or in a form that makes today’s labels look crude. But the race to prepare for it has already begun. In that race, the most expensive mistake may not be betting on the wrong date. It may be waiting for a date before changing anything at all.