Developers Slower With AI Coding Tools, Despite Feeling Faster
The paradox of AI: Experienced developers are 19% slower with AI coding assistants, yet overwhelmingly perceive a speed boost.
July 11, 2025

A striking new study has challenged the prevailing narrative surrounding the productivity benefits of AI coding assistants, revealing a significant discrepancy between developer perception and actual performance. The research, conducted by the AI research nonprofit METR, found that experienced software developers working on familiar, complex open-source projects were, on average, 19% slower when using AI tools.[1][2][3][4] This counterintuitive finding directly contradicts the developers' own beliefs: before the study, participants anticipated the AI would make them 24% faster, and even after experiencing the slowdown, they still perceived a 20% speed improvement.[5][1][3][4] The results raise critical questions about how productivity is measured in the age of AI and the real-world impact of these increasingly ubiquitous tools.
The METR study employed a rigorous randomized controlled trial methodology, observing 16 experienced developers as they worked on 246 real tasks within their own large-scale open-source projects.[5][6] Participants used popular AI tools, primarily Cursor Pro with models such as Claude 3.5 Sonnet, to complete tasks in codebases with which they were already intimately familiar.[5][2] This setup was intentionally designed to differ from many previous studies that relied on more controlled, less realistic programming challenges or benchmarks, which the METR researchers suggest may overestimate AI capabilities by not accounting for the complexities of navigating large, existing codebases.[2][3] The core finding was unambiguous: when developers had access to AI assistance, their task completion times were significantly longer.[3][4]
A central reason for this unexpected slowdown appears to be the cognitive overhead of managing and verifying AI-generated code. The study noted that while developers spent less time actively coding and searching for information, that time was reallocated to prompting the AI, waiting for its output, and, most critically, reviewing and cleaning up its suggestions.[5][6] An estimated 9% of their time was dedicated to this review process alone.[5] Developers accepted fewer than 44% of the AI's suggestions, citing the AI's lack of contextual knowledge about their large, complex repositories as a primary driver of the slowdown.[5] This highlights a crucial challenge in AI-assisted development: the time and mental effort required to validate the correctness, style, and integration of code one did not write can offset, and even exceed, the time saved by its initial generation.[7][8] This "AI-induced tech debt," as some have termed it, points to a potential hidden cost of relying on these tools.[9]
Despite the measurable decrease in speed, the study also uncovered a fascinating psychological component: developers felt their work was easier and more pleasant when using AI.[5][1] This perception of reduced effort may explain why many participants expressed a continued desire to use the tools even after being presented with the data showing they were slower.[1] The feeling of making faster progress, even when it’s an illusion, can be a powerful motivator. This disconnect between feeling faster and actually being faster underscores the need for more nuanced productivity metrics that go beyond simple task completion times and incorporate factors like cognitive load, code quality, and long-term maintainability.[7][10] While some studies have shown significant productivity boosts, particularly for less experienced developers or on more constrained tasks, the METR research suggests these benefits do not universally apply, especially for seasoned experts working in familiar environments.[11][12][2]
The implications of these findings are significant for the software development industry, which has been rapidly integrating AI tools into its workflows.[13][14][15] While the hype around massive productivity gains persists, this study introduces a critical dose of reality, suggesting that the true value of AI assistants may not be a simple speed increase.[16] Instead, their benefits might lie in reducing the cognitive burden of certain tasks, improving developer satisfaction, or handling boilerplate code, freeing senior developers to focus on higher-level architectural challenges.[17][5] It also signals to AI developers that improving model performance on isolated benchmarks is not enough; real utility will depend on better integration with complex, existing codebases and on reducing the verification overhead borne by users.[5][3] As AI continues to evolve, the industry must move beyond a simplistic view of productivity and develop a more holistic understanding of how these powerful tools can best augment, rather than simply accelerate, the craft of software engineering.