Fields Medalist Tao Proposes "Artificial General Cleverness," Trading the AGI Label for Honesty
Mathematician Terence Tao proposes “Artificial General Cleverness” as a practical, statistical reality check for the AGI debate.
December 17, 2025

The escalating debate over the true capabilities of advanced artificial intelligence has prompted a renowned voice in mathematics to propose a fundamental shift in terminology, suggesting "artificial general cleverness" (AGC) as a more accurate and honest label for what current AI systems actually achieve. Terence Tao, a Fields Medalist considered one of the world's foremost mathematicians, argues that the widely touted goal of "artificial general intelligence" (AGI) is a misnomer that overstates the intrinsic cognitive abilities of existing tools. The proposal, initially articulated on the social platform Mastodon, posits that while modern AI is extraordinarily capable, its success is rooted in probabilistic methods and brute-force data processing rather than true, flexible, human-like understanding[1][2]. This distinction moves the conversation away from science-fiction-tinged aspirations of a conscious, reasoning machine toward a more practical assessment of AI as a powerful, but non-intelligent, problem-solving engine[3].
The critique arises from a fundamental difference in how AGI is generally understood versus how current models, particularly large language models (LLMs), operate. Artificial General Intelligence is broadly defined in the AI research community as a hypothetical stage where a system can match or surpass human cognitive abilities across virtually all intellectual tasks, possessing the ability to generalize knowledge and solve novel problems without task-specific reprogramming[4][5]. This ambitious goal is a stated focus for major tech companies like OpenAI, Google, and Meta[4]. Tao, however, defines his alternative, AGC, as the ability to solve broad classes of complex problems through "somewhat ad hoc means"[2][6]. He emphasizes that these means may be stochastic, rely on raw computing power, be ungrounded or fallible, and often trace back to "similar tricks found in an AI's training data"[3][2]. Crucially, in Tao's view, these characteristics disqualify the output from being classified as the result of genuine "intelligence"[3].
Tao's central argument rests on an analogy that captures the sometimes-disappointing nature of algorithmic brilliance, likening the current generation of AI to a "clever magic trick"[3]. The initial awe at a machine generating a complex solution or text can "dissipate" or "transform to technical respect" once the mechanics of how the trick was performed—namely, sophisticated pattern matching and statistical extrapolation over a massive dataset—are understood[3][7]. This, he argues, results in a technology that is simultaneously "very useful and impressive" yet "fundamentally unsatisfying and disappointing" from a philosophical standpoint[8][7]. A key philosophical observation supporting the AGC label is the decoupling of cleverness and intelligence in AI. While these traits are often correlated in human performance, Tao suggests they are "much more decoupled for AI tools," which are optimized specifically for "cleverness"[3][6]. Viewing current tools primarily as a "stochastic generator of sometimes clever - and often useful - thoughts and outputs" may provide a more productive framework for utilizing them to solve difficult problems, he concludes[3].
The mathematician's proposal gains significant traction because it addresses a critique already prevalent among other top AI researchers: the ambiguity and hype surrounding the AGI label[9]. Several computer scientists and ethicists have argued that the AGI discourse undermines the ability to choose effective, well-motivated scientific and engineering goals, fostering an "Illusion of Consensus" and "Supercharging Bad Science"[9][10]. Dario Amodei, co-founder of Anthropic, has publicly expressed a preference for terms like "powerful AI" to avoid the "sci-fi baggage and hype" associated with AGI[11]. Tao’s AGC offers a specific, technically grounded alternative to this problem. Furthermore, the push for AGC relates directly to the contentious issue of AI benchmarking, which has come under scrutiny for generating misleading narratives about AI's capabilities[12]. Tao himself has critiqued claims that AI models have "solved" problems from the International Mathematical Olympiad (IMO) by pointing out that the systems operate under vastly different conditions than human contestants—often involving multiple retries, human-in-the-loop editing, and filtering only the successful attempts[12][13]. He highlights that comparing AI's performance under these ideal, scaffolded conditions to human achievement in a strict, unforgiving setting is a fundamentally "inaccurate 'apples-to-apples' comparison"[14][12].
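The inflation Tao describes in retry-and-filter evaluation follows from elementary probability: if a model solves a hard problem on any single attempt with probability p, then the chance that at least one of k independent attempts succeeds is 1 − (1 − p)^k, which approaches certainty as k grows. The short simulation below illustrates the effect; the function name `pass_at_k` and the 20% per-attempt success rate are illustrative assumptions for this sketch, not figures from the article or from Tao's posts.

```python
import random

def pass_at_k(p_single, k, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that at least one of k
    independent attempts succeeds, given per-attempt success p_single.
    (Illustrative sketch; not an official benchmark metric implementation.)"""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < p_single for _ in range(k))
        for _ in range(trials)
    )
    return hits / trials

# A hypothetical model that solves a contest problem only 20% of the
# time per attempt looks near-perfect once 50 retries are allowed and
# only the successful attempt is reported.
p = 0.2
print(f"pass@1  ~ {pass_at_k(p, 1):.3f}")   # roughly 0.2
print(f"pass@50 ~ {pass_at_k(p, 50):.3f}")  # close to 1.0
```

A human contestant, by contrast, works under something closer to the single-attempt condition, which is why reporting only the filtered successes overstates the comparison.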
The implications of adopting "artificial general cleverness" as a dominant term are profound for the AI industry's self-perception and public communication. By accepting the AGC label, the industry would move away from hyping the arrival of a near-human artificial consciousness and embrace the reality of its current technological achievements: incredibly potent tools for amplification and complex problem-solving. This perspective encourages a focus on human-AI collaboration, where the AI serves as a powerful research assistant to handle the "tedious steps" of computation and calculation, complementing the human's ability to generate "truly new ideas"[15][16]. This pivot from chasing a monolithic, vague AGI goal toward leveraging specific, stochastic cleverness could lead to more honest product development, clearer risk assessment, and ultimately, a more productive research agenda focused on creating tangible, specific, and useful tools for various professional domains[10]. Tao's philosophical reframing serves as a crucial reality check, suggesting that the ultimate value of current AI lies not in its intelligence, but in its unparalleled and ever-increasing cleverness[2].
Sources
[3]
[5]
[7]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]