AI Rivals Fund $1 Billion Bet Against Big Tech’s Data Scaling Path
Funded by billions, radical new labs challenge Big AI, betting data efficiency, not size, is the path to AGI.
January 29, 2026

A seismic shift is underway in the foundational research of artificial intelligence, heralded by two new labs launching major funding drives that directly challenge the prevailing wisdom of Big AI. While industry leaders like OpenAI and Google DeepMind continue to pursue ever-larger models with multi-billion dollar compute budgets, a pair of new ventures is placing a massive, multi-hundred-million-dollar wager on the idea that the current path of extreme data-scaling is a dead end for achieving true general intelligence. The upstart lab known as Flapping Airplanes recently closed a $180 million funding round at a $1.5 billion valuation, backed by top-tier firms including GV, Sequoia, and Index, while a competitor, Core Automation, is currently seeking a colossal raise of up to $1 billion to pursue an even more radical architectural overhaul. The combined capital represents a decisive, contrarian injection into a technology sector increasingly defined by the cost and consumption of data, suggesting that a significant faction of the AI research community believes a fundamentally new approach is urgently required.[1][2][3][4]
The central premise driving the formation and valuation of Flapping Airplanes is the "data efficiency problem," a profound inefficiency that plagues all large language models. The startup aims to develop AI systems that can learn more like humans, with a goal of being 100,000 to 1,000,000 times more data-efficient than the current state-of-the-art models. Current generative models require ingesting a significant fraction of the entire internet to achieve their capabilities, a process that is resource-intensive, environmentally costly, and rapidly reaching the limits of available high-quality training data. Flapping Airplanes views this as a foundational architectural failure, arguing that a breakthrough in data efficiency is essential to realize the next phase of artificial general intelligence (AGI). The company’s $180 million raise, which cemented a $1.5 billion valuation, positions it as a major research lab from day one, attracting validation from highly respected figures in the field, including advisors like former Tesla and OpenAI director Andrej Karpathy and Google DeepMind veteran Jeff Dean.[1][2][3][4]
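To put the startup's stated 100,000x-to-1,000,000x efficiency target in perspective, a rough back-of-envelope comparison can be sketched in a few lines. The corpus and human-exposure figures below are illustrative order-of-magnitude assumptions, not numbers from the article or from either lab.

```python
# Back-of-envelope: how large a 100,000x data-efficiency gain actually is.
# Both figures below are rough, illustrative assumptions.

llm_training_tokens = 1e13   # assumed order of magnitude for a frontier model's corpus
human_lifetime_words = 1e9   # assumed words a person hears and reads by adulthood

# Naive ratio: how much more data an LLM consumes than a human learner.
naive_human_advantage = llm_training_tokens / human_lifetime_words
print(f"Naive human data advantage: ~{naive_human_advantage:,.0f}x")  # ~10,000x

# Flapping Airplanes' stated target range, as reported in the article:
target_low, target_high = 1e5, 1e6
print(f"A {target_low:,.0f}x-{target_high:,.0f}x gain would exceed even this "
      f"naive human baseline by {target_low / naive_human_advantage:.0f}x to "
      f"{target_high / naive_human_advantage:.0f}x")
```

Under these assumed figures, the target range would surpass even the naive human-versus-LLM gap, which is one way to read the scale of the lab's ambition.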
The pursuit of data efficiency at Flapping Airplanes is not simply an optimization effort; it involves a deep dive into the underlying mathematical and theoretical mechanisms of machine learning. The lab's researchers are focused on changes at the most foundational level, exploring new loss functions and even the radical possibility of replacing gradient descent, the bedrock optimization algorithm that has powered modern deep learning for over a decade. The firm's investors are betting on the success of these “weird, new ideas” to create a new paradigm that circumvents the bottlenecks of the current transformer architecture. The applications of dramatically more data-efficient AI are far-reaching; the company is initially targeting high-value sectors such as robotics, retail, and scientific discovery, where the cost and availability of labeled, real-world training data remain the primary barriers to deployment. The company is leaning into what one of its backers, Sequoia, reportedly termed the "young person's AGI lab" approach, signaling a belief that fundamental breakthroughs are more likely to emerge from a newly capitalized, research-first environment.[2][3][4]
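For readers unfamiliar with the algorithm these labs are reportedly rethinking, gradient descent can be shown in a few lines. This is a textbook toy example on a one-dimensional least-squares loss, not anything resembling either lab's actual research.

```python
# Minimal gradient descent: the core update loop that the article says these
# labs are exploring alternatives to. Toy 1-D problem, purely illustrative.

def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Repeatedly step against the gradient of a loss function."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)   # the core update rule: w <- w - lr * dL/dw
    return w

# Loss L(w) = (w - 3)^2, whose gradient is 2 * (w - 3); the minimum is at w = 3.
w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_star, 4))  # converges toward 3.0
```

Every parameter update in a modern transformer is, at bottom, this same rule applied across billions of weights, which is why replacing it would count as a change "at the most foundational level."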
Core Automation, the other venture making waves in the funding landscape, represents an even more explicit challenge to the AI establishment, both architecturally and culturally. Founded by Jerry Tworek, a former senior researcher at OpenAI who led work on reinforcement learning and reasoning, the company is seeking up to a billion dollars to pursue a model of continual learning. Tworek left the industry giant, citing a shift within OpenAI toward “more conservative ways” and a decreased focus on the kind of high-risk, fundamental research he believes is necessary for AGI. Core Automation’s ambitious goal is to develop AI that needs 100 times less data than current models and possesses a capability today’s systems critically lack: the human-like ability to learn continuously from real-world experience. The startup’s proposed model, codenamed "Ceres," would function as a single, perpetually learning agent.[1][2][5][6]
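The distinction between continual learning and today's static-dataset training can be sketched schematically: an online learner updates with every new observation and keeps no training corpus at all. The toy estimator below is a generic illustration of that pattern; nothing here reflects the actual design of "Ceres."

```python
# Toy contrast with static-dataset training: a continual (online) learner
# updates from each observation as it arrives, with no stored dataset and
# no retraining pass. Purely schematic illustration.

class OnlineMeanEstimator:
    """Maintains a running estimate from a data stream, one sample at a time."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        # Incremental update: the estimate improves with experience,
        # without ever revisiting past data.
        self.n += 1
        self.mean += (x - self.mean) / self.n

est = OnlineMeanEstimator()
for x in [2.0, 4.0, 6.0]:   # observations arriving over time
    est.update(x)
print(est.mean)  # 4.0
```

Scaling this property from a running average to a general-purpose agent is, in essence, the research problem the startup is raising up to a billion dollars to attack.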
The long-term vision of Core Automation is notably expansive and high-stakes, underscoring the revolutionary nature of the foundational work Tworek is prioritizing. Tworek's conviction is that the current architecture has fundamental limits, necessitating new breakthroughs to reach AGI. The startup’s roadmap extends beyond merely better chatbots, envisioning the development of AI agents capable of profound real-world output, from sophisticated industrial automation and the creation of "self-replicating factories" and bio-machines to, ultimately, the ambitious prospect of planetary terraforming. The company's goal to reduce the reliance on vast, static datasets and replace it with continuous, experience-based learning is seen by Tworek as an essential step for building general intelligence. His departure and subsequent mega-fundraising effort highlight a growing schism in the AI community, where some of the field's top researchers are choosing to build independent labs to explore paths that diverge significantly from the data-and-compute-scaling strategy of the incumbent trillion-dollar companies.[1][2][5][6]
Taken together, the massive funding rounds pursued by Flapping Airplanes and Core Automation signify more than just another influx of capital into the AI boom; they represent a serious, well-funded counter-movement in the quest for AGI. The founders and investors behind these ventures are making an explicit statement: the path to the next generation of general intelligence does not lie in simply adding more parameters and more data to the transformer models that dominate today, but in a complete architectural and algorithmic overhaul. Their success would validate the theory that fundamental research, rather than brute-force scaling, holds the key to true intelligence, potentially decentralizing the power structure of the AI industry. Their failure, however, would reinforce the dominance of the current data-centric giants, potentially costing hundreds of millions of dollars in a high-stakes bet that the industry’s current trajectory has reached its philosophical and technical limits.[1][2][3][4]