Google AI Learns Continuously, Gains Human-Like Long-Term Memory with Titans
Google's Titans architecture and MIRAS framework move beyond static AI, enabling models to continuously learn, remember, and adapt like humans.
December 5, 2025

In a significant stride towards creating more dynamic and adaptable artificial intelligence, Google has detailed a new architecture named Titans, complemented by a theoretical framework called MIRAS.[1][2] These initiatives tackle one of the most persistent challenges in AI: developing models that can learn continuously from new information after their initial training.[3] The goal is to move beyond the current paradigm of static AI, whose knowledge remains largely fixed, and cultivate systems with a functional long-term memory that lets them evolve and update in real time.[2][4]
The vast majority of today's advanced AI models, including the large language models that have captured public attention, are based on the Transformer architecture. While revolutionary, Transformers have a fundamental limitation: their knowledge is frozen at the end of a computationally intensive training phase.[5][4] Introducing new information typically requires extensive retraining, a process that is both costly and time-consuming.[6][7] These models also struggle with "catastrophic forgetting," a phenomenon in which learning new tasks causes a model to abruptly lose proficiency in tasks it had previously mastered.[8][6][9] This static nature restricts their ability to adapt to changing environments and retain information over extended periods, a key component of human-like intelligence.[10][11] The cost of self-attention also grows quadratically with sequence length, limiting the ability of Transformers to handle the extremely long contexts required for tasks like genomic analysis or understanding entire documents.[2]
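To make that scaling problem concrete, here is a minimal NumPy sketch (an illustration of the math, not Google's code) showing why full self-attention becomes expensive: every token is scored against every other token, so the score matrix alone has n² entries.

```python
import numpy as np

def attention_scores(q, k):
    # q, k: (n, d) query and key vectors for a sequence of n tokens.
    # Scoring every token against every other token materializes an
    # n x n matrix, so compute and memory grow quadratically with n.
    return (q @ k.T) / np.sqrt(k.shape[-1])

n, d = 4096, 64
rng = np.random.default_rng(0)
q, k = rng.standard_normal((n, d)), rng.standard_normal((n, d))
scores = attention_scores(q, k)  # shape (4096, 4096): ~16.8M entries
```

Doubling the sequence length quadruples that matrix, which is why contexts spanning whole genomes or entire document collections strain standard Transformers.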
The Titans architecture directly confronts these limitations by introducing a sophisticated memory system inspired by human cognition.[8][5] It integrates distinct components for short-term and long-term memory, enabling the AI to process immediate information while simultaneously storing and retrieving historical context.[12][4] A key innovation within Titans is the "surprise metric," which allows the model to prioritize and memorize unexpected or pivotal pieces of information it encounters during use, a process Google refers to as "test-time memorization."[2][5] This mimics the human tendency to remember surprising events more vividly.[13] By dynamically learning what to remember, what to forget, and how to retrieve information, Titans can handle massive amounts of data without losing track of crucial details.[8][12] The architecture is designed to be flexible, with different configurations that allow it to combine historical and current data to enrich context or use long-term memory as a distinct processing layer for deeper integration.[12]
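The Titans paper formalizes surprise as the gradient of the memory's prediction error on incoming data. The following NumPy sketch illustrates the idea for the simplest possible case, a linear memory matrix; the momentum, step-size, and forgetting values here are placeholder assumptions, and the real architecture uses a deeper memory network with input-dependent gates.

```python
import numpy as np

def surprise_update(M, S, k, v, eta=0.9, theta=0.1, alpha=0.01):
    """One sketch of a Titans-style test-time memory update.

    M     : (d, d) linear long-term memory, read as v_hat = M @ k
    S     : (d, d) running "surprise" (momentum over loss gradients)
    k, v  : (d,) key/value pair extracted from the incoming token
    eta   : how much past surprise persists (momentum)
    theta : step size on the momentary surprise
    alpha : forgetting gate that slowly decays old memory
    """
    error = M @ k - v                  # residual of the memory's prediction
    grad = np.outer(error, k)          # gradient of 0.5*||M @ k - v||^2 wrt M
    # Surprise = momentum over past gradients plus the new one; a large
    # residual marks the token as "surprising" and worth memorizing.
    S = eta * S - theta * grad
    # Retention: decay stale associations, then write the surprise in.
    M = (1.0 - alpha) * M + S
    return M, S

d = 8
M, S = np.zeros((d, d)), np.zeros((d, d))
rng = np.random.default_rng(0)
for _ in range(100):                   # a stream of incoming tokens
    key, val = rng.standard_normal(d), rng.standard_normal(d)
    M, S = surprise_update(M, S, key, val)
```

Tokens the memory already predicts well produce small gradients and barely change M; a surprising token produces a large gradient and is written in strongly, while the alpha gate gradually fades information that is no longer reinforced.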
Providing the theoretical foundation for Titans and similar future architectures is the MIRAS framework.[2] Google describes MIRAS as the "blueprint" for building models capable of this real-time adaptation.[2] Instead of compressing information into a fixed state, MIRAS allows a model to actively learn and update its own internal parameters as new data streams in.[2] The framework defines a sequence model through four main components: a memory architecture that stores information, an attentional bias that determines what the model prioritizes, a retention gate that balances learning new knowledge against retaining past information, and a memory algorithm that updates the memory state.[2] MIRAS reinterprets "forgetting" as a form of regularization, ensuring the model does not drift too far from its past state while incorporating new details into its core knowledge.[2][14] The framework aims to combine the speed of recurrent neural networks (RNNs) with the accuracy of Transformers, delivering both efficiency and performance.[2][15]
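Read through the MIRAS lens, each incoming key-value pair poses a small optimization problem: fit the new association (the attentional bias) while staying close to the previous memory state (the retention gate). Below is a toy sketch of that reading, again assuming a linear memory, with a single proximal gradient step standing in for the framework's more general choices of loss and regularizer.

```python
import numpy as np

def miras_step(M, k, v, lr=0.1, lam=1.0):
    """A toy MIRAS-style update on a linear memory (v_hat = M @ k).

    Approximately solves
        M_new = argmin_M  0.5 * ||M @ k - v||^2  +  lam * ||M - M_old||^2
    with one proximal gradient step. The lam-weighted proximity term is
    the retention gate: forgetting recast as regularization, limiting how
    far a single new observation can drag the memory from its past state.
    """
    grad = np.outer(M @ k - v, k)          # attentional-bias gradient
    step = lr / (1.0 + 2.0 * lam * lr)     # retention damps the step size
    return M - step * grad

d = 8
rng = np.random.default_rng(1)
M = np.zeros((d, d))
for _ in range(100):                       # streaming data
    key, val = rng.standard_normal(d), rng.standard_normal(d)
    M = miras_step(M, key, val)
```

With lam = 0 this collapses to plain online gradient descent, the fast RNN-like end of the spectrum; a large lam keeps the memory nearly frozen, trading adaptability for stability.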
The development of Titans and MIRAS signals a potential paradigm shift in artificial intelligence, moving closer to systems that learn and reason in a more human-like manner. The implications for the industry are vast. AI assistants could maintain context across years of conversations or entire bodies of scientific literature, and systems could become far more adept at spotting anomalies in large datasets, such as medical scans or financial transactions.[8] For businesses, this could translate into AI writing assistants that maintain a consistent style across long documents, or more sophisticated content personalization built on extensive user histories.[16] While the technology is still in its early stages and faces challenges in scaling and real-world deployment, early benchmarks show Titans outperforming existing models on tasks including language modeling and commonsense reasoning.[8][2] By enabling AI to learn from its ongoing interactions with the world, these projects lay the groundwork for more intuitive, flexible, and genuinely intelligent systems.[8]
Sources
[3]
[4]
[5]
[6]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]