Multiverse Computing Secures $215M to Shrink AI Models by 95%, Reshaping Industry
Multiverse's CompactifAI secures $215M to dramatically shrink AI models, unleashing powerful, efficient intelligence for every device.
June 13, 2025

Spanish quantum software startup Multiverse Computing has secured a landmark $215 million in Series B funding to scale its groundbreaking AI compression technology, CompactifAI.[1][2] This significant investment aims to address one of the most pressing challenges in the artificial intelligence industry: the immense size and cost associated with large language models (LLMs).[3][4] The company's quantum-inspired technology promises to reduce the size of these models by as much as 95% while maintaining their performance, a development that could democratize access to powerful AI and reshape the economics of the entire sector.[5] The funding round was led by Bullhound Capital and saw participation from a diverse syndicate of high-profile investors, including HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Quantonation, and Toshiba.[6][7] This infusion of capital, which consists of a mix of equity, grants, and partnerships, is set to accelerate the global adoption of Multiverse's compressed AI models, potentially revolutionizing the $106 billion AI inference market.[5][1]
The core of Multiverse's innovation is its CompactifAI technology, a model-compression approach rooted in quantum physics principles.[8] Traditional methods for shrinking AI models, such as pruning and quantization, often degrade performance significantly.[4][2] Multiverse instead uses a quantum-inspired technique called tensor networks to simplify complex neural networks.[6][4] This method, pioneered by Multiverse's co-founder and chief scientific officer, Román Orús, analyzes a neural network's internal workings to identify and eliminate billions of redundant parameters without sacrificing the model's core accuracy.[5][6] The company claims the process costs only 2-3% in precision, a stark contrast to the performance trade-offs common with other compression techniques.[6][1] By applying it to existing open-source LLMs like Meta's Llama models and Mistral, Multiverse creates "slim" versions that are not only dramatically smaller but also significantly faster and more efficient.[8]
The implications of this advance are far-reaching and could fundamentally alter how AI is deployed. The heavy computational and energy costs of running large-scale LLMs have been a major barrier to widespread adoption, largely confining them to specialized, cloud-based infrastructure.[5][4] By drastically reducing model size, CompactifAI enables these powerful tools to run on a much wider array of hardware: not only more affordable cloud and private data center setups but also, for ultra-compressed models, directly on edge devices such as personal computers, smartphones, cars, and even small single-board computers like the Raspberry Pi.[5][4][9] This shift promises better performance, improved data privacy (information stays local), and significantly lower costs. Reports indicate that CompactifAI models can run 4 to 12 times faster and cut inference costs (the expense of using a model to generate results) by 50% to 80%.[5][4] The efficiency gains also make for greener AI, since smaller models consume less energy.[2]
With the new injection of capital, Multiverse Computing is poised for rapid expansion. The company, founded in 2019, plans to use the funds to scale product development, expand the adoption of CompactifAI, and deploy its compressed models on a global scale.[7] Having already established a customer base of over 100 clients, including major corporations like Bosch and the Bank of Canada, and holding over 160 patents, Multiverse is well-positioned for growth.[6][7] The firm, which has seen its valuation jump fivefold to over $500 million with this latest funding round, intends to focus on commercializing its compressed models rather than just the compression tool itself.[1][2] This strategic focus on providing ready-to-use, efficient AI solutions could make advanced artificial intelligence more accessible, affordable, and sustainable for a broad range of industries, from finance and healthcare to manufacturing and defense.[7][2]
In conclusion, Multiverse Computing's successful $215 million funding round marks a pivotal moment for the AI industry. The company's CompactifAI technology offers a compelling solution to the critical challenges of LLM size, cost, and energy consumption. By leveraging quantum-inspired principles to compress models without a significant loss of performance, Multiverse is paving the way for a new era of AI that is more efficient, accessible, and deployable across a vast spectrum of devices. The strong backing from a diverse group of international investors underscores the perceived potential of this technology to become a foundational layer in the evolving AI infrastructure. As the company scales its operations and expands its offerings, the ability to run powerful AI on local devices could unlock a wave of innovation, bringing the benefits of enhanced performance, privacy, and cost savings to businesses and consumers worldwide.