Baya Systems Scales Bengaluru Hub to Break Data Movement Limits in AI

Leveraging India’s talent to accelerate software-defined fabric IP, solving the crucial data movement challenge in chiplet architectures.

January 14, 2026

The latest expansion of the Baya Systems Engineering Hub in Bengaluru marks a significant strategic move in the global race to accelerate the development of Artificial Intelligence and High-Performance Computing infrastructure. Located at Brigade Metropolis, the expanded facility is poised to become a core node in the company’s worldwide network of design centers, reinforcing India's status as a preeminent hub for advanced semiconductor and chiplet technology innovation[1][2]. The move is a direct response to the exponential growth in demand for intelligent compute, which has placed unprecedented strain on traditional chip architectures and created a critical industry bottleneck in data movement[3][4][5].
Baya Systems, a pioneer in software-driven, chiplet-ready semiconductor fabric IP, is strategically positioned at the nexus of this challenge. The company's fundamental mission is to solve the problem of efficient data transfer within and between increasingly complex multi-die systems built from chiplets[3][4]. As AI workloads scale from gigabytes to petabytes of data, the speed and energy efficiency of moving that data, not just raw processing power, have become the new limit on performance[3][4][6]. Baya Systems’ flagship offerings, the WeaverPro™ software platform and the WeaveIP™ fabric components, provide a unified, scalable, and highly efficient interconnect solution designed to maximize throughput while minimizing latency, power consumption, and silicon footprint[3][4]. This technology enables designers to combine best-in-class processing elements, including CPUs, GPUs, and specialized AI accelerators, into a cohesive, high-performance system of chiplets[6].
The Bengaluru engineering hub serves a vital function in this global strategy, particularly in a region that contributes nearly 20 percent of the world's total semiconductor design talent[7]. The city has established itself as India's semiconductor powerhouse, hosting major global R&D centers from companies like Intel, AMD, and Texas Instruments, and benefiting from strong government backing through initiatives like the India Semiconductor Mission[8][9]. Baya Systems’ expansion in this environment capitalizes on a deeply experienced talent pool specializing in VLSI design, embedded systems, and advanced software development—the exact skill sets required to engineer the next generation of chiplet-based architectures[8][10]. The teams based in Bengaluru are expected to drive innovation across critical areas, including AI acceleration, data center infrastructure, automotive systems, and advanced high-performance computing[3][11]. This local expertise is crucial for supporting the company’s vision of delivering foundational technologies for future-proof multi-cluster and multi-chiplet designs[3].
The global market context further underscores the urgency of Baya Systems’ engineering focus. Chiplet-based design is rapidly emerging as the preferred method for scaling performance efficiently, controlling costs, and improving time-to-market compared to traditional monolithic System-on-Chip (SoC) designs[3][6]. The adoption of industry-aligned standards, such as Ultra Accelerator Link (UALink™) and Universal Chiplet Interconnect Express (UCIe), has made interoperability between different vendors' components possible[5][12]. Baya Systems is actively contributing to this ecosystem by ensuring its software-defined fabric IP supports these open standards, allowing customers to unlock performance across heterogeneous compute architectures, including those based on both Arm and RISC-V technology[13][14]. For a company experiencing "hypergrowth" following its emergence from stealth mode, the expanded Bengaluru presence provides the operational capacity and intellectual capital needed to meet accelerating customer adoption and a reported five-fold growth in design wins[1][15].
Industry validation of Baya Systems' approach is evident in its corporate backing and early customer adoption. The company recently closed a successful Series B funding round, raising over $36 million with investment from strategic partners and leading venture capitalists[12][16]. Furthermore, the company has attracted an executive and engineering team comprising veterans from industry giants, including Apple, AMD, Arm, and Intel[12][5]. The technical leadership includes Jim Keller, a widely recognized microprocessor architect, who serves as Chairman and provides a visionary foundation for the company’s product roadmap[17][12]. Key partnerships, such as the one with AI chipmaker Tenstorrent, have already demonstrated the real-world impact of Baya's technology, which has been shown to deliver significant performance-per-watt advantages, including up to a 66 percent increase in throughput and a 50 percent reduction in silicon area for specific designs[1][15].
The commitment to expand a key engineering facility in India, alongside global design centers in Santa Clara, Austin, and Cambridge, underscores a strategy of globally distributed innovation[1][18]. The talent within the Brigade Metropolis hub will be instrumental in the ongoing development and post-silicon tuning of the company's modular system IP solutions[6][19]. This continuous development cycle is critical in a fast-moving field like AI, where the software-hardware interface must be constantly optimized to keep pace with evolving models and increasingly complex system designs[3][6]. By scaling its physical presence in a talent-rich region like Bengaluru, Baya Systems is not only meeting its current engineering needs but also making a long-term investment in the future of semiconductor design, positioning itself as a central player in the global effort to resolve the data movement challenge constraining the next wave of intelligent compute[18][20].
