Mira Murati's Thinking Machines Lab Launches Tinker, Democratizing AI Fine-Tuning

Tinker, Mira Murati's first product, simplifies advanced AI model fine-tuning, empowering broader innovation beyond tech giants.

October 3, 2025

In a significant move aimed at broadening access to powerful artificial intelligence tools, Thinking Machines Lab, the highly anticipated startup from former OpenAI CTO Mira Murati, has launched its first product.[1][2] The new offering, named Tinker, is an application programming interface (API) designed to simplify the fine-tuning of large language models.[1][2] Although early coverage suggested these complex operations could run directly on a personal laptop, Tinker is in fact a managed service: researchers and developers define and control experiments from their local machines, while the computationally intensive training runs on Thinking Machines' internal clusters.[1][3][4][5] This approach removes significant infrastructure hurdles, potentially accelerating innovation and custom model development across the AI landscape. The launch marks the first concrete step from a company that attracted a staggering $2 billion in seed funding before revealing any product, signaling immense investor confidence in Murati's vision for the future of AI development.[1][6][7]
Tinker is engineered to empower a broader range of users, from academic researchers to independent "hackers," by abstracting away the complexities of distributed training.[1][2] Users interact with the system through Python code, specifying the data and algorithms for their fine-tuning tasks without managing the underlying hardware orchestration, resource allocation, or failure recovery.[4][5][8] The service supports a variety of open-weight models, from smaller systems to massive mixture-of-experts models like Alibaba's Qwen3-235B-A22B.[1][2][9] A key technical feature of Tinker is its use of Low-Rank Adaptation (LoRA), a technique that significantly reduces the computational resources required for customization.[3][4] Rather than retraining all of a model's billions of parameters, LoRA freezes the pre-existing model and trains only a small set of added low-rank adapter matrices. This efficiency allows multiple fine-tuning jobs to share the same base model on a GPU cluster, driving down costs and speeding up iteration times.[3][4][10] To further support developers, the company has also released the "Tinker Cookbook," an open-source library with examples and common abstractions for tasks like improving mathematical reasoning, training models to use external tools, and preference learning.[3][9]
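To make the LoRA efficiency argument concrete, the sketch below shows the core idea in plain NumPy. This is a generic illustration of the LoRA technique, not Tinker's actual API; the dimensions, the rank, and the `alpha/rank` scaling convention are illustrative assumptions, and the function name `lora_param_counts` is a hypothetical helper for this example only.

```python
import numpy as np

def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Parameters trained by full fine-tuning vs. a LoRA adapter of the given rank."""
    full = d_in * d_out               # every entry of the weight matrix
    lora = rank * (d_in + d_out)      # A (rank x d_in) plus B (d_out x rank)
    return full, lora

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 4096, 4096, 16, 32

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight (never updated)
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # trainable factor, initialized to zero

# Effective weight: base plus a scaled low-rank update. Because B starts at
# zero, the adapted model initially behaves exactly like the base model.
W_eff = W + (alpha / rank) * (B @ A)

x = rng.standard_normal(d_in)
assert np.allclose(W_eff @ x, W @ x)

full, lora = lora_param_counts(d_in, d_out, rank)
print(f"full fine-tuning: {full:,} params; LoRA rank {rank}: {lora:,} params")
```

For this layer, full fine-tuning would update roughly 16.8 million parameters, while the rank-16 adapter trains about 131 thousand, a reduction of over 100x. This is also why many jobs can share one base model on a cluster: each job stores and updates only its own small `A` and `B` matrices against the same frozen `W`.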
The company behind this new tool, Thinking Machines Lab, was founded in February 2025 by Murati and has rapidly assembled a formidable team of AI veterans, many of whom are fellow alumni of OpenAI.[9][6][11] This collection of talent, including OpenAI co-founder John Schulman, has been a major factor in the startup's ability to secure one of the largest seed funding rounds in Silicon Valley history, with a valuation reaching $12 billion.[1][11] Investors include prominent firms like Andreessen Horowitz and tech giants such as Nvidia and AMD, underscoring the industry's bet on the company's approach.[1][12] The venture is part of a growing trend of well-funded startups founded by ex-OpenAI leaders, each aiming to carve out a distinct path in the rapidly evolving AI ecosystem.[6] Thinking Machines' stated mission is to "empower humanity through advancing collaborative general intelligence," and with Tinker, it is making a strategic bet that the next wave of AI progress will be driven not just by creating larger general-purpose models, but by enabling widespread customization and specialization of existing ones.[2][12]
The launch of Tinker carries significant implications for the AI industry, primarily centered on the theme of democratization.[13][14] Historically, fine-tuning large-scale models has been the preserve of large corporations and well-funded research labs due to the immense cost and technical expertise required to manage the necessary computing infrastructure.[13] By offering a managed service that handles these complexities, Thinking Machines is lowering the barrier to entry for creating custom AI models tailored for specific tasks and domains.[15][13] This could foster a new wave of innovation in fields that require highly specialized AI. Even before its public launch, Tinker was being used by research groups at Princeton University for mathematical theorem proving, Stanford University for chemistry reasoning, and by the AI safety organization Redwood Research for control tasks.[2][9][16] These early applications demonstrate the platform's potential to accelerate cutting-edge research by allowing scientists to focus on algorithms and data rather than infrastructure management.[5] This shift could lead to more diverse and specialized AI applications, moving the industry beyond a reliance on a few monolithic models controlled by a handful of tech giants.[16]
In conclusion, the debut of Thinking Machines' Tinker API represents a pivotal moment in the ongoing evolution of artificial intelligence. It is not a tool that turns a laptop into a supercomputer, but rather one that intelligently connects a developer's local environment to the immense power of dedicated AI hardware. By simplifying the process of fine-tuning, Murati's heavily backed venture is placing sophisticated model customization capabilities into the hands of a much wider audience. The platform is currently in a private beta, with plans to introduce usage-based pricing in the coming weeks.[1][9] The success of Tinker will be a key test of the thesis that the future of AI lies in enabling the broader community to adapt and specialize powerful models, potentially unlocking countless new applications and research directions that are currently beyond the horizon.
