
Thinking Machines Lab Unveils Tinker AI Tuning Tool

Thinking Machines Lab, founded by former OpenAI researchers, has launched Tinker, a new tool for automating the creation of custom AI models. It aims to make advanced AI capabilities more accessible, supporting open-source models such as Meta's Llama and Alibaba's Qwen.

By Leo Ingram

Leo Ingram is a business technology correspondent for Neurozzio, reporting on corporate partnerships, digital transformation, and the application of AI in various industries. He specializes in analyzing how major technology deals reshape business sectors.


Thinking Machines Lab, a new startup founded by former OpenAI researchers, has launched its first product, Tinker. This tool aims to automate the process of creating custom, advanced AI models. The company believes Tinker will make powerful AI capabilities more accessible to a wider range of users, from businesses to individual researchers.

The launch highlights a growing trend in the artificial intelligence sector: the focus on fine-tuning existing large language models (LLMs) for specific applications. This approach allows developers to tailor AI systems to perform specialized tasks, such as generating legal documents or assisting with medical inquiries.

Key Takeaways

  • Thinking Machines Lab, founded by ex-OpenAI researchers, launched Tinker.
  • Tinker automates the fine-tuning of advanced AI models.
  • The tool supports open-source models like Meta's Llama and Alibaba's Qwen.
  • It aims to make frontier AI capabilities more accessible to diverse users.
  • The startup secured $2 billion in seed funding, valuing it at $12 billion.

New Tool Simplifies Advanced AI Customization

Tinker is designed to simplify what is currently a complex and resource-intensive process. Traditionally, customizing AI models involves significant technical expertise, access to powerful hardware like Graphics Processing Units (GPUs), and specialized software tools. Tinker seeks to remove these barriers.

Mira Murati, cofounder and CEO of Thinking Machines Lab, stated that Tinker will empower more individuals and organizations. She emphasized the goal of making advanced AI research accessible. This accessibility could lead to new discoveries and applications across various fields.

"We believe [Tinker] will help empower researchers and developers to experiment with models and will make frontier capabilities much more accessible to all people," said Mira Murati, cofounder and CEO of Thinking Machines.

The company's strategy centers on the idea that fine-tuning frontier models represents the next significant development in AI. This involves taking pre-trained models and adapting them to perform specific tasks more effectively. For example, an organization could fine-tune a model to excel at drafting specific types of legal contracts.

Fast Fact

Thinking Machines Lab raised $2 billion in seed funding, achieving a valuation of $12 billion before publicly announcing its first product.

How Tinker Works

Tinker currently supports fine-tuning for two prominent families of open-source models: Meta’s Llama and Alibaba’s Qwen. Users interact with Tinker through its API, writing a few lines of code to begin the customization process. The tool offers two main fine-tuning methods.
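
In rough outline, a workflow of that kind might look like the Python sketch below. This is a purely illustrative guess, not Tinker's documented interface: the `tinker` module and every class and method shown here (`ServiceClient`, `create_lora_training_client`, `forward_backward`, `optim_step`, `save_weights`) are hypothetical placeholders assumed for the example.

```python
# Purely illustrative sketch of a hosted fine-tuning workflow. The "tinker"
# module and all of its classes and methods are hypothetical placeholders.
import tinker  # hypothetical client library

# A handful of labeled prompt/completion pairs supplied by the user.
labeled_batches = [
    {"prompt": "Summarize clause 4.2 of this contract: ...",
     "completion": "Clause 4.2 limits each party's liability to ..."},
]

# Connect to the hosted service and pick a supported open-source base model.
client = tinker.ServiceClient()                               # hypothetical
trainer = client.create_lora_training_client(
    base_model="Qwen/Qwen3-8B",                               # Qwen family is named above
)

# Supervised fine-tuning loop: stream labeled data, then take optimizer steps.
for batch in labeled_batches:
    trainer.forward_backward(batch, loss_fn="cross_entropy")  # hypothetical call
    trainer.optim_step()                                      # hypothetical call

# Export the customized weights to run the model in your own environment.
trainer.save_weights("my-custom-model")                       # hypothetical call
```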

The first method is supervised learning. This involves adjusting the AI model using labeled data. For instance, if a model needs to classify images, it would be trained with a dataset where each image is correctly labeled. The second method is reinforcement learning. This increasingly popular technique trains models by providing positive or negative feedback based on their outputs. This helps the model learn desired behaviors over time.
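
To make the distinction concrete, the minimal PyTorch sketch below shows both training signals on a toy model. It is a generic illustration rather than Tinker's internals: the supervised step minimizes a loss against labeled targets, while the reinforcement-learning step samples outputs from the model and scales the update by positive or negative feedback (a simple REINFORCE-style policy gradient). The toy data and reward rule are stand-ins.

```python
import torch
import torch.nn as nn

# A toy classifier stands in for a much larger language model.
model = nn.Linear(16, 4)                       # 16 input features -> 4 outputs
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# --- Supervised learning: adjust the model using labeled data. ---
inputs = torch.randn(8, 16)                    # a batch of inputs
labels = torch.randint(0, 4, (8,))             # the correct label for each input
loss = nn.functional.cross_entropy(model(inputs), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# --- Reinforcement learning: no labels; give feedback on the model's outputs. ---
logits = model(torch.randn(8, 16))
dist = torch.distributions.Categorical(logits=logits)
outputs = dist.sample()                        # the model's own outputs
rewards = (outputs == 0).float() * 2 - 1       # toy +1 / -1 feedback per output
# REINFORCE-style objective: raise the probability of positively rewarded outputs.
rl_loss = -(dist.log_prob(outputs) * rewards).mean()
optimizer.zero_grad()
rl_loss.backward()
optimizer.step()
```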

After fine-tuning, users can download their customized models. They can then run these models in their own environments, offering flexibility and control over their AI applications. This capability is crucial for businesses and researchers who need to integrate AI into their existing systems.

Team Expertise Drives Anticipation

The AI industry is closely watching Tinker's launch due to the strong background of its founding team. Mira Murati previously served as the CTO of OpenAI. She also briefly held the CEO position at OpenAI in late 2023. Her departure from OpenAI came several months before Thinking Machines Lab was publicly announced in early 2025.

Murati cofounded Thinking Machines Lab with several other OpenAI veterans. These include John Schulman, an OpenAI cofounder known for his work on reinforcement learning; Barret Zoph, former vice president of research; Lilian Weng, who focused on safety and robotics; Andrew Tulloch, an expert in pretraining; and Luke Metz, a post-training specialist. This collective experience positions the company as a significant new player in the AI landscape.

Background Information

Fine-tuning involves taking a pre-trained large language model (LLM) and further training it on a smaller, specific dataset. This process optimizes the model for particular tasks, improving its accuracy and relevance for specialized applications without requiring training a new model from scratch.
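
As a simplified, concrete example of that process, the sketch below continues training a small pre-trained causal language model on a handful of domain-specific question-answer lines using the Hugging Face transformers and datasets libraries. The tiny distilgpt2 model and two-example dataset are stand-ins chosen so the script runs quickly; this illustrates fine-tuning in general, not how Tinker implements it.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# A small pre-trained model as a stand-in for a large open-source LLM.
model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A toy domain-specific dataset; real fine-tuning uses far more examples.
examples = [
    "Q: What does clause 4.2 cover? A: Limitation of liability.",
    "Q: Who signs the amendment? A: Both contracting parties.",
]
dataset = Dataset.from_dict({"text": examples}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

# Standard causal-LM fine-tuning: continue training the pre-trained model on new data.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to="none"),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("finetuned-model")   # the specialized model, ready to run locally
```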

Reinforcement Learning for Enhanced Capabilities

John Schulman, a cofounder of Thinking Machines Lab, played a key role in developing the reinforcement learning methods used to fine-tune ChatGPT's underlying language model. This process involves human testers providing feedback, which helps the model generate more coherent conversations and avoid off-topic or undesirable responses.
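
That feedback loop can be illustrated with a toy reward model, sketched below. This is a generic illustration, not the actual ChatGPT training code: human testers are assumed to have picked a preferred response from each pair, the reward model is trained to score the preferred one higher, and that learned score is what later steers reinforcement-learning fine-tuning of the language model. The feature vectors are random stand-ins for real response representations.

```python
import torch
import torch.nn as nn

# A toy reward model: maps a response representation to a single score.
reward_model = nn.Linear(32, 1)               # 32-dim response features -> scalar score
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-in features for (preferred, rejected) response pairs chosen by human testers.
preferred = torch.randn(16, 32)
rejected = torch.randn(16, 32)

# Bradley-Terry-style loss: push the preferred response's score above the other's.
margin = reward_model(preferred) - reward_model(rejected)
loss = -nn.functional.logsigmoid(margin).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```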

Schulman believes Tinker will make it easier for more people to unlock new capabilities from large AI models through reinforcement learning and other advanced training techniques. He noted that Tinker abstracts away the complexities of distributed training while still giving users full control over their data and algorithms.

"There's a bunch of secret magic, but we give people full control over the training loop," Schulman told WIRED. "We abstract away the distributed training details, but we still give people full control over the data and the algorithms."

Thinking Machines Lab began accepting applications for Tinker access recently. The company is not currently charging for its API but plans to introduce fees in the future. This initial free access allows more developers and researchers to experiment with the tool.

Beta Tester Feedback

Early users have provided positive feedback on Tinker's capabilities. Eric Gan, a researcher at Redwood Research, a company focused on AI safety, used Tinker's reinforcement learning features to train models for writing code backdoors. Gan found that Tinker allowed him to extract capabilities from models that would not be apparent through a standard API.

Gan also highlighted the ease of making adjustments during fine-tuning. He stated that Tinker is significantly simpler than performing reinforcement learning from scratch. He added that reinforcement learning is particularly effective for highly specialized tasks where existing models might not perform adequately.

Robert Nishihara, CEO of Anyscale, another beta tester, compared Tinker to existing fine-tuning tools like VERL and SkyRL. Nishihara praised Tinker's balance of abstraction and tunability. He believes many users will find the API valuable.

  • Eric Gan: Used Tinker for training models to write code backdoors, noting its ability to reveal hidden model capabilities.
  • Robert Nishihara: Praised Tinker's combination of simplicity and control, predicting high adoption.

Addressing Misuse and Promoting Openness

The availability of open-source models raises concerns about potential misuse. Thinking Machines Lab currently vets all applicants seeking API access. Schulman indicated that the company plans to implement automated systems to prevent malicious use of Tinker in the future.

Beyond Tinker, Thinking Machines Lab has already contributed to fundamental AI research. The company has published studies on maintaining neural network performance and enhancing the efficiency of large language model fine-tuning. This research underpins the technology used in Tinker.

The company's commitment to making powerful AI tuning accessible also reflects a broader push for openness in the AI sector. Many leading US AI companies keep their most advanced models proprietary, accessible only through APIs. In contrast, China currently leads in the number of open-source frontier AI models, which are used globally.

Mira Murati expressed hope that Tinker can help reverse the trend of commercial AI models becoming increasingly closed. She emphasized the importance of collaboration between frontier labs and academic researchers. A divergence between these groups could hinder the responsible development of powerful AI systems.

"If you consider what's being done in frontier labs and what other smart people in the world of academia, they're sort of diverging more and more," Murati said. "And that's not great if you think about how these powerful systems are coming into the world."