Researchers from MIT have released Boltz-1, a generative artificial intelligence (GenAI) platform that predicts the structures of proteins and other biomolecules. Designed to serve as a fully open-source alternative to Google DeepMind’s AlphaFold 3, the platform promises performance equal to that of AlphaFold 3 without any restrictions on commercial use.
Gabriele Corso and Jeremy Wohlwend, both graduate students at MIT, led the development of Boltz-1 alongside Saro Passaro, a research affiliate with the MIT Jameel Clinic, and MIT professors Regina Barzilay (electrical engineering) and Tommi Jaakkola (computer science). The team presented the model on December 5, 2024, at MIT’s Stata Center, according to a press release.
“It is crucial that every biologist, whether in academia or industry, can access such tools [as Boltz-1 or AlphaFold 3],” says Corso in an interview with Lab Manager. Corso and the other researchers successfully replicated AlphaFold 3 and identified improvements to the underlying diffusion model that boosted Boltz-1’s prediction efficiency and accuracy. Corso says that Boltz-1 can now produce a biomolecular complex structure prediction in just 30 to 60 seconds while achieving the same level of accuracy as AlphaFold 3.
Replicating AlphaFold 3
AlphaFold 3 was created by DeepMind, Google’s own AI research laboratory. Attempting to replicate a product built with the backing of one of the largest tech companies on Earth is a daunting task.
“The key [to creating Boltz-1] was understanding resource requirements and working around them creatively,” says Corso. “For any project, some resources are absolutely essential . . . while others are merely advantageous.” Corso’s team “managed without” the latter.
For the essential resources—namely, compute power for training the model—the researchers secured a grant for computational support from the US Department of Energy and partnered with Genesis Therapeutics for infrastructure and machine learning engineering, as well as additional computational support.
Of course, these requirements were for the initial development of Boltz-1. Running a pretrained model is far less intensive than training one from scratch, so Boltz-1 should be within reach of most labs.
How labs can implement Boltz-1
“We have designed Boltz-1 to be simple to install and run,” Corso says.
The main prerequisite to using Boltz-1 is having access to a computer outfitted with modern GPUs, either locally or on cloud instances. “Ideally, these GPUs should have 40+ gigabytes of memory; however, even with 24 gigabytes or 32 gigabytes, the model can handle most input sizes,” he adds.
Interested lab managers can find the source code and detailed installation instructions on the Boltz-1 GitHub repository.
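For readers who want a sense of what installation looks like, the sketch below is an illustration based on the project's pip-based distribution and command-line interface; the exact commands and input formats may change, so the GitHub repository remains the authoritative reference. The input file name is a hypothetical placeholder.

```shell
# Install Boltz-1 from PyPI into a Python environment.
# A machine with a CUDA-capable GPU is assumed; see the GitHub
# repository for current hardware and software requirements.
pip install boltz

# Run a structure prediction on an input file describing the
# biomolecular complex. "example.yaml" is a hypothetical placeholder;
# the repository documents the accepted input formats.
boltz predict example.yaml
```

The predicted structure files are written to an output directory, which can then be inspected with standard molecular visualization tools.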
Next steps for Boltz-1
Corso notes that the team has several goals on the development roadmap, such as enhancing the model’s ability to predict complexes containing nucleic acids (a weak point for AlphaFold 3 as well, according to Corso), optimizing the model for faster runtimes, and integrating Boltz-1 with other tools commonly used in labs for better accessibility.
For those interested in keeping up with Boltz-1 development, Corso and his team run a public Slack channel that now has more than 500 researchers.