Have you ever accidentally ruined a recipe in the kitchen by adding salt instead of sugar? Because the two look so similar, it's an easy mistake to make. Visual inspection with the naked eye is likewise used in chemistry labs to provide quick, initial assessments of reactions; however, just as in the kitchen, the human eye has its limitations and can be unreliable.
To address this, researchers at the Institute of Chemical Reaction Design and Discovery (WPI-ICReDD), Hokkaido University, led by Professor Yasuhide Inokuma, have developed a machine learning model that can distinguish the composition ratio of solid mixtures of chemical compounds using only photographs of the samples.
The model was designed and developed using mixtures of sugar and salt as a test case. The team employed a combination of random cropping, flipping, and rotating of the original photographs to create a larger number of sub-images for training and testing. This augmentation enabled the model to be developed from only 300 original training images. The trained model was roughly twice as accurate as the naked eye of even the most experienced member of the team.
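The augmentation strategy described here, expanding a small set of photographs into many training sub-images via random crops, flips, and rotations, can be sketched in a few lines. The function name, crop size, and view count below are illustrative assumptions, not details taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, crop_size, n_views):
    """Generate sub-images from one photo via random crop, flip, and 90-degree rotation.

    Note: this is a minimal sketch of the general technique, not the
    researchers' actual pipeline.
    """
    h, w = image.shape[:2]
    views = []
    for _ in range(n_views):
        # Random square crop somewhere inside the image
        top = rng.integers(0, h - crop_size + 1)
        left = rng.integers(0, w - crop_size + 1)
        view = image[top:top + crop_size, left:left + crop_size]
        # Random horizontal flip
        if rng.random() < 0.5:
            view = np.fliplr(view)
        # Random rotation by a multiple of 90 degrees
        view = np.rot90(view, k=rng.integers(0, 4))
        views.append(view)
    return views

# e.g. one 256x256 RGB photo expanded into 20 sub-images of 128x128
photo = rng.random((256, 256, 3))
subs = augment(photo, crop_size=128, n_views=20)
```

In practice each original photograph yields many such sub-images, which is how a few hundred photos can supply enough variety to train an image classifier.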
“I think it’s fascinating that with machine learning we have been able to reproduce and even exceed the accuracy of the eyes of experienced chemists,” commented Inokuma. “This tool should be able to help new chemists achieve an experienced eye more quickly.”
After the successful test case, researchers applied this model to the evaluation of different chemical mixtures. The model successfully distinguished different polymorphs and enantiomers, both of which are extremely similar versions of the same molecule with subtle differences in atomic or molecular arrangement. Distinguishing these subtle differences is important in the pharmaceutical industry and normally requires a more time-consuming process.
The model was even able to handle more complex mixtures, accurately assessing the percentage of a target molecule in a four-component mixture. It also analyzed reaction yield, determining the progress of a thermal decarboxylation reaction.
The team further demonstrated the versatility of their model, showing that it could accurately analyze images taken with a mobile phone, after supplemental training was performed. The researchers anticipate a wide variety of applications, both in the research lab and in industry.
“We see this as being applicable in situations where constant, rapid evaluation is required, such as monitoring reactions at a chemical plant or as an analysis step in an automated process using a synthesis robot,” explained Specially Appointed Assistant Professor Yuki Ide. “Additionally, this could act as an observation tool for those who have impaired vision.”
- This press release was originally published on the Hokkaido University website