Biomarker identification enables researchers to better understand disease mechanisms, develop targeted diagnostics, and tailor therapies to individual patients. This approach, supported by innovations in proteomics, genomics, and AI-driven data analytics, is transforming the way we treat diseases such as cancer and Alzheimer's. Thomas Moehring, senior director, OMICS applications, LSMS, and managing director at Thermo Fisher Scientific, discusses advances and improvements to workflows in these areas:
How important is accurate biomarker identification and verification for developing effective diagnostics and therapies?
Accurate biomarker identification is critical for continuing success on our precision medicine journey. When you think of diseases like cancer, obesity, or Alzheimer's, we must find answers to why the effectiveness of drugs or treatments can vary so significantly among patients. For example, two people with the same type of cancer might receive the same drug; one may recover quickly while the other experiences adverse effects. The reason lies in the unique biochemical pathways we have as individuals.
The closer we can get to accurately identifying biomarker panels that describe the onset and development of a disease, the better we can understand why disease mechanisms manifest differently in patients and why specific treatments work for some patients and not for others.
Achieving this understanding is crucial for advancing personalized medicine, where we can determine which drugs are best for each individual based on their specific protein levels. Identification is only the first step: once we identify these differences in patients, it's critical to verify the accuracy of these biomarkers, which is made possible with high-resolution mass spectrometers that can guide researchers from identification to verification and validation.
How does integrating proteomics with genomics provide a thorough understanding of disease mechanisms and lead to improved treatments or therapies?
Genomics provides the genetic blueprint, showing what's possible within a biological system and which genetic defects may lead to future disease. Proteomics, on the other hand, describes the actual state of a cell and reveals what's happening right now: how those genetic factors manifest in the patient's current disease state. Genomics and proteomics, along with transcriptomics, are all needed to fully understand a disease's impact on the patient. Combining these omics layers offers a complete, comprehensive view, allowing for more precise identification of disease mechanisms and better-targeted treatments.
What are some real-world examples that demonstrate how translational omics achieves breakthroughs faster than traditional approaches?
Recent advances in mass spectrometry mean researchers no longer need to compromise on sensitivity, throughput, or speed. Translational omics allows researchers to identify biomarkers of interest and apply them in clinical research, accelerating new possibilities in drug discovery and development.
Translational omics, for example, has been critical for better understanding Alzheimer's disease and developing effective diagnostic blood tests and assays for earlier intervention and treatment. High-resolution mass spectrometry can measure proteins in the blood to determine the likelihood of amyloid plaques in the brain, helping healthcare providers diagnose the disease earlier, slow its progression, and better manage symptoms.
How do AI and data analytics accelerate biomarker identification and verification?
Over the last 20 to 25 years, we have seen a significant increase in the number of samples that need to be analyzed. Back then, researchers may only have had the ability to analyze 10 or 20 samples; now we're analyzing cohorts in the thousands, making automation essential for every lab. Automating and standardizing sample prep allows researchers to process large numbers of samples much faster while maintaining efficiency and avoiding biases or false positives in biomarker identification.
Data analytics and software connectivity also enable seamless data flow between systems to automate data acquisition and processing, allowing researchers to reach biomarker identification and verification faster. Powerful tools like machine learning and AI help researchers analyze large data sets, identify patterns in the data, and gain insights that would otherwise be challenging to uncover manually. AI tools, in particular, excel at mining data from vast cohorts, which can also help researchers discover potential biomarker panels by analyzing changes in protein levels and interactions over time.
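As a rough sketch of what such a tool does under the hood, the example below ranks candidate biomarker proteins in a labeled cohort. The data are synthetic stand-ins for quantitative MS output, the cohort size is hypothetical, and L1-penalized logistic regression is just one of many models a lab might choose for sparse panel selection.

```python
# Minimal sketch: ranking candidate biomarker proteins in a labeled cohort.
# Synthetic data stand in for normalized protein abundances from MS runs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_proteins = 500, 200                 # hypothetical cohort
X = rng.normal(size=(n_patients, n_proteins))     # protein abundances
y = rng.integers(0, 2, size=n_patients)           # disease vs. control
X[y == 1, :5] += 1.0                              # pretend 5 proteins differ

# An L1 penalty drives uninformative proteins' weights to zero, leaving
# a sparse candidate panel for downstream verification and validation.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)
coefs = model.named_steps["logisticregression"].coef_.ravel()
panel = np.argsort(-np.abs(coefs))[:10]           # top-ranked proteins
print("Candidate panel (protein indices):", panel)
print("Cross-validated AUC:",
      cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```

A real workflow would, of course, verify any such panel on independent cohorts before treating it as a biomarker candidate.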
How can lab managers improve sustainability while doing this type of research?
Lab managers and professionals should continually evaluate their operations and instruments to ensure they're using environmentally friendly options. Everyone should strive to make the most of available resources to reduce the ecological footprint and seek alternative strategies or partnerships to achieve sustainability goals. In omics research and the world of mass spectrometry and chromatography, we're continuing to see more environmentally friendly options, such as replacing oil-based pumps with dry pumps, which consume less power and eliminate oil waste. Changes like this can reduce energy consumption and waste on a much broader scale, creating more efficient and sustainable workflows that improve not only the health of patients but also our environment.
What are tips for improving the speed of these workflows?
End-to-end workflows and connected software environments can help labs accelerate biomarker verification by leveraging highly sensitive and selective instrumentation that increases sample throughput and reduces run time, giving researchers the flexibility and efficiency to advance any omics research. Automation can also help researchers spend less time on sample preparation, from extraction to digestion and TMT multiplexing, and there is a growing emphasis on ready-made kits to streamline sample prep and improve reliability. Lab managers and researchers should also focus on improving the speed of liquid chromatography (LC) separation techniques and on higher-performance columns, such as µPAC columns, to achieve faster sample throughput and consistency.
Additionally, an automated, connected software environment allows researchers to analyze these large data sets more efficiently and speed up identifying and verifying the biomarkers that matter. Modern instrumentation enables researchers to increase the number of samples they can run from 20-50 per day to well over 100 per day, but without the right technology and data handling, you won't be able to maximize this throughput and analysis.
Some of the researchers we work with can now develop methods in just a few days that once took three months. None of this would be possible without the right software strategy.
What do labs need to learn to effectively integrate proteomic and genomic workflows?
Genomics research is currently more mature, having received significant investment over the last 20 years. That attention has allowed labs to create highly standardized, high-throughput workflows with a level of speed, stability, and robustness that proteomic workflows have yet to match. Much of this gap was due to instrument limitations that forced labs to compromise on throughput, sensitivity, or resolution. Recent advances across the entire mass spectrometry and discovery workflow are now allowing us to map the human proteome and match the level of standardization, throughput, and sensitivity that many of us have come to expect from genomics.

Another aspect for labs to consider is how to harmonize transcriptomic data with proteomic and genomic datasets, which will require robust tools for multi-omics integration and data quality. Fortunately, researchers today have the instruments and data processing software to analyze proteomics, genomics, and transcriptomics together, better understand the disease trajectory, and develop personalized treatments for patients.
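As a rough illustration of that harmonization step, the sketch below joins hypothetical proteomic and transcriptomic measurements on shared sample IDs and rescales each layer for joint analysis. The toy values, column names, and sample IDs are all assumptions for illustration; real pipelines would add batch correction, QC filtering, and far larger feature sets.

```python
# Minimal sketch: harmonizing proteomic and transcriptomic measurements
# on shared sample IDs before joint multi-omics analysis.
# All values, sample IDs, and column names here are hypothetical.
import pandas as pd

proteins = pd.DataFrame({
    "sample_id": ["s1", "s2", "s3"],
    "APOE_protein": [1.2, 0.4, 2.1],   # e.g., normalized MS abundance
})
transcripts = pd.DataFrame({
    "sample_id": ["s1", "s2", "s3"],
    "APOE_mRNA": [3.4, 1.1, 4.0],      # e.g., normalized expression
})

# Inner-join on sample ID so each row carries matched omics layers,
# then z-score the numeric columns so the layers share a common scale.
merged = proteins.merge(transcripts, on="sample_id", how="inner")
numeric = merged.select_dtypes("number")
merged[numeric.columns] = (numeric - numeric.mean()) / numeric.std()
print(merged)
```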