
Trends in Mass Spectrometry

Amrita Cheema, PhD, associate professor and codirector of the Proteomics and Metabolomics Shared Resource at Georgetown University Medical Center, talks to contributing editor Tanuja Koppal, PhD, about the growing use of mass spectrometry as a tool for detecting biomarkers for early prediction and diagnosis of disease, leading to personalized therapy. She highlights that improvements in software
and hardware have led to better resolution and specificity, which in turn have increased the use of this technology for biomarker discovery and will potentially help pave its path into the clinic as a diagnostic tool.

by Tanuja Koppal, PhD

Q: Can you talk about the focus of the work being done in your lab?

A: The focus of our laboratory right now, as a part of the collaborative research at Georgetown University, is preclinical detection of disease. Our recent paper [Nature Medicine 20, 415–418 (2014)] demonstrated the use of a mass spectrometry (MS)-based profiling approach for detecting phenoconversion to Alzheimer’s disease (AD) in asymptomatic individuals. However, this approach can be extended to any biomedical problem, such as the early detection of cancer and other diseases that remain asymptomatic until late stages. Our goal is to identify biomarkers that enable early detection and augment the development of disease-modifying therapeutics, leading to personalized therapy.

Q: Can mass spectrometry be used effectively to profile other biomolecules, besides lipids?

A: For this study we started out with MS-based biomarker discovery using an untargeted metabolomic profiling approach. The underlying idea was to interrogate the metabolome without a bias toward a particular class of metabolites. The goal was to obtain broad coverage of the metabolome by using various extraction procedures and column chemistries. In general, these analyses facilitate the detection and relative quantification of metabolites such as amino acids and nucleotides, as well as polar, semipolar, and nonpolar molecules. However, bioinformatics analysis of this dataset revealed that lipids were the predominant discriminants between the preconvertor and normal control groups. We then pursued them further using a targeted MS approach to characterize and quantify them.
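As a rough illustration of the kind of downstream analysis described here (not the study's actual pipeline), the sketch below ranks features in an untargeted peak-intensity table by a two-group comparison with false-discovery-rate correction. The file name, column labels, and group names are hypothetical placeholders.

```python
# Hypothetical sketch: rank untargeted MS features that separate two groups.
# Assumes a CSV where each row is a sample, each column a feature intensity,
# plus a "group" column labeling "control" vs. "preconvertor" samples.
import numpy as np
import pandas as pd
from scipy import stats


def bh_fdr(p):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(p)
    order = np.argsort(p)
    ranks = np.arange(1, len(p) + 1)
    adj = p[order] * len(p) / ranks
    adj = np.minimum.accumulate(adj[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(adj)
    out[order] = np.clip(adj, 0, 1)
    return out


df = pd.read_csv("peak_intensities.csv")           # hypothetical file
groups = df.pop("group")
X = np.log2(df + 1.0)                              # stabilize the intensity scale

ctrl = X[groups == "control"]
pre = X[groups == "preconvertor"]

# Welch's t-test per feature, then FDR correction across all features.
t_stat, p_val = stats.ttest_ind(ctrl, pre, equal_var=False)

results = pd.DataFrame({
    "feature": X.columns,
    "log2_fold_change": (pre.mean() - ctrl.mean()).values,
    "p_value": p_val,
    "fdr": bh_fdr(p_val),
}).sort_values("fdr")
print(results.head(20))                            # top candidate discriminants
```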


Q: How long have you been working with MS?

A: Our core lab started out with a proteomics focus, and we have been using matrix-assisted laser desorption/ionization (MALDI) and quadrupole time-of-flight (Q-TOF) instruments for nearly a decade now. Around 2007 we branched off into metabolomics, an emerging field that provides information on changes in small-molecule abundance in response to system perturbation. We continue to use Q-TOF MS, but over the years there has been a lot of improvement in these instruments, with respect to both the hardware and the software. This has led to increased sensitivity and resolution, and in turn to better biomarker discovery. Currently, in our lab we use ultrahigh-performance liquid chromatography (UHPLC) for our separations, and having used traditional HPLC in the past, I believe that UHPLC is a superior technology for high-throughput biomarker discovery efforts. The difference is striking: run times are shorter, retention times and peak shapes are more consistent, and resolution and sensitivity are far superior to traditional HPLC.

Q: What were some of the challenges when you were working with MS for proteomics?

A: Challenges with proteomics range from extensive sample preparation and cleanup to long acquisition times and the constraints of the analytical platform used. For example, detection of low-abundance protein biomarkers in a blood sample can be very difficult. However, when it comes to deconvolution of data, proteomics is a much more mature technology compared with metabolomics. The up-front sample preparation protocols for serum/plasma metabolomics are relatively simple, since only the proteins need to be removed; however, the data analysis is a huge challenge. So in some ways the two fields face contrasting challenges.

A major problem with clinical studies, whether in proteomics or metabolomics, is that people do not pay attention to pre-analytical variables. When you are designing a clinical study, you want to make sure that the samples are collected and stored in a consistent manner. One of the reasons we were able to tease out subtle changes in metabolite abundance in our study is that it was a very tightly controlled dataset. There has to be collaborative input between the clinicians and the analytical folks doing the back-end work. Only then can you circumvent the challenges around variability and the analytical challenges of MS to get good results for biomarker discovery.

Q: Why is data analysis such a challenge in metabolomics?

A: In metabolomics you start out with a sample extraction protocol for either broad-range or targeted analysis. If you are using MS, you select the right column chemistry that will enable you to see the differences in metabolites between the samples, without any a priori knowledge. You then end up with thousands of MS peaks, which translate to a huge amount of data. Identifying the metabolites corresponding to these peaks relies on accurate mass and databases that are not completely annotated. So what ends up happening is that nearly 60 percent of what you find is unknown. That’s where the technical immaturity lies right now. We have to catalog these unknown metabolites, which could be potentially useful biomarkers, and come back to them later as the databases mature.
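To make the accurate-mass step concrete, here is a minimal sketch of matching observed m/z values to a metabolite database within a ppm tolerance; anything without a hit stays an "unknown." The tiny in-line database, the [M+H]+ adduct assumption, and the 10 ppm tolerance are illustrative stand-ins for real resources (such as HMDB or METLIN) and study-specific settings.

```python
# Hypothetical sketch: annotate observed m/z values by accurate mass.
PROTON = 1.007276  # proton mass, used for the assumed [M+H]+ adduct

database = {                      # neutral monoisotopic masses (illustrative)
    "glucose":       180.06339,
    "phenylalanine": 165.07898,
    "palmitic acid": 256.24023,
}

def annotate(mz_values, tol_ppm=10.0):
    """Match observed m/z (assumed [M+H]+) to database entries within tol_ppm."""
    hits = []
    for mz in mz_values:
        neutral = mz - PROTON
        best = None
        for name, mass in database.items():
            ppm = abs(neutral - mass) / mass * 1e6
            if ppm <= tol_ppm and (best is None or ppm < best[1]):
                best = (name, ppm)
        hits.append((mz, best[0] if best else "unknown"))
    return hits

observed = [181.0707, 166.0863, 257.2470, 303.2329]   # made-up peaks
for mz, name in annotate(observed):
    print(f"{mz:.4f}  ->  {name}")
```

In practice the unmatched fraction is large, which is exactly the ~60 percent "unknowns" problem described above.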

Q: Do you need the help of a biostatistician to overcome some of these data analysis challenges?

A: Designing a metabolomic study that would yield biomarkers of significance certainly requires the input of a statistician. Moreover, performing clinical studies with a human cohort is very different from studies with cell lines or in animals, which are much more controlled. For translational studies you need an expert to determine the power of the study; that is, determining how many samples you need to get a statistically significant result. The other aspect is how do you treat the outliers and the noise? When we first generated this data we used commercially available software to analyze the findings, and that did not yield optimal results. There was too much cloudiness and noise in the data. Clinical variability comes from factors like diet, age, and gender, and some of those you can’t control. You then need an expert statistician to use methods that can start to tease out the subtle differences in the results. These studies require a close collaboration between clinicians, analytical chemists, and biostatisticians to look at the data from every perspective.
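For the sample-size question, a minimal sketch of a power calculation for a simple two-group comparison is shown below, using statsmodels. The effect size, alpha, power, and the assumed number of tested features are placeholders rather than values from the study; a real metabolomics design would involve a statistician and more nuanced methods.

```python
# Hypothetical sketch: samples per group needed to detect a given effect size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.8,    # Cohen's d (assumed moderate-to-large effect)
    alpha=0.05 / 1000,  # Bonferroni-style correction for ~1000 features (assumed)
    power=0.8,
    alternative="two-sided",
)
print(f"~{n_per_group:.0f} samples per group")
```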

Q: What did you do differently to overcome the challenges you mention?

A: Some challenges are hard to overcome unless we make advancements in analytical platforms, but let me mention a few that we did overcome. We started out performing untargeted plasma profiling using four sets of samples—normal controls, preconvertors, postconvertors, and patients with Alzheimer’s disease. When we looked at the literature, plasma extraction protocols were limited, and we were not sure if they could support a broad-range separation. We tried and tested different extraction procedures and homed in on a sequential extraction technique that would allow us to look at all classes of metabolites—polar, semipolar, and nonpolar molecules. The next challenge was how to select a column that would facilitate the detection of all those metabolites. C18 reverse-phase chromatography is one of the most commonly used techniques for these types of separations, but we ended up using a column developed by Waters that uses charged surface hybrid (CSH) technology. It worked very well for us in terms of resolution, and we could get much more information from the sample. Those are some of the analytical innovations we put in place. We also had to optimize the gradients to suit those workflows. All this paid off, and we did end up getting high-quality, reproducible data.

Best practices have been described and improved upon, and we did a lot of literature searches to see what leading researchers were using. There are protocols describing the use of good quality controls, column conditioning, and other steps that go into ensuring good mass accuracy, good resolution, and no drift in the data from the start of the batch to the end. My lab is also a Waters Center of Innovation, so there is a lot of back-and-forth between the experts at Waters and people in my lab, which helps in method development. It’s a very synergistic relationship, and these are the types of interactions that will help the field progress faster.

Q: Is the investment in UHPLC well justified, or can you use traditional HPLC and make other changes to get the same results?

A: I have used both platforms, and the differences between the two are very apparent. UHPLC has many hardware improvements. For instance, our old HPLC did not have dedicated column heaters, and it was very difficult to manage column temperatures. UHPLCs came with dedicated column temperature controls, so you can achieve better separations without column buildup. The Waters UPLC also has a column manager, so you can use and manage four columns in tandem, and that really increases our throughput, which is important for a core facility like ours. In our lab we have one type of column allocated to one type of matrix so we do not cross-contaminate. The innovation in UHPLC column particle size also enables extremely high sensitivity and narrow peak widths, which makes the chromatography remarkably superior to HPLC. However, one thing you have to be cognizant of is that an old-generation mass spectrometer that cannot scan fast will not benefit from having a UHPLC on the front end. So we had to systematically upgrade all our mass spectrometers. Similarly, if you have a very high-end mass spectrometer coupled with an HPLC, you tend to lose out on a lot of information. Hence, it pays to have a balanced configuration at both the front and back ends.

Q: How are you handling reproducibility challenges working with biological samples?

A: Analytically, you can check for intrabatch inconsistencies by injecting pooled quality controls at regular intervals in the sample queue. This helps monitor retention-time drift as well as intensity variation. Running standard compounds helps monitor the performance of the mass spectrometer, while running solvent blanks reveals sample-to-sample carryover. However, reproducibility of data is affected significantly by pre-analytical variability, which I mentioned earlier, with respect to how samples are collected and stored. In essence, basic science researchers and clinicians have to work together so that the downstream data is of high quality and performs consistently for translational research projects.
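As a toy version of this kind of QC check (not the lab's actual software), the sketch below computes per-feature intensity RSDs and retention-time drift across a series of pooled-QC injections; the file name, column naming scheme, and acceptance thresholds are all hypothetical.

```python
# Hypothetical sketch: monitor pooled-QC injections across a batch.
# Assumes a CSV of QC injections (rows, in run order) with columns named
# "<feature>_intensity" and "<feature>_rt"; thresholds are illustrative.
import pandas as pd

qc = pd.read_csv("pooled_qc_runs.csv")            # hypothetical file

intensity_cols = [c for c in qc.columns if c.endswith("_intensity")]
rt_cols = [c for c in qc.columns if c.endswith("_rt")]

# Intensity reproducibility: relative standard deviation per feature.
rsd = qc[intensity_cols].std() / qc[intensity_cols].mean() * 100
print("Features with RSD > 20%:")
print(rsd[rsd > 20].round(1))

# Retention-time stability: drift between the first and last QC injection.
rt_drift = (qc[rt_cols].iloc[-1] - qc[rt_cols].iloc[0]).abs()
print("\nFeatures drifting more than 0.1 min:")
print(rt_drift[rt_drift > 0.1].round(3))
```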

Q: Can MS be used as a routine tool for clinical diagnostic testing?

A: There are many clinical labs in the U.S. that are already using MS for routine clinical testing, such as steroid panels. What is required is that the people running those assays have clinical chemistry expertise and the technical expertise to use MS, and that the testing be Clinical Laboratory Improvement Amendments (CLIA) compliant. However, a research finding, such as that reported in the Nature Medicine paper, needs to be validated in diverse and independent cohorts to establish the robustness and wide applicability of the biomarker panel before it can be considered for clinical use.

Q: Are there some improvements that you are looking for in MS?

A: One of the challenges we face with the current UHPLC instruments is the low resolution when working with polar metabolites. A large fraction of endogenous metabolism consists of polar metabolites, and it would be beneficial if vendors put in the effort, both on the front and back ends, to improve the detection of these metabolites. The other issue is to push the sensitivity for biomarker discovery. We have to really get down to the lowest end of the dynamic range to detect some of the low-abundance metabolites effectively, especially in the presence of high-abundance molecules. Improvements on these two fronts would really benefit lab managers.


Dr. Amrita Cheema obtained her doctoral degree in biotechnology from Jawaharlal Nehru University, New Delhi, India, and currently serves as an associate professor in the Departments of Oncology and of Biochemistry, Molecular and Cellular Biology; she also co-directs the Waters Center of Innovation-Metabolomics at the Georgetown University Medical Center (GUMC). Dr. Cheema has a strong interdisciplinary background. One of the principal areas of her research has been understanding the molecular events that accompany therapeutic or non-therapeutic radiation exposure, using murine models as well as human cohorts. Another area of study in her laboratory is the identification and validation of prognostic biomarkers of pancreatic cancer and the characterization of molecular profiles of response to therapy in patients presenting with pancreatic cancer. Dr. Cheema is a member of the Metabolomics Research Group (MRG) of the Association of Biomolecular Resource Facilities and is actively engaged in collaborative and independent research focused on biomarker discovery and validation for preclinical detection of disease.