Thomas Neubert, PhD, professor of cell biology and director of the New York University Protein Mass Spectrometry Core for Neuroscience, talks to contributing editor Tanuja Koppal, PhD, about analyzing small molecules and proteins from diverse samples using mass spectrometry (MS). He discusses some of the common issues that researchers often overlook when it comes to sample preparation. These issues, although seemingly trivial, have a significant impact on the separation of samples and the analysis of data, and can lead to false discoveries and misinterpretation of results.
Q: Can you describe your work and the types of analyses you do?
A: I have been running a mass spectrometry core lab at the New York University School of Medicine since 1998. We collaborate with many researchers who use MS for their analyses, and at any given time we have many different projects going on. Our main interest is in neuroscience, although we work on other projects as well. Because we work on so many projects, we have to process a variety of samples and use different types of MS instruments. Some samples are tissues; others are cell cultures, plasma, or serum. We analyze both proteins and small molecules in these samples and do most of the sample preparation ourselves.
Q: How important is sample preparation for your analysis? Can you explain some of the details?
A: Sample preparation is very important. For analyzing proteins, we usually get the protein in the form of a pellet or in a sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) gel. The main processing steps include digesting the protein into peptides using trypsin and fractionating or cleaning the peptides before putting them into the MS instrument. The exact processing steps depend on whether we are studying a protein or a small molecule, as the two workflows are very different. For studying posttranslational modifications on proteins, such as phosphorylation, ubiquitination, and glycosylation, we have to enrich the modified peptides so they can be seen using MS. So that's an important step. However, if we just have to identify the proteins and not measure their quantities, then we only fractionate the complex mixture and skip the enrichment. After the samples are processed, we inject them into a nanoflow high-performance liquid chromatography (HPLC) column, which is coupled to the MS instrument. While some labs study intact proteins, we typically study only peptides, which makes MS analysis easier. The small molecules that we study are typically metabolites found in the cell.
Q: Along with sample prep, I am assuming data analysis is also very important?
A: Data analysis is extremely important for what we do. We often analyze a few thousand proteins after each LC-MS run. When we combine data from many different runs, we sometimes have to analyze 8,000-9,000 proteins at a time. We have to identify not only the proteins that are present in the sample, but also how much of each protein is there. We do relative quantification for most of our analysis, and this requires advanced software. The Association of Biomolecular Resource Facilities (ABRF) has a Proteome Informatics Research Group (iPRG), which often conducts various studies. In one such study, they gave a set of MS data to different labs around the world and asked them to analyze it. Using the same data, these labs found different proteins or different quantities of the same protein. That tells you how important data analysis is. This has been very consistent across many studies that the group has done. Even the same lab, when using different instruments or lab personnel, can generate different results. Hence, leading proteomics journals are now requesting that researchers submit their raw data along with the manuscript. Scientists looking at the data then don't have to rely only on the analysis done by the lab publishing the results. Journals also require a certain standard for data analysis, and reviewers are now requesting details on analysis and statistical significance.
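The relative quantification described above can be illustrated with a minimal sketch. This is not the lab's actual software; the protein names, intensities, and the median-normalization step are illustrative assumptions, showing only the basic idea of comparing normalized intensities between two runs as log2 ratios.

```python
import math

# Hypothetical protein intensities (arbitrary units) from a control run
# and a treatment run; names and values are purely illustrative.
control = {"ProteinA": 1.0e6, "ProteinB": 4.0e5, "ProteinC": 2.0e6}
treatment = {"ProteinA": 2.0e6, "ProteinB": 4.2e5, "ProteinC": 5.0e5}

def relative_quant(a, b):
    """Return log2(b/a) per protein, after median normalization of each run."""
    # Normalize each run by its median intensity to correct for
    # differences in total sample loading between runs.
    med_a = sorted(a.values())[len(a) // 2]
    med_b = sorted(b.values())[len(b) // 2]
    ratios = {}
    for prot in a.keys() & b.keys():  # only proteins seen in both runs
        ratios[prot] = math.log2((b[prot] / med_b) / (a[prot] / med_a))
    return ratios

ratios = relative_quant(control, treatment)
for prot, r in sorted(ratios.items()):
    print(f"{prot}: log2 ratio = {r:+.2f}")
```

Real proteomics software would additionally aggregate peptide-level evidence, handle missing values, and attach statistical significance to each ratio, which is why the advanced tools mentioned above are needed.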
Q: Is there any improvement in sample prep and data analysis with this new mandate?
A: The field is improving but there continue to be challenges as new technologies are introduced and data sets get larger. Experiments have to be done carefully, and controls have to be used appropriately, so there are no mistakes in data interpretation. Experimental conditions have to be monitored, and there can be no bias when selecting samples, especially for clinical research. Even in cell biology experiments, conditions have to remain identical, with the exception of a few variables, when it comes to making accurate comparisons. As instruments like MS become more sensitive, researchers can analyze very small amounts of samples. However, sample processing also has to improve to accommodate these small amounts of material. Technologies have now moved to single-cell analysis with RNA sequencing, but analyzing proteins and small molecules in single cells is still quite difficult. To do that, you have to be able to process the cell and extract the analyte in a reliable way, which is very difficult and often involves microfluidics. At the same time, there are innovations taking place all the time to help with sample preparation.
Q: How do you overcome some of the challenges with sample preparation?
A: Working with different types of samples can be challenging, so it’s important to hire people who are skilled and collaborative, so they are willing to share their knowledge and techniques with others. I rely heavily on senior lab members to teach the junior members, so our protocols can be passed down to the next generation. Everyone who joins my lab learns the basics of sample preparation, which is extracting the proteins, digesting them, purifying, and fractionating them. However, some people develop expertise working with a particular sample type. For instance, some have more experience working with small tissue samples, while others work better with cell culture. Every individual has a niche, and sometimes they have to develop new methods for a specific type of sample based on their expertise.
Q: Do you rely on automation to help with sample handling and storage?
A: We mostly use manual sample preparation because we work with so many types of samples on many different projects. The protocols tend to be different for every sample type, which makes it difficult for us to automate. For a large clinical study that we did years ago, we did do some automated sample preparation. Sample handling and all the different steps in sample preparation are equally important. They have to be done in exactly the same way each time to get accurate results. A study done using matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS showed that even freezing and thawing of serum samples could give different results. All of our samples are manually identified and not bar-coded. Along with sample handling and sample preparation, we also keep a very close eye on the performance of the analytical instruments and on the data analysis. We spend a lot of time doing routine quality control using protein digests or samples that have been well characterized, to check the performance of both the HPLC and MS instruments. Every three to six months, our instruments have to be shut down for routine cleaning and maintenance. We can't be sloppy about anything.
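The routine quality-control checks described here can be sketched as a simple trend test. This is an assumed example, not the lab's procedure: the metric (peptide identifications from a well-characterized standard digest), the numbers, and the drop threshold are all illustrative.

```python
# Hypothetical QC log: peptide identifications from a standard digest,
# one entry per QC run, newest last. Values are illustrative.
qc_log = [4120, 4080, 4150, 4011, 3300]

def flag_qc_drift(counts, window=4, drop_fraction=0.15):
    """Flag the latest run if it falls more than drop_fraction below
    the mean of the preceding `window` runs."""
    if len(counts) < window + 1:
        return False  # not enough history to establish a baseline
    baseline = sum(counts[-window - 1:-1]) / window
    return counts[-1] < baseline * (1 - drop_fraction)

# The latest run here is roughly 20% below the baseline of the
# previous four runs, so it would be flagged for investigation.
print(flag_qc_drift(qc_log))
```

A flagged run would prompt checks of the HPLC column, spray stability, and instrument calibration before any real samples are injected.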
Q: What are some of the resources that you rely on for help or guidance?
A: Studies done by research groups like the iPRG and expert advice offered by the ABRF have been very helpful in learning how to do things and identifying things that we should pay attention to. With sample preparation, we often turn to colleagues for help. For data analysis, the software that we typically use has excellent online resources and web-based tutorials. If we encounter problems or if we need features that are nonexistent, then we contact the companies or the researchers who have made the tool available, and they are often very responsive. Sometimes we write the software ourselves, if there is nothing publicly available.
Thomas Neubert received his BS in biology from Georgetown University and his PhD in immunology and infectious disease from Johns Hopkins University. He then did postdoctoral work in the labs of Dr. James B. Hurley at the University of Washington and Lubert Stryer at Stanford University. After three years as senior biochemist at Fournier Pharma GmbH in Heidelberg, Dr. Neubert joined the Skirball Institute at the New York University School of Medicine in 1998, where he is now professor of cell biology and director of the NINDS-funded NYU Protein Mass Spectrometry Core for Neuroscience. His research focuses on the development of new methods for protein analysis by mass spectrometry and the study of cell signaling and posttranslational modification of proteins, mostly in neurons.