

Ask The Expert: Accelerating Research with Mass Spectrometry

Allis Chien, Ph.D., director of the Vincent Coates Foundation Mass Spectrometry Laboratory, Stanford University’s mass spectrometry shared core resource facility, discusses the advances being made in mass spectrometry (MS) instrumentation and applications.

by Tanuja Koppal, Ph.D.


Allis Chien, Ph.D., is director of the Vincent Coates Foundation Mass Spectrometry Laboratory, Stanford University’s mass spectrometry shared core resource facility (http://mass-spec.stanford.edu). The laboratory provides researchers in diverse fields with broad-based mass spectrometry expertise and support, including qualitative and quantitative analyses and proteomics and metabolomics services. It also serves as the Proteomics Shared Resource for the Stanford Cancer Institute and as the mass spectrometry core facility for the Stanford Bio-X Initiative. Beyond making state-of-the-art, user-friendly facilities and services available, the laboratory supports education, methods development, and new applications development designed to meet the rapidly evolving needs of researchers. Dr. Chien graduated from the University of San Francisco with a B.S. in chemistry and an emphasis in biochemistry. She earned her Ph.D. in chemistry from Stanford University in 2000 and then stayed on to establish and grow the mass spectrometry facility.

Allis Chien, Ph.D., director of the Vincent Coates Foundation Mass Spectrometry Laboratory, Stanford University’s mass spectrometry shared core resource facility, talks to contributing editor Tanuja Koppal, Ph.D., about the advances being made in mass spectrometry (MS) instrumentation and applications. The core facility houses a variety of mass spectrometers, including ion trap, quadrupole, triple quadrupole, quadrupole time-of-flight (Q-TOF), and Orbitrap instruments, coupled with HPLC (high-performance liquid chromatography), UHPLC (ultra-high-performance LC), nano-UHPLC, capillary LC, and GC (gas chromatography) systems. Dr. Chien discusses some of the challenges commonly encountered while working with these instruments, such as sample preparation and data analysis, and identifies areas for improvement.

Q: Why do you have so many different types of mass spectrometers in your facility? I guess there is no one size that fits all applications.

A: We do have a diverse group of users—chemists, biologists, clinicians, engineers—and they cover a broad range of applications, so we don’t have the luxury of specializing in any one type of MS. It also has to do with the way the lab has grown over the years. We started out as a chemistry support lab, so initially we had an ion trap mass spectrometer. Then people wanted to do proteomics, so we needed an MS system for proteomics. And then users wanted to quantify specific molecules in targeted analyses, so we needed a triple quadrupole instrument. The lab has grown organically over the years, and we’ve added different types of instrumentation—whatever’s most appropriate for what people need. Until two years ago we were doing proteomics on an ion trap MS; then we invested in an Orbitrap instrument, and that was a game changer. We have an open-access lab, and chemistry and proteomics services are a big part of what we do. Metabolomics is new but growing, and quantitation is the other main application.

Q: Along with HPLC and UHPLC, you have also invested in nano-LC systems coupled to MS. Is that for proteomics?

A: The nano-LCs are definitely for proteomics, or for whenever you need the sensitivity. A general rule of thumb is that when you halve the column diameter, you should improve your sensitivity by about fourfold. We’re also looking to work with capillary LC for some applications; capillary LC is a good compromise between robustness and sensitivity.
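As a quick sketch of the arithmetic behind that rule of thumb (a simplified model that ignores flow rate, ionization efficiency, and other practical factors): on-column dilution scales with the column’s cross-sectional area, so concentration sensitivity S varies roughly with the inverse square of the internal diameter d:

\[
S \propto \frac{1}{d^{2}}
\qquad\Longrightarrow\qquad
\frac{S(d/2)}{S(d)} = \left(\frac{d}{d/2}\right)^{2} = 4
\]

Halving the diameter thus predicts roughly a fourfold gain in sensitivity, consistent with the rule of thumb quoted above.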

Q: In which areas do you see MS lagging?

A: MS is evolving so fast. The quality of the instruments nowadays versus the quality from just five years ago is a huge leap forward. But I think that as scientists, we always want more—more speed, more sensitivity, more accuracy. That’s because it’s really a continuum: every advance in technology opens up more possibilities. So with the new instruments, something that was once impossible to do now becomes just really hard to do. Something that was excruciatingly difficult becomes just difficult, the difficult becomes routine, and the routine becomes easy. So we always want more; we’re never satisfied. But we can do an immense amount today that we couldn’t do just a few years ago, and I am pretty happy with that.

Q: Can you talk about some areas that need improvement?

A: Perhaps it would be the instrument-control software. In and of itself, the software for each instrument works pretty well, although they all have their idiosyncrasies. But when we need different systems to work together, that’s when we get very strange glitches. A lot of the time these glitches are intermittent and hence hard to reproduce. For example, when the MS software interacts with the LC software, or even with our institution’s security software, strange things happen. You’ll come back in the morning and find that your run stopped in the middle of the night, and nobody knows why. Or, even worse, your mass spectrometer stopped collecting data but your LC kept running, so you lost all your samples. In that sense, I think the different software packages just need to play better together.

Q: In terms of the analysis itself, what aspect still remains a big challenge?

A: What I’ve noticed is that the challenges lie at the interfaces. Going from the sample to the instrument, it’s about sample preparation, and that is definitely a big one. The mass spectrometers themselves are actually pretty easy to run, but the liquid chromatography end is sometimes a challenge. Going from the raw data to results, which is data processing, is another tough transition. For bigger projects, you’re often pulling data off different instruments from different manufacturers, and you run into the compatibility problem of how to bring everything together. Researchers want to know more, so a lot of the time the software that the manufacturers provide just isn’t enough, and you have to go to third-party software; bridging that gap is another issue. However, I think one of the biggest challenges is actually talking to the researchers. Biologists and clinicians have their language and their methods, and as MS people, we have a different language. Learning to communicate with them so that we know what they need from the analysis, and they know how to prepare the samples based on what we need, is a challenge. We’re getting better at that as the lab matures, but there’s always someone new and something new to learn.

Q: What are some of the common concerns of users who are involved with large-scale proteomics and metabolomics projects?

A: Data analysis is the big concern. Users know they’re going to get reams of data, and they need solutions for handling that data, particularly statistical analysis. A lot of studies need a statistician on board, and the best-case scenario is when we can actually sit down and plan the study together with the statistician. Experimental design is also important, because it comes back to that interface challenge: the experiment has to produce something that we can put into the mass spectrometer, or at least follow a protocol that we can adapt to be compatible with MS. An example would be immunoprecipitation (IP), in which people traditionally use detergents like sodium dodecyl sulfate (SDS) to elute their proteins from an IP column. Detergents used to be incompatible with MS, but with recently developed protocols for removing SDS, we can now work with those detergent samples. Hence, taking care of the details and talking through the procedures beforehand is important.

Q: For your open-access lab, what types of training do you provide your users?

A: For open access, we have a single quadrupole GC-MS and an LC-MS, and those are the instruments we let people work on. They are under software control, so people are not developing their own methods. They bring in their samples, select a predefined method, and run their samples using an autosampler. Users are not reconfiguring or tuning the instruments, and that’s necessary to keep the instruments running. Those instruments analyze 10,000 to 12,000 samples a year, so they are pretty busy. Usually the people who come in don’t have a lot of, or any, practical MS experience, so the training has to cover the whole range, from sample prep all the way to deciding which method is appropriate for the samples. Then there’s the practical training on how to acquire data, a little bit on data interpretation, what libraries are available, and so on. We do have formal training sessions, but a lot of the actual learning comes afterward. It’s not until users have run a few samples, looked at their data, and tried to make sense of it that they really have questions. Then we’ll sit down with them, one on one, and tutor them.