David Rimm, MD, PhD, professor of pathology and medicine at the Yale School of Medicine, talks to contributing editor Tanuja Koppal, PhD, about what prevented early adoption of multiplexing and automation in the field of pathology. Although digital pathology and molecular detection have now made automation and multiplexing possible, challenges exist at every step—from standardization of protocols for sample procurement, handling, and storage to reimbursement for clinical pathology services.
Q: Why has pathology been so slow to adopt automation?
A: Pathology has been slow to adopt new technologies because in some ways it’s a very reactive specialty. Morphology testing requires a certain level of expertise, which is what pathologists receive training for in medical school. Once you are a trained pathologist, it takes a relatively small amount of time to look at a simple specimen preparation and provide a diagnosis to triage a patient. This success in surgical pathology and cytopathology has made adoption of automation rather slow. Secondly, it’s very expensive to do the kinds of clinical testing required to generate data that can directly impact patients. Even though a drug can cost thousands of dollars, the reimbursement for the test is very low. Hence, investors and other entities have not invested much in diagnostics because there is not much money to be made. It was a cost-based decision, not a value-based one.
Q: What is now bringing about a change?
A: There are two main reasons why automation is now creeping into pathology. One is the automation of digital pathology through artificial intelligence—AI—and we are in the early stages of that adoption. It is happening through the development of companies [that] recognize that even though there is not much money in reimbursement, there is still value and it can save money. Automation can sometimes prove to be more efficient than a pathologist, or it can be used as a tool by the pathologist to improve the specificity and sensitivity of the diagnosis. The other reason is the growing use of molecular testing, such as DNA- and protein-based assessments. Multiplexed assessment with immunohistochemistry—IHC—started back in the ’70s and early ’80s, using antibodies to detect the presence or absence of a protein and coming up with a more specific diagnosis. One of the earliest companion diagnostic tests was the one for the estrogen receptor in breast cancer patients. If the patient had the receptor, they got one drug; if they didn’t, they got a different drug. In the ’80s, this test was considered one of the earliest “molecular biomarkers,” and now there are many such tests driving targeted therapies for lung cancer, melanoma, and other diseases. These immunohistochemistry-based, fluorescence in situ hybridization-based—or FISH-based—and mutation-based tests are all companion diagnostics, but the reimbursements are still challenging.
Even though some of these tests are quite mediocre—or not very specific—not much investment has been made to improve them. The improvements that have occurred are mostly through academia or by automation of staining by the big histology vendors. So, the bottom line is that there is really no economic driving force to justify automation. Even though the price of the drug is high, the historic use of low-cost diagnostic tests has slowed down the desire for high-cost automated tests. Unless you can come up with a quantitative, multiplexed, automated test that is very inexpensive and shows evidence that it is valuable to patients, it’s hard to get it approved and reimbursed. Another aspect that is on the horizon, which is quite exciting, is the reimbursement for drugs only if they work. For instance, in some situations only one out of four patients benefits from the drug. If the drug is very expensive, the insurance companies prefer to identify the one patient who is likely to benefit from the drug. Drug companies, on the other hand, would prefer the drug [be] given to every possible patient [who] could potentially benefit from the treatment. Insurance companies do not invest in diagnostics, but they can really drive the use of diagnostics, as we have seen recently in the use of CAR-T therapies. Two recent approvals for very expensive drugs have payment linked to patient response. When that happens throughout the industry, then the value of the diagnostic tests will change and people will start investing in the development of better diagnostics.
Q: Has lack of standardization also contributed to the slow adoption of automation?
A: Reimbursement is the biggest factor in why high-cost automated tests are not routinely done. Standardization is certainly important, and we are currently working on PD-L1 standardization for cancer immunotherapy. But even if you have a perfectly standardized test, if it’s not going to be reimbursed, it won’t get done. Many international groups are working on bringing about standardization in certain areas, but it’s certainly not holding back biomarker adoption in the clinic. Some multiparametric tests for gene expression levels or predicting patient response are fairly well established and clinically approved, although they can be done only at a few compliant sites. A few years ago, there was a big push toward setting guidelines for sample processing and handling for various specimen types. I won’t say the problem is solved, but there is a lot more awareness, and people are more inclined to follow the published guidelines.
Q: How much automation and multiplexing do you do at the Yale Pathology Tissue Services lab that you direct?
A: Our service lab tests around 40,000 specimens a year, mostly using the pathologists’ expertise. About 10 to 20 percent of this work has some IHC or companion diagnostic testing component. It is still interpreted by a pathologist but is a molecular-based test that helps augment the diagnosis. For some cancers, we do multiplexed gene mutation testing or IHC panels looking at four or five different proteins, and some of these tests are used by clinicians to pick the right therapy for the patient. We probably do 1,000 to 1,500 such molecular-based tests every year. However, all the quantitative multiplexed testing or use of AI in digital pathology is done only in our research labs and is not happening elsewhere on a routine basis. General pathology has been slow to adopt new technology, waiting for better proof of efficacy and cost-effectiveness.
Q: Where do you see the future going in terms of automated pathology testing?
A: We worked on quantitative immunofluorescence—QIF—which is more accurate than IHC and can be multiplexed. However, 10 years ago, there was no real reason to adopt it in the clinic. There was no companion diagnostic test that needed the quantitation, and QIF was inefficient for semiquantitative assays. Some drugs possibly failed back then due to lack of quantitative diagnostics that could have prevented poor patient selection. Immunotherapy may change diagnostic methods. We are now seeing multiplexed tests being developed on a fluorescence-based platform that are likely to be adopted in the next three to five years. Immunotherapy, which requires multiplexed quantitative testing for identifying the right treatment for the patient, might have been the killer application that we were looking for 10 years ago to bring QIF to the clinic.
David Rimm is a professor in the departments of pathology and medicine (oncology) at the Yale University School of Medicine. He is the director of Yale Pathology Tissue Services. He completed an MD and a PhD at Johns Hopkins University Medical School, followed by a pathology residency at Yale and a cytopathology fellowship at the Medical College of Virginia. He is board-certified in anatomic pathology and cytopathology. His research lab group focuses on quantitative pathology using the AQUA® technology invented in his lab, with projects related to predicting response to both targeted and immunotherapy in cancer. He also has supported projects related to rapid, low-cost diagnostic tests and direct tissue imaging.