Liron Pantanowitz, MD, is the vice chairman of pathology informatics, and the director of cytopathology at the University of Pittsburgh Medical Center Shadyside. He is also the director of the Pathology Informatics Fellowship Program.
Q: What is digital pathology, and how has it changed the way physicians and scientists interact with pathology data?
A: Most people take whole slide imaging to be synonymous with digital pathology, but digital pathology can mean many more things. You can take gross photos, microscopic photos, fluorescent photos, and more, but for the most part, digital pathology is considered to be whole slide imaging and the capability to digitize or scan your glass slides and then convert them to digital slides, e-slides, or whole slide images. Over the years, we’ve developed different technologies to examine morphology: electron microscopy, for example, followed by the field of immunohistochemistry, which gave us biomarkers for more definitive diagnoses, and then molecular techniques to examine not only the phenotype but also the genotype. Now, with the advent of imaging technology, we’ve been able to digitize slides and convert them to pixels, transforming the field of pathology. For example, it is easier to share those images and convert those pixels into data, which is perfect for artificial intelligence (AI). We’ve come a long way from the time of Virchow, the father of modern pathology, and his microscope, and now pathologists are becoming more like data scientists as a result of digital pathology technology.
Q: How is digital pathology being combined with machine learning and AI?
A: For the first time, we are able to digitize slides in high throughput. Whole slide scanners have been commercially available for about two decades, and that has enabled labs to scan and digitize slides to create large data sets. Now labs have a large amount of data, and this data is attached to metadata such as the pathology report, which provides information about the diagnosis and patient outcome. As a result, these images have begun to be used to train algorithms. At the same time, two important things occurred in the field of computer science: first, computing capabilities and processing speeds increased immensely, and cloud computing became available to us; and second, within the umbrella of AI, there has been a shift away from traditional machine learning toward newer deep learning technology. Unlike traditional machine learning, which requires an expert pathologist to annotate images and train the algorithm, with larger data sets we have been able to use convolutional neural networks that learn on their own to distinguish the features required for an accurate diagnosis.
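The core operation behind those convolutional networks can be sketched in a few lines. The toy example below is illustrative, not from the interview: the simulated tissue patch, the hand-written vertical-edge kernel, and the `conv2d` helper are all assumptions for demonstration. It slides a small filter over a grayscale patch; in a real deep learning pipeline, thousands of such filter weights are learned from labeled whole slide images rather than hand-designed.

```python
import numpy as np

def conv2d(patch, kernel):
    """Valid-mode 2D cross-correlation, the basic operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = patch.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Elementwise product of the filter with one window of the patch
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 "tissue patch": left half dark (0), right half bright (1),
# mimicking a stain boundary.
patch = np.zeros((6, 6))
patch[:, 3:] = 1.0

# A vertical-edge filter; in a real CNN these weights are *learned*
# from labeled slides instead of being written by hand.
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

response = conv2d(patch, kernel)
print(response.shape)  # (4, 4)
```

The filter responds strongly only where the window straddles the dark-to-bright boundary, which is the kind of low-level feature a trained network combines, layer by layer, into higher-level diagnostic cues.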
Q: Is an automated diagnosis possible? What implications might this have for patients?
A: Yes, it’s definitely possible. In fact, it was possible several decades ago, because the field of pathology had already applied automation to cervical cancer screening with Pap smears, and there are many lessons to be learned from it. At one point, there was an overwhelming number of tests that required analysis, which led to rushed screening and a greater number of mistakes and incorrect diagnoses. That prompted the CLIA (Clinical Laboratory Improvement Amendments) regulations of 1988, which limited the number of tests an individual was permitted to screen in a day to reduce the risk of errors. Unfortunately, this created an even greater backlog, and a situation where automation technology was required to solve the problem. Looking back on outcomes data, we observe that in this case, automation improved productivity and accuracy. We have learned from this that AI is best implemented as a solution to an existing problem. In some cases, new AI tools and technologies are being developed that aren’t necessarily needed, and they are met with resistance from pathologists. It is also important to remember that integrating AI into an existing workflow involves a learning curve, and it is going to take time. Pre-imaging factors should also be considered, such as how the tissue was prepared and fixed, how thick the tissue is, and whether the stain is consistent. In the case of Pap smear analysis, the companies involved took control of the entire process: not just the machine learning part but pre-imaging as well.
As for the implications for patients, I think the main thing is that patients will benefit from a more reliable and accurate diagnosis. That’s the promise, and that’s what we are hoping for. At the same time, there could be some indirect negative consequences for patients. For example, if AI competes with and takes away pathologist jobs, there may be fewer pathologists to deal with the increasing number of cases we see as baby boomers age. Another issue is that we are working with narrow AI, in which algorithms are trained to make one specific diagnosis. If there is an anomaly or a disease the algorithm wasn’t trained for, it will be missed. There has to be appropriate oversight in laboratories to make sure these mistakes are caught, or steps in place to prevent them from happening. At this time, we frequently hear the term “augmented pathology,” which means AI is able to augment what we do, but it is not at the point where there is no human intervention at all.
Q: What challenges lie ahead for the use of artificial intelligence in pathology?
A: I think there are several challenges that will have to be overcome before we see this in routine practice:
1. The first challenge is related to IT: Most labs do not currently have the IT infrastructure to support AI, and many hospitals do not have the budget for expensive servers. Further, data that could potentially be stored on the cloud would have to be linked to patient identifiers, which is problematic for hospital or institutional lab security.
2. Mindset: There is still the MD-versus-machine mindset. While many pathologists are excited to see this technology, there is still some resistance, and many haven’t fully embraced it.
3. Ethics: All these data sets have to come from patients, and there is concern over whether these patients are informed, and whether they are providing consent for commercialization. Liability is also a concern when there is a mistake. Is the physician liable? How do you defend AI?
4. The reimbursement barrier: AI is expensive. Even if you factor in increased productivity and the added value, AI still drives up the cost, and there are no billing codes that enable labs to charge for it.
5. Regulations: Vendors and associations like the Digital Pathology Association are trying to work through regulations. They are concerned with whether the FDA will approve these AI algorithms, especially if they completely eliminate the human factor. Time is another concern, because it can take a long time for the FDA to clear an algorithm. This is further complicated by adaptive algorithms that continue to learn and improve from their mistakes; because they keep evolving, they may not receive FDA approval.
6. Generalizability: Most algorithms are trained on limited data sets with limited outcomes. This makes it difficult to know whether, for example, an algorithm that locates cancer in a prostate biopsy and provides a prediction would perform the same for someone living in the eastern United States as for someone living in Indonesia. Despite many claims about AI, we don’t know whether it will generalize to everyone.
7. A potential monopoly: If vendors charge a lot for an AI diagnosis, not everyone may be able to afford it or have access to it. It is possible that this could create some disparities in health care.