[Image: Robot microfluidics dispensation]

Using High-Throughput Screening to Rapidly Identify Targets

High-throughput screening has become an important tool in drug discovery

High-throughput screening (HTS) is commonly used in the drug discovery process to identify candidates or “hits” that have the desired effect on the target. These hits are then used as a starting point for a more rigorous drug discovery pipeline17. Historically, lead generation for drug discovery was based on empirical observations rooted in theoretical and physiological models, and candidates were then painstakingly evaluated in low-throughput experimental regimes. Over time, advances in recombinant protein technology, assays, automation platforms, robotics, and fluorescence chemistry have converged to enable the screening of 100,000+ compounds per day against a single target4.

Steps in the HTS process

Once the therapeutic target has been identified, the HTS process can be broken down into the following phases: assay preparation, pilot screen, primary screen, secondary screen, and lead selection8,17,18. In the assay preparation phase, samples are prepared and an assay size, type, and detection strategy are selected. Then, a pilot screen is run using a relatively small sample size to validate the robotic, chemical, statistical, and data storage aspects of the process. The primary screen is run against the selected compound library to generate hits. Secondary screening confirms the accuracy of these hits and helps to reliably transform them into leads.

The primary screen is designed to quickly identify hits from expansive compound libraries. For this reason, it is important that test results can be read quickly via a sensor. In practice, detection methods generally rely on fluorescence measurement11,18. Results are presented as a measure of activity relative to positive and negative controls11. The quality of screen results can be assessed by evaluating the signal-to-noise ratio and signal variation within the assay.
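As an illustrative sketch (not drawn from the cited sources), screen quality is often summarized with a separation metric such as the Z'-factor, computed from the positive- and negative-control wells, while individual well readouts are normalized as percent activity relative to those same controls:

```python
import statistics

def z_prime(pos_controls, neg_controls):
    """Z'-factor: separation between the positive- and negative-control
    distributions; values approaching 1 indicate a high-quality assay."""
    mu_p = statistics.mean(pos_controls)
    mu_n = statistics.mean(neg_controls)
    sd_p = statistics.stdev(pos_controls)
    sd_n = statistics.stdev(neg_controls)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

def percent_activity(signal, pos_mean, neg_mean):
    """Normalize a well's raw signal against the controls
    (0% = negative control, 100% = positive control)."""
    return 100 * (signal - neg_mean) / (pos_mean - neg_mean)
```

A plate with tight, well-separated controls scores close to 1, while noisy or overlapping controls push the metric toward zero, flagging the run for review.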

Data organization

Due to the widespread adoption and continued evolution of HTS technologies over the past few decades, the volume of available HTS trial data has grown rapidly. Compound libraries have also grown exponentially in size5. To leverage this data, an IT and informatics infrastructure that complies with FAIR (findable, accessible, interoperable, reusable) data principles is essential14. A variety of software tools exist to capture, process, analyze, visualize, store, compare, and optimize HTS data. Platform-style tools that integrate these tools and systems into a common interface also exist11. A unified data environment is vital to running an effective HTS drug discovery program: it prevents the data silos, reliance on tribal knowledge, and manual data entry and processing that put data integrity and reproducibility at risk.

AI/ML innovations

Widespread adoption of FAIR principles, together with developments in data standardization, organization, and labeling, has expanded the usable size of datasets for analysis. The availability of these large, well-labeled datasets has enabled the evolution of complementary computational techniques. Artificial intelligence (AI) and machine learning (ML) applications in HTS processes can consume existing data libraries to train predictive models3,12,13. Virtual high-throughput screening (vHTS) is an entirely in silico process that integrates AI computational techniques with existing in silico compound libraries to identify “hits”15. When used to complement HTS, vHTS has been shown to increase lead quality11. There are ongoing industry-wide efforts to improve vHTS generalizability to the point where accurate predictions can be made without any prior data from an existing screen1,2,3.

AI-driven iterative screening is another methodology that integrates AI into traditional HTS processes2. In traditional HTS, the hit rate is usually around one percent per assay2. It would seem to follow that screening ever-larger compound libraries will generate a greater number of leads, and this belief dominated the early years of widespread HTS adoption in the pharmaceutical industry. However, larger libraries and increasingly complex screening technologies and chemistry drive up both costs and lead times2. Information-driven screening strategies are now favored to balance cost, time, and hit rate. In iterative screening, results from a small screen are fed into a machine learning model, which then selects the compounds to use for the next screen2. This process can be repeated as many times as desired to further train the model and refine the hit rate in subsequent iterations. Such an approach requires a greater degree of assay automation than traditional HTS, since assays must be assembled on demand based on model outputs18. Hit rates from iterative screening processes can exceed 35 percent while using only a fraction of the compound library2.
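The screen-learn-select loop above can be sketched in a few lines. This toy simulation is purely illustrative (the one-descriptor library, the `assay` stand-in, and the "mean of known hits" model are all assumptions, not any published method); real implementations use molecular fingerprints and trained ML models2:

```python
import random

random.seed(0)

# Toy library: each compound has one descriptor x in [0, 1]; hidden
# activity correlates with x (an assumption made for illustration).
library = [{"x": random.random()} for _ in range(1000)]

def assay(compound):
    """Stand-in for a wet-lab assay readout with measurement noise."""
    return compound["x"] + random.gauss(0, 0.05) > 0.9

screened = {}          # compound index -> observed hit/miss
hits = set()
candidates = random.sample(range(len(library)), 50)  # initial random plate

for round_ in range(4):
    # "Run" the assay on this round's plate and record the results.
    for i in candidates:
        screened[i] = assay(library[i])
        if screened[i]:
            hits.add(i)
    # Trivial "model": the mean descriptor value among observed hits.
    hit_xs = [library[i]["x"] for i, is_hit in screened.items() if is_hit]
    center = sum(hit_xs) / len(hit_xs) if hit_xs else 0.5
    # Select the next plate: unscreened compounds closest to the hit center.
    unscreened = [i for i in range(len(library)) if i not in screened]
    candidates = sorted(unscreened,
                        key=lambda i: abs(library[i]["x"] - center))[:50]

hit_rate = len(hits) / len(screened)
```

Because each round concentrates the next plate around the compounds that behaved like prior hits, the cumulative hit rate climbs well above the random-plate baseline while screening only 200 of the 1,000 compounds.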

Availability

As HTS continues to grow in importance within the drug discovery pipeline, many initiatives to reduce costs and improve accessibility across private, public, and academic spaces have been fruitful. “Cloud labs” are facilities that make HTS and other advanced techniques available on a pay-per-use basis11. Public-private partnerships make huge datasets available to researchers worldwide, and state-of-the-art HTS facilities operated by such partnerships (e.g., the European Lead Factory) make high-quality screening available to non-profit entities10,11. These advances in cost reduction, access, and collaboration have accelerated drug discovery in general. Advances in data management, AI, and ML will enable researchers to improve decision making and leverage ever-growing datasets14. Multidisciplinary advances in automation, computing, data contextualization, and modeling will drive both the capability and the importance of HTS in the years to come.

References:

1.    David, L. et al. Applications of deep-learning in exploiting large-scale and heterogeneous compound data in industrial pharmaceutical research. Front. Pharmacol. 10, 1303 (2019).

2.    Dreiman, G. H. S., Bictash, M., Fish, P. V., Griffin, L. & Svensson, F. Changing the HTS Paradigm: AI-Driven Iterative Screening for Hit Finding. SLAS Discov. 26, 257–262 (2021).

3.    Ekins, S. et al. Data Mining and Computational Modeling of High Throughput Screening Datasets. Methods Mol. Biol. 1755, 197 (2018).

4.    MacArron, R. et al. Impact of high-throughput screening in biomedical research. Nat. Rev. Drug Discov. 10, 188–195 (2011).

5.    Mayr, L. M. & Bojanic, D. Novel trends in high-throughput screening. Curr. Opin. Pharmacol. 9, 580–588 (2009).

6.    Buterez, D., Janet, J. P., Kiddle, S. & Liò, P. Multi-fidelity machine learning models for improved high-throughput screening predictions. (2022) doi:10.26434/CHEMRXIV-2022-DSBM5.

7.    Roy, A., McDonald, P. R., Sittampalam, S. & Chaguturu, R. Open access high throughput drug discovery in the public domain: a Mount Everest in the making. Curr. Pharm. Biotechnol. 11, 764–778 (2010).

8.    High-throughput Screening Steps · Small Molecule Discovery Center (SMDC) · UCSF. UCSF https://pharm.ucsf.edu/smdc/tech-services/hts-steps.

9.    Pusterla, T. High-throughput screening (HTS) | BMG LABTECH. BMG Labtech https://www.bmglabtech.com/en/blog/high-throughput-screening/ (2019).

10.    Helping to create new medicines | European Lead Factory. https://www.europeanleadfactory.eu/.

11.    Trends in High Throughput Screening. 20/15 Visioneers https://www.20visioneers15.com/post/throughput-screening.

12.    Advancing Drug Discovery with Artificial Intelligence. Kantify https://kantify.com/use-cases/drug-discovery.

13.    Artificial Intelligence (AI) in drug discovery. Kantify https://www.kantify.com/insights/artificial-intelligence-ai-in-drug-discovery.

14.    Hanton, S. D. High Throughput Experimentation Drives Better Outcomes | Big Picture | Lab Manager. Lab Manager https://www.labmanager.com/big-picture/lab-automation-benefits/high-throughput-experimentation-drives-better-outcomes-24503 (2020).

15.    Virtual Screening - Creative Biolabs. Creative Biolabs https://www.creative-biolabs.com/drug-discovery/therapeutics/virtual-screening.htm.

16.    Szymański, P., Markowicz, M. & Mikiciuk-Olasik, E. Adaptation of High-Throughput Screening in Drug Discovery—Toxicological Screening Tests. Int. J. Mol. Sci. 13, 427 (2012).

17.    Shukla, A. A. High Throughput Screening of Small Molecule Library: Procedure, Challenges and Future. J. Cancer Prev. Curr. Res. 5, (2016).

18.    Bokhari, F. F. & Albukhari, A. Design and Implementation of High Throughput Screening Assays for Drug Discoveries. High-Throughput Screen. Drug Discov. (2021) doi:10.5772/INTECHOPEN.98733.

19.    Brubacher, M. G. High-throughput Technologies in Drug Discovery | Technology Networks. Technology Networks: Drug Discovery https://www.technologynetworks.com/drug-discovery/articles/high-throughput-technologies-in-drug-discovery-330193 (2020).

20.    Subramaniam, S., Mehrotra, M. & Gupta, D. Virtual high throughput screening (vHTS) - A perspective. Bioinformation 3, 14 (2008).

21.    Wang, Y., Cheng, T. & Bryant, S. H. PubChem BioAssay: A Decade’s Development toward Open High-Throughput Screening Data Sharing. SLAS Discov. 22, 655–666 (2017).


John F. Conway

John F. Conway has 30 years of experience in R&D on all sides of the fence: industry, software and services, and consulting. Most recently, he was global head of R&D IT at AstraZeneca.


Graeme Dennis

Graeme Dennis has held various roles in biopharma informatics since 2004, and is currently a consultant with 20/15 Visioneers. He studied chemistry at Vanderbilt University and lives outside Nashville, Tennessee.


Amandeep Ratte

Amandeep Ratte has a background in chemical engineering, data science, and agricultural technology. He has architected and designed systems to improve automation, accessibility, and reliability in indoor farming. He is currently the head of data science at a leading ag-tech company.