Mastering Molecular
Biology: Tools, Techniques,
and Technologies
Optimize molecular workflows with critical
insights and advanced technologies
COMPARE
Methods and Platforms
ACHIEVE
Accurate Detection &
Quantification
EXPLORE
Emerging Innovations
MOLECULAR BIOLOGY
RESOURCE GUIDE
Table of Contents
Navigating Advanced Workflows for Precision
and Insight ....................................................................3
Core Molecular Biology Workflows .................................5
Optimizing PCR Experiments.............................................. 6
Optimizing PCR Workflows: Four Strategies for Lab Managers.. 7
Methods for Protein Identification and Sequencing.................. 9
Next-Generation Sequencing: Library Preparation..................12
From Data to Discovery: Crafting Sequencing
Bioinformatics Workflows.................................................14
Digital Solutions to Streamline Cell Line Development Processes 18
Precision Detection: Immunoassays, Antibodies, and
Microplate Technology .................................................21
ELISA Tests: Beyond Just Antibody Detection ........................ 22
Advancing Precision in Science and Medicine with
Recombinant Antibodies ................................................. 26
Overcoming Challenges of Cell Culture
Monitoring: New Solutions to Old Problems........................ 28
Automated Microplate Technology: Application Highlights.......31
Single-Mode vs. Multimode Microplate Readers .................. 33
Omics Technologies in Molecular Biology ..................... 36
Integrating Multi-Omics Approaches in Life Science Research .. 37
New Opportunities in Proteomics.......................................41
Navigating Advanced
Workflows for Precision
and Insight
From clinical diagnostics and genetic research to
biotechnology development: essential strategies, tools, and
technologies for molecular biology laboratories
The ever-growing demand for precision, reproducibility, and scalability is perhaps most
acutely felt in molecular biology labs. With rapid advancements in instrumentation and the
continuous evolution of molecular techniques, lab managers and researchers must stay current
and continually refine their methods, which often means returning to the basics. Amidst all of
the exciting new developments, it’s easy to neglect the foundations.
This resource guide reviews critical molecular biology workflows, highlighting foundational
methods such as PCR, sequencing technologies, and bioinformatics tools. It offers practical
strategies to optimize workflows and enhance lab efficiency, accuracy, and data integrity, whether you’re improving existing operations, expanding into new fields, or just getting started.
For readers considering integrating new multi-omics approaches, this guide makes the
landscape more approachable, touching on its benefits, challenges, implementation paths,
and real-world research and clinical applications. A primer on proteomics, including mass
spectrometry, immunoassays, and emerging NGS-based approaches, is particularly helpful
for decision-makers selecting methods for new workflows.
Additional content explores new technologies for cell culture monitoring to support metabolomics, how to craft bioinformatics pipelines, and advances in detection methodologies—from
immunoassays and recombinant antibodies to microplate technologies designed for greater
throughput and reproducibility.
Framed to support decision-making from culture to analytics, this resource guide
assists labs in navigating new and established technology and techniques while
balancing budget, compliance, and expertise. It empowers lab leaders and
researchers to optimize operations, maintain robust data quality, and achieve
reproducible, impactful outcomes in their scientific endeavors.
Chapter One
Core Molecular
Biology Workflows
From PCR preparation to bioinformatics, molecular biology labs rely on optimized workflows, advanced instrumentation, and ever-evolving digital tools. Successful outcomes
depend on workflow efficiency, proper planning, and choosing the right equipment.
Beyond technical proficiency, mastering these processes requires a deep understanding
of the tools, strategies, and decisions that shape successful experiments and reproducible results.
This chapter explores core molecular biology workflows, including PCR, sequencing, and
bioinformatics. Readers will gain insight into how to select and operate different technologies, while also learning how to enhance lab efficiency, accuracy, and scalability.
Optimizing PCR Experiments

The Polymerase Chain Reaction (PCR) method is used to amplify target DNA sequences. Primer
design, reaction mixtures, primer melting temperatures, and thermal cycling conditions contribute
to the success of the experiment. Optimizing these conditions and taking steps to prevent
contamination help to ensure successful outcomes.
A primer is a short nucleic acid sequence required as a starting point for DNA synthesis.
Primers flank the DNA target region, with one annealing to the plus strand (5’ → 3’) and the
other annealing to the minus strand (3’ → 5’). DNA polymerase then extends the primers.
The PCR reaction mixture contains water, buffer, polymerase, dNTPs, MgCl2, primers, and the DNA template.

To minimize the risk of pipetting errors, prepare a PCR master mix consisting of sterile distilled water, buffer, dNTPs, primers, and polymerase in a single tube. Aliquot the master mix, then add MgCl2 and DNA template.
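To make the aliquoting arithmetic concrete, the short Python sketch below scales per-reaction volumes up to a master mix for a chosen number of reactions. The reagent volumes and the 10 percent pipetting overage are illustrative assumptions, not a validated recipe.

```python
def master_mix_volumes(per_rxn_ul, n_reactions, overage=0.1):
    """Scale per-reaction volumes (µL) to a master mix for n reactions.

    A 10 percent overage is added by default to cover pipetting losses.
    Reagent names and volumes are illustrative placeholders only.
    """
    factor = n_reactions * (1 + overage)
    return {reagent: round(vol * factor, 1) for reagent, vol in per_rxn_ul.items()}

# Hypothetical per-reaction recipe (µL); MgCl2 and template are added after aliquoting.
recipe = {"sterile water": 15.5, "10x PCR buffer": 2.5, "dNTP mix": 0.5,
          "forward primer": 1.0, "reverse primer": 1.0, "polymerase": 0.5}

for reagent, volume in master_mix_volumes(recipe, n_reactions=24).items():
    print(f"{reagent}: {volume} µL")
```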
Thermal cyclers heat and cool the reaction mixture to facilitate denaturation, annealing, and
elongation. Cycle duration and temperature depend on characteristics of the polymerases,
buffers, template size, and GC content of the DNA.
Tips for Primer Design
~ GC content between 40-60 percent.
~ 3’ end of the primer should contain a
C or G. The hydrogen bonds in GC pairs
will promote binding and ensure DNA
ends stay annealed.
~ Avoid primer dimers: ensure the 3’ ends of a primer set are not complementary
~ Avoid hairpin loops: ensure the 3’ end of a single primer is not complementary to other sequences in the primer
~ Avoid slipping: do not use single base runs of more than 4, and do not use dinucleotide repeats (a scripted check of several of these rules follows this list)
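Several of these rules are easy to script. The Python sketch below applies rough heuristics for GC content, the 3’ G/C clamp, and complementary 3’ ends within a primer pair; it is a quick sanity check under simplifying assumptions, not a substitute for dedicated primer design tools.

```python
def gc_content(seq):
    """Percent G+C in a primer sequence."""
    seq = seq.upper()
    return 100 * (seq.count("G") + seq.count("C")) / len(seq)

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.upper()[::-1].translate(str.maketrans("ACGT", "TGCA"))

def check_primer_pair(fwd, rev, window=5):
    """Flag common problems from the tips above (exact-match heuristics only)."""
    issues = []
    for name, p in (("forward", fwd), ("reverse", rev)):
        if not 40 <= gc_content(p) <= 60:
            issues.append(f"{name} primer GC content outside 40-60 percent")
        if p.upper()[-1] not in "GC":
            issues.append(f"{name} primer lacks a 3' G/C clamp")
    # Primer-dimer heuristic: are the two 3' ends complementary to each other?
    if fwd.upper()[-window:] == revcomp(rev)[:window]:
        issues.append("3' ends of the primer pair appear complementary (dimer risk)")
    return issues or ["no obvious issues found"]

print(check_primer_pair("ATGCGTACCTGAAGTCCTGC", "TTGACCAGGTAGCAGGACTT"))
```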
Tips for Reaction Mixture Preparation

~ Magnesium is a DNA polymerase cofactor.
Magnesium concentration must be
optimized to ensure maximum yield
and specificity.
~ When selecting a buffer, refer to
the guidelines provided by the
DNA polymerase supplier. Buffer
recommendations will differ depending
on the DNA polymerase in use.
~ Add reagents to the reaction mixture
in the following order:
o Sterile distilled water
o PCR buffer
o dNTPs
o MgCl2
o Forward primer
o Reverse primer
o Template DNA
o Polymerase
There are many databases and
primer design tools available to
simplify the process.
It is important to optimize the input amount of DNA, as too much increases the risk of nonspecific amplification and too little will reduce yields.
Melting Temperature

Primer Tm is the temperature at which half the DNA duplex dissociates and becomes single stranded. The primer melting temperatures should differ by no more than 5°C from each other. The conventional calculation for Tm is as follows:

Tm = 2°C × (A + T) + 4°C × (G + C)

The nearest-neighbor thermodynamic models are superior to the conventional calculation, as they take into consideration the stacking energy of neighboring base pairs.
There are several online resources for calculating Tm
with the nearest-neighbor models.
PCR additives, co-solvents, and modified nucleotides can lower Tm.
Tm = ∆H / (A + ∆S + R ln(C/4)) − 273.15 + 16.6 log10[Na+]

where:
~ Tm = melting temperature in °C
~ ∆H = enthalpy change in kcal mol-1
~ ∆S = entropy change in kcal K-1 mol-1
~ A = constant of -0.0108 kcal K-1 mol-1
~ R = gas constant of 0.00199 kcal K-1 mol-1
~ C = oligonucleotide concentration in M or mol L-1
~ [Na+] = sodium ion concentration in M or mol L-1
~ −273.15 = conversion factor to change the expected temperature in kelvins to °C
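As a worked example, the Python sketch below implements both the conventional calculation and the nearest-neighbor equation shown above. The enthalpy and entropy values passed in are illustrative only; in practice they are summed from a nearest-neighbor parameter table for the specific primer sequence.

```python
import math

def wallace_tm(primer):
    """Conventional estimate: 2°C per A/T plus 4°C per G/C."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def nearest_neighbor_tm(delta_h, delta_s, oligo_conc_m, na_conc_m):
    """Nearest-neighbor Tm (°C) using the equation above.

    delta_h      -- total enthalpy change in kcal/mol
    delta_s      -- total entropy change in kcal/(K·mol)
    oligo_conc_m -- oligonucleotide concentration in mol/L
    na_conc_m    -- sodium ion concentration in mol/L
    """
    A = -0.0108  # constant, kcal/(K·mol)
    R = 0.00199  # gas constant, kcal/(K·mol)
    tm_kelvin = delta_h / (A + delta_s + R * math.log(oligo_conc_m / 4))
    return tm_kelvin - 273.15 + 16.6 * math.log10(na_conc_m)

print(wallace_tm("ATGCGTACCTGAAGTCCTGC"))  # conventional estimate for a 20-mer
print(round(nearest_neighbor_tm(-180.0, -0.47, 2.5e-7, 0.05), 1))  # illustrative values
```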
[Thermal cycling diagram: denature DNA, anneal primers, extend primers.]
Tips for Setting Thermal Cycling Conditions
~ The initial denaturation step occurs at 94°C to 98°C, depending on the size of the template
and DNA GC content
o High GC content (>65 percent) may require a longer incubation or higher temperature
o Buffers containing high salts may also require a higher temperature
~ The annealing step should occur approximately 5°C below the Tm of the primers
~ Extension time depends on the DNA polymerase in use. Taq DNA polymerase requires 1 min/kb and Pfu DNA polymerase requires 2 min/kb. Check manufacturer recommendations for temperature and time parameters specific to the DNA polymerase in use (a sketch of these rules of thumb follows this list).
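A minimal Python sketch of how these rules of thumb translate into starting conditions follows; actual temperatures and times should always come from the polymerase manufacturer's recommendations.

```python
def cycling_conditions(primer_tm_c, product_kb, polymerase="Taq"):
    """Rough starting cycling conditions based on the guidance above."""
    min_per_kb = {"Taq": 1.0, "Pfu": 2.0}  # extension rates cited above
    return {
        "initial denaturation": "94-98°C (higher for GC-rich templates or high-salt buffers)",
        "annealing": f"{primer_tm_c - 5:.0f}°C",  # roughly 5°C below primer Tm
        "extension": f"{product_kb * min_per_kb[polymerase]:.1f} min",
    }

print(cycling_conditions(primer_tm_c=60, product_kb=1.5, polymerase="Taq"))
```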
Avoiding PCR Contamination

PCR contamination may result in the production of unexpected amplicons. Taking precautions throughout the experiment helps to ensure accurate results.

Tips for Avoiding PCR Contamination
~ Use PCR plates made with low binding
materials such as polypropylene to
increase recovery
~ Ensure plates, tubes, and pipette tips
are certified to be free from DNA,
DNase, RNase, and inhibitors
~ Employ a unidirectional workflow during setup, separating pre- and post-amplification space
~ Work in a space dedicated to PCR,
and work in separate PCR setup and
PCR analysis areas
~ Set aside dedicated equipment
(pipettes, centrifuge, vortex, etc.)
for PCR setup
~ Decontaminate equipment regularly
~ Open tubes carefully and as infrequently
as possible to minimize aerosols
Optimizing PCR Workflows: Four
Strategies for Lab Managers
Best practices to enhance PCR efficiency while maintaining accuracy
By Morgana Moretti, PhD
While the fundamental technique seems straightforward in
textbooks, achieving consistently high-quality PCR results
requires more than just following a protocol. It demands
efficiency, accuracy, and well-optimized workflows. This article outlines four strategies lab managers can use to improve
PCR workflows in their labs.
Strategy one: Optimize reagent
management
Expired reagents, empty stocks, and poor storage in PCR
testing mean lost time and wasted resources.
Automated inventory systems track reagent usage, expiration
dates, and storage conditions in real time. These systems can
be integrated with laboratory information management systems, providing greater oversight and avoiding unnecessary
delays or errors.
Just-in-time ordering also helps by preventing excessive stockpiling while ensuring that critical reagents are available when
needed. This strategy also minimizes waste and helps lab managers control costs. Regular audits and supplier agreements for
priority delivery are additional strategies to improve reagent
management and prevent disruptions in PCR workflows.
Strategy two: Reduce contamination
risks
PCR contamination commonly arises from aerosol generation that spreads amplicons. Moreover, direct contact with
reagents, whether from ungloved hands, clothing particles, or hair, can introduce contaminants to PCR reagents.
Beyond reagents, laboratory disposables and equipment can
also be sources of contaminating DNA.
To mitigate these risks, it is essential to establish dedicated pre- and post-PCR workspaces with separate pipettes, racks, and
consumables. Using high-quality filtered pipette tips or positive
displacement pipettes helps minimize aerosol contamination.
Regularly cleaning work surfaces with DNA-degrading
solutions and correctly disposing of contaminated plasticware
further reduces contamination risks. Additionally, enforcing
strict pipetting protocols and handling reagents with care (e.g.,
using dedicated aliquots, keeping tubes closed, and changing
gloves regularly) ensures reliable, high-quality results.
Strategy three: Leverage automation
for efficiency
Automation transforms PCR workflows by improving processes, reducing human errors, and increasing throughput.
For example, high-throughput thermal cyclers with advanced temperature control optimize amplification conditions, reducing run times while enhancing consistency.
Data interpretation, often a bottleneck in the lab, also benefits from automation. Data analysis platforms integrated into
PCR workflows can record cycle thresholds, flag abnormal
amplification curves, and generate reports that reduce data
analysis time.
Cloud-based solutions enable real-time results review,
improving collaboration and reducing troubleshooting time.
Instead of manually compiling and validating results, lab
teams can focus on refining protocols, identifying trends
in gene expression, and making data-driven decisions that
enhance efficiency and accuracy.
Strategy four: Ensure workflow
standardization and staff training
Even the best PCR protocols can fail if lab personnel don’t
follow standardized procedures consistently.
Protocols should be documented and readily accessible to
all staff. Quick-reference guides and checklists can reinforce
adherence to best practices. Additionally, regular training
sessions and competency assessments ensure that lab personnel remain proficient in PCR techniques. Finally, encouraging a culture of continuous learning through refresher
courses and peer mentoring fosters consistency and reduces
preventable errors.
Droplet digital PCR (ddPCR)
technology
Over the years, there have been many advances in
PCR technology, including quantitative PCR, real-time
PCR, and digital PCR. More recently, ddPCR has
emerged as a gold standard due to its precision,
sensitivity, and ability to determine absolute
quantification of nucleic acids. With ddPCR, the
sample is partitioned into 20,000 nanoliter-sized
droplets, enabling the measurement of thousands
of data points, compared to a single result with
traditional PCR. Additionally, the lack of requirement
for reference standards helps labs save on costs and
time while avoiding preparatory errors.
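The sidebar does not spell out how droplet counts become an absolute concentration. A common approach, stated here as an assumption along with a typical droplet volume of roughly 0.85 nL, is to count positive droplets and apply a Poisson correction, as in the sketch below.

```python
import math

def ddpcr_copies_per_ul(positive, total, droplet_nl=0.85):
    """Estimate target copies per µL of reaction from droplet counts.

    Assumes random partitioning: mean copies per droplet is
    lambda = -ln(1 - fraction_positive); concentration is lambda
    divided by the droplet volume. The droplet volume is an assumed,
    typical figure, not a value from the article.
    """
    fraction_positive = positive / total
    lam = -math.log(1 - fraction_positive)  # mean copies per droplet
    return lam / (droplet_nl * 1e-3)        # convert nL to µL

print(round(ddpcr_copies_per_ul(positive=4200, total=20000), 1))
```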
As PCR workflows continue to evolve, lab managers
may want to explore where ddPCR fits in with their
existing workflows, particularly for applications
requiring high sensitivity and quantitative accuracy.
Methods for Protein Identification
and Sequencing
Understand common methods, their strengths, limitations, and applications
By Morgana Moretti, PhD
There is no debate over the utility of genome sequencing
to understand the complex molecular networks of cells in
health and disease. However, understanding a protein’s
amino acid sequence is equally important. This knowledge
is essential for understanding its structure and function and
can provide valuable insights into biological processes and
disease mechanisms.
This article provides an overview of the most common
approaches to protein identification and sequencing. We
discuss how each method works, the information it offers, its
pros, cons, and best applications. Additionally, we explore
potential developments in this rapidly evolving field.
Common approaches to protein
sequencing
Mass spectrometry
Mass spectrometry is a commonly used technique for studying proteins. It involves breaking down proteins into smaller
peptides, which are separated, fragmented, ionized, and
captured by mass spectrometers. The end result is a mass
spectrum containing ions characteristic of the sequence of
amino acids in the selected peptide.
Mass spectrometry is versatile and can identify various
proteins and protein modifications. It can handle complex
mixtures and quantify proteins across diverse samples.
However, existing mass spectrometry methods can typically
identify fewer than 50 percent of the proteins in a complex
sample. Another major limitation of mass spectrometry is
its inability to differentiate proteins with similar masses or
structures. This low sensitivity can lead to inaccurate identifications or missing relevant proteins. Moreover, the process
requires expertise in data analysis due to the complex nature
of the results.
Immunoassays
Immunoassays, such as western blotting and enzyme-linked
immunosorbent assay (ELISA), are well-established protein
detection and quantification techniques. These methods rely
on the specificity of antibodies to recognize target proteins.
While immunoassays are not typically used to determine the
sequence of amino acids in a protein, they are valuable tools
for confirming the presence of specific proteins in a sample.
Immunoassays are sensitive and specific tests that produce
rapid results. In addition, they offer the benefit of not requiring complex equipment or the use of radioactive labels.
However, challenges with these techniques do exist. Antibody binding efficiency can be problematic, potentially
leading to false positives or negatives. Cross-reactivity with
similar proteins can also occur, impacting the accuracy of
results. Finally, western blotting and ELISA are multistep
protocols; variations and errors can occur at any step, reducing the reliability and reproducibility of these techniques.
Next-generation sequencing (NGS)
While commonly associated with DNA and RNA sequencing, NGS has extended its applications to protein sequencing, improving the speed and ease with which researchers
can correlate biological function with changes in protein
sequence and modifications.
New NGS platforms enable very sensitive experiments and
allow researchers to study proteins that are in lower abundance. This can help uncover previously hidden aspects of
cellular biology and reveal valuable insights into protein
variations and their effects.
Best applications for each method
Each method mentioned above excels in specific applications.
Immunoassays are well-suited for confirming the presence
of specific proteins, validating results from other techniques, and routine protein quantification. Scientists can
also use western blotting and ELISA techniques to identify
the presence and extent of post-translational modifications
on proteins, including phosphorylation, glycosylation, or
ubiquitination. However, detecting the broad spectrum of
post-translational modifications is limited by the number of
available antibodies, substantial specificity issues, and lot-to-lot variations, leading to reproducibility problems.
In contrast, mass spectrometry identifies and quantifies
specific post-translational modifications with high sensitivity and resolution. In addition, it is ideal for large-scale
proteomic studies and characterizing proteins and protein
complexes. However, mass spectrometry often fails to identify rare proteins in a complex sample.
NGS can identify rare genetic variations, including mutations and single-nucleotide polymorphisms that can
impact protein sequences. This is particularly valuable for
understanding how these variations translate to differences
in protein function and disease susceptibility. Moreover,
NGS can excel in detecting modifications that might remain
unmatched or obscured in mass spectrometry or present
as unidentified in immunoassays. Additionally, NGS can
facilitate de novo sequencing of proteins, a method in which
the amino acid sequence of a protein is directly determined
without prior knowledge of its DNA. This is useful for discovering novel proteins or protein variations.
“In the near-term, many studies
will likely seek to combine
the analysis of protein, DNA,
and RNA sequence data.”
The future: Integration of protein
sequencing with other assays
While the technologies outlined in this article offer advantages, there is a trade-off in protein analysis. Mass spectrometry can survey a large number of proteins, but its sensitivity
can vary depending on the specific approach and instrumentation employed. On the other hand, immunoassays tend
to be highly sensitive, but offer a very narrow survey of the
proteome. Methods like NGS, which enable single-molecule
sequencing of proteins, help bridge this gap by offering comprehensive proteome coverage and high sensitivity. Moreover, the ability to perform de novo sequencing of proteins
with NGS can be a game-changer in protein characterization, especially in cases where prior sequence knowledge
is limited or absent. These advantages position NGS as a
valuable addition to existing proteomic methods.
In the near term, many studies will likely seek to combine
the analysis of protein, DNA, and RNA sequence data. This
integrative approach could lead to a deeper understanding of
disease mechanisms and help identify key proteins involved
in various conditions. In addition, advancements in protein
sequencing could enable personalized treatment approaches,
tailoring interventions based on an individual’s protein profile.
As these advancements unfold, it’s essential to consider
streamlining data analysis pipelines and formulating standardized protocols. Such efforts not only enhance accessibility but also improve the reproducibility of protein sequencing
techniques. With further development, protein sequencing
methods can become sensitive, scalable, and accessible for
various proteomic applications. Additionally, they will likely
become increasingly portable, enabling their applications in
field-based settings.
Next-Generation Sequencing:
Library Preparation
Library preparation is a critical step in the workflow of several NGS paradigms
By Brandoch Cook, PhD and Rachel Brown, MSc
DNA sequencing is perhaps the most substantial development in molecular biology since the Watson-Crick structure
of the DNA double helix. The earliest method of nucleotide
sequencing used chemical cleavage followed by electrophoretic separation of DNA bases. Sanger sequencing improved
upon this method by employing primer extension and chain
termination, which gained primacy with its decreased reliance on toxic and radioactive agents.
Since then, pressure on the sequencing data pipeline led
quickly to considerable technological changes that far surpassed the Sanger method in terms of cost and efficiency by
flattening the workflow. The high-throughput sequencing
methods that followed, collectively known as next-generation sequencing (NGS), include several sequencing by
synthesis technologies that rapidly identify and record
nucleotide binding to complementary strands of amplified
DNA, in massively parallel synthesis reactions with a daily
throughput in the hundreds of gigabases.
Although the principle of massive parallel sequencing reactions has been shared across methods, the modes of nucleotide incorporation and fluorescence detection in the synthesis reactions differ among commercially available platforms.
The reagents and library preparation protocols required
for sequencing depend on the systems and models used, but
some generalities apply. Because of the sensitivity of the
technologies and the nature of much modern genomics research, success depends on high-quality, optimized libraries.
Library preparation dictates read depth (number of copies
of a given stretch of DNA sequenced), length, and coverage
(breadth of sequencing data), which need to be balanced
according to the sequencing goals. Greater read depths
improve the signal-to-noise ratio and increase confidence
in data validity. Regardless of the nature of the starting
material—genomic DNA, mRNA, DNA-protein complexes,
etc.—the precondition for generating useful NGS datasets
is a clean, robust library of nucleic acids. As in so much of
molecular biology, there is always a kit for that.
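For planning purposes, the expected mean read depth can be estimated from run output and target size. The sketch below uses illustrative numbers rather than figures from the article.

```python
def expected_mean_depth(n_reads, read_length_bp, target_size_bp):
    """Expected mean depth = total sequenced bases / size of the target region."""
    return n_reads * read_length_bp / target_size_bp

# Illustrative run: 400 million paired-end clusters (2 x 150 bp) against a 3.1 Gb genome.
total_reads = 400e6 * 2
print(round(expected_mean_depth(total_reads, 150, 3.1e9), 1))  # roughly 39x
```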
A typical, generic workflow for library preparation is as follows: 1) sample collection and fragmentation via enzymatic digestion or shear forces, 2) end-repair and phosphorylation of 5’ ends, 3) ligation of oligo dT-based adapters, and 4) a high-fidelity PCR-based amplification step to generate a product with adapters at both ends, barcoded for identification of individual samples run as multiplex reactions. Most
library prep kits are engineered to appropriately modify and
amplify the given starting material while reducing the number of steps to accelerate sequencing workflows, maintain
sample quality, and minimize contamination. Manufacturers
typically provide a wide selection of library preparation and
sequencing kits optimized for their platform to suit a variety
of applications and sample types. Depending on the sequencing platform, third-party reagents or kits that increase
flexibility or reduce cost may be available.
Standard library prep kit protocols can usually be performed
manually or with varying degrees of automation from compact 96- or 384-channel pipette stations to high-throughput,
fully automated workstations requiring little to no manual
intervention. Automated library prep improves sequencing data by increasing consistency, accuracy, and precision
in the pipette-heavy workflows. The boost in accuracy
and precision also supports the miniaturization of sample
volumes, which can be particularly important for low-input
sequencing. Sequencing companies often work with multiple automation partners to develop validated methods for
different platforms.
While large robotic workstations are flexible in method
development and modifications, they are prohibitively expensive for many labs and best suited for very high-throughput environments. Smaller and mid-sized labs that want to
take advantage of end-to-end library prep automation that
increases walk-away time and reduces the dependency on
skilled technicians may have more luck with microfluidics
platforms. A shifting technological landscape has produced
a variety of self-contained, specialized instruments that
produce sequencing-ready libraries post-fragmentation from
low to high throughput (starting at around eight samples).
However streamlined library prep protocols become, they
require a high degree of precision and care to produce
reliable sequencing results. Labs have the benefit of many
options for methods, kits, and supporting equipment, depending on the application, chosen sequencing technology,
and degree of automation desired.
“Because of the sensitivity of the
technologies and the nature of
much modern genomics research,
success depends on high-quality, optimized libraries.”
From Data to Discovery:
Crafting Sequencing
Bioinformatics Workflows
Discover the key factors for designing efficient, scalable, and high-quality
bioinformatics workflows for sequencing data analysis
By Jordan Willis
Modern sequencing technologies produce an immense amount
of data, but raw reads alone can’t lead to novel biological
insights. Well-designed bioinformatics workflows are now
essential for rapidly organizing sequencing data into accurate, reliable results for further analysis and interpretation.
The need for customized workflows has resulted in a variety
of methodologies, but how do labs decide upon the best
approach for their specific needs? Whether you’re optimizing an existing pipeline or designing one from scratch, this
article reviews the concepts and criteria needed to build and
maintain efficient sequencing bioinformatics workflows.
Understanding the advantages and
challenges of standard bioinformatics
solutions for sequencing data
Complexity, flexibility, and cost are the key factors to consider when designing a bioinformatics workflow. Overall, the
workflow should be adapted to the lab’s research objectives,
computational infrastructure, and technical expertise. Selecting the appropriate bioinformatics solution requires balancing
many factors against lab-specific needs and constraints.

Before discussing the different solutions, several universal workflow attributes should be highlighted:

Accuracy: The correctness of data-related processes like variant calling, read alignment, and quantification.

Reliability: The ability to maintain consistent performance across datasets while minimizing errors and batch effects.

Ease of use: The quality of user-friendly interfaces, clear documentation, and workflow automation.

Computational efficiency: The optimization of memory usage, processing power, data storage, and resource allocation to minimize processing time and costs.

Reproducibility: The ability to consistently generate the same results when processing identical datasets under the same computational conditions.

Integration capability: Forward and backward compatibility with lab hardware and software.

Scalability: The capacity to efficiently upgrade infrastructure and handle increasing data volumes.
Each solution has distinct strengths and challenges depending
on its workflow architecture and resource requirements. This
makes the choice context and goal dependent. The solutions fall
into three main categories that can be compared in terms of cost
considerations, data requirements and computational resources,
accuracy and reliability, and scalability and automation:
1. Do-it-yourself (DIY)
DIY bioinformatics workflows rely on in-house pipeline
development, often based on freely available open-source
tools and custom scripting. Offering the most flexibility and
transparency, DIY enables unparalleled pipeline customization and oversight. Programming languages like Python and
R are widely used for data analysis tasks, such as statistical modeling and genome analysis, while Bash serves as a
powerful shell scripting language for optimizing workflows
by automating tasks like file management and tool execution.
Bioinformatics software such as Burrows-Wheeler Aligner
(BWA), Genome Analysis Toolkit (GATK), and SAMtools
are often integrated into these workflows.
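As a minimal illustration of the DIY style, the Python sketch below chains BWA-MEM and SAMtools to align, sort, and index paired-end reads. It assumes both tools are installed and the reference has already been indexed with `bwa index`; the file names are placeholders.

```python
import subprocess
from pathlib import Path

def align_and_sort(ref_fa, fq1, fq2, out_bam, threads=4):
    """Align paired-end reads with BWA-MEM, then coordinate-sort and index with SAMtools."""
    bwa = subprocess.Popen(
        ["bwa", "mem", "-t", str(threads), ref_fa, fq1, fq2],
        stdout=subprocess.PIPE,
    )
    # Stream alignments straight into samtools sort to avoid a large intermediate SAM file.
    subprocess.run(
        ["samtools", "sort", "-@", str(threads), "-o", out_bam, "-"],
        stdin=bwa.stdout, check=True,
    )
    bwa.stdout.close()
    if bwa.wait() != 0:
        raise RuntimeError("bwa mem failed")
    subprocess.run(["samtools", "index", out_bam], check=True)
    return Path(out_bam)

align_and_sort("ref.fa", "sample_R1.fastq.gz", "sample_R2.fastq.gz", "sample.sorted.bam")
```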
Comparison: The DIY approach is usually the most cost-effective and customizable method due to its reliance upon
open-source tools, but at the expense of investing in skilled
personnel, robust infrastructure, and ongoing maintenance.
It provides complete control over data handling, allowing for
deep customization of pipelines, benchmarking, and peer-reviewed validation. However, this level of flexibility requires
powerful computing resources and ongoing maintenance.
While DIY workflows can be highly scalable and capable of
automation, they require substantial manual configuration
and expertise to ensure long-term sustainability.
2. Third-party
Third-party bioinformatics platforms provide pre-configured analysis pipelines with user-friendly interfaces and
features like cloud-based computing. These options reduce
in-house technical expertise requirements and development
time by offering automated workflows for alignment, variant
calling, and quality control. Third-party options may include skilled customer support, built-in regulatory compliance features, and custom data security measures.
Comparison: Third-party solutions can reduce labor and
development costs but often require long-term financial
commitments. By leveraging cloud-based processing, they
reduce the need for local computational infrastructure;
however, data upload and transfer speeds can become bottlenecks, particularly with large datasets. These solutions minimize errors by providing validated and automated workflows,
but limit the control that in-house users have over pipeline
parameters. Designed for scalability, third-party platforms
often include automation features that improve efficiency in
high-throughput environments.
3. Manufacturer-provided
Many sequencing platform manufacturers offer proprietary
bioinformatics software optimized for their specific sequencing technology to ensure seamless integration and standardized data analysis. Manufacturer-provided pipelines often
include built-in quality control metrics and default parameter settings designed for comparability and reproducibility
within the lab and the broader research community.
Comparison: Manufacturer-provided solutions are often
bundled with sequencing hardware, which can reduce initial
investment costs, though additional fees may apply for extended features, software updates, or increased data storage.
These solutions are typically optimized for specific sequencing instruments, ensuring smooth data processing but often
limiting interoperability with other platforms. While they
include built-in quality control measures and incredible
efficiency within proprietary limitations, they may be less
adaptable to new sequencing methodologies or customized
analytical needs.
The importance of seamless integration
with lab software systems
Each standard solution should be designed to integrate with
laboratory informatics platforms to capitalize on validated
tools for streamlined data management, enhanced traceability, and improved reproducibility. There are three primary
types of lab software that can play a vital role in organizing,
storing, and managing sequencing data:
1. Laboratory information management system (LIMS)
LIMS platforms serve as an organizational hub for managing
metadata, overseeing tasks like reagent and workflow tracking, as well as sequencing run management. Integrating bioinformatics solutions with LIMS offers several advantages,
including automated sample tracking and traceability, which
ensures a complete history of processing steps. Workflow
automation reduces manual data entry and minimizes errors
in sequencing pipelines, improving efficiency and consistency. Additionally, LIMS enhances interoperability by facilitating communication between sequencing instruments and
downstream bioinformatics tools, resulting in a smooth and
integrated data management process.
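What this integration can look like in code is sketched below: a pipeline step pushes run metadata to a LIMS over a REST API. The endpoint path, payload fields, and bearer-token authentication are hypothetical placeholders; real LIMS APIs differ by vendor and should be taken from the vendor's documentation.

```python
import requests

def register_sequencing_run(lims_base_url, api_token, run_id, sample_ids, fastq_paths):
    """Push sequencing run metadata to a LIMS over a (hypothetical) REST endpoint."""
    payload = {
        "run_id": run_id,
        "samples": [
            {"sample_id": s, "fastq": f} for s, f in zip(sample_ids, fastq_paths)
        ],
    }
    response = requests.post(
        f"{lims_base_url}/api/sequencing-runs",  # placeholder endpoint path
        json=payload,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    response.raise_for_status()  # surface transfer errors instead of silently dropping data
    return response.json()
```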
2. Electronic lab notebook (ELN)
ELNs function as digital repositories for documentation and
data tracking. Integrating ELNs into bioinformatics pipelines
increases data consistency and reproducibility by enabling
standardized data recording, which is an important aspect of
maintaining regulatory compliance. They can also promote
collaboration in-house through shared protocols, scripts, and
documented results.
3. Scientific data management system (SDMS)
SDMSs serve as secure storage and management platforms
while ensuring compliance with data integrity standards.
Integrating an SDMS into a bioinformatics workflow helps
prevent data loss and enables controlled access to sequencing files. These systems also provide version control and
auditability, tracking changes in data processing pipelines
to provide transparency and reproducibility. Additionally,
SDMS solutions are designed to scale efficiently, handling
increasing data volumes without compromising performance, making them essential for managing large sequencing datasets.
Strategies for effective lab software
system integration
A well-integrated bioinformatics ecosystem enhances data
flow, minimizes errors, and improves the quality of sequencing data analysis. Before implementing lab software integration, labs should consider the following best practices:
1. Use standardized data formats: Ensure compatibility
between all bioinformatics tools and lab software
2. Implement API-based connectivity: Enable smooth
communication and coordination between LIMS, ELNs,
SDMS, and bioinformatics pipelines
3. Automate data transfers: Reduce manual intervention by
implementing scheduled data synchronization processes
4. Ensure compliance with regulations: Adhere to data integrity requirements, particularly in regulated environments
Data integrity in regulated laboratories
Ensuring data integrity is a critical requirement for sequencing bioinformatics workflows, especially in regulated environments such as clinical, pharmaceutical, and forensic laboratories. Regulatory bodies enforce strict guidelines to maintain
the reliability, traceability, and security of sequencing data.
The role of bioinformatics in data integrity compliance
Bioinformatics platforms facilitate data integrity through
built-in compliance features like audit logs, encryption, and
traceability tools. Additionally, by automating workflows, human errors are reduced, and analysis becomes standardized
to increase reproducibility. Seamless integration with LIMS,
ELN, and SDMS further enhances traceability and regulatory compliance. Maintaining data integrity is essential for
producing high-quality, reproducible sequencing results;
bioinformatics workflows must be created with this fundamental concept in mind.
Securing sequencing data integrity—best practices and
key challenges
To safeguard sequencing data integrity and ensure compliance, labs should:
1. Maintain audit trails and electronic signatures to log data
access, modifications, and approvals
2. Implement automated data validation, including quality control checks, checksums, and redundancy measures to detect errors (a checksum sketch follows this list)
3. Use secure and redundant storage solutions with automated backups and disaster recovery protocols
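As a small example of the checksum step in point 2, the sketch below computes a SHA-256 digest that can be recorded when a file is created and re-verified after every transfer; the file name is a placeholder.

```python
import hashlib

def sha256_checksum(path, chunk_bytes=1 << 20):
    """Compute a SHA-256 checksum of a data file in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_bytes), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Re-running this after a transfer and comparing digests detects silent corruption.
print(sha256_checksum("sample_R1.fastq.gz"))
```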
Despite best efforts to maintain data integrity, several key
challenges persist in sequencing workflows that can compromise the accuracy and reliability of results:
1. Large data volumes from next-generation sequencing
require robust storage, validation, and traceability
2. Risks of data corruption or errors during transfer between
platforms, tools, and storage systems
3. Managing version control and ensuring reproducibility
amidst complex regulatory compliance requirements
Bioinformatics workflows are the
engine driving the exploration of
sequencing data
When properly designed and customized, bioinformatics
workflows are indispensable tools for analyzing sequencing
data and generating meaningful biological information.
Selecting the right bioinformatics solution requires careful
consideration of factors such as cost, data integrity, and integration with lab software, and long-term project goals should
be clearly defined to help inform decision-making. With
continuous improvements in sequencing and computational
methods, bioinformatics workflows for sequencing data will
continue to become more efficient at unlocking the world’s
biological secrets.
Digital Solutions to Streamline Cell
Line Development Processes
Digital tools can improve processes, data analysis, and collaboration in cell line
development
By Morgana Moretti, PhD
Cell line development (CLD) plays a crucial role in drug
discovery and the manufacturing of biopharmaceuticals—
usually recombinant proteins such as monoclonal antibodies,
bispecifics, and biosimilars.
Data silos between CLD process steps, operators, and labs
can lead to rework, bottlenecks, and issues with data integrity and compliance.
Avoiding these problems in the face of more data points than
ever requires solutions aimed at consolidating data, making
it accessible and understandable, and enabling interfaces
that maximize monitoring and collaboration. Advanced
bioinformatics tools and knowledge management systems are
becoming increasingly important in this context.
The digital lab approach: A game
changer in CLD
The digital lab approach involves leveraging software and
other digital tools to streamline laboratory processes, data
analysis, and communication. This approach can transform
the CLD landscape from a traditionally segmented process
to an integrated workflow.
The benefits are multifold: improved data accuracy and
transparency, efficient resource management, and enhanced
collaboration among research groups and sites.
A digital solution for every stage
of CLD
The CLD process encompasses several stages:
~ Vector construction
~ Vector sequence verification
~ Transfection and selection of cells that have integrated the vector genetic material
~ Pool screening to identify cells with the desired traits
~ Monoclonality verification
~ Selection based on stability/expansion
~ Antibody characterization
~ Cell growth/expansion
~ Creation of cell banks under Good Manufacturing Practice standards
~ Data management
In each of these stages, digital solutions offer a way to optimize the process while ensuring high-quality outputs and
strict compliance with regulatory standards.
A notable advancement in CLD is the use of high-throughput assays, which have drastically reduced the time needed
to collect essential data. These assays enable rapid quantification and characterization of biomolecules and cellular
attributes, key elements in stages like monoclonality verification and antibody characterization.
Building upon this advancement, bioinformatics software
can process, interpret, and visualize data from high-throughput assays. When combined with cloud computing platforms,
it can efficiently manage the large datasets typical in these
assays, offering scalable storage and processing power.
Laboratory information management systems (LIMS) are
also essential in this process. LIMS come into play at the
very beginning when samples are logged and tests are scheduled, and again at the end, to manage the collection, storage,
and reporting of results.
In addition, integrating reporting and analytics tools into
the CLD workflow empowers labs to generate comprehensive reports and conduct thorough analyses of assay data.
This capability is invaluable for presenting findings, sharing
insights with stakeholders, and supporting decision-making.
To further streamline the process, life cycle management
software designed for biopharmaceutical processes can help
labs manage data across the entire CLD spectrum. These
cloud-based systems provide analysis capabilities encompassing data access, reporting, visualization, and exploration.
This all-in-one approach eliminates the need for specialized
analytics tools or the expertise of data scientists.
Tips for choosing digital tools in CLD
When deciding on digital tools for CLD, prioritize those that
integrate with existing laboratory systems, protocols, and
workflows. This includes compatibility with data analysis
tools, LIMS, and electronic laboratory notebooks. Moreover,
it’s crucial to ensure that data can be transferred between
systems without loss of integrity or efficiency.
“Combining lab automation, digital
solutions, and analytics tools
appears to be the path toward faster
and more efficient outcomes in cell
line development.”
User-friendly interfaces, clear instructions, and responsive
customer support enhance the user experience and optimize
digital tool usage for lab staff.
Another key consideration is the security of the digital tools.
Choose software with robust security features such as strong
encryption protocols, multi-factor authentication, and secure
cloud storage options. In the context of CLD, if the data
includes personal information from human subjects, such
as genetic data from donors, the tool must comply with the
General Data Protection Regulation, Health Insurance Portability and Accountability Act, and other relevant industry
guidelines. This is necessary to meet legal requirements for
data handling and privacy.
It is also important to recognize that every laboratory has
unique needs based on the scale of its operations and the
nature of its projects. Therefore, digital tools must offer a
degree of customization to align with these specific requirements. This includes configurable workflows and data fields,
personalized reporting formats, and integration capabilities
with other tools. Software customization not only ensures it
complements the lab’s existing processes but also allows it to
adapt to the facility’s evolving needs.
The path forward
Combining lab automation, digital solutions, and analytics
tools appears to be the path toward faster and more efficient
outcomes in CLD. Lab automation increases operational
efficiency, digital solutions provide the necessary infrastructure for data management, and analytics tools offer the deep
insights needed for informed decision-making.
This integrated approach supports the development of cell lines
while addressing issues related to resources, human errors, turnaround times, redundancies, and data inconsistencies, ultimately
delivering maximum value to drug discovery organizations.
Chapter Two
Precision Detection:
Immunoassays,
Antibodies, and
Microplate Technology
Accurate detection, precise quantification, and real-time monitoring are essential capabilities in molecular biology. As scientific demands evolve, so too must the tools and
technologies supporting these workflows.
This chapter explores solutions that help labs improve data quality, throughput, and
reliability across a wide range of applications. It begins with a comprehensive overview of
ELISAs, detailing their core formats, diverse applications, and emerging innovations such
as multiplex and nanoparticle-based assays. Readers will also learn about the growing
role of recombinant antibodies in research and diagnostics, new solutions to common
cell culture monitoring challenges, and how automation and microplate technologies are
enhancing throughput, flexibility, and reproducibility across workflows.
ELISA Tests: Beyond Just
Antibody Detection
ELISA’s versatility, sensitivity, and specificity make it an efficient method to
detect and quantify biomolecules in different settings
By Morgana Moretti, PhD
ELISA, which stands for enzyme-linked immunosorbent assay, has become a fundamental tool in basic sciences, clinical
diagnostics, and food safety testing. This article overviews
ELISA’s essential principles and various applications to give
you a comprehensive understanding of this versatile and
cost-effective method to detect and quantify biomolecules.
Principles of ELISA
ELISA uses antibodies that recognize and bind to specific proteins or molecules of interest called antigens. In an
ELISA, the antigen is immobilized on a microplate and then
complexed with an antibody linked to a reporter enzyme.
To detect the presence of a target molecule in an ELISA,
scientists measure the activity of the reporter enzyme by
incubating it with the appropriate substrate, which produces
a colorimetric or fluorescent signal that indicates the antigen’s presence. The amount of signal is proportional to the
amount of antigen present in the sample.
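In practice, converting signal into a concentration usually means interpolating unknowns against a standard curve. The article does not cover this step, so the four-parameter logistic fit below is a general-purpose sketch with illustrative standard concentrations and absorbance values.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic model commonly used for ELISA standard curves."""
    return top + (bottom - top) / (1 + (conc / ec50) ** hill)

# Illustrative standards: known concentrations (ng/mL) and measured absorbance.
standards = np.array([0.98, 3.9, 15.6, 62.5, 250.0, 1000.0])
absorbance = np.array([0.08, 0.15, 0.42, 1.05, 1.80, 2.20])
params, _ = curve_fit(four_pl, standards, absorbance, p0=[0.05, 2.3, 60.0, 1.0], maxfev=10000)

def interpolate(signal, bottom, top, ec50, hill):
    """Invert the fitted curve to estimate concentration from an unknown's signal."""
    return ec50 * ((bottom - top) / (signal - top) - 1) ** (1 / hill)

print(round(interpolate(0.9, *params), 1))  # estimated ng/mL for an unknown well
```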
Formats: Direct, indirect, sandwich, and competitive ELISA
There are four primary types of ELISA: direct, indirect, sandwich, and competitive. Overall, the choice of ELISA type will
depend on the available budget, training of lab staff, and the specific needs of the experiment, including the target molecule
and required sensitivity.
Direct ELISA
~ Detection method: Directly detects the antigen
~ Sensitivity: Low to moderate
~ Advantages: Simple and quick, with no secondary antibody cross-reactivity
~ Disadvantages: Low sensitivity compared to other types of ELISA

Indirect ELISA
~ Detection method: Detects the primary antibody-antigen complex
~ Sensitivity: High
~ Advantages: Increased sensitivity compared to direct ELISA; lower antigen concentration required
~ Disadvantages: Risk of cross-reactivity between the secondary detection antibodies

Sandwich ELISA
~ Detection method: Antigens are sandwiched between two layers of antibodies (capture and detection antibodies)
~ Sensitivity: High
~ Advantages: Detects low concentrations of antigens; the highest sensitivity among all the ELISA types
~ Disadvantages: Time-consuming and expensive

Competitive ELISA
~ Detection method: Competition between labeled and unlabeled antigens
~ Sensitivity: High
~ Advantages: Less sample purification is needed; can measure a large range of antigens in a sample; can detect small antigens
~ Disadvantages: Low specificity and cannot be used in dilute samples
[ELISA Flowchart: a step-by-step diagram of the four formats. All begin with preparing samples and washing and blocking the microplate, and all end with adding substrate, reading the microplate, and calculating results. In between, direct ELISA detects the coated antigen directly; indirect ELISA detects the primary antibody-antigen complex with a conjugated secondary antibody; sandwich ELISA coats wells with a capture antibody before adding sample and a detection antibody; and competitive ELISA coats wells with reference antigen and adds an antigen-antibody mixture so that sample antigen competes with the reference antigen for the labeled antibody.]
ELISA Unlocked: A Visual
Guide for Effective
Experimentation
A clear understanding of how each method works,
along with its pros and cons, is essential for navigating
these complexities. Our ELISA Flowchart offers a
detailed step-by-step guide for each method, with
insights into which format to use and when.
ELISA applications: From basic sciences
to environmental analysis
ELISA has many uses. Some applications of this technique include:
Diagnostic testing and investigation of disease
biomarkers
In the diagnostic space, ELISA can detect and quantify
analytes in biological samples like serum, plasma, and urine,
with applications ranging from viral and bacterial detection
to pregnancy testing.
ELISA can also be used to investigate disease biomarkers.
For example, in an article published in Nature Aging in
February 2023, researchers used sandwich ELISA to measure circulating growth differentiation factor 11 (GDF11)
levels in the blood of patients with depression.1
The authors
found that people with depression have a decrease in GDF11
compared to healthy controls and raised the possibility that
serum GDF11 levels can be a potential biomarker for depression in humans.
Additionally, ELISA is a method used to measure the
concentration of disease biomarkers over time, which can
provide insights into disease progression and the effectiveness of treatment.
For example, ELISA-based assays use specific antibodies
that recognize and bind to HIV proteins, allowing for the
measurement of HIV concentration in the patient’s blood. By
using ELISA to measure viral load over time in HIV-positive
patients, healthcare providers can monitor disease progression and treatment effectiveness.
ELISA can also detect and estimate the levels of tumor
markers, such as the prostate-specific antigen, which has
revolutionized the diagnosis, treatment, and monitoring of
patients with prostate cancer. In addition, ELISA can identify autoantibodies indicative of conditions such as lupus and
rheumatoid arthritis.
Food safety
ELISA is intensively applied in food safety as it can detect
bacteria, parasites, pesticides, and other food contaminants.2
For example, ELISA is a popular method to detect allergenic
substances, including egg whites, peanuts, and milk. Commercial kits can detect allergen proteins in various sample
types, including clean-in-place final rinse water, food ingredients, and processed food products.
By detecting allergens and other food contaminants, manufacturers can take appropriate action to prevent cross-contamination and ensure that their products remain safe.
Environmental monitoring
ELISA has proven to be a valuable tool to quantify surfactants, endocrine disruptors, estrogens, and persistent organic
pollutants such as dioxins and polychlorinated biphenyls in
environmental and industrial wastes.3
This can help identify
potential pollution sources and develop strategies to protect
public health and the environment.
Compared to instrumental analytical methods such as
high-performance liquid chromatography and liquid chromatography-mass spectrometry, ELISA offers considerable
advantages because of ease of handling, fast measurement,
high sample turnover, and acceptable costs.
Emerging ELISA techniques: Potentials
and pitfalls
ELISA has advanced significantly in recent years to meet
consumers’ demands for higher productivity, greater
sensitivity, and faster results. One such advancement is the
development of multiplex ELISA, which allows multiple
analytes to be simultaneously detected and quantified in a
single sample.4
In a multiplex ELISA, multiple capture antibodies, each
specific to a different target analyte, are immobilized onto a
solid support such as a microplate. After the sample is added,
detection antibodies labeled with fluorescent or luminescent
tags are introduced to the mixture. If a specific analyte is
present, the detection antibody will bind to it, allowing for
its identification.
The availability of commercial multiplex immunoassays
for research applications is expanding rapidly and offers
the benefit of reduced time and smaller volumes needed to
conduct an analysis. However, the transition to clinical use
requires rigorous validation and standardization to ensure
reliable and accurate results.
Nanoparticle-based ELISA is another recent modification of
traditional ELISA.5
In this approach, nanoparticles are coated with antibodies or antigens to capture target molecules
from a sample. The captured molecules are then detected
using a secondary antibody conjugated to an enzyme or
fluorescent molecule.
Nanoparticle-based ELISA offers several advantages over
traditional ELISA, including increased sensitivity and specificity due to the enhanced binding capacity of nanoparticles,
as well as the ability to detect multiple analytes simultaneously. This technique has shown promising results but is
still in the early stages of development and requires further
optimization and validation before it becomes widely used in
clinical and environmental analysis.
Recent significant innovations, coupled with the extensive
global expertise in ELISA, promise to expand this technique’s capabilities, sensitivity, and automation processing,
making it an even more attractive choice for analyte detection in labs worldwide.
References:
1. “Systemic GDF11 attenuates depression-like phenotype in aged mice via stimulation of neuronal autophagy.” https://www.nature.com/articles/s43587-022-00352-3
2. “Application of nano-ELISA in food analysis: Recent advances and challenges.” https://www.sciencedirect.com/science/article/abs/pii/S0165993618305399
3. “The use of enzyme-linked immunosorbent assays (ELISA) for the determination of pollutants in environmental and industrial wastes.” https://pubmed.ncbi.nlm.nih.gov/17302299/
4. “ELISA in the multiplex era: Potentials and pitfalls.” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6680274/
5. “Recent improvements in enzyme-linked immunosorbent assays based on nanomaterials.” https://pubmed.ncbi.nlm.nih.gov/33303168/
“Overall, the choice of ELISA type
will depend on the available budget,
training of lab staff, and the specific
needs of the experiment, including
the target molecule and required
sensitivity.”
Advancing Precision in
Science and Medicine with
Recombinant Antibodies
From disease therapy to the detection of foodborne pathogens, discover
the innovative world of recombinant antibodies
By Morgana Moretti, PhD
Recombinant antibodies, which are also called genetically
engineered antibodies, have emerged as an innovative technology with applications in research, diagnosis, and therapy.
This article provides a general background to recombinant
antibodies and an overview of their applications, potential
benefits, and limitations.
Recombinant vs. traditional antibodies:
What is the difference?
The key differences between traditional and recombinant
antibodies are the production method and the antibodies’
resulting properties.
In traditional antibody production, researchers immunize animals with the target antigen to stimulate antibody
production. Antibodies are then harvested from the animal’s
blood or serum and purified.
Recombinant antibodies are produced in vitro rather than by immunizing living organisms. The technology typically involves
obtaining antibody genes from source cells, including hybridomas and phage display libraries, amplifying and cloning
the genes into an expression vector, introducing the vector
into a host (bacteria, yeast, or mammalian cell lines), and
achieving adequate expression of the functional antibody.1,2
Optionally, manufacturers can engineer the recombinant
antibodies to have specific properties, such as higher binding
affinity, improved stability, or the ability to target particular
cell types (which is not possible with traditional antibodies).
Applications of recombinant antibodies
Recombinant antibodies are a fast-growing class of biopharmaceutical products with many therapeutic applications.
For example, the anti-HER2 antibody trastuzumab has been
used to treat HER2-positive breast cancer. Another example
is the anti-EGFR antibody cetuximab, which is effective
against colorectal and head and neck cancer.3,4
In the biotechnology industry, scientists use recombinant antibodies to study protein structures and signal transduction
pathways. For example, Oregon Health & Science University
researchers have used recombinant antibodies to study the
structure and function of the human serotonin transporter.5
Recombinant antibodies can also be applied to detect foodborne pathogens, toxins, antibiotics, pesticides, and mycotoxins in food.6
Precise design and efficient production
The ability to precisely design and engineer recombinant
antibodies with desired properties, such as increased affinity,
specificity, and stability, makes them more versatile and
valuable in different applications.
Manufacturers can produce recombinant antibodies in large
quantities and with consistent quality, making them suitable
for commercial production. In addition, laboratories can produce an antigen-specific recombinant antibody in as few as
eight weeks. This compares favorably to historical timelines
of four months to produce an antibody using immunization
processes. The production of recombinant antibodies is highly reproducible since it relies on a known and defined DNA
sequence. Moreover, recombinant antibodies are genetically stable: unlike antibodies produced by traditional methods, they are not subject to the genetic drift, expression variation, or sequence mutations that can cause non-specific binding.
The high cost of recombinant
antibodies: A barrier to accessibility
Recombinant antibodies have advantages over traditional
antibodies that can improve their quality in specific applications. But making high-quality recombinant antibodies requires skilled labor and entails higher manufacturing costs.
The high cost can make recombinant antibody-based therapies inaccessible to many patients who need them. For
instance, in Canada, trastuzumab costs $49,915 and $28,350
per patient treated in the adjuvant and metastatic breast cancer
settings, respectively.7
This corresponds to an average increase
in healthcare expenditure of approximately 19 percent over
conventional management without recombinant antibodies.
Recombinant antibodies represent a powerful technology
with increasing applications. As the field evolves, researchers
and industry professionals will explore new ways to optimize
and use this technology. We can expect new and transformative discoveries that will shape the fields of science
and medicine.
References:
1. “Hybridoma technology; advancements, clinical significance, and future aspects.” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8521504/
2. “Recombinant antibodies for diagnostics and therapy against pathogens and toxins generated by phage display.” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7168043/
3. “Trastuzumab for early-stage, HER2-positive breast cancer: a meta-analysis of 13 864 women in seven randomised trials.” https://www.thelancet.com/journals/lanonc/article/PIIS1470-2045(21)00288-6/fulltext
4. “Radiotherapy plus Cetuximab for Squamous-Cell Carcinoma of the Head and Neck.” https://www.nejm.org/doi/full/10.1056/nejmoa053422
5. “X-ray structures and mechanism of the human serotonin transporter.” https://www.nature.com/articles/nature17629
6. “Recombinant antibodies and their use for food immunoanalysis.” https://link.springer.com/article/10.1007/s00216-021-03619-7
7. “The cost burden of trastuzumab and bevacizumab therapy for solid tumours in Canada.” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2442764/
Overcoming Challenges of Cell
Culture Monitoring: New Solutions
to Old Problems
Advanced sensor technologies can help researchers overcome difficulties
By Jordan Willis
Researchers conducting metabolic studies have always faced
a common challenge: the development and implementation
of accurate instrumentation and rigorous methodologies for
the continuous monitoring of cell cultures.
Any solution to this challenge must ensure that the data gathered from cell monitoring are predictive of the in vitro biological model system and are collected in a way that doesn’t compromise cell cultures or data integrity. This is crucial
for research and development labs that depend on having
high-quality and reproducible data.
This article will identify the difficulties inherent to traditional methods of cell culture analysis and discuss solutions
to the challenges facing cell-culture-based laboratorians.
Existing methods of cell culture analysis
Cell cultures are a component of many biological research efforts, but are particularly crucial to the study of metabolism.
Metabolic studies typically require specialized equipment,
such as incubators, meant to imitate physiological conditions
that can vary depending on the cell types or project goals.
Cell culturing is usually performed on either adherent cells,
typically used for examination of cell-to-cell interactions, or
suspension cells, which grow freely in liquid medium. Regardless of how researchers grow the cells, modern standards
dictate that the cultures need continuous monitoring, which can easily lead to complications when monitoring equipment fails or when the monitoring technique itself is invasive.
Traditional methods of cell culture analysis each have their
difficulties in application. To address these issues, multiple
methods can be applied in conjunction to compensate for the
shortfalls affecting singular methods, for example, combining
the use of visual inspection via microscopy with offline sampling. Microscopy enables immediate, non-invasive assessment of cell morphology, culture density, and general health. But this approach yields limited quantitative data, giving no information about cell viability or metabolism.
To offset this problem, offline sampling can gather this missing data (e.g., oxygen levels, nutrient abundance,
etc.), but it risks disrupting the culture, introducing non-experimental stressors, and increasing the probability
of contamination.
Further, when combining methodologies, considerations must
include the added complexity and cost of using multiple analytical techniques, whether simultaneously or intermittently,
to gather the necessary data. For example, it’s not always clear
how to minimize experimental bias by timing the application
of non-invasive and invasive monitoring techniques.
Issues in cell culture monitoring are not only logistical but also technical. These challenges tend to center on the sensors used in many methods and how their performance can change over the duration of the study.
For example, data-gathering sensors can be prone to fouling
over the course of a measurement period due to the culture
medium accumulating on them, especially in the case of
dissolved oxygen probes. Another challenge is the inevitable
degradation of the sensor itself, usually in the form of wear
and tear from physical or chemical stressors. Furthermore, it
may be difficult to adequately match the calibration environment to the environment of the culture medium. Should
the two environments not match in a significant way (e.g.,
a temperature discrepancy), then the data gained from that sensor can be skewed or unreliable.
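To make the calibration issue concrete, here is a minimal Python sketch that applies a simple linear temperature compensation to a dissolved-oxygen reading taken at a culture temperature different from the calibration temperature. The coefficient and temperatures are illustrative placeholders, not values from any particular sensor’s datasheet.

# Minimal sketch: correcting a dissolved-oxygen reading for a mismatch
# between calibration temperature and culture temperature.
# The temperature coefficient below is an assumed placeholder, not a
# value from any specific probe.

CAL_TEMP_C = 25.0      # temperature at which the probe was calibrated
PCT_PER_DEG_C = 0.02   # assumed 2% signal change per degree C (hypothetical)

def compensate_do_reading(raw_do_mg_l: float, culture_temp_c: float) -> float:
    """Scale a raw dissolved-oxygen reading to account for the difference
    between the calibration environment and the culture environment."""
    delta_t = culture_temp_c - CAL_TEMP_C
    correction = 1.0 + PCT_PER_DEG_C * delta_t
    return raw_do_mg_l / correction

if __name__ == "__main__":
    # Example: probe calibrated at 25 degrees C, culture held at 37 degrees C
    print(round(compensate_do_reading(6.8, 37.0), 2))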
New technologies prevent degradation,
provide multiplexing
The challenges above underscore the need for robust and
powerful technologies coupled with careful planning to
ensure cell culture integrity while collecting valuable data
from accurate, well-maintained sensors and equipment.
Manufacturers have begun introducing new technologies
to address these many challenges. One such technology is advanced optical sensors that deliver consistent measurements non-invasively, without requiring physical contact with the culture medium.
Other concerns include:
• The recurring cost of purchasing reagents
• High initial cost of equipment
• The hiring of skilled personnel to calibrate and
maintain this equipment
• How measurement tools might be affected by
day-to-day operations or by the cultures they
measure
These sensors are constructed of materials that make them
resistant to sensor fouling, prevent degradation, and mitigate errors like temperature drift. Additionally, sensors that
are capable of multiplexing and simultaneously recording
multiple analytes or environmental conditions can reduce
the device count within culture chambers, saving space and
reducing the total surface area for potential contamination.
Furthermore, sturdier sensors that don’t require replacement
or recalibration enhance closed systems, making them far
less prone to contamination risks and additional stressors.
New sensors step in to aid cell culture
monitoring
Advanced sensor technologies present exciting solutions for
improving the quality of research data in ways that circumvent many traditional challenges of cell culture monitoring.
The limitations of individual methods, the risk of contamination during examination or calibration, the costs associated with each method, and current-generation sensor issues
are all important factors to consider when designing successful cell culture monitoring and metabolic studies.
Newer technologies that can mitigate or eliminate these issues are significant advancements, which
can be used to gather increasingly complex, sensitive, and
high-quality data that would facilitate further discovery in
the biological and pharmaceutical research fields.
Automated Microplate
Technology: Application Highlights
Microplate technology and automation for immunoassays, protein research, and
next-generation sequencing
By Mike May, PhD and Michelle Dotzert, PhD
Microplate readers are integral for workflows in a variety of
fields of research and clinical diagnostics. Their versatility
is particularly useful for molecular biology research, where
ongoing improvements in technology and automation support immunoassays, protein research, and next-generation
sequencing (NGS) applications, among others.
Automating immunoassays
The enzyme-linked immunosorbent assay (ELISA) detects
and quantifies a target protein in a sample. Applications
range from biomedical research to clinical diagnostics,
food safety, and more. The assay involves incubation and
washing steps, making it complicated to run multiple assays
simultaneously. Incorporating a microplate handler into the
workflow can improve throughput and consistency. Combining automation with scheduling software can ensure proper
timing when processing multiple plates in a batch.
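As a rough illustration of what such scheduling accomplishes, the Python sketch below staggers plate start times so that wash steps from different plates never compete for a single washer. The step durations and step count are hypothetical; commercial scheduling software handles far more constraints.

# Minimal sketch: staggering ELISA plate start times so that wash steps
# on different plates never overlap on a single plate washer.
# Step durations are illustrative placeholders, not a validated protocol.

INCUBATE_MIN = 60    # each incubation step (minutes)
WASH_MIN = 5         # each wash step occupies the washer (minutes)
STEPS_PER_PLATE = 3  # e.g., capture, detection, substrate

def stagger_offsets(n_plates: int) -> list[int]:
    """Offset each plate's start so its wash windows never coincide
    with another plate's wash windows."""
    # This simple stagger only works while all offsets fit inside one incubation.
    assert n_plates * WASH_MIN <= INCUBATE_MIN, "too many plates for this simple stagger"
    return [i * WASH_MIN for i in range(n_plates)]

def wash_windows(offset: int) -> list[tuple[int, int]]:
    """Return (start, end) minutes of every wash step for one plate."""
    windows = []
    t = offset
    for _ in range(STEPS_PER_PLATE):
        t += INCUBATE_MIN
        windows.append((t, t + WASH_MIN))
        t += WASH_MIN
    return windows

if __name__ == "__main__":
    for plate, offset in enumerate(stagger_offsets(4), start=1):
        print(f"Plate {plate}: start at t={offset} min, washes at {wash_windows(offset)}")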
For certain assays, it is imperative to maintain a controlled
environment for the plates while they are being moved.
Some automated incubators can move plates to liquid dispensers and return them to the incubator for further imaging
or detection processes. Other sophisticated solutions use
machine vision—a combination of cameras, sensors, and
algorithms that enable the instrument to perceive its surroundings and make decisions based on this data—to move
plates within a workflow.
Ultimately, automating an immunoassay requires thinking
about how to fit in the robotics. Factors such as footprint, device integration capability, and software must be considered.
In addition to meeting the lab’s needs for current assays, it is
important to consider potential future assays and how this
may impact automation needs. With the range of options,
most scientists can find something that fits many—if not
all—of their needs in automating immunoassays.
Solutions for protein research
Faced with a plate of multiple wells, a scientist in search of
proteins often turns to automated methods. The likely tasks
range from confirming the presence of a specific protein to
quantifying or isolating it. Using a process that includes a
handler makes the results easier to collect as well as more
accurate and repeatable.
In a pharmaceutical company’s bioanalytical labs, scientists
typically know which proteins are of interest. Then they
build targeted methods for them and use microplate handlers
to do the sample preparation. For example, when performing
targeted quantitation of endogenous or exogenous proteins,
a liquid handler can perform most of the processing—aliquoting unknown samples, preparing standards and quality
control samples, and adding reagents for affinity capture and
protein digestion.
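A minimal Python sketch of the kind of worklist such a system consumes is shown below; the column names, well assignments, and concentration series are hypothetical placeholders, since real worklist formats are vendor-specific.

# Minimal sketch: generating a plain CSV worklist that a liquid handler
# could import to lay out a standard curve, QC samples, and unknowns.
# Column names, well order, and concentrations are hypothetical.

import csv

STANDARD_CONCS_NG_ML = [500, 250, 125, 62.5, 31.25, 15.6, 7.8, 0]
QC_LEVELS = ["QC_low", "QC_mid", "QC_high"]

def build_worklist(unknown_ids, path="worklist.csv"):
    """Write a simple three-column worklist: destination well, sample name, sample type."""
    wells = (f"{r}{c}" for r in "ABCDEFGH" for c in range(1, 13))  # A1..H12
    rows = []
    for conc in STANDARD_CONCS_NG_ML:
        rows.append({"well": next(wells), "sample": f"STD_{conc}", "type": "standard"})
    for name in QC_LEVELS:
        rows.append({"well": next(wells), "sample": name, "type": "qc"})
    for uid in unknown_ids:
        rows.append({"well": next(wells), "sample": uid, "type": "unknown"})
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["well", "sample", "type"])
        writer.writeheader()
        writer.writerows(rows)
    return rows

if __name__ == "__main__":
    build_worklist([f"SAMPLE_{i:03d}" for i in range(1, 21)])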
Accuracy and precision in the liquid-handling steps, a robust
system, and ease of use for the operator are key attributes of
automation technology. Microplate handling also requires
flexibility and integration. The key challenge for protein applications revolves around integrating various platforms and
timing the exchanges of samples and processes. Scheduling
software can be used for systems that integrate microplate
handlers with other devices such as incubators.
Getting the most from analyzing proteins in microplates
requires teamwork. Many platforms must work together,
and that requires hardware and software that combine in a
system and work as one.
Plate readers support NGS applications
Microplate readers are instrumental in NGS workflows,
particularly in the preparation and quality control stages.
They are used for quantifying DNA libraries, assessing their
quality, and ensuring consistency in sequencing outputs.
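For example, once a fluorometric reading gives a mass concentration, converting it to molarity for pooling and loading uses the average fragment length from a sizing trace. Here is a minimal Python sketch using the common approximation of 660 g/mol per base pair of double-stranded DNA; the example values are illustrative only.

# Minimal sketch: converting a fluorometric library concentration (ng/uL)
# to molarity (nM) using the average fragment length from a sizing trace.

AVG_BP_MASS_G_PER_MOL = 660.0  # approximate molar mass of one dsDNA base pair

def library_nM(conc_ng_per_ul: float, avg_fragment_bp: float) -> float:
    """Convert ng/uL to nM for a double-stranded DNA library."""
    return (conc_ng_per_ul * 1e6) / (AVG_BP_MASS_G_PER_MOL * avg_fragment_bp)

if __name__ == "__main__":
    # Example: 2.5 ng/uL library with a 450 bp average fragment length
    print(round(library_nM(2.5, 450), 2))  # about 8.42 nM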
In selecting a microplate reader, various features should be
considered, including recent improvements. Some of the
most exciting advances in microplate reader technology for
NGS applications improve the user experience and workflow.
For example, automated readers often include features
for precise temperature control and mixing. Maintaining
optimal conditions helps to ensure reliable and reproducible results. Automated readers designed to handle multiple plates simultaneously can provide higher throughput
capabilities, accelerating sample processing and increasing
efficiency. Finally, integrating readers with robotic systems
and liquid-handling platforms aids in workflow automation.
The continued improvement of microplate readers and
supporting automation technology offers many benefits for
molecular biology applications. Considering current and
future needs will help to ensure the lab’s productivity well
into the future.
Single-Mode vs. Multimode
Microplate Readers
The choice to maximize efficiency and capability with a multimode reader depends on present and future needs
By Brandoch Cook, PhD
Microplate-based applications tend to fit into two experimental
streams. The first involves discrete, end-point measurements of
changes in sample parameters (color, brightness, or fluorescence)
as surrogates for intrinsic properties of biological materials that
activate, quench, or metabolize substrates. These measurements
are pillars of laboratory bioscience. One can use them to quickly
obtain protein, RNA, and DNA concentrations in finite series of
samples and compare them to standard curves.
The second stream fits into a drive to automate those workflows and to incorporate dynamic labels and technologies
to run limitless numbers of samples through cutting-edge
screening and characterization assays. In this second stream,
there is a premium on throughput, miniaturization, reproducibility, and the flexibility to add or develop new assays
based on emerging technology.
The types of microplate readers available fall along two lines
appropriate to those experimental streams. A single-mode
reader only handles one of the following: absorbance, fluorescence, or luminescence. In contrast, a multimode reader
combines at least two, if not all three, platforms in one system. Moreover, additional capabilities can handle dynamic,
real-time assays based on variations of them. Multimode
instruments usually come with bigger price tags than their
single-mode counterparts. However, users should primarily
base purchasing decisions on the current and predicted diversity of their workflows and how different applications will
drive needs for different read modes.
Single-mode readers: Absorbance,
fluorescence, luminescence
A single-mode absorbance reader usually uses an internal
monochromator to quickly split focused light across a wide
spectrum (typically 230–1,000 nm) and select a wavelength
particular to the target being measured. This target is often
represented by a colorimetric change forced by the binding of
a dye or chromogenic reagent. A classic example is measuring
protein concentrations via assays such as Bradford or bicinchoninic acid. Other common absorbance assays include ones
for the determination of nucleic acid concentrations and the
quantification of ELISA-based antibody-ligand interactions.
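A minimal Python sketch of the underlying calculation is shown here: fitting a linear standard curve to absorbance readings from a Bradford- or BCA-style assay and back-calculating unknown protein concentrations. The standard concentrations and absorbance values are illustrative only.

# Minimal sketch: fitting a linear standard curve from absorbance readings
# and interpolating unknown protein concentrations.
# The standard values below are illustrative, not real assay data.

import numpy as np

standards_ug_ml = np.array([0, 125, 250, 500, 1000, 2000])
std_absorbance = np.array([0.05, 0.12, 0.20, 0.36, 0.68, 1.30])

# Linear fit: A = slope * C + intercept
slope, intercept = np.polyfit(standards_ug_ml, std_absorbance, 1)

def concentration(absorbance: float) -> float:
    """Back-calculate protein concentration (ug/mL) from absorbance."""
    return (absorbance - intercept) / slope

if __name__ == "__main__":
    for a in (0.15, 0.45, 0.90):
        print(f"A = {a:.2f} -> {concentration(a):.0f} ug/mL")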
A fluorescence reader can also use monochromator technology to resolve fluorescent signal intensity, although with a
dual system corresponding to excitation and emission wavelengths. There is a higher degree of sensitivity compared
to absorbance, allowing one to measure comparatively rare
events in a sample rather than an overall change in the whole
sample. Therefore, fluorescence readers are particularly
suitable for cell-based assays that use reporters to quantify expression of engineered fusion proteins. Fluorescence
intensity assays can also be used to examine cell populations
for death, survival, and proliferation using dyes and antibodies that can tag fragmented DNA, regulatory proteins,
or newly incorporated nucleotides. They can also measure
changes in protein signaling based on quantification of
fluorescent dyes that bind downstream effectors such as calcium. The availability of newer fluorophores with narrower
absorbance and emission ranges can extend the capabilities
of fluorescent readers into multiplex analysis of more complicated expression patterns.
A luminescence reader can quantify the glow or flash of a
naturally emitting sample or of an engineered reporter with
much greater sensitivity than even a fluorescent measurement. It usually achieves this through the use of filters, rather
than a monochromator, favoring sensitivity of detection
over flexibility in choice of wavelength. For flash-based
assays with short half-lives, instruments use xenon lamps
with photomultiplier tubes, but they must be modified to
include injectors to properly control assay timing. The most
famous luminescence assay uses a fragment of luciferase, the
protein that makes fireflies glow, as a reporter to measure the
activation of gene promoters, or the formation and dissolution of protein complexes. This technology can also be
extended to measure how close drug molecules are to their
protein targets.
Multimode readers
Although single-mode microplate readers satisfy many
standard laboratory workflows, and even some specialized
ones, it is multimode readers that really extend and expand
capabilities into new areas of discovery, particularly via
high-throughput screening (HTS) applications. At minimum, a multimode reader provides the ability to choose
among absorbance, fluorescence, and luminescence with
one machine. Because these three platforms use different
light-splitting technologies, the user can choose between
monochromator and filter modes, with some models employing large and extensive filter wheels, or dual photomultipliers
to improve the available wavelength range. This allows a
user to optimize assays for sensitivity, speed, and accuracy.
Additionally, it imparts the capability to multiplex several
different signals to analyze the response or status of multiple
proteins, reporters, or interactions.
Where multimode readers distinguish themselves from single-mode readers is their extension of fluorescence and luminescence capabilities, particularly to plug into HTS-oriented
workflows. These capabilities include fluorescence polarization, time-resolved fluorescence, fluorescent resonance
energy transfer, and ALPHAScreen (amplified luminescent
proximity homogeneous assay).
Fluorescence polarization: A fluorescent reporter
hits a target molecule, altering its rotation and the
trajectory of plane-polarized light.
Time-resolved fluorescence: Specialized lanthanide
chelate fluorophores with wider Stokes shifts cause
emission to follow excitation after a delay, rather
than occurring almost simultaneously. This improves
sensitivity and signal-to-noise ratios, which results in
better Z-prime values when validating screening
strategies, in comparison to end-point fluorescence
intensity readings.
Fluorescent resonance energy transfer (FRET):
Quantification of light energy transfer between
donor and acceptor fluorophores functions as a
surrogate for the distance between substances
conjugated to them.
ALPHAScreen: Operates on analogous principles but
uses laser excitation on donor beads to kick ambient
oxygen into a higher energy state that decays across
space but causes high-intensity acceptor bead
emission if the two beads are close enough. Among
the wide array of applications for these assay
platforms, streamlining identification and validation of
therapeutic molecules and antibodies is paramount.
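The read modes defined above ultimately reduce to simple ratio calculations once the raw intensities are collected. Here is a minimal Python sketch of two of them, using textbook formulas and omitting instrument-specific corrections such as the G-factor, crosstalk, and background subtraction.

# Minimal sketch: textbook ratio calculations behind two of the read modes above.
# Instrument-specific corrections are omitted for clarity.

def fluorescence_polarization(i_parallel: float, i_perpendicular: float) -> float:
    """Polarization P from intensities parallel and perpendicular to the excitation plane."""
    return (i_parallel - i_perpendicular) / (i_parallel + i_perpendicular)

def fret_efficiency(distance_nm: float, r0_nm: float) -> float:
    """FRET efficiency as a function of donor-acceptor distance and the
    Forster radius R0 (the distance at which transfer is 50% efficient)."""
    return 1.0 / (1.0 + (distance_nm / r0_nm) ** 6)

if __name__ == "__main__":
    print(round(fluorescence_polarization(1200.0, 800.0), 3))  # 0.2
    print(round(fret_efficiency(5.0, 5.0), 2))                  # 0.5 at r = R0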
Chapter Three
Omics Technologies
in Molecular Biology
Multi-omics is a powerful approach to understanding the complexity of biological systems, enabling a more comprehensive view than any single method. At the same time,
innovations in protein sequencing are expanding the possibilities for proteomics, a field
that has historically lagged behind genomics in terms of accessibility and adoption.
This chapter explores genomics, transcriptomics, proteomics, and metabolomics, providing insights into the contributions of each. It highlights the benefits and challenges of
integrating multi-omics data, examines real-world applications (e.g., biomarker discovery,
personalized medicine, and environmental monitoring), and illustrates the power of these
approaches in advancing research and translational science. Readers will then learn about
the accessibility challenges in proteomics and how innovations like single-molecule protein sequencing are opening new doors.
Integrating Multi-Omics Approaches
in Life Science Research
Learn how omics technologies are accelerating research breakthroughs
By Marnie Willman
The life sciences have undergone a technological revolution
driven by the development of various omics approaches
such as genomics, proteomics, and metabolomics. These
tools have transformed research by offering unprecedented insights into the molecular underpinnings of health
and disease.
Each omics technology reveals a piece of the puzzle, but the
real power lies in integrating these different datasets—a
concept known as multi-omics. By combining insights from
multiple molecular layers, researchers can form a comprehensive view of complex biological systems and gain deeper
insights into disease mechanisms.
Overview of omics technologies
Genomics
The development of next-generation sequencing (NGS)
technologies has propelled genomics forward, allowing
researchers to sequence entire genomes quickly and cost-effectively. NGS platforms provide high-resolution data that
enable the identification of genetic variations, including
mutations associated with diseases like cancer. Recently,
genomics has expanded into areas such as epigenomics and
structural genomics, enabling scientists to study not only the
genetic code but also the regulatory mechanisms controlling
gene expression and large-scale genomic architecture.
Proteomics
Since proteins carry out most cellular functions, studying
their expression patterns can reveal much about disease
processes and cellular health. Mass spectrometry and
protein arrays are core techniques in proteomics, allowing
the quantification and identification of thousands of proteins
from complex biological samples. Recent advancements in
proteomics include quantitative proteomics and post-translational modification analysis, providing critical insights into
how proteins are regulated and how their activity can change
in disease states. Proteomics is particularly valuable in drug
development and biomarker discovery.
Metabolomics
By studying the metabolome, we can uncover changes in
metabolic pathways associated with disease, nutrition, or environmental exposure. Techniques such as nuclear magnetic
resonance spectroscopy and liquid chromatography-mass
spectrometry are commonly used to detect and quantify
metabolites. Advances in targeted and untargeted approaches
allow researchers to either focus on specific metabolites or
perform a broad sweep of the metabolic landscape. Metabolomics is key to understanding diseases like diabetes, cardiovascular disorders, and metabolic syndromes.
Other omics
Other omics fields, such as transcriptomics and epigenomics,
provide additional layers of information. Techniques like
RNA sequencing allow researchers to measure transcript
levels and analyze differential expression patterns. Epigenomics investigates heritable changes in gene function,
focusing on modifications such as DNA methylation and
histone modification that can alter gene expression without
changing the underlying genetic code. These omics approaches add further depth to our understanding of cellular
processes and disease mechanisms.
The benefits of multi-omics integration
Comprehensive view of biological systems
By integrating data from multiple omics layers, researchers
can gain a holistic view of cellular functions and molecular
interactions. For instance, genomics can reveal mutations
present in a cell, but combining it with proteomics can show
how those mutations alter protein expression and activity.
Metabolomics provides additional context by showing how
these changes impact metabolic pathways. This integrated
approach offers a detailed understanding of biological systems
and disease mechanisms that would be missed using a single
omics approach. Cancer research has particularly benefited: integrating genomics with proteomics has led to new
insights into the molecular pathways driving tumor growth.
Enhanced disease mechanism understanding
Multi-omics integration has proven powerful in uncovering
the underlying mechanisms of complex diseases. In oncology, multi-omics approaches have revealed how genetic mutations, protein expression changes, and metabolic shifts work
together to drive disease progression. This detailed understanding enables researchers to map signaling networks that
control cell growth and survival, identifying potential therapeutic targets that might be overlooked when using a single
omics approach. Multi-omics research is also advancing our
understanding of neurodegenerative diseases, autoimmune
disorders, and cardiovascular diseases, where complex molecular changes occur across different biological layers.
Improved biomarker discovery and personalized
medicine
Combining genomic and proteomic data has led to the identification of new biomarkers for cancer and cardiovascular
diseases. These biomarkers enable the development of more
precise diagnostic tools and real-time patient monitoring.
Multi-omics is also paving the way for personalized medicine, where treatment plans are tailored to individual molecular
profiles. Returning to the previous oncology example,
multi-omics data allows researchers to stratify patients into
subgroups based on their unique molecular characteristics,
leading to more targeted therapies and better outcomes.
Challenges and solutions in multi-omics
integration
Data complexity and management
One of the greatest challenges in multi-omics research is the
vast volume and complexity of data generated by each omics
technology. Genomic datasets can contain millions of data
points, and when combined with proteomic and metabolomic data, the complexity increases. Managing, storing, and
analyzing such vast data requires robust bioinformatics tools.
New computational pipelines and data integration frameworks are helping address these challenges by processing and
standardizing data from multiple omics sources, enabling
researchers to draw meaningful conclusions.
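As a simple illustration of what such integration frameworks do at the most basic level, the Python sketch below aligns several omics matrices on a shared sample identifier and z-scores each layer so values are on comparable scales. The file names and column layout are hypothetical placeholders.

# Minimal sketch: aligning genomics, proteomics, and metabolomics tables
# on a shared sample ID and z-scoring each layer so values are comparable.
# File names and column layouts are hypothetical placeholders.

import pandas as pd

def load_and_standardize(path: str) -> pd.DataFrame:
    """Read one omics matrix (rows = samples, columns = features) and z-score each column."""
    df = pd.read_csv(path, index_col="sample_id")
    return (df - df.mean()) / df.std()

def integrate(paths: dict[str, str]) -> pd.DataFrame:
    """Join several standardized omics layers on sample ID, prefixing columns by layer."""
    layers = []
    for layer, path in paths.items():
        layers.append(load_and_standardize(path).add_prefix(f"{layer}_"))
    # join="inner" keeps only samples present in every layer
    return pd.concat(layers, axis=1, join="inner")

if __name__ == "__main__":
    merged = integrate({
        "rna": "transcriptomics.csv",
        "prot": "proteomics.csv",
        "metab": "metabolomics.csv",
    })
    print(merged.shape)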
Interpreting multi-omics data
Interpreting multi-omics data is also challenging because
researchers must correlate findings from different molecular
layers. Changes in gene expression may not correspond directly to changes in protein levels due to post-transcriptional
regulation. Advanced integration algorithms and statistical
models are being developed to identify relationships between
omics datasets, bridging gaps between genomics, proteomics,
and metabolomics, and creating unified biological models
that reflect the interaction between genes, proteins, and
metabolites.
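Here is a minimal Python sketch of one such correlation check, computing a per-gene Spearman correlation between transcript and protein abundances across samples. The input file names and matrix layout are assumptions for illustration only.

# Minimal sketch: checking how well transcript changes track protein changes
# for each gene, using a rank (Spearman) correlation that tolerates the
# non-linear relationships introduced by post-transcriptional regulation.
# Assumes two matrices with matching sample rows and shared feature columns.

import pandas as pd
from scipy.stats import spearmanr

def mrna_protein_agreement(mrna: pd.DataFrame, protein: pd.DataFrame) -> pd.Series:
    """Spearman correlation across samples, per gene shared by both layers."""
    shared = mrna.columns.intersection(protein.columns)
    corr = {g: spearmanr(mrna[g], protein[g])[0] for g in shared}
    return pd.Series(corr).sort_values()

if __name__ == "__main__":
    mrna = pd.read_csv("transcript_matrix.csv", index_col="sample_id")
    protein = pd.read_csv("protein_matrix.csv", index_col="sample_id")
    print(mrna_protein_agreement(mrna, protein).tail())  # genes with best agreement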
Cost and resource considerations
Conducting multi-omics studies can be resource-intensive,
requiring multiple high-throughput platforms and advanced
data analysis tools. However, technological advancements,
such as miniaturized sequencing platforms and automation
technologies, are making these techniques more cost-effective. Cloud-based bioinformatics solutions also provide
scalable data processing options, reducing the need for
specialized infrastructure and increasing accessibility for a
broader range of researchers.
Future directions and emerging trends
Advances in technology
Several technological innovations are shaping the future of
multi-omics, including the development of single-cell omics.
Traditional bulk analyses often average out molecular signals
across populations of cells, but single-cell technologies, such
as single-cell RNA sequencing, allow for the exploration
of cellular heterogeneity. As single-cell techniques become more scalable, they will continue to play a key role in
multi-omics studies.
Real-time in vivo monitoring is another emerging trend.
Technologies that can analyze omics data in real time within
living organisms allow for dynamic tracking of disease progression and treatment responses. Innovations like wearable
biosensors and microfluidic chips enable continuous monitoring of molecular changes, bringing multi-omics into
real-time healthcare and disease monitoring.
Long-read sequencing technologies are improving data
quality in multi-omics studies by accurately sequencing
complex regions of the genome. These technologies enhance
our understanding of gene regulation processes and provide
deeper insights into structural genomic variations.
Integration with artificial intelligence
Artificial intelligence (AI) and machine learning are playing
an increasingly important role in analyzing and interpreting multi-omics data. AI models can detect patterns across
genomics, proteomics, metabolomics, and other datasets that
traditional methods might miss. AI-driven predictive models
are also being developed to forecast patient responses to
treatments based on multi-omics profiles, which is advancing the field of precision medicine. AI is further integrated
into data analysis platforms, automating the process of data
integration and interpretation, making multi-omics more
accessible and efficient.
Potential applications and implications
The integration of multi-omics is poised to considerably
advance disease research, drug discovery, and personalized
medicine. In neurodegenerative diseases like Alzheimer’s
and Parkinson’s, multi-omics is uncovering the complex
interplay between genetic, protein, and metabolic changes
that contribute to disease progression. This comprehensive
approach is also making strides in infectious disease research
by helping us better understand how pathogens interact with
their hosts and identifying key molecular targets for vaccines
and treatments.
In the realm of drug discovery, multi-omics is enabling the
development of more detailed models of disease pathways,
leading to the identification of new drug targets. This integrated approach not only accelerates drug development but
also supports drug repurposing by revealing new uses for
existing compounds based on shared molecular mechanisms.
Moreover, multi-omics research is driving the creation of
next-generation diagnostics. Non-invasive tests like liquid
biopsies, which analyze circulating tumor DNA, proteins,
and metabolites, are emerging as powerful tools for more
precise disease detection and monitoring.
The future of multi-omics is bright, with ongoing innovations in AI, single-cell analysis, and real-time monitoring.
These advances will continue to improve our understanding
of biology and disease, accelerating the development of new
treatments and diagnostic tools that could transform personalized medicine.
New Opportunities in Proteomics
A snapshot of technologies like single-molecule protein sequencing and their
impact on proteomics research accessibility
By Maria Rosales Gerpe, PhD
The term proteomics turned 30 in 2024. The concept of a
more complex protein biochemistry can be traced back to
the mid-1970s with the breakthrough of 2D gel analysis. Or
even earlier: in 1958, Frederick Sanger, colloquially known
as the father of genomics, accepted his first Nobel Prize for
sequencing the first molecule—insulin, a protein.
Despite a protein molecule being the first to be sequenced, it
was Sanger’s DNA sequencing—his second Nobel Prize—
that went on to permanently alter the biological sciences,
ushering scientists into the genomics era in the late 1970s.
Proteomics was coined to denote that genomics alone could
not be used to fully explain biological processes. While the
genomics revolution has grown exponentially since the late 1970s, proteomics has been slow to gain traction due to inaccessibility, technical challenges, and cost—big hurdles for the
average biomedical lab looking to venture into proteomics.
Though challenges persist, new protein sequencing technologies are emerging with the potential to meet these challenges, and perhaps open a door to mainstream proteomics.
Challenges in obtaining the protein
sequence
Proteins are vital to our understanding of biological function
and disease. But a protein’s structure is complex, the result of
the odd mutation, frameshifts, alternative splicing, somatic
recombination, post-transcriptional regulation, post-translational modifications (PTMs), proteolytic cleavages, and
more—all of which are vital to a protein’s function.
DNA sequencing data alone cannot predict the final protein
structure, or its expected quantity; however, direct access
to the protein sequence can reveal whether the protein is
expressed from the expected reading frame, whether it is the
result of a frameshift, whether it needs to be modified to be
biologically active, and much more.
The main methods currently used to identify proteins in
proteomics are immunoassays, mass spectrometry (MS), and
Edman degradation. Accessing the protein sequence through
these methods has been difficult due to technical constraints
and inaccessibility.
For instance, because proteins are not replicated—they
are made ad hoc—it can be difficult to detect them, unlike
genes, which can be amplified for detection and quantification. Some researchers highlight that proteins’ limited
bioavailability may be a reason why 10 percent of the human
proteome remains unelucidated, despite knowledge of the
human genome.
Large, purified quantities of a protein are typically required
for MS and Edman degradation analysis. Another reason is
that some proteins may be resistant to chemical or enzymatic
manipulation; trypsin is often the enzyme of choice for MS
analysis, but not all proteins are sensitive to its proteolysis.
Many more enzymes are now used in MS-based protein
sequencing, but this results in complex datasets that require
sophisticated algorithms for analysis. Unelucidated proteins and proteoforms could also be poor binders, avoiding
detection.
Similar to enzymatic resistance, antibodies may not bind
proteins during modified states, such as activation through
phosphorylation. An estimated 85 percent of proteins are not suitable for
targeting by small-molecule binders or antibodies, earning
them the moniker of the “undruggable proteome.”
Furthermore, antibodies’ specificity and reproducibility
are not always consistent, requiring extensive validation.
Non-specific signals and off-target binding are not uncommon in immunoassays, even though these assays top the list of accessible methods available for protein identification. Researchers can readily learn and perform immunoassays with
little cost compared to MS.
Relatively accessible to non-proteomics researchers, Edman degradation, which involves the sequential cleavage of
a protein via its exposed N-terminus, is limited by its low
throughput and chemical incompatibility to certain PTMs
that block the N-terminus.
Least accessible to the average biomedical researcher, MS,
which measures the mass-to-charge ratio of ionized peptides, must be carried out by highly trained personnel. Plus,
instruments like mass spectrometers tend to occupy significant space in labs, in contrast to DNA sequencing
instruments that can fit snugly in the corner of a bench. MS is also
expensive to conduct routinely.
New technologies and innovations
making an impact in proteomics
In recent years, scientists have sought inspiration from DNA
sequencing by seeking ways to boost the signal of individual
peptides and enable single-molecule sequencing without
relying on prior knowledge of the genetic code.
Examples include the DNA nanoscope of Schaus et al. (2017), or DNA proximity recording, which localizes and identifies specific amino acids through amplification of proximal DNA-barcoded probes with complementary primers, and nanopores—membrane pores that permit only single-file flow of molecules. But, like DNA proximity recording, “nanopore-based protein sensing is still in its infancy,” reads a 2021 Nature Methods review.
Recent innovative alternatives to Edman sequencing—
fluorosequencing and N-terminus amino acid binding
(NAAB)—may be more suitable for mainstream adoption
of protein identification, allowing diverse research groups
to contribute to proteomics. Of the two, fluorosequencing’s
complex chemistry requirements may incur problems such
as chemical destruction of dyes, not reported with NAAB
single-molecule protein sequencing technology.
The latter relies on NAAB proteins, which bind N-terminal amino acids and whose specificity and affinity for different amino acids can be fine-tuned through directed evolution (yeast or phage display), a technique already accessible in many molecular biology labs around the world and much more cost-effective.
NAAB-based single-molecule protein sequencing involves binding dye-labeled, amino acid-specific NAAB probes to proteins or peptides immobilized by their C-termini, before or while stepwise enzymatic cleavage exposes new N-termini over successive cycles until a sequence of the peptide is identified.
In 2022, a paper published in Science showed that immobilizing peptides into a semiconductor chip for NAAB-based
single molecule protein sequencing could be successfully
used to generate unique “fluorescence properties and pulsing
kinetics” signatures for each amino acid. The distinct signals
could then be used to train software to identify amino acids
as fluorescence and kinetics were recorded per well in the
semiconductor chip.
This study also used directed evolution to discover additional NAABs, which the authors dubbed “recognizers,” from ClpS aminopeptidases and the UBR family of ubiquitin ligases,
showing that specificity and sensitivity can be optimized for
single-molecule detection. This type of sensitivity is not yet
found in MS-based protein sequencing.
Conclusion
It’s no coincidence that the first molecule to be sequenced
was a protein, as they are vital to our understanding of biological function and disease. But for many years, the average
research lab has mostly used immunoassays to study them,
resorting, when affordable, to MS.
The use of Edman sequencers comes with limitations such as low throughput and other technical issues. Recent
innovations, such as single-molecule protein sequencing
through NAAB proteins, may pave the way to mainstream
adoption of protein sequencing by offering high-throughput capabilities and higher specificity not possible through
Edman sequencing or MS.