Streamlining Genomics: From Core Techniques to Automation
A deep dive into the techniques and technologies driving genomics workflows
Table of Contents
Building Better Genomics Workflows
Core Techniques in Genomics
Comprehensive Introduction to DNA Extraction
DNA Quantification Methods: Determine the Concentration, Yield, and Purity of Samples
DNA Purification: Comparing Different Methods and Techniques
Next-Generation Sequencing: Library Preparation
Comparing Sequencing Technologies
Scaling Genomics
Scaling Your Laboratory Automation: From Basics to Blue-Sky
Automated Liquid Handling: Keeping Antibody Engineering Consistent
Integrating Multi-Omics Approaches in Life Science Research
From Data to Discovery: Crafting Sequencing Bioinformatics Workflows
Introduction
The first chapter of this eBook covers foundational techniques, including DNA extraction, purification, quantification, and library preparation, as well as a side-by-side comparison of sequencing technologies. The second chapter focuses on scaling and optimizing workflows, with strategies for adopting automation, integrating multi-omics, and building robust bioinformatics infrastructure.
Building Better Genomics Workflows
Foundational tools and strategies to improve data quality, scalability, and research outcomes in genomics
From sample preparation to sequencing, every stage of genomic workflows involves decisions about which techniques and tools to use—choices that directly influence quality, consistency, and scalability. Lab managers must be familiar with the full range of available methods, understand how they compare, and evaluate how they align with their application’s needs.
Automation and informatics solutions also play a key role in shaping modern genomics workflows. When thoughtfully implemented, these technologies can increase efficiency, reduce variability, and help labs keep pace with expanding sample volume and complexity. Knowing when and what to automate, and how to ensure data integrity, is essential for maintaining productivity and enabling long-term flexibility.
Chapter One
Core Techniques in Genomics
In genomics workflows, every stage has an impact on the quality and reliability of downstream data. Errors or inconsistencies in the early steps can compromise outcomes, making it critical to align techniques and technologies with sample type, throughput requirements, and research goals. A strong foundation ensures that later stages yield meaningful and reproducible results.
This chapter introduces the essential tools and techniques that form the foundation of genomics workflows. Readers will find a closer examination of DNA extraction, purification, and quantification methods, along with an in-depth look at the library preparation process for next-generation sequencing. Alongside these resources is a comparative chart to help lab managers evaluate sequencing technologies based on performance, cost, and application needs.
Comprehensive Introduction to DNA Extraction
The principles, methods, and equipment behind DNA extraction
By Éva Mészáros, PhD, INTEGRA
DNA extraction is often the initial step for molecular biology applications. When samples arrive in the lab, several techniques can be employed to isolate DNA for downstream applications, such as PCR. In this article, we’ll explore the different methods available and their equipment requirements, advantages, and drawbacks. We’ll also revisit some outdated techniques and explain why they’re no longer suitable today. Understanding these processes will help you to select the best approach for your lab, ensuring accurate DNA isolation for all your applications.
What is DNA extraction?
Various processes have to be performed to extract DNA from samples—such as blood, cultured cells, microbes, soil, or plant and animal tissues—depending on the sample type and downstream application. These steps include cell lysis, inactivation of nucleases, and purification to separate the target DNA molecules from cellular debris.
The first DNA extraction was performed in 1869 by the Swiss physician Friedrich Miescher, who isolated DNA from leukocytes when he was trying to determine the chemical composition of cells.1
Since then, DNA extraction has been extensively studied and further developed. The different methods available are discussed in more detail below. Please note that most of these techniques can also be used if you want to extract RNA instead of DNA from your samples.
DNA extraction methods
To extract a sufficient yield of high-quality, purified DNA for your downstream applications, you have to find the best extraction method, or combination of methods, for your sample type. This article gives an overview of the most common techniques, explains how they work, and discusses their advantages and disadvantages. We’ll also tell you what kind of equipment you need for which method, and what we work with in our own lab.
Conventional methods
Let’s first take a look back at two DNA extraction methods that have been developed over the last 150 years, and discuss why they should no longer be used, even though they are quick and easy to perform.
Rapid one-step extraction
As the name implies, this method consists of only one step. An extraction buffer containing Tris-HCl, EDTA, sodium lauroyl sarcosinate (Sarkosyl), and water-insoluble PVPP is added to the sample. The mixture is then incubated, cooled, and diluted in double-distilled water.2
This sounds great, but unfortunately, this method lacks any purification steps, so it often leads to inaccurate or unreliable results. Although you would be able to extract DNA from non-complex samples, such as Gram-positive bacteria, with this method, it is no longer used because PCR inhibitors are co-extracted, and buffer substances, like EDTA, are carried over and can strongly influence downstream applications. In addition, today’s applications, such as qPCR or next-generation sequencing, are very sensitive and require DNA with high purity.
Chelex 100 extraction
For Chelex 100 extraction, an extraction buffer containing Chelex 100 resin, SDS, NP40, and Tween® 20 is added to the sample. The mixture is incubated at 100 °C for 30 minutes, then centrifuged. The supernatant is removed and adjusted to a final concentration of 10 mM Tris-HCl and 1 mM EDTA.2
Due to a lack of purification steps, this method results in low purity and cannot efficiently remove PCR inhibitors from complex matrices. In addition, the high temperature and alkalinity of the protocol can denature the DNA.3
Therefore, Chelex 100 extraction is rarely used today when alternative methods, such as spin column extraction, take the same amount of time and offer much better results. Furthermore, conventional DNA extraction methods are not suitable for tough-to-lyse samples, including yeast, human tissue, animal tissue, or plant material, which require both chemical and physical cell lysis steps.
Commonly used methods
Most labs now isolate DNA by using sample- and application-specific kits for spin column or magnetic bead extraction workflows. But before we look at these methods in detail, we’ll briefly discuss some solution-based methods.
Solution-based methods
Solution-based methods can be useful for sample types that don’t provide the desired output with spin column or magnetic bead extraction. These materials may contain large particles, such as soil or dust, or could involve large sample volumes, for example, chemostat cultures.
CsCl density gradient centrifugation with EtBr
To extract DNA using cesium chloride (CsCl) density gradient centrifugation with ethidium bromide (EtBr), you need to mix your lysed samples with CsCl and EtBr and subject them to high-speed centrifugation. CsCl is an extremely dense salt and will set up a concentration gradient, whereas EtBr will intercalate into DNA molecules, which separate into bands according to their density. Since EtBr becomes fluorescent under UV light, you can easily locate and extract the DNA bands. EtBr can subsequently be removed from the extracted DNA using ethanol precipitation.3,4,5,6
The advantage of this method is that it provides good yields of high-purity DNA. However, the disadvantages are that it’s laborious and time-consuming, since the samples need to be centrifuged for at least 24 hours. The technique is also costly because it requires the purchase of an expensive ultracentrifuge. In addition, EtBr is a mutagen, so users must take adequate precautions when working with this substance.3,4,5,6 You will therefore need a biosafety cabinet for the pipetting steps involving EtBr if you plan to use this DNA extraction method in your lab.
Phenol-chloroform extraction
To perform phenol-chloroform extractions, you need to mix your lysed samples with a phenol-chloroform solution and centrifuge them for a few minutes. After centrifugation, you will see three phases: an upper aqueous phase containing DNA, a lower organic phase of lipids, and an interphase comprising proteins. Remove the aqueous phase and use ethanol precipitation to purify and concentrate the DNA.7,8
Just like CsCl density gradient centrifugation with EtBr, phenol-chloroform extraction provides high yields and, since you only need a centrifuge instead of an ultracentrifuge, it’s much faster and more economical. However, compared to the solid phase extraction methods below, it’s still very time-consuming and usually needs to be performed manually, which leads to higher variability and lower reproducibility. In addition, if you want to perform phenol-chloroform extractions, you will not only need to purchase a centrifuge, but also a chemical fume hood, as phenol and chloroform are volatile and highly toxic, and shouldn’t be handled on an open bench.
Phenol-chloroform extraction partitions DNA into the aqueous phase while proteins and lipids are separated into the organic phase and interphase, respectively.
Solid phase extraction methods
In a nutshell, solid phase extraction can be defined as follows:
DNA is bound to a solid surface—such as a silica membrane or magnetic beads—unwanted unbound materials are washed away, and the DNA is detached from the solid phase.
The two most common solid phase extraction methods that we’ll look at in more detail are spin column extraction and magnetic bead extraction.
Spin-column extraction
Spin-column extraction is usually performed with a specific kit. The various kits available on the market differ slightly from each other, but all follow the same basic principle. First, you need to lyse your samples by adding a lysis buffer. Chemical lysis is often combined with physical methods such as bead beating or shaking to break down the cell membrane. Then, the samples are transferred into spin columns. These are centrifuged, and DNA binds to the membrane inside the column while other unwanted materials pass through it. To wash away all non-bound components, several centrifugation steps with a wash buffer are needed. At the end, you have to add an elution buffer to the spin columns to liberate the DNA from the membrane and centrifuge them one last time to elute the DNA.
Spin columns are available as individual columns or in a 96 well format. The 96 well silica membrane plates can also be placed on a vacuum manifold instead of being centrifuged. This means that you either need a centrifuge or a vacuum manifold with a pump to perform this method in your lab. The advantages of spin column extraction are that it’s quick and easy to perform, and that you can adapt it to the number of samples that you have; use single columns if you have only a few samples, and the 96 well format if you need higher throughput. The major drawbacks are that the membrane can sometimes get clogged, and that a minimum elution volume of 30-50 μl is required, which leads to lower DNA concentrations.
Magnetic bead extraction
Magnetic bead extraction is also performed with specific kits. As with spin column extraction methods, you need a lysis buffer and, depending on the sample, a physical lysis method to disrupt the cells. After sample lysis, magnetic beads that bind the DNA of your samples are added. The tubes are then placed on a magnet, and the supernatant is aspirated to remove unwanted unbound material. This step is repeated several times, replacing the wash buffer in between.
For the final stage, an elution buffer is added to detach the DNA from the beads, before transferring the samples to a different vessel. The huge advantage of magnetic bead extraction is that the equipment can be adapted to your budget. Essentially, all you need for the technique is a magnetic stand. However, if you don’t want to perform the workflow manually, you can also buy a dedicated purification system. It also works with magnetic beads, but typically uses several vessels pre-filled with the different buffers, and then transfers the beads from one vessel to the next using magnetic rods. The third option for those who want to reduce manual pipetting steps, but don’t have the budget to get a purification system, is to purchase a benchtop pipetting robot or a 96 or 384 channel pipette. Both devices reduce manual liquid handling steps and increase throughput, and can also be used for other applications in the lab.
Compared to the spin column method, magnetic bead extraction can work with lower elution volumes, and its throughput can be ramped up more easily, because you can work in a 384 well format. However, manually performing magnetic bead extractions is more tedious and error-prone, as you need to be careful not to aspirate the magnetic beads.
Which method do we use?
In our in-house lab, we decided to go with both spin column and magnetic bead extraction methods, and purchased a centrifuge, vacuum manifold, and pump. This gives us the flexibility to work with a wide range of kits and meet different throughput requirements. If we have a very low sample number, we can extract DNA quickly by using single spin columns and the centrifuge. If we have higher sample numbers, we can use either 96 well silica membrane plates and the vacuum manifold, or one of our benchtop pipetting platforms in combination with magnetic modules for bead extraction.
References
1. “DNA, RNA, and Protein Extraction: The Past and The Present.” https://doi.org/10.1155/2009/574398
2. “Back to basics: an evaluation of NaOH and alternative rapid DNA extraction protocols for DNA barcoding, genotyping and disease diagnostics from fungal and oomycete samples.” https://doi.org/10.1111/1755-0998.12031
3. “Current Nucleic Acid Extraction Methods and Their Implications to Point-of-Care Diagnostics.” https://doi.org/10.1155/2017/9306564
4. “Traditional Methods for CsCl Isolation of Plasmid DNA by Ultracentrifugation.” https://tools.thermofisher.com/content/sfs/brochures/D17309~.pdf
5. “Nucleic Acid Extraction Methods.” https://www.biochain.com/blog/nucleic-acid-extraction-methods/
6. “Molecular markers - DNA & RNA purification.” https://www.pathologyoutlines.com/topic/moleculardnapurintro.html
7. “DNA Extraction: No Beads Required.” https://www.specanalitica.pt/documentos/pdfs/464/eBook_DNA_Extraction_-_No_Beads_Required.pdf
8. “How to Use Phenol/Chloroform for DNA Purification.” https://www.thermofisher.com/ch/en/home/References/protocols/nucleic-acid-purification-and-analysis/dna-extraction-protocols/phenol-chloroform-extraction.html
DNA Quantification Methods: Determine the Concentration, Yield, and Purity of Samples
Comparison of key techniques to help you choose the right DNA quantification method for your workflow
By Éva Mészáros, PhD, INTEGRA
Accurate DNA quantification is key for many molecular biology workflows. Without precise knowledge of your sample’s concentration, yield, and purity, the success of downstream applications can be put at risk. This article describes the different methods available for quantifying DNA, highlighting their strengths, weaknesses, and the essential equipment you’ll need to perform them. Please note that most of the techniques explained in this article can also be used if you want to quantify RNA instead of DNA.
Agarose gel electrophoresis
Gel electrophoresis can be performed with different types of gels, each of which is suitable for a different sample type.
DNA is usually analyzed with an agarose gel.
How does it work?
For agarose gel electrophoresis, you first cast a gel by dissolving agarose—a natural polysaccharide derived from a type of seaweed—in a conductive buffer and allow it to set in a gel tray. Next, use a plastic comb to create sample wells.
Once the gel has set, place the tray in a gel tank filled with a conductive buffer solution or add the buffer solution into the gel tray, mix your DNA samples with a loading dye, and pipette them into the sample wells. To compare the size of your DNA fragments with a molecular weight size marker, add it to the first well. Then apply an electrical field along the length of the gel. As the backbones of the DNA are negatively charged, the fragments will migrate towards the positively charged electrode and separate depending on their size—the bigger the fragments, the slower they migrate.
After running the gel, visualize your DNA fragments by staining the gel with a fluorescent intercalating dye such as ethidium bromide and capture an image with a gel documentation system. Depending on the dye, you may need to add it before casting the gel.
Equipment and buffer needed
To perform agarose gel electrophoresis, you need a gel electrophoresis system, an external power supply, and a biosafety cabinet, as the intercalating dyes are hazardous.
The two most common buffers for agarose gel electrophoresis are TAE (Tris-acetate-EDTA) and TBE (Tris-borate-EDTA). The differences between the two are that TAE is better at separating large DNA segments (>15,000 bp) while TBE is well suited for smaller fragments (<1,000 bp).
TBE should also be chosen for long or repeated runs, as it has a better buffering capacity and is therefore less prone to overheating. However, as borate is an enzyme inhibitor, you have to use TAE if you want to recover the bands and use the DNA for downstream applications involving an enzyme, such as PCR. And, most importantly, ensure that you don’t mix the different buffers, e.g., by using TAE to cast your gel and then placing the tray in a tank filled with TBE.1,2,3
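To keep these rules of thumb straight, the decision logic above can be summarized in a short, illustrative Python helper. This is only a sketch of the guidance in this article; the thresholds are the ones quoted here, and your kit or protocol may differ:

```python
def choose_gel_buffer(fragment_bp: int, long_run: bool = False,
                      enzymatic_downstream: bool = False) -> str:
    """Suggest TAE or TBE based on the rules of thumb above."""
    if enzymatic_downstream:
        # Borate inhibits enzymes, so DNA recovered for PCR etc. needs TAE.
        return "TAE"
    if long_run or fragment_bp < 1_000:
        # TBE buffers better (less overheating) and resolves small fragments well.
        return "TBE"
    if fragment_bp > 15_000:
        return "TAE"  # better separation of large fragments
    return "TAE or TBE"  # either works in the mid-size range

print(choose_gel_buffer(500))                             # TBE
print(choose_gel_buffer(20_000))                          # TAE
print(choose_gel_buffer(800, enzymatic_downstream=True))  # TAE
```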
How to determine concentration and purity
Using agarose gel electrophoresis, the DNA concentration of your sample can only be roughly estimated by comparing the intensity of your DNA band with the corresponding band of the size marker. For example, if you load a 2 µl sample of undiluted DNA on the gel and your band has about the same intensity as the band from the 100 ng standard of the same length, your sample has a concentration of 50 ng/µl (100 ng divided by 2 µl).4
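The same back-of-the-envelope arithmetic can be written out explicitly. This snippet simply reproduces the example calculation from the paragraph above:

```python
def estimate_conc_from_gel(band_ng: float, loaded_volume_ul: float) -> float:
    """Rough concentration estimate: mass of the matching marker band
    divided by the volume of sample loaded on the gel."""
    return band_ng / loaded_volume_ul

# 2 µl of undiluted DNA matching the intensity of a 100 ng marker band:
print(estimate_conc_from_gel(100, 2))  # 50.0 ng/µl
```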
In addition to giving you a rough estimate of the concentration of your samples, agarose gel electrophoresis helps you to check their purity after DNA extraction protocols. For instance, if you have extracted genomic DNA, you can run a gel to see if your samples are contaminated with RNA, which could be detected as a low molecular weight smear.5
Running a gel is also recommended after PCR reactions, as they are highly sensitive and prone to contamination. Checking if the negative and positive controls provide the expected results will help you to identify master mix contaminations early and confirm the performance of the extraction protocol, reagents, and amplification steps. Moreover, you will be able to see if only the sequence of interest has been amplified by verifying if you only get a single band. And, if you use a size marker, you can validate that the amplified DNA segments have the expected size.
Overview of the DNA purification workflow, highlighting the DNA quantification and visualization step.
Spectrophotometry
Spectrophotometry measures the proportion of light of a certain wavelength that is absorbed by a sample. The absorbance values allow both qualitative and quantitative analysis of the sample, e.g., by determining the purity of a solution or the concentration of a certain analyte.
How does it work?
To measure DNA concentration and purity using spectrophotometry, you can either work with a microspectrophotometer or a spectrophotometer and microcuvettes. If you’re using a microspectrophotometer, you first need to clean the upper and lower pedestals with distilled water by pipetting a droplet onto the lower pedestal, closing the pedestal arm, waiting for a short time, raising the arm, and wiping the pedestals with a dry, lint-free lab wipe.
Once you’ve cleaned the instrument, you should pipette a droplet of the buffer that the DNA is suspended or dissolved in onto the lower pedestal. As soon as you lower the microspectrophotometer’s arm, the liquid will form a column between the upper and lower pedestals, held in place by surface tension. To perform the spectral measurements, the microspectrophotometer passes light from a xenon flash lamp from the upper pedestal through the liquid and the lower pedestal, where it is detected by the integrated spectrometer. Performing a blank measurement with the buffer allows you to eliminate the influence of the absorbance of the buffer. Once you’ve zeroed the microspectrophotometer, you can measure the absorbance of your samples at wavelengths of 230 nm, 260 nm, and 280 nm. In order to get reproducible results, your samples need to be homogeneous and well mixed.
Additionally, ensure that you keep the pedestals clean by wiping each sample from both the upper and lower pedestals before adding the next one. We recommend performing a blank measurement after every 10 samples. If it shows no absorbance, proceed with the next batch of 10 samples, and if you detect absorbance, remove the residues of the previous samples by cleaning the pedestals with distilled water.
Working with a spectrophotometer is very similar, but instead of pipetting your blank and samples as droplets onto the lower pedestal, you pipette them into microcuvettes and insert them one by one into the spectrophotometer, which will perform the absorbance measurements for you.
Equipment needed
As just discussed, you can either use a spectrophotometer working with microcuvettes or a microspectrophotometer. Whereas a spectrophotometer is more sensitive, a microspectrophotometer can work with sample volumes as low as 0.5 µl. Many manufacturers also offer devices that combine the two.
How to determine concentration, yield, and purity
Spectrophotometers and microspectrophotometers calculate DNA concentration using the following formula:
Concentration (µg/ml) = A260 reading x conversion factor
The conversion factor is 50 µg/ml for dsDNA and 33 µg/ml for ssDNA.
Once you know the concentration of your sample, you can calculate its yield as follows, e.g., to determine if your PCR reaction generated a sufficient amount of DNA for your downstream application:
Yield (µg) = Concentration x Total sample volume (ml)
Spectrophotometry can also be used to determine the purity of a sample. Calculate the A260/A280 ratio to detect protein or RNA contamination and the A260/A230 ratio to detect contamination with chaotropic salts, EDTA, non-ionic detergents, proteins, and phenol. Pure dsDNA has an A260/A280 ratio of 1.85-1.88. The A260/A230 ratio is commonly higher than the A260/A280 ratio and typically lies between 2.3 and 2.4 for dsDNA.6
Note: Don’t forget that you need to multiply the concentration by your dilution factor if you diluted your samples before taking the measurements.
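Putting these formulas together, here is a minimal sketch of the calculations in Python, using the conversion factors and ratios given above (the function and variable names are our own):

```python
def dna_concentration(a260: float, dilution_factor: float = 1.0,
                      double_stranded: bool = True) -> float:
    """Concentration in µg/ml from the A260 reading."""
    factor = 50.0 if double_stranded else 33.0  # µg/ml per absorbance unit
    return a260 * factor * dilution_factor

def dna_yield(concentration_ug_per_ml: float, volume_ml: float) -> float:
    """Total yield in µg."""
    return concentration_ug_per_ml * volume_ml

def purity_ratios(a230: float, a260: float, a280: float):
    """A260/A280 (protein/RNA contamination) and A260/A230 (salts, phenol, etc.)."""
    return a260 / a280, a260 / a230

conc = dna_concentration(a260=0.5, dilution_factor=10)  # 250 µg/ml
print(conc, dna_yield(conc, 0.05))                      # yield from a 50 µl sample
print(purity_ratios(a230=0.21, a260=0.5, a280=0.27))    # ~1.85, ~2.38 (pure dsDNA)
```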
Fluorometry
Fluorometry is a technique used to identify and quantify analytes in a sample by adding fluorophores that bind to the analyte of interest to the sample, exciting them with a beam of UV light, and detecting and measuring the emitted wavelength. By comparing the sample fluorescence to known standards, the analyte can be quantified.
How does it work?
In contrast to spectrophotometry, fluorometry requires an assay set-up. This means that you need to get an assay kit suitable for your sample, consisting of a fluorescent dye, buffer, and standards. When preparing your samples, follow the kit manufacturer’s instructions to ensure that the fluorescent dye binds to your DNA and the standards. Note that it’s important to avoid introducing air bubbles during pipetting and mixing steps, as they could negatively influence the measurements of the fluorometer later on.
Before starting your measurements on the fluorometer, you need to calibrate the instrument by reading the PCR tubes containing the standards. The first standard should indicate a concentration of 0 ng/ml, and the last one should indicate the maximum concentration of your assay range. After calibrating the fluorometer by creating the standard curve, you can add PCR tubes containing your samples one after the other.7
Equipment needed
As explained in the section above, you need a fluorometer and an assay kit to perform fluorescence measurements. If you have many samples to analyze, you can also use a microplate reader capable of measuring fluorescence with an integrated PC. The difference in the assay set-up would be that you add your samples and standards into a microplate instead of PCR tubes and then measure the fluorescence of the entire plate in one go.
How to determine concentration and yield
Fluorometers usually calculate the DNA concentration for you. If not, you can calculate it by comparing the fluorescence of your sample against the standard curve. As with spectrophotometry, don’t forget to multiply the concentration you get by the dilution factor, and calculate the yield by multiplying the concentration by the total sample volume.
The purity of your DNA can’t be calculated with fluorometry, as it only detects the fluorophores bound to the DNA and therefore can’t detect contaminants.
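If your instrument doesn’t do the curve fitting for you, reading sample concentrations off a simple linear standard curve might look like the sketch below. The calibration values are made up for illustration, and a two-point fit is shown for simplicity; real assays may use more standards:

```python
def fit_standard_curve(std_conc, std_rfu):
    """Least-squares line through the standards: RFU = slope * conc + intercept."""
    n = len(std_conc)
    mx, my = sum(std_conc) / n, sum(std_rfu) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(std_conc, std_rfu))
             / sum((x - mx) ** 2 for x in std_conc))
    return slope, my - slope * mx

def conc_from_rfu(rfu, slope, intercept, dilution_factor=1.0):
    """Invert the standard curve and correct for any pre-measurement dilution."""
    return (rfu - intercept) / slope * dilution_factor

# Hypothetical calibration with 0 and 100 ng/ml standards:
slope, intercept = fit_standard_curve([0, 100], [50, 10050])
print(conc_from_rfu(2060, slope, intercept, dilution_factor=20))  # ~402 ng/ml
```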
Spectrophotometry vs. fluorometry
As we’ve just explained, fluorometers can’t measure the purity of DNA samples; they only measure their concentration and yield. Another downside of fluorometry is that it’s more costly than spectrophotometry because of the expensive assay kits. It is also more time-consuming, as you need to mix your samples with the fluorescent dye before analysis.
Its advantages over spectrophotometry are that it’s much more sensitive, providing better results with diluted samples, e.g., after DNA extraction. For example, the lower detection limit of the NanoDrop™ spectrophotometer is 0.4 ng/µl, whereas that of the Qubit fluorometer is 0.005 ng/µl.8,9
On top of this, fluorometers don’t overestimate the DNA concentration, as they don’t detect the absorbance of other sample components, such as proteins. Another advantage of fluorometry is that you can use microplate readers capable of analyzing an entire microplate in one run, significantly increasing your throughput.
After PCR, neither the sensitivity nor the overestimation of the concentration is usually a problem, as you have a very high number of DNA segments of interest (amplicon) anyway. This means that spectrophotometry is the preferred method after a PCR reaction, unless you need to determine the concentration very precisely, for example, if your down-stream application is next-generation sequencing (NGS). For NGS assays, you can pool up to 96 samples to analyze them in one go, and these samples need to have exactly the same concentration to deliver the desired results, making fluorometry the ideal method.
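The volume of each sample needed to pool equal amounts can be worked out directly from the fluorometry results. Here is a minimal, mass-based normalization sketch with hypothetical numbers; it ignores the fragment-length and molarity corrections that some sequencing protocols require:

```python
def pooling_volumes(concentrations_ng_per_ul, target_ng_per_sample):
    """Volume (µl) of each sample so every sample contributes the same mass."""
    return [target_ng_per_sample / c for c in concentrations_ng_per_ul]

samples = {"S1": 25.0, "S2": 12.5, "S3": 50.0}  # measured concentrations, ng/µl
for name, vol in zip(samples, pooling_volumes(samples.values(), 100)):
    print(f"{name}: {vol:.1f} µl")  # S1: 4.0, S2: 8.0, S3: 2.0
```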
“Gel electrophoresis and spectrophotometry are suitable for most samples and applications, but if you work with diluted samples or need highly precise concentration measurements for downstream applications, such as NGS, a fluorometer may be required as well.”
As it’s critical to have the same DNA concentration in all your samples when pooling them for an NGS assay, you may even want to double-check your fluorometry results, for example, with the TapeStation system from Agilent. It works similarly to gel electrophoresis, but is fully automated. Once you have inserted your samples, tips, and the ScreenTape (a small cassette containing electrodes, a gel matrix, and a buffer suitable for your sample type), the TapeStation can automatically analyze up to 16 samples and will provide you with a gel image and a concentration measurement for each sample.10
What we use in our lab
For our lab, we bought a gel electrophoresis system for the visualization of DNA and a microspectrophotometer that we use for an additional quality check and concentration measurements. Since spectrophotometry is usually precise enough for our downstream applications, we didn’t purchase a fluorometer. If we do need to perform an NGS assay, we buy an assay kit and perform the fluorescence measurements on a microplate reader that we already own.
Conclusion
You usually need to use a combination of quantification methods to check the success of your DNA extraction protocols and PCR reactions and determine the concentration, yield, and purity of your DNA samples. Gel electrophoresis and spectrophotometry are suitable for most samples and applications, but if you work with diluted samples or need highly precise concentration measurements for downstream applications, such as NGS, a fluorometer may be required as well. You could even consider getting a TapeStation to double-check the results.
References
1. “Agarose Gel Electrophoresis.” https://www.addgene.org/protocols/gel-electrophoresis
2. “FAQ – What buffer conditions give the best resolution for agarose gel electrophoresis?” https://www.qiagen.com/ch/resources/faq?id=728396e4-7c2f-486e-b0b4-19f6037347be&lang=en
3. “TAE and TBE Running Buffers Recipe & Video.” https://www.sigmaaldrich.com/technical-documents/articles/biology/tae-and-tbe-running-buffers-recipe.html
4. “How do I determine the concentration, yield, and purity of a DNA sample?” https://ch.promega.com/resources/pubhub/enotes/how-do-i-determine-the-concentrationyield-and-purity-of-a-dna-sample
5. “How to Interpret Agarose Gel Data: The Basics.” https://www.labxchange.org/library/items/lb:LabXchange:a03c81b4:html:1
6. “A Practical Guide to Analyzing Nucleic Acid Concentration and Purity with Microvolume Spectrophotometers.” https://international.neb.com/-/media/nebus/files/application-notes/technote_mvs_analysis_of_nucleic_acid_concentration_and_purity.pdf?rev=3f7ae1cf10a14d68af110b41e6a902a9
7. “Qubit™ 4 Fluorometer.” https://assets.thermofisher.com/TFS-Assets/LSG/manuals/MAN0017209_Qubit_4_Fluorometer_UG.pdf
8. “NanoDrop 2000/2000c Spectrophotometer.” https://www.thermofisher.com/document-connect/document-connect.html?url=https%3A%2F%2Fassets.thermofisher.com%2FTFS-Assets%2FCAD%2Fmanuals%2FNanoDrop-2000-User-Manual-EN.pdf
9. “RNA/DNA Quantification.” https://www.thermofisher.com/ch/en/home/life-science/dna-rna-purification-analysis/nucleic-acid-quantitation.html
10. “Determining the Quantity, Integrity, and Molecular Weight Range of Genomic DNA Derived From FFPE Samples.” https://www.americanlaboratory.com/914-Application-Notes/144308-Determining-the-QuantityIntegrity-and-Molecular-Weight-Range-of-GenomicDNA-Derived-From-FFPE-Samples/
DNA Purification: Comparing Different Methods and Techniques
Explore how different DNA purification methods work, along with their advantages and disadvantages
By Éva Mészáros, PhD, INTEGRA
Purifying DNA is a common process in molecular biology. Unlike DNA extraction, it doesn’t include any lysis steps to break the cell membrane and liberate the DNA. Instead, it involves the clean-up of your samples, e.g., to effectively remove all components that were used to facilitate amplification of the target sequence during PCR.
Various DNA purification methods are available, and this article will provide a detailed comparison of their advantages, disadvantages, and applications. Read on to learn how to ensure that your samples are pure enough for your downstream application. Please note that most of the techniques explained in this article can also be used if you want to purify RNA instead of DNA.
Ethanol and isopropanol precipitation
The first purification method we’ll have a closer look at is ethanol and isopropanol precipitation. This is the method of choice for purifying genomic DNA and can be used to concentrate and desalt samples after applications such as CsCl density gradient centrifugations with EtBr, phenol-chloroform extractions, digestions, and PCRs.
How does it work?
Ethanol and isopropanol precipitation are all about solubility. Water and DNA are both polar, which is why DNA molecules dissolve easily in water. To precipitate them, you can either use ethanol or isopropanol.
For ethanol precipitation, you need to add twice the sample volume of ice-cold 96 percent ethanol and salt (commonly sodium acetate) to the solution. Ethanol lowers the dielectric constant, allowing the negative charges on the sugar-phosphate backbone to be neutralized by the Na+ ions of sodium acetate. Since the DNA molecules are now less hydrophilic, they will drop out of the solution when you incubate the mixture on ice. Then, centrifuge your sample to separate the DNA from the rest, and wash the pellet in cold 70 percent ethanol to remove any residual salt. Centrifuge the sample a second time, remove the ethanol, allow the DNA pellet to dry, and resuspend it in a clean aqueous buffer.1
To dry the pellet, you can either place the tube (with the lid open) in a laminar flow hood for several hours or use a vacuum centrifuge. The method you choose is up to you, but you have to ensure that the pellet is completely dry to avoid residual ethanol negatively affecting your downstream applications.
Isopropanol precipitation is very similar. The only differences are that you can skip the incubation on ice step, and replace ice-cold ethanol with room-temperature isopropanol for the first step. Regarding the volume of isopropanol, an amount equal to the sample volume is sufficient.2
Whether ethanol or isopropanol is more suitable depends on your sample volume, concentration, and the size of your DNA fragments. If you have a large sample volume, it may be impossible to add twice the sample volume of ethanol into the tube. Isopropanol precipitation is also preferable for the precipitation of larger DNA fragments and lower sample concentrations, as DNA is less soluble in isopropanol. On top of that, isopropanol precipitation is the faster method, as you don’t need to incubate your samples before centrifugation.2
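The volume rules above are easy to tabulate. Here is an illustrative helper that also flags when adding 2x ethanol won’t fit in a standard tube; the tube capacity is our own example value:

```python
def precipitation_volume(sample_ul: float, alcohol: str = "ethanol") -> float:
    """Alcohol volume to add: 2x sample volume for ethanol, 1x for isopropanol."""
    multiplier = {"ethanol": 2.0, "isopropanol": 1.0}[alcohol]
    return sample_ul * multiplier

sample = 600.0  # µl
for alc in ("ethanol", "isopropanol"):
    added = precipitation_volume(sample, alc)
    total = sample + added
    fits = "fits in" if total <= 1500 else "exceeds"  # 1.5 ml tube capacity
    print(f"{alc}: add {added:.0f} µl (total {total:.0f} µl, {fits} a 1.5 ml tube)")
```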
Advantages and disadvantages
Ethanol and isopropanol precipitation aren’t costly at all, as ethanol, isopropanol, and sodium acetate are very affordable, and the process provides a good yield of high-purity DNA. They are, however, very time-consuming processes and, as they need to be performed manually, are highly variable, so low reproducibility can be a problem.
Equipment needed
All you need for ethanol or isopropanol precipitation is a centrifuge and a laminar flow hood or vacuum centrifuge to dry the pellet.
Gel electrophoresis
As gel electrophoresis separates DNA fragments based on their length, you can use this method to separate sequences of interest from other nucleic acid types and contaminants.
How does it work?
First of all, you need to run a gel. After visualizing your gel, use a sharp scalpel to excise the DNA band of interest. Always remember to wear appropriate personal protective equipment, such as a face shield and gloves, for this step, especially when using UV light for the visualization of your bands. Then, purify your DNA bands from the TAE- or TBE-buffered agarose gel by using a suitable spin column purification kit. As the various kits available differ slightly from one another, you should carefully follow the manufacturer’s instructions. Usually, you need to weigh the DNA bands, add a specified amount of buffer for every 100 mg of gel slice, and heat the mixture to solubilize the agarose.3,4 You then transfer your samples to the spin columns and purify the DNA using binding, washing, and elution steps.
Advantages and disadvantages
The huge advantage of this clean-up method is that agarose doesn’t denature the DNA fragments, making it easy to recover them from the gel without any damage. It is, however, not suitable for high-throughput labs, because running a gel is very time-consuming, and is limited to a low number of samples.
Equipment needed
Compared to other purification methods, you need a lot of different instruments for this workflow. Agarose gel electrophoresis requires a gel electrophoresis system, an external power supply, and a biosafety cabinet, as you’ll be working with hazardous intercalating dyes. Spin column purification is less demanding, as you only need a centrifuge.
Spin column purification
Spin columns are not only used to extract DNA molecules from lysed samples, but also to purify them, e.g., after a PCR reaction to remove salts, enzymes, primers, primer dimers, and nucleotides that may inhibit subsequent applications.
How does it work?
As explained above, spin column purification protocols consist of binding, washing, and elution steps. After transferring your samples into the spin columns, you centrifuge them to bind the DNA to the membrane inside the column. This allows unwanted components to pass through. Several additional centrifugation steps with a wash buffer remove residual unwanted materials, and a final centrifugation step with an elution buffer liberates the DNA from the membrane.
Advantages and disadvantages
As you can see, this purification method is quick and easy. On top of this, it can be adapted to your sample number, as you could also use 96 well silica membrane plates instead of single spin columns. However, the membrane may get clogged and, as a minimum elution volume of 30-50 μl is required, you may get rather low DNA concentrations.
Equipment needed
The only piece of equipment needed for spin column purification is a centrifuge, unless you work in the 96 well format and prefer to use a vacuum manifold with a pump, which is also possible.
Magnetic bead purification
Just like spin columns, magnetic beads can be used either for the extraction of DNA molecules from lysed samples or for their purification. For example, you can use magnetic beads to remove salts, enzymes, primers, primer dimers, and nucleotides from PCR products that would otherwise inhibit subsequent applications.
How does it work?
The purification workflow with magnetic beads is very similar to the extraction workflow. You first add magnetic beads that bind the DNA molecules to your samples.
You then place the tubes on a magnet to remove the supernatant containing unwanted, unbound material. You repeat this step several times, replacing the wash buffer in between. In the end, you add an elution buffer and transfer the samples to a different vessel.
In contrast to extraction protocols with magnetic beads, only DNA fragments of a particular length bind to the beads during purification. This size exclusion mechanism is achieved by creating the perfect binding conditions for the DNA fragments of interest through varying the buffers, salts, and their concentrations, and therefore the hydrophilicity/hydrophobicity.
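In SPRI-style bead kits, for example, this tuning is often exposed to the user as the bead-buffer-to-sample volume ratio, where lower ratios retain only longer fragments. The sketch below is purely illustrative; the ratio-to-cutoff mapping is hypothetical, and real cutoffs are kit- and lot-specific, so consult your kit’s documentation:

```python
# Illustrative only: approximate size cutoffs vs. bead ratio for a hypothetical
# SPRI-style kit. Real cutoffs depend on the specific kit and lot.
CUTOFFS_BP = {0.6: 500, 0.8: 250, 1.0: 150, 1.8: 100}  # ratio -> min fragment length

def bead_volume(sample_ul: float, ratio: float) -> float:
    """Bead-buffer volume to add for a given bead:sample ratio."""
    return sample_ul * ratio

ratio = 0.8
print(f"Add {bead_volume(50, ratio):.0f} µl beads to 50 µl sample; "
      f"fragments above ~{CUTOFFS_BP[ratio]} bp are retained (illustrative).")
```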
Advantages and disadvantages
A huge advantage of magnetic bead purification is that the equipment needed can be chosen based on your budget.
Option one: Get a magnetic stand and perform the workflow manually. This is the cheapest option, but also the most tedious and error-prone method. You need to be very careful not to aspirate the beads, as this would result in sample loss.
Option two: For high-throughput applications, you can also buy a benchtop pipetting robot, or a 96 or 384 channel pipette. These devices reduce manual liquid handling steps, increasing productivity and reproducibility, and can also be used for other applications.
Option three: Automate the entire workflow by buying a dedicated purification system.
Another convenience of magnetic bead purification is that you can work with lower elution volumes than for spin column purification. It is also easily scalable, as you can use it with 384 well plates.
Equipment needed
As described, you can match the equipment for magnetic bead purification that you purchase to your budget. Either buy a magnetic stand if you have a limited budget, get a benchtop pipetting robot or a 96 or 384 channel pipette if you have more money available, or go with a dedicated purification system.
Sephadex® purification
Sephadex purification can be used to purify DNA from smaller molecules, e.g., primers, nucleotides, or dyes.
How does it work?
Sephadex is a gel filtration resin made of dextran crosslinked by epichlorohydrin.5
To prepare the resin, you need to add Sephadex powder to spin columns, rehydrate it with water, and centrifuge the columns to eliminate excess water. Before spinning the columns a second time, add your samples on top of the resin. During centrifugation, larger molecules will easily pass through the resin and elute, whereas smaller molecules will get trapped in the pores of the dextran beads. The size exclusion capability of the beads depends on the Sephadex type you choose. For example, Sephadex G-25 Medium5 can be used to purify DNA with a molecular weight of >5,000, and G-50 Medium6 is suitable for molecules with a molecular weight of >30,000.
To speed up this workflow, opt for spin columns prepacked with Sephadex instead of creating them yourself, or work with a 96 well filter plate.
Advantages and disadvantages
Just like spin column purification, this method can be adapted to your throughput needs by performing it in spin columns or 96 well filter plates. The absence of a molecule-matrix binding step also prevents unnecessary damage to the DNA,7 making it a rather gentle purification method.
However, it’s quite time-consuming, especially when preparing spin columns or filter plates yourself, as Sephadex has to rehydrate for about three hours, and the workflow can’t be automated.
Equipment needed
The only piece of equipment needed for Sephadex purification is a centrifuge.
Enzymatic approaches
Some manufacturers offer enzymatic approaches to clean up your PCR products if your subsequent application—including Sanger sequencing, next-generation sequencing, or SNP analysis—requires your samples to be free from primers and nucleotides.
How does it work?
Enzymatic approaches consist of only two steps. First, add an enzyme mix to your samples and incubate them for 15 minutes at 37 °C. During incubation, the first enzyme in the mix, exonuclease I, will digest excess primers, and the second enzyme, alkaline phosphatase, will dephosphorylate nucleotides that were not consumed during PCR. Once the enzymes have served their purpose, you can heat up your samples to 80 °C for another 15 minutes to deactivate the enzymes.8,9,10
Advantages and disadvantages
In addition to being quick and easy, enzymatic approaches result in no sample loss and can easily be adapted to your sample number. Their only disadvantage is that they can only be used to clean up PCR products from primers and nucleotides, and do not remove any other contaminants.
Equipment needed
All you need for enzymatic PCR clean-ups is an incubator.
Conclusion
As you can see, there is a DNA purification method available for every sample type, downstream application, and budget.
We hope that this article has helped you determine which one to choose for your specific needs.
References
1. “Ethanol Precipitation of DNA and RNA: How it Works.” https://bitesizebio.com/253/the-basics-how-ethanol-precipitation-of-dna-and-rna-works
2. “DNA Precipitation Protocol: Ethanol vs. Isopropanol.” https://bitesizebio.com/2839/dna-precipitation-ethanol-vs-isopropanol
3. “Extraction of DNA from an Agarose Gel.” https://benchling.com/s/prt-Zwi59k9BBRU0Qx6VEqC4/edit
4. “How DNA Gel Extraction Works.” https://bitesizebio.com/13533/how-dna-gel-extraction-works
5. “Sephadex™ G-25 Medium.” https://www.cytivalifesciences.com/en/us/shop/chromatography/resins/size-exclusion/sephadex-g-25-medium-p-05608
6. “Sephadex™ G-50 Medium.” https://www.cytivalifesciences.com/en/us/shop/chromatography/resins/size-exclusion/sephadex-g-50-medium-p-05605
7. “Gel-Filtration Chromatography.” https://doi.org/10.1007/978-1-60761-913-0_2
8. “Proper PCR Cleanup before Sanger Sequencing – Seq It Out #12.” https://www.thermofisher.com/blog/behindthebench/proper-pcr-cleanup-before-sanger-sequencing-seq-it-out-12
9. “A Simple Way to Treat PCR Products Prior to Sequencing Using ExoSAP-IT®.” https://doi.org/10.2144/000112890
10. “ADS™ Exo-Alp PCR Cleanup Mix.” https://advancedseq.com/ads-exo-alp-pcr-cleanup-mix
Next-Generation Sequencing: Library Preparation
Library preparation is a critical step in the workflow of several NGS paradigms
By Brandoch Cook, PhD and Rachel Brown, MSc
DNA sequencing is perhaps the most substantial development in molecular biology since the Watson-Crick structure of the DNA double helix. The earliest method of nucleotide sequencing used chemical cleavage followed by electrophoretic separation of DNA bases. Sanger sequencing improved upon this method by employing primer extension and chain termination, which gained primacy with its decreased reliance on toxic and radioactive agents.
Since then, pressure on the sequencing data pipeline has quickly led to considerable technological changes that far surpass the Sanger method in terms of cost and efficiency by streamlining the workflow. The high-throughput sequencing methods that followed, collectively known as next-generation sequencing (NGS), include several sequencing by synthesis technologies that rapidly identify and record nucleotide binding to complementary strands of amplified DNA, in massively parallel synthesis reactions with a daily throughput in the hundreds of gigabases.
Although the principle of massively parallel sequencing reactions is shared across methods, the modes of nucleotide incorporation and fluorescence detection in the synthesis reactions differ among commercially available platforms.
The reagents and library preparation protocols required for sequencing depend on the systems and models used, but some generalities apply. Because of the sensitivity of the technologies and the nature of much modern genomics research, success depends on high-quality, optimized libraries.
Library preparation dictates read depth (number of copies of a given stretch of DNA sequenced), length, and coverage (breadth of sequencing data), which need to be balanced according to the sequencing goals. Greater read depths improve the signal-to-noise ratio and increase confidence in data validity. Regardless of the nature of the starting material—genomic DNA, mRNA, DNA-protein complexes, etc.—the precondition for generating useful NGS datasets is a clean, robust library of nucleic acids. As in so much of molecular biology, there is always a kit for that.
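Depth, read length, and target size are related by a simple back-of-the-envelope formula, the classic Lander-Waterman estimate: expected depth is roughly read length times number of reads divided by target size. A quick sketch with made-up numbers:

```python
def expected_depth(read_length_bp: int, num_reads: int, target_size_bp: int) -> float:
    """Lander-Waterman estimate of mean coverage depth."""
    return read_length_bp * num_reads / target_size_bp

# e.g., 2 x 150 bp paired-end reads, 400 million read pairs, human genome (~3.1 Gb)
depth = expected_depth(300, 400_000_000, 3_100_000_000)
print(f"~{depth:.0f}x mean depth")  # ~39x
```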
A typical, generic workflow for library preparation is as follows: 1) sample collection and fragmentation via enzymatic digestion or shear forces, 2) end-repair and phosphorylation of 5’ ends, 3) ligation of adapters, typically via complementary dA/dT overhangs, and 4) a high-fidelity PCR-based amplification step to generate a product with adapters at both ends, barcoded for identification of individual samples run as multiplex reactions. Most library prep kits are engineered to appropriately modify and amplify the given starting material while reducing the number of steps to accelerate sequencing workflows, maintain sample quality, and minimize contamination. Manufacturers typically provide a wide selection of library preparation and sequencing kits optimized for their platform to suit a variety of applications and sample types. Depending on the sequencing platform, third-party reagents or kits that increase flexibility or reduce cost may be available.
Standard library prep kit protocols can usually be performed manually or with varying degrees of automation from compact 96 or 384 channel pipette stations to high-throughput, fully automated workstations requiring little to no manual intervention. Automated library prep improves sequencing data by increasing consistency, accuracy, and precision in the pipette-heavy workflows. The boost in accuracy and precision also supports the miniaturization of sample volumes, which can be particularly important for low-input sequencing. Sequencing companies often work with multiple automation partners to develop validated methods for different platforms.
While large robotic workstations are flexible in method development and modifications, they are prohibitively expensive for many labs and best suited for very high-throughput environments. Smaller and mid-sized labs that want to take advantage of end-to-end library prep automation that increases walk-away time and reduces the dependency on skilled technicians may have more luck with microfluidics platforms. A shifting technological landscape has produced a variety of self-contained, specialized instruments that produce sequencing-ready libraries post-fragmentation from low to high throughput (starting at around eight samples).
However streamlined library prep protocols become, they require a high degree of precision and care to produce reliable sequencing results. Labs have the benefit of many options for methods, kits, and supporting equipment, depending on the application, chosen sequencing technology, and degree of automation desired.
COMPARING SEQUENCING TECHNOLOGIES
Key technologies meet different sequencing needs

SANGER
Labeled dideoxynucleotides (ddNTPs) are incorporated to terminate DNA synthesis at specific points and are distinguished using capillary array electrophoresis.
Advantages:
• Reliable, with a low error rate for relatively short DNA fragments
• Can be more cost-effective for low-throughput applications
Limitations:
• Relatively low throughput
• Inefficient
Library preparation: DNA isolation, purification, fragmentation, and PCR amplification.

BRIDGE AMPLIFICATION SEQUENCING BY SYNTHESIS (SBS)
Detects fluorescent signals from labeled nucleotides with reversible blockers during synthesis. Library amplification forms clusters of forward and reverse strands of DNA fragments on a flow cell using bridge amplification.
Advantages:
• High throughput
• Cost-effective
• Relatively low error rates
Limitations:
• Short read lengths
• Accurate DNA quantification is required to prevent overclustering
Library preparation: DNA fragmentation, adapter ligation, and library amplification.

ION SEMICONDUCTOR SBS
Detects hydrogen ions released during DNA synthesis based on voltage change with an ion sensor. The sequence is determined based on signal timing and strength, as nucleotides are washed over the chip sequentially, one at a time. Libraries are amplified using emulsion PCR after DNA fragments are bound to beads by their adapters.
Advantages:
• Fast
• High throughput
• Cost-effective
• Relatively low error rates
Limitations:
• Short read lengths (though typically longer than bridge amplification SBS)
• Higher error rates for long homopolymer repeats
Library preparation: DNA fragmentation, adapter ligation, and library amplification.

SINGLE-MOLECULE REAL-TIME (SMRT) SEQUENCING
Sequences DNA molecules in real time using image-based detection of fluorescently labeled nucleotides during synthesis. Fragments form circular DNA molecules that are read multiple times to create a circular consensus sequence.
Advantages:
• Long read lengths
• Does not require PCR amplification
• Can detect base modifications relevant to epigenetic studies
Limitations:
• Can be costly
• Lower throughput
• High error rate (partially mitigated by the consensus sequence formed from multiple sequencing reads per template)
Library preparation: DNA fragmentation and adapter ligation.

NANOPORE
Detects nucleotide-specific changes in ionic current as DNA molecules pass through nanopores in charged biological or synthetic membranes.
Advantages:
• Long reads
• Does not require PCR amplification
• Can detect base modifications relevant to epigenetic studies
• Direct RNA sequencing
• Well suited to field use and environmental DNA studies
Limitations:
• Throughput may be lower compared to other technologies
• Higher error rates
Library preparation: Optional fragmentation or size selection and adapter ligation.
Chapter Two
Scaling Genomics
Genomics research requires consistency, adaptability, and data integrity across entire workflows. As projects grow in scale and complexity, labs must ensure their tools, techniques, and infrastructure can support their needs without compromising quality.
This chapter brings together strategies for expanding research capacity while maintaining accuracy. Readers will explore how to evaluate and scale lab operations, implement automation where it adds the most value, and integrate multi-omics technologies for deeper biological insights. The chapter concludes with a look at how labs can develop efficient bioinformatics workflows to manage and interpret the growing volume of sequencing data.
Scaling Your Laboratory Automation: From Basics to Blue-Sky
Questions to ask yourself when scaling up your laboratory automation
By Michael Schubert, PhD
Taking the first step into lab automation can be difficult, but the challenges don’t stop when your first robot is online.
Similarly, even the most innovative automation platform is no silver bullet if needs and processes aren’t clearly defined.
For laboratories interested in scaling up their automation, it’s crucial to consider not just your needs but also how your plans will integrate with, or adapt to, existing equipment, workflows, and work volumes.
What are your lab automation needs?
Before making scaling decisions, consider your end-to-end workflow. Identify any bottlenecks or pain points that are causing throughput issues or that may limit throughput after increasing your use of automated technologies. Consult with individuals across all aspects of your lab’s operations, from sample handling to regulatory compliance, to factor their knowledge into your decision-making. Finally, consider your lab’s current and future testing prospects. High demand for a specific analysis or workflow may indicate a strong candidate for advanced laboratory automation, whereas steady demand across domains may suggest prioritizing the automation of multiple functions, or of functions used across many workflows (such as automated liquid handling), rather than maximizing the throughput or sophistication of a single system.
Envisioning your lab’s future can also help you determine the best approach to scaling now. For instance, labs that plan to scale further in a stepwise manner may choose modular options that can be upgraded or extended according to needs and budgets. Labs already facing significant time or workforce pressures may choose fully automated workflows with minimal training requirements, select vendors who can provide off-the-shelf protocols and integrations alongside extensive support services, or opt for more heavily artificial intelligence-supported solutions.
What is your lab automation setup?
Not all laboratories can accommodate all workflows. Your lab’s computing power, flexibility, and capacity will determine your lab automation options—from LIMS integrations to data storage and encryption. Existing equipment, software, and even the design of your lab space can further dictate your choices; these factors must be embedded into your plans from their earliest stages. Poor planning can lead to inadequate integration, complex or failure-prone workarounds, or hidden costs and inefficiencies that negate the benefits of scaling your laboratory automation.
To make sure you’re scaling in a way that’s right for your lab, start by mapping out your existing workflows and processes.
Understand how samples move through the lab physically, how data moves through your systems, and how workloads are distributed between the steps of your existing and anticipated processes. This will not only provide insight into the areas where laboratory automation can confer the greatest benefit but also highlight adjustments that can be made to existing processes ahead of scale-up.
What can you learn?
Lab automation is a journey, not a destination. Once you’ve implemented your chosen solutions, it’s vital to continue monitoring lab functions. Are your upgrades meeting all of your lab’s needs and allowing you to achieve key performance measures? Has resolving one bottleneck introduced another elsewhere? Are your new instruments interfering with existing systems, or even with other considerations such as traffic flow or ergonomics? Labs aiming to maximize the benefits of laboratory automation should regularly revisit their operations and look for enhancement opportunities, especially if bugs or issues arise. By engaging in continuous improvement, your lab can gain insights into how you use your systems, where challenges and pain points may arise, and how you can take the next step into scaling your lab automation.
“Labs aiming to maximize the benefits of laboratory automation should regularly revisit their operations and look for enhancement opportunities, especially if bugs or issues arise.”
Automated Liquid Handling: Keeping Antibody Engineering Consistent
When engineering a mAb for research or therapeutic applications, many steps require precise liquid handling
By Mike May, PhD
Monoclonal antibodies (mAbs) make up a crucial workhorse of molecular biology, as well as a growing number of therapeutics. In a research lab, scientists use mAbs to label and track a wide range of targets. Pharmaceutical and biotechnology companies also turn mAbs into therapeutics, including cancer treatments. When engineering a mAb for research or therapeutic applications, many steps require precise liquid handling, which can be accomplished accurately and repeatably with automated platforms.
Most labs engineer mAbs through hybridoma technology, which involves fusing an immune B cell that produces the desired mAb with a long-lived myeloma cell. Because B cells are short-lived on their own, fusion with the myeloma cell immortalizes the antibody-producing line.
Throughout the antibody engineering workflow, consistency and reproducibility are critical. Any variability reduces confidence in the results produced with mAbs in research and could reduce the efficacy and safety of a mAb-based therapeutic.
Areas for automated liquid handling
Automated liquid handling can be used in sorting blood cells for mAb production, screening for the most effective antibody, and a variety of analytical steps. The repeatable accuracy of these platforms is crucial in mAb engineering, which is usually performed in microplates at microliter volumes. In addition to consistency, automated liquid handling is more convenient and saves time compared with a manual approach.
The antibody technologies facility at Monash University in Australia focuses on generating high-affinity monoclonal antibodies through advanced discovery techniques and antibody engineering. “The incorporation of automated liquid handling in antibody production has been pivotal to our success,” says manager Hayley Ramshaw.
At Monash, automating the liquid handling in the engineering of mAbs allowed the facility to manage an increased workload. Ramshaw says that this technology allows them to handle multiple fusions per week.
She and her colleagues can use automated liquid handling for many processes—the fusion itself, plating of the cells post-fusion, analysis of all samples for antibody presence, and cell-culture techniques, including expansion of cultures and cryopreservation of cell lines.
Easing the transition
Automating a portion of the engineering process takes less capital investment than automating an entire workflow. For example, a lab could start by automating liquid handling in next-generation sequencing used in mAb engineering.
To automate a complete engineering process, working with an expert eases the transition; a single vendor might suggest a system built with devices from more than one source.
In both basic and medical research, automated liquid handling provides many benefits. The improved accuracy alone is worth the transition, and automation also speeds up processes and enables higher throughput. Together, these benefits allow labs to produce higher volumes of consistent mAbs, to the advantage of scientists and patients alike.
INTEGRA D-ONE with ASSIST PLUS
The D-ONE single channel pipetting module enables hands-free transfers from individual tubes or wells using the ASSIST PLUS pipetting robot. This system effectively automates tedious tasks such as serial dilutions, sample normalization, hit picking, or pipetting of complex plate layouts, increasing productivity and reproducibility in the lab, while reducing hands-on time, processing errors, and physical strain.
The D-ONE is available in two volume ranges to ensure optimal pipetting performance across a wide span of volumes. Each D-ONE module has two pipetting channels, using 12.5 and 300 μl GRIPTIPS® pipette tips for low volumes, or 125 and 1,250 μl tips for high volumes. The D-ONE pipetting module is compatible with all INTEGRA GRIPTIPS used for benchtop pipetting devices, avoiding the need for special tips. The D-ONE tip deck can also accommodate two tip racks, allowing the ASSIST PLUS to switch automatically between different GRIPTIPS without tedious manual intervention, offering longer walk-away times.
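Sample normalization, one of the tasks such systems automate, reduces to simple C1V1 = C2V2 arithmetic applied per well. The short Python sketch below computes illustrative transfer volumes for diluting DNA samples to a common target concentration; all concentrations, volumes, and well labels are invented for the example.

# Sketch: per-sample volumes for normalizing DNA to a target concentration.
# All numbers are hypothetical; the calculation is based on C1*V1 = C2*V2.
TARGET_CONC = 10.0   # ng/uL desired final concentration (C2)
FINAL_VOL = 50.0     # uL final volume per well (V2)

samples = {"A1": 42.7, "A2": 18.3, "A3": 96.1}  # measured ng/uL per well (C1)

for well, conc in samples.items():
    dna_vol = TARGET_CONC * FINAL_VOL / conc      # V1 = C2*V2 / C1
    diluent_vol = FINAL_VOL - dna_vol
    print(f"{well}: {dna_vol:.1f} uL DNA + {diluent_vol:.1f} uL diluent")

An automated liquid handler executes exactly this kind of per-well worklist, which is why it removes both the tedium and the transcription errors of doing the arithmetic and pipetting by hand.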
Integrating Multi-Omics Approaches in Life Science Research
Learn how omics technologies are accelerating research breakthroughs
By Marnie Willman
The life sciences have undergone a technological revolution driven by the development of various omics approaches, such as genomics, proteomics, and metabolomics. These tools have transformed research by offering unprecedented insights into the molecular underpinnings of health and disease.
Each omics technology reveals a piece of the puzzle, but the real power lies in integrating these different datasets—a concept known as multi-omics. By combining insights from multiple molecular layers, researchers can form a comprehensive view of complex biological systems and gain deeper insights into disease mechanisms.
Overview of omics technologies
Genomics
The development of next-generation sequencing (NGS) technologies has propelled genomics forward, allowing researchers to sequence entire genomes quickly and cost effectively. NGS platforms provide high-resolution data that enable the identification of genetic variations, including mutations associated with diseases like cancer. Recently, genomics has expanded into areas such as epigenomics and structural genomics, enabling scientists to study the genetic code and the regulatory mechanisms controlling gene expression and large-scale genomic architecture.
Proteomics
Since proteins carry out most cellular functions, studying their expression patterns can reveal much about disease processes and cellular health. Mass spectrometry and protein arrays are core techniques in proteomics, allowing the quantification and identification of thousands of proteins from complex biological samples. Recent advancements in proteomics include quantitative proteomics and post-translational modification analysis, providing critical insights into how proteins are regulated and how their activity can change in disease states. Proteomics is particularly valuable in drug development and biomarker discovery.
Metabolomics
By studying the metabolome, we can uncover changes in metabolic pathways associated with disease, nutrition, or environmental exposure. Techniques such as nuclear magnetic resonance spectroscopy and liquid chromatography-mass spectrometry are commonly used to detect and quantify metabolites. Advances in targeted and untargeted approaches allow researchers to either focus on specific metabolites or perform a broad sweep of the metabolic landscape. Metabolomics is key to understanding diseases like diabetes, cardiovascular disorders, and metabolic syndromes.
Other omics
Other omics fields, such as transcriptomics and epigenomics, provide additional layers of information. Techniques like RNA sequencing allow researchers to measure transcript levels and analyze differential expression patterns. Epigenomics investigates heritable changes in gene function, focusing on modifications such as DNA methylation and histone modification that can alter gene expression without changing the underlying genetic code. These omics approaches add further depth to our understanding of cellular processes and disease mechanisms.
The benefits of multi-omics integration
Comprehensive view of biological systems
By integrating data from multiple omics layers, researchers can gain a holistic view of cellular functions and molecular interactions. For instance, genomics can reveal mutations present in a cell, but combining it with proteomics can show how those mutations alter protein expression and activity.
Metabolomics provides additional context by showing how these changes impact metabolic pathways. This integrated approach offers a detailed understanding of biological systems and disease mechanisms that would be missed using a single omics approach. Cancer research has particularly benefited from integrating genomics with proteomics, leading to new insights into the molecular pathways driving tumor growth.1
Enhanced disease mechanism understanding
Multi-omics integration has proven powerful in uncovering the underlying mechanisms of complex diseases. In oncology, multi-omics approaches have revealed how genetic mutations, protein expression changes, and metabolic shifts work together to drive disease progression.2
This detailed understanding enables researchers to map signaling networks that control cell growth and survival, identifying potential therapeutic targets that might be overlooked when using a single omics approach. Multi-omics research is also advancing our understanding of neurodegenerative diseases, autoimmune disorders, and cardiovascular diseases, where complex molecular changes occur across different biological layers.3,4,5
“By integrating data from multiple omics layers, researchers can gain a holistic view of cellular functions and molecular interactions.”
Improved biomarker discovery and personalized medicine
Combining genomic and proteomic data has led to the identification of new biomarkers for cancer and cardiovascular diseases. These biomarkers enable the development of more precise diagnostic tools and real-time patient monitoring.
Multi-omics also paves the way for personalized medicine, where treatment plans are tailored to individual molecular profiles. Returning to the previous oncology example, multi-omics data allows researchers to stratify patients into subgroups based on their unique molecular characteristics, leading to more targeted therapies and better outcomes.6
Challenges and solutions in multi-omics integration
Data complexity and management
One of the greatest challenges in multi-omics research is the vast volume and complexity of data generated by each omics technology. Genomic datasets can contain millions of data points, and when combined with proteomic and metabolomic data, the complexity increases. Managing, storing, and analyzing such vast data requires robust bioinformatics tools.
New computational pipelines and data integration frameworks are helping address these challenges by processing and standardizing data from multiple omics sources, enabling researchers to draw meaningful conclusions.
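As a simple illustration of the kind of standardization such frameworks perform, the short Python sketch below joins three hypothetical omics tables on a shared sample identifier and z-scores each layer so features are on comparable scales. The file names and column layout are invented for the example and do not refer to any specific framework.

# Illustrative sketch: merging three omics layers on a shared sample ID.
# File names and column layouts are hypothetical; each CSV is assumed to
# hold one row per sample and one column per measured feature.
import pandas as pd

layers = {
    "genomics": "variants_per_gene.csv",
    "proteomics": "protein_abundance.csv",
    "metabolomics": "metabolite_levels.csv",
}

merged = None
for name, path in layers.items():
    df = pd.read_csv(path, index_col="sample_id")
    # Standardize each feature (z-score) so layers are comparable in scale.
    df = (df - df.mean()) / df.std()
    # Prefix columns so the originating layer stays traceable after the join.
    df = df.add_prefix(f"{name}_")
    merged = df if merged is None else merged.join(df, how="inner")

print(merged.shape)  # samples x combined features across all three layers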
Interpreting multi-omics data
Interpreting multi-omics data is also challenging because researchers must correlate findings from different molecular layers. Changes in gene expression may not correspond directly to changes in protein levels due to post-transcriptional regulation. Advanced integration algorithms and statistical models are being developed to identify relationships between omics datasets, bridging gaps between genomics, proteomics, and metabolomics, and creating unified biological models that reflect the interaction between genes, proteins, and metabolites.
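One elementary building block of such integration is a per-gene correlation between transcript and protein abundance across samples. The sketch below assumes two hypothetical tables indexed by the same sample identifiers and uses a rank-based (Spearman) correlation, since the two layers rarely share a scale.

# Sketch: per-gene Spearman correlation between mRNA and protein levels.
# Both input tables are hypothetical, indexed by sample, with genes as
# columns; they are assumed to use the same sample and gene identifiers.
import pandas as pd
from scipy.stats import spearmanr

rna = pd.read_csv("transcript_levels.csv", index_col="sample_id")
protein = pd.read_csv("protein_levels.csv", index_col="sample_id")
protein = protein.loc[rna.index]  # align sample order across layers
shared_genes = rna.columns.intersection(protein.columns)

results = {}
for gene in shared_genes:
    rho, pval = spearmanr(rna[gene], protein[gene])
    results[gene] = (rho, pval)

corr = pd.DataFrame(results, index=["rho", "p_value"]).T
# Genes with weak mRNA-protein correlation hint at post-transcriptional control.
print(corr.sort_values("rho").head())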
Cost and resource considerations
Conducting multi-omics studies can be resource intensive, requiring multiple high-throughput platforms and advanced data analysis tools. However, technological advancements, such as miniaturized sequencing platforms and automation technologies, like automated liquid handlers, are making these techniques more cost effective. Cloud-based bioinformatics solutions also provide scalable data processing options, reducing the need for specialized infrastructure and increasing accessibility for a broader range of researchers.
Future directions and emerging trends
Advances in technology
Several technological innovations are shaping the future of multi-omics, including the development of single-cell omics.
Traditional bulk analyses often average out molecular signals across populations of cells, but single-cell technologies, such as single-cell RNA sequencing, allow for the exploration of cellular heterogeneity.7
As single-cell techniques become more scalable, they will continue to play a key role in multi-omics studies.
Real-time in vivo monitoring is another emerging trend.
Technologies that can analyze omics data in real time within living organisms allow for dynamic tracking of disease progression and treatment responses. Innovations like wearable biosensors and microfluidic chips enable continuous monitoring of molecular changes, bringing multi-omics into real-time healthcare and disease monitoring.
Long-read sequencing technologies are improving data quality in multi-omics studies by accurately sequencing complex regions of the genome.8
These technologies enhance our understanding of gene regulation processes and provide deeper insights into structural genomic variations.
Integration with artificial intelligence
Artificial intelligence (AI) and machine learning are playing an increasingly important role in analyzing and interpreting multi-omics data.9
AI models can detect patterns across genomics, proteomics, metabolomics, and other datasets that traditional methods might miss. AI-driven predictive models are also being developed to forecast patient responses to treatments based on multi-omics profiles, advancing the field of precision medicine. AI is further integrated into data analysis platforms, automating the process of data integration and interpretation, making multi-omics more accessible and efficient.
Potential applications and implications
The integration of multi-omics is poised to considerably advance disease research, drug discovery, and personalized medicine. In neurodegenerative diseases like Alzheimer’s and Parkinson’s, multi-omics is uncovering the complex interplay between genetic, protein, and metabolic changes that contribute to disease progression. This comprehensive approach is also making strides in infectious disease research by helping us better understand how pathogens interact with their hosts and identifying key molecular targets for vaccines and treatments.
In the realm of drug discovery, multi-omics is enabling the development of more detailed models of disease pathways, leading to the identification of new drug targets.10 This integrated approach not only accelerates drug development but also supports drug repurposing by revealing new uses for existing compounds based on shared molecular mechanisms.
Moreover, multi-omics research is driving the creation of next-generation diagnostics. Non-invasive tests like liquid biopsies, which analyze circulating tumor DNA, proteins, and metabolites, are emerging as powerful tools for more precise disease detection and monitoring.
The future of multi-omics is bright, with ongoing innovations in AI, single-cell analysis, and real-time monitoring.
These advances will continue to improve our understanding of biology and disease, accelerating the development of new treatments and diagnostic tools that could transform personalized medicine.
References
1. “The next horizon in precision oncology: Proteogenomics to inform cancer diagnosis and treatment.” https://www.cell.com/cell/fulltext/S0092-8674(21)00285-3
2. “Multi-Omics Profiling for Health.” https://pmc.ncbi.nlm.nih.gov/articles/PMC10220275/
3. “The promise of multi-omics approaches to discover biological alterations with clinical relevance in Alzheimer’s disease.” https://pmc.ncbi.nlm.nih.gov/articles/PMC9768448/
4. “Multi-Omics Approaches in Immunological Research.” https://pmc.ncbi.nlm.nih.gov/articles/PMC8226116/
5. “Multi-Omics Network Medicine Approaches to Precision Medicine and Therapeutics in Cardiovascular Diseases.” https://pmc.ncbi.nlm.nih.gov/articles/PMC10038904/
6. “Chapter Four - Precision medicine with multi-omics strategies, deep phenotyping, and predictive analysis.” https://www.sciencedirect.com/science/article/abs/pii/S1877117322000254?via%3Dihub
7. “Dissecting Cellular Heterogeneity Using Single-Cell RNA Sequencing.” https://pmc.ncbi.nlm.nih.gov/articles/PMC6449718/
8. “Method of the year: long-read sequencing.” https://www.nature.com/articles/s41592-022-01730-w
9. “Artificial Intelligence in Omics.” https://pmc.ncbi.nlm.nih.gov/articles/PMC10025753/
10. “Multi-Omics Integration for the Design of Novel Therapies and the Identification of Novel Biomarkers.” https://pmc.ncbi.nlm.nih.gov/articles/PMC10594525/
From Data to Discovery: Crafting Sequencing Bioinformatics Workflows
Discover the key factors for designing efficient, scalable, and high-quality bioinformatics workflows for sequencing data analysis
By Jordan Willis
Modern sequencing technologies produce immense volumes of data, but raw reads alone can’t lead to novel biological insights. Well-designed bioinformatics workflows are now essential for rapidly turning sequencing data into accurate, reliable results for further analysis and interpretation.
The need for customized workflows has resulted in a variety of methodologies, but how do labs decide upon the best approach for their specific needs? Whether you’re optimizing an existing pipeline or designing one from scratch, this article reviews the concepts and criteria needed to build and maintain efficient sequencing bioinformatics workflows.
Understanding the advantages and challenges of standard bioinformatics solutions for sequencing data
Complexity, flexibility, and cost are the key factors to consider when designing a bioinformatics workflow. Overall, the workflow should be adapted to the lab’s research objectives, computational infrastructure, and technical expertise. Selecting the appropriate bioinformatics solution requires balancing many factors against lab-specific needs and constraints.
Each solution has distinct strengths and challenges depending on its workflow architecture and resource requirements.
This makes the choice context- and goal-dependent. Before discussing the different solutions, several universal workflow attributes should be highlighted:
Accuracy: The correctness of data-related processes like variant calling, read alignment, and quantification.
Reliability: The ability to maintain consistent performance across datasets while minimizing errors and batch effects.
Ease of use: The quality of user-friendly interfaces, clear documentation, and workflow automation.
Computational efficiency: The optimization of memory usage, processing power, data storage, and resource allocation to minimize processing time and costs.
Reproducibility: The ability to consistently generate the same results when processing identical datasets under the same computational conditions.
Integration capability: Forward and backward compatibility with lab hardware and software.
Scalability: The capacity to efficiently upgrade infrastructure and handle increasing data volumes.
The solutions themselves fall into three main categories that can be compared in terms of cost considerations, data requirements and computational resources, accuracy and reliability, and scalability and automation:
1. Do-it-yourself (DIY)
DIY bioinformatics workflows rely on in-house pipeline development, often based on freely available open-source tools and custom scripting. Offering the most flexibility and transparency, DIY enables unparalleled pipeline customization and oversight. Programming languages like Python and R are widely used for data analysis tasks, such as statistical modeling and genome analysis, while Bash serves as a powerful shell scripting language for optimizing workflows by automating tasks like file management and tool execution.
Bioinformatics tools such as the Burrows-Wheeler Aligner (BWA), the Genome Analysis Toolkit (GATK), and SAMtools are often integrated into these workflows.
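To make this concrete, here is a minimal sketch of what a DIY pipeline wrapper can look like in Python, chaining BWA, SAMtools, and GATK through the shell. The file names, thread count, and directory layout are illustrative assumptions, and the sketch presumes the three tools are installed and the reference genome is already indexed.

# Minimal sketch of a DIY variant-calling pipeline wrapper.
# Assumes bwa, samtools, and gatk are on PATH, and that the reference is
# already indexed (bwa index, samtools faidx, plus a GATK sequence
# dictionary). File names are placeholders.
import subprocess

REF = "reference.fa"
READS = ["sample_R1.fastq.gz", "sample_R2.fastq.gz"]
BAM = "sample.sorted.bam"
VCF = "sample.vcf.gz"

def run(cmd):
    """Run a shell command and stop the pipeline on any failure."""
    print(">>", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. Align reads and coordinate-sort the output in one stream.
run(f"bwa mem -t 8 {REF} {READS[0]} {READS[1]} | samtools sort -o {BAM} -")
# 2. Index the BAM so downstream tools can random-access it.
run(f"samtools index {BAM}")
# 3. Call variants with GATK HaplotypeCaller.
run(f"gatk HaplotypeCaller -R {REF} -I {BAM} -O {VCF}")

Real DIY pipelines typically add logging, per-step provenance, and a workflow manager such as Snakemake or Nextflow, but this subprocess pattern is the kernel most custom scripts share.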
Comparison: The DIY approach is usually the most cost-effective and customizable method, thanks to its reliance on open-source tools, but it demands investment in skilled personnel, robust infrastructure, and ongoing maintenance.
It provides complete control over data handling, allowing deep customization of pipelines, benchmarking, and peer-reviewed validation; this level of flexibility, however, requires powerful computing resources.
While DIY workflows can be highly scalable and capable of automation, they require substantial manual configuration and expertise to ensure long-term sustainability.
2. Third-party
Third-party bioinformatics platforms provide pre-configured analysis pipelines with user-friendly interfaces and features like cloud-based computing. These options reduce in-house technical expertise requirements and development time by offering automated workflows for alignment, variant calling, and quality control. Third-party options may include skilled customer support, built-in regulatory compliance features, and custom data security measures.
Comparison: Third-party solutions can reduce labor and development costs but often require long-term financial commitments. By leveraging cloud-based processing, they reduce the need for local computational infrastructure; however, data upload and transfer speeds can become bottlenecks, particularly with large datasets. These solutions minimize errors by providing validated and automated workflows, but limit the control that in-house users have over pipeline parameters. Designed for scalability, third-party platforms often include automation features that improve efficiency in high-throughput environments.
3. Manufacturer-provided
Many sequencing platform manufacturers offer proprietary bioinformatics software optimized for their specific sequencing technology to ensure seamless integration and standardized data analysis. Manufacturer-provided pipelines often include built-in quality control metrics and default parameter settings designed for comparability and reproducibility within the lab and the broader research community.
Comparison: Manufacturer-provided solutions are often bundled with sequencing hardware, which can reduce initial investment costs, though additional fees may apply for extended features, software updates, or increased data storage.
These solutions are typically optimized for specific sequencing instruments, ensuring smooth data processing but often limiting interoperability with other platforms. While they offer built-in quality control measures and high efficiency within their proprietary scope, they may be less adaptable to new sequencing methodologies or customized analytical needs.
The importance of seamless integration with lab software systems
Each standard solution should be designed to integrate with laboratory informatics platforms to capitalize on validated tools for streamlined data management, enhanced traceability, and improved reproducibility. There are three primary types of lab software that can play a vital role in organizing, storing, and managing sequencing data:
1. Laboratory information management system (LIMS)
LIMS platforms serve as an organizational hub for managing metadata, overseeing tasks like reagent and workflow tracking, as well as sequencing run management. Integrating bioinformatics solutions with LIMS offers several advantages, including automated sample tracking and traceability, which ensures a complete history of processing steps. Workflow automation reduces manual data entry and minimizes errors in sequencing pipelines, improving efficiency and consistency. Additionally, LIMS enhances interoperability by facilitating communication between sequencing instruments and downstream bioinformatics tools, resulting in a smooth and integrated data management process.
“Each standard solution should be designed to integrate with laboratory informatics platforms to capitalize on validated tools for streamlined data management, enhanced traceability, and improved reproducibility.”
2. Electronic lab notebook (ELN)
ELNs function as digital repositories for documentation and data tracking. Integrating ELNs into bioinformatics pipelines increases data consistency and reproducibility by enabling standardized data recording, an important aspect of maintaining regulatory compliance. They can also promote collaboration in-house through shared protocols, scripts, and documented results.
3. Scientific data management system (SDMS)
SDMSs serve as secure storage and management platforms while ensuring compliance with data integrity standards. Integrating an SDMS into a bioinformatics workflow helps prevent data loss and enables controlled access to sequencing files. These systems also provide version control and auditability, tracking changes in data processing pipelines to provide transparency and reproducibility. Additionally, SDMS solutions are designed to scale efficiently, handling increasing data volumes without compromising performance, making them essential for managing large sequencing datasets.
Strategies for effective lab software system integration
A well-integrated bioinformatics ecosystem enhances data flow, minimizes errors, and improves the quality of sequencing data analysis. Before implementing lab software integration, labs should consider the following best practices:
Use standardized data formats: Ensure compatibility between all bioinformatics tools and lab software.
Implement API-based connectivity: Enable smooth communication and coordination between LIMSs, ELNs, SDMSs, and bioinformatics pipelines (a minimal sketch follows this list).
Automate data transfers: Reduce manual intervention by implementing scheduled data synchronization processes.
Ensure compliance with regulations: Adhere to data integrity requirements, particularly in regulated environments.
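To illustrate the API-based connectivity point from the list above, the sketch below posts a sequencing-run status update to a hypothetical LIMS REST endpoint. The URL, token, and payload schema are invented for the example; a real integration would follow the specific API contract your LIMS vendor documents.

# Hypothetical example: notifying a LIMS that a sequencing run has finished.
# The endpoint, token, and payload schema are illustrative only; consult
# your LIMS documentation for the real API contract.
import requests

LIMS_URL = "https://lims.example.org/api/v1/runs"   # placeholder endpoint
API_TOKEN = "REPLACE_WITH_REAL_TOKEN"

payload = {
    "run_id": "RUN-2024-0042",
    "status": "complete",
    "output_path": "/data/sequencing/RUN-2024-0042/",
}

response = requests.post(
    LIMS_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()  # fail loudly so the update is not silently lost
print("LIMS acknowledged run:", response.json())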
Data integrity in regulated laboratories
Data integrity is critical for sequencing bioinformatics workflows, especially in regulated environments such as clinical, pharmaceutical, and forensic laboratories. Regulatory bodies enforce strict guidelines to maintain the reliability, traceability, and security of sequencing data.
The role of bioinformatics in data integrity compliance
Bioinformatics platforms facilitate data integrity through built-in compliance features like audit logs, encryption, and traceability tools. Additionally, by automating workflows, human error is reduced and analysis becomes standardized to increase reproducibility. Seamless integration with LIMSs, ELNs, and SDMSs further enhances traceability and regulatory compliance. Maintaining data integrity is essential for producing high-quality, reproducible sequencing results; bioinformatics workflows must be created with this fundamental concept in mind.
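As a minimal, platform-agnostic illustration of the audit-trail idea, the sketch below appends a timestamped, user-attributed record each time a pipeline event occurs. A real regulated system would add tamper evidence (such as hash chaining) and electronic signatures on top of this basic pattern.

# Minimal audit-trail sketch: append-only, timestamped, user-attributed log.
# Illustrative only; a compliant system needs tamper-evidence and signatures.
import getpass
import json
from datetime import datetime, timezone

AUDIT_LOG = "pipeline_audit.jsonl"

def audit(event, **details):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": getpass.getuser(),
        "event": event,
        "details": details,
    }
    with open(AUDIT_LOG, "a") as fh:
        fh.write(json.dumps(record) + "\n")

audit("variant_calling_started", sample="RUN-2024-0042", tool="gatk HaplotypeCaller")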
Securing sequencing data integrity: best practices and key challenges
To safeguard sequencing data integrity and ensure compliance, labs should:
Maintain audit trails and electronic signatures to log data access, modifications, and approvals
Implement automated data validation, including quality control checks, checksums, and redundancy measures to detect errors (see the checksum sketch after this list)
Use secure and redundant storage solutions with automated backups and disaster recovery protocols
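As a concrete example of the checksum point above, the sketch below verifies sequencing files against a manifest of SHA-256 hashes. The manifest format is an assumption chosen to match the output of the standard sha256sum utility.

# Sketch: verify file integrity against a manifest of SHA-256 checksums.
# Assumed manifest format: one "<sha256>  <filename>" pair per line, as
# produced by the standard `sha256sum` utility.
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large FASTQ/BAM files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest="checksums.sha256"):
    for line in Path(manifest).read_text().splitlines():
        expected, name = line.split(maxsplit=1)
        status = "OK" if sha256_of(name) == expected else "MISMATCH"
        print(f"{name}: {status}")

verify()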
Despite best efforts to maintain data integrity, several key challenges persist in sequencing workflows that can compromise the accuracy and reliability of results:
Large data volumes from next-generation sequencing require robust storage, validation, and traceability
Risks of data corruption or errors during transfer between platforms, tools, and storage systems
Managing version control and ensuring reproducibility amidst complex regulatory compliance requirements
Bioinformatics workflows are the engine driving the exploration of sequencing data
When properly designed and customized, bioinformatics workflows are indispensable tools for analyzing sequencing data and generating meaningful biological information.
Selecting the right bioinformatics solution requires careful consideration of factors such as cost, data integrity, and integration with lab software; clearly defined long-term project goals will also help inform decision-making. With continuous improvements in sequencing and computational methods, bioinformatics workflows for sequencing data will only become more efficient at unlocking the world’s biological secrets.
INTEGRA Biosciences is a leading provider of high-quality laboratory tools and consumables for liquid handling. The company is committed to creating innovative solutions that fulfill the needs of its customers in research, diagnostics, and quality control within the life sciences markets and medical sector. INTEGRA’s engineering and production teams in Zizers, Switzerland, and Hudson, NH, USA, strive to develop and manufacture instruments and consumables of outstanding quality. Today, INTEGRA’s innovative laboratory products are widely used all around the world, where they help scientists accelerate scientific discovery. INTEGRA is an ISO 9001 certified company.
www.integra-biosciences.com