Building Better Imaging Systems from the Ground Up
From foundational practices to advanced capabilities, learn how to improve image quality, reproducibility, and efficiency
IMPROVE Accuracy in Microscopy
MANAGE Infrastructure Requirements
INTEGRATE AI into Imaging
Table of Contents
Exploring Microscopy: Tools, Techniques, and Innovations
Building a Strong Foundation in Microscopy
Making Measurements with a Microscope
How SEM/EDS Works and Its Applications in Materials Science
Managing Electron Microscope Safety in the Lab
How to Plan for Vibration Control in Lab Buildings
Microscopy Data Integrity Checklist
The Future of Microscopy
From Separate Systems to Single Workflows: Making Microscopy Integration Work
Innovations in Microscopy
Advances in Microscopy and Imaging: Enhancing Live Cell Capabilities with New Technology and Software
Leveraging AI to Enhance Super-Resolution Confocal Microscopy
Integrating Artificial Intelligence with Digital Pathology
Advancements in Scanning Electron Microscopes
Cryo-EM in Drug Discovery
This resource guide brings together foundational topics such as measurement accuracy and instrument safety with emerging capabilities in advanced imaging, including cryo-EM, super-resolution techniques, and AI-driven image analysis. A data integrity checklist is also included to help labs strengthen documentation practices.
Exploring Microscopy: Tools, Techniques, and Innovations
Navigate core concepts and recent advances in microscopy
Microscopy is a central technique in many labs, supporting applications ranging from materials characterization and quality control to drug discovery. But working with microscopes today involves far more than just operating the instrument. It requires close attention to the many variables that influence image quality and reproducibility, as well as adherence to regulatory standards and documentation practices. Labs must also navigate a growing number of imaging technologies, each offering distinct capabilities, infrastructure needs, and trade-offs.
Quality is directly shaped by the capabilities of the microscope, as well as by the broader systems and workflows in which it operates. As pressure mounts to deliver high-quality results quickly and consistently, understanding how these technical and operational factors interact is essential for making informed decisions and maintaining long-term performance.
Chapter One
Building a Strong Foundation in Microscopy
Before you can fully take advantage of the latest microscopy techniques and technologies, it’s essential to understand the fundamentals—how to collect measurements, how the technology functions, and how to ensure optimal performance.
This chapter covers the core principles that guide microscopy in both research and industrial labs, from SEM-EDS applications to facility design and instrument safety. These resources also serve as a starting point for evaluating current practices and identifying opportunities to strengthen compliance, measurement accuracy, and overall system performance.
Making Measurements with a Microscope
The instrument, calibration, and surroundings determine the results
By Mike May, PhD
Microscopes are typically used to collect qualitative information, such as observing the behavior of microscopic organisms or cell replication in real time. But they’re also very useful in collecting quantitative information. Academic and industrial scientists often rely on microscopy to measure dimensions or inspect materials for research or production purposes. In these applications, a microscope’s accuracy, resolution, and precision determine the value of the measurements.
Collecting the desired measurements, however, can be challenging. It depends on the right specifications and environmental factors, as well as proper maintenance procedures.
The key characteristics of measuring with microscopes
Several factors determine how effectively a microscope can perform precise measurements. “The most influential factor is typically the numerical aperture of the objective lens,” says Martí Duocastella, PhD, professor of applied physics at the University of Barcelona in Spain. A higher numerical aperture produces finer spatial resolution, that is, a smaller minimum distance at which two objects can still be distinguished. If nearby features cannot be distinguished from one another, measurements of their size or position will be skewed.
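To make the relationship concrete, the short sketch below computes the Abbe diffraction limit, d = wavelength / (2 × NA), for a few objectives. The wavelength and NA values are hypothetical examples, not figures supplied by the interviewees.

```python
# Rough illustration: lateral resolution via the Abbe diffraction limit, d = wavelength / (2 * NA).
# The illumination wavelength and numerical apertures below are hypothetical example values.

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable separation (nm) for a given wavelength and objective NA."""
    return wavelength_nm / (2 * numerical_aperture)

for na in (0.25, 0.75, 1.4):
    print(f"NA {na}: ~{abbe_limit_nm(550, na):.0f} nm resolvable at 550 nm illumination")
```

Higher-NA objectives resolve proportionally finer detail, which is why the numerical aperture dominates measurement performance.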
For three-dimensional measurements in particular, Duocastella says, “The ability to control the distance between the sample and the objective lens can also strongly affect the final microscope accuracy.”
Other factors matter as well. “The quality and strength of the light source are critical for obtaining clear images,” says Cahit Perkgöz, PhD, assistant professor of computer engineering at Eskişehir Technical University in Turkey. “Also, a stable environment, minimizing vibrations, and controlling environmental conditions, such as temperature and humidity, positively impact microscope performance and accuracy.”
Confirming the specs
After deciding that a microscope’s specifications will, on paper, meet your research needs, it is still important to confirm that the microscope will yield the performance you expect.
This is accomplished by calibrating the microscope for accuracy, along with determining its real-world spatial resolution and precision.
“First and foremost is calibration,” Perkgöz notes. To calibrate a microscope, you must first set it up in the location where it will be used, with the light source that will be used for measurements. Calibrating a microscope in the same conditions in which it will be used minimizes the risk of adverse environmental influence.
With the microscope’s light intensity adjusted for the best view of a reference sample of a known size, you can then calibrate the microscope to ensure accuracy. Then, you can use resolution test strips or gratings to determine the device’s spatial resolution. Resolution test strips are cards with line patterns that are placed alongside the sample to be observed.
The patterns have lines of varying thickness and spacing. The smallest line pattern that can be distinctly resolved by the microscope, with no lines blurring together, indicates the microscope’s spatial resolution. Finally, determining a microscope’s precision requires multiple measurements of an object. “Statistics of the acquired data, such as the standard deviation of the object’s position, directly provide information on the precision,” Duocastella says.
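The arithmetic behind these checks is simple. The sketch below assumes a hypothetical stage-micrometer interval and a handful of hypothetical repeat measurements; it shows how a pixel-to-micrometer scale factor is derived during calibration and how the standard deviation of repeated measurements quantifies precision.

```python
import statistics

# Hypothetical calibration: a 100 µm stage-micrometer interval spans 250 pixels in the image,
# giving the scale factor used to convert pixel measurements to real units.
reference_length_um = 100.0
reference_length_px = 250.0
um_per_pixel = reference_length_um / reference_length_px   # 0.4 µm per pixel

# Hypothetical repeated measurements of the same feature (in pixels) to estimate precision.
repeat_measurements_px = [62.1, 61.8, 62.4, 62.0, 61.9]
lengths_um = [m * um_per_pixel for m in repeat_measurements_px]

print(f"Mean length: {statistics.mean(lengths_um):.2f} µm")
print(f"Precision (standard deviation): {statistics.stdev(lengths_um):.3f} µm")
```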
So, for the most accurate and precise data, the microscope, calibration, analysis, and the surroundings all matter.
“After deciding that a microscope’s specifications will, on paper, meet your research needs, it is still important to confirm that the microscope will yield the performance you expect.”
How SEM/EDS Works and Its Applications in Materials Science
This versatile technique offers insight into the structure and composition of a range of materials
By Aimee Cichocki
Scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM/EDS) is an important tool in the field of materials science and can be used to examine the structure and composition of a wide range of samples. It enables advanced surface analysis, with applications in multiple areas, including product failure investigation and contaminant identification.
SEM/EDS offers several key advantages. This technique is versatile, accurate, and usually non-destructive. What’s more, it can provide qualitative analysis of all but two elements in the periodic table (hydrogen and helium), making it applicable to a range of materials science applications.
How SEM/EDS works
SEM/EDS is a combined technique that uses a scanning electron microscope and energy-dispersive X-ray spectroscopy to analyze materials. The SEM provides the imaging component, while EDS provides elemental detection and analysis. Whereas traditional microscopy uses light to create an optical signal, a scanning electron microscope uses electrons.
The microscope works by generating a beam of electrons from an emitter-cathode within an electron gun. This beam is then accelerated and focused by an anode and a series of electromagnetic lenses. It is scanned across the surface of the sample, where it interacts with the sample’s atoms, causing secondary electrons and characteristic X-rays to be emitted from the surface. The secondary electrons are collected to form the image, while the emitted X-rays reach the EDS detector.
EDS can be used with electron microscopes to determine the chemical composition of materials. It works by measuring the energy of X-rays emitted when the electron beam strikes the specimen surface and then uses this information to determine which elements are present and at what concentration.
Because the electron beam is highly localized, EDS is used to provide high-resolution chemical composition maps, giving a clear understanding of processes occurring within a material. EDS is widely used in application-specific packages, for example, to obtain detailed particle classifications.
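As a simplified illustration of how EDS turns detected X-ray energies into elemental identifications, the sketch below matches a few hypothetical peak energies against published K-alpha line energies. Real EDS software performs full peak fitting, background subtraction, and quantification; this shows only the lookup idea, and the peak list and tolerance are assumptions.

```python
# Matching detected EDS peak energies (keV) to elements via characteristic K-alpha lines.
# The line energies are standard reference values; the detected peaks and tolerance are hypothetical.
K_ALPHA_KEV = {"C": 0.277, "O": 0.525, "Al": 1.487, "Si": 1.740, "Fe": 6.404, "Cu": 8.048}

def identify(peaks_kev, tolerance_kev=0.05):
    """Map each detected peak to the closest known K-alpha line within the tolerance."""
    matches = {}
    for peak in peaks_kev:
        element, energy = min(K_ALPHA_KEV.items(), key=lambda kv: abs(kv[1] - peak))
        matches[peak] = element if abs(energy - peak) <= tolerance_kev else "unidentified"
    return matches

print(identify([0.28, 1.74, 6.41]))   # C at 0.28 keV, Si at 1.74 keV, Fe at 6.41 keV
```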
Applications of SEM/EDS in materials science
The versatility and high-resolution capabilities of SEM/EDS lend themselves to a variety of applications. EDS is commonly used across various materials science fields, including geology, metallurgy, microelectronics, ceramics, coatings, cements, and soft materials, among others. It can be used to characterize every aspect of a material’s life cycle, including development, process control, and failure analysis.
SEM/EDS is typically used as an investigative approach and can be tailored to specific applications. A broad range of industries find use for this technique, including automotive supplies, plastics manufacturing, pharmaceuticals, and electronics manufacturing, to name a few.
One of the most common uses for SEM/EDS is surface characterization. The technique can be used to study the surface topography and morphology of materials such as metals, composites, polymers, and ceramics. This information is helpful in understanding things like the effects of the manufacturing process and the degradation and wear of materials.
For example, it enables manufacturers to investigate failure mechanisms or characterize defects in devices like transistors or integrated circuits. Not only can SEM/EDS provide information about the surface structure of various materials, but it can also measure their elemental composition. This makes it particularly useful for applications such as studying nanoparticles or examining corrosion layers. Moreover, SEM/EDS can be used to study organic and inorganic materials. Other common uses for SEM/EDS include contaminant identification in various manufacturing processes and forensic analysis to analyze trace evidence, such as gunshot residue, paint fragments, and explosives.
SEM/EDS applications are further enhanced through the use of advanced technologies such as machine learning and 3D imaging. SEM/EDS processes often produce large datasets, which can be labor-intensive to analyze manually. Machine learning algorithms can be used to identify correlations between material properties and speed up analysis. Meanwhile, electron tomography can be used alongside EDS to develop 3D images of materials. This has a variety of applications, including process control and technical cleanliness.
Advantages and limitations of SEM/EDS
As with all analytical techniques, SEM/EDS has its own set of advantages and drawbacks. A key strength is that it provides precise chemical information about samples at very small scales. EDS is also versatile, providing specimen information over a range of tens of nanometers to tens of centimeters, and it is accurate, sensitive to low concentrations, and non-destructive in most situations.
EDS is also relatively straightforward to execute and can be used alongside additional investigative methods. It requires minimal sample preparation and is easily combined with other techniques.
One such example is electron backscatter diffraction (EBSD). In addition to secondary electrons, backscattered electrons are emitted from the surface of the analyzed sample. Unlike secondary electrons, which originate from the sample’s own atoms, backscattered electrons are incident beam electrons scattered back out of the sample. EBSD is used to determine crystallographic data that EDS alone cannot provide.
A limitation of EDS is that it cannot be used to analyze hydrogen or helium. These elements have electrons only in a single shell, so they do not emit the characteristic X-rays that EDS detects. In addition, the X-rays produced by lithium, beryllium, and other low atomic number elements may be too weak for reliable measurement. Carbon also presents issues, as it is often present as a surface contaminant.
Another drawback is that SEM involves subjecting the sample to high-vacuum conditions. As such, the technique is generally not used to analyze liquid samples, although special preparation techniques have been developed for select cases. As with other techniques, there are also limitations in terms of sample size and element concentration. Some of these may be overcome by adjusting sample preparation techniques, although an alternative technique may be necessary in some cases.
SEM/EDS is considered a vital tool for many applications. With its many advantages, few drawbacks, and potential for combination with other detection methods, SEM/EDS is an exciting and promising technique for high-resolution imaging and chemical analysis in materials science research.
“Not only can SEM/EDS provide information about the surface structure of various materials, but it can also measure their elemental composition.”
Managing Electron Microscope Safety in the Lab
High-voltage and radiation hazards make electron microscope safety a critical responsibility for lab managers
By Michelle Gaulin
Electron microscopes are powerful tools for nanoscale imaging and materials analysis, but their use involves specific safety risks—particularly from high-voltage components and the potential for radiation. While modern instruments are heavily shielded and radiation levels are minimal, unsafe conditions can arise during servicing, hardware modification, or improper use. Lab managers are responsible for mitigating these hazards through training, procedural oversight, and coordination with environmental health and safety teams.
Understanding electron microscopy safety risks
Electron microscopes use high-energy beams to illuminate and analyze specimens. As the beam interacts with internal components and the sample, it can release secondary emissions, including heat, light, and in some cases, low-level X-rays. These emissions are typically well-contained, but unsafe conditions can arise during servicing, shielding removal, or unauthorized hardware modifications.
Additionally, these instruments operate at high accelerating voltages. Scanning electron microscopes use between one and 30 kilovolts, while transmission electron microscopes use between 30 and 300 kilovolts, making electrical shock a serious concern. Properly functioning safety interlocks are critical to prevent access to high-voltage areas while the microscope is energized.
Managing electron microscopy radiation safety in the lab
While modern electron microscopes are heavily shielded and external radiation is generally insignificant, these instruments are still classified as radiation-producing devices. They must be registered with the appropriate state or institutional authority, and routine radiation surveys are required to verify that emissions remain within safe limits. Surveys should also be conducted at installation, after relocation, or whenever hardware changes could affect shielding. To meet these requirements, it is helpful to have a staff member trained as the radiation safety officer (RSO).
Electron microscopy safety protocols for lab managers
To ensure safe operation and regulatory compliance, lab managers should implement the following procedures, based on environmental health and safety guidelines:
• Ensure that only trained and authorized personnel are permitted to operate electron microscopes
• Confirm that a visible indicator light is installed and functioning to show when high voltage is being applied
• Use interlocks, physical barriers, or administrative controls to prevent access to active beam paths or areas where radiation scatter may occur
• Regularly verify shielding integrity and radiation levels using a calibrated survey meter
• Secure instruments against unauthorized use by locking the unit or the room and ensuring that beam paths are protected by fixed shielding
• Keep all unused ports closed and properly sealed to avoid accidental radiation exposure
• Maintain an operating log that records each session’s date, operator name, beam voltage, and exposure duration (a minimal logging sketch follows this list)
• Do not alter shielding, ports, or other built-in safety features; if changes are necessary, consult your RSO before proceeding
• Immediately report any suspected radiation exposure or equipment malfunction, and remove the instrument from service until it has been evaluated
• Register all electron microscopes with your institution’s environmental health and safety office before installation, and notify them of any future relocations, disposals, or acquisitions
• In an emergency or accident, notify appropriate safety personnel immediately and suspend microscope use until a full safety review has been completed
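As referenced in the log-keeping item above, a minimal operating-log sketch might look like the following. The file name and fields are assumptions; most facilities will use their institution's own log format or an electronic system.

```python
import csv
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("em_operating_log.csv")   # hypothetical log location
FIELDS = ["date", "operator", "beam_voltage_kv", "exposure_minutes"]

def log_session(operator: str, beam_voltage_kv: float, exposure_minutes: float) -> None:
    """Append one session record to the operating log, creating the file with a header if needed."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": datetime.now().isoformat(timespec="minutes"),
            "operator": operator,
            "beam_voltage_kv": beam_voltage_kv,
            "exposure_minutes": exposure_minutes,
        })

log_session("J. Doe", 20, 45)   # hypothetical entry: 20 kV session lasting 45 minutes
```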
Why electron microscopy safety in the lab depends on oversight
Most hazards associated with electron microscopy stem from preventable conditions such as untrained use, unauthorized modifications, or incomplete monitoring. With proper procedures, lab managers can ensure this essential equipment continues to operate safely, compliantly, and without disruption.
“These emissions are typically well‑contained, but unsafe conditions can arise during servicing, shielding removal, or unauthorized hardware modifications.”
How to Plan for Vibration Control in Lab Buildings
How to build low-vibration, flexible laboratory buildings that cater to the life sciences market
By Matthew Fickett, AIA, CPHC, LEED
With demand for flexible lab buildings on the rise, the importance of strategic lab planning has never been greater.
While many factors play into the lab design process, one of the most critical elements to plan for is vibration control.
Vibration—the periodic back-and-forth motion of the particles of an elastic body or medium—happens everywhere, and is oftentimes below the threshold of human perception.
However, this natural phenomenon has a major impact on lab testing and processes and can even dramatically alter the outcome of scientific experiments, making it a critical but sometimes overlooked component of lab design. This is because common lab equipment like microscopes, PCR machines, incubators, and 3D printers are highly sensitive to vibration, which can be caused by a number of internal and external factors.
Internally, floor vibrations from foot traffic, elevators, HVAC, fans, and air handling systems play a significant role in lab vibration, while external elements like road traffic, railroad proximity, and nearby construction sites can also cause an impact. Essentially, the extent to which vibration can be minimized throughout the building will ultimately influence the success of the research.
But to understand how to control vibration, it is important to first understand how it actually works. While vibration is the oscillation of something, examining how it oscillates, in terms of its frequency and velocity, is key to determining how to control it.
Frequency describes how quickly the vibrating medium goes from one place to another and back, completing a lap. In terms of vibration, it is the number of laps the vibrating medium completes per second, measured in Hertz (Hz): one lap per second is equivalent to 1 Hz.
While understanding vibration frequency is important, it is only the first step. The next step is determining how much the medium is vibrating. More specifically, identifying how much energy the vibration carries will reveal the amount of energy it has to affect the lab building. This is quantified as the root-mean-square (RMS) velocity. The higher the RMS value, the more strongly the interior elements and structure of the building will vibrate.
Further, a single object can vibrate in more than one way at the same time. At 1 Hz, it can vibrate very little; at 100 Hz, it can vibrate a lot. Take music, for example: a high-pitch violin note could vibrate in the air at 10,000 Hz, while a low bass singer could also vibrate in the air at 100 Hz. Both can be heard at once. Much in the same way, the floor in a building could be vibrating at both frequencies at the same time. In fact, in real life, everything vibrates to some extent at every frequency all the time. The question lies in how much.
Floor vibration in buildings is often described in micro-inches per second or micrometers per second. This refers to the RMS velocity figure, but without frequency information, it gives only a fragment of the picture. Is the floor vibrating at 8,000 mips at 100 Hz, but perfectly still at 10,000 Hz? Is it vibrating at 8,000 at all frequencies? Without frequency information, there is no way to determine the scope of the vibration. Instead, the full spectrum needs to be shown, plotting the RMS velocity at each frequency for the floor in question.
Note that “micro-inches per second” is often abbreviated “mips” but more properly written “μin/s.” If the metric system is used, the relevant unit is “μm/s.” In a hypothetical spectrum, a floor might vibrate at almost 10,000 μin/s at 8 Hz but at only 10 μin/s at 80 Hz.
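To make the units concrete, the sketch below converts a criterion value from μin/s to μm/s and computes an RMS velocity from a handful of hypothetical velocity samples.

```python
import math

MICROMETERS_PER_MICROINCH = 0.0254   # 1 µin = 0.0254 µm

def uin_to_um(v_uin_per_s: float) -> float:
    """Convert a velocity from µin/s to µm/s."""
    return v_uin_per_s * MICROMETERS_PER_MICROINCH

def rms(values):
    """Root-mean-square of a sequence of velocity samples."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# VC-A's flat portion (2,000 µin/s) expressed in metric units: about 50 µm/s.
print(f"2,000 µin/s = {uin_to_um(2000):.1f} µm/s")

# Hypothetical velocity samples (µin/s) measured in one frequency band.
samples = [3200, -2800, 3100, -2950, 3050]
print(f"RMS velocity: {rms(samples):.0f} µin/s")
```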
However, not every project comes with a vibration spectrum graph. So, how can this be shown without having to draw one?
In the 1980s, Eric Ungar and Colin Gordon faced similar obstacles and developed what is now known as VC (vibration criteria) curves. These are a set of ready-made lines on a graph that can be used to easily describe vibration with abbreviations like “VC-D” without the need for a spectrum or a table. They also incorporate ISO standards for vibration in various types of space.
Notice that the floor vibration in this hypothetical example is just barely below the “Residential Day (ISO)” line at its highest point, which means that this floor is suitable for an apartment but not for an operating room (or a sensitive lab!). Less vibration is always better for science.
It is important to note, however, that different science activities have varying vibration standards, which must be considered when determining suitable vibration levels for labs.
The most common vibration-sensitive activity in a lab is optical microscopy. Naturally, when examining very small objects, shaking the table makes it difficult to see them clearly. The crucial question here is: just how much shaking will make it too hard to see?
Fortunately, Colin Gordon and Bassett Acoustics produced an excellent reference work that relates various levels of magnification to acceptable vibration limits. Table 1 below is adapted from their paper. Note that the vibration limits are maximum allowable, meaning that measured vibration must be less than these levels.
Much of modern biological science involves modifications or additions to an optical microscope, which enables enhanced imaging or actual scientific work conducted under a microscope. These alterations substantially increase the sensitivity of the microscope. The paper described above gives values for some examples of these activities (Table 2).
Every building designer and owner wants to provide maximum flexibility to accommodate future scientific needs, but it would be impractical to provide VC-D throughout the entire floor plate. So, how can a whole building be planned?
Fortunately, most labs only need a few areas of low vibration to support a few microscopes (it is rare to see a microscope on every lab bench). This allows for the creation of microscope-suitable spaces without extending the low-vibration area to the whole floor plate. For this, two major tools can be utilized:
• Microscopes can be placed on pneumatic tables, whereby tabletops float on a piston filled with compressed air or nitrogen. These work just like shock absorbers in a car, and generally cut about 90 percent of the vibration from the floor. Because the vibration criteria are spaced logarithmically, with each curve allowing progressively less vibration than the one before it (as the tables below show), a reduction of this size is enough to improve conditions by one whole criterion (for example, from VC-B to VC-C).
Magnification      Vibration criteria       Comments
100x or less       Operating Room (ISO)     Flat portion of line at 4,000 μin/s
400x               VC-A                     Flat portion of line at 2,000 μin/s
1000x              VC-C                     Flat portion of line at 500 μin/s
Table 1: Vibration limits by microscope magnification level.
Activities                                                                   Vibration criteria   Comments
Digital imaging, fluorescence                                                VC-C                 Flat portion of line at 500 μin/s
Microinjection, micromanipulation, electrophysiology, confocal microscopy    VC-D                 Flat portion of line at 250 μin/s
Table 2: Acceptable vibration limits for microscopy activities.
• The floor structure is not homogeneous. At the center of a structural bay, far from any column, the floor can vibrate significantly. Directly next to a column, it is very unlikely that the floor will vibrate as much. In general, locating a microscope near a column can improve vibration by another whole criterion (for example, from VC-C to VC-D).
Each of these two techniques yields roughly one criterion of improvement; used together, they can improve conditions by about two criteria.
So, how is the right vibration target for a building determined without knowing the kind of lab that will be inside it? Imagine an optical microscope being used at 1,000x magnification, located on a pneumatic table, next to a column. In this case, a high-spec, yet still common, microscope is being considered, located using both techniques previously discussed.
• The microscope must sit on a surface that is VC-C, per the table above. That is the top of the pneumatic table.
• That means the floor under the table has to be at least VC-B, since the table improves vibration by about one “step.”
• If the floor right next to a column is VC-B, then a worst-case spot in the middle of a structural bay can be VC-A.
This process reveals that VC-A, which is approximately 2,000 μin/s above 8 Hz, serves as a safe baseline criterion for a laboratory building.
Note that this example started with a very high-end optical microscope. It is also common to assume that this type of optical work would be done on the ground floor or in a specially reinforced area of the structure, and instead use a microscope operating at around 400x or 600x as a baseline. In that case, the ISO standard for operating rooms (4,000 μin/s above 8 Hz) would be sufficient as a general baseline. In fact, 4,000 μin/s is often quoted as a figure for lab building planning.
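The sizing logic above can be written as simple bookkeeping: start from the criterion required at the microscope and relax it by one step for each mitigation. The sketch below encodes the criteria ordering used in this article's tables; it is a planning aid only, not a substitute for a vibration study.

```python
# Criteria ordered from least to most stringent, following the article's tables.
CRITERIA = ["Operating Room (ISO)", "VC-A", "VC-B", "VC-C", "VC-D"]

def baseline_for(required_at_scope: str, mitigations: int) -> str:
    """Criterion the bare floor must meet, given the criterion required at the microscope
    and the number of one-step mitigations (pneumatic table, column-adjacent placement)."""
    idx = CRITERIA.index(required_at_scope) - mitigations
    return CRITERIA[max(idx, 0)]

# A 1000x microscope needs VC-C at the tabletop; a pneumatic table plus a column location
# each buy one step, so the worst-case mid-bay floor can be VC-A.
print(baseline_for("VC-C", mitigations=2))   # VC-A
```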
Table 3 summarizes what work is possible for different baseline criteria. In this table, “special construction” can mean several things:
• The ground floor, with a concrete slab sitting on the earth. This generally vibrates a lot less than upper floors.
• Deeper structural beams. The depth of the beams reduces flexing, which in turn reduces the vibration energy (as measured by the RMS value).
• Specialty active vibration cancellation feet or tables. These use small actuators, controlled by a computer, that vibrate in a way that offsets the floor vibration to produce a steady surface. This is effective, but it isn’t cheap!
Of course, these are only general rules. There are many other factors, including the location of stairs, elevators, mechanical rooms, and perhaps most importantly, beam span length. Every individual science task is likely to have slightly different requirements as well.
It’s also important to remember that vibration isn’t just one number; the whole spectrum needs to be considered. VC curves are a good way to do that. Further, VC-A is a very good baseline target for lab buildings, but the ISO operating room standard can also be effective.
Armed with this knowledge, low-vibration, flexible laboratory buildings that cater to the life sciences market can be built, further safeguarding the costly research efforts of life sciences companies and creating an even stronger, science-forward future.
Microscopy type                              Building baseline (vibration in the center of a structural bay)
                                             ISO operating room (4,000 μin/s above 8 Hz)    VC-A (2,000 μin/s above 8 Hz)
100x or less                                 On a pneumatic table or near a column           Anywhere
400x                                         On a pneumatic table and near a column          On a pneumatic table or near a column
1000x                                        Special construction required                   On a pneumatic table and near a column
Digital imaging, fluorescence                Special construction required                   Special construction required
Microinjection, micromanipulation,
electrophysiology, confocal microscopy       Special construction required                   Special construction required
Table 3: Building conditions for various microscopy applications.
Microscopy Data Integrity Checklist
How to protect and preserve your imaging data
Proper documentation, secure storage, and traceability are essential to ensure that microscopy workflows meet both scientific and regulatory standards, such as those outlined in 21 CFR Part 11. This checklist aims to help you maintain accuracy, reproducibility, and compliance.
Image acquisition
Establish a regular calibration schedule according to the manufacturer’s guidelines, and document calibration in an accessible log
Use standardized protocols for image capture across users and experiments
Ensure all users are trained and qualified to operate the microscope and associated software
Implement software that automatically captures metadata, including experimental parameters, user ID, and date and time of acquisition
Data storage and security
Save original image files in the microscope’s proprietary file format—such as .nd2, .lif, .czi, etc.—to preserve image quality and metadata
Use clear and consistent file naming conventions
Never modify original files; all changes should be made to a separate, clearly labeled copy
Store data in a secure location with user authentication, access controls, and digital signature support
Use systems that automatically log all user activities and associated information, including time stamps and user IDs, to maintain a comprehensive audit trail (see the sketch following this checklist)
Enable automatic backups to local and off-site or cloud-based storage to protect against data loss and corruption
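As noted in the audit-trail item above, one lightweight way to protect original files is to fingerprint each raw acquisition and log who registered it and when. The sketch below is a minimal illustration using a SHA-256 hash and a JSON-lines log; the file names and log format are assumptions, and validated commercial systems handle this far more completely.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def register_original(image_path: str, user_id: str, log_path: str = "image_audit_log.jsonl") -> str:
    """Record a SHA-256 fingerprint of an original image file plus who/when metadata,
    appending one JSON line per event so later modifications can be detected."""
    digest = hashlib.sha256(Path(image_path).read_bytes()).hexdigest()
    entry = {
        "file": image_path,
        "sha256": digest,
        "user": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return digest

# Hypothetical usage: fingerprint a raw acquisition immediately after saving it.
# register_original("2024-06-01_sample01.nd2", user_id="jdoe")
```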
ALCOA
The key principles of data integrity are captured by the acronym ALCOA:
Attributable: Data must be traceable to the person who performed the work and the instrument used. If multiple individuals are involved, documentation should indicate who completed each task.
Legible: Data must be readable, clear, complete, and understandable. This includes the image and all associated metadata.
Contemporaneous: Records must be time-stamped and created at the time the activity is performed.
Original: Data should be kept in its original form. If processing occurs, the original data must remain easily retrievable, with a clear distinction between raw and processed versions.
Accurate: Data must be error-free and a true representation of the work performed. It should also contain sufficient information to support reproducibility.
Chapter Two
The Future of Microscopy
Microscopy is advancing rapidly, with innovations in optics, software, and artificial intelligence transforming how labs visualize and analyze samples. This chapter highlights breakthroughs across a wide range of imaging technologies—from live-cell imaging and super-resolution confocal microscopy to digital pathology and cryo-EM in drug discovery. You’ll also learn how AI is being integrated to enhance image quality, speed up interpretation, and create more reproducible results.
From Separate Systems to Single Workflows: Making Microscopy Integration Work
Combining imaging with analytical measurements streamlines data collection and shortens the time to actionable results
By Jordan Willis
Microscopy remains a core technique in research and quality assurance. However, its analytical utility can be limited without complementary data, whether chemical, thermal, or mechanical. Traditional workflows may require multiple technicians, switching between instruments, and manually integrating results from separate systems. In contrast, integrated microscopy workflows combine multiple analytical systems to reduce analysis time, minimize rework, and enable context-rich data interpretation under time-sensitive conditions.
Expanding analytical capability through multimodal integration
Integrating microscopy with other analytical methods enables labs to generate more complete, directly correlated datasets. Several broad analytical categories lend themselves especially well to integration with microscopy and are increasingly central to efficient lab workflows:
Spectroscopy
• Use: Adds molecular or chemical functional group information to visual observations.
• Example: Identifying mixed polymer domains in a blend using optical microscopy alongside infrared spectroscopy.
• Benefit: Improves material verification by correlating visual structure with chemical composition.
Thermal analysis
• Use: Links physical changes driven by heat absorption to visible structural effects.
• Example: Monitoring heat-induced transitions in emulsions while observing structural breakdown under polarized light.
• Benefit: Connects visible microscopic changes with thermal behavior, which is important for stability studies.
Mechanical failure testing
• Use: Shows how mechanical stress relates to failure at the microscopic level.
• Example: Imaging fracture surfaces of a composite after tensile testing.
• Benefit: Links physical failure with visible deformation patterns to guide design improvements.
Electrical or conductivity measurements
• Use: Combines imaging with electronic performance data.
• Example: Examining surface defects on a printed circuit while monitoring local electrical resistance.
• Benefit: Combines structural and functional data for troubleshooting.
Elemental mapping
• Use: Pinpoints the location and presence of specific elements or compounds.
• Example: Combining scanning electron microscopy (SEM) with energy-dispersive X-ray (EDX) analysis to investigate contamination in a metal part.
• Benefit: Enables spatially resolved chemical identification for quality investigations.
Implementing multimodal microscopy workflows in the lab
Lab managers considering integrated microscopy workflows must select the optimal combination of techniques to strike a balance between performance, practicality, and purpose.
When thoughtfully integrated, multimodal microscopy systems can increase confidence in results and reduce time spent on multiple instruments or re-running analyses.
Successful implementation depends on practical planning involving these main areas:
Integration
Look for hyphenated systems with integrated microscopy and analysis functions, like SEM-EDX, microscope FTIR, or advanced AFM platforms.
Compatibility and workflow fit
Investigate microscopy workflows for samples and projects that require both microscopy and other characterization. Prioritize systems that align with the lab’s sample types and throughput demands.
Data integration and interpretation
Seek software applications that can handle and harmonize data from multiple sources, including spatial, spectral, and physical formats; a toy harmonization example follows this list.
Regulatory and documentation requirements
Ensure integrated workflows meet relevant standards for traceability, validation, and audit readiness, especially in regulated environments.
Scalability and flexibility
Prioritize platforms and protocols that are flexible for expansion or reconfiguration.
Reproducibility and documentation
Ensure that the combined techniques meet or exceed all required standard operating procedures and validation protocols.
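As a toy illustration of the data harmonization called for above, the sketch below joins an imaging-derived measurement table with a spectroscopy identification table on a shared region ID. The column names and values are hypothetical, and real integration software manages far richer spatial and spectral formats.

```python
import pandas as pd

# Hypothetical imaging-derived measurements, keyed by region ID.
imaging = pd.DataFrame({
    "region_id": ["R1", "R2", "R3"],
    "defect_area_um2": [12.4, 3.1, 48.9],
})

# Hypothetical spectroscopy identifications for the same regions.
spectroscopy = pd.DataFrame({
    "region_id": ["R1", "R2", "R3"],
    "identified_species": ["polyethylene", "polypropylene", "silica"],
})

# Merge on the shared key so structure and chemistry can be interpreted together.
combined = imaging.merge(spectroscopy, on="region_id")
print(combined)
```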
Toward more connected analytical workflows
As microscopy becomes more interconnected with complementary techniques, labs are setting new standards for streamlined data collection. With ongoing improvements in software, automation, and instrument compatibility, multimodal analysis is becoming increasingly practical for labs looking to modernize and streamline their analytical capabilities. When well-implemented, these systems can improve turnaround time while enhancing the consistency and interpretability of complex datasets.
“With ongoing improvements in software, automation, and instrument compatibility, multimodal analysis is becoming increasingly practical for labs looking to modernize and streamline their analytical capabilities.”
Innovations in Microscopy
Advancements that are reducing damage to sensitive samples and enabling faster, more reliable results
By Rachel Muenz
Advancement in microscopy comes at a rapid pace. Here we break down some of the standout innovations in the microscopy space, as well as a few of the major trends.
Microscopy techniques and capabilities
A breakthrough in fluorescent dye chemistry has improved super-resolution microscopy by enabling high-quality image capture under green excitation. By replacing the benzene ring in the rhodamine core, researchers were able to boost the photoswitching behavior of rhodamine dyes, introducing a new color channel to contrast with the existing red.
Avoiding damage to sensitive samples
In other fluorescence microscopy-related work, researchers at the Max Planck Institute of Molecular Physiology and their colleagues have developed a method to address some of the limitations of observing living cells. They cooled living cells at speeds up to 200,000°C per second to -196°C, allowing the researchers to preserve cellular biomolecules, “in their natural arrangement at the moment of arrest,” according to a press release on the work. Their technique, called ultrarapid cryo-arrest, prevents cell destruction by phototoxicity, allowing scientists to observe molecular processes they previously couldn’t.
The researchers point out in their Science Advances paper that previous cooling methods for fixing microorganisms aren’t fast enough to prevent damaging ice crystal formation. In contrast, the researchers didn’t observe any ice crystal formation in their method and noted that the protein structure of cells remained intact after the procedure. “Thus, the ultrahigh speed of cryo-fixation faithfully arrests a temporal state of dynamic molecular patterns in cells that can be observed at multiple resolutions,” they conclude.
To improve biomechanical studies of living cells, scientists have simplified optical tweezer calibration. As described in Scientific Reports, scientists from the University of Münster in Germany and the University of Pavia in Italy have developed a method for calibrating optical tweezers, which helps researchers avoid damaging sensitive cells due to light-induced heating.
Optical tweezers use the momentum of light to trap and examine the properties of micro- or nanoscale particles, avoiding the damage caused by other methods. The researchers’ simplified calibration method for this tool allows better measurement of the biomechanical properties of living cells, such as viscoelasticity, viscosity, and stiffness—essential for understanding a variety of processes, including how diseases progress.
Another innovation in light microscopy comes from scientists at the University of Texas Southwestern and their colleagues in Australia and England. They invented an optical device that converts commonly used microscopes into multi-angle projection imaging systems, allowing users to obtain 3D image information at a fraction of the time and cost normally required.
“It is as if you are holding the biological specimen with your hand, rotating it, and inspecting it, which is an incredibly intuitive way to interact with a sample,” said Kevin Dean, PhD, one of the researchers involved, in a press release. “By rapidly imaging the sample from two different perspectives, we can interactively visualize the sample in virtual reality on the fly.”
Advancements in automation and AI
Recent research involving automation and artificial intelligence (AI) is also advancing the capabilities of microscopes. For example, scientists at the Woods Hole Oceanographic Institution and their collaborators have addressed several challenges in confocal microscopy with their confocal platform. The platform, referred to as multiview confocal super-resolution microscopy, incorporates deep learning algorithms and solutions from other high-powered imaging systems, enhancing the volumetric resolution by more than 10 times while reducing phototoxicity. Using these AI algorithms, labs can potentially achieve improved performance from their existing confocal microscopes without the need to invest in a new system.
Another AI-related development comes from researchers at the University of Gothenburg, who developed a new deep learning method to replace fluorescence microscopy, saving time and expense associated with staining and reducing the risk of chemical interference. The method enables scientists to capture images with bright-field microscopes and apply Python scripts to produce virtually stained images, thereby allowing for more reproducible and reliable results.
Trends
The major trends driving recent advancements in microscopy include developments in AI, automation, and continuous improvements in super-resolution microscopy. Manufacturers continue to improve upon the AI and automation capabilities of their microscopes, allowing for faster imaging, reduced sample preparation, and new functionalities.
Developments in AI-related microscopy research also center on making things faster for end users. In an example from the Institute for Bioengineering of Catalonia’s nanoscale bioelectrical characterization group, researchers developed a machine learning method that reduced microscope data processing from months to seconds.
Another key trend includes the widening accessibility of super-resolution microscopy, with advancements bringing super-resolution capabilities to standard microscopes. For example, engineers at the University of California, San Diego developed a metamaterial that allows regular microscopes to “see” in super resolution.
As these developments progress and scientists continue to advance microscopy, lab processes will likely become faster and easier, allowing more time to be spent on making the next big discovery, rather than on tedious microscopy setup tasks.
Advances in Microscopy and Imaging: Enhancing Live Cell Capabilities with New Technology and Software
Technological advancements are redefining live cell imaging, expanding the limits of what’s possible in real-time cellular observation
By Magaret Sivapragasam, PhD
The optical microscope has evolved from a rudimentary observational tool into a powerful bioanalytical platform driving scientific discoveries. Today, live cell imaging is essential in modern research, offering unprecedented insights into cellular processes.
Several factors drive the growing demand for advanced imaging technologies. The rise in chronic diseases has intensified the need for improved diagnostics and treatments, driving demand for imaging technologies capable of visualizing disease processes at subcellular levels. In the pharmaceutical industry, the shift toward high-content screening requires imaging platforms that capture multiparametric cellular responses with precision. Additionally, the expansion of precision medicine increasingly relies on detailed cellular phenotyping to develop targeted therapies.
Innovations in microscope hardware
Recent innovations in hardware and software are revolutionizing live cell imaging by enhancing resolution, speed, and analytical capabilities.
Super-resolution microscopy techniques like stimulated emission depletion and photoactivated localization microscopy allow visualization beyond the diffraction limit of light microscopy. These methods enable sub-100 nm resolution, revealing molecular interactions within cells with greater clarity and providing new opportunities to study previously inaccessible cellular dynamics and molecular behaviors.
Light-sheet fluorescence microscopy (LSFM) addresses two major challenges in live-cell imaging: phototoxicity and photobleaching. By decoupling excitation and detection, this high-speed technique illuminates the specimen with a thin sheet of light, minimizing light exposure and reducing cell damage.
Adaptive optics further improve imaging quality by correcting optical aberrations, especially in deep tissue imaging.
Using deformable mirrors to adjust distortions in the optical wavefront, this technology enhances focus and allows for deeper tissue penetration, improving signal strength while minimizing phototoxic effects.
Integrated environmental control systems via specialized chambers or platforms maintain optimal conditions during experiments by regulating temperature, CO2, O2, and humidity levels. These systems are crucial for replicating the physiological conditions cells experience within an organism.
Software innovations and AI-powered imaging
As microscopy systems generate large volumes of datasets, advanced software solutions have emerged to process, analyze, and extract meaningful biological insights. Automated image analysis accelerates workflows by efficiently processing thousands of samples, transforming time-consuming tasks into high-throughput analytical pipelines. These systems form the backbone of high-content screening (HCS) and high-throughput microscopy, which have advanced pharmaceutical research and drug discovery.
Given the scale of the data generated, manual analysis has become impractical, necessitating AI-powered approaches that can interpret complex datasets with exceptional speed and accuracy. Deep learning algorithms refine image quality by enhancing contrast, reducing noise, and automating segmentation and restoration processes. They also facilitate image-to-image translation, such as predicting fluorescence signals from label-free images, reducing the need for invasive labeling techniques.
According to Khalisanni Khalid, an expert in flexible nanoparticle imaging and characterization from the Malaysian Agricultural Research and Development Institute, “Deep learning algorithms now assist in real-time image analysis, enabling automated identification and tracking of cellular components. These tools enhance the accuracy of data interpretation and reduce the time required for analysis. Additionally, software platforms have been developed to optimize illumination settings dynamically, balancing image quality with phototoxicity concerns.”
Convolutional neural networks can identify cellular structures, classify cell types, and detect subtle morphological changes linked to disease progression or drug effects. These models can also predict subcellular localization patterns and perform sophisticated image restoration.
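As a schematic of the kind of convolutional network described here, and not any specific published or commercial model, the sketch below defines a minimal classifier for single-channel cell image patches. The architecture, patch size, and class count are arbitrary choices; production models are far deeper and trained on curated datasets.

```python
import torch
import torch.nn as nn

class CellClassifier(nn.Module):
    """Minimal CNN for classifying single-cell image patches (hypothetical example)."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A batch of eight 64x64 grayscale patches produces one logit per class for each patch.
logits = CellClassifier()(torch.randn(8, 1, 64, 64))
print(logits.shape)   # torch.Size([8, 3])
```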
Cloud-based imaging platforms expand research capabilities by providing remote access, secure data storage, and collaborative tools. By integrating multi-omics data with imaging and leveraging the Internet of Things capabilities, users can seamlessly access, analyze, and share microscopy data globally.
“Recent innovations in hardware and software are revolutionizing live cell imaging by enhancing resolution, speed, and analytical capabilities.”
Real-time processing further boosts these capabilities, enabling users to visualize and quantify dynamic cellular processes as they occur. This is particularly useful for biologically relevant assays that track cellular responses to drugs, environmental changes, or genetic modifications.
Expanded capabilities for live cell research
Multimodal live cell imaging integrates two or more imaging techniques, such as fluorescence, phase contrast, and label-free, to provide a holistic view of biological samples. The information obtained from a multimodal imaging approach can be valuable for understanding the relationship between cellular behaviors, such as cell proliferation and migration, and their microenvironments.
Fluorescence-based techniques, such as fluorescence resonance energy transfer and fluorescence lifetime imaging microscopy, provide dynamic insights into intracellular signaling and molecular interactions. Coherent Raman scattering enables label-free imaging of biochemical and metabolic activities, further expanding live-cell research capabilities.
The integration of automated imaging systems has transformed high-throughput live cell screening, accelerating drug discovery and disease modeling. Time-lapse microscopy combined with automated HCS platforms captures real-time cellular responses to treatments, enabling the rapid identification of potential drug candidates.
Innovations in fluorophores and biosensors have further expanded live cell imaging by enabling non-invasive tracking of dynamic cellular events. Hybrid fluorophores merge the stability and brightness of synthetic dyes with the specificity of protein-based sensors, which enhances super-resolution microscopy. Organelle-specific dyes and genetically encoded biosensors enhance subcellular visualization, aiding in co-localization studies and validating novel imaging probes.
A key challenge in long-term live cell imaging is minimizing phototoxicity and photobleaching, which can compromise both image quality and cell viability. Techniques such as total internal reflection fluorescence, LSFM, and multiphoton microscopy (MPM) mitigate these effects. MPM, for example, uses near-infrared wavelengths to reduce photochemical damage while maintaining high-resolution imaging. These advancements extend the duration of live cell observations, making them ideal for studying long-term biological processes.
Future direction
As live cell imaging continues to evolve, managing the vast amounts of data it generates remains a central challenge. Efficient storage, robust analytical tools, and seamless integration of imaging modalities are essential for extracting biological insights.
Khalisanni highlights the next phase of innovation: “Despite these advancements, challenges remain. The future lies in integrated, automated systems, including intelligent microscopes that adjust parameters in real time. Additionally, combining imaging modalities like light and electron microscopy is expected to offer deeper insights into cellular structures and functions.”
Building on these advancements, the continued integration of multimodal imaging, high-throughput automation, advanced biosensors, and AI-driven analytical tools is driving the next wave of discoveries in imaging technology.
Leveraging AI to Enhance Super-Resolution Confocal Microscopy
How a team of researchers successfully used artificial intelligence and machine learning to improve their confocal imaging
By Damon Anderson, PhD
Yicong Wu, PhD, a staff scientist at the National Institute of Biomedical Imaging and Bioengineering, discussed their study, focused on multiview confocal super-resolution microscopy, with Lab Manager.
Q: Although confocal microscopy is a widely used, powerful tool due to its contrast and flexibility, there are several significant limitations and areas for improvement. Can you elaborate?
A: Yes, confocal microscopy remains the dominant workhorse in biomedical optical microscopy when imaging a wide variety of three-dimensional samples, but it has clear limitations. These drawbacks include substantial point spread function anisotropy (usually its axial resolution is two- to three-fold worse than lateral resolution, confounding the 3D spatial analysis of fine subcellular structures); spatial resolution limited to the diffraction limit; depth-dependent degradation in scattering samples leading to signal loss at distances far from the coverslip; and three-dimensional illumination and volumetric bleaching, which may rapidly diminish the pool of available fluorescent molecules and lead to unwanted phototoxicity.
Q: Can you summarize the major achievements of the study?
A: The spatial resolution, imaging duration, and depth penetration of confocal microscopy in imaging single cells, living worm embryos and adults, fly wings, and mouse tissues are improved with innovative hardware (multi-view microscopy, efficient line-scanning confocal microscopy) and state-of-the-art software (super resolution reconstruction, joint deconvolution, and deep learning techniques).
Q: What approaches were used to accomplish these advancements?
A: We achieved our improvements in performance via an integrated approach: 1) we developed a compact line-scanning illuminator that enabled sensitive, rapid, and diffraction-limited confocal imaging over a 175 x 175 μm2 area,
which can be readily incorporated into multiview imaging systems; 2) we developed reconstruction algorithms that fuse three line-scanning confocal sample views, enabling ~twofold improvement in axial resolution relative to conventional confocal microscopy and recovery of signal otherwise lost to scattering; 3) we used deep learning algorithms to lower the illumination dose imparted by confocal microscopy, enabling clearer imaging than light sheet fluorescence microscopy in living, light-sensitive, and scattering samples; 4) we used sharp, line illumination introduced from three directions to further improve spatial resolution along these directions, enabling better than 10-fold volumetric resolution enhancement relative to traditional confocal microscopy; 5) we showed that combining deep learning with traditional multiview fusion approaches can produce super-resolution data from single confocal images, providing a route to rapid, optically sectioned, super-resolution imaging with higher sensitivity and speed than otherwise possible.
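Deconvolution is one of the generic building blocks mentioned in this answer. The sketch below applies scikit-image's Richardson-Lucy deconvolution to a synthetic blurred, noisy image purely as an illustration; it is not the authors' multiview fusion or deep learning pipeline, and the image, PSF, and iteration count are hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

# Synthetic ground truth: two point-like emitters on a dark background (hypothetical data).
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20, 20] = 1.0
truth[40, 45] = 1.0

# Gaussian point spread function standing in for the microscope's blur.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

# Blur, then add shot noise to mimic a low-dose acquisition.
blurred = convolve2d(truth, psf, mode="same")
noisy = rng.poisson(blurred * 200) / 200.0

# Richardson-Lucy iteratively sharpens the blurred spots back toward point-like peaks.
restored = richardson_lucy(noisy, psf, 30)
print(f"peak before: {noisy.max():.3f}, peak after: {restored.max():.3f}")
```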
Q: Can you discuss the novel triple-view SIM (structured illumination microscopy) image reconstruction technique used in the study and how this compares with traditional SIM in the context of super-resolution microscopy?
A: In traditional SIM, the twofold resolution enhancement is achieved by reconstruction of multiple interference images. In our triple-view SIM, we obtained super-resolution 1D images using digital photon reassignment and joint deconvolution of the views, achieving triple-view 1D SIM. We also used deep learning to predict 1D super-resolved images at six rotations per view, and jointly deconvolved the views to achieve triple-view 2D SIM. We demonstrated that our triple-view 1D and 2D SIM methods outperformed a commercial 3D SIM system when imaging relatively thick samples.
Q: The multifaceted approach used in this study produced significant enhancements in confocal imaging resolution and performance. Can you describe the imaging improvements as applied to a few of the more than 20 distinct fixed and live samples that were included in the study?
A: The biological results are not only visually striking, but also enable new quantitative assessments of intracellular structures and tissues. For example, we imaged Jurkat T cells expressing H2B-GFP and 3xEMTB-mCherry at five-second intervals for 200 time points, revealing nucleus squeezing and deformation as the cells spread on the activating surface.
For thicker samples other than single cells, one example was the densely labeled nerve ring region in a C. elegans larva, in which we obtained superior volumetric resolution in triple-view 2D SIM mode (253 x 253 x 322 nm3), more than tenfold better than the raw confocal data (601 x 561 x 836 nm3).
Q: What were the impacts of AI and ML on the imaging improvements observed in the study?
A: Our work provides a blueprint for the integration of deep learning with fluorescence microscopy. We successfully deployed neural networks to denoise the raw confocal images, enabling lower illumination dose and thus extending imaging duration. We also showed that such networks can predict isotropic, super-resolution images and improve imaging at depth.
Q: How do you see AI and ML impacting future confocal microscopy investigations, and in particular, single-cell imaging applications?
A: We believe that the combination of confocal imaging with deep learning allows much better imaging performance than confocal microscopy alone, and it holds great promise for improving spatial resolution, signal-to-noise, imaging speed, and imaging duration. We suspect the same approach could also be profitably applied to other microscopes with sharp, line-like illumination for single-cell imaging, including lattice light-sheet microscopy, traditional and nonlinear SIM, and stimulated emission depletion microscopy with 1D depletion. Such microscopes can improve spatial resolution in single-cell studies, yet this improvement usually comes at a cost in terms of temporal resolution, signal, or phototoxicity. One caveat is that AI/ML generates predictions based on data the network has seen before. Although we obtained significant improvements when using AI, these networks can produce predictions with artifacts, particularly if the input data differs significantly from the training data the network has already “seen.” More work is needed to make it easier to validate the output of such approaches, but we are very excited about the possibilities for fluorescence microscopy.
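On the question of validating network output, one simple starting point is to compare predictions against independently acquired, high-quality reference images using standard similarity metrics. The snippet below uses scikit-image for this; the function and its inputs are illustrative assumptions, not a complete validation protocol.

```python
# Minimal sketch: compare a network prediction against a high-quality
# reference image using PSNR and SSIM (scikit-image). Illustrative only.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def validate(prediction, reference):
    """Both inputs are float arrays on a comparable intensity scale."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, prediction, data_range=data_range)
    ssim = structural_similarity(reference, prediction, data_range=data_range)
    return psnr, ssim
```

Metrics like these catch gross failures, but they do not guarantee the absence of structural artifacts, which is why visual inspection and held-out test data remain important.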
Integrating Artificial Intelligence with Digital Pathology
Expert insights on the latest developments
By Michelle Dotzert, PhD
Liron Pantanowitz, MD, is the vice chairman of pathology informatics and the director of cytopathology at the University of Pittsburgh Medical Center Shadyside. He is also the director of the Pathology Informatics Fellowship Program.
Q: What is digital pathology, and how has it changed the way physicians and scientists interact with pathology data?
A: Most people take whole slide imaging to be synonymous with digital pathology, but digital pathology can mean many more things. You can take gross photos, microscopic photos, fluorescent photos, and more. For the most part, however, digital pathology is considered to be whole-slide imaging: the capability to digitize or scan your glass slides and then convert them to digital slides, e-slides, or whole-slide images. Over the years, we’ve developed different technologies to characterize morphology. Electron microscopy, for example, was followed by immunohistochemistry, which gave us biomarkers for more definitive diagnoses, and then by molecular techniques that examine not only the phenotype but also the genotype. Now, with the advent of imaging technology, we’ve been able to digitize slides and convert them into pixels, enabling advancements in the field of pathology. For example, it is easier to share those images and to convert those pixels into data, which is perfect for artificial intelligence (AI). We’ve come a long way from the time of Virchow, the father of modern pathology, and his microscope. Now, pathologists are becoming more like data scientists as a result of digital pathology technology.
Q: How is digital pathology being combined with machine learning and AI?
A: For the first time, we can digitize slides in high throughput. Commercial whole-slide scanners have been available for about two decades, and that has enabled labs to scan and digitize slides to create large datasets. Now, labs have a large amount of image data, and it is accompanied by metadata, such as pathology reports, which provide information about the diagnosis and patient outcome. As a result, these images have begun to be used to train algorithms. At the same time, two important things occurred in the field of computer science: first, computing capabilities and processing speeds have increased immensely, and we have cloud computing available to us; second, within the umbrella of AI, there has been a shift away from traditional machine learning toward newer deep learning technology. Unlike traditional machine learning, which requires an expert pathologist to annotate images and train the algorithm with additional datasets, we can now use convolutional neural networks that learn to distinguish the important features required for an accurate diagnosis.
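To illustrate what this kind of deep learning workflow can look like in practice, here is a minimal sketch that trains a convolutional network on tissue patches exported from whole-slide images. The folder layout, network choice, and hyperparameters are hypothetical, and real digital pathology pipelines add many steps (stain normalization, slide-level aggregation, rigorous validation) that are omitted here.

```python
# Minimal sketch: patch-level classification of whole-slide image tiles with
# a small CNN backbone. Illustrative only; paths and labels are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical layout: patches/tumor/*.png, patches/benign/*.png
dataset = ImageFolder("patches", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18()                                   # small CNN backbone
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(5):
    for patches, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(patches), labels)
        loss.backward()
        optimizer.step()
```

The key contrast with older machine learning is visible in what is absent: no hand-crafted features are defined, because the network learns them directly from labeled patches.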
Q: Is an automated diagnosis possible? What implications might this have for patients?
A: Yes, it’s definitely possible. In fact, it was possible several decades ago, because the field of pathology had already applied automation to cervical cancer screening with Pap smears, and there are many lessons to be learned from that experience. At one point, there was an overwhelming number of tests that required analysis. This led to a rush, and to a greater number of mistakes and incorrect diagnoses, which in turn prompted the CLIA (Clinical Laboratory Improvement Amendments) regulations of 1988, which limited the number of tests an individual was permitted to screen in a day to reduce the risk of errors. Unfortunately, this created an even greater backlog and a situation where automation technology was required to solve the problem. Looking back on outcomes data, we observe that in this case, automation improved productivity and accuracy. We have learned from this that AI is best implemented as a solution to an existing problem. In some cases, new AI tools and technologies are being developed that aren’t necessarily needed, and they are met with resistance from pathologists. It is also important to remember that integrating AI into an existing workflow involves a learning curve, and it is going to take time. Pre-imaging factors should also be considered, such as how the tissue was prepared and fixed, how thick the tissue is, and whether the stain is consistent. In the case of Pap smear analysis, the companies involved took control of the entire process, not just the machine learning part but pre-imaging as well.
As for the implications for patients, I think the main thing is that patients will benefit from a more reliable and accurate diagnosis. That’s the promise, and that’s what we are hoping for. At the same time, there could be some indirect negative consequences for patients. For example, if AI competes with and takes away pathologist jobs, there may be fewer pathologists to deal with the increasing number of cases we see as baby boomers age. Another issue is that we are working with narrow AI, in which algorithms are trained to make one specific diagnosis. If there is an anomaly or a disease the algorithm wasn’t trained for, it will be missed. There has to be appropriate oversight in place in laboratories to make sure these mistakes are caught, and safeguards should be in place to keep them from happening. At this time, we frequently hear the term “augmented pathology,” which means AI is able to augment what we do, but it is not at the point where there is no human intervention at all.
Q: What challenges lie ahead for the use of AI in pathology?
A: I think there are several challenges that will have to be overcome before we see this in routine practice:
1. IT infrastructure: Most labs do not currently have the infrastructure to support AI, and many hospitals do not have the budget for expensive servers. Further, data that could potentially be stored in the cloud would have to be linked to patient identifiers, which poses a security problem for hospital and institutional labs.
2. Mindset: There is still the MD versus machine mindset, and while many pathologists are excited to see this technology, there is still some resistance, and they haven’t fully embraced it themselves.
3. Ethics: All these datasets have to come from patients, and there is concern over whether these patients are informed and whether they have consented to commercialization. Liability is also a concern when there is a mistake: Is the physician liable? How do you defend AI?
4. The reimbursement barrier: AI is expensive. Even if you factor in increased productivity and the added value, AI still drives up the cost, and there are no billing codes that enable labs to charge for it.
5. Regulations: Regulatory bodies like the FDA have started to approve AI algorithms for use in pathology, but the process remains complex and slow.
6. Generalizability: Most algorithms are trained on limited datasets with limited outcome data. This makes it difficult to know whether, for example, an algorithm that locates cancer in a prostate biopsy and offers a prediction would perform the same for someone living in the eastern United States as for someone living in Indonesia. Despite many claims about AI, we don’t know whether it will generalize to everyone.
7. A potential monopoly: If vendors charge a lot for an AI diagnosis, not everyone may be able to afford it or have access to it. It is possible that this could create further disparities in healthcare.
Advancements in Scanning Electron Microscopes
Explore the latest developments in electron microscopes, including improvements in detectors, sample holders, and software for enhanced usability and data quality
By Michael Beh, PhD
In scientific discovery, the scanning electron microscope (SEM) stands as a powerful tool, enabling researchers to delve into the microscopic world with unparalleled precision. SEMs work by scanning a focused electron beam across the surface of a sample and using the predictable interactions between the electrons and the sample surface to produce images of much higher resolution (from below 1 nm to several nm) than is possible with optical microscopy.
Detectors
One of the notable advancements in recent SEM technology centers on detectors, the components responsible for capturing and interpreting electron signals to generate images. Specialized low-vacuum detectors in recent SEMs enhance imaging of nonconductive samples without the need for a conductive coating, avoiding the charging artifacts that have traditionally limited such work. This technology significantly boosts low-vacuum SEM capabilities for detailed surface analysis in materials development and semiconductor device analysis.
Sample holders
Sample holders are crucial for stabilizing and positioning samples to achieve optimal imaging. Recent SEMs boast larger sample chambers designed to accommodate samples of increased weight and size, with samples ranging up to 30 cm in diameter, 21 cm in height, and 5 kg in weight, depending on the model. This development is especially noteworthy in fields like materials science, geology, and archaeology, where examining larger specimens is essential. Importantly, this enhancement allows for the study of substantial samples without compromising their integrity to fit within a smaller sample chamber.
Software
Modern SEM software is more than an instrument operator; it’s now a comprehensive suite that enhances usability, streamlines workflows, and extracts meaningful data from complex imaging processes. Increased automation makes SEMs user-friendly and accessible, with interfaces that simplify interactions and eliminate the need for extensive expertise.
A combination of advanced detectors, increasing sample size, and sophisticated software creates a powerful platform for scientific exploration. Researchers can now embark on intricate studies, investigating the nanoscale structure of materials with unprecedented ease and precision. These developments collectively contribute to enhanced usability and data quality. As SEMs continue to evolve, their role in scientific discovery will undoubtedly become even more central, driving innovation and expanding our understanding of the intricate structures that shape the foundations of our world.
Cryo-EM in Drug Discovery
Cryo-EM provides new 3D views of therapies and their targets
By Mike May, PhD
The structure of a drug impacts its efficacy and safety. Scientists can analyze that structure at angstrom-level resolution by quickly freezing a sample to a cryogenic temperature, which is below -150°C, and imaging it with electron microscopy. Cryogenic electron microscopy (cryo-EM) has many applications in drug discovery. For example, it can be used to analyze a drug target’s three-dimensional structure to characterize how a drug may interact with a target.
Overcoming crystallization challenges
In some cases, cryo-EM provides structural information that cannot be obtained with other methods, such as X-ray crystallography. The technique allows scientists to determine structures for protein systems that are not amenable to crystallization, such as large protein complexes or membrane proteins. Scientists can then generate structures of these proteins in complex with small-molecule ligands, allowing chemists to optimize protein-ligand interactions for effective drug design.
To create the most effective drug, scientists need to understand the structure of the drug and that of the intended target. Without the protein-ligand structure, it can be challenging to understand how modifications to the ligand can impact potency.
Improving the process
Optimization can often be a long and nonlinear process, and many parts of it can still be improved, including sample preparation for imaging. In single-particle analysis, the need to identify suitable conditions for each sample screened, and to prepare multiple grids of the same condition to obtain a usable one, remains a limiting step in what could otherwise be a fast and information-rich technique.
Despite some ongoing challenges, cryo-EM provides scientists with new ways to develop better drugs for the future, and this capability is likely to improve even further.