Artificial intelligence has become a standard part of clinical and laboratory operations, supporting diagnostics, imaging analysis, and decision-making. Most systems, however, remain screen-based, offering recommendations without interacting directly with the physical environment. MedOS reflects a different direction for clinical AI. Introduced by a joint Stanford–Princeton research team, MedOS is an AI-XR-cobot system designed to operate inside real clinical environments, where it can perceive procedures as they unfold and assist clinicians in real time.
For lab managers and clinical operations leaders, MedOS highlights the growing role of embodied AI: artificial intelligence systems that integrate perception, reasoning, and physical action. As staffing pressures and procedural complexity increase, such systems are emerging as a new layer of support within clinical workflows, with implications for training, safety, and operational consistency.
What an AI-XR-cobot system means for clinical workflows
MedOS is described by its developers as an AI-XR-cobot system that combines artificial intelligence, extended reality, and collaborative robotics into a single platform. The system integrates smart glasses for visual input, robotic arms for physical interaction, and a multi-agent AI architecture that mirrors clinical reasoning logic.
Unlike traditional clinical AI tools that analyze data retrospectively or issue alerts, MedOS is intended to function during procedures: it interprets three-dimensional clinical scenes, tracks procedural context, and coordinates actions alongside clinicians. The system builds on the team’s earlier LabOS platform, extending AI from laboratory research environments into live clinical workflows.
Embodied AI and the “world model for medicine”
A defining feature of MedOS is its use of what the researchers describe as a “world model for medicine.” This approach allows the embodied AI system to combine perception, simulation, and intervention into a continuous feedback loop.
Using real-time video from smart glasses and spatial data from the clinical environment, MedOS constructs a dynamic three-dimensional representation of ongoing procedures. In surgical simulations, the system has demonstrated the ability to identify anatomical structures, plan procedural steps, and assist with robotic tool alignment. MedOS updates its internal model as conditions change, allowing it to adapt to clinician actions in real time.
This tight integration of perception and action distinguishes embodied AI platforms like MedOS from decision-support tools that operate outside the procedural flow.
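To make the perception-simulation-intervention loop concrete, the sketch below shows one minimal way such a cycle could be structured in Python. Every class, method, and step name here (`WorldModel`, `simulate_next_step`, `assist_loop`, the anatomy keys) is an illustrative assumption for exposition, not part of MedOS itself.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Illustrative stand-in for a 'world model for medicine': a
    continuously updated representation of the ongoing procedure."""
    scene: dict = field(default_factory=dict)  # e.g., tracked anatomy, tools

    def update(self, observation: dict) -> None:
        # Perception: fold new sensor data (video frames, spatial
        # tracking) into the internal scene representation.
        self.scene.update(observation)

    def simulate_next_step(self) -> str:
        # Simulation: predict the next procedural step from the current
        # state (the branching logic here is purely hypothetical).
        return "align_tool" if "target_anatomy" in self.scene else "locate_anatomy"

def assist_loop(observations, model: WorldModel) -> list[str]:
    """One perceive -> simulate -> intervene iteration per observation."""
    actions = []
    for obs in observations:
        model.update(obs)                  # perception
        step = model.simulate_next_step()  # simulation
        actions.append(step)               # intervention (here: just recorded)
    return actions

frames = [{"frame": 1}, {"frame": 2, "target_anatomy": "vessel"}]
print(assist_loop(frames, WorldModel()))  # -> ['locate_anatomy', 'align_tool']
```

The point of the loop structure is that perception and action share one model: each observation changes the state that the next step prediction reads, which is what lets the system adapt as clinician actions change the scene.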
Technical components and data foundations
Several technical elements underpin the MedOS platform:
- Multi-agent AI architecture designed to synthesize evidence and manage procedural logic in real time
- MedSuperVision, an open-source dataset containing more than 85,000 hours of surgical video used to train perception models
- Performance-support capabilities evaluated with nurses and medical trainees in fatigue-prone environments
- Large-cohort data integration used in case studies exploring immunotherapy resistance pathways
For laboratory professionals, these components reflect broader trends in embodied AI development, where multimodal data and real-time context are essential for systems embedded in clinical workflows.
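As a rough illustration of the multi-agent pattern named in the list above, the sketch below routes a clinical event to specialist agents and collects their findings. The agent roles, names, and interfaces are assumptions made for illustration; the source does not describe the internals of the MedOS architecture.

```python
from typing import Callable

# Each "agent" is modeled as a function from an event to a finding.
# The two roles below (evidence synthesis, procedural logic) are
# hypothetical examples, not documented MedOS agents.
def evidence_agent(event: dict) -> str:
    return f"evidence: synthesized literature for '{event['type']}'"

def procedure_agent(event: dict) -> str:
    return f"procedure: checked step ordering for '{event['type']}'"

class Coordinator:
    """Dispatches each clinical event to all registered agents and
    gathers their findings for a downstream recommendation."""
    def __init__(self) -> None:
        self.agents: list[Callable[[dict], str]] = []

    def register(self, agent: Callable[[dict], str]) -> None:
        self.agents.append(agent)

    def handle(self, event: dict) -> list[str]:
        return [agent(event) for agent in self.agents]

coord = Coordinator()
coord.register(evidence_agent)
coord.register(procedure_agent)
print(coord.handle({"type": "tool_alignment"}))
```

The design choice a pattern like this reflects is separation of concerns: each agent can be validated and updated independently, which matters for the governance and quality-oversight questions raised below.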
Operational implications for lab managers
As embodied AI systems move closer to the point of care, lab managers overseeing clinical research labs, translational medicine programs, or hospital-based laboratories may encounter new operational considerations. An AI-XR-cobot system like MedOS could influence how organizations approach procedural standardization, training, and quality oversight.
Because MedOS is modular by design, it can be adapted across specialties and care settings, including imaging-guided procedures and precision diagnostics. At the same time, deploying embodied AI within clinical workflows introduces questions around validation, governance, and coordination between laboratory, clinical, and IT teams.
Early deployment and evaluation
MedOS is launching with support from NVIDIA, AI4Science, and Nebius, with early pilot deployments at Stanford, Princeton, and the University of Washington. The system is scheduled to be showcased at a Stanford-hosted event in early March, followed by a public unveiling at NVIDIA’s GTC conference.
According to Le Cong, associate professor at Stanford University and leader of the Stanford–Princeton AI Coscientist Team, “The goal is not to replace doctors. It is to amplify their intelligence, extend their abilities, and reduce the risks posed by fatigue, oversight, or complexity.”
As evaluation continues, MedOS offers an early example of how embodied AI systems may increasingly operate within real clinical environments, reshaping how artificial intelligence intersects with clinical and laboratory workflows.
This article was created with the assistance of Generative AI and has undergone editorial review before publishing.