Hello everyone,
and thank you for attending Lab Manager's Automation Digital Summit.
My name is Marybeth Deana, and I'll be moderating this discussion.
Welcome to this session, Connecting Multiple Labs and/or Experiments: Unifying Science on a Lab Automation Level. First there was benchtop instrument-level automation, then integrated instrument automation, and now infrastructure-level automation: the ability to connect data and unify science across multiple geographically distributed labs, or even separate work cells under the same roof.
Automata provides solutions that integrate benchtop instruments and smaller automation with a single digital platform, to maximize utilization of existing lab instruments and information networks.
Please send us your questions or comments at any point during this presentation.
Our speaker will address your questions during the Q&A session following his presentation. To ask a question or leave a comment, simply type your query into the Q&A box on the right-hand side of your screen.
We'll try to address as many questions as possible during our time together, but if we run out of time, I'll forward any unanswered questions to our speaker, and he may be able to respond to you directly.
Additional resources and a certificate for this presentation are included in the handouts section on the right-hand side of your screen. Please be sure to answer our special audience poll question for your chance to win a gift card, courtesy of Lab Manager.
I'd like to remind you that a recording of this webinar will be available for free on-demand viewing after the conclusion of this event, and I would like to extend a special thank you to our sponsors, whose support allows Lab Manager to keep these webinars free of charge for our readers.
So with that, I'd like to introduce our speaker for this presentation.
Russell Green, current Director of Product Applications at Automata, and previously Head of Partnerships and Senior Product Manager at Synthace and Senior Manager of Field Projects at Beckman Coulter Life Sciences, has been at the forefront of developing lab solutions that optimize workflows, enhance data accuracy, and accelerate scientific discovery. Russell's expertise lies in harnessing automation to reshape research processes, enabling scientists to achieve greater efficiency, reproducibility, and scalability in their experiments. Russell,
thanks for joining us today.
Hi, everyone. Thank you very much for the great introduction, and thanks for joining. I wanted to speak today in a bit more detail about some of the larger automation projects that we've done, and how we are working towards bringing together automation technology to go maybe a little bit beyond what people currently consider to be automation, and take that into a world where we can really start thinking about what the lab of the future might look like.
I'm going to cover a few different topics today. In particular, I'll talk about the larger systems, but I'll also show you how you might consider starting that journey towards automation yourself. So don't be too concerned if you see a scale of automation that's perhaps a little bit beyond what you tend to think of on an everyday basis.

I'd like to start off with a quote, a real simple one from a paper published back in 2018. It says: "The automation of science bears the promise of making better decisions faster." For me, that speaks to why we might automate, in its simplest terms. I've given a fair number of talks about adoption of automation and how we might drive more optimal automation in our industry, and it always comes back to these reasons to automate: what's going to be your driver?
At the core of everything is this idea that automation should bring us more data, of higher quality, faster. That's what it's all about. That race towards getting more data of higher quality is very real, and there are some labs out there that are really pushing the boundaries of what you can start doing with current automation. This is all about better data integrity, more data, and more data points around a single experiment.

With that said, I don't think this should be onerous either. This change should not feel painful, it shouldn't really be that disruptive, and in the longer term it shouldn't be expensive. Quite the opposite: I think it should actually drive costs down, even though there might be some initial outlay. One thing I want to get across in this talk today is that we should be planning our journey to infrastructure-level automation now, and I'll explain what I mean by infrastructure-level automation in a moment.
So as much as anything, adoption of automation is a change management problem. It's that idea of: we need to identify something that's going to be worthwhile automating, then start thinking about what our plan to do that is, and then think about what the plan afterwards is. Another thing I'll talk about today is the longer-term view when we're thinking about automated systems, and in particular, this is our view: we should really be thinking about how we best utilize what already exists.

First of all, space. Space is actually a real problem for an awful lot of laboratories right now, especially when you look at the southern areas of the UK, where I'm based, or the east coast of the US, where the cost of lab space per square foot or square meter is extraordinary at the moment. So how do we maximize that space and get the most out of it?
We need to think about the instruments as well. We probably have an awful lot of instruments in place, and an awful lot of them are actually more automation-friendly than we realize. So how do we utilize those instruments to maximize their capacity and get the most out of them? And last but certainly not least, how do we utilize our existing people? Another of the big drivers for automation is to free up some very talented, very skilled people who are spending time on things that are perhaps not the best use of their skillsets. Automation is often about how we do more with what we've got, and free those people up to really focus on the interesting things, like complex experimental design, or gaining insight from all the data that has been gathered.

If we do all of those things, what are we really looking to achieve? Things like long-term savings. This isn't always about money, but let's face it, there should realistically be a financial return on investment in automation; we need to figure out what that is and how we generate those savings, and I have an example or two later on in this talk.
As I said, it's really all about data when push comes to shove. So it's about achieving this unified, robust data set: deep data, well characterized with metadata, and stored in a unified format that we can reuse again and again. And of course, modular, adaptable systems. This has actually been a big pitfall of automation over the past two to three decades, as it's been gaining traction in life sciences, that automation is often developed to run a single thing. We always talk in automation projects about wanting flexibility, but you do realistically have to rein a project into thinking about that first achievable workflow. But of course, if we can really implement automation in a modular, adaptable way, then we're future-proofing ourselves. So we need to think about what the system can do later, as well as what it can do now.
What do things look like today? Well, in most labs we see a lot of manual experimentation in quite tight spaces, which speaks to that space problem again. But it's not just manual either; I don't want to ignore the fact that there are an awful lot of people who have implemented automation in life sciences already. What I tend to see right now when I look at automation in our workplaces, and I've been working in this industry for a while, is sometimes a little bit beyond this, but an awful lot of it is what I would call appliance-led automation, serviced by humans. By appliances I mean anything from a plate washer to an automated liquid handler to a plate reader, maybe some form of digital data capture, but often that is bridged by someone writing things down in a lab book before delivering their final entries into their ELN on a Friday afternoon.
And I liken this a little bit to a kitchen. I'm a keen home baker, and I look at all of the appliances around me, and they're all little bits of automation: my dishwasher, my cake mixer, even my oven. They're all different forms of automation in some sense, doing tasks for me that I don't have to handle manually. But I spend an awful lot of time in between all of those things, bridging the gaps. So my baking process uses a lot of appliances, but it is not automated from an end-to-end perspective. And this is true in this lab setting as well, where these are all fantastic devices in their own right, but we need to have someone sat in between them. All of a sudden that person becomes a break in the chain of custody of operations, or can't be there for all of the working hours that we could possibly run all of those devices, or any of the many other problems presented by a human being in the middle. So we want to try to get away from that and move to something more connected.
On top of that, there's an awful lot of talk about how areas of life sciences need to move towards adopting Industry 4.0 principles. When you look at this in the context of digitalization in science in particular, there are three core elements you can pick out that come up time and time again. One: connect everything. This is not just about the Internet of Things; it's about making sure that everything works together, that everything you're gathering, all the data, is brought together into one place. Two: establish automated end-to-end workflows, and obviously this is an area that particularly lends itself to the content of this talk today. And three: implement advanced analytics, so gain insight from what comes out of steps one and two, and utilize that to make better decisions about what goes back into steps one and two.

I'm going to speak to some degree today about those first two points: how do we address connecting everything, both physically and digitally, and how do we establish true end-to-end workflow automation? My company, Automata, doesn't really get into advanced analytics, but certainly everything we do enables that to happen in a more efficient way.
I also want to talk a little bit about what a journey towards infrastructure-level automation would look like, and again, I'll give a bit more definition of our perception of infrastructure-level automation in a moment. In its simplest form, and we see this happen in some organizations already, the transition looks like this. You start off with your manual processes, or even your appliance-led processes, so maybe at the extreme end of that, a fairly complex liquid handler. You might move that into small islands of automation: I might take a small cluster of devices and join them together with a robotic arm, so I can service a subset of my workflow, of my overall experiment. If you then think about joining a group of those together, you might start getting towards whole-workflow automation. Rather than just having small islands that service a particular part of the experiment, I'm now thinking about putting something in one end and getting endpoint data out of the other. And if you go beyond that, you're now thinking about whole-facility or even multi-facility automation, where you're looking at how an experimental campaign runs through a whole series of different labs that are all running interconnected automation. I'm really going to focus on the last two of these today. I will speak about small islands of automation, which are very much part of our core business, but really as a stepping stone towards how we think about automating entire processes, entire facilities, or entire experimental campaigns.
So what shape can whole-workflow or whole-facility automation take? We cluster these into four different types when we think about them, and they all present different challenges and different benefits when you solve them. The first is end-to-end automated workflows, which I've touched on already: you take a series of subset components of automation that maybe deliver on individual steps of a workflow, and you bring them together into something more holistic. We've got examples of these that I'll go through in a bit more depth in a moment. We also have what I would describe as multifunctional centralized work cells. This is one work cell that can serve many purposes, almost like a core facility for a building that can run many different functions on demand. We then have multi-site decentralized automation. Rather than something which is centralized and all in one place, now we're talking about automation that spans several laboratories distinctly separated by geography, where we perhaps want to run the same kind of thing in all of those different spaces, but maximize the utilization of all of them. This sort of scenario applies in particular to larger global organizations, where you might have many sites servicing clients, all for the same kind of activity. And last but certainly not least is multi-work-cell facility infrastructure automation. Now I have automation running throughout my building, or throughout my facility or site, but it's joined together, so the way that I'm actually running all of that is really thought of as one large set of systems, rather than a number of individual work cells that I have to think about individually.
Diving first into the end-to-end automated workflows, I'm going to use a case study that we've worked on for something called NIPT. I'm not going to go deep into the science of this today, but NIPT stands for non-invasive prenatal testing; this is a clinical genomics workflow. Non-invasive prenatal testing is a method whereby you can take a blood sample from a mother, so maternal blood, and diagnose potential conditions in the fetus quite early on by actually taking some of the fetal DNA out of that maternal blood. It's very commonly used to look for things like trisomy early on in a pregnancy. The general workflow for this is that we take our blood sample and isolate the plasma, and then from that plasma isolate what's called cfDNA, circulating fetal DNA. These are bits of DNA that have passed through the placental membrane into the maternal blood, and you can actually extract and separate them from those of the mother, so you're distinctly extracting fetal DNA rather than that of the parent. That DNA then has to be extracted, quality controlled, and normalized before going into sample preparation for sequencing, so sequencing library preparation, which again has to be quality controlled and then pooled ready for loading into the sequencer.
These are the steps that I'm interested in here from an automation perspective; this is pretty much an entire end-to-end workflow. We don't tend to consider loading of the sequencers part of this scenario, because they themselves run for a long period of time and it wouldn't be efficient to link them into an automated workflow. There are some real challenges around this. Right now, people doing this using low levels of automation have an awful lot of manual touchpoints, and that means that it becomes really hard to scale with the combination of existing people and devices that you've got. So what we want to see is a route to scaling and increasing throughput. This is an incredibly popular technique for diagnostics these days, but we don't really want to increase the cost per test; ideally, we want to see that go down, so we can open this up to more people and support more possibilities for that kind of testing.
If we now look at this in terms of the current state, what we'll see is a lot of people running a lot of individual devices. I can see here that for plasma isolation, I'm using things like a liquid handler and a centrifuge to go through that process, and perhaps some form of plate-reader-based quantitation at this stage as well. I've then got people interacting in the middle here with nucleic acid extraction devices and more liquid handling, perhaps returning to those plate readers for more data gathering. And then I've got more human interaction for these last three steps of library prep, library quality control, and pooling, where I might be interacting with more liquid handlers and more kinds of analytical devices, such as qPCR instruments, fragment analyzers, and plate readers. The point being, there are an awful lot of people, often overlapping. We have samples moving through all these independent devices, and they're very easy to mix up, so we have a really poor chain of custody, and we're not necessarily using all of these devices at their most efficient. This is a scenario that's very hard to scale, and even though we've implemented some liquid handlers here, it's still quite a complex scenario for running that entire workflow.
The next step in this journey towards end-to-end workflows is to start thinking about where the work cells are that I could build that really start maximizing automation, and you start seeing some discrete pockets here that are well worthwhile for starting out with automation. We have our plasma isolation phase over here. Then in the middle we have our circulating DNA extraction system, combined with normalization. And then we have a final system here that handles the last three steps: library prep, library QC, and pooling. To give you a sense of scale, and I'll talk about our hardware a little later on, this system is around about three meters long, so about nine feet, depending on which part of the world you're living in and how you want to measure it. So we're talking, in my world, about reasonably small-scale automation here. These are nice, discrete work cells, and which one of these we might start with is probably defined by where our bottlenecks actually lie in our lab right now.

But this isn't yet end-to-end automation. End-to-end automation is the scenario where I want to put something in at this end of the process, at the plasma isolation phase, and get my pooled library out the other end. So then we start needing to bring those workflows together into something singular. Now we have one work cell that's significantly larger, but it isn't the same size as actually putting all three of those work cells together, because now we're able to look at where in our process we had underutilized devices. Often the ancillary devices, such as centrifuges or plate sealers and peelers, or maybe even a liquid handler, are underutilized in each of those individual work cells. So when I bring them together, I can think: maybe I only need one of those rather than three, and I can bring that together into one workflow. But now, compared to the starting point for this, a single operator is able to present the system with all of the samples required for several batches of this particular end-to-end workflow, and at the end of that process receive, in the fridge at the end, pooled libraries ready for loading onto that sequencer.
So this is an enormous step change in terms of automation from what they currently have, and there are some real upsides to this. We see a single touchpoint for sample entry. We address all of those chain-of-custody issues around data: because we're using automation all the way through the process, we now know where every sample is and what's happened to it at any moment in time, and we're tracking all of that data. And of course, the knock-on effect of this is some hugely dramatic throughput improvements, but there are actually some other benefits that come out of this that I'll speak to later on, in relation to a slightly different project.
So that's end-to-end automated workflows. Moving on to the next example, I want to talk about, and I've got to come up with a catchier phrase for this, multifunctional centralized automation. As I said before, what this is designed to do is service varying demands from all over a site. Let's say right now, as my current state, I have many laboratories doing many different assays in a not very efficient way, perhaps using some very underutilized small work cells. So I've got, I don't know, my genomics work cell in one lab, which is being used at 20%. I've got a cell-based assay work cell in another lab, which is perhaps being used at 25%, because there just isn't the demand from each individual lab. And then I've got other labs which are doing absolutely nothing: they have no automation, everything's being done manually, and the people in those labs are running at 150% capacity. This is a pretty common scenario, believe it or not, when you look at a whole building and what everyone's actually doing.
So now we look at a system like this, where we have a lot of different individual work units that have been brought together. This isn't strictly how it's laid out, but here I've got a group of devices highlighted in yellow that is able to carry out some PCR-based analysis. I've got a group of devices in blue that can carry out some immunoassay capabilities. And then I've got a group in pink here, which is able to carry out some cell biology activities. They've all been brought together into one system. Now, this system could be used in many different ways and by many different users. It's serviced by one host, one laboratory, a core facility if you will, which is then able to pivot the activities it's running on demand, and maybe even run some in parallel, depending on the demand coming from all over the facility.
So here we might see a subset of devices being used for standard tissue culture operations, such as media exchange or passaging. We have a cell counter, an imager, and a liquid handler, which, combined with an incubator and a media fridge, are able to carry out most of those routine applications for standard mammalian tissue culture. We might then look at a subset of devices for running an immunoassay such as an ELISA; actually, this group of devices is able to run several different assays, because you can see here we've got a flow cytometer, a specific immunoassay system, and a plate reader. So this is a group of ancillary devices, such as sealers and peelers and incubators, combined with a liquid handler and a plate washer, that can service several different endpoint analyses.
But you can also see, if we compare this to the previous example, that we're now sharing resources between them. The centrifuge, which is fundamental to both processes, is utilized by both of those, and we only need one of them to service a much higher capacity of assay space. And then, as another example, we can just take another subset, the sealer and the peeler, the incubator, and a different analyzer, along with the liquid handler and centrifuge, to carry out some qPCR assays for genotyping. So now we're running three completely different experiments using just one platform, and potentially, depending on the implications of time constraints and things like that, we could run all of this simultaneously.
Alternatively, we could use the entire system for running one larger, more complex experiment, perhaps end-to-end cell characterization in this case. So we're growing cells, maintaining them over different periods of time, carrying out flow-cytometry-based analysis or imaging, perhaps counting cells, or even taking samples and putting them through an immunoassay. So we can vary the way the system is used depending on the demand. And of course, the knock-on effect of that is that our system uptime has dramatically increased. Rather than having several systems running at 20%, we now probably have one system running at more like 80 or 90%, because we're servicing demand in a much more efficient way. We can also share some of these lower-usage ancillary devices, so we need fewer of them, which obviously brings down the hardware and maintenance cost for the overall building. And we can then service this system with a single laboratory, so one set of experts servicing it for many, many users, which makes the whole support system much, much simpler as well.
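To make that utilization argument concrete, here's a minimal sketch in Python. The numbers, names, and the assumed 100-hour weekly capacity are hypothetical illustrations, not measurements from any real facility.

```python
# Illustrative only: how pooling demand onto one multifunctional work
# cell changes utilization and ancillary-device counts. All figures are
# hypothetical.

from dataclasses import dataclass

@dataclass
class WorkCell:
    name: str
    demand_hours_per_week: float
    centrifuges: int  # a low-usage ancillary each separate cell duplicates

AVAILABLE_HOURS = 100.0  # assumed weekly capacity of one work cell

separate = [
    WorkCell("genomics", 20, 1),
    WorkCell("cell-based assays", 25, 1),
    WorkCell("immunoassays", 35, 1),
]

for cell in separate:
    print(f"{cell.name}: {cell.demand_hours_per_week / AVAILABLE_HOURS:.0%} utilized")

# Consolidated: one platform absorbs all the demand and shares one centrifuge.
pooled = sum(c.demand_hours_per_week for c in separate)
print(f"shared platform: {pooled / AVAILABLE_HOURS:.0%} utilized, "
      f"centrifuges {sum(c.centrifuges for c in separate)} -> 1")
```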
The two examples I've gone through so far are kind of the natural extensions of what people see right now in terms of work cell automation, just perhaps a little bit more complex. The next two examples of infrastructure-level automation I want to speak to are a little more complex still. The first is what I would describe as multi-site decentralized automation.
The use case here is that I have several sites; this could be several labs on one site, or it could be several individual sites separated globally. We have spoken with a few clients who have this challenge: we have labs all over the world, so how do we normalize and standardize what we're doing? Using an ELISA again as the example here, we're now able to run the same hardware, perhaps with slightly different capability sets (you can see this one's a little bit smaller), with the same workflow, in three different sites, connected by one cloud-based control platform. So we have unified workflows that are shared globally. When I went into my validation of this, I only had to validate one instance of it, and now I can push that down to systems locally. Then I can look at where my demand is coming from. Perhaps I've got one site that's underused and one site that's overloaded, and it might make more sense for me to send my client samples, or my internal samples, to a different site, so I can rebalance that load and make sure that all my facilities are operating efficiently. And of course, all of that data then gets consumed by that orchestration platform and brought into one location. So it doesn't actually matter where you are in the world: you can always access that data and deliver it to your internal or external customers.
So that's multi-site decentralized automation, and this is really a software challenge: it starts bringing together those individual work cells we've talked about into one unified control platform using cloud technology.
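Here's a minimal sketch of that rebalancing decision, with hypothetical site names and queue figures. Because the same validated workflow runs at every site, an orchestration layer only has to pick whichever site will start the batch soonest.

```python
# Hypothetical sketch of multi-site load rebalancing: route each batch
# to the site with the shortest backlog. Names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    queued_batches: int
    batches_per_day: int

    @property
    def backlog_days(self) -> float:
        return self.queued_batches / self.batches_per_day

sites = [
    Site("London", queued_batches=40, batches_per_day=10),
    Site("Boston", queued_batches=6, batches_per_day=8),
    Site("Singapore", queued_batches=12, batches_per_day=6),
]

def route_batch(sites: list[Site]) -> Site:
    """Pick the site that can start the batch soonest and queue it there."""
    best = min(sites, key=lambda s: s.backlog_days)
    best.queued_batches += 1
    return best

print(f"send next batch to {route_batch(sites).name}")  # -> Boston
```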
The final type of infrastructure-level automation I want to speak to is multi-work-cell shared capabilities. This is looking at a fairly advanced facility where I've got automation running all the way through my building, but with overlapping capabilities. You can see here I've got multiple systems that are able to carry out tissue culture, several systems that can run cell-based assays, and several systems that can run ELISAs, but perhaps they're all subtly different. Perhaps my cell-based assay system on one floor has the ability to do imaging, whereas the one on another floor uses flow cytometry or fluorescence as an endpoint assay. And perhaps my ELISA systems have different throughput capabilities, and my cell culture systems have different incubator conditions. So the capabilities vary a little bit from system to system, but generally speaking, they overlap quite a lot.
Now, if I want to run a campaign of experiments, I can look at this whole building and all of the automation within it as a set of tools that are available for me to run my entire experimental campaign. And again, this becomes a digital problem. You would use something like our software platform to look at the requirements of your experimental campaign: the timings, the capabilities that are needed for each step, the throughput required; and then look at all of the available resources throughout the building and schedule each of those individual executable runs to happen on the right system at the right time. So it might work out: my ELISA system up here hasn't got the right capability set, and it's overloaded in terms of its usage, so I'll use the one down here, because that can deliver on my capability, and I'll schedule that system to run next Thursday. So this becomes quite a complex, building-wide infrastructure challenge that can all be handled by one single orchestration platform such as ours.
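Here's a small sketch of that capability-and-load matching. The data model and names are illustrative assumptions, not Automata's actual scheduler: a run is placed on the least-loaded work cell whose capability set covers what the step needs.

```python
# Assumed data model for capability-aware scheduling across a building.
# Cell names, capability tags, and queue lengths are hypothetical.

from dataclasses import dataclass

@dataclass
class WorkCell:
    name: str
    capabilities: set[str]
    queued_runs: int = 0

cells = [
    WorkCell("ELISA-floor-2", {"elisa", "plate_reading"}, queued_runs=9),
    WorkCell("ELISA-floor-4", {"elisa", "plate_reading", "flow_cytometry"}, queued_runs=2),
    WorkCell("cell-culture-1", {"tissue_culture", "imaging"}, queued_runs=4),
]

def schedule(step_needs: set[str], cells: list[WorkCell]) -> WorkCell:
    """Place a run on the least-loaded cell that has every needed capability."""
    candidates = [c for c in cells if step_needs <= c.capabilities]
    if not candidates:
        raise ValueError(f"no work cell offers {step_needs}")
    chosen = min(candidates, key=lambda c: c.queued_runs)
    chosen.queued_runs += 1
    return chosen

# The overloaded floor-2 ELISA system is skipped in favour of floor 4.
print(schedule({"elisa"}, cells).name)  # -> ELISA-floor-4
```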
There is also, of course, a data implication to this. Because all of our systems are connected through one platform, we're able to harvest data from all of those systems, consume it into the single orchestration platform, and then have that connect directly with whichever other cloud-based infrastructure, our digital ecosystem, might exist for our deep analysis, our data storage, archiving, or whatever that might be. So we're delivering highly contextualized, metadata-rich data sets from the whole experiment and campaign into one data lake, ready for consumption by other things.
We can see some huge advantages to this kind of automation as well, but I appreciate that this is a future state that very few labs are really thinking about right now. It's what we should be thinking about as we head towards the future, though. This will give us the ultimate maximum utilization of automated work cells, because now they're running not just on singular demand, but on demand from multiple campaigns, with their usage scheduled to maximize efficiency across the different demands being placed upon them. And of course, we get a unified, metadata-enriched data set out at the end of it as well, and that data continuity is kept.
Of course, I'm from a provider of solutions for this space, so it would be remiss of me not to speak a little bit about how Automata can actually support this kind of journey to the lab of the future. I'll speak a little about our hardware and our software first. Automata primarily builds something called LINQ, which is the first fully automated lab bench. We took a core piece of lab infrastructure and said: perhaps that should have an option to have automation running right through it, so that we can utilize the space better, drive easier adoption of automation into labs that perhaps don't have it right now, and make best use of all of that space that is already there.

The LINQ bench comprises three main areas. We have a robot running along the front of it on a rail, and that robot is really there just to tend devices; what I mean by that will become clear when I show you some videos in a moment. What we see in a lot of classical automation systems is that you put a robot in the middle and it immediately becomes the bottleneck to everything that happens on the system, and it's not something you can then easily solve, because it's very hard to put another robot in to fix that problem. What we need to do is solve for that bottleneck in a different way.
The way that Automata approached that is with the transport layer. The transport layer is easier to explain with a video in a moment, but think of it as our labware superhighway. It runs underneath the top surface of the bench and is able to carry individual pieces of labware to their point of use, ready for picking up by our robot and delivering to devices just in time. And then, on top of everything, we've got the benchtop. There is a status light that runs along the top of this bench, and on top of that is where we put all of our devices; it's where your instruments would typically be placed.
Just to further explain the transport layer, here's an animation of one of our systems running. You can see, as it zooms in, labware being moved around on that transport layer from bench to bench, so it gets to the right place just in time to be delivered to the devices on top. And to show you that in real life, here's some footage of our transport layer filmed from underneath. You can see some of our cable handling running here, and some of the barcode readers that are embedded under the benches, but you can also see one of our pucks transporting a piece of labware from bench to bench, bringing it to the front of the bench just in time for the lab robot to remove that piece of labware, then go and pick up the next one and service the devices that sit on the bench just above it. Later on, I'll show you a video of this very same system viewed from the front, so you can see how that fits into the bigger picture.
The LINQ bench can be bolted together in a number of different ways. I've just shown you a couple of linear systems, shown on the top left here, where we have our transport layer running through the system. And I've also shown an example here where we don't utilize the transport layer: sometimes we take it out and create a bit more vertical integration space. We can also add shelves to go even taller, depending on which devices we're using, so we have a lot of vertical space we can utilize as well, for really compact systems that don't require quite as high-throughput sample logistics. We can then bend the system around corners, so we've got benches that provide inside and outside corner configurations. This becomes really useful when we need to wrap things around the edge of an existing lab and utilize that space, maybe in a U shape, or even going around a pillar. And then we have a configuration that is increasingly popular, which is the back-to-back island system. Here you can see we've got two runs of benches, eight benches in total, back to back, with robots on the outside, and now we can move through the whole of this system with that unified transport layer underneath, so labware can go from one side of the system to the other. This configuration is particularly useful when we want to start putting systems into containment, for example, because it makes for an easier system design.
Our software is cloud-based, so any user can log in from anywhere, depending on their permissions, and do everything from workflow design, using drag-and-drop, node-based workflow configuration, through to simulating and controlling the execution of those runs, all from a PC, tablet, or phone.
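As an illustration of what a drag-and-drop, node-based workflow might serialize to, here's a hypothetical sketch: each node is a device action with parameters, and the edges describe how the labware moves between them. The schema is invented for illustration; it is not Automata's actual format.

```python
# Hypothetical serialization of a node-based workflow: nodes are device
# actions, edges are labware movements. Device and action names assumed.

workflow = {
    "name": "elisa_prep",
    "nodes": [
        {"id": "n1", "device": "incubator",      "action": "incubate", "params": {"minutes": 30, "temp_c": 37}},
        {"id": "n2", "device": "plate_washer",   "action": "wash",     "params": {"cycles": 3}},
        {"id": "n3", "device": "liquid_handler", "action": "dispense", "params": {"volume_ul": 50}},
        {"id": "n4", "device": "plate_reader",   "action": "read",     "params": {"wavelength_nm": 450}},
    ],
    "edges": [("n1", "n2"), ("n2", "n3"), ("n3", "n4")],  # labware routing
}

def simulate(wf: dict) -> None:
    """Dry-run the workflow: print each device action in sequence."""
    for node in wf["nodes"]:
        print(f"{node['device']}: {node['action']} {node['params']}")

simulate(workflow)
```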
So how does that work when we start bringing together this orchestration of infrastructure-level automation? I'll show you a few more in-depth features of our software, because it really does start becoming a software problem once you've got past the system design challenges. This is what it looks like when you log into our software platform. Here we're just looking at one work cell, with one particular user, and you can see a representation of all of the different runs that are currently going on in parallel on that work cell. So now we're talking about multi-run implementation on a single work cell.
We can also look at our work cell runs. Here you can see each line is an individual execution, if you will. Some of those have finished, some are currently running, and some are queued. And we have this one at the top, which has a red highlight because it's now ready to run; you can see it has a schedule button next to it. So what I can now do is start calendarizing those runs. When I go into this run, I can click on the schedule button and start deciding when I want it to run, and the system will show me when it has availability to run this particular batch of samples. I can also change some of the core parameters that I might want to address with this particular work cell; here, I might need to change my reagent. And actually, in this example, this is a system that consumes files, generated by a LIMS, that drive some activities on the liquid handler. We can upload those files, because the system isn't directly connected to the LIMS in this particular case. So now I've scheduled that run, and it appears in my run view, where I can see all the runs that are currently executing. Several runs are running in parallel on this platform, and my new run is scheduled to start after those are finished.
So now I have a view where I can start looking at a multi-run environment; I'm just going to slow this down, because it goes very quickly. This is a real system in our lab that's running ELISAs: three executions, of a three-batch experiment each. You can see that we started run one, and then started run two concurrently, and you can see all of the activities that are happening, with each set of activities color-coded by individual batch. So batch one is being washed, sealed, shaken, peeled, washed, and so on and so forth. But this is all happening in parallel.
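A toy sketch of that multi-run behavior, using Python's asyncio: three batches advance through the same step sequence concurrently, with one lock per device standing in for the fact that a washer or sealer can only hold one plate at a time. Step names and timings are illustrative.

```python
# Toy model of parallel batches sharing devices. Timings are stand-ins.

import asyncio

STEPS = ["wash", "seal", "shake", "peel", "wash", "read"]

# One lock per device: a washer or sealer processes one plate at a time
# even when several runs overlap.
devices = {step: asyncio.Lock() for step in set(STEPS)}

async def run_batch(batch: int) -> None:
    for step in STEPS:
        async with devices[step]:
            print(f"batch {batch}: {step}")
            await asyncio.sleep(0.1)  # stand-in for real device time

async def main() -> None:
    # Three batches started concurrently, like the color-coded runs above.
    await asyncio.gather(*(run_batch(b) for b in (1, 2, 3)))

asyncio.run(main())
```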
Of course, everyone wants to know what happens on the system when something goes wrong. Stuff does go wrong with automation; anyone that tells you it doesn't is misleading you. What we aim to do is minimize the number of times things go wrong, and make sure that when something does go wrong, it's intuitive as to what you should do. So in our system here, run one has now gone into an error state. We get an error code from the device in many cases, so here we've got an indication of what that error is, and a number of actions that the user could take to resolve it, and we get some strong visual indications that something isn't quite right on the system; of course, the light indicators on the benches themselves flash red as well. And now we can move through that and resolve the error.
I want to speak a little bit about our software architecture before closing up with a couple of more specific examples. This is a fairly complex slide, but I wanted to make sure you're aware that we have different kinds of users who feed into our platform. So we have 21 CFR Part 11-enabling user control, where people might just be an operator, or an administrator with full control, or a workflow designer, and they feed into our platform, which runs on AWS. Once you've created a run, that platform will send the data down to a local instance of our software which is actually embedded in our hardware, in the bench itself, called our hub. The reason why we run things this way is that if this connection were to break mid-run, all of the data required to execute on that platform is actually stored locally, so a network failure is never going to impact your runtime, and you can interact with the system more locally. That hub is also handling an awful lot of the data. The hub consumes data from instruments, which may in some instances be fed back into the cloud-based platform, and through that, through some data piping, into your data lake or your ELN or your LIMS; but it might also go directly from your hub into your LIMS or your ELN or your data lake. So we have a couple of options for how we drive that data continuity into the rest of your digital ecosystem.
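Here's a minimal sketch of that store-and-forward pattern, with entirely hypothetical names: the full run specification is persisted on the hub before execution starts, so a dropped connection mid-run never stalls the hardware, and results queue locally until the link comes back.

```python
# Sketch of the cloud-to-hub pattern: stage the run spec locally before
# executing, and queue results when the cloud is unreachable. All names
# and the on-disk layout are assumptions for illustration.

import json
from pathlib import Path

LOCAL_STORE = Path("hub_runs")

def stage_run(run_id: str, spec: dict) -> Path:
    """Persist everything needed to execute the run before it starts."""
    LOCAL_STORE.mkdir(exist_ok=True)
    path = LOCAL_STORE / f"{run_id}.json"
    path.write_text(json.dumps(spec))
    return path

def execute_run(run_id: str, cloud_online: bool) -> list[dict]:
    """Execute from the local copy; return any results awaiting upload."""
    spec = json.loads((LOCAL_STORE / f"{run_id}.json").read_text())
    outbox = []
    for step in spec["steps"]:
        result = {"run": run_id, "step": step, "status": "done"}
        if cloud_online:
            print(f"streaming {result} to cloud")  # normal path
        else:
            outbox.append(result)                  # store and forward later
    return outbox

stage_run("run-42", {"steps": ["wash", "seal", "read"]})
pending = execute_run("run-42", cloud_online=False)
print(f"{len(pending)} results queued for upload")
```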
Last but not least, I want to talk about what the real impact of this would be for your lab, and there are a couple of examples here that are quite different. The first one I want to speak to is actually a very high-throughput ligand binding assay that we worked on with a client. Before adopting the automation that we delivered, people were running individual, appliance-level automation, shall we say, and then handwriting data about what they'd done into a notebook, which all then got uploaded into the LIMS: the SOP said immediately, but more practically it was on a Friday afternoon. The only bit that was really automated here was that data coming from the plate reader could be manually uploaded, but all of the actual process steps were manual.
After implementing our platform, we're now able, for each of those instruments, to log the tasks that have happened and record the data that came along with them. A lot of this data is actually system data or process data: probes measuring temperature and humidity, logged timestamps, and other metadata and activity from the individual devices that we're running, such as incubator performance, or when things were sealed, or when things were put through the plate washer. And we can combine that with the plate reader data to deliver a metadata-rich dataset about everything that happened, take away all of that handwritten requirement, and deliver it directly into the client's LIMS and, in this case, an Azure-based data lake. So in this instance, we actually delivered a 13-times improvement in the number of data points per plate from the automation: they were collecting around about three data points per plate before our platform, and 39 afterwards, which was quite a huge impact in terms of measuring the performance of that experiment.
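To illustrate the kind of enrichment behind that three-to-39 jump, here's a hypothetical sketch of merging the endpoint readings with logged process events and probe data into one record per plate. All field names and values are invented.

```python
# Illustrative record assembly: endpoint readings plus process metadata,
# replacing a handwritten notebook entry. Field names are hypothetical.

from datetime import datetime, timezone

def enrich(plate_id: str, readings: list[float], process_log: list[dict]) -> dict:
    """Merge plate-reader readings with every logged event for the plate."""
    return {
        "plate": plate_id,
        "readings": readings,            # the original handful of data points
        "process_events": process_log,   # seal/incubate/wash events and timestamps
        "environment": {"temp_c": 21.4, "humidity_pct": 38},  # probe data (example)
        "assembled_at": datetime.now(timezone.utc).isoformat(),
    }

record = enrich(
    "P-0117",
    readings=[0.92, 0.88, 1.01],
    process_log=[
        {"event": "seal", "at": "2023-06-01T09:12:00Z"},
        {"event": "incubate", "minutes": 30, "temp_c": 37},
        {"event": "wash", "cycles": 3},
    ],
)
print(len(record["readings"]) + len(record["process_events"]),
      "logged values for plate", record["plate"])
```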
Jumping back to something I shared earlier on, this is now a completely different example, where we think about end-to-end clinical genomics. We've been working on a system that is very similar to the one I shared for NIPT, but for a different application, and in that instance, what we're seeing is a three-times throughput increase over what they were achieving with their appliance-level automation. This was actually a reasonably automated laboratory already, and the knock-on effect was that the number of staff didn't reduce, the number of staff stayed the same, but the staff cost per test went down dramatically: we saw a 70% reduction in cost per test. And then we start seeing this enormous decrease in manual interaction, so we're raising the consistency of our experimentation as well. So this is the kind of impact we see when we implement this sort of scale of end-to-end, infrastructure-level automation.
All of that may have been a little bit overwhelming in terms of this huge scale of automation I've talked about, but I stand by the fact that we should really be thinking about how we go from where we are now to that. So if you are at the appliance level of automation, perhaps think about what the next steps might be. If you're not automating, think about what the appliances are that you need to get first. Find a core problem, start automating, and then plan to scale, but have a plan. In this instance, going back to the NIPT example, we went from a standalone liquid handler to fully automated end-to-end workflows, and we didn't do this nought to a hundred; we didn't go from nothing to that final step. We've gone through a phased delivery and really planned out what that change management should be as we move from low-to-no automation to real lab-of-the-future stuff. And so with that, I'll close my talk. I'll leave you for a moment with a pretty video of that same system I filmed from below the bench earlier on, just showing you our platform: our robotic arms interacting in parallel with a number of devices, working with our transport layer underneath to really handle those plate logistics. And I'll be happy to take any questions. Thank you very much.
All right, great. Thanks very much, Russell, for a wonderful presentation. At this point, we are about ready to move into our question and answer session with the audience. Again, for those of you who may have joined us late, you can send in your questions by typing them into the Q&A box on the right-hand side of your screen. Even if you don't have a question, we invite you to leave a comment: let us know how you enjoyed this presentation, if you found the information useful, and if you'd like Russell or the Automata team to reach out to you following this webinar. I'm also going to put up their website on the screen in front of you right now, if you would like to visit them. And I'd like to remind you to visit the handouts section on the right-hand side of your screen for some supporting information for this event. We also have that poll, if you'd like to complete it for your chance to win a gift card courtesy of Lab Manager. So Russell, thanks again; let's jump into the audience questions here. This first one says: this all seems like a huge transition, so where do I begin if I want to highlight some automation targets in my lab?
Yeah, that's a really good question. We often talk about this, where do I start, when we talk about adoption of automation. I always encourage people to try to get to a point very early on where you're setting a goal. When we work with clients on this, and we do have clients that come to us without a specific goal at the very beginning, we'll work with them to really look at their laboratory and the processes they're running right now, and try to identify where their current bottlenecks might be, where their data continuity problems or reproducibility challenges might be, and try to land on a goal, which could be one of those things. It could be increased throughput; it could be, I don't know, reduced manual interaction at this point; it could be generating more data here; it could be adding in another analyzer. But you need to figure out which of those first goals is most tangible. Trying to leap to some of the automation examples I've shown today is possible, but it's a big leap. We do work with clients that are doing exactly that, and we build a really strong change management program, but the success of the implementation of automation, and its adoption by the people in your lab, is better served if you start off with something tangible and meaningful to them, and then scale from it. So set a goal, start small, and go from there.
Okay, great. Thanks. Let's go on to the next one. It says: can your software be run outside of the cloud?
Yeah, we get asked this quite a lot. The simple answer is no. We are looking to provide a state-of-the-art solution that will take you into the future. We know that there are some sites that have challenges adopting cloud-based infrastructure, but actually, we've not really found many sites where it's been an absolute blocker, and we'll work with you to get past any of the challenges there. But no, our software is designed to be run in the cloud, and you reap all of the benefits that come with that. I know that's probably the long answer, but we can use cloud computing technology to deal with some really gnarly scheduling problems, such as how I deal with time constraints through the system, and of course, we can then do practical things like allowing you to monitor your system from, I don't know, your morning cup of coffee, wherever that may be. All of these things are enabled by cloud infrastructure, so we advocate for all of those things being really positive.
Okay, great. Thank you. We have some more questions coming in from the audience. If you scroll to the bottom a little bit, this next one says: I know it will vary a lot, but what is the cost range for a LINQ?
Yeah, it does vary a lot. I'll be very honest: I'm not going to answer that super directly, because it varies too much by project. We have projects that are in six figures, and projects that are in seven figures and greater, so it really depends on what you're trying to do. The other thing that can really impact the cost is the devices that go onto the system, which is a little bit out of our control, and whether you need to add on things like containment. So, yeah, sorry, that's an indirect answer, but I think we're best served getting to an estimate for a project fairly early on, in the process of looking at what workflow you want to automate, looking at the space you have, trying to figure that out, and then figuring out what a cost estimate might be.
Okay, thanks. This next question says: how does the bench communicate with the instruments? Is this something I could set up myself, or would it require external support?
Yeah, really good question. We at Automata are an integrator of devices, so for all of those third-party devices, we provide the integration. We factory-acceptance-test that in many cases, and then deliver it to your site, so we take full responsibility for the connectivity of the LINQ platform to the devices that are going onto it. It is not something you should have to worry about; all of that communication is handled by ourselves. Of course, the other example where external support might be required is if we have to work with the third-party vendor, and that is true in some specific examples, where we need some extra support and subcontract to that device vendor to bring in some extra communication support. But most of the devices out there that are automation-friendly have really well-documented APIs. We develop the plugins that both physically control the device and deal with any data challenges that go with it, and then, of course, we even go beyond that and start delivering data connectivity to your LIMS or ELN or data lake, or whatever that might be.
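As a general illustration of that plugin pattern (my sketch, not Automata's actual plugin API), each instrument adapter maps a vendor's documented interface onto one common call that the orchestrator can drive and harvest data from.

```python
# Generic device-plugin sketch. The interface and the stubbed responses
# are assumptions for illustration only.

from abc import ABC, abstractmethod

class DevicePlugin(ABC):
    @abstractmethod
    def execute(self, action: str, params: dict) -> dict:
        """Run one device action and return its result payload."""

class PlateReaderPlugin(DevicePlugin):
    def execute(self, action: str, params: dict) -> dict:
        # A real plugin would call the vendor's documented API over its
        # serial or network interface; here the response is stubbed.
        if action == "read":
            return {"device": "plate_reader",
                    "wavelength_nm": params["wavelength_nm"],
                    "values": [0.91, 0.87]}
        raise ValueError(f"unsupported action: {action}")

def run_step(plugin: DevicePlugin, action: str, params: dict) -> dict:
    result = plugin.execute(action, params)
    # From here the payload could be forwarded to a LIMS, ELN, or data lake.
    return result

print(run_step(PlateReaderPlugin(), "read", {"wavelength_nm": 450}))
```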
Next question, please.
Sure. Thanks. Let's go to the top of the list; I think we have time for a couple more. If I use existing instruments, can I still get access to them on the automated platform? Some of them are really expensive.
Yeah, that's an interesting one. I've been asked this at every automation company I've had the pleasure of working with, and how you deal with it varies. One of the beautiful things about the LINQ platform is that, because we've put the robots on the outside and inverted that design, all of those devices, or almost all of them, now face outwards on the system. So most of those devices, not all of them, can now be accessed manually, even when the system is running. The real clinch tends to come in on some of these really expensive ones. I've had a few projects where we've worked with things like imaging systems and high-content imagers, where they could be a million dollars in their own right and their run times are quite long. So we tend to locate those in a position on the system where they're more easily accessible, and then, as long as the automation system isn't using them at that moment in time, we can flag to the user when they're available, and they can walk up and interact with them manually. And of course, the robots themselves are collaborative robots, so it's perfectly safe to walk up while the whole system is running and interact with those devices. Hopefully that answers the question.
Perfect. Thank you so much. We have time for one last question: how did the client work out what their targets could be for automation in the genomics case study?
Oh yeah, that's an interesting one. I think I alluded to this: we didn't go from nothing to everything in one step, and we did pick some individual areas to start building upon. Maybe I answered this a little bit in the earlier question. We sat down with the client and did an awful lot of walking through their existing workflows, and also what devices they already had available in their own lab, and what surfaced was some particular problem areas. In their case, it was mainly around the extractions, so we started focusing on the extractions, mainly because it was really onerous to do manually and it was the bottleneck to anything else working. We thought if we could solve that problem, then we could start thinking about what other areas we might automate afterwards. And actually, by doing that, we defined a work cell right in the middle of the process, which helped us define the work cells that went upstream and downstream of it, and then it became a lot clearer how we might transition to a larger end-to-end system in the future as well.
Okay. Wonderful. Thanks so much.
So that does bring us to the end of this presentation, and again,
I'd like to remind you that this will be available along with the other
recordings from our Automation Digital Summit.
So please watch for an email from Lab Manager once these recordings are
available. On behalf of Lab Manager,
I'd like to thank Russell Green for all the hard work you put into this
presentation,
and I'd like to thank all of you for taking time out of your busy schedules to
join us. Once again, thank you to our sponsors,
Automata, Integra, Ran Tech Scientific, Metrohm USA, Analytik Jena, and Copy Type.
Their support allows Lab Manager to keep these events free of charge for our
readers. Please be sure to tune in for our next presentation at 11:00 AM Eastern
tomorrow, The Three Ws of Introducing Automation: Why, Who, and When. For more information on all of our upcoming and on-demand webinars, or to learn more about the latest tools and technologies for the laboratory, please visit our website at labmanager.com.
Thank you all for being part of our Automation Digital Summit,
and we hope you have a great day.