Lab Manager | Run Your Lab Like a Business

Product News

Are You Ignoring Your Lab's IT Needs?

How a lab manager deals with the IT requirements of instruments directly impacts instrument uptime, service costs, usability, validation, and enterprise network security.

by Anton Federkiewicz

Lab instrumentation today is becoming increasingly computerized. It seems that every piece of instrumentation arrives in the lab with a computer attached or one built right into the instrument itself. The data that these instruments generate flows at rates that require high-speed network connections. As soon as you plug one of these computers into a wall jack, you need to worry about instrument stability, corporate network security, and IT/IS standards. At the same time, it is unusual to find a truly unified support system for lab computers in most research centers. This leads to many disagreements among stakeholders about the proper maintenance and support of these computers. Add a regulatory validation requirement to this situation and it is easy to see how any lab manager can become overwhelmed.

I wanted to share some of my observations on this subject and offer some very simple recommendations to help the average lab manager maintain computer systems in lab environments. Keep in mind that I said SIMPLE recommendations, not EASY ones. Your experiences may differ for any number of reasons, and if they do, I would love to hear about them.

“What makes an instrument computer different from any other computer in a corporate environment?”

Standard corporate desktop computers are usually tightly controlled and managed. They are part of a large population of computers that have the same software, operating system, patch levels, security systems, and other configuration parameters. The very nature of instrument computing runs counter to most standardization efforts. Instrument computers are different because of their value, use, distinctiveness, and environment.

Desktop computers have become a commodity. Instrument computers cost a small fortune. This is especially true if you purchase the computer bundled with the instrument from the vendor. It is not unusual to see a computer broken out of a system purchase costing $5,000 to $10,000 or more. That fact runs against what most people see in the consumer stores where computers cost under $1,000. This is just the upfront cost associated with these specialized computers. There are other backend costs such as repair/upgrades of software (usually not considered to be part of most service contracts) and validation/change control to consider as well.

Instrument computer usage varies, but for the most part, they are considered to be the primary data acquisition points of the lab. There used to be a time when instrumentation was used without a computer, but those days are long gone. Modern lab instrumentation simply cannot run without them. Generally, people do not use them to prepare monthly financial reports, surf the web, or answer e-mails, as these activities are supported in the office environment and only jeopardize the lab equipment’s primary purpose. In the desktop environment, it is rather easy to swap out, borrow, or restore a computer when it breaks. Instrument computers are much more critical to the lab’s primary product, much harder to temporarily replace, and nearly impossible for most desktop IT groups to service under standard service level agreements. The last point is exacerbated further when desktop computer support is outsourced to a third party, which seems to be the general trend these days.

Instrument computers tend to be both unique and diverse in a large lab environment. They run software that is usually not available from your computer superstore or standard desktop application library. These software packages can be complex, difficult to maintain, and expensive to repair or reconfigure. Researchers need to use a variety of instruments to do their jobs, which leads to a variety of computers and configurations attached to those instruments. An additional source of diversity occurs when there is weak central asset management or planning over the instrumentation acquisition processes used by different lab groups. I have seen five different makes of the same type of instrument in the same location simply because each research group had their own favorite. Instrument computer diversity also occurs when instrumentation is purchased over time. Labs running the newest computers on one bench and seven-year-old machines on another are commonplace. The lack of lifecycle management may be due to multiple factors that may be purpose driven, technology-based, or financial in nature.

The operating environment of most instrument computers differs substantially from the carpeted areas of the corporate desktop computer world. Usually everyone in the carpeted areas gets their own PC. Instrument computers are generally shared by the entire lab staff. Lab systems are corrupted quite often because of constant power-user tinkering. Instrument vendor software packages sometimes require that users have administrative rights on the computer in order to even operate the software. That would never be allowed in a normal desktop computing environment for obvious reasons. Lab computers also generally run under higher workloads than desktops and consequently require higher maintenance. For instance, our service data indicates that instrument computer hard drives fail at a 20% higher rate than their desktop counterparts. These computers often work harder for longer lengths of time with less maintenance than desktop PCs.

Managing an instrument PC like just any other desktop computer has made many lab managers reluctant to allow general PC service technicians to even walk into their labs. I have heard of and witnessed multiple instances of well-intentioned IS service technicians who have shut down labs by insisting that a patch needed to be deployed on an instrument machine in order for it to be “in compliance” with IS standards. Now don’t get me wrong, IS standards are a good thing as they protect all of us from all kinds of nonsense. The problem with their heavy application in the lab environment is that it takes special knowledge and a gentle hand to implement those standards in a way that both satisfies the standard and keeps the lab running. One of the main reasons why “Parallel IT” organizations commonly spring up in lab environments is that they have those knowledgeable, gentle hands. That said, there are excellent IS organizations and people that have a genuine concern for lab computing and a good understanding of the differences from the desktop environment. Unfortunately, I find that both the “Parallel IT” people and the official IT people with lab systems experience are usually spread way too thinly to support instrument computing and are usually not coordinated effectively across research organizations.

“OK, so our lab computers are different. We seem to be getting by just fine.”

I hear this argument from every person or group responsible for lab computer support. The problem is that this same logic is being repeated from many different groups with different interpretations of what “getting by just fine” actually means. Support groups include instrument services departments, vendor service engineers and organizations, subject matter experts, general lab users, IT/IS support groups, and anyone else who has even the slightest interest in maintaining instrument systems. Every group has their own interests and, usually, they assume that those interests are common to everybody. Interests are usually defined by the function that you serve in the organization.

An instrument service group functions to keep the lab’s instruments operating efficiently at the lowest cost possible. The usual problem lies in the fact that they have little to no IT requirements other than what is imposed upon them from other organizations. It is unusual to find an instrument services department that includes an IT role. What little IT role they do have consists of keeping the systems that they support operational. “Getting by just fine,” for the instrument services organization that I describe, means that the instruments work and that the researchers are happy. Very little is done about security issues until it is too late. Malware, data security and integrity, patch management, and other IT functions tend to become an afterthought, as these types of concerns are not about keeping instruments running or researchers happy. IT organizations are the ones looked to for prevention of those issues.

Vendor service engineers (VSEs) and vendor organizations are primarily service retailers who focus their efforts within the scope of their products’ advertised function. Services consist of “break-fix” repairs, software/hardware upgrades, installations, and preventative maintenance. Since these services usually focus exclusively on the instrument hardware and software, VSEs sometimes insist that the PC should be used “as is” and never altered to fit into the IT infrastructure of the company. Customers are advised that any changes to the OEM deployment may void the warranty and generate additional service costs. My team has had to become very adept at proving that our modifications are not what is causing an instrument system to malfunction. “Getting by just fine” for a VSE means that the instrument runs as it did from the factory. Once again, general computer needs seem to be somebody else’s problem and, in all fairness, justifiably so.

Subject matter experts (SMEs) are those lab personnel who are savvy enough to manage their particular lab PCs in a way that either attempts to accommodate IS security concerns or avoids IS security mandates altogether by isolating their network and limiting their users to whatever functionality they can provide. They are primarily motivated to keep their particular lab instruments running and treat their own users’ needs as the most important thing. IS security tends to be something to be worked around instead of implemented. “Getting by just fine” for an SME means that their users don’t become aggravated by restrictions caused by what they would argue are excessive security concerns. It also means that they are not swamped with support requests, can get their research work done, and that nobody from corporate IT notices that they have set up their own independent lab environment.

General lab users are, of course, primarily motivated by their need for data. Their mantra is “Keep the data flowing at all costs!” I would consider this voice to be the most important voice of all the stakeholders. This voice should set the mission of all the support stakeholders. If you run a research lab, your product is data. It is uncommon for general lab users to worry about IT security or supportability. They tend to do whatever it takes to keep the data flowing. This includes all kinds of risky behaviors, workarounds, and general tinkering. I won’t name names, but I have had to fix some very impressively damaged computer systems where the root cause was somebody thinking that they could fix that “little error box that kept coming up” because they did the same thing on their home PC.

Last but not least are the IT/IS support organizations. The overwhelming goal of this service group is to standardize, sterilize, and write procedures and policies, because that is what works for their largest environment, the office desktop. Lab computing environments just don’t fit into that mold for the reasons we have already discussed. These “non-standard” systems tend to either get exempted or put outside of standard IT/IS support scope. The resulting gaps in coverage fall back on the researcher to fill, or someone from IS/IT takes a “best effort” approach at supporting the instrument computer. An illustrative and all too frequent scenario occurs when the computer software that runs the instrument suddenly stops working. A lab manager knows that the instrument hardware is running fine and places a service call to their IT helpdesk. The IT analyst looks up instrument software on the call script and sees that the usual response is to ask the lab manager to “call the vendor.” The researcher calls and, after a few days, the VSE comes in and takes a look at the problem. Since the troubleshooting tree that we discussed for VSEs centers upon the sanctity of the OEM build, the VSE tells the lab manager that it was a “hot fix” that a well-meaning IT person put on the PC that caused the instrument to stop working. Of course, the lab manager is furious and blames IT for pushing this critical patch to the computer. IT’s honorable intention was to help protect the lab system and the larger corporate network in the only way that their policies, procedures, and resources would allow. The offending patch is removed by the vendor and the lab manager is warned that their system cannot be supported under the current service contract if they continue to alter the system. It tends to go around and around like this until either a compromise is made between functionality and security or the event becomes an escalated issue, straining relations between organizations.

Now, don’t get me wrong, there are MANY fine people out there supporting these systems. That includes people from all of these groups with their competing interests. Overwhelmingly, what I have found is that everybody on this list has the same general opinions of this situation. Everyone believes that we need to come up with a solution to these instrument computing support gaps. The dilemma is in the combination of competing interests, resources (or the lack thereof), and general organizational momentum. It is difficult if not impossible to get anyone to stand up and agree to do anything about it. It is also hard to convince management that a problem exists when each stakeholder group tells the story singly and from their own perspective. Each group is performing their jobs correctly but the gaps between each support group’s scope inhibit the optimization of the instrument computing environment. Ultimately, the disease seems to be that there is no unified voice that speaks for instrument computing.

So what should we do about it?

1. Stop ignoring the fact that there is a computer on the instrument and acknowledge that they are completely interdependent systems. Many support philosophies simply forget that there is a computer attached to the instrument that has support needs of its own. Regular operating system monitoring, patching, anti-virus, defragmentation, networking, data management, upgrades, and other normal computer system maintenance needs to be addressed. The other thing to be avoided is the tendency for certain organizations to think of the instrument computer as just another computer. Instrument computer support has to be done in a holistic manner that is centered on maintaining the uptime of the entire instrumentation system.

2. Unify your support philosophies. No, I don’t mean that there is one group that has to take on this whole issue. What we have done at Wyeth is to come up with a consortium of volunteers that bridges these problematic organizational boundaries and sets a unified direction for lab computing in our global environment. We meet regularly with IS/IT organizations in an open forum and share ideas, come up with solutions, align ourselves, and generally try to address the fact that there is indeed a computer on our instruments. It’s not perfect but it is much better than the way things were done before. It also helps to remove the stigma of being part of a “Parallel IT” organization. I suggest that you start your own unofficial lab computing user group. Establish a grassroots effort to unify how these machines are supported. When you speak with one voice, management tends to listen better, understand more, and has the opportunity to get behind something other than a discordant series of opinions.

3. Leverage enterprise class support tools. This one can produce immediate return on investment and doesn’t need a massive effort to implement. For years, we did lab computer support by hand. We visited each workstation personally, installed software, hot-fixes, and did repairs one at a time. This was both time consuming and ineffective. We were never able to say with any certainty if our lab systems were secure or even running properly. We began to collaborate and learn from each other about how certain support software could assist us in maintaining a more uniform and secure lab computing environment. Most people in our lab users group began to leverage the following enterprise class software packages:
a. Symantec Ghost Solutions Suite — Fact: Your lab computer’s hard drive will fail! A hard drive that costs less than $100 can cause a lab software re-installation costing $1–10K and keep a lab system out of production for quite a few days. We found that by imaging (making a compressed snapshot of) the hard drive of each and every lab computer in our environment and maintaining those images systematically, we can get rid of that re-installation cost and have the users up and running again in under two hours. This software suite allows us to remotely image and restore all of our workstations’ hard drives. We can also do it with a few mouse clicks directly from the lab computers’ desktops, so that downtime is momentary and images are refreshed before and after service by the end users themselves.
b. McAfee ePolicy Orchestrator — These days, a computer simply MUST have some anti-virus software. Most of our workstations had some form of anti-virus software. The problem was that they were inconsistent when it came to virus definitions, patches, and versions. This product allowed us to unify our anti-virus efforts. One thing that we should note is that we do not allow our standard corporate anti-virus policies to be run in our labs. We administer our own and deploy in a much gentler and more flexibly controlled fashion than the normal desktop computing environment does. We always remember that lab systems are different while recognizing the need for standardized methods and technologies.
c. VNC/Remote Desktop Connections — IT security usually frowns on this technology because of the potential vulnerability, but if managed correctly these applications can greatly extend the reach of a resource-limited support organization. These applications allow you to remotely access the lab computers’ desktops so that you can see exactly what the user is seeing and fix problems from your own desk. Our service call script has our technicians log into the instrument machine as soon as the call is received. About 80% of our call volume is resolved within 15 minutes using these remote tools.
d. A patch management solution — We use BES BigFix, but we started with Shavlik HFNetChk Pro™. Some people simply let their instrument computers update themselves from Microsoft, but we have found that this can be very disruptive to research efforts and generate concerns over change management. These OS patch management suites allow us to deploy patches to our instrument machines the way we like to do them. Some machines have patch issues; these machines are noted and excluded from certain patch deployments. All reboots are controlled by the user via dialogs that allow a flexible deployment window, so reboots happen when researchers are not using the machine. If a patch is found to have a software conflict, it can simply be rolled off the machine and functionality is restored. For more problematic conflicts, the machine may be restored to its original configuration via our Ghost process.
e. Domain Authentication — One of the big problems that we ran into early on was the lack of a comprehensive user and policy management system that would be flexible enough to be used in the lab environment. One of the biggest time sinks in our early days was simply managing user accounts and policies. Password resets, user account creation, policies, and permissions were all done on a computer-by-computer basis. We then implemented an Active Directory/domain authentication scheme that solved these problems. Now, user accounts are managed by our main IT organization and are synchronized with the rest of the desktop computing environment, but their permissions on each workstation are controlled by us. This allows us the flexibility that we need in supporting such a diverse set of user requirements. We are very mindful of corporate policies and their purposes. We are able to accommodate the necessary policy differences for lab computing by utilizing the Windows Active Directory container features to manage our own policies at a much more granular level.
f. Lab Network Segmentation — Lab computers are security risks, but in a research organization, they are a necessary risk. This can also be the case in other special-function environments like manufacturing, security, and building management. We run into trouble when that risk is taken on an enterprise-wide network. One of our major risk mitigation solutions was to segregate our lab machines from the general computing environment. This not only allowed us to satisfy IS security requirements, it made our labs run with less interruption and gave us lab support people more control over what happened in our environment.
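To make the segmentation idea concrete, the sketch below generates iptables-style firewall rules for a segregated lab segment. Everything here is hypothetical for illustration: the lab subnet (10.20.0.0/24), the data server, and the patch server addresses are invented, and a real deployment would tailor the allowed ports and hosts to its own instruments and policies.

```shell
#!/bin/sh
# Illustrative sketch: emit firewall rules that segregate a lab subnet.
# LAB_NET, DATA_SERVER, and PATCH_SERVER are hypothetical example values.
LAB_NET="10.20.0.0/24"
DATA_SERVER="10.1.5.10"   # file share where instrument data lands
PATCH_SERVER="10.1.5.20"  # internal patch-management server

emit_rules() {
    # Allow lab machines to push data to the data server (SMB, port 445)
    echo "iptables -A FORWARD -s $LAB_NET -d $DATA_SERVER -p tcp --dport 445 -j ACCEPT"
    # Allow lab machines to pull updates from the patch server (HTTP, port 80)
    echo "iptables -A FORWARD -s $LAB_NET -d $PATCH_SERVER -p tcp --dport 80 -j ACCEPT"
    # Block everything else leaving the lab segment
    echo "iptables -A FORWARD -s $LAB_NET -j DROP"
}

emit_rules
```

The design choice mirrors the article’s point: instrument traffic that serves the lab’s product (data and patches) is explicitly permitted, and everything else is dropped, so a compromised instrument PC cannot reach the general corporate network.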

There are plenty of enterprise-class solutions out there for these problems, and the ones listed here are just the ones that we have found to be effective. Not only that, but many of these solutions are probably already available in your IT infrastructure. You would need to “tweak” them for the lab environment, but I think with the help of your IT folks you would be able to manage well. Additionally, all of these solutions can be used in a GxP environment with the right amount of validation effort.
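As one example of how the image-and-restore cycle from item (a) above works without assuming any particular product, here is a minimal sketch using the open-source stand-ins `dd` and `gzip` against an ordinary scratch file playing the role of a disk. A real deployment would capture an actual device, and a commercial suite like Ghost layers scheduling, multicast deployment, and a management console on top of this same basic capture/restore idea.

```shell
#!/bin/sh
set -e
# Illustrative image/restore cycle on a scratch file standing in for a disk.
workdir="$(mktemp -d)"

# 1. Fabricate a small "disk" to protect (64 KB of zeros plus some data).
dd if=/dev/zero of="$workdir/disk.img" bs=1024 count=64 2>/dev/null
printf 'instrument method files live here\n' >> "$workdir/disk.img"

# 2. Capture a compressed image (the "snapshot").
gzip -c "$workdir/disk.img" > "$workdir/disk.img.gz"

# 3. Simulate a drive failure by destroying the original.
rm "$workdir/disk.img"

# 4. Restore the contents from the image onto a replacement "disk".
gunzip -c "$workdir/disk.img.gz" > "$workdir/restored.img"
```

The point of the exercise is the economics described earlier: once the compressed snapshot exists, recovering from a failed drive is a quick restore rather than a days-long software re-installation.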

The important things to take away from this are the need for collaboration and focus on the business objectives of productivity AND risk management. There is help out there for these problematic instrument computers. You don’t have to leave them unmanaged but it does take some work.

Remember, I said that I was going to make simple recommendations, not easy ones. Constructive partnerships, teams, and overlapping expertise remove the support gaps from the lab computing environment. At my particular research site, a team of two and a half instrument service engineers (including myself) implemented each of these solutions for approximately 350 instrument computers over three years. That is in addition to our normal duties of maintaining those instruments, scheduling vendors, doing upgrades, and other general instrument services functions. When it was all done, we had a system in place that allows us to maintain those 350 instrument computers in a manner that maximizes their uptime, minimizes their costs, and does so in a manner that is safe for our network. Additionally, we are well on our way to spreading this lab computer support model to all areas of our corporation. We had a whole lot of help from our IT organizations, lab managers, management, and others who were concerned about this issue. I found that people will come to your aid if you are willing to stand up for the business objectives, ask for help, and take on the challenge. We work in one of the most productive pharmaceutical research centers in the world and there is never a shortage of work to do. If we can do it, so can you.

Anton Federkiewicz has a B.S. in Chemical Engineering from New Jersey Institute of Technology. He has 12+ years of diversified engineering experience focusing on computer development/support and process automation within research and manufacturing environments. He is currently the Instrument Services Supervisor at Wyeth Research in Princeton, NJ. He wants you to know that he would welcome your comments, questions, criticisms, or topical conversations at 732-274-4070 or by e-mail at federka@wyeth.com.