A somewhat humanlike hand swipes lightly over three tomatoes sitting in a row on a surface. Then, using its index finger, the robotic hand lightly taps the first, the second, and the third tomato in turn. Finally, with a gentle yet precise right-to-left swipe, the hand pushes the middle tomato out of the line. In another demonstration, the same robotic hand locates a tomato sitting on top of a stack of two sugar cube-like objects and tenderly lifts it without toppling the cubes. It takes advanced sensors to make lab robotics work so precisely and carefully.
This robotic hand comes from Robert Shepherd, assistant professor of mechanical and aerospace engineering at Cornell University (Ithaca, NY), and his colleagues in the Organic Robotics Lab, where soft sensors make up part of the research. For example, dielectric elastomer sensors enable a haptic interface like the one used in the robotic hand. Shepherd’s team 3-D prints capacitors on soft materials to build sensors that can “feel” even as they bend.
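To picture how such a sensor registers touch, consider a minimal sketch of capacitive sensing: bending or pressing a printed capacitor shifts its capacitance, and software flags readings that drift past a calibrated baseline. The reading function, baseline, and threshold below are simulated assumptions for illustration, not Shepherd’s implementation.

```python
# Minimal sketch of capacitive touch sensing: deforming a soft, printed
# capacitor shifts its capacitance, and software detects contact as a
# deviation from a calibrated baseline. All readings here are simulated.
import random

BASELINE_PF = 10.0  # assumed resting capacitance, in picofarads

def read_capacitance_pf(pressed: bool) -> float:
    """Simulated read; real hardware would sample an ADC instead."""
    return BASELINE_PF + (1.5 if pressed else 0.0) + random.gauss(0.0, 0.02)

def is_touched(reading_pf: float, baseline_pf: float,
               threshold_pf: float = 0.5) -> bool:
    """Flag contact when a reading drifts past the baseline by a threshold."""
    return abs(reading_pf - baseline_pf) > threshold_pf

# Calibrate on unloaded readings, then test a press.
baseline = sum(read_capacitance_pf(False) for _ in range(100)) / 100
print(is_touched(read_capacitance_pf(True), baseline))   # True
print(is_touched(read_capacitance_pf(False), baseline))  # False
```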
Building sensors that resemble the ones in our fingers seems almost like science fiction. Nonetheless, some of the sensors that end up in laboratory robotics in the future might come from equally surprising places.
The sense of sight
“Vision processing is a hot topic for lab robotics,” says Kynan Eng, cofounder and president of Switzerland-based iniLabs. “Ongoing advances in sensor performance, algorithms, and computer performance have opened up new possibilities for automation to achieve increased experimental throughput and greater adaptability of lab equipment.”
When thinking about how to control robotics, some sort of imaging is likely among the first sensing ideas that come to mind. But anyone who knows even a bit about how an eye works and about the associated neural image processing (at least the parts that are known) is unlikely to suggest mimicking biology. Instead, collecting a series of images with a camera might sound simpler, but it just moves the trouble spots. “A major challenge in the field is in dealing with the huge amounts of data that are generated by modern high-resolution, high-speed vision sensors,” Eng explains.
To battle these challenges, researchers at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich developed the dynamic vision sensor (DVS), which Eng describes as “the first fundamental change in how computer vision is done since the invention of the camera.” Conventional technologies capture images as a series of frames, but consecutive frames contain much of the same information, which eats up memory, processing power, and time. Frames also apply the same exposure to every pixel, which reduces image quality in very dark or very bright areas.
Instead, the DVS actually mimics some of the methods eyes use for vision. For example, it works fully asynchronously, without frames, and processes only pixel-level changes, which are created by movement in the scene being captured. That, Eng explains, “allows the sensors to provide data at microsecond time resolution, and that is as good or better than conventional high-speed vision sensors running at thousands of frames per second.” In addition to providing better temporal resolution, the sparse DVS data stream requires far less storage and computing power. Eng adds that the sensor’s “dynamic range is increased by orders of magnitude due to the local processing.”
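The core idea can be captured in a few lines of code. The following sketch simulates event generation in software by comparing two frames and emitting an event only where a pixel’s log intensity changes past a contrast threshold; the real sensor does this asynchronously in circuitry at each pixel, so treat this purely as an illustration of the principle.

```python
# Conceptual sketch of event-based vision: emit an event only where a
# pixel's log intensity changes by more than a contrast threshold, rather
# than transmitting full frames. A software illustration of the principle,
# not how the DVS hardware is actually built.
import numpy as np

def frames_to_events(prev, curr, threshold=0.15, t=0.0):
    """Return (t, x, y, polarity) events where log intensity changed enough."""
    diff = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(t, int(x), int(y), 1 if diff[y, x] > 0 else -1)
            for x, y in zip(xs, ys)]

# A static scene produces no events; only the changed pixel does.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200  # something bright moved into this pixel
print(frames_to_events(prev, curr))  # [(0.0, 2, 1, 1)]
```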
iniLabs recommends this sensor for various applications, including real-time robotics. In a case where a robot requires visual input and fast reactions, the DVS makes a great choice. It works even better where space, power, and weight matter, because it requires much less of all three than traditional imaging solutions do. In addition, the DVS can process the image on the same board that includes the sensor.
Linking LiDAR to labs
In some cases, sensors for laboratory robotics start in other areas. In Morgan Hill, CA, scientists at Velodyne LiDAR use light detection and ranging (LiDAR) to create sensors for various applications, including autonomous cars, where LiDAR is used to navigate streets and highways. “It’s also been used to guide manipulator arms or actuators in manufacturing,” says Jeff Wuendry, marketing manager at Velodyne.
The 3-D LiDAR sensors built 10 years ago were specialized, bulky, and prohibitively expensive. The key to Velodyne’s technology is size. “The unique part is miniaturizing a lot of the subcomponents to make the sensor smaller and smaller,” Wuendry explains. The company, working with Efficient Power Conversion (El Segundo, CA), did that by creating a solid-state LiDAR sensor from a monolithic gallium nitride integrated circuit (IC) that measures just 4 square millimeters. The smaller sensor size and the trend toward a solid-state design make 3-D LiDAR sensing feasible for new applications.
Beyond manufacturing applications, this device could be used in drones, where the small size of the IC-based sensor and the lack of moving parts make it an effective choice.
In corporate literature about this advance, Anand Gopalan, vice president of research and development at Velodyne LiDAR, stated, “This technology really opens the door to miniaturization and gives Velodyne the ability to build LiDARs in various form factors for many diverse applications. We will soon have a portfolio of integrated circuits to address various aspects of LiDAR functionality, paving the way to a whole new generation of reliable, miniaturized, and cost-competitive LiDAR products.”
This technology relies on 3-D laser scanning. Laser diodes on the IC, Wuendry explains, “provide depth perception, like your eyes do.” Multiple laser diodes spinning at 20 hertz collect data on objects 360 degrees around the IC and 30 degrees up and down, all out to a distance of 200 meters. To do that, the device processes 300,000 to 1 million data points per second. That is a lot of data, but newer controller algorithms are designed to reduce the need to use all of it all the time: the controller can focus on data points that change and ignore those that stay static.
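A toy sketch makes the data-reduction idea concrete: compare two successive scans, keep the beams whose measured range changed, and discard static returns. This is an illustration of the concept, not Velodyne’s actual algorithm, and the tolerance value is an assumption.

```python
# Toy sketch of change-focused LiDAR processing: compare successive scans
# and keep only the beams whose range changed, discarding static returns.
# An illustration of the concept, not Velodyne's algorithm.
import numpy as np

def changed_points(prev_ranges, curr_ranges, tolerance_m=0.05):
    """Return indices and new ranges of beams that moved past a tolerance,
    given two per-beam range arrays (in meters) from successive scans."""
    moved = np.abs(curr_ranges - prev_ranges) > tolerance_m
    return np.nonzero(moved)[0], curr_ranges[moved]

# Simulated scans of 300,000 beams in which only beam 42 sees movement.
prev = np.full(300_000, 10.0)
curr = prev.copy()
curr[42] = 9.2  # an object moved 0.8 m closer on this beam
idx, ranges = changed_points(prev, curr)
print(idx, ranges)  # [42] [9.2]
```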
The information collected by the IC goes to a controller for off-chip processing. One available version of this technology, called the Puck, is about the size of two hockey pucks stacked on top of each other.
The miniaturization of this sensor plus advances in its capabilities expand the likely applications. “As the sensor becomes smaller and more powerful, you can put it in more locations,” Wuendry says. “Making it less expensive will also lead to more possible uses.” Some of those uses will probably be in tomorrow’s lab robotics. Certainly scientists who want to collect data from drones could use this technology today. The simplicity of an IC-based sensor that is small and collects data from a wide range could make it useful in many field studies.
Fine-tuning techniques
Rather than flying over sites to collect data, most lab robotics need finer control to move samples and instruments. As with other advances, improvements here could come from unexpected places, and one of those is industry. Hungary-based OptoForce, for instance, makes a six-axis force/torque sensor.
“Our sensor gives the sense of touch to robots so more tasks can be automated and time can be saved,” says Nora Bereczki, marketing manager at OptoForce. “With our sensor, the robot will be more precise and human touch-needed tasks can be automated.”
In labs, that touch is often required. In fact, some lab applications could use robots working as a team. “What we see is that collaborative robots are gaining space during manufacturing,” Bereczki says. “There is a big market need for this kind of solution.” As examples, she mentions repetitive, monotonous human tasks. The same kinds of needs exist in labs, and the OptoForce sensor could eventually help scientists too. In addition, the OptoForce sensor provides high resolution (it is sensitive to just 0.1 newtons) and is robust. “Even if the sensor falls down on the floor, it won’t break,” Bereczki notes.
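One way a lab robot might exploit that sensitivity is guarded motion: advance toward a sample until the measured contact force crosses a small threshold, then stop. The sketch below uses a hypothetical reading interface and made-up step sizes and thresholds; it is not OptoForce’s API.

```python
# Hedged sketch of guarded motion with a six-axis force/torque sensor:
# step forward until contact force exceeds a small threshold, exploiting
# resolution on the order of 0.1 N. The Wrench type, reading interface,
# and thresholds are assumptions, not OptoForce's actual API.
import math
from typing import Callable, NamedTuple

class Wrench(NamedTuple):
    fx: float  # force components, newtons
    fy: float
    fz: float
    tx: float  # torque components, newton-meters
    ty: float
    tz: float

def contact_force(w: Wrench) -> float:
    """Magnitude of the force vector, in newtons."""
    return math.sqrt(w.fx ** 2 + w.fy ** 2 + w.fz ** 2)

def guarded_approach(read_wrench: Callable[[], Wrench],
                     step_mm: float = 0.1,
                     stop_n: float = 0.2,
                     max_steps: int = 1000) -> float:
    """Advance one step at a time until contact force exceeds stop_n."""
    for step in range(max_steps):
        if contact_force(read_wrench()) > stop_n:
            return step * step_mm  # distance traveled before contact
    raise RuntimeError("no contact detected within travel limit")

# Simulated sensor: free space for 50 reads, then 0.5 N of contact.
readings = iter([Wrench(0, 0, 0, 0, 0, 0)] * 50 + [Wrench(0, 0, 0.5, 0, 0, 0)])
print(guarded_approach(lambda: next(readings)), "mm")  # 5.0 mm
```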
The fine touch, sensitivity, and robust build of this sensor make it a valuable option in lab robotics. As Bereczki says, “We are developing applications continuously according to the market needs.” Although most of those needs come from industry today, the expanding use of robotics in scientific labs could be one of tomorrow’s key applications of this technology.
Watching a robotic hand sort tomatoes or seeing a machine find and move pallets in an industrial plant might not fit your idea of how to automate your lab, but keep an open mind. You never know when a piece of technology will improve just enough to fit a specific sensing need in your lab. A robotic hand that can locate and lift a tomato can gingerly move labware. If you work with hazardous samples or reagents, even more applications of these sensors could keep your team safe as machines take over some once-human tasks.