Two informatics fields that are currently undergoing rapid evolutionary development are artificial intelligence (AI) and the Internet of Things (IoT). Because AI and IoT are complementary fields, with the synergy between them greatly enhancing the capabilities of each, they do not compete for resources in the way one might expect. The key to understanding this is to realize that AI functions best with vast amounts of data, while IoT devices are ideal sources for supplying the required information streams.

Artificial intelligence

AI is an umbrella term that is frequently used but often misunderstood. It is also a term that many people are reluctant to use, in part because past AI revolutions were vastly overhyped. AI systems are frequently categorized by their capabilities:

  • Type I—Reactive AI: This is one of the most common and basic types of AI. It performs very well in the specific field that it is trained in. IBM’s Deep Blue is an instance of this AI type.
  • Type II—Limited Memory AI: In this type of AI, the limited memory refers to the limited retention time of memory. Think of it like the short-term working memory in a human. This is the type of AI employed in autonomous vehicles, as the stored data provides a reference point against which the vehicle can be controlled.
  • Type III—Theory of Mind AI: Reflecting what cognitive scientists refer to as a “theory of mind,” this type of AI can not only form representations regarding what it senses, but it can also recognize that other entities, such as people, will have their own contrasting representations.
  • Type IV—Self-Aware AI: This is an extension of Type III AI to the point that it is self-aware. The android Commander Data from Star Trek: The Next Generation could be considered an example of this AI type.
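The "limited memory" idea behind Type II AI can be illustrated with a rolling buffer that automatically forgets old observations; the class and the readings below are invented purely for illustration:

```python
from collections import deque

class LimitedMemorySensor:
    """Keeps only the N most recent observations, loosely analogous to the
    short-term working memory of a Type II AI (names are illustrative)."""

    def __init__(self, window=5):
        self.window = deque(maxlen=window)  # older readings fall off automatically

    def observe(self, reading):
        self.window.append(reading)

    def trend(self):
        # Compare the newest reading against the oldest one still remembered
        if len(self.window) < 2:
            return 0.0
        return self.window[-1] - self.window[0]

sensor = LimitedMemorySensor(window=3)
for speed in [10.0, 12.0, 15.0, 18.0]:
    sensor.observe(speed)
print(sensor.trend())  # only the last 3 readings (12, 15, 18) remain, so 6.0
```

The key design choice is `deque(maxlen=...)`: retention is bounded by construction, so stale data is discarded without any explicit cleanup.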


When most people inquire about the type of AI in a system, what they are really wondering is “how does it work?” In most of those instances, they are probably dealing with a Type I AI, and a rough explanation would be based on what approach or algorithm [1] it uses. Some of the most common approaches are:

  • Machine Learning: An iterative procedure that uses one of a variety of algorithms to automate the building of an analytical model.

  • Deep Learning: A form of machine learning that uses multiple layers of a selected algorithm to model highly abstract data. Each processing layer is responsible for extracting a single feature, then feeding the information to the layer above it, with the top layer being a classification layer.
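The layered arrangement described above can be sketched as a tiny forward pass: a lower layer extracts features and feeds them to a classification layer on top. The weights and layer sizes here are purely illustrative, not a trained model:

```python
import math

def dense(inputs, weights, bias):
    """One fully connected layer: each output is a weighted sum of inputs."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, bias)]

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    """Turn the top layer's scores into class probabilities."""
    exps = [math.exp(x) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

# Toy weights (illustrative only): 3 inputs -> 2 hidden features -> 2 classes
w1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]
w2 = [[1.0, -1.0], [-1.0, 1.0]]
b2 = [0.0, 0.0]

x = [1.0, 2.0, 0.5]
hidden = relu(dense(x, w1, b1))          # lower layer extracts features
probs = softmax(dense(hidden, w2, b2))   # top layer classifies
print(probs)
```

A real deep network would stack many such layers and learn the weights from data; the structure, however, is exactly this feed-upward chain.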

Internet of Things

In somewhat simplistic terms, the IoT consists of all devices connected to the internet. Various estimates have projected that by 2020 there would be 50 billion connected IoT devices.

Because this definition is so broad, you will sometimes hear IoT referred to as a field rather than as a specific topic. One downside of it being an umbrella term, like AI, is that it has become something of a buzzword that is frequently included in marketing hype, whether appropriate or not. Many have attempted to devise a more specific definition, but this has only produced a plethora of competing, often context-sensitive definitions. Even a partial list of them would be longer than this entire article.

The complexity of a device has nothing to do with whether it belongs to the IoT. The device could be something as simple as a thermometer or a float switch, or as complex as a Tesla electric car or a gas chromatograph. The critical factor is that it is connected to the internet, either directly or indirectly. A direct connection is fairly obvious: the device might connect via a standard Ethernet cable, Wi-Fi, or any other standard internet interface. Indirectly connected devices can be somewhat more enigmatic, in that the device might use a technology such as Bluetooth or Zigbee to connect to a gateway, which in turn is connected to the internet.


This concept can be extended further, as a specific device need not be directly connected to the gateway. Instead, it can traverse a local network composed of an arbitrary number of devices in order to link to the gateway. This latter situation is most commonly found in a mesh network of devices, where the connection can follow any arbitrary path through the mesh, a very useful characteristic in the event that one or more devices is damaged. Normally, the practical limit on the number of IoT devices in the mesh is set by how much transmission delay the application can tolerate: each device a message passes through adds delay, even before accounting for possible network collisions between devices or at the gateway.
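As a rough sketch of this routing idea (the device names, topology, and per-hop delay are all assumptions for illustration), a breadth-first search finds the fewest-hop route through the mesh to the gateway, and the hop count bounds the cumulative transmission delay:

```python
from collections import deque

def hops_to_gateway(links, start, gateway):
    """Breadth-first search for the fewest hops from a device to the gateway."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == gateway:
            return hops
        for neighbor in links.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return None  # no route remains, e.g. all intermediate devices damaged

# A small illustrative mesh: sensor A can reach the gateway via B, or via C and D
mesh = {
    "A": ["B", "C"],
    "B": ["gateway"],
    "C": ["D"],
    "D": ["gateway"],
}
PER_HOP_DELAY_MS = 4  # assumed forwarding delay per device, for illustration
hops = hops_to_gateway(mesh, "A", "gateway")
print(hops, hops * PER_HOP_DELAY_MS)  # 2 hops, so 8 ms, ignoring collisions
```

If device B were removed from the table, the search would still find the longer A-C-D-gateway route, which is precisely the resilience property the mesh provides, at the cost of one extra hop of delay.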

Depending on how our arbitrary IoT device is engineered, it may be powered from standard line voltage or a battery; in some instances, it may not have an attached power supply at all. With the advances made in low-power processors and other electronics, it is quite feasible to design an IoT device that harvests energy [2] from its environment, collecting enough to power both the device and its communication interface.

The synergy of it all

It is by combining AI with IoT that we observe a multiplier effect, allowing these technologies to display capabilities that neither could exhibit on its own. There are two primary ways of accomplishing this. Currently, the most common approach is to install appropriate sensors in the IoT device and use them to provide a data stream back through the internet for processing on a remote AI system. Depending on what you are trying to monitor, you might have single or multiple data streams from one sensor type or a variety of sensor types. As the processors and memory within IoT devices become more capable, we are already seeing this data processing migrate onto the devices themselves.
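A minimal sketch of the first arrangement, packaging one device's readings for upload to a remote AI system; the device name, field names, and units are hypothetical:

```python
import json
import time

def sensor_payload(device_id, readings):
    """Bundle a device's latest readings into a JSON message suitable for
    sending to a remote analysis service (schema is illustrative)."""
    return json.dumps({
        "device": device_id,
        "timestamp": time.time(),
        "readings": readings,  # one or many sensor types per device
    })

# A single device reporting two sensor types at once
payload = sensor_payload("reactor-7", {"temp_c": 78.2, "flow_lpm": 3.4})
print(payload)
```

In practice such payloads would be sent over a protocol like MQTT or HTTPS; the point here is only that one device can multiplex several sensor streams into a single message for remote processing.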

There are a number of reasons to perform this migration. One is to reduce network traffic, as an IoT device can generate a prodigious amount of data; what we currently think of as “big data” will seem minuscule in comparison to the data streaming from all of the IoT devices being monitored. Another justification for migrating the processing to the IoT device is that in many instances the value of the data is extremely transient. In other words, the data must be processed immediately or its value drops to nothing. A good example is when the extracted data is used in a process control loop. In a continuous flow reactor, feedback must be applied continuously to optimize the quality of the product. Any significant delay, which in some systems may be seconds or less, results in either an inferior or low-yield product, or worse, a runaway exothermic reaction.
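To illustrate why fresh data matters in a control loop, here is a toy proportional controller holding a reactor temperature near a setpoint. The gain and the "self-heating" term are invented for illustration and do not model a real reactor:

```python
def proportional_control(setpoint, reading, gain=0.5):
    """Return a correction proportional to the error between the setpoint
    and the latest sensor reading (gain is an illustrative assumption)."""
    return gain * (setpoint - reading)

temperature = 80.0   # current reactor temperature, degrees C (hypothetical)
SETPOINT = 75.0

for step in range(5):
    # Each cycle must use the newest reading; a stale reading would
    # apply yesterday's correction to today's reactor state.
    correction = proportional_control(SETPOINT, temperature)
    temperature += correction + 0.5  # 0.5 C/step of assumed reaction self-heating
    print(f"step {step}: {temperature:.2f} C")
```

With the feedback applied every step, the temperature settles just above the setpoint; if the loop were delayed, each correction would be computed against a state the reactor has already left, and the error would grow instead of shrink.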

By installing IoT devices to monitor all reactor conditions that could affect the process, such as temperatures, pressures, and flow rates, the AI system can be used to optimize the product yield. At the scale at which many industrial processes operate, even a fraction of a percent improvement in yield could produce a significant financial return.


AI can be applied to the analysis side in a laboratory as well. A number of instruments that take advantage of the power of AI are already on the market. You can find gas chromatographs, infrared spectrometers, Raman spectrometers, and other instruments that include AI in their control software. This makes the machines much “smarter” when it comes to analyzing the data being collected and, in all but rare cases, eliminates the need for specialists to run the machines and analyze the data. Another significant use of AI is in multi-omics, where it can analyze data from gas chromatograph mass spectrometers, liquid chromatograph mass spectrometers, or other instrument combinations to visualize the enormous volume of data produced in proteomics, metabolomics, and flux analysis testing. While various vendors have added their own extensions, called gadgets, an open source software platform for connecting data sources, analysis packages, and viewers, called the Garuda Platform, is available from the Garuda Alliance (www.garuda-alliance.org).

An AI system could also be used to track the movement of personnel into, out of, and around the lab. Note that the goal here is not to micromanage your personnel but rather to understand, ergonomically, how people move through the building. This information could guide changes to the arrangement of the lab and the placement of offices, or could even be used to design an entirely new laboratory.

Combining the umbrellas of IoT and AI shows how one can accelerate the analysis of complex data without the constant need for a field expert, while processing massive amounts of experimental data to extract meaning and provide multiple ways of visualizing it. The complement is that it can also supply data for optimal laboratory design, as well as afford more effective control of the laboratory environment.

References

1. Alam, F., Mehmood, R., Katib, I. & Albeshri, A. Analysis of Eight Data Mining Algorithms for Smarter Internet of Things (IoT). Procedia Computer Science 98, 437–442 (2016).

2. Damien, B. 11 Myths About Energy Harvesting. Electronic Design (2019). Available at: https://www.electronicdesign.com/power/11-myths-about-energy-harvesting. (Accessed: 21st April 2019)