Problem: A common problem for many scientists, especially those working in molecular biology, is the vast amount of data generated by their experiments. With such a large volume of data to consider, it is often impossible to derive any real biological meaning from the findings by eye alone, so sophisticated data-analysis algorithms are needed for researchers to interpret their data effectively.
Until now, computer software designed for this purpose has focused on handling increasingly vast amounts of data. As a result, the role of the scientist has partly been set aside, and much of the data analysis is now performed by specialist bioinformaticians and biostatisticians. This model has a clear drawback, however: it is typically the scientist who knows the most about the specific area being studied.
Solution: Even though exploring and analyzing large data sets can be challenging, the active use of visualization techniques provides a powerful way of identifying important structures and patterns very quickly. Visualization gives the user instant feedback, with results presenting themselves as they are generated.
We recommend a five-step method to ensure repeatable and significant results when using visualization. By applying this five-step method, it is possible to investigate large and complex data sets without being a statistics expert. The method is described in more detail below, but some basics need to be in place at the start.
First of all, the high-dimensional data needs to be reduced to lower dimensions so that it can be plotted in 3D. We recommend Principal Component Analysis (PCA) for this purpose. Tools to color the data to enhance the information are also required, as are filters and tools to select and deselect parts of the data set.
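The dimension reduction step can be sketched as follows. This is a minimal, illustrative implementation of PCA via singular value decomposition, not Qlucore's actual code; the synthetic data stands in for a real experiment where rows would be samples and columns would be variables (e.g. genes).

```python
import numpy as np

def pca_project(X, n_components=3):
    """Project samples onto the top principal components for 3D plotting."""
    Xc = X - X.mean(axis=0)            # center each variable
    # SVD of the centered data gives the principal axes directly.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T  # 3D coordinates for each sample
    var_explained = (S**2) / (S**2).sum()
    return scores, var_explained[:n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1000))        # 50 samples, 1000 variables
scores, var3 = pca_project(X)
print(scores.shape)                    # (50, 3): ready for a 3D plot
```

Each sample is now a point in 3D, and `var3` reports how much of the total variance those three axes capture, which matters for step two below.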
At this stage, researchers can begin the five-step visualization process by detecting and removing the strongest signal present in the active dataset. Once this signal is identified, it can be removed in order to see whether any obscured (but still detectable) signals are present. Removing a strong signal usually reduces the number of active samples, the number of active variables, or both.
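One way to picture "removing the strongest signal" is deflation: once the dominant pattern is identified (here, taken to be the first principal component, an assumption for illustration), each sample's projection onto it is subtracted, so that weaker, previously obscured structure can surface.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data with one strong planted signal plus noise.
strong = rng.normal(size=(60, 1))       # 60 samples
loading = rng.normal(size=(1, 200))     # 200 variables
X = strong @ loading * 5 + rng.normal(size=(60, 200))

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]                             # direction of the strongest signal

# Deflate: subtract each sample's projection onto the first component.
X_deflated = Xc - np.outer(Xc @ pc1, pc1)

# The dominant signal is gone: variance along pc1 is now essentially zero.
print(float(np.var(X_deflated @ pc1)))
```

A fresh PCA on `X_deflated` would then reveal whatever structure the strong signal had been masking.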
Step two of the visualization process is to assess the signal-to-noise ratio in the data by using PCA and randomization. The strength of a visually detected signal or pattern is measured by examining the amount of variance captured in the 3D PCA plot. This captured variance is compared with what the researcher would expect to capture if the real variables were all replaced by random variables, giving a clear indication of the reliability of the identified pattern.
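The randomization idea can be illustrated as follows. In this sketch (one possible randomization scheme, not necessarily the exact one used by the tool), each variable is independently permuted across samples, which keeps every variable's own distribution but destroys any shared structure; the variance captured by the top principal component on the real data is then compared with this null distribution.

```python
import numpy as np

def top_pc_variance_fraction(X):
    """Fraction of total variance captured by the first principal component."""
    Xc = X - X.mean(axis=0)
    S = np.linalg.svd(Xc, compute_uv=False)
    return float(S[0]**2 / (S**2).sum())

rng = np.random.default_rng(2)
# Data with two planted sample groups (a real signal).
X = rng.normal(size=(40, 300))
X[:20] += 2.0                           # shift half the samples

real = top_pc_variance_fraction(X)
# Null distribution: permute each variable independently across samples.
null = []
for _ in range(20):
    Xp = np.apply_along_axis(rng.permutation, 0, X)
    null.append(top_pc_variance_fraction(Xp))

print(real, max(null))  # the real signal captures far more variance
```

When the real captured variance clearly exceeds anything seen under randomization, the pattern is unlikely to be a chance artifact.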
Step three is to remove any “noise” by variance filtering. If researchers can see a significant signal-to-noise ratio in their active dataset, they should try to remove some of the active variables that are likely contributing to the noise.
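Variance filtering can be sketched in a few lines: variables whose variance falls below a threshold are dropped, on the assumption that near-constant variables mostly contribute noise. The threshold and data here are illustrative, not recommended defaults.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_vars = 30, 500
X = rng.normal(scale=0.1, size=(n_samples, n_vars))        # low-variance noise
X[:, :50] += rng.normal(scale=2.0, size=(n_samples, 50))   # 50 informative variables

variances = X.var(axis=0)               # per-variable variance across samples
threshold = 1.0                         # illustrative cutoff
keep = variances > threshold
X_filtered = X[:, keep]

print(X_filtered.shape)                 # roughly (30, 50): noise variables removed
```

Reducing the active variables this way sharpens the PCA plot, since the remaining variance is concentrated in variables that actually vary between samples.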
Step four offers the option of performing statistical tests, which can be applied at any or all of the other stages of the five-step process: during the initial analysis, when a step is repeated, at the end of a step, or not at all.
The final step uses graphs to refine the search for subgroups or clusters. Connecting samples in networks or graphs, for example, makes it possible to move into higher dimensions (i.e. more than three), since a graph created in a sample plot is based on the distances in the space of all active variables, and can therefore provide more insight into the structure of the data. These five steps are then repeated until no more structures can be found. Used in this way, visualization can be a powerful tool for researchers: if data can be visualized clearly, scientists can identify results easily, on their own, without having to rely on specialist bioinformaticians and biostatisticians.
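The graph idea above can be sketched by connecting each sample to its nearest neighbours using distances computed in the space of all active variables, not just the three plotted PCA dimensions. The choice of k-nearest-neighbour edges and the synthetic two-cluster data are illustrative assumptions.

```python
import numpy as np

def knn_edges(X, k=3):
    """Map each sample to its k nearest neighbours in full variable space."""
    # Pairwise Euclidean distances over ALL variables.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)         # a sample is not its own neighbour
    return {i: list(np.argsort(d[i])[:k]) for i in range(len(X))}

rng = np.random.default_rng(5)
# Two clusters of 10 samples each in a 100-variable space.
X = np.vstack([rng.normal(0, 1, (10, 100)), rng.normal(4, 1, (10, 100))])
edges = knn_edges(X)

# Neighbours of sample 0 should all come from the first cluster (indices 0-9),
# even if the clusters happened to overlap in a 3D projection.
print(edges[0])
```

Drawing these edges on top of the 3D sample plot reveals clusters defined by the full high-dimensional distances, which is what lets the graph "see" beyond three dimensions.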
For more information, visit www.qlucore.com.