Data soup and the art of finding relevance: Why AIOps isn’t enough for modern network monitoring

“Plastic soup” is a term that has been used to describe the pollution plaguing our oceans. The phrase was coined by Captain Charles Moore in 1997, when he came across massive amounts of plastic floating in the middle of the ocean; his accounts of the experience helped raise awareness of the scope and severity of the problem.

We know we need to get plastic out of our environment, but that is easier said than done. About 71% of the Earth’s surface is covered by water, and it is extremely costly and difficult to find and extract plastic once it makes its way into rivers, lakes, and oceans. That is why more and more initiatives have started to focus on solving the problem at the root: reducing the amount of plastic used and finding ways to keep it out of the water in the first place.

Data soup

So, why all this talk about plastic soup in an IT blog? Well, many of today’s IT teams pursue a strategy of deploying AIOps tools and creating oceans of data across the whole IT stack, from the network to the application layer. The hope is that teams can then more easily solve business service issues by finding and correlating anomalies across the end-to-end stack. The problem is that, just like retrieving plastic from the ocean, solving IT issues this way can be very difficult and costly.

Why transport all this data and then try to troubleshoot using these massive data sets? Doesn’t it make more sense to filter down to the relevant details while working in the domain that’s experiencing the issue? Most incidents can be discovered and solved at the root, in their own environment; only the event data that is relevant for cross-domain correlation needs to be transported to the AIOps data ocean.
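To make this filter-at-the-source idea concrete, here is a minimal sketch of a domain-level monitor that handles most events locally and forwards only those flagged as relevant for cross-domain correlation. All names here (DomainEvent, forward_to_aiops, the severity threshold) are illustrative assumptions, not references to any particular product’s API.

```python
from dataclasses import dataclass

# Hypothetical event record produced by a domain-specific monitoring tool.
@dataclass
class DomainEvent:
    source: str         # e.g., "core-rtr-01"
    severity: int       # 1 (info) .. 5 (critical)
    cross_domain: bool  # could this event affect other layers of the stack?
    detail: str

def handle_locally(event: DomainEvent) -> None:
    # Most incidents are investigated and resolved inside the domain tool.
    print(f"[domain] handling {event.source}: {event.detail}")

def forward_to_aiops(event: DomainEvent) -> None:
    # Stand-in for publishing to a central AIOps platform.
    print(f"[aiops]  correlating {event.source}: {event.detail}")

def process(events: list[DomainEvent], min_severity: int = 4) -> None:
    for event in events:
        handle_locally(event)
        # Only a small, relevant subset ever leaves the domain.
        if event.cross_domain and event.severity >= min_severity:
            forward_to_aiops(event)

process([
    DomainEvent("switch-17", 2, False, "port flap, auto-recovered"),
    DomainEvent("core-rtr-01", 5, True, "BGP session down, app latency rising"),
])
```

In this sketch, the central platform receives one event instead of the domain’s entire telemetry stream, which is the essence of publishing only correlation-relevant data.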

Actionable domain data

I often compare this smart, domain-specific approach to the systems used in a self-driving car. An autonomous car uses radar (radio waves), LiDAR (laser detection), ultrasonic sensors (sound), and cameras (vision). These systems generate data continuously, which is processed, correlated, and acted on within the domain of the car. During normal operation, such as driving and parking, data remains in the vehicle. However, if the car comes to a sudden stop, a subset of relevant data can be broadcast to other cars or (worst case) to emergency response teams. These vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications are perfect examples of the power of publishing a limited amount of relevant data to the broader data ocean for further action.

The applicability to network operations management

In light of this illustration, why would you send all network monitoring data to a central data ocean? Doing so means incurring all the effort and cost of creating and maintaining a huge data infrastructure: normalizing, migrating, and integrating the data, and contending with the growing complexity of governing it all. Finding issues becomes far more complex, time-consuming, and costly, and AI and machine learning can only do so much.

That’s why it’s better to use advanced, domain-specific network monitoring tools that can provide end-to-end coverage of modern, multi-vendor networks. Offering sophisticated capabilities such as advanced noise reduction, workflows, and automation, these tools enable intelligent, efficient investigation, analysis, troubleshooting, and resolution. They deliver actionable insights into user experience as well as network fault, performance, and flow data. And fundamentally, they publish only relevant data to the data ocean, which is simpler, more efficient, and less expensive.
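As a concrete illustration of one noise-reduction technique, the sketch below suppresses duplicate alarms so that a flapping interface raises a single incident rather than hundreds. The alarm format and the five-minute window are illustrative assumptions, not any specific tool’s behavior.

```python
# Minimal duplicate-alarm suppression: keep an alarm only if the same
# device/type pair has been quiet for at least `window_seconds`.
def deduplicate(alarms: list[dict], window_seconds: int = 300) -> list[dict]:
    last_seen: dict[tuple, int] = {}
    kept = []
    for alarm in sorted(alarms, key=lambda a: a["timestamp"]):
        key = (alarm["device"], alarm["type"])
        previous = last_seen.get(key)
        if previous is None or alarm["timestamp"] - previous >= window_seconds:
            kept.append(alarm)
        # Updating on every alarm means a continuously flapping device
        # stays suppressed until it has been quiet for a full window.
        last_seen[key] = alarm["timestamp"]
    return kept

alarms = [
    {"device": "edge-09", "type": "link-down", "timestamp": 0},
    {"device": "edge-09", "type": "link-down", "timestamp": 30},   # suppressed
    {"device": "edge-09", "type": "link-down", "timestamp": 400},  # kept
]
print(deduplicate(alarms))  # two alarms survive instead of three
```

Real products combine techniques like this with topology awareness and root-cause analysis, but the net effect is the same: far fewer, far more relevant events reach the operator and the central data ocean.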

To find out how DX NetOps by Broadcom is enabling network operations teams to realize these advantages, be sure to visit our network monitoring page.
