In today's competitive market, digital businesses such as fintech, ad tech, and media companies are always on the lookout for the next big thing to help streamline their business processes. These businesses constantly generate new data and often have systems and people in place to monitor what is going on. For example, within one company you might find an IT group monitoring network performance, a product manager watching page response time and user experience, and marketing analysts tracking conversions per campaign and other KPIs. It is no secret that anomalies in one area often affect performance in other areas, but that association is difficult to make if the departments operate independently of one another. In addition, most of the available tools for this type of monitoring look at what has happened in the past, so there is a built-in delay between when something important happens and when it may (or may not) be discovered via the monitoring process.
Ever since the rise of big data, enterprises of all sizes have been in a state of uncertainty. Today we have more data available than ever before, but few organizations have been able to implement the procedures needed to turn this data into insights; to the human eye, there is simply too much data to process. Unmanageable datasets have become a problem as organizations need to make faster decisions in real time. Tim Keary looks at anomaly detection in this first of a series of articles.
In our previous post, we explained what time series data is and provided some details on how the Anodot time series anomaly detection system is able to spot anomalies in time series data. We also discussed the importance of choosing a model of a metric's normal behavior that captures any seasonal patterns in the metric, and the specific algorithm Anodot uses to find seasonal patterns. At the end of that post, we noted that it is possible to get a sense of the bigger picture from many individual anomalies. Conciseness is a requirement of any large-scale anomaly detection system, because monitoring millions of metrics is guaranteed to generate a flood of reported anomalies even if there are zero false positives. Achieving conciseness in this context is analogous to distilling many individual symptoms into a single diagnosis, in much the same way that a mechanic might diagnose a car problem by observing the pitch, volume, and duration of all the sounds it makes, in addition to watching all the dials and indicator lights on the dashboard.
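Anodot's actual models are proprietary, but the idea of scoring a point against a seasonal baseline of "normal behavior" can be illustrated with a minimal sketch. The function below (its name, threshold, and robust-statistics choice are ours, not Anodot's) estimates a per-phase baseline for a cyclic metric using the median and MAD, then flags points that deviate far from the baseline for their position in the cycle:

```python
import numpy as np

def seasonal_anomalies(series, period, z_thresh=5.0):
    """Flag indices that deviate strongly from a per-phase seasonal baseline.

    For each position within the seasonal cycle (e.g. hour of day), the
    normal level is estimated with the median and the spread with the MAD
    (median absolute deviation), both of which are robust to the
    anomalies themselves.
    """
    series = np.asarray(series, dtype=float)
    phases = np.arange(series.size) % period
    flagged = []
    for p in range(period):
        idx = np.where(phases == p)[0]
        vals = series[idx]
        center = np.median(vals)
        # 1.4826 * MAD approximates the standard deviation for Gaussian noise.
        spread = 1.4826 * np.median(np.abs(vals - center))
        if spread == 0:
            continue
        for i in idx:
            if abs(series[i] - center) / spread > z_thresh:
                flagged.append(int(i))
    return sorted(flagged)

# Two weeks of hourly data with a daily cycle (period 24) and one injected spike.
rng = np.random.default_rng(0)
t = np.arange(24 * 14)
signal = 10 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)
signal[100] += 8  # inject an anomaly
print(seasonal_anomalies(signal, period=24))
```

Note that a naive global threshold would miss this spike or drown it in false alarms, because the daily swing of the sine component is larger than the spike itself; modeling the seasonality first is what makes the deviation stand out.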
Modern software applications are often composed of distributed microservices. Consider typical Software as a Service (SaaS) applications, which are accessed through web interfaces and run on the cloud. In part due to their physically distributed nature, managing and monitoring performance in these complex systems is becoming increasingly difficult. When issues such as performance degradations arise, it can be challenging to identify and debug the root causes. At Ericsson's Global AI Accelerator, we're exploring data-science-based monitoring solutions that can learn to identify and categorize anomalous system behavior, and thereby improve incident resolution times.
The digital revolution has changed the healthcare landscape irrevocably. Patients expect faster, more efficient care that costs less, which is where artificial intelligence (AI) can help. AI and machine learning allow healthcare organizations to evolve and keep up with trends and new methodologies. Data science enables systems to ingest massive quantities of information quickly and generate insights and predictions that allow healthcare organizations to focus human attention on what's really important: providing quality care. One technique that is essential for healthcare data teams, physicians, and insurance analysts to understand is anomaly detection.