Observability is a fast-growing concept in the Ops community that has caught fire in recent years, championed by major monitoring and logging companies and thought leaders such as Datadog, Splunk, New Relic, and Sumo Logic. It is often described as "Monitoring 2.0," but it is really much more than that. Observability allows engineers to understand whether a system is working as intended, based on a deep understanding of its internal state and the context in which it operates. It is the capability of monitoring and analyzing event logs, KPIs, and other data to yield actionable insights. An observability platform aggregates data in the three main telemetry formats (logs, metrics, and traces), processes it into events and KPI measurements, and uses that data to drive actionable insights into system security and performance.
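That pipeline (aggregate logs, metrics, and traces; compute KPIs; surface actionable insights) can be sketched as a toy in plain Python. This is an illustration only, not any vendor's API: the class names, the `error_rate` and `p95_latency_ms` KPIs, and the alert thresholds are all assumptions made for the example.

```python
# Toy observability aggregator: ingests the three telemetry formats
# (logs, metrics, traces), computes KPIs, and emits simple alerts.
from dataclasses import dataclass, field
from statistics import quantiles

@dataclass
class LogEvent:
    service: str
    level: str        # e.g. "INFO", "ERROR"
    message: str

@dataclass
class Metric:
    service: str
    name: str
    value: float

@dataclass
class Span:
    service: str
    operation: str
    duration_ms: float

@dataclass
class Aggregator:
    logs: list = field(default_factory=list)
    metrics: list = field(default_factory=list)
    spans: list = field(default_factory=list)

    def ingest(self, event):
        # Route each telemetry event to the store for its format.
        store = {LogEvent: self.logs, Metric: self.metrics, Span: self.spans}
        store[type(event)].append(event)

    def kpis(self, service):
        # Derive KPI measurements from raw logs and trace spans.
        errors = sum(1 for l in self.logs
                     if l.service == service and l.level == "ERROR")
        total = sum(1 for l in self.logs if l.service == service)
        durations = [s.duration_ms for s in self.spans if s.service == service]
        return {
            "error_rate": errors / total if total else 0.0,
            # statistics.quantiles(n=20) gives 19 cut points; the last is ~p95.
            "p95_latency_ms": quantiles(durations, n=20)[-1]
                              if len(durations) > 1 else None,
        }

    def insights(self, service, max_error_rate=0.05, max_p95_ms=500.0):
        # Turn KPI measurements into actionable alerts (thresholds are
        # illustrative defaults, not recommendations).
        k = self.kpis(service)
        alerts = []
        if k["error_rate"] > max_error_rate:
            alerts.append(f"{service}: error rate {k['error_rate']:.0%} "
                          f"exceeds {max_error_rate:.0%}")
        if k["p95_latency_ms"] and k["p95_latency_ms"] > max_p95_ms:
            alerts.append(f"{service}: p95 latency {k['p95_latency_ms']:.0f} ms "
                          f"exceeds {max_p95_ms:.0f} ms")
        return alerts
```

A real platform differs mainly in scale: ingestion is streamed rather than held in memory, and correlation across the three formats (for example, linking an ERROR log to the trace it occurred in) is what turns raw data into context.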
Monte Carlo wants to do for data what application performance management did for enterprise software uptime. The startup launched the Monte Carlo Data Observability Platform, which aims to prevent broken data pipelines. Monte Carlo CEO Barr Moses likened data to the new software for companies. "What New Relic does for microservices, Monte Carlo will do for data," she said. "Data is everything to strategic decisions."
As companies increasingly rely on data to power decision making and drive innovation, it's important that this data is timely, accurate, and reliable. In this article we introduce "Key Assets," a new approach taken by some of the best data teams to surface your most important data tables and reports for quick and reliable insights. Have you ever been three-quarters of the way through a data warehouse migration only to discover that you don't know which data assets are right and which are wrong? Is your analytics team lost in a sea of spreadsheets, with no life vests in sight? If you answered yes to either of these questions, you're not alone.
Within a year of acquiring SignalFx and Omnition, Splunk is doubling down on tracing and observability with the launch of the Splunk Observability Suite, which the company said will offer a tightly integrated combination of monitoring, investigation, and troubleshooting services for IT and DevOps teams. The suite leverages no-sample, full-fidelity streaming ingestion and machine learning capabilities to collect and correlate metric, trace, and log data in real time and at any scale, Splunk said. Specifically, the suite includes two new Splunk services: Log Observer and Real User Monitoring. Splunk Log Observer is meant to bring logging capabilities to site reliability engineers, DevOps teams, and developers, with out-of-the-box integrations with cloud and messaging services. The Splunk Real User Monitoring tool aims to extend Splunk's monitoring capabilities to help organizations understand and optimize the digital experiences of their end customers.
If you lead an IT, DevOps, or business operations team, you're probably working on a digital transformation and cloud migration strategy. You're also likely doing it with scarce resources, under the strain of shifting market needs and accelerating customer demands. The applications and services that enable these experiences are built on multicloud environments that promise faster innovation and better business outcomes. But these dynamic environments also bring a scale, complexity, and frequency of change that have grown beyond humans' capacity to manage. The approaches commonly used to monitor these environments, build applications, optimize performance, and run operations are no longer effective.