Why log analytics should be ‘metrics first’
Open source log file analytics specialist InfluxData insists that we should take a ‘metrics first’ approach to log analysis.
The company believes in a metrics first approach that provides developers with the means to ingest, correlate and visualise all time series data at three levels:
Data level one: data relating to technology infrastructure metrics including applications, databases, systems, containers etc.
Data level two: data from business metrics, including profit and loss and the other standard economic indicators a business monitors.
Data level three: log events. A log, in a computing context, is the automatically produced and time-stamped documentation of events relevant to a particular system; virtually all software applications and systems produce log files.
InfluxData’s technology is focused on the visualisation and analysis of structured application and system events captured via log files. By correlating business metrics to server and application metrics with structured logs, InfluxData claims to be able to provide more precise problem investigation and root-cause analysis capabilities.
The firm’s most recent software release expands functionality with new support for high-speed parsing and ingestion using the syslog protocol, custom log parsing and pre-built log visualisation components.
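To make the parsing side of that concrete, here is a minimal sketch of what a custom parser for syslog lines might look like, written in Go against the standard library only. It covers just the RFC 5424 message header, and the regular expression and field names are our own illustration, not anything from InfluxData's codebase; real collectors such as Telegraf's syslog input implement the full grammar.

```go
package main

import (
	"fmt"
	"regexp"
)

// Illustrative RFC 5424 header parser. Format:
// <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
var rfc5424 = regexp.MustCompile(
	`^<(\d{1,3})>1 (\S+) (\S+) (\S+) (\S+) (\S+) (?:-|\[.*\]) (.*)$`)

func parse(line string) (map[string]string, error) {
	m := rfc5424.FindStringSubmatch(line)
	if m == nil {
		return nil, fmt.Errorf("not an RFC 5424 message: %q", line)
	}
	return map[string]string{
		"priority":  m[1], // facility*8 + severity
		"timestamp": m[2],
		"hostname":  m[3],
		"appname":   m[4],
		"procid":    m[5],
		"msgid":     m[6],
		"message":   m[7],
	}, nil
}

func main() {
	fields, err := parse(`<34>1 2018-09-10T22:14:15.003Z web01 payments 4242 TX47 - transaction declined`)
	if err != nil {
		panic(err)
	}
	fmt.Println(fields["hostname"], fields["appname"], fields["message"])
}
```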
InfluxData founder and CTO Paul Dix says that each log message represents an event in time and that the same metadata that accompanies metrics can be used to pinpoint the valuable contextual information contained within log files.
“By starting with metrics and their associated metadata, operators and developers can understand where and how to interrogate the large volumes of event data contained within logs without performing expensive search queries. This reduces much of the guesswork and prior knowledge required to sift through log data that is typically present when using logs as the initial and primary source of anomaly detection,” said Dix.
InfluxData says its platform captures metadata at the collection point, letting the developer map elements across systems and supplement additional information where required, which brings consistency and richness to the logs being transmitted via the syslog protocol.
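As an illustration of metadata travelling with the message itself, the Go sketch below emits an RFC 5424 syslog line whose structured-data block carries application and subsystem context attached at the point of collection. The collector address, the SD-ID ‘meta@32473’ and the tag names are assumptions made for the example, not InfluxData defaults.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Assumption: a syslog collector is listening on UDP port 6514.
	conn, err := net.Dial("udp", "127.0.0.1:6514")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// The [meta@32473 ...] structured-data block is where per-message
	// metadata (subsystem, region, etc.) rides along with the event.
	msg := fmt.Sprintf(
		"<165>1 %s web01 payments 4242 TX47 "+
			`[meta@32473 subsystem="billing" region="eu-west"] `+
			"transaction declined",
		time.Now().UTC().Format(time.RFC3339))
	if _, err := conn.Write([]byte(msg)); err != nil {
		panic(err)
	}
}
```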
The platform also provides an improved workflow for log visualisation within the same environment where developers construct their metrics dashboards, allowing them to analyse captured log events over a specific time interval and narrow the data down by important metadata elements such as host, application and subsystem.
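In practice, narrowing by time and metadata looks something like the sketch below, which issues an InfluxQL query against InfluxDB's 1.x HTTP API from Go. The measurement name (‘syslog’) and tag keys (‘hostname’, ‘appname’) follow the conventions of Telegraf's syslog input plugin, but treat them as assumptions to be checked against the schema actually in use.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Pull the last 15 minutes of log events for one host and one
	// application, using the same tags that index the metrics.
	q := url.Values{}
	q.Set("db", "telegraf")
	q.Set("q", `SELECT "timestamp", "message" FROM "syslog" `+
		`WHERE "hostname" = 'web01' AND "appname" = 'payments' `+
		`AND time > now() - 15m`)

	resp, err := http.Get("http://localhost:8086/query?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```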