The use of cloud, containers, and microservices has made the application landscape more complex. Classic monitoring of IT systems is no longer sufficient. Observability, which combines logging, tracing, and monitoring, is better suited to this landscape. Five best practices for context-based root-cause analysis.
Classic monitoring alerts administrators when an error occurs in an IT system. Well implemented, it shows where the problem lies and what is no longer working. To answer why, administrators need a deeper, holistic insight into their systems and microservices. Classic monitoring is largely limited to watching for problems the company has foreseen; based on these expectations, the operations department configures its dashboards. With the observability approach, on the other hand, the company receives data from the entire system.
With the right strategy in place, administrators can flexibly analyze what is happening across all interconnected environments and where the real root of a failure lies. Because observing highly complex systems is difficult, the IT service provider Consol has defined five best practices on which companies should base their observability strategy.
Logs are only useful if their content is tailored to the audience. There are three relevant target groups for logging: operations, developers, and business departments. In observability, target-group-specific logging means that the logs must contain precisely the information relevant to the maintenance and operation of applications. While developers, for example, want their logs to show exactly which line of code an error occurred in, administrators care more about its effects on other parts of the system. Business departments, in turn, are primarily interested in whether the business use cases are running and whether there are any problems.
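As a minimal sketch of what target-group-specific logging can look like, the following Python example uses only the standard library; the logger names, messages, and use case are illustrative assumptions, not Consol's recommendations:

```python
import logging

formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
handler = logging.StreamHandler()
handler.setFormatter(formatter)

dev_log = logging.getLogger("app.dev")       # developers: code-level detail
ops_log = logging.getLogger("app.ops")       # operations: impact on the system
biz_log = logging.getLogger("app.business")  # departments: business use cases
for log in (dev_log, ops_log, biz_log):
    log.addHandler(handler)
    log.setLevel(logging.INFO)

try:
    1 / 0  # stand-in for a real failure in the price calculation
except ZeroDivisionError:
    # Developers get the stack trace down to the failing line ...
    dev_log.error("division failed in price calculation", exc_info=True)
    # ... operations sees the effect on neighbouring components ...
    ops_log.error("pricing service degraded; checkout requests will be retried")
    # ... and the department sees the state of the business use case.
    biz_log.warning("order use case 'checkout' currently impaired")
```

Each audience reads its own logger, so the same incident produces three records, each with exactly the detail that group needs.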
Extensive logging is the basis for successful observability. Still, sometimes less is more, especially considering the cost-benefit ratio. Data collection is a significant cost factor, above all in the cloud: storage is expensive, and network traffic and configuration work add to the bill. Maintaining and updating the logging infrastructure also ties up costly personnel. Companies should therefore collect only the data that is necessary for their purposes.
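One way to keep volume down is to discard known noise before it is ever stored. This sketch assumes a hypothetical health-check endpoint and a chatty third-party logger; both names are examples, not part of the article:

```python
import logging

class DropHealthChecks(logging.Filter):
    """Discards routine health-check entries before they reach storage."""
    def filter(self, record: logging.LogRecord) -> bool:
        # Returning False drops the record entirely.
        return "GET /healthz" not in record.getMessage()

root = logging.getLogger()
root.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.addFilter(DropHealthChecks())
root.addHandler(handler)

# Verbose framework logs are cut to warnings to save storage and traffic.
logging.getLogger("urllib3").setLevel(logging.WARNING)

root.info("GET /healthz 200")    # filtered out, never stored
root.info("order 4711 created")  # kept
```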
Good monitoring as part of an observability strategy goes far beyond standard technical metrics such as processor load or memory consumption. Business metrics, such as how long it takes to render components on a website, must be defined by each company individually and per application. In addition, monitoring is only effective if it is designed for the long term. After every software release or newly implemented feature, companies should take a close look at whether and how the performance and health of the system have changed. The prerequisite is that the corresponding logs are available as a monitoring history.
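Taking the render-time example from above, a self-defined business metric might be exposed with the Prometheus Python client like this; the metric name, port, and simulated workload are assumptions for the sketch:

```python
import random
import time

from prometheus_client import Histogram, start_http_server

# A business metric defined by the company itself, not a standard one.
RENDER_TIME = Histogram(
    "page_render_seconds",
    "Time taken to render page components",
)

def render_page():
    with RENDER_TIME.time():                   # records the duration
        time.sleep(random.uniform(0.05, 0.3))  # stand-in for real rendering

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        render_page()
```

Because Prometheus keeps the scraped history, the team can compare render times before and after each release.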
Defining alerting rules is also part of a holistic observability strategy. Monitoring provides administrators with real-time information about the system, so they can check at any time whether everything is in order. It is less time-consuming, however, if the system raises the alarm on its own, for example as soon as a certain percentage of accesses to an application return errors within five minutes. Those responsible can then check specifically what is wrong and where intervention is necessary. A prerequisite for this is suitable metrics provided by the application; besides technical metrics, this also includes business metrics that make it possible to monitor the business use cases for which the system is responsible.
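The five-minute error-rate rule from the text can be sketched in plain Python as follows; the 5% threshold is an assumed example value, and in practice this logic would live in the alerting system (for instance a Prometheus alerting rule), not in the application itself:

```python
import time
from collections import deque

WINDOW_SECONDS = 300     # the five-minute window from the example
ERROR_THRESHOLD = 0.05   # assumed: alert when more than 5% of accesses fail

events: deque[tuple[float, bool]] = deque()  # (timestamp, was_error)

def record_access(was_error: bool) -> None:
    now = time.time()
    events.append((now, was_error))
    # Drop accesses that have left the five-minute window.
    while events and events[0][0] < now - WINDOW_SECONDS:
        events.popleft()

def error_rate() -> float:
    if not events:
        return 0.0
    return sum(1 for _, err in events if err) / len(events)

def check_alert() -> None:
    if error_rate() > ERROR_THRESHOLD:
        print(f"ALERT: {error_rate():.1%} of accesses failed "
              f"in the last {WINDOW_SECONDS // 60} minutes")
```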
Open source software (OSS) is becoming increasingly popular in professional IT as a cost-effective alternative to proprietary products. Open source tools such as Prometheus (monitoring and alerting), Jaeger (tracing), Logstash (logging), and Kibana (visualization) are ubiquitous in the DevOps world. Most rely on open standards such as OpenMetrics, OpenTracing, and OpenTelemetry, which spares companies vendor lock-in and lets the tools be combined freely.
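To illustrate one of the standards named above, here is a minimal tracing sketch with the OpenTelemetry Python SDK; the service and span names are invented, and the console exporter stands in for a real backend such as Jaeger:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire the SDK to print finished spans to the console.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

# Nested spans let administrators follow a request across component boundaries.
with tracer.start_as_current_span("handle_order"):
    with tracer.start_as_current_span("charge_payment"):
        pass  # stand-in for the actual payment call
```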