This is a limited proof of concept for searching research data, not a production system.


Title: Data for: Performance Benchmarking of Application Monitoring Frameworks

Type: Dataset

Citation: Waller, Jan (2014): Data for: Performance Benchmarking of Application Monitoring Frameworks. Zenodo. Dataset. https://zenodo.org/record/11425

Author: Waller, Jan (Kiel University, Kiel, Germany)


Summary

Application-level monitoring of continuously operating software systems provides insights into their dynamic behavior, helping to maintain their performance and availability at runtime. Such monitoring can impose significant runtime overhead on the monitored system, depending on the number and placement of the instrumentation probes used. In order to improve a system’s instrumentation and to reduce the monitoring overhead it causes, it is necessary to know the performance impact of each probe. While many monitoring frameworks claim to have minimal impact on performance, these claims are often not backed by a detailed performance evaluation that determines the actual cost of monitoring. Benchmarks are an effective and affordable way to perform such evaluations. However, no existing benchmark specifically targets the overhead of monitoring itself, and no established benchmark engineering methodology provides guidelines for the design, execution, and analysis of benchmarks. This thesis introduces a benchmark approach to measure the performance overhead of application-level monitoring frameworks. The core contributions of this approach are 1) a definition of common causes of monitoring overhead, 2) a general benchmark engineering methodology, 3) the MooBench micro-benchmark to measure and quantify causes of monitoring overhead, and 4) detailed performance evaluations of three different application-level monitoring frameworks. Extensive experiments demonstrate the feasibility and practicality of the approach and validate the benchmark results. The benchmark is available as open-source software, and the results of all experiments are available for download to facilitate further validation and replication.

This dataset supplements the thesis and contains the results of all experiments, including the raw result data, the results of additional experiments, and the configuration of our benchmarks.
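
To give a concrete flavor of the measurement approach described above, the following is a minimal sketch in Java of the kind of experiment such a micro-benchmark runs: it repeatedly times a fixed workload so that runs with and without a monitoring probe can be compared. This is not MooBench's actual code; the class and method names, iteration counts, and workload duration are illustrative assumptions.

    // Minimal monitoring-overhead micro-benchmark sketch
    // (hypothetical names; not the actual MooBench implementation).
    public final class MonitoringOverheadSketch {

        // Workload under measurement: busy-waits for a fixed number of
        // nanoseconds so its baseline cost is known and stable.
        static void monitoredOperation(long busyNanos) {
            long start = System.nanoTime();
            while (System.nanoTime() - start < busyNanos) {
                // spin
            }
        }

        public static void main(String[] args) {
            final int warmup = 100_000;      // iterations discarded to let the JIT warm up
            final int measured = 1_000_000;  // iterations kept for analysis
            final long busyNanos = 500;      // simulated per-call work

            long[] samples = new long[measured];
            for (int i = 0; i < warmup + measured; i++) {
                long before = System.nanoTime();
                monitoredOperation(busyNanos);
                long elapsed = System.nanoTime() - before;
                if (i >= warmup) {
                    samples[i - warmup] = elapsed;
                }
            }

            long sum = 0;
            for (long s : samples) {
                sum += s;
            }
            // Run once without instrumentation and once with the monitoring
            // framework's probe attached to monitoredOperation; the difference
            // between the two means estimates the per-call monitoring overhead.
            System.out.printf("mean response time: %.1f ns%n", (double) sum / measured);
        }
    }

MooBench itself goes further, repeating such runs under multiple configurations to attribute the measured overhead to its individual causes; the configurations used for each experiment are included in this dataset.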

More information

  • DOI: 10.5281/zenodo.11425

Subjects

  • Kieker
  • MooBench
  • Software Performance Engineering
  • Benchmarking

Dates

  • Publication date: 2014
  • Issued: August 28, 2014


Format

electronic resource

Related items

  • IsSupplementTo: https://doi.org/10.5281/zenodo.11515
  • IsSupplementTo: 978-3-7357-7853-6 (ISBN)
  • IsPartOf: https://zenodo.org/communities/kieker
  • IsPartOf: https://zenodo.org/communities/moobench
  • IsPartOf: https://zenodo.org/communities/zenodo