Verifying Parallel Algorithms and Programs Using Coloured Petri Nets, (2012)
by Michael Westergaard
Citation Context
...literature in distributed systems that touches on issues related to performance bugs. Table 1 summarizes the state of the art. First, much of the existing work focuses on in-deployment and post-mortem tracing, monitoring, debugging, and analysis [2, 4, 9, 10, 22, 23, 26]. Arguably, these represent the popular approach, but they suffer from one important limitation: passivity. In-deployment and post-mortem approaches are passive because they react only after performance bugs surface; they cannot unearth performance bugs prior to deployment. In terms of offline performance testing, one standard practice is running benchmarks [5], which unfortunately falls far short of simulating real deployment environments. To exercise more scenarios, one can simultaneously run benchmarks and simulate certain environments such as...

Table 1: Categorization of Related Work.
  In/Post-Deployment Monitoring, Tracing, Profiling: Project 5 [2], Magpie [4], Fay [9], X-Trace [10], LagHunter [16], PiP [22], Spectroscope [23], Log Mining [26]
  Pre-Deployment Benchmarks: YCSB [5], Limpbench [8]
  Model checking: FATE [11], Demeter [13], MacePC [17], SAMC [20], MoDist [27]
  Formal methods: P Lang [7], CPN [25], DynamoDB+PlusCal [14, 21]