Experimentation on testbeds with Internet of Things (IoT) devices is hard. Tedious firmware development, the lack of user interfaces, the stochastic nature of the radio channel, and the testbed learning curve are some of the factors that make the evaluation process error-prone. The impact of such errors on published results can be severe, leading to incorrect conclusions and false common wisdom. Moreover, the choice of experiment conditions or performance metrics used to evaluate one's own proposal may not lead to a perfectly fair comparison with the state of the art. Our research community is well aware of these problems and is actively working on solutions. We present OpenBenchmark, a cloud-based IoT benchmarking service that makes experiments reproducible, repeatable, and comparable. OpenBenchmark facilitates and improves the IoT experimentation workflow: it runs experiments on supported testbeds, instruments the supported firmware according to industry-relevant test scenarios, and collects and processes the experiment data to produce Key Performance Indicators (KPIs). This paper introduces the OpenBenchmark platform and discusses its applicability, design, and implementation.
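To make the notion of a KPI concrete, the sketch below shows how two common indicators, reliability (packet delivery ratio) and mean end-to-end latency, could be derived from raw packet-event logs collected during an experiment run. The event format and field names are illustrative assumptions for this sketch, not OpenBenchmark's actual data model.

```python
from statistics import mean

# Illustrative packet events as a testbed run might log them.
# Field names ("pkt", "sent_s", "recv_s") are assumptions for this sketch.
events = [
    {"pkt": 1, "sent_s": 0.00, "recv_s": 0.42},
    {"pkt": 2, "sent_s": 1.00, "recv_s": None},   # lost packet
    {"pkt": 3, "sent_s": 2.00, "recv_s": 2.35},
]

delivered = [e for e in events if e["recv_s"] is not None]

# Two common KPIs: reliability (delivered / sent) and mean latency.
reliability = len(delivered) / len(events)
latency_s = mean(e["recv_s"] - e["sent_s"] for e in delivered)

print(f"reliability = {reliability:.2%}, mean latency = {latency_s:.2f} s")
```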