Specifying Big Data Benchmarks

Tilmann Rabl, Meikel Poess, Chaitan Baru, and Hans-Arno Jacobsen (Eds.).

Volume 8163 of LNCS. Springer Berlin Heidelberg, 2014.
http://link.springer.com/book/10.1007%2F978-3-642-53974-9.

Abstract

First Workshop, WBDB 2012, San Jose, CA, USA, May 8-9, 2012, and Second Workshop, WBDB 2012, Pune, India, December 17-18, 2012, Revised Selected Papers

Spurred by the rapid growth in the use of the Internet, both in the number of devices connected globally and in the amount of data per device, the world has been in the midst of an extraordinary information explosion over the past decade. As a consequence, society is dealing with information at a faster rate of change than at any other point in history. The data originating from social media, enterprise applications, and computing devices in general, commonly referred to as big data, continue to grow exponentially, offering enormous potential for extracting very detailed information. As new systems, techniques, and algorithms are developed to deal with these new data characteristics, the need emerges for a standardized methodology for their performance evaluation.

This sparked the idea among a small group of industry and academic experts to establish a series of workshops with the explicit goal of defining a set of benchmarks that provide objective measures of the effectiveness of hardware and software systems for big data applications. The First Workshop on Big Data Benchmarking (WBDB 2012), held during May 8–9, 2012, in San Jose, CA, was attended by over 60 invitees representing 45 different organizations from industry and academia. Following this successful first workshop, the second workshop (WBDB 2012.in) was held in Pune, India, on December 16–17, 2012. The topics discussed at the workshops can be grouped into four areas: (1) Benchmark Context; (2) Benchmark Design Principles for a Big Data Benchmark; (3) Objectives of Big Data Benchmarking; and (4) Big Data Benchmark Design. This book collects the most mature and interesting contributions from the First and Second Workshops on Big Data Benchmarking.

Tags: big data, benchmarking, wbdb


Readers who enjoyed the above work may also like the following:


  • Advancing Big Data Benchmarks.
    Tilmann Rabl, Raghunath Nambiar, Meikel Poess, Milind Bhandarkar, Hans-Arno Jacobsen, and Chaitanya Baru (Eds.).
    Volume 8585 of LNCS. Springer Berlin Heidelberg, 2014.
    http://link.springer.com/book/10.1007%2F978-3-319-10596-3.
    Tags: big data, benchmarking, wbdb
  • Discussion of BigBench: A Proposed Industry Standard Performance Benchmark for Big Data.
    Chaitanya Baru, Milind Bhandarkar, Carlo Curino, Manuel Danisch, Michael Frank, Bhaskar Gowda, Hans-Arno Jacobsen, Huang Jie, Dileep Kumar, Raghunath Nambiar, Meikel Poess, Francois Raab, Tilmann Rabl, Nishkam Ravi, Kai Sachs, Saptak Sen, Lan Yi, and Choonhan Youn.
    In Sixth TPC Technology Conference on Performance Evaluation & Benchmarking, pages 44-63, 2014. Springer Berlin Heidelberg.
    Tags: bigbench, big data, benchmarking
  • BigBench Specification V0.1.
    Tilmann Rabl, Ahmad Ghazal, Minqing Hu, Alain Crolotte, Francois Raab, Meikel Poess, and Hans-Arno Jacobsen.
    In Proceedings of the 2012 Workshop on Big Data Benchmarking, pages 164-202, 2013.
    Tags: bigbench, big data, benchmarking