Reproducing experimental results is a core tenet of the scientific method. Unfortunately, the increasing complexity of the systems we build, deploy, and evaluate makes it difficult to reproduce results, and this is one of the greatest impediments to the progress of science in general and of distributed systems in particular.
This complexity stems not only from the systems under study themselves, but also from the inherent difficulty of capturing and controlling all the variables that can affect experimental results.
We argue that this can only be addressed with a systematic approach to all the stages of the evaluation process. Angainor is a step in this direction.
Our goal is to address the following challenges: i) precisely describe the environment and variables affecting the experiment, ii) minimize the number of (uncontrollable) variables affecting the experiment and iii) have the ability to subject the system under evaluation to controlled fault patterns.
The architecture and main design decisions of the platform will be detailed in an upcoming paper.
We have open PhD and Post-Doc positions. If you are interested in working in practical distributed systems check the Team below for details on how to get in touch.
July 27th 2018 - Miguel Matos will be giving a keynote at the ApPLIED Workshop - Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems held in conjunction with PODC-2018
June 1st 2018 - The Angainor project officially started. Check the Team and Funding below for more details.
Have a Docker client/daemon up and running on your machine. Check the Docker documentation for instructions.
make all push
Edit config.yaml and adjust it according to your system. The defaults provided should work in most cases.
./bin/lsds cluster init
If bash is not your default shell, prefix all commands with bash, as in
bash bin/lsds cluster init
./bin/lsds cluster up
./bin/lsds cluster status
./bin/lsds benchmark --app examples/nginx/nginx.yaml --name hello-world --run-time 120
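The application descriptor passed via --app (examples/nginx/nginx.yaml) is not reproduced here. As a purely illustrative sketch of what a Docker-based application descriptor for such a platform might contain, the key names below (image, replicas, ports) are assumptions and not the actual Angainor format:

```yaml
# Hypothetical application descriptor; all key names are illustrative only.
name: nginx
image: nginx:latest   # Docker image to deploy on the cluster nodes
replicas: 3           # number of container instances to launch
ports:
  - 80                # ports exposed by each container
```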
which will run the experiment for 120 seconds.
./bin/lsds benchmark --app examples/nginx/nginx.yaml --name hello-churn --churn examples/nginx/churn.yaml
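The churn specification (examples/nginx/churn.yaml) drives the controlled fault patterns mentioned earlier. The actual file format is not shown here, so the sketch below is purely an assumption of how a churn schedule could list timed kill/start events:

```yaml
# Hypothetical churn schedule; the real Angainor format may differ.
- time: 30       # seconds into the experiment
  action: kill   # terminate replicas to inject churn
  instances: 1
- time: 60
  action: start  # bring one replica back
  instances: 1
```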
Results will become available at
./bin/lsds cluster down
config.yaml to match your cluster settings, with one entry per cluster machine
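With one entry per machine, the cluster section of config.yaml might look like the following sketch; the hostnames and field names here are illustrative assumptions, not the actual schema:

```yaml
# Illustrative only; field names are assumed, not taken from Angainor.
cluster:
  - hostname: node-01.example.org   # one entry per cluster machine
    user: lsds
  - hostname: node-02.example.org
    user: lsds
```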
If you find an issue or would like to contribute new features, open a new issue and we will get in touch as soon as possible.
This project is led by Miguel Matos, Senior Researcher at INESC-ID and Assistant Professor at Instituto Superior Técnico, Universidade de Lisboa, Portugal, in collaboration with researchers at Université de Neuchâtel, Switzerland. Check the CONTRIBUTORS file for the full list of people involved in the project.
This work was partially supported by Fundo Europeu de Desenvolvimento Regional (FEDER) through Programa Operacional Regional de Lisboa and by Fundação para a Ciência e Tecnologia (FCT) through projects with reference UID/CEC/50021/2013 and LISBOA-01-0145-FEDER-031456.