Abstract: We propose a framework for evaluating cyberattack detection systems in which theoretical results can be tested in a realistic setup. We emulate a power control infrastructure, an attacker, and a monitoring system. In this controlled environment, a modular approach makes it possible to evaluate a variety of detection models: we inject adversarial activity, collect logs from the systems, analyze those logs to produce evidence, and feed that evidence to artificial intelligence models that can raise alerts and provide diagnostic or predictive information. In particular, we test our framework with detection models based on Dynamic Bayesian Networks, which take into account the evolution of adversarial activities over time. The testbed allows us to effectively assess the adequacy of the detection mechanisms for early warning of suspicious events; it currently includes man-in-the-middle and false data injection attacks.
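To illustrate the kind of temporal reasoning a Dynamic Bayesian Network detector performs over log-derived evidence, the following is a minimal sketch of recursive Bayesian filtering of an "attack in progress" belief. It is not the paper's model: the two-state structure, the transition and observation probabilities, and the alerting threshold are all hypothetical placeholders chosen only for illustration.

```python
import numpy as np

# Hidden state: 0 = benign, 1 = under attack (e.g., MITM or false data injection).
# All probabilities below are hypothetical, not values from the paper.
transition = np.array([[0.95, 0.05],   # P(state_t | state_{t-1} = benign)
                       [0.10, 0.90]])  # P(state_t | state_{t-1} = attack)

# Observation model P(evidence_t | state_t), where evidence is a binary
# indicator extracted from the collected logs (e.g., anomalous traffic flagged).
emission = np.array([[0.90, 0.10],     # benign: evidence is mostly normal
                     [0.30, 0.70]])    # attack: evidence is mostly suspicious

def filter_step(belief, observation):
    """One step of recursive Bayesian filtering: predict, then update on evidence."""
    predicted = transition.T @ belief               # time update
    updated = predicted * emission[:, observation]  # evidence update
    return updated / updated.sum()                  # normalize to a distribution

# Example: a stream of evidence values produced by the log-analysis stage.
belief = np.array([0.99, 0.01])  # prior: almost surely benign
for t, obs in enumerate([0, 0, 1, 1, 1]):
    belief = filter_step(belief, obs)
    status = "ALERT" if belief[1] > 0.5 else "ok"   # hypothetical threshold
    print(f"t={t}: {status}, P(attack)={belief[1]:.2f}")
```

In this sketch, repeated suspicious observations drive the posterior attack probability upward over time, which is the behavior that makes temporal models suitable for early warning compared with per-event, stateless checks.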