Our society is in the midst of a vast technological transformation, with many labour-intensive tasks being automated by software. Software permeates all aspects of our lives: self-driving cars, online banking, and AI assistants such as Siri and Alexa are notable examples with direct implications for our safety, security, and finances. What would happen if a software system that we trust highly fails to function as we expect? In October 2018, a Lion Air Boeing 737 MAX 8 crashed, killing everyone on board. Five months later, the same model, operated by Ethiopian Airlines, crashed as well. Investigations into these two cases led to the conclusion that the culprit was a bug in the software of the Boeing 737 MAX 8. Such fatal cases arise because existing reliability models used for building safety-critical systems cannot guarantee the absence of bugs, even with the very rigorous testing and verification applied when building safety-critical systems such as airplanes. This leads us to the following question: how trustworthy are current approaches for measuring the reliability of software systems, and how can we improve their ability to detect bugs?
Effective start/end date: 1/01/21 → 31/12/23