Abstract
Artificial intelligence (AI) systems are increasingly used to make decision-making processes faster, more accurate, and more efficient. However, such systems are also under constant risk of attack. While most attacks on AI-based applications aim to manipulate classifiers or training data and thereby alter a model’s output, recently proposed sponge attacks instead aim to impede a classifier’s execution by consuming substantial resources. In this work, we propose dual denial of decision (DDoD) attacks against collaborative human-AI teams. We discuss how such attacks aim to deplete both computational and human resources and to significantly impair decision-making capabilities. We describe DDoD attacks on human and computational resources and present potential risk scenarios in a series of exemplary domains.
Field | Value |
---|---|
Original language | English |
Pages (from-to) | 77-84 |
Number of pages | 8 |
Journal | IEEE Pervasive Computing |
Volume | 22 |
Issue number | 1 |
DOIs | |
Publication status | Published - 1 Jan 2023 |
Externally published | Yes |
Keywords
- Artificial intelligence
- Data models
- Predictive models
- Task analysis
- Training
- Training data
- Uncertainty