Specific use cases on evaluating the security of the object recognition system of the iCub robot will be addressed. Students will also acquire the ability to answer open-ended questions in a closed-book setting, solve numerical exercises, and use open-source libraries for the security evaluation of machine learning algorithms used by modern robots.
The aim of this short course is to provide graduate students with fundamental and advanced concepts on the security of machine learning in robotics.
Hands-on classes on attacks against, and defences of, machine-learning algorithms using SecML, an open-source Python library for the security evaluation of machine learning algorithms (https://secml.readthedocs.io).
Introduction to machine learning security through practical examples from computer vision, biometrics, spam filtering, and malware detection.
Threat models and attacks on machine learning. Modelling adversarial tasks. The two-player model (the attacker and the classifier). Levels of reciprocal knowledge of the two players (perfect knowledge, limited knowledge, knowledge by queries and feedback). Attack models against machine learning: evasion, poisoning, backdoor and privacy attacks. The concepts of security by design and security by obscurity.
Evaluation of machine learning algorithms in adversarial environments. Vulnerability assessment via formal methods and empirical evaluations. Adaptive and non-adaptive evaluations.
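An empirical security evaluation of the kind described above can be sketched as an accuracy-versus-perturbation-budget curve. The following is an illustrative NumPy example (synthetic two-class data and a fixed linear classifier, both assumptions of this sketch): for a linear model, the worst-case L-infinity perturbation of each sample is known in closed form, so accuracy can be measured under attack for growing budgets eps:

```python
import numpy as np

# Synthetic two-class data: Gaussian clouds centred at +1 and -1 per feature.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)) + 1.0,
               rng.normal(0, 1, (200, 2)) - 1.0])
y = np.concatenate([np.ones(200), -np.ones(200)])

w, b = np.array([1.0, 1.0]), 0.0   # fixed linear classifier under evaluation

accs = []
for eps in [0.0, 0.5, 1.0, 1.5]:
    # Worst-case L-infinity attack on a linear model: shift every sample
    # by eps against its own class direction.
    X_adv = X - eps * y[:, None] * np.sign(w)
    acc = np.mean(np.sign(X_adv @ w + b) == y)
    accs.append(acc)
    print(f"eps={eps:.1f}  accuracy={acc:.2f}")
```

Plotting `accs` against the budgets gives the security evaluation curve: accuracy is highest at eps = 0 and degrades monotonically as the attacker's power grows.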
Design of machine learning algorithms in adversarial environments: taxonomy of possible defence strategies; evasion attacks and countermeasures; poisoning attacks and countermeasures; backdoor poisoning; privacy-related threats and defences.
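One countermeasure from the taxonomy above, adversarial training, can be sketched as follows (an illustrative NumPy implementation under stated assumptions: synthetic data, a linear hinge-loss model, and the closed-form worst-case L-infinity perturbation for linear models; not the course's reference implementation):

```python
import numpy as np

# Synthetic, well-separated two-class data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.5, 1, (100, 2)), rng.normal(-1.5, 1, (100, 2))])
y = np.concatenate([np.ones(100), -np.ones(100)])

w, b = np.zeros(2), 0.0
eps, lr = 0.5, 0.1                  # attacker's budget and learning rate

for _ in range(200):
    # Adversarial training: at each step, fit the model on the worst-case
    # perturbed data, so the learned margin accounts for the budget eps.
    X_adv = X - eps * y[:, None] * np.sign(w + 1e-12)
    margin = y * (X_adv @ w + b)
    active = (margin < 1).astype(float)          # hinge-loss subgradient mask
    w += lr * np.mean((active * y)[:, None] * X_adv, axis=0) - lr * 1e-3 * w
    b += lr * np.mean(active * y)

# Robust accuracy: fraction of samples still correct under the attack.
X_att = X - eps * y[:, None] * np.sign(w)
robust_acc = np.mean(np.sign(X_att @ w + b) == y)
print(f"robust accuracy at eps={eps}: {robust_acc:.2f}")
```

The same loop with eps = 0 recovers standard training; comparing the two robust accuracies is the simplest adaptive evaluation of this defence.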
Hands-on classes on attacks against, and defences of, machine-learning algorithms in robotics.
A. Joseph, B. Nelson, B. Rubinstein, D. Tygar. Adversarial Machine Learning. Cambridge University Press, 2018.
B. Biggio, F. Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition 84 (2018): 317-331.
B. Biggio, F. Roli. Wild Patterns: half-day tutorial on adversarial machine learning: https://www.pluribus-one.it/research/sec-ml/wild-patterns
Biggio, B., Corona, I., Maiorca, D., Nelson, B., Srndic, N., Laskov, P., Giacinto, G., Roli, F. Evasion attacks against machine learning at test time. ECML-PKDD, 2013.
Biggio, B., Fumera, G., Roli, F. Security evaluation of pattern classifiers under attack. IEEE Trans. Knowl. Data Eng., 26 (4):984–996, 2014.
Office hours: contact the instructor by email. www.saiferlab.ai/people/fabioroli
FABIO ROLI (President)
LUCA DEMETRIO (President Substitute)
LUCA ONETO (President Substitute)
According to the official calendar.
Intermediate in-class assignments.
Solutions of numerical/coding exercises and answers to open-ended questions (closed book).
Contact the instructor by email