CODE 108858
ACADEMIC YEAR 2024/2025
CREDITS 1 cfu, year 2, ROBOTICS ENGINEERING 10635 (LM-32) - GENOVA
SCIENTIFIC DISCIPLINARY SECTOR ING-INF/05
LANGUAGE English
TEACHING LOCATION GENOVA
SEMESTER 1st Semester

MODULES
This course is a module of: TRUSTWORTHY ARTIFICIAL INTELLIGENCE FOR ROBOTICS

TEACHING MATERIALS
AULAWEB

AIMS AND CONTENT

LEARNING OUTCOMES
Specific use cases on evaluating the security of the object-recognition system of the iCub robot will be addressed. Students will also acquire the ability to answer open-ended questions with closed books, solve numerical exercises, and use open-source libraries for the security evaluation of the machine-learning algorithms used by modern robots.

AIMS AND LEARNING OUTCOMES
The aim of this short course is to provide graduate students with fundamental and advanced concepts on the security of machine learning in robotics.

TEACHING METHODS
Hands-on classes on attacks against and defences of machine-learning algorithms, using SecML, an open-source Python library for the security evaluation of machine-learning algorithms (https://secml.readthedocs.io).

SYLLABUS/CONTENT
Introduction to machine learning security: practical examples from computer vision, biometrics, spam filtering, and malware detection.
Threat models and attacks against machine learning: modelling adversarial tasks; the two-player model (the attacker and the classifier); levels of reciprocal knowledge of the two players (perfect knowledge, limited knowledge, knowledge via queries and feedback).
Attack models against machine learning: evasion, poisoning, backdoor, and privacy attacks. The concepts of security by design and security by obscurity.
Evaluation of machine-learning algorithms in adversarial environments: vulnerability assessment via formal methods and empirical evaluations; adaptive and non-adaptive evaluations.
Design of machine-learning algorithms in adversarial environments: taxonomy of possible defence strategies.
Evasion attacks and countermeasures.
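The evasion setting above, where an attacker perturbs a test sample within a bounded budget to flip a classifier's decision, can be sketched in a few lines for the linear case. This is a hypothetical pure-Python toy for illustration only; the names `predict` and `evade` are invented here and are not the SecML API:

```python
# Toy sketch of a gradient-based evasion attack on a linear classifier.
# For a linear model the gradient of the score w.r.t. the input is just w,
# so the optimal L-infinity-bounded step has a closed form (FGSM-style).

def predict(w, b, x):
    """Linear classifier: sign(w . x + b); returns +1 or -1."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

def evade(w, b, x, eps):
    """Perturb x within an L-infinity budget eps to push the score
    across the decision boundary: move each feature by eps against
    the sign of the corresponding weight for the current label."""
    y = predict(w, b, x)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * y * sign(wi) for xi, wi in zip(x, w)]

# Toy 2-feature classifier and a point it labels +1.
w, b = [1.0, 2.0], -0.5
x = [0.4, 0.3]
x_adv = evade(w, b, x, eps=0.5)
print(predict(w, b, x), predict(w, b, x_adv))  # prints: 1 -1
```

With a large enough budget `eps`, the perturbed point `x_adv` crosses the boundary and the label flips; shrinking `eps` until the attack fails is the basic idea behind the empirical security evaluations covered in the course.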
Defences against evasion, poisoning, backdoor, and privacy attacks. Poisoning attacks and countermeasures. Backdoor poisoning, privacy-related threats, and defences.
Hands-on classes on attacks against machine learning and defences of machine-learning algorithms in robotics.

RECOMMENDED READING/BIBLIOGRAPHY
A. Joseph, B. Nelson, B. Rubinstein, D. Tygar, Adversarial Machine Learning, Cambridge University Press, 2018.
B. Biggio, F. Roli, Wild patterns: Ten years after the rise of adversarial machine learning, Pattern Recognition 84 (2018): 317-331.
B. Biggio, F. Roli, Wild Patterns, half-day tutorial on adversarial machine learning: https://www.pluribus-one.it/research/sec-ml/wild-patterns
B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Srndic, P. Laskov, G. Giacinto, F. Roli, Evasion attacks against machine learning at test time, ECML-PKDD, 2013.
B. Biggio, G. Fumera, F. Roli, Security evaluation of pattern classifiers under attack, IEEE Trans. Knowl. Data Eng., 26(4):984-996, 2014.

TEACHERS AND EXAM BOARD
FABIO ROLI
Office hours: contact the instructor by email. www.saiferlab.ai/people/fabioroli

Exam Board
FABIO ROLI (President)
LUCA DEMETRIO
LUCA ONETO (President Substitute)

LESSONS

LESSONS START
According to the official calendar.

Class schedule
The timetable for this course is available here: Portale EasyAcademy

EXAMS

EXAM DESCRIPTION
Intermediate in-class assignments.

ASSESSMENT METHODS
Solutions of numerical/coding exercises and open-ended, closed-book questions.

FURTHER INFORMATION
Contact the instructor by email.