|SCIENTIFIC DISCIPLINARY SECTOR||ING-INF/05|
Today, machine-learning algorithms and AI-based systems are used in many real-world applications, including image recognition, spam filtering, malware detection, and biometric recognition. In these applications, the learning algorithm may have to face intelligent, adaptive attackers who can carefully manipulate data to purposely subvert both the learning and the operational phases. Part 1 of the course introduces the fundamentals of the security of machine learning and the related field of adversarial machine learning. Part 2 introduces the international regulations behind so-called “trustworthy AI”. The course uses application examples including object recognition in images, biometric recognition, spam filtering, and malware detection.
Understanding of fundamental concepts and advanced methods in the security of machine learning and trustworthy artificial intelligence, and of their applications to pattern recognition. Ability to answer open-ended questions in closed-book settings, to solve numerical exercises, and to use open-source libraries for the security evaluation of machine-learning algorithms.
The aim of this course is to provide graduate students with fundamental and advanced concepts on the security of machine learning and trustworthy artificial intelligence.
This course is intended for graduate students who have already attended introductory courses in machine learning and artificial intelligence (or have basic-to-intermediate knowledge of these subjects) and who have basic-to-intermediate programming skills, in particular in the Python language.
Lectures, based on slides; copies of the slides will be provided to the students. Hands-on classes on attacks against and defenses of machine-learning algorithms using SecML, an open-source Python library for the security evaluation of machine-learning algorithms (https://github.com/pralab/secml).
Part 1: Security of Machine Learning (30 hours)
Introduction to machine learning security: introduction by practical examples from computer vision, biometrics, spam filtering, malware detection. (2 hours)
Threat models and attacks against machine learning. Modelling adversarial tasks. The two-player model (the attacker and the classifier). Levels of reciprocal knowledge of the two players (perfect knowledge, limited knowledge, knowledge by queries and feedback). Attack models against machine learning: evasion, poisoning, backdoor, and privacy attacks. The concepts of security by design and security by obscurity. (6 hours)
Evaluation of machine learning algorithms in adversarial environments. Vulnerability assessment via formal methods and empirical evaluations. Adaptive and non-adaptive evaluations. (6 hours)
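As a minimal illustration of the empirical evaluation described above (my own sketch, not course material), the snippet below traces a "security curve": the accuracy of a toy linear classifier as the attacker's L-infinity perturbation budget grows. All data, model, and parameter choices are assumptions made for the example.

```python
import numpy as np

# Toy data: two Gaussian blobs with labels -1 / +1 (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (200, 2)), rng.normal(1, 0.5, (200, 2))])
y = np.hstack([-np.ones(200), np.ones(200)])

# Least-squares linear model as a stand-in for any trained classifier.
w = np.linalg.lstsq(X, y, rcond=None)[0]

def accuracy_under_attack(eps):
    # Worst-case L-infinity evasion for a linear model: shift each sample
    # by eps against its own label, along sign(w).
    X_adv = X - eps * y[:, None] * np.sign(w)
    return float(np.mean(np.sign(X_adv @ w) == y))

# Security curve: accuracy degrades monotonically with the attack budget.
curve = [accuracy_under_attack(e) for e in (0.0, 0.5, 1.0, 1.5, 2.0)]
```

For a linear model each sample's margin decreases linearly in eps, so the curve is non-increasing by construction; comparing such curves across classifiers is the point of the empirical evaluations listed above.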
Design of machine-learning algorithms in adversarial environments: taxonomy of possible defense strategies; evasion attacks and countermeasures; poisoning and backdoor attacks and countermeasures; privacy-related threats and defenses. (8 hours)
Practical sessions with Python: hands-on classes on attacks against machine-learning algorithms and the corresponding defenses. (8 hours)
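In the spirit of these practical sessions, here is a minimal evasion-attack sketch in plain numpy (an illustration only, not SecML code): a test sample is iteratively perturbed along the negative gradient of its classification score and projected onto an L-infinity ball of radius eps, in the style of projected gradient descent. The data, model, and step sizes are assumed for the example.

```python
import numpy as np

# Toy training data and a least-squares linear model (stand-in classifier).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
w = np.linalg.lstsq(X, y, rcond=None)[0]

def predict(x):
    return np.sign(x @ w)

x0 = np.array([1.2, 1.1])   # a sample correctly classified as +1
eps, step = 1.5, 0.3

x_adv = x0.copy()
for _ in range(20):
    x_adv = x_adv - step * np.sign(w)            # gradient of the linear score is w
    x_adv = np.clip(x_adv, x0 - eps, x0 + eps)   # project onto the eps-ball around x0
# x_adv now sits on the boundary of the budget and is misclassified as -1.
```

The same loop structure carries over to non-linear models by replacing w with the gradient of the model's score at x_adv.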
Part 2: Trustworthy Artificial Intelligence (18 hours)
AI regulations: the European AI Act; the European ethics guidelines for trustworthy AI; AI regulations worldwide. (3 hours)
Fairness and privacy of machine learning: fairness and privacy-related threats and defenses. (6 hours)
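As a small illustration of the fairness topics above (my own toy example, not course material), the snippet below computes the demographic parity difference: the gap in positive-prediction rates between two groups defined by a protected attribute. The predictions and group labels are made up for the example.

```python
import numpy as np

# Hypothetical model decisions and protected-group membership.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # 1 = positive decision
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute (two groups)

rate_0 = y_pred[group == 0].mean()   # positive rate in group 0: 0.75
rate_1 = y_pred[group == 1].mean()   # positive rate in group 1: 0.25
dp_diff = abs(rate_0 - rate_1)       # demographic parity difference: 0.5
```

A value of 0 would indicate equal positive rates across groups; larger values quantify the disparity that fairness-aware defenses try to reduce.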
Robust AI: robust optimization in machine learning; design of machine-learning algorithms in the wild and out-of-distribution pattern recognition. (3 hours)
Explainable AI: explainability methods. Global and local methods. Model-specific and model-agnostic methods. (3 hours)
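To make the local, model-agnostic category above concrete, here is a minimal occlusion-based explanation sketch (an illustration of the general idea, not a specific course method): each feature is replaced in turn by a baseline value, and the drop in the black-box model's score is taken as that feature's local importance. The model and baseline are assumptions for the example.

```python
import numpy as np

def score(x):
    # Hypothetical black-box model: feature 0 dominates the score.
    return 3.0 * x[0] + 0.1 * x[1]

x = np.array([1.0, 1.0])        # the sample to be explained
baseline = np.zeros_like(x)     # baseline value used for occlusion

importance = []
for i in range(len(x)):
    x_occ = x.copy()
    x_occ[i] = baseline[i]                   # occlude feature i
    importance.append(score(x) - score(x_occ))
# importance[0] ≈ 3.0 and importance[1] ≈ 0.1: the explanation correctly
# attributes most of the score to feature 0.
```

Because the method only queries score(x), it applies unchanged to any model, which is what "model-agnostic" means in the taxonomy above.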
Practical sessions with Python: hands-on classes on explainable machine-learning algorithms and AI regulations. (3 hours)
A. Joseph, B. Nelson, B. Rubinstein, D. Tygar. Adversarial Machine Learning. Cambridge University Press, 2018.
B. Biggio, F. Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84: 317-331, 2018.
B. Biggio, F. Roli. Wild Patterns: Half-day Tutorial on Adversarial Machine Learning. https://www.pluribus-one.it/research/sec-ml/wild-patterns
B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Srndic, P. Laskov, G. Giacinto, F. Roli. Evasion attacks against machine learning at test time. ECML-PKDD, 2013.
B. Biggio, G. Fumera, F. Roli. Security evaluation of pattern classifiers under attack. IEEE Trans. Knowl. Data Eng., 26(4): 984-996, 2014.
Office hours: By appointment, scheduled by email.
Office hours: Contact the instructor by email.
See the official calendar of the University of Genova.
All class schedules are posted on the EasyAcademy portal.
Intermediate in-class assignments (closed-book numerical/coding exercises and open-ended questions) + final home assignment. The final home assignment can be carried out in teams of up to 3 students. Grading: intermediate in-class assignments (15/30) + final home assignment (15/30).
|09/01/2023||09:00||GENOVA||Exam by appointment||Oral examination is by appointment only. Please contact the course instructors to set the exact date and time. You can take the oral examination only after you have completed and delivered your project assignment.|
|09/01/2023||09:00||GENOVA||Oral||Oral examination is by appointment only. Please contact the course instructors to set the exact date and time. You can take the oral examination only after you have completed and delivered your project assignment.|
|30/01/2023||09:00||GENOVA||Exam by appointment||Oral examination is by appointment only. Please contact the course instructors to set the exact date and time. You can take the oral examination only after you have completed and delivered your project assignment.|
|13/02/2023||09:00||GENOVA||Exam by appointment|
|05/06/2023||09:00||GENOVA||Exam by appointment|
|26/06/2023||09:00||GENOVA||Exam by appointment|
|17/07/2023||09:00||GENOVA||Exam by appointment|
|04/09/2023||09:00||GENOVA||Exam by appointment|