|SCIENTIFIC DISCIPLINARY SECTOR||ING-INF/05|
Today, machine-learning algorithms and AI-based systems are used in many real-world applications, including image recognition, spam filtering, malware detection, and biometric recognition. In these applications, the learning algorithm may have to face intelligent and adaptive attackers who can carefully manipulate data to purposely subvert both the learning and the operational phases. Part 1 of the course introduces the fundamentals of the security of machine learning and the related field of adversarial machine learning. Part 2 introduces the international regulations behind so-called “trustworthy AI”. The course uses application examples including object recognition in images, biometric recognition, spam filtering, and malware detection.
The aim of this course is to provide graduate students with fundamental and advanced concepts on the security of machine learning and trustworthy artificial intelligence. Part 1 of the course introduces the fundamentals of the security of machine learning, the related field of adversarial machine learning, and some practical techniques to assess the vulnerability of machine-learning algorithms and to protect them from adversarial attacks. Part 2 introduces the international regulations behind so-called “trustworthy AI”, and the main techniques to design robust machine-learning algorithms that are fair, privacy-preserving, and whose operation can be explained to some extent to the final users. The course uses application examples including object recognition in images, biometric recognition, spam filtering, and malware detection.
The aim of this course is to provide graduate students with fundamental and advanced concepts on the security of machine learning and trustworthy artificial intelligence.
This course is for graduate students who have already attended basic courses in machine learning and artificial intelligence (or have basic/intermediate knowledge of these topics) and have basic/intermediate knowledge of programming languages (in particular, Python).
Lectures. The lecturer will use slides; copies of the slides will be provided to the students. Hands-on classes on attacks against and defenses of machine-learning algorithms use the SecML open-source Python library for the security evaluation of machine-learning algorithms (https://secml.readthedocs.io/).
Part 1: Security of Machine Learning
Introduction to machine learning security: introduction by practical examples from computer vision, biometrics, spam filtering, malware detection.
Threat models and attacks to machine learning. Modelling adversarial tasks. The two-player model (the attacker and the classifier). Levels of reciprocal knowledge of the two players (perfect knowledge, limited knowledge, knowledge by queries and feedback). Attack models against machine learning: evasion, poisoning, backdoor and privacy attacks. The concepts of security by design and security by obscurity.
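The perfect-knowledge evasion setting above can be illustrated with a minimal sketch, assuming a hypothetical linear classifier (all names, weights, and the perturbation budget below are illustrative, not taken from the course material): the attacker, who knows the model exactly, moves a point against the gradient of the decision function until its label flips.

```python
import numpy as np

# Hypothetical linear classifier f(x) = sign(w.x + b); values are illustrative.
w = np.array([1.0, -2.0])
b = 0.5

def predict(x):
    return np.sign(w @ x + b)

# A "malicious" point initially classified as positive (+1).
x = np.array([2.0, 0.2])

# Perfect-knowledge evasion: step against the gradient of the decision
# function (for a linear model, simply w), subject to a maximum
# perturbation budget eps (L2 norm).
eps, step = 2.0, 0.1
x_adv = x.copy()
while predict(x_adv) == 1.0 and np.linalg.norm(x_adv - x) < eps:
    x_adv -= step * w / np.linalg.norm(w)  # unit-norm gradient step

print(predict(x_adv))  # the label has flipped to -1 within the budget
```

With a stronger regularizer or a nonlinear model the same loop would follow the (numerically estimated) gradient instead of the fixed direction w.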
Evaluation of machine learning algorithms in adversarial environments. Vulnerability assessment via formal methods and empirical evaluations. Adaptive and non-adaptive evaluations.
Design of machine learning algorithms in adversarial environments: taxonomy of possible defense strategies. Evasion attacks and countermeasures. Poisoning and backdoor attacks and countermeasures. Privacy-related threats and defenses.
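A standard countermeasure against evasion is adversarial training, i.e., solving a min-max (robust optimization) problem: perturb each training point to maximize its loss, then update the model on the perturbed points. A minimal sketch for a linear logistic classifier follows; the dataset, budget `eps`, and learning rate are illustrative assumptions, not course material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-Gaussian dataset (illustrative): class +1 around (1,1), class -1 around (-1,-1).
n = 100
X = np.vstack([rng.normal(+1.0, 1.0, size=(n, 2)),
               rng.normal(-1.0, 1.0, size=(n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

def grad_w(w, X, y):
    """Gradient of the mean logistic loss log(1 + exp(-y * w.x)) wrt w."""
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))  # sigma(-y * w.x)
    return -(y * s) @ X / len(y)

eps, lr = 0.3, 0.5
w = np.zeros(2)
for _ in range(200):
    # Inner maximization (FGSM-style, linf budget eps): shift each point
    # in the direction that increases its own loss the most.
    X_adv = X - eps * y[:, None] * np.sign(w)
    # Outer minimization: update w on the perturbed points.
    w -= lr * grad_w(w, X_adv, y)

acc = np.mean(np.sign(X @ w) == y)  # clean accuracy of the robust model
```

The inner step uses the closed-form worst-case perturbation available for linear models; for deep networks it is replaced by iterative gradient-based attacks.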
Practical sessions with Python. Hands-on classes on attacks against and defenses of machine-learning algorithms.
Part 2: Trustworthy Artificial Intelligence
AI regulations: the European AI Act. European ethics guidelines for trustworthy AI. AI regulations around the world.
Fairness and privacy of machine learning: fairness and privacy-related threats and defenses.
Robust AI: robust optimization in machine learning; design of machine-learning algorithms in the wild and out-of-distribution pattern recognition.
Explainable AI: explainability methods. Global and local methods. Model-specific and model-agnostic methods.
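As an example of a global, model-agnostic method of the kind listed above, permutation importance measures how much a black-box model's accuracy drops when one feature's values are shuffled. The sketch below uses an illustrative stand-in "model" that depends only on feature 0; all names and data are assumptions, not course material.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative black-box setup: only feature 0 actually matters.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(float)
model = lambda X: (X[:, 0] > 0).astype(float)  # stand-in "trained" model

def permutation_importance(model, X, y):
    """Global, model-agnostic explanation: accuracy drop when one
    feature's column is shuffled, breaking its link with the output."""
    base = np.mean(model(X) == y)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(base - np.mean(model(Xp) == y))
    return np.array(drops)

imp = permutation_importance(model, X, y)
# Feature 0 dominates; features 1 and 2 score (near) zero.
```

Local methods covered in the course (e.g., gradient-based attributions) instead explain a single prediction rather than the model as a whole.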
Practical sessions with Python. Hands-on classes on explainable machine-learning algorithms and AI regulations.
A. Joseph, B. Nelson, B. Rubinstein, D. Tygar, Adversarial Machine Learning, Cambridge University Press, 2018.
B. Biggio, F. Roli, Wild patterns: Ten years after the rise of adversarial machine learning, Pattern Recognition 84 (2018): 317-331.
B. Biggio, F. Roli, Wild Patterns, half-day tutorial on adversarial machine learning: https://www.pluribus-one.it/research/sec-ml/wild-patterns
B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Srndic, P. Laskov, G. Giacinto, F. Roli, Evasion attacks against machine learning at test time, ECML-PKDD, 2013.
B. Biggio, G. Fumera, F. Roli, Security evaluation of pattern classifiers under attack, IEEE Trans. Knowl. Data Eng., 26(4):984-996, 2014.
Office hours: By appointment, scheduled by email.
Office hours: Contact the instructor by email.
See the official calendar of the University of Genova.
All class schedules are posted on the EasyAcademy portal.
Intermediate in-class assignments, or a home assignment plus an oral exam.
Intermediate in-class assignments (closed-book solutions of numerical/coding exercises and open-ended/closed questions), or a home assignment plus an oral exam. Grading: open/closed questions (15/30) + numerical/coding exercises (15/30).
Contact the instructor by email.