CODE 108606
ACADEMIC YEAR 2024/2025
CREDITS
SCIENTIFIC DISCIPLINARY SECTOR ING-INF/05
LANGUAGE English
TEACHING LOCATION
  • GENOVA
SEMESTER 1° Semester
TEACHING MATERIALS AULAWEB

OVERVIEW

Today, machine-learning algorithms and AI-based systems are used in many real-world applications, including image recognition, spam filtering, malware detection, and biometric recognition. In these applications, the learning algorithm may have to face intelligent, adaptive attackers who can carefully manipulate data to purposely subvert both the learning and the operational phases. Part 1 of the course introduces the fundamentals of machine-learning security and the related field of adversarial machine learning. Part 2 introduces the international regulations behind so-called "trustworthy AI". The course uses application examples including object recognition in images, biometric recognition, spam filtering, and malware detection.

AIMS AND CONTENT

LEARNING OUTCOMES

The aim of this course is to provide graduate students with fundamental and advanced concepts in the security of machine learning and trustworthy artificial intelligence. Part 1 of the course introduces the fundamentals of machine-learning security, the related field of adversarial machine learning, and practical techniques to assess the vulnerability of machine-learning algorithms and to protect them from adversarial attacks. Part 2 introduces the international regulations behind so-called "trustworthy AI", and the main techniques to design robust machine-learning algorithms that are fair, privacy-preserving, and whose operation can be explained to some extent to the final users. The course uses application examples including object recognition in images, biometric recognition, spam filtering, and malware detection.

AIMS AND LEARNING OUTCOMES

The aim of this course is to provide graduate students with fundamental and advanced concepts on the security of machine learning and trustworthy artificial intelligence.

The last part of the course aims to strengthen the students' ability to identify and pursue the learning objectives needed to pass the assessed final test.

PREREQUISITES

This course is intended for graduate students who have already attended introductory courses in machine learning and artificial intelligence (or have basic/intermediate knowledge of these fields) and have basic/intermediate knowledge of programming, in particular the Python language.

TEACHING METHODS

Lectures, for which the lecturer will use slides; copies of the slides will be provided to the students. Hands-on classes on attacks against and defences of machine-learning algorithms, using the SecML open-source Python library for the security evaluation of machine learning (https://secml.readthedocs.io/).

The last part of the course consists of practical exercises that develop problem-solving skills and self-assessment of learning, allowing students to identify and pursue the learning objectives needed to pass the assessed final test.

SYLLABUS/CONTENT

Part 1: Security of Machine Learning

  • Introduction to machine learning security: introduction by practical examples from computer vision, biometrics, spam filtering, malware detection.

  • Threat models and attacks on machine learning. Modelling adversarial tasks. The two-player model (the attacker and the classifier). Levels of reciprocal knowledge of the two players (perfect knowledge, limited knowledge, knowledge by queries and feedback). Attack models against machine learning: evasion, poisoning, backdoor, and privacy attacks. The concepts of security by design and security by obscurity.

  • Evaluation of machine learning algorithms in adversarial environments. Vulnerability assessment via formal methods and empirical evaluations. Adaptive and non-adaptive evaluations.

  • Design of machine learning algorithms in adversarial environments: taxonomy of possible defense strategies. Evasion attacks and countermeasures. Poisoning attacks and countermeasures. Backdoor poisoning, privacy-related threats, and the corresponding defenses.

  • Practical sessions with Python. Hands-on classes on attacks against machine learning and defences of machine-learning algorithms.
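The perfect-knowledge evasion setting above can be sketched in a few lines of Python. This is an illustrative toy example using scikit-learn and a linear model, not the SecML-based material of the actual labs; the dataset, perturbation budgets, and model choice are assumptions made here for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data: a hypothetical stand-in for the feature vectors used in the labs.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = LogisticRegression().fit(X, y)

# Pick a sample the classifier assigns to class 1, and read off the gradient
# of the linear decision function (simply the weight vector w).
x = X[clf.predict(X) == 1][0]
w = clf.coef_.ravel()

# Perfect-knowledge evasion: sweep the attacker's L-infinity budget eps and
# watch the decision value drop -- a minimal "security evaluation" curve.
for eps in [0.0, 0.5, 1.0, 2.0, 4.0]:
    x_adv = x - eps * np.sign(w)  # step against the direction supporting class 1
    score = clf.decision_function([x_adv])[0]
    print(f"eps={eps}: decision value {score:.3f}, "
          f"predicted class {clf.predict([x_adv])[0]}")
```

Plotting the classifier's accuracy against the attacker's budget in this way yields the empirical security-evaluation curves discussed in the course.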

Part 2: Trustworthy Artificial Intelligence

  • AI regulations: the European AI Act. European ethics guidelines for trustworthy AI. AI regulations around the world.

  • Fairness and privacy of machine learning: fairness and privacy-related threats and defenses.

  • Robust AI: robust optimization in machine learning; design of machine learning algorithms in the wild and out-of-distribution pattern recognition.

  • Explainable AI: explainability methods. Global and local methods. Model-specific and model-agnostic methods.

  • Practical sessions with Python: hands-on classes on explainable machine-learning algorithms and AI regulations.
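As a taste of the model-agnostic explainability methods listed above, the following sketch computes permutation feature importance with scikit-learn. The data layout and model are illustrative assumptions chosen so that the informative features are known in advance; the actual lab content may use different tools.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data: with shuffle=False the two informative features are columns 0 and 1;
# columns 2-5 are pure noise (an illustrative setup, not course material).
X, y = make_classification(n_samples=300, n_features=6, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic, global explanation: shuffle one feature at a time and measure
# how much the accuracy drops; important features cause a larger drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

Because the method only queries the model's predictions, it applies unchanged to any classifier, which is what distinguishes model-agnostic from model-specific explanation techniques.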

RECOMMENDED READING/BIBLIOGRAPHY

A. Joseph, B. Nelson, B. Rubinstein, D. Tygar, Adversarial machine learning, Cambridge University Press, 2018

B. Biggio, F. Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition 84 (2018): 317-331.

B. Biggio, F.Roli, Wild Patterns, Half-day Tutorial on Adversarial Machine Learning: https://www.pluribus-one.it/research/sec-ml/wild-patterns

Biggio, B., Corona, I., Maiorca, D., Nelson, B., Srndic, N., Laskov, P., Giacinto, G., Roli, F. Evasion attacks against machine learning at test time. ECML-PKDD, 2013.

Biggio, B., Fumera, G., Roli, F. Security evaluation of pattern classifiers under attack. IEEE Trans. Knowl. Data Eng., 26 (4):984–996, 2014.

TEACHERS AND EXAM BOARD

Exam Board

FABIO ROLI (President)

LUCA ONETO

DAVIDE ANGUITA (President Substitute)

LUCA DEMETRIO (President Substitute)

LESSONS

LESSONS START

See the official calendar of the University of Genova.

Class schedule

The timetable for this course is available here: Portale EasyAcademy

EXAMS

EXAM DESCRIPTION

Intermediate in-class assignments, or a home assignment plus an oral exam.

ASSESSMENT METHODS

Intermediate in-class assignments (closed-book solutions of numerical/coding exercises and open-ended/closed questions), or a home assignment plus an oral exam. Grading policy: open/closed questions (15/30) + numerical/coding exercises (15/30).

Students with certification of Specific Learning Disabilities (SLD), disabilities, or other special educational needs must contact the instructor at the beginning of the course to agree on teaching and examination methods that, while respecting the course objectives, take into account individual learning styles and provide appropriate compensatory tools. Please note that requests for compensatory/dispensatory measures for exams must be sent to the course instructor, the School representative, and the "Settore servizi per l'inclusione degli studenti con disabilità e con DSA" office (dsa@unige.it) at least 10 working days before the test, as per the guidelines available at: https://unige.it/disabilita-dsa

FURTHER INFORMATION

Contact the instructor by email.

Agenda 2030 - Sustainable Development Goals
Industry, innovation and infrastructure