CS-E4001 - Research Seminar in Computer Science D: Research Seminar on Security and Privacy of Machine Learning, 02.03.2021-28.05.2021

This course space end date is set to 28.05.2021.

Materials

    Methodology for reading research papers

    Here you can find a short paper, "How to Read a Paper" by Keshav, that provides a good methodology for reading research papers: http://ccr.sigcomm.org/online/files/p83-keshavA.pdf


    Systematization of knowledge on adversarial machine learning

    • Adversarial Machine Learning (Huang et al., 2011)
    • SoK: Security and Privacy in Machine Learning (Papernot et al., 2017)
    • Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning (Biggio and Roli, 2018)


    Download link to papers

    Before each discussion session, you must read one of the papers that will be presented during that discussion, plus either the other paper presented during the discussion or an optional paper on the same theme as the discussion session.

    Papers presented during discussions

    1. Model evasion
      • Devil's Whisper: A General Approach for Physical Adversarial Attacks against Commercial Black-box Speech Recognition Devices (Chen et al., 2020)
      • On Adaptive Attacks to Adversarial Example Defenses (Tramer et al., 2020)
    2. Model poisoning
      • Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization (Munoz-Gonzalez et al., 2017)
      • Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning (Chen et al., 2017)
    3. Compromised training library/platform
      • BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain (Gu et al., 2017)
      • Machine Learning Models that Remember Too Much (Song et al., 2017)
    4. Model stealing
      • High Accuracy and High Fidelity Extraction of Neural Networks (Jagielski et al., 2019)
      • Imitation Attacks and Defenses for Black-box Machine Translation Systems (Wallace et al., 2020)
    5. Protecting intellectual property of models
      • Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring (Adi et al., 2018)
      • DAWN: Dynamic Adversarial Watermarking of Neural Networks (Szyller et al., 2019)
    6. Data leakage
      • The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks (Carlini et al., 2019)
      • ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models (Salem et al., 2019)
    7. Tracing training data
      • Radioactive data: tracing through training (Sablayrolles et al., 2020)
      • Auditing Data Provenance in Text-Generation Models (Song and Shmatikov, 2019)
    8. Privacy-preserving training
      • Learning Differentially Private Recurrent Language Models (McMahan et al., 2018)
      • Auditing Differentially Private Machine Learning: How Private is Private SGD? (Jagielski et al., 2020)
    9. Fairness & bias in ML prediction
      • Characterising Bias in Compressed Models (Hooker et al., 2020)
      • On the Privacy Risks of Algorithmic Fairness (Chang and Shokri, 2020)


    Additional papers (optional reading)

    1. Model evasion
      • Adversarial Examples Are Not Bugs, They Are Features (Ilyas et al., 2019)
      • TextBugger: Generating Adversarial Text Against Real-world Applications (Li et al., 2018)
      • Certified Defenses Against Adversarial Examples (Raghunathan et al., 2018)
      • Ensemble Adversarial Training: Attacks and Defenses (Tramèr et al., 2020)
    2. Model poisoning
      • Poisoning Attacks against Support Vector Machines (Biggio et al., 2012)
      • Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (Shafahi et al., 2018)
      • Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks (Wang et al., 2019)
      • Certified Defenses for Data Poisoning Attacks (Steinhardt et al., 2017)
    4. Model stealing
      • Exploring Connections Between Active Learning and Model Extraction (Chandrasekaran et al., 2018)
      • Model Extraction Attacks Against Recurrent Neural Networks (Takemura et al., 2020)
      • Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks (Orekondy et al., 2020)
      • Extraction of Complex DNN Models: Real Threat or Boogeyman? (Atli et al., 2020)
    5. Protecting intellectual property of models
      • REFIT: a Unified Watermark Removal Framework for Deep Learning Systems with Limited Data (Chen et al., 2020)
      • Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks (Aiken et al., 2020)
      • Deep Neural Network Fingerprinting by Conferrable Adversarial Examples (Lukas et al., 2019)
      • Rethinking deep neural network ownership verification: Embedding passports to defeat ambiguity attacks (Fan et al., 2019)
    6. Data leakage
      • Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures (Fredrikson et al., 2015)
      • Extracting Training Data from Large Language Models (Carlini et al., 2020)
      • Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting (Yeom et al., 2018)
    7. Training data privacy
      • Dataset Inference: Ownership Resolution in Machine Learning (Maini et al., 2021)
      • Towards Probabilistic Verification of Machine Unlearning (Sommer et al., 2020)
    8. Privacy-preserving training
      • Tempered Sigmoid Activations for Deep Learning with Differential Privacy (Papernot et al., 2020)
      • Certified Robustness to Adversarial Examples with Differential Privacy (Lecuyer et al., 2018)
      • Privacy Risks of Securing Machine Learning Models Against Adversarial Examples (Song et al., 2019)
    9. Fairness & bias in ML prediction
      • POTS: Protective Optimisation Technologies (Kulynych et al., 2018)
      • Delayed Impact of Fair Machine Learning (Liu et al., 2018)
      • Equality of Opportunity in Supervised Learning (Hardt et al., 2016)
      • The Frontiers of Fairness in Machine Learning (Chouldechova and Roth, 2018)
      • Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems (Datta et al., 2016)

    Course slides + recordings (folder; access restricted to course participants)

