AN1: Transformers for time series forecasting

Tutor: Alexander Nikitin (alexander.nikitin@aalto.fi)

The topic explores the applications of transformer architectures to time series forecasting. Students can choose which of the architectures they want to explore; implementing the selected architectures is desirable.
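
As a concrete starting point, here is a minimal sketch of an encoder-only transformer forecaster, assuming PyTorch (the window/horizon sizes and all hyperparameters are illustrative choices, not taken from the referenced papers, and positional encoding is omitted for brevity):

    import torch
    import torch.nn as nn

    class TinyForecaster(nn.Module):
        """Encoder-only transformer mapping a history window to a forecast."""
        def __init__(self, d_model=64, nhead=4, horizon=24):
            super().__init__()
            self.embed = nn.Linear(1, d_model)       # scalar series -> model dim
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, horizon)  # forecast from last position

        def forward(self, x):                        # x: (batch, window, 1)
            h = self.encoder(self.embed(x))          # no positional encoding here
            return self.head(h[:, -1])               # (batch, horizon)

    model = TinyForecaster()
    window = torch.randn(8, 96, 1)                   # 8 series, 96 past steps
    print(model(window).shape)                       # torch.Size([8, 24])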

References:

  • https://arxiv.org/abs/2201.12740
  • https://proceedings.neurips.cc/paper/2021/hash/bcc0d400288793e8bdcd7c19a8ac0c2b-Abstract.html
  • https://www.sciencedirect.com/science/article/pii/S0169207021000637


AN2: Graph Neural Networks

Tutor: Alexander Nikitin (alexander.nikitin@aalto.fi)

Graph neural networks (GNNs) are one of the most promising methods in deep learning. GNNs operate on graph-domain inputs instead of data from Euclidean space, which allows for many applications, for example drug design, network analysis, and natural language processing. In this project, we will survey different types of GNNs, for instance graph convolutional networks, graph attention networks, and graph recurrent neural networks. The idea is to focus mainly on continuous-time normalizing flows, but it can be adjusted to the student’s research preferences.
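
To fix ideas, the NumPy sketch below implements a single graph convolutional layer using the propagation rule of Kipf and Welling, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W); the tiny graph and random features are purely illustrative:

    import numpy as np

    def gcn_layer(A, H, W):
        """One graph convolution: normalize the adjacency, mix neighbor features."""
        A_hat = A + np.eye(A.shape[0])             # add self-loops
        d = A_hat.sum(axis=1)
        D_inv_sqrt = np.diag(d ** -0.5)            # symmetric degree normalization
        return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path graph, 3 nodes
    H = np.random.randn(3, 4)                                # node features
    W = np.random.randn(4, 2)                                # learnable weights
    print(gcn_layer(A, H, W).shape)                          # (3, 2)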

References:

  • https://arxiv.org/pdf/1901.00596.pdf


AU: Misinformation classification and community detection on Twitter

Tutor: Ali Unlu (ali.unlu@aalto.fi)

Acting on the wrong information can kill. According to the World Health Organization, nearly 6,000 people around the globe were hospitalized in the first three months of 2020 because of coronavirus misinformation. During this period, at least 800 people may have died due to misinformation related to COVID-19. At its extreme, death can be the tragic outcome of misinformation. False information runs the gamut, from discrediting the threat of COVID-19 to conspiracy theories that vaccines could alter human DNA, and so on. An ongoing project, “Crisis Narratives”, a multidisciplinary research consortium between Aalto University and the Finnish Institute for Health and Welfare (THL), investigates COVID-19-related narratives. As part of this project, the task focuses on what kinds of misinformation were shared during the corona epidemic, who shared this type of information, how online activism was organized, and how this threat emerged and changed on Twitter.

Prerequisite: Basics of Data Mining, programming skills, preferably R/Python. 

Language: Finnish (preferred, since the data is in Finnish) or English.


AG: Allocation of Compliance Responsibilities in Artificial Intelligence Lifecycle

Tutor: Ana Paula Gonzalez Torres (ana.gonzaleztorres@aalto.fi)

This research examines how regulatory compliance responsibilities can be feasibly distributed among the parties involved in the development and deployment of AI systems for public sector services. In particular, we consider how to support compliance with the “AI Act” when public administration institutions adopt AI applications by means of Artificial Intelligence as a Service (AIaaS) offered by private organisations.

References:

  • De Silva, D., & Alahakoon, D. (2022, June 10). An artificial intelligence life cycle: From conception to production. Patterns, 3(6), 1-13. https://doi.org/10.1016/j.patter.2022.100489
  • European Commission. (2021, April 21). Proposal for a Regulation of the European Parliament and of the Council. Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. Brussels.
  • Mäntymäki, M., Minkkinen, M., Birkstedt, M., & Viljanen, M. (2022). Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance. arXiv, abs/2206.00335.
  • Pant, A. (2019, January 11). Workflow of a Machine Learning project. Retrieved from Towards Data Science: https://towardsdatascience.com/workflow-of-a-machine-learning-project-ec1dba419b94


AY: Microservices - when and how to use them

Tutor: Antti Ylä-Jääski (antti.yla-jaaski@aalto.fi)

Microservice architecture is a modern approach to software system design in which the functionality of the system is divided into small independent units. Microservice systems differ from more traditional monolithic systems in many ways, some of which are unexpected. Microservices have become very popular in recent years, and an increasing number of companies (e.g., Amazon, Netflix, LinkedIn) are moving towards dismantling their existing monolithic applications in favor of distributed microservice systems. However, the cost of migrating from a monolithic system to a system based on microservices is often substantial, so the decision needs to be carefully evaluated. In this project work, you will discuss the benefits and drawbacks of adopting a microservice architecture in comparison to a monolithic architecture. Another option is to describe and discuss how a service mesh provides container- and microservice-based applications with services within the compute cluster.

Reference:

  • Pooyan Jamshidi, Claus Pahl, Nabor C. Mendonça, James Lewis, and Stefan Tilkov. Microservices: The journey so far and challenges ahead. IEEE Software, 35(3):24–35, 2018.



AB: Likelihood-free model selection

Tutor: Ayush Bharti (ayush.bharti@aalto.fi)

Model selection or comparison entails picking the best-fitting model from a set of candidate models. This process involves inferring the parameters of the given models from data via the likelihood function. However, for many models in the engineering and physical sciences, the likelihood function is unavailable. Many methods have been proposed in the literature to address this issue. In this project, you will review the existing literature on likelihood-free model selection methods and implement them. The example model can be chosen based on the interests of the student.
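
One classical likelihood-free approach is rejection approximate Bayesian computation (ABC), which replaces likelihood evaluations with simulations. The NumPy sketch below performs ABC model choice between two made-up candidate models; the summary statistics, tolerance, and sample sizes are all illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    observed = rng.normal(0.0, 1.0, size=200)        # pretend likelihood is unknown

    def summary(x):                                  # summary statistics
        return np.array([np.mean(x), np.std(x)])

    simulators = {                                   # candidate models, prior on loc
        "gaussian": lambda: rng.normal(rng.uniform(-1, 1), 1.0, size=200),
        "laplace":  lambda: rng.laplace(rng.uniform(-1, 1), 1.0, size=200),
    }

    s_obs, eps, counts = summary(observed), 0.1, {m: 0 for m in simulators}
    for _ in range(20000):
        m = rng.choice(list(simulators))             # uniform prior over models
        if np.linalg.norm(summary(simulators[m]()) - s_obs) < eps:
            counts[m] += 1                           # accept simulated model draw

    total = sum(counts.values())
    print({m: c / total for m, c in counts.items()}) # approx. posterior model probs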

References:

  • https://arxiv.org/pdf/1503.07689.pdf


HK1: Ultra-low energy communication for 6G

Tutor: Hamza Khan (hamza.khan@ericsson.com)

The usage scenarios that have been identified for initial 5G communication are enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and Ultra-Reliable and Low Latency communication (URLLC). These use cases have very different requirements than the low-power wide-area (LPWA) use cases currently addressed by the Reduced Capability (RedCap) and NB-IoT solutions. The consideration of use-case requirements drives the choices of key physical-layer parameters. These choices have a direct impact on the complexity and cost of the device hardware platform. We foresee that ultra-low energy IoT devices will be the next step for beyond 5G and 6G communication.

References:

  • https://arxiv.org/pdf/2111.07607.pdf
  • https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8642801
  • https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9269936


HK2: Passive IoT solutions for 6G

Tutor: Hamza Khan (hamza.khan@ericsson.com)

The future of communication is envisioned without the hassle of replacing or charging batteries. The era of zero-energy devices relies on energy harvested from the surroundings – from vibrations, from light, from temperature gradients, or even from the radio-frequency waves themselves. An example could be parcels en route that can be tracked using low-cost, zero-energy devices, potentially printed directly on the boxes.

References:

  • https://ieeexplore.ieee.org/abstract/document/9789440
  • https://ieeexplore.ieee.org/abstract/document/9537929
  • https://link.springer.com/article/10.1007/s11432-020-3261-5
  • https://www.mdpi.com/1996-1073/14/24/8219


SS1: Security indicators and warnings

Tutor: Sanna Suoranta (sanna.suoranta@aalto.fi)

Much software warns the user with a dialogue when something unexpected happens. However, people often ignore the warnings and just click the OK button to get rid of the dialogue. The reason is not necessarily indifference or negligence but the way the human brain habituates to stimuli that are seen often. Furthermore, researchers can use eye tracking to investigate which areas of web services users look at. Users often do not look at security indicators, just at the content of a service page. The aim of this work is to investigate what researchers have suggested as solutions to the problem that users fail to notice security issues while using web services.

References:

  • Anderson, B. B., Jenkins, J. L., Vance, A., Kirwan, B., and Eargle, D. (2016). Your memory is working against you: How eye tracking and memory explain habituation to security warnings. Decision Support Systems, 92:3-13.
  • Darwish, A. and Bataineh, E. (2012). Eye tracking analysis of browser security indicators. In 2012 International Conference on Computer Systems and Industrial Informatics.


SS2: Usability of passwords

Tutor: Sanna Suoranta (sanna.suoranta@aalto.fi)

Passwords are known to all who use digital services. Many other means of authentication have been suggested, but we still use passwords. For example, if automatically generated strong passwords are pronounceable, they are easier to remember. Furthermore, mobile devices have created new problems for typing in passwords. The aim of this work is to investigate how the usability of passwords can be improved.

References:

  • Bergstrom, J. R., Frisch, S. A., Hawkings, D. C., Hackenbracht, J., Greene, K. K., Theofanos, M., and Griepentrog, B. (2014). Development of a scale to assess the linguistic and phonological difficulty of passwords. In Cross-Cultural Design, 6th International Conference, CCD 2014, Held as Part of HCI International 2014.
  • Greene, K. K., Gallagher, M. A., Staton, B. C., and Lee, P. Y. (2014). I can't type that! P@$$w0rd entry on mobile devices. In Human Aspects of Information Security, Privacy, and Trust, Second International Conference, HAS 2014.
  • Greene, K. K., Kelsey, J., and Franklin, J. M. (2016). Measuring the Usability and Security of Permuted Passwords on Mobile Platforms. NISTIR 8040. NIST. http://dx.doi.org/10.6028/NIST.IR.8040


SS3: Psychometry for researching usable security

Tutor: Sanna Suoranta (sanna.suoranta@aalto.fi)

Instead of just asking users, the development of psychometric tools has given us ways to really see how users react to software. For example, a decrease in the amplitude of the peripheral vascular pulse indicates mental stress, and it can be detected with a photoplethysmogram. There is an increasing amount of research where these tools are used to improve the usability of software, but what about the usability of security? The aim of this work is to investigate how psychometry is used in research on usable security.

References:

  • Cowley, B., Filetti, M., Lukander, K., Torniainen, J., Henelius, A., Ahonen, L., Barral, O., Kosunen, I., Valtonen, T., Huotilainen, M., Ravaja, N., and Jacucci, G. (2015). The psychophysiology primer: A guide to methods and a broad review with a focus on human-computer interaction. Foundations and Trends in Human-Computer Interaction, 9(3-4):151-308.


TN: ML-based Approach for Profiling Microservices at the Edge

Tutor: Tri Nguyen (tri.m.nguyen@aalto.fi)

Since edge computing is rapidly growing and offers many advantages, microservice applications are increasingly deployed in such environments. Towards various optimization goals, profiling of microservices is essential for allocating, scheduling, migrating, and scaling them on devices with limited resources. In this topic, students will discuss different profiling methods/techniques/tools (possibly on cloud) and their applicability in edge environments. The discussion should focus on ML-based solutions in comparison to heuristic approaches and should consider various aspects such as metrics (network/CPU/memory, ...), goals (resource efficiency/QoS, ...), infrastructure (edge/cloud/hybrid, ...), limitations, etc.
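
As an example of the raw measurements such profilers collect, the sketch below samples CPU, memory, and network counters with the psutil library (the sampling interval and choice of metrics are illustrative; a real profiler would attach to the service's PID or its container cgroup):

    import time
    import psutil

    # Sample process-level metrics of the kind a profiler would feed into
    # placement or scaling decisions (CPU, memory, network I/O).
    proc = psutil.Process()               # current process; pass a PID for a service
    net_before = psutil.net_io_counters()
    for _ in range(3):
        time.sleep(1.0)
        net = psutil.net_io_counters()
        print({
            "cpu_percent": proc.cpu_percent(),           # since the previous call
            "rss_mb": proc.memory_info().rss / 2**20,    # resident memory
            "net_tx_kb": (net.bytes_sent - net_before.bytes_sent) / 1024,
        })
        net_before = net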

References:
  • https://ieeexplore.ieee.org/abstract/document/9112926
  • https://ieeexplore.ieee.org/abstract/document/6529276
  • https://dl.acm.org/doi/abs/10.1145/2499368.2451125


TT: Efficient methods for uncertainty in Deep learning

Tutor: Trung Trinh (trung.trinh@aalto.fi)

Bayesian neural networks (BNNs) are neural networks (NNs) whose weights are represented by a distribution. Compared to a deterministic NN, a BNN can in theory produce more accurate and better calibrated predictions. However, due to the sheer number of parameters in modern NNs, BNNs are difficult to train and require massive amounts of computation. Methods have been proposed to improve the efficiency of BNNs, for instance by performing inference in the node space [1] or in the depth space [2]. In this project, we will survey the methods and applications of efficient BNNs, as well as whether these methods can be combined to obtain better performance.
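
For intuition, the sketch below implements the building block that makes BNNs expensive: a linear layer with a mean-field Gaussian over its weights, sampled via the reparameterization trick (PyTorch; the layer and the 50-sample predictive loop are illustrative, and the KL term needed for actual variational training is omitted):

    import torch
    import torch.nn as nn

    class BayesLinear(nn.Module):
        """Linear layer with a factorized Gaussian distribution over weights."""
        def __init__(self, n_in, n_out):
            super().__init__()
            self.mu = nn.Parameter(torch.zeros(n_out, n_in))
            self.rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # pre-softplus std

        def forward(self, x):
            std = torch.nn.functional.softplus(self.rho)
            w = self.mu + std * torch.randn_like(std)   # reparameterization trick
            return x @ w.t()                            # fresh weight sample per call

    layer = BayesLinear(10, 2)
    x = torch.randn(5, 10)
    preds = torch.stack([layer(x) for _ in range(50)])  # 50 Monte Carlo samples
    print(preds.mean(0).shape, preds.var(0).mean())     # predictive mean and spread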

Prerequisite/What to expect: Basic understanding of probability and statistics, machine learning, and programming skills.

References:

  • https://arxiv.org/pdf/2005.07186.pdf
  • https://arxiv.org/pdf/2006.08437.pdf


SJ: A survey on deep model aggregation in federated learning

Tutor: Shaoxiong Ji (shaoxiong.ji@aalto.fi)

Federated learning is a new learning paradigm that decouples data collection and model training via multi-party computation and model aggregation. This topic will review recent papers (published in 2021 and 2022) on model fusion methods such as adaptive aggregation, regularization, clustered methods, and Bayesian methods, and analyze how those methods can solve the challenges in federated learning. 
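
The baseline that the surveyed fusion methods refine is federated averaging (FedAvg), where the server combines client models weighted by local dataset size. A minimal sketch, assuming each client's model is flattened into a NumPy vector:

    import numpy as np

    def fedavg(client_weights, client_sizes):
        """Server step: average client models, weighted by local dataset size."""
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # Three clients hold different amounts of data and send model vectors.
    weights = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
    sizes = [100, 50, 50]
    print(fedavg(weights, sizes))   # [1.25 1.25]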

Prerequisite: knowledge of deep learning

References:
  • https://arxiv.org/pdf/2102.12920.pdf


MO: Fairness in clustering problems

Tutor: Michał Osadnik (michal.osadnik@aalto.fi)

Well-established clustering problems, such as k-means, k-median, and p-centrum, leave much to be desired when it comes to dealing with additional conditions imposed on the structure of the clustering. The reason is that their objective functions primarily model connection cost. As an example, when selecting representatives from society, an additional goal is often to ensure that underrepresented groups are selected in good proportion. The notion of "fairness" is not very novel (Kleinberg 2016), but recently it has also been applied to the field of clustering (Chierichetti 2017). Since then, great effort has been put into studying a variety of requirements, e.g., diversity of nodes inside clusters (Chierichetti 2017), diversity in groups of clusters (Esmaeili 2022), and diversity in selected facilities (Thejaswi 2022). Some older problems can also be seen as representing this type of problem, e.g., capacitated k-median (Trinh 2015, Adamczyk 2018), which imposes upper bounds on cluster capacities. Similarly, the well-known matroid median problem can be used for modeling "fairness". These requirements often significantly increase the theoretical complexity, so FPT algorithms often become relevant. The research might be interesting from the perspectives of computational complexity, approximation factors, bicriteria solutions, or the metrics used (Euclidean, general, doubling).
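
To make the fairness notion concrete, the sketch below computes a balance measure in the spirit of Chierichetti et al. for a two-group protected attribute (the toy assignment is illustrative; any clustering algorithm could produce the labels):

    import numpy as np

    def balance(labels, groups):
        """Minimum over clusters of the minority/majority ratio of the two
        protected groups (1.0 = perfectly mixed, 0.0 = maximally unfair)."""
        vals = []
        for c in np.unique(labels):
            g = groups[labels == c]
            r, b = np.sum(g == 0), np.sum(g == 1)
            vals.append(0.0 if min(r, b) == 0 else min(r / b, b / r))
        return min(vals)

    labels = np.array([0, 0, 0, 1, 1, 1])   # cluster assignment from any algorithm
    groups = np.array([0, 1, 0, 1, 0, 1])   # protected attribute per point
    print(balance(labels, groups))          # 0.5: each cluster mixes groups 2:1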

References:

  • Seyed A. Esmaeili, Fair Labeled Clustering, https://dl.acm.org/doi/10.1145/3534678.3539451
  • Khoa Trinh, A Survey of Algorithms for Capacitated k-median Problems, https://www-hlb.cs.umd.edu/sites/default/files/scholarly_papers/Trinh.pdf
  • Flavio Chierichetti et al., Fair Clustering Through Fairlets https://arxiv.org/abs/1802.05733
  • Mehrdad Ghadiri, Socially Fair k-Means Clustering, https://arxiv.org/abs/2006.10085 (https://www.youtube.com/watch?v=x_70jDxm7X8)
  • Sepideh Mahabadi et al, Individual Fairness for k-Clustering, https://arxiv.org/abs/2002.06742
  • Di Wu et al, New Approximation Algorithms for Fair k-median Problem, https://arxiv.org/abs/2202.06259
  • Ravishankar Krishnaswamy et al, The Matroid Median Problem, https://people.cs.umass.edu/~barna/paper/matroid-median.pdf
  • MohammadTaghi Hajiaghayi et al, Budgeted Red-Blue Median and its Generalizations, https://math.mit.edu/~hajiagha/redblue.pdf
  • Suhas Thejaswi et al., Clustering with fair-center representation: parameterized approximation algorithms and heuristics, https://arxiv.org/abs/2112.07030


LC: Semantic interoperability for automotive vertical

Tutor: Lorenzo Corneo (lorenzo.corneo@gmail.com)

Automotive vertical ecosystems require the collaboration of many parties, such as road transport authorities and vehicle/equipment manufacturers, e.g., for traffic lights, map services, etc. In these complex ecosystems, exchanging information in an interoperable manner is key to achieving scale. For example, traffic safety is nowadays enforced through several existing solutions, from different providers, that are usually incapable of exchanging information in a cooperative fashion. As a result, integrating and extending these solutions is complex and time-consuming, which leads to high costs. This seminar topic proposes to study and analyze state-of-the-art semantic interoperability solutions in the context of the automotive vertical.
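
As a taste of the W3C stack in the references, the sketch below uses the rdflib Python library to load a few RDF triples and query them with SPARQL (the ex: vocabulary is made up for illustration; real deployments would agree on a shared ontology):

    import rdflib

    g = rdflib.Graph()
    ttl = """
    @prefix ex: <http://example.org/auto#> .
    ex:light42 a ex:TrafficLight ; ex:state "red" ; ex:locatedAt ex:junction7 .
    """
    g.parse(data=ttl, format="turtle")      # data from any provider, same vocabulary

    q = """
    PREFIX ex: <http://example.org/auto#>
    SELECT ?light ?state WHERE { ?light a ex:TrafficLight ; ex:state ?state . }
    """
    for row in g.query(q):
        print(row.light, row.state)         # interoperable query across providers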

References:

  • https://www.mdpi.com/2220-9964/8/3/141
  • RDF - https://www.w3.org/RDF/
  • OWL - https://www.w3.org/2001/sw/wiki/OWL
  • SPARQL - https://www.w3.org/wiki/SPARQL


BL: Adversarial attacks and defenses

Tutor: Blerta Lindqvist (blerta.lindqvist@aalto.fi)

Neural network classifiers are susceptible to attacks that cause misclassification. Many of the proposed defenses have been disputed, leaving only a few standing.
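
For a first intuition of such attacks, here is the fast gradient sign method (FGSM, Goodfellow et al.) in PyTorch; it is far weaker than the Carlini & Wagner attack cited below, and the untrained model is only a stand-in:

    import torch

    def fgsm(model, x, y, eps=0.03):
        """One gradient-sign step that increases the classification loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
    x, y = torch.rand(4, 1, 28, 28), torch.tensor([3, 1, 7, 0])
    x_adv = fgsm(model, x, y)
    print((x_adv - x).abs().max())   # perturbation is bounded by eps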

References:

  • https://nicholas.carlini.com/writing/2019/all-adversarial-example-papers.html
  • https://nicholas.carlini.com/writing/2018/adversarial-machine-learning-reading-list.html
  • Best current attack: Carlini & Wagner, Towards Evaluating the Robustness of Neural Networks
  • Best current defenses: Kurakin et al., Adversarial Machine Learning at Scale; Madry et al., Towards Deep Learning Models Resistant to Adversarial Attacks
  • My own paper is https://arxiv.org/abs/2006.04504. It best expresses my perspective on adversarial attacks and defenses. Feel free to use it or not in your paper.


MK: Low Latency, Low Loss, Scalable Throughput (L4S) Protocol

Tutor: Miika Komu (miika.komu@ericsson.com)

Low Latency, Low Loss, Scalable Throughput (L4S) is a network-layer extension that enables congestion signalling in the network, allowing faster tuning of transport-layer connections. The goal of this seminar topic is to write an introduction to L4S and to summarize standardization and scientific publications related to the topic.

References:

  • https://www.ericsson.com/en/news/2021/10/dt-and-ericsson-successfully-test-new-5g-low-latency-feature-for-time-critical-applications
  • https://www.ericsson.com/en/reports-and-papers/white-papers/enabling-time-critical-applications-over-5g-with-rate-adaptation
  • https://www.diva-portal.org/smash/get/diva2:1484466/FULLTEXT01.pdf


JH: Adapting Kubernetes for Fog Computing

Tutor: Jaakko Harjuhahto (jaakko.harjuhahto@aalto.fi)

The fog computing paradigm extends centralized cloud computing with additional smaller units of compute capacity closer to users. This distributed capacity can offer lower latencies and better service quality compared to a cloud-data-center-only solution. However, a fog system is more challenging to manage, and fog infrastructure management is an active area of research where a standard solution has not yet emerged. Kubernetes is the de facto standard orchestration framework for cloud-native environments. A central component of Kubernetes is the pod scheduler, which is responsible for deciding how and when software containers are assigned to nodes (i.e., physical or virtual machines) that will run the containerized application. The scheduler has a significant impact on how well a Kubernetes cluster can meet specific requirements. Researchers have modified Kubernetes [1,2] to use it as a reference platform for research into orchestration for fog computing [3]. Your task is to write a literature study on recent research into modifying Kubernetes for fog computing. The review should focus on a specific viewpoint; some suggestions are listed below, with a small scheduling sketch after the list:

  • Cloud-edge hierarchy. How is the geo-distributed system controlled? The default Kubernetes scheduler is centralized and controls the entire system. What advantages could distributed scheduling offer, and what challenges are involved?
  • Locality. Pods that interact with each other should be placed onto nodes in close proximity to minimize latency and maximize bandwidth. Likewise, pods that provide a service to an end user should be placed near that end user. How to implement a locality-aware scheduler?
  • Heterogeneous compute. Computers in a data center cluster are typically very similar. This is not the case with fog computing, as node hardware can vary considerably. Even the processing power of a single CPU core will differ depending on the architecture, power budget, frequency, etc. How can the scheduler adapt to heterogeneous hardware?
  • Network capabilities. In a data center, all the nodes are connected by a reliable high-bandwidth network. For a geo-distributed system, bandwidth between nodes is not equally available, and the network links can be unreliable or change altogether. How to implement a network-topology- and capability-aware scheduler?

Prior familiarity with Kubernetes is recommended for this topic.
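
To make the locality viewpoint concrete, here is a deliberately simplified, hypothetical Python sketch of the filter-and-score pattern the Kubernetes scheduler uses internally (the node names, weights, and latency table are all invented; real scheduler plugins are written in Go against the scheduling framework):

    # Hypothetical illustration: filter out infeasible nodes, then score the
    # rest the way a latency-aware scheduler plugin might.
    def score(node, pod, latency_ms):
        if node["free_cpu"] < pod["cpu"]:
            return None                          # filter phase: infeasible node
        peer_latency = max(latency_ms[node["name"]][p] for p in pod["peers"])
        return 100 - peer_latency                # scoring phase: prefer closeness

    nodes = [{"name": "edge-a", "free_cpu": 2}, {"name": "cloud-b", "free_cpu": 8}]
    pod = {"cpu": 1, "peers": ["edge-c"]}
    latency_ms = {"edge-a": {"edge-c": 2}, "cloud-b": {"edge-c": 40}}
    ranked = sorted((s, n["name"]) for n in nodes
                    if (s := score(n, pod, latency_ms)) is not None)
    print(ranked[-1])   # (98, 'edge-a'): the nearby edge node wins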

References: 

  • Carmen Carrion. Kubernetes scheduling: Taxonomy, ongoing issues and challenges. ACM Comput. Surv., May 2022. Just accepted. https://dl.acm.org/doi/10.1145/3539606
  • Zeineb Rejiba and Javad Chamanara. Custom scheduling in Kubernetes: A survey on common problems and solution approaches. ACM Comput. Surv., June 2022. Just accepted. https://dl.acm.org/doi/10.1145/3544788
  • Breno Costa, Joao Bachiega, Leonardo Rebouças de Carvalho, and Aleteia P. F. Araujo. Orchestration in fog computing: A comprehensive survey. ACM Comput. Surv., 55(2), January 2022. https://dl.acm.org/doi/10.1145/3486221


AJ: Conditional generation of molecules with normalizing flows

Tutor: Anirudh Jain (anirudh.jain@aalto.fi)

Autoregressive flows are among the promising generative models for molecular graph generation [1]. However, these models struggle to conditionally generate molecules with desired properties and require complex workarounds [1, 2]. Some variants of conditional flows have been proposed in the computer vision domain [3]. In this project, you will review and summarize potential approaches for conditional normalizing flows. Furthermore, it is possible to adapt these methods to molecular graphs and experimentally evaluate their performance.
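
The core mechanism is easy to sketch: a coupling layer whose scale and shift networks also receive the conditioning variable. A minimal PyTorch version follows (the dimensions, the single layer, and the scalar "property" condition are all illustrative):

    import torch
    import torch.nn as nn

    class CondCoupling(nn.Module):
        """Affine coupling layer whose scale/shift also see a condition vector
        (e.g., a target molecular property) - the basic idea behind
        conditional normalizing flows."""
        def __init__(self, dim, cond_dim, hidden=64):
            super().__init__()
            self.half = dim // 2
            self.net = nn.Sequential(
                nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * (dim - self.half)))

        def forward(self, x, c):
            x1, x2 = x[:, :self.half], x[:, self.half:]
            s, t = self.net(torch.cat([x1, c], dim=1)).chunk(2, dim=1)
            y2 = x2 * torch.exp(s) + t              # invertible given x1 and c
            log_det = s.sum(dim=1)                  # needed for the flow likelihood
            return torch.cat([x1, y2], dim=1), log_det

    layer = CondCoupling(dim=6, cond_dim=1)
    x, c = torch.randn(4, 6), torch.rand(4, 1)      # c: desired property value
    y, log_det = layer(x, c)
    print(y.shape, log_det.shape)                   # (4, 6), (4,)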

References:

  • GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation: https://arxiv.org/abs/2001.09382
  • MoFlow: An Invertible Flow Model for Generating Molecular Graphs: https://arxiv.org/abs/2006.10137
  • InfoCNF: An Efficient Conditional Continuous Normalizing Flow with Adaptive Solvers: https://arxiv.org/abs/1912.03978


VH: Intent-Based Networking

Tutor: Vesa Hirvisalo (vesa.hirvisalo@aalto.fi)

Traditionally, providing network services has been complex, as it involves not only setting up the servers and related software but also configuring the network so that services are published and, subsequently, the required components can be identified and the services discovered and configured for the specific use. The advent of modern microservice systems, fog/edge computing, the Industrial Internet of Things, etc. has complicated this scenery even more [1]. Recently, there has been progress in Intent-Based Networking (IBN) [2]. IBN resembles the classical idea of zero-configuration networks [4] and is close to service meshes [5] and software-defined networking [6], but it takes a holistic view of solving the problems, and applying intelligent methods is central. The network administrators define the goals, and the intelligence embedded into the network (i.e., the IBN mechanisms) figures out how to achieve them. The task for the student is to write a review on intent-based networking. The topic can be approached from several different viewpoints, so it can (and should) be tuned to fit the studies of the student.

References:

  • Yousefpour & al. All One Needs to Know about Fog Computing and Related Edge Computing Paradigms - A Complete Survey. Journal of Systems Architecture. DOI:10.1016/j.sysarc.2019.02.00
  • E. Zeydan and Y. Turk, Recent Advances in Intent-Based Networking: A Survey, 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), 2020, pp. 1-5, DOI:10.1109/VTC2020-Spring48590.2020.9128422
  • L. Velasco et al., End-to-End Intent-Based Networking, in IEEE Communications Magazine, vol. 59, no. 10, pp. 106-112, October 2021, DOI:10.1109/MCOM.101.2100141
  • https://en.wikipedia.org/wiki/Zero-configuration_networking
  • https://en.wikipedia.org/wiki/Service_mesh
  • https://en.wikipedia.org/wiki/Software-defined_networking


JM: Data-driven inference of ODE models

Tutor: Julien Martinelli (julien_martinelli@aalto.fi)

This project studies the inference of physical/biological models from data measurements only. These systems are usually represented using ODEs. Modern methods propose to learn these ODEs by regressing the dynamics of each variable onto a given library of candidate functions, usually using sparse regression. After understanding how these methods work, the project can explore how sensitive they are to, e.g., initial conditions, noise, the number of time series, and time resolution.
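
A minimal version of this idea, in the spirit of SINDy's sequential thresholded least squares, recovers dx/dt = -0.5x from simulated data (the library, threshold, and toy system are illustrative):

    import numpy as np

    # Sparse identification: regress dx/dt onto a candidate-function library
    # and iteratively threshold small coefficients.
    t = np.linspace(0, 10, 1000)
    x = np.exp(-0.5 * t)                     # data generated by dx/dt = -0.5 x
    dx = np.gradient(x, t)                   # numerical derivative of the data

    library = np.column_stack([np.ones_like(x), x, x**2, x**3])   # candidates
    names = ["1", "x", "x^2", "x^3"]

    xi = np.linalg.lstsq(library, dx, rcond=None)[0]
    for _ in range(10):                      # sequential thresholded least squares
        small = np.abs(xi) < 0.1
        xi[small] = 0.0
        big = ~small
        xi[big] = np.linalg.lstsq(library[:, big], dx, rcond=None)[0]
    print(dict(zip(names, np.round(xi, 3))))  # only the 'x' term survives, ~-0.5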

References:

  • Brunton et al, PNAS 2015, Data-driven Sparse identification of nonlinear dynamics
  • Inferring Biological Networks by Sparse Identification of Nonlinear Dynamics - Brunton 2016 IEEE
  • Identification of dynamic mass-action biochemical reaction networks using sparse Bayesian methods - Plos CB 2022
  • pysindy library (github)

CB: Computer-aided proofs for cryptography

Tutor: Christopher Brzuska (chris.brzuska@aalto.fi)

Cryptographic protocols such as TLS (the protocol behind https), EMV (the protocol for payment by credit card), and MLS (the new standard for secure messaging) have become increasingly complex. Indeed, the new TLS standard is more than 100 pages long. Due to the increased complexity, manual analysis has become more difficult and more error-prone, and computers are now used both to *find* proofs and attacks as well as to verify proofs and implementations.

References: 

  • TLS standard: https://www.rfc-editor.org/rfc/rfc8446 
  • A survey: SoK: Computer-Aided Cryptography https://eprint.iacr.org/2019/1393 
Some papers on the manual way of doing proofs:
  • Bellare and Rogaway: The Game-Playing Technique https://web.cs.ucdavis.edu/~rogaway/papers/games.ps
  • Victor Shoup: A Tool for Taming Complexity in Security Proofs, https://www.shoup.net/papers/games.pdf
A wish for computer-aided cryptography:
  • A plausible approach to computer-aided cryptographic proofs by Shai Halevi https://eprint.iacr.org/2005/181.pdf
Some implementations of Halevi's wish:
  • EasyCrypt: A tutorial - you can find this article on page 146 ff. here: https://link.springer.com/content/pdf/10.1007/978-3-319-10082-1.pdf
  • Bringing State-Separating Proofs to EasyCrypt https://eprint.iacr.org/2021/326.pdf
Something completely different, a *symbolic* approach:
  • The Tamarin prover: https://tamarin-prover.github.io/
  • Reconciling Two Views of Cryptography: https://www.cs.ucdavis.edu/~rogaway/papers/equiv.pdf
A couple more tools/approaches in the field (non-exhaustive list):
  • https://www.mitls.org/ - an F*-based approach that leads to verified implementations
  • CryptoVerif: https://bblanche.gitlabpages.inria.fr/CryptoVerif/
  • ProVerif: https://bblanche.gitlabpages.inria.fr/proverif/
  • SSProve: https://eprint.iacr.org/2021/397
  • VerifPal: https://blog.symbolic.software/author/nadimkobeissi/


SH: Cryptocurrency price prediction using effective factors

Tutor: Seied Veria Hoseini (veria.hoseini@aalto.fi)

We are going to illustrate a system dynamics model showing the factors that affect cryptocurrency market prices, and then examine the model using a deep learning model.

References:

  • https://iopscience.iop.org/article/10.1088/1757-899X/928/3/032007/pdf
  •  https://www.makeuseof.com/factors-influencing-the-cryptocurrency-value/


NL: Machine Learning for Forward Reaction Prediction and Retrosynthesis

Tutor: Najwa Laabid (najwa.laabid@aalto.fi)

Forward reaction prediction is concerned with predicting the product of a chemical reaction given a set of initial reactants. Retrosynthesis is the reverse problem: given the structure of a target molecule, we would like to know the potential starting materials. Both problems are relevant to synthesis planning, which is an important step in the drug discovery pipeline. Using machine learning to solve these problems is an active area of research covering many topics, such as graph neural networks, language models, and latent variable models. This area of research is a great opportunity to learn about state-of-the-art machine learning models and their application to the hard sciences, namely chemistry.

References:

  • Machine intelligence for chemical reaction space, Schwaller P. et. al., https://wires.onlinelibrary.wiley.com/doi/full/10.1002/wcms.1604 
  • The Future of Retrosynthesis and Synthetic Planning: Algorithmic, Humanistic or the Interplay?, Williams M. C., https://www.publish.csiro.au/ch/pdf/CH20371 
  • Computational Chemical Synthesis Analysis and Pathway Design, Feng F., https://www.frontiersin.org/articles/10.3389/fchem.2018.00199/full


VR: Modern Generative Modelling Landscape

Tutor: Vishnu Raj (vishnu.raj@aalto.fi)

Generative modelling refers to training machine learning models to approximate the distribution of the training samples. Developments in deep learning have accelerated this area of research through the use of neural networks as complex function approximators. In this project, the goal is to survey the landscape of modern generative modelling research and compare the merits and demerits of each approach. We will be looking at the following generative models: i. variational autoencoders, ii. energy-based models, iii. autoregressive models, iv. generative adversarial networks, v. normalizing flows, and vi. diffusion models.

References:

  • Bond-Taylor, S., Leach, A., Long, Y. and Willcocks, C.G., 2021. Deep generative modelling: A comparative review of VAEs, GANs, normalizing flows, energy-based and autoregressive models. arXiv preprint arXiv:2103.04922.


MD: Container isolation beyond namespaces

Tutor: Mario Di Francesco (mario.di.francesco@aalto.fi)

The namespace feature of the Linux kernel introduced a powerful abstraction to enforce isolation between different processes. Container runtimes build on top of namespaces to restrict the visibility of resources in operating system-level virtualization. However, namespaces alone are not sufficient to guarantee isolation when the code of a containerized application is compromised. In these cases, it is necessary to further strengthen isolation by using sandboxes, restricting access to non-root users, or leveraging container-like virtual machines. The focus of this seminar topic is to explore different options to harden container isolation based on these approaches.

References:

  • Liz Rice, "Container Security", O'Reilly Media, 2020, Chapters 8 and 9 (https://primo.aalto.fi/permalink/358AALTO_INST/1h25avu/cdi_safari_books_v2_9781492056690)
  • Rootless Containers (https://rootlesscontaine.rs/)
  • Kubernetes Documentation, Tutorials, Security (https://kubernetes.io/docs/tutorials/security/)


AD: Recent advances in Reinforcement Learning

Tutor: Anton Debner (anton.debner@aalto.fi)

Reinforcement Learning (RL) is one of the three main branches of Machine Learning. In contrast to supervised and unsupervised learning, RL algorithms aim to maximize a reward signal by interacting with an environment through trial and error. Due to this trial-and-error nature, RL is usually used in combination with a simulation or a game engine. RL has seen success especially in learning to play video games [0], and this success has led to the techniques being applied to other areas as well. RL can be divided into various research directions that have seen interesting developments lately. For example, offline RL [1] aims to utilize the large amounts of data produced by modern applications: it replaces real-time simulations with large datasets while still keeping the trial-and-error nature of RL. On the other hand, upside-down RL [2] argues that, with some modifications to RL, it is possible to utilize well-understood and well-optimized supervised learning methods to gain a competitive edge over typical RL algorithms. At the same time, applications such as network management have become increasingly complex due to the introduction of modern paradigms such as fog/edge computing, industrial IoT, and microservices [3]. This increase in complexity opens up room for applying RL in various layers of network management (e.g., [4]). Your task is to focus on one of these three viewpoints (offline RL, upside-down RL, or applying RL in the context of network management) and write a literature review.
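
To recall the trial-and-error loop that all three viewpoints build on, here is a minimal tabular Q-learning sketch in plain Python/NumPy (the corridor environment and all constants are invented for illustration):

    import numpy as np

    # Tabular Q-learning on a toy 5-state corridor: move left/right, reward
    # only at the right end. Pure trial and error, no model of the environment.
    n_states, n_actions, goal = 5, 2, 4
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)

    for episode in range(300):
        s = 0
        while s != goal:
            a = rng.integers(n_actions)            # behave randomly (off-policy)
            s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
            r = 1.0 if s_next == goal else 0.0
            # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s, a] += 0.1 * (r + 0.9 * Q[s_next].max() - Q[s, a])
            s = s_next

    print(Q.argmax(axis=1))   # greedy policy: 1 (right) in all non-terminal states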

References:

  • Mnih, V. *et al.* (2015) ‘Human-level control through deep reinforcement learning’, doi:10.1038/nature14236. 
  • Prudencio, R.F., Maximo, M.R.O. and Colombini, E.L. (2022) ‘A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems’, Available at: http://arxiv.org/abs/2203.01387 
  • Schmidhuber, J. (2019) ‘Reinforcement Learning Upside Down: Don’t Predict Rewards -- Just Map Them to Actions’, Available at: http://arxiv.org/abs/1912.02875 
  • Yousefpour & al. All One Needs to Know about Fog Computing and Related Edge Computing Paradigms - A Complete Survey. Journal of Systems Architecture. DOI:10.1016/j.sysarc.2019.02.00 
  • Kim, G., Kim, Y. and Lim, H. (2022) ‘Deep Reinforcement Learning-Based Routing on Software-Defined Networks’, doi:10.1109/ACCESS.2022.3151081


YV: Physics-inspired Drug-Drug Interaction Prediction

Tutor: Yogesh Verma (yogesh.verma@aalto.fi)

Patients take multiple drugs to treat complex or co-existing co-morbidities. As a result, there is a possibility of in-vivo interaction among the different drugs, leading to combined or novel side effects. The task here is to characterize the interactions among combinations of drugs and their resulting side effects. Deep learning has been dominantly applied in the domains of healthcare, drug design, cheminformatics, etc., and has performed distinctively by incorporating informative inductive biases. These inductive biases range from representations like graphs and surfaces to dynamics. This project aims at investigating and formulating novel, physics-inspired methodologies to predict the interactions among different kinds of drugs. If time remains, this can be extended to drug-target prediction as well, where the aim is to find the target (protein) specific to a given drug molecule.

References:

  • KGNN: Knowledge Graph Neural Network for Drug-Drug interaction prediction (https://www.ijcai.org/proceedings/2020/380) 
  • CASTER: Predicting drug interactions with chemical substructure representation (https://ojs.aaai.org//index.php/AAAI/article/view/5412) 
  • Bi-level GNNs for drug-drug interaction prediction (https://arxiv.org/pdf/2006.14002.pdf) 
  • Drug-drug adverse effect prediction with graph co-attention (https://arxiv.org/pdf/1905.00534.pdf)


LT1: Debugging, Logging and Monitoring ML Systems: Techniques and Tools

Tutor: Linh Truong (linh.truong@aalto.fi)

The topic will research how current techniques and tools support the developer and/or the provider in debugging, logging, and monitoring components/services/tasks of ML systems in distributed computing infrastructures. The research should highlight the differences (and challenges) of debugging, logging, and monitoring for ML systems. The topic is for a single student. We recommend it for students with some experience in cloud software and systems. We expect the student to present a viewpoint on the topic, thus scoping the research to a specific type of audience, and to present results that provide useful information to that audience.

References:

  • Debugging Machine Learning Pipelines: https://dl.acm.org/doi/10.1145/3329486.3329489 
  • Automatically Debugging AutoML Pipelines using Maro: ML Automated Remediation Oracle: https://dl.acm.org/doi/abs/10.1145/3520312.3534868 
  • Ariadne: Analysis for Machine Learning Programs: https://dl.acm.org/doi/10.1145/3211346.3211349 
  • Towards Automated ML Model Monitoring: Measure, Improve and Quantify Data Quality: https://www.amazon.science/publications/towards-automated-data-quality-management-for-machine-learning 
  • Towards Observability for Machine Learning Pipelines: https://www.cidrdb.org/cidr2022/papers/p20-shankar.pdf


LT2: Programming Orchestration of Data Analysis Workflows in Edge Cloud Continuum

Tutor: Linh Truong (linh.truong@aalto.fi)

There are many workflow frameworks for orchestrating data analytics workflows in clouds and HPC. We are now moving to analytics in the edge-cloud continuum. Which orchestration techniques and tools are useful for the edge-cloud continuum? This topic studies existing works to provide insightful discussion of possible frameworks and features for orchestrating edge-cloud data analytics tasks.
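
As one concrete reference point, the sketch below defines a two-task pipeline in Apache Airflow (one of the open-source tools named in the references) and routes tasks to different worker pools via the queue argument. The DAG, the edge/cloud queue names, and the idea of mapping queues to tiers are illustrative; vanilla Airflow has no built-in notion of an edge-cloud continuum, and queue routing assumes a Celery-style executor with workers at each tier:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    with DAG("edge_cloud_analytics", start_date=datetime(2022, 1, 1),
             schedule_interval=None, catchup=False) as dag:
        ingest = PythonOperator(task_id="ingest_at_edge",
                                python_callable=lambda: print("filter near sensor"),
                                queue="edge")       # routed to an edge worker
        train = PythonOperator(task_id="train_in_cloud",
                               python_callable=lambda: print("heavy training"),
                               queue="cloud")       # routed to a cloud worker
        ingest >> train                             # dependency: ingest before train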

References:
  • https://dl.acm.org/doi/10.1145/3468737.3494103
  • https://dl.acm.org/doi/10.1145/3332301
  • https://dl.acm.org/doi/fullHtml/10.1145/3486221 
  • https://dl.acm.org/doi/pdf/10.1145/3452369.3463820 
  • https://dl.acm.org/doi/abs/10.14778/3529337.3529344 
  • Open sources like Airflow, etc.


AH: Inference in human-AI interaction

Tutor: Alex Hämäläinen (alex.hamalainen@aalto.fi)

An important research area in modern AI is the development of intelligent collaborative systems that can interact efficiently with other (human) agents. Notably, such systems should be able to adapt their behavior depending on the agents’ objectives and other relevant features. However, this information may not always be directly available and should be inferred based on sparse observations of the agents’ behavior. This project is a literature survey on contemporary approaches to decision-making agent modeling and inference; a more specific focus on a subtopic can be discussed based on the student’s interests. 

Prerequisites: Basic understanding of reinforcement learning and probabilistic (Bayesian) modeling

References:

  • https://dl.acm.org/doi/pdf/10.1145/3025453.3025576 
  • http://proceedings.mlr.press/v80/rabinowitz18a.html


PM1: Optimising Cellular Fog Computing: How to orchestrate and search resources

Tutor: Petri Mähönen (petri.mahonen@aalto.fi)

Fog and edge computing are becoming a crucial part of the mobile service business. However, this requires careful placement of computational and communication nodes in order to minimise service latency and save precious energy. This leads to multi-parameter optimisation, but also to the question of how we search for and advertise such resources.

References:

  • Wubin Li, Yves Lemieux, Jing Gao, Zhuofeng Zhao, and Yanbo Han. Service mesh: Challenges, state of the art, and future research opportunities. In 2019 IEEE International Conference on Service-Oriented System Engineering (SOSE), pages 122–1225, 2019.
  • Breno Costa, Joao Bachiega, Leonardo Rebouças de Carvalho, and Aleteia P. F. Araujo. Orchestration in fog computing: A comprehensive survey. ACM Comput. Surv., 55(2), January 2022.
  • Elena Meshkova, Janne Riihijärvi, Marina Petrova, and Petri Mähönen. A survey on resource discovery mechanisms, peer-to-peer and service discovery frameworks. Computer Networks, Volume 52, Issue 11, pages 2097-2128, 2008.


PM2: Service Discovery (Mechanisms) in Distributed Networked Environment

Tutor: Petri Mähönen (petri.mahonen@aalto.fi)

Service Discovery is a contemporary problem in many systems and services. It was developed strongly in the context of different peer-to-peer services, and it is notoriously difficult for Bluetooth. As mobile and distributed data-centre-based computational services proliferate, we need to reconsider different SD concepts and also take issues such as scalability and security into account.
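
For a hands-on feel of DNS-SD (RFC 6763, cited below) in action, the sketch below browses for HTTP services on the local link, assuming the python-zeroconf package (the service type is an example; the listener methods follow that package's conventions):

    from zeroconf import ServiceBrowser, Zeroconf

    class Listener:
        """Print DNS-SD (mDNS) service announcements as they appear."""
        def add_service(self, zc, type_, name):
            info = zc.get_service_info(type_, name)
            print("found:", name, info.parsed_addresses() if info else "?")
        def remove_service(self, zc, type_, name):
            print("gone:", name)
        def update_service(self, zc, type_, name):
            pass

    zc = Zeroconf()
    browser = ServiceBrowser(zc, "_http._tcp.local.", Listener())
    input("Browsing for HTTP services on the local link; press Enter to stop.\n")
    zc.close()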

References:

  • S. Cheshire, M. Krochmal. DNS-Based Service Discovery, IETF RFC 6763
  • C. Huitema, D. Kaiser. DNS-Based Service Discovery (DNS-SD) Privacy and Security Requirements, IETF RFC 8882
  • T. Lemon, S. Cheshire. Service Registration Protocol for DNS-Based Service Discovery, IETF Internet Draft, draft-ietf-dnssd-srp-12, 2021.
  • Elena Meshkova, Janne Riihijärvi, Marina Petrova, Petri Mähönen. A survey on resource discovery mechanisms, peer-to-peer and service discovery frameworks, Computer Networks, Volume 52, Issue 11, pages 2097-2128, 2008.


PM3: (Semantic) Service Discovery for Fog, Edge, and Web

Tutor: Petri Mähönen (petri.mahonen@aalto.fi)

How does one find services and computational resources in a highly dynamic networked environment? The task is to survey and assess the different architectures and semantic search protocols that have been developed. The topic also leaves a lot of room to think independently about the problems and required resources in this problem domain.

References:

  • Sara Blanc, Jose-Luis Bayo-Monton, Senén Palanca-Barrio, and Néstor X. Arreaga-Alvarado. A Service Discovery Solution for Edge Choreography-Based Distributed Embedded Systems. Sensors 2021, 21, 672. https://doi.org/10.3390/s21020672
  • Mohamed S. Alshafaey, Ahmed I. Saleh, and Mohamed F. Alrahamawy. A new cloud-based classification methodology (CBCM) for efficient semantic web service discovery. Cluster Computing (2021) 24:2269–2292.
  • Tsung-Yi Tang, Li-Yuan Hou, and Tyng-Yeu Liang. An IOTA-Based Service Discovery Framework for Fog Computing. Electronics, 10, 844, 2021.



