ELEC-E5510 - Speech Recognition D, Lecture, 3.11.2021-17.12.2021
This course space end date is set to 17.12.2021
Topic outline
-
In this list, the 2020 slides will be replaced by the 2021 ones after each lecture is given. The titles may be identical, but the content is improved each year based on feedback. The project works and their schedule also change each year.
For practicalities, e.g. regarding the Lecture Quizzes and Exercises, see MyCourses > Course Practicalities
-
Zoom link for the lectures. Passcode: 393499
-
Please do not distribute these to anyone other than the course participants! This is because the comments and questions from course participants have not yet been filtered out.
If you need to access the videos from a Gmail address, or if access otherwise does not work for you, just make a normal Google access request.
The goal is to verify that you have learned the idea of a Token passing decoder. The extremely simplified HMM system is almost the same as in the 2B Viterbi algorithm exercise. The observed "sounds" are simply quantized to either "A" or "B" with given probabilities in states S0 and S1. The task is to find the most likely state sequence that produces the sound sequence A, A, B using a simple language model (LM). The toy LM used here is a look-up table that gives probabilities for different state sequences, (0,1), (0,0,1) etc., up to 3-grams.
Hint: You can upload an edited source document, a PDF file, a photo of your notes, or a text file with the numbers, whichever is easiest for you. The answer does not have to be correct to get the activity point.
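To make the idea concrete, here is a minimal sketch of Token passing for the toy setup described above: two states S0/S1 emitting the quantized sounds "A"/"B", and a look-up-table LM over state sequences up to 3-grams. Note that all of the probability values below (EMIT and LM) are made-up placeholders, not the exercise's actual numbers; only the structure of the algorithm is illustrated.

```python
# Token-passing sketch for the toy 2-state HMM.
# All probabilities below are assumed example values, NOT the exercise's numbers.

# P(sound | state) for states S0 (=0) and S1 (=1)
EMIT = {
    0: {"A": 0.8, "B": 0.2},
    1: {"A": 0.3, "B": 0.7},
}

# Toy LM: a look-up table of state-sequence probabilities, up to 3-grams
LM = {
    (0,): 0.6, (1,): 0.4,
    (0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.6,
    (0, 0, 0): 0.4, (0, 0, 1): 0.6, (0, 1, 0): 0.3, (0, 1, 1): 0.7,
    (1, 0, 0): 0.5, (1, 0, 1): 0.5, (1, 1, 0): 0.2, (1, 1, 1): 0.8,
}

def lm_prob(history, state):
    """LM probability of extending `history` with `state`, using the
    longest n-gram (up to 3) found in the look-up table."""
    seq = (history + (state,))[-3:]
    while seq not in LM:
        seq = seq[1:]
    return LM[seq]

def decode(observations):
    # A token carries (probability, state sequence so far).
    tokens = [(1.0, ())]
    for obs in observations:
        propagated = []
        for prob, hist in tokens:
            for state in (0, 1):  # pass a copy of each token into every state
                p = prob * lm_prob(hist, state) * EMIT[state][obs]
                propagated.append((p, hist + (state,)))
        # Recombination: keep only the best token per 2-state history suffix,
        # because the 3-gram LM never looks further back than that.
        best = {}
        for p, hist in propagated:
            key = hist[-2:]
            if key not in best or p > best[key][0]:
                best[key] = (p, hist)
        tokens = list(best.values())
    return max(tokens)  # surviving token with the highest probability

prob, path = decode(["A", "A", "B"])
print("best state sequence:", path, "probability:", prob)
```

With these placeholder numbers the best token ends up on the history (0, 0, 1). The key difference from plain Viterbi is visible in the recombination step: tokens merge only when their recent history is identical, because the LM score depends on more than the current state.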