Please note! The course description is confirmed for two academic years, which means that, in general, the learning outcomes, assessment methods, and key content stay unchanged. However, the course syllabus can specify or adjust the execution of each realization of the course, such as how contact sessions are organized, how assessment methods are weighted, or which materials are used.

LEARNING OUTCOMES

The student forms a conceptual understanding of language models and large language models. The student understands the key principles underlying current large language models. The student understands the effect of prompting on large language models and can engineer prompts to improve output quality. The student is aware of issues related to large language models, such as hallucination, bias, privacy, and security.
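To illustrate the prompt-engineering outcome above, here is a minimal sketch of one common technique: assembling a few-shot prompt from a task description, worked examples, and a query. The function name, template layout, and field labels (`Input:`/`Output:`) are illustrative assumptions, not material from the course itself.

```python
def build_prompt(task, examples, query):
    """Assemble a few-shot prompt string for a text-completion model.

    task:     a plain-language instruction
    examples: (input, output) pairs demonstrating the desired behavior
    query:    the new input the model should complete
    """
    parts = [f"You are a careful assistant. Task: {task}", ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    # End with an open "Output:" so the model continues from here.
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_prompt(
    "Classify the sentiment as positive or negative.",
    [("I loved it", "positive"), ("Terrible service", "negative")],
    "The food was great",
)
print(prompt)
```

Adding even one or two demonstrations like this typically constrains the output format and improves answer quality compared with a bare instruction, which is the effect the learning outcomes refer to.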

Credits: 1

Schedule: 09.09.2024 - 31.12.2024

Teacher in charge (valid for whole curriculum period):

Teacher in charge (applies in this implementation): Arto Hellas

Contact information for the course (applies in this implementation):

CEFR level (valid for whole curriculum period):

Language of instruction and studies (applies in this implementation):

Teaching language: English. Languages of study attainment: English

CONTENT, ASSESSMENT AND WORKLOAD

Content
  • valid for whole curriculum period:

    Language models, probabilistic models, neural networks, large language models, prompting and prompt engineering, issues and concerns related to large language models.
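As a taste of the first two content topics, a language model can be sketched in a few lines as a bigram model: count how often each word follows each other word, then sample continuations from those counts. This is a hypothetical illustration of a probabilistic language model, not material taken from the course; the toy corpus and function names are assumptions.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count word-pair frequencies, estimating P(next word | current word)."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(counts, start, length=5):
    """Sample a continuation by repeatedly drawing a next word in
    proportion to how often it followed the current word in training."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # dead end: the word never had a successor
            break
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

model = train_bigram("the cat sat on the mat and the cat ran")
print(generate(model, "the", length=3))
```

Large language models replace these raw counts with a neural network trained on vastly more text, but the core task is the same: predict the next token given the context.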

DETAILS

Substitutes for Courses
Prerequisites