January 11, 2024, 5:00 p.m.

Online • Institute of AI in Management, LMU
Text-controlled generative models (such as large language models or text-to-image diffusion models) operate by embedding natural language into a vector representation and then using this representation to sample from the model's output space. This talk concerns how high-level semantics are encoded in the algebraic structure of these representations. In particular, we look at the idea that such representations are "linear": what this means, why such structure emerges, and how it can be used for precise understanding and control of generative models.
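As a rough illustration of the "linear representation" idea referenced in the abstract, the toy sketch below (not taken from the talk) estimates a concept as a direction in embedding space and nudges a representation along it; the embeddings, the difference-of-means estimator, and the steering strength `alpha` are all hypothetical stand-ins for real model hidden states and methods.

```python
# Toy sketch of a "linear" concept representation, assuming hypothetical embeddings
# in place of real model hidden states.
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy "embeddings": in practice these would be hidden states of a generative model.
positive_embeddings = rng.normal(loc=1.0, size=(8, dim))   # e.g. prompts expressing a concept
negative_embeddings = rng.normal(loc=-1.0, size=(8, dim))  # e.g. prompts lacking the concept

# Linear-representation idea: the concept corresponds to a direction in embedding space,
# estimated here as the (normalized) difference of mean embeddings.
concept_direction = positive_embeddings.mean(axis=0) - negative_embeddings.mean(axis=0)
concept_direction /= np.linalg.norm(concept_direction)

# "Steer" a representation by moving it along the concept direction.
representation = rng.normal(size=dim)
alpha = 2.0  # hypothetical steering strength
steered = representation + alpha * concept_direction

# The projection onto the concept direction increases by exactly alpha.
print(representation @ concept_direction, steered @ concept_direction)
```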