Keynotes

Prof. Dr. Timo Dickscheid, Working Group Leader "Big Data Analytics", Forschungszentrum Jülich

„FAIrly Detailed: AI & Big Data Powering Human Brain Atlasing“

Computational technology unlocks new possibilities for understanding the complexity of the human brain, but doing so requires comprehensive analysis and integration of measurements from different modalities and scales, all the way from molecules to the whole brain. In this talk, I will introduce an openly accessible human brain atlas developed in recent years, which maps brain organization at the micrometer level using AI methods to process vast amounts of microscopy data. I will explore how representation learning, generative modeling, and discriminative modeling help decode brain organization and briefly highlight how the resulting insights into brain networks can advance future AI research.

Prof. Dr. Laura Kallmeyer, Department of Computational Linguistics, HHU

“Cognitively plausible language models: To which extent do artificial language models behave like humans when processing language and how can we increase their cognitive plausibility?”

The cognitive plausibility of generative language models (LMs) has attracted considerable interest recently. “Cognitive plausibility” here refers to the strength of the correlations between behavioral data from the LM and from humans. It has been shown, for instance, that the surprisal of a word in a generative LM (i.e., its negative log likelihood given the preceding words) is predictive of human reading times. I will discuss a range of features from both LMs and humans, report on literature that links the two, and discuss further hypotheses about correspondences. Furthermore, I will discuss ways to increase the cognitive plausibility of generative LMs. The psycholinguistic literature assumes that humans (i) process language incrementally, integrating each new word into the representation built so far, and (ii) make predictions about upcoming words and upcoming structure. Generative LMs are incremental models trained for next-word prediction, but they do not explicitly construct structural representations or predict upcoming structure. I will discuss ways of adding the latter as well. Besides reviewing proposals from the literature, I will also sketch ideas for future work in this direction.
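
To make the surprisal measure mentioned in the abstract concrete: surprisal can be read off directly from a generative LM's next-token distribution. The following is a minimal sketch, not taken from the talk, assuming a Hugging Face transformers setup with GPT-2 as a stand-in model; the example sentence and all variable names are illustrative.

```python
# Minimal sketch: per-token surprisal from a pretrained causal LM.
# "gpt2" is used here only as an illustrative stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

sentence = "The horse raced past the barn fell."
enc = tokenizer(sentence, return_tensors="pt")
input_ids = enc["input_ids"][0]

with torch.no_grad():
    # logits has shape (sequence_length, vocab_size)
    logits = model(enc["input_ids"]).logits[0]

log_probs = torch.log_softmax(logits, dim=-1)
ln2 = torch.log(torch.tensor(2.0)).item()

# Surprisal of token t is -log2 P(token_t | tokens_<t).
# The distribution at position t-1 predicts token t, so we shift by one;
# the first token has no preceding context and is skipped.
for t in range(1, len(input_ids)):
    token = tokenizer.decode(input_ids[t])
    surprisal_bits = -log_probs[t - 1, input_ids[t]].item() / ln2
    print(f"{token!r}: {surprisal_bits:.2f} bits")
```

Note that subword tokenizers split many words into several pieces; for comparison with word-level reading-time data, the per-token surprisals of a word's pieces are typically summed.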