Prof. Dr. Rolf Apweiler
Director of EMBL-EBI, Hinxton, Cambridgeshire
Title: The Value of Open Data. Innovative AI-enabled Bioinformatics Projects and Partnerships by EMBL-EBI: AlphaFold, and other Examples
Abstract: EMBL's European Bioinformatics Institute (EMBL-EBI) maintains the world’s most comprehensive range of freely available and up-to-date molecular data resources. Developed in collaboration with our colleagues worldwide, our services enable researchers to share data, perform complex queries and analyse the results in different ways.
In my presentation I would like to highlight how our open data resources enable AI approaches and give some recent examples of innovative EMBL-EBI AI projects and partnerships. One of these is AlphaFold, a state-of-the-art AI system developed by DeepMind that can computationally predict protein structures with unprecedented accuracy. These predictions are being made freely and openly available to the global scientific community in partnership with EMBL-EBI, opening up new and exciting research avenues that dramatically deepen our understanding of human health, disease and our environment, with implications for areas like drug design and sustainability.
Bio: Rolf Apweiler is Director of EMBL-EBI. Prior to this position he was Associate Director, after many years of leading protein resources such as UniProt and InterPro. Rolf has made a major contribution to methods for the automatic annotation of proteins, making it possible to add relevant information to proteome sets for entire organisms. He has spearheaded the development of standards for proteomics data, and his teams have maintained major collections of protein identifications from proteomics experiments (PRIDE) and molecular interactions (IntAct). He also led EMBL-EBI’s contribution to the Gene Ontology, was Director of Open Targets, and is now leading the efforts of EMBL-EBI around the European COVID-19 Data Platform.
Rolf received his PhD from the University of Heidelberg in 1994 and has been at EMBL since 1987. His major contribution to the field of proteomics was recognised by the Human Proteomics Organisation’s “Distinguished Achievement Award in Proteomics” in 2004 and by his election as President of the Human Proteomics Organisation, a position he held in 2007 and 2008. In 2012 he was elected a member of EMBO, in 2015 an ISCB (International Society for Computational Biology) Fellow, and in 2022 a Member of the Academia Europaea. Rolf has also served for many years on numerous Editorial Boards and Scientific Advisory Boards.
Prof. Dr. Alexander Dilthey
Professor for Genomic Microbiology and Immunity, Institute of Medical Microbiology and Hospital Hygiene, Heinrich Heine University Düsseldorf
Title: From genomic contact tracing to comprehensively characterizing the human microbiome: Emerging applications of genomic data science and real-time sequencing
Abstract: Understanding where infections happen is key to preventing them. The SARS-CoV-2 pandemic has demonstrated the power of modern genome sequencing for the detection and monitoring of viral variants; its potential for the in-depth characterization of infection transmission chains, however, remains underexplored. Combining real-time long-read sequencing with deep backward contact tracing and systems for automated viral data exchange and visual analysis, we show that the Integrated Genomic Surveillance System of Düsseldorf (IGSD) made it possible to trace SARS-CoV-2 transmission chains at the person-to-person level in the population at large, yielding important insights into nightlife transmission dynamics and the role of travel-imported infections. Similarly, real-time long-read sequencing can enable important advances in the characterization of human microbiomes across the microbial tree of life.
Bio: Alexander Dilthey (DPhil) is Professor of Genomic Microbiology and Immunity at Heinrich Heine University Düsseldorf, Institute of Medical Microbiology and Hospital Hygiene. His research group works at the intersection of modern genomics and computational biology, focusing on the areas of population-scale immunogenomics, microbiome research, long-read sequencing, and the development of sequencing-based diagnostics. Alexander obtained a DPhil in Statistical Genetics at the University of Oxford (McVean Group) in 2008; further academic appointments included the Wellcome Centre for Human Genetics in Oxford and a Visiting Research Fellowship at NHGRI-NIH (Bethesda, USA; Phillippy Group). In 2017, Alexander returned to Germany to lead an independent research group at Heinrich Heine University Düsseldorf, and he obtained professorial positions at the Universities of Cologne and Düsseldorf in 2021 and 2022, respectively. Alexander’s contributions include the development of statistical methods for immunogenetics that have been applied to hundreds of thousands of individuals; methods enabling the analysis of some of the most complex regions of human genomes; and methods for the analysis of complete microbial genomes and microbiomes based on novel long-read sequencing technologies. During the SARS-CoV-2 pandemic, Alexander co-led a pioneering effort to develop and establish genomic contact tracing to better understand viral transmission chains in the population at large. Alexander co-coordinates the genomic surveillance infrastructure (GenSurv) of Germany’s Netzwerk Universitätsmedizin (NUM), contributing to Germany’s pandemic preparedness. Alexander has co-founded two biotech start-up companies and was a fellow of the Studienstiftung des deutschen Volkes.
Prof. Dr. Mireille Hildebrandt
Professor of Law, Vrije Universiteit Brussel and Radboud University
Title: AI for better law: what could possibly go wrong?
Abstract: To deploy AI in law, we need to make certain assumptions about the computability of the law. For data-driven law (using machine learning) we probably need to frame law as a corpus of legal training data; for code-driven law (using programming languages to ‘render’ the law) we need to frame law as a closed system of rules or algorithms. This contribution will assess what safeguards must be in place to prevent such framing from becoming true, recalling Hannah Arendt’s seminal insight at the end of The Human Condition, where she suggests that the problem is not whether behaviourism is true, but that it may become true.
Bio: Hildebrandt is a Research Professor on ‘Interfacing Law and Technology’ at Vrije Universiteit Brussel (VUB), appointed by the VUB Research Council. She is co-Director of the Research Group on Law, Science, Technology and Society studies (LSTS) at the Faculty of Law and Criminology. She also holds the part-time Chair of Smart Environments, Data Protection and the Rule of Law at the Science Faculty, at the Institute for Computing and Information Sciences (iCIS) at Radboud University Nijmegen. Her research interests concern the implications of automated decisions, machine learning and mindless artificial agency for law and the rule of law in constitutional democracies. Hildebrandt has published 5 scientific monographs, 23 edited volumes or special issues, and over 120 chapters and articles in scientific journals and volumes. She received an ERC Advanced Grant for her project on ‘Counting as a Human Being in the era of Computational Law’ (2019-2024), which funds COHUBICOL. In that context she is co-founder, together with Laurence Diver, of the international peer-reviewed Journal of Cross-Disciplinary Research in Computational Law (co-Editor-in-Chief is Frank Pasquale). In 2022 she was elected a Fellow of the British Academy (FBA).
Prof. Dr. Katharina Simbeck
Professor for Business Administration and Controlling, HTW Berlin
Title: They shall be fair, transparent, robust - how can AI systems be audited?
Abstract: Increasingly, AI systems will be required to be certified with regard to their functionality and the fairness of their results. This will be the case especially for high-risk AI systems, i.e. AI systems that can influence a person’s life choices, such as systems used in education or hiring. There are, however, a number of challenges related to auditing AI systems. The systems often lack transparency about the relationship between input variables and system output, and their functioning is not easily explained. Some AI systems are constantly re-trained on new data; others are applied to data that differs substantially from the training data originally used to build the system. In most cases, neither source code nor training data are publicly available. In this contribution I will discuss how AI systems could be audited, which challenges arise, and potential solutions.
Bio: Prof. K. Simbeck holds a double degree in Business Administration from the University of Erlangen-Nürnberg and EM Lyon in France, and a PhD from Technical University Berlin. She worked in finance for large multinationals for 10 years before joining HTW Berlin as a professor in the business information systems program in 2014. Prof. Simbeck's research focuses on the social impact of digital technologies, especially with regard to the fairness and transparency of AI systems. She also researches the opportunities that digital learning creates for learners and educators. She is the author of numerous publications on fairness and AI and a reviewer for recognized journals and conferences.
Prof. Dr. Nicola Segata
Laboratory of Computational Metagenomics, Department CIBIO, University of Trento
Title: Combining metagenomics and machine learning for novel precision medicine tools
Abstract: The exploration of the human microbiome and its medical implications through metagenomic sequencing is rapidly advancing, thanks to new computational techniques that can reveal previously unknown microbial characteristics and the growing availability of massive data sets. To make progress in this field, it is essential to develop machine learning methods that can better model the relationships between the human microbiome and health, and can enable automated microbiome-based diagnostic and prognostic tools. However, despite the potential for transformative biomedical applications, this work is still in its early stages due to various factors. During my presentation, I will discuss the present challenges associated with utilizing machine learning for human metagenomics, and outline some of the groundbreaking biomedical applications that could be hugely impacted by next-generation artificial intelligence applied to the human microbiome.
Bio: Nicola Segata is Full Professor and Principal Investigator at the CIBIO Department of the University of Trento (Italy) and Principal Investigator at the European Institute of Oncology in Milan (Italy). He earned his Ph.D. in Computer Science at the University of Trento in 2009 and then moved to the Harvard School of Public Health for his post-doctoral training, where he started studying the human microbiome with computational metagenomics approaches in the laboratory of Prof. Curtis Huttenhower. He returned to the University of Trento (Department CIBIO), where he started his laboratory in 2013; the lab employs experimental meta'omic tools and novel computational approaches to study the diversity of the human microbiome across conditions and populations and its role in human diseases. His work is supported by the European Research Council and by several other European agencies. The projects in his laboratory bring together computer scientists, microbiologists, statisticians, and clinicians and focus on profiling microbiomes with strain-level resolution and on meta-analysing very large sets of metagenomes with novel computational tools.
Prof. Dr. Johann Justus Vasel, LL.M. (NYU)
Public law with a special focus on legal issues of artificial intelligence, Heinrich Heine University Düsseldorf
Title: Better AI via Better Regulation?
Abstract: Since its emergence, artificial intelligence has largely been free from regulation. In the recent past, ethical guidelines and principles were front and center in governing the use of this technology. Now, however, we are witnessing a shift from soft to hard law and from self-government to state government, with the EU Commission’s proposed “AI Act” as just one example. While the overarching goals of the emerging AI regulatory landscape are to foster AI, to enhance trust, and to mitigate risks, the current approaches seem ill-suited to these ends. Could “agile regulation” help to govern the Fourth Industrial Revolution?
Bio: Johann Justus Vasel studied Law and Economics at the University of Bayreuth and the Julius Maximilian University of Würzburg with a focus on public international and European law. Following his studies, he worked at the Human Rights Center (Potsdam), received his doctorate from the University of Hamburg, did his legal traineeship at the Berlin Court of Appeal, completed his master's degree at New York University, and was a Max Weber Fellow at the European University Institute, Florence. He previously held a position as Visiting Professor for Law & Economics at the University of St. Gallen (Switzerland) and conducted research at the Institute for Law & Economics at Hamburg University. Johann Justus Vasel is currently Professor for Public Law with a Focus on Legal Aspects of Artificial Intelligence at Heinrich Heine University Düsseldorf, serves as a Board Member of HeiCAD, and coordinates the legal use case within the Manchot Research Group. His wide-ranging research interests span from legal philosophy and constitutional law to European and public international law, with a special focus on AI.
Prof. Dr. Marius Wehner
Professorship for Business Administration, esp. Digital Management & Digital Work, Heinrich Heine University Düsseldorf
Title: How to restore fairness perceptions? The influence of AI-supported selection tools and explanations on applicant reactions
Abstract: Companies increasingly use artificial intelligence (AI) in their recruitment and selection processes to reduce costs, increase efficiency, and find well-fitting candidates. However, applicants have considerable concerns about the procedural and interactional fairness of AI-supported selection tools. Knowledge about the actions organizations can take to restore fairness perceptions is still in its infancy. Therefore, we empirically investigate how organizations can explain the benefits of AI in order to alter applicants’ reactions to AI-supported selection tools during recruitment. We will discuss important implications for research and practice.
Bio: Marius Wehner is a professor of digital management and digital work at Heinrich Heine University Düsseldorf. He dedicates his work to research and teaching in the fields of algorithmic decision-making, fairness perceptions of AI, human resource management, and entrepreneurship. He earned his PhD from the University of Giessen in 2012 and worked as a post-doc at Paderborn University. In 2017, he was appointed assistant professor for business administration, esp. corporate governance, at Heinrich Heine University Düsseldorf. He was awarded the Best Paper Award at the European Academy of Management (EURAM) and the Entrepreneurship Research Newcomer Award at the G-Forum. He has published in top-tier journals such as Management Information Systems Quarterly, Human Resource Management (US), Journal of Business Ethics, Business & Information Systems Engineering, Journal of Vocational Behavior, and European Management Review. He also serves on the Editorial Boards of The International Journal of Human Resource Management and European Management Review.
Prof. Dr. Hartmut Wessler
Professor for Media and Communication Studies, University of Mannheim
Title: How can AI help to improve democratic public debate?
Abstract: Widespread criticism of low or deteriorating quality of democratic public debate, particularly online, has led to sustained interest in devising algorithmic solutions that are both effective and scalable. Previous work has often focused on (a) identifying uncivil content (to fight hate speech) or on (b) providing people with counter-attitudinal content (to combat selective exposure). The first approach struggles with the inclusion of contextual knowledge necessary to identify non-literal language use, as in irony and sarcasm. The second approach faces the fact that for partisans the mere availability of counter-attitudinal information may serve to harden polarization rather than soften it. In addition to these two, I will highlight a third route, namely (c) specifying the requirements for autonomous discussion moderation. From a communication science perspective, I will ask how participants can be nudged to engage in more perceptive democratic listening and in searching for common concerns that might unite otherwise opposing groups.
Bio: Hartmut Wessler is Professor at the Institute for Media and Communication Studies and Principal Investigator at the Mannheim Center for European Social Research (MZES). He studied communication, political science and sociology at the Free University of Berlin and Indiana University, Bloomington, IN. After an interlude at the University of Leipzig he pursued his doctorate at the University of Hamburg from 1994 to 1997. Afterwards he worked at FU Berlin before moving to International University Bremen/Jacobs University as an Associate Professor in 2001. He joined the University of Mannheim as Full Professor in 2007.
Hartmut Wessler’s research is broadly concerned with the democratic quality of mediated public discourse, its determinants and the ways it could be improved by both humans and algorithms. Together with the members of his research team, he has studied media debates and media events in the areas of climate change, migration/refugees, religion/secularism, terrorism as well as other pressing and contentious issues. In his projects he often employs standardized, automated or interpretive media content analysis of both textual and visual resources, but also uses interviews or surveys. He particularly focuses on integrating computational methods in his research and on developing multimodal (text + image) methods of media content analysis.
Hartmut Wessler has conducted research and taught classes as a visiting scholar at New York University, the Universidade Federal de Minas Gerais in Belo Horizonte, Brazil, and the University of Zürich, Switzerland. His work has been published in leading journals such as the Journal of Communication, Political Communication, Communication Theory, and The International Journal of Press/Politics, as well as by renowned book publishers both in Germany and internationally. He was elected vice-chair of the German Research Foundation's review board for the social sciences (2012–2016) and has served as vice-president of the German Communication Association since 2022.
Magdalena Wojcieszak, PhD
Professor of Communication, University of California, Davis
Associate Researcher, University of Amsterdam
Title: Algorithms in Online Politics: Problems and Solutions
Abstract: Populism, polarization, misinformation, and wavering support for democratic norms are pressing threats to many democracies. Although the sources of these threats are multifaceted, social media platforms and their recommendation algorithms are often seen as the culprit. Many observers and scholars worry that platform algorithms create filter bubbles and lead to rabbit holes of radicalization.
In this presentation, I address these issues in two ways. First, I present a sock-puppet-based audit of ideological biases on YouTube. This auditing methodology allows us to systematically, and at scale, isolate the influence of the algorithm in recommending congenial and radical content. We examine whether (1) recommended videos are congenial with users' ideology, especially deeper in the watching trail (i.e., filter bubbles) and (2) recommendations deeper in the trail are progressively more extreme and come from problematic channels (i.e., radicalization). Second, I argue that although most scholars worry about ideological biases in recommender systems, interest-based biases are as – if not more – democratically consequential. The problem is less that people consume “bad” political content (radical, unverified, or otherwise problematic) than that most do not consume any political content at all. I present a computational intervention aimed at nudging the algorithm toward recommending and incentivizing exposure to quality news on social media. I then integrate these studies, discussing how transparent algorithms could help minimize some of these democratic challenges.
Bio: Magdalena Wojcieszak (Ph.D., U. of Pennsylvania) is Professor of Communication at the U. of California, Davis, and an Associate Researcher at the Amsterdam School of Communication Research, U. of Amsterdam, where she directs the ERC EXPO Grant.
Her research focuses on how people select political information in the current media environment and on the effects of these selections on democratic attitudes, cognitions, and behaviors. She also examines the effects of mass media, new information technologies, and various messages on extremity, polarization, tolerance, and perceptions. Prof. Wojcieszak’s current work aims to identify the extent of interest-based and political biases in recommendation algorithms and to propose principled solutions to minimize these biases, especially in the context of promoting exposure to quality news and diverse political content online.
Prof. Wojcieszak has (co-)authored more than 70 articles in peer-reviewed journals, is an Associate Editor of the Journal of Communication, and serves on the editorial boards of seven peer-reviewed journals. She has received several awards for her teaching and research, including the 2016 Young Scholar Award from the International Communication Association.
Prof. Wojcieszak is part of the Misinformation Committee at Social Science One, the first-ever partnership between academic researchers and social media platforms, and of an independent research partnership between researchers and Facebook to study the impact of Facebook and Instagram on key political attitudes and behaviors during the 2020 U.S. elections.
Prof. Dr. Marc Ziegele
Professor for Political Online Communication, Institute for Communication and Media Studies, Heinrich Heine University Düsseldorf
Title: Developing an Incivility Dictionary for German Online Discussions – a Semi-Automated Approach Combining Human and Artificial Knowledge
Abstract: Incivility in online discussions has become an important issue in political communication research. Instruments and tools for the automated analysis of uncivil content, however, are rare, especially for non-English user-generated text. Our study presents an extensive dictionary (DIKI, German: Diktionär für Inzivilität) to detect incivility in German-language online discussions, built with a semi-automated two-step approach that combines manual content analysis with automated keyword collection using a pre-trained word embedding model. Various evaluations show that DIKI clearly outperforms comparable dictionaries that have been used as alternative instruments to measure incivility (e.g. the LIWC) as well as basic machine-learning approaches to text classification. Still, the manual evaluation of DIKI confirms that detecting complex and context-dependent forms of incivility remains challenging and that constant updates would be needed to maintain performance.
Bio: Marc Ziegele has held the assistant professorship for communication and media research with a focus on online political communication at the Department of Social Sciences at Heinrich Heine University Düsseldorf since February 2018. He is also head of the junior research group "Deliberative Discussions on the Social Web," which is funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia. Previously, he was a research assistant at the Department of Communication at Johannes Gutenberg University Mainz, where he had also studied media economics. His research interests include participation and citizen discussions on the Internet. In the junior research group, he investigates measures to improve the quality and impact of users' public discussions of political issues. He also researches the causes and consequences of media trust, various aspects of citizens' social web use, and computational methods for the social sciences.