Announcing ICLR 2024 Invited Speakers
We are pleased to announce the Invited Speakers for ICLR 2024. These speakers were selected to cover a range of topics, both within core machine learning and in adjacent areas of interest (e.g., legal considerations, sustainability, and drug design). Those attending ICLR can see the full schedule here (if attending in person, please set your timezone to Europe/Vienna).
Invited Speakers in alphabetical order (talk titles & abstracts subject to change):
Kyunghyun Cho, NYU & Genentech (Prescient Design)
Talk Title: “Machine Learning in Prescient Design’s Lab-in-the-Loop Antibody Design”
Abstract: TBA
Priya Donti, MIT & Climate Change AI
Talk Title: “Optimization-in-the-loop ML for climate and energy”
Abstract: TBA
Kate Downing, Law Offices of Kate Downing
Talk Title: “Legal Fundamentals for AI Researchers”
Abstract: This talk will cover fundamental legal principles all AI researchers should understand. It will explore why legislators are looking at new laws specifically for AI and the goals they want to accomplish with those laws. It will also cover legal risks related to using training datasets, understanding dataset licenses, and options for licensing models in an open fashion.
Raia Hadsell, Google DeepMind
Talk Title: “Learning through AI’s winters and springs: unexpected truths on the road to AGI”
Abstract: After decades of steady progress and occasional setbacks, the field of AI now finds itself at an inflection point. AI products have exploded into the mainstream, we’ve yet to hit the ceiling of scaling dividends, and the community is asking itself what comes next. In this talk, Raia will draw on her 20 years’ experience as an AI researcher and AI leader to examine how our assumptions about the path to Artificial General Intelligence (AGI) have evolved over time, and to explore the unexpected truths that have emerged along the way. From reinforcement learning to distributed architectures and the potential of neural networks to revolutionize scientific domains, Raia argues that embracing lessons from the past offers valuable insights for AI’s future research roadmap.
Moritz Hardt, Max Planck Institute for Intelligent Systems, Tübingen
Talk Title: “The emerging science of benchmarks”
Abstract: Benchmarks are the keystone that hold the machine learning community together. Growing as a research paradigm since the 1980s, there’s much we’ve done with them, but little we know about them. In this talk, I will trace the rudiments of an emerging science of benchmarks through selected empirical and theoretical observations. Specifically, we’ll discuss the role of annotator errors, external validity of model rankings, and the promise of multi-task benchmarks. The results in each case challenge conventional wisdom and underscore the benefits of developing a science of benchmarks.
Devi Parikh, Georgia Tech
Talk Title: “Stories from my life”
Abstract: This is going to be an unusual AI conference keynote talk. When we talk about why the technological landscape is the way it is, we talk a lot about the macro shifts – the internet, the data, the compute. We don’t talk about the micro threads, the individual human stories as much, even though it is these individual human threads that cumulatively lead to the macro phenomenon. We should talk about these stories more! So that we can learn from each other, inspire each other. So we can be more robust; more effective in our endeavors. By strengthening our individual threads and our connections, we can weave a stronger fabric together. This talk is about some of my stories from my 20-year journey so far – about following up on all threads, about learnt reward functions, about fleeting opportunities, about multidimensional impact landscapes, and about curiosity for new experiences. It might seem narcissistic, but hopefully it will also feel authentic and vulnerable. And hopefully you will get something out of it.
Jie Tang, Tsinghua University
Talk Title: “The ChatGLM’s Road to AGI”
Abstract: Large language models have substantially advanced the state of the art in various AI tasks, such as natural language understanding, text generation, image processing, and multimodal modeling. In this talk, we will first introduce the development of AI in the past decades, in particular from the angle of China. We will also talk about the opportunities, challenges, and risks of AGI in the future. In the second part of the talk, we will use ChatGLM, an open-sourced alternative to ChatGPT, as an example to explain our understanding and insights derived during the implementation of the model.