Exploring the safety, adaptability, and efficiency of AI in the real world
The 40th International Conference on Machine Learning (ICML 2023) begins next week and will be held from July 23rd to 29th in Honolulu, Hawaii.
ICML brings together the artificial intelligence (AI) community to share new ideas, tools, and datasets, and make connections to advance the field. From computer vision to robotics, researchers from around the world will present their latest advances.
Shakir Mohamed, our Director of Science, Technology and Society, will speak about machine learning with social purpose, covering challenges in health and climate, taking a sociotechnical perspective, and strengthening global communities.
We are proud to support the conference as a platinum sponsor and continue to work with long-term partners LatinX in AI, Queer in AI, and Women in Machine Learning.
The conference will also feature demonstrations of AlphaFold, advances in fusion science, and new models such as PaLM-E for robotics and Phenaki for generating video from text.
Google DeepMind researchers will present more than 80 new papers at this year’s ICML. Because many papers were submitted before Google Brain and DeepMind joined forces, papers submitted under a Google Brain affiliation will be covered on the Google Research blog, while this post features papers submitted under a DeepMind affiliation.
AI in the (simulated) world
The success of AI that can read, write, and create is underpinned by foundation models: AI systems trained on vast datasets that can learn to perform many tasks. Our latest research investigates how these efforts can translate into the real world, laying the groundwork for capable, embodied AI agents that better understand the dynamics of the world, unlocking new possibilities for more useful AI tools.
In an oral presentation, we introduce AdA, an AI agent that, like humans, can adapt to solve new problems in a simulated environment. In minutes, AdA can take on challenging tasks such as combining objects in novel ways, navigating unseen terrain, and cooperating with other players.
Likewise, we show how vision-language models can help train embodied agents, for example by telling a robot what it is doing.
The future of reinforcement learning
Developing responsible and trustworthy AI requires understanding the core goals of these systems. In reinforcement learning, one way this can be defined is through rewards.
In an oral presentation, we aim to settle the reward hypothesis, first proposed by Richard Sutton, which states that all goals can be thought of as maximizing expected cumulative reward. We explain the exact conditions under which it holds, and clarify the kinds of goals that can and cannot be captured by reward in the general reinforcement learning problem.
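For readers less familiar with the terminology, the quantity the hypothesis refers to is conventionally written as the expected discounted return. The notation below is the standard textbook formulation, not drawn from the paper itself:

```latex
% Expected discounted cumulative reward for a policy \pi,
% with reward r_t at time step t and discount factor \gamma \in [0, 1):
J(\pi) = \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, r_t \,\right]
```

The reward hypothesis asks whether every goal an agent might pursue can be expressed as maximizing some quantity of this form, for a suitable choice of reward signal.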
When AI systems are deployed, they need to be robust enough to withstand the real world. Because AI tools often operate under constraints for safety and efficiency reasons, we examine how to better train reinforcement learning algorithms within those constraints.
Our research, which won an ICML 2023 Best Paper Award, explores how to teach models complex long-term strategies under uncertainty in imperfect-information games. We’ll share how a model can learn to play to win a two-player game even without knowing the other player’s position and possible moves.
Challenges at the frontier of AI
Humans can easily learn, adapt, and understand the world around them. Developing advanced AI systems that can generalize in a human-like manner will help create AI tools that can be used in everyday life and tackle new challenges.
One way AI adapts is by quickly changing its predictions in response to new information. In an oral presentation, we discuss the plasticity of neural networks, how it is lost over the course of training, and how that loss can be prevented.
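To make the idea of plasticity concrete, here is a minimal, generic sketch of how one might measure it: repeatedly re-fit the same small network to a stream of fresh random regression tasks and track the final fitting error per task. This is an illustrative protocol only, not the method from the presentation; a network that loses plasticity would show that error creeping upward, though a toy setting this small may not exhibit the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(d_in=8, d_hidden=32):
    # Tiny one-hidden-layer MLP with ReLU activation.
    return {
        "W1": rng.normal(0, 1 / np.sqrt(d_in), (d_in, d_hidden)),
        "W2": rng.normal(0, 1 / np.sqrt(d_hidden), (d_hidden, 1)),
    }

def forward(p, X):
    h = np.maximum(X @ p["W1"], 0.0)
    return h, h @ p["W2"]

def fit_task(p, X, y, steps=200, lr=0.01):
    # Plain SGD on mean-squared error; returns the loss after training.
    for _ in range(steps):
        h, pred = forward(p, X)
        err = pred - y                      # shape (n, 1)
        gW2 = h.T @ err / len(X)
        gh = (err @ p["W2"].T) * (h > 0)
        gW1 = X.T @ gh / len(X)
        p["W1"] -= lr * gW1
        p["W2"] -= lr * gW2
    _, pred = forward(p, X)
    return float(np.mean((pred - y) ** 2))

# Continually re-fit the SAME network to a stream of fresh random tasks;
# rising final losses across tasks would indicate loss of plasticity.
params = init_net()
X = rng.normal(size=(128, 8))
final_losses = [fit_task(params, X, rng.normal(size=(128, 1))) for _ in range(10)]
print([round(l, 3) for l in final_losses])
```

All names and hyperparameters here are hypothetical choices made for illustration.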
We’ll also present research that may help explain the type of in-context learning that emerges in large language models, by studying neural networks meta-trained on data sources whose statistics change spontaneously, as in natural language prediction.
In an oral presentation, we will introduce a new family of recurrent neural networks (RNNs) that perform better on long-range reasoning tasks, unlocking the future potential of these models.
Finally, in “Quantile Credit Assignment,” we propose an approach for disentangling luck from skill. By establishing clearer relationships between actions, outcomes, and external factors, AI can better understand complex real-world environments.