This blog post summarizes the YouTube series "DeepMind: The Podcast" (Season 2), in which the mathematician Hannah Fry explores how DeepMind is using AI to advance science in critical areas.
S2E1 - A breakthrough unfolds
AlphaFold is a revolutionary AI system that predicts the 3D shape of proteins. Proteins in the human body are built from just 20 types of amino acids, and their final shapes are determined by the laws of physics. AlphaFold's objective is to forecast a protein's shape from its amino-acid sequence alone, a task whose space of possible shapes is astronomically large.
X-ray crystallography is a common technique for observing the true shape of a protein, but it is a lengthy process that can take years if the protein has never been studied before. Scientists have spent decades recording the shapes of some proteins, and the Critical Assessment of protein Structure Prediction (CASP) is a worldwide experiment in which teams predict the shape of a protein from its sequence. Predictions are scored with the Global Distance Test (GDT), which measures how closely a prediction matches the experimentally determined shape on a scale from 0 to 100; a score around 90 is considered a solution to the problem.
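To make the metric concrete, here is a minimal sketch of how a GDT_TS-style score can be computed, assuming the predicted and experimental structures are already optimally superimposed (real CASP scoring also searches over superpositions):

```python
import numpy as np

def gdt_ts(pred_ca: np.ndarray, true_ca: np.ndarray) -> float:
    """Simplified GDT_TS: mean percentage of C-alpha atoms within
    1, 2, 4 and 8 angstroms of their experimental position.
    Assumes both (N, 3) coordinate arrays are already superimposed."""
    dist = np.linalg.norm(pred_ca - true_ca, axis=1)
    return 100.0 * np.mean([(dist <= t).mean() for t in (1.0, 2.0, 4.0, 8.0)])

# Toy check: a perfect prediction scores 100.
coords = np.random.rand(100, 3) * 50
print(gdt_ts(coords, coords))  # -> 100.0
```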
The first version of AlphaFold used computer-vision techniques and was computationally demanding, reaching a median GDT of just 58.9. The second version was rebuilt from the ground up to address these issues, and at CASP14 in 2020 it delivered an exceptional median GDT of 92.4, a dramatic improvement over its predecessor.
AlphaFold can predict the shape of a protein in a matter of minutes. This is a significant advance: it removes the need for laborious lab experiments and can greatly accelerate scientific discovery. With its potential to aid drug discovery, the development of treatments for neglected tropical diseases, and the screening of hundreds of enzymes that could digest plastics or kill parasites, AlphaFold has become an indispensable tool for researchers.
Proteins are essential components of the human body and are found everywhere, from the antibodies of the immune system to the hemoglobin in blood. AlphaFold's ability to predict protein shapes with great accuracy is a game-changer for protein research.
In July 2021, AlphaFold was released to the public. Before the release, ethical considerations were weighed, including the concern that bad actors could use the algorithm to design bioweapons. The conclusion was that there are far easier ways to produce bioweapons than by way of protein folding.
One notable aspect of AlphaFold is that it is not a perfect dictionary: alongside each predicted shape it outputs a per-residue confidence score (pLDDT), which helps researchers judge how much to trust each part of a prediction.
It is important to note that AlphaFold is not capable of true intelligence. It is a specialist trained to perform one specific task: spotting patterns in order to predict the shape of proteins. While it is undoubtedly a significant achievement in the field of artificial intelligence, it is still far from possessing true intelligence.
S2E2 - Speaking of intelligence
A Large Language Model (LLM) is a machine learning system that uses deep learning to generate text. One of the most powerful LLMs currently available is GPT-4, developed by OpenAI. LLMs are used in applications such as translation, search autocomplete, and chatbots, and researchers believe language will play a pivotal role in achieving Artificial General Intelligence (AGI).
An early forerunner was ELIZA, designed by Joseph Weizenbaum in the mid-1960s: a simple rule-based program that simulated a psychotherapist. Researchers believe that language is fundamental to communicating with an AGI. Language has a finite set of words yet can describe an infinite set of things, making it a general-purpose system that is fundamental to our intelligence and key to our social intelligence and cooperation.
LLMs are trained on vast amounts of text from the internet, and the bigger the model, the richer the context it can exploit. The most powerful models are enormous: GPT-3 has 96 layers and 175 billion parameters, and training it consumes a great deal of energy. Despite their power, these models are far less efficient at acquiring knowledge than the human brain and largely "echo" what they have seen before. They do not "understand" the words they produce, which leads to logical errors and to inconsistencies over long spans of text, for example in summarization.
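As a back-of-envelope check on those numbers, a common approximation for a GPT-style decoder is roughly 12 x layers x width^2 parameters, ignoring embeddings:

```python
def transformer_params(n_layers: int, d_model: int) -> int:
    """Rough parameter count for a GPT-style decoder: each layer has
    ~4*d^2 weights in attention and ~8*d^2 in the feed-forward block
    (embeddings and biases ignored)."""
    return 12 * n_layers * d_model ** 2

# GPT-3: 96 layers, hidden width 12288 -> ~1.7e11, i.e. ~175B parameters.
print(f"{transformer_params(96, 12288):.2e}")
```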
Unfortunately, LLMs can be used to create personalized fake news, and because the training corpus scraped from the internet is highly biased, a model's toxicity is a real problem: it can produce racist and violent output. Building safeguards and detoxifying models is an entire area of research. A toxicity classifier can be used as a filter, but it is imperfect: it can reject perfectly fine sentences because of specific keywords, which ends up marginalizing certain groups.
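A minimal sketch of how such a filter sits in a generation pipeline; `toxicity_score` here is a crude keyword stand-in for a learned classifier, and its keyword-triggered false positives are exactly the failure mode described above:

```python
# Hypothetical toxicity_score() standing in for a trained classifier
# that returns a probability in [0, 1]; the threshold is arbitrary.
TOXICITY_THRESHOLD = 0.5

def toxicity_score(text: str) -> float:
    # Placeholder: a real classifier is a learned model, not a rule.
    blocklist = {"flagged_word_1", "flagged_word_2"}
    return 1.0 if any(w in text.lower().split() for w in blocklist) else 0.0

def filter_generations(candidates: list[str]) -> list[str]:
    """Keep only generations the classifier considers safe. Keyword-driven
    scoring also rejects benign sentences that merely mention a flagged
    term, which is how certain groups' speech gets over-filtered."""
    return [c for c in candidates if toxicity_score(c) < TOXICITY_THRESHOLD]
```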
To remind humans that they are conversing with an algorithm, human characteristics are stripped from the model's answers. This also helps discourage people from sharing personal information with the model.
Some researchers believe that achieving AGI requires more than language alone: other modalities, such as images, sound, intonation, and touch, may also be necessary. You can read 100 books about cats and still not understand cats until you have interacted with one. Others believe language could be enough: humans can reason about black holes without ever interacting with them, describing them purely mathematically.
In conclusion, LLMs are powerful language models with various applications. However, they also have limitations and potential issues such as toxicity and bias. Researchers are working on improving LLMs and exploring other modalities to achieve AGI.
S2E3 - Better together
One crucial milestone on the road to artificial general intelligence (AGI) is the ability of AI systems to cooperate. Humans are exceptional cooperators, and groups such as families, countries, and organizations can be seen as intelligent entities working toward a common goal. To teach AI to cooperate, reinforcement learning (RL) is used: a reward system is defined, and the agent learns to maximize its reward through actions in its environment.
To encourage cooperative behavior, agents are rewarded when other agents also benefit from their actions; in other words, an agent must weigh the interests of others alongside its own. Agents can learn to lie and even manipulate, yet pure altruism is not always the best solution either. For example, when self-driving cars approach an intersection, someone has to go first. This is a social dilemma: if everyone behaves selfishly, the outcome is bad for all.
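The intersection is a classic social dilemma, and a tiny payoff matrix (the numbers are purely illustrative) shows why universal selfishness is the worst outcome:

```python
# Illustrative two-agent social dilemma (payoffs are made up):
# each car can YIELD (cooperate) or GO (defect) at the intersection.
PAYOFFS = {
    ("yield", "yield"): (2, 2),    # small delay for both
    ("go",    "yield"): (3, 0),    # the pushy car wins
    ("yield", "go"):    (0, 3),
    ("go",    "go"):    (-5, -5),  # collision: worst outcome for everyone
}

for (a, b), (ra, rb) in PAYOFFS.items():
    print(f"car A {a:>5}, car B {b:>5} -> rewards ({ra}, {rb})")
```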
The game "diplomacy" is a useful environment to teach agents to compete and cooperate, as it encourages long-term success. RL agents in the game began to cooperate towards a common goal, even though communication was not possible. While agents might lie to maximize their rewards, they should learn that in the long term, honesty is the best policy.
Because the environments of AIs and humans are fundamentally different, AI must be adapted to human behavior and rhythms. When cooking, for example, you may prefer a slower but more cooperative AI partner over a super-fast agent you cannot keep up with. And while there is a gap between simulation and the real world, some researchers believe embodied AI in the real world is the key to achieving AGI.
S2E4 - Let's get physical
People often use the terms robot and AI interchangeably, but there is a clear distinction between the two: AI refers to a computer program that learns from data, while a robot is a machine that can take actions and manipulate its physical environment.
DeepMind has integrated AI algorithms into their robots and uses machine learning techniques to teach their robots how to function in different environments. This is essential because in many cases, the parameters and settings of a task are not fixed and can change frequently. In such situations, AI robots are needed to learn and adapt to the changing circumstances.
Reinforcement learning is a common method DeepMind uses to teach its robots to accomplish tasks. One significant challenge is the "sparse reward problem": the robot only receives feedback on overall success or failure, with no indication of how well it performed along the way, which makes training slow and difficult. One solution is to supplement the sparse reward with additional human feedback.
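A minimal sketch of the contrast for a hypothetical reaching task; the distance-based shaping term below is an illustrative stand-in for the richer feedback (including human feedback) described above:

```python
import numpy as np

def sparse_reward(gripper: np.ndarray, target: np.ndarray) -> float:
    """Success/failure only: the robot learns nothing from a near miss."""
    return 1.0 if np.linalg.norm(gripper - target) < 0.05 else 0.0

def shaped_reward(gripper: np.ndarray, target: np.ndarray) -> float:
    """Extra feedback grades *how close* the attempt was, so even a
    failed grasp produces a learning signal."""
    return sparse_reward(gripper, target) - 0.1 * np.linalg.norm(gripper - target)

g, t = np.array([0.3, 0.2, 0.1]), np.array([0.0, 0.0, 0.0])
print(sparse_reward(g, t), round(shaped_reward(g, t), 3))  # 0.0 vs -0.037
```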
The Robotics Lab aims to connect AI research with the real world and develop robots that can interact with the physical environment effectively. They seek to develop an AI that possesses physical intelligence, such as the ability to move a body.
As we continue to develop these robots, we must be cautious about how they are used, since they could serve harmful purposes such as carrying weapons. The goal should be to augment human capabilities, not replace them.
Sharing knowledge across robots and using transfer learning can significantly accelerate their development and training. Other methods, such as imitation learning, can also be used alongside reinforcement learning to enhance performance.
Finally, some researchers argue that "reward is enough": rather than engineering one solution per ability, a single general reward-maximization problem may give rise to all the abilities an agent needs. Whatever the approach, we must continue to develop and refine these AI robots to help us solve complex problems and improve our lives.
S2E5 - The road to AGI
Artificial General Intelligence (AGI), also known as Human-Level Artificial Intelligence (HLAI), is an agent's ability to achieve goals across a wide range of environments. Some researchers estimate a 50% chance that AGI appears by 2030, in forms ranging from a service like Google's to a robot. But how would we recognize an AGI? In a 3D environment, a human operator could converse with the agent and watch it solve problems the way a human would, understanding the world and drawing parallels. A simulated environment can be made complex enough, but the agent ultimately needs to cross the barrier into the real world.
Multiple techniques are available for training an AI, such as reinforcement learning, supervised learning, and unsupervised learning. Some researchers believe reward is enough and that every problem can be framed as maximizing a reward; others counter that designing a meaningful reward is hard. Moreover, agents can engage in "reward hacking" (specification gaming): the algorithm learns to exploit the reward function and collect a lot of reward without doing the intended task properly.
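A toy illustration of that exploit, with made-up numbers: the designer rewards points as a proxy for finishing a race, and the proxy turns out to dominate the intended behavior:

```python
# The designer rewards points as a proxy for finishing the race, so
# endlessly farming bonus items scores higher than crossing the line.
def proxy_reward(points: int, finished: bool) -> float:
    return points + (10.0 if finished else 0.0)

honest_run  = proxy_reward(points=5,  finished=True)   # 15.0
reward_hack = proxy_reward(points=50, finished=False)  # 50.0
print(honest_run, reward_hack)  # the exploit beats the intended behavior
```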
Currently, every task is its own niche, and the difficulty lies in building a general agent that can handle many tasks in a connected way. Combining different types of learning, as humans and animals do, could be a solution. MuZero, an agent trained by DeepMind, mastered a wide variety of games without being told their rules, planning and searching through its environment with a model it learned itself. This matters because real-world problems are hard to specify precisely.
DeepMind used MuZero to improve video compression. Since streaming and downloading video account for roughly 80% of internet traffic, better compression can save a significant amount of energy and cost. Treating encoding as a game, MuZero learned to exploit static sections that recur across a video to make encoding more efficient. The results were impressive: videos about 6% smaller, reduced CO2 emissions, and content brought within reach of more people, such as educational material in emerging markets where data is costly.
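Quick arithmetic with the two figures quoted above suggests what that saving means for traffic overall:

```python
video_share = 0.80       # share of internet traffic that is video (figure above)
compression_gain = 0.06  # MuZero's reported size reduction (figure above)

# If all video shrank by 6%, overall traffic would drop by roughly:
print(f"{video_share * compression_gain:.1%}")  # -> 4.8%
```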
S2E6 - AI for science
DeepMind is a company that aims to use AI as the ultimate tool to advance scientific discovery across many fields. Its research targets "root node" problems, whose solutions unlock a wide variety of downstream problems. AlphaFold, for instance, tackles protein folding, which can help discover new drugs and find molecules that break down plastics.
In addition, DeepMind is making significant contributions to nuclear fusion. The reaction takes place in a reactor called a tokamak, inside which the plasma is extremely hot and chaotic and must never touch the walls. Where traditional methods simulate the plasma's shape with physics equations, DeepMind's system learns to predict and control the shape of the plasma directly, maximizing the heat it can hold.
DeepMind is also helping ecologists monitor changes in the ecosystem of Serengeti National Park. With thousands of cameras producing 20,000 images per month, the citizen-science project used to identify species could no longer keep up as the effort grew. DeepMind helped build a computer-vision system to monitor ecosystem changes and enable quick, targeted interventions to protect it.
Furthermore, DeepMind's genomics work is still at an early stage, but it has the potential to help us understand cells, discover treatments for cancer, and one day create organs for transplantation. The genotype, readable in the DNA, is a sequence over four nucleotide bases: A, T, C, and G. The phenotype, on the other hand, comprises the macroscopic characteristics of a body, such as hair color. Understanding the relation between the two could help predict whether someone has a high probability of developing cancer, or how a change in DNA produces a beneficial phenotypic effect.
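Models in this space typically consume DNA as one-hot vectors over the four bases; a minimal encoding sketch:

```python
import numpy as np

BASES = "ATCG"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as an (L, 4) one-hot matrix, one column per base."""
    return np.array([[float(b == base) for base in BASES] for b in seq.upper()])

print(one_hot("ATCG"))
# [[1. 0. 0. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 0. 0. 1.]]
```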
Effective AI problem-solving requires asking the right questions: which model architecture to use, and how to tell whether the model is getting better or worse. To "translate" DNA, DeepMind uses an architecture called a Transformer, whose attention mechanism helps the algorithm focus on the parts of the input that matter.
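At its core, attention computes softmax(Q K^T / sqrt(d)) V, weighting every position of the input by how relevant it is to every other. A minimal NumPy sketch (single head, toy dimensions):

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: each position mixes the values V,
    weighted by how strongly its query matches every key."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

L, d = 6, 8  # toy sequence length and head dimension
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))
print(attention(Q, K, V).shape)  # (6, 8)
```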
Lastly, DeepMind has applied AI to pure mathematics, helping mathematicians uncover a correspondence between the algebraic and geometric invariants of knots.
S2E7 - Me, myself and AI
WaveNet, a model developed by DeepMind, is a breakthrough in generating voice and music. The model is trained to predict the raw audio waveform, and even for a voice it was not trained on, it can quickly learn that voice's nuances through a process called "fine-tuning". It has been used to re-synthesize the voices of famous people with impaired speech, but this capability has been kept out of the public domain over concerns it could be used to create fake news.
One limitation of WaveNet is that it struggles with unusual pronunciations. Nevertheless, the model's potential is immense, and it has shown remarkable success in generating high-quality audio. WaveNet is too large to run on a single device, so its predictions are served over the internet.
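One reason the model is so heavy is that it predicts audio one sample at a time (16,000 samples per second) and needs a long receptive field, which it builds with stacks of dilated causal convolutions. A sketch of the receptive-field arithmetic, using a dilation schedule like the one in the original WaveNet paper:

```python
# WaveNet grows its receptive field with dilated causal convolutions:
# kernel size 2, dilations doubling 1..512, the stack repeated 3 times.
def receptive_field(kernel: int, dilations: list[int]) -> int:
    return 1 + sum((kernel - 1) * d for d in dilations)

dilations = [2 ** i for i in range(10)] * 3  # 1, 2, 4, ..., 512, repeated
samples = receptive_field(2, dilations)
print(samples, f"~{samples / 16000 * 1000:.0f} ms at 16 kHz")  # 3070, ~192 ms
```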
DeepMind has also ventured into weather forecasting, primarily nowcasting (prediction over the next few hours). Traditional forecasting runs huge mathematical models on supercomputers; DeepMind instead uses computer-vision methods to extrapolate how the clouds will move. Its models, generative adversarial networks (GANs), generate the next few frames of the weather "video". The primary goal is to warn people before a severe weather event, but the models struggle with rare events that are absent from their training dataset.
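This is not DeepMind's actual nowcasting model (which conditions on past radar frames and is far more elaborate), but a minimal sketch of the adversarial recipe it builds on, in PyTorch with toy 1-D "frames":

```python
import torch
from torch import nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # noise -> frame
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # frame -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 32)   # stand-in for observed frames
noise = torch.randn(8, 16)

# Discriminator step: tell real frames from generated ones.
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator into rating its frames as real.
loss_g = bce(D(G(noise)), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```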
In addition to weather forecasting, DeepMind is assisting Liverpool Football Club in improving their game. The players are monitored with sensors, and a "video assistant coach" helps make informed decisions. The technology also assists in post-match analysis, and in the future we may even have personalized commentary during a football game.
It is important to note that AI systems are only as good as the data they are trained on. If a particular group is missing from the training dataset, the algorithm can be biased, and its performance may suffer. Therefore, it is essential to ensure that the data used to train AI models is diverse and representative.
S2E8 - Fair for all
AI needs to benefit everyone and must not disadvantage any particular group. It is crucial for algorithms to represent the society we aspire to, not merely mirror what they see. A major problem here is bias amplification: any unfairness in the training dataset can get amplified, leading to dangerous outcomes in predictive algorithms.
Many AI models are trained on datasets labeled manually by humans, which can introduce bias. Detoxifying these datasets is a challenging task, but it is essential, and the AI systems that go into production must be carefully controlled and monitored to avoid repeating the biases of previous tech revolutions.
One example of this bias is a healthcare program in the US whose algorithm was denying Black patients' applications. The model had been trained on highly biased historical data, which produced discriminatory outcomes. To avoid such scenarios, we must ensure that AI systems perform equally well across all groups.
Another issue that needs to be addressed is the underrepresentation of certain groups. African researchers have found, for instance, that language models handle African languages poorly. Creating local benefits for everyone requires more diversity both in the data and in the workforce.
To achieve these big steps, we need a tight collaboration with other organizations, governments, and universities. By working together, we can create AI systems that benefit everyone and avoid the mistakes of the past.
S2E9 - The promise of AI with Demis Hassabis
AGI, or Artificial General Intelligence, is the ability to solve a vast array of tasks at a human level. There is no evidence that the brain contains anything that cannot be computed, which means the brain's functions can in principle be reproduced by a Turing machine, that is, a computer.
The path to AGI can be broken down into several components: an AGI should be capable of collaboration, of solving a wide range of problems, and of understanding language at a higher level. Psychologists are also examining whether an AGI could possess emotion, creativity, and other human-like qualities. Progress toward AGI will likely be incremental, with the milestone possibly reached within the next decade or two.
Although today's algorithms are powerful pattern-recognition tools, some abilities are still missing, such as the symbolic knowledge needed for mathematics, because these systems cannot generalize. Large Language Models (LLMs) lack basic grounded knowledge and get confused by simple physical situations, making them "clever parrots."
Demis Hassabis, co-founder and CEO of DeepMind, believes that intelligence and consciousness are dissociable, meaning that we can have one without the other. There could be ethical implications with the creation of consciousness in AI. An AGI should not be opinionated in certain cases, such as an intelligent encyclopedia where the goal is to provide information efficiently without any bias.
Concerning the application of AGI, Demis believes that it could help us with climate change, health issues, and accelerate scientific discovery in general. It could unlock key technological breakthroughs, such as nuclear fusion, and help build more efficient solar panels, superconductors, and materials. However, he also believes that society is not yet ready for AGI.
Reinforcement learning and deep learning algorithms should be proven on games and simulated environments before being applied to real-world problems. It is also important to build analysis tools that help us understand the answers of LLMs and other deep learning systems, and to fix any bias we find.
DeepMind has already made breakthroughs in "narrow" AI systems, such as AlphaFold and other RL systems that improve the performance of data center cooling systems, resulting in significant carbon footprint savings. The world is facing increasingly complex problems, such as climate change and inequality, and new problems are emerging, such as water supply. AGI may be necessary for the future of our species, but it requires a broader societal conversation to address the potential problems that may arise from its usage.