
Artificial Intelligence and Medicine

Randomly behaving dots and lines created by a computer. Animation by Tunart for Getty.

Chances are, if someone says “AI” is doing something—such as diagnosing cancer from an X-ray, driving a car, or recognizing human speech with uncanny accuracy—much of the credit should go to a specific computing technique: deep learning.

Deep learning is a form of machine learning that makes computers remarkably adept at detecting and anticipating patterns in data. Because that data can come in many forms, whether it’s audio, video, numbers in a spreadsheet, or the appearance of pedestrians in a street, deep learning is versatile. It can speed up data-intensive applications, like the design of custom molecules for medicine and new materials. And it can open up new applications, like the ability to analyze how minute changes in the activity of your genes affect your health.

To understand why it’s called “deep” learning and why it has sparked a resurgence of hope for artificial intelligence, you have to go back to the 1940s and 1950s. Even though computers at the time were essentially giant calculators, computing pioneers were already imagining that machines might someday think—which is to say they might reason for themselves. And one obvious way to try to make that happen was to mimic the human brain as much as possible.

Image from the book "Perceptrons: An Introduction to Computational Geometry" by Marvin Minsky and Seymour Papert, published in 1969. The perceptron is a type of artificial neural network first developed in the late 1950s and early 1960s.

That’s the idea behind “neural networks.” They’re loose approximations of physical neurons and how they are organized. Originally, computer engineers hard-wired them into big machines, and today these neurons are simulated in software. But in either case they’re designed to take an input (for example, all the pixels that make up a digital image) and generate a meaningful output (say, an identification of the image). What determines the output? The behavior of simulated neurons in the network. Much as a physical neuron fires an electrical signal in response to a stimulus and triggers the firing of other brain cells, every simulated neuron in an artificial neural network has a mathematical value that affects the values of other simulated neurons.
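To make that concrete, here is a minimal sketch in Python with NumPy, using made-up weights and inputs rather than anything trained on real images, of how values cascade through a small network: each simulated neuron computes a weighted sum of the values feeding into it and passes the result through a simple nonlinearity.

```python
import numpy as np

def sigmoid(x):
    # Squash each neuron's value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# A tiny, hypothetical network: 4 input values -> 3 hidden neurons -> 2 outputs.
rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(4, 3))   # weights connecting inputs to hidden neurons
w_output = rng.normal(size=(3, 2))   # weights connecting hidden neurons to outputs

def forward(pixels):
    # Each simulated neuron's value is a weighted sum of the values feeding into it,
    # passed through a nonlinearity -- the software analogue of a neuron "firing".
    hidden = sigmoid(pixels @ w_hidden)
    output = sigmoid(hidden @ w_output)
    return output

# Four made-up pixel intensities in, two class scores out (e.g. "horse" vs. "zebra").
print(forward(np.array([0.2, 0.9, 0.1, 0.7])))
```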

Imagine you want a computer to distinguish between horses and zebras. Every time you vary the input (say, you put up an image of a creature with black and white stripes instead of brown hair), the mathematical values of the simulated neurons in the first layers, which analyze the most general features of an image, change. Those changes cascade through the network all the way to the top-most layer, which tells the machine to identify the animal as a zebra. If it fails to do so accurately, you can train the system by adjusting the network's weights, the values that determine how strongly each node influences the next. (This method is known as supervised learning; there's a cool twist on it that we'll get to a little later.)
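A hedged sketch of that supervised-learning loop, again in Python with NumPy: instead of real images, it uses a single made-up "stripiness" number per animal, and it nudges a weight and a bias in whatever direction shrinks the error on labeled examples. The data and the feature are invented purely for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical training data: one feature per animal (roughly, "how striped it looks"),
# with label 1 for zebra and 0 for horse.
features = np.array([0.9, 0.8, 0.95, 0.1, 0.2, 0.05])
labels   = np.array([1,   1,   1,    0,   0,   0])

weight, bias = 0.0, 0.0   # the adjustable values that training tunes
learning_rate = 0.5

for step in range(1000):
    prediction = sigmoid(weight * features + bias)   # forward pass
    error = prediction - labels                      # how wrong the network is
    # Supervised learning: nudge the weights in the direction that shrinks the error.
    weight -= learning_rate * np.mean(error * features)
    bias   -= learning_rate * np.mean(error)

print("P(zebra) for a stripy animal:", sigmoid(weight * 0.9 + bias))
print("P(zebra) for a plain animal: ", sigmoid(weight * 0.1 + bias))
```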

For decades, the utility of neural networks was mainly theoretical. Computers lacked the processing power to use them in a wide variety of data-intensive tasks. But that has changed dramatically in the past decade, thanks mainly to the advance of graphics processing units (GPUs), chips developed for rendering realistic video games (of all things!). Now neural networks can handle a great deal of data at once and cascade it mathematically through layers upon layers of simulated neurons. These ever-more-complex networks are “deep” rather than shallow, which is why the process of training them is known as deep learning.

The reason we care about these added layers of depth is that they make computers much more finely attuned to small but meaningful patterns in a blur of data. So what does that mean in practice? You can think of three major categories of applications: computer perception, computer analysis, and computer creation. These classes of applications, and combinations of them, are providing the foundation for entirely new scientific practices and technologies.

In the first category, you’ve probably already seen a big change in computers’ ability to perceive the world. Deep learning is why digital assistants like Siri and Alexa can consistently interpret your spoken commands. Even though they still can’t always deliver the information or perform the task you’d like, they transform the sound of your speech into text with impressive accuracy. The same kind of perception is how computers identify cancer in radiology images, sometimes more reliably than human doctors can. Deep learning also helps self-driving cars process data from the roads, although the slow rollout of automated vehicles should serve as a reminder that this approach isn’t magical and can only do so much.

In the second category, computer analysis: as deep-learning algorithms have gotten better at finding signals in data, machines have been able to tackle sophisticated problems that are out of reach for humans simply because they involve so much information. Medicine is a prime example. Some researchers are using deep-learning algorithms to reanalyze data from past experiments, looking for correlations they missed the first time. Say a drug failed in trials because it worked on only 10 percent of a study’s participants. Did those 10 percent have something meaningful in common? A deep-learning system might spot it. The technology is also being used to model how engineered molecules will behave in the body or in the environment. If a computer can spot patterns that indicate a molecule is probably toxic, that reduces the chances that researchers will waste precious time and money on animal trials.

Meanwhile, some startup companies are using deep learning to analyze minuscule changes in images or videos of living cells that the naked eye would miss. Other companies are combining various types of medical information—genomic readouts, data from electronic health records, and even models of the mechanisms of certain diseases—to look for new correlations to investigate.

The final category, computer creation, is booming because of two intriguing refinements to deep learning: reinforcement learning and generative adversarial networks, or GANs.

Remember the horse vs. zebra detector that had to be “supervised” in its learning by adjusting the network's weights? With reinforcement learning, programmers do something different. They give a computer a score for its performance on a training task, and it adjusts its own behavior until it maximizes the score. This is how computers have gotten astonishingly good at video games and at games like Go. When IBM’s Deep Blue mastered chess two decades ago, it relied largely on brute force, calculating enormous numbers of possible continuations from any given position. Such brute force doesn’t work with the bewildering array of possible moves in Go, let alone with many real-world problems. Instead, computer scientists used reinforcement learning: they let the computer see for itself whether various patterns of play led to wins or losses.
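Here is a minimal reinforcement-learning sketch in Python (tabular Q-learning on an invented corridor "game," not the technique DeepMind or IBM actually used): the program is told only the score it earns, and it adjusts its own action values until the behavior that leads to the highest score wins out.

```python
import random

# A toy game: the agent stands in a corridor of 5 cells and must reach the right end.
# Reaching cell 4 scores +1; every other move scores 0. Actions: 0 = left, 1 = right.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]   # the agent's learned action values

alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly pick the best-known action, but sometimes explore at random.
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: shift this action's value toward the observed score
        # plus the best score the agent expects from the next position.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

# After training, the learned policy should be "always move right" (action 1).
print([max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(GOAL)])
```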

Now imagine you’re a biomedical researcher who wants to create a protein to attack disease in the body. Rather than having a reinforcement-learning system optimize its behavior for the highest score in a game, you might have it optimize a protein-design function that favors the simplest atomic structure. Similarly, manufacturers with networks of machinery can use reinforcement learning to tune individual pieces of equipment in ways that favor a factory’s overall efficiency.
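To make the analogy concrete, here is a purely hypothetical Python sketch of what "swapping the score" might look like: the learning loop stays the same, but the game score is replaced by a hand-written reward function that favors simpler candidate designs. The structural_complexity helper and the scoring rule are illustrative inventions, not a real protein-design objective.

```python
def structural_complexity(design):
    # Hypothetical stand-in: pretend complexity is just the number of distinct
    # building blocks in a candidate design, encoded as a string of residue letters.
    return len(set(design))

def reward(design, binds_target):
    # The "score" the learning loop would maximize: reward designs that do the job,
    # and prefer the structurally simpler ones among them.
    return (10.0 if binds_target else 0.0) - 0.1 * structural_complexity(design)

# The same update rule used for games would now push the system toward designs
# that earn the highest value from this function instead of a game score.
print(reward("AAGGA", binds_target=True), reward("ACDEFGHIK", binds_target=True))
```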

Generative adversarial networks bring together all the ideas you’ve just read about. They pit two deep-learning networks against each other. One tries to create data from scratch, and the other evaluates whether the result is realistic. Say you want a totally new design for a chair. One neural network can be told to randomly combine aspects of chairs: various materials, curves, and so on. The other neural network performs a simple evaluation: is that a chair or not? Most of what the first network comes up with will be nonsense, but the second network rejects it, leaving behind only designs that look like real chairs even though none of them has ever existed. Now you have a plausible set of new chair designs to tinker with. The same approach can be used to design new drug molecules with a higher likelihood of success.
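A compact sketch of that two-network contest, written in Python with PyTorch (assuming the library is installed) and using a toy two-dimensional "design" instead of chairs or molecules: the generator learns to produce samples that resemble the real data, while the discriminator learns to tell real from generated.

```python
import torch
import torch.nn as nn

# "Real" designs: made-up 2-D points clustered around (2, 2).
def real_batch(n=64):
    return torch.randn(n, 2) * 0.3 + 2.0

# Generator: turns random noise into a candidate design.
gen = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
# Discriminator: scores how likely a design is to be real (0 to 1).
disc = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator to label real data 1 and generated data 0.
    real = real_batch()
    fake = gen(torch.randn(64, 4)).detach()
    d_loss = loss_fn(disc(real), torch.ones(64, 1)) + loss_fn(disc(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into saying "real".
    fake = gen(torch.randn(64, 4))
    g_loss = loss_fn(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The generator's outputs should now cluster closer to the real data.
print(gen(torch.randn(5, 4)).detach())
```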

GANs can have creepy implications, as when they generate fake photos, audio, or video that feel utterly real. But the best way to think of them is as another machine-learning tool, one of several reasons that a technology decades in the making is getting more efficient and more likely to unlock new discoveries.

Many of Flagship Pioneering’s companies use machine learning techniques to develop new drug discovery platforms, including a number of our most recent prototype companies, and NewCos such as Cogen, which is developing a platform to control the immune system’s response to treat cancer, autoimmune diseases, and chronic infections.

A Timeline of Deep Learning

1943
Two researchers in Chicago, Warren McCulloch and Walter Pitts, show that highly simplified models of neurons can be used to encode mathematical functions.
1958
Frank Rosenblatt, a psychologist at the Cornell Aeronautical Laboratory, develops a basic neural network in a machine called the Perceptron. It captures images with a camera and categorizes them; during training, knobs are turned to adjust the weights of the machine’s “association cells.” Rosenblatt says it should eventually be possible to mass-produce Perceptrons that are conscious of their own existence.
1959
Stanford researchers Bernard Widrow and Ted Hoff show how neural networks can predict upcoming bits in a data stream. The technology proves useful in noise filters for phone lines and other communications channels.
1969
Research on neural networks stalls after MIT’s Marvin Minsky and Seymour Papert argue, in a book called “Perceptrons,” that the method would be too limited to be useful even if neural networks had many more layers of artificial neurons than Rosenblatt’s machine did.
1986
David Rumelhart, Geoff Hinton, and Ronald Williams publish a landmark paper on “backpropagation,” a method for training neural networks by adjusting the weights assigned to the connections between artificial neurons. The backpropagation algorithm had been described in the 1970s, but this paper puts it to much wider use in neural networks.
1990
AT&T researcher Yann LeCun, who decades later will oversee AI research at Facebook, uses backpropagation to train a system that can read handwritten numbers on checks.
1992
Gerald Tesauro of IBM uses reinforcement learning to get a computer to play championship-level backgammon.
2006
Hinton and colleagues show how to train the layers of a deep neural network quickly, one layer at a time.
2012
“Deep learning” takes off after Hinton and two of his students show that a neural network trained with their methods outperforms other computing techniques on a standard test for classifying images. Their system’s error rate is 15 percent; the next-best entrant is wrong 26 percent of the time.
2014
Google researcher Ian Goodfellow plays two neural networks off each other to create what he calls a “generative adversarial network.” One network is programmed to generate data—such as an image of a face—while the other, known as the discriminator, evaluates whether it’s plausibly real. Over time, the generator will tend to produce images (or other data) that seem realistic.
2015
DeepMind, a London AI lab acquired by Google in 2014, uses reinforcement learning to train a system that masters old Atari video games like Breakout. The system starts by playing randomly but quickly learns tactics that lead to higher scores.
2016
A deep learning system called AlphaGo beats human Go champion Lee Sedol after absorbing thousands of examples of past games played by people.
2017
An updated version of AlphaGo, known as AlphaGo Zero, plays 29 million games against itself rather than studying past games played by humans, and demonstrates the power of this form of reinforcement learning by beating the original AlphaGo 100 games to nothing. A successor, AlphaZero, applies the same method to chess and the Japanese game shogi.
2018
The same team develops AlphaFold, a deep-learning system that predicts the structure of proteins from their amino acid sequences. The team enters the Critical Assessment of Techniques for Protein Structure Prediction (CASP) competition and places first, producing the best prediction for 25 of the 43 target proteins; the runner-up manages only three.
