The Rational Neural Network Accelerates Machine-Human Discovery

Because of its recent success in image recognition, speech recognition, and drug discovery, deep learning has become a hot topic in many fields of science. Deep learning techniques are based on neural networks, which are built from stacked layers of units that apply mathematical transformations to their inputs.

Math is the language of the physical world, and Alex Townsend notices mathematical patterns everywhere: in the weather, in the movement of sound waves, and even in the spots or stripes that zebrafish embryos develop.

“We’ve been deriving calculus equations called differential equations to model physical phenomena since Newton wrote down calculus,” said Townsend, an associate professor of mathematics in the College of Arts and Sciences. This method of deriving calculus laws works, according to Townsend, if you already know the physics of the system. But what about learning physical systems whose physics are unknown?


Mathematicians in the new and expanding field of partial differential equation (PDE) learning collect data from natural systems and then use trained computer neural networks to try to derive underlying mathematical equations. In a new paper, Townsend and co-authors Nicolas Boullé of the University of Oxford and Christopher Earls, professor of civil and environmental engineering in the College of Engineering, advance PDE learning with a novel “rational” neural network that reveals its findings in a way that mathematicians can understand: through Green’s functions — the right inverse of a differential equation in calculus.

This human-machine collaboration is a step toward the day when deep learning will improve scientific exploration of natural phenomena such as weather systems, climate change, fluid dynamics, genetics, and others. The paper, “Data-Driven Discovery of Green’s Functions Using Human-Understandable Deep Learning,” was published in Scientific Reports, a Nature Portfolio journal.

Neural networks, a subset of machine learning, are inspired by the simple animal brain mechanism of neurons and synapses – inputs and outputs, according to Townsend. In computerized neural networks, neurons are referred to as “activation functions” because they collect input from other neurons. Synapses, also known as weights, connect neurons and send signals to the next neuron.

“By connecting these activation functions and weights in combination, you can create very complicated maps that take inputs to outputs, just like the brain might take a signal from the eye and turn it into an idea,” Townsend explained. “In particular, we are watching a system, a PDE, and trying to get it to estimate the Green’s function pattern that would predict what we are watching.”
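Townsend’s description of weights and activation functions composing into a map from inputs to outputs can be sketched in a few lines. This is a generic illustration, not code from the paper; the layer sizes, the random weights, and the choice of ReLU as the activation are all assumptions made for the example.

```python
import numpy as np

def relu(z):
    # Activation function: each "neuron" fires based on the input it collects.
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
# Weights ("synapses") connecting a 3-input layer to 4 hidden neurons,
# and the hidden layer to a single output neuron.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))

def forward(x):
    # Composing weights and activations yields a complicated map
    # from inputs to outputs, as in the quote above.
    return W2 @ relu(W1 @ x)

y = forward(np.array([0.5, -1.0, 2.0]))
```

Stacking more such layers, each a weight matrix followed by an activation, is what gives deep networks their expressive power.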

Green’s functions have been used by mathematicians for nearly 200 years, according to Townsend, who is an expert on them. He typically employs a Green’s function to solve a differential equation quickly. Earls proposed a reversal: using Green’s functions to understand a differential equation rather than to solve it.
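The classical use Townsend describes, solving a differential equation with a known Green’s function, can be shown with a textbook example (not one from the paper): for $-u''(x) = f(x)$ on $[0,1]$ with $u(0)=u(1)=0$, the Green’s function is $G(x,s) = x(1-s)$ for $x \le s$ and $s(1-x)$ otherwise, and the solution is the integral of $G(x,s)f(s)$.

```python
import numpy as np

def greens_poisson_1d(x, s):
    # Textbook Green's function for -u''(x) = f(x) on [0, 1]
    # with boundary conditions u(0) = u(1) = 0.
    return np.where(x <= s, x * (1.0 - s), s * (1.0 - x))

def solve(f, n=2001):
    # u(x) = integral over s of G(x, s) f(s), approximated on a uniform grid.
    s = np.linspace(0.0, 1.0, n)
    h = s[1] - s[0]
    def u(x):
        return np.sum(greens_poisson_1d(x, s) * f(s)) * h
    return u

# For the constant forcing f = 1, the exact solution is u(x) = x(1 - x)/2.
u = solve(lambda s: np.ones_like(s))
```

Once $G$ is known, any forcing $f$ is handled by the same integral, which is why Green’s functions are such an efficient solving tool.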

To accomplish this, the researchers developed a customized rational neural network with more complicated activation functions that can capture extreme physical behavior of Green’s functions. In a separate study published in 2021, Townsend and Boullé introduced rational neural networks.

“There are different types of neurons in the brain, just like there are different parts of the brain. They are not all alike,” Townsend explained. “In a neural network, that is equivalent to selecting the activation function – the input.”

Rational neural networks are potentially more flexible than standard neural networks because researchers can select various inputs.
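A rational activation function is a ratio of two polynomials whose coefficients can be trained along with the network’s weights. The sketch below uses illustrative, hand-picked coefficients (not the trained values from Boullé and Townsend’s work) just to show the shape of the idea.

```python
import numpy as np

def rational_activation(x, p, q):
    # A rational activation: ratio of two polynomials P(x) / Q(x).
    # Unlike a fixed ReLU, the coefficients p and q are trainable,
    # letting the activation adapt to the problem.
    return np.polyval(p, x) / np.polyval(q, x)

# Illustrative coefficients (assumed for this example, not from the paper):
# numerator x^3 + 0.5 x^2 + 0.2 x, denominator x^2 + 1 (never zero).
p = np.array([1.0, 0.5, 0.2, 0.0])
q = np.array([1.0, 0.0, 1.0])

y = rational_activation(np.linspace(-2.0, 2.0, 5), p, q)
```

Because rational functions can grow, decay, and develop sharp features that fixed activations cannot, they are better suited to capturing the extreme behavior of Green’s functions.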

“One of the important mathematical ideas here is that we can change that activation function to something that can actually capture what we expect from a Green’s function,” Townsend explained. “The machine learns the Green’s function for a natural system. It has no idea what it means and is unable to interpret it. However, now that we’ve learned something mathematically understandable, we can look at the Green’s function.”
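The learning task Townsend describes, recovering the Green’s function of a system from observations, can be illustrated in a drastically simplified form. The paper trains rational neural networks on forcing/response pairs; the sketch below replaces the network with plain least squares on a discretized operator, which conveys the same input-output idea under much stronger assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
s = np.linspace(0.0, 1.0, n)
h = s[1] - s[0]

# The "unknown" system: discretized Green's operator for -u'' = f on [0, 1].
X, S = np.meshgrid(s, s, indexing="ij")
G_true = np.where(X <= S, X * (1.0 - S), S * (1.0 - X))

# Observe many forcing/response pairs u = (G h) f, as if measured from nature.
F = rng.normal(size=(n, 200))
U = (G_true * h) @ F

# Recover the operator by least squares: find G_est with U = (G_est h) F.
G_est = np.linalg.lstsq(F.T, U.T, rcond=None)[0].T / h
```

Here the recovered matrix can be read directly as a picture of the Green’s function, echoing the paper’s point that the learned object is mathematically interpretable rather than a black box.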

Each system has its own physics, according to Townsend. He is excited about this research because it puts his expertise in Green’s functions to use in a modern setting with new applications.