
A Novel Approach to Solving the “Hardest of the Hard” Computer Issues


A relatively new type of computer that mimics the way the human brain works was already transforming how scientists tackled some of the most difficult information processing problems.

Researchers have now found a way to make reservoir computing work between 33 and a million times faster, with significantly fewer computing resources and far less data input needed.

In one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer. According to Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University, the same problem today requires a supercomputer and still takes much longer to solve, even with the state-of-the-art equipment now available.

“We can perform very complex information processing tasks in a fraction of the time using much less computer resources compared to what reservoir computing can currently do,” Gauthier said.

“And reservoir computing was already a significant improvement on what was previously possible.”

The study was published today (Sept. 21, 2021) in the journal Nature Communications.

For our next-generation reservoir computing, there is almost no warming time needed.

Daniel Gauthier

Reservoir computing, Gauthier said, is a machine learning technique developed to handle the "hardest of the hard" computing problems, such as forecasting the evolution of dynamical systems that change over time.

Dynamical systems, such as the weather, are difficult to predict because even a small change in one condition can have far-reaching consequences later, he explained.

The “butterfly effect” is a well-known example, in which changes in the weather caused by a butterfly flapping its wings may impact the weather weeks later.

According to Gauthier, previous research has proven that reservoir computing is well-suited for learning dynamical systems and can produce precise projections of how they will behave in the future.

It does this by using an artificial neural network that works somewhat like a human brain. Data on a dynamical system is fed into a "reservoir" of randomly connected artificial neurons in the network. The network produces useful output that scientists can interpret and feed back into the network, building an increasingly accurate forecast of how the system will evolve in the future.
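To make that description concrete, here is a minimal, illustrative sketch of a conventional reservoir computer (an echo state network) in Python. It is not the researchers' code; the reservoir size, the scaling factors and the ridge parameter are assumptions chosen only for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-dimensional signal to learn: predict u(t+1) from u(t).
t = np.linspace(0, 60, 3000)
u = np.sin(t) * np.cos(0.31 * t)           # input sequence
target = u[1:]                             # one-step-ahead targets

# Reservoir: a fixed, randomly connected network of artificial "neurons".
n_reservoir = 300                          # assumed size for this toy example
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, 1))
W = rng.normal(0, 1, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

# Drive the reservoir with the input and record its internal states.
states = np.zeros((len(u) - 1, n_reservoir))
x = np.zeros(n_reservoir)
for k in range(len(u) - 1):
    x = np.tanh(W @ x + W_in @ u[k:k+1])
    states[k] = x

# Only the linear readout is trained (ridge regression); the reservoir stays fixed.
warmup = 200                               # discard early states while the reservoir "warms up"
S, y = states[warmup:], target[warmup:]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_reservoir), S.T @ y)

pred = states @ W_out
print("one-step prediction error:", np.sqrt(np.mean((pred[warmup:] - y) ** 2)))
```

Note that the random connections inside the reservoir are never trained, which is why the reservoir behaves like a "black box": only the readout weights are fit to the data.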

The larger and more complex the system, and the more accurate the forecast needs to be, the larger the network of artificial neurons has to be and the more computing resources and time the task requires.

One problem, Gauthier said, is that the reservoir of artificial neurons is a "black box": scientists don't know exactly what goes on inside it, only that it works. At the same time, he noted, the artificial neural networks at the heart of reservoir computing are built on mathematics.

“We had mathematicians look at these networks and ask, ‘To what extent are all these pieces in the machinery really needed?’” he said.

Gauthier and his colleagues investigated that question and found that the whole reservoir computing system could be greatly simplified, dramatically reducing the computing resources needed and saving significant time. They tested their concept on a forecasting task involving the weather system developed by Edward Lorenz, whose work led to the discovery of the butterfly effect.
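For reference, the Lorenz system used as the benchmark is a set of three coupled differential equations. The short Python sketch below integrates them numerically; the parameter values sigma = 10, rho = 28 and beta = 8/3 are the standard "butterfly" settings, while the initial condition and solver choices are assumptions made only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 equations, the classic chaotic 'butterfly' system."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Integrate a trajectory; this is the kind of time series a reservoir
# computer would be trained to forecast.
t_eval = np.arange(0.0, 25.0, 0.025)
sol = solve_ivp(lorenz, (0.0, 25.0), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)

print(sol.y.shape)   # (3, number of time steps): x(t), y(t), z(t)
```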

Their next-generation reservoir computing beat today's state of the art on this Lorenz forecasting task. In a relatively simple simulation run on a desktop computer, the new approach was 33 to 163 times faster than the current model.

But when the goal was high forecasting accuracy, the next-generation reservoir computing was about a million times faster. And, according to Gauthier, the new-generation system achieved the same accuracy with the equivalent of just 28 neurons, compared with the 4,000 needed by the current-generation model.

One reason for the speedup is that the "brain" behind this next generation of reservoir computing needs much less warmup and training than the current version to produce the same results. Warmup is data that must be fed into the reservoir computer as input to prepare it for its actual task.

“For our next-generation reservoir computing, there is almost no warming time needed,” Gauthier said.

“Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points,” he said.

And once researchers are ready to train the reservoir computer to make the forecast, the next-generation system needs far less data.

In their Lorenz forecasting test, the researchers achieved the same results using 400 data points as the current generation did using 5,000 data points or more, depending on the accuracy desired.
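The published paper describes the simplified model as a kind of nonlinear vector autoregression: the forecast is built from a handful of time-delayed data points plus simple polynomial combinations of them, with only a linear readout to train. The Python sketch below illustrates that general idea; the choice of two delay taps, quadratic features and the ridge parameter are assumptions made for this illustration, not the authors' exact configuration.

```python
import numpy as np
from itertools import combinations_with_replacement
from scipy.integrate import solve_ivp

# Generate a Lorenz trajectory to forecast (same system as the earlier sketch).
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.arange(0.0, 25.0, 0.025)
data = solve_ivp(lorenz, (0.0, 25.0), [1.0, 1.0, 1.0], t_eval=t_eval,
                 rtol=1e-9, atol=1e-9).y.T           # shape (T, 3)

k = 2   # number of time-delay taps (assumed for illustration)

def features(data, k):
    """Constant term + k delayed copies of the state + their unique quadratic products."""
    rows = []
    for i in range(k - 1, len(data) - 1):
        lin = np.concatenate([data[i - j] for j in range(k)])
        quad = [lin[a] * lin[b]
                for a, b in combinations_with_replacement(range(len(lin)), 2)]
        rows.append(np.concatenate(([1.0], lin, quad)))
    return np.array(rows)

X = features(data, k)        # feature vectors at times k-1 .. T-2
Y = data[k:]                 # next-step states to predict

# Only a linear readout is trained (ridge regression); there is no large random
# reservoir, so only k - 1 initial points are needed before forecasting can start.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

print("feature count:", X.shape[1])                       # 28 for k = 2, three variables
print("one-step error:", np.sqrt(np.mean((X @ W_out - Y) ** 2)))
```

With two delays of a three-variable state, this construction happens to yield 28 features (one constant, six linear and 21 quadratic terms), which lines up with the 28 "neurons" quoted above, although that correspondence is inferred here rather than spelled out in the article.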

“What’s exciting is that this next generation of reservoir computing takes what was already very good and makes it significantly more efficient,” Gauthier said.

He and his colleagues plan to extend this work to even more difficult computing problems, such as forecasting fluid dynamics.

“That’s an incredibly challenging problem to solve. We want to see if we can speed up the process of solving that problem using our simplified model of reservoir computing.”

Co-authors on the study were Erik Bollt, professor of electrical and computer engineering at Clarkson University; Aaron Griffith, a postdoctoral researcher in physics at Ohio State; and Wendson Barbosa, a postdoctoral researcher in physics at Ohio State. The work was supported by the U.S. Air Force, the Army Research Office and the Defense Advanced Research Projects Agency.