The System Mixes Light and Electrons to Provide Quicker, More Environmentally Friendly Computing


Computing is at a crossroads. Moore’s Law, which predicts that the number of transistors on a chip will double every two years, is slowing because of the physical limits of packing more transistors onto affordable microchips. Just as gains in computing power are tapering off, demand is growing for high-performance computers that can support increasingly complex artificial intelligence models.

This slowdown has prompted engineers to explore new ways of increasing their machines’ computational capacity, but a solution remains elusive.

Photonic computing is one potential solution to machine-learning models’ increasing processing demands. Instead of transistors and wires, these devices use photons (microscopic light particles) to conduct analog computation processes.

Lasers generate these tiny bundles of energy, which travel at the speed of light, much like a spaceship in a science-fiction film. When photonic computing cores are combined with programmable accelerators such as a network interface card (NIC) or its enhanced counterpart, the SmartNIC, the resulting hardware can be plugged into a regular computer to turbocharge it.


MIT researchers have demonstrated photonics’ ability to accelerate modern computing by showcasing its capabilities in machine learning.

Their photonic-electronic reconfigurable SmartNIC, dubbed “Lightning,” enables deep neural networks—machine-learning models that mimic how brains process information—to execute inference tasks like picture recognition and sentence production in chatbots like ChatGPT. The innovative design of the prototype enables exceptional speeds, resulting in the first photonic computing system capable of serving real-time machine-learning inference demands.

This month, the researchers will report their findings at the Association for Computing Machinery’s Special Interest Group on Data Communication (SIGCOMM). The abstract was published in the ACM SIGCOMM 2023 Conference Proceedings.

Despite their potential, photonic computing devices are difficult to implement because they are passive: unlike their electronic counterparts, they lack the memory and instructions needed to regulate dataflows. Previous photonic computing systems ran into this bottleneck; Lightning eliminates it to ensure seamless data movement between electronic and photonic components.

“Photonic computing has demonstrated significant advantages in accelerating bulky linear computation tasks such as matrix multiplication, while the rest is handled by electronics: memory access, nonlinear computations, and conditional logics. This generates a significant amount of data that must be exchanged between photonics and electronics to complete real-world computing tasks, such as a machine-learning inference request,” says Zhizhen Zhong, a postdoc in the group of MIT Associate Professor Manya Ghobadi at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

“The Achilles’ heel of previous state-of-the-art photonic computing works was controlling this dataflow between photonics and electronics. Even if you have a super-fast photonic computer, you need enough data to keep it running without stalling. Otherwise, you’ll have a supercomputer that’s simply sitting there doing nothing.”
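The division of labor Zhong describes can be illustrated with a minimal, purely illustrative sketch in Python. Here NumPy stands in for the optical hardware; the function names (`photonic_matmul`, `electronic_nonlinearity`) are hypothetical and are not drawn from the Lightning paper. The point is the round trip: each layer's linear algebra crosses over to the "photonic" side and its result crosses back to the electronic side for the nonlinear step.

```python
import numpy as np

def photonic_matmul(weights, activations):
    """Stand-in for the photonic core: in a system like Lightning, this
    linear step would be carried out by light rather than by the CPU."""
    return weights @ activations

def electronic_nonlinearity(x):
    """Nonlinear step (here, ReLU), handled by conventional electronics."""
    return np.maximum(x, 0.0)

def hybrid_layer(weights, activations):
    # Data crosses the electronic/photonic boundary twice per layer:
    # once into the optical matrix multiply, once back out of it.
    # Managing that exchange is exactly the dataflow problem described above.
    return electronic_nonlinearity(photonic_matmul(weights, activations))

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
y = hybrid_layer(W, x)
print(y.shape)  # (4,)
```

Even in this toy version, every layer of a deep network repeats the exchange, which is why keeping data flowing across the boundary, rather than the raw speed of either side, becomes the limiting factor.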

Ghobadi, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a CSAIL member, and her colleagues are the first to identify and resolve this problem. To accomplish this feat, they combined the speed of photonics with the dataflow-management capabilities of electronic computers.

Prior to Lightning, photonic and electronic computing schemes spoke, in effect, separate languages. The team’s hybrid system records the required computation operations on the datapath using a reconfigurable count-action abstraction that connects the photonics to the computer’s electronic circuitry.

This programming abstraction serves as a unified language between the two, managing access to the dataflows passing through. Information carried by electrons is converted into light in the form of photons, which work at light speed to help complete an inference task. The photons are then converted back into electrons to relay the information to the computer.

The new count-action abstraction enables Lightning’s fast real-time computing frequency by seamlessly linking photonics and electronics. Previous attempts used a stop-and-go approach, in which data was held up by much slower control software that made all the movement decisions.
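The idea behind count-action can be sketched in a few lines of Python. This is a hypothetical illustration, not the paper’s implementation: in Lightning the count-action rules live in the hardware datapath, whereas here a small class simply counts events and fires a pre-programmed action when a threshold is reached, so that no slow control software has to be consulted per decision.

```python
class CountAction:
    """Hypothetical sketch of a count-action rule: count events on a
    datapath and fire a pre-programmed action when the count hits a
    threshold. The decision is baked in ahead of time, so there is no
    stop-and-go round trip to control software at runtime."""

    def __init__(self, threshold, action):
        self.threshold = threshold
        self.action = action
        self.count = 0

    def on_event(self):
        self.count += 1
        if self.count == self.threshold:
            self.action()
            self.count = 0  # reconfigurable: rearm for the next batch

# Example: after every 8 arriving data words, hand a batch to the
# (simulated) photonic core. 24 events should trigger 3 dispatches.
log = []
rule = CountAction(threshold=8, action=lambda: log.append("dispatch"))
for _ in range(24):
    rule.on_event()
print(log)
```

The contrast with the stop-and-go approach described above is that `on_event` never waits on an external controller; every movement decision was encoded when the rule was installed.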

“Building a photonic computing system without a count-action programming abstraction is like trying to steer a Lamborghini without knowing how to drive,” adds senior author Ghobadi.

“How would you react? You’re presumably holding the driving manual in one hand, then pressing the clutch, checking the manual, letting off the brake, checking the manual, and so on. This is a stop-and-go operation because you have to consult some higher-level authority for every decision.

“But that’s not how we drive; we learn to drive and then use muscle memory behind the wheel, without consulting the manual or the driving rules. Our count-action programming abstraction is Lightning’s muscle memory. It seamlessly drives the system’s electrons and photons at runtime.”

An environmentally friendly solution

Machine-learning services that complete inference-based tasks, such as ChatGPT and BERT, currently require significant computing power. They are not only expensive (some estimates put ChatGPT’s running costs at $3 million per month) but also environmentally harmful, potentially emitting more than double the average person’s carbon dioxide. Lightning uses photons, which move faster than electrons in wires while generating less heat, allowing it to compute at a higher frequency with less energy.

After synthesizing a Lightning chip, Ghobadi’s group compared their device against standard graphics processing units, data processing units, SmartNICs, and other accelerators, and found that Lightning consumed less energy when serving inference requests.

“Our synthesis and simulation studies show that Lightning reduces machine-learning inference power consumption by orders of magnitude compared to state-of-the-art accelerators,” says Mingran Yang, a graduate student in Ghobadi’s lab and a co-author of the paper. As a more cost-effective and faster option, Lightning offers data centers a viable upgrade for reducing their machine-learning models’ carbon footprint while speeding up inference response times for users.