A physical neural network is a type of artificial neural network in which the activity of individual artificial neurons is implemented by actual physical materials rather than simulated in software. Such networks typically use an electrically adjustable material to emulate the function of a neural synapse or, in some designs, a higher-order (dendritic) neuron model. These systems mirror the biophysical processes of the brain far more directly than conventional software networks, but they remain a specialized technology with limited adoption.
The term “physical” neural network emphasizes the reliance on physical hardware to emulate neurons, as opposed to software-based approaches. More generally, the term applies to any artificial neural network in which a memristor or other electrically adjustable resistance material is used to emulate a neural synapse.
Types of physical neural networks
Bernard Widrow and Ted Hoff created ADALINE (Adaptive Linear Neuron) in the 1960s, which used electrochemical cells called memistors (memory resistors) to mimic the synapses of an artificial neuron. The memistors were implemented as 3-terminal devices based on reversible electroplating of copper, with the resistance between two of the terminals controlled by the integral of the current applied via the third terminal.
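The defining behavior of the memistor is that the conductance between the two sense terminals tracks the integral of the current applied through the third, plating terminal. A toy model of that behavior can be sketched as follows; the class name, gain, and conductance limits are illustrative assumptions, not a calibrated device model.

```python
class Memistor:
    """Toy three-terminal memistor: conductance between the two sense
    terminals follows the integrated current on the plating terminal,
    mimicking reversible copper electroplating (parameters hypothetical)."""

    def __init__(self, conductance=1.0, gain=0.5, g_min=0.1, g_max=10.0):
        self.g = conductance        # sense-terminal conductance (arbitrary units)
        self.gain = gain            # plating efficiency (assumed constant)
        self.g_min, self.g_max = g_min, g_max

    def program(self, control_current, dt):
        # Conductance change is proportional to the charge (current * time)
        # driven through the plating terminal; clamp to physical limits.
        self.g += self.gain * control_current * dt
        self.g = max(self.g_min, min(self.g_max, self.g))

    def sense(self, voltage):
        # Ohmic read-out between the two sense terminals.
        return self.g * voltage

m = Memistor()
m.program(control_current=2.0, dt=1.0)   # plate copper: conductance rises
high = m.sense(1.0)
m.program(control_current=-2.0, dt=1.0)  # de-plate: conductance falls back
low = m.sense(1.0)
```

A positive control current raises the read-out conductance and a negative one lowers it, which is how ADALINE's weights were adjusted during training.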
The Memistor Corporation briefly commercialized the ADALINE circuitry in the 1960s, allowing for some pattern recognition applications. However, because the memistors were not fabricated using integrated circuit fabrication techniques, the technology was not scalable and was eventually abandoned as solid-state electronics matured.
Carver Mead’s 1989 book Analog VLSI and Neural Systems spawned perhaps the most common variant of analog neural networks, in which the physical realization is implemented in analog VLSI. This is frequently achieved with field-effect transistors operated in weak inversion, and such devices can be modeled using translinear circuits.
The translinear principle was described by Barrie Gilbert in several papers around the mid-1970s, most notably his 1975 paper “Translinear circuits: a proposed classification”. With this method, circuits can be analyzed in steady state as a set of well-defined functions, and such circuits can be assembled into complex networks.
Physical Neural Network
Alex Nugent defines a physical neural network as one or more nonlinear, neuron-like nodes that sum signals, together with nanoconnections, made of nanoparticles, nanowires, or nanotubes, that determine the strength of the signals reaching those nodes. The alignment or self-assembly of the nanoconnections is governed by the history of the applied electric field, performing a function analogous to that of neural synapses.
Such physical neural networks have numerous potential applications. A temporal summation device, for example, can be built from one or more nanoconnections between an input and an output, where a signal applied to the input causes the nanoconnections to grow stronger over time.
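The temporal summation behavior can be illustrated with a minimal sketch, assuming a single input feeding a node through a fixed set of nanoconnections whose strengths grow by a small, constant increment on each input pulse; the class name, connection count, and growth parameters are hypothetical.

```python
class TemporalSummationNode:
    """Illustrative temporal summation device: repeated input pulses
    strengthen the nanoconnections, so the node's response to the same
    signal grows over time (parameters are assumptions, not measurements)."""

    def __init__(self, n_connections=3, strength=0.1, growth=0.05, s_max=1.0):
        self.weights = [strength] * n_connections  # nanoconnection strengths
        self.growth = growth                       # strength gained per pulse
        self.s_max = s_max                         # saturation limit

    def pulse(self, signal):
        # Each pulse produces an output (the sum of the weighted copies of
        # the signal) and then strengthens every nanoconnection, analogous
        # to field-driven alignment of nanoparticles.
        out = sum(w * signal for w in self.weights)
        self.weights = [min(self.s_max, w + self.growth) for w in self.weights]
        return out

node = TemporalSummationNode()
first = node.pulse(1.0)                            # weak initial response
later = [node.pulse(1.0) for _ in range(10)][-1]   # strengthened response
```

Because every pulse increments the connection strengths, the same unit input yields a progressively larger output until the strengths saturate, which is the summation-over-time effect the device is meant to exhibit.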