What neuromorphic engineering is, and why it’s triggered an analog revolution

There are a number of types and styles of artificial intelligence, but there's a key difference between the branch of programming that looks for interesting solutions to pertinent problems and the branch of science seeking to model and simulate the functions of the human brain. Neuromorphic computing, which includes the production and use of neural networks, deals with testing the efficacy of any concept of how the brain performs its functions — not just reaching decisions, but memorizing information and even deducing facts.

Both literally and practically, “neuromorphic” means “taking the form of the brain.” The key word here is “form,” mainly because so much of AI research deals with simulating, or at least mimicking, the function of the brain. The engineering of a neuromorphic device involves the development of components whose functions are analogous to parts of the brain, or at least to what such parts are believed to do. These components are not brain-shaped, of course, yet like the valves of an artificial heart, they do fulfill the roles of their organic counterparts. Some architectures go so far as to model the brain’s perceived plasticity (its ability to modify its own form to suit its function) by provisioning new components based on the needs of the tasks they’re currently running.

Also: A neuromorphic, memory-centric, chip architecture

Close-up of “Trees and Undergrowth” by Vincent Van Gogh, 1887. Part of the collection of the Van Gogh Museum in Amsterdam. Photograph in the public domain.

The goals of neuromorphic engineering

While building such a device may inform us about how the mind works, or at least reveal certain ways in which it doesn’t, the actual goal of such an endeavor is to produce a mechanism that can “learn” from its inputs in ways that a digital computer component may not be able to. The payoff could be an entirely new class of machine capable of being “trained” to recognize patterns using far, far fewer inputs than a digital neural network would require.

“One of the most appealing attributes of these neural networks is their portability to low-power neuromorphic hardware,” reads a September 2018 IBM neuromorphic patent application [PDF], “which can be deployed in mobile devices and native sensors that can operate at extremely low power requirements in real-time. Neuromorphic computing demonstrates an unprecedented low-power computation substrate that can be used in many applications.”

Although Google has been a leader in recent years in both the research and production of hardware called tensor processing units (TPUs), dedicated specifically to neural network-based applications, the neuromorphic branch is an altogether different beast. Specifically, it's not about the evaluation of any set of data in terms of discrete numeric values, such as scales from 1 to 10, or percentage grades from 0 to 100. Its practitioners have a goal in mind other than to solve an equation, or simply to produce more software. They seek to produce a cognition machine — one that may lend credence to, if not altogether prove, a rational theory for how the human mind may work. They're not out to capture the king in six moves. They're in this to build mechanisms.

Also: The AI chip unicorn that’s about to revolutionize everything

Why bother experimenting with neuromorphic designs?

A neural network in computing is typically represented by a set of elements in memory — dubbed axons, after their counterparts in neurology — that become adjusted, or weighted, in response to a series of inputs. These weights are said to leave an impression, and it is this impression that (hopefully) a neural net can recall when asked to reveal the common elements among the inputs. If this impression can be treated as “learning,” then a small neural net may be trained to recognize letters of the alphabet after extensive training.
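
To make the idea of an “impression” concrete, here is a minimal sketch in Python of a single-layer perceptron learning to tell two made-up 3 x 3 pixel glyphs apart. The toy patterns, the learning rate, and the number of training passes are all illustrative assumptions, not a description of any particular product or study.

import numpy as np

# A single-layer perceptron "remembers" which of two toy 3x3 pixel
# glyphs it has seen by adjusting its weights -- the "impression."
rng = np.random.default_rng(0)

T = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0], dtype=float)  # a crude "T"
L = np.array([1, 0, 0, 1, 0, 0, 1, 1, 1], dtype=float)  # a crude "L"
samples = [(T, 1.0), (L, 0.0)]          # label 1 means "T", 0 means "L"

w = rng.normal(scale=0.1, size=9)       # randomly initialized weights
b = 0.0
lr = 0.5                                # learning rate (arbitrary)

for _ in range(20):                     # a short "training" run
    for x, target in samples:
        out = 1.0 if x @ w + b > 0 else 0.0
        err = target - out
        w += lr * err * x               # weights shift to fit the inputs
        b += lr * err

print("T classified as:", "T" if T @ w + b > 0 else "L")
print("L classified as:", "L" if L @ w + b <= 0 else "T")

The weights left behind after the loop are the “impression”: present either glyph again, and the net recalls which class it belonged to.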

Provisioning a neural network model in a purely digital environment requires a tremendous amount of data.  A cloud service provider is in a particularly advantageous position to capitalize on this requirement, especially if it can popularize the applications that make use of machine learning. It’s why Amazon and others are so excited these days about AI: As a category of task, it’s the biggest consumer of data.

Yet you may have noticed something about human beings: They’ve become rather adept with just the brains they have, without the use of fiber optic links to cloud service providers. For some reason, brains are evidently capable of learning more, without the raw overhead of binary storage. In a perfect world, a neural net system should be capable of learning just what an application needs to know about the contents of a video, for example, without having to store each frame of the video in high resolution.

Conceivably, while a neuromorphic computer would be built on a fairly complex engine, once mass-produced, it could become a surprisingly simple machine. We can’t exactly grow brains in jars yet. But if we have a plausible theory of what constitutes cognition, we can synthesize a system that abides by the rules of that theory, perhaps producing better results using less energy and requiring an order of magnitude less memory.

As research began in 2012 toward constructing working neuromorphic models, a team of researchers that included members of the California NanoSystems Institute at UCLA wrote the following [PDF]:

Although the activity of individual neurons occurs orders of magnitude slower (ms) than the clock speeds of modern microprocessors (ns), the human brain can greatly outperform CMOS computers in a variety of tasks such as image recognition, especially in extracting semantic content from limited or distorted information, when images are presented at drastically reduced resolutions. These capabilities are thought to be the result of both serial and parallel interactions across a hierarchy of brain regions in a complex, recurrent network, where connections between neurons often lead to feedback loops.

Also: Neuton: A new, disruptive neural network framework for AI

Self-synthesis

A truly neuromorphic device, its practitioners explain, would include components that are physically self-assembling. Specifically, they would involve atomic switches whose magnetic junctions would portray the role of synapses, or the connections between neurons. Devices that include these switches would behave as though they were originally engineered for the tasks they’re executing, rather than as general-purpose computers taking their instructions from electronic programs.

Such a device would not necessarily be tasked with AI applications to have practical use. Imagine a set of robot controllers on a factory floor, for instance, whose chips could realign their own switches whenever they sensed alterations in the assemblies of the components the robots are building. The Internet of Things is supposed to solve the problem of remote devices needing new instructions for evolved tasks, but if those devices were neuromorphic by design, they might not need the IoT at all.

Neuromorphic engineers have pointed out a deficiency in general computer chip design that we rarely take time to consider: As Moore’s Law compelled chip designers to cram more transistors onto circuits, the number of interconnections between those transistors multiplied over and over again. From an engineering standpoint, the efficiency of all the wire used in those interconnections degraded with each chip generation. Long ago, we stopped being able to communicate with all the logic gates on a CPU during a single clock cycle.

Had chip designs been neuromorphic one or two decades ago, we would not have needed to double the number of transistors on a chip every 12 to 18 months to attain the performance gains we’ve seen — which were growing smaller and smaller anyway. If you consider each interconnection as a kind of “virtual synapse,” and if each synapse were rendered atomically, chips could adapt themselves to best service their programs.

Also: How IoT might transform four industries this year

Examples of neuromorphic engineering projects

Today, there are several academic and commercial experiments under way to produce working, reproducible neuromorphic models, including:

The SpiNNaker supercomputer at the University of Manchester.
  • SpiNNaker [pictured above] is a low-grade supercomputer developed by engineers with the Institute of Neuroscience and Medicine at Germany’s Jülich Research Centre, working with the UK’s Advanced Processor Technologies Group at the University of Manchester. Its job is to simulate the functions of so-called cortical microcircuits, albeit on a slower time scale than such circuits would presumably operate if physically manufactured. In August 2018, SpiNNaker conducted what is believed to be the largest neural network simulation to date, involving about 80,000 neurons connected by some 300 million synapses.
Then-Intel CEO Brian Krzanich with a Loihi chip.
  • Intel is experimenting with what it describes as a neuromorphic chip architecture, called Loihi (lo · EE · hee). Intel has been reluctant to share images that would reveal elements of Loihi’s architecture, though based on what information we do have, Loihi would be producible using a form of the same 14 nm lithography techniques Intel and others employ today. Loihi was first announced in September 2017 and officially premiered the following January at CES 2018 by then-CEO Brian Krzanich; its microcode includes statements designed specifically for training a neural net. It’s designed to implement a spiking neural network (SNN), whose model adds more brain-like characteristics.
  • IBM maintains a Neuromorphic Devices and Architectures Project involved with new experiments in analog computation. In a research paper, the IBM team demonstrated how its non-volatile phase-change memory (PCM) accelerated the feedback or backpropagation algorithm associated with neural nets. These researchers are now at work determining whether PCM can be utilized in modeling synthetic synapses, replacing the static RAM-based arrays used in its earlier TrueNorth and NeuroGrid designs (which were not neuromorphic).

Also: Why Intel built a neuromorphic chip

It thinks, therefore it is

Some of the most important neuromorphic research began in 2002, in response to a suggestion by engineers with Italy’s Fiat. They wanted a system that could respond to a driver falling asleep at the wheel. Prof. James K. Gimzewski of UCLA’s California NanoSystems Institute (CNSI) responded by investigating whether an atomic switch could be triggered by the memory state of the driver’s brain. Here is where Gimzewski began his search for a link between nanotechnology and neurology — looking, for instance, into the measured differences in electric potential between signals recorded by the brain’s short-term memory and those recorded by long-term memory.

Shining a light on that link from a very high altitude is UC Berkeley Prof. Walter Freeman, who in recent years has speculated about the relationship between the density of the fabric of the cerebral cortex, and no less than consciousness itself — the biological process through which an organism can confidently assert that it’s alive and thinking. Freeman calls this thick fabric within the neocortex that forms the organ of consciousness the neuropil, and while Gimzewski’s design has a far smaller scale, he’s unafraid to borrow that concept for its synthetic counterpart.

In 2014, Gimzewski premiered his research, showing photographs of a grid of copper posts at near-micron scale that had been treated with a silver nitrate solution. Once exposed to gaseous sulfur, the silver atoms form nanowires from point to point on the grid — wires which behave, at least well enough, like synapses. According to Gimzewski:

“We found that when we changed the dimension of the copper posts, we could move… to more nanowire structures, and it was due to the fact that we can avoid some instabilities that occur on the larger scale. So we’re able to make these very nice nanowire structures. Here you can see, you can have very long ones and short ones. And using this process of bottom-up fabrication, using silicon technology, [as opposed to] top-down fabrication using CMOS process… we can then generate these structures… It’s ideal, and each one of these has a synthetic synapse.”

The CNSI team’s fabrication process is capable, Gimzewski claims, of depositing 1 billion synaptic interconnects per square centimeter. (In March 2017, Intel announced it had managed to cram 100 million transistors into each square millimeter of a CPU die.)

Also: South Korea to invest over $620 million in nanotech R&D

Why neuromorphic engineering requires a new class of machine

If you’ve ever played chess against an app, you have toyed with one of the earliest and most basic forms of AI: the decision tree. Liberal use of the word “decision” ends up making this branch sound way too grandiose; in practice, it’s extremely simple, and it has zero to do with the shape or form of the brain.

Diagram of a minimax decision tree by Nuno Nogueira.  Released through Wikimedia Commons.

Essentially, a decision tree algorithm applies numeric values to assessed possibilities. A chess program evaluates all the possibilities it can find, for moves and counter-moves and counter-counter-moves well into the future, and chooses the move with the best assessed value. One chess program may distinguish itself from the others through the value points it attributes to the exposure or capture of an important piece, or the closure of an important line of defense.
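
To show how little machinery that actually involves, here is a minimal sketch of the minimax idea in Python. The three-move game tree and its leaf values are invented for illustration; a real chess engine would generate legal moves and score board positions rather than read them from a list.

# Minimal sketch of minimax over a made-up game tree. A node is either a
# numeric leaf value or a list of child nodes to descend into.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is one of "our" candidate moves; its leaves are the
# assessed values of the opponent's possible replies.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))  # 3: the best worst-case move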

The ability to condense these evaluations into automated response patterns may constitute what some would call, at least rudimentarily, a “strategy.” Adapting that strategy to changing circumstances is what many researchers would call learning. Google’s DeepMind unit is one example of a research project that applies purely mathematical logic to the task of machine learning, including one example that models responses to multiple patients’ heart and lung behavior.

Also: Google’s AI surfs the “gamescape” to conquer game theory

The downside of determinism

Here is where everything sheds its digital-age garb and takes on a more physical, tactile, Jules Verne-esque ambiance: We have a tendency to define a “computer” as necessarily digital, and everything that performs a computing task as a digital component. As neuromorphic science extends itself into the architecture of computing hardware, such as processors, scientists have already realized there are elements of neuromorphology that, like quantum computing, cannot be modeled digitally. Randomness, along with otherwise inexplicable phenomena, is part of their modus operandi.

The one key behavior that essentially disqualifies a digital computer from mimicking an organic or a subatomic entity is the very factor that makes it so dependable in accounting: Its determinism. The whole point of a digital computer program is to determine how the machine must function, given a specific set of inputs. All digital computer programs are either deterministic or they’re faulty.

The brain of a human being, or any animal thus far studied, is not a deterministic organism. We know that neurons are the principal components of memory, though we don’t really know how experiences and sensory inputs map themselves to these neurons. Scientists have observed, though, that the functions through which neurons “fire” (display peak electrical charges) are probabilistic. That’s to say, the likelihood that any one neuron will fire, given the same inputs (assuming they could be re-created), is a value less than 100 percent.
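
The contrast can be sketched in a few lines of Python. The firing-probability curve below is an arbitrary assumption, chosen only to illustrate the difference between a gate that always answers the same way and a neuron that fires with some likelihood below 100 percent.

import math
import random

def deterministic_fire(stimulus, threshold=1.0):
    return stimulus >= threshold            # same input, same answer, every time

def probabilistic_fire(stimulus, rng):
    p = 1.0 / (1.0 + math.exp(-(stimulus - 1.0)))   # toy firing probability
    return rng.random() < p                 # same input, varying answer

rng = random.Random()
trials = [probabilistic_fire(1.2, rng) for _ in range(1000)]
print("deterministic:", deterministic_fire(1.2))
print("probabilistic firing rate: %.2f" % (sum(trials) / len(trials)))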

Neither neurologists nor programmers are entirely certain why a human brain can so easily learn to recognize letters rendered in so many different typefaces, or to recognize words spoken by voices so distinct from one another. If you think of the brain as though it were a muscle, it could be stress that improves and strengthens it. Indeed, intellect (the phenomenon arising from the mind’s ability to reason) could be interpreted as a product of the brain adapting to the information its senses infer from the world around it, by literally building new physical material to accommodate it. Neurologists refer to this adaptation as plasticity, and this is one of the phenomena that neuromorphic engineers are hoping to simulate.

Diagram by Alish Dipani, released through Intel Developer Mesh.

A spiking neural network (SNN) is one way to accomplish this. In the biological brain, each neuron is connected to a variety of inputs. Some inputs produce excitation in the neuron, while others inhibit it — like the positive and negative weights in an artificial neural net. But with an SNN, upon reaching a certain threshold state described by a variable (or perhaps by a function), the neuron’s state spikes, a term that refers literally to its electrical output. The purpose of an SNN model is to draw inferences from these spikes — to see whether an image or a data pattern triggers a memory.
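
One common way to express that behavior in code is a leaky integrate-and-fire neuron, sketched below in Python. The threshold, leak rate, and input values are arbitrary illustrations rather than parameters from any production SNN.

# A leaky integrate-and-fire neuron: excitatory inputs push the membrane
# potential up, inhibitory inputs pull it down, the potential leaks toward
# zero each step, and a spike is emitted when the threshold is crossed.
def simulate(inputs, threshold=1.0, leak=0.9, reset=0.0):
    potential = 0.0
    spikes = []
    for drive in inputs:                  # one summed input per time step
        potential = leak * potential + drive
        if potential >= threshold:
            spikes.append(1)              # the neuron spikes
            potential = reset             # and its potential resets
        else:
            spikes.append(0)
    return spikes

# A mix of excitatory (+) and inhibitory (-) drive over ten time steps.
drive = [0.3, 0.4, 0.5, -0.2, 0.6, 0.6, 0.1, -0.4, 0.7, 0.7]
print(simulate(drive))                    # [0, 0, 1, 0, 0, 0, 0, 0, 1, 0]

An SNN model then draws its inferences from the timing and pattern of those spikes, rather than from a continuous output value.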

Also: This million-core supercomputer is inspired by the human brain

What makes a neuromorphic chip more analog?

One school of thought argues that, even if a sequence of numbers is not truly random, so long as the device drawing inferences from that data has no way of knowing the difference, it won’t matter anyway. All neural network models developed for deterministic systems operate under this presumption.

Neural network illustration by Scott Fulton III.

The counter-argument is this: When a neural network is initialized, its “weights” (the determinants of the axons’ values) must be randomized. To the extent that one “random” pattern can turn out to be similar or even identical to another, that similarity must be counted as a bias, and that bias reflects negatively on any final result.
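
A few lines of Python make the point. A “random” initialization produced by a pseudorandom generator repeats exactly whenever its seed repeats, which is precisely the kind of hidden regularity the argument above treats as bias. The NumPy calls are standard; the vector size and seed values are arbitrary.

import numpy as np

# Two "randomly" initialized weight vectors are identical whenever the
# pseudorandom generator starts from the same seed.
w1 = np.random.default_rng(seed=42).normal(size=8)
w2 = np.random.default_rng(seed=42).normal(size=8)
w3 = np.random.default_rng(seed=7).normal(size=8)

print(np.array_equal(w1, w2))   # True  -- same seed, same "random" weights
print(np.array_equal(w1, w3))   # False -- different seed, different weights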

Also: New year, new networks as researchers optimize AI

Why real randomness matters

There’s also this: Electromechanical components may be capable of introducing the non-deterministic elements that cannot be simulated within a purely digital environment, even when we put blinders on. Researchers at Purdue University are experimenting with magnetic tunnel junctions (MTJ) — two ferromagnetic layers sandwiching a magnesium oxide barrier. An electric current can tease a magnetic charge into jumping through the barrier between layers. Such a jump may be analogous to a spike.

An MTJ exhibits a behavior that’s reminiscent of a transistor, teasing electrons across a gap. In this case, the MTJ enables a division of labor where the receiving ferromagnetic layer plays the role of the axon, and the tunnel in between portrays a synapse.

The resulting relationship is genuinely mechanical, where the behavior of charges may be described, just like real neurons, using probability. So any errors that result from an inference process involving MTJs, or components like them, will not be attributable to bias that can’t be helped due to determinism, but instead to bugs that may be corrected with the proper diligence. For the entire process to be reliable, the initialized values maintained by neurons must be truly randomized.
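
As a rough illustration (and only that), here is a Python sketch of a binary synapse that flips its state with a probability that rises with the applied current. The switching law is a toy assumption made up for demonstration, not the Purdue team’s model of an MTJ.

import math
import random

class StochasticSynapse:
    """A two-state junction that switches probabilistically, not deterministically."""
    def __init__(self, rng):
        self.state = 0          # 0 or 1, like the junction's two resistance states
        self.rng = rng

    def apply_current(self, current):
        p_switch = 1.0 - math.exp(-max(current, 0.0))   # toy switching probability
        if self.rng.random() < p_switch:
            self.state = 1 - self.state                 # probabilistic flip
        return self.state

rng = random.Random()
synapse = StochasticSynapse(rng)
states = [synapse.apply_current(0.5) for _ in range(10)]
print(states)   # identical pulses, yet a different trajectory on every run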

Also: Artificial intelligence has a probability problem

The case against neuromorphic

Of course, neurologists and biotechnicians downplay any neuromorphic computing model as stopping well short of simulating real brain activity. Some go so far as to say that, to the extent that the components of a neuromorphic system are incomplete, any model of computing it produces is entirely a fantasy.

Dr. Gerard Marx, CEO of Jerusalem-based research firm MX Biotech Ltd., suggests that the prevailing view of the brain as a kind of infinite tropical rain forest, where trees of neurons swing from synapses in the open breeze, is a load of hogwash. Missing from any such model, Marx points out, is a substance called the neural extracellular matrix (nECM), which is not a gelatinous, neutral sea but rather an active agent in the brain’s recall process.

Marx postulates that memory in the biological brain requires neurons, the nECM, plus a variety of dopants such as neurotransmitters (NT) released into the nECM. Electrochemical processes take place among these three elements, and the resulting chemical reactions have not only been recorded, but are perceived as closely aligned with emotion. The physiological effects associated with recalling a memory (e.g., raised blood pressure, heavier breathing) trigger psychic effects (excitement, fear, anxiety, joy), which in turn have a reinforcing effect on the memory itself. Writes Marx with his colleague Chaim Gilon [PDF]:

We find ourselves in the inverse position of the boy who cried: “The emperor has no clothes!” as we exclaim: “There are no ‘naked neurons’!” They are swaddled in nECM, which is multi-functional, as it provides structural support and is a hydrogel through which liquids and small molecules diffuse. It also performs as a “memory material,” as outlined by the tripartite mechanism which identifies NTs as encoders of emotions.

This is not to say neuromorphic computing can’t yield benefits. But if the theory is that it will yield greater benefits without taking all the other parts of the brain into account, then Marx’s stance is, its practitioners should stop pretending to be brain surgeons.

Also: Memory aid: Virtual Reality may soon help you cram for a test

When neuromorphic potential could spike

At any one time in history, there is a theoretical limit to the processing power of a supercomputer — a point after which increasing the workload yields no more, or no better, results. That limit has been shoved forward in fits and starts by advances in microprocessors, including by the introduction of GPUs (formerly just graphics processors) and Google’s design for TPUs. But there may be a limit to the limit’s extension, as Moore’s Law only works when physics gives you room to scale smaller.

Neuromorphic engineering points to the possibility, if not yet probability, of a massive leap forward in performance, by way of a radical alteration of what it means to infer information from data. Like quantum computing, it relies upon a force of nature we don’t yet comprehend: In this case, the informational power of noise. If all the research pays off, supercomputers as we perceive them today may be rendered entirely obsolete in a few short years, replaced by servers with synthetic, self-assembling neurons that can be tucked into hallway closets, freeing up the space consumed by mega-scale data centers for, say, solar power generators.

Previous and related coverage:

What is deep learning? Everything you need to know

The lowdown on deep learning: from how it relates to the wider field of machine learning through to how to get started with it.

What is AI? Everything you need to know 

An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

What a quantum computer is, and why it needs to be more 

It would be the harbinger of an entirely new medium of calculation, harnessing the inexplicable powers of subatomic particles to obliterate the barriers of time in solving incalculable problems. Your part in making it happen may simply be to convince yourself that black is white and up is down.

What is the IoT? Everything you need to know

The Internet of Things explained. What the IoT is, and where it’s going next.
