What Is a Neural Network?

Learn about what a neural network is, what it does and how it works.

What Is a Neural Network?

As artificial intelligence (AI) has become a hot topic of conversation in information technology (IT) circles—and really in all manner of social circles—most of us have talked about the possibilities. Beyond a vague sense that algorithms and other advanced mathematics are probably involved, however, most people would likely struggle to express how AI does what it does. Many people might struggle just to express what AI actually does. Does AI actually think?

The Definition and Explanation of Neural Networks

That's a complicated question with a lot of moving parts. When we say "artificial intelligence," however, there are three overarching concepts that, together, largely define what we're talking about. The first two, machine learning and deep learning, are both processes and outcomes:

  • Machine learning is the broad goal AI researchers are working toward. Generally speaking, machine learning occurs when a computer gathers data and, on its own, develops a means of analyzing that data.
  • Deep learning is the type of machine learning that most researchers try to facilitate. Generally speaking, deep learning occurs when a computer analyzes multiple facets of the same problem across successive layers of processing.

If machine learning and deep learning are the goal, then a neural network is the mechanism, a software construct that links individual decision-making units called neurons. Broadly speaking, a neural network enables learning.

A Brief History of Neural Networks

Artificial neural networks are intended to mimic the function of the human brain. It took a long time for medical science to advance to the point at which researchers could actually study brain function, and most of the big breakthroughs have come in the relatively recent past. As a result, for almost as long as scientists have understood how the brain produces thought, they have also been studying the possibility of thinking machines.

The work of late 19th-century scientists like Alexander Bain, William James, Charles Sherrington and Santiago Ramón y Cajal forms the basis of what we know about brain function today. Early research suggested that thought and memory are driven by highly specialized brain cells linked to the human nervous system.

Today, we know that these specialized cells, called neurons, are connected in vast networks and pass signals via tiny electrical impulses.

Generally speaking, there are three types of neurons in the brain:

  • Sensory neurons respond to stimuli provided by our senses: Sight, sound, smell, taste and touch
  • Motor neurons govern the function of muscles, organs and glands
  • Interneurons facilitate connection between neurons

The three types of neurons interact via chemical secretion of molecules called neurotransmitters. Over time, neurotransmission — facilitated by synapses — creates pathways that permit the storage of information, an end result that is believed to be the foundation of memory. Memory, of course, is essential to learning and to independent thought.

Only about a half-century after most of this was first theorized, computer science pioneers were already attempting to apply it to mechanical constructs. The idea of creating artificial neurons was first suggested by neurophysiologist Warren McCulloch and logician Walter Pitts in 1943. Five years later, computer science titan Alan Turing wrote a paper that proposed a cooperative cluster of artificial neurons.

Components of a Neural Network

There are many different types of neural networks, but most of them have similar components. Most neural networks are software-based, and that's where we'll focus most of our discussion. There are a handful of types of physical neural networks, however, that are hardware-based and often employ various forms of nanotechnology.

For the most part, however, when you read or hear about a "neural network," what's being discussed is essentially an advanced computer program. As such, most neural networks, though they are intended to mimic the function of the human brain, have no more form or substance than any other kind of software.

When we talk about the parts of a neural network, the temptation is to picture something physical, like LEGO blocks that snap together. For the most part, however, we're essentially talking about lines of computer code that execute various commands and interact in various ways, including:

  • Neurons: Artificial neurons, also called nodes, are the basic building blocks of a neural network. A neuron receives information, weighs it and decides whether to pass a signal along to the next neuron (see the short code sketch after this list).
  • Inputs: An input is the information that is relayed (or not) from one neuron to the next.
  • Weights: A weight is a numeric value attached to a connection between neurons. It determines how strongly one neuron's output influences the next, reflecting the importance of that connection, and it is adjusted over time.
  • Biases: Biases are similar to weights, except that a bias is added to a neuron's combined inputs rather than applied to any single connection, shifting the point at which the neuron activates. The numeric value of a bias is often set randomly at first and adjusted over time.
  • Outputs: An output is the end result of a neural network's analysis. It represents the decision of the neural network regarding whatever inputs have been fed into it. (Each individual neuron's output is produced by its activation function, which converts the neuron's weighted inputs into the signal it passes along.)
  • Layers: Neural networks are typically arrayed in layers, with an input layer and an output layer separated by a variable number of hidden layers. The number of hidden layers represents a tradeoff: More hidden layers can improve accuracy on complex problems, but they also increase the overall number of computations, driving up power consumption and expense.
  • Connections: In a typical fully connected network, every neuron in a given layer is connected to every neuron in the adjacent layers. Over time, some connections come to carry more influence than others.
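To make these pieces a little more concrete, here is a minimal sketch of a single artificial neuron in Python. The function name, the specific numbers and the simple step activation are illustrative assumptions, not part of any particular library; the point is only to show inputs, weights, a bias and an output working together.

```python
# A minimal, illustrative artificial neuron: it combines inputs with weights,
# adds a bias and applies a simple activation rule to decide what to pass on.

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias term.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A basic step activation: "fire" (1.0) only if the total crosses zero.
    return 1.0 if total > 0 else 0.0

# Hypothetical values chosen purely for illustration.
print(neuron(inputs=[0.5, 0.8], weights=[0.9, -0.2], bias=0.1))  # prints 1.0
```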

How Do Neural Networks Work?

In a nutshell, various types of information feed into the input layer of a neural network. Information is relayed from one neuron to the next. Calculations made at each layer of the network, in each of its individual neurons, determine what information is forwarded to subsequent layers. Eventually, the network produces an output that represents its analysis of the initial information.
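As a rough illustration of that flow, the sketch below pushes two input values through one hidden layer and one output neuron. The layer sizes, weights and sigmoid activation are assumptions made only for this example; a real network would have far more neurons and learned, rather than hand-picked, values.

```python
import math

# A toy forward pass: input layer -> hidden layer -> output layer.
# All numbers here are made up purely for illustration.

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron computes a weighted sum of every input, adds its bias,
    # and passes the result through the activation function.
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

inputs = [0.2, 0.7]                                             # input layer
hidden = layer(inputs, [[0.4, -0.6], [0.3, 0.8]], [0.0, -0.1])  # hidden layer (2 neurons)
output = layer(hidden, [[1.2, -0.9]], [0.05])                   # output layer (1 neuron)
print(output)  # a single number between 0 and 1: the network's "decision"
```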

More advanced neural networks are capable of cycling information through hidden layers multiple times. As a neural network functions over time, various weights and biases are determined and stored. These weights and biases guide subsequent requests for analysis of information, and generally represent the network's ability to learn and improve its output.
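The article doesn't spell out how those weights and biases get determined. One common approach, offered here only as an assumed example, is gradient descent: nudge each value in whichever direction shrinks the gap between the network's output and the desired answer. The toy below adjusts a single weight for a one-input linear neuron.

```python
# Illustrative gradient descent on a single weight (all values are made up).
x, target = 0.5, 1.0          # one input and the answer we want the neuron to produce
w, learning_rate = 0.2, 0.1   # an arbitrary starting weight and step size

for step in range(50):
    prediction = w * x               # the neuron's current output
    error = prediction - target     # how far off it is
    gradient = 2 * error * x        # slope of the squared error with respect to w
    w -= learning_rate * gradient   # adjust the weight to shrink the error

print(w, w * x)  # the weight has moved so that w * x is now close to the target
```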

Types of Neural Networks

There are many different categories of neural networks, and many of these categories have subcategories. Some of the more common types are as follows:

  • Feedforward: In a feedforward neural network, information goes in, passes through and comes out. The first neural networks were feedforward networks. Types of feedforward neural networks include autoencoder networks, probabilistic networks, time delay networks, convolutional networks and deep stacking networks.
  • Recurrent: Recurrent neural networks (RNNs) can send information both forward and backward, cycling data back to earlier layers of the network (a rough sketch of one recurrent step follows this list). Types of RNNs include Hopfield networks, Boltzmann machines, self-organizing maps, learning vector quantization (LVQ) networks, simple recurrent networks, echo state networks, long short-term memory (LSTM) networks, bidirectional recurrent neural networks (BRNN) and stochastic neural networks.
  • Memory: A memory neural network incorporates long-term memory to produce predictive outputs. Types of memory neural networks include one-shot associative memory networks, hierarchical temporal memory networks, holographic associative memory networks, neural Turing machines and pointer networks.
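The difference between the first two categories can be shown in a few lines. In the feedforward sketch earlier, information flows straight through; in the recurrent sketch below, the previous step's hidden state is fed back in alongside each new input. The weights and the tanh activation are, again, illustrative assumptions only.

```python
import math

# One recurrent step: the hidden state from the previous step is combined
# with the new input, so earlier information influences later outputs.

def recurrent_step(x, previous_state, w_input=0.6, w_state=0.4, bias=0.0):
    return math.tanh(w_input * x + w_state * previous_state + bias)

state = 0.0
for x in [0.1, 0.5, 0.9]:             # a short sequence of inputs
    state = recurrent_step(x, state)  # the state carries information forward
    print(state)
```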

Other categories of neural networks include deep belief networks (DBNs), radial basis function (RBF) networks, regulatory feedback networks, modular neural networks and dynamic neural networks. Neural networks are the focus of a large and growing field of scientific inquiry.

Applications of Neural Networks

There are many and varied uses for neural networks, and neural networks are better at some tasks than others. For example, neural networks that process and analyze images are fairly advanced. This is particularly valuable in the medical field, where advanced imaging technology produces complex and detailed scans that can be studied and processed faster — and with greater accuracy — using a neural network.

NASA uses neural networks to study activity in distant corners of the galaxy. The massive gas giant planet WASP-12b circles a star nearly 1,400 light years from Earth, in an orbit so tight that the star's gravity is literally tearing the planet apart. Neural network analysis of images relayed by the Hubble Space Telescope allowed researchers to vastly improve their knowledge of the molecular composition of WASP-12b's atmosphere.

Future of Neural Networks

In the movie Her, a low-level chatbot roughly akin to Siri becomes, in a matter of months, an evolved super consciousness whose godlike intelligence transports it to a plane of existence not even comprehensible by its human owner. There are dozens, perhaps hundreds, of films, TV shows and books that envision similarly grandiose outcomes.

If that sort of thing were ever to happen in real life, neural networks would at least be the seed from which mechanical consciousness, the ultimate expression of AI, would theoretically grow.

Neural networks as we currently know them, on the other hand, are somewhat analogous to what astronaut Neil Armstrong famously said about setting foot on the Moon. They are both a giant leap forward in terms of computing technology and, at least for now, one small step toward the future of thinking machines. Most researchers agree that machines with human consciousness and emotions are a far-future ambition.

Want to learn more about artificial intelligence? Get the answers you need by joining the conversation in CompTIA’s AI Technology Interest Group.
