What is a neural network?

A neural network is a type of machine learning modeled on the human brain: an artificial network of simple units that, guided by a learning algorithm, allows the computer to learn by incorporating new data.

Although there are many artificial intelligence algorithms in use today, neural networks are capable of performing what has been called deep learning. While the basic unit of the brain is the neuron, the essential element of an artificial neural network is the perceptron, which performs simple signal processing; these perceptrons are then connected into a large mesh-like network.
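To make that idea concrete, here is a minimal sketch in Python of a single perceptron (the weights and the AND-gate example are invented purely for illustration and come from no particular library or product): it weights its inputs, sums them, and fires only when the sum clears a threshold.

```python
# A minimal, illustrative perceptron: weight the inputs, sum them,
# and fire (output 1) only if the sum exceeds the threshold of 0.
import numpy as np

def perceptron(inputs, weights, bias):
    """Return 1 if the weighted sum of inputs plus bias is positive, else 0."""
    activation = np.dot(inputs, weights) + bias
    return 1 if activation > 0 else 0

# Example: hand-picked weights that make this perceptron behave like an AND gate.
weights = np.array([0.5, 0.5])
bias = -0.7
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), weights, bias))
```

A network gains its power when many of these units are wired together in layers, with the output of one layer feeding the inputs of the next.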

A computer with a neural network learns to perform a task by analyzing training examples that have been labeled in advance. A common deep learning task is object recognition: the network is presented with a large number of labeled images of a certain kind of object, such as a cat or a street sign, and by analyzing recurring patterns in those images it learns to classify new ones.

Table of Contents
  1. How Neural Networks Learn
  2. History of neural networks
  3. Real-world uses for neural networks

How Neural Networks Learn

Unlike other algorithms, a neural network doing deep learning cannot be programmed directly for its task. Rather, like a child's developing brain, it needs to learn the information. Learning follows one of three strategies:

  • Supervised learning: This is the simplest strategy. A labeled data set is provided, the computer passes through it, and the algorithm is adjusted until it can process the data set and produce the desired result (a small sketch of this loop follows the list).
  • Unsupervised learning: This strategy is used when no labeled data set is available. The neural network analyzes the data set, a cost function tells it how far off target it was, and the network is then tuned to increase the accuracy of the algorithm.
  • Reinforcement learning: In this strategy, the neural network is rewarded for positive results and penalized for negative ones, forcing it to learn over time.
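To illustrate the supervised case, the sketch below, in plain Python with a made-up toy data set, shows the core loop: pass each labeled example through the network, measure how far the prediction is from its label, and nudge the weights until the data set produces the desired result.

```python
# A minimal sketch of supervised learning: the weights are nudged after every
# labeled example until the predictions match the labels.
import numpy as np

# Toy labeled data set: inputs and their known answers (an OR gate).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

rng = np.random.default_rng(0)
weights = rng.normal(size=2)    # start with random weights
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    errors = 0
    for inputs, target in zip(X, y):
        prediction = 1 if np.dot(inputs, weights) + bias > 0 else 0
        error = target - prediction                 # how far off target we were
        weights += learning_rate * error * inputs   # adjust toward the answer
        bias += learning_rate * error
        errors += abs(error)
    if errors == 0:                                 # every labeled example is correct
        break

print("learned weights:", weights, "bias:", bias)
```

The unsupervised and reinforcement strategies replace the labels with, respectively, a cost function computed from the data itself and a reward signal, but the adjust-and-repeat structure of the loop is much the same.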

History of neural networks

While neural networks certainly represent a powerful modern computing technology, the idea dates back to 1943 and two University of Chicago researchers: Warren McCulloch, a neurophysiologist, and Walter Pitts, a mathematician.

Their paper, "A Logical Calculus of the Ideas Immanent in Nervous Activity," first published in the Bulletin of Mathematical Biophysics, advanced the theory that the firing of a neuron is the basic unit of brain activity. The paper, however, was more concerned with the cognitive theories of the time, and the two researchers moved to MIT in 1952 to help start the first department of cognitive science.

The 1950s were a fertile period for neural network research, which included the Perceptron, a network that achieved visual pattern recognition based on the compound eye of a fly. In 1959, two researchers at Stanford University developed MADALINE (Multiple ADAptive LINear Elements), a neural network that went beyond theory and addressed a real problem: it was applied to reducing the amount of echo on telephone lines to improve voice quality, and it was so successful that it remains in commercial use today.

Despite the initial enthusiasm for artificial neural networks, a notable book published in 1969 by MIT Press, Perceptrons: An Introduction to Computational Geometry, tempered it. The authors were skeptical of artificial neural networks and argued that the approach was likely a dead end in the quest for true artificial intelligence. The book considerably weakened this area of research throughout the 1970s, in both public interest and funding. Even so, some efforts continued, and in 1975 the first multi-layered network was developed, paving the way for further progress in neural networks, an achievement some had considered impossible less than a decade earlier.

In 1982, interest in neural networks was greatly renewed when John Hopfield, then at the California Institute of Technology, invented the associative neural network now known as the Hopfield network. Its innovation was that data could travel in both directions, where previous networks were only unidirectional. Since then, artificial neural networks have enjoyed great popularity and growth.


Real-world uses for neural networks

Handwriting recognition is an example of a real-world problem that can be addressed with an artificial neural network. Humans can recognize handwriting through simple intuition, but for computers the difficulty is that each person's handwriting is unique, with different styles and even different spacing between letters, making consistent recognition hard.

For example, a capital A can be described as three straight lines, two of which meet at a point at the top while the third crosses the other two about halfway down. That description makes sense to a human, but it is difficult to express as a computer algorithm.

With the artificial neural network approach, the computer is fed training examples of known handwritten characters that have been labeled in advance with the letter or number they correspond to. Through the algorithm, the computer learns to recognize each character, and as the character data set grows, so does the accuracy. Handwriting recognition has a variety of applications, as diverse as automatically reading addresses on postal mail, reducing check fraud at banks, and character input for pen computing.
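As a rough sketch of this approach (it uses scikit-learn's small built-in digits data set rather than real postal or banking data, and the network size is an arbitrary choice for illustration), the code below trains a multi-layer network on labeled character images and checks its accuracy on characters it has never seen.

```python
# Illustrative character recognition with labeled examples, using
# scikit-learn's built-in 8x8 handwritten-digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # images already labeled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small multi-layer network; more labeled data and larger layers
# generally improve accuracy, as noted above.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
```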

Another type of problem suited to an artificial neural network is forecasting financial markets. This is often called "algorithmic trading," and it has been applied to markets of all kinds: stocks, commodities, interest rates, and currencies. In the stock market, traders use neural network algorithms to find undervalued securities, improve existing stock models, and apply aspects of deep learning to optimize their algorithms as the market changes. There are now companies that specialize in neural network stock-trading algorithms, MJ Trading Systems for example.
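Purely as an illustration of how such a forecast can be framed (the "prices" below are synthetic random-walk data, and nothing here resembles a production trading system such as those mentioned above), the sketch treats forecasting as a supervised problem: predict the next value in a series from a window of past values.

```python
# Illustrative next-value forecasting on a synthetic price series.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100          # fake random-walk "prices"

window = 10
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]                                       # the value following each window

split = 400                                               # train on the past, test on the "future"
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

print("next-step prediction:", model.predict(X[split:split + 1])[0])
print("actual value:        ", y[split])
```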

Artificial neural network algorithms, with their inherent flexibility, continue to be applied to complex pattern-recognition and prediction problems. Beyond the examples above, applications include facial recognition in social media images, cancer detection in medical images, and business forecasting.
