What is a neural network?

A neural network is a type of machine learning model inspired by the human brain: an artificial network of simple units that, through a learning algorithm, lets a computer improve by incorporating new data. Although many artificial intelligence algorithms exist today, neural networks are capable of performing what has come to be called deep learning. Where the basic unit of the brain is the neuron, the essential building block of an artificial neural network is the perceptron, which performs simple signal processing; perceptrons are then connected into a large mesh network. A computer with a neural network learns to perform a task by analyzing training examples that have been labeled in advance. A common deep learning task is object recognition: the network is presented with a large number of labeled images of a certain type of object, such as a cat or a street sign, and by analyzing the recurring patterns in those images it learns to classify new images.
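To make the perceptron concrete, here is a minimal sketch, a hypothetical toy example rather than any production implementation. A single perceptron learns the logical OR function from pre-labeled examples using the classic perceptron learning rule; the function names (`predict`, `train`) are invented for this sketch.

```python
def predict(weights, bias, inputs):
    """Fire (return 1) when the weighted sum of inputs crosses zero."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= 0 else 0

def train(examples, epochs=20, lr=0.1):
    """Perceptron rule: nudge weights toward each mislabeled example."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, label in examples:
            error = label - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Pre-labeled training examples: (inputs, expected output) for logical OR.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train(examples)
print([predict(weights, bias, x) for x, _ in examples])  # -> [0, 1, 1, 1]
```

Real networks connect many such units in layers, but the core loop, predict, compare against the label, adjust, is the same idea.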

How Neural Networks Learn

Unlike other algorithms, a neural network performing deep learning cannot be directly programmed for its task. Instead, like a child's developing brain, it must learn the information. Learning strategies generally fall into three methods: supervised learning, unsupervised learning, and reinforcement learning.

History of neural networks

While neural networks certainly represent powerful modern computing technology, the idea dates back to 1943 and two researchers in Chicago: Warren McCulloch, a neurophysiologist, and Walter Pitts, a mathematician. Their paper, "A Logical Calculus of the Ideas Immanent in Nervous Activity," published in the Bulletin of Mathematical Biophysics, popularized the theory that the firing of a neuron is the basic unit of brain activity. However, the paper had more to do with the cognitive theories developing at the time, and the two researchers moved to MIT in 1952 to start the first department of cognitive science.

The 1950s were a fertile period for research into computer neural networks, including the Perceptron, which achieved visual pattern recognition based on the compound eye of a fly. In 1959, two Stanford University researchers developed MADALINE (Multiple ADAptive LINear Elements), the first neural network to go beyond theory and address a real problem. MADALINE was applied to reducing echo on telephone lines to improve voice quality, and it was so successful that it remains in commercial use today.

Despite the initial enthusiasm for artificial neural networks, a notable 1969 book by MIT's Marvin Minsky and Seymour Papert, Perceptrons: An Introduction to Computational Geometry, dampened it. The authors expressed skepticism about artificial neural networks, arguing that the approach was likely a dead end in the search for true artificial intelligence. The book considerably weakened this area of research throughout the 1970s, in both public interest and funding. Even so, some efforts continued, and in 1975 the first multilayer network was developed, paving the way for further progress in neural networks, an achievement some had considered impossible less than a decade earlier.
In 1982, interest in neural networks was strongly renewed when John Hopfield, a professor at Princeton University, invented the associative neural network, now known as the Hopfield network after its inventor. The innovation was that data could travel bidirectionally, where previously it flowed in only one direction. Since then, artificial neural networks have enjoyed enormous popularity and growth.

Real-world uses for neural networks

Handwriting recognition is an example of a real-world problem that can be addressed through an artificial neural network. Humans can recognize handwriting through simple intuition, but for a computer the challenge is that each person's handwriting is unique, with different styles and even different spacing between letters, making consistent recognition difficult. For example, a capital A can be described as three straight lines: two meeting at a point at the top and a third crossing the other two in the middle. That description makes sense to a human but is challenging to express as a conventional computer algorithm. With the artificial neural network approach, the computer is instead fed training examples of known handwritten characters, each tagged in advance with the letter or number it represents, and through the learning algorithm the computer learns to recognize each character; as the character data set grows, so does the accuracy. Handwriting recognition has diverse applications, from automatically reading addresses on postal service letters and reducing bank check fraud to converting pen strokes into text in pen computing.

Another type of problem suited to an artificial neural network is forecasting financial markets. Often called "algorithmic trading," this has been applied to all kinds of financial markets: stocks, commodities, interest rates, and various currencies. In the stock market, for example, traders use neural network algorithms to find undervalued stocks, improve existing stock models, and use aspects of deep learning to adapt their algorithms as the market changes. There are now companies specializing in neural network stock trading algorithms, for example, MJ Trading Systems.
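The handwriting-recognition workflow described above, training on labeled character examples and then classifying new input, can be sketched in miniature with a single perceptron on tiny 3x3 bitmaps. The letters, pixel grids, and function names here are invented toy stand-ins, not a real recognition system.

```python
# Toy "handwriting" recognition: distinguish a 3x3 bitmap 'T' from an 'L'
# with one perceptron trained on pre-labeled examples (hypothetical
# stand-ins for real character images).

LETTER_T = [1, 1, 1,
            0, 1, 0,
            0, 1, 0]   # label 1
LETTER_L = [1, 0, 0,
            1, 0, 0,
            1, 1, 1]   # label 0

def classify(weights, bias, pixels):
    """Return 1 ('T') if the weighted pixel sum crosses zero, else 0 ('L')."""
    return 1 if bias + sum(w * p for w, p in zip(weights, pixels)) >= 0 else 0

def fit(examples, epochs=10, lr=0.2):
    """Perceptron rule: nudge weights toward each misclassified example."""
    weights, bias = [0.0] * 9, 0.0
    for _ in range(epochs):
        for pixels, label in examples:
            error = label - classify(weights, bias, pixels)
            weights = [w + lr * error * p for w, p in zip(weights, pixels)]
            bias += lr * error
    return weights, bias

weights, bias = fit([(LETTER_T, 1), (LETTER_L, 0)])

# A 'T' with one corrupted pixel (bottom-right flipped on), standing in for
# the natural variation in human handwriting, is still recognized.
noisy_t = [1, 1, 1,
           0, 1, 0,
           0, 1, 1]
print(classify(weights, bias, noisy_t))  # -> 1 (classified as 'T')
```

A real system would use many output classes, far larger images, and multiple layers, but the principle, learning from tagged examples rather than hand-coded rules, is exactly what the paragraph above describes.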
Artificial neural network algorithms, with their inherent flexibility, continue to be applied to complex pattern recognition and prediction problems. Beyond the examples above, applications include facial recognition in social media images, cancer detection in medical imaging, and business forecasting.