A neural network is a series of algorithms that endeavor to recognize underlying relationships in a set of data through a process that mimics how the human brain operates.
In this sense, neural networks refer to systems of neurons, either organic or artificial in nature.
Neural networks can achieve higher computational rates than conventional computers because many of their operations are performed in parallel; that advantage disappears, however, when a neural network is merely simulated on a serial computer. The idea behind neural nets is based on the way the human brain works.
What was the first neural network?
MADALINE was the first neural network applied to a real-world problem: an adaptive filter that eliminates echoes on phone lines. Although the system is as old as air traffic control systems, like them it remains in commercial use.
What is the simplest neural network?
Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory, the perceptron is the simplest neural network possible: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output.
What are the types of neural networks?
- Artificial Neural Networks (ANN)
- Convolutional Neural Networks (CNN)
- Recurrent Neural Networks (RNN)
The Origin of Neural Networks
With the basics covered, let's dig into the history of neural networks.
The first step toward artificial neural networks came in 1943, when Warren McCulloch, a neurophysiologist, and a young mathematician, Walter Pitts, wrote a paper on how neurons might work. They modeled a simple neural network with electrical circuits. Donald Hebb reinforced this concept of neurons and how they work in his 1949 book The Organization of Behavior, which pointed out that neural pathways are strengthened each time they are used.
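Hebb's observation is often summarized as a simple update rule: a connection grows whenever its input and the neuron's output are active together. Here is a toy Python sketch of that idea; the learning rate and input patterns are made-up values for illustration only.

```python
import numpy as np

# Toy illustration of Hebb's principle: a connection strengthens
# whenever its input and the neuron's output are active together.
# The learning rate and patterns below are illustrative assumptions.
eta = 0.1                              # assumed learning rate
w = np.zeros(3)                        # three input "pathways"
patterns = [np.array([1.0, 0.0, 1.0]),
            np.array([1.0, 1.0, 0.0])]

for _ in range(10):                    # repeated use of the pathways
    for x in patterns:
        y = 1.0                        # suppose the neuron fires
        w += eta * y * x               # Hebbian update: co-active grows

print(w)  # the pathway used in both patterns carries the largest weight
```

Running this, the first input, which appears in both patterns, ends up with twice the weight of the others: the most-used pathway is the most strengthened.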
As computers advanced through their infancy in the 1950s, it became possible to begin modeling the rudiments of these theories of human thought. Nathaniel Rochester of the IBM research laboratories led the first effort to simulate a neural network; that first attempt failed, but later attempts were successful. During this time, traditional computing began to flower, and, as it did, the emphasis on computing left neural research in the background.
Yet, throughout this time, advocates of "thinking machines" continued to argue their cases. In 1956, the Dartmouth Summer Research Project on Artificial Intelligence provided a boost to both artificial intelligence and neural networks. One of its outcomes was to stimulate research both on the symbolic side, known throughout the industry as AI, and on the much lower-level neural processing that models the brain.
Around the same time, Frank Rosenblatt, a neurobiologist at Cornell, began work on the Perceptron. He was intrigued by the operation of the eye of a fly: much of the processing that tells a fly to flee is done in its eye. The Perceptron, which resulted from this research, was built in hardware and is the oldest neural network still in use today. A single-layer perceptron was useful for classifying a continuous-valued set of inputs into one of two classes. The Perceptron computes a weighted sum of its inputs, subtracts a threshold, and passes one of two possible values out as a result. Unfortunately, the Perceptron is limited, as Marvin Minsky and Seymour Papert proved during the "disillusioned years" in their 1969 book Perceptrons.
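To make that decision rule concrete, here is a minimal Python sketch of a single perceptron; the weights and threshold are made-up values chosen so the unit computes logical AND, not parameters from Rosenblatt's hardware.

```python
import numpy as np

# Minimal sketch of the perceptron decision rule described above:
# weighted sum of the inputs, minus a threshold, squashed to one of
# two possible output values. Weights and threshold are illustrative.
def perceptron(x, w, threshold):
    return 1 if np.dot(w, x) - threshold >= 0 else 0

w = np.array([1.0, 1.0])   # one weight per input
threshold = 1.5            # fires only when both inputs are on

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, threshold))
```

Minsky and Papert's point was that no choice of weights and threshold in such a single-layer unit can compute functions like XOR, however the parameters are tuned.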
In 1959, Bernard Widrow and Marcian Hoff of Stanford developed models they called ADALINE and MADALINE, named for their use of Multiple ADAptive LINear Elements. MADALINE was the first neural network to be applied to a real-world problem: an adaptive filter that eliminates echoes on phone lines, and it is still in commercial use. Unfortunately, these early successes caused people to exaggerate the potential of neural networks, particularly in light of the limitations of the electronics then available. This excessive hype, which flowed out of the academic and technical worlds, infected the general literature of the time.
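For the curious, here is a toy sketch of the Widrow-Hoff least-mean-squares (LMS) rule on which ADALINE was built, applied to echo cancellation in the spirit of MADALINE's phone-line filter. The signals, echo path, filter length, and step size are synthetic assumptions, not data or parameters from the original system.

```python
import numpy as np

# Toy LMS adaptive filter: learn an unknown "echo path" so that the
# filter's output can be subtracted from the line to cancel the echo.
rng = np.random.default_rng(0)
n_taps, mu = 8, 0.05                             # assumed length and step size

reference = rng.standard_normal(2000)            # stand-in far-end signal
echo_path = rng.standard_normal(n_taps) * 0.5    # unknown echo response
echo = np.convolve(reference, echo_path)[:len(reference)]

w = np.zeros(n_taps)                             # adaptive filter taps
for t in range(n_taps - 1, len(reference)):
    x = reference[t - n_taps + 1 : t + 1][::-1]  # recent reference samples
    e = echo[t] - w @ x                          # residual echo (error)
    w += mu * e * x                              # LMS weight update

# After adaptation the taps approximate the echo path, so subtracting
# the filter's output from the line removes the echo.
print(np.allclose(w, echo_path, atol=1e-3))
```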
Fear set in as writers began to ponder what effect "thinking machines" would have on humanity. This concern, combined with unfulfilled, outrageous claims, caused respected voices to critique neural network research. The result was a halt to much of the funding, and this period of stunted growth lasted through 1981.
In 1982, several events caused a renewed interest. John Hopfield of Caltech presented a paper to the National Academy of Sciences. Hopfield's approach was not simply to model brains but to create useful devices. With clarity and mathematical analysis, he showed how such networks could work and what they could do. Yet Hopfield's biggest asset was his charisma: he was articulate, likable, and a champion of a dormant technology.
At the same time, another event occurred: the US-Japan Joint Conference on Cooperative/Competitive Neural Networks was held in Kyoto, Japan.
Japan subsequently announced its Fifth Generation effort. US periodicals picked up that story, generating a worry that the US could be left behind. Soon funding was flowing once again.
By 1985, the American Institute of Physics had begun an annual meeting, Neural Networks for Computing. By 1987, the Institute of Electrical and Electronics Engineers' (IEEE) first International Conference on Neural Networks drew more than 1,800 attendees.
By 1989, at the Neural Networks for Defense meeting, Bernard Widrow told his audience that they were engaged in World War IV ("World War III never happened"), a war in which the battlefields are world trade and manufacturing.
Today, discussions of neural networks are occurring everywhere. Their promise seems very bright, as nature itself is proof that this kind of computation works. Yet their future, indeed the very key to the whole technology, lies in hardware development.
Currently, most neural network development is merely proving that the principle works. This research is developing neural networks that, due to processing limitations, take weeks to learn. To take these prototypes out of the lab and put them into use requires specialized chips.
Companies are working on three types of neuro chips – digital, analog, and optical. Some companies are working on creating a “silicon compiler” to generate a neural network Application Specific Integrated Circuit (ASIC). These ASICs and neuron-like digital chips appear to be the wave of the near future.
Ultimately, optical chips look very promising. Yet, it may be years before optical chips see the light of day in commercial applications.