History of AI and ML: How It Has Evolved Over Time
Artificial Intelligence (AI) and Machine Learning (ML) are today’s buzzwords. In an age of digitization, AI and ML are finding ever more applications, and freshers and experienced professionals alike are taking up AI and Machine Learning courses in cities such as Austin, New York, and London to enhance their career prospects. Though these may appear to be budding concepts, the history of AI and ML dates back to the previous millennium.
In the early 20th century, we were introduced to the idea of machines operated by artificial intelligence through the Tin Man in “The Wizard of Oz,” and there are myths of mechanical men in ancient Greek and Egyptian texts. In this article, we will trace the history of AI and ML and see how the field has evolved over time.
The Prenatal Phase of Artificial Intelligence and Machine Learning
The first work on artificial intelligence was done in 1943 by Warren McCulloch and Walter Pitts, who proposed a mathematical model of artificial neurons. In 1949, Donald Hebb proposed what is now called Hebbian learning: a rule for updating the strength of the connections between neurons. In 1950, a young British mathematician named Alan Turing explored the mathematical possibility of artificial intelligence and became a pioneer of machine learning. In his 1950 paper, “Computing Machinery and Intelligence,” he posed the question: if human beings can use the available information to solve problems and make decisions, why can’t machines? He also proposed a test, which came to be called the Turing Test, to check a machine’s ability to display intelligent behavior similar to a human being’s.
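Hebb’s idea is often summarized as “neurons that fire together wire together”: a connection between two neurons is strengthened in proportion to how active both are at the same time. As a rough sketch (the function name and learning rate here are illustrative, not from Hebb’s original formulation):

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """One Hebbian step: each weight w_i grows by lr * pre_i * post,
    so connections between simultaneously active neurons strengthen."""
    return [w + lr * x * post for w, x in zip(weights, pre)]

weights = [0.0, 0.0, 0.0]
pre = [1.0, 0.0, 1.0]   # activities of the input (pre-synaptic) neurons
post = 1.0              # activity of the output (post-synaptic) neuron

weights = hebbian_update(weights, pre, post)
print(weights)  # only the connections from the two active inputs grow
```

The inactive middle input contributes nothing, so its weight stays at zero, which captures the essence of Hebb’s rule.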
Even though Turing proposed a logical framework, it could not bear fruit immediately, because computers of the time lacked a key requirement for intelligence: they could only execute commands, not store them. For Turing’s proposal to show results, computers had to change fundamentally. Another obstacle was cost; computers were so expensive that only large companies and prestigious universities could afford to experiment with them.
Artificial Intelligence Is Born
In 1955, Allen Newell, Cliff Shaw, and Herbert A. Simon designed the first artificial intelligence program, called the “Logic Theorist,” which mirrored a human being’s problem-solving skills. The program, funded by the RAND Corporation, proved 38 of the first 52 theorems of Whitehead and Russell’s Principia Mathematica, and even found new, more elegant proofs for some of them.
In 1956, the program was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by the American computer scientists John McCarthy and Marvin Minsky. McCarthy coined the term “Artificial Intelligence” at the Dartmouth conference, and the subject entered the academic field. Around this time, high-level programming languages such as FORTRAN, LISP, and COBOL were invented, and these parallel advances aroused great enthusiasm for AI.
John McCarthy brought in top researchers from different fields, hoping for a collaborative effort and valuable input on artificial intelligence. His expectations fell short, as the attendees could not agree on standard methods of practice. What was agreed, however, was that AI was achievable, and the conference is now regarded as a landmark event in the history of AI.
The Golden Period of Artificial Intelligence
From the late 1950s until 1974, AI prospered. Computers, which earlier could only execute commands, could now store them as well. They became more accessible, faster, and more affordable, and there were improvements in designing algorithms and in understanding which algorithms suited which problems.
In 1966, Joseph Weizenbaum created the first chatbot, ELIZA. Weizenbaum’s ELIZA and Newell and Simon’s General Problem Solver demonstrated new possibilities in problem-solving and in the interpretation of natural language. Researchers emphasized developing algorithms to solve mathematical problems. The success and advocacy of this research persuaded leading government agencies, such as the Defense Advanced Research Projects Agency (DARPA), to invest in AI research at many institutions. In 1972, Japan built WABOT-1, the first intelligent humanoid robot.
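ELIZA worked by matching a user’s typed sentence against scripted patterns and echoing parts of it back with pronouns “reflected.” A toy version of that mechanism might look like the following (the rules here are invented for illustration; Weizenbaum’s actual DOCTOR script was far richer):

```python
import re

# Pattern -> response templates, tried in order; the last rule always matches.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please tell me more."),
]

# Swap first- and second-person words so echoed phrases read naturally.
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(phrase):
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.split())

def respond(sentence):
    s = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, s)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am sad."))          # -> How long have you been sad?
print(respond("I need my mother."))  # -> Why do you need your mother?
```

Despite having no understanding of language, this kind of scripted reflection was convincing enough that some of ELIZA’s users attributed real empathy to the program.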
The Winter Season
From 1974 to 1980, AI saw its first winter. Computer scientists faced a severe shortage of government funding to continue research in the field of AI, and public interest shrank accordingly.
Ups and Downs of AI
In 1980, AI returned to prominence with “expert systems,” programs designed to give computers the decision-making abilities of a human expert. In the same year, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University. Alongside expert systems, the neural-network techniques of John Hopfield and David Rumelhart, forerunners of today’s “deep learning,” enabled computers to learn from experience. Two developments brought AI back to the market in this decade: the growth of the algorithmic toolkit and an increase in funding.
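At their core, expert systems encoded a specialist’s knowledge as if-then rules and chained them together to reach conclusions. A minimal forward-chaining sketch conveys the idea (the medical rules below are invented for illustration, not drawn from any historical system):

```python
# Each rule: if all condition facts hold, conclude a new fact.
RULES = [
    ({"has_fever", "has_cough"}, "likely_flu"),
    ({"likely_flu"}, "recommend_rest"),
]

def infer(facts, rules):
    """Forward chaining: keep applying rules until no new facts emerge."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"has_fever", "has_cough"}, RULES)
print(sorted(result))  # includes the chained conclusions
```

Note how the second rule fires only because the first one added “likely_flu” to the fact base; that chaining of rules is what let expert systems mimic multi-step expert reasoning.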
From 1982 to 1990, the Japanese Government heavily invested in expert systems and other AI-related activities as part of its Fifth Generation Computer Project (FGCP). The Government provided $400 million to revolutionize computer processing, implement logic programming, and improve artificial intelligence. Unfortunately, most of these goals were not met. The project did, however, inspire a generation of talented engineers and scientists.
The period between 1987 and 1993 saw another winter. High costs and a lack of results stopped investors and governments from funding AI. Ironically, AI flourished in the 1990s and 2000s even without the requisite funding and popularity. In 1997, history was made when IBM’s chess program Deep Blue beat the world chess champion and grandmaster Garry Kasparov, a major achievement in the history of AI and ML. Other developments of the era included speech recognition software from Dragon Systems running on Windows, and Kismet, a robot developed by Cynthia Breazeal that could recognize and display human emotions.
The Rise of AI
In 2002, Roomba, an AI-enabled vacuum cleaner, entered homes. In 2006, businesses began implementing AI; today, companies such as Facebook, Twitter, Netflix, Amazon, and Google rely on it. In one famous incident, a woman taking an appointment call did not realize she was speaking with a virtual assistant powered by Google’s AI program “Duplex.”
Conclusion
Concepts such as deep learning, big data, and data science are booming today. We are on the cusp of the Fourth Industrial Revolution, which, according to Forbes, will bring exponential change quite unlike the previous three revolutions. Experts predict that over the next 20 years AI will improve the quality of human life: vehicles will be automated, and AI-enabled machines will handle customer interaction on a much larger scale. The history of AI and ML, which began with a question posed by Alan Turing, will keep being written in the years ahead.