Council of Europe: History of Artificial Intelligence


Artificial intelligence (AI) is a young discipline of some sixty years: a set of sciences, theories and techniques (including mathematical logic, statistics, probability, computational neurobiology and computer science) that aims to imitate the cognitive abilities of a human being. Initiated in the wake of the Second World War, its development is intimately linked to that of computing and has led computers to perform increasingly complex tasks that could previously only be delegated to a human.

However, this automation remains far from human intelligence in the strict sense, which makes the name open to criticism from some experts. The ultimate stage of research (a “strong” AI, i.e. one able to contextualize very different specialized problems in a fully autonomous way) is not comparable to current achievements (“weak” or “moderate” AIs, extremely efficient within their training domain). “Strong” AI, which has so far materialized only in science fiction, would require advances in basic research (not just performance improvements) to be able to model the world as a whole.

Since 2010, however, the discipline has experienced a new boom, mainly due to the considerable improvement in the computing power of computers and access to massive quantities of data.

Renewed promises on one side, and sometimes fanciful fears on the other, complicate an objective understanding of the phenomenon. A brief historical reminder can help situate the discipline and inform current debates.

1940-1960: Birth of AI in the wake of cybernetics

The period between 1940 and 1960 was strongly marked by the conjunction of technological developments (of which the Second World War was an accelerator) and the desire to understand how to bring together the functioning of machines and organic beings. For Norbert Wiener, a pioneer of cybernetics, the aim was to unify mathematical theory, electronics and automation into “a whole theory of control and communication, both in animals and machines”. Shortly before, in 1943, Warren McCulloch and Walter Pitts had developed the first mathematical and computer model of the biological neuron (the formal neuron).
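The McCulloch-Pitts formal neuron can be sketched in a few lines: a unit that fires (outputs 1) when the weighted sum of its binary inputs reaches a fixed threshold. The weights and threshold below are illustrative values chosen for the example, not figures from the 1943 paper.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold unit: outputs 1 if the weighted sum of
    binary inputs reaches the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the unit computes logical AND.
print(mcculloch_pitts([1, 1], [1, 1], 2))  # fires: 1
print(mcculloch_pitts([1, 0], [1, 1], 2))  # silent: 0
```

By choosing other weights and thresholds, the same unit computes OR or NOT, which is why such formal neurons were seen as building blocks for logic in the brain.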

At the beginning of the 1950s, John von Neumann and Alan Turing did not coin the term AI, but they were the founding fathers of the technology behind it: they made the transition from computers based on 19th-century decimal logic (which dealt with values from 0 to 9) to machines based on binary logic (which relies on Boolean algebra and handles chains of 0s and 1s of arbitrary length). The two researchers thus formalized the architecture of our contemporary computers and demonstrated that it was a universal machine, capable of executing whatever is programmed. Turing, for his part, first raised the question of the possible intelligence of a machine in his famous 1950 article “Computing Machinery and Intelligence”, where he described an “imitation game” in which a human should try to distinguish, in a teletype dialogue, whether he is talking to a man or a machine. However controversial this article may be (many experts do not consider this “Turing test” a valid measure of intelligence), it is often cited as the source of the questioning of the boundary between human and machine.

The term “AI” can be attributed to John McCarthy of MIT (Massachusetts Institute of Technology), and Marvin Minsky defined it as “the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning”. The summer 1956 conference at Dartmouth College (funded by the Rockefeller Institute) is considered the founding event of the discipline. Anecdotally, it is worth noting the great success of what was not a conference but rather a workshop: only six people, including McCarthy and Minsky, remained consistently present throughout this work (which relied essentially on developments based on formal logic).

While the technology remained fascinating and promising (see, for example, the 1963 article by Reed C. Lawlor, a member of the California Bar, entitled “What Computers Can Do: Analysis and Prediction of Judicial Decisions”), its popularity fell back in the early 1960s. The machines had very little memory, making it difficult to use a computer language. However, some foundations still present today were already in place, such as solution trees for solving problems: IPL (Information Processing Language) had made it possible, as early as 1956, to write the Logic Theorist, a program that aimed to prove mathematical theorems.

Herbert Simon, economist and sociologist, prophesied in 1957 that AI would succeed in beating a human at chess within the next 10 years, but AI then entered its first winter. Simon’s vision proved right… 40 years later.

1980-1990: Expert systems

In 1968 Stanley Kubrick directed the film “2001: A Space Odyssey”, in which a computer, HAL 9000 (each of its letters just one step away from those of IBM), embodies the whole sum of ethical questions posed by AI: will a highly sophisticated machine be a good for humanity or a danger? The impact of the film was naturally not scientific, but it contributed to popularizing the theme, just as science fiction author Philip K. Dick never ceased to wonder whether, one day, machines would experience emotions.

It was with the advent of the first microprocessors at the end of the 1970s that AI took off again and entered the golden age of expert systems.

The path had actually been opened at MIT in 1965 with DENDRAL (an expert system specialized in molecular chemistry) and at Stanford University in 1972 with MYCIN (a system specialized in diagnosing blood diseases and prescribing drugs). These systems were based on an “inference engine”, programmed to be a logical mirror of human reasoning: when data were entered, the engine provided answers of a high level of expertise.
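The core of such an inference engine can be sketched as forward chaining: the engine repeatedly applies hand-coded if-then rules to the known facts until no new conclusion can be derived. The rules below are invented for illustration and are not real medical knowledge from MYCIN.

```python
def forward_chain(facts, rules):
    """Toy forward-chaining inference engine: apply if-then rules
    to the fact base until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)  # rule fires, conclusion becomes a fact
                changed = True
    return facts

# Hypothetical rules: (conditions, conclusion)
rules = [
    ({"fever", "infection"}, "recommend_antibiotic"),
    ({"high_white_cell_count"}, "infection"),
]
derived = forward_chain({"fever", "high_white_cell_count"}, rules)
print("recommend_antibiotic" in derived)  # True
```

The “black box” effect mentioned below becomes clear in this sketch: with a few hundred interacting rules, tracing which chain of firings produced a given conclusion quickly becomes impractical.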

The promises foresaw massive development, but the craze fell off again at the end of the 1980s and the beginning of the 1990s. Programming such knowledge actually required a great deal of effort, and beyond 200 to 300 rules a “black box” effect appeared: it was no longer clear how the machine reasoned. Development and maintenance thus became extremely problematic and, above all, other less complex and less expensive approaches proved faster. It should be recalled that in the 1990s the term “artificial intelligence” had almost become taboo, and more modest variations, such as “advanced computing”, had even entered university language.

The victory in May 1997 of Deep Blue (IBM’s expert system) over Garry Kasparov at chess fulfilled Herbert Simon’s 1957 prophecy 40 years later, but it did not support the financing and development of this form of AI. The operation of Deep Blue was based on a systematic brute-force algorithm in which all possible moves were evaluated and weighted. The defeat of the human remained highly symbolic in history, but Deep Blue had in reality only managed to handle a very limited perimeter (that of the rules of chess), very far from the capacity to model the complexity of the world.
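Deep Blue’s engine was vastly more sophisticated, but the core idea of systematically evaluating and weighting every move can be sketched as plain minimax search over a game tree. The tiny tree below is an abstract toy, not a chess position: leaves carry static evaluation scores, and the two players alternately maximize and minimize.

```python
def minimax(node, maximizing):
    """Exhaustively search a game tree given as nested lists;
    numeric leaves are static evaluation scores."""
    if isinstance(node, (int, float)):  # leaf: return its evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies: the maximizing player picks the branch whose worst
# reply (chosen by the minimizing opponent) is best.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3: the first branch guarantees at least 3
```

The combinatorial explosion is also visible here: the tree grows exponentially with depth, which is why this approach worked for chess but, as noted below for Go, cannot scale to games with a vastly larger search space.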

Since 2010: a new boom based on massive data and new computing power

Two factors explain the new boom in the discipline around 2010.

  • First of all, access to massive volumes of data. To train algorithms for image classification and cat recognition, for example, it was previously necessary to collect samples yourself. Today, a simple search on Google can find millions.
  • Then, the discovery of the very high efficiency of graphics card processors (GPUs) in accelerating the computation of learning algorithms. Since the process is highly iterative, it could take weeks before 2010 to process an entire sample. The computing power of these cards (capable of more than a thousand billion operations per second) has enabled considerable progress at a limited financial cost (less than 1,000 euros per card).

This new technological equipment enabled some significant public successes and boosted funding: in 2011, Watson, IBM’s AI, won against two champions of the quiz show Jeopardy!. In 2012, Google X (Google’s research lab) succeeded in having an AI recognize cats in a video. More than 16,000 processors were used for this task, but the potential was extraordinary: a machine had learned to distinguish something. In 2016, AlphaGo (Google’s AI specialized in the game of Go) beat the European champion (Fan Hui) and the world champion (Lee Sedol), and was itself then beaten by its successor, AlphaGo Zero. Note that the game of Go has a far larger combinatorial space than chess (more possible positions than there are particles in the universe) and that such results cannot be obtained by raw force alone (as with Deep Blue in 1997).

Where did this miracle come from? From a complete paradigm shift away from expert systems. The approach became inductive: it was no longer a question of coding rules as for expert systems, but of letting computers discover them alone by correlation and classification, on the basis of a massive amount of data.
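The inductive turn can be illustrated with one of the simplest possible learners, a nearest-neighbour classifier: no rule is ever written down, the labelled examples themselves act as the model, and new cases are classified by similarity. The data points below are invented for illustration.

```python
def nearest_neighbour(query, examples):
    """Classify `query` with the label of the closest labelled
    example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(query, ex[0]))[1]

# Each example is (feature vector, label); features are made up.
examples = [((1.0, 1.0), "cat"), ((8.0, 9.0), "dog"), ((1.5, 0.5), "cat")]
print(nearest_neighbour((2.0, 1.0), examples))  # cat
```

Unlike an expert system, improving this classifier requires no new rules, only more labelled data, which is exactly why massive datasets and GPU computing power triggered the post-2010 boom.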

Among machine learning techniques, deep learning seems the most promising for a number of applications (including voice and image recognition). In 2003, Geoffrey Hinton (University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (New York University) decided to start a research program to bring neural networks up to date. Experiments conducted simultaneously at Microsoft, Google and IBM with the help of Hinton’s Toronto laboratory showed that this type of learning succeeded in halving the error rates for speech recognition. Similar results were achieved by Hinton’s image recognition team.

Overnight, a large majority of research teams turned to this technology, with indisputable benefits. This type of learning has also enabled considerable progress in text recognition but, according to experts such as Yann LeCun, there is still a long way to go before we can produce text-understanding systems. Conversational agents illustrate this challenge well: our smartphones already know how to transcribe an instruction but cannot fully contextualize it or analyze our intentions.

Source: https://www.coe.int/en/web


Robert Williams

