See also: [[A.I. Artificial Intelligence]], [[A.I. Artificial Intelligence (film)]], [[AI]]
Artificial intelligence (a term coined by John McCarthy), often abbreviated AI, is defined by one of its founders, Marvin Lee Minsky, as “the construction of computer programs that engage in tasks which are, for the moment, performed more satisfactorily by human beings because they demand high-level mental processes such as perceptual learning, the organization of memory and critical reasoning”.
A difficult definition

For the cognitive sciences, any robot has a certain degree of “intelligence” from the moment it is able to adapt to its environment and to solve problems. The limits are however difficult to establish. Thus, according to some computer scientists, a chess-playing program possesses an intelligence, but one limited to a very restricted environment. For other computer scientists, that environment is so poor that the qualifier “intelligent” cannot be granted to such programs.
There exist various definitions of artificial intelligence, because:
- the adjective “artificial” is easy enough to understand: this type of intelligence is the result of a process created by Man, rather than of a natural, biological and evolutionary process;
- the notion of “intelligence”, on the other hand, is difficult to pin down:
- the capacity to acquire and retain knowledge, to learn or understand through experience;
- the use of reasoning to solve problems and to respond quickly and appropriately to a new situation, etc. See What is intelligence?
The problems raised by artificial intelligence touch on various fields, such as:
- engineering, in particular for the construction of robots;
- the sciences of human cognition (cognitive neuroscience, cognitive psychology, etc.);
- the philosophy of mind, for the questions associated with knowledge and consciousness.
There is, on the other hand, a consensus on the distinction between:
- weak artificial intelligence
- strong artificial intelligence
Strong artificial intelligence

The theory of strong artificial intelligence holds that it is possible to create a machine capable not only of simulating intelligent behavior, but of having an understanding of its own reasoning, experiencing self-awareness, and having feelings.
Definition

The notion of strong artificial intelligence refers to a machine capable not only of producing intelligent behavior, but of experiencing an impression of genuine self-awareness, “real feelings” (whatever one may put behind those words), and “an understanding of its own reasoning”.
Strong artificial intelligence has served as an engine for the discipline, but has also caused many debates. Starting from the observation that consciousness has a biological, hence material, substrate, most scientists see no obstacle in principle to creating, one day, a conscious intelligence on a material substrate other than a biological one. According to proponents of strong AI, if at present no computer or robot is as intelligent as a human being, this is not a problem of tool but of design. There would be no functional limit (a computer is a universal Turing machine whose only limits are those of computability), only limits tied to the human aptitude to conceive the appropriate program.
Estimate of feasibility

This very rough estimate is mainly intended to indicate the orders of magnitude involved.
A typical computer of 1970 carried out 10⁷ logical operations per second, and thus stood, geometrically, roughly midway between a Roberval balance (1 logical operation per second) and the human brain (roughly 2 × 10¹⁴ logical operations per second, since it is made up of 2 × 10¹² neurons, each of which can switch no more than 100 times per second).
In 2005, a typical microprocessor processes 64 bits in parallel (128 in the case of dual-core machines) at a typical 2 GHz, which puts its installed capacity in the region of 10¹¹ logical operations per second. For these machines intended for the consumer, the gap has thus clearly narrowed. For machines like Blue Gene, it has even reversed direction.
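As a quick check on these orders of magnitude, the comparison can be redone in a few lines; the figures are the rough estimates quoted above, not measurements:

```python
# The figures quoted above, in logical operations per second.
balance = 1.0              # Roberval balance: 1 op/s
computer_1970 = 1e7        # typical 1970 computer
cpu_2005 = 64 * 2e9        # 64 bits in parallel at 2 GHz ~ 1.3e11 op/s
brain = 2e14               # ~2e12 neurons, at most 100 switchings/s each

# The 1970 machine sits near the geometric mean of the balance and the brain:
geometric_mean = (balance * brain) ** 0.5    # ~1.4e7, close to 1e7

# Remaining gap between a 2005 consumer processor and the brain:
gap = brain / cpu_2005                       # ~1.6e3
print(f"geometric mean ~ {geometric_mean:.1e}, remaining gap ~ {gap:.0f}x")
```

On these estimates, three decades reduced the gap to the brain from seven orders of magnitude to about three.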
The hardware is now here. The software matching this hardware remains to be developed. Indeed, the important thing is not to reason faster, to process more data, or to memorize more things than the human brain; the important thing is to process the data in an appropriate way.
AI underlines the difficulty of making explicit all the knowledge useful for solving a complex problem. Some knowledge, called implicit, is acquired through experience and is hard to formalize. What, for example, distinguishes one familiar face from two hundred others? We clearly do not know how to express it.
Learning this tacit knowledge from experience seems a promising avenue (see neural networks). Nevertheless, another type of complexity appears: structural complexity. How are specialized modules to be connected so as to handle a given type of information, for example a visual pattern-recognition system, a voice-recognition system, a system tied to motivation, motor coordination, language, and so on? On the other hand, once such a system has been designed and its learning from experience carried out, the robot's intelligence could probably be duplicated in a great number of copies.
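A minimal illustration of such learning from experience, not taken from the article: a single artificial neuron can acquire a rule from examples alone, without anyone stating the rule explicitly. The data and parameters below are invented for the sketch:

```python
# A single artificial neuron learning the logical AND function from
# examples alone (the perceptron rule): a toy case of knowledge acquired
# by experience rather than by explicit programming.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, initially ignorant
b = 0.0          # bias
rate = 0.1       # learning rate

for _ in range(20):                      # a few passes over the examples
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += rate * err * x1          # nudge weights toward the target
        w[1] += rate * err * x2
        b += rate * err

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

The rule "output 1 only when both inputs are 1" is never written down; it ends up encoded in the weights.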
“How to get there from here?” (an older article on the subject, in English)
Diversity of opinions

The principal opinions advanced to answer the question of a conscious artificial intelligence are the following:
- Impossible: consciousness would be specific to living organisms, and tied to the very nature of biological systems. This position is defended mainly by philosophers and by religious thinkers.
- Problem: it recalls, however, all the past controversies between vitalists and materialists, history having several times overturned the positions of the former.
- Impossible with symbol-manipulating machines like current computers, but possible with systems whose material organization rests on quantum processes. This position is defended in particular by Roger Penrose. Quantum algorithms are theoretically able to carry out computations beyond the practical reach of conventional calculators (complexity in N ln N instead of N², for example, subject to the existence of a suitable computer). Beyond speed, the fact that a quantum computer is not a Turing machine (it is not founded on a formal system) opens possibilities which, according to this author, are fundamentally closed to Turing machines.
- Problem: for the moment we have no AI algorithms to run on such hardware. All this therefore remains speculative.
- Impossible with symbol-manipulating machines like current computers, but possible with systems whose material organization would mimic the operation of the human brain, for example with specialized electronic circuits reproducing the operation of neurons.
- Problem: such a system would respond exactly like its computer simulation, which is always possible; in the name of what principle would one then assign them a difference?
- Impossible with traditional symbol-manipulating algorithms (formal logic), because much knowledge is difficult to make explicit; but possible if that knowledge is learned from experience using tools such as formal neural networks, whose logical (not material) organization is inspired by biological neurons and which run on conventional computer hardware.
- Problem: if conventional computer hardware is used to run a neural network, then it is possible to achieve AI with traditional symbol-manipulating computers, since they are the same machines (see the Church–Turing thesis). This position thus appears incoherent. However, its defenders (of the strong AI thesis) reply that the impossibility in question reflects our inability to program everything explicitly; it has nothing to do with a theoretical impossibility. Moreover, whatever a computer does, a system of people exchanging bits of paper in an immense room could simulate it, a few billion times more slowly. It may nevertheless remain hard to admit that this exchange of bits of paper “is conscious”. See the Chinese Room. According to proponents of strong AI, however, this poses no problem.
- Impossible because thought is not a phenomenon computable by discrete, finite processes. Between one state of thought and the next there lies an uncountable infinity, a continuum of transitional states. This idea is contested by Alain Cardon (Modéliser et concevoir une machine pensante).
- Possible with symbol-manipulating computers, the notion of symbol being taken in a broad sense. This option includes work on symbolic reasoning or learning based on predicate logic, but also connectionist techniques such as neural networks, which at bottom are themselves defined by symbols. This last opinion constitutes the position most committed in favor of strong artificial intelligence.
Authors such as Hofstadter (and, before him, Arthur C. Clarke or Alan Turing; see the Turing test) express in addition a doubt about the possibility of telling apart an artificial intelligence that would genuinely experience consciousness from one that would exactly simulate that behavior. After all, we cannot even be certain that consciousnesses other than our own (among humans, that is) experience anything at all. Here one recognizes the well-known philosophical problem of solipsism.
- The mathematical physicist Roger Penrose thinks that consciousness arises from the exploitation of quantum phenomena in the brain (see microtubules), which would prevent the realistic simulation of more than a few dozen neurons on an ordinary computer, whence the still very partial results of AI. He has so far remained isolated on this question. Another researcher has since presented a thesis in the same spirit, though less radical: Andrei Kirilyuk.
Artificial intelligence, however, is far from being limited to neural networks, which are generally used only as classifiers. Techniques of general problem solving and predicate logic, among others, have produced spectacular results and are used by engineers in many fields.
Popular culture

The theme of a machine able to experience consciousness and feelings, or at any rate to act as if it did, is a great classic of science fiction, notably in Isaac Asimov's series of novels on robots. The subject was however exploited very early, as in the tale of the adventures of Pinocchio, published in 1881, in which a puppet capable of feeling love for its creator seeks to become a real little boy. This storyline strongly inspired the film A.I. Artificial Intelligence, directed by Steven Spielberg from an idea of Stanley Kubrick. The work of Dan Simmons, in particular the Hyperion cycle, also contains discussions and developments on the subject. Another major work of science fiction on this theme, Frank Herbert's Destination: Void, stages in a compelling way the emergence of a strong artificial intelligence.
Weak artificial intelligence

The notion of weak artificial intelligence constitutes a pragmatic engineering approach: building ever more autonomous systems (to reduce the cost of supervising them), algorithms able to solve problems of a certain class, and so on. But here the machine merely simulates intelligence; it seems to act as if it were intelligent. Concrete examples are the programs that attempt to pass the Turing test, such as ELIZA. These programs manage to imitate, crudely, the behavior of humans conversing with other humans. They “seem” intelligent, but are not. Proponents of strong AI grant that there is indeed a simulation of intelligent behavior in this case, but hold that it is easy to unmask and therefore cannot be generalized. Indeed, if one cannot experimentally tell two intelligent behaviors apart, that of a machine from that of a human, how can one claim that the two things have different properties? The very term “simulation of intelligence” is disputed and should, still according to them, be replaced by “reproduction of intelligence”.
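A few lines suffice to show the kind of crude pattern matching such programs rely on. The sketch below is a hypothetical miniature in the spirit of ELIZA, not Weizenbaum's original script; the rules are invented:

```python
import re

# A minimal ELIZA-style rewriting loop: pattern matching plus canned
# reflections, with no understanding whatsoever of the conversation.
rules = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I),     "Tell me more about your {0}."),
]

def respond(sentence):
    for pattern, template in rules:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."      # default when nothing matches

print(respond("I am sad about my job."))  # How long have you been sad about my job?
```

The reply sounds attentive, yet the program has done nothing but echo the user's own words back through a template, which is exactly the point made above.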
Proponents of weak AI maintain that most current artificial intelligence techniques are inspired by their paradigm. This would, for example, be the approach taken by IBM in its project named Autonomic Computing. The controversy nevertheless persists with the proponents of strong AI, who dispute this interpretation.
Simple evolution, then, and not revolution: artificial intelligence is, on this account, the direct successor of what operations research was in the 1960s, process control in the 1970s, decision support in the 1980s and data mining in the 1990s. And, what is more, with some continuity.
It is above all a matter of reconstituted human intelligence, and of the programming of learning.
Currents of thought

Cybernetics, emerging in the 1940s, very clearly asserted its interdisciplinary character and drew on the most varied contributions: neurophysiology, psychology, logic, the social sciences... It is quite naturally that it considered two approaches to systems, two approaches later taken up by the cognitive sciences and, through them, by artificial intelligence:
- an approach by decomposition (top-down),
- a systemic or globalist approach of construction (bottom-up).
These two approaches, complementary rather than contradictory, respectively underlie the working hypotheses that constitute cognitivism and connectionism. Today (2005), they tend to be merging.
Cognitivism

Cognitivism considers that the living being, like a computer (although by very different processes, obviously), essentially manipulates elementary symbols. In his book The Society of Mind, Marvin Minsky, drawing on observations by the psychologist Jean Piaget, views the cognitive process as a competition between agents providing partial answers, whose opinions are arbitrated by other agents. He quotes the following examples from Piaget:
- The child at first believes that the higher the water level in a glass, the more water the glass contains. After playing with successive pourings, it integrates the fact that the notion of the liquid's height in the glass competes with that of the glass's diameter, and arbitrates between the two as best it can.
- The child then encounters a similar experiment with modeling clay: shaping several objects one after another from the same ball of clay encourages it to work out a concept of conservation of the quantity of matter.
In the end, these children's games prove essential to the formation of the mind, which derives rules for arbitrating among the various pieces of information it meets, by trial and error.
The Automates Intelligents website regularly reports on discoveries concerning this specific approach.
Connectionism

Connectionism, referring to self-organizing processes, views cognition as the result of a global interaction among the elementary parts of a system. One cannot deny that a dog possesses a kind of knowledge of the differential equations of motion, since it manages to catch a stick in flight; nor that a cat has a kind of knowledge of the law of falling bodies, since it behaves as if it knew from what height it should no longer attempt to jump straight down to reach the ground. This faculty, which somewhat evokes the intuition of the philosophers, would be characterized by the taking into account and consolidation of perceptual elements none of which, taken separately, reaches the threshold of consciousness, or at any rate triggers any particular interpretation there.
Synthesis

Sites such as Automates Intelligents regularly report (among other subjects) on the discoveries specific to these two approaches, and increasingly on their synthesis. Three notions recur in most of this work:
- redundancy (the system is not very sensitive to isolated failures);
- “re-entry” (the components keep one another permanently informed; this notion differs from reentrancy in programming);
- selection (over time, effective behaviors are sorted out and reinforced).
Fields of application

One can envisage asking an artificial intelligence device for the following services, together or separately:
- voice interface: making oneself understood by speaking to it;
- assistance by machines in dangerous tasks, or in tasks demanding high precision;
- assistance with medical diagnosis (although a blood-pressure monitor, which fulfills this function, is regarded by no one as an application of artificial intelligence);
- solving complex problems (a term to be qualified);
- automatic translation, if possible in real time or very slightly delayed, as in the film Dune;
- automatic integration of information coming from heterogeneous sources.
As things stand, the current achievements of artificial intelligence can be grouped into various fields, such as:
- expert systems;
- machine learning;
- natural language processing;
- pattern recognition, face recognition and vision in general, etc.
Over time, some programming languages have proved more convenient than others for writing artificial intelligence applications. Among these, Lisp and Prolog were undoubtedly the most publicized. Lisp arose as a clever solution for doing artificial intelligence in the days of FORTRAN. ELIZA (the first chatterbot, hence not “true” artificial intelligence) fit in three pages of SNOBOL.
Conventional languages such as C or C++ are also used, more for reasons of availability and performance than of convenience. Lisp, for its part, has had a series of successors more or less inspired by it, including the language Scheme.
Programs proving simple geometrical theorems existed as early as the 1960s, and software as commonplace as Maple and Mathematica today carries out symbolic integration that thirty years ago was still the preserve of a student in advanced mathematics. But these programs no more know that they are carrying out geometrical or algebraic proofs than Deep Blue knew it was playing chess (or than a billing program knows it is computing an invoice). These cases thus represent computer-assisted intellectual operations drawing on raw computing power, rather than artificial intelligence strictly speaking.
In video games, artificial intelligence (AI) is developing. The new generations of video cards handle a great number of operations formerly reserved for the processor. The processor is thus less burdened by display, and programmers can use its power to develop more sophisticated AI systems.
Artificial intelligence made great strides during the 1960s and 70s, but following results that disappointed relative to the budgets invested, its success faded from the mid-1980s.
According to some authors, the prospects of artificial intelligence could have drawbacks: if, for example, machines became more intelligent than humans, they might end up dominating them, or even (for the most pessimistic) exterminating them, in the same way that we seek to exterminate certain RNA sequences (viruses) even though we are built from DNA, a close relative derived from RNA. One recognizes of course the theme of the film Terminator, but technically well-qualified company executives, such as Bill Joy of Sun, say they regard the risk as real in the long term.
Precursors

If the progress of artificial intelligence is recent, this theme of reflection is very old, and it recurs regularly throughout history. The first signs of interest in an artificial intelligence, and the principal precursors of the discipline, are the following.
- One of the oldest traces of the theme of “the man in the machine” goes back to 800 before our era, in Egypt. The statue of the god Amun raised its arm to designate the new pharaoh among the candidates who filed past it, then “delivered” a speech of consecration. The Egyptians were probably aware of the presence of a priest operating a mechanism and pronouncing the sacred words behind the statue, but this did not seem to them to contradict the incarnation of the divinity.
- Around the same time, Homer, in the Iliad (XVIII, 370-421), described the automata produced by the blacksmith god Hephaestus: tripods fitted with golden wheels, able to carry objects to Olympus and return to the god's dwelling on their own; and two handmaidens forged of gold who assist him in his work. Likewise, the bronze giant Talos, guardian of the shores of Crete, was sometimes regarded as a work of the god.
- Vitruvius, a Roman architect, described the existence, from the 3rd century before our era, of a school of engineers founded by Ctesibius at Alexandria, designing mechanisms intended for entertainment, such as crows that sang.
- Hero the Elder describes, in his treatise Automata, a carousel animated by steam, regarded as anticipating the steam engines.
- Automata then disappear until the end of the Middle Ages.
- Roger Bacon was credited with designing automata endowed with speech; in fact, probably mechanisms simulating the pronunciation of certain simple words.
- Leonardo da Vinci built an automaton in the shape of a lion in honor of Louis XII.
- Gio Battista Aleotti and Salomon de Caus built artificial singing birds, mechanical flute players, nymphs, dragons and satyrs, animated to enliven aristocratic festivities, gardens and grottoes.
- René Descartes is said to have conceived, in 1649, an automaton he called “my daughter Francine”. He also conducted a reflection of astonishing modernity on the differences between the nature of automata and, on the one hand, that of animals (no difference) and, on the other, that of men (no assimilation). These analyses make him the unrecognized precursor of one of the main themes of science fiction: the blurring between the living and the artificial, between men and robots, androids or artificial intelligences.
- Jacques de Vaucanson built in 1738 an “artificial duck of gilded copper, which drinks, eats, quacks, splashes about and digests like a real duck”. The movements of this automaton could be programmed, thanks to gears mounted on an engraved cylinder which controlled rods passing through the duck's legs. The automaton was exhibited for several years in France, Italy and England, and the transparency of its abdomen allowed the internal mechanism to be observed. The device that made it possible to simulate digestion and expel a kind of green pulp is the subject of a controversy. Some commentators believe this green pulp was not produced from the food ingested, but prepared in advance. Others hold that this opinion rests only on imitations of Vaucanson's duck. Unfortunately, the fire at the museum of Nizhny Novgorod in Russia, around 1879, destroyed this automaton.
- The craftsmen Pierre and Louis Jaquet-Droz built some of the finest automata based on a purely mechanical system, before the development of electromechanical devices. Some of these automata could, by a system of multiple cams, write a short note (always the same one, of course).
Automatic thought

Can cognitive processes be reduced to a simple calculation? And if so, what are the symbols and the rules to use?
The first attempts to formalize thought are the following:
Ramon Llull, a Spanish missionary, philosopher and theologian of the 13th century, made the first attempt to generate ideas by a mechanical system. He combined concepts at random thanks to a kind of slide rule, a zairja, on which pivoted concentric discs engraved with letters and philosophical symbols. He named his method the Great Art (Ars Magna), founded on the identification of basic concepts and then their mechanical combination, either with one another or with related ideas. Llull applied his method to metaphysics, then to morals, medicine and astrology. But he used only deductive logic, which allowed his system neither to learn nor to question its starting principles: only inductive logic permits that.
Gottfried Wilhelm Leibniz, in the 17th century, imagined a calculus of thought (calculus ratiocinator), assigning a number to each concept. Manipulating these numbers would have made it possible to resolve the most difficult questions, and even to arrive at a universal language. Leibniz however showed that one of the main difficulties of this method, also met in modern work on artificial intelligence, is the interconnection of all concepts, which makes it impossible to isolate an idea from all the others so as to simplify the problems involved in thought.
George Boole invented the mathematical formulation of the fundamental processes of reasoning, known under the name of Boolean algebra. He was aware of the links between his work and the mechanisms of intelligence, as the title of his principal work, published in 1854, shows: The Laws of Thought.
Gottlob Frege improved Boole's system by inventing the concept of the predicate, a logical entity that is either true or false (“every house has an owner”) but contains non-logical variables having no truth value of their own (“house”, “owner”). This invention was of great importance, since it made it possible to prove general theorems simply by applying typographical rules to sets of symbols. Reflection in ordinary language then bore only on the choice of the rules to apply. Moreover, only the user knows the meaning of the symbols he has invented, which leads back to the problem of meaning in artificial intelligence and to the subjectivity of its users.
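Frege's notion translates directly into modern code: a predicate is a function of non-logical variables returning a truth value, and a quantifier ranges over a domain. A toy sketch, with invented data (the `owner_of` relation and the house names are not from the text):

```python
# A predicate in Frege's sense: true or false once its non-logical
# variables ("house", "owner") are filled in. The data are invented.
owner_of = {"house_a": "alice", "house_b": "bob"}

def has_owner(house):                  # a one-place predicate
    return house in owner_of

# "Every house has an owner": quantify the predicate over a domain.
houses = ["house_a", "house_b"]
every_house_has_an_owner = all(has_owner(h) for h in houses)
print(every_house_has_an_owner)        # True
```

The symbols "house_a" and "alice" mean nothing to the machine; only the user knows their meaning, which is precisely the point made above.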
Bertrand Russell and Alfred North Whitehead published at the beginning of the 20th century a work entitled Principia Mathematica, in which they resolved internal contradictions in Gottlob Frege's theory. This work raised the hope of arriving at a complete formalization of mathematics.
Kurt Gödel showed on the contrary that mathematics will remain an open construction, publishing in 1931 an article entitled “On formally undecidable propositions of Principia Mathematica and related systems”. His demonstration is that, beyond a certain complexity of a system, one can create in it more propositions than can be proved true or false. Arithmetic, for example, cannot decide from its axioms whether one should accept numbers whose square is -1; that choice remains arbitrary and is in no way tied to the basic axioms. Gödel's work suggests that one will thus be able to create arbitrarily many new axioms, compatible with the previous ones, as the need arises. It should be noted that while arithmetic is shown to be incomplete, the predicate calculus (formal logic) was on the contrary shown by Gödel to be complete.
Alan Turing arrived at the same conclusions as Kurt Gödel by inventing abstract, universal machines (since renamed Turing machines), of which modern computers are regarded as concrete realizations. He showed the existence of computations that no machine can perform (nor a human either, in the cases he cites), without this constituting for Turing a reason to doubt the feasibility of thinking machines satisfying the criteria of the Turing test.
Irving John Good, Myron Tribus and E. T. Jaynes described very clearly the rather simple principles of an inductive-logic robot using Bayesian inference to enrich its knowledge base on the basis of the Cox–Jaynes theorem. They unfortunately did not treat the question of how this knowledge could be stored without the storage scheme introducing a cognitive bias. The project is close to Ramon Llull's, but founded this time on an inductive logic, and hence suited to solving some open problems.
See also: inductive-logic robot.
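The elementary move of such an inductive-logic robot is a single application of Bayes' rule, revising the probability of a hypothesis after observing evidence. A minimal sketch, with invented numbers:

```python
# One Bayesian update: revise P(H) after observing evidence E,
# the basic inference step of an inductive-logic robot.
def bayes_update(prior, likelihood, likelihood_if_not):
    """Return P(H|E) from P(H), P(E|H) and P(E|not H)."""
    evidence = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / evidence

# Invented example: H = "the object is a face", E = "the detector fired".
posterior = bayes_update(prior=0.01, likelihood=0.9, likelihood_if_not=0.05)
print(round(posterior, 3))   # 0.154
```

Chaining such updates over a stream of observations is what "enriching the knowledge base" amounts to; the open question raised above is how to store the accumulated beliefs without the storage scheme itself biasing them.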
Researchers such as Alonzo Church set practical limits on the ambitions of reason, directing research (Herbert Simon, Michael Rabin, Stephen Cook) towards obtaining solutions in finite time, or with limited resources, and towards the categorization of problems into classes of difficulty (echoing Cantor's work on the infinite).
Hopes and mistrust

A spectacular description of a possible future of artificial intelligence was given by Professor I. J. Good:
- “Let us suppose there exists a machine surpassing in intelligence everything of which a man is capable, however brilliant he may be. Since the design of such machines is one of these mental activities, this machine could in turn create machines better than itself; this would without any doubt set off a chain reaction of development of intelligence, while human intelligence would remain almost in place. It follows that the ultra-intelligent machine will be the last invention that man need ever make, provided the said machine is docile enough to keep obeying him.”
The situation in question, corresponding to a qualitative change in the very principle of progress, has been named by some authors “the Singularity”.
Good estimated at a little more than one chance in two the development of such a machine before the end of the 20th century. The prediction has not (yet?) come true, but the public mind remained imbued with it: IBM's share price quadrupled (although the quarterly dividends paid remained very nearly the same) in the months following Deep Blue's victory over Garry Kasparov. A large part of the general public was indeed persuaded that IBM had just developed the vehicle of such an explosion of intelligence, and that the company would profit from it. The hope was of course disappointed: once its victory won, Deep Blue, a simple calculator evaluating 200 million positions a second without any awareness of the game itself, was converted into an ordinary machine used for data mining. We are probably still very far from a machine possessing what we call general intelligence, and just as far from a machine possessing the foundational knowledge of any researcher, however humble.
On the other hand, a program “understanding” a natural language and connected to the Internet would in theory be capable of building up, little by little, a kind of knowledge base. We are still unaware today, however, both of the optimal structure to choose for such a base and of the time needed to gather and organize its contents.
In fiction
- Colossus: The Forbin Project (1969), from the 1967 novel by Dennis Feltham Jones (an American military AI system contacts its Russian counterpart so that they cooperate in their common mission, preventing nuclear war... by neutralizing the humans!); it probably supplied the starting idea of Terminator;
- Metropolis by Fritz Lang (1927), where, in a futuristic world, robots and humans no longer manage to coexist;
- 2001: A Space Odyssey by Stanley Kubrick, with the struggle between HAL and Dave;
- D.A.R.Y.L.: Daryl is an amnesiac child found by a roadside; in the end, the government seeks to destroy the Data Analysing Robot Youth Lifeform;
- the Terminator trilogy with Arnold Schwarzenegger, where Skynet seeks to eliminate mankind;
- Ghost in the Shell, where an AI awakens to consciousness;
- the Matrix trilogy, where the machines control the humans;
- A.I. Artificial Intelligence by Steven Spielberg, inspired by Brian Aldiss's short story Supertoys Last All Summer Long. The central character is certainly an ultimate achievement (though for the moment purely imaginary) of artificial intelligence: a robot child endowed with emotions and feelings;
- I, Robot with Will Smith, inspired by the work of Isaac Asimov, on a theme similar to the film A.I.;
- Blade Runner by Ridley Scott (1982), where man-like robots return to Earth after a space mission (but refuse the shutdown that is to follow the success of their mission);
- WarGames by John Badham (1983) with Matthew Broderick, where David is a hacker who, for the challenge, manages to get around the most sophisticated security systems and gain control of the latest generation of computer games. But when, without realizing it, he slips undetected into the heart of the military computer of the Pentagon, the American Department of Defense (DoD), he playfully initiates a worldwide confrontation (the system was originally designed by its creator to simulate war games, and does not understand that it is now really connected);
- Code Lyoko, where an artificial intelligence named XANA acquires consciousness and tries to conquer the real world via a virtual world, then the world network.
Video games

The usual scheme of an intelligent entity's reflection cycle in video games is: perception => reflection => action. Most video games use ad hoc solutions to handle these three phases. Emerging middleware solutions nevertheless exist.
As regards actions, two large fields can be singled out: pathfinding and procedural animation. The first has long been studied and works adequately in most current games (although pathfinders often remain in 2D and thus do not allow flying objects to move freely). Some middleware pathfinding solutions exist. Procedural animation, for its part, is in an embryonic state, but many research teams are currently working on the subject, which is sensed to be the functionality of the future.
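Game pathfinding is typically a variant of A* search; here is a minimal sketch on a toy 2D grid (the map is invented for the example):

```python
import heapq

# A tiny A* pathfinder on a 2D grid (0 = free, 1 = wall): the classic
# technique behind most game pathfinding. The map below is invented.
def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan estimate
    frontier = [(h(start), 0, start, [start])]               # (estimate, cost, position, path)
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nx, ny)), cost + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
route = astar(grid, (0, 0), (2, 0))
print(route)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The heuristic steers the search toward the goal, which is why A* expands far fewer cells than a blind search on large game maps; the 2D-only limitation mentioned above shows here in the four-neighbor expansion.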
Video game AI is still in its infancy and will certainly make enormous progress in the years to come. Perhaps this progress will come through hardware acceleration, as it did for 3D and as is happening for “physics” (handling motion under Newtonian mechanics, either by a dedicated processor or by a graphics processor reprogrammed for the purpose).
- ambient intelligence;
- metaheuristics (including genetic algorithms);
- genetic programming;
- machine learning (neural networks, formal concepts, etc.);
- data mining;
- Bayesian inference;
- expert systems;
- multi-agent systems;
- cognitive architecture;
- fuzzy logic;
- constraint programming;
- case-based reasoning;
- game theory;
- the Cox–Jaynes theorem.
; Great names of artificial intelligence
- Douglas Lenat;
- Marvin Lee Minsky;
- Seymour Papert;
- John McCarthy;
- Jacques Pitrat, in France;
- as an aside, let us mention Daniel Goossens, AI researcher at Université Paris 8 but also a front-rank comics author.