Artificial intelligence

Artificial intelligence (commonly abbreviated as AI) was defined by Marvin Minsky in 1968 as "the science of making machines do things that would require intelligence if done by humans".

To date, much of the work in this field has been done with computer simulations of intelligence based on predefined sets of rules.

Two popular AI languages are LISP and Prolog (a logic programming language).
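As a rough illustration of the rule-based approach described above, the following sketch (written in Python for readability; the facts and rules are invented for this example and are not taken from any particular system) applies a small set of predefined if-then rules by forward chaining until no new conclusions can be drawn.

  # A minimal forward-chaining sketch. Facts are strings; each rule pairs a
  # set of premises with a conclusion. The rules below are hypothetical.
  RULES = [
      ({"has_feathers", "lays_eggs"}, "is_bird"),
      ({"is_bird", "can_fly"}, "can_migrate"),
  ]

  def forward_chain(facts):
      """Repeatedly apply the rules until no new facts can be derived."""
      facts = set(facts)
      changed = True
      while changed:
          changed = False
          for premises, conclusion in RULES:
              if premises <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True
      return facts

  print(forward_chain({"has_feathers", "lays_eggs", "can_fly"}))
  # the result also contains "is_bird" and "can_migrate"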


A seminal work on the question of machine intelligence is Alan Turing's paper "Computing Machinery and Intelligence" (1950). See Turing Test for further discussion.


The Loebner Prize competition has been claimed to be "the first formal instantiation of the Turing Test." Even so, many computer scientists reject the test's validity, dismissing most of the entrants as formula-based gimmicks. One criticism is that the target (a computer able to reply indistinguishably from a real person) is far too ambitious; anyone attempting it, they claim, would be forced to use formulaic means (such as a database of pre-made replies) in order to win. Despite the publicity the Loebner Prize generates for AI, some even see it as detrimental to the field, arguing that it focuses too many resources on trying to emulate humans rather than on innovative approaches with more attainable targets.
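To make the "database of pre-made replies" criticism concrete, the sketch below (Python, with invented keyword patterns; it does not reproduce the actual code or data of ELIZA, ALICE, or any Loebner entrant) matches the user's input against a small table of patterns and returns a canned response, falling back to a stock phrase when nothing matches.

  import random
  import re

  # Hypothetical keyword patterns paired with canned replies, in the spirit
  # of the "database of pre-made replies" criticism quoted above.
  CANNED = [
      (re.compile(r"\b(?:mother|father|family)\b", re.I),
       ["Tell me more about your family."]),
      (re.compile(r"\bI am (.+)", re.I),
       ["How long have you been {0}?", "Why do you say you are {0}?"]),
  ]
  DEFAULT = ["Please go on.", "I see.", "That is very interesting."]

  def reply(line):
      """Return a canned reply for the first matching pattern, else a stock phrase."""
      for pattern, responses in CANNED:
          match = pattern.search(line)
          if match:
              return random.choice(responses).format(*match.groups())
      return random.choice(DEFAULT)

  print(reply("I am feeling tired"))  # e.g. "Why do you say you are feeling tired?"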


See also: Artificial intelligence projects, computer science, cognitive science, semantics, The Singularity, ALICE, ELIZA


Fields in AI:




Loebner Prize website: http://www.loebner.net/Prizef/loebner-prize.html



For the film, see Artificial Intelligence film


/Talk