Broad classes of outcome for an AI test may be given as:

- sub-human: performs worse than most humans.
- par-human: performs similarly to most humans.
- high-human: performs better than most humans.
- super-human: performs better than all humans.
- optimal: it is not possible to perform better (note: some of these entries were solved by humans). An example of such an entry is checkers (aka 8x8 draughts), which was weakly solved in 2007.

E-sports continue to provide additional benchmarks: Facebook AI, DeepMind, and others have engaged with the popular StarCraft franchise of videogames. Games of imperfect knowledge provide new challenges to AI in the area of game theory; the most prominent milestone in this area was brought to a close by Libratus' poker victory in 2017.
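The outcome classes above can be made concrete with a small sketch. The article gives no quantitative criterion, so the `frac_humans_beaten` input and the numeric thresholds below are illustrative assumptions, not part of the original classification:

```python
# Hypothetical classifier for the broad outcome classes of an AI test.
# The input is the (assumed) fraction of human players the system outperforms;
# the 0.4-0.6 band for "par-human" is an arbitrary illustrative choice.

def outcome_class(frac_humans_beaten: float, provably_optimal: bool = False) -> str:
    if provably_optimal:
        return "optimal"      # no better performance is possible (e.g. a solved game)
    if frac_humans_beaten >= 1.0:
        return "super-human"  # better than all humans
    if frac_humans_beaten > 0.6:
        return "high-human"   # better than most humans
    if frac_humans_beaten >= 0.4:
        return "par-human"    # similar to most humans
    return "sub-human"        # worse than most humans

print(outcome_class(0.9))                        # high-human
print(outcome_class(0.0, provably_optimal=True)) # optimal
```

On this sketch, a weakly solved game such as checkers would fall in the "optimal" class regardless of how its play compares to any human population.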
Games provide a high-profile benchmark for assessing rates of progress; many games have a large professional player base and a well-established competitive rating system. DeepMind's AlphaGo brought the era of classical board-game benchmarks to a close in 2016, when its software defeated the world's best professional Go player, Lee Sedol. Researcher Andrew Ng has suggested, as a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI." While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets. Some versions of Moravec's paradox observe that humans are more likely to outperform machines in areas such as physical dexterity that have been the direct target of natural selection.
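The competitive rating systems mentioned above are typically Elo-style systems, as used in chess and Go. A minimal sketch of the standard Elo update rule (the K-factor of 32 and the 400-point scale are conventional illustrative choices, not tied to any particular federation):

```python
# Minimal sketch of the Elo rating system used in many competitive games.
# K = 32 and the 400-point scale are common conventional parameters.

def elo_expected(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B (a value between 0 and 1)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32) -> float:
    """New rating for A after a game scored 1 (win), 0.5 (draw) or 0 (loss)."""
    return rating_a + k * (score_a - elo_expected(rating_a, rating_b))

# An upset: a 1600-rated player beats a 2000-rated player and gains most of
# the K-factor, because the expected score against the stronger player was low.
new_rating = elo_update(1600, 2000, 1.0)
```

Because each update is proportional to the gap between the actual and expected score, such systems let AI programs be ranked on the same scale as human professionals simply by playing rated games.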
There is no consensus on how to characterize which tasks AI tends to excel at.
AI, like electricity or the steam engine, is a general purpose technology. There are many useful abilities that can be described as showing some form of intelligence, and evaluating these separately gives better insight into the comparative success of artificial intelligence in different areas. Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results.
To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject matter expert Turing tests. Kaplan and Haenlein structure artificial intelligence along three evolutionary stages: (1) artificial narrow intelligence, applying AI only to specific tasks; (2) artificial general intelligence, applying AI to several areas and able to autonomously solve problems it was never designed for; and (3) artificial super intelligence, applying AI to any area and capable of scientific creativity, social skills, and general wisdom.

Artificial intelligence applications have been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery, video games, and toys. In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems, but the field was rarely credited for these successes at the time: "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."

[Figure: Progress in machine classification of images. Red line: the error rate of a trained human on a particular task.]