Computer models solving intelligence test problems: Progress and implications

Jose Hernandez-Orallo, Fernando Martinez-Plumed, Ute Schmid, Michael Siebers, David Leonard Dowe

    Research output: Contribution to journal › Article › Research › peer-review

    17 Citations (Scopus)

    Abstract

    While some computational models of intelligence test problems were proposed throughout the second half of the 20th century, the first years of the 21st century have seen an increasing number of computer systems able to score well on particular intelligence test tasks. However, despite this trend, there has been no general account of how these works relate to each other and what their real achievements are. There is also a poor understanding of what intelligence tests measure in machines, whether they are useful for evaluating AI systems, whether they pose genuinely challenging problems, and whether they help us understand (human) intelligence. In this paper, we provide some insight into these issues, in the form of nine specific questions, by giving a comprehensive account of about thirty computer models, from the 1960s to the present, and their relationships, focusing on the range of intelligence test tasks they address, the purpose of the models, how general or specialised these models are, the AI techniques they use in each case, their comparison with human performance, and their evaluation of item difficulty. In conclusion, these tests and the computer models attempting them show that AI still lacks general techniques for dealing with a variety of problems at the same time. Nonetheless, renewed attention to these problems and a more careful understanding of what intelligence tests offer for AI may help build new bridges between psychometrics, cognitive science, and AI; and may motivate new kinds of problem repositories.
    Original language: English
    Pages (from-to): 74-107
    Number of pages: 34
    Journal: Artificial Intelligence
    ISSN: 0004-3702
    Publisher: Elsevier
    Volume: 230
    DOI: 10.1016/j.artint.2015.09.011
    Publication status: Published - 1 Jan 2016

    Keywords

    • Intelligence tests
    • Cognitive models
    • Artificial intelligence
    • Intelligence evaluation

    Cite this

    Hernandez-Orallo, J., Martinez-Plumed, F., Schmid, U., Siebers, M., & Dowe, D. L. (2016). Computer models solving intelligence test problems: Progress and implications. Artificial Intelligence, 230, 74-107. https://doi.org/10.1016/j.artint.2015.09.011
