An eyes and hands model for cognitive architectures to interact with user interfaces

Farnaz Tehranchi, Frank Edward Ritter

Research output: Contribution to conference (Paper)

Abstract

We propose a cognitive model that interacts with interfaces. The main objective of cognitive science is to understand the nature of the human mind and to develop models that predict and explain human behavior. These models are useful to Human-Computer Interaction (HCI) for predicting task performance and times, assisting users, finding error patterns, and acting as surrogate users. In the future, these models will be able to watch users, correct discrepancies between model and user, better predict human performance for interactive design, and support AI interface agents. To be fully integrated into HCI design, these models need to interact with interfaces. The two main requirements for a cognitive model to interact with an interface are (a) the ability to access the information on the screen and (b) the ability to pass commands. To hook models to interfaces in a general way, we work within a cognitive architecture. Cognitive architectures are computational frameworks for executing theories of cognition; they are essentially programming languages designed for modeling. Prominent examples of these architectures are Soar [1] and ACT-R [2]. ACT-R models can access the world by interacting directly with the Emacs text editor [3]. We present an initial model of eyes and hands within the ACT-R cognitive architecture that can interact with Emacs.
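The two requirements named in the abstract, reading the screen and passing commands, can be sketched as a minimal simulation. This is an illustrative sketch only; the class and method names below are hypothetical and are not part of ACT-R or the authors' Emacs interface:

```python
# A toy "eyes and hands" loop: the model's eyes read text from a
# simulated screen buffer, and its hands send keystrokes back.
# All names here are hypothetical illustrations, not ACT-R's API.

class SimulatedScreen:
    """Stands in for an editor buffer such as Emacs."""
    def __init__(self, text):
        self.text = text

    def read(self):
        # What the model's eyes can access.
        return self.text

    def insert(self, chars):
        # Effect of the model's keystrokes on the buffer.
        self.text += chars


class EyesAndHandsModel:
    def __init__(self, screen):
        self.screen = screen

    def see(self):
        # Requirement (a): access the information on the screen.
        return self.screen.read()

    def type(self, chars):
        # Requirement (b): pass commands (here, keystrokes) to the UI.
        for ch in chars:
            self.screen.insert(ch)


screen = SimulatedScreen("hello")
model = EyesAndHandsModel(screen)
model.type(" world")
print(model.see())  # the buffer now reads "hello world"
```

In a working system, `SimulatedScreen` would be replaced by a real connection to the editor, with the eyes reading the actual buffer contents and the hands issuing real key events.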

Original language: English (US)
Pages: 15-20
Number of pages: 6
State: Published - Jan 1 2017
Event: 28th Modern Artificial Intelligence and Cognitive Science Conference, MAICS 2017 - Fort Wayne, United States
Duration: Apr 28 2017 - Apr 29 2017



All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence

Cite this

Tehranchi, F., & Ritter, F. E. (2017). An eyes and hands model for cognitive architectures to interact with user interfaces. 15-20. Paper presented at 28th Modern Artificial Intelligence and Cognitive Science Conference, MAICS 2017, Fort Wayne, United States.