This chapter presents human-automated judgment learning (HAJL), a methodology for investigating human interaction with automated judges that can inform training and design. After introducing HAJL, it describes the experimental task and experimental design used as a test case for investigating HAJL's utility. It then reports idiographic results representative of the insights HAJL can provide, along with a nomothetic analysis of the experimental manipulations, and ends with conclusions about HAJL's utility. The results demonstrated HAJL's ability not only to capture individual judgment achievement, interaction with an automated judge, and understanding of an automated judge, but also to identify the mechanisms underlying these performance measures, including cognitive control, knowledge, conflict, compromise, adaptation, and actual and assumed similarity. The chapter also highlights the many factors involved in designing effective human-automated judge interaction, which require detailed methods for measurement and analysis.
Original language: English (US)
Title of host publication: Adaptive Perspectives on Human-Technology Interaction: Methods and Models for Cognitive Engineering and Human-Computer Interaction
Publisher: Oxford University Press
State: Published - Mar 22 2012