Sequential Bayesian updating for big data

Zita Oravecz, Matt Huentelman, Joachim Vandekerckhove

Research output: Chapter in Book/Report/Conference proceeding › Chapter

13 Scopus citations


The velocity, volume, and variety of Big Data present both challenges and opportunities for cognitive science. We introduce sequential Bayesian updating as a tool to mine these three core properties. In the Bayesian approach, we summarize the current state of knowledge regarding parameters in terms of their posterior distributions, and use these as prior distributions when new data become available. Crucially, we construct posterior distributions in such a way that we avoid repeatedly computing the likelihood of old data as new data become available, allowing information to propagate without great computational demand. As a result, these Bayesian methods allow continuous inference on voluminous information streams in a timely manner. We illustrate the advantages of sequential Bayesian updating with data from the MindCrowd project, in which crowd-sourced data are used to study Alzheimer’s dementia. We fit an extended LATER (“Linear Approach to Threshold with Ergodic Rate”) model to reaction time data from the project in order to separate two distinct aspects of cognitive functioning: speed of information accumulation and caution.
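The core idea of the abstract — yesterday's posterior becomes today's prior, so old data never need to be revisited — can be sketched with a minimal conjugate-normal example. This is an illustration of the general technique only, not the chapter's extended LATER model; the function name and the batch sizes are assumptions made for the sketch.

```python
import numpy as np

def update_normal(prior_mean, prior_var, batch, noise_var=1.0):
    """Conjugate update for a normal mean with known noise variance.

    Posterior precision = prior precision + n / noise_var, so each
    batch's posterior can serve as the prior for the next batch.
    """
    n = len(batch)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(batch) / noise_var)
    return post_mean, post_var

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)

# Sequential updating: process the stream in 100 batches, each time
# reusing the previous posterior as the prior.
mean, var = 0.0, 100.0  # diffuse initial prior
for batch in np.array_split(data, 100):
    mean, var = update_normal(mean, var, batch)

# Processing all data in one pass yields the identical posterior,
# confirming that the sequential scheme loses no information.
mean_all, var_all = update_normal(0.0, 100.0, data)
assert np.isclose(mean, mean_all) and np.isclose(var, var_all)
```

In the conjugate case the equivalence is exact; for non-conjugate models such as LATER, the same principle applies but the posterior must be approximated (e.g., by a parametric density fit to posterior samples) before being reused as a prior.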

Original language: English (US)
Title of host publication: Big Data in Cognitive Science
Publisher: Taylor and Francis
Number of pages: 21
ISBN (Electronic): 9781315413563
ISBN (Print): 9781138791923
State: Published - Jan 1 2016

All Science Journal Classification (ASJC) codes

  • Psychology (all)
  • Computer Science (all)

