Scheduling shared scans of large data files

Parag Agrawal, Daniel Kifer, Christopher Olston

Research output: Contribution to journal › Article › peer-review

54 Scopus citations


We study how best to schedule scans of large data files, in the presence of many simultaneous requests to a common set of files. The objective is to maximize the overall rate of processing these files, by sharing scans of the same file as aggressively as possible, without imposing undue wait time on individual jobs. This scheduling problem arises in batch data processing environments such as Map-Reduce systems, some of which handle tens of thousands of processing requests daily, over a shared set of files. As we demonstrate, conventional scheduling techniques such as shortest-job-first do not perform well in the presence of cross-job sharing opportunities. We derive a new family of scheduling policies specifically targeted to sharable workloads. Our scheduling policies revolve around the notion that, all else being equal, it is good to schedule nonsharable scans ahead of ones that can share IO work with future jobs, if the arrival rate of sharable future jobs is expected to be high. We evaluate our policies via simulation over varied synthetic and real workloads, and demonstrate significant performance gains compared with conventional scheduling approaches.
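The two core ideas in the abstract — amortizing one scan across all pending requests for a file, and deferring scans of files whose future request arrival rate is high so that more jobs can join the shared scan — can be illustrated with a minimal sketch. All names here (`total_io_no_sharing`, `pick_next_file`, the toy file sizes and arrival rates) are hypothetical illustrations, not the paper's actual algorithms or notation:

```python
from collections import defaultdict

def total_io_no_sharing(requests, file_sizes):
    """IO cost if every request triggers its own full scan of the file."""
    return sum(file_sizes[f] for f in requests)

def total_io_shared(requests, file_sizes):
    """IO cost if all pending requests for the same file share one scan."""
    pending = defaultdict(int)
    for f in requests:
        pending[f] += 1
    # one scan per distinct file serves every queued request for it
    return sum(file_sizes[f] for f in pending)

def pick_next_file(pending_files, arrival_rates):
    """Toy version of the sharing-aware priority idea: scan the file with
    the LOWEST expected arrival rate of future sharable jobs first, so
    high-rate files stay queued and accumulate more jobs per shared scan."""
    return min(pending_files, key=lambda f: arrival_rates[f])

file_sizes = {"A": 100, "B": 50}          # hypothetical scan costs
requests = ["A", "A", "B"]                # three queued jobs

print(total_io_no_sharing(requests, file_sizes))   # 250 units of IO
print(total_io_shared(requests, file_sizes))       # 150 units of IO

arrival_rates = {"A": 0.9, "B": 0.1}      # jobs/sec, hypothetical
print(pick_next_file({"A", "B"}, arrival_rates))   # "B": scan it now,
                                                   # defer popular "A"
```

This is only a cost-accounting sketch under a stylized model (unit-speed sequential scans, known arrival rates); the paper's policies are derived and evaluated via simulation over synthetic and real workloads.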

Original language: English (US)
Pages (from-to): 958-969
Number of pages: 12
Journal: Proceedings of the VLDB Endowment
Issue number: 1
State: Published - 2008

All Science Journal Classification (ASJC) codes

  • Computer Science (miscellaneous)
  • Computer Science (all)

