TY - GEN

T1 - Information preservation in statistical privacy and Bayesian estimation of unattributed histograms

AU - Lin, Bing Rong

AU - Kifer, Daniel

PY - 2013/7/29

Y1 - 2013/7/29

N2 - In statistical privacy, utility refers to two concepts: information preservation - how much statistical information is retained by a sanitizing algorithm, and usability - how (and with how much difficulty) one extracts this information to build statistical models, answer queries, etc. Some scenarios incentivize a separation between information preservation and usability, so that the data owner first chooses a sanitizing algorithm to maximize a measure of information preservation and, afterward, the data consumers process the sanitized output according to their needs [22, 46]. We analyze a variety of utility measures and show that the average (over possible outputs of the sanitizer) error of Bayesian decision makers forms the unique class of utility measures that satisfy three axioms related to information preservation. The axioms are agnostic to Bayesian concepts such as subjective probabilities and hence strengthen support for Bayesian views in privacy research. In particular, this result connects information preservation to aspects of usability - if the information preservation of a sanitizing algorithm should be measured as the average error of a Bayesian decision maker, shouldn't Bayesian decision theory be a good choice when it comes to using the sanitized outputs for various purposes? We put this idea to the test in the unattributed histogram problem, where our decision-theoretic post-processing algorithm empirically outperforms previously proposed approaches.

AB - In statistical privacy, utility refers to two concepts: information preservation - how much statistical information is retained by a sanitizing algorithm, and usability - how (and with how much difficulty) one extracts this information to build statistical models, answer queries, etc. Some scenarios incentivize a separation between information preservation and usability, so that the data owner first chooses a sanitizing algorithm to maximize a measure of information preservation and, afterward, the data consumers process the sanitized output according to their needs [22, 46]. We analyze a variety of utility measures and show that the average (over possible outputs of the sanitizer) error of Bayesian decision makers forms the unique class of utility measures that satisfy three axioms related to information preservation. The axioms are agnostic to Bayesian concepts such as subjective probabilities and hence strengthen support for Bayesian views in privacy research. In particular, this result connects information preservation to aspects of usability - if the information preservation of a sanitizing algorithm should be measured as the average error of a Bayesian decision maker, shouldn't Bayesian decision theory be a good choice when it comes to using the sanitized outputs for various purposes? We put this idea to the test in the unattributed histogram problem, where our decision-theoretic post-processing algorithm empirically outperforms previously proposed approaches.

UR - http://www.scopus.com/inward/record.url?scp=84880543792&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84880543792&partnerID=8YFLogxK

U2 - 10.1145/2463676.2463721

DO - 10.1145/2463676.2463721

M3 - Conference contribution

AN - SCOPUS:84880543792

SN - 9781450320375

T3 - Proceedings of the ACM SIGMOD International Conference on Management of Data

SP - 677

EP - 688

BT - SIGMOD 2013 - International Conference on Management of Data

T2 - 2013 ACM SIGMOD Conference on Management of Data, SIGMOD 2013

Y2 - 22 June 2013 through 27 June 2013

ER -