Private graphon estimation for sparse graphs

Christian Borgs, Jennifer T. Chayes, Adam Davison Smith

Research output: Contribution to journal › Conference article

18 Citations (Scopus)

Abstract

We design algorithms for fitting a high-dimensional statistical model to a large, sparse network without revealing sensitive information of individual members. Given a sparse input graph G, our algorithms output a node-differentially private nonparametric block model approximation. By node-differentially private, we mean that our output hides the insertion or removal of a vertex and all its adjacent edges. If G is an instance of the network obtained from a generative nonparametric model defined in terms of a graphon W, our model guarantees consistency: as the number of vertices tends to infinity, the output of our algorithm converges to W in an appropriate version of the L2 norm. In particular, this means we can estimate the sizes of all multi-way cuts in G. Our results hold as long as W is bounded, the average degree of G grows at least like the log of the number of vertices, and the number of blocks goes to infinity at an appropriate rate. We give explicit error bounds in terms of the parameters of the model; in several settings, our bounds improve on or match known nonprivate results.
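To make the generative model in the abstract concrete, here is a minimal, non-private sketch of sampling a graph from a bounded graphon W at sparsity ρ (each latent position x_i is uniform on [0,1], and edge {i,j} appears with probability ρ·W(x_i, x_j)), together with an oracle k-block histogram estimate. The function names and the choice ρ ≈ log(n)/n are illustrative; the oracle estimator buckets vertices by their latent positions, which the paper's actual (differentially private, least-squares) estimator of course cannot observe.

```python
import numpy as np

def sample_graphon_graph(W, n, rho, rng):
    """Sample an n-vertex graph: latent x_i ~ Uniform[0,1], and edge {i,j}
    (i != j) present independently with probability min(1, rho * W(x_i, x_j)).
    W must be a bounded, symmetric function accepting NumPy arrays."""
    x = rng.uniform(size=n)
    P = np.minimum(1.0, rho * W(x[:, None], x[None, :]))
    U = rng.uniform(size=(n, n))
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(int)  # sample upper triangle only
    return x, A + A.T                                # symmetrize; no self-loops

def block_model_estimate(A, x, k, rho):
    """Oracle k-block estimate: bucket vertices by latent position into k
    equal intervals, average edge indicators within each block pair, and
    rescale by rho so the result estimates the block averages of W."""
    labels = np.minimum((x * k).astype(int), k - 1)
    B = np.zeros((k, k))
    for a in range(k):
        for b in range(k):
            Ia = np.where(labels == a)[0]
            Ib = np.where(labels == b)[0]
            if a == b and len(Ia) > 1:
                m = len(Ia)
                B[a, b] = A[np.ix_(Ia, Ia)].sum() / (m * (m - 1)) / rho
            elif a != b and len(Ia) and len(Ib):
                B[a, b] = A[np.ix_(Ia, Ib)].mean() / rho
    return B
```

With W(u, v) = (u + v)/2, n = 2000, k = 4, and ρ = 5·log(n)/n (average degree of order log n, as in the abstract's sparsity regime), the block averages of the estimate land close to the block averages of W, illustrating the kind of consistency the paper proves for its private estimator.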

Original language: English (US)
Pages (from-to): 1369-1377
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 2015-January
State: Published - Jan 1 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: Dec 7 2015 - Dec 12 2015


All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Borgs, C., Chayes, J. T., & Smith, A. D. (2015). Private graphon estimation for sparse graphs. Advances in Neural Information Processing Systems, 2015-January, 1369-1377.
@article{da18249dbf0349ac822673088429be1e,
  title = "Private graphon estimation for sparse graphs",
  abstract = "We design algorithms for fitting a high-dimensional statistical model to a large, sparse network without revealing sensitive information of individual members. Given a sparse input graph G, our algorithms output a node-differentially private nonparametric block model approximation. By node-differentially private, we mean that our output hides the insertion or removal of a vertex and all its adjacent edges. If G is an instance of the network obtained from a generative nonparametric model defined in terms of a graphon W, our model guarantees consistency: as the number of vertices tends to infinity, the output of our algorithm converges to W in an appropriate version of the L2 norm. In particular, this means we can estimate the sizes of all multi-way cuts in G. Our results hold as long as W is bounded, the average degree of G grows at least like the log of the number of vertices, and the number of blocks goes to infinity at an appropriate rate. We give explicit error bounds in terms of the parameters of the model; in several settings, our bounds improve on or match known nonprivate results.",
  author = "Borgs, {Christian} and Chayes, {Jennifer T.} and Smith, {Adam Davison}",
  year = "2015",
  month = "1",
  day = "1",
  language = "English (US)",
  volume = "2015-January",
  pages = "1369--1377",
  journal = "Advances in Neural Information Processing Systems",
  issn = "1049-5258",
}
