Explanation systems for influence maximization algorithms

Amulya Yadav, Aida Rahmattalabi, Ece Kamar, Phebe Vayanos, Milind Tambe, Venil Loyd Noronha

Research output: Contribution to journal › Conference article

Abstract

The field of influence maximization (IM) has made rapid advances, resulting in many sophisticated algorithms for identifying "influential" members in social networks. However, in order to engender trust in IM algorithms, the rationale behind their choice of "influential" nodes needs to be explained to their users. This is a challenging open problem that needs to be solved before these algorithms can be deployed on a large scale. This paper attempts to tackle this open problem via four major contributions: (i) we propose a general paradigm for designing explanation systems for IM algorithms by exploiting the tradeoff between explanation accuracy and interpretability; our paradigm treats IM algorithms as black boxes, and is flexible enough to be used with any algorithm; (ii) we utilize this paradigm to build XplainIM, a suite of explanation systems; (iii) we illustrate the usability of XplainIM by explaining solutions of HEALER (a recent IM algorithm) among ∼200 human subjects on Amazon Mechanical Turk (AMT); and (iv) we provide an extensive evaluation of our AMT results, which shows the effectiveness of XplainIM.
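For readers unfamiliar with the setting, influence maximization asks which k seed nodes maximize expected spread under a diffusion model. The sketch below shows the classic greedy baseline under the independent cascade model — this is the textbook approach the field builds on, not HEALER's algorithm or XplainIM; the toy graph, edge probability, and trial counts are illustrative assumptions:

```python
import random

def simulate_spread(graph, seeds, p=0.1, trials=200):
    """Estimate expected spread of `seeds` under the independent cascade model.

    graph: dict mapping node -> list of neighbor nodes.
    """
    rng = random.Random(0)
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.get(node, []):
                # Each edge "fires" independently with probability p.
                if nbr not in active and rng.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / trials

def greedy_im(graph, k, p=0.1, trials=200):
    """Pick k seeds, each time adding the node with the largest estimated gain."""
    seeds = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for node in graph:
            if node in seeds:
                continue
            gain = simulate_spread(graph, seeds + [node], p, trials)
            if gain > best_gain:
                best, best_gain = node, gain
        seeds.append(best)
    return seeds
```

On a star-shaped toy graph, the greedy procedure picks the hub first, since its estimated spread dominates every leaf's. The paper's paradigm would treat `greedy_im` (or any replacement) as a black box and explain only its chosen seeds.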

Original language: English (US)
Pages (from-to): 8-19
Number of pages: 12
Journal: CEUR Workshop Proceedings
Volume: 1893
ISSN: 1613-0073
State: Published - Jan 1 2017
Event: 3rd International Workshop on Social Influence Analysis, SocInf 2017 - Melbourne, Australia
Duration: Aug 19 2017 → …

All Science Journal Classification (ASJC) codes

  • Computer Science(all)

Cite this

Yadav, A., Rahmattalabi, A., Kamar, E., Vayanos, P., Tambe, M., & Noronha, V. L. (2017). Explanation systems for influence maximization algorithms. CEUR Workshop Proceedings, 1893, 8-19.
