Tri-party deep network representation

Shirui Pan, Jia Wu, Xingquan Zhu, Chengqi Zhang, Yang Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

Abstract

Information network mining often requires examination of linkage relationships between nodes for analysis. Recently, network representation has emerged to represent each node in a vector format, embedding network structure, so off-the-shelf machine learning methods can be directly applied for analysis. To date, existing methods only focus on one aspect of node information and cannot leverage node labels. In this paper, we propose TriDNR, a tri-party deep network representation model, using information from three parties: node structure, node content, and node labels (if available) to jointly learn optimal node representation. TriDNR is based on our new coupled deep natural language module, whose learning is enforced at three levels: (1) at the network structure level, TriDNR exploits inter-node relationship by maximizing the probability of observing surrounding nodes given a node in random walks; (2) at the node content level, TriDNR captures node-word correlation by maximizing the co-occurrence of word sequence given a node; and (3) at the node label level, TriDNR models label-word correspondence by maximizing the probability of word sequence given a class label. The tri-party information is jointly fed into the neural network model to mutually enhance each other to learn optimal representation, and results in up to 79% classification accuracy gain, compared to state-of-the-art methods.
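The abstract describes three coupled likelihood terms that share parameters: a skip-gram objective over random walks (node–node), a node-to-word objective over node text, and a label-to-word objective for labelled nodes. For illustration only, the following is a minimal PyTorch sketch of how such a coupling can be wired up; the plain softmax losses, toy random data, the trade-off weight alpha, and all names (node_emb, word_out, etc.) are illustrative assumptions, not the authors' released TriDNR implementation.

# Minimal sketch of a tri-party objective: structure, content, and label terms
# share the node embeddings and the word output layer, so training one term
# influences the others. Toy data and plain softmax are used for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_nodes, num_words, num_labels, dim = 100, 500, 4, 64

node_emb  = nn.Embedding(num_nodes,  dim)            # shared node vectors (the representation we want)
label_emb = nn.Embedding(num_labels, dim)            # class-label vectors
node_out  = nn.Linear(dim, num_nodes, bias=False)    # softmax layer over nodes (structure term)
word_out  = nn.Linear(dim, num_words, bias=False)    # softmax layer over words (shared by content & label terms)

def structure_loss(center_nodes, context_nodes):
    # (1) network structure: maximize p(context node | node) along random walks
    return F.cross_entropy(node_out(node_emb(center_nodes)), context_nodes)

def content_loss(nodes, words):
    # (2) node content: maximize p(word | node) for words in the node's text
    return F.cross_entropy(word_out(node_emb(nodes)), words)

def label_loss(labels, words):
    # (3) node labels: maximize p(word | class label) for labelled nodes only
    return F.cross_entropy(word_out(label_emb(labels)), words)

params = list(node_emb.parameters()) + list(label_emb.parameters()) \
       + list(node_out.parameters()) + list(word_out.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

# Toy mini-batch: (center, context) node pairs from random walks,
# (node, word) pairs from node text, and (label, word) pairs for labelled nodes.
centers  = torch.randint(0, num_nodes,  (32,))
contexts = torch.randint(0, num_nodes,  (32,))
c_nodes  = torch.randint(0, num_nodes,  (32,))
c_words  = torch.randint(0, num_words,  (32,))
l_labels = torch.randint(0, num_labels, (32,))
l_words  = torch.randint(0, num_words,  (32,))

alpha = 0.8  # hypothetical trade-off between the structure term and the text/label terms
opt.zero_grad()
loss = (1 - alpha) * structure_loss(centers, contexts) \
     + alpha * (content_loss(c_nodes, c_words) + label_loss(l_labels, l_words))
loss.backward()
opt.step()
# After training, node_emb.weight holds the jointly learned node representations.

Because node_emb appears in both the structure and content losses, and word_out appears in both the content and label losses, gradients from any one party flow into parameters used by the others, which is the "mutual enhancement" the abstract refers to.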

Original language: English
Title of host publication: IJCAI-16 - Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI 2016
Subtitle of host publication: New York, New York, USA 9–15 July 2016
Editors: Subbarao Kambhampati
Place of Publication: Palo Alto CA USA
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Pages: 1895-1901
Number of pages: 7
ISBN (Electronic): 9781577357704
Publication status: Published - 2016
Externally published: Yes
Event: International Joint Conference on Artificial Intelligence 2016 - New York, United States of America
Duration: 9 Jul 2016 – 15 Jul 2016
Conference number: 25th
http://ijcai-16.org/

Conference

Conference: International Joint Conference on Artificial Intelligence 2016
Abbreviated title: IJCAI 2016
Country: United States of America
City: New York
Period: 9/07/16 – 15/07/16
Internet address: http://ijcai-16.org/

Cite this

Pan, S., Wu, J., Zhu, X., Zhang, C., & Wang, Y. (2016). Tri-party deep network representation. In S. Kambhampati (Ed.), IJCAI-16 - Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI 2016: New York, New York, USA 9–15 July 2016 (pp. 1895-1901). Palo Alto CA USA: Association for the Advancement of Artificial Intelligence (AAAI).
Pan, Shirui ; Wu, Jia ; Zhu, Xingquan ; Zhang, Chengqi ; Wang, Yang. / Tri-party deep network representation. IJCAI-16 - Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI 2016: New York, New York, USA 9–15 July 2016. editor / Subbarao Kambhampati. Palo Alto CA USA : Association for the Advancement of Artificial Intelligence (AAAI), 2016. pp. 1895-1901
@inproceedings{da29b0e38a1244cc9163476e25f539af,
title = "Tri-party deep network representation",
abstract = "Information network mining often requires examination of linkage relationships between nodes for analysis. Recently, network representation has emerged to represent each node in a vector format, embedding network structure, so off-the-shelf machine learning methods can be directly applied for analysis. To date, existing methods only focus on one aspect of node information and cannot leverage node labels. In this paper, we propose TriDNR, a tri-party deep network representation model, using information from three parties: node structure, node content, and node labels (if available) to jointly learn optimal node representation. TriDNR is based on our new coupled deep natural language module, whose learning is enforced at three levels: (1) at the network structure level, TriDNR exploits inter-node relationship by maximizing the probability of observing surrounding nodes given a node in random walks; (2) at the node content level, TriDNR captures node-word correlation by maximizing the co-occurrence of word sequence given a node; and (3) at the node label level, TriDNR models label-word correspondence by maximizing the probability of word sequence given a class label. The tri-party information is jointly fed into the neural network model to mutually enhance each other to learn optimal representation, and results in up to 79{\%} classification accuracy gain, compared to state-of-the-art methods.",
author = "Shirui Pan and Jia Wu and Xingquan Zhu and Chengqi Zhang and Yang Wang",
year = "2016",
language = "English",
pages = "1895--1901",
editor = "Kambhampati, {Subbarao }",
booktitle = "IJCAI-16 - Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI 2016",
publisher = "Association for the Advancement of Artificial Intelligence (AAAI)",
address = "United States of America",

}

Pan, S, Wu, J, Zhu, X, Zhang, C & Wang, Y 2016, Tri-party deep network representation. in S Kambhampati (ed.), IJCAI-16 - Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI 2016: New York, New York, USA 9–15 July 2016. Association for the Advancement of Artificial Intelligence (AAAI), Palo Alto CA USA, pp. 1895-1901, International Joint Conference on Artificial Intelligence 2016, New York, United States of America, 9/07/16.

Tri-party deep network representation. / Pan, Shirui; Wu, Jia; Zhu, Xingquan; Zhang, Chengqi; Wang, Yang.

IJCAI-16 - Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI 2016: New York, New York, USA 9–15 July 2016. ed. / Subbarao Kambhampati. Palo Alto CA USA : Association for the Advancement of Artificial Intelligence (AAAI), 2016. p. 1895-1901.

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

TY - GEN

T1 - Tri-party deep network representation

AU - Pan, Shirui

AU - Wu, Jia

AU - Zhu, Xingquan

AU - Zhang, Chengqi

AU - Wang, Yang

PY - 2016

Y1 - 2016

N2 - Information network mining often requires examination of linkage relationships between nodes for analysis. Recently, network representation has emerged to represent each node in a vector format, embedding network structure, so off-the-shelf machine learning methods can be directly applied for analysis. To date, existing methods only focus on one aspect of node information and cannot leverage node labels. In this paper, we propose TriDNR, a tri-party deep network representation model, using information from three parties: node structure, node content, and node labels (if available) to jointly learn optimal node representation. TriDNR is based on our new coupled deep natural language module, whose learning is enforced at three levels: (1) at the network structure level, TriDNR exploits inter-node relationship by maximizing the probability of observing surrounding nodes given a node in random walks; (2) at the node content level, TriDNR captures node-word correlation by maximizing the co-occurrence of word sequence given a node; and (3) at the node label level, TriDNR models label-word correspondence by maximizing the probability of word sequence given a class label. The tri-party information is jointly fed into the neural network model to mutually enhance each other to learn optimal representation, and results in up to 79% classification accuracy gain, compared to state-of-the-art methods.

AB - Information network mining often requires examination of linkage relationships between nodes for analysis. Recently, network representation has emerged to represent each node in a vector format, embedding network structure, so off-the-shelf machine learning methods can be directly applied for analysis. To date, existing methods only focus on one aspect of node information and cannot leverage node labels. In this paper, we propose TriDNR, a tri-party deep network representation model, using information from three parties: node structure, node content, and node labels (if available) to jointly learn optimal node representation. TriDNR is based on our new coupled deep natural language module, whose learning is enforced at three levels: (1) at the network structure level, TriDNR exploits inter-node relationship by maximizing the probability of observing surrounding nodes given a node in random walks; (2) at the node content level, TriDNR captures node-word correlation by maximizing the co-occurrence of word sequence given a node; and (3) at the node label level, TriDNR models label-word correspondence by maximizing the probability of word sequence given a class label. The tri-party information is jointly fed into the neural network model to mutually enhance each other to learn optimal representation, and results in up to 79% classification accuracy gain, compared to state-of-the-art methods.

UR - http://www.scopus.com/inward/record.url?scp=85006173404&partnerID=8YFLogxK

M3 - Conference Paper

SP - 1895

EP - 1901

BT - IJCAI-16 - Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI 2016

A2 - Kambhampati, Subbarao

PB - Association for the Advancement of Artificial Intelligence (AAAI)

CY - Palo Alto CA USA

ER -

Pan S, Wu J, Zhu X, Zhang C, Wang Y. Tri-party deep network representation. In Kambhampati S, editor, IJCAI-16 - Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI 2016: New York, New York, USA 9–15 July 2016. Palo Alto CA USA: Association for the Advancement of Artificial Intelligence (AAAI). 2016. p. 1895-1901