MPSketch: Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Wei Wu
  • Bin Li
  • Chuan Luo
  • Wolfgang Nejdl
  • Xuan Tan
Research Organisations

External Research Organisations

  • Central South University
  • Fudan University
  • Beihang University

Details

Original language: English
Pages (from-to): 2941-2954
Number of pages: 14
Journal: IEEE Transactions on Cybernetics
Volume: 54
Issue number: 5
Early online date: 24 Feb 2023
Publication status: Published - May 2024

Abstract

Given a network, attributed network embedding represents each node of the network in a low-dimensional space and thus brings considerable benefits for numerous graph mining tasks: a diverse set of graph tasks can be processed efficiently via a compact representation that preserves both content and structure information. Most attributed network embedding approaches, especially graph neural network (GNN) algorithms, are substantially costly in either time or space due to the expensive learning process, while the learning-free randomized hashing technique, locality-sensitive hashing (LSH), can speed up the embedding process at the expense of some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN framework and the LSH framework by adopting the LSH technique to pass messages and capture high-order proximity from a larger aggregated information pool in the neighborhood. Extensive experimental results confirm that, in node classification and link prediction, the proposed MPSketch algorithm achieves performance comparable to state-of-the-art learning-based algorithms and outperforms existing LSH algorithms, while running faster than the GNN algorithms by 3–4 orders of magnitude. More precisely, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet on average, respectively.
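The core idea in the abstract — passing messages to grow each node's "information pool" and then hashing that pool with an LSH scheme instead of training a network — can be illustrated with a minimal sketch. This is a conceptual illustration only, not the authors' implementation: it uses MinHash (a standard LSH scheme whose signature collisions approximate Jaccard similarity) as the hashing step, plain set unions as the aggregation step, and an invented toy graph with invented function names.

```python
import random

def minhash_signature(items, seeds):
    # One signature slot per seed: keep the minimum hash value over the pool.
    # Two nodes collide in a slot with probability ~= Jaccard(poolA, poolB).
    return [min(hash((seed, x)) for x in items) for seed in seeds]

def lsh_message_passing_embed(adj, attrs, hops=2, k=8):
    random.seed(0)                               # shared seeds for all nodes
    seeds = [random.random() for _ in range(k)]
    pools = {v: set(a) for v, a in attrs.items()}
    for _ in range(hops):
        # Message passing step: merge every neighbor's pool into the node's,
        # so after `hops` rounds each pool reflects the hops-hop neighborhood.
        pools = {v: pools[v] | set().union(*(pools[u] for u in adj[v]))
                 for v in adj}
    # Hashing step: compress each (large) pool into a fixed k-slot signature.
    return {v: minhash_signature(pools[v], seeds) for v in pools}

# Toy attributed graph: a triangle (0,1,2) attached to a short tail (3,4).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
attrs = {0: ["a", "b"], 1: ["a", "c"], 2: ["b", "c"], 3: ["x"], 4: ["x", "y"]}
emb = lsh_message_passing_embed(adj, attrs)
# Nodes 0 and 1 have identical 2-hop pools, hence identical signatures.
```

Signature agreement between two nodes estimates the Jaccard similarity of their aggregated neighborhood pools, which is why structurally and attribute-wise similar nodes end up with similar embeddings without any training. The actual MPSketch model uses its own sketching and aggregation scheme, so treat this purely as intuition for the LSH-as-message-passing idea.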

Keywords

    Approximation algorithms, Computer science, Electronic mail, Graph neural networks, Graph neural networks (GNNs), hashing, Message passing, message passing (MP), network embedding, Prediction algorithms, Task analysis

Cite this

MPSketch: Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding. / Wu, Wei; Li, Bin; Luo, Chuan et al.
In: IEEE Transactions on Cybernetics, Vol. 54, No. 5, 05.2024, p. 2941-2954.


Wu W, Li B, Luo C, Nejdl W, Tan X. MPSketch: Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding. IEEE Transactions on Cybernetics. 2024 May;54(5):2941-2954. Epub 2023 Feb 24. doi: 10.1109/TCYB.2023.3243763
@article{3c38d86b44084885b602199598ef7fb0,
title = "MPSketch: Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding",
abstract = "Given a network, it is well recognized that attributed network embedding represents each node of the network in a low-dimensional space, and, thus, brings considerable benefits for numerous graph mining tasks. In practice, a diverse set of graph tasks can be processed efficiently via the compact representation that preserves content and structure information. The majority of attributed network embedding approaches, especially, the graph neural network (GNN) algorithms, are substantially costly in either time or space due to the expensive learning process, while the randomized hashing technique, locality-sensitive hashing (LSH), which does not need learning, can speedup the embedding process at the expense of losing some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN framework and the LSH framework by adopting the LSH technique to pass messages and capture high-order proximity in a larger aggregated information pool from the neighborhood. The extensive experimental results confirm that in node classification and link prediction, the proposed MPSketch algorithm enjoys performance comparable to the state-of-the-art learning-based algorithms and outperforms the existing LSH algorithms, while running faster than the GNN algorithms by 3–4 orders of magnitude. More precisely, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet on average, respectively.",
keywords = "Approximation algorithms, Computer science, Electronic mail, Graph neural networks, Graph neural networks (GNNs), hashing, Message passing, message passing (MP), network embedding, Prediction algorithms, Task analysis",
author = "Wei Wu and Bin Li and Chuan Luo and Wolfgang Nejdl and Xuan Tan",
year = "2024",
month = may,
doi = "10.1109/TCYB.2023.3243763",
language = "English",
volume = "54",
pages = "2941--2954",
journal = "IEEE Transactions on Cybernetics",
issn = "2168-2267",
publisher = "IEEE Advancing Technology for Humanity",
number = "5",

}

TY - JOUR

T1 - MPSketch

T2 - Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding

AU - Wu, Wei

AU - Li, Bin

AU - Luo, Chuan

AU - Nejdl, Wolfgang

AU - Tan, Xuan

PY - 2024/5

Y1 - 2024/5

N2 - Given a network, it is well recognized that attributed network embedding represents each node of the network in a low-dimensional space, and, thus, brings considerable benefits for numerous graph mining tasks. In practice, a diverse set of graph tasks can be processed efficiently via the compact representation that preserves content and structure information. The majority of attributed network embedding approaches, especially, the graph neural network (GNN) algorithms, are substantially costly in either time or space due to the expensive learning process, while the randomized hashing technique, locality-sensitive hashing (LSH), which does not need learning, can speedup the embedding process at the expense of losing some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN framework and the LSH framework by adopting the LSH technique to pass messages and capture high-order proximity in a larger aggregated information pool from the neighborhood. The extensive experimental results confirm that in node classification and link prediction, the proposed MPSketch algorithm enjoys performance comparable to the state-of-the-art learning-based algorithms and outperforms the existing LSH algorithms, while running faster than the GNN algorithms by 3–4 orders of magnitude. More precisely, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet on average, respectively.

AB - Given a network, it is well recognized that attributed network embedding represents each node of the network in a low-dimensional space, and, thus, brings considerable benefits for numerous graph mining tasks. In practice, a diverse set of graph tasks can be processed efficiently via the compact representation that preserves content and structure information. The majority of attributed network embedding approaches, especially, the graph neural network (GNN) algorithms, are substantially costly in either time or space due to the expensive learning process, while the randomized hashing technique, locality-sensitive hashing (LSH), which does not need learning, can speedup the embedding process at the expense of losing some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN framework and the LSH framework by adopting the LSH technique to pass messages and capture high-order proximity in a larger aggregated information pool from the neighborhood. The extensive experimental results confirm that in node classification and link prediction, the proposed MPSketch algorithm enjoys performance comparable to the state-of-the-art learning-based algorithms and outperforms the existing LSH algorithms, while running faster than the GNN algorithms by 3–4 orders of magnitude. More precisely, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet on average, respectively.

KW - Approximation algorithms

KW - Computer science

KW - Electronic mail

KW - Graph neural networks

KW - Graph neural networks (GNNs)

KW - hashing

KW - Message passing

KW - message passing (MP)

KW - network embedding

KW - Prediction algorithms

KW - Task analysis

UR - http://www.scopus.com/inward/record.url?scp=85149385879&partnerID=8YFLogxK

U2 - 10.1109/TCYB.2023.3243763

DO - 10.1109/TCYB.2023.3243763

M3 - Article

AN - SCOPUS:85149385879

VL - 54

SP - 2941

EP - 2954

JO - IEEE Transactions on Cybernetics

JF - IEEE Transactions on Cybernetics

SN - 2168-2267

IS - 5

ER -
