Details
Original language | English |
---|---|
Pages (from-to) | 2941-2954 |
Number of pages | 14 |
Journal | IEEE Transactions on Cybernetics |
Volume | 54 |
Issue number | 5 |
Early online date | 24 Feb 2023 |
Publication status | Published - May 2024 |
Abstract
Attributed network embedding represents each node of a network in a low-dimensional space and thus brings considerable benefits to numerous graph mining tasks. In practice, a diverse set of graph tasks can be processed efficiently via compact representations that preserve both content and structure information. Most attributed network embedding approaches, especially graph neural network (GNN) algorithms, are substantially costly in either time or space because of their expensive learning process, whereas the randomized hashing technique locality-sensitive hashing (LSH), which requires no learning, can speed up the embedding process at the expense of some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN and LSH frameworks by adopting the LSH technique to pass messages and capture high-order proximity from a larger pool of information aggregated over the neighborhood. Extensive experimental results confirm that, in node classification and link prediction, the proposed MPSketch algorithm achieves performance comparable to state-of-the-art learning-based algorithms and outperforms existing LSH algorithms, while running faster than the GNN algorithms by 3–4 orders of magnitude. More precisely, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet on average, respectively.
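To make the general idea concrete, the sketch below illustrates the flavor of LSH-based message passing described in the abstract: each node's attribute pool is enlarged with its neighbors' attributes for a few hops and then compressed into a MinHash signature, so nodes with overlapping high-order neighborhoods receive similar compact codes. This is only a minimal illustration under assumed choices (MinHash as the LSH family, a toy graph, the hop count K, and the signature length D); it is not the MPSketch algorithm from the article.

```python
# Minimal sketch of LSH-style message passing over node attribute sets.
# NOT the MPSketch algorithm from the paper: the toy graph, the MinHash
# family, the hop count K, and the signature length D are assumptions
# made purely for illustration.
import random

D = 8                  # signature length (number of hash functions), assumed
K = 2                  # number of message-passing hops, assumed
PRIME = (1 << 31) - 1  # modulus for the universal hash functions

random.seed(0)
hash_params = [(random.randrange(1, PRIME), random.randrange(PRIME)) for _ in range(D)]

def minhash(items):
    """MinHash signature of a set of hashable node attributes."""
    return [min((a * hash(x) + b) % PRIME for x in items) for a, b in hash_params]

# Toy attributed graph: adjacency lists and per-node attribute sets (assumed data).
adj = {"u": ["v", "w"], "v": ["u"], "w": ["u"]}
attrs = {"u": {"deep", "learning"}, "v": {"graph", "learning"}, "w": {"hashing"}}

# Message passing: each hop enlarges a node's information pool with its
# neighbours' pools, so after K hops the pool reflects K-order proximity.
pool = {n: set(s) for n, s in attrs.items()}
for _ in range(K):
    pool = {n: pool[n].union(*(pool[m] for m in adj[n])) for n in adj}

# Compress each aggregated pool into a compact signature (the "embedding").
# (Python salts string hashing per process; set PYTHONHASHSEED for repeatable output.)
embeddings = {n: minhash(pool[n]) for n in adj}
print(embeddings["u"])
```

Because MinHash collisions occur with probability equal to the Jaccard similarity of the underlying sets, nodes whose K-hop attribute pools overlap heavily agree on many signature positions, which is what makes such compact codes usable for downstream tasks like node classification and link prediction.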
Keywords
- Approximation algorithms, Computer science, Electronic mail, Graph neural networks, Graph neural networks (GNNs), hashing, Message passing, message passing (MP), network embedding, Prediction algorithms, Task analysis
ASJC Scopus subject areas
- Computer Science(all)
- Software
- Engineering(all)
- Control and Systems Engineering
- Information Systems
- Human-Computer Interaction
- Computer Science Applications
- Electrical and Electronic Engineering
Cite this
In: IEEE Transactions on Cybernetics, Vol. 54, No. 5, 05.2024, p. 2941-2954.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - MPSketch
T2 - Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding
AU - Wu, Wei
AU - Li, Bin
AU - Luo, Chuan
AU - Nejdl, Wolfgang
AU - Tan, Xuan
N1 - Publisher Copyright: IEEE
PY - 2024/5
Y1 - 2024/5
N2 - Attributed network embedding represents each node of a network in a low-dimensional space and thus brings considerable benefits to numerous graph mining tasks. In practice, a diverse set of graph tasks can be processed efficiently via compact representations that preserve both content and structure information. Most attributed network embedding approaches, especially graph neural network (GNN) algorithms, are substantially costly in either time or space because of their expensive learning process, whereas the randomized hashing technique locality-sensitive hashing (LSH), which requires no learning, can speed up the embedding process at the expense of some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN and LSH frameworks by adopting the LSH technique to pass messages and capture high-order proximity from a larger pool of information aggregated over the neighborhood. Extensive experimental results confirm that, in node classification and link prediction, the proposed MPSketch algorithm achieves performance comparable to state-of-the-art learning-based algorithms and outperforms existing LSH algorithms, while running faster than the GNN algorithms by 3–4 orders of magnitude. More precisely, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet on average, respectively.
AB - Attributed network embedding represents each node of a network in a low-dimensional space and thus brings considerable benefits to numerous graph mining tasks. In practice, a diverse set of graph tasks can be processed efficiently via compact representations that preserve both content and structure information. Most attributed network embedding approaches, especially graph neural network (GNN) algorithms, are substantially costly in either time or space because of their expensive learning process, whereas the randomized hashing technique locality-sensitive hashing (LSH), which requires no learning, can speed up the embedding process at the expense of some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN and LSH frameworks by adopting the LSH technique to pass messages and capture high-order proximity from a larger pool of information aggregated over the neighborhood. Extensive experimental results confirm that, in node classification and link prediction, the proposed MPSketch algorithm achieves performance comparable to state-of-the-art learning-based algorithms and outperforms existing LSH algorithms, while running faster than the GNN algorithms by 3–4 orders of magnitude. More precisely, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet on average, respectively.
KW - Approximation algorithms
KW - Computer science
KW - Electronic mail
KW - Graph neural networks
KW - Graph neural networks (GNNs)
KW - hashing
KW - Message passing
KW - message passing (MP)
KW - network embedding
KW - Prediction algorithms
KW - Task analysis
UR - http://www.scopus.com/inward/record.url?scp=85149385879&partnerID=8YFLogxK
U2 - 10.1109/TCYB.2023.3243763
DO - 10.1109/TCYB.2023.3243763
M3 - Article
AN - SCOPUS:85149385879
VL - 54
SP - 2941
EP - 2954
JO - IEEE Transactions on Cybernetics
JF - IEEE Transactions on Cybernetics
SN - 2168-2267
IS - 5
ER -