
SS-HT: Self-Supervised Hypergraph Training Framework via Structure-Aware Learning

Paper · Python 3.10+ · PyTorch 1.13.1

Official implementation of the paper "Self-Supervised Hypergraph Training Framework via Structure-Aware Learning" published in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2025.

Authors: Yifan Feng, Shiquan Liu, Shihui Ying, Shaoyi Du, Zongze Wu, Yue Gao.


🌟 Overview

Hypergraphs excel at modeling complex, beyond-pairwise correlations. However, integrating hypergraphs into self-supervised learning (SSL) is challenging due to high-order structural variations. SS-HT introduces:

  • "Masking and ReMasking" Strategy: Enhances feature reconstruction in Hypergraph Neural Networks (HGNNs).
  • Structure-Aware Learning: A metric strategy for local high-order correlation changes using Wasserstein distance.
  • Strong Performance: Significant improvements in low-label settings (e.g., 32% gain on Cora-CC with 1% labels).

Figure: the SS-HT pipeline.
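
A minimal sketch of the masking-and-remasking idea in PyTorch. The encoder, decoder, and mask_token below are placeholders, and the hypergraph structure argument that HGNN layers take is omitted; see models/ for the actual implementation.

import torch
import torch.nn.functional as F

def mask_and_remask_loss(x, encoder, decoder, mask_token, mask_rate=0.7):
    # x: (N, F) node features; encoder/decoder stand in for HGNN modules
    # (the hypergraph incidence structure they consume is omitted here).
    n = x.size(0)
    idx = torch.randperm(n, device=x.device)[: int(mask_rate * n)]

    # Masking: replace the selected node attributes with a learnable [MASK] token.
    x_masked = x.clone()
    x_masked[idx] = mask_token

    z = encoder(x_masked)  # encode the corrupted features

    # ReMasking: re-corrupt the latent codes at the same positions before
    # decoding, so reconstruction must rely on structural context rather
    # than a shortcut through the encoder output.
    z = z.clone()
    z[idx] = 0.0
    x_rec = decoder(z)

    # Reconstruction loss on the masked nodes only (MSE as a placeholder
    # criterion; the released code may use a different one).
    return F.mse_loss(x_rec[idx], x[idx])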

Abstract

Hypergraphs, with their ability to model complex, beyond-pairwise correlations, present a significant advancement over traditional graphs for capturing intricate relational data across diverse domains. However, the integration of hypergraphs into self-supervised learning (SSL) frameworks has been hindered by the intricate nature of high-order structural variations. This paper introduces the Self-Supervised Hypergraph Training Framework via Structure-Aware Learning (SS-HT), designed to enhance the perception and measurement of these variations within hypergraphs. The SS-HT framework employs a “Masking and ReMasking” strategy to bolster feature reconstruction in Hypergraph Neural Networks (HGNNs), addressing the limitations of traditional SSL methods. It also introduces a metric strategy for local high-order correlation changes, streamlining the computational efficiency of structural distance calculations. Extensive experiments on 11 datasets demonstrate SS-HT’s superior performance over existing SSL methods for both low-order and high-order data. Notably, the framework significantly reduces data labeling dependency, achieving a 32% improvement over HGNN in the downstream task fine-tuning phase under the 1% labeled data setting on the Cora-CC dataset. Ablation studies further validate SS-HT’s scalability and its capacity to augment the performance of various HGNN methods, underscoring its robustness and applicability in real-world scenarios.
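
The structure-aware component measures local high-order correlation changes with Wasserstein distance. A minimal sketch, assuming a node's local structure is summarized by the sizes of the hyperedges it belongs to (an illustrative profile, not necessarily the paper's exact construction):

import numpy as np
from scipy.stats import wasserstein_distance

def local_structure_distance(H_a, H_b, node):
    # H_a, H_b: (N, E) binary incidence matrices before/after a structural change.
    def profile(H):
        edges = np.nonzero(H[node])[0]   # hyperedges containing `node`
        sizes = H[:, edges].sum(axis=0)  # size of each such hyperedge
        return sizes if sizes.size else np.zeros(1)
    return wasserstein_distance(profile(H_a), profile(H_b))

# Toy example: 4 nodes, 3 hyperedges; drop node 3 from hyperedge 1.
H0 = np.array([[1, 0, 1],
               [1, 1, 0],
               [0, 1, 1],
               [0, 1, 0]])
H1 = H0.copy()
H1[3, 1] = 0
print(local_structure_distance(H0, H1, node=1))  # 0.5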


📂 Project Structure

SS-HT/
├── config/             # Configuration management (YAML files)
├── data/               # Data loading, augmentation, and splitting logic
├── doc/                # Documentation assets (images)
├── models/             # Core model architectures (HGNN, losses, Wasserstein distance)
├── train/              # Training and evaluation loops
├── utils/              # Utility functions (logging, seeding)
├── main.py             # Main entry point for training and evaluation
├── requirements.txt    # Project dependencies
└── readme.md           # This file

🚀 Installation

1. Clone the Repository

git clone https://github.com/iMoonLab/SS-HT.git
cd SS-HT

2. Create Environment (Conda Recommended)

conda create -n SS-HT python=3.10
conda activate SS-HT
pip install -r requirements.txt
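
Optionally, verify that the pinned PyTorch build imports correctly:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"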

🛠️ Usage

Training & Evaluation

To train the SS-HT model and evaluate it on node classification tasks with default settings:

python main.py

Configuration

You can customize the experiments by modifying config/config.yaml. Key parameters include (an illustrative fragment follows the list):

  • data_name: Dataset choice (e.g., CC-Cora, CC-Citeseer, DBLP-Paper).
  • encoder_type: GNN/HGNN architecture (hgnn, hgnnp, gat, gcn).
  • mask_rate: Attribute masking ratio (default: 0.7).
  • cl & attr: Weighting factors for contrastive and reconstruction losses.

📊 Supported Datasets

The framework supports various hypergraph datasets including:

  • Citation Networks: Cora, Citeseer, CA-Cora, CC-Cora, CC-Citeseer.
  • Academic Databases: DBLP-Paper, DBLP-Conf, DBLP-Term.
  • Movie Networks: IMDB-Actor, IMDB-Director.

📝 Citation

If you find this work useful, please consider citing our paper:

@article{feng2025hypersupervised,
  title={Self-Supervised Hypergraph Training Framework via Structure-Aware Learning},
  author={Yifan Feng and Shiquan Liu and Shihui Ying and Shaoyi Du and Zongze Wu and Yue Gao},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2025},
  publisher={IEEE}
}

📬 Contact

SS-HT is maintained by iMoon-Lab, Tsinghua University. For questions, please contact Yifan Feng.
