Research

My research lies at the intersection of machine learning methodology and trustworthy AI systems, focusing on the fairness, privacy, and explainability of algorithms.
Interested in joining the team? See our Group page for open positions.

🕸️ Theme 1: Algorithmic Fairness

My work aims to build fair Graph Neural Networks (GNNs) whose predictions are unbiased with respect to sensitive attributes such as gender, race, and region.

  • Key Focus: Designing GNN architectures that maintain predictive performance under fairness-aware constraints, as sketched below.
  • Recent Projects: Developing fairness-aware encodings to improve the fairness of Graph Transformers; using spectral analysis to harmonize fairness and utility in GNNs.
📃 Related Publications: [FairGT: A Fairness-aware Graph Transformer] | [FUGNN: Harmonizing Fairness and Utility in Graph Neural Networks]
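
To make the fairness-utility trade-off concrete, here is a minimal sketch (not the FairGT or FUGNN method, just the general regularization idea) of a fairness-aware training loss: a standard utility term plus a differentiable demographic-parity penalty. It assumes PyTorch; `logits`, `labels`, and `sensitive` are hypothetical placeholders for a node classifier's outputs, node labels, and a binary sensitive attribute.

```python
# Minimal sketch of a fairness-aware loss, assuming PyTorch;
# not the FairGT/FUGNN method.
import torch
import torch.nn.functional as F

def demographic_parity_gap(logits: torch.Tensor, sensitive: torch.Tensor) -> torch.Tensor:
    """|E[y_hat | s=0] - E[y_hat | s=1]|, computed on soft predictions
    so the penalty stays differentiable."""
    probs = torch.sigmoid(logits).squeeze(-1)
    return (probs[sensitive == 0].mean() - probs[sensitive == 1].mean()).abs()

def fairness_aware_loss(logits, labels, sensitive, lam=0.5):
    # Utility term (binary cross-entropy) plus a fairness penalty;
    # lam trades accuracy against demographic parity.
    utility = F.binary_cross_entropy_with_logits(logits.squeeze(-1), labels.float())
    return utility + lam * demographic_parity_gap(logits, sensitive)

# Toy usage: 6 nodes, binary labels, binary sensitive attribute.
logits = torch.randn(6, 1, requires_grad=True)
labels = torch.tensor([0, 1, 1, 0, 1, 0])
sensitive = torch.tensor([0, 0, 0, 1, 1, 1])
fairness_aware_loss(logits, labels, sensitive).backward()
```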

🔒 Theme 2: Algorithmic Privacy

This theme focuses on the robustness and security of AI, especially Multimodal Transformers, in high-stakes domains such as healthcare. It addresses vulnerabilities such as backdoor attacks to ensure the integrity and reliability of AI-driven patient prognosis systems.

  • Key Focus: Enhancing the security of Multimodal Transformers (e.g., ViT-based models) for disease prognosis against backdoor threats in integrated clinical data.
  • Recent Projects: Proposing RMTrans, a robust multimodal framework; developing a patch-based processing method that mitigates trigger overfitting and enhances global feature learning, as sketched below.
📃 Related Publications: [RMTrans: Robust Multimodal Transformers for Patient Prognosis under Backdoor Threats]
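
As an illustration of the patch idea, here is a minimal sketch (assuming PyTorch; not the RMTrans implementation) of ViT-style patchification plus random patch dropping during training, so that a small localized backdoor trigger is less likely to survive intact to the Transformer input. The patch size and drop rate are illustrative assumptions.

```python
# Minimal sketch of patch-based preprocessing against localized triggers;
# not the RMTrans implementation.
import torch

def patchify(images: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """(B, C, H, W) -> (B, N, C*patch*patch) non-overlapping patches, as in ViT."""
    b, c, h, w = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)  # (B, C, H/p, W/p, p, p)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * patch * patch)

def random_patch_drop(patches: torch.Tensor, drop_rate: float = 0.25) -> torch.Tensor:
    """Zero out a random subset of patches per sample during training, so no
    single local region (e.g., a trigger patch) can dominate the prediction."""
    b, n, _ = patches.shape
    keep = (torch.rand(b, n, 1, device=patches.device) > drop_rate).float()
    return patches * keep

imgs = torch.randn(2, 3, 224, 224)          # toy batch
tokens = random_patch_drop(patchify(imgs))  # feed to a Transformer encoder
```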

💡 Theme 3: Algorithmic Explainability

Comprehensible neural network explanations are crucial for decision-making, especially when models face malicious perturbations. This research addresses the limitations of adversarial training by generating interpretable, logically consistent explanations even under unknown perturbations, ensuring that explanations align with real-world logic.

  • Key Focus: Designing the AGAIN (fActor GrAph-based Interpretable neural Network) framework to generate comprehensible explanations under unknown perturbations by directly integrating logical rules during inference.
  • Recent Projects: Constructing a factor graph that expresses logical rules, then identifying and rectifying logical errors in explanations; proposing an interactive intervention-switch strategy that rectifies explanations without retraining on perturbations, as sketched below.
📃 Related Publications: [Factor Graph-based Interpretable Neural Networks]
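
For intuition about the factor-graph idea, here is a toy sketch (plain Python; not the AGAIN implementation): logical rules among explanation concepts are encoded as soft implication factors, an explanation is scored by the product of factor values, and a single-concept flip, loosely analogous to an intervention switch, rectifies the most inconsistent concept. The concepts and rules are invented for illustration.

```python
# Toy sketch of rule checking and rectification on a factor graph;
# not the AGAIN implementation. Concepts and rules are invented.
CONCEPTS = ["wings", "feathers", "can_fly"]

def implies(a: bool, b: bool, strength: float = 0.9) -> float:
    # Soft implication factor: high value when the rule a -> b holds.
    return strength if (not a or b) else 1.0 - strength

FACTORS = [
    lambda z: implies(z["can_fly"], z["wings"]),   # "flies" implies "has wings"
    lambda z: implies(z["feathers"], z["wings"]),  # "feathered" implies "has wings"
]

def score(assignment: dict) -> float:
    # Unnormalized factor-graph score: product over all rule factors.
    s = 1.0
    for f in FACTORS:
        s *= f(assignment)
    return s

def rectify(explanation: dict) -> dict:
    # Flip the single concept whose flip most improves logical consistency,
    # loosely analogous to an intervention switch on one explanation unit.
    best, best_score = explanation, score(explanation)
    for c in CONCEPTS:
        candidate = {**explanation, c: not explanation[c]}
        if score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
    return best

raw = {"wings": False, "feathers": True, "can_fly": True}  # violates both rules
print(score(raw), "->", rectify(raw))  # flips "wings" to restore consistency
```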