Responsible AI (RAI) Lab @ FSU

Foundations of Responsible AI · Florida State University · Tallahassee, FL

🔬 Dedicated to Making AI Trustworthy, Secure, and Fair 🤖

Home Page · Google Scholar · GitHub Stars · Lab Email


🔥 About RAI Lab

"Making AI responsible is even more important than developing AI itself."

We are the Responsible AI (RAI) Lab at Florida State University, led by Prof. Yushun Dong. Our research sits at the intersection of AI Security, AI Fairness, and Graph Intelligence — working toward AI systems that are not only powerful, but trustworthy.


🚀 Research Focus

| Area | Topics |
| --- | --- |
| 🔐 AI Security & Privacy | Model Extraction Attacks & Defenses, Adversarial Robustness, LLM Security |
| 🕸️ Graph Neural Networks | GNN Robustness, Graph-Based Model Fingerprinting, Structural Learning |
| 🤝 Responsible AI | Fairness, Explainability, Certified Defenses |
| 🌍 AI for Science | Disaster Prediction, Wildfire Risk, Typhoon Forecasting |

⭐ Featured Projects

| Repository | Description |
| --- | --- |
| 🧠 LangSkills | Language skills evaluation for LLMs |
| 🌪️ TyphoFormer | 🏆 Best Short Paper @SIGSPATIAL'25 · LLM-Augmented Typhoon Prediction |
| ⚔️ ATOM | Detecting Query-Based Model Extraction Attacks on GNNs |
| 🔬 PyGIP | Graph Intelligence & Privacy Framework |
| 🔥 PyHazards | AI-Powered Hazard Prediction & Risk Assessment Framework |

🏆 Recent Highlights

  • 🥇 Best Short Paper Award — ACM SIGSPATIAL 2025 · TyphoFormer
  • 🥈 Second Prize — ICDM 2025 BlueSky Track
  • 📝 Publications at ICML / KDD / AAAI / ICLR / WSDM / EMNLP and more

👥 Join Us

We are actively recruiting PhD students, research interns, and visiting scholars!

If you are passionate about Responsible AI, security, or AI for social good, we'd love to hear from you.

📩 yushun.dong@fsu.edu · 🌐 Lab Website · 📋 Open Projects


RAI Lab · Department of Computer Science · Florida State University

Pinned Repositories

  1. ATOM (Public) — ATOM: A Framework of Detecting Query-Based Model Extraction Attacks for Graph Neural Networks. Python · 17 stars · 2 forks

  2. MISLEADER (Public, forked from XueqiC/MISLEADER) — Implementation of the paper "MISLEADER: Defending against Model Extraction with Ensembles of Distilled Models". Python · 1 star · 1 fork

  3. KDD2025_Tutorial (Public) — KDD 2025 Tutorial: Model Extraction Attacks and Defenses for Large Language Models. HTML · 1 star

  4. CEGA (Public) — Open-source code for the ICML 2025 paper "CEGA: A Cost-Effective Approach for Graph-Based Model Extraction Attacks". Jupyter Notebook · 1 star
