Thanks for your interest. I will recruit two fully funded Graduate Research Assistants (GRAs) in Spring 2025 or Fall 2025. If you are interested in joining my team, please apply to the MSU CSE graduate program and mention my name in your application statement. Students with CS, math, and EE backgrounds are particularly encouraged to apply!
Below are several potential research projects in my group.
Out-of-distribution generalization. The goal is to learn a domain-agnostic prediction function from a set of training domains such that it performs well on new, unseen domains. There are two key research challenges: learning a good foundation model from the training domains and deriving generalization bounds for unseen domains.
Fairness and Robustness of LLMs. This project seeks to determine whether black-box large language models (LLMs) consistently deliver fair and robust results for a diverse range of users and customers.
Transparency of transfer learning. This project aims to explain what knowledge is transferred during transfer learning, e.g., which essential knowledge in pre-trained LLMs can be leveraged for user-specific downstream tasks, and how to efficiently select a suitable pre-trained LLM from thousands of candidates on HuggingFace. A related problem is uncertainty quantification for transfer learning models, i.e., determining how confident the models are in their predictions.
Fundamental trade-off between prediction accuracy and trustworthiness under distribution shifts. This project aims to theoretically understand how trustworthy properties (e.g., privacy and fairness) affect transfer learning performance.