About Me

Hi, I’m Yao-An Yang, a senior majoring in Computer Science and Physics at the University of Michigan.

I am currently advised by Prof. Joyce Chai, working with Ziqiao Ma on Vision Language Model grounding and fine-grained understanding, as well as Large Language Model memory and reactions to patterns. Previously, I was advised by Prof. Danai Koutra, where I worked on link prediction with graph neural networks with Dr. Jiong Zhu and on LLM-GNN models for text-attributed graphs with Donald Loveland.

Before that, I worked on building cryogenic particle detectors with Prof. Wolfgang Lorenzon.

Research

My research spans representation learning, multimodality, and mechanistic understanding. In the long term, I am interested in building and understanding intelligent systems that can continually and efficiently learn with natural supervision and embodied interactions.

  • Representation Learning: I am curious about how properties of data, such as structure and modality, interact with model architecture and training to shape learning. I believe developing such understanding requires examining models not only in terms of aggregate performance, but also through localized analysis of how their internal representations change during training and inference. Additionally, I am interested in building more data-efficient models.

  • Multimodality: I am interested in exploring embodied or multimodal contexts as sources of richer learning signals. While most current multimodal models process and reason primarily in one modality, treating the others as auxiliary information, I aim to enable models to draw on the strengths of each modality and reason seamlessly across them to extract the most information.

  • Mechanistic Understanding: Working with multimodality, I observed that while many recent works show models exhibiting traces of emergent meaning across modalities, few rigorously test whether the correlations they learn reflect genuine abstraction or mere statistical association. This observation sparked my interest in a mechanistic view of learning: examining whether and how internal representations track state, form abstract structures, or evolve with data properties.

You can refer to my research statement for more detail.

Misc: Coming from a physics background, I am also interested in exploring how machine learning can be applied to physics and vice versa. More generally, I enjoy learning about and exploring new fields and research directions.

Publications

  • Donald Loveland, Yao-An Yang, Danai Koutra. 2025. Glance for Context: Learning When to Leverage LLMs for Node-Aware GNN-LLM Fusion. Under review at the International Conference on Learning Representations (ICLR), 2026.
    [Paper]

  • Jiong Zhu*, Gaotang Li*, Yao-An Yang, Jing Zhu, Xuehao Cui, Danai Koutra. 2024. On the Impact of Feature Heterophily on Link Prediction with Graph Neural Networks. Advances in Neural Information Processing Systems (NeurIPS), 2024.
    [Paper] [Code]

(*equal contributions)

Awards and Honors

  • Dec 2022 – May 2025: University Honors (all six semesters)
  • Mar 2024 & Mar 2025: James B. Angell Scholar (two-time recipient)
  • Dec 2024: NeurIPS Scholar Award
  • Mar 2023: William J. Branstrom Freshman Prize (top 5% of freshman class)

Education

  • Sep 2022 – present: B.S. in Physics and Computer Science, University of Michigan
  • Sep 2019 – Jun 2022: Taipei Municipal JianGuo High School