Yuekai Sun

Research

AI systems can match or even outperform humans on many tasks, but they are also prone to mistakes on unfamiliar inputs. Unfortunately, AI systems often encounter unfamiliar inputs when deployed in the real world because training data is seldom diverse enough to reflect the diversity of real-world inputs. This not only degrades performance but can also lead to AI safety violations. For example,

  1. polygenic risk scores are many times less accurate for people of non-European ancestry because most subjects in genome-wide association studies are of European ancestry [Martin et al.];
  2. word embeddings perpetuate gender stereotypes: “man is to computer programmer as woman is to homemaker” [Bolukbasi et al.];
  3. malicious users can circumvent ChatGPT’s safety training with adversarial inputs (“jailbreak” attacks) [Wei et al.].

My research leverages statistical science to improve the safety of AI in the real world. Towards this goal, we work on:

AI alignment & safety

Experts predict that AI will exceed human capabilities in the near future [Bengio et al.]. While superintelligent AI could transform the world, it would be equally dangerous in the hands of malicious actors. We develop new theories and methods to help anticipate and manage AI risks. Some representative papers are:

  1. Aligners: Decoupling LLMs and Alignment
    L Ngweta, M Agarwal, S Maity, A Gittens, Y Sun, M Yurochkin.
  2. tinyBenchmarks: evaluating LLMs with few examples
    F Maia Polo, L Weber, L Choshen, Y Sun, G Xu, M Yurochkin.
  3. Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees
    S Xue, M Yurochkin, Y Sun. NeurIPS 2022.

Algorithmic fairness

My foray into algorithmic fairness began with an effort to develop a suite of algorithms to help practitioners implement individual fairness (IF), an intuitive notion of algorithmic fairness that requires algorithms to “treat similar inputs similarly”. The suite includes algorithms for:

  1. aligning similarity metrics with user feedback,
  2. auditing algorithms for violations of IF,
  3. training individually fair AI.
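
One standard way to make “treat similar inputs similarly” precise (following Dwork et al.; the metrics used in our papers may differ in detail) is a Lipschitz condition: a model $h$ is individually fair with respect to an input metric $d_{\mathcal{X}}$ and an output metric $d_{\mathcal{Y}}$ if

$$ d_{\mathcal{Y}}\bigl(h(x), h(x')\bigr) \;\le\; L\, d_{\mathcal{X}}(x, x') \quad \text{for all inputs } x, x', $$

so that inputs deemed similar by $d_{\mathcal{X}}$ receive similar outputs. The three items above correspond to choosing $d_{\mathcal{X}}$, checking this condition, and enforcing it during training.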

Before our work, IF was dismissed as impractical because there was no practical way to pick similarity metrics for many AI tasks and no practical algorithms for training individually fair AI. The (similarity) metric learning algorithms in the suite address the first issue by helping practitioners align similarity metrics with user feedback, and the training algorithms address the second. Here are some representative papers:

  1. Post-processing for Individual Fairness
    F Petersen, D Mukherjee, Y Sun, M Yurochkin. NeurIPS 2021.
  2. SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness
    M Yurochkin, Y Sun. ICLR 2021.
  3. Two simple ways to learn individual fairness metrics from data
    D Mukherjee, M Yurochkin, M Banerjee, Y Sun. ICML 2020.

Here is a tutorial (by our collaborators at IBM Research) on our algorithmic fairness research and the inFairness package.

Learning under distribution shifts

AI systems often encounter adversarial and out-of-distribution inputs in the real world, but the pernicious effects of distribution shifts are poorly understood. This is a form of technical debt that hinders us from anticipating AI risks before they arise, and we seek to repay it. A key insight from our work is that, contrary to popular belief, aligning AI systems to be fair, safe, and transparent need not come at the cost of performance; in fact, alignment can improve (out-of-distribution) performance. Some representative papers are:

  1. An Investigation of Representation and Allocation Harms in Contrastive Learning
    S Maity, M Agarwal, M Yurochkin, Y Sun. ICLR 2024.
  2. Domain Adaptation meets Individual Fairness. And they get along.
    D Mukherjee, F Petersen, M Yurochkin, Y Sun. NeurIPS 2022.
  3. Understanding new tasks through the lens of training data via exponential tilting
    S Maity, M Yurochkin, M Banerjee, Y Sun. ICLR 2023.
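
As a rough illustration of how the third paper above models distribution shifts (its exact formulation may differ in detail), exponential tilting posits that the target distribution is a reweighting of the source (training) distribution:

$$ p_{\text{target}}(x) \;\propto\; \exp\bigl(\theta^\top T(x)\bigr)\, p_{\text{source}}(x), $$

where $T(x)$ is a vector of features of the input and $\theta$ parametrizes the shift; estimating $\theta$ (e.g., from unlabeled target samples) makes it possible to reweight the training data when assessing or adapting to a new task.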