Yuekai Sun

Research

AI can match or even outperform humans on many tasks, but it is also prone to mistakes on unfamiliar inputs. Unfortunately, AI often encounters unfamiliar inputs when deployed in the real world because training data is seldom diverse enough to reflect the diversity of real-world inputs. This not only degrades the AI’s performance but can also unduly harm protected demographic groups. For example,

  1. gender classification AI tends to misclassify dark-skinned females because they are underrepresented in training data [Buolamwini & Gebru];
  2. chest X-ray assessment AI performs worse when it is evaluated at hospitals that were not included in the training dataset [Zech et al.];
  3. polygenic risk scores are many times less accurate for people of non-European ancestry because most subjects in genome-wide association studies are of European ancestry [Martin et al.].

My research leverages statistical science to improve the robustness of AI in the real world. Towards this goal, we work on:

AI alignment and safety

AI often encounters adversarial and out-of-distribution inputs in the real world. Although there are many methods for training adversarially and distributionally robust AI, most lack theoretical justification. This is a form of technical debt that hinders us from anticipating AI safety issues before they occur. We seek to repay this technical debt. Some recent papers are:

  1. Domain Adaptation meets Individual Fairness. And they get along.
    D Mukherjee, F Petersen, M Yurochkin, Y Sun. NeurIPS 2022.
  2. Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees
    S Xue, M Yurochkin, Y Sun. NeurIPS 2022.
  3. Does enforcing fairness mitigate biases caused by subpopulation shift?
    S Maity, D Mukherjee, M Yurochkin, Y Sun. NeurIPS 2021.
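
To make "adversarially robust training" concrete, here is a minimal sketch (in Python with NumPy, for a plain logistic regression model) of the FGSM-style perturbation step that many adversarial training recipes build on. The function name and interface are illustrative assumptions, not the methods analyzed in the papers above.

    import numpy as np

    def fgsm_perturb(X, y, w, b, epsilon=0.1):
        """Illustrative FGSM-style step for binary logistic regression:
        nudge each input in the direction that increases its loss.
        Adversarial training then refits the model on the perturbed inputs."""
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities, shape (n,)
        grad_x = np.outer(p - y, w)              # gradient of the logistic loss w.r.t. each input
        return X + epsilon * np.sign(grad_x)     # L-infinity bounded perturbation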

Algorithmic fairness

We developed a suite of algorithms to help practitioners implement individual fairness (IF), an intuitive notion of algorithmic fairness that requires algorithms to "treat similar inputs similarly" (formalized below). The suite includes algorithms for:

  1. aligning similarity metrics with user feedback,
  2. auditing algorithms for violations of IF,
  3. training individually fair AI.
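
Concretely, IF is usually formalized (following Dwork et al.) as a Lipschitz condition on the learned model h: for a task-specific metric d_X on inputs, a metric d_Y on outputs, and a constant L > 0,

    % Individual fairness as a Lipschitz condition (Dwork et al.):
    % inputs that are close under d_X must receive outputs that are close under d_Y.
    \[
      d_Y\bigl(h(x_1), h(x_2)\bigr) \;\le\; L\, d_X(x_1, x_2)
      \qquad \text{for all inputs } x_1, x_2.
    \]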

Before our work, IF was dismissed as impractical because there was no practical way to pick similarity metrics for many AI tasks. The (similarity) metric learning algorithms in the suite address this issue by helping practitioners align similarity metrics with user feedback; a toy sketch of this idea follows the paper list. Here are some representative papers:

  1. Post-processing for Individual Fairness
    F Petersen, D Mukherjee, Y Sun, M Yurochkin. NeurIPS 2021.
  2. SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness
    M Yurochkin, Y Sun. ICLR 2021.
  3. Two simple ways to learn individual fairness metrics from data
    D Mukherjee, M Yurochkin, M Banerjee, Y Sun. ICML 2020.
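
Here is the toy sketch mentioned above (Python with NumPy): one simple way to turn user feedback into a fair metric is to estimate the directions along which user-labeled "comparable" pairs differ and project them out of a Mahalanobis-style distance. The function name and interface are hypothetical illustrations, not the algorithms from the papers above.

    import numpy as np

    def learn_fair_metric(comparable_pairs, n_sensitive_directions=1):
        """Hypothetical sketch: build a Mahalanobis-style metric that ignores
        the directions along which user-labeled comparable pairs differ."""
        # Difference vectors of comparable pairs span the "sensitive" directions
        diffs = np.stack([x1 - x2 for x1, x2 in comparable_pairs])
        # Top right-singular vectors approximate the sensitive subspace
        _, _, vt = np.linalg.svd(diffs, full_matrices=False)
        sensitive = vt[:n_sensitive_directions]        # orthonormal rows, shape (k, d)
        # The metric projects out the sensitive subspace before measuring distance
        proj = np.eye(diffs.shape[1]) - sensitive.T @ sensitive
        def fair_distance(x1, x2):
            delta = np.asarray(x1) - np.asarray(x2)
            return float(np.sqrt(max(delta @ proj @ delta, 0.0)))
        return fair_distance

Pairs the user flags as comparable end up close under such a metric, so enforcing the Lipschitz condition above pushes the model to treat them alike.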

Here is a tutorial (by our collaborators at IBM Research) on our algorithmic fairness research and the inFairness package.

Learning under distribution shifts

Many instances of algorithmic bias are caused by distribution shifts (see the preceding examples). To develop effective algorithmic fairness practices, we must address the underlying distribution shifts. This has led to an ongoing effort to develop statistically principled methods for transfer learning and domain adaptation. Some recent papers are:

  1. Understanding new tasks through the lens of training data via exponential tilting
    S Maity, M Yurochkin, M Banerjee, Y Sun. ICLR 2023.
  2. Predictor-corrector algorithms for stochastic optimization under gradual distribution shift
    S Maity, D Mukherjee, M Banerjee, Y Sun. ICLR 2023.
  3. Minimax optimal approaches to the label shift problem
    S Maity, Y Sun, M Banerjee. JMLR (2022).
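
As a concrete instance of the shifts above, label shift assumes the class proportions change between training and deployment while the class-conditional feature distributions stay fixed. The sketch below (Python with NumPy) shows a standard confusion-matrix correction in the spirit of black-box shift estimation; it illustrates the setting and is not the method of the papers above.

    import numpy as np

    def estimate_label_shift_weights(source_labels, source_preds, target_preds, n_classes):
        """Hypothetical sketch: estimate w[y] = p_target(y) / p_source(y) from a
        black-box classifier's hard predictions (confusion-matrix approach)."""
        # C[i, j] ~ P_source(prediction = i, true label = j), from held-out source data
        C = np.zeros((n_classes, n_classes))
        for y_true, y_pred in zip(source_labels, source_preds):
            C[y_pred, y_true] += 1.0
        C /= len(source_labels)
        # mu[i] ~ P_target(prediction = i), from unlabeled target data
        mu = np.bincount(target_preds, minlength=n_classes) / len(target_preds)
        # Under label shift, C @ w = mu; solve and clip to keep the weights nonnegative
        w = np.linalg.solve(C, mu)
        return np.clip(w, 0.0, None)

The estimated weights can then be used to reweight the training loss or recalibrate predicted class probabilities for the target domain.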