AI can match or even outperform humans on many tasks, but it is also prone to mistakes on unfamiliar inputs. Unfortunately, AI often encounters unfamiliar inputs when deployed in the real world because training data is seldom diverse enough to reflect the variety of real-world inputs. This not only degrades the AI's performance but can also unduly harm protected demographic groups. For example,
My research leverages statistical science to improve the robustness of AI in the real world. Towards this goal, we work on:
AI often encounters adversarial and out-of-distribution inputs in the real world. Although there are many methods for training adversarially and distributionally robust AI, most lack theoretical justification. This is a form of technical debt that keeps us from anticipating AI safety issues before they occur. We seek to repay this technical debt. Some recent papers are listed after the sketch below.
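To make the setting concrete, here is a minimal sketch of a standard baseline in this area: projected gradient descent (PGD) adversarial training in PyTorch. It is illustrative only (the model, data batch, and hyperparameters are placeholders) and is not the method proposed in any of the papers below.

```python
# Illustrative sketch: one step of PGD adversarial training in PyTorch.
# `model`, `x`, `y`, and `optimizer` are assumed to exist; hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Find an adversarial perturbation of x within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back onto the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps)
    return x_adv.detach()

def adversarial_training_step(model, x, y, optimizer):
    """Train on adversarial examples instead of clean inputs."""
    model.eval()                      # use running batch-norm statistics while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```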
We developed a suite of algorithms to help practitioners implement individual fairness (IF), an intuitive notion of algorithmic fairness that requires algorithms to "treat similar inputs similarly". The suite includes algorithms for:
Before our work, IF was dismissed as impractical because there was no practical way to pick a suitable similarity metric for many AI tasks. The (similarity) metric learning algorithms in the suite address this issue by helping practitioners align similarity metrics with user feedback. Representative papers are listed after the sketch below.
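Formally, IF asks that a model's outputs change little whenever two inputs are close under a task-appropriate similarity metric, i.e., d_y(h(x), h(x')) ≤ L · d_x(x, x'). As a rough illustration of learning such a metric from user feedback, the sketch below fits a Mahalanobis metric to pairs of inputs that users have flagged as comparable or incomparable. It is a simplified stand-in, not the exact algorithm from the papers below.

```python
# Illustrative sketch (not the exact algorithm from our papers): fit a Mahalanobis
# similarity metric d(x, x')^2 = (x - x')^T Sigma (x - x') from pairwise feedback.
# `pairs` has shape (n, 2, d); `similar` is a 0/1 label per pair from user feedback.
import torch
import torch.nn.functional as F

def learn_mahalanobis_metric(pairs, similar, n_epochs=200, lr=0.05):
    d = pairs.shape[-1]
    L = torch.eye(d, requires_grad=True)       # metric factor, Sigma = L L^T
    bias = torch.zeros(1, requires_grad=True)  # similarity threshold
    opt = torch.optim.Adam([L, bias], lr=lr)
    for _ in range(n_epochs):
        diff = pairs[:, 0, :] - pairs[:, 1, :]
        dist_sq = ((diff @ L) ** 2).sum(dim=1)  # (x - x')^T L L^T (x - x')
        # Pairs flagged as similar should get small distances, and vice versa.
        logits = bias - dist_sq
        loss = F.binary_cross_entropy_with_logits(logits, similar.float())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return L.detach() @ L.detach().T            # learned metric matrix Sigma
```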
Here is a tutorial (by our collaborators at IBM Research) on our algorithmic fairness research and the inFairness package.
Many instances of algorithmic bias are caused by distribution shifts (see the preceding examples). To develop effective algorithmic fairness practices, we must address these underlying shifts. This has led to an ongoing effort to develop statistically principled methods for transfer learning and domain adaptation. Some recent papers are listed after the sketch below.
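As a simple example of the kind of statistically principled method involved, the sketch below implements importance weighting for covariate shift: a domain classifier estimates the density ratio between target and source inputs, and the source-domain loss is reweighted accordingly. The helper names and the assumption that `model` is a scikit-learn estimator accepting `sample_weight` are illustrative; this is not the method from the papers below.

```python
# Illustrative sketch: importance weighting for covariate shift. A domain classifier
# estimates the density ratio w(x) proportional to p_target(x) / p_source(x), which
# then reweights the source-domain training loss.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(x_source, x_target):
    """Estimate the target/source density ratio by discriminating target from source."""
    x = np.vstack([x_source, x_target])
    domain = np.concatenate([np.zeros(len(x_source)), np.ones(len(x_target))])
    clf = LogisticRegression(max_iter=1000).fit(x, domain)
    p_target = clf.predict_proba(x_source)[:, 1]
    # Density-ratio identity: p_t(x) / p_s(x) is proportional to P(target | x) / P(source | x).
    return p_target / np.clip(1.0 - p_target, 1e-6, None)

def reweighted_fit(model, x_source, y_source, x_target):
    """Fit a downstream model on source data, reweighted to resemble the target domain."""
    w = estimate_importance_weights(x_source, x_target)
    return model.fit(x_source, y_source, sample_weight=w)
```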