Kyle Hsu

Research Intern

Google Brain

Hi! I’m an incoming PhD student in computer science at Stanford University, where I’ll be a member of the Stanford Artificial Intelligence Laboratory. My research and education will be generously supported by the Stanford Graduate Fellowship. In the meantime, I’m spending the summer as a research intern at Google Brain with Shane Gu.

I’m broadly interested in endowing robots with key qualities of intelligent behavior. Towards this end, I’ve explored topics such as controller synthesis, meta-learning, unsupervised learning, reinforcement learning, and Bayesian inference.

Previously, I majored in robotics as an Engineering Science undergraduate at the University of Toronto. During this time, I did research at the Vector Institute with Roger Grosse and Dan Roy. I’ve also spent time at Berkeley AI Research with Sergey Levine and Chelsea Finn, as well as at the Max Planck Institute for Software Systems with Rupak Majumdar. My first research experiences were in optoelectronics and photonics under Joyce Poon at the University of Toronto and Ming C. Wu at UC Berkeley.

Interests

  • Artificial Intelligence
  • Robotics

Education

  • BASc in Engineering Science, 2020

    University of Toronto

Publications

On the Role of Data in PAC-Bayes Bounds

We prove that linear PAC-Bayes bounds based on choosing the prior as the expected posterior can be improved by conditioning on a subset of the data, even with full knowledge of the underlying distribution. We apply this theoretical insight to achieve state-of-the-art, non-vacuous PAC-Bayes bounds for neural network image classifiers trained via stochastic gradient descent.

Unsupervised Curricula for Visual Meta-Reinforcement Learning

We develop an algorithm that constructs a task distribution for an unsupervised meta-learner by modeling interaction in a visual environment. The task distribution adapts as the agent explores the environment and learns to learn.

Unsupervised Learning via Meta-Learning

We propose CACTUs, a simple unsupervised learning → clustering → meta-learning pipeline for image classification pre-training. CACTUs can be thought of as a method that enables unsupervised meta-learning.

Lazy Abstraction-Based Controller Synthesis

This paper gives a self-contained presentation of lazy, multi-layered abstraction-based controller synthesis (ABCS) for reachability and safety specifications. It subsumes Multi-Layered Abstraction-Based Controller Synthesis for Continuous-Time Systems and Lazy Abstraction-Based Control for Safety Specifications.

Lazy Abstraction-Based Control for Safety Specifications

In this work, we restrict our attention to safety specifications and extend multi-layered ABCS to be lazy: we interleave abstraction and synthesis so that abstractions are constructed on demand.