My primary research interests involve the recursive decomposition of complex, long-horizon tasks into simpler subproblems. I view hierarchical learning as both a structural prior that improves optimization in reinforcement learning and a framework for continual learning of increasingly sophisticated skills.

During my undergraduate studies and the first part of my Ph.D., I worked in neuroscience, investigating how the brain allocates control between model-free and model-based decision-making strategies and developing neural decoding applications.
