Welcome!

I am a fourth-year Computer Science PhD candidate at the University of Wisconsin-Madison, where I am advised by Josiah Hanna. My research is supported by the Cisco Systems Distinguished Graduate Fellowship. During Summer 2025, I will be a machine learning intern at Netflix Research. I have also worked as an AI research intern at Sony AI.

I am broadly interested in representation learning and abstractions for reinforcement learning. Poorly learned representations can lead to data-inefficient learning, instability, and high-variance estimates. My work studies how RL agents can learn appropriate representations to make reliable predictions about their environment for validation and control.

Previously, I completed my BS and MS in Computer Science at the University of Texas at Austin, where I was fortunate to be advised by Peter Stone. I also worked as a software engineer at Salesforce and SAS Institute.

Feel free to shoot me an email if you want to chat!

News

  • May 2025: I received the Cisco Systems Distinguished Graduate Fellowship!
  • May 2025: Our paper, Stable Offline Value Function Learning with Bisimulation-based Representations, was accepted at ICML 2025!
  • May 2025: I will be interning at Netflix Research this summer!
  • April 2025: Passed my prelim exam!

Publications

Conference Papers

2025

Stable Offline Value Function Learning with Bisimulation-based Representations

[arxiv]
Brahma S. Pavse, Yudong Chen, Qiaomin Xie, Josiah P. Hanna
Proceedings of the 42nd International Conference on Machine Learning (ICML), July 2025.  

2024

Learning to Stabilize Online Reinforcement Learning in Unbounded State Spaces

[arxiv] [code]
Brahma S. Pavse, Matthew Zurek, Yudong Chen, Qiaomin Xie, Josiah P. Hanna
Proceedings of the 41st International Conference on Machine Learning (ICML), July 2024.  

2023

State-Action Similarity-Based Representations for Off-Policy Evaluation

[arxiv] [bibtex] [code]
Brahma S. Pavse, Josiah P. Hanna
Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS), December 2023.

Scaling Marginalized Importance Sampling to High-Dimensional State-Spaces via State Abstraction (Oral Presentation)

[pdf] [bibtex]
Brahma S. Pavse, Josiah P. Hanna
Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI), February 2023.
An earlier version appeared at the Offline RL Workshop: Offline RL as a "Launchpad" at NeurIPS 2022.  

Journal Articles

2020

Reinforced Inverse Dynamics Modeling for Learning from a Single Observed Demonstration

[pdf] [bibtex]
Brahma S. Pavse*, Faraz Torabi*, Josiah P. Hanna, Garrett Warnell, Peter Stone
*Equal contribution.
Contains material from my undergraduate honors thesis.
IEEE Robotics and Automation Letters, July 2020.  
Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020), October 2020.  
An earlier version appeared in the Imitation, Intent, and Interaction (I3) workshop at ICML 2019.  

Theses

Reducing Sampling Error in Batch Temporal Difference Learning

[pdf] [bibtex]
Brahma S. Pavse, advised by Peter Stone and Josiah Hanna
MS Thesis, University of Texas at Austin, 2020.  

Reinforced Inverse Dynamics Modeling for Learning from a Single Observed Demonstration

[pdf] [bibtex]
Brahma S. Pavse, advised by Peter Stone
BS Honors Thesis, University of Texas at Austin, 2019.