About
Hi, I'm Alex Havrilla, a PhD student at Georgia Tech studying generative modeling in both theory and practice. My research aims to leverage insights from both disciplines to advance each other, ultimately unifying them under a single conceptual framework. I have interned at FAIR and Microsoft, and I am a cofounder of the open-source RLHF research group CarperAI.
I graduated from Carnegie Mellon University with a joint MS/BS in mathematics and an additional major in computer science.

Research
My research covers a broad range of topics intersecting with generative modeling, including LLMs, reinforcement learning, diffusion models, and statistical/approximation theory for generative models. Recently, I have been deeply engaged in exploring ways to enhance the reasoning capabilities of Large Language Models (LLMs) for knowledge discovery, utilizing techniques from reinforcement learning. On the theoretical front, I focus on approximation and statistical theories for generative models, with a strong emphasis on validating these theories through empirical observations.
Papers
A. Havrilla, A. Dai, L. O’Mahony, K. Oostermeijer, V. Zisler, A. Albalak, F. Milo, S. Raparthy, K. Gandhi, B. Abbasi, D. Phung, M. Iyer, D. Mahan, C. Blagden, S. Gureja, M. Hamdy, W. Li, G. Paolini, P. Ammanamanchi, E. Meyerson, Surveying the Effects of Quality, Diversity, and Complexity in Synthetic Data From Large Language Models. ArXiv. [pdf]
A. Havrilla, D. Melis, N. Fusi. Interactive Causal Discovery through Large Language Models. Submitted to ICML 2025. [pdf]
A. Havrilla, S. Raparthy, C. Nalmpantis, J. Yu, M. Zhuravinskyi, E. Hambro, R. Raileanu, GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements. Accepted to ICML 2024. [pdf]
A. Havrilla, W. Liao, Predicting Scaling Laws with Statistical and Approximation Theory for Transformer Neural Networks. Accepted to NeurIPS 2024. [pdf]
A. Havrilla, Y. Du, S. Raparthy, C. Nalmpantis, J. Yu, M. Zhuravinskyi, E. Hambro, R. Raileanu, Teaching Large Language Models to Reason with Reinforcement Learning. Submitted to NeurIPS 2024. [pdf]
Y. Du, A. Havrilla, R. Raileanu, A Study in RL for LLM Reasoning. Accepted to NeurIPS 2023 ICBINB workshop. [pdf]
T. Sawada, D. Paleka, A. Havrilla, P. Vidas, A. Kranias, P. Tadepalli, A. Komatsuzaki, ARB: An Advanced Reasoning Benchmark for Large Language Models. To appear in NeurIPS 2023 MathAI workshop. [pdf]
H. Liu, A. Havrilla, R. Lai, W. Liao. Deep Nonparametric Estimation of Intrinsic Data Structures by Chart Autoencoders: Generalization Error and Robustness. To appear in Applied and Computational Harmonic Analysis. [pdf]
A. Havrilla, M. Zhuravinskyi, A. Tiwari, J. Tow, E. Kim, Q. Anthony, S. Biderman, L. Castricato. trlX: A Framework for Large Scale Reinforcement Learning from Human Feedback. To appear in EMNLP 2023. [pdf]
A. Havrilla, K. Rojas, W. Liao, M. Tao, Dual-FNO UNet: Scale-Robust Diffusion Model for Zero-Shot Super-Resolution Image Generation. To appear in NeurIPS 2023 Workshop on Diffusion Models. [pdf]
A. Havrilla, M. Iyer, Training Large Language Models with Noisy Algorithmic Chain of Thought. To appear in ICML 2023 Workshop on Symbolic and Data-driven Methods for Reasoning in NLP. [pdf]
S. Biderman, A. Tiwari, L. Castricato, E. Hallahan, A. Havrilla, Q. Anthony, E. Raff, What Makes a Good Multimodal Embedding Space? Theoretical and Empirical Insights from Topology. To appear on ArXiv.
A. Havrilla, L. Castricato, S. Matiana, I. Yang, S. Frazier, M. Riedl, Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning. To appear on ArXiv. [pdf]
B. Dahal, A. Havrilla, M. Chen, T. Zhao, W. Liao, On Deep Generative Models for Approximation and Estimation of Distributions on Manifolds. To appear in NeurIPS 2022. [pdf]
A. Havrilla, P. Nayar, T. Tkocz, Khinchin-type inequalities via Hadamard’s factorisation. To appear in International Mathematics Research Notices. [pdf]
A. Havrilla, T. Tkocz, Sharp Khinchin-type Inequalities for Symmetric Discrete Uniform Random Variables. To appear in Israel Journal of Mathematics. [pdf]
My thesis on sharp Khintchine-type inequalities and some new results. [pdf]