Xavier Gonzalez

Parallelizing Dynamical Systems in Artificial Intelligence

I am a final-year PhD student studying Artificial Intelligence at Stanford. My advisor is Scott Linderman.

My thesis research focuses on developing and studying methods to parallelize processes previously believed to be inherently sequential. Examples include recurrent neural networks (RNNs) and Markov chain Monte Carlo (MCMC).

This ability to parallelize over the sequence length may seem like time travel or magic, but it is just an elegant application of Newton's method! I have proved conditions under which this kind of parallelization yields dramatic speed-ups on GPUs over sequential evaluation, and I have developed scalable and stable parallel algorithms. I call these methods the ungulates: large hoofed mammals like DEER and ELK. We are hard at work improving the ungulates and applying them to other domains!
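To give a flavor of the idea: a nonlinear recurrence s_t = f(s_{t-1}, x_t) can be treated as a fixed-point equation over the whole trajectory and solved with Newton's method; each Newton step only requires solving a *linear* recurrence, which is an associative scan and hence parallelizable over the sequence length. The sketch below is a toy scalar illustration of this DEER-style scheme, not the actual DEER/ELK implementation: the step function `f`, its parameters, and the helper names are all made up for the example, and the scans are written with `cumprod`/`cumsum` to stand in for a parallel prefix scan.

```python
import numpy as np

def f(s, x):
    # toy nonlinear recurrence step (hypothetical, not the paper's model)
    return np.tanh(0.5 * s + x)

def df(s, x):
    # derivative of f with respect to the previous state s
    return 0.5 * (1.0 - np.tanh(0.5 * s + x) ** 2)

def sequential(s0, xs):
    # baseline: evaluate the recurrence one step at a time
    s, out = s0, []
    for x in xs:
        s = f(s, x)
        out.append(s)
    return np.array(out)

def linear_recurrence(a, b, s0):
    # solves s_t = a_t * s_{t-1} + b_t in closed form via prefix
    # products/sums; each cumprod/cumsum is an associative scan,
    # so this step is parallelizable over t on a GPU
    A = np.cumprod(a)
    return A * (s0 + np.cumsum(b / A))

def deer(s0, xs, iters=20):
    # Newton's method on the residual G_t(s) = s_t - f(s_{t-1}, x_t):
    # linearizing f around the current trajectory guess turns each
    # Newton update into one linear recurrence solve
    T = len(xs)
    s = np.zeros(T)  # initial guess for the whole trajectory
    for _ in range(iters):
        prev = np.concatenate([[s0], s[:-1]])
        a = df(prev, xs)            # Jacobians of f along the guess
        b = f(prev, xs) - a * prev  # affine remainder of the linearization
        s = linear_recurrence(a, b, s0)
    return s
```

On this toy problem the Newton iterates match the sequential evaluation to machine precision after a handful of iterations, and each iteration touches all time steps at once rather than one after another.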


Going forward, I am extremely interested in the quest to develop artificial general intelligence (AGI), and more broadly the study of intelligence—both natural and artificial.

I am particularly interested in:

  • developing recurrent architectures (alternatives to transformers) that can reason more natively
  • developing hardware-aware AI algorithms, and novel hardware that can unlock novel AI algorithms
  • drawing inspiration from natural intelligence, and designing and scaling up neuro-inspired algorithms and hardware
  • applying AI to education technology to improve education and mentorship for the next generation

I’m actively looking for research jobs, so please reach out if you know of a good fit!

See my publications on my Google Scholar page.