Xavier Gonzalez

Parallelizing Dynamical Systems in Artificial Intelligence

I am a final-year PhD student studying Artificial Intelligence at Stanford. My advisor is Scott Linderman.

My thesis research focuses on developing and studying methods to parallelize processes previously believed to be “inherently sequential.” Examples include recurrent neural networks (RNNs) and Markov chain Monte Carlo (MCMC).

This ability to parallelize over the sequence length may seem like time travel or magic, but it is just an elegant application of Newton’s method! I have proved conditions under which such parallelization yields dramatic speed-ups on GPUs over sequential evaluation, and I have developed scalable, numerically stable variants of these techniques. I call them the ungulates: large hoofed mammals like DEER and ELK. Please reach out if you are interested in improving the ungulates or applying them to other domains!
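To give a flavor of the idea, here is a minimal sketch in plain NumPy (the toy recurrence `f`, the scalar state, and the hand-rolled scan are illustrative assumptions of mine, not the actual DEER/ELK code, which handles vector states and runs on GPU). Each Newton step linearizes the nonlinear recurrence around the current guess for the whole trajectory, which turns it into a linear recurrence, and linear recurrences can be solved in logarithmic depth with an associative scan:

```python
import numpy as np

# Sketch: evaluate the nonlinear recurrence s_t = f(s_{t-1}), t = 1..T,
# by Newton's method over the whole stacked trajectory (the DEER idea).
# Linearizing f at the current guess gives s_t = a_t * s_{t-1} + b_t,
# a *linear* recurrence solvable by a parallel (associative) scan.

def f(s):
    return np.tanh(s + 0.5)              # toy state-transition function

def df(s):
    return 1.0 / np.cosh(s + 0.5) ** 2   # its derivative

def scan_affine(a, b):
    """Inclusive scan composing the affine maps s -> a*s + b.

    Log-depth recursive doubling; on a GPU this is the step that
    replaces T sequential updates with O(log T) parallel rounds.
    """
    a, b = a.copy(), b.copy()
    d = 1
    while d < len(a):
        b[d:] = a[d:] * b[:-d] + b[d:]   # compose map t with map t-d
        a[d:] = a[d:] * a[:-d]
        d *= 2
    return a, b

T, s0 = 50, 0.1

# Ground truth: plain sequential evaluation.
seq = np.empty(T)
carry = s0
for t in range(T):
    carry = f(carry)
    seq[t] = carry

# DEER-style evaluation: Newton iterations on the stacked states s_1..s_T.
s = np.zeros(T)                          # initial trajectory guess
for _ in range(50):
    prev = np.concatenate(([s0], s[:-1]))
    a = df(prev)                         # slope of the linearization
    b = f(prev) - a * prev               # intercept
    A, B = scan_affine(a, b)             # solve the linear recurrence
    new = A * s0 + B                     # s_t = A_t * s0 + B_t
    done = np.max(np.abs(new - s)) < 1e-12
    s = new
    if done:
        break

# s now matches the sequential trajectory to numerical precision.
```

The punchline is that every operation inside the Newton loop is either elementwise or a scan, so the whole trajectory is updated in parallel rather than step by step.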

Going forward, I am extremely interested in the quest to develop artificial general intelligence (AGI), and more broadly the study of intelligence—both natural and artificial.

I am particularly interested in:

  • developing recurrent architectures that can reason more natively
  • developing hardware-aware AI algorithms, and novel hardware that can unlock novel AI algorithms
  • drawing inspiration from natural intelligence, and designing and scaling up neuro-inspired algorithms and hardware
  • applying AI to education technology to improve education and mentorship for the next generation

I will shortly join the technical staff at Unconventional AI. At Unconventional, we are building the next generation of AI hardware and algorithms, aiming to be 1000x more energy-efficient than current workflows. Join us if you are interested in tackling this pressing problem with deep and fundamental research!

See my publications on my Google Scholar page.
