Xavier Gonzalez
I am a graduating PhD student studying Artificial Intelligence at Stanford. My advisor is Scott Linderman.
My PhD research focused on developing and studying methods to parallelize processes previously believed to be “inherently sequential.” Examples include recurrent neural networks (RNNs) and Markov chain Monte Carlo (MCMC). My work has helped to break the sequential bottlenecks these important AI methods used to suffer from.
This ability to parallelize over the sequence length may seem like time travel or magic, but it is just an elegant application of Newton’s method! I have characterized the conditions under which such parallelization yields dramatic speed-ups on GPUs over sequential evaluation, and I have developed scalable, numerically stable parallelization techniques. I call these techniques the ungulates: large hoofed mammals like DEER and ELK.
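To give a flavor of the idea, here is a minimal sketch (my own toy illustration, not the actual DEER/ELK implementation) for a hypothetical scalar recurrence s_t = tanh(s_{t-1} + u_t). Newton's method linearizes the whole trajectory around a guess; each linearization step is embarrassingly parallel over time, and the resulting *linear* recurrence is exactly what a parallel associative scan evaluates in logarithmic depth on a GPU. For a scalar state, prefix products and sums stand in for the scan.

```python
import numpy as np

def f(s, u):
    # hypothetical toy recurrence: s_t = tanh(s_{t-1} + u_t)
    return np.tanh(s + u)

def df(s, u):
    # derivative of f with respect to the previous state
    return 1.0 - np.tanh(s + u) ** 2

def newton_parallel_unroll(u, s0=0.0, iters=20):
    """Solve s_t = f(s_{t-1}, u_t) for all t at once via Newton's method.

    Each Newton step linearizes the recurrence around the current guess,
    giving a linear recurrence s_t = A_t * s_{t-1} + b_t whose closed-form
    solution uses only prefix products and prefix sums (parallelizable).
    """
    T = len(u)
    s = np.zeros(T)  # initial guess for the whole trajectory
    for _ in range(iters):
        prev = np.concatenate(([s0], s[:-1]))
        A = df(prev, u)            # Jacobians, computed in parallel over t
        b = f(prev, u) - A * prev  # affine offsets, in parallel over t
        # solve s_t = A_t * s_{t-1} + b_t:
        # s_t = P_t * s0 + P_t * sum_{j<=t} b_j / P_j, with P_t = A_1...A_t
        P = np.cumprod(A)
        s = P * (s0 + np.cumsum(b / P))
    return s

def sequential(u, s0=0.0):
    # baseline: ordinary step-by-step evaluation
    s, out = s0, []
    for ut in u:
        s = f(s, ut)
        out.append(s)
    return np.array(out)
```

A pleasant property of this toy version: after k Newton iterations the first k states are already exact, so the iteration converges in at most T steps, and in practice far fewer when f is contractive. (Dividing by the prefix product P is fine here but numerically fragile in general; real implementations use an associative scan instead.)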
My PhD dissertation serves as an introductory quick-start guide that I recommend if you want to quickly learn what parallelizing sequential computation is all about!
Going forward, I am excited about the quest to develop artificial general intelligence (AGI), and more broadly the study of intelligence—both natural and artificial.
I am particularly interested in:
- developing recurrent architectures that can more natively reason
- developing hardware-aware AI algorithms, and novel hardware that can unlock novel AI algorithms
- drawing inspiration from natural intelligence, and designing and scaling up neuro-inspired algorithms and hardware
- applying AI to education technology to improve education and mentorship for the next generation
I will shortly join the technical staff at Unconventional AI. At Unconventional, we are trying to build the next generation of AI hardware and algorithms to be 1000x more energy-efficient than current workflows. Join us if you are interested in tackling this pressing problem through deep, fundamental research!
See my publications on my Google Scholar page.