MOMCHIL TOMOV
Neuroscientist, Artificial Intelligence Researcher
Research
Throughout our lives, the seemingly haphazard neural processing in our brains – as if by magic – gives rise to remarkable cognitive abilities that allow us to accomplish tasks big and small, such as winning a video game, driving a car, or discovering a scientific theory. How does this magic work? How can we make it work in artificial brains?
I tackle the first question by building computational models of information processing in the brain and evaluating them using behavioral experiments, neuroimaging, and electrophysiology. I am currently leveraging insights from this work to tackle the second question within two domains: a) motion planning for self-driving cars, and b) automated theory discovery for the cognitive sciences.
My research has both basic science and applied aims and connects multiple disciplines, including neuroscience, psychology, machine learning, robotics, and cognitive science.
Algorithmic approximations to optimal behavior
In the real world, planning and decision making often take place under strict computational and time constraints. This makes optimal behavior unattainable beyond the simplest cases. In one strand of research, I investigate the algorithmic strategies that humans employ to approximate the optimal solution in a variety of domains.
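As a toy illustration of this idea (mine, not from the publications below): exhaustive lookahead over all action sequences grows exponentially with the horizon, so a resource-bounded planner can truncate the search at a fixed depth and substitute a cheap heuristic estimate for the remaining horizon. The task, function names, and heuristic here are all hypothetical.

```python
# Illustrative sketch: depth-limited lookahead as an algorithmic
# approximation to exhaustive (optimal) planning.

def plan(state, step, horizon, depth_limit, heuristic):
    """Estimated value of `state` under depth-limited lookahead.

    `step` maps (state, action) -> (reward, next_state).
    When the depth budget runs out, the remaining horizon is
    approximated by `heuristic` instead of being searched.
    """
    if horizon == 0:
        return 0.0
    if depth_limit == 0:
        return heuristic(state)  # cheap stand-in for the unsearched future
    return max(
        r + plan(s2, step, horizon - 1, depth_limit - 1, heuristic)
        for r, s2 in (step(state, a) for a in ("left", "right"))
    )

# A toy chain task: moving right earns +1, moving left earns 0.
def step(state, action):
    return (1.0, state + 1) if action == "right" else (0.0, state - 1)

# Hypothetical leaf heuristic: optimistically assume one unit of reward remains.
optimistic = lambda state: 1.0
```

With the full depth budget the planner recovers the optimal value; with a truncated budget it trades accuracy for far fewer node expansions, which is the kind of approximation a bounded agent must make.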
Selected publications:
- Carvalho, W.*, Tomov, M.*, de Cothi, W.*, Barry, C., & Gershman, S. Predictive representations: building blocks of intelligence. (submitted)
- Tomov, M.*, Schulz, E.*, & Gershman, S.J. (2021). Multi-task reinforcement learning in humans. Nature Human Behaviour
- Tomov, M., Truong, V., Hundia, R., & Gershman, S.J. (2020). Dissociable neural correlates of uncertainty underlie different exploration strategies. Nature Communications
- Tomov, M., Yagati, S., Kumar, A., Yang, W., & Gershman, S.J. (2020). Discovery of hierarchical representations for efficient planning. PLOS Computational Biology
Causal inference in the brain
To support planning of action sequences that have desired outcomes, our internal model of the world must necessarily be causal. That is, the brain must model how different actions, events, and objects (causes) contribute to future events (effects). In another strand of research, I study what form these causal relationships take, how they are learned and represented by the brain, and how they interact with reward-based learning.
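One way to make this concrete (a toy example of my own, not the authors' model): a learner can estimate the causal strength of its action on an outcome with the contingency Δp = P(outcome | acted) − P(outcome | not acted), and use that estimate to gate reward-driven value updates, so outcomes it did not cause are discounted. The class and learning rule below are hypothetical.

```python
# Illustrative sketch: causal attribution (via the Δp contingency
# statistic) gating a reward-prediction-error value update.

from collections import Counter

class CausalGatedLearner:
    def __init__(self, lr=0.5):
        self.value = 0.0
        self.lr = lr
        self.counts = Counter()  # joint (acted, outcome) event counts

    def delta_p(self):
        """Delta-p = P(outcome | acted) - P(outcome | not acted)."""
        acted = self.counts[(1, 1)] + self.counts[(1, 0)]
        idle = self.counts[(0, 1)] + self.counts[(0, 0)]
        p_a = self.counts[(1, 1)] / acted if acted else 0.0
        p_i = self.counts[(0, 1)] / idle if idle else 0.0
        return p_a - p_i

    def observe(self, acted, outcome, reward):
        self.counts[(acted, outcome)] += 1
        if acted:
            # Gate the prediction-error update by inferred causal credit:
            # if the outcome occurs regardless of the action, credit ~ 0.
            credit = max(self.delta_p(), 0.0)
            self.value += self.lr * credit * (reward - self.value)
```

When the outcome reliably follows the action, Δp is high and the action's value is updated; when the outcome occurs just as often without the action, Δp collapses toward zero and learning is suppressed.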
Selected publications:
- Tomov, M.S., Tsividis, P.A., Pouncy, T., Tenenbaum, J.B., & Gershman, S.J. (2023). The neural architecture of theory-based reinforcement learning. Neuron [also see accompanying blog post]
- Dorfman, H.M.*, Tomov, M.S.*, Cheung, B., Clarke, D., Gershman, S.J., & Hughes, B.L. (2021). Causal inference gates corticostriatal learning. Journal of Neuroscience
- Tomov, M.S., Dorfman, H.M., & Gershman, S.J. (2018). Neural computations underlying causal structure learning. Journal of Neuroscience
Autonomous vehicles as a testbed for human-like intelligence
Recently, I joined Motional, an autonomous vehicle startup, to build more human-like planners for self-driving cars. This work draws directly on principles of human planning and decision making.
Selected publications:
- Phan-Minh, T., Howington, F., Chu, T.-S., Tomov, M., Beaudoin, R., Lee, S. U., Li, N., Dicle, C., Findler, S., Suarez-Ruiz, F., et al. (2023). DriveIRL: Drive in Real Life with Inverse Reinforcement Learning. IEEE International Conference on Robotics and Automation (ICRA)
Automated neuroscientist
I am also working on a system that simultaneously discovers and fits new computational models of the brain to neural and behavioral data across multiple levels of description, with the ultimate goal of taking the theoretician out of the loop.
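At its core, such a system must close the loop of proposing candidate models, fitting each to data, and letting a selection criterion, rather than a human theoretician, pick the winner. The sketch below is a deliberately minimal, hypothetical version of that loop (the actual system is far richer): two candidate model families, a crude grid-search fit, and BIC-based selection.

```python
# Hypothetical sketch of an automated model-discovery loop:
# enumerate candidate models, fit each, and select by BIC.

import math

def bic(sse, n, k):
    """Bayesian Information Criterion for Gaussian residuals (lower is better)."""
    return n * math.log(sse / n) + k * math.log(n)

def fit(model, k, xs, ys, grid):
    """Crude grid-search fit; returns the best (params, BIC score)."""
    best = None
    for params in grid:
        sse = sum((model(x, *params) - y) ** 2 for x, y in zip(xs, ys))
        score = bic(max(sse, 1e-12), len(xs), k)  # clamp to avoid log(0)
        if best is None or score < best[1]:
            best = (params, score)
    return best

# Candidate model families a discovery system might enumerate
# (each entry: model function, number of free parameters).
candidates = {
    "linear": (lambda x, a: a * x, 1),
    "quadratic": (lambda x, a: a * x * x, 1),
}

def discover(xs, ys, grid):
    """Fit every candidate and return the name of the BIC-preferred model."""
    scored = {name: fit(m, k, xs, ys, grid)
              for name, (m, k) in candidates.items()}
    return min(scored, key=lambda name: scored[name][1])
```

A real system would search a much larger, compositional model space and fit jointly across neural and behavioral data, but the propose-fit-select structure is the same.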
Selected publications:
- Bhattasali, N.X., Tomov, M.S., & Gershman, S.J. (2021). CCNLab: A benchmarking framework for computational cognitive neuroscience. 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks