MOMCHIL TOMOV
Cognitive Neuroscientist, Artificial Intelligence Researcher
Research
Throughout our lives, the seemingly haphazard neural processing in our brains – as if by magic – gives rise to remarkable cognitive abilities that allow us to accomplish tasks big and small, such as winning a video game, driving a car, or discovering a scientific theory. How does this magic work? How can we make it work in artificial brains?
I tackle the first question by building computational models of information processing in the brain and evaluating them using behavioral experiments, neuroimaging, and electrophysiology. I am currently leveraging insights from this work to tackle the second question within two domains: a) motion planning for self-driving cars, and b) automated theory discovery for the cognitive sciences.
My research has both basic science and applied aims and connects multiple disciplines, including neuroscience, psychology, machine learning, robotics, and cognitive science.
Naturalistic learning and decision making
How does the brain support adaptive decision making in the real world? Traditional approaches to answering this central question in cognitive science and cognitive neuroscience distill the complexity of the real world into highly controlled experiments that allow the comparison of a small set of hypotheses, often expressed as computational models. While this approach has shed light on different facets of decision making in isolation, it fails to capture many aspects of real-world decision making that emerge only in naturalistic environments.
My primary research focus is to address this gap by recording human behavior and brain activity in video games, which capture many aspects of real-world decision making, and using those data to compare and refine computational models of naturalistic decision making based on state-of-the-art AI systems.
Selected publications:
- Tomov, M. S., Tsividis, P. A., Pouncy, T., Tenenbaum, J. B., & Gershman, S. J. (2023). The neural architecture of theory-based reinforcement learning. Neuron [also see accompanying blog post]
Building machines that learn, plan, and act like humans
The ultimate test of a computational account of naturalistic decision making is whether it can perform like humans in the real world. Self-driving cars present an excellent yet unsolved testbed for this challenge. Unlike other AI applications (e.g., chatbots), they face many of the same constraints as humans, such as limited time, compute, and memory budgets. They also operate in dynamic, high-dimensional, safety-critical environments that require continuous interaction with other humans, which means they must plan accurately and efficiently, be interpretable, and behave in human-like ways. Despite significant advances over the last decade, current autonomous driving technology still lacks human-level abilities, with planning and decision making – the core cognitive functions that determine driving behavior – posing the greatest challenge.
My secondary research focus is to address this gap by imbuing autonomous vehicles with inductive biases derived from our understanding of naturalistic human decision making and evaluating them against human driving in simulation and in the real world.
Selected publications:
- Kenny, E., Dharmavaram, A., Lee, S. U., Phan-Minh, T., Rajesh, S., Major, L., Hu, Y., Tomov, M. S.*, & Shah, J.* (2024). Explainable deep learning improves human mental models of self-driving cars. (submitted)
- Heim, M., Suarez-Ruiz, F., Bhuiyan, I., Brito, B., & Tomov, M. S. (2024). Lab2Car: a versatile wrapper for deploying experimental planners in complex real-world environments. (submitted)
- Phan-Minh, T., Howington, F., Chu, T.-S., Tomov, M. S., Beaudoin, R., Lee, S. U., Li, N., Dicle, C., Findler, S., Suarez-Ruiz, F., et al. (2023). DriveIRL: drive in real life with inverse reinforcement learning. IEEE International Conference on Robotics and Automation (ICRA)
* - equal contribution
Automated neuroscientist
When moving to naturalistic domains, the space of possible cognitive models grows exponentially. This further increases the burden on theoreticians, who are already the primary bottleneck in advancing conceptual understanding in the cognitive sciences. There is thus a pressing need to develop tools that can systematically explore the space of cognitive models based on data. While state-of-the-art AI systems can serve as starting points for this exploration, the models we ultimately converge on should be grounded in both normative considerations (i.e., can they perform naturalistic tasks?) and descriptive considerations (i.e., do they explain behavior and brain activity?).
To address this, I am working on an Automated Neuroscientist that can discover computational theories directly from behavioral and neural data.
Selected publications:
- Moro, V., Dugan, O., Dangovski, D., Negrello, T., Gershman, S. J., Soljačić, M., & Tomov, M. S. Towards an automated neuroscientist. (in prep)
- Tomov, M. S. (2024). Inverse stochastic learning. Conference on Cognitive Computational Neuroscience (CCN 2024)
- Bhattasali, N. X., Tomov, M. S., & Gershman, S. J. (2021). CCNLab: A benchmarking framework for computational cognitive neuroscience. 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks
Algorithmic approximations to optimal behavior
In the real world, planning and decision making often take place under strict computational and time constraints. This makes optimal behavior unattainable beyond the simplest cases. In another strand of research, I investigate the algorithmic strategies that humans employ to approximate the optimal solution in a variety of domains.
Selected publications:
- Hall-McMaster, S., Tomov, M. S., Gershman, S. J., & Schuck, N. (2024). Neural prioritisation of past solutions supports generalization. (submitted)
- Carvalho, W.*, Tomov, M. S.*, de Cothi, W.*, Barry, C., & Gershman, S. J. (2024). Predictive representations: building blocks of intelligence. Neural Computation
- Tomov, M. S.*, Schulz, E.*, & Gershman, S. J. (2021). Multi-task reinforcement learning in humans. Nature Human Behaviour
- Tomov, M. S., Truong, V., Hundia, R., & Gershman, S. J. (2020). Dissociable neural correlates of uncertainty underlie different exploration strategies. Nature Communications
- Tomov, M. S., Yagati, S., Kumar, A., Yang, W., & Gershman, S. J. (2020). Discovery of hierarchical representations for efficient planning. PLOS Computational Biology
* - equal contribution
Causal inference in the brain
To support planning of action sequences that have desired outcomes, our internal model of the world must be causal. That is, the brain must model how different actions, events, and objects (causes) contribute to future events (effects). I study what form these causal relationships take, how they are learned and represented by the brain, and how they interact with reward-based learning.
Selected publications:
- Dorfman, H. M.*, Tomov, M. S.*, Cheung, B., Clarke, D., Gershman, S. J., & Hughes, B. L. (2021). Causal inference gates corticostriatal learning. Journal of Neuroscience
- Tomov, M. S., Dorfman, H. M., & Gershman, S. J. (2018). Neural computations underlying causal structure learning. Journal of Neuroscience
* - equal contribution