Neural Reinforcement Learning Lab (NeuRLab)
Introduction
Living in an uncertain environment, we seek to pursue good things and avoid bad ones. We are interested in how the brain recognizes different situations and learns to make better decisions. Related questions include: How does the brain represent reward or punishment? How does the brain remember something good and pursue it? How does the brain choose one action out of multiple options? What makes one animal more intelligent than another? What can we learn about how the brain works from artificial intelligence?
Reinforcement learning (RL) theory provides theoretical and computational frameworks for addressing these problems. Interestingly, dopamine activity in the brain has been shown to resemble the teaching signal in one class of RL algorithms, temporal difference (TD) learning. However, the detailed neural mechanisms of adaptive behaviors remain elusive. We perform experiments in animals and analyze the data with computational models derived from artificial intelligence (AI) to understand the biological mechanisms of reinforcement learning.
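To make the TD-learning idea concrete, here is a minimal, illustrative sketch of tabular TD(0) value learning on a toy chain task. All details (the chain structure, reward placement, and the learning-rate and discount parameters) are assumptions for illustration, not the lab's actual models; the quantity `delta` is the prediction error that dopamine activity is thought to resemble.

```python
import numpy as np

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference update of the value table V.
    delta is the TD error (the 'teaching signal')."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return V, delta

# Toy example: a 3-state chain (states 0, 1, 2) followed by a terminal
# state 3 with value fixed at 0; reward is delivered on leaving state 2.
V = np.zeros(4)
rewards = {2: 1.0}
for episode in range(200):
    for s in range(3):
        r = rewards.get(s, 0.0)
        V, delta = td0_update(V, s, r, s + 1)

# After learning, earlier states acquire discounted reward predictions:
print(V[:3])  # roughly [gamma**2, gamma, 1] = [0.81, 0.9, 1.0]
```

Over training, the prediction error migrates from the time of reward to the earliest reward-predicting state, mirroring the classic shift of dopamine responses from rewards to predictive cues.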
Selected Recent Publications
1. Kim HR*, Malik AM*, Mikhael JG, Bech P, Tsutsui-Kimura I, Sun F, Zhang Y, Li Y, Watabe-Uchida M, Gershman SJ, Uchida N (2020) A unified framework for dopamine signals across timescales. Cell. (lead author)
2. Kim HR, Angelaki DE, DeAngelis GC (2017) Gain modulation as a mechanism for coding depth from motion parallax in macaque area MT. Journal of Neuroscience 37(34), 8180-8197.
3. Kim HR, Angelaki DE, DeAngelis GC (2015) A novel role for visual perspective cues in the neural computation of depth. Nature Neuroscience 18(1), 129-137.
I am a brand-new Assistant Professor at Sungkyunkwan University (SKKU) in South Korea, studying how the brain generates complex and intelligent behaviors. I am affiliated with the Institute for Basic Science (IBS) - Center for Neuroscience Imaging Research and the Department of Biomedical Engineering.
Previously, I was a postdoctoral associate/research scientist at MIT, working with Mehrdad Jazayeri, and at Yale, working with Daeyeol Lee. I obtained my Ph.D. in neuroscience from Seoul National University, mentored by Sang-hun Lee, and my master's and undergraduate degrees from KAIST, mentored by Jaeseung Jeong.
My area of research is cognitive and systems neuroscience. I have been investigating how the brain measures and processes time using multiple approaches: behavioral experiments, computational modeling (e.g., Bayesian theory), human neuroimaging (EEG/fMRI), and electrophysiology in non-human primates. In my new lab, I will combine these techniques to study how the prefrontal and posterior parietal cortices process information about magnitude (time, number, and space).
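As a flavor of the Bayesian modeling mentioned above, here is a minimal, hypothetical sketch of Bayesian time-interval estimation in the conjugate Gaussian case: a prior over interval durations is combined with a noisy measurement, and the posterior mean regresses estimates toward the prior. All parameter values (prior mean/SD, noise SD) are illustrative assumptions, not fitted values from any study.

```python
def bayes_estimate(measurement, prior_mean=0.8, prior_sd=0.15, noise_sd=0.1):
    """Posterior-mean estimate of an interval (in seconds) given a
    Gaussian prior and a Gaussian likelihood around the measurement.
    The weight on the measurement grows as the measurement noise shrinks."""
    w = prior_sd**2 / (prior_sd**2 + noise_sd**2)
    return w * measurement + (1 - w) * prior_mean

# Short measurements are pulled up toward the prior mean, long ones down:
print(bayes_estimate(0.6))  # above 0.6 (regression toward 0.8)
print(bayes_estimate(1.0))  # below 1.0
```

This "central tendency" of timing estimates is the signature behavior that Bayesian observer models of interval timing are built to capture.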
In my spare time (if I have any!), I enjoy spending time with my daughters outdoors (camping, skiing) and would love to adopt a dog.
Recent Updates
February 2023: I start my own lab at Sungkyunkwan University (SKKU), Department of Biomedical Engineering & Institute for Basic Science - Center for Neuroscience Imaging Research
January 2023: Manuel & Nico's work, "Parametric control of flexible timing through low-dimensional neural manifolds," to which I contributed, is published in Neuron
November 2022: Reza & Andrew's work, "A large-scale neural network training framework for generalized estimation of single-trial population dynamics," to which I contributed, is published in Nature Methods
October 2022: Jason's work, which I mentored, is accepted as an oral presentation at a NeurIPS workshop
October 2021: My review paper with Devika titled “Neural implementations of Bayesian inference” is published in Current Opinion in Neurobiology
June 2021: My work titled “Validating model-based Bayesian integration using prior–cost metamers” is published in PNAS