Reward-mediated, model-free reinforcement-learning mechanisms in Pavlovian and instrumental tasks are related

MoinAfshar2023JoN.pdf - Published Version (2MB)
Available under license: Creative Commons Attribution

Moin Afshar, N., Cinotti, F. orcid id iconORCID: https://orcid.org/0000-0003-2921-0901, Martin, D., Khamassi, M., Calu, D. J., Taylor, J. R. and Groman, S. M. (2023) Reward-mediated, model-free reinforcement-learning mechanisms in Pavlovian and instrumental tasks are related. The Journal of Neuroscience, 43 (3). pp. 458-471. ISSN 1529-2401 doi: 10.1523/JNEUROSCI.1113-22.2022

Abstract/Summary

Model-free and model-based computations are argued to distinctly update action values that guide decision-making processes. It is not known, however, whether the model-free and model-based reinforcement-learning mechanisms recruited in operationally based instrumental tasks parallel those engaged by Pavlovian-based behavioral procedures. Recently, computational work has suggested that individual differences in the attribution of incentive salience to reward-predictive cues, that is, sign- and goal-tracking behaviors, are also governed by variations in model-free and model-based value representations that guide behavior. Moreover, it is not known whether these systems, characterized computationally using model-free and model-based algorithms, are conserved across tasks for individual animals. In the current study, we used a within-subject design to assess sign-tracking and goal-tracking behaviors using a Pavlovian conditioned approach task and then characterized behavior using an instrumental multistage decision-making (MSDM) task in male rats. We hypothesized that both Pavlovian and instrumental learning processes may be driven by common reinforcement-learning mechanisms. Our data confirm that sign-tracking behavior was associated with greater reward-mediated, model-free reinforcement learning and that it was also linked to model-free reinforcement learning in the MSDM task. Computational analyses revealed that Pavlovian model-free updating was correlated with model-free reinforcement learning in the MSDM task. These data provide key insights into the computational mechanisms mediating associative learning that could have important implications for normal and abnormal states.
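The reward-mediated, model-free updating described in the abstract is typically modeled as a prediction-error rule of the Rescorla-Wagner / temporal-difference form. A minimal sketch of that update is shown below; the learning rate `alpha`, the reward sequence, and the function name are illustrative assumptions, not parameters or code from the paper.

```python
def model_free_update(value, reward, alpha=0.1):
    """Model-free value update: move the cue/action value toward the
    received reward in proportion to the reward prediction error.
    `alpha` is an assumed learning rate, not a fitted parameter."""
    prediction_error = reward - value
    return value + alpha * prediction_error

# Illustrative use: the value of a reward-predictive cue drifts toward
# the mean payoff across trials (reward sequence is made up).
v = 0.0
for r in [1, 1, 0, 1]:
    v = model_free_update(v, r)
```

In a model-based account, by contrast, values would be recomputed from a learned transition model of the task rather than from the prediction error alone; the contrast between those two update rules is what the MSDM task is designed to dissociate.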


Item Type: Article
URI: https://reading-clone.eprints-hosting.org/id/eprint/120876
Refereed: Yes
Divisions: Life Sciences > School of Psychology and Clinical Language Sciences > Department of Psychology
Publisher: The Society for Neuroscience


