Journal article in Trends in Cognitive Sciences, 2022

The computational roots of positivity and confirmation biases in reinforcement learning

Abstract

Humans do not integrate new information objectively: outcomes carrying a positive affective value and evidence confirming one's own prior belief are overweighted. Until recently, theoretical and empirical accounts of the positivity and confirmation biases assumed them to be specific to 'high-level' belief updates. We present evidence against this account. Learning rates in reinforcement learning (RL) tasks, estimated across different contexts and species, generally display the same characteristic asymmetry, suggesting that belief and value updating processes share key computational principles and distortions. This asymmetry generates over-optimistic expectations about the probability of making the right choices and, consequently, over-optimistic reward expectations. We discuss the normative and neurobiological roots of these RL biases and their position within the broader picture of behavioral decision-making theories.
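
As an illustration of the learning-rate asymmetry described above, the sketch below implements a simple Rescorla-Wagner style value update with separate learning rates for positive and negative prediction errors, a common way to model the positivity/confirmation bias in RL; it is not the authors' specific model. The function name asymmetric_rw_update and the parameter values lr_pos = 0.30 and lr_neg = 0.10 are illustrative assumptions; with lr_pos > lr_neg, the learned value of an option rewarded 50% of the time settles above its true expected reward, reproducing the over-optimistic expectations described in the abstract.

import numpy as np

rng = np.random.default_rng(0)

def asymmetric_rw_update(q, reward, lr_pos, lr_neg):
    # Rescorla-Wagner style update with separate learning rates for
    # positive and negative reward prediction errors (positivity bias
    # when lr_pos > lr_neg). Illustrative sketch, not the authors' model.
    delta = reward - q                      # reward prediction error
    lr = lr_pos if delta >= 0 else lr_neg   # asymmetric learning rate
    return q + lr * delta

# Simulate a single option rewarded with probability 0.5.
q = 0.0
trace = []
for t in range(20_000):
    reward = float(rng.random() < 0.5)
    q = asymmetric_rw_update(q, reward, lr_pos=0.30, lr_neg=0.10)
    if t >= 10_000:                         # keep post-convergence values only
        trace.append(q)

# With these (hypothetical) parameters the mean learned value is ~0.75,
# i.e. systematically above the true expected reward of 0.50.
print(f"Mean learned value: {np.mean(trace):.2f} (true expected reward: 0.50)")
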
Main file: S1364661322000894.pdf (745.32 KB). Origin: files produced by the author(s).

Dates and versions

hal-04215577, version 1 (22-07-2024)

Licence

Identifiers

Cite

Stefano Palminteri, Maël Lebreton. The computational roots of positivity and confirmation biases in reinforcement learning. Trends in Cognitive Sciences, 2022, 26 (7), pp.607-621. ⟨10.1016/j.tics.2022.04.005⟩. ⟨hal-04215577⟩
221 views
9 downloads
