A Comparison of Reinforcement Learning Models for the Iowa Gambling Task Using Parameter Space Partitioning

Helen Steingroever,1 Ruud Wetzels,2 and Eric-Jan Wagenmakers1

Abstract

The Iowa gambling task (IGT) is one of the most popular tasks used to study decision-making deficits in clinical populations. In order to decompose performance on the IGT into its constituent psychological processes, several cognitive models have been proposed (e.g., the Expectancy Valence (EV) and Prospect Valence Learning (PVL) models). Here we present a comparison of three models—the EV and PVL models, and a combination of these models (EV-PU)—based on the method of parameter space partitioning. This method allows us to assess the choice patterns predicted by the models across their entire parameter space. Our results show that the EV model is unable to account for a frequency-of-losses effect, whereas the PVL and EV-PU models are unable to account for a pronounced preference for the bad decks with many switches. All three models underrepresent pronounced choice patterns that are frequently seen in experiments. Overall, our results suggest that the search for an appropriate IGT model has not yet come to an end.

Keywords: decision making, loss aversion, Expectancy Valence model, Prospect Valence Learning model

1 University of Amsterdam. Please direct correspondence to [email protected].
2 The European Foundation for the Improvement of Living and Working Conditions, Dublin, Ireland.

The Journal of Problem Solving, volume 5, no. 2 (Spring 2013)
http://dx.doi.org/10.7771/1932-6246.1150