Abstract
Theories of decision-making preferences and utility formation (e.g., normative, descriptive, and experience-based) share common assumptions and predictions. Despite their differences, normative (utilitarian), psychological descriptive, and experience-based decision theories all predict that human agents have stable and coherent preferences, informed by the consistent use of a psychological strategy or process (computational or non-computational sampling) that guides their choices between alternatives varying in risk and reward. Rather than having fixed preferences/strategies (utilitarian or non-utilitarian) for risky choice, we argue that decision preferences are constructed dynamically, with strategy selection operating as a reinforcement-learning model. Accordingly, we found that associative learning (in supervised learning tasks) predicts strategy selection (probability-bet vs. dollar-bet strategies) and governs decision makers' risky preferences.
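To make the idea of "strategy selection as a reinforcement-learning model" concrete, here is a minimal, hypothetical sketch, not the authors' actual model: two candidate strategies (the probability-bet and dollar-bet strategies named in the abstract) accrue values from trial-by-trial feedback, and a softmax rule determines which strategy governs the next risky choice. The softmax rule, temperature, learning rate, and toy payoff environment are illustrative assumptions, not details from the study.

```python
import math
import random

# Illustrative sketch only: strategy selection treated as reinforcement learning.
STRATEGIES = ["probability-bet", "dollar-bet"]

def softmax_choice(values, temperature=0.5):
    """Pick a strategy index with probability proportional to exp(value / temperature)."""
    weights = [math.exp(v / temperature) for v in values]
    threshold = random.random() * sum(weights)
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if threshold <= cumulative:
            return i
    return len(weights) - 1

def run_trials(payoff_by_strategy, n_trials=100, learning_rate=0.2):
    """Delta-rule update of each strategy's value after every trial's feedback."""
    values = [0.0, 0.0]
    for _ in range(n_trials):
        chosen = softmax_choice(values)
        reward = payoff_by_strategy[STRATEGIES[chosen]]()  # feedback from a toy environment
        values[chosen] += learning_rate * (reward - values[chosen])
    return values

if __name__ == "__main__":
    # Toy environment: probability-bet pays off often, dollar-bet pays more but rarely.
    env = {
        "probability-bet": lambda: 1.0 if random.random() < 0.8 else 0.0,
        "dollar-bet": lambda: 4.0 if random.random() < 0.2 else 0.0,
    }
    print(run_trials(env))
```

Under this sketch, whichever strategy is reinforced more reliably comes to dominate choice, which is one way the abstract's claim that learned associations govern risky preferences could be operationalised.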
| Original language | English |
|---|---|
| Publication status | Published - 9 Nov 2017 |
| Event | 58th Annual Meeting of the Psychonomic Society, Vancouver Convention Centre, Vancouver, Canada. Duration: 9 Nov 2017 → 12 Nov 2017 |
Conference
| Conference | 58th Annual Meeting of the Psychonomic Society |
|---|---|
| Country/Territory | Canada |
| City | Vancouver |
| Period | 9/11/17 → 12/11/17 |