Abstract
Theories of decision-making preferences and utility formation (e.g., normative, descriptive, and experience-based) share common assumptions and predictions. Despite their differences, normative (utilitarian), psychological descriptive, and experience-based decision theories all predict that human agents have stable and coherent preferences, informed by the consistent use of a psychological strategy or process (computational or non-computational sampling) that guides their choices between alternatives varying in risk and reward. Rather than having fixed preferences or strategies (utilitarian or non-utilitarian) for risky choice, we argue that decision preferences are constructed dynamically through strategy selection, which we model as a reinforcement-learning process. Accordingly, we found that associative learning (in supervised learning tasks) predicts strategy selection (probability-bet vs. dollar-bet strategies) and governs decision makers' risky preferences.
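To make the idea of strategy selection as reinforcement learning concrete, here is a minimal sketch in Python. The two strategy labels follow the abstract (probability-bet and dollar-bet), but the delta-rule value update, softmax choice rule, and all parameter values are illustrative assumptions for exposition, not the authors' fitted model or data.

```python
import math
import random

# Sketch: strategy selection as reinforcement learning over two candidate
# strategies. Strategy values are updated from trial-by-trial feedback
# (delta rule) and strategies are chosen by softmax over those values.
# All parameters and the feedback function are hypothetical.

STRATEGIES = ["probability-bet", "dollar-bet"]

class StrategySelector:
    def __init__(self, learning_rate=0.1, temperature=1.0):
        self.values = {s: 0.0 for s in STRATEGIES}  # learned strategy values
        self.alpha = learning_rate
        self.tau = temperature

    def choose(self):
        # Softmax selection over current strategy values.
        weights = [math.exp(self.values[s] / self.tau) for s in STRATEGIES]
        total = sum(weights)
        r, cum = random.random() * total, 0.0
        for s, w in zip(STRATEGIES, weights):
            cum += w
            if r <= cum:
                return s
        return STRATEGIES[-1]

    def update(self, strategy, reward):
        # Delta rule: move the chosen strategy's value toward the obtained reward.
        self.values[strategy] += self.alpha * (reward - self.values[strategy])

# Toy usage: over repeated gambles, the selector's preferences drift toward
# whichever strategy has produced better outcomes, so "preferences" are
# constructed by learning rather than fixed in advance.
selector = StrategySelector()
for trial in range(100):
    s = selector.choose()
    reward = random.gauss(1.0 if s == "probability-bet" else 0.5, 1.0)  # toy feedback
    selector.update(s, reward)
print(selector.values)
```

Under this kind of model, associative learning history determines which strategy currently has the higher value, and hence which risky option the decision maker prefers on a given trial.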
| Original language | English |
|---|---|
| Publication status | Published - 9 Nov 2017 |
| Event | 58th Annual Meeting of the Psychonomic Society - Duration: 11 Sept 2017 → … |
Conference
| Conference | 58th Annual Meeting of the Psychonomic Society |
|---|---|
| Period | 11/09/17 → … |