A Mechanical Analogy for the Observed Oscillations in Choice Behavior of a Population Facing a Time-Varying Reward
Poster Session D - Monday, March 9, 2026, 8:00 – 10:00 am PDT, Fairview/Kitsilano Ballroom
Nasim Nozarnejad1, Christopher Lapish2, Jeremy Seasman3, Eldon Emberly1; 1Simon Fraser University, 2Indiana University, 3University of British Columbia
Here we report on different behavioural strategies that emerge in a population confronted with time-varying rewards. Our investigation uses a two-lever choice task to study decision-making in which participants must repeatedly choose between an immediate but variable reward and a constant but delayed one. The magnitude of the immediate reward varies inversely with how frequently it is selected. For rats carrying out this task, we find two behavioural groups: patient rats, which choose to wait for the constant reward, and impatient rats, which prefer the immediate and, overall, smaller rewards. Analysis of the choices from each group shows that their decisions oscillate back and forth over time in a regular pattern. Stochastic simulations of this task using a reinforcement learning (RL) framework reproduce the experimental observations. Using a mean-field approximation, we show that the RL equations for the population dynamics can be approximated by a set of ODEs, which for this task map onto those of a damped harmonic oscillator. This mechanical analogy shows how oscillations in decision preferences occur at characteristic frequencies determined by how strongly the individual values reward, akin to the spring stiffness, and by their learning rate, which is analogous to the inverse mass. A biasing force moves the equilibrium toward either waiting or impatience. The spring analogy indicates that there are intrinsic limits to how rapidly agents can adjust their preferences while maintaining stable decision dynamics.
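A minimal sketch of the kind of stochastic RL simulation described above, not the authors' actual model: a delta-rule learner chooses between an immediate lever whose payoff falls with its recent selection frequency and a delayed lever with constant payoff. All parameter names and values (alpha, beta, freq_window, the reward magnitudes, and the specific form of the frequency dependence) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Task parameters (assumed illustrative values, not taken from the study)
n_trials = 2000
delayed_reward = 1.0      # constant payoff of the delayed lever
base_immediate = 1.5      # immediate lever's payoff when rarely chosen
freq_window = 50          # window for estimating recent selection frequency

# RL parameters; in the oscillator analogy, reward sensitivity (beta) plays
# the role of spring stiffness and the learning rate (alpha) of inverse mass
alpha = 0.1
beta = 5.0

Q = np.zeros(2)                            # action values: 0 = immediate, 1 = delayed
choices = np.zeros(n_trials, dtype=int)

for t in range(n_trials):
    # Softmax choice between the two levers
    p_immediate = 1.0 / (1.0 + np.exp(-beta * (Q[0] - Q[1])))
    a = 0 if rng.random() < p_immediate else 1
    choices[t] = a

    # Immediate reward decreases with its recent selection frequency (assumed form)
    recent = choices[max(0, t - freq_window):t + 1]
    freq_immediate = np.mean(recent == 0)
    if a == 0:
        r = base_immediate * (1.0 - freq_immediate) + rng.normal(0, 0.05)
    else:
        r = delayed_reward

    # Standard delta-rule value update
    Q[a] += alpha * (r - Q[a])

# A smoothed choice trace exposes slow oscillations in lever preference
window = 100
smoothed = np.convolve(choices == 0, np.ones(window) / window, mode="valid")
print("Fraction of immediate choices over time (first few windows):", smoothed[:5])
```

Under these assumptions, the smoothed preference for the immediate lever drifts up and down rather than settling at a fixed value, which is the population-level oscillation the abstract attributes to damped-harmonic-oscillator dynamics.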
Topic Area: THINKING: Decision making