The notion that human beings are "rational" agents has a long history in the behavioural sciences, and much work has found that people align with the predictions of idealized models on tasks that require them to update their beliefs or select the most rewarding option.
However, these models often struggle to account for other findings showing that people deviate systematically from their predictions, whether because of cognitive limitations or motivational factors. One line of my research attempts to reconcile these findings by developing models that are not only "resource-rational" (that is, assuming that agents rationally allocate limited cognitive resources to trade off accuracy against computational cost) but that also account for other forms of cost, such as the social cost of holding a belief that is not shared by one's peers, or one that conflicts with one's own sense of identity.
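One way to sketch this extended objective, purely as an illustration (the cost terms and their weights are placeholders for quantities that a specific model would need to define), is

$$
b^{*} = \arg\max_{b}\; \mathbb{E}\!\left[U(b)\right] \;-\; \lambda_{\text{comp}}\, C_{\text{comp}}(b) \;-\; \lambda_{\text{soc}}\, C_{\text{soc}}(b),
$$

where $b$ ranges over candidate beliefs, $U(b)$ is the accuracy-derived utility of holding $b$, $C_{\text{comp}}(b)$ is the computational cost of forming it, $C_{\text{soc}}(b)$ captures social and identity costs, and the $\lambda$ weights govern the trade-off.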
I develop and test this theory with a family of particle filter models, which assume that people evaluate only a limited number of hypotheses at a time and, after observing new information, update their beliefs using a relatively small number of "rejuvenation" steps, in which they sample from the posterior distribution to generate new hypotheses to consider. These models show that reasoning can approximate optimality in many circumstances, but that at other times it produces phenomena such as conservatism (underweighting new evidence), learning traps (failing to update beliefs when new hypotheses are difficult to generate), and belief polarization (diverging beliefs in response to the same evidence).
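To make the mechanics concrete, here is a minimal sketch of such a particle filter in Python. It assumes a deliberately simple setting (scalar hypotheses, a Gaussian prior and likelihood, and a random-walk Metropolis-Hastings rejuvenation kernel); the particle count, step count, and noise parameters are illustrative choices, not values taken from the models themselves.

```python
import numpy as np

# A minimal sketch of a particle filter for belief updating. Hypotheses are
# scalar values (e.g., an unknown quantity) and observations are Gaussian
# around the true value. All names and parameter settings are illustrative.

rng = np.random.default_rng(0)

N_PARTICLES = 10      # limited number of hypotheses held in mind at once
N_REJUVENATION = 3    # small number of rejuvenation steps per update
PRIOR_STD = 5.0       # assumed prior breadth over hypotheses
OBS_NOISE = 1.0       # assumed observation noise (standard deviation)

def log_posterior(hypotheses, obs_history):
    """Unnormalized log-posterior: Gaussian prior times Gaussian likelihood."""
    log_prior = -0.5 * (hypotheses / PRIOR_STD) ** 2
    obs = np.asarray(obs_history)[:, None]                  # shape (T, 1)
    log_lik = -0.5 * np.sum((obs - hypotheses) ** 2, axis=0) / OBS_NOISE**2
    return log_prior + log_lik

# Initialize the limited hypothesis set by sampling from the prior.
particles = rng.normal(0.0, PRIOR_STD, size=N_PARTICLES)
history = []

def update(obs):
    """Reweight, resample, and rejuvenate after one new observation."""
    global particles
    history.append(obs)

    # 1. Reweight each hypothesis by the likelihood of the new observation.
    log_w = -0.5 * (obs - particles) ** 2 / OBS_NOISE**2
    weights = np.exp(log_w - log_w.max())
    weights /= weights.sum()

    # 2. Resample: hypotheses that explain the data survive and multiply.
    particles = particles[rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)]

    # 3. Rejuvenate: a few Metropolis-Hastings moves targeting the posterior
    #    generate new candidate hypotheses. With only a few steps, particles
    #    stay near old hypotheses, one route to conservatism and learning traps.
    for _ in range(N_REJUVENATION):
        proposals = particles + rng.normal(scale=0.5, size=N_PARTICLES)
        log_accept = (log_posterior(proposals, history)
                      - log_posterior(particles, history))
        take = np.log(rng.uniform(size=N_PARTICLES)) < log_accept
        particles = np.where(take, proposals, particles)

# Usage: observe 20 noisy samples of a true value of 2.0.
for obs in rng.normal(2.0, OBS_NOISE, size=20):
    update(obs)
print("posterior mean estimate:", particles.mean())
```

The key resource constraints appear as the two small constants: with few particles, hypotheses that are never sampled cannot be entertained, and with few rejuvenation steps, new hypotheses remain close to old ones, so the filter can approximate the full posterior when evidence is clear yet lag behind it when it is not.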