Significance
Changing individuals' behavior is key to tackling some of today's most pressing societal challenges, such as the COVID-19 pandemic or climate change. Choice architecture interventions aim to nudge people toward personally and socially desirable behavior through the design of choice environments. Although increasingly popular, little is known about the overall effectiveness of choice architecture interventions and the conditions under which they facilitate behavior change. Here we quantitatively review over a decade of research, showing that choice architecture interventions successfully promote behavior change across key behavioral domains, populations, and locations. Our findings offer insights into the effects of choice architecture and provide guidelines for behaviorally informed policy making.
Over the past decade, choice architecture interventions or so-called nudges have received widespread attention from both researchers and policy makers. Built on insights from the behavioral sciences, this class of behavioral interventions focuses on the design of choice environments that facilitate personally and socially desirable decisions without restricting people in their freedom of choice. Drawing on more than 200 studies reporting over 450 effect sizes (n = 2,149,683), we present a comprehensive analysis of the effectiveness of choice architecture interventions across techniques, behavioral domains, and contextual study characteristics. Our results show that choice architecture interventions overall promote behavior change with a small to medium effect size of Cohen's d = 0.45 (95% CI [0.39, 0.52]). In addition, we find that the effectiveness of choice architecture interventions varies significantly as a function of technique and domain. Across behavioral domains, interventions that target the organization and structure of choice alternatives (decision structure) consistently outperform interventions that focus on the description of alternatives (decision information) or the reinforcement of behavioral intentions (decision assistance). Food choices are particularly responsive to choice architecture interventions, with effect sizes up to 2.5 times larger than those in other behavioral domains. Overall, choice architecture interventions affect behavior relatively independently of contextual study characteristics such as the geographical location or the target population of the intervention. Our analysis further reveals a moderate publication bias toward positive results in the literature. We end with a discussion of the implications of our findings for theory and behaviorally informed policy making.
Many of today's most pressing societal challenges, such as the successful navigation of the COVID-19 pandemic or the mitigation of climate change, call for substantial changes in individuals' behavior. Whereas microeconomic and psychological approaches based on rational agent models have traditionally dominated the discussion about how to achieve behavior change, the release of Thaler and Sunstein's book Nudge: Improving Decisions about Health, Wealth, and Happiness (1) widely introduced a complementary intervention approach known as choice architecture or nudging, which aims to change behavior by (re)designing the physical, social, or psychological environment in which people make decisions while preserving their freedom of choice (2). Since the publication of the first edition of Thaler and Sunstein (1) in 2008, choice architecture interventions have seen an immense increase in popularity (Fig. 1). However, little is known about their overall effectiveness and the conditions under which they facilitate behavior change, a gap the present meta-analysis aims to address by analyzing the effects of the most widely used choice architecture techniques across key behavioral domains and contextual study characteristics.
Number of citations of Thaler and Sunstein (1) between 2008 and 2020. Counts are based on a citation search in Web of Science.
Traditional microeconomic intervention approaches are often built around a rational agent model of decision making, which assumes that people base their decisions on known and consistent preferences that aim to maximize the utility, or value, of their actions. In determining their preferences, people are thought to engage in an exhaustive analysis of the probabilities and potential costs and benefits of all available options to identify which option provides the highest expected utility and is thus the most favorable (3). Interventions aiming to change behavior are accordingly designed to increase the utility of the desired option, either by educating people about the existing costs and benefits of a certain behavior or by creating entirely new incentive structures by means of subsidies, tax credits, fines, or similar economic measures. Likewise, traditional psychological intervention approaches explain behavior as the result of a deliberate decision making process that weighs and integrates internal representations of people's belief structures, values, attitudes, and norms (4, 5). Interventions accordingly focus on measures such as information campaigns that aim to shift behavior through changes in people's beliefs or attitudes (6).
Over the past years, intervention approaches informed by research in the behavioral sciences have emerged as a complement to rational agent-based approaches. They draw on an alternative model of decision making which acknowledges that people are bounded in their ability to make rational decisions. Rooted in dual-process theories of cognition and information processing (7), this model recognizes that human behavior is not always driven by the elaborate and rational thought processes assumed by the rational agent model but instead often relies on automatic and computationally less intensive forms of decision making that allow people to navigate the demands of everyday life in the face of limited time, available information, and computational power (8, 9). Boundedly rational decision makers often construct their preferences ad hoc based on cognitive shortcuts and biases, which makes them susceptible to supposedly irrational contextual influences, such as the way in which information is presented or structured (10–12). This susceptibility to contextual factors, while seemingly detrimental to decision making, has been identified as a promising lever for behavior change because it offers the opportunity to influence people's decisions through simple changes in the so-called choice architecture that defines the physical, social, and psychological context in which decisions are made (2). Rather than relying on education or significant economic incentives, choice architecture interventions aim to guide people toward personally and socially desirable behavior by designing environments that anticipate and integrate people's limitations in decision making to facilitate access to decision-relevant information, support the evaluation and comparison of available choice alternatives, or reinforce previously formed behavioral intentions (13) (see Table 1 for an overview of intervention techniques based on choice architecture*).
Taxonomy of choice architecture categories and intervention techniques
Contrary to the assumption of the rational agent model, people rarely have access to all relevant information when making a decision. Instead, they tend to base their decisions on information that is directly available to them at the moment of the decision (14, 15) and to discount or even ignore information that is too complex or meaningless to them (16, 17). Choice architecture interventions based on the provision of decision information aim to facilitate access to decision-relevant information by increasing its availability, comprehensibility, and/or personal relevance to the decision maker. One way to achieve this is to provide social reference information that reduces the ambiguity of a situation and helps overcome uncertainty about appropriate behavioral responses. In a natural field experiment with more than 600,000 US households, for instance, Allcott (18) demonstrated the effectiveness of descriptive social norms in promoting energy conservation. Specifically, the study showed that households that regularly received a letter comparing their own energy consumption to that of similar neighbors reduced their consumption by an average of 2%. This effect was estimated to be equivalent to that of a short-term electricity price increase of 11 to 20%. Other examples of decision information interventions include measures that increase the visibility of otherwise covert information (e.g., feedback devices and nutrition labels; refs. 19, 20), or that translate existing descriptions of choice options into more comprehensible or relevant information (e.g., through simplifying or reframing information; ref. 21).
Not only do people have limited access to decision-relevant information, but they often refrain from engaging in the elaborate cost-benefit analyses assumed by the rational agent model to evaluate and compare the expected utility of all choice options. Instead, they use contextual cues about the way in which choice alternatives are organized and structured within the decision environment to inform their behavior. Choice architecture interventions built around changes in the decision structure utilize this context dependency to influence behavior through the arrangement of choice alternatives or the format of decision making. One of the most prominent examples of this intervention approach is the choice default, or the preselection of an option that is imposed if no active choice is made. In a study comparing organ donation policies across European countries, Johnson and Goldstein (22) demonstrated the impact of defaults on even highly consequential decisions, showing that in countries with presumed consent laws, which by default register individuals as organ donors, the rate of donor registrations was nearly 60 percentage points higher than in countries with explicit consent laws, which require individuals to formally agree to becoming an organ donor. Other examples of decision structure interventions include changes in the effort related to choosing an option (23), the range or composition of options (24), and the consequences attached to options (25).
Even if people make a deliberate and potentially rational decision to change their behavior, limited attentional capacities and a lack of self-control may prevent this decision from actually translating into the desired actions, a phenomenon described as the intention–behavior gap (26). Choice architecture interventions that provide measures of decision assistance aim to bridge the intention–behavior gap by reinforcing self-regulation. One example of this intervention approach is commitment devices, which are designed to strengthen self-control by removing psychological barriers such as procrastination and intertemporal discounting that often stand in the way of successful behavior change. Thaler and Benartzi (27) demonstrated the effectiveness of such commitment devices in a large-scale field study of the Save More Tomorrow program, showing that employees increased their average saving rates from 3.5 to 13.6% when committing in advance to allocating parts of their future salary increases toward retirement savings. If applied across the United States, this program was estimated to increase total annual retirement contributions by approximately $25 billion for each 1% increase in saving rates. Other examples of decision assistance interventions are reminders, which affect decision making by increasing the salience of the intended behavior (28).
Despite the growing interest in choice architecture, only a few attempts have been made to quantitatively integrate the empirical evidence on its effectiveness as a behavior change tool (29–32). Previous studies have mostly been restricted to the analysis of a single choice architecture technique (33–35) or a specific behavioral domain (36–39), leaving important questions unanswered, including how effective choice architecture interventions are overall in changing behavior and whether there are systematic differences across choice architecture techniques and behavioral domains that may so far have remained undetected and that may offer new insights into the psychological mechanisms that drive choice architecture interventions.
The aim of the present meta-analysis was to address these questions by first quantifying the overall effect of choice architecture interventions on behavior and then providing a systematic comparison of choice architecture interventions across different techniques, behavioral domains, and contextual study characteristics to answer 1) whether some choice architecture techniques are more effective in changing behavior than others, 2) whether some behavioral domains are more receptive to the effects of choice architecture interventions than others, 3) whether choice architecture techniques differ in their effectiveness across varying behavioral domains, and finally, 4) whether the effectiveness of choice architecture interventions is impacted by contextual study characteristics such as the location or target population of the intervention. Drawing on an exhaustive literature search that yielded more than 200 published and unpublished studies, this comprehensive analysis presents important insights into the effects and potential boundary conditions of choice architecture interventions and provides an evidence-based guideline for selecting behaviorally informed intervention measures.
Our meta-analysis of 455 effect sizes from 214 publications (N = 2,149,683) revealed a statistically significant effect of choice architecture interventions on behavior (Cohen's d = 0.45, 95% CI [0.39, 0.52], t(340) = 14.38, P < 0.001) (Fig. 2). Using conventional criteria, this effect can be classified as small to medium in size (40). The effect size was reliable across several robustness checks, including the removal of influential outliers, which marginally decreased the overall size of the effect but did not change its statistical significance (d = 0.42, 95% CI [0.37, 0.46], t(338) = 17.06, P < 0.001). Additional leave-one-out analyses at the individual effect size level and the publication level found the effect of choice architecture interventions to be robust to the exclusion of any one effect size and publication, with d ranging from 0.43 to 0.46 and all P < 0.001.
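To illustrate how such a pooled estimate is obtained, the sketch below implements a simple two-level DerSimonian–Laird random-effects model on invented study data. This is a deliberate simplification: the meta-analysis itself fits a three-level model with cluster-robust inference, and all effect sizes and standard errors here are hypothetical.

```python
import numpy as np

def dersimonian_laird(d, se):
    """Pool effect sizes with a DerSimonian-Laird random-effects model.

    d  : study-level effect sizes (Cohen's d)
    se : their standard errors
    Returns (pooled_d, pooled_se, tau2).
    """
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = 1.0 / se**2                        # fixed-effect (inverse-variance) weights
    d_fe = np.sum(w * d) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (d - d_fe) ** 2)        # Cochran's Q statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)  # between-study variance estimate
    w_re = 1.0 / (se**2 + tau2)            # random-effects weights
    pooled = np.sum(w_re * d) / np.sum(w_re)
    pooled_se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled_se, tau2

# Toy data: five hypothetical intervention studies
d  = [0.30, 0.55, 0.20, 0.70, 0.45]
se = [0.10, 0.08, 0.12, 0.15, 0.09]
pooled, pooled_se, tau2 = dersimonian_laird(d, se)
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"d = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], tau^2 = {tau2:.3f}")
```

Imprecise studies receive less weight, and the between-study variance τ² widens every study's effective uncertainty, which is why a random-effects pooled estimate is more conservative than its fixed-effect counterpart.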
Forest plot of all effect sizes (k = 455) included in the meta-analysis with their corresponding 95% confidence intervals. Extracted Cohen's d values ranged from −0.69 to 4.69. The proportion of true to total variance was estimated at I² = 99.67%. ***P < 0.001.
The total heterogeneity was estimated at τ² = 0.23, indicating considerable variability in the effect sizes of choice architecture interventions. More specifically, the dispersion of effect sizes suggests that while the majority of choice architecture interventions will successfully promote the desired behavior change with a small to large effect size, 15% of interventions are likely to backfire, i.e., reduce or even reverse the desired behavior, with a small to medium effect (95% prediction interval [−0.48, 1.39]) (40–42).
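Both the prediction interval and the backfire probability can be approximately reproduced from the reported summary statistics alone, assuming normally distributed true effects. In the sketch below, the pooled d and τ² are taken from the text; the SE of the pooled estimate is an assumption backed out of the reported confidence interval, and a normal (rather than t) quantile is used, so the numbers match only approximately.

```python
import math

# Summary statistics from the text (SE is inferred from the 95% CI [0.39, 0.52])
d_pooled = 0.45    # overall Cohen's d
se_pooled = 0.033  # approx. SE of the pooled estimate (assumed)
tau2 = 0.23        # between-study variance

# 95% prediction interval for the true effect of a new intervention:
# combines uncertainty in the pooled mean with between-study heterogeneity
sd_new = math.sqrt(tau2 + se_pooled**2)
lo, hi = d_pooled - 1.96 * sd_new, d_pooled + 1.96 * sd_new
print(f"95% prediction interval: [{lo:.2f}, {hi:.2f}]")

# Probability that a new intervention backfires (true effect < 0),
# assuming true effects are N(d_pooled, tau2)
p_backfire = 0.5 * math.erfc((d_pooled / math.sqrt(tau2)) / math.sqrt(2))
print(f"P(backfire) ~ {p_backfire:.0%}")
```

Under these assumptions the interval comes out near [−0.49, 1.39] and the backfire probability in the mid-teens, consistent with the figures reported above.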
Visual inspection of the relation between effect sizes and their corresponding SEs (Fig. 3) revealed an asymmetric distribution that suggested a one-tailed overrepresentation of positive effect sizes in studies with comparatively low statistical power (43). This finding was formally confirmed by Egger's test (44), which found a positive association between effect sizes and SEs (b = 2.28, 95% CI [1.31, 3.25], t(339) = 4.61, P < 0.001). Together, these results point to a publication bias in the literature that may favor the reporting of successful as opposed to unsuccessful implementations of choice architecture interventions in studies with small sample sizes. Sensitivity analyses imposing a priori weight functions on a simplified random effects model suggested that this one-tailed publication bias could have potentially affected the estimate of our meta-analytic model (43). Assuming a moderate one-tailed publication bias in the literature attenuated the overall effect size of choice architecture interventions by 26.79% from Cohen's d = 0.42, 95% CI [0.37, 0.46], and τ² = 0.20 (SE = 0.02) to d = 0.31 and τ² = 0.23. Assuming a severe one-tailed publication bias attenuated the overall effect size even further to d = −0.03 and τ² = 0.34; however, this assumption was only partially supported by the funnel plot. Although our general conclusion about the effects of choice architecture interventions on behavior remains the same in the light of these findings, the true effect size of interventions is likely to be smaller than estimated by our meta-analytic model due to the overrepresentation of positive effect sizes in our sample.
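Egger's test amounts to a meta-regression of effect sizes on their standard errors, weighted by inverse variance; a positive slope indicates that small (imprecise) studies report systematically larger effects. The following is a minimal weighted-least-squares sketch on simulated data with a built-in small-study effect; all numbers are hypothetical.

```python
import numpy as np

def eggers_test(d, se):
    """Egger's regression test: WLS regression of effect size on SE,
    with inverse-variance weights. Returns (slope, slope_se, t_stat)."""
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = 1.0 / se**2
    X = np.column_stack([np.ones_like(se), se])   # intercept + SE column
    W = np.diag(w)
    xtwx = X.T @ W @ X
    b = np.linalg.solve(xtwx, X.T @ W @ d)        # weighted LS coefficients
    resid = d - X @ b
    s2 = (resid @ (w * resid)) / (len(d) - 2)     # weighted residual variance
    se_b = np.sqrt(np.diag(np.linalg.inv(xtwx)) * s2)
    return b[1], se_b[1], b[1] / se_b[1]

# Simulated sample in which small studies (large SE) report larger effects
rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.40, 50)
d = 0.2 + 2.0 * se + rng.normal(0, se)            # built-in small-study bias
slope, slope_se, t_stat = eggers_test(d, se)
print(f"slope = {slope:.2f} (SE {slope_se:.2f}), t = {t_stat:.2f}")
```

Because the simulation injects a true SE-slope of 2.0, the test recovers a clearly positive and significant slope, mirroring the pattern the funnel plot reveals in the actual literature.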
Funnel plot displaying each observation as a function of its effect size and SE. In the absence of publication bias, observations should scatter symmetrically around the pooled effect size indicated by the gray vertical line and within the boundaries of the 95% confidence intervals shaded in white. The asymmetric distribution shown here indicates a one-tailed publication bias in the literature that favors the reporting of successful implementations of choice architecture interventions in studies with small sample sizes.
Given the high heterogeneity among effect sizes, we next tested the extent to which the effectiveness of choice architecture interventions was moderated by the type of intervention, the behavioral domain in which it was implemented, and contextual study characteristics.
Our first analysis focused on identifying potential differences between the effect sizes of decision information, decision structure, and decision assistance interventions. This analysis found that intervention category indeed moderated the effect of choice architecture interventions on behavior (F(3,337) = 9.79, P < 0.001). With average effect sizes ranging from d = 0.31 to 0.55, interventions across all three categories were effective in inducing statistically significant behavior change (all P < 0.001; Fig. 4). Planned contrasts between categories, however, revealed that interventions in the decision structure category had a stronger effect on behavior compared to interventions in the decision information (b = 0.17, 95% CI [0.03, 0.31], t(337) = 2.32, P = 0.02) and the decision assistance category (b = 0.24, 95% CI [0.11, 0.36], t(337) = 3.79, P < 0.001). No difference was found in the effectiveness of decision information and decision assistance interventions (b = −0.07, 95% CI [−0.19, 0.05], t(337) = −1.16, P = 0.25). Including intervention category as a moderator in our meta-analytic model marginally reduced the proportion of true to total variability in effect sizes from I² = 99.67% to I² = 99.57% (I²(3) = 92.44%; I²(2) = 7.13%; SI Appendix, Table S3).
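A categorical moderator analysis of this kind can be sketched as a random-effects subgroup comparison: compute an inverse-variance-weighted mean per category and contrast the category means. The data below are toy values and τ² is assumed known, whereas the actual analysis estimates it within a three-level model.

```python
import numpy as np

# Toy study-level effects grouped by intervention category (all values invented)
d   = np.array([0.55, 0.60, 0.50, 0.35, 0.30, 0.28, 0.33, 0.36])
se  = np.array([0.10, 0.12, 0.09, 0.11, 0.10, 0.12, 0.09, 0.10])
cat = np.array(["structure"] * 3 + ["information"] * 3 + ["assistance"] * 2)
tau2 = 0.02  # assumed between-study variance

w = 1.0 / (se**2 + tau2)  # random-effects weights
means = {c: np.average(d[cat == c], weights=w[cat == c]) for c in set(cat)}
sems  = {c: np.sqrt(1.0 / w[cat == c].sum()) for c in set(cat)}

# Planned contrast: decision structure vs. decision information
diff = means["structure"] - means["information"]
diff_se = np.sqrt(sems["structure"]**2 + sems["information"]**2)
print(f"contrast = {diff:.2f}, z = {diff / diff_se:.2f}")
```

With real data, the significance of such contrasts is judged against cluster-robust standard errors, since multiple effect sizes from the same publication are not independent.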
Forest plot of effect sizes across categories of choice architecture intervention techniques (see Table 1 for a more detailed description of techniques). The position of squares on the x axis indicates the effect size of each respective intervention technique. Bars indicate the 95% confidence intervals of effect sizes. The size of squares is inversely proportional to the SE of effect sizes. Diamond shapes indicate the average effect size and confidence intervals of intervention categories. The solid line represents an effect size of Cohen's d = 0. The dotted line represents the overall effect size of choice architecture interventions, Cohen's d = 0.45, 95% CI [0.39, 0.52]. Identical letter superscripts indicate statistically significant (P < 0.05) pairwise comparisons.
To test whether the effect sizes of the three intervention categories adequately represented differences on the underlying level of choice architecture techniques, we reran our analysis with intervention technique rather than category as the key moderator. As illustrated in Fig. 4, each of the nine intervention techniques was effective in inducing behavior change, with Cohen's d ranging from 0.30 to 0.62 (all P < 0.01). Within intervention categories, techniques were generally consistent in their effect sizes (all contrasts, P > 0.05). Between categories, however, techniques showed partly substantial differences in effect sizes. In line with the previously reported results, techniques within the decision structure category were consistently stronger in their effects on behavior than intervention techniques within the decision information or the decision assistance category. The observed effect size differences between the decision information, the decision structure, and the decision assistance category were thus unlikely to be driven by a single intervention technique but rather representative of the entire set of techniques within those categories.
Following our analysis of the effectiveness of varying types of choice architecture interventions, we next focused on identifying potential differences among the behavioral domains in which interventions were implemented. As illustrated in Fig. 5, effect sizes varied quite substantially across domains, with Cohen's d ranging from 0.25 to 0.72. Our analysis confirmed that the effectiveness of interventions was moderated by domain (F(6,334) = 4.62, P < 0.001). Specifically, it showed that choice architecture interventions, while generally effective in inducing behavior change across all six domains, had a particularly strong effect on behavior in the food domain, with d = 0.72 (95% CI [0.49, 0.95]). No other domain showed comparably large effect sizes (all contrasts, P < 0.05). The smallest effects were observed in the financial domain. With an average intervention effect of d = 0.25 (95% CI [0.12, 0.37]), this domain was less receptive to choice architecture interventions than the other behavioral domains we investigated. Introducing behavioral domain as a moderator in our meta-analytic model marginally reduced the ratio of true to total heterogeneity among effect sizes from I² = 99.67% to I² = 99.58% (I²(3) = 94.56%; I²(2) = 5.02%; SI Appendix, Table S3).
Forest plot of effect sizes across categories of choice architecture interventions and behavioral domains. The position of squares on the x axis indicates the effect size of each intervention category within a behavioral domain. Bars indicate the 95% confidence intervals of effect sizes. The size of squares is inversely proportional to the SE of effect sizes. Diamond shapes indicate the overall effect size and confidence intervals of choice architecture interventions within a behavioral domain. The solid line represents an effect size of Cohen's d = 0. The dotted line represents the overall effect size of choice architecture interventions, Cohen's d = 0.45, 95% CI [0.39, 0.52]. Identical letter superscripts indicate statistically significant (P < 0.05) pairwise comparisons.
Comparing the effectiveness of decision information, decision structure, and decision assistance interventions across domains consistently showed interventions within the decision structure category to have the largest effect on behavior, with Cohen's d ranging from 0.33 to 0.86 (Fig. 5). This result suggests that the observed effect size differences between the three categories of choice architecture interventions were relatively stable and independent of the behavioral domain in which interventions were applied. Including the interaction of intervention category and behavioral domain in our meta-analytic model reduced the proportion of true to total effect size variability from I² = 99.67% to I² = 99.52% (I²(3) = 91.86%; I²(2) = 7.67%; SI Appendix, Table S3).
Last, we were interested in the extent to which the effect size of choice architecture interventions was moderated by contextual study characteristics, such as the location of the intervention (inside vs. outside of the United States), the target population of the intervention (adults vs. children and adolescents), the experimental setting in which the intervention was investigated (conventional laboratory experiment, artifactual field experiment, framed field experiment, or natural field experiment; ref. 45), and the year in which the data were published. As can be seen in Table 2, choice architecture interventions affected behavior relatively independently of contextual influences, since neither location nor target population had a statistically significant impact on the effect size of interventions. In support of the external validity of behavioral measures, our analysis moreover did not find any difference in effect sizes across different types of experiments. Only year of publication predicted the effect of interventions on behavior, with more recent publications reporting smaller effect sizes than older publications.
Parameter estimates of three-level meta-analytic models showing the overall effect size of choice architecture interventions as well as effect sizes across categories, techniques, behavioral domains, and contextual study characteristics
Changing individuals' behavior is key to solving some of today's most pressing societal challenges. However, how can this behavior change be achieved? Recently, more and more researchers and policy makers have approached this question through the use of choice architecture interventions. The present meta-analysis integrates over a decade's worth of research to shed light on the effectiveness of choice architecture and the conditions under which it facilitates behavior change. Our results show that choice architecture interventions promote behavior change with a small to medium effect size of Cohen's d = 0.45, which is comparable to more traditional intervention approaches like education campaigns or financial incentives (46–48). Our findings are largely consistent with those of previous analyses that investigated the effectiveness of choice architecture interventions in a smaller subset of the literature (e.g., refs. 29, 30, 32, 33). In their recent meta-analysis of choice architecture interventions across academic disciplines, Beshears and Kosowsky (30), for example, found that choice architecture interventions had an average effect size of d = 0.41. Similarly, focusing on one choice architecture technique only, Jachimowicz et al. (33) found that choice defaults had an average effect size of d = 0.68, which is slightly higher than the effect size our analysis revealed for this intervention technique (d = 0.62). Our results suggest a somewhat higher overall effectiveness of choice architecture interventions than meta-analyses that have focused exclusively on field experimental research (31, 37), a discrepancy that holds even when accounting for differences between experimental settings (45). This inconsistency in findings may in part be explained by differences in meta-analytic samples. Only 7% of the studies analyzed by DellaVigna and Linos (31), for example, meet the strict inclusion and exclusion criteria of the present meta-analysis.
Among other restrictions, these criteria excluded studies that combined multiple choice architecture techniques. While this restriction allowed us to isolate the unique effect of each individual intervention technique, it may conflict with the reality of field experimental research, which often requires researchers to leverage the effects of several choice architecture techniques to address the specific behavioral challenge at hand (see Materials and Methods for details on the literature search process and inclusion criteria). Similarly, the techniques that are available to field experimental researchers may not always align with the underlying psychological barriers to the target behavior (Table 1), decreasing their effectiveness in encouraging the desired behavior change.
Not only does choice architecture facilitate behavior change, but according to our results, it does so across a wide range of behavioral domains, population segments, and geographical locations. In contrast to theoretical and empirical work challenging its effectiveness (4951), choice architecture constitutes a versatile intervention approach that lends itself as an effective behavior change tool across many contexts and policy areas. Although the present meta-analysis focuses on studies that tested the effects of choice architecture alone, the applicability of choice architecture is not restricted to stand-alone interventions but extends to hybrid policy measures that use choice architecture as a complement to more traditional intervention approaches (52). Previous research, for example, has shown that the impact of economic interventions such as taxes or financial incentives can be enhanced through choice architecture (5355).
In addition to the overall effect size of choice architecture interventions, our systematic comparison of interventions across different techniques, behavioral domains, and contextual study characteristics reveals substantial variations in the effectiveness of choice architecture as a behavior change tool. Most notably, we find that across behavioral domains, decision structure interventions that modify decision environments to address decision makers' limited capacity to evaluate and compare choice options are consistently more effective in changing behavior than decision information interventions that address decision makers' limited access to decision-relevant information or decision assistance interventions that address decision makers' limited attention and self-control. This relative advantage of structural choice architecture techniques may be due to the specific psychological mechanisms that underlie the different intervention techniques or, more specifically, their demands on information processing. Decision information and decision assistance interventions rely on relatively elaborate forms of information processing in that the information and assistance they provide need to be encoded and evaluated in terms of personal values and/or goals to determine the overall utility of a given choice option (56). Decision structure interventions, by contrast, often do not require this type of information processing but provide a general utility boost for specific choice options that offers a cognitive shortcut for determining the most desirable option (57, 58). Accordingly, decision information and decision assistance interventions have previously been described as attempts to facilitate more deliberate decision making processes, whereas decision structure interventions have been characterized as attempts to advance more automatic decision making processes (59).
Decision information and decision assistance interventions may thus more frequently fail to induce behavior change and show overall smaller effect sizes than decision structure interventions because they may exceed people's cognitive limits in decision making more often, especially in situations of high cognitive load or time pressure.
The engagement of internal value and goal representations by decision information and decision assistance interventions introduces a second factor that may impact their effectiveness in changing behavior: the moderating influence of individual differences. Nutrition labels, a prominent example of decision information interventions, for instance, have been shown to be more frequently used by consumers who are concerned about their diet and overall health than by consumers who do not share those concerns (60). By targeting only certain population segments, information- and assistance-based choice architecture interventions may show an overall smaller effect size when assessed at the population level compared to structure-based interventions, which rely less on individual values and goals and may therefore have an overall larger impact across the whole population. From a practical perspective, this suggests that policy makers who wish to use choice architecture as a behavioral intervention measure may need to precede decision information and decision assistance interventions by an assessment and analysis of the values and goals of the target population or, alternatively, choose a decision structure approach in cases where a segmentation of the population in terms of individual differences is not possible.
In summary, the higher effectiveness of decision structure interventions may potentially be explained by a combination of two factors: 1) lower demand on information processing and 2) lower susceptibility to individual differences in values and goals. Our explanation remains somewhat speculative, however, as empirical research especially on the cognitive processes underlying choice architecture interventions is still relatively scarce (but see refs. 53, 56, 57). More research efforts are needed to clarify the psychological mechanisms that drive the impact of choice architecture interventions and determine their effectiveness in changing behavior.
Besides the effect size variations between different categories of choice architecture techniques, our results reveal considerable differences in the effectiveness of choice architecture interventions across behavioral domains. Specifically, we find that choice architecture interventions had a particularly strong effect on behavior in the food domain, with average effect sizes up to 2.5 times larger than those in the health, environmental, financial, prosocial, or other behavioral domains. A key characteristic of food choices and other food-related behaviors is the fact that they bear relatively low behavioral costs and few, if any, perceived long-term consequences for the decision maker. Previous research has found that the potential impact of a decision can indeed moderate the effectiveness of choice architecture interventions, with techniques such as gain and loss framing having a smaller effect on behavior when the decision at hand has a high, direct impact on the decision maker than when the decision has little to no impact (61). Consistent with this research, we observe not only the largest effect sizes of choice architecture interventions in the food domain but also the overall smallest effect sizes of interventions in the financial domain, a domain that predominantly represents decisions of high impact to the decision maker. This systematic variation of effect sizes across behavioral domains suggests that when making decisions that are perceived to have a substantial impact on their lives, people may be less prone to the influence of automatic biases and heuristics, and thus the effects of choice architecture interventions, than when making decisions of comparatively smaller impact.
Another characteristic of food choices that may explain the high effectiveness of choice architecture interventions in the food domain is the fact that they are often driven by habits. Commonly defined as highly automatized behavioral responses to cues in the choice environment, habits distinguish themselves from other behaviors through a particularly strong association between behavior on the one hand and choice environment on the other hand (62, 63). It is possible that choice architecture interventions benefit from this association to the extent that they target the choice environment and thus potentially alter triggers of habitualized, undesirable behaviors. To illustrate, previous research has shown that people tend to adjust their food consumption relative to portion size, meaning that they consume more when presented with large portions and less when presented with small portions (39). Here portion size acts as an environmental cue that triggers and guides the behavioral response to eat. Choice architecture interventions that target this environmental cue, for example, by changing the default size of a food portion, are likely to be successful in changing the amount of food people consume because they capitalize on the highly automatized association between portion size and food consumption. The congruence between factors that trigger habitualized behaviors and factors that are targeted by choice architecture interventions may not only explain why interventions in our sample were so effective in changing food choices but more generally indicate that choice architecture interventions are an effective tool for changing instances of habitualized behaviors (64). This finding is particularly relevant from a policy making perspective as habits tend to be relatively unresponsive to traditional intervention approaches and are therefore generally considered to be difficult to change (62). 
It should be noted, though, that because choice architecture interventions can only target the environmental cues that trigger habitualized responses, and not the association between choice environment and behavior per se, the effects of interventions are likely limited to the specific choice contexts in which they are implemented.
While the present meta-analysis provides a comprehensive overview of the effectiveness of choice architecture as a behavior change tool, more research is needed to complement and complete our findings. For example, our methodological focus on individuals as the unit of analysis excludes a large number of studies that have investigated choice architecture interventions on broader levels, such as households, school classes, or organizations, which may reduce the generalizability of our results. Future research should target these studies specifically to add to the current analysis. Similarly, our data show very high levels of heterogeneity among the effect sizes of choice architecture interventions. Although the type of intervention, the behavioral domain in which it is applied, and contextual study characteristics account for some of this heterogeneity (SI Appendix, Table S3), more research is needed to identify factors that may explain the variability in effect sizes above and beyond those investigated here. Research has recently started to reveal some of those potential moderators of choice architecture interventions, including sociodemographic factors such as income and socioeconomic status as well as psychological factors such as domain knowledge, numerical ability, and attitudes (65–67). Investigating these moderators systematically can not only provide a more nuanced understanding of the conditions under which choice architecture facilitates behavior change but may also help to inform the design and implementation of targeted interventions that take into account individual differences in the susceptibility to choice architecture interventions (68). Ethical considerations should play a prominent role in this process to ensure that potentially more susceptible populations, such as children or low-income households, retain their ability to make decisions that are in their personal best interest (66, 69, 70).
Based on the results of our own moderator analyses, additional avenues for future research may include the study of how information processing influences the effectiveness of varying types of choice architecture interventions and how the overall effect of interventions is determined by the type of behavior they target (e.g., high-impact vs. low-impact behaviors and habitual vs. one-time decisions). In addition, we identified a moderate publication bias toward the reporting of effect sizes that support a positive effect of choice architecture interventions on behavior. Future research efforts should take this finding into account and place special emphasis on appropriate sample size planning and analysis standards when evaluating choice architecture interventions. Finally, given our choice to focus our primary literature search on the terms "choice architecture" and "nudge", we recognize that the present meta-analysis may have failed to capture parts of the literature published before the popularization of this now widely used terminology, despite our efforts to expand the search beyond those terms (for details on the literature search process, see Materials and Methods). Due to the large increase in choice architecture research over the past decade (Fig. 1), however, the results presented here likely offer a good representation of the existing evidence on the effectiveness of choice architecture in changing individuals' behavior.
Few behavioral intervention measures have lately received as much attention from researchers and policy makers as choice architecture interventions. Integrating the results of more than 450 behavioral interventions, the present meta-analysis finds that choice architecture is an effective and widely applicable behavior change tool that facilitates personally and socially desirable choices across behavioral domains, geographical locations, and populations. Our results provide insights into the overall effectiveness of choice architecture interventions as well as systematic effect size variations among them, revealing promising directions for future research that may facilitate the development of theories in this still new but fast-growing field of research. Our work also provides a comprehensive overview of the effectiveness of choice architecture interventions across a wide range of intervention contexts that are representative of some of the most pressing societal challenges we are currently facing. This overview can serve as a guideline for policy makers who seek reliable, evidence-based information on the potential impact of choice architecture interventions and the conditions under which they promote behavior change.
The meta-analysis was conducted in accordance with guidelines for conducting systematic reviews (71) and conforms to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (72) standards.
We searched the electronic databases PsycINFO, PubMed, PubPsych, and ScienceDirect using a combination of keywords associated with choice architecture ("nudge" OR "choice architecture") and empirical research ("method*" OR "empiric*" OR "procedure" OR "design"). Since the terms nudge and choice architecture were established only after the seminal book by Thaler and Sunstein (1), we restricted this search to studies that were published no earlier than 2008. To compensate for the potential bias this temporal restriction might introduce to the results of our meta-analysis, we identified additional studies, including studies published before 2008, through the reference lists of relevant review articles and a search for research reports by governmental and nongovernmental behavioral science units. To reduce the possibly confounding effects of publication status on the estimation of effect sizes, we further searched for unpublished studies using the ProQuest Dissertations & Theses database and requesting unpublished data through academic mailing lists. The search concluded in June 2019, yielding a total of 9,606 unique publications.
Given the exceptionally high heterogeneity in choice architecture research, we restricted our meta-analysis to studies that 1) empirically tested one or more choice architecture techniques using a randomized controlled experimental design, 2) had a behavioral outcome measure that was assessed in a real-life or hypothetical choice situation, 3) used individuals as the unit of analysis, and 4) were published in English. Studies that examined choice architecture in combination with other intervention measures, such as significant economic incentives or education programs, were excluded from our analyses to isolate the unique effects of choice architecture interventions on behavior.
The final sample comprised 455 effect sizes from 214 publications with a pooled sample size of 2,149,683 participants (N ranging from 14 to 813,990). SI Appendix, Fig. S1 illustrates the literature search and review process. All meta-analytic data and analyses reported in this paper are publicly available on the Open Science Framework (https://osf.io/fywae/) (74).
Due to the large variation in behavioral outcome measures, we calculated Cohen's d (40) as a standardized effect size measure of the mean difference between control and treatment conditions. Positive Cohen's d values were coded to reflect behavior change in the desired direction of the intervention, whereas negative values reflected an undesirable change in behavior.
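The paper does not spell out the exact estimator used for each study design (e.g., whether a small-sample correction was applied), but the basic standardized mean difference with a pooled standard deviation can be sketched as follows; the function name and example numbers are ours:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference between two groups,
    using the pooled sample standard deviation (Cohen's d)."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled standard deviation across the two groups
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical data: positive d = change in the desired direction
d = cohens_d([5, 6, 7, 8], [4, 5, 6, 7])
```

Under the coding scheme described above, a positive d indicates that the treatment group shifted toward the behavior the intervention was designed to promote.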
To categorize systematic differences between choice architecture interventions, we coded studies for seven moderators describing the type of intervention, the behavioral domain in which it was implemented, and contextual study characteristics. The type of choice architecture intervention was classified using a taxonomy developed by Münscher and colleagues (13), which distinguishes three broad categories of choice architecture: decision information, decision structure, and decision assistance. Each of these categories targets a specific aspect of the choice environment, with decision information interventions targeting the way in which choice alternatives are described (e.g., framing), decision structure interventions targeting the way in which those choice alternatives are organized and structured (e.g., choice defaults), and decision assistance interventions targeting the way in which decisions can be reinforced (e.g., commitment devices). With its tripartite categorization framework, the taxonomy is able to capture and categorize the vast majority of choice architecture interventions described in the literature, making it one of the most comprehensive classification schemes of choice architecture techniques in the field (see Table 1 for an overview). Many alternative attempts to organize and structure choice architecture interventions are considered problematic because they combine descriptive categorization approaches, which classify interventions based on choice architecture technique, and explanatory categorization approaches, which classify interventions based on underlying psychological mechanisms, within a single framework. The taxonomy we use here adopts a descriptive categorization approach in that it organizes interventions exclusively in terms of choice architecture techniques.
We chose this approach not only to avoid common shortcomings of hybrid classification schemes, such as a reduction in the interpretability of results, but also to ensure a highly reliable categorization of interventions in the absence of psychological outcome measures that would allow us to infer explanatory mechanisms. Using a descriptive categorization approach further allowed us to generate theoretically meaningful insights that can be easily translated into concrete recommendations for policy making. Each intervention was coded according to its specific technique and corresponding category. Interventions that combined multiple choice architecture techniques were excluded from our analyses to isolate the unique effect of each approach. Based on previous reviews (73) and inspection of our data, we distinguished six behavioral domains: health, food, environment, finance, prosocial behavior, and other behavior. Contextual study characteristics included the type of experiment that had been conducted (conventional laboratory experiment, artifactual field experiment, framed field experiment, or natural field experiment), the location of the intervention (inside vs. outside of the United States), the target population of the intervention (adults vs. children and adolescents), and the year in which the data were published. Interrater reliability across a random sample of 20% of the publications was high, with Cohen's κ ranging from 0.76 to 1 (M = 0.87).
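As an illustration of this descriptive coding scheme, the three taxonomy categories and the example techniques named in the text could be represented as a simple lookup; the dict layout and the `categorize` helper are our own sketch, not the authors' coding instrument, and the technique lists are not exhaustive:

```python
# Illustrative coding scheme based on the taxonomy of Münscher and colleagues.
# Example techniques are those mentioned in the text; full lists are longer.
TAXONOMY = {
    "decision information": ["framing", "nutrition labels"],      # how options are described
    "decision structure": ["choice defaults", "portion size"],    # how options are organized
    "decision assistance": ["commitment devices", "reminders"],   # how decisions are reinforced
}

def categorize(technique):
    """Map a choice architecture technique to its taxonomy category."""
    for category, techniques in TAXONOMY.items():
        if technique in techniques:
            return category
    return "uncategorized"

category = categorize("choice defaults")
```

A purely descriptive lookup like this is what makes high interrater reliability attainable: coders only need to recognize the technique, not infer its psychological mechanism.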
We estimated the overall effect of choice architecture interventions using a three-level meta-analytic model with random effects on the treatment and the publication level. This approach allowed us to account for the hierarchical structure of our data due to publications that reported multiple relevant outcome variables and/or more than one experiment (75–77). To further account for dependency in sampling errors due to overlapping samples (e.g., in cases where multiple treatment conditions were compared to the same control condition), we computed cluster-robust SEs, confidence intervals, and statistical tests for the estimated effect sizes (78, 79).
To identify systematic differences between choice architecture interventions, we ran multiple moderator analyses in which we tested for the effects of type of intervention, behavioral domain, and study characteristics using mixed-effects meta-analytic models with random effects on the treatment and the publication level. All analyses were conducted in R using the package metafor (80).
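The actual analysis fits a three-level model with cluster-robust standard errors via metafor in R. As a much-simplified illustration of the underlying inverse-variance logic only, here is a two-level DerSimonian-Laird random-effects sketch in Python; the function name and example numbers are ours, and this deliberately omits the multilevel and cluster-robust machinery:

```python
def random_effects_pool(effects, variances):
    """Simplified DerSimonian-Laird random-effects pooling of effect sizes.
    Studies are weighted by the inverse of their sampling variance plus an
    estimated between-study variance (tau^2)."""
    k = len(effects)
    w = [1 / v for v in variances]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    # Q statistic measures heterogeneity around the fixed-effect estimate
    q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # between-study variance estimate
    w_star = [1 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    se = (1 / sum(w_star)) ** 0.5
    return pooled, se

# Hypothetical effect sizes (Cohen's d) and sampling variances
pooled, se = random_effects_pool([0.1, 0.5, 0.9], [0.02, 0.02, 0.02])
```

In the paper's three-level extension, additional random effects at the publication level absorb the dependence between effect sizes reported by the same publication, which this sketch ignores.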
This research was supported by Swiss National Science Foundation Grant PYAPP1_160571 awarded to Tobias Brosch and Swiss Federal Office of Energy Grant SI/501597-01. It is part of the activities of the Swiss Competence Center for Energy Research Competence Center for Research in Energy, Society and Transition, supported by the Swiss Innovation Agency (Innosuisse). The funding sources had no involvement in the preparation of the article; in the study design; in the collection, analysis, and interpretation of data; nor in the writing of the manuscript. We thank Allegra Mulas and Laura Pagel for their assistance in data collection and extraction.
Author contributions: S.M., M.H., U.J.J.H., and T.B. designed research; S.M. and M.H. performed research; S.M. analyzed data; and S.M., M.H., U.J.J.H., and T.B. wrote the paper.
The authors declare no competing interest.
This article is a PNAS Direct Submission.
This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2107346118/-/DCSupplemental.
*While alternative classification schemes of choice architecture interventions can be found in the literature, the taxonomy used in the present meta-analysis distinguishes itself through its comprehensiveness, which makes it a highly reliable categorization tool and allows for inferences of both theoretical and practical relevance.
Please note that our results are robust to the exclusion of nonretracted studies by the Cornell Food and Brand Laboratory, which has been criticized for repeated scientific misconduct; retracted studies by this research group were excluded from the meta-analysis.
Search terms were adapted from Szaszi et al. (73).
The effectiveness of nudging: A meta-analysis of choice architecture interventions across behavioral domains - pnas.org