The social norms approach (SNA) to changing problematic behaviors has been in use for roughly 30 years. SNA has been applied to a variety of unhealthy behaviors (e.g., seatbelt nonuse, smoking, drinking, marijuana use, bullying, sexual assault) in diverse populations ranging from elementary school pupils to adults, administered as interventions at the personal, group, institutional, or mass-public level, and in multiple countries worldwide. However, two recent systematic reviews concluded that the evidence regarding the effectiveness of SNA efforts is mixed.
The SNA assumes that a significant portion of the prevalence of a problematic behavior results from individuals trying to follow a group norm they have misperceived. SNA interventions typically communicate actual prevalence information to correct these misperceptions and thereby reduce the problematic behavior. Studies have consistently documented such misperceptions and shown that SNA marketing can change the perceived prevalence. However, many studies find little evidence of effectiveness at reducing the problematic behavior itself. Sometimes, results indicating ineffectiveness can be attributed to shortcomings in the SNA campaign implementation (e.g., inadequate dosage, too short a duration, unclear, confusing, or unbelievable messages), shortcomings of the data gathered, shortcomings of the evaluation (e.g., outcomes measured too soon, poor measures, poor design), or all three, rather than to a failure of the theory underlying the approach.
Determining whether the problematic behaviors changed is the central issue. Most SNA studies have used one-group pretest–posttest evaluation designs over relatively short durations with no control group, randomized or otherwise. Despite the often-repeated advice to include control groups in intervention designs, it rarely happens. Consequently, even studies finding evidence of effectiveness cannot rule out the alternative explanation that the same results would have occurred in the absence of the SNA effort because of a secular trend (i.e., a threat to internal validity). Conversely, without a control group, findings of little or no change in the behaviors of the group receiving an SNA intervention would be misjudged as ineffectiveness if the effort had, in fact, prevented the worsening of the problematic behavior that a secular trend in a control group would have revealed. The absence of randomized control groups in nearly all SNA studies of alcohol use among college students was a primary reason Foxcroft et al. excluded all but two published SNA marketing studies from their review for the Cochrane Database of Systematic Reviews.
To be sure, the failure to find consistent evidence of effectiveness has pushed SNA researchers to add conceptual elements that have further developed and refined the approach (e.g., types of norms, reference groups, salience, protective behaviors). But without persuasive evidence of effectiveness, continued use of the approach seems more an act of wishful thinking than a data-driven decision.
Recently, Hembroff et al. reported detailed outcome and process findings from an evaluation of a 13+ year SNA marketing campaign to reduce harm from high-risk drinking among Michigan State University (MSU) students. Virtually all the findings reported were consistent with the conclusion that the SNA marketing campaign worked. However, the question lingers as to whether these changes were caused by the SNA marketing efforts or merely reflected secular trends in this age group within American society during the same period.
The evaluation design Hembroff et al. used was a one-group time-series design with biennial measurements over a 14-year period (baseline plus seven follow-ups). Shadish et al., expanding on Campbell and Stanley’s framework for assessing evaluation designs, contend that adding a quasi-control group with its own time series of measures on critical dependent variables for the same periods greatly strengthens the design, making the multiple time-series design an “excellent quasi-experimental design, perhaps the best of the more feasible designs” (p. 57).
The purpose of this paper is to introduce a quasi-control group to the time-series design in order to address the research question of whether the trends in the group receiving the SNA treatment at MSU differ from the corresponding trends in an equivalent group not subjected to that treatment. The more specific working hypothesis is that the trends do differ, and in ways consistent with the thrust of the SNA messaging at MSU as it differed from the constraints or programmatic efforts to which similar students and non-students were subjected nationally. Testing this hypothesis requires a control group (or groups): similar subjects not exposed to the social norming campaign but on whom at least some of the same measures were taken over the same period. Because the MSU SNA marketing campaign was implemented campus-wide, no quasi-control group is possible from within MSU. Instead, a quasi-control group from outside the MSU student population is needed.
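The comparison such a multiple time-series design supports can be sketched numerically. The following is a minimal illustration only, using entirely hypothetical numbers (not data from the MSU evaluation or any actual study): a difference-in-trends estimate obtained by ordinary least squares with a group × time interaction, where the interaction coefficient is the difference between the treated and quasi-control slopes.

```python
# Minimal sketch of a difference-in-trends comparison between a treated
# group and a quasi-control group measured over the same waves.
# All numbers below are hypothetical, for illustration only.
import numpy as np

# Biennial measurement waves: years since baseline.
time = np.array([0, 2, 4, 6, 8, 10, 12, 14], dtype=float)

# Hypothetical outcome series (e.g., percent reporting a high-risk behavior).
treated = np.array([44, 42, 39, 37, 35, 33, 31, 30], dtype=float)
control = np.array([45, 44, 44, 43, 43, 42, 42, 41], dtype=float)

# Stack into one long data set with a group indicator (1 = treated).
y = np.concatenate([treated, control])
t = np.concatenate([time, time])
g = np.concatenate([np.ones_like(time), np.zeros_like(time)])

# Design matrix: intercept, time, group, group x time interaction.
X = np.column_stack([np.ones_like(t), t, g, g * t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[3] is the difference in slopes: how much faster (or slower) the
# treated group's outcome changes per year relative to the quasi-control.
slope_diff = beta[3]
print(f"difference in trends: {slope_diff:.3f} points per year")
```

A negative interaction coefficient here would indicate that the treated series declined faster than the quasi-control series, which is the pattern the working hypothesis anticipates; a published analysis would, of course, also require standard errors and attention to serial correlation across waves.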