Explaining clinical behaviors using multiple theoretical models
Implementation Science volume 7, Article number: 99 (2012)
In the field of implementation research, there is increased interest in the use of theory when designing implementation research studies involving behavior change. In 2003, we initiated a series of five studies to establish a scientific rationale for interventions to translate research findings into clinical practice by exploring the performance of a number of different, commonly used, and overlapping behavioral theories and models. We reflect on the strengths and weaknesses of the methods, the performance of the theories, and consider where these methods sit alongside the range of methods for studying healthcare professional behavior change.
These were five studies of the theory-based cognitions and clinical behaviors (taking dental radiographs, performing dental restorations, placing fissure sealants, managing upper respiratory tract infections without prescribing antibiotics, managing low back pain without ordering lumbar spine x-rays) of random samples of primary care dentists and physicians. Measures were derived for the explanatory theoretical constructs in the Theory of Planned Behavior (TPB), Social Cognitive Theory (SCT), and the illness representations specified by the Common Sense Self Regulation Model (CSSRM). We constructed self-report measures of two constructs from Learning Theory (LT), a measure of Implementation Intentions (II), and questions assessing stage in the Precaution Adoption Process. We collected data on theory-based cognitions (explanatory measures) and two interim outcome measures (stated behavioral intention and simulated behavior) by postal questionnaire survey during the 12-month period to which objective measures of behavior (collected from routine administrative sources) related. Planned analyses explored the predictive value of theories in explaining variance in intention, behavioral simulation, and behavior.
Response rates across the five surveys ranged from 21% to 48%; we achieved the target sample size for three of the five surveys. For the predictor variables, the mean construct scores were above the mid-point of the scale, with median values across the five behaviors generally above four out of seven (range 1.53 to 6.01). Across all of the theories, the highest proportion of the variance explained was always for intention and the lowest was for behavior. The Knowledge-Attitudes-Behavior Model performed poorly across all behaviors and dependent variables; CSSRM also performed poorly. For TPB, SCT, II, and LT across the five behaviors, the theories predicted median R² values of 25% to 42.6% for intention, 6.2% to 16% for behavioral simulation, and 2.4% to 6.3% for behavior.
We operationalized multiple theories measuring across five behaviors. Continuing challenges that emerge from our work are: better specification of behaviors; better operationalization of theories; how best to appropriately extend the range of theories; further assessment of the value of theories in different settings and groups; exploration of the implications of these methods for the management of chronic diseases; and a move to experimental designs to allow an understanding of behavior change.
In the field of implementation research, there are problems with trying to understand the effect of interventions to change healthcare professional behavior. When the effects of interventions are retrospectively examined within systematic reviews[1, 2], it is difficult to understand or explain the variation in effect sizes that is often seen. Given the increasing interest in understanding the causal mechanisms of action within interventions[3, 4], there is increased interest in the use of theories of behavior when designing implementation research studies to change behavior. Historically, theory has not been widely used; in a systematic review of guideline implementation studies, Davies et al. concluded that ‘there was poor justification of choice of intervention and use of theory in implementation research in the identified studies until at least 1998’. Although there are some terminological debates (reflecting the several ways in which theory can be used—being theory-based as opposed to theory-inspired), using theory has a number of clear advantages: it allows a view of why and how behaviors change; it offers an approach to how interventions can be developed; it can guide analysis; and it offers a framework to support generalizability[3, 6].
Having articulated a role for theory, a researcher then faces the next decision: which theory? To date, groupings of theories have been examined within systematic reviews (e.g., social cognitive theories); theories have been aggregated into overarching theories (almost meta-theories); or, at least in part to deal with the plethora of available options, they have been grouped and then reconfigured into common underlying conceptual domains. Against this evolving backdrop, in 2003 we initiated a series of five studies to establish a scientific rationale for interventions to translate research findings into clinical practice. We explored the performance of a number of different, commonly used, and overlapping theories and models[10–14]. Working across five behaviors and two disciplines of healthcare professionals, the studies were novel, and demanding to conduct, analyse, and interpret. Having completed the analyses of the individual studies, we present in this paper an overview of the methods and results across the five conditions and multiple theories. We reflect on the performance of the theories, the strengths and weaknesses of the methods, and consider where these methods sit alongside the range of methods for studying healthcare professional behavior change.
The methods of the studies (along with the questionnaires used) are described in detail elsewhere[10–14]. Four of the analyses have been previously reported[11–14]; we report the fifth analysis in Additional file 1 and include the questionnaire used as Additional file 2.
In summary, these were five predictive studies exploring theory-based cognitions as predictors across five different clinical behaviors within a series of random samples of primary care dentists and primary care physicians in Grampian, Tayside and Lothian in Scotland and Durham, Newcastle and South Tees in northern England. Data on theory-based cognitions (explanatory measures) and two interim outcome measures (stated behavioral intention and simulated behavior) were collected by a single postal questionnaire survey that took place during the 12-month period to which data on objective measures of behavior (collected from routine administrative sources) related.
Planned analyses explored the predictive value of theories and theory-based cognitions in explaining variance in the behavioral data.
Explanatory measures and theories
Explanatory measures were chosen to reflect both volitional (processes that are aware, deliberative, reflective, and may therefore be considered subject to volition) and non-volitional (processes that are associative, automatic, impulsive, and may therefore be considered not subject to volition; here represented by Learning Theory, or LT) theoretical constructs that have been found to predict behavior. All measures were included in each study. Measures were derived for the constructs in the Theory of Planned Behavior (TPB), Social Cognitive Theory (SCT)[16, 17], and for illness representations specified by the Common Sense Self Regulation Model (CSSRM). We constructed self-report measures of two constructs from LT (habitual behavior and anticipated consequences)[19, 20], a measure of implementation intentions (II), and questions to assess stage in the process of change as specified by the Precaution Adoption Process (PAP)[22, 23].
The TPB of Ajzen and Fishbein proposes that the strength of an individual’s behavioral intention to engage in a behavior, and the degree of control they feel they have over that behavior (perceived behavioral control), are the proximal determinants of engaging in it. The perceived behavioral control construct is closely related to the concept of self-efficacy in SCT. The TPB also proposes that behavioral intention strength is determined by three variables: attitudes towards the behavior (the extent to which the behavior will result in consequences that the person values); subjective norms for the behavior (the belief that other people whose opinion influences the person think the person should engage in the behavior); and perceived behavioral control over it. These variables in turn are based upon salient behavioral, normative, and control beliefs about the behavior. Each predictor variable may be measured directly (by asking respondents about their overall attitude) or indirectly (by asking respondents about specific behavioral beliefs and outcome evaluations). Ajzen suggests that behavior change is brought about by interventions that change these beliefs.
Bandura’s SCT proposes that behavior is determined by three kinds of expectancies: situation-outcome expectancies, outcome expectancies, and self-efficacy expectancies. Situation-outcome expectancies are beliefs about how events are connected (e.g., a diagnosis of cancer is likely to result in death). Outcome expectancies refer to beliefs about the consequences of performing a behavior (e.g., if I stop smoking, I will put on weight). Self-efficacy expectancies are beliefs about one’s ability to perform the behavior (e.g., I can stop smoking). Self-efficacy has been found to be the strongest determinant of behavior, and the theory proposes four methods of changing self-efficacy in order to change behavior: providing mastery experiences (i.e., successful behavioral performance); modelling (i.e., observation of successful behavioral performance); verbal persuasion; and addressing attributions for physiological and affective states.
Leventhal’s CSSRM proposes that individuals attempt to make sense of illness by making use of pre-existing knowledge or schemas to develop both cognitive representations of the illness (including its identity, consequences, timeline, cause, and cure/control) and emotional representations of the threat involved. These representations give rise to behavioral responses. For patients, behavioral responses may include going to see a doctor, taking prescribed or non-prescribed medicines, or participating in rehabilitation programmes. For clinicians, responses may include referring a patient for diagnostic tests, prescribing a drug, or restoring a carious tooth. The model is described as self-regulatory because this is seen as a dynamic process. The aim of the actions that people take in response to a health threat is to reduce the degree of threat or to restore their own (or their patient’s) emotional equilibrium. The individual’s own understanding of the health condition and their personal emotional reaction are central to the self-regulatory approach. It seems likely that health professionals also experience emotional reactions to some conditions (e.g., terminal conditions in children) or patient groups, for example ‘heart sink’ reactions, which may influence their practice. In this model, behavior change results from changes in the cognitive representations or in the responses available to manage either the threat or the emotions.
Learning theory proposes that behaviors that have positive consequences for the individual (such as remuneration) are likely to be repeated, whereas those that have unpleasant consequences will become less frequent. Learning happens when consequences occur if and only if the behavior has occurred, and can take a variety of forms, from material rewards (e.g., financial incentives), through social rewards (e.g., receiving praise), to personally salient rewards (e.g., achieving a desired goal). The principle that positive consequences promote repetition of behavior is well established and has been widely and successfully used to understand behavior and behavior change. As behaviors are repeatedly rewarded, they may become ‘habitual’ and, if reward is scheduled appropriately, the behavior may be maintained with a reduced frequency of reward; if a behavior is increased by being rewarded and the reward is then removed without planned scheduling, the learned behavior will be ‘extinguished’ unless some other form of reward is available.
‘Implementation Intentions’ are explicit ‘if-then’ action plans about when and where a goal intention will be achieved. Gollwitzer has made the distinction between goal intentions and II. A goal intention is an intention to perform a behavior or achieve a goal (e.g., I intend to reduce the number of referrals I make for lumbar spine x-rays). This is conceptually close to the behavioral intention construct in the TPB. Gollwitzer argues that by creating an II, people effectively transfer control of the behavior to the environment—establishing cues to action. For example: ‘When a patient tells me about their low back pain, I will explain the pros and cons of an x-ray to them.’
Stage theories propose that behavior change occurs in a stepwise process, rather than in the linear fashion implied by motivational or action theories. From a stage theory perspective, interventions to facilitate change will be most effective if they are tailored to the stage an individual has reached within this process. While there are differences between the stage models in the number and nature of stages proposed, stage theories typically distinguish motivation and action steps[17, 22, 25]. In the PAP model, Weinstein has proposed an additional early stage when individuals may be unaware of the need for behavior change, a stage which may have particular relevance to the early stages of the implementation of evidence.
A common implicit model is the Knowledge-Attitudes-Behavior model. This assumes that a change in knowledge will produce a change in attitude, and this will, in turn, produce a change in behavior. While this model has some value, it should be tested rather than assumed before being used as a basis for behavior change. Accordingly, we examined the ability of knowledge to predict behavior.
The questions were developed based on the standard methods for each theory where possible. Additionally, for each behavior, five knowledge questions, each addressing a point for which there was good evidence, were developed by the study team. Table 1 provides a summary of the explanatory measures used in each of the studies (with examples from the Managing Low Back Pain Without Ordering Lumbar Spine X-rays study). Questions were rated on a seven-point Likert scale from ‘strongly disagree’ to ‘strongly agree.’ Scores were adjusted so that a higher score equated with a greater propensity towards evidence-based behavior. Exemplar questionnaires are available through previous publications[11, 13, 14] and in Additional file 2.
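The score adjustment described above amounts to reverse-scoring negatively keyed items. A minimal sketch (not the authors' code; the item names are hypothetical):

```python
# Reverse-score 7-point Likert items so that a higher score always
# indicates a greater propensity towards the evidence-based behavior.
# Reverse-keyed items are mapped 1<->7, 2<->6, 3<->5; 4 stays 4.

def adjust_scores(responses, reverse_keyed, scale_max=7):
    """responses: dict of item name -> rating (1..scale_max);
    reverse_keyed: set of item names worded against the evidence."""
    return {
        item: (scale_max + 1 - rating) if item in reverse_keyed else rating
        for item, rating in responses.items()
    }

# Hypothetical example: 'xray_useful' is worded against the evidence, so
# agreeing with it (7) should count as a low evidence-based score (1).
raw = {"manage_without_xray": 6, "xray_useful": 7}
adjusted = adjust_scores(raw, reverse_keyed={"xray_useful"})
```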
Behaviors, simulated behavior, and behavioral intention (dependent variables)
The three dental and two medical behaviors are shown in Table 2. In each case, the description of the clinical behavior contained in the questionnaires was specified according to the TACT principle—in terms of its Target patient, the Action to be performed, the Context or conditions for the action, and when the action would be performed (Time). For example, for managing low back pain, the target is patients presenting with back pain, the action is managing the patient without referring for lumbo-sacral spine (lumbar) X-rays, the context is general practice, and the time is (implicitly) now.
For each of the five behaviors, we collected two measures of behavior: an objective measure (Table 2), collected from routine data systems and gathered for a 12-month period to control for seasonal variations, and a measure of simulated behavior. For simulated behavior, key elements that may influence clinicians’ decisions to perform the behaviors were identified from the literature, the opinion of the clinical members of the research team, and the interviews with clinicians. From this, we constructed five clinical scenarios that described patients presenting in relation to the index behaviors. Respondents were asked to decide what they would do, and responses were summed to create a total score out of a possible maximum of five (Table 1).
We also elicited behavioral intention using three questions assessing intention to perform the behavior of interest. With a behavior-specific stem, the questions were worded ‘I have in mind to,’ ‘I intend to,’ and ‘I aim to’ (rated on a seven-point scale from ‘strongly disagree’ to ‘strongly agree’).
Clinicians were sent an invitation pack comprising a letter of invitation, a questionnaire consisting of theory-based cognition measures (explanatory measures), two interim outcome measures (behavioral intention and behavioral simulation), and demographic measures, plus a consent form to allow access to their data (objective behavior measure) from the routine systems, and a reply-paid envelope. Three postal reminders were sent to non-responders at two, four, and six weeks from the first mailing.
Each study included a large number of predictor variables; a power calculation suggested that, for each of the five studies, a minimum sample of 200 clinicians was required to detect an effect size of 0.40 with an alpha of 0.05 and 95% power.
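The flavor of this calculation can be illustrated with the standard Fisher-z approximation for detecting a bivariate correlation of 0.40; this sketch is not the authors' actual calculation, and it yields a smaller n than 200 because the study's figure will also have allowed for multiple predictors and anticipated non-response:

```python
import math
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.95):
    """Approximate sample size to detect a bivariate correlation r
    (two-sided test) via the Fisher z-transformation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    z_r = math.atanh(r)                            # Fisher z-transform of r
    return math.ceil(((z_alpha + z_beta) / z_r) ** 2 + 3)

# Detecting r = 0.40 at alpha = 0.05 with 95% power (bivariate case only).
n = n_for_correlation(0.40)
```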
The measures of the theoretical constructs were tested for acceptable internal consistency. If the criterion value (Cronbach’s alpha of 0.60) was not reached, items were dropped from the variable measures until the maximum possible Cronbach’s alpha was achieved or, for two-item measures, until the inter-item correlation exceeded 0.25.
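A sketch of the internal-consistency check and the drop-one-item step described above; this is an illustrative implementation under the stated criterion, not the authors' analysis code:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of columns, one list of
    respondent scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

def prune_to_alpha(items, criterion=0.60):
    """Drop one item at a time (the drop that most improves alpha)
    until the criterion is met or only two items remain."""
    while cronbach_alpha(items) < criterion and len(items) > 2:
        candidates = [items[:i] + items[i + 1:] for i in range(len(items))]
        best = max(candidates, key=cronbach_alpha)
        if cronbach_alpha(best) <= cronbach_alpha(items):
            break  # no single drop improves alpha further
        items = best
    return items
```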
We examined the relationship between predictive and dependent variables using correlation, except for the PAP, where ANOVA was used. For the two general medical behaviors in the PAP analysis, respondents were dichotomized (decided to perform, or have already performed, the desired behavior versus all other responses), and the relationship between predictive and outcome variables was examined using regression models.
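The bivariate step can be illustrated as follows; for a single predictor, the squared Pearson correlation equals the R² from a simple regression (the construct and scores here are hypothetical, not study data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical construct scores vs. intention scores for five respondents:
attitude = [3.0, 4.5, 5.0, 6.5, 2.0]
intention = [3.5, 4.0, 5.5, 6.0, 2.5]
r = pearson_r(attitude, intention)
r_squared = r ** 2  # variance in intention explained by this construct
```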
Multiple regression analyses were used to examine the predictive value of each of the theoretical models. For the behavior ‘managing low back pain without referring for a lumbar spine x-ray,’ given the distribution of the behavioral data, we used negative binomial regression (NBR) to model the predictive ability of individual theoretical constructs and complete theories.
The study was approved by the UK South East Multi-Centre Research Ethics Committee (MREC/03/01/03).
Questionnaire response rates were: 48% (214/450) for taking dental radiographs; 29% (130/450) for performing dental restorations; 29% (120/407) for placing fissure sealants; 21% (230/1100) for managing upper respiratory tract infections (URTIs) without prescribing antibiotics; and 27% (299/1100) for managing low back pain without ordering lumbar spine x-rays. We achieved the target sample size for three of the five studies.
Summary of scores
The rates of performing the objective behaviors and the scores for simulated behavior and intention are shown in Table 2.
Summaries of the mean scores for the explanatory variables are shown in Table 3 and Additional file 3. In general, the mean scores were above the midpoint of the scale, with median values across the five behaviors generally above four out of seven (range 1.53 to 6.01).
Descriptive summary of variance explained
The proportions of variance in the three dependent variables of intention, behavioral simulation, and behavior explained by the theories for each of the five behaviors are shown in Table 4 and Additional file 4.
Across all of the theories, the highest proportion of the variance explained was always for intention and the lowest was for objective behavior. The Knowledge-Attitudes-Behavior Model performed poorly across all five behaviors and three dependent variables. CSSRM performed poorly for predicting behavior, though it predicted intention for two of the behaviors. For TPB, SCT, II, and LT across the five behaviors, the theories predicted median R² values of 25% to 42.6% for intention, 6.2% to 16% for behavioral simulation, and 2.4% to 6.3% for (three of the five) objective measures of behavior.
Comparing the performance of the models, TPB was predictive of three behaviors (taking dental radiographs, managing URTIs without prescribing antibiotics, and managing low back pain), SCT and LT were predictive of (the same) two behaviors (taking dental radiographs and managing URTIs without prescribing antibiotics), action planning predicted one behavior (taking dental radiographs) while other models were not predictive of behavior.
The TPB, SCT, and LT performed well in predicting intention, confirming their effectiveness in predicting key elements of the models (behavioral intention in TPB; goals in SCT). However, LT predicted more variance in intention than either TPB or SCT for four of the five behaviors, despite having fewer predictive variables and not usually being used as a model of cognitions. The PAP (Additional file 4) did not predict any dependent variable for ‘taking dental radiographs,’ predicted behavioral simulation and intention for ‘performing dental restorations,’ predicted intention for ‘placing fissure sealants,’ predicted all three dependent variables for ‘managing URTIs without prescribing antibiotics’ and predicted intention for ‘managing low back pain without ordering lumbar spine x-rays.’
We operationalized five theories plus II and the Knowledge-Attitudes-Behavior model and conducted five large-scale postal questionnaire surveys across two professional groups of working healthcare professionals. We obtained responses covering the full range of the scales and, for three of the behaviors, we achieved the a priori sample size, although our response rates were low for all five studies. The theories predicted varying amounts of the three study dependent variables, and there was variation in the amount of variance explained both across the dependent variables and across the five behaviors. Both volitional and non-volitional theoretical constructs predicted behavior.
Strengths and weaknesses
A major strength of conducting five studies, with multiple theories/models operationalized in the same way across a range of behaviors spanning two disciplines, was that it allowed us to identify variation in the performance of the models, both between models and across behaviors. Such variation would not be apparent from single-condition, single-theory studies, and it allows a degree of interpretation that is not otherwise possible.
Because we used several theories/models and tried to adhere to best practice for measuring the constructs of each theory (e.g., having three items measuring each construct to achieve a reliable measure), the questionnaire was both long and appeared repetitive (e.g., ‘When I see patients with URTIs, I automatically consider managing them without an antibiotic’; ‘it is my usual practice not to prescribe antibiotics for patients with URTIs’; ‘I am not to prescribe antibiotics for patients with URTI’). In addition, the wording specified by each of the theories meant that, for many questions, the format may have appeared constrained and non-intuitive (e.g., ‘the symptoms of an URTI are puzzling to me’). This almost certainly affected our response rates.
We did not achieve our target number of responses for two of the surveys. This means that we may be underpowered for a number of our analyses and may underestimate the performance of the models. Higher response rates would give greater confidence in the representativeness of the data, though this aspiration is set against the evidence of modest and slowly declining response rates to postal surveys among healthcare professionals—61% over the 10 years up to 1995 and 57% in the 10 years after. Increased response rates might be achieved by fully compensating the recipients for their time. While financial incentives have shown little impact in increasing survey response rates, the amounts involved have usually been small and not equivalent to full recompense; it would be reasonable to explore the effectiveness of full recompense and to use it as a strategy in the future.
Although we used standard methods and wordings whenever possible, this was not possible for II and LT where constructs have more often been manipulated in experiments than measured in predictive studies such as those we have conducted. In addition, a number of the theories contained constructs that were similar and difficult to differentiate (e.g., TPB attitudes (indirect) and SCT outcome expectancies), and we ended up using the same questionnaire items as a measure of both.
Specifying and measuring behavior
For any study of the behavior of healthcare professionals, there is the key decision about how to measure their behavior. We used measures of behavior that were available from routine sources. We were trading off any lack of specificity of these measures of behavior compared to the behaviors we aimed to predict against the costs of collecting the most specific data. This would have involved data collection within the practice of each recipient and would have been very costly.
We were aware that the concordance between the TACT specification of the behaviors in the questionnaires and the behavioral measures available within the various routine data sets varied. Of the five sets of behavior data, the three general dental measures were all claims-based measures from a comprehensive central claims database. We were confident that the dental radiographs data provided a good measure of the behavior we specified in the questionnaires, but there were problems with the other two dental behavior measures. The fee-claim database was too imprecise to be used for the placing fissure sealants behavior (because we were interested in a specific age-defined subgroup) and was not analysed, although a randomized controlled trial running during the period of this study successfully collected practice-based data on this behavior and, using the same theories, successfully predicted it. We were specifically interested in restorations in permanent teeth in children aged <17 years, but the dental restorations data did not sufficiently specify this age group or the tooth restored, and so were less specific than we had anticipated and would have wished.
Performing or not performing target behaviors?
The two general medical behaviors we specified within the questionnaires were in the context of not performing a behavior—doctors managing URTI without prescribing antibiotics, and managing patients with low back pain without taking x-rays. Our objective measures of these behaviors were rates of prescribing antibiotics and performing x-rays. This meant that we could never know the number of patients who presented with URTI or back pain, or the proportion of those presenting who were neither treated nor referred. We had to assume that this proportion varied across practices in the same way as the data we collected. For these two behaviors, ‘not performing the target behavior’ is also unlikely to correspond to one or more common alternative behaviors across all clinicians’ practices: clinicians are likely to have a number of alternative behaviors that they perform in order to avoid prescribing or ordering x-rays, rather than doing nothing. We have previously shown that generating alternative clinical behaviors can increase intention to ‘not perform’ behaviors that are counter to evidence-based practice. Identifying and specifying these alternatives may have improved the precision of our prediction in these clinical areas—assuming we could also have identified a measure of them. Given the low proportions of variance in behavior explained (with the possible exception of dental radiographs), it would seem prudent to exercise caution about the use of routinely collected data unless it is a good match to the clinical behavior specified in the survey.
Performance of the theories
The theories all predicted intention better than they predicted simulated behavior and, in turn, simulated behavior better than objective measures of behavior. This pattern is usually found and is due, at least in part, to common method (self-report) variance. It also seems logical that, as the length of the causal chain (attitudes to intention to self-reported behavior to observable behavior) increases, there is more opportunity for other factors to come into play and for prediction to become more imprecise.
The effectiveness of LT (and its consideration of habit) in predicting behavior raises the importance of considering non-volitional, non-deliberative aspects of performing clinical behaviors, and the value of dual processing models (i.e., models which propose both deliberative and associative or automatic processes) as explanatory models of professional behavior and as ways of analysing such data[26, 27, 32].
Prediction of behavior by knowledge was included because the Knowledge-Attitudes-Behavior model is one of the most common implicit models used in medicine as a basis for changing behavior; given its poor performance, it would be easy to justify omitting it from further study and sensible to suggest that it is not a viable model for promoting the uptake of evidence-based practice. However, because we included only five questions and may not have sampled the critical knowledge, there may be different measures of knowledge that are predictive. The CSSRM performed less well than the other models. The CSSRM is a model that was developed specifically in the context of an individual’s illness and ‘health’ behaviors and thus has a different standpoint from the other theories. We operationalized it to assess a clinician’s view about a condition they themselves were not experiencing, but one that they had to treat. The poor results for this model across our studies suggest that clinicians’ perceptions of a condition are, in and of themselves, unlikely to be taken into account when choosing the behavior to treat it.
Several of the theories that we used performed well and should be used further in implementation research (TPB, SCT, II, LT). Given the length of any questionnaire using multiple models (and our experience of low response rates), it is important to consider ways of shortening the instruments. A number of simple steps seem worth considering. Researchers should consider removing models that do not consistently perform well (CSSRM and the Knowledge-Attitudes-Behavior model) from these studies. In future, it might be possible to select theories that explain three key elements of behavior change: how motivation to change behavior is developed; how motivated individuals turn this motivation into action; and how behavior may be changed without the development of a deliberative intention to change.
It would also be important to include theories or models that not only explain behavior change that has occurred, but also suggest how to change behavior; both LT and SCT give clear guidance on how to change behavior, while this is less clear for TPB. Further, II have been shown to be a method of changing behavior.
A further option as a prior step to developing an intervention could be to develop a screening instrument that uses single (or small numbers of) item measures of key constructs. For each behavior, an investigator could ask a single intention question, a single action question, and a single question tapping non-volitional processes in order to gain a profile of the influences on the behavior. The advantage of such a method is that the initial approach involves a very short instrument. If intention scores were high, there would be little additional benefit in measuring any pre-intentional measures because strengthening any of them could not produce a large increase in intention; under such circumstances, exploring further the post-intentional or contextual/organisational factors would be a better use of resources. A similar, though slightly longer approach would be to use the Theoretical Domains Framework[26, 29, 30, 32, 33] as an instrument to help identify potentially relevant domains and to target theoretical surveys addressing key domains.
Our studies represented a non-experimental empirical approach to understanding behaviors and the processes involved in enacting them. They allowed the exploration of a range of theories and the potential for subsequent exclusion of theories that did not predict behaviors. They also allowed an understanding of both volitional and non-volitional processes. However, they cannot quantify behavior change or explain causal mechanisms; doing so would require experimental designs, using either interim outcomes[27, 28, 34] or actual clinical behaviors. We have taken the ideas from these studies forward, using one of these behaviors to identify and model the performance of an intervention[28, 36]. Further, we have tested a clinical intervention based on LT, contrasting it with an intervention based on a knowledge model, in increasing dentists’ use of fissure sealants for children; the LT intervention, providing contingent consequences for each behavior, was effective in changing behavior, while the knowledge-based intervention, providing additional education, did not influence behavior.
Within these five studies, and many other theory-based studies reported in the literature, the behaviors are those enacted by individuals in isolation from other healthcare professionals. However, much of the work of primary care medical practice involves the behaviors of teams of individuals (as in the management of chronic diseases) where patients are managed by the (more or less) integrated actions of teams of healthcare professionals, and this is where much future interest in the utility of theory-based methods will lie. With the unit of analysis at the level of the team (which in primary care is often the same as the practice), there is legitimate interest in the cognitions of all members of a team and the need to explore the impact of these on practice level behavior[37, 38].
We operationalized multiple theories across five clinical behaviors. Some of the theories that we used successfully predicted behavior, offering a basis for choosing between them. Continuing challenges emerging from our work include: better specifying behaviors; better operationalizing theories; considering how best to appropriately extend the range of theories; further assessing the value of theories in different settings and groups; exploring the implications of these methods for the management of chronic diseases; and moving to experimental designs that allow an understanding of behavior change.
Foy R, Eccles M, Jamtvedt G, Grimshaw J, Baker R: What do we know about how to do audit and feedback? Pitfalls in applying evidence from a systematic review. BMC Health Serv Res. 2005, 5: 50. 10.1186/1472-6963-5-50.
Gardner B, Whittington C, McAteer J, Eccles MP, Michie S: Using theory to synthesise evidence from behaviour change interventions: the example of audit and feedback. Soc Sci Med. 2010, 70 (10): 1618-1625. 10.1016/j.socscimed.2010.01.039.
The Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG): Designing theoretically-informed implementation interventions. Implement Sci. 2006, 1: 4-
Michie S, Fixsen D, Grimshaw J, Eccles MP: Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci. 2009, 4 (40):
Davies P, Walker AE, Grimshaw JM: A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci. 2010, 5 (14):
Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N: Changing the behaviour of healthcare professionals: the use of theory in promoting the uptake of research findings. J Clin Epidemiol. 2005, 58: 107-112. 10.1016/j.jclinepi.2004.09.002.
Godin G, Belanger-Gravel A, Eccles M, Grimshaw J: Healthcare professionals’ intentions and behaviours: a systematic review of studies based on social cognitive theories. Implement Sci. 2008, 3 (36):
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC: Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009, 4 (50):
Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A, on behalf of the Psychological Theory Group: Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005, 14 (1): 26-33. 10.1136/qshc.2004.011155.
Walker A, Grimshaw JM, Johnston M, Pitts N, Steen N, Eccles MP: PRocess modelling in ImpleMEntation research: selecting a theoretical basis for interventions to change clinical practice. BMC Health Serv Res. 2003, 3: 22. 10.1186/1472-6963-3-22.
Bonetti D, Pitts NB, Eccles M, Grimshaw J, Steen N, Glidewell L, Thomas R, Maclennan G, Clarkson JE, Walker A: Applying psychological theory to evidence-based clinical practice: identifying factors predictive of taking intra-oral radiographs. Soc Sci Med. 2006, 63: 1889-1899. 10.1016/j.socscimed.2006.04.005.
Bonetti D, Johnston M, Clarkson JE, Grimshaw J, Pitts NB, Eccles MP, Steen IN, Thomas R, MacLennan G, Glidewell L: Applying psychological theories to evidence-based clinical practice: identifying factors predictive of placing preventive fissure sealants. Implement Sci. 2010, 5 (25):
Eccles MP, Grimshaw J, Johnston M, Steen IN, Pitts NB, Thomas R: Applying psychological theories to evidence-based clinical practice: identifying factors predictive of managing upper respiratory tract infections without antibiotics. Implement Sci. 2007, 2: 26. 10.1186/1748-5908-2-26.
Grimshaw JM, Eccles MP, Steen IN, Johnston M, Pitts NB, Glidewell E, Maclennen G, Thomas R, Bonetti D, Walker A: Applying psychological theories to evidence-based clinical practice: identifying factors predictive of lumbar spine x-ray for low back pain in UK primary care practice. Implement Sci. 2011, 6 (55):
Ajzen I: The theory of planned behaviour. Organ Behav Hum Decis Process. 1991, 50: 179-211. 10.1016/0749-5978(91)90020-T.
Bandura A: Self-efficacy: towards a unifying theory of behaviour change. Psychol Rev. 1977, 84: 191-215.
Schwarzer R: Self-efficacy in the adoption and maintenance of health behaviours: theoretical approaches and a new model. Self-efficacy: thought control of action. Edited by: Schwarzer R. 1992, London: Hemisphere, 217-243.
Leventhal H, Nerenz D, Steele DJ: Handbook of psychology and health. 1984, New Jersey: Lawrence Erlbaum
Blackman D: Operant conditioning: an experimental analysis of behaviour. 1974, London: Methuen
Gollwitzer PM: Goal achievement: the role of intentions. European review of social psychology. Edited by: Stroebe W, Hewstone M. 1993, Chichester, UK: Wiley, 141-185.
Weinstein ND: The precaution adoption process. Health Psychol. 1988, 7: 355-386.
Weinstein ND, Rothman A, Sutton SR: Stage theories of health behaviour: conceptual and methodological issues. Health Psychol. 1998, 17: 290-299.
Kanfer FH, Goldstein AP: Helping people change: a textbook of methods. 1986, Oxford: Pergamon, 3
Prochaska J, DiClemente C: Stages and processes of self-change in smoking: toward an integrative model of change. J Consult Clin Psychol. 1983, 51: 390-395.
Cane J, O’Connor D, Michie S: Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012, 7 (1): 37. 10.1186/1748-5908-7-37.
Bonetti D, Eccles M, Johnston M, Steen IN, Grimshaw J, Baker R, Walker A, Pitts N: Guiding the design and selection of interventions to influence the implementation of evidence-based practice: an experimental simulation of a complex intervention trial. Soc Sci Med. 2005, 60: 2135-2147. 10.1016/j.socscimed.2004.08.072.
Hrisos S, Eccles MP, Johnston M, Francis J, Kaner E, Steen IN, Grimshaw J: An intervention modelling experiment to change GPs’ intentions to implement evidence-based practice: using theory-based interventions to promote GP management of upper respiratory tract infection without prescribing antibiotics #2. BMC Health Serv Res. 2008, 8: 10. 10.1186/1472-6963-8-10.
French SD, Green SE, O’Connor DA, McKenzie JE, Francis JJ, Michie S, Buchbinder R, Schattner P, Spike N, Grimshaw JM: Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the theoretical domains framework. Implement Sci. 2012, 7 (1): 38. 10.1186/1748-5908-7-38.
Francis JJ, O’Connor D, Curran J: Theories of behaviour change synthesised into a set of theoretical groupings: introducing a thematic series on the theoretical domains framework. Implement Sci. 2012, 7 (1): 35. 10.1186/1748-5908-7-35.
Webb TL, Sheeran P: Does changing behavioural intention engender behaviour change? A meta-analysis of the experimental evidence. Psychol Bull. 2006, 132 (2): 249-268.
Beenstock J, Sniehotta FF, White M, Bell R, Milne EMG, Araujo-Soares V: What helps and hinders midwives in engaging with pregnant women about stopping smoking? A cross-sectional survey of perceived implementation difficulties among midwives in the northeast of England. Implement Sci. 2012, 7 (1): 36. 10.1186/1748-5908-7-36.
Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A: Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005, 14: 26-33. 10.1136/qshc.2004.011155.
Eccles MP, Francis J, Foy R, Johnston M, Bamford C, Grimshaw JM, Hughes J, Lecouturier J, Steen IN, Whitty PM: Improving professional practice in the disclosure of a diagnosis of dementia: a modelling experiment to evaluate a theory based intervention. Int J Behav Med. 2009, 16 (4):
Clarkson JE, Turner S, Grimshaw JM, Ramsay M, Johnston M, Scott A, Bonetti D, Tilley CJ, Maclennan G, Ibbetson R: Changing clinicians’ behavior: a randomized controlled trial of fees and education. J Dent Res. 2008, 87: 640-644. 10.1177/154405910808700701.
Hrisos S, Eccles MP, Johnston M, Francis J, Kaner E, Steen IN, Grimshaw J: Developing the content of two behavioural interventions: using theory-based interventions to promote GP management of upper respiratory tract infection without prescribing antibiotics #1. BMC Health Serv Res. 2008, 8: 11. 10.1186/1472-6963-8-11.
Eccles MP, Hawthorne G, Johnston M, Hunter M, Steen N, Francis J, Hrisos S, Elovainio M, Grimshaw JM: Improving the delivery of care for patients with diabetes through understanding optimised team work and organisation in primary care. Implement Sci. 2009, 4 (22):
Eccles MP, Hrisos S, Francis J, Stamp E, Johnston M, Hawthorne G, Steen IN, Grimshaw J, Elovainio M, Presseau J: Instrument development, data collection, and characteristics of practices, staff, and measures in the Improving Quality of Care in Diabetes (iQuaD) Study. Implement Sci. 2011, 6 (61):
The development of these studies was supported by the UK Medical Research Council Health Services Research Collaboration. The studies were funded by a grant from the UK Medical Research Council (G0001325). The Health Services Research Unit is funded by the Chief Scientist Office of the Scottish Government Health Directorates. Ruth Thomas was funded by the Wellcome Trust (GR063790MA). Jeremy Grimshaw holds a Canada Research Chair in Health Knowledge Transfer and Uptake. The views expressed in this paper are those of the authors and may not be shared by the funding bodies. We would also like to thank Jill Francis and participating primary care dentists and physicians for their contribution to these studies.
MPE is Editor in Chief of Implementation Science; Jeremy Grimshaw, Liz Glidewell, and Nick Steen are Editorial Board members. All decisions on this paper were made by another editor.
MPE with GM, MJ and DB synthesized the data across the studies. MPE led the writing of the paper. All authors contributed to sequential drafts and approved the final draft.