Developing program theory for purveyor programs
Implementation Science, volume 8, Article number: 23 (2013)
Frequently, social interventions produce less for the intended beneficiaries than was initially planned. One possible reason is that ideas embodied in interventions are not self-executing and require careful and systematic translation to put into practice. The capacity of implementers to deliver interventions is thus paramount. Purveyor organizations provide external support to implementers to develop that capacity and to encourage high-fidelity implementation behavior. Literature on the theory underlying this type of program is not plentiful. Research shows that detailed, explicit, and agreed-upon program theory contributes to and encourages high-fidelity implementation behavior. The process of developing and depicting program theory is flexible and leaves the researcher with what might be seen as an overwhelming number of options.
This study was designed to develop and depict the program theory underlying the support services delivered by a South African purveyor. The purveyor supports seventeen local organizations in delivering a peer education program to young people as an HIV/AIDS prevention intervention. Purposive sampling was employed to identify and select study participants. An iterative process that involved site visits, a desktop review of program documentation, one-on-one unstructured interviews, and a subsequent verification process, was used to develop a comprehensive program logic model.
The study resulted in a formalized logic model of how the specific purveyor is supposed to function; that model was accepted by all study participants.
The study serves as an example of how the program theory of a ‘real life’ program can be developed and depicted. It highlights the strengths and weaknesses of this evaluation approach, and provides direction and recommendations for future research on programs that employ the purveyor method to disseminate interventions.
In many resource-poor countries, governments and non-governmental organizations struggle with what is generally referred to as ‘lack of capacity.’ This is typically seen as a basic human resource issue: there are inadequate skills and expertise in an organization to perform the required tasks, and in addition, the organization has inadequate governance and management structures to support its personnel [1, 2]. A variety of organizations, referred to as purveyors and intermediary organizations, have been developed to address these problems in program implementation.
Purveyors are individuals or organizations that operate as outside experts representing a particular program; they support organizations, systems, and practitioners in striving to adopt and implement that program with fidelity. They are typically involved in specific programs or practices, while intermediary organizations tend to have a broader role in the support of multiple programs; that role generally entails building capacity within a system or agency. Provider organizations adopt interventions and employ a group of individuals, also known as implementers, who deliver the intervention to the intended beneficiaries. In this article the term ‘purveyor programs’ refers to the support services delivered by purveyors.
Purveyors and intermediary organizations are receiving increasing attention in implementation science. In August 2011, the first Global Implementation Conference was held in Washington, DC, and included a plenary session on purveyors. That conference is a good source of information on practices and the science related to implementation, and was attended by representatives from a number of organizations active in the field. A group known as the Practice Group for Purveyors and Intermediary Organizations grew out of the Washington conference.
Authors such as Elliot and Mihalic, Fixsen, Blasé, Naoom, and Wallace, and Fixsen, Naoom, Blasé, Friedman, and Wallace have made significant contributions to the field. Moreover, the National Implementation Research Network has developed a number of core implementation components, which have been applied to purveyors, especially by Fixsen et al. Research has tentatively identified some core factors deemed necessary to ensure high-fidelity implementation of interventions. These activities are known as core implementation drivers and include both technical (site selection, training sessions, consultation and coaching, staff and program evaluation) and management capacity (facilitative administrative support and systems interventions) [1, 8]. The work of purveyors is made more complicated because their support has to be assimilated by provider organizations. This is an important area for further research, where we still lack information about effective procedures [11, 12]. Fixsen et al. singled out the inconsistent implementation strategies and procedures employed by purveyors as a characteristic of the field at present. One significant finding is that ‘augmented products’, which could include a combination of customization, training, coaching, manuals, and a help desk, are adopted more easily by provider organizations and implementers.
These caveats notwithstanding, the purveyor method, in its various shapes and forms, has been applied to a wide variety of human service fields to disseminate interventions: these fields include education, juvenile justice, substance abuse, family support, medicine, nursing, mental health, and social work.
Three years ago, we became involved in the work of a South African non-governmental organization (NGO) that acts as a purveyor to seventeen local organizations that deliver a peer education intervention to young people as an HIV prevention intervention. It was not easy to classify this NGO as a purveyor or intermediary organization, in the light of the definitions given above. It contains elements of both types of organization, but eventually a decision was taken to classify it as a purveyor, because it provides external expertise, and works with one particular type of program (peer education).
It is widely known that South Africa faces a serious HIV/AIDS problem: in 2009, HIV prevalence among adults aged 15 to 49 was calculated at 17.8% of this population, with the number of adults and children aged 0 to 49 living with HIV estimated at 5,600,000. Peer education is a promising approach that is believed to have a positive effect on sexual behavior among youth [21–23]. It is an approach that has been used increasingly over recent years, especially by interventions in the field of youth HIV prevention and sexual health [24, 25]. It is currently one of the most important ingredients in preventive, supportive, and educational interventions, despite some doubt about its effectiveness.
The seventeen provider organizations work in 102 schools in South Africa and Botswana, and have trained more than 5,000 peer educators. At the time of the study, the purveyor (from here on, ‘the agency’) had been in existence for three years, with a growing demand for its services. Currently, it has an international office in the Western Cape and four provincial offices. The international office supports the provincial offices in their efforts to provide ongoing technical assistance and support to the provider organizations and implementers in the respective provinces. The director is based at the international office, and each province has a provincial manager in charge of its activities. The agency’s purveyor program is divided into the following six functional areas, each with a manager in charge to ensure that services are delivered consistently to all provider organizations and implementers:
Advocacy and visibility—develops and distributes resources, knowledge, and skill to promote the peer education intervention.
Quality assurance—provides knowledge and skill to monitor delivery of the peer education intervention.
Research and development—develops and distributes all resources required to implement the peer education intervention.
Resource mobilization—develops and distributes resources, knowledge, and skill to help provider organizations to obtain resources independently.
Stakeholder management—ensures that an increasing number of organizations deliver the peer education intervention. It also ensures that current provider organizations and implementers have opportunities to interact to ensure that they do not feel isolated.
Training and support—provides implementers with the knowledge and skill needed to deliver the peer education intervention with fidelity.
Although the original evaluation approach was formulated in terms of an implementation evaluation of the purveyor program, our initial contact with the agency revealed a problem that needed addressing before we could embark on an implementation study. In program evaluation terms, the agency had no explicit and agreed-upon program theory. To put it differently, it became apparent to us that it was not clear to the stakeholders how the purveyor program should work.
Program theory-based evaluation is a well-known evaluation approach, and does not need extensive coverage here. Rossi, Lipsey, and Freeman called program theory ‘the set of assumptions about the manner in which the program relates to the social benefits it is expected to produce and the strategy and tactics the program has adopted to achieve its goals and objectives.’ According to two prominent authors in the field, Donaldson and Lipsey, it has become more valued and common to depict program theory in evaluations.
Funnell and Rogers argue that program theory could be especially useful for interventions that employ purveyor programs because so many organizations are involved. They used this theory, for example, to develop service agreements and specifications for what the programs should entail. In his paper dealing with taking programs to scale, Baker called program theory the first ‘pre-exploration’ phase in enlarging a program, in which a clear logic model must be developed. In addition, the exercise of making implicit assumptions explicit often exposes faulty thinking on the part of the original program developers, which can subsequently be corrected to improve the conceptual base of programs. Also, a common understanding of how the program is meant to work encourages program staff members to work together and to focus on those activities that are most important for program success.
Funnell and Rogers identified four clusters of program theory use. These are: to aid in planning or preplanning an intervention; for managing and engaging stakeholders; for monitoring and evaluation; and for evidence-based practice by documenting innovative practices and supporting adaptation of program elements. As the agency had already been providing services for three years when the present study started, the first cluster did not apply. The benefits we envisaged were in line with the other three clusters of theory use. We expected the exercise to clarify the specific mechanisms involved in the purveyor program and thereby to produce the following: a shared understanding of the purveyor program among stakeholders; a more effective and efficient monitoring system; and a solid basis for any future evaluation of the agency’s work. It was argued that this process would not only encourage and increase the commitment, focus, and effectiveness of the agency’s staff members, but would also ensure that essential components of the purveyor program are clearly defined and remain relevant when the agency expands its service delivery.
A decision was therefore taken, in close consultation with program management, to articulate the ‘theory’ underlying the purveyor program; in other words, the understanding of the way it is supposed to work, both in terms of process and outcome. This was thought to be an important exercise, because it was expected that the agency’s purveyor program would be extended to other provider organizations in the local context.
Thus, the primary focus of the study was to unfold the theory of change underlying the agency’s purveyor program. On a practical level, it was argued that this would result in a better understanding of the purveyor program among the agency staff members, and would consequently encourage the strength and fidelity of implementation of the purveyor program to current and future provider organizations and implementers. Less modestly, we also aimed to furnish information that could contribute towards a better understanding of purveyor programs in general. This study also serves as an example of how a detailed, explicit, and agreed-upon representation of program theory could be developed and depicted. In other words, it is a relatively small contribution to the burgeoning field of implementation science and practice.
The information used to construct program theory can be obtained from multiple sources, including the following: a review of program documentation; interviews with those closest to the program; observation of the program; prior theory and research in the specific program domain; or exploratory research that tests the critical assumptions of the program.
The first decision to be taken was who should be involved in developing the program theory of the purveyor program. The method of extracting program theory can vary from cases where the evaluator largely takes responsibility for developing the program theory to cases where it is developed solely by those closest to the program. Various evaluation practitioners describe the best approach for extracting and developing program theory as lying somewhere between these two extremes [28, 35, 36]. Program staff and stakeholders hold essential context-specific information on how programs ought to prevent or ameliorate social problems. The evaluator’s knowledge of social science theory and access to prior research, if this exists, can be applied to further develop and assess the feasibility of the program theory as it is being developed. In addition, involving evaluation stakeholders during the design and implementation phases of the evaluation increases their buy-in to the evaluation and also facilitates understanding of the evaluation processes; both of these factors have been shown to contribute significantly to the relevance and use of the evaluation processes and findings.
There are three main approaches to developing program theory: deductive; inductive; and user-focused. The deductive approach relies exclusively on empirical research and employs dominant theories of various disciplines to inform theory development. The inductive approach requires the evaluator to generate the program theory by observing the program in action through fieldwork and review of program documentation, while the user-focused approach requires the evaluator to obtain information from program staff that is then used to construct the program theory.
Due to the scarcity of literature and general lack of knowledge about the program theory underlying purveyor programs, the authors opted to combine the inductive and user-focused approaches. The first author took the lead role, with the second author acting as evaluation consultant, which meant that the exercise fitted an approach where evaluators facilitate a collaborative process of developing the program theory. Given time and resource constraints, a decision was taken to involve only those closest to the purveyor program; in other words, individuals who were sufficiently knowledgeable to act as reliable sources of information. Only staff members of the agency participated in this exercise: the director and eight senior managers (two provincial managers and six functional area managers).
The development of logic models usually involves the evaluator constructing a preliminary draft based on available program documentation, which is then presented to program staff members for validation. This results in an iterative process of moving back and forth between developing the logic model and receiving feedback from staff members, which continues until all staff members agree that the model is an accurate and detailed description of the program as it was originally intended [29, 39].
Numerous site visits to the international office served to develop familiarity with the agency, to establish contact with program staff, to observe the program as it is being delivered, and to identify useful documentation that should be included in the review. Relevant documents were identified and studied, including operational plans, the implementation manual, peer educator portfolios, annual reports, quality assurance mid-year assessments, and the like. These documents were analyzed and the data sorted into coding categories according to standard logic model components. As a result, we ended up with a preliminary working model that covered the following categories: inputs; activities; outputs; pivotal proximal and intermediate outcomes; and distal outcomes.
This preliminary logic model was then presented, in a series of relatively unstructured interviews, to the individuals selected to participate in the exercise. Two main topics structured these interviews: first, what the goals and objectives were for the specific program areas that these individuals managed; and second, how these aims contributed to the overall effectiveness of the program. Thus, an iterative process was started between our observations, the program documentation, and data resulting from the interviews and the subsequent verification process; this ultimately culminated in a final program logic model. Each iteration resulted in a more refined and accurate model of the program. It took four months to move from initial data gathering to the point at which all the participants were satisfied that the model accurately and comprehensively represented how the agency aims to effect change within the provider organizations and implementers.
In summary: we developed the program theory of the purveyor program from implicit theories of those closest to the program, observations of the program, and program documentation.
The study was approved by an Ethics Review Committee of the Faculty of Humanities at the University of Cape Town. The director of the agency supported the study and communicated its purpose and usefulness to the various evaluation stakeholders and urged them to cooperate. No one refused to participate in the study.
Through an iterative process of logic model development, refinement via interviews, and further presentations of the revised logic model, we arrived at the major result of this study: a formalized logic model of the purveyor program’s theory, which was accepted by all concerned as an accurate reflection of the work done by the agency. The challenge, however, is to present such a complex program visually, in a way that is useful to the stakeholders, comprehensible and engaging to those less familiar with it, communicates effectively, and simplifies without trivializing. We decided to build a cascading visual model of the program theory, starting with the general logic of purveyor programs. As indicated above, the central idea behind purveyor programs is that they provide external systems of support to increase capacity in provider organizations, which in turn will support high-fidelity behavior in implementers. Figure 1 captures this overall logic.
Note, in particular, that the present exercise refers only to the two boxes on the left of Figure 1, in dotted outline. In other words, for the purveyor, an increase in the indicated capacities is an outcome of note, but further down the outcome chain, the provider organizations and implementers must bring about the ultimate benefits of this effort—i.e., the reduction of HIV infections among young people. Thus, the purveyor may be improving the capacity of implementing organizations to deliver the peer education intervention as intended, but it remains an open question whether the latter actually do implement peer education with fidelity and strength. At the time of our involvement, the agency had focused its monitoring and evaluation efforts on how provider organizations and implementers deliver the peer education intervention, with little attention given to monitoring the agency’s own supportive activities. Although this is understandable, it fails to recognize the extent to which the subsequent steps (the two boxes on the right: intervention implementation, outcome, and impact) depend on achieving fidelity and strength in the agency’s own activities.
The next step was to elaborate on these two boxes that represent the purveyor program. One way to deal with this complexity is to use sub-pages, as is done in visual outcomes model-building software such as DoView. In other words, below this level or page depicting the general program theory for purveyor programs, there are further pages, each depicting the logic behind a particular program element. We show this in Figure 2, which provides a more detailed description of the assumed progression from each functional area to the distal outcomes of the purveyor program.
Of course, this model is still fairly general, because each functional area has a distinctive service delivery plan that includes various activities deemed necessary to result in the function’s expected outcomes. In other words, another sub-page that depicts these aspects is required. We went one step further for the agency, and developed the program theory underlying the activities of each functional area. These six separate logic models are not presented here, as we are more interested in the general principles rather than in the specific details of the purveyor program.
The theory flows from left to right, starting with the six program functions. The activities of each function are described below. These activities are presumed to lead to short-term, early changes, which in turn set in motion changes in the medium term, which are expected to result in a distal outcome as indicated.
The advocacy and visibility function includes an annual activity to promote the agency’s work, but invests most of its time and resources in developing and distributing marketing resources. Provider organizations receive the marketing material from the agency, and this is intended to assist them in promoting the peer education intervention in the relevant communities.
The quality assurance function offers provider organizations a standard monitoring and evaluation system to monitor delivery of the peer education services. The system is based on a Logical Framework Analysis that provides standards and guidelines for implementation practices. Implementers are equipped with the skills and knowledge needed to use the system, and they gather information to send to the agency’s quality assurance coordinator at the international office. There, the incoming data is analyzed to identify patterns of delivery and to provide systematic feedback on the provider organizations’ performance in comparison to quarterly targets.
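To make this feedback step concrete, the comparison of reported delivery data against quarterly targets might be sketched as follows. This is a hypothetical illustration only: the indicator names and numbers are invented, and the agency’s actual Logical Framework indicators are not described in this article.

```python
# Hypothetical sketch of quarterly-target feedback: implementers report
# delivery counts, and each indicator is compared to its quarterly target.
# Indicator names and numbers are invented for illustration.

quarterly_targets = {"sessions_delivered": 12, "peer_educators_trained": 8}


def performance_feedback(reported: dict, targets: dict) -> dict:
    """Return each indicator's surplus (positive) or shortfall (negative)
    relative to its quarterly target."""
    return {key: reported.get(key, 0) - target for key, target in targets.items()}


feedback = performance_feedback(
    {"sessions_delivered": 10, "peer_educators_trained": 9},
    quarterly_targets,
)
# feedback shows sessions below target and trained educators above target
```

The point of such a computation is simply to make systematic, comparable feedback possible across all seventeen provider organizations.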
In terms of research and development, training resources such as the training sessions and workshops are examined and modified if necessary. Research is also conducted with stakeholders to update, for example, the implementation guide. Thus, implementers have easy access to resources that are continuously examined for their relevance and usefulness in those communities where the provider organizations deliver the peer education intervention.
The resource mobilization function offers funding to provider organizations and connects them with reliable and sustainable financial partners. In addition, it spends a significant amount of its time and resources on activities to equip provider organizations with the skills and tools needed to obtain their own resources, independently from the agency. These activities include the development and distribution of an information pack, the delivery of a workshop, and the development and distribution of a list of potential donors.
The stakeholder management function has the role its title suggests: it identifies and recruits new provider organizations, and maintains the collaboration of current provider organizations with the agency. Particular care is taken to increase the supportiveness of the environment in which the peer education intervention is delivered, via efforts such as peer education forums and peer education workshops.
The training and support services offered to provider organizations include training sessions, workshops, mentoring, and coaching for the implementers. The purpose is typical of such interventions: to equip implementers with the skills and support deemed necessary to ensure effective delivery of the peer education intervention. The training function is responsible for delivering eleven training sessions and workshops, and it conducts bi-annual on-site visits and offers continuous telephone and email support to all the implementers.
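For readers who like to formalize such models, the left-to-right flow just described can be sketched as a simple nested data structure, one entry per functional area. The six function names are taken from the text; the activities and outcomes filled in for the first function are hypothetical placeholders, not the agency’s actual logic model.

```python
# Illustrative sketch only: a cascading logic model as nested data.
# Function names come from the article; the example activity and outcome
# chains are hypothetical placeholders, not the agency's actual model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class FunctionLogic:
    """One functional area: activities lead to proximal outcomes, then
    intermediate outcomes, then a distal outcome (left-to-right flow)."""
    name: str
    activities: List[str] = field(default_factory=list)
    proximal: List[str] = field(default_factory=list)
    intermediate: List[str] = field(default_factory=list)
    distal: str = ""

    def causal_chain(self) -> List[str]:
        """Flatten the assumed causal chain, left to right."""
        chain = self.activities + self.proximal + self.intermediate
        return chain + ([self.distal] if self.distal else [])


purveyor_program = [
    FunctionLogic(
        name="Training and support",
        activities=["deliver training sessions", "on-site coaching visits"],
        proximal=["implementers gain delivery skills"],
        intermediate=["implementers deliver sessions as designed"],
        distal="high-fidelity delivery of the peer education intervention",
    ),
    FunctionLogic(name="Advocacy and visibility"),
    FunctionLogic(name="Quality assurance"),
    FunctionLogic(name="Research and development"),
    FunctionLogic(name="Resource mobilization"),
    FunctionLogic(name="Stakeholder management"),
]
```

Each `FunctionLogic` instance corresponds to one sub-page of the cascading model; making the assumed ordering explicit in this way is precisely what the logic-model exercise achieves on paper.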
This study was launched by the realization that the agency lacked an explicit model or theory of how the purveyor program was supposed to work, and what it was supposed to achieve. This was despite the program having run for three years, and despite a growing demand to extend its reach to include more provider organizations. In practical terms, our first objective was to develop such a theory for the purveyor program and its staff. This was regarded as an important precursor to further evaluation studies and, in particular, a possible process and impact evaluation.
As indicated earlier, we also had a somewhat more ambitious objective in mind: to use this particular instance of program theory development as a tentative indication of how one could depict purveyor programs. These programs are complex, because they involve multiple implementation sites, and many provider organizations: in this study, 17 provider organizations, with the possibility of expanding this number. Purveyor programs of this kind present an interesting challenge to evaluation, and to program theory-building in particular. It was exactly this realization that diverted our attention from an implementation assessment to a theory-building exercise in the present study.
In terms of the actual supportive activities carried out by the agency, it is a relatively simple program: all the functions and their associated activities are well-defined and understood, and delivered in a reasonably consistent and even manualized manner by the agency. Nevertheless, these are all delivered at very different sites, and to provider organizations with very different governance structures. This presents a challenge in terms of understanding the purveyor program, what the expectations of various stakeholders are, what local variations can be tolerated, and so on. For present purposes, we thought an understanding of the causal path underlying the intervention was particularly important.
Thus, the present study focused on those activities and outcomes involving the agency, as depicted in Figure 1, and not the provider organizations. Because of the multiple functions contained in the purveyor program, one has to expect multiple simultaneous causal strands. In an attempt to address this, we followed a stepped approach to theory development: first the overall program logic was developed, as shown in Figure 1; then a model that disaggregated the purveyor program in terms of its different functions (Figure 2); and finally, the logic of each individual function (not dealt with in this study). It is our impression that the logic models that emerged from this exercise were useful to both evaluators and agency stakeholders, and that they clarified their thinking about the purveyor program considerably.
If, however, the agency wishes to understand and/or investigate what happens in the provider organizations themselves, after their involvement with the agency (the two boxes on the right in Figure 1), the situation very quickly becomes much more complicated. Suppose the agency, for example, wishes to address the question: ‘Does increasing capacity in 17 provider organizations lead to the peer education interventions being implemented at strength and with fidelity?’ In this instance, every one of these organizations will have to be studied quite carefully. This is the attribution challenge faced by programs that employ the purveyor method to disseminate interventions, and it is our contention that drawing out an explicit logic model helps the agency to come to grips with it. A first step could be the development of a logic model for each of the 17 provider organizations, describing the theory of change, and the activities they carry out to achieve their goals. Figure 1 makes it clear that overall the agency and its provider organizations share the same program goals, even though it is likely that they follow different paths to get there. Indeed, such an exercise will yield useful comparative information about the provider organizations, and the breadth and depth of their activities.
Starting with documentation, we repeatedly submitted the emerging program theory to the agency’s staff members for consideration. This recursive way of working is in line with how program theory typically is drawn out, and it resulted in four major benefits. All of these benefits are likely to improve the usefulness of subsequent evaluations of the purveyor program.
First, it allowed for a clear articulation of the activities that make up the purveyor program. In many instances it was not clear what the activities of the agency included, or where the boundaries between the different components lay. The importance of good program descriptions is widely acknowledged, especially in the light of the present program’s ambition to be scaled up. The present theory-driven approach filled that gap in this specific purveyor program, and elicited the causal assumptions underlying these activities.
Second, it allowed us ample time to spend with the agency’s staff members, and to become thoroughly familiar with the purveyor program. Rossi, Lipsey, and Freeman have indicated that this process builds a knowledge base about the program, and it enabled us to develop a detailed description of what supposedly occurs between the intended activities and the expected benefits of the purveyor program. Indeed, the literature suggests that stakeholder input substantially increases the evaluation’s relevance and usefulness. Interactions with the staff members also provided an opportunity to implement various strategies to manage and overcome evaluation anxiety. These strategies included explaining the purpose of the evaluation, allowing stakeholders to discuss and affect the evaluation, and distinguishing between program and personnel evaluation. If not addressed adequately, evaluation anxiety can have consequences that range from reduced utilization of evaluation findings to problems with compliance and cooperation.
Third, it helped us to become familiar with the context in which the agency operates. The literature suggests that this will not only facilitate the development of data collection tools and the interpretation of evaluation findings, but will also greatly increase the evaluation’s relevance and usefulness.
Finally, the theory development process can quite easily be employed to ensure that evaluations are conducted only on programs that meet evaluability assessment criteria. For example, it enabled the present study to ascertain whether the agency had well-defined and plausible goals and objectives, and whether relevant performance data could be obtained at reasonable cost.
Although a program theory-based approach has substantial benefits for evaluation, it challenges the researcher in at least two ways. For a start, one is confronted with a relative lack of clarity on just what is meant by ‘program theory,’ and there is a scarcity of examples of applications using this approach [29, 44]. In terms of methods, what for many is a benefit—the flexibility allowed in the choice of methods—can for others be quite daunting, especially when they are less experienced.
Since capacity building is such an important outcome envisaged for the program described, we turn briefly to this literature. It is more extensive than the literature on purveyor programs, but it arguably contains much that is useful to the latter. Kopf and Thayer, for example, reviewed the most successful capacity building initiatives of providers in the USA. They discovered that various external factors, apart from the content and quality of the capacity building program itself, significantly influence the effectiveness of these efforts. In particular:
Each organization has specific needs that should be taken into account by capacity building initiatives. Providers who work with organizations’ specific needs, instead of relying on formulas, get better results.
The better the understanding of an organization’s situation, history, and culture, the more effective the capacity building becomes.
Listening to, communicating with, and understanding an organization’s context are essential for effective capacity building.
Trust between the capacity-building initiative and the organization is essential for capacity building to occur. Both parties should feel free to communicate openly, to ask for help beyond the usual, and to listen and learn.
Capacity-building initiatives should spend sufficient time with organizations to obtain a good understanding of what the organization needs and how their skills and knowledge can be moulded to yield the most benefits for that organization.
This line of reasoning suggests that the nature of the relationship between purveyors, provider organizations, and implementers strongly influences the success of these efforts. Furthermore, Kopf and Thayer’s findings indicate that purveyors might benefit from becoming more flexible in the way they provide services to provider organizations. A ‘one-size-fits-all’ approach is unlikely to be the most effective way to equip provider organizations to deliver interventions. Ideally, purveyors should spend sufficient time with each provider organization they support to determine how their services could best be adapted to that specific organization. One of the external factors affecting provider organizations is their baseline capacity when adopting an intervention. It stands to reason that provider organizations with more capacity at the outset can benefit more from support than those that struggle with capacity issues; baseline capacity is thus also a useful indicator of how support services should be adapted to deliver the best results. It will of course not be easy for purveyors to incorporate these observations into their daily activities, but we believe doing so is worthwhile in building a sound evidence-based practice.
It is clear from the literature that purveyors and intermediary organizations have an important role to play in the implementation of programs, especially in resource-poor settings. As we indicated above, they are a response to a perceived lack of capacity in provider organizations to deliver services with strength and fidelity, and as such they have an important role to fulfill. This is what the program we studied, and many similar ones, set out to achieve. What would this ‘capacity’ mean in different human service settings, such as mental health, juvenile justice, and public health? Would we recognize it when we see it? In terms of basic human resource and organizational issues, our view is that the question can be answered affirmatively: after a purveyor has worked with a provider organization, the latter’s staff should be more skilled at delivering a specific intervention and have greater expertise than before, and the organization itself should have adequate governance and management structures to support its personnel. Whether a more general program theory for purveyors across human services can be extracted from this study is, we believe, much more open to doubt. At the very least, one would have to compare the effectiveness of the impact theories to which different purveyors subscribe in order to answer this question empirically. A program theory-driven approach to evaluation nevertheless holds much promise in this regard.
a Examples of intermediary and purveyor organizations that operate within this burgeoning field include: The Centre for Effective Services, based in Dublin, Ireland (http://www.effectiveservices.org/); Practice and Research Together, of Toronto, Canada (http://www.partontario.org/); The Evidence-based Prevention and Intervention Support Center (EPISCenter) of Pennsylvania State University (USA) (http://www.episcenter.psu.edu/); and the Connecticut Center for Effective Practice, based in New England, USA (http://www.chdi.org/ccep-initiatives.php). The in-text origin of this endnote is in the third paragraph under the Background section.
Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F: Implementation research: A synthesis of the literature. [http://nirn.fpg.unc.edu/resources/implementation-research-synthesis-literature]
Patrizi PA, Gross EC, Freedman S: Strength in flexibility: Lessons from a cluster of capacity building grants in the juvenile justice field. Eval Program Plann. 2006, 29: 162-170. 10.1016/j.evalprogplan.2006.01.010.
Fixsen DL, Blase KA, Naoom SF, Wallace F: Core implementation components. Res Social Work Prac. 2009, 19 (5): 531-540. 10.1177/1049731509335549.
Frank RP: Role of the intermediary organization in promoting and disseminating best practices for children and youth: The Connecticut center for effective practice. Emotional Behav Disord Youth. 2010, 10 (4): 87-93.
Fixsen DL, Blase KA, Naoom SF, Wallace F: Implementation Insight #1: Purveyor Roles and Activities: Taking Programs and Practices to Scale. [http://nirn.fpg.unc.edu/resources/implementation-insight-1-purveyor-roles-and-activities-taking-programs-and-practices-scale]
Global Implementation Conference: [http://www.implementationconference.org/]
Practice Group for Purveyors and Intermediary Organizations: [http://gicpurveyors.com/]
Elliott DS, Mihalic S: Issues in disseminating and replicating effective prevention programs. Prev Sci. 2004, 47 (6): 47-52.
Fixsen DL, Blase KA, Naoom SF, Dyke M, Duda M: The National Implementation Research Network. [http://www.fpg.unc.edu/~nirn/default.cfm]
Fagan AA, Mihalic A: Strategies for enhancing the adoption of school-based prevention programs: Lessons learned from the blueprints for violence prevention replications of the skills training programme. J Community Psychol. 2003, 31 (Suppl 3): 235-253.
McHugh RK, Barlow DH: The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. Am Psychol. 2010, 65 (2): 73-84.
Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O: Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Q. 2004, 82 (4): 581-629. 10.1111/j.0887-378X.2004.00325.x.
Madden NA, Slavin RE, Karweit NL, Dolan LJ, Wasik BA: Success for All: Longitudinal effects of a restructuring program for inner-city elementary schools. Am Educ Res J. 1993, 30 (1): 123-148.
Schoenwald SK, Brown TL, Henggeler SW: Inside multisystemic therapy: Therapist, supervisory, and program practices. J Emot Behav Disord. 2000, 8 (2): 113-127. 10.1177/106342660000800207.
Dumas JE, Lynch AM, Laughlin JE, Smith EP, Prinz RJ: Promoting intervention fidelity: Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial. Am J Prev Med. 2001, 20 (1S): 38-47.
Mazzucchelli TG, Sanders MR: Facilitating practitioner flexibility within an empirically supported intervention: Lessons from a system of parenting support. Clin Psychol- Sci Pr. 2010, 17: 238-252. 10.1111/j.1468-2850.2010.01215.x.
Stetler CB, Legro MW, Rycroft-Malone J, Bowman C, Curran G, Guihan M, Hagedorn H, Pineros S, Wallace CM: Role of “external facilitation” in implementation of research findings: A qualitative evaluation of facilitation experiences in the veterans health administration. Implementation Sci. 2006, 1: 23-10.1186/1748-5908-1-23.
Cheater FM, Closs SJ: The effectiveness of methods of dissemination and implementation of clinical guidelines for nursing practice: A selective review. Clin Effectiveness Nurs. 1997, 1: 4-15. 10.1016/S1361-9004(97)80022-2.
Mullen EJ, Bledsoe SE, Bellamy JL: Implementing evidence-based social work practice. Res Social Work Prac. 2008, 18 (4): 325-338.
UNAIDS: Report on the global AIDS epidemic. 2010, [http://www.unaids.org/globalreport/Global_report.htm]
Luna G, Rotheram-Borus M: Youth living with HIV as peer leaders. Am J Community Psychol. 1999, 27 (1): 1-23.
Stephenson JM, Strange V, Forrest S, Oakley A: Pupil-led sex education in England: Cluster-randomized intervention trial. Lancet. 2004, 364 (9431): 338-346. 10.1016/S0140-6736(04)16722-6.
Strange V, Forrest S, Oakley A: Peer-led sex education: Characteristics of peer educators and their perceptions of the impact on them of participation in a peer education program. Health Educ Res. 2002, 17 (3): 327-337. 10.1093/her/17.3.327.
Campbell C, Foulis CA: Creating contexts that support youth-led HIV prevention in schools. Soc Transit. 2002, 33 (3): 339-356.
Harrison A, Smit JA, Meyer L: Prevention of HIV/AIDS in South Africa: a review of behaviour change interventions, evidence and options for the future. S Afr J Sci. 2000, 96 (6): 285-291.
Sweifach J, LaPorte HH: Perceptions of peer to peer HIV/AIDS education: A social work perspective. J HIV/AIDS Prev Child Youth. 2006, 7 (2): 119-134.
Borgia P, Marinacci C, Schifano P, Perucci CA: Is peer education the best approach for HIV prevention in schools? Findings from a randomized controlled trial. J Adolesc Health. 2005, 36: 508-516. 10.1016/j.jadohealth.2004.03.005.
Donaldson SI: Program theory-driven evaluation science: Strategies and applications. 2007, New York: Lawrence Erlbaum Associates
Rossi PH, Lipsey MW, Freeman HE: Tailoring evaluations. Evaluation: A systematic approach. Edited by: Rossi PH, Lipsey MW, Freeman HE. 2004, Thousand Oaks: Sage Publications, 31-65. 7
Donaldson SI, Lipsey MW: Roles for theory in contemporary evaluation practice: Developing practical knowledge. The Handbook of Evaluation: Policies, Programs, and Practices. Edited by: Shaw I, Greene JC, Mark MM. 2006, London: Sage, 56-75.
Funnell SC, Rogers PJ: Purposeful program theory. 2011, San Francisco: Jossey-Bass
Baker EL: Taking programs to scale: A phased approach to expanding proven interventions. J Public Health Manag Pract. 2010, 16 (3): 264-269.
Rogers PJ, Petrosino A, Huebner TA, Hacsi TA: Program theory evaluation: Practice, promise, and problems. New Dir Eval. 2000, 87: 5-13.
Rogers PJ: Causal models in program theory evaluation. New Dir Eval. 2000, 87: 47-55.
Patton MQ: Utilization-focused evaluation. 2008, Thousand Oaks: Sage
Pawson R, Tilley N: Realistic Evaluation. 1995, London: Sage
Torres RT, Preskill H: Evaluation and organizational learning: Past, present, and future. Am J Eval. 2001, 22 (3): 387-395.
Frechtling JA: Logic modelling methods in program evaluation. 2007, San Francisco: John Wiley & Sons
Barrett M, Bissell M: A process evaluation of the youth education about health (YEAH) program: A peer-designed and peer-led sexual health education program. Can J Hum Sex. 2005, 14 (3–4): 129-141.
Doview: Visualizing outcomes: [http://www.doview.com]
Chen HT: Practical program evaluation: Assessing and improving planning, implementation and effectiveness. 2005, Thousand Oaks: Sage
Donaldson SI, Gooler LE, Scriven M: Strategies for managing evaluation anxiety: Toward a psychology of program evaluation. Am J Eval. 2002, 23 (3): 261-273.
Wholey J: Evaluability Assessment. Handbook of Practical Program Evaluation. Edited by: Wholey J, Hatry H, Newcomer K. 2004, San Francisco: Jossey Bass, 33-62.
Weiss CH: How can theory-based evaluation make greater headway?. Evaluation Rev. 1997, 21: 501-524. 10.1177/0193841X9702100405.
Kopf N, Thayer C: Echoes from the field: Proven capacity-building principles for nonprofits. 2001, Washington: Innovation Network Inc
Funding for this study was provided by the National Research Foundation (NRF) of South Africa, the Harry Crossley Foundation, the KW Johnston Bequest, and the University of Cape Town. The views expressed are those of the authors and not the funders.
The authors declare that they have no competing interests.
CO and JL conceived the study together. CO conducted the fieldwork, and JL provided input on the interpretation of the results. JL prepared the first draft of this manuscript, and both authors provided initial and final refinements. Both CO and JL read and approved the final manuscript.