A complementary marriage of perspectives: understanding organizational social context using mixed methods
Implementation Science, volume 9, Article number: 175 (2014)
Organizational factors impact the delivery of mental health services in community settings. Mixed-methods analytic approaches have been recommended, though little research within implementation science has explicitly compared inductive and deductive perspectives to assess their relative value in understanding the same constructs. The purpose of our study is to use two different paradigmatic approaches to deepen our understanding of organizational social context. We accomplish this by using a mixed-methods approach in an investigation of organizational social context in community mental health clinics.
Nineteen agencies, representing 23 sites, participated. Enrolled participants included 130 therapists, 36 supervisors, and 22 executive administrators. Quantitative data were obtained via the Organizational Social Context (OSC) measure. Qualitative data, comprising field notes from direct observation with spot sampling during agency visits, were coded using content analysis and grounded theory. The present study examined elements of organizational social context that would have been missed if only quantitative data had been obtained, and used mixed methods to investigate whether stratifying observations based on quantitative ratings from the OSC resulted in the emergence of differential themes.
Four of the six OSC constructs were commonly observed in field observations (i.e., proficiency, rigidity, functionality, stress), while the remaining two constructs were not frequently observed (i.e., resistance, engagement). Constructs emerged related to organizational social context that may have been missed if only quantitative measurement was employed, including those around the physical environment, commentary about evidence-based practice initiatives, leadership, cultural diversity, distrust, and affect. Stratifying agencies by “best,” “average,” and “worst” organizational social context impacted interpretation for three constructs (affect, stress, and leadership).
Results support the additive value of integrating inductive and deductive perspectives in implementation science research. This synthesis of approaches facilitated a more comprehensive understanding and interpretation of the findings than would have been possible if either methodology had been employed in isolation.
A burgeoning body of research has emerged to suggest that organizational factors are particularly important characteristics that impact delivery of mental health services for youth in the community. The organizational literature suggests the importance of an individual’s social context for one’s attitudes, beliefs, and subsequent behavior around adoption of innovation. In the case of youth mental health services, the most important social context is the organization within which treatment is delivered (i.e., organizational social context). Two important constructs, culture and climate, contribute to organizational social context. Organizational culture refers to shared employee perceptions of “the behavioral expectations and norms that characterize the way work is done in an organization” (p. 858), whereas organizational climate refers to shared employee perceptions of “the psychological impact of their work environment on their own personal well-being” (p. 64). Organizational culture and climate have been associated with provider turnover, quality of services, sustainment of adoption of new practices, and youth mental health outcomes. The gold standard assessment of organizational culture and climate is the Organizational Social Context (OSC), a quantitative measure developed over the past 35 years. Items are based on qualitative work, expert review, and empirical testing (Glisson, personal communication, 2014).
Measurement continues to present a thorny challenge in implementation science. As efforts to translate research to practice have become a national priority, more diverse methodologies such as mixed-methods approaches are becoming more sophisticated and increasingly utilized. Mixed methods allow quantitative data (e.g., surveys) to be integrated with qualitative data (e.g., interviews) for a more comprehensive understanding of organizations. When research is done from a purely etic perspective, the outsider or “expert” guides the research. In contrast, research that aims to understand the insider’s view takes an emic perspective. Mixed-methods research can ideally result in a synergistic effect in which the combination of emic and etic perspectives is greater than either individual contribution.
The purpose of our study is to demonstrate the use of mixed methods to deepen our understanding of organizational social context. We accomplish this by juxtaposing the use of a validated and reliable measure with an inductive, real-world set of observations to examine organizational social context in community mental health clinics. We take three different and complementary approaches to understanding organizational social context within community mental health clinics. First, we took a deductive approach using the OSC. As a standardized, validated instrument, the OSC is based on the premise that the constructs of proficiency, rigidity, resistance, engagement, functionality, and stress are central to characterizing and quantifying organizational culture and climate, two major components of organizational social context. The instrument’s purpose is to identify the presence and strength of these key constructs. However, the instrument is limited to measuring only the constructs identified by experts as important to organizational social context. Second, we conducted direct observation with spot sampling of 23 agencies, which refers to observing and recording behavior during periodic, randomly selected visits to the context of interest. We used content analysis to identify whether we could observe real-world examples of the six OSC constructs within our field notes. Finally, we used a grounded theory approach to determine if there were additional factors in our field notes, not covered by the OSC constructs, that might contribute to organizational social context. Agencies were stratified based on their organizational social context scores (“best,” “average,” and “worst”) to determine if different constructs were present based on the quality of organizational social context.
We used purposive sampling to recruit the 29 largest child-serving agencies in Philadelphia, which together serve approximately 80% of youth receiving publicly funded mental health care. Of these 29 agencies, 18 (62%) agreed to participate. Additionally, one agency involved in evidence-based practice (EBP) efforts asked if they could participate, resulting in a final sample of 19 agencies. Several agencies had multiple locations, resulting in 23 sites, 130 therapists, 36 supervisors, and 22 administrators. Each site (N = 23), rather than each agency (N = 19), was treated as a distinct organization because of different leadership structures, locations, and staff. Going forward, we will refer to the site as “agency”.
All procedures were approved by associated Institutional Review Boards. Qualitative data were gathered through direct observation with spot sampling and were recorded as field notes during agency visits. These observations were collected as part of a study to measure use of EBP in community mental health agencies. As part of this project, the research team visited 23 agencies for approximately 2 h to administer a battery of self-report measures and to collect observational data. Following the visit, both the first and third authors recorded field notes based on observations of the therapists and supervisors with whom they administered measures. The majority of the field notes documented direct observation of the group. However, a small minority of interactions were one-on-one, occurring through short informal conversations while visiting with the group. All field notes included comments on the physical atmosphere (e.g., temperature, building appearance), the professional atmosphere (e.g., collegiality among staff), and general impressions about the visit, as these were a priori constructs of interest.
The OSC quantitative measure was collected as part of the 2-h visit mentioned above along with a number of other measures. Therapists completed the OSC in their referent group as indicated by the measure developers. Supervisors and executive administrators completed the measure separately. Participants received $50.00 for participating in the larger study.
We asked all participants to provide information on their background (e.g., age, gender, ethnicity).
Organizational social context
The OSC measurement system is a 105-item measure of the social context of mental health and social service organizations. The OSC measures organizational culture and organizational climate, with scores standardized to a mean of 50 and a standard deviation of 10. The OSC measurement model defines organizational culture as comprised of three dimensions—proficiency, rigidity, and resistance—whereas organizational climate is likewise comprised of three dimensions—engagement, functionality, and stress.
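The standardization above is an ordinary T-score conversion, which can be sketched as follows; the normative mean and SD used here are invented for illustration and stand in for the actual OSC norms.

```python
# Minimal sketch of a T-score conversion (mean 50, SD 10), the metric on
# which OSC dimension scores are reported. The normative values below are
# hypothetical; real norms come from the OSC's national normative sample.
def to_t_score(raw, norm_mean, norm_sd):
    z = (raw - norm_mean) / norm_sd  # standardize against the norms
    return 50 + 10 * z               # rescale to mean 50, SD 10

# A raw score one SD above a hypothetical norm mean of 3.2 (SD 0.6):
print(round(to_t_score(3.8, 3.2, 0.6), 1))  # -> 60.0
```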
The OSC has norms based on a national sample of 100 mental health service organizations and has strong psychometric properties. This measure was completed by therapists, supervisors, and executive administrators. In this sample, the measure demonstrated adequate internal consistency across subscales (Cronbach’s alpha = .71–.95).
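For readers unfamiliar with the internal-consistency statistic reported above, a minimal computation of Cronbach’s alpha looks like the following; the response data are fabricated purely for demonstration.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# The data below are invented; real values come from participants' OSC responses.
def cronbach_alpha(items):
    """items: one list per scale item, each holding all respondents' answers."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance (used consistently in numerator and denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]  # per-respondent sums
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

# Three highly consistent items answered by four hypothetical respondents:
print(round(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 5]]), 2))  # -> 0.99
```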
Data analytic plan
To calculate each dimension of the OSC, clinician responses were aggregated by agency, after ensuring that aggregation was indicated through the use of the rwg statistic. Intra-group agreement was excellent (mean rwg = .95), supporting aggregation of clinician, supervisor, and administrator responses. We elected to include supervisors and administrators in our sample because it minimally impacted the data and allowed us to retain all of the agencies in our OSC sample.
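As a rough sketch of the agreement check described above, the single-item rwg index compares observed within-group variance to the variance expected under a uniform “no agreement” null; the ratings below are hypothetical.

```python
# Single-item rwg (James, Demaree, & Wolf): 1 - observed variance / null variance,
# where the null variance for an A-point response scale is (A**2 - 1) / 12.
# Values near 1 (cf. mean rwg = .95 above) justify agency-level aggregation.
def rwg(ratings, scale_points=5):
    m = sum(ratings) / len(ratings)
    observed_var = sum((r - m) ** 2 for r in ratings) / (len(ratings) - 1)
    null_var = (scale_points ** 2 - 1) / 12
    return 1 - observed_var / null_var

# Five hypothetical clinicians in close agreement on a 5-point item:
print(round(rwg([4, 4, 4, 5, 4]), 2))  # -> 0.9
```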
Following procedures outlined by Glisson and colleagues, we used latent profile analysis (LPA) to generate a continuous score, ranging from 1 to 3, demarcating each agency’s summative organizational social context. This composite OSC score is derived from each agency’s six dimension scores based on the probability-weighted sum of class membership in one of three empirically derived OSC profiles (“worst,” “average,” and “best”). These three profiles were originally identified by using LPA on OSC scores from a national sample of 100 children’s mental health agencies. Using LPA, we applied estimates from the national sample to the agencies in this study to calculate the probability that each agency was a member of each of the three empirically derived classes. Agencies were assigned composite OSC scores by multiplying the probability that the agency was a member of each class by the value for that respective class and summing the products, resulting in a variable that ranged from 1.00 (most negative profile; “worst”) to 3.00 (most positive profile; “best”), allowing us to classify the 23 agencies into the following types of organizational social context: “best” (N = 7), “average” (N = 5), and “worst” (N = 11).
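The probability-weighted scoring described above reduces to simple arithmetic; the class-membership probabilities below are hypothetical.

```python
# Composite OSC score: multiply each agency's probability of membership in the
# "worst" (1), "average" (2), and "best" (3) latent profiles by the class value
# and sum, yielding a score between 1.00 and 3.00. In the study, these
# probabilities come from LPA estimated on the national normative sample.
def composite_osc(class_probs):
    """class_probs: (p_worst, p_average, p_best); should sum to 1."""
    return sum(p * v for p, v in zip(class_probs, (1, 2, 3)))

# An agency very likely in the "best" profile:
print(round(composite_osc((0.05, 0.10, 0.85)), 2))  # -> 2.8
```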
Our qualitative analysis proceeded in two phases. The first (RSB) and second (CBW) authors independently conducted a content analysis of the field notes. We examined field notes from each agency to identify the presence or absence of examples of constructs from the OSC (i.e., proficiency, resistance, rigidity, engagement, functionality, stress). Next, we used a grounded theory analysis of the field notes to identify additional features that might affect organizational social context within the agency. We (RSB, CBW) conducted an independent line-by-line reading of each field note and created a set of codes as they emerged from the texts. We used the inter-rater reliability function in QSR NVivo 10.0 (Kappa = .94) to ascertain agreement.
Constructs that emerged from our grounded theory process had some overlap with OSC constructs. For example, phrases in the field notes that referred to both leadership and the interactions between leaders and staff could be double coded under both “leadership” and “rigidity.” However, we felt that our direct observations suggested the importance of creating a distinct construct that documented the impact of leaders on organizational social context that was not fully captured under the rigidity construct. See Table 1 for the codebook.
The structure of our mixed-methods analysis was QUAN + QUAL (i.e., simultaneous collection and analysis of both data types, giving both equal weight). The function of our analysis was convergence (i.e., do all data collection strategies answer similar questions?) and complementarity (i.e., elaboration of the quantitative data using qualitative data). The OSC LPA quantitative score was used to stratify the qualitative data so that comparisons could be made across the three types of agencies for each construct.
On average, agencies employed 11.65 (SD = 9.80) therapists. Table 2 provides demographic information about therapist and supervisor participants. Administrators (N = 22) were split equally between male (50%) and female (50%). Of the total group of administrators, 15% identified as Hispanic/Latino. When asked to identify ethnicity/race, executive administrators self-identified as Asian (9.1%), African American/Black (18.2%), White/Caucasian (54.5%), multiracial (9.1%), or missing ethnicity/race (9.1%). Highest educational degree attained included bachelor’s degree (9.1%), master’s degree (50.0%), doctoral degree (31.8%), and missing (9.1%).
Quantitative OSC results
See Table 3 for quantitative scores presenting the six dimensions of culture and climate as measured by the OSC as well as the categorization for each agency (i.e., best, average, worst) as calculated by the LPA.
Qualitative results—content analysis
Content analysis of the field notes revealed “real-world” examples of the OSC constructs (i.e., proficiency, rigidity, resistance, engagement, functionality, and stress).
Proficiency was observed through physical representations as well as discussions with participants at each agency. Physical representations of proficiency included evidence of participation in evidence-based or evidence-informed efforts (e.g., sanctuary principles hanging on the wall in several agencies) or, more generally, posters that were therapeutically relevant (e.g., emotions) or suggested child-friendly environments (e.g., toys).
In a number of agencies, observations of participants elucidated the type of knowledge individuals had. At one agency, a participant had not heard of the government organization responsible for overseeing city mental health efforts. At another agency, a number of therapists noted that they were receiving training in motivational interviewing the next day.
Rigidity was primarily observed around leadership-related interactions. First, at a number of agencies, rigidity was reflected in therapist understanding of their participation in this study. Many initially thought that participation was mandatory as it was presented by their leadership, suggesting little discretion and autonomy among therapists. Second, most observations of rigidity revolved around leader behaviors. At one agency, leadership elected to stay in the room despite our request that they complete their surveys in a separate room. Third, a number of therapists voiced concerns about how the information collected would be shared with leadership.
Relatively few instances of resistance were observed in the field notes. Those we did observe primarily arose in one-on-one conversations with therapists, who relayed concerns that were almost always specific to cognitive-behavioral therapy (CBT), along with a sense that those practicing psychodynamic therapies or those with more years of clinical experience were resistant to CBT.
Few instances of engagement were observed in the field notes from agency visits. Of the two observations, one therapist noted caring very much for her youth clients but feeling overwhelmed because her caseload was very high. Another therapist reflected that his supervisor did not emphasize how best to treat children and families, focusing instead on money.
Functionality was observed through the collegial, professional, and respectful relationship among participants observed at the majority of the agencies. The collegial environment observed across most agencies contrasted with a minority of settings where the chaotic environment appeared to undermine functionality or when staff members appeared not to know one another.
Evidence of stress was observed across a number of domains during agency visits, including stress around job security, fatigue, emotional exhaustion, and the physical environment. At a number of agencies, therapists voiced concerns about job security. Typically, these concerns were around recent downsizing. Understandably, participants had some concerns about sharing negative perceptions of their organization given recent cutbacks. Further, fatigue often came up during agency visits, and comments were made about the OSC question related to fatigue. We observed both emotional and physical environment stress. Emotional stress refers to comments and affect around fatigue, feeling overwhelmed, and feeling burned out. Physical environment stress refers to stressors in the physical space such as a building close to a train (i.e., frequent vibrations and noise), cramped quarters, and/or unpleasant working conditions.
Qualitative results—grounded theory
A grounded theory analysis of the field notes revealed six additional distinct factors beyond those included in the OSC that seemed important to organizational social context. These factors included the physical environment, leadership, participation in initiatives, cultural diversity, distrust, and affect.
Physical environment

The majority of observations referred to the neighborhood/location of the agency and the condition and cleanliness of the facility. Additional observations described security, technology, and sensory factors that may contribute to the work environment. Note that this construct frequently overlapped with “stress,” but also captured positive aspects of the physical environment, as well as the actual physical layout of the agency.
Neighborhood/location observations spanned desirable and undesirable qualities and included comments about the safety of the neighborhood and the area of town, the type of building (e.g., a converted row home), and comments on convenience (e.g., has its own parking lot) or potential pitfalls of the location (e.g., under the train tracks). With regard to condition and cleanliness, some agencies were perceived as poorly maintained and lacking adequate resources (e.g., meeting rooms were too small to accommodate everyone), while others had dedicated conference space with adequate seating, appeared well kept, and were perceived as clean and welcoming (e.g., child-friendly decorations, values of the agency displayed on posters). At several agencies, a security guard was present at the entrance. Additional physical environment observations included comments about technology (e.g., the conference room was set up with technology) and sensory observations. Temperature and noise (e.g., the intercom repeatedly went off, a train passed and shook the building periodically) were frequently noted.
Initiatives

Observations relating to city-sponsored EBP initiatives included participant mention of specific EBP initiatives or training opportunities, experience implementing EBP, and comments about the Evidence-Based Practice and Innovation Center (EPIC), a new center sponsored by the Department of Behavioral Health and Intellectual disAbility Services (DBHIDS) to support EBP in the City of Philadelphia.
At several agencies, therapists and/or leadership mentioned participation in city-sponsored training initiatives (often CBT) or other evidence of participation in initiatives was evident (e.g., trauma informed care binders were visible). Some participants described positive experiences implementing EBPs following participation in an initiative, including feeling like experts following training and consultation in CBT, while others conveyed feeling that particular initiatives were disorganized or required rigid adherence. While most agencies were familiar with EPIC, several exceptions were noted including agencies that had never heard of EPIC and/or were confused or distrustful about what EPIC would accomplish.
Leadership

Leadership observations that were not captured in the “rigidity” construct included differences in therapist behavior when supervisors were not present and the frequency of contact between therapists and leaders.
Therapist and leader behavior and affect toward one another were observed in a number of instances. Most commonly, this was seen as changes in therapist behavior after leaders exited the room (e.g., increased rowdiness) or when therapists were asked to rate certain leaders (e.g., making faces, laughing). At a number of agencies, therapists did not know who the leadership of their agency was. Finally, contact between leadership and therapists was typically captured when therapists mentioned infrequent contact with supervisors or leaders (e.g., only seeing their supervisor two to four times per month).
Cultural diversity

Observations relating to cultural diversity occurred at a few agencies that identified as primarily non-English speaking. Most cultural diversity observations related to spoken and written language, including therapists speaking to one another in another language or requiring translation of paperwork. Occasionally, the research team had difficulty understanding conversation among staff because they communicated with one another in another language.
Distrust

These observations primarily consisted of distrust of the research team, distrust of agency leadership, and general groupthink. Distrust of research included instances in which participants expressed concern about loss of confidentiality and about the purpose of the research. Distrust of agency leadership captured instances in which participants expressed fear of repercussions if negative perceptions were made known to administration. Finally, several instances of groupthink were coded in which participants waited until others had consented to participate before doing so themselves, or where no one asked any questions.
Affect

Affect was commonly observed around completion of the OSC (i.e., laughter and joking) and when participants were asked to rate supervisors/leadership. Two types of affect were commonly observed: anxiety and laughter. Note that this construct sometimes overlapped with “stress”; however, affect was not fully captured by stress.
With regard to affect, in several instances, individuals were tearful or overwhelmed when considering OSC questions about fatigue and burnout. Faces were made by several therapists when told they would have to rate their agency leadership. Further, many groups of therapists became more lively and rowdy while completing the OSC, calling out questions they found amusing, particularly when leadership was no longer in the room.
Stratification by “best,” “average,” and “worst” organizational social context
For our mixed-methods analyses, we stratified agencies into “best,” “average,” and “worst” organizational social context based upon their quantitative score from the LPA. Then, we explored and compared the content generated in each node by agency type (i.e., “best,” “average,” “worst”) to see if differential qualitative content was yielded. Table 4 provides information on how many agencies within each type had content coded at each node (e.g., for the “affect” construct, eight out of the eight “worst” agencies had content coded at that node). Table 5 provides examples of qualitative content stratified by organizational social context.
Four of the factors (i.e., culture/diversity, distrust, engagement, resistance) were only observed in a few agencies, so we did not explore differential content because there were too few observations to make meaningful interpretations. Five of the factors (i.e., rigidity, functionality, proficiency, initiatives, and physical space) did not result in differential content when stratified by organizational social context. Three of the factors (affect, leadership, stress) demonstrated differential content when stratified by organizational social context (see Table 5 for example content).
Agencies that were rated as “worst” organizational social context appeared to display more instances of affect when compared to “best” or “average” agencies. Specifically, therapists in these agencies demonstrated more negative affect by being more likely to sigh or engage in ironic laughter when completing measures, or grimace when asked to rate their supervisor. However, interestingly, in these agencies, therapists and supervisors were more likely to joke with one another than in “best” or “average” agencies.
In agencies that were rated as “worst” or “average” organizational social context, we observed marked differences in therapist interactions with the research team when supervisors were not in the room. Further, at these agencies, therapists seemed less likely to have a relationship with upper-level leadership but more likely to have a close relationship with their direct supervisor. Agencies that were rated as “best” organizational social context seemed to demonstrate evidence of more respectful and bidirectional relationships between therapists and leadership.
Agencies that were rated as “worst” or “average” organizational social context demonstrated more human/emotional stress and related affect when compared to agencies that were rated as “best” organizational social context. For example, typical observations in “worst” and “average” agencies included participants stating that they felt “overwhelmed,” “burned out,” and “fatigued.” Both therapists and supervisors in these agencies appeared to be preoccupied and distracted. Furthermore, in several agencies, the therapists mentioned recent downsizing. Additionally, the physical space was more likely to be described as “uncomfortable” in “worst” and “average” agencies. In “best” agencies, typical observations related to stress and chaos in the physical environment included disruptions such as intercom buzzing.
Case examples of best, average, and worst organizational social context
To illustrate our findings, we provide case examples from three agencies, one each representing “best,” “average,” and “worst” organizational social context. These examples are condensed from the full field notes. See Table 3 for quantitative information relating to the OSC.
Best organizational social context
Agency J employed 25 therapists providing services to youth; 12 of these therapists were engaged in the study (48%). Therapists were predominantly doctoral level (50%); other educational backgrounds included bachelor’s level (8%) and master’s level (17%). Educational level was missing for 25% of participants.
Agency J was located in a converted row home in a predominantly residential area. The research team met with staff in a room with unfinished floors and mismatched chairs; therapists had to complete paperwork on their laps. Stickers were arranged haphazardly on the walls. Children could be heard in the waiting room crying at times. Overall, the agency was described as “chaotic” but “respectful.” Many therapists were late, would frequently leave the room, and were “constantly talking to each other.” There were primarily [non-English] speaking therapists, and therapists were observed interacting primarily with those speaking their primary language. The research team reported there was frequent laughter but that they often did not know what the staff was laughing about because those staff members were speaking [another language]. Many therapists required translation of questionnaires. The executive director was described as “very helpful”.
Average organizational social context
Agency P had a total of 4 therapists providing services to youth; 2 of these therapists were engaged in the study (50%). One therapist had a bachelor’s degree, the other a master’s degree.
Agency P was located in an “old” building “in a not so nice part of town” with “an old elevator that looked unsafe.” A nearby train could frequently be heard passing by. The building included a range of behavioral health services. The meeting room had group rules on the wall and posters about “feelings.” There was a table with chairs around it and “several of the chairs were broken.” Outside the room, children could be heard crying and playing. The group was described by the observers as “very friendly,” and it was noted that “everyone seemed very collegial and respectful.” The clinical director “seemed really nice and well-respected” and was heard reassuring therapists that they could answer questions freely without concern of repercussions.
Worst organizational social context
Agency E had a total of 12 therapists providing services to youth; 4 of these therapists were engaged in the study (33%). Therapists all had a master’s degree.
Agency E was in an “old stone” building that was part of a larger campus, with a security guard and several security checkpoints to reach the meeting room. The room contained a table, computer, and projector. The therapists were “quiet and respectful” and “didn’t interact very much with their supervisor.” The therapists “had no questions” about the research. The clinical director asked a lot of questions, and there “seemed to be a lot of distrust around completion of the study.” One therapist arrived late, and a member of the research team met with her individually. During this time, the staff member made unsolicited comments about the negative perceptions she had of the agency’s leadership.
The results from this study suggest the additive value of approaching implementation science-related questions, particularly those relating to organizational social context, using mixed methods. Calls have been made to include mixed methods in implementation science, but empirical demonstrations are less common. We identified constructs within our field notes consistent with our quantitative measurement model using content analysis and also used grounded theory to identify newly emergent constructs that would have been missed without approaching our research question in this manner. This allowed us to demonstrate the convergent and divergent findings that emerged through the use of mixed methods in a study of organizational social context in community mental health clinics.
Given that organizational culture and climate were measured reliably and validly using the OSC, a primary research question was whether we would be able to identify the six elements measured by the OSC in the field notes. Four of the constructs were frequently observed (i.e., proficiency, rigidity, functionality, stress). Two of the constructs were not readily observed in agency visits (i.e., resistance, engagement). It is likely that these constructs would be better captured through semi-structured interviews with stakeholders, rather than participant observation, given that when we did observe these constructs, it was within the context of one-on-one conversations with therapists. Interviews can be a more effective way to gather difficult, less visible information.
A number of constructs may have been missed if only quantitative measurement had been employed, such as observations about the physical environment, participation in initiatives, leadership, cultural diversity, distrust, and affect. Three of these constructs (i.e., physical environment, initiatives, and leadership) can be conceptualized as consistent with the deductive perspective but not wholly captured under deductive constructs. Physical environment was observed at every agency and is a construct that direct observation is well suited to capture. Although this theme overlapped with the stress construct, a number of important additional observations were made around safety, comfort, cleanliness, the sensory experience, and favorable physical environments. Similarly, the initiatives construct overlapped with proficiency but captured specific comments and opinions around various initiatives sponsored by the City of Philadelphia. Leadership also overlapped with the rigidity construct. An important distinction not captured by rigidity had to do with frontline therapist interactions with leadership. Surprisingly, a number of frontline providers expressed that they had never met agency leadership. We suggest that these important observations, particularly around leadership and physical space, be integrated into quantitative measurement models of organizational social context. For example, questions related to leadership could include “How often do you interact with your leadership?” and “Do you know your leader’s name?”, whereas questions related to physical space could include “How clean is your physical environment?” and “How comfortable is your work environment?” Further, much discourse around the importance of leadership in implementation of innovation suggests the value of considering leadership as a distinct aspect of organizational social context.
The other three constructs related to organizational social context (i.e., cultural diversity, distrust, and affect) would have been missed completely without an inductive framework, suggesting the strength and added value of approaching measurement from a mixed-methods perspective. Cultural diversity was observed in agencies serving primarily ethnic minority youth and families; in certain instances, therapists were primarily non-English speaking and had difficulty understanding some of the self-report measures. Themes around distrust emerged at approximately one-third of the agencies visited. Therapists and supervisors made comments indicating distrust of the research team and leadership, suggesting important insights about the researcher-community partner relationship. Finally, affect (e.g., laughter, sadness) was observed in a number of agencies, particularly around completion of the organizational measures. Given these findings, we suggest that these are theoretically important dimensions that should be assessed alongside organizational social context in future implementation research. We recommend that researchers include these constructs in field notes when collecting quantitative data on organizational social context in the future.
Stratifying agencies by “best,” “average,” and “worst” organizational social context resulted in differential content for the constructs of affect, leadership, and stress. “Worst” agencies demonstrated more displays of affect, both negative and positive, from participants than “average” and “best” agencies. It may be that agencies with poor organizational social context have fewer professional boundaries around affect expression, so that therapists are more accustomed to expressing their emotions freely. “Worst” and “average” agencies demonstrated more evidence of poor interactions with leadership, pointing to the importance of organizational interventions that address participant affect and leadership. Finally, “worst” and “average” agencies demonstrated more instances of human stress rather than environmental stress when compared with “best” agencies, as expected.
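The details of the stratification are given in the study’s methods; as a rough illustration of the general idea only, a tertile split of agencies by a composite OSC score (the agency identifiers and score values below are hypothetical) might look like:

```python
def stratify_tertiles(scores):
    """Split agencies into 'worst'/'average'/'best' thirds by composite score.

    scores: dict mapping agency id -> composite OSC score (higher = better).
    Returns a dict mapping agency id -> stratum label.
    """
    ranked = sorted(scores, key=scores.get)  # ascending: worst scores first
    n = len(ranked)
    strata = {}
    for i, agency in enumerate(ranked):
        if i < n / 3:
            strata[agency] = "worst"
        elif i < 2 * n / 3:
            strata[agency] = "average"
        else:
            strata[agency] = "best"
    return strata

# Hypothetical composite scores for three agencies:
strata = stratify_tertiles({"Agency A": 1.2, "Agency B": 2.5, "Agency C": 3.8})
```

Qualitative themes can then be compared across the resulting strata, as was done here for affect, leadership, and stress.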
First, the majority of observations obtained herein involved groups rather than individual staff members. It is possible that another method, such as individual interviews, would have better facilitated the disclosure of sensitive information (e.g., about leadership). However, group formats are well suited to observing interactions among individuals within a group. Because we were interested in organizational-level rather than individual-level constructs, we believe that observations in the group setting are more relevant. It is possible that individuals who work in positive organizational social contexts may feel more comfortable expressing concerns about their organization in the presence of coworkers, while those who work in negative organizational social contexts may be more reluctant, making this a potential limitation. However, our experience suggests that therapists felt comfortable expressing their feelings about their agency, even when the agency had negative organizational social context ratings. Second, we spent only two hours at each agency, which is a short period of time in which to fully appreciate the context in which the phenomenon of interest takes place and must be considered a limitation of our study. Moreover, the sample of time in which the observations took place may not be representative of the context on another day or at another time of day. The interpersonal dynamics between the observer/interviewer and interviewee can also affect the impressions that are gathered, and intrapersonal factors (e.g., having a bad day) may affect the observations and impressions that are gathered. However, the systematic nature of our observations and analysis (i.e., focus on actors, activities, and setting; chronological recording of observations; field notes that move from wider to more focused context; immediate processing of field notes; systematic coding and inter-rater reliability checks) helps to address some of these inherent shortcomings.
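The inter-rater reliability checks noted above can be quantified with standard chance-corrected agreement statistics; a minimal sketch using Cohen’s kappa for two coders’ categorical codes (the example codes below are hypothetical, not taken from the study data) might be:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical codes."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to the same field-note excerpts:
codes_1 = ["stress", "stress", "proficiency", "rigidity"]
codes_2 = ["stress", "proficiency", "proficiency", "rigidity"]
kappa = cohens_kappa(codes_1, codes_2)
```

A kappa of 1 indicates perfect agreement, while values near 0 indicate agreement no better than chance.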
Third, because not all therapists at each agency participated, there is a potential threat to representativeness. Fourth, because administrators were included in the ratings of organizational social context, it is possible that the ratings would have differed had only therapists been included, although when we explored the data with therapists only, the results did not differ.
Results of the present study suggest there may be organizational social context challenges to overcome in the City of Philadelphia. At present, there are no system-wide interventions targeting organizational social context across a large service system. Evidence-based interventions to improve organizational social context within agencies exist, such as the Availability, Responsiveness, and Continuity (ARC) organizational intervention. An evidence-based intervention strategy such as ARC has the potential to be extended and applied more broadly to an entire system. By targeting systems, rather than individual agencies, a broader impact may be realized.
It is also important to consider how organizational social context is assessed within a system. Typically, organizational social context is measured using quantitative methods. Our results suggest that a combination of participant observation and interviews may facilitate capturing a wider range of important constructs. A multi-method, multi-informant approach to assessing organizational social context may be recommended, as it has been in other areas of health research. For example, in the assessment of child psychopathology, best-practice recommendations include obtaining the perspectives of parents, teachers, and the children themselves and, when possible, employing varying assessment modalities (e.g., self-report and interview). Implementation science research may benefit from a similar model. Using mixed methods may require more resources than utilizing either approach in isolation. While this may limit the feasibility of using both approaches in future research, we found that we were able to integrate these perspectives relatively easily and cost-effectively by recording field notes during the quantitative data collection phase of the project.
A number of frontline providers reported that they did not know the individuals in key leadership roles within their agency. This disconnect between providers and leadership is likely problematic, as leaders have been shown to play an important role in change within an organization. Transformational leadership has been associated with innovation climate during implementation, and innovation climate is related to provider attitudes toward EBPs. Situations in which frontline providers and leaders have little interaction may require interventions that first target the agency’s culture (e.g., increasing interactions among members of the organization) before targeting leadership styles and implementation climate. One potential avenue for exploration is whether agencies that employ fee-for-service therapists have less therapist-leadership interaction.
The constructs of cultural diversity, distrust, and affect were novel and have implications for future research in this area. The emergence of cultural diversity as a construct suggests the need to attend to cultural and language issues in implementation research. For example, it will be important to understand whether current measures of organizational social context are valid for use with therapists for whom English is not the primary language or in agencies implementing EBPs with diverse populations. Distrust of the research team and leadership was observed in a number of agencies. This suggests opportunities may exist to bolster a more collaborative and bidirectional partnership between the community agencies/providers and the researchers involved with EBP implementation, as has been recommended by Chambers and Azrin. Finally, affect (e.g., laughter, sadness) was often observed, particularly around completion of the organizational measures. As the assessment of organizational social context appears to have elicited observable affect in this sample, future research might consider developing interventions that address such affect. For example, an intervention for clinicians who express feeling overwhelmed or depressed in the context of their organization could include strategies for managing stress and identifying supports in the workplace.
The present study explored the complementary contributions of inductive and deductive perspectives. Results support the additive value of mixed-methods perspectives in implementation science research. Indeed, this synthesis of approaches allowed us to better understand and interpret the findings than would have been possible if the quantitative and qualitative findings had been viewed in isolation. Future mixed-methods research in implementation science is recommended.
Organizational Social Context
Department of Behavioral Health and Intellectual disAbility Services
Evidence-Based Practice and Innovation Center
Availability Responsiveness and Continuity organizational intervention
Glisson C, Schoenwald SK, Hemmelgarn A, Green P, Dukes D, Armstrong KS, Chapman JE: Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. J Consult Clin Psychol. 2010, 78: 537-550. 10.1037/a0019160.
Hoagwood K, Burns BJ, Kiser L, Ringeisen H, Schoenwald SK: Evidence-based practice in child and adolescent mental health services. Psychiatr Serv. 2001, 52: 1179-1189. 10.1176/appi.ps.52.9.1179.
Glisson C: The organizational context of children's mental health services. Clin Child Fam Psychol Rev. 2002, 5: 233-253. 10.1023/A:1020972906177.
Rogers EM: Diffusion of Innovations. 5th edition. New York: Free Press; 2003.
Williams NJ, Glisson C: The role of organizational culture and climate in the dissemination and implementation of empirically supported treatments for youth. In Dissemination and Implementation of Evidence-Based Practices in Child and Adolescent Mental Health. 1st edition. Edited by Beidas RS, Kendall PC. New York: Oxford University Press; 2014:61–81.
Glisson C, Dukes D, Green P: The effects of the ARC organizational intervention on caseworker turnover, climate, and culture in children's service systems. Child Abuse Negl. 2006, 30: 855-880. 10.1016/j.chiabu.2005.12.010. discussion 849–854
Glisson C, Schoenwald SK, Kelleher K, Landsverk J, Hoagwood KE, Mayberg S, Green P: Research Network on Youth Mental Health: Therapist turnover and new program sustainability in mental health clinics as a function of organizational culture, climate, and service structure. Adm Policy Ment Hlth. 2008, 35: 124-133. 10.1007/s10488-007-0152-9.
Glisson C, Hemmelgarn A: The effects of organizational climate and interorganizational coordination on the quality and outcomes of children's service systems. Child Abuse Negl. 1998, 22: 401-421. 10.1016/S0145-2134(98)00005-2.
Glisson C, Green P: Organizational climate, services, and outcomes in child welfare systems. Child Abuse Negl. 2011, 35: 582-591. 10.1016/j.chiabu.2011.04.009.
Glisson C, Landsverk J, Schoenwald S, Kelleher K, Hoagwood KE, Mayberg S, Green P: Research Network on Youth Mental Health: Assessing the organizational social context (OSC) of mental health services: implications for research and practice. Adm Policy Ment Hlth. 2008, 35: 98-113. 10.1007/s10488-007-0148-5.
Glisson CA: Dependence of technological routinization on structural variables in human service organizations. Admin Sci Quart. 1978, 23: 383-395. 10.2307/2392416.
Proctor EK, Powell BJ, Feely MA: Measurement in dissemination and implementation science. In Dissemination and Implementation of Evidence-Based Practices in Child and Adolescent Mental Health. 1st edition. Edited by Beidas RS, Kendall PC. New York: Oxford University Press; 2014:22–43.
Martinez RG, Lewis CC, Weiner BJ: Instrumentation issues in implementation science. Implement Sci. 2014, 9: 118-10.1186/s13012-014-0118-8.
Plano Clark VL: The adoption and practice of mixed methods: U.S. trends in federally funded health-related research. Qual Inq. 2010, 16: 428-440. 10.1177/1077800410364609.
Best practices for mixed methods research in the health sciences. [http://obssr.od.nih.gov/mixed_methods_research]
Nastasi BK, Schensul SL: Contributions of qualitative research to the validity of intervention research. J School Psychol. 2005, 43: 177-195. 10.1016/j.jsp.2005.04.003.
Stewart M, Makwarimba E, Barnfather A, Letourneau N, Neufeld A: Researching reducing health disparities: mixed-methods approaches. Soc Sci Med. 2008, 66: 1406-1417. 10.1016/j.socscimed.2007.11.021.
Bernard RH: Research Methods in Anthropology: Qualitative and Quantitative Approaches. Alta Mira Press; 2011.
Beidas RS, Aarons G, Barg F, Evans A, Hadley T, Hoagwood K, Marcus S, Schoenwald S, Walsh L, Mandell DS: Policy to implementation: evidence-based practice in community mental health—study protocol. Implement Sci. 2013, 8: 38-10.1186/1748-5908-8-38.
Glisson C, Green P, Williams NJ: Assessing the organizational social context (OSC) of child welfare systems: implications for research and practice. Child Abuse Negl. 2012, 36: 621-632. 10.1016/j.chiabu.2012.06.002.
Cronbach LJ: Coefficient alpha and the internal structure of tests. Psychometrika. 1951, 16: 297-334. 10.1007/BF02310555.
Bliese PD: Within-group agreement, non-independence, and reliability: implications for data aggregation and analysis. In Multilevel Theory, Research, and Methods in Organizations. 1st edition. Edited by Klein KJ, Kozlowski SWJ. San Francisco: Jossey-Bass; 2000:349–380.
James LR, Demaree RG, Wolf G: R(Wg) - an assessment of within-group interrater agreement. J Appl Psychol. 1993, 78: 306-309. 10.1037/0021-9010.78.2.306.
Glisson C, Hemmelgarn A, Green P, Williams NJ: Randomized trial of the Availability, Responsiveness and Continuity (ARC) organizational intervention for improving youth outcomes in community mental health programs. J Am Acad Child Psy. 2013, 52: 493-500. 10.1016/j.jaac.2013.02.005.
Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk JA: Mixed methods designs in implementation research. Adm Policy Ment Hlth. 2011, 38: 44-53. 10.1007/s10488-010-0314-z.
Green CA, Duan N, Gibbons RD, Hoagwood KE, Palinkas LA, Wisdom JP: Approaches to mixed methods dissemination and implementation research: methods, strengths, caveats, and opportunities. Adm Policy Ment Health. 2014.
Aarons GA, Fettes DL, Sommerfeld DH, Palinkas LA: Mixed methods for implementation research: application to evidence-based practice implementation and staff turnover in community-based organizations providing child welfare services. Child Maltreat. 2012, 17: 67-79. 10.1177/1077559511426908.
Gill P, Stewart K, Treasure E, Chadwick B: Methods of data collection in qualitative research: interviews and focus groups. Br Dent J. 2008, 204: 291-295. 10.1038/bdj.2008.192.
Aarons GA, Ehrhart MG, Farahnak LR, Sklar M: Aligning leadership across systems and organizations to develop a strategic climate for evidence-based practice implementation. Annu Rev Public Health. 2014, 35: 255-274. 10.1146/annurev-publhealth-032013-182447.
Green AE, Albanese BJ, Cafri G, Aarons GA: Leadership, organizational climate, and working alliance in a children's mental health service system. Community Ment Health J. 2013.
Aarons GA, Sommerfeld DH: Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. J Am Acad Child Adolesc Psychiatry. 2012, 51: 423-431. 10.1016/j.jaac.2012.01.018.
Aarons GA, Sommerfeld DH, Willging CE: The soft underbelly of system change: the role of leadership and organizational climate in turnover during statewide behavioral health reform. Psychol Serv. 2011, 8: 269-281. 10.1037/a0026196.
Glisson C, Schoenwald SK: The ARC organizational and community intervention strategy for implementing evidence-based children's mental health treatments. Ment Health Serv Res. 2005, 7: 243-259. 10.1007/s11020-005-7456-1.
Dirks MA, De Los RA, Briggs-Gowan M, Cella D, Wakschlag LS: Annual research review: embracing not erasing contextual variability in children's behavior—theory and utility in the selection and use of methods and informants in developmental psychopathology. J Child Psychol Psychiatry. 2012, 53: 558-574. 10.1111/j.1469-7610.2012.02537.x.
Hunsley J, Mash EJ: Evidence-based assessment. Annu Rev Clin Psychol. 2007, 3: 29-51. 10.1146/annurev.clinpsy.3.022806.091419.
Damanpour F, Schneider M: Phases of the adoption of innovation in organizations: effects of environment, organization and top managers. Brit J Manage. 2006, 17: 215-236. 10.1111/j.1467-8551.2006.00498.x.
Gumusluoglu L, Ilsev A: Transformational leadership, creativity, and organizational innovation. J Bus Res. 2009, 62: 461-473. 10.1016/j.jbusres.2007.07.032.
Jung DI, Chow C, Wu A: The role of transformational leadership in enhancing organizational innovation: hypotheses and some preliminary findings. Leadership Quart. 2003, 14: 525-544. 10.1016/S1048-9843(03)00050-X.
Scott SG, Bruce RA: Determinants of innovative behavior: a path model of individual innovation in the workplace. Acad Manage J. 1994, 37: 580-607. 10.2307/256701.
Chambers DA, Azrin ST: Partnership: a fundamental component of dissemination and implementation research. Psychiatr Serv. 2013, 64: 509-511. 10.1176/appi.ps.201300032.
Funding for this research project was supported by NIMH grants to Beidas (MH099179) and Benjamin Wolk (MH103955). Additionally, the preparation of this article was supported in part by the Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University in St. Louis, through an award from the National Institute of Mental Health (R25 MH080916) and Quality Enhancement Research Initiative (QUERI), Department of Veterans Affairs Contract, Veterans Health Administration, Office of Research & Development, Health Services Research & Development Service. Dr. Beidas was an IRI fellow from 2012 to 2014.
The authors declare that they have no competing interests.
RSB, FB, ACE, and MOH were involved in the study design. RSB and FB conceptualized the specific research question. RSB and LMW collected all data and wrote field notes. RSB and CBW coded the field notes. RSB and CBW wrote the manuscript with guidance from FB. FB, LMW, ACE, and MOH provided feedback on the manuscript. All authors read and modified drafts and approved the final manuscript.