A WHO-HPH operational program versus usual routines for implementing clinical health promotion: an RCT in health promoting hospitals (HPH)

Implementation Science (2018) 13:153

https://doi.org/10.1186/s13012-018-0848-0

  • Received: 6 April 2018
  • Accepted: 6 December 2018

Abstract

Background

Implementation of clinical health promotion (CHP), which aims at better health gain, is slow despite its documented effect. CHP focuses on potentially modifiable lifestyle risks such as smoking, alcohol, diet, and physical inactivity. An operational program was created to improve implementation. It included patients, staff, and the organization, and it combined existing standards, indicators, documentation models, a performance recognition process, and a fast-track implementation model.

The aim of this study was to evaluate if the operational program improved implementation of CHP in clinical hospital departments, as measured by health status of patients and staff, frequency of CHP service delivery, and standards compliance.

Methods

Forty-eight hospital departments were recruited via open call and stratified by country. Departments were assigned to the operational program (intervention) or usual routine (control group). Data for analyses included 36 of these departments and their 5285 patients (median 147 per department; range 29–201), 2529 staff members (70; 10–393), 1750 medical records (50; 50–50), and standards compliance assessments.

Follow-up was measured after 1 year. The outcomes were health status, service delivery, and standards compliance.

Results

No health differences between groups were found, but the intervention group had higher identification of lifestyle risk (81% versus 60%, p < 0.01), related information/short intervention and intensive intervention (54% versus 39%, p < 0.01 and 43% versus 25%, p < 0.01, respectively), and standards compliance (95% versus 80%, p = 0.02).

Conclusions

The operational program improved implementation by way of lifestyle risk identification, CHP service delivery, and standards compliance. The unknown health effects, the bias, and the limitations should be considered in implementation efforts and further studies.

Trial registration

ClinicalTrials.gov: NCT01563575. Registered 27 March 2012. https://clinicaltrials.gov/ct2/show/NCT01563575

Keywords

  • Strategic implementation
  • Fast-track implementation
  • Quality improvement
  • Clinical health promotion
  • Health promoting hospitals
  • Lifestyle risk
  • Patients
  • Hospital staff

Background

Turning evidence into practice in healthcare often takes decades [1–3]. Slow implementation also occurs with patient-centered activities to modify lifestyle risk factors [4], such as clinical health promotion (CHP), which aims at better health gain for patients, staff, and communities.

A sub-type of health promotion [5–7], CHP covers patient enablement, disease prevention, health promotion, and rehabilitation within patient pathways [8, 9]. CHP relies on counseling [10–13] in which clinical staff support patients in controlling and improving both their health and its modifiable determinants [14, 15], such as daily smoking, risky alcohol drinking, poor nutrition, physical inactivity, and other lifestyle risks [8]. In the short term, within pathways, CHP has been shown to improve treatment results and prognoses in surgery [16–19], obstetrics [20–22], internal medicine [23–27], and psychiatry [28]. It is also cost-effective [29] and well received by patients [30–32]. In the long term, it can contribute to better public health [16, 33]. Even so, CHP is rarely implemented [33].

The WHO-initiated International Network of Health Promoting Hospitals and Health Services (HPH) aims to further the implementation of CHP, and of the World Health Organization (WHO) concepts, values, strategies, and standards of health promotion in general, into the organizational structure and culture of hospitals and health services [34, 35].

To support the attainment of this goal, a package of validated tools and a recognition of performance (RP) were recently developed by WHO and HPH. The package of tools included five WHO standards with related indicators [34, 36, 37] that were developed according to the International Society for Quality in Health Care (ISQUA) criteria [38], and two HPH documentation models [39, 40]. The RP used HPH certifications recognizing fulfillment of the five WHO standards [41].

To speed up implementation, a 1-year fast-track implementation model for CHP (Fast-IM) was also added [41]. The Fast-IM is data-driven and draws on resources for strategic implementation of evidence [3, 42–49] as well as general quality improvement tools such as the Plan-Do-Check-Act (PDCA) cycle [50]. It sets 1-year implementation goals for the individual organization, based on local data identifying important implementation gaps, and incorporates adjustable quality plans with clear, measurable 3-month milestones [41].

Combined, the package of tools, the RP, and the Fast-IM were evaluated as a WHO-HPH operational program versus usual routines for implementation of CHP. Specifically, the evaluation focused on the operational program’s ability to potentially improve dosage, quality, and fidelity of implementation by way of risk identification, CHP service delivery, and standards compliance, and by this route, possibly, improve the health of patients and staff [41].

Aim

The aim of this study was to evaluate if the operational program improved implementation of CHP in clinical hospital departments. This was measured by health status of patients and staff and by implementation process in terms of frequency of CHP service delivery and standards compliance.

Methods

Participants

For our randomized controlled trial, with the clinical hospital department as the unit of randomization and analysis (i.e., no clustering), we hypothesized that allocation to the operational program (intervention group) would improve the health of patients and staff, increase delivery of CHP services to at-risk patients, and improve WHO standards compliance at the department level, compared to the control group departments continuing usual implementation routines. The service delivery and standards compliance outcomes serve as measures of procedural and structural changes in implementation, and the health outcomes serve as a measure of the potential effect of such changes.

Inclusion criteria were clinical hospital departments responsible for treatment. Exclusion criteria were more than one department per hospital, as well as palliative departments, nursing homes, pediatric departments, non-hospital clinics, and primary care facilities, since the CHP-specific process components had not been validated in these settings.

Based on a secondary outcome (standards compliance), we calculated a sample size of 2 × 40 clinical departments, because no studies existed on the primary outcome of health status. The power calculation was based on a previous study [40], which had shown that baseline CHP service deliveries could be expected to reach no more than 40% of the at-risk patients. The minimum relevant difference in service deliveries was 30%, the expected outcome was 70%, power was 80%, and two-sided significance was 5% [41].
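The 2 × 40 figure is not derived step by step in the text. As an illustrative sketch (not the authors' actual calculation), a textbook two-proportion power computation with the stated parameters (40% versus 70% service delivery, 80% power, 5% two-sided significance) can be written using Cohen's arcsine effect size:

```python
from math import asin, ceil, sqrt
from statistics import NormalDist

def n_per_group(p0, p1, alpha=0.05, power=0.80):
    """Per-group sample size for detecting a difference between two
    proportions, via Cohen's arcsine effect size h and the normal
    approximation. Illustrative sketch, not the trial's own program."""
    z = NormalDist()
    h = abs(2 * asin(sqrt(p1)) - 2 * asin(sqrt(p0)))  # effect size h
    z_alpha = z.inv_cdf(1 - alpha / 2)                # two-sided significance
    z_beta = z.inv_cdf(power)                         # power
    return ceil(((z_alpha + z_beta) / h) ** 2)

# Expected 40% versus minimum relevant 70% service delivery, as stated above
n = n_per_group(0.40, 0.70)
```

This simple approximation yields fewer departments per group than the trial's 2 × 40 target, so the published figure presumably also absorbed considerations, such as expected attrition and department-level variability, that are not spelled out in the text.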

Full inclusion of the 2 × 40 departments, along with the 2-year follow-up, was not achieved before the WHO standards were updated and revised in 2016, at which point only 48 departments had been included and randomized (Fig. 1). These 48 departments were recruited via an open call among HPH member hospitals (Fig. 1). Because HPH members are committed to using the WHO standards, and because the revised standards were markedly different, neither new centers nor centers already participating in the RCT could be expected to begin or continue using the old version.
Fig. 1

Trial profile

Of the 48 included departments, 8 (4 in each group) dropped out after allocation. The remaining 40 departments were from 8 countries/regions in Asia and Europe: Taiwan (n = 21), Czech Republic (8), Slovenia (3), Croatia (2), Estonia (2), Japan (2), Denmark (1), and Malaysia (1). Of these, 36 (75% of the originally included 48) completed the study. The characteristics at department-level, staff-level, and patient-level are presented in Table 1. The study was registered at ClinicalTrials.gov (NCT01563575).
Table 1

Characteristics of the 40 hospital departments that participated and characteristics of the 5285 patients and 2529 staff from the 36/40 clinical departments that submitted data for analyses, presented at the department level as median and range

Hospital departments                                        Intervention (n = 22)   Control (n = 18)
 Hospital type (community/general/specialized/university)   18/50/9/23%             6/61/11/22%
 Ownership (public/private non-profit/private for-profit)   64/27/9%                33/61/6%
 Department (medicine/surgery and obs-gyn/psychiatry)       73/18/9%                66/28/6%
 Catchment area (urban/rural/mixed)                         32/9/59%                61/11/28%
 Number of beds (< 200/200–599/> 599)                       23/36/41%               0/50/50%
 HPH member                                                 100%                    100%

Patients and staff                                          Intervention (n = 18)   Control (n = 18)
 Patients (n = 5285), per department                        152 (75–201)            142 (29–200)
  Age 18–29 years                                           8% (0–48%)              5% (0–26%)
  Age 30–49 years                                           19% (0–56%)             23% (6–66%)
  Age 50–69 years                                           40% (10–61%)            44% (15–59%)
  Age > 70 years                                            33% (0–91%)             28% (0–71%)
  Women                                                     52% (28–79%)            55% (32–100%)
  BMI                                                       26 (17–44)              25 (16–38)
  Daily smoking                                             10% (0–23%)             10% (2–16%)
  Hazardous alcohol drinking                                3% (0–13%)              2% (0–8%)
  Physical inactivity                                       60% (15–83%)            66% (32–88%)
  Risk of malnutrition                                      44% (16–72%)            36% (15–60%)
  Overweight/obesity                                        60% (25–92%)            51% (25–86%)
 Staff (n = 2529), per department                           70 (10–393)             71 (18–223)
  Age 18–29 years                                           28% (0–66%)             25% (4–52%)
  Age 30–49 years                                           53% (29–80%)            59% (40–73%)
  Age > 50 years                                            19% (0–60%)             16% (0–43%)
  Women                                                     80% (49–95%)            85% (64–97%)
  BMI                                                       23 (17–34)              23 (17–34)
  Daily smoking                                             5% (0–20%)              8% (0–37%)
  Hazardous alcohol drinking                                1% (0–9%)               1% (0–12%)
  Physical inactivity                                       73% (24–100%)           79% (27–99%)
  Risk of malnutrition                                      26% (4–60%)             27% (7–47%)
  Overweight/obesity                                        33% (10–75%)            28% (12–52%)

Missing data 0–9%

Randomization and masking

We randomly allocated departments to undertaking the operational program (intervention group) or to continuing usual implementation routines (control group). An external researcher performed the computerized randomization, using blocks of unknown sizes and stratification by country. Sealed, opaque envelopes concealed allocation numbers from the international research team that enrolled departments. All allocation was video recorded. In view of the nature of the intervention, the participating departments and project staff were aware of their allocation.
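The allocation procedure described above (computerized, stratified by country, blocks of unknown sizes) can be sketched as follows. The block sizes of 2 and 4 and all names are illustrative assumptions, not details taken from the trial:

```python
import random

def stratified_block_allocation(departments_by_country, seed=None):
    """Allocate departments 1:1 to 'intervention' or 'control' within each
    country stratum, using randomly chosen, permuted blocks of size 2 or 4
    so that block sizes stay unknown to recruiters.

    `departments_by_country` maps country -> list of department ids.
    Illustrative sketch, not the trial's actual randomization program."""
    rng = random.Random(seed)
    allocation = {}
    for country, departments in departments_by_country.items():
        arms = []
        while len(arms) < len(departments):
            block_size = rng.choice([2, 4])               # unknown block sizes
            block = ["intervention", "control"] * (block_size // 2)
            rng.shuffle(block)                            # permute within block
            arms.extend(block)
        for dept, arm in zip(departments, arms):          # extra slots unused
            allocation[dept] = arm
    return allocation

# Hypothetical usage: one stratum of four departments
alloc = stratified_block_allocation({"Taiwan": ["dept_a", "dept_b",
                                                "dept_c", "dept_d"]}, seed=42)
```

Within each completed block the two arms stay balanced; only a truncated final block can leave a small imbalance, which is why block sizes are concealed rather than fixed.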

Procedures

All data were collected between October 2012 and October 2016. Each department’s data collection took 1–3 months. The international research team developed and provided project instruction manuals, forms, and templates [41] (see Additional file 1). All translation and provision of information to staff was handled locally. An introduction seminar covering data collection, surveying, and auditing training was held for staff at all participating departments (online or on location) before randomization and project start.

Participating departments assigned data collection tasks to one or more of their own staff according to local needs and resource availability.

Intervention group

The intervention group began the operational program [41] immediately after allocation. After 1 year, they repeated the data collection and underwent an external audit including interviews with staff and managers [41, 51].

Control group

To reduce the risk of contamination, the control group departments did not measure pre-implementation status but instead waited 1 year. During this wait, they continued their own usual implementation routines (understood as usual management activity, parallel to the clinical term "treatment as usual"). These routines presumably varied among the participating hospitals, and since all were HPH members, many already worked with the WHO standards. No data were collected on each hospital's usual implementation routines in advance of the project. After the 1-year wait, the control group departments collected their data (Fig. 1). For convenience, and only after the trial had ended, the control group was also offered the operational program; 17 of 18 control group departments accepted.

Data collection (both groups)

The collected data covered the patient, staff, and department levels. The validated Short-Form 36 version 2 (SF36v2) health survey [52, 53] was used to assess patient and staff health. As described in the RCT protocol, each department surveyed 200 consecutive patients or 1 month's population of patients, whichever stopping point was reached first, together with all staff currently employed by the department [41]. The eight dimensions of SF36v2 were summarized in physical (PCS) and mental (MCS) component scores for analyses, with a score of 100 representing maximum functionality.

For identification of patient lifestyle risks and related CHP service delivery, validated medical record audit tools were used [39, 40] according to the operational program [41]. Each department audited 50 consecutive medical records for documentation of patient lifestyle risks, using the HPH DATA Model [39], and for associated delivery of relevant CHP services, using the HPH DocAct Model [40], as described in the protocol [41]. If data were available in the medical record, e.g., for "does the patient smoke daily?", the auditing staff member would answer "yes" or "no" as relevant, and that would count as documented risk (either positive or negative). If data were unavailable, the staff member would answer "unknown", and that would count as undocumented risk [39].
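The yes/no/unknown counting rule just described can be sketched as a small tally function. This is an illustration of the counting logic only, not the HPH DATA Model tool itself, and the field names are assumptions:

```python
from collections import Counter

def audit_summary(audit_answers):
    """Summarize medical record audit answers for one risk factor.

    `audit_answers` is one 'yes' / 'no' / 'unknown' string per audited
    record: 'yes' and 'no' both count as documented risk status (positive
    or negative), while 'unknown' counts as undocumented risk."""
    counts = Counter(audit_answers)
    total = len(audit_answers)
    documented = counts["yes"] + counts["no"]
    return {
        "documented": documented,                 # records with risk status
        "undocumented": counts["unknown"],        # records lacking status
        "documented_pct": round(100 * documented / total) if total else 0,
        "at_risk": counts["yes"],                 # positive risk findings
    }

# Hypothetical audit of four records for daily smoking
summary = audit_summary(["yes", "no", "unknown", "yes"])
```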

The same approach was used for auditing CHP service documentation in the records [40]. CHP services can be categorized as either short interventions (SI) or intensive interventions (II) [54, 55]. SI do not exceed three counseling sessions and/or a total contact time of 1 h [55]. II consist of four or more in-person sessions of 10 min or more each and are often theory-based, offered by trained staff, and include patient education and pharmacological support [54, 55]. While the SI/II categorization [54, 55] was used in the study, the design and contents of each CHP intervention were determined locally.

Compliance with the validated WHO standards [34, 36, 37] was also assessed (Table 2). The WHO standards contain 40 measurable elements within five standard domains [34] (see Table 2). Department performance was recognized with a certificate based on standard compliance (91–100% was gold level).
Table 2

The 5 WHO standards for health promotion in hospitals: compliance with the 40 measurable elements (ME) in the intervention group (n = 18) and the control group (n = 18), presented as median and range

Standard 1: Management policy (9 ME)
 Description: The organization has a policy for HP. The policy is implemented as part of the overall QM system.
 Objective: To describe the framework for the organization's HP activities as an integral part of the QM system.
 Compliance: intervention 8 (6–9); control 7 (3–9)
 Measurable elements:
  1. Stated aims and mission include HP
  2. Minutes of the governing body reaffirm agreement within the past year to participate in the WHO HPH Network
  3. The current quality and business plans include HP for patients, staff and the community
  4. Personnel and functions for the coordination of HP are identified
  5. There is an identifiable budget for HP services and materials
  6. Operational procedures such as practice guidelines or pathways incorporating HP are available in clinical departments
  7. Specific structures and facilities required for HP (including resources, space, equipment) can be identified
  8. Data are routinely captured on HP interventions and available to staff for evaluation
  9. A programme for quality assessment of HP activities is established

Standard 2: Patient assessment (7 ME)
 Description: In partnership with patients, staff systematically assess the needs for HP activities.
 Objective: To support patient treatment, improve prognosis and promote the health and wellbeing of patients.
 Compliance: intervention 7 (5–7); control 6 (1–7)
 Measurable elements:
  1. Guidelines on how to identify smoking, alcohol consumption, nutritional and psycho-social-economic status are present
  2. Guidelines/procedures have been revised within the last year
  3. Guidelines are present on how to identify needs for HP for groups of patients (e.g. asthma patients, diabetes patients etc.)
  4. The assessment is documented in the patient's medical record at admission
  5. There are guidelines/procedures for reassessing needs at discharge or end of a given intervention
  6. Information from the referring physician or other relevant sources is available in the patient's record
  7. The patient's record documents social and cultural background as appropriate

Standard 3: Patient information and intervention (6 ME)
 Description: Patients receive information on significant factors concerning their disease/condition, and HP interventions are established in all pathways.
 Objective: To ensure patients are informed about activities, empowered in an active partnership, and to facilitate integration of HP activities in all pathways.
 Compliance: intervention 6 (4–6); control 4 (1–6)
 Measurable elements:
  1. Information given to the patient is recorded in the patient's records
  2. Health promotion activities and expected results are documented and evaluated in the records
  3. Patient satisfaction assessment of the information given is performed and the results are integrated into the QM system
  4. General health information is available
  5. Detailed information about high-risk diseases is available
  6. Information is available on patient organizations

Standard 4: Promoting a healthy workplace (10 ME)
 Description: The management establishes conditions for the development of a healthy workplace.
 Objective: To support the development of a healthy and safe workplace and to support health promotion activities of staff.
 Compliance: intervention 10 (7–10); control 8 (2–10)
 Measurable elements:
  1. Working conditions comply with national/regional directives and indicators
  2. Staff comply with health and safety requirements and all workplace risks are identified
  3. New staff receive an introductory training that addresses the hospital's HP policy
  4. Staff in all departments are aware of the content of the organization's health promotion policy
  5. The performance appraisal system and continuing professional development include HP
  6. Working practices (procedures and guidelines) are developed by multidisciplinary teams
  7. Staff are involved in hospital policy-making, audit and review
  8. Policies for awareness on health issues are available for staff
  9. Smoking cessation programmes are offered
  10. Annual staff surveys are carried out, including an assessment of individual behaviour, knowledge of supportive services/policies, and use of supportive seminars

Standard 5: Continuity and cooperation (8 ME)
 Description: The organization has a planned approach to collaboration with other providers and other institutions and sectors.
 Objective: To ensure collaboration with relevant providers and initiate partnerships to optimize integration of HP activities in pathways.
 Compliance: intervention 8 (7–8); control 7 (3–8)
 Measurable elements:
  1. The management board is taking the regional health policy plan into account
  2. The management board can provide a list of health and social care providers working in partnership with the hospital
  3. The intra- and intersectoral collaboration with others is based on execution of the regional health policy plan
  4. There is a written plan for collaboration with partners to improve the patients' continuity of care
  5. Patients/families are given understandable follow-up instructions at out-patient consultation, referral or discharge
  6. There is an agreed-upon procedure for the exchange of all relevant patient information between organizations
  7. The receiving organization gets a written summary of condition, health needs and interventions already provided
  8. If appropriate, a plan for rehabilitation describing roles of the organization/collaborators is documented in the record

Overall compliance (40 ME): intervention 38 (31–40); control 32 (12–40); p = 0.02

HP, (clinical) health promotion; QM, quality management. Overall 3% missing data.

Outcomes

The primary outcome was health status of patients and staff as measured by SF36v2. The secondary outcomes were CHP service delivery to identified at-risk patients as well as WHO standards compliance.

Statistical analysis

The characteristics and results were reported as medians and ranges for each department. The two groups were compared by an external researcher, blinded to group allocation, using non-parametric statistics. Health status and standards compliance were analyzed using the unpaired Wilcoxon (Mann-Whitney U) test, and service delivery frequencies were analyzed using Fisher's exact test. P values below 0.05 were considered significant. All analyses were performed using SAS 9.4.
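For reference, both comparisons can be reproduced outside SAS, e.g., with `scipy.stats.mannwhitneyu` and `scipy.stats.fisher_exact`. As a dependency-free sketch of the latter, a two-sided Fisher's exact test for a 2 × 2 table can be written directly from the hypergeometric distribution; the service-delivery counts in the example are hypothetical, not the trial's data:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    def prob(x):  # P(cell (1,1) = x) given fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # small tolerance guards against float rounding on ties
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Hypothetical at-risk patients serviced / not serviced, per group
p = fisher_exact_two_sided(54, 46, 39, 61)
```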

Results

The response rate of departments was 40/48 (83%), but only 36/48 (75%) submitted complete data sets for the analyses. The data from the 36 departments covered 5285 patients (median per department = 147; range = 29–201) and 2529 staff members (70; 10–393). Overall missing data were 0–9% per factor (Table 1). All results were analyzed at department level.

Health status of patients and staff

No differences in the health of patients or staff were found. At baseline, the intervention group’s (n = 22) SF36v2 patient PCS and MCS per department were 57 (11–95) and 61 (13–97). Their baseline staff PCS and MCS were 75 (36–97) and 71 (31–96).

At follow-up, the intervention group patient PCS was 58 (7–96) versus 64 (12–98) in the control group (p = 0.19), and the patient MCS was 64 (9–98) versus 69 (17–99) (p = 0.25). The staff PCS was 74 (34–97) in the intervention group at follow-up versus 73 (36–97) (p = 0.58) in the control group, and the staff MCS was 70 (29–95) versus 68 (30–95) (p = 0.26). Figure 2 presents the PCS and MCS of patients and staff.
Fig. 2

Comparison of the intervention (black) and control (red) departments (n = 36) on patient (n = 5285; median 147 per department, range 29–201) and staff (n = 2529; 70, 10–393) health status by Short Form 36 version 2 (SF36v2), presented as median physical component score (PCS) and mental component score (MCS)

Identification of lifestyle risks and related CHP service delivery

The frequency of at-risk patients was similar: 252 (28%, 24–447) patients per risk factor on average in the intervention group versus 225 (26%, 75–419) in the control group. However, the completeness of patient risk documentation in the medical records was significantly better in the intervention group (81% versus 60%, p < 0.01). Delivery of information and/or short intervention (54% versus 39%, p < 0.01) and intensive intervention services (43% versus 25%, p < 0.01) to at-risk patients were also significantly more frequent in the intervention group. Figure 3 presents the completeness of the departments’ documentation of risk per factor and the degree to which systematic CHP interventions were then provided to documented at-risk patients.
Fig. 3

Comparison of the intervention (black) and control (red) departments' documentation of their 1750 patients' risks in the medical records (a) and related delivery of information/short intervention (b) and intensive intervention (c) to at-risk patients (3% missing data; *p < 0.05)

Looking at the intervention group at baseline (n = 22), the completeness of risk documentation was 65%, delivery of information and/or short intervention services to at-risk patients was 40%, and delivery of intensive intervention services to at-risk patients was 35%.

Standard compliance

The overall compliance with the WHO standards was significantly higher in the intervention group (95% versus 80%, p = 0.02) (Table 2). Gold-level certificates for fulfilling ≥ 91% of the standards were issued to 14 intervention group departments and 9 control group departments. Figure 4 shows the compliance improvements of the intervention group as well as the compliance of the control group. The median within-group standards compliance improvement in the intervention group was 12% (range 0–50%).
Fig. 4

Compliance with WHO standards for health promotion in hospitals (in %) in the clinical departments of the intervention group (baseline n = 22, 1-year follow-up n = 18) and of the control group (n = 18)

Sensitivity analysis of the reported data

The reported WHO standards compliance and CHP service delivery were validated on random samples during the site visits in the intervention group, using external medical record audits and interviews. No significant differences between the internal and external audits were found (Table 3).
Table 3

Sensitivity analysis comparing the internal audit (IA) and the external audit at site visit (EA) of medical records in the intervention group, for documentation of four risk factors and related service delivery (IA: 850 medical records, in total 4 × 850 = 3400 assessments; EA: 64 medical records, in total 4 × 64 = 256 assessments)

 

Risk documentation (documented/total records)
 Daily smoking:               IA 769/850 (90%);    EA 61/64 (95%)
 Hazardous alcohol drinking:  IA 734/850 (86%);    EA 55/64 (86%)
 Physical inactivity:         IA 603/850 (71%);    EA 45/64 (70%)
 Nutritional problems:        IA 767/850 (90%);    EA 62/64 (97%)
 Total (all four):            IA 2873/3400 (85%);  EA 223/256 (87%);  p = 0.3

Service delivery to documented at-risk patients
 Daily smoking:               IA at-risk 98/769 (13%), serviced 57/98 (58%);     EA at-risk 8/61 (13%), serviced 4/8 (50%)
 Hazardous alcohol drinking:  IA at-risk 24/734 (3%), serviced 13/24 (54%);      EA at-risk 1/55 (2%), serviced 0/1 (0%)
 Physical inactivity:         IA at-risk 338/603 (56%), serviced 232/338 (69%);  EA at-risk 35/45 (78%), serviced 23/35 (66%)
 Nutritional problems:        IA at-risk 634/767 (83%), serviced 283/634 (45%);  EA at-risk 40/62 (65%), serviced 23/40 (57%)
 Total (all four):            IA at-risk 1094/2873 (38%), serviced 585/1094 (53%);  EA at-risk 84/223 (38%), serviced 50/84 (60%);  p = 0.3

CHP service delivery to no-risk patients

Interestingly, CHP services were also delivered to patients with no or undocumented risks. Both groups provided CHP services to 13% of the documented no-risk patients and to 9–12% of patients with undocumented risks. In total, across all risks and regardless of risk documentation, 65% of patients in the intervention group and 47% in the control group received CHP services.

Discussion

While we found no differences in the health status of patients and staff between groups, we did find that the intervention group identified patient lifestyle risks better and more frequently delivered related CHP services to at-risk patients compared to the control group that continued usual implementation routines. We also found that the intervention group had an overall higher compliance with the WHO standards.

Service delivery to at-risk patients

Risk documentation and service delivery (Fig. 3) are relevant to healthcare quality, since these issues have been reported to be a general challenge [56, 57], not least for smoking [33, 58, 59]. Reported barriers include lacking treatment resources, lacking awareness of the negative influence of lifestyle risks on treatment results, missing reimbursement of CHP services, and insufficient management support, organizational focus on CHP, staff competencies, and knowledge of the implementation process [33, 60, 61]. Suggested strategies to overcome these barriers include raising awareness of the evidence of CHP effectiveness, strengthening leadership engagement, and incentivizing CHP treatments [33].

Our results indicate that the operational program improved central parts of implementation within 1 year. The possible explanations for this improvement were explored in a nested qualitative study, in which staff and managers echoed the already reported barriers and stated that the operational program increased awareness of and engagement in CHP within the departments [51]. In this light, it is plausible that the very presence of the study might have contributed to improved implementation by raising awareness of the implementation process. It is also possible that the prominent place CHP services have in the WHO standards may have contributed to improved implementation, and thus that the increase in standards compliance may in fact correlate with the improvements found for risk identification and service delivery. If this explanation holds, it is highly interesting that the Joint Commission has by now adopted tobacco and alcohol screening and treatment measures in its standards [62–64] and that the American College of Surgeons has added smoking as a risk factor in its National Surgical Quality Improvement Program [65]. The effects of adopting such risk documentation and CHP services will be interesting to follow in the future.

WHO standards compliance

Worldwide, healthcare systems use standards and indicators, and these are commonly viewed as an integral and justifiable part of quality management [66].

However, the evidence of the effect of quality improvement using standards and indicators is sparse [66–72]. Randomized studies have found effects of quality improvement programs on healthcare process and structure outcomes [73–76], but either without investigating potential health effects [73, 76] or without finding an effect on health [74, 75]. Non-randomized studies have shown mixed results; some have found health effects [77], while others have not [78]. On this basis, improving standards compliance alone (Table 2 and Fig. 4) is not sufficiently robust evidence to merit and inform action, and our study therefore also included clinical outcomes related to actual risk identification, service delivery, and health status.

Patient and staff health

For patients, the evidence of the positive effects of integrating CHP services in clinical treatments is growing [16, 17, 20, 21, 23, 24, 28, 33]. The reason we did not find evidence of health improvements among patients could relate to our following up on departments rather than individuals. Thus, our study does not disprove health effects at the level of individual patients, which would be expected considering the effect of CHP interventions known from the literature, for instance on smoking cessation [79]. In our study design, it is possible that the fast flow of patients through the departments might have diluted health effects.

The WHO standards also include staff, and hospitals are notoriously hazardous workplaces [80]. Additionally, staff members are the ones delivering CHP to patients, and both the health and the competencies of staff and managers have been shown to be associated with implementation of CHP. Smoking staff and managers, for instance, are less positive towards smoking cessation [81, 82]. Smoking staff less frequently deliver interventions [83] and follow-up services [84], and smoking managers less frequently adopt no-smoking policies [82]. Lacking CHP competencies among staff are also a main barrier to actually delivering services [83, 84]. It seems probable, then, that improving both competencies and lifestyle risks among staff and managers might reduce barriers to CHP implementation.

The nested qualitative study, which was carried out alongside this RCT, reported that staff and managers were generally positive towards the operational program introduced and considered it to be worthwhile [51].

Just as for patients, the fact that we did not find an effect on staff health might be explainable by our following up on departments rather than individual staff members.

Even so, it can be noted that the staff in our study had a relatively high health-related quality of life, compared to the literature. The staff in our study had a PCS of 73–74 and an MCS of 68–70, as measured by SF36v2, whereas a 2013 sample of 2964 Norwegian nurses had lower PCS and MCS averages of 50 and 48, respectively, using the same 0–100 scale, but measured by SF12 [85].

Interestingly, staff and patients in our study had similar MCS. This could be due to a generally positive culture in HPH hospitals and because only three psychiatric departments took part. Compared to staff, patients had markedly lower PCS, which is readily explainable by their physical illness.

Bias and limitations

Our study has several biases and limitations. One major limitation is that the study included only 48 of the targeted 80 departments. Of these, 40 (83%) departments participated, and 36 (75%) departments completed. This small number introduces a high risk of overlooking potentially significant results (e.g., health gain), which a fully powered study might have found (type-2 error).

Along with the open-call inclusion strategy, this might have resulted in a sample of departments with only highly motivated managements, potentially making our results optimistic. Likewise, the small sample size might also have meant that the randomization could not adequately minimize confounding differences between groups.

Another major limitation is the design that did not follow individual patients and staff, which was chosen due to our ambition to show a 1-year health effect at department level and thus avoid clustering. This alone might have rendered the study unable to identify health gains among patients and staff—provided such differences exist.

As this study is one of the first in its area, it might also suffer from a risk of type-1 error regarding the significant results on risk identification, service delivery, and compliance. Further sizeable studies would be needed to reduce this risk.

The lack of blinding resulting from the nature of the intervention adds further risk of bias. However, the analyses were blinded, and all reporting of results was performed in accordance with the level of randomization (i.e., department level), which avoided bias related to the use of clusters.

Both organizational and survey data were collected via self-assessment and self-reporting, which may introduce bias. However, these issues would presumably be similar in both groups, as this type of bias is ubiquitous in self-reported data, and there is no apparent reason to expect it to be more present in one group than in the other. Additionally, it was a strength that the core tools used in the study had been validated in advance [37, 39, 40, 52, 53] and that the nested qualitative study indicated that staff generally found the project and its data collection doable [51].

Another risk of bias arose from the fact that most countries/regions participated with only a few hospitals each. This produced a skewed geographical distribution in both groups. However, the country stratification mitigated this risk, which was a strength.

The real-life conditions of the study were also a strength, but only HPH member hospitals were included, which potentially limits the generalizability of our findings to non-HPH hospital settings. However, the international participation did broaden the representativeness of the results, which is a strength in terms of generalization.

It was a limitation that usual implementation routines naturally varied between participating departments, but it was a strength that site visits were performed and that data from medical records were externally validated.

Finally, it was a strength that the control group also showed improved service delivery and compliance after its own 1-year implementation, once it was offered the intervention after the study had ended. In total, 17 of the 18 control departments accepted the intervention offered afterwards, and their within-group results resembled those of the intervention group: standards compliance went up from 80 to 98%, documentation of risk from 60 to 85%, information/shorter intervention from 39 to 79%, and intensive intervention from 25 to 46%. As in the main study data, no difference could be seen concerning health status.

Perspectives

Faster implementation of evidence has major implications in healthcare. In terms of the evidence for CHP, clinical departments and the healthcare system could potentially benefit from the operational program because it accelerates CHP implementation. In research, further investigation of the operational program in non-HPH settings should be undertaken. Also, the Fast-IM itself might turn out to accelerate evidence-based practices in areas other than CHP, such as new clinical procedures, organizational improvements, or technological initiatives and solutions.

A possible reason for the scarcity of RCTs in implementation research, as opposed to intervention research, may be cultural and rooted in research traditions, but it could also be that such trials are difficult to conduct and recruit for. Even so, interest does appear to be growing, and the need for solid experimental research and adequate reporting thereof is high [86]. We hope that such future, sizeable trials will be able to draw on the learnings related to our operational program.

Conclusion

Compared to usual implementation routines, the operational program improved implementation of CHP by better identification of lifestyle risks, more frequent delivery of CHP services, and higher compliance with standards. No differences in health status of patients or staff were found at the level of clinical departments, and the study was limited by low inclusion of departments and by having the department as the unit of analysis as opposed to individual patients and staff. These issues should be considered carefully in strategic implementation efforts and in designing new randomized studies.

Abbreviations

CHP: Clinical health promotion

Fast-IM: The fast-track implementation model for clinical health promotion

HPH: International Network of Health Promoting Hospitals and Health Services

MCS: Mental component score (of SF36v2)

PCS: Physical component score (of SF36v2)

PDCA-cycle: Plan-Do-Check-Act Cycle for Quality Improvement

SF36v2: Short-Form 36 health questionnaire (version 2)

WHO: World Health Organization

WHO-CC Denmark: WHO Collaborating Centre for Evidence-based Health Promotion in Hospitals and Health Services

WHO-CC Sweden: WHO Collaborating Centre for Implementation of Evidence-based Clinical Health Promotion

Declarations

Acknowledgements

The World Health Organization, Regional Office for Europe and the International Network of Health Promoting Hospitals & Health Services (HPH) are acknowledged for their technical support. The National/Regional HPH Networks and their member hospitals in the Czech Republic, Estonia, Japan, Slovenia, and Taiwan, as well as the member hospitals in Croatia, Denmark, and Malaysia, are acknowledged for their support and collection of data for the project. Bolette Pedersen, the National Board of Social Services of Denmark, is acknowledged for performing external statistical analyses, and Mette Rasmussen, the WHO-CC Denmark, is acknowledged for the interpretation of results.

Funding

Participating departments funded their own participation and local core staff. Ministries of health (in Czech Republic and Taiwan) as well as the WHO Regional Office for Europe (Czech Republic) supported project participation. WHO-CCs in Copenhagen (Denmark) and in Skåne (Sweden) funded the international research team. The external funders played no role in the study design, data collection, data analysis, data interpretation, or writing of the research paper. The international research team had final responsibility for the decision to submit for publication. The authors affirm that the manuscript is an honest, accurate, and transparent account of the study, that no important aspects of the study were omitted, and that all discrepancies from the study as planned and registered are explained.

Availability of data and materials

The data analyzed are not publicly available due to legal agreements with the participating departments but are available from the corresponding author on reasonable request.

Authors’ contributions

JKS, project leader and PhD student, drafted the manuscript. HT, principal investigator and supervisor for PhD study, devised the initial hypothesis and protocol. OG was the project advisor. STC was the co-supervisor for the PhD study. STC, MK, MZB, IF, TH, JF, YA, and MØA were responsible for the data collection. All authors revised and approved the final manuscript.

Ethics approval and consent to participate

All data were anonymized at the source, and no person-identifiable information was recorded or transferred. The study included no biological material. The anonymized data were stored and secured by Capital Region Denmark CIMT. Paper records were maintained under double lock. Only the international research team had access to all data. The study was approved by the Internal Review Board of Bispebjerg-Frederiksberg Hospital, University of Copenhagen, Denmark, and by the Danish Data Protection Agency (j.nr. 2012-41-0152 / 2017-41-5029). All participating departments had local approval from their internal review body, department head, and hospital management. Participation in the study was not associated with any risks. The time required to complete forms and surveys was considered a minor inconvenience.

Consent for publication

All participating clinical departments that delivered data obtained local approval from their internal review body, approval from the head of the department, and from the hospital management. Consent from individual patients or staff was not required as no person-identifiable data was captured at any point.

Competing interests

All authors completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare that they have no competing interests for the submitted work.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

(1)
Clinical Health Promotion Centre, WHO-CC, Bispebjerg and Frederiksberg Hospital, Copenhagen University Hospitals, Nordre Fasanvej 57, Build. 14, Entr. 5, 2nd fl, 2000 Frederiksberg, Denmark
(2)
School of Medicine, National Yang-Ming University, Taipei, Taiwan
(3)
Cheng Hsin General Hospital, Taipei, Taiwan
(4)
OptiMedis AG, Hamburg, Germany
(5)
Department of Health Services Research and Policy, London School of Hygiene & Tropical Medicine, London, UK
(6)
Health Services Quality Department, Ministry of Health, Prague, Czech Republic
(7)
General hospital “Dr. Tomislav Bardek”, Koprivnica, Županija Koprivničko-križevačka, Croatia
(8)
Saitama Cooperative Hospital, Kawaguchi, Saitama, Japan
(9)
National Institute for Health Development, Tallinn, Estonia
(10)
National Institute of Public Health, Ljubljana, Slovenia
(11)
Penang Adventist Hospital, Penang, Malaysia
(12)
Sector for Spine Surgery and Research, Lillebaelt Hospital, Middelfart, Denmark
(13)
Clinical Health Promotion Centre, WHO-CC, Health Sciences, Lund University, Lund, Sweden

References

  1. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.
  2. Balas EA, Boren SA. Yearbook of medical informatics: managing clinical knowledge for health care improvement. Stuttgart: Schattauer Verlagsgesellschaft mbH; 2000.
  3. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
  4. Glasgow RE, Klesges LM, Dzewaltowski DA, Bull SS, Estabrooks P. The future of health behavior change research: what is needed to improve translation of research into health promotion practice? Ann Behav Med. 2004;27(1):3–12.
  5. World Health Organization. Ottawa charter for health promotion. Health Promot Int. 1986;1(4):405.
  6. World Health Organization (WHO). Health promotion glossary. www.who.int/healthpromotion/about/HPR%20Glossary%201998.pdf [cited 2018 April 24].
  7. Smith BJ, Tang KC, Nutbeam D. WHO health promotion glossary: new terms. Oxford: Oxford University Press; 2006. www.who.int/healthpromotion/about/HP%20Glossay%20in%20HPI.pdf.
  8. Tønnesen H. Clinical health promotion - what does it mean? Clin Health Promot. 2011;1(2):39–40.
  9. The working group concerning prevention, health promotion and population health under the Danish National Terminology Council for Healthcare [Arbejdsgruppen vedrørende forebyggelse, sundhedsfremme og folkesundhed under Det Nationale Begrebsråd for Sundhedsvæsenet]. Terminology: prevention, health promotion and population health [Terminologi: Forebyggelse, sundhedsfremme og folkesundhed]. Danish National Board of Health; 2005.
  10. Golechha M. Health promotion methods for smoking prevention and cessation: a comprehensive review of effectiveness and the way forward. Int J Prev Med. 2016;7:7.
  11. Pell JP, Haw S, Cobbe S, Newby DE, Pell AC, Fischbacher C, et al. Smoke-free legislation and hospitalizations for acute coronary syndrome. N Engl J Med. 2008;359(5):482–91.
  12. Mackay DF, Irfan MO, Haw S, Pell JP. Meta-analysis of the effect of comprehensive smoke-free legislation on acute coronary events. Heart. 2010;96(19):1525–30.
  13. National Board of Health and Social Affairs [Socialstyrelsen]. Swedish national guidelines for prevention and treatment of unhealthy lifestyle habits [Nationella riktlinjer för prevention och behandling vid ohälsosamma levnadsvanor]; 2017.
  14. World Health Organization (WHO). Health impact assessment (HIA). The determinants of health. www.who.int/hia/evidence/doh/en [cited 2018 April 24].
  15. Nutbeam D. The evolving concept of health literacy. Soc Sci Med. 2008;67(12):2072–8.
  16. Thomsen T, Villebro N, Møller AM. Interventions for preoperative smoking cessation. Cochrane Database Syst Rev. 2014;(3):CD002294.
  17. Egholm JWM, Pedersen B, Adami J, Møller A, Tønnesen H. Perioperative alcohol cessation intervention for postoperative complications. Cochrane Database Syst Rev. 2018;(11):CD008343.
  18. Beier-Holgersen R, Boesby S. Influence of postoperative enteral nutrition on postsurgical infections. Gut. 1996;39(6):833–5.
  19. Nielsen PR, Andreasen J, Asmussen M, Tønnesen H. Costs and quality of life for prehabilitation and early rehabilitation after surgery of the lumbar spine. BMC Health Serv Res. 2008;8(1):209.
  20. Cnattingius S. The epidemiology of smoking during pregnancy: smoking prevalence, maternal characteristics and pregnancy outcomes. Nicotine Tob Res. 2004;6(Suppl 2):S125–40.
  21. Rasmussen M, Heitmann BL, Tønnesen H. Effectiveness of the gold standard programmes (GSP) for smoking cessation in pregnant and non-pregnant women. Int J Environ Res Public Health. 2013;10(8):3653–66.
  22. Ota E, Hori H, Mori R, Tobe-Gai R, Farrar D. Antenatal dietary education and supplementation to increase energy and protein intake. Cochrane Database Syst Rev. 2015;(6):CD000032.
  23. Heiwe S, Jacobson SH. Exercise training for adults with chronic kidney disease. Cochrane Database Syst Rev. 2011;(10):CD003236.
  24. Anderson L, Taylor RS. Cardiac rehabilitation for people with heart disease: an overview of Cochrane systematic reviews. Cochrane Database Syst Rev. 2014;(12):CD011273.
  25. Thomas DE, Elliott EJ, Naughton GA. Exercise for type 2 diabetes mellitus. Cochrane Database Syst Rev. 2006;3:CD002968.
  26. McKeough ZJ, Velloso M, Lima VP, Alison JA. Upper limb exercise training for COPD. Cochrane Database Syst Rev. 2016;11:CD011434.
  27. Oellgaard J, Gaede P, Rossing P, Rorth R, Kober L, Parving HH, et al. Reduced risk of heart failure with intensified multifactorial intervention in individuals with type 2 diabetes and microalbuminuria: 21 years of follow-up in the randomised Steno-2 study. Diabetologia. 2018;61(8):1724–33.
  28. Taylor G, McNeill A, Girling A, Farley A, Lindson-Hawley N, Aveyard P. Change in mental health after smoking cessation: systematic review and meta-analysis. BMJ. 2014;348:g1151.
  29. Pedersen B, Hansen PE, Tønnesen H. Scand-ankle: cost-effectiveness of alcohol cessation intervention in acute fracture surgery. Clin Health Promot. 2014;2 suppl:1–68.
  30. Møller AM, Villebro NM. Preoperative smoking intervention: what do patients think? A qualitative study. Ugeskr Laeger. 2004;166:3714–8.
  31. Lindström D, Sundberg-Petersson I, Adami J, Tønnesen H. Disappointment and drop-out rate after being allocated to control group in a smoking cessation trial. Contemp Clin Trials. 2010;31(1):22–6.
  32. Lauridsen SV, Thomsen T, Kaldan G, Lydom LN, Tønnesen H. Smoking and alcohol cessation intervention in relation to radical cystectomy: a qualitative study of heavy smokers and risky drinkers’ experiences. BMC Cancer. 2017;17:793.
  33. Nolan MB, Warner DO. Perioperative tobacco use treatments: putting them into practice. BMJ. 2017;358:j3340.
  34. WHO. Implementing health promotion in hospitals: manual and self-assessment forms. WHO Regional Office for Europe; 2010.
  35. International Network of Health Promoting Hospitals & Health Services (HPH). www.hphnet.org [cited 2018 April 24].
  36. Groene O, Jorgensen SJ, Fugleholm AM, Møller L, Garcia-Barbero M. Standards for health promotion in hospitals: development and pilot test in nine European countries. Int J Health Care Qual Assur. 2005;18(4):300–7.
  37. Groene O, Alonso J, Klazinga N. Development and validation of the WHO self-assessment tool for health promotion in hospitals: results of a study in 38 hospitals in eight countries. Health Promot Int. 2010;25(2):221–9.
  38. ISQua. Guidelines and principles for the development of health and social care standards. www.isqua.org; 2015.
  39. Tønnesen H, Svane JK, Lenzi L, et al. Handling clinical health promotion in the HPH DATA model: basic documentation of health determinants in medical records of tobacco, malnutrition, overweight, physical inactivity & alcohol. Clin Health Promot. 2012;2:5–11.
  40. Tønnesen H, Christensen ME, Groene O, O'Riordan A, Simonelli F, Suurorg L, et al. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study. BMC Health Serv Res. 2007;7:145.
  41. Tønnesen H, Svane JK, Groene O, Chiou ST. The WHO-HPH recognition project: fast-track implementation of clinical health promotion - a protocol for a multi-center RCT. Clin Health Promot. 2016;6(1):13–20.
  42. Birken SA, Powell BJ, Presseau J, Kirk MA, Lorencatto F, Gould NJ, et al. Combined use of the Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF): a systematic review. Implement Sci. 2017;12(1):2.
  43. Bauer MS, Damschroder L, Hagedorn H, Smith J, Kilbourne AM. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(1):32.
  44. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–7.
  45. Glasgow RE, Klesges LM, Dzewaltowski DA, Estabrooks PA, Vogt TM. Evaluating the impact of health promotion programs: using the RE-AIM framework to form summary measures for decision making involving complex issues. Health Educ Res. 2006;21(5):688–94.
  46. Gaglio B, Shoup JA, Glasgow RE. The RE-AIM framework: a systematic review of use over time. Am J Public Health. 2013;103(6):e38–46.
  47. Pantoja T, Opiyo N, Lewin S, Paulsen E, Ciapponi A, Wiysonge CS, et al. Implementation strategies for health systems in low-income countries: an overview of systematic reviews. Cochrane Database Syst Rev. 2017;(9):CD011086.
  48. Rubenstein LV, Pugh J. Strategies for promoting organizational and practice change by advancing implementation research. J Gen Intern Med. 2006;21(S2):S58–64.
  49. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):53.
  50. Moen R, Norman C. Evolution of the PDCA cycle. Tokyo: Asian Network for Quality Conference; 2009.
  51. Svane JK, Egerod I, Tønnesen H. Staff experiences with strategic implementation of clinical health promotion: a nested qualitative study in the WHO-HPH Recognition Process RCT. SAGE Open Med. 2018;6:1–12.
  52. McHorney CA, Ware JE, Raczek AE. The MOS 36-item short-form health survey (SF-36). Med Care. 1993;31(3):247–63.
  53. Jenkinson C, Stewart-Brown S, Petersen S, Paice C. Assessment of the SF-36 version 2 in the United Kingdom. J Epidemiol Community Health. 1999;53(1):46–50.
  54. Clinical Practice Guideline Treating Tobacco Use and Dependence 2008 Update Panel. A clinical practice guideline for treating tobacco use and dependence: 2008 update. A U.S. Public Health Service report. Am J Prev Med. 2008;35(2):158–76.
  55. Rasmussen M. Efficacy of intensive face-to-face interventions for smoking cessation (prospective registry of a systematic review). PROSPERO. 2018.
  56. Svane JK, Raisova B, Stanecka Z, Dolezel Z, Richter M, Cahlikova J, et al. Clinical health promotion in the Czech Republic: standards compliance and service provision. Clin Health Promot. 2014;4(1):15–21.
  57. Svane JK, Chiou ST, Chang YL, Shen SH, Huang CH, Pan CY, et al. Integration of health promotion in clinical hospital departments: standards fulfilment, documentation of needs and service delivery. Clin Health Promot. 2015;5(1):11–7.
  58. Ferketich AK, Khan Y, Wewers ME. Are physicians asking about tobacco use and assisting with cessation? Results from the 2001-2004 national ambulatory medical care survey (NAMCS). Prev Med. 2006;43(6):472–6.
  59. Jamal A, Dube SR, King BA. Tobacco use screening and counseling during hospital outpatient visits among US adults, 2005-2010. Prev Chronic Dis. 2015;12:E132.
  60. Kemppainen V, Tossavainen K, Turunen H. Nurses’ roles in health promotion practice: an integrative review. Health Promot Int. 2013;28(4):490–501.
  61. Burke LE, Fair J. Promoting prevention: skill sets and attributes of health care providers who deliver behavioral interventions. J Cardiovasc Nurs. 2003;18(4):256–66.
  62. Joint Commission International (JCI). Screening and treating tobacco and alcohol use; 2010. https://manual.jointcommission.org/releases/archive/TJC2010B/ScreeningandTreatingTobaccoandAlcoholUse.html
  63. Fiore MC, Goplerud E, Schroeder SA. The joint commission’s new tobacco-cessation measures — will hospitals do the right thing? N Engl J Med. 2012;366:1172–4.
  64. Makdissi R, Stewart SH. Care for hospitalized patients with unhealthy alcohol use: a narrative review. Addict Sci Clin Pract. 2013;8:11.
  65. American College of Surgeons. Business case for the National Surgical Quality Improvement Program. https://www.facs.org/quality-programs/acs-nsqip/joinnow/businesscase
  66. Greenfield D, Pawsey M, Hinchcliff R, Moldovan M, Braithwaite J. The standard of healthcare accreditation standards: a review of empirical research underpinning their development and impact. BMC Health Serv Res. 2012;12(1):329.
  67. Flodgren G, Gonçalves-Bradley DC, Pomey M-P. External inspection of compliance with standards for improved healthcare outcomes. Cochrane Database Syst Rev. 2016;(12):CD008992.
  68. Nicklin W, Dickson S. The value and impact of health care accreditation: a literature review. Accreditation Canada; 2011. https://aventa.org/pdfs/valueimpactaccreditation.pdf
  69. Brubakk K, Vist GE, Bukholm G, Barach P, Tjomsland O. A systematic review of hospital accreditation: the challenges of measuring complex intervention effects. BMC Health Serv Res. 2015;15(1):280.
  70. Greenfield D, Braithwaite J. Health sector accreditation research: a systematic review. Int J Qual Health Care. 2008;20(3):172–83.
  71. Hinchcliff R, Greenfield D, Moldovan M, Westbrook JI, Pawsey M, Mumford V, et al. Narrative synthesis of health service accreditation literature. BMJ Qual Saf. 2012;21(12):979–91.
  72. Vist GE, Nøstberg AM, Brubakk K, Munkeby BH. Effects of certification and accreditation of hospitals. Oslo: Norwegian Knowledge Centre for the Health Services [Nasjonalt Kunnskapssenter for helsetjenester]; 2009.
  73. Scales DC. A multifaceted intervention for quality improvement in a network of intensive care units. JAMA. 2011;305(4):363.
  74. Cavalcanti AB, Bozza FA, Machado FR, Salluh JIF, Campagnucci VP, Vendramim P, et al. Effect of a quality improvement intervention with daily round checklists, goal setting, and clinician prompting on mortality of critically ill patients. JAMA. 2016;315(14):1480.
  75. Du X, Gao R, Turnbull F, Wu Y, Rong Y, Lo S, et al. Hospital quality improvement initiative for patients with acute coronary syndromes in China: a cluster randomized, controlled trial. Circ Cardiovasc Qual Outcomes. 2014;7(2):217–26.
  76. Salmon JW, Heavens J, Lombard C, Tavrow P. The impact of accreditation on the quality of hospital care: KwaZulu-Natal Province, Republic of South Africa. Operations Research Results. US Agency for International Development (USAID), Quality Assurance Project, University Research Co. LLC, Bethesda. 2003;2(17):1–49.
  77. Falstie-Jensen AM, Larsson H, Hollnagel E, Norgaard M, Svendsen MLO, Johnsen SP. Compliance with hospital accreditation and patient mortality: a Danish nationwide population-based study. Int J Qual Health Care. 2015;27(3):165–74.
  78. OPM evaluation team. Evaluation of the healthcare commission’s healthcare associated infections inspection programme. OPM Report. 2009:1–23.
  79. Rigotti NA, Tindle HA, Regan S, Levy DE, Chang Y, Carpenter KM, et al. A post-discharge smoking-cessation intervention for hospital patients: helping hand 2 randomized clinical trial. Am J Prev Med. 2016;51(4):597–608.
  80. US Department of Labor, Occupational Safety and Health Administration (OSHA). Worker safety in hospitals; 2017. https://www.osha.gov/dsg/hospitals/understanding_problem.html
  81. Sejr HS, Osler M. Do smoking and health education influence student nurses’ knowledge, attitudes, and professional behavior? Prev Med. 2002;34(2):260–5.
  82. Zabeen S, Tsourtos G, Campion J, Lawn S. Type of unit and population served matters when implementing a smoke-free policy in mental health settings: perceptions of unit managers across England. Int J Soc Psychiatry. 2015;61(7):700–10.
  83. Willaing I, Iversen L, Jørgensen T. What are the effects of healthcare staff’s individual smoking habits on knowledge, attitudes and counselling practices related to smoking? [Hvad betyder sygehuspersonalets individuelle rygevaner for viden, holdning og rådgivningspraksis relateret til rygning?]. Ugeskr Laeger. 2001;163(32):4180–1.
  84. Sarna L, Bialous SA, Wells M, Kotlerman J, Wewers ME, Froelicher ES. Frequency of nurses’ smoking cessation interventions: report from a national survey. J Clin Nurs. 2009;18(14):2066–77.
  85. Roelen C, Rhenen W, Schaufeli W, Klink J, Magerøy N, Moen B, et al. Mental and physical health-related functioning mediates between psychological job demands and sickness absence among nurses. 2013.
  86. Wilson PM, Sales A, Wensing M, Aarons GA, Flottorp S, Glidewell L, et al. Enhancing the reporting of implementation research. Implement Sci. 2017;12(1):13.

Copyright

© The Author(s). 2018
