Open Access

Testing health information technology tools to facilitate health insurance support: a protocol for an effectiveness-implementation hybrid randomized trial

  • Jennifer E. DeVoe1, 2,
  • Nathalie Huguet1 (corresponding author),
  • Sonja Likumahuwa-Ackman1,
  • Heather Angier1,
  • Christine Nelson2,
  • Miguel Marino1,
  • Deborah Cohen1,
  • Aleksandra Sumic2,
  • Megan Hoopes2,
  • Rose L. Harding1,
  • Marla Dearing2 and
  • Rachel Gold2, 3
Implementation Science 2015, 10:123

https://doi.org/10.1186/s13012-015-0311-4

Received: 30 July 2015

Accepted: 11 August 2015

Published: 25 August 2015

Abstract

Background

Patients with gaps in health insurance coverage often defer or forgo cancer prevention services. These delays in cancer detection and diagnoses lead to higher rates of morbidity and mortality and increased costs. Recent advances in health information technology (HIT) create new opportunities to enhance insurance support services that reduce coverage gaps through automated processes applied in healthcare settings. This study will assess the implementation of insurance support HIT tools and their effectiveness at improving patients’ insurance coverage continuity and cancer screening rates.

Methods/design

This study uses a hybrid cluster-randomized design—a combined effectiveness and implementation trial—in community health centers (CHCs) in the USA. Eligible CHC clinic sites will be randomly assigned to one of two groups in the trial’s implementation component: tools + basic training (Arm I) and tools + enhanced training + facilitation (Arm II). A propensity score-matched control group of clinics will be selected to assess the tools’ effectiveness. Quantitative analyses of the tools’ impact will use electronic health record and Medicaid data to assess effectiveness. Qualitative data will be collected to evaluate the implementation process, understand how the HIT tools are being used, and identify facilitators and barriers to their implementation and use.

Discussion

This study will test the effectiveness of HIT tools to enhance insurance support in CHCs and will compare strategies for facilitating their implementation in “real-world” practice settings. Findings will inform further development and, if indicated, more widespread implementation of insurance support HIT tools.

Trial registration

ClinicalTrials.gov NCT02355262

Keywords

Cancer screening; Health insurance; Medicaid; Health information technology; Primary care; Hybrid design

Background

Cancer morbidity and mortality can be greatly reduced through screening and prevention; however, not all patients have access to regular cancer prevention services. In the USA, uninsured populations are much less likely to receive these evidence-based services as recommended, compared to those with insurance coverage [1–8]. Further, when health insurance coverage gaps occur, patients often delay or forgo cancer prevention services [9–13]. These delays in cancer detection and diagnoses lead to higher rates of disease incidence and mortality and increased healthcare costs [2, 14–22]. Patients who regain health insurance coverage are often able to catch up on missed prevention services [9–13]. Thus, interventions that optimize and stabilize health insurance coverage could substantially improve rates of receipt of timely cancer preventive care.

Community health centers (CHCs) are well-positioned to provide such health insurance support because many of their patients are uninsured or experience frequent coverage gaps [23, 24]. Recent advances in health information technology (HIT) create new opportunities for enhancing insurance support services in CHCs and other healthcare settings through automated processes. Since the passage of the Affordable Care Act (ACA) in 2010, there are new insurance programs (i.e., Medicaid expansions) available for socioeconomically vulnerable patients [25]. This confluence of factors presents a window of opportunity to develop, test, and implement health insurance support technologies.

Health insurance support HIT tools

Most healthcare institutions collect information about patients’ insurance coverage. These data are documented in the electronic health record (EHR) but then are often used exclusively for billing purposes. While many CHCs already engage in health insurance support, few have an automated way to use patients’ health insurance data to identify, track, or communicate with patients about their health insurance status. We built insurance support HIT tools that incorporated information already in the EHR, augmented by the collection of a few additional data elements. These insurance support HIT tools were modeled on those proven effective for chronic disease management, such as a computer reminder system for colorectal cancer screening [26–28]. The tools include a panel management/data aggregator system, which identifies patients who may be eligible for Medicaid coverage but are not yet insured or who may be nearing coverage expiration.

In recent pilot studies, we showed that these insurance support HIT tools can be integrated into clinic workflows in several ways [29–31]. The tools can provide “pop-up” alerts that appear at check-in or while scheduling an appointment. The tools also create registry lists of patients who are uninsured or may be nearing insurance expiration dates. These registries provide clinic staff with information about patients to contact and offer insurance enrollment support. Registries also can be used to support automatic messages (e.g., e-mail, voicemail, texts, after-visit communications) to remind patients about upcoming insurance re-enrollment dates and provide them with resources.
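The registry logic described above can be illustrated with a minimal sketch. The field names (`coverage_end`, `id`) and the 60-day outreach window are hypothetical, not OCHIN’s actual schema or thresholds:

```python
from datetime import date

def flag_for_outreach(patients, today, window_days=60):
    """Return patient IDs to add to the insurance outreach registry:
    those with no coverage, or whose Medicaid coverage ends within
    `window_days`. Field names are illustrative, not OCHIN's schema."""
    flagged = []
    for p in patients:
        end = p.get("coverage_end")
        if end is None:
            # currently uninsured
            flagged.append(p["id"])
        elif (end - today).days <= window_days:
            # coverage nearing expiration
            flagged.append(p["id"])
    return flagged

patients = [
    {"id": "A", "coverage_end": None},               # uninsured
    {"id": "B", "coverage_end": date(2015, 10, 1)},  # expires within window
    {"id": "C", "coverage_end": date(2016, 9, 1)},   # covered well past window
]
registry = flag_for_outreach(patients, today=date(2015, 9, 1))
# → ["A", "B"]
```

A list like `registry` is what would feed the staff contact lists and automated reminder messages described above.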

Effectiveness-implementation hybrid study design

To further study the uptake and use of these health insurance support technologies, we will blend design components of clinical effectiveness and implementation research. Using an “effectiveness-implementation hybrid design,” as described by Curran and collaborators [32], this intervention study will simultaneously assess (i) the effectiveness of the health insurance support HIT tools and (ii) the best strategies for implementing the tools in CHCs. We hypothesize that patients seen at CHCs that receive these insurance support HIT tools will have higher rates of continuous insurance coverage and will be more up-to-date on age- and gender-appropriate cancer screening and prevention services, as compared to those seen at CHCs without the tools. We further hypothesize that CHCs that receive enhanced training and facilitation to support tool use will have higher rates of uptake of the tools and better insurance and cancer prevention rates than CHCs that receive the tools without such enhanced training or those without the tools.

Specifically, we will assess the effectiveness and implementation of a suite of health insurance support HIT tools designed to (1) identify and assist in contacting uninsured CHC patients who are eligible for enrollment in Medicaid and (2) encourage re-enrollment of Medicaid-insured patients before coverage gaps occur. The current study builds on our preliminary work by refining and studying the effectiveness of EHR-based health insurance support HIT tools across a larger number of clinics and studying effective methods for implementation of such tools in an adult patient population [30, 33–35].

Study objectives

  1. Assess the effect of the health insurance support HIT tools on patients’ health insurance coverage rates.

  2. Assess the effect of the health insurance support HIT tools on up-to-date status of cancer screening and preventive care received by patients.

  3. Compare two levels of implementation support, evaluating patient and CHC staff acceptance and use of the health insurance support HIT tools, and patient-, provider-, and system-level factors associated with successful implementation of the tools.

Methods/design

Trial design

This is a two-arm, cluster-randomized trial with an external matched control group of clinics, which utilizes a hybrid design: an effectiveness and implementation trial [32, 36–38]. To test strategies for implementing the tools, intervention clinics will be randomized to one of the two implementation support arms (see Fig. 1). Thus, all intervention clinics will receive the health insurance support HIT tools but will be randomly assigned to Arm I (tools + basic training) or Arm II (tools + enhanced training + facilitation), as depicted in Fig. 1. The implementation component of the trial will evaluate the relative impact of the two implementation support strategies. For the effectiveness component of the trial, we will compare the intervention group (all participating clinics) to a propensity score-matched [39–41] group of control clinics to assess the extent to which use of the insurance support HIT tools facilitates continuous health insurance coverage and improved cancer prevention care.
Fig. 1

Effectiveness-implementation hybrid design

Study setting and population

The study will be carried out at OCHIN, a non-profit 501(c)(3) collaborative that was created in 2001 to develop HIT tools for CHCs. Originally called the Oregon Community Health Information Network, OCHIN has 81 health system members in 17 states with 376 clinics and 6322 providers caring for nearly 2,000,000 patients.

OCHIN provides and maintains a comprehensive electronic health information infrastructure, built on software from Epic© Systems. The OCHIN EHR is linked across all member clinics through an Organized Health Care Arrangement, which gives clinics a fully integrated EHR in which each patient has a single medical record. There is one enterprise-wide master patient index; health record data “follow the patient” to any OCHIN clinic.

The Institutional Review Board of the Oregon Health & Science University has reviewed and approved this study. The study is registered with ClinicalTrials.gov (#NCT02355262).

Inclusion criteria for intervention and control sites. Study sites were required to be primary care clinics that implemented the OCHIN EHR prior to January 1, 2013, have >1000 adult patients with ≥1 visit in the past year, and be located in a state that expanded Medicaid in 2014 [42].

Inclusion criteria for patients. All patients (ages 18–64) with at least one encounter at a CHC in OCHIN since January 1, 2013 (n ≈ 1,000,000 adults), will be included in the study dataset for quantitative analysis. As very different insurance products are available to children <18 and patients >64 years of age, these age groups are excluded. All study sites serve low-income, ethnically diverse populations with lower rates of cancer screening (see Table 1) and higher rates of publicly insured and uninsured patients, compared to national rates [43].
Table 1

Insurance coverage and cancer screening rates in the US population and OCHIN

                                            US population, 2010 (%)    OCHIN, 2014 (%)
Insurance coverage
  Private                                   64.2                       17.6
  Public                                    34.3                       48.4
  None                                      13.4                       34.0
Cancer screening among insured patients
  Cervical                                  83.0                       36.5
  Colorectal                                58.6                       42.8
  Breast                                    72.4                       60.3
Cancer screening among uninsured patients
  Cervical                                  63.8                       27.0
  Colorectal                                20.7                       19.9
  Breast                                    38.2                       39.3

Source for US population: Census [71], CDC [72]; source for OCHIN population: OCHIN EHR data

Overview of implementation component of trial

Intervention sites

An OCHIN team member identified 99 clinics in 32 eligible health center organizations within OCHIN. Of those invited, 23 clinics in 7 health center organizations agreed to participate (23 % of eligible clinics) and 76 clinics in 25 organizations declined (77 % of eligible clinics). The 7 participating organizations are located in 3 Medicaid expansion states (Oregon, California, and Ohio). These organizations varied in size; the number of primary care clinic sites per organization ranged from 1 to 6. Due to the possible correlation among clinic sites in the same organization, an independent biostatistician randomized the 23 clinics by service organization. A covariate-constrained cluster randomization [44] was used to balance the number of clinics between the two treatment arms. The randomization was balanced using the following covariates: state the organization was located in (Oregon vs. non-Oregon), total number of patients per organization, and percentage of uninsured patients per organization. The biostatistician and research team members were blinded to clinic identities during randomization. Once randomized, the study team was given the list of clinics participating in each study arm (11 in Arm I and 12 in Arm II).
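As a rough illustration of covariate-constrained randomization, the sketch below enumerates candidate two-arm splits, scores each for covariate imbalance, keeps the best-balanced fraction, and samples one at random. The organizations, covariate values, and acceptance fraction are invented for the example; the study’s actual randomization was performed by an independent biostatistician on the real organization-level data:

```python
import itertools
import random

def constrained_randomize(units, covariates, accept_fraction=0.2, seed=1):
    """Covariate-constrained randomization sketch: enumerate all half/half
    splits of the units, score each by summed absolute covariate imbalance,
    keep the best-balanced fraction, and draw one split at random."""
    ids = sorted(units)
    half = len(ids) // 2
    splits = []
    for arm1 in itertools.combinations(ids, half):
        arm2 = [u for u in ids if u not in arm1]
        imbalance = sum(
            abs(sum(units[u][c] for u in arm1) - sum(units[u][c] for u in arm2))
            for c in covariates
        )
        splits.append((imbalance, set(arm1), set(arm2)))
    splits.sort(key=lambda s: s[0])  # best-balanced splits first
    keep = splits[: max(1, int(len(splits) * accept_fraction))]
    random.seed(seed)
    _, arm1, arm2 = random.choice(keep)
    return arm1, arm2

# Four hypothetical organizations with the covariates named in the text
orgs = {
    "org1": {"oregon": 1, "patients": 30, "pct_uninsured": 40},
    "org2": {"oregon": 1, "patients": 25, "pct_uninsured": 35},
    "org3": {"oregon": 0, "patients": 28, "pct_uninsured": 38},
    "org4": {"oregon": 0, "patients": 27, "pct_uninsured": 37},
}
arm_i, arm_ii = constrained_randomize(orgs, ["oregon", "patients", "pct_uninsured"])
```

Restricting the random draw to well-balanced splits is what distinguishes this approach from simple randomization, which can leave small numbers of clusters badly imbalanced on key covariates.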

Pre-implementation phase

We will first assess baseline rates of insurance coverage, cancer screening, and general recommended preventive care across all study arms and the matched controls. We will also assess rates of these outcomes across the total OCHIN population to compare to the recruited clinics. Additional analyses including patients from all CHCs in OCHIN will be conducted to assess the impact of recent policy changes on insurance coverage and cancer preventive care. Data for these analyses will be extracted from the OCHIN EHR and will include demographic information (i.e., age, federal poverty level, race/ethnicity, language spoken), insurance coverage dates and types, as well as cancer screenings (e.g., fecal occult blood test (FOBT) orders, colonoscopy and mammography referrals, and pap smear receipt), immunizations (e.g., human papillomavirus (HPV) vaccination), cancer-related health behavior assessment and counseling (e.g., smoking screening and intervention and obesity screening and intervention), and receipt of general recommended preventive care (e.g., blood pressure, glucose, and lipid assessments). This information will be extracted using standardized codes for diagnoses, labs, medications, procedures, and referrals, and electronically collected vital signs and social history (e.g., smoking status) from visits. As seen in Table 2, we will define receipt of cancer screenings and prevention based on services with an A–B rating from the US Preventive Services Task Force and recommendation from the Advisory Committee on Immunization Practices [45, 46]. Our EHR dataset will also contain additional information about the use of health services such as encounter information, provider data, clinic location, reason for visit, level of service, and type and number of visits.
Table 2

Description of outcome measures

Dependent variable                           Description/measure
Insurance coverage                           Covered at visit (yes, no); all visits covered vs. not covered; percent of visits covered
Insurance continuity                         Months covered by Medicaid; percentage of time covered by Medicaid
Recommended preventive care services (NQF)   Breast cancer screeninga (0031), NCQA; currently used by MU1, HEDIS®, and MACS
                                             Cervical cancer screeninga (0032), NCQA; currently used by MU1, HEDIS®, and MACS
                                             Colorectal cancer screeninga (0034); MU1, HEDIS®
                                             Tobacco use screen and medical assistance with tobacco cessationa (0027), NCQA; currently used by MU1, HEDIS®, and MACS
                                             Obesity screening and counselinga

NQF National Quality Forum (www.qualityforum.org), MACS Medicaid Adult Core Set (www.medicaid.gov/Medicaid-CHIP-Program-Information/By-Topics/Quality-of-Care/Downloads/Medicaid-Adult-Core-Set-Manual.pdf), NCQA National Committee for Quality Assurance (www.ncqa.org), MU1 Meaningful Use Stage 1 of the Medicare & Medicaid Electronic Health Record Incentive Program (www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/CQM_Through_2013.html), HEDIS® Health Employer Data and Information Set (www.ncqa.org/HEDISQualityMeasurement.aspx), USPSTF www.uspreventiveservicestaskforce.org/Page/Name/uspstf-a-and-b-recommendations/

aUSPSTF A or B recommendation

Additionally, we will conduct an informational phone call with relevant clinic staff (e.g., clinic manager, insurance eligibility specialists) at all intervention sites to learn about the clinic workflows for providing insurance assistance and any other factors that may affect how a clinic implements and uses the insurance support HIT tools. We will share findings from the pre-implementation calls with the HIT tool development team to inform tool development and refinements. These calls will also inform the development of trainings and practice facilitation for Arm II sites. In addition, we will hold a retreat with CHC staff, patient advisors, project advisors and consultants, and policymakers to aid in refining the health insurance support HIT tools. At the retreat, we will utilize user-centered design processes to better understand users’ needs and wants with regard to product design and utilization of the HIT tools [47]. These processes will enable us to interactively share the first version of the tools and develop a work plan and timeline for further refinements. A health literacy expert will assess the reading demands and document complexity of all patient communication materials (e.g., automated e-mails) to ensure that content is appropriate for the CHC patient population.

Implementation phase

The HIT tools will be implemented in both study arms, but the arms will differ in the levels of training, facilitation, and implementation support received, as described below.

Arm I

Study training materials (e.g., a PowerPoint presentation covering the tools and study purpose, a tool guide, a sample best-practice workflow) will be made available to the clinics in Arm I for independent training via the OCHIN learning management system (LMS). The LMS is a software application for the administration, documentation, tracking, reporting, and delivery of electronic educational courses and training programs.

Arm II

In addition to the study training materials provided for Arm I, we will provide interactive trainings for Arm II clinics to explain the insurance support HIT tools, prepare clinic staff for using the tools, and assist clinics in revising workflows to maximize tool utilization. A practice facilitator will be available to the clinic staff and will continue to engage actively with tool users over the course of the implementation phase, including on-site, face-to-face trainings and support.

We will assess the use of the insurance support HIT tools as well as the facilitation process in Arm II qualitatively and quantitatively. We will conduct brief, semi-structured phone interviews with key informants from each Arm II clinic and the facilitator periodically to monitor tasks and workflows related to health insurance support and other factors that may affect how a clinic implements the HIT tools. The goal is to understand what is and is not working regarding the HIT tools, any changes in workflow that might impact insurance support tool use, and experiences with the practice facilitation process. We anticipate conducting 5–10 interviews with each clinic in Arm II. Findings may be used to guide and modify the facilitation process in Arm II and possibly future refinements to the insurance support tools.

Overview of effectiveness component of the trial

An independent biostatistician used propensity score matching [39] to identify the control group of 23 CHCs that most closely match the intervention clinics on clinic and patient characteristics that have the potential to confound the insurance support HIT tools’ effect on health insurance coverage rates and cancer screening. An OCHIN team member identified 64 clinics in three states that were eligible for the study’s control group and matched based on the state of the organization, the total number of patients in 2014, and the percentage of uninsured visits, female gender, and Medicaid beneficiaries. The 23 control clinics were selected in a 1:1 ratio based on the nearest available match from the intervention clinics. The intent was to achieve the optimal overall balance in the matching characteristics between the intervention and control groups. Balance diagnostics were performed to assess whether the propensity score model had been properly specified [48].
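The matching step can be sketched as greedy 1:1 nearest-neighbor matching on the propensity score without replacement. The clinic IDs and scores below are hypothetical, and the scores are assumed to be pre-estimated (e.g., by logistic regression on the matching covariates); the study’s actual matching was performed by an independent biostatistician:

```python
def nearest_neighbor_match(intervention, controls):
    """Greedy 1:1 nearest-neighbor matching on the propensity score,
    without replacement. Inputs map clinic id -> propensity score;
    returns {intervention_id: matched_control_id}."""
    available = dict(controls)
    matches = {}
    # Iterate intervention clinics in score order; each takes the closest
    # still-unmatched control, which is then removed from the pool.
    for clinic, score in sorted(intervention.items(), key=lambda kv: kv[1]):
        best = min(available, key=lambda c: abs(available[c] - score))
        matches[clinic] = best
        del available[best]
    return matches

tx = {"i1": 0.30, "i2": 0.55}          # hypothetical intervention clinics
ctrl = {"c1": 0.28, "c2": 0.60, "c3": 0.90}  # hypothetical control pool
pairs = nearest_neighbor_match(tx, ctrl)
# → {"i1": "c1", "i2": "c2"}
```

After matching, the balance diagnostics mentioned in the text would compare covariate distributions between the matched groups to check that the propensity model was adequately specified.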

Post-implementation assessment

After implementation of the insurance support HIT tools, we will assess the effectiveness of these tools and the implementation support strategies using qualitative and quantitative methods.

Post-implementation quantitative assessment

We will compare post-implementation insurance coverage and cancer screening and prevention rates across all study arms and in the matched control groups. We will also assess insurance coverage and receipt of general recommended preventive care/cancer screening and prevention across the entire OCHIN population. Study outcome variables are described below (also see Table 2).

Post-implementation qualitative assessment

Based on data collected during the implementation period, we will purposively sample CHC sites to participate in site visits. At the site visits, we will evaluate the implementation process, how the HIT tools are being used by clinics and how workflows around insurance are changing, and facilitators and barriers to implementing and using the HIT tools. Based on the quantitative data (described above), we will explore differences between high and low performers across arms.

During each site visit, the research team will spend 3–5 half-days observing clinic operations that are pertinent to use of the insurance support HIT tools. This will include observing all operational areas (e.g., scheduling/check-in, provider encounters) where insurance issues arise and are addressed. The team will also opportunistically conduct informal interviews with clinic staff, as informal interviews often elicit different and more insightful information than is captured during formal interviews. Between 5 and 10 interviews will be conducted at each site. Researchers will prepare field notes, share observations, and strategize for additional data collection opportunities.

If quantitative analyses reveal that control clinics have significantly improved rates of coverage, similar to the intervention clinics, we will employ ethnographic methods to better understand what is contributing to their improvements.

Study measures

Study measures for the effectiveness component of the trial

Our outcome measures (presented in Table 2) are patients’ health insurance status and continuity rates and receipt of recommended cancer screenings and prevention (i.e., cervical, colorectal, and breast cancer screening, HPV vaccination, and smoking and obesity screening and intervention). Outcomes will be measured individually and in combination. For example, we will measure if cancer screening services were received within appropriate intervals and also the overall rate of indicated services received.
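A minimal sketch of the “up-to-date within appropriate intervals” logic follows; the interval lengths and field names are illustrative approximations, not the study’s exact measure specifications:

```python
from datetime import date

# Screening intervals in days (rough, illustrative values only)
INTERVALS = {"cervical": 3 * 365, "fobt": 365, "mammography": 2 * 365}

def up_to_date(last_screen, service, as_of):
    """True if the most recent screening for `service` falls within its
    recommended interval as of `as_of`; no recorded screening counts as
    not up to date."""
    if last_screen is None:
        return False
    return (as_of - last_screen).days <= INTERVALS[service]

def combined_rate(patient, as_of):
    """Share of indicated services for which the patient is up to date
    (the 'overall rate of indicated services received' in the text)."""
    flags = [up_to_date(d, s, as_of) for s, d in patient.items()]
    return sum(flags) / len(flags)

# Hypothetical patient: up to date on cervical and mammography, not FOBT
p = {"cervical": date(2014, 6, 1), "fobt": None, "mammography": date(2015, 1, 15)}
rate = combined_rate(p, as_of=date(2015, 9, 1))
```

In the study, the set of indicated services would itself depend on the patient’s age and gender, which this sketch omits.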

The primary independent variable is provision of insurance support HIT tools (i.e., intervention clinics with tools, control clinics without tools). Potential confounder variables in multivariable analyses will include patients’ demographic information (e.g., age, federal poverty level, race/ethnicity, language spoken, rural/urban location). We will also collect additional information from the EHR about the use of health services such as encounter information, provider data, clinic location, level of service, types of visits, and number of visits.

Study measures for the implementation component of the trial

The primary independent variable is study arm (Arm I: tools + basic training vs. Arm II: tools + enhanced training + facilitation). The primary outcome measures are directed by the widely accepted RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) framework for evaluating implementation success [49]. Reach will involve rates of insured patient visits and rates of insurance continuity. Effectiveness will refer to rates of patients receiving guideline-concordant cancer screening and other preventive care. Adoption will pertain to patterns, frequency, and timing of insurance support HIT tool usage by clinic staff and patients. Implementation will relate to users’ perceptions of the tools (perceived ease of use, usefulness of receiving information about health insurance) and acceptance of the tools (intention to use and satisfaction with the tools). Maintenance will involve all of the above measures over time. Some of these outcomes overlap with those of the effectiveness component of the trial.

Analytic strategy

Quantitative analysis of the effectiveness component of the trial

We will estimate pre- and post-implementation rates of insurance coverage and cancer screenings for clinics in the intervention group and the matched control group. For each patient in the study population, we will determine continuity of coverage and receipt of age- and gender-appropriate recommended cancer screening and preventive care 18 months pre- and 18 months post-intervention. The intervention group will be compared with the matched control group using a difference-in-differences (DID) approach. We will utilize generalized linear/non-linear mixed models, which offer flexible regression modeling to accommodate different sources of correlations (serial and intra-clinic), categorical and continuous covariates, and fixed and time-dependent covariates. This general model allows us to study a wide range of dependent variables, including logistic regression (binary data), beta regression (percent data), Poisson regression (count data), and Gaussian regression (normally distributed data). For example, we will use a random-effect logistic regression model to analyze insurance continuity at pre- and post-intervention periods (dependent variables) as a function of whether a patient belongs to a control or intervention CHC (primary independent variable) and other possible confounders. Serial and intra-clinic correlations will be modeled as random effects.
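At the group level, the DID estimand reduces to a simple double difference. The sketch below shows that contrast on hypothetical coverage rates; the actual analysis fits the patient-level mixed models described above (with serial and intra-clinic correlations as random effects) rather than this aggregate calculation:

```python
def did_estimate(rates):
    """Difference-in-differences contrast on group-level rates:
    (post - pre) change in the intervention group minus the
    (post - pre) change in the matched control group."""
    tx = rates["intervention"]
    ct = rates["control"]
    return (tx["post"] - tx["pre"]) - (ct["post"] - ct["pre"])

# Hypothetical 18-month pre/post insurance coverage rates
coverage = {
    "intervention": {"pre": 0.60, "post": 0.75},
    "control":      {"pre": 0.58, "post": 0.63},
}
effect = did_estimate(coverage)
# → 0.10: a 15-point gain in intervention clinics net of a
#   5-point secular gain in control clinics
```

Subtracting the control-group change is what removes shared temporal trends (e.g., ACA-driven coverage gains) from the estimated tool effect.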

Quantitative analysis of the implementation component of the trial

Similar to the effectiveness aspect of the trial, we will compare the pre- and post-intervention rates of patients’ insurance coverage and receipt of recommended cancer screenings using DID generalized linear/non-linear mixed models comparing Arm I and Arm II. Moreover, post-implementation, we will assess and compare use of the tools across the study arms, including information such as who uses which tools and functions, as well as frequency and timing of use. This will help identify individual- and clinic-level factors associated with more frequent use of the health insurance support HIT tools.

To evaluate tool use, we will assess monthly data (e.g., percent of patient charts with evidence of tool use) in regression models to estimate associations between use and patient panel characteristics. Additionally, we will describe and compare the characteristics of patient encounters with tool use vs. those with no tool use, via random-effect logistic regression models that assess associations between use of the insurance support HIT tools (dependent variables) and socio-demographic characteristics (independent variables) and other possible confounders.

Qualitative analysis

Our team will meet regularly to review transcripts and field notes. We will also listen to and discuss key segments of the audio-recorded interviews. This step is crucial to monitoring data quality, refining the observation and interview guides, making sampling decisions, and monitoring theme saturation. This ongoing process will be used to track emerging themes and to create a coding template for more in-depth analysis. We will follow the 5-phase analysis strategy described by Miller and Crabtree (describing, organizing, connecting, corroborating/legitimizing, representing) [50, 51]. To accomplish these steps, we will use an immersion-crystallization approach in which the team reads and discusses the data for each clinic (immersion) to identify key findings (crystallization) [51, 52]. We will do this twice: first, to identify key themes within each case (clinic) and, second, to identify cross-case findings. A key step in this process is the connecting phase, where we will connect what we are seeing in the qualitative data with the data usage patterns from the EHR, as well as with the quantitative data.

Discussion

Recent policies have created new opportunities for CHC patients to obtain health insurance coverage. However, research indicates that simply making insurance available is not enough to improve rates of continuous coverage; outreach efforts are needed to keep eligible patients insured [53–60]. Multi-strategy approaches have shown early promise in improving enrollment and retention of eligible children in public health insurance programs, but few technological solutions have been tested in primary care settings or with populations of adults eligible for Medicaid coverage [61, 62]. With the passage of the ACA, CHC patients in states that expanded Medicaid now have better access to insurance coverage. Medicaid enrollment increased by 12.9 % in expansion states and by 2.6 % in non-expansion states [63].

The growth of HIT infrastructure in CHCs has created new opportunities for building tools capable of supporting health insurance enrollment and retention. While clinic-based HIT tools (e.g., registries, “pop-up” alerts, automated e-mails to patients) have proven effective at enhancing chronic disease management [64], we know of no previous studies to test the use of similar mechanisms to connect adult patients to Medicaid coverage with the aim of improving insurance coverage continuity and cancer screening rates. Therefore, this study will test the effectiveness of these tools and compare different strategies for supporting their implementation in “real-world” practice settings.

This protocol has several limitations. First, a participating site could withdraw from the study; however, the large number (>300) of OCHIN member CHCs could provide substitute sites. Second, as with any study conducted in “real-world” settings, unobserved changes will undoubtedly occur over time in a non-random fashion within the study environments (e.g., community health insurance outreach efforts). We acknowledge that these unexpected occurrences may make it difficult to isolate the effect of the health insurance support HIT tools and the implementation support strategies in this pragmatic trial. To address this, our approach uses both clinic- and individual-level comparisons. Further, the implementation component of this trial is strengthened by the use of a two-arm cluster-randomized design, and the tool effectiveness component uses a propensity score-matched control group of clinics to adjust for temporal trends. Third, while EHR data sources are not developed for research purposes, we have conducted multiple validation studies and have successfully built research datasets in the past [31–69]. Finally, as with most implementation research, there are questions about sustainability. Conducting this work in partnership with OCHIN positions us to sustain the aspects of the insurance support HIT tools proven effective. OCHIN is committed to maintaining the tools and will also enable rapid dissemination to more CHCs.

Despite these limitations, this study has great potential impact, given the central role played by CHCs in providing health care to vulnerable populations and the expansion of both CHCs and public health insurance coverage for adults as supported by the American Recovery and Reinvestment Act and the ACA [70]. Indeed, as health insurance expansions continue, it becomes increasingly important to know about effective methods for connecting patients to coverage. The proposed study will help fill this knowledge gap. We believe our findings will have broad relevance to public health and healthcare reform efforts. We will work with our community partners and state policymakers to widely distribute findings and plan for widespread dissemination of the insurance support HIT tools, if proven effective. Further, the insurance support HIT tools have the potential to facilitate access to a wide range of recommended healthcare services beyond cancer screening and prevention.

Trial status

Recruitment and randomization into the study arms have been completed, and the control group has been identified using propensity score matching. Implementation of the health insurance support HIT tools is scheduled for Fall 2015.
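As context for the propensity score matching step, the sketch below illustrates one common approach: estimate each clinic's probability of being a study clinic from baseline covariates, then greedily pair each study clinic with the unused comparison clinic whose score is closest (1:1 nearest-neighbor matching without replacement). This is an illustrative sketch only, not the study's actual procedure; the covariates, clinic counts, and fitting routine are hypothetical.

```python
import numpy as np

def propensity_scores(X, treated, steps=2000, lr=0.1):
    # Illustrative logistic regression fit by gradient ascent:
    # P(treated | X) with an intercept term.
    X1 = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w += lr * X1.T @ (treated - p) / len(X)
    return 1.0 / (1.0 + np.exp(-X1 @ w))

def match_controls(scores, treated):
    # Greedy 1:1 nearest-neighbor matching on the propensity score,
    # without replacement: each treated clinic is paired with the
    # closest not-yet-used control clinic.
    treated_idx = np.where(treated == 1)[0]
    control_idx = list(np.where(treated == 0)[0])
    pairs = []
    for t in treated_idx:
        c = min(control_idx, key=lambda j: abs(scores[j] - scores[t]))
        control_idx.remove(c)
        pairs.append((t, c))
    return pairs

rng = np.random.default_rng(0)
# Hypothetical clinic-level covariates (e.g., panel size, % uninsured).
X = rng.normal(size=(40, 2))
treated = (np.arange(40) < 8).astype(float)  # 8 "study" clinics
ps = propensity_scores(X, treated)
pairs = match_controls(ps, treated)
```

In practice, a matched set would only be accepted after balance diagnostics (e.g., standardized mean differences between matched groups, as in Austin [48]), and matching might use calipers or many-to-one ratios rather than the simple greedy pairing shown here.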

Abbreviations

ACA: Affordable Care Act

CHC: community health center

DID: difference-in-differences

EHR: electronic health record

FOBT: fecal occult blood test

HIT: health information technology

HPV: human papillomavirus

LMS: learning management system

Declarations

Acknowledgements

The authors gratefully acknowledge the OCHIN practice-based research network health centers. This project is supported by the National Cancer Institute (NCI) R01CA181452. The funding agency had no involvement in the preparation, review, or approval of the manuscript.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

(1) Department of Family Medicine, Oregon Health & Science University
(2) OCHIN, Inc.
(3) Center for Health Research Northwest

References

  1. Shi L, Lebrun LA, Zhu J, Tsai J. Cancer screening among racial/ethnic and insurance groups in the United States: a comparison of disparities in 2000 and 2008. J Health Care Poor Underserved. 2011;22(3):945–61. doi:10.1353/hpu.2011.0079.
  2. Farkas DT, Greenbaum A, Singhal V, Cosgrove JM. Effect of insurance status on the stage of breast and colorectal cancers in a safety-net hospital. Am J Manag Care. 2012;18(5 Spec No. 2):SP65–70.
  3. Carney PA, O'Malley J, Buckley DI, Mori M, Lieberman DA, Fagnan LJ, et al. Influence of health insurance coverage on breast, cervical, and colorectal cancer screening in rural primary care settings. Cancer. 2012;118(24):6217–25. doi:10.1002/cncr.27635.
  4. American Cancer Society. Cancer facts and figures, 2008. http://www.cancer.org/research/cancerfactsstatistics/cancerfactsfigures2008/index. Accessed September 15 2009.
  5. Robinson JMSV. The role of health insurance coverage in cancer screening utilization. J Health Care Poor Underserved. 2008;19(3):842–56.
  6. Swan JBN, Coates RJ, Rimer BK, Lee NC. Progress in cancer screening practices in the United States: results from the 2000 National Health Interview Survey. Cancer. 2003;97(6):1528–40.
  7. DeVoe J, Fryer G, Phillips R, Green L. Receipt of preventive care among adults: insurance status and usual source of care. Am J Public Health. 2003;93:786–91.
  8. Mandelblatt JS, Yabroff KR, Kerner JF. Equitable access to cancer services: a review of barriers to quality care. Cancer. 1999;86:2378–90.
  9. Baicker K, Finkelstein A. The effects of Medicaid expansion: learning from the Oregon experiment. N Engl J Med. 2011;365(8):683–5.
  10. McWilliams JM, Zaslavsky AM, Meara E, Ayanian JZ. Impact of Medicare coverage on basic clinical services for previously uninsured adults. JAMA. 2003;290(6):757–64. doi:10.1001/jama.290.6.757.
  11. McWilliams JM, Meara E, Zaslavsky AM, Ayanian JZ. Use of health services by previously uninsured Medicare beneficiaries. N Engl J Med. 2007;357(2):143–53. doi:10.1056/NEJMsa067712.
  12. Busch SH, Duchovny N. Family coverage expansions: impact on insurance coverage and health care utilization of parents. J Health Econ. 2005;24(5):876–90. doi:10.1016/j.jhealeco.2005.03.007.
  13. Bednarek HL, Steinberg SB. Variation in preventive service use among the insured and uninsured: does length of time without coverage matter? J Health Care Poor Underserved. 2003;14(3):403–19.
  14. Bradley CJ, Gandhi SO, Neumark D, Garland S, Retchin SM. Lessons for coverage expansion: a Virginia primary care program for the uninsured reduced utilization and cut costs. Health Aff (Millwood). 2012;31(2):350–9.
  15. Siminoff LA, Ross L. Access and equity to cancer care in the USA: a review and assessment. Postgrad Med J. 2005;81(961):674–9.
  16. Slatore CG, Au DH, Gould MK, American Thoracic Society Disparities in Healthcare Group. An official American Thoracic Society systematic review: insurance status and disparities in lung cancer practices and outcomes. Am J Respir Crit Care Med. 2010;182(9):1195–205. doi:10.1164/rccm.2009-038ST.
  17. Bradley CJ, Dahman B, Bear HD. Insurance and inpatient care: differences in length of stay and costs between surgically treated cancer patients. Cancer. 2012;118(20):5084–91. doi:10.1002/cncr.27508.
  18. Roetzheim RG, Pal N, Gonzalez EC, Ferrante JM, Van Durme DJ, Krischer JP. Effects of health insurance and race on colorectal cancer treatments and outcomes. Am J Public Health. 2000;90(11):1746–54.
  19. Schootman M, Walker MS, Jeffe DB, Rohrer JE, Baker EA. Breast cancer screening and incidence in communities with a high proportion of uninsured. Am J Prev Med. 2007;33(5):379–86.
  20. Harvin JA, Van Buren G, Tsao K, Cen P, Ko TC, Wray CJ. Hepatocellular carcinoma survival in uninsured and underinsured patients. J Surg Res. 2011;166(2):189–93.
  21. Miller DC, Litwin MS, Bergman J, Stepanian S, Connor SE, Kwan L, et al. Prostate cancer severity among low income, uninsured men. J Urol. 2009;181(2):579–84.
  22. Ward E, Halpern M, Schrag N, Cokkinides V, DeSantis C, Bandi P, et al. Association of insurance with cancer care utilization and outcomes. CA Cancer J Clin. 2008;58(1):9–31. doi:10.3322/ca.2007.0011.
  23. Morgan D. Health centers for poor, uninsured see ranks swell. 2012. http://www.reuters.com/article/2012/05/01/us-usa-healthcare-centers-idUSBRE8401JL20120501?feedType=RSS&feedName=everything&virtualBrandChannel=11563. Accessed February 21 2013.
  24. National Association of Community Health Centers. A sketch of community health centers, chart book 2014. Bethesda, MD: National Association of Community Health Centers; 2014. https://www.nachc.com/client//Chartbook_December_2014.pdf. Accessed August 17 2015.
  25. Henry J. Kaiser Family Foundation. Summary of the Affordable Care Act. Menlo Park, CA: Henry J. Kaiser Family Foundation; 2013. Contract No.: 8061-02. http://files.kff.org/attachment/fact-sheet-summary-of-theaffordable-care-act. Accessed August 17 2015.
  26. Young AS, Chaney E, Shoai R, Bonner L, Cohen AN, Doebbeling B, et al. Information technology to support improved care for chronic illness. J Gen Intern Med. 2007;22 Suppl 3:425–30. doi:10.1007/s11606-007-0303-4.
  27. Sequist TD, Zaslavsky AM, Marshall R, Fletcher RH, Ayanian JZ. Patient and physician reminders to promote colorectal cancer screening: a randomized controlled trial. Arch Intern Med. 2009;169(4):364–71. doi:10.1001/archinternmed.2008.564.
  28. Nease Jr DE, Ruffin MT, Klinkman MS, Jimbo M, Braun TM, Underwood JM. Impact of a generalizable reminder system on colorectal cancer screening in diverse primary care practices: a report from the prompting and reminding at encounters for prevention project. Med Care. 2008;46(9 Suppl 1):S68–73. doi:10.1097/MLR.0b013e31817c60d7.
  29. DeVoe JE, Angier H, Likumahuwa S, Hall J, Nelson C, Dickerson K, et al. Use of qualitative methods and user-centered design to develop customized health information technology tools within federally qualified health centers to keep children insured. J Ambul Care Manage. 2014;37(2):148–54.
  30. DeVoe JE, Angier H, Burdick T, Gold R. Health information technology—an untapped resource for patient-centered medical homes to help keep patients insured. Ann Fam Med. 2014;12(6):568–72.
  31. Gold R, Nelson C, Cowburn S, Bunce A, Hollombe C, Davis J, et al. Feasibility and impact of implementing a private care system's diabetes quality improvement intervention in the safety net: a cluster-randomized trial. Implement Sci. 2015;10:83. doi:10.1186/s13012-015-0259-4.
  32. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26. doi:10.1097/MLR.0b013e3182408812.
  33. DeVoe J, Angier H, Likumahuwa S, Hall J, Nelson C, Dickerson K, et al. Use of qualitative methods and user-centered design to develop customized health information technology tools within federally qualified health centers to keep children insured. J Ambul Care Manage. 2014;37(2):148–54. doi:10.1097/JAC.0000000000000016.
  34. Gold R, Muench J, Hill C, Turner A, Mital M, Milano C, et al. Collaborative development of a randomized study to adapt a diabetes quality improvement initiative for federally qualified health centers. J Health Care Poor Underserved. 2012;23(3 Suppl):236–46. doi:10.1353/hpu.2012.0132.
  35. Heintzman J, Gold R, Krist A, Crosson J, Likumahuwa S, DeVoe JE. Practice-based research networks (PBRNs) are promising laboratories for conducting dissemination and implementation research. J Am Board Fam Med. 2014;27(6):759–62. doi:10.3122/jabfm.2014.06.140092.
  36. Cully JA, Armento ME, Mott J, Nadorff MR, Naik AD, Stanley MA, et al. Brief cognitive behavioral therapy in primary care: a hybrid type 2 patient-randomized effectiveness-implementation design. Implement Sci. 2012;7:64. doi:10.1186/1748-5908-7-64.
  37. Smith JD, Stormshak EA, Kavanagh K. Results of a pragmatic effectiveness-implementation hybrid trial of the family check-up in community mental health agencies. Adm Policy Ment Health. 2014. doi:10.1007/s10488-014-0566-0.
  38. van Dijk-de Vries A, van Bokhoven MA, Terluin B, van der Weijden T, van Eijk JT. Integrating nurse-led Self-Management Support (SMS) in routine primary care: design of a hybrid effectiveness-implementation study among type 2 diabetes patients with problems of daily functioning and emotional distress: a study protocol. BMC Fam Pract. 2013;14:77. doi:10.1186/1471-2296-14-77.
  39. D'Agostino Jr RB. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Stat Med. 1998;17(19):2265–81.
  40. Rosenbaum PR, Rubin DB. Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. Am Statistician. 1985;39:33–8.
  41. Guo SY, Fraser MW. Propensity score analysis: statistical methods and applications. Thousand Oaks, CA: Sage Publications; 2014.
  42. Henry J. Kaiser Family Foundation. Status of state action on the Medicaid expansion decision, 2014. Menlo Park, CA; 2014. http://kff.org/health-reform/state-indicator/state-activity-around-expanding-medicaid-under-the-affordable-care-act/. Accessed August 20 2014.
  43. Centers for Disease Control and Prevention (CDC). Cancer screening—United States, 2010. Morb Mortal Wkly Rep. 2012;61(3):41–5.
  44. Ivers NM, Halperin IJ, Barnsley J, Grimshaw JM, Shah BR, Tu K, et al. Allocation techniques for balance at baseline in cluster randomized trials: a methodological review. Trials. 2012;13:120. doi:10.1186/1745-6215-13-120.
  45. Bridges CB, Coyne-Beasley T, Advisory Committee on Immunization Practices. Advisory Committee on Immunization Practices recommended immunization schedule for adults aged 19 years or older: United States, 2014. Ann Intern Med. 2014;160(3):190. doi:10.7326/M13-2826.
  46. US Preventive Services Task Force. USPSTF A and B recommendations. October 2014. http://www.uspreventiveservicestaskforce.org/Page/Name/uspstf-a-and-b-recommendations/. Accessed February 3 2015.
  47. Beyer H, Holtzblatt K. Contextual design: defining customer-centered systems. San Diego: Academic Press; 1998.
  48. Austin PC. Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples. Stat Med. 2009;28(25):3083–107. doi:10.1002/sim.3697.
  49. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–7.
  50. Crabtree BF, Miller WL. Doing qualitative research. 2nd ed. Thousand Oaks: Sage Publications, Inc.; 1999.
  51. Miller WL, Crabtree BF. The dance of interpretation. In: Crabtree BF, Miller WL, editors. Doing qualitative research. 2nd ed. Thousand Oaks: Sage Publications, Inc.; 1999. p. 127–43.
  52. Borkan J. Immersion/crystallization. In: Crabtree BF, Miller WL, editors. Doing qualitative research. 2nd ed. Thousand Oaks: Sage Publications, Inc.; 1999. p. 179–94.
  53. Cuttler L, Kenney G. State Children's Health Insurance Program and pediatrics. Arch Pediatr Adolesc Med. 2007;161(7):630–3.
  54. Sommers BD, Rosenbaum S. Issues in health reform: how changes in eligibility may move millions back and forth between Medicaid and insurance exchanges. Health Aff. 2011;30(2):228–36.
  55. Klein K, Glied S, Ferry D. Entrances and exits: health insurance churning, 1998–2000. Issue Brief (Commonwealth Fund). 2005;855:1–12.
  56. Fairbrother G, Park H, Haidery A, Gray B. Periods of unmanaged care in Medicaid managed care. J Health Care Poor Underserved. 2005;16(3):444–52.
  57. DeVoe JE, Krois L, Edlund C, Smith J, Carlson NE. Uninsured but eligible children: are their parents insured? Recent findings from Oregon. Med Care. 2008;46(1):3.
  58. US Census Bureau. 2008 Survey of Income and Program Participation. Washington, DC: US Government. http://www.census.gov/programs-surveys/sipp/data.html. Accessed February 21 2009.
  59. Miller JE, Gaboda D, Cantor JC, Videon TM, Diaz Y. Demographics of disenrollment from SCHIP: evidence from NJ KidCare. J Health Care Poor Underserved. 2004;15(1):113–26.
  60. Phillips JA, Miller JE, Cantor JC, Gaboda D. Context or composition: what explains variation in SCHIP disenrollment? Health Serv Res. 2004;39(4 Pt 1):865–85.
  61. Cousineau MR, Stevens GD, Farias A. Measuring the impact of outreach and enrollment strategies for public health insurance in California. Health Serv Res. 2011;46(1 Pt 2):319–35.
  62. Meng Q, Yuan B, Jia L, Wang J, Garner P. Outreach strategies for expanding health insurance coverage in children [Review]. Cochrane Database Syst Rev. 2010;(8):CD008194.
  63. Wachino V, Artiga S, Rudowitz R. How is the ACA impacting Medicaid enrollment? 2014. http://kff.org/medicaid/issue-brief/how-is-the-aca-impacting-medicaid-enrollment/. Accessed March 3 2015.
  64. Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157(1):29–43.
  65. Gold R, Nichols G, Muench J, Hill C, Mital M, Dudl J, et al., editors. Implementing an integrated care setting's diabetes QI initiative in safety net clinics: a practice-based randomized trial. 5th Annual NIH Conference on the Science of Dissemination and Implementation: Science at the Crossroads; 2012; Bethesda, MD.
  66. Gold R, DeVoe J, Shah A, Chauvie S. Insurance continuity and receipt of diabetes preventive care in Oregon's CHCs. Med Care. 2008;47(4):431–9.
  67. DeVoe JE, Gold R, McIntire P, Puro J, Chauvie S, Gallia CA. Electronic health records vs Medicaid claims: completeness of diabetes preventive care data in community health centers. Ann Fam Med. 2011;9(4):351–8. doi:10.1370/afm.1279.
  68. Gold R, Angier H, Mangione-Smith R, Gallia C, McIntire PJ, Cowburn S, et al. Feasibility of evaluating the CHIPRA care quality measures in electronic health record data. Pediatrics. 2012;130(1):139–49. doi:10.1542/peds.2011-3705.
  69. Gold R, DeVoe JE, McIntire PJ, Puro JE, Chauvie SL, Shah AR. Receipt of diabetes preventive care among safety net patients associated with differing levels of insurance coverage. J Am Board Fam Med. 2012;25:42–9.
  70. US Department of Health and Human Services. Strategic goal 1: strengthen health care. n.d. http://www.hhs.gov/about/strategic-plan/strategic-goal-1/index.html. Accessed December 23 2014.
  71. Smith JC, Medalia C, US Census Bureau. Health insurance coverage in the United States: 2013. Washington, DC: U.S. Department of Commerce Economics and Statistics Administration; 2014.
  72. Centers for Disease Control and Prevention. Cancer screening—United States, 2010. MMWR. 2012;61(3):41–5.

Copyright

© DeVoe et al. 2015
