
The use of continuous surveys to generate and continuously report high quality timely maternal and newborn health data at the district level in Tanzania and Uganda



The lack of high quality timely data for evidence-informed decision making at the district level presents a challenge to improving maternal and newborn survival in low income settings. To address this problem, the EQUIP project (Expanded Quality Management using Information Power) implemented a continuous household and health facility survey for continuous feedback of data in two districts each in Tanzania and Uganda as part of a quality improvement innovation for mothers and newborns.


Within EQUIP, continuous survey data were used for quality improvement (intervention districts) and for effect evaluation (intervention and comparison districts). Over 30 months of intervention (November 2011 to April 2014), EQUIP conducted continuous cross-sectional household and health facility surveys using 24 independent probability samples of household clusters to represent each district each month, and repeat censuses of all government health facilities. Using repeat samples in this way allowed data to be aggregated at six four-monthly intervals to track progress over time for evaluation, and for continuous feedback to quality improvement teams in intervention districts.

In both countries, one continuous survey team of eight people was employed to complete approximately 7,200 household and 200 facility interviews in year one. Data were collected using personal digital assistants. Every four months, routine tabulations of indicators were produced and synthesized into report cards for use by the quality improvement teams.


The first 12 months were implemented as planned. Completion of household interviews was 96% in Tanzania and 91% in Uganda. Indicators across the continuum of care were tabulated every four months, results discussed by quality improvement teams, and report cards generated to support their work.


The EQUIP continuous surveys were feasible to implement as a method to continuously generate and report on demand and supply side indicators for maternal and newborn health; they have potential to be expanded to include other health topics. Documenting the design and implementation of a continuous data collection and feedback mechanism for prospective description, quality improvement, and evaluation in a low-income setting potentially represents a new paradigm that places equal weight on data systems for course correction, as well as evaluation.



The last 10 years have seen maternal and newborn health gain momentum as a priority public health issue [1]. Although some important gains have been made, mortality rates remain high in many countries [2]. A review of evidence demonstrated that in high mortality, low income settings, interventions that save lives can be delivered at primary level health facilities, and centre around timely care seeking by women, the adoption of behaviours by families and health workers, and the availability of low cost, low technology drugs and equipment [3]. In theory, it should be a simple process to realise the potential of these interventions, but in practice, the coverage of life saving interventions remains too low [4], and data for tracking progress remain scarce [5].

The lack of high quality timely data to support evidence-informed decision making presents an important challenge to improving survival for all in low income settings [6]-[8]. Existing large-scale data collection approaches such as the Demographic and Health Survey (DHS) [9], the Multiple Indicator Cluster Survey (MICS) [10], and the Service Provision Assessment (SPA) [9] provide high quality estimates for a broad set of indicators that are an invaluable resource for understanding health in a country at one point in time, and for tracking progress across time and place, but they have three important limitations. First, the periodicity of these surveys is usually once every two to five years, meaning that program managers have to wait a long time to assess progress of scale-up strategies, or to consider actions for course correction. The issue of periodicity is further exacerbated by temporality, in that the data collected are retrospective, sometimes reflecting events up to five years in the past. Second, estimates rarely go below the regional level and thus provide district health managers with little high quality, timely population data with which to set priorities or benchmark progress. Third, existing large-scale survey models cannot readily link indicators on population level health to indicators on healthcare availability and quality [11]. Health information system (HIS) data generated at individual health facilities and aggregated up to district, regional and national levels have potential for supporting decision making, but are limited to reporting on health service users, not population level statistics. Currently, however, these data are of poor quality in the majority of high mortality settings and are not systematically used to manage, course-correct, and maximise the potential of health programmes [12].

In 2009, continuous surveys (as part of a larger, continuous quality improvement strategy) were proposed as a possible approach to apply high quality methods for data collection such as those applied by DHS, but adapted in ways that fundamentally change the survey outputs, particularly in terms of reporting intervals and unit of geographical disaggregation [13]. Continuous surveys are conceptualised as occurring at the national level, with national point estimates for a broad range of indicators several times per year (for input, output, and outcome indicators), and with sample sizes that have reasonable precision for national and regional level analysis annually. Indicators at the sub-national or district level could be calculated even more frequently using small-area analysis techniques [14]. These more frequent, lower-precision, sub-national estimates could be used for program management and support a continuous quality improvement strategy [13].

Variations of the continuous survey concept have already been applied in different settings. For example, in Peru a continuous survey was implemented nationally for five consecutive years, 2003 to 2008, in lieu of one national DHS survey, primarily because there was strong demand for department (district) level indicator estimates, as well as national level estimates [15]. The survey was judged broadly successful in its design and implementation, recommended for continuation in Peru, and recommended for other countries that currently rely on large-scale surveys at intervals of more than three years. In Malawi, sub-district continuous surveys were implemented during a period of rapid scale-up of malaria control efforts in 2010 [16]. The implementers reported that these surveys were feasible and affordable, had provided valuable information about where to target malaria control efforts, and were recommended as a promising district level monitoring and evaluation tool for short-term processes. More recently, continuous household and health facility surveys have been implemented in Senegal [17].

Here we describe the experience of implementing continuous household and health facility surveys at the district level in Tanzania and Uganda that are part of a quality improvement project for maternal and newborn health, called EQUIP (Expanded Quality Management using Information Power) [18]. To our knowledge, this is the first example of a high quality data collection method that was designed to be used prospectively for description, quality improvement, and evaluation in a high mortality setting: this potentially represents a new paradigm that places equal weight on data systems for course correction as well as evaluation.

The protocol for the EQUIP intervention is provided elsewhere [18]. In brief, EQUIP was implemented in southern Tanzania and eastern Uganda. A quality management intervention was implemented that simultaneously engaged with district health management teams, primary and secondary level health facility workers, and community volunteers to collectively take small, incremental steps to improve demand and supply side factors that limit maternal and newborn healthcare. A central component of the quality improvement has been that all teams were provided with locally generated, high quality, population based data generated by the continuous survey described here. Every quarter, in line with the cycle of quality improvement team meetings, continuous survey data were analysed and summary report cards created around priority health topics: teams were then presented with these report cards and encouraged to reflect on how their own behaviours - and the behaviours of people operating in their community, facility, or management context - could improve demand or supply side factors associated with these health topics. In both countries, one district received both quality management and continuous surveys (intervention districts) and a neighbouring district had only continuous surveys (comparison districts). The success of district and health facility level quality improvement teams is reflected using supply side indicators (health facility survey), together with population level coverage of interventions that represent a combination of both supply and demand (household survey). The success of community level quality improvement teams is reflected using demand side indicators (household survey).

In this report, we present the continuous survey methods, their rationale, and results from the first 12 months of survey implementation. To display strengths, limitations, and generalizability of the methods, we endeavour to provide all relevant detail [19]. Evidence on the effect of the EQUIP intervention will be reported separately after completion of the intervention period.


Survey objectives

The EQUIP continuous survey had a dual objective. First, to generate high quality data that could be synthesized at frequent intervals, and with minimal delays, into report cards that were of interest to quality improvement teams working to improve demand or supply side factors affecting maternal and newborn healthcare locally. And second, to generate data that were of sufficiently high quality to carry out a robust evaluation of the EQUIP quality improvement intervention.

Study setting

Maps of the study setting in both countries are available in the paper by Hanson et al. [18]. Both countries established a decentralized district health system in the 1980s, and the populations are served by a network of public health facilities. Maternal and newborn healthcare is predominantly provided through these government facilities, although some women also seek maternity care from a small number of private or NGO-led (largely Christian mission) health facilities.

Administrative information for the four districts is provided in Table 1. In Tanzania, the intervention district (Tandahimba) and comparison district (Newala) are contiguous and have an unpaved road network, some roads being very difficult to pass during the rainy season. Each district has a single hospital. The maternal mortality ratio in the Southern Zone of Tanzania was estimated to be 712 per 100,000 live births [20], and the neonatal mortality rate 31 newborn deaths per 1,000 live births [21].

Table 1 EQUIP continuous survey sample selection after first year of continuous survey implementation (November 2011 to December 2012)

In Uganda, the intervention district (Mayuge) and comparison district (Namayingo) are also neighbouring and accessed by predominantly unpaved roads. Mayuge, but not Namayingo, has a district hospital. Both districts have small island populations that were included in early EQUIP planning, but which were subsequently dropped from the quality improvement and continuous survey protocols because of the high financial and time cost of reaching them. The maternal mortality ratio in Uganda was estimated to be 438 per 100,000 live births, and the neonatal mortality rate 23 newborn deaths per 1,000 live births [22]. In both settings, approximately two-thirds of women delivered their last born child in a health facility.

Continuous survey overview

EQUIP aimed to conduct cross-sectional household and health facility surveys using 24 independent probability samples of household clusters to represent each district each month, and to conduct repeat censuses of all government health facilities in each district. Each month, in each district, 10 household clusters (including 300 households) and 5 to 10 health facilities were surveyed. At the end of each 4-month round, in each district, the survey included 40 household clusters (1,200 households) and a census of health facilities (22 to 38 facilities, depending on the district [Table 1]). To allow for holidays, retraining, and data processing intervals, these 24 monthly household and facility surveys were planned for implementation over the 30-month EQUIP intervention period (November 2011 to April 2014). Field work was organised into six rounds of data collection, each round comprising four monthly household samples and one complete health facility census (Additional file 1).

Using repeat samples in this way allowed data to be aggregated at intervals to track progress over time. For example, the percent of newborns having clean cord care could be tabulated with reasonable precision at the district level after each round of survey (four months), and tabulated at the sub-district level (division in Tanzania and county in Uganda) after every three survey rounds (12 months).

Sample size calculations and justification

In both of the Tanzanian districts, with an estimated total fertility rate of 5.4 in 2010 [9], it was expected that 10 clusters of 30 households each would result in 38 interviews per district each month with women who had a live birth in the 12 months prior to the survey, or 152 women per district in each four-monthly survey round. In the Ugandan districts, with an estimated total fertility rate of 6.2 in 2011 [9], a higher number of women with a recent live birth would be interviewed. In Table 2, an illustrative set of sample size calculations is provided for a range of indicators across the continuum of care from pregnancy to newborn.

Table 2 Illustrative sample size calculations 1 for outcomes along the continuum of care from pregnancy to post-natal in Tanzania and Uganda

For the purpose of evaluation, our assumption was that an increase of between 10 and 20 percentage points would occur in indicators across the continuum of care over the 30-month period of intervention, depending in part on the topics being addressed by quality improvement teams. The continuous survey sample was calculated to have at least 80% power to detect small increases (fewer than 10 percentage points) between the beginning and end of the intervention period across the whole spectrum of maternal and newborn care; 10 percentage point increases each year of the intervention for the majority of indicators (for example, institutional delivery), and larger increases more frequently (for example, 20 percentage point increases in percent of newborns having clean cord care every four months of the intervention period).
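Sample size reasoning of this kind can be approximated with a standard two-proportion normal approximation. The sketch below is illustrative only (the function name and the design-effect handling are our own, not part of the EQUIP protocol): it estimates the power to detect a change in an indicator between two survey rounds, with an optional design effect to deflate the effective sample size under cluster sampling.

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_round, alpha=0.05, deff=1.0):
    """Approximate power to detect a change from proportion p1 to p2
    between two survey rounds of n_per_round respondents each,
    using the normal approximation for two independent proportions.
    deff > 1 inflates the variance to reflect cluster sampling."""
    nd = NormalDist()
    n_eff = n_per_round / deff           # effective sample size per round
    pbar = (p1 + p2) / 2                 # pooled proportion under H0
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # two-sided critical value
    numerator = (abs(p2 - p1) * n_eff ** 0.5
                 - z_alpha * (2 * pbar * (1 - pbar)) ** 0.5)
    denominator = (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5
    return nd.cdf(numerator / denominator)
```

For example, with roughly 152 women per district in a four-monthly round, a 20 percentage point increase (say from 50% to 70%) would be detectable with well over 80% power before any design-effect inflation; a design effect of 2 brings power down substantially, which is why district level estimates were aggregated over rounds for the smaller expected changes.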

Selection of clusters, households, and respondents

In both countries, probability methods were applied to select 24 independent probability samples of 10 clusters from each district prior to the start of survey.

In Tanzania, clusters were drawn using sub-village (vitongoji) household lists that had been compiled by a previous project in the study area in 2007 [23]. For each district, sub-villages were listed from north to south, the number of households in each sub-village cumulated, and clusters selected with probability proportional to the total number of households in the district. In the event that the selected cluster had more than 150 households, standard segmentation methods were applied [24] by a household survey mapper who visited each selected cluster prior to the survey interview team. The mapper then listed all households in the selected cluster (or segment) and systematically selected 30 households from that list using a fixed fraction (total number of households in the sub-village divided by 30).

Village level population data were not available for Uganda, and only one of the two Ugandan districts had enumeration area population data, as districts were reformed in 2011. Therefore, clusters were drawn using a government generated parish level list that stated total number of households per parish (a parish represents a group of 4 to 10 villages). For each district, parishes were listed from north to south, the number of households within parishes cumulated, and parishes selected with probability proportional to the total number of households in the district. Within each selected parish, villages were listed, allocated a random number, and the village with the lowest random number selected as the household cluster. Villages are relatively small in Uganda and all households were listed by the mapper with no segmentation, then 30 households systematically selected from the village list using a fixed fraction.
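The two-stage selection described above, systematic probability-proportional-to-size (PPS) selection of clusters from a geographically ordered list, followed by systematic selection of 30 households from a full listing, can be sketched as follows. This is an illustrative reconstruction, not the project's code; the function names and interfaces are assumptions.

```python
import random

def pps_select_clusters(units, n_clusters, rng=random):
    """Systematic PPS selection: units is a list of (name, n_households)
    pairs in geographic (e.g. north-to-south) order. A unit larger than
    the sampling interval can be selected more than once; EQUIP handled
    large clusters (>150 households) by segmentation instead."""
    total = sum(size for _, size in units)
    interval = total / n_clusters
    start = rng.uniform(0, interval)
    targets = [start + i * interval for i in range(n_clusters)]
    selected, cumulative, t = [], 0, 0
    for name, size in units:
        cumulative += size
        while t < n_clusters and targets[t] <= cumulative:
            selected.append(name)
            t += 1
    return selected

def systematic_households(household_ids, n=30, rng=random):
    """Systematically select n households from a complete listing using
    a fixed sampling fraction (listing length / n). Assumes the listing
    contains at least n households."""
    step = len(household_ids) / n
    start = rng.uniform(0, step)
    return [household_ids[int(start + i * step)] for i in range(n)]
```

The random start and fixed interval give every household an equal overall selection probability when combined with PPS selection of clusters, which is what makes the 24 monthly samples independently representative of each district.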

A list of the different data domains captured by the EQUIP continuous household and health facility survey is provided in Table 3 and includes input, process, output and outcome indicators, along the continuum of care from pre-pregnancy to the first 28 days of life of the newborn [25]. A total of 100 indicators were defined (Additional file 2). Questionnaires were developed based on well-established sequences of questions as used in DHS [9], MICS [10] and SPA [9], and also drew upon earlier work in the study areas [26]. The project held consultative meetings at national, regional and district levels prior to launching the continuous surveys in order to engage with stakeholders about the type of data they considered to be useful for decision making in maternal and newborn health.

Table 3 EQUIP continuous household and health facility survey interviews and their data domains

In both countries, all consenting household heads were interviewed about usual residents and household characteristics. Further, all resident women (aged 13 to 49 years in Tanzania and 15 to 49 years in Uganda) who gave consent were interviewed.

Repeat governmental health facility census and interviews with health staff

The EQUIP intervention targeted all government health facilities in intervention districts (representing almost exclusively the providers of routine maternal and newborn care in the study area); thus a repeat facility census was implemented rather than a sample survey of facilities. To maximise temporal alignment of data, each facility census was completed in each of the four districts once every four months, consistent with the reporting interval for household data. To try to minimise selection bias of health staff, the maternity register at each health facility was reviewed by the facility interviewer, and the member of facility staff who conducted the last delivery was invited to complete the health staff interview. If that member of staff was not at work on the day of survey, then the staff member assisting the second to last delivery was invited, and so on until an available staff respondent had been identified (Table 3).
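The fallback rule for choosing a health staff respondent, the member of staff who conducted the most recent delivery, then the second to last, and so on, is essentially a first-match search over the maternity register. A minimal sketch, with hypothetical names:

```python
def select_staff_respondent(register_staff_recent_first, on_duty_today):
    """Walk back through the maternity register (most recent delivery
    first) and return the first staff member who is on duty on the day
    of the survey; return None if no listed staff member is available."""
    for staff in register_staff_recent_first:
        if staff in on_duty_today:
            return staff
    return None
```

Anchoring selection to the register rather than to whoever is convenient is what limits selection bias: the interviewer cannot simply pick the most articulate or senior staff member present.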

All households and health facilities were mapped using a handheld global positioning system (GPS) device (Garmin eTrex, 3-meter accuracy) to capture their geographical coordinates. Coordinates were downloaded and linked by the data manager to the corresponding continuous survey data for each household and facility using unique identification numbers.

Continuous survey teams

Each country budgeted for one continuous survey team to cover two districts throughout the 30 months of data collection. Previous in-country experience suggested that one team of eight people (one mapper, four household interviewers, one facility interviewer, one team supervisor, and one driver/vehicle) could complete 20 clusters per month when each cluster comprised 30 modular household interviews, a health facility assessment, and interviews with health facility staff [27]. The recruited team supervisor in each country was educated to degree level and the interviewers educated to secondary or higher level. Selection of the survey teams prioritised applications from people with previous experience of large-scale survey work, and with family members living in the same region as the study.

The continuous survey teams were supported by other EQUIP staff. One part-time senior scientist led the method development and sample selection, and wrote the quality control protocols and analytical plans. In each country, one part-time data manager programmed the survey instruments, managed the digital data downloads, and ran frequency and consistency checks on each completed month of data. And in each country, the overall project coordinator met the survey team each month to discuss experiences from the field. Further details about routine quality control and supervision are provided below.

Quality control and supervision

The teams were trained in-house for five days prior to the survey, followed by a two-cluster pilot and a review of all procedures. Questionnaires were pre-tested prior to training and further refined following this pilot. They were implemented in Swahili in Tanzania and Lusoga in Uganda, using personal digital assistants (PDAs) [28]. The use of PDAs facilitated a number of quality control mechanisms. First, questionnaires were programmed to prevent illogical skip patterns or responses. Second, cluster and facility identification numbers were pre-programmed on the basis of geographical information, and interviewers were prompted to confirm their entries. Third, daily download of data from the PDAs to the team supervisor's laptop enabled routine frequency checks to be run each day of the survey, and queries to be addressed immediately. This back-up process also provided a high level of data security. Fourth, the team supervisor had a re-interview module on the PDA (implemented in 3 of every 30 households) that was designed as a checking mechanism for key variables. After the daily download of all data, the supervisor ran a programme written to compare re-interview responses with the original interview responses for the same household identification number and to report differences. The supervisor then used this report to discuss with interviewers why differences may have occurred. Site visits by the EQUIP coordinator and data manager were conducted once a month. Finally, the survey team in each country attended a refresher training after every four-monthly round (teams decided on a one-day duration in Uganda, three days in Tanzania). To assure teams that the continuous data were being used, after every four-monthly round they were provided with tabulations of the indicators arising from the data they had collected.
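The supervisor's re-interview check is, in essence, a per-household field-by-field comparison on key variables. A minimal illustrative sketch (the data layout and function name are assumptions, not EQUIP's actual programme):

```python
def reinterview_discrepancies(originals, reinterviews, key_vars):
    """Compare re-interview responses against the original interview for
    the same household ID and report differences on key variables.
    originals and reinterviews map household ID -> {variable: response};
    returns a list of (household_id, variable, original, reinterview)."""
    report = []
    for hh_id, redo in reinterviews.items():
        first = originals.get(hh_id)
        if first is None:
            # flag a re-interview with no matching original interview
            report.append((hh_id, "<missing original>", None, None))
            continue
        for var in key_vars:
            if first.get(var) != redo.get(var):
                report.append((hh_id, var, first.get(var), redo.get(var)))
    return report
```

Running such a comparison daily, rather than at the end of a round, is what allowed queries to be resolved while the team was still in or near the cluster concerned.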

Data management and analysis for report cards

The data management processes for the continuous surveys were similar to those for conventional periodic surveys, although they required routine operational support throughout the duration of the survey, in contrast to a one-off peak of activity. Data were downloaded from PDA to supervisor laptop, and from laptop to CD each day of survey: data remained on all three media until transferred to the secure hard drive in the project office and backed up. Once secured and backed up, data tables for each cluster were checked for completeness against a paper summary report written by the field supervisor. When all expected interviews had been transferred and confirmed by the data manager, data were anonymised. After every four-monthly round of data collection, a pre-written computer program in STATA 12 (StataCorp LP, College Station, Texas, US) was run by the data manager to produce routine tabulations of all indicators, adjusting for clustering using the svy commands. These tabulations were compiled in a district level report, and subsets of tabulations prepared as report cards.

The concept behind the report cards was that individual teams would be motivated by seeing a health topic illustrated by data from their own locality. Importantly, both demand and supply side indicators would be displayed so that community volunteers, health facility staff, and district health management could consider which behaviours they wanted to change in their communities and could begin to think about ideas for achieving those changes. By having regular access to high quality data through repeat report cards, these teams could frequently update their knowledge and track any evidence of change. The process from finishing a round of data collection to producing tabulations that could be shared as report cards for quality improvement typically took four to six weeks.
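The project's tabulations were produced in Stata using the svy commands; the same cluster adjustment can be illustrated with a Taylor-linearised estimate of a proportion. The sketch below is an illustrative analogue in Python, not the project's code:

```python
from statistics import NormalDist

def cluster_proportion(responses):
    """Estimate a proportion and 95% confidence interval from clustered
    binary data, with a standard error adjusted for clustering via
    Taylor linearization (the same idea as Stata's svy commands).
    responses maps cluster ID -> list of 0/1 outcomes."""
    n = sum(len(v) for v in responses.values())
    p = sum(sum(v) for v in responses.values()) / n
    k = len(responses)
    # cluster-level totals of residuals around the overall proportion
    z = [sum(y - p for y in v) for v in responses.values()]
    variance = (k / (k - 1)) * sum(zj ** 2 for zj in z) / n ** 2
    half_width = NormalDist().inv_cdf(0.975) * variance ** 0.5
    # clamp the interval to the valid range for a proportion
    return p, max(p - half_width, 0.0), min(p + half_width, 1.0)
```

When outcomes vary mostly between clusters rather than within them, the interval widens accordingly, which is exactly the design effect the four-monthly aggregation of 40 clusters was sized to absorb.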
The process for developing report card content was initially led by EQUIP study staff who had pre-identified priority topics for quality improvement teams to address (for example, birth preparedness during antenatal care), but as the intervention proceeded, this increasingly involved consultation with the quality improvement mentors to tailor report cards to the health topics that teams themselves identified as being a priority. For evaluation of the EQUIP intervention effects, small-area analysis techniques [14] will be applied to further explore sub-district and temporal variations in demand and supply side improvements in maternal and newborn healthcare.


Ethical clearance was obtained from local and institutional review boards: from Ifakara Health Institute through the Commission for Science and Technology (NIMR/HQ/r.8a/Vol.IX/1034), from the Uganda National Council of Science and Technology, and from the London School of Hygiene and Tropical Medicine (LSHTM; ethical clearance No. 5888). Advocacy meetings were held at the start of the project with district and sub-district authorities, and repeated in Tanzania toward the end of the 12-month period. Communities and health facilities were informed about the survey one day prior to interview by the mapper, who used information sheets in the local language. Written, informed consent to participate was obtained from each household head, woman, facility in-charge, and health staff member interviewed.


Sample selection and interviews completed

The first three rounds of data collection (120 clusters per district) took one month longer than the expected 12 months to complete in each country: November 2011 to December 2012 in Tanzania, and December 2011 to February 2013 in Uganda. In each district, 120 clusters of 30 households (10 clusters each month) were surveyed, with the exception of Tandahimba, where one cluster was missed due to civil unrest in the study area (Table 1). During the same period, all government health facilities were surveyed three times. The goal to interview one health facility staff member about a recent delivery at each facility was only realised in Tandahimba district; in Mayuge district, just 97 health staff interviews were conducted during 114 health facility interviews (Table 1). Missing interviews occurred most frequently in smaller facilities that were often staffed by a very small number of healthcare workers.

Completion of household interviews in all four districts was consistently high throughout the first year of survey when the goal was to survey a total of 3,600 households per district. The percent of sampled households with a completed household interview was 96% in Tanzanian districts, and 91% in Ugandan districts (Table 4). The percent of resident eligible women interviewed was 89% in Tanzania and 87% in Uganda (Table 4).

Table 4 EQUIP continuous household survey interviews completed by district and by four-monthly round of implementation

Generating report cards for quality improvement teams

Point estimates for indicators (Additional file 2) were tabulated after each round of data collection and entered into district-level reports together with their 95% confidence intervals. These reports were then discussed by quality improvement mentors, who selected a smaller subset of indicators that related to the current topics of interest for the quality improvement teams working at different levels in the intervention districts. The development of the report card format for each level of quality improvement team initially involved considerable piloting and consultation to determine the preferred format and content (Additional file 3). However, once the format had been established for each level, the interval between completing a four-monthly round of data collection and delivering a report card was typically four to six weeks. Contrary to the initial expectation of EQUIP staff, community level quality improvement teams showed considerable appetite for these report cards, as did facility and district level quality improvement teams, as evidenced by their repeated requests for new and updated report cards as the first year of implementation proceeded.

In addition, at the end of the first three rounds, the full indicator set was calculated using the aggregated 12 months of data, considerably reducing the width of confidence intervals around all point estimates. These estimates were recorded in a report and shared with the intervention district health management team during a meeting attended by the EQUIP data manager who could answer questions about data sources, as well as the EQUIP quality improvement lead. A similar report was generated for the comparison area district health management team but was not supported by a facilitative meeting that discussed the potential utility of the report for decision making.

Continuous survey teams

In Tanzania, the survey team lived in the main town of Tandahimba district, using an EQUIP satellite office as a base. They were paid an annual salary according to institutional rates at Ifakara Health Institute. In Uganda, the survey team lived in Mayuge town, which is not located in either study district but is where the EQUIP office is based. They, too, were paid an annual salary, according to Makerere University pay scales. In both countries, the team travelled to survey sites each day Monday to Friday in a project vehicle. Hours of work varied at their discretion according to distance to the cluster and season. For example, during the cashew nut harvest in Tanzania, the team often chose to leave the project office very early so that they arrived at the selected cluster before women left for the fields at around 7.30 a.m.; in this way, they calculated that they spent less time toward the end of the day waiting to interview women who had not yet returned from farms. After the first year of implementation, one household interviewer in Tanzania had left the eight-person team for alternative employment. No one left the Ugandan team, but sadly a facility interviewer died from a cause unrelated to her work and needed to be replaced.


The objectives of the district level continuous household and health facility survey were met and surveys implemented as planned in Uganda and Tanzania, with few deviations. At the time of writing, 12 of 24 months of survey have been completed (in 13 calendar months), with completion rates comparable to those of other high-quality surveys. Every four months, the data were synthesized into report cards to support the decision making of quality improvement teams working at the levels of community, health facility, and district health management; the survey is also on track for evaluation of the EQUIP quality improvement innovation.

Rationale for the continuous survey model in the context of quality improvement

Core to the design of the EQUIP continuous survey model was to place equal weight on a data system that could be used for prospective course correction, as well as for evaluation (results to be reported separately). Quality improvement teams in intervention districts were provided with report cards of synthesised survey data about topics that concerned them - whether the topic was new and teams wanted to understand the existing population or health facility level situation, or whether the topic had already been worked on and teams wanted to track changes in outcomes to assess the extent to which changes led to improvements. These report cards were produced once every four months showing evidence at the district level, and also once every year disaggregating evidence to the sub-district level.

The EQUIP continuous survey included both households and health facilities in each district. This was important for two reasons. First, increasing the coverage of life-saving interventions along the continuum of care, one of the core evaluation goals of EQUIP, required population-level coverage (household survey) data for measurement. However, some of the life-saving interventions for mothers and newborns cannot be captured using household data alone [11]. For example, skilled attendance at birth is typically reported as a maternal health coverage indicator from household surveys, but active management of the third stage of labour, a key life-saving behaviour performed by skilled birth attendants, cannot be estimated from household interviews because women themselves may not remember, or may not understand, the life-saving behaviour. Instead, EQUIP asked women about skilled attendance at birth (household survey), asked skilled attendants about the behaviours they carried out at the last birth they attended (health facility survey), and linked their responses at the level of the cluster. Second, improving health facilities to provide quality care to mothers and newborns, from the perspectives of both perceived quality and service readiness, was another core goal of EQUIP. Again, this required population-level information from women about their perceptions of health facility quality, and facility-level information about the availability of quality services.

The cost of implementing a continuous survey linked to quality improvement will be reported elsewhere (Manzi, forthcoming), but early analysis of costs incurred during data collection, data management, and supervision of the continuous survey in Tanzania indicated a per-household cost of US$15 in the first year of implementation. Meanwhile, a number of lessons have been learned about continuous data collection, and about continuous reporting of information, broadly summarised here under lessons about implementation and lessons about data use.

Lessons about continuous survey implementation

  1. Continuous surveys require continuous survey management. The one-off intense activity normally associated with cross-sectional surveys was still present before the launch of the survey, but implementation of a survey over a long period of time needed sustained inputs and systems that could be run by people working routine hours (for example, to check and maintain data quality, and to summarise data outcomes). One benefit of this requirement for long-term, sustained input has been to raise the profile and support the professional development of the data teams.

  2. Applying repeat probability sampling methods to a district-level sampling frame was essential to obtain representative samples each month, but it resulted in some villages being selected more than once as the continuous survey proceeded: four different sub-villages were each sampled twice in the first year of the survey in both Tandahimba and Newala districts; in Namayingo district, 17 different villages were sampled twice and 2 were sampled three times; in Mayuge district, 14 different villages were sampled twice. Because data were anonymised, it was not possible to know whether individual households were sampled repeatedly. Repeat sampling of villages never occurred in contiguous months. While returning to the same villages more than once over a period of 12 months was not problematic in itself, the survey teams needed guidance on how to explain the sampling method to community members. A continuous survey implemented over a larger geographical area would be far less likely to experience this issue.

  3. Although the questionnaire content was based on pre-existing multi-country survey tools, developments in the definition of indicators (particularly for the relatively new field of newborn health) required that questionnaires be kept up to date while maintaining internal consistency, so that different rounds of data collection would remain comparable. While this was a design challenge, it was also an advantage to be able to add indicators to the survey as new priorities were identified, for example the life-saving newborn intervention 'kangaroo mother care' implemented at the community level.

  4. The use of PDAs was important in enabling continuous surveys to deliver timely output, but any change to questionnaire content required reprogramming of the tools and thorough retesting prior to implementation. Thus, having a data manager who could also programme the survey tools was a distinct advantage. Protocols for storing data in multiple formats until the data manager had checked and secured them (for example, data kept on the individual PDA and the team supervisor's laptop, and burned to compact disc) were important, and no data loss was experienced in the first year.

  5. Surprisingly, our concern about retention of survey team members proved unfounded; this may in part be due to the relatively small geographical area of implementation, which allowed the teams a permanent home base. The teams developed strong camaraderie and cooperation: indeed, they had time to iron out the personality clashes that can arise during an intense data collection period. The inclusion of a three-week break once every four months, allowing time for data synthesis and preparation for the next round, appeared to work well in this context.
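The repeat selection described in lesson 2 is an expected property of drawing independent probability samples from the same district-level frame each month. A minimal simulation sketches the effect; the frame size and clusters-per-month figures below are hypothetical illustration values, not the EQUIP design parameters:

```python
import random

random.seed(1)  # reproducible illustration

N_VILLAGES = 150         # hypothetical district sampling frame
CLUSTERS_PER_MONTH = 10  # hypothetical clusters drawn per month
MONTHS = 12

# Each month is an independent simple random sample from the same frame,
# so the same village can be selected in more than one month.
times_selected = {}
for month in range(MONTHS):
    for village in random.sample(range(N_VILLAGES), CLUSTERS_PER_MONTH):
        times_selected[village] = times_selected.get(village, 0) + 1

repeats = sum(1 for n in times_selected.values() if n > 1)
print(f"{repeats} of {len(times_selected)} villages selected were visited more than once")
```

With 12 independent draws of 10 clusters from a 150-village frame, some villages recur by chance alone; the same design applied to a much larger frame would yield far fewer repeats, consistent with the observation in lesson 2.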

Lessons about continuous data use

The synthesis and use of continuous survey data by quality improvement teams needed considerable facilitation. People with strong quality improvement skills do not necessarily have excellent skills in data interpretation. Understanding the limits of both survey and facility-based data, feeling confident in presenting the data sources, and overcoming resistance to the idea that communities and health facilities would have an appetite for such data were all obstacles that needed to be addressed to realise the potential of continuous surveys to continuously report. The data collected included a large set of demand- and supply-side indicators in maternal and newborn health, but the indicators best suited to quality improvement were those that might be expected to change in a short period of time: a relatively small subset of all the data collected. Further, quality improvement teams repeatedly requested greater granularity in data synthesis than the continuous survey could provide at frequent intervals, necessitating that each quality improvement team collect its own monitoring data to track progress in its locality. Finally, while continuous survey methods address problems of infrequent reporting, the issue of temporality when interpreting evidence remained. Our sample size was determined by the expected number of women who had given birth in the 0 to 12 months preceding the survey, which meant that there was a delay between the implementation period and the possibility of observing effects in the household survey data, which measured behaviours up to 12 months in the past. As such, the household data could not reflect changes arising from the quality improvement innovation during the first year of data collection, a point that frustrated quality improvement teams, who preferred to see changes in real time. This frustration sometimes created tension between the learning from monitoring data and the learning from the survey data.
To address this, EQUIP tried to maximise the use of facility data in report cards, reflecting service provision in the four-month interval between repeat censuses. It should be noted that this issue of temporality also presents a complex analytic issue in that EQUIP cannot easily correlate health facility readiness (as assessed from a survey census during a four-month period) with the quality of care received by women who had a recent birth (as assessed from a household survey with a 0 to 12 month recall period).


The two district continuous surveys in Tanzania and Uganda proved feasible using a single survey team in each country during the first year of implementation, and the dual objectives of the survey design were met. An important innovation of the EQUIP method was the continuous reporting of data for course correction and evidence-informed decision making at multiple levels. Current research agendas to improve health in high-mortality, data-poor settings increasingly focus on mechanisms to enhance the use of data for decision making by in-country actors, at the same time as using data for international comparison and evaluation. Documenting the design, methods, and implementation [19] of continuously reporting high quality data contributes to the body of knowledge about what can work to support countries in achieving global health targets.

Authors' contributions

TM, JS, FM, PM, CH, SP and AR conceived of the continuous survey in the context of the EQUIP project. TM, JS, AR, and CH adapted the methodology and data collection instruments. YS, ST, DK and JA tested and refined the methods, and oversaw implementation of the work. TM and AR drafted the manuscript. All authors read and approved the final manuscript.

Additional files


  1. Countdown to 2015 maternal, newborn & child survival. Building a future for women and children: the 2012 report. 2012, http://www.countdown2015mnch.org/reports-and-articles/2012-report

  2. Lozano R, Wang H, Foreman KJ, Rajaratnam JK, Naghavi M, Marcus JR, Dwyer-Lindgren L, Lofgren KT, Phillips D, Atkinson C, Lopez AD, Murray CJ: Progress towards Millennium Development Goals 4 and 5 on maternal and child mortality: an updated systematic analysis. Lancet. 2011, 378 (9797): 1139-1165. 10.1016/S0140-6736(11)61337-8.


  3. Essential Interventions, Commodities and Guidelines for Reproductive, Maternal, Newborn and Child Health. A Global Review of the Key Interventions Related to Reproductive, Maternal, Newborn and Child Health (RMNCH). 2011 (last accessed 13/08/14)

  4. Bryce J, Daelmans B, Dwivedi A, Fauveau V, Lawn JE, Mason E, Newby H, Shankar A, Starrs A, Wardlaw T: Countdown to 2015 for maternal, newborn, and child survival: the 2008 report on tracking coverage of interventions. Lancet. 2008, 371 (9620): 1247-1258. 10.1016/S0140-6736(08)60559-0.


  5. Lawn JE, Kinney MV, Black RE, Pitt C, Cousens S, Kerber K, Corbett E, Moran AC, Morrissey CS, Oestergaard MZ: Newborn survival: a multi-country analysis of a decade of change. Health Policy Plan. 2012, 27 Suppl 3: iii6-28.


  6. Hirschhorn LR, Baynes C, Sherr K, Chintu N, Awoonor-Williams JK, Finnegan K, Philips JF, Anatole M, Bawah AA, Basinga P: Approaches to ensuring and improving quality in the context of health system strengthening: a cross-site analysis of the five African Health Initiative Partnership programs. BMC health services research. 2013, 13 Suppl 2: S8-10.1186/1472-6963-13-S2-S8.


  7. Maluka S, Kamuzora P, San Sebastian M, Byskov J, Olsen OE, Shayo E, Ndawi B, Hurtig AK: Decentralized health care priority-setting in Tanzania: evaluating against the accountability for reasonableness framework. Soc Sci Med. 2010, 71 (4): 751-759. 10.1016/j.socscimed.2010.04.035.


  8. Mutemwa RI: HMIS and decision-making in Zambia: re-thinking information solutions for district health management in decentralized health systems. Health Policy Plan. 2006, 21 (1): 40-52. 10.1093/heapol/czj003.


  9. Demographic and health surveys. Available at: http://www.measuredhs.com/ (last accessed 24 July 2013)

  10. Multiple indicator cluster survey. Available at: http://www.unicef.org/statistics/index_24302.html (last accessed 24 July 2013)

  11. Bryce J, Arnold F, Blanc A, Hancioglu A, Newby H, Requejo J, Wardlaw T: Measuring coverage in MNCH: new findings, new strategies, and recommendations for action. PLoS Med. 2013, 10 (5): e1001423-10.1371/journal.pmed.1001423.


  12. Mutale W, Chintu N, Amoroso C, Awoonor-Williams K, Phillips J, Baynes C, Michel C, Taylor A, Sherr K: Improving health information systems for decision making across five sub-Saharan African countries: Implementation strategies from the African Health Initiative. BMC health services research. 2013, 13 Suppl 2: S9-10.1186/1472-6963-13-S2-S9.


  13. Rowe AK: Potential of integrated continuous surveys and quality management to support monitoring, evaluation, and the scale-up of health interventions in developing countries. Am J Trop Med Hyg. 2009, 80 (6): 971-979.


  14. Srebotnjak T, Mokdad AH, Murray CJ: A novel framework for validating and applying standardized small area measurement strategies. Population health metrics. 2010, 8: 26-10.1186/1478-7954-8-26.


  15. External evaluation of the Peru continuous survey experiment. Available at: http://www.ghtechproject.com/files/Peru%20Evaluation,%20The%20Peru%20Continuous%20Survey%20Experiment%20December%202007.pdf (last accessed 24 July 2013)

  16. Roca-Feltrer A, Lalloo DG, Phiri K, Terlouw DJ: Rolling Malaria Indicator Surveys (rMIS): a potential district-level malaria monitoring and evaluation (M&E) tool for program managers. Am J Trop Med Hyg. 2012, 86 (1): 96-98. 10.4269/ajtmh.2012.11-0397.


  17. DHS: Measure DHS Continuous Demographic and Health Survey/Service Provision Assessment. 2013, http://www.measuredhs.com/publications/publication-dm34-other-dissemination-materials.cfm (last accessed 17/11/13)

  18. Hanson C, Waiswa P, Marchant T, Marx M, Manzi F, Mbaruku G, Rowe A, Tomson G, Schellenberg J, Peterson S, Team ES: Expanded Quality Management Using Information Power (EQUIP): protocol for a quasi-experimental study to improve maternal and newborn health in Tanzania and Uganda. Implementation science : IS. 2014, 9 (1): 41-10.1186/1748-5908-9-41.


  19. Ebrahim S, Clarke M: STROBE: new standards for reporting observational epidemiology, a chance to improve. Int J Epidemiol. 2007, 36 (5): 946-948. 10.1093/ije/dym185.


  20. Hanson C: The epidemiology of maternal mortality in southern Tanzania. London School of Hygiene and Tropical Medicine, London, UK, 2013


  21. National Bureau of Statistics (NBS) Tanzania and ICF Macro: Tanzania Demographic and Health Survey. 2011, NBS and ICF Macro, Dar-es-Salaam, Tanzania

  22. Uganda Bureau of Statistics (UBOS) and ICF International Inc: Uganda Demographic and Health Survey 2011. 2012, UBOS and ICF International Inc, Kampala, Uganda and Calverton, Maryland (last accessed 14/8/14)

  23. Armstrong Schellenberg JR, Shirima K, Maokola W, Manzi F, Mrisho M, Mushi A, Mshinda H, Alonso P, Tanner M, Schellenberg DM: Community effectiveness of intermittent preventive treatment for infants (IPTi) in rural southern Tanzania. Am J Trop Med Hyg. 2010, 82 (5): 772-781. 10.4269/ajtmh.2010.09-0207.


  24. UNICEF: Multiple indicator cluster survey manual chapter 4: designing and selecting a sample. 2000, http://www.childinfo.org/mics2_manual.html (last accessed 30/10/12)

  25. Bryce J, Victora CG, Boerma T, Peters DH, Black RE: Evaluating the scale-up for maternal and child survival: a common framework. International Health. 2011, 3: 139-146. 10.1016/j.inhe.2011.04.003.


  26. Penfold S, Hill Z, Mrisho M, Manzi F, Tanner M, Mshinda H, Schellenberg D, Armstrong Schellenberg JR: A large cross-sectional community-based study of newborn care practices in southern Tanzania. PLoS One. 2010, 5 (12): e15593-10.1371/journal.pone.0015593.


  27. Hanson K, Nathan R, Marchant T, Mponda H, Jones C, Bruce J, Stephen G, Mulligan J, Mshinda H, Schellenberg JA: Vouchers for scaling up insecticide-treated nets in Tanzania: methods for monitoring and evaluation of a national health system intervention. BMC Public Health. 2008, 8: 205-10.1186/1471-2458-8-205.


  28. Shirima K, Mukasa O, Armstrong Schellenberg JRM, Manzi F, John D, Mushi A, Mrisho M, Tanner M, Mshinda H, Schellenberg D: The use of Personal Digital Assistants for data entry at the point of collection in a large household survey in Southern Tanzania. Emerg Themes Epidemiol. 2007, 4: 5-10.1186/1742-7622-4-5.




We are grateful to the interviewees and the continuous survey teams in each country, and for the support received from district and local officials during field work. EQUIP is funded by the European Union under the FP-7 grant agreement. The EQUIP consortium includes Karolinska Institutet in Sweden (coordinating university), Makerere University in Uganda, Ifakara Health Institute in Tanzania, the London School of Hygiene and Tropical Medicine in the United Kingdom, and Evaplan in Germany. The EU, as the funding organisation, had no role in the design of the EQUIP intervention or in the research undertaken.

Equip study group

Ulrika Baker, Hudson Balidawa, Romanus Byaruhanga, Jennie Jaribu, James Kalungi, Rogers Mandu, Michael Marx, Godfrey Mbaruku, Pius Okong, Monica Okuga, Tara Tancred, Goran Tomson.


European Union (under FP-7 grant agreement n° 265827).

Author information

Authors and Affiliations


Corresponding author

Correspondence to Tanya Marchant.

Additional information

Competing interests

The authors declare that they have no competing interests.

Electronic supplementary material

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.

Reprints and permissions

About this article


Cite this article

Marchant, T., Schellenberg, J., Peterson, S. et al. The use of continuous surveys to generate and continuously report high quality timely maternal and newborn health data at the district level in Tanzania and Uganda. Implementation Sci 9, 112 (2014).
