Data extracted | Description
---|---
Instrument development | Methods used to generate items (e.g., items derived from existing instruments; new items generated from interview data or a comprehensive review of theory)
 | Methods used to refine instrument |
Administration & scoring | Method of administration (e.g., self-administered, facilitated) |
 | Feasibility of administration (e.g., researcher time, resources) |
 | Acceptability to respondents (e.g., views on burden and complexity) |
 | Methods of scoring and analysis |
Measurement properties | Methods and findings of assessments of: |
 | Content validity (e.g., clear description of content domain and theoretical basis, expert assessment of items for relevance and comprehensiveness) |
 | Construct validity |
 | - Hypothesis testing (e.g., whether scores on the instrument converge with measures of theoretically related variables, discriminate between groups, predict relevant outcomes) |
 | - Instrument structure (e.g., using factor analytic methods) |
 | Reliability (e.g., internal consistency, stability over time, inter-rater) |
 | Responsiveness |
Other assessments | Interpretability (potential for ceiling and floor effects; guidance on what constitutes an important change or difference in scale scores) |
 | Generalizability (sampling methods, description of sample, and response rate reported) |
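Several of the measurement properties above are routinely summarized with standard statistics; internal consistency, for instance, is most often reported as Cronbach's alpha. The sketch below is illustrative only (the function name, data shape, and use of NumPy are assumptions, not drawn from the table): it takes a respondents-by-items score matrix and returns alpha from the item and total-score variances.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score),
    using sample variances (ddof=1). Illustrative helper, not from the source.
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Sanity check: perfectly correlated items give alpha = 1 (up to float error).
rng = np.random.default_rng(0)
x = rng.normal(size=(40, 1))
print(cronbach_alpha(np.hstack([x, x, x])))
```

In practice a review would extract the reported alpha (and the sample it was computed on) rather than recompute it, but the formula clarifies what "internal consistency" in the table refers to.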