IELS 2018 Design

Methodology
  • International large-scale sample survey of children, their parents and teachers
  • Child assessment allowing monitoring of trends across successive cycles
  • Predominantly quantitative, with qualitative information presented in descriptive country chapters
Method(s)
  • Overall approach to data collection
    • Proctored assessment of children’s skills
    • Self-administered questionnaires for parents and teachers
  • Specification
    • Cross-sectional
    • Some trend reporting possible across cycles
Target population

All children enrolled in centers/schools who were five years old at the time of the assessment

Sample design
Stratified two-stage cluster sampling design, optimized for the target population of children

First stage: sampling centers/schools

  • Selection probability proportional to the size of the center/school (PPS), i.e., the expected number of eligible children
  • Optional: stratification of centers/schools according to (demographic) variables of interest (e.g., center/school type or source of funding, level of urbanization, region of the country), either explicit or implicit
  • Random-start fixed-interval systematic sampling
  • Centers/Schools sampled at the same time for field trial and main data collection
  • For each sampled center/school, two replacement centers/schools were assigned where possible
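
The first-stage selection described above can be sketched as follows. This is an illustrative implementation of random-start fixed-interval systematic PPS sampling only, not the IEA sampling software; the school frame, sizes, and sample count are invented for the example.

```python
import random

def pps_systematic_sample(schools, n_sample, seed=None):
    """Select n_sample schools with probability proportional to size (PPS)
    using random-start fixed-interval systematic sampling.

    schools: list of (school_id, expected_eligible_children) pairs,
             assumed already sorted by the implicit stratification variables.
    """
    rng = random.Random(seed)
    total = sum(size for _, size in schools)
    interval = total / n_sample           # fixed sampling interval
    point = rng.uniform(0, interval)      # random start in [0, interval)

    selected, cumulative = [], 0.0
    for school_id, size in schools:
        cumulative += size
        # A school is hit whenever a selection point falls in its size range;
        # schools larger than the interval would become certainty selections.
        while point < cumulative and len(selected) < n_sample:
            selected.append(school_id)
            point += interval
    return selected

# Hypothetical frame: id and expected number of eligible five-year-olds
frame = [(f"school_{i:03d}", size) for i, size in
         enumerate([12, 30, 8, 45, 20, 15, 60, 25, 10, 35])]
print(pps_systematic_sample(frame, n_sample=4, seed=42))
```

Sorting the frame by stratification variables before drawing is what makes the stratification "implicit": the fixed interval then spreads the sample across strata automatically.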


Second stage: sampling children

Within centers/schools agreeing to participate:

  • Systematic random sampling was used to select children.
  • Each child had an equal selection probability within the center/school.
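
The second-stage selection can be sketched the same way. This is an illustrative equal-probability systematic draw from a class roster, not the WinW3S software; the roster and target size of 15 follow the sample-size specification below, but the helper itself is an assumption.

```python
import random

def sample_children(children, n_target=15, seed=None):
    """Equal-probability systematic sample of children within one
    center/school: take all children if there are n_target or fewer,
    otherwise select children at a fixed interval from a random start."""
    rng = random.Random(seed)
    if len(children) <= n_target:
        return list(children)
    interval = len(children) / n_target
    start = rng.uniform(0, interval)
    # Each child is covered by exactly one potential selection point, so
    # every child has the same selection probability n_target / len(children).
    picks = [int(start + i * interval) for i in range(n_target)]
    return [children[p] for p in picks]

roster = [f"child_{i:02d}" for i in range(40)]  # hypothetical roster
print(len(sample_children(roster, seed=7)))      # 15
```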


General notes

  • Sampling of centers/schools was conducted by the sampling team at IEA Hamburg.
  • Sampling procedures within the center/school were carried out by the national study centers, using the Within-school Sampling Software for Windows (WinW3S) provided by IEA.
Sample size

Intended sample size per country

  • Approx. 3,000 children
  • A minimum of 200 centers/schools
  • In each sampled center/school, selection of 15 children (or all children in centers/schools with fewer than 15 children), i.e., up to 200 × 15 = 3,000 children per country

Total achieved

  • Almost 7,000 children and more than 5,000 parents participated in the study.
  • Almost 6,500 staff questionnaires were completed.
Data collection techniques and instruments

Child assessments

Direct child assessment

  • A study administrator conducted the assessment with each child individually
    • Using electronic tablets
    • 15 minutes per domain
    • Four domains administered over two assessment days
  • Types of questions
    • multiple choice
    • free form
    • short response
    • display stimulus
    • re-usable templates
    • complex dynamic items, interactive on-screen element
  • Achievement items
    • A total of 128 items across four domains and 13 sub-domains
    • Items per domain
      • Emergent Literacy: 21 items
      • Emergent Numeracy: 22 items
      • Self-Regulation: 69 items
      • Empathy: 16 items

Indirect assessment: conducted as part of the

  • Parent questionnaire
  • ECEC staff or teacher questionnaire

 

Contextual information about the household and the child

  • Parent questionnaire
    • Completed either online or on paper
    • 24 questions containing 177 items
    • Time to complete approximately 40 minutes
  • ECEC staff or teacher questionnaire
    • Completed either online or on paper
    • Two sections containing 12 questions and 69 items
    • Time to complete approximately 20 minutes
Languages

The assessment instrument and the staff questionnaire were administered in three languages, the parent questionnaire in four:

  • English (ENG, USA): direct assessment, parent and staff questionnaire
  • Estonian (EST): direct assessment, parent and staff questionnaire
  • Russian (EST): direct assessment, parent and staff questionnaire
  • Spanish (USA): parent questionnaire
Translation procedures
  • Source materials for all instruments were developed in English.
  • Two independent translators translated the source material into the target language.
  • These two translations were merged by a third person into a single national version.
  • The translated/adapted versions of the instruments were verified by the linguistic quality assurance provider cApStAn.
Quality control of operations

Measures during data collection

  • National project managers (NPMs) in each participating country were responsible for data collection
  • Standardized survey operation procedures: step-by-step documentation of all operational activities provided with manuals
  • Full-scale field test of all instruments and operational procedures (in each participating country)
  • Appointment of a national data manager (NDM) to oversee and implement all data tasks
  • National Quality Assurance Monitoring Program
  • International Quality Assurance Monitoring Program
  • Training of NPMs, center/school coordinators, study administrators, national quality assurance monitors (NQAMs)
  • Provision of software tools for supporting activities
  • Survey activities questionnaire (SAQ) to be completed by NPMs
  • Double data entry for questionnaires
  • Visits to centers/schools by international quality assurance monitors (IQAMs)
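
Double data entry means each paper questionnaire is keyed twice and the two files are compared, with discrepancies resolved against the original document. A minimal sketch of such a comparison (record IDs, field names, and values are invented):

```python
def compare_entries(entry_a, entry_b):
    """Compare two independent keyings of the same questionnaires and
    report every field where the entered values disagree."""
    discrepancies = []
    for record_id in sorted(set(entry_a) | set(entry_b)):
        a = entry_a.get(record_id, {})
        b = entry_b.get(record_id, {})
        for field in sorted(set(a) | set(b)):
            if a.get(field) != b.get(field):
                # (record, field, first keying, second keying)
                discrepancies.append((record_id, field, a.get(field), b.get(field)))
    return discrepancies

# Hypothetical double keyings of two parent questionnaires
first  = {"P001": {"q1": "2", "q2": "5"}, "P002": {"q1": "1", "q2": "3"}}
second = {"P001": {"q1": "2", "q2": "4"}, "P002": {"q1": "1", "q2": "3"}}
print(compare_entries(first, second))  # [('P001', 'q2', '5', '4')]
```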

Measures during data processing and cleaning

  • Thorough testing of all data cleaning programs with simulated data sets
  • Material receipt database
  • National adaptation database
  • Standardized, iterative four-step cleaning process
    • Documentation and structure check
    • Identification variable (ID) and linkage cleaning
    • Background cleaning (resolving inconsistencies in questionnaire data)
    • Valid range checks
  • Repetition of data cleaning until all data were shown to be consistent and comparable
  • Identification of irregularities in data patterns and correction of data errors
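
As an illustration of the valid range check step above, the sketch below flags every value outside its documented range. The variable names and ranges are invented for the example; real ranges come from the study codebooks, and missing-data codes would be handled separately.

```python
# Hypothetical valid ranges, not taken from the IELS codebooks:
# age in months for five-year-olds, and a 1-5 questionnaire scale.
VALID_RANGES = {"child_age_months": (60, 71), "q_items": (1, 5)}

def range_check(records, ranges=VALID_RANGES):
    """Return (record index, variable, value) for every value that falls
    outside its documented valid range."""
    flags = []
    for i, rec in enumerate(records):
        for var, value in rec.items():
            if var in ranges:
                lo, hi = ranges[var]
                if not (lo <= value <= hi):
                    flags.append((i, var, value))
    return flags

data = [{"child_age_months": 64, "q_items": 3},
        {"child_age_months": 80, "q_items": 6}]
print(range_check(data))  # [(1, 'child_age_months', 80), (1, 'q_items', 6)]
```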