REDS Design

Methodology
Quantitative Study
Method(s)

Overall approach to data collection

  • Cross-sectional survey
  • Self-administered surveys for students, teachers, and school principals
Target population

Students

All students enrolled in the grade that represents eight years of schooling, counting from the first year of ISCED level 1.

 

Academic year

In most countries, the academic year changed between the reference period and the survey administration period. Hence, whenever questions referred to the reference period, grade 8 students reflected on a situation they had experienced in grade 7. In Kenya, the academic year had been extended in response to the interruptions caused by COVID-19; therefore, students who were in grade 7 during survey administration had already been in grade 7 during the reference period.

 

Teachers

All teachers who had taught students of the target population during the reference period and who were still teaching at the same schools during survey administration.

 

Schools

All schools in which students of the target population described above could be found.

Sample design
Two-stage stratified cluster design

First stage: Selecting schools

  • In most countries, the selection probability of schools was proportional to their size (PPS), as measured by the number of target grade students (see the sketch following this list)
  • Exceptions:
    • For Rwanda, where only principals were asked to participate, a systematic random sample of schools was drawn.
    • For Uzbekistan, the selection probability was proportional to the number of grade 4 students.
    • Additional sampling stages were used in the Russian Federation (three stages) and India (four stages)
  • For some countries, pre-existing samples from other IEA large-scale assessments (ICILS 2018, TIMSS 2019, or ICCS 2022) were used.
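
The first-stage PPS selection can be illustrated with a minimal sketch of PPS systematic sampling. This is not the procedure implemented in IEA's sampling software: stratification and the handling of very large (certainty) schools are omitted, and the sampling frame, enrolment figures, and helper name pps_systematic_sample are invented for the example.

  import random

  def pps_systematic_sample(schools, n_sample):
      # Select n_sample schools with probability proportional to size (PPS)
      # via systematic sampling with a random start on the cumulative size scale.
      # `schools` is a list of (school_id, number_of_target_grade_students) pairs.
      total_size = sum(size for _, size in schools)
      interval = total_size / n_sample
      start = random.uniform(0, interval)
      selection_points = [start + i * interval for i in range(n_sample)]

      selected, cumulative, p = [], 0, 0
      for school_id, size in schools:
          cumulative += size
          # A school is hit whenever a selection point falls inside its size range,
          # so larger schools have a proportionally higher chance of selection.
          while p < len(selection_points) and selection_points[p] <= cumulative:
              selected.append(school_id)
              p += 1
      return selected

  # Illustrative frame: 400 schools with 15-120 target grade students each
  frame = [(f"school_{i:03d}", random.randint(15, 120)) for i in range(400)]
  sampled_schools = pps_systematic_sample(frame, n_sample=150)
  print(len(sampled_schools))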

 

Second stage: Selecting students and/or teachers and/or principals

  • Within participating schools,
    • 20 students were randomly selected among all the enrolled students in the target grade, and
    • 20 teachers were randomly selected from all teachers teaching in the target grade during the reference period.
  • In schools with fewer eligible students or teachers, all were selected (both rules are illustrated in the sketch following this list).
  • The principal of each sampled school was asked to complete the school questionnaire.
  • Exceptions in specific countries:
    • In Denmark and Slovenia, a grade 8 class was randomly selected; all students in the selected class were asked to participate.
    • In Burkina Faso, Ethiopia, and Kenya, within-school sampling did not use IEA’s WinW3S software, deviating from sampling procedure requirements.
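
The within-school selection step can be sketched as follows. This is only an illustration of the general rule described above, not the WinW3S procedure used in the study; the lists of eligible students and teachers and the helper name select_within_school are invented for the example.

  import random

  def select_within_school(eligible, n_target=20):
      # Randomly select up to n_target persons; if fewer are eligible, all are selected.
      if len(eligible) <= n_target:
          return list(eligible)
      return random.sample(eligible, n_target)

  # Illustrative lists for one sampled school
  students = [f"student_{i}" for i in range(1, 35)]   # 34 enrolled target grade students
  teachers = [f"teacher_{i}" for i in range(1, 12)]   # 11 eligible teachers

  sampled_students = select_within_school(students)   # 20 of the 34 students
  sampled_teachers = select_within_school(teachers)   # all 11 teachers, since fewer than 20
  print(len(sampled_students), len(sampled_teachers))
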
Sample size

Per country (intended)

  • At least 150 schools
  • 20 students and 20 teachers per school (i.e., an intended sample of roughly 3,000 students and 3,000 teachers per country)

 

Across all countries (achieved)

Data collection techniques and instruments

National questionnaire

  • Completed under the oversight of the national center
  • Completed by all countries

 

School questionnaire

  • Completed by or under the oversight of the school principal
  • Implemented by all countries

 

Teacher questionnaire

  • Completed by teachers
  • Rwanda did not implement this questionnaire.
  • Teachers were asked to focus their answers on a target class, defined by the subject they taught most in the target grade during the COVID-19 disruption.

 

Student questionnaire

  • Completed by students in the target grade
  • India, Rwanda, and Uruguay did not implement this questionnaire.

 

The student, teacher, and school questionnaires were administered online via a standard web browser, using the IEA Online Survey System (IEA OSS) software. The national questionnaire was administered as a Word document sent by email. Paper versions of the questionnaires were provided to countries on request.

 

Additional note

  • Data collection stretched over eight months altogether (December 2020 to July 2021) but was completed within three months in most countries (exception: Denmark). It was conducted in two waves:
    • Wave 1: Denmark, the Russian Federation, Slovenia, the United Arab Emirates, and Uzbekistan
    • Wave 2: Burkina Faso, Ethiopia, India, Kenya, Rwanda, and Uruguay (Uruguay administered the first version of the questionnaire)
  • The waves were defined by the scheduled dates of data collection activities: Wave 1 was scheduled to finish by the end of 2020, and Wave 2 by March/April 2021. Several countries, however, ended data collection after the intended end date.
  • Two slightly different versions of the questionnaires were used.
    • The first version was finalized in October 2020, while the recruitment of additional participants was still ongoing.
    • Small adjustments were made to the second and final version to increase relevance for countries in which remote online teaching was not possible.
Techniques
  • Self-administered questionnaires
Languages
  • Administration of the questionnaires (= study instruments) in 12 different languages
  • The most common language was English, which was used in 6 countries (Kenya, Rwanda, the United Arab Emirates, Ethiopia (school and teacher questionnaires), and India (school questionnaire))
Translation procedures
  • Development of an international version of all questionnaires in English.
  • Translation into applicable languages of instruction by national research coordinators (NRCs).
  • Translation verification by IEA to ensure the international comparability of all country data.
Quality control of operations

Measures during data collection

  • National Research Coordinator (NRC) in each country responsible for data collection
  • Standardized survey operation procedures: step-by-step documentation of all operational activities provided with manuals
  • Provision of software tools to support these activities (e.g., sampling and tracking classes, students, and teachers; administering the school, teacher, and student questionnaires online and on paper; creating data files)
  • Training of NRCs and their staff

 

Measures during data processing and cleaning

  • Testing of all data cleaning programs
  • Registration of all incoming data and documents in a specific database
  • Standardized cleaning process
  • National adaptation database
  • Repetition of data cleaning and comparison of new data sets with preceding versions
  • Identification of irregularities in data patterns and correction of data errors (a minimal illustration follows this list)
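
The kind of checks behind the last two bullets can be illustrated with a small sketch. The record layout, variable names, valid value set, and helper functions below are invented for the example and do not reflect the actual cleaning programs.

  # Hypothetical record layout: one dict per student, with an ID and a questionnaire variable
  VALID_VALUES = {1, 2, 3, 4, 9}   # assumed valid codes; 9 = omitted

  def flag_irregular(records, variable, valid=VALID_VALUES):
      # Return the IDs of records whose value for `variable` falls outside the valid set.
      return [r["student_id"] for r in records if r.get(variable) not in valid]

  def compare_versions(old, new, variable):
      # Report records whose value changed between two cleaning rounds.
      old_by_id = {r["student_id"]: r.get(variable) for r in old}
      return {r["student_id"]: (old_by_id.get(r["student_id"]), r.get(variable))
              for r in new if old_by_id.get(r["student_id"]) != r.get(variable)}

  old_round = [{"student_id": 1001, "q1": 3}, {"student_id": 1002, "q1": 7}]
  new_round = [{"student_id": 1001, "q1": 3}, {"student_id": 1002, "q1": 9}]
  print(flag_irregular(old_round, "q1"))               # [1002] -> 7 is outside the valid set
  print(compare_versions(old_round, new_round, "q1"))  # {1002: (7, 9)}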