PISA 2012 Design

Methodology
Quantitative Study
Method(s)
  • Overall approach to data collection
    • Proctored assessment and self-administered questionnaire
  • Specification
    • Cross-sectional, every 3 years
Target population

15-year-old students enrolled in an educational institution at Grade 7 or higher in their respective countries and economies, including those:

  • Enrolled full-time in an educational institution
  • Enrolled in an educational institution but attending only on a part-time basis
  • Enrolled in a vocational training program, or any other related type of educational program
  • Attending a foreign school within the country (as well as students from other countries attending any of the programs in the first three categories).
Sample design
  • The international PISA target population in each participating country and economy consisted of 15-year-old students attending an educational institution in Grade 7 or higher.
  • In all but one country (the Russian Federation), the sampling design used for the PISA assessment was a two-stage stratified sample design.

 

The first-stage sampling units

  • Consisted of individual schools with 15-year-old students.
  • Schools were sampled systematically from a comprehensive national list of all PISA-eligible schools – the school sampling frame – with probabilities that were proportional to a measure of size.
  • The measure of size was a function of the estimated number of PISA-eligible 15-year-old students enrolled in the school. This is referred to as systematic probability proportional to size (PPS) sampling.
  • Prior to sampling, schools in the sampling frame were assigned to mutually exclusive groups based on school characteristics (explicit strata) specifically chosen to improve the precision of sample-based estimates.
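To make this school-selection step concrete, here is a minimal sketch of systematic PPS sampling within explicit strata. It is purely illustrative: the sampling frame, stratum labels, measure-of-size values, and per-stratum sample sizes are invented for the example, and it is not the consortium's operational sampling software.

```python
"""Illustrative systematic PPS school sampling within explicit strata."""
import random
from itertools import groupby

def systematic_pps(schools, n_sample):
    """Select n_sample schools with probability proportional to 'mos'."""
    total_mos = sum(s["mos"] for s in schools)
    interval = total_mos / n_sample              # fixed sampling interval
    start = random.uniform(0, interval)          # random starting point
    points = iter(start + i * interval for i in range(n_sample))
    point = next(points)
    selected, cumulative = [], 0.0
    for school in schools:
        cumulative += school["mos"]
        # A very large school can be hit by more than one selection point
        # (a "certainty" selection); the sketch simply records each hit.
        while point is not None and point <= cumulative:
            selected.append(school)
            point = next(points, None)
    return selected

# Hypothetical sampling frame: each school carries an explicit stratum
# and a measure of size (estimated number of PISA-eligible 15-year-olds).
frame = [
    {"id": f"SCH{i:03d}",
     "stratum": "lower_secondary" if i % 2 else "upper_secondary",
     "mos": random.randint(5, 300)}
    for i in range(1, 501)
]

# Select schools separately within each explicit stratum.
frame.sort(key=lambda s: s["stratum"])
sample = []
for stratum, group in groupby(frame, key=lambda s: s["stratum"]):
    sample.extend(systematic_pps(list(group), n_sample=75))

print(f"{len(sample)} schools selected across strata")
```

Because the selection points are spaced at a fixed interval through the cumulated measure of size, larger schools are more likely to be selected; combined with a fixed number of students sampled per school at the second stage, this keeps overall student selection probabilities roughly comparable.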

 

The second-stage sampling units in countries using the two-stage design were students within sampled schools.

  • Once schools were selected to be in the sample, a complete list of each sampled school’s 15-year-old students was prepared.
  • For each country, a target cluster size (TCS) was set, typically 35 students, although countries could use other values with approval.
  • From each list of students larger than the TCS, a sample of TCS students (typically 35) was selected with equal probability.
  • For lists smaller than the TCS, all students on the list were selected.
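A minimal sketch of this within-school selection rule, under hypothetical school and student identifiers (it simply applies the two cases in the list above):

```python
"""Second sampling stage: equal-probability student sampling within schools."""
import random

def sample_students(student_list, tcs=35):
    """Equal-probability sample of tcs students, or the whole list if smaller."""
    if len(student_list) <= tcs:
        return list(student_list)
    return random.sample(student_list, tcs)

# Hypothetical lists of PISA-eligible 15-year-olds in two sampled schools.
school_lists = {
    "SCH001": [f"SCH001-{i:03d}" for i in range(120)],   # larger than the TCS
    "SCH002": [f"SCH002-{i:03d}" for i in range(22)],    # smaller than the TCS
}

for school_id, students in school_lists.items():
    selected = sample_students(students, tcs=35)
    print(school_id, len(selected), "students selected")
```
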
Sample size

Per country/economy

  • School sample: at least 150 schools
  • Student sample: between 4,500 and 10,000 students

 

In total

Approximately 510,000 students

Data collection techniques and instruments

Assessment in mathematics, reading, and science (2 hours total, 13 linked booklets)

  • In a range of countries and economies, an additional 40 minutes were devoted to the computer-based assessment of mathematics and reading.
  • Multiple-choice and open-ended questions were utilized.
  • Test items were organized in groups based on a passage setting out a real-life situation.
  • A total of approximately 390 minutes of test items was covered, with different students taking different combinations of test items.
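As a rough consistency check on the figures above (this cluster arithmetic is an inference from the numbers in this outline, not something the outline states): 390 minutes of item material split into 30-minute clusters gives 13 clusters, and a 2-hour booklet then holds 4 clusters, so each student sees only about 120 of the 390 minutes of items, while clusters that recur across booklets link the 13 forms to a common scale.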

 

Background questionnaires

  • Students answered a background questionnaire:
    • It provided information about themselves and their homes.
    • It took 30 minutes to complete.
  • School principals were given a 20-minute questionnaire about their schools.
  • In some countries and economies, optional short questionnaires were administered to:
    • Parents, to provide further information on past and present reading engagement in the student's home;
    • Students, to provide information on their access to and use of computers, as well as their educational history and aspirations.

 

Rotation design

  • With 13 different booklets rotated across each group of 35 students, no more than 3 students received the same booklet.
  • Booklets were allocated to individual students according to a random selection process.
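The "no more than 3 students per booklet" property follows from ceil(35 / 13) = 3 when booklets are rotated evenly over a group. The sketch below shows one way such a balanced random allocation could be implemented; the student identifiers are hypothetical and this is not the consortium's actual assignment procedure.

```python
"""Illustrative booklet rotation for one assessment group: 13 booklets,
35 students. Cycling through the booklet numbers and then shuffling keeps
the allocation balanced, so no booklet goes to more than ceil(35/13) = 3
students."""
import random
from collections import Counter
from math import ceil

def allocate_booklets(student_ids, n_booklets=13):
    """Randomly assign booklet numbers 1..n_booklets as evenly as possible."""
    booklets = [(i % n_booklets) + 1 for i in range(len(student_ids))]
    random.shuffle(booklets)
    return dict(zip(student_ids, booklets))

students = [f"STU{i:02d}" for i in range(1, 36)]          # one group of 35
allocation = allocate_booklets(students)

counts = Counter(allocation.values())
assert max(counts.values()) <= ceil(len(students) / 13)   # at most 3 per booklet
print(counts)
```
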
Techniques
  • Achievement test (student test)
  • Questionnaire
Languages
  • The assessment instrument was administered in 48 languages.
  • The most common languages:
    • English (17 countries)
    • Spanish (8 countries)
    • German (7 countries)
Translation procedures
  • Development of two source versions of the assessment instruments, one in English and one in French (except for the financial literacy and reading component skills options, and the operational manuals, which were provided only in English)
  • Double translation design
  • Preparation of detailed instructions for the translation of the instruments for the field trial and their subsequent review before the main survey
  • Preparation of translation and adaptation guidelines
  • Training of national staff in charge of the translation and/or adaptation of the instruments
  • Verification of the national versions by independent reviewers appointed by the consortium.
Quality control of operations
  • Procedures for sampling, translation, survey administration, and data processing were developed in accordance with international technical standards and fully documented.
  • Quality-monitoring activities observed and recorded any deviations from agreed-upon procedures during the implementation of the survey, and included:
    • Field trial and main survey review
    • A final visual check
    • National center quality monitor visits and consultations
    • PISA quality monitor visits
    • Test administration
    • A post-final optical check.
  • All quality assurance data collected during the cycle were entered and compiled in a central data adjudication database.
    • The Technical Advisory Group (TAG) and the sampling referee reviewed this database to make country-by-country evaluations on the quality of field operations, printing, translation, school and student sampling, and coding.
    • The final report by the TAG experts was then used for data adjudication.