TALIS Starting Strong 2018 Design

Methodology
Quantitative Study
Method(s)

Overall approach to data collection: self-administered cross-sectional survey, with questionnaires administered online, on paper, or both.

Target population
  • Staff and leaders in ISCED Level 02 early childhood education and care (ECEC) centers that cater to children from three years of age up to the time they enter primary education
  • Staff and leaders in ECEC settings that cater to children under three years of age (U3)
Sample design

Stratified two-stage probability sample design

Stratification on the basis of nationally relevant criteria (e.g., different types of centers, geography, urbanization level, source of funding, language of instruction).

First stage: sampling of centers

  • Systematic random sampling with probability proportional to size (PPS) within explicit strata was used in most cases; systematic equal-probability random sampling was used where no measure of size (MOS) was available (a sketch of PPS systematic selection follows this list).
  • Samples for the field trial (FT) and the main study (MS) were selected at the same time to avoid sample overlap.
  • Replacement centers were identified at the time of sample selection (two for each sample center).
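
As a minimal sketch of the first-stage selection within one explicit stratum, the following illustrates systematic PPS sampling; the center identifiers, staff counts used as the measure of size, sample size, and random seed are hypothetical, not TALIS Starting Strong values.

```python
import random

def systematic_pps_sample(centers, n_sample, seed=2018):
    """Select n_sample centers with probability proportional to a measure of size (MOS).

    `centers` is a list of (center_id, mos) pairs, sorted by the stratification
    variables so that systematic selection also yields an implicit stratification effect.
    """
    rng = random.Random(seed)
    total_mos = sum(mos for _, mos in centers)
    interval = total_mos / n_sample          # sampling interval on the cumulative MOS scale
    start = rng.uniform(0, interval)         # random start within the first interval
    points = [start + k * interval for k in range(n_sample)]

    selected, cumulative, idx = [], 0.0, 0
    for center_id, mos in centers:
        cumulative += mos
        # a center is selected once for every sampling point falling in its MOS range
        while idx < len(points) and points[idx] <= cumulative:
            selected.append(center_id)
            idx += 1
    return selected

# hypothetical frame for one explicit stratum: (center id, staff count used as MOS)
frame = [("C001", 12), ("C002", 8), ("C003", 25), ("C004", 30), ("C005", 9),
         ("C006", 14), ("C007", 40), ("C008", 6), ("C009", 18), ("C010", 22)]
print(systematic_pps_sample(frame, n_sample=3))
```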

Second stage: sampling of staff

  • Staff were selected randomly from the list of in-scope staff in each selected center (a sketch follows this list).
  • Center leaders were asked to complete the leader questionnaire.
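
A similarly minimal sketch of the second-stage selection, assuming hypothetical staff lists; the per-center target of eight staff reflects the nominal figure given in the Sample size subsection below, and where a center lists fewer staff, all are taken.

```python
import random

def sample_staff(staff_lists, per_center=8, seed=2018):
    """Equal-probability random selection of staff within each sampled center."""
    rng = random.Random(seed)
    selected = {}
    for center_id, staff in staff_lists.items():
        if len(staff) <= per_center:
            selected[center_id] = list(staff)                    # take all in-scope staff
        else:
            selected[center_id] = rng.sample(staff, per_center)  # simple random sample
    return selected

# hypothetical in-scope staff lists for two sampled centers
staff_lists = {"C001": [f"S{i}" for i in range(1, 13)], "C002": ["S1", "S2", "S3"]}
print(sample_staff(staff_lists))
```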


Stratified sampling followed each participating country's preferred stratification strategy, reflecting the national subgroups of interest.

Sample size
  • Nominally, 180 centers
  • Nominally, at least eight staff members per center; where a center had fewer, all staff members were selected
    • An effective sample size of 400 staff was set to account for the clustering effect within selected centers (see the design-effect note after this list)
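
The relationship between the nominal cluster sample (180 centers with about eight staff each) and the effective sample of 400 staff can be expressed with the standard design-effect formula; the intraclass correlation below is an assumed illustrative value, not a figure reported for TALIS Starting Strong.

```latex
% Design effect for a cluster sample with average cluster size \bar{m}
% and intraclass correlation \rho (illustrative values only):
\[
  \mathrm{deff} = 1 + (\bar{m} - 1)\,\rho ,
  \qquad
  n_{\mathrm{eff}} = \frac{n_{\mathrm{nominal}}}{\mathrm{deff}}
\]
% With \bar{m} = 8 and an assumed \rho \approx 0.37, deff \approx 3.6,
% so n_eff \approx (180 \times 8) / 3.6 = 1440 / 3.6 \approx 400.
```
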
Data collection techniques and instruments

Field trial questionnaire

  • The field trial questionnaires included built-in experiments that tested different question formats and wordings to identify which version of a question had the better psychometric properties.

Main survey questionnaire

Languages
  • The survey instruments were translated by the national study centers of each participating country into their preferred local language.
  • The questionnaire was administered in eleven different languages for the nine participating countries (Israel and Norway conducted the survey in two languages).
  • The survey was conducted in the following languages:
    • Spanish
    • Danish
    • German
    • Icelandic
    • Arabic
    • Hebrew
    • Japanese
    • Korean
    • Bokmål
    • Nynorsk
    • Turkish
Translation procedures
  • The national study centers translated the international source version of all instruments including questionnaires and cover letters into their local language.
  • The national project managers (NPMs) then adapted the instruments, recording the adaptations on national adaptation forms (NAFs).
  • The adaptations were verified and approved by the International Study Center (ISC); the instruments were then translated, internally reviewed, and revised by the NPMs, and subsequently checked by professional translation verifiers coordinated by IEA Amsterdam.
  • NPMs implemented translation verifiers’ feedback and adjusted the layout of the national instruments as necessary.
  • The ISC verified and approved the layout of the paper instruments and provided the Online Survey System (OSS); implementation of the instruments in the online delivery system was performed by NPMs.
  • The ISC verified and approved the final online instruments using the OSS.
Quality control of operations

Measures during data collection: 

  • IEA designed and supervised standardized quality control measures for the international quality control program and recruited independent international quality observers (IQOs) to visit participating centers and interview the people responsible for coordinating survey activities in these centers.
  • National quality control programs were run separately by the national study centers, with national quality observers (NQOs) carrying out quality checks either by calling the sampled centers or by visiting them. The ISC required each NQO to personally visit at least ten centers.
  • The IEA helped the national study centers set up national quality control procedures by providing NPMs with a national quality observer manual that outlined the purpose and key components of quality monitoring.
  • The ISC complemented the quality control activities at the national and international levels by designing and administering a survey activities questionnaire, developed to elicit information about the NPMs' experiences when preparing for and conducting the TALIS Starting Strong 2018 data collection.

Measures during data processing and cleaning: 

  • The foundation for quality assurance was laid before the data were submitted to the ISC through the provision of manuals, training, and software designed to standardize a range of operational and data-related tasks, and through the verification of the content and layout of the national adaptation forms, paper questionnaires, online questionnaires, and codebooks.
  • Before the data-cleaning programs were applied to real data, they were thoroughly tested using simulated datasets containing all anticipated problems and inconsistencies.
  • To document versions and updates, all incoming data and documents were registered in a specific material-receipt database. The date of arrival was recorded, along with any specific issues meriting attention.
  • All national adaptations and all detected deviations from the international data structure were recorded in a national adaptation database and verified against the national adaptation form (NAF), the national instruments, the codebooks, and the data content.
  • Data cleaning was organized according to rules strictly and consistently applied to all national datasets, making deviations from the cleaning sequence impossible.
  • All systematic and manual corrections made to the data files were implemented and recorded in dedicated cleaning reports for the TALIS Starting Strong International Consortium and for NPM review and approval.

On completion of data cleaning for each participating country, all cleaning checks were repeated from the beginning to detect any problems that might have been inadvertently introduced during the cleaning process itself.
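
The fixed cleaning sequence and the repeated final check can be illustrated with a minimal sketch; the rules, variable names, and records below are hypothetical and stand in for the actual cleaning software, which is not described in detail here.

```python
def fix_out_of_range(record, report):
    # hypothetical rule: implausible ages (outside 18-80) are set to missing (None)
    age = record.get("age")
    if age is not None and not 18 <= age <= 80:
        report.append((record["id"], "age", age, None))
        record["age"] = None

def fix_inconsistent_role(record, report):
    # hypothetical rule: leaders must not appear in the staff file
    if record.get("role") == "leader":
        report.append((record["id"], "role", "leader", "staff"))
        record["role"] = "staff"

# the rule order is fixed and applied identically to every national dataset
CLEANING_SEQUENCE = [fix_out_of_range, fix_inconsistent_role]

def clean(dataset):
    report = []                      # corrections documented for Consortium/NPM review
    for record in dataset:
        for rule in CLEANING_SEQUENCE:
            rule(record, report)
    return dataset, report

# hypothetical national staff dataset
data = [{"id": 1, "age": 250, "role": "staff"}, {"id": 2, "age": 34, "role": "leader"}]
cleaned, report = clean(data)
print(report)
# repeat all checks after cleaning to confirm no new problems were introduced
_, recheck = clean(cleaned)
assert not recheck, "cleaning introduced new problems"
```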