Abstract:
Over the past decades, the use of technology to enhance students’ learning has been actively promoted worldwide. Teachers’ Technological Pedagogical Content Knowledge (TPACK) profiles are pivotal in this effort and significantly influence educational outcomes. TPACK is widely recognized as a critical indicator of the success of technology-related educational policies and reforms. Consequently, numerous questionnaire-based tools for assessing teachers’ TPACK have been developed. However, relatively few studies have examined the adequacy of the validity evidence provided when these tools were developed. To fill this gap, the current study examines the validity evidence of questionnaire-based tools that assess teachers’ TPACK by applying an argument-based validation approach. The research consists of two phases. First, a dataset of research articles using questionnaire-based tools for assessing teachers’ TPACK was compiled from studies conducted between January 2006 and June 2023. A total of 353 articles were selected and coded according to specific criteria to identify the questionnaire-based TPACK tools most commonly applied in prior research. Second, through the lens of the Argument-based Validation Framework, a thematic analysis was conducted to examine the validity evidence of these tools.
The findings indicate that the most commonly used questionnaire-based TPACK tools derive from the studies of Schmidt et al. (2009), Chai et al. (2011), and Yurdakul et al. (2012). Scoring, generalization, extrapolation, and implications are regarded as the key component categories for assessing the validity evidence of these tools. In addition, the research found that evidence of the consequences of assessment, within the implications component, was lacking.
The research offers several implications for educational researchers and policymakers interested in developing useful questionnaire-based tools. First, a thorough consideration of the consequences of assessment should be included in the evaluation of teachers’ TPACK. Second, researchers and educational institutions should incorporate the consequences of assessment as a crucial component of the validation process. Third, by ensuring adequate validity evidence for TPACK tools, policymakers and educators can rely on more accurate and comprehensive data, leading to better-informed decisions regarding educational policies and reforms and, ultimately, improved student learning experiences.