Abstract:
The value of national assessments like NAPLAN depends on the validity of the use of test data. Validity, the most important consideration in assessment, is the extent to which inferences drawn from test scores are appropriate. Even a well-constructed test becomes invalid if the results are misunderstood or misused. This definition represents a shift from seeing validity as a fixed property of a test to viewing it as an argument about the appropriateness of the inferences made from the results. Any validity enquiry therefore needs to know the proposed purpose of the test so that issues of fitness-for-purpose, result interpretation and consequences can be evaluated.
Current research has suggested that the effectiveness of NAPLAN is seriously impacted by unintended consequences including a narrowed curriculum focus in schools, excessive time spent on test preparation, increased pressure and anxiety within school communities, and the diminishing value placed on teacher judgement. This presentation will explore the multiple uses of NAPLAN data by systems, schools and classrooms at each stage of the NAPLAN assessment cycle (administration, scoring, aggregation, generalization, extrapolation, evaluation, decision and impact) in terms of validity.
While there is a history of validity research and debate regarding large-scale, national assessments in other countries, this is largely absent from considerations and conversations about testing in Australia.
In the comparatively short period of time in which Australia has been involved in large-scale national testing, no systematic enquiry into the validity of NAPLAN as an integrated concept has been conducted. This is problematic both for NAPLAN’s current on-paper form and its future as an online, adaptive test. This presentation will consider the challenges of validity for national assessment by mapping and critiquing some of the current uses of NAPLAN data. It will also consider the implications of these validity issues for the introduction of online, adaptive testing and propose a framework for the valid use of this data in an effort to prevent many of the unintended consequences currently identified in Australian research.