Abstract:
In recent years, collaborative problem solving (CPS) has been one of the most commonly discussed so-called 21st-century skills, seen as central to academic and workplace success and usually linked with attempts to create innovative assessment methods. One CPS assessment that has received particular attention is that included in the 2015 Programme for International Student Assessment (PISA) (OECD, 2017). PISA aimed to assess 15-year-old students' CPS achievement, addressing a lack of internationally comparable data in this field and thus allowing countries and economies to see where their students stand in relation to students in other education systems.
A number of limitations have recently been raised concerning the ecological validity of the PISA 2015 CPS assessment, the replacement of human team members with computer agents in the tasks used, and the restriction of communication between team members to pre-defined messages. It has also been argued that when an instrument is intended for use with multiple populations, as in PISA, further validation is required to ensure that the instrument operates in the same way across and within these populations. Such an exploration is particularly important, first because of PISA's far-reaching political influence and second because of the complexity of the CPS assessment framework that was developed.
In this paper we aim to shed more light on the conceptualisation and measurement of the CPS achievement construct. We use item response theory (the Rasch model) to analyse item-level data on CPS achievement from PISA 2015, focusing on the responses of 1585 15-year-olds from England. We find evidence that the overall measure of CPS achievement can be supported, but that there are also potentially meaningful sub-scales. As the instrument is built from components of the cognitive/individual problem-solving and social/collaborative domains, it is possible that these components form sub-scales alongside the overall measure. Our results also suggest that measurement invariance holds for all items across gender, but that there is some significant differential item functioning (DIF) when students are grouped by language. We further explore and present differences in the use of these measures across groups, and conclude that ensuring validity and comparability for this measure of young people's CPS achievement is essential for making fair comparisons across different populations and sub-groups.
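For readers unfamiliar with the model, the following sketch states the standard dichotomous Rasch model on which our analysis rests; the notation (theta for person ability, b for item difficulty) is ours for illustration rather than PISA's own.

\[
P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}
\]

Here X_{ij} is person i's scored response to item j, theta_i is person i's CPS ability, and b_j is the difficulty of item j. Under this model, an item is free of differential item functioning when b_j is the same for all sub-groups of interest (e.g. boys and girls, or language groups); DIF is flagged when, conditional on ability, the estimated difficulty of an item differs significantly between groups.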
References
OECD (2017), PISA 2015 Assessment and Analytical Framework: Science, Reading, Mathematics, Financial Literacy and Collaborative Problem Solving, revised edition, PISA, OECD Publishing, Paris.