Context Matters: Accounting for Item Features in the Assessment of Citizen Scientists’ Scientific Reasoning Skills

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Till Bruckermann
  • Tanja Straka
  • Milena Stillfried
  • Moritz Krell

External Research Organisations

  • Technische Universität Berlin
  • Leibniz Institute for Zoo and Wildlife Research (IZW)
  • IPN - Leibniz Institute for Science and Mathematics Education at Kiel University

Details

Original language: English
Article number: 21
Number of pages: 15
Journal: Citizen Science: Theory and Practice
Volume: 6
Issue number: 1
Publication status: Published - 25 Nov 2021

Abstract

Citizen science (CS) projects engage citizens for research purposes and promote individual learning outcomes such as scientific reasoning (SR) skills. SR refers to participants’ skills to solve problems scientifically. However, the evaluation of CS projects’ effects on learning outcomes has suffered from a lack of assessment instruments and resources. Assessments of SR have most often been validated in the context of formal education. They do not contextualize items to be authentic or to represent a wide variety of disciplines and contexts in CS research. Here, we describe the development of an assessment instrument that can be flexibly adapted to different CS research contexts. Furthermore, we show that this assessment instrument, the SR questionnaire, provides valid conclusions about participants’ SR skills. We found that the deep-structure and surface features of the items in the SR questionnaire represent the thinking processes associated with SR to a substantial extent. We suggest that practitioners and researchers consider these item features in future adaptations of the SR questionnaire. This will most likely enable them to draw valid conclusions about participants’ SR skills and to gain a deeper understanding of participants’ SR skills in CS project evaluation.

Keywords

    Assessment, Evaluation, Explanatory Rasch model, Learning outcomes, Science inquiry skills, Scientific reasoning

Cite this

Context Matters: Accounting for Item Features in the Assessment of Citizen Scientists’ Scientific Reasoning Skills. / Bruckermann, Till; Straka, Tanja; Stillfried, Milena et al.
In: Citizen Science: Theory and Practice, Vol. 6, No. 1, 21, 25.11.2021.

Bruckermann T, Straka T, Stillfried M, Krell M. Context Matters: Accounting for Item Features in the Assessment of Citizen Scientists’ Scientific Reasoning Skills. Citizen Science: Theory and Practice. 2021 Nov 25;6(1):21. doi: 10.5334/cstp.309
Bruckermann, Till; Straka, Tanja; Stillfried, Milena et al. / Context Matters: Accounting for Item Features in the Assessment of Citizen Scientists’ Scientific Reasoning Skills. In: Citizen Science: Theory and Practice. 2021; Vol. 6, No. 1.
Download citation (BibTeX)
@article{44a4f97c872448278193a5035452b6f6,
title = "Context Matters: Accounting for Item Features in the Assessment of Citizen Scientists{\textquoteright} Scientific Reasoning Skills",
abstract = "Citizen science (CS) projects engage citizens for research purposes and promote individual learning outcomes such as scientific reasoning (SR) skills. SR refers to participants{\textquoteright} skills to solve problems scientifically. However, the evaluation of CS projects{\textquoteright} effects on learning outcomes has suffered from a lack of assessment instruments and resources. Assessments of SR have most often been validated in the context of formal education. They do not contextualize items to be authentic or to represent a wide variety of disciplines and contexts in CS research. Here, we describe the development of an assessment instrument that can be flexibly adapted to different CS research contexts. Furthermore, we show that this assessment instrument, the SR questionnaire, provides valid conclusions about participants{\textquoteright} SR skills. We found that the deep-structure and surface features of the items in the SR questionnaire represent the thinking processes associated with SR to a substantial extent. We suggest that practitioners and researchers consider these item features in future adaptations of the SR questionnaire. This will most likely enable them to draw valid conclusions about participants{\textquoteright} SR skills and to gain a deeper understanding of participants{\textquoteright} SR skills in CS project evaluation.",
keywords = "Assessment, Evaluation, Explanatory Rasch model, Learning outcomes, Science inquiry skills, Scientific reasoning",
author = "Till Bruckermann and Tanja Straka and Milena Stillfried and Moritz Krell",
note = "Funding Information: This study was supported by the German Federal Ministry of Education and Research (grant numbers 01|O1727, 01|O1725). The publication of this article was funded by the Open Access Fund of the Leibniz Universit{\"a}t Hannover. The funding sources were involved neither in conducting the research nor in preparing the article.",
year = "2021",
month = nov,
day = "25",
doi = "10.5334/cstp.309",
language = "English",
volume = "6",
number = "1",
journal = "Citizen Science: Theory and Practice",
issn = "2057-4991",
}

Download citation (RIS)

TY - JOUR

T1 - Context Matters

T2 - Accounting for Item Features in the Assessment of Citizen Scientists’ Scientific Reasoning Skills

AU - Bruckermann, Till

AU - Straka, Tanja

AU - Stillfried, Milena

AU - Krell, Moritz

N1 - Funding Information: This study was supported by the German Federal Ministry of Education and Research (grant numbers 01|O1727, 01|O1725). The publication of this article was funded by the Open Access Fund of the Leibniz Universität Hannover. The funding sources were involved neither in conducting the research nor in preparing the article.

PY - 2021/11/25

Y1 - 2021/11/25

N2 - Citizen science (CS) projects engage citizens for research purposes and promote individual learning outcomes such as scientific reasoning (SR) skills. SR refers to participants’ skills to solve problems scientifically. However, the evaluation of CS projects’ effects on learning outcomes has suffered from a lack of assessment instruments and resources. Assessments of SR have most often been validated in the context of formal education. They do not contextualize items to be authentic or to represent a wide variety of disciplines and contexts in CS research. Here, we describe the development of an assessment instrument that can be flexibly adapted to different CS research contexts. Furthermore, we show that this assessment instrument, the SR questionnaire, provides valid conclusions about participants’ SR skills. We found that the deep-structure and surface features of the items in the SR questionnaire represent the thinking processes associated with SR to a substantial extent. We suggest that practitioners and researchers consider these item features in future adaptations of the SR questionnaire. This will most likely enable them to draw valid conclusions about participants’ SR skills and to gain a deeper understanding of participants’ SR skills in CS project evaluation.

AB - Citizen science (CS) projects engage citizens for research purposes and promote individual learning outcomes such as scientific reasoning (SR) skills. SR refers to participants’ skills to solve problems scientifically. However, the evaluation of CS projects’ effects on learning outcomes has suffered from a lack of assessment instruments and resources. Assessments of SR have most often been validated in the context of formal education. They do not contextualize items to be authentic or to represent a wide variety of disciplines and contexts in CS research. Here, we describe the development of an assessment instrument that can be flexibly adapted to different CS research contexts. Furthermore, we show that this assessment instrument, the SR questionnaire, provides valid conclusions about participants’ SR skills. We found that the deep-structure and surface features of the items in the SR questionnaire represent the thinking processes associated with SR to a substantial extent. We suggest that practitioners and researchers consider these item features in future adaptations of the SR questionnaire. This will most likely enable them to draw valid conclusions about participants’ SR skills and to gain a deeper understanding of participants’ SR skills in CS project evaluation.

KW - Assessment

KW - Evaluation

KW - Explanatory Rasch model

KW - Learning outcomes

KW - Science inquiry skills

KW - Scientific reasoning

UR - http://www.scopus.com/inward/record.url?scp=85121038815&partnerID=8YFLogxK

U2 - 10.5334/cstp.309

DO - 10.5334/cstp.309

M3 - Article

VL - 6

JO - Citizen Science: Theory and Practice

JF - Citizen Science: Theory and Practice

SN - 2057-4991

IS - 1

M1 - 21

ER -
