Context Matters: Accounting for Item Features in the Assessment of Citizen Scientists’ Scientific Reasoning Skills

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Till Bruckermann
  • Tanja Straka
  • Milena Stillfried
  • Moritz Krell

Organisational units

External organisations

  • Technische Universität Berlin
  • Leibniz-Institut für Zoo- und Wildtierforschung (IZW)
  • IPN - Leibniz-Institut für die Pädagogik der Naturwissenschaften und Mathematik

Details

Original language: English
Article number: 21
Number of pages: 15
Journal: Citizen Science: Theory and Practice
Volume: 6
Issue number: 1
Publication status: Published - 25 Nov 2021

Abstract

Citizen science (CS) projects engage citizens for research purposes and promote individual learning outcomes such as scientific reasoning (SR) skills. SR refers to participants’ skills to solve problems scientifically. However, the evaluation of CS projects’ effects on learning outcomes has suffered from a lack of assessment instruments and resources. Assessments of SR have most often been validated in the context of formal education. They do not contextualize items to be authentic or to represent a wide variety of disciplines and contexts in CS research. Here, we describe the development of an assessment instrument that can be flexibly adapted to different CS research contexts. Furthermore, we show that this assessment instrument, the SR questionnaire, provides valid conclusions about participants’ SR skills. We found that the deep-structure and surface features of the items in the SR questionnaire represent the thinking processes associated with SR to a substantial extent. We suggest that practitioners and researchers consider these item features in future adaptations of the SR questionnaire. This will most likely enable them to draw valid conclusions about participants’ SR skills and to gain a deeper understanding of participants’ SR skills in CS project evaluation.

ASJC Scopus subject areas

Cite

Context Matters: Accounting for Item Features in the Assessment of Citizen Scientists’ Scientific Reasoning Skills. / Bruckermann, Till; Straka, Tanja; Stillfried, Milena et al.
In: Citizen Science: Theory and Practice, Vol. 6, No. 1, 21, 25.11.2021.


Bruckermann T, Straka T, Stillfried M, Krell M. Context Matters: Accounting for Item Features in the Assessment of Citizen Scientists’ Scientific Reasoning Skills. Citizen Science: Theory and Practice. 2021 Nov 25;6(1):21. doi: 10.5334/cstp.309
Bruckermann, Till ; Straka, Tanja ; Stillfried, Milena et al. / Context Matters: Accounting for Item Features in the Assessment of Citizen Scientists’ Scientific Reasoning Skills. In: Citizen Science: Theory and Practice. 2021 ; Vol. 6, No. 1.
BibTeX
@article{44a4f97c872448278193a5035452b6f6,
title = "Context Matters: Accounting for Item Features in the Assessment of Citizen Scientists{\textquoteright} Scientific Reasoning Skills",
abstract = "Citizen science (CS) projects engage citizens for research purposes and promote individual learning outcomes such as scientific reasoning (SR) skills. SR refers to participants{\textquoteright} skills to solve problems scientifically. However, the evaluation of CS projects{\textquoteright} effects on learning outcomes has suffered from a lack of assessment instruments and resources. Assessments of SR have most often been validated in the context of formal education. They do not contextualize items to be authentic or to represent a wide variety of disciplines and contexts in CS research. Here, we describe the development of an assessment instrument that can be flexibly adapted to different CS research contexts. Furthermore, we show that this assessment instrument, the SR questionnaire, provides valid conclusions about participants{\textquoteright} SR skills. We found that the deep-structure and surface features of the items in the SR questionnaire represent the thinking processes associated with SR to a substantial extent. We suggest that practitioners and researchers consider these item features in future adaptations of the SR questionnaire. This will most likely enable them to draw valid conclusions about participants{\textquoteright} SR skills and to gain a deeper understanding of participants{\textquoteright} SR skills in CS project evaluation.",
keywords = "Assessment, Evaluation, Explanatory Rasch model, Learning outcomes, Science inquiry skills, Scientific reasoning",
author = "Till Bruckermann and Tanja Straka and Milena Stillfried and Moritz Krell",
note = "Funding Information: This study was supported by the German Federal Ministry of Education and Research (grant numbers 01|O1727, 01|O1725). The publication of this article was funded by the Open Access Fund of the Leibniz Universit{\"a}t Hannover. The funding sources were involved neither in conducting the research nor in preparing the article.",
year = "2021",
month = nov,
day = "25",
doi = "10.5334/cstp.309",
language = "English",
volume = "6",
number = "1",

}

RIS

TY - JOUR

T1 - Context Matters

T2 - Accounting for Item Features in the Assessment of Citizen Scientists’ Scientific Reasoning Skills

AU - Bruckermann, Till

AU - Straka, Tanja

AU - Stillfried, Milena

AU - Krell, Moritz

N1 - Funding Information: This study was supported by the German Federal Ministry of Education and Research (grant numbers 01|O1727, 01|O1725). The publication of this article was funded by the Open Access Fund of the Leibniz Universität Hannover. The funding sources were involved neither in conducting the research nor in preparing the article.

PY - 2021/11/25

Y1 - 2021/11/25

N2 - Citizen science (CS) projects engage citizens for research purposes and promote individual learning outcomes such as scientific reasoning (SR) skills. SR refers to participants’ skills to solve problems scientifically. However, the evaluation of CS projects’ effects on learning outcomes has suffered from a lack of assessment instruments and resources. Assessments of SR have most often been validated in the context of formal education. They do not contextualize items to be authentic or to represent a wide variety of disciplines and contexts in CS research. Here, we describe the development of an assessment instrument that can be flexibly adapted to different CS research contexts. Furthermore, we show that this assessment instrument, the SR questionnaire, provides valid conclusions about participants’ SR skills. We found that the deep-structure and surface features of the items in the SR questionnaire represent the thinking processes associated with SR to a substantial extent. We suggest that practitioners and researchers consider these item features in future adaptations of the SR questionnaire. This will most likely enable them to draw valid conclusions about participants’ SR skills and to gain a deeper understanding of participants’ SR skills in CS project evaluation.

AB - Citizen science (CS) projects engage citizens for research purposes and promote individual learning outcomes such as scientific reasoning (SR) skills. SR refers to participants’ skills to solve problems scientifically. However, the evaluation of CS projects’ effects on learning outcomes has suffered from a lack of assessment instruments and resources. Assessments of SR have most often been validated in the context of formal education. They do not contextualize items to be authentic or to represent a wide variety of disciplines and contexts in CS research. Here, we describe the development of an assessment instrument that can be flexibly adapted to different CS research contexts. Furthermore, we show that this assessment instrument, the SR questionnaire, provides valid conclusions about participants’ SR skills. We found that the deep-structure and surface features of the items in the SR questionnaire represent the thinking processes associated with SR to a substantial extent. We suggest that practitioners and researchers consider these item features in future adaptations of the SR questionnaire. This will most likely enable them to draw valid conclusions about participants’ SR skills and to gain a deeper understanding of participants’ SR skills in CS project evaluation.

KW - Assessment

KW - Evaluation

KW - Explanatory Rasch model

KW - Learning outcomes

KW - Science inquiry skills

KW - Scientific reasoning

UR - http://www.scopus.com/inward/record.url?scp=85121038815&partnerID=8YFLogxK

U2 - 10.5334/cstp.309

DO - 10.5334/cstp.309

M3 - Article

VL - 6

JO - Citizen Science: Theory and Practice

JF - Citizen Science: Theory and Practice

SN - 2057-4991

IS - 1

M1 - 21

ER -
