Measuring Student Competency in University Introductory Computer Programming: Epistemological and Methodological Foundations

Leela Waheed, Rob Cavanagh

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

University introductory programming courses, commonly referred to as Computer Science 1 (CS1), are beset by a paucity of invariant measures of programming competency. Common evaluative tools such as university exams are characterised by a lack of standardised scaling protocols, the absence of construct models to inform instrument design, and inconsistent selection of substantive content. These shortcomings severely limit the provision of meaningful data for pedagogic and research purposes. Typically, in most CS1 pedagogical research, the raw scores obtained from in-class tests and formal examinations are treated as measures of student competence. Thus, the veracity of statistical associations tested in these studies and the corresponding recommendations for pedagogic reform are questionable. A growing need has thus arisen for instruments to provide meaningful measurement of CS1 student competence.

This report concerns the first phase in the development of an instrument to measure CS1 student competency. The overall methodological frame was the Unified Theory of Validity and the seven aspects of evidence applicable to an argument for validity: the content aspect, the substantive aspect, the structural aspect, the generalizability aspect, the external aspect, the consequential aspect and the interpretability aspect. The report concentrates on the qualitative procedures applied to deal with the literature, previous research, and existing instruments. The unified conception of validity emphasises construct validity, and accordingly this report recounts in detail the garnering of content aspect evidence, including the purpose, the domain of inference, the types of inferences, constraints and limitations, instrument specification—the construct, instrument specification—the construct model, instrument specification—the construct map, item development, the scoring model, the scaling model, and item technical quality.
The next phase of the project is the subject of a second report and it is anticipated this will focus more on empirical procedures and results through application of the Rasch Partial Credit Model.
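For readers unfamiliar with it, the Rasch Partial Credit Model mentioned above (Masters' formulation) gives the probability that person n scores x on a polytomous item i with categories 0 to m_i, as a function of the person ability θ_n and the item step difficulties δ_ik. A standard statement of the model is:

```latex
P(X_{ni} = x) =
\frac{\exp \sum_{k=0}^{x} (\theta_n - \delta_{ik})}
     {\sum_{h=0}^{m_i} \exp \sum_{k=0}^{h} (\theta_n - \delta_{ik})},
\qquad x \in \{0, 1, \dots, m_i\},
```

with the convention that the sum for k = 0 is defined to be zero. Estimating θ_n and δ_ik from partial-credit scored responses is what would place students and items on a common interval scale in the project's anticipated second phase.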
Original language: English
Title of host publication: Pacific Rim Objective Measurement Symposium (PROMS) 2016
Editors: Quan Zhang
Publisher: Springer Nature
Pages: 97-116
Number of pages: 18
ISBN (Electronic): 978-981-10-8137-8
Publication status: Published - 27 Apr 2018
Externally published: Yes

