Abstract
Continuous assessment is a methodology whose objective is to assess students on an ongoing basis. However, designing, organizing, correcting, and evaluating continuous assessment increases the workload of teachers. Moreover, this methodology may not promote deep learning if it is not implemented properly. In this study, we implemented continuous assessment in an undergraduate programming subject using an automated assessment tool to reduce the workload of teachers. We used design-based research (DBR) to implement a prototype assessment methodology that includes an automated assessment tool developed by our research group. DBR provides a scientific foundation for this implementation through an iterative process in which we progressively extended assessment to all the activities that students perform in the course. In the successive iterations of this process, we collected students' final and project grades, as well as their opinions of the implemented assessments through surveys. These results demonstrate that the performance of at least two types of students improves after the implementation of continuous assessment, while the depth of learning in the class is not affected. We also found that students are more motivated and committed to the course when continuous assessment is used, as they prefer automated assessment over traditional exercises. In addition, the implementation of continuous assessment revealed some unexpected outcomes regarding flexibility in methodology design, the collection of large amounts of data from the learning process, and students acquiring skills useful for programming. Ultimately, this can result in students gaining deeper knowledge, as they are confronted with a greater number of situations in which to test their understanding.