What is it about?
This study investigated whether assessing the programming knowledge of computer science (CS) students through code-writing tasks accurately reflects their understanding. We studied this in the context of code structure (programming patterns), and our findings indicated that traditional code-writing tasks do not provide a complete picture of student knowledge and sometimes underestimate it. We also found that assessing student knowledge through multiple lenses offers a more accurate representation of their understanding.

To examine how accurately code writing assesses student knowledge, we asked students to write short functions that required applying specific patterns. For example, one task was: 'Write a function that takes a string "word." If the word starts with "A," return true; otherwise, return false.' The expected pattern for this task is to return the boolean expression directly: return word.startsWith("A");. The anti-pattern is to wrap the expression in an if statement and return boolean literals: if (word.startsWith("A")) { return true; } return false;. (Both forms are shown in the sketch below.)

When students deviated from the expected patterns, we flagged their code and gave them progressive prompts and hints in three steps. First, students were shown their code-writing response along with a prompt asking whether they could improve the style of their code and correct any other errors they noticed. If their revised code was also flagged, they received a second hint: 'Can you improve the style of your code by re-writing it without using an if statement?' If students still could not follow the hint, they were shown a worked example containing two code samples, one adhering to the pattern and one using the anti-pattern, and were told that both samples had the same functionality.

Our results indicated that, for some patterns, violations in students' code writing did reflect knowledge gaps. For example, most students who wrote repeated code inside both the if and else branches could not correctly refactor their own code even after all the prompts. For other patterns, however, code writing understated student knowledge. For example, many students who failed to return boolean expressions directly in the code-writing task refactored their code correctly at the first prompt, even though that prompt gave no information about what the problem was or how to fix it. This suggests that their pattern violations were not necessarily due to deep knowledge gaps.
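For readers who want to see the patterns side by side, here is a minimal, illustrative Java sketch (the study's task uses Java-style syntax; the class, method names, and the greeting example below are our own, hypothetical additions). It shows the directly returned boolean expression, the if/boolean-literal anti-pattern, and the repeated-code-in-both-branches anti-pattern together with its refactored form:

public class PatternExamples {

    // Pattern: return the boolean expression directly.
    static boolean startsWithA(String word) {
        return word.startsWith("A");
    }

    // Anti-pattern: same behavior, but the boolean expression is wrapped in an
    // if statement that returns boolean literals.
    static boolean startsWithAAntiPattern(String word) {
        if (word.startsWith("A")) {
            return true;
        }
        return false;
    }

    // Anti-pattern from the results: the same statement is repeated in both the
    // if and else branches. (The greeting scenario is a hypothetical example.)
    static void greetAntiPattern(boolean formal, String name) {
        if (formal) {
            System.out.println("Hello, " + name + ".");
            System.out.println("Welcome.");
        } else {
            System.out.println("Hello, " + name + ".");
            System.out.println("Good to see you.");
        }
    }

    // Refactored: the shared statement is hoisted out of the conditional.
    static void greet(boolean formal, String name) {
        System.out.println("Hello, " + name + ".");
        if (formal) {
            System.out.println("Welcome.");
        } else {
            System.out.println("Good to see you.");
        }
    }
}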
Featured image: Photo by Glenn Carstens-Peters on Unsplash.
Why is it important?
To design effective interventions for students learning programming patterns, we must first assess their knowledge and understand the type of support they need. Currently, code writing is the dominant method for evaluating students' understanding of programming and programming patterns, and the availability of static code analyzers has made it easy to detect anti-patterns in student work automatically (a rough sketch of such a check follows below). However, our study suggests that this approach may not provide a complete picture of student knowledge. Our findings align with Ohlsson's learning theory, which suggests that people can often identify their own mistakes when given the opportunity to review their work. Therefore, if a student's mistake is not due to a knowledge gap, the student may not require teaching; instead, they may benefit from motivation and review practices that help them recognize opportunities for improvement.
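The paper's summary above does not describe how anti-patterns are detected; as a purely hypothetical Java sketch (not the analyzer used in the study), a crude checker might flag the if/boolean-literal anti-pattern with a text-level pattern match, whereas real static analyzers inspect the parsed syntax tree:

import java.util.regex.Pattern;

// Hypothetical sketch only: flag code that returns boolean literals from inside
// an if statement instead of returning the condition directly.
public class AntiPatternFlagger {

    private static final Pattern RETURN_LITERAL_IN_IF = Pattern.compile(
            "if\\s*\\(.*\\)\\s*\\{?\\s*return\\s+(true|false)\\s*;",
            Pattern.DOTALL);

    static boolean flagsReturnLiteralAntiPattern(String source) {
        return RETURN_LITERAL_IN_IF.matcher(source).find();
    }

    public static void main(String[] args) {
        String flagged = "if (word.startsWith(\"A\")) { return true; } return false;";
        String clean = "return word.startsWith(\"A\");";
        System.out.println(flagsReturnLiteralAntiPattern(flagged)); // prints true
        System.out.println(flagsReturnLiteralAntiPattern(clean));   // prints false
    }
}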
Read the Original
This page is a summary of: Improving Assessment of Programming Pattern Knowledge through Code Editing and Revision, May 2023, Institute of Electrical & Electronics Engineers (IEEE). DOI: 10.1109/icse-seet58685.2023.00012.