
dc.contributor.author: Jayal, Ambikesh
dc.contributor.author: Shepperd, Martin
dc.identifier.citation: Jayal, A. and Shepperd, M. (2009) 'The problem of labels in E-assessment of diagrams', Journal on Educational Resources in Computing (JERIC), 8(4), p. 12. DOI: 10.1145/1482348.1482351
dc.description: Article published in the Journal on Educational Resources in Computing (JERIC) in January 2009, available at:
dc.description.abstract: In this article we explore a problematic aspect of the automated assessment of diagrams. Diagrams have partial and sometimes inconsistent semantics. Typically, much of the meaning of a diagram resides in its labels; however, the choice of labeling is largely unrestricted. This means a correct solution may use labels that differ from, yet are semantically equivalent to, those in the specimen solution. With human marking this problem can easily be overcome; with e-assessment, however, it is challenging. We empirically explore the scale of the synonym problem by analyzing 160 student solutions to a UML task. From this we find that the cumulative growth of synonyms shows only a limited tendency to diminish at the margin, despite the use of a range of text processing algorithms such as stemming and auto-correction of spelling errors. This finding has significant implications for the ease with which we may develop future e-assessment systems for diagrams: the need for better algorithms for assessing the semantic similarity of labels becomes inescapable.
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.ispartofseries: Journal of Educational Resources in Computing
dc.title: The problem of labels in e-assessment of diagrams

