Automatic detection of inconsistencies between numerical scores and textual feedback in peer-assessment processes with machine learning

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/125241
Item information
Title: Automatic detection of inconsistencies between numerical scores and textual feedback in peer-assessment processes with machine learning
Author(s): Rico-Juan, Juan Ramón | Gallego, Antonio-Javier | Calvo-Zaragoza, Jorge
Research group(s) or GITE: Reconocimiento de Formas e Inteligencia Artificial
Center, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos
Keywords: Peer assessment | Open-ended activities | Computer-assisted assessment | Machine learning | Artificial intelligence | Natural language processing
Publication date: 2020
Publisher: Asociación de Enseñantes Universitarios de la Informática (AENUI)
Bibliographic citation: Rico-Juan, Juan Ramón; Gallego, A.-Javier; Calvo-Zaragoza, Jorge. “Automatic detection of inconsistencies between numerical scores and textual feedback in peer-assessment processes with machine learning”. In: Badía Contelles, José Manuel; Grimaldo Moreno, Francisco (eds.). Actas de las XXVI Jornadas sobre la Enseñanza Universitaria de la Informática, València, 8-9 July 2020. València: Asociación de Enseñantes Universitarios de la Informática, 2020, p. 357
Abstract: The use of peer assessment for open-ended activities has advantages for both teachers and students. Teachers can reduce the workload of the correction process, and students achieve a better understanding of the subject by evaluating the activities of their peers. To ease the process, it is advisable to provide the students with a rubric with which to assess their peers; however, restricting them to providing only numerical scores is detrimental, as it prevents them from giving valuable feedback to their peers. Since this assessment produces two modalities of the same evaluation, namely a numerical score and textual feedback, it is possible to apply automatic techniques to detect inconsistencies between them, thus minimizing the teachers’ workload for supervising the whole process. This paper proposes a machine learning approach for the detection of such inconsistencies. To this end, we consider two different approaches, each of which is tested with different algorithms, in order to both evaluate the approach itself and find appropriate models to make it successful. The experiments carried out with 4 groups of students and 2 types of activities show that the proposed approach yields reliable results, thus representing a valuable tool for ensuring the fair operation of the peer assessment process.
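The paper itself specifies the two approaches and the algorithms tested; as a rough illustration of the general idea of cross-checking the two modalities (not the authors' actual method), the sketch below trains a text-based score predictor and flags peer reviews whose numerical score diverges from what the textual feedback suggests. The example data, the 0-10 scale, the deviation threshold, and the TF-IDF/Ridge pipeline are all illustrative assumptions.

# Minimal sketch (illustrative only, not the method of the paper): learn to
# predict the score from the feedback text, then flag reviews where the given
# numerical score disagrees strongly with the text-implied score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
import numpy as np

# Hypothetical training data: textual feedback paired with its numerical score (0-10).
feedback = [
    "Excellent work, very clear explanations and correct results.",
    "Good effort, but the second exercise is incomplete.",
    "Many errors; the solution does not address the task.",
]
scores = np.array([9.5, 6.0, 2.0])

# Pipeline that predicts the score from the feedback text alone.
model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(feedback, scores)

def flag_inconsistent(text, given_score, threshold=3.0):
    """Return (is_suspect, implied_score) when text and score disagree by more than the threshold."""
    implied = model.predict([text])[0]
    return abs(implied - given_score) > threshold, implied

# Example: an enthusiastic comment paired with a very low mark is flagged for the teacher.
is_suspect, implied = flag_inconsistent(
    "Excellent work, very clear explanations and correct results.", 1.0)
print(is_suspect, round(implied, 1))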
URI: http://hdl.handle.net/10045/125241
ISSN: 2531-0607
Language: eng
Type: info:eu-repo/semantics/conferenceObject
Rights: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License
Peer reviewed: yes
Appears in collections: JENUI 2020
INV - GRFIA - Comunicaciones a Congresos, Conferencias, etc.

Files in this item:
JENUI_2020_051.pdf (83,57 kB, Adobe PDF)


This item is licensed under a Creative Commons License