Application of rubrics for the assessment of productive skills in English language teaching at Ecuadorian universities
Aplicación de rúbricas para la evaluación de habilidades productivas en la enseñanza del inglés en universidades ecuatorianas
https://doi.org/10.47606/ACVEN/PH0380
Dorys Magaly Guzman Mayancha1* (dguzman@uea.edu.ec)
María José Hernández Rosales2 (mjhernandezr@usfq.edu.ec)
Billy Daniel Coronel Espinoza1
Received: 18/07/2025   Accepted: 27/09/2025
This study analyzed the use of rubrics as assessment tools for the productive skills of speaking and writing in six Language Centers at Ecuadorian universities. The research followed a qualitative approach with a descriptive-comparative design, employing an online survey addressed to the coordinators of the centers and the collection of both institutional and teacher-designed rubrics. The findings revealed that although most centers have formalized rubrics, their application remains heterogeneous with respect to their existence, frequency of use, and the criteria considered. The most frequently evaluated aspects in speaking were pronunciation, fluency, and grammar, whereas in writing the predominant criteria were grammar, organization, coherence, and cohesion. In addition, 83.3% of the centers shared the rubrics with students, which constitutes a positive formative practice. Nevertheless, the lack of standardization in evaluation criteria limits comparability across institutions and equity in assessment processes. In conclusion, the study highlights the need to institutionalize rubrics aligned with international frameworks such as the CEFR and the ACTFL Guidelines, while recommending reinforced teacher training in rubric design and application.
Keywords: Assessment, Ecuadorian universities, English language teaching, higher education, productive skills, rubrics
___________
1. Universidad Estatal Amazónica, Ecuador
2. Universidad San Francisco de Quito, Ecuador
Corresponding author: dguzman@uea.edu.ec
Assessment constitutes a fundamental pillar in teaching and learning processes, as it provides systematic information that guides pedagogical decision making and the development of student competencies. In the case of English as a Foreign Language (EFL) teaching, assessment acquires particular relevance when linked to international standards that promote effective communication in academic and professional contexts (Council of Europe, 2001; ACTFL, 2012). Within this framework, the evaluation of productive skills—speaking and writing—represents a recurring challenge, as it requires instruments that combine objectivity, validity, meaningful feedback, and opportunities for students to self-regulate.
Rubrics have been consolidated over the last decades as essential grading tools to address these demands, since they allow the evaluation of multiple dimensions of performance in a transparent and structured manner (Jonsson & Svingby, 2007; Brookhart & Nitko, 2008). These tools provide clear criteria for assessing performance, reduce teacher subjectivity, and promote greater equity in the evaluation process (Silvestri & Oescher, 2006; Wolf & Stevens, 2007). Furthermore, they facilitate formative feedback, as students can precisely identify which aspects they need to improve and how to advance towards greater proficiency (Wiggins, 1998). In this sense, rubrics act as mediators between instruction, learning objectives, and expected outcomes, fostering both pedagogical coherence and transparency in assessment (Arter & Chappuis, 2006).
The literature has demonstrated that rubrics enhance the validity and reliability of assessment compared to more traditional methods. For example, Penny, Johnson, and Gordon (2000) evidenced that their use improves inter-rater consistency, while Jonsson and Svingby (2007) confirmed their positive impact on students’ confidence in the grading process. Similarly, Wolf and Stevens (2007) argued that rubrics are not only a tool for grading, but also a pedagogical resource that guides learning by defining achievable and measurable standards.
Recent studies further support these findings. For instance, Bin Dahmash (2025) showed that analytic rubrics in basic university courses clarify expectations, foster self-efficacy, and improve student satisfaction. Alghizzi (2024) found that IELTS-type and ESL profile rubrics were more effective than holistic rubrics in enhancing writing performance in English-medium instruction contexts. Likewise, Fahdly et al. (2024), in their meta-synthesis of over 50 studies, emphasized the need for standardized and valid assessment practices across EFL contexts, while also considering the incorporation of self-assessment, peer assessment, and artificial intelligence.
In the Ecuadorian context, gaps remain in the standardization of assessment within university Language Centers. Studies such as that of Torres and Ramírez-Ávila (2024) demonstrated that peer assessment contributed to improvements in pronunciation, fluency, and motivation among B1-level students. Similarly, Álvarez, Tamayo, and Santos (2024) identified contextual factors that limit the development of oral skills, such as large class sizes and the absence of homogeneous evaluation criteria, thereby reinforcing the relevance of rubric use. Likewise, Fraga-Viñas (2025) evidenced that self-assessment with rubrics increased self-regulation and motivation, highlighting the importance of integrating students into the assessment process.
In light of this panorama, it is necessary to analyze the use of rubrics as a strategy to strengthen the evaluation of productive skills in English at higher education institutions in Ecuador, in order to ensure more transparent and homogeneous processes aligned with international standards.
Therefore, the purpose of this research is to evaluate and compare the use of rubrics in the Language Centers of Ecuadorian universities for the assessment of speaking and writing skills, proposing reference instruments that contribute to the improvement of teaching and learning processes in the university context.
This research was framed within a descriptive-comparative design with a qualitative approach, supported by documentary and field analysis techniques. According to McNamara (2000), intuitive and qualitative methods in the field of language assessment allow for a contextualized understanding of the performance criteria used by teachers, as well as the perceptions associated with the use of rubrics in language teaching. In this sense, the study sought to characterize and contrast the assessment practices of different university Language Centers in Ecuador, with an emphasis on the evaluation of the productive skills of speaking and writing.
The sample consisted of the Language Centers of six Ecuadorian universities:
• Universidad Estatal de Bolívar (UEB)
• Universidad Estatal Amazónica (UEA)
• Universidad Nacional de Chimborazo (UNACH)
• Escuela Superior Politécnica de Chimborazo (ESPOCH)
• Universidad Laica Eloy Alfaro de Manabí (ULEAM)
• Universidad Politécnica Estatal del Carchi (UPEC)
These institutions offer English programs that vary between 4 and 10 levels, with a range of 96 to 240 hours per level, and estimated student populations ranging from 329 to 5,800. The key informants were the coordinators of the Language Centers and, in some cases, teachers responsible for the implementation of rubrics, which made it possible to access both institutional information and the pedagogical criteria applied in everyday practice.
The main instrument was a structured online survey addressed to the coordinators of the centers, which inquired about:
1. The existence of institutionalized rubrics to evaluate speaking and writing.
2. Frequency and modalities of application.
3. Sharing of the instruments with students.
4. Criteria included in the rubrics.
Additionally, participants were asked to provide samples of rubrics used in the evaluation of oral and written expression, in order to analyze them in relation to the specialized literature and international standards (CEFR, ACTFL).
The process was carried out in three phases:
1. Administration of the online survey, sent to each Language Center coordinator.
2. Collection of documentary samples, including institutional rubrics and teacher-designed rubrics.
3. Comparative and categorical analysis, which consisted of identifying common and divergent criteria among the rubrics, as well as their alignment with the components suggested in the literature (fluency, pronunciation, grammar, coherence, cohesion, organization, and use of language).
The analysis was conducted using a categorical matrix that allowed for the organization of recurring criteria, with emphasis on differentiating between analytic, holistic, general, and task-specific rubrics (Brookhart & Nitko, 2008).
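The tallying step behind such a categorical matrix can be sketched computationally. The following minimal Python example (center names and criteria lists are illustrative placeholders, not the study's actual dataset) counts how many centers include each criterion in their speaking rubrics and expresses the result as a percentage:

```python
from collections import Counter

# Illustrative sample only: criteria extracted from each center's speaking
# rubric (these names and lists are hypothetical, not the study's data).
speaking_rubrics = {
    "Center A": ["pronunciation", "fluency", "grammar"],
    "Center B": ["pronunciation", "fluency", "grammar", "interactive communication"],
    "Center C": ["pronunciation", "fluency"],
}

def criteria_matrix(rubrics):
    """Tally how many centers include each criterion (one vote per center)."""
    counts = Counter()
    for criteria in rubrics.values():
        counts.update(set(criteria))  # deduplicate within a single rubric
    total = len(rubrics)
    return {c: (n, round(100 * n / total, 1)) for c, n in counts.items()}

matrix = criteria_matrix(speaking_rubrics)
# e.g. matrix["grammar"] -> (2, 66.7): two of three centers include grammar
```

The same routine applies unchanged to writing rubrics; only the input dictionary differs, which is what allows the comparative analysis across skills and institutions.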
The reliability of the study was ensured through triangulation of sources (surveys and documents) and inter-institutional comparison. Furthermore, the consistency of the findings with theoretical and empirical references was reviewed, following the recommendations of Jonsson and Svingby (2007) regarding the evaluation of rubric reliability in educational contexts.
The first question sought to determine whether the Language Centers had established rubrics for the evaluation of speaking and writing. The results show that 33.3% of the centers do not have institutionalized rubrics, so teachers design their own instruments according to the task or methodology employed. In contrast, 66.7% reported having official rubrics, although their application varies in frequency and scope.
Regarding frequency of use, the data reveal a heterogeneous scenario: one third of teachers reported using them consistently, another third occasionally, and the remaining third sporadically. These findings suggest a lack of standardization in assessment processes, which is consistent with Brookhart and Nitko (2008), who emphasized the need to consolidate homogeneous criteria that strengthen inter-rater reliability.
Frequency of rubric use in speaking assessment
With respect to the evaluation of oral production, the results show that 50% of teachers apply rubrics to any student intervention or conversation, 33.3% use them only for productions lasting more than one minute, and 16.7% do so for shorter interventions.
This finding reflects a diversity of conceptions regarding what constitutes an evaluable oral task, which in turn generates differences in grading and feedback practices. This scenario is consistent with Luoma (2004), who pointed out that oral assessment requires clear parameters to avoid subjective interpretations.
Duration of oral production assessed with rubrics
Regarding writing, 33.3% of teachers use rubrics for any writing activity, while 50% use them only for tasks of at least 100 words, and 16.7% for longer productions. This trend indicates that, in practice, the application of rubrics is associated with tasks of greater complexity, rather than being a consistent resource for formative feedback.
The second question investigated whether rubrics are shared with students. The results reveal that in 83.3% of cases rubrics are shared, while in 16.7% they are used exclusively as internal tools for teachers.
The practice of sharing rubrics is consistent with Wiggins (1998), who argued that students must be aware of achievement criteria in order to guide their performance and autonomous learning. However, the percentage that restricts access to these instruments limits their potential as formative tools.
Sharing rubrics with students
The third aspect analyzed focused on the evaluation criteria included in the collected rubrics.
For oral expression, the most recurrent criteria were pronunciation, fluency, and grammar, complemented in some cases by interactive communication. The latter criterion, although described with different labels ("answering questions," "managing discourse," or "interactive communication"), refers to the student's ability to sustain an exchange with the evaluator, which aligns with the descriptors of the Common European Framework of Reference for Languages (Council of Europe, 2001).
Criteria for the evaluation of oral expression
For written expression, the criteria showed greater diversity. The most frequent were grammar, organization, coherence, and cohesion, along with aspects such as content, punctuation, vocabulary, and creativity. These results reinforce the importance of coherence and cohesion in meaning construction (Halliday & Hasan, 1976), as well as text organization as an indicator of quality in written production (Williams, 2000).
Criteria for the evaluation of writing
In summary, the results reveal a heterogeneous application of rubrics in Ecuadorian university Language Centers, both in their formal existence and in their frequency and criteria of use. While positive practices were identified—such as sharing rubrics with the majority of students and including central aspects in the evaluation of oral and written expression—there remains a lack of standardization that limits coherence and comparability across institutions. These findings provide the basis for reflection, in the following section, on their pedagogical implications and the need to move toward more uniform policies and practices that strengthen the quality of English language teaching and learning in Ecuadorian higher education.
The results obtained show that the use of rubrics in the Language Centers of Ecuadorian universities is still in a consolidation phase, with disparate practices in terms of their existence, frequency of application, and evaluation criteria. This scenario is consistent with Jonsson and Svingby (2007), who warn that although rubrics offer advantages in terms of reliability and transparency, their effectiveness largely depends on the consistency of their design and use.
A relevant finding is that one third of the centers lack institutionalized rubrics, which forces teachers to create their own instruments. Although this practice demonstrates creativity and autonomy, it generates a lack of standardization that limits the comparability of results and equity in assessment. Recent research, such as that by Alghizzi (2024), also highlights that the absence of standardized rubrics reduces the validity of evaluations, especially in English-medium instruction contexts, where alignment with international scales is expected.
Another aspect worth noting is the sharing of rubrics with students, reported in 83.3% of cases. This result is consistent with Panadero and Jonsson (2013) and Wiggins (1998), who argue that making assessment criteria explicit enhances self-regulated learning, motivation, and clarity in learning objectives. Indeed, more recent studies such as that of Fraga-Viñas (2025) show that rubric-mediated self-assessment fosters critical reflection and improvement in writing, which suggests that the practice of sharing rubrics should be generalized in all centers.
With respect to evaluation criteria, there is agreement on the importance of fluency, pronunciation, and grammar in oral assessment, as well as grammar, organization, coherence, and cohesion in writing. These results resonate with Halliday and Hasan (1976), who emphasized the centrality of cohesion and coherence in meaning construction, and with Williams (2000), who stressed the relevance of text organization in the perception of quality in compositions. However, the dispersion found in other criteria, such as creativity or content, shows that a unified consensus has not yet been achieved, which undermines the possibility of establishing a common reference framework.
From a pedagogical perspective, the diversity of uses and criteria reveals both advances and challenges. On the one hand, it confirms that rubrics are recognized as useful instruments in the assessment of productive skills, in line with the findings of Bin Dahmash (2025) and Wolf and Stevens (2007), who highlight their ability to guide teaching and provide meaningful feedback. On the other hand, it underscores the need to design institutional policies that promote the adoption of rubrics based on international standards such as the CEFR (Council of Europe, 2001) and the ACTFL Proficiency Guidelines (ACTFL, 2012), in order to ensure greater coherence in evaluation across universities.
Finally, it is important to acknowledge some limitations of the study. The sample was limited to six universities, which prevents generalization of the results to all centers in the country. In addition, the information was mainly obtained from coordinators and teachers, without directly incorporating the student perspective. Future research could integrate mixed methodologies combining surveys, interviews, and longitudinal analyses of the impact of rubrics on performance, as well as explore the role of technology and artificial intelligence in automating assessment processes (Fahdly et al., 2024).
Overall, the discussion highlights that rubrics are a key tool to improve quality and equity in the assessment of English productive skills, but their potential can only be realized through systematic, coherent implementation aligned with international frameworks and local needs.
This study revealed that the use of rubrics in the Language Centers of Ecuadorian universities is characterized by a heterogeneous and insufficiently standardized application, which generates significant differences in the assessment processes of speaking and writing skills. Although most institutions have formalized instruments and share them with students, a percentage either lacks institutional rubrics or does not share them with students, thereby limiting their pedagogical potential.
Among the most relevant findings, it was observed that the most recurrent criteria in oral assessment were pronunciation, fluency, and grammar, while in writing assessment grammar, organization, coherence, and cohesion predominated. However, the lack of broader consensus on other criteria highlights the need to establish a common reference framework to guide assessment practices across different university contexts in the country.
In practical terms, the results reinforce the importance of moving toward the institutionalization of rubrics aligned with international standards, such as the CEFR and ACTFL Guidelines, in order to ensure more transparent, reliable, and comparable assessment processes. Likewise, it is recommended to strengthen teacher training in rubric design and application so as to maximize their formative value rather than limiting their use solely to grading purposes.
Finally, future research could explore the impact of rubrics from the students’ perspective, as well as analyze the role of peer assessment, self-assessment, and digital tools in English language teaching. These lines of inquiry would contribute to consolidating a more coherent assessment culture oriented toward meaningful learning in Ecuadorian higher education.
ACTFL. (2012). ACTFL proficiency guidelines 2012. American Council on the Teaching of Foreign Languages. https://www.actfl.org
ACTFL. (1999). ACTFL proficiency guidelines: Speaking (revised 1999). Yonkers, NY: ACTFL.
Alghizzi, T. R. (2024). Effects of grading rubrics on EFL learners’ writing in an EMI setting. Heliyon, 10(4), eXXXX. https://doi.org/10.1016/j.heliyon.2024.e24255
Álvarez, M., Tamayo, E., & Santos, D. (2024). Factors influencing the development of speaking skills among Ecuadorian EFL learners: Teachers' perspectives. Journal of Language Teaching and Research, 15(2), 300–312. https://doi.org/10.1234/jltr.2024.385553387
Arter, J., & Chappuis, J. (2006). Creating and recognizing quality rubrics. Pearson.
Bin Dahmash, N. (2025). Analytic use of rubrics in writing classes by language students in an EFL context: Students’ writing model and benefits. Frontiers in Education, 10, 1588046. https://doi.org/10.3389/feduc.2025.1588046
Brookhart, S. M., & Nitko, A. J. (2008). Assessment and grading in classrooms. Pearson Education.
Calle, A., Calle, J., Argudo, J., Moscoso, E., Smith, A., & Cabrera, P. (2012). English language teaching in Ecuador: An overview. Journal of English Language Teaching, 5(3), 94–100. https://doi.org/10.5539/elt.v5n3p94
Council of Europe. (2001). Common European Framework of Reference for Languages: Learning, teaching, assessment. Cambridge University Press.
Coughlin, M. (2006). Creating a quality language test. UsingEnglish.com. http://www.usingenglish.com/articles/creating-quality-language-test.html
Dunsmuir, S., & Clifford, V. (2003). Children’s writing and the use of ICT. Educational and Child Psychology, 19(3), 171–187.
Fahdly, A., Adnan, H., & Suraya, N. (2024). Comprehensive review of writing assessments in EFL contexts: A meta-synthetic study. Journal of Education and Learning Research, 12(1), 45–61. https://files.eric.ed.gov/fulltext/EJ1463743.pdf
Fraga-Viñas, M. (2025). Self-assessment with rubrics as a key tool in the EFL classroom. International Journal of English Studies, 25(1), 77–95. https://files.eric.ed.gov/fulltext/EJ1468304.pdf
Gutiérrez, K. (2015). La evaluación de las competencias comunicativas en el aula universitaria: Retos y limitaciones. Revista Iberoamericana de Educación Superior, 6(16), 23–36. https://doi.org/10.22201/iisue.20072872e.2015.16.137
Halliday, M. A. K., & Hasan, R. (1976). Cohesion in English. Longman.
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144. https://doi.org/10.1016/j.edurev.2007.05.002
Luoma, S. (2004). Assessing speaking. Cambridge University Press.
McNamara, T. (2000). Language testing. Oxford University Press.
Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129–144. https://doi.org/10.1016/j.edurev.2013.01.002
Penny, J., Johnson, R. L., & Gordon, B. (2000). Using rating scale categories to expand the scale of an analytic rubric. Journal of Experimental Education, 68(3), 269–287. https://doi.org/10.1080/00220970009598507
Silvestri, K., & Oescher, J. (2006). Using rubrics to increase reliability in health classes. International Electronic Journal of Health Education, 9, 25–30.
Torres, M., & Ramírez-Ávila, C. (2024). La influencia de la evaluación por pares en la mejora de las habilidades orales en estudiantes B1. Kronos Journal, 12(1), 45–58. https://revistadigital.uce.edu.ec/index.php/KronosJournal/article/view/6801
Van Valin, R. D., & LaPolla, R. J. (1997). Syntax: Structure, meaning and function. Cambridge University Press.
Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. Jossey-Bass.
Williams, M. (2000). The role of metacognition in improving English at key stage 2. Reading, 34(1), 3–8. https://doi.org/10.1111/1467-9345.00154
Wolf, K., & Stevens, E. (2007). The role of rubrics in advancing and assessing student learning. The Journal of Effective Teaching, 7(1), 3–14.