Research Article

Automated Essay Scoring Feedback in Foreign Language Writing: Does it Coincide with Instructor Feedback?

Year 2022, Volume: 4, Issue: 4, 53-62, 15.06.2022
https://doi.org/10.48147/dada.60

Abstract

This study aimed to find out whether Criterion, an automated essay scoring (AES) system, can save teachers time in giving feedback on grammar and mechanics by comparing the feedback given by Criterion with instructor feedback on the same student papers. It is a descriptive study showing to what extent the feedback given by Criterion matches instructor feedback: the feedback from Criterion and from a human rater was compared. The study sought an answer to the following research question: To what extent does AES feedback coincide with instructor feedback on English as a Foreign Language (EFL) writing in terms of grammar and mechanics? The results showed that Criterion was as accurate as the human rater in finding the grammar and mechanical errors in students’ papers.
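
The paper does not publish the procedure it used to compare the two feedback sources, so the following is only a minimal sketch of how such overlap could be quantified, assuming each source is reduced to a set of flagged errors. The flag format, error categories, and the `overlap_stats` helper below are illustrative assumptions, not the study's actual method.

```python
# Illustrative sketch: quantify how far automated (AES) and instructor
# feedback coincide, assuming each is reduced to a set of
# (sentence_index, error_category) flags. All data here is made up.

aes_flags = {
    (1, "subject-verb agreement"),
    (2, "spelling"),
    (4, "capitalization"),
    (5, "missing article"),
}

instructor_flags = {
    (1, "subject-verb agreement"),
    (2, "spelling"),
    (3, "run-on sentence"),
    (5, "missing article"),
}

def overlap_stats(aes: set, human: set) -> dict:
    """Return simple agreement figures between two sets of error flags."""
    shared = aes & human
    return {
        "shared_flags": len(shared),
        # Of everything the AES flagged, the share the human also flagged.
        "precision_vs_human": len(shared) / len(aes) if aes else 0.0,
        # Of everything the human flagged, the share the AES also caught.
        "recall_vs_human": len(shared) / len(human) if human else 0.0,
    }

print(overlap_stats(aes_flags, instructor_flags))
# -> {'shared_flags': 3, 'precision_vs_human': 0.75, 'recall_vs_human': 0.75}
```

A fuller analysis would typically also correct for chance agreement (e.g., with Cohen's kappa), but the overlap idea above is the core of the comparison the abstract describes.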

References

  • Ashwell, Tim (2000). Patterns of teacher response to student writing in a multiple-draft composition classroom: Is content feedback followed by form feedback the best method? Journal of Second Language Writing, 9(3), 227–257.
  • Attali, Yigal (2004, April). Exploring the feedback and revision features of Criterion. Paper presented at the National Council on Measurement in Education Annual Meeting, San Diego, CA.
  • Attali, Yigal; Lewis, Will; Steier, Michael (2012). Scoring with the computer: alternative procedures for improving the reliability of holistic essay scoring. Language Testing, 30(1), 125-141.
  • Chen, Chi-Fen; Cheng, Wei-Yuan (2008). Beyond the design of automated writing evaluation: pedagogical practices and perceived learning effectiveness in EFL writing classes. Language Learning & Technology, 12(2), 94-112.
  • Choi, Jaeho; Lee, Youngju (2010). The use of feedback in the ESL writing class integrating Automated Essay Scoring (AES). In D. Gibson & B. Dodge (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference (pp. 3008–3012). Chesapeake, VA: AACE.
  • Cushing Weigle, Sara (2010). Validation of automated scores of TOEFL iBT tasks against non-test indicators of writing ability. Language Testing, 27(3), 335-353.
  • Dikli, Semire (2010). The nature of automated essay scoring feedback. CALICO Journal, 28(1), 99-134.
  • Dikli, Semire; Bleyle, Susan (2014). Automated Essay Scoring feedback for second language writers: How does it compare to instructor feedback? Assessing Writing, 22, 1-17.
  • Fathman, Ann (1990). Teacher response to student writing: Focus on form versus content. In B. Kroll (Ed.), Second Language Writing: Research insights for the classroom (pp. 178–190). Cambridge, UK: Cambridge University Press.
  • Ferris, Dana (1995). Student reactions to teacher response in multiple draft composition classrooms. TESOL Quarterly, 29(1), 33–50.
  • Ferris, Dana (2011). Treatment of error in second language writing (2nd ed.). Ann Arbor, MI: University of Michigan Press.
  • Lee, Yong-Won; Gentile, Claudia; Kantor, Robert (2008). Analytic scoring of TOEFL CBT essays: Scores from humans and e-rater (RR 08-01). Princeton, NJ: Educational Testing Service (ETS).
  • Liao, Hui-Chuan (2016). Enhancing the grammatical accuracy of EFL writing by using an AWE-assisted process approach. System, 62, 77-92.
  • Long, Robert (2013). A review of ETS’s Criterion online writing program for student compositions. The Language Teacher, 37(3), 11-18.
  • Nichols, Paul (2004). Evidence for the interpretation and use of scores from an Automated Essay Scorer. Paper presented at the Annual Meeting of the American Educational Research Association (AERA), San Diego, CA.
  • Otoshi, Junko (2005). An analysis of the use of Criterion in a writing classroom in Japan. The JALT CALL Journal, 1(1), 30-38.
  • Ranalli, Jim; Link, Stephanie; Chukharev-Hudilainen, Evgeny (2017). Automated writing evaluation for formative assessment of second language writing: investigating the accuracy and usefulness of feedback as part of argument-based validation. Educational Psychology, 37(1), 8-25.
  • Truscott, John (2007). The effect of error correction on learners’ ability to write accurately. Journal of Second Language Writing, 16, 255–272.
  • Wang, Jinhao; Brown, Michelle Stallone (2007). Automated Essay Scoring versus Human Scoring: A comparative study. Journal of Technology, Learning, and Assessment, 6(2).
  • Wang, Ying-Jian; Shang, Hui-Fang; Briody, Paul (2013). Exploring the impact of using automated writing evaluation in English as a foreign language university students' writing. Computer Assisted Language Learning, 26(3), 234-257.
  • Ware, Paige (2011). Computer-generated feedback on student writing. TESOL Quarterly, 45, 769–774. doi:10.5054/tq.2011.272525
  • Warschauer, Mark; Ware, Paige (2006). Automated writing evaluation: Defining the classroom research agenda. Language Teaching Research, 10(2), 1–24.
  • Weigle, Sara Cushing (2011). Validation of automated scores of TOEFL iBT® tasks against nontest indicators of writing ability. ETS Research Report Series, 2011(2).
  • Wilson, Joshua; Czik, Amanda (2016). Automated essay evaluation software in English Language Arts classrooms: Effects on teacher feedback, student motivation, and writing quality. Computers & Education, 100, 94-109.
  • Zamel, Vivian (1985). Responding to student writing. TESOL Quarterly, 19(1), 79–97.

Details

Primary Language: English
Subjects: Language Studies
Section: Research Articles
Authors

Musa Tömen 0000-0002-7351-2440

Early View Date: June 8, 2022
Publication Date: June 15, 2022
Submission Date: May 12, 2022
Published Issue: Year 2022, Volume: 4, Issue: 4

Cite

APA Tömen, M. (2022). Automated Essay Scoring Feedback in Foreign Language Writing: Does it Coincide with Instructor Feedback? Disiplinler Arası Dil Araştırmaları, 4(4), 53-62. https://doi.org/10.48147/dada.60