Exploring a web-based interactive writing assessment (WISSE): User experiences

Putu Dian Danayanti Degeng, Hamamah Hamamah, Ive Emaliana, Yulia Hapsari, Alifa Camilia Fadillah

Abstract


This paper investigates the user experiences of the Web-based Interactive Writing Assessment (WISSE), developed to help lecturers and learners provide and navigate feedback on academic writing in English. User experience is a crucial element in the success and reception of a product in Research and Development (R&D). The aspects of user experience highlighted in this study are web features, ease of use, and design. The participants were two lecturers and twenty-eight learners from the English Language Education and English Literature study programs at a prominent university in Malang. First, both learners and lecturers created temporary accounts on the web prototype and were assigned different roles: learners wrote and submitted a short argumentative essay on the application, while lecturers provided feedback on the essays through a personal comment box. At the end of the trial, both groups reported their experiences with, and input on, the features, ease of operation, and web design through questionnaires. Both lecturers and learners were largely satisfied with the application's goal of providing easy access to academic text assessment. However, both groups agreed that more distinctive features should be added, along with a user manual and a language-switching feature, since future users of WISSE may not be limited to EFL learners. The user trial results illustrate that while WISSE needs further development and revision, it performs adequately and is ready for large-scale use.


Keywords


Academic writing assessment; AI-integrated website; EFL learners; User experience






DOI: http://dx.doi.org/10.22373/ej.v11i2.20386


