Understanding the implementation of an at-home language test: A case of an online version of TOEFL-PBT
Abstract
An at-home test is a distinctive mode of language test delivery that emerged from the prohibition of mass gatherings during the COVID-19 pandemic. Despite its uniqueness, little is known about how to implement an at-home test effectively. This study aims to provide a deeper understanding of such tests by exploring the execution of the online version of the TOEFL-PBT at the Language Center of Syiah Kuala University. Four test administrators were interviewed about their experiences and opinions on the considerations involved in implementing an at-home proficiency test, including technological resources, security, and validity concerns. The data were then analyzed descriptively. The results revealed that the Language Center used Safe Exam Browser to deliver the test and Zoom to supervise the test takers in real time. The proctors could pause the test and privately question a test taker using Zoom's breakout room feature. The validity of the test was claimed not to be a concern, since the test provider used the same question form as the offline version. In addition, the Language Center expressed exhaustion in carrying out the online test and therefore suggested developing a less complicated at-home testing procedure.
DOI: http://dx.doi.org/10.22373/ej.v10i2.15899