Ika Kana Trisnawati


Recent years have seen the growing popularity of Computer-Based Tests (CBTs) across disciplines and for various purposes, although Paper-and-Pencil Based Tests (P&Ps) remain in use. However, many question whether CBTs outperform P&Ps in effectiveness, or whether CBTs can serve as a valid measuring tool compared to P&Ps. This paper presents a comparison of CBTs and P&Ps, along with examinee perspectives on each, in order to determine whether doubts should arise over the emergence of CBTs alongside the classic P&Ps. Findings showed that CBTs are advantageous in that they are both more efficient (reducing testing time) and equally effective (maintaining test reliability) compared to the P&P versions. Nevertheless, the variables of a CBT (e.g., study design, computer algorithm) still need to be well designed for its scores to be comparable to those of the P&P tests, since score equivalence is one of the pieces of validity evidence needed for a CBT.


Computer-Based Tests; Paper-and-Pencil Based Tests; comparability; examinee perspectives; validity; reliability




American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (1999). Standards for educational and psychological testing. Washington, DC: Author.

Bennett, R. E., & Rock, D. A. (1995). Generalizability, validity, and examinee perceptions of a computer-delivered formulating hypothesis test. Journal of Educational Measurement, 32(1), 19–36.

Gallagher, A., Bennett, R. E., Cahalan, C., & Rock, D. A. (2002). Validity and fairness in technology-based assessment: detecting construct-irrelevant variance in an open-ended, computerized mathematics task. Educational Assessment, 8(1), 27–41.

Green, B. F., Bock, R. D., Humphreys, L. G., Linn, R. L., & Reckase, M. D. (1984). Technical guidelines for assessing computerized adaptive tests. Journal of Educational Measurement, 21(4), 347-360.

Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: a meta-analysis. Psychological Bulletin, 114(3), 449-458.

Neuman, G., & Baydoun, R. (1998). Computerization of paper-and-pencil tests: when are they equivalent? Applied Psychological Measurement, 22(1), 71-83.

Olsen, J. B. (2000). Guidelines for computer-based testing. Retrieved May 17, 2008 from

Pomplun, M., Frey, S., & Becker, D. (2002). The score equivalence of paper-and-pencil and computerized versions of a speeded test of reading comprehension. Educational and Psychological Measurement, 62(2), 337-354.

Russell, M., Goldberg, A., & O’Connor, K. (2003). Computer-based testing and validity: a look back and into the future. Retrieved May 2, 2008 from

Wang, T., & Kolen, M. J. (2001). Evaluating comparability in computerized adaptive testing: issues, criteria and an example. Journal of Educational Measurement, 38(1), 19-49.

Wang, S., Jiao, H., Young, M. J., Brooks, T., & Olson, J. (2008). Comparability of computer-based and paper-and-pencil testing in K-12 reading assessments: a meta-analysis of testing mode effects. Educational and Psychological Measurement, 68(1), 5-24.

Wise, S. L., & Kingsbury, G. G. (2000). Practical issues in developing and maintaining a computerized adaptive testing program. Psicológica, 21, 135-155.

Wise, S. L., & Kong, X. (2005). Response time effort: a new measure of examinee motivation in computer-based tests. Applied Measurement in Education, 18(2), 163–183.

Zenisky, A. L., & Sireci, S. G. (2002). Technological innovations in large-scale assessment. Applied Measurement in Education, 15(4), 337-362.





All works are licensed under CC-BY

Englisia Journal
© Author(s) 2019.
Published by Center for Research and Publication UIN Ar-Raniry and Department of English Language Education UIN Ar-Raniry.
