An Analysis of English Final Semester Test Items at SMK Kencana Sakti Haumeni
DOI: https://doi.org/10.33323/l.v3i2.123
Keywords: test quality, discrimination index, English assessment, vocational education, item analysis
Abstract
This study aims to analyze the quality of the English final semester test items developed by the English teacher of SMK Kencana Sakti Haumeni. The research evaluates the test's effectiveness by examining the difficulty level and discrimination power of each item. Using a quantitative descriptive approach, the study analyzed 50 multiple-choice items from the English summative test administered to second-year students. The data were obtained through documentation, namely students' answer sheets and the original test papers. The analysis employed Heaton's (1988) formulas to determine the facility value and discrimination index of each item, supported by the classifications of Sumarsono and Arikunto (Hartati & Yogi, 2019). The results revealed that 18% of the items were difficult, 50% were moderate, and 32% were easy. In terms of discrimination power, 22% of the items were categorized as good, 30% as satisfactory, 40% as poor, and 8% as bad or negative. These findings suggest that although most test items met acceptable quality standards, a considerable number still require revision or replacement to better distinguish between high and low achievers. The study highlights the importance of item analysis as a reflective tool for teachers to ensure that classroom assessments are valid, reliable, and aligned with learning objectives. Regular evaluation of test items is therefore essential to improve the overall quality of English language assessment in schools.
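To make the item statistics concrete, the sketch below illustrates the kind of computation the abstract describes. It is a minimal illustration, not the authors' instrument: it assumes Heaton's (1988) standard formulas, facility value FV = R / N and discrimination index D = (Correct U − Correct L) / n, and the function names, cut-off values, and worked numbers are hypothetical rather than taken from the study.

```python
# Minimal sketch of item analysis using Heaton's (1988) formulas (assumed):
#   FV = R / N                         (facility value / difficulty)
#   D  = (Correct_U - Correct_L) / n   (discrimination index)
# where R is the number of correct answers, N the number of test takers,
# Correct_U / Correct_L the correct answers in the upper and lower groups,
# and n the size of one group.

def facility_value(correct_responses: int, total_students: int) -> float:
    """Proportion of students who answered the item correctly (FV = R / N)."""
    return correct_responses / total_students

def discrimination_index(correct_upper: int, correct_lower: int, group_size: int) -> float:
    """How well the item separates high from low scorers; negative values flag flawed items."""
    return (correct_upper - correct_lower) / group_size

def classify_difficulty(fv: float) -> str:
    # Illustrative cut-offs only; the article follows Sumarsono and Arikunto's
    # classification, whose exact boundaries may differ.
    if fv < 0.30:
        return "difficult"
    if fv <= 0.70:
        return "moderate"
    return "easy"

# Hypothetical example: 14 of 20 students answer an item correctly;
# 9 of the 10 upper-group students and 5 of the 10 lower-group students get it right.
fv = facility_value(14, 20)           # 0.70 -> moderate
d = discrimination_index(9, 5, 10)    # 0.40 -> reasonably discriminating
print(fv, classify_difficulty(fv), d)
```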
References
Ary, D., Jacobs, L. C., & Sorensen, C. (2009). Introduction to research in education (8th ed.). Belmont, CA: Wadsworth Cengage Learning.
Brown, H. D. (2004). Language assessment: Principles and classroom practices. White Plains, NY: Pearson Education.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage Publications.
Hartati, T., & Yogi, A. (2019). An analysis of teacher-made English test items based on item difficulty and discrimination index. Journal of English Language Teaching and Linguistics, 4(2), 89–101. https://doi.org/10.21462/jeltl.v4i2.234
Heaton, J. B. (1988). Writing English language tests. New York, NY: Longman.
Kusumawati, D., & Hadi, M. (2018). The role of assessment in improving English learning outcomes. International Journal of English Language Education, 6(1), 45–57. https://doi.org/10.5296/ijele.v6i1.12703
Maharani, I. A. P., & Putro, N. H. (2020). An analysis of multiple-choice test items for senior high school students. Indonesian Journal of English Education, 7(2), 145–158. https://doi.org/10.15408/ijee.v7i2.17612
McCowan, R. J., & McCowan, S. C. (1999). Item analysis for criterion-referenced tests. New York, NY: State University of New York Press.
Noviasmy, F., & Risma, H. (2024). Evaluating teacher-made tests: A case study of English summative assessment in Indonesia. TEFLIN Journal, 35(1), 77–95. https://doi.org/10.15639/teflin.v35i1.77-95
Pradanti, D. A., Martono, A., & Sarosa, T. (2018). An analysis of English test items based on validity, reliability, and discrimination index. Indonesian Journal of Language Teaching, 3(1), 12–20.
Semiun, E., & Luruk, S. (2020). Teachers’ evaluation practices in EFL classrooms: A study in Kupang. Journal of Educational Research and Practice, 10(4), 223–233.
Suardipa, I. P., & Primayana, K. H. (2018). The implementation of summative assessment in English language teaching. Jurnal Pendidikan Bahasa Inggris, 6(2), 112–122.
Sumarsono, P., & Arikunto, S. (2015). Dasar-dasar evaluasi pendidikan [Fundamentals of educational evaluation]. Jakarta: Bumi Aksara.
