Mapeo científico como técnica de investigación: puntos de corte en pruebas de evaluación educativa referidas a criterios como campo de conocimiento

  1. Melissa Villalobos García
  2. José María Marbán Prieto
  3. Rocío Anguita
Journal: ReiDoCrea: Revista electrónica de investigación y docencia creativa

ISSN: 2254-5883

Year of publication: 2021

Volume: 10

Pages: 1-18

Type: Article

DOI: 10.30827/DIGIBUG.70947

Abstract

Using the literature review strategy known as science mapping, this study characterizes the intellectual structure of the field of knowledge on cut-off points in criterion-referenced assessment tests. Based on the analysis of 601 scientific articles, trends in the evolution of the field and its main lines of publication were identified from the authors, the citations, and the interactions between them. In addition, the identification of nodes showed that the topic has been under development for several decades, with the volume of scientific production growing markedly from the year 2000 onward. The field shows a progression from methodological questions toward specialization in applied assessment domains such as education and health.
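
The reference list below cites bibliometrix (Aria & Cuccurullo, 2017) and R (R Core Team, 2018) among the science-mapping tools. As an illustration only, a minimal R sketch of such a workflow might look as follows; the input file name and all parameters are hypothetical placeholders, not the authors' actual pipeline:

```r
# Minimal science-mapping sketch in R with the bibliometrix package
# (Aria & Cuccurullo, 2017). The file name, database source, and plot
# parameters are hypothetical; the original study analyzed a corpus of
# 601 articles on cut-off points in criterion-referenced tests.
library(bibliometrix)

# Import a bibliographic export (e.g., a Web of Science plain-text file).
M <- convert2df("savedrecs.txt", dbsource = "wos", format = "plaintext")

# Descriptive analysis: annual production, most-cited authors and sources.
results <- biblioAnalysis(M, sep = ";")
summary(results, k = 10)

# Build and plot a reference co-citation network, whose clusters
# correspond to the "nodes" and lines of publication in the abstract.
NetMatrix <- biblioNetwork(M, analysis = "co-citation",
                           network = "references", sep = ";")
networkPlot(NetMatrix, n = 30, type = "fruchterman",
            Title = "Reference co-citation network")
```

VOSviewer (Van Eck & Waltman, 2018) and CiteSpace (Chen, 2018), also cited below, support comparable co-citation and co-word mappings through graphical interfaces.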

Bibliographic References

  • American Educational Research Association [AERA]. (1999). Standards for educational and psychological testing. Washington, DC: American Psychological Association.
  • Anderson, J., Corbett, A., Koedinger, K. & Pelletier, R. (1995). Cognitive Tutors: Lessons Learned. Journal of the Learning Sciences, 4(2), 167-207.
  • Angarita, L. (2014). Estudio bibliométrico sobre uso de métodos y técnicas cualitativas en investigación publicada en bases de datos de uso común entre el 2011-2013. Revista Iberoamericana de Psicología: Ciencia y Tecnología, 7(2), 67-76.
  • Angoff, W. (1971). Scales, norms, and equivalent scores. In R. L. Thorndike (Ed.), Educational measurement (pp. 508–600). Washington, DC: American Council on Education.
  • Angoff, W. (1984). Scales, norms, and equivalent scores. Princeton, NJ: Educational Testing Service.
  • Aranguren, M. & Hoszowski, A. (2017). Aprender 2016. Serie de documentos técnicos 5: Bookmark, establecimiento de puntos de corte. Buenos Aires: Ministerio de Educación y Deportes.
  • Aria, M. & Cuccurullo, C. (2017). Bibliometrix: An R-tool for comprehensive science mapping analysis. Journal of Informetrics, 11(4), 959-975.
  • Barbosa, J., Barbosa, J. & Rodríguez, M. (2013). Revisión y análisis documental para estado del arte: Una propuesta metodológica desde el contexto de la sistematización de experiencias educativas. Investigación Bibliotecológica, 27(61), 83-105.
  • Barrios, M. & Cosculluela, A. (2013). Fiabilidad. In J. Meneses, M. Barrios, A. Bonillo, A. Cosculluela, L. Lozano, J. Turbany & S. Valero (Eds.), Psicometría (pp. 75-141). Barcelona: UOC.
  • Berk, R. (1986). A consumer’s guide to setting performance standards on criterion referenced tests. Review of Educational Research, 56, 137–172.
  • Bernal, D., Martínez, L., Parra, A. & Jiménez, J. (2015). Investigación documental sobre calidad de la educación en instituciones educativas del contexto Iberoamericano. Revista Entramados Educación y Sociedad, 2(2), 107-124.
  • Beuk, C. (1984). A method for reaching a compromise between absolute and relative standards in examinations. Journal of Educational Measurement, 21, 147–152.
  • Buckendahl, C., Smith, R., Impara, J. & Plake, B. (2002). A comparison of Angoff and Bookmark Standard Setting Methods. Journal of Educational Measurement, 39(3), 253-263.
  • Capella, J., Smith, S., Philp, A., Putnam, T., Gilbert, C., Fry, W., Harvey, E., Wright, A., Henderson, K., Baker, D., Ranson, S. & Remine, S. (2010). Teamwork training improves the clinical care of trauma patients. Journal of Surgical Education, 67(6), 439-443.
  • Cetin, S. & Gelbal, S. (2013). A comparison of Bookmark and Angoff Standard Setting Methods. Educational Sciences: Theory & Practice, 13(4), 2169-2175.
  • Chen, C. (2017). Science Mapping: A Systematic Review of the Literature. Journal of Data and Information Science, 2(2), 1–40.
  • Chen, C. (2018). How to Use CiteSpace. Lean Publishing.
  • Cizek, G. & Bunch, M. (2007). Standard setting: A guide to establishing and evaluating performance standards on tests. California: Sage Publications.
  • Cizek, G., Bunch, M. & Koons, H. (2004). Setting performance standards: Contemporary methods. Educational Measurement: Issues and Practice, 23(4), 31-49.
  • Cortés, J. (2008). Web of Science: Termómetro de la producción internacional de conocimiento: Ventajas y limitaciones. Culcyt, 5(29), 5-15.
  • Díaz-Iglesias, S., Blanco-González, A. & Orden-Cruz, C. (2019). Science mapping analysis of change management. Espacios, 40(32), 17-29.
  • García, P., Abad, F., Olea, J. & Aguado, D. (2012). Un nuevo método de standard setting basado en la TRI: aplicación a eCat-Listening. Informe de investigación eCAT 12-01. Madrid: Universidad Autónoma de Madrid.
  • Hamme, C. & Shulz, M. (2011). Reliability and validity of Bookmark-based methods for standard setting: Comparisons to Angoff-based methods in the National Assessment of Educational Progress. Educational Measurement: Issues and Practice, 30(2), 3–14.
  • Hart, C. (1998). Doing a literature review: Releasing the social science research imagination. London: Sage Publications.
  • Heller, P. & Hollabaugh, M. (1992). Teaching problem-solving through cooperative grouping. Designing problems and structuring groups. American Journal of Physics, 60(7), 637-644.
  • Hofstee, B. (1983). The case for compromise in educational selection and grading. In S. B. Anderson & J. S. Helmick (Eds.), On educational testing (pp. 109–127). San Francisco: Jossey-Bass.
  • Hoyos, C. (2010). Un modelo para la investigación documental. Guía teórico-práctica sobre construcción de Estados del Arte con importantes reflexiones sobre la investigación. Medellín: Señal Editora.
  • Jornet, J. & Gonzales, J. (2009). Evaluación criterial: determinación de estándares de interpretación (EE) para pruebas de rendimiento educativo. Estudios sobre Educación, 16, 103-123.
  • Kaufman, D., Mann, K., Muijtjens, A. & Van der Vleuten, C. (2000). A comparison of standard-setting procedures for an OSCE in undergraduate medical education. Academic Medicine, 75(3), 267-271.
  • Margolis, M. & Clauser, E. (2014). The impact of examinee performance information on judges’ cut scores in modified Angoff standard-setting exercises. Educational Measurement: Issues and Practice, 33(1), 15–22.
  • Montero, I. & León, O. (2007). A guide for naming research studies in Psychology. International Journal of Clinical and Health Psychology, 7(3), 847-862.
  • Moss, P. (1992). Shifting Conceptions of Validity in Educational Measurement: Implications for Performance Assessment. Review of Educational Research, 62(3), 229-258.
  • Muñiz, J. (2018). Introducción a la Psicometría: Teoría Clásica y TRI. Madrid: Pirámide.
  • Nedelsky, L. (1954). Absolute grading standards for objective tests. Educational and Psychological Measurement, 14, 3–19.
  • Norcini, J. (2003). Setting standards on educational tests. Medical Education, 37(5), 464-469.
  • Popham, J. (1983). Evaluación basada en criterios. Madrid: Magisterio Español.
  • R Core Team (2018). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing.
  • Reckase, M. (2006). A conceptual framework for a psychometric theory for standard setting with examples of its use for evaluating the functioning of two standard setting methods. Educational Measurement: Issues and Practice, 25(2), 4-18.
  • Rethans, J., Norcini, J., Baron-Maldonado, M., Blackmore, D., Jolly, B., LaDuca, T., Lew, S., Page, G. & Southgate, L. (2002). The relationship between competence and performance: implications for assessing practice performance. Medical Education, 36(10), 901-909.
  • Rodríguez, M., Alcaide, L. & Cobo, J. (2018). Analyzing the scientific evolution and impact of e-Participation research in JCR journals using science mapping. International Journal of Information Management, 40, 111–119.
  • Jin, R., Zou, P., Piroozfar, P., Wood, H., Yang, Y., Yan, L. & Han, Y. (2019). A science mapping approach based review of construction safety research. Safety Science, 113, 285-297.
  • George, S., Haque, S. & Oyebode, F. (2006). Standard setting: Comparison of two methods. BMC Medical Education, 6(46), 1-6.
  • Scharnhorst, A., Börner, K. & Van den Besselaar, P. (2012). Models of science dynamics: Encounters between complexity theory and information sciences. Dordrecht: Springer.
  • Sireci, S., Hambleton, R. & Pitoniak, M. (2004). Setting passing scores on licensure examinations using direct consensus. CLEAR Exam Review, 15(1), 21-25.
  • Tiffin-Richards, S. & Pant, H. A. (2013). Setting standards for English foreign language assessment: Methodology, validation, and a degree of arbitrariness. Educational Measurement: Issues and Practice, 32(2), 15–25.
  • Van Eck, N. & Waltman, L. (2018). Manual for VOSviewer version 1.6.7. Netherlands: Universiteit Leiden.
  • Van Nijlen, D. & Janssen, R. (2008). Modeling judgments in the Angoff and contrasting-groups method of standard setting. Journal of Educational Measurement, 45(1), 45-63.
  • Vargas-Quesada, B., Chinchilla-Rodríguez, Z. & Perianes-Rodríguez, A. (2018). Science mapping artificial intelligence. In Conferencia de la Asociación Española para la Inteligencia Artificial (CAEPIA), I Workshop en Ciencia de Datos en Redes Sociales, Granada, October 23-26, 2018.
  • Zieky, M. & Livingston, S. (1977). Manual for setting standards on the basic skills assessment tests. Princeton, NJ: Educational Testing Service.