Value Added Methodology - An Alternative Approach to Quality Measurement in Higher Education

Author(s)

Nkrumah, Maame Afua

Pages: 38-46 | DOI: 10.5281/zenodo.3456953

Volume 8 - July 2019 (07)

Abstract

This paper sought to present research evidence in support of the 'value added' methodology as an alternative approach to quality measurement in higher education. The study was informed by several issues, including the need for modern methods of measuring quality in higher education. A dataset of over 6,000 students was therefore used to demonstrate how the 'value added' methodology and multilevel modelling statistical techniques can be used to measure quality. The dataset was created from secondary data (examination scores and prior attainment in English) and administrative records (cohort) covering 16 academic departments. The issues explored include the strength of adjusted scores produced under the value added approach as against 'raw' (unadjusted) scores. The overall finding was that the 'value added' methodology is relatively more informative and fairer with respect to the measurement of quality. Also highlighted are issues needing policy, practice and research attention in line with the study findings.
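To make the approach concrete, below is a minimal sketch, in Python with the statsmodels library, of the kind of two-level 'value added' model the abstract describes: students (level 1) nested in academic departments (level 2), with examination scores adjusted for prior attainment. The data are simulated, and every name in the sketch (exam_score, prior_english, department) is an illustrative assumption rather than the study's actual variables or code.

    # Minimal value-added sketch on simulated data (NOT the study's dataset).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)

    # Simulate roughly 6,000 students across 16 departments, echoing the
    # scale reported in the abstract.
    n_students, n_departments = 6000, 16
    department = rng.integers(0, n_departments, n_students)
    true_dept_effect = rng.normal(0, 4, n_departments)  # hidden 'value added'
    prior_english = rng.normal(50, 10, n_students)      # prior attainment
    exam_score = (20 + 0.6 * prior_english
                  + true_dept_effect[department]
                  + rng.normal(0, 8, n_students))

    df = pd.DataFrame({"exam_score": exam_score,
                       "prior_english": prior_english,
                       "department": department})

    # Two-level random-intercept model: exam scores adjusted for prior
    # attainment, with departments as the grouping (level-2) units.
    model = smf.mixedlm("exam_score ~ prior_english", df,
                        groups=df["department"])
    result = model.fit()

    # Adjusted ('value added') department scores are the estimated
    # department-level random intercepts.
    value_added = pd.Series({dept: eff["Group"]
                             for dept, eff in result.random_effects.items()})

    # Raw (unadjusted) department means, for comparison.
    raw_means = df.groupby("department")["exam_score"].mean()

    print(pd.DataFrame({"raw_mean": raw_means,
                        "value_added": value_added}))

Read this way, a department with a high raw mean but a modest value-added estimate is likely one that simply recruits students with strong prior attainment, which is the sense in which the abstract argues that adjusted scores are the fairer measure of quality.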

Keywords

prior attainment, value added and raw scores, multilevel modelling, quality

References

  1. Astin, A. W. (1993). What matters in college? Four critical years revisited. San Francisco, CA: Jossey-Bass.
  2. Ballou, D., Sanders, W., & Wright, P. (2004). Controlling for student background in value-added assessment of teachers. Journal of Educational and Behavioral Statistics, 29(1), 37-65.
  3. Beardsley, A. A. (2008). Methodological concerns about the Education Value-Added Assessment System. Educational Researcher, 37(2), 65-75.
  4. Bradshaw, R. H. (1992). Individual attributes as predictors of social status in small groups of laying hens. Applied Animal Behaviour Science, 34(4), 359-363.
  5. Breen, R., & Jonsson, J. O. (2005). Inequality of opportunity in comparative perspective: Recent research on educational attainment and social mobility. Annual Review of Sociology, 31, 223-243.
  6. Bryk, A., & Raudenbush, S. (1992). Hierarchical linear models in social and behavioral research: Applications and data analysis methods. Newbury Park, CA: Sage.
  7. Chetty, D. (1992). School efficiency and effectiveness: Pointers for educational transformation in South Africa. Paper presented to the Economics of Education Conference, University of Cape Town.
  8. Chisholm, L., & Vally, S. (1996). The culture of learning and teaching in Gauteng schools. Johannesburg: University of Witwatersrand Education Policy Unit.
  9. Christie, P., & Potterton, M. (1997). School development in South Africa: A research project to investigate strategic interventions for quality improvement in South African schools. Final Report. Johannesburg, South Africa: University of Witwatersrand.
  10. Coates, H. (2006). Student engagement in campus-based and online education: University connections. Taylor & Francis.
  11. Creswell, J. (2009). Research design: Qualitative, quantitative and mixed methods approaches (3rd ed.). London: Sage.
  12. Gersick, C. (1988). Time and transition in work teams: Toward a new model of group development. Academy of Management Journal, 31(1), 9-41.
  13. Harvey, L., & Knight, P. T. (1996). Transforming higher education. Buckingham: Society for Research in Higher Education and Open University Press.
  14. Heck, R. H. (2007). Examining the relationship between teacher quality as an organizational property of schools and students’ achievement and growth rates. Educational Administration Quarterly, 43(4), 399-432.
  15. Hopkins, D., & Harris, A. (2000). Differential strategies for school development. In D. Van Veen & C. Day (Eds.), Professional development and school improvement: Strategies for growth. Mahwah, NJ: Erlbaum.
  16. Hopkins, D., West, M., Ainscow, M., Harris, A., & Beresford, J. (1997). Creating the conditions for classroom improvement. London: David Fulton.
  17. Hulpia, H., & Valcke, M. (2004). The use of performance indicators in a school improvement policy: The theoretical and empirical context. Evaluation & Research in Education, 18(1-2), 102-119.
  18. Johnes, J. (2006). Measuring efficiency: A comparison of multilevel modelling and data envelopment analysis in the context of higher education. Bulletin of Economic Research, 58(2), 75-104.
  19. Johnes, J., & Taylor, J. (1990). Performance indicators in higher education: UK universities. Open University Press and the Society for Research into Higher Education.
  20. Kothari, C. R. (2006). Research methodology: Methods and techniques (2nd ed.). Delhi: New Age International Publishers.
  21. Marginson, S. (1993). Arts, science and work: Work-related skills and the generalist courses in higher education. Canberra: AGPS.
  22. Motala, S. (2001). Quality and indicators of quality in South African education: a critical appraisal. International Journal of Educational Development, 21(1), 61-78.
  23. Mortimore, P. (2000). The road to school improvement. Rotterdam: Swets & Zeitlinger.
  24. Nkrumah, M. A. (2018). The relevance of teacher factors in understanding tertiary students’ performances. Quality Assurance in Education, 26(4), 476-488.
  25. Opdenakker, M. C., & Damme, J. (2001). Relationship between school composition and characteristics of school process and their effect on mathematics achievement. British Educational Research Journal, 27(4), 407-432.
  26. Peers, I. S., & Johnston, M. (1994). Influence of learning context on the relationship between A-level attainment and final degree performance: A meta-analytic review. British Journal of Educational Psychology, 64, 1-18.
  27. Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage.
  28. Rodgers, T. (2007). Measuring value added in higher education: A proposed methodology for developing a performance indicator based on the economic value added to graduates. Education Economics, 15(1), 55-74.
  29. Rutter, M., & Maughan, B. (2002). School effectiveness findings 1979-2002. Journal of School Psychology, 40(6), 451-475.
  30. Scheerens, J., Glas, C., & Thomas, S. (2003). Educational Evaluation, Assessment and Monitoring. Lisse: Swets & Zeitlinger.
  31. Smith, J., & Naylor, R. (2001a). Determinants of degree performance in UK universities: A statistical analysis of the 1993 student cohort. Oxford Bulletin of Economics and Statistics, 63, 29-60.
  32. Smith, J. P., & Naylor, R. A. (2001b). Dropping out of university: A statistical analysis of the probability of withdrawal for UK university students. Journal of the Royal Statistical Society, Series A (Statistics in Society), 164(2), 389-405.
  33. Tam, M. (2001). Measuring quality and performance in higher education. Quality in Higher Education, 7(1), 47-54.
  34. Teddlie, C., & Reynolds, D. (2000). The international handbook of school effectiveness research. London & New York: Falmer Press.
  35. Thomas, S., Peng, W. J., & Gray, J. (2007). Modelling patterns of improvement over time: value added trends in English secondary school performance across ten cohorts. Oxford Review of Education, 33(3), 261-295.
  36. Thomas, S. (2001). Dimensions of secondary school effectiveness: Comparative analyses across regions. School Effectiveness and School Improvement, 12(3), 285-322.
  37. Thrupp, M. (2010). Emerging school-level education policy under National 2008–9. New Zealand Annual Review of Education, 19, 30-51.
  38. Thum, Y. M. (2003). Measuring progress toward a goal estimating teacher productivity using a multivariate multilevel model for value-added analysis. Sociological Methods & Research, 32(2), 153-207.
  39. Willms, J. D. (1992). Monitoring school performance: A guide for educators. London: Falmer Press.
