Friday, February 20, 2009


Online Student Course Evaluation Systems: Effective Strategies and Best Practices

Abstract

The study evaluates online course evaluation systems and explores faculty and student experiences, perceptions, and preferences for evaluation. In addition, traditional constructs such as response rates, quality of responses, and access are also evaluated. Results of the study provide an effective basis for institutional decision-making and practical evidence for assuring the best course evaluation data possible. Experiences and best practices for the implementation of online student course evaluations in higher education will also be highlighted.

Introduction

Student course evaluations of teaching are a time-honored tradition within most higher education institutions. For administrators, they are important indicators of course and curriculum quality, and they often weigh heavily in tenure and promotion decisions. In addition, they can be used on an individual basis for teaching and course improvement (Brown et al. 1997; Brockbank & McGill 1998). Because of their importance in instructional and institutional decision-making, student evaluations of teaching have become a deep-rooted institutional fixture. Early investigations of course evaluation methods (paper-based vs. online) have yielded some interesting and distressing results. Perhaps the most universal difference reported is a lower response rate (Baum et al. 2001; Hardy 2003; Carini et al. 2003; Sax et al. 2002), which faculty fear can contribute to non-response bias, although such fears are often exaggerated. Other studies, however, report that the differences in response rate have narrowed over time (Johnson 2002; Thorpe 2002). Another difference noted by a number of studies is a higher quality of written responses in online student course evaluations (Johnson 2002; Kasiar et al. 2001; Layne et al. 1999; Ravelli 2000). From a pedagogical perspective, one example illustrates how online course evaluation systems can help facilitate improvements in teaching and learning practices by providing more timely access to evaluation results (Tucker et al. 2003). From an administrative perspective there are demonstrated benefits as well, such as significant savings in time and materials: one study found that staff workload decreased from approximately 30 hours spent on paper-based evaluations to just 1 hour spent with online evaluations (Kasiar et al. 2001).
Still, it is not clear that the best practices and practical evidence from this body of research have benefited institutional adoption and implementation of online course evaluation systems, given the slow adoption of these systems compared to other educational technologies in higher education. The study intends to understand faculty and student experiences, perceptions, and preferences in regard to course evaluations, while also providing an updated evaluation of traditional constructs collected in earlier studies, such as response time, response quality, and response rate.

Theoretical Framework

To grasp the complex nature of any organization, the current study utilizes a socio-technical approach as its primary theoretical perspective. This approach suggests that the social and technical aspects are equally important when implementing, using, or evaluating information technology (Kling et al. 2003). The conceptualization of educational technology systems as socio-technical systems is further supported by Moore & Kearsley (1996), who suggest that educational technologies are complex systems involving a wide variety of technological, organizational, social, and instructional components. In conceptualizing an online student course evaluation system as a socio-technical system (STS), it is useful to note the important social and technological aspects that define it. An STS is comprised of identifiable populations, groups, incentives, actors, undesired interactions, flows, and choice points (Kling et al. 2003). It is our belief that the socio-technical approach will provide a framework for more thoroughly determining the advantages and disadvantages of online course evaluations, and will enable better procedures and best practices in our results by accounting for all aspects of the system.

Importance of Study

As recently as five years ago, few colleges and universities were using online student course evaluations of teaching.
In fact, a recent report on higher education indicates that only 1% of the nation's most "wired" universities reported institution-wide use of online course evaluation systems (Hmieleski 2000), even though most of their other systems are highly technological, including extensive use of learning management systems, email communication, podcasting, blogs, wikis, and wireless technologies (Carlson 2005; Hoffman 2003). Along with the overall increase in the use of technology in all aspects of higher education, many recognize that there are significant advantages to using online student course evaluation systems, including ease and economy of administration, more detailed and thoughtful student responses, more accurate data collection and reporting, more class time, and more timely instructor access to results (Johnson 2002; Hardy 2003; Kasiar et al. 2001; Layne et al. 1999; Ravelli 2000). This change is driven not only by administrations, in an effort to save costs and boost staff productivity, but also by students themselves, who increasingly demand the latest technologies (Carlson 2005). The study utilizes a mixed-methods approach, combining survey data with student evaluation data from both paper-based and online evaluations in order to uncover advantages and disadvantages of online course evaluations. Data have been collected from a purposeful sample of in-class, paper-based course evaluations (N=45) and from a random sample of online course evaluations (N=95). All courses and participants are from a School of Education at a large midwestern state university. Survey and qualitative data have been collected from instructors and students, and data have been mined from the university's learning management system to provide triangulation on several of the study's constructs. A series of qualitative interviews has also been conducted with a small subset of student and faculty participants.
Preliminary results indicate positive outcomes from the school's pilot implementation of online course evaluations using the STS approach with regard to faculty and student experiences and preferences, and these outcomes are in line with those of other campus educational technologies. Response rates, response time, and response quality also show overall positive trends and yield some interesting findings. The completed results will help instructional technologists and educational organizations apply best practices for implementing online course evaluation systems as socio-technical systems, and will provide strategies for those who would like to improve upon current course evaluation practices.

Conclusion

There is a need for current research on this topic, given the changing technology climate and the apparent lag in implementation of online course evaluation systems by higher education institutions. The current study will step beyond previous comparison studies to evaluate online course evaluation system use while also exploring the reality of faculty and student experiences, perceptions, and preferences. The results of this study will provide best practices for the implementation of online student course evaluations in higher education.

References

Baum, P., Chapman, K. S., Dommeyer, C. J., & Hanna, R. W. (2001). Online versus in-class student evaluations of faculty. Paper presented at the Hawaii Conference on Business, Honolulu.

Brockbank, A., & McGill, I. (1998). Facilitating reflective learning in higher education. Bristol, England: Open University Press.

Brown, S., Race, P., & Smith, B. (1997). 500 tips for quality enhancement in universities and colleges. London, England: Kogan Page.

Carini, R. M., Hayek, J. C., Kuh, G. D., & Ouimet, J. A. (2003). College student responses to web and paper surveys: Does mode matter? Research in Higher Education, 44(1), 1-19.

Carlson, S. (2005). The net generation goes to college. The Chronicle of Higher Education, Information Technology. Retrieved December 4, 2007 from http://chronicle.com/weekly/v52/i07/07a03401.htm

Dommeyer, C. J., Baum, P., Hanna, R. W., & Chapman, K. S. (2002). Attitudes of business faculty toward two methods of collecting teaching evaluations: Paper vs. online. Assessment & Evaluation in Higher Education, 29(5), 611-624.

Hardy, N. (2003). Online ratings: Fact and fiction. In D. L. Sorenson & T. D. Johnson (Eds.), New Directions for Teaching and Learning, 96(4), 31-38.

Hmieleski, K. (2000). Barriers to online evaluation: Surveying the nation's top 200 most wired colleges. Troy, NY: Interactive and Distance Education Assessment Laboratory, Rensselaer Polytechnic Institute (unpublished report).

Hoffman, K. M. (2003). Online course evaluation and reporting in higher education. New Directions for Teaching & Learning, 96, 25-30.

Johnson, R. (2002). Online student ratings: Will students respond? In D. L. Sorenson & T. D. Johnson (Eds.), New Directions for Teaching and Learning, 96(4), 49-59.

Kasiar, J. B., Schroeder, S. L., & Holstad, S. G. (2001). Comparison of traditional and web-based course evaluation processes in a required, team-taught pharmacotherapy course. American Journal of Pharmaceutical Education, 63(2), 68-70.

Kling, R., McKim, G., & King, A. (2003). A bit more to IT: Scholarly communication forums as socio-technical interaction networks. Journal of the American Society for Information Science and Technology, 54(1), 47-67.

Layne, B. H., DeCristofor, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40(2), 221-232.

Moore, M. G., & Kearsley, G. (1996). Distance education: A systems view. Belmont, CA: Wadsworth Publishing Company.

Ravelli, B. (2000). Anonymous online teaching assessments: Preliminary findings. Paper presented at the Annual National Conference of the American Association for Higher Education, June 14-18, 2000, Charlotte, NC.

Sax, L., Gilmartin, S., Keup, J., Bryant, A., & Plecha, M. (2002). Findings from the 2001 pilot administration of Your First College Year: National norms. Higher Education Research Institute, University of California. Retrieved December 7, 2007 from http://www.gseis.ucla.edu/heri/yfcy/yfcy_report_02.pdf

Tucker, B., Jones, S., Straker, L., & Cole, J. (2003). Course evaluation on the web: Facilitating student and teacher reflection to improve learning. New Directions for Teaching and Learning, 96(Winter), 81-94.
