Moving Beyond Objective Testing in Online Assessment

Authors: H S Ashton, C E Beevers, C D Milligan, D K Schofield, R C Thomas and M A Youngson
Address: Scottish Centre for Research into On-Line Learning and Assessment, Heriot-Watt University, Riccarton, Edinburgh, EH14 4AS, United Kingdom.
Tel: 0044 (0)131 451 3251
Fax: 0044 (0)131 451 3249

Abstract

Computer Aided Assessment is traditionally seen as an efficient approach to testing large numbers of students using objective question types, with the emphasis firmly on measurement. This chapter describes a system which also seeks to improve student learning by enhancing the quality, sophistication and authenticity of the assessments delivered. The system supports students and tutors through the learning process by providing diagnostic and directed feedback to learners and a clear measurement of their true ability to the teacher. The chapter also looks at current work focused on developing assessment systems which can assess higher-order skills and begin to blur the boundary between learning and assessment.

Introduction

Online assessment has captured the imagination of many as the panacea for testing large numbers of students reliably and consistently, with marks immediately available for analysis and publication. Typically, this type of online assessment takes the form of objective testing, using multiple-choice and multiple-response questions to offer scalable, time-saving solutions for large groups of students. However, this focus has diverted attention away from many of the key benefits that online assessment offers to learning.

The experience of the authors in online assessment has been radically different in that, from the outset, the focus has been on supporting student learning. This has led to the use of online assessment in diagnostic, formative and summative modes; supporting independent learning; encouraging reflection; and involving both student and teacher in the process. This chapter describes the experiences of the authors through a range of projects over the last two decades in which Computer Aided Assessment (CAA) has been used to support learning and deliver assessment to students in both tertiary and secondary education in the United Kingdom. In the long term, online assessment can play a positive role in enhancing the quality of learning by:

•   Providing diagnostic and directed feedback to learners;
•   Supporting students and teachers through the learning process;
•   Increasing the interactivity of the learning experience;
•   Enhancing the quality and authenticity of the assessment delivered;
•   Helping to identify misconceptions; and
•   Aiding teachers/lecturers to grade the students they teach.

This chapter presents a case study of a series of CAA projects that, from their inception, have been different from the norm.

Background

Using a computer to aid the assessment of student performance has been an option for several decades. Various groups have pioneered the delivery of CAA in Higher Education in the UK and internationally since the mid-1980s, as reviewed elsewhere (Ashton et al., 2004). Within the UK, early projects and tools include: the Computer Aided Learning in Mathematics (CALM) project at Heriot-Watt University (Beevers et al., 1989, 1995, 1999), the Ceilidh/CourseMaster computer science programming software at Nottingham University (Benford et al., 1995) and various language tools, such as LUISA at Leeds University, among others (Bull, 1993). Since those early days the tools of CAA have advanced dramatically. Systems are now capable of supplying a range of question types well beyond the multiple-choice format, incorporating images, multimedia and animation. In many universities CAA is used for both formative and summative assessment in a variety of disciplines (Bull and McKenna, 2003).

In discussing the assessment of learning it is valuable to consider the types of learning that can be assessed and, particularly, which types currently lend themselves to CAA. Practitioners and developers frequently debate the effective pedagogical use of CAA. Themes include the potential to support and enhance learning through structured and directive formative assessment and feedback, and the capability of CAA to test different types of skills effectively and reliably. It is generally accepted that Bloom et al. (1956) provide a sensible taxonomy of educational objectives that applies to most academic subjects, and this taxonomy has been updated to reflect changes in educational practice (Anderson et al., 2001). In mainstream practice, CAA can already be applied to test the so-called lower-order skills (knowledge, comprehension and application), and attempts are appearing to provide automatic testing for the higher-order skills, or extended competencies, of analysis, synthesis and evaluation (Beevers and Paterson, 2003; Bull and Hesketh, in press).

Developments from CALM to PASS-IT

This case study encompasses a number of projects since 1985, as outlined below.

•   1985-1992: CALM (Computer Aided Learning in Mathematics). Network-delivered weekly tutorial material with a formative assessment component to support a first-year undergraduate Calculus course; later, CD-based courseware to support pre-university Mathematics (Scottish Higher).
•   1992-1996: Mathwise. National UK initiative to develop computer-based learning materials and assessment support for teaching the Mathematics part of the syllabus for the European Engineer.
•   1995-1998: Interactive Past Papers. CD-based assessment provision for Mathematics at Scottish Sixth Year Studies (SYS), A-Level, Scottish Higher, GCSE and Scottish Standard Grade levels.
•   1997-2001: CUE (see Paterson, 2002). Development of an online assessment engine.
•   2000-present: SCHOLAR (see Livingstone and Condie, 2004). E-learning initiative to develop materials and online assessments to support subjects across the school/university interface.
•   2002-present: PASS-IT (Project on ASsessment in Scotland using Information Technology). Research into the issues surrounding the online delivery of minimum-competency assessment into Scottish schools in a range of subjects and levels.

Through all of this work, the focus has been on the following main issues:

•   Non-objective questions (construction of answers, automatic marking);
•   Immediate feedback and support;
•   The provision for staged answers; and
•   Support for post-assessment feedback and reflection.

CALM started in the Department of Mathematics at Heriot-Watt University in Edinburgh in 1985. The University is a technological university with large groups of Engineering, Science and Business undergraduates. These subjects all require significant mathematical skills, and it was on the delivery of numerical and mathematical tests that attention was first focused. Initially, a series of weekly computerised tutorials was established to support the teaching of a first course on Calculus to large groups of Science and Engineering undergraduates. The resources in the computerised tutorial replaced the traditional pen-and-paper approach which, at the time, was struggling to provide suitable support for the large number of students attending the course. Each week throughout a 25-week course, mathematical topics were covered to the recipe:

•   Summary screens of the theory delivered in lectures;
•   Worked examples illustrating technique;
•   Motivating examples to inspire deeper study; and
•   A test section to allow students to follow their own progress and enable teachers to reflect on class performance.

The motivating examples were designed to engage the students in problems like calculating the escape speeds from a variety of planets, determining the optimal angle for the trajectory of a water jet in a game called Firefighter, and taking the largest plank around a corner in a tunnel in an exercise known affectionately as Escape from Colditz. However, from the outset the students chose to spend three-quarters of their time practising within the test section itself. The randomisation of parameters within questions had been an important early feature, designed to keep the questions fresh for re-use, as the sketch below illustrates.
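By way of illustration only, the following Python sketch shows how such parameter randomisation might work in outline. It is a hypothetical example, not the CALM implementation (whose internals are not reproduced here): the coefficient and exponent of a differentiation question are drawn at random, so each attempt presents a fresh variant of the same question.

    import random

    def make_derivative_question():
        # Hypothetical template: differentiate a*x^n. Randomising the
        # parameters a and n keeps the question fresh for re-use.
        a = random.randint(2, 9)
        n = random.randint(2, 5)
        question = "Differentiate f(x) = {0}x^{1} with respect to x.".format(a, n)
        answer = "{0}x^{1}".format(a * n, n - 1)   # d/dx(a*x^n) = a*n*x^(n-1)
        return question, answer

    question, answer = make_derivative_question()
    print(question)              # e.g. Differentiate f(x) = 7x^3 with respect to x.
    print("Expected:", answer)   # e.g. 21x^2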
Beyond Objective Questions

From the beginning of the CALM project it was considered important to allow students to input answers as mathematical expressions, rather than select an option from a multiple-choice format. This allowed students to construct their own answers and express them in a format they felt was appropriate. It also led to the CALM project dealing with the issue of marking mathematically equivalent expressions as correct even when they were in a different format to that stored in the computer (for example, a correct answer of 2x could also be input by a student as +2x or x+x, etc.). The CALM method of comparison for marking correct answers was ground-breaking at the time, though other groups have subsequently followed this approach (Cook et al., 2001; Ramsden, 2004).
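The comparison algorithm itself is not reproduced here, but one common way to mark mathematically equivalent expressions is to evaluate the student's answer and the stored answer at several random sample points and compare the results numerically. The Python sketch below illustrates that general idea only, and should not be read as CALM's method; it uses eval() over a restricted namespace for brevity, where a real assessment engine would parse answers safely.

    import math
    import random

    def equivalent(student_expr, stored_expr, samples=10, tol=1e-9):
        # Treat two expression strings in x as equal if they agree
        # numerically at several random sample points.
        names = {"sin": math.sin, "cos": math.cos, "exp": math.exp}
        for _ in range(samples):
            names["x"] = random.uniform(0.5, 2.0)   # avoid x = 0
            try:
                lhs = eval(student_expr, {"__builtins__": {}}, names)
                rhs = eval(stored_expr, {"__builtins__": {}}, names)
            except (ZeroDivisionError, ValueError):
                continue    # skip an unlucky sample point
            if abs(lhs - rhs) > tol:
                return False
        return True

    # 2*x, +2*x and x + x are all accepted against the stored answer:
    print(equivalent("x + x", "2*x"))    # True
    print(equivalent("+2*x", "2*x"))     # True
    print(equivalent("3*x", "2*x"))      # False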
This use of student-constructed mathematical expressions involved students using a string format to express their answers. For example, the fraction 1 over 2x would be typed as 1/(2x). Although students may initially find this confusing, most quickly become comfortable with the format. Nevertheless, to aid students in entering mathematical expressions, the development of an Input Tool became an important element of the Mathwise project, culminating in the one incorporated into the Interactive Past Papers CD (Beevers et al., 1997) (Figure 1). The Input Tool dynamically transformed the string format and displayed it on screen in a more familiar mathematical form, so that students could be sure the computer was interpreting their answers properly.

Figure 1 – Screenshot showing the use of the Input Tool to enter mathematical expressions
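The Input Tool was a bespoke component, but its parse-and-confirm step can be sketched with the sympy library in Python, purely to illustrate the idea: the linear string the student types is parsed and redisplayed in two-dimensional form for confirmation.

    import sympy
    from sympy.parsing.sympy_parser import parse_expr

    def render(answer_string):
        # Parse the student's linear string and return a two-dimensional
        # rendering, so they can confirm how it has been interpreted.
        expr = parse_expr(answer_string)
        return sympy.pretty(expr, use_unicode=False)

    print(render("1/(2*x)"))
    #  1
    # ---
    # 2*x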
Following an educational evaluation of the Mathwise project, groups of students piloted the use of computer tests to measure lower-order skills (Beevers et al., 1995) in a university first-year course on Algebra and Calculus at Heriot-Watt University. In the evaluation of this pilot study, students highlighted the need for confirmation of the computer's interpretation of their mathematical string expressions, and this led to the construction of the Input Tool.

More recently, the Project on ASsessment in Scotland using Information Technology (PASS-IT) began investigating the issues surrounding the delivery of online summative assessments into Scottish schools. As with Mathwise, the assessments were designed to measure lower-order skills; however, the online delivery mechanisms have placed restrictions on the practical implementation of an input tool. As a result, the current method is to render a student's submitted answer in a mathematically familiar form, and students may then modify their answer as appropriate.

As part of PASS-IT, investigations have begun into new measurement mechanisms that extend the range of questions that can be asked. Embedding multimedia applications in the assessment engine opens up a multitude of possibilities, allowing students to record answers to questions that were previously only available in paper-based assessment. For example, by integrating Macromedia Flash™ applications into the assessment engine, students are able to draw a line on a graph and, in music tests, to manipulate notes on a stave or annotate a score; the assessment engine is able to mark these answers automatically.

Partial Credit

Another issue identified by the evaluation of the Mathwise pilot study was the students' concern about partial credit. In traditional paper-based assessment, teachers can award partial credit for rough working and the application of the correct concepts (follow-through), even where the final answer is incorrect. In most online assessments, the practicalities of capturing students' working prove too great an obstacle, resulting in an all-or-nothing situation: students may make significant progress towards an answer, but become stuck, or make an arithmetical slip, and be awarded no marks for their efforts, their final answer not being mathematically equivalent to the stored one.

One solution to this issue was the introduction of optional steps. Initially, this idea was implemented as part of the formative assessment in Mathwise and Interactive Past Papers, where rapid feedback was given to both right and wrong answers. Good students could reinforce their learning through success, and weaker students had the choice of steps towards an answer. More recently, the notion of optional steps has been researched as a mechanism for the provision of partial credit in summative assessments (McGuire et al., 2002); one possible arrangement is sketched below.
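How optional steps are represented internally is not specified here, so the following Python sketch is one hypothetical arrangement rather than the PASS-IT design: a question carries a list of optional steps, a correct unaided answer earns full marks, and a student who opts for the steps can still accumulate partial credit for each stage completed.

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        prompt: str      # sub-question shown if the student opts for steps
        marks: int = 1   # partial credit available at this stage

    @dataclass
    class SteppedQuestion:
        prompt: str
        full_marks: int
        steps: list = field(default_factory=list)

        def score(self, final_correct, steps_passed, used_steps):
            # One plausible rule: full marks for a correct unaided answer;
            # otherwise credit accrues per completed step, with a mark
            # reserved for the final answer itself.
            if final_correct and not used_steps:
                return self.full_marks
            earned = sum(s.marks for s in self.steps[:steps_passed])
            if final_correct:
                earned += 1
            return min(earned, self.full_marks)

    q = SteppedQuestion(
        prompt="Find the stationary point of f(x) = x^2 - 4x.",
        full_marks=3,
        steps=[Step("Differentiate f(x)."), Step("Solve f'(x) = 0.")],
    )
    print(q.score(final_correct=False, steps_passed=2, used_steps=True))   # 2
    print(q.score(final_correct=True, steps_passed=0, used_steps=False))   # 3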
Supporting Students' Learning

The main reason the authors became involved in CAA was to support students' learning. The students welcomed the rapid feedback offered by CALM, and one of them spoke for many when he reported, "It is like having a human tutor in front of you all the time." However, there was recognition of the importance of involving the teacher in this process. The CALM assessment system collected a vast amount of information about the assessments the students were taking. The ability to review this data allowed a teacher to provide additional feedback to the students and to identify common areas of weakness that needed to be addressed.

To aid this process, a crude but effective reporting tool was developed that gathered up the student test records at the end of each week ready for human scrutiny. The lecturer then had to go through the data manually to see how each individual was progressing. This was time-consuming though worthwhile, as it meant that comments could be sent back to the students at the start of the next week. A reporting system was created which allowed the lecturer to view the class records, visit the file of any individual student, see their marks and review their answers to any of the questions. The lecturer could report back to students by creating text messages which, when stored on the file server, were displayed the next time the student logged on to the system. This became a powerful way of communicating with the large numbers of students using the system, and it provides an early example of a database results system with reporting capability. It should be noted that this approach pre-dated the widespread provision of email.

The modern-day version of this reporting system plays an important role in the PASS-IT project. Reporting tools have been developed for students, teachers and researchers, enabling immediate access to assessment data. The PASS-IT assessment engine records every action a student makes in an assessment, such as navigation to a question, every submitted answer, and the choice to view steps. If a user wishes to review or collate data over many students, assessments or attempts at a single assessment, this process quickly becomes unmanageable without a reporting system to collate, filter and present the data as meaningful information.
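As a final illustration, the Python sketch below shows what such an action log and a simple collation over it might look like. All names are hypothetical, and an in-memory list stands in for what would be a database in a real engine.

    from collections import Counter
    from datetime import datetime, timezone

    EVENTS = []   # stands in for a database table in a real engine

    def record(student, assessment, action, detail=""):
        # Log one student action (navigating to a question, submitting
        # an answer, choosing to view steps, ...) with a timestamp.
        EVENTS.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "student": student,
            "assessment": assessment,
            "action": action,
            "detail": detail,
        })

    def actions_per_student(assessment):
        # Collate the raw log into a per-student action count: the kind
        # of filtering a reporting tool performs automatically.
        return Counter(e["student"] for e in EVENTS
                       if e["assessment"] == assessment)

    record("s001", "calculus-week3", "navigate", "question 2")
    record("s001", "calculus-week3", "submit_answer", "2*x")
    record("s001", "calculus-week3", "view_steps", "question 2")
    print(actions_per_student("calculus-week3"))   # Counter({'s001': 3})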