Evaluating Learning in Virtual Worlds: How Can We Measure?

Diane D. Chapman
NC State University

Abstract

The past few years have seen growth in the use of virtual worlds in higher education. Initial reports about successful educational uses are positive, but the evidence is largely anecdotal or based on student reactions. Little has been published about how to measure the learning occurring in these worlds. This presentation will review what has been published in the realm of evaluation in virtual worlds and suggest strategies and instruments that can be used to measure learning in virtual world environments. Evaluation needs and barriers will be addressed, and examples of methods and questions will be presented.

Keywords: virtual worlds, learning, evaluation

When people hear about virtual worlds, many immediately think of virtual gaming, where visions of armor-clad monsters and laser-wielding titans attempt to conquer all. But these virtual gaming sites, such as EverQuest and World of Warcraft, are only a subset of the larger virtual world phenomenon. Warburton (2009) suggested a typology of virtual worlds based on their characteristics: games and serious games, such as World of Warcraft; social platforms, such as Second Life; simulations and reflections of real life, such as Google Earth; and 3-D realizations of collaborative workspaces, such as Project Wonderland. Virtual worlds with social goals are gaining in popularity and attention from those in higher education and business, who see these spaces as places to immerse people in learning. One of the more active virtual worlds for education is Second Life (Baker, Wentz, & Woods, 2009). While many colleges, universities, and large businesses already have presences in Second Life, research is only now beginning to describe best practices for using these technologies to help adults learn (Oliver & Carr, 2009).
Studies about how to evaluate learning in virtual worlds are even more scarce (Jarmon, Traphagen, Mayrath, & Trivedi, 2009). This paper will review the published research on evaluation in virtual worlds, along with the author's personal experience, to suggest techniques (with examples) for how learning might be evaluated. While the paper primarily references Second Life, the strategies and techniques can be applied to many of the other social virtual spaces.

Virtual Worlds

Virtual worlds are three-dimensional, Internet-based spaces where users may log on and interact simultaneously with each other. The worlds are persistent: they remain in place and operating even when users log off their computers. Users represent themselves by designing their own individual avatars, virtual representations of themselves. The worlds are entirely designed and built by users.

Evaluation Issues to Consider

There are many issues to consider when evaluating learning in virtual world environments; a few are discussed here. To begin, evaluations should be designed to be carried out at a distance. Although learners may be accessible in a specific face-to-face (F2F) situation, this may not always be the case. To get the best return and to avoid undue stress on the learner, data collection should not require learners to leave their places of learning to access the evaluation. Another issue is that evaluations should assess more than learner reaction. Asking learners the extent to which they enjoyed the session, thought the instructor was effective, and liked the pace of the instruction does not provide any information about whether or not learning occurred. As a result, evaluations must look past student reactions and give insight into changes in knowledge and, if possible, changes in behavior. Finally, effective evaluations need to triangulate data collection; that is, data should be collected from more than one source.
Not only does this help provide validity to the results, it also allows the evaluator to look at the different ways that learning may occur.

Literature about Evaluation in Virtual Worlds

Little research on evaluation and virtual worlds emerged from searches of the published literature. In a literature search, articles spoke to evaluation in a variety of virtual environments, including virtual worlds and virtual reality simulations. The literature relating to evaluation in virtual worlds spanned several popular virtual worlds, including Second Life, Active Worlds, World of Warcraft, and EverQuest. The literature in Tables 1 and 2 was selected for its reference to information about what to evaluate and how to evaluate in virtual environments. Table 1 displays the different aspects of virtual worlds to measure. Table 2 displays the evaluation methods that the literature says should be or have been performed. The lists are not meant to be exhaustive; rather, they present ideas of where to begin to formulate an evaluation plan for a virtual world. Table 1.
Aspects of Learning in Virtual Worlds Which Should Be Evaluated

What to Measure: Authors
Formation/perception of groups and communities: Araki & Carliner, 2008; De Lucia, Francese, Passero, & Tortora, 2009
Avatars/identity: Araki & Carliner, 2008; Calongne, 2008; Warburton & Perez-Garcia, 2009
Value of learning activities: Atkinson et al., 2009
Technical issues/barriers: Atkinson et al., 2009; Heinrichs, Youngblood, Harter, & Dev, 2008; Warburton, 2009
Reliability/effectiveness of technology compared to other systems: Atkinson et al., 2009; De Lucia et al., 2009
Technology tools: Calongne, 2008
Content/course structure/delivery: Araki & Carliner, 2008; Calongne, 2008
Engagement/immersion: Calongne, 2008; Heinrichs et al., 2008; Jarmon et al., 2009; Warburton, 2009
Presence/awareness: De Lucia et al., 2009; Jarmon et al., 2009; Warburton, 2009
Communication/social effectiveness: De Lucia et al., 2009; Warburton, 2009
User satisfaction: Atkinson et al., 2009; De Lucia et al., 2009
Perception of user productivity: De Lucia et al., 2009
Usefulness of exercises: Heinrichs et al., 2008
Knowledge/skill/affect changes: Heinrichs et al., 2008; Jarmon et al., 2009; Ninnis & Inoue, 2006; Oliver & Carr, 2009; Woodruff, Conway, Edwards, Elliott, & Crittenden, 2007
Teamwork/collaboration (confidence in being a team member, confidence in being a team leader): Heinrichs et al., 2008; Jarmon, Traphagen, Mayrath, & Trivedi, 2009
Level of realism of environment: Heinrichs et al., 2008
Confidence in ability to perform learned skills: Heinrichs et al., 2008
Enjoyment: Jarmon et al., 2009
Transferability to real world: Jarmon et al., 2009
Shift in attitude toward Second Life: Jarmon et al., 2009
Indicators of scientific reasoning: Steinkuehler & Duncan, 2008
Content production: Warburton, 2009
Culture: Warburton & Perez-Garcia, 2009
Time: Warburton & Perez-Garcia, 2009
Economics: Warburton & Perez-Garcia, 2009

Table 2.
Methods in Which Evaluation Has Been Performed

What Was Analyzed: Authors
Perception survey to learners upon completion of activity: Atkinson et al., 2009; Heinrichs et al., 2008; Jarmon et al., 2009; Ninnis & Inoue, 2006; Woodruff et al., 2007
Questionnaire to learners based on previously validated scales: De Lucia et al., 2009
Learner-produced lessons-learned paper: Calongne, 2008
Recordings of learner real-life computing actions: Gazit & Chen, 2003
Recordings of student computing actions in-world: Gazit & Chen, 2003
Face-to-face interviews with learners: Dickey, 2005; Gazit & Chen, 2003; Ninnis & Inoue, 2006
In-world interviews with learners: Oliver & Carr, 2009
Observations of learner avatars in-world: Dickey, 2005; Heinrichs et al., 2008
Comparison tests to control group: Heinrichs et al., 2008; Woodruff et al., 2007
Observation of debriefing sessions: Heinrichs et al., 2008
Observation of post-activity open discussion: Heinrichs et al., 2008
Learner journal content analysis (quantitative and qualitative): Jarmon et al., 2009
In-world, learner-created content: Jarmon et al., 2009
Focus groups: Heinrichs et al., 2008; et al., 2009
In-world snapshots and video: Jarmon et al., 2009; Perez-Garcia, 2009
Learner blogs: Perez-Garcia, 2009
Learner wikis: Perez-Garcia, 2009
Learner Twitter updates: Perez-Garcia, 2009
Discussion forum posts (qualitative and quantitative): Perez-Garcia, 2009; Steinkuehler & Duncan, 2008
Participation (quantitative): Woodruff et al., 2007

Surveys, Interviews, and Observations

Selection of evaluation techniques may depend on the types of learning activities that occur. Making use of journals, personal blogs, and discussion posts all requires additional work on the part of the learner. Focus groups can be done at a distance, but they are synchronous and may be difficult to organize. Surveys, interviews, and observations all lend themselves easily to evaluation in virtual worlds.
All can be performed at a distance, and all can provide unique insights into whether or not learning has occurred. The evaluation examples in this paper are organized by aspects of the virtual world environment and then by data collection method. Once again, neither the data collection methods nor the aspects of learning in virtual worlds presented here are exhaustive.

When evaluating learning in virtual worlds, surveys quickly come to mind as the simplest way to gather data. At first glance, surveys seem the easiest to develop and administer, and they also allow for data collection at a distance. Some reasons to consider surveys are that they can be administered at a distance, they do not need to be administered in real time, and they are less time-consuming for the evaluator than other types of data collection. But when using surveys to evaluate whether or not learning has occurred using virtual world technologies, there is a need to focus on the learning and not on satisfaction with the learning.

Interviews can provide more in-depth information than surveys. Evaluators also have the opportunity to probe for more information when interesting or pertinent issues emerge. Interviews usually last between 45 and 90 minutes and can be done face-to-face or in the virtual world. In both spaces, the interview can be recorded and transcribed at a later time. However, interviewing usually requires more time of the learner and more time to analyze the data.

Virtual worlds are very well suited for observations. Evaluators can observe avatar behavior, avatar characteristics, and learner-created content, all within the virtual world. Many of the worlds have tools available for recording screenshots and video of what is occurring. Third-party software is also available to serve these needs. Some of the issues with observation surround the need for observer consistency.
You would want observers to be trained and equipped with a list of the specific things they are being asked to observe. Checklists work well for observations. A checklist should include both what is being observed (the actions, situations, and/or characteristics) and how that action, situation, or characteristic was represented. For example, if you are trying to observe whether or not learners were engaging in social behavior, an indicator might be that learners were observed talking in groups.

Aspects of Learning with Virtual Worlds

The tendency when administering learner surveys is to focus on the instruction or the technology and not on the learning. But this is only part of the picture. The learning is what is most important, but the methods/strategies, affective/social aspects, and the technologies can all impact learning mastery. This section is organized by four aspects of learning in virtual worlds (learning, methods, affective/social, and technologies). Although there are many ways to categorize aspects of learning in these worlds, the ones listed here are prevalent in the literature.

The Learning

The first strategy is to ask questions about the learners' mastery of the learning objectives. Although subjective, since the measurements are the learners' opinions, these types of questions can give information about what and how much each learner gleaned from the instruction. Table 3 displays examples of the survey, interview, and observation questions/criteria that can be used. Note that each of the questions focuses on what was learned and not on the instruction or the instructor. Table 3.
Examples of Appropriate Survey, Interview, and Observation Questions about Learning

Surveys

Instructional objective: Increased knowledge of new product development
Survey question examples:
- As a result of this instructional event, I increased my knowledge of new product development. (Strongly Agree / Agree / Disagree / Strongly Disagree)
- List any aspects of new product development you learned during this instructional event.

Instructional objective: Demonstrate learner's ability to integrate ideas between different concepts
Survey question examples:
- As a result of this instructional event, I have been able to demonstrate my ability to integrate ideas between different concepts. (Strongly Agree / Agree / Disagree / Strongly Disagree)
- List the ways (if any) that you have been able to demonstrate your ability to integrate ideas between different concepts.

Instructional objective: Increase digital literacy
Survey question examples:
- My digital literacy has increased as a result of this instructional event. (Strongly Agree / Agree / Disagree / Strongly Disagree)
- As a result of participating in this instructional event, I can better evaluate information using digital technology. (Strongly Agree / Agree / Disagree / Strongly Disagree)

Interviews

Instructional objective: Increased knowledge of new product development
Interview question examples:
- What have you learned about new product development as a result of participating in this instructional event?
- How did participating in this instructional event impact your knowledge of new product development?

Instructional objective: Demonstrate learner's ability to integrate ideas between different concepts
Interview question examples:
- Can you describe how participating in this instructional event has helped you to integrate ideas between different concepts?
- Can you give examples of ways that you integrated ideas between different concepts while participating in this instructional event?

Instructional objective: Increase digital literacy
Interview question examples:
- How has participating in this instructional event impacted your ability to organize information using digital technology?
- In what ways, if any, has your digital literacy increased as a result of this instructional event?

Observations

Instructional objective: Increased knowledge of new product development
Observation example criteria (to be used in an observation checklist):
- Learner's avatar was observed discussing new product development strategies with others. (List indications and how they were communicated.)
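The observation checklist described above can be made concrete in software. As a hypothetical sketch (the indicator names, class names, and data structure are illustrative, not taken from the paper), an evaluator might record each checklist item along with how the behavior was represented, then tally indicators across sessions and observers to support triangulation and observer consistency:

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    """One row of an observation checklist: what to observe and how it appeared."""
    indicator: str            # the action/situation/characteristic being observed
    observed: bool = False    # whether the indicator was seen in this session
    representation: str = ""  # how the behavior was represented (e.g., group chat)

@dataclass
class ObservationSession:
    """One observer's pass through the checklist."""
    observer: str
    items: list = field(default_factory=list)

    def record(self, indicator, observed, representation=""):
        self.items.append(ChecklistItem(indicator, observed, representation))

def tally(sessions):
    """Count how often each indicator was observed across all sessions."""
    counts = {}
    for session in sessions:
        for item in session.items:
            counts.setdefault(item.indicator, 0)
            if item.observed:
                counts[item.indicator] += 1
    return counts

# Example: two observers using the same checklist, which helps surface
# inconsistencies between observers.
s1 = ObservationSession("Observer A")
s1.record("Talking in groups", True, "avatars clustered, using local text chat")
s1.record("Discussing new product development", False)

s2 = ObservationSession("Observer B")
s2.record("Talking in groups", True, "voice chat among three avatars")
s2.record("Discussing new product development", True, "in-world presentation Q&A")

print(tally([s1, s2]))
# {'Talking in groups': 2, 'Discussing new product development': 1}
```

Keeping the indicator list fixed across observers mirrors the paper's point that a checklist should specify both what is observed and how it was represented; disagreement in the tallies flags where observer training is needed.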

