
Player Dominance Adjustment Motion Gaming AI for Health Promotion

Junjie Xu, Yiming Zhang, Ruck Thawonmas, and Tomohiro Harada (College of Information Science and Engineering, Ritsumeikan University); Pujana Paliyawan (Research Organization of Science and Technology, Ritsumeikan University)

MIG '19, October 28–30, 2019, Newcastle upon Tyne, United Kingdom. © 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-6994-7/19/10.

ABSTRACT
This paper presents an opponent fighting game AI for promoting balanced use of body segments by the player during full-body motion gaming. The proposed AI, named PDAHP-AI, is based on Monte-Carlo tree search and employs a recently proposed concept called Player Dominance Adjustment, in which the AI determines its actions based on the player's inputs so as to adjust the player's dominant power. The basic idea is to let the player dominate the game when they perform a healthy movement and, on the contrary, to have the AI take a strong action against the player when they perform an unhealthy movement. The AI outperforms an existing dynamic difficulty adjustment AI designed for the same purpose.

KEYWORDS
Motion Game, Game AI, Monte-Carlo Tree Search, Health Promotion, Player Dominance Adjustment (PDA)

1 INTRODUCTION
Motion games can be effectively used to motivate player engagement in physical activity and thus promote health [Peng et al. 2013]. However, playing motion games also involves possible adverse outcomes such as muscle imbalance, caused by overusing some parts of the body [Rössler et al. 2014]. As a solution, DDAHP-AI [Kusano et al. 2019] was recently developed with the goal of promoting balanced use of body segments during motion gaming.

DDAHP-AI combines two earlier AIs: (1) DDA-AI [Ishihara et al. 2018], an AI based on Monte-Carlo Tree Search (MCTS) with a dynamic difficulty adjustment (DDA) mechanism, and (2) HP-AI [Paliyawan et al. 2017], a health promotion AI that induces the player to perform healthy counteractions (a counteraction is an action that the player takes in response to the action performed by the opponent AI). To induce the player to perform the expected counteractions, the AI must be able to recognize the player's behavior and predict what the player will do in response to each of its candidate actions. Its performance therefore depends on the accuracy of this prediction, and their paper introduced time-series forecasting for better prediction. Nevertheless, limitations of this AI remain, including (1) inaccuracy in the aforementioned prediction, (2) the lack of a reward mechanism to encourage the player, and (3) no statistically significant improvement in balancedness.

This paper presents a new AI with a different strategy to overcome the limitations of the above AI. The new AI employs the concept of Player Dominance Adjustment (PDA), which was recently proposed [Xu et al. 2019]. Instead of predicting what the player will do to counter each of its candidate actions, the AI induces the player to perform healthy motions by increasing the player's dominant power when they perform a healthy motion; theoretically, this encourages the player to repeat the action as long as it is still healthy to perform.
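The decision rule described above, yielding dominance after a healthy motion and countering strongly after an unhealthy one, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the toy fitness measure, and the usage map are assumptions made for illustration only.

```python
import random

def balancedness_fitness(motion, usage):
    """Toy fitness: 1.0 if the motion uses the currently least-used side,
    0.0 otherwise. `usage` maps a body side to how often it has been used."""
    least_used = min(usage, key=usage.get)
    return 1.0 if motion == least_used else 0.0

def pda_decide(motion, usage, rng=random.random):
    """Yield dominance (let the player win the exchange) with probability
    equal to the motion's fitness; otherwise take the strong counter action."""
    return "yield" if rng() < balancedness_fitness(motion, usage) else "counter"

usage = {"left": 12, "right": 3}    # the player has overused the left side
print(pda_decide("right", usage))   # healthy motion -> 'yield'
print(pda_decide("left", usage))    # unhealthy motion -> 'counter'
```

In the real AI the "counter" branch would execute an MCTS-recommended action and the fitness would come from the balancedness measures cited in Section 3; here both are stubbed out to show only the dominance-adjustment decision.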
The main contribution of this work lies in the introduction of the first AI that embraces the PDA concept and statistically significantly improves the balancedness in use of body segments of the player during motion gaming.

2 BACKGROUND AND EXISTING WORK

2.1 FightingICE and UKI
The proposed AI is implemented on FightingICE, a fighting game platform for AI development and competition (the platform has been used to hold annual AI competitions since 2013; ftgaic/). In a two-player adversarial game, human behaviors can be transferred to the controller as user inputs to the game [Wampler et al. 2010]. In particular, middleware called UKI [Paliyawan and Thawonmas 2017] is used for integrating full-body control with the game and for assessing the amount of body movement and the health parameters of the player.

2.2 MCTS and PDA
MCTS is widely known as an algorithm for developing effective game AIs. To determine actions, a typical open-loop MCTS AI called MctsAI [Yoshida et al. 2016] repeats the four steps of the MCTS process under a given time budget: selection, expansion, simulation, and backpropagation. Our proposed AI is built on MctsAI.

PDA [Xu et al. 2019] is a concept of adapting the gameplay based on the player's intentions as they appear in game inputs. The main idea is to either follow or not follow the player's intentions. In the case of fighting games, this means deciding whether to let the player attack the AI or to use a strong counterattack against the player, respectively, with a certain randomness in the decision.

3 PROPOSED AI
(Figure 1: Decision-making process of the PDAHP-AI.)

The proposed AI, named PDAHP-AI, aims to make the player move both sides of their body equally, in other words, to make the relevant health parameter Bal close to 1. The calculation of Bal is derived from previous research (Eq. (4) in [Paliyawan et al. 2018]). The value of Bal is computed for the whole body, considering four body joints (two arms and two legs); it ranges between 0 and 1, where 1 is the healthiest.

During gameplay, as shown in Fig. 1, the AI knows the most recent motion the player just performed, before the execution of the in-game action associated with that motion by the player character. There is therefore time for the AI to decide how to respond to the coming player-character action. The AI has two options: (1) take an action that lets the player dominate the game, such as moving closer to the player character so that the incoming attack hits the AI more easily, or (2) take a strong action recommended by MCTS, which aims at beating the player. The first option is used when the performed motion increases Bal; the second is used otherwise. The AI evaluates how good the motion performed by the human player is, relative to the other available motions, using a balancedness fitness F_Bal (computed by Eq. (8) in [Paliyawan et al. 2018]). F_Bal is used as the probability that the AI takes the first option over the second.

4 EXPERIMENT AND RESULT
(Figure 2: Comparison of the distribution of Bal among fights against four AIs.)

We conducted an experiment with 18 university students. The participants were equally separated into two groups. Both groups played FightingICE for two rounds, one round against MctsAI and the other against PDAHP-AI; the two groups played against the AIs in opposite orders.

Playing against PDAHP-AI led to significantly higher Bal than playing against MctsAI, supported by p-values of .033 and .031 from the paired-sample t-test and the Wilcoxon signed-rank test, respectively. Although the participants in this study did not play against DDAHP-AI [Kusano et al. 2019], the Bal values from fighting against MctsAI in their work (Mcts_prev) and in this study (Mcts_this) were not statistically different (p-values of .892 and .874 for the independent-sample t-test and the Mann-Whitney U test, respectively). We therefore consider it reasonable to compare Bal from playing against PDAHP-AI in this study with Bal from playing against DDAHP-AI in the previous study. The results in Fig. 2 show that Bal from playing against PDAHP-AI is the most preferable.

5 CONCLUSION AND FUTURE WORK
This paper presents a game AI employing the PDA concept for promoting balanced use of body segments during full-body motion gaming. Compared to an existing AI, the proposed AI has the following advantages: (1) it does not need to predict the player's actions, and (2) it has an intrinsic reward-like mechanism to motivate the player, making the game easier for the player to win when they perform healthy movements. The proposed AI statistically significantly improves balancedness over a typical MCTS AI, which was not achieved in the previous study. Our future plans include adapting the PDA concept to promote other health parameters.

REFERENCES
Makoto Ishihara, Suguru Ito, Ryota Ishii, Tomohiro Harada, and Ruck Thawonmas. 2018. Monte-Carlo Tree Search for Implementation of Dynamic Difficulty Adjustment Fighting Game AIs Having Believable Behaviors. In 2018 IEEE Conference on Computational Intelligence and Games (CIG). 46–53.
Takahiro Kusano, Yunshi Liu, Pujana Paliyawan, Tomohiro Harada, and Ruck Thawonmas. 2019. Motion Gaming AI using Time Series Forecasting and Dynamic Difficulty Adjustment for Improving Exercise Balance and Enjoyment. In 2019 IEEE Conference on Games.
Pujana Paliyawan, Takahiro Kusano, Yuto Nakagawa, Tomohiro Harada, and Ruck Thawonmas. 2017. Adaptive Motion Gaming AI for Health Promotion. In 2017 AAAI Spring Symposium Series. 720–725.
Pujana Paliyawan, Takahiro Kusano, and Ruck Thawonmas. 2018. Motion Recommender for Preventing Injuries During Motion Gaming. IEEE Access 7 (2018), 7977–7988.
Pujana Paliyawan and Ruck Thawonmas. 2017. UKI: Universal Kinect-type Controller by ICE Lab. Software: Practice and Experience 47, 10 (2017), 1343–1363.
Wei Peng, Julia C. Crouse, and Jih-Hsuan Lin. 2013. Using Active Video Games for Physical Activity Promotion: A Systematic Review of the Current State of Research. Health Education & Behavior 40, 2 (2013), 171–192.
Roland Rössler, Lars Donath, Evert Verhagen, Astrid Junge, Thomas Schweizer, and Oliver Faude. 2014. Exercise-based Injury Prevention in Child and Adolescent Sport: A Systematic Review and Meta-Analysis. Sports Medicine 44, 12 (2014), 1733–1748.
Kevin Wampler, Erik Andersen, Evan Herbst, Yongjoon Lee, and Zoran Popović. 2010. Character Animation in Two-Player Adversarial Games. ACM Transactions on Graphics (TOG) 29, 3 (2010), 26.
Junjie Xu, Pujana Paliyawan, Ruck Thawonmas, and Tomohiro Harada. 2019. Player Dominance Adjustment: Promoting Self-Efficacy and Experience of Game Players by Adjusting Dominant Power. (accepted).
Shubu Yoshida, Makoto Ishihara, Taichi Miyazaki, Yuto Nakagawa, Tomohiro Harada, and Ruck Thawonmas. 2016. Application of Monte-Carlo Tree Search in a Fighting Game AI. In 2016 IEEE 5th Global Conference on Consumer Electronics. 623–624.