• Title/Summary/Keyword: Dynamic Game Environment


A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현;김상욱
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.3
    • /
    • pp.322-331
    • /
    • 2003
  • As the complexity of a 3D game increases with the various factors of the game scenario, controlling the interrelations of the game objects becomes a problem. A game system therefore needs to coordinate the responses of its game objects, and the animated behaviors of the objects must be controlled in terms of the game scenario. To produce realistic game simulations, a system has to include a structure for designing the interactions among the game objects. This paper presents a method for designing a dynamic control mechanism for the interactions of game objects in a game scenario. For this method, we suggest a game agent system as a framework based on intelligent agents that make decisions using specific rules. The game agent system is used to manage environment data, to simulate the game objects, to control interactions among them, and to support a visual authoring interface through which various interrelations of the game objects can be defined. These techniques can handle the autonomy level of the game objects, the associated collision avoidance method, and so on. They also enable coherent decision-making by the game objects when the scene changes. In this paper, rule-based behavior control is designed to guide the simulation of the game objects; the rules are pre-defined by the user through a visual interface for designing their interactions. The Agent State Decision Network, composed of visual elements, passes information and infers the current state of the game objects. All of these methods can monitor and check variations of the motion states of the game objects in real time. Finally, we present a validation of the control method together with a simple case-study example.
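The rule-based behavior control described in the abstract can be sketched minimally as a first-match rule table; the rule format, state fields, and `decide` helper below are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch of rule-based behavior control: each rule pairs a
# condition on the scene state with a behavior, and the first matching
# rule decides what the game object does next. All names are hypothetical.

def decide(rules, state):
    """Return the behavior of the first rule whose condition holds."""
    for condition, behavior in rules:
        if condition(state):
            return behavior
    return "idle"  # default behavior when no rule fires

# Example rules a designer might pre-define through a visual interface
rules = [
    (lambda s: s["distance_to_player"] < 1.0, "collision_response"),
    (lambda s: s["distance_to_player"] < 5.0, "approach"),
]
```

Because rules are checked in order, more specific conditions (here, the closer distance threshold) must come first.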

A Collision detection from division space for performance improvement of MMORPG game engine (MMORPG 게임엔진의 성능개선을 위한 분할공간에서의 충돌검출)

  • Lee, Sung-Ug
    • The KIPS Transactions:PartB
    • /
    • v.10B no.5
    • /
    • pp.567-574
    • /
    • 2003
  • The application field of 3D graphics has been diversifying rapidly with the fast development of hardware. Designing a game such as a 3D MMORPG (Massively Multiplayer Online Role Playing Game), which immerses players in a three-dimensional cyber city, requires various detailed technologies; among them, collision detection speed is essential in game engine design. A 3D MMORPG game engine has many factors besides rendering that influence speed, because it must present a huge 3D city's many buildings and characters quickly and effectively in real time. This paper develops the concept of collision in a 3D MMORPG and improves the detection speed of the game engine through an improved detection method. Space division is needed to process quickly the wide, dynamic outdoor areas that are the main detection targets of a 3D MMORPG. The 3D scene is constructed as a tree of the objects that require collision handling, built from the given geometry dataset. We can search for the objects that need collision detection and improve the collision detection speed by using a hierarchical bounding box as the detection volume. An octree, the usual space-division structure, is mainly suited to static objects; this paper instead uses a limited OSP, a restricted space-division structure, to handle dynamic environments. The limited OSP divides the typically complicated objects of a 3D space by subdividing a restricted space into squares. Through this approach, the paper proposes the following: first, collision can be decided early without performing collision tests on all polygons; second, the detection efficiency of the game engine is improved because the bounding-box collision test reduces the overall detection time.
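The early-out idea in this abstract (reject most object pairs with a cheap bounding-box test before any per-polygon work) can be sketched as follows; the axis-aligned box representation and function names are assumptions for illustration, not the paper's limited-OSP implementation.

```python
def aabb_overlap(a, b):
    """a and b are ((min_x, min_y, min_z), (max_x, max_y, max_z)) boxes;
    they overlap iff their intervals overlap on every axis."""
    (a_min, a_max), (b_min, b_max) = a, b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def broad_phase(boxes):
    """Return index pairs whose bounding boxes overlap; only these pairs
    would go on to an exact per-polygon collision test."""
    pairs = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if aabb_overlap(boxes[i], boxes[j]):
                pairs.append((i, j))
    return pairs
```

A space-division structure such as an octree or the paper's limited OSP replaces the quadratic pair loop here with lookups restricted to nearby cells; the per-pair test stays the same.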

A P2P-based Management Method for Dynamic AOI (동적 AOI를 위한 P2P 기반 관리기법)

  • Lim, Chae-Gyun;Rho, Kyung-Taeg
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.5
    • /
    • pp.211-216
    • /
    • 2011
  • Networked virtual environments (NVEs) are distributed systems in which geographically dispersed users interact with each other in virtual worlds by exchanging network messages. The Massively Multiplayer Online Game (MMOG) is one of the diverse applications in which hundreds of users or more enjoy experiencing virtual worlds. In an MMOG, a limited area called the area of interest (AOI) reduces the load caused by message exchange between users. The Voronoi-based Overlay Network (VON) was proposed to reduce bandwidth consumption in P2P environments, and Vorocast was built on VON using message forwarding. We propose a dynamic AOI management method that solves problems such as inconsistency and latency caused by forwarding position updates from the message originator to neighbor nodes in the forwarding scheme. Our scheme provides consistency and reduces latency, compared to existing schemes, by combining the direct-connection scheme and the Vorocast scheme: a user communicates directly with the users in the center circle within its AOI, while communication with users outside the center area but within the AOI uses the Vorocast scheme. The proposed model is evaluated through simulations.
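The hybrid scheme in this abstract splits a node's AOI into an inner circle served by direct connections and an outer ring served by Vorocast forwarding. A minimal sketch of that classification step, with hypothetical function and radius names:

```python
import math

def classify_neighbors(me, neighbors, inner_radius, aoi_radius):
    """Sketch of the hybrid AOI scheme: neighbors inside the inner circle
    get a direct connection; neighbors in the outer ring of the AOI receive
    updates via Vorocast forwarding; nodes beyond the AOI get nothing."""
    direct, vorocast = [], []
    for pos in neighbors:
        d = math.dist(me, pos)
        if d <= inner_radius:
            direct.append(pos)
        elif d <= aoi_radius:
            vorocast.append(pos)
    return direct, vorocast
```

In the actual system this decision would be re-evaluated as positions change, since a neighbor can cross from the Vorocast ring into the direct-connection circle and back.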

3D Virtual Reality Game with Deep Learning-based Hand Gesture Recognition (딥러닝 기반 손 제스처 인식을 통한 3D 가상현실 게임)

  • Lee, Byeong-Hee;Oh, Dong-Han;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.5
    • /
    • pp.41-48
    • /
    • 2018
  • The most natural way to increase immersion and provide free interaction in a virtual environment is to offer a gesture interface using the user's hands. However, most studies on hand gesture recognition require specialized sensors or equipment, or show low recognition rates. This paper proposes a three-dimensional DenseNet convolutional neural network that recognizes hand gestures with no sensor or equipment other than an RGB camera for hand gesture input, and introduces a virtual reality game based on it. Experiments on 4 static and 6 dynamic hand gestures showed an average recognition rate of 94.2% at 50 ms, confirming that the gestures can serve as a real-time user interface for virtual reality games. The results of this research can be applied as a hand gesture interface not only for games but also for education, medicine, and shopping.

A Naive Bayesian-based Model of the Opponent's Policy for Efficient Multiagent Reinforcement Learning (효율적인 멀티 에이전트 강화 학습을 위한 나이브 베이지안 기반 상대 정책 모델)

  • Kwon, Ki-Duk
    • Journal of Internet Computing and Services
    • /
    • v.9 no.6
    • /
    • pp.165-177
    • /
    • 2008
  • An important issue in multi-agent reinforcement learning is how an agent should learn its optimal policy in a dynamic environment where other agents can influence its performance. Most previous works on multi-agent reinforcement learning either apply single-agent reinforcement learning techniques without any extension or require unrealistic assumptions even when they use explicit models of the other agents. In this paper, a Naive Bayesian policy model of the opponent agent is introduced, and a multi-agent reinforcement learning method using this model is explained. Unlike previous works, the proposed method models the opponent's policy with Naive Bayes rather than modeling the opponent's Q function. Moreover, it improves learning efficiency by using a model simpler than richer but more time-consuming policy models such as Finite State Machines (FSMs) and Markov chains. The Cat and Mouse game is introduced as an adversarial multi-agent environment, and the effectiveness of the proposed Naive Bayesian policy model is analyzed through experiments using this game as a test bed.
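A Naive Bayes opponent policy model of the kind this abstract describes can be sketched with simple counting: estimate P(action | state features) from observed opponent behavior and predict the most probable action. The class, feature encoding, and smoothing below are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import defaultdict

class OpponentModel:
    """Sketch of a Naive Bayes opponent policy model: count how often each
    opponent action follows each state-feature value, then predict the most
    probable action using Laplace-smoothed log probabilities."""

    def __init__(self, actions):
        self.actions = actions
        self.action_counts = defaultdict(int)   # N(a)
        self.feature_counts = defaultdict(int)  # N(feature_i = v, a)
        self.total = 0

    def observe(self, features, action):
        self.total += 1
        self.action_counts[action] += 1
        for i, v in enumerate(features):
            self.feature_counts[(i, v, action)] += 1

    def predict(self, features):
        def score(a):
            # log P(a) + sum_i log P(feature_i | a), with add-one smoothing
            # (the +2 denominator assumes binary feature values, a simplification)
            s = math.log((self.action_counts[a] + 1) / (self.total + len(self.actions)))
            for i, v in enumerate(features):
                s += math.log((self.feature_counts[(i, v, a)] + 1)
                              / (self.action_counts[a] + 2))
            return s
        return max(self.actions, key=score)
```

The appeal over FSM or Markov-chain models, as the abstract notes, is that this needs only per-feature counts: updates and predictions are linear in the number of features.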


Profile-based Service Continuity Framework for N-Screen Service

  • Chung, Young-Sik;Paik, Eui-Hyun;Rhee, Woo-Seop
    • International Journal of Contents
    • /
    • v.8 no.1
    • /
    • pp.47-54
    • /
    • 2012
  • Dynamic adaptation between various service environments using application profiles for service continuity is the key issue of the profile-based service continuity framework (PSCF) for N-screen services over next-generation networks. PSCF offers an optimized framework for providing continuous user services, such as multimedia video streaming, educational broadcasting, and games, on various devices without restricting the user's service environment. This paper specifies the functional model of PSCF and a service scenario, and explains the experimental results on service continuity for N-screen services using PSCF.

Fast Navigation in Dynamic 3D Game Environment Using Reinforcement Learning (강화 학습을 사용한 동적 게임 환경에서의 빠른 경로 탐색)

  • Yi, Seung-Joon;Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.703-705
    • /
    • 2005
  • Path finding in a continuous and dynamic real-world environment has long been a major problem in mobile robotics. As computer performance has advanced, computer games have begun to use realistic, continuous 3D environment models, demanding path-finding capabilities in more complex and dynamic environments. Value iteration, a reinforcement learning-based path-finding algorithm, has several advantages suited to real-time multi-agent environments, but it slows down considerably as the problem grows. This paper proposes an environment model and a path-finding algorithm based on a small-world network model so that the agent can adapt quickly to dynamic changes in a continuous 3D setting. Experiments in a 3D game environment confirmed that the proposed algorithm finds good paths in a continuous, complex real-time environment and adapts quickly when changes in the environment are observed.
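The value iteration baseline this abstract builds on can be sketched on a grid: each passable cell repeatedly backs up the best discounted value among its 4-neighbors, and following increasing values from any cell traces a path to the goal. The grid encoding and constants below are illustrative assumptions.

```python
def value_iteration(passable, goal, sweeps=100, gamma=0.95, step_cost=-1.0):
    """Sketch of grid value iteration for path finding. `passable[r][c]` is
    True for free cells; the goal keeps value 0 and every other cell backs
    up step_cost + gamma * (best neighbor value) until convergence."""
    rows, cols = len(passable), len(passable[0])
    V = [[0.0] * cols for _ in range(rows)]
    for _ in range(sweeps):
        for r in range(rows):
            for c in range(cols):
                if not passable[r][c] or (r, c) == goal:
                    continue  # obstacles are skipped; the goal stays at 0
                best = float("-inf")
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and passable[nr][nc]:
                        best = max(best, step_cost + gamma * V[nr][nc])
                if best > float("-inf"):
                    V[r][c] = best
    return V
```

The slowdown the abstract mentions is visible here: each sweep touches every cell, so large or frequently changing maps make full recomputation expensive, which motivates the paper's small-world network shortcut.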


A Proposal for Developing a Situated Learning Support Systems-Based on an MMORPG

  • PIAO, Cheng Ri
    • Educational Technology International
    • /
    • v.6 no.2
    • /
    • pp.59-67
    • /
    • 2005
  • The primary purposes of this study are to develop a Situated Learning support system based on an MMORPG (Massively Multiplayer Online Role Playing Game) and to investigate applications of Situated Learning theory both hypothetically and practically. In Situated Learning theory, cognition is interpreted as a dynamic system related to situation, context, and activity; learning context, social interaction, and direct personal experience are also emphasized. A virtual reality learning system based on an MMORPG provides context, social interaction, and a learning environment able to deliver direct experience. However, such a system has been difficult for teachers to develop. This study aims to develop a support system that facilitates the construction of a Situated Learning system based on an MMORPG, and proposes new research on and practical applications of Situated Learning theory using educational games.

AGENT-BASED SIMULATION OF ORGANIZATIONAL DYNAMICS IN CONSTRUCTION PROJECT TEAMS

  • JeongWook Son;Eddy M. Rojas
    • International conference on construction engineering and project management
    • /
    • 2011.02a
    • /
    • pp.439-444
    • /
    • 2011
  • As construction projects have grown larger and more complex, no single individual or organization can have complete knowledge or the ability to handle all matters. Collaborative practices among heterogeneous individuals, temporarily congregated to carry out a project, are required to accomplish project objectives. These organizational knowledge-creation processes of project teams should be understood from the active, dynamic viewpoint of how teams create information and knowledge, rather than as a passive, static input-process-output sequence. To this end, agent-based modeling and simulation, which builds systems from the ground up, provides the most appropriate way to investigate them systematically. In this paper, agent-based modeling and simulation is introduced as a research method and a medium for representing theory. As an illustration, an agent-based simulation of the evolution of collaboration in large-scale project teams from a game theory and social network perspective is presented.


L-CAA : An Architecture for Behavior-Based Reinforcement Learning (L-CAA : 행위 기반 강화학습 에이전트 구조)

  • Hwang, Jong-Geun;Kim, In-Cheol
    • Journal of Intelligence and Information Systems
    • /
    • v.14 no.3
    • /
    • pp.59-76
    • /
    • 2008
  • In this paper, we propose an agent architecture called L-CAA that is quite effective in real-time dynamic environments. L-CAA is an extension of CAA, the behavior-based agent architecture also developed by our research group, with reinforcement learning capability added to improve adaptability to the changing environment. To obtain stable performance, however, behavior selection and execution in L-CAA do not rely entirely on learning; learning is used merely as a complementary means for behavior selection and execution. The behavior selection mechanism consists of two phases. In the first phase, behaviors are extracted from the behavior library by checking the user-defined applicable conditions and utility of each behavior. If multiple behaviors are extracted in the first phase, a single behavior is selected for execution with the help of reinforcement learning in the second phase: the behavior with the highest expected reward is selected by comparing the Q values of the individual behaviors, which are updated through reinforcement learning. L-CAA can monitor the maintainable conditions of the executing behavior and immediately stop the behavior when some of the conditions fail due to dynamic changes in the environment. Additionally, L-CAA can suspend and later resume the current behavior whenever it encounters a behavior with higher utility. To analyze the effectiveness of the L-CAA architecture, we implement an L-CAA-enabled agent that plays autonomously in Unreal Tournament, a well-known dynamic virtual environment, and conduct several experiments with it.
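The two-phase selection this abstract describes, filter by applicable conditions, then pick the highest learned Q value, can be sketched as follows; the data shapes and names are illustrative assumptions, not the actual L-CAA architecture.

```python
def select_behavior(behavior_library, q_values, state):
    """Sketch of two-phase behavior selection: phase 1 extracts the
    behaviors whose user-defined applicable conditions hold in the current
    state; phase 2 picks the one with the highest learned Q value."""
    applicable = [b for b in behavior_library if b["applicable"](state)]
    if not applicable:
        return None
    return max(applicable, key=lambda b: q_values.get(b["name"], 0.0))

# A hypothetical two-behavior library
library = [
    {"name": "attack", "applicable": lambda s: s["enemy_visible"]},
    {"name": "patrol", "applicable": lambda s: True},
]
```

Keeping the hand-written applicability check in phase 1 is what gives the architecture its stability: learning only breaks ties among behaviors the designer already deemed safe to run.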
