• Title/Summary/Keyword: Virtual Reality Games

Gaming Space into a Cultural Place: A study on the transformation process of digital gaming space into a place focused on the framework of Mechanics-Dynamics-Aesthetics (MDA프레임워크를 통한 디지털게임 공간의 장소성 발생 구조에 관한 연구)

  • Yi, Young-A;Kwon, Doo-Hee;Choi, Hye-Lim;Jeong, Eui Jun
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.12
    • /
    • pp.738-747
    • /
    • 2021
  • Space and place have distinctly different meanings. As virtual reality has become part of daily life, the concept of placeness has entered the discussion, yet place has rarely been examined through conceptual approaches. Using the concepts of space and place, this study therefore attempts to identify the structure and process by which users recognize digital space and attribute placeness to it. To that end, it identifies the core elements of placeness attribution in digital game places and explains how space develops into place through the characteristics of the MDA (Mechanics-Dynamics-Aesthetics) framework. Building on existing theoretical concepts and their application, the study also demonstrates the transformation process by which physical space becomes a place, in a context analogous to the necessary conditions for a space to become a place. The study confirms that digital games can be transformed into spaces that generate placeness through this process. Given that players' affinity and nostalgia arise as digital game space acquires placeness, this process ultimately implies an extension of meaningful and influential content as digital games induce player immersion.

Animation and Machines: designing expressive robot-human interactions (애니메이션과 기계: 감정 표현 로봇과 인간과의 상호작용 연구)

  • Schlittler, Joao Paulo Amaral
    • Cartoon and Animation Studies
    • /
    • s.49
    • /
    • pp.677-696
    • /
    • 2017
  • Cartoons, and consequently animation, are an effective way of visualizing futuristic scenarios. Here we look at how animation is becoming ubiquitous and an integral part of this future today: the cybernetic and mediated society into which we are being transformed. Animation thereby becomes a form of speech between humans and this networked reality, either as an interface or as a representation that gives temporal form to objects. Animation, or specifically animated film, is usually associated with character-based short and feature films, fiction or nonfiction. However, animation is not confined to traditional cinematic formats and language, in the same way that design and communication have come to be treated as separate fields even though, according to Vilém Flusser, they are not. The same premise can be applied to animation in a networked culture: animation has become intrinsic to design processes and products, as in motion graphics, interface design and three-dimensional visualization. Video games, virtual reality, map-based apps and social networks constitute layers of an expanded universe that embodies our network-based culture. They are products of design and media disciplines that increasingly rely on animation as a universal language suited to multi-cultural interactions carried out in digital environments. In this sense animation becomes a discourse, in the same way that Roland Barthes describes myth as a type of speech. With the objective of exploring the role of animation as a design tool, the proposed research intends to develop transmedia creative visual strategies using animation both as narrative and as a user interface.

Developing and Valuating 3D Building Models Based on Multi Sensor Data (LiDAR, Digital Image and Digital Map) (멀티센서 데이터를 이용한 건물의 3차원 모델링 기법 개발 및 평가)

  • Wie, Gwang-Jae;Kim, Eun-Young;Yun, Hong-Sic;Kang, In-Gu
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.25 no.1
    • /
    • pp.19-30
    • /
    • 2007
  • Modeling 3D buildings is an essential step in reproducing the real world inside a computer. There are two ways to create a 3D building model. The first is to use the building layer of 1:1000 digital maps together with high-density point data obtained from airborne laser surveying. The second is to use LiDAR point data with digital images acquired alongside the LiDAR survey. In this research we processed the area of one 1:1000 digital map sheet with both methods to produce a 3D building model. We developed a workflow, analyzed it quantitatively, and evaluated its efficiency, accuracy, and realism. The results differed depending on building shape: the first method was effective for simple buildings, and the second method was effective for complicated buildings. We also evaluated the accuracy of the resulting models. Comparing the 3D buildings based on LiDAR data and digital images against the digital maps, the horizontal accuracy was within ±50 cm. From this we conclude that 3D building modeling is more effective when based on LiDAR data and digital maps. The resulting 3D building models can be utilized as digital content in various fields such as 3D GIS, U-City, telematics, navigation, virtual reality and games.
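
The map-plus-LiDAR workflow the abstract describes can be illustrated with a toy extrusion step. The function below is a hypothetical sketch, not the authors' pipeline: it takes a building footprint polygon from a digital map, estimates a roof height from the LiDAR returns that fall inside the footprint's bounding box, and extrudes a flat-roofed prism.

```python
import numpy as np

def extrude_footprint(footprint_xy, lidar_points, ground_z=0.0):
    """Extrude a 2D building footprint (e.g. from a 1:1000 digital map)
    into a simple prismatic 3D model, taking the roof height from LiDAR
    returns inside the footprint's bounding box. Illustrative sketch only.

    footprint_xy : (N, 2) array of polygon vertices (metres)
    lidar_points : (M, 3) array of LiDAR returns (x, y, z)
    """
    fx, fy = footprint_xy[:, 0], footprint_xy[:, 1]
    inside = ((lidar_points[:, 0] >= fx.min()) & (lidar_points[:, 0] <= fx.max()) &
              (lidar_points[:, 1] >= fy.min()) & (lidar_points[:, 1] <= fy.max()))
    if not inside.any():
        return None  # no LiDAR coverage for this footprint
    roof_z = np.median(lidar_points[inside, 2])  # robust roof-height estimate
    base = np.column_stack([footprint_xy, np.full(len(footprint_xy), ground_z)])
    roof = np.column_stack([footprint_xy, np.full(len(footprint_xy), roof_z)])
    return np.vstack([base, roof])  # vertices of a flat-roofed prism
```

A real pipeline would use a point-in-polygon test rather than a bounding box and would segment roof planes, but the sketch shows how the two data sources combine.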

Efficient 3D Geometric Structure Inference and Modeling for Tensor Voting based Region Segmentation (효과적인 3차원 기하학적 구조 추정 및 모델링을 위한 텐서 보팅 기반 영역 분할)

  • Kim, Sang-Kyoon;Park, Soon-Young;Park, Jong-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.3
    • /
    • pp.10-17
    • /
    • 2012
  • Image-based 3D scenes can now be found in many popular vision systems, computer games and virtual reality tours. In this paper, we propose a fully automatic method for creating a 3D virtual scene from a single 2D image. The proposed method is similar to the creation of a pop-up illustration in a children's book. In particular, to estimate geometric structure information for the 3D scene from a single outdoor image, we apply tensor voting to image segmentation. Tensor voting exploits the fact that pixels of a homogeneous image region usually lie close together on a smooth surface, so the tokens corresponding to the centers of these regions have high saliency values. Our algorithm then labels regions of the input image into coarse categories: "ground", "sky", and "vertical". These labels are used to "cut and fold" the image into a pop-up model using a set of simple assumptions. The experimental results show that our method successfully segments coarse regions in many complex natural scene images and can create a 3D pop-up model that infers structure information from the segmented regions.
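
As a rough illustration of the saliency idea behind tensor voting (greatly simplified relative to the paper's method), the sketch below casts Gaussian-attenuated ball votes between 2D tokens and reads off stick saliency as the eigenvalue gap of the accumulated tensor; tokens whose neighbours are coherently arranged score high.

```python
import numpy as np

def ball_vote_saliency(tokens, sigma=2.0):
    """Toy 2D tensor-voting sketch (not the authors' implementation).
    Each token receives a ball vote from every other token, attenuated
    by a Gaussian of the distance; stick saliency lambda1 - lambda2 is
    high where neighbours align coherently (e.g. along a curve)."""
    n = len(tokens)
    saliency = np.zeros(n)
    for i in range(n):
        T = np.zeros((2, 2))
        for j in range(n):
            if i == j:
                continue
            d = tokens[j] - tokens[i]
            r = np.linalg.norm(d)
            u = d / r                              # unit direction to the voter
            T += np.exp(-(r / sigma) ** 2) * np.outer(u, u)
        w = np.linalg.eigvalsh(T)                  # eigenvalues, ascending
        saliency[i] = w[1] - w[0]                  # stick saliency
    return saliency
```

A token in the middle of an aligned cluster receives strong, directionally consistent votes, while an isolated token receives almost none.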

Real-Time Shadow Generation using Image Warping (이미지 와핑을 이용한 실시간 그림자 생성 기법)

  • Kang, Byung-Kwon;Ihm, In-Sung
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.5
    • /
    • pp.245-256
    • /
    • 2002
  • Shadows are important elements in producing a realistic image. Generating exact shadow shapes and positions is essential in rendering, since shadows provide users with visual cues about the scene. It is also very important to be able to create the soft shadows produced by area light sources, since they drastically increase visual realism. In spite of their importance, existing shadow generation algorithms still have trouble producing realistic shadows in real time. While image-based rendering techniques can often be applied effectively to real-time shadow generation, they usually demand large amounts of memory for storing preprocessed shadow maps. An effective compression method can reduce the memory requirement, but only at additional decoding cost. In this paper, we propose a new image-based shadow generation method built on image warping. With this method, it is possible to generate realistic shadows using only small pre-generated shadow maps, and the method extends easily to soft shadow generation. Our method can be used efficiently to generate realistic scenes in many real-time applications such as 3D games and virtual reality systems.
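
The two ingredients the abstract combines can be sketched in a toy form. This is not the authors' algorithm: the warp below crudely approximates a small light translation by shifting each stored depth sample with a disparity proportional to inverse depth (with a z-buffer to resolve collisions), and the second function is the standard shadow-map depth comparison.

```python
import numpy as np

def warp_shadow_map(shadow_map, dx):
    """Crude stand-in for image warping a pre-generated shadow map:
    shift each depth sample horizontally by disparity ~ 1/depth to
    approximate a small light translation. np.inf marks 'no occluder'."""
    h, w = shadow_map.shape
    warped = np.full_like(shadow_map, np.inf)
    for y in range(h):
        for x in range(w):
            z = shadow_map[y, x]
            if np.isfinite(z):
                nx = int(round(x + dx / z))            # disparity ~ 1/depth
                if 0 <= nx < w:
                    warped[y, nx] = min(warped[y, nx], z)  # keep nearest (z-buffer)
    return warped

def shadow_test(scene_depth_from_light, shadow_map, bias=1e-3):
    """Classic shadow-map comparison: a pixel is lit if its depth seen
    from the light is not farther than the stored occluder depth."""
    return scene_depth_from_light <= shadow_map + bias  # True = lit
```

A full image-warping approach reprojects samples in 3D rather than using a 1D disparity shift, but the sketch shows why only a few reference maps are needed.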

Developing a Sensory Ride Film 'Dragon Dungeon Racing' (효율적인 입체 라이드 콘텐츠 제작을 위한 연구)

  • Chae, Eel-Jin;Choi, Chul-Young;Choi, Kyu-Don;Kim, Ki-Hong
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.2
    • /
    • pp.178-185
    • /
    • 2011
  • Recent developments in 3D technology and its application content have made it possible for people to experience a wider variety of 3D content, such as 3D/4D, VR, 3D ride films, IMAX and sensory 3D games, at theme parks, large-scale exhibitions, 4D cinemas and video rides. Among them, the video ride, a motion-based genre in which viewers are immersed in virtual reality and gain indirect experiences, is gaining particular popularity. This study introduces the production process of the sensory 3D image and ride film genres that have recently been attracting attention. In selecting material for the 3D images, we study spaces and settings suited to the fierce movement of rides, and we illustrate examples of realizing creative direction ideas and effective techniques using the Stereo Camera functions first introduced in Maya 2009. If experts in 3D image production create more interesting stories with cultural diversity and adopt enhanced 3D production techniques for excellent content, domestic companies will be well able to compete with their foreign counterparts and to establish their own successful, distinctive domains in the image content sector.

Enhanced Image Mapping Method for Computer-Generated Integral Imaging System (집적 영상 시스템을 위한 향상된 이미지 매핑 방법)

  • Lee Bin-Na-Ra;Cho Yong-Joo;Park Kyoung-Shin;Min Sung-Wook
    • The KIPS Transactions:PartB
    • /
    • v.13B no.3 s.106
    • /
    • pp.295-300
    • /
    • 2006
  • The integral imaging system is an auto-stereoscopic display that allows users to see 3D images without wearing special glasses. In an integral imaging system, 3D object information is captured from several viewpoints and stored as elemental images. Users then see a 3D reconstructed image when the elemental images are displayed through a lens array. The elemental images can also be created by computer graphics, which is referred to as computer-generated integral imaging; the process of creating them is called image mapping. Several image mapping methods have been proposed, such as PRR (Point Retracing Rendering), MVR (Multi-Viewpoint Rendering) and PGR (Parallel Group Rendering). However, they suffer from heavy rendering computation or from performance degradation as the number of elemental lenses in the lens array increases, which makes them difficult to use in real-time graphics applications such as virtual reality or real-time interactive games. In this paper, we propose a new image mapping method named VVR (Viewpoint Vector Rendering) that improves real-time rendering performance. The paper first describes the concept of VVR, then compares the performance of its image mapping process with the previous methods, and finally discusses possible directions for future improvement.
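
The PRR baseline the paper compares against can be sketched as follows. The geometry here (lens pitch, lens-to-display gap, grid layout) is hypothetical, and each lenslet is simplified to a pinhole: every 3D point is projected through every lenslet centre onto the display plane, which is exactly the per-point cost that grows with the number of elemental lenses.

```python
def point_retracing(points, lens_pitch, grid, gap):
    """Toy point-retracing (PRR) sketch for computer-generated integral
    imaging, with hypothetical geometry and pinhole lenslets.
    points : iterable of (x, y, z) with z the distance in front of the array
    grid   : (rows, cols) number of elemental lenses
    gap    : distance from lens array to display plane
    Returns one (x, y) display-plane hit per (point, lenslet) pair."""
    hits = []
    for px, py, pz in points:
        for r in range(grid[0]):
            for c in range(grid[1]):
                lx, ly = c * lens_pitch, r * lens_pitch  # lenslet centre
                # similar triangles: extend the ray through the centre
                hx = lx + (lx - px) * gap / pz
                hy = ly + (ly - py) * gap / pz
                hits.append((hx, hy))
    return hits
```

The cost is O(points × lenses), which is why PRR scales poorly; VVR-style methods restructure the work around viewpoint vectors instead.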

From Broken Visions to Expanded Abstractions (망가진 시선으로부터 확장된 추상까지)

  • Hattler, Max
    • Cartoon and Animation Studies
    • /
    • s.49
    • /
    • pp.697-712
    • /
    • 2017
  • In recent years, film and animation for cinematic release have embraced stereoscopic vision and the three-dimensional depth it creates for the viewer. The maturation of consumer-level virtual reality (VR) technology has simultaneously spurred a wave of media productions set within 3D space, ranging from computer games and pornographic videos to the Academy Award-nominated animated VR short film Pearl. All of these works rely on stereoscopic fusion through stereopsis, that is, the perception of depth produced by the brain from left and right images with an amount of binocular parallax corresponding to that of our eyes. They aim to emulate normal human vision. Within more experimental practices, however, a fully rendered 3D space might not always be desirable. In my own abstract animation work, I tend to favour 2D flatness and the relative obfuscation of spatial relations it affords, as this underlines the visual abstraction I am pursuing. Not being able to immediately understand what is in front and what is behind can strengthen the desired effects. In 2015, Jeffrey Shaw challenged me to create a stereoscopic work for Animamix Biennale 2015-16, which he co-curated. This prompted me to question how stereoscopy, rather than hyper-defining space within three dimensions, might itself be used to achieve a confusion of spatial perception; and, in turn, how abstract and experimental moving image practices can benefit from stereoscopy to open up new visual and narrative opportunities, if used in ways that break with, or go beyond, stereoscopic fusion. Noteworthy works that exemplify a range of non-traditional, expanded approaches to binocular vision are discussed below, followed by a brief introduction to the stereoscopic animation loop III=III, which I created for Animamix Biennale. The techniques employed in these works might serve as a toolkit for artists interested in exploring a more experimental, expanded engagement with stereoscopy.

Uniform Posture Map Algorithm to Generate Natural Motion Transitions in Real-time (자연스러운 실시간 동작 전이 생성을 위한 균등 자세 지도 알고리즘)

  • Lee, Bum-Ro;Chung, Chin-Hyun
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.7 no.6
    • /
    • pp.549-558
    • /
    • 2001
  • Reusing existing motion capture data is important both for reducing animation production costs and for making the production process efficient. Because a captured motion curve has no control points, however, it is difficult to modify the data interactively. Motion transition, which generates a seamless intermediate motion between two short motion sequences, is a useful method for reusing existing motion data. In this paper, a Uniform Posture Map (UPM) algorithm is proposed to perform motion transition. Since the UPM is organized by quantizing various postures with an unsupervised learning algorithm, it places output neurons with similar postures in adjacent positions. Using this property, an intermediate posture between the two active postures is generated, and this posture is used as a key-frame for an interpolating motion. The UPM algorithm requires far less computation than other motion transition algorithms, and it provides a control parameter with which an animator can adjust the motion, so that animation can be produced interactively. The UPM also prevents unrealistic postures from arising in the learning phase, which yields more realistic motion curves and more natural motion. The proposed motion transition algorithm can be applied to various fields such as real-time 3D games, virtual reality applications, and web 3D applications.
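
The quantization step can be illustrated with a small one-dimensional self-organising map over joint-angle vectors. This is an illustrative stand-in for the paper's UPM learning rule, with hypothetical parameters: neighbouring units end up holding similar postures, so a unit between the two best-matching units serves as a transition key-frame.

```python
import numpy as np

def train_upm(postures, map_size=8, iters=500, seed=0):
    """Sketch of a UPM-style quantiser: a 1D self-organising map over
    posture (joint-angle) vectors. Hypothetical parameters, not the
    paper's exact learning rule."""
    rng = np.random.default_rng(seed)
    units = rng.standard_normal((map_size, postures.shape[1])) * 0.1
    for t in range(iters):
        x = postures[rng.integers(len(postures))]          # random sample
        bmu = np.argmin(np.linalg.norm(units - x, axis=1))  # best-matching unit
        lr = 0.5 * (1 - t / iters)                          # decaying learning rate
        for i in range(map_size):
            h = np.exp(-((i - bmu) ** 2) / 2.0)             # neighbourhood kernel
            units[i] += lr * h * (x - units[i])
    return units

def transition_pose(units, pose_a, pose_b):
    """Key-frame for a motion transition: the map unit lying midway
    between the two active postures' best-matching units."""
    a = np.argmin(np.linalg.norm(units - pose_a, axis=1))
    b = np.argmin(np.linalg.norm(units - pose_b, axis=1))
    return units[(a + b) // 2]
```

Because every generated key-frame is itself a learned map unit, the interpolation never leaves the space of observed postures, which is the property the abstract credits for avoiding unrealistic poses.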

GPU-only Terrain Rendering for Walk-through (Walk-through를 지원하는 GPU 기반 지형렌더링)

  • Park, Sun-Yong;Oh, Kyoung-Su;Cho, Sung-Hyun
    • Journal of Korea Game Society
    • /
    • v.7 no.4
    • /
    • pp.71-80
    • /
    • 2007
  • In this paper, we introduce an efficient GPU-based real-time terrain rendering technique applicable to every kind of game. Our method can represent terrain with just a height map, without extra geometry. It allows free movement in the air or on the surface, so it can be applied directly to any computer game as well as to virtual reality. Since our method is not based on a geometric structure, it needs no special LOD policy, and the precision of the geometric representation and the visual quality depend entirely on the resolution of the height map and color map. Moreover, a GPU-only technique leaves the CPU free for more general work, and as a result enhances the overall performance of the computer. There has been much research on terrain representation, but most of it relies on the CPU or confines its applications to flight simulation. By improving existing displacement mapping techniques and applying them to terrain rendering, we completely avoid problems such as cracking and popping that arise in polygon-based techniques. Our most important contributions are to handle an arbitrary LOS (line of sight) efficiently and to dramatically improve visual quality during walk-through by reconstructing the height field with curved patches. We also suggest a simple, useful method for calculating ray-patch intersections. We implemented all of this entirely on the GPU and achieved frame rates in the tens to hundreds with height maps at a variety of resolutions (256×256 to 4096×4096).
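
The basic ray/height-field intersection that such displacement-mapping renderers evaluate per pixel on the GPU can be sketched on the CPU as a simple ray march; the paper's curved-patch reconstruction and exact ray-patch intersection are not reproduced here.

```python
import numpy as np

def raymarch_height_field(height, origin, direction, step=0.1, max_t=100.0):
    """Minimal height-field ray march (CPU sketch of the GPU idea, not
    the paper's curved-patch method): advance along the ray until it
    drops below the terrain surface.
    height : 2D array indexed as height[y, x]
    Returns the first intersection point, or None if the ray escapes."""
    pos = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    h, w = height.shape
    t = 0.0
    while t < max_t:
        x, y, z = pos + t * d
        xi, yi = int(x), int(y)
        if 0 <= xi < w and 0 <= yi < h and z <= height[yi, xi]:
            return pos + t * d                 # ray has hit the terrain
        t += step
    return None
```

Fixed-step marching can miss thin features; refining the hit with bilinear or curved-patch interpolation, as the paper does, is what removes the cracking and popping of polygon-based approaches.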
