• Title/Summary/Keyword: 3D User Interfaces

Search Results: 81

The Development of Realistic Virtual Reality Game with Leap Motion Reflected Physical strength and Score Characters (물리적인 힘과 스코어 캐릭터를 반영한 립모션 체험형 가상현실 게임개발)

  • Park, Gangrae;Lee, Byungseok;Kim, Seongdong;Chin, Seongah
    • Journal of Korea Game Society
    • /
    • v.16 no.4
    • /
    • pp.69-78
    • /
    • 2016
  • With the development of game technology, realistic game graphics, interface technology, and immersive content services are increasingly required in the content area. The NUI evolved from the CLI and GUI; unlike those conventional methods, it offers an intuitive and realistic interface for humans, realized through natural actions. We propose a boxing simulation game that uses Leap Motion as such an interface. Providing a realistic 3D experimental environment through a VR headset, we also propose a method that computes a score when the user-controlled interface (a fist) punches the target (a sandbag), according to the change in the angle of impact and the target's physical characteristics.
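The abstract's angle-dependent scoring idea can be sketched as follows. This is a minimal illustration, not the paper's actual formula: the function name, the unit-force scaling, and the cosine-based rule are all assumptions. A punch scores highest when its direction opposes the target surface's normal (a square hit) and falls off as the impact becomes glancing.

```python
import math

def punch_score(force, punch_dir, surface_normal, max_score=100):
    """Hypothetical scoring rule: scale a maximum score by punch force
    and by how squarely the punch opposes the target's surface normal."""
    def unit(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    d = unit(punch_dir)
    n = unit(surface_normal)
    # Cosine of the angle between the punch and the inward-facing normal;
    # clamp at 0 so punches moving away from the surface score nothing.
    alignment = max(0.0, -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2]))
    return max_score * alignment * min(force, 1.0)

# A full-force punch straight into the target face scores the maximum.
print(punch_score(1.0, (0, 0, -1), (0, 0, 1)))  # 100.0
```

A glancing blow, e.g. `punch_dir=(1, 0, -1)` against the same surface, scores roughly 70% of the maximum, which captures the abstract's idea that the score depends on the angle of target impact.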

Construction of VR Experience Pavilion Using Multi-display and ZA Sensor (멀티디스플레이와 ZA센서를 이용한 가상현실 체험관 구축 방법)

  • Joo, Jae-Hong;Lee, Hyeon-Cheol;Kim, Eun Seok
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2008.05a
    • /
    • pp.28-32
    • /
    • 2008
  • Displays and interactive interfaces are important factors in enhancing immersion in a VR experience pavilion. In a pavilion displaying a 3D virtual environment, it is difficult to drive multiple displays from a single viewpoint because of the limited field of view, and constructing a large display is costly. The interaction devices between the user and the VR environment that immersion requires have so far been mainly touchscreens with restricted display areas or button-based input devices. In this paper, we suggest a method for constructing an extensible multi-display that renders the VR environment in real time, and a method for interaction using a ZA sensor, which is wireless and guarantees the user's unrestricted movement. The proposed method can make a VR experience pavilion more immersive at less cost.


Automatic Generator for Component-Based Web Database Applications (컴포넌트 기반 웹 데이터베이스 응용의 자동 생성기)

  • Eum, Doo-Hun;Ko, Min-Jeung;Kang, I-Zzy
    • The KIPS Transactions:PartD
    • /
    • v.11D no.2
    • /
    • pp.371-380
    • /
    • 2004
  • E-commerce is in wide use with the rapid advance of Internet technology. The main component of an e-commerce application is a Web-based database application. Currently, developing Web applications takes a long time because developers must write the code for an application's user interface forms and query processing manually or semi-automatically. Higher productivity in building Web-based database applications is therefore in demand. In this paper, we introduce a software tool, called WebSiteGen2, that automatically generates the forms used as user interfaces and the EJB/JSP components that process the queries made through those forms, for an application that needs a new database or uses an existing one. WebSiteGen2 thus increases the productivity, reusability, expandability, and portability of an application by automatically generating a 3-tier application based on component technology. Moreover, each user interface form generated by WebSiteGen2 provides information on an entity of interest as well as on all the entities directly or indirectly related to it. In this paper, we explain the functionality and implementation of WebSiteGen2 and then show its merits by comparing it to other commercial Web application generators.

An Interface Technique for Avatar-Object Behavior Control using Layered Behavior Script Representation (계층적 행위 스크립트 표현을 통한 아바타-객체 행위 제어를 위한 인터페이스 기법)

  • Choi Seung-Hyuk;Kim Jae-Kyung;Lim Soon-Bum;Choy Yoon-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.9
    • /
    • pp.751-775
    • /
    • 2006
  • In this paper, we suggest an avatar control technique using high-level behaviors. We separate behaviors into three levels according to their level of abstraction and define layered scripts. Layered scripts give the user control over avatar behaviors at the abstract level and make scripts reusable. As the 3D environment gets more complicated, the number of required avatar behaviors increases accordingly, and controlling avatar-object behaviors becomes even more challenging. To solve this problem, we embed avatar behaviors into each environment object, which informs how the avatar can interact with that object. Even with a large number of environment objects, our system can manage avatar-object interactions in an object-oriented manner. Finally, we suggest an easy-to-use interface technique that lets the user control avatars through context menus. Using the avatar behavior information embedded in an object, the system analyzes the object's state and filters the behaviors, so the context menu shows only the behaviors the avatar can currently perform. We built a virtual presentation environment and applied our model to the system.
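The object-embedded behavior model described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the class name, the state-keyed behavior table, and the door example are all assumptions. The key idea is that each object carries its own valid behaviors per state, so the context menu is just a lookup filtered by the object's current state.

```python
class SmartObject:
    """An environment object that embeds the avatar behaviors it supports,
    keyed by its current state (hypothetical model of the paper's idea)."""

    def __init__(self, name, behaviors_by_state, state):
        self.name = name
        self.behaviors_by_state = behaviors_by_state
        self.state = state

    def context_menu(self):
        # Filtering step: only behaviors valid in the current state appear.
        return self.behaviors_by_state.get(self.state, [])

door = SmartObject(
    "door",
    behaviors_by_state={
        "closed": ["open", "knock"],
        "open": ["close", "walk through"],
    },
    state="closed",
)
print(door.context_menu())  # ['open', 'knock']
door.state = "open"
print(door.context_menu())  # ['close', 'walk through']
```

Because the behavior knowledge lives in the object rather than the avatar, adding a new object type never requires changing avatar code, which is the object-oriented scalability the abstract claims.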

Usability Test on Haptic Interaction With Real Object in Virtual Reality (실제 사물을 이용한 VR 햅틱 인터랙션 사용성 테스트)

  • Yang, Han Ul;Park, Jun
    • Journal of the Korean Society for Computer Game
    • /
    • v.31 no.4
    • /
    • pp.197-203
    • /
    • 2018
  • As people's interest in virtual reality has recently increased, peripherals have also made much progress. Much research is being done on VR environments, from room-level scanning for VR scene construction to various interface devices that can interact with objects in the environment. In current home VR research, multiple haptic interfaces are used to interact with objects in the VR environment; room scanning partly overcomes spatial constraints, and tracking equipment may be used to interact with real objects. Meanwhile, advances in 3D printing have enabled the distribution of commercial and home 3D printers and made it easy to create models of one's choice at home. Considering these two factors, we think it is necessary to study the difference users feel when interacting directly with an easy-to-create physical model in a VR environment. Therefore, in this paper, we place objects produced by a 3D printer in VR space and study, through user testing, the differences between using real objects and other general interaction equipment.

Simulation Software for Semiconductor Photolithography Equipment: TrackSim (반도체 포토 장비의 시뮬레이션 소프트웨어: TrackSim)

  • Yoon, Hyun-Joong;Kim, Jin-Gon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.8
    • /
    • pp.3319-3325
    • /
    • 2012
  • This paper describes the development of TrackSim, a discrete event simulation tool for photolithography equipment in the semiconductor industry. TrackSim focuses on an accurate simulation model of the photolithography equipment and easy-to-use user interfaces, and provides a 3D simulation environment for evaluating, validating, and scheduling the photolithography process. One of TrackSim's major characteristics is that it is developed on top of Applied Materials' AutoMod, a discrete event simulation package broadly used in the semiconductor industry. Accordingly, the photolithography model in TrackSim can run in simulations connected with other simulation models built with AutoMod.
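For readers unfamiliar with the paradigm the abstract names, a discrete event simulation advances time by jumping between scheduled events rather than ticking a fixed clock. The sketch below is a generic minimal event loop, not the TrackSim or AutoMod API; all names and the wafer-coating example are assumptions for illustration.

```python
import heapq

def run_simulation(events):
    """Minimal discrete-event loop: repeatedly pop the earliest event,
    execute its handler, and let the handler schedule follow-up events."""
    queue = list(events)  # entries are (time, seq, handler) tuples
    heapq.heapify(queue)
    log, seq = [], len(queue)
    while queue:
        time, _, handler = heapq.heappop(queue)
        # A handler returns (delay, follow_up_handler) pairs to schedule.
        for delay, follow_up in handler(time):
            heapq.heappush(queue, (time + delay, seq, follow_up))
            seq += 1
        log.append(time)
    return log

# A wafer arrives at t=0 and finishes its coating step 5 time units later.
def finish(t):
    return []  # terminal event: schedules nothing further

def arrive(t):
    return [(5, finish)]

print(run_simulation([(0, 0, arrive)]))  # [0, 5]
```

Real packages such as AutoMod add resources, queues, and statistics collection on top of exactly this kind of time-ordered event queue.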

A Framework Development for Sketched Data-Driven Building Information Model Creation to Support Efficient Space Configuration and Building Performance Analysis (효율적 공간 형상화 및 건물성능분석을 위한 스케치 정보 기반 BIM 모델 자동생성 프레임워크 개발)

  • Kong, ByungChan;Jeong, WoonSeong
    • Korean Journal of Construction Engineering and Management
    • /
    • v.25 no.1
    • /
    • pp.50-61
    • /
    • 2024
  • The market for compact houses is growing due to the demand for floor plans that prioritize user needs. However, clients often have difficulty communicating their spatial requirements to professionals, including architects, because they lack the means to provide evidence such as spatial configurations or cost estimates. This research aims to create a framework that translates sketched spatial requirements into 3D building components in BIM models, to facilitate spatial understanding and provide building performance analysis that aids budgeting in the early design phase. The research process includes developing a process model and implementing and validating the framework. The process model describes the data flow within the framework and identifies the required functionality. Implementation involves creating the user interfaces and integrating the various subsystems. Validation verifies that the framework can automatically convert sketched space requirements into walls, floors, and roofs in a BIM model, and that it can automatically calculate material and energy costs based on that model. The developed framework enables clients to efficiently create 3D building components from sketched data and helps users understand the space and analyze building performance through the created BIM models.

Caret Unit Generation Method from PC Web for Mobile Device (캐럿 단위를 이용한 PC 웹 컨텐츠를 모바일 단말기에 서비스 하는 방법)

  • Park, Dae-Hyuck;Kang, Eui-Sun;Lim, Young-Hwan
    • The KIPS Transactions:PartD
    • /
    • v.14D no.3 s.113
    • /
    • pp.339-346
    • /
    • 2007
  • The objective of this study is to satisfy the requirements of a variety of terminals for playing wired web page contents in a ubiquitous environment constantly connected to the network. In other words, this study intends to automatically transcode wired web pages into mobile web pages so that the contents of Internet web pages can be served to mobile devices. To achieve this, we suggest a method in which the user directly enters the URL of a web page on a mobile device and views the contents of that page. The web page is converted into an image and composed into a mobile web page suitable for personal terminals. Users obtain the effect of a desktop web service through interfaces to zoom in, zoom out, and pan the page as desired. This is a caret-unit play method, in which the contents of a web page are transcoded and played to suit each user. With the proposed method, the contents of a wired web page can be played on a mobile device, and a single piece of content can be served to users of various terminals. Through this, numerous wired web contents can be reused as mobile web contents.

Virtual Object Weight Information with Multi-modal Sensory Feedback during Remote Manipulation (다중 감각 피드백을 통한 원격 가상객체 조작 시 무게 정보 전달)

  • Changhyeon Park;Jaeyoung Park
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.9-15
    • /
    • 2024
  • As virtual reality technology became popular, a high demand emerged for natural and efficient interaction with the virtual environment. Mid-air manipulation is one of the solutions to such needs, letting a user manipulate a virtual object in a 3D virtual space. In this paper, we focus on manipulating a remote virtual object while visually displaying the object and providing tactile information on the object's weight. We developed two types of wearable interfaces that can provide cutaneous or vibrotactile feedback on the virtual object weight to the user's fingertips. Human perception of the remote virtual object weight during manipulation was evaluated by conducting a psychophysics experiment. The results indicate a significant effect of haptic feedback on the perceived weight of the virtual object during manipulation.

Gesture interface with 3D accelerometer for mobile users (모바일 사용자를 위한 3 차원 가속도기반 제스처 인터페이스)

  • Choe, Bong-Whan;Hong, Jin-Hyuk;Cho, Sung-Bae
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.378-383
    • /
    • 2009
  • These days, many systems are designed to infer people's intentions and provide corresponding services. People always carry their own mobile devices with various sensors, and the accelerometer plays a key role in this environment. The accelerometer collects motion information, which is useful for developing gesture-based user interfaces. In general, an effective method is needed for the mobile environment, which has relatively little computational capability, since recognizing time-series patterns such as gestures requires heavy computation. In this paper, we propose a 2-stage motion recognizer composed of low-level and high-level motions based on a motion library. The low-level motion recognizer uses dynamic time warping on 3D acceleration data, and high-level motions are defined linguistically in terms of the low-level motions.
