• Title/Summary/Keyword: Natural Interface


Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.12 no.3 / pp.119-129 / 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, the recent increase in demand for mobile augmented reality calls for efficient interaction technologies between the augmented virtual object and the user. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing the traditional marker, a human hand serves as the interface of the marker-less mobile augmented reality system. To implement the marker-less mobile augmented reality system within the limited resources of a mobile device, compared with desktop environments, we propose a method that extracts an optimal hand region, which plays the role of the marker, and augments the object in real time using the camera attached to the mobile device. The optimal hand region detection consists of detecting the hand region with a YCbCr skin-color model and extracting the optimal rectangular region with the Rotating Calipers algorithm. The extracted optimal rectangular region takes the role of a traditional marker. The proposed method resolves the problem of losing track of the fingertips when the hand is rotated or occluded in hand-marker systems. Experiments show that the proposed framework can effectively construct and control the augmented virtual object in mobile environments.
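
The hand-detection step described above can be sketched as a YCbCr skin-color threshold. This is a minimal, hedged illustration: the Cb/Cr ranges below are commonly cited defaults, not the paper's exact values, and the rotating-calipers rectangle extraction is omitted.

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Boolean mask of skin-colored pixels in an HxWx3 uint8 RGB image.

    Thresholds are commonly cited Cb/Cr skin ranges (an assumption),
    not the paper's exact values.
    """
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> Cb/Cr conversion (luma Y is not needed here)
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

The largest connected component of this mask would then be wrapped in a minimum-area rectangle (the rotating-calipers step) to stand in for the marker.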

Color Image Segmentation and Textile Texture Mapping of 2D Virtual Wearing System (2D 가상 착의 시스템의 컬러 영상 분할 및 직물 텍스쳐 매핑)

  • Lee, Eun-Hwan;Kwak, No-Yoon
    • Journal of KIISE:Computer Systems and Theory / v.35 no.5 / pp.213-222 / 2008
  • This paper addresses color image segmentation and textile texture mapping for a 2D virtual wearing system. The proposed system virtually applies a textile pattern selected by the user to a clothing region segmented from a 2D clothes model image by color image segmentation, using the region's intensity difference map. Regardless of the color or intensity of the model clothes, the system can virtually change the textile pattern or color while preserving the illumination and shading properties of the selected clothing region, and can quickly and easily simulate, compare, and select multiple textile pattern combinations for individual garments or entire outfits. The system offers high practicality and an easy-to-use interface: it runs in real time in various digital environments, creates comparatively natural and realistic virtual wearing styles, and supports semi-automatic processing that reduces manual work to a minimum. By simulating the effect of a textile pattern design on the appearance of clothes without manufacturing physical garments, the system can stimulate designers' creativity and, by supporting purchasers' decision-making, promote B2B and B2C e-commerce.
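
The shading-preserving re-texturing idea can be sketched as modulating the new pattern by each pixel's intensity relative to the region mean. This is a minimal illustration of the intensity-difference concept, not the paper's exact mapping; all parameter names are illustrative.

```python
import numpy as np

def retexture(model_rgb, model_gray, region_mask, textile_rgb):
    """Apply a textile pattern to the segmented clothing region while
    keeping the model image's shading (folds, shadows).

    model_rgb:   HxWx3 uint8 clothes-model image
    model_gray:  HxW intensity of the model image
    region_mask: HxW boolean mask of the clothing region
    textile_rgb: HxWx3 uint8 tiled textile pattern
    """
    # Shading factor: pixel intensity relative to the region's mean,
    # so illumination variation carries over to the new pattern.
    mean_i = float(model_gray[region_mask].mean())
    shade = model_gray.astype(np.float32) / max(mean_i, 1e-6)
    textured = np.clip(textile_rgb.astype(np.float32) * shade[..., None], 0, 255)
    out = model_rgb.copy()
    out[region_mask] = textured[region_mask].astype(np.uint8)
    return out
```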

Stereo Vision Based 3D Input Device (스테레오 비전을 기반으로 한 3차원 입력 장치)

  • Yoon, Sang-Min;Kim, Ig-Jae;Ahn, Sang-Chul;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.4 / pp.429-441 / 2002
  • This paper concerns extracting 3D motion information from a 3D input device in real time, with a focus on enabling effective human-computer interaction. In particular, we develop a novel algorithm that extracts 6-degrees-of-freedom motion information from a 3D input device by employing the epipolar geometry of a stereo camera together with color, motion, and structure information, without requiring a camera calibration object. To extract 3D motion, we first determine the epipolar geometry of the stereo camera by computing the perspective projection matrix and the perspective distortion matrix. We then apply the proposed Motion Adaptive Weighted Unmatched Pixel Count algorithm, which performs color transformation, unmatched pixel counting, discrete Kalman filtering, and principal component analysis. The extracted 3D motion information can be applied to controlling virtual objects or to aiding a navigation device that controls the user's viewpoint in a virtual reality setting. Since the stereo vision-based 3D input device is wireless, it provides users with a more natural and efficient interface, effectively realizing a feeling of immersion.
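
At the core of any stereo pipeline like the one above is triangulation: once corresponding pixels are matched, depth follows from disparity. The standard pinhole relation Z = fB/d below stands in for the paper's full epipolar-geometry machinery (projection and distortion matrices are not reproduced); parameter names are illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole-stereo depth from disparity: Z = f * B / d.

    focal_px:     camera focal length in pixels
    baseline_m:   distance between the two camera centers, in meters
    disparity_px: horizontal pixel offset of a matched point pair
    Returns depth in meters along the optical axis.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700-pixel focal length and a 10 cm baseline, a 35-pixel disparity places the point about 2 m from the rig.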

Development of Mirror Neuron System-based BCI System using Steady-State Visually Evoked Potentials (정상상태시각유발전위를 이용한 Mirror Neuron System 기반 BCI 시스템 개발)

  • Lee, Sang-Kyung;Kim, Jun-Yeup;Park, Seung-Min;Ko, Kwang-Enu;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.1 / pp.62-68 / 2012
  • Steady-state visually evoked potentials (SSVEP) are natural response signals associated with visual stimuli of specific frequencies. Under SSVEP, the occipital lobe is electrically activated at a frequency matching the stimulus frequency, over a bandwidth from 3.5 Hz to 75 Hz. In this paper, we propose an experimental paradigm for analyzing EEGs based on the properties of SSVEP. First, an experiment is performed to extract frequency features of EEGs measured from image-based visual stimuli associated with specific objects, and object-related affordance is measured from these frequency features using the mirror neuron system. Linear discriminant analysis (LDA) is then applied to perform online classification of the object patterns associated with the EEG-based affordance data. Building on the SSVEP measurement experiment, we propose a brain-computer interface (BCI) system for recognizing a user's inherent intentions. Existing SSVEP application systems, such as spellers, classify EEG patterns based on grid image patterns and their variations. In contrast, the proposed SSVEP-based BCI system classifies object patterns based on objects of various shapes in input images, and is thus more general than existing systems.
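
The frequency-feature front end common to SSVEP systems can be sketched as picking the candidate stimulus frequency with the largest spectral power. This is a hedged sketch of the front end only; the paper's mirror-neuron-based affordance measure and LDA classifier are not reproduced here.

```python
import numpy as np

def ssvep_target(eeg, fs, stim_freqs):
    """Return the stimulus frequency with the largest spectral power
    in a single-channel EEG trace.

    eeg:        1-D array of samples
    fs:         sampling rate in Hz
    stim_freqs: candidate flicker frequencies in Hz
    """
    windowed = eeg * np.hanning(len(eeg))        # reduce spectral leakage
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    # Power at the FFT bin nearest each candidate stimulus frequency.
    scores = [power[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]
```

In a real system this decision (or the underlying band powers) would feed the LDA stage as the feature vector.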

PPEditor: Semi-Automatic Annotation Tool for Korean Dependency Structure (PPEditor: 한국어 의존구조 부착을 위한 반자동 말뭉치 구축 도구)

  • Kim Jae-Hoon;Park Eun-Jin
    • The KIPS Transactions:PartB / v.13B no.1 s.104 / pp.63-70 / 2006
  • In general, a corpus contains a great deal of linguistic information and is widely used in natural language processing and computational linguistics. Creating such a corpus, however, is an expensive, labor-intensive, and time-consuming task. To alleviate this problem, annotation tools for building corpora rich in linguistic information are indispensable. In this paper, we design and implement an annotation tool for building a Korean dependency tree-tagged corpus. The ideal would be to create the corpus fully automatically, without annotator intervention, but in practice this is impossible. The proposed tool is therefore semi-automatic, like most other annotation tools, and is designed for editing errors generated by basic analyzers such as a part-of-speech tagger and a (partial) parser. It is also designed to avoid repetitive work during error editing and to be easy and friendly to use. Using the proposed tool, 10,000 Korean sentences, each containing over 20 words, were annotated with dependency structures: eight annotators worked four hours a day for two months. We are confident that the tool yields accurate and consistent annotations while reducing labor and time.

Soil Erosion Assessment Tool - Water Erosion Prediction Project (WEPP) (토양 침식 예측 모델 - Water Erosion Prediction Project (WEPP))

  • Kim, Min-Kyeong;Park, Seong-Jin;Choi, Chul-Man;Ko, Byong-Gu;Lee, Jong-Sik;Flanagan, D.C.
    • Korean Journal of Soil Science and Fertilizer / v.41 no.4 / pp.235-238 / 2008
  • The Water Erosion Prediction Project (WEPP) was initiated in August 1985 to develop new-generation water erosion prediction technology for federal agencies involved in soil and water conservation and environmental planning and assessment. Developed by USDA-ARS as a replacement for empirical erosion prediction technologies, the WEPP model simulates many of the physical processes important in soil erosion, including infiltration, runoff, raindrop detachment, flow detachment, sediment transport, deposition, plant growth, and residue decomposition. WEPP included an extensive field experimental program conducted on cropland, rangeland, and disturbed forest sites to obtain the data required to parameterize and test the model. A large team effort at numerous research locations, ARS laboratories, and cooperating land-grant universities was needed to develop this state-of-the-art simulation model. The WEPP model is used for hillslope applications or on small watersheds. Because it is physically based, the model has been successfully used to evaluate important natural resource issues throughout the United States and in several other countries. Recent model enhancements include a graphical Windows interface and integration of WEPP with GIS software. A combined wind and water erosion prediction system with easily accessible databases and a common interface is planned for the future.

A New Hardware Design for Generating Digital Holographic Video based on Natural Scene (실사기반 디지털 홀로그래픽 비디오의 실시간 생성을 위한 하드웨어의 설계)

  • Lee, Yoon-Hyuk;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.11 / pp.86-94 / 2012
  • In this paper, we propose a hardware architecture for a high-speed CGH (computer-generated hologram) generation processor that, in particular, reduces the number of memory accesses to avoid a bottleneck in memory access operations. For this, we use three main schemes. The first is pixel-by-pixel calculation rather than light-source-by-source calculation. The second is a parallel calculation scheme derived by modifying a previous recursive calculation scheme. The last is a fully pipelined calculation scheme with precisely structured timing scheduling tailored to the hardware. The proposed hardware calculates a row of a CGH in parallel, with each hologram pixel in a row calculated independently. It consists of an input interface, an initial parameter calculator, hologram pixel calculators, a line buffer, and a memory controller. Implemented to calculate a row of a 1,920×1,080 CGH in parallel, the hardware uses 168,960 LUTs, 153,944 registers, and 19,212 DSP blocks in an Altera FPGA environment and operates stably at 198 MHz. Thanks to the three schemes, the time spent accessing external memory is reduced to about 1/20,000 of that of previous designs at the same calculation speed.
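
The pixel-by-pixel scheme can be illustrated in software: every pixel in a hologram row sums a fringe contribution from each 3-D object point, and the pixels are mutually independent, which is what lets the hardware evaluate a whole row in parallel. This is a simplified real-valued model for illustration; the cosine fringe term and parameter names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def cgh_row(row_idx, width, points, wavelength, pixel_pitch):
    """Compute one hologram row, pixel by pixel.

    points: iterable of (x, y, z, amplitude) object points in meters.
    Each pixel accumulates cos(k * r) fringes from every point source;
    no pixel depends on another, so the loop over pixels parallelizes.
    """
    xs = np.arange(width) * pixel_pitch        # pixel x-coordinates
    y = row_idx * pixel_pitch                  # this row's y-coordinate
    k = 2.0 * np.pi / wavelength               # wavenumber
    row = np.zeros(width)
    for px, py, pz, amp in points:
        r = np.sqrt((xs - px) ** 2 + (y - py) ** 2 + pz ** 2)
        row += amp * np.cos(k * r)             # fringe from one point source
    return row
```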

The Development of Robot and Augmented Reality Based Contents and Instructional Model Supporting Childrens' Dramatic Play (로봇과 증강현실 기반의 유아 극놀이 콘텐츠 및 교수.학습 모형 개발)

  • Jo, Miheon;Han, Jeonghye;Hyun, Eunja
    • Journal of The Korean Association of Information Education / v.17 no.4 / pp.421-432 / 2013
  • The purpose of this study is to develop contents and an instructional model that support children's dramatic play by integrating robot and augmented reality technology. To support the dramatic play, the robot shows various facial expressions and actions, serves as narrator and sound manager, supports simultaneous interaction by using its camera to recognize markers and children's motions, and records children's activities as photos and videos that can be used in further activities. The robot also uses a projector to allow children to interact directly with the video object. Augmented reality, for its part, offers a variety of character changes and props, allows various background and foreground effects, enables natural interaction between the contents and children through a realistic interface, and provides opportunities for interaction between actors and audience. Along with these, augmented reality provides an experience-based learning environment that induces sensory immersion by allowing children to manipulate or choose the learning situation and experience the results. The instructional model supporting dramatic play consists of four stages: teachers' preparation; introducing and understanding a story; action planning and play; and evaluation and wrap-up. Detailed activities to decide on or carry out are suggested for each stage.

A Usability Testing on the Tablet PC-based Korean High-tech AAC Software (태블릿 PC 기반 한국형 하이테크 AAC 소프트웨어의 사용성 평가)

  • Lee, Heeyeon;Hong, Ki-Hyung
    • Journal of the HCI Society of Korea / v.7 no.2 / pp.35-42 / 2012
  • The purpose of this study was to evaluate the usability of a tablet PC-based Korean high-tech AAC (Augmentative and Alternative Communication) software. To develop AAC software appropriate to Korean cultural and linguistic contexts and to users' communication needs, we examined the necessity and ease of use of communication functions required in native Korean communication, such as polite expressions, tense expressions, negative expressions, subject-verb auto-matching, and automatic sentence generation, using scenario-based user testing. We also investigated users' needs, preferences, and satisfaction with the tablet PC-based Korean high-tech AAC using semi-structured, open-ended questionnaires. The participants were 9 special education teachers, 6 speech therapists, and 6 parents of children with communication disabilities. The usability testing produced generally positive responses, with overall scores above 4 out of 5 except for tense and negative expressions. The necessity and ease of use of the tense and negative expressions were rated relatively low, which may be related to their interface being inconsistent with that of the polite expressions. In terms of the user interface (UI), users asked for clear visual feedback in symbol selection and display, a consistent interface across all functions, more natural subject-verb auto-matching, and spacing in the text within symbols. The results of the usability testing and the users' feedback can serve as guidelines for improving the functions and UI of the existing AAC software.


Distance measurement System from detected objects within Kinect depth sensor's field of view and its applications (키넥트 깊이 측정 센서의 가시 범위 내 감지된 사물의 거리 측정 시스템과 그 응용분야)

  • Niyonsaba, Eric;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.279-282 / 2017
  • The Kinect depth sensor, a depth camera developed by Microsoft as a natural user interface for games, has emerged as a very useful tool in the computer vision field. In this paper, exploiting the Kinect depth sensor and its high frame rate, we developed a distance measurement system using the Kinect camera and tested it for unmanned vehicles, which need vision systems to perceive the surrounding environment, as humans do, in order to detect objects in their path. The Kinect depth sensor is used to detect objects in its field of view and to measure the distance from those objects to the vision sensor. A detected object is checked to determine whether it is a real object or pixel noise, reducing processing time by ignoring pixels that are not part of a real object. Using depth segmentation techniques along with the OpenCV library for image processing, we can identify objects within the Kinect camera's field of view and measure their distance to the sensor. Tests show promising results, suggesting the system could also serve autonomous vehicles equipped with a low-cost range sensor such as the Kinect camera, enabling further processing, depending on the application, once they come within a certain distance of detected objects.
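
The depth-segmentation-plus-noise-rejection step can be sketched in a few lines: Kinect reports per-pixel depth in millimetres (zero meaning no reading), so pixels within a band of the nearest valid reading form a candidate object, and too-small groups are rejected as pixel noise. This is a rough sketch under those assumptions; the band and size thresholds are illustrative, not the paper's values.

```python
import numpy as np

def nearest_object_distance(depth_mm, band_mm=100, min_pixels=50):
    """Estimate the distance (mm) to the nearest object in a depth frame.

    depth_mm:   HxW integer array of per-pixel depth in millimetres
                (0 = no reading, as Kinect reports invalid pixels)
    band_mm:    depth band grouping pixels into one candidate object
    min_pixels: smaller groups are treated as sensor noise
    """
    valid = depth_mm[depth_mm > 0]
    if valid.size == 0:
        return None                 # nothing in the field of view
    near = valid.min()
    band = (depth_mm > 0) & (depth_mm <= near + band_mm)
    if band.sum() < min_pixels:
        return None                 # too few pixels: likely pixel noise
    return float(depth_mm[band].mean())
```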
