• Title/Summary/Keyword: Visual information


Reducing Visual Discomfort for VR Browser based on Visual Perception Characteristics (사람 시각 특성을 활용한 가상현실 브라우저에서의 시각적 피로도 절감 기술)

  • Kim, Kyungtae;Kim, Haksub
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.888-890 / 2017
  • VR browsers are among the most popular applications for virtual reality (VR) environments. However, because most web content is not designed with the VR environment in mind, scrolling web pages in a VR browser causes considerable visual discomfort. We found that this occurs because the angular velocity of eye movement during scrolling increases as the viewing distance becomes shorter than on legacy devices. We therefore developed a technology that regulates scrolling to reduce visual discomfort in the VR browser, drawing on the perceptual characteristics of the human visual system.
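The geometry behind this observation can be sketched in a few lines: under a small-angle model, the retinal angular velocity of scrolling content is roughly the linear scroll speed divided by the viewing distance, so halving the (virtual) viewing distance doubles the angular velocity. The scroll speed and distances below are illustrative assumptions, not values from the paper.

```python
import math

def angular_velocity_deg(scroll_speed_m_s, viewing_distance_m):
    """Retinal angular velocity (deg/s) of content scrolling at a given
    linear speed, viewed from the given distance (small-angle model)."""
    return math.degrees(scroll_speed_m_s / viewing_distance_m)

# Made-up numbers: the same 0.1 m/s scroll sweeps twice the visual
# angle when the virtual screen sits at half the viewing distance.
legacy = angular_velocity_deg(0.1, 0.60)   # desktop monitor
vr     = angular_velocity_deg(0.1, 0.30)   # closer virtual screen in VR
print(round(legacy, 1), round(vr, 1))
```

Regulating scroll speed so that this angular velocity stays below a comfort threshold is one plausible reading of the regulation the authors describe.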

Image Understanding for Visual Dialog

  • Cho, Yeongsu;Kim, Incheol
    • Journal of Information Processing Systems / v.15 no.5 / pp.1171-1178 / 2019
  • This study proposes a deep neural network model based on an encoder-decoder structure for visual dialogs. In a visual dialog, where questions and answers about an image follow one another, an ongoing linguistic understanding of the dialog history and context is important for generating correct answers. Nevertheless, in many cases a visual understanding that can identify scenes or object attributes contained in the image is also beneficial. Hence, in addition to the visual features extracted from the entire input image by a convolutional neural network at the encoding stage, the proposed model employs a separate person detector and attribute recognizer; it emphasizes attributes such as the gender, age, and dress concept of the people in the image and uses them to generate answers. Experiments on VisDial v0.9, a large benchmark dataset, confirmed that the proposed model performs well.
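The encoding-stage fusion described above can be caricatured in plain Python: global image features are concatenated with attribute features produced by a separate person detector and attribute recognizer. The attribute vocabulary and index encoding below are invented for illustration; the actual model learns these representations with neural networks.

```python
# Toy attribute vocabulary (an assumption, not the paper's label set).
VOCAB = {"gender": ["male", "female"],
         "age":    ["child", "adult"],
         "dress":  ["casual", "formal"]}

def encode(image_features, people):
    """Fuse global image features with per-person attribute indices,
    mimicking the concatenation of CNN features and attribute features."""
    attrs = []
    for person in people:
        for key, values in VOCAB.items():
            attrs.append(values.index(person[key]))
    return list(image_features) + attrs

fused = encode([0.12, 0.80, 0.33],
               [{"gender": "female", "age": "adult", "dress": "formal"}])
print(len(fused))  # 3 global features + 3 attribute slots per person
```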

A Study on Visual Contents Exhibition Design as Efficient Spatial Experience Utilizing Internet of Things (사물인터넷을 활용한 효율적인 공간체험 영상콘텐츠 전시 디자인에 대한 연구)

  • Ryu, Chang-Su
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.611-612 / 2017
  • Recently, exhibition spaces have adopted formats in which the providers and viewers of visual contents can directly or indirectly experience the space with a sense of immersion. By offering extended spatiality within a limited space, such exhibition design transcends physical limitations and expands toward virtual reality; accordingly, it demands the implementation not only of visual elements but also of the information services generated between people, contents, and objects. To this end, this study examines the changes and development direction of exhibition techniques that use the Internet of Things (IoT), a representative environmental change in contemporary exhibition design. It also explores the various forms of change in viewers' perception of visual contents (visual exhibits) as they accumulate and use information through smartphone communication while experiencing the space through IoT. Finally, the study presents a case study comparing a conventional exhibition with an IoT-based exhibition design, in order to propose a design with which to verify, from viewers' visual and emotional changes, whether they become immersed in the visual contents and experience a sense of realism.


Motion Detection Model Based on PCNN

  • Yoshida, Minoru;Tanaka, Masaru;Kurita, Takio
    • Proceedings of the IEEK Conference / 2002.07a / pp.273-276 / 2002
  • The Pulse-Coupled Neural Network (PCNN), which can explain the synchronous bursting of neurons in the cat visual cortex, is a fundamental model for biomimetic vision. The PCNN is a kind of pulse-coded neural network model. To gain a deep understanding of visual information processing, it is important to simulate the visual system with such a biologically plausible neural network model. In this paper, we construct a motion detection model based on the PCNN, using receptive field models of neurons in the lateral geniculate nucleus and the primary visual cortex. We then show that this model can effectively detect movements and the direction of motion.
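As a sketch of what one PCNN iteration looks like (a minimal single-layer version, without the LGN/V1 receptive-field models or motion-direction layers the paper builds on top), the standard feeding/linking/threshold update can be written directly. All constants and the stimulus image below are illustrative.

```python
import math

def pcnn_step(S, F, L, Y, T, beta=0.2, aF=0.5, aL=0.5, aT=0.3,
              VF=0.1, VL=0.2, VT=20.0):
    """One iteration of a minimal PCNN on a 2-D grid.
    S: stimulus, F/L: feeding/linking states, Y: pulses, T: thresholds."""
    h, w = len(S), len(S[0])
    def link(i, j):  # sum of pulses from the 8-neighbourhood
        return sum(Y[a][b] for a in range(max(0, i-1), min(h, i+2))
                           for b in range(max(0, j-1), min(w, j+2))
                           if (a, b) != (i, j))
    F2 = [[math.exp(-aF)*F[i][j] + S[i][j] + VF*link(i, j)
           for j in range(w)] for i in range(h)]
    L2 = [[math.exp(-aL)*L[i][j] + VL*link(i, j)
           for j in range(w)] for i in range(h)]
    Y2 = [[1 if F2[i][j]*(1 + beta*L2[i][j]) > T[i][j] else 0
           for j in range(w)] for i in range(h)]
    T2 = [[math.exp(-aT)*T[i][j] + VT*Y2[i][j]
           for j in range(w)] for i in range(h)]
    return F2, L2, Y2, T2

# Bright pixels fire first as their decaying thresholds are overtaken;
# linking encourages similar neighbours to pulse synchronously.
S = [[0.0, 0.0, 0.9], [0.0, 0.9, 0.9], [0.0, 0.0, 0.0]]
F = [[0.0]*3 for _ in range(3)]; L = [[0.0]*3 for _ in range(3)]
Y = [[0]*3 for _ in range(3)];   T = [[1.0]*3 for _ in range(3)]
for _ in range(2):
    F, L, Y, T = pcnn_step(S, F, L, Y, T)
print(Y)
```

Comparing pulse maps across frames is one simple route from such a layer toward the motion detection the paper describes.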


Imaging a scene from experience given verbal expressions

  • Sakai, Y.;Kitazawa, M.;Takahashi, S.
    • Institute of Control, Robotics and Systems Conference Proceedings / 1995.10a / pp.307-310 / 1995
  • In conventional systems, a human must have knowledge of machines and of their special language in order to communicate with them. On the one hand this may be desirable, but on the other, achieving it is very laborious and a significant cause of human error. To reduce this burden, an intelligent man-machine interface should exist between a human operator and the machines to be operated. In ordinary human communication, not only linguistic but also visual information is effective, each compensating for the other's shortcomings. From this viewpoint, this paper discusses the problem of translating verbal expressions into a visual image. The location relation between any two objects in a visual scene is the key to translating verbal information into visual information, as in Fig. 1. The present translation system advances in knowledge with experience. It consists of Japanese language processing, image processing, and Japanese-to-scene translation functions.
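The key step named above, turning a verbal location relation between two objects into scene coordinates, can be sketched as a lookup of directional offsets. The relation vocabulary, step size, and coordinates are illustrative assumptions, not the paper's actual Japanese-processing pipeline.

```python
# Toy relation vocabulary: screen coordinates, y grows downward.
OFFSETS = {"left of": (-1, 0), "right of": (1, 0),
           "above": (0, -1), "below": (0, 1)}

def place(scene, obj, relation, anchor, step=50):
    """Place `obj` in the scene relative to an already-placed `anchor`."""
    ax, ay = scene[anchor]
    dx, dy = OFFSETS[relation]
    scene[obj] = (ax + dx*step, ay + dy*step)
    return scene

scene = {"desk": (200, 150)}
place(scene, "lamp", "left of", "desk")
print(scene["lamp"])  # (150, 150)
```

Accumulating such placements sentence by sentence is one way a system could "advance in knowledge with experience" as the abstract puts it.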


Video Game Experience and Children's Abilities of Self-Control and Visual Information Processing (전자오락경험과 아동의 자기통제력 및 시각정보처리능력)

  • Yi, Soon Hyung;Lee, So Eun
    • Korean Journal of Child Studies / v.18 no.2 / pp.105-120 / 1997
  • The purpose of this study was to investigate children's self-control and visual information processing abilities in relation to their experience with video games. The participants, divided by prior exposure to video games, were 44 seven-year-old and 48 eleven-year-old boys. Impulsive tendency was measured with the MFFT and a delayed-satisfaction test. Visual information processing ability was assessed through perceptual speed, mental rotation, and spatial visualization tasks. No differences in self-control were found between more- and less-video-game-experienced boys. Significant differences, however, were found in visual information processing: more experienced boys performed better on the mental rotation and spatial visualization tasks than less experienced boys.


Applications of Morphing on Facial Model Reconstruction and Surgical Simulation

  • Lee, Tong-Yee;Sun, Yung-Nein;Weng, Tzu-Lun;Lin, Yung-Ching
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.103.2-110 / 1999
  • Facial model reconstruction and surgical simulation are essential parts of a computer-aided surgical system. Plastic surgeons use such a system to design appropriate repair plans and procedures before the actual surgery is performed. In this work, we explore 3-D metamorphosis and present new results in both of these areas.
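The geometric core of 3-D metamorphosis can be sketched as linear interpolation between corresponding vertices of the pre-operative and planned facial meshes; real morphing systems must also solve the correspondence problem, which this toy skips. The vertex data are made up.

```python
def morph(src_vertices, dst_vertices, t):
    """Blend two corresponding vertex lists at parameter t in [0, 1]
    (t = 0 gives the source mesh, t = 1 the target mesh)."""
    return [tuple(s + t * (d - s) for s, d in zip(sv, dv))
            for sv, dv in zip(src_vertices, dst_vertices)]

pre_op  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # hypothetical mesh vertices
planned = [(0.0, 2.0, 0.0), (1.0, 2.0, 2.0)]
print(morph(pre_op, planned, 0.5))  # halfway frame of the simulation
```

Sweeping t from 0 to 1 yields the in-between frames a surgical simulation would display.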

Representing Navigation Information on Real-time Video in Visual Car Navigation System

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • Korean Journal of Remote Sensing / v.23 no.5 / pp.365-373 / 2007
  • Car navigation is a key application in geographic information systems and telematics. A recent trend in car navigation systems is to use real video captured by a camera mounted on the vehicle, because video has more power to represent the real world than a conventional map. In this paper, we propose a visual car navigation system that represents route guidance visually. It can improve drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid directly on that video. The system integrates real-time data acquisition, conventional route finding and guidance, computer vision, and augmented reality display. We also designed a visual navigation controller that controls the other modules and dynamically determines how navigation information is visually represented according to the current location and driving circumstances. We briefly describe an implementation of the system.
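Overlaying navigation information directly on video ultimately reduces to projecting 3-D guidance points into the camera image. A minimal pinhole-projection sketch, with made-up focal length and image size rather than any actual camera calibration:

```python
def project(point_cam, f=800.0, cx=640.0, cy=360.0):
    """Project a 3-D point in camera coordinates (x right, y down,
    z forward, metres) to pixel coordinates with a pinhole model."""
    x, y, z = point_cam
    if z <= 0:
        return None          # behind the camera; nothing to draw
    return (cx + f * x / z, cy + f * y / z)

# A turn marker 2 m right of centre, 1.5 m below eye level, 20 m ahead.
print(project((2.0, 1.5, 20.0)))  # → (720.0, 420.0)
```

A controller like the one described could re-project such markers every frame as the vehicle pose changes.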

Robust Feature Extraction Based on Image-based Approach for Visual Speech Recognition (시각 음성인식을 위한 영상 기반 접근방법에 기반한 강인한 시각 특징 파라미터의 추출 방법)

  • Gyu, Song-Min;Pham, Thanh Trung;Min, So-Hee;Kim, Jing-Young;Na, Seung-You;Hwang, Sung-Taek
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.3 / pp.348-355 / 2010
  • In spite of advances in speech recognition technology, speech recognition in noisy environments remains a difficult task. To address this problem, researchers have proposed methods that use visual information in addition to audio information. However, visual information is also subject to noise, which degrades visual speech recognition; how to extract visual feature parameters that enhance recognition performance is therefore a topic of active interest. In this paper, we propose an image-based method of visual feature extraction for enhancing the recognition performance of an HMM-based visual speech recognizer. For the experiments, we constructed an audio-visual database of 105 speakers, each uttering 62 words. We applied histogram matching, lip folding, RASTA filtering, a linear mask, DCT, and PCA. The experimental results show that the recognition performance of the proposed method improved by about 21% over the baseline method.
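Of the transforms listed, the DCT step is the easiest to sketch: a 1-D DCT-II over a row of lip-region intensities, keeping only the first few coefficients as a compact feature vector. The real pipeline applies the transform in 2-D after the other preprocessing stages; the pixel values here are invented.

```python
import math

def dct2_row(signal, n_coeffs):
    """Unnormalized 1-D DCT-II; low-order coefficients capture the
    coarse shape of the intensity profile and serve as features."""
    N = len(signal)
    return [sum(signal[n] * math.cos(math.pi * k * (2*n + 1) / (2*N))
                for n in range(N))
            for k in range(n_coeffs)]

row = [10, 12, 11, 40, 42, 41, 12, 10]   # toy lip-row intensity profile
features = dct2_row(row, 4)
print([round(c, 2) for c in features])
```

PCA would then be applied over many such vectors to decorrelate and further compress them.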

Query by Visual Example: A Comparative Study of the Efficacy of Image Query Paradigms in Supporting Visual Information Retrieval (시각 예제에 의한 질의: 시각정보 검색지원을 위한 이미지 질의 패러다임의 유용성 비교 연구)

  • Venters, Colin C.
    • Journal of Information Management / v.42 no.3 / pp.71-94 / 2011
  • Query by visual example is the principal query paradigm for expressing queries in a content-based image retrieval environment. Query by image and query by sketch have long been purported to be viable methods of query formulation, yet there is little empirical evidence to support their efficacy in facilitating it. The ability of searchers to express their information problem to an information retrieval system is fundamental to the retrieval process. The aim of this research was to investigate the query by image and query by sketch methods across a range of information problems through a usability experiment, in order to address the gap in knowledge regarding the relationship between searchers' information problems and the query methods required to support efficient and effective visual query formulation. The results suggest that query by image is a viable approach to visual query formulation. In contrast, they strongly suggest a significant mismatch between searchers' information problems and the expressive power of the query by sketch paradigm. The usability experiment, which measured efficiency (time), effectiveness (errors), and user satisfaction, found a significant difference, p<0.001, between the two query methods on all three measures: time (Z=-3.597, p<0.001), errors (Z=-3.317, p<0.001), and satisfaction (Z=-10.223, p<0.001). There was also a significant difference in participants' perceived usefulness of the query tools (Z=-4.672, p<0.001).
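The Z statistics reported here are of the kind a Wilcoxon signed-rank test yields under the normal approximation. A toy computation with invented paired task times (no tie or continuity corrections; not the study's data):

```python
import math

def wilcoxon_z(pairs):
    """Wilcoxon signed-rank Z for paired samples, normal approximation.
    Assumes distinct nonzero |differences| (no tie handling)."""
    diffs = [a - b for a, b in pairs if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    w_plus = sum(rank + 1 for rank, i in enumerate(ranked) if diffs[i] > 0)
    n = len(diffs)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean) / sd

# Hypothetical task times (seconds): query-by-image vs query-by-sketch.
times = [(31, 55), (28, 41), (35, 62), (30, 58), (33, 49), (29, 60)]
print(round(wilcoxon_z(times), 2))  # → -2.2
```

A consistently faster first condition drives W+ toward zero and the Z statistic negative, as in the differences the study reports.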