• Title/Summary/Keyword: Visual Information

Visual Image Effects on Sound Localization in Peripheral Region under Dynamic Multimedia Conditions

  • Kono, Yoshinori;Hasegawa, Hiroshi;Ayama, Miyoshi;Kasuga, Masao;Matsumoto, Shuichi;Koike, Atsushi;Takagi, Koichi
    • Proceedings of the IEEK Conference / 2002.07a / pp.702-705 / 2002
  • This paper describes the effects of visual information on sound localization in the peripheral visual field under dynamic conditions. Presentation experiments with an audio-visual stimulus were carried out using a movie of a moving patrol car and its siren sound. The following results were obtained: first, the sound image at the beginning of the presentation was more strongly captured by the visual image than that at the end, i.e., a "beginning effect" occurred; second, in the peripheral regions, the "beginning effect" appeared most strongly near the fixation point of the eyes.

Study on Levee Visual Inspection Information System Building Using Mobile Technology

  • Kang, Seung-Hyun;Lee, Jong-Min
    • Journal of the Korea Society of Computer and Information / v.21 no.6 / pp.71-76 / 2016
  • In this paper, we propose a mobile visual inspection information system using DGPS and a portable range finder for levee safety inspection. Instead of the existing visual inspection management method, in which data are recorded by hand, the system is designed to manage visual inspection information directly on mobile devices in the field. By extracting accurate DGPS coordinate information for damage locations on the levee, the system improves the efficiency of the main on-site tasks such as inspection, maintenance, and reinforcement. Furthermore, when damage occurs at a point the inspector cannot approach, the system can still record the damage-site data correctly by converting the position, orientation, and height of the damage point into World Geodetic System coordinates; these data are extracted automatically from the DGPS receiver and the portable range finder. Finally, an augmented reality view was implemented so that an inspector can easily revisit a damage point later for management, maintenance, and reinforcement of the levee.
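
The coordinate conversion described above (turning a measured bearing, range, and elevation from the inspector's DGPS position into the geodetic coordinates of an unreachable damage point) can be illustrated with a minimal sketch. The Python below is only an illustrative stand-in, not the authors' implementation: it assumes a spherical-earth forward solution and a hypothetical `project_damage_point` helper, and it ignores the geoid and ellipsoid corrections a real WGS84 conversion would need.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean earth radius; a spherical approximation

def project_damage_point(lat_deg, lon_deg, height_m,
                         bearing_deg, distance_m, elevation_deg):
    """Estimate the geodetic position of an unreachable damage point.

    lat_deg, lon_deg, height_m : inspector's DGPS fix
    bearing_deg                : compass bearing to the damage point
    distance_m                 : slant range from the portable range finder
    elevation_deg              : vertical angle to the damage point

    Returns (lat, lon, height) of the damage point. Hypothetical helper
    using a spherical-earth forward solution, not the paper's code.
    """
    # Split the slant range into horizontal and vertical components.
    horiz = distance_m * math.cos(math.radians(elevation_deg))
    vert = distance_m * math.sin(math.radians(elevation_deg))

    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    brg = math.radians(bearing_deg)
    ang = horiz / EARTH_RADIUS_M  # angular distance along the surface

    # Standard forward (destination-point) formula on a sphere.
    lat2 = math.asin(math.sin(lat1) * math.cos(ang) +
                     math.cos(lat1) * math.sin(ang) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(ang) * math.cos(lat1),
                             math.cos(ang) - math.sin(lat1) * math.sin(lat2))

    return math.degrees(lat2), math.degrees(lon2), height_m + vert

# Example: a point 120 m away on bearing 45 degrees, 3 degrees above horizontal.
print(project_damage_point(35.8714, 128.6014, 21.0, 45.0, 120.0, 3.0))
```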

A Perception-based Color Correction Method for Multi-view Images

  • Shao, Feng;Jiang, Gangyi;Yu, Mei;Peng, Zongju
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.2 / pp.390-407 / 2011
  • Three-dimensional (3D) video technologies are becoming increasingly popular, as they can provide users with high-quality and immersive experiences. However, color inconsistency between camera views is an urgent problem to be solved in multi-view imaging. In this paper, a perception-based color correction method for multi-view images is proposed. In the proposed method, human visual sensitivity (VS) and visual attention (VA) models are incorporated into the correction process. Firstly, the VS property is used to reduce computational complexity by excluding visually insensitive regions from correction. Secondly, the VA property is used to improve the perceptual quality of local VA regions by performing VA-dependent color correction. Experimental results show that, compared with other color correction methods, the proposed method greatly improves the perceptual quality of local VA regions, reduces computational complexity, and obtains higher coding performance.
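
As a rough illustration of attention-gated color correction in the spirit of the abstract above (not the paper's actual VS/VA models), the sketch below applies a simple per-channel mean/std color transfer from a reference view to a target view and blends it with the original image according to an attention map; the blending rule, the stand-in saliency values, and the helper names are assumptions.

```python
import numpy as np

def color_transfer(target, reference):
    """Per-channel mean/std transfer of reference color statistics onto target.

    target, reference: float arrays of shape (H, W, 3) in [0, 1].
    A simple global transfer, used here only as a stand-in corrector.
    """
    t_mean, t_std = target.mean(axis=(0, 1)), target.std(axis=(0, 1)) + 1e-6
    r_mean, r_std = reference.mean(axis=(0, 1)), reference.std(axis=(0, 1))
    corrected = (target - t_mean) / t_std * r_std + r_mean
    return np.clip(corrected, 0.0, 1.0)

def attention_weighted_correction(target, reference, attention):
    """Blend the corrected image with the original according to attention.

    attention: (H, W) map in [0, 1]; high values mark attended regions that
    receive full correction, low values are left mostly untouched (a crude
    analogue of skipping visually insensitive regions).
    """
    corrected = color_transfer(target, reference)
    w = attention[..., None]  # broadcast over the color channels
    return w * corrected + (1.0 - w) * target

# Usage with random stand-in data (real inputs would be rectified multi-view frames).
rng = np.random.default_rng(0)
tgt = rng.random((120, 160, 3))
ref = np.clip(tgt * 0.8 + 0.1, 0, 1)   # reference view with a color shift
att = rng.random((120, 160))           # stand-in saliency map
out = attention_weighted_correction(tgt, ref, att)
print(out.shape, float(out.min()), float(out.max()))
```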

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.364-380 / 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency map weighting is proposed. Firstly, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is supervised, inspired by random forest ideas, to reduce the randomness of E2LSH. Secondly, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image and weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is used to accomplish object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that the method is superior to state-of-the-art object classification methods.
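
To make the E2LSH and saliency-weighting steps more concrete, here is a minimal sketch of the p-stable LSH family h(v) = floor((a·v + b) / w) applied to SIFT-like descriptors, followed by a saliency-weighted bag-of-visual-words histogram. The dictionary construction, parameter values, and stand-in saliency scores are illustrative assumptions rather than the paper's configuration (which additionally supervises hash-function selection with random-forest-style ideas).

```python
import numpy as np

rng = np.random.default_rng(42)

DIM = 128        # SIFT descriptor dimensionality
N_HASHES = 8     # number of hash functions (illustrative choice)
BUCKET_W = 4.0   # quantization width w of the p-stable hash

# p-stable (Gaussian) E2LSH family: h(v) = floor((a . v + b) / w)
A = rng.normal(size=(N_HASHES, DIM))
B = rng.uniform(0.0, BUCKET_W, size=N_HASHES)

def e2lsh_key(descriptor):
    """Hash one descriptor to a tuple of bucket indices, used here as its visual word."""
    return tuple(np.floor((A @ descriptor + B) / BUCKET_W).astype(int))

def saliency_weighted_histogram(descriptors, keypoint_saliency, vocabulary):
    """Accumulate a bag-of-visual-words histogram with saliency-weighted votes.

    descriptors       : (N, DIM) local features of one image
    keypoint_saliency : (N,) saliency value sampled at each keypoint
    vocabulary        : dict mapping hash keys to word indices, grown on the fly
    """
    votes = {}
    for desc, sal in zip(descriptors, keypoint_saliency):
        word = vocabulary.setdefault(e2lsh_key(desc), len(vocabulary))
        votes[word] = votes.get(word, 0.0) + sal  # saliency acts as the vote weight
    hist = np.zeros(len(vocabulary))
    for word, weight in votes.items():
        hist[word] = weight
    return hist

# Usage with synthetic descriptors standing in for SIFT features and GBVS values.
descs = rng.normal(size=(200, DIM))
sal = rng.uniform(size=200)
vocab = {}
hist = saliency_weighted_histogram(descs, sal, vocab)
print(len(vocab), round(float(hist.sum()), 3))
```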

Perception of Ship's Movement in Docking Maneuvering using Ship-Handling Simulator

  • Arai, Yasuo;Minamiya, Taro;Okuda, Shigeyuki
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2006.10a / pp.3-10 / 2006
  • With the technical development of 3D computer graphics, the visual systems of ship-handling simulators have recently become highly realistic. Even so, the visual information presented to seafarers through a screen or display may not be equivalent to the real world. In docking maneuvering, visual targets and obstructions are sighted close to the ship's operator, within a few hundred meters, so factors such as the difference between binocular and monocular sight may affect the visual information. Because the very slow movement of very large vessels cannot be perceived visually, Doppler docking SONAR and docking speed and distance measurement equipment were developed and applied for safe docking maneuvering. Simulator training includes ship maneuvering training in docking, but in ship-handling simulators, as on board, there are limitations on the perception of the ship's movement from visual information alone. This paper examines the perception of the ship's movement with the visual system of a ship-handling simulator and compares the performance of two visual systems: a conventional screen type with a fixed eye-point system and a mission simulator. We obtained conclusions not only on the effectiveness of the visual systems but also on human behavior during docking maneuvers.

Temporal-perceptual Judgement of Visuo-Auditory Stimulation (시청각 자극의 시간적 인지 판단)

  • Yu, Mi;Lee, Sang-Min;Piao, Yong-Jun;Kwon, Tae-Kyu;Kim, Nam-Gyun
    • Journal of the Korean Society for Precision Engineering / v.24 no.1 s.190 / pp.101-109 / 2007
  • For spatio-temporal perception of visuo-auditory stimuli, previous research has proposed an optimal integration hypothesis: the perceptual process is optimized through the interaction of the senses to improve the precision of perception. Thus, when visual information, which is generally dominant over the other senses, is ambiguous, information from another sense such as an auditory stimulus influences the perceptual process in interaction with the visual information. We performed two experiments to ascertain the conditions under which the senses interact and the influence of those conditions, considering the interaction of visuo-auditory stimulation in free space, the color of the visual stimulus, and the sex of normal participants. In the first experiment, 12 participants were asked to judge the change in the frequency of audio-visual stimulation using a visual flicker and an auditory flutter stimulus presented in free space. When auditory temporal cues were presented, the change in the frequency of the visual stimulation was associated with a perceived change in the frequency of the auditory stimulation, consistent with the results of previous studies using headphones. In the second experiment, 30 male and 30 female participants were asked to judge the change in the frequency of audio-visual stimulation using a colored (red or green) visual flicker and an auditory flutter stimulus. In the color condition, male and female participants showed the same perceptual tendency; however, the standard deviation for female participants was larger than that for male participants. These results imply that audio-visual asymmetry effects are influenced by cues of the visual and auditory information, such as the orientation between the auditory and visual stimuli and the color of the visual stimulus.

Effect of Visual Information by Ultrasound on Maternal-Fetal Attachment (초음파 영상을 통한 태아의 모습 제공 여부가 임부의 태아 애착에 미치는 영향)

  • Lee, Jee-Young;Cho, Jeong-Yeon;Chang, Soon-Bok;Park, Ju-Hyun;Lee, Young-Ho
    • Women's Health Nursing / v.8 no.3 / pp.335-344 / 2002
  • Providing visual information about the fetus to the mother during an ultrasound examination was found to be an effective nursing intervention to promote maternal-fetal attachment. In keeping with the purpose of the study, to evaluate the effect of providing visual information by ultrasound on the level of maternal-fetal attachment, a non-equivalent experimental group quasi-experimental design was used. The data were collected from November 2, 2000 to August 11, 2001 using Cranley's Maternal-Fetal Attachment Scale (1981) with a research questionnaire consisting of 16 items on general characteristics and 23 items on maternal-fetal attachment. Subjects were 126 pregnant women who received visual information by ultrasound after the examination and 123 pregnant women who did not. The data were analyzed using the SPSS/PC+ Windows 10.0 program. The results were as follows: there was no statistical difference in general characteristics between the two groups. The maternal-fetal attachment scores in the second trimester showed no statistical difference (t=1.123, p=0.263). The scores in both groups increased between the second and third trimesters; however, the increase was greater in the group receiving visual information by ultrasound than in the group that did not (t=-2.152, p=0.032). This result shows that providing visual information about the fetus during the ultrasound examination is effective in increasing maternal-fetal attachment.
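
The group comparison reported above (t=-2.152, p=0.032) is an independent two-sample t-test on the change in attachment scores between trimesters. The sketch below shows how such a comparison is typically computed with SciPy; the sample data are random stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in data: per-subject change in attachment score between the
# second and third trimesters (the study had n=126 and n=123 subjects).
ultrasound_group = rng.normal(loc=6.0, scale=5.0, size=126)
control_group = rng.normal(loc=4.5, scale=5.0, size=123)

# Independent two-sample t-test on the score increases.
t_stat, p_value = stats.ttest_ind(control_group, ultrasound_group)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```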

A Collaborative Visual Language

  • Kim, Kyung-Deok
    • Journal of information and communication convergence engineering / v.1 no.2 / pp.74-81 / 2003
  • There has been much research on visual languages, but most of it has difficulty supporting the varied collaborative interactions of a distributed multimedia environment. This paper therefore suggests a collaborative visual language for interaction between multiple users. The visual language can describe a conceptual model of collaborative interactions between multiple users. Visual sentences generated with the language consist of object icons and interaction operators. An object icon represents a user who is responsible for a collaborative activity, carries the dynamic attributes of that user, and supports flexible interaction between users. An interaction operator represents an interactive relation between users and supports various collaborative interactions. Merits of the visual language include support for both asynchronous and synchronous interaction, flexible interaction that adapts as users participate or leave, and user-oriented modeling. As an example, an application to a workflow system for document approval is illustrated, showing that the visual language can describe a collaborative interaction.
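
As a rough illustration of the model described in this abstract (not the paper's formal definition), the sketch below represents object icons and interaction operators as simple data types and composes a small visual sentence for a document-approval workflow; the class names, the synchronous/asynchronous encoding, and the example roles are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Mode(Enum):
    SYNC = "synchronous"    # users interact at the same time
    ASYNC = "asynchronous"  # users interact at different times

@dataclass
class ObjectIcon:
    """A user responsible for a collaborative activity, with dynamic attributes."""
    user: str
    role: str
    online: bool = False  # dynamic attribute: participation state

@dataclass
class InteractionOperator:
    """An interactive relation between two object icons."""
    source: ObjectIcon
    target: ObjectIcon
    action: str
    mode: Mode

@dataclass
class VisualSentence:
    """A visual sentence: a sequence of interaction operators over object icons."""
    operators: List[InteractionOperator] = field(default_factory=list)

    def describe(self) -> List[str]:
        return [f"{op.source.user} --{op.action}/{op.mode.value}--> {op.target.user}"
                for op in self.operators]

# Example: a document-approval workflow between an author and two approvers.
author = ObjectIcon("Alice", "author", online=True)
reviewer = ObjectIcon("Bob", "reviewer")
manager = ObjectIcon("Carol", "manager")

sentence = VisualSentence([
    InteractionOperator(author, reviewer, "submit draft", Mode.ASYNC),
    InteractionOperator(reviewer, author, "request changes", Mode.SYNC),
    InteractionOperator(reviewer, manager, "forward for approval", Mode.ASYNC),
])

for line in sentence.describe():
    print(line)
```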

INFLUENCE OF PROVIDING BODY SENSORY INFORMATION AND VISUAL INFORMATION TO DRIVER ON STEER CHARACTERISTICS AND AMOUNT OF PERSPIRATION IN DRIFT CORNERING

  • NOZAKI H.
    • International Journal of Automotive Technology / v.7 no.1 / pp.35-41 / 2006
  • Driving simulations were performed to evaluate the effect of providing both visual information and body sensory information on changes in steering characteristics and the amount of perspiration in drift cornering. When the driver is provided with body sensory information and visual information, the amount of perspiration increases and the driver can perform drift control with a moderate level of tension. With visual information only, the driver tends to easily go into a spin because drift control is difficult. In this case, the amount of perspiration increases greatly as compared with the case where body sensory information is also provided, reflecting a very high perception of risk. When body sensory information is provided, the driver can control drift adequately, feeding back the roll angle information in steering. The importance of the driver's perception of the state of the vehicle was thus confirmed, and a desirable future direction for driver assistance systems was determined.