• Title/Summary/Keyword: Visual Information


Query by Visual Example: A Comparative Study of the Efficacy of Image Query Paradigms in Supporting Visual Information Retrieval (시각 예제에 의한 질의: 시각정보 검색지원을 위한 이미지 질의 패러다임의 유용성 비교 연구)

  • Venters, Colin C.
    • Journal of Information Management
    • /
    • v.42 no.3
    • /
    • pp.71-94
    • /
    • 2011
  • Query by visual example is the principal query paradigm for expressing queries in a content-based image retrieval environment. Query by image and query by sketch have long been purported to be viable methods of query formulation, yet there is little empirical evidence to support their efficacy in facilitating query formulation. The ability of searchers to express their information problem to an information retrieval system is fundamental to the retrieval process. The aim of this research was to investigate the query by image and query by sketch methods in supporting a range of information problems through a usability experiment, in order to address the gap in knowledge regarding the relationship between searchers' information problems and the query methods required to support efficient and effective visual query formulation. The results of the experiment suggest that query by image is a viable approach to visual query formulation. In contrast, the results strongly suggest that there is a significant mismatch between searchers' information problems and the expressive power of the query by sketch paradigm in supporting visual query formulation. The results of a usability experiment focusing on efficiency (time), effectiveness (errors) and user satisfaction show a significant difference, p<0.001, between the two query methods on all three measures: time (Z=-3.597, p<0.001), errors (Z=-3.317, p<0.001), and satisfaction (Z=-10.223, p<0.001). The results also show a significant difference in participants' perceived usefulness of the query tools (Z=-4.672, p<0.001).
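The paired comparisons above report Wilcoxon-style Z statistics. As a hedged illustration (the function name and sample data are hypothetical, not the study's measurements), a signed-rank Z for paired usability measures such as task time can be computed like this:

```python
import math

def wilcoxon_z(before, after):
    """Wilcoxon signed-rank test with the normal approximation,
    as typically used for paired usability measures (time, errors)."""
    diffs = [b - a for b, a in zip(before, after) if b != a]
    n = len(diffs)
    # Rank the absolute differences, using average ranks for ties.
    ranked = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied group
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    # Sum of ranks of positive differences, then normal approximation.
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean) / sd

z = wilcoxon_z([5, 6, 7, 8, 9], [1, 2, 3, 4, 10])
```

In practice `scipy.stats.wilcoxon` would be used; the sketch just makes the Z values in the abstract concrete.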

The Influence of Acoustic Information Type on Landscape Preference (청각적 정보의 유형이 경관선호도에 미치는 영향)

  • 서주환;성미성
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.29 no.5
    • /
    • pp.28-36
    • /
    • 2001
  • The purpose of this study is to examine the influence of soundscape on landscape preference. Specifically, standard types of acoustic information were applied to landscapes with positive and negative scenery. The spatial image was analyzed using the variables of Kaplan's information-processing model. The level of visual preference was measured for each combination of acoustic and visual information in the landscape, and the data were analyzed by multiple regression. The results can be summarized as follows. Landscape preference reached its maximum in Type I and its minimum in Type II for coherence, mystery, and legibility; complexity showed no difference in preference. These results clearly show that sound influences judgments of landscape preference, and that the influence differs by the type of acoustic and visual information in the landscape. ANOVA among the types of acoustic information showed mean differences between positive sound, no sound, and negative sound for coherence, mystery, and legibility, but not for complexity. These variables may be major factors to consider in planning and design as a functional basis for quantitative analysis.


A Study on Human Resources of the Local Visual Industry (Focused on Busan Metropolitan City) (지역 영상산업 인력자원 분석 - 부산광역시를 중심으로 -)

  • Park, Byeong-Ju;Choi, Yeong-Geun;Kim, JaeHeon;Kim, Cheeyong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.10a
    • /
    • pp.625-628
    • /
    • 2009
  • Busan was once a wasteland for the visual industry, but it has undergone a complete transformation into a visual-industry city since the 1st Busan International Film Festival (PIFF) was held in 1996. The city has promoted "Cineport Busan" in stages: a first-step promotion period as a "city good for film" in 2004; a second-step settlement period as a "city good for producing movies" from 2005 to 2007; and now a third development period in which the visual industry is taking root, with projects under way to make Busan a major movie-production hub. The remaining problem is the employment of regional visual-industry human resources, which is an indispensable element if the visual industry is to become a genuinely regional industry. This study therefore analyzes the problems of the visual industry and its human resources in Busan, and examines how the supply of regional visual human resources can be secured through Busan's development plan as a major visual-industry city.


Accurate Representation of Light-intensity Information by the Neural Activities of Independently Firing Retinal Ganglion Cells

  • Ryu, Sang-Baek;Ye, Jang-Hee;Kim, Chi-Hyun;Goo, Yong-Sook;Kim, Kyung-Hwan
    • The Korean Journal of Physiology and Pharmacology
    • /
    • v.13 no.3
    • /
    • pp.221-227
    • /
    • 2009
  • For successful restoration of visual function by a visual neural prosthesis such as a retinal implant, electrical stimulation should evoke neural responses so that the information in the visual input is properly represented. A stimulation strategy, i.e., a method for generating stimulation waveforms based on visual input, should be developed for this purpose. We proposed using the decoding of visual input from retinal ganglion cell (RGC) responses to evaluate a stimulus encoding strategy. This is based on the assumption that reliable encoding of visual information in RGC responses is required to enable successful visual perception. The main purpose of this study was to determine the influence of inter-dependence among stimulated RGCs' activities on decoding accuracy. Light-intensity variations were decoded from multiunit RGC spike trains using an optimal linear filter. More accurate decoding was possible when different types of RGCs were used together as input. Decoding accuracy was also enhanced with independently firing RGCs compared to synchronously firing RGCs. This implies that stimulation of independently firing RGCs and RGCs of different types may be beneficial for visual function restoration by a retinal prosthesis.
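The optimal-linear-filter decoding described above can be sketched with simulated data, assuming NumPy is available. The four RGC gains and the independent Poisson firing model below are illustrative assumptions, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated light-intensity trace and spike counts of four hypothetical
# RGCs whose firing rates depend (noisily) on intensity -- stand-ins for
# the recorded multiunit spike trains.
T = 500
intensity = np.sin(np.linspace(0, 8 * np.pi, T)) + 1.0
rates = np.stack([intensity * g for g in (2.0, 3.0, 1.5, 2.5)], axis=1)
spikes = rng.poisson(rates)  # shape (T, 4): independently firing units

# Optimal linear decoder: least-squares weights mapping counts -> intensity.
X = np.column_stack([spikes, np.ones(T)])  # append a bias column
w, *_ = np.linalg.lstsq(X, intensity, rcond=None)
decoded = X @ w

# Decoding accuracy as correlation between true and decoded intensity.
r = np.corrcoef(intensity, decoded)[0, 1]
```

With independent Poisson noise the least-squares filter averages out the variability across units, which is one intuition for why independently firing RGCs decode better than synchronized ones.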

A Study on the Visual Odometer using Ground Feature Point (지면 특징점을 이용한 영상 주행기록계에 관한 연구)

  • Lee, Yoon-Sub;Noh, Gyung-Gon;Kim, Jin-Geol
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.28 no.3
    • /
    • pp.330-338
    • /
    • 2011
  • Odometry is a critical factor in estimating the location of a robot. In a wheeled mobile robot, odometry can be performed using information from the encoders. However, encoder-based location information is inaccurate because of errors caused by wheel misalignment or slip. In general, visual odometry has been used to compensate for the kinetic errors of a robot. However, conventional visual odometry requires a kinematic analysis of the particular robot system to compensate for errors, so it cannot easily be applied to other types of robot systems. In this paper, a novel visual odometry, which employs only a single camera facing the ground, is proposed. The camera is mounted at the center of the bottom of the mobile robot. Feature points of the ground image are extracted using a median filter and a color contrast filter. The linear and angular vectors of the mobile robot are then calculated by matching feature points, and odometry is performed using these vectors. The proposed odometry is verified through driving tests comparing the encoder with the new visual odometry.
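The estimation of linear and angular motion from matched ground feature points can be sketched as a planar rigid-transform fit (a standard 2D Procrustes solution; the paper's exact formulation may differ, and the function name is hypothetical):

```python
import math

def rigid_transform_2d(prev_pts, curr_pts):
    """Estimate planar rotation and translation from matched ground
    feature points in two consecutive downward-camera frames."""
    n = len(prev_pts)
    # Centroids of each point set.
    pcx = sum(p[0] for p in prev_pts) / n
    pcy = sum(p[1] for p in prev_pts) / n
    ccx = sum(c[0] for c in curr_pts) / n
    ccy = sum(c[1] for c in curr_pts) / n
    # Cross-covariance terms give the rotation angle directly in 2D.
    sxx = sxy = syx = syy = 0.0
    for (px, py), (cx, cy) in zip(prev_pts, curr_pts):
        dx, dy = px - pcx, py - pcy
        ex, ey = cx - ccx, cy - ccy
        sxx += dx * ex; sxy += dx * ey
        syx += dy * ex; syy += dy * ey
    theta = math.atan2(sxy - syx, sxx + syy)
    # Translation maps the rotated previous centroid onto the current one.
    tx = ccx - (pcx * math.cos(theta) - pcy * math.sin(theta))
    ty = ccy - (pcx * math.sin(theta) + pcy * math.cos(theta))
    return theta, tx, ty
```

Accumulating `(theta, tx, ty)` frame to frame yields the odometry estimate without any robot-specific kinematic model, which is the portability advantage the abstract points to.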

An Efficient Processing Technique for Similarity based Visual Queries (효율적인 유사 시각질의 처리)

  • Hwang, Jun
    • Journal of Internet Computing and Services
    • /
    • v.1 no.1
    • /
    • pp.1-14
    • /
    • 2000
  • Visual information retrieval and image databases are very important applications of spatial access methods. The queries for these applications are visual and based not on exact match but on subjective similarity. The individual operations of spatial access methods are much more expensive than those of conventional one-dimensional access methods. Also, because visual queries are much more complex than textual queries, an efficient processing technique for visual queries is one of the critical requirements in the development of large and scalable image databases. Therefore, efficient translation and execution of complex visual queries are no less important than those of textual databases. In this paper, we introduce the cognitive and topological studies required to process subjective visual queries effectively. We then propose efficient translation and execution techniques for similarity-based visual queries based on these studies.
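Similarity-based (rather than exact-match) query processing can be illustrated as a minimal nearest-neighbour search over feature vectors; the image names and three-bin "histograms" below are purely hypothetical:

```python
import math

def knn_query(query_vec, database, k=3):
    """Rank images by Euclidean distance between feature vectors
    (e.g. colour histograms) -- similarity, not exact match."""
    scored = sorted(
        database.items(),
        key=lambda item: math.dist(query_vec, item[1]),
    )
    return [name for name, _ in scored[:k]]

images = {
    "sunset.jpg": [0.9, 0.4, 0.1],
    "forest.jpg": [0.1, 0.8, 0.2],
    "beach.jpg":  [0.8, 0.5, 0.3],
    "night.jpg":  [0.1, 0.1, 0.2],
}
result = knn_query([0.85, 0.45, 0.15], images, k=2)
```

A real system would replace the linear scan with a spatial access method (e.g. an R-tree) precisely because, as the abstract notes, its per-operation cost must be amortized over large collections.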


Lip and Voice Synchronization Using Visual Attention (시각적 어텐션을 활용한 입술과 목소리의 동기화 연구)

  • Dongryun Yoon;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.166-173
    • /
    • 2024
  • This study explores lip-sync detection, focusing on the synchronization between lip movements and voices in videos. Typically, lip-sync detection techniques involve cropping the facial area of a given video, utilizing the lower half of the cropped box as input for the visual encoder to extract visual features. To enhance the emphasis on the articulatory region of lips for more accurate lip-sync detection, we propose utilizing a pre-trained visual attention-based encoder. The Visual Transformer Pooling (VTP) module is employed as the visual encoder, originally designed for the lip-reading task, predicting the script based solely on visual information without audio. Our experimental results demonstrate that, despite having fewer learning parameters, our proposed method outperforms the latest model, VocaList, on the LRS2 dataset, achieving a lip-sync detection accuracy of 94.5% based on five context frames. Moreover, our approach exhibits an approximately 8% superiority over VocaList in lip-sync detection accuracy, even on an untrained dataset, Acappella.
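The core idea of scoring audio-visual synchrony over a context window can be sketched as a mean cosine similarity between per-frame embeddings. This is only the generic contrastive-scoring notion, not the VTP or VocaList architecture, and the feature vectors below are made up:

```python
import math

def sync_score(visual_feats, audio_feats):
    """Mean cosine similarity between per-frame visual and audio
    embeddings over a context window: higher means better lip-sync."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    pairs = list(zip(visual_feats, audio_feats))
    return sum(cos(v, a) for v, a in pairs) / len(pairs)

# Five context frames: in-sync pairs should score higher than
# temporally shifted (off-sync) ones.
visual = [[1.0, 0.2], [0.8, 0.4], [0.2, 0.9], [0.5, 0.5], [0.9, 0.1]]
in_sync = sync_score(visual, visual)
off_sync = sync_score(visual, visual[1:] + visual[:1])  # audio shifted by 1
```

Detection then reduces to thresholding (or classifying) this score; attention-based encoders such as VTP aim to make the visual embeddings focus on the articulatory lip region before the comparison.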