• Title/Summary/Keyword: visual features


Experiments on a Visual Servoing Approach using Disturbance Observer (외란관측기를 이용한 시각구동 방법의 구현)

  • Lee, Joon-Soo;Suh, Il-Hong;You, Bum-Jae;Oh, Sang-Rok
    • Proceedings of the KIEE Conference
    • /
    • 1999.07g
    • /
    • pp.3077-3079
    • /
    • 1999
  • A visual servoing method based on a disturbance observer has been proposed to eliminate the effect of the off-diagonal components of the image feature Jacobian, since performance indices such as the measurement sensitivity of visual features, the sensitivity of the control to noise, and controllability can be improved when the image feature Jacobian is a block diagonal matrix. In this paper, experimental results of disturbance observer-based visual servoing are discussed, where a Samsung FARAMAN-ASl 6-axis industrial robot manipulator is employed. Also, a feature saturator is proposed to stabilize the disturbance observer loop by saturating the differential changes of the image features.

  • PDF
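
As a rough illustration of the disturbance-observer idea in the abstract above, the sketch below lumps everything outside a nominal decoupled feature model (off-diagonal Jacobian coupling, noise) into an estimated disturbance, and clips the differential feature change as the feature saturator does. This is a minimal scalar sketch; the function names, the first-order model, and all parameters are illustrative, not from the paper.

```python
def saturate(df, limit):
    """Feature saturator: clip a differential feature change to +/- limit."""
    return max(-limit, min(limit, df))

def dob_step(f_prev, f_meas, u_prev, b_nominal, dt, limit):
    """One discrete step of a simple disturbance observer.

    Nominal (decoupled) model: df/dt = b_nominal * u.
    Anything the nominal model does not explain (e.g. off-diagonal
    Jacobian coupling) is returned as the disturbance estimate d_hat,
    which a controller would subtract from its command.
    """
    df = saturate(f_meas - f_prev, limit)   # saturated feature change
    d_hat = df / dt - b_nominal * u_prev    # unexplained part of the motion
    return d_hat
```

With a perfectly decoupled plant the estimate is zero; any coupling shows up in `d_hat` and can be cancelled in the next command.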

Omni-directional Visual-LiDAR SLAM for Multi-Camera System (다중 카메라 시스템을 위한 전방위 Visual-LiDAR SLAM)

  • Javed, Zeeshan;Kim, Gon-Woo
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.3
    • /
    • pp.353-358
    • /
    • 2022
  • Due to the limited field of view of the pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Nowadays, multiple-camera setups and large field-of-view cameras are used to address these issues. However, a multiple-camera system increases the computational complexity of the algorithm. Therefore, for multiple-camera-assisted visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, 3D LiDAR is fused with the omnidirectional camera setup. Depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness in various environments.
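
The depth-from-LiDAR step described above can be sketched as projecting 3D LiDAR points into the image and giving each image feature the depth of the nearest projected point. This is a simplified sketch using a pinhole model rather than the paper's panoramic camera model; the function names, intrinsics, and search radius are illustrative.

```python
def project(pt, fx, fy, cx, cy):
    """Pinhole projection of a 3D point (x, y, z) with z > 0."""
    x, y, z = pt
    return (fx * x / z + cx, fy * y / z + cy)

def depth_from_lidar(feature_uv, lidar_pts, fx, fy, cx, cy, radius=3.0):
    """Return the depth (z) of the LiDAR point whose projection lies
    closest to the feature, or None if none falls within `radius` px."""
    best, best_d2 = None, radius * radius
    u, v = feature_uv
    for pt in lidar_pts:
        if pt[2] <= 0:            # behind the camera: skip
            continue
        pu, pv = project(pt, fx, fy, cx, cy)
        d2 = (pu - u) ** 2 + (pv - v) ** 2
        if d2 <= best_d2:
            best, best_d2 = pt[2], d2
    return best
```

Features that get no LiDAR depth this way would fall back to triangulation from pose information, as the abstract describes.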

Visual Feature Extraction Technique for Content-Based Image Retrieval

  • Park, Won-Bae;Song, Young-Jun;Kwon, Heak-Bong;Ahn, Jae-Hyeong
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.12
    • /
    • pp.1671-1679
    • /
    • 2004
  • This study proposes visual-feature extraction methods for each band in the wavelet domain, capturing both spatial frequency features and multi-resolution features. In addition, it puts forward a similarity measurement method using fuzzy theory and a new color feature representation that exploits the frequency of identical colors after color quantization, reducing the quantization error that is a disadvantage of the existing color histogram intersection method. Experiments were performed on a database containing 1,000 color images. The proposed method gives better performance than the conventional method in both objective and subjective evaluations.

  • PDF
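
The baseline the abstract compares against, color histogram intersection after quantization, can be sketched as follows. This is the classical method only, not the paper's fuzzy refinement; the quantization level and function names are illustrative.

```python
def quantize(color, levels=8):
    """Uniformly quantize an (r, g, b) color with 0-255 channels."""
    step = 256 // levels
    return tuple(c // step for c in color)

def histogram(pixels, levels=8):
    """Normalized color histogram over quantized bins."""
    h = {}
    for p in pixels:
        q = quantize(p, levels)
        h[q] = h.get(q, 0) + 1
    n = len(pixels)
    return {k: v / n for k, v in h.items()}

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return sum(min(h1.get(k, 0.0), v) for k, v in h2.items())
```

The quantization step is where the error the paper targets comes from: two visually close colors can land in different bins and contribute nothing to the intersection.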

A New Details Extraction Technique for Video Sequence Using Morphological Laplacian (수리형태학적 Laplacian 연산을 이용한 새로운 동영상 Detail 추출 기법)

  • 김희준;어진우
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.911-914
    • /
    • 1998
  • In this paper, the importance of including small image features at the initial levels of a progressive second-generation video coding scheme is presented. It is shown that a number of meaningful small features, called details, should be coded in order to match their perceptual significance to the human visual system. We propose a method for extracting, perceptually selecting, and coding visual details in a video sequence using the morphological Laplacian operator and a modified post-it transform, which is very efficient for improving the quality of the reconstructed images.

  • PDF
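
The morphological Laplacian named in the abstract is the difference between the dilation residue and the erosion residue, (dilation − f) − (f − erosion); it responds strongly at small bright or dark details. A minimal 1-D sketch with a flat structuring element (the 2-D image case is analogous; names and the radius are illustrative):

```python
def dilate(sig, r=1):
    """Grayscale dilation of a 1-D signal with a flat window of radius r."""
    n = len(sig)
    return [max(sig[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def erode(sig, r=1):
    """Grayscale erosion of a 1-D signal with a flat window of radius r."""
    n = len(sig)
    return [min(sig[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def morph_laplacian(sig, r=1):
    """Morphological Laplacian: (dilation - f) - (f - erosion)."""
    d, e = dilate(sig, r), erode(sig, r)
    return [d[i] + e[i] - 2 * sig[i] for i in range(len(sig))]
```

On a flat signal the operator is zero; an isolated spike produces a strong negative response at its peak flanked by positive lobes, which is what makes it useful for picking out details.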

Complex Features by Independent Component Analysis (독립성분분석에 의한 복합특징 형성)

  • 오상훈
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2003.05a
    • /
    • pp.351-355
    • /
    • 2003
  • Neurons in the mammalian visual cortex can be classified into the two main categories of simple cells and complex cells based on their response properties. Here, we find the complex features corresponding to the responses of complex cells by applying an unsupervised independent component analysis network to input images. This result will help elucidate the information processing mechanism of neurons in the primary visual cortex.

  • PDF
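
For readers unfamiliar with the simple/complex cell distinction the abstract relies on: a complex cell's phase-invariant response is commonly described by the classical energy model, a quadrature pair of simple-cell filters whose outputs are squared and summed. The sketch below illustrates that standard model, not the paper's ICA network; the Gabor-like filter and all names are illustrative.

```python
import math

def gabor(x, freq, phase):
    """1-D Gabor-like simple-cell receptive field (Gaussian envelope)."""
    return math.exp(-x * x / 2.0) * math.cos(freq * x + phase)

def simple_response(signal, freq, phase):
    """Linear simple-cell response: inner product with the filter."""
    c = len(signal) // 2
    return sum(s * gabor(i - c, freq, phase) for i, s in enumerate(signal))

def complex_response(signal, freq):
    """Energy model: quadrature pair of simple cells, squared and summed,
    giving the phase-invariant response typical of complex cells."""
    a = simple_response(signal, freq, 0.0)
    b = simple_response(signal, freq, math.pi / 2)
    return a * a + b * b
```

The "complex features" the paper extracts play the same role as this energy output, but are learned from image statistics rather than built in by hand.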

An Exploratory Investigation on Visual Cues for Emotional Indexing of Image (이미지 감정색인을 위한 시각적 요인 분석에 관한 탐색적 연구)

  • Chung, SunYoung;Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.48 no.1
    • /
    • pp.53-73
    • /
    • 2014
  • Given that emotion-based computing environments have grown recently, it is necessary to focus on emotional access to and use of multimedia resources, including images. This study aims to identify the visual cues for emotion in images. To achieve this, five basic emotions were selected, namely love, happiness, sadness, fear, and anger, and twenty participants were interviewed to elicit the visual cues for these emotions. A total of 620 visual cues mentioned by participants were collected from the interviews and coded into five categories and 18 sub-categories of visual cues. The findings show that facial expressions, actions/behaviors, and syntactic features were significant in perceiving a specific emotion in an image. Each emotion showed distinctive visual-cue characteristics: love was more highly related to actions and behaviors, while happiness was substantially related to facial expressions. Sadness was perceived primarily through actions and behaviors, and fear considerably through facial expressions. Anger was highly related to syntactic features such as lines, shapes, and sizes. These findings imply that emotional indexing can be effective when content-based features are considered in combination with concept-based features.

Analysis of Features to Acquire Observation Information by Sex through Scanning Path Tracing - With the Object of Space in Cafe - (주사경로 추적을 통한 성별 주시정보 획득특성 - 카페 공간을 대상으로 -)

  • Choi, Gae-Young
    • Korean Institute of Interior Design Journal
    • /
    • v.23 no.5
    • /
    • pp.76-85
    • /
    • 2014
  • When the conscious and unconscious exploration behavior of space visitors, contained in the information acquired while viewing a space, is analyzed, it can be determined which factors in the space visitors pick up as visual information in order to act on it. This study, using a three-dimensional reproduction of a cafe visited for conversation, analyzed the process of acquiring spatial information by sex to identify the features of the scanning path. The findings are as follows. First, the scanning-type rates of males were "Combination" (50.5%) and "Circulation" (31.0%), and those of females were "Horizontal" (32.5%) and "Combination" (32.1%), showing a large difference by sex in the scanning paths occurring while observing the space. Second, regarding continuous observation frequency by sex, both sexes showed increasing "Horizontal" scanning and decreasing "Combination" scanning as the frequency of continuous observations increased, while "Circulation" scanning decreased for females but fluctuated for males. Third, the "Combination" scanning of males was strong at short observation times, with three consecutive observations defined as "attention concentration," and their scanning dispersed to "Combination-Circulation" as the frequency of continuous observation increased. Females started information acquisition with "Combination-Circulation" but showed strong "Horizontal" scanning in the process of visual appreciation. These scanning features can be regarded as sex-specific characteristics of spatial information acquisition and are significant as fundamental data for customized space design by sex.

A Neural Network Model for Visual Selection: Top-down mechanism of Feature Gate model (시각적 선택에 대한 신경 망 모형FeatureGate 모형의 하향식 기제)

  • 김민식
    • Korean Journal of Cognitive Science
    • /
    • v.10 no.3
    • /
    • pp.1-15
    • /
    • 1999
  • Based on known physiological and psychophysical results, a neural network model for visual selection, called FeatureGate, is proposed. The model consists of a hierarchy of spatial maps, and the flow of information from each level of the hierarchy to the next is controlled by attentional gates. The gates are jointly controlled by a bottom-up system favoring locations with unique features and a top-down mechanism favoring locations with features designated as target features. The present study focuses on the top-down mechanism of the FeatureGate model, which produces results similar to Moran and Desimone's (1985), which many current models have failed to explain. The FeatureGate model allows a consistent interpretation of many different experimental results in visual attention, including parallel feature searches and serial conjunction searches, attentional gradients triggered by cuing, feature-driven spatial selection, split attention, inhibition of distractor locations, and flanking inhibition. This framework can be extended to produce a model of shape recognition using upper-level units that respond to configurations of features.

  • PDF
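
The joint bottom-up/top-down gate control described above can be sketched with scalar features at a handful of locations: bottom-up salience favors the location whose feature differs most from the rest, top-down bias favors locations matching the designated target feature, and the gate is a weighted combination. This is a toy sketch of the gating idea only, not the paper's hierarchical map model; all names and weights are illustrative.

```python
def bottom_up(features):
    """Bottom-up salience: locations with unique features score high."""
    n = len(features)
    return [sum(abs(f - g) for g in features) / (n - 1) for f in features]

def top_down(features, target):
    """Top-down bias: locations matching the target feature score high."""
    return [1.0 / (1.0 + abs(f - target)) for f in features]

def gate(features, target, w_bu=0.5, w_td=0.5):
    """Attentional gate controlling flow to the next level of the hierarchy."""
    bu, td = bottom_up(features), top_down(features, target)
    return [w_bu * b + w_td * t for b, t in zip(bu, td)]
```

Setting `w_td` high and `w_bu` low mimics a strong top-down search instruction; the reverse mimics pure pop-out.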

Speech Recognition by Integrating Audio, Visual and Contextual Features Based on Neural Networks (신경망 기반 음성, 영상 및 문맥 통합 음성인식)

  • 김명원;한문성;이순신;류정우
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.3
    • /
    • pp.67-77
    • /
    • 2004
  • Recent research has focused on the fusion of audio and visual features for reliable speech recognition in noisy environments. In this paper, we propose a neural network based model of robust speech recognition that integrates audio, visual, and contextual information. The Bimodal Neural Network (BMNN) is a multi-layer perceptron with four layers, each of which performs a certain level of abstraction of the input features. In the BMNN, the third layer combines audio and visual features of speech to compensate for the loss of audio information caused by noise. In order to improve the accuracy of speech recognition in noisy environments, we also propose a post-processing step based on contextual information, namely the sequential patterns of words spoken by a user. Our experimental results show that our model outperforms any single-mode model. In particular, when the contextual information is used, we obtain over 90% recognition accuracy even in noisy environments, a significant improvement over the state of the art in speech recognition. Our research demonstrates that diverse sources of information need to be integrated to improve the accuracy of speech recognition, particularly in noisy environments.
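
The contextual post-processing step described above can be sketched as rescoring the network's word candidates with a word-sequence (here, bigram) probability and picking the best combination. This is a minimal sketch of the idea under simplifying assumptions; the paper's actual sequential-pattern model, the weighting, and all names are illustrative.

```python
def rescore(candidates, prev_word, bigram, alpha=0.5):
    """Pick the best word by combining each candidate's network score
    with a contextual probability for following prev_word.

    candidates: list of (word, network_score) pairs.
    bigram: dict mapping (prev_word, word) -> probability.
    """
    def combined(word, net_score):
        return (1 - alpha) * net_score + alpha * bigram.get((prev_word, word), 0.0)
    return max(candidates, key=lambda c: combined(c[0], c[1]))[0]
```

With `alpha > 0` a slightly lower-scoring candidate that fits the word sequence can overtake the acoustic-visual favorite, which is how context recovers errors made in noise.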

An Image Processing Algorithm for a Visual Weld Defects Detection on Weld Joint in Steel Structure (강구조물 용접이음부 외부결함의 자동검출 알고리즘)

  • Seo, Won Chan;Lee, Dong Uk
    • Journal of Korean Society of Steel Construction
    • /
    • v.11 no.1 s.38
    • /
    • pp.1-11
    • /
    • 1999
  • The aim of this study is to construct a machine vision monitoring system for the automatic visual inspection of weld joints in steel structures. An image processing algorithm for detecting visual weld defects on the weld bead is developed using intensity images. An optical system for acquiring four intensity images was set up with a fixed camera position and four different illumination directions. The input images were thresholded and segmented after suitable preprocessing, and the features of each region were defined and calculated. The features were used in the detection and classification of visual weld defects. It is confirmed that the developed algorithm can detect weld defects that could not be detected by previously developed techniques. The recognition results were evaluated and compared to expert inspectors' results.

  • PDF
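
The threshold-then-segment step described in the abstract can be sketched as binarizing the intensity image and labeling connected components, whose per-region features (e.g. area) would then feed defect classification. A minimal sketch under simplifying assumptions; the threshold value, 4-connectivity, and names are illustrative, not the paper's choices.

```python
def threshold(image, t):
    """Binarize a grayscale image given as a list of rows."""
    return [[1 if p > t else 0 for p in row] for row in image]

def regions(binary):
    """4-connected component labeling via flood fill.
    Returns a list of regions, each a list of (row, col) pixels."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    out = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                stack, region = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                out.append(region)
    return out
```

Region area is then simply `len(region)`, and features like bounding box or elongation follow from the pixel coordinates.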