• Title/Summary/Keyword: Local histogram information

Search Results: 179

ELA: Real-time Obstacle Avoidance for Autonomous Navigation of Variable Configuration Rescue Robots (ELA: 가변 형상 구조로봇의 자율주행을 위한 실시간 장애물 회피 기법)

  • Jeong, Hae-Kwan;Hyun, Kyung-Hak;Kim, Soo-Hyun;Kwak, Yoon-Keun
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.3
    • /
    • pp.186-193
    • /
    • 2008
  • We propose a novel real-time obstacle avoidance method for rescue robots. The method, named ELA (Emergency Level Around), detects unknown obstacles and avoids collisions while simultaneously steering the mobile robot toward a safe position. The ELA employs two sensor modules: PSD (Position Sensitive Detector) infrared sensors in charge of short-range obstacle detection, and an LMS (Laser Measurement System) for long range. If the robot first recognizes an obstacle ahead via the PSD infrared sensors and the driving-mode decision process judges the obstacle impossible to overcome, priority is transferred to the LMS, which collects radial distance data centered on the robot in order to avoid the confronted obstacle. After gathering the radial information, the ELA algorithm estimates the emergency level around the robot and generates a polar histogram from it to judge where the optimal free space lies. Finally, the steering angle is determined so as to guarantee rotation in an arbitrary direction as well as clearance for the robot's width during avoidance. Simulation results from wandering in a closed local area containing various obstacles under different conditions demonstrate the power of the ELA.

  • PDF
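The polar-histogram step of the ELA can be illustrated with a toy sketch: given one emergency level per angular sector, pick the window of consecutive sectors (wide enough for the robot) with the lowest total emergency and steer to its center. The sector layout, function name, and values below are illustrative assumptions, not the paper's implementation.

```python
def choose_steering_angle(emergency, robot_sectors=3):
    """Pick the center of the safest angular window in a polar
    emergency-level histogram (one value per angular sector).

    `emergency` holds non-negative emergency levels, index i covering
    sector i over the 180 degrees ahead of the robot; `robot_sectors`
    is how many consecutive sectors the robot's width needs to pass.
    """
    n = len(emergency)
    best_start, best_cost = 0, float("inf")
    for start in range(n - robot_sectors + 1):
        cost = sum(emergency[start:start + robot_sectors])  # total danger
        if cost < best_cost:
            best_start, best_cost = start, cost
    sector_width = 180.0 / n
    center = best_start + (robot_sectors - 1) / 2.0
    return (center + 0.5) * sector_width  # degrees, 0 = far left

# high emergency on the left (a wall), free space on the right
levels = [9, 8, 7, 6, 2, 1, 1, 2, 6]
angle = choose_steering_angle(levels, robot_sectors=3)  # → 110.0 degrees
```

The real ELA additionally switches between PSD and LMS sensing and checks driving modes; this sketch covers only the free-space selection.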

Reconstruction from Feature Points of Face through Fuzzy C-Means Clustering Algorithm with Gabor Wavelets (FCM 군집화 알고리즘에 의한 얼굴의 특징점에서 Gabor 웨이브렛을 이용한 복원)

  • 신영숙;이수용;이일병;정찬섭
    • Korean Journal of Cognitive Science
    • /
    • v.11 no.2
    • /
    • pp.53-58
    • /
    • 2000
  • This paper reconstructs local regions of a facial expression image from feature points extracted using the FCM (Fuzzy C-Means) clustering algorithm with Gabor wavelets. Feature extraction proceeds in two steps. In the first step, we extract the edges of the main facial components using the average value of the 2-D Gabor wavelet coefficient histogram of the image; in the second step, we extract the final feature points from the extracted edge information using the FCM clustering algorithm. This study shows that the principal components of facial expression images can be reconstructed from only a few feature points obtained by the FCM clustering algorithm. The method can also be applied to object recognition as well as facial expression recognition.

  • PDF
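The FCM step can be sketched in one dimension: alternate the standard fuzzy membership update and the weighted center update until the centers settle. The initialization scheme and toy data below are assumptions for illustration; the paper applies FCM to edge information from Gabor coefficients, not to raw scalars.

```python
def fcm_1d(points, c=2, m=2.0, iters=50):
    """One-dimensional Fuzzy C-Means. Returns cluster centers and the
    membership matrix u[i][k] of point k in cluster i (fuzzifier m)."""
    srt = sorted(points)
    # spread the initial centers across the data range
    centers = [srt[round(i * (len(srt) - 1) / (c - 1))] for i in range(c)]
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = [[0.0] * len(points) for _ in range(c)]
        for k, x in enumerate(points):
            d = [abs(x - v) or 1e-12 for v in centers]  # avoid divide-by-zero
            for i in range(c):
                u[i][k] = 1.0 / sum((d[i] / dj) ** (2.0 / (m - 1.0)) for dj in d)
        # center update: v_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = [sum(u[i][k] ** m * x for k, x in enumerate(points))
                   / sum(u[i][k] ** m for k in range(len(points)))
                   for i in range(c)]
    return centers, u

pts = [0.0, 0.1, 0.2, 9.8, 9.9, 10.0]
centers, u = fcm_1d(pts)  # centers converge near 0.1 and 9.9
```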

On-Road Succeeding Vehicle Detection using Characteristic Visual Features (시각적 특징들을 이용한 도로 상의 후방 추종 차량 인식)

  • Adhikari, Shyam Prasad;Cho, Hi-Tek;Yoo, Hyeon-Joong;Yang, Chang-Ju;Kim, Hyong-Suk
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.3
    • /
    • pp.636-644
    • /
    • 2010
  • A method for the detection of on-road succeeding vehicles using visual characteristic features such as horizontal edges, shadow, symmetry, and intensity is proposed. The proposed method uses the prominent horizontal edges along with the shadow under the vehicle to generate an initial estimate of the vehicle-road surface contact. Fast symmetry detection, utilizing the edge pixels, is then performed to detect the presence of a vertically symmetric object, possibly a vehicle, in the region above the initially estimated vehicle-road surface contact. A window defined by the horizontal and vertical lines obtained above, together with local perspective information, provides a narrow region for the final search for the vehicle. A bounding box around the vehicle is extracted from the horizontal edges, the symmetry histogram, and a proposed squared-difference-of-intensity measure. Experiments performed on natural traffic scenes, obtained from a camera mounted on the side-view mirror of a host vehicle, demonstrate the good and reliable performance of the proposed method.
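The symmetry-histogram idea above can be sketched on a binary edge map: every pair of edge pixels in the same row votes for the column midway between them, and the column with the most votes is the candidate vertical symmetry axis. The function name and toy edge map are illustrative assumptions.

```python
from collections import defaultdict

def symmetry_histogram(edge_points, width):
    """Score every column as a candidate vertical symmetry axis by
    counting edge-pixel pairs mirrored about it within the same row.

    `edge_points` is an iterable of (x, y) edge coordinates."""
    rows = defaultdict(set)
    for x, y in edge_points:
        rows[y].add(x)
    hist = [0] * width
    for xs in rows.values():
        xs = sorted(xs)
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                axis2 = xs[i] + xs[j]      # twice the midpoint column
                if axis2 % 2 == 0 and axis2 // 2 < width:
                    hist[axis2 // 2] += 1  # vote for the midpoint
    return hist

# the two vertical sides of a box, symmetric about column 5
edges = [(2, y) for y in range(4)] + [(8, y) for y in range(4)]
hist = symmetry_histogram(edges, width=10)
best_axis = hist.index(max(hist))  # → 5
```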

Caption Detection and Recognition for Video Image Information Retrieval (비디오 영상 정보 검색을 위한 문자 추출 및 인식)

  • 구건서
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.7
    • /
    • pp.901-914
    • /
    • 2002
  • In this paper, we propose an efficient automatic caption detection and localization method, with caption recognition using an FE-MCBP (Feature Extraction based Multichained BackPropagation) neural network, for content-based retrieval of video. Frames are sampled from the video at a fixed time interval, and key frames are selected by a gray-scale histogram method. For each key frame, segmentation is performed and caption lines are detected using a line-scan method; lastly, individual characters are separated. This research improves speed and efficiency through color segmentation using a local-maximum analysis method before line scanning. Caption detection is the first stage of multimedia database organization, and the detected captions are used as input to a text recognition system. Recognized captions can then be searched by content-based retrieval methods.

  • PDF
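The gray-scale histogram key-frame step can be sketched as follows: keep a frame whenever its histogram differs enough from the last kept frame's. The L1 distance, bin count, and threshold below are assumptions; the paper does not specify its exact distance measure.

```python
def histogram(frame, bins=8, max_val=256):
    """Gray-level histogram of a frame given as a flat list of pixels."""
    h = [0] * bins
    for p in frame:
        h[p * bins // max_val] += 1
    return h

def key_frames(frames, threshold):
    """Keep frame 0, then every frame whose L1 histogram distance from
    the last kept frame exceeds `threshold` (a shot-change heuristic)."""
    keys = [0]
    last = histogram(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = histogram(f)
        if sum(abs(a - b) for a, b in zip(h, last)) > threshold:
            keys.append(i)
            last = h
    return keys

dark, bright = [10] * 16, [200] * 16
frames = [dark, dark, bright, bright, dark]
keys = key_frames(frames, threshold=8)  # → [0, 2, 4]
```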

Object Tracking with Sparse Representation based on HOG and LBP Features

  • Boragule, Abhijeet;Yeo, JungYeon;Lee, GueeSang
    • International Journal of Contents
    • /
    • v.11 no.3
    • /
    • pp.47-53
    • /
    • 2015
  • Visual object tracking is a fundamental problem in the field of computer vision, as it needs a proper model to account for drastic appearance changes caused by shape, textural, and illumination variations. In this paper, we propose a feature-based visual-object-tracking method with a sparse representation. Most appearance-based models use the gray-scale pixel values of the input image, but this can be insufficient to describe the target object under a variety of conditions. To obtain the proper information regarding the target object, the following combination of features is exploited as its representation: first, features of the target templates are extracted using HOG (histogram of oriented gradients) and LBPs (local binary patterns); second, feature-based sparsity is attained by solving minimization problems, whereby the target object is represented by selecting the candidate with the minimum reconstruction error. The strengths of both features are exploited to enhance the overall performance of the tracker; furthermore, the proposed method is integrated with the particle-filter framework and achieves promising results on challenging tracking videos.
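The LBP half of the feature pair is simple enough to sketch directly: threshold the eight neighbours of a pixel against its centre and read the results as an 8-bit code (the paper's OCS variant additionally uses gradient orientation, which is not shown here).

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours
    against the centre and read them as a clockwise 8-bit code."""
    c = patch[1][1]
    # clockwise starting from the top-left neighbour
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for n in neighbours:
        code = (code << 1) | (1 if n >= c else 0)
    return code

patch = [[6, 5, 2],
         [7, 5, 1],
         [9, 8, 3]]
code = lbp_code(patch)  # → 199 (binary 11000111)
```

A histogram of such codes over a template region is what feeds the sparse-representation stage.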

Adaptive image enhancement technique considering visual perception property in digital chest radiography (시각특성을 고려한 디지털 흉부 X-선 영상의 적응적 향상기법)

  • 김종효;이충웅;민병구;한만청
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.8
    • /
    • pp.160-171
    • /
    • 1994
  • The wide dynamic range and severely attenuated contrast in the mediastinal area of typical chest radiographs have often caused difficulties in the effective visualization and diagnosis of lung diseases. This paper proposes a new adaptive image enhancement technique that addresses this problem and thereby improves observer performance through image processing. In the proposed method, image processing is applied to the chest radiograph with different processing parameters for the lung field and the mediastinum, since these two regions differ greatly in their anatomical and imaging properties. To achieve this, the chest radiograph is divided into lung and mediastinum by gray-level thresholding using the cumulative histogram, and dynamic range compression and local contrast enhancement are carried out selectively in the mediastinal region. Thereafter, a gray-scale transformation is performed considering the JND (just noticeable difference) characteristic for effective image display. The processed images showed apparently improved contrast in the mediastinum, maintained moderate brightness in the lung field, and exhibited no observable artifacts. In a visibility evaluation experiment with five radiologists, the processed images showed better visibility for five important anatomical structures in the thorax.

  • PDF
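The cumulative-histogram thresholding step can be sketched as picking the gray level below which a chosen fraction of pixels fall. The fraction and the toy pixel values are assumptions; the paper derives its split from the actual radiograph statistics.

```python
def threshold_by_cumulative_histogram(pixels, fraction, levels=256):
    """Gray level at which the cumulative histogram first reaches
    `fraction` of all pixels (used to split lung field / mediastinum)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    target = fraction * len(pixels)
    cum = 0
    for g, count in enumerate(hist):
        cum += count
        if cum >= target:
            return g
    return levels - 1

# toy radiograph: many dark lung-field pixels, fewer bright mediastinal ones
pixels = [20] * 70 + [30] * 10 + [200] * 20
t = threshold_by_cumulative_histogram(pixels, fraction=0.8)  # → 30
```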

Auto Gain/offset Based on Visibility of Spatial JND (공간 JND의 가시성 기반 자동 게인옵셋)

  • Kim, Mi-Hye;Jang, Ick-Hoon;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.4
    • /
    • pp.16-22
    • /
    • 2009
  • In this paper, we propose an auto gain/offset that jointly considers the visibility of the human visual system (HVS) and the histogram of a target image. In the proposed method, the lower and upper clipping thresholds are determined to maximize the averaged visibility of the contrast-stretched image. The target image is then contrast-stretched by the gain and offset derived from the clipping thresholds. We define visibility as a quantity related to the spatial JND, the threshold below which any change of a pixel from its textured neighbors is not recognized by the HVS. Experimental results show that images contrast-stretched by the proposed method have better global and local contrast than the results of some conventional methods.
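Once the clipping thresholds are chosen, the gain/offset stretch itself is a linear map with saturation, which can be sketched as follows (the threshold values here are placeholders; the paper selects them via the visibility criterion):

```python
def gain_offset(low, high, out_max=255):
    """Gain and offset of the linear stretch mapping [low, high] to
    [0, out_max]; values outside the clipping thresholds saturate."""
    gain = out_max / (high - low)
    offset = -gain * low
    return gain, offset

def stretch(pixels, low, high, out_max=255):
    gain, offset = gain_offset(low, high, out_max)
    # apply the linear map, then clamp to the output range
    return [min(out_max, max(0, round(gain * p + offset))) for p in pixels]

out = stretch([10, 60, 110, 160, 210], low=60, high=160)
# 10 and 60 clip to 0; 160 and 210 clip to 255; 110 lands mid-range
```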

Query-by-emotion sketch for local emotion-based image retrieval (지역 감성기반 영상 검색을 위한 감성 스케치 질의)

  • Lee, Kyoung-Mi
    • Journal of Internet Computing and Services
    • /
    • v.10 no.6
    • /
    • pp.113-121
    • /
    • 2009
  • In order to retrieve images with different emotions in different regions of the images, this paper proposes an image retrieval system using emotion sketch. The proposed retrieval system divides an image into 17×17 sub-regions and extracts emotion features in each sub-region. To extract the emotion features, this paper uses the emotion colors of 160 emotion words from H. Nagumo's color scheme imaging chart. We calculate a histogram of each sub-region and take the emotion word with the maximal count as the representative emotion word of that sub-region. The system demonstrates the effectiveness of the proposed emotion sketch, and our experimental results show that the system retrieves successfully on the Corel image database.

  • PDF
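The per-sub-region labelling can be sketched as a nearest-palette-colour histogram whose argmax becomes the region's emotion word. The two-word palette, colour values, and squared-distance matching below are toy assumptions standing in for Nagumo's 160-word chart.

```python
from collections import Counter

def dominant_emotion(sub_region, palette):
    """Map each pixel colour to its nearest palette emotion word and
    return the word with the maximal histogram count in the region.

    `palette` maps an emotion word to a representative (r, g, b)."""
    def nearest(px):
        return min(palette,
                   key=lambda w: sum((a - b) ** 2
                                     for a, b in zip(px, palette[w])))
    counts = Counter(nearest(px) for px in sub_region)
    return counts.most_common(1)[0][0]

palette = {"calm": (80, 120, 200), "passion": (220, 40, 40)}
region = [(70, 110, 190), (90, 130, 210), (230, 50, 30)]
word = dominant_emotion(region, palette)  # → "calm"
```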

Vision-based Walking Guidance System Using Top-view Transform and Beam-ray Model (탑-뷰 변환과 빔-레이 모델을 이용한 영상기반 보행 안내 시스템)

  • Lin, Qing;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.12
    • /
    • pp.93-102
    • /
    • 2011
  • This paper presents a walking guidance system for blind pedestrians in an outdoor environment using just a single camera. Unlike many existing travel-aid systems that rely on stereo vision, the proposed system obtains the necessary information about the road environment from a single camera fixed at the belly of the user. To achieve this goal, a top-view image of the road is used, on which obstacles are detected by first extracting local extreme points and then verifying them with a polar edge histogram. Meanwhile, user motion is estimated using optical flow in an area close to the user. Based on this information extracted from the image domain, an audio message generation scheme is proposed to deliver guidance instructions via synthetic voice to the blind user. Experiments with several sidewalk video clips show that the proposed walking guidance system is able to provide useful guidance instructions in typical sidewalk environments.
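The last stage, turning the detected obstacle bearing and the flow-based motion estimate into a spoken instruction, can be sketched as below. The message wording, angle convention, and drift threshold are all illustrative assumptions; the paper does not publish its exact message set.

```python
def motion_direction(flow_vectors):
    """Mean optical-flow displacement near the user: a crude estimate
    of walking drift (dx > 0 means drifting right)."""
    n = len(flow_vectors)
    dx = sum(v[0] for v in flow_vectors) / n
    dy = sum(v[1] for v in flow_vectors) / n
    return dx, dy

def guidance_message(obstacle_angle_deg, drift_dx):
    """Turn an obstacle bearing (0 = straight ahead, negative = left;
    None = no obstacle) and lateral drift into a voice instruction."""
    if obstacle_angle_deg is None:
        return "path clear, keep walking"
    side = "right" if obstacle_angle_deg >= 0 else "left"
    correction = "veer left" if obstacle_angle_deg >= 0 else "veer right"
    if abs(drift_dx) > 1.0:
        correction += " and straighten your course"
    return f"obstacle ahead on the {side}, {correction}"

dx, dy = motion_direction([(0.1, 0.0), (0.3, 0.1)])
msg = guidance_message(15.0, drift_dx=dx)
# → "obstacle ahead on the right, veer left"
```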

Medical Image Classification and Retrieval Using BoF Feature Histogram with Random Forest Classifier (Random Forest 분류기와 Bag-of-Feature 특징 히스토그램을 이용한 의료영상 자동 분류 및 검색)

  • Son, Jung Eun;Ko, Byoung Chul;Nam, Jae Yeal
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.4
    • /
    • pp.273-280
    • /
    • 2013
  • This paper presents a novel OCS-LBP (Oriented Center-Symmetric Local Binary Pattern) descriptor based on the orientation of the pixel gradient, together with an image retrieval system based on BoF (Bag-of-Features) and a random forest classifier. Feature vectors extracted from the training data are clustered into a code book, and each feature is transformed into a new BoF feature using the code book. The BoF features are used to train a random forest with N classes, constructed by combining several decision trees. For testing, the same OCS-LBP feature is extracted from a query image and its BoF representation is applied to the trained random forest classifier. In contrast to conventional retrieval systems, the query image first selects its K-nearest-neighbor (K-NN) most similar classes from the random forest output. The Top-K similar images are then retrieved only from the database images labeled with those K-NN classes. Compared with other retrieval algorithms, the proposed method shows both fast processing time and improved retrieval performance.
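The BoF transformation described above can be sketched with a toy codebook: assign each local descriptor to its nearest code word and normalise the resulting count vector. The 2-D descriptors and three-word codebook below are illustrative assumptions; the paper clusters OCS-LBP features into a much larger code book.

```python
def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codebook word and
    return the normalised bag-of-features histogram."""
    def nearest(d):
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(d, codebook[i])))
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest(d)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]  # normalise to sum to 1

codebook = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
descs = [(0.1, 0.0), (0.9, 1.1), (5.2, 4.9), (4.8, 5.1)]
hist = bof_histogram(descs, codebook)  # → [0.25, 0.25, 0.5]
```

This histogram is what the random forest consumes, both at training time and for queries.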