• Title/Abstract/Keyword: Invariant image


SIFT를 이용한 내시경 영상에서의 특징점 추출 (Feature Extraction for Endoscopic Image by using the Scale Invariant Feature Transform(SIFT))

  • 오장석;김호철;김형률;구자민;김민기
    • 대한전기학회:학술대회논문집 / 대한전기학회 2005년도 학술대회 논문집 정보 및 제어부문 / pp.6-8 / 2005
  • Studies that use geometrical information in computer vision are active, but the matching problem must be solved first, and reliable matching requires well-extracted feature points. Many feature extraction methods have been studied, yet no single algorithm applies to all classes of images, which makes this a difficult problem. In particular, it is not easy to find feature points in endoscopic images: even to the human eye it is hard to decide which points should be treated as features, and matching accuracy can only be assessed once a sufficient number of feature points is available and distributed over the whole image. This paper studies an algorithm that can be applied to endoscopic images. The SIFT method shows excellent performance compared with alternatives (e.g., the affine-invariant point detector) on general images, but the SIFT parameters used for general images cannot be applied directly to endoscopic images. The goal of this paper is to extract feature points from endoscopic images by controlling the contrast threshold and the curvature threshold among the SIFT parameters. The experimental results show that, by controlling these parameters, the extracted feature points achieve a better spatial distribution and a more controllable count than with conventional alternatives.

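The threshold tuning described in the abstract above can be sketched with OpenCV's SIFT implementation, whose `contrastThreshold` and `edgeThreshold` parameters correspond to the contrast and curvature thresholds mentioned there; the file name and the threshold values below are illustrative assumptions, not the authors' settings.

```python
import cv2

# Load an endoscopic frame in grayscale; the file name is only a placeholder.
img = cv2.imread("endoscope_frame.png", cv2.IMREAD_GRAYSCALE)

# Lowering contrastThreshold keeps weaker DoG extrema, and raising edgeThreshold
# admits more high-curvature (edge-like) responses, so both increase the number
# of keypoints found on low-contrast endoscopic tissue.
sift = cv2.SIFT_create(contrastThreshold=0.02, edgeThreshold=20)
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"{len(keypoints)} keypoints extracted")
```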

Moving Vehicle Segmentation from Plane Constraint

  • Kang, Dong-Joong;Ha, Jong-Eun;Kim, Jin-Young;Kim, Min-Sung;Lho, Tae-Jung
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp.2393-2396 / 2005
  • We present a method to detect on-road vehicles using a geometric invariant of feature points on the side planes of a vehicle. The vehicle is modeled as a set of planes, and each plane is segmented from the motion information of the features on it, based on the fact that a geometric invariant value defined by five coplanar points is preserved under a projective transform. Harris corners are used as salient image points, and their motion information is obtained by normalized correlation centered at these points. We define a probabilistic criterion to test the similarity of invariant values between consecutive frames. Experimental results on images of real road scenes are presented.

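The five-point projective invariant the abstract above relies on can be written down directly: with homogeneous image points p1..p5 and |p_a p_b p_c| the determinant of the 3x3 matrix of those points, the ratio below is unchanged by any projective transform of the plane. The sketch uses NumPy; the point ordering and the equality check are simplified stand-ins for the paper's probabilistic similarity criterion.

```python
import numpy as np

def _det(p, a, b, c):
    # Determinant of the 3x3 matrix whose columns are homogeneous points a, b, c.
    return np.linalg.det(np.column_stack((p[a], p[b], p[c])))

def five_point_invariant(points):
    """Projective invariant of five coplanar points (5x2 array of pixel coords).

    Under a projective transform H, each determinant scales by det(H) and the
    points' homogeneous scale factors, and those factors cancel in the ratio.
    """
    p = np.hstack((np.asarray(points, float), np.ones((5, 1))))
    return (_det(p, 3, 2, 0) * _det(p, 4, 1, 0)) / (_det(p, 3, 1, 0) * _det(p, 4, 2, 0))

# Example: the invariant is preserved when the five points are mapped by a homography.
pts = np.array([[0, 0], [4, 1], [1, 5], [6, 6], [2, 3]], float)
H = np.array([[1.2, 0.1, 5.0], [0.05, 0.9, -3.0], [1e-3, 2e-3, 1.0]])
ph = (H @ np.hstack((pts, np.ones((5, 1)))).T).T
pts_t = ph[:, :2] / ph[:, 2:]
print(five_point_invariant(pts), five_point_invariant(pts_t))  # ~equal values
```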

Scale Invariant Auto-context for Object Segmentation and Labeling

  • Ji, Hongwei;He, Jiangping;Yang, Xin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 8 / pp.2881-2894 / 2014
  • In complicated environments, context information plays an important role in image segmentation/labeling. The recently proposed auto-context algorithm is one of the effective context-based methods. However, the standard auto-context approach samples context locations using a fixed radius sequence, which makes it sensitive to large scale changes of objects. In this paper, we present a scale-invariant auto-context (SIAC) algorithm, an improved version of auto-context. To achieve scale invariance, we iteratively approximate the optimal scale for the image and adopt the corresponding optimal radius sequence for context-location sampling, both in training and in testing. In each iteration of the proposed SIAC algorithm, the current classification map is used to estimate the image scale, and the corresponding radius sequence is then used for choosing context locations. The algorithm iteratively updates the classification maps, as well as the image scales, until convergence. We demonstrate SIAC on several image segmentation/labeling tasks; the results show an improvement over the standard auto-context algorithm when objects exhibit large scale changes.
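As a rough illustration of the scale-adaptation step described above, the sketch below rescales a base context-sampling radius sequence by a scale estimated from the current classification map. The estimator used here (square root of the foreground area relative to a reference area) is an assumption for illustration, not the formulation used in the paper.

```python
import numpy as np

def rescale_radii(prob_map, base_radii=(1, 3, 7, 15, 31),
                  reference_area=1000.0, threshold=0.5):
    # Estimate the object scale from the current classification map: the square
    # root of the foreground area relative to a nominal reference area.
    foreground_area = float(np.count_nonzero(prob_map > threshold))
    scale = np.sqrt(max(foreground_area, 1.0) / reference_area)
    # Rescale the context-sampling radii accordingly (at least one pixel each).
    return [max(1, int(round(r * scale))) for r in base_radii]

# Example: a coarse probability map containing a 40x40 foreground blob.
prob_map = np.zeros((128, 128))
prob_map[40:80, 40:80] = 0.9
print(rescale_radii(prob_map))  # radii grow as the estimated object scale grows
```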

투사영상 불변량을 이용한 장애물 검지 및 자기 위치 인식 (Obstacle Detection and Self-Localization without Camera Calibration using Projective Invariants)

  • 노경식;이왕헌;이준웅;권인소
    • 제어로봇시스템학회논문지 / Vol. 5, No. 2 / pp.228-236 / 1999
  • In this paper, we propose vision-based self-localization and obstacle detection algorithms for indoor mobile robots. The algorithms require no camera calibration and work with a single image by using the projective invariant relationship between natural landmarks. We predefine an obstacle-free risk zone for the robot and keep an updated, averaged image of that zone; obstacles inside the zone are detected by comparing this averaged image with the current image of the new risk zone. The positions of the robot and the obstacles are then determined by relative positioning, and the method requires no prior information for positioning the robot. The robustness and feasibility of our algorithms are demonstrated through experiments in hallway environments.

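The risk-zone comparison in the abstract above can be sketched as a running average of the obstacle-free zone that each new frame is checked against. The zone mask, update rate, and threshold below are illustrative assumptions, and the paper's projective-invariant zone prediction is not reproduced here.

```python
import numpy as np

class RiskZoneMonitor:
    """Keeps an averaged image of a predefined risk zone and flags obstacles
    when the current frame deviates from it inside the zone."""

    def __init__(self, zone_mask, alpha=0.05, diff_threshold=30.0):
        self.zone_mask = zone_mask.astype(bool)   # predefined, obstacle-free zone
        self.alpha = alpha                        # running-average update rate
        self.diff_threshold = diff_threshold      # mean absolute-difference limit
        self.avg = None

    def update(self, frame_gray):
        frame = frame_gray.astype(np.float32)
        if self.avg is None:
            self.avg = frame.copy()
        diff = np.abs(frame - self.avg)[self.zone_mask]
        obstacle = diff.mean() > self.diff_threshold
        if not obstacle:
            # Only absorb obstacle-free views into the averaged zone image.
            self.avg = (1.0 - self.alpha) * self.avg + self.alpha * frame
        return obstacle
```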

A Study on the Automatic Inspection System using Invariant Moments Algorithm with the Change of Size and Rotation

  • 이용중
    • 한국공작기계학회:학술대회논문집 / 한국공작기계학회 2003년도 추계학술대회 / pp.164-169 / 2003
  • The purpose of this study is to develop a practical image inspection system that recognizes a workpiece correctly even when it is resized and rotated, giving flexibility to the production field. In the experiment, a fighter-aircraft shape was selected as the inspection object, with its size reduced from 1/4 to 1/16 while simultaneously rotating it from 30$^{\circ}$ to 45$^{\circ}$, and no dedicated image-processing hardware was used. The invariant moments suggested by Hu were used as the feature-vector moment descriptor. As a result, the image inspection system developed in this research operated in real time regardless of changes in the size and rotation of the inspected object, and it steadily maintained recognition rates between 94% and 96%. Accordingly, considerable flexibility can be added to factory automation when the developed image inspection system is applied to the production field.

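Hu's seven invariant moments, which the study above uses as its feature descriptor, are available directly in OpenCV; the sketch below turns them into a log-scaled feature vector and compares a test part against a template. The tolerance value is an illustrative assumption.

```python
import cv2
import numpy as np

def hu_feature(binary_image):
    # Hu's seven moments are invariant to translation, scale, and rotation;
    # the signed log compresses their large dynamic range.
    m = cv2.moments(binary_image, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def inspect(template_bin, test_bin, tol=0.5):
    # Accept the part when its Hu-moment vector is close to the template's.
    return np.linalg.norm(hu_feature(template_bin) - hu_feature(test_bin)) < tol
```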

원통 모델과 스테레오 카메라를 이용한 포즈 변화에 강인한 얼굴인식 (Pose-invariant Face Recognition using Cylindrical Model and Stereo Camera)

  • 노진우;안병두;;고한석
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ / pp.2012-2015 / 2003
  • This paper proposes a pose-invariant face recognition method using a cylindrical model and a stereo camera. Two cases are considered: a single input image and a stereo input image. In the single-image case, the face's yaw pose is normalized using the cylindrical model; in the stereo case, the face's pitch pose is normalized using the cylindrical model with the pitch estimated from stereo geometry. In addition, since two images acquired at the same time are available, the overall recognition rate can be increased by decision-level fusion. Experiments confirmed that the recognition rate is improved by the proposed methods.

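The decision-level fusion mentioned at the end of the abstract above can be sketched as a weighted combination of the per-identity similarity scores produced from the two stereo views; equal weights and the score interface are illustrative assumptions.

```python
import numpy as np

def fuse_decisions(scores_left, scores_right, w_left=0.5, w_right=0.5):
    """Combine per-identity similarity scores from the two stereo views and
    return the index of the best-matching gallery identity."""
    fused = (w_left * np.asarray(scores_left, float)
             + w_right * np.asarray(scores_right, float))
    return int(np.argmax(fused)), fused

# Example: similarity scores over a gallery of four enrolled identities.
left = [0.62, 0.81, 0.40, 0.55]
right = [0.58, 0.77, 0.49, 0.60]
best, fused = fuse_decisions(left, right)
print(best, fused)  # identity 1 wins after fusion
```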

불변 특징 기반 파노라마 영상의 생성 (Construction of Panoramic Images Based on Invariant Features)

  • 김태우;유현중
    • 한국산학기술학회논문지 / Vol. 7, No. 6 / pp.1214-1218 / 2006
  • In this paper, we propose a method for improving the processing speed of panoramic image construction. The method builds panoramas based on invariant features, using image reduction and image edge information. By reducing the image and applying the feature descriptor only at edge locations, the number of feature points is reduced, which yields the speed improvement. In experiments on 640$\times$480 24-bit color images, the proposed method was 3.26$\sim$13.87% faster than the existing method.

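A rough sketch of the speed-up described above, using OpenCV: the image is reduced, and feature detection is restricted to edge pixels via a mask, which cuts the number of keypoints. SIFT stands in here for the invariant feature, and the scale factor and Canny thresholds are illustrative assumptions.

```python
import cv2

def edge_limited_features(image_bgr, shrink=0.5):
    # Reduce the image, then restrict keypoint detection to Canny edge pixels
    # via the mask argument, which reduces the number of feature points.
    small = cv2.resize(image_bgr, None, fx=shrink, fy=shrink)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, edges)
    return keypoints, descriptors
```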

Deep Convolutional Auto-encoder를 이용한 환경 변화에 강인한 장소 인식 (Condition-invariant Place Recognition Using Deep Convolutional Auto-encoder)

  • 오정현;이범희
    • 로봇학회논문지 / Vol. 14, No. 1 / pp.8-13 / 2019
  • Visual place recognition is a widely researched area in robotics, as it is one of the elemental requirements for autonomous navigation and simultaneous localization and mapping for mobile robots. However, place recognition in a changing environment is a challenging problem, since the same place looks different depending on the time, weather, and season. This paper presents a feature extraction method using a deep convolutional auto-encoder to recognize places under severe appearance changes. Given database and query image sequences from different environments, the convolutional auto-encoder is trained to predict the images of the desired environment. Training minimizes the loss between the predicted image and the desired image. After training, the encoding part of the network transforms an input image into a low-dimensional latent representation, which can be used as a condition-invariant feature for recognizing places in changing environments. Experiments were conducted to prove the effectiveness of the proposed method, and the results show that it outperforms existing methods.
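A minimal sketch of a convolutional auto-encoder of the kind described above, written in PyTorch (the framework choice and layer sizes are assumptions, not taken from the paper); the flattened encoder output serves as the condition-invariant place descriptor.

```python
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: RGB image -> low-dimensional latent feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: latent feature map -> prediction of the target-environment image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

    def descriptor(self, x):
        # Condition-invariant place descriptor taken from the encoder output.
        return torch.flatten(self.encoder(x), start_dim=1)

# Training minimizes the reconstruction loss between the prediction for a
# query-environment image and the corresponding desired-environment image:
#   loss = nn.MSELoss()(model(query_batch), target_batch)
```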

회전불변 객체 인식에 관한 연구 (On the Study of Rotation Invariant Object Recognition)

  • 엠디자한기르 앨롬;이효종
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2010년도 춘계학술발표대회 / pp.405-408 / 2010
  • This paper presents a new feature extraction technique based on the correlation coefficient and the Manhattan distance (MD) for recognizing rotated objects in an image, and also introduces a notion of intensity invariance. We extract global features of an image and convert a large image into a one-dimensional vector called the circular feature vector (CFV). A particular advantage of the proposed technique is that the extracted features remain the same even if the original image is rotated by any angle from 1 to 360 degrees. The technique is based on fuzzy sets, and the object is finally recognized using histogram matching, the correlation coefficient, and the Manhattan distance. The approach is easy to implement and was implemented in Matlab7 on Windows XP. The experimental results demonstrate that the proposed approach performs successfully on a variety of small- and large-scale rotated images.
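Since the exact definition of the circular feature vector is not given in the abstract above, the sketch below uses mean intensities over concentric rings about the image centre as an illustrative rotation-invariant stand-in, and compares two vectors with the correlation coefficient and the Manhattan distance as described.

```python
import numpy as np

def circular_feature_vector(gray, n_rings=32):
    # Mean intensity in concentric rings about the image centre; rotating the
    # image about its centre leaves these ring statistics essentially unchanged.
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    bins = np.minimum((r / r.max() * n_rings).astype(int), n_rings - 1)
    return np.array([gray[bins == b].mean() for b in range(n_rings)])

def compare(cfv_a, cfv_b):
    corr = np.corrcoef(cfv_a, cfv_b)[0, 1]      # correlation coefficient
    manhattan = np.abs(cfv_a - cfv_b).sum()     # Manhattan distance
    return corr, manhattan
```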

Hardware Accelerated Design on Bag of Words Classification Algorithm

  • Lee, Chang-yong;Lee, Ji-yong;Lee, Yong-hwan
    • Journal of Platform Technology / Vol. 6, No. 4 / pp.26-33 / 2018
  • In this paper, we propose an image retrieval algorithm for real-time processing and design it as hardware. The proposed method is based on the classification scheme of the BoW (Bag of Words) algorithm and performs image search using bit streams. K-fold cross-validation is used to verify the algorithm. The data are divided into seven classes of seven images each, for a total of 49 test images, and both accuracy and speed are measured. The image classification accuracy was 86.2% for the BoW algorithm and 83.7% for the proposed hardware-accelerated implementation, i.e., the BoW algorithm was 2.5 percentage points higher. The image retrieval processing time was 7.89 s for BoW and 1.55 s for our algorithm, making our algorithm 5.09 times faster. The algorithm is divided into a software part and a hardware part. The software part is written in C; the Scale Invariant Feature Transform (SIFT) algorithm extracts feature points that are invariant to size and rotation, and bit streams are generated from the extracted feature points. In the hardware part, the proposed image retrieval algorithm is written in Verilog HDL and designed and verified with an FPGA and Design Compiler. The generated bit streams are stored, a clustering step is performed, and the search and input image databases are generated and matched. With the proposed algorithm, user convenience and satisfaction can be improved in terms of speed when searching with the database-matching method that represents each object.
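The software side of the pipeline above can be sketched as follows: SIFT descriptors are quantised against a visual vocabulary, and a per-image signature is binarised into a bit stream and compared by Hamming distance. How the paper actually forms its bit streams is not specified here, so the binarisation rule and the vocabulary handling below are assumptions for illustration.

```python
import cv2
import numpy as np

def bow_bitstream(image_gray, vocabulary):
    """image_gray: single-channel image; vocabulary: (k, 128) array of visual
    words learned offline (e.g. by k-means over training SIFT descriptors)."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(image_gray, None)
    if desc is None:
        return np.zeros(len(vocabulary), dtype=np.uint8)
    # Assign every descriptor to its nearest visual word and build a histogram.
    dist = np.linalg.norm(desc[:, None, :] - vocabulary[None, :, :], axis=2)
    hist = np.bincount(dist.argmin(axis=1), minlength=len(vocabulary))
    # One bit per visual word: set if the word occurs in the image at all.
    return (hist > 0).astype(np.uint8)

def hamming_distance(bits_query, bits_db):
    # Smaller distance means a better match between query and database signature.
    return int(np.count_nonzero(bits_query != bits_db))
```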