• Title/Abstract/Keywords: Object Feature Information


A Novel Approach for Object Detection in Illuminated and Occluded Video Sequences Using Visual Information with Object Feature Estimation

  • Sharma, Kajal
    • IEIE Transactions on Smart Processing and Computing, Vol. 4, No. 2, pp. 110-114, 2015
  • This paper reports a novel object-detection technique for video sequences. The proposed algorithm detects objects in illuminated and occluded videos by using object features and a neural network, and consists of two functional modules: region-based object feature extraction and continuous detection of objects across video frames with region features. The scheme is proposed as an enhancement of Lowe's scale-invariant feature transform (SIFT) object-detection method and addresses the high computation time of feature generation in SIFT. The improvement is achieved by region-based classification of the features in the objects to be detected; an optimal neural-network-based feature reduction is presented to shrink the object region feature dataset, with winner-pixel estimation between frames of the video sequence. Simulation results show that the proposed scheme achieves better overall performance than other object-detection techniques, and its region-based feature detection is faster than other recent techniques.
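
As a concrete reference point for the SIFT baseline this scheme enhances, the sketch below extracts SIFT keypoints and matches them across consecutive frames with OpenCV; the paper's region-based classification and neural-network feature reduction are not reproduced here.

```python
# Minimal sketch of the SIFT baseline: detect keypoints in two consecutive
# frames and keep matches that pass Lowe's ratio test. This illustrates the
# costly feature-generation step the paper optimizes, not the paper's method.
import cv2

sift = cv2.SIFT_create()            # available in OpenCV >= 4.4
matcher = cv2.BFMatcher(cv2.NORM_L2)

def match_frames(prev_gray, curr_gray, ratio=0.75):
    """Return matched keypoint coordinate pairs between two grayscale frames."""
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        # Lowe's ratio test: keep a match only if it is clearly better
        # than the second-best candidate.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append((kp1[pair[0].queryIdx].pt, kp2[pair[0].trainIdx].pt))
    return good
```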

Study on a Robust Object Tracking Algorithm Based on Improved SURF Method with CamShift

  • Ahn, Hyochang;Shin, In-Kyoung
    • 한국컴퓨터정보학회논문지, Vol. 23, No. 1, pp. 41-48, 2018
  • Surveillance systems are now widely deployed, and one of their key technologies is recognizing and tracking objects. To track a moving object robustly and efficiently in a complex environment, it is necessary to extract feature points from the object of interest and track the object using those points. In this paper, we propose a method that tracks objects of interest in real time by eliminating unnecessary information from objects, generating feature-point descriptors from only the key feature points, and thereby reducing the computational complexity of object recognition. Experimental results show that the proposed method is faster and more robust than conventional methods and can accurately track objects in various environments.
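
The CamShift half of such a pipeline can be sketched as below; the SURF keypoint step (available as cv2.xfeatures2d.SURF_create() in opencv-contrib builds) is assumed to have produced the initial window, and the paper's descriptor pruning is not reproduced. The window and termination criteria are illustrative assumptions.

```python
# Sketch: track an object window with CamShift on a hue back-projection.
import cv2

def track(video_path, init_window):
    """init_window: (x, y, w, h) box around the object, e.g. from SURF matching."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    x, y, w, h = init_window
    # Build a hue histogram of the initial region of interest.
    hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = (x, y, w, h)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        rot_box, window = cv2.CamShift(backproj, window, crit)
        yield rot_box  # rotated box around the tracked object
```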

Comparative Study of Corner and Feature Extractors for Real-Time Object Recognition in Image Processing

  • Mohapatra, Arpita;Sarangi, Sunita;Patnaik, Srikanta;Sabut, Sukant
    • Journal of Information and Communication Convergence Engineering, Vol. 12, No. 4, pp. 263-270, 2014
  • Corner detection and feature extraction are essential to computer vision problems such as object recognition and tracking. Feature detectors such as the Scale Invariant Feature Transform (SIFT) yield high-quality features but are too computationally intensive for real-time applications. The Features from Accelerated Segment Test (FAST) detector computes features faster by extracting only corner information for recognizing an object. In this paper we analyze object-detection algorithms with respect to efficiency, quality, and robustness by comparing the characteristics of corner detectors and feature extractors. The simulation results show that, compared with the conventional SIFT algorithm, an object-recognition system based on the FAST corner detector runs faster with little performance degradation. The average time to find keypoints with SIFT was about 0.116 seconds for extracting 2169 keypoints; the average time to find corner points with FAST at threshold 30 was 0.651 seconds for detecting 1714 keypoints. Thus the FAST method detects corner points faster, with image quality adequate for object recognition.
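
The timing comparison can be reproduced in spirit with the sketch below; the test image path is an assumption, and absolute numbers will vary with hardware and image size.

```python
# Benchmark sketch: time SIFT keypoint extraction against FAST corner
# detection (threshold 30, as in the abstract) on a single grayscale image.
import time
import cv2

img = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)  # assumed test image

sift = cv2.SIFT_create()
t0 = time.perf_counter()
kp_sift = sift.detect(img, None)
t_sift = time.perf_counter() - t0

fast = cv2.FastFeatureDetector_create(threshold=30)
t0 = time.perf_counter()
kp_fast = fast.detect(img, None)
t_fast = time.perf_counter() - t0

print(f"SIFT: {len(kp_sift)} keypoints in {t_sift:.3f}s")
print(f"FAST: {len(kp_fast)} corners   in {t_fast:.3f}s")
```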

Feature Voting for Object Localization via Density Ratio Estimation

  • Wang, Liantao;Deng, Dong;Chen, Chunlei
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 13, No. 12, pp. 6009-6027, 2019
  • Support vector machine (SVM) classifiers have been widely used for object detection. These methods usually locate an object by finding the region with the maximal score in an image. With a bag-of-features representation, the SVM score of an image region can be written as the sum of the weights of the features inside it, so the search can be executed efficiently with strategies such as branch-and-bound. However, feature weights derived by optimizing region classification do not really reveal the category knowledge of a feature point, which can cause poor localization. In this paper, we represent a region of an image by a collection of local feature points and locate the object as the region with the maximum posterior probability of belonging to the object class. Based on Bayes' theorem and Naive-Bayes assumptions, the posterior probability is reformulated as a sum of feature scores, where each feature score takes the form of the logarithm of a probability ratio. Instead of estimating the numerator and denominator probabilities separately, we employ density-ratio estimation techniques directly, overcoming the above limitation. Experiments on a car dataset and the PASCAL VOC 2007 dataset validate the effectiveness of our method compared with the baselines. In addition, performance can be further improved by taking advantage of recently developed deep convolutional neural network features.
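
The reformulation described here follows from the standard Naive-Bayes derivation, sketched below in our own notation (not necessarily the paper's): R is a region with feature points f_1, ..., f_n, and c is the class label (1 for object, 0 for background).

```latex
% Bayes' theorem with the Naive-Bayes independence assumption gives
%   P(c=1 | R) \propto P(c=1) \prod_i p(f_i | c=1),
% so comparing the two classes and taking logs yields an additive score:
\[
  \operatorname{score}(R)
  = \log \frac{P(c{=}1 \mid R)}{P(c{=}0 \mid R)}
  = \sum_{i=1}^{n} \log \frac{p(f_i \mid c{=}1)}{p(f_i \mid c{=}0)}
  + \log \frac{P(c{=}1)}{P(c{=}0)}
\]
% Each feature contributes the log of a probability ratio, and that ratio
% is exactly what density-ratio estimation can estimate directly, without
% modeling the numerator and denominator densities separately.
```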

A New Feature-Based Visual SLAM Using Multi-Channel Dynamic Object Estimation

  • 박근형;조형기
    • 대한임베디드공학회논문지, Vol. 19, No. 1, pp. 65-71, 2024
  • An indirect visual SLAM takes raw image data and exploits geometric information such as keypoints and line edges. Various environmental changes can degrade SLAM performance, and the main problem is caused by dynamic objects, especially in highly crowded environments. In this paper, we propose a robust feature-based visual SLAM, built on ORB-SLAM, that uses multi-channel dynamic-object estimation. An optical-flow algorithm and a deep-learning-based object detector each estimate a different type of dynamic-object information. The proposed method combines the two and creates multi-channel dynamic masks, yielding information on both actually moving objects and potentially dynamic objects. Finally, dynamic objects covered by the masks are excluded in the feature-extraction stage. As a result, the proposed method obtains more precise camera poses. Its superiority over conventional ORB-SLAM was verified by experiments on the KITTI odometry dataset.
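
A simplified sketch of the multi-channel mask idea follows: one channel flags actually moving pixels via dense optical flow, another flags potentially dynamic objects via a detector's bounding boxes, and features are extracted only outside the combined mask. The detector, thresholds, and mask fusion here are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: combine an optical-flow motion mask with detection boxes into a
# dynamic mask, then extract ORB features only from the static area.
import cv2
import numpy as np

orb = cv2.ORB_create(2000)

def masked_features(prev_gray, curr_gray, detections, flow_thresh=2.0):
    """detections: list of (x, y, w, h) boxes from any object detector."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    moving = (mag > flow_thresh).astype(np.uint8)   # channel 1: moving now
    potential = np.zeros_like(moving)               # channel 2: could move
    for x, y, w, h in detections:
        potential[y:y+h, x:x+w] = 1
    dynamic = cv2.bitwise_or(moving, potential)
    static_mask = ((1 - dynamic) * 255).astype(np.uint8)  # keep static area
    return orb.detectAndCompute(curr_gray, static_mask)
```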

Implementation of an improved real-time object tracking algorithm using brightness feature information and color information of object

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • 한국컴퓨터정보학회논문지, Vol. 22, No. 5, pp. 21-28, 2017
  • As technology related to digital imaging equipment develops and spreads, digital imaging systems are used for many purposes across society. Real-time object tracking from digital image data is one of the core technologies required in fields such as security systems and robotics. Among existing object-tracking technologies, CamShift tracks an object using its color information. Recently, image data captured with infrared cameras has come into wide use to meet the varied demands on digital imaging equipment, but the existing CamShift method cannot track objects in image data that lacks color information. Our proposed tracking algorithm analyzes color when valid color information exists in the image data; otherwise, it generates brightness feature information and tracks the object with that. The brightness feature information is generated from the width-to-height ratios of the regions segmented by brightness. Experimental results show that our algorithm can track objects in real time not only in ordinary image data containing color information but also in image data captured by an infrared camera.
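
The color/brightness branching can be condensed into a sketch like the following. The saturation threshold is an assumption, and the paper's brightness feature (width-to-height ratios of brightness-segmented regions) is only approximated by a plain value-channel fallback for the back-projection.

```python
# Sketch: choose the back-projection source per frame. If the frame carries
# usable color (enough saturation), use the hue channel; otherwise fall back
# to the brightness (value) channel, e.g. for infrared footage.
import cv2

SAT_THRESH = 30  # assumed: mean saturation below this => treat as colorless

def backprojection_source(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    if hsv[:, :, 1].mean() > SAT_THRESH:
        return hsv[:, :, 0], (0, 180)   # hue channel: color tracking
    return hsv[:, :, 2], (0, 256)       # value channel: brightness fallback
```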

AANet: Adjacency auxiliary network for salient object detection

  • Li, Xialu;Cui, Ziguan;Gan, Zongliang;Tang, Guijin;Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 15, No. 10, pp. 3729-3749, 2021
  • Deep convolutional network-based salient object detection (SOD) has achieved impressive performance, but it remains challenging to make full use of the multi-scale information in the extracted features and to choose an appropriate method for fusing the feature maps. In this paper, we propose a new adjacency auxiliary network (AANet) based on multi-scale feature fusion for SOD. First, we design a parallel-connection feature enhancement module (PFEM) for each feature-extraction layer, which improves feature density by connecting dilated-convolution branches with different rates in parallel and adds a channel-attention flow to fully extract contextual information. Then, adjacent-layer features with similar levels of abstraction but different characteristics are fused through the adjacency auxiliary module (AAM) to eliminate ambiguity and noise in the features. In addition, to refine the features effectively and obtain more accurate object boundaries, we design an adjacency decoder (AAM_D) based on the AAM, which concatenates the features of adjacent layers, extracts their spatial attention, and combines the result with the output of the AAM. The AAM_D outputs, which carry both semantic information and spatial detail, serve as saliency prediction maps for joint multi-level supervision. Experimental results on six benchmark SOD datasets demonstrate that the proposed method outperforms similar previous methods.
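
One possible reading of the PFEM in PyTorch is sketched below; the branch count, dilation rates, and attention design are assumptions inferred from the abstract, not the authors' released code.

```python
# Sketch of a parallel-connection feature enhancement module: parallel
# dilated-convolution branches are concatenated and fused, then reweighted
# by a squeeze-and-excitation style channel attention.
import torch
import torch.nn as nn

class PFEM(nn.Module):
    def __init__(self, ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations)
        self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)
        self.attn = nn.Sequential(             # channel attention flow
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid())

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        out = self.fuse(feats)
        return out * self.attn(out)   # reweight channels by attention
```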

Feature Parameter Extraction for Shape Information Analysis of 2-D Moving Object

  • 김윤호;이주신
    • 한국통신학회논문지, Vol. 16, No. 11, pp. 1132-1142, 1991
  • This paper proposes a technique for extracting feature parameters of a moving object in order to analyze its shape information. The moving object is extracted from two-dimensional images using a difference-image technique. The feature parameters are the area and perimeter, the area-to-perimeter ratio (A/P ratio), the vertices, and the aspect ratio (X/Y ratio). By varying the illumination from 600 lux to 1400 lux, the error tolerance of each feature parameter under illumination change was determined. To demonstrate the validity of the proposed method, object identity was judged using model cars, and the decision error was below 6%.
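
The listed feature parameters translate directly into OpenCV calls. The sketch below assumes a binary mask of the moving object (e.g., obtained by frame differencing); the polygon-approximation tolerance used for vertex counting is an assumption.

```python
# Sketch: compute the abstract's shape parameters from a binary object mask.
import cv2

def shape_parameters(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)        # largest moving object
    area = cv2.contourArea(c)
    perim = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.01 * perim, True)  # tolerance: assumption
    x, y, w, h = cv2.boundingRect(c)
    return {
        "area": area,
        "perimeter": perim,
        "A/P ratio": area / perim if perim else 0.0,
        "vertices": len(approx),
        "X/Y ratio": w / h if h else 0.0,
    }
```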


Object Detection and Classification Using Extended Descriptors for Video Surveillance Applications

  • Mohammad Khairul Islam;Farah Jahan;민재홍;백중환
    • 대한전자공학회논문지SP, Vol. 48, No. 4, pp. 12-20, 2011
  • This paper proposes an efficient object detection and classification algorithm for video surveillance. Previous work mostly detected or classified objects using a single type of feature, such as the Scale Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF). This paper proposes an algorithm in which detection and classification interact: it raises detection and classification rates by using features with different characteristics, such as texture and color distributions obtained from local patches. Spatial clustering of feature points is used for object detection, and a Bag of Words model with a Naive Bayes classifier is used for image representation and classification. Experiments show that the proposed technique outperforms object classification based on local descriptors alone.
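
The Bag of Words plus Naive Bayes pipeline named here can be sketched with scikit-learn. The vocabulary size and the single descriptor type are assumptions; the paper combines texture and color descriptors rather than one feature type.

```python
# Sketch: cluster local descriptors into a visual vocabulary, represent each
# image as a histogram of visual words, and classify with Naive Bayes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import MultinomialNB

def build_vocabulary(descriptor_sets, k=200):
    """descriptor_sets: list of (n_i, d) arrays, one per training image."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, n_init=10).fit(all_desc)

def bow_histogram(vocab, descriptors):
    """Map an image's local descriptors to a visual-word count histogram."""
    words = vocab.predict(descriptors)
    return np.bincount(words, minlength=vocab.n_clusters)

# Training and prediction:
#   X = np.array([bow_histogram(vocab, d) for d in descriptor_sets])
#   clf = MultinomialNB().fit(X, labels)
#   clf.predict(bow_histogram(vocab, new_descriptors).reshape(1, -1))
```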

Edge Feature Extract CBIRS for Car Retrieval: CBIRS/EFI

  • 구건서
    • 한국컴퓨터정보학회논문지, Vol. 15, No. 11, pp. 75-82, 2010
  • This paper proposes CBIRS/EFI, a content-based image retrieval technique that uses an object's edge feature information to retrieve images whose object information is uncertain. In particular, to retrieve partial object images efficiently, the technique extracts contour and color information from the object's features. For the experiments, vehicle images captured in an underground parking lot were used, and side-edge features of the vehicles were extracted as the object feature information. The system applies content-based retrieval based on the analysis of the original query image and the feature-extracted image, followed by a final similarity measurement; compared with FE-CBIRS, an existing feature-extraction content-based image retrieval system, it improves both the accuracy and the efficiency of retrieval. The performance of CBIRS/EFI was evaluated by comparing color-feature search time, shape-feature search time, and retrieval rate while retrieving region feature information with vehicle color information and edge-extraction features. The vehicle edge feature extraction rate was 91.84%, and in vehicle color search time, shape-feature search time, and similarity search time, CBIRS/EFI outperformed FE-CBIRS, with average search times shorter by 0.4 to 0.9 seconds.
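
The combined edge-and-color matching can be sketched as below. Canny stands in for the paper's side-edge extraction, images are assumed to be resized to a common size before feature extraction, and the similarity weighting is an assumption.

```python
# Sketch: per-image features are a normalized edge map plus a color
# histogram; database images are ranked by a weighted combined similarity.
import cv2
import numpy as np

def features(img_bgr):
    """img_bgr: image already resized to a common size."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200).astype(np.float32).ravel()
    hist = cv2.calcHist([img_bgr], [0, 1, 2], None,
                        [8, 8, 8], [0, 256] * 3).ravel()
    cv2.normalize(hist, hist)
    return edges / (np.linalg.norm(edges) + 1e-6), hist

def similarity(query_feat, db_feat, w_edge=0.5):
    qe, qh = query_feat
    de, dh = db_feat
    edge_sim = float(np.dot(qe, de))                     # cosine on edge maps
    color_sim = cv2.compareHist(qh, dh, cv2.HISTCMP_CORREL)
    return w_edge * edge_sim + (1 - w_edge) * color_sim  # weighted combination
```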