• Title/Summary/Keyword: Robust Feature

Discrete Multiwavelet-Based Video Watermarking Scheme Using SURF

  • Narkedamilly, Leelavathy;Evani, Venkateswara Prasad;Samayamantula, Srinivas Kumar
    • ETRI Journal / v.37 no.3 / pp.595-605 / 2015
  • This paper proposes a robust, imperceptible block-based digital video watermarking algorithm that makes use of the Speeded Up Robust Feature (SURF) technique. The SURF technique is used to extract the most important features of a video. A discrete multiwavelet transform (DMWT) domain in conjunction with a discrete cosine transform is used for embedding a watermark into feature blocks. The watermark used is a binary image. The proposed algorithm is further improved for robustness by an error-correction code to protect the watermark against bit errors. The same watermark is embedded temporally for every set of frames of an input video to improve the decoded watermark correlation. Extensive experimental results demonstrate that the proposed DMWT domain video watermarking using SURF features is robust against common image processing attacks, motion JPEG2000 compression, frame averaging, and frame swapping attacks. The quality of a watermarked video under the proposed algorithm is high, demonstrating the imperceptibility of an embedded watermark.
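
As a rough illustration of the embedding step only, the sketch below selects 8x8 blocks around the strongest SURF keypoints of a frame and hides watermark bits by quantizing a mid-band DCT coefficient; the paper's DMWT stage, temporal redundancy, and error-correction coding are omitted, and the block size, coefficient index, and quantization step are assumptions for illustration. SURF requires an opencv-contrib build with the nonfree modules enabled.

```python
import cv2
import numpy as np

DELTA = 12.0  # quantization step for the embedded coefficient (assumed)

def embed_bits(frame_gray, bits, max_blocks=64):
    # SURF lives in opencv-contrib and needs nonfree support at build time.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints = sorted(surf.detect(frame_gray, None),
                       key=lambda k: k.response, reverse=True)

    marked = frame_gray.astype(np.float32).copy()
    h, w = marked.shape
    used, bit_idx = set(), 0
    for kp in keypoints:
        if bit_idx >= len(bits) or len(used) >= max_blocks:
            break
        # Snap the keypoint to an 8x8 block fully inside the frame.
        bx, by = int(kp.pt[0]) // 8 * 8, int(kp.pt[1]) // 8 * 8
        if bx + 8 > w or by + 8 > h or (bx, by) in used:
            continue
        used.add((bx, by))
        block = marked[by:by + 8, bx:bx + 8]
        coeffs = cv2.dct(block)
        # Quantization-index modulation of one mid-frequency coefficient.
        q = np.round(coeffs[3, 3] / DELTA)
        if int(q) % 2 != bits[bit_idx]:
            q += 1
        coeffs[3, 3] = q * DELTA
        marked[by:by + 8, bx:bx + 8] = cv2.idct(coeffs)
        bit_idx += 1
    return np.clip(marked, 0, 255).astype(np.uint8)
```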

Depth-hybrid speeded-up robust features (DH-SURF) for real-time RGB-D SLAM

  • Lee, Donghwa;Kim, Hyungjin;Jung, Sungwook;Myung, Hyun
    • Advances in Robotics Research / v.2 no.1 / pp.33-44 / 2018
  • This paper presents a novel feature detection algorithm called depth-hybrid speeded-up robust features (DH-SURF), which augments the speeded-up robust features (SURF) algorithm with depth information. In the keypoint detection part of classical SURF, the standard deviation of the Gaussian kernel is varied for its scale-invariance property, resulting in increased computational complexity. We propose a keypoint detection method with less variation of the standard deviation by using depth data from a red-green-blue depth (RGB-D) sensor. Our approach maintains the scale-invariance property while reducing computation time. An RGB-D simultaneous localization and mapping (SLAM) system uses a feature extraction method and depth data concurrently; thus, such a system is well suited for demonstrating the performance of the DH-SURF method. DH-SURF was implemented on both a central processing unit (CPU) and a graphics processing unit (GPU), and was validated through real-time RGB-D SLAM.
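
The sketch below illustrates only the underlying idea, not the authors' implementation: derive one Gaussian scale per depth bin instead of sweeping a full scale space, and keep determinant-of-Hessian maxima at that single depth-derived scale. The depth-to-scale constant and thresholds are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

K_SCALE = 2000.0  # maps depth in millimetres to a blur scale (assumed)

def depth_guided_keypoints(gray, depth_mm, n_bins=4, response_thresh=1e-4):
    gray = gray.astype(np.float32) / 255.0
    keypoints = []
    valid = depth_mm > 0
    edges = np.quantile(depth_mm[valid], np.linspace(0, 1, n_bins + 1))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = valid & (depth_mm >= lo) & (depth_mm <= hi)
        if not mask.any():
            continue
        # One scale per depth bin: nearer surfaces get a larger kernel.
        sigma = np.clip(K_SCALE / np.median(depth_mm[mask]), 1.0, 8.0)
        smoothed = gaussian_filter(gray, sigma)
        dyy = np.gradient(np.gradient(smoothed, axis=0), axis=0)
        dxx = np.gradient(np.gradient(smoothed, axis=1), axis=1)
        dxy = np.gradient(np.gradient(smoothed, axis=0), axis=1)
        # Scale-normalised determinant-of-Hessian response at this scale only.
        doh = (dxx * dyy - dxy ** 2) * sigma ** 4
        local_max = (doh == maximum_filter(doh, size=5)) & (doh > response_thresh)
        ys, xs = np.nonzero(local_max & mask)
        keypoints.extend((int(x), int(y), float(sigma)) for x, y in zip(xs, ys))
    return keypoints
```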

A Method for Improving Object Recognition Using Pattern Recognition Filtering (패턴인식 필터링을 적용한 물체인식 성능 향상 기법)

  • Park, JinLyul;Lee, SeungGi
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.6 / pp.122-129 / 2016
  • There has been a great deal of research on object recognition in computer vision. The SURF (Speeded Up Robust Features) algorithm, based on feature detection, is faster and more accurate than other methods. However, it is prone to errors caused by feature-point mismatching during feature extraction. In order to increase the success rate of object recognition, we have built an object recognition system based on the SURF and RANSAC (Random Sample Consensus) algorithms and propose pattern recognition filtering. We also present experimental results demonstrating the improved success rate of object recognition.
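
A minimal sketch of the SURF-plus-RANSAC verification stage such a pipeline builds on is shown below; the proposed pattern recognition filtering itself is not reproduced, and SURF requires an opencv-contrib build with the nonfree modules enabled.

```python
import cv2
import numpy as np

def count_inliers(model_img, scene_img, ratio=0.75):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(model_img, None)
    kp2, des2 = surf.detectAndCompute(scene_img, None)

    # Ratio-test matching of SURF descriptors.
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m[0] for m in pairs
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    if len(good) < 4:
        return 0  # too few correspondences to fit a homography

    # RANSAC geometric verification: count matches consistent with one homography.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return int(mask.sum()) if mask is not None else 0
```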

Robust Features and Accurate Inliers Detection Framework: Application to Stereo Ego-motion Estimation

  • MIN, Haigen;ZHAO, Xiangmo;XU, Zhigang;ZHANG, Licheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.302-320 / 2017
  • In this paper, an innovative robust feature detection and matching strategy for visual odometry based on stereo image sequences is proposed. First, a sparse multiscale 2D local invariant feature detection and description algorithm, AKAZE, is adopted to extract interest points. A robust feature matching strategy is introduced to match AKAZE descriptors. In order to remove outliers, which are mismatched features or features on dynamic objects, an improved random sample consensus (RANSAC) outlier rejection scheme is presented; the proposed method can thus be applied to dynamic environments. Then, geometric constraints are incorporated into the motion estimation without time-consuming 3-dimensional scene reconstruction. Last, an iterated sigma-point Kalman filter is adopted to refine the motion estimates. The presented ego-motion scheme is applied to benchmark datasets and compared with state-of-the-art approaches on data captured on campus in a considerably cluttered environment, where its superiority is demonstrated.
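
The sketch below covers only the front end of such a pipeline: AKAZE detection, Hamming-distance matching with a ratio test, and standard RANSAC on the fundamental matrix for outlier rejection. The paper's improved RANSAC variant and the stereo motion estimation are not reproduced.

```python
import cv2
import numpy as np

def matched_inliers(img_prev, img_curr, ratio=0.8):
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(img_prev, None)
    kp2, des2 = akaze.detectAndCompute(img_curr, None)

    # AKAZE descriptors are binary, so Hamming distance is the right metric.
    pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [m[0] for m in pairs
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    if len(good) < 8:
        return []

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # Epipolar RANSAC: keep matches consistent with one fundamental matrix.
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    if mask is None:
        return []
    return [(tuple(p), tuple(q))
            for p, q, keep in zip(pts1, pts2, mask.ravel()) if keep]
```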

Robust Control of Robot Manipulators using Vision Systems

  • Lee, Young-Chan;Jie, Min-Seok;Lee, Kang-Woong
    • Journal of Advanced Navigation Technology / v.7 no.2 / pp.162-170 / 2003
  • In this paper, we propose a robust controller for trajectory control of n-link robot manipulators using feature-based visual feedback. In order to reduce the tracking error of the robot manipulator due to parametric uncertainties, integral action is included in the dynamic control part of the inner control loop. The desired trajectory for tracking is generated from features extracted by a camera mounted on the end effector. The stability of the robust state feedback control system is shown by the Lyapunov method. Simulation and experimental results on a 5-link robot manipulator with two degrees of freedom show that the proposed method achieves good tracking performance.
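
A toy illustration of why the integral action matters is given below: a single joint modelled as a double integrator with an unknown constant disturbance (standing in for parametric uncertainty) tracks a step reference under PD versus PID state feedback. The gains and disturbance value are arbitrary assumptions, not the paper's.

```python
import numpy as np

def simulate(ki, kp=25.0, kd=10.0, disturbance=3.0, dt=1e-3, t_end=5.0):
    q, qd, integ = 0.0, 0.0, 0.0
    q_des = 1.0                      # step reference for the joint angle
    for _ in range(int(t_end / dt)):
        e = q_des - q
        integ += e * dt
        tau = kp * e - kd * qd + ki * integ
        qdd = tau - disturbance      # unmodelled constant torque disturbance
        qd += qdd * dt
        q += qd * dt
    return q_des - q                 # remaining tracking error at t_end

# Without integral action the constant disturbance leaves a steady-state
# error of roughly disturbance/kp; with it, the error is driven toward zero.
print("PD  final error: %.4f" % simulate(ki=0.0))
print("PID final error: %.4f" % simulate(ki=30.0))
```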


Robust Feature Extraction and Tracking Algorithm Using 2-dimensional Wavelet Transform (2차원 웨이브릿 변환을 이용한 강건한 특징점 추출 및 추적 알고리즘)

  • Jang, Sung-Kun;Suk, Jung-Youp
    • Proceedings of the IEEK Conference / 2007.07a / pp.405-406 / 2007
  • In this paper, we propose a feature extraction and tracking algorithm using multi-resolution analysis in the 2-dimensional wavelet domain. Feature extraction selects feature points using a 2-level wavelet transform in the region of interest. Feature tracking estimates the displacement between the current frame and the next frame based on the feature points selected by the extraction algorithm. Experimental results show that the proposed algorithm performs better than existing algorithms.
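
A rough sketch of the detection idea, assuming PyWavelets: take a 2-level 2D wavelet transform of a region of interest and keep locations whose level-2 detail-coefficient energy is a local maximum. The displacement-estimation (tracking) step and the thresholds here are simplifications.

```python
import numpy as np
import pywt
from scipy.ndimage import maximum_filter

def wavelet_feature_points(roi, top_k=50):
    roi = roi.astype(np.float32)
    coeffs = pywt.wavedec2(roi, "haar", level=2)
    det_h, det_v, det_d = coeffs[1]            # level-2 detail sub-bands
    energy = det_h ** 2 + det_v ** 2 + det_d ** 2
    # Keep strictly positive local maxima of the detail energy.
    peaks = (energy == maximum_filter(energy, size=3)) & (energy > 0)
    ys, xs = np.nonzero(peaks)
    order = np.argsort(energy[ys, xs])[::-1][:top_k]
    # Level-2 coefficients are subsampled by 4 relative to the input ROI.
    return [(int(xs[i]) * 4, int(ys[i]) * 4) for i in order]
```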


Face Identification Method Using Face Shape Independent of Lighting Conditions

  • Takimoto, H.;Mitsukura, Y.;Akamatsu, N.
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2003.10a / pp.2213-2216 / 2003
  • In this paper, we propose a face identification method that is robust to lighting conditions and based on facial feature points. First, the proposed method extracts the edges of the facial features. Then, using the Hough transform, it determines ellipse parameters of each facial feature from the extracted edges. Finally, the proposed method performs face identification using these parameters. Even if a face image is taken under various lighting conditions, it is easy to extract the facial feature edges. Moreover, it is possible to identify a subject even when a feature is not fully visible, because the Hough transform estimates the parameters approximately. Therefore, the proposed method is more robust to lighting conditions than conventional methods. In order to show its effectiveness, computer simulations are performed using real images.
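
The sketch below shows one way to recover ellipse parameters for a facial feature from its edge map with a Hough-style ellipse transform, using scikit-image; the parameter ranges are rough assumptions, and because the transform is expensive the input should be a small cropped feature region (e.g. an eye patch).

```python
from skimage.feature import canny
from skimage.transform import hough_ellipse

def feature_ellipse(gray_region):
    edges = canny(gray_region, sigma=2.0)
    candidates = hough_ellipse(edges, accuracy=10, threshold=50,
                               min_size=10, max_size=60)
    if len(candidates) == 0:
        return None
    candidates.sort(order="accumulator")       # best-supported ellipse last
    best = list(candidates[-1])                # (acc, yc, xc, a, b, orientation)
    return {"center": (float(best[2]), float(best[1])),
            "axes": (float(best[3]), float(best[4])),
            "orientation": float(best[5])}
```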


Robust Global Localization based on Environment map through Sensor Fusion (센서 융합을 통한 환경지도 기반의 강인한 전역 위치추정)

  • Jung, Min-Kuk;Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.9 no.2 / pp.96-103 / 2014
  • Global localization is one of the essential issues for mobile robot navigation. In this study, an indoor global localization method is proposed which uses a Kinect sensor and a monocular upward-looking camera. The proposed method generates an environment map which consists of a grid map, a ceiling feature map from the upward-looking camera, and a spatial feature map obtained from the Kinect sensor. The method selects robot pose candidates using the spatial feature map and updates sample poses by a particle filter based on the grid map. Localization success is determined by calculating the matching error from the ceiling feature map. In various experiments, the proposed method achieved a position accuracy of 0.12 m and a position update time of 10.4 s, making it suitable for real-world applications.
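
A bare-bones particle-filter skeleton for grid-map global localization is sketched below to illustrate the sample-update-resample loop such a system builds on; the sensor likelihood is a stand-in (it only checks that a particle lies in free space), and the paper's Kinect and ceiling-feature measurement models are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_particles(candidate_poses, n=500):
    # Seed particles around (x, y, theta) pose candidates from the feature map.
    idx = rng.integers(len(candidate_poses), size=n)
    return np.array(candidate_poses, dtype=float)[idx] + rng.normal(0, 0.1, (n, 3))

def step(particles, odom_delta, grid_map, resolution=0.05):
    # Motion update: apply odometry with additive noise.
    particles = particles + odom_delta + rng.normal(0, [0.02, 0.02, 0.01],
                                                    particles.shape)
    # Measurement update (placeholder): weight ~1 if the occupancy-grid cell
    # is free, ~0 otherwise; assumes the map origin is at (0, 0).
    cols = (particles[:, 0] / resolution).astype(int).clip(0, grid_map.shape[1] - 1)
    rows = (particles[:, 1] / resolution).astype(int).clip(0, grid_map.shape[0] - 1)
    weights = np.where(grid_map[rows, cols] == 0, 1.0, 1e-6)
    weights /= weights.sum()
    # Resample with replacement according to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```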

Robust appearance feature learning using pixel-wise discrimination for visual tracking

  • Kim, Minji;Kim, Sungchan
    • ETRI Journal / v.41 no.4 / pp.483-493 / 2019
  • Considering the high dimensions of video sequences, it is often challenging to acquire a sufficient dataset to train the tracking models. From this perspective, we propose to revisit the idea of hand-crafted feature learning to avoid such a requirement from a dataset. The proposed tracking approach is composed of two phases, detection and tracking, according to how severely the appearance of a target changes. The detection phase addresses severe and rapid variations by learning a new appearance model that classifies the pixels into foreground (or target) and background. We further combine the raw pixel features of the color intensity and spatial location with convolutional feature activations for robust target representation. The tracking phase tracks a target by searching for frame regions where the best pixel-level agreement to the model learned from the detection phase is achieved. Our two-phase approach results in efficient and accurate tracking, outperforming recent methods in various challenging cases of target appearance changes.
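
A simplified sketch of the pixel-wise discrimination idea, with logistic regression standing in for the appearance model: each pixel becomes a colour-plus-location feature vector, the classifier is fitted on a frame whose target box is known, and later frames are scored pixel by pixel. The convolutional feature activations used in the paper are omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pixel_features(frame_rgb):
    h, w, _ = frame_rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Per-pixel feature vector: normalised R, G, B plus normalised x, y.
    return np.concatenate([frame_rgb.reshape(-1, 3) / 255.0,
                           xs.reshape(-1, 1) / w,
                           ys.reshape(-1, 1) / h], axis=1)

def fit_appearance_model(frame_rgb, target_box):
    x0, y0, x1, y1 = target_box                 # known target box in this frame
    h, w, _ = frame_rgb.shape
    labels = np.zeros((h, w), dtype=int)
    labels[y0:y1, x0:x1] = 1                    # 1 = target, 0 = background
    model = LogisticRegression(max_iter=200)
    model.fit(pixel_features(frame_rgb), labels.ravel())
    return model

def target_probability_map(model, frame_rgb):
    h, w, _ = frame_rgb.shape
    return model.predict_proba(pixel_features(frame_rgb))[:, 1].reshape(h, w)
```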

Object Recognition by Invariant Feature Extraction in FLIR (적외선 영상에서의 불변 특징 정보를 이용한 목표물 인식)

  • 권재환;이광연;김성대
    • Proceedings of the IEEK Conference / 2000.11d / pp.65-68 / 2000
  • This paper describes an approach for extracting invariant features using a view-based representation and recognizing an object with a high-speed search method in forward-looking infrared (FLIR) imagery. We use a reformulated eigenspace technique based on robust estimation to extract features that are robust to outliers such as noise and clutter. After extracting the features, we recognize an object using a partial distance search method for calculating the Euclidean distance. The experimental results show that the proposed method improves the recognition rate compared with standard PCA.
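
The partial distance search can be sketched directly: accumulate the squared distance dimension by dimension and abandon a candidate as soon as the running sum exceeds the best distance found so far. The robust-estimation eigenspace itself is assumed to be precomputed; the projection shown in the usage comment is a plain PCA stand-in.

```python
import numpy as np

def partial_distance_search(query, database):
    best_idx, best_dist = -1, np.inf
    for i, candidate in enumerate(database):
        dist = 0.0
        for q, c in zip(query, candidate):
            dist += (q - c) ** 2
            if dist >= best_dist:      # early rejection: cannot beat the best
                break
        else:                          # loop finished, so this is the new best
            best_idx, best_dist = i, dist
    return best_idx, best_dist

# Usage (projection matrix and mean assumed precomputed):
# coeffs_db = (templates - mean) @ eigvecs      # database in eigenspace
# coeffs_q  = (query_img - mean) @ eigvecs
# idx, d = partial_distance_search(coeffs_q, coeffs_db)
```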
