• Title/Summary/Keyword: Salient Feature

Improved Gradient Direction Assisted Linking Algorithm for Linear Feature Extraction in High Resolution Satellite Images, an Iterative Dynamic Programming Approach

  • Yang, Kai;Liew, Soo Chin;Lee, Ken Yoong;Kwoh, Leong Keong
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.408-410
    • /
    • 2003
  • In this paper, an improved gradient-direction-assisted linking algorithm is proposed. The algorithm begins with initial seeds satisfying some local criteria and then searches along the direction provided by the initial point. A window is generated in the gradient direction of the current point. Unlike the conventional method, which considers only the value of the local salient structure, an improved mathematical model is proposed to describe the desired linear features; this model considers not only the value of the salient structure but also its direction. Furthermore, the linking problem under this model can be solved efficiently by dynamic programming. The algorithm is tested for linear feature detection in IKONOS images, and the results show that it is quite promising. (A minimal code sketch of the linking idea follows this entry.)

  • PDF
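
As a rough illustration of the linking step described in this abstract, the sketch below links pixels from a seed with a Viterbi-style dynamic program whose score combines the value of the salient structure (gradient magnitude) with a direction-consistency term. It is a minimal numpy sketch under assumed inputs, not the authors' formulation: the function name `dp_link_segment`, the trellis layout, and the weights `lam`/`mu` are all illustrative.

```python
import numpy as np

def dp_link_segment(mag, ang, seed, seed_dir, steps=40, half_width=4, lam=0.5, mu=1.0):
    """Viterbi-style dynamic programming over a trellis of candidate pixels.

    Step t, lateral state k -> pixel  seed + t*forward + (k - half_width)*normal.
    A node is rewarded for a strong salient structure whose gradient direction
    is consistent with the expected edge normal; transitions penalise lateral
    jumps.  `mag`/`ang` are the gradient magnitude/direction (e.g. from Sobel),
    `seed` is a (row, col) seed point and `seed_dir` its local edge direction.
    """
    h, w = mag.shape
    fwd = np.array([np.sin(seed_dir), np.cos(seed_dir)])    # (dy, dx) along the edge
    nrm = np.array([fwd[1], -fwd[0]])                        # perpendicular direction
    K = 2 * half_width + 1

    def node(t, k):
        y, x = np.rint(np.asarray(seed) + t * fwd + (k - half_width) * nrm).astype(int)
        if not (0 <= y < h and 0 <= x < w):
            return None, -np.inf
        # direction consistency: the local gradient should be perpendicular to `fwd`
        mis = abs(np.cos(ang[y, x]) * fwd[1] + np.sin(ang[y, x]) * fwd[0])
        return (y, x), mag[y, x] - mu * mis

    score = np.full(K, -np.inf)
    back = np.zeros((steps, K), dtype=int)
    coords = [[None] * K for _ in range(steps)]
    for k in range(K):
        coords[0][k], score[k] = node(1, k)
    for t in range(1, steps):
        new = np.full(K, -np.inf)
        for k in range(K):
            coords[t][k], val = node(t + 1, k)
            trans = score - lam * np.abs(np.arange(K) - k)   # lateral-jump penalty
            back[t, k] = int(np.argmax(trans))
            new[k] = trans[back[t, k]] + val
        score = new
    # backtrack the best-scoring path through the trellis
    k, path = int(np.argmax(score)), []
    for t in range(steps - 1, -1, -1):
        if coords[t][k] is not None:
            path.append(coords[t][k])
        k = back[t, k] if t > 0 else k
    return path[::-1]
```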

A Region-based Image Retrieval System using Salient Point Extraction and Image Segmentation (영상분할과 특징점 추출을 이용한 영역기반 영상검색 시스템)

  • 이희경;호요성
    • Journal of Broadcast Engineering
    • /
    • v.7 no.3
    • /
    • pp.262-270
    • /
    • 2002
  • Although most image indexing schemes are based on global image features, they have limited discrimination capability because they cannot capture local variations of the image. In this paper, we propose a new region-based image retrieval system that can extract important regions in the image using salient point extraction and image segmentation techniques. Our experimental results show that color and texture information in the region provides significantly improved retrieval performance compared to global feature extraction methods.
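
The combination of salient points and segmentation described above can be illustrated, very loosely, with off-the-shelf OpenCV building blocks: Shi-Tomasi corners standing in for the salient point extractor, k-means colour clustering standing in for the segmentation step, and a colour histogram per region as the retrieval feature. All function names and parameter choices here are assumptions for the sketch, not the system described in the paper.

```python
import cv2
import numpy as np

def important_regions(img_bgr, n_segments=4, n_points=200):
    """Keep the colour segments that attract the most salient (corner) points."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=n_points,
                                  qualityLevel=0.01, minDistance=5)
    pts = pts.reshape(-1, 2).astype(int) if pts is not None else np.empty((0, 2), int)

    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(lab, n_segments, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(img_bgr.shape[:2])

    # count salient points per segment and keep the two densest segments
    counts = np.bincount(labels[pts[:, 1], pts[:, 0]], minlength=n_segments)
    keep = np.argsort(counts)[::-1][:2]
    return labels, keep

def region_descriptor(img_bgr, mask, bins=8):
    """Colour histogram of one region, used as the retrieval feature."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], mask.astype(np.uint8), [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, None).flatten()

def similarity(h1, h2):
    """Histogram intersection between two region descriptors."""
    return float(np.minimum(h1, h2).sum())
```

A query would then be answered by computing `region_descriptor` for each kept segment of the query image and ranking database images by their best region-to-region histogram intersection.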

Rectangle Region Based Stereo Matching for Building Reconstruction

  • Wang, Jing;Miyazaki, Toru;Koizumi, Hirokazu;Iwata, Makoto;Chong, Jong-Wha;Yagyu, Hiroyuki;Shimazu, Hideo;Ikenaga, Takeshi;Goto, Satoshi
    • Journal of Ubiquitous Convergence Technology
    • /
    • v.1 no.1
    • /
    • pp.9-17
    • /
    • 2007
  • Feature-based stereo matching is an effective way to perform 3D building reconstruction. However, in urban scenes, the cluttered background and various building structures may interfere with the performance of building reconstruction. In this paper, we propose a novel method to robustly reconstruct buildings on the basis of rectangle regions. Firstly, we propose a multi-scale linear feature detector to obtain the salient line segments on the object contours. Secondly, candidate rectangle regions are extracted from the salient line segments based on their local information. Thirdly, stereo matching is performed with the list of matching line segments, which are the boundary edges of the corresponding rectangles from the left and right images. Experimental results demonstrate that the proposed method achieves better accuracy in the reconstructed result than pixel-level stereo matching. (An illustrative code sketch of the line-segment matching step follows this entry.)

  • PDF
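
A toy version of the line-segment matching stage might look like the following, with Canny plus a probabilistic Hough transform standing in for the paper's multi-scale linear feature detector and normalized cross-correlation of intensity strips as the matching score on a rectified pair. It omits the rectangle-hypothesis step entirely; names and thresholds are illustrative.

```python
import cv2
import numpy as np

def salient_segments(gray, min_len=40):
    """Illustrative stand-in for a linear feature detector: Canny + Hough."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=min_len, maxLineGap=5)
    return [] if lines is None else [tuple(l[0]) for l in lines]

def segment_patch(gray, seg):
    """Samples a thin strip of intensities along a segment for matching."""
    x1, y1, x2, y2 = seg
    n = max(int(np.hypot(x2 - x1, y2 - y1)), 2)
    xs = np.linspace(x1, x2, n).astype(int).clip(0, gray.shape[1] - 1)
    ys = np.linspace(y1, y2, n).astype(int).clip(0, gray.shape[0] - 1)
    return gray[ys, xs].astype(float)

def match_segments(left, right, max_row_diff=3):
    """For a rectified pair, match each left segment to the right segment whose
    strip is most correlated and whose endpoint rows agree (epipolar constraint)."""
    matches = []
    for sl in salient_segments(left):
        best, best_score, pl = None, -1.0, segment_patch(left, sl)
        for sr in salient_segments(right):
            if abs(sl[1] - sr[1]) > max_row_diff or abs(sl[3] - sr[3]) > max_row_diff:
                continue
            pr = segment_patch(right, sr)
            m = min(len(pl), len(pr))
            a, b = pl[:m] - pl[:m].mean(), pr[:m] - pr[:m].mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            score = float(a @ b / denom) if denom > 0 else 0.0
            if score > best_score:
                best, best_score = sr, score
        if best is not None:
            matches.append((sl, best, best_score))
    return matches
```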

Saliency Attention Method for Salient Object Detection Based on Deep Learning (딥러닝 기반의 돌출 객체 검출을 위한 Saliency Attention 방법)

  • Kim, Hoi-Jun;Lee, Sang-Hun;Han, Hyun Ho;Kim, Jin-Soo
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.12
    • /
    • pp.39-47
    • /
    • 2020
  • In this paper, we propose a deep learning-based detection method using Saliency Attention to detect salient objects in images. Salient object detection separates the object on which the human eye focuses from the background and determines the most relevant part of the image. It is useful in various fields such as object tracking, detection, and recognition. Existing deep learning-based methods are mostly autoencoder structures, and many feature losses occur in the encoders that compress and extract features and the decoders that decompress and expand the extracted features. These losses cause the salient object area to be lost or the background to be detected as an object. In the proposed method, Saliency Attention is introduced to reduce the feature loss and suppress the background region in the autoencoder structure. The influence of the feature values is determined using the ELU activation function, and attention is applied to the feature values in the normalized negative and positive regions, respectively. Through this attention method, the background area is suppressed and the salient object area is emphasized. Experimental results show improved detection results compared to existing deep learning methods.
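
The ELU-based attention idea can be sketched as a small PyTorch module: ELU maps features into a negative and a positive range, each range gets its own attention map, and the positive (object-like) part is emphasised while the negative (background-like) part is suppressed before a residual connection. This is a loose reading of the abstract, not the authors' architecture; the layer shapes and the 1x1-convolution attention heads are assumptions.

```python
import torch
import torch.nn as nn

class SaliencyAttention(nn.Module):
    """Loose sketch: split ELU responses into negative/positive ranges and
    attend to each range separately, keeping a residual path to the encoder."""
    def __init__(self, channels):
        super().__init__()
        self.elu = nn.ELU()
        self.pos_att = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.neg_att = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        f = self.elu(x)                        # values in (-1, inf)
        pos = torch.clamp(f, min=0.0)          # object-like responses
        neg = torch.clamp(f, max=0.0)          # background-like responses
        out = pos * self.pos_att(pos) + neg * (1.0 - self.neg_att(-neg))
        return out + x                         # residual connection keeps encoder features

# usage inside an autoencoder-style saliency network (shapes are illustrative)
feat = torch.randn(1, 64, 56, 56)
att = SaliencyAttention(64)
print(att(feat).shape)                         # torch.Size([1, 64, 56, 56])
```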

A feature-based motion parameter estimation using bi-directional correspondence scheme (쌍방향 대응기법을 이용한 특징점 기반 움직임 계수 추정)

  • 서종열;김경중;임채욱;박규태
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.11
    • /
    • pp.2776-2788
    • /
    • 1996
  • A new feature-based motion parameter estimation method for arbitrarily shaped regions is proposed. Existing motion parameter estimation algorithms, such as gradient-based algorithms, require iterations that are very sensitive to initial values and often converge to a local minimum. In this paper, the motion parameters of an object are obtained by solving a set of linear equations derived from the motion of salient feature points of the object. In order to estimate the displacement of the feature points, a new process called the "bi-directional correspondence scheme" is proposed to ensure the robustness of the correspondence. The proposed correspondence scheme iteratively selects the feature points and their corresponding points until a unique one-to-one correspondence is established. Furthermore, the initially obtained motion parameters are refined using an iterative method to give better performance. The proposed algorithm can be used for motion estimation in object-based image coders, and the experimental results show that the proposed method outperforms existing schemes in estimating the motion parameters of objects in image sequences. (A small code sketch of the parameter estimation and the consistency check follows this entry.)

  • PDF
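
The two ingredients of the abstract, solving a set of linear equations for the motion parameters and a bi-directional correspondence check, can be sketched as follows for a six-parameter affine motion model; the helper names and the simple index-based consistency test are illustrative, not the paper's exact procedure.

```python
import numpy as np

def affine_from_correspondences(src, dst):
    """Least-squares estimate of the six affine motion parameters from feature
    correspondences:  [x', y'] = A @ [x, y] + t.  Each correspondence gives two
    linear equations, so three or more non-collinear points determine A and t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2], M[0::2, 2] = src, 1.0      # rows for the x' equations
    M[1::2, 3:5], M[1::2, 5] = src, 1.0      # rows for the y' equations
    p, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    return p[[0, 1, 3, 4]].reshape(2, 2), p[[2, 5]]

def bidirectional_matches(fwd, bwd):
    """Forward/backward consistency check: keep a correspondence i -> fwd[i]
    only if matching back from fwd[i] returns i again (unique one-to-one pairs).
    `fwd` and `bwd` are nearest-match index arrays; -1 marks 'no match'."""
    return [(i, j) for i, j in enumerate(fwd) if j >= 0 and bwd[j] == i]

# hypothetical check: a pure translation by (2, 3) is recovered exactly
A, t = affine_from_correspondences([(0, 0), (10, 0), (0, 10)],
                                   [(2, 3), (12, 3), (2, 13)])
print(np.round(A, 3), np.round(t, 3))        # ~identity matrix and (2, 3)
```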

A Salient Based Bag of Visual Word Model (SBBoVW): Improvements toward Difficult Object Recognition and Object Location in Image Retrieval

  • Mansourian, Leila;Abdullah, Muhamad Taufik;Abdullah, Lilli Nurliyana;Azman, Azreen;Mustaffa, Mas Rina
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.2
    • /
    • pp.769-786
    • /
    • 2016
  • Object recognition and object location have always drawn much interest, and various computational models have recently been designed. One of the major issues in this domain is the lack of an appropriate model for extracting the important part of a picture and estimating the object's location in the same environment, which causes low accuracy. To solve this problem, a new Salient Based Bag of Visual Word (SBBoVW) model for object recognition and object location estimation is presented. The contributions of the present study are two-fold. The first is a new approach, the Salient Based Bag of Visual Word (SBBoVW) model, for recognizing difficult objects that had low accuracy with previous methods. This method integrates SIFT features of the original and salient parts of pictures and fuses them together to generate better codebooks using the bag of visual words method. The second contribution is a new algorithm for finding the object location automatically based on the salient map. The performance evaluation on several data sets proves that the new approach outperforms other state-of-the-art methods.
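
A compressed sketch of the fusion idea: SIFT descriptors from the whole image and from its salient region (supplied here as a binary mask from any saliency detector) are pooled, a shared k-means codebook is learned, and each image is represented by a visual-word histogram. It assumes enough descriptors to fill the codebook; function and parameter choices are illustrative rather than the authors' pipeline.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(images_gray, saliency_masks, n_words=100):
    """Pool SIFT from the full image and its salient region, build one shared
    codebook, and return an L1-normalised visual-word histogram per image."""
    sift = cv2.SIFT_create()
    per_image = []
    for img, mask in zip(images_gray, saliency_masks):
        descs = []
        for m in (None, (mask > 0).astype(np.uint8) * 255):   # whole image + salient part
            _, d = sift.detectAndCompute(img, m)
            if d is not None:
                descs.append(d)
        per_image.append(np.vstack(descs) if descs else np.empty((0, 128), np.float32))

    codebook = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(np.vstack(per_image))
    hists = []
    for d in per_image:
        words = codebook.predict(d) if len(d) else np.empty(0, int)
        h = np.bincount(words, minlength=n_words).astype(float)
        hists.append(h / max(h.sum(), 1.0))
    return np.array(hists)
```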

Salient Object Extraction from Video Sequences using Contrast Map and Motion Information (대비 지도와 움직임 정보를 이용한 동영상으로부터 중요 객체 추출)

  • Kwak, Soo-Yeong;Ko, Byoung-Chul;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1121-1135
    • /
    • 2005
  • This paper proposes a moving object extraction method using a contrast map and salient points. To build the contrast map, we generate three feature maps, namely a luminance map, a color map, and a directional map, and extract salient points from the image. Using these features, we can easily decide the location of the Attention Window (AW). The purpose of the AW is to remove useless regions in the image, such as the background, and to reduce the amount of image processing. To determine the exact location and a flexible size for the AW, we use a motion feature instead of pre-assumptions or heuristic parameters. After determining the AW, we find the edge difference from the AW to its inner area and then extract horizontal and vertical candidate regions. After finding both candidates, the intersection regions obtained through a logical AND operation are further processed by morphological operations. The proposed algorithm has been applied to many video sequences with a static background, such as surveillance video sequences. The moving objects were segmented quite well, with accurate boundaries.
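
A minimal sketch of the contrast-map and motion cues described above: centre-surround contrast of luminance and colour plus a Sobel-based directional map form a crude contrast map, and a frame difference supplies the motion feature that positions the attention window. The feature maps and the threshold are stand-ins, not the paper's definitions.

```python
import cv2
import numpy as np

def contrast_map(frame_bgr):
    """Centre-surround contrast of L, a, b channels plus Sobel edge energy."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    maps = []
    for ch in (lab[..., 0], lab[..., 1], lab[..., 2]):
        blur = cv2.GaussianBlur(ch, (0, 0), sigmaX=8)
        maps.append(np.abs(ch - blur))                    # centre-surround contrast
    gx = cv2.Sobel(lab[..., 0], cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(lab[..., 0], cv2.CV_32F, 0, 1)
    maps.append(cv2.magnitude(gx, gy))                    # directional map
    total = sum(m / (m.max() + 1e-6) for m in maps)
    return total / (total.max() + 1e-6)

def attention_window(prev_bgr, cur_bgr, thresh=0.5):
    """Bound the pixels that are both salient and moving (frame difference),
    instead of relying on heuristic window sizes."""
    sal = contrast_map(cur_bgr)
    motion = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                         cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2GRAY)).astype(np.float32)
    motion /= motion.max() + 1e-6
    cue = sal * motion
    ys, xs = np.nonzero(cue > thresh * cue.max())
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())   # x0, y0, x1, y1
```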

An approach for improving the performance of the Content-Based Image Retrieval (CBIR)

  • Jeong, Inseong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.6_2
    • /
    • pp.665-672
    • /
    • 2012
  • Amid rapidly increasing imagery inputs and their volume in a remote sensing imagery database, Content-Based Image Retrieval (CBIR) is an effective tool to search for an image feature or image content of interest that a user wants to retrieve. It seeks to capture salient features from a 'query' image and then to locate other instances of image regions having similar features elsewhere in the image database. For a CBIR approach that uses texture as the primary feature primitive, designing a texture descriptor that better represents image content is key to improving CBIR results. For this purpose, an extended feature vector combining the Gabor filter and the co-occurrence histogram method is suggested and evaluated against quantitative and qualitative retrieval performance criteria. For better CBIR performance, assessing similarity between high-dimensional feature vectors is also a challenging issue; therefore, a number of distance metrics (i.e., the L1 and L2 norms) are tried to measure the closeness between two feature vectors, and their impact on the retrieval result is analyzed. In this paper, experimental results are presented with several CBIR samples. The current results show that 1) the overall retrieval quantity and quality are improved by combining the two types of feature vectors, 2) some features are better retrieved by a specific feature vector, and 3) the retrieval result quality (i.e., the ranking of retrieved image tiles) is sensitive to the adopted similarity metric when the extended feature vector is employed.
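
The extended feature vector can be sketched by concatenating Gabor filter-bank statistics with a grey-level co-occurrence histogram and comparing vectors under either an L1 or an L2 metric, which is exactly the kind of choice the abstract reports as affecting the ranking. Filter-bank parameters, quantisation levels, and the single co-occurrence displacement are illustrative assumptions.

```python
import cv2
import numpy as np

def gabor_features(gray, freqs=(0.1, 0.2), thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean/std of responses over a small Gabor filter bank (8-bit greyscale input)."""
    feats = []
    for f in freqs:
        for th in thetas:
            # ksize, sigma, theta, lambd (wavelength), gamma, psi
            kern = cv2.getGaborKernel((21, 21), 4.0, th, 1.0 / f, 0.5, 0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

def cooccurrence_histogram(gray, levels=16, dx=1, dy=0):
    """Grey-level co-occurrence counts for one non-negative displacement, flattened."""
    q = (gray.astype(int) * levels // 256).clip(0, levels - 1)
    h, w = q.shape
    a = q[0:h - dy, 0:w - dx]        # reference pixels
    b = q[dy:h, dx:w]                # neighbours at the chosen displacement
    hist = np.bincount((a * levels + b).ravel(), minlength=levels * levels).astype(float)
    return hist / hist.sum()

def extended_vector(gray):
    """Extended descriptor: Gabor statistics concatenated with the co-occurrence histogram."""
    return np.concatenate([gabor_features(gray), cooccurrence_histogram(gray)])

def distance(u, v, norm="L1"):
    """Similarity metric used to rank tiles; switching L1 vs L2 can change the ranking."""
    d = u - v
    return float(np.abs(d).sum()) if norm == "L1" else float(np.sqrt((d * d).sum()))
```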

Parking Space Recognition for Autonomous Valet Parking Using Height and Salient-Line Probability Maps

  • Han, Seung-Jun;Choi, Jeongdan
    • ETRI Journal
    • /
    • v.37 no.6
    • /
    • pp.1220-1230
    • /
    • 2015
  • An autonomous valet parking (AVP) system is designed to locate a vacant parking space and park the vehicle in which it resides on behalf of the driver, once the driver has left the vehicle. In addition, the AVP system is able to direct the vehicle to a location desired by the driver when requested. In this paper, we introduce technology to recognize a parking space for an AVP system using image sensors. The proposed technology is mainly divided into three parts. First, spatial analysis is carried out using a height map that is based on dense motion stereo. Second, road markings are modelled using a probability map with a new salient-line feature extractor. Finally, parking space recognition is based on a Bayesian classifier. The experimental results show an execution time of up to 10 ms and a recognition rate of over 99%. The performance and properties of the proposed technology were also evaluated with a variety of data. Our algorithms, which are part of the proposed technology, are expected to be applicable to various research areas regarding autonomous vehicles, such as map generation, road marking recognition, localization, and environment recognition.
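
The final classification step can be illustrated with a tiny Gaussian naive-Bayes classifier over hypothetical per-slot cues (say, mean height inside the slot from the height map and mean marking probability along its boundary from the salient-line probability map). The feature definitions, class labels, and training values below are invented for the sketch and are not the paper's classifier.

```python
import numpy as np

class ParkingSpaceBayes:
    """Minimal Gaussian naive-Bayes sketch of a vacant/occupied decision from
    a few map-derived cues per candidate parking slot."""
    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, float))
        # log p(c | x) is proportional to log prior + sum of Gaussian log-likelihoods
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

# hypothetical features: [mean height in slot, mean line probability on boundary]
X_train = [[0.02, 0.8], [0.45, 0.7], [0.03, 0.9], [0.50, 0.6]]
y_train = ["vacant", "occupied", "vacant", "occupied"]
clf = ParkingSpaceBayes().fit(X_train, y_train)
print(clf.predict([[0.05, 0.85]]))      # -> ['vacant']
```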

A Robust Watermarking Technique Using Affine Transform and Cross-Reference Points (어파인 변형과 교차참조점을 이용한 강인한 워터마킹 기법)

  • Lee, Hang-Chan
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.56 no.3
    • /
    • pp.615-622
    • /
    • 2007
  • In general, the Harris detector is commonly used for finding salient points in watermarking systems that use feature points. The Harris detector is a kind of combined corner and edge detector based on the distribution of neighboring image data, so it has limitations in finding accurate salient points after watermark embedding or other digital attacks. In this paper, we use cross-reference points, which rely not on the data distribution but on the geometrical structure of a normalized image, in order to avoid the pointing errors caused by distortion of the image data. After normalization, we find the cross-reference points and apply inverse normalization to them. Next, we construct a group of triangles by tessellation with the inversely normalized cross-reference points. The watermarks are affine-transformed, and the transformed watermarks are embedded into the original image rather than the normalized one; only the locations of the watermarks are determined on the normalized image. Therefore, we can reduce the loss of watermark data caused by inverse normalization. As a result, we can detect the watermarks with high correlation after several digital attacks.
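
The geometric machinery in this abstract (reference points that depend on image geometry rather than pixel values, tessellation into triangles, and an affine transform of the watermark per triangle) can be sketched as below. The choice of grid-crossing points as "cross-reference points" is purely illustrative, as are the function names; only the Delaunay tessellation and the triangle-to-triangle affine solve are standard operations.

```python
import numpy as np
from scipy.spatial import Delaunay

def cross_reference_points(shape, grid=4):
    """Illustrative stand-in for geometric reference points: the centres of a
    regular grid laid over the normalised image.  They depend only on the
    image geometry, not on pixel values, which is the property relied on here."""
    h, w = shape
    ys = (np.arange(grid) + 0.5) * h / grid
    xs = (np.arange(grid) + 0.5) * w / grid
    return np.array([(x, y) for y in ys for x in xs], float)

def tessellate(points):
    """Delaunay tessellation of the (inverse-normalised) reference points."""
    return Delaunay(points).simplices            # index triples of triangles

def triangle_affine(src_tri, dst_tri):
    """Affine map (A, t) sending one triangle onto another; the watermark is
    warped with this map before being embedded into the original image."""
    src_tri, dst_tri = np.asarray(src_tri, float), np.asarray(dst_tri, float)
    S = np.hstack([src_tri, np.ones((3, 1))])    # 3 x 3 system per triangle
    M = np.linalg.solve(S, dst_tri)              # solves S @ M = dst_tri
    return M[:2].T, M[2]                         # 2x2 linear part, translation

pts = cross_reference_points((512, 512))
tris = tessellate(pts)
A, t = triangle_affine(pts[tris[0]], pts[tris[0]] * 1.05 + 2.0)
```

In practice, the watermark patch mapped by each (A, t) would then be added to the corresponding triangle of the original (not the normalised) image, matching the embedding strategy described in the abstract.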