• Title/Summary/Keyword: image feature extraction


EDMFEN: Edge detection-based multi-scale feature enhancement Network for low-light image enhancement

  • Canlin Li;Shun Song;Pengcheng Gao;Wei Huang;Lihua Bi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.980-997 / 2024
  • The main objective of low-light image enhancement (LLIE) is to improve the brightness of images and reveal information hidden in dark areas. LLIE methods based on deep learning show good performance, but they have some limitations: complex network models require demanding runtime environments, deficient enhancement of edge details leads to blurring of the target content, and single-scale feature extraction results in insufficient recovery of the hidden content of the enhanced images. This paper proposes an edge detection-based multi-scale feature enhancement network for LLIE (EDMFEN). To reduce the loss of edge details in the enhanced images, an edge extraction module based on the Sobel operator is introduced to obtain edge information by computing image gradients. In addition, a multi-scale feature enhancement module (MSFEM), consisting of multi-scale feature extraction blocks (MSFEBs) and a spatial attention mechanism, is proposed to thoroughly recover the hidden content of the enhanced images and obtain richer features. Since the fused features may contain some useless information, the MSFEBs are introduced to obtain image features with different receptive fields. To use the multi-scale features more effectively, a spatial attention module is applied after fusing the multi-scale features to retain key features and improve model performance. Experimental results on two datasets and five baseline datasets show that EDMFEN performs well compared with state-of-the-art LLIE methods.
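
A minimal sketch of the Sobel gradient computation used by the edge extraction module described above, assuming a Python/OpenCV setup; the function name and normalization are illustrative and not the paper's implementation.

```python
# Sobel-based edge map: compute horizontal and vertical gradients, then the
# gradient magnitude, as a stand-in for EDMFEN's edge extraction module.
import cv2
import numpy as np

def sobel_edge_map(image_bgr: np.ndarray) -> np.ndarray:
    """Return a normalized gradient-magnitude edge map of a BGR image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return magnitude / (magnitude.max() + 1e-8)      # scale to [0, 1]
```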

Implementation of the Panoramic System Using Feature-Based Image Stitching (특징점 기반 이미지 스티칭을 이용한 파노라마 시스템 구현)

  • Choi, Jaehak;Lee, Yonghwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.16 no.2 / pp.61-65 / 2017
  • Recently, interest in and research on 360-degree cameras and 360-degree image production have been expanding. In this paper, we describe the feature extraction, alignment, and image blending stages that make up a feature-based stitching system, and cover the theory of representative algorithms at each stage. In addition, the feature-based stitching system was implemented using the OpenCV library. In the implementation results, the two input images differ in brightness, which gives the stitched image a sense of heterogeneity. We will study appropriate preprocessing that adjusts brightness values to improve the accuracy and seamlessness of the feature-based stitching system.
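
A minimal sketch of feature-based stitching with OpenCV for two overlapping images; the choice of ORB features, the matcher settings, and the output canvas size are illustrative assumptions, not necessarily the configuration used in the paper.

```python
# Feature-based stitching: detect keypoints, match descriptors, estimate a
# homography with RANSAC, then warp one image into the other's frame.
import cv2
import numpy as np

def stitch_pair(img_left: np.ndarray, img_right: np.ndarray) -> np.ndarray:
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right image into the left image's frame and paste the left image on top.
    h, w = img_left.shape[:2]
    panorama = cv2.warpPerspective(img_right, H, (w * 2, h))
    panorama[:h, :w] = img_left
    return panorama
```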


The study of iris region extraction for iris recognition (홍채 인식을 위한 홍채 영역 추출)

  • Yoon, Kyong-Lok;Yang, Woo-S.
    • Proceedings of the KIEE Conference / 2004.11c / pp.181-183 / 2004
  • In this paper, we propose an algorithm that extracts the iris region from a 2D image. Our method consists of internal boundary detection and external boundary detection. Since the eyelid and eyelashes cover part of the boundary and the size of the iris changes continuously, it is difficult to extract the iris region accurately. For the interior and exterior boundary detection, we use partial differentiation of the histogram. The performance of the proposed algorithm is tested and evaluated using 360 iris image samples.
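
An illustrative sketch of locating the internal (pupil) boundary of the iris from a grayscale eye image via histogram analysis. This is a simplified stand-in for the histogram partial-differentiation approach described above; the threshold heuristic and the assumption that pupil pixels dominate the low gray levels are illustrative.

```python
# Pupil boundary from histogram analysis: pick a threshold at the steepest
# falling edge of the histogram after the dark pupil peak, then fit a circle.
import cv2
import numpy as np

def pupil_boundary(gray: np.ndarray):
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    hist = np.convolve(hist, np.ones(5) / 5.0, mode="same")   # smooth the histogram
    deriv = np.gradient(hist)
    peak = int(np.argmax(hist[:100]))                         # dark pupil peak (assumption)
    threshold = peak + int(np.argmin(deriv[peak:peak + 80]))  # steepest descent after the peak

    mask = (gray < threshold).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), r = cv2.minEnclosingCircle(largest)             # pupil center and radius
    return (cx, cy), r
```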


A Study on Feature Selection and Feature Extraction for Hyperspectral Image Classification Using Canonical Correlation Classifier (정준상관분류에 의한 하이퍼스펙트럴영상 분류에서 유효밴드 선정 및 추출에 관한 연구)

  • Park, Min-Ho
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.3D / pp.419-431 / 2009
  • The core of this study is finding an efficient band selection or extraction method that discovers the optimal spectral bands when applying a canonical correlation classifier (CCC) to hyperspectral data. The optimal bands based on each separability decision technique are selected using the MultiSpec software developed by Purdue University (USA). A total of six separability decision techniques are used: Divergence, Transformed Divergence, Bhattacharyya, Mean Bhattacharyya, Covariance Bhattacharyya, and Noncovariance Bhattacharyya. For feature extraction, the PCA and MNF transformations are performed with the ERDAS IMAGINE and ENVI software. To compare and assess the effects of feature selection and feature extraction, land cover classification is performed by CCC. The overall accuracy of CCC using the initially selected 60 bands is 71.8%; the highest classification accuracy, 79.0%, is obtained when CCC is executed after applying Noncovariance Bhattacharyya. In conclusion, only the Noncovariance Bhattacharyya separability decision method proved valuable as a feature selection algorithm for CCC-based hyperspectral image classification; the classification accuracy obtained with the other feature selection and extraction algorithms, except Divergence, rather declined.
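
A minimal sketch of PCA-based feature extraction for a hyperspectral cube, comparable in spirit to the PCA transformation step mentioned above; the cube layout (height x width x bands) and the number of components are illustrative assumptions.

```python
# PCA feature extraction for hyperspectral data: treat each pixel's spectrum
# as a sample and project it onto the leading principal components.
import numpy as np
from sklearn.decomposition import PCA

def pca_features(cube: np.ndarray, n_components: int = 10) -> np.ndarray:
    """Project each pixel's spectrum onto the first n_components principal axes."""
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)    # one row per pixel
    scores = PCA(n_components=n_components).fit_transform(pixels)
    return scores.reshape(h, w, n_components)               # reduced feature image
```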

Feature Extraction Of Content-based image retrieval Using object Segmentation and HAQ algorithm (객체 분할과 HAQ 알고리즘을 이용한 내용 기반 영상 검색 특징 추출)

  • 김대일;홍종선;장혜경;김영호;강대성
    • Proceedings of the IEEK Conference / 2003.11a / pp.453-456 / 2003
  • Compared with other image features, color features are less sensitive to noise and background complication. Moreover, combining them with object segmentation improves the accuracy of image retrieval. This paper presents an approach based on object segmentation and the HAQ (Histogram Analysis and Quantization) algorithm to extract the features of an image, namely the object information and its characteristic colors. The empirical results show that this method accurately captures the spatial and color information of an image as retrieval features.
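
An illustrative sketch of a quantized color-histogram retrieval feature. This is a generic stand-in for histogram analysis and quantization, not the paper's exact HAQ algorithm; the HSV color space and bin counts are assumptions.

```python
# Quantized color histogram: coarsely bin HSV values and normalize, giving a
# compact color feature vector for content-based retrieval.
import cv2
import numpy as np

def color_histogram_feature(image_bgr: np.ndarray, bins=(8, 4, 4)) -> np.ndarray:
    """Quantize HSV color space into coarse bins and return a normalized histogram."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-8)  # normalize so features are comparable across image sizes
```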


Image Retrieval Using Directional Features (방향성 특징을 이용한 이미지 검색)

  • Jung, Ho-Young;Whang, Whan-Kyu
    • Journal of Industrial Technology / v.20 no.B / pp.207-211 / 2000
  • For efficient retrieval over massive image collections, an image retrieval system must satisfy several important objectives: automated feature extraction, efficient indexing, and effective retrieval. In this work, we present a technique for extracting a 4-dimensional directional feature. By directional detail, we mean strong directional activity in the horizontal, vertical, and diagonal directions present in regions of the image texture. This directional information also reflects the smoothness of a region. Because the feature is only 4-dimensional, it can be indexed in a 4-D space, so complex high-dimensional indexing can be avoided.
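
A minimal sketch of a 4-dimensional directional feature built from horizontal, vertical, and two diagonal edge responses; the specific kernels and the mean-absolute-response energy measure are illustrative assumptions, not the paper's exact definition.

```python
# 4-D directional feature: convolve with four oriented kernels and summarize
# each response map by its mean absolute value.
import numpy as np
from scipy.ndimage import convolve

KERNELS = {
    "horizontal": np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=float),
    "vertical":   np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float),
    "diag_45":    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float),
    "diag_135":   np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], dtype=float),
}

def directional_feature(gray: np.ndarray) -> np.ndarray:
    """Return a 4-D vector of mean absolute responses to the directional kernels."""
    gray = gray.astype(float)
    return np.array([np.abs(convolve(gray, k)).mean() for k in KERNELS.values()])
```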


Automatic Image Registration Based on Extraction of Corresponding-Points for Multi-Sensor Image Fusion (다중센서 영상융합을 위한 대응점 추출에 기반한 자동 영상정합 기법)

  • Choi, Won-Chul;Jung, Jik-Han;Park, Dong-Jo;Choi, Byung-In;Choi, Sung-Nam
    • Journal of the Korea Institute of Military Science and Technology / v.12 no.4 / pp.524-531 / 2009
  • In this paper, we propose an automatic image registration method for multi-sensor image fusion, such as the fusion of visible and infrared images. Registration is achieved by finding corresponding feature points in both input images. In general, global statistical correlation is not guaranteed between multi-sensor images, which makes image registration for multi-sensor images difficult. To cope with this problem, mutual information is adopted to measure the correspondence of features and to select reliable points. An update algorithm for the projective transform is also proposed. Experimental results show that the proposed method provides robust and accurate registration results.
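
A minimal sketch of measuring the correspondence between two image patches with mutual information, the criterion used above to select reliable point pairs across sensors; the bin count and the patch-based interface are illustrative assumptions.

```python
# Mutual information between two patches, estimated from their joint
# intensity histogram; higher values indicate stronger statistical dependence.
import numpy as np

def mutual_information(patch_a: np.ndarray, patch_b: np.ndarray, bins: int = 32) -> float:
    """Mutual information (in nats) between the intensity distributions of two patches."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint probability
    px = pxy.sum(axis=1, keepdims=True)            # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)            # marginal of patch_b
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```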

Feature Extraction Based on GRFs for Facial Expression Recognition

  • Yoon, Myoong-Young
    • Journal of Korea Society of Industrial Information Systems / v.7 no.3 / pp.23-31 / 2002
  • In this paper, we propose a new feature vector for facial expression recognition based on Gibbs distributions, which are well suited for representing spatial continuity. The extracted feature vectors are invariant under translation, rotation, and scaling of a facial expression image. The algorithm for recognizing a facial expression contains two parts: feature vector extraction and the recognition process. The feature vector consists of modified 2-D conditional moments based on an estimated Gibbs distribution of the facial image. In the facial expression recognition phase, we use a discrete left-right HMM, which is widely used in pattern recognition. To evaluate the performance of the proposed scheme, experiments on recognizing four universal expressions (anger, fear, happiness, surprise) were conducted with facial image sequences on a workstation. The experimental results reveal that the proposed scheme achieves a recognition rate of over 95%.
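
An illustrative sketch of a translation-, rotation-, and scale-invariant moment feature for a face image, using Hu's moment invariants via OpenCV as a generic stand-in; the paper's modified 2-D conditional moments based on a Gibbs distribution are not reproduced here.

```python
# Invariant moment feature: compute image moments, derive the 7 Hu invariants,
# and log-scale them to compress their dynamic range.
import cv2
import numpy as np

def invariant_moment_feature(gray_face: np.ndarray) -> np.ndarray:
    """Return 7 log-scaled Hu moment invariants of a grayscale face image."""
    hu = cv2.HuMoments(cv2.moments(gray_face)).ravel()
    # Log-scale while keeping the sign of each invariant.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```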


Comparison of PCA and ICA on the FERET Data Set (FERET DATA SET에서의 PCA와 ICA의 비교)

  • Kim, Sung-Soo;Moon, Hyeon-Joon;Kim, Jaihie
    • Proceedings of the IEEK Conference / 2003.07e / pp.2355-2358 / 2003
  • The purpose of this paper is to investigate two major feature extraction techniques within a generic modular face recognition system. Detailed algorithms are described for principal component analysis (PCA) and independent component analysis (ICA). PCA and ICA are statistical techniques for feature extraction, and their incorporation into a face recognition system requires numerous design decisions. We state these design decisions explicitly by introducing a modular face recognition system, since some of them are not documented in the literature. We explore different implementations of each module and evaluate the statistical feature extraction algorithms based on the FERET performance evaluation protocol (the de facto standard method for evaluating face recognition algorithms). We perform two experiments: in the first, we report performance results on the FERET database based on PCA; in the second, we examine performance variations based on the ICA feature extraction algorithm. The experimental results are reported using four different categories of image sets, including frontal, lighting, and duplicate images.
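
A minimal sketch of PCA (eigenface) feature extraction for face images; the training-matrix layout and number of components are illustrative assumptions rather than the exact FERET experimental configuration.

```python
# Eigenfaces: fit PCA on flattened training faces, then project a probe face
# onto the eigenface basis to obtain its feature vector.
import numpy as np
from sklearn.decomposition import PCA

def train_eigenfaces(faces: np.ndarray, n_components: int = 50) -> PCA:
    """faces: array of shape (n_images, height * width), one flattened face per row."""
    return PCA(n_components=n_components, whiten=True).fit(faces)

def project(pca: PCA, face: np.ndarray) -> np.ndarray:
    """Project a single flattened face onto the eigenface basis."""
    return pca.transform(face.reshape(1, -1))[0]
```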


Feature Extraction of the 3-Dimensional Objects with Circular Cross Sections (단면이 원인 3차원 물체의 특징 추출)

  • Cho, Dong-Uk
    • The Transactions of the Korea Information Processing Society / v.3 no.4 / pp.866-876 / 1996
  • A feature extraction method is proposed for objects that have a circular cross section. To implement a robust recognition system that can effectively deal with various types of 2-dimensional and 3-dimensional images, both 2-dimensional and 3-dimensional information should be extracted and combined for optimal use. To this end, this paper presents a feature extraction method for 3-dimensional objects, in particular objects with a circular cross section, which most objects in the real world are known to have. First, the Z gradient is proposed to extract shape information from such objects. Using this information, normal vectors are derived from the surface patches, and the intersection points between these vectors are used for geometric feature extraction. In addition, for more accurate recognition, a method for extracting features between surface regions is proposed. Finally, a method for extracting functional information is investigated for the final recognition process. The usefulness of the proposed method is demonstrated through experiments.
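
A minimal sketch of deriving surface normal vectors from a depth (Z) image via its gradients, one common reading of the Z-gradient idea described above; the depth-map interface is an illustrative assumption.

```python
# Surface normals from a depth image: differentiate Z along rows and columns,
# assemble (-dZ/dx, -dZ/dy, 1) per pixel, and normalize to unit length.
import numpy as np

def surface_normals(depth: np.ndarray) -> np.ndarray:
    """Return an (H, W, 3) array of unit normals estimated from a depth image."""
    dz_dy, dz_dx = np.gradient(depth.astype(float))   # Z gradients along rows and columns
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth, dtype=float)))
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / norm                              # unit-length normal per pixel
```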
