• Title/Summary/Keyword: Image Extraction

Slab Region Localization for Text Extraction using SIFT Features (문자열 검출을 위한 슬라브 영역 추정)

  • Choi, Jong-Hyun;Choi, Sung-Hoo;Yun, Jong-Pil;Koo, Keun-Hwi;Kim, Sang-Woo
    • The Transactions of The Korean Institute of Electrical Engineers / v.58 no.5 / pp.1025-1034 / 2009
  • In a steel-making production line, each steel slab is given a unique identification number, the slab management number (SMN), which carries information about the use of the slab. Identification of the SMN has been done by human operators for several years, but this is expensive, inaccurate, and a heavy burden on the workers. Consequently, an automatic recognition system is desirable to improve efficiency. In general, such a recognition system consists of text localization, text extraction, character segmentation, and character recognition, and every stage must succeed for exact SMN identification. Text localization in particular is a critical and difficult stage: because of the many text-like patterns in the complex background and the fuzzy boundary between the slab and the background, directly extracting the text region is hard. If the slab region containing the SMN can be detected precisely, the text localization algorithm can be made simpler and the processing time of the overall recognition system reduced. This paper describes slab region localization using SIFT (Scale Invariant Feature Transform) features. First, the SIFT algorithm is applied to the captured background image and the slab image, and the features of the two images are matched by a nearest-neighbor (NN) algorithm. Because the correct-match rate can be low, the geometric locations of the matched feature points are then used to remove incorrect matches. Finally, a search-rectangle method is applied to the remaining correct matches to determine the top and side boundaries of the slab region. Through this process, the search region for extracting the SMN from the slab image is reduced. In most cases the search region for text extraction is fixed heuristically [1][2]; the proposed algorithm is more analytic, because the search region is not fixed and the slab region is searched over the whole image. Experimental results show that the proposed algorithm performs well.
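
The matching-and-filtering pipeline summarized above can be sketched with OpenCV. This is a minimal illustration under assumptions, not the authors' implementation: the ratio test, the `max_disp` threshold, and the function name are illustrative, and the final search-rectangle step that derives the slab boundaries is omitted.

```python
# Sketch: SIFT matching between a background reference image and a slab image,
# followed by a simple geometric-consistency filter (illustrative only).
import cv2
import numpy as np

def match_slab_features(background_path, slab_path, max_disp=40.0):
    bg = cv2.imread(background_path, cv2.IMREAD_GRAYSCALE)
    slab = cv2.imread(slab_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(bg, None)
    kp2, des2 = sift.detectAndCompute(slab, None)

    # Nearest-neighbour matching with a ratio test to drop weak matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]

    # Geometric filter: keep matches whose displacement agrees with the
    # median displacement, since the background geometry is fixed.
    disp = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                     for m in good])
    med = np.median(disp, axis=0)
    return [m for m, d in zip(good, disp) if np.linalg.norm(d - med) < max_disp]
```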

SIFT Image Feature Extraction based on Deep Learning (딥 러닝 기반의 SIFT 이미지 특징 추출)

  • Lee, Jae-Eun;Moon, Won-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.24 no.2 / pp.234-242 / 2019
  • In this paper, we propose a deep neural network that extracts SIFT feature points by determining whether the center pixel of a cropped image patch is a SIFT feature point. The data set consists of the DIV2K dataset cut into 33×33 patches and uses RGB images, unlike SIFT, which uses grayscale images. The ground truth consists of RobHess SIFT features extracted with the octave (scale) set to 0, sigma to 1.6, and the number of intervals to 3. Based on VGG-16, we construct increasingly deep networks with 13, 23, and 33 convolution layers and experiment with different ways of increasing the image scale. The result of using the sigmoid function as the activation function of the output layer is compared with the result of using the softmax function. Experimental results show that the proposed network not only achieves more than 99% extraction accuracy but also high extraction repeatability for distorted images.
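
A minimal PyTorch sketch of the patch classifier described above: a small VGG-style stack that maps a 33×33 RGB patch to the probability that its centre pixel is a SIFT keypoint. The layer counts, channel widths, and class name are illustrative assumptions; the paper's 13-, 23-, and 33-layer variants are not reproduced here.

```python
import torch
import torch.nn as nn

class PatchKeypointNet(nn.Module):
    """Classifies whether the centre pixel of a 33x33 RGB patch is a keypoint."""
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2))
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1), nn.Sigmoid())     # sigmoid output variant

    def forward(self, x):                        # x: (N, 3, 33, 33)
        return self.head(self.features(x))

# Usage: probs = PatchKeypointNet()(torch.rand(8, 3, 33, 33))
```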

Study on an Extraction Method for a Fuel Rod Image and a Visualization of the Color Information in a Sectional Image of a Spent Fuel Assembly (사용후핵연료집합체 영상에서 핵연료봉 영상 추출방법과 색상정보의 가시화에 관한 연구)

  • Jang, Ji-Woon;Shin, Hee-Sung;Youn, Cheung;Kim, Ho-Dong
    • Journal of the Korean Society for Nondestructive Testing / v.27 no.5 / pp.432-441 / 2007
  • Image processing methods for extracting nuclear fuel rod images and visualizing the RGB color data were studied with a sectional image of a spent fuel assembly. The fuel rod images could be extracted using histogram analysis, edge detection, and the RGB color data. With these results, the size of the spent fuel assembly could be measured using the histogram analysis method, and the shape of the spent fuel rods could be observed using the edge detection method. Finally, various analyses of the status of the spent fuel assembly were made possible by rendering the color data of the assembly image as 3D visualizations.
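
The processing chain described above can be illustrated with a short OpenCV sketch: histogram-based thresholding for the assembly size and edge detection for the rod outlines. The file name and parameter values are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("assembly_section.png")             # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Histogram analysis: Otsu's threshold separates rod pixels from background;
# the bounding box of the mask gives a rough assembly size in pixels.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
x, y, w, h = cv2.boundingRect(mask)

# Edge detection outlines the individual fuel rods.
edges = cv2.Canny(gray, 50, 150)

# Keep the RGB colour data of rod pixels only, for later 3D visualization.
rods_rgb = cv2.bitwise_and(img, img, mask=mask)
```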

FLIR and CCD Image Fusion Algorithm Based on Adaptive Weight for Target Extraction (표적 추출을 위한 적응적 가중치 기반 FLIR 및 CCD 센서 영상 융합 알고리즘)

  • Gu, Eun-Hye;Lee, Eun-Young;Kim, Se-Yun;Cho, Woon-Ho;Kim, Hee-Soo;Park, Kil-Houm
    • Journal of Korea Multimedia Society / v.15 no.3 / pp.291-298 / 2012
  • In automatic target recognition (ATR) systems, target extraction techniques are very important because ATR performance depends on the segmentation result. This paper therefore proposes a multi-sensor image fusion method based on adaptive weights. To combine the FLIR image and the CCD image, we use information such as bi-modality, distance, and texture. The weight of the FLIR image is derived from the bi-modality and distance measures. For the weight of the CCD image, we use the observation that the target's texture is more uniform than that of the background region. The proposed algorithm is applied to many images, and its performance is compared with the segmentation results obtained from a single image. Experimental results show that the proposed method achieves accurate extraction performance.
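
A minimal sketch of pixel-wise fusion with an adaptive weight map is shown below, assuming the FLIR and CCD images are already registered. Only a local-variance texture cue is used here as a stand-in for the paper's bi-modality, distance, and texture measures.

```python
import cv2
import numpy as np

def fuse_flir_ccd(flir, ccd, win=7):
    """Weighted fusion of registered, single-channel FLIR and CCD images."""
    flir = flir.astype(np.float32)
    ccd = ccd.astype(np.float32)

    # Local variance of the CCD image as a simple texture measure.
    mean = cv2.blur(ccd, (win, win))
    var = cv2.blur(ccd * ccd, (win, win)) - mean * mean

    # Uniform (low-variance) regions get a higher CCD weight, reflecting the
    # assumption that the target's texture is more uniform than the background.
    w_ccd = 1.0 - cv2.normalize(var, None, 0.0, 1.0, cv2.NORM_MINMAX)
    return w_ccd * ccd + (1.0 - w_ccd) * flir
```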

Visual Feature Extraction for Image Retrieval using Wavelet Coefficient’s Fuzzy Homogeneity and High Frequency Energy (웨이브릿 계수의 퍼지 동질성과 고주파 에너지를 이용한 영상 검색용 특징벡터 추출)

  • 박원배;류은주;송영준
    • The Journal of the Korea Contents Association / v.4 no.1 / pp.18-23 / 2004
  • In this paper, we propose a new visual feature extraction method for content-based image retrieval (CBIR) based on the wavelet transform, which has both spatial-frequency and multi-resolution characteristics. We extract visual features from each frequency band of the wavelet transform and use them for CBIR. The lowest-frequency band contains the spatial information of the original image. We extract 10 feature vectors using fuzzy homogeneity in the wavelet domain, which considers both the wavelet coefficients and the spatial information of each coefficient. We also extract 3 feature vectors using the energy values of the high-frequency bands and store them in the image database. Given a query, we retrieve the most similar image from the database according to the 10 largest homograms (normalized fuzzy homogeneity vectors) and the 3 energy values. Simulation results with 90 texture images show that the proposed method retrieves images accurately.
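
A compact sketch of a wavelet-domain feature vector in the spirit of this abstract is given below, using PyWavelets. The fuzzy-homogeneity step is simplified to a normalized histogram of the low-frequency band, so this is only an approximation of the paper's homogram features.

```python
import numpy as np
import pywt

def wavelet_features(gray, bins=10):
    """Return a 13-dimensional feature vector: 10 LL-band bins + 3 band energies."""
    LL, (LH, HL, HH) = pywt.dwt2(gray.astype(np.float32), "haar")

    # Low-frequency band: coarse spatial/intensity distribution
    # (stand-in for the normalized fuzzy-homogeneity vectors).
    hist, _ = np.histogram(LL, bins=bins, density=True)

    # Energies of the three high-frequency bands.
    energies = [float(np.mean(b * b)) for b in (LH, HL, HH)]
    return np.concatenate([hist, energies])
```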


Codebook-Based Foreground Extraction Algorithm with Continuous Learning of Background (연속적인 배경 모델 학습을 이용한 코드북 기반의 전경 추출 알고리즘)

  • Jung, Jae-Young
    • Journal of Digital Contents Society / v.15 no.4 / pp.449-455 / 2014
  • Detection of moving objects is a fundamental task in most computer vision applications, such as video surveillance, activity recognition, and human motion analysis. It is difficult because of the many challenges of realistic scenarios, including irregular background motion, illumination changes, cast shadows, changes in scene geometry, and noise. In this paper, we propose a foreground extraction algorithm based on a codebook, a database of information about background pixels obtained from the input image sequence. Initially, we take the first frame as the background image and compute the difference between the next input image and it to detect moving objects. The resulting difference image may contain noise as well as genuine moving objects. Second, we look up the codebook with the color and brightness of each foreground pixel in the difference image; if a match is found, the pixel is judged to be a false detection and removed from the foreground. Finally, the background image is updated to process the next input frame iteratively: pixels detected as background are estimated from the input image, and the others are carried over from the previous background image. We apply our algorithm to the PETS2009 data and compare the results with those of the GMM and standard codebook algorithms.
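
The difference-then-verify-then-update loop described above can be sketched as follows; the codebook lookup is reduced to a comment, and the learning rate, threshold, and file name are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("pets2009.avi")               # hypothetical input video
ok, frame = cap.read()
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
alpha, diff_th = 0.05, 25                            # assumed parameters

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # 1) Difference against the current background model.
    fg = (np.abs(gray - background) > diff_th).astype(np.uint8) * 255

    # 2) A codebook lookup on colour/brightness would remove false detections here.

    # 3) Continuous background learning: update only pixels judged background,
    #    carry the previous model over where foreground was detected.
    bg = fg == 0
    background[bg] = (1 - alpha) * background[bg] + alpha * gray[bg]
```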

Automatic National Image Interpretability Rating Scales (NIIRS) Measurement Algorithm for Satellite Images (위성영상을 위한 NIIRS(National Image Interpretability Rating Scales) 자동 측정 알고리즘)

  • Kim, Jeahee;Lee, Changu;Park, Jong Won
    • Journal of Korea Multimedia Society / v.19 no.4 / pp.725-735 / 2016
  • High-resolution satellite images are used in mapping, natural disaster forecasting, agriculture, ocean-based industries, infrastructure, and environmental monitoring, and the development of and demand for applications of such imagery continue to grow. Users of satellite images want an accurate measure of the quality of the images they are given. Moreover, the interpretability of each image captured by an actual satellite varies with the atmospheric conditions and solar angle at the captured region, the satellite velocity and capture angle, and the system noise; hence, NIIRS must be measured for every captured image. However, the professional staff and time available are far from sufficient to measure the NIIRS of the few hundred images transmitted daily, so NIIRS is currently measured only every few months or even years, to assess the aging of the satellite and to verify and calibrate it [3]. We therefore develop an algorithm that measures the national image interpretability rating scales (NIIRS) of an ordinary satellite image, rather than an image of an artificial target, so that image quality can be assessed automatically. In this study, criteria for automatic edge region extraction are derived from previous work on manual edge region extraction [4][5], and we propose an algorithm that extracts such edge regions. RER and H are then calculated from the extracted edge regions. In an automatic measurement experiment on an ordinary satellite image, the average NIIRS value was 3.6342 ± 0.15321 (2 standard deviations), which is similar to the result obtained from the artificial target.
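
One ingredient of such an automatic measurement, the relative edge response (RER) taken from an extracted edge region, can be sketched as below. The profile handling is a simplified assumption, and the step that maps RER and H to a NIIRS value (the GIQE regression) is deliberately omitted.

```python
import numpy as np

def relative_edge_response(edge_profile):
    """Estimate RER from a 1-D intensity profile sampled across an edge."""
    p = np.asarray(edge_profile, dtype=np.float32)
    p = (p - p.min()) / (p.max() - p.min())          # normalized edge response
    center = int(np.argmin(np.abs(p - 0.5)))         # 0.5 crossing = edge location
    lo = max(center - 1, 0)
    hi = min(center + 1, len(p) - 1)
    # Approximate slope of the normalized response around the edge location.
    return float(p[hi] - p[lo]) / max(hi - lo, 1)
```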

Image Matching Based on Robust Feature Extraction for Remote Sensing Haze Images (위성 안개 영상을 위한 강인한 특징점 검출 기반의 영상 정합)

  • Kwon, Oh-Seol
    • Journal of Broadcast Engineering / v.21 no.2 / pp.272-275 / 2016
  • This paper presents a method of single-image dehazing and surface-based feature detection for remote sensing images. In the conventional dark channel prior (DCP) algorithm, the resulting transmission map invariably includes block artifacts because of patch-based processing, which also causes image blur. A transmission map refined with a hidden Markov random field and the expectation-maximization algorithm reduces the block artifacts and increases image clarity. The proposed algorithm also enhances the accuracy of image matching using surface-based features in a remote sensing image. Experimental results confirm that the proposed algorithm is superior to conventional algorithms in haze removal and is well suited to feature-based image matching.
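
The baseline DCP transmission estimate that the paper refines is sketched below; the hidden-Markov-random-field/EM refinement itself is not reproduced, so the patch-wise block artifacts the abstract mentions remain visible in this version.

```python
import cv2
import numpy as np

def dcp_transmission(img_bgr, patch=15, omega=0.95):
    """Dark-channel-prior transmission map and atmospheric light (baseline only)."""
    img = img_bgr.astype(np.float32) / 255.0
    kernel = np.ones((patch, patch), np.uint8)

    # Dark channel: per-pixel colour minimum followed by a patch minimum.
    dark = cv2.erode(img.min(axis=2), kernel)

    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    flat = dark.reshape(-1)
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)

    # Patch-wise transmission; this is the stage that produces block artifacts,
    # which the paper's HMRF/EM refinement is meant to remove.
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    return t, A
```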

Face Feature Extraction Method Through Stereo Image's Matching Value (스테레오 영상의 정합값을 통한 얼굴특징 추출 방법)

  • Kim, Sang-Myung;Park, Chang-Han;Namkung, Jae-Chan
    • Journal of Korea Multimedia Society / v.8 no.4 / pp.461-472 / 2005
  • In this paper, we propose a face feature extraction algorithm based on the matching values of a stereo image pair. The proposed algorithm detects the face region by converting the skin-color information from the RGB color space to the YCbCr color space. By applying an eye template to the extracted face region, geometric feature vectors describing the distances and slopes between the eyes, nose, and mouth are extracted. In addition to 2D feature information, the proposed method can extract features of the eyes, nose, and mouth through stereo matching. In experiments, the proposed algorithm shows a matching rate of 73% at distances within about 1 m and 52% at distances beyond about 1 m.
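
The first stage, skin-colour face-region detection in YCbCr space, can be sketched as below. The Cb/Cr bounds are common illustrative values, not the paper's, and the eye-template and stereo-matching stages are omitted.

```python
import cv2
import numpy as np

def skin_mask(img_bgr):
    """Binary mask of candidate skin pixels in YCrCb space (OpenCV channel order)."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove small speckles before taking the largest region as the face.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```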


Automated Edge-based Seamline Extraction for Mosaicking of High-resolution Satellite Images (고해상도 위성영상 모자이킹을 위한 경계선 기반의 접합선 자동 추출)

  • Jin, Kyeong-Hyeok;Song, Yeong-Sun
    • Journal of Korean Society for Geospatial Information Science / v.17 no.1 / pp.61-69 / 2009
  • With the advent of high-resolution satellite imagery, the ground coverage of a single satellite image has decreased. For this reason, there is a growing need for image mosaicking technology in various GIS applications. This paper describes an edge-based seamline extraction algorithm that uses edge information from features such as rivers, roads, and buildings for image mosaicking. For this purpose, we developed a method to track and link the discontinuous edges produced by an edge detection operator. To evaluate the effectiveness of the proposed algorithm, we applied it to IKONOS, KOMPSAT-1, and SPOT-5 satellite images. The experimental results show that the algorithm successfully deals with discontinuities caused by geometric differences between the two images.
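
A crude stand-in for the edge-tracking-and-linking step is sketched below: detect edges in the overlap area of the two images, close small gaps so broken chains become connected, and keep the longest chains as seamline candidates. Thresholds and the gap size are assumptions, not the paper's values.

```python
import cv2

def candidate_seam_edges(overlap_gray, gap=5):
    """Return contour chains, longest first, as seamline candidates."""
    edges = cv2.Canny(overlap_gray, 50, 150)

    # Morphological closing links nearby but discontinuous edge fragments
    # (rivers, roads, building outlines) into longer chains.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (gap, gap))
    linked = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    contours, _ = cv2.findContours(linked, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return sorted(contours, key=lambda c: cv2.arcLength(c, False), reverse=True)
```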
