• Title/Summary/Keyword: Image Detect


Character Region Detection Using Structural Features of Hangul Vowel (한글 모음의 구조적 특징을 이용한 문자영역 검출 기법)

  • Park, Jong-Cheon;Lee, Keun-Wang;Park, Hyoung-Keun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.2
    • /
    • pp.872-877
    • /
    • 2012
  • We propose a method to detect Hangul character regions in natural images using the topological structural features of Hangul graphemes. First, we transform the natural image to gray scale. Second, features are extracted with edge-based and connected-component-based methods: the edge-based method uses a Canny edge detector, and the connected-component-based method applies local range filtering. Features that do not satisfy the heuristic rules for Hangul characters are then filtered out, and candidate character regions are selected. Next, the candidate regions are merged into individual Hangul characters by a Hangul character merging algorithm. Finally, the final character regions are determined by a Hangul character class decision algorithm. Experimental results show that the proposed method detects character regions effectively in images with complex backgrounds and in various environments. In the performance evaluation, the proposed method showed improved detection of Hangul character regions in mobile images.
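As a rough illustration of the candidate-selection stage described in this abstract, the sketch below takes a binary feature mask (as produced by, e.g., an edge detector), labels connected components, and keeps only components that pass simple shape heuristics. This is a minimal Python/NumPy sketch, not the authors' implementation; the `min_area` and `max_aspect` thresholds are placeholder assumptions standing in for the paper's heuristic rules.

```python
from collections import deque
import numpy as np

def connected_components(mask):
    # 4-connected component labeling via BFS flood fill.
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1
                q = deque([(sy, sx)])
                labels[sy, sx] = current
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

def candidate_regions(mask, min_area=4, max_aspect=4.0):
    # Keep components whose size and shape plausibly match a grapheme
    # (placeholder heuristics, not the paper's actual rules).
    labels, n = connected_components(mask)
    boxes = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        if len(ys) >= min_area and max(h, w) / min(h, w) <= max_aspect:
            boxes.append((ys.min(), xs.min(), ys.max(), xs.max()))
    return boxes
```

Thin lines and speckle noise are rejected by the aspect and area tests, leaving compact blobs as character candidates for the later merging stage.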

Effective Detection of Target Region Using a Machine Learning Algorithm (기계 학습 알고리즘을 이용한 효과적인 대상 영역 분할)

  • Jang, Seok-Woo;Lee, Gyungju;Jung, Myunghee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.5
    • /
    • pp.697-704
    • /
    • 2018
  • Since the face in image content corresponds to individual information that can distinguish a specific person from other people, it is important to accurately detect faces that are not hidden in an image. In this paper, we propose a method to accurately detect a face in input images using a deep learning algorithm, one of the machine learning methods. In the proposed method, an image input in the red-green-blue (RGB) color model is first converted to the luminance/blue-chroma/red-chroma (YCbCr) color model; then, non-skin regions are removed using a learned skin color model, and only the skin regions are segmented. A CNN-based deep learning algorithm is then applied to robustly detect only the face region in the input image. Experimental results show that the proposed method segments facial regions from input images more efficiently. The proposed face-region detection method is expected to be useful in practical applications related to multimedia and shape recognition.
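The RGB-to-YCbCr conversion and skin segmentation step can be sketched as follows. The conversion uses the standard ITU-R BT.601 coefficients; the fixed Cb/Cr bounds are a widely used rule of thumb for skin color and are only a stand-in for the paper's learned skin color model.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion.
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    # Segment skin-colored pixels; the Cb/Cr bounds are a common
    # heuristic range, not the paper's trained model.
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Working in YCbCr decouples brightness (Y) from chromaticity (Cb, Cr), which is why skin thresholds in this space are far more stable under illumination changes than thresholds in RGB.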

Studies on the Ability to Detect Lesions According to the Changes in the MR Diffusion Weighted Images

  • Kim, Chang-Bok;Cho, Jae-Hwan;Dong, Kyung-Rae;Chung, Woon-Kwan
    • Journal of Magnetics
    • /
    • v.17 no.2
    • /
    • pp.153-157
    • /
    • 2012
  • This study evaluated the ability of Diffusion-Weighted Imaging (DWI), one of the pulse sequences used in MRI alongside T2-weighted imaging, to detect samples placed within phantoms according to their size. Two identically sized phantoms, which could be inserted into the breast coil bilaterally, were prepared. Five samples of different sizes were placed in the phantoms, and T2-weighted images and DWI were obtained. The breast 2-channel coil of a SIEMENS MAGNETOM Avanto 1.5 Tesla system was used for the experiments. 2D T2-weighted images were obtained with the following parameters: TR/TE = 6700/74 msec, thickness/gap = 5/1 mm, inversion time (TI) = 130 ms, and matrix = 224 × 448. The DWI parameters were TR/TE = 8100/90 msec, thickness/gap = 5/1 mm, matrix = 128 × 128, inversion time = 185 ms, and b-value = 0, 100, 300, 600, 1000 s/mm². The ratio of the sample volume measured on DWI to that on the T2-weighted images, which show excellent ability to detect lesions on MR images, was presented as a mean value. The measured values were: 0.5 × 0.5 cm = 0.33/0.34 cm³ (103%), 1 × 1 cm = 1.28/1.25 cm³ (102.4%), 1.5 × 1.5 cm = 2.28/2.67 cm³ (85.39%), 2 × 2 cm = 3.56/4.08 cm³ (87.25%), and 2.5 × 2.5 cm = 7.53/8.77 cm³ (85.86%). In conclusion, the detection ability by sample size was measured to be over 85% of that of the T2-weighted images, i.e., the detection ability of DWI was somewhat lower than that of T2-weighted imaging.
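The "detection ability" percentages above are simply the volume measured on DWI expressed relative to the volume measured on the T2-weighted image; for instance, for the 1.5 × 1.5 cm sample:

```python
# Detection ability as the DWI-to-T2 volume ratio (illustrative arithmetic
# only, using the values quoted in the abstract).
def detection_ratio(dwi_volume, t2_volume):
    return 100.0 * dwi_volume / t2_volume

print(round(detection_ratio(2.28, 2.67), 2))  # -> 85.39
```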

Character Region Detection Using Structural Features of Hangul & English Characters in Natural Image (자연영상에서 한글 및 영문자의 구조적 특징을 이용한 문자영역 검출)

  • Oh, Myoung-Kwan;Park, Jong-Cheon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.3
    • /
    • pp.1718-1723
    • /
    • 2014
  • We propose a method to detect Hangul and English character regions in natural images using the structural features of Hangul and English characters. First, we extract edge features from the natural image. Features that do not satisfy the heuristic rules for characters are then filtered out, and candidate character regions are selected. Next, candidate Hangul character regions are merged into individual Hangul characters by a Hangul character merging algorithm, and the final Hangul character regions are determined by a Hangul character class decision algorithm. English character regions are detected from the edge features of English characters. Experimental results show that the proposed method detects character regions effectively in images with complex backgrounds and in various environments. In the performance evaluation, the proposed method showed improved detection of Hangul and English character regions in natural images.
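The character-merging step, in which nearby grapheme candidates are grouped into one character box, might be sketched as a greedy bounding-box merge. This is an illustrative interpretation, not the paper's algorithm; the `gap` tolerance (how close boxes must be to merge) is an assumed parameter.

```python
def merge_boxes(boxes, gap=2):
    # Greedily merge candidate grapheme boxes (y0, x0, y1, x1) whose
    # gap-expanded extents overlap, approximating character grouping.
    merged = [list(b) for b in boxes]
    changed = True
    while changed:
        changed = False
        out = []
        while merged:
            a = merged.pop()
            for b in merged:
                if (a[0] - gap <= b[2] and b[0] - gap <= a[2] and
                        a[1] - gap <= b[3] and b[1] - gap <= a[3]):
                    # Absorb a into b (union of the two boxes).
                    b[0], b[1] = min(a[0], b[0]), min(a[1], b[1])
                    b[2], b[3] = max(a[2], b[2]), max(a[3], b[3])
                    changed = True
                    break
            else:
                out.append(a)
        merged = out
    return [tuple(b) for b in merged]
```

Two adjacent grapheme boxes collapse into one character box, while a distant box (e.g., the next character) stays separate.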

BETTER ASTROMETRIC DE-BLENDING OF GRAVITATIONAL MICROLENSING EVENTS BY USING THE DIFFERENCE IMAGE ANALYSIS METHOD

  • HAN CHEONGHO
    • Journal of The Korean Astronomical Society
    • /
    • v.33 no.2
    • /
    • pp.89-95
    • /
    • 2000
  • As an efficient way to detect blending in general gravitational microlensing events, measuring the shift of the source star image centroid caused by microlensing has been proposed. The conventional approach is to measure the difference between the positions of the source star's point spread function on images taken before and during the event (the PSF centroid shift, δθ_c,PSF). In this paper, we investigate the difference between the centroid positions measured on the reference image and on the subtracted image obtained with the difference image analysis method (the DIA centroid shift, δθ_c,DIA), and evaluate its usefulness for detecting blending relative to the conventional method based on δθ_c,PSF measurements. From this investigation, we find that the DIA centroid shift of an event is always larger than the PSF centroid shift. We also find that while δθ_c,PSF becomes smaller as the event amplification decreases, δθ_c,DIA remains constant regardless of the amplification. In addition, while δθ_c,DIA increases linearly with the blended light fraction, δθ_c,PSF peaks at a certain value of the blended light fraction and eventually decreases as the fraction increases further. Therefore, measuring δθ_c,DIA instead of δθ_c,PSF will be an even more efficient way to detect blending, especially for highly blended events, for which the uncertainties in the determined time scales are large, and for low-amplification events, for which the current method is highly inefficient.
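The qualitative behavior described in the abstract follows from standard blending algebra. The sketch below is a reconstruction under assumed notation (F_s the source flux, F_b the blend flux, A the amplification, Δθ the source-blend separation), not the paper's own derivation.

```latex
% Total observed flux and the blended light fraction:
F_{\rm tot}(t) = A(t)\,F_s + F_b, \qquad
f_b \equiv \frac{F_b}{F_s + F_b}.

% PSF centroid shift: the flux-weighted centroid during the event minus
% the baseline centroid. Subtracting the two weighted means gives
\delta\theta_{c,\rm PSF}
  = \frac{(A-1)\,F_s F_b}{(A F_s + F_b)(F_s + F_b)}\,\Delta\theta ,

% which vanishes as A -> 1 and peaks at an intermediate blend fraction.
% DIA centroid shift: the subtracted image contains only the variable
% flux (A-1)F_s, located exactly at the source position, so
\delta\theta_{c,\rm DIA}
  = \frac{F_b}{F_s + F_b}\,\Delta\theta
  = f_b\,\Delta\theta ,

% which is independent of A, linear in f_b, and always exceeds
% \delta\theta_{c,PSF} since (A-1)F_s/(A F_s + F_b) < 1.
```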


Face Detection Using Shapes and Colors in Various Backgrounds

  • Lee, Chang-Hyun;Lee, Hyun-Ji;Lee, Seung-Hyun;Oh, Joon-Taek;Park, Seung-Bo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.7
    • /
    • pp.19-27
    • /
    • 2021
  • In this paper, we propose a method for detecting characters in images and locating their facial regions, which consists of two tasks. First, we separate the two characters and detect the position of each character's face in the frame. For fast detection, we use You Only Look Once (YOLO), which finds faces in the image in real time, to extract the locations of the faces and mark them with object detection boxes. Second, we present three image processing methods to detect the exact face area based on the object detection boxes. Each method uses HSV values extracted from the region estimated by the detection figure to detect the face region of the characters, and the size and shape of the detection figure are varied to compare the accuracy of each method. Each face detection method is compared and analyzed against comparative data and image processing data for reliability verification. As a result, we achieved the highest accuracy, 87%, with the split rectangular method among the circular, rectangular, and split rectangular methods.
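The second stage (refining a detection box using HSV values sampled inside it) could be sketched as below: crop the box, take the median hue inside it as the face tone, and keep nearby, sufficiently saturated pixels. The tolerance values and the median-hue reference are illustrative assumptions, not the paper's settings.

```python
import colorsys
import numpy as np

def box_hsv_mask(rgb, box, h_tol=0.05, s_min=0.2):
    # box is (y0, x0, y1, x1) as produced by an object detector.
    # Crop it, convert to HSV, and mask pixels whose hue is close to
    # the median hue of the crop (a crude face-tone estimate).
    y0, x0, y1, x1 = box
    crop = rgb[y0:y1, x0:x1].astype(float) / 255.0
    hsv = np.array([[colorsys.rgb_to_hsv(*px) for px in row] for row in crop])
    h, s = hsv[..., 0], hsv[..., 1]
    ref_hue = np.median(h)
    return (np.abs(h - ref_hue) < h_tol) & (s > s_min)
```

Varying the shape of the sampled region (circle, rectangle, split rectangle) changes which pixels feed the hue estimate, which is what the paper's three methods compare.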

A Study on the Edge Detection using Variable Vector Depending on the Distribution of Gray-Level (밝기 분포도에 따라 가변 가능한 벡터를 이용한 에지 검출)

  • Lee, Chang-Young;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2012.05a
    • /
    • pp.130-132
    • /
    • 2012
  • The use of visual media has increased with the development of contemporary society. Various image processing methods exist to make use of this image information. Edge detection, one such method, is a technique for detecting parts of an image where brightness changes sharply. Existing methods detect edges with masks composed of constant values. Because they do not consider factors such as the location and direction of pixels in the image, their edge-detection performance is insufficient. Therefore, an algorithm is proposed that uses a variable vector adapted to the variation of brightness within a 3 × 3 pixel mask.
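One way to read "variable vector" is an operator whose response adapts to the local gray-level distribution rather than using constant coefficients alone. The sketch below normalizes a fixed Sobel-style gradient by the local 3 × 3 brightness range; this is only an interpretation of the idea, not the authors' operator.

```python
import numpy as np

def adaptive_edge(img):
    # Sobel-style gradients normalized by the local 3x3 brightness
    # range, so the effective mask varies with the local distribution.
    img = img.astype(float)
    p = np.pad(img, 1, mode='edge')
    # 3x3 neighborhood stack: shape (9, H, W); index = dy*3 + dx.
    win = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                    for dy in range(3) for dx in range(3)])
    local_range = win.max(axis=0) - win.min(axis=0)
    gx = (win[2] + 2*win[5] + win[8]) - (win[0] + 2*win[3] + win[6])
    gy = (win[6] + 2*win[7] + win[8]) - (win[0] + 2*win[1] + win[2])
    mag = np.hypot(gx, gy)
    return mag / (local_range + 1e-6)
```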


An efficient ship detection method for KOMPSAT-5 synthetic aperture radar imagery based on adaptive filtering approach

  • Hwang, JeongIn;Kim, Daeseong;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.1
    • /
    • pp.89-95
    • /
    • 2017
  • Ship detection in synthetic aperture radar (SAR) imagery has long been an active research topic and has many applications. In this paper, we propose an efficient filtering-based method for detecting ships in SAR imagery. The method performs ship masking using a median filter whose window accounts for the maximum ship size, and detects ships using a reference image, to which a non-local means (NL-means) filter is applied for speckle de-noising, and a difference image created by subtracting the median-filtered image from the reference image. Since the pixels of a ship in SAR imagery have values sufficiently higher than those of the surrounding sea, the detection process is built primarily on filtering that exploits this characteristic. The method is validated on KOMPSAT-5 (Korea Multi-Purpose Satellite-5) SAR imagery. According to the accuracy assessment, the overall accuracy over regions that do not include land is 76.79%, and the user accuracy is 71.31%. This demonstrates that the proposed method is suitable for detecting ships in SAR imagery and enables ships to be detected more easily and efficiently.
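The core of the approach (median-filter the scene to estimate the ship-free sea, subtract, and threshold the bright residuals) can be sketched as follows. This is a simplified illustration that omits the NL-means de-noising step; the window size and threshold are placeholder values, whereas the paper sizes the window from the maximum expected ship.

```python
import numpy as np

def median_filter(img, size=3):
    # Sliding-window median; the window should exceed the largest
    # expected ship so ships are removed from the sea estimate.
    r = size // 2
    p = np.pad(img.astype(float), r, mode='edge')
    win = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                    for dy in range(size) for dx in range(size)])
    return np.median(win, axis=0)

def detect_ships(img, threshold=5.0):
    # Difference between the reference image and its median-filtered
    # sea estimate; bright residuals are ship candidates.
    diff = img.astype(float) - median_filter(img)
    return diff > threshold
```

Because a ship occupies only a minority of any window, the median tracks the sea level and the ship survives the subtraction as a strong positive residual.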

Motion Estimation-based Human Fall Detection for Visual Surveillance

  • Kim, Heegwang;Park, Jinho;Park, Hasil;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.5 no.5
    • /
    • pp.327-330
    • /
    • 2016
  • Currently, the world's elderly population continues to grow at a dramatic rate. As the number of senior citizens increases, detecting falls has attracted increasing attention in visual surveillance systems. This paper presents a novel fall-detection algorithm using motion estimation and an integrated spatiotemporal energy map of the object region. The proposed method first extracts the human region using background subtraction. Next, an optical flow algorithm is applied to estimate motion vectors, and an energy map is generated by accumulating the detected human region over a certain period of time. A fall can then be detected using k-nearest neighbor (kNN) classification with the estimated motion information and the energy map. Experimental results show that the proposed algorithm can effectively detect a fall in any direction, including directions parallel to the camera's optical axis.
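The energy-map accumulation and kNN stages can be sketched as below. The width-to-height feature is a crude stand-in for the paper's motion-vector and energy-map features, and the training pairs are hypothetical; the sketch only shows the structure of the classification step.

```python
import numpy as np

def energy_map(masks):
    # Accumulate the detected human region over a time window; a fall
    # spreads energy horizontally, standing concentrates it vertically.
    return np.sum(np.stack(masks).astype(float), axis=0)

def aspect_feature(emap):
    # Width/height of the occupied region (illustrative feature only).
    ys, xs = np.nonzero(emap > 0)
    return (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1)

def knn_classify(feature, train, k=3):
    # train: list of (feature_value, label) pairs; majority vote of
    # the k nearest neighbors in feature space.
    neighbors = sorted(train, key=lambda t: abs(t[0] - feature))[:k]
    labels = [lab for _, lab in neighbors]
    return max(set(labels), key=labels.count)
```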