• Title/Summary/Keyword: Contour Detection (윤곽 검출)


Slant Estimation and Correction for the Off-Line Handwritten Hangul String Using Hough transform (Hough 변환을 이용한 오프라인 필기 한글 문자열의 기울기 추정 및 교정)

  • 이성환;이동준
    • Korean Journal of Cognitive Science
    • /
    • v.4 no.1
    • /
    • pp.243-260
    • /
    • 1993
  • This paper presents an efficient method for estimating and correcting the slant of off-line handwritten Hangul strings. In the proposed method, after contours are extracted from the input image, the Hough transform is applied to the contours to detect lines and estimate their slants. When the Hough transform is applied to the contours, pixels that are not part of the same stroke may be detected as a line; to exclude such lines from the slant estimation process, detected lines shorter than a threshold are eliminated. Experiments were performed on address images extracted from live envelopes provided by the Seoul Mail Center. The results show that, in estimating the slant of off-line handwritten Hangul strings, the proposed method is superior to previous methods, which had been developed for handwritten English strings.
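
The pipeline above (contours → Hough voting → discard short lines → dominant slant) can be sketched minimally. Everything here is illustrative, not the authors' implementation: the accumulator resolution, the `min_votes` cutoff standing in for the paper's length threshold, and the toy point set are all assumptions.

```python
import numpy as np

def estimate_slant(points, n_theta=180, n_rho=200, min_votes=5):
    """Estimate the dominant line through contour points with a minimal
    Hough transform.  Returns the Hough angle (degrees) of the dominant
    line's normal.  Accumulator cells with fewer than `min_votes`
    supporting pixels are discarded, mirroring the paper's removal of
    short, spurious lines before slant estimation."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = np.hypot(pts[:, 0].max(), pts[:, 1].max()) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        # rho = x*cos(theta) + y*sin(theta), one vote per (theta, rho) cell
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rhos + max_rho) / (2 * max_rho) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), bins] += 1
    acc[acc < min_votes] = 0          # drop short line candidates
    t, _ = np.unravel_index(acc.argmax(), acc.shape)
    return np.degrees(thetas[t])

# points lying on a 45-degree stroke; its Hough normal is at 135 degrees
pts = [(i, i) for i in range(20)]
angle = estimate_slant(pts)
```

Once the dominant angle is known, correction amounts to a shear of the image by the complementary slant.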

Sign Language Recognition System Using SVM and Depth Camera (깊이 카메라와 SVM을 이용한 수화 인식 시스템)

  • Kim, Ki-Sang;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.11
    • /
    • pp.63-72
    • /
    • 2014
  • In this paper, we propose a sign language recognition system using an SVM and a depth camera, focusing on Korean sign language. For this system, we suggest two methods: one in the hand feature extraction stage and the other in the recognition stage. The hand features consist of the number of fingers, the finger lengths, the radius of the palm, and the direction of the hand. To extract these features, we use the distance transform and build a hand skeleton, which is more accurate than the traditional contour-based method. To recognize hand postures, we build a decision tree over the hand features and, for better accuracy, use an SVM to determine the threshold values in the tree. The experimental results show that the suggested method is more accurate and faster both in extracting hand features and in recognizing hand postures.
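
The distance-transform step can be illustrated on a binary hand mask: the palm center is the interior pixel farthest from the hand boundary, and that distance is the palm radius. This is a brute-force sketch of the idea only; a real system would use an optimized transform (e.g. `scipy.ndimage.distance_transform_edt`), and the square "hand" below is a toy stand-in.

```python
import numpy as np

def palm_center_radius(mask):
    """Locate the palm as the interior point farthest from the hand
    boundary, via a brute-force Euclidean distance transform on a
    small binary mask."""
    mask = np.asarray(mask, dtype=bool)
    bg = np.argwhere(~mask)           # background pixel coordinates
    fg = np.argwhere(mask)            # hand pixel coordinates
    # distance from every hand pixel to its nearest non-hand pixel
    d = np.sqrt(((fg[:, None, :] - bg[None, :, :]) ** 2).sum(-1)).min(1)
    i = d.argmax()
    return tuple(fg[i]), d[i]         # palm centre, palm radius

# a solid 7x7 square "hand": the centre pixel is deepest inside
m = np.zeros((9, 9), dtype=bool)
m[1:8, 1:8] = True
center, radius = palm_center_radius(m)
```

The skeleton and finger measurements then follow from ridges of the same distance map.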

A Study on Implementation of the High Speed Feature Extraction System Based on Block Type Classification (블록 유형 분류 알고리즘 기반 고속 특징추출 시스템 구현에 관한 연구)

  • Lee, Juseong;An, Ho-Myoung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.3
    • /
    • pp.186-191
    • /
    • 2019
  • In this paper, we propose an implementation approach for a high-speed feature extraction algorithm. The proposed method is based on a block type classification algorithm, which reduces computation time when a target macro block is classified as a smooth block type containing no image features. Using 200 standard test images with a 64×64 macro block size, smooth blocks were quantitatively found to account for 29.5% of the total image area; that is, within standard test images containing various image information, the computational complexity can be reduced by 29.5%. When the proposed approach is applied to Canny edge detection, the entire edge detection pipeline can be skipped for smooth blocks: the 2D derivative filter, gradient magnitude/direction computation, non-maximal suppression, adaptive threshold calculation, and hysteresis thresholding. The operation time of feature detection is also expected to decrease when the block type classification algorithm is applied to other feature extraction algorithms in the same way.
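
The core idea, skipping the whole Canny pipeline for featureless macro blocks, can be sketched with a variance test. The variance criterion and its threshold are illustrative stand-ins for the paper's block classifier, not the published method.

```python
import numpy as np

def is_smooth_block(block, var_threshold=4.0):
    """Classify a 64x64 macro block as 'smooth' when its intensity
    variance falls below a threshold.  Smooth blocks carry no edges,
    so gradient computation, non-maximum suppression and hysteresis
    thresholding can all be skipped for them."""
    return float(np.var(block)) < var_threshold

flat = np.full((64, 64), 128.0)                    # featureless block
edgy = np.zeros((64, 64)); edgy[:, 32:] = 255.0    # block with a vertical edge
```

A driver loop would tile the image into macro blocks and run the edge detector only where `is_smooth_block` returns `False`.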

IMToon: Image-based Cartoon Authoring System using Image Processing (IMToon: 영상처리를 활용한 영상기반 카툰 저작 시스템)

  • Seo, Banseok;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.2
    • /
    • pp.11-22
    • /
    • 2017
  • This study proposes IMToon (IMage-based carToon), an image-based cartoon authoring system that uses image processing algorithms. IMToon allows general users to easily and efficiently produce the frames that make up an image-based cartoon. The authoring system is designed around two main functions: a cartoon effector and an interactive story editor. The cartoon effector automatically converts input images into cartoon-style images through image-based cartoon shading and outline drawing steps. Image-based cartoon shading takes the image of a desired scene from the user, separates the brightness information from the color model of the input image, simplifies it into a shading range with the desired number of steps, and recreates it as a cartoon-style image. The final cartoon-style image is then created in the outline drawing step, in which outlines obtained through edge detection are applied to the shaded image. The interactive story editor is used to enter speech balloons and subtitles in a dialog structure to create a scene of the completed cartoon that delivers a story, such as a webtoon or comic book. In addition, the cartoon effector is extended from still images to videos, so that it can be applied to both. Finally, various experiments verify that users can easily and efficiently produce the cartoons they want from images with the proposed IMToon system.
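
The two cartoon-effector steps, brightness quantization followed by outline drawing, can be sketched on a grayscale image. This is a toy version under stated assumptions: a plain gradient-magnitude test stands in for the edge detector, and the band count and edge threshold are made-up parameters.

```python
import numpy as np

def cartoonize(gray, levels=4):
    """Sketch of the two IMToon steps: collapse brightness into a few
    shading bands, then darken pixels on strong gradients to imitate
    the outline-drawing pass."""
    g = np.asarray(gray, dtype=float)
    # 1) image-based cartoon shading: quantize brightness to `levels` bands
    band = np.floor(g / 256.0 * levels)
    shaded = (band + 0.5) * (256.0 / levels)   # mid-value of each band
    # 2) outline drawing: darken pixels where the gradient is strong
    gy, gx = np.gradient(g)
    edges = np.hypot(gx, gy) > 32.0
    shaded[edges] = 0.0
    return shaded

img = np.zeros((8, 8)); img[:, 4:] = 200.0     # two flat regions, one border
toon = cartoonize(img)
```

A color pipeline would apply the same quantization to the luminance channel only, leaving hue untouched.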

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • v.55 no.5
    • /
    • pp.551-561
    • /
    • 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures and can be applied to investigating buried cultural properties and determining their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom in Gyeongju, South Korea. The major purpose of the feature extraction analyses is to identify the circular features left by building remains and the linear features left by ancient roads and fences. Feature extraction is implemented by applying the Canny edge detector and the Hough transform. We applied the Hough transform to the edge image produced by the Canny algorithm in order to determine the locations of the target features; however, the Hough transform requires different parameter settings for each survey sector. For image segmentation, we applied a connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled, although multiple labels are often assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a training-validation dataset by assigning each polygon either to a class associated with the buried relics or to a background class. With a random forest classifier, we find that the polygons of the LSMS segmentation layer can be successfully classified into polygons of buried relics and polygons of background. We therefore propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results when planning excavation processes.
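
The connected-component labeling step used to group GPR reflection pixels into candidate structures can be sketched as a 4-connected flood fill. Production work would use an optimized routine such as `scipy.ndimage.label`; this minimal version only illustrates the idea, and the tiny mask is a toy input.

```python
import numpy as np

def label_components(mask):
    """Minimal 4-connected component labelling via flood fill.
    Returns a label image (0 = background) and the component count."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                   # already part of an earlier component
        current += 1
        stack = [seed]
        while stack:                   # flood-fill one component
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = current
            stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return labels, current

m = np.array([[1, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1]], dtype=bool)
lab, n = label_components(m)
```

The over-segmentation noted in the abstract (several labels on one structure) would show up here as separate components whenever a structure's reflections are not pixel-connected.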

Automatic Left Ventricle Segmentation Algorithm using K-mean Clustering and Graph Searching on Cardiac MRI (K-평균 클러스터링과 그래프 탐색을 통한 심장 자기공명영상의 좌심실 자동분할 알고리즘)

  • Jo, Hyun-Wu;Lee, Hae-Yeoun
    • The KIPS Transactions:PartB
    • /
    • v.18B no.2
    • /
    • pp.57-66
    • /
    • 2011
  • To prevent cardiac diseases, quantifying cardiac function by analyzing blood volume and ejection fraction is important in routine clinical practice. This work has been performed manually, so it is costly and varies depending on the operator. In this paper, an automatic segmentation algorithm is presented for the left ventricle in cardiac magnetic resonance images. After the coil sensitivity of the MRI images is compensated, a K-means clustering scheme is applied to segment the blood area, and a graph searching scheme is employed to correct segmentation errors caused by coil distortions and noise. Applied to cardiac MRI images from 38 subjects to calculate blood volume and ejection fraction, and compared with manual contouring by experts and the GE MASS software, the presented algorithm achieves average accuracies of 6.2±5.6 mL, 2.9±3.0 mL, and 2.1±1.5% for the diastolic phase, systolic phase, and ejection fraction, respectively. Moreover, the presented algorithm minimizes the user intervention that was critical in automating the algorithms of previous research.
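
The clustering step can be illustrated with plain 1-D K-means on pixel intensities, separating the bright blood pool from darker myocardium. This is a sketch only: the published method also compensates coil sensitivity first and refines the result by graph searching, and the intensity values below are invented.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain 1-D K-means: alternate nearest-center assignment and
    center re-estimation on scalar pixel intensities."""
    rng = np.random.default_rng(seed)
    v = np.asarray(values, dtype=float)
    centers = rng.choice(v, size=k, replace=False)
    for _ in range(iters):
        assign = np.abs(v[:, None] - centers[None, :]).argmin(1)
        for j in range(k):
            if np.any(assign == j):        # guard against empty clusters
                centers[j] = v[assign == j].mean()
    return centers, assign

# bright blood (~220) vs darker muscle (~60)
pix = np.array([58, 60, 62, 218, 220, 222], dtype=float)
centers, assign = kmeans_1d(pix)
```

The blood mask is then simply the set of pixels assigned to the brighter center.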

A Study on the Generation of Ultrasonic Binary Image for Image Segmentation (Image segmentation을 위한 초음파 이진 영상 생성에 관한 연구)

  • Choe, Heung-Ho;Yuk, In-Su
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.6
    • /
    • pp.571-575
    • /
    • 1998
  • One of the most significant features of diagnostic ultrasonic instruments is that they provide real-time information on the movements of soft tissues. Echocardiography has been widely used for the diagnosis of heart diseases, since it shows real-time images of heart valves and walls. However, current ultrasonic images are deteriorated by speckle noise and image dropout, so it is very important to develop techniques that can enhance them. In this study, a technique that extracts enhanced binary images from echocardiograms was proposed. For this purpose, a digital moving-image file was made from an analog echocardiogram and each frame was stored as an 8-bit gray-level image. For efficient image processing, the region containing the heart septum and tricuspid valve was selected as the region of interest (ROI). Image enhancement filters and morphology filters were used to reduce speckle noise in the images. The procedure proposed in this paper produced binary images with enhanced contours compared to those from the conventional threshold technique and the original images; it could be further used for quantitative analysis of left ventricular wall motion in echocardiograms through easy detection of heart wall contours.
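
Morphological opening (erosion then dilation) is the classic filter for removing isolated speckle from a thresholded binary image while keeping larger structures, and can be sketched with a 3×3 structuring element. This stands in for the paper's morphology filters, whose exact kernels are not given in the abstract.

```python
import numpy as np

def _shift_stack(m):
    """Stack the nine 3x3-neighbourhood shifts of a binary image
    (zero-padded at the border)."""
    p = np.pad(m, 1)
    return np.stack([p[i:i + m.shape[0], j:j + m.shape[1]]
                     for i in range(3) for j in range(3)])

def binary_open(mask):
    """3x3 erosion followed by 3x3 dilation (morphological opening):
    isolated speckle pixels vanish, compact structures survive."""
    m = np.asarray(mask, dtype=bool)
    eroded = _shift_stack(m).all(0)    # erosion: whole neighbourhood set
    return _shift_stack(eroded).any(0) # dilation: any neighbour set

m = np.zeros((7, 7), dtype=bool)
m[2:5, 2:5] = True       # a solid 3x3 structure (kept)
m[0, 6] = True           # an isolated speckle pixel (removed)
opened = binary_open(m)
```

Applying the opening before contour extraction is what keeps the speckle from producing spurious wall contours.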


Analysis of size distribution of riverbed gravel through digital image processing (영상 처리에 의한 하상자갈의 입도분포 분석)

  • Yu, Kwonkyu;Cho, Woosung
    • Journal of Korea Water Resources Association
    • /
    • v.52 no.7
    • /
    • pp.493-503
    • /
    • 2019
  • This study presents a new method of estimating the size distribution of river-bed gravel through image processing. The analysis was done in two steps: first, images of individual grains were analyzed; then, grain particle segmentation of river-bed images was performed. In the first step, the relationships between grain features measured from images and those measured directly (long axes, intermediate axes, and projective areas) were compared. For this analysis, 240 gravel particles were collected at three river stations; all particles were measured with vernier calipers and weighed with scales. The measured data showed that the river gravel had shape factors of 0.514~0.585, and that the weight of a gravel particle had a stronger correlation with its projective area than with its long or intermediate axis. Using these results, we established an area-weight formula. In the second step, we calculated the projective areas of the river-bed gravels by detecting their edge lines with the ImageJ program, and converted these areas to a grain-size distribution using the formula established earlier. The proposed method was applied to three small and medium-sized rivers in Korea. Comparisons of the analyzed size distributions with measurements showed that the proposed method can estimate the median diameter within a fair error range. However, the estimated distributions deviated slightly from the observed values, which needs improvement in the future.
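
An area-weight relationship of the kind fitted here is typically a power law, W = a·A^b. The sketch below shows the conversion step only; the coefficients are made-up placeholders, not the published fit from the 240 measured gravels.

```python
import numpy as np

def weight_from_area(area_cm2, a=0.93, b=1.32):
    """Convert a grain's projective area (cm^2) to weight via a power
    law W = a * A**b.  Coefficients `a` and `b` are hypothetical
    placeholders for the paper's fitted area-weight formula."""
    return a * np.asarray(area_cm2, dtype=float) ** b

# projective areas measured from segmented grain outlines
areas = np.array([1.0, 4.0, 16.0])
weights = weight_from_area(areas)
```

Summing the estimated weights per size class then yields the grain-size (weight-passing) distribution.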

Studies of Automatic Dental Cavity Detection System as an Auxiliary Tool for Diagnosis of Dental Caries in Digital X-ray Image (디지털 X-선 영상을 통한 치아우식증 진단 보조 시스템으로써 치아 와동 자동 검출 프로그램 연구)

  • Huh, Jangyong;Nam, Haewon;Kim, Juhae;Park, Jiman;Shin, Sukyoung;Lee, Rena
    • Progress in Medical Physics
    • /
    • v.26 no.1
    • /
    • pp.52-58
    • /
    • 2015
  • The aim was to develop an automated dental cavity detection program for a new type of intra-oral dental X-ray imaging device: an auxiliary diagnosis system able to assist a dentist in identifying dental caries at an early stage and making an accurate diagnosis. The program is based on two algorithms: an image segmentation method to discriminate between a dental cavity and a normal tooth, and a computational method to analyze features of a tooth image and take advantage of them for the detection of dental cavities. In the present study, we first evaluated how accurately the DRLSE (Distance Regularized Level Set Evolution) method extracts the demarcation surrounding a dental cavity. To evaluate the ability of the developed algorithm to automatically detect dental cavities, seven tooth phantoms, from incisor to molar, were fabricated containing cavities of various forms, and the cavities in the phantom images were analyzed with the developed algorithm. Except for two cavities whose contours were only partially identified, the contours of 12 cavities were correctly discriminated by the program, which demonstrates the practical feasibility of the automatic dental lesion detection algorithm. However, an efficient and enhanced algorithm is required for application to actual dental diagnosis, since the shapes and conditions of dental caries are complicated and differ between individuals. In the future, the system will be improved by adding pattern recognition or machine learning based algorithms that can deal with information on tooth status.
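
The detection task itself, flagging radiolucent (dark) regions inside a tooth, can be illustrated with a crude thresholding stand-in. To be clear, this is not DRLSE: the level-set method the paper evaluates iteratively deforms a contour toward the cavity boundary, whereas this toy merely thresholds a synthetic phantom with invented intensities.

```python
import numpy as np

def find_cavity(tooth, dark_threshold=80):
    """Toy cavity detector: flag pixels darker than a threshold as
    cavity candidates.  A stand-in for the DRLSE contour evolution
    actually used in the paper."""
    tooth = np.asarray(tooth, dtype=float)
    return tooth < dark_threshold

phantom = np.full((16, 16), 200.0)     # bright enamel/dentine
phantom[6:10, 6:10] = 40.0             # dark (radiolucent) cavity region
mask = find_cavity(phantom)
```

A level-set method improves on this by producing a single smooth, closed contour rather than a scattered pixel mask.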

A Study on Projection Image Restoration by Adaptive Filtering (적응적 필터링에 의한 투사영상 복원에 관한 연구)

  • 김정희;김광익
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.2
    • /
    • pp.119-128
    • /
    • 1998
  • This paper describes a filtering algorithm that employs a priori information on SPECT lesion detectability to filter degraded projection images prior to backprojection reconstruction. In this algorithm, we determined m minimum detectable lesion sizes (MDLSs) by assuming m object contrasts uniformly chosen in the range 0.0-1.0, based on a signal/noise model that expresses the detection capability of SPECT in terms of physical factors. A best estimate of the given projection image is formed as a weighted combination of the subimages from m optimal filters, each designed to maximize the local S/N ratio for its MDLS lesion. These subimages show a relatively larger resolution recovery effect and a relatively smaller noise reduction effect as the MDLS decreases, and the weighting of each subimage is controlled by the difference between the subimage and the maximum-resolution-recovered projection image. The proposed filtering algorithm was tested on SPECT image reconstruction problems and produced good results. In particular, it showed an adaptive effect: it approximately averages the filter outputs in homogeneous areas, while in textured lesion areas of the reconstructed image it depends sensitively on each filter's contrast-preserving/enhancing strength.
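
The adaptive combination idea, weighting each filtered subimage by its departure from the maximum-resolution subimage, can be sketched as follows. Simple box averages stand in for the paper's m S/N-optimal filters, and the exponential weighting rule is an assumption for illustration, not the published weighting scheme.

```python
import numpy as np

def box_smooth(img, k):
    """Box filter of half-width k (a stand-in for one of the paper's
    MDLS-optimal filters), using edge padding."""
    p = np.pad(img, k, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dr in range(-k, k + 1):
        for dc in range(-k, k + 1):
            out += p[k + dr: k + dr + img.shape[0],
                     k + dc: k + dc + img.shape[1]]
    return out / (2 * k + 1) ** 2

def adaptive_combine(img, ks=(0, 1, 2)):
    """Blend several smoothed subimages, down-weighting each pixel of a
    subimage by its distance from the maximum-resolution subimage, so
    smoothing dominates in flat areas and resolution is preserved near
    structure."""
    subs = [box_smooth(np.asarray(img, float), k) if k
            else np.asarray(img, float) for k in ks]
    ref = subs[0]                                  # max-resolution subimage
    w = np.stack([np.exp(-np.abs(s - ref)) for s in subs])
    w /= w.sum(0)                                  # normalize weights per pixel
    return (w * np.stack(subs)).sum(0)

proj = np.full((8, 8), 5.0)         # a flat (homogeneous) projection
recon = adaptive_combine(proj)      # flat regions are simply averaged
```

In homogeneous regions all subimages agree, so the weights equalize and the output is the plain average; near structure the heavily smoothed subimages deviate from the reference and lose weight.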
