• Title/Summary/Keyword: Segmentation method


A Fast Iris Region Finding Algorithm for Iris Recognition (홍채 인식을 위한 고속 홍채 영역 추출 방법)

  • 송선아;김백섭;송성호
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.9
    • /
    • pp.876-884
    • /
    • 2003
  • It is essential to identify both the pupil and iris boundaries for iris recognition. The circular edge detector proposed by Daugman is the most common and powerful method for iris region extraction. The method is accurate but computationally expensive, since it relies on an exhaustive search. Heuristic methods have been proposed to reduce the computational time, but they are not as accurate as Daugman's. In this paper, we propose a pupil and iris boundary finding algorithm that is faster than, and as accurate as, Daugman's. The proposed algorithm searches for the boundaries using Daugman's circular edge detector but reduces the search region using problem-domain knowledge. To find the pupil boundary, the search is restricted to the region between the maximum and minimum bounding circles within which the pupil resides; these bounding circles are obtained from the binarized pupil image. Two initial iris boundary points are then obtained from the horizontal line passing through the center of the pupil region found above. These initial boundary points, together with the pupil center, define two further bounding circles, and the iris boundary is searched for within them. Experiments show that the proposed algorithm is faster than Daugman's and more accurate than conventional heuristic methods.
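The core of the circular edge detector the abstract refers to can be sketched in a few lines. This is an illustrative simplification (fixed center, discrete radii, coarse angular sampling), not the authors' implementation; all function names are my own:

```python
import numpy as np

def circular_mean(img, x0, y0, r, n_samples=64):
    """Mean intensity sampled along a circle of radius r centered at (x0, y0)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def best_boundary_radius(img, x0, y0, r_min, r_max):
    """Radius where the radial derivative of the circular integral peaks,
    i.e. the strongest circular edge (the core of Daugman's operator)."""
    radii = np.arange(r_min, r_max)
    means = np.array([circular_mean(img, x0, y0, r) for r in radii])
    jumps = np.abs(np.diff(means))           # discrete radial derivative
    return int(radii[np.argmax(jumps) + 1])  # radius just past the biggest jump
```

Restricting `r_min`/`r_max` to the bounding circles described in the abstract is exactly what shrinks the exhaustive search.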

Analysis of Land Cover Characteristics with Object-Based Classification Method - Focusing on the DMZ in Inje-gun, Gangwon-do - (객체기반 분류기법을 이용한 토지피복 특성분석 - 강원도 인제군의 DMZ지역 일원을 대상으로 -)

  • Na, Hyun-Sup;Lee, Jung-Soo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.2
    • /
    • pp.121-135
    • /
    • 2014
  • Object-based classification methods provide a valid alternative to traditional pixel-based methods. This study reports the results of an object-based classification of land cover in the demilitarized zone (DMZ) of Inje-gun. We used land cover classes (7 main-category classes and 13 sub-category classes) selected from the criteria of the Korea Ministry of Environment. The mean and standard deviation of the spectral values and the GLCM homogeneity were chosen to map land cover types in a hierarchical approach using the nearest-neighbor method. We then identified the distributional characteristics of land cover with respect to three topographic characteristics (altitude, slope gradient, and distance from the Southern Limit Line (SLL)) within the DMZ. The results showed that scale 72, shape 0.2, color 0.8, compactness 0.5, and smoothness 0.5 were the optimal weight values, and that scale, shape, and color were the most influential parameters in image segmentation. Forest (92%) was the main land cover type in the DMZ, followed by grassland (5%) and urban area (2%); among the forest sub-categories, broadleaf forest (44%), mixed forest (42%), and coniferous forest (6%) dominated. Facilities and roads had higher density within 2 km of the SLL, while paddies, fields, and bare land were distributed largely beyond 6 km from the SLL. In addition, land cover differed clearly by topographic characteristics: forest had higher density above an altitude of 600 m and a slope gradient of $30^{\circ}$, while agricultural land, bare land, and grassland were distributed mainly below these values.
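GLCM homogeneity, one of the features the study used for segmentation, can be computed generically as below. This is a textbook-style sketch (my own quantization choices), not the study's eCognition workflow:

```python
import numpy as np

def glcm_homogeneity(img, dx=1, dy=0, levels=8):
    """GLCM homogeneity: sum of P(i, j) / (1 + |i - j|) over the
    co-occurrence matrix of quantized gray levels at offset (dx, dy)."""
    g = np.asarray(img, dtype=float)
    q = np.minimum((g / (g.max() + 1e-9) * levels).astype(int), levels - 1)
    h, w = q.shape
    src = q[:h - dy, :w - dx].ravel()    # pixel
    dst = q[dy:, dx:].ravel()            # its neighbor at offset (dx, dy)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (src, dst), 1.0)     # accumulate co-occurrence counts
    glcm /= glcm.sum()                   # normalize to probabilities
    i, j = np.indices((levels, levels))
    return float((glcm / (1.0 + np.abs(i - j))).sum())
```

A perfectly uniform region scores 1.0; a high-frequency texture such as a checkerboard scores near the minimum.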

Object-Based Image Retrieval Using Feature Analysis and Fractal Dimension (특징 분석과 프랙탈 차원을 이용한 객체 기반 영상검색)

  • 이정봉;박장춘
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.2
    • /
    • pp.173-186
    • /
    • 2004
  • This paper proposes a content-based retrieval system that performs image retrieval through effective feature extraction of semantically significant objects, based on the characteristics of the human visual system. To detect the object region of interest first, a region that is comparatively large, clearly different from the background color, and located near the middle of the image is judged to be the major meaningful object. To capture the intrinsic features of the image, the total length of the object contour is partitioned into normalized segments, and the cumulative sum of the declination-difference vectors of the contour segments and the signature of the bisected object are extracted, in a form invariant to rotation and scale changes of the object. Starting from this shape feature, retrieval robust to translation, rotation, and scaling is achieved by combining texture, color, and eccentricity information when measuring similarity; the method also responds less sensitively to distortion of object features caused by partial change or damage to the region. In addition, weighting the similarity of image features differently according to the complexity relationship between objects, measured by the fractal dimension obtained with the box-counting method, minimized incorrect retrievals and yielded a more efficient retrieval rate.
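The box-counting dimension used here to measure object complexity has a standard estimator: count the boxes of side s that touch the object, and regress log N(s) on log(1/s). A minimal sketch, independent of the paper's retrieval pipeline:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary mask:
    slope of log N(s) versus log(1/s), where N(s) counts boxes of side s
    containing at least one foreground pixel."""
    counts = []
    for s in sizes:
        m = (mask.shape[0] // s) * s          # trim to a multiple of s
        n = (mask.shape[1] // s) * s
        boxes = mask[:m, :n].reshape(m // s, s, n // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return float(slope)
```

A filled region comes out near 2, a thin contour near 1, and fractal-like boundaries fall in between, which is what makes the value usable as a complexity weight.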


The Segmented Polynomial Curve Fitting for Improving Non-linear Gamma Curve Algorithm (비선형 감마 곡선 알고리즘 개선을 위한 구간 분할 다항식 곡선 접합)

  • Jang, Kyoung-Hoon;Jo, Ho-Sang;Jang, Won-Woo;Kang, Bong-Soon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.3
    • /
    • pp.163-168
    • /
    • 2011
  • In this paper, we propose a non-linear gamma curve algorithm for gamma correction. The previous non-linear gamma curve algorithm generates the curve by least-squares polynomial fitting using the Gauss-Jordan inverse matrix. However, the previous algorithm has some weaknesses: truncation errors occur when the coefficients of a higher-degree polynomial are computed through the inverse matrix, and the least-squares polynomial is accurate only if the input sample points are spaced at regular intervals on a 10-bit scale. To compensate for these weaknesses, we compute accurate polynomial coefficients using the eigenvalues and orthogonal components of the matrix obtained from the singular value decomposition (SVD) and QR decomposition of the Vandermonde matrix. We also segment the input data, perform polynomial curve fitting on each segment, and merge the fitting results. Compared with the previous method in terms of mean square error (MSE) and standard deviation (STD), the proposed segmented polynomial curve fitting is highly accurate: the MSE within the least significant bit (LSB) error range is approximately $10^{-9}$ and the STD is about $10^{-5}$.
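The two ideas in the abstract, a numerically stable Vandermonde solve and per-segment fitting, can be sketched with numpy's SVD-backed `lstsq` (which avoids forming the ill-conditioned normal-equation inverse). A sketch under my own segmentation choices, not the authors' exact scheme:

```python
import numpy as np

def fit_segment(x, y, degree=3):
    """Least-squares polynomial coefficients via SVD-backed lstsq on a
    Vandermonde matrix, rather than an explicit inverse."""
    coeffs, *_ = np.linalg.lstsq(np.vander(x, degree + 1), y, rcond=None)
    return coeffs

def segmented_fit(x, y, n_segments=4, degree=3):
    """Split the input range into equal segments and fit each separately;
    returns (lo, hi, coeffs) per segment."""
    edges = np.linspace(x.min(), x.max(), n_segments + 1)
    fits = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x >= lo) & (x <= hi)
        fits.append((lo, hi, fit_segment(x[sel], y[sel], degree)))
    return fits
```

Segmenting matters for gamma curves because a single low-degree polynomial cannot track both the steep region near zero and the flat region near full scale.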

Intercomparison of Change Point Analysis Methods for Identification of Inhomogeneity in Rainfall Series and Applications (강우자료의 비동질성 규명을 위한 변동점 분석기법의 상호비교 및 적용)

  • Lee, Sangho;Kim, Sang Ug;Lee, Yeong Seob;Sung, Jang Hyun
    • Journal of Korea Water Resources Association
    • /
    • v.47 no.8
    • /
    • pp.671-684
    • /
    • 2014
  • Change point analysis is an efficient tool for understanding the fundamental information in hydro-meteorological data such as rainfall, discharge, and temperature. In particular, change points in future rainfall data identified by a reliable detection method can affect the prediction of flood and drought occurrence, because well-detected change points provide a key to resolving the non-stationarity and inhomogeneity problems caused by climate change. Therefore, this study compares the performance of three change point detection methods: the cumulative sum (CUSUM) method, the Bayesian change point (BCP) method, and segmentation by dynamic programming (DP). After assessing the performance of these methods on three types of synthetic series, the two more reliable methods were applied to observed and future rainfall data at five rainfall gauges in South Korea. The comparison suggested that BCP (with 0.9 posterior probability) is the best detection method and that DP can also be reasonably recommended, and that the change points found by BCP (with 0.9 posterior probability) and DP in the north-eastern part of South Korea are plausible. The results of this study can be used to resolve non-stationarity problems in hydrological modeling that considers inhomogeneity or non-stationarity.
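The CUSUM idea compared in the study has a very compact single-change-point form: the cumulative sum of deviations from the overall mean drifts away from zero until the change and back afterwards, so its extremum marks the most likely change point. A minimal sketch (not the paper's full CUSUM test, which also involves significance thresholds):

```python
import numpy as np

def cusum_change_point(x):
    """Most likely single change point in the mean: the index where the
    cumulative sum of deviations from the overall mean is farthest from zero."""
    s = np.cumsum(x - np.mean(x))
    return int(np.argmax(np.abs(s)))
```

On a series whose mean shifts once, the returned index sits at the boundary between the two regimes.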

Fast information extraction algorithm for object-based MPEG-4 application from MPEG-2 bit-stream (MPEG-2 비트열로부터 객체 기반 MPEG-4 응용을 위한 고속 정보 추출 알고리즘)

  • 양종호;원치선
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.12A
    • /
    • pp.2109-2119
    • /
    • 2001
  • In this paper, a fast information extraction algorithm for object-based MPEG-4 applications from an MPEG-2 bit-stream is proposed. For conversion to object-based MPEG-4, information such as object images, shape images, macroblock motion vectors, and header information must be extracted from the MPEG-2 bit-stream; using this extracted information enables fast conversion. The proposed object extraction algorithm has two important components: motion vector extraction from the MPEG-2 bit-stream and the watershed algorithm. Objects are extracted with the user's assistance in an intra frame and tracked in the following inter frames; if the result is unsatisfactory for a fast-moving object, the user can intervene to correct the segmentation. The algorithm thus consists of two steps: object extraction in intra frames and object tracking in inter frames. In the extraction step, the user extracts a semantic object directly using block classification and watersheds. The tracking step follows the object in subsequent frames, based on a boundary fitting method that uses motion vectors, the object mask, and modified watersheds. Experimental results show that the proposed method achieves fast conversion from an MPEG-2 bit-stream to object-based MPEG-4 input.
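The "classify blocks by motion, then propagate the mask between frames" flow can be illustrated on a macroblock grid. This is a crude stand-in for the paper's boundary fitting with modified watersheds; array layout and thresholds are my own assumptions:

```python
import numpy as np

def classify_blocks(mv, threshold=0.5):
    """Mark macroblocks whose motion magnitude exceeds a threshold as
    moving-object candidates; mv has shape (rows, cols, 2) = (dx, dy)."""
    return np.hypot(mv[..., 0], mv[..., 1]) > threshold

def track_mask(mask, mv):
    """Shift the object mask by the median motion vector of its blocks,
    a simple inter-frame tracking step (no boundary refinement)."""
    if not mask.any():
        return mask
    dx = int(np.median(mv[mask][:, 0]))
    dy = int(np.median(mv[mask][:, 1]))
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
```

Using the median vector keeps the tracking robust to a few noisy macroblock vectors inside the object.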


Analysis of Shadow Effect on High Resolution Satellite Image Matching in Urban Area (도심지역의 고해상도 위성영상 정합에 대한 그림자 영향 분석)

  • Yeom, Jun Ho;Han, You Kyung;Kim, Yong Il
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.21 no.2
    • /
    • pp.93-98
    • /
    • 2013
  • Multi-temporal high resolution satellite images are essential data for efficient city analysis and monitoring. Yet even when acquired over the same location, whether by identical or different sensors, such multi-temporal images show geometric inconsistencies, so matching points between the images must be extracted to register them. In images of urban areas, however, it is difficult to extract matching points accurately because buildings, trees, bridges, and other artificial objects cast shadows over wide areas, and these shadows differ in intensity and direction across the multi-temporal images. In this study, we analyze the effect of shadows on the matching of high resolution satellite images of an urban area, using the Scale-Invariant Feature Transform (SIFT), a representative matching point extraction method, together with an automatic shadow extraction method. Shadow segments are extracted using spatial and spectral attributes derived from image segmentation, and shadow adjacency is taken into account using a buffer along building edges. SIFT matching points that fall in shadow segments are eliminated from the matching point pairs before image matching is performed. Finally, we evaluate the quality of the matching points and the image matching results, both visually and quantitatively, to analyze the effect of shadows on the matching of high resolution satellite images.
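The filtering step, dropping candidate matching points that land in shadow segments, is simple to sketch. The shadow cue below is a crude intensity threshold standing in for the paper's spatial-plus-spectral segmentation; function names are my own:

```python
import numpy as np

def extract_shadow_mask(img, threshold=60):
    """Crude spectral cue only: dark pixels as shadow candidates.
    (The study also uses spatial attributes and building-edge buffers.)"""
    return np.asarray(img) < threshold

def drop_shadow_matches(points_xy, shadow_mask):
    """Remove candidate matching points (x, y) that fall inside shadow segments."""
    pts = np.asarray(points_xy, dtype=int)
    keep = ~shadow_mask[pts[:, 1], pts[:, 0]]
    return pts[keep]
```

In practice the SIFT keypoints would come from a detector such as OpenCV's; only the filtering is shown here.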

Studies of Automatic Dental Cavity Detection System as an Auxiliary Tool for Diagnosis of Dental Caries in Digital X-ray Image (디지털 X-선 영상을 통한 치아우식증 진단 보조 시스템으로써 치아 와동 자동 검출 프로그램 연구)

  • Huh, Jangyong;Nam, Haewon;Kim, Juhae;Park, Jiman;Shin, Sukyoung;Lee, Rena
    • Progress in Medical Physics
    • /
    • v.26 no.1
    • /
    • pp.52-58
    • /
    • 2015
  • An automated dental cavity detection program for a new type of intra-oral dental X-ray imaging device was developed as an auxiliary diagnosis system to help dentists identify dental caries at an early stage and make accurate diagnoses. The program is built on two algorithms: an image segmentation method that discriminates between a dental cavity and normal tooth tissue, and a computational method that analyzes the features of a tooth image and exploits them to detect cavities. In the present study, we first evaluated how accurately the DRLSE (Distance Regularized Level Set Evolution) method extracts the demarcation surrounding a dental cavity. To evaluate the ability of the developed algorithm to detect dental cavities automatically, 7 tooth phantoms, from incisor to molar, were fabricated containing cavities of various forms, and the cavities in the phantom images were analyzed with the developed algorithm. Except for two cavities whose contours were only partially identified, the contours of 12 cavities were correctly discriminated by the program, demonstrating the practical feasibility of the automatic dental lesion detection algorithm. However, a more efficient and enhanced algorithm is required for application to actual dental diagnosis, since the shapes and conditions of dental caries are complicated and differ between individuals. In the future, the system will be improved by adding pattern recognition or machine learning based algorithms that can incorporate information on tooth status.
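Once a candidate region has been segmented (by DRLSE or otherwise), the second stage's feature analysis can be as simple as measuring the region against its surroundings. A hypothetical illustration of such features, not the paper's actual feature set:

```python
import numpy as np

def cavity_features(img, mask):
    """Simple features of a segmented candidate region: area, mean intensity,
    and contrast with the surrounding tissue."""
    area = int(mask.sum())
    inside = float(img[mask].mean())
    outside = float(img[~mask].mean())
    return {"area": area, "mean": inside, "contrast": outside - inside}
```

A classifier (or, as the abstract anticipates, a learned model) would then threshold or combine such features to flag a region as a cavity.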

Automation of Building Extraction and Modeling Using Airborne LiDAR Data (항공 라이다 데이터를 이용한 건물 모델링의 자동화)

  • Lim, Sae-Bom;Kim, Jung-Hyun;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.5
    • /
    • pp.619-628
    • /
    • 2009
  • LiDAR provides rapid data acquisition and useful information for reconstructing the surface of the Earth. However, extracting information from LiDAR data is not an easy task, because the data consist of irregularly distributed 3D point clouds and lack semantic and visual information. This paper proposes methods for the automatic extraction of buildings and detailed 3D modeling using airborne LiDAR data. In preprocessing, noise and unnecessary data were removed by iterative surface fitting, and ground and non-ground data were then separated by histogram analysis. Building footprints were extracted by tracing points on the building boundaries, and refined footprints were obtained by regularization based on building hypotheses. The accuracy of the footprints was evaluated against 1:1,000 digital vector maps; the horizontal RMSE was 0.56 m for the test areas. Finally, a method for 3D modeling of roof superstructures was developed: statistical and geometric information of the LiDAR data on building roofs was analyzed to segment the data and determine roof shapes, and the superstructures were modeled by 3D analytical functions derived by the least squares method. The accuracy of the 3D modeling was estimated using simulated data; the RMSEs were 0.91 m, 1.43 m, 1.85 m, and 1.97 m for flat, sloped, arch, and dome shapes, respectively. These results show that the methods developed in this study effectively automate the 3D building modeling process.
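The iterative-surface-fitting idea used in preprocessing can be sketched with a least-squares plane: seed a ground estimate, fit, keep points near the fitted surface, and repeat. A simplification of the paper's method (single global plane, fixed tolerance; both are my assumptions):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through Nx3 points (SVD-backed)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

def iterative_ground_filter(points, tol=0.3, iters=3):
    """Seed with the lower half of heights, then iteratively refit the
    ground plane and keep points within tol of it (ground candidates)."""
    z = points[:, 2]
    keep = z <= np.percentile(z, 50)
    for _ in range(iters):
        a, b, c = fit_plane(points[keep])
        resid = np.abs(z - (a * points[:, 0] + b * points[:, 1] + c))
        keep = resid < tol
    return keep
```

Real terrain needs piecewise surfaces rather than one plane, but the refit-and-reselect loop is the same; non-ground returns such as roofs fall out as large residuals.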

The Study for the Improvement of the Informal Science Education Program of the Gwachon National Science Museum Based on the Participant Satisfaction (교육프로그램 참가자 만족도 조사로 본 국립과천과학관의 비형식 과학교육프로그램 운영 방향 연구)

  • Kim, Yi-sul;Lee, Sun Hee;Sohn, Jungjoo;Kim, Jung Bok;Kweon, Hyosun
    • Journal of Science Education
    • /
    • v.34 no.2
    • /
    • pp.279-290
    • /
    • 2010
  • This study investigates directions for managing an informal science education center through a survey of participant satisfaction and its implications. The study site was the Gwacheon National Science Museum in Gyeonggi Province, whose $4,322m^2$ of education space allows education to be conducted effectively and whose education programs attract more than 10,000 attendees per year. 87 students attending the education programs and 78 of their parents took part in the satisfaction survey. The results show that most participants want the informal science education center to complement school education in promoting scientific literacy, heuristic methods, and scientific attitudes. Areas for improvement were feedback on student activities, differentiation of the education programs by grade, publicity for the programs, and better teaching methods suited to each class subject. The findings suggest that developing long-term programs for continuing participants, limiting the number of participants appropriately, developing teaching guide manuals, and improving program publicity are required.
