• Title/Abstract/Keyword: Automatic thresholding

Search results: 95

택배 자동 분류를 위한 주소영역 검출 알고리즘 (Destination address block locating algorithm for automatic classification of packages)

  • 김봉석;김승진;정윤수;임성운;노철균;원철호;조진호;이건일
    • 센서학회지, Vol. 12 No. 3, pp. 128-138, 2003
  • In this study, an address-region locating algorithm is proposed for an automated parcel-sorting system. Because the input image is very large, a region of interest (ROI) covering the delivery label is first extracted to shorten processing time, and all subsequent steps are performed within this ROI. The algorithm exploits a characteristic of delivery labels: the address region is enclosed by a printed border. Thresholding and labeling extract the border of the address region and the remaining structures as separate connected components. The border enclosing the address region is then selected from among these components using its geometric characteristics. Finally, the address region is obtained by a logical AND between the original image and the separated border region.
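
The pipeline described above (thresholding, connected-component labeling, geometric selection of the label border, and a final logical AND) could look roughly like the following sketch. The Otsu binarization, the rectangularity test, and the function name extract_address_region are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def extract_address_region(roi_gray):
    # Binarize the ROI; Otsu picks the threshold automatically (assumption:
    # the paper does not specify this particular thresholding rule).
    _, binary = cv2.threshold(roi_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Label connected components; the printed address border becomes one of them.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)

    best, best_box = None, 0
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        extent = area / float(w * h)
        # Geometric cue: a thin rectangular frame fills little of its own
        # bounding box, while the box itself is large.
        if w * h > best_box and extent < 0.3:
            best, best_box = i, w * h
    if best is None:
        return None

    # Keep only the area enclosed by the selected border and AND it with the ROI.
    x, y, w, h, _ = stats[best]
    mask = np.zeros_like(roi_gray)
    mask[y:y + h, x:x + w] = 255
    return cv2.bitwise_and(roi_gray, roi_gray, mask=mask)
```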

흉부 CT 영상의 밝기값 정보를 사용한 폐구조물 자동 분할 (Automatic Segmentation of Pulmonary Structures using Gray-level Information of Chest CT Images)

  • 임예니;홍헬렌
    • 한국정보과학회논문지:소프트웨어및응용, Vol. 33 No. 11, pp. 942-952, 2006
  • This paper proposes a method for automatically segmenting pulmonary structures using the gray-level information of chest CT images. The proposed method consists of the following five steps. First, a threshold is computed with an optimal thresholding technique so that structures can be separated by their gray-level differences. Second, inverse operations of 2-D region growing are applied to the chest CT images to segment the thorax from the background and then the airways and lungs from the thorax; other regions with similar gray levels are removed by 3-D connected-component labeling. Third, 3-D branch-based region growing is applied to segment the trachea and the left and right bronchi. Fourth, an accurate lung region is obtained by subtracting the airway image from the combined airway-and-lung image. Finally, a threshold is computed by histogram analysis, and gray-level thresholding is applied to the lungs to segment the pulmonary vessels. To verify the accuracy of the method, visual assessment was performed on the segmentation results for the lungs, airways, and pulmonary vessels, and the airway segmentation obtained with the proposed 3-D branch-based region growing was compared with that of conventional region growing. Experimental results show that the proposed method automatically and accurately extracts the lungs, airways, and pulmonary vessels.
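
The first step above, finding a threshold from the gray-level difference between lung and body voxels, can be illustrated with an iterative (isodata-style) optimal-threshold sketch; the array layout and tolerance below are assumptions, and the paper's exact formulation may differ.

```python
import numpy as np

def optimal_threshold(volume, tol=0.5, max_iter=100):
    """Iteratively split voxels by a threshold and move it to the midpoint
    of the two class means until it converges (isodata-style)."""
    t = float(volume.mean())          # initial guess: global mean intensity
    for _ in range(max_iter):
        low, high = volume[volume <= t], volume[volume > t]
        if low.size == 0 or high.size == 0:
            break
        new_t = 0.5 * (low.mean() + high.mean())
        if abs(new_t - t) < tol:
            break
        t = new_t
    return t
```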

버섯 전후면과 꼭지부 상태의 자동 인식 (Automatic Recognition of the Front/Back Sides and Stalk States for Mushrooms(Lentinus Edodes L.))

  • 황헌;이충호
    • Journal of Biosystems Engineering, Vol. 19 No. 2, pp. 124-137, 1994
  • Visual features of a mushroom (Lentinus Edodes L.) are critical in grading and sorting, as they are for most agricultural products. Because of its complex and varied visual features, grading and sorting of mushrooms have been done manually by human experts. To realize automatic handling and grading of mushrooms in real time, a computer vision system should be utilized and efficient, robust processing of the camera-captured visual information provided. Since the visual features of a mushroom are distributed over the front and back sides, recognizing the sides and the state of the stalk, including the stalk orientation, from the captured image is a prime step in the automatic task processing. In this paper, an efficient and robust recognition process identifying the front and back sides and the state of the stalk was developed and its performance was compared with other recognition trials. First, recognition was attempted with a rule set built from experimental heuristics using quantitative features such as geometry and texture extracted from the segmented mushroom image. Then neural-network-based learning recognition was performed without extracting quantitative features. As network inputs, the segmented binary image obtained from combined-type automatic thresholding was tested first, and then the gray-valued raw camera image was used directly. The state of the stalk seriously affects the measured size of the mushroom cap; when the effect is severe, the stalk should be excluded when sizing the cap. This paper also presents a stalk-removal process followed by boundary regeneration of the cap image. The neural-network-based processing of the gray-valued raw image gave successful results for the recognition task. The technology developed through this research may open a new way of quality inspection and sorting, especially for agricultural products whose visual features are fuzzy and not uniquely defined.
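
As a rough illustration of the two network-input variants compared above, the sketch below builds both a binary image from automatic (here, Otsu) thresholding and a normalized raw gray image and flattens them for a classifier; the paper's combined-type thresholding and network topology are not reproduced, so treat this only as an assumption-laden outline.

```python
import cv2
import numpy as np

def make_network_inputs(gray):
    # Segmented binary input (Otsu stands in for the combined-type
    # automatic thresholding used in the paper).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    x_binary = (binary.flatten() / 255.0).astype(np.float32)
    # Raw gray-valued input, normalized to [0, 1].
    x_gray = (gray.flatten() / 255.0).astype(np.float32)
    return x_binary, x_gray
```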


적외선 영상 표적추적 성능 개선을 위한 적응적인 자동문턱치 산출 기법 연구 (Adaptive Automatic Thresholding in Infrared Image Target Tracking)

  • 김태한;송택렬
    • 제어로봇시스템학회논문지, Vol. 17 No. 6, pp. 579-586, 2011
  • Determining appropriate thresholds in various environments is critical for the image processing of IIR (Imaging Infrared) seekers to achieve improved guidance performance in missile systems. In this paper, we propose automatic threshold determination methods that extract definite target signals in an EOCM (Electro-Optical Countermeasures) environment with a low SNR (Signal-to-Noise Ratio). In particular, the thresholds obtained with the Otsu method are found to be too low to extract target signals, so we suggest a Shifted Otsu method to solve this problem. We further improve target-signal extraction by adjusting the Shifted Otsu threshold according to the TBR (Target-to-Background Ratio). The suggested method is tested on real IIR images and the results are compared with the Otsu method. The HPDAF (Highest Probabilistic Data Association Filter), which selects the target-originated measurements by taking into account both signal intensity and statistical distance information, is applied in this study.
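
The core idea, computing the Otsu threshold from the intensity histogram and then shifting it upward so weak targets are not merged into the background, might be sketched as below. The specific shift rule (scaling with the TBR) is an illustrative assumption, not the paper's formula.

```python
import numpy as np

def otsu_threshold(gray):
    # Classical Otsu: maximize between-class variance over an 8-bit histogram.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def shifted_otsu_threshold(gray, tbr, k=0.5):
    # Shift the Otsu threshold upward by a fraction of the remaining dynamic
    # range, modulated by the target-to-background ratio (illustrative rule).
    t = otsu_threshold(gray)
    return t + k * (255 - t) / max(tbr, 1.0)
```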

이동로봇을 위한 영상의 자동 엣지 검출 방법 (Automatic Edge Detection Method for Mobile Robot Application)

  • 김동수;권인소;이왕헌
    • 제어로봇시스템학회논문지, Vol. 11 No. 5, pp. 423-428, 2005
  • This paper proposes a new edge detection method using a 3×3 ideal binary pattern and a lookup table (LUT) for mobile robot localization without any parameter adjustment. We take the mean of the pixels within the 3×3 block as a threshold by which the pixels are divided into two groups. The edge magnitude and orientation are calculated by taking the difference of the average intensities of the two groups and by searching the directional code in the LUT, respectively. In addition, the input image is partitioned into multiple groups according to intensity similarity using the histogram, and the threshold of each group is determined automatically by fuzzy reasoning. Finally, the edges are determined through non-maximum suppression using an edge confidence measure and edge linking. Applying this edge detection method to mobile robot localization using the projective invariance of the cross ratio, we demonstrate the robustness of the proposed method to illumination changes in a corridor environment.
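
A minimal sketch of the 3×3 mean-threshold edge response described above: each block is split by its own mean, and the edge magnitude is the difference of the two group averages. The 512-entry directional-code LUT is omitted here, so only the magnitude part is shown, and the loop-based form is for clarity rather than speed.

```python
import numpy as np

def edge_magnitude_3x3(gray):
    gray = gray.astype(np.float32)
    h, w = gray.shape
    mag = np.zeros_like(gray)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = gray[y - 1:y + 2, x - 1:x + 2]
            m = block.mean()                       # block mean as the threshold
            hi, lo = block[block > m], block[block <= m]
            if hi.size and lo.size:
                # Edge strength: difference of the two group averages.
                mag[y, x] = hi.mean() - lo.mean()
    return mag
```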

Navigation and Fine Co-location of ATSR Images

  • Shin, Dong-Seok;Pollard, John-K.
    • 대한원격탐사학회지, Vol. 10 No. 2, pp. 133-160, 1994
  • In this paper, we propose a comprehensive geometric correction algorithm for Along Track Scanning Radiometer (ATSR) images. The procedure consists of two cascaded modules: pre-correction and fine co-location. The pre-correction algorithm is based on a navigation model derived in mathematical form, which is applied to correct raw (un-geolocated) ATSR images. Non-systematic geometric errors are also introduced as the limitation of geometric correction by this analytical method. A fast and automatic algorithm is also presented for co-locating the nadir and forward views of ATSR images using a binary cross-correlation matching technique; it removes the small non-systematic errors which cannot be corrected by the analytical method. The proposed algorithm does not require any auxiliary information or a priori processing, and it avoids the imperfect co-registration problem observed with multiple channels. Coastlines in the images are detected by region segmentation and an automatic thresholding technique. The matching procedure is carried out with binary coastline images (nadir and forward), and it gives comparable accuracy and faster processing than a patch-based matching technique. This technique automatically reduces the non-systematic errors between the two views to ±1 pixel.
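
The binary cross-correlation matching can be pictured as sliding one coastline image over the other inside a small search window and keeping the shift with the largest overlap; the search radius and overlap score below are assumptions, not the paper's exact matching criterion.

```python
import numpy as np

def best_shift(nadir_bin, forward_bin, radius=5):
    """Find the (dy, dx) offset that best aligns two binary coastline images."""
    best, best_score = (0, 0), -1.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(forward_bin, dy, axis=0), dx, axis=1)
            score = np.logical_and(nadir_bin, shifted).sum()  # overlap count
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best  # offset that removes the residual mis-registration
```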

EBT 영상에서 심장 영역의 추출 (Extraction of Heart Region in EBT Images)

  • 김현수;이성기
    • 한국정보과학회논문지:소프트웨어및응용, Vol. 27 No. 6, pp. 651-659, 2000
  • Extracting the heart region from medical images is very important for diagnosing cardiac disease and for three-dimensional visualization. This paper presents a method that automatically extracts the heart region from EBT (electron beam tomography) images. The extraction proceeds by contrast-based binarization, coarse heart-region extraction using anatomical knowledge and mathematical morphology, and refinement of the heart boundary with an active contour model (snake). In particular, the contrast-based binarization produces good results on complex images such as EBT slices. The automatically extracted heart regions were compared with and analyzed against extractions made by medical experts.
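
The coarse heart-region step (binarization followed by mathematical morphology) might be sketched as below, assuming an 8-bit EBT slice; Otsu binarization and fixed kernels stand in for the paper's contrast-based binarization and anatomical rules, and the final snake refinement is omitted.

```python
import cv2

def coarse_heart_mask(ebt_slice):
    # Binarize the slice (Otsu here; the paper uses a contrast-based rule).
    _, binary = cv2.threshold(ebt_slice, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # drop thin structures
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return closed
```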


합성곱 신경망 기반 야간 차량 검출 방법 (Night-time Vehicle Detection Method Using Convolutional Neural Network)

  • 박웅규;최연규;김현구;최규상;정호열
    • 대한임베디드공학회논문지, Vol. 12 No. 2, pp. 113-120, 2017
  • In this paper, we present a night-time vehicle detection method using CNN (Convolutional Neural Network) classification. Camera-based night-time vehicle detection plays an important role in various advanced driver assistance systems (ADAS), such as automatic head-lamp control. The method consists mainly of thresholding, labeling, and classification steps. The classification step is implemented with an existing CIFAR-10-model CNN. Through simulations on real road video, we show that CNN classification is a good alternative for night-time vehicle detection.
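
The thresholding/labeling/classification pipeline might be sketched as follows; the fixed brightness threshold, the minimum blob area, and the classify callback standing in for the CIFAR-10-style CNN are all assumptions rather than the paper's implementation.

```python
import cv2

def detect_vehicles(night_gray, classify, thresh=200, min_area=30):
    # Threshold bright regions (head/tail lamps) in the night image.
    _, bright = cv2.threshold(night_gray, thresh, 255, cv2.THRESH_BINARY)
    # Label connected components to obtain candidate boxes.
    n, _, stats, _ = cv2.connectedComponentsWithStats(bright)
    boxes = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue
        crop = cv2.resize(night_gray[y:y + h, x:x + w], (32, 32))  # CIFAR-10 input size
        if classify(crop):   # hypothetical CNN callback: True if vehicle
            boxes.append((x, y, w, h))
    return boxes
```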

자동차의 자기 주행차선 검출을 위한 시각 센싱 (Vision Sensing for the Ego-Lane Detection of a Vehicle)

  • 김동욱;도용태
    • 센서학회지, Vol. 27 No. 2, pp. 137-141, 2018
  • Detecting the ego-lane of a vehicle (the lane on which the vehicle is currently running) is one of the basic techniques for a smart car. Vision sensing is a widely used method for ego-lane detection. Existing studies usually find road lane lines by detecting edge pixels in the image from a vehicle camera and then connecting the edge pixels using the Hough Transform. However, this approach requires rather long processing time, and too many straight lines are often detected, resulting in false detections under various road conditions. In this paper, we find the lane lines by scanning only a limited number of horizontal lines within a small image region of interest; the horizontal line scan replaces the edge detection process of existing methods. Automatic thresholding and spatiotemporal filtering procedures are also proposed to make the method reliable. In experiments using real road images taken under different conditions, the proposed method achieved a high success rate.
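
The scan-line idea can be illustrated as below: only a few horizontal rows inside the region of interest are examined, and a per-row automatic threshold marks bright lane-paint pixels. The mean-plus-k-sigma rule and the row selection are assumptions; the paper's thresholding and spatiotemporal filtering are not reproduced.

```python
import numpy as np

def scan_lane_points(gray, rows, k=1.5):
    points = []
    for y in rows:                        # e.g. a handful of rows in the lower ROI
        line = gray[y, :].astype(np.float32)
        t = line.mean() + k * line.std()  # per-row automatic threshold
        xs = np.where(line > t)[0]
        points.extend((x, y) for x in xs)
    return points                         # candidate lane-marking pixels
```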

APPLICATION OF NEURAL NETWORK FOR THE CLOUD DETECTION FROM GEOSTATIONARY SATELLITE DATA

  • Ahn, Hyun-Jeong;Ahn, Myung-Hwan;Chung, Chu-Yong
    • 대한원격탐사학회:학술대회논문집, Proceedings of ISRS 2005, pp. 34-37, 2005
  • An efficient and robust neural-network-based scheme is introduced in this paper to perform automatic cloud detection. Unlike many existing cloud detection schemes, which use thresholding and statistical methods, we used artificial neural network methods, namely multi-layer perceptrons (MLP) with the back-propagation algorithm and radial basis function (RBF) networks, for cloud detection from geostationary satellite images. We used a simple scene (a mixed scene containing only cloud and clear sky). The main results show that the neural networks are able to handle complex atmospheric and meteorological phenomena. The experimental results show that both methods performed well, obtaining classification accuracies of over 90 percent. Moreover, the RBF model is the most effective method for the cloud classification.
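
A pixel-wise neural-network cloud classifier in the spirit of the MLP variant above could be sketched with a generic library model; the feature set, layer sizes, and training data are assumptions, and the paper's MLP/RBF configurations are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_cloud_mlp(features, labels):
    # features: (n_pixels, n_channels) satellite brightness values
    # labels:   (n_pixels,) 1 = cloud, 0 = clear sky
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    clf.fit(features, labels)
    return clf

def classify_scene(clf, scene):
    # scene: (height, width, channels) image; returns a cloud mask of the same size.
    h, w, c = scene.shape
    return clf.predict(scene.reshape(-1, c)).reshape(h, w)
```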
