• Title/Summary/Keyword: Model Based Segmentation


Texture Segmentation using ART2 (ART2를 이용한 효율적인 텍스처 분할과 합병)

  • Kim, Do-Nyun;Cho, Dong-Sub
    • Proceedings of the KIEE Conference / 1995.07b / pp.974-976 / 1995
  • Segmentation of image data is an important problem in computer vision, remote sensing, and image analysis. Most objects in the real world have textured surfaces, and segmentation based on texture information is possible even when there are no apparent intensity edges between the different regions. Many existing methods for texture segmentation and classification are based on different types of statistics that can be obtained from gray-level images. In this paper, we use a neural network model, ART-2 (Adaptive Resonance Theory 2) proposed by Carpenter and Grossberg, to segment the textures in an image. In our experiments, we use the Walsh matrix to compute feature values for textured images.
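
As a rough illustration of the feature-extraction step described above, the sketch below projects non-overlapping image blocks onto a Walsh-Hadamard basis and clusters the resulting feature vectors. k-means stands in for ART-2 purely for brevity, and the block size and region count are arbitrary choices, not values from the paper.

```python
import numpy as np
from scipy.linalg import hadamard
from sklearn.cluster import KMeans


def walsh_block_features(image, block=8):
    """Project non-overlapping blocks onto the Walsh-Hadamard basis."""
    H = hadamard(block)                      # +/-1 Walsh (Hadamard) matrix
    feats, coords = [], []
    h, w = image.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = image[r:r + block, c:c + block].astype(float)
            coeff = H @ patch @ H.T / block  # 2-D Walsh transform of the block
            feats.append(np.abs(coeff).ravel())
            coords.append((r, c))
    return np.array(feats), coords


def segment_texture(image, n_regions=3, block=8):
    """Cluster block features into texture regions (k-means stands in for ART-2)."""
    feats, coords = walsh_block_features(image, block)
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(feats)
    return dict(zip(coords, labels))         # block origin -> texture label
```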


Segmentation and Classification of Lidar data

  • Tseng, Yi-Hsing;Wang, Miao
    • Proceedings of the KSRS Conference / 2003.11a / pp.153-155 / 2003
  • Laser scanning has become a viable technique for collecting a large amount of accurate 3D point data densely distributed on the scanned object surface. The inherent 3D nature of the sub-randomly distributed point cloud provides abundant spatial information, and exploring this information from laser-scanned data has become an active research topic, for instance extracting digital elevation models, building models, and vegetation volumes. The point cloud should be segmented and classified before spatial information is extracted. This paper reviews some existing segmentation methods and then proposes an octree-based split-and-merge segmentation method to divide lidar data into clusters belonging to 3D planes. The classification of lidar data can then be performed based on the derived attributes of the extracted 3D planes. Test results on both ground and airborne lidar data show the potential of applying this method to extract spatial features from lidar data.
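
A minimal sketch of the split half of such an octree-based scheme, assuming a cell is accepted once its points fit a plane within a tolerance; the merge step is omitted, and every threshold below is illustrative rather than the authors' setting.

```python
import numpy as np


def plane_residual(points):
    """Smallest PCA eigenvalue, i.e. mean squared deviation from the best-fit plane."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    return np.linalg.eigvalsh(cov)[0]


def octree_split(points, bounds, tol=1e-4, min_pts=30, depth=0, max_depth=8):
    """Recursively split a cell until its points are (roughly) coplanar."""
    if len(points) < min_pts or depth == max_depth or plane_residual(points) < tol:
        return [points]                      # accept the cell as one planar cluster
    lo, hi = bounds
    mid = (lo + hi) / 2.0
    clusters = []
    for octant in range(8):                  # visit the eight child cells
        mask = np.ones(len(points), dtype=bool)
        new_lo, new_hi = lo.copy(), hi.copy()
        for axis in range(3):
            if (octant >> axis) & 1:
                mask &= points[:, axis] >= mid[axis]
                new_lo[axis] = mid[axis]
            else:
                mask &= points[:, axis] < mid[axis]
                new_hi[axis] = mid[axis]
        if mask.any():
            clusters += octree_split(points[mask], (new_lo, new_hi),
                                     tol, min_pts, depth + 1, max_depth)
    return clusters

# pts = np.loadtxt("cloud.xyz")              # hypothetical (N, 3) point array
# planar_clusters = octree_split(pts, (pts.min(axis=0), pts.max(axis=0)))
```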


Infrared and Visible Image Fusion Based on NSCT and Deep Learning

  • Feng, Xin
    • Journal of Information Processing Systems / v.14 no.6 / pp.1405-1419 / 2018
  • An image fusion method based on deep-model segmentation is proposed to overcome the noise interference and artifacts that arise in infrared and visible image fusion. First, a deep Boltzmann machine is used to learn priors of the target and background contours in the infrared and visible images, and a deep segmentation model of the contours is constructed. The Split Bregman iterative algorithm is employed to obtain the optimal energy segmentation of the infrared and visible image contours. Then, the nonsubsampled contourlet transform (NSCT) is applied to decompose the source images, and corresponding rules are used to fuse the coefficients according to the segmented background contour. Finally, the inverse NSCT is used to reconstruct the fused image. MATLAB simulation results indicate that the proposed algorithm effectively fuses both target and background contours, with high contrast and good noise suppression in subjective evaluation as well as strong objective quantitative indicators.
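
NSCT is not available in common Python packages, so the sketch below illustrates the same decompose-fuse-reconstruct pattern with PyWavelets' stationary wavelet transform as a stand-in. The fusion rules (average the low-pass bands, max-abs on details) are generic choices rather than the paper's contour-guided rules, and both inputs are assumed to be equally sized grayscale arrays with side lengths divisible by 2**level.

```python
import numpy as np
import pywt


def fuse_multiscale(ir, vis, wavelet="db2", level=2):
    """Decompose both images, fuse band by band, and reconstruct the result."""
    c_ir = pywt.swt2(ir.astype(float), wavelet, level=level)
    c_vis = pywt.swt2(vis.astype(float), wavelet, level=level)
    fused = []
    for (a1, d1), (a2, d2) in zip(c_ir, c_vis):
        approx = (a1 + a2) / 2.0                   # average the low-pass bands
        details = tuple(                           # max-abs rule on each detail band
            np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2))
        fused.append((approx, details))
    return pywt.iswt2(fused, wavelet)

# fused = fuse_multiscale(ir_image, vis_image)     # hypothetical registered inputs
```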

Improvement of an Automatic Segmentation for TTS Using Voiced/Unvoiced/Silence Information (유/무성/묵음 정보를 이용한 TTS용 자동음소분할기 성능향상)

  • Kim Min-Je;Lee Jung-Chul;Kim Jong-Jin
    • MALSORI / no.58 / pp.67-81 / 2006
  • For a large corpus of time-aligned data, HMM-based approaches are the most widely used for automatic segmentation, providing a consistent and accurate phone labeling scheme. There are two training methods for HMMs: the flat-start method minimizes human intervention but has low accuracy, while the bootstrap method is accurate but requires manual segmentation. In this paper, a new algorithm is proposed to minimize manual work and to improve the performance of automatic segmentation. In the first phase, voiced/unvoiced/silence classification is performed for each speech frame. In the second phase, the phoneme sequence is dynamically aligned to the voiced/unvoiced/silence sequence according to acoustic-phonetic rules. Finally, using the speech data segmented in this way as a bootstrap, HMM phoneme model parameters are trained. For the performance test, the hand-labeled ETRI speech DB was used. The experimental results showed that our algorithm achieved a 10% improvement in segmentation accuracy within a 20 ms error tolerance, and a 30% improvement for unvoiced consonants in particular.
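
The first phase, frame-wise voiced/unvoiced/silence labeling, can be approximated with short-time energy and zero-crossing rate as sketched below; the frame length and thresholds are placeholders that would need tuning per corpus, and the paper's actual classifier may differ.

```python
import numpy as np


def vus_classify(signal, sr, frame_ms=20, energy_th=1e-3, zcr_th=0.15):
    """Label each frame 'S' (silence), 'U' (unvoiced), or 'V' (voiced)."""
    n = int(sr * frame_ms / 1000)
    labels = []
    for i in range(0, len(signal) - n + 1, n):
        frame = signal[i:i + n]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))))) / 2.0
        if energy < energy_th:
            labels.append("S")      # low short-time energy -> silence
        elif zcr > zcr_th:
            labels.append("U")      # high zero-crossing rate -> unvoiced
        else:
            labels.append("V")      # otherwise voiced
    return labels

# labels = vus_classify(waveform, sr=16000)   # waveform: 1-D float array in [-1, 1]
```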


Weakly-supervised Semantic Segmentation using Exclusive Multi-Classifier Deep Learning Model (독점 멀티 분류기의 심층 학습 모델을 사용한 약지도 시맨틱 분할)

  • Choi, Hyeon-Joon;Kang, Dong-Joong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.6 / pp.227-233 / 2019
  • Along with the recent development of deep learning techniques, neural networks have been achieving success in the computer vision field. Convolutional neural networks have shown outstanding performance not only on simple image classification tasks but also on more difficult tasks such as object segmentation and detection. However, many such deep learning models rely on supervised learning, which requires annotations far more detailed than image-level labels; in particular, semantic segmentation models require pixel-level annotations for training, which are very costly to obtain. To address this problem, this paper proposes a weakly-supervised semantic segmentation method that requires only image-level labels to train the network. Existing weakly-supervised methods tend to detect only specific, discriminative areas of an object; in contrast, we use a multi-classifier deep learning architecture so that the model recognizes more diverse parts of objects. The proposed method is evaluated on the VOC 2012 validation dataset.
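
A toy PyTorch sketch of the multi-classifier idea: several 1x1-convolution heads share one backbone, each head produces image-level scores by global average pooling of its class activation maps, and the heads' maps are merged for weak localization. The tiny backbone, head count, and max-merging rule are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn


class MultiClassifierCAM(nn.Module):
    """Shared backbone with several classification heads whose feature maps
    double as class activation maps for weakly-supervised localization."""

    def __init__(self, num_classes, num_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(               # stand-in feature extractor
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            nn.Conv2d(128, num_classes, 1) for _ in range(num_heads))

    def forward(self, x):
        feats = self.backbone(x)
        cams = [head(feats) for head in self.heads]       # per-head activation maps
        logits = [cam.mean(dim=(2, 3)) for cam in cams]   # GAP -> image-level scores
        merged_cam = torch.stack(cams).max(dim=0).values  # combine the heads' evidence
        return logits, merged_cam


# logits, cam = MultiClassifierCAM(num_classes=20)(torch.randn(1, 3, 64, 64))
```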

The Estimation of Parameters to minimize the Energy Function of the Piecewise Constant Model Using Three-way Analysis of Variance (3원 변량분석을 이용한 구분적으로 일정한 모델의 에너지 함수 최소화를 위한 매개변수들 추정)

  • Joo, Ki-See;Cho, Deog-Sang;Seo, Jae-Hyung
    • Journal of Advanced Navigation Technology / v.16 no.5 / pp.846-852 / 2012
  • The result of image segmentation varies with the parameters involved in the segmentation algorithm, so the parameters for optimal segmentation have typically been found through trial and error. In this paper, we propose a method to find the best values of the parameters of an area-based active contour method using three-way analysis of variance (ANOVA). The segmentation result obtained with the chosen parameters is compared with an optimal segmentation drawn by the user, using the global consistency rate to compare the two segmentations. Finally, we estimate the main effects of and interactions between the parameters using three-way ANOVA, and then compute point and interval estimates to find the best values of the three parameters. The proposed method should be of great help in finding the optimal parameters before performing motion segmentation with a piecewise constant model.
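
A sketch of how the three-way ANOVA step could be run with statsmodels, assuming a table with one row per segmentation run; the column names mu, nu, lam, and gce are placeholders for the three parameters and the consistency score, not the paper's notation.

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# df is a pandas DataFrame with one row per segmentation run: the levels of
# the three parameters (placeholder names mu, nu, lam) and the consistency
# score gce measured against the user-drawn reference segmentation.


def three_way_anova(df):
    """Main effects and all interactions of the three parameters on the score."""
    model = smf.ols("gce ~ C(mu) * C(nu) * C(lam)", data=df).fit()
    return anova_lm(model, typ=2)   # ANOVA table with F statistics and p-values
```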

Three-Dimensional Visualization of Medical Image using Image Segmentation Algorithm based on Deep Learning (딥 러닝 기반의 영상분할 알고리즘을 이용한 의료영상 3차원 시각화에 관한 연구)

  • Lim, SangHeon;Kim, YoungJae;Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.468-475 / 2020
  • In this paper, we propose a deep-learning-based three-dimensional visualization system for medical images in augmented reality. In the proposed system, an artificial neural network performs fully automatic segmentation of the lung and pulmonary nodule regions from chest CT images. After three-dimensional volume rendering is applied to the segmented images, the result is visualized on augmented reality devices. In the experiments, nodules present in the lung region could be easily distinguished with the naked eye, and the location and shape of lesions were confirmed intuitively. The evaluation compared the automatic segmentation results on the test dataset to manually segmented images: the segmentation model achieved a DSC (Dice Similarity Coefficient) of 98.77%, precision of 98.45%, and recall of 99.10% for the lung region, and a DSC of 91.88%, precision of 93.05%, and recall of 90.94% for the pulmonary nodule region. If the proposed system is applied in medical fields such as clinical practice and medical education, it is expected to contribute to patient-specific organ modeling, lesion analysis, and surgical education and training.
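
One common way to turn such a segmentation mask into geometry for 3-D or AR display is to extract a surface mesh with marching cubes, as sketched below; the voxel spacing is a made-up example, and the paper's actual rendering pipeline is not shown here.

```python
import numpy as np
from skimage import measure


def mask_to_mesh(mask, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangle mesh from a binary 3-D segmentation mask."""
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=spacing)
    return verts, faces, normals

# lung_mask = ...                                   # (D, H, W) binary output of the network
# verts, faces, _ = mask_to_mesh(lung_mask, spacing=(2.5, 0.7, 0.7))
# the mesh can then be exported (e.g. OBJ/GLB) for an augmented-reality viewer
```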

AI-based Automatic Spine CT Image Segmentation and Haptic Rendering for Spinal Needle Insertion Simulator (척추 바늘 삽입술 시뮬레이터 개발을 위한 인공지능 기반 척추 CT 이미지 자동분할 및 햅틱 렌더링)

  • Park, Ikjong;Kim, Keehoon;Choi, Gun;Chung, Wan Kyun
    • The Journal of Korea Robotics Society / v.15 no.4 / pp.316-322 / 2020
  • Endoscopic spine surgery is an advanced surgical technique that minimizes skin incision, muscle damage, and blood loss compared to open surgery. It requires, however, accurate positioning of the endoscope to avoid the spinal nerves and to place it near the target disk, and a guide needle is inserted beforehand to guide the endoscope. The result of the surgery also depends heavily on the surgeon's experience and the patient's CT or MRI images. For training, a number of haptic simulators for spinal needle insertion have therefore been developed, but they remain difficult to use in practice because previous studies required manual segmentation of the vertebrae from CT images and did not carefully model the interaction force between the needle and soft tissue. This paper proposes AI-based automatic vertebra segmentation of CT images and a haptic rendering method using the proposed needle-tissue interaction model. For segmentation, a U-Net structure was implemented, achieving 93% pixel accuracy and 88% IoU. The needle-tissue interaction model, which includes puncture and friction forces, was implemented for haptic rendering in the proposed spinal needle insertion simulator.
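
An illustrative one-dimensional needle-tissue force law with a pre-puncture elastic phase and a post-puncture cutting-plus-friction phase, in the spirit of the interaction model described above; the functional form and every coefficient below are placeholders, not the paper's identified parameters.

```python
def needle_force(depth, punctured, stiffness=0.8, puncture_depth=3.0,
                 friction=0.15, cutting=0.4):
    """Piecewise axial resistance felt by the needle (illustrative units).

    Before puncture the tissue deforms elastically like a spring; after the
    membrane is punctured, resistance is a constant cutting force plus
    depth-dependent friction along the shaft.
    """
    if not punctured:
        if depth < puncture_depth:
            return stiffness * depth, False      # elastic deformation phase
        return cutting, True                     # membrane just punctured
    return cutting + friction * depth, True      # cutting + shaft friction

# force, punctured = needle_force(depth=2.0, punctured=False)
```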

MRF Model based Image Segmentation using Genetic Algorithm (유전자 알고리즘을 이용한 MRF 모델 기반의 영상분할)

  • Kim, Eun-Yi;Park, Se-Hyun;Jung, Kee-Chul;Kim, Hang-Joon
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.9 / pp.66-75 / 1999
  • Image segmentation is the process by which an image is partitioned into regions, i.e., sets of homogeneous pixels, and the result has a critical effect on the accuracy of image understanding. In this paper, a Markov random field (MRF) based image segmentation method using a genetic algorithm (GA) is proposed. We model the image with an MRF, which is resistant to noise and blurring; however, while MRF-based methods are robust to degradation, they require accurate parameter estimation. A GA, which is effective at dealing with combinatorial problems, is therefore used as the segmentation algorithm. The efficiency of the proposed method is shown by experimental results on real images and by its application to an automatic vehicle extraction system.
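
A minimal sketch of the idea: a Potts-style MRF energy (data term plus smoothness prior) minimized by a very small mutation-only GA with truncation selection. The paper's chromosome encoding, genetic operators, and parameter estimation are more elaborate; everything below is an illustrative simplification.

```python
import numpy as np


def mrf_energy(labels, image, means, beta=1.0):
    """Potts-model MRF energy: data fidelity plus a label-smoothness prior."""
    data = np.sum((image - means[labels]) ** 2)            # likelihood term
    smooth = (np.sum(labels[:, 1:] != labels[:, :-1])      # horizontal discontinuities
              + np.sum(labels[1:, :] != labels[:-1, :]))   # vertical discontinuities
    return data + beta * smooth


def ga_segment(image, k=3, pop=20, gens=50, mut=0.02, seed=0):
    """Tiny mutation-only GA over label images with truncation selection."""
    rng = np.random.default_rng(seed)
    means = np.linspace(image.min(), image.max(), k)       # crude class means
    population = [rng.integers(0, k, image.shape) for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=lambda L: mrf_energy(L, image, means))
        parents = ranked[: pop // 2]                        # keep the best half
        children = []
        for parent in parents:
            child = parent.copy()
            flip = rng.random(image.shape) < mut            # random label mutations
            child[flip] = rng.integers(0, k, int(flip.sum()))
            children.append(child)
        population = parents + children
    return min(population, key=lambda L: mrf_energy(L, image, means))
```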


Motion Parameter Estimation and Segmentation with Probabilistic Clustering (활률적 클러스터링에 의한 움직임 파라미터 추정과 세그맨테이션)

  • 정차근
    • Journal of Broadcast Engineering / v.3 no.1 / pp.50-60 / 1998
  • This paper addresses the problem of parametric motion estimation and structural motion segmentation for compact image sequence representation and object-based generic video coding. In order to extract a meaningful motion structure from image sequences, a direct parametric motion estimation based on a pre-segmentation is proposed. The pre-segmentation, which accounts for the motion of the moving objects, is carried out by probabilistic clustering with mixture models using optical flow and image intensities. Parametric motion segmentation is then obtained by iterating between estimation of the motion model parameters, using a Gauss-Newton iterative optimization algorithm, and reassignment of regions according to a criterion. The efficiency of the proposed method is verified by computer simulation using CIF real image sequences.
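
For a purely linear (affine) motion model the Gauss-Newton update reduces to ordinary least squares, so the sketch below fits a 6-parameter affine model to optical-flow vectors and reassigns pixels to the best-fitting model. The paper's probabilistic (mixture-model) clustering is replaced here by a hard reassignment for brevity.

```python
import numpy as np


def fit_affine_motion(xs, ys, u, v):
    """Least-squares fit of u = a0 + a1*x + a2*y and v = a3 + a4*x + a5*y."""
    A = np.column_stack([np.ones_like(xs), xs, ys])
    params_u, *_ = np.linalg.lstsq(A, u, rcond=None)
    params_v, *_ = np.linalg.lstsq(A, v, rcond=None)
    return np.concatenate([params_u, params_v])


def reassign(xs, ys, u, v, models):
    """Assign every pixel to the motion model with the smallest flow residual."""
    residuals = []
    for a in models:
        ru = u - (a[0] + a[1] * xs + a[2] * ys)
        rv = v - (a[3] + a[4] * xs + a[5] * ys)
        residuals.append(ru ** 2 + rv ** 2)
    return np.argmin(np.stack(residuals), axis=0)

# xs, ys: pixel coordinates; u, v: optical-flow components (flattened 1-D arrays)
# alternating fit_affine_motion per cluster and reassign mimics the estimate/reassign loop
```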
