• Title/Summary/Keyword: Segment similarity


cDNA Cloning and Nucleotide Sequence Determination for VP7 Coding RNA Segment of Human Rotavirus Isolated in Korea (한국에서 분리된 사람 로타바이러스의 VP7 코딩 RNA 분절의 cDNA 합성과 염기서열 결정)

  • Kim, Young Bong;Kim, Kyung Hee;Yang, Jai Myung
    • Korean Journal of Microbiology
    • /
    • v.30 no.5
    • /
    • pp.397-402
    • /
    • 1992
  • The cDNA of the RNA segment coding for VP7 of a human rotavirus isolated from a patient's stool in the Seoul area was synthesized, amplified by polymerase chain reaction, filled in with the Klenow fragment of DNA polymerase I, and cloned into pUC19. The cDNA sequence was determined and compared with those of the VP7-coding RNA segments of group A rotavirus isolates from other countries. Over 90% sequence homology was found with the serotype 1-specific WA1 and RE9 strains. Comparative analysis of the deduced amino acid sequences within the two variable regions (amino acid residues 87 through 101 and 208 through 221) against the WA1 and RE9 strains also showed a high degree of sequence similarity.
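The >90% homology figure above is a pairwise sequence identity. As a minimal illustration (the 10-nt fragments below are made up, not the actual VP7 sequences), percent identity over an alignment can be computed as:

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Fraction of matching positions between two aligned,
    equal-length nucleotide sequences, as a percentage."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical 10-nt fragments differing at one position.
print(percent_identity("ATGGCTTACG", "ATGGCTTACG"))  # 100.0
print(percent_identity("ATGGCTTACG", "ATGGATTACG"))  # 90.0
```

A real comparison would first align the full-length segments (e.g. with a global alignment) before counting identities.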


Image Dehazing Enhancement Algorithm Based on Mean Guided Filtering

  • Weimin Zhou
    • Journal of Information Processing Systems
    • /
    • v.19 no.4
    • /
    • pp.417-426
    • /
    • 2023
  • To improve image restoration and reduce the loss of image detail, an image dehazing enhancement algorithm based on mean guided filtering is proposed. The superpixel calculation method is used to pre-segment the original foggy image into different sub-regions. The Ncut algorithm is then used to segment the original image, outputting the segmented image once no further region merging occurs. Using the mean-guided filtering method, the minimum value within a local small block of the dark image is selected as the value of the current pixel to obtain the dark primary color image, and its transmittance is calculated to obtain the image edge detection result. Based on the dark channel prior, a classic image dehazing enhancement model is established and combined with a median filter of low computational complexity to denoise the image in real time while preserving discontinuities in abrupt-transition regions, achieving image dehazing enhancement. The experimental results show that the proposed algorithm has clear advantages in dehazing and enhancement: it retains a large amount of image detail and achieves high values of information entropy, peak signal-to-noise ratio, and structural similarity. The work combines several methods to achieve dehazing and improve image quality; through segmentation, filtering, denoising, and other operations, image quality is effectively improved, providing a useful reference for the advancement of image processing technology.
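The dark channel and transmittance steps described above can be sketched as follows. This is the generic dark-channel-prior formulation (patch minimum over colour channels, then t(x) = 1 − ω·dark(x)/A), not the paper's exact pipeline; the `atmos` and `omega` values are assumptions:

```python
import numpy as np

def dark_channel(image: np.ndarray, patch: int = 3) -> np.ndarray:
    """Dark channel prior: for each pixel, the minimum intensity over
    all colour channels within a local patch x patch neighbourhood."""
    h, w, _ = image.shape
    per_pixel_min = image.min(axis=2)          # min over RGB channels
    pad = patch // 2
    padded = np.pad(per_pixel_min, pad, mode="edge")
    dark = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    return dark

def transmittance(dark: np.ndarray, atmos: float = 1.0,
                  omega: float = 0.95) -> np.ndarray:
    """Transmission estimate t(x) = 1 - omega * dark(x) / A, where A is
    the (assumed) atmospheric light intensity."""
    return 1.0 - omega * dark / atmos
```

A haze-free, saturated region has a dark channel near zero and therefore a transmission near one; dense fog pushes the dark channel up and the transmission down.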

Automated Lung Segmentation on Chest Computed Tomography Images with Extensive Lung Parenchymal Abnormalities Using a Deep Neural Network

  • Seung-Jin Yoo;Soon Ho Yoon;Jong Hyuk Lee;Ki Hwan Kim;Hyoung In Choi;Sang Joon Park;Jin Mo Goo
    • Korean Journal of Radiology
    • /
    • v.22 no.3
    • /
    • pp.476-488
    • /
    • 2021
  • Objective: We aimed to develop a deep neural network for segmenting lung parenchyma with extensive pathological conditions on non-contrast chest computed tomography (CT) images. Materials and Methods: Thin-section non-contrast chest CT images from 203 patients (115 males, 88 females; age range, 31-89 years) between January 2017 and May 2017 were included in the study, of which 150 cases had extensive lung parenchymal disease involving more than 40% of the parenchymal area. Parenchymal diseases included interstitial lung disease (ILD), emphysema, nontuberculous mycobacterial lung disease, tuberculous destroyed lung, pneumonia, lung cancer, and other diseases. Five experienced radiologists manually drew the margin of the lungs, slice by slice, on CT images. The dataset used to develop the network consisted of 157 cases for training, 20 cases for development, and 26 cases for internal validation. Two-dimensional (2D) U-Net and three-dimensional (3D) U-Net models were used for the task. The network was trained to segment the lung parenchyma as a whole and segment the right and left lung separately. The University Hospitals of Geneva ILD dataset, which contained high-resolution CT images of ILD, was used for external validation. Results: The Dice similarity coefficients for internal validation were 99.6 ± 0.3% (2D U-Net whole lung model), 99.5 ± 0.3% (2D U-Net separate lung model), 99.4 ± 0.5% (3D U-Net whole lung model), and 99.4 ± 0.5% (3D U-Net separate lung model). The Dice similarity coefficients for the external validation dataset were 98.4 ± 1.0% (2D U-Net whole lung model) and 98.4 ± 1.0% (2D U-Net separate lung model). In 31 cases, where the extent of ILD was larger than 75% of the lung parenchymal area, the Dice similarity coefficients were 97.9 ± 1.3% (2D U-Net whole lung model) and 98.0 ± 1.2% (2D U-Net separate lung model). 
Conclusion: The deep neural network achieved excellent performance in automatically delineating the boundaries of lung parenchyma with extensive pathological conditions on non-contrast chest CT images.
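The Dice similarity coefficient used for both internal and external validation above is defined over binary masks as 2|A∩B| / (|A|+|B|); a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation
    masks: 2*|A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0
```

A value of 99.6% therefore means the predicted and manual lung masks overlap almost perfectly relative to their combined size.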

VRTEC : Multi-step Retrieval Model for Content-based Video Query (VRTEC : 내용 기반 비디오 질의를 위한 다단계 검색 모델)

  • 김창룡
    • Journal of the Korean Institute of Telematics and Electronics T
    • /
    • v.36T no.1
    • /
    • pp.93-102
    • /
    • 1999
  • In this paper, we propose a data model and a retrieval method for content-based video queries. After partitioning a video into frame sets of equal length, called video-windows, each video-window can be mapped to a point in a multidimensional space. A video can then be represented as a trajectory formed by connecting neighboring video-window points in this space. The similarity between two video-windows is defined as the Euclidean distance between their points in the multidimensional space, and the similarity between two video segments of arbitrary length is obtained by comparing the corresponding trajectories. A new retrieval method with filtering and refinement steps is developed, which returns correct results and increases retrieval speed by approximately 4.7 times compared to a method without these steps.
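The window-to-window and trajectory comparisons described above can be sketched as follows; the feature dimensionality and the averaging of per-window distances along a trajectory are illustrative assumptions, not the paper's exact definition:

```python
import math

def window_distance(p, q) -> float:
    """Euclidean distance between two video-windows mapped to points
    in a multidimensional feature space (smaller = more similar)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def trajectory_distance(traj_a, traj_b) -> float:
    """Compare two equal-length video segments by averaging the
    distances of corresponding video-window points along their
    trajectories."""
    return sum(window_distance(p, q)
               for p, q in zip(traj_a, traj_b)) / len(traj_a)
```

A cheap filtering step could threshold `trajectory_distance` on coarse features before a refinement step re-ranks the survivors on full features.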


Acoustic Signal based Optimal Route Selection Problem: Performance Comparison of Multi-Attribute Decision Making methods

  • Borkar, Prashant;Sarode, M.V.;Malik, L. G.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.2
    • /
    • pp.647-669
    • /
    • 2016
  • Using multiple attributes for decision making, including user preference, increases the complexity of the route selection process. Various approaches have been proposed to solve the optimal route selection problem. In this paper, multi-attribute decision making (MADM) algorithms such as Simple Additive Weighting (SAW), the Weighted Product Method (WPM), the Analytic Hierarchy Process (AHP), and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) are applied to acoustic-signature-based optimal route selection to provide users with better quality of service. The traffic density state of a road segment (very low, low, below medium, medium, above medium, high, and very high), characterized by the occurrence and mixture weightings of traffic noise signals (tyre, engine, air turbulence, exhaust, honks, etc.), is considered as one of the attributes in the decision-making process. The short-term spectral envelope features of the cumulative acoustic signals are extracted using Mel-Frequency Cepstral Coefficients (MFCC), and an Adaptive Neuro-Fuzzy Classifier (ANFC) is used to model the seven traffic density states. The simple point method and AHP are used to calculate the weights of the decision parameters. Numerical results show that WPM, AHP, and TOPSIS provide similar performance.
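Of the MADM methods compared, TOPSIS can be sketched as below. This generic version assumes all criteria are benefit-type (higher is better) and that the weights are already given; the paper derives its weights via the simple point method and AHP:

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Score alternatives (rows) on benefit criteria (columns) by
    relative closeness to the ideal solution; higher score = better."""
    # Vector-normalise each criterion column, then apply the weights.
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))
    v = norm * weights
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))  # distance to ideal
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

For route selection, each row would be a candidate route and each column an attribute such as the estimated traffic density state of its segments.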

Convolutional Neural Network-Based Automatic Segmentation of Substantia Nigra on Nigrosome and Neuromelanin Sensitive MR Images

  • Kang, Junghwa;Kim, Hyeonha;Kim, Eunjin;Kim, Eunbi;Lee, Hyebin;Shin, Na-young;Nam, Yoonho
    • Investigative Magnetic Resonance Imaging
    • /
    • v.25 no.3
    • /
    • pp.156-163
    • /
    • 2021
  • Recently, neuromelanin and nigrosome imaging techniques have been developed to evaluate the substantia nigra in Parkinson's disease. Previous studies have shown potential benefits of quantitative analysis of neuromelanin and nigrosome images in the substantia nigra, although visual assessments have been performed to evaluate structures in most studies. In this study, we investigate the potential of using deep learning based automatic region segmentation techniques for quantitative analysis of the substantia nigra. The deep convolutional neural network was trained to automatically segment substantia nigra regions on 3D nigrosome and neuromelanin sensitive MR images obtained from 30 subjects. With a 5-fold cross-validation, the mean calculated dice similarity coefficient between manual and deep learning was 0.70 ± 0.11. Although calculated dice similarity coefficients were relatively low due to empirically drawn margins, selected slices were overlapped for more than two slices of all subjects. Our results demonstrate that deep convolutional neural network-based method could provide reliable localization of substantia nigra regions on neuromelanin and nigrosome sensitive MR images.

Content-based Image Retrieval using an Improved Chain Code and Hidden Markov Model (개선된 chain code와 HMM을 이용한 내용기반 영상검색)

  • 조완현;이승희;박순영;박종현
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.375-378
    • /
    • 2000
  • In this paper, we propose a novel content-based image retrieval system using both a Hidden Markov Model (HMM) and an improved chain code. A Gaussian Mixture Model (GMM) is applied to statistically model the color information of the image, and the Deterministic Annealing EM (DAEM) algorithm is employed to estimate the parameters of the GMM. This result is used to segment the given image. We use an improved chain code, which is invariant to rotation, translation, and scale, to extract the shape feature vectors for each image in the database. These are stored in the database together with each HMM, whose parameters (A, B, π) are estimated by the Baum-Welch algorithm. For the feature vector obtained in the same way from the query image, the occurrence probability under each image's HMM is computed using the forward algorithm. We use these probabilities for image retrieval and present the images with the highest similarity.
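The forward algorithm used to score the query against each stored HMM (A, B, π) can be sketched as follows, for discrete observation symbols indexed into the columns of B:

```python
import numpy as np

def forward_probability(A: np.ndarray, B: np.ndarray,
                        pi: np.ndarray, obs) -> float:
    """Forward algorithm: probability that the HMM (transition matrix A,
    emission matrix B, initial distribution pi) emits the observation
    sequence obs (symbols given as column indices of B)."""
    alpha = pi * B[:, obs[0]]          # initialisation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # induction step
    return float(alpha.sum())          # termination
```

In the retrieval setting, `obs` would be the quantised chain-code feature sequence of the query shape, and the database images are ranked by this probability under their respective HMMs.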


Efficient Superpixel Generation Method Based on Image Complexity

  • Park, Sanghyun
    • Journal of Multimedia Information System
    • /
    • v.7 no.3
    • /
    • pp.197-204
    • /
    • 2020
  • Superpixel methods are widely used in the preprocessing stage of computer vision applications to reduce computational complexity by simplifying images while maintaining their characteristics. It is common to generate superpixels of similar size and shape based on pixel values rather than on the characteristics of the image. In this paper, we propose a method to control the sizes and shapes of the generated superpixels according to the contents of an image. The proposed method consists of two steps. The first step over-segments an image so that its boundary information is well preserved. In the second step, the generated superpixels are merged based on similarity to produce the target number of superpixels, where the shapes of the superpixels are controlled by limiting their maximum size and by the proposed roundness metric. Experimental results show that the proposed method preserves object boundaries in an image more accurately than the existing method.
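The merging step above is constrained by a maximum size and a roundness metric. The paper's exact metric is not reproduced here, so this sketch uses the common isoperimetric roundness 4πA/P² (1.0 for a circle, lower for elongated or ragged regions) as a stand-in:

```python
import math

def roundness(area: float, perimeter: float) -> float:
    """Isoperimetric roundness 4*pi*A / P^2: 1.0 for a perfect circle,
    approaching 0 for elongated or ragged regions. (The paper's own
    metric may differ; this is a common formulation.)"""
    return 4.0 * math.pi * area / (perimeter ** 2)

def mergeable(size_a: int, size_b: int, merged_roundness: float,
              max_size: int, min_roundness: float) -> bool:
    """Merge two superpixels only if the merged region stays under the
    size cap and remains sufficiently round."""
    return (size_a + size_b) <= max_size and merged_roundness >= min_roundness
```

For reference, a unit square scores π/4 ≈ 0.785, so a threshold somewhere below that still admits compact, roughly convex superpixels.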

Object Recognition Using Neuro-Fuzzy Inference System (뉴로-퍼지 추론 시스템을 이용한 물체인식)

  • 김형근;최갑석
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.5
    • /
    • pp.482-494
    • /
    • 1992
  • In this paper, a neuro-fuzzy inference system for effective object recognition is studied. The proposed system combines the learning capability of neural networks with the inference process of fuzzy theory, executing fuzzy inference automatically by means of neural networks. It consists of an antecedent neural network, a consequent neural network, and a fuzzy operational part. To resolve the ambiguity in recognition caused by input variance, the antecedent fuzzy propositions of the inference rules are produced automatically by the error back-propagation learning rule; consequently, when fuzzy inference is performed, the shapes of the membership functions are adaptively modified according to the variation. The antecedent neural network is constructed as separate MNN (Model Classification Neural Network) and LNN (Line Segment Classification Neural Network) modules to prevent degradation of the recognition rate, and it overcomes the limited boundary-decision characteristics of neural networks caused by the similarity of extracted features. An increased recognition rate is obtained with the consequent neural network, which is designed to learn the inference rules for effective system output.


Stereo matching using dynamic programming and image segments (동적 계획법과 이미지 세그먼트를 이용한 스테레오 정합)

  • Dong Won-Pyo;Jeong Chang-Sung
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.805-807
    • /
    • 2005
  • In this paper, we propose a new stereo matching technique using dynamic programming and image segments. In general, dynamic programming is fast and yields a relatively accurate, dense disparity map. However, it can produce incorrect results in occlusion regions near boundaries and in ambiguous regions with little texture. To solve these problems, we first over-segment the image into very small regions and assume that each small region has a similar disparity. Next, matching is performed by dynamic programming, where the cost is computed with a new cost function that applies segment regions within the conventional matching window, improving accuracy. Finally, errors in the dense disparity map obtained by dynamic programming are detected using the visibility and similarity of the segment regions, and are corrected through segment matching to obtain an accurate disparity map.
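The scanline dynamic programming at the core of this approach can be sketched as below. This toy version uses a plain per-pixel absolute-difference cost with a disparity smoothness penalty (Viterbi-style DP over one row), not the paper's segment-based cost function:

```python
import numpy as np

def scanline_disparity(left, right, max_disp: int, smooth: float = 0.1):
    """Toy scanline stereo matcher: dynamic programming over one image
    row, minimising absolute intensity difference plus a smoothness
    penalty between neighbouring pixels' disparities."""
    n, nd = len(left), max_disp + 1
    # Per-pixel matching cost; disparities that fall off the image get inf.
    cost = np.full((n, nd), np.inf)
    for x in range(n):
        for d in range(nd):
            if x - d >= 0:
                cost[x, d] = abs(float(left[x]) - float(right[x - d]))
    # Forward pass: accumulate cost plus smoothness transition penalty.
    acc = cost.copy()
    back = np.zeros((n, nd), dtype=int)
    for x in range(1, n):
        for d in range(nd):
            trans = acc[x - 1] + smooth * np.abs(np.arange(nd) - d)
            back[x, d] = int(trans.argmin())
            acc[x, d] = cost[x, d] + trans[back[x, d]]
    # Backtrack the minimum-cost disparity path.
    disp = np.empty(n, dtype=int)
    disp[-1] = int(acc[-1].argmin())
    for x in range(n - 2, -1, -1):
        disp[x] = back[x + 1, disp[x + 1]]
    return disp
```

On a row where the left image is the right image shifted by a constant disparity, the recovered path settles on that disparity wherever the shift is observable.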
