• Title/Summary/Keyword: error filtering

Search results: 482 (processing time: 0.029 seconds)

Automatic Liver Segmentation of a Contrast Enhanced CT Image Using a Partial Histogram Threshold Algorithm (부분 히스토그램 문턱치 알고리즘을 사용한 조영증강 CT영상의 자동 간 분할)

  • Kyung-Sik Seo;Seung-Jin Park;Jong An Park
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.3
    • /
    • pp.189-194
    • /
    • 2004
  • Pixel values of contrast-enhanced computed tomography (CE-CT) images vary randomly. In addition, the middle part of the liver is difficult to segment because the pancreas has similar gray-level values in the abdomen. In this paper, an automatic liver segmentation method using a partial histogram threshold (PHT) algorithm is proposed to overcome the randomness of CE-CT images and to remove the pancreas. After histogram transformation, adaptive multi-modal thresholding is used to find the range of gray-level values of the liver structure, and the PHT algorithm is applied to remove the pancreas. Morphological filtering is then performed to remove unnecessary objects and smooth the boundary. Four CE-CT slices from each of eight patients were selected to evaluate the proposed method. The averages of the normalized average area for the automatic segmentation method II (ASM II) using the PHT and for the manual segmentation method (MSM) are 0.1671 and 0.1711, a very small difference, and the average area error rate between ASM II and MSM is 6.8339%. These results show that the proposed method performs comparably to manual segmentation by a medical doctor.
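As a rough illustration of the histogram-range idea behind thresholding a single structure, the sketch below picks the modal gray level and keeps a band around it. The toy 1-D pixel list, the band half-width, and the helper names are illustrative assumptions, not the paper's PHT implementation.

```python
# Minimal sketch: find the dominant gray-level mode, keep a band around it.
from collections import Counter

def liver_gray_range(pixels, spread=15):
    """Pick the modal gray level and return a band around it."""
    hist = Counter(pixels)
    mode = max(hist, key=hist.get)          # most frequent gray level
    return mode - spread, mode + spread

def threshold_mask(pixels, lo, hi):
    """Binary mask: 1 where the pixel falls inside [lo, hi]."""
    return [1 if lo <= p <= hi else 0 for p in pixels]

# Toy 1-D "image": liver-like values cluster near 120, background near 40.
img = [40, 42, 118, 120, 121, 119, 122, 120, 41, 200]
lo, hi = liver_gray_range(img)
mask = threshold_mask(img, lo, hi)
print(lo, hi, mask)
```

The paper's PHT additionally restricts the histogram to a sub-region to exclude the pancreas; this sketch shows only the band-selection step.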

Sakurajima volcano eruption detected by GOCI and geomagnetic variation analysis - A case study of the 18 Aug, 2013 eruption - (천리안 위성영상에 감지된 사쿠라지마 화산분화와 지자기 변동 분석 연구 - 2013년 8월 18일 분화를 중심으로 -)

  • Kim, Kiyeon;Hwang, Eui-Hong;Lee, Yoon-Kyung;Lee, Chang-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.2
    • /
    • pp.259-274
    • /
    • 2014
  • On Aug 18, 2013, Sakurajima volcano in Japan erupted on a relatively large scale. The Geostationary Ocean Color Imager (GOCI) was used to detect volcanic ash in the surrounding area on the day after the eruption, and the geomagnetic variation was analyzed using data from the Cheongyang observatory in Korea and several geomagnetic observatories in Japan. First, we reconstructed the geomagnetic data by principal component analysis and conducted semblance analysis by wavelet transform. Second, we minimized the error due to solar effects by applying wavelet-based semblance filtering with the Kp index. As a result, we confirmed that geomagnetic variations generally occurred at the moment of the Sakurajima eruption, although we cannot rule out the possibility that the other variations were affected by factors besides the eruption. This is a rare Korean study analyzing geomagnetic variation associated with a foreign volcanic eruption, and we expect it to support further studies of geomagnetic variation related to earthquakes and volcanic eruptions.
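Semblance, the similarity measure used above, can be pictured as a windowed normalized cross-correlation in [-1, 1]: +1 where two records vary together, -1 where they vary oppositely. The paper's version operates on wavelet (scale-time) coefficients; this time-domain windowed sketch with made-up signals is an illustrative simplification.

```python
# Windowed normalized cross-correlation as a toy stand-in for semblance.
import math

def semblance(x, y):
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den if den else 0.0

def windowed_semblance(x, y, win=4):
    return [semblance(x[i:i + win], y[i:i + win])
            for i in range(len(x) - win + 1)]

a = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
b = [0.0, 1.0, 0.0, -1.0, 0.0, -1.0, 0.0, 1.0]   # flips phase halfway
print(windowed_semblance(a, b))   # 1.0 while in phase, -1.0 once out of phase
```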

Removal of Seabed Multiples in Seismic Reflection Data using Machine Learning (머신러닝을 이용한 탄성파 반사법 자료의 해저면 겹반사 제거)

  • Nam, Ho-Soo;Lim, Bo-Sung;Kweon, Il-Ryong;Kim, Ji-Soo
    • Geophysics and Geophysical Exploration
    • /
    • v.23 no.3
    • /
    • pp.168-177
    • /
    • 2020
  • Seabed multiple reflections (seabed multiples) are a main cause of misinterpretation of primary reflections in both shot gathers and stack sections, so they need to be suppressed during data processing. Conventional model-driven methods, such as prediction-error deconvolution and Radon filtering, and data-driven methods, such as the surface-related multiple elimination technique, have been used to attenuate multiple reflections. However, most processing workflows require time-consuming steps for testing and selecting processing parameters, in addition to computational power and skilled data-processing technique. To attenuate seabed multiples in seismic reflection data, input gathers with seabed multiples and label gathers without them were generated via numerical modeling using the Marmousi2 velocity structure. The training data consisted of normal-moveout-corrected common-midpoint gathers fed into a U-Net neural network. The trained model effectively attenuated the seabed multiples, as judged by the image similarity between the prediction result and the target data, and demonstrated good applicability to field data.
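The periodicity the conventional methods exploit can be shown in a few lines: a water-bottom multiple repeats the primary with period T and alternating polarity scaled by the reflection coefficient r, so the filter y[t] = x[t] + r·x[t-T] cancels the multiple train. This is a deterministic toy version of the model-driven approach the paper replaces with a U-Net; the trace values below are illustrative.

```python
# Toy periodic-multiple suppression: y[t] = x[t] + r * x[t - T].
def suppress_multiples(trace, period, r):
    out = list(trace)
    for t in range(period, len(trace)):
        out[t] += r * trace[t - period]
    return out

T, r = 3, 0.5
trace = [0.0] * 13
trace[T] = r            # water-bottom primary
trace[2 * T] = -r * r   # 1st seabed multiple (flipped polarity)
trace[3 * T] = r ** 3   # 2nd seabed multiple
print(suppress_multiples(trace, T, r))   # primary survives, multiples cancel
```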

Development of a Freeway Travel Time Estimating and Forecasting Model using Traffic Volume (차량검지기 교통량 데이터를 이용한 고속도로 통행시간 추정 및 예측모형 개발에 관한 연구)

  • 오세창;김명하;백용현
    • Journal of Korean Society of Transportation
    • /
    • v.21 no.5
    • /
    • pp.83-95
    • /
    • 2003
  • This study develops travel time estimation and prediction models for freeways using measurements from vehicle detectors. After reviewing existing travel time estimation techniques, we established an estimation model based on traffic volume, a principal factor in traffic flow changes. In the goodness-of-fit test, under normal traffic conditions (speeds above 70 km/h) the RMSEP (Root Mean Square Error Proportion) of estimates derived from travel speed is lower than that of the proposed model, but the proposed model produces more reliable travel times under congestion. The model therefore calculates delay from the excess link volumes given by detector in- and outflows under congestion, and uses the vehicle speeds from detectors when traffic flows above 70 km/h. We also applied short-term Kalman filtering prediction to forecast traffic conditions and obtain more accurate travel times with a statistical model. The evaluation showed a lag between the predicted and estimated travel times, but the RMSEP of the predicted travel times against observations was as low as that of the estimates.
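A scalar Kalman filter of the kind used for such short-term prediction can be sketched as below, with a random-walk state model and noisy travel-time measurements. The noise variances q and r_var and the sample data are illustrative assumptions, not the paper's calibrated values.

```python
# Scalar Kalman filter: random-walk state, noisy scalar measurements.
def kalman_1d(measurements, q=0.01, r_var=1.0):
    x, p = measurements[0], 1.0      # state estimate and its variance
    out = []
    for z in measurements:
        p += q                       # predict: random-walk process noise
        k = p / (p + r_var)          # Kalman gain
        x += k * (z - x)             # update toward measurement z
        p *= (1.0 - k)
        out.append(x)
    return out

travel_times = [10.0, 10.4, 9.8, 10.2, 12.0, 12.3, 12.1]  # minutes
print([round(v, 2) for v in kalman_1d(travel_times)])
```

The smoothing visible in the output also illustrates the lag the evaluation reports: the filtered estimate trails sudden jumps in the measured travel time.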

Encounter of Lattice-type coding with Wiener's MMSE and Shannon's Information-Theoretic Capacity Limits in Quantity and Quality of Signal Transmission (신호 전송의 양과 질에서 위너의 MMSE와 샤논의 정보 이론적 정보량 극한 과 격자 코드 와의 만남)

  • Park, Daechul;Lee, Moon Ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.83-93
    • /
    • 2013
  • By comparing Wiener's MMSE for stochastic signal transmission with Shannon's mutual information, first established by C.E. Shannon in information theory, the connections between the two approaches were investigated. In signal transmission over a noisy channel, Wiener sought the fundamental limits of signal quality in signal estimation, whereas Shannon was interested in the fundamental limits of signal quantity, maximizing the uncertainty in mutual information using the entropy concept. The first concern of this paper is to show that, in deriving the limits of Shannon's point-to-point channel capacity, the mutual information obtained with an MMSE combiner and the Wiener filter's MMSE are interrelated by an integro-differential equation. Then, at the point where Wiener's MMSE and Shannon's mutual information meet, the upper bound on spectral efficiency and the lower bound on energy efficiency were computed. Choosing a proper lattice-type code for a mod-${\Lambda}$ AWGN channel model together with MMSE estimation of ${\alpha}$ was confirmed to approach the fundamental Shannon capacity limits.
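The integro-differential link between mutual information and MMSE referred to above is commonly stated as the I-MMSE relation for the scalar Gaussian channel; a hedged statement of that standard form (notation assumed here, not taken from the paper) is:

```latex
% Scalar Gaussian channel Y = \sqrt{\mathsf{snr}}\,X + N, with N \sim \mathcal{N}(0,1):
\frac{\mathrm{d}}{\mathrm{d}\,\mathsf{snr}}\, I\!\left(X;\sqrt{\mathsf{snr}}\,X + N\right)
  = \frac{1}{2}\,\mathrm{mmse}(\mathsf{snr}),
\qquad
\mathrm{mmse}(\mathsf{snr}) = \mathbb{E}\!\left[\left(X - \mathbb{E}[X \mid Y]\right)^{2}\right]
```

Integrating the MMSE over SNR thus recovers the mutual information, which is the sense in which the estimation-theoretic and information-theoretic limits meet.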

Improvement of SNPs detection efficient by reuse of sequences in Genotyping By Sequencing technology (유전체 서열 재사용을 이용한 Genotyping By Sequencing 기술의 단일 염기 다형성 탐지 효율 개선)

  • Baek, Jeong-Ho;Kim, Do-Wan;Kim, Junah;Lee, Tae-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.10
    • /
    • pp.2491-2499
    • /
    • 2015
  • Recently, the most popular technique for determining the genotype, the genetic features of an individual organism, is GBS, based on SNPs in sequences determined by NGS. For analyzing sequences by GBS, TASSEL is the most widely used program for identifying genotypes, but it has the limitation that it uses only part of the sequences obtained by NGS. To overcome this limitation, we improved the efficiency of sequence use: we constructed new data sets by quality checking, filtering the previously unused sequences to retain those with an error rate below 0.1%, and clipping the sequences according to the locations of the barcode and enzyme site. As a result, SNP detection efficiency increased by over 17%. In this paper, we present this method and the applied programs for detecting more SNPs from the discarded sequences.
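The "error rate below 0.1%" criterion maps directly onto Phred quality scores, since a score Q corresponds to a base-call error probability of 10^(-Q/10) (so 0.1% is Q30). The sketch below filters reads by mean per-base error rate; the helper names and sample quality lists are illustrative assumptions, not the paper's pipeline.

```python
# Phred-score quality filtering: keep reads whose mean error rate < 0.1%.
def phred_error(q):
    """Base-call error probability for Phred score q."""
    return 10 ** (-q / 10)

def keep_read(qualities, max_error=0.001):
    """Keep a read only if its mean per-base error rate is below max_error."""
    mean_err = sum(phred_error(q) for q in qualities) / len(qualities)
    return mean_err < max_error

good = [35, 36, 34, 38]   # all Q >= 30 -> error well below 0.1%
bad = [20, 22, 19, 21]    # around 1% error per base
print(keep_read(good), keep_read(bad))
```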

A Study on the Noise Reduction Method for Data Transmission of VLBI Data Processing System (VLBI 자료처리 시스템의 데이터 전송에서 잡음방지에 관한 연구)

  • Son, Do-Sun;Oh, Se-Jin;Yeom, Jae-Hwan;Roh, Duk-Gyoo;Jung, Jin-Seung;Oh, Chung-Sik
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.4
    • /
    • pp.333-340
    • /
    • 2011
  • The KJJVC (Korea-Japan Joint VLBI Correlator) was installed at the KJCC (Korea-Japan Correlation Center) and has been operated by KASI (Korea Astronomy and Space Science Institute) since 2009. The KJJVC can correlate VLBI data observed through the KVN (Korean VLBI Network), VERA (VLBI Exploration of Radio Astrometry), the JVN (Japanese VLBI Network), and their joint network array, and is used as a dedicated computer for processing observed data for scientific purposes. The KJJVC adopts the VSI (VLBI Standard Interface), the VLBI international standard, as the data input-output specification between components. In particular, for correlation, data are transmitted at 1024 Mbps between the Mark5B high-speed playback unit and the RVDB (Raw VLBI Data Buffer). The EMI (electromagnetic interference) generated by such high-speed transmission causes data loss, which occurs frequently with long transmission cables and ultimately causes data recognition errors by lowering the voltage level of the digital signal. In this paper, to minimize the data loss identified by measuring the EMI noise level in VSI transmission, three methods are proposed: 1) RC filtering, 2) impedance matching using a microstrip line, and 3) signal buffering using a differential line driver. To verify the effectiveness of each method, performance evaluations were conducted through implementation and simulation, and each proposed method was confirmed to be effective for high-speed data transmission under the VSI specification.
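For the RC-filtering option, the basic design quantity is the first-order low-pass cutoff f_c = 1/(2πRC): the filter must pass the signal band while attenuating higher-frequency EMI. The component values below are a back-of-envelope illustration, not the values used on the KJJVC transmission lines.

```python
# First-order RC low-pass cutoff frequency: f_c = 1 / (2 * pi * R * C).
import math

def rc_cutoff_hz(r_ohm, c_farad):
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# e.g. a 50-ohm line with a 10 pF shunt capacitor:
fc = rc_cutoff_hz(50.0, 10e-12)
print(f"{fc / 1e6:.1f} MHz")   # cutoff in MHz
```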

Deformation monitoring of Daejeon City using ALOS-1 PALSAR - Comparing the results by PSInSAR and SqueeSAR - (ALOS-1 PALSAR 영상을 이용한 대전지역 변위 관측 - PSInSAR와 SqueeSAR 분석 결과 비교 -)

  • Kim, Sang-Wan
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.6
    • /
    • pp.567-577
    • /
    • 2016
  • SqueeSAR is a new technique that combines Persistent Scatterers (PS) and Distributed Scatterers (DS) for deformation monitoring. Although many PSs are available in urban areas, SqueeSAR analysis can be beneficial for increasing measurement density not only on natural targets but also on smooth surfaces in urban environments. The height of each target is generally required to remove the topographic phase in interferometric SAR processing. PSInSAR analysis, which uses PSs only, is not affected by DEM resolution because the height error of the initial input DEM at each PS is precisely compensated in the PS processing chain. In contrast, SqueeSAR can be affected by DEM resolution and precision, since it applies spatial average filtering to DS targets to increase the signal-to-noise ratio (SNR). In this study we examine the effect of DEM resolution on deformation measurement by PSInSAR and SqueeSAR. With ALOS-1 PALSAR L-band data acquired over Daejeon, Korea, two DEMs were used in InSAR processing for comparison: a 1 m LIDAR DEM and the SRTM 1-arcsecond (~30 m) DEM. As expected, the PSInSAR results are almost identical regardless of which DEM is used, while the SqueeSAR results show improved time-series quality with the 1 m LIDAR DEM. The density of InSAR measurement points is also about five times that of the PSInSAR analysis.
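The reason spatial averaging of DS targets raises SNR can be seen by averaging interferometric phase as complex phasors: random phase noise cancels while the common signal phase survives. The window size and noise level below are illustrative, not values from the ALOS-1 processing.

```python
# Averaging noisy phases as unit phasors recovers the common phase.
import cmath, random

def mean_phase(phases):
    """Average phases as unit phasors; return the argument of the mean."""
    s = sum(cmath.exp(1j * p) for p in phases)
    return cmath.phase(s)

random.seed(0)
true_phase = 0.8                        # common deformation phase (radians)
noisy = [true_phase + random.gauss(0.0, 0.3) for _ in range(25)]
print(round(mean_phase(noisy), 3))      # close to 0.8
```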

Hepatic Vessel Segmentation using Edge Detection (Edge Detection을 이용한 간 혈관 추출)

  • Seo, Jeong-Joo;Park, Jong-Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.3
    • /
    • pp.51-57
    • /
    • 2012
  • The hepatic vessel tree is the key structure for hepatic disease diagnosis and liver surgery planning. In particular, it is used to evaluate donor and recipient livers for LDLT (Living Donor Liver Transplantation) and to estimate the volumes of the left and right hepatic lobes to secure patient safety in the LDLT. In this study, we propose a method that applies Canny edge detection, which is robust to noise, to liver images for automatic segmentation of the hepatic vessel tree in contrast-enhanced abdominal MDCT images. Optimized parameters of the Canny algorithm are determined using histograms and average pixel values of various liver CT images; using these common parameters is more time-efficient than adjusting the parameters manually for each CT image. Candidate hepatic vessels are then extracted by threshold filtering around the detected vessel edges. Finally, a procedure that detects false negatives and false positives in the horizontal and vertical directions adds the false negatives to the vessel candidates and removes the false positives. As a result, the hepatic vessel trees of various patients are accurately reconstructed in 3D.
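The Canny parameters being tuned here are chiefly the double thresholds of the hysteresis step: strong edges (at or above the high threshold) are kept, and weak edges (at or above the low threshold) survive only when connected to a strong one. A hedged 1-D sketch, with illustrative thresholds and a toy gradient profile rather than anything from the paper:

```python
# 1-D hysteresis thresholding, the double-threshold step of Canny.
def hysteresis_1d(grad, low, high):
    keep = [g >= high for g in grad]         # strong edges seed the result
    changed = True
    while changed:                           # grow strong edges into weak ones
        changed = False
        for i, g in enumerate(grad):
            if keep[i] or g < low:
                continue
            if (i > 0 and keep[i - 1]) or (i + 1 < len(grad) and keep[i + 1]):
                keep[i] = changed = True
    return [1 if k else 0 for k in keep]

grad = [2, 6, 9, 5, 1, 6, 2]   # toy gradient magnitudes
print(hysteresis_1d(grad, low=4, high=8))
```

Note how the isolated weak response (the 6 near the end) is rejected because it never touches a strong edge, which is what makes the step robust to noise.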

A Study on the Dyadic Sorting method for the Regularization in DT-MRI (Dyadic Sorting 방법을 이용한 DT-MRI Regularization에 관한 연구)

  • Kim, Tae-Hwan;Woo, Jong-Hyung;Lee, Hoon;Kim, Dong-Youn
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.47 no.4
    • /
    • pp.30-39
    • /
    • 2010
  • Since the diffusion tensor from Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is sensitive to noise, the principal eigenvector (PEV) calculated from the tensor can be erroneous, and tractography obtained from the PEV can deviate from the real fiber tract. A regularization process is therefore needed to eliminate the noise. In this paper, to reduce noise in DT-MRI measurements, the Dyadic Sorting (DS) method, which regularizes the eigenvalues and eigenvectors, is applied to tractography. To re-sort the eigenvalues and eigenvectors, the DS method uses an intervoxel overlap function that measures the overlap between eigenvalue-eigenvector pairs within a $3\times3$ pixel neighborhood. We applied the DS method to three-dimensional volumes, and we discuss error analysis and numerical studies on synthetic and experimental data. The results show that the DS method is more efficient than median filtering by 79.97%~83.64% in AAE and 85.62%~87.76% in AFA for the corticospinal tract of the experimental data.
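An overlap measure between two eigenvalue-eigenvector systems, in the spirit of the intervoxel overlap function above, can be written as an eigenvalue-weighted sum of squared dot products between paired eigenvectors (1.0 for identical systems). The exact normalization the paper uses may differ; this is one common form, with illustrative eigen-systems.

```python
# Eigenvalue-weighted overlap between two eigen-systems (one common form).
def overlap(lams1, vecs1, lams2, vecs2):
    num = sum(l1 * l2 * sum(a * b for a, b in zip(v1, v2)) ** 2
              for l1, v1, l2, v2 in zip(lams1, vecs1, lams2, vecs2))
    den = sum(l1 * l2 for l1, l2 in zip(lams1, lams2))
    return num / den

eye = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]     # orthonormal eigenvectors
lams = [1.5, 0.4, 0.3]                      # eigenvalues, largest first
print(overlap(lams, eye, lams, eye))        # identical tensors -> 1.0
swapped = [eye[1], eye[0], eye[2]]          # principal axes disagree
print(overlap(lams, eye, lams, swapped))    # much lower overlap
```

Dyadic sorting would permute the eigenvalue-eigenvector pairs of a noisy voxel to maximize this overlap with its neighbors before tracking.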