• Title/Summary/Keyword: linear algorithm

Search Results: 4,036

Automatic Liver Segmentation of a Contrast Enhanced CT Image Using a Partial Histogram Threshold Algorithm (부분 히스토그램 문턱치 알고리즘을 사용한 조영증강 CT영상의 자동 간 분할)

  • Kyung-Sik Seo;Seung-Jin Park;Jong An Park
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.3
    • /
    • pp.189-194
    • /
    • 2004
  • Pixel values in contrast-enhanced computed tomography (CE-CT) images vary randomly. In addition, the middle part of the liver is difficult to segment because the pancreas has similar gray-level values in the abdomen. In this paper, an automatic liver segmentation method using a partial histogram threshold (PHT) algorithm is proposed to overcome the randomness of CE-CT images and to remove the pancreas. After histogram transformation, an adaptive multi-modal threshold is used to find the range of gray-level values of the liver structure. The PHT algorithm is then applied to remove the pancreas, and morphological filtering is performed to remove unnecessary objects and smooth the boundary. Four CE-CT slices from each of eight patients were selected to evaluate the proposed method. The mean normalized areas of the automatic segmentation method II (ASM II) using the PHT and of the manual segmentation method (MSM) were 0.1671 and 0.1711, respectively, showing very little difference between the two methods, and the mean area error rate between ASM II and MSM was 6.8339%. These experimental results indicate that the proposed method performs comparably to manual segmentation by a medical doctor.
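
As an illustration of the multi-modal thresholding step described in this abstract, the sketch below smooths an image histogram and walks outward from the dominant peak to the nearest valleys to obtain a gray-level range. It is a simplified stand-in, not the authors' PHT algorithm; the loader and variable names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def multimodal_threshold(image, bins=256, sigma=3.0):
    """Pick a gray-level range around the dominant histogram mode.

    Simplified stand-in for an adaptive multi-modal threshold: smooth
    the histogram, locate the tallest peak, and walk outward to the
    nearest valley on each side.
    """
    hist, edges = np.histogram(image.ravel(), bins=bins)
    smooth = gaussian_filter1d(hist.astype(float), sigma)
    peak = int(np.argmax(smooth))
    lo = peak
    while lo > 0 and smooth[lo - 1] <= smooth[lo]:   # descend left
        lo -= 1
    hi = peak
    while hi < bins - 1 and smooth[hi + 1] <= smooth[hi]:  # descend right
        hi += 1
    return edges[lo], edges[hi + 1]

# Usage: mask a CT slice to the estimated liver gray-level range.
# ct = load_slice(...)                    # hypothetical loader
# t_lo, t_hi = multimodal_threshold(ct)
# liver_mask = (ct >= t_lo) & (ct <= t_hi)
```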

Analysis of the MODIS-Based Vegetation Phenology Using the HANTS Algorithm (HANTS 알고리즘을 이용한 MODIS 영상기반의 식물계절 분석)

  • Choi, Chul-Hyun;Jung, Sung-Gwan
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.3
    • /
    • pp.20-38
    • /
    • 2014
  • Vegetation phenology is among the most important indicators of ecosystem response to climate change, so forest phenology must be monitored continuously. This paper analyzes the phenological characteristics of forests in South Korea using the MODIS vegetation index, with errors from clouds and other sources removed by the HANTS (Harmonic ANalysis of Time Series) algorithm. After using HANTS to reduce the noise in the satellite-based vegetation index data, we confirmed that phenological transition dates vary strongly along altitudinal gradients. The start, end, and length of the growing season were estimated to change by +0.71 day/100 m, -1.33 day/100 m, and -2.04 day/100 m in needleleaf forests; +1.50 day/100 m, -1.54 day/100 m, and -3.04 day/100 m in broadleaf forests; and +1.39 day/100 m, -2.04 day/100 m, and -3.43 day/100 m in mixed forests. This linear pattern of variation along altitudinal gradients was related to air temperature, and broadleaf forests were found to be more sensitive to temperature changes than needleleaf forests.
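
For readers unfamiliar with HANTS, the following is a minimal sketch of its core idea: iteratively fit a truncated Fourier series and reject samples that fall well below the fit, since cloud contamination biases vegetation indices low. Parameters and the example series are illustrative, not taken from the paper.

```python
import numpy as np

def hants(values, num_harmonics=2, tol=0.05, max_iter=10, period=None):
    """Harmonic ANalysis of Time Series (HANTS), minimal form."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    period = period or n
    t = np.arange(n)
    # Design matrix: [1, cos(wt), sin(wt), cos(2wt), sin(2wt), ...]
    cols = [np.ones(n)]
    for k in range(1, num_harmonics + 1):
        w = 2.0 * np.pi * k / period
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    keep = np.ones(n, dtype=bool)
    fit = values.copy()
    for _ in range(max_iter):
        coef, *_ = np.linalg.lstsq(A[keep], values[keep], rcond=None)
        fit = A @ coef
        bad = (fit - values) > tol        # points far BELOW the curve
        if not np.any(bad & keep):
            break
        keep &= ~bad                      # reject cloud-contaminated samples
    return fit

# Usage with one year of 16-day NDVI composites (23 samples):
# smoothed = hants(raw_ndvi, num_harmonics=2, tol=0.05, period=23)
```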

Speaker-Adaptive Speech Synthesis based on Fuzzy Vector Quantizer Mapping and Neural Networks (퍼지 벡터 양자화기 사상화와 신경망에 의한 화자적응 음성합성)

  • Lee, Jin-Yi;Lee, Gwang-Hyeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.1
    • /
    • pp.149-160
    • /
    • 1997
  • This paper is concerned with speaker-adaptive speech synthesis using a mapped codebook designed by fuzzy mapping on FLVQ (Fuzzy Learning Vector Quantization). FLVQ is used to design both the input and reference speakers' codebooks. The algorithm incorporates a fuzzy membership function into the LVQ (learning vector quantization) network. Unlike the LVQ algorithm, it minimizes the network output errors, which are the differences between the target and actual class membership values, and thereby minimizes the distances between training patterns and competing neurons. Speaker adaptation in speech synthesis is performed as follows: the input speaker's codebook is mapped to the reference speaker's codebook in fuzzy terms, the fuzzy VQ mapping replacing each codevector while preserving its fuzzy membership function. The codevector correspondence histogram is obtained by accumulating the vector correspondences along the DTW optimal path, and the fuzzy VQ mapping is used to design the mapped codebook. The mapped codebook is defined as a linear combination of the reference speaker's vectors, using each fuzzy histogram as a weighting function with membership values. In the adaptive synthesis stage, input speech is fuzzy vector-quantized with the mapped codebook, and FCM arithmetic is then used to synthesize speech adapted to the input speaker. The speaker adaptation experiments used speech from males in their thirties as the input speaker's speech and from a female in her twenties as the reference speaker's speech; the sentences used were /anyoung hasim nika/ and /good morning/. As a result of the experiments, we obtained synthesized speech adapted to the input speaker.
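
A minimal sketch of the codebook-mapping idea described above: accumulate a fuzzy correspondence histogram along a precomputed DTW path and form each mapped codevector as a membership-weighted linear combination of the reference speaker's codevectors. The array names and shapes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def mapped_codebook(u_in, u_ref, ref_codebook, dtw_path):
    """Build a mapped codebook from fuzzy memberships along a DTW path.

    u_in[i, m]  : membership of input frame i in input codevector m
    u_ref[j, k] : membership of reference frame j in reference codevector k
    dtw_path    : list of aligned frame pairs (i, j)
    """
    M, K = u_in.shape[1], u_ref.shape[1]
    hist = np.zeros((M, K))
    for i, j in dtw_path:                 # accumulate fuzzy correspondences
        hist += np.outer(u_in[i], u_ref[j])
    weights = hist / (hist.sum(axis=1, keepdims=True) + 1e-12)
    # Each mapped codevector is a weighted combination of reference vectors.
    return weights @ ref_codebook         # shape (M, feature_dim)
```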


Research on improvement of target tracking performance of LM-IPDAF through improvement of clutter density estimation method (클러터밀도 추정 방법 개선을 통한 LM-IPDAF의 표적 추적 성능 향상 연구)

  • Yoo, In-Je;Park, Sung-Jae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.5
    • /
    • pp.99-110
    • /
    • 2017
  • Improving tracking performance by estimating the states of multiple targets with radar is important. In a clutter environment, joint events occur between tracks and measurements in multiple-target tracking with a tracking filter, and the number of joint events increases exponentially with the number of tracks and measurements. Two issues must be considered when designing a multiple-target tracking filter for such environments. First, the filter should minimize the false track alarm rate by eliminating false tracks and quickly confirming target tracks, thereby improving false track discrimination (FTD) performance. Second, track maintenance performance should be improved by efficiently allocating each measurement to a track when a joint event occurs. With these two considerations, single-target data association techniques have been extended to multiple-target tracking filters; representative algorithms are the JIPDAF and the LM-IPDAF. This study introduces the LM-IPDAF algorithm, which computes a track existence probability based on clutter density without probabilistically evaluating a large number of assignment hypotheses, so that the computational load does not increase nonlinearly with the number of measurements and tracks. This paper also proposes a method to reduce the computational complexity by improving the clutter density estimation used to calculate the track existence probability of the LM-IPDAF. The performance was verified by comparison with the existing algorithm through simulation: the proposed method reduced the simulation processing time by approximately 20% while achieving equivalent performance in position RMSE and confirmed true tracks.
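
For context, the track existence probability in IPDA-family filters depends on a spatial clutter density. A common non-parametric estimate, sketched below, counts validated measurements inside the elliptical gate and divides by the gate volume; the paper's improved estimator is not reproduced here, and the gate threshold shown is just the 99% chi-square value for a 2-D measurement.

```python
import numpy as np

def gate_volume(S, gamma):
    """Area of the elliptical validation gate for 2-D innovation cov S."""
    return np.pi * gamma * np.sqrt(np.linalg.det(S))

def clutter_density(measurements, z_pred, S, gamma=9.21):
    """Sample-based spatial clutter density inside the validation gate.

    Counts measurements whose normalized innovation squared falls inside
    the gate and divides by the gate volume (a standard non-parametric
    estimate).
    """
    S_inv = np.linalg.inv(S)
    d2 = [(z - z_pred) @ S_inv @ (z - z_pred) for z in measurements]
    m = sum(1 for v in d2 if v <= gamma)  # validated measurement count
    return m / gate_volume(S, gamma)
```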

Robust 1D inversion of large towed geo-electric array datasets used for hydrogeological studies (수리지질학 연구에 이용되는 대규모 끄는 방식 전기비저항 배열 자료의 1 차원 강력한 역산)

  • Allen, David;Merrick, Noel
    • Geophysics and Geophysical Exploration
    • /
    • v.10 no.1
    • /
    • pp.50-59
    • /
    • 2007
  • The advent of towed geo-electrical array surveying on water and land has resulted in datasets approaching the magnitude of airborne electromagnetic surveys and most suited to 1D inversion. Robustness and complete automation are essential if processing and reliable interpretation of such data are to be viable. Sharp boundaries such as river beds and the tops of saline aquifers must be resolved, so the use of smoothness constraints must be minimised. Suitable inversion algorithms must intelligently handle low signal-to-noise-ratio data if conductive basement, which attenuates signal, is not to be misrepresented. A noise-level-aware inversion algorithm that operates with one elastic-thickness layer per electrode configuration has been coded. The noise-level-aware inversion identifies when conductive basement has attenuated signal levels below the noise level, and models conductive basement where appropriate. Layers in the initial models are distributed to span the effective depths of each of the geo-electric array quadrupoles, so the algorithm works optimally on data collected with arrays whose quadrupole effective depths are approximately exponentially distributed. Inversion of data from arrays with linear electrodes, used to reduce contact resistance, and from capacitive-line antennae is also plausible. This paper demonstrates the effectiveness of the algorithm using theoretical examples and an example from a salt-interception scheme on the Murray River, Australia.
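
The layer-placement rule mentioned above (one elastic-thickness layer per quadrupole, spanning its effective depth) can be sketched as follows. Placing interfaces at geometric midpoints is one plausible reading for arrays whose effective depths grow exponentially; the example depths are invented.

```python
import numpy as np

def initial_layer_interfaces(effective_depths):
    """Place one model layer per quadrupole, spanning its effective depth.

    Interfaces are set midway (in log space) between the sorted effective
    depths; a half-space extends below the deepest interface.
    """
    d = np.sort(np.asarray(effective_depths, dtype=float))
    return np.sqrt(d[:-1] * d[1:])   # geometric midpoints between depths

# e.g. a towed array with exponentially spaced quadrupole depths (metres):
# depths = 2.0 * 1.5 ** np.arange(8)
# interfaces = initial_layer_interfaces(depths)
```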

An estimation method for non-response model using Monte-Carlo expectation-maximization algorithm (Monte-Carlo expectation-maximization 방법을 이용한 무응답 모형 추정방법)

  • Choi, Boseung;You, Hyeon Sang;Yoon, Yong Hwa
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.3
    • /
    • pp.587-598
    • /
    • 2016
  • Non-response is one of the major issues in predicting election outcomes ahead of an election. A variety of non-response imputation methods may be employed to address it, but the forecasting results tend to vary with the method chosen. In this study, to improve electoral forecasts, we examined a model-based method of non-response imputation that applies the Monte Carlo Expectation Maximization (MCEM) algorithm introduced by Wei and Tanner (1990). The MCEM algorithm, based on maximum likelihood estimates (MLEs), is applied to solve the boundary solution problem under a non-ignorable non-response mechanism. We performed simulation studies to compare estimation performance among the MCEM, maximum likelihood, and Bayesian estimation methods. The simulation results showed that the MCEM method is a reasonable candidate for non-response model estimation. We also applied the MCEM method to Korean presidential election exit poll data from 2012 and investigated prediction performance using the modified within-precinct error (MWPE) criterion (Bautista et al., 2007).
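
As a self-contained illustration of MCEM itself (not the paper's non-response model), the sketch below estimates the mean of a normal sample that is right-censored: the E-step imputes each censored value with Monte Carlo draws from a truncated normal at the current estimate, and the M-step averages the completed data.

```python
import numpy as np
from scipy.stats import truncnorm

def mcem_censored_normal(obs, n_cens, cutoff, sigma=1.0,
                         n_draws=500, n_iter=50, seed=0):
    """MCEM for the mean of a normal sample right-censored at `cutoff`."""
    obs = np.asarray(obs, dtype=float)
    rng = np.random.default_rng(seed)
    mu = obs.mean()                          # crude starting value
    for _ in range(n_iter):
        # E-step: draw censored values from N(mu, sigma^2) truncated
        # to (cutoff, inf); `a` is the lower bound in standard units.
        a = (cutoff - mu) / sigma
        draws = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                              size=(n_draws, n_cens), random_state=rng)
        # M-step: completed-data MLE of the mean is a simple average.
        mu = (obs.sum() + draws.mean(axis=0).sum()) / (len(obs) + n_cens)
    return mu
```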

A Commissioning of 3D RTP System for Photon Beams

  • Kang, Wee-Saing
    • Proceedings of the Korean Society of Medical Physics Conference
    • /
    • 2002.09a
    • /
    • pp.119-120
    • /
    • 2002
  • The aim is to urge the need for elaborate commissioning of a 3D RTP system, based on firsthand experience. A 3D RTP system requires a large amount of data, such as beam data and patient data. Most radiation beam data are transferred directly from a 3D dose scanning system, and some other data are entered by editing; no error should occur while inputting parameters and data. For an RTP system using algorithms based on beam modeling, careless beam-data processing could also cause treatment errors. Beam data for 3 different photon beam qualities from two linear accelerators, patient data, and calculated results were commissioned. For PDD, the doses calculated by the Clarkson, convolution, superposition, and fast-superposition methods at 10 cm depth for a 10×10 cm field at 100 cm SSD were compared with measurements. An error in the SCD for one beam quality had been input by the service engineer: the SCD defined by the physicist is SAD plus d_max, but the entered value was just the SAD. That results in an MU increase of 100 × ((1 + d_max/SAD)² − 1)%. For a 10×10 cm open field at 1 m SSD and 10 cm depth in a uniform medium of relative electron density (RED) 1, the PDDs calculated by the four algorithms were similar to the measured values. For a 10×10 cm open field at 1 m SSD and 10 cm depth with a 5 cm thick inhomogeneity of RED 0.2 under a 2 cm thick RED 1 medium, the PDDs of the four algorithms ranged from 72.2% to 77.0% for 4 MV X-rays and from 90.9% to 95.6% for 6 MV X-rays; the PDD was largest for convolution and smallest for superposition. For a 15×15 cm symmetric wedged field, the wedge factor was not constant across calculation modes, even for the same geometry, because the system's wedge factor accounts for beam hardening and ray path; this definition requires users to change their concept of the wedge factor. RTP users should elaborately review beam data and calculation algorithms during commissioning.
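
The MU error formula reconstructed above can be checked with a two-line calculation; the d_max and SAD values here are illustrative, not taken from the commissioning report.

```python
# Worked example of the MU error: SCD entered as SAD instead of SAD + d_max.
# Assumed illustrative values: d_max = 1.5 cm (6 MV), SAD = 100 cm.
d_max, SAD = 1.5, 100.0
error_pct = 100.0 * ((1.0 + d_max / SAD) ** 2 - 1.0)
print(f"MU overestimate: {error_pct:.2f} %")   # -> 3.02 %
```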


ADMM algorithms in statistics and machine learning (통계적 기계학습에서의 ADMM 알고리즘의 활용)

  • Choi, Hosik;Choi, Hyunjip;Park, Sangun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.6
    • /
    • pp.1229-1244
    • /
    • 2017
  • In recent years, as demand for data-based analytical methodologies has increased in various fields, optimization methods have been developed to handle them. In particular, many constrained problems in statistics and machine learning can be solved by convex optimization. The alternating direction method of multipliers (ADMM) deals effectively with linear constraints and can be used as a parallel optimization algorithm. ADMM solves a complex original problem by splitting it into subproblems that are easier to optimize and combining their solutions, and it is useful for optimizing non-smooth or composite objective functions. It is widely used in statistics and machine learning because algorithms can be constructed systematically from duality theory and the proximal operator. In this paper, we examine applications of the ADMM algorithm in various fields related to statistics, focusing on two major points: (1) the splitting strategy for the objective function, and (2) the role of the proximal operator in explaining the augmented Lagrangian method and its dual problem. We also introduce methodologies that utilize regularization, and simulation results are presented to demonstrate effectiveness on the lasso.
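
A standard concrete instance of the splitting described above is ADMM for the lasso, sketched below: the coefficient update is a cached ridge-like linear solve, and the auxiliary update is the l1 proximal operator (soft thresholding). This is textbook ADMM, not code from the paper.

```python
import numpy as np

def lasso_admm(X, y, lam, rho=1.0, n_iter=200):
    """ADMM for the lasso: min 0.5*||Xb - y||^2 + lam*||b||_1.

    The objective is split as f(b) + g(z) subject to b = z.
    """
    n, p = X.shape
    # Factorize (X'X + rho*I) once; reused in every b-update.
    L = np.linalg.cholesky(X.T @ X + rho * np.eye(p))
    Xty = X.T @ y
    b = z = u = np.zeros(p)
    for _ in range(n_iter):
        # b-update: ridge-like solve via the cached Cholesky factor.
        rhs = Xty + rho * (z - u)
        b = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal operator of the l1 norm (soft thresholding).
        z = np.sign(b + u) * np.maximum(np.abs(b + u) - lam / rho, 0.0)
        # Dual ascent on the scaled multiplier.
        u = u + b - z
    return z

# Usage on simulated data:
# rng = np.random.default_rng(0)
# X = rng.standard_normal((100, 20)); beta = np.zeros(20); beta[:3] = 2.0
# y = X @ beta + 0.1 * rng.standard_normal(100)
# print(lasso_admm(X, y, lam=1.0).round(2))
```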

Construction of an artificial levee line in river zones using LiDAR Data (라이다 자료를 이용한 하천지역 인공 제방선 추출)

  • Choung, Yun-Jae;Park, Hyeon-Cheol;Jo, Myung-Hee
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2011.05a
    • /
    • pp.185-185
    • /
    • 2011
  • Mapping artificial levee lines, one of the major tasks in river zone mapping, is critical for preventing river floods and protecting the environments and ecosystems of river zones, so it is essential for the management and development of these zones. Coastal mapping, including river zone mapping, has historically been carried out using surveying technologies; photogrammetry, one of these technologies, is currently used for national river zone mapping in Korea. Airborne laser scanning has been used for coastal mapping in most advanced countries because of its ability to penetrate shallow water and its high vertical accuracy, which make LiDAR data efficient for monitoring and predicting significant topographic change in river zones. This paper introduces a method for constructing a 3D artificial levee line from a set of LiDAR points using normal vectors. The method involves multiple steps. First, a 2.5-dimensional Delaunay triangle mesh is generated based on the three nearest-neighbor points in the LiDAR data. Second, median filtering is applied to minimize noise. Third, edge selection algorithms are applied to extract break edges from the Delaunay triangle mesh using two normal vectors; in this research, two edge selection methods based on hypothesis testing are used. Fourth, edges extracted by both methods over the same range are selected as the intersection edge group. Fifth, linear feature edges in the intersection edge group that are unsuitable for composing a levee line are removed as far as possible, considering the vertical distance, slope, and connectivity of each edge. Sixth, the remaining suitable line segments are connected, each levee line segment joined to the one whose end points lie nearest to it horizontally and vertically, producing the initial levee line. Because the initial levee line consists of LiDAR points, it zigzags along the levee; thus, as the last step, a smoothing algorithm fits the initial levee line to the reference line, yielding the final 3D levee line. The proposed algorithm was applied to construct the 3D levee line at the Zng-San levee near Ham-Ahn Bo on the Nak-Dong river. Statistical results show that the constructed levee line has high accuracy in comparison with ground truth. This paper shows that using LiDAR data to construct 3D levee lines for river zone mapping is useful and efficient and, as a result, can replace ground surveying for this task.
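
The break-edge extraction step can be illustrated with a simple normal-vector test, sketched below: triangulate the points in plan view, compute a unit normal per triangle, and flag edges whose adjacent triangle normals disagree by more than a threshold angle. This replaces the paper's hypothesis-testing criterion with a plain angle threshold for brevity.

```python
import numpy as np
from scipy.spatial import Delaunay

def break_edges(points, angle_deg=30.0):
    """Flag candidate break edges in a 2.5-D LiDAR triangle mesh.

    points : (N, 3) array of x, y, z LiDAR coordinates.
    Returns a list of (vertex_i, vertex_j) index pairs.
    """
    tri = Delaunay(points[:, :2])             # triangulate in plan view
    simp = tri.simplices
    p = points[simp]                           # (n_tri, 3 vertices, xyz)
    n = np.cross(p[:, 1] - p[:, 0], p[:, 2] - p[:, 0])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    n[n[:, 2] < 0] *= -1                       # orient all normals upward
    cos_thr = np.cos(np.radians(angle_deg))
    edges = []
    for t, nbrs in enumerate(tri.neighbors):
        for nb in nbrs:
            if nb > t and np.dot(n[t], n[nb]) < cos_thr:  # each edge once
                shared = [v for v in simp[t] if v in simp[nb]]
                edges.append(tuple(shared))    # the shared break edge
    return edges
```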


Determination of the Optimal Height using the Simplex Algorithm in Network-RTK Surveying (Network-RTK측량에서 심플렉스해법을 이용한 최적표고 결정)

  • Lee, Suk Bae;Auh, Su Chang
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.24 no.1
    • /
    • pp.35-41
    • /
    • 2016
  • GNSS/Geoid positioning technology allows orthometric height determination using the geoidal height calculated from a geoid model together with the ellipsoidal height obtained by GNSS survey. In this study, Network-RTK surveying was performed over the benchmarks in the study area to analyze the height-positioning capability of Network-RTK. Orthometric heights were calculated by applying the Korean national geoid model KNGeoid13, both with and without site calibration, and the results were compared. The simplex algorithm was adopted for linear programming, and the heights of all benchmarks were calculated for both cases; the results were compared with the official benchmark heights of the National Geographic Information Institute. The mean height difference was 0.060 m with a standard deviation of 0.072 m for Network-RTK without site calibration, and 0.040 m with a standard deviation of 0.047 m with site calibration. With the linearization method used to obtain the optimal solution for the observations, height determination to within 0.033 m was shown to be achievable in GNSS Network-RTK positioning.
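
One common way to cast an optimal-height problem as a linear program, sketched below, is to minimize the sum of absolute deviations from the benchmark-derived heights. This is an assumed formulation for illustration, not necessarily the paper's, and SciPy's linprog is used in place of a hand-rolled simplex solver.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_height_l1(heights):
    """Height minimizing the sum of absolute deviations, cast as an LP.

    Variables are (h, t_1..t_n) with t_i >= |h - h_i|, encoded as the
    two linear constraints h - t_i <= h_i and -h - t_i <= -h_i.
    """
    h_i = np.asarray(heights, dtype=float)
    n = len(h_i)
    c = np.concatenate(([0.0], np.ones(n)))        # minimize sum of t_i
    A = np.vstack([np.hstack([np.ones((n, 1)), -np.eye(n)]),
                   np.hstack([-np.ones((n, 1)), -np.eye(n)])])
    b = np.concatenate([h_i, -h_i])
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# e.g. with hypothetical benchmark-derived heights (metres):
# optimal_height_l1([52.341, 52.338, 52.407, 52.365])
```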