• Title/Summary/Keyword: algorithm


CT-Derived Deep Learning-Based Quantification of Body Composition Associated with Disease Severity in Chronic Obstructive Pulmonary Disease (CT 기반 딥러닝을 이용한 만성 폐쇄성 폐질환의 체성분 정량화와 질병 중증도)

  • Jae Eun Song;So Hyeon Bak;Myoung-Nam Lim;Eun Ju Lee;Yoon Ki Cha;Hyun Jung Yoon;Woo Jin Kim
    • Journal of the Korean Society of Radiology
    • /
    • v.84 no.5
    • /
    • pp.1123-1133
    • /
    • 2023
  • Purpose Our study aimed to evaluate the association between automated quantified body composition on CT and pulmonary function or quantitative lung features in patients with chronic obstructive pulmonary disease (COPD). Materials and Methods A total of 290 patients with COPD were enrolled in this study. The volume of muscle and subcutaneous fat, area of muscle and subcutaneous fat at T12, and bone attenuation at T12 were obtained from chest CT using a deep learning-based body segmentation algorithm. Parametric response mapping-derived emphysema (PRMemph), PRM-derived functional small airway disease (PRMfSAD), and airway wall thickness (AWT)-Pi10 were quantitatively assessed. The association between body composition and outcomes was evaluated using Pearson's correlation analysis. Results The volume and area of muscle and subcutaneous fat were negatively associated with PRMemph and PRMfSAD (p < 0.05). Bone density at T12 was negatively associated with PRMemph (r = -0.1828, p = 0.002). The volume and area of subcutaneous fat and bone density at T12 were positively correlated with AWT-Pi10 (r = 0.1287, p = 0.030; r = 0.1668, p = 0.005; r = 0.1279, p = 0.031). However, muscle volume was negatively correlated with the AWT-Pi10 (r = -0.1966, p = 0.001). Muscle volume was significantly associated with pulmonary function (p < 0.001). Conclusion Body composition, automatically assessed using chest CT, is associated with the phenotype and severity of COPD.
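
The association analysis described above reduces to Pearson correlations between a CT-derived body-composition measure and an imaging outcome. A minimal SciPy sketch of that step (variable names and values are illustrative only, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical CT-derived muscle volume (cm^3) and PRM-emphysema (%) for 290 subjects
muscle_volume = rng.normal(3000, 400, size=290)
prm_emph = 25 - 0.004 * muscle_volume + rng.normal(0, 2, size=290)

# Pearson's correlation coefficient and p-value, as used for each pairing in the abstract
r, p = stats.pearsonr(muscle_volume, prm_emph)
print(f"Pearson r = {r:.4f}, p = {p:.3f}")
```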

Analysis of Applicability of RPC Correction Using Deep Learning-Based Edge Information Algorithm (딥러닝 기반 윤곽정보 추출자를 활용한 RPC 보정 기술 적용성 분석)

  • Jaewon Hur;Changhui Lee;Doochun Seo;Jaehong Oh;Changno Lee;Youkyung Han
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.387-396
    • /
    • 2024
  • Most very high-resolution (VHR) satellite images provide rational polynomial coefficients (RPC) data to facilitate the transformation between ground coordinates and image coordinates. However, the initial RPC often contains geometric errors, necessitating correction through matching with ground control points (GCPs). A GCP chip is a small image patch extracted from an orthorectified image together with the height information of its center point, which can be directly used for geometric correction. Many studies have focused on area-based matching methods to accurately align GCP chips with VHR satellite images. In cases with seasonal differences or changed areas, edge-based algorithms are often used for matching because relying solely on pixel values is difficult. However, traditional edge extraction algorithms, such as Canny edge detectors, require appropriate threshold settings tailored to the spectral characteristics of satellite images. Therefore, this study utilizes deep learning-based edge information that is insensitive to the regional characteristics of satellite images for matching. Specifically, we use a pretrained pixel difference network (PiDiNet) to generate the edge maps for both satellite images and GCP chips. These edge maps are then used as input for normalized cross-correlation (NCC) and relative edge cross-correlation (RECC) to identify the peak points with the highest correlation between the two edge maps. To remove mismatched pairs and thus obtain the bias-compensated RPC, we iteratively apply data snooping. Finally, we compare the results qualitatively and quantitatively with those obtained from traditional NCC and RECC methods. The PiDiNet-based approach achieved high matching accuracy, with root mean square error (RMSE) values ranging from 0.3 to 0.9 pixels. However, the PiDiNet-generated edges were thicker than those from the Canny method, leading to slightly lower registration accuracy in some images. Nevertheless, PiDiNet consistently produced characteristic edge information, allowing successful matching even in challenging regions. This study demonstrates that improving the robustness of edge-based registration methods can facilitate effective registration across diverse regions.
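
A minimal sketch of the correlation-matching step described above, using plain NumPy NCC between two edge maps (PiDiNet inference, RECC, and the data-snooping loop are omitted; array names are illustrative):

```python
import numpy as np

def ncc_match(image_edges: np.ndarray, chip_edges: np.ndarray):
    """Exhaustive normalized cross-correlation of a GCP-chip edge map over a
    satellite edge map; returns the offset of the NCC peak and its score."""
    h, w = chip_edges.shape
    H, W = image_edges.shape
    # zero-mean, unit-variance normalization of the chip edge map
    chip = (chip_edges - chip_edges.mean()) / (chip_edges.std() + 1e-8)
    scores = np.empty((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            win = image_edges[i:i + h, j:j + w]
            win = (win - win.mean()) / (win.std() + 1e-8)
            scores[i, j] = (win * chip).mean()
    # the peak of the correlation surface gives the matching offset
    peak = np.unravel_index(np.argmax(scores), scores.shape)
    return peak, scores[peak]
```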

A study on evaluation of the image with washed-out artifact after applying scatter limitation correction algorithm in PET/CT exam (PET/CT 검사에서 냉소 인공물 발생 시 산란 제한 보정 알고리즘 적용에 따른 영상 평가)

  • Ko, Hyun-Soo;Ryu, Jae-kwang
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.1
    • /
    • pp.55-66
    • /
    • 2018
  • Purpose In PET/CT exams, washed-out artifacts can occur due to severe patient motion or high specific activity, degrading both qualitative reading and quantitative analysis. Scatter limitation correction, provided by GE, is an algorithm that corrects washed-out artifacts and recovers the images in PET scans. The purpose of this study was to measure, in a phantom experiment, the specific-activity threshold below which images with washed-out artifacts recover their original uptake values, and to compare the quantitative values of clinical patient data before and after correction. Materials and Methods PET and CT images of a $^{68}Ge$ cylinder phantom were acquired with no misalignment (D0) and with 1, 2, 3, and 4 cm of misalignment (D1, D2, D3, D4), at 20 levels of specific activity from 20 to 20,000 kBq/ml. We also measured, in 34 patients who underwent $^{18}F-FDG$ fusion whole-body PET/CT exams, the distance of misalignment of the Foley catheter line between the CT and PET images, the specific activity that produced the washed-out artifact, the $SUV_{mean}$ of muscle in the artifact slice, the $SUV_{max}$ of lesions in the artifact slice, and the $SUV_{max}$ of other lesions outside the artifact slice, each before and after correction. SPSS 21 was used to analyze the difference in SUV before and after scatter limitation correction with a paired t-test. Results In the phantom experiment, the $SUV_{mean}$ of the $^{68}Ge$ cylinder decreased as the specific activity of $^{18}F$ increased, and decreased further as the CT-PET misalignment increased; conversely, the effect of the correction increased with the misalignment. No washed-out artifact appeared below 50 kBq/ml, where $SUV_{mean}$ remained at its original value. With scatter limitation correction, $SUV_{mean}$ recovered to its original value (0.95) below 120 kBq/ml on D0 and D1, below 100 kBq/ml on D2 and D3, and below 80 kBq/ml on D4. In the 34 clinical patients, the average misalignment was 2.02 cm and the average specific activity producing the washed-out artifact was 490.15 kBq/ml. The average $SUV_{mean}$ of muscles and the average $SUV_{max}$ of lesions in the artifact slice differed significantly before and after correction according to the paired t-test (t=-13.805, p=0.000; t=-2.851, p=0.012), whereas the average $SUV_{max}$ of lesions outside the artifact slice showed no significant difference (t=-1.173, p=0.250). Conclusion The scatter limitation correction algorithm on GE PET/CT scanners helps correct washed-out artifacts caused by patient motion or high specific activity and recover the PET images. When reading an image with washed-out artifacts, measuring the CT-PET misalignment and the specific activity and applying the scatter limitation algorithm allows more accurate analysis without repeating the scan.
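
The before/after SUV comparison above comes down to a paired t-test. A minimal SciPy sketch as a stand-in for the SPSS 21 analysis (the arrays are illustrative, not the study's measurements):

```python
import numpy as np
from scipy import stats

# Hypothetical SUVmean of muscle in the artifact slice, before and after correction
suv_before = np.array([1.8, 2.1, 1.5, 2.4, 1.9])
suv_after = np.array([2.3, 2.6, 2.0, 2.9, 2.2])

# Paired t-test on the same patients measured twice
t, p = stats.ttest_rel(suv_before, suv_after)
print(f"paired t = {t:.3f}, p = {p:.3f}")
```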

A Study of Factors Associated with Software Developers Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.191-204
    • /
    • 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio of software (SW) developers in South Korea was 25% in the 2012 fiscal year. Moreover, the unfilled recruitment ratio of highly qualified SW developers reaches almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of the IT industry. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Thanks to this effort, it has become easier to find young, beginning-level SW developers. However, it is still hard for many IT companies to recruit highly qualified SW developers, because becoming an expert SW developer requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new ones. Therefore, this study surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. We carried out a survey from September 2014 to October 2014, targeting 130 SW developers working in the IT industry in South Korea. We gathered the demographic information and characteristics of the respondents, the work environment of the SW industry, and the social position of SW developers. Afterward, a regression analysis and a decision tree method were performed to analyze the data. These two widely used data mining techniques have explanatory power and are mutually complementary. We first performed a linear regression to find the important factors associated with the job continuity intention of SW developers. The result showed that the 'expected age' to work as a SW developer was the most significant factor associated with the job continuity intention. We suppose that the major cause of this phenomenon is a structural problem of the IT industry in South Korea, which requires SW developers to move from development to management as they are promoted. Also, the 'motivation' to become a SW developer and the 'personality (introverted tendency)' of a SW developer are highly important factors associated with the job continuity intention. Next, the decision tree method was performed to extract the characteristics of highly motivated developers and less motivated ones. We used the well-known C4.5 algorithm for the decision tree analysis. The results showed that 'motivation', 'personality', and 'expected age' were important factors influencing the job continuity intention, similar to the results of the regression analysis. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules for job continuity: a person with a high ability to learn new technology tends to work as a SW developer for a longer period of time. The decision rules also showed that the 'social position' of SW developers and the 'prospects' of the SW industry were minor factors influencing job continuity intentions. On the other hand, 'type of employment (regular/non-regular position)' and 'type of company (ordering company/service-providing company)' did not affect the job continuity intention in either method.
In this research, we examined the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts. They can also help in building SW developer fostering policies and in solving the problem of unfilled recruitment of SW developers in South Korea.
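
A toy sketch of the two analysis steps described above, with scikit-learn's CART decision tree standing in for C4.5 (the column names are illustrative stand-ins for the survey items, not the study's data):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical survey responses: factors and the job continuity intention (1-5 Likert)
df = pd.DataFrame({
    "expected_age":  [45, 50, 38, 55, 42, 60, 35, 48],
    "motivation":    [4, 5, 2, 5, 3, 5, 1, 4],
    "learn_ability": [3, 5, 2, 4, 3, 5, 2, 4],
    "continuity":    [3, 5, 2, 5, 3, 5, 1, 4],
})
X, y = df[["expected_age", "motivation", "learn_ability"]], df["continuity"]

# Step 1: linear regression coefficients as factor weights
print(LinearRegression().fit(X, y).coef_)

# Step 2: decision tree (CART here, C4.5 in the paper) for human-readable rules
tree = DecisionTreeClassifier(max_depth=2).fit(X, (y >= 4).astype(int))
print(export_text(tree, feature_names=list(X.columns)))
```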

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes applying CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN's strength lies in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) by using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The image in which each graph is drawn is $40 \times 40$ pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets. We used 80% of the total dataset as the training dataset and the remaining 20% as the validation dataset. The CNN classifiers are then trained on the images of the training dataset in the final step. Regarding the parameters of CNN-FG, we adopted two convolution filters ($5 \times 5 \times 6$ and $5 \times 5 \times 9$) in the convolution layer. In the pooling layer, a $2 \times 2$ max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer to the Softmax function. To validate our model, CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models.
Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
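
A minimal Keras sketch of the CNN-FG architecture as stated in the abstract (40x40 RGB fluctuation-graph inputs, 5x5 convolutions with 6 and 9 filters, 2x2 max pooling, dense layers of 900 and 32 nodes, 2-node softmax output). The optimizer and the placement of a pooling layer after each convolution are assumptions not given in the abstract:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    # 40x40 RGB fluctuation-graph image as input
    layers.Conv2D(6, (5, 5), activation="relu", input_shape=(40, 40, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(9, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(900, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(2, activation="softmax"),  # upward vs. downward market direction
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```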

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. Because it avoids investment risk structurally, it is stable for managing large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It can handle billions of examples in limited-memory environments and learns very quickly compared with traditional boosting methods, so it is frequently used in many fields of data analysis and has numerous advantages. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates the investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect portfolio performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return, with the long test period providing ample sample data. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and estimation error. The total cumulative return was 45.748%, about 5% higher than that of the risk parity model, and the estimation errors were reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment thus showed an improvement of portfolio performance achieved by reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the most fundamental question: whether the past characteristics of assets will continue into the future in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; here we suggest a new method for reducing the estimation errors of an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
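
A sketch of the combination described above: an XGBoost regressor forecasts each asset's next-period volatility, and the forecasts feed a (naive, inverse-volatility) risk-parity weighting. This illustrates the idea only; the paper's covariance-based pipeline and back-testing are not reproduced:

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

def predicted_risk_parity_weights(returns: pd.DataFrame, lags: int = 20) -> pd.Series:
    """returns: daily returns with one column per asset (sector index)."""
    predicted_vol = {}
    for asset in returns.columns:
        vol = returns[asset].rolling(lags).std().dropna()
        # features: the last `lags` realized volatilities; target: the current volatility
        X = np.column_stack([vol.shift(i) for i in range(1, lags + 1)])[lags:]
        y = vol.values[lags:]
        model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)
        # one-step-ahead volatility forecast from the most recent feature row
        predicted_vol[asset] = float(model.predict(X[-1:])[0])
    # naive risk parity: weights proportional to inverse predicted risk
    inv = 1.0 / pd.Series(predicted_vol)
    return inv / inv.sum()
```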

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.85-109
    • /
    • 2018
  • A recommender system recommends the items a customer is expected to purchase in the future based on his or her previous purchase behavior, and has served as a tool for realizing one-to-one personalization for e-commerce service companies. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, are designed to generate the recommendation list using a single criterion: the 'overall rating'. However, this has critical limitations for understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect feedback from their customers in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multidimensional viewpoints, and they are easy to handle and analyze because they are quantitative. However, recommendation using multicriteria ratings also has a limitation: it may omit detailed information on a user's preference because, in most cases, it considers only three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommendation system that selectively uses the results from 'traditional CF' and 'CF using multicriteria ratings'. Our proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, our system is designed to use traditional CF with the overall rating for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset for POI (point-of-interest) recommendation. Providing personalized POI recommendations is getting more attention as the popularity of location-based services such as Yelp and Foursquare increases. The dataset was collected from university students via a Web-based online survey system. Using the survey system, we collected overall ratings as well as ratings for each criterion for 48 POIs located near K university in Seoul, South Korea. The criteria were 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 items (80%) were used as the training dataset and the remaining 10 items (20%) as the validation dataset. To examine the effectiveness of the proposed system (i.e., the hybrid selective model), we compared its performance to the performances of two comparison models: traditional CF and CF with multicriteria ratings. The performances of the recommender systems were evaluated using two metrics: average MAE (mean absolute error) and precision-in-top-N, the percentage of truly high overall ratings among those that the model predicted would be the N most relevant items for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that our proposed system (avg. MAE = 0.584) outperformed traditional CF (avg. MAE = 0.591) as well as multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF performed worse than traditional CF on our dataset, which contradicts the results of most previous studies.
This result supports the premise of our study that people have two different types of preference schemes, holistic and composite. Besides MAE, the proposed system outperformed all the comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7. The results of the paired-samples t-test showed that, in terms of average MAE, our proposed system outperformed traditional CF at the 10% statistical significance level and multicriteria CF at the 1% statistical significance level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
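
A toy sketch of the selective idea described above: for each user, keep whichever predictor (overall-rating CF or multicriteria CF) fits that user's known ratings better. The two predictors are passed in as plain arrays here; the underlying CF computations are not reproduced:

```python
import numpy as np

def mae(pred, true):
    """Mean absolute error between predicted and true overall ratings."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(true))))

def selective_prediction(known_ratings, overall_fit, multi_fit, overall_test, multi_test):
    """Pick the model with the lower MAE on the user's known ratings, then
    return that model's predictions for the unseen items."""
    if mae(overall_fit, known_ratings) <= mae(multi_fit, known_ratings):
        return overall_test   # user behaves 'holistically'
    return multi_test         # user behaves 'compositely'

# toy usage with made-up ratings
known = [4, 3, 5]
print(selective_prediction(known, [3.8, 3.2, 4.6], [3.1, 2.5, 4.0], [4.2, 3.9], [3.5, 3.1]))
```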

Diagnosis of Obstructive Sleep Apnea Syndrome Using Overnight Oximetry Measurement (혈중산소포화도검사를 이용한 폐쇄성 수면무호흡증의 진단)

  • Youn, Tak;Park, Doo-Heum;Choi, Kwang-Ho;Kim, Yong-Sik;Woo, Jong-Inn;Kwon, Jun-Soo;Ha, Kyoo-Seob;Jeong, Do-Un
    • Sleep Medicine and Psychophysiology
    • /
    • v.9 no.1
    • /
    • pp.34-40
    • /
    • 2002
  • Objectives: The gold standard for diagnosing obstructive sleep apnea syndrome (OSAS) is nocturnal polysomnography (NPSG). This is rather expensive and somewhat inconvenient, however, and consequently simpler and cheaper alternatives to NPSG have been proposed. Oximetry is appealing because of its widespread availability and ease of application. In this study, we evaluated whether oximetry alone can be used to diagnose or screen for OSAS. The diagnostic performance of an analysis algorithm using arterial oxygen saturation ($SaO_2$), based on the 'dip index', mean $SaO_2$, and CT90 (the percentage of time spent at $SaO_2 < 90\%$), was compared with that of NPSG. Methods: Fifty-six patients referred for NPSG to the Division of Sleep Studies at Seoul National University Hospital were randomly selected. For each patient, NPSG with oximetry was carried out. From the oximetry data we obtained three variables: the dip index most linearly correlated with the respiratory disturbance index (RDI) from NPSG, mean $SaO_2$, and CT90, together with the diagnosis from NPSG. In each case, the sensitivity, specificity, and positive and negative predictive values of the oximetry data were calculated. Results: Thirty-nine of the fifty-six patients were diagnosed with OSAS by NPSG. The mean RDI was 17.5, mean $SaO_2$ was 94.9%, and mean CT90 was 5.1%. The dip index [4%-4sec] was most linearly correlated with RDI (r=0.861). With dip index [4%-4sec] $\geq 2$ as the diagnostic criterion, we obtained a sensitivity of 0.95, specificity of 0.71, positive predictive value of 0.88, and negative predictive value of 0.86. Using mean $SaO_2 \leq 97\%$, we obtained a sensitivity of 0.95, specificity of 0.41, positive predictive value of 0.79, and negative predictive value of 0.78. Using CT90 $\geq 5\%$, we obtained a sensitivity of 0.28, specificity of 1.00, positive predictive value of 1.00, and negative predictive value of 0.38. Conclusions: The dip index [4%-4sec] and mean $SaO_2 \leq 97\%$ obtained from nocturnal oximetry data are helpful in the diagnosis of OSAS. CT90 $\leq 5\%$ can also be used to exclude OSAS.
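
A minimal sketch of the screening evaluation described above: apply a threshold rule such as dip index [4%-4sec] >= 2 to oximetry-derived values and compute sensitivity, specificity, PPV, and NPV against the NPSG diagnosis (the numbers below are illustrative, not the study's data):

```python
import numpy as np

def screening_metrics(values, diagnosis, threshold):
    """Confusion-matrix metrics for the rule 'value >= threshold' vs. the NPSG diagnosis."""
    pred = np.asarray(values) >= threshold
    true = np.asarray(diagnosis, dtype=bool)
    tp = np.sum(pred & true); tn = np.sum(~pred & ~true)
    fp = np.sum(pred & ~true); fn = np.sum(~pred & true)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

dip_index = [0.5, 3.2, 7.9, 1.1, 4.4, 0.2]            # hypothetical dip index [4%-4sec] per patient
osas = [False, True, True, False, True, False]         # NPSG diagnosis
print(screening_metrics(dip_index, osas, threshold=2))
```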


Relationships on Magnitude and Frequency of Freshwater Discharge and Rainfall in the Altered Yeongsan Estuary (영산강 하구의 방류와 강우의 규모 및 빈도 상관성 분석)

  • Rhew, Ho-Sang;Lee, Guan-Hong
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.16 no.4
    • /
    • pp.223-237
    • /
    • 2011
  • The intermittent freshwater discharge has a critical influence on the biophysical environment and ecosystems of the Yeongsan Estuary, where the estuary dam has altered the continuous mixing of saltwater and freshwater. Although freshwater discharge is controlled by humans, the extreme events are mainly driven by heavy rainfall in the river basin and have varied impacts depending on their magnitude and frequency. This research aims to evaluate the magnitude and frequency of extreme freshwater discharges and to establish the magnitude-frequency relationships between basin-wide rainfall and freshwater inflow. Daily discharge and daily basin-averaged rainfall from Jan 1, 1997 to Aug 31, 2010 were used to determine the relations between discharge and rainfall. Consecutive daily discharges were grouped into independent events using a well-defined event-separation algorithm. Partial duration series were extracted to obtain the proper probability distribution function for extreme discharges and the corresponding rainfall events. Extreme discharge events above the threshold of 133,656,000 $m^3$ numbered 46 over 13.7 years, following a Weibull distribution with k=1.4. The 3-day accumulated rainfall ending one day before the peak discharge (the 1day-before-3day-sum rainfall) was chosen as the control variable for discharge, because its magnitude is best correlated with that of the extreme discharge events. The minimum corresponding 1day-before-3day-sum rainfall, 50.98 mm, was initially set as the threshold for selecting discharge-inducing rainfall cases. After this selection, however, the number of 1day-before-3day-sum rainfall cases exceeded the number of extreme discharge events. Canonical discriminant analysis indicates that the water level above the target level (-1.35 m EL.) is useful for dividing the 1day-before-3day-sum rainfall cases into discharge-inducing and non-discharge ones, and that a newly set threshold of 104 mm separates these two cases without error. The magnitude-frequency relationships between rainfall and discharge were then established with the newly selected 1day-before-3day-sum rainfalls: $D = 1.111 \times 10^8 + 1.677 \times 10^6 \, \overline{r}_{3day}$ ($\overline{r}_{3day} \geq 104$, $R^2 = 0.459$), $T_d = 1.326 \, T_{r3}^{0.683}$, and $T_d = 0.117 \exp[0.0155 \, \overline{r}_{3day}]$, where $D$ is the discharge volume, $\overline{r}_{3day}$ is the 1day-before-3day-sum rainfall, and $T_{r3}$ and $T_d$ are the return periods of the 1day-before-3day-sum rainfall and the freshwater discharge, respectively. These relations provide a framework for evaluating the effect of freshwater discharge on estuarine flow structure, water quality, and ecosystem responses from the perspective of magnitude and frequency.
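
For convenience, the three quoted relations can be written as small functions so that the discharge volume and return period implied by a given 1day-before-3day-sum rainfall can be evaluated directly (the coefficients are those in the abstract; this is a restatement, not code from the study):

```python
import math

def discharge_volume(r3day_mm: float) -> float:
    """D (m^3) from the 1day-before-3day-sum rainfall; valid for r3day >= 104 mm (R^2 = 0.459)."""
    return 1.111e8 + 1.677e6 * r3day_mm

def discharge_return_period_from_rainfall_period(t_r3_years: float) -> float:
    """T_d = 1.326 * T_r3^0.683."""
    return 1.326 * t_r3_years ** 0.683

def discharge_return_period_from_rainfall(r3day_mm: float) -> float:
    """T_d = 0.117 * exp(0.0155 * r3day)."""
    return 0.117 * math.exp(0.0155 * r3day_mm)

# e.g. a 120 mm 1day-before-3day-sum rainfall
print(discharge_volume(120), discharge_return_period_from_rainfall(120))
```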

Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics
    • /
    • v.22 no.1
    • /
    • pp.42-51
    • /
    • 2011
  • Nuclear medicine images (SPECT, PET) are widely used tools for assessing myocardial viability and perfusion. However, it is difficult to define the myocardial infarct region accurately. The purpose of this study was to investigate a methodological approach for automatically measuring rat myocardial infarct size using a polar map with an adaptive threshold. The rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq $^{18}F$-FDG. After 60 min of uptake, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using ordered subset expectation maximization (OSEM) 2D. To automatically delineate the myocardial contour and generate the polar map, we used QGS software (Cedars-Sinai Medical Center). The reference infarct size was defined as the infarct area percentage of the total left myocardium using TTC staining. We used three threshold methods: a predefined threshold (commonly used in other studies), Otsu's method, and a multiple Gaussian mixture model (MGMM). For the predefined threshold we applied values from 10% to 90% in steps of 10%. The Otsu algorithm calculates the threshold that maximizes the between-class variance. The MGMM method estimated the image intensity distribution using multiple Gaussian mixture models (MGMM2, ..., MGMM5) and calculated an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the below-threshold area relative to the total polar map area. The infarct size measured with each threshold method was evaluated by comparison with the reference infarct size. The mean differences between the polar map defect size by the predefined thresholds (20%, 30%, and 40%) and the reference infarct size were $7.04 \pm 3.44\%$, $3.87 \pm 2.09\%$, and $2.15 \pm 2.07\%$, respectively. For Otsu versus the reference infarct size, the difference was $3.56 \pm 4.16\%$; for MGMM, $2.29 \pm 1.94\%$. The predefined threshold (30%) showed the smallest mean difference with the reference infarct size. However, MGMM was more accurate than the predefined threshold when the reference infarct size was under 10% (MGMM: 0.006%, predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using a multiple Gaussian mixture model. The MGMM method provides an adaptive threshold for each subject and will be useful for automatic measurement of infarct size.
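
A minimal sketch of the adaptive-threshold idea (an assumption-laden stand-in, not the paper's implementation): fit a Gaussian mixture to the polar-map intensities, place the threshold where the low-uptake component stops being the more probable one, and report the below-threshold fraction as the defect size:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_infarct_size(polar_map: np.ndarray, n_components: int = 2) -> float:
    """Defect area (% of polar map) from a GMM-derived adaptive threshold."""
    values = polar_map.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(values)
    low = int(np.argmin(gmm.means_.ravel()))              # low-uptake (defect) component
    grid = np.linspace(values.min(), values.max(), 1000).reshape(-1, 1)
    post = gmm.predict_proba(grid)[:, low]
    crossing = grid[post < 0.5]                            # intensities where defect stops dominating
    threshold = crossing.min() if crossing.size else values.max()
    return 100.0 * float(np.mean(values < threshold))

# toy polar map: mostly normal uptake with a small low-uptake patch
rng = np.random.default_rng(1)
polar = np.concatenate([rng.normal(0.9, 0.05, 900), rng.normal(0.3, 0.05, 100)])
print(gmm_infarct_size(polar))
```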