• Title/Summary/Keyword: Parametric algorithm


CT-Derived Deep Learning-Based Quantification of Body Composition Associated with Disease Severity in Chronic Obstructive Pulmonary Disease (CT 기반 딥러닝을 이용한 만성 폐쇄성 폐질환의 체성분 정량화와 질병 중증도)

  • Jae Eun Song;So Hyeon Bak;Myoung-Nam Lim;Eun Ju Lee;Yoon Ki Cha;Hyun Jung Yoon;Woo Jin Kim
    • Journal of the Korean Society of Radiology
    • /
    • v.84 no.5
    • /
    • pp.1123-1133
    • /
    • 2023
  • Purpose Our study aimed to evaluate the association between automated quantified body composition on CT and pulmonary function or quantitative lung features in patients with chronic obstructive pulmonary disease (COPD). Materials and Methods A total of 290 patients with COPD were enrolled in this study. The volume of muscle and subcutaneous fat, area of muscle and subcutaneous fat at T12, and bone attenuation at T12 were obtained from chest CT using a deep learning-based body segmentation algorithm. Parametric response mapping-derived emphysema (PRMemph), PRM-derived functional small airway disease (PRMfSAD), and airway wall thickness (AWT)-Pi10 were quantitatively assessed. The association between body composition and outcomes was evaluated using Pearson's correlation analysis. Results The volume and area of muscle and subcutaneous fat were negatively associated with PRMemph and PRMfSAD (p < 0.05). Bone density at T12 was negatively associated with PRMemph (r = -0.1828, p = 0.002). The volume and area of subcutaneous fat and bone density at T12 were positively correlated with AWT-Pi10 (r = 0.1287, p = 0.030; r = 0.1668, p = 0.005; r = 0.1279, p = 0.031). However, muscle volume was negatively correlated with the AWT-Pi10 (r = -0.1966, p = 0.001). Muscle volume was significantly associated with pulmonary function (p < 0.001). Conclusion Body composition, automatically assessed using chest CT, is associated with the phenotype and severity of COPD.
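The associations above are reported as Pearson correlation coefficients (r) with p-values. A minimal sketch of computing Pearson's r from paired measurements; the variable names and numbers below are illustrative, not data from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up paired values, e.g. muscle volume (L) vs. a lung-function measure.
muscle_volume = [2.1, 2.5, 1.8, 3.0, 2.7]
lung_function = [1.9, 2.4, 1.6, 2.9, 2.5]
print(round(pearson_r(muscle_volume, lung_function), 3))
```

A negative r (as reported for muscle volume vs. AWT-Pi10) simply means the covariance term above is negative.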

Development and Application of Pipeline Network Optimization Simulator (파이프라인 네트워킹 최적화 모델의 개발 및 활용)

  • Sung Won-Mo;Kwon Oh-kwang;Lee Chung-Hwan;Huh Dae-ki,
    • Journal of the Korean Institute of Gas
    • /
    • v.1 no.1
    • /
    • pp.56-63
    • /
    • 1997
  • This paper presents a hybrid network model (HY-PIPENET) that implements a minimum cost spanning tree (MCST) algorithm to determine the optimum path and a constrained derivative (CD) method to select the optimum pipe diameter. HY-PIPENET was validated against published data for a 6-node/7-pipe network, and the system was then optimized with the MCST-CD method. As a result, it was found that gas can be supplied sufficiently at a lower pressure and with smaller pipe diameters than in the original 6-node/7-pipe network; the construction cost of the optimized system was accordingly reduced by about 40%. The hybrid networking model was also applied to a complicated domestic gas pipeline network in the metropolitan area of Korea. In this simulation, a parametric study was performed to understand the effect of each individual parameter, such as source pressure, flow rate, and pipe diameter, on the optimized network. From the results of these simulations, we propose an optimized tree-type network with optimum pipe diameters and source pressure for the metropolitan area; however, the proposed system does not consider environmental problems or safety concerns.

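The MCST step is not spelled out in the abstract; as a sketch, a minimum cost spanning tree over a small node/pipe network can be computed with Kruskal's algorithm. The 6-node/7-pipe edge list below is invented for illustration, not the paper's data:

```python
def min_cost_spanning_tree(n_nodes, edges):
    """Kruskal's algorithm: edges are (cost, u, v) tuples; returns the
    subset of edges forming a minimum cost spanning tree."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:            # adding this pipe creates no loop
            parent[ru] = rv
            tree.append((cost, u, v))
    return tree

# Hypothetical 6-node/7-pipe network (costs are arbitrary units).
pipes = [(4, 0, 1), (2, 0, 2), (5, 1, 2), (7, 1, 3),
         (1, 2, 4), (3, 3, 4), (6, 4, 5)]
mcst = min_cost_spanning_tree(6, pipes)
print(len(mcst), sum(c for c, _, _ in mcst))  # edges kept, total cost
```

The resulting tree-type topology is the structural form the paper proposes for the optimized network.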

Diagnosis of Ictal Hyperperfusion Using Subtraction Image of Ictal and Interictal Brain Perfusion SPECT (발작기와 발작간기 뇌 관류 SPECT 감산영상을 이용한 간질원인 병소 진단)

  • Lee, Dong Soo;Seo, Jong-Mo;Lee, Jae Sung;Lee, Sang-Kun;Kim, Hyun Jip;Chung, June-Key;Lee, Myung Chul;Koh, Chang-Soon
    • The Korean Journal of Nuclear Medicine
    • /
    • v.32 no.1
    • /
    • pp.20-31
    • /
    • 1998
  • A robust algorithm to disclose and display the difference between ictal and interictal perfusion may facilitate the detection of ictal hyperperfusion foci. The diagnostic performance of localizing epileptogenic zones with subtracted SPECT images was compared with visual diagnosis using ictal and interictal SPECT, MR, or PET. Ictal and interictal Tc-99m-HMPAO cerebral perfusion SPECT images of 48 patients (pts) were processed to obtain parametric subtracted images. Epileptogenic foci of all pts were confirmed by a seizure-free state after resection of the epileptogenic zones. For subtraction SPECT, we used the normalized difference ratio of pixel counts, (ictal − interictal)/interictal × 100 (%), after correcting the coordinates of the ictal and interictal SPECT in a semi-automated three-dimensional fashion. We identified epileptogenic zones on subtraction SPECT and compared its performance with the visual diagnosis of ictal and interictal SPECT, MR, and PET, using the post-surgical diagnosis as the gold standard. The concordance of subtraction SPECT and ictal-interictal SPECT was moderately good (kappa = 0.49). The sensitivity of ictal-interictal SPECT was 73% and that of subtraction SPECT 58%. The positive predictive value of ictal-interictal SPECT was 76% and that of subtraction SPECT 64%. There was no statistical difference between the sensitivity or positive predictive values of subtraction SPECT and those of ictal-interictal SPECT, MR, or PET. The same held when we divided patients into temporal lobe epilepsy and neocortical epilepsy. We conclude that our subtraction SPECT had diagnostic performance equivalent to ictal-interictal SPECT in localizing epileptogenic zones. The additional value of subtraction SPECT in the clinical interpretation of ictal and interictal SPECT should be evaluated further.

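The subtraction step reduces to a per-voxel normalized difference ratio, (ictal − interictal)/interictal × 100 (%), computed after the two scans are co-registered. A minimal sketch on toy 1-D voxel counts (the numbers are made up):

```python
def normalized_difference(ictal, interictal, eps=1e-9):
    """Per-voxel (ictal - interictal) / interictal * 100, in percent.

    Assumes both images are already co-registered and count-normalized;
    eps guards against division by zero in empty voxels.
    """
    return [(a - b) / (b + eps) * 100.0 for a, b in zip(ictal, interictal)]

# Toy 1-D "images" of voxel counts.
ictal      = [120.0, 150.0, 90.0]
interictal = [100.0, 100.0, 100.0]
print(normalized_difference(ictal, interictal))
```

Voxels with a large positive ratio are the candidate hyperperfusion foci.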

Development of Macro-Element for the Analysis of Elastically Supported Plates (탄성 지지된 판구조 해석을 위한 매크로 요소의 개발)

  • 강영종;박남회;앙기재;최진유
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.13 no.1
    • /
    • pp.25-35
    • /
    • 2000
  • The superstructures of common bridges, such as slab bridges and slab-on-girder bridges, consist of elastically supported isotropic plates. The objective of this study is to develop a new analysis method for elastically supported plates with general edge beams or girders (boundaries) under arbitrary out-of-plane loading. The displacement solutions for the macro-elements of the plate and beams are obtained by solving for the unknown interactive forces and moments at the beam or nodal-line locations after satisfying the equilibrium equations along the nodal line. The displacement functions for the macro-elements are proposed as single Fourier series using harmonic analysis, and the equilibrium equations of the nodal line are assembled using the slope-deflection method. The proposed analysis method is programmed in MS-Fortran and can be applied to all types of isotropic decks with bridge-type boundaries. Numerical examples involving elastically supported plates with various aspect ratios, loading cases, and bridge-type boundary conditions are presented to demonstrate the accuracy of the program. The major advantage of this new analysis method is its simple solution algorithm, which yields responses of bridge deck systems rapidly. The proposed method can be used in parametric studies of the behavior of bridge decks.

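As a sketch of the single-Fourier-series idea behind the macro-elements, the classical sine-series solution for a simply supported beam under a point load (a standard textbook result, not the paper's plate formulation) converges rapidly and can be evaluated directly:

```python
import math

def beam_deflection(x, L, P, EI, a, n_terms=1000):
    """Deflection of a simply supported beam under point load P at x = a,
    via the Fourier sine series
        w(x) = (2 P L^3 / (pi^4 EI)) * sum_n sin(n pi a/L) sin(n pi x/L) / n^4.
    """
    s = sum(math.sin(n * math.pi * a / L) * math.sin(n * math.pi * x / L) / n**4
            for n in range(1, n_terms + 1))
    return 2 * P * L**3 / (math.pi**4 * EI) * s

L, P, EI = 10.0, 1.0, 1.0
w_mid = beam_deflection(L / 2, L, P, EI, a=L / 2)
# Closed-form midspan deflection for this case is P L^3 / (48 EI).
print(w_mid, P * L**3 / (48 * EI))
```

The fast 1/n⁴ decay of the terms is why a truncated harmonic series gives accurate responses with little computation, which is the advantage the abstract claims for the method.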

A preliminary study for numerical and analytical evaluation of surface settlement due to EPB shield TBM excavation (토압식 쉴드 TBM 굴착에 따른 지반침하 거동 평가에 관한 해석적 기초연구)

  • An, Jun-Beom;Kang, Seok-Jun;Kim, Jung Joo;Kim, Kyoung Yul;Cho, Gye-Chun
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.23 no.3
    • /
    • pp.183-198
    • /
    • 2021
  • The EPB (Earth Pressure Balanced) shield TBM method restrains ground deformation through continuous excavation and support, yet significant surface settlement can still occur depending on the ground conditions, tunnel dimensions, and construction conditions. Therefore, it is necessary to clarify the settlement behavior and its influence factors and to evaluate the possible settlement during construction. In this study, an analytical model of surface settlement based on the influence factors and their mechanisms was proposed. A parametric study of the factors controllable during excavation was then conducted by numerical methods. Through the numerical analysis, the settlement behavior according to the construction conditions was quantitatively derived, and the qualitative trend according to the ground conditions was visualized by coupling the numerical results with the analytical settlement model. The results of this study are expected to contribute to the derivation of a settlement prediction algorithm for EPB shield TBM excavation.
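A common analytical starting point for tunnelling-induced surface settlement (not necessarily the exact model of this paper) is the Gaussian trough, S(x) = S_max · exp(−x² / (2i²)), where x is the transverse offset from the tunnel centerline and i is the distance to the trough's inflection point. A minimal sketch with made-up numbers:

```python
import math

def surface_settlement(x, s_max, i):
    """Gaussian settlement trough: settlement at transverse offset x from
    the tunnel centerline; s_max is the settlement at x = 0 and i is the
    trough-width (inflection-point) parameter."""
    return s_max * math.exp(-x**2 / (2 * i**2))

# Illustrative values: 30 mm maximum settlement, trough width i = 6 m.
s_max, i = 30.0, 6.0
profile = [round(surface_settlement(x, s_max, i), 2) for x in range(0, 25, 6)]
print(profile)  # settlement (mm) at x = 0, 6, 12, 18, 24 m
```

Construction factors such as face pressure and tail-void grouting act on this curve mainly through s_max, while ground conditions shape the trough width i.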

Data Augmentation using a Kernel Density Estimation for Motion Recognition Applications (움직임 인식응용을 위한 커널 밀도 추정 기반 학습용 데이터 증폭 기법)

  • Jung, Woosoon;Lee, Hyung Gyu
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.27 no.4
    • /
    • pp.19-27
    • /
    • 2022
  • In general, the performance of an ML (Machine Learning) application is determined by various factors such as the type of ML model, the size of the model (number of parameters), the hyperparameter settings used during training, and the training data. In particular, the recognition accuracy of ML may deteriorate, or the model may overfit, if the amount of data used for training is insufficient. Existing studies focusing on image recognition have widely used open datasets for training and evaluating the proposed ML models. However, for specific applications where the sensor used, the target of recognition, and the recognition situation differ, the dataset must be built manually. In this case, the performance of ML depends largely on the quantity and quality of the data. In this paper, the training data used for a motion recognition application is augmented using the kernel density estimation algorithm, a type of non-parametric estimation method. We then compare and analyze the recognition accuracy of an ML application while varying the number of original data, the kernel types, and the augmentation rate used for data augmentation. Finally, experimental results show that the recognition accuracy is improved by up to 14.31% when using the narrow-bandwidth Tophat kernel.
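Sampling new training points from a KDE with a Tophat (uniform) kernel reduces to: pick a random original sample, then add uniform noise within the bandwidth. A minimal 1-D sketch of this augmentation scheme (bandwidth and data are illustrative, not the paper's sensor data):

```python
import random

def augment_tophat(data, n_new, bandwidth, seed=0):
    """Draw n_new synthetic samples from a Tophat-kernel KDE fitted on 1-D
    `data`: each synthetic sample is a randomly chosen original point plus
    Uniform(-bandwidth, +bandwidth) noise."""
    rng = random.Random(seed)
    return [rng.choice(data) + rng.uniform(-bandwidth, bandwidth)
            for _ in range(n_new)]

original  = [0.8, 1.0, 1.2, 5.0, 5.1]   # toy sensor readings
synthetic = augment_tophat(original, n_new=20, bandwidth=0.05)
print(len(synthetic))
```

A narrow bandwidth (as in the paper's best result) keeps synthetic points close to real ones, adding variety without drifting off the data manifold.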

Development of Detailed Design Automation Technology for AI-based Exterior Wall Panels and its Backframes

  • Kim, HaYoung;Yi, June-Seong
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1249-1249
    • /
    • 2022
  • The facade, the exterior envelope of a building, is one of the crucial factors that determine its morphological identity and its functional performance, such as energy efficiency, earthquake resistance, and fire resistance. However, regardless of the type of exterior material, substantial property damage and human casualties continue to occur due to frequent detachment accidents of exterior materials. The quality of the building envelope depends on the detailed design and is closely related to the back frames that support the exterior material. Detailed design means the creation of shop drawings, the stage at which the basic design is developed to a level where construction is possible by specifying the exact necessary details. However, owing to chronic problems in the construction industry, such as reduced working hours and a shortage of design personnel, detailed design is not being implemented appropriately. Considering these characteristics, it is necessary to develop the detailed design process for exterior materials and works based on the domain-expert knowledge of the construction industry using artificial intelligence (AI). Therefore, this study aims to establish a detailed design automation algorithm for AI-based, condition-responsive exterior wall panels and their back frames. The scope of the study is limited to the "detailed design" performed based on the working drawings during the exterior work process and to "stone panels" among exterior materials. First, working-level data on stone works is collected to analyze the existing detailed design process. After that, design parameters are derived by analyzing the factors that affect the design of the building's exterior wall and back frames, such as structure, floor height, wind load, lift limit, and transportation elements. The relational expressions between the derived parameters are then formulated and algorithmized to implement a rule-based AI design.
These algorithms can be applied to detailed designs based on 3D BIM to automatically calculate quantities and unit prices. The next goal is to identify the repetitive elements that occur in the process and implement a robotic process automation (RPA)-based system to link the entire "detailed design - quantity calculation - order" process. This study is significant because it expands design automation research, which has so far been limited largely to earlier design stages, into the detailed design area at the beginning of construction execution, and because it increases productivity by using AI. In addition, it can help fundamentally improve the working environment of the construction industry through the development of directly applicable technologies for practice.

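A rule-based design of this kind amounts to relational expressions between design parameters. Everything below (the rule, the parameter names, the numbers) is a hypothetical illustration of that idea, not the paper's actual design rules:

```python
def max_anchor_spacing(wind_load_kpa, panel_width_m, capacity_kn=2.0):
    """Hypothetical relational rule: choose the anchor spacing so that each
    anchor's tributary wind load (pressure * panel width * spacing) stays
    within the anchor's capacity."""
    tributary_per_m = wind_load_kpa * panel_width_m   # kN per metre of height
    return capacity_kn / tributary_per_m              # max spacing in metres

# E.g. 1.5 kPa design wind pressure on a 1.2 m wide stone panel.
print(round(max_anchor_spacing(1.5, 1.2), 3))
```

Chaining many such parameter relations (floor height, lift limit, transport size, ...) is what turns the expert knowledge into an automatable rule base.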

A preliminary study for development of an automatic incident detection system on CCTV in tunnels based on a machine learning algorithm (기계학습(machine learning) 기반 터널 영상유고 자동 감지 시스템 개발을 위한 사전검토 연구)

  • Shin, Hyu-Soung;Kim, Dong-Gyou;Yim, Min-Jin;Lee, Kyu-Beom;Oh, Young-Sup
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.19 no.1
    • /
    • pp.95-107
    • /
    • 2017
  • In this study, a preliminary study was undertaken for the development of a tunnel incident automatic detection system based on a machine learning algorithm, intended to detect in real time a number of incidents taking place in a tunnel and to identify the type of each incident. Two road sites with operating CCTVs were selected, and a portion of the CCTV images was processed to produce sets of training data. The data sets consist of the position and time information of moving objects on the CCTV screen, extracted by first detecting and then tracking objects entering the screen using a conventional image processing technique available in this study. The data sets are matched with six categories of events, such as lane change and stopping, which are also included in the training data sets. The training data are learned by a resilient-propagation neural network with two hidden layers; nine architectural models were set up for parametric studies, from which the architecture with 300 (first hidden layer) and 150 (second hidden layer) neurons was found to be optimal, giving the highest accuracy on the training data as well as on testing data not used for training. This study showed that highly variable and complex traffic and incident features can be identified well, without any hand-defined feature rules, by using the concept of machine learning. In addition, the detection capability and accuracy of the machine learning based system will be enhanced automatically as the accumulated CCTV image data from tunnels grows.
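The classifier structure described (object-trajectory features → two hidden layers → six event categories) can be sketched as a plain feed-forward pass. The tiny layer sizes, tanh activation, and random weights below are illustrative stand-ins for the paper's 300-150 resilient-propagation network:

```python
import math
import random

def dense(vec, weights, bias):
    """One fully connected layer; weights has shape out_dim x in_dim."""
    return [sum(w * x for w, x in zip(row, vec)) + b
            for row, b in zip(weights, bias)]

def forward(x, layers):
    """Apply tanh after every layer except the last; softmax on the output."""
    for W, b in layers[:-1]:
        x = [math.tanh(v) for v in dense(x, W, b)]
    logits = dense(x, *layers[-1])
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

rng = random.Random(42)
def rand_layer(n_in, n_out):
    return ([[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# 4 input features -> 8 -> 5 hidden units -> 6 event classes
# (a scaled-down stand-in for the paper's input -> 300 -> 150 -> 6 network).
net = [rand_layer(4, 8), rand_layer(8, 5), rand_layer(5, 6)]
probs = forward([0.1, -0.3, 0.7, 0.2], net)
print(sum(probs))  # softmax output sums to 1
```

Training such a network with resilient propagation (Rprop) adapts a per-weight step size from the sign of the gradient, but the forward pass is the same as above.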

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; because it avoids investment risk structurally, it offers stability in the management of large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but also learns very fast compared to traditional boosting methods, and it is frequently used across many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets comprise the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return, and the long test period yielded a large sample of results. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and reduction of estimation errors. The total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment thus showed improved portfolio performance through reduced estimation errors in the optimized asset allocation model. Many financial models and asset allocation models are limited in practical investment because of the most fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; here we suggest a new, machine-learning-based way to reduce estimation errors in an optimized asset allocation model. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
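The risk parity step can be sketched in its simplest (inverse-volatility) form: each asset's weight is proportional to the reciprocal of its volatility, which equalizes each asset's rough risk contribution. In the paper's setup the volatilities would come from XGBoost forecasts; the values below are made up:

```python
def inverse_volatility_weights(vols):
    """Naive risk parity: weight_i proportional to 1 / sigma_i, normalized
    to sum to 1. In the proposed model, vols would be the XGBoost-predicted
    risks for the next investment period."""
    inv = [1.0 / v for v in vols]
    total = sum(inv)
    return [x / total for x in inv]

# Illustrative predicted volatilities for four sectors.
weights = inverse_volatility_weights([0.10, 0.20, 0.25, 0.40])
print([round(w, 3) for w in weights])
```

Full risk parity additionally accounts for correlations via the covariance matrix, which is where the paper plugs in the predicted risks.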