• Title/Summary/Keyword: Model Experiment (모형 실험)


Effect of Human Implantable Medical Devices on Dose and Image Quality during Chest Radiography using Automatic Exposure Control (자동노출제어를 적용한 흉부 방사선 검사 시 인체 이식형 의료기기가 선량과 화질에 미치는 영향)

  • Kang-Min Lee
    • Journal of the Korean Society of Radiology / v.18 no.3 / pp.257-265 / 2024
  • In this study, we applied AEC (Automatic Exposure Control), which is used in many chest examinations, to evaluate whether medical devices implanted in the body affect the dose and image quality of chest images. After placing each of three HIMDs (human implantable medical devices) - a pacemaker, a CRT, and a chemoport - over the ion chamber, the effective dose was calculated with PCXMC (PC Program for X-ray Monte Carlo) 2.0, a Monte Carlo-based program, using the DAP (Dose Area Product) values measured for each device. Additionally, to evaluate image quality, we set three regions of interest and one noise region on the chest and measured SNR and CNR. The final results showed significant differences in DAP and effective dose: for the pacemaker and the CRT, the difference between applying and not applying AEC was significant (p<0.05), with the dose increasing by 37% for the pacemaker and 52% for the CRT when AEC was applied. The chemoport showed a 10% increase in effective dose with AEC, but the difference was not significant (p>0.05). In the image quality evaluation, there was no significant difference for any HIMD, whether or not AEC was applied (p>0.05). Therefore, when an HIMD implanted in the chest overlapped the ion chamber sensor during a chest X-ray, the effective dose increased, while image quality did not differ even at the lower dose obtained without AEC. Accordingly, when performing a chest X-ray examination on a patient with an implanted HIMD, performing the examination without AEC can be considered as a way to reduce the patient's radiation exposure.
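The SNR and CNR figures of merit used in the image-quality evaluation above are conventionally derived from ROI statistics; a minimal sketch (the region values are illustrative, not the study's data):

```python
import numpy as np

def snr_cnr(roi, background, noise_region):
    """SNR = mean(ROI) / std(noise); CNR = |mean(ROI) - mean(BG)| / std(noise)."""
    sigma = np.std(noise_region)          # noise estimated from a uniform region
    snr = np.mean(roi) / sigma
    cnr = abs(np.mean(roi) - np.mean(background)) / sigma
    return snr, cnr

# Illustrative pixel values standing in for one chest ROI, one background
# region, and the noise region
snr, cnr = snr_cnr(roi=[10.0, 12.0, 14.0],
                   background=[4.0, 6.0],
                   noise_region=[1.0, 3.0])
```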

Evaluation of Proper Image Acquisition Time by Change of Infusion dose in PET/CT (PET/CT 검사에서 주입선량의 변화에 따른 적정한 영상획득시간의 평가)

  • Kim, Chang Hyeon;Lee, Hyun Kuk;Song, Chi Ok;Lee, Gi Heun
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.2 / pp.22-27 / 2014
  • Purpose: Recent PET/CT practice tends to use lower injected doses to reduce patient exposure, in step with improvements in equipment. After a GE Discovery 690 PET/CT scanner (GE Healthcare, Milwaukee, USA) was installed at this hospital in 2011, we reduced the injected $^{18}F$-FDG dose to lower patient exposure, and accordingly evaluated the proper acquisition time per bed for each injected dose so as to maintain image quality. Materials and Methods: Air, Teflon, and hot cylinder inserts were placed in a NEMA NU2-1994 phantom; the radioactivity concentration was kept at a 4:1 ratio of hot cylinder to background, and the hot cylinder concentration was increased through 3, 4.3, 5.5, and 6.7 MBq/kg. Images were acquired with the acquisition time per bed increased through 30 seconds, 1 minute, 1 minute 30 seconds, 2 minutes, 2 minutes 30 seconds, 3 minutes, 3 minutes 30 seconds, 4 minutes, 4 minutes 30 seconds, 5 minutes, 5 minutes 30 seconds, 10 minutes, 20 minutes, and 30 minutes, and ROIs were set on the hot cylinder and background regions. The maximum standardized uptake value ($SUV_{max}$) was measured, and the signal-to-noise ratio (SNR) and the standard deviation of the background (BKG) were computed and compared against the hot cylinder concentration and the acquisition time per bed. We also compared $SUV_{max}$, SNR, and the BKG standard deviation for two uptake waiting times (15 minutes and 1 hour) using the 4.3 MBq/kg phantom. Results: As the radioactivity concentration per unit mass was increased through 3, 4.3, 5.5, and 6.7 MBq/kg, the hot cylinder $SUV_{max}$ changed sharply with acquisition time per bed between 30 seconds and 2 minutes, ranging from a maximum of 18.3 down to a minimum of 7.3 depending on concentration, whereas it stayed nearly constant, between a minimum of 5.6 and a maximum of 8, from 2 minutes 30 seconds to 30 minutes.
The SNR at 3 MBq/kg stayed between a minimum of 0.41 and a maximum of 0.49; at 4.3 and 5.5 MBq/kg it rose with acquisition time per bed from minima of 0.23 and 0.39 to maxima of 0.59 and 0.54, respectively; and at 6.7 MBq/kg it was as high as 0.59 from 30 seconds and otherwise remained between 0.43 and 0.53. The standard deviation of the BKG fell from 0.38 to 0.06 at 3 MBq/kg from 2 minutes 30 seconds onward, from 0.38 to 0 at 4.3 and 5.5 MBq/kg from 1 minute 30 seconds onward, and stayed low, between 0.33 and 0.05, at 6.7 MBq/kg over the whole range from 30 seconds to 30 minutes. When the waiting time was changed between 15 minutes and 1 hour with the 4.3 MBq/kg phantom, $SUV_{max}$ showed stable values from an acquisition time per bed of 2 minutes 30 seconds, and the SNR showed similar values from 1 minute 30 seconds. Conclusion: As shown above, when the radioactivity concentration per unit mass was increased through 3, 4.3, 5.5, and 6.7 MBq/kg, the values of $SUV_{max}$ and SNR remained stable for acquisition times per bed of 2 minutes 30 seconds or longer, and the same held for both waiting times (15 minutes and 1 hour). From this NEMA NU2-1994 phantom experiment, the minimum acquisition time per bed needed for stable $SUV_{max}$ and SNR, regardless of the injected radioactivity concentration, was 2 minutes 30 seconds. This acquisition time may, however, differ according to the characteristics and performance of the equipment.

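The comparison of uptake waiting times (15 minutes vs. 1 hour) above is shaped by the physical decay of 18F between injection and scan; a minimal decay-correction sketch (the half-life is the standard physical constant for 18F; the activity values are illustrative, not the study's):

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18, in minutes

def decayed_activity(a0_mbq, elapsed_min):
    """Remaining activity after elapsed_min minutes of pure physical decay."""
    return a0_mbq * math.exp(-math.log(2.0) * elapsed_min / F18_HALF_LIFE_MIN)

# Between a 15-minute and a 60-minute wait, roughly a quarter of the
# remaining activity decays away
remaining_15 = decayed_activity(100.0, 15.0)
remaining_60 = decayed_activity(100.0, 60.0)
```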

THE EFFECTS OF THERMAL STIMULI TO THE FILLED TOOTH STRUCTURE (온도자극이 충전된 치질에 미치는 영향)

  • Baik, Byeong-Ju;Roh, Yong-Kwan;Lee, Young-Su;Yang, Jeong-Suk;Kim, Jae-Gon
    • Journal of the Korean Academy of Pediatric Dentistry / v.26 no.2 / pp.339-349 / 1999
  • Tooth structure replaced by restorative materials may produce discomfort under hot or cold stimuli. To investigate the effects of these stimuli on human teeth, thermal analysis was carried out by numerically solving the general heat conduction equation in a modeled tooth. The method was applied to axisymmetric and two-dimensional models to analyze the effects of constant surface temperatures of $4^{\circ}C$ and $60^{\circ}C$. The thermal shock was applied for 2 seconds or 4 seconds, after which the surface recovered to the ambient condition of $20^{\circ}C$ until 10 seconds. The thermal behavior of a tooth covered with a gold or stainless steel crown was compared with that of a tooth without a crown, and the effects of restorative materials (amalgam, gold, and zinc oxide-eugenol (ZOE)) on the temperature at the pulpo-dentinal junction (PDJ) were studied. The geometry used for thermal analysis has so far been limited to two-dimensional and axisymmetric tooth models, but a typical restored tooth contains a cross-shaped cavity that is neither two-dimensional nor axisymmetric. Therefore, in this study, a three-dimensional model was also developed to investigate the effects of cavity shape and size; this model can be used in further research on the effects of restorative materials and cavity design on the thermal behavior of a realistically shaped tooth. The results were as follows. 1. When a cold temperature of $4^{\circ}C$ was applied for 2 seconds to the surface of teeth restored with amalgam and the surface then recovered to the ambient $20^{\circ}C$, the PDJ temperature decreased rapidly to $29^{\circ}C$ by 3 seconds and reached $25^{\circ}C$ after 9 seconds. With a stainless steel crown this temperature decreased more slowly, but stayed within $1^{\circ}C$ of the uncrowned case.
With gold as the restorative material, the PDJ temperature decreased very quickly, owing to its high thermal conductivity, to near $25^{\circ}C$, but the temperature after 9 seconds was similar to that of the teeth without a crown. The effect of the cold stimulus could be attenuated by placing ZOE under the cavity: its low thermal conductivity delayed the temperature decrease and kept the PDJ $4^{\circ}C$ warmer than the other conditions after 9 seconds. 2. When the cold stimulus was extended to 4 seconds, with recovery to $20^{\circ}C$ from 4 to 9 seconds, the temperature after 9 seconds was about $2-3^{\circ}C$ lower than with the 2-second stimulus; with the gold restoration, however, the high thermal conductivity of gold produced a minimum temperature of $21^{\circ}C$ after 5 seconds, rewarming to $23^{\circ}C$ after 9 seconds. 3. The effect of a hot stimulus was also investigated at $60^{\circ}C$. For a 2-second stimulus, the PDJ temperature of uncrowned teeth increased from the initial $35^{\circ}C$ to $40^{\circ}C$ by 3 seconds and decreased to $30^{\circ}C$ after 9 seconds. In teeth with a gold restoration, the PDJ temperature was sensitive to the surface temperature: it increased rapidly from $35^{\circ}C$ to $41^{\circ}C$ after 2 seconds and decreased to $28^{\circ}C$ after 9 seconds, a variation of $13^{\circ}C$ over 9 seconds. With ZOE at the bottom of the cavity, the variation was confined to a range of about $5^{\circ}C$, with a maximum temperature of $37^{\circ}C$ after 3 seconds of stimulus.

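The transient heat-conduction analysis described above can be illustrated with a much-reduced 1-D explicit finite-difference sketch (the material properties, grid, and slab geometry here are illustrative only; the paper's models are 2-D/axisymmetric and 3-D with tooth-specific conductivities):

```python
import numpy as np

# Explicit FTCS scheme for 1-D transient conduction, dT/dt = alpha * d2T/dx2
def heat_1d(T0, alpha, dx, dt, steps, t_left, t_right):
    T = np.asarray(T0, dtype=float).copy()
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
    for _ in range(steps):
        T[0], T[-1] = t_left, t_right              # fixed-temperature boundaries
        T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

# A 2-second cold shock: 4 C held on one face of a 2 mm slab initially at 35 C
profile = heat_1d(np.full(21, 35.0), alpha=1e-7, dx=1e-4,
                  dt=0.04, steps=50, t_left=4.0, t_right=35.0)
```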

Future Changes in Global Terrestrial Carbon Cycle under RCP Scenarios (RCP 시나리오에 따른 미래 전지구 육상탄소순환 변화 전망)

  • Lee, Cheol;Boo, Kyung-On;Hong, Jinkyu;Seong, Hyunmin;Heo, Tae-kyung;Seol, Kyung-Hee;Lee, Johan;Cho, ChunHo
    • Atmosphere / v.24 no.3 / pp.303-315 / 2014
  • The terrestrial ecosystem plays an important role as a carbon sink in the global carbon cycle, and understanding the interactions of the terrestrial carbon cycle with climate is important for better prediction of future climate change. In this paper, the terrestrial carbon cycle is investigated with the Hadley Centre Global Environmental Model, version 2, Carbon Cycle (HadGEM2-CC), which includes vegetation dynamics and a carbon cycle interactive with climate. The future projections are based on the three representative concentration pathways (RCP 8.5/4.5/2.6) from 2006 to 2100 and are compared with the historical land carbon uptake from 1979 to 2005. Projected changes in ecological quantities such as production, respiration, and net ecosystem exchange (NEE), and in climate conditions, show similar patterns in the three RCPs, although the amplitude of the response differs among them. In all RCP scenarios, temperature and precipitation increase as atmospheric $CO_2$ rises. These climate conditions favor vegetation growth and expansion, leading to increased terrestrial carbon uptake in all RCPs. At the end of the 21st century, the global averages of gross and net primary production and of respiration increase in all RCPs, and the terrestrial ecosystem remains a carbon sink. The enhanced land $CO_2$ uptake is attributable to the expansion of vegetated area, increasing LAI, and an earlier onset of the growing season. After the mid-21st century, rising temperature makes soil respiration increase faster than net primary production, so terrestrial carbon uptake begins to decline from that time. Regionally, the mean NEE of the East Asia region ($90^{\circ}E-140^{\circ}E$, $20^{\circ}N-60^{\circ}N$) is larger in magnitude than that of the same latitude band.
At the end of the 21st century, the mean NEE values in East Asia are $-2.09\;PgC\;yr^{-1}$, $-1.12\;PgC\;yr^{-1}$, and $-0.47\;PgC\;yr^{-1}$, and the zonal-mean NEEs of the same latitude band are $-1.12\;PgC\;yr^{-1}$, $-0.55\;PgC\;yr^{-1}$, and $-0.17\;PgC\;yr^{-1}$, for RCP 8.5, 4.5, and 2.6, respectively.
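The sign convention behind the NEE figures above (negative NEE = net land carbon sink) follows directly from the definition of net ecosystem exchange; a one-function sketch with toy flux values (not model output):

```python
def net_ecosystem_exchange(gpp, ecosystem_respiration):
    """NEE = ecosystem respiration - gross primary production.
    Negative NEE means the ecosystem takes up more carbon than it releases,
    i.e. the land acts as a carbon sink."""
    return ecosystem_respiration - gpp

# Toy fluxes in PgC/yr: production of 10 against respiration of 8 -> sink of 2
nee = net_ecosystem_exchange(10.0, 8.0)
```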

Optimization and Development of Prediction Model on the Removal Condition of Livestock Wastewater using a Response Surface Method in the Photo-Fenton Oxidation Process (Photo-Fenton 산화공정에서 반응표면분석법을 이용한 축산폐수의 COD 처리조건 최적화 및 예측식 수립)

  • Cho, Il-Hyoung;Chang, Soon-Woong;Lee, Si-Jin
    • Journal of Korean Society of Environmental Engineers / v.30 no.6 / pp.642-652 / 2008
  • The aim of this research was to apply experimental design methodology to optimize the conditions of Photo-Fenton oxidation of the residual livestock wastewater remaining after the coagulation process. The Photo-Fenton oxidation reactions were described mathematically as a function of the amount of Fe(II) ($x_1$), $H_2O_2$ ($x_2$), and pH ($x_3$), modeled with the Box-Behnken method, which fits second-order response surface models and is an alternative to central composite designs. Application of RSM with the Box-Behnken design yielded the following regression equation, an empirical relationship between the removal (%) of livestock wastewater COD and the test variables in coded units: $Y = 79.3 + 15.61x_1 - 7.31x_2 - 4.26x_3 - 18x_1^2 - 10x_2^2 - 11.9x_3^2 + 2.49x_1x_2 - 4.4x_2x_3 - 1.65x_1x_3$. The model predictions agreed with the experimentally observed results ($R^2$ = 0.96). The results show that the removal (%) in Photo-Fenton oxidation of livestock wastewater was significantly affected by the synergistic effects of the linear terms (Fe(II) ($x_1$), $H_2O_2$ ($x_2$), pH ($x_3$)), whereas the quadratic terms Fe(II) $\times$ Fe(II) ($x_1^2$), $H_2O_2$ $\times$ $H_2O_2$ ($x_2^2$), and pH $\times$ pH ($x_3^2$) showed significant antagonistic effects. The cross-product term $H_2O_2$ $\times$ pH ($x_2x_3$) also had an antagonistic effect. The estimated ridge of the expected maximum response and the optimal conditions for Y, obtained by canonical analysis, were 84 $\pm$ 0.95% at Fe(II) ($X_1$) = 0.0146 mM, $H_2O_2$ ($X_2$) = 0.0867 mM, and pH ($X_3$) = 4.70, respectively. The optimal Fe/$H_2O_2$ ratio was 0.17 at pH 4.7.
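The fitted second-order response surface reported in this abstract can be evaluated directly from its own coefficients; a sketch (variables in coded units, as in the abstract):

```python
def removal_percent(x1, x2, x3):
    """COD removal (%) predicted by the Box-Behnken response surface
    reported in the abstract, with variables in coded units."""
    return (79.3 + 15.61 * x1 - 7.31 * x2 - 4.26 * x3
            - 18.0 * x1 ** 2 - 10.0 * x2 ** 2 - 11.9 * x3 ** 2
            + 2.49 * x1 * x2 - 4.4 * x2 * x3 - 1.65 * x1 * x3)

# At the design centre (all coded variables zero) the model predicts the
# intercept, 79.3% removal
center = removal_percent(0.0, 0.0, 0.0)
```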

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and technology opportunities and to support rational decision making. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level, or in broad categories based on classification standards, making it difficult to obtain specific and appropriate figures. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously available. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales of the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training.
We optimized the training parameters and used a vector dimension of 300 and a window size of 15 in the subsequent experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as the product-name dataset to cluster product groups more efficiently: product names similar to KSIC index words were extracted based on cosine similarity, and the market size of each extracted product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or on multiple assumptions. In addition, the level of the market category can be easily and efficiently adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, the method has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors: it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports by private firms. The limitation of our study is that the presented model needs to be improved in accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity.
Also, the product-group clustering could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model proposed conceptually in this study.
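The group-by-similarity-then-sum step described above can be sketched with plain cosine similarity (toy 2-D vectors, product names, sales figures, and the 0.8 threshold all stand in for trained 300-dimensional Word2Vec embeddings and real company sales data):

```python
import numpy as np

def cosine(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def estimate_market_size(anchor, embeddings, sales, threshold=0.8):
    """Group products whose embedding is close to the anchor term,
    then sum their sales as the product-group market size."""
    group = [name for name, vec in embeddings.items()
             if cosine(embeddings[anchor], vec) >= threshold]
    return group, sum(sales[n] for n in group)

# Toy stand-ins for trained embeddings and sales data
emb = {"laptop": [1.0, 0.0], "notebook pc": [0.9, 0.1], "apple": [0.0, 1.0]}
sales = {"laptop": 100, "notebook pc": 50, "apple": 30}
group, size = estimate_market_size("laptop", emb, sales)
```

Raising or lowering the `threshold` argument is the knob the abstract describes for widening or narrowing the product-group definition.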

Study on the Tractive Characteristics of the Seed Furrow Opener for No-till Planter (무경운(無耕耘) 파종기용(播種機用) 구체기(溝切器)의 견인특성(牽引特性)에 관(關)한 연구(硏究))

  • La, Woo-Jung
    • Korean Journal of Agricultural Science / v.5 no.2 / pp.149-157 / 1978
  • This study was carried out to obtain basic data for selecting the type of furrow opener for a no-tillage soybean planter trailed by a two-wheel tractor, from the standpoint of minimum draft and good furrowing performance. Two models of furrow opener, hoe type and disc type, were tested on artificial soil packed in a moving soil bin. The results were as follows. For the disc furrow opener, drafts were measured for various disc diameters at disc angles of $8^{\circ}$ and $16^{\circ}$, working depths of 3 cm and 6 cm, and a working speed of 2.75 cm/sec. The minimum draft appeared at a disc diameter of about 28 cm, and the draft increased as the diameter became larger or smaller than this. Specific draft showed almost the same tendency but reached its minimum at a diameter of about 30 cm. To examine control of seeding depth, the relationship between draft and working depth (3 cm and 6 cm) was tested: draft varied linearly with working depth and was affected more strongly by depth than by the other factors. At both working depths, total draft was generally minimal at a disc diameter of about 28 cm and specific draft at about 30 cm, regardless of disc angle and working speed. To examine control of working width and speed, the relationships among draft, disc angle, and working speed were investigated: draft increased as angle and speed increased, and was affected more by speed than by angle. To compare the hoe type with the disc type, the specific drafts of hoe openers were compared with those of a disc opener with a $16^{\circ}$ angle and 30 cm diameter.
The specific draft of the disc opener with a 30 cm diameter was $0.35{\sim}0.5\;kg/cm^2$, while that of the hoe type with a lift angle of $20^{\circ}$ was $0.71{\sim}1.02\;kg/cm^2$, about twice that of the disc type on average. The furrows opened by disc openers were also cleaner than those opened by hoe openers.


A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.85-109 / 2018
  • A recommender system recommends the items a customer is expected to purchase in the future according to his or her previous purchase behavior, and has served as a tool for one-to-one personalization in e-commerce. Traditional recommender systems - especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry - are designed to generate the recommendation list using a single criterion, the 'overall rating'. This, however, has critical limitations for understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect customer feedback in the form of 'multicriteria ratings', which let companies understand their customers' preferences from multidimensional viewpoints and are easy to handle and analyze because they are quantitative. But recommendation using multicriteria ratings also has the limitation that it may omit detailed information on a user's preference, because in most cases it considers only three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommender system that selectively uses the results from traditional CF and from CF using multicriteria ratings. The proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme: it uses traditional CF with overall ratings for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset for POI (point-of-interest) recommendation.
Providing personalized POI recommendations is attracting more attention as location-based services such as Yelp and Foursquare grow in popularity. The dataset was collected from university students via a Web-based online survey system, through which we gathered overall ratings as well as ratings for each criterion for 48 POIs located near K University in Seoul, South Korea. The criteria were 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 (80%) were used as the training dataset and the remaining 10 (20%) as the validation dataset. To examine the effectiveness of the proposed system (the hybrid selective model), we compared its performance with two comparison models: traditional CF and CF with multicriteria ratings. Performance was evaluated with two metrics: average MAE (mean absolute error) and precision-in-top-N, where precision-in-top-N is the percentage of truly high overall ratings among the N items the model predicted to be most relevant for each user. The experimental system was developed with Microsoft Visual Basic for Applications (VBA). The results showed that the proposed system (avg. MAE = 0.584) outperformed both traditional CF (avg. MAE = 0.591) and multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF performed worse than traditional CF on our dataset, contrary to most previous studies; this result supports the premise that people have two different types of preference schemes, holistic and composite. Besides MAE, the proposed system outperformed all comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7.
Paired-sample t-tests showed that, in terms of average MAE, the proposed system outperformed traditional CF at the 10% significance level and multicriteria CF at the 1% significance level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
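The two evaluation metrics used above can be sketched as follows (the ratings are toy values, not the POI dataset, and the relevance threshold of 4.0 is an assumption for illustration):

```python
def mae(actual, predicted):
    """Mean absolute error between observed and predicted overall ratings."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def precision_in_top_n(actual, predicted, n, relevant=4.0):
    """Share of truly high-rated items among the model's top-N recommendations."""
    top = sorted(range(len(predicted)), key=lambda i: predicted[i],
                 reverse=True)[:n]
    return sum(1 for i in top if actual[i] >= relevant) / n

actual = [5.0, 1.0, 4.0, 2.0]       # observed overall ratings
predicted = [4.5, 2.0, 4.0, 3.0]    # model's predicted ratings
err = mae(actual, predicted)
p2 = precision_in_top_n(actual, predicted, 2)
```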

The Evaluation of SUV Variations According to the Errors of Entering Parameters in the PET-CT Examinations (PET/CT 검사에서 매개변수 입력오류에 따른 표준섭취계수 평가)

  • Kim, Jia;Hong, Gun Chul;Lee, Hyeok;Choi, Seong Wook
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.1 / pp.43-48 / 2014
  • Purpose: In PET/CT images, the SUV (standardized uptake value) enables quantitative assessment of the biological changes of organs and serves as an index for distinguishing whether a lesion is malignant. It is therefore very important to enter the parameters that affect the SUV correctly. The purpose of this study is to evaluate an allowable error range for the SUV by measuring how the results differ with input errors in the activity, weight, and uptake time parameters. Materials and Methods: Three inserts - hot, Teflon, and air - were placed in the 1994 NEMA phantom, which was filled with 27.3 MBq/mL of 18F-FDG. The ratio of hot-area activity to background activity was set at 4:1. After scanning, the image was re-reconstructed after introducing input errors of ${\pm}5%$, 10%, 15%, 30%, and 50% in the activity, weight, and uptake time parameters relative to the original data. ROIs (regions of interest) were set, one in each insert area and four in the background areas, and $SUV_{mean}$ and percentage differences were calculated and compared for each area. Results: The $SUV_{mean}$ of the hot, Teflon, air, and background (BKG) areas of the original images were 4.5, 0.02, 0.1, and 1.0. With the activity error, the minimum and maximum $SUV_{mean}$ were 3.0 and 9.0 in the hot area, 0.01 and 0.04 in the Teflon area, 0.1 and 0.3 in the air area, and 0.6 and 2.0 in the BKG area, with percentage differences from -33% to 100% in all areas. With the weight error, $SUV_{mean}$ ranged from 2.2 to 6.7 in the hot area, 0.01 to 0.03 in the Teflon area, 0.09 to 0.28 in the air area, and 0.5 to 1.5 in the BKG area, with percentage differences from -50% to 50% in all areas except the Teflon area, where they ranged from -50% to 52%. With the uptake time error, $SUV_{mean}$ ranged from 3.8 to 5.3 in the hot area, 0.01 to 0.02 in the Teflon area, 0.1 to 0.2 in the air area, and 0.8 to 1.2 in the BKG area, with percentage differences from 17% to -14% in the hot and BKG areas.
The Teflon area's percentage difference ranged from -50% to 52% and the air area's from -12% to 20%. Conclusion: As the results show, if the allowable error range is set at 5%, the activity and weight entries must be kept within ${\pm}5%$. The calibration of the dose calibrator and the scale should therefore be maintained within a ${\pm}5%$ error range, because they determine the entered activity and weight. For the time error, the error range differed by insert type: the hot and BKG areas stayed within a 5% error when the time error was within ${\pm}15%$. Each clock's time error should therefore be taken into account when more than one clock, including the scanner's, is used during examinations.

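The reported percentage differences are consistent with how the SUV scales with each entered parameter: SUV is proportional to body weight, inversely proportional to injected activity, and uptake time enters through the decay correction. A sketch of those scalings (the sign convention of the time term depends on the scanner's correction direction and is an assumption here):

```python
F18_HALF_LIFE_MIN = 109.77  # physical half-life of 18F, in minutes

def suv_scale(activity_err=0.0, weight_err=0.0, time_err_min=0.0):
    """Multiplicative change in SUV caused by mis-entered parameters.
    activity_err / weight_err are fractional entry errors (+0.5 = entered
    50% too high); time_err_min shifts the decay correction (direction of
    the correction is assumed)."""
    decay = 2.0 ** (time_err_min / F18_HALF_LIFE_MIN)
    return (1.0 + weight_err) / (1.0 + activity_err) * decay

# A +50% activity entry scales SUV to 2/3 (-33%); a -50% entry doubles it
# (+100%), matching the hot-area percentage differences reported above
```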

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of the industry's distinctive capital structure and debt-to-equity ratios, they are also harder to forecast. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. The economic cycle strongly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates, and high leverage coupled with increased bankruptcy rates can place greater burdens on the banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways, but these models are intended for companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which carry disproportionately large liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with this unique capital structure, criteria used to judge the financial risk of companies in general are difficult to apply effectively to construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It forecasts the likelihood of bankruptcy with a simple formula, classifying results into three categories that evaluate the corporate status as dangerous, moderate, or safe: a company in the "dangerous" category has a high likelihood of bankruptcy within two years, one in the "safe" category has a low likelihood, and for companies in the "moderate" category the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made their risk difficult to forecast with the Z-score. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have applied this technology. Pattern recognition, a representative application area of machine learning, analyzes patterns in a company's financial information and judges whether each pattern belongs to the bankruptcy-risk group or the safe group. Representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models.
Existing studies, whether using the traditional Z-score technique or machine learning, focus on companies in no specific industry, so industry-specific characteristics are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies by analyzing its performance by company size. We classified construction companies into three groups - large, medium, and small - based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
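The AdaBoost procedure the paper builds on can be sketched with decision stumps as weak learners; a toy discrete-AdaBoost implementation on made-up one-feature data (not the paper's financial ratios, features, or model settings):

```python
import numpy as np

def fit_stump(X, y, w):
    """Pick the single-feature threshold stump with the lowest weighted error."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = float(np.sum(w[pred != y]))
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost_fit(X, y, rounds=5):
    """Discrete AdaBoost: reweight samples toward the ones the last stump missed."""
    w = np.full(len(y), 1.0 / len(y))
    model = []
    for _ in range(rounds):
        err, j, thr, sign = fit_stump(X, y, w)
        err = max(err, 1e-10)                      # guard against log(0)
        alpha = 0.5 * np.log((1.0 - err) / err)    # weight of this weak learner
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)             # boost misclassified samples
        w /= w.sum()
        model.append((alpha, j, thr, sign))
    return model

def adaboost_predict(model, X):
    score = np.zeros(len(X))
    for alpha, j, thr, sign in model:
        score += alpha * sign * np.where(X[:, j] > thr, 1, -1)
    return np.where(score >= 0, 1, -1)

# Toy data: label +1 ("bankrupt") when the single feature exceeds 2
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([-1, -1, 1, 1])
model = adaboost_fit(X, y)
```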