• Title/Summary/Keyword: Structural Performance Evaluation (구조 성능 평가)


Evaluation and Verification of the Attenuation Rate of Lead Sheets by Tube Voltage for Reference to Radiation Shielding Facilities (방사선 방어시설 구축 시 활용 가능한 관전압별 납 시트 차폐율 성능평가 및 실측 검증)

  • Ki-Yoon Lee;Kyung-Hwan Jung;Dong-Hee Han;Jang-Oh Kim;Man-Seok Han;Jong-Won Gil;Cheol-Ha Baek
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.4
    • /
    • pp.489-495
    • /
    • 2023
  • Radiation shielding facilities are constructed in locations where diagnostic radiation generators are installed, with the aim of preventing exposure of patients and radiation workers. The purpose of this study is to compare and validate the trend of the attenuation thickness of lead, the primary material in these radiation shielding facilities, at different maximum tube voltages through Monte Carlo simulations and measurement. We employed the Monte Carlo N-Particle 6 simulation code. Within this simulation, we set up a lead shielding arrangement in which the distance between the source and the lead sheet was 100 cm and the field of view was 10 × 10 cm². Additionally, we varied the tube voltage across 80, 100, 120, and 140 kVp; energy spectra were calculated for each tube voltage and applied in the simulations. Lead thicknesses corresponding to attenuation rates of 50, 70, 90, and 95% were determined for each tube voltage. For 80 kVp, the calculated thicknesses for these attenuation rates were 0.03, 0.08, 0.21, and 0.33 mm, respectively. For 100 kVp, the values were 0.05, 0.12, 0.30, and 0.50 mm. Similarly, for 120 kVp, they were 0.06, 0.14, 0.38, and 0.56 mm. Lastly, at 140 kVp, the corresponding thicknesses were 0.08, 0.16, 0.42, and 0.61 mm. Measurements were then conducted to validate the calculated lead thicknesses. The radiation generator employed was the GE Healthcare Discovery XR 656, and the dosimeter was the IBA MagicMax. The experimental results showed that at 80 kVp, the attenuation rates for the four thicknesses were 43.56, 70.33, 89.85, and 93.05%, respectively. Similarly, at 100 kVp, the rates were 52.49, 72.26, 86.31, and 92.17%. For 120 kVp, the attenuation rates were 48.26, 71.18, 87.30, and 91.56%. Lastly, at 140 kVp, they were measured as 50.45, 68.75, 89.95, and 91.65%. 
Upon comparing the simulation and experimental results, it was confirmed that the differences between the two values were within an average of approximately 3%. These research findings serve to validate the reliability of Monte Carlo simulations and could be employed as fundamental data for future radiation shielding facility construction.
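The simulated thickness/attenuation pairs above follow the usual narrow-beam exponential attenuation law. As a minimal sketch (not the MCNP6 spectrum-weighted calculation the authors performed), the thickness for a target attenuation rate can be obtained by inverting I/I₀ = exp(−μx); the effective attenuation coefficient below is a made-up illustrative value, not a figure from the study:

```python
import math

def thickness_for_attenuation(target_rate, mu_eff_per_mm):
    # Solve 1 - exp(-mu * x) = target_rate for x (narrow-beam,
    # monoenergetic approximation with an effective coefficient).
    return -math.log(1.0 - target_rate) / mu_eff_per_mm

# Hypothetical effective attenuation coefficient for lead at one beam
# quality -- an assumption for illustration only.
mu_eff = 5.0  # per mm of lead
for rate in (0.50, 0.70, 0.90, 0.95):
    print(f"{rate:.0%} attenuation: {thickness_for_attenuation(rate, mu_eff):.3f} mm Pb")
```

Under this approximation the required thickness grows logarithmically with the target attenuation, which matches the qualitative trend of the tabulated values (e.g., 95% requiring far more lead than 50%).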

A Comparative Study of the Standard Uptake Values of the PET Reconstruction Methods; Using Contrast Enhanced CT and Non Contrast Enhanced CT (PET/CT 영상에서 조영제를 사용하지 않은 CT와 조영제를 사용한 CT를 이용한 감쇠보정에 따른 표준화섭취계수의 비교)

  • Lee, Seung-Jae;Park, Hoon-Hee;Ahn, Sha-Ron;Oh, Shin-Hyun;NamKoong, Heuk;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.12 no.3
    • /
    • pp.235-240
    • /
    • 2008
  • Purpose: In the early days of PET/CT, computed tomography was used mainly for attenuation correction (AC), but as CT performance improved, it could also provide enhanced diagnostic information with contrast media. However, it has been controversial whether contrast media affect AC on PET/CT scans. Some published studies show that contrast media can cause overestimation when the CT data are used for AC processing; others hold that the overestimated AC may in turn alter the SUV, though without a definite effect on diagnosis. Thus, the effect of contrast media on AC was investigated in this study. Materials and Methods: Patient inclusion criteria required a history of malignancy and performance of an integrated PET/CT scan and contrast-enhanced CT scan within a 1-day period. Thirty oncologic patients who had a PET/CT scan between December 2007 and June 2008 underwent staging evaluation and met these criteria. All patients fasted for at least 6 hr before the IV injection of approximately 5.6 MBq/kg (0.15 mCi/kg) of ¹⁸F-FDG and were scanned about 60 min after injection. All patients had a whole-body PET/CT performed without IV contrast media, followed by a contrast-enhanced CT, on the Discovery STe PET/CT scanner. The CT data were used for AC, and PET images were reconstructed after AC. ROIs were drawn and SUVs were measured. A paired t-test was performed on these results to assess the significance of the difference between the SUVs obtained from the two attenuation-corrected PET images. Results: The mean and maximum Standardized Uptake Values (SUV) for the different regions were averaged over all patients. Comparing values before and after contrast media use, most ROIs showed increased SUVs with contrast-enhanced CT relative to non-contrast-enhanced CT. All regions showed increased SUVs, and their p-values were under 0.05, except for the mean SUV of the heart region. 
Conclusion: In this regard, the effect on SUV measurements that occurs when a contrast-enhanced CT is used for attenuation correction could have significant clinical ramifications. Some published studies have insisted, however, that the percentage change in SUV is too small to determine or modify the clinical management of oncology patients, because the difference is hardly discernible by an interpreter. Nevertheless, a numerical change clearly occurred, and at the stage of locating a primary lesion even a small change serves as a baseline; in particular, regions such as the liver, which showed a greater change than the other regions, need more attention.
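The paired t-test the authors apply compares SUVs measured on the same ROIs under the two AC schemes. A minimal self-contained sketch of that statistic (the SUV values below are hypothetical placeholders, not the study's data):

```python
import math
import statistics

def paired_t(before, after):
    # Paired t statistic for SUVs measured on the same ROIs under
    # non-enhanced vs. contrast-enhanced attenuation correction.
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1  # statistic and degrees of freedom

# Hypothetical per-region mean SUVs (illustrative only):
suv_nonenhanced = [2.1, 1.8, 3.0, 2.5, 1.2]
suv_enhanced    = [2.3, 1.9, 3.2, 2.7, 1.3]
t_stat, df = paired_t(suv_nonenhanced, suv_enhanced)
```

A consistently positive t statistic with p < 0.05 is what the study reports for most regions, i.e., contrast-enhanced AC systematically raises the SUV.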


A Study on Intelligent Value Chain Network System based on Firms' Information (기업정보 기반 지능형 밸류체인 네트워크 시스템에 관한 연구)

  • Sung, Tae-Eung;Kim, Kang-Hoe;Moon, Young-Su;Lee, Ho-Shin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.67-88
    • /
    • 2018
  • Until recently, in recognition of the significance of sustainable growth and competitiveness of small- and medium-sized enterprises (SMEs), governmental support has mainly been provided for tangible resources such as R&D, manpower, and funds. However, it is also true that the inefficiency of support systems, such as underestimated or redundant support, has been raised, because conflicting policies exist in terms of the appropriateness, effectiveness, and efficiency of business support. From the perspective of the government or a company, we believe that, given the limited resources of SMEs, technology development and capacity enhancement through collaboration with external sources are the basis for creating competitive advantage, and we also emphasize value creation activities toward that end. This is why value chain network analysis is necessary: to analyze inter-company deal relationships along a series of value chains and to visualize the results by establishing knowledge ecosystems at the corporate level. There exist the Technology Opportunity Discovery (TOD) system, which provides information on the relevant products or technology status of companies with patents through retrieval by patent, product, or company name, and CRETOP and KISLINE, which both allow viewing of company (financial) information and credit information; however, there is no online system that provides a list of similar (competitive) companies based on value chain network analysis, or information on potential clients or demanders with whom business deals could be made in the future. Therefore, we focus on the "Value Chain Network System (VCNS)", a support partner for corporate business strategy planning developed and managed by KISTI, and investigate the types of embedded network-based analysis modules, the databases (D/Bs) that support them, and how to utilize the system efficiently. 
Further, we explore the network visualization function in the intelligent value chain analysis system, which becomes core information for understanding the industrial structure and for supporting a company's new product development. For a company to gain competitive superiority over others, it is necessary to identify which competitors hold patents or currently produce similar products, and searching for similar companies or competitors by industry type is the key to securing competitiveness in the commercialization of the target company. In addition, transaction information, which reflects business activity between companies, plays an important role in providing information on potential customers when both parties enter similar fields together. Identifying a competitor at the enterprise or industry level by using a network map based on such inter-company sales information can be implemented as a core module of value chain analysis. The Value Chain Network System (VCNS) combines the concepts of value chain and industrial structure analysis with corporate information collected to date, so that it can grasp not only the market competition situation of individual companies but also the value chain relationships of a specific industry. In particular, it can be useful as a corporate-level information analysis tool for tasks such as identifying industry structure, identifying competitor trends, analyzing competitors, locating suppliers (sellers) and demanders (buyers), tracking industry trends by item, finding promising items, finding new entrants, finding core companies and items along the value chain, and recognizing patents together with the corresponding companies. 
In addition, based on the objectivity and reliability of analysis results derived from transaction deal information and financial data, the value chain network system is expected to be utilized for various purposes such as information support for business evaluation, R&D decision support, and mid- or short-term demand forecasting, in particular for more than 15,000 member companies in Korea and for employees in R&D service sectors, government-funded research institutes, and public organizations. To strengthen the business competitiveness of companies, technology, patent, and market information have so far been provided mainly by government agencies and private R&D service companies, framed either as patent analysis (mainly rating and quantitative analysis) or as market analysis (market prediction and demand forecasting based on market reports). However, this has been limited in solving the lack of information, one of the difficulties that firms in Korea often face at the commercialization stage; in particular, information about competitors and potential candidates is much more difficult to obtain. In this study, the real-time value chain analysis and visualization service module based on the proposed network map and the data at hand provides expected market share, estimated sales volume, and contact information (which implies potential suppliers of raw materials/parts and potential demanders of complete products/modules). In future research, we intend to carry out in-depth work to further investigate indices of competitive factors through the participation of research subjects, to newly develop competitive indices for competitors or substitute items, and additionally to apply data mining techniques and algorithms for improving the performance of VCNS.
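The competitor-identification module described above operates on a network of inter-company transactions. A toy sketch of that idea, flagging as competitor candidates any suppliers that share a buyer (all company names and deals below are hypothetical placeholders, not VCNS data):

```python
from collections import defaultdict

# Toy supplier -> buyer transaction records (hypothetical placeholders).
deals = [("A", "X"), ("A", "Y"), ("B", "X"), ("B", "Z"), ("C", "Y")]

sells_to = defaultdict(set)
for supplier, buyer in deals:
    sells_to[supplier].add(buyer)

def competitor_candidates(company):
    # Suppliers sharing at least one buyer with `company` -- a minimal
    # stand-in for network-map-based competitor identification.
    return {s for s, buyers in sells_to.items()
            if s != company and buyers & sells_to[company]}

print(competitor_candidates("A"))  # {'B', 'C'}: both share a buyer with A
```

The same adjacency structure, traversed in the opposite direction, yields the "locating suppliers and demanders" function; VCNS layers visualization and industry classification on top of such a map.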

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • A Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. However, a few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived people's interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation; the second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and laborious to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases. The first uses the ConvNet as a fixed feature extractor; the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. 
However, applying the high-dimensional features directly extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through the pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, since it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple-ConvNet-layer representation. 
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% by the FC8 layer on VOC07, and 52.2% compared to 48.7% by the FC7 layer on SUN397. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
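The concatenate-then-PCA step of the pipeline can be sketched in a few lines. The activations below are random stand-ins for the real AlexNet fc6/fc7/fc8 outputs (an assumption for the demo), and PCA is done via SVD rather than any particular library's PCA class:

```python
import numpy as np

def concat_and_pca(fc6, fc7, fc8, k):
    # Concatenate per-image activations from the three fully connected
    # layers (4096 + 4096 + 1000 = 9192 dims), then keep the top-k
    # principal components as the salient feature set.
    X = np.hstack([fc6, fc7, fc8])           # (n_images, 9192)
    Xc = X - X.mean(axis=0)                  # center before PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # (n_images, k)

# Random stand-ins for real pre-trained AlexNet activations:
rng = np.random.default_rng(0)
n = 8
reduced = concat_and_pca(rng.normal(size=(n, 4096)),
                         rng.normal(size=(n, 4096)),
                         rng.normal(size=(n, 1000)), k=5)
```

The reduced features would then be fed to an ordinary classifier (the paper trains on them directly); in practice the PCA basis must be fit on the training split only and reused for test images.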

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH model. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. But since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, yielding 1487 observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days served as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. 
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility rises, buy volatility today; if it falls, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but our simulation results remain meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can now trade. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3%. The linear kernel function shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS; SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be examined in search of better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage, and the IVTS trading performance is unrealistic insofar as historical volatility values are used as trading objects. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
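The IVTS entry rules stated in the abstract reduce to a simple sign rule on consecutive volatility forecasts. A minimal sketch (the forecast series below is a toy illustration, not KOSPI 200 model output):

```python
def ivts_positions(vol_forecast):
    # IVTS entry rules from the paper: if tomorrow's forecasted volatility
    # rises, buy volatility today; if it falls, sell volatility today;
    # otherwise hold the existing position (0 until the first signal).
    pos, positions = 0, []
    for today, tomorrow in zip(vol_forecast, vol_forecast[1:]):
        if tomorrow > today:
            pos = 1     # long volatility
        elif tomorrow < today:
            pos = -1    # short volatility
        positions.append(pos)
    return positions

# Toy forecast series (illustrative values only):
print(ivts_positions([10.0, 11.0, 11.0, 9.5, 10.2]))  # [1, 1, -1, 1]
```

In the study these positions are marked against historical volatility values, which is why the reported returns are acknowledged as optimistic relative to trading actual volatility futures.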