• Title/Summary/Keyword: Index and performance tests

Search results: 195

Development of Preliminary Quality Assurance Software for GafChromic® EBT2 Film Dosimetry (GafChromic® EBT2 Film Dosimetry를 위한 품질 관리용 초기 프로그램 개발)

  • Park, Ji-Yeon;Lee, Jeong-Woo;Choi, Kyoung-Sik;Hong, Semie;Park, Byung-Moon;Bae, Yong-Ki;Jung, Won-Gyun;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.21 no.1
    • /
    • pp.113-119
    • /
    • 2010
  • Software for GafChromic EBT2 film dosimetry was developed in this study. The software provides film calibration functions based on color channels, categorized into red, green, blue, and gray. Corrections for the light scattering of a flat-bed scanner and for thickness differences of the active layer can be evaluated. Dosimetric results from EBT2 films can be compared with those from the treatment planning system ECLIPSE or the two-dimensional ionization chamber array MatriXX. Dose verification using EBT2 films is implemented through the following procedures: file import, noise filtering, background correction, active layer correction, dose calculation, and evaluation. Relative and absolute background corrections can be applied selectively. The calibration results and the fitting equation for the sensitometric curve are exported to files. After the two dose matrices are aligned through interpolation of spatial pixel spacing, interactive translation, and rotation, profiles and isodose curves are compared. In addition, the gamma index and gamma histogram are analyzed according to the chosen distance-to-agreement and dose-difference criteria. Performance was evaluated by dose verification in a 60°-enhanced dynamic wedged field and in intensity-modulated (IM) beams for prostate cancer. In the gamma-histogram evaluation with 3 mm and 3% criteria, the pass ratios for both types of tests exceeded 99%. The software was developed for use in routine periodic quality assurance and complex IM beam verification, and it can also serve as a dedicated radiochromic film tool for analyzing dose distributions.
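
The abstract above describes the gamma evaluation only at the level of its acceptance criteria (3 mm distance-to-agreement, 3% dose difference). As a rough illustration of what such an evaluation computes, the sketch below estimates a global gamma pass rate for two dose matrices on a common grid; the grid handling, search window, and global dose normalization are assumptions, not the authors' implementation.

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dta_mm=3.0, dd_percent=3.0):
    """Global gamma analysis on two 2-D dose grids of equal shape.

    ref, eval_  : 2-D numpy arrays of dose (same units, same grid)
    spacing_mm  : pixel spacing in mm
    Returns the fraction of evaluated points with gamma <= 1.
    """
    dd_abs = dd_percent / 100.0 * ref.max()          # global dose-difference norm
    search = int(np.ceil(3 * dta_mm / spacing_mm))   # local search radius in pixels
    ny, nx = ref.shape
    gamma = np.full_like(ref, np.inf, dtype=float)

    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - search), min(ny, j + search + 1)
            i0, i1 = max(0, i - search), min(nx, i + search + 1)
            jj, ii = np.mgrid[j0:j1, i0:i1]
            dist2 = ((jj - j) ** 2 + (ii - i) ** 2) * spacing_mm ** 2
            dose2 = (ref[j0:j1, i0:i1] - eval_[j, i]) ** 2
            gamma[j, i] = np.sqrt(np.min(dist2 / dta_mm ** 2 + dose2 / dd_abs ** 2))

    return float(np.mean(gamma <= 1.0))
```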

The Usefulness of Dyspnea Rating in Evaluation for Pulmonary Impairment/Disability in Patients with Chronic Pulmonary Disease (만성폐질환자의 폐기능손상 및 장애 평가에 있어서 호흡곤란정도의 유용성)

  • Park, Jae-Min;Lee, Jun-Gu;Kim, Young-Sam;Chang, Yoon-Soo;Ahn, Kang-Hyun;Cho, Hyun-Myung;Kim, Se-Kyu;Chang, Joon;Kim, Sung-Kyu;Lee, Won-Young
    • Tuberculosis and Respiratory Diseases
    • /
    • v.46 no.2
    • /
    • pp.204-214
    • /
    • 1999
  • Background: Resting pulmonary function tests (PFTs) are routinely used in the evaluation of pulmonary impairment/disability, but the role of the cardiopulmonary exercise test (CPX) in this evaluation remains controversial. Many experts believe that dyspnea, though a necessary part of the assessment, is not a reliable predictor of impairment. Nevertheless, the oxygen requirements of an organism at rest differ from those during activity or exercise, and a clear relationship between resting PFTs and exercise tolerance has not been established in patients with chronic pulmonary disease. Likewise, the relationship between resting PFTs and dyspnea is complex. To investigate the relationships among dyspnea, resting PFTs, and CPX, we evaluated patients with stabilized chronic pulmonary disease using a clinical dyspnea rating (baseline dyspnea index, BDI), resting PFTs, and CPX. Method: Fifty patients were divided into non-severe and severe groups on the basis of resting PFTs (ATS criteria), CPX (ATS or Ortega criteria), and dyspnea rating (BDI focal score). The groups were compared with respect to pulmonary function, CPX indices, and dyspnea rating. Results: According to the resting-PFT criteria for pulmonary impairment, $VO_2$max and the BDI focal score were significantly lower in the severe group (p<0.01). According to the $VO_2$max (ml/kg/min) and $VO_2$max (%) criteria, the resting-PFT parameters other than $FEV_1$ did not differ significantly between the non-severe and severe groups (p>0.05). According to the focal score, $FEV_1$ (%), FVC (%), MVV (%), $FEV_1/FVC$, and $VO_2$max were significantly lower in the severe group (p<0.01); however, in the more severely dyspneic group (focal score<5), only $VO_2$max (ml/kg/min) and $VO_2$max (%) were low (p<0.01). $FEV_1$ (%) was correlated with $VO_2$max (%) (r=0.52; p<0.01) but was not predictive of exercise performance, whereas the focal score was correlated with max WR (%) (r=0.55; p<0.01). Sensitivity and specificity analysis used to compare the different criteria for evaluating the severity of pulmonary impairment revealed that the classification differed according to the criteria used, and the dyspnea focal score showed similar sensitivity and specificity. Conclusion: Resting PFTs were not superior to the dyspnea rating in predicting exercise performance in patients with chronic pulmonary disease and were less correlated with the dyspnea focal score than were $VO_2$max and max WR. Therefore, unless contraindicated, CPX should be considered when evaluating the severity of pulmonary impairment in patients with chronic pulmonary disease, including those with severely impaired resting PFTs. Current criteria for evaluating the severity of impairment do not adequately consider the degree of dyspnea, so new criteria that include the severity of dyspnea may be necessary.
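
The comparison of classification criteria mentioned in the results relies on standard sensitivity/specificity calculations. The following minimal sketch shows that calculation with made-up severity labels; the coding of "severe" versus "non-severe" and the choice of reference criterion are hypothetical, not taken from the study.

```python
# 1 = "severe impairment" under a criterion, 0 = "non-severe".
# `reference` plays the role of the criterion taken as ground truth (e.g. VO2max-based
# severity); `candidate` is the criterion being evaluated (e.g. BDI focal-score severity).

def sensitivity_specificity(reference, candidate):
    tp = sum(1 for r, c in zip(reference, candidate) if r == 1 and c == 1)
    fn = sum(1 for r, c in zip(reference, candidate) if r == 1 and c == 0)
    tn = sum(1 for r, c in zip(reference, candidate) if r == 0 and c == 0)
    fp = sum(1 for r, c in zip(reference, candidate) if r == 0 and c == 1)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Example with invented classifications for 10 patients:
ref = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
cand = [1, 0, 0, 0, 1, 1, 1, 0, 0, 1]
print(sensitivity_specificity(ref, cand))  # -> (0.8, 0.8)
```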


Development of a Real-Time Mobile GIS using the HBR-Tree (HBR-Tree를 이용한 실시간 모바일 GIS의 개발)

  • Lee, Ki-Yang;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • v.6 no.1 s.11
    • /
    • pp.73-85
    • /
    • 2004
  • Recently, with the growth of the wireless Internet, PDAs, and HPCs, the focus of research and development related to GIS (Geographic Information System) has shifted to Real-Time Mobile GIS for location-based services (LBS). To offer LBS efficiently, there must be a Real-Time GIS platform that can deal with the dynamic status of moving objects and a location index that can deal with the characteristics of location data. Location data can use the same data types as GIS (e.g., point), but the management of location data is very different. Therefore, in this paper, we studied a Real-Time Mobile GIS that uses the HBR-tree to manage a large volume of location data efficiently. The Real-Time Mobile GIS developed in this paper consists of the HBR-tree and the Real-Time GIS platform. The HBR-tree proposed in this paper is a combined index of the R-tree and a spatial hash. Although location data are updated frequently, update operations are handled within the same hash table in the HBR-tree, so updates cost less than in other tree-based indexes. Since the HBR-tree uses the same search mechanism as the R-tree, location data can be searched quickly. The Real-Time GIS platform consists of a Real-Time GIS engine extended from a main memory database system, a middleware that transfers spatial and aspatial data to clients and receives location data from clients, and a mobile client that operates on mobile devices. In particular, this paper describes the performance evaluation of the HBR-tree and the Real-Time GIS engine, conducted with practical tests on each.
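
As a simplified illustration of the indexing idea described above, the sketch below hashes moving objects into fixed grid cells so that frequent location updates stay inside one bucket while range queries visit only overlapping cells. The real HBR-tree keeps R-tree structures under the hash; plain per-cell dictionaries are used here only to keep the example short, so this is not the paper's index.

```python
from collections import defaultdict

class GridLocationIndex:
    """Illustrative stand-in for a hash-plus-tree location index."""

    def __init__(self, cell_size=100.0):
        self.cell_size = cell_size
        self.cells = defaultdict(dict)   # (cx, cy) -> {object_id: (x, y)}
        self.where = {}                  # object_id -> (cx, cy)

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def update(self, oid, x, y):
        """Insert or move an object; cheap when it stays in the same cell."""
        new_cell = self._cell(x, y)
        old_cell = self.where.get(oid)
        if old_cell is not None and old_cell != new_cell:
            del self.cells[old_cell][oid]
        self.cells[new_cell][oid] = (x, y)
        self.where[oid] = new_cell

    def range_query(self, xmin, ymin, xmax, ymax):
        """Return ids of objects inside the query rectangle."""
        cx0, cy0 = self._cell(xmin, ymin)
        cx1, cy1 = self._cell(xmax, ymax)
        hits = []
        for cx in range(cx0, cx1 + 1):
            for cy in range(cy0, cy1 + 1):
                for oid, (x, y) in self.cells.get((cx, cy), {}).items():
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append(oid)
        return hits
```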


The Masking Effect of Olfactory Stimulus on Horn Sound Stimulus While Driving in a Graphic Driving Simulator (화상 자동차 시뮬레이터에서 운전 중에 경적음 자극에 대한 후각자극의 마스킹 효과)

  • Min, Cheol-Kee;Ji, Doo-Hwan;Ko, Bok-Soo;Kim, Jin-Soo;Lee, Dong-Hyung;Ryu, Tae-Beum;Shin, Moon-Soo;Chung, Soon-Cheol;Min, Byung-Chan;Kang, Jin-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.35 no.4
    • /
    • pp.227-234
    • /
    • 2012
  • In this study, the masking effect of an olfactory stimulus on the arousal caused by a sound stimulus while driving in a graphic driving simulator was observed through autonomic nervous system responses. The test was conducted with 11 males in their twenties. An ambulance siren was presented as the auditory stimulus for 30 seconds while driving in a highway scenario, under both the peppermint and control conditions, and the LF/HF ratio of heart rate variability (HRV), an index of sympathetic activity, and the galvanic skin response (GSR) were examined. The experiment proceeded in three stages: sound stimulus alone (test 1), driving with sound stimulus (test 2), and fragrance stimulus with driving and sound stimulus (test 3); GSR and HRV were measured throughout all stages. GSR increased with significant differences before versus after the auditory stimulus in test 1 (p < 0.01), test 2 (p < 0.05), and test 3 (p < 0.01), during driving in test 2 (p < 0.01) and test 3 (p < 0.01), and with the olfactory stimulus in test 3 (p < 0.05). This indicates that when the auditory stimulus was presented, the subjects entered an arousal state as the sympathetic nervous system was activated. Comparing the auditory stimulus while driving before and after presenting the olfactory stimulus, there was no significant difference in GSR. The LF/HF ratio of HRV increased significantly only with the auditory stimulus in test 2 (p < 0.05) and during driving in test 2 (p < 0.05), and showed no significant difference with the olfactory stimulus. However, comparing the auditory stimulus while driving before and after presenting the olfactory stimulus, the LF/HF ratio of HRV decreased significantly (p < 0.05); that is, sympathetic activation decreased and the parasympathetic nervous system became more active. These results show that, while driving, the arousal caused by the auditory stimulus was attenuated by the olfactory stimulus. In conclusion, an olfactory stimulus can have a masking effect on an auditory stimulus while driving.
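
The LF/HF ratio reported above is a standard spectral index of heart rate variability. The sketch below shows one common way to compute it from R-R intervals; the resampling rate, PSD method, and band limits are conventional defaults, not the settings used in this study.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def lf_hf_ratio(rr_intervals_s, fs=4.0):
    """rr_intervals_s: successive R-R intervals in seconds."""
    rr = np.asarray(rr_intervals_s, dtype=float)
    t = np.cumsum(rr)                                  # beat times
    t_even = np.arange(t[0], t[-1], 1.0 / fs)          # evenly spaced time axis
    rr_even = interp1d(t, rr, kind="cubic")(t_even)    # resampled tachogram
    f, psd = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)                 # low-frequency band (Hz)
    hf_band = (f >= 0.15) & (f < 0.40)                 # high-frequency band (Hz)
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    return lf / hf
```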

An Experimental Study on the Durability Characterization using Porosity (시멘트 모르타르의 공극률과 내구특성과의 관계에 대한 실험적 연구)

  • Park, Sang Soon;Kwon, Seung-Jun;Kim, Tae Sang
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.2A
    • /
    • pp.171-179
    • /
    • 2009
  • The porosity of porous media such as concrete can be considered a durability index, since it provides a route for the intrusion of harmful ions and space for retaining moisture. Recently, modeling and analysis techniques for deterioration have been developed based on pore structure, so the relationship between porosity and durability characteristics has become an important issue. In this paper, a series of mortar samples with five water-to-cement ratios were prepared and tests for durability performance were carried out, including porosity measurement. The durability tests covered compressive strength, air permeability, chloride diffusion coefficient, absorption, and moisture diffusion coefficient, and the results were compared against water-to-cement ratio and porosity. From the normalized data, when porosity increases by a factor of 1.45, air permeability, chloride diffusion coefficient, absorption, and moisture diffusion coefficient increase by factors of 2.3, 2.1, 5.5, and 3.7, respectively, while compressive strength decreases to 0.6 times its original value. These quantities were found to vary linearly with porosity, with high correlation coefficients. Additionally, target durability performances were established from the test results and the literature, and a porosity for durable concrete is proposed based on them.
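
The linear relationships and correlation coefficients reported above come from fitting normalized durability indices against porosity. The snippet below illustrates that kind of fit with placeholder numbers; the data are invented for illustration only.

```python
import numpy as np

porosity = np.array([0.12, 0.14, 0.15, 0.17, 0.18])     # hypothetical porosities
norm_diffusion = np.array([1.0, 1.3, 1.5, 1.9, 2.1])     # hypothetical normalized chloride diffusion

slope, intercept = np.polyfit(porosity, norm_diffusion, 1)  # least-squares line
r = np.corrcoef(porosity, norm_diffusion)[0, 1]             # correlation coefficient

print(f"normalized diffusion ~ {slope:.2f} * porosity + {intercept:.2f}, r = {r:.3f}")
```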

Study Design and Baseline Results in a Cohort Study to Identify Predictors for the Clinical Progression to Mild Cognitive Impairment or Dementia From Subjective Cognitive Decline (CoSCo) Study

  • SeongHee Ho;Yun Jeong Hong;Jee Hyang Jeong;Kee Hyung Park;SangYun Kim;Min Jeong Wang;Seong Hye Choi;SeungHyun Han;Dong Won Yang
    • Dementia and Neurocognitive Disorders
    • /
    • v.21 no.4
    • /
    • pp.147-161
    • /
    • 2022
  • Background and Purpose: Subjective cognitive decline (SCD) refers to the self-perception of cognitive decline with normal performance on objective neuropsychological tests. SCD, which is the first help-seeking stage and the last stage before the clinical disease stage, can be considered the most appropriate time for prevention and treatment. This study aimed to compare characteristics between the amyloid-positive and amyloid-negative groups of SCD patients. Methods: The cohort study to identify predictors for the clinical progression to mild cognitive impairment (MCI) or dementia from subjective cognitive decline (CoSCo) is a multicenter, prospective observational study conducted in the Republic of Korea. In total, 120 people aged 60 years or above who presented with a complaint of persistent cognitive decline were enrolled, and various risk factors were measured among these participants. Continuous variables were analyzed using the Wilcoxon rank-sum test, and categorical variables were analyzed using the χ2 test or Fisher's exact test. Logistic regression models were used to assess the predictors of amyloid positivity. Results: The multivariate logistic regression model indicated that amyloid positivity on PET was related to a lack of hypertension, atrophy of the left lateral temporal and entorhinal cortex, low body mass index, low waist circumference, less body and visceral fat, fast gait speed, and the presence of the apolipoprotein E ε4 allele in amnestic SCD patients. Conclusions: The CoSCo study is still in progress, and the authors aim to identify the risk factors related to progression to MCI or dementia in amnestic SCD patients through a two-year longitudinal follow-up.
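
The multivariate logistic regression mentioned in the results can be sketched as follows. The column names mirror predictors named in the abstract, but the data frame, variable coding, and model specification are hypothetical rather than the CoSCo analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_amyloid_model(df: pd.DataFrame) -> pd.DataFrame:
    """Return odds ratios with 95% CIs for amyloid PET positivity.

    Assumed columns: amyloid_pet (0/1), hypertension (0/1), bmi,
    waist_circumference, gait_speed, apoe_e4 (0/1), entorhinal_thickness.
    """
    predictors = ["hypertension", "bmi", "waist_circumference",
                  "gait_speed", "apoe_e4", "entorhinal_thickness"]
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df["amyloid_pet"], X).fit(disp=False)
    ci = fit.conf_int()
    return pd.DataFrame({
        "odds_ratio": np.exp(fit.params),
        "ci_low": np.exp(ci[0]),
        "ci_high": np.exp(ci[1]),
        "p_value": fit.pvalues,
    })
```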

Analysis and Evaluation of CPC / COLSS Related Test Result During YGN 3 Initial Startup (영광 3호기 초기 시운전 동안 CPC / COLSS 관련시험 결과 분석 및 평가)

  • Chi, S.G.;Yu, S.S.;In, W.K.;Auh, G.S.;Doo, J.Y.;Kim, D.K.
    • Nuclear Engineering and Technology
    • /
    • v.27 no.6
    • /
    • pp.877-887
    • /
    • 1995
  • YGN 3 is the first nuclear power plant in Korea to use the Core Protection Calculator (CPC) as the core protection system and the Core Operating Limit Supervisory System (COLSS) as the core monitoring system. The CPC is designed to provide on-line calculations of Departure from Nucleate Boiling Ratio (DNBR) and Local Power Density (LPD) and to initiate a reactor trip if the core conditions exceed the DNBR or LPD design limit. The COLSS is designed to assist the operator in implementing the Limiting Conditions for Operation (LCOs) in the Technical Specifications for DNBR/Linear Heat Rate (LHR) margin, azimuthal tilt, and axial shape index, and to provide an alarm when the LCOs are reached. During YGN 3 initial startup testing, extensive CPC/COLSS related tests were performed to verify CPC/COLSS performance and to obtain optimum CPC/COLSS calibration constants at various core conditions. Most of the test results met their specific acceptance criteria; in the cases where acceptance criteria were not met, the test results were analyzed, evaluated, and justified. Through the analysis and evaluation of each of the CPC/COLSS related test results, it can be concluded that the CPC/COLSS were successfully implemented as designed at YGN 3.
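
Conceptually, the CPC trip and COLSS alarm functions described above reduce to comparing on-line calculated quantities against limits. The sketch below shows that comparison only in outline; the limit values and interfaces are illustrative placeholders, not plant setpoints or the actual CPC/COLSS algorithms.

```python
DNBR_TRIP_LIMIT = 1.3      # hypothetical design limit, for illustration only
LPD_TRIP_LIMIT = 21.0      # hypothetical local power density limit (kW/ft)

def cpc_trip_required(dnbr: float, lpd_kw_per_ft: float) -> bool:
    """CPC-style check: initiate a reactor trip if either limit is exceeded."""
    return dnbr < DNBR_TRIP_LIMIT or lpd_kw_per_ft > LPD_TRIP_LIMIT

def colss_alarm(margin_to_lco_percent: float) -> bool:
    """COLSS-style check: alarm when the operating margin to the LCO is used up."""
    return margin_to_lco_percent <= 0.0
```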


A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.109-130
    • /
    • 2011
  • One of the major problems in data mining is the size of the data, as most data sets have huge volumes these days. Streams of data are normally accumulated into data storages or databases: transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some data sets are simply buried unused inside huge data storage because of their size, and some are lost as soon as they are created because they are never saved. How to use such large data sets, and how to use data on a stream efficiently, are challenging questions in the study of data mining. Stream data is a data set that is accumulated into data storage from a data source continuously, and its size in many cases grows increasingly large over time. Mining information from this massive data takes too many resources, such as storage, money, and time. These characteristics of stream data make it difficult and expensive to store all the stream data accumulated over time; on the other hand, if one uses only recent or partial data to mine information or patterns, valuable information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of a rule set over time. A rule set is mined from each data set in the stream and accumulated into a master rule set storage, which also serves as a model for real-time decision making. One of the main advantages of this method is that it takes much smaller storage space than the traditional method, which saves the whole data set. Another advantage is that the accumulated rule set is used directly as a prediction model, so a prompt response to user requests is possible at any time; this makes real-time decision making possible, which is the greatest advantage of this method. Based on theories of ensemble approaches, a combination of many different models can produce a better prediction model, and the consolidated rule set effectively covers the whole data set while the traditional sampling approach covers only part of it. This study uses stock market data, which is heterogeneous in that its characteristics vary over time: the market indexes fluctuate whenever an event influences the market, so the variance of each variable is large compared with that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult, because it is harder to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with the method suggested here. The first approach induces a rule set from the most recent data set to predict the next one; the second induces a rule set from all the data accumulated from the beginning every time a new data set must be predicted. Neither was as good as the accumulated rule set method in performance. Furthermore, the study reports experiments with different prediction models: the first builds a prediction model only with the more important rule sets, while the second uses all the rule sets, assigning weights to the rules based on their performance. The second approach shows better performance than the first. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. A limitation of this study is that its application is bound to stock market data; more dynamic real-time stream data would be desirable for further application of this method. Another open problem is that, as the number of rules grows over time, special rules such as redundant or conflicting rules must be managed efficiently.
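
A simplified sketch of the rule-accumulation idea is shown below: rules mined from each window of the stream are merged into a master rule set, predictions are made by a weighted vote, and each rule's weight is adjusted from its observed performance. The rule representation and weighting scheme are illustrative assumptions, not the paper's exact algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    condition: callable          # maps a feature dict to True/False
    prediction: str              # e.g. "up" or "down"
    weight: float = 1.0

@dataclass
class MasterRuleSet:
    rules: list = field(default_factory=list)

    def accumulate(self, new_rules):
        """Add the rules mined from the latest stream window."""
        self.rules.extend(new_rules)

    def predict(self, features):
        """Weighted vote of all accumulated rules whose condition matches."""
        votes = {}
        for r in self.rules:
            if r.condition(features):
                votes[r.prediction] = votes.get(r.prediction, 0.0) + r.weight
        return max(votes, key=votes.get) if votes else None

    def update_weights(self, features, actual, lr=0.1):
        """Reward rules that predicted correctly, penalize the rest."""
        for r in self.rules:
            if r.condition(features):
                r.weight *= (1 + lr) if r.prediction == actual else (1 - lr)
```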

Internal Flow Analysis of Urea-SCR System for Passenger Cars Considering Actual Driving Conditions (운전 조건을 고려한 승용차용 요소첨가 선택적 촉매환원장치의 내부 유동 해석에 관한 연구)

  • Moon, Seong Joon;Jo, Nak Won;Oh, Se Doo;Lee, Ho Kil;Park, Kyoung Woo
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.40 no.3
    • /
    • pp.127-138
    • /
    • 2016
  • Diesel vehicles should be equipped with a urea selective catalytic reduction (SCR) system, a high-performance catalytic aftertreatment, in order to reduce harmful nitrogen oxide emissions. In this study, a three-dimensional Eulerian-Lagrangian CFD analysis was used to numerically predict the multiphase flow characteristics of the urea-SCR system, coupled with the chemical reactions of the system's transport phenomena. The numerical spray structure was then calibrated by comparing the results with values measured in spray visualization, such as the injection velocity, penetration length, spray radius, and Sauter mean diameter. In addition, the analysis results were verified by comparison with the nitrogen oxide removal efficiency measured during engine and chassis tests, resulting in a relative error of less than 5%. Finally, the verified CFD analysis was used to calculate the internal flow of the urea-SCR system, thereby analyzing the characteristics of pressure drop and velocity increase and predicting the uniformity index and over-distribution positions of ammonia.
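
The ammonia uniformity index mentioned above is commonly defined from point concentrations sampled on a cross-section upstream of the catalyst. The snippet below uses that widely cited definition; the paper does not state its exact formula, so this form is an assumption.

```python
# Common definition:  gamma = 1 - (1 / (2*N)) * sum_i |c_i - c_mean| / c_mean
# over N sampling points; 1.0 means perfectly uniform, lower values mean maldistribution.

import numpy as np

def uniformity_index(concentrations):
    c = np.asarray(concentrations, dtype=float)
    c_mean = c.mean()
    return 1.0 - np.abs(c - c_mean).sum() / (2.0 * c.size * c_mean)

print(uniformity_index([1.0, 1.0, 1.0, 1.0]))   # 1.0  (uniform field)
print(uniformity_index([2.0, 0.0, 0.0, 0.0]))   # 0.25 (strongly maldistributed)
```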

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than complex analyses such as corporate intrinsic value analysis or technical auxiliary index analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT technology has made it easier to analyze a huge amount of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but such approaches can be vulnerable in practice because whether the patterns found are suitable for trading is a separate question. Typically, when a meaningful pattern is found, a point matching the pattern is located and performance is measured after n days, assuming a purchase at that point in time; since this approach calculates virtual revenues, it can diverge considerably from reality. Existing research tries to find patterns with stock price prediction power, whereas this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave pattern published by Merrill (1980) is simple because it can be distinguished by five turning points. Despite reports that some patterns have price predictability, there have been no reports of performance in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system, and only the pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. Performance is measured assuming that both the buy and the sell are actually executed, which is closer to a real trading situation. We tested three ways to calculate the turning points. The first, the minimum change rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second, the high-low line zig-zag method, the high price that meets the n-day high price line is taken as the peak, and the low price that meets the n-day low price line is taken as the valley. In the third, the swing wave method, a central high that is higher than the n highs on each side is taken as a peak, and a central low that is lower than the n lows on each side is taken as a valley. The swing wave method was superior to the other methods in the test results; this is interpreted to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the Walk-forward Analysis (WFA) method, which separates the test section from the application section, so we were able to respond appropriately to market changes. In this study we optimize at the portfolio level, because optimizing the variables for each individual stock risks over-optimization; we therefore selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to form, but that the highest volatility is not necessarily the best.
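
The swing wave turning-point rule described above can be sketched directly from the abstract: a bar whose high exceeds the highs of the n bars on each side is a peak, and a bar whose low is below the lows of the n bars on each side is a valley. The implementation below is a plain reading of that description, not the authors' code.

```python
def swing_turning_points(highs, lows, n=3):
    """Return a list of (index, 'peak' | 'valley') turning points."""
    points = []
    for i in range(n, len(highs) - n):
        left_h, right_h = highs[i - n:i], highs[i + 1:i + n + 1]
        left_l, right_l = lows[i - n:i], lows[i + 1:i + n + 1]
        if highs[i] > max(left_h) and highs[i] > max(right_h):
            points.append((i, "peak"))
        elif lows[i] < min(left_l) and lows[i] < min(right_l):
            points.append((i, "valley"))
    return points

# Runs of five consecutive turning points (alternating peaks and valleys) would then
# be matched against the reclassified M & W pattern groups before a trade is triggered.
```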