• Title/Summary/Keyword: Computation time


Histological Validation of Cardiovascular Magnetic Resonance T1 Mapping for Assessing the Evolution of Myocardial Injury in Myocardial Infarction: An Experimental Study

  • Lu Zhang;Zhi-gang Yang;Huayan Xu;Meng-xi Yang;Rong Xu;Lin Chen;Ran Sun;Tianyu Miao;Jichun Zhao;Xiaoyue Zhou;Chuan Fu;Yingkun Guo
    • Korean Journal of Radiology
    • /
    • v.21 no.12
    • /
    • pp.1294-1304
    • /
    • 2020
  • Objective: To determine whether T1 mapping can monitor the dynamic changes of injury in myocardial infarction (MI) and be validated histologically. Materials and Methods: MI was induced in 22 pigs by ligating the left anterior descending artery, and the animals underwent serial cardiovascular magnetic resonance examinations with modified Look-Locker inversion T1 mapping and extracellular volume (ECV) computation in the acute (within 24 hours, n = 22), subacute (7 days, n = 13), and chronic (3 months, n = 7) phases of MI. Masson's trichrome staining was performed for histological ECV calculation. Myocardial native T1 and ECV were obtained by region-of-interest measurement in infarcted, peri-infarct, and remote myocardium. Results: Native T1 and ECV in peri-infarct myocardium differed from remote myocardium in the acute (1181 ± 62 ms vs. 1113 ± 64 ms, p = 0.002; 24 ± 4% vs. 19 ± 4%, p = 0.031) and subacute phases (1264 ± 41 ms vs. 1171 ± 56 ms, p < 0.001; 27 ± 4% vs. 22 ± 2%, p = 0.009) but not in the chronic phase (1157 ± 57 ms vs. 1120 ± 54 ms, p = 0.934; 23 ± 2% vs. 20 ± 1%, p = 0.109). From acute to chronic MI, infarcted native T1 peaked in the subacute phase (1275 ± 63 ms vs. 1637 ± 123 ms vs. 1471 ± 98 ms, p < 0.001), while ECV progressively increased with time (35 ± 7% vs. 46 ± 6% vs. 52 ± 4%, p < 0.001). Native T1 correlated well with histological findings (R2 = 0.65 to 0.89, all p < 0.001), as did ECV (R2 = 0.73 to 0.94, all p < 0.001). Conclusion: T1 mapping allows the quantitative assessment of injury in MI and the noninvasive monitoring of the evolution of tissue injury, and correlates well with histological findings.
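
The ECV computation mentioned in this abstract conventionally combines pre- and post-contrast T1 of myocardium and blood with the hematocrit; a minimal sketch in Python (the T1 and hematocrit values below are illustrative, not the study's data):

```python
def ecv(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hct):
    """Extracellular volume fraction from native/post-contrast T1 (ms) and hematocrit."""
    delta_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre        # change in relaxation rate
    delta_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return (1.0 - hct) * delta_r1_myo / delta_r1_blood

# Illustrative values only: native/post-contrast T1 in ms, hematocrit 0.42
print(round(ecv(1200.0, 500.0, 1600.0, 300.0, 0.42), 3))   # → 0.25
```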

Wavelet Transform-based Face Detection for Real-time Applications (실시간 응용을 위한 웨이블릿 변환 기반의 얼굴 검출)

  • 송해진;고병철;변혜란
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.9
    • /
    • pp.829-842
    • /
    • 2003
  • In this paper, we propose a new face detection and tracking method based on template matching for real-time applications such as teleconferencing, telecommunication, the front stage of a surveillance system using face recognition, and video-phone applications. Since the main purpose of the paper is to track a face regardless of the environment, we use a template-based face tracking method. To generate robust face templates, we apply the wavelet transform to the average face image and extract three types of wavelet templates from the transformed low-resolution average face. However, since template matching is generally sensitive to changes in illumination conditions, we apply min-max normalization with histogram equalization according to the variation in intensity. A tracking method is also applied to reduce the computation time and to predict a precise face candidate region. Finally, the facial components are detected, and from the relative distance between the two eyes, we estimate the size of the facial ellipse.
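
The min-max intensity normalization the authors apply before template matching can be sketched as a linear rescaling of pixel intensities (a generic implementation, not the paper's code):

```python
def min_max_normalize(pixels, new_min=0.0, new_max=255.0):
    """Rescale intensities linearly to [new_min, new_max], which reduces the
    sensitivity of template matching to global illumination changes."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # flat region: nothing to stretch
        return [new_min] * len(pixels)
    scale = (new_max - new_min) / (hi - lo)
    return [(p - lo) * scale + new_min for p in pixels]

print(min_max_normalize([10, 20, 30]))   # → [0.0, 127.5, 255.0]
```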

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rely on strict assumptions. Such strict assumptions include linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression analysis method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically, and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, and thus overfitting is unlikely to occur with SVM. Moreover, SVM does not require many training samples, since it builds prediction models using only representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance.
First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class classification as much as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used for effective multi-class computation to reduce computation time, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which can occur when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to the skewed boundary, and thus a reduction in classification accuracy. SVM ensemble learning is one machine learning approach to coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. The observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Thus boosting attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds in order to ensure that the comparison among the three different classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and then each set is in turn used as the test set while the classifier trains on the other nine sets. That is, the cross-validated folds were tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
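
The geometric mean-based accuracy that distinguishes MGM-Boost from plain AdaBoost can be illustrated with a small helper (a sketch of the metric only, not the authors' implementation):

```python
import math

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls; unlike the arithmetic mean,
    it collapses toward zero when any (e.g., minority) class is predicted poorly."""
    recalls = []
    for c in sorted(set(y_true)):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return math.prod(recalls) ** (1.0 / len(recalls))

# Majority class perfectly predicted, minority class only half right:
print(round(geometric_mean_accuracy([0, 0, 1, 1], [0, 0, 1, 0]), 4))   # → 0.7071
```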

A Study on the Shape and Movement in Dissolved Air Flotation for the Algae Removal (수중조류제거(水中藻類除去)를 위한 가압부상(加壓浮上)에 있어서 기포(氣泡)의 양태(模態)에 관한 연구(研究))

  • Kim, Hwan Gi;Jeong, Tae Seop
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.4 no.4
    • /
    • pp.79-93
    • /
    • 1984
  • Dissolved air flotation (DAF) has been shown to be an efficient process for the removal of algae from water. The efficiency of DAF can be affected by the volume ratio of pressurized liquid to sample, the pressure of the pressurized liquid, the contact time, the appropriate coagulant and its amount, the water temperature, the turbulence of the reactor, the bubble size and rising velocity, etc. The purpose of this paper is to compare the practical bubble rising velocity with the theoretical one, and to investigate the adhesion phenomenon between bubbles and flocs and the influence of bubble size and velocity upon the process. The results of the theoretical review and experimental investigation are as follows: Ives' equation is more suitable than Stokes' equation for computing the bubble rising velocity. The collection of bubbles and algae flocs is of the convective collection type and results from absorption rather than adhesion or collision. The treatment efficiency is excellent when the bubbles are smaller than $100{\mu}m$ and the turbulence of the reactor is small. Under the optimum conditions of the continuous-type DAF, the volume ratio of pressurized liquid to sample is 15%, the contact time in the reactor is 15 minutes, the pressure of the pressurized liquid is $4kg/cm^2$, and the distance from the jet needle to the inlet is 30 cm.
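
Stokes' equation, the baseline against which the authors favor Ives' equation, gives the terminal rise velocity of a small bubble as $v = g d^2 (\rho_l - \rho_b)/(18\mu)$; a quick check in Python (the fluid property values are illustrative):

```python
def stokes_rise_velocity(d, rho_liquid, rho_bubble, mu, g=9.81):
    """Terminal rise velocity (m/s) of a small spherical bubble by Stokes' law:
    v = g * d^2 * (rho_liquid - rho_bubble) / (18 * mu)."""
    return g * d ** 2 * (rho_liquid - rho_bubble) / (18.0 * mu)

# A 100-micron bubble in water at roughly 20 degC (illustrative property values):
v = stokes_rise_velocity(d=100e-6, rho_liquid=998.0, rho_bubble=1.2, mu=1.0e-3)
print(f"{v * 1000:.2f} mm/s")   # ~5.4 mm/s
```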


Numerical Hydrodynamic Modeling Incorporating the Flow through Permeable Sea-Wall (투수성 호안의 해수유통을 고려한 유동 수치모델링)

  • Bang, Ki-Young;Park, Sung Jin;Kim, Sun Ou;Cho, Chang Woo;Kim, Tae In;Song, Yong Sik;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.25 no.2
    • /
    • pp.63-75
    • /
    • 2013
  • The Inner Port Phase 2 area of the Pyeongtaek-Dangjin Port is enclosed by a total of three permeable sea-walls, and the disposal site to the east of the Inner Port Phase 2 is also enclosed by two permeable sea-walls. The maximum tidal ranges measured in the Inner Port Phase 2 and in the disposal site in May 2010 were 4.70 and 2.32 m, respectively, reaching up to 54 and 27%, respectively, of the 8.74 m measured simultaneously in the exterior. Regression formulas between the difference of hydraulic head and the rate of interior water volume change are derived. A three-dimensional numerical hydrodynamic model for Asan Bay is constructed, incorporating a module that employs these formulas to compute the water discharge through the permeable sea-walls at each computation time step. Hydrodynamics for the period from 13 to 27 May 2010 are simulated with driving forces of real-time reconstructed tide with the five major constituents ($M_2$, $S_2$, $K_1$, $O_1$ and $N_2$) and freshwater discharges from the Asan, Sapkyo, Namyang and Seokmoon sea dikes. The skill scores of the modeled mean high waters, mean sea levels and mean low waters are excellent, at 96 to 100%, in the interior of the permeable sea-walls. Compared with the results of a simulation that obstructs the flow through the permeable sea-walls, the maximum current speed increases by 0.05 to 0.10 m/s along the main channel and by 0.1 to 0.2 m/s locally in the exterior of the Outer Sea-wall of the Inner Port. The maximum bottom shear stress is also intensified by 0.1 to 0.4 $N/m^2$ in the main channel and by more than 0.4 $N/m^2$ locally around the arched Outer Sea-wall. The module developed to compute the flow through permeable sea-walls can be practically applied to simulate and predict the advection and dispersion of materials, the erosion or deposition of sediments, and the local scouring around coastal structures where large-scale permeable sea-walls are maintained.
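
The per-time-step discharge module can be sketched as a power-law regression between the hydraulic head difference and the flow through the wall; the functional form and the coefficients `a` and `b` below are placeholders for illustration, not the paper's fitted values:

```python
def seawall_discharge(head_diff, a=120.0, b=0.5):
    """Discharge (m^3/s) through a permeable sea-wall for a hydraulic head
    difference (m): Q = sign(dh) * a * |dh|**b, with illustrative a and b."""
    sign = 1.0 if head_diff >= 0.0 else -1.0
    return sign * a * abs(head_diff) ** b

# At each computation time step the model would add or remove this volume flux:
print(seawall_discharge(0.25))    # inflow for a 0.25 m exterior head surplus
print(seawall_discharge(-0.25))   # outflow when the head difference reverses
```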

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.26-32
    • /
    • 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and man-hour-intensive processing step. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required. The tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of the velocity nodes using a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. Also, the velocity analysis must be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence must be repeated most of the time. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. An interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack.
Most parameter changes yield the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed. The index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted wave. However, it has two improvements: no interpolation error and very fast computing time. With this technique, the mute times can be easily designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words and 304,073 characters. The program references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. The program runs in the X-Window/Motif environment, and its menu is designed according to the Motif style guide. A brief usage of the program has been discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
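
The NMO correction step that xva applies to the super gather follows the standard hyperbolic moveout $t(x) = \sqrt{t_0^2 + x^2/v^2}$; a minimal sketch (generic textbook formula, not Geobit code):

```python
import math

def nmo_time(t0, offset, velocity):
    """Two-way travel time (s) at a given offset (m) for hyperbolic moveout
    with stacking velocity v (m/s): t(x) = sqrt(t0^2 + (x/v)^2)."""
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

def nmo_shift(t0, offset, velocity):
    """Time shift removed by NMO correction so the event flattens to t0."""
    return nmo_time(t0, offset, velocity) - t0

# A 1.0 s event recorded at 1500 m offset with v = 1500 m/s arrives at sqrt(2) s:
print(round(nmo_time(1.0, 1500.0, 1500.0), 4))   # → 1.4142
```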


A Study on Hospital Nurses' Preferred Duty Shift and Duty Hours (병원 간호사의 선호근무시간대에 관한 연구)

  • Lee, Gyeong-Sik;Jeong, Geum-Hui
    • The Korean Nurse
    • /
    • v.36 no.1
    • /
    • pp.77-96
    • /
    • 1997
  • The duty shifts of hospital nurses not only affect nurses' physical and mental health but also present various personnel management problems, which often result in high turnover rates. In this context, a study was carried out over two months, from October to November 1995, to find out the status of hospital nurses' duty shift patterns and their preferred duty hours and fixed duty shifts. The study population was 867 RNs working in five general hospitals located in Seoul and its vicinity. A questionnaire developed by the writer was used for data collection. The response rate was 85.9 percent, or 745 returns. The SAS program was used for data analysis, with the computation of frequencies, percentages, and chi-square tests. The findings of the study are as follows: 1. General characteristics of the study population: 56 percent of respondents were in the under-25 age group and 76.5 percent were single; the predominant proportion of respondents were junior nursing college graduates (92.2%) with less than 5 years of nursing experience in hospitals (65.5%). Regarding their future plans in the nursing profession, nearly 50% responded as uncertain. The reason given for their career plan was predominantly 'personal growth and development' rather than financial reasons. 2. The interval between rotations of duty stations was found to be mostly irregular (56.4%), while others reported weekly (16.1%), monthly (12.9%), and fixed-term (4.6%) rotations. 3. The main problems related to duty shifts reported particularly by evening and night duty nurses were 'not enough time for the family,' 'fear for personal security when returning home late at night,' 'lack of leisure time,' 'problems in physical and physiological adjustment,' 'problems in family life,' and 'lack of time for interactions with fellow nurses.' 4. Forty percent of respondents reported '1-2 times' of duty shift rotations, while the others reported '0 times,' '2-3 times,' 'more than 3 times,' etc., which suggests irregularity in duty shift rotations. 5. The majority (62.8%) of the study population was found to favor the rotating system of duty stations. The reasons for favoring the rotation system were the opportunity for 'learning new things and personal development,' 'better human relations,' 'better understanding of various duty stations,' and 'changes in a monotonous routine job.' The proportion who disfavored the rotating system was 34.7 percent, giving the reasons 'it impedes development of specialization,' 'poor job performance,' and 'stress factors.' Furthermore, respondents made the following comments in relation to the rotation of duty stations: the nurses should be given the opportunity to participate in the decision-making process; personal interests and aptitudes should be considered; and rotations should occur at regular intervals or be planned in advance. 6. Regarding future career plans, the older, married group with longer nursing experience appeared more likely to consider nursing as a lifetime career than the younger, single group with shorter nursing experience ($x^2=61.19$, p=.000; $x^2=41.55$, p=.000). The reason given for their future career plan, regardless of length of future service, was predominantly 'personal growth and development' rather than financial reasons. On further analysis, the group with the shorter career plan appeared to claim 'financial reasons' for their future career more readily than the group who considered nursing their lifetime career ($x^2=11.73$, p=.003). This finding suggests the need for careful consideration in the personnel management of nursing administration, particularly when dealing with nurses' career development. The majority of respondents preferred the fixed day shift. However, on further analysis of those who preferred the evening shift by age and civil status, the under-25 group (15.1%) and the single group (13.2%) were more likely to favor the fixed evening shift than the over-25 (6.4%) and married (4.8%) groups. These differences were statistically significant ($x^2=14.54$, p=.000; $x^2=8.75$, p=.003). 7. A great majority of respondents (86.9%, or n=647) were found to prefer day shifts. When four different types of duty shifts (Types A, B, C, D) were presented, 55.0 percent of total respondents preferred the A type (the existing one), followed by the D type (22.7%), B type (12.4%), and C type (8.2%). 8. When monetary incentives for the evening (20% of salary) and night shifts (40% of salary) of the existing duty type were presented, the day shift again appeared to be the most preferred, although the rate was slightly lower (66.4% against 86.9%). With the same incentives, the preference rates for evening and night shifts increased from 11.0 to 22.4 percent and from 0.5 to 3.0 percent, respectively. When the age variable was controlled, the under-25 group showed higher rates (31.6%, 4.8%) than the over-25 group (15.5%, 1.3%) in preferring the evening and night shifts, respectively (p=.000). Civil status also seemed to affect shift preferences: the single group showed a lower rate (69.0%) for day duty against 83.6% for the married group, and higher rates for evening and night duties (27.2%, 15.1%) against those of the married group (3.8%, 1.8%). These differences were all statistically significant (p=.001). 9. The findings on preferences for three different types of fixed duty hours, namely B, C, and D (with additional monetary incentives), are as follows, in order of preference: B type (12 hrs a day, 3 days a week): day shift (64.1%), evening shift (26.1%), night shift (6.5%); C type (12 hrs a day, 4 days a week): evening shift (49.2%), day shift (32.8%), night shift (11.5%); D type (10 hrs a day, 4 days a week): a trend similar to the B type. The higher preferences for evening and night duties when incentives are given, as shown above, suggest the need to introduce different patterns of duty hours and incentive measures in order to overcome the difficulties in rostering nursing duties. However, the interpretation of the above data, particularly for the C type, needs caution, as the total number of respondents was very small (n=61); further in-depth study is required. In conclusion, the patterns of nurses' duty hours and shifts in most hospitals in the country seem to have been neither varied nor flexible. The stereotyped rostering system of three shifts and insensitivity to the personal lives of nurses seem to prevail. This study seems to support the view that irregular and frequent rotations of duty shifts may be contributing factors to most nurses' maladjustment problems in physical and mental health and in personal and family life, which may eventually result in high turnover rates. In order to overcome the increasing problems in the personnel management of hospital nurses, particularly in rostering the evening and night duty shifts that may be related to high turnover rates, the findings of this study strongly suggest the need to introduce new rostering systems, including fixed duties and appropriate incentive measures for the evening and night shifts that most nurses want to avoid. Considering that the nursing care of inpatients is a round-the-clock business, the practice of the nursing duty shift system is inevitable. In this context, based on the findings of this study, the following are recommended: 1. Further in-depth studies on duty shifts and hours should be undertaken to develop appropriate and effective rostering systems for hospital nurses. 2. Appropriate incentive measures for evening and night duty shifts, along with organizational considerations such as trials of preferred duty time bands, duty hours, and fixed duty shifts, should be introduced if good quality of care for patients is to be maintained around the clock. This may require the initiation of systematic research and development activities in the field of hospital nursing administration as part of a permanent system in the hospital. 3. Planned and regular intervals, orientation and training, and professional and personal growth should be considered for the rotation of different duty stations or units. 4. Considering the high degree of preference for the '10 hours a day, 4 days a week' duty type shown in this study, it would be worthwhile to undertake R&D-type studies in large hospital settings.


GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices is achieving an ultra-high aspect ratio contact (UHARC) profile without anomalous behaviors such as sidewall bowing and twisting profiles. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have been used commonly with numerous additives to obtain ideal etch profiles. However, they still suffer from formidable challenges, such as tight limits on sidewall bowing and controlling randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies to overcome these process limitations, including novel plasma chemistries and plasma sources. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for the silicon dioxide etching process under inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a Quadrupole Mass Spectrometer (QMS). The surface chemistries of the etched samples were measured by an X-ray Photoelectron Spectrometer. To measure plasma parameters, a self-cleaning RF Langmuir probe was used to cope with the polymer deposition environment on the probe tip, and the results were double-checked with the cutoff probe, which is known to be a precise plasma diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we propose a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer.
The predicted surface reaction modeling results showed good agreement with the experimental data. Building on these studies of the plasma surface reaction, we have developed a 3D topography simulator using a multi-layer level-set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. In the case of ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required for realistic treatment of this ballistic transport. To address this issue, the related computational codes were efficiently parallelized for GPU (Graphics Processing Unit) computing, so that the total computation time was improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model. Realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.
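
The Monte Carlo treatment of neutral ballistic transport can be illustrated with a toy 2D trench model: sample cosine-distributed arrival angles and count the particles that reach the bottom without hitting a sidewall. This is a simplified sketch (no re-emission or surface reactions), not the simulator's actual code:

```python
import math
import random

def direct_flux_fraction(aspect_ratio, n=50_000, seed=1):
    """Fraction of cosine-law neutrals entering a unit-width 2D trench opening
    that reach the bottom (depth = aspect_ratio) without a wall collision."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.random()                             # entry point across the opening
        theta = math.asin(2.0 * rng.random() - 1.0)  # cosine-distributed polar angle
        if 0.0 <= x + aspect_ratio * math.tan(theta) <= 1.0:
            hits += 1
    return hits / n

# Deepening the feature sharply reduces the directly arriving neutral flux:
shallow, deep = direct_flux_fraction(1.0), direct_flux_fraction(10.0)
print(shallow > deep)   # → True
```

GPU parallelization pays off here because each particle history is independent, so the loop maps directly onto many threads.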


Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.3
    • /
    • pp.153-163
    • /
    • 2017
  • The objective of this study is to enhance the speed of the models estimating the weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model) based precipitation) that are applied in the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process is operated on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Nonetheless, the server cannot even be dedicated to the handling of a single county, indicating that very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. In order to reduce this overhead, several caching and parallelization techniques were used to measure the performance and to check their applicability. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) is significantly greater than that for calculation, suggesting the need for a technique that reduces disk I/O bottlenecks; (2) when there are many I/O operations, it is advantageous to distribute them across several servers; however, each server must have a cache for the input data so that the servers do not compete for the same resource; and (3) a GPU-based parallel processing method is most suitable for models with large computational loads, such as PRISM.
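
The Growing Degree Days accumulation cited in finding (1) is a cheap per-cell computation, which is why disk I/O rather than arithmetic dominates; a minimal sketch of the accumulation (the base temperature is illustrative):

```python
def growing_degree_days(tmax_series, tmin_series, base=10.0):
    """Accumulated GDD: sum over days of max(0, (Tmax + Tmin)/2 - Tbase).
    On a grid this runs once per cell, so reading the daily grids from disk
    (not this arithmetic) is the bottleneck that caching targets."""
    return sum(max(0.0, (tmax + tmin) / 2.0 - base)
               for tmax, tmin in zip(tmax_series, tmin_series))

print(growing_degree_days([25.0, 30.0], [15.0, 20.0]))   # → 25.0
```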

Implementation and Evaluation of the Electron Arc Plan on a Commercial Treatment Planning System with a Pencil Beam Algorithm (Pencil Beam 알고리즘 기반의 상용 치료계획 시스템을 이용한 전자선 회전 치료 계획의 구현 및 정확도 평가)

  • Kang, Sei-Kwon;Park, So-Ah;Hwang, Tae-Jin;Cheong, Kwang-Ho;Lee, Me-Yeon;Kim, Kyoung-Ju;Oh, Do-Hoon;Bae, Hoon-Sik
    • Progress in Medical Physics
    • /
    • v.21 no.3
    • /
    • pp.304-310
    • /
    • 2010
  • The infrequent use of electron arc treatment can in large part be attributed to the lack of an adequate planning system. Although most linear accelerators provide an electron arc mode, no commercial planning system for electron arc plans is available at this time. In this work, with the expectation that an easily accessible planning system could promote electron arc therapy, a commercial planning system was commissioned and evaluated for electron arc planning. For the electron arc plan with a Varian 21-EX, Pinnacle3 (ver. 7.4f), with an electron pencil beam algorithm, was commissioned, in which the arc consisted of multiple static fields with a fixed beam opening. Film dosimetry and point measurements were performed to evaluate the computation. Beam modeling was not satisfactory for the calculation of lateral profiles. In contrast to the good agreement, within 1%, between the calculated and measured depth profiles, the calculated lateral profiles were underestimated compared with measurements, such that the distance-to-agreement (DTA) was 5.1 mm at the 50% dose level for 6 MeV and 6.7 mm for 12 MeV, with similar results at the other measured depths. Point and film measurements on a humanoid phantom revealed that the delivered dose exceeded the calculation by approximately 10%. The electron arc plan based on the pencil beam algorithm provides qualitative information on the dose distribution; dose verification before treatment should therefore be mandatory.
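
The distance-to-agreement (DTA) figures quoted above measure how far apart the calculated and measured profiles cross a given dose level; a simple 1D sketch with linear interpolation (the profile values are illustrative, not the paper's measurements):

```python
def crossing_position(xs, ys, level):
    """Linearly interpolated position where a profile first crosses a dose level."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if (y0 - level) * (y1 - level) <= 0.0 and y0 != y1:
            return x0 + (level - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("profile never crosses the requested level")

def dta(xs, measured, calculated, level):
    """Distance-to-agreement (same units as xs) at one dose level, e.g. 50%."""
    return abs(crossing_position(xs, measured, level)
               - crossing_position(xs, calculated, level))

# Illustrative penumbra samples (position in mm, dose in %):
xs = [0.0, 1.0, 2.0, 3.0]
print(dta(xs, [100.0, 80.0, 40.0, 10.0], [100.0, 90.0, 50.0, 20.0], 50.0))  # → 0.25
```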