• Title/Summary/Keyword: input parameter


Evaluation of Setup Uncertainty on the CTV Dose and Setup Margin Using Monte Carlo Simulation (몬테칼로 전산모사를 이용한 셋업오차가 임상표적체적에 전달되는 선량과 셋업마진에 대하여 미치는 영향 평가)

  • Cho, Il-Sung;Kwark, Jung-Won;Cho, Byung-Chul;Kim, Jong-Hoon;Ahn, Seung-Do;Park, Sung-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.81-90
    • /
    • 2012
  • The effect of setup uncertainties on the CTV dose and the correlation between setup uncertainties and the setup margin were evaluated by Monte Carlo based numerical simulation. Patient-specific information from an IMRT treatment plan for rectal cancer designed on the VARIAN Eclipse planning system, including the planned dose distribution and tumor volume information, was used as input to the Monte Carlo simulation program. The simulation program was developed for this study on a Linux environment using open-source packages, GNU C++ and the ROOT data analysis framework. All misalignments of patient setup were assumed to follow the central limit theorem; thus systematic and random errors were generated according to Gaussian statistics with a given standard deviation as the simulation input parameter. After the setup error simulations, the change of dose in the CTV volume was analyzed from the simulation results. In order to verify the conventional margin recipe, the correlation between setup error and setup margin was compared with the margin formula developed for three-dimensional conformal radiation therapy. The simulation was performed a total of 2,000 times for each simulation input of systematic and random errors independently. The standard deviation used to generate patient setup errors was varied from 1 mm to 10 mm in 1 mm steps. For systematic errors, the minimum dose in the CTV, $D_{min}^{syst}$, decreased from 100.4% to 72.50% and the mean dose $\bar{D}_{syst}$ decreased from 100.45% to 97.88%, while the standard deviation of the dose distribution in the CTV volume increased from 0.02% to 3.33%. Random errors likewise reduced the mean and minimum dose to the CTV volume: the minimum dose $D_{min}^{rand}$ was reduced from 100.45% to 94.80% and the mean dose $\bar{D}_{rand}$ decreased from 100.46% to 97.87%. As with systematic errors, the standard deviation of the CTV dose, $\Delta D_{rand}$, increased from 0.01% to 0.63%. After calculating the margin size for each systematic and random error, a "population ratio" was introduced and applied to verify the margin recipe. It was found that the conventional margin formula satisfies the margin objective for IMRT treatment of rectal cancer. The developed Monte Carlo based simulation program may be useful for studying patient setup error and dose coverage of the CTV volume under variations of margin size and setup error.
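A minimal numerical sketch of the kind of setup-error simulation described above, written in Python rather than the authors' GNU C++/ROOT implementation: a toy one-dimensional dose profile stands in for the Eclipse dose grid, and Gaussian systematic (per course) and random (per fraction) shifts are sampled for each standard deviation from 1 mm to 10 mm. The profile shape, fraction count and course count are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dose profile: 100% in the centre, falling off linearly near the field edge
# (an illustrative stand-in for the 3-D Eclipse dose grid used in the paper).
z = np.arange(-60, 61)                                        # position in mm
dose = np.clip((50.0 - np.abs(z)) / 5.0, 0.0, 1.0) * 100.0    # dose in %
ctv = np.abs(z) <= 40                                         # CTV occupies the central 80 mm

def ctv_dose(shifts_mm):
    """Mean and minimum CTV dose (%) after averaging the shifted dose over fractions."""
    d = np.zeros_like(dose)
    for s in shifts_mm:
        d += np.interp(z + s, z, dose, left=0.0, right=0.0)
    d /= len(shifts_mm)
    return d[ctv].mean(), d[ctv].min()

n_courses, n_fractions = 200, 25   # the paper simulates 2,000 courses per input sigma
for sigma in range(1, 11):         # standard deviation from 1 mm to 10 mm in 1 mm steps
    # Systematic error: one Gaussian shift per course, repeated for every fraction.
    syst = [ctv_dose([rng.normal(0.0, sigma)] * n_fractions) for _ in range(n_courses)]
    # Random error: an independent Gaussian shift for each fraction.
    rand = [ctv_dose(rng.normal(0.0, sigma, n_fractions)) for _ in range(n_courses)]
    print(f"sigma={sigma:2d} mm  "
          f"syst mean={np.mean([m for m, _ in syst]):6.2f}% min={np.mean([d for _, d in syst]):6.2f}%  "
          f"rand mean={np.mean([m for m, _ in rand]):6.2f}% min={np.mean([d for _, d in rand]):6.2f}%")
```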

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This however results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition and gives users the opportunity to choose membership functions of any shape. However, a significant memory waste can also occur: it is indeed possible that for each of the given fuzzy sets many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. The term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits for a membership value, and dm(fm) is the number of bits for the index of the membership function. In our case Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the fuzzy-set word dimension would then be 8 × 5 bits, and the dimension of the memory would have been 128 × 40 bits. Coherently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10]. (A short software model of the proposed word packing follows this entry.)

  • PDF
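A small software model of the word packing described in this abstract, under the stated example (128-element universe, 8 fuzzy sets, 32 truth levels, at most nfm = 3 non-null memberships per element, hence 3 × (3 + 5) = 24 bits per row). This is an illustrative Python sketch of the encoding and of the index-compare lookup, not the hardware design itself.

```python
# Pack, for each element of the universe of discourse, at most three non-null
# (membership-function index, membership value) pairs into one 24-bit word:
# 3 bits for the index (8 fuzzy sets) + 5 bits for the value (32 truth levels).
NFM, IDX_BITS, VAL_BITS = 3, 3, 5          # word length = 3 * (3 + 5) = 24 bits

def pack_row(pairs):
    """pairs: up to NFM (index, value) tuples with 0 <= index < 8 and 0 <= value < 32."""
    assert len(pairs) <= NFM
    word = 0
    for slot, (idx, val) in enumerate(pairs):
        word |= ((idx << VAL_BITS) | val) << (slot * (IDX_BITS + VAL_BITS))
    return word

def lookup(word, idx):
    """Combinational compare: return the value stored for fuzzy set `idx`, else 0."""
    for slot in range(NFM):
        field = (word >> (slot * (IDX_BITS + VAL_BITS))) & ((1 << (IDX_BITS + VAL_BITS)) - 1)
        if (field >> VAL_BITS) == idx and field != 0:
            return field & ((1 << VAL_BITS) - 1)
    return 0

row = pack_row([(2, 17), (3, 9)])          # element with two non-null memberships
print(lookup(row, 3), lookup(row, 5))      # -> 9 0
```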

Estimate and Analysis of Planetary Boundary Layer Height (PBLH) using a Mobile Lidar Vehicle system (이동형 차량탑재 라이다 시스템을 활용한 경계층고도 산출 및 분석)

  • Nam, Hyoung-Gu;Choi, Won;Kim, Yoo-Jun;Shim, Jae-Kwan;Choi, Byoung-Choel;Kim, Byung-Gon
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.3
    • /
    • pp.307-321
    • /
    • 2016
  • Planetary Boundary Layer Height (PBLH) is a major input parameter for weather forecasting and atmospheric diffusion models. In order to estimate the sub-grid scale variability of PBLH, PBLH data must be monitored with high spatio-temporal resolution. Accordingly, we introduce a LIdar observation VEhicle (LIVE) and analyze the PBLH derived from the lidar mounted on LIVE. PBLH estimated from LIVE shows high correlations with estimates from both the WRF model ($R^2=0.68$) and radiosonde ($R^2=0.72$). However, PBLH from the lidar tends to be overestimated in comparison with both WRF and radiosonde, because the lidar appears to detect the height of the Residual Layer (RL) as the PBLH, particularly when the true PBLH is below or near the overlap height (< 300 m). PBLH from the lidar with 10 min time resolution shows a typical diurnal variation: it grows after sunrise and reaches its maximum about two hours after solar culmination. The average growth rate of PBLH during the analysis period (2014/06/26 ~ 30) is 1.79 (-2.9 ~ 5.7) m $min^{-1}$. In addition, the lidar signal measured from a moving LIVE shows very low noise in comparison with that from stationary observation. The PBLH from LIVE is 1,065 m, similar to the value (1,150 m) derived from the radiosonde launched at Sokcho. This study suggests that LIVE can provide continuous and reliable PBLH observations with high resolution in both stationary and mobile operation.
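The abstract does not state the exact retrieval algorithm used with LIVE, but a common lidar PBLH estimate is the gradient method: take the height of the strongest negative vertical gradient of the range-corrected signal, above the instrument overlap height. A hedged Python sketch with a synthetic profile:

```python
import numpy as np

def pblh_gradient(height_m, rcs, z_min=300.0):
    """Estimate PBLH (m) as the height of the strongest negative vertical gradient
    of the range-corrected lidar signal, ignoring the region below the overlap
    height z_min (the abstract notes ~300 m)."""
    grad = np.gradient(np.log(rcs), height_m)
    valid = height_m >= z_min
    return height_m[valid][np.argmin(grad[valid])]

# Synthetic profile: well-mixed layer up to about 1 km, sharp drop above it.
z = np.arange(100.0, 3000.0, 15.0)
rcs = np.where(z < 1000, 1.0, 0.2 * np.exp(-(z - 1000) / 800)) + 0.01
print(pblh_gradient(z, rcs))   # ~1000 m for this synthetic case
```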

Prediction of Urban Flood Extent by LSTM Model and Logistic Regression (LSTM 모형과 로지스틱 회귀를 통한 도시 침수 범위의 예측)

  • Kim, Hyun Il;Han, Kun Yeun;Lee, Jae Yeong
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.3
    • /
    • pp.273-283
    • /
    • 2020
  • Because of climate change, the occurrence of localized heavy rainfall is increasing. It is important to predict floods in urban areas that have suffered inundation in the past. For flood prediction, not only numerical analysis models but also machine-learning-based models can be applied. The LSTM (Long Short-Term Memory) neural network used in this study is appropriate for sequence data, but it demands a lot of data. However, rainfall that causes flooding does not occur every year in a single urban basin, so it is difficult to collect enough data for deep learning. Therefore, in addition to the rainfall observed in the study area, rainfall observed in another urban basin was applied in the predictive model. The LSTM neural network was used to predict the total overflow, and the result of the SWMM (Storm Water Management Model) was applied as the target data. The prediction of the inundation map was performed by logistic regression; the independent variable was the total overflow and the dependent variable was the presence or absence of flooding in each grid cell. The dependent variable of the logistic regression was collected from the simulation results of a two-dimensional flood model, whose input data were the overflows at each manhole calculated by the SWMM. The prediction results for total overflow were compared across LSTM neural network parameters; four predictive models were used in this study depending on the parameters of the LSTM. The average RMSE (Root Mean Square Error) for verification and testing was 1.4279 ㎥/s and 1.0079 ㎥/s, respectively, over the four LSTM models, and the minimum RMSE for verification and testing was 1.1655 ㎥/s and 0.8797 ㎥/s. It was confirmed that the total overflow can be predicted similarly to the SWMM simulation results. The prediction of inundation extent was performed by linking the logistic regression with the results of the LSTM neural network, and the maximum area fitness was 97.33% when depths of more than 0.5 m were considered. The methodology presented in this study should be helpful in improving urban flood response based on deep learning.
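A schematic Python sketch of the two-step chain described above (LSTM for total overflow, then logistic regression for wet/dry per grid cell), using synthetic data and a single grid cell; the layer size, epochs and stand-in targets are assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

# Hypothetical shapes: rainfall sequences (samples, timesteps, 1) -> total overflow,
# then per-grid wet/dry labels from a 2-D flood model. All data here is synthetic.
rng = np.random.default_rng(1)
X = rng.random((200, 24, 1)).astype("float32")          # 24-step rainfall sequences
y_overflow = X.sum(axis=(1, 2)) * 0.5                   # stand-in for SWMM total overflow

lstm = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(24, 1)),
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X, y_overflow, epochs=5, verbose=0)

# Logistic regression for one grid cell: flooded (1) or not (0) vs. total overflow.
q = y_overflow.reshape(-1, 1)
wet = (y_overflow > np.median(y_overflow)).astype(int)  # stand-in for 2-D model output
clf = LogisticRegression().fit(q, wet)

q_new = lstm.predict(X[:5], verbose=0)                  # predicted total overflow
print(clf.predict_proba(q_new)[:, 1])                   # probability of inundation
```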

Study on the Variation of Optical Properties of Asian Dust Plumes according to their Transport Routes and Source Regions using Multi-wavelength Raman LIDAR System (다파장 라만 라이다 시스템을 이용한 발원지 및 이동 경로에 따른 황사의 광학적 특성 변화 연구)

  • Shin, Sung-Kyun;Noh, Youngmin;Lee, Kwonho;Shin, Dongho;Kim, KwanChul;Kim, Young J.
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.2
    • /
    • pp.241-249
    • /
    • 2014
  • Continuous observations of atmospheric aerosols were carried out over three years (2009-2011) using a multi-wavelength Raman lidar at the Gwangju Institute of Science and Technology (GIST), Korea ($35.11^{\circ}N$, $126.54^{\circ}E$). Particle depolarization ratios were retrieved from the observations in order to distinguish Asian dust layers. The vertical information on the Asian dust layers was used as an input parameter for the Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model for analysis of backward trajectories, from which the source regions and transport pathways of the Asian dust layers were identified. The most frequent source region of Asian dust reaching Korea during the observation period was the Gobi desert. A statistical analysis of the particle depolarization ratio of Asian dust was conducted according to transport route in order to retrieve the variation of the optical properties of Asian dust during long-range transport. The transport routes were classified into Asian dust transported to the observation site directly from the source regions, and Asian dust that passed over polluted regions of China. The particle depolarization ratios of Asian dust transported via industrial regions of China ranged from 0.07 to 0.1, whereas those of Asian dust transported directly from the source regions to the observation site were comparably higher, ranging from 0.11 to 0.15. It is considered that pure Asian dust particles from the source regions were mixed with pollution particles, which are likely spherical, during transport, so that the particle depolarization values of Asian dust mixed with pollution were decreased.
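As a simple illustration of how the reported depolarization ranges could be used to label a detected dust layer (the thresholds below are just the ranges quoted in this abstract, not a general-purpose classifier):

```python
def classify_dust_layer(part_depol):
    """Rough classification of an Asian dust layer by its particle depolarization
    ratio, using the ranges reported in the abstract (illustrative only)."""
    if 0.11 <= part_depol <= 0.15:
        return "dust transported directly from the source region"
    if 0.07 <= part_depol <= 0.10:
        return "dust mixed with pollution over industrial regions of China"
    return "outside the reported Asian-dust ranges"

for d in (0.08, 0.13, 0.30):
    print(d, "->", classify_dust_layer(d))
```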

An Estimation of Price Elasticities of Import Demand and Export Supply Functions Derived from an Integrated Production Model (생산모형(生産模型)을 이용(利用)한 수출(輸出)·수입함수(輸入函數)의 가격탄성치(價格彈性値) 추정(推定))

  • Lee, Hong-gue
    • KDI Journal of Economic Policy
    • /
    • v.12 no.4
    • /
    • pp.47-69
    • /
    • 1990
  • Using an aggregator model, we look into the possibilities for substitution between Korea's exports, imports, domestic sales and domestic inputs (particularly labor), and substitution between disaggregated export and import components. Our approach draws heavily on an economy-wide GNP function that is similar to Samuelson's, modeling trade functions as derived from an integrated production system. Under the conditions of homotheticity and weak separability, the GNP function facilitates consistent aggregation that retains certain properties of the production structure. It is also useful for a two-stage optimization process that enables us to obtain not only the net output price elasticities of the first-level aggregator functions, but also those of the second-level individual components of exports and imports. For the implementation of the model, we apply the Symmetric Generalized McFadden (SGM) function developed by Diewert and Wales to both stages of estimation. The first stage of the estimation procedure is to estimate the unit quantity equations of the second-level exports and imports, which comprise four components each. The parameter estimates obtained in the first stage are utilized in the derivation of instrumental variables for the aggregate export and import prices employed in the upper model. In the second stage, the net output supply equations derived from the GNP function are used in the estimation of the price elasticities of the first-level variables: exports, imports, domestic sales and labor (the supply and elasticity relations are written out after this entry). With these estimates in hand, we can come up with various elasticities of both the net output supply functions and the individual components of exports and imports. At the aggregate level (first level), exports appear to be substitutable with domestic sales, while labor is complementary with imports: an increase in the price of exports would reduce the amount of the domestic sales supply, and a decrease in the wage rate would boost the demand for imports. On the other hand, labor and imports are complementary with exports and domestic sales in the input-output structure. At the disaggregate level (second level), the price elasticities of the export and import components indicate that both substitution and complementarity possibilities exist between them. Although these elasticities are interesting in their own right, they would be more usefully applied as inputs to a computational general equilibrium model.

  • PDF
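For reference, the first-level relations implicit in the abstract can be written as follows (standard GNP-function results via Hotelling's lemma; the SGM parameterization itself is omitted here):

```latex
% Net output supplies from the GNP function and the first-level price elasticities.
\[
  x_i(p, v) = \frac{\partial G(p, v)}{\partial p_i},
  \qquad
  \varepsilon_{ij} = \frac{\partial \ln x_i}{\partial \ln p_j}
                   = \frac{p_j}{x_i}\,\frac{\partial^2 G(p, v)}{\partial p_i\,\partial p_j},
\]
% where G(p, v) is the GNP function, p collects the net output prices of the
% first-level variables (exports, imports, domestic sales, labor) and v the
% remaining fixed inputs. Imports and labor enter as negative net outputs, so the
% signs of their elasticities follow the netput convention.
```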

Analysis of Image Processing Characteristics in Computed Radiography System by Virtual Digital Test Pattern Method (Virtual Digital Test Pattern Method를 이용한 CR 시스템의 영상처리 특성 분석)

  • Choi, In-Seok;Kim, Jung-Min;Oh, Hye-Kyong;Kim, You-Hyun;Lee, Ki-Sung;Jeong, Hoi-Woun;Choi, Seok-Yoon
    • Journal of radiological science and technology
    • /
    • v.33 no.2
    • /
    • pp.97-107
    • /
    • 2010
  • The objective of this study is to figure out the unknown image processing methods of a commercial CR system. We implemented the processing curve of each look-up table (LUT) in the REGIUS 150 CR system by using a virtual digital test pattern method, and also measured the characteristics of the dry imager. First of all, we generated the virtual digital test pattern file with a binary file editor. This file was used as input data to the CR system (REGIUS 150, KONICA MINOLTA). The DICOM files automatically generated as output by the CR system were used to figure out the processing curves of each LUT mode (THX, ST, STM, LUM, BONE, LIN). The gradation curves of the dry imager were also measured to characterize the hard-copy image. From the results for each parameter, we identified the characteristics of the image processing parameters in the CR system. The processing curves measured by the proposed method showed the characteristics of the CR system, and we found linearity of the dry imager in the middle region of the processing curves. From these results, we found the relationships between the curves and each parameter: the G value is related to the slope and the S value is related to the shift along the x-axis of the processing curves. In conclusion, the image processing methods of commercial CR systems differ from one another and are not disclosed. The proposed method, which uses a virtual digital test pattern, can measure the characteristics of the parameters of the image processing in the CR system. We expect that the proposed method is useful for inferring the image processing methods not only of this CR system, but also of other commercial CR systems.
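A hypothetical Python sketch of the virtual-digital-test-pattern idea: write a ramp of raw values as a binary file to feed the CR workstation, then recover a processing (LUT) curve by pairing each input value with the mean processed output value. The file format, bit depth and the stand-in LUT are assumptions, not KONICA MINOLTA specifics.

```python
import numpy as np

bits = 12
ramp = np.repeat(np.arange(0, 2**bits, dtype=np.uint16), 64)   # each raw value as a small patch
ramp.tofile("virtual_test_pattern.raw")                        # binary input file for the CR workstation

def processing_curve(raw_in, processed_out, bits=12):
    """Mean processed pixel value for every distinct raw input value (the LUT curve)."""
    sums = np.bincount(raw_in, weights=processed_out.astype(float), minlength=2**bits)
    counts = np.bincount(raw_in, minlength=2**bits)
    return sums / np.maximum(counts, 1)

# Stand-in for one of the system's LUT modes: a made-up logarithmic response.
fake_output = (1023 * np.log1p(ramp.astype(float)) / np.log1p(2**bits - 1)).astype(np.uint16)
curve = processing_curve(ramp, fake_output)
print(curve[0], curve[2048], curve[4095])
```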

Development of a Dose Calibration Program for Various Dosimetry Protocols in High Energy Photon Beams (고 에너지 광자선의 표준측정법에 대한 선량 교정 프로그램 개발)

  • Shin Dong Oh;Park Sung Yong;Ji Young Hoon;Lee Chang Geon;Suh Tae Suk;Kwon Soo IL;Ahn Hee Kyung;Kang Jin Oh;Hong Seong Eon
    • Radiation Oncology Journal
    • /
    • v.20 no.4
    • /
    • pp.381-390
    • /
    • 2002
  • Purpose: To develop dose calibration programs for the IAEA TRS-277 and AAPM TG-21 protocols, based on the air kerma calibration factor (or the cavity-gas calibration factor), as well as for the IAEA TRS-398 and AAPM TG-51 protocols, based on the absorbed dose to water calibration factor, so as to avoid the unwanted errors associated with these calculation procedures. Materials and Methods: Currently, the most widely used dosimetry protocols for high energy photon beams are based on the air kerma calibration factor, following the IAEA TRS-277 and the AAPM TG-21. However, these have a somewhat complex formalism and limited scope for improving accuracy, owing to uncertainties in the physical quantities. Recently, the IAEA and the AAPM published protocols based on the absorbed dose to water calibration factor, the IAEA TRS-398 and the AAPM TG-51. The formalism and physical parameters were strictly applied to these four dose calibration programs. The tables and graphs of physical data and the information on ion chambers were converted to numerical form and incorporated into a database. The programs were developed to be user friendly, in the Visual $C^{++}$ language, for ease of use in a Windows environment according to the recommendations of each protocol. Results: The dose calibration programs for high energy photon beams, developed for the four protocols, allow the input of information about the dosimetry system, the characteristics of the beam quality, the measurement conditions and the dosimetry results, enabling the minimization of inter-user variations and errors during the calculation procedure. It was also possible to compare the absorbed dose to water of the four different protocols at a single reference point. Conclusion: Since this program expresses the physical parameter tables, graphs and ion chamber information in numerical, database form, the errors associated with the procedures and with different users could be avoided. It was possible to analyze and compare the major differences between the dosimetry protocols, since the program was designed to be user friendly and to accurately calculate the correction factors and absorbed dose. It is expected that users can make accurate dose calculations in high energy photon beams by selecting and performing the appropriate dosimetry protocol.
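As a small illustration of the absorbed-dose-to-water style of calculation that two of the four protocols implement, here is a Python sketch of the TG-51 formalism ($D_w^Q = M\,k_Q\,N_{D,w}^{^{60}Co}$ with $M = P_{ion}P_{TP}P_{elec}P_{pol}M_{raw}$); the numbers in the example call are illustrative, not taken from the paper.

```python
def p_tp(temp_c, pressure_kpa):
    """Temperature-pressure correction (TG-51 reference conditions: 22 degC, 101.33 kPa)."""
    return (273.2 + temp_c) / (273.2 + 22.0) * 101.33 / pressure_kpa

def dose_to_water_tg51(m_raw, p_ion, p_pol, p_elec, temp_c, pressure_kpa, k_q, n_dw_co60):
    """Absorbed dose to water at the reference depth, TG-51 formalism:
       D_w^Q = M * k_Q * N_{D,w}^{60Co},  with M = P_ion * P_TP * P_elec * P_pol * M_raw."""
    m = p_ion * p_tp(temp_c, pressure_kpa) * p_elec * p_pol * m_raw
    return m * k_q * n_dw_co60

# Illustrative numbers only: chamber reading in nC, N_D,w in Gy/nC -> dose in Gy.
print(dose_to_water_tg51(m_raw=20.05, p_ion=1.003, p_pol=1.001, p_elec=1.000,
                         temp_c=21.5, pressure_kpa=100.8, k_q=0.992,
                         n_dw_co60=0.0536))
```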

Design of a Bit-Serial Divider in GF($2^m$) for Elliptic Curve Cryptosystem (타원곡선 암호시스템을 위한 GF($2^m$)상의 비트-시리얼 나눗셈기 설계)

  • 김창훈;홍춘표;김남식;권순학
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.12C
    • /
    • pp.1288-1298
    • /
    • 2002
  • To implement an elliptic curve cryptosystem over GF($2^m$) at high speed, a fast divider is required. Although a bit-parallel architecture is well suited for high-speed division operations, an elliptic curve cryptosystem requires a large m (at least 163) to provide sufficient security; in other words, since the bit-parallel architecture has an area complexity of $O(m^2)$, it is not suited for this application. In this paper, we propose a new serial-in serial-out systolic array for computing division operations in GF($2^m$) using the standard basis representation. Based on a modified version of the binary extended greatest common divisor algorithm, we obtain a new data dependence graph and design an efficient bit-serial systolic divider. The proposed divider has $O(m)$ time complexity and $O(m)$ area complexity. If input data arrive continuously, the proposed divider can produce division results at a rate of one per m clock cycles, after an initial delay of 5m-2 cycles. Analysis shows that the proposed divider provides a significant reduction in both chip area and computational delay time compared to previously proposed systolic dividers with the same I/O format. Since the proposed divider can perform division operations at high speed with reduced chip area, it is well suited for the division circuit of an elliptic curve cryptosystem. Furthermore, since the proposed architecture does not restrict the choice of irreducible polynomial, and has a unidirectional data flow and regularity, it provides high flexibility and scalability with respect to the field size m.
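A bit-level software sketch of GF($2^m$) division based on the extended Euclidean (GCD) computation the abstract refers to; this models the arithmetic only, not the serial-in serial-out systolic array or its data dependence graph.

```python
def gf_inv(a, f, m):
    """Inverse of a in GF(2^m) with irreducible polynomial f, via the extended
    Euclidean algorithm on binary polynomials (bits of an int = coefficients)."""
    u, v, g1, g2 = a, f, 1, 0
    while u != 1:
        j = u.bit_length() - v.bit_length()
        if j < 0:
            u, v, g1, g2, j = v, u, g2, g1, -j
        u ^= v << j
        g1 ^= g2 << j
    return g1

def gf_mul(a, b, f, m):
    """Polynomial-basis multiplication in GF(2^m), reducing modulo f as we shift."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= f
    return r

def gf_div(b, a, f, m):
    """b / a in GF(2^m): one inversion followed by one multiplication."""
    return gf_mul(b, gf_inv(a, f, m), f, m)

# Small check in GF(2^3) with f(z) = z^3 + z + 1 (0b1011):
f, m = 0b1011, 3
q = gf_div(0b110, 0b010, f, m)                # (z^2 + z) / z
print(bin(q), bin(gf_mul(q, 0b010, f, m)))    # quotient, and quotient * z == z^2 + z
```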

Quantitative Analysis of Digital Radiography Pixel Values to absorbed Energy of Detector based on the X-Ray Energy Spectrum Model (X선 스펙트럼 모델을 이용한 DR 화소값과 디텍터 흡수에너지의 관계에 대한 정량적 분석)

  • Kim Do-Il;Kim Sung-Hyun;Ho Dong-Su;Choe Bo-young;Suh Tae-Suk;Lee Jae-Mun;Lee Hyoung-Koo
    • Progress in Medical Physics
    • /
    • v.15 no.4
    • /
    • pp.202-209
    • /
    • 2004
  • Flat panel based digital radiography (DR) systems have recently become useful and important in the field of diagnostic radiology. For DRs with amorphous silicon photosensors, CsI(Tl) is normally used as the scintillator, which produces visible light corresponding to the absorbed radiation energy. The visible light photons are converted into electric signals in the amorphous silicon photodiodes, which constitute a two-dimensional array. In order to produce good quality images, the detailed response of DR detectors to radiation must be studied. The relationship between air exposure and the DR output has been investigated in many studies, but only under the condition of a fixed tube voltage. In this study, we investigated the relationship between the DR output and X-rays in terms of the energy absorbed in the detector rather than the air exposure, using SPEC-l8, an X-ray energy spectrum model. Measured exposure was compared with calculated exposure to obtain the inherent filtration, which is an important input variable of SPEC-l8. The energy absorbed in the detector was calculated with an algorithm for the absorbed energy in the material, and pixel values of real images were obtained under various conditions. The characteristic curve was obtained from the relationship between these two quantities, and the results were verified using phantoms made of water and aluminum: the pixel values of the phantom images were estimated and compared with the characteristic curve under various conditions. It was found that the relationship between the DR output and the energy absorbed in the detector is almost linear. In the experiment using the phantoms, the estimated pixel values agreed with the characteristic curve, although the effect of scattered photons introduced some errors; the effect of scattered X-rays must still be studied because it was not included in the calculation algorithm. The results of this study can provide useful information about the pre-processing of digital radiography. (A simplified numerical sketch of the absorbed-energy calculation follows this entry.)

  • PDF
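A simplified numerical sketch of the absorbed-energy calculation referred to in the entry above: attenuate a spectrum by the filtration, then sum fluence × energy × absorption fraction in the scintillator. The spectrum shape and the attenuation coefficients below are placeholders, not SPEC-l8 or CsI data.

```python
import numpy as np

# Absorbed energy per exposure ~ sum over the spectrum of
#   fluence(E) * E * [1 - exp(-mu_detector(E) * t_detector)],
# with the spectrum first attenuated by the inherent/added filtration.
E = np.linspace(10, 80, 71)                     # keV bins for an 80 kVp beam
fluence = np.maximum(80 - E, 0) * 1e6           # toy bremsstrahlung-like shape (placeholder)
mu_filter = 0.3 * (30 / E) ** 3                 # per mm, placeholder (Al-like energy dependence)
mu_csi = 5.0 * (30 / E) ** 3                    # per mm, placeholder (scintillator-like)

def absorbed_energy(fluence, t_filter_mm=2.5, t_csi_mm=0.5):
    filtered = fluence * np.exp(-mu_filter * t_filter_mm)
    absorbed_fraction = 1.0 - np.exp(-mu_csi * t_csi_mm)
    return float(np.sum(filtered * E * absorbed_fraction))   # keV per unit area

# If the DR response is linear, pixel value ~ gain * absorbed energy + offset.
print(absorbed_energy(fluence))
```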