• Title/Summary/Keyword: input parameter

Search Results: 1,639

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source for such pivotal concerns as a company's stability, growth, and risk status. However, this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM for the prediction of DEA-based efficiency ratings for venture businesses, which were derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is built on the following two ideas for classifying which companies are more efficient venture companies: i) making a DEA-based multi-class rating for sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory. Thus far, the method has shown good performance, especially in its generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the hyperplane giving the maximum separation between classes; the support vectors are the training points closest to this hyperplane. If the data cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, that is, the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the estimation of credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel function. For multi-class SVM, we adopted the one-against-one approach among binary classification methods, as well as two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class to within an error of one class when it is difficult to determine the exact class in the actual market. We therefore also present accuracy within one-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification, notwithstanding its efficiency level. We believe this model can help investors in decision making as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the selection of kernel function parameters, generalization, and the sample size for multi-class classification.
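
As a rough illustration of the classification step described above (not the authors' code), the following sketch trains a multi-class SVM with a Gaussian RBF kernel on hypothetical DEA-rating data using scikit-learn, whose multi-class handling is one-against-one; the feature values, class labels, and hyperparameters are placeholders.

```python
# Illustrative sketch only: classifying DEA-derived efficiency ratings with a
# multi-class SVM. The synthetic features and labels below are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_firms = 154                         # sample size reported in the abstract
X = rng.normal(size=(n_firms, 6))     # hypothetical financial ratios per firm
y = rng.integers(0, 4, size=n_firms)  # hypothetical DEA-based rating classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
# Gaussian RBF kernel; scikit-learn's SVC handles multi-class via one-vs-one.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(scaler.transform(X_tr), y_tr)

pred = clf.predict(scaler.transform(X_te))
hit_ratio = np.mean(pred == y_te)
within_one = np.mean(np.abs(pred - y_te) <= 1)  # "within 1-class error" accuracy
print(f"hit ratio: {hit_ratio:.3f}, within-one-class accuracy: {within_one:.3f}")
```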

Ore Minerals, Fluid Inclusions, and Isotopic(S.C.O) Compositions in the Diatreme-Hosted Nokdong As-Zn Deposit, Southeastern Korea: The Character and Evolution of the Hydrothermal Fluids (다이아튜림 내에 부존한 녹동 비소-아연광상의 광석광물, 유체포유물, 유황-탄소-산소 동위원소 : 광화용액의 특성과 진화)

  • Park, Ki-Hwa;Park, Hee-In;Eastoe, Christopher J.;Choi, Suck-Won
    • Economic and Environmental Geology / v.24 no.2 / pp.131-150 / 1991
  • The Weolseong diatreme was temporally and spatially related to the intrusion of the Gadaeri granite, and was mineralized by meteoric aqueous fluids. In the Nokdong As-Zn deposit, pyrite, arsenopyrite, and sphalerite are the most abundant sulfide minerals. They are associated with minor amounts of magnetite, pyrrhotite, chalcopyrite, and cassiterite, and trace amounts of Pb-Sb-Bi-Ag sulphosalts. The As-Zn ore probably formed at about $350^{\circ}C$ according to fluid inclusion data and compositions estimated from the arsenic content of arsenopyrite and the iron content of sphalerite intergrown with pyrrhotite + chalcopyrite + cubanite. Heating studies of fluid inclusions in quartz indicate a temperature range between 180 and $360^{\circ}C$, and freezing data indicate a salinity range from 0.8 to 4.1 eq. wt% NaCl. The coexisting assemblage pyrite + pyrrhotite + arsenopyrite suggests that $H_2S$ was the dominant reduced sulfur species, and constrains the fluid parameters at $300^{\circ}C$ as follows: $10^{-34.5} < {\alpha}_{S_2} < 10^{-33}$, $10^{-11} < f_{S_2} < 10^{-8}$, $-2.4 < {\alpha}_{S_2} < -1.6$ atm, and pH = 5.2 (sericite stable). The sulfur isotope values range from 1.8 to 5.5‰ and indicate that the sulfur in the sulfides is magmatic in origin. The carbon isotope values range from -7.8 to -11.6‰, and the oxygen isotope values from the carbonates in mineralized wall rock range from 2 to 11.4‰. The oxygen isotope compositions of water coexisting with calcite require an input of meteoric water. The geochemical data indicate that the ore-forming fluid was probably generated by a variety of mechanisms, including deep circulation of meteoric water driven by magmatic heat, with possible input of magmatic water and ore components.


A Sensitivity Analysis of JULES Land Surface Model for Two Major Ecosystems in Korea: Influence of Biophysical Parameters on the Simulation of Gross Primary Productivity and Ecosystem Respiration (한국의 두 주요 생태계에 대한 JULES 지면 모형의 민감도 분석: 일차생산량과 생태계 호흡의 모사에 미치는 생물리모수의 영향)

  • Jang, Ji-Hyeon;Hong, Jin-Kyu;Byun, Young-Hwa;Kwon, Hyo-Jung;Chae, Nam-Yi;Lim, Jong-Hwan;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology / v.12 no.2 / pp.107-121 / 2010
  • We conducted a sensitivity test of the Joint UK Land Environment Simulator (JULES) in which the influence of biophysical parameters on the simulation of gross primary productivity (GPP) and ecosystem respiration (RE) was investigated for two typical ecosystems in Korea. For this test, we employed whole-year eddy-covariance flux observations measured in 2006 at two KoFlux sites: (1) a deciduous forest in complex terrain at Gwangneung and (2) farmland with heterogeneous mosaic patches at Haenam. Our analysis showed that the simulated GPP was most sensitive to the maximum rate of RuBP carboxylation and to leaf nitrogen concentration for both ecosystems. RE was sensitive to the wood biomass parameter for the deciduous forest in Gwangneung. For the mixed farmland in Haenam, however, RE was most sensitive to the maximum rate of RuBP carboxylation and leaf nitrogen concentration, like the simulated GPP. For both sites, the JULES model overestimated both GPP and RE when the default values of the input parameters were adopted. Considering that the leaf nitrogen concentration observed at the deciduous forest site was only about 60% of its default value, a significant portion of the model's overestimation can be attributed to such discrepancies in the input parameters. Our findings demonstrate that the abovementioned key biophysical parameters of the two ecosystems should be evaluated carefully prior to any simulation and interpretation of ecosystem carbon exchange in Korea.
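
As a rough sketch of the kind of one-at-a-time parameter sensitivity test described above (not the JULES model itself), the following example perturbs each parameter of a toy GPP function around its default value and reports a normalized sensitivity; the parameter names, default values, and the toy model are placeholders.

```python
# Illustrative one-at-a-time sensitivity sketch: perturb each biophysical
# parameter around its default and record the relative change in the output.
# The toy gpp_model() and the parameter defaults are hypothetical stand-ins.
import numpy as np

DEFAULTS = {"vcmax": 50.0, "leaf_n": 2.0, "wood_biomass": 8.0}  # hypothetical units

def gpp_model(params):
    """Stand-in for a land-surface model run returning annual GPP."""
    return 0.04 * params["vcmax"] * params["leaf_n"] + 0.1 * params["wood_biomass"]

def sensitivity(perturbation=0.2):
    base = gpp_model(DEFAULTS)
    result = {}
    for name, default in DEFAULTS.items():
        runs = []
        for factor in (1.0 - perturbation, 1.0 + perturbation):
            params = dict(DEFAULTS)
            params[name] = default * factor
            runs.append(gpp_model(params))
        # Normalized sensitivity: relative output change per relative parameter change
        result[name] = (runs[1] - runs[0]) / base / (2.0 * perturbation)
    return result

if __name__ == "__main__":
    for name, s in sorted(sensitivity().items(), key=lambda kv: -abs(kv[1])):
        print(f"{name:>14s}: normalized sensitivity = {s:+.2f}")
```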

Evaluation of Setup Uncertainty on the CTV Dose and Setup Margin Using Monte Carlo Simulation (몬테칼로 전산모사를 이용한 셋업오차가 임상표적체적에 전달되는 선량과 셋업마진에 대하여 미치는 영향 평가)

  • Cho, Il-Sung;Kwark, Jung-Won;Cho, Byung-Chul;Kim, Jong-Hoon;Ahn, Seung-Do;Park, Sung-Ho
    • Progress in Medical Physics / v.23 no.2 / pp.81-90 / 2012
  • The effect of setup uncertainties on the CTV dose and the correlation between setup uncertainties and setup margin were evaluated by Monte Carlo based numerical simulation. Patient-specific information from an IMRT treatment plan for rectal cancer designed on the VARIAN Eclipse planning system, including the planned dose distribution and the tumor volume information of a rectal cancer patient, was used as input to the Monte Carlo simulation program. The simulation program was developed for this study on a Linux environment using the open source packages GNU C++ and the ROOT data analysis framework. All misalignments of patient setup were assumed to follow the central limit theorem; thus systematic and random errors were generated according to Gaussian statistics with a given standard deviation as the simulation input parameter. After the setup error simulations, the change of dose in the CTV volume was analyzed from the simulation results. In order to verify the conventional margin recipe, the correlation between setup error and setup margin was compared with the margin formula developed for three-dimensional conformal radiation therapy. The simulation was performed a total of 2,000 times for each simulation input of systematic and random errors independently. The standard deviation used for generating patient setup errors was varied from 1 mm to 10 mm in 1 mm steps. For the systematic error, the minimum dose in the CTV, $D_{min}^{syst}$, decreased from 100.4% to 72.50% and the mean dose $\bar{D}_{syst}$ decreased from 100.45% to 97.88%, while the standard deviation of the dose distribution in the CTV volume increased from 0.02% to 3.33%. The random error likewise reduced the mean and minimum dose to the CTV volume: the minimum dose $D_{min}^{rand}$ was reduced from 100.45% to 94.80% and the mean dose $\bar{D}_{rand}$ decreased from 100.46% to 97.87%. As with the systematic error, the standard deviation of the CTV dose, ${\Delta}D_{rand}$, increased from 0.01% to 0.63%. After calculating the margin size for each systematic and random error, the "population ratio" was introduced and applied to verify the margin recipe. It was found that the conventional margin formula satisfies the margin objective for IMRT treatment of rectal cancer. The developed Monte Carlo based simulation program is considered potentially useful for studying patient setup error and dose coverage in the CTV volume under variations of margin size and setup error.
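
The following simplified sketch illustrates the simulation idea on a one-dimensional toy dose profile (the authors' program was written in C++ with ROOT and used the full planned 3-D dose distribution): Gaussian systematic errors shift a whole course while random errors shift each fraction, and the minimum and mean CTV doses are then tallied. All numbers, the dose profile, and the fraction scheme are placeholders.

```python
# Simplified 1-D sketch of the setup-error Monte Carlo idea; everything here is
# a toy stand-in for the real 3-D IMRT dose grid and patient geometry.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-60, 60, 241)                                     # position, mm
planned_dose = 100.0 / (1.0 + np.exp((np.abs(x) - 45.0) / 3.0))   # toy dose profile (%)
ctv = np.abs(x) <= 30.0                                           # hypothetical CTV extent

def delivered_dose(sigma_syst, sigma_rand, n_fractions=25):
    """Average dose over one simulated course with Gaussian setup errors (mm)."""
    syst_shift = rng.normal(0.0, sigma_syst)
    total = np.zeros_like(planned_dose)
    for _ in range(n_fractions):
        shift = syst_shift + rng.normal(0.0, sigma_rand)
        total += np.interp(x + shift, x, planned_dose)
    return total / n_fractions

def simulate(sigma_syst, sigma_rand, n_courses=200):
    d_min, d_mean = [], []
    for _ in range(n_courses):
        d = delivered_dose(sigma_syst, sigma_rand)
        d_min.append(d[ctv].min())
        d_mean.append(d[ctv].mean())
    return np.mean(d_min), np.mean(d_mean)

for sigma in (1.0, 5.0, 10.0):                                    # mm standard deviations
    dmin, dmean = simulate(sigma_syst=sigma, sigma_rand=0.0)
    print(f"systematic sigma={sigma:4.1f} mm -> CTV D_min={dmin:6.2f}%, D_mean={dmean:6.2f}%")
```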

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1041-1043 / 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, a significant waste of memory can be registered as well: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and will be represented by the memory rows. The length of a memory word is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for any element of the universe of discourse, dm(m) is the dimension (in bits) of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, then, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the fuzzy-set word dimension would then be 8 × 5 bits, and the dimension of the memory would have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain a good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
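
A minimal software sketch of the compact memory layout described above (the paper targets a hardware circuit; this version is only illustrative) packs, for each element of the universe of discourse, at most nfm (index, value) pairs into a 24-bit word, following the abstract's example of 128 elements, 8 fuzzy sets, and 32 truth levels.

```python
# Illustrative sketch (not the authors' hardware design): store, for each element
# of the universe of discourse, at most NFM (index, value) pairs instead of the
# membership value of every fuzzy set.
N_UNIVERSE = 128     # memory depth (rows)
N_SETS = 8           # fuzzy sets  -> 3-bit index
N_LEVELS = 32        # truth levels -> 5-bit value
NFM = 3              # max non-null memberships per element

WORD_BITS = NFM * (5 + 3)   # Length = nfm * (dm(m) + dm(fm)) = 24 bits per row

def pack_row(memberships):
    """Pack the non-null (set_index, value) pairs of one universe element
    into a 24-bit word: three 8-bit fields of (5-bit value | 3-bit index)."""
    non_null = [(i, v) for i, v in enumerate(memberships) if v > 0][:NFM]
    word = 0
    for slot, (idx, val) in enumerate(non_null):
        field = ((val & 0x1F) << 3) | (idx & 0x07)
        word |= field << (8 * slot)
    return word

def unpack_row(word):
    """Recover the sparse (set_index, value) pairs from a packed row."""
    pairs = []
    for slot in range(NFM):
        field = (word >> (8 * slot)) & 0xFF
        idx, val = field & 0x07, field >> 3
        if val > 0:
            pairs.append((idx, val))
    return pairs

# Example: a universe element where only fuzzy sets 2 and 3 are non-null.
row = pack_row([0, 0, 12, 20, 0, 0, 0, 0])
print(f"word uses {WORD_BITS} bits; packed row = {row:#08x}; pairs = {unpack_row(row)}")
```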


Estimate and Analysis of Planetary Boundary Layer Height (PBLH) using a Mobile Lidar Vehicle system (이동형 차량탑재 라이다 시스템을 활용한 경계층고도 산출 및 분석)

  • Nam, Hyoung-Gu;Choi, Won;Kim, Yoo-Jun;Shim, Jae-Kwan;Choi, Byoung-Choel;Kim, Byung-Gon
    • Korean Journal of Remote Sensing / v.32 no.3 / pp.307-321 / 2016
  • Planetary Boundary Layer Height (PBLH) is a major input parameter for weather forecasting and atmospheric diffusion models. In order to estimate the sub-grid-scale variability of PBLH, PBLH needs to be monitored with high spatio-temporal resolution. Accordingly, we introduce a LIdar observation VEhicle (LIVE) and analyze the PBLH derived from the lidar mounted on LIVE. The PBLH estimated from LIVE shows high correlations with those estimated from both the WRF model ($R^2=0.68$) and radiosonde ($R^2=0.72$). However, the PBLH from the lidar tends to be overestimated in comparison with those from both WRF and radiosonde, because the lidar appears to detect the height of the Residual Layer (RL) as the PBLH, mainly below or near the overlap height (< 300 m). The PBLH from the lidar with 10 min time resolution shows a typical diurnal variation: it grows after sunrise and reaches its maximum about 2 hours after solar culmination. The average growth rate of PBLH during the analysis period (2014/06/26 ~ 30) is 1.79 m $min^{-1}$ (ranging from -2.9 to 5.7). In addition, the lidar signal measured from the moving LIVE shows very low noise in comparison with that from stationary observation. The PBLH from LIVE is 1065 m, similar to the value (1150 m) derived from the radiosonde launched at Sokcho. This study suggests that LIVE can observe continuous and reliable PBLH with high resolution in both stationary and mobile operation.

Prediction of Urban Flood Extent by LSTM Model and Logistic Regression (LSTM 모형과 로지스틱 회귀를 통한 도시 침수 범위의 예측)

  • Kim, Hyun Il;Han, Kun Yeun;Lee, Jae Yeong
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.3 / pp.273-283 / 2020
  • Because of climate change, the occurrence of localized heavy rainfall is increasing. It is important to predict floods in urban areas that have suffered inundation in the past. For flood prediction, not only numerical analysis models but also machine learning-based models can be applied. The LSTM (Long Short-Term Memory) neural network used in this study is appropriate for sequence data, but it demands a large amount of data. However, rainfall that causes flooding does not occur every year in a single urban basin, so it is difficult to collect enough data for deep learning. Therefore, in addition to the rainfall observed in the study area, rainfall observed in another urban basin was applied in the predictive model. The LSTM neural network was used for predicting the total overflow, and the results of the SWMM (Storm Water Management Model) were used as target data. The prediction of the inundation map was performed by logistic regression, in which the independent variable was the total overflow and the dependent variable was the presence or absence of flooding in each grid cell. The dependent variable of the logistic regression was collected from the simulation results of a two-dimensional flood model, whose input data were the overflows at each manhole calculated by the SWMM. The prediction results of total overflow were compared for different LSTM neural network parameters; four predictive models were used in this study depending on the LSTM parameters. The average RMSE (Root Mean Square Error) for verification and testing across the four LSTM models was 1.4279 ㎥/s and 1.0079 ㎥/s, respectively, and the minimum RMSE for verification and testing was 1.1655 ㎥/s and 0.8797 ㎥/s. It was confirmed that the total overflow can be predicted in close agreement with the SWMM simulation results. The prediction of inundation extent was performed by linking the logistic regression with the results of the LSTM neural network, and the maximum areal fitness was 97.33% when depths greater than 0.5 m were considered. The methodology presented in this study would be helpful in improving urban flood response based on deep learning.
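
A minimal sketch of the two-step scheme described above, assuming TensorFlow/Keras and scikit-learn and entirely synthetic data (in the paper the targets come from SWMM and a 2-D flood model): an LSTM maps a rainfall sequence to total overflow, and a per-grid logistic regression maps total overflow to a flooded / not-flooded label.

```python
# Illustrative sketch only; all data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from tensorflow import keras

rng = np.random.default_rng(1)
n_events, seq_len, n_grids = 200, 24, 50             # hypothetical sizes

rainfall = rng.gamma(2.0, 2.0, size=(n_events, seq_len, 1))                      # mm per step
total_overflow = rainfall.sum(axis=(1, 2)) * 0.3 + rng.normal(0, 2, n_events)    # toy m^3/s
# Synthetic per-grid flood labels: grids with lower "thresholds" flood first.
thresholds = rng.uniform(22, 36, size=n_grids)
flooded = (total_overflow[:, None] > thresholds[None, :]).astype(int)

# 1) LSTM for total overflow
model = keras.Sequential([
    keras.layers.Input(shape=(seq_len, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(rainfall, total_overflow, epochs=20, batch_size=16, verbose=0)
pred_overflow = model.predict(rainfall, verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred_overflow - total_overflow) ** 2)))
print(f"training RMSE of total overflow: {rmse:.3f}")

# 2) One logistic regression per grid cell: flooded or not, given total overflow
grid_models = []
for g in range(n_grids):
    lr = LogisticRegression()
    lr.fit(total_overflow.reshape(-1, 1), flooded[:, g])
    grid_models.append(lr)

# Predicted inundation extent for one event, using the LSTM output as input
event_overflow = np.array([[pred_overflow[0]]])
extent = [int(m.predict(event_overflow)[0]) for m in grid_models]
print(f"predicted flooded cells for event 0: {sum(extent)} of {n_grids}")
```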

Study on the Variation of Optical Properties of Asian Dust Plumes according to their Transport Routes and Source Regions using Multi-wavelength Raman LIDAR System (다파장 라만 라이다 시스템을 이용한 발원지 및 이동 경로에 따른 황사의 광학적 특성 변화 연구)

  • Shin, Sung-Kyun;Noh, Youngmin;Lee, Kwonho;Shin, Dongho;Kim, KwanChul;Kim, Young J.
    • Korean Journal of Remote Sensing / v.30 no.2 / pp.241-249 / 2014
  • Continuous observations of atmospheric aerosols were carried out over 3 years (2009-2011) using a multi-wavelength Raman lidar at the Gwangju Institute of Science and Technology (GIST), Korea ($35.11^{\circ}N$, $126.54^{\circ}E$). The particle depolarization ratios were retrieved from the observations in order to distinguish Asian dust layers. The vertical information on the Asian dust layers was used as an input parameter for the Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model for the analysis of backward trajectories, and the source regions and transport pathways of the Asian dust layers were identified. The most frequent source region of Asian dust reaching Korea during the observation period of this study was the Gobi Desert. A statistical analysis of the particle depolarization ratio of Asian dust was conducted according to transport route in order to retrieve the variation of the optical properties of Asian dust during long-range transport. The transport routes were classified into Asian dust transported to the observation site directly from the source regions and Asian dust that passed over polluted regions of China. The particle depolarization ratios of Asian dust transported via industrial regions of China ranged from 0.07 to 0.1, whereas those of Asian dust transported directly from the source regions to the observation site were comparatively higher, ranging from 0.11 to 0.15. It is considered that pure Asian dust particles from the source regions were mixed with pollution particles, which are likely spherical, during transport, so that the particle depolarization values of Asian dust mixed with pollution were decreased.

An Estimation of Price Elasticities of Import Demand and Export Supply Functions Derived from an Integrated Production Model (생산모형(生産模型)을 이용(利用)한 수출(輸出)·수입함수(輸入函數)의 가격탄성치(價格彈性値) 추정(推定))

  • Lee, Hong-gue
    • KDI Journal of Economic Policy / v.12 no.4 / pp.47-69 / 1990
  • Using an aggregator model, we look into the possibilities for substitution between Korea's exports, imports, domestic sales, and domestic inputs (particularly labor), and for substitution between disaggregated export and import components. Our approach draws heavily on an economy-wide GNP function, similar to Samuelson's, that models trade functions as derived from an integrated production system. Under the conditions of homotheticity and weak separability, the GNP function would facilitate consistent aggregation that retains certain properties of the production structure. It would also be useful for a two-stage optimization process that enables us to obtain not only the net output price elasticities of the first-level aggregator functions, but also those of the second-level individual components of exports and imports. For the implementation of the model, we apply the Symmetric Generalized McFadden (SGM) function developed by Diewert and Wales to both stages of estimation. The first stage of the estimation procedure is to estimate the unit quantity equations of the second-level exports and imports, which comprise four components each. The parameter estimates obtained in the first stage are utilized in the derivation of instrumental variables for the aggregate export and import prices employed in the upper-level model. In the second stage, the net output supply equations derived from the GNP function are used in the estimation of the price elasticities of the first-level variables: exports, imports, domestic sales, and labor. With these estimates in hand, we can derive various elasticities of both the net output supply functions and the individual components of exports and imports. At the aggregate (first) level, exports appear to be substitutable with domestic sales, while labor is complementary with imports: an increase in the price of exports would reduce the supply of domestic sales, and a decrease in the wage rate would boost the demand for imports. On the other hand, labor and imports are complementary with exports and domestic sales in the input-output structure. At the disaggregate (second) level, the price elasticities of the export and import components obtained indicate that both substitution and complementarity possibilities exist between them. Although these elasticities are interesting in their own right, they would be more usefully applied as inputs to a computable general equilibrium model.


Analysis of Image Processing Characteristics in Computed Radiography System by Virtual Digital Test Pattern Method (Virtual Digital Test Pattern Method를 이용한 CR 시스템의 영상처리 특성 분석)

  • Choi, In-Seok;Kim, Jung-Min;Oh, Hye-Kyong;Kim, You-Hyun;Lee, Ki-Sung;Jeong, Hoi-Woun;Choi, Seok-Yoon
    • Journal of Radiological Science and Technology / v.33 no.2 / pp.97-107 / 2010
  • The objective of this study is to figure out the undisclosed image processing methods of a commercial CR system. We implemented the processing curve of each look-up table (LUT) in the REGIUS 150 CR system by using a virtual digital test pattern method, and the characteristics of the dry imager were measured as well. First, we generated a virtual digital test pattern file with a binary file editor. This file was used as input data to the CR system (REGIUS 150 CR system, KONICA MINOLTA). The DICOM files automatically generated as output by the CR system were used to figure out the processing curves of each LUT mode (THX, ST, STM, LUM, BONE, LIN). The gradation curves of the dry imager were also measured to figure out the characteristics of the hard-copy image. From the results for each parameter, we identified the characteristics of the image processing parameters in the CR system. The processing curves measured by the proposed method showed the characteristics of the CR system, and we found that the dry imager is linear in the middle area of the processing curves. With these results, we found the relationships between the curves and each parameter: the G value is related to the slope, and the S value to the shift along the x-axis of the processing curves. In conclusion, the image processing methods of commercial CR systems differ from one another and are not disclosed. The proposed method, which uses a virtual digital test pattern, can measure the characteristics of the parameters of the image processing patterns in the CR system. We expect that the proposed method will be useful for inferring the image processing methods not only of this CR system, but also of other commercial CR systems.
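
A minimal sketch of the virtual-digital-test-pattern idea, under assumed file names and geometry (not the authors' actual procedure): write a ramp of known raw values to a binary file, and after the CR system processes it, pair each input value with its output value to recover the LUT curve.

```python
# Illustrative sketch of the virtual-test-pattern idea (not the vendor's workflow).
# File names, image geometry, and the commented-out use of pydicom to read the
# processed DICOM output are assumptions made for this example.
import numpy as np

WIDTH, HEIGHT, BITS = 4096, 256, 12            # hypothetical pattern geometry

def write_test_pattern(path="virtual_pattern.raw"):
    """Create a horizontal ramp that covers every 12-bit input value once per row."""
    ramp = np.arange(2**BITS, dtype=np.uint16)  # 0 .. 4095
    pattern = np.tile(ramp, (HEIGHT, 1))
    pattern.tofile(path)                        # raw 16-bit binary file
    return pattern

def estimate_lut(input_img, output_img, n_levels=2**BITS):
    """Average the processed output for every distinct input value (vectorized)."""
    flat_in = input_img.ravel().astype(np.int64)
    flat_out = output_img.ravel().astype(np.float64)
    sums = np.bincount(flat_in, weights=flat_out, minlength=n_levels)
    counts = np.bincount(flat_in, minlength=n_levels)
    with np.errstate(invalid="ignore"):
        return sums / counts                    # NaN for input values that never occur

if __name__ == "__main__":
    pattern = write_test_pattern()
    # After processing by the CR system, its DICOM output could be read with, e.g.:
    #   import pydicom
    #   output = pydicom.dcmread("processed_THX.dcm").pixel_array   # hypothetical name
    # For demonstration we apply a made-up gamma-like curve instead:
    output = (4095 * (pattern / 4095.0) ** 0.6).astype(np.uint16)
    lut = estimate_lut(pattern, output)
    print(f"LUT sample: 0 -> {lut[0]:.0f}, 2048 -> {lut[2048]:.0f}, 4095 -> {lut[4095]:.0f}")
```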