• Title/Summary/Keyword: Output error


Analysis of Growth Response by Non-destructive, Continuous Measurement of Fresh Weight in Leaf Lettuce 1. Effect of Nutrient Solution and Light Condition on the Growth of Leaf Lettuce (비파괴 연속 생체중 측정장치의 개발 및 이에 의한 상추의 생장반응 분석 l. 양액의 이온 농도 및 명ㆍ암 처리가 생장에 미치는 영향)

  • 남윤일;채제천
    • Journal of Bio-Environment Control
    • /
    • v.4 no.1
    • /
    • pp.50-58
    • /
    • 1995
  • These studies were carried out to develop a system for non-destructive and continuous measurement of fresh weight and to analyse the growth response of leaf lettuce under different nutrient solution and light conditions with this system. The developed measurement system consisted of four load cells and a microcomputer. The output from the system was highly positively correlated with the plant fresh weight above the surface of the hydroponic solution. The top fresh weight of a plant could be measured within an error of $\pm$1.0 g in the range of 0-2000 g. The top fresh weight of leaf lettuce increased 44-fold by the 18th day after transfer to the nutrient solution, and the maximum growth rate was observed on the 13th day after transfer. The growth rate was 10.7-29.6% per day during the 18 days. The optimum concentration of the nutrient solution for the growth of lettuce was an EC level of 1.4-2.2 mS/cm. When the light condition was changed from dark to light, the fresh weight temporarily decreased, whereas it increased under the opposite change. The top fresh weight of leaf lettuce in darkness normally increased within 12 hours after the darkness treatment, then increased slowly until 78 hours under continuous dark conditions; after that time, the fresh weight began to decrease. (An illustrative calibration sketch follows this entry.)

  • PDF
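
To illustrate the kind of calibration such a load-cell system relies on, here is a minimal sketch that fits a linear relation between a summed load-cell reading and known fresh weights. The variable names and calibration numbers are assumptions for illustration only, not values from the paper.

```python
import numpy as np

# Hypothetical calibration data: summed raw output of four load cells (arbitrary
# units) against reference top fresh weights in grams. Values are illustrative only.
raw_output = np.array([120.0, 480.0, 950.0, 1410.0, 1880.0])
fresh_weight_g = np.array([100.0, 500.0, 1000.0, 1500.0, 2000.0])

# Least-squares linear calibration: weight = a * raw + b
a, b = np.polyfit(raw_output, fresh_weight_g, deg=1)

def estimate_fresh_weight(raw):
    """Convert a raw load-cell reading into an estimated top fresh weight (g)."""
    return a * raw + b

residuals = fresh_weight_g - estimate_fresh_weight(raw_output)
print(f"calibration: weight = {a:.3f} * raw + {b:.1f}")
print(f"max absolute residual: {np.max(np.abs(residuals)):.2f} g")  # paper targets +/- 1.0 g
```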

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include the ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a tool for generating accurate predictions. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. However, since SVMs were originally devised for binary classification, they are not intrinsically geared for multiclass classification problems such as credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification. Hitherto, a variety of techniques to extend standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature, but only a few types of MSVM have been tested in prior studies that apply MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of these techniques to a real-world case of credit rating in Korea.
The best application is corporate bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another, and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
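
As a rough illustration of two of the multiclass strategies named above (One-Against-One and One-Against-All), the sketch below wraps a binary RBF SVM in scikit-learn's generic multiclass wrappers on synthetic data. The features and class counts are placeholders; the DAGSVM, ECOC, Weston-Watkins, and Crammer-Singer variants examined in the paper are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for financial ratios and five rating classes (illustrative only).
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base_svm = SVC(kernel="rbf", C=1.0, gamma="scale")
for name, clf in [("One-Against-One", OneVsOneClassifier(base_svm)),
                  ("One-Against-All", OneVsRestClassifier(base_svm))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```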

Beam Shaping by Independent Jaw Closure in Stereotactic Radiotherapy (정위방사선치료 시 독립턱 부분폐쇄를 이용하는 선량분포개선 방법)

  • Ahn Yong Chan;Cho Byung Chul;Choi Dong Rock;Kim Dae Yong;Huh Seung Jae;Oh Do Hoon;Bae Hoonsik;Yeo In Hwan;Ko Young Eun
    • Radiation Oncology Journal
    • /
    • v.18 no.2
    • /
    • pp.150-156
    • /
    • 2000
  • Purpose: Stereotactic radiation therapy (SRT) can deliver highly focused radiation to a small and spherical target lesion with a very high degree of mechanical accuracy. For non-spherical and large lesions, however, inclusion of the neighboring normal structures within the high-dose radiation volume is inevitable in SRT. This is a report on beam shaping using partial closure of the independent jaw in SRT, and on the verification of the dose calculation and dose display using home-made software. Materials and Methods: The authors adopted the idea of partially closing one or more independent collimator jaw(s), in addition to the circular collimator cones, to shield the neighboring normal structures while keeping the target lesion within the radiation beam field at all angles along the arc trajectory. The output factors (OF's) and the tissue-maximum ratios (TMR's) were measured using a micro ion chamber in a water phantom dosimetry system and were compared with the theoretical calculations. A film dosimetry procedure was performed to obtain the depth dose profiles at 5 cm, and they were also compared with the theoretical calculations, in which the radiation dose depends on the actual area of irradiation. The authors incorporated this algorithm into the home-made SRT software for isodose calculation and display, and tried it on an example case with a single brain metastasis. The dose-volume histograms (DVH's) of the planning target volume (PTV) and the normal brain derived from the control plan were compared with those derived from the plan using the same arc arrangement plus the independent collimator jaw closure. Results: When using a 5.0 cm diameter collimator, the measured OF's and TMR's with one independent jaw set at 30 mm (unblocked), 15.5 mm, 8.6 mm, and 0 mm from the central beam axis agreed with the theoretical calculations within 0.5% and 0.3% error ranges, respectively. The dose profiles at 5 cm depth obtained by film dosimetry also agreed very well with the theoretical calculations. The isodose profiles obtained with the home-made software demonstrated a slightly more conformal dose distribution around the target lesion when the independent jaw closure was used: the DVH's of the PTV were almost equivalent for the two plans, while the DVH's for the normal brain showed that less normal-brain volume received a high radiation dose with this modification than with the control plan employing the circular collimator cone only. Conclusions: With the beam shaping modification using independent jaw closure, the authors have realized wider clinical application of SRT with more conformal dose planning. The authors believe that SRT, with beam shaping ideas and efforts, should no longer be limited to small spherical lesions, but should be more widely applied to irregularly shaped tumors in the intracranial and head and neck regions. (An illustrative output-factor comparison follows this entry.)

  • PDF
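
The measured-versus-calculated comparison reported in the Results can be illustrated with a trivial percent-error check. The jaw positions echo those quoted above, but the output-factor values below are invented placeholders, not the paper's measurements.

```python
# Hypothetical measured vs. calculated output factors for one cone with the
# independent jaw at several positions (values are illustrative only).
jaw_positions_mm = [30.0, 15.5, 8.6, 0.0]
measured_of = [1.000, 0.985, 0.962, 0.901]
calculated_of = [1.000, 0.982, 0.960, 0.899]

for pos, m, c in zip(jaw_positions_mm, measured_of, calculated_of):
    pct_err = 100.0 * abs(m - c) / c
    # The paper reports agreement within roughly 0.5% for output factors.
    print(f"jaw at {pos:5.1f} mm: measured {m:.3f}, calculated {c:.3f}, error {pct_err:.2f}%")
```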

Building the Process for Reducing Whole Body Bone Scan Errors and its Effect (전신 뼈 스캔의 오류 감소를 위한 프로세스 구축과 적용 효과)

  • Kim, Dong Seok;Park, Jang Won;Choi, Jae Min;Shim, Dong Oh;Kim, Ho Seong;Lee, Yeong Hee
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.21 no.1
    • /
    • pp.76-82
    • /
    • 2017
  • Purpose: Whole body bone scan is one of the most frequently performed examinations in nuclear medicine. Basically, both the anterior and posterior views are acquired simultaneously. Occasionally, it is difficult to distinguish a lesion using only the anterior and posterior views. In such cases, accurate localization of the lesion through SPECT/CT or additional static scan images is important. Therefore, various improvement activities have been carried out in order to enhance the work capacity of technologists. In this study, we investigate the effect of technologist training and a standardized work process on bone scan error reduction. Materials and Methods: Several systems were introduced in sequence for the application of the new process. The first is the implementation of education and testing with physicians; the second is the classification of patients who are expected to require further scanning, introducing a pre-filtration system that allows technologists to check in advance; and finally, a communication system called NMQA is applied. From January 2014 to December 2016, we examined the whole body bone scan patients who visited the Department of Nuclear Medicine, Asan Medical Center, Seoul, Korea. Results: We investigated errors based on the Bone Scan NMQA sent from January 2014 to December 2016. The number of scans for which an NMQA was transmitted, relative to all bone scans during the survey period, was calculated as a percentage. The annual counts were 141 cases in 2014, 88 cases in 2015, and 86 cases in 2016, and the NMQA rate decreased from 0.88% in 2014 to 0.53% in 2015 and 0.45% in 2016. Conclusion: The incidence of NMQA has decreased since 2014, when the new process was applied. However, we believe it will be necessary to keep accumulating data, because the data are not yet sufficient to confirm the usefulness of the process statistically. This study confirmed the necessity of standardized work and education to improve the quality of bone scan images, and continued research and attention will be needed in the future. (The rate arithmetic is sketched after this entry.)

  • PDF
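
The yearly NMQA rates quoted above are simple proportions of flagged scans to all whole body bone scans. The sketch below shows that arithmetic and back-calculates approximate annual scan totals from the reported counts and rates; the totals are inferred, not stated in the paper.

```python
# Reported NMQA case counts and rates by year (from the abstract).
nmqa_cases = {2014: 141, 2015: 88, 2016: 86}
nmqa_rate_pct = {2014: 0.88, 2015: 0.53, 2016: 0.45}

for year in sorted(nmqa_cases):
    # rate = cases / total_scans * 100, so total_scans is roughly cases / (rate / 100)
    approx_total = nmqa_cases[year] / (nmqa_rate_pct[year] / 100.0)
    print(f"{year}: {nmqa_cases[year]} NMQA cases at {nmqa_rate_pct[year]}% "
          f"implies roughly {approx_total:,.0f} whole body bone scans")
```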

Estimation of TROPOMI-derived Ground-level SO2 Concentrations Using Machine Learning Over East Asia (기계학습을 활용한 동아시아 지역의 TROPOMI 기반 SO2 지상농도 추정)

  • Choi, Hyunyoung;Kang, Yoojin;Im, Jungho
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.2
    • /
    • pp.275-290
    • /
    • 2021
  • Sulfur dioxide (SO2) in the atmosphere is mainly generated from anthropogenic emission sources. It forms ultra-fine particulate matter through chemical reactions and has harmful effects on both the environment and human health. In particular, ground-level SO2 concentrations are closely related to human activities. Satellite observations such as TROPOMI (TROPOspheric Monitoring Instrument)-derived column density data can provide spatially continuous monitoring of ground-level SO2 concentrations. This study proposes a 2-step residual corrected model to estimate ground-level SO2 concentrations through the synergistic use of satellite data and numerical model output. Random forest machine learning was adopted in the 2-step residual corrected model. The proposed model was evaluated through three cross-validations (i.e., random, spatial, and temporal). The results showed that the model produced slopes of 1.14-1.25, R values of 0.55-0.65, and relative root-mean-square errors of 58-63%, improvements of 10% in slope and 3% in R and rRMSE compared to the model without residual correction. Model performance was slightly lower in Japan, where the sample size was small and concentration levels were relatively low, often resulting in overestimation. The spatial and temporal distributions of SO2 produced by the model agreed with those of the in-situ measurements, especially over the Yangtze River Delta in China and the Seoul Metropolitan Area in South Korea, which are highly dependent on the characteristics of anthropogenic emission sources. The model proposed in this study can be used for long-term monitoring of ground-level SO2 concentrations in both the spatial and temporal domains.
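
A minimal sketch of a two-step residual-correction scheme built from random forests, in the spirit of the model described above: a base forest predicts the target, and a second forest is trained on the out-of-bag residuals of the first. The predictor arrays are synthetic placeholders; the paper's actual TROPOMI, meteorological, and numerical-model inputs and its tuning are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for predictors (e.g., satellite column density, meteorology)
# and ground-level SO2 concentrations; values are illustrative only.
X = rng.normal(size=(2000, 8))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.3 * rng.normal(size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: base random forest predicting the ground-level concentration.
base = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
base.fit(X_tr, y_tr)

# Step 2: second forest trained on the residuals; out-of-bag residuals are used so
# the correction is not fitted to predictions the base model has already memorized.
residuals = y_tr - base.oob_prediction_
corrector = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_tr, residuals)

pred = base.predict(X_te) + corrector.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"residual-corrected RMSE: {rmse:.3f}")
```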

K-DEV: A Borehole Deviation Logging Probe Applicable to Steel-cased Holes (철재 케이싱이 설치된 시추공에서도 적용가능한 공곡검층기 K-DEV)

  • Yoonho, Song;Yeonguk, Jo;Seungdo, Kim;Tae Jong, Lee;Myungsun, Kim;In-Hwa, Park;Heuisoon, Lee
    • Geophysics and Geophysical Exploration
    • /
    • v.25 no.4
    • /
    • pp.167-176
    • /
    • 2022
  • We designed a borehole deviation survey tool applicable to steel-cased holes, K-DEV, and developed a prototype for a depth of 500 m, aiming at the development of our own equipment required to secure deep subsurface characterization technologies. K-DEV is equipped with sensors that provide digital output with verified high performance, and it is compatible with the logging winch systems used in Korea. The K-DEV prototype has a nonmagnetic stainless steel housing with an outer diameter of 48.3 mm, which has been tested in the laboratory for water resistance up to 20 MPa and for durability by running it into a 1-km-deep borehole. We confirmed the operational stability and data repeatability of the prototype by continuously logging up and down to a depth of 600 m. A high-precision micro-electro-mechanical system (MEMS) gyroscope was used as the gyro sensor of the K-DEV prototype, which is crucial for azimuth determination in cased holes. Additionally, we devised an accurate trajectory survey algorithm employing unscented Kalman filtering and data fusion for optimization. A borehole test with K-DEV and a commercial logging tool produced sufficiently similar results. Furthermore, the issue of error accumulation due to drift of the MEMS gyro over time was successfully overcome by compensating with stationary measurements taken in the same attitude at the wellhead before and after logging, as demonstrated by results nearly identical to those obtained in the open hole. We believe that the K-DEV development methodology, as well as the operational stability and data reliability of the prototype, were confirmed through these test applications.
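
The drift compensation described above, using stationary wellhead measurements in the same attitude before and after logging, can be sketched as a correction that is linear in time. Everything below (function name, timestamps, angles) is a hypothetical illustration, not the K-DEV processing code.

```python
import numpy as np

def remove_linear_drift(times_s, azimuth_deg, t_pre_s, az_pre_deg, t_post_s, az_post_deg):
    """Subtract a drift assumed linear in time, anchored on two stationary
    wellhead measurements taken in the same attitude before and after logging."""
    drift_rate = (az_post_deg - az_pre_deg) / (t_post_s - t_pre_s)  # deg per second
    return np.asarray(azimuth_deg) - drift_rate * (np.asarray(times_s) - t_pre_s)

# Illustrative example: a 0.6 deg apparent change between the pre- and post-survey
# stationary checks is spread back over a one-hour logging run.
times = np.linspace(0.0, 3600.0, 5)                      # log timestamps (s)
raw_azimuth = np.array([10.0, 10.2, 10.3, 10.5, 10.6])   # gyro-derived azimuth (deg)
corrected = remove_linear_drift(times, raw_azimuth, 0.0, 10.0, 3600.0, 10.6)
print(np.round(corrected, 3))
```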

Estimation for Ground Air Temperature Using GEO-KOMPSAT-2A and Deep Neural Network (심층신경망과 천리안위성 2A호를 활용한 지상기온 추정에 관한 연구)

  • Taeyoon Eom;Kwangnyun Kim;Yonghan Jo;Keunyong Song;Yunjeong Lee;Yun Gon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.207-221
    • /
    • 2023
  • This study suggests deep neural network models for estimating air temperature from Level 1B (L1B) datasets of GEO-KOMPSAT-2A (GK-2A). The temperature at 1.5 m above the ground impacts not only daily life but also weather warnings such as cold and heat waves. Many studies have estimated the air temperature from the land surface temperature (LST) retrieved from satellites, because the air temperature has a strong relationship with the LST. However, the LST algorithm, a Level 2 output of GK-2A, works only for clear-sky pixels. To overcome cloud effects, we apply a deep neural network (DNN) model that estimates the air temperature from L1B data, which are radiometrically and geometrically calibrated from the raw satellite data, and compare it with a linear regression model between LST and air temperature. The root mean square error (RMSE) of the air temperature for the model outputs is used to evaluate the models. The number of in-situ air temperature records from 95 sites was 2,496,634, and the ratios of records paired with LST and with L1B were 42.1% and 98.4%, respectively. The training years are 2020 and 2021, and 2022 is used for validation. The DNN model is designed with an input layer taking 16 channels and four hidden fully connected layers to estimate the air temperature. Using the 16 bands of L1B, the DNN (RMSE 2.22℃) performed better than the baseline model (RMSE 3.55℃) under clear-sky conditions, and the total RMSE including overcast samples was 3.33℃. This suggests that the DNN is able to overcome cloud effects. However, it showed different characteristics in the seasonal and hourly analyses, and solar information needs to be appended as input to make a general DNN model, because the summer and winter seasons showed low coefficients of determination with high standard deviations.
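
A minimal sketch of a fully connected network with a 16-channel input and four hidden layers, echoing the architecture described above, using scikit-learn on synthetic data. The layer widths, training settings, and data are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Synthetic stand-in for 16 GK-2A L1B channels and matched in-situ air temperature (deg C).
X = rng.normal(size=(5000, 16))
y = 15.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=1.0, size=5000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Four hidden fully connected layers, mirroring the architecture described above;
# the widths (64 units each) are an assumption for illustration.
dnn = MLPRegressor(hidden_layer_sizes=(64, 64, 64, 64), max_iter=1000, random_state=0)
dnn.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, dnn.predict(X_te)) ** 0.5
print(f"validation RMSE: {rmse:.2f} deg C")
```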

Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin;Kang, Yujin;Song, Jinwoo;Kim, Juhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.151-164
    • /
    • 2024
  • In water treatment plants supplying potable water, the management of chlorine concentration in water treatment processes involving pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the prediction of residual chlorine concentration downstream of sedimentation basins in water treatment processes. The AI-based model, which learns from past water quality observation data to predict future water quality, offers a simpler and more efficient approach compared to complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at Plant, using multiple regression models and AI-based models like Random Forest and LSTM, and the results were compared. For optimal prediction of residual chlorine concentration, the input-output structure of the AI model included the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, inflow of raw water, alkalinity, NH3, etc. as independent variables, and the desired residual chlorine concentration of the effluent from the sedimentation basin as the dependent variable. The independent variables were selected from observable data at the water treatment plant, which are influential on the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that, for Plant, the model based on Random Forest had the lowest error compared to multiple regression models, neural network models, model trees, and other Random Forest models. The optimal predicted residual chlorine concentration downstream of the sedimentation basin presented in this study is expected to enable real-time control of chlorine dosing in previous treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
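
A sketch of the input-output structure described above, with a random forest mapping the listed upstream water-quality variables to the downstream residual chlorine concentration. The column names echo the abstract, but the data and model settings are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 3000
# Hypothetical observations; the column list mirrors the independent variables
# named in the abstract, while the values themselves are synthetic.
df = pd.DataFrame({
    "cl_upstream": rng.uniform(0.5, 1.5, n),    # residual chlorine upstream (mg/L)
    "turbidity": rng.uniform(0.1, 5.0, n),
    "ph": rng.uniform(6.5, 8.5, n),
    "water_temp": rng.uniform(2.0, 28.0, n),
    "conductivity": rng.uniform(100, 400, n),
    "raw_inflow": rng.uniform(1000, 5000, n),
    "alkalinity": rng.uniform(20, 80, n),
    "nh3": rng.uniform(0.0, 0.5, n),
})
# Synthetic dependent variable: residual chlorine downstream of the sedimentation basin.
df["cl_downstream"] = (0.8 * df["cl_upstream"] - 0.01 * df["water_temp"]
                       + 0.02 * df["nh3"] + rng.normal(scale=0.02, size=n))

X = df.drop(columns="cl_downstream")
y = df["cl_downstream"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("MAE (mg/L):", round(mean_absolute_error(y_te, rf.predict(X_te)), 4))
```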

A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun;Lee, Su-Dong;Kim, Ju-Young
    • Journal of Distribution Research
    • /
    • v.17 no.1
    • /
    • pp.37-63
    • /
    • 2012
  • In franchise business, exclusive sales territory (sometimes EST in tables) protection is a very important issue from economic, social, and political points of view. It affects the growth and survival of both franchisor and franchisee and often raises issues of social and political conflict. When franchisees are not familiar with the related laws and regulations, franchisors have a high chance of exploiting this. Exclusive sales territory protection by the manufacturer and distributors (wholesalers or retailers) means a sales-area restriction by which only certain distributors have the right to sell products or services. A distributor that has been granted an exclusive sales territory can protect its own territory, but may be prohibited from entering other regions. Even though exclusive sales territory is quite a critical problem in franchise business, there is not much rigorous research about its reasons, results, evaluation, and future direction based on empirical data. This paper tries to address the problem not only in terms of logical and nomological validity, but also through empirical validation. In pursuing an empirical analysis, we take into account the difficulties of real data collection and of statistical analysis techniques. We use a set of disclosure document data collected by the Korea Fair Trade Commission, instead of the conventional survey method, which is usually criticized for its measurement error. Existing theories about exclusive sales territory can be summarized into two groups, as shown in the table below. The first concerns the effectiveness of exclusive sales territory from both the franchisor and franchisee points of view. In fact, the outcome of exclusive sales territory can be positive for franchisors but negative for franchisees; it can also be positive in terms of sales but negative in terms of profit. Therefore, variables and viewpoints should be set properly. The other group concerns the motives or reasons why exclusive sales territory is protected. The reasons can be classified into four groups: industry characteristics, franchise system characteristics, capability to maintain exclusive sales territory, and strategic decision. Within the four groups of reasons, there are more specific variables and theories, as below. Based on these theories, we develop nine hypotheses, which are briefly shown in the last table below together with the results. In order to validate the hypotheses, data were collected from the government (FTC) homepage, which is openly available. The sample consists of 1,896 franchisors and contains about three years of operating data, from 2006 to 2008. Within the sample, 627 have an exclusive sales territory protection policy, and those with such a policy are not evenly distributed over the 19 representative industries. Additional data were also collected from other government agency homepages, such as Statistics Korea. We also combined data from various secondary sources to create meaningful variables, as shown in the table below. All variables are dichotomized by mean or median split if they are not inherently dichotomous by definition, since each hypothesis is composed of multiple variables and there is no solid statistical technique that incorporates all these conditions to test the hypotheses. This paper uses a simple chi-square test because the hypotheses and theories are built upon quite specific conditions such as industry type, economic condition, company history, and various strategic purposes.
It is almost impossible to find samples that satisfy all of these conditions, and they cannot be manipulated in experimental settings. More advanced statistical techniques work very well on clean data without exogenous variables, but not on real, complex data. The chi-square test is applied by grouping the samples into four cells using two criteria: whether they use exclusive sales territory protection or not, and whether they satisfy the conditions of each hypothesis. The test then checks whether the proportion of sample franchisors that satisfy the conditions and protect exclusive sales territories significantly exceeds the proportion that satisfy the conditions but do not protect them. In fact, the chi-square test is equivalent to a Poisson regression, which allows more flexible application. As a result, only three hypotheses are accepted. When risk attitude is high, so that the royalty fee is determined according to sales performance, EST protection produces poor results, as expected. When the franchisor protects ESTs in order to recruit franchisees more easily, EST protection produces better results. Also, when EST protection is intended to improve the efficiency of the franchise system as a whole, it shows better performance: high efficiency is achieved as ESTs prohibit free riding by franchisees who exploit others' marketing efforts, encourage proper investment, and distribute franchisees evenly across multiple regions. The other hypotheses are not supported by the significance tests. Exclusive sales territories should be protected from proper motives and administered for mutual benefit. Legal restrictions driven by government agencies such as the FTC could be misused and cause misunderstandings, so more careful monitoring of real practices and more rigorous studies by both academics and practitioners are needed. (A chi-square test sketch follows this entry.)

  • PDF
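
The 2x2 grouping and chi-square test described above can be sketched as follows. The contingency counts are invented placeholders (only the total of 1,896 franchisors and the 627 with protection echo the abstract), not the paper's actual cross-tabulation.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = satisfies the hypothesis condition (yes/no),
# columns = protects an exclusive sales territory (yes/no). Counts are illustrative;
# the column totals (627 protecting, 1,896 overall) echo the abstract.
table = np.array([[210, 140],
                  [417, 1129]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value suggests the proportion protecting exclusive territories differs
# between franchisors that satisfy the condition and those that do not.
```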

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been realized in the information society, the usage and collection of information have become important. A facial expression, like an artistic painting, contains a wealth of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with an intelligent service that enables the perception of human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed human emotion prediction models and has applied its studies to commercial business. In the academic area, a number of conventional methods such as Multiple Regression Analysis (MRA) or Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy. This is inevitable since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies like Jung and Kim (2012) have used ANN as an alternative, and they reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized due to overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimulating contents and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the '${\varepsilon}$-insensitive loss function' and the 'grid search' technique to find the optimal values of the parameters C, d, ${\sigma}^2$, and ${\varepsilon}$. In the case of ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set.
ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers or practitioners who are willing to build models for recognizing human emotions.
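
A minimal sketch of epsilon-insensitive SVR with a grid search over C, the RBF kernel width, and epsilon, in the spirit of the procedure described above. The 297-row synthetic feature matrix and the parameter grid are assumptions, not the study's data or search space.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Synthetic stand-in for facial-feature inputs and an arousal/valence target.
X = rng.normal(size=(297, 12))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.2, size=297)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [0.01, 0.1, 1.0],     # RBF kernel width (related to sigma^2)
    "epsilon": [0.01, 0.1, 0.5],   # width of the epsilon-insensitive tube
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_absolute_error", cv=5)
search.fit(X_tr, y_tr)

print("best parameters:", search.best_params_)
print("hold-out MAE:",
      round(mean_absolute_error(y_te, search.best_estimator_.predict(X_te)), 4))
```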