• Title/Summary/Keyword: Linearity Error

Search Results: 288

Shipping Industry Support Plan based on Research of Factors Affecting on the Freight Rate of Bulk Carriers by Sizes (부정기선 운임변동성 영향 요인 분석에 따른 우리나라 해운정책 지원 방안)

  • Cheon, Min-Soo;Mun, Ae-ri;Kim, Seog-Soo
    • Journal of Korea Port Economic Association
    • /
    • v.36 no.4
    • /
    • pp.17-30
    • /
    • 2020
  • In the shipping industry, it is essential to predict freight rate volatility preemptively through market monitoring; once freight rates have started to fall, the losses of shipping companies can quickly become uncontrollable. Therefore, in this study, the factors affecting the freight rates of bulk carriers, whose freight rate volatility is relatively large compared to container freight rates, were quantified and analyzed, with the intention of contributing to future shipping market monitoring. We performed an analysis using a vector error correction model and estimated the influence of six independent variables on the charter rates of bulk carriers in the Handysize, Supramax, Panamax, and Capesize classes. The six independent variables included the bulk carrier fleet volume, iron ore traffic volume, Libor interest rate, bunker oil price, and Euro-Dollar exchange rate. The dependent variables were Handysize (32,000 DWT) spot charter rates, Supramax 6 T/C average charter rates, Panamax (75,000 DWT) spot charter rates, and Capesize (170,000 DWT) spot charter rates. The study examined charter rates by bulk carrier size, in contrast to existing studies on specific ship types or on freight rates for oil tankers and chemical carriers other than bulk carriers. Findings revealed that the influencing factors differed for each ship size. The Libor interest rate had a significant effect on all four ship types, showing a negative (-) relationship with Handysize, Supramax, Panamax, and Capesize. Iron ore traffic volume had a significant effect on the three ship types other than Panamax. The influencing factors thus differed according to the size and characteristics of the ships. These findings are expected to contribute to the establishment of management strategies for shipping companies by analyzing the factors influencing changes in charter rates, which have a profound effect on shipping companies' management performance.
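
A minimal sketch of the kind of vector error correction estimation described above, using synthetic monthly series rather than the authors' data; the variable names, lag order, and deterministic terms are assumptions for illustration only.

```python
# Illustrative VECM sketch (statsmodels), not the study's actual data or specification.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(0)
n = 180  # e.g. 15 years of hypothetical monthly observations
trend = np.cumsum(rng.normal(size=n))
data = pd.DataFrame({
    "charter_rate":     trend + rng.normal(scale=0.5, size=n),
    "fleet_volume":     0.8 * trend + rng.normal(scale=0.5, size=n),
    "iron_ore_traffic": 0.5 * trend + rng.normal(scale=0.5, size=n),
    "libor":            rng.normal(size=n).cumsum(),
    "bunker_price":     rng.normal(size=n).cumsum(),
    "eur_usd":          rng.normal(size=n).cumsum(),
})

# Choose the cointegration rank with Johansen's trace test, then fit the VECM.
rank = select_coint_rank(data, det_order=0, k_ar_diff=2, signif=0.05)
model = VECM(data, k_ar_diff=2, coint_rank=max(rank.rank, 1), deterministic="ci")
res = model.fit()
print(res.summary())  # long-run (beta) and adjustment (alpha) estimates
```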

Multi-residue Pesticide Analysis in Cereal using Modified QuEChERS Sample Preparation Method (곡물류 중 잔류농약 다성분 분석을 위한 개선된 QuEChERS 시료 정제법의 개발)

  • Yang, In-Cheol;Hong, Su-Myeong;Kwon, Hye-Young;Kim, Taek-Kyum;Kim, Doo-Ho
    • The Korean Journal of Pesticide Science
    • /
    • v.17 no.4
    • /
    • pp.314-334
    • /
    • 2013
  • This study explored an efficient modified Quick, Easy, Cheap, Effective, Rugged and Safe (QuEChERS) method combined with liquid chromatography-electrospray ionization tandem mass spectrometric detection for the analysis of residues of 76 pesticides, including acidic sulfonylurea herbicides, in brown rice, barley and corn. Acetonitrile containing 1% formic acid and dispersive solid-phase extraction were used for extraction of the pesticides and clean-up of the extract, respectively. Recovery tests were performed at two fortification levels, 50 and 200 ng $g^{-1}$. Mean recoveries of the majority of pesticides at the two spike levels ranged from 73.2 to 132.2%, 80.9 to 136.8%, and 66.6 to 143.5% for brown rice, barley and corn, respectively, with coefficients of variation (CV) of less than 10%. Good linearity of the calibration curves was achieved, with $R^2$ > 0.9907 within the observed concentration ranges. The modified method also provided satisfactory results for sulfonylurea herbicides. The method was applied to the determination of residues of the target pesticides in real samples. A total of 26 pesticides were detected in 36 out of 98 tested samples. The highest concentration was observed for tricyclazole, at 1.17 mg $kg^{-1}$ in brown rice; in two brown rice samples this pesticide exceeded the MRL regulated for rice in the Republic of Korea. Apart from tricyclazole, none of the detected pesticides exceeded its MRL. The results reveal that the method is effectively applicable to routine analysis of residues of the target pesticides in brown rice, barley and corn.
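
A brief sketch of the two quality checks reported above, calibration-curve linearity ($R^2$) and spike recovery with CV; the concentrations and responses below are made up for illustration, not taken from the paper.

```python
# Hypothetical calibration and recovery data; shows how R^2, recovery %, and CV are computed.
import numpy as np

# Matrix-matched calibration standards (ng/g) and instrument responses (peak area)
conc = np.array([10, 25, 50, 100, 200, 400], dtype=float)
response = np.array([1520, 3790, 7610, 15180, 30420, 60550], dtype=float)

slope, intercept = np.polyfit(conc, response, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((response - pred) ** 2) / np.sum((response - response.mean()) ** 2)
print(f"calibration: y = {slope:.1f}x + {intercept:.1f}, R^2 = {r2:.4f}")

# Recovery test at one fortification level (e.g. 50 ng/g, five replicates)
spiked_level = 50.0
measured = np.array([47.2, 51.8, 49.5, 52.3, 48.1])
recovery = measured / spiked_level * 100
cv = measured.std(ddof=1) / measured.mean() * 100
print(f"mean recovery = {recovery.mean():.1f}%, CV = {cv:.1f}%")
```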

Determination of secondary aliphatic amines in surface and tap waters as benzenesulfonamide derivatives using GC-MS (Benzenesulfonamide 유도체로 GC-MS를 사용한 지표수 및 수돗물 중 2차 지방족 아민의 분석)

  • Park, Sunyoung;Jung, Sungjin;Kim, Yunjeong;Kim, Hekap
    • Analytical Science and Technology
    • /
    • v.31 no.2
    • /
    • pp.96-105
    • /
    • 2018
  • This study aimed to improve a method for detecting eight secondary aliphatic amines (SAAs) and to measure their concentrations in surface water and tap water samples. NaOH (8 mL, 10 M) and benzenesulfonyl chloride (2 mL) were added to a water sample (200 mL), and the mixture was stirred at $80^{\circ}C$ for 30 min. Additional NaOH solution (10 mL) was added and stirring was continued for another 30 min. The pH of the cooled mixture was adjusted to 5.5-6.0 by adding HCl (35 %), and the SAAs were extracted with dichloromethane (50 mL); this extraction was repeated once. The extract was then washed with $NaHCO_3$ (15 mL, 0.05 M), dried over $Na_2SO_4$ (4 g), and finally concentrated to 0.1 mL, of which $1{\mu}L$ was analyzed for SAAs by GC-MS. The linearity of the spiked calibration curves was high ($r^2=0.9969-0.9996$). The detection limits of the method ranged from 0.01 to $0.20{\mu}g/L$, and its repeatability and reproducibility (expressed as relative standard deviation) were both less than 10 % (6.6-9.4 %). Its accuracy (expressed as percentage error) ranged between 2.4 % and 6.1 %. The established method was applied to the analysis of five surface water and 82 tap water samples. Dimethylamine was the only SAA detected in all the water samples, and its average concentration was $0.79{\mu}g/L$ (range: $0.20-2.54{\mu}g/L$). This study thus improved the analytical method for SAAs in surface water and tap water, and regional and seasonal concentration distributions were obtained.
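
The abstract does not state how the detection limits were derived; the sketch below shows one common approach (a US EPA-style replicate-spike calculation) together with repeatability as RSD, using hypothetical low-level dimethylamine replicates.

```python
# Hypothetical replicate data; illustrates an MDL (t * s) and RSD calculation, not the paper's exact procedure.
import numpy as np
from scipy import stats

# Seven replicate analyses of a low-level dimethylamine spike (ug/L)
replicates = np.array([0.21, 0.19, 0.22, 0.20, 0.18, 0.21, 0.20])

s = replicates.std(ddof=1)
t99 = stats.t.ppf(0.99, df=len(replicates) - 1)   # Student's t at 99%, n-1 df
mdl = t99 * s                                      # method detection limit estimate
rsd = s / replicates.mean() * 100                  # repeatability as relative standard deviation

print(f"MDL ~ {mdl:.3f} ug/L, RSD = {rsd:.1f}%")
```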

The study of quantitative analytical method for pH and moisture of Hanji record paper using non-destructive FT-NIR spectroscopy (비파괴 분석 방법인 푸리에 변환 근적외선 분광 분석을 이용한 한지 기록물의 산성도 및 함수율 정량 분석 연구)

  • Shin, Yong-Min;Park, Soung-Be;Lee, Chang-Yong;Kim, Chan-Bong;Lee, Seong-Uk;Cho, Won-Bo;Kim, Hyo-Jin
    • Analytical Science and Technology
    • /
    • v.25 no.2
    • /
    • pp.121-126
    • /
    • 2012
  • It is essential to evaluate the quality of Hanji record paper without the damage caused by conventional destructive methods. The samples were Hanji record papers produced in the 1900s. A near-infrared (NIR) spectrometer was used as a non-destructive tool for evaluating the quality of the record papers. A Fourier transform (FT) spectrometer covering the 12,500 to 4,000 $cm^{-1}$ wavenumber range was used for the quantitative analysis, providing high accuracy and a good signal-to-noise ratio. The acidity and moisture content of the Hanji record paper were measured with an integrating sphere in diffuse reflectance mode. The acidity (pH), a chemical quality factor of Hanji, was correlated with the NIR spectra. The NIR spectra were pretreated to obtain the best correlation; multiplicative scatter correction (MSC) and the Savitzky-Golay first derivative were used as pretreatment methods, and the calibration models were built by partial least squares regression (PLSR). The coefficient of determination ($R^2$) for acidity was 0.92 for the NIR spectra without pretreatment, with a standard error of prediction (SEP) of 0.24 for pH; with pretreatment, the model improved to $R^2$ = 0.98 with an SEP of 0.19. For moisture content, the linearity without pretreatment was higher than with pretreatment (MSC, $1^{st}$ derivative); the best model gave an $R^2$ of 0.99 and an SEP of 0.45. These results indicate that an FT-NIR analyzer with an integrating sphere is well suited for rapid, non-destructive quality evaluation of Hanji record papers.
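
A sketch of the chemometric workflow described above (MSC, Savitzky-Golay first derivative, then PLSR with cross-validated $R^2$ and SEP), run on synthetic spectra rather than the authors' Hanji data; the component count and window settings are assumptions.

```python
# Synthetic NIR-like spectra; MSC is implemented by regressing each spectrum on the mean spectrum.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 60, 500
X = rng.normal(size=(n_samples, n_wavenumbers)).cumsum(axis=1)    # fake spectra
y = X[:, 120] * 0.01 + 5 + rng.normal(scale=0.1, size=n_samples)  # fake pH values

def msc(spectra):
    """Multiplicative scatter correction against the mean spectrum."""
    ref = spectra.mean(axis=0)
    out = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        b, a = np.polyfit(ref, s, 1)   # s ~ a + b * ref
        out[i] = (s - a) / b
    return out

X_pre = savgol_filter(msc(X), window_length=11, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=6)
y_cv = cross_val_predict(pls, X_pre, y, cv=10).ravel()
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
sep = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"R^2 = {r2:.3f}, SEP = {sep:.3f}")
```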

Performance evaluation of hyperspectral bathymetry method for morphological mapping in a large river confluence (초분광수심법 기반 대하천 합류부 하상측정 성능 평가)

  • Kim, Dongsu;Seo, Youngcheol;You, Hojun;Gwon, Yeonghwa
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.3
    • /
    • pp.195-210
    • /
    • 2023
  • Additional deposition and erosion in the large rivers of South Korea have continued toward morphological stabilization after the massive dredging of the Four Major Rivers Restoration Project, which requires precise bathymetry monitoring. The hyperspectral bathymetry method has increasingly been highlighted as an alternative for estimating bathymetry at high spatial resolution in shallow water, replacing classical intrusive direct measurement techniques. This study introduced the conventional Optimal Band Ratio Analysis (OBRA) of the hyperspectral bathymetry method and evaluated its performance in a large domestic river under normal turbidity and flow conditions. The maximum measurable depth was estimated from the correlation coefficient and root mean square error (RMSE) produced during OBRA while cut-off depths were applied in cascade, and the resulting hyperspectral bathymetry map excluded regions deeper than the derived maximum measurable depth. Non-linearity was also considered in building the relation between the optimal band ratio and depth. We applied the method to the confluence of the Nakdong and Hwang Rivers as a large-river case and obtained the following results. First, the hyperspectral method showed acceptable performance in morphological mapping of shallow regions, with maximum measurable depths of 2.5 m and 1.25 m in the Nakdong and Hwang Rivers, respectively. Second, RMSE was more suitable than the conventional correlation coefficient for deriving the maximum measurable depth, considering various scenarios of excluding ranges of in situ depths from OBRA. Third, the highly turbid region of the Hwang River did not permit hyperspectral bathymetry mapping as well as the adjacent Nakdong River did, with the maximum measurable depth in the Hwang River dropping to half.
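
A minimal OBRA-style sketch inferred from the description above (not the authors' code): search all band pairs for the log band ratio that best explains in situ depth, then report $R^2$ and RMSE for the chosen pair. The reflectance and depth data are synthetic.

```python
# Brute-force optimal band ratio search on synthetic hyperspectral reflectance and depths.
import numpy as np

rng = np.random.default_rng(2)
n_points, n_bands = 300, 40
reflectance = rng.uniform(0.02, 0.4, size=(n_points, n_bands))
depth = rng.uniform(0.2, 3.0, size=n_points)
# Give two bands a depth-dependent signal so the search has something to find
reflectance[:, 10] *= np.exp(-0.6 * depth)
reflectance[:, 25] *= np.exp(-0.1 * depth)

best = None
for i in range(n_bands):
    for j in range(n_bands):
        if i == j:
            continue
        x = np.log(reflectance[:, i] / reflectance[:, j])   # candidate band ratio
        slope, intercept = np.polyfit(x, depth, 1)
        pred = slope * x + intercept
        r2 = 1 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
        rmse = np.sqrt(np.mean((depth - pred) ** 2))
        if best is None or r2 > best[0]:
            best = (r2, rmse, i, j)

r2, rmse, i, j = best
print(f"best band pair: ({i}, {j}), R^2 = {r2:.3f}, RMSE = {rmse:.2f} m")
```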

Development of control system for complex microbial incubator (복합 미생물 배양기의 제어시스템 개발)

  • Hong-Jik Kim;Won-Bog Lee;Seung-Ho Lee
    • Journal of IKEEE
    • /
    • v.27 no.1
    • /
    • pp.122-126
    • /
    • 2023
  • In this paper, a control system for a complex microbial incubator is proposed. The proposed system consists of a control unit, a communication unit, a power supply unit, and the control logic of the incubator itself. The controller converts analog and digital signals and handles the control signals of devices such as LCD panel displays, water level sensors, temperature sensors, and pH sensors. The water level sensor uses an IR laser method with excellent linearity to enable accurate measurement, solving the problem that existing water level sensors are disturbed by foreign substances such as bubbles. The temperature sensor measures on the thermal resistance principle, giving high accuracy without cumulative resistance error. The communication unit consists of two LAN ports and one RS-232 port and transmits the signals of the LCD panel, PCT panel, and load cell controller used in the incubator to the control unit. The power supply unit provides three voltage rails, 24 V, 12 V, and 5 V, so that the control and communication units can operate smoothly. The control logic uses a PLC to process the values of the pH, temperature, and water level sensors and to operate the circulation pump, circulation valve, rotary pump, and inverter load cell used for cultivation. To evaluate the performance of the proposed control system, tests conducted by an accredited certification body showed a water level measurement sensitivity range of -0.41 mm to 1.59 mm and a water temperature variation of ±0.41 ℃, confirming that the system performs better than currently commercialized products. Therefore, the effectiveness of the proposed control system for the complex microbial incubator was demonstrated.
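
For illustration only, a Python sketch of the kind of supervisory logic described above; the actual system runs on a PLC, and the setpoints, sensor names, and hysteresis bands here are hypothetical.

```python
# Hypothetical one-cycle supervisory control step for temperature, pH, and water level.
from dataclasses import dataclass

@dataclass
class Setpoints:
    temp_c: float = 37.0       # target culture temperature (assumed)
    temp_band: float = 0.5     # hysteresis band (deg C)
    ph_min: float = 6.5
    ph_max: float = 7.5
    level_min_mm: float = 150  # minimum water level before refill

def control_step(temp_c, ph, level_mm, sp=Setpoints()):
    """Return on/off commands for heater, dosing pumps, and feed valve."""
    return {
        "heater_on": temp_c < sp.temp_c - sp.temp_band,
        "cooler_on": temp_c > sp.temp_c + sp.temp_band,
        "dose_base": ph < sp.ph_min,
        "dose_acid": ph > sp.ph_max,
        "feed_valve_open": level_mm < sp.level_min_mm,
    }

# Example: one control cycle with sampled sensor values
print(control_step(temp_c=36.2, ph=6.3, level_mm=142))
```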

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.19-32
    • /
    • 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivative products and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that can enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded the concept of volatility asymmetry, documented widely in the literature, into our model. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world; its trading volume is more than 10 million contracts a day, the highest of all stock index option markets. Therefore, analyzing the VKOSPI is important for understanding the volatility inherent in option prices and can provide trading ideas for futures and option dealers. Using the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric GARCH model is the basic GARCH(1,1). Tomorrow's forecasted value and the change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if the VKOSPI is expected to rise tomorrow, a long straddle or strangle position is established, and a short straddle or strangle position is taken if the VKOSPI is expected to fall. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit; otherwise it is subtracted. For the in-sample period, the power ARCH model fits best on the statistical metric, mean squared prediction error (MSPE), and the exponential GARCH model shows the highest mean correct prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of predicting the direction of tomorrow's VKOSPI change. Generally, the power ARCH model shows the best fit for the VKOSPI. All the GARCH models provide trading profits for the volatility trading system, and the exponential GARCH model shows the best performance during the in-sample period, with an annual profit of 197.56%. The GARCH models also produce trading profits during the out-of-sample period, except for the exponential GARCH model; in that period, the power ARCH model shows the largest annual trading profit, at 38%. The volatility clustering and asymmetry found in this research are reflections of volatility non-linearity. This further suggests that combining the asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
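
A hedged sketch of the forecasting step outlined above, using the `arch` package on synthetic daily returns rather than KOSPI 200 data: fit a symmetric GARCH(1,1) and an asymmetric GJR-GARCH, then turn a one-step-ahead volatility forecast into a straddle/strangle direction signal as the abstract describes.

```python
# Synthetic returns; GARCH(1,1) and GJR-GARCH fitting plus a one-step-ahead forecast.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(3)
returns = rng.standard_t(df=6, size=1000)  # synthetic percentage returns

garch = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")           # symmetric GARCH(1,1)
gjr = arch_model(returns, vol="GARCH", p=1, o=1, q=1).fit(disp="off")        # asymmetric GJR-GARCH

fc = gjr.forecast(horizon=1)
sigma_tomorrow = float(np.sqrt(fc.variance.values[-1, 0]))
sigma_today = float(gjr.conditional_volatility[-1])

# Trading rule from the abstract: long straddle/strangle if volatility is expected
# to rise, short straddle/strangle if it is expected to fall.
signal = "long straddle/strangle" if sigma_tomorrow > sigma_today else "short straddle/strangle"
print(f"forecast sigma = {sigma_tomorrow:.3f}, current sigma = {sigma_today:.3f} -> {signal}")
```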

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur, and SVM does not require many training samples since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems; methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM achieves in binary-class classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, a key difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class; such data sets often lead to a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations through iterations, so that observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly; in this way, it can reinforce the training of misclassified observations in the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) algorithm to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering geometric mean-based accuracy and errors across the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation, repeated three times with different random seeds, is performed to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, results are obtained for each classifier on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
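
A sketch of the comparison baseline implied above: AdaBoost with SVM base learners on an imbalanced multi-class dataset, evaluated with both arithmetic accuracy and the geometric mean of per-class recalls. MGM-Boost itself is not reproduced here, and the dataset, class weights, and hyperparameters are hypothetical.

```python
# AdaBoost over SVM base learners (scikit-learn), plus arithmetic and geometric-mean accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                           weights=[0.6, 0.3, 0.1], random_state=0)  # imbalanced classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = AdaBoostClassifier(
    estimator=SVC(kernel="rbf", C=1.0, probability=True),  # `base_estimator` in older scikit-learn
    n_estimators=10, algorithm="SAMME", random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

arithmetic_acc = accuracy_score(y_te, pred)
per_class_recall = recall_score(y_te, pred, average=None)
geometric_acc = np.prod(per_class_recall) ** (1 / len(per_class_recall))
print(f"arithmetic accuracy = {arithmetic_acc:.3f}, geometric-mean accuracy = {geometric_acc:.3f}")
```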