• Title/Summary/Keyword: multi-sample problem

Search Results: 84

A Study on the Financial Service Negotiations in the Korean-Chinese Free-Trade Agreement (FTA) with Respect to RMB Internationalization (위안화 국제화를 고려한 한·중 FTA 금융서비스 협상 전략에 관한 연구)

  • Kim, Sang-Su;Son, Sam-Ho
    • Journal of Distribution Science
    • /
    • v.11 no.4
    • /
    • pp.81-88
    • /
    • 2013
  • Purpose - This paper analyzes the influence of RMB internationalization on the KRW/dollar exchange rate using an autoregressive distributed lag model. Comparing the parameter estimates from the sample periods before and after the global financial crisis, we found that the RMB/dollar exchange rate has become increasingly influential on the KRW/dollar exchange rate. Moreover, for the past several years, the Chinese government has actively utilized financial service FTA negotiations as a measure for RMB internationalization. This paper simultaneously considers RMB internationalization and the financial service negotiations in the Korean-Chinese FTA. Its purpose is to explicitly suggest a direction for the financial service negotiations in the Korean-Chinese FTA considering the effects of RMB internationalization. Research design, data, and methodology - The research plan of this paper has two parts. First, for an empirical study, this paper uses the daily exchange rate of the U.S. dollar against the currencies of the ASEAN5, Taiwan, and Korea. Using an autoregressive distributed lag model, this paper studies the influence of changes in the RMB/dollar exchange rate on changes in the local currency/dollar exchange rate in seven economies neighboring China. Our sample periods are 06/2005-07/2008 and 06/2010-02/2013. During these periods, China was under a multi-currency basket system. We excluded the period 08/2008-05/2010 from the analysis because there was nearly no RMB/dollar exchange rate fluctuation during those months. Second, after analyzing the recent financial service liberalizations and deregulations in China, we recommend a direction for the financial service negotiations in the Korean-Chinese FTA. In the past several years, the main Chinese financial policy agenda has centered on RMB internationalization.
Therefore, it is crucial to understand this agenda when devising strategies for the financial service negotiations in the Korean-Chinese FTA. This paper employs a survey of the existing literature and examines FTA protocols in its research methodology. Results and Conclusions - After the global financial crisis, the Chinese government sought to break away from the dollar's influence and pursued independent RMB internationalization in order to sustain the growth and stability of its economy. Hence, every neighboring economy of China has been strategically affected by RMB internationalization. Nevertheless, there is little empirical study of the influence of RMB internationalization on the KRW/dollar exchange rate. This paper is one of the few studies to analyze this problem comprehensively. Using a relatively simple estimation model, we confirm that the coefficient of the RMB/dollar exchange rate has become more significant, except in the case of Indonesia. Although Korea is not under a multi-currency basket system but under a weakly controlled floating exchange rate system, its coefficient appears as large as those of the ASEAN5. This is the basis of the currency cooperation that has grown from the expansion of trade between the two countries. These empirical results suggest that the Korean government should specifically consider RMB internationalization in the Korean-Chinese FTA negotiations.
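The core regression in the empirical part is an autoregressive distributed lag (ADL) specification. A minimal sketch of such a model, fit by ordinary least squares on synthetic data (the series, coefficients, and lag order here are illustrative assumptions, not the paper's actual specification):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily change series standing in for exchange-rate data
# (illustrative only; the paper uses actual KRW/dollar and RMB/dollar rates).
n = 2000
d_rmb = rng.normal(0, 0.003, n)                  # RMB/dollar daily changes
d_krw = 0.4 * d_rmb + rng.normal(0, 0.005, n)    # KRW/dollar daily changes

# ADL(1,1): d_krw[t] = a + b*d_krw[t-1] + c0*d_rmb[t] + c1*d_rmb[t-1] + e[t]
y = d_krw[1:]
X = np.column_stack([
    np.ones(n - 1),   # intercept
    d_krw[:-1],       # lagged dependent variable
    d_rmb[1:],        # contemporaneous RMB/dollar change
    d_rmb[:-1],       # lagged RMB/dollar change
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["const", "krw_lag1", "rmb", "rmb_lag1"], beta.round(3))))
```

On this synthetic data the estimate for the contemporaneous RMB coefficient recovers the planted value of about 0.4; comparing such coefficients across subperiods is what the abstract describes.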

Optimized Allocation of Water for the Multi-Purpose Use in Agricultural Reservoirs (농업용 저수지의 다목적 이용을 위한 용수의 적정배분)

  • 신일선;권순국
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.29 no.3
    • /
    • pp.125-137
    • /
    • 1987
  • The purpose of this paper is to examine difficulties in the water management of agricultural reservoirs in Korea, where more than 15,000 reservoirs are now being utilized for irrigation, along with the considerable expense and labor invested against the droughts and floods that occur periodically. Recently, the use of water resources in agricultural reservoirs, once devoted to a single purpose, has been becoming multi-purpose in response to the changing environment of water use. Therefore, the task of allocating agricultural water rationally and economically must be solved for the multiple use of agricultural reservoirs. On this basis, this study aims at suggesting a rational method of water management by introducing an optimization technique to allocate the water in an existing agricultural reservoir rationally, so as to maximize the economic effect. To achieve this objective, a reservoir called "O-Bongje" is selected as a case study from an agricultural water development project of medium scale. As a model for the optimum allocation of water in the multi-purpose use of reservoirs, a linear programming model is developed and analyzed. The findings of the study are as follows. First, a linear programming model is developed for the optimum allocation of water in the multi-purpose use of agricultural reservoirs. By applying the model to the "O-Bongje" reservoir, the optimum solution for such objectives as irrigation area, the amount of domestic water supply, the size of power generation, and the size of reservoir storage can be obtained. Second, by comparing the net benefits of each objective under changing conditions of inflow into the reservoir, the factors that most affect the yearly total net benefit can be identified; in order, they are the amount of domestic water supply, irrigation area, and power generation.
Third, the sensitivity analysis for the decision variable of irrigation, which has first priority among the objectives, indicates that an effective method of water management can be rapidly suggested under conditions of decreasing irrigation area. Fourth, in decision making on water allocation policy in an existing multi-purpose reservoir, numerous alternatives can be compared rapidly by adopting the linear programming model. Besides, as the resources can be analyzed in connection with various activities, it can be concluded that the linear programming model developed in this study is more quantitative than traditional methods of analysis. Fifth, all the constraint equations needed to apply a linear programming model to a water allocation problem in agricultural reservoirs are presented, and the method of analysis is also suggested in this study. Finally, as the linear programming model in this study is comprehensive, it can be adopted under different conditions of agricultural reservoirs for the purpose of analyzing optimum water allocation, provided the economic and technical coefficients are known and the decision variable is changed in accordance with the changing condition of the irrigation area.
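The allocation model described above is a linear program: maximize total net benefit subject to an inflow constraint and demand bounds. A minimal sketch with `scipy.optimize.linprog`; the benefit coefficients, inflow, and bounds below are hypothetical, not values from the paper:

```python
from scipy.optimize import linprog

# Illustrative LP for allocating reservoir water among competing uses.
# All numbers are hypothetical, not taken from the O-Bongje case study.
# Decision variables (10^6 m^3/yr): x = [irrigation, domestic, power]
net_benefit = [3.0, 5.0, 1.5]               # net benefit per unit of water
c = [-b for b in net_benefit]               # linprog minimizes, so negate

A_ub = [[1.0, 1.0, 1.0]]                    # total release <= available yield
b_ub = [100.0]                              # yearly inflow into the reservoir
bounds = [(20, None), (10, 60), (0, None)]  # minimum demands, domestic cap

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)                      # allocation and total net benefit
```

With these numbers the solver fills domestic supply to its cap first (highest unit benefit) and gives the remainder to irrigation, which mirrors the benefit ranking reported in the abstract. Varying `b_ub` reproduces the kind of inflow sensitivity analysis the paper describes.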

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically, and it achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, and thus overfitting is unlikely to occur with SVM. Moreover, SVM does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance.
First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class classification problems as SVM does on binary-class classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which can occur when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to the skewed boundary, and thus reduce the classification accuracy of the classifier. SVM ensemble learning is one machine learning approach to coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Thus, boosting attempts to produce new classifiers that are better able to predict the examples on which the current ensemble's performance is poor. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross validation is performed three times with different random seeds in order to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
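The geometric-mean criterion that MGM-Boost builds on can be illustrated without the boosting machinery: on imbalanced data, the arithmetic mean of per-class recalls can look healthy while the geometric mean exposes a neglected minority class. A small sketch with made-up labels (not the paper's bond-rating data):

```python
import numpy as np

def per_class_recall(y_true, y_pred):
    """Recall (true-positive rate) computed separately for each class."""
    classes = np.unique(y_true)
    return np.array([np.mean(y_pred[y_true == c] == c) for c in classes])

# Imbalanced 3-class toy labels: a classifier that ignores the minority
# class can still score well on arithmetic accuracy.
y_true = np.array([0]*80 + [1]*15 + [2]*5)
y_pred = np.array([0]*80 + [1]*13 + [0]*2 + [0]*5)  # class 2 never predicted

rec = per_class_recall(y_true, y_pred)
arith = rec.mean()
geo = rec.prod() ** (1 / len(rec))
print(rec, arith, geo)   # the geometric mean collapses to 0
```

Because one zero recall drives the geometric mean to zero, a booster that optimizes this criterion is pushed to keep training on minority-class errors, which is the intuition the abstract attributes to MGM-Boost.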

The Determinants of R&D and Product Innovation Pattern in High-Technology Industry and Low-Technology Industry: A Hurdle Model and Heckman Sample Selection Model Approach (고기술산업과 저기술산업의 제품혁신패턴 및 연구개발 결정요인 분석: Hurdle 모형과 Heckman 표본선택모형을 중심으로)

  • Lee, Yunha;Kang, Seung-Gyu;Park, Jaemin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.10
    • /
    • pp.76-91
    • /
    • 2019
  • There have been many studies examining patterns in innovation that reflect industry-specific characteristics from an evolutionary economics perspective. The purpose of this study is to identify industry-specific differences in product innovation patterns and in the determinants of innovation performance. For this, Korean manufacturing is classified into high-tech and low-tech industries according to technology intensity. It is also pointed out that existing research does not reflect the decision-making process behind firms' R&D implementation. To address this problem, this study presents a Heckman sample selection model and a double-hurdle model as alternatives, and analyzes data from 1,637 firms in the 2014 Survey on Technology of SMEs. As a result, it was confirmed that the determinants and patterns of product innovation by manufacturing small and medium-sized enterprises (SMEs) differ significantly between high-tech and low-tech industries. Also, through the extended empirical models, we found that both a sample selection bias and a hurdle-like threshold exist in the decision-making process. In this study, the industry-specific features and patterns of product innovation are examined from a multi-sided perspective, and the study is meaningful in that it allows the decision-making process behind manufacturing SMEs' R&D performance to be better understood.
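The hurdle idea above, a separate decision stage for whether R&D happens at all and an intensity stage for firms that clear the hurdle, can be sketched as a simple two-part model on synthetic firm data. The variables and coefficients are hypothetical, and the paper's actual double-hurdle and Heckman estimators (which model the two stages jointly and correct for selection) are more involved than this sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)

# Synthetic firm data (hypothetical): firm size drives both the decision
# to do R&D at all (the "hurdle") and, conditional on doing it, intensity.
n = 1000
size = rng.normal(0, 1, n)
does_rd = (0.8 * size + rng.normal(0, 1, n) > 0).astype(int)
intensity = np.where(does_rd == 1,
                     2.0 + 1.5 * size + rng.normal(0, 0.5, n), 0.0)

X = size.reshape(-1, 1)

# Stage 1: probability of clearing the hurdle (doing any R&D).
hurdle = LogisticRegression().fit(X, does_rd)

# Stage 2: intensity, estimated only on firms that cleared the hurdle.
mask = does_rd == 1
outcome = LinearRegression().fit(X[mask], intensity[mask])

print(hurdle.coef_[0, 0], outcome.coef_[0])
```

Fitting stage 2 only on the selected subsample is exactly where selection bias can enter when the two stages' errors are correlated, which is what the Heckman correction in the paper is for.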

Comparative Study of the Efficiency of GC with Large Volume Injector and SPE Clean-up Process Applied in QuEChERS Method (GC-대용량 주입장치와 SPE를 적용한 QuEChERS 잔류농약 분석법의 효율성 비교)

  • Park, Young Jun;Hong, Su Myeong;Kim, Taek Kyum;Kwon, Hye Young;Hur, Jang Hyun
    • The Korean Journal of Pesticide Science
    • /
    • v.19 no.4
    • /
    • pp.370-393
    • /
    • 2015
  • This study was conducted to compare the STQ method, the multi-residue method in the Korean food code, and the QuEChERS method, validating their accuracy, reproducibility, and efficiency. A total of 45 selected target pesticides were analyzed by GC in five crops (apple, potato, green pepper, rice, soybean). The $R^2$ values calculated for the standard calibration curves were over 0.990. Recovery tests were performed in three replications at two spiking levels, and the relative standard deviation of the repeated experiments was less than 30%. The average recovery of the multi-residue method in the Korean food code was 89.13%, of the QuEChERS method 92.45%, and of the STQ method 85.28%. In addition, the matrix effect of the multi-residue method in the Korean food code was 24.61%, of the QuEChERS method 23.98%, and of the STQ method 11.24%. The STQ method is easier than the QuEChERS method and showed a higher clean-up effect in extracting the sample solution, with clean-up on C18, PLS, and PSA cartridge columns. A large volume of the sample was injected in order to compensate for the problem of the high detection limit of the analyzer. When the STQ method was applied using a large volume injector, the standard calibration curve showed high linearity ($R^2=0.990$), and the method detection limit was 0.01 mg/kg. It showed an average recovery of 91.84%, the relative standard deviations of the three replications repeated at two levels were less than 30%, and the average matrix effect was 17.90%.

User-Perspective Issue Clustering Using Multi-Layered Two-Mode Network Analysis (다계층 이원 네트워크를 활용한 사용자 관점의 이슈 클러스터링)

  • Kim, Jieun;Kim, Namgyu;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.93-107
    • /
    • 2014
  • In this paper, we report what we have observed with regard to user-perspective issue clustering based on multi-layered two-mode network analysis. This work is significant in the context of companies' collection of data about customer needs. Most companies have failed to uncover such needs for products or services properly from demographic data such as age, income level, and purchase history. Because of excessive reliance on limited internal data, most recommendation systems do not provide decision makers with business information appropriate to current business circumstances. Part of the problem is the increasing regulation of personal data gathering and privacy, which makes collecting demographic or transaction data more difficult and is a significant hurdle for traditional recommendation approaches, because these systems demand a great deal of personal data or transaction logs. Our motivation for presenting this paper is our strong belief, and evidence, that most customers' requirements for products can be effectively and efficiently analyzed from unstructured textual data such as Internet news text. In order to derive users' requirements from textual data obtained online, the approach proposed in this paper constructs double two-mode networks, a user-news network and a news-issue network, and integrates them into one quasi-network as the input for issue clustering. One of the contributions of this research is the development of a methodology that utilizes enormous amounts of unstructured textual data for user-oriented issue clustering by leveraging existing text mining and social network analysis. In order to build multi-layered two-mode networks from news logs, we need tools such as text mining and topic analysis. We used not only SAS Enterprise Miner 12.1, which provides a text miner module and a cluster module for textual data analysis, but also NetMiner 4 for network visualization and analysis.
Our approach to user-perspective issue clustering is composed of six main phases: crawling, topic analysis, access pattern analysis, network merging, network conversion, and clustering. In the first phase, we collect visit logs for news sites with a crawler. After gathering unstructured news article data, the topic analysis phase extracts issues from each news article in order to build an article-issue network. For simplicity, 100 topics are extracted from 13,652 articles. In the third phase, a user-article network is constructed from access patterns derived from web transaction logs. The double two-mode networks are then merged into a user-issue quasi-network. Finally, in the user-oriented issue-clustering phase, we classify issues through structural equivalence and compare the result with the clustering results from statistical tools and network analysis. An experiment with a large dataset was performed to build the multi-layered two-mode network, after which we compared the results of issue clustering from SAS with those of network analysis. The experimental dataset came from a web site ranking service and the biggest portal site in Korea. The sample dataset contains 150 million transaction logs and 13,652 news articles from 5,000 panelists over one year. User-article and article-issue networks were constructed and merged into a user-issue quasi-network using NetMiner. Our issue-clustering results, obtained with the Partitioning Around Medoids (PAM) algorithm and Multidimensional Scaling (MDS), are consistent with the results from SAS clustering. In spite of extensive efforts to provide user information through recommendation systems, most projects succeed only when companies have sufficient data about users and transactions. Our proposed methodology, user-perspective issue clustering, can provide practical support to decision making in companies because it enriches user-related data with unstructured textual data.
To overcome the problem of insufficient data in traditional approaches, our methodology infers customers' real interests by utilizing web transaction logs. In addition, we suggest topic analysis and issue clustering as a practical means of issue identification.
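Merging the user-article and article-issue two-mode networks into a user-issue quasi-network can be viewed as multiplying their incidence matrices. A tiny sketch with hypothetical matrices (the paper's networks are, of course, far larger):

```python
import numpy as np

# Hypothetical incidence matrices for two two-mode networks.
# user_article[u, a] = 1 if user u read article a
user_article = np.array([
    [1, 1, 0],   # user 0 read articles 0 and 1
    [0, 1, 1],   # user 1 read articles 1 and 2
])
# article_issue[a, i] = 1 if article a covers issue i
article_issue = np.array([
    [1, 0],      # article 0 -> issue 0
    [1, 1],      # article 1 -> both issues
    [0, 1],      # article 2 -> issue 1
])

# Merging the two layers: the matrix product is the user-issue
# quasi-network, weighted by how many articles link a user to an issue.
user_issue = user_article @ article_issue
print(user_issue)
```

Rows of `user_issue` can then be fed to a clustering step (the paper uses structural equivalence and PAM) to group issues from the users' perspective.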

Multi-channel LD - Driver designed for CTP(computer to plate) (CTP용 다 채널 LD - 드라이버 설계)

  • Lee, Bae-Kyu
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.3
    • /
    • pp.667-673
    • /
    • 2015
  • Laser diodes (LDs) have been studied in many fields, including medicine, industrial processing, sensors, advertising equipment, and printing equipment, and they are widely used in industry. However, an LD requires precise handling: it is vulnerable to electrostatic discharge, physical impact, overcurrent, and heat, so its practical use has been limited to specialized areas. In this study, we address these characteristics of the LD and build a sample module that makes LDs of various wavelengths convenient to use. Furthermore, using a CTP (computer to plate) printing device equipped with the 128-channel LD driver, we compare print speed and resolution against a 64-channel CTP device and solve the problem of delay between dots. Finally, we consider the potential of a 256-channel LD driver.

Power Estimation and Optimum Design of a Buoy for the Resonant Type Wave Energy Converter Using Approximation Scheme (근사기법을 활용한 공진형 파력발전 부이의 발전량 추정 및 최적설계)

  • Koh, Hyeok-Jun;Ruy, Won-Sun;Cho, Il-Hyoung
    • Journal of Ocean Engineering and Technology
    • /
    • v.27 no.1
    • /
    • pp.85-92
    • /
    • 2013
  • This paper deals with a resonant-type WEC (wave energy converter) and a method of determining its geometric parameters so as to obtain a robust and optimal structure. In detail, the optimization problem is formulated with constraints composed of response surfaces that represent the resonance periods (heave, pitch) and the metacentric height of the buoy. A signal-to-noise ratio calculated from normalized multi-objective results with weight factors helps select a robust design level. In order to obtain the sample data set, the motion responses of the power buoy were analyzed using a BEM (boundary element method)-based commercial code. The optimization result is also compared with a robust design in a feasibility study. Finally, the power efficiency of the WEC with the optimum design variables is estimated as the captured wave ratio resulting from the absorbed power, which is mainly related to the PTO (power take-off) damping. The resulting WEC design can be regarded as an economical optimal design that satisfies the given constraints.
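The signal-to-noise ratio used above for robust-design screening can be sketched with Taguchi's larger-the-better form. The responses and weights below are hypothetical, and folding the weights into the sum is one common way to combine normalized multi-objective results, not necessarily the paper's exact formula:

```python
import numpy as np

# Hypothetical normalized responses for two objectives at one design level
# (say, heave-resonance match and metacentric height), with weight factors.
responses = np.array([0.82, 0.74])   # normalized to (0, 1], larger is better
weights = np.array([0.6, 0.4])

# Taguchi larger-the-better S/N ratio with weighted objectives:
#   S/N = -10 * log10( sum_i w_i / y_i^2 )
sn = -10 * np.log10(np.sum(weights / responses**2))
print(round(sn, 2), "dB")
```

Design levels with a higher (less negative) S/N ratio are preferred as more robust, which is how the ratio guides level selection in the abstract.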

Hardware Synthesis From Coarse-Grained Dataflow Specification For Fast HW/SW Cosynthesis (빠른 하드웨어/소프트웨어 통합합성을 위한 데이타플로우 명세로부터의 하드웨어 합성)

  • Jung, Hyun-Uk;Ha, Soon-Hoi
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.5
    • /
    • pp.232-242
    • /
    • 2005
  • This paper concerns automatic hardware synthesis from a dataflow graph (DFG) specification for fast HW/SW cosynthesis. A node in a DFG represents a coarse-grained block, such as FIR or DCT, and a port in a block may consume multiple data samples per invocation, which distinguishes our approach from behavioral synthesis and complicates the problem. In the presented design methodology, a dataflow graph with a specified algorithm can be mapped to various hardware structures according to the resource allocation and schedule information. This simplifies the management of the area/performance tradeoff in hardware design and widens the design space of hardware implementations of a dataflow graph compared with previous approaches. Experiments with several examples demonstrate the usefulness of the proposed technique.

Monitoring and Risk Assessment of Pesticide Residues in Commercially Dried Vegetables

  • Seo, Young-Ho;Cho, Tae-Hee;Hong, Chae-Kyu;Kim, Mi-Sun;Cho, Sung-Ja;Park, Won-Hee;Hwang, In-Sook;Kim, Moo-Sang
    • Preventive Nutrition and Food Science
    • /
    • v.18 no.2
    • /
    • pp.145-149
    • /
    • 2013
  • We tested residual pesticide levels in dried vegetables in Seoul, Korea. A total of 100 samples of 13 different types of agricultural products were analyzed by gas chromatography with a nitrogen-phosphorus detector (GC-NPD), an electron capture detector (GC-${\mu}ECD$), a mass spectrometry detector (GC-MSD), and high performance liquid chromatography with an ultraviolet detector (HPLC-UV). We used multi-analysis methods to test for 253 different pesticides. Among the selected agricultural products, residual pesticides were detected in 11 samples, of which 2 samples (2.0%) exceeded the Korea Maximum Residue Limits (MRLs). We detected pesticide residue in 6 of 9 analyzed dried pepper leaf samples, and 1 sample exceeded the Korea MRLs. The data obtained were then used to estimate the potential health risks associated with exposure to these pesticides. The estimated daily intakes (EDIs) range from 0.1% of the acceptable daily intake (ADI) for bifenthrin to 8.4% of the ADI for cadusafos. The most critical commodity is cadusafos in chwinamul, contributing 8.4% to the hazard index (HI). These results show that the detected pesticides cannot be considered a serious public health problem. Nevertheless, continuous monitoring is recommended.
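The %ADI and hazard-index figures above follow standard dietary risk-assessment arithmetic: each pesticide's estimated daily intake is expressed as a fraction of its acceptable daily intake, and the fractions are summed. A sketch with hypothetical intake values (the paper's actual EDI/ADI numbers are not reproduced here):

```python
# Hypothetical intake figures for one pesticide; the %ADI and hazard-index
# arithmetic is standard, but the numbers are not from the paper.
adi = 0.0005          # acceptable daily intake, mg/kg body weight/day
edi = 0.000042        # estimated daily intake (residue level x consumption)

pct_adi = edi / adi * 100.0
print(round(pct_adi, 1), "% of ADI")

# Hazard index: sum of EDI/ADI ratios over all detected pesticides.
ratios = [0.084, 0.012, 0.001]   # hypothetical per-pesticide EDI/ADI ratios
hi = sum(ratios)
print(round(hi, 3))              # HI < 1 suggests acceptable aggregate risk
```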