• Title/Summary/Keyword: Robustness performance


Estimation of Economic Effects on Overseas Oil and Gas E&P by Macroeconomic Model of Korea (거시경제모형을 이용한 해외석유가스개발사업의 경제적 효과 추정 연구)

  • Kim, Ji-Whan;Chung, Woo Jin;Kim, Yoon Kyung
    • Environmental and Resource Economics Review / v.23 no.1 / pp.133-156 / 2014
  • In general, the quantitative results of an empirical analysis based on a model show how large a policy's effects are, which makes such analysis useful for evaluating the policy. This paper constructs a macroeconomic model, based on the Bank of Korea's quarterly and annual models, that quantitatively estimates the contribution of overseas oil and gas development projects to the Korean economy. With this model, we estimated the effects on real GDP, the current account, the unemployment rate, the CPI, and the exchange rate driven by the amount recovered from overseas oil and gas development projects, where the recovered amount was valued as the currency equivalent of the oil and gas acquired from those projects. The macroeconomic model in this paper benchmarks the macro models built by the Bank of Korea (1997, 2004, 2012). We checked model robustness using the statistical fit of each equation and a historical simulation over 1994 to 2011. The recovered amount from overseas oil and gas development projects has a positive effect on every macroeconomic index except the CPI and the exchange rate, and the economic effect on each index grows over time because the recovered amount has kept increasing. Although the estimated effects differ from year to year, as of 2011 the recovered amount increased the current account by 2.226% and real GDP by 0.401%, lowered the unemployment rate by 0.489%p, and reduced the exchange rate against the US dollar by 0.379%.
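
As a rough illustration of the historical-simulation check mentioned above, the sketch below compares a model's simulated path of a variable such as real GDP with the actual series over 1994 to 2011 using RMSE and MAPE. Both series are synthetic placeholders, not the paper's data or model output.

```python
# Illustrative historical-simulation fit check: simulated vs. actual series, 1994-2011.
# Both series are synthetic placeholders, not the paper's macroeconomic data.
import numpy as np

years = np.arange(1994, 2012)
actual = 100 * (1.04 ** (years - 1994))                              # placeholder "actual" real GDP index
rng = np.random.default_rng(0)
simulated = actual * (1 + 0.02 * rng.standard_normal(years.size))   # placeholder model output

rmse = np.sqrt(np.mean((simulated - actual) ** 2))
mape = 100 * np.mean(np.abs((simulated - actual) / actual))
print(f"RMSE: {rmse:.2f}, MAPE: {mape:.2f}%")   # small errors suggest an adequate in-sample fit
```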

The Effect of Corporate Social Responsibilities on the Quality of Corporate Reporting (기업의 사회책임이 기업경영보고의 질에 미치는 영향)

  • Jeong, Kap-Soo;Park, Cheong-Kyu
    • Journal of Distribution Science / v.14 no.6 / pp.75-80 / 2016
  • Purpose - A growing demand for sustainability reporting has placed pressure on firms to disclose non-financial information that affects firm valuation, growth, and development. In particular, a number of researchers have investigated various topics in Corporate Social Responsibility (CSR), a form of non-financial information. Prior studies suggest that CSR may affect corporate outcomes such as corporate reporting, financial performance, and disclosures. However, prior studies do not clearly establish whether CSR affects corporate outcomes, partly because of measurement issues with CSR. In this study, we examine whether CSR affects the quality of corporate reporting, one of the popular measures of corporate outcomes, and we find evidence that CSR positively affects the quality of corporate reporting. Research design, data, and methodology - We collected a unique dataset of CSR ratings from MSCI. A total of 169 firms listed on the Korean Stock Exchange from 2011 to 2014 were collected and analyzed together with their detailed CSR reports. Using a correlation test, we found a weak association between CSR and the quality of corporate reporting. However, the regression tests showed a strong relationship between CSR and the quality of corporate reporting after controlling for other variables that may affect it. Additionally, we calculated the t-statistics based on heteroskedasticity-consistent standard errors (White, 1980). Results - Before running the regression tests, we sorted the measures of the two dependent variables by CSR rating (from AAA to CCC). The results indicate that the measures of the quality of corporate reporting, discretionary accruals and performance-matched discretionary accruals, decrease monotonically as the CSR rating increases, which supports our hypothesis. In the regression tests, the coefficient on MJDA (PMDA) is -0.183 (-0.173) and significant at the 5% level, which we interpret as CSR affecting the quality of corporate reporting in a positive way. The coefficients on the control variables are consistent with prior studies; for example, the coefficients on both LOSS and LEV are positive and significant at conventional levels, meaning that firms in financial difficulty may harm the quality of their corporate reporting. Conclusion - We found evidence that CSR is positively associated with the quality of corporate reporting. This study contributes to the literature in several ways. First, it extends the line of CSR research by providing additional evidence in the setting of ethical behavior by management, consistent with the hypothesis and with the results of prior studies. Second, to the best of our knowledge, this is the first study using the MSCI CSR ratings; in contrast with prior studies using different measures of CSR, the MSCI CSR ratings allow an in-depth analysis. Third, the additional measure of the dependent variable (PMDA) improves the robustness of our results. Overall, this study extends the findings of prior studies by providing incremental evidence.
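
As a rough illustration of the regression design described above, the sketch below regresses a discretionary-accruals proxy on a CSR score with White (1980) heteroskedasticity-consistent standard errors using statsmodels. The file and column names (MJDA, CSR_SCORE, LOSS, LEV, SIZE) are placeholders, not the paper's dataset.

```python
# Illustrative regression of a discretionary-accruals proxy on a CSR score with
# White (1980) heteroskedasticity-consistent standard errors.
# File and column names are hypothetical placeholders, not the paper's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("csr_sample.csv")  # hypothetical firm-year panel

model = smf.ols("MJDA ~ CSR_SCORE + LOSS + LEV + SIZE", data=df)
result = model.fit(cov_type="HC0")  # HC0 = White's heteroskedasticity-consistent estimator
print(result.summary())
```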

Total Degradation Performance Evaluation of the Time- and Frequency-Domain Clipping in OFDM Systems (OFDM 시스템에서 시간 및 주파수 영역 클리핑의 Total Degradation 성능평가)

  • Han, Chang-Sik;Seo, Man-Jung;Im, Sung-Bin
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.7 s.361 / pp.17-22 / 2007
  • OFDM (Orthogonal Frequency Division Multiplexing) is a special case of multicarrier transmission, where a single data stream is transmitted over a number of lower-rate subcarriers. One of the main reasons to use OFDM is to increase robustness against frequency-selective fading or narrowband interference. Unfortunately, an OFDM signal consists of a number of independently modulated subcarriers, which can produce a large PAPR (Peak-to-Average Power Ratio) when they add up coherently. In this paper, we investigate the performance of a simple PAPR reduction scheme that requires neither a change to the receiver structure nor the transmission of additional side information. The approach we employ is clipping in the time and frequency domains: the time-domain clipping is carried out with a predetermined clipping level, while the frequency-domain clipping is done within the EVM (Error Vector Magnitude) limit. This approach is suboptimal but has lower computational complexity than the optimal method. The evaluation is carried out on an OFDM system with a nonlinear amplifier. The simulation results demonstrate that the PAPR reduction algorithm is one way to reduce the effects of the nonlinear distortion of an HPA (High Power Amplifier).
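
As a rough illustration of the time-domain part of the scheme, the sketch below generates one OFDM symbol, clips it at a predetermined amplitude level, and compares the PAPR before and after. The subcarrier count, modulation, and clipping ratio are illustrative assumptions, not the paper's simulation setup.

```python
# Illustrative time-domain clipping for PAPR reduction on a single OFDM symbol.
# Parameters (64 subcarriers, QPSK, clipping ratio 1.5) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_sc = 64                                                            # number of subcarriers
bits = rng.integers(0, 2, size=(2, n_sc))
symbols = (2 * bits[0] - 1 + 1j * (2 * bits[1] - 1)) / np.sqrt(2)    # QPSK mapping

x = np.fft.ifft(symbols) * np.sqrt(n_sc)                             # time-domain OFDM symbol

def papr_db(sig):
    p = np.abs(sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Clip at a predetermined level (clipping ratio relative to the RMS amplitude),
# limiting the magnitude while preserving the phase of each sample.
clip_ratio = 1.5
a_max = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
mag = np.abs(x)
scale = np.minimum(1.0, a_max / np.maximum(mag, 1e-12))
clipped = x * scale

print(f"PAPR before: {papr_db(x):.2f} dB, after clipping: {papr_db(clipped):.2f} dB")
```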

Development of an Active Dry EEG Electrode Using an Impedance-Converting Circuit (임피던스 변환 회로를 이용한 건식능동뇌파전극 개발)

  • Ko, Deok-Won;Lee, Gwan-Taek;Kim, Sung-Min;Lee, Chany;Jung, Young-Jin;Im, Chang-Hwan;Jung, Ki-Young
    • Annals of Clinical Neurophysiology / v.13 no.2 / pp.80-86 / 2011
  • Background: A dry-type electrode is an alternative to the conventional wet-type electrode because it can be applied without any skin preparation, such as applying a conductive electrolyte. However, because a dry-type electrode without electrolyte has high electrode-to-skin impedance, an impedance-converting amplifier is typically used to minimize distortion of the bioelectric signal. In this study, we developed an active dry electroencephalography (EEG) electrode using an impedance converter and compared its performance with a conventional Ag/AgCl EEG electrode. Methods: We developed an active dry electrode with an impedance converter using a chopper-stabilized operational amplifier. Two electrodes, a conventional Ag/AgCl electrode and our active electrode, were used to acquire EEG signals simultaneously, and the performance was tested in terms of (1) the electrode impedance, (2) raw data quality, and (3) robustness against artifacts. Results: The contact impedance of the developed electrode was lower than that of the Ag/AgCl electrode (0.3 ± 0.1 vs. 2.7 ± 0.7 kΩ, respectively). The EEG signal and power spectrum were similar for both electrodes. Additionally, our electrode had a lower 60-Hz component than the Ag/AgCl electrode (16.64 vs. 24.33 dB, respectively). The change in potential of the developed electrode in response to a physical stimulus was lower than for the Ag/AgCl electrode (58.7 ± 30.6 vs. 81.0 ± 19.1 μV, respectively), and the difference was close to statistical significance (P=0.07). Conclusions: Our electrode can replace Ag/AgCl electrodes when EEG recording is urgently required, such as in emergency rooms or intensive care units.
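
As a rough illustration of the 60-Hz comparison reported above, the sketch below estimates the 60 Hz line-noise component of two simultaneously recorded channels from their Welch power spectra. The signals are synthetic stand-ins with different amounts of 60 Hz interference, not the recorded EEG data.

```python
# Illustrative 60 Hz line-noise comparison of two channels via Welch power spectra.
# Signals are synthetic stand-ins; sampling rate and amplitudes are assumptions.
import numpy as np
from scipy.signal import welch

fs = 500.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 30.0, 1.0 / fs)             # 30 s of data
rng = np.random.default_rng(0)
channels = {
    "active dry electrode": 5e-6 * np.sin(2 * np.pi * 60 * t) + 20e-6 * rng.standard_normal(t.size),
    "Ag/AgCl electrode":   12e-6 * np.sin(2 * np.pi * 60 * t) + 20e-6 * rng.standard_normal(t.size),
}

for name, ch in channels.items():
    f, pxx = welch(ch, fs=fs, nperseg=2048)
    idx = np.argmin(np.abs(f - 60.0))        # frequency bin closest to 60 Hz
    print(f"{name}: 60 Hz power {10 * np.log10(pxx[idx]):.1f} dB")
```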

Installation of Very Broadband Seismic Stations to Observe Seismic and Cryogenic Signals, Antarctica (남극 지진 및 빙권 신호 관측을 위한 초광대역 지진계 설치)

  • Lee, Won-Sang;Park, Yong-Cheol;Yun, Suk-Young;Seo, Ki-Weon;Yee, Tae-Gyu;Choe, Han-Jin;Yoon, Ho-Il;Chae, Nam-Yi
    • Geophysics and Geophysical Exploration / v.15 no.3 / pp.144-149 / 2012
  • The Korea Polar Research Institute (KOPRI) successfully installed two autonomous very broadband three-component seismic stations on King George Island (KGI), Antarctica, during the 24th KOPRI Antarctic Summer Expedition (2010 ~ 2011). The seismic observation system was originally designed by the Incorporated Research Institutions for Seismology Program for Array Seismic Studies of the Continental Lithosphere Instrument Center and is fully compatible with the Polar Earth Observing Network seismic system. The installation aims to achieve the following major goals: 1. monitoring local earthquakes and icequakes in and around the KGI, and 2. validating the robustness of the seismic system's operation in a harsh environment. For further intensive studies, we plan to relocate these stations and add a couple more at an ice shelf system, e.g., the Larsen Ice Shelf System, Antarctica, in 2013 to investigate ice dynamics and the physical interaction between the lithosphere and the cryosphere. In this article, we evaluate seismic station performance and characteristics by examining ambient noise, and we provide operational system information such as frequency response and State-Of-Health information.

A Digital Phase-locked Loop design based on Minimum Variance Finite Impulse Response Filter with Optimal Horizon Size (최적의 측정값 구간의 길이를 갖는 최소 공분산 유한 임펄스 응답 필터 기반 디지털 위상 고정 루프 설계)

  • You, Sung-Hyun;Pae, Dong-Sung;Choi, Hyun-Duck
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.4 / pp.591-598 / 2021
  • The digital phase-locked loop (DPLL) is a circuit used for phase synchronization and has been widely used in fields such as communications and circuit design. State estimators are used to design digital phase-locked loops, and infinite impulse response state estimators such as the well-known Kalman filter have typically been employed. In general, the performance of a digital phase-locked loop based on an infinite impulse response state estimator is excellent, but sudden performance degradation may occur in unexpected situations such as an inaccurate initial value, model error, or disturbance. In this paper, we propose a minimum variance finite impulse response filter with an optimal horizon for designing a new digital phase-locked loop. A numerical method is introduced to obtain the horizon size (the length of the measurement window), an important parameter of the proposed finite impulse response filter. To obtain the filter gain, the error covariance matrix is set as the cost function and minimized using a linear matrix inequality. To verify the superiority and robustness of the proposed digital phase-locked loop, a simulation was performed comparing it with the existing method in a situation where the noise information is inaccurate.
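
As a rough illustration of the finite impulse response estimation idea, the sketch below forms a batch unbiased FIR estimate of the phase and frequency from the last N phase measurements of a simple DPLL state model. It is a simplified stand-in for the paper's minimum variance FIR filter with an LMI-derived gain; the model, horizon size, and noise levels are illustrative assumptions.

```python
# Simplified FIR phase/frequency estimator over a finite measurement horizon.
# A plain unbiased batch least-squares FIR estimate is used as a stand-in for the
# paper's minimum-variance FIR filter; model and parameters are illustrative.
import numpy as np

T = 1.0                                   # sampling period
A = np.array([[1.0, T], [0.0, 1.0]])      # state: [phase, frequency]
C = np.array([[1.0, 0.0]])                # only the phase is measured
N = 8                                     # horizon (number of recent measurements used)

# Stacked observation matrix over the horizon: y_{m+i} = C A^i x_m + noise
obs = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(N)])

def fir_estimate(y_window):
    """Estimate the current [phase, frequency] from the last N phase measurements."""
    x_m, *_ = np.linalg.lstsq(obs, y_window, rcond=None)   # state at the window start
    return np.linalg.matrix_power(A, N - 1) @ x_m          # propagate to the current time

# Usage on synthetic data: true frequency offset 0.05 rad/sample, noisy phase readings.
rng = np.random.default_rng(1)
k = np.arange(100)
true_phase = 0.2 + 0.05 * k
y = true_phase + 0.02 * rng.standard_normal(k.size)
print(fir_estimate(y[-N:]))   # estimated [phase, frequency] at the latest sample
```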

Robust Filter Based Wind Velocity Estimation Method for Unpowered Air Vehicle Without Air Speed Sensor (대기 속도 센서가 없는 무추력 항공기의 강인 필터 기반의 바람 속도 추정 기법)

  • Park, Yong-gonjong;Park, Chan Gook
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.47 no.2 / pp.107-113 / 2019
  • In this paper, a robust-filter-based wind velocity estimation algorithm for an air vehicle without an air speed sensor is presented. The wind velocity is useful information for the air vehicle to perform precise guidance and control. In general, the wind velocity can be obtained by subtracting the air velocity, measured by an air speed sensor such as a pitot tube, from the ground velocity, obtained from navigation equipment. However, when the air speed sensor is omitted to simplify the configuration of the air vehicle, the wind velocity cannot be obtained directly, so a wind estimation algorithm is necessary. In this case, turbulence changes the aerodynamic coefficients of the air vehicle, which introduces uncertainty into the filter's system model and degrades the wind estimation performance. Therefore, in this study, we propose a wind estimation method using an H∞ filter to ensure robustness against aerodynamic coefficient uncertainty, and we confirm through simulation that the proposed method improves estimation performance under such uncertainty.
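
As a rough illustration of the estimation idea, the sketch below treats a one-dimensional wind speed as a slowly varying state and fuses noisy pseudo-measurements formed as the difference between a navigation-derived ground velocity and a model-predicted air-relative velocity. A plain scalar Kalman filter is used here as a stand-in for the paper's H∞ filter; all signals, models, and noise levels are synthetic assumptions.

```python
# Heavily simplified 1-D wind estimation: wind modeled as a random walk, updated with
# pseudo-measurements (ground velocity minus modeled air-relative velocity).
# A scalar Kalman filter stands in for the H-infinity filter; everything is synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 200
true_wind = 3.0 + np.cumsum(0.01 * rng.standard_normal(n))   # slowly drifting wind (m/s)
pseudo_meas = true_wind + 0.5 * rng.standard_normal(n)       # ground vel - modeled air vel

q, r = 0.01**2, 0.5**2          # assumed process / measurement noise variances
x_hat, p = 0.0, 10.0            # initial estimate and covariance
for z in pseudo_meas:
    p += q                      # predict (random-walk wind model)
    k = p / (p + r)             # Kalman gain
    x_hat += k * (z - x_hat)    # update with the pseudo-measurement
    p *= (1 - k)

print(f"final wind estimate: {x_hat:.2f} m/s (true: {true_wind[-1]:.2f} m/s)")
```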

Comprehensive analysis of deep learning-based target classifiers in small and imbalanced active sonar datasets (소량 및 불균형 능동소나 데이터세트에 대한 딥러닝 기반 표적식별기의 종합적인 분석)

  • Geunhwan Kim;Youngsang Hwang;Sungjin Shin;Juho Kim;Soobok Hwang;Youngmin Choo
    • The Journal of the Acoustical Society of Korea / v.42 no.4 / pp.329-344 / 2023
  • In this study, we comprehensively analyze the generalization performance of various deep learning-based active sonar target classifiers when applied to small and imbalanced active sonar datasets. To generate the active sonar datasets, we use data from two oceanic experiments conducted at different times and in different ocean areas. Each sample in the active sonar datasets is a time-frequency domain image extracted from the audio signal of a contact after the detection process. For the comprehensive analysis, we utilize 22 Convolutional Neural Network (CNN) models. The two datasets are used alternately as the train/validation dataset and the test dataset. To calculate the variance in the output of the target classifiers, the experiments on the train/validation/test datasets are repeated 10 times. Hyperparameters for training are optimized using Bayesian optimization. The results demonstrate that shallow CNN models show superior robustness and generalization performance compared to most of the deep CNN models. The results of this paper can serve as a valuable reference for future research directions in deep learning-based active sonar target classification.
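
For reference, the sketch below shows what a shallow CNN for classifying time-frequency (spectrogram-like) images into target and non-target classes might look like. The architecture, input size, and class count are illustrative assumptions, not any of the 22 models evaluated in the paper.

```python
# Illustrative shallow CNN for time-frequency image classification (target / non-target).
# Architecture and input size are assumptions, not the models from the paper.
import torch
import torch.nn as nn

class ShallowSonarCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):                  # x: (batch, 1, freq_bins, time_frames)
        return self.classifier(self.features(x))

model = ShallowSonarCNN()
dummy = torch.randn(4, 1, 64, 64)          # a batch of 4 synthetic time-frequency images
print(model(dummy).shape)                   # torch.Size([4, 2])
```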

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.4 s.304 / pp.1-12 / 2005
  • Image registration is a process to establish the spatial correspondence between images of the same scene, which are acquired at different viewpoints, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by multiple sensors with different modalities, here the EO (electro-optic) and IR (infrared) sensors. Two approaches, feature-based and intensity-based, are usually possible for image registration. In the former, selection of accurate common features is crucial for high performance, but features in the EO image are often not the same as those in the IR image; hence, this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure due to its high accuracy and robustness, and NMI-based image registration methods assume that the statistical correlation between two images is global. Unfortunately, since EO and IR images often do not satisfy this assumption, the registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on an analysis of the statistical correlation between EO/IR images. In the first stage, for robust registration, we propose two preprocessing schemes: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated to the corresponding regions in the other image, and ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered using an NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
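
The sketch below shows one common way to compute the normalized mutual information similarity measure used by the intensity-based approach, from a joint intensity histogram. The bin count and test images are illustrative assumptions, not the paper's implementation.

```python
# Illustrative normalized mutual information (NMI) between two images, computed from a
# joint intensity histogram. Bin count and test images are assumptions for demonstration.
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B), computed from a joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(p_a) + entropy(p_b)) / entropy(p_ab)

# Usage: NMI of an image with itself is maximal; misalignment lowers it.
rng = np.random.default_rng(3)
eo = rng.random((128, 128))
print(normalized_mutual_information(eo, eo))
print(normalized_mutual_information(eo, np.roll(eo, 5, axis=0)))
```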

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers who seek to manage this huge amount of information. It also calls for professionals capable of classifying relevant information, and hence text classification is introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, different techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts have proposed a new algorithm or modified an existing one, and this line of research can be said to have reached its limits for further improvement. In this study, rather than proposing a new algorithm or modifying an existing one, we focus on finding a way to modify how the data are used. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets most of the time contain noise, or in other words noisy data, which can affect the decisions made by classifiers built from those data. In this study, we consider that data from different domains, i.e., heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. To build a classifier, a machine learning algorithm is applied under the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary included in the documents; if the viewpoints of the learning data and the target data are different, the features may appear different between the two datasets. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data coming from various kinds of sources are likely to be formatted differently, traditional machine learning algorithms have difficulty, as they are not designed to recognize different types of data representation at the same time and combine them into a single generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning in our study. However, unlabeled data may degrade the performance of the document classifier. Therefore, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
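
As a rough illustration of semi-supervised document classification in the spirit described above, the sketch below wraps a simple text classifier in a self-training loop that uses unlabeled documents alongside a small labeled set. It is a generic stand-in, not the paper's RSESLA algorithm; the corpus and pipeline are illustrative assumptions.

```python
# Illustrative semi-supervised text classification: self-training around a TF-IDF +
# logistic regression pipeline. This is a generic stand-in, not RSESLA; the corpus,
# labels, and threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

# Hypothetical corpus: labels are 0/1 for labeled documents and -1 for unlabeled ones
# (e.g., documents drawn from a different, heterogeneous source).
docs = [
    "stock market rises on earnings news", "team wins championship game",
    "central bank raises interest rates", "player scores late winning goal",
    "quarterly profits beat analyst forecasts", "coach praises defensive effort",
]
labels = [0, 1, 0, 1, -1, -1]     # -1 marks unlabeled documents

clf = make_pipeline(
    TfidfVectorizer(),
    SelfTrainingClassifier(LogisticRegression(), threshold=0.6),
)
clf.fit(docs, labels)
print(clf.predict(["bank cuts rates amid slowdown"]))
```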