• Title/Summary/Keyword: 열 분해 (thermal decomposition)

Search Results: 3,054

Professional Speciality of Communication Administration and Occupational Group and Series Classes of Position in National Public Official Law -for Efficiency of Telecommunication Management- (통신행정의 전문성과 공무원법상 직군렬 -전기통신의 관리를 중심으로-)

  • 조정현
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.3 no.1
    • /
    • pp.26-27
    • /
    • 1978
  • It can be expected that intelligence and knowledge will be the core of post-industrial society in the near future. Accordingly, the age of intelligence will accelerate until we find ourselves in an age of the 'communication' service enterprise. Communication activities will increase in efficiency and multiply in utility when grounded in scientific principles and legal ideas. The two basic elements of communication activity, communication stations and communication personnel, can perform their functions only when properly supported and managed by the government administration. Since communication activity is composed of various factors, elements such as communication stations and officials must be cultivated and managed by specialists or experts through continuous and extensive study. With this in mind, this study reviewed the Korean public service officials law with a view to improving it, offering suggestions so that communication experts and researchers can find suitable positions within the framework of government administration. The study proposes an 'Occupational Group of Communication' consisting of a series of communication management positions and research positions, in parallel to the existing series of communication technical positions. A communication specialist or expert is required to possess the necessary scientific knowledge and techniques of communication, as well as the prerequisites for government service officials. Communication experts must first obtain the relevant government licence, in accordance with government law and regulation and international custom, before they can be appointed to official positions. This system of licence prior to appointment applies principally to communication management positions. Communication research positions are for those who engage in study and research of both a management and a technical nature. It is hoped that efficient and extensive management of communication activities, as well as scientific and continuous study of the communication enterprise, will thereby be upgraded at the national level.


Soil Physical Properties of Arable Land by Land Use Across the Country (토지이용별 전국 농경지 토양물리적 특성)

  • Cho, H.R.;Zhang, Y.S.;Han, K.H.;Cho, H.J.;Ryu, J.H.;Jung, K.Y.;Cho, K.R.;Ro, A.S.;Lim, S.J.;Choi, S.C.;Lee, J.I.;Lee, W.K.;Ahn, B.K.;Kim, B.H.;Kim, C.Y.;Park, J.H.;Hyun, S.H.
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.3
    • /
    • pp.344-352
    • /
    • 2012
  • Soil physical properties determine soil quality with respect to root growth, infiltration, and water- and nutrient-holding capacity. Although monitoring of soil physical properties is important for sustainable agricultural production, there have been few such studies. This study was conducted to investigate the soil physical properties of arable land according to land use across the country. Plastic film house soils, upland soils, orchard soils, and paddy soils were surveyed from 2008 to 2011, including depth of topsoil, bulk density, hardness, soil texture, and organic matter. The average physical properties were as follows. In plastic film house soils, the depth of topsoil was 16.2 cm; for the topsoils, hardness was 9.0 mm, bulk density was 1.09 Mg $m^{-3}$, and organic matter content was 29.0 g $kg^{-1}$; for the subsoils, hardness was 19.8 mm, bulk density was 1.32 Mg $m^{-3}$, and organic matter content was 29.5 g $kg^{-1}$. In upland soils, the depth of topsoil was 13.3 cm; for the topsoils, hardness was 11.3 mm, bulk density was 1.33 Mg $m^{-3}$, and organic matter content was 20.6 g $kg^{-1}$; for the subsoils, hardness was 18.8 mm, bulk density was 1.52 Mg $m^{-3}$, and organic matter content was 13.0 g $kg^{-1}$. Classified by crop type, soil physical property values were high for deep-rooted and short-rooted vegetable soils, but low for leafy vegetable soils. In orchard soils, the depth of topsoil was 15.4 cm; for the topsoils, hardness was 16.1 mm, bulk density was 1.25 Mg $m^{-3}$, and organic matter content was 28.5 g $kg^{-1}$; for the subsoils, hardness was 19.8 mm, bulk density was 1.41 Mg $m^{-3}$, and organic matter content was 15.9 g $kg^{-1}$. In paddy soils, the depth of topsoil was 17.5 cm; for the topsoils, hardness was 15.3 mm, bulk density was 1.22 Mg $m^{-3}$, and organic matter content was 23.5 g $kg^{-1}$; for the subsoils, hardness was 20.3 mm, bulk density was 1.47 Mg $m^{-3}$, and organic matter content was 17.5 g $kg^{-1}$. By land use, average bulk density increased in the order plastic film house soils < paddy soils < orchard soils < upland soils. Topsoil bulk density values were mainly distributed in the range 1.0~1.25 Mg $m^{-3}$. Subsoil bulk density values were mostly distributed above 1.50 Mg $m^{-3}$ for upland and paddy soils, in 1.35~1.50 Mg $m^{-3}$ for orchard soils, and in 1.0~1.50 Mg $m^{-3}$ for plastic film house soils. Classified by soil textural family, bulk density was lower in clayey soils and higher in fine silty and sandy soils. Soil physical properties and topographic distribution differed by land use and crop type; therefore, land use and crop type need to be considered for appropriate soil management.
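The land-use ordering of bulk density reported above can be restated as a minimal sketch (Python); the labels and values below are simply the average topsoil bulk densities quoted in the abstract, not new data:

```python
# Reported average topsoil bulk densities (Mg m^-3) from the survey above.
topsoil_bd = {
    "plastic film house": 1.09,
    "upland": 1.33,
    "orchard": 1.25,
    "paddy": 1.22,
}

# Order land uses by average topsoil bulk density, as stated in the abstract.
order = sorted(topsoil_bd, key=topsoil_bd.get)
print(order)  # ['plastic film house', 'paddy', 'orchard', 'upland']
```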

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy. Recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 sample observations; 1,187 days were used to train the suggested GARCH models, and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting; the polynomial kernel function, however, shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can now trade. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of SVR-based IVTS is +526.4%, and that of MLE-based IVTS is +150.2%. SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage costs, and IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
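As a hedged illustration of the mechanics described in this abstract (not the paper's fitted models), the GARCH(1,1) variance recursion and the stated IVTS entry rule can be sketched as follows; `omega`, `alpha`, and `beta` are illustrative placeholders, not estimated parameters:

```python
import numpy as np

def garch11_sigma2(returns, omega, alpha, beta):
    """One-step-ahead GARCH(1,1) conditional variance recursion:
    sigma^2_t = omega + alpha * r^2_{t-1} + beta * sigma^2_{t-1}."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = np.var(returns)  # a common initialization choice
    for t in range(len(returns)):
        sigma2[t + 1] = omega + alpha * returns[t] ** 2 + beta * sigma2[t]
    return sigma2

def ivts_signal(vol_forecast_today, vol_forecast_tomorrow, position):
    """IVTS entry rule from the abstract: buy volatility if the forecast
    rises, sell if it falls, otherwise hold the existing position."""
    if vol_forecast_tomorrow > vol_forecast_today:
        return +1   # long volatility
    if vol_forecast_tomorrow < vol_forecast_today:
        return -1   # short volatility
    return position  # direction unchanged: hold
```

In the paper the forecasts feeding `ivts_signal` come from the MLE- or SVR-estimated GARCH models; here any volatility forecast series could be plugged in.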

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures in IT facilities occur irregularly because of interdependence, and their causes are difficult to identify. Previous studies on predicting failure in data centers treated each server as a single, isolated state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), focusing on the analysis of complex failures occurring within servers. Server-external failures include power, cooling, user errors, etc.; since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the cause of failures occurring within the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures rarely occur singly: a failure may cause other server failures or be triggered by failures on other servers. In other words, while existing studies assumed a single server that does not affect other servers, this study assumes that failures have effects between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring for each device are sorted in chronological order, and when a failure occurs in a specific piece of equipment, any failure occurring in another piece of equipment within 5 minutes of that occurrence is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, 5 devices that frequently failed simultaneously within the constructed sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, in consideration of the fact that the degree of involvement in multiple failures differs for each server; this algorithm improves prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining the types of failure and selecting the analysis targets. In the first experiment, the same collected data was treated as a single-server state and as a multiple-server state, and the two were compared and analyzed. The second experiment improved prediction accuracy in the complex-server case by optimizing the threshold for each server. In the first experiment, the single-server assumption predicted that three of the five servers had no failure even though failures actually occurred, while the multiple-server assumption correctly predicted that all five servers had failed; this supports the hypothesis that there are effects between servers. As a result of this study, it was confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
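The 5-minute simultaneity rule described in this abstract can be sketched as a minimal illustration (assuming a simple `(timestamp, equipment)` event log; this is not the authors' implementation):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)  # simultaneity window from the abstract

def group_simultaneous(events):
    """Sort (timestamp, equipment) failure events chronologically and
    group each failure with the failures that occur within 5 minutes
    of the first failure in the current group."""
    events = sorted(events, key=lambda e: e[0])
    groups = []
    for ts, equip in events:
        # Compare against the first (earliest) timestamp of the last group.
        if groups and ts - groups[-1][0][0] <= WINDOW:
            groups[-1].append((ts, equip))
        else:
            groups.append([(ts, equip)])
    return [[equip for _, equip in g] for g in groups]
```

Each resulting group is one candidate "simultaneous failure" sequence; in the study, frequently co-occurring devices within such sequences were then selected for the LSTM/Hierarchical Attention Network models.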