• Title/Summary/Keyword: Affective Domain (정의적 영역)


The Development of Estimation Model (AFKAE0.5) for Water Balance and Soil Water Content Using Daily Weather Data (일별 기상자료를 이용한 농경지 물 수지 및 토양수분 예측모형 (AFKAE0.5) 개발)

  • Seo, Myung-Chul;Hur, Seung-Oh;Sonn, Yeon-Kyu;Cho, Hyeon-Suk;Jeon, Weon-Tai;Kim, Min-Kyeong;Kim, Min-Tae
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.6
    • /
    • pp.1203-1210
    • /
    • 2012
  • As the area planted to upland crops increases, it becomes more important for farmers to understand the soil water status of their own fields, because proper irrigation plays a key role. To estimate the daily water balance and soil water content from simple weather data and irrigation records, we developed a model called AFKAE0.5 and verified its simulated results against the daily change of soil water content observed with soil profile moisture sensors. AFKAE0.5 rests on two assumptions. The first is that the soil in the model is 300 mm deep and is characterized by its soil texture. The second is that water movement between the subject soil layer and the soil beneath it is simplified into three categories defined by soil water potential. AFKAE0.5 is characterized by its determination of the amount of upward and downward water flux between the subject soil and the soil beneath it. In a simulation of AFKAE0.5 for red pepper cultivation in the Gongju region in 2005, the annual water balance (input minus output) was -88 mm. The annual inputs from precipitation, irrigation, and upward flux were 1,043, 0, and 207 mm, respectively, while the outputs from evapotranspiration, runoff, and percolation were 831, 309, and 161 mm, respectively.
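The abstract does not give AFKAE0.5's equations; the sketch below is only a minimal daily bucket-type water balance of the kind described, for a single soil layer, with function and parameter names and values of our own choosing rather than the model's actual formulation.

```python
# Minimal daily bucket water balance sketch (illustrative only; not the AFKAE0.5 formulation).
# Each day consumes (precipitation, irrigation, evapotranspiration) in mm.

def daily_water_balance(days, field_capacity=90.0, wilting_point=30.0, init_storage=60.0,
                        runoff_fraction=0.2, percolation_rate=0.05):
    """Track soil water storage (mm) in a single soil layer; returns daily storage values."""
    storage = init_storage
    history = []
    for precip, irrigation, et in days:
        inflow = precip + irrigation
        # A fraction of the water exceeding the remaining storage capacity runs off.
        runoff = runoff_fraction * max(0.0, inflow - (field_capacity - storage))
        storage += inflow - runoff
        # Actual ET is limited by the water available above the wilting point.
        actual_et = min(et, max(0.0, storage - wilting_point))
        storage -= actual_et
        # Water above field capacity slowly percolates to the layer beneath.
        percolation = percolation_rate * max(0.0, storage - field_capacity)
        storage -= percolation
        history.append(storage)
    return history

# Example: three days of weather (P, I, ET) in mm/day.
print(daily_water_balance([(12.0, 0.0, 3.5), (0.0, 5.0, 4.0), (0.0, 0.0, 4.2)]))
```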

A review on teaching contents in area of Korean math textbook for first grade - even and odd number, composition and decomposition of numbers, calculation with carrying or with borrowing - (우리나라 초등학교 1학년 수학 교과서 <수와 연산> 영역의 지도 내용 검토 - 짝수.홀수, 수의 합성.분해, 받아올림.받아내림이 있는 계산 -)

  • Lee, Seung;Choi, Kyoung A;Park, Kyo Sik
    • Journal of the Korean School Mathematics Society
    • /
    • v.18 no.1
    • /
    • pp.1-14
    • /
    • 2015
  • In this paper, in order to improve the teaching of even and odd numbers, composition and decomposition of numbers, (1 digit)+(1 digit) with carrying, and (10 and 1 digit)-(1 digit) with borrowing, the corresponding contents in ≪Math 1-1≫ and ≪Math 1-2≫ are critically reviewed. The implications obtained through this review can be summarized as follows. First, the current incomplete definition of even and odd numbers should be reconsidered, as should the appropriateness of dealing with even and odd numbers in the first grade. Second, it is necessary to deal with the composition and decomposition of numbers less than 20: composing a (10 and 1 digit) number from 10 and a (1 digit) number, and decomposing it into 10 and a (1 digit) number, should be treated on the basis of 10, and the composition and decomposition of 10 should be dealt with before that of (10 and 1 digit) numbers. In contrast, composing a (10 and 1 digit) number from two (1 digit) numbers, and decomposing it into two (1 digit) numbers, is of little practical use. Third, it is necessary to eliminate the logical leap in the calculation process: composition and decomposition on the basis of 10 should be used to remove the leap that appears in the explanation of (1 digit)+(1 digit) with carrying and (10 and 1 digit)-(1 digit) with borrowing. Finally, for consistency, the vertical format for these calculations should either be dealt with in ≪Math 1-2≫ itself or not be dealt with in the ≪Math 1-2 workbook≫.
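The decomposition-on-the-basis-of-10 strategy discussed in the review can be made concrete with a small sketch; the function names and the printed step-by-step traces below are ours, intended only to illustrate the "make ten" and "take from ten" reasoning, not the textbook's presentation.

```python
# Illustrative sketch of addition with carrying and subtraction with borrowing,
# using composition/decomposition of numbers on the basis of 10.

def add_with_carrying(a, b):
    """(1 digit) + (1 digit) with carrying, e.g. 8 + 7."""
    assert 1 <= a <= 9 and 1 <= b <= 9 and a + b >= 10, "carrying case only"
    to_ten = 10 - a              # decompose b into (to_ten) and (b - to_ten)
    rest = b - to_ten
    print(f"{a} + {b} = {a} + {to_ten} + {rest} = 10 + {rest} = {10 + rest}")
    return 10 + rest

def subtract_with_borrowing(n, b):
    """(10 and 1 digit) - (1 digit) with borrowing, e.g. 15 - 7."""
    assert 11 <= n <= 19 and 1 <= b <= 9 and b > n - 10, "borrowing case only"
    ones = n - 10                # decompose n into 10 and its ones
    from_ten = 10 - b            # take b from the 10, then add the ones back
    print(f"{n} - {b} = (10 - {b}) + {ones} = {from_ten} + {ones} = {from_ten + ones}")
    return from_ten + ones

add_with_carrying(8, 7)          # 8 + 7 = 8 + 2 + 5 = 10 + 5 = 15
subtract_with_borrowing(15, 7)   # 15 - 7 = (10 - 7) + 5 = 3 + 5 = 8
```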

Detection with a SWNT Gas Sensor and Diffusion of SF6 Decomposition Products by Corona Discharges (탄소나노튜브 가스센서의 SF6 분해생성물 검출 및 확산현상에 관한 연구)

  • Lee, J.C.;Jung, S.H.;Baik, S.H.
    • Journal of the Korean Vacuum Society
    • /
    • v.18 no.1
    • /
    • pp.66-72
    • /
    • 2009
  • Detection methods are required to monitor and diagnose abnormalities in the insulation condition inside a gas-insulated switchgear (GIS). Because of its good sensitivity to the products decomposed by partial discharges (PDs) in $SF_6$ gas, the development of the SWNT gas sensor is actively in progress. However, only a few numerical studies on the diffusion mechanism of the $SF_6$ decomposition products generated by PD have been reported. In this study, we modeled the $SF_6$ decomposition process in a chamber by calculating the temperature, pressure, and concentration of the decomposition products using a commercial CFD program in conjunction with experimental data. It was assumed that the mass production rate and the generation temperature of the decomposition products were $5.04\times10^{-10}$ g/s and over 773 K, respectively. To solve the concentration equation, the Schmidt number was specified so that the diffusion coefficient could be obtained as a function of the viscosity and density of the $SF_6$ gas, rather than setting the coefficient directly. The results showed that the driving potential is governed mainly by the gradient of the decomposition-product concentration. A lower concentration of the decomposition products was observed as the sensors were placed farther away from the discharge region, and the concentration increased with increasing discharge time. By installing multiple sensors, the location of the PD is expected to be identified by monitoring the response times of the sensors, and this information should be very useful for the diagnosis and maintenance of GIS.
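The relation the abstract refers to, obtaining the diffusion coefficient from a specified Schmidt number rather than setting it directly, is $D = \mu/(\rho \cdot Sc)$. A minimal sketch follows; the $SF_6$ property values and the Schmidt number used here are rough illustrative figures, not the values used in the study.

```python
# Diffusion coefficient from the Schmidt number: Sc = mu / (rho * D)  =>  D = mu / (rho * Sc).
# The property values below are approximate figures for SF6 near room conditions,
# chosen for illustration only.

def diffusion_coefficient(mu, rho, schmidt):
    """Return D in m^2/s given dynamic viscosity mu (Pa*s), density rho (kg/m^3), and Sc (-)."""
    return mu / (rho * schmidt)

mu_sf6 = 1.5e-5      # Pa*s, approximate dynamic viscosity of SF6
rho_sf6 = 6.1        # kg/m^3, approximate density of SF6 at ~1 atm, 293 K
Sc = 1.0             # assumed Schmidt number (illustrative)

D = diffusion_coefficient(mu_sf6, rho_sf6, Sc)
print(f"D = {D:.3e} m^2/s")
```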

Study on Production Performance of Shale Gas Reservoir using Production Data Analysis (생산자료 분석기법을 이용한 셰일가스정 생산거동 연구)

  • Lee, Sun-Min;Jung, Ji-Hun;Sin, Chang-Hoon;Kwon, Sun-Il
    • Journal of the Korean Institute of Gas
    • /
    • v.17 no.4
    • /
    • pp.58-69
    • /
    • 2013
  • This paper presents production data analysis for two production wells in a shale gas field in Canada, with the analysis method chosen according to the production performance characteristics of each well. For production well A, the analysis was performed using both time and superposition time, because the production history showed large variation. First, the flow regimes were classified on a log-log plot, and only transient flow appeared. The area of the stimulated reservoir volume (SRV), analyzed from the flowing material balance plot, was then calculated as 180 acres using time and 240 acres using superposition time, and the original gas in place (OGIP) was estimated at 15 and 20 Bscf, respectively. However, because the SRV area was not derived from boundary-dominated flow data, it was regarded as a minimum value. Production forecasting was therefore conducted for different values of the b exponent and the SRV area. As a result, the estimated ultimate recovery (EUR) increased by factors of 1.2 and 1.4 for b exponents of 0.5 and 1, respectively, and increased by a factor of 1.3 as the SRV area increased from 240 to 360 acres. For production well B, formation compressibility and permeability dependent on overburden stress were applied to analyze the overpressured reservoir. Comparing the case that applied these geomechanical factors with the case that did not, the SRV area increased by a factor of 1.4 and the OGIP by a factor of 1.5. The analysis shows that predictions of future productivity, including OGIP and EUR, may differ considerably depending on the analysis method. Thus, appropriate techniques such as pseudo-time, superposition time, and geomechanical factors need to be applied according to the production data to obtain accurate results.
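The sensitivity of the forecast to the b exponent refers to decline-curve analysis; a minimal sketch of the Arps hyperbolic decline relation $q(t) = q_i / (1 + b\,D_i\,t)^{1/b}$ is shown below to illustrate how b influences cumulative recovery. The initial rate, decline rate, and horizon are made-up values, not figures from the study.

```python
# Arps hyperbolic decline: q(t) = qi / (1 + b * Di * t)^(1/b); exponential decline as b -> 0.
# Cumulative production is integrated numerically; qi, Di and the horizon are illustrative.
from math import exp

def arps_rate(qi, di, b, t):
    """Gas rate at time t (time units consistent with 1/di)."""
    if b == 0:
        return qi * exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def cumulative(qi, di, b, horizon_days, step=1.0):
    """Trapezoidal cumulative production over the forecast horizon."""
    total, t = 0.0, 0.0
    while t < horizon_days:
        total += 0.5 * (arps_rate(qi, di, b, t) + arps_rate(qi, di, b, t + step)) * step
        t += step
    return total

qi = 5.0e6      # initial rate, scf/day (illustrative)
di = 0.003      # nominal decline, 1/day (illustrative)
for b in (0.5, 1.0):
    eur = cumulative(qi, di, b, horizon_days=30 * 365)
    print(f"b = {b}: 30-year cumulative ~ {eur / 1e9:.2f} Bscf")
```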

Ecoclimatic Map over North-East Asia Using SPOT/VEGETATION 10-day Synthesis Data (SPOT/VEGETATION NDVI 자료를 이용한 동북아시아의 생태기후지도)

  • Park Youn-Young;Han Kyung-Soo
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.8 no.2
    • /
    • pp.86-96
    • /
    • 2006
  • Ecoclimap-1, a complete global database of surface parameters at 1-km resolution, was presented previously. It is intended to be used to initialize soil-vegetation-atmosphere transfer schemes in meteorological and climate models. Surface parameters in the Ecoclimap-1 database are provided as per-class values through an ecoclimatic base map produced by a simple merging of land cover and climate maps. The principal objective of this ecoclimatic map is to account for the intra-class variability of the vegetation life cycle that the usual land cover maps cannot describe. However, even with the ecoclimatic map combining land cover and climate, the intra-class variability remained too high within some classes. In this study, a new strategy is defined: the idea is to use the information contained in SPOT/VEGETATION 10-day synthesis (S10) NDVI profiles to split a land cover class into more homogeneous sub-classes, using an intra-class unsupervised sub-clustering methodology instead of simple merging. This study was performed to provide a new ecoclimatic map over Northeast Asia in the framework of the Ecoclimap-2 global database construction for surface parameters. We used the University of Maryland 1-km Global Land Cover Database (UMD) and a climate map to determine the initial number of clusters for intra-class sub-clustering. An unsupervised classification process using six years of NDVI profiles allows different behaviors to be discriminated within each land cover class. We checked the spatial coherence of the classes and, where necessary, carried out an aggregation step for clusters having similar NDVI time-series profiles. The mapping system yielded 29 ecosystems for the study area. For climate-related studies, this new ecosystem map may be useful as a base map for constructing the Ecoclimap-2 database and for improving the quality of the surface climatology in climate models.
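The intra-class sub-clustering described above can be sketched with an off-the-shelf clustering routine; the sketch below applies scikit-learn's KMeans to synthetic 10-day NDVI profiles and does not reproduce the paper's actual classification algorithm or its cluster-aggregation step.

```python
# Sketch of intra-class unsupervised sub-clustering of NDVI time-series profiles.
# Synthetic profiles stand in for the S10 SPOT/VEGETATION composites; the study itself
# clustered six years of 10-day NDVI profiles per land cover class and then aggregated
# clusters with similar profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_pixels, n_decades = 500, 36          # 36 ten-day composites per year

# Two synthetic phenologies within one land cover class: early peak vs. late peak.
t = np.arange(n_decades)
early = 0.2 + 0.5 * np.exp(-((t - 12) / 5.0) ** 2)
late = 0.2 + 0.5 * np.exp(-((t - 22) / 5.0) ** 2)
profiles = np.vstack([
    early + 0.03 * rng.standard_normal((n_pixels // 2, n_decades)),
    late + 0.03 * rng.standard_normal((n_pixels // 2, n_decades)),
])

# Split the class into sub-classes according to the shape of the annual NDVI cycle.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
print("pixels per sub-class:", np.bincount(kmeans.labels_))
```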

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering model of stock market volatility that captures the time-varying character of stock return volatility. His Autoregressive Conditional Heteroscedasticity (ARCH) model was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecast KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, which is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecast to increase, buy volatility today; if it is forecast to decrease, sell volatility today; if the forecast direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values themselves cannot be traded, but the simulation results are still meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return.
The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also trades more frequently. This study has some limitations. Our models are based solely on SVR, and other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. IVTS trading performance is not fully realistic, since historical volatility values are used as the trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
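For reference, the GARCH(1,1) recursion underlying both estimation approaches is $\sigma^2_t = \omega + \alpha\,\epsilon^2_{t-1} + \beta\,\sigma^2_{t-1}$. The sketch below shows plain MLE fitting of this recursion; the returns are simulated rather than KOSPI 200 data, and the hand-rolled optimizer setup is ours, not the paper's MLE or SVR implementation.

```python
# Minimal GARCH(1,1) maximum likelihood sketch on simulated returns (not KOSPI 200 data).
# Variance recursion: sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def simulate_garch(n, omega=0.05, alpha=0.08, beta=0.90):
    """Simulate returns from a GARCH(1,1) with standard-normal innovations."""
    eps = np.zeros(n)
    sigma2 = omega / (1 - alpha - beta)          # unconditional variance
    for t in range(n):
        eps[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * eps[t] ** 2 + beta * sigma2
    return eps

def garch_filter(params, returns):
    """Run the variance recursion; return conditional variances and the 1-step-ahead forecast."""
    omega, alpha, beta = params
    sigma2 = np.var(returns)
    variances = np.empty(len(returns))
    for t, r in enumerate(returns):
        variances[t] = sigma2
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    return variances, sigma2

def neg_log_likelihood(params, returns):
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                            # enforce positivity and stationarity
    variances, _ = garch_filter(params, returns)
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(variances) + returns ** 2 / variances)

returns = simulate_garch(1487)                   # same sample length as in the study
res = minimize(neg_log_likelihood, x0=[0.1, 0.1, 0.8], args=(returns,), method="Nelder-Mead")
_, sigma2_next = garch_filter(res.x, returns)
print("estimated (omega, alpha, beta):", np.round(res.x, 4))
print("one-step-ahead volatility forecast:", np.sqrt(sigma2_next))
```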

THE RELATIONSHIP BETWEEN PARTICLE INJECTION RATE OBSERVED AT GEOSYNCHRONOUS ORBIT AND DST INDEX DURING GEOMAGNETIC STORMS (자기폭풍 기간 중 정지궤도 공간에서의 입자 유입률과 Dst 지수 사이의 상관관계)

  • 문가희;안병호
    • Journal of Astronomy and Space Sciences
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2003
  • To examine the causal relationship between geomagnetic storms and substorms, we investigate the correlation between the dispersionless particle injection rate of proton flux observed at geosynchronous orbit, which is known to be a typical indicator of substorm expansion activity, and the Dst index during magnetic storms. We use geomagnetic storms that occurred during 1996~2000 and categorize them into three classes in terms of the minimum value of the Dst index ($Dst_{min}$): intense ($-200\,nT \leq Dst_{min} \leq -100\,nT$), moderate ($-100\,nT \leq Dst_{min} \leq -50\,nT$), and small ($-50\,nT \leq Dst_{min} \leq -30\,nT$) storms. We use the proton flux in the energy range from 50 keV to 670 keV, the major constituent of the ring current particles, observed by the LANL geosynchronous satellites located within the local time sector from 18:00 MLT to 04:00 MLT. We also examine the flux ratio ($f_{max}/f_{ave}$) to estimate the particle energy injection rate into the inner magnetosphere, where $f_{ave}$ and $f_{max}$ are the flux levels during the quiet period and at onset, respectively. The total energy injection rate into the inner magnetosphere cannot be estimated from particle measurements by one or two satellites; however, it should be at least proportional to the flux ratio and the injection frequency. Thus we propose a quantity, the "total energy injection parameter (TEIP)", defined as the product of the flux ratio and the injection frequency, as an indicator of the energy injected into the inner magnetosphere. To investigate the phase dependence of the substorm contribution to the development of a magnetic storm, we examine the correlations separately for the main and recovery phases of the storm. Several interesting tendencies are noted, particularly during the main phase. First, the average particle injection frequency tends to increase with storm size, with a correlation coefficient of 0.83. Second, the flux ratio ($f_{max}/f_{ave}$) tends to be higher during large storms; the correlation coefficient between $Dst_{min}$ and the flux ratio is generally high, for example 0.74 for the 75~113 keV energy channel. Third, there is a high correlation between the TEIP and $Dst_{min}$, with the highest coefficient (0.80) recorded for the 75~113 keV energy channel, the typical particle energy of the ring current belt. Fourth, particle injection during the recovery phase tends to make storms longer, particularly in the case of intense storms. These characteristics observed during the main phase of the magnetic storm indicate that substorm expansion activity is closely associated with the development of magnetic storms.
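The TEIP defined above is simply the product of the per-storm flux ratio and injection frequency; a minimal sketch of computing it and its correlation with $Dst_{min}$ is shown below. The storm values here are made up for illustration and are not the LANL observations used in the paper.

```python
# Sketch of the "total energy injection parameter" (TEIP = flux ratio x injection frequency)
# and its correlation with Dst_min across storms. All numbers are illustrative only.
import numpy as np

# Per-storm quantities: onset/quiet proton flux ratio f_max/f_ave, number of injections
# during the main phase, and the storm minimum Dst (nT).
flux_ratio = np.array([12.0, 8.5, 20.0, 5.0, 15.0, 3.5])
injection_frequency = np.array([6, 4, 9, 3, 7, 2])
dst_min = np.array([-150, -90, -180, -60, -130, -40])

teip = flux_ratio * injection_frequency

# Pearson correlation between TEIP and Dst_min (expected negative: larger TEIP, deeper Dst).
r = np.corrcoef(teip, dst_min)[0, 1]
print("TEIP per storm:", teip)
print(f"correlation(TEIP, Dst_min) = {r:.2f}")
```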

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have released their own AI technologies to the public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of a deep learning open source framework. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge and expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, in the usage stage, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers should be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise should be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies increase the number of deep learning research developers, their ability to use the deep learning framework, and the available GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves developers' research efficiency and effectiveness, for example by automatically optimizing the hardware (GPU) environment. Fifth, a deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by promoting developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework is proposed: defining the project problem, confirming whether deep learning is the right methodology, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework throughout the enterprise. The first three steps (defining the project problem, confirming the methodology, and confirming the tool) are pre-considerations for adopting a deep learning open source framework. Once these pre-consideration steps are clear, the next two steps (using the framework in the enterprise and spreading it throughout the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.