• Title/Summary/Keyword: Artificial Neural Network


An Option Hedge Strategy Using Machine Learning and Dynamic Delta Hedging (기계학습과 동적델타헤징을 이용한 옵션 헤지 전략)

  • Ru, Jae-Pil;Shin, Hyun-Joon
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.2 / pp.712-717 / 2011
  • Option issuers generally utilize the Dynamic Delta Hedging (DDH) technique to avoid the risk resulting from continuously changing option value. DDH duplicates the payoff of an option position by adjusting the hedge position according to the delta value from the Black-Scholes (BS) model, in order to maintain a risk-neutral state. DDH, however, cannot guarantee optimal hedging performance because of weaknesses caused by the impractical assumptions inherent in the BS model. Therefore, this study presents a methodology for dynamic option hedging using an artificial neural network (ANN) to enhance hedging performance, and shows the superiority of the proposed method through various computational experiments.
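The BS delta that DDH tracks can be sketched in a few lines; the function names and parameter values below are illustrative assumptions, not taken from the paper:

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_delta(S, K, r, sigma, T):
    # Black-Scholes delta of a European call: N(d1).
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

def hedge_adjustment(S, K, r, sigma, T, current_position):
    # Units of the underlying to buy (+) or sell (-) so that a short-call
    # hedge stays delta-neutral after the market moves.
    return bs_call_delta(S, K, r, sigma, T) - current_position
```

Re-running `hedge_adjustment` as `S` and `T` change over time is the dynamic part of DDH; the ANN approach in the paper replaces the BS delta with a learned hedge ratio.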

Development and Application of Total Maximum Daily Loads Simulation System Using Nonpoint Source Pollution Model (비점원오염모델을 이용한 오염총량모의시스템의 개발 및 적용)

  • Kang, Moon-Seong;Park, Seung-Woo
    • Journal of Korea Water Resources Association / v.36 no.1 / pp.117-128 / 2003
  • The objectives of this study are to develop TOLOS, a total maximum daily loads simulation system capable of estimating annual nonpoint source pollution from small watersheds; to monitor the hydrology and water quality of the Balkan HP#6 watershed; and to validate TOLOS with the field data. TOLOS consists of three subsystems: an input data processor based on a geographic information system, the models, and a post processor. The land use pattern at the tested watershed was classified from Landsat TM data using an artificial neural network model that adopts an error back-propagation algorithm. Paddy field components were added to the SWAT model to simulate water balance at irrigated paddy blocks. SWAT model parameters were obtained from the GIS database, and additional parameters were calibrated with field data. TOLOS was then tested under ungauged conditions. The simulated runoff agreed reasonably well with the observed data, and the simulated water quality parameters appear reasonably comparable to the field data.
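The error back-propagation step behind the land-use classification can be illustrated with a minimal hand-rolled network; the toy 2-2-1 architecture, AND-style labels, and learning rate are assumptions for illustration, not the paper's actual Landsat TM setup:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy "pixel" samples: two input bands and a 0/1 class label
# (a separable AND-style task standing in for land-use classes).
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 0.0),
        ([1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]

# 2-2-1 network; each hidden unit has two weights plus a bias.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(3)]
LR = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
    return h, o

for _ in range(20000):
    for x, t in data:
        h, o = forward(x)
        # Error terms: output delta, then deltas back-propagated to the hidden layer.
        d_out = (o - t) * o * (1.0 - o)
        d_hid = [d_out * w2[j] * h[j] * (1.0 - h[j]) for j in range(2)]
        # Gradient-descent weight updates, output layer first.
        for j in range(2):
            w2[j] -= LR * d_out * h[j]
        w2[2] -= LR * d_out
        for j in range(2):
            w1[j][0] -= LR * d_hid[j] * x[0]
            w1[j][1] -= LR * d_hid[j] * x[1]
            w1[j][2] -= LR * d_hid[j]
```

After training, `forward(x)` returns an output near the class label; a real Landsat classifier would use many spectral bands and one output per land-use class.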

Prediction of pollution loads in the Geum River upstream using the recurrent neural network algorithm

  • Lim, Heesung;An, Hyunuk;Kim, Haedo;Lee, Jeaju
    • Korean Journal of Agricultural Science / v.46 no.1 / pp.67-78 / 2019
  • The purpose of this study was to predict water quality using an RNN (recurrent neural network) and LSTM (long short-term memory). These are advanced machine learning algorithms that are better suited to time-series learning than ordinary artificial neural networks; however, they had not previously been investigated for water quality prediction. Three water quality indexes, the BOD (biochemical oxygen demand), COD (chemical oxygen demand), and SS (suspended solids), are predicted by the RNN and LSTM. TensorFlow, an open-source library developed by Google, was used to implement the machine learning algorithms. The Okcheon observation point in the Geum River basin in the Republic of Korea was selected as the target point for the water quality prediction. Ten years of daily observed meteorological (daily temperature and daily wind speed) and hydrological (water level and flow discharge) data were used as the inputs, and irregularly observed water quality (BOD, COD, and SS) data were used as the training data. The irregularly observed water quality data were converted into daily data with the linear interpolation method. Water quality one day ahead was predicted by the machine learning algorithms, and it was found that water quality can be predicted with high accuracy, compared to existing physical modeling results, for the BOD, COD, and SS, which are highly non-linear. The sequence length and iteration count were varied to compare the performances of the algorithms.
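The conversion of irregular observations into a daily series by linear interpolation can be sketched as follows; the function name and the (day index, value) representation are assumptions for illustration:

```python
def interpolate_daily(obs):
    """Fill in daily values between irregular (day_index, value)
    observations with linear interpolation, as described for the
    BOD/COD/SS series. Returns a dict mapping day index to value."""
    obs = sorted(obs)
    out = {}
    for (d0, v0), (d1, v1) in zip(obs, obs[1:]):
        for d in range(d0, d1):
            # Linear ramp between the two surrounding observations.
            out[d] = v0 + (v1 - v0) * (d - d0) / (d1 - d0)
    # Keep the final observation itself.
    out[obs[-1][0]] = obs[-1][1]
    return out
```

For example, `interpolate_daily([(0, 1.0), (4, 3.0)])` fills days 1-3 with evenly spaced values, giving 2.0 on day 2.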

Convolutional Neural Network Model Using Data Augmentation for Emotion AI-based Recommendation Systems

  • Ho-yeon Park;Kyoung-jae Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.12 / pp.57-66 / 2023
  • In this study, we propose a novel research framework for a recommendation system that can estimate the user's emotional state and reflect it in the recommendation process by applying deep learning techniques and emotion AI (artificial intelligence). To this end, we build an emotion classification model that classifies each of the seven emotions angry, disgust, fear, happy, sad, surprise, and neutral, and propose a model that can reflect the classification result in the recommendation process. In general emotion classification data, however, the distribution differs greatly between labels, so generalized classification results can be difficult to obtain. Since the number of samples for emotions such as disgust in emotion image data is often insufficient, this study corrects the imbalance through augmentation. Lastly, we propose a method to reflect the emotion prediction model, trained on the augmented image data, in recommendation systems.
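The augmentation used to inflate scarce classes such as disgust can be sketched with simple flip and brightness transforms; the specific transforms and the nested-list pixel representation are illustrative assumptions, not the paper's exact pipeline:

```python
def hflip(img):
    # Horizontal flip: reverse each row of pixel values.
    return [row[::-1] for row in img]

def adjust_brightness(img, delta):
    # Shift every pixel, clamping to the 0-255 range.
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

def augment(img):
    # One original image yields four variants; applied only to the
    # under-represented emotion classes to balance the label distribution.
    return [img,
            hflip(img),
            adjust_brightness(img, 30),
            adjust_brightness(hflip(img), -30)]
```

In practice the same idea is usually expressed through a library's augmentation layers (random flips, rotations, brightness jitter) rather than hand-written loops.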

Prediction on Mix Proportion Factor and Strength of Concrete Using Neural Network (신경망을 이용한 콘크리트 배합요소 및 압축강도 추정)

  • 김인수;이종헌;양동석;박선규
    • Journal of the Korea Concrete Institute / v.14 no.4 / pp.457-466 / 2002
  • An artificial neural network was applied to predict the compressive strength, slump value, and mix proportion of concrete. Standard mix tables were trained and estimated, and the results were compared with those of the experiments. To account for variability in material properties, standard mix tables from two ready-mixed-concrete companies were used and trained with the neural network. In this paper, a standard back-propagation network was used. The mix proportion factors were water-cement ratio, sand-aggregate ratio, unit water, unit cement, unit weight of sand, unit weight of crushed sand, unit coarse aggregate, and air-entraining admixture. To verify the prediction of the mix proportion factors, standard compressive strengths of $180kgf/cm^2{\sim}300kgf/cm^2$ and target slump values of 8 cm and 15 cm were used. To verify the prediction of compressive strength and slump value, standard compressive strengths of $210kgf/cm^2{\sim}240kgf/cm^2$ and target slump values of 12 cm and 15 cm were used, because these ranges are the most frequently used. In the prediction of the mix proportion factors, for all of the water-cement ratio, sand-aggregate ratio, unit water, unit cement, unit weight of sand, unit weight of crushed sand, unit coarse aggregate, and air-entraining admixture, the predicted values and the values of the standard mix tables were almost the same, within the target errors of 0.10 and 0.05, for both companies. In the prediction of compressive strength and slump value, the predicted values converged well to the values of the standard mix tables within the target errors of 0.10, 0.05, and 0.001. Thus the artificial neural network was successfully applied to the prediction of concrete mix proportions and compressive strength.

A Study on the Decision-Making of Private Banker's in Recommending Hedge Fund among Financial Goods (은행 금융상품에서 프라이빗 뱅커의 전문투자형 사모펀드 추천 의사결정)

  • Yu, Hwan;Lee, Young-Jai
    • The Journal of Information Systems / v.28 no.4 / pp.333-358 / 2019
  • Purpose The study aims to develop a data-based decision model for private bankers recommending hedge funds to their customers in financial institutions. Design/methodology/approach The independent variables are set in two groups. The first group comprises investor types: aggressive, active, and risk-neutral investors. The second group comprises variables considered by private bankers: customer propensity to invest, reliability, product subscription experience, professionalism, intimacy, and product understanding. The decision of a private banker recommending a first-rate general private fund composed of foreign and domestic FinTech products is described by dependent variables that include target return rate (%), fund period (months), safeguard existence, underlying asset, and hedge fund name. Findings Based on the research results, there is 94.4% accuracy in decision-making when the independent variables (customer rating, reliability, intimacy, product subscription experience, professionalism, and product understanding) are used with the dependent variables in the following order: step 1, safeguard existence; step 2, target return rate; step 3, fund period; and step 4, hedge fund name. Next, 93.7% accuracy is expected when decision-making uses the dependent variables in the following order: step 1, safeguard existence; step 2, target return rate; step 3, underlying asset; and step 4, fund period. In conclusion, a private banker conducts a staged decision-making process when recommending hedge funds to customers. According to a categorical regression model and an artificial neural network analysis model, the independent variables influencing the dependent variables in these recommendations are intimacy, product understanding, and product subscription experience.

COMPARISON OF LINEAR AND NON-LINEAR NIR CALIBRATION METHODS USING LARGE FORAGE DATABASES

  • Berzaghi, Paolo;Flinn, Peter C.;Dardenne, Pierre;Lagerholm, Martin;Shenk, John S.;Westerhaus, Mark O.;Cowe, Ian A.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1141-1141 / 2001
  • The aim of the study was to evaluate the performance of 3 calibration methods, modified partial least squares (MPLS), local PLS (LOCAL) and artificial neural network (ANN) on the prediction of chemical composition of forages, using a large NIR database. The study used forage samples (n=25,977) from Australia, Europe (Belgium, Germany, Italy and Sweden) and North America (Canada and U.S.A) with information relative to moisture, crude protein and neutral detergent fibre content. The spectra of the samples were collected with 10 different Foss NIR Systems instruments, which were either standardized or not standardized to one master instrument. The spectra were trimmed to a wavelength range between 1100 and 2498 nm. Two data sets, one standardized (IVAL) and the other not standardized (SVAL) were used as independent validation sets, but 10% of both sets were omitted and kept for later expansion of the calibration database. The remaining samples were combined into one database (n=21,696), which was split into 75% calibration (CALBASE) and 25% validation (VALBASE). The chemical components in the 3 validation data sets were predicted with each model derived from CALBASE using the calibration database before and after it was expanded with 10% of the samples from IVAL and SVAL data sets. Calibration performance was evaluated using standard error of prediction corrected for bias (SEP(C)), bias, slope and R2. None of the models appeared to be consistently better across all validation sets. VALBASE was predicted well by all models, with smaller SEP(C) and bias values than for IVAL and SVAL. This was not surprising as VALBASE was selected from the calibration database and it had a sample population similar to CALBASE, whereas IVAL and SVAL were completely independent validation sets. 
In most cases, the LOCAL and ANN models, but not MPLS, showed considerable improvement in the prediction of IVAL and SVAL after the calibration database had been expanded with the 10% of IVAL and SVAL samples reserved for calibration expansion. The effects of sample processing, instrument standardization, and differences in reference procedure were partially confounded in the validation sets, so it was not possible to determine which factors were most important. Further work on the development of large databases must address the problems of instrument standardization, harmonization and standardization of laboratory procedures and, even more importantly, the definition of the database population.
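The validation statistics named above can be sketched as follows; the exact chemometric conventions (bias as mean residual, SEP(C) with an n-1 denominator) are a common definition and an assumption here:

```python
from math import sqrt
from statistics import mean

def calibration_stats(y_ref, y_pred):
    """Bias, bias-corrected standard error of prediction SEP(C),
    and R^2 for a validation set, following common chemometric
    definitions (assumed, not taken from the paper)."""
    n = len(y_ref)
    residuals = [p - r for r, p in zip(y_ref, y_pred)]
    bias = mean(residuals)
    # SEP(C): spread of residuals around the bias.
    sep_c = sqrt(sum((e - bias) ** 2 for e in residuals) / (n - 1))
    y_bar = mean(y_ref)
    r2 = 1.0 - sum(e ** 2 for e in residuals) / sum((r - y_bar) ** 2 for r in y_ref)
    return bias, sep_c, r2
```

Separating bias from SEP(C) matters for comparisons like IVAL versus SVAL: a model can be systematically offset (large bias) while still tracking the reference values tightly (small SEP(C)).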


Contactless Data Society and Reterritorialization of the Archive (비접촉 데이터 사회와 아카이브 재영토화)

  • Jo, Min-ji
    • The Korean Journal of Archival Studies / no.79 / pp.5-32 / 2024
  • The Korean government ranked 3rd among 193 UN member countries in the UN's 2022 e-Government Development Index. Korea, consistently evaluated as a top country, can clearly be called a world leader in e-government. The lubricant of e-government is data. Data itself is neither information nor a record, but it is a source of information and records and a resource of knowledge. Since administrative actions through electronic systems have become widespread, the production and technology of data-based records have naturally expanded and evolved. Technology may seem value-neutral, but in fact technology itself reflects a specific worldview. The digital order of new technologies, armed with hyper-connectivity and super-intelligence, has a profound influence not only on traditional power structures but also on existing media for transmitting information and knowledge. Moreover, new technologies and media, including data-based generative artificial intelligence, are by far the hottest topic. The all-round growth and spread of digital technology has led to the augmentation of human capabilities and the outsourcing of thinking. This brings a variety of problems, ranging from deep fakes and other fake images, automated profiling, and AI hallucinations that present fabrications as if they were real, to copyright infringement involving machine-learning data. Moreover, radical connectivity enables the instantaneous sharing of vast amounts of data and relies on the technological unconscious to generate actions without awareness. Another irony of the digital world and online networks, which are based on immaterial distribution and logical existence, is that access and contact can only be made through physical tools. Digital information is a logical object, but digital resources cannot be read or utilized without some type of device to relay them.
In that respect, machines in today's technological society have gone beyond the level of simple assistance, and at some points it is difficult to say that the entry of machines into human society is a natural pattern of change driven by advanced technological development, because perspectives on machines change over time. What matters are the social and cultural implications of changes in the way records are produced as a result of communication and action through machines. Even in the archive field, it is time to research what problems a data-based archive society will face amid technological change toward a hyper-intelligent, hyper-connected society, who will prove the continuous activity of records and data, and what will be the main drivers of media change. This study began with the need to recognize that archives are not only records that result from actions but also data as strategic assets. On this basis, the author considered how to expand traditional boundaries and achieve reterritorialization in a data-driven society.