• Title/Summary/Keyword: DO optimization

Search Results: 700 (processing time: 0.032 seconds)

Development and Validation of an Analytical Method for Quinoxyfen in Agricultural Products using QuEChERS and LC-MS/MS (QuEChERS법 및 LC-MS/MS를 이용한 농산물 중 살균제 Quinoxyfen의 잔류시험법 개발 및 검증)

  • Cho, Sung Min;Do, Jung-Ah;Lee, Han Sol;Park, Ji-Su;Shin, Hye-Sun;Jang, Dong Eun;Choi, Young-Nae;Jung, Yong-hyun;Lee, Kangbong
    • Journal of Food Hygiene and Safety, v.34 no.2, pp.140-147, 2019
  • An analytical method was developed for the determination of quinoxyfen in agricultural products using the QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method and liquid chromatography-tandem mass spectrometry (LC-MS/MS). The samples were extracted with 1% acetic acid in acetonitrile, and water was removed by liquid-liquid partitioning with anhydrous magnesium sulfate (MgSO₄) and sodium acetate. Dispersive solid-phase extraction (d-SPE) cleanup was carried out using MgSO₄, primary secondary amine (PSA), octadecyl (C₁₈) and graphitized carbon black (GCB) sorbents. The analytes were quantified and confirmed by LC-MS/MS in positive-ion mode with multiple reaction monitoring (MRM). Matrix-matched calibration curves were constructed at six levels (0.001-0.25 μg/mL), and the coefficient of determination (R²) was above 0.99. Recoveries at three concentrations (LOQ, 10×LOQ and 50×LOQ, n=5) ranged from 73.5 to 86.7%, with relative standard deviations (RSDs) of less than 8.9%. In the inter-laboratory validation, average recoveries were 77.2-95.4% and the coefficient of variation (CV) was below 14.5%. All results met the criteria of the Codex guidelines (CAC/GL 40-1993, 2003) and the Food Safety Evaluation Department guidelines (2016). The proposed method is accurate, effective and sensitive for quinoxyfen determination in agricultural commodities and could be useful for the safe management of quinoxyfen residues in agricultural products.
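The six-level matrix-matched calibration and R² check described above can be sketched numerically; the concentration levels match the abstract, but the detector responses below are synthetic illustration values, not the study's data:

```python
import numpy as np

# Six-level matrix-matched calibration over 0.001-0.25 ug/mL, as in the
# abstract; the detector responses are made-up, roughly linear values.
conc = np.array([0.001, 0.005, 0.01, 0.05, 0.1, 0.25])   # ug/mL
area = np.array([120, 610, 1190, 6050, 12100, 30200])    # synthetic response

# Least-squares calibration line: response = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

# Coefficient of determination R^2 against the fitted line
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope={slope:.1f}, intercept={intercept:.2f}, R^2={r_squared:.4f}")
```

A curve passing the study's acceptance criterion would show R² above 0.99, as this synthetic one does.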

Development and Validation of an Analytical Method for Fenpropimorph in Agricultural Products Using QuEChERS and LC-MS/MS (QuEChERS법과 LC-MS/MS를 이용한 농산물 중 Fenpropimorph 시험법 개발 및 검증)

  • Lee, Han Sol;Do, Jung-Ah;Park, Ji-Su;Cho, Sung Min;Shin, Hye-Sun;Jang, Dong Eun;Choi, Young-Nae;Jung, Yong-hyun;Lee, Kangbong
    • Journal of Food Hygiene and Safety, v.34 no.2, pp.115-123, 2019
  • An analytical method was developed for the determination of fenpropimorph, a morpholine fungicide, in hulled rice, potato, soybean, mandarin and green pepper using QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) sample preparation and liquid chromatography-tandem mass spectrometry (LC-MS/MS). The QuEChERS extraction was performed with acetonitrile, followed by the addition of anhydrous magnesium sulfate and sodium chloride. After centrifugation, dispersive solid-phase extraction (d-SPE) cleanup was conducted using anhydrous magnesium sulfate, primary secondary amine sorbents and graphitized carbon black. Matrix-matched calibration curves were constructed at seven concentration levels, from 0.0025 to 0.25 mg/kg, and the coefficients of determination (R²) for the five agricultural products were higher than 0.9899. The limits of detection (LOD) and quantification (LOQ) were 0.001 and 0.0025 mg/kg, respectively, and the limit of quantification of the analytical method was 0.01 mg/kg. Average recoveries at three spiking levels (LOQ, LOQ×10 and LOQ×50, n=5) were in the range of 90.9-110.5%, with relative standard deviations of less than 5.7%. In the inter-laboratory validation, average recoveries between the two laboratories were 88.6-101.4% and the coefficient of variation was below 15%. All results satisfied the criteria of the Codex guidelines and the Food Safety Evaluation Department guidelines. This study could serve as a reference for the safety management of fenpropimorph residues in imported and domestic agricultural products.
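The average recovery and RSD figures reported at each spiking level can be computed as follows; the five replicate recoveries are made-up illustration values, not the study's measurements:

```python
import numpy as np

# Five synthetic replicate recoveries (%) at one spiking level (e.g. LOQ);
# illustration values only, not the study's data (n=5 per level).
recoveries = np.array([92.1, 95.4, 90.8, 94.0, 93.2])

mean_recovery = recoveries.mean()
# Relative standard deviation: sample SD as a percentage of the mean
rsd = recoveries.std(ddof=1) / mean_recovery * 100

# A Codex CAC/GL 40 style check would require recovery within roughly
# 70-120% at low levels, with a correspondingly small RSD.
print(f"mean recovery = {mean_recovery:.1f}%, RSD = {rsd:.1f}%")
```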

Are you a Machine or Human?: The Effects of Human-likeness on Consumer Anthropomorphism Depending on Construal Level (Are you a Machine or Human?: 소셜 로봇의 인간 유사성과 소비자 해석수준이 의인화에 미치는 영향)

  • Lee, Junsik;Park, Do-Hyung
    • Journal of Intelligence and Information Systems, v.27 no.1, pp.129-149, 2021
  • Recently, interest in social robots that can interact socially with humans has been increasing. Thanks to the development of ICT, social robots can more easily provide personalized services and emotional connection to individuals, and they are drawing attention as a means of addressing modern social problems and the resulting decline in individuals' quality of life. Along with this interest, the spread of social robots is also growing significantly. Many companies are introducing robot products to target various markets, but so far no clear trend leads the market. Accordingly, there are increasing attempts to differentiate robots through design. In particular, anthropomorphism has been an important topic in social robot design, and many approaches have attempted to anthropomorphize social robots to produce positive effects. However, research that systematically describes the mechanism by which anthropomorphism of social robots is formed is lacking. Most existing studies have focused on verifying the positive effects of the anthropomorphism of social robots on consumers. In addition, although the formation of anthropomorphism may vary depending on an individual's motivation or temperament, few studies have examined this. A vague understanding of anthropomorphism makes it difficult to derive optimal design points for shaping the anthropomorphism of social robots. The purpose of this study is to verify the mechanism by which the anthropomorphism of social robots is formed. This study confirmed the effects of the human-likeness of social robots (within-subjects) and the construal level of consumers (between-subjects) on the formation of anthropomorphism through an experimental study with a 3×2 mixed design.
Research hypotheses on the mechanism by which anthropomorphism is formed were presented, and the hypotheses were tested by analyzing data from a sample of 206 people. The first hypothesis is that the higher the human-likeness of the robot, the higher the level of anthropomorphism for the robot. Hypothesis 1 was supported by a one-way repeated measures ANOVA and a post hoc test. The second hypothesis is that the effect of human-likeness on the level of anthropomorphism differs depending on the construal level of consumers. First, this study predicts that the difference in the level of anthropomorphism as human-likeness increases will be greater under the high construal condition than under the low construal condition. Second, if the robot has no human-likeness, there will be no difference in the level of anthropomorphism according to construal level. Third, if the robot has low human-likeness, the low construal level condition will make the robot more anthropomorphic than the high construal level condition. Finally, if the robot has high human-likeness, the high construal level condition will make the robot more anthropomorphic than the low construal level condition. We performed a two-way repeated measures ANOVA to test these hypotheses and confirmed that the interaction effect of human-likeness and construal level was significant. Further analysis examining the interaction effect in detail also provided results in support of our hypotheses. The analysis shows that the human-likeness of the robot increases the level of anthropomorphism of social robots, and that the effect of human-likeness on anthropomorphism varies depending on the construal level of consumers. This study has implications in that it explains the mechanism by which anthropomorphism is formed by considering human-likeness, a design attribute of social robots, and construal level, an individual's way of thinking.
We expect to use the findings of this study as the basis for design optimization for the formation of anthropomorphism in social robots.
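As a rough illustration of the one-way repeated measures ANOVA used to test Hypothesis 1, the sketch below computes the F statistic by hand on synthetic anthropomorphism scores (three within-subjects human-likeness levels); the sample size, effect sizes and noise levels are assumptions, not the study's data:

```python
import numpy as np

# Synthetic data: rows = participants, columns = low / medium / high
# human-likeness; a per-subject baseline plus an increasing condition effect.
rng = np.random.default_rng(0)
n = 30
base = rng.normal(4.0, 0.8, size=(n, 1))             # per-subject baseline
scores = base + np.array([0.0, 0.6, 1.2]) + rng.normal(0, 0.4, size=(n, 3))

# One-way repeated-measures ANOVA by sum-of-squares decomposition:
# SS_total = SS_subjects + SS_conditions + SS_error
k = scores.shape[1]
grand = scores.mean()
ss_total = ((scores - grand) ** 2).sum()
ss_subj = k * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_cond = n * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_err = ss_total - ss_subj - ss_cond

df_cond, df_err = k - 1, (k - 1) * (n - 1)
f_stat = (ss_cond / df_cond) / (ss_err / df_err)
print(f"F({df_cond}, {df_err}) = {f_stat:.2f}")
```

Partialling out the subject sum of squares is what distinguishes this from a between-subjects ANOVA: each participant serves as their own control across the three robot conditions.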

A Legal Study on liability for damages cause of the air carrier : With an emphasis upon liability of passenger (항공운송인의 손해배상책임 원인에 관한 법적 고찰 - 여객 손해배상책임을 중심으로 -)

  • So, Jae-Seon;Lee, Chang-Kyu
    • The Korean Journal of Air & Space Law and Policy, v.28 no.2, pp.3-35, 2013
  • Air transport today is a means of transport optimized for exchanges between nations. Worldwide, the number of flights and routes has grown as countries have entered into international aviation agreements to exploit the efficiency of air transport, but with this growth the possibility of air transport accidents has also increased. Compared with other modes of transport, the accident rate in aviation is not high, but when an aviation accident does occur it tends to be catastrophic. Because air transport accidents are more often international than domestic, when an accident occurs it is necessary to analyze the legal liability of the air carrier toward shippers or passengers. The starting point for determining the applicable legal regime is the classification of the air transport agreement: depending on its object, an air transport agreement is classified as either a contract for the carriage of cargo or a contract for the carriage of passengers. Since passenger transport accidents involve casualties, their legal treatment in particular requires further discussion. The Korean Commercial Code reflects the content of the treaties widely used in international air transport, adapted to the actual situation of South Korea, and systematically governs land, sea and air transport in line with international standards. However, because the Korean Commercial Code primarily reflects the Montreal Convention, the problems of the Montreal Convention have been carried over into it. As to the causes of liability for damages, the Korean Commercial Code reflects the content of the treaties and precedents, but an exact definition of the concept of 'accident' is needed, particularly with regard to damages suffered by passengers.
Under the Montreal Convention, on which the Korean Commercial Code's provisions on the air carrier's liability for damages are modeled, the carrier is liable for death or bodily injury of a passenger caused by an accident on board the aircraft or in the course of the operations of embarking or disembarking. The scope of the 'operations of embarking or disembarking' and the concept of 'accident', defined in the same way as in the Warsaw Convention, continue to be matters of debate. It is also debated whether mental injury suffered by a passenger in air transport can be included in 'bodily injury' within the range of recoverable damages. In the event of an injury accident during the operation of an aircraft, compensation for mental injury should be possible in certain circumstances, because, like serious physical injury, mental injury caused by an aviation accident can prevent the victim from leading a normal life. It is therefore necessary to interpret what is included in 'bodily injury' under the Korean Commercial Code and the related conventions; from the viewpoint of preventing abusive litigation and reasonably protecting the air carrier, only non-economic damage that can be clearly demonstrated should be compensable. As for damages caused by delay, the Warsaw Convention, the Montreal Convention and the Korean Commercial Code all contain provisions on the carrier's liability for the delayed arrival of passengers and baggage, but none defines delayed arrival, so the concept of delay needs to be settled.
A strict interpretation of the concept of delayed arrival may interfere with the safe operation of the air carrier. Delayed arrival should therefore be defined as failure to arrive at the airport of destination within the time agreed in the contract of carriage of passengers or baggage, or, in the absence of such an agreement, within the time that could reasonably be expected of a carrier acting in good faith under the circumstances. According to Korean Air's international passenger Conditions of Carriage, apart from cases prescribed by law or treaty, the airline does not, as a general rule, bear responsibility for damage to passengers arising from its air transport services; it bears responsibility only when it is proved that the damage was caused by Korean Air's intent or negligence and that the passenger's own negligence did not contribute to the damage. This clause makes the airline responsible, for damage not governed by law or treaty, only when negligence on the airline's side has been demonstrated; the compatibility of the term 'intent or negligence' in Korean Air's terms requires examination, and gross negligence would be the appropriate standard. Under the Korean Air international passenger Conditions of Carriage, the airline bears no responsibility for damage to items such as electronic equipment included in a passenger's checked baggage, except in the case of baggage on international carriage arriving in or departing from the United States.
This unfairly discriminates between passengers on international flights arriving in or departing from the United States and passengers on other international flights; the conditions should therefore be revised so that the airline bears responsibility for such goods in the same way as under the treaties.


The Effect of Corporate SNS Marketing on User Behavior: Focusing on Facebook Fan Page Analytics (기업의 SNS 마케팅 활동이 이용자 행동에 미치는 영향: 페이스북 팬페이지 애널리틱스를 중심으로)

  • Jeon, Hyeong-Jun;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.75-95, 2020
  • With the growth of social networks, various forms of SNS have emerged. Driven by motivations such as interactivity, information exchange and entertainment, the number of SNS users is also growing fast. Facebook is the main SNS channel, and companies have started using Facebook pages as a public relations channel. In the early stages of operation, companies focused on securing fans, and as a result the number of fans of corporate Facebook pages has recently grown into the millions. From a corporate perspective, Facebook is attractive because it makes it easier to reach the customers a company wants. Facebook provides an efficient advertising platform based on the vast data it holds: advertising can be targeted using users' demographic characteristics, behavior or contact information, and it is optimized to expose information to the desired target so that results can be obtained more effectively. Through content, companies can also refine and communicate their brand image to customers. This study was conducted on Facebook advertising data and could be of great help to practitioners in the online advertising industry. For this reason, the independent variables used in the research were selected based on the content characteristics that actual businesses are concerned with. Recently, the goal of corporate Facebook page operation has moved beyond securing fans to branding and, further, to communicating with major customers. The main figures for this assessment are Facebook's likes, comments, shares and click counts, which are the dependent variables of this study. To measure target outcomes, consumer responses are set as measurable key performance indicators (KPIs), and strategies are set and executed to achieve them.
Here, the KPIs use Facebook's advertising metrics of reach, impressions, likes, shares, comments, clicks and CPC, depending on the situation. Achieving these figures requires prior consideration of content production, and in this study the independent variables were organized into three groups of content-production considerations. The effects of content material, content structure and message style on Facebook user behavior were analyzed using regression analysis. Content material concerns the content's difficulty, company relevance and daily involvement. According to existing research, how content attracts users' interest is very important. Content can be divided into informative and interesting content: informative content is related to the brand, where information exchange with users is important, while interesting content is defined as posts unrelated to the brand, such as entertaining videos or anecdotes. Based on this, the study started from the assumption that difficulty, company relevance and daily involvement affect the dependent variables. Previous studies have also found that content type, that is, the combination of photos and text used in the content, affects Facebook user activity; based on this, the use of actual photos and hashtags was also examined as independent variables. Finally, we focused on the advertising message. Previous studies found that the effect of advertising messages on users differed depending on whether they were narrative or non-narrative, and on message intimacy. In this study, we examined whether Facebook users' behavior differs depending on the language and formality of the message. For the dependent variables, likes and total click counts were set based on users' actions on the content.
In this study, we defined each independent variable from the existing research literature and analyzed its effect on the dependent variables. For likes, factors such as self-association, real-life use, material difficulty, real-life involvement and the interaction of scale and difficulty were found to be significant. In addition, variables such as self-association, real-life involvement and the interaction of formality and attention had significant effects on total clicks. Through these results, it is expected that presenting a content strategy optimized for the purpose of the content can contribute to the operation and production strategies of corporate Facebook operators and content producers.
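The regression analysis described above can be sketched as an ordinary least-squares fit; the three content features, their coefficients and the "likes" outcome below are all synthetic stand-ins for the study's variables:

```python
import numpy as np

# Synthetic design: 200 posts described by three content features
# (stand-ins for e.g. difficulty, company relevance, daily involvement).
rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                      # content features
true_beta = np.array([0.8, -0.3, 0.5])           # assumed true effects
y = 2.0 + X @ true_beta + rng.normal(0, 0.5, n)  # synthetic "likes" outcome

# Add an intercept column and solve the least-squares problem
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("estimated coefficients (intercept first):", np.round(beta, 2))
```

With enough posts, the estimated coefficients recover the assumed effects; in the study the significance of each such coefficient is what identifies which content factors drive likes and clicks.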

The Effect of Using Two Different Type of Dose Calibrators on In Vivo Standard Uptake Value of FDG PET (FDG 사용 시 Dose Calibrator에 따른 SUV에 미치는 영향)

  • Park, Young-Jae;Bang, Seong-Ae;Lee, Seung-Min;Kim, Sang-Un;Ko, Gil-Man;Lee, Kyung-Jae;Lee, In-Won
    • The Korean Journal of Nuclear Medicine Technology, v.14 no.1, pp.115-121, 2010
  • Purpose: The purpose of this study is to measure F-18 FDG with two different types of dose calibrator and to investigate the effect on the SUV (standardized uptake value) in the human body. Materials and Methods: The two dose calibrators used in this study were the CRC-15 Dual PET (Capintec) and the CRC-15R (Capintec). 1 mL, 2 mL and 3 mL of F-18 FDG were drawn into three 2 mL syringes, respectively, and the initial radioactivity was measured with each dose calibrator. Radioactivity was then measured and recorded at 30-minute intervals for 270 minutes. For each initial radioactivity, the linearity between the decay factor derived from the radioactive decay formula and the values measured by each dose calibrator was analyzed by simple linear regression, and a regression line relating the values measured with the two calibrators was obtained, based on the volume whose measured values were closest to the ideal on the CRC-15 Dual PET. ROIs were drawn on the lung, liver and a third region of 50 patients who had undergone PET/CT examinations, the values from the linear regression equations were applied, and SUVs were calculated. A paired t-test was performed to examine statistically significant differences between the radioactivity measured with the CRC-15 Dual PET and the CRC-15R and between the resulting SUVs. Results: Regression analysis of the radioactivity measured with the CRC-15 Dual PET and CRC-15R gave the following results: for 1 mL, the correlation statistic r was 0.9999 and the regression equation was y=1.0345x+0.2601; for 2 mL, r=0.9999 and y=1.0226x+0.1669; for 3 mL, r=0.9999 and y=1.0094x+0.1577. Based on the regression equation for each volume, the t-test showed significant differences in the SUVs of the lung, liver and third-region ROIs in all three cases.
P-values in each case were as follows: for 1 mL, lung, liver and third region (p<0.0001); for 2 mL, lung (p<0.002), liver and third region (p<0.0001); for 3 mL, lung (p<0.044), liver and third region (p<0.0001). Conclusion: The radioactivity values measured with the CRC-15 Dual PET and CRC-15R dose calibrators for F-18 FDG examinations showed no difference in correlation, but the results indicate that the corresponding SUVs differ significantly in terms of uptake in the human body. Therefore, the difference in SUV must be considered when these dose calibrators are used.
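Applying the reported 1 mL regression equation as a cross-calibration between the two dose calibrators might look like the following sketch; which instrument corresponds to x and which to y is an assumption here, since the abstract does not state the direction of the mapping:

```python
# The 1 mL case regression reported above relates the two calibrators'
# readings:  y = 1.0345 * x + 0.2601
# In this sketch we ASSUME x is the CRC-15R reading and y the CRC-15
# Dual PET reading. Since SUV scales with the injected dose entered at
# the scanner, a systematic shift between calibrators propagates
# directly into the computed SUV.

def crc15r_to_dualpet(x_mci: float) -> float:
    """Map a reading on one calibrator to the other's scale (1 mL case)."""
    return 1.0345 * x_mci + 0.2601

reading = 10.0                        # mCi on the first calibrator (example)
converted = crc15r_to_dualpet(reading)
ratio = converted / reading           # ~6% systematic difference at 10 mCi
print(f"{reading} mCi -> {converted:.2f} mCi ({(ratio - 1) * 100:.1f}% shift)")
```

A shift of a few percent in the entered dose produces a comparable shift in SUV, which is consistent with the significant SUV differences the paired t-tests found.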


Converting Ieodo Ocean Research Station Wind Speed Observations to Reference Height Data for Real-Time Operational Use (이어도 해양과학기지 풍속 자료의 실시간 운용을 위한 기준 고도 변환 과정)

  • BYUN, DO-SEONG;KIM, HYOWON;LEE, JOOYOUNG;LEE, EUNIL;PARK, KYUNG-AE;WOO, HYE-JIN
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY, v.23 no.4, pp.153-178, 2018
  • Most operational uses of wind speed data require measurements at, or estimates generated for, the reference height of 10 m above mean sea level (AMSL). On the Ieodo Ocean Research Station (IORS), wind speed is measured by instruments installed on the lighthouse tower of the roof deck at 42.3 m AMSL. This preliminary study indicates how these data can best be converted into synthetic 10 m wind speed data for operational uses via the Korea Hydrographic and Oceanographic Agency (KHOA) website. We tested three well-known conventional empirical neutral wind profile formulas (a power law (PL); a drag coefficient based logarithmic law (DCLL); and a roughness height based logarithmic law (RHLL)), and compared their results to those generated using a well-known, highly tested and validated logarithmic model (LMS) with a stability function (ψν), to assess the potential use of each method for accurately synthesizing reference level wind speeds. From these experiments, we conclude that the reliable LMS technique and the RHLL technique are both useful for generating reference wind speed data from IORS observations, since these methods produced very similar results: comparisons between the RHLL and the LMS results showed relatively small bias values (-0.001 m/s) and Root Mean Square Deviations (RMSD, 0.122 m/s). We also compared the synthetic wind speed data generated using each of the four neutral wind profile formulas under examination with Advanced SCATterometer (ASCAT) data. Comparisons revealed that the 'LMS without ψν' produced the best results, with only 0.191 m/s of bias and 1.111 m/s of RMSD. As well as comparing these four different approaches, we also explored potential refinements that could be applied within or through each approach.
Firstly, we tested the effect of tidal variations in sea level height on wind speed calculations, through comparison of results generated with and without the adjustment of sea level heights for tidal effects. Tidal adjustment of the sea levels used in reference wind speed calculations resulted in remarkably small bias (< 0.0001 m/s) and RMSD (< 0.012 m/s) values when compared to calculations performed without adjustment, indicating that this tidal effect can be ignored for the purposes of IORS reference wind speed estimates. We also estimated surface roughness heights (z₀) based on RHLL and LMS calculations in order to explore the best parameterization of this factor, with results leading to our recommendation of a new z₀ parameterization derived from observed wind speed data. Lastly, we suggest the necessity of including a suitable, experimentally derived, surface drag coefficient and z₀ formulas within conventional wind profile formulas for situations characterized by strong wind (≥ 33 m/s) conditions, since without this inclusion the wind adjustment approaches used in this study are only optimal for wind speeds ≤ 25 m/s.
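The power law (PL) and roughness-height logarithmic law (RHLL) conversions tested in the study can be sketched as follows; the exponent α = 0.11 and roughness height z₀ = 2×10⁻⁴ m are common open-ocean values assumed here for illustration, not the study's fitted parameters:

```python
import numpy as np

# Convert the IORS wind speed observed at z = 42.3 m AMSL down to the
# 10 m reference height with two neutral-profile formulas.
Z_OBS, Z_REF = 42.3, 10.0

def power_law(u_obs, alpha=0.11):
    """PL: u(z_ref) = u(z_obs) * (z_ref / z_obs) ** alpha."""
    return u_obs * (Z_REF / Z_OBS) ** alpha

def roughness_log_law(u_obs, z0=2e-4):
    """RHLL: u(z_ref) = u(z_obs) * ln(z_ref/z0) / ln(z_obs/z0)."""
    return u_obs * np.log(Z_REF / z0) / np.log(Z_OBS / z0)

u42 = 12.0  # m/s measured at the lighthouse tower (example value)
print(f"PL:   {power_law(u42):.2f} m/s")
print(f"RHLL: {roughness_log_law(u42):.2f} m/s")
```

Both formulas reduce the 42.3 m observation by roughly 10-15% for typical open-sea conditions; the study's finding is that the RHLL variant tracks the stability-corrected LMS model closely.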

Analysis of shopping website visit types and shopping pattern (쇼핑 웹사이트 탐색 유형과 방문 패턴 분석)

  • Choi, Kyungbin;Nam, Kihwan
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.85-107, 2019
  • Online consumers browse products belonging to a particular product line or brand with purchase in mind, or simply navigate widely without making a purchase. Research on the behavior and purchases of online consumers has progressed steadily, and services and applications based on consumer behavior data have been developed in practice. In recent years, customization strategies and recommendation systems have been utilized thanks to the development of big data technology, and attempts are being made to optimize users' shopping experience. Even so, only a small fraction of website visits actually proceed to the purchase stage. This is because online consumers do not visit a website only to purchase products; they use and browse websites differently according to their shopping motives and purposes. Therefore, analyzing the various types of visits, not only visits that end in purchase, is important for understanding the behavior of online consumers. In this study, we performed a clustering analysis of sessions based on the clickstream data of an e-commerce company in order to explain the diversity and complexity of online consumers' search behavior and to typify it. For the analysis, we converted more than 8 million page-level data points into visit-level sessions, resulting in over 500,000 website visit sessions. For each visit session, 12 characteristics such as page views, duration, search diversity and page-type concentration were extracted for clustering. Considering the size of the data set, we performed the analysis using the Mini-Batch K-means algorithm, which has advantages in learning speed and efficiency while maintaining clustering performance similar to that of K-means.
The optimal number of clusters was found to be four, and differences in session characteristics and purchase rates were identified for each cluster. Online consumers visit a website several times, learn about the product and then decide on a purchase. To analyze the purchasing process over several visits, we constructed consumers' visit sequence data based on the navigation patterns derived from the clustering analysis. The visit sequence data comprise series of visits leading up to one purchase, and the items constituting a sequence are the cluster labels derived above. We separately constructed sequence data for consumers who made purchases and for consumers who only explored products without purchasing during the same period, and then applied sequential pattern mining to extract frequent patterns from each data set. The minimum support was set to 10%, and the frequent patterns consist of sequences of cluster labels. While some patterns were derived from both sequence data sets, other frequent patterns were derived from only one of them. Through comparative analysis of the extracted frequent patterns, we found that consumers who made purchases showed a pattern of repeatedly searching for a specific product before deciding to purchase it. The implication of this study is that we analyzed the search types of online consumers using large-scale clickstream data and explained the purchasing process from a data-driven point of view. Most studies on the typology of online consumers have focused on the characteristics of each type and on the key factors distinguishing the types.
In this study, we typed the behavior of online consumers and further analyzed the order in which the types are organized into series of search patterns. In addition, online retailers will be able to improve purchase conversion through marketing strategies and recommendations tailored to the various visit types, and to evaluate the effect of such strategies through changes in consumers' visit patterns.
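A minimal mini-batch K-means in the spirit of the session clustering above can be sketched in plain NumPy (in practice a tuned library implementation would be used); the two-dimensional "session features" below are synthetic:

```python
import numpy as np

# Mini-batch K-means: at each step a random batch is assigned to the
# nearest centers, and each center moves toward its batch points with a
# per-center learning rate eta = 1 / (points seen so far by that center).
rng = np.random.default_rng(42)

def mini_batch_kmeans(X, k, batch_size=100, n_iter=200):
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    counts = np.zeros(k)
    for _ in range(n_iter):
        batch = X[rng.choice(len(X), batch_size, replace=False)]
        # Assign each batch point to its nearest center
        d = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            for p in batch[labels == j]:
                counts[j] += 1
                eta = 1.0 / counts[j]          # per-center learning rate
                centers[j] = (1 - eta) * centers[j] + eta * p
    return centers

# Three synthetic "session type" blobs in a 2-D feature space
X = np.vstack([rng.normal(m, 0.3, size=(300, 2)) for m in ([0, 0], [3, 0], [0, 3])])
centers = mini_batch_kmeans(X, k=3)
print(np.round(centers, 1))
```

The decaying per-center learning rate is what lets the algorithm approximate full K-means while touching only a small batch per iteration, which is the efficiency argument the study makes for its 500,000-session data set.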

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.107-122, 2017
  • Volatility in the stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management as well as most theoretical financial models. Engle(1982) presented a pioneering paper on the stock market volatility that explains the time-variant characteristics embedded in the stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev(1986) as GARCH models. Empirical studies have shown that GARCH models describes well the fat-tailed return distributions and volatility clustering phenomenon appearing in stock prices. The parameters of the GARCH models are generally estimated by the maximum likelihood estimation (MLE) based on the standard normal density. But, since 1987 Black Monday, the stock market prices have become very complex and shown a lot of noisy terms. Recent studies start to apply artificial intelligent approach in estimating the GARCH parameters as a substitute for the MLE. The paper presents SVR-based GARCH process and compares with MLE-based GARCH process to estimate the parameters of GARCH models which are known to well forecast stock market volatility. Kernel functions used in SVR estimation process are linear, polynomial and radial. We analyzed the suggested models with KOSPI 200 Index. This index is constituted by 200 blue chip stocks listed in the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015. Sample observations are 1487 days. We used 1187 days to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models are estimated by MLE. We forecasted KOSPI 200 Index return volatility and the statistical metric MSE shows better results for the asymmetric GARCH models such as E-GARCH or GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics with fat-tail and leptokurtosis. 
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel function shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if it is lower, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values themselves cannot be traded, but our simulation results are still meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than those with MLE-based GARCH models over the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for its SVR-based counterpart; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for its SVR-based counterpart. The linear kernel function yields higher trading returns than the radial kernel function. The best SVR-based IVTS performance is +526.4%, against +150.2% for the best MLE-based IVTS; SVR-based GARCH IVTS also trades more frequently. This study has some limitations. Our models are based solely on SVR, and other artificial intelligence models should be explored for better performance. We also do not consider costs incurred in the trading process, including brokerage commissions and slippage. 
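The SVR-based GARCH idea described above, regressing tomorrow's squared return on GARCH(1,1)-style regressors with the three kernels the abstract names, can be sketched roughly as follows. The data here is hypothetical simulated noise standing in for KOSPI 200 returns, and the rolling-variance regressor is one plausible stand-in for the lagged conditional variance, not the authors' exact feature set.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical simulated daily returns (the paper uses KOSPI 200 data).
rng = np.random.default_rng(1)
r2 = rng.normal(0, 0.01, 500) ** 2  # squared returns as a volatility proxy

# GARCH(1,1)-style regressors: yesterday's squared return and a 20-day
# rolling variance estimate standing in for the lagged conditional variance.
roll_var = np.convolve(r2, np.ones(20) / 20, mode="valid")
X = np.column_stack([r2[19:-1], roll_var[:-1]])
y = r2[20:]

# Fit one SVR per kernel (the three kernels compared in the paper).
models = {k: SVR(kernel=k).fit(X, y) for k in ("linear", "poly", "rbf")}
forecasts = {k: m.predict(X[-1:])[0] for k, m in models.items()}
```

In the paper's setting the models would be trained on the 1,187-day window and evaluated by MSE on the 300-day test window.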
The IVTS trading performance is likewise somewhat unrealistic, since we use historical volatility values as the trading objects. Accurate forecasting of stock market volatility is essential for real trading as well as for asset pricing models. Further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
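The IVTS entry rules quoted in the abstract reduce to a small decision function. The sketch below is one straightforward reading of those rules (the paper does not give code), where +1 is a long-volatility position and -1 a short one.

```python
def ivts_position(prev_position: int, vol_today: float, vol_forecast: float) -> int:
    """IVTS entry rule: buy volatility (+1) if tomorrow's forecasted
    volatility is higher than today's, sell (-1) if lower, and hold the
    existing position when the forecast direction does not change."""
    if vol_forecast > vol_today:
        return 1
    if vol_forecast < vol_today:
        return -1
    return prev_position  # direction unchanged: keep the current position
```

A backtest would apply this function day by day over the 300-day test window, accumulating the volatility changes captured by each position.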

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for housing computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, which makes the cause difficult to determine. Previous studies predicting failure in data centers treated each server as a single, independent state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and we focused on analyzing complex failures occurring within the server. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions have been developed. On the other hand, the cause of failures occurring within a server is difficult to determine, and adequate prevention has not yet been achieved, precisely because server failures do not occur in isolation: a failure in one server can cause, or be triggered by, failures in other servers. In other words, while existing studies analyzed failures on the assumption that servers do not affect one another, this study assumes that failures propagate between servers. 
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. Failures occurring on each device are sorted in chronological order, and when a failure on one piece of equipment is followed by a failure on another within 5 minutes, the two failures are defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed simultaneously within the constructed sequences were selected, and the cases where the selected devices failed together were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network model structure was used to reflect the fact that each server contributes differently to a complex failure: this architecture improves prediction accuracy by giving a larger weight to servers with a larger impact on the failure. The study began by defining failure types and selecting analysis targets. In the first experiment, the same collected data was modeled both as a single-server state and as a multiple-server state, and the results were compared. The second experiment improved the prediction accuracy for complex failures by optimizing the threshold for each server. 
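The 5-minute simultaneity rule described above can be sketched as a simple grouping pass over a chronologically sorted failure log. This is an illustrative reading of the definition with invented equipment names, not the study's actual preprocessing code.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)  # failures within 5 min count as simultaneous

def group_complex_failures(events):
    """events: iterable of (timestamp, equipment_id) pairs.
    Returns lists of events chained together by gaps of at most WINDOW."""
    events = sorted(events)  # chronological order, as in the abstract
    groups, current = [], []
    for ts, equip in events:
        if current and ts - current[-1][0] > WINDOW:
            groups.append(current)  # gap too large: close the group
            current = []
        current.append((ts, equip))
    if current:
        groups.append(current)
    return groups

# Hypothetical failure log with invented equipment names.
log = [
    (datetime(2020, 1, 1, 0, 0), "server-A"),
    (datetime(2020, 1, 1, 0, 3), "server-B"),  # within 5 min of server-A
    (datetime(2020, 1, 1, 2, 0), "server-C"),  # an isolated failure
]
groups = group_complex_failures(log)
```

Counting equipment co-occurrences across the resulting groups would then identify the 5 devices that most frequently fail together.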
In the first experiment, which assumed a single server and multiple servers in turn, the single-server model predicted no failure for three of the five servers even though failures actually occurred, whereas the multiple-server model correctly predicted that all five servers had failed. This result supports the hypothesis that servers affect one another, and confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose cause is difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. The results of this study are expected to help prevent failures in advance.
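The per-server threshold optimization of the second experiment can be sketched as a search over candidate probability cut-offs on validation data. The abstract does not state which metric was optimized; F1 is used here as a plausible stand-in, and the validation scores are invented for illustration.

```python
import numpy as np

def best_threshold(y_true, y_prob, candidates=np.linspace(0.1, 0.9, 17)):
    """Pick, for one server, the probability cut-off maximizing F1 on
    validation data -- one reading of the per-server threshold tuning."""
    best_t, best_f1 = 0.5, -1.0
    for t in candidates:
        y_pred = (y_prob >= t).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# Hypothetical validation labels and predicted failure probabilities
# for a single server.
y_true = np.array([0, 0, 1, 1, 1, 0])
y_prob = np.array([0.2, 0.4, 0.7, 0.8, 0.35, 0.1])
t = best_threshold(y_true, y_prob)
```

Running this independently for each server yields the server-specific thresholds that the second experiment credits with the accuracy improvement.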