• Title/Summary/Keyword: Information Objects

Acorn Production and Characteristics of Quercus acuta Thunb - Focused on Wando, Jindo and Haenam in Jeollanam-do, Korea - (붉가시나무의 종실 생산량 및 형질특성 - 전라남도 완도, 진도, 해남을 중심으로 -)

  • Kim, Sodam; Park, In-Hyeop
    • Korean Journal of Environment and Ecology, v.35 no.6, pp.621-631, 2021
  • The purpose of this study is to survey and analyze the acorn production and characteristics of Quercus acuta Thunb., reflecting the need for information on seed supply and seedling cultivation for the restoration of warm-temperate broad-leaved forests. A total of 30 seed traps with a surface area of 1 m2 each were set up, 3 in each of 10 quadrats (8 in Wando, 1 in Haenam, and 1 in Jindo). Acorns that fell into the seed traps were collected at the end of each month from August to December each year between 2013 and 2016. The collected acorns were classified as sound, damaged, decayed, or empty, and the number of acorns produced was calculated. For sound acorns, traits such as length, diameter, and weight without the cupule were measured. Duncan's multiple range tests on acorn production and characteristics were conducted to compare annual average values with the values by year, stand, month, and treatment plot. The annual number of acorns dropped into the seed traps in each quadrat was 5-350 acorns/3 m2 in 2013, 17-551 acorns/3 m2 in 2014, 5-454 acorns/3 m2 in 2015, and 14-705 acorns/3 m2 in 2016. The large differences in acorn production between quadrats are presumably attributable to differences in the amount of light received, which depends on tree density in each quadrat. Annual acorn production per unit area was 335,000 acorns/ha in 2013, 932,000 acorns/ha in 2014, 556,000 acorns/ha in 2015, and 1,037,000 acorns/ha in 2016, showing a sharp variation in acorn production on a two-year cycle. As the fluctuation in production was synchronized between stands, Q. acuta is judged to have a clear cycle of good and poor seed years across stands. September showed the largest amount of fallen acorns and the greatest damage from insect pests, indicating that preventing the early fall of acorns could extend the fruiting period and enable mass production of sound acorns. There was no significant difference in annual average acorn length between regions. For acorn diameter and weight, the average values of acorns from Haenam were significantly higher than those from Wando and Jindo. There was no significant difference in average annual acorn characteristics by month; the average annual acorn length, diameter, and weight in November were 19.72 mm, 12.23 mm, and 1.64 g, respectively, the highest values between August and November.
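The per-hectare figures above follow from simple area scaling of the trap counts. A minimal sketch of that arithmetic in Python, assuming production is estimated by scaling the mean trap density up to one hectare (the trap counts below are invented for illustration, not the paper's raw data):

```python
# Scale acorn counts from seed traps (1 m^2 each) to a per-hectare estimate.
# Illustrative counts only; the paper reports, e.g., 335,000 acorns/ha for 2013.

TRAP_AREA_M2 = 1.0       # surface area of one seed trap
HECTARE_M2 = 10_000.0    # 1 ha in square metres

def acorns_per_hectare(counts_per_trap: list[int]) -> float:
    """Mean density over all traps, scaled to one hectare."""
    total_area = TRAP_AREA_M2 * len(counts_per_trap)
    density = sum(counts_per_trap) / total_area   # acorns per m^2
    return density * HECTARE_M2

# Example: 30 traps (3 per quadrat x 10 quadrats), hypothetical counts
counts_2013 = [12, 5, 40, 33, 7, 21, 55, 2, 9, 30] * 3
print(f"{acorns_per_hectare(counts_2013):,.0f} acorns/ha")
```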

Utilization of Smart Farms in Open-field Agriculture Based on Digital Twin (디지털 트윈 기반 노지스마트팜 활용방안)

  • Kim, Sukgu
    • Proceedings of the Korean Society of Crop Science Conference, 2023.04a, pp.7-7, 2023
  • Currently, the main technologies of the various fourth-industrial-revolution fields are big data, the Internet of Things, artificial intelligence, blockchain, mixed reality (MR), and drones. In particular, the "digital twin," which has recently become a global technological trend, is a virtual model that mirrors a physical object in a computer. By creating and simulating a digital twin of software-virtualized assets instead of the real physical assets, accurate information about the characteristics of real farming (current state, agricultural productivity, agricultural work scenarios, etc.) can be obtained. This study aims to streamline agricultural work through automatic water management, remote growth forecasting, drone control, and pest forecasting via an integrated control system, by constructing digital twin data on major open-field production areas and by designing and building a smart farm complex. In addition, it aims to disseminate digital environmental-control agriculture in Korea that can reduce labor and improve crop productivity while minimizing environmental load through the use of appropriate amounts of fertilizers and pesticides informed by big data analysis. These open-field agricultural technologies can reduce labor through digital farming and cultivation management, optimize water use and prevent soil pollution in preparation for climate change, and enable quantitative growth management of open-field crops by securing digital data on the national cultivation environment. They are also a way to directly implement carbon-neutral RED++ activities by improving agricultural productivity. Analysis and prediction of growth status based on acquired high-precision, high-definition crop imagery are very effective for digital farm-work management. The Southern Crop Department of the National Institute of Food Science has conducted research and development on various types of open-field smart farms, such as subsurface drip irrigation and subsurface drainage. In particular, from this year, commercialization is underway in earnest through the establishment of smart farm facilities and technology distribution to agricultural technology complexes across the country. This study describes a case of establishing an agricultural field site that combines digital twin technology with open-field smart farm technology, along with future utilization plans.
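To make the digital-twin concept above concrete, the following is a hypothetical sketch of a field twin kept in sync with sensor readings and used to simulate an action before applying it to the real field. The class, sensor names, and the irrigation response model are all invented for illustration; the study does not specify its data model:

```python
from dataclasses import dataclass

@dataclass
class FieldTwin:
    """Virtual replica of one open-field plot, updated from field sensors."""
    soil_moisture: float   # volumetric water content, %
    air_temp_c: float
    crop_height_cm: float

    def update(self, reading: dict) -> None:
        # Mirror the physical asset: the twin always reflects the latest state.
        self.soil_moisture = reading["soil_moisture"]
        self.air_temp_c = reading["air_temp_c"]
        self.crop_height_cm = reading["crop_height_cm"]

    def simulate_irrigation(self, mm_water: float) -> float:
        # Try the action on the twin first, not on the real field.
        # Toy response model: 1 mm of water raises moisture ~0.4 points.
        return self.soil_moisture + 0.4 * mm_water

twin = FieldTwin(soil_moisture=18.0, air_temp_c=24.5, crop_height_cm=42.0)
twin.update({"soil_moisture": 16.2, "air_temp_c": 26.1, "crop_height_cm": 42.3})
if twin.simulate_irrigation(10.0) < 25.0:  # would 10 mm still leave the plot dry?
    print("schedule additional irrigation")
```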

Analysis of trends in the use of geophysical exploration techniques for underwater cultural heritage (수중문화유산에 대한 지구물리탐사 기법 활용 동향 분석)

  • LEE Sang-Hee; KIM Sung-Bo; KIM Jin-Hoo; HYUN Chang-Uk
    • Korean Journal of Heritage: History & Science, v.56 no.3, pp.174-193, 2023
  • Korea is surrounded by the sea and has rivers connecting to it throughout its inland areas, a geographical characteristic since ancient times. As a result, there have been exchanges and conflicts with various countries across the sea, and rivers have carried ships transporting grain, tax goods, and passengers. The sea and rivers have thus long had a significant impact on the lives of Koreans. Consequently, many cultural heritages are expected to lie submerged in the sea and rivers, and continuous efforts are being made to discover and preserve them. Underwater cultural heritage is difficult to discover because its location in the sea or rivers makes direct visual observation and exploration challenging. To overcome these limitations, various geophysical survey techniques are employed. Geophysical survey methods use the physical properties of elastic waves, including their reflection and refraction, to conduct surveys of bathymetry, underwater topography, and strata. These techniques detect the physical characteristics of underwater objects and seafloor formations, analyze the differences, and identify underwater cultural heritage lying on or buried in the seabed. Bathymetric surveys use an echo sounder; underwater topography surveys use a side-scan sonar to find artifacts lying on or partially exposed at the seabed; and marine shallow-strata surveys use a sub-bottom profiler to find heritage buried in the seabed. However, the underwater cultural heritage discovered in domestic waters thus far has largely consisted of accidental finds by fishermen, divers, or octopus hunters. This study analyzes and summarizes the latest research trends in equipment used for underwater cultural heritage exploration, including bathymetric, underwater topography, and strata surveys, with the goal of contributing to research on underwater cultural heritage investigation in the domestic context.
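All three instruments mentioned (echo sounder, side-scan sonar, sub-bottom profiler) rest on the same two-way travel-time principle: an acoustic pulse travels to a reflector and back, so range is half of speed times elapsed time. A minimal sketch, assuming a nominal sound speed in seawater of 1,500 m/s (the ping time is illustrative):

```python
SOUND_SPEED_SEAWATER = 1500.0  # m/s, nominal; varies with temperature/salinity

def depth_from_echo(two_way_travel_time_s: float,
                    sound_speed: float = SOUND_SPEED_SEAWATER) -> float:
    """Depth from an echo sounder ping: the pulse travels down and back,
    so the one-way distance is half of speed x time."""
    return sound_speed * two_way_travel_time_s / 2.0

print(depth_from_echo(0.040))  # a 0.040 s round trip -> 30.0 m of water
```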

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn; Chung, Yeojin; Lee, Jaejoon; Yang, Jiheon
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.71-88, 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between objects that enter the model sequentially (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit composing Korean texts. We constructed language models using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep learning package Keras on the Theano backend. After pre-processing, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and an output of the following 21st character. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm took the longest training time for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved, and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
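A minimal sketch of the 3-layer LSTM model in modern Keras (the paper used Keras on the Theano backend; the 74-symbol vocabulary and the 20-character input window follow the abstract, while the layer width and the random toy data are assumptions):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB = 74    # unique characters after pre-processing (from the abstract)
WINDOW = 20   # 20 consecutive characters predict the 21st

model = keras.Sequential([
    keras.Input(shape=(WINDOW, VOCAB)),         # one-hot encoded phoneme window
    layers.LSTM(256, return_sequences=True),    # hidden size 256 is an assumption
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),
    layers.Dense(VOCAB, activation="softmax"),  # distribution over the next phoneme
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Random one-hot toy data standing in for the Old Testament corpus.
x = np.eye(VOCAB)[np.random.randint(0, VOCAB, size=(512, WINDOW))]
y = np.eye(VOCAB)[np.random.randint(0, VOCAB, size=512)]
model.fit(x, y, epochs=1, batch_size=64, verbose=0)

# Perplexity is the exponential of the average cross-entropy loss.
print("perplexity:", float(np.exp(model.evaluate(x, y, verbose=0))))
```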

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.69-94, 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing large amounts of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing; big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each requester of the analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many programs for data analysis. The entry barriers to big data analysis are thus gradually lowering and the technology is spreading, so big data analysis is expected to be performed by the requesters themselves. Along with this, interest in various kinds of unstructured data, especially text data, is continually increasing. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters. It is evaluated as very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection; it is thus essential to analyze all documents at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to a large number of documents, and causes a scalability problem: processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: dividing a large number of documents into sub-units and deriving topics by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first combining them. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each sub-unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology needs to be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this method has not been studied sufficiently compared with other topic modeling research. In this paper, we propose a topic modeling approach that solves these two problems. First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegated documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we also confirmed that the proposed methodology can provide results similar to topic modeling over the entire collection, and we proposed a reasonable method for comparing the results of both approaches.
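A minimal sketch of the divide-and-conquer idea using gensim's LdaModel: local sets are modeled independently, a reduced global set is modeled separately, and each local topic is mapped to its most similar global topic by cosine similarity of the topic-word distributions. The toy corpus and the naive every-fourth-document sampling stand in for the paper's delegated-document extraction, which the abstract does not specify in detail:

```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy corpus standing in for the 24,000 news articles.
docs = [["economy", "stocks", "market"], ["election", "vote", "party"],
        ["stocks", "bank", "rates"],    ["party", "policy", "vote"]] * 50
shared_dict = Dictionary(docs)  # one shared vocabulary so topic vectors align

def bow(ds):
    return [shared_dict.doc2bow(d) for d in ds]

# 1) Split the global set into local sets and model each one independently.
half = len(docs) // 2
local_models = [LdaModel(bow(docs[:half]), num_topics=2, id2word=shared_dict),
                LdaModel(bow(docs[half:]), num_topics=2, id2word=shared_dict)]

# 2) Model the reduced global set (RGS) of delegated documents
#    (naively sampled here; the paper extracts them from each local set).
global_model = LdaModel(bow(docs[::4]), num_topics=2, id2word=shared_dict)

# 3) Map each local topic to its most similar global topic by cosine
#    similarity between topic-word distributions.
g = global_model.get_topics()                  # shape: (topics, vocab)
for i, lm in enumerate(local_models):
    l = lm.get_topics()
    sims = l @ g.T / (np.linalg.norm(l, axis=1, keepdims=True)
                      * np.linalg.norm(g, axis=1))
    print(f"local set {i}: local topic -> global topic", sims.argmax(axis=1))
```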

A Brief Review of Backgrounds behind "Multi-Purpose Performance Halls" in South Korea (우리나라 다목적 공연장의 탄생배경에 관한 소고)

  • Kim, Kyoung-A
    • (The) Research of the Performance Art and Culture, no.41, pp.5-38, 2020
  • The current state of performance halls in South Korea is closely related to the nation's performance art and culture, as the practice of staging and enjoying performances is deeply rooted in the public culture and arts halls that represent each area at the local government level. Today, public culture and arts halls have multiple management purposes, and in the overwhelming majority of cases they are managed by public bodies, including the central and local governments or investment and donation foundations. Public culture and arts halls thus correlate closely with the institutional side of cultural policy, as objects of culture and art policies at the central and local government levels. The era of public culture and arts halls opened in earnest in the 1980s and 1990s, during which multi-purpose performance halls of a similar structure became universal around the nation. Uniformly shaped public culture and arts halls were distributed nationwide with no consideration of genre characteristics or local artistic environments, which can be attributed to the cultural policies of the military regimes. The Park Chung-hee regime proclaimed the supra-constitutional Yusin system and enacted the Culture and Arts Promotion Act (September 1972), the nation's first culture and arts law. Based on the act, a five-year plan for the promotion of culture and arts (1973) was made, leading to the construction of cultural facilities. "Public culture and arts halls" or "culture halls" were built to serve multiple purposes around the nation because the Culture and Arts Promotion Act, often called the starting point of the nation's legal system for culture and arts, defined "culture and arts" as "matters regarding literature, art, music, entertainment, and publications." This definition became the ground for the current "multi-purpose" concept. The Ministry of Culture and Public Information set up a culture-administration system that declared its supervision of "culture and arts" and distinguished popular culture from the promotion of the arts. During this period, President Park's speeches exhibited a perception that "culture = arts = culture and arts." The arts belonged to the category of culture but were treated as "culture and arts." When the act was enacted, no department was devoted to arts policy, and a broad scope of culture was accepted; this ambiguity worked as a mechanism for mobilizing the arts for ideological ends as policy. Against this backdrop, the Sejong Center for the Performing Arts, a multi-purpose performance hall, was established in 1978 under the Culture and Arts Promotion Act and the supervision of the Ministry of Culture and Public Information. There were, however, conflicts of value over whether to accept popular music, among the "culture and arts = multiple purposes" of the system, the "culture ≠ arts" view of the cultural organization that pushed forward the center's establishment, and the "culture and arts = arts" view of the ruling class. The new military regime that seized power in the coup d'état of December 12, 1979 failed in its cultural policy of bringing the resistance forces within the system. It tried to differentiate itself from the Park regime by shifting its stance toward "expanding opportunities for the people to enjoy culture," seeking support from both the resistance and its own supporters; for the Chun Doo-hwan regime, differentiation from the previous regime was a way to secure legitimacy. The expansion of opportunities to enjoy culture was pushed forward as a matter of national distribution, so it failed to take hold as a long-term policy for the development of the arts, and the military regime instead tried to secure legitimacy through the symbolism of hardware. Throughout this period, the institutional ground for public culture and arts halls remained the definition of "culture and arts" in the Culture and Arts Promotion Act enacted under the Yusin system, and the "multi-purpose" concept, the management goal of public performance halls, was born from it. In this historical context, proscenium performance halls of similar structure and public culture and arts halls with similar management goals were established around the nation, shaping today's performance art and culture.

An Empirical Study on the Influencing Factors for Big Data Adoption Intention: Focusing on Strategic Value Recognition and the TOE (Technology-Organization-Environment) Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang; Kim, Jin-soo
    • Asia Pacific Journal of Information Systems, v.24 no.4, pp.443-472, 2014
  • To survive in the global competitive environment, an enterprise should be able to solve various problems and find optimal solutions effectively. Big data is being perceived as a tool for solving enterprise problems effectively and improving competitiveness through its varied problem-solving and advanced predictive capabilities. Owing to this remarkable potential, implementations of big data systems have increased in many enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to provide competitive superiority. The reason big data is in the limelight is that, whereas conventional IT technology has fallen behind in what it makes possible, big data goes beyond mere technological possibility and can be utilized to create new value, such as business optimization and new business creation, through analysis. However, because big data has often been introduced hastily, without considering the strategic value to be derived and achieved through it, many firms have difficulty deriving strategic value and utilizing the data. According to a survey of 1,800 IT professionals from 18 countries worldwide, only 28% of corporations were utilizing big data well, and many respondents reported difficulties in deriving strategic value and operating through big data. To introduce big data, its strategic value should be determined and environmental conditions such as internal and external regulations and systems should be considered, but these factors have not been well reflected. The cause of the failures turned out to be that big data was introduced in response to IT trends and the surrounding environment, hastily and before the conditions for its introduction were in place. For a successful introduction, the strategic value obtainable through big data must be clearly understood and a systematic analysis of the environment and applicability is essential; but since corporations consider only partial achievements and technological aspects, successful introductions are rare. Prior work shows that most big data research has focused on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying the factors that influence successful big data system implementation, and analyzing empirical models. To this end, the factors that can affect the intention to adopt big data were deduced by reviewing information systems success factors, strategic value perception factors, environmental factors for information system introduction, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to the people in charge of big data inside corporations, and statistical analysis was performed. The statistical analysis showed that the strategic value perception factors and industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications of the results are as follows. The first theoretical implication is that this study proposes factors that affect big data adoption intention by reviewing strategic value perception, environmental factors, and prior big data studies, and proposes variables and measurement items that were empirically analyzed and verified; it measures the influence of each variable on adoption intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study defines the independent variables (strategic value perception, environment), the dependent variable (adoption intention), and the moderating variables (type of business and corporate size) for big data adoption intention, and lays a theoretical base for subsequent empirical studies by developing measurement items with established reliability and validity. Third, by verifying the significance of the strategic value perception factors and environmental factors proposed in prior studies, this study will aid subsequent empirical research on the factors affecting big data adoption. The practical implications are as follows. First, this study lays an empirical foundation for the big data field by investigating the causal relationship between strategic value perception and environmental factors and adoption intention, and by proposing measurement items with established reliability and validity. Second, the finding that strategic value perception positively affects big data adoption intention underscores the importance of strategic value perception. Third, a corporation introducing big data should do so on the basis of a precise analysis of its industry's internal environment. Fourth, the size and type of business of the corporation should be considered, as the influential factors for big data adoption differ by corporate size and business type. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be approached in various ways in products, services, productivity, and decision making, and can be utilized in all business fields on that basis, but major domestic corporations currently consider only parts of the product and service fields; accordingly, in introducing big data, the utilization phase should be reviewed in detail and the system designed to maximize utilization. Second, the study identifies burdens in the adoption phase: the cost of system introduction, difficulty of use, and a lack of trust in supplier corporations. Since global IT corporations dominate the big data market, domestic corporations' adoption of big data cannot but depend on foreign corporations. Considering that Korea lacks global IT corporations despite being a world-leading IT country, big data can be seen as a chance to foster world-class corporations, and the government will need to foster such companies through active policy support. Third, corporations lack internal and external professional manpower for big data introduction and operation. In big data, deriving valuable insight from data matters more than system construction itself; talent equipped with academic knowledge and experience across fields such as IT, statistics, strategy, and management should therefore be trained through systematic education. This study lays a theoretical base for empirical studies of big data by identifying and verifying the main variables that affect big data adoption intention, and is expected to provide useful guidelines for corporations and policy makers considering big data implementation.
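A minimal sketch of the kind of structural equation model described above, using the semopy package on synthetic questionnaire data. The indicator names, loadings, and data are invented for illustration; the paper's actual measurement items are not reproduced here:

```python
import numpy as np
import pandas as pd
import semopy

# Synthetic survey responses standing in for the questionnaire data.
rng = np.random.default_rng(0)
n = 300
sv, env = rng.normal(size=n), rng.normal(size=n)
noise = lambda: rng.normal(scale=0.5, size=n)
df = pd.DataFrame({
    "sv1": sv + noise(),   "sv2": sv + noise(),    # strategic value items
    "env1": env + noise(), "env2": env + noise(),  # environment items
    "int1": 0.6 * sv + 0.4 * env + noise(),        # adoption intention items
    "int2": 0.6 * sv + 0.4 * env + noise(),
})

# Measurement model (latent =~ indicators) plus structural paths (~).
desc = """
strategic_value =~ sv1 + sv2
environment     =~ env1 + env2
intention       =~ int1 + int2
intention ~ strategic_value + environment
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```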

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.107-122, 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in the volatility of stock market returns. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models on the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, for 1,487 daily observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric showed better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperformed the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel showed exceptionally low forecasting accuracy. We then suggest an Intelligent Volatility Trading System (IVTS) that utilizes the volatility forecasts. The IVTS entry rules are as follows: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values themselves cannot be traded, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. In the test period, the trading systems with SVR-based GARCH models showed higher returns than the MLE-based GARCH systems. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return while SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3%. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be investigated for better performance. We do not consider trading costs, including brokerage commissions and slippage. The IVTS trading performance is also unrealistic, since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models, and further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
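A minimal sketch contrasting the two estimation routes on synthetic returns: MLE via the arch package for a symmetric GARCH(1,1), and an SVR that learns a stand-in for the variance equation from lagged squared returns. The feature construction for the SVR is an assumption; the abstract does not detail the regressors used:

```python
import numpy as np
from arch import arch_model
from sklearn.svm import SVR

# Synthetic daily returns standing in for KOSPI 200 (2010-2015, 1487 obs).
rng = np.random.default_rng(1)
r = rng.standard_t(df=5, size=1487) * 0.01

# MLE benchmark: symmetric GARCH(1,1) fitted by maximum likelihood.
mle = arch_model(r * 100, vol="GARCH", p=1, q=1).fit(disp="off")
print(mle.params)

# SVR stand-in for the GARCH variance equation: predict tomorrow's squared
# return from today's squared return plus a rolling variance proxy
# (the true conditional variance is latent, so a proxy is needed).
sq = r ** 2
proxy = np.convolve(sq, np.ones(5) / 5, mode="valid")  # 5-day mean of r^2
X = np.column_stack([sq[4:-1], proxy[:-1]])
y = sq[5:]
svr = SVR(kernel="linear").fit(X[:1187], y[:1187])     # split roughly as in the paper
vol_forecast = np.sqrt(np.maximum(svr.predict(X[1187:]), 0))
print("mean forecast volatility:", vol_forecast.mean())
```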

A Study on the Effect of a Booth Recommendation System on Exhibition Visitors' Unplanned Visit Behavior (전시장 참관객의 계획되지 않은 방문행동에 있어서 부스추천시스템의 영향에 대한 연구)

  • Chung, Nam-Ho; Kim, Jae-Kyung
    • Journal of Intelligence and Information Systems, v.17 no.4, pp.175-191, 2011
  • With the MICE (Meeting, Incentive travel, Convention, Exhibition) industry coming into the spotlight, there has been growing interest in the domestic exhibition industry. Accordingly, various studies are being conducted in Korea to enhance exhibition performance, as in the United States or Europe. Some studies focus on analyzing the visiting patterns of exhibition visitors using intelligent information technology, taking into account how the effects of viewing an exhibition vary with the exhibition environment and technique, thereby understanding visitors, revealing correlations among exhibiting businesses, and improving exhibition performance. However, previous studies of booth recommendation systems have discussed only recommendation accuracy from the system's perspective, rather than examining how recommendations change visitors' behavior or perception. A booth recommendation system enables visitors to visit unplanned exhibition booths by recommending suitable booths based on information about their visits. Some visitors may be satisfied with these unplanned visits, while others may find the recommendation process cumbersome or an obstacle to free observation; in the latter case, the exhibition is likely to produce worse results than if visitors were simply allowed to observe freely. Thus, before applying a booth recommendation system to exhibition halls, the factors affecting the performance of the system should be examined comprehensively, and the system's effects on visitors' unplanned visiting behavior should be studied carefully. This study therefore aims to determine the factors affecting the performance of a booth recommendation system by reviewing theories and the literature, and to examine the effects of visitors' perceived performance of the system on their satisfaction with unplanned behavior and their intention to reuse the system. Toward this end, unplanned behavior theory was adopted as the theoretical framework. Unplanned behavior can be defined as "behavior that consumers engage in without any prearranged plan." Consumers' unplanned behavior has been studied in various fields; marketing, in particular, has focused on unplanned purchasing, which is often confused with impulsive purchasing. The two are nevertheless different: impulsive purchasing involves strong, continuous urges to purchase, whereas unplanned purchasing is behavior whose purchase decisions are made inside a store, not before entering it. In other words, all impulsive purchases are unplanned, but not all unplanned purchases are impulsive. Why, then, do consumers engage in unplanned behavior? Many explanations have been suggested, but there is a consensus that consumers retain enough flexibility to change their plans midway instead of planning thoroughly in advance; conversely, if unplanned behavior is costly, consumers will find it difficult to change their prearranged plans. In the case of the exhibition hall examined in this study, visitors learn the hall's programs and plan which booths to visit in advance, because it is practically impossible to visit all of an exhibition's booths in their limited time. Therefore, if the booth recommendation system proposed in this study recommends booths that visitors may like, they can change their plans and visit the recommended booths. Such visiting behavior is similar to consumers' store visits or tourists' unplanned behavior at a tourist spot, and can be understood in the same context as the recent increase in unplanned behavior among tourism consumers influenced by information devices. The following research model was thus established. The model uses visitors' perceived performance of the booth recommendation system as the mediating variable; the factors affecting performance include trust in the system, visitors' knowledge level, the expected personalization of the system, and the system's threat to freedom. In addition, the causal relations between perceived performance and both satisfaction with unplanned behavior and intention to reuse the system were examined. Trust in the booth recommendation system consisted of second-order factors (competence, benevolence, and integrity), while the other constructs consisted of first-order factors. To verify the model, a booth recommendation system was developed and tested at 2011 DMC Culture Open, and data from 101 visitors were empirically analyzed. The results are as follows. First, visitors' trust was the most important factor in the booth recommendation system; visitors who used the system perceived its performance as a success based on their trust. Second, visitors' knowledge level also had significant effects on the performance of the system, indicating that a recommendation system's performance requires prior understanding: visitors with a higher understanding of the exhibition hall better appreciated the usefulness of the system. Third, expected personalization had no significant effect, a result that differs from previous studies, presumably because the system used in this study did not provide sufficiently personalized services. Fourth, the recommendation information provided by the system was not perceived as threatening or restricting one's freedom, meaning it was valued as useful. Lastly, high performance of the booth recommendation system led to high satisfaction with unplanned behavior and high intention to reuse the system. In sum, to analyze the effects of a booth recommendation system on visitors' unplanned booth visits, empirical data were examined based on unplanned behavior theory, and useful suggestions were made for the establishment and design of future booth recommendation systems. Future work should employ more elaborate survey questions and subjects.
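The paper does not specify the recommendation algorithm behind its booth recommendation system, so the following is a purely illustrative item-based collaborative filter: booths a visitor has not yet visited are scored by their co-visitation similarity to booths already visited:

```python
import numpy as np

# Binary visit matrix: rows = visitors, columns = booths (toy data).
visits = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
])

# Cosine similarity between booths, based on co-visitation.
norms = np.linalg.norm(visits, axis=0)
sim = visits.T @ visits / np.outer(norms, norms)

def recommend(visitor: int, k: int = 2) -> list[int]:
    """Score unvisited booths by similarity to the booths already visited."""
    scores = sim @ visits[visitor]
    scores[visits[visitor] == 1] = -np.inf  # never re-recommend visited booths
    return list(np.argsort(scores)[::-1][:k])

print(recommend(1))  # suggest unplanned booths for visitor 1
```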

Effects of Joining a Coalition Loyalty Program: How the Brand Affects Brand Loyalty Based on Brand Preference (브랜드 선호에 따라 제휴 로열티 프로그램 가입이 가맹점 브랜드 충성도에 미치는 영향)

  • Rhee, Jin-Hwa
    • Journal of Distribution Research, v.17 no.1, pp.87-115, 2012
  • Introduction: A loyalty program is one of the most common marketing mechanisms today (Lacey & Sneath, 2006; Nunes & Dreze, 2006; Uncles et al., 2003), and in recent years the coalition loyalty program has become notable as one of its more advanced forms. In the past, loyalty programs were operated independently by a single product brand or a single retail channel brand. Companies using a coalition loyalty program now share their programs as one single service, and participating companies continue to receive the benefits of their existing programs as well as positive spillover effects from the other participating network companies. Instead of earning or spending points at a single retail channel or brand, consumers have more opportunities to use their points and can purchase the products of other participating companies. Issues related to the form of a loyalty program are essentially connected to consumers' perceived convenience of using it, which matters for distribution companies' strategic marketing plans. Although the coalition loyalty program is a popular corporate marketing strategy, only a few studies have been published on it. Compared with an independent loyalty program, a coalition loyalty program operated by a third-party partnership has the following constraints: companies cannot autonomously modify the program's structure for their individual benefit, and there is no guarantee of continued operation and participation beyond the signed contract. Thus, it is as important to study how a coalition loyalty program affects a company's success, and through what process, as it is to study the effects of independent programs. This study complements the lack of research on coalition loyalty programs. Its purpose is to find out how consumer loyalty affects affiliated brands, and through what cause and mechanism. Past studies of loyalty programs provided only analyses of variation in performance; this study focuses specifically on the causes of those results. The study is designed to verify three primary objectives. First, based on the switching-barrier literature (Fornell, 1992; Ping, 1993; Jones et al., 2000) on the causes of loyalty to coalition brands, 'brand attractiveness' and 'brand switching cost' are taken as antecedents, and the resulting changes in 'brand loyalty' are investigated. Second, the influence of consumers' perception and attitude prior to joining the coalition loyalty program, the program's influence on retail brands, and the spillover effects on brand attractiveness and switching cost after joining are verified. Finally, the study applies 'prior brand preference' as a variable and examines the relationship between the effects of the coalition loyalty program and the level of prior preference. Hypothesis 1: After joining a coalition loyalty program, a more preferred brand (compared to a less preferred brand) will show a stronger influence of brand attractiveness on brand loyalty. Hypothesis 2: After joining a coalition loyalty program, a less preferred brand (compared to a more preferred brand) will show a stronger influence of brand switching cost on brand loyalty. Hypothesis 3: The (1) brand attractiveness and (2) brand switching cost of a more preferred brand (before joining the coalition loyalty program) will draw more positive effects from the (1) program attractiveness and (2) program switching cost of the coalition loyalty program (after joining) than those of a less preferred brand. Hypothesis 4: After joining a coalition loyalty program, the (1) brand attractiveness and (2) brand switching cost of a more preferred brand will receive more positive impacts from the (1) program attractiveness and (2) program switching cost of the coalition loyalty program than those of a less preferred brand. Hypothesis 5: After joining a coalition loyalty program, the (1) brand attractiveness and (2) brand switching cost of a more preferred brand will receive smaller impacts from the (1) brand attractiveness and (2) brand switching cost of other brands with different preference levels that joined simultaneously, than those of a less preferred brand. Method: To validate the hypotheses, this study applies an experimental method using a virtual coalition loyalty program scenario built on actual brands that consumers have used or can use. The experiment was conducted twice with each participant. In the first experiment, participants were presented with six coalition brands selected based on prior research, and the survey measured brand attractiveness, switching cost, and loyalty after they chose a high-preference brand and a low-preference brand. A one-hour break was provided before the second experiment, in which the virtual coalition loyalty program "SaveBag" was introduced and participants were informed that "SaveBag" would be a new alliance of the six coalition brands from the first experiment. The attractiveness and switching cost of the coalition program were measured, and the brand attractiveness and switching cost of the high- and low-preference brands were measured by the same method as in the first experiment. Limitations and future research: This study is limited in that it shows the effects of a coalition loyalty program using a virtual scenario instead of actual program data; future studies should compare and analyze CLP panel data to provide more in-depth information. In addition, this study only examined the effectiveness of the coalition form. Since there are two types of loyalty program, single and coalition, and the success of a coalition program depends on brand power in the market and prior customer attitudes, comparing the effects of the two program types will be an interesting direction for future work.
