
A Three-year Study on the Leaf and Soil Nitrogen Contents Influenced by Irrigation Frequency, Clipping Return or Removal and Nitrogen Rate in a Creeping Bentgrass Fairway (크리핑 벤트그라스 훼어웨이에서 관수횟수·예지물과 질소시비수준이 엽조직 및 토양 질소함유량에 미치는 효과)

  • Kim, Kyoung-Nam;Shearman, Robert
    • Asian Journal of Turfgrass Science
    • /
    • v.11 no.2
    • /
    • pp.105-115
    • /
    • 1997
  • Responses of 'Penncross' creeping bentgrass turf to various fairway cultural practices are not well established or supported by research results. This study was initiated to evaluate the effects of irrigation frequency, clipping return or removal, and nitrogen rate on leaf and soil nitrogen content in 'Penncross' creeping bentgrass (Agrostis palustris Huds.) turf. A 'Penncross' creeping bentgrass turf was established in 1988 on a Sharpsburg silty-clay loam (Typic Argiudoll). The experiment was conducted from 1989 to 1991 under nontraffic conditions. A split-split-plot experimental design was used. Daily or biweekly irrigation, clipping return or removal, and 5, 15, or 25 g N $m^{-2}$ $yr^{-1}$ were the main-, sub-, and sub-sub-plot treatments, respectively. Treatments were replicated 3 times in a randomized complete block design. The turf was mowed 4 times weekly at a 13 mm height of cut. Leaf tissue nitrogen content was analyzed twice in 1989 and three times in both 1990 and 1991. Leaf samples were collected from turfgrass plants in the treatment plots, dried immediately at 70˚C for 48 hours, and evaluated for total-N content using the Kjeldahl method. Concurrently, six soil cores (18 mm diam. by 200 mm depth) were collected, air dried, and analyzed for total-N content. Nitrogen analyses of the soil and leaf samples were made in the Soil and Plant Analytical Laboratory at the University of Nebraska, Lincoln, USA. Data were analyzed as a split-split-plot with analysis of variance (ANOVA), using the General Linear Model procedures of the Statistical Analysis System. The nitrogen content of the leaf tissue in creeping bentgrass fairway turf varies with clipping recycling, nitrogen application rate, and time after establishment. Leaf tissue nitrogen content increased with clipping return and nitrogen rate. Plots treated with clipping return had 8% and 5% more leaf tissue nitrogen in 1989 and 1990, respectively, compared to plots treated with clipping removal. Plots receiving the high-N level (25 g N $m^{-2}$ $yr^{-1}$) had 10%, 17%, and 13% more leaf tissue nitrogen in 1989, 1990, and 1991, respectively, when compared with plots receiving the low-N level (5 g N $m^{-2}$ $yr^{-1}$). Overall observations during the study indicated that leaf tissue nitrogen content increased at every nitrogen rate with time after establishment. At the low-N level (5 g N $m^{-2}$ $yr^{-1}$), plots sampled in 1991 had 15% more leaf nitrogen content than plots sampled in 1989. A similar response was found for the high-N level (25 g N $m^{-2}$ $yr^{-1}$): plots analyzed in 1991 were 18% higher than plots analyzed in 1989. No significant treatment effects were observed for soil nitrogen content over the first 3 years after establishment. Strategic management is necessary for golf course turf depending on whether clippings are returned or removed, and the turf fertilization program should be designed with clipping recycling in mind. It is recommended that soil and leaf tissue of golf course turf be analyzed regularly and that the fertilization program be developed through interpretation of these analytical results. In golf courses where clippings are recycled, the fertilization program should be adjusted to apply 20% to 30% less nitrogen than in clipping-removed areas.
Key words: Agrostis palustris Huds., 'Penncross' creeping bentgrass fairway, Irrigation frequency, Clipping return, Nitrogen rate, Leaf nitrogen content, Soil nitrogen content.
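
The split-split-plot ANOVA described in the abstract maps onto a mixed model with separate error strata for the main, sub, and sub-sub plots. Below is a minimal sketch of how such a model could be fit today in Python with statsmodels; it is not the paper's SAS GLM analysis, and the column names (block, irrigation, clipping, n_rate, leaf_n) and the long-format CSV are hypothetical.

```python
# Hypothetical re-analysis sketch for a split-split-plot design:
# irrigation is the main-plot factor, clipping the sub-plot factor,
# and nitrogen rate the sub-sub-plot factor.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("leaf_nitrogen.csv")  # hypothetical long-format data

# Variance components for the main plots (block x irrigation) and
# sub plots (block x irrigation x clipping) reproduce the
# split-split-plot error strata.
df["mainplot"] = df["block"].astype(str) + ":" + df["irrigation"].astype(str)
df["subplot"] = df["mainplot"] + ":" + df["clipping"].astype(str)

model = smf.mixedlm(
    "leaf_n ~ C(irrigation) * C(clipping) * C(n_rate)",
    data=df,
    groups="block",
    vc_formula={"mainplot": "0 + C(mainplot)", "subplot": "0 + C(subplot)"},
)
print(model.fit().summary())
```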


Distribution Pattern of Inhibitory and Excitatory Nerve Terminals in the Rat Genioglossus Motoneurons (흰쥐의 턱끝혀근 지배 운동신경원에 대한 억제성 및 흥분성 신경종말의 분포 양식)

  • Moon, Yong-Suk
    • Journal of Life Science
    • /
    • v.21 no.1
    • /
    • pp.102-109
    • /
    • 2011
  • The genioglossus muscle plays an important role in maintaining upper airway patency during inspiration; if this muscle does not contract normally, breathing disorders occur due to closure of the upper airway. Such failures are thought to reflect disorders of synaptic input to the genioglossus motoneurons, yet little is known about this input. In this study, the distribution of GABA-, glycine-, and glutamate-like immunoreactivity in axon terminals on dendrites of rat genioglossus motoneurons, stained intracellularly with horseradish peroxidase (HRP), was examined using postembedding immunogold histochemistry on serial ultrathin sections. The motoneurons were divided into four compartments: the soma and the primary (Pd), intermediate (Id), and distal dendrites (Dd). Quantitative analysis of 157, 188, 181, and 96 boutons synapsing on 3 somata, 14 Pd, 35 Id, and 28 Dd, respectively, was performed. Of the boutons studied, 71.9% showed immunoreactivity for at least one of the three amino acids: 32.8% were immunopositive for GABA and/or glycine and 39.1% for glutamate. Among the former, 14.2% showed glycine immunoreactivity only, 13.3% were immunoreactive to both glycine and GABA, and the remainder (5.3%) showed immunoreactivity for GABA only. Most boutons immunoreactive to inhibitory amino acids contained a mixture of flattened, oval, and round synaptic vesicles, whereas most boutons immunoreactive to excitatory amino acids contained clear, spherical synaptic vesicles with a few dense-cored vesicles. When the inhibitory and excitatory boutons were compared between the soma and the three dendritic segments, the proportion of inhibitory to excitatory boutons was high in the Dd (23.9% vs. 43.8%) but somewhat lower in the soma (35.7% vs. 38.2%), Pd (34.6% vs. 37.8%), and Id (33.1% vs. 38.7%). The percentage of synaptic covering by inhibitory synaptic boutons decreased in the order of soma, Pd, Id, and Dd, but this trend did not hold for the excitatory boutons. The present study provides evidence that the spatial distribution patterns of inhibitory and excitatory synapses differ between the soma and the dendritic tree of rat genioglossus motoneurons.

Automatic Text Extraction from News Video using Morphology and Text Shape (형태학과 문자의 모양을 이용한 뉴스 비디오에서의 자동 문자 추출)

  • Jang, In-Young;Ko, Byoung-Chul;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.4
    • /
    • pp.479-488
    • /
    • 2002
  • In recent years the amount of digital video in use has risen dramatically, keeping pace with the increasing use of the Internet, and consequently an automated method is needed for indexing digital video databases. Textual information appearing in a digital video, both superimposed captions and embedded scene text, can be a crucial clue for video indexing. In this paper, a new method is presented to extract both superimposed and embedded scene texts from freeze-frames of news video. The algorithm is summarized in the following three steps. In the first step, a color image is converted into a gray-level image, contrast stretching is applied to enhance the contrast of the input image, and a modified local adaptive thresholding is then applied to the contrast-stretched image. The second step is divided into three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose+CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose+CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two resulting images to further eliminate false-positive components. In the third, filtering step, the characteristics of each component are used, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels and the ratio of the minor to the major axis of each bounding box. Acceptable results have been obtained using the proposed method on 300 news images, with a recognition rate of 93.6%. The method also performs well on various kinds of images when the size of the structuring element is adjusted.
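
As a rough illustration of the preprocessing and morphological steps described above, the sketch below applies contrast stretching, adaptive thresholding, and the (OpenClose+CloseOpen)/2 operation with OpenCV. It is a minimal approximation, not the authors' implementation: the 3x3 structuring element, the threshold window size, and the use of cv2.adaptiveThreshold as a stand-in for the paper's modified local adaptive thresholding are all assumptions.

```python
# Minimal sketch of the candidate text-region step (assumed parameters,
# not the authors' code).
import cv2
import numpy as np

img = cv2.imread("news_frame.png")                    # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Contrast stretching: map the observed intensity range onto [0, 255].
stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)

# Stand-in for the paper's modified local adaptive thresholding.
binary = cv2.adaptiveThreshold(stretched, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 15, 5)

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

def open_close_avg(im, k):
    """(OpenClose + CloseOpen) / 2 as described in the abstract."""
    oc = cv2.morphologyEx(cv2.morphologyEx(im, cv2.MORPH_OPEN, k),
                          cv2.MORPH_CLOSE, k)
    co = cv2.morphologyEx(cv2.morphologyEx(im, cv2.MORPH_CLOSE, k),
                          cv2.MORPH_OPEN, k)
    return ((oc.astype(np.uint16) + co.astype(np.uint16)) // 2).astype(np.uint8)

smoothed = open_close_avg(binary, kernel)
# Subtracting the smoothed image from the thresholded one suppresses
# non-text components; candidate text regions survive in the difference.
candidates = cv2.absdiff(binary, smoothed)
```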

Classification of Urban Green Space Using Airborne LiDAR and RGB Ortho Imagery Based on Deep Learning (항공 LiDAR 및 RGB 정사 영상을 이용한 딥러닝 기반의 도시녹지 분류)

  • SON, Bokyung;LEE, Yeonsu;IM, Jungho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.24 no.3
    • /
    • pp.83-98
    • /
    • 2021
  • Urban green space is an important component for enhancing urban ecosystem health, so identifying the spatial structure of urban green space is required to manage a healthy urban ecosystem. The Ministry of Environment has provided the level 3 land cover map (the highest-resolution (1 m) map) with a total of 41 classes since 2010. However, specific urban green information, such as street trees, is identified in that map only as grassland or is not classified as a vegetated area at all. Therefore, this study classified detailed urban green information (i.e., tree, shrub, and grass), not included in the existing level 3 land cover map, using two types of high-resolution (<1 m) remote sensing data (airborne LiDAR and RGB ortho imagery) in Suwon, South Korea. U-Net, a deep learning approach to image segmentation, was adopted to classify detailed urban green space. A total of three classification models (LRGB10, LRGB5, and RGB5) were proposed, depending on the target number of classes and the types of input data. The average overall accuracies for the test sites were 83.40% (LRGB10), 89.44% (LRGB5), and 74.76% (RGB5). Among the three models, LRGB5, which uses both airborne LiDAR and RGB ortho imagery with 5 target classes (tree, shrub, grass, building, and others), resulted in the best performance. The area ratio of total urban green space (based on tree, shrub, and grass information) for the entire city of Suwon was 45.61% (LRGB10), 43.47% (LRGB5), and 44.22% (RGB5). All models were able to provide an additional 13.40% of urban tree information on average when compared to the existing level 3 land cover map. These urban green classification results are expected to be utilized in various urban green studies and decision-making processes, as they provide detailed information on urban green space.
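
The key architectural point is that the LiDAR-derived height and the RGB bands are stacked into a single multi-channel input for U-Net. Below is a minimal sketch of an LRGB5-style setup using PyTorch and the segmentation_models_pytorch package; the 4-channel layout (height + RGB) and the 5 classes follow the abstract, while the backbone, file names, and normalization are assumptions, not the authors' configuration.

```python
# Sketch of an LRGB5-style model: LiDAR height + RGB stacked as 4 input
# channels, 5 output classes (tree, shrub, grass, building, others).
import numpy as np
import torch
import segmentation_models_pytorch as smp

ndsm = np.load("ndsm_patch.npy")   # (H, W) LiDAR-derived height, hypothetical
rgb = np.load("rgb_patch.npy")     # (H, W, 3) ortho imagery, hypothetical
# Patch dims assumed divisible by 32 for the encoder's downsampling.

x = np.concatenate([rgb / 255.0, ndsm[..., None] / ndsm.max()], axis=-1)
x = torch.from_numpy(x.transpose(2, 0, 1)).float().unsqueeze(0)  # (1, 4, H, W)

model = smp.Unet(
    encoder_name="resnet34",   # assumed backbone; the paper specifies U-Net only
    encoder_weights=None,      # 4-channel input precludes stock ImageNet weights
    in_channels=4,
    classes=5,
)

with torch.no_grad():
    logits = model(x)                      # (1, 5, H, W)
    pred = logits.argmax(dim=1)            # per-pixel class map
```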

The prediction of the stock price movement after IPO using machine learning and text analysis based on TF-IDF (증권신고서의 TF-IDF 텍스트 분석과 기계학습을 이용한 공모주의 상장 이후 주가 등락 예측)

  • Yang, Suyeon;Lee, Chaerok;Won, Jonggwan;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.237-262
    • /
    • 2022
  • There has been growing interest in IPOs (Initial Public Offerings) due to the profitable returns that IPO stocks can offer investors. However, IPOs can also be speculative investments involving substantial risk, because shares tend to be volatile and the supply of IPO shares is often highly limited. Therefore, it is crucially important that IPO investors be well informed about the issuing firm and the market before deciding whether to invest. Unlike institutional investors, individual investors are at a disadvantage, since there are few opportunities for individuals to obtain information on IPOs. In this regard, the purpose of this study is to provide individual investors with information they may consider when making an IPO investment decision. This study presents a model that uses machine learning and text analysis to predict whether an IPO stock price will move up or down after the first 5 trading days. Our sample includes 691 Korean IPOs from June 2009 to December 2020. The input variables for the prediction are three tone variables created from IPO prospectuses and quantitative variables that are firm-specific, issue-specific, or market-specific. The three prospectus tone variables indicate the percentages of positive, neutral, and negative sentences in a prospectus, respectively. Only the sentences in the Risk Factors section of a prospectus were considered for the tone analysis in this study. All sentences were classified as 'positive', 'neutral', or 'negative' via text analysis using TF-IDF (Term Frequency - Inverse Document Frequency). The tone of each sentence was measured by machine learning rather than a lexicon-based approach, due to the lack of sentiment dictionaries suitable for Korean text analysis in the context of finance. For this reason, the training set was created by randomly selecting 10% of the sentences from each prospectus, and the sentences in the training set were labeled manually. Then, based on the training set, a Support Vector Machine model was used to predict the tone of the sentences in the test set. Finally, the machine learning model calculated the percentages of positive, neutral, and negative sentences in each prospectus. To predict the price movement of an IPO stock, four different machine learning techniques were applied: Logistic Regression, Random Forest, Support Vector Machine, and Artificial Neural Network. According to the results, models that use the quantitative variables and the prospectus tone variables together show higher accuracy than models that use only the quantitative variables. More specifically, prediction accuracy improved by 1.45 percentage points in the Random Forest model, 4.34 percentage points in the Artificial Neural Network model, and 5.07 percentage points in the Support Vector Machine model. Among these techniques, the Artificial Neural Network model using both the quantitative variables and the prospectus tone variables achieved the highest prediction accuracy, 61.59%. The results indicate that the tone of a prospectus is a significant factor in predicting the price movement of an IPO stock. In addition, the McNemar test was used to verify the statistically significant differences between the models: when the model using only quantitative variables was compared with the model using both the quantitative variables and the prospectus tone variables, the improvement in predictive performance was confirmed to be significant at the 1% level.
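
The sentence-tone step described above (TF-IDF features feeding an SVM trained on a manually labeled 10% sample) maps directly onto a standard scikit-learn pipeline. Below is a minimal sketch under that reading; the tokenizer settings, variable names, and placeholder English sentences (the real data is Korean) are assumptions, not the authors' code.

```python
# Minimal sketch of the TF-IDF + SVM sentence tone classifier
# (assumed preprocessing; the paper's Korean tokenization is not specified).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical manually labeled sentences from Risk Factors sections.
train_sentences = [
    "the company may incur material losses",
    "the offering proceeds will fund operations",
    "strong revenue growth is expected to continue",
]
train_labels = ["negative", "neutral", "positive"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # assumed settings
    LinearSVC(),
)
clf.fit(train_sentences, train_labels)

# Tone variables for one prospectus: share of each class among its sentences.
prospectus_sentences = [
    "litigation risk may adversely affect results",
    "demand for the product is expected to grow",
]
pred = clf.predict(prospectus_sentences)
tones = {c: (pred == c).mean() for c in ("positive", "neutral", "negative")}
print(tones)
```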

Estimation of Dynamic Material Properties for Fill Dam : II. Nonlinear Deformation Characteristics (필댐 제체 재료의 동적 물성치 평가 : II. 비선형 동적 변형특성)

  • Lee, Sei-Hyun;Kim, Dong-Soo;Choo, Yun-Wook;Choo, Hyek-Kee
    • Journal of the Korean Geotechnical Society
    • /
    • v.25 no.12
    • /
    • pp.87-105
    • /
    • 2009
  • Nonlinear dynamic deformation characteristics, expressed in terms of the normalized shear modulus reduction curve ($G/G_{max}-\log\gamma$, the $G/G_{max}$ curve) and the damping curve ($D-\log\gamma$), are, together with the shear wave velocity profile ($V_s$ profile), important input parameters in the seismic analysis of new or existing fill dams. In this paper, reasonable and economical methods to evaluate the nonlinear dynamic deformation characteristics of the core zone and the rockfill zone are presented. For the core zone, 111 $G/G_{max}$ curves and 98 damping curves that meet the requirements of core material were compiled, and representative curves and ranges were proposed for three ranges of confining pressure (0~100 kPa, 100~200 kPa, and more than 200 kPa). The reliability of the proposed curves for the core zone was verified by comparison with resonant column test results for two kinds of core materials. For the rockfill zone, 135 $G/G_{max}$ curves and 65 damping curves were compiled from test results for gravelly materials obtained with large-scale testing equipment. Representative curves and ranges for $G/G_{max}$ were proposed for three ranges of confining pressure (0~50 kPa, 50~100 kPa, and more than 100 kPa), and those for damping were proposed independently of confining pressure. The reliability of the proposed curves for the rockfill zone was verified by comparison with large-scale triaxial test results for rockfill materials of the B-dam, which was under construction.
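
The paper derives its curves empirically, but as a reminder of what a normalized modulus reduction curve encodes, the widely used hyperbolic model gives the typical shape: $G/G_{max} = 1/(1+\gamma/\gamma_{ref})$, where the reference strain $\gamma_{ref}$ grows with confining pressure. The sketch below is this generic textbook form for illustration only, not the representative curves proposed in the paper; the parameter values are assumptions.

```python
# Generic hyperbolic modulus-reduction sketch (illustrative only; the paper's
# representative curves are empirical and pressure-dependent).
import numpy as np

def g_over_gmax(gamma, gamma_ref):
    """Hyperbolic model: G/Gmax = 1 / (1 + gamma/gamma_ref)."""
    return 1.0 / (1.0 + gamma / gamma_ref)

gamma = np.logspace(-4, 0, 5)  # shear strain in %, from 1e-4% to 1%
# Assumed reference strains: higher confining pressure -> larger gamma_ref,
# i.e., slower modulus reduction, matching the pressure-dependent ranges
# proposed in the paper.
for sigma_c, gamma_ref in [(50, 0.01), (100, 0.02), (200, 0.04)]:
    print(f"sigma_c={sigma_c} kPa:", np.round(g_over_gmax(gamma, gamma_ref), 2))
```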

Studies on the Functional Interrelation between the Vestibular Canals and the Extraocular Muscles (미로반규관(迷路半規管)과 외안근(外眼筋)의 기능적(機能的) 관계(關係)에 관(關)한 연구(硏究))

  • Kim, Jeh-Hyub
    • The Korean Journal of Physiology
    • /
    • v.8 no.2
    • /
    • pp.1-17
    • /
    • 1974
  • This experiment was designed to explore the specific functional interrelations between the vestibular semicircular canals and the extraocular muscles, which may disclose the neural organization connecting the vestibular canals and each ocular motor nucleus in the brain stem for the vestibulo-ocular reflex mechanism. In urethane-anesthetized rabbits, a fine wire, insulated except at the cut cross-section of its tip, was inserted into the canals close to the ampullary receptor organs through minute holes made in the osseous canal wall, for monopolar stimulation of each canal nerve. All extraocular muscles of both eyes were ligated and cut at their insertions, and the isometric tension and EMG responses of the extraocular muscles to vestibular canal nerve stimulation were recorded by means of a physiographic recorder. Upon stimulation of a semicircular canal nerve, the direction of the eye movement was also observed. The experimental results were as follows. 1) Single canal nerve stimulation with high-frequency square waves (240 cps, 0.1 msec) caused excitation of three extraocular muscles and inhibition of the remaining three muscles in both eyes; stimulation of any canal nerve of a unilateral labyrinth caused excitation (contraction) of the superior rectus, superior oblique, and medial rectus muscles and inhibition (relaxation) of the inferior rectus, inferior oblique, and lateral rectus muscles in the ipsilateral eye, and the opposite events in the contralateral eye. 2) By overlapped stimulation of the three canal nerves of a unilateral labyrinth, unidirectional (excitatory or inhibitory) summation of the individual canal effects on a given extraocular muscle was demonstrated, indicating that the three canals of a unilateral vestibular system exert similar effects on a given extraocular muscle. 3) Based on the above experimental evidence, a simple rule is proposed by which one can define the vestibular excitatory and inhibitory input sources to all the extraocular muscles: the superior rectus, superior oblique, and medial rectus muscles receive excitatory impulses from the ipsilateral vestibular canals, and the inferior rectus, inferior oblique, and lateral rectus muscles from the contralateral canals; the opposite relationship applies for vestibular inhibitory impulses to the extraocular muscles. 4) According to the specific direction of the eye movements induced by individual canal nerve stimulation, an extraocular muscle exerting the major role (a muscle of primary contraction) and two muscles of synergistic contraction could be differentiated in both eyes. 5) When these experimental results were compared to the well-known observations of Cohen et al. (1964) made in cats, the extraocular muscles of primary contraction were the same, but those of synergistic contraction were partially different. Moreover, the oblique muscle responses to each canal nerve excitation appeared to be identical, whereas the responses of the horizontal (medial and lateral) and vertical (superior and inferior) rectus muscles showed considerable differences. By critical analysis of these data, the author was able to locate theoretical contradictions in the observations of Cohen et al. but not in the author's own results. 6) An attempt was also made to compare the functional observations of this experiment with the morphological findings of Carpenter and his associates obtained from degeneration experiments in monkeys, and significant coincidences were found between these two works of different approach. In summary, the author has demonstrated that the well-known observations of Cohen et al. on the vestibulo-ocular interrelation contain important experimental errors, which can be proved by theoretical evaluation and substantiated by a series of experiments. Based on such experimental evidence, a new rule is proposed to define the interrelation between the vestibular canals and the extraocular muscles.
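
The rule in point 3 is simple enough to state as a lookup table. The sketch below encodes it for reference; it is an illustration of the proposed rule, not software from the study.

```python
# The proposed rule from point 3 as a lookup: which side's canals send
# excitatory input to each extraocular muscle (inhibitory input comes
# from the opposite side).
EXCITATORY_SOURCE = {
    "superior rectus": "ipsilateral",
    "superior oblique": "ipsilateral",
    "medial rectus": "ipsilateral",
    "inferior rectus": "contralateral",
    "inferior oblique": "contralateral",
    "lateral rectus": "contralateral",
}

def input_source(muscle: str, kind: str) -> str:
    """Return 'ipsilateral' or 'contralateral' for a muscle and input kind."""
    side = EXCITATORY_SOURCE[muscle]
    if kind == "excitatory":
        return side
    # The rule is symmetric: inhibitory input comes from the opposite side.
    return "contralateral" if side == "ipsilateral" else "ipsilateral"

assert input_source("lateral rectus", "inhibitory") == "ipsilateral"
```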


Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. An early neural network of this kind, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. However, decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires a great deal of effort to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases. The first uses the ConvNet as a fixed feature extractor; the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) is used to compute feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying features of high dimensional complexity extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation, because it carries more information about an image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy, since they are extracted from the same ConvNet. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning can be improved. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. Moreover, our proposed approach achieved 75.6% accuracy, compared to the 73.9% achieved by the FC7 layer, on the Caltech-256 dataset; 73.1% accuracy, compared to the 69.2% achieved by the FC8 layer, on the VOC07 dataset; and 52.2% accuracy, compared to the 48.7% achieved by the FC7 layer, on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.
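
The three-step pipeline above (extract the three fully connected layer activations, concatenate them into a 9192-dimensional vector, reduce with PCA, then train a classifier) can be sketched with PyTorch and scikit-learn. This is a minimal sketch, not the authors' code: the random placeholder batch, the PCA target dimension, and the linear SVM classifier are assumptions.

```python
# Sketch of the pipeline: concatenate AlexNet FC6/FC7/FC8 activations
# (4096 + 4096 + 1000 = 9192 dims), reduce with PCA, train a classifier.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

# Capture the outputs of the three fully connected layers with hooks.
acts = {}
def hook(name):
    def fn(module, inp, out):
        acts[name] = out.detach()
    return fn

alexnet.classifier[1].register_forward_hook(hook("fc6"))  # Linear(9216, 4096)
alexnet.classifier[4].register_forward_hook(hook("fc7"))  # Linear(4096, 4096)
alexnet.classifier[6].register_forward_hook(hook("fc8"))  # Linear(4096, 1000)

def extract(batch):  # batch: (N, 3, 224, 224), assumed already normalized
    with torch.no_grad():
        alexnet(batch)
    return torch.cat([acts["fc6"], acts["fc7"], acts["fc8"]], dim=1)  # (N, 9192)

# Hypothetical placeholder batch and labels standing in for a real dataset.
X_train = extract(torch.randn(8, 3, 224, 224)).numpy()
y_train = [0, 1, 0, 1, 0, 1, 0, 1]

pca = PCA(n_components=4)            # target dimension is an assumption
clf = LinearSVC().fit(pca.fit_transform(X_train), y_train)
```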

The Evaluation of SUV Variations According to the Errors of Entering Parameters in the PET-CT Examinations (PET/CT 검사에서 매개변수 입력오류에 따른 표준섭취계수 평가)

  • Kim, Jia;Hong, Gun Chul;Lee, Hyeok;Choi, Seong Wook
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.43-48
    • /
    • 2014
  • Purpose: In PET/CT images, the SUV (standardized uptake value) enables quantitative assessment of the biological changes in organs and serves as an index for distinguishing whether a lesion is malignant. It is therefore very important to enter the parameters that affect the SUV correctly. The purpose of this study is to establish an allowable error range for the SUV by measuring the differences in results caused by input errors in the Activity, Weight, and uptake Time parameters. Materials and Methods: Three inserts, Hot, Teflon, and Air, were placed in a 1994 NEMA Phantom. The phantom was filled with 27.3 MBq/mL of 18F-FDG, and the ratio of hotspot-area activity to background-area activity was set at 4:1. After scanning, images were re-reconstructed after introducing input errors of ±5%, ±10%, ±15%, ±30%, and ±50% from the original data in the Activity, Weight, and uptake Time parameters. ROIs (regions of interest) were set, one in each insert area and four in the background areas. $SUV_{mean}$ and percentage differences were calculated and compared for each area. Results: The $SUV_{mean}$ of the Hot, Teflon, Air, and BKG (background) areas in the original images were 4.5, 0.02, 0.1, and 1.0, respectively. With the Activity errors, the minimum and maximum $SUV_{mean}$ were 3.0 and 9.0 in the Hot, 0.01 and 0.04 in the Teflon, 0.1 and 0.3 in the Air, and 0.6 and 2.0 in the BKG areas, with percentage differences ranging uniformly from -33% to +100%. With the Weight errors, the $SUV_{mean}$ ranged from 2.2 to 6.7 in the Hot, 0.01 to 0.03 in the Teflon, 0.09 to 0.28 in the Air, and 0.5 to 1.5 in the BKG areas; the percentage differences ranged from -50% to +50%, except in the Teflon area, where they ranged from -50% to +52%. With the uptake Time errors, the $SUV_{mean}$ ranged from 3.8 to 5.3 in the Hot, 0.01 to 0.02 in the Teflon, 0.1 to 0.2 in the Air, and 0.8 to 1.2 in the BKG areas. The percentage differences ranged from +17% to -14% in the Hot and BKG areas, from -50% to +52% in the Teflon area, and from -12% to +20% in the Air area. Conclusion: As shown in the results, Activity and Weight errors must be kept within ±5% if the allowable SUV error range is set at 5%. The dose calibrator and the scales must therefore be calibrated to within a ±5% error range, because they directly determine the Activity and Weight values. The Time error showed different error ranges depending on the type of insert: the error stayed within 5% in the Hot and BKG areas when the Time error was within ±15%. Clock discrepancies must therefore be considered whenever two or more clocks, including the scanner's, are used during the examinations.
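
The sensitivity the study measures follows directly from how the SUV is computed: the image activity concentration is divided by the injected dose (decay-corrected to scan time) per unit body weight, so Activity and Weight enter linearly while Time enters through the decay exponent. Below is a minimal sketch of that standard body-weight SUV formula; the numeric inputs are placeholders, not the study's data.

```python
# Standard body-weight SUV with F-18 decay correction (illustrative values).
# SUV = C_img / (injected_activity_at_scan / body_weight)
F18_HALF_LIFE_MIN = 109.77  # physical half-life of 18F, minutes

def suv(c_img_bq_per_ml, injected_bq, weight_g, uptake_min):
    """SUV from image concentration, injected dose, weight, and uptake time."""
    decayed = injected_bq * 2.0 ** (-uptake_min / F18_HALF_LIFE_MIN)
    return c_img_bq_per_ml / (decayed / weight_g)

base = suv(5000.0, 370e6, 70000.0, 60.0)  # placeholder inputs

# Entering a wrong weight scales the SUV linearly: a +10% weight error
# inflates the SUV by exactly +10%...
print(suv(5000.0, 370e6, 77000.0, 60.0) / base)  # -> 1.10

# ...whereas a time error acts through the decay term: +15 min here
# changes the SUV by a factor of 2**(15/109.77), roughly 1.10.
print(suv(5000.0, 370e6, 70000.0, 75.0) / base)
```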


Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a system that produces an optimal asset allocation portfolio for investors by using financial engineering algorithms, without human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest an asset allocation to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model: a simple but quite intuitive portfolio strategy in which assets are allocated, using optimization techniques, so as to minimize the risk of the portfolio while maximizing its expected return. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions are often found in which weight is allocated to only a few assets. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns, which in turn produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on the asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns make up the P matrix, and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk parity model are used, with the value-weighted market portfolio and the equal-weighted market portfolio as benchmark indexes. We collected the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values; the training period runs from 2008 to 2015 and the testing period from 2016 to 2018. Our suggested intelligent views model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value, and its maximum drawdown was -20.8%, the lowest value. Its Sharpe ratio, which measures the return-to-risk ratio, was also the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective views model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
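
For reference, the Black-Litterman combination step described above has a closed form: with prior covariance $\Sigma$, scaling $\tau$, pick matrix $P$, view returns $Q$, and view uncertainty $\Omega$, the posterior expected returns are $\mu_{BL} = [(\tau\Sigma)^{-1} + P^{T}\Omega^{-1}P]^{-1}[(\tau\Sigma)^{-1}\Pi + P^{T}\Omega^{-1}Q]$, where $\Pi$ is the implied equilibrium return vector. The sketch below implements this textbook formula with toy numbers; it uses neither the paper's SVM-generated views nor its KOSPI 200 data.

```python
# Textbook Black-Litterman posterior returns (toy data, not the paper's
# SVM views or KOSPI 200 inputs).
import numpy as np

def black_litterman(sigma, w_mkt, P, Q, omega, tau=0.05, risk_aversion=2.5):
    """Combine implied equilibrium returns with views (pick matrix P,
    view returns Q, view uncertainty omega)."""
    pi = risk_aversion * sigma @ w_mkt          # implied equilibrium returns
    ts_inv = np.linalg.inv(tau * sigma)
    om_inv = np.linalg.inv(omega)
    post_prec = ts_inv + P.T @ om_inv @ P       # posterior precision
    mu_bl = np.linalg.solve(post_prec, ts_inv @ pi + P.T @ om_inv @ Q)
    return pi, mu_bl

sigma = np.array([[0.04, 0.01], [0.01, 0.09]])  # toy 2-asset covariance
w_mkt = np.array([0.6, 0.4])                    # toy market weights
P = np.array([[1.0, -1.0]])                     # view: asset 1 outperforms asset 2
Q = np.array([0.02])                            # by 2% (toy view return)
omega = np.array([[0.0004]])                    # toy view uncertainty

pi, mu_bl = black_litterman(sigma, w_mkt, P, Q, omega)
print("equilibrium:", pi, "posterior:", mu_bl)
```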