• Title/Summary/Keyword: Composite System

WHICH INFORMATION MOVES PRICES: EVIDENCE FROM DAYS WITH DIVIDEND AND EARNINGS ANNOUNCEMENTS AND INSIDER TRADING

  • Kim, Chan-Wung;Lee, Jae-Ha
    • The Korean Journal of Financial Studies
    • /
    • v.3 no.1
    • /
    • pp.233-265
    • /
    • 1996
  • We examine the impact of public and private information on price movements using the thirty DJIA stocks and twenty-one NASDAQ stocks. We find that the standard deviation of daily returns on information days (dividend announcement, earnings announcement, insider purchase, or insider sale) is much higher than on no-information days. Both types of public information matter at the NYSE, while insider trading does not, probably due to masked identification of insiders. Earnings announcements have the greatest impact for both DJIA and NASDAQ stocks, and there is some evidence of a positive impact of insider sales on the return volatility of NASDAQ stocks. There has been considerable debate, e.g., French and Roll (1986), over whether market volatility is due to public information or private information, the latter gathered through costly search and revealed only through trading. Public information is composed of (1) marketwide public information, such as regularly scheduled federal economic announcements (e.g., employment, GNP, leading indicators), and (2) company-specific public information, such as dividend and earnings announcements. Policy makers and corporate insiders have better access to marketwide private information (e.g., a new monetary policy decision made in the Federal Reserve Board meeting) and company-specific private information, respectively, compared to the general public. Ederington and Lee (1993) show that marketwide public information accounts for most of the observed volatility patterns in interest rate and foreign exchange futures markets. Company-specific public information is explored by Patell and Wolfson (1984) and Jennings and Starks (1985), who show that dividend and earnings announcements induce higher than normal volatility in equity prices. Kyle (1985), Admati and Pfleiderer (1988), Barclay, Litzenberger and Warner (1990), Foster and Viswanathan (1990), Back (1992), and Barclay and Warner (1993) show that private information held by informed traders and revealed through trading influences market volatility. Cornell and Sirri (1992) and Meulbroek (1992) investigate actual insider trading activity in a tender offer case and in prosecuted illegal trading cases, respectively. This paper examines the aggregate and individual impact of marketwide information, company-specific public information, and company-specific private information on equity prices. Specifically, we use the thirty common stocks in the Dow Jones Industrial Average (DJIA) and twenty-one National Association of Securities Dealers Automated Quotations (NASDAQ) common stocks to examine how their prices react to information. Marketwide information (public and private) is estimated by the movement in the Standard and Poor's (S&P) 500 Index price for the DJIA stocks and the movement in the NASDAQ Composite Index price for the NASDAQ stocks. Dividend and earnings announcements are used as a subset of company-specific public information. The trading activity of corporate insiders (major corporate officers, members of the board of directors, and owners of at least 10 percent of any equity class), who have access to private information, is used as a proxy for company-specific private information. Insiders cannot legally trade on private information, so most insider transactions are not necessarily based on private information. Nevertheless, we hypothesize that market participants observe how insiders trade in order to infer information that they themselves cannot possess, because insiders tend to buy (sell) when they have good (bad) information about their company.
For example, Damodaran and Liu (1993) show that insiders of real estate investment trusts buy (sell) after they receive favorable (unfavorable) appraisal news before the information in these appraisals is released to the public. Price discovery in a competitive multiple-dealership market (NASDAQ) would be different from that in a monopolistic specialist system (NYSE). Consequently, we hypothesize that NASDAQ stocks are affected more by private information (or more precisely, insider trading) than the DJIA stocks. In the next section, we describe our choices of the fifty-one stocks and the public and private information set. We also discuss institutional differences between the NYSE and the NASDAQ market. In Section II, we examine the implications of public and private information for the volatility of daily returns of each stock. In Section III, we turn to the question of the relative importance of individual elements of our information set. Further analysis of the five DJIA stocks and the four NASDAQ stocks that are most sensitive to earnings announcements is given in Section IV, and our results are summarized in Section V.
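As a rough illustration of the volatility comparison at the core of this abstract, the sketch below groups daily returns by an information-day flag and compares their standard deviations. The column names and file layout are assumptions for illustration, not the authors' dataset or code.

```python
# Minimal sketch (not the authors' code) of the paper's core comparison:
# std. dev. of daily returns on "information days" vs. "no-information days".
import pandas as pd

def volatility_by_information_day(returns: pd.Series, info_days: pd.Series) -> pd.Series:
    """Standard deviation of daily returns grouped by an information-day flag.

    returns   : daily returns indexed by date
    info_days : boolean series on the same index; True if the date had a dividend
                announcement, earnings announcement, insider purchase, or insider sale
    """
    return returns.groupby(info_days.rename("information_day")).std()

# Hypothetical usage with an assumed CSV layout:
# df = pd.read_csv("daily_stock_data.csv", parse_dates=["date"], index_col="date")
# print(volatility_by_information_day(df["return"], df["info_day"]))
```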

AN EXPERIMENTAL STUDY ON THE MICROTENSILE BONDING STRENGTH OF DENTIN TREATED BY CARISOLV™ (Carisolv™에 의한 우식제거후 Microtensile Bonding Strength에 관한 연구)

  • Baik, Byeong-Ju;Kwon, Byoung-Woo;Kim, Jae-Gon;Cheon, Cheol-Wan
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.29 no.3
    • /
    • pp.389-396
    • /
    • 2002
  • The purpose of this study was to compare the microtensile bond strength of chemomechanically excavated dentin (Carisolv™) with that of dentin prepared by conventional caries removal (bur). The following adhesive systems were used: AB: All-Bond 2 (3M, USA), PB: Prime & Bond 2.1 (Dentsply, DE), AQ: AQ Bond (Sun Medical, Japan). Forty-two human molars with occlusal caries were assigned to six groups. Sequential caries removal was controlled with laser fluorescence. The groups were divided as follows: Carisolv™ was applied in groups A, B, and C, and a bur was used in groups D, E, and F. AB was used as the dentin adhesive in groups A and D, PB in groups B and E, and AQ in groups C and F. The cavities were filled with composite resin (Z-100). The specimens were sectioned vertically into multiple serial 0.7 mm thick slabs, and those slabs were then sectioned into rectangular parts under 0.7 mm in width, yielding sticks with a roughly 0.7-1.0 mm square cross-section. The microtensile bond test was carried out on a testing apparatus at a cross-head speed of 0.5 mm/min, and fractured surfaces were observed with a scanning electron microscope (JSM-6400, JEOL, Japan). The obtained results were summarized as follows: 1. In the Carisolv™ caries-removal groups, microtensile bond strength decreased to 75.8~80 percent of that of the bur groups. 2. In the Carisolv™ caries-removal groups, the degree of decrease in microtensile bond strength was similar across the three dentin adhesives (p<0.05). 3. In the Carisolv™ caries-removal groups, the microtensile bond strengths of AB, PB, and AQ were 32.6 MPa (2.4), 30.1 MPa (1.8), and 21.2 MPa (1.9), respectively. 4. In both the bur and Carisolv™ groups, the microtensile bond strength of AQ was significantly lower than that of AB and PB (p<0.01).

THE COMPARATIVE STUDY ON THE COLOR OF THE DECIDUOUS TEETH AND RESTORATIVE MATERIALS (유치의 치아색과 수복재의 색조선택에 관한 비교연구)

  • Baik, Byeong-Ju;Oh, Kyoung-Seon;Kim, Jae-Gon;Yang, Cheol-Hee
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.29 no.3
    • /
    • pp.376-381
    • /
    • 2002
  • The purpose of this study was to analyze the color of natural deciduous teeth in Korean children and to compare it with that of composite resin specimens. The subjects were 148 children (80 boys and 68 girls) in good general condition with normal tooth color, aged between 3 and 6 years. The color of the middle third of the maxillary central deciduous incisor was examined with a shade guide and then measured with the colorimeter CV300, which reports values in the CIELAB system. The data were analyzed statistically with the SPSS program. The results were summarized as follows: 1. Over 90% of the colors of the deciduous anterior teeth fell in the A1, A2, B1, B2, and P shades. 2. The mean deciduous tooth color measured by the colorimeter CV300 was L* = 58.72, a* = -1.18, b* = -0.63. 3. The L*, a*, and b* values of the shade specimens were L* = 52.52, a* = -1.90, b* = 1.18 for A1; L* = 54.90, a* = -1.87, b* = 1.60 for A2; L* = 59.80, a* = -2.70, b* = -0.63 for B1; L* = 56.90, a* = -1.70, b* = 1.63 for B2; and L* = 52.93, a* = -2.33, b* = 1.10 for P. The means of the B1 specimen were most similar to the mean deciduous tooth color, and the A1 values were similar to the P values. 4. The standard deviation of L* and a* was small across colors, but that of b*, the yellowish component, was large.
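One way to make the "most similar" comparison concrete is the CIE76 color difference ΔE*ab, the Euclidean distance in L*a*b* space; the sketch below applies it to the mean values reported above. Using ΔE*ab as the similarity criterion is an illustrative assumption, not necessarily the study's own procedure.

```python
# Illustrative sketch: CIE76 color difference between the mean deciduous tooth
# color and each shade specimen, using the means reported in the abstract.
from math import sqrt

tooth = (58.72, -1.18, -0.63)          # mean deciduous tooth color (L*, a*, b*)
shades = {
    "A1": (52.52, -1.90, 1.18),
    "A2": (54.90, -1.87, 1.60),
    "B1": (59.80, -2.70, -0.63),
    "B2": (56.90, -1.70, 1.63),
    "P":  (52.93, -2.33, 1.10),
}

def delta_e(c1, c2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(c1, c2)))

for name, lab in sorted(shades.items(), key=lambda kv: delta_e(tooth, kv[1])):
    print(f"{name}: dE*ab = {delta_e(tooth, lab):.2f}")
# B1 comes out closest to the mean tooth color, matching the abstract's conclusion.
```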

MICROTENSILE BOND STRENGTH OF ALL-IN-ONE ADHESIVE TO CARIES-AFFECTED DENTIN (우식이환 상아질에 대한 all-in-one adhesive의 미세인장결합강도)

  • Moon, Ji-Deok;Park, Jeong-Kil;Hur, Bock;Kim, Hyeon-Cheol
    • Restorative Dentistry and Endodontics
    • /
    • v.30 no.1
    • /
    • pp.49-57
    • /
    • 2005
  • The purpose of this study was to evaluate the effect of multiple applications of an all-in-one dentin adhesive system on microtensile bond strength to caries-affected dentin. Twenty-one extracted human molars with occlusal caries extending into mid-dentin were prepared by grinding the occlusal surface flat. The carious lesions were excavated with the aid of a caries detector dye. The following adhesives were applied to caries-affected dentin according to the manufacturers' directions: Scotchbond™ Multi-Purpose in the SM group; Adper Prompt L-Pop™ with 1 coat in the LP1 group, 2 coats in the LP2 group, and 3 coats in the LP3 group; and Xeno® III with 1 coat in the XN1 group, 2 coats in the XN2 group, and 3 coats in the XN3 group. After application of the adhesives, a cylinder of resin-based composite was built up on the occlusal surface. Each tooth was sectioned vertically to obtain 1 × 1 mm² sticks, and the microtensile bond strength was determined. Each specimen was observed under SEM to examine the failure mode. Data were analyzed with one-way ANOVA. The results of this study were as follows: 1. The microtensile bond strength values were SM (14.38 ± 2.01 MPa), LP1 (9.15 ± 1.81 MPa), LP2 (14.08 ± 1.75 MPa), LP3 (14.06 ± 1.45 MPa), XN1 (13.65 ± 1.95 MPa), XN2 (13.98 ± 1.60 MPa), and XN3 (13.88 ± 1.66 MPa). LP1 was significantly lower than the other groups in bond strength (p < 0.05); the groups other than LP1 did not differ significantly in bond strength (p > 0.05). 2. In LP1, a higher number of specimens showed adhesive failure, whereas most specimens in the other groups showed mixed failure.
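Because only the group means and standard deviations are reported here, the sketch below illustrates the one-way ANOVA procedure named in the abstract on synthetic per-specimen values drawn from those summary statistics; the group size and the normality of the synthetic draws are assumptions, so the output is illustrative only.

```python
# Sketch of the one-way ANOVA comparison described in the abstract.
# Per-specimen measurements are not available, so synthetic samples are drawn
# from the reported group means/SDs purely to illustrate the procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reported = {  # group: (mean MPa, SD MPa) from the abstract
    "SM": (14.38, 2.01), "LP1": (9.15, 1.81), "LP2": (14.08, 1.75),
    "LP3": (14.06, 1.45), "XN1": (13.65, 1.95), "XN2": (13.98, 1.60),
    "XN3": (13.88, 1.66),
}
n_per_group = 12  # assumed number of sticks per group
samples = {g: rng.normal(m, sd, n_per_group) for g, (m, sd) in reported.items()}

f_stat, p_value = stats.f_oneway(*samples.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```

A post-hoc test (e.g., Tukey HSD) would then be the usual way to confirm that LP1 is the group driving the difference.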

An essay on appraisal methods for unevenly surviving administrative records - Toward an appraisal process and method for the official documents of the Government-General of Chosun - (불균형 잔존 행정기록의 평가방법 시론 - 조선총독부 공문서의 평가절차론 수립을 위하여 -)

  • Kim, Ik-Han
    • The Korean Journal of Archival Studies
    • /
    • no.13
    • /
    • pp.179-203
    • /
    • 2006
  • This study develops a process and method for appraising official administrative documents that have survived unevenly, such as the official documents of the Government-General of Chosun (the Japanese colonial government, 1910-1945). First, the existing appraisal theories are recomposed. Schellenberg's appraisal theory focuses on valuing the records themselves, whereas functional appraisal theory attaches importance to the operational activities that bring records into being. Given that a record is a re-presentation of operational activities, however, the two rest on the same philosophical ground, so a composite of activity-based and record-based appraisal can be used if the process and method are properly designed. Likewise, a curve (relative) method has its strengths in the macro, balanced aspect, while an absolute method has its strengths in the micro aspect, so the two methodologies can be applied alternately. In terms of concrete appraisal methodology, the existing theories are therefore mutually complementary and can be combined in various forms according to the characteristics of the object and its situation. Especially for the unevenly surviving official documents dealt with in this article, it is more appropriate to combine a suitable process with the methods indicated above than to build a process and method from a single theory alone. To appraise the official documents of the Government-General of Chosun, a macro appraisal of their value should be carried out by understanding the organizational system and functions and by tracing their historical and cultural evolution, after analyzing the disposal authority. From this, the records should be mapped so that organization-function maps are constructed reflecting the value rank of functions and sub-functions. An appraisal strategy should then be established that considers the internal environment of the archival agencies, applies micro appraisal to functions with a large quantity of surviving records, and supplies other meaning to functions with few surviving records, for example through the production of oral resources. The study has not yet reached the following aspects: function analysis, historical decoding techniques, curve-based valuation of the records and of the official gazette of the Government-General of Chosun, methods and processes for analyzing other historical materials, and presentation of an appraisal output image. As a result, this is simply a proposal, and the above-mentioned shortcomings should be filled in through future studies.

Analysis of the relationship between interest rate spreads and stock returns by industry (금리 스프레드와 산업별 주식 수익률 관계 분석)

  • Kim, Kyuhyeong;Park, Jinsoo;Suh, Jihae
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.105-117
    • /
    • 2022
  • This study analyzes the relationship between stock returns and the interest rate spread, the difference between long-term and short-term interest rates, through polynomial linear regression analysis. Existing research has concentrated on business forecasting through the interest rate spread, focusing on the US market. Previous studies treated the interest rate spread as a leading indicator of the business cycle, varying the maturities of the long-term and short-term rates and analyzing the degree of lead. After the 7th revision of the composite business indicators in Korea in 2006, the interest rate spread was included among the components of the composite leading indicator, and it is still used today. Nevertheless, there is little research on industry-level stock returns and the interest rate spread in the domestic stock market. Therefore, this study analyzes the stock returns of each industry and the interest rate spread in the Korean stock market. We selected the long-term and short-term interest rates with the highest causality through regression analysis and then examined the correlations for each leading period and industry. To overcome the limitations of simple linear regression, polynomial linear regression analysis is used, which raises explanatory power. As a result, high causality was verified when the interest rate spread was defined as the difference between the yield on three-year unguaranteed corporate bonds (AA-) and the call rate, with a six-month lead. In addition, when analyzing the stock returns of each industry, the relationship between this interest rate spread and the returns of the automobile industry was the closest. This study is significant in that it verifies the relationship among the interest rate spread, the business cycle, and stock returns in Korea. Although forecasting stock prices with the interest rate spread alone is limited, the spread can act as a strong factor when properly combined with other variables.
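A minimal sketch of the kind of polynomial regression described above, relating an industry's returns to a six-month-lagged interest rate spread. The file name, column names, and polynomial degree are assumptions for illustration, not the authors' specification.

```python
# Hedged sketch (not the authors' code): regress industry stock returns on a
# lagged interest rate spread with polynomial terms.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

LEAD_MONTHS = 6  # the spread is observed six months before the return it explains

# Assumed monthly data: "spread" = 3y AA- unguaranteed corporate bond yield minus
# call rate; "auto_return" = automobile industry index return.
df = pd.read_csv("spread_and_industry_returns.csv", parse_dates=["month"], index_col="month")
df["spread_lead"] = df["spread"].shift(LEAD_MONTHS)
df = df.dropna()

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
model.fit(df[["spread_lead"]], df["auto_return"])
print("in-sample R^2:", model.score(df[["spread_lead"]], df["auto_return"]))
```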

A Study on the Effects of Export Insurance on the Exports of SMEs and Conglomerates (수출보험이 국내 중소기업 및 대기업의 수출에 미치는 영향에 관한 연구)

  • Lee, Dong-Joo
    • Korea Trade Review
    • /
    • v.42 no.2
    • /
    • pp.145-174
    • /
    • 2017
  • Recently, due to the worsening global economic recession, Korea, a small export-oriented economy, has seen its exports decrease while the domestic economy continues to stagnate. For the economy to keep growing through export growth, we therefore need to analyze the effectiveness of export support systems such as export insurance and prepare ways to expand exports. This study investigates the effects of export insurance on the exports of SMEs as well as large enterprises (LEs). For this purpose, a time series analysis was conducted using data such as exports, export insurance underwriting, the export price index, the exchange rate, and the coincident composite index (CCI). First, the Granger causality test shows that the exports of LEs have a causal relationship with the CCI, and the CCI has a causal relationship with short-term export insurance underwriting. Second, the VAR analysis shows that export insurance underwriting and the export price index have a positive effect on the exports of LEs, while short-term export insurance has a negative effect on them. Third, the variance decomposition shows that, over the medium to long term, the exports of LEs are influenced by short-term export insurance underwriting much more than those of SMEs. Fourth, short-term export insurance has a positive effect on the exports of SMEs; to promote short-term export insurance for SMEs, local governments need to expand their support. By analyzing the effects of export insurance on the exports of SMEs and LEs, this study suggests policy implications for establishing an effective export insurance policy. To measure the export support effect of export insurance more precisely, a time series analysis of export performance by industry according to underwriting by industry is needed.
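The sketch below strings together the three tools named in the abstract (Granger causality test, VAR estimation, and forecast-error variance decomposition) using statsmodels; the file name, column names, and lag choices are assumptions for illustration, not the study's data or settings.

```python
# Hedged sketch of the abstract's time-series toolkit with statsmodels.
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

# Assumed monthly series: LE exports, short-term export insurance underwriting, CCI.
df = pd.read_csv("export_insurance_monthly.csv", parse_dates=["month"], index_col="month")
data = df[["le_exports", "st_insurance", "cci"]].pct_change().dropna()

# Does short-term export insurance underwriting Granger-cause the CCI (up to 6 lags)?
grangercausalitytests(data[["cci", "st_insurance"]], maxlag=6)

# VAR estimation, then a 12-month forecast-error variance decomposition.
results = VAR(data).fit(maxlags=6, ic="aic")
fevd = results.fevd(12)
fevd.summary()
```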

A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to emerge in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology: two stock traders could handle the work of 600, and analytical work that had taken 15 people four weeks could be processed in five minutes. In particular, big data analysis through machine learning, one field of artificial intelligence, is actively applied throughout the financial industry, and stock market analysis and investment modeling through machine learning are also actively studied. The linearity limitations of traditional financial time series studies are overcome by using machine learning approaches such as artificial intelligence prediction models. Quantitative studies based on past stock market data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of individual stocks by learning from large amounts of text data such as news and comments related to the stock market. Investment in commodity assets, one of the alternative asset classes, is usually used to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared with mainstream assets such as equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial field. In this study we build an investment model using the Support Vector Machine (SVM), one of the machine learning models. Some research on commodity assets focuses on price prediction for a specific commodity, but it is hard to find research on commodity investment models for asset allocation that use machine learning. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors, energy, agriculture, and metals, that are actively traded on the CME market and have enough liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We constructed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. Because commodity assets are closely related to macroeconomic activity, we set 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, as the input data of the model: 14 US, two Chinese, and two Korean economic indicators. The data period is from January 1990 to May 2017. We set the first 195 monthly observations as training data and the remaining 125 monthly observations as test data.
In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metals sectors. The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar when the data period is varied; we therefore also used the odd-numbered years as training data and the even-numbered years as test data and confirmed that the results are similar. In conclusion, when allocating commodity assets within a traditional portfolio of stocks, bonds, and cash, more effective investment performance can be obtained not by investing in commodity indices but by investing in commodity futures, and especially by using a rebalanced commodity futures portfolio designed with the SVM model.
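The sketch below illustrates, under stated assumptions, the kind of SVM direction forecast the abstract describes: classifying the sign of the commodity futures portfolio's return from the previous month's macroeconomic indicators. The feature names, data file, chronological split ratio, and SVM hyperparameters are illustrative, not the authors' specification.

```python
# Hedged sketch of an SVM-based direction forecast for a commodity futures portfolio.
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("commodity_macro_monthly.csv", parse_dates=["month"], index_col="month")
macro_cols = [c for c in df.columns if c != "portfolio_return"]

X = df[macro_cols].shift(1).dropna()                       # previous month's indicators
y = (df["portfolio_return"].loc[X.index] > 0).astype(int)  # direction of the current month's return

# Chronological split, mimicking the train/test design (earlier months train,
# later months test); the 60/40 ratio here is an assumption.
split = int(len(X) * 0.6)
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```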

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the SVM solution may be a global optimum, so overfitting is unlikely to occur, and SVM does not require many training samples since it builds prediction models using only representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM can be successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems; methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, the difficulty in multi-class prediction problems lies in the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class; such data sets often lead to a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning method for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations, so that observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones.
Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly; in this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take into account the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier across the 30 folds is significantly different; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
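MGM-Boost itself is the paper's contribution and is not reproduced here. The sketch below only illustrates the evaluation set-up it is compared under: 10-fold cross-validation of an SVM classifier, scored by both arithmetic accuracy and a geometric mean of per-class recalls, on synthetic imbalanced multi-class data standing in for the bond-rating set.

```python
# Hedged sketch of geometric-mean-based evaluation with 10-fold CV and an SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls; 0 if any class is entirely missed."""
    recalls = recall_score(y_true, y_pred, average=None)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# Synthetic imbalanced 4-class data as a stand-in for the bond-rating data set.
X, y = make_classification(n_samples=600, n_classes=4, n_informative=8,
                           weights=[0.5, 0.3, 0.15, 0.05], random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")

acc, gm = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    acc.append(accuracy_score(y[test_idx], pred))
    gm.append(geometric_mean_accuracy(y[test_idx], pred))

print(f"arithmetic accuracy: {np.mean(acc):.3f}, geometric-mean accuracy: {np.mean(gm):.3f}")
```

On imbalanced data, the geometric-mean score drops sharply whenever minority classes are ignored, which is exactly the behavior the paper's metric is designed to penalize.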

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than people's in many fields, including image and speech recognition. Because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education, many efforts have been made to identify current technology trends and analyze the directions of their development. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and technologies and services that use them have increased rapidly; this is regarded as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes a great deal to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which have been developed through online collaboration among many parties. We searched and collected a list of major AI-related projects created on GitHub from 2000 to July 2018 and examined the development trends of major technologies in detail by applying text mining to the topic labels, which indicate the characteristics and technical fields of the collected projects. The analysis showed that the number of software development projects per year was below 100 until 2013, then increased to 229 projects in 2014 and 597 projects in 2015. The number of AI-related open source projects increased especially rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and the number initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten and were replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras, which show high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, also appeared frequently. The topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging topics rose to the top of the list, although they were not at the top from 2009 to 2012, indicating that OSS was being developed to apply AI technology in the medical field.
Moreover, although computer vision was in the top 10 by appearance frequency from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing only slightly. Examining the trend of technology development through appearance frequency and degree centrality shows that machine learning had the highest frequency and the highest degree centrality in all years. It is also noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases and had relatively low appearance frequency and degree centrality compared with the topics mentioned above. These results make it possible to identify the fields in which AI technologies are actively being developed, and they can serve as a baseline dataset for more empirical analysis of future technology trends and their convergence.
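As a small illustration of the two measures the study reports, the sketch below counts topic appearance frequency and computes degree centrality on a topic co-occurrence graph with networkx; the hard-coded project/topic lists are placeholders, not the collected GitHub data.

```python
# Hedged sketch: topic appearance frequency and degree centrality of a topic
# co-occurrence network, the two measures discussed in the abstract.
from collections import Counter
from itertools import combinations
import networkx as nx

projects = [  # each project's topic labels (illustrative placeholder sample)
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "computer-vision", "convolutional-neural-network"],
    ["deep-learning", "natural-language-processing", "python"],
    ["reinforcement-learning", "deep-learning", "tensorflow"],
]

# Appearance frequency: in how many projects each topic occurs.
frequency = Counter(t for topics in projects for t in set(topics))
print(frequency.most_common(5))

# Degree centrality: topics that co-occur in a project are linked in the graph.
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(sorted(set(topics)), 2))
centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:5])
```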