• Title/Summary/Keyword: Social Change (사회 변화)

A Study on the Traumatic Teeth Damage of Children (어린이의 외상성 치아손상에 관한 연구)

  • Yoo, Su-Min;Park, Ho-won
    • Journal of Dental Hygiene Science, v.4 no.1, pp.21-25, 2004
  • In modern times, traumatic injuries in children are increasing every year because of car accidents and changes in the living environment, and there is a limit to how far traumatic damage to the tissues of the oral cavity can be prevented. This study presents fundamental data for the treatment and prevention of trauma through a survey and analysis of traumatic tooth injuries. We examined 113 patients seen from October 4, 2000 to February 27, 2004 at the children's dental clinic of Kangnung National University. The results are as follows. (1) The trauma frequency of male patients was higher than that of females, at a ratio of 2.05:1. The average age was 5.27 years for boys and 5.27 years for girls, and the largest share of trauma patients, 21.2%, were two-year-old children. (2) In the survey of time to presentation at the treating hospital, 34.4% of primary-tooth patients came for treatment on the day of injury, whereas 38.8% of permanent-tooth patients first received treatment after a week. (3) Falls were the leading cause, accounting for 59.0% of primary-tooth injuries and 55.1% of permanent-tooth injuries; bumping against a solid object accounted for 26.6% and 26.5%, respectively. (4) Injuries occurred at home for 42.1% of boys and 35.1% of girls; by dentition, 59.4% of primary-tooth injuries occurred at home, while 28.6% of permanent-tooth injuries occurred at school or kindergarten. (5) As to the number of teeth injured, in the primary dentition two teeth were involved in 56.3% of cases, one tooth in 31.3%, and three or four teeth in 6.3% each; in the permanent dentition, two teeth were involved in 46.9%, one tooth in 28.6%, four or more in 16.3%, and three in 8.2%. Injuries involving four or more teeth were more frequent in the permanent dentition. (6) Among boys, 56% of injuries involved the primary dentition and 43.4% the permanent dentition; among girls, 56.8% and 43.2%, respectively. In both sexes, trauma occurred more frequently in the primary dentition. (7) Most injuries in both dentitions affected the upper jaw (75% of primary-tooth and 63.8% of permanent-tooth injuries), with the two upper central incisors injured most often. (8) By type of injury, 30.2% were impulse, 28.0% crown fracture, 14.7% depression, 8.9% concussion, 7.1% complete displacement of the tooth (avulsion), 2.2% extrusion, and 1.8% displacement. Impulse injuries were most common in the primary dentition (35.8%), while crown fractures occurred most often in the permanent dentition. (9) 43.2% of primary-tooth injuries and 38.9% of permanent-tooth injuries went without treatment. Among treated cases, pulp treatment was required in 22.0% of primary-tooth cases, while permanent teeth were most often given a temporary acid-etched resin restoration.

An Analysis on the Curricula and Recognitions of the Home Economics Teachers who were the Participants of the First-Grade Home Economics Regular Teacher Qualification Program (중등 가정과 1급 정교사 자격 연수 프로그램 운영 실태 분석 및 연수 참여자의 인식)

  • Lim, Il-Young;Kweon, Li-Ra;Lee, Hye-Suk;Park, Mi-Jin;Ryu, Sang-Hee
    • Journal of Korean Home Economics Education Association, v.19 no.4, pp.37-56, 2007
  • The purpose of this study is to provide basic resources for improving the operation of the First-Grade Home Economics Regular Teacher Qualification Program (FGHERTQP). Three methods were used: an analysis of FGHERTQP curricula over the six years since 2000; a questionnaire on satisfaction and needs, answered by home economics teachers who had participated in the program; and statistical analyses (descriptive statistics, chi-square test, t-test, and one-way ANOVA) using SPSS Win ver 10.0. The results were as follows. First, FGHERTQP was operated ten times by five training centers during the recent six years; the number of subject matters (1-7), the total number of lectures (11-29), and their allotted hours (111-136) varied with the individual training centers and operation years. Second, on a 5-point Likert scale, Contents and Methods of Evaluation marked 3.08, the lowest scores, while Qualification Training in General marked 3.72, the highest, among the five fields of Qualification Training in General, Contents, Organization, Methods, and Evaluation; the overall scores were low. Third, in the needs analysis of offered subject matters, the participants wanted to study home economics education more than subject content. The highest needs by domain were Food Principles & Meal Management in Foods, Consumer Issues in Clothing & Textiles in Textiles, Upcoming Housing Cultures in Housing, Family Relationship in Child Development & Family Relationship, and Juveniles and Their Daily Life as Consumers in Family & Consumer Resources Management. Fourth, the lectures available at a training center had a significant influence on satisfaction across the general characteristics of the participants: the more lectures a center offered in the field of subject education rather than subject content, the higher the participants' satisfaction (p<.05).

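The statistical comparisons reported above follow a standard pattern: chi-square test, t-test, and one-way ANOVA on 5-point Likert scores. As a minimal sketch, assuming an invented data frame with hypothetical column names (the study itself used SPSS Win ver 10.0), the same tests can be run with scipy:

```python
# Hypothetical sketch of the comparisons described above; all column
# names and values are invented for illustration.
import pandas as pd
from scipy import stats

# One row per participant, with a 5-point Likert satisfaction score.
df = pd.DataFrame({
    "career_group": ["<10yr", "<10yr", ">=10yr", ">=10yr", ">=10yr", "<10yr"],
    "center":       ["A", "B", "A", "C", "B", "C"],
    "satisfaction": [3, 4, 2, 5, 3, 4],
})

# t-test: satisfaction by a two-level participant characteristic.
g1 = df.loc[df.career_group == "<10yr", "satisfaction"]
g2 = df.loc[df.career_group == ">=10yr", "satisfaction"]
print(stats.ttest_ind(g1, g2, equal_var=False))

# One-way ANOVA: satisfaction across three or more training centers.
groups = [g["satisfaction"] for _, g in df.groupby("center")]
print(stats.f_oneway(*groups))

# Chi-square test of independence on a cross-tabulation.
table = pd.crosstab(df.career_group, df.center)
print(stats.chi2_contingency(table))
```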

A Study on the Determinants of Patent Citation Relationships among Companies : MR-QAP Analysis (기업 간 특허인용 관계 결정요인에 관한 연구 : MR-QAP분석)

  • Park, Jun Hyung;Kwahk, Kee-Young;Han, Heejun;Kim, Yunjeong
    • Journal of Intelligence and Information Systems, v.19 no.4, pp.21-37, 2013
  • With the advent of the knowledge-based society, interest in intellectual property has grown. In particular, the ICT companies leading the high-tech industry are striving for systematic management of intellectual property. Patent information represents the intellectual capital of a company, and quantitative analysis of the continuously accumulated patent information has become possible at various levels, from the individual patent to the enterprise, industry, and country. Through patent information we can identify the status of a technology and analyze its impact on performance, and through network analysis we can trace the flow of knowledge, identifying changes in technology and predicting the direction of future research. Two kinds of analysis make use of patent citation information in this field: citation indicator analysis, which uses citation frequencies, and network analysis based on citation relationships. This study further analyzes whether company size affects patent citation relationships. Seventy-four S&P 500 companies that provide IT and communication services were selected. To determine the citation relationships between companies, patent citations in 2009 and 2010 were collected, and sociomatrices representing the inter-company citation relationships were created. In addition, each company's total assets were collected as an index of company size: the distance between two companies is defined as the absolute value of the difference between their total assets, and the simple (signed) difference is taken to describe the hierarchy between them. QAP correlation analysis and MR-QAP analysis were carried out using the distance and hierarchy matrices together with the 2009 and 2010 citation sociomatrices. The QAP correlation analysis showed the highest correlation between the 2009 and 2010 company patent citation networks. A positive correlation was also found between citation relationships and the distance between companies, indicating that citation relationships increase when companies differ in size. In contrast, a negative correlation was found between citation relationships and hierarchy, suggesting that the patents of higher-tier companies are highly valued and exert influence toward lower-tier companies. The MR-QAP analysis was carried out as follows: the sociomatrix generated from the 2010 citation relationships was used as the dependent variable, with the 2009 citation network and the distance and hierarchy networks between companies as independent variables, in order to find the main factors influencing the 2010 citation relationships. The results show that all independent variables positively influenced the 2010 patent citation relationships between the companies.
In particular, the 2009 citation relationships had the most significant impact on those of 2010, indicating continuity in patent citation relationships. Taken together, the QAP correlation and MR-QAP results show that citation relationships between companies are affected by company size, but that the most significant factor is past citation relationships. Maintaining patent citation relationships between companies therefore matters strategically: examining these relationships helps companies share intellectual property with one another and serves as an important aid in identifying partner companies for cooperation.
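
The QAP procedure summarized above assesses the correlation between two inter-company matrices against a null distribution built by permuting company labels. The sketch below illustrates the idea with randomly generated stand-in matrices; the function and all inputs are hypothetical, not the study's actual citation and asset data:

```python
# Minimal QAP correlation sketch: correlate two company-by-company
# matrices, then permute rows and columns of one matrix jointly to
# build the null distribution. Illustrative stand-in data only.
import numpy as np

rng = np.random.default_rng(0)
n = 74  # number of companies in the study

cite_2009 = rng.integers(0, 5, size=(n, n)).astype(float)  # stand-in citations
cite_2010 = cite_2009 + rng.normal(0, 1, size=(n, n))      # correlated stand-in

def offdiag(m):
    """Flatten a square matrix, dropping the diagonal (self-citations)."""
    mask = ~np.eye(m.shape[0], dtype=bool)
    return m[mask]

def qap_corr(x, y, n_perm=1000, rng=rng):
    """Observed off-diagonal correlation plus a QAP permutation p-value."""
    obs = np.corrcoef(offdiag(x), offdiag(y))[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(x.shape[0])
        xp = x[np.ix_(p, p)]  # permute rows and columns together
        if abs(np.corrcoef(offdiag(xp), offdiag(y))[0, 1]) >= abs(obs):
            count += 1
    return obs, count / n_perm

r, p_value = qap_corr(cite_2009, cite_2010)
print(f"QAP correlation r={r:.3f}, p={p_value:.3f}")
```

MR-QAP extends the same permutation logic to a regression with several predictor matrices, which is how the 2009 citation network and the distance and hierarchy matrices enter the model described in the abstract.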

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.95-118, 2017
  • Recently, particularly in downtown areas, transactions of row housing and multiplex housing have become active, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing are nevertheless a blind spot of real estate information, creating a social problem of information asymmetry as market size and demand change. Moreover, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korea Appraisal Board (hereafter, KAB) were established along administrative boundaries and have been used in existing real estate studies, but they are zoned for urban planning rather than classified for real estate research. Building on existing studies, this study found that the spatial structure of Seoul needs to be redefined for estimating future housing prices. It therefore attempted to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing: the simple division by existing administrative districts has proved inefficient, so this study clusters Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to actual transaction price data for row and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. The data comprise actual transaction prices of Seoul row and multiplex housing from January 2014 to December 2016, together with the official land values of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Preprocessing consisted of removing underground transactions, standardizing prices per unit area, and removing outlying transactions (standardized values above 5 or below -5), which reduced the data from 132,707 to 126,759 cases. The R program was used as the analysis tool. After preprocessing, the data model was constructed: K-Means clustering was performed, a regression analysis was conducted using the hedonic model, and a cosine similarity analysis was carried out. Based on the constructed model, we clustered Seoul on the basis of longitude and latitude and compared the result with the existing areas. The goodness of fit of the model was above 75%, and the variables used in the hedonic model were significant. In other words, the 5 or 25 existing administrative districts were re-divided into 16 districts. This study thus derives a clustering method for row and multiplex housing in Seoul using the K-Means clustering algorithm and a hedonic model reflecting property price characteristics.
The academic implications are twofold: the clustering reflects property price characteristics, improving on the areas used by the Seoul Metropolitan Government, the KAB, and existing real estate research; and, whereas existing research has mainly studied apartments, this study proposes a method of classifying areas in Seoul using the public information (the real transaction data of MOLIT) of Government 3.0. The practical implications are that the results can serve as basic data for research on row and multiplex housing, that they are expected to stimulate such research, and that they should increase the accuracy of models of actual transactions. Future research should conduct various analyses to overcome the limitations of this study and pursue the topic in greater depth.
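
The two modelling steps described above (K-Means on coordinates, then a hedonic regression with the derived cluster as a factor) can be sketched as follows. The study used R; this Python sketch runs on synthetic stand-in data, and all column names are invented:

```python
# Rough sketch: K-Means on longitude/latitude to derive districts, then
# a hedonic price regression with the cluster label as a categorical
# variable. Synthetic stand-in data; the 16 clusters follow the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "longitude":           rng.uniform(126.8, 127.2, n),  # roughly Seoul
    "latitude":            rng.uniform(37.4, 37.7, n),
    "floor_area":          rng.uniform(20, 85, n),        # m^2
    "building_age":        rng.integers(0, 40, n),
    "official_land_value": rng.uniform(1e6, 1e7, n),      # KRW per m^2
})
df["price_per_m2"] = (3e6 + 0.5 * df["official_land_value"]
                      - 2e4 * df["building_age"] + rng.normal(0, 3e5, n))

# Step 1: cluster transactions into 16 spatial districts.
km = KMeans(n_clusters=16, n_init=10, random_state=42)
df["district"] = km.fit_predict(df[["longitude", "latitude"]])

# Step 2: hedonic model with the derived district as a fixed effect.
model = smf.ols(
    "price_per_m2 ~ floor_area + building_age + official_land_value"
    " + C(district)",
    data=df,
).fit()
print(model.rsquared)  # the paper reports goodness of fit above 75%
```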

A Methodology to Develop a Curriculum based on National Competency Standards - Focused on Methodology for Gap Analysis - (국가직무능력표준(NCS)에 근거한 조경분야 교육과정 개발 방법론 - 갭분석을 중심으로 -)

  • Byeon, Jae-Sang;Ahn, Seong-Ro;Shin, Sang-Hyun
    • Journal of the Korean Institute of Landscape Architecture, v.43 no.1, pp.40-53, 2015
  • To train manpower meeting the requirements of the industrial field, the introduction of the National Qualifications Framework (hereinafter referred to as NQF), centered on National Competency Standards (hereinafter referred to as NCS), was decided in 2001 under the Office for Government Policy Coordination. For landscape architecture in the construction field, the "NCS - Landscape Architecture" pilot was developed in 2008 and test-operated for three years starting in 2009. In particular, as the 'realization of a competence-based society, not by educational background' was adopted as one of the major projects of the Park Geun-hye government (inaugurated in 2013), the NCS system was constructed on a nationwide scale as a concrete means of realizing it. However, because the NCS developed by the state specifies ideal job-performing abilities, it has weaknesses: it cannot reflect actual operational differences in student levels between universities, problems in securing equipment and professors, or constraints in the number of current curricula. For a soft landing into a practical curriculum, the gap between the current curriculum and the NCS must first be clearly analyzed. Gap analysis is the initial-stage methodology for reorganizing an existing curriculum into an NCS-based one: based on the ability unit elements and performance standards of each NCS ability unit, the degree of agreement with, or discrepancy from, the department's existing curriculum is rated on a 5-point Likert scale and analyzed. Universities wishing to operate NCS can thus measure the coincidence and the gap between their current curriculum and the NCS, securing a basic tool to verify the applicability of NCS and the effectiveness of its further development and operation. Reorganizing the curriculum through gap analysis has two advantages. First, it provides a quantitative index of the NCS adoption rate for each department, which can be connected to government financial support projects. Second, it provides an objective standard of what is sufficient or insufficient when reorganizing into an NCS-based curriculum: when adopting the relevant NCS subdivisions, the insufficient ability units and ability unit elements can be extracted, along with the supplementary matters for each ability unit element in each existing subject, giving direction for detailed class programs and the opening of basic subjects. The Ministry of Education and the Ministry of Employment and Labor must bring in people from industry to actively develop and supply NCS at a practical level, so that the requirements of the industrial field are systematically reflected in education, training, and qualification; universities wishing to apply NCS must reorganize their curricula to connect work and qualification based on NCS. To enable this, a university must consider the prospects of the relevant industry and the relation between its faculty resources and the local industry in order to clearly select the NCS subdivisions to be applied. Gap analysis must then be used so that the direction of the NCS-based curriculum reorganization is established more objectively and rationally, allowing efficient participation in the process-evaluation qualification system.
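
The Likert-based gap analysis described above reduces to a simple tabulation: each NCS ability unit element is scored for coincidence with the existing curriculum, and the gap is the distance from full coincidence. A toy sketch, with invented ability units and scores:

```python
# Toy gap-analysis illustration: 1-5 coincidence scores per NCS ability
# unit element, with the gap defined as the distance from 5 (full
# coincidence). All names and scores are invented.
import pandas as pd

ratings = pd.DataFrame({
    "ability_unit":     ["Planting design", "Planting design",
                         "Grading", "Grading"],
    "unit_element":     ["Plant selection", "Layout drawing",
                         "Earthwork calculation", "Drainage plan"],
    "coincidence_1to5": [4, 2, 5, 1],
})

ratings["gap"] = 5 - ratings["coincidence_1to5"]

# Aggregate by ability unit: large mean gaps flag units the curriculum
# must supplement before the NCS-based reorganization.
print(ratings.groupby("ability_unit")["gap"].mean()
             .sort_values(ascending=False))
```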

Literature Analysis of Radiotherapy in Uterine Cervix Cancer for the Processing of the Patterns of Care Study in Korea (한국에서 자궁경부암 방사선치료의 Patterns of Care Study 진행을 위한 문헌 비교 연구)

  • Choi Doo Ho;Kim Eun Seog;Kim Yong Ho;Kim Jin Hee;Yang Dae Sik;Kang Seung Hee;Wu Hong Gyun;Kim Il Han
    • Radiation Oncology Journal, v.23 no.2, pp.61-70, 2005
  • Purpose: Uterine cervix cancer is one of the most prevalent cancers among women in Korea. We analysed papers published in Korea, comparing them with Patterns of Care Study (PCS) articles from the United States and Japan, for the purpose of developing and conducting a Korean PCS. Materials and Methods: We searched PCS-related foreign papers on the PCS homepage (212 articles and abstracts) and in PubMed to identify the Structure and Process items of the PCS. For comparison with Korean papers, we used the 'Korean PubMed' internet site to find 99 articles on uterine cervix cancer and radiation therapy. We analysed the Korean papers against selected PCS papers with respect to Structure, Process, and Outcome, and compared the items between the period up to the 1980s and the 1990s. Results: The evaluable papers comprised 28 from the United States, 10 from Japan, and 73 from Korea that treated cervix PCS items. The US and Japanese PCS papers commonly stratified facilities into 3-4 categories on the basis of scale characteristics and the numbers of patients and doctors, and their researchers restricted eligible patients strictly. For the Process portion of the studies, they analysed pretreatment staging factors in chronological order, treatment-related factors, factors in addition to FIGO staging, and treatment machines. US papers dealt with racial and socioeconomic characteristics of the patients, tumor size (6 papers), and bilaterality of parametrial or pelvic side wall invasion (5), whereas Japanese papers addressed tumor markers. The common trend in the staging work-up process was decreasing use of lymphangiography and barium enema and increasing use of CT and MRI over time. Recent Korean papers dealt with concurrent chemoradiotherapy (9 papers), treatment duration (4), tumor markers (8), and unconventional fractionation. Conclusion: By comparing papers among the three nations, we collected items for a Korean uterine cervix cancer PCS. Through consensus meetings and close communication, survey items were developed to measure the structure, process, and outcome of radiation treatment of cervix cancer. Subsequent research will focus on the use of brachytherapy and its impact on outcome, including complications. These findings and future PCS studies will direct the development of educational programs aimed at correcting identified deficits in care.

A Case Study on the Effective Liquid Manure Treatment System in Pig Farms (양돈농가의 돈분뇨 액비화 처리 우수사례 실태조사)

  • Kim, Soo-Ryang;Jeon, Sang-Joon;Hong, In-Gi;Kim, Dong-Kyun;Lee, Myung-Gyu
    • Journal of Animal Environmental Science, v.18 no.2, pp.99-110, 2012
  • The purpose of this study is to collect basic data for establishing standard administrative processes for liquid fertilizer treatment. Through this survey of effective liquid manure treatment systems on pig farms, we identified the key points of each step. The process divides into six steps: (1) piggery slurry management; (2) solid-liquid separation; (3) liquid fertilizer treatment (aeration); (4) liquid fertilizer treatment (microorganisms, recirculation, and internal return); (5) liquid fertilizer treatment (completion); and (6) land application. Standardized liquid manure treatment processes should now be developed on the basis of these six steps.

A Study on the 1889 'Nanjukseok' (Orchid, Bamboo and Rock) Paintings of Seo Byeong-o (석재 서병오(1862-1936)의 1889년작 난죽석도 연구)

  • Choi, Kyoung Hyun
    • Korean Journal of Heritage: History & Science, v.51 no.4, pp.4-23, 2018
  • Seo Byeong-o (徐丙五, 1862-1936) played a central role in the formation of the Daegu artistic community, which advocated artistic styles combining poetry, calligraphy, and painting, during the Japanese colonial period, when the introduction of the Western concept of 'art' led to the adoption of Japanese and Western styles of painting in Korea. Seo first entered the world of calligraphy and painting after meeting Lee Ha-eung (李昰應, 1820-1898) in 1879, but his career as a scholar-artist only began in earnest after Korea was annexed by Japan in 1910. Seo's oeuvre can be broadly divided into three periods. In his initial period of learning, from 1879 to 1897, his artistic activity was largely confined to copying works from Chinese painting albums and painting works in the "Four Gentlemen" genre, influenced by the work of Lee Ha-eung, in his spare time. This may have been because Seo's principal aim at this time was to further his career as a government official. His subsequent period of development, which lasted from 1898 until 1920, saw him play a leading social role in such areas as the patriotic enlightenment movement until 1910, after which he reoriented his life to become a scholar-artist. During this period, Seo explored new styles based on the orchid paintings of Min Yeong-ik (閔泳翊, 1860-1914), whom he met during his second trip to Shanghai, and on the bamboo paintings of Chinese artist Pu Hua (蒲華, 1830-1911). At the same time, he painted in various genres including landscapes, flowers, and gimyeong jeolji (器皿折枝; still life with vessels and flowers). In his final mature period, from 1921 to 1936, Seo divided his time between Daegu and Seoul, becoming a highly active calligrapher and painter in Korea's modern art community. By this time his unique personal style, characterized by broad brush strokes and the use of abundant ink in orchid and bamboo paintings, was fully formed. Records on, and extant works from, Seo's early period are particularly rare, thus confining knowledge of his artistic activities and painting style largely to the realm of speculation. In this respect, eleven recently revealed nanjukseok (蘭竹石圖; orchid, bamboo and rock) paintings, produced by Seo in 1889, provide important clues about the origins and standards of his early-period painting style. This study uses a comparative analysis to confirm that Seo's orchid paintings show the influence of the early gunran (群蘭圖; orchid) and seongnan (石蘭圖; rock and orchid) paintings produced by Lee Ha-eung before his arrest by Qing troops in July 1882. Seo's bamboo paintings appear to show both that he adopted the styles of Zheng Xie (鄭燮, 1693-1765) of the Yangzhou School (揚州畵派), a style widely known in Seoul from the late eighteenth century onward, and of Heo Ryeon (許鍊, 1809-1892), a student of Joseon artist Kim Jeong-hui (金正喜, 1786-1856), and that he attempted to apply a modified version of Lee Ha-eung's seongnan painting technique. It was not possible to find other works by Seo evincing a direct relationship with the curious rocks depicted in his 1889 paintings, but I contend that they show the influence of both the nineteenth-century Qing rock painter Zhou Tang (周棠, 1806-1876) and the curious rock paintings of the middle-class Joseon artist Jeong Hak-gyo (丁學敎, 1832-1914).
In conclusion, this study asserts that, for his 1889 nanjukseok paintings, Seo Byeong-o adopted the styles of contemporary painters such as Heo Ryeon and Jeong Hak-gyo, whom he met during his early period at the Unhyeongung through his connection with its occupant, Lee Ha-eung, and those of artists such as Zheng Xie and Zhou Tang, whose works he was able to directly observe in Korea.

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems, v.24 no.4, pp.1-32, 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse like the 'Lehman Brothers case' of the global financial crisis. The key variables in corporate defaults vary over time: comparing Beaver (1967, 1968), Altman (1968), and Deakin (1972) shows that the major factors affecting corporate failure have changed, and Grice (2001) found the same for the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider the changes that occur over time. To construct consistent prediction models, the time-dependent bias must therefore be compensated by a time series analysis algorithm reflecting dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training (7 years), validation (2 years), and test (1 year) sets. To construct a bankruptcy model consistent across time, we first train a deep learning time series model on the pre-crisis data (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007-2008). The resulting model shows patterns similar to the training results and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters from the validation stage, and finally all models are evaluated and compared on the test data (2009). The usefulness of the corporate default prediction model based on the deep learning time series algorithm is thereby demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust default prediction across all three bundles of variables. The definition of bankruptcy follows Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. Corporate data suffer from nonlinear variables, multi-collinearity, and lack of data: the logit model handles nonlinearity, the Lasso regression model addresses multi-collinearity, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, ultimately, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still at an early stage, the deep learning algorithm is much faster than regression analysis in default prediction modeling and shows better predictive power. Through the Fourth Industrial Revolution, the Korean and overseas governments are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. As an initial study of deep learning time series analysis of corporate defaults, this work is intended as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
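
A model of the kind described above can be sketched in a few lines of Keras. The shapes and hyperparameters below are illustrative assumptions, not the authors' configuration, and the random arrays stand in for panels of annual financial ratios:

```python
# Minimal LSTM sketch for default prediction: several years of annual
# financial ratios per firm feed an LSTM that outputs a default
# probability. Random placeholder data; illustrative settings only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_firms, n_years, n_ratios = 1000, 7, 20  # e.g., a 2000-2006 training window
X = np.random.rand(n_firms, n_years, n_ratios).astype("float32")
y = np.random.randint(0, 2, size=(n_firms,)).astype("float32")  # 1 = default

model = keras.Sequential([
    keras.Input(shape=(n_years, n_ratios)),  # one row of ratios per year
    layers.LSTM(32),                         # summarizes the ratio trajectory
    layers.Dense(1, activation="sigmoid"),   # default probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)
```

In the study's design, the crisis years 2007-2008 serve as the validation split for parameter tuning before retraining on 2000-2008 and testing on 2009.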

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.1-19, 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than humans in many fields, including image and speech recognition. Because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education, many efforts have been made to identify the current technology trends and analyze their development directions. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and the technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes much to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. This study therefore aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which are developed through the online collaboration of many parties. We searched and collected a list of major AI-related projects created on Github from 2000 to July 2018, and confirmed the development trends of the major technologies in detail by applying text mining to the topic information that characterizes the collected projects and technical fields. The number of software development projects per year was below 100 until 2013, then increased to 229 projects in 2014 and 597 in 2015. The number of AI-related open source projects rose rapidly in 2016 (2,559 projects); the 14,213 projects initiated in 2017 were almost four times the 3,555 projects generated in total from 2009 to 2016, and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that its OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten and were replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras, which show high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, were also frequent topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency, the main difference being that visualization and medical imaging topics reached the top of the list despite not being there from 2009 to 2012, indicating that OSS was being developed to apply AI technology in the medical field.
Moreover, although computer vision was in the top 10 by appearance frequency from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changed slightly. Examining the trend of technology development through appearance frequency and degree centrality, machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015; in recent years, both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to top the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, the fields in which AI technologies are actively developed can be identified, and the results can be used as a baseline dataset for more empirical analysis of future convergent technology trends.
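
The topic network statistics described above (appearance frequency plus degree centrality on a topic network) can be illustrated with networkx, assuming topics that co-occur in the same project are linked. The topic lists below are invented stand-ins for Github project topics:

```python
# Small sketch of the degree-centrality step: build a topic co-occurrence
# network from per-project topic lists, then rank topics by degree
# centrality. Invented example data.
from itertools import combinations
import networkx as nx

project_topics = [
    ["machine-learning", "tensorflow", "python"],
    ["deep-learning", "tensorflow", "keras"],
    ["machine-learning", "deep-learning", "computer-vision"],
]

G = nx.Graph()
for topics in project_topics:
    # Topics appearing in the same project are connected pairwise.
    G.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(G)
for topic, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{topic:20s} {score:.2f}")
```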
Moreover, although the computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, they were not in the top 10 of the degree centrality. The topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list. It was found that the ranks of the composite neural network and reinforcement learning were changed slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning revealed the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed a low frequency and a low degree centrality between 2009 and 2012, their ranks abruptly increased between 2013 and 2015. It was confirmed that in recent years both technologies had high appearance frequency and degree centrality. TensorFlow first appeared during the phase of 2013-2015, and the appearance frequency and degree centrality of it soared between 2016 and 2018 to be at the top of the lists after deep learning, python. Computer vision and reinforcement learning did not show an abrupt increase or decrease, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis on future technology trends that can be converged.