• Title/Summary/Keyword: Java Language

Interactive Statistics Laboratory using R and Sage (R을 활용한 '대화형 통계학 입문 실습실' 개발과 활용)

  • Lee, Sang-Gu;Lee, Geung-Hee;Choi, Yong-Seok;Lee, Jae Hwa;Lee, Jenny Jyoung
    • Communications of Mathematical Education, v.29 no.4, pp.573-588, 2015
  • In this paper, we introduce the development process and application of a simple and effective model of a statistics laboratory using the open-source software R, a leading language and environment for statistical computing and graphics. This model consists of HTML files that include Sage cells, video lectures, and related internet resources. Users do not have to install statistical software to run their code: clicking the 'evaluate' button on the web page displays the result, computed in a cloud environment. Hence, with any mobile device and an internet connection, learners can freely practice statistical concepts and theorems through various examples with the provided sample R (or Sage) code, while instructors can easily design and adapt the material for their own lectures simply by gathering existing resources and editing an HTML file. This is a reasonable model of a laboratory for studying statistics. The model, together with the provided materials, will reduce the time and effort R beginners need to become acquainted with the R language, and it will also stimulate their interest in statistics. We introduce this interactive statistics laboratory as a useful model for beginners to learn basic statistical concepts and R.

An RDB to RDF Mapping System Considering Semantic Relations of RDB Components (관계형 데이터베이스 구성 요소의 의미 관계를 고려한 RDB to RDF 매핑 시스템)

  • Sung, Hajung;Gim, Jangwon;Lee, Sukhoon;Baik, Doo-Kwon
    • KIPS Transactions on Software and Data Engineering, v.3 no.1, pp.19-30, 2014
  • For the expansion of the Semantic Web, studies on converting data stored in relational databases into ontologies are actively in progress. Such studies mainly use an RDB to RDF mapping model, which maps relational database components to RDF components. However, previously proposed mapping models use different expression formats, which harms accessibility and reusability for users. Consequently, the need for a standardized mapping language was raised, and the W3C proposed R2RML as the standard mapping language for the RDB to RDF model. R2RML, however, converts only the schema data of a relational database to RDF; for this reason, ontology describing the semantic relations between table names and column names cannot be added. In this paper, we propose an RDB to RDF mapping system that considers the semantic relations of RDB components in order to solve this issue. The proposed system generates mapping data by adding RDFS attribute data to the schema data defined by R2RML in the relational database. This mapping data converts the data stored in the relational database into RDF that includes the RDFS attribute data. We implement the proposed system as a Java-based prototype, perform an experiment that converts data stored in a relational database into RDF for comparative evaluation, and compare the results against D2RQ, RDBToOnto, and Morph. The proposed system expresses semantic relations, yields a richer converted ontology than previous studies, and shows the best performance in data conversion time.
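
As a rough illustration of the enrichment step described above, the sketch below builds RDF for one hypothetical table row with Apache Jena and attaches an RDFS label that records the relation between a table name and a column name. The table, namespace, and values are assumptions made for illustration; this is not the paper's implementation.

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDFS;

public class RowToRdfSketch {
    public static void main(String[] args) {
        // Hypothetical row from a table EMPLOYEE(id, name); not the paper's data.
        String baseUri = "http://example.org/db/";   // assumed namespace
        Model model = ModelFactory.createDefaultModel();

        Resource row = model.createResource(baseUri + "EMPLOYEE/1");
        Property name = model.createProperty(baseUri + "EMPLOYEE#name");

        // The RDFS enrichment idea: keep the semantic relation between
        // the table name and the column name, not just the raw value.
        name.addProperty(RDFS.label, "name (column of table EMPLOYEE)");
        row.addProperty(name, "Alice");

        model.write(System.out, "TURTLE");
    }
}
```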

Visualization and Localization of Fusion Image Using VRML for Three-dimensional Modeling of Epileptic Seizure Focus (VRML을 이용한 융합 영상에서 간질환자 발작 진원지의 3차원적 가시화와 위치 측정 구현)

  • 이상호;김동현;유선국;정해조;윤미진;손혜경;강원석;이종두;김희중
    • Progress in Medical Physics, v.14 no.1, pp.34-42, 2003
  • In medical imaging, three-dimensional (3D) display using the Virtual Reality Modeling Language (VRML) as a portable file format can convey intuitive information more efficiently on the World Wide Web (WWW). The web-based 3D visualization of functional images combined with anatomical images has not been studied much in a systematic way. The goal of this study was to achieve simultaneous observation of 3D anatomical and functional models with planar images on the WWW, providing their locational information in 3D space with a measuring implement built in VRML. MRI and ictal-interictal SPECT images were obtained from one epileptic patient. Subtraction ictal SPECT co-registered to MRI (SISCOM) was performed to improve identification of the seizure focus. SISCOM image volumes were thresholded above one standard deviation (1-SD) and two standard deviations (2-SD). SISCOM foci and the boundaries of gray matter, white matter, and cerebrospinal fluid (CSF) in the MRI volume were segmented and rendered to VRML polygonal surfaces by the marching cubes algorithm. Line profiles along the x- and y-axes representing real lengths on an image were acquired; their maximum lengths were both 211.67 mm. The ratio of real size to rendered VRML surface size was approximately 1 to 605.9. A VRML measuring tool was made and merged with the previous VRML surfaces. User interface tools were embedded with JavaScript routines to display MRI planar images as cross sections of the 3D surface models and to set the transparency of the 3D surface models. When the transparencies were properly controlled, a fused display of the brain geometry with the 3D distributions of focal activated regions intuitively showed the spatial correlations among the three 3D surface models. The epileptic seizure focus was in the right temporal lobe of the brain. The real position of the seizure focus could be verified with the VRML measuring tool, and the anatomy corresponding to the seizure focus could be confirmed by MRI planar images crossing the 3D surface models. The VRML application developed in this study has several advantages. First, fused 3D display and control of anatomical and functional images were achieved on the WWW. Second, vector analysis of a 3D surface model was enabled by the VRML measuring tool based on real size. Finally, the anatomy corresponding to the seizure focus was intuitively identified through correlation with the MRI images. Our web-based visualization of 3D fusion images and their localization should aid online research and education in diagnostic radiology, therapeutic radiology, and surgical applications.
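
The SISCOM step above amounts to subtracting the interictal volume from the ictal volume and keeping voxels whose difference exceeds a 1-SD or 2-SD threshold. The Java fragment below is a minimal sketch of that arithmetic on toy data, assuming already co-registered and normalized volumes; it is not the study's code.

```java
/** Minimal SISCOM-style thresholding sketch; illustrative only. */
public class SiscomSketch {

    /** Keep voxels of (ictal - interictal) above k standard deviations; zero the rest. */
    static double[] thresholdDifference(double[] ictal, double[] interictal, double k) {
        int n = ictal.length;
        double[] diff = new double[n];
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            diff[i] = ictal[i] - interictal[i];
            sum += diff[i];
        }
        double mean = sum / n;
        double var = 0.0;
        for (double d : diff) var += (d - mean) * (d - mean);
        double sd = Math.sqrt(var / n);

        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            out[i] = (diff[i] > mean + k * sd) ? diff[i] : 0.0;  // k = 1 or 2 as in the study
        }
        return out;
    }

    public static void main(String[] args) {
        double[] ictal = {1.0, 2.0, 9.0, 1.5};      // toy voxel values, not real data
        double[] interictal = {1.0, 1.8, 2.0, 1.4};
        double[] focus = thresholdDifference(ictal, interictal, 1.0);  // 1-SD mask
        for (double v : focus) System.out.println(v);
    }
}
```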

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.1-17, 2017
  • A deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPU". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. This criterion is simply based on code length; the learning curve and ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us coding flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or search method we can think of. Our assessment of execution speed is that there is no meaningful difference among the frameworks. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers is also important. And for someone learning deep learning models, the availability of sufficient examples and references matters as well.
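
To make the computational-graph idea concrete, here is a toy reverse-mode automatic differentiation sketch in Java: each node records the local partial derivative on its incoming edges, and backward() accumulates gradients by the chain rule. This is only a conceptual illustration, not how Theano, Tensorflow, or CNTK are actually implemented.

```java
import java.util.*;

/** Toy reverse-mode autodiff over a scalar computational graph (illustrative only). */
class Node {
    double value, grad;
    final List<Node> parents = new ArrayList<>();
    final List<Double> localGrads = new ArrayList<>();  // d(this)/d(parent) on each edge

    Node(double value) { this.value = value; }

    static Node add(Node a, Node b) {
        Node out = new Node(a.value + b.value);
        out.parents.add(a); out.localGrads.add(1.0);
        out.parents.add(b); out.localGrads.add(1.0);
        return out;
    }

    static Node mul(Node a, Node b) {
        Node out = new Node(a.value * b.value);
        out.parents.add(a); out.localGrads.add(b.value);
        out.parents.add(b); out.localGrads.add(a.value);
        return out;
    }

    /** Chain rule: push this node's gradient back to its parents, edge by edge. */
    void backward(double upstream) {
        grad += upstream;
        for (int i = 0; i < parents.size(); i++) {
            parents.get(i).backward(upstream * localGrads.get(i));
        }
    }
}

public class AutodiffSketch {
    public static void main(String[] args) {
        Node x = new Node(2.0), w = new Node(3.0), b = new Node(1.0);
        Node y = Node.add(Node.mul(w, x), b);    // y = w*x + b
        y.backward(1.0);                         // dy/dy = 1
        System.out.println("dy/dw = " + w.grad); // 2.0 (= x)
        System.out.println("dy/dx = " + x.grad); // 3.0 (= w)
    }
}
```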

User Centered Interface Design of Web-based Attention Testing Tools: Inhibition of Return(IOR) and Graphic UI (웹 기반 주의력 검사의 사용자 인터페이스 설계: 회귀억제 과제와 그래픽 UI를 중심으로)

  • Kwahk, Ji-Eun;Kwak, Ho-Wan
    • Korean Journal of Cognitive Science, v.19 no.4, pp.331-367, 2008
  • This study aims to validate a web-based neuropsychological testing tool developed by Kwak (2007) and to suggest solutions to potential problems that can deteriorate its validity. When it targets a wider range of subjects, a web-based neuropsychological testing tool is challenged by high drop-out rates, lack of motivation, lack of interactivity with the experimenter, fear of computers, and so on. As a possible solution to these threats, this study redesigns the user interface of a web-based attention testing tool through three phases of study. In Study 1, an extensive analysis of Kwak's (2007) attention testing tool was conducted to identify potential usability problems. The Heuristic Walkthrough (HW) method was used by three usability experts to review various design features. As a result, many problems were found throughout the tool; the findings showed that the design of the instructions, user information survey forms, task screens, results screens, and so on did not conform to the needs of users and their tasks. In Study 2, eleven guidelines for the design of web-based attention testing tools were established based on the findings from Study 1. The guidelines were used to optimize the design and organization of the tool so that it fits user and task needs. The resulting design alternative was then implemented as a working prototype in the Java programming language. In Study 3, a comparative study was conducted to demonstrate the superiority of the new design (the graphic style tool) over the existing design (the text style tool). A total of 60 subjects participated in user testing sessions in which error frequency, error patterns, and subjective satisfaction were measured through performance observation and questionnaires. Through the task performance measurement, numerous user errors of various types were observed in the existing text style tool. The questionnaire results also supported the new graphic style tool: users rated it higher than the existing text style tool in terms of overall satisfaction, screen design, terminology and system information, ease of learning, and system performance.
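
For context, an inhibition-of-return (IOR) trial typically presents a peripheral cue, waits out a cue-target interval, then shows a target at the cued or uncued location and records the reaction time; at long intervals, responses to the cued location tend to be slower. The Java outline below sketches that generic trial structure with assumed timings; it is not Kwak's (2007) tool or the redesigned prototype.

```java
import java.util.Random;

/** Generic IOR trial timeline; all timings are assumptions, not the paper's tool. */
public class IorTrialSketch {
    public static void main(String[] args) throws InterruptedException {
        Random rng = new Random();
        String[] sides = {"LEFT", "RIGHT"};
        String cue = sides[rng.nextInt(2)];
        boolean cued = rng.nextBoolean();                 // target at cued vs. uncued location
        String target = cued ? cue : (cue.equals("LEFT") ? "RIGHT" : "LEFT");
        long soaMs = 800;                                 // long cue-target interval, where IOR is expected

        System.out.println("Fixation...");
        Thread.sleep(500);                                // assumed fixation duration
        System.out.println("Cue at " + cue);
        Thread.sleep(soaMs);                              // cue-target interval
        long onset = System.nanoTime();
        System.out.println("Target at " + target);
        // A real tool would record the keypress timestamp here; this is a placeholder:
        long rtMs = (System.nanoTime() - onset) / 1_000_000;
        System.out.println("cued=" + cued + ", RT ~ " + rtMs + " ms (placeholder)");
    }
}
```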

Social Tagging-based Recommendation Platform for Patented Technology Transfer (특허의 기술이전 활성화를 위한 소셜 태깅기반 지적재산권 추천플랫폼)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems, v.21 no.3, pp.53-77, 2015
  • Korea has witnessed an increasing number of domestic patent applications, but a majority of them are not utilized to their full potential and end up becoming obsolete. According to the 2012 National Congress' Inspection of Administration, about 73% of the patents held by universities and publicly funded research institutions failed to create social value and remain dormant. One of the main causes is that patent creators such as individual researchers, universities, or research institutions lack the means to connect their patents with the enterprises that need them and turn them into viable businesses. From the enterprise side, it is likewise hard to find appropriate patents through keyword search alone. This study proposes a patent recommendation system that can identify and recommend intellectual property matching users' fields of interest, among a rapidly accumulating number of patent assets, in an easier and more efficient manner. The proposed system extracts core contents and technology sectors from the existing pool of patents and combines them with secondary social knowledge derived from tag information created by users in order to find the best patents to recommend. In an early stage where no tag information has accumulated, the recommendation relies on content characteristics, identified through an analysis of keywords contained in patent attributes such as 'Title of Invention' and 'Claims'. To do this, the system extracts only nouns from patents and assigns each noun a weight according to its importance across all patents using TF-IDF analysis. It then finds patents whose weights are similar to those of the patents a user prefers; in this paper, this similarity is called the 'Domain Similarity'. Next, the system extracts technology-sector characteristics from patent documents by analyzing the International Patent Classification (IPC) codes. Every patent has one or more IPC codes, and each user can attach more than one tag to the patents they like, so each user has a set of IPC codes drawn from their tagged patents. The system uses this IPC set to analyze each user's technology preference and find well-fitted patents, calculating a 'Technology Similarity' between the user's IPC set and the IPC codes of all other patents. Later, when the tag information of multiple users has accumulated, the system expands the recommendations by considering other users' social tags on the patents tagged by the user in question; the similarity between the tag information of a user's preferred patents and that of other patents is called the 'Social Similarity' in this paper. Lastly, a 'Total Similarity' is calculated by adding these three similarities, and the patents with the highest 'Total Similarity' are recommended to each user. The system was applied to a total of 1,638 Korean patents obtained from the Korea Industrial Property Rights Information Service (KIPRIS) run by the Korea Intellectual Property Office. Since this original dataset does not include tag information, we created virtual tag information and used it to construct a semi-virtual dataset. The proposed recommendation algorithm was implemented in Java, and a prototype graphical user interface was also designed for this study. Because the system uses virtual data and has no dependent variables, the recommendations cannot be verified statistically; therefore, the study uses a scenario test method to verify the operational feasibility and recommendation effectiveness of the system. The results of this study are expected to improve the chances of matching promising patents with the most suitable businesses. It is assumed that users' experiential knowledge can be accumulated, managed, and utilized in the as-is patent system, which currently manages only standardized patent information.
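
As a rough sketch of the scoring described above, the following Java fragment combines a domain similarity over TF-IDF weights, a technology similarity over IPC code sets, and a social similarity over tag sets. Cosine and Jaccard are assumed measures chosen for illustration (the abstract states only that the three similarities are added), and the data are toy values; this is not the paper's implementation.

```java
import java.util.*;

/** Illustrative combination of the three similarities; not the paper's code. */
public class PatentSimilaritySketch {

    /** Cosine similarity between two sparse TF-IDF vectors ("domain similarity"). */
    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            na += e.getValue() * e.getValue();
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
        }
        for (double v : b.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    /** Jaccard overlap of two code/tag sets ("technology" and "social" similarity). */
    static double jaccard(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 0;
        Set<String> inter = new HashSet<>(a); inter.retainAll(b);
        Set<String> union = new HashSet<>(a); union.addAll(b);
        return (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Map<String, Double> tfidfA = Map.of("battery", 0.8, "electrode", 0.5);  // toy weights
        Map<String, Double> tfidfB = Map.of("battery", 0.6, "anode", 0.7);
        Set<String> ipcA = Set.of("H01M", "C01B"), ipcB = Set.of("H01M");
        Set<String> tagsA = Set.of("energy", "ev"), tagsB = Set.of("energy");

        // The abstract states the total similarity is the sum of the three parts.
        double total = cosine(tfidfA, tfidfB) + jaccard(ipcA, ipcB) + jaccard(tagsA, tagsB);
        System.out.println("Total similarity = " + total);
    }
}
```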

A Static Analysis Technique for Android Apps Written with Xamarin (자마린으로 개발된 안드로이드 앱의 정적 분석 연구)

  • Lim, Kyeong-hwan;Kim, Gyu-sik;Shim, Jae-woo;Cho, Seong-je
    • Journal of the Korea Institute of Information Security & Cryptology, v.28 no.3, pp.643-653, 2018
  • Xamarin is a representative cross-platform development framework that allows developers to write mobile apps in C# for multiple mobile platforms, such as Android, iOS, or Windows Phone. Using Xamarin, mobile app developers can reuse existing C# code and share significant amounts of code across multiple platforms, reducing development time and maintenance costs. Meanwhile, malware authors can also use Xamarin to spread malicious apps to more platforms, minimizing the time and cost of malicious app creation. To cope with this problem, it is necessary to analyze and detect malware written with Xamarin. However, few studies have been conducted on static analysis of apps written in Xamarin. In this paper, we examine the structure of Android apps written with Xamarin and propose a static analysis technique for them. We also demonstrate how to statically reverse-engineer apps that have been transformed through code obfuscation. Because Android apps written with Xamarin consist of Java bytecode, C#-based DLL libraries, and C/C++-based native libraries, we study static reverse engineering techniques for these different types of code.
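
A natural first triage step implied by this structure is to enumerate the three code layers inside the APK, which is simply a ZIP archive. The sketch below lists the Java bytecode, C# assemblies, and native libraries, assuming the typical Xamarin layout with DLLs under assemblies/; it is a starting point for analysis, not the authors' tool.

```java
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

/** Enumerate the three code layers of a Xamarin APK (an APK is a ZIP archive). */
public class XamarinApkTriage {
    public static void main(String[] args) throws Exception {
        try (ZipFile apk = new ZipFile(args[0])) {   // path to an .apk file
            Enumeration<? extends ZipEntry> entries = apk.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                if (name.endsWith(".dex")) {
                    System.out.println("Java bytecode : " + name);
                } else if (name.startsWith("assemblies/") && name.endsWith(".dll")) {
                    System.out.println("C# assembly   : " + name);   // Xamarin managed code
                } else if (name.startsWith("lib/") && name.endsWith(".so")) {
                    System.out.println("Native library: " + name);   // C/C++ code, incl. Mono runtime
                }
            }
        }
    }
}
```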

Catalytic CVD-Kinetics of Pyrolytic Carbon and SiC on the Stainless Steel Stent (Stainless Steel Stent에 Pyrolytic Carbon과 SiC의 촉매적 CVD-Kinetic연구)

  • 이보성;이무용
    • Proceedings of the KAIS Fall Conference, 2000.10a, pp.30-33, 2000
  • The number of patients with coronary artery disease has recently been increasing rapidly in Korea, and coronary angioplasty, a treatment for coronary artery disease, has become common with the introduction of coronary stents; more than 5,000 stents are now implanted annually in Korea. However, stents are expensive (about 1,200,000 KRW each) and entirely imported, and acute occlusion and restenosis caused by thrombosis, which can lead to death after the procedure, remain problems. One approach to these problems is the development of composite stents with high biocompatibility: stents coated with SiC or carbon are known to suppress thrombus formation after the procedure. In particular, pure pyrolytic carbon has excellent hemocompatibility and is gas-tight, so this study investigates its CVD kinetics. It is well known that the CVD of pyrolytic carbon from methane yields various structures depending on temperature, and that its mechanism varies accordingly; it has also been confirmed that the outcome depends on the quantitative relation between gas-phase (homogeneous) and surface (heterogeneous) reactions. We confirmed that a stent made of stainless steel 316L, which contains 12-15% Ni and 2% Mo, permits catalytic CVD of pyrolytic carbon even at a low temperature (600℃) at which the metal retains its properties, and that it serves as a buffer layer suitable for SiC coating. From the kinetics of low-temperature catalytic CVD of pyrolytic carbon needed for reactor design, this study derived and verified a rate law together with its mechanism, and found that at 600℃ and 90 kPa, a P_CH4/P_H2 ratio of 5:1 with a residence time of 1.8 sec was optimal, giving a deposition rate of 11.2 g-mol/g-cat·h, or a thickness growth rate of 73 nm/sec.

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems, v.20 no.2, pp.109-122, 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: volume (the amount of data), velocity (data input and output speeds), and variety (the variety of data types). If someone intends to discover the trend of an issue in SNS Big Data, this information can serve as an important new source for creating value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and established to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) convey the importance of a topic through a treemap based on a scoring system and frequency; (4) visualize the daily time-series graph of keywords found by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing up to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as its visualization tool. This library is designed for creating Data Driven Documents that bind the document object model (DOM) to data; interaction with the data is easy, which is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, with its pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed with these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the quality of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS); based on this, we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
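
As one concrete piece of the stack above, storing an incoming tweet in MongoDB from Java takes only a few lines with the official driver. The sketch below assumes a local MongoDB instance and hypothetical database and collection names; it is illustrative and not the TITS source code.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

/** Store a tweet document in MongoDB (schema-less, as TITS relies on); illustrative only. */
public class TweetStoreSketch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> tweets =
                    client.getDatabase("tits_demo").getCollection("tweets");  // assumed names

            // No schema to declare: a document can carry whatever fields the stream provides.
            tweets.insertOne(new Document("user", "someone")
                    .append("text", "example tweet text")
                    .append("createdAt", System.currentTimeMillis()));

            System.out.println("stored: " + tweets.countDocuments());
        }
    }
}
```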

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.1-19, 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than people's in many fields, including image and speech recognition. Many efforts have been made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields, including the medical, financial, manufacturing, service, and education fields. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open-source projects, and as a result, technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes much to open-source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. This study searched for and collected a list of major projects related to AI generated from 2000 to July 2018 on Github, and examined the development trends of major technologies in detail by applying a text mining technique to the topic information that characterizes the collected projects and their technical fields. The results showed that the number of software development projects per year was less than 100 until 2013, but increased to 229 projects in 2014 and 597 projects in 2015. The number of open-source projects related to AI then increased rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and the number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing has remained at the top across all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the ten most frequently appearing topics. After 2016, however, programming languages other than Python disappeared from the top ten topics; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, appeared frequently as topics. Topic network analysis showed that the topics with the highest degree centrality were similar to those with the highest appearance frequency. The main difference was that the visualization and medical imaging topics were found at the top of the degree centrality list, although they were not at the top from 2009 to 2012; this indicates that OSS was developed in the medical field in order to utilize AI technology. Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. It is noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank increased abruptly between 2013 and 2015, and in recent years it has had both high appearance frequency and high degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed, and the results of this study can serve as a baseline dataset for more empirical analysis of future technology trends and their convergence.
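
Degree centrality as used above can be computed directly from project topic lists: build a co-occurrence network in which topics sharing a project are linked, then count each topic's distinct neighbors. The Java sketch below illustrates this on toy data; it is not the study's pipeline.

```java
import java.util.*;

/** Degree centrality of a topic co-occurrence network built from project topic lists. */
public class TopicDegreeSketch {
    public static void main(String[] args) {
        // Toy projects with Github-style topic labels (illustrative data only).
        List<List<String>> projects = List.of(
                List.of("machine-learning", "deep-learning", "tensorflow"),
                List.of("machine-learning", "python"),
                List.of("deep-learning", "tensorflow", "computer-vision"));

        // Link every pair of topics that co-occur in the same project.
        Map<String, Set<String>> neighbors = new HashMap<>();
        for (List<String> topics : projects) {
            for (String a : topics) {
                for (String b : topics) {
                    if (!a.equals(b)) {
                        neighbors.computeIfAbsent(a, k -> new HashSet<>()).add(b);
                    }
                }
            }
        }

        // Degree centrality = number of distinct co-occurring topics.
        neighbors.entrySet().stream()
                .sorted((x, y) -> y.getValue().size() - x.getValue().size())
                .forEach(e -> System.out.println(e.getKey() + ": " + e.getValue().size()));
    }
}
```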