• Title/Summary/Keyword: java

Search Result 2,691

Development of User Based Recommender System using Social Network for u-Healthcare (사회 네트워크를 이용한 사용자 기반 유헬스케어 서비스 추천 시스템 개발)

  • Kim, Hyea-Kyeong;Choi, Il-Young;Ha, Ki-Mok;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.181-199 / 2010
  • With the rapid aging of the population and growing interest in health, demand for new healthcare services is increasing. Until now, healthcare services have provided post-hoc treatment in a face-to-face manner, but related research shows that proactive treatment is more effective for preventing disease. In particular, existing healthcare services have limitations in preventing and managing lifestyle diseases such as metabolic syndrome, because the causes of metabolic syndrome are rooted in life habits. With the advent of ubiquitous technology, patients with metabolic syndrome can improve habits such as poor eating and physical inactivity, without the constraints of time and space, through u-healthcare services. Therefore, much u-healthcare research focuses on providing personalized healthcare services for preventing and managing metabolic syndrome. For example, Kim et al. (2010) proposed a healthcare model that provides customized calorie targets and nutrient ratios by analyzing the user's food preferences. Lee et al. (2010) suggested a customized diet recommendation service that considers basic information, vital signs, family disease history, and food preferences to prevent and manage coronary heart disease. Kim and Han (2004) demonstrated that web-based nutrition counseling affects the food intake and blood lipids of patients with hyperlipidemia. However, existing u-healthcare research focuses on providing predefined, one-way services, so users tend to quickly lose interest in improving their life habits. To solve this problem, this research proposes a u-healthcare recommender system based on the collaborative filtering principle and a social network. 
This research follows the collaborative filtering principle but preserves a local network (a small group of similar neighbors) for each target user in order to recommend context-aware healthcare services. Our approach consists of five steps. In the first step, a user profile is created from the usage history of habit-improvement services, and a set of neighbors is formed according to the similarity between users, calculated with the Pearson correlation coefficient. In the second step, the target user obtains service information from his or her neighbors. In the third step, a top-N recommendation list of services is generated for the target user; in building the list, we apply multi-filtering based on the user's psychological context and body mass index (BMI) for more detailed recommendation. In the fourth step, the personal information, i.e., the service usage history, is updated when the target user uses a recommended service. In the final step, the social network is reformed to continually provide qualified recommendations: for example, neighbors may be excluded from the social network if the target user does not like the recommendation list received from them. That is, this step updates each user's neighbors locally, always maintaining up-to-date local neighbors so that context-aware recommendations can be given in real time. The characteristics of this research are as follows. First, we develop a u-healthcare recommender system for improving life habits such as poor eating and physical inactivity. Second, the proposed system uses autonomous collaboration, which keeps users from dropping out and losing interest in improving their habits. Third, the reformation of the social network is automated to maintain recommendation quality. 
Finally, this research implemented a mobile prototype system using Java and Microsoft Access 2007 to recommend prescribed foods and exercises for chronic disease prevention, provided by A university medical center. The system aims to prevent chronic illnesses and improve users' lifestyles by providing context-aware, personalized food and exercise services drawing on similar users' experience and knowledge. We expect users of this system to improve their life habits with a handheld smartphone, because autonomous collaboration arouses interest in healthcare.
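The five-step procedure above hinges on two pieces: Pearson-correlation neighbor formation and top-N list generation. The following is a minimal sketch in Python, using a hypothetical usage-history table; the paper's actual profile fields, psychological-context/BMI multi-filtering, and network reformation are omitted.

```python
from math import sqrt

# Hypothetical usage-history scores (user -> service -> score); the paper's
# real profiles and multi-filtering steps are not given, so these are toy data.
ratings = {
    "u1": {"walk": 5, "diet_a": 3, "yoga": 4},
    "u2": {"walk": 4, "diet_a": 2, "yoga": 5, "diet_b": 4},
    "u3": {"walk": 1, "diet_a": 5, "diet_b": 2},
}

def pearson(a, b):
    """Pearson correlation over the services scored by both users."""
    common = sorted(set(a) & set(b))
    n = len(common)
    if n < 2:
        return 0.0
    xs, ys = [a[s] for s in common], [b[s] for s in common]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def top_n(target, n=2):
    """Recommend the target's unseen services, weighted by neighbor similarity."""
    scores = {}
    for user, user_ratings in ratings.items():
        if user == target:
            continue
        sim = pearson(ratings[target], user_ratings)
        if sim <= 0:  # keep only positively similar users as local neighbors
            continue
        for svc, score in user_ratings.items():
            if svc not in ratings[target]:
                scores[svc] = scores.get(svc, 0.0) + sim * score
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

With these toy data, `top_n("u1")` recommends `diet_b`, learned from the positively correlated neighbor `u2`; dropping negatively correlated users mirrors the paper's local-network pruning.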

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.39-54 / 2013
  • The recent explosive growth of electronic commerce provides customers with many advantageous purchase opportunities. In this situation, customers who lack knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists, so recommender systems in online shopping stores have become one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect user preferences cause disappointment and waste users' time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by precisely reflecting user preferences. The research data were collected from a real-world online shopping store that sells products from famous art galleries and museums in Korea. The data initially contained 5,759 transactions; 3,167 transactions remained after deleting records with null values. We transformed the categorical variables into dummy variables and excluded outliers. The proposed model consists of two steps. The first step predicts which customers are highly likely to purchase products in the online shopping store. In this step, we use logistic regression, decision trees, and artificial neural networks to predict, for each product group, the customers with a high likelihood of purchase, using SAS E-Miner software. We partition the dataset into modeling and validation sets for logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network model. The validation dataset is the same for all experiments. 
We then combine the results of each predictor using multi-model ensemble techniques, namely bagging and bumping. Bagging, short for "Bootstrap Aggregation," combines the outputs of several machine learning models to raise the performance and stability of prediction or classification; it is a special form of the averaging method. Bumping, short for "Bootstrap Umbrella of Model Parameter," retains only the model with the lowest error value. The results show that bumping outperforms bagging and the individual predictors except in the "Poster" product group, where the artificial neural network model performs best. In the second step, we use market basket analysis to extract association rules for co-purchased products. We extracted thirty-one association rules according to the Lift, Support, and Confidence measures, setting the minimum transaction frequency to support associations at 5%, the maximum number of items in an association at 4, and the minimum confidence for rule generation at 10%. We also excluded rules with a lift value below 1. After removing duplicate rules, fifteen association rules remained: eleven describe associations between products within the "Office Supplies" product group, one links the "Office Supplies" and "Fashion" product groups, and the other three link the "Office Supplies" and "Home Decoration" product groups. Finally, the proposed recommender system provides a recommendation list to the appropriate customers. We tested the usability of the proposed system using a prototype with real-world transaction and profile data. To this end, we constructed the prototype system using ASP, JavaScript, and Microsoft Access. 
In addition, we surveyed user satisfaction with the product list recommended by the proposed system versus randomly selected product lists. The survey participants were 173 users of MSN Messenger, Daum Café, and P2P services. We evaluated user satisfaction on a five-point Likert scale and applied a paired-sample t-test to the survey results. The results show that the proposed model outperforms the random selection model at the 1% statistical significance level, meaning users were significantly more satisfied with the recommended product list. The results also suggest that the proposed system may be useful in real-world online shopping stores.
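To make the two ensemble techniques concrete, here is a toy sketch that bags and bumps 25 bootstrap copies of a deliberately simple threshold classifier. The data, base learner, and model count are all illustrative assumptions; the paper's actual base learners are logistic regression, decision trees, and neural networks built in SAS E-Miner.

```python
import random

random.seed(0)

# Toy data: (feature, purchase label), purchasers have larger feature values.
data = [(x, 1 if x > 5 else 0) for x in range(10)]

def fit_threshold(sample):
    """'Train' a stump: threshold at the mean feature of positive examples."""
    pos = [x for x, y in sample if y == 1]
    t = sum(pos) / len(pos) if pos else float("inf")
    return lambda x: 1 if x >= t else 0

def bootstrap(sample):
    """Resample with replacement, same size as the original sample."""
    return [random.choice(sample) for _ in sample]

def error(model, sample):
    return sum(model(x) != y for x, y in sample) / len(sample)

models = [fit_threshold(bootstrap(data)) for _ in range(25)]

# Bagging: majority vote (an averaging method) across all bootstrap models.
def bagging_predict(x):
    votes = sum(m(x) for m in models)
    return 1 if votes * 2 >= len(models) else 0

# Bumping: keep only the single bootstrap model with the lowest error.
bumped = min(models, key=lambda m: error(m, data))
```

Bagging uses every bootstrap model; bumping discards all but the best one, which is why the paper can report bumping as a single winning model per product group.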

A Comparative Study about Industrial Structure Feature between TL Carriers and LTL Carriers (구역화물운송업과 노선화물운송업의 산업구조 특성 비교)

  • 민승기
    • Journal of Korean Society of Transportation / v.19 no.1 / pp.101-114 / 2001
  • Transportation enterprises should maintain constant, high-quality operations. Thus, in the short run, they do not adjust supply to match demand; as a result, they do not reduce operations at will despite operating deficits. Among freight transportation types, this feature applies more to less-than-truckload (LTL) carriers than to truckload (TL) carriers, because TL transportation supply responds more flexibly to freight demand than LTL supply does. Accordingly, the shortage of roads and freight terminals is larger for LTL than for TL carriers. Comparing roads and freight terminals, the terminal shortage is the larger of the two: the road shortage peaked in 1990 and improved afterward, while the terminal shortage has recently become serious. Freight terminals therefore need more expansion than roads and present better investment conditions. For LTL carriers, terminal expansion induces road expansion; for TL carriers, by contrast, terminals substitute for roads. In transportation revenue, freight terminals contribute more to LTL than to TL carriers. However, when the quasi-fixed factors (roads and freight terminals) are adjusted to their optimal long-run levels, diseconomies of scale grow for TL carriers while economies of scale grow for LTL carriers. Consequently, TL carriers need measures to support the management of small enterprises and owner-drivers, while LTL carriers should exploit economies of scale by solving problems such as unprofitable routes, excessive rental of freight-handling offices, insufficient freight terminals, driver shortages, and inadequate freight insurance.

Probabilistic Anatomical Labeling of Brain Structures Using Statistical Probabilistic Anatomical Maps (확률 뇌 지도를 이용한 뇌 영역의 위치 정보 추출)

  • Kim, Jin-Su;Lee, Dong-Soo;Lee, Byung-Il;Lee, Jae-Sung;Shin, Hee-Won;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.36 no.6 / pp.317-324 / 2002
  • Purpose: The use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as the standard anatomical framework. While most researchers consult the Talairach atlas to report the localization of activations detected with SPM, there is a significant disparity between the MNI templates and the Talairach atlas, which makes the interpretation of SPM results time-consuming, subjective, and inaccurate. The purpose of this study was to develop a program that provides objective anatomical information for each x-y-z position in the ICBM coordinate system. Materials and Methods: The program was designed to provide the anatomical information for a given x-y-z position in MNI coordinates based on the Statistical Probabilistic Anatomical Map (SPAM) images of the ICBM. When an x-y-z position is given to the program, the names of the anatomical structures with non-zero probability, together with the probabilities that the given position belongs to those structures, are tabulated. The program was coded in IDL and Java for easy porting to any operating system or platform. The utility of the program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying the program to the analysis of a PET brain-activation study of human memory, in which the anatomical information on the activated areas was previously known. Results: Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved. The validation study showed the relevance of the program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. Conclusion: These programs will be useful for interpreting image analyses performed in MNI coordinates, as done in the SPM program.
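The core operation described above, tabulating every structure with non-zero probability at a given x-y-z position, can be sketched as follows. The structure names, coordinates, and probabilities here are toy stand-ins; the real SPAM images are full probability volumes over the ICBM space at 1 mm resolution, and the original tool was written in IDL and Java rather than Python.

```python
# Toy SPAM-style lookup: each structure maps voxel coordinates to a
# membership probability (illustrative values only).
spam = {
    "hippocampal formation": {(1, 2, 3): 0.85},
    "parahippocampal gyrus": {(1, 2, 3): 0.10, (2, 2, 3): 0.40},
}

def label_probabilities(x, y, z):
    """Tabulate structures with non-zero probability at an x-y-z position,
    sorted from most to least probable."""
    rows = [(name, vol.get((x, y, z), 0.0)) for name, vol in spam.items()]
    rows = [r for r in rows if r[1] > 0]
    return sorted(rows, key=lambda r: r[1], reverse=True)
```

A query such as `label_probabilities(1, 2, 3)` returns the kind of table the paper describes: structure names paired with the probability that the position belongs to each.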

Nutrient Balance and Vegetable Crop Production as Affected by Different Sources of Organic Fertilizers (유기자원에 따른 양분수지 및 작물생산)

  • Agus, Fahmuddin;Setyorini, Diah;Hartatik, Wiwik;Lee, Sang-Min;Sung, Jwa-Kyung;Shin, Jae-Hoon
    • Korean Journal of Soil Science and Fertilizer / v.42 no.1 / pp.1-13 / 2009
  • Understanding the net nutrient balance of a farming system is crucial in assessing its sustainability. We quantified N, P, and K balances under organic vegetable farming in a Eutric Hapludand in West Java, Indonesia over five planting seasons from 2005 to 2007. The ten treatments and three replications, arranged in a completely randomized block design, included single or combined sources of organic fertilizers: barnyard manure, composts, or green manures. The organic matter rates were adjusted every planting season depending on the previous crop responses. The results showed that the application of ≥20 t ha-1 of barnyard manure per crop resulted in positive balances of N, P, and K, except in the second crops of 2006, where potassium balances were -25 to -11 kg ha-1 under the treatments involving cattle barnyard manure, because of the low K content of these treatments and the high K uptake by Chinese cabbage. Application of 20 to 25 t ha-1 of plant residue or 5 t ha-1 of Tithonia compost also resulted in a negative K balance. Soil available P increased significantly under ≥25 t ha-1 of barnyard manure, and the chicken manure treatment had the highest available P. Accordingly, chicken barnyard manure gave the highest crop yield because of its relatively higher N, P, and K contents. Plant residues gave the lowest yield due to the lowest nutrient content among all sources. Reducing barnyard manure use to 12.5 t ha-1 and substituting Tithonia compost, Tithonia green manure, or vegetable plant residue compost gave yields not significantly different from applying 25 t ha-1 of barnyard manure alone. In the long run, application of 25 t ha-1 of cattle, goat, and horse manure, or about 20 t ha-1 of chicken manure, is recommended for sustaining the fertility of this Andisol for vegetable production.
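The nutrient balance the study quantifies is, at its core, applied nutrients minus crop uptake. The arithmetic can be sketched as below; the nutrient contents and uptake figures are illustrative assumptions, not the paper's measured values.

```python
# Toy nutrient balance: applied nutrients minus crop uptake, in kg/ha.
manure_rate_t_ha = 20.0                                   # applied manure, t/ha
nutrient_content_kg_t = {"N": 6.0, "P": 1.5, "K": 5.0}    # kg nutrient per t manure
crop_uptake_kg_ha = {"N": 90.0, "P": 20.0, "K": 120.0}    # removed by harvest

def balance(rate_t_ha, content, uptake):
    """Per-nutrient balance in kg/ha: applied minus taken up."""
    return {n: rate_t_ha * content[n] - uptake[n] for n in content}

b = balance(manure_rate_t_ha, nutrient_content_kg_t, crop_uptake_kg_ha)
```

With these assumed figures the K balance comes out negative (20 x 5 - 120 = -20 kg/ha) even though N and P are positive, mirroring the pattern the study observed under high K uptake by Chinese cabbage.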

A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.69-92 / 2015
  • The explosion of social media data has led to the application of text mining techniques to analyze big social media data more rigorously. Even as social media text analysis algorithms have improved, previous approaches have limitations. In the field of sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies have added grammatical factors to the feature sets used to train the classification model. The other approach applies semantic analysis to sentiment analysis, but it has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to handle the broader semantic features that existing sentiment analysis underestimates. The result of the Word2Vec algorithm is compared to that of co-occurrence analysis to identify the difference between the two approaches. The results show that the Word2Vec algorithm extracts about three times as many related words expressing emotion about the keyword as co-occurrence analysis does. This difference stems from Word2Vec's vectorization of semantic features, so the Word2Vec algorithm can be said to catch hidden related words that traditional analysis has not found. In addition, part-of-speech (POS) tagging for Korean is used to detect adjectives as "emotional words." The emotional words extracted from the text are converted into word vectors by the Word2Vec algorithm to find related words, among which nouns are selected because each of them may have a causal relationship with the emotional word in the sentence. 
The process of extracting these trigger factors of emotional words is named the "Emotion Trigger" in this study. As a case study, the datasets were collected by searching with three keywords, professor, prosecutor, and doctor, which carry rich public emotion and opinion. Preliminary data collection was conducted to select secondary keywords for data gathering. The secondary keywords used to gather the data for the actual analysis were: Professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor); Doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebate); Prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (Professor: 25,720; Doctor: 35,110; Prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all programs used for text processing and analysis were coded in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, the Emotion Trigger can detect hidden connections to public emotion that existing methods cannot. Finally, the approach used in this study can be generalized regardless of the type of text data. The limitation of this study is that it is hard to claim that a word extracted by Emotion Trigger processing has a significantly causal relationship with the emotional word in a sentence. A future study will clarify the causal relationship between emotional words and the words extracted by the Emotion Trigger by comparison with manually tagged relationships. Furthermore, the text data used in the Emotion Trigger are from Twitter, so the data have a number of distinct features which we did not deal with in this study. 
These features will be considered in further study.
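The Emotion Trigger lookup described above amounts to ranking candidate nouns by their vector similarity to an emotional word. The sketch below shows that ranking with hand-made 3-dimensional toy vectors standing in for vectors a trained Word2Vec model would provide; the words and vector values are purely illustrative, and the original pipeline was written in Java.

```python
from math import sqrt

# Toy stand-ins for Word2Vec embeddings (real models use hundreds of dims).
vectors = {
    "angry":     (0.90, 0.10, 0.00),
    "professor": (0.80, 0.20, 0.10),
    "lecture":   (0.10, 0.90, 0.20),
    "scandal":   (0.85, 0.15, 0.05),
}

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def related_nouns(emotion_word, nouns, k=2):
    """Return the k candidate nouns most similar to the emotion word."""
    ranked = sorted(nouns,
                    key=lambda w: cosine(vectors[emotion_word], vectors[w]),
                    reverse=True)
    return ranked[:k]
```

With a real model (e.g. gensim's Word2Vec), the same ranking is what a most-similar query performs over the whole vocabulary; the study then keeps only the nouns as candidate trigger factors.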

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.53-77 / 2012
  • This study analyzes the differences in contents and tones of argument among three major Korean newspapers: the Kyunghyang Shinmun, the Hankyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea explicitly deliver their own tone of argument when they cover sensitive issues and topics. This can be problematic if readers read the news without being aware of the tone of argument, because contents and tones of argument can easily influence readers. Thus it is desirable to have a tool that can inform readers what tone of argument a newspaper takes. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main subjects, Culture, Politics, International, Editorial-opinion, Eco-business, and National issues, and attempt to identify differences and similarities among the newspapers. The basic unit of the text mining analysis is a paragraph of a news article. This study uses a keyword network analysis tool and visualizes relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean Integrated News Database System, which preserves articles of the Kyunghyang Shinmun, the Hankyoreh, and the Dong-A Ilbo and is open to the public. About 3,030 articles from 2008 to 2012 were used. The International, National issues, and Politics sections were gathered with specific keywords: the International section with 'Nuclear weapon of North Korea,' the National issues section with '4-major-river,' and the Politics section with 'Tonghap-Jinbo Dang.' All articles from April 2012 to May 2012 in the Eco-business, Culture, and Editorial-opinion sections were also collected. 
All of the collected data were edited into paragraphs. We removed stop-words using the Lucene Korean Module, calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in each paragraph, and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, we used the cosine coefficient matrix as input to PFNet (Pathfinder Network). To analyze the three newspapers and find the significant keywords in each, we examined the ten highest-frequency keywords and the keyword networks of the twenty highest-frequency keywords to closely examine their relationships and show a detailed network map among keywords. We used NodeXL software to visualize the PFNet. After drawing all the networks, we compared the results with the classification results. Classification was first performed to identify how the tone of argument of each newspaper differs from the others. To analyze tones of argument, all paragraphs were divided into two types, positive tone and negative tone. To identify and classify the tones of all collected paragraphs and articles, a supervised learning technique was used: the Naïve Bayes classifier provided in the MALLET package. After classification, precision, recall, and F-value were used to evaluate the results. Based on the results of this study, three subjects, Culture, Eco-business, and Politics, showed differences in contents and tones of argument among the three newspapers. In addition, for the National issues subject, tones of argument on the 4-major-rivers project differed from each other. It seems the three newspapers each have their own specific tone of argument in those sections. The keyword networks also showed different shapes from one another in the same period and the same section. 
This means that the frequently appearing keywords differ across the newspapers and that their contents comprise different keywords. The positive-negative classification also showed the possibility of distinguishing newspapers' tones of argument from one another. These results indicate that the approach in this study is promising as a new tool to identify the different tones of argument of newspapers.

The Ontology-Based Movie Contents Recommendation Scheme Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.25-44 / 2013
  • Accessing movie contents has become easier and more common with the advent of smart TV, IPTV, and web services that can be used to search for and watch movies. In this situation, searches for users' preferred movie contents are increasing. However, since the amount of available movie content is so large, users need considerable effort and time to find what they want. Hence, there has been much research on recommending personalized items through analysis and clustering of user preferences and profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between movie metadata but also relations between metadata and user profiles; the relations between metadata items indicate similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we select the main metadata (genre, actor/actress, keywords, and synopsis) that affect which movies users choose as interesting. The user model contains demographic information about the user and relations between the user and movie metadata. In our model, the movie ontology model consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For our knowledge base, we entered individual data for 14,374 movies into each concept of the contents ontology model. This movie metadata knowledge base is used to search for movies related to the metadata the user finds interesting, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation, consisting of four components. 
The first component searches for candidate movies based on the demographic information of the user. In this component, we divide users into groups according to demographic information in order to recommend movies for each group, define the rules that decide a user's group, and generate the query used to search for candidate movies for recommendation. The second component searches for candidate movies based on user preference. When choosing a movie, users consider metadata such as genre, actor/actress, synopsis, and keywords; users input their preferences, and the system searches for movies based on them. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of the candidate movies carries a weight that is used to decide the recommendation order. The third component merges the results of the first and second components: we calculate the weight of each movie from the weight values of its metadata and sort the movies by weight. The fourth component analyzes the result of the third component, decides the level of contribution of each metadata item, and applies the contribution weight to the metadata. Finally, the result of this step is used as the recommendation for users. We tested the usability of the proposed scheme using a web application implemented with JSP, JavaScript, and the Protégé API. In our experiment, we collected results from 20 men and women, ranging in age from 20 to 29, using 7,418 movies with a rating of at least 7.0. We provided the Top-5, Top-10, and Top-20 recommended movies to each user, who then chose the movies that interested them. The average number of interesting movies chosen was 2.1 in Top-5, 3.35 in Top-10, and 6.35 in Top-20. 
These results are better than those yielded by using each metadata type individually.
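The third component's merge-and-rank step can be sketched as below: each candidate list attaches weights to the metadata that matched, and a movie's overall weight is the sum over all its matched metadata. The movie names, metadata keys, and weight values are illustrative assumptions, not the paper's actual scheme constants.

```python
# Hypothetical candidate sets: movie -> {matched metadata: weight}.
demographic_candidates = {"MovieA": {"genre": 0.5}, "MovieB": {"genre": 0.4}}
preference_candidates = {"MovieA": {"actor": 0.3}, "MovieC": {"keyword": 0.6}}

def merge_and_rank(*candidate_sets):
    """Merge candidate lists and sort movies by their summed metadata weights."""
    scores = {}
    for candidates in candidate_sets:
        for movie, metadata_weights in candidates.items():
            scores[movie] = scores.get(movie, 0.0) + sum(metadata_weights.values())
    return sorted(scores, key=scores.get, reverse=True)

ranking = merge_and_rank(demographic_candidates, preference_candidates)
```

A movie appearing in both candidate lists (here `MovieA`) accumulates weight from both, which is what pushes it ahead of single-source candidates; the fourth component would then rescale these metadata weights by their learned contribution.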

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than humans in many fields, including image and speech recognition. Many efforts have been made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, service, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which have been developed through the online collaboration of many parties. This study searched for and collected a list of major AI-related projects created from 2000 to July 2018 on Github, and examined the development trends of major technologies in detail by applying text mining to the topic information that characterizes the collected projects and their technical fields. The analysis showed that the number of software development projects per year was fewer than 100 until 2013, but increased to 229 projects in 2014 and 597 projects in 2015. The number of AI-related open source projects then increased rapidly in 2016 (2,559 OSS projects). 
The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing technology remained at the top in all years, implying that its OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the top ten most frequent topics, but after 2016, programming languages other than Python disappeared from the top ten. Instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which are used in various fields, appeared frequently as topics. The topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the degree centrality list, although they were not at the top from 2009 to 2012; this indicates that OSS was developed in the medical field in order to utilize AI technology. Moreover, although computer vision was in the top ten by appearance frequency from 2013 to 2015, it was not in the top ten by degree centrality. Otherwise, the topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changed slightly. 
The trend of technology development was then examined using both topic appearance frequency and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic had low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both technologies have shown high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease, and had relatively low appearance frequency and degree centrality compared with the topics above. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed, and the findings can serve as a baseline dataset for further empirical analysis of future technology trends and their convergence.
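The two measures the study relies on, topic appearance frequency and degree centrality in a topic co-occurrence network, can be sketched in a few lines of Python. The project list and topic labels below are invented placeholders standing in for the GitHub topic metadata the study mined; the authors' actual dataset and tooling are not specified in the abstract, so this is only a minimal illustration of the technique (degree centrality computed as degree divided by the number of other nodes).

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical sample: each project is represented by its topic labels.
projects = [
    ["machine-learning", "deep-learning", "tensorflow"],
    ["machine-learning", "natural-language-processing", "python"],
    ["deep-learning", "computer-vision", "tensorflow"],
    ["machine-learning", "reinforcement-learning"],
]

# Appearance frequency: how often each topic occurs across projects.
freq = Counter(topic for topics in projects for topic in topics)

# Co-occurrence network: topics are nodes; an edge links two topics
# that appear together in at least one project (adjacency-set form).
adj = defaultdict(set)
for topics in projects:
    for a, b in combinations(sorted(set(topics)), 2):
        adj[a].add(b)
        adj[b].add(a)

# Degree centrality: fraction of the other topics each topic
# co-occurs with (degree / (n - 1) for an n-node graph).
n = len(adj)
centrality = {t: len(neighbors) / (n - 1) for t, neighbors in adj.items()}
```

With this toy data, "machine-learning" has the highest frequency (3 occurrences) and the highest degree centrality (it co-occurs with 5 of the 6 other topics), mirroring the kind of ranking the study reports for real GitHub topics.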

King Sejo's Establishment of the Thirteen-story Stone Pagoda of Wongaksa Temple and Its Semantics (세조의 원각사13층석탑 건립과 그 의미체계)

  • Nam, Dongsin
    • MISULJARYO - National Museum of Korea Art Journal
    • /
    • v.101
    • /
    • pp.12-46
    • /
    • 2022
  • Completed in 1467, the Thirteen-story Stone Pagoda of Wongaksa Temple is the last Buddhist pagoda erected at the center of the capital (present-day Seoul) of the Joseon Dynasty. It was commissioned by King Sejo, the final Korean king to favor Buddhism. In this paper, I aim to examine King Sejo's intentions behind celebrating the tenth anniversary of his enthronement with the construction of the thirteen-story stone pagoda in the central area of the capital and the enshrinement of sarira from Shakyamuni Buddha and the Newly Translated Sutra of Perfect Enlightenment (圓覺經). This paper provides a summary of this examination and suggests future research directions. The second chapter of the paper discusses the scriptural background for thirteen-story stone pagodas from multiple perspectives. I was the first to specify the Latter Part of the Nirvana Sutra (大般涅槃經後分) as the most direct and fundamental scripture for the erection of a thirteen-story stone pagoda. I also found that this sutra was translated in Central Java in the latter half of the seventh century and was then circulated in East Asia. Moreover, I focused on the so-called Kanishka-style stupa as the origin of thirteen-story stone pagodas and provided an overview of thirteen-story stone pagodas built around East Asia, including in Korea. In addition, by consulting Buddhist references, I prove that the thirteen stories symbolize the stages of the practice of asceticism towards enlightenment. In this regard, the number thirteen can be viewed as a special and sacred number to Buddhist devotees. The third chapter explores the Buddhist background of King Sejo's establishment of the Thirteen-story Stone Pagoda of Wongaksa Temple. I studied both the Dictionary of Sanskrit-Chinese Translation of Buddhist Terms (翻譯名義集) (which King Sejo personally purchased in China and published for the first time in Korea) and the Sutra of Perfect Enlightenment. 
King Sejo involved himself in the first translation of the Sutra of Perfect Enlightenment into Korean. The Dictionary of Sanskrit-Chinese Translation of Buddhist Terms was published in the fourteenth century as a type of Buddhist glossary. King Sejo is presumed to have been introduced to the Latter Part of the Nirvana Sutra, the fundamental scripture regarding thirteen-story pagodas, through this dictionary when he set out to erect a pagoda at Wongaksa Temple. King Sejo also enshrined the Newly Translated Sutra of Perfect Enlightenment inside the Wongaksa pagoda as a scripture representing the entire Tripitaka. This enshrined sutra appears to be the vernacular version whose first Korean translation King Sejo took part in. Furthermore, I assert that the original text of the vernacular version is the Abridged Commentary on the Sutra of Perfect Enlightenment (圓覺經略疏) by Zongmi (宗密, 780-841), which differs from what has previously been believed. The final chapter of the paper elucidates the political semantics of the establishment of the Wongaksa pagoda by comparing and examining stone pagodas erected at neungsa (陵寺) or jinjeonsawon (眞殿寺院), types of temples built near the tombs of royal family members during the early Joseon period to protect them. These stone pagodas include the Thirteen-story Pagoda of Gyeongcheonsa Temple, the Stone Pagoda of Gaegyeongsa Temple, the Stone Pagoda of Yeongyeongsa Temple, and the Multi-story Stone Pagoda of Silleuksa Temple. The comparative analysis of these stone pagodas reveals that King Sejo established the Thirteen-story Stone Pagoda at Wongaksa Temple as a political emblem to legitimize his succession to the throne. In this paper, I attempt to better understand the scriptural and political semantics of the Wongaksa pagoda as a thirteen-story pagoda. 
By providing a Korean case study, this attempt will contribute to the understanding of the Buddhist pagoda culture that reached its peak during the late Goryeo and early Joseon periods. It also contributes to research on thirteen-story pagodas in East Asia, which originated with the Kanishka stupa and were based on the Latter Part of the Nirvana Sutra.