
Development of an evaluation tool for dietary guideline adherence in the elderly (노인의 식생활지침 실천 평가도구 개발)

  • Young-Suk Lim;Ji Soo Oh;Hye-Young Kim
    • Journal of Nutrition and Health / v.57 no.1 / pp.1-15 / 2024
  • Purpose: This study aimed to develop a comprehensive tool for assessing dietary guideline adherence among older Korean adults, focusing on the domains of food and nutrient intake, eating habits, and dietary culture. Methods: Candidate items were selected through a literature search and expert advice. The degree of adherence to the dietary guidelines was then evaluated through a face-to-face survey of 800 elderly individuals across five nationwide regions. The items for the dietary guideline adherence evaluation tool were selected through exploratory factor analysis of the candidate items in each of the three areas of the dietary guidelines, and construct validity was verified by confirmatory factor analysis. Using the path coefficients of a structural equation model, weights were assigned to each area and item to calculate the dietary guideline adherence score, and a rating system for the evaluation tool was established based on national survey results. Results: A total of twenty-eight items were selected for evaluating dietary guideline adherence among the elderly: thirteen items related to food intake, seven to eating habits, and eight to dietary culture. The average score for dietary guideline adherence was 56.9 points, with 49.8 points in the food intake area, 63.2 points in the eating habits area, and 58.6 points in the dietary culture area. Statistically significant correlations were found between dietary guideline adherence scores and food literacy (r = 0.679) and nutrition quotient scores (r = 0.750). Conclusion: The developed evaluation tool for dietary guideline adherence among Korean older adults can be used as a simple and effective instrument for comprehensively assessing their food and nutrient intake, dietary habits, and dietary culture.
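The weighted aggregation described in this abstract (path coefficients of the structural equation model used as area and item weights) can be sketched as a simple weighted mean. The weights below are illustrative placeholders, not the published coefficients; only the three area averages come from the abstract.

```python
# Illustrative weighted scoring. The area weights are hypothetical
# stand-ins for the SEM path coefficients used in the study.

def adherence_score(area_scores, area_weights):
    """Combine per-area scores (0-100) into one adherence score."""
    total_weight = sum(area_weights.values())
    return sum(area_scores[a] * area_weights[a] for a in area_scores) / total_weight

# Average area scores reported in the abstract
area_scores = {"food_intake": 49.8, "eating_habits": 63.2, "dietary_culture": 58.6}
# Hypothetical weights standing in for the SEM path coefficients
area_weights = {"food_intake": 0.4, "eating_habits": 0.3, "dietary_culture": 0.3}

print(round(adherence_score(area_scores, area_weights), 1))
```

With equal-ish weights the combined score lands near the reported overall average of 56.9; the published tool derives the weights from the fitted model rather than assuming them.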

Human Mind Within and Beyond the Culture - Toward a Better Encounter between East and West - (문화속의 인간심성과 문화를 넘어선 인간심성 - 동과 서의 보다 나은 만남을 위하여 -)

  • Bou-Yong Rhi
    • Sim-seong Yeon-gu / v.28 no.2 / pp.107-138 / 2013
  • The purpose of this article is to awaken our colleagues to the culture and mind issues that have been forgotten or neglected by contemporary psychiatry under the prevalence of its materialistic orientation. Cultural psychiatry, too, though it has contributed a great deal to widening the mental vision of psychiatry, has revealed several limitations in its approach. In the course of a one-sided search for culture-specific factors related to mental health, conventional cultural psychiatry has neglected to explore the common root underlying different cultures and the common foundation of the human mind. Cross-sectional comparisons of cultures alone have inevitably prevented a global consideration of culture and mind in their historical aspects and a deeper view of the dynamic interactions between mind and culture. The author suggested that the total view of mind and the total approach of the analytical psychology of C.G. Jung might be capable of remedying these limitations. The author explained Jung's observations and experiences of non-Western cultures and his concepts of culture and mind, and demonstrated Jung's view of culture with the example of filial piety (Hyo), the Confucian moral norm, which can be regarded as a component of the collective consciousness, though connected with archetypal patterns of intimacy between parent and child. In regard to the coexistence of multiple religious cultures in Korea, the author proposed a 'culture spectrum' model for understanding the value orientations of persons within religious cultures. In the Korean case, he identified four types of cultural spectrums: persons with a predominantly Buddhist, Confucian, Shamanist, or Christian culture. The author also attempted to depict the dynamic interactions of different religious cultures in the historical perspective of Korea.
Concepts of mind from Eastern thought were reviewed in comparison with Jung's view of mind. The Dao of Lao Zi, the One Mind of Wonhyo, the Korean Zen master of the 7th century, and the Diagram of the Heaven's Decree of Toegye, a renowned Neo-Confucianist of 16th-century Korea, together with his theory of Li-Ki, were explored, leading to the conclusion that they certainly represent the symbol of the Self in the terms of C.G. Jung. The goal of healing is 'becoming a whole person'. Becoming a whole person means bringing the person, as an individual, to live not only within a specific culture but also in the world beyond the culture, which is deeply rooted in the primordial foundation of the human mind.

A Jungian Perspective on 'Spiritual Exercises' of St. Ignatius (이냐시오 '영신수련'에 대한 분석심리학적 고찰)

  • Jung Taek Kim
    • Sim-seong Yeon-gu / v.25 no.1 / pp.27-64 / 2010
  • This article investigates the analytical-psychological implications of the Spiritual Exercises of St. Ignatius of Loyola. The Exercises is referred to not only as the tool of transformation that turned Ignatius from a soldier of the world into a soldier of God and led him to a completely changed life, but also as a tool that galvanizes self-realization, i.e., the individuation process, in which the faithful experience the presence of God in their lives and search for themselves in a new way. Interest in the Exercises, regarded as a Western counterpart of the Yoga of the East as a tool of transformation, led Jung to give a series of 20 lectures on the Exercises in a seminar held in Zurich from 1939 to 1940. Curiosity about Jung's understanding of the Exercises motivated this research. The Exercises is a book of spiritual exercises that prepare and dispose one's soul to rid itself of all disordered attachments and to order one's life. It is made up of four Weeks. The First begins with the 'Principle and Foundation', which illustrates what human beings are created for; it leads retreatants to rid themselves of disordered attachments and to gain a new perspective on life through the consideration and contemplation of sins as the subversion of the Principle and Foundation. In the Second, retreatants accept Christ as the Master of their lives through meditation and contemplation on the life of Christ. In the Third, retreatants take part in the salvation history of Christ not only by actively participating in the Passion of Christ but also by incorporating the Passion into their lives. The Fourth helps retreatants undergo their transformation and experience it deeply in order to participate in the new life of Christ, who by His resurrection overcame death. In conclusion, Jung viewed the Exercises as a Western tool that plays a role similar to that of the Yoga of the East in engendering inner transformation.
The four-week retreat helps retreatants meditate on God, who unifies everything and is Himself/Herself the perfect union or unity, so that imperfect retreatants are given the opportunity to undergo a complete metamorphosis toward the immortal, indivisible, and impeccable God. Jung understood this metamorphosis as leading human beings to totality, that is, to the genuine self as the image of God. The author interprets this transformation, which the Exercises seeks to attain, as resonating with individuation, the key concept of analytical psychology.

An Analysis of Web Services in the Legal Works of the Metropolitan Representative Library (광역대표도서관 법정업무의 웹서비스 분석)

  • Seon-Kyung Oh
    • Journal of the Korean Society for Library and Information Science / v.58 no.2 / pp.177-198 / 2024
  • Article 22(1) of the Library Act, as completely revised in December 2006, stipulated that regional representative libraries are statutory organizations, and Article 25(1) of the Act, as revised again in late 2021, renamed them metropolitan representative libraries and expanded their duties. Cities and provinces are required to designate or establish and operate metropolitan representative libraries because, in addition to their role as public libraries for public information use, cultural activities, and lifelong learning as stipulated in Article 23 of the Act, they are also responsible for the legal works of metropolitan representative libraries as stipulated in Article 26, and they lead the development of libraries and knowledge culture by serving as policy libraries, comprehensive knowledge and information centers, support and cooperation centers, research centers, and joint preservation libraries for all public libraries in their city or province. It is therefore necessary to analyze and diagnose whether the metropolitan representative libraries have faithfully fulfilled their legal works over the past 15 years (2009-2023), and whether they properly provide the results of their statutory planning and implementation on their websites in keeping with the digital and mobile era. Accordingly, this study investigated and analyzed the performance of the metropolitan representative libraries over the last two years against their current statutory tasks, evaluated the extent to which they provide these services through their websites, and suggested complementary measures to strengthen their web services.
The analysis found that the web services for the legal works the metropolitan representative libraries should perform are quite insufficient and inadequate. The study therefore suggested complementary measures such as building a section for legal works on the homepage, enhancing accessibility and visibility through an independent website, providing various policy information and web services (portal search, interlibrary loan, one-to-one consultation, joint database construction, data transfer and preservation, etc.), and ensuring the digital accessibility of knowledge and information for the vulnerable.

The Usefulness of 18F-FDG PET to Differentiate Subtypes of Dementia: The Systematic Review and Meta-Analysis

  • Seunghee Na;Dong Woo Kang;Geon Ha Kim;Ko Woon Kim;Yeshin Kim;Hee-Jin Kim;Kee Hyung Park;Young Ho Park;Gihwan Byeon;Jeewon Suh;Joon Hyun Shin;YongSoo Shim;YoungSoon Yang;Yoo Hyun Um;Seong-il Oh;Sheng-Min Wang;Bora Yoon;Hai-Jeon Yoon;Sun Min Lee;Juyoun Lee;Jin San Lee;Hak Young Rhee;Jae-Sung Lim;Young Hee Jung;Juhee Chin;Yun Jeong Hong;Hyemin Jang;Hongyoon Choi;Miyoung Choi;Jae-Won Jang;Korean Dementia Association
    • Dementia and Neurocognitive Disorders / v.23 no.1 / pp.54-66 / 2024
  • Background and Purpose: Dementia subtypes, including Alzheimer's dementia (AD), dementia with Lewy bodies (DLB), and frontotemporal dementia (FTD), pose diagnostic challenges. This review examines the effectiveness of 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) in differentiating these subtypes for precise treatment and management. Methods: A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was conducted using databases including PubMed and Embase to identify studies on the diagnostic utility of 18F-FDG PET in dementia. The search included studies up to November 16, 2022, focusing on peer-reviewed journals and applying the gold-standard clinical diagnosis for dementia subtypes. Results: From 12,815 articles, 14 were selected for final analysis. For AD versus FTD, the sensitivity was 0.96 (95% confidence interval [CI], 0.88-0.98) and the specificity was 0.84 (95% CI, 0.70-0.92). For AD versus DLB, 18F-FDG PET showed a sensitivity of 0.93 (95% CI, 0.88-0.98) and a specificity of 0.92 (95% CI, 0.70-0.92). Lastly, when differentiating AD from non-AD dementias, the sensitivity was 0.86 (95% CI, 0.80-0.91) and the specificity was 0.88 (95% CI, 0.80-0.91). The studies mostly used case-control designs with visual and quantitative assessments. Conclusions: 18F-FDG PET exhibits high sensitivity and specificity in differentiating dementia subtypes, particularly AD, FTD, and DLB. While not a standalone diagnostic tool, it significantly enhances diagnostic accuracy in uncertain cases, complementing clinical assessments and structural imaging.
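The sensitivity and specificity figures pooled by this meta-analysis are each computed, study by study, from a 2x2 diagnostic table. A minimal sketch, with invented counts chosen to reproduce the AD-versus-FTD point estimates (the pooled CIs come from a meta-analytic model, not from this arithmetic):

```python
# Sensitivity/specificity from a 2x2 diagnostic table.
# The counts below are invented for illustration only.

def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # P(test positive | disease present)
    specificity = tn / (tn + fp)   # P(test negative | disease absent)
    return sensitivity, specificity

# Hypothetical single study: 50 AD patients vs. 50 FTD patients
sens, spec = diagnostic_metrics(tp=48, fn=2, tn=42, fp=8)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# -> sensitivity=0.96, specificity=0.84
```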

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.143-163 / 2016
  • The demographics of Internet users are the most basic and important sources for target marketing and personalized advertisements on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics using online or offline surveys, these approaches are very expensive, take a long time, and are likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data allows us to see what pages users visited, how long they stayed there, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with the demographics, including search keywords; frequency and intensity by time, day, and month; variety of websites visited; and text information from the web pages visited. The demographic attributes to be predicted are also diverse across papers, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, were used for prediction model building. However, prior research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated for building the best prediction model.
The objective of this study is to choose the clickstream attributes most likely to be correlated with the demographics from the results of previous research, and then to identify which data mining method best fits each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, and from the results of previous research, 64 clickstream attributes are applied to predict them. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction of the clickstream variables to address the curse of dimensionality and the overfitting problem; we utilize three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data comprising 5 demographics and 16,962,705 online activities of 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross-validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction performs best when using decision-tree-based dimension reduction with a neural network, whereas the prediction of gender and marital status is most accurate when applying SVM without dimension reduction.
We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used to predict their demographics and can thereby be utilized for digital marketing.
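The last two steps of the pipeline (model building and 5-fold evaluation) can be sketched in plain Python. A nearest-centroid classifier stands in for the SVM / neural network / logistic regression models the study built in SPSS Modeler, and the two "clickstream" features and their class separation are synthetic assumptions.

```python
import random
from statistics import mean

# Sketch of 5-fold cross-validated demographic prediction on synthetic
# clickstream features. Nearest-centroid is a stand-in classifier, not
# the method used in the study.

random.seed(0)

def make_user(gender):
    # Two fake features, e.g. shopping-site visits vs. night-time activity
    base = (3.0, 1.0) if gender == "F" else (1.0, 3.0)
    return ([random.gauss(b, 1.0) for b in base], gender)

data = [make_user("F") for _ in range(50)] + [make_user("M") for _ in range(50)]
random.shuffle(data)

def centroid(rows):
    return [mean(col) for col in zip(*rows)]

def predict(x, centroids):
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda g: dist2(x, centroids[g]))

def cross_validate(data, k=5):
    folds = [data[i::k] for i in range(k)]
    accs = []
    for i in range(k):
        test = folds[i]
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        cents = {g: centroid([x for x, y in train if y == g]) for g in ("F", "M")}
        accs.append(mean(predict(x, cents) == y for x, y in test))
    return mean(accs)

print(f"5-fold accuracy: {cross_validate(data):.2f}")
```

The study's finding that different demographics favor different model/reduction pairs would correspond here to swapping in different classifiers and feature transforms per target attribute and keeping the best-scoring pair.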

Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho;Lee, Donghoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.23-43 / 2016
  • As a result of the growth of Internet data and the rapid development of Internet technology, "big data" analysis has gained prominence as a major approach for evaluating and mining enormous amounts of data for various purposes. In recent years especially, people tend to share their experiences of leisure activities while also reviewing others' input on theirs. By referring to others' leisure-activity experiences, they can gather information that might guarantee them better leisure activities in the future. This phenomenon appears across many aspects of leisure, such as movies, traveling, accommodation, and dining. Apart from blogs and social networking sites, many other websites provide a wealth of information related to leisure activities. Most of these websites present information on each product in various formats, depending on purpose and perspective. Generally, they provide the average ratings and detailed reviews of users who actually used the products or services, and these ratings and reviews can support potential customers' decisions to purchase the same products or services. However, existing websites offering information on leisure activities provide only a single-stage rating and review for a set of evaluation criteria. Therefore, to identify the main issue for each evaluation criterion, as well as the characteristics of the specific elements comprising each criterion, users have to read a large number of reviews. In particular, as most users search for the characteristics of the detailed elements of one or more specific evaluation criteria according to their priorities, they must spend a great deal of time and effort reading and understanding reviews to obtain the desired information.
Although some websites break down the evaluation criteria and direct users to input their reviews at different levels of criteria, the excessive number of input sections makes the whole process inconvenient. Further, problems arise if a user does not follow the instructions for the input sections or fills in the wrong sections. Finally, treating such a breakdown as a realistic alternative is difficult, because identifying all the detailed criteria for each evaluation criterion is a challenging task. For example, in a review of a certain hotel, people tend to write only one-stage reviews for components such as accessibility, rooms, services, or food. These may answer the most frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but they still lack detailed information. In addition, if a breakdown of the evaluation criteria is provided along with various input sections, a user might fill in only the criterion for accessibility, or enter the wrong information, such as information about rooms under the criterion for accessibility; the reliability of the segmented review is then greatly reduced. In this study, we propose an approach to overcome two limitations of existing leisure-activity information websites: (1) the low reliability of reviews for each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up each criterion. In our proposed methodology, we first identify the review content and construct a lexicon for each evaluation criterion using the terms frequently associated with that criterion. Next, the sentences in the review documents containing the terms in the constructed lexicons are decomposed into review units, which are then reorganized by evaluation criterion.
Finally, the issues of the constructed review units are derived by evaluation criterion, and summary results are provided along with the review units themselves. This approach thus aims to save users time and effort, because they read only the information relevant to each evaluation criterion rather than the entire text of every review. Our methodology is based on topic modeling, which is being actively used in text analysis. Each review is decomposed into sentence units rather than treated as a single document unit; after decomposition, the review units are reorganized according to each evaluation criterion and then used in the subsequent analysis. This differs substantially from existing topic modeling-based studies. In this paper, we collected 423 reviews from hotel information websites and decomposed them into 4,860 review units, which we reorganized according to six evaluation criteria. By applying these review units in our methodology, we present the analysis results and demonstrate the utility of the proposed methodology.
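The review-unit construction step described above (split reviews into sentence units, then assign each unit to the evaluation criteria whose lexicon terms it contains) can be sketched as follows. The three criteria and their tiny lexicons are illustrative stand-ins for the six criteria and frequency-derived lexicons used in the paper.

```python
import re
from collections import defaultdict

# Sketch of review-unit construction: sentence-level decomposition plus
# lexicon-based assignment to evaluation criteria. Lexicons are invented.

lexicons = {
    "accessibility": {"subway", "station", "walk", "airport"},
    "room": {"room", "bed", "bathroom", "clean"},
    "service": {"staff", "check-in", "friendly", "service"},
}

def to_review_units(review):
    """Split a review into sentence-level units."""
    return [s.strip() for s in re.split(r"[.!?]+", review) if s.strip()]

def assign_units(review):
    """Map each criterion to the review units containing its lexicon terms."""
    units_by_criterion = defaultdict(list)
    for unit in to_review_units(review):
        words = set(re.findall(r"[\w-]+", unit.lower()))
        for criterion, terms in lexicons.items():
            if words & terms:
                units_by_criterion[criterion].append(unit)
    return dict(units_by_criterion)

review = "The subway station is a five minute walk. The room was clean. Staff were friendly!"
for criterion, units in assign_units(review).items():
    print(criterion, "->", units)
```

In the paper the reorganized units then feed a topic model per criterion to surface issues; here the assignment step alone is shown.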

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.47-67 / 2017
  • Steel plate faults are among the important factors affecting the quality and price of steel plates. To date, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, with judgment errors above 30%. Accurate steel plate fault diagnosis systems have therefore been continuously demanded in the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis-Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification due to its low accuracy, which stems from the fact that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, data on steel plate faults is collected and used to establish an individual Mahalanobis space for each reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type signal-to-noise (SN) ratio are applied for variable optimization.
The overall SN ratio gain is then derived from the SN ratio and SN ratio gain. If the derived overall SN ratio gain is negative, the variable should be removed; a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale is reconstructed from the selected useful variables, and an experimental test is implemented to verify the multi-class classification ability and obtain the classification accuracy. If the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study was taken from the University of California, Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based diagnosis system shows a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in industry. In addition, the variable optimization process allows the proposed system to reduce the number of measurement sensors installed in the field. These results show that the proposed system not only performs well on steel plate fault diagnosis but can also reduce operation and maintenance costs.
In future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
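The core S-MTS idea (one Mahalanobis space per fault class, with a new sample labeled by the smallest Mahalanobis distance) can be sketched in plain Python for two features. The two fault classes and their toy reference samples are invented; the study used the UCI steel-plates dataset and adds the orthogonal-array/SN-ratio variable optimization not shown here.

```python
from statistics import mean

# Per-class Mahalanobis spaces and minimum-distance classification,
# the core of S-MTS, sketched for 2 features with toy reference data.

def mahalanobis_space(samples):
    """Mean vector and inverse covariance matrix for 2-feature samples."""
    mx, my = mean(p[0] for p in samples), mean(p[1] for p in samples)
    n = len(samples) - 1
    sxx = sum((p[0] - mx) ** 2 for p in samples) / n
    syy = sum((p[1] - my) ** 2 for p in samples) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in samples) / n
    det = sxx * syy - sxy * sxy
    inv = ((syy / det, -sxy / det), (-sxy / det, sxx / det))
    return (mx, my), inv

def mahalanobis_sq(x, space):
    """Squared Mahalanobis distance of point x to a class space."""
    (mx, my), ((a, b), (c, d)) = space
    dx, dy = x[0] - mx, x[1] - my
    return dx * (a * dx + b * dy) + dy * (c * dx + d * dy)

def classify(x, spaces):
    """'Simultaneous' step: compare distances to all class spaces at once."""
    return min(spaces, key=lambda cls: mahalanobis_sq(x, spaces[cls]))

# Toy reference groups for two hypothetical fault classes
spaces = {
    "scratch": mahalanobis_space([(1.0, 2.0), (1.2, 2.1), (0.8, 1.8), (1.1, 2.2)]),
    "stain":   mahalanobis_space([(4.0, 0.5), (4.2, 0.4), (3.9, 0.7), (4.1, 0.6)]),
}
print(classify((1.05, 2.0), spaces))  # -> scratch
```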

Influence analysis of Internet buzz to corporate performance : Individual stock price prediction using sentiment analysis of online news (온라인 언급이 기업 성과에 미치는 영향 분석 : 뉴스 감성분석을 통한 기업별 주가 예측)

  • Jeong, Ji Seon;Kim, Dong Sung;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.37-51 / 2015
  • Due to the development of Internet technology and the rapid increase of Internet data, various studies are actively being conducted on how to use and analyze Internet data for various purposes. In particular, a number of recent studies have applied text mining techniques in order to overcome the limitations of analyses confined to structured data. Notable among them are studies on sentiment analysis, which scores opinions based on the distribution of polarity, such as the positivity or negativity of the vocabulary or sentences in documents. As part of such work, this study tries to predict the ups and downs of companies' stock prices by performing sentiment analysis on Internet news about those companies. A variety of news on companies is produced online by different economic agents and is diffused quickly and accessed easily on the Internet. So, on the hypothesis that markets are not fully efficient, we can expect that news about an individual company can be used to predict fluctuations in its stock price if proper data analysis techniques are applied. However, as companies' areas of business activity differ, machine learning-based analysis of text data must consider the characteristics of each company. In addition, since news containing positive or negative information about certain companies has varying impacts on other companies or industry sectors, an analysis predicting the stock price of each individual company is necessary. Therefore, this study attempted to predict changes in the stock prices of individual companies by applying sentiment analysis to online news data. The study chose top companies in the KOSPI 200 as subjects of the analysis, and collected and analyzed two years of online news data for each company from Naver, a representative domestic search portal service.
In addition, considering that vocabularies carry different meanings for different economic subjects, the study aims to improve performance by building a lexicon for each individual company and applying it to the analysis. As a result, prediction accuracy differed by company, with an average accuracy of 56%. Comparing prediction accuracy across industry sectors, 'energy/chemical', 'consumer goods for living', and 'consumer discretionary' showed relatively higher stock price prediction accuracy, while sectors such as 'information technology' and 'shipbuilding/transportation' showed lower accuracy. Since only five representative companies were collected for each industry, it is somewhat difficult to generalize, but a difference in prediction accuracy across industry sectors could be confirmed. At the individual company level, companies such as Kangwon Land, KT&G, and SK Innovation showed relatively high prediction accuracy, while companies such as Young Poong, LG, Samsung Life Insurance, and Doosan had prediction accuracy below 50%. In this paper, we aimed to improve stock price prediction performance by applying online news information through pre-built company-specific lexicons and predicting the stock prices of individual companies. Based on this, it will be possible in the future to increase prediction accuracy by addressing the problem of unnecessary words being added to the sentiment dictionary.
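The per-company lexicon idea reduces, at its simplest, to scoring a news item by its polarity terms and mapping the sign of the score to an up/down call. Both the lexicon entries and the headlines below are invented for illustration; the study's company-specific dictionaries were built from the collected news corpus.

```python
import re

# Bare-bones lexicon-based direction call. Lexicon and headlines are
# hypothetical; real lexicons in the study are company-specific.

lexicon = {
    "record": 1, "growth": 1, "upgrade": 1, "profit": 1,
    "lawsuit": -1, "recall": -1, "loss": -1, "downgrade": -1,
}

def predict_direction(headline):
    """Sum polarity of known terms; sign of the total is the call."""
    words = re.findall(r"[a-z]+", headline.lower())
    score = sum(lexicon.get(w, 0) for w in words)
    return "up" if score > 0 else "down" if score < 0 else "flat"

print(predict_direction("Analysts upgrade outlook after record profit"))  # -> up
print(predict_direction("Shares fall on recall and quarterly loss"))      # -> down
```

A company-specific dictionary matters because the same term (e.g. "recall") can be routine in one sector and damaging in another, which is the motivation the abstract gives for per-company lexicons.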

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.79-92 / 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating national R&D data and assisting users in navigating the integrated data via a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data is integrated with the research project at its center; that is, other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology represents the simple relationships in the integrated data, such as project-output relationships, document-author relationships, and document-topic relationships. The knowledge map then enables us to infer further relationships, such as co-author and co-topic relationships. To extract the relationships from the integrated data, a relational-data-to-triples transformer is implemented, and a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: those used in knowledge management to store, manage, and process an organization's data as knowledge, and those for analyzing and representing knowledge extracted from science and technology documents. This research focuses on the latter. It introduces a knowledge map service for integrating national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map.
Using a lightweight ontology enables us to represent and process knowledge as a simple network, which fits the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected through the national R&D data by author and performer relationships. A knowledge map for displaying researchers' networks is created, where the network is formed by the co-authoring relationships of the national R&D documents and the co-participation relationships of the national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about the national R&D data, including research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system's goals are (1) to integrate the national R&D data obtained from NDSL and NTIS, (2) to provide semantic and topic-based information search on the integrated data, and (3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information, such as research papers, research reports, patents, and GTB data, is updated daily from NDSL, while the R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and integrated into an integrated database. The knowledge base is then constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them.
The topic modeling approach enables us to extract relationships and topic keywords based on semantics rather than simple keyword matching. Lastly, we present an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and introduce the knowledge map services created on top of the knowledge base.
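The triple representation and the inferred co-author relationships described above can be sketched without a triple store: records become (subject, predicate, object) tuples, and co-author pairs fall out of joining the author triples that share a document. The projects, papers, predicates, and names are invented examples, not NDSL/NTIS identifiers.

```python
from collections import defaultdict

# Sketch of the knowledge-map data model: project-output, document-author,
# and document-topic relationships as triples, with co-author inference.
# All identifiers and predicate names below are illustrative.

triples = [
    ("project:P1", "hasOutput", "paper:A"),
    ("project:P1", "hasOutput", "patent:B"),
    ("paper:A", "hasAuthor", "Kim"),
    ("paper:A", "hasAuthor", "Lee"),
    ("patent:B", "hasAuthor", "Lee"),
    ("paper:A", "hasTopic", "topic:ontology"),
]

def objects(subject, predicate):
    """All objects linked to a subject by a predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def co_authors():
    """Infer co-author pairs from hasAuthor triples sharing a document."""
    by_doc = defaultdict(set)
    for s, p, o in triples:
        if p == "hasAuthor":
            by_doc[s].add(o)
    pairs = set()
    for authors in by_doc.values():
        for a in authors:
            for b in authors:
                if a < b:
                    pairs.add((a, b))
    return pairs

print(objects("project:P1", "hasOutput"))
print(co_authors())
```

In the paper this join is performed over the triple store built by the relational-data-to-triples transformer, with topic triples supplied by the topic model rather than entered by hand.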