• Title/Summary/Keyword: Selection Analysis


Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction fall into model-design studies for predicting cardiovascular disease and studies comparing disease-prediction results. In the foreign literature, studies predicting cardiovascular disease were predominant among those using data mining techniques. Domestic studies were not much different, but focused mainly on hypertension and diabetes. Since hyperlipidemia is a chronic disease of as high importance as hypertension and diabetes, this study selected hyperlipidemia as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are known to have excellent predictive power. To achieve this purpose, we used the 2012 Korea Health Panel data set. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has been surveyed annually since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the hospitalization, outpatient, emergency, and chronic-disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression. Among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables, treated as separate variables relative to a reference group, and analyzed.
Six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at the 0.1 significance level. Second, C4.5, a decision tree algorithm, was used; the significant input variables were age, smoking status, and education level. Finally, a genetic algorithm was used. For SVM, the input variables selected by the genetic algorithm were six: age, marital status, education level, economic activity, smoking period, and physical activity status; for the artificial neural network, three: age, marital status, and education level. Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia patients, comparing classification performance using TP rate and precision. The main results are as follows. First, the accuracy of the SVM was 88.4% and that of the artificial neural network was 86.7%. Second, the accuracy of classification models using the input variables selected by the stepwise method was slightly higher than that of models using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three variables selected by the decision tree were used as inputs. For models based on the input variables selected by the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%. Finally, this study found that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of the SVM and MLP as input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases.
To do this, we used SVM and meta-learning algorithms, which are known to have high accuracy. As a result, the classification accuracy of stacking as a meta-learner was higher than that of other meta-learning algorithms. However, the predictive performance of the proposed meta-learning algorithm was the same as that of the SVM, the best-performing single model (88.6%). The limitations of this study are as follows. First, although various variable-selection methods were tried, most variables used in the study were categorical dummy variables. With many categorical variables, the results may differ if continuous variables are used, because models suited to categorical variables, such as decision trees, can fit the data better than models such as neural networks. Despite these limitations, this study has significance in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously. The improvement in model accuracy achieved by applying various variable-selection techniques is also meaningful. In addition, we expect the proposed model to be effective for the prevention and management of hyperlipidemia.
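
The stacking setup the abstract describes (SVM and MLP base learners whose predictions feed an SVM meta-classifier) can be sketched as follows. This is an illustrative reconstruction on synthetic data using scikit-learn, not the authors' code or the Korea Health Panel variables.

```python
# Hypothetical sketch of the paper's stacking scheme: SVM + MLP base
# learners, SVM as the meta-classifier. Data and parameters are assumed.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# stand-in for the 2,176-person patient/non-patient sample
X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=0)),
                ("mlp", MLPClassifier(max_iter=1000, random_state=0))],
    final_estimator=SVC(),   # SVM as the meta-classifier, as in the paper
    cv=5,                    # out-of-fold base predictions avoid leakage
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The `cv` argument makes the meta-classifier train on out-of-fold predictions of the base learners, which is the standard way to keep stacking from overfitting.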

A study on the Success Factors and Strategy of Information Technology Investment Based on Intelligent Economic Simulation Modeling (지능형 시뮬레이션 모형을 기반으로 한 정보기술 투자 성과 요인 및 전략 도출에 관한 연구)

  • Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.35-55
    • /
    • 2013
  • Information technology is a critical resource for any company hoping to support and realize its strategic goals, which contribute to growth promotion and sustainable development. The selection of information technology and its strategic use are imperative for enhanced performance in every aspect of company management, leading a wide range of companies to invest continuously in information technology. Despite the keen interest of researchers, managers, and policy makers in how information technology contributes to organizational performance, there is uncertainty and debate about the results of information technology investment. In other words, researchers and managers cannot easily identify the independent factors that affect the investment performance of information technology, mainly because many factors, ranging from a company's internal components and strategies to its external customers, are interconnected with that performance. Using an agent-based simulation technique, this research extracts factors expected to affect the investment performance of information technology, simplifies the analysis of their relationships with economic modeling, and examines how performance depends on changes in the factors. In terms of economic modeling, I expand the model in which product quality moderates the relationship between information technology investments and economic performance (Thatcher and Pingry, 2004) by considering the cost of information technology investment and the demand created by product-quality enhancement. For quality enhancement and its consequences for demand creation, I apply the concept of information quality and decision-maker quality (Raghunathan, 1999).
This concept implies that investment in information technology improves the quality of information, which in turn improves decision quality and performance, thus enhancing the level of product or service quality. Additionally, I consider the effect of word of mouth among consumers, which creates new demand for a product or service through the information diffusion effect. This demand creation is analyzed with an agent-based simulation model of the kind widely used for network analyses. Results show that investment in information technology enhances the quality of a company's product or service, which indirectly affects the company's economic performance, particularly consumer surplus, company profit, and company productivity. Specifically, when a company makes its initial investment in information technology, the resulting increase in product or service quality immediately has a positive effect on consumer surplus, but the investment cost has a negative effect on company productivity and profit. As time goes by, the quality enhancement creates new consumer demand through the information diffusion effect, and this new demand positively affects the company's profit and productivity. In terms of investment strategy, the results also reveal that the selection of information technology should be based on an analysis of the service and the network effect among customers, and demonstrate that information technology implementation should fit the company's business strategy. Specifically, a company seeking short-term enhancement of performance needs a one-shot strategy (making a large investment at one time), whereas a company seeking a long-term sustainable profit structure needs a split strategy (making several small investments at different times).
The findings from this study make several contributions to the literature. In terms of methodology, the study integrates both economic modeling and simulation technique in order to overcome the limitations of each methodology. It also indicates the mediating effect of product quality on the relationship between information technology and the performance of a company. Finally, it analyzes the effect of information technology investment strategies and information diffusion among consumers on the investment performance of information technology.
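
The word-of-mouth demand creation analyzed above can be illustrated with a minimal agent-based sketch. All parameters here (network size, quality level, adoption rule) are assumptions for illustration, not the paper's calibrated model.

```python
# Minimal agent-based diffusion sketch: an IT investment raises product
# quality, and adopters spread word of mouth over a random network,
# creating new demand over time. All numbers are illustrative.
import random

random.seed(1)
N = 200                                   # consumer agents
neighbors = {i: random.sample(range(N), 4) for i in range(N)}
adopted = {i: False for i in range(N)}
for i in random.sample(range(N), 5):      # initial demand from quality jump
    adopted[i] = True

quality = 0.6                             # quality level after the investment
history = []
for t in range(30):                       # diffusion over time
    for i in range(N):
        if not adopted[i]:
            # word of mouth: adoption chance grows with adopting neighbors
            k = sum(adopted[j] for j in neighbors[i])
            if random.random() < quality * k / 10:
                adopted[i] = True
    history.append(sum(adopted.values()))
```

Tracking `history` over the run reproduces the qualitative pattern the paper describes: an initial seed of demand from the quality increase, followed by gradual diffusion-driven growth.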

Major Class Recommendation System based on Deep learning using Network Analysis (네트워크 분석을 활용한 딥러닝 기반 전공과목 추천 시스템)

  • Lee, Jae Kyu;Park, Heesung;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.95-112
    • /
    • 2021
  • In university education, the choice of major classes plays an important role in students' careers. However, in line with changes in industry, the major subjects offered by each department are diversifying and increasing in number. As a result, students have difficulty choosing and taking classes that match their career paths. In general, students choose classes based on experience, such as the choices of peers or advice from seniors. This has the advantage of reflecting the general situation, but it does not reflect individual tendencies or the content of existing courses, and it creates information inequality, since the information is shared only among specific students. In addition, as classes have recently been conducted remotely and exchanges between students have decreased, even experience-based decisions have become harder to make. This study therefore proposes a recommendation system model that can recommend college major classes suited to individual characteristics based on data rather than experience. A recommendation system recommends information and content (music, movies, books, images, etc.) that a specific user may be interested in. Such systems are already widely used in services where individual tendencies matter, such as YouTube and Facebook, and are familiar from the personalized services of over-the-top (OTT) media platforms. Class selection is also a kind of content consumption, in that classes suited to an individual are selected from a fixed content list. Unlike other content consumption, however, the selection results carry large consequences. Music and movies, for example, are usually consumed once and the time required to consume them is short, so each item's importance is relatively low and selection requires little deliberation.
Major classes, by contrast, usually have a long consumption time, since they must be taken for a full semester, and each item has high importance and requires greater caution, because the composition of the selected classes affects careers and graduation requirements. Given these characteristics, a recommendation system in the education field supports decision-making that reflects individual characteristics which experience-based decisions cannot, even though the item range is relatively small. This study aims to realize personalized education and enhance students' educational satisfaction by presenting a recommendation model for university major classes. The model used class history data of undergraduate students at the university from 2015 to 2017, with students and their major names as metadata. The class history is implicit-feedback data that only indicates whether content was consumed, without reflecting preferences for classes, so embedding vectors derived directly from it have low expressive power. With this in mind, this study proposes a Net-NeuMF model that generates student and class vectors through network analysis and uses them as input values of the model. The model is based on the structure of NeuMF with one-hot vectors, a representative model for implicit-feedback data, but its input vectors are generated through network analysis to represent the characteristics of students and classes. To generate a vector representing students, each student is set as a node, with a weighted edge connecting two students who take the same class. Similarly, to generate a vector representing classes, each class is set as a node, with an edge connecting two classes that any student has taken in common.
We then use Node2Vec, a representation-learning method that quantifies the characteristics of each node. For evaluation, we used four indicators commonly applied to recommendation systems, and experiments were conducted on three different embedding dimensions to analyze their impact on the model. The results show better performance on the evaluation metrics, regardless of dimension, than the one-hot vectors of the existing NeuMF structure. This work thus contributes a network of students (users) and classes (items) that increases expressiveness over one-hot embeddings, matches the characteristics of each structure that constitutes the model, and shows better performance than existing methodologies on various evaluation metrics.
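
The graph construction step described above can be sketched as follows. The student and class names are hypothetical; in the paper, the resulting weighted edges would then feed Node2Vec to produce the input embeddings.

```python
# Sketch of building the two co-enrollment graphs: students are linked
# with a weight equal to the number of classes taken together, and
# classes are linked when some student took both. Toy data, assumed names.
from collections import Counter
from itertools import combinations

# toy class history: student -> set of classes taken
history = {
    "s1": {"DB", "AI", "Stats"},
    "s2": {"DB", "AI"},
    "s3": {"Stats", "Networks"},
}

# student graph: edge weight = number of co-taken classes
student_edges = Counter()
for a, b in combinations(sorted(history), 2):
    shared = len(history[a] & history[b])
    if shared:
        student_edges[(a, b)] = shared

# class graph: edge weight = number of students who took both classes
class_edges = Counter()
for taken in history.values():
    for c1, c2 in combinations(sorted(taken), 2):
        class_edges[(c1, c2)] += 1
```

Each `Counter` is a weighted edge list, which is the usual input format for a Node2Vec implementation.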

Optimum Size Selection and Machinery Costs Analysis for Farm Machinery Systems - Programming for Personal Computer - (농기계(農機械) 투입모형(投入模型) 설정(設定) 및 기계이용(機械利用) 비용(費用) 분석연구(分析硏究) - PC용(用) 프로그램 개발(開發) -)

  • Lee, W.Y.;Kim, S.R.;Jung, D.H.;Chang, D.I.;Lee, D.H.;Kim, Y.H.
    • Journal of Biosystems Engineering
    • /
    • v.16 no.4
    • /
    • pp.384-398
    • /
    • 1991
  • A computer program was developed to select the optimum sizes of farm machines and analyze their operating costs under various farming conditions. It was written in FORTRAN 77 and BASIC and can run on any personal computer supporting Korean Standard Complete Type and the Korean Language Code. The program was developed to be user-friendly, so that users can easily carry out cost analyses for the whole farm work or individual operations in rice production, and for plowing, rotary tilling, and pest control in upland farming. The program can simultaneously analyze three different machines for plowing and rotary tilling, and two machines for transplanting, pest control, and harvesting. The input data are the sizes of arable lands, the possible working days and number of laborers during the optimum working period, and custom rates, which vary by region and individual farming conditions. The outputs include the selected optimum combination of farm machines, the surplus or shortage of working days relative to the planned working period, machine capacities, break-even points by custom rate, monthly fixed costs, and utilization costs per hectare.
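
The break-even point by custom rate that the program reports can be illustrated with a minimal sketch. The cost figures below are assumed for illustration and are not taken from the paper's FORTRAN/BASIC routines.

```python
# Break-even sketch (assumed numbers): owning a machine pays off once the
# area worked per year makes ownership cost per hectare drop below the
# custom (hired-work) rate.
fixed_cost = 3_000_000      # won/year: depreciation, interest, housing
variable_cost = 50_000      # won/ha: fuel, labor, repairs
custom_rate = 150_000       # won/ha charged by a custom operator

# ownership cost per hectare equals the custom rate at the break-even area
break_even_ha = fixed_cost / (custom_rate - variable_cost)

def cheaper_to_own(area_ha):
    own = fixed_cost + variable_cost * area_ha
    hire = custom_rate * area_ha
    return own < hire
```

With these assumed figures the break-even area is 30 ha: below it, hiring custom work is cheaper; above it, ownership wins.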


A Complexity Reduction Method of MPEG-4 Audio Lossless Coding Encoder by Using the Joint Coding Based on Cross Correlation of Residual (여기신호의 상관관계 기반 joint coding을 이용한 MPEG-4 audio lossless coding 인코더 복잡도 감소 방법)

  • Cho, Choong-Sang;Kim, Je-Woo;Choi, Byeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.3
    • /
    • pp.87-95
    • /
    • 2010
  • Portable multimedia products that deliver the highest audio quality using lossless audio codecs have been released, and the international lossless codecs MPEG-4 audio lossless coding (ALS) and MPEG-4 scalable lossless coding (SLS) were standardized by MPEG in 2006. The simple profile of MPEG-4 ALS, which supports up to stereo, was defined by MPEG in 2009. A lossless audio codec should have low complexity in stereo to be widely used in portable multimedia products, but previous research on MPEG-4 ALS has focused on improving the compression ratio, reducing complexity in multi-channel coding, and selecting the linear prediction coefficient (LPC) order. In this paper, the complexity and compression ratio of the MPEG-4 ALS encoder are analyzed for the simple profile, and a method to reduce the encoder's complexity is proposed. Based on the complexity analysis, the complexity of the encoder's short-term prediction filter is reduced by using the low-complexity filter proposed in previous research for reducing the complexity of the MPEG-4 ALS decoder. We also propose a joint-coding decision method that reduces complexity while maintaining the encoder's compression ratio: whether to perform joint coding is decided from the relation between the cross-correlation of the residuals and the compression ratio of joint coding. The performance of an MPEG-4 ALS encoder with the proposed method and the low-complexity filter was evaluated using the MPEG-4 ALS conformance test files and normal music files. The encoder's complexity is reduced by about 24% compared with the MPEG-4 ALS reference encoder, while the compression ratio remains comparable to that of the reference encoder.
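
The proposed joint-coding decision can be sketched as follows. The threshold and function names are assumptions for illustration; the abstract only states that the decision is based on the cross-correlation of the stereo residuals.

```python
# Sketch of a correlation-gated joint-coding decision: difference coding
# of the stereo residuals is only attempted when the channels' normalized
# cross-correlation is high, saving encoder work. Threshold is assumed.
import math

def norm_xcorr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def use_joint_coding(left_res, right_res, threshold=0.8):
    # only spend time on joint coding when channels are strongly related
    return abs(norm_xcorr(left_res, right_res)) >= threshold

left = [1.0, 2.0, 3.0, 4.0]
similar = [1.1, 2.1, 2.9, 4.2]       # highly correlated channel
noise = [3.0, -1.0, 2.0, 0.5]        # unrelated channel
```

Skipping the joint-coding pass when the correlation is low is what removes work from the encoder without hurting the compression ratio, since poorly correlated channels would not have compressed better jointly anyway.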

Adjuvant Postoperative Radiation Therapy for Carcinoma of the Uterine Cervix (자궁경부암의 수술 후 방사선치료)

  • Lee Kyung-Ja;Moon Hye Seong;Kim Seung Cheol;Kim Chong Il;Ahn Jung Ja
    • Radiation Oncology Journal
    • /
    • v.21 no.3
    • /
    • pp.199-206
    • /
    • 2003
  • Purpose: This study was undertaken to evaluate the efficacy of postoperative radiotherapy, and to investigate the prognostic factors, for FIGO stage IB-IIB cervical cancer patients who were treated with simple hysterectomy, or who had high-risk factors following radical hysterectomy and pelvic lymph node dissection. Materials and Methods: Between March 1986 and December 1998, 58 patients with FIGO stage IB-IIB cervical cancer were included in this study. The indications for postoperative radiation therapy were based on pathological findings, including lymph node metastasis, positive surgical margin, parametrial extension, lymphovascular invasion, invasion of more than half the cervical stroma, uterine extension, and the incidental finding of cervical cancer following simple hysterectomy. All patients received external pelvic radiotherapy, and 5 patients received additional intracavitary radiation therapy. The external-beam dose to the whole pelvis was 40~50 Gy. Vaginal cuff irradiation was performed after completion of the external beam irradiation, at a low dose rate with Cs-137, to a total dose of 4,488~4,932 cGy (median 4,500 cGy) at 5 mm depth from the vaginal surface. The median follow-up period was 44 months (15~108 months). Results: The 5-year actuarial local control, distant metastasis-free survival, and disease-free survival rates were 98%, 95%, and 94%, respectively. Univariate analysis of the clinical and pathological parameters revealed that clinical stage (p=0.0145), status of the vaginal resection margin (p=0.0002), and parametrial extension (p=0.0001) affected disease-free survival. In multivariate analysis, only parametrial extension independently influenced disease-free survival. Five patients (9%) experienced grade 2 late treatment-related complications, such as radiation proctitis (1 patient), cystitis (3 patients), and lymphedema of the leg (1 patient). No patient had grade 3 or 4 complications.
Conclusion: Our results indicate that postoperative radiation therapy can achieve good local control and survival rates for patients with stage IB-IIB cervical cancer treated with simple hysterectomy, as well as for those treated with radical hysterectomy who have unfavorable pathological findings. The prognostic factor for disease-free survival was invasion of the parametrium. The prognostic factor for treatment failure identified in this study can be used as a selection criterion for combined radiation and chemotherapy.

Relationships between Geographical Conditions and Distribution Pattern of Plant Species on Uninhabited Islands in Korea (우리나라 無人島嶼의 地理的 還境과 植物의 分布 pattern 사이의 相關性 分析)

  • 정재민;홍경낙
    • The Korean Journal of Ecology
    • /
    • v.25 no.5
    • /
    • pp.341-348
    • /
    • 2002
  • Correlations among island area, distance to the mainland, latitude, longitude, human impacts, and the diversity and composition of vascular plants were investigated by analyzing data on 261 islands (10.3% of the total number of islands in Korea), selected from the annual reports of 'the natural environment survey of the uninhabited islands in Korea' published by the Ministry of Environment over three years from 1999. The areas of the 261 surveyed islands ranged from 1,100 to 961,000 ㎡ (average 75,000 ㎡), and the distance to the mainland ranged from 0.15 to 51.5 km (average 14.9 km). A total of 1,109 plant species across 30 families were recorded on these islands, and the mean number of plant species per island was 98.7. Native species numbered 1,003 (90.4%) and exotic species 106 (9.6%). The family with the largest number of species was the Compositae, with 114 species, followed by the Gramineae (90), Leguminosae (54), and Rosaceae (53). Multi-dimensional scaling analysis based on plant species composition showed that the 261 islands divided distinctly into two groups: a western-sea group (131 islands) and a southern-sea group (130 islands). The islands of the western-sea group (average area 93,000 ㎡) were considerably larger than those of the southern-sea group (average area 57,000 ㎡), but their average number of species per island (192) was lower than in the southern-sea group (233). The partition into two groups was driven by species restricted to the southern rather than the western group. These results suggest that the distribution pattern and composition of plant species are also affected by island latitude.
When the species-area model was applied to all islands and plant species, island area was the most significant predictor of plant species diversity, and distance to the mainland and human impacts were also significant predictors of species richness. When the model was applied to the two island groups separately using stepwise selection, the southern-sea group was affected more strongly by human impacts, distance to the mainland, and longitude than the western-sea group. For the conservation of natural ecosystems on the uninhabited islands of Korea, we will also examine how human impacts and the invasion of exotic plant species disturb native species diversity.
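
The species-area model referred to above is conventionally the power law S = cA^z, fitted as a straight line in log-log space. The sketch below uses toy numbers, not the survey data.

```python
# Fit the species-area relation S = c * A^z by least squares in log-log
# space. Areas and species counts are illustrative, not the 261 islands.
import math

areas = [1_000, 10_000, 100_000, 1_000_000]   # island area, m^2
species = [20, 45, 90, 200]                   # species counts

xs = [math.log10(a) for a in areas]
ys = [math.log10(s) for s in species]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# slope z measures how strongly richness scales with area
z = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
log_c = my - z * mx

def predict_species(area):
    return 10 ** (log_c + z * math.log10(area))
```

The fitted slope z is the "significance of area as a predictor" in miniature; adding distance to the mainland or human-impact scores as further regressors gives the multiple/stepwise version the abstract describes.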

Development of Work-related Musculoskeletal Disorder Questionnaire Using Receiver Operating Characteristic Analysis (Receiver Operating Characteristic 분석법을 이용한 업무관련성 근골격계질환 설문지 개발)

  • Kwon, Ho-Jang;Ju, Yeong-Su;Cho, Soo-Hun;Kang, Dae-Hee;Sung, Joo-Hon;Choi, Seong-Woo;Choi, Jae-Wook;Kim, Jae-Young;Kim, Don-Gyu;Kim, Jai-Yong
    • Journal of Preventive Medicine and Public Health
    • /
    • v.32 no.3
    • /
    • pp.361-373
    • /
    • 1999
  • Objectives: The Receiver Operating Characteristic (ROC) curve, with the area under the curve (AUC), is one of the most popular indicators for evaluating the criterion validity of a measurement tool. This study was conducted to develop a standardized questionnaire to discriminate workers at high risk of work-related musculoskeletal disorders using ROC analysis. Methods: Diagnoses made by rehabilitation medicine specialists in 370 persons (89 shipyard CAD workers, 113 telephone directory assistance operators, 79 employed women, and 89 housewives) were compared with the participants' own replies to 'the questionnaire on the worker's subjective physical symptoms' (Kwon, 1996). The AUCs from four models with different item-selection and weighting methods were compared with each other. These four models were then applied to 225 persons working on a motor vehicle assembly line to test the reliability of the AUC. Results: In a weighted model with 11 items, the AUC was 0.8155 in the primary study population and 0.8026 in the secondary study population (p=0.3780). This model was superior in discriminability, reliability, and convenience, and a new musculoskeletal disorder questionnaire could be constructed from it. Conclusions: A more valid questionnaire with a small number of items, and quantitative weight scores useful for relative comparisons, are the main results of this study. Although an absolute reference value applicable to a wide range of populations was not estimated, the basic intent of this study, developing a surveillance tool through quantitative validation of the measures, would serve systematic disease-prevention activities.
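
The AUC underlying the ROC comparison has a simple probabilistic reading: it equals the probability that a randomly chosen case outscores a randomly chosen non-case (the Mann-Whitney formulation). The scores below are hypothetical, not the questionnaire data.

```python
# AUC as the Mann-Whitney probability: fraction of (case, control) pairs
# where the case's questionnaire score is higher, ties counting half.
def auc(case_scores, control_scores):
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

cases = [8, 7, 6, 5]        # symptom scores of diagnosed workers (toy)
controls = [6, 4, 3, 1]     # scores of workers without a diagnosis (toy)
```

An AUC of 0.5 means the questionnaire discriminates no better than chance; values like the paper's 0.81 mean a diagnosed worker outscores an undiagnosed one about four times out of five.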


Analysis of the Relationships Between ESD and DAP, and Image SNR·CNR According to the Frame Change of Cine Imaging in CAG : With Focus on 10 f/s and 15 f/s (심장혈관 조영술에서 씨네(cine)촬영의 프레임변화에 따른 ESD와 DAP 및 영상의 SNR·CNR 관계 분석: 10f/s과 15f/s을 중심으로)

  • Jung, Myo-Young;Seo, Young-Hyun;Song, Jong-Nam;Han, Jae-Bok
    • Journal of the Korean Society of Radiology
    • /
    • v.12 no.5
    • /
    • pp.669-675
    • /
    • 2018
  • This study aimed to investigate the difference in X-ray exposure by comparing and analyzing the entrance surface dose and absorbed dose according to the frame-rate change in coronary angiography using an X-ray machine. In addition, by measuring and analyzing the SNR and CNR of the images with ImageJ, we sought appropriate frame-rate selection criteria, including the effect of the frame change on image quality. The study included 30 patients (19 males and 11 females) who underwent CAG at this hospital from June 2017 to October 2017. Their age range was 49-82 years (mean 65±9 years), body weight 45-91 kg (mean 67±8.9 kg), height 150-179 cm (mean 165.1±8.9 cm), and BMI 19.5-30.5 (mean 24.5±2.9). For the entrance surface dose and absorbed dose, the air kerma value and DAP were obtained and analyzed retrospectively. The SNR and CNR were measured through ImageJ, and the result values were derived by applying the measurements to the formulas. For the statistical analyses, the correlations between the entrance surface dose and absorbed dose, and between the SNR and CNR, were analyzed using the SPSS statistical program. The relationship between the entrance surface dose and absorbed dose was not statistically significant for either 10 f/s or 15 f/s (p>0.05). The SNR (3.374±2.1297) and CNR (0.234±0.2249) at 10 f/s were lower by 1.43±0.4861 and 0.132±0.0555, respectively, than the SNR (4.929±2.8532) and CNR (0.391±0.3025) at 15 f/s, differences that were not statistically significant (p>0.05). In the correlation analysis, statistically significant results were obtained among BMI, air kerma, and DAP; between air kerma and DAP; and between SNR and CNR (p<0.001).
In conclusion, there was no significant difference between the entrance surface dose and absorbed dose even when the images were taken at 15 f/s instead of 10 f/s during coronary angiography. The SNR and CNR were higher at 15 f/s than at 10 f/s, but not to a statistically significant degree. This study therefore suggests that the concern of patients and practitioners about image-quality degradation at 10 f/s, as well as the problem of X-ray exposure from imaging at 15 f/s, may be reduced.
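
The SNR and CNR measured through ImageJ are typically ratios of region-of-interest (ROI) statistics. The sketch below uses toy pixel values and assumes the common mean/standard-deviation definitions, which the abstract does not spell out.

```python
# SNR/CNR from ROI statistics (toy pixel values, assumed definitions):
# SNR = signal mean / noise std; CNR = (signal - background) / noise std.
import statistics

vessel_roi = [210, 205, 198, 215, 202]       # pixels inside the vessel
background_roi = [120, 125, 118, 122, 130]   # pixels in the background

signal = statistics.mean(vessel_roi)
background = statistics.mean(background_roi)
noise = statistics.stdev(background_roi)     # background std as noise

snr = signal / noise
cnr = (signal - background) / noise
```

In practice the ROIs would be drawn on the cine frames in ImageJ at each frame rate, and the resulting SNR/CNR pairs compared as in the study.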

An Exploration For Future Emerging Technologies by Science Mapping and a Dynamic Portfolio Setting for Government R&D Strategy (과학지도 작성을 통한 미래기술 발굴 및 정부R&D의 동적 투자방향성 설정 연구)

  • Yang, He-Young;Son, Suk-Ho;Han, Min-Kyu;Han, Jong-Min;Yim, Hyun
    • Journal of Technology Innovation
    • /
    • v.19 no.3
    • /
    • pp.1-29
    • /
    • 2011
  • The Korean government built the "2040 Science and Technology Future Vision" to present positive future scenarios and suggest a long-term guideline for progress in science and technology. The S&T Future Vision was based on an analysis of global megatrends and a prospect of domestic social change. As a follow-up action plan, the "Government R&D Strategy" was established, consisting of lists of future emerging technologies for future leadership, the status of government R&D investment, and investment portfolio plans. Aggressively exploring future emerging technologies and setting a governmental R&D strategy are requirements for national competitiveness and world leadership, so the search for and selection of future emerging technologies has recently become more and more important. Generally, qualitative methodologies such as expert-panel discussion and portfolio analysis with expert valuation have been used to explore future technologies. These expert-based qualitative methodologies are well established but lack some objectivity because the size of expert panels is limited. In this study, we suggest a quantitative methodology, the science mapping method, to compensate for this shortcoming. Another limitation of governmental R&D strategy is that general R&D portfolios remain static until the point of technology realization. We therefore also propose a dynamic R&D investment portfolio that presents different portfolios at an intermediate point and at the point of technology realization. We expect this attempt, combining the science mapping method with a dynamic R&D portfolio, to strengthen the strategic aspect of government R&D policy.
