• Title/Summary/Keyword: Research Evaluation Indicators

Analysing Evaluation Indicators for the Research Institutes in Science & Technology Sector in the Perspective of Intellectual Capital Model (지적자본 관점에서의 과학기술계 연구기관 평가지표 분석)

  • Yi, Chan-Goo
    • Journal of Technology Innovation
    • /
    • v.15 no.2
    • /
    • pp.177-209
    • /
    • 2007
  • This work adopts the intellectual capital model to analyse the balance between tangibles and intangibles, and among human capital, structural capital and relational capital, in the evaluation indicators applied to research institutes in the science and technology sector in 2006. The research question arises from the fact that, although R&D activity produces both tangible and intangible outputs, there have been no methodologies to measure them adequately and judge them rationally, particularly intangible performance. The results show that the 2006 institute evaluation system gave more weight to tangibles and to structural capital than to intangibles and to the other intellectual capitals (human capital and relational capital), compared with the previous evaluation system, even though, in principle, the current system must address intangible as well as tangible research performance from economic, social and cultural perspectives. Finally, based on this analysis, the paper suggests policy directions for overcoming the deficiencies of the indicators in the institute evaluation system.

A STATISTICAL APPROACH FOR DERIVING KEY NFC EVALUATION CRITERIA

  • Kim, S.K.;Kang, G.B.;Ko, W.I.;Youn, S.R.;Gao, R.X.
    • Nuclear Engineering and Technology
    • /
    • v.46 no.1
    • /
    • pp.81-92
    • /
    • 2014
  • This study suggests five evaluation criteria (safety and technology, environmental impact, economic feasibility, social factors, and institutional factors) and 24 evaluation indicators for the nuclear fuel cycle (NFC), derived using factor analysis. To do so, a survey based on one-on-one interviews was administered to nuclear energy experts and to local residents living near nuclear power plants. Through the factor analysis, homogeneous evaluation indicators were grouped under the same evaluation criterion, and unnecessary evaluation criteria and indicators were dropped. When the weights of the evaluation criteria were analyzed for the expert and general-public samples, both groups recognized safety as the most important criterion, while social factors such as public acceptance were ranked as more important by the nuclear energy experts than by the general public. (An illustrative sketch of the grouping step follows.)
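
The grouping step described in the abstract above, in which homogeneous indicators end up under a common criterion because they load on the same factor, can be illustrated with a minimal sketch. This is not the study's code or data; the synthetic responses, indicator names, and the choice of five factors are assumptions for illustration only.

```python
# Minimal sketch: group survey indicators into criteria by their strongest factor loading.
# Synthetic Likert-scale responses stand in for the study's survey data (hypothetical).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
indicators = [f"indicator_{i+1:02d}" for i in range(24)]       # 24 hypothetical survey items
responses = rng.integers(1, 6, size=(200, 24)).astype(float)   # 200 respondents, 5-point scale

fa = FactorAnalysis(n_components=5, random_state=0)            # five candidate criteria (assumed)
fa.fit(responses)

# loadings has shape (n_factors, n_indicators); each indicator is assigned to the
# factor on which it loads most strongly (in absolute value).
loadings = fa.components_
for j, name in enumerate(indicators):
    group = int(np.abs(loadings[:, j]).argmax())
    print(f"{name} -> criterion group {group + 1}")
```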

A Study on the Development of Teaching Evaluation Indicators for Faculty in Engineering College (공과대학 교수의 교육업적평가 지표 개발 연구)

  • Kang, So Yeon;Choi, Keum Jin;Park, Sun Hee;Han, Jiyoung;Lee, Hyemi;Cho, Sung Hee
    • Journal of Engineering Education Research
    • /
    • v.20 no.4
    • /
    • pp.38-50
    • /
    • 2017
  • The purpose of this study is to analyze the current methods of evaluating faculty performance at Korean engineering colleges and to develop teaching evaluation indicators for faculty performance. We investigated faculty performance evaluation cases at engineering colleges inside and outside of Korea, engineering faculty members' awareness of the evaluation factors for their educational performance, and the appropriate ratios among the indicating factors. We also developed evaluation indicators for educational achievement to improve the current faculty performance system; 227 engineering faculty members answered our survey questionnaire. The findings of the case study on faculty performance evaluation are as follows. First, most evaluation items are quantitative indicators chosen because they allow objective evaluation to be conducted easily. Second, the evaluation items focus mostly on classroom instruction. Third, evaluation by students and administrative managers dominates over evaluation by professors or their colleagues, document evaluation over on-site evaluation, general evaluation over formative evaluation, and static evaluation over dynamic evaluation. Lastly, some universities tend to substitute outstanding articles for underperforming instruction. The evaluation indicators we developed can be applied by four types of evaluators: students, the professors themselves, their colleagues, and deans. Based on these indicators, faculty members can freely select their evaluation domains according to their track, such as a teaching track, a research track, or an industry-university cooperation track. The mandatory evaluation fields include teaching, student counselling, teaching portfolio evaluation by mentors or colleagues, class management evaluation by deans, and self-evaluation; the other areas are optional, and professors can choose their own evaluation factors.

Development of Key Indicators for Nurses Performance Evaluation and Estimation of Their Weights for Management by Objectives (목표관리를 적용한 간호사 성과평가 핵심 지표개발과 가중치 산정)

  • Lee, Eun-Hwa;Ahn, Sung-Hee
    • Journal of Korean Academy of Nursing
    • /
    • v.40 no.1
    • /
    • pp.69-77
    • /
    • 2010
  • This methodological research was designed to develop performance evaluation key indicators (PEKIs) for management by objectives (MBO) and to estimate their weights for hospital nurses. Methods: The PEKIs were developed by selecting preliminary indicators from a literature review, examining their content validity, and identifying their level of importance. Data were collected from November 14, 2007 to February 18, 2008. The data set on the importance of the indicators was obtained from 464 nurses and that on the weights of the PEKI domains from 453 nurses, all of whom had worked for at least two years in one of three hospitals. Data were analyzed using the chi-square test, factor analysis, and the Analytic Hierarchy Process (AHP). Results: Based on a Content Validity Index of .80 or above, 61 indicators were selected from the 100 preliminary indicators. Finally, 40 PEKIs were developed from these 61 indicators and categorized into 10 domains. Among the 10 domains, customer satisfaction received the highest weight, followed by patient education, direct nursing care, profit increase, safety management, improvement of nursing quality, completeness of nursing records, enhancing the competence of nurses, indirect nursing care, and cost reduction, in that order. Conclusion: The PEKIs and their weights can be utilized for impartial evaluation and MBO for hospital nurses. Further research to verify the PEKIs would lead to successful implementation of MBO. (A generic sketch of the AHP weight computation follows.)
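
The AHP step reported above, deriving domain weights from pairwise comparisons, can be sketched generically as follows. The pairwise comparison matrix is invented for illustration (the study used 10 domains), and the computation is the standard principal-eigenvector method with Saaty's consistency check, not the authors' own implementation.

```python
# Minimal AHP sketch: derive priority weights from a pairwise comparison matrix via the
# principal eigenvector and check consistency. The 3x3 example matrix is hypothetical.
import numpy as np

A = np.array([[1,   3,   5 ],
              [1/3, 1,   2 ],
              [1/5, 1/2, 1 ]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = eigvals.real.argmax()                        # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalized priority weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)             # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12, 10: 1.49}[n]    # Saaty's random index for n criteria
CR = CI / RI                                     # judgments are acceptable if CR < 0.1

print("weights:", weights.round(3), "CR:", round(CR, 3))
```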

Evaluation model for scientific research performance based on journal articles (연구성과의 질 제고를 위한 논문평가 모형개발)

  • Lee, Hyuck-Jai;Yeo, Woon-Dong;Lee, Sang-Pil
    • Journal of Korea Technology Innovation Society
    • /
    • v.9 no.3
    • /
    • pp.538-557
    • /
    • 2006
  • Interest in the performance evaluation of public R&D has increased dramatically, and many attempts have been made to measure and evaluate the quality of research performance. Quantitative indicators are considered increasingly important for this purpose; the most widely used are the number of research articles, citation counts, and Impact Factors. However, such indicators are strongly discipline-dependent and should therefore be used with care. In this study we provide quantitative evidence on the latent problems of discipline-dependent indicators, together with a set of quantitative indicators for evaluating publications as a product of research activities. We focus on quantitative indicators that reflect the characteristics of disciplines and provide several indicators with guidelines on their usage. (A generic sketch of field-normalized citation counting, one common way to handle discipline dependence, follows.)
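
The discipline dependence the authors point out is commonly handled by normalizing raw citation counts against a field baseline. The sketch below shows one generic approach, dividing a paper's citations by the mean citations of papers from the same field and year; it is not the specific indicator set proposed in the paper, and the sample records are invented.

```python
# Generic field-normalized citation score: a paper's citations divided by the mean
# citations of papers in the same discipline and publication year (hypothetical data).
from collections import defaultdict

papers = [  # (paper_id, discipline, year, citations) -- invented records for illustration
    ("p1", "physics",   2004, 30),
    ("p2", "physics",   2004, 10),
    ("p3", "sociology", 2004, 6),
    ("p4", "sociology", 2004, 2),
]

totals, counts = defaultdict(int), defaultdict(int)
for _, field, year, cites in papers:
    totals[(field, year)] += cites
    counts[(field, year)] += 1

for pid, field, year, cites in papers:
    baseline = totals[(field, year)] / counts[(field, year)]  # field-and-year mean
    print(pid, "normalized score:", round(cites / baseline, 2))
```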

A Study on Development of Evaluation Indicators for Measuring Educational Value of Libraries (도서관의 교육적 가치 측정을 위한 평가지표 개발에 관한 연구)

  • Noh, Younghee
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.51 no.4
    • /
    • pp.5-34
    • /
    • 2017
  • The purpose of this research is to develop evaluation indicators for assessing the educational value of libraries. First, preliminary evaluation indicators were derived through a comprehensive analysis of about 60 domestic and overseas papers on the value of libraries. On the basis of these preliminary indicators, 10 experts were selected and the final evaluation indicators were developed through a three-round Delphi survey. The final set comprises five evaluation areas into which the educational value of libraries is divided (literacy improvement; support for learning and education; research support and provision of information resources; improvement of the educational environment and quality of education; and competence strengthening), 13 evaluation items, and 62 evaluation indicators. Future research will need to measure the educational value of libraries on the basis of these indicators.

Research on Development of Social Value Evaluation Indicators for Public Libraries (공공도서관의 사회적 가치 평가지표 개발에 관한 연구)

  • Noh, Younghee
    • Journal of the Korean Society for Information Management
    • /
    • v.34 no.2
    • /
    • pp.181-214
    • /
    • 2017
  • The purpose of this research is to develop evaluation indicators for assessing the social value of libraries. First, preliminary evaluation indicators were derived through a comprehensive analysis of about 60 domestic and overseas papers on the value of libraries. On the basis of these preliminary indicators, 11 experts were selected and the final evaluation indicators were developed through a three-round Delphi survey. The final set comprises five evaluation areas into which the social value of libraries is divided (development of local communities; linkage of local communities; improvement of local residents' quality of life; equalization for local residents; and information services needed by local communities), 13 evaluation items, and 64 evaluation indicators. Future research will need to measure the social value of libraries on the basis of these indicators.

Development of the Evaluation Indicators of Positive Nursing Organizational Culture in a Clinical Setting (임상현장에서의 긍정적인 간호조직문화 평가지표 개발)

  • Yom, Young Hee;Noh, Sang Mi;Kim, Kyung Hee;Ji, Soon Ju;Kim, Hyun Jung
    • Journal of Korean Clinical Nursing Research
    • /
    • v.19 no.2
    • /
    • pp.233-244
    • /
    • 2013
  • Purpose: The purpose of this study was to develop evaluation indicators of a positive nursing organizational culture in a clinical setting. Methods: The evaluation indicators were developed from a literature review and a focus group interview. Content validity was tested using a panel of clinical experts, and content utility was tested using a survey questionnaire. Results: The evaluation indicators of positive nursing organizational culture consist of 88 indicators representing eight domains with 24 categories. The average scores for the indicators were 3.29 for importance, 3.14 for potential for further utilization, and 2.80 for the current extent of adoption in the institution. Conclusion: The developed evaluation indicators can be applied to measure nursing organizational culture and provide basic data for managing human resources effectively in a clinical setting.

A Study on the Development of Meta Evaluation Indicators based on AHP Technique for Defense R&D Programs (AHP를 이용한 국방연구개발사업 메타평가 지표개발)

  • Kim, Soon-Yeong
    • Knowledge Management Research
    • /
    • v.10 no.2
    • /
    • pp.65-84
    • /
    • 2009
  • The purpose of this study is to develop meta-evaluation indicators for Defense R&D Programs in Korea. First, the four components of the meta-evaluation model were designed: evaluation context, evaluation input, evaluation process, and evaluation outcome. Fifty-two indicators for the meta-evaluation were then developed by experts who had performed evaluations of Defense R&D Programs. The reliability of the components and items was verified with Cronbach's α coefficient, which exceeded 0.6 in all areas, and their validity was verified by factor analysis. The Analytic Hierarchy Process (AHP) was used to assign the evaluation weights; a survey of twenty-two evaluators who had participated in Defense R&D Programs showed that the consistency ratio was under 0.1 for the evaluation components and items. In this study, an objective and reasonable set of Defense R&D meta-evaluation indicators was developed to increase the accountability of Defense R&D Programs and improve the quality of evaluation results. (A generic sketch of the Cronbach's α computation follows.)
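
Cronbach's α, used above to check the reliability of the components and items (acceptable above 0.6 in the study), can be computed directly from an item-score matrix. The sketch below is a generic implementation on assumed data, not the study's actual analysis.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
# 'scores' is a hypothetical respondents-by-items matrix of survey ratings.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars / total_var)

scores = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]], dtype=float)  # invented ratings
print(round(cronbach_alpha(scores), 3))           # reliability deemed acceptable above 0.6 in the study
```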

Development of LINC 3.0 Self-Evaluation Indicators Based on CIPP Evaluation Model - Focusing on the Case of K University - (CIPP모형에 기반한 LINC 3.0 자체평가지표 개발 -K대학 기술혁신선도형 사례 중심으로-)

  • Jinyoung Kwak;Hyeree Min;Mija Shim;Youngeun Wee;Jiyoung Kim
    • Journal of Practical Engineering Education
    • /
    • v.16 no.3_spc
    • /
    • pp.309-325
    • /
    • 2024
  • The purpose of this study was to develop self-evaluation criteria for the objective verification and performance analysis of LINC 3.0. To achieve this goal, evaluation indicators were developed for the fields of human resources development and of skill development and commercialization, and their validity was verified. We reviewed previous evaluation studies and similar cases to construct the evaluation model and system and to develop the indicators, and the validity of the developed indicators was secured through a two-round Delphi survey. As a result, the LINC 3.0 evaluation indicators are divided into the human resources development field and the skill development and commercialization field, with a total of 66 evaluation indicators: the CIPP-based indicators for human resources development comprise 13 categories and 38 indicators, and those for skill development and commercialization comprise 12 categories and 28 indicators. The significance of this study is that it suggests a way to increase the objective verification and validity of the university industry-academia cooperation model by developing self-evaluation indicators for the LINC 3.0 project. The indicators need to be continuously upgraded based on their usability in the field, and the quality and competitiveness of university education should be improved by sharing and disseminating best practices.