• Title/Summary/Keyword: Evaluation performance


Seismic-Performance Evaluation for Existing Railway Bridges (기존 철도 교량의 내진성능 평가)

  • 임남형;강영종;양재성;엄주환
    • Proceedings of the KSR Conference
    • /
    • 1999.05a
    • /
    • pp.422-427
    • /
    • 1999
  • This is a basic study on the evaluation of seismic performance for existing railway bridges. It presents the evaluation items and a step-by-step method for assessing the seismic performance of existing railway bridges. A two-stage procedure is used. First, a preliminary screening of bridges according to the seismic performance evaluation categories is recommended, and the seismic rank of each bridge is calculated using a seismic rating system. Second, a detailed evaluation is recommended for the bridges selected in the first stage. An illustrative sketch of this two-stage screening follows this entry.

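A minimal sketch, not from the paper, of the two-stage procedure described above; the evaluation categories, rating weights, and screening threshold are hypothetical placeholders:

```python
# Hypothetical two-stage screening: rate every bridge with a simple weighted
# rating, then pass the lowest-rated bridges on to detailed evaluation.

def seismic_rating(bridge: dict) -> float:
    """Combine per-category scores (0-1, higher is better) into one rating.
    The categories and weights are illustrative, not the paper's."""
    weights = {"structural_type": 0.4, "site_condition": 0.3, "age": 0.3}
    return sum(weights[k] * bridge[k] for k in weights)

def screen_bridges(bridges: list[dict], threshold: float = 0.5) -> list[dict]:
    """Stage 1: rank by rating. Stage 2 candidates: bridges below threshold."""
    ranked = sorted(bridges, key=seismic_rating)
    return [b for b in ranked if seismic_rating(b) < threshold]

bridges = [
    {"name": "B1", "structural_type": 0.8, "site_condition": 0.7, "age": 0.9},
    {"name": "B2", "structural_type": 0.3, "site_condition": 0.4, "age": 0.2},
]
print([b["name"] for b in screen_bridges(bridges)])  # -> ['B2']
```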

A Study on Faculty Perception of Research Performance Evaluation (연구업적 평가에 관한 대학 교수 인식 연구)

  • Kim, Yong Hwan;Kang, Ji Hei;Lee, Jongwook;Noh, Younghee
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.56 no.4
    • /
    • pp.309-333
    • /
    • 2022
  • A survey of 2,618 professors was conducted to analyze perceptions of research performance evaluation. The survey covered two areas: professors' perceptions of the faculty performance evaluation currently conducted at each university, and their perceptions of introducing qualitative performance evaluation indicators as an alternative to quantitative evaluation. The results confirmed the following: quantitative research performance evaluation is carried out at most universities; the evaluation is often not suited to the department or research field; an extension of the evaluation period is required; quantitative evaluation has a negative impact on the academic community; and quantitative evaluation needs to be improved. Regarding the introduction of qualitative evaluation, professors perceived that it is necessary for evaluating research performance, but they also held negative opinions about its introduction.

Evaluation of Structural Integrity and Performance Using Nondestructive Testing and Monitoring Techniques

  • Rhim, Hong-Chul
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.2 no.3
    • /
    • pp.73-81
    • /
    • 1998
  • In this paper, the necessity of developing effective nondestructive testing and monitoring techniques for the evaluation of structural integrity and performance is described. Such evaluation is especially important when structures are subjected to abrupt external forces such as earthquakes, since a prompt and extensive inspection is required over a large earthquake-damaged zone. This evaluation process is regarded as part of performance-based design. The paper presents nondestructive testing and monitoring techniques, particularly for concrete structures, as methods for evaluating structural integrity and performance. The concept of performance-based design is first defined, followed by the role of structural evaluation within the overall performance-based design concept. Among possible techniques, nondestructive testing of concrete structures using radar and a concept of using fiber-optic sensors for continuous monitoring of structures are presented.

MAINTENANCE PERFORMANCE EVALUATION OF THE BUILDINGS IN THE DESIGN PHASE

  • Hakyu Baeck;Chansik Lee
    • International conference on construction engineering and project management
    • /
    • 2005.10a
    • /
    • pp.1138-1143
    • /
    • 2005
  • As the importance of building maintenance is increasingly emphasized, studies on efficient building maintenance focus only on evaluating the maintenance plan, its implementation, and its cost, while evaluation of maintenance performance is nearly ignored. Given the current emphasis on performance-based design, methods are needed to improve the maintenance performance of buildings in the design phase. The purpose of this study was to establish design evaluation items that improve the maintenance performance of buildings. Through a review of the existing literature, we defined the concept of maintenance performance and, based on an analysis of domestic and overseas certification schemes and design guidelines, suggested evaluation items for assessing it.

A Study on Standardization of Performance Evaluation for Autonomous Cleaning Robot (자율청소로봇 성능평가 표준화에 관한 연구)

  • Ryu Jae-Chang;Hong Ju-Pyo;Rhim Sung-Soo;Lee Soon-Geul;Park Kwang-Ho
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2005.06a
    • /
    • pp.1054-1059
    • /
    • 2005
  • To support the expansion of the autonomous robot market, establishing evaluation standards for robot performance is essential. In this paper, as a first step toward standardizing the performance evaluation of autonomous robots, the authors take the autonomous cleaning robot (ACR) as the initial stepping stone. The ACR has recently been actively developed and marketed in many countries, including Korea, and is believed to be the forerunner among the various types of autonomous robot products. Performance evaluation standards for the ACR could easily be modified and applied to other autonomous robots. This paper formulates and suggests a group of performance evaluation standards based on an evaluation platform for the ACR. The newly developed platform is designed to include the important aspects of real living environments. On the platform, the performance of the ACR is measured in terms of mobility, cleaning performance, obstacle avoidance (safety), and operating noise. Several commercially available ACR products were collected, tested on the platform, and compared against the formulated standards. An illustrative aggregation of these measures is sketched after this entry.

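As a rough illustration of how the four measured aspects might be combined into one comparable score; the weights, scales, and noise normalization below are assumptions, not the standard proposed in the paper:

```python
from dataclasses import dataclass

@dataclass
class ACRResult:
    mobility: float            # 0-100, higher is better
    cleaning: float            # 0-100, higher is better
    obstacle_avoidance: float  # 0-100, higher is better (safety)
    noise_db: float            # operating noise in dB(A), lower is better

def overall_score(r: ACRResult, max_noise_db: float = 70.0) -> float:
    """Weighted aggregate; noise is inverted so quieter robots score higher."""
    noise_score = max(0.0, 100.0 * (1.0 - r.noise_db / max_noise_db))
    weights = (0.3, 0.4, 0.2, 0.1)  # mobility, cleaning, avoidance, noise
    parts = (r.mobility, r.cleaning, r.obstacle_avoidance, noise_score)
    return sum(w * p for w, p in zip(weights, parts))

print(round(overall_score(ACRResult(80, 70, 90, 55)), 1))  # -> 72.1
```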

Relevance Analysis of Performance Evaluation Systems of Government S&T Research Groups (출연연구기관의 연구회 단위 기관평가제도의 적합성 분석)

  • Nam, Yeong-Ho;Kim, Byeong-Tae
    • Journal of Technology Innovation
    • /
    • v.14 no.3
    • /
    • pp.117-154
    • /
    • 2006
  • This research examines performance evaluatees' opinions regarding the current institutional performance evaluation systems of Government S&T Research Institutes (GRIs). Under the current evaluation systems, twenty GRIs are grouped into three Research Groups, and each Group has its own evaluation system. One problem of the current institutional evaluation systems is that they cannot reflect individual GRIs' characteristics. The following methods are used. First, based on the four perspectives of Kaplan and Norton's (1992) Balanced Scorecard (BSC) model, six perspectives appropriate to GRIs' characteristics are derived. Second, experts classify the current performance evaluation measures into the six perspectives, which enables the different evaluation systems of the three GRI Research Groups to be compared under the same evaluation measures. Third, GRIs' evaluatees are asked to allocate appropriate weights to the performance measures, and each GRI's weights are compared with the average weights of its Group. Finally, for each BSC perspective, GRIs whose weights are markedly over- or under-scored are analyzed in terms of their missions, customers, human resource capabilities, etc. In the Basic Research Group, the Korea Basic Science Institute deviates in the financial and strategic-direction perspectives. In the Public Research Group, the Korea Institute of Construction Technology differs significantly from the other GRIs in three perspectives. Five out of the eight GRIs in the Industrial Research Group differ significantly from one another in several perspectives. It can be concluded that the current institutional evaluation systems are least appropriate for measuring the performance of the GRIs in the Industrial Research Group. An illustrative weight-comparison sketch follows this entry.

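A minimal sketch of the weight-comparison step described above; the six perspective labels, the weights, and the deviation threshold are hypothetical placeholders:

```python
# Compare one GRI's perspective weights with its Research Group's average
# weights and flag perspectives with an unusually large deviation.

def deviations(gri_weights: dict, group_avg: dict, threshold: float = 0.05) -> dict:
    """Perspectives whose weight differs from the Group average by more than
    `threshold`; all weights are fractions that sum to 1."""
    return {p: round(gri_weights[p] - group_avg[p], 3)
            for p in group_avg
            if abs(gri_weights[p] - group_avg[p]) > threshold}

group_avg = {"financial": 0.15, "customer": 0.25, "internal": 0.20,
             "learning": 0.15, "strategy": 0.15, "public": 0.10}
one_gri   = {"financial": 0.05, "customer": 0.25, "internal": 0.20,
             "learning": 0.15, "strategy": 0.25, "public": 0.10}
print(deviations(one_gri, group_avg))  # -> {'financial': -0.1, 'strategy': 0.1}
```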

A Simulation Model Construction for Performance Evaluation of Public Innovation Project

  • Koh, Chan
    • Proceedings of the Korea Digital Policy Society Conference
    • /
    • 2006.06a
    • /
    • pp.87-109
    • /
    • 2006
  • The purpose of this paper is to examine present performance evaluation methods and to construct a Monte Carlo simulation model for IT-based government innovation projects. It suggests appropriate ways of applying the Monte Carlo simulation model by integrating existing evaluation methods. The paper develops its theoretical framework by examining the existing literature and proposing an approach to the key concepts of economic impact analysis methods. It then examines the actual conditions of performance evaluation, focusing on IT-based government innovation projects, and considers how the simulation model can be applied to performance management in public innovation projects, focusing on the framework, process, and procedures of performance management. An illustrative simulation sketch follows this entry.

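A minimal Monte Carlo sketch in the spirit of the model described above; the benefit and cost distributions, their parameters, and the units are hypothetical placeholders, not those of the paper:

```python
import random

def simulate_net_benefit(n_trials: int = 10_000, seed: int = 42) -> list[float]:
    """Sample uncertain annual benefit and cost, return net-benefit draws."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_trials):
        benefit = rng.normalvariate(mu=120.0, sigma=25.0)        # hypothetical units
        cost = rng.triangular(low=60.0, high=110.0, mode=80.0)   # hypothetical units
        draws.append(benefit - cost)
    return draws

draws = simulate_net_benefit()
mean = sum(draws) / len(draws)
p_loss = sum(d < 0 for d in draws) / len(draws)
print(f"expected net benefit ~ {mean:.1f}, P(net benefit < 0) ~ {p_loss:.2%}")
```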

A Study on the Strategy of Performance Assessment based on Classroom (수업 연계 수행평가 전략 설계 방안 연구)

  • Won, Hyo-Heon;Heo, Gyun
    • Journal of Fisheries and Marine Sciences Education
    • /
    • v.27 no.1
    • /
    • pp.125-132
    • /
    • 2015
  • The purpose of this study was to examine the concept and meaning of performance assessment (PA) in the classroom and to propose some strategies for applying PA. The results are as follows. First, a differentiation strategy is needed for the evaluation goal; individualization is one example of differentiating the evaluation of a group goal. Second, various strategies are needed depending on the subjects of evaluation; self-evaluation, peer evaluation, and small-group evaluation are examples. Third, a phased assessment strategy is needed: for classroom-linked performance assessment, evaluation activities should be considered before, during, and after class. How the evaluation task is selected is also one of the key success factors for improving classroom-linked performance assessment.

Implementation of RPMS, the Evaluation and Management Tool for Urban Residential Performance and Possible Applications (도시주거지역 거주성 및 거주성능의 평가 및 관리도구 RPMS의 구현과 활용제안)

  • Park, Soo-Hoon;Lee, Sang-Hyun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.15 no.1
    • /
    • pp.51-59
    • /
    • 2010
  • People evaluate urban residential areas frequently and sensitively, considering issues such as location or the ease of use of on-site and nearby urban facilities, and those evaluations are reflected in real estate prices almost immediately. However, questions have frequently been raised, for various reasons, about the objectivity of the results and methods behind indexes such as land prices. RPMS, the Residence Performance Management System, which currently targets mostly urban residential areas, provides an instrumental, objective methodology for this sensitive evaluation of urban residence performance. This paper explains and suggests the instrumental use of RPMS, its implementation, its evaluation methodology, and its quantitative approach to evaluation. Regarding implementation, we cover adding target locations as new residence planning sites, quantifying properties on evaluation indexes of residential performance and habitability through checklists, evaluation formulas, fine adjustment of evaluation results by setting weights on the evaluation indexes, and reporting of results. Research on appropriate weights and weight settings for the evaluation indexes, however, is beyond the scope of this paper, which focuses on explaining the residence performance evaluation and management methodology. An illustrative weighted-index sketch follows this entry.
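
A minimal sketch of the weighted, checklist-based evaluation described above; the index names, scores, and weights are hypothetical, not RPMS's actual evaluation indexes:

```python
# Weighted residence-performance index: checklist scores (0-5) per evaluation
# index, combined with adjustable weights that sum to 1, reported on 0-100.

def residence_score(scores: dict, weights: dict) -> float:
    """Weighted average of index scores, normalized to a 0-100 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return 100.0 * sum(weights[k] * scores[k] for k in weights) / 5.0

scores  = {"accessibility": 4, "facilities": 3, "safety": 5, "environment": 4}
weights = {"accessibility": 0.3, "facilities": 0.2, "safety": 0.3, "environment": 0.2}
print(round(residence_score(scores, weights), 1))  # -> 82.0
```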

A Case Study of the Discrepancy between Ex Ante and Ex Post Evaluation of IS Performance (정보화 성과의 사전-사후평가 차이에 관한 사례연구)

  • Lee, Kuk-Hie;Park, So-Hyun;Gu, Bon-Jae;Lee, Mi-Young
    • Journal of Information Technology Applications and Management
    • /
    • v.19 no.2
    • /
    • pp.59-78
    • /
    • 2012
  • The purpose of this longitudinal case study is to shed light on the reliability problems of IS performance evaluation by analyzing the discrepancy between an ex ante evaluation in 2011 and an ex post evaluation in 2012. Using an information system development project of a public enterprise, the gap between the ex ante and ex post evaluations was measured, and the causes of the gap and the success factors that can address them were derived. The ex ante evaluation of IS performance was based on both the IS success model and the BSC model; the ex post evaluation was carried out after the target system had been built, applying measures and a process similar to those of the ex ante evaluation. In the ex ante evaluation, the business performance improvement from the target system was estimated at 18.2%, whereas in the ex post evaluation it was lower at 15.2%, and the differences were statistically significant in 6 out of 10 measures. Two causes of the gap were diagnosed: (1) changes in evaluation psychology arising from differences in evaluation objectives, and (2) excessive expectations of the target system formed at the time of the ex ante evaluation. In other words, users with excessive expectations tend to overestimate in the ex ante evaluation and, in the ex post evaluation, tend to underestimate below the actual performance, mainly out of disappointment that the results did not meet their early expectations. As solutions to the reliability problem of ex ante evaluation, three factors were derived: (1) tempering users' excessive expectations, (2) a clear definition of the scope and functionality of the target system, and (3) genuine commitment to the evaluation of IT performance.