• Title/Summary/Keyword: Stage-Based Reliability


Development of A Nurse's Suffering Experience Scale (말기 암 환자를 간호하는 간호사의 고통경험 척도개발)

  • 조계화
    • Journal of Korean Academy of Nursing
    • /
    • v.32 no.2
    • /
    • pp.243-253
    • /
    • 2002
  • The purpose of this study was to develop a Nurse's Suffering Experience Scale and to test the reliability and validity of the instrument. Method: The subjects used to verify the scale's reliability and validity were 220 nurses caring for end-stage cancer patients while working at university and general hospitals in Daegu and Kyungbuk province from April 20 to July 10, 2001. The data were analyzed with the SPSS/WIN 8.0 program. Results: A factor analysis was conducted, and items with a factor loading above .40 and an eigenvalue above 1.0 were selected. The factor analysis yielded a total of seven factors, with a communality of 44%. Based on the conceptual framework and item content, the factors were interpreted as follows: the first factor was expanding self-consciousness, the second forming empathy with family, the third professional challenge, the fourth change of values, the fifth spiritual sublimation, the sixth helplessness, and the seventh rejection of death. Cronbach's alpha coefficient for the total of 44 items was .8665. The Nurse's Suffering Experience Scale developed in this study was identified as a tool with a high degree of reliability and validity, and can therefore be used effectively to evaluate the degree of nurses' suffering experience in clinical settings.
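Cronbach's alpha, the reliability coefficient reported in this abstract (.8665 over 44 items), has a standard formula: alpha = k/(k-1) · (1 − Σ item variances / variance of the total score). Below is a minimal sketch of that computation; the function name and the synthetic Likert-type scores are illustrative, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]                          # number of items (44 in the paper)
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data only (220 respondents x 44 items of 1-5 ratings), just to exercise the function;
# random scores will naturally give a low alpha.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(220, 44)).astype(float)
print(round(cronbach_alpha(scores), 4))
```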

A Study on the Imperfect Debugging of Logistic Testing Function (로지스틱 테스트함수의 불완전 디버깅에 관한 연구)

  • Che, Gyu-Shik;Moon, Myung-Ho;Yang, Kye-Tak
    • Journal of Advanced Navigation Technology
    • /
    • v.14 no.1
    • /
    • pp.119-126
    • /
    • 2010
  • Software reliability growth models (SRGMs) have been developed to estimate reliability measures such as the remaining number of faults, the failure rate, and the reliability of software in the development stage. Most of them assume that the faults detected during testing are eventually removed; that is, they are based on the assumption of perfect debugging. In reality, however, fault-removal efficiency is widely known to be imperfect: it is very difficult to remove a detected fault completely, because fault detection is not easy and new errors may be introduced during debugging and correction. Therefore, in this paper we study imperfect debugging for the logistic testing-effort function, which is considered the most adequate.
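For readers unfamiliar with the logistic testing-effort function mentioned above, a minimal sketch follows. It uses one common textbook form of an effort-driven SRGM with a fault-removal efficiency p < 1 standing in for imperfect debugging; the parameter values and the exact model form are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Logistic testing-effort function: cumulative testing effort consumed by time t.
# N = total effort, A = shape constant, alpha = consumption rate (all hypothetical values).
def logistic_effort(t, N=100.0, A=50.0, alpha=0.3):
    return N / (1.0 + A * np.exp(-alpha * t))

# Exponential-type SRGM driven by testing effort, with imperfect debugging:
# only a fraction p of the detected faults is actually removed (p = 1 means perfect debugging).
def expected_removed_faults(t, a=120.0, r=0.05, p=0.9):
    W = logistic_effort(t) - logistic_effort(0.0)   # effort spent in (0, t]
    detected = a * (1.0 - np.exp(-r * W))           # expected faults detected by time t
    return p * detected                             # expected faults removed under imperfect debugging

for week in (5, 10, 20, 40):
    print(week, round(expected_removed_faults(week), 1))
```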

Reliability-Centered Maintenance Model for Maintenance of Electric Power Distribution System Equipment (배전계통 기기 유지보수를 위한 RCM 모델)

  • Moon, Jong-Fil;Shon, Jin-Geun
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.58 no.4
    • /
    • pp.410-415
    • /
    • 2009
  • With the implementation of electric power industry reform, utilities are looking for effective ways to improve economic efficiency. Equipment maintenance in particular is being scrutinized for ways to reduce costs while keeping a reasonable level of reliability in the overall system, which requires a tradeoff between upfront maintenance costs and the potential cost of losing loads. In this paper we describe the issues involved in applying the so-called "Reliability-Centered Maintenance" (RCM) method to managing electric power distribution equipment. The RCM method is especially useful because it explicitly incorporates this cost tradeoff, i.e., the upfront maintenance costs and the potential interruption costs, in determining which equipment should be maintained and how often. In comparison, the traditional and widely used "Time-Based Maintenance" (TBM) method only takes the lifetime of equipment into consideration. In this paper, a modified Markov model for maintenance is developed. First, the existing Markov maintenance model is explained and analyzed for equipment such as transformers and circuit breakers. Second, the developed model is introduced and described. The model differs from the existing one in two respects: a time-varying failure rate (TVFR) and a nonlinear customer interruption cost (CIC). That is, the normal stage in the middle of the bathtub curve has not a constant failure rate but a gradually increasing one, and the unit cost of the CIC increases with interruption duration. The results of the case studies give the optimal maintenance interval that maintains the equipment at minimum cost, and a numerical example is presented for illustration.
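A toy illustration of the cost tradeoff that RCM formalizes: with an increasing (time-varying) failure rate and an interruption cost that grows nonlinearly with outage duration, the maintenance interval minimizing cost per year can be found numerically. This is only a sketch under assumed Weibull parameters and cost figures, not the modified Markov model developed in the paper.

```python
import numpy as np

# Weibull failure probability with beta > 1, i.e., an increasing failure rate over the interval T (years).
def failure_prob(T, eta=8.0, beta=2.5):
    return 1.0 - np.exp(-(T / eta) ** beta)

# Nonlinear customer interruption cost: the unit cost grows faster than linearly with duration.
def interruption_cost(hours=4.0, base=2000.0):
    return base * hours ** 1.3

# Expected cost per year if the equipment is maintained every T years (all cost figures assumed).
def cost_per_year(T, c_maint=500.0):
    expected_failure_cost = failure_prob(T) * interruption_cost()
    return (c_maint + expected_failure_cost) / T

intervals = np.arange(1.0, 15.0, 0.5)
costs = [cost_per_year(T) for T in intervals]
best = intervals[int(np.argmin(costs))]
print(f"minimum-cost maintenance interval ~ {best} years")
```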

RRSEB: A Reliable Routing Scheme For Energy-Balancing Using A Self-Adaptive Method In Wireless Sensor Networks

  • Shamsan Saleh, Ahmed M.;Ali, Borhanuddin Mohd.;Mohamad, Hafizal;Rasid, Mohd Fadlee A.;Ismail, Alyani
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.7
    • /
    • pp.1585-1609
    • /
    • 2013
  • Over recent years, an enormous amount of research on wireless sensor networks (WSNs) has been conducted, owing to their multifarious applications such as environmental monitoring, object tracking, disaster management, manufacturing, and monitoring and control. Some WSN applications demand both energy efficiency and link reliability, so this paper presents a routing protocol that considers these two criteria. We propose a new mechanism, the Reliable Routing Scheme for Energy-Balancing (RRSEB), to reduce the number of packets dropped during data communication. It is based on Swarm Intelligence (SI), using the Ant Colony Optimization (ACO) method. RRSEB is a self-adaptive method that ensures high routing reliability in WSNs when failures occur due to the movement of sensor nodes or the depletion of a sensor node's energy. This is done by introducing a new method that creates alternative paths alongside the data routes obtained during the path discovery stage. The goal of this operation is to update and offer new routing information in order to construct multiple paths, resulting in increased reliability of the sensor network. Simulation results show that the proposed method performs better in terms of packet delivery ratio and energy efficiency.
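A minimal sketch of the kind of ACO-style routing decision the abstract refers to: the next hop is chosen with a probability that combines pheromone with a heuristic built from residual energy and link reliability, and pheromone is evaporated and reinforced after delivery. The weighting, parameter values, and update rule below are generic ACO assumptions, not the exact RRSEB rules.

```python
import random

# Choose a next hop: probability of neighbor j ~ tau[j]^alpha * eta[j]^beta, where the heuristic
# eta[j] rewards high residual energy and high link delivery probability.
def choose_next_hop(neighbors, tau, energy, link_rel, alpha=1.0, beta=2.0):
    eta = {j: energy[j] * link_rel[j] for j in neighbors}
    weights = {j: (tau[j] ** alpha) * (eta[j] ** beta) for j in neighbors}
    total = sum(weights.values())
    r, acc = random.random() * total, 0.0
    for j, w in weights.items():               # roulette-wheel selection
        acc += w
        if r <= acc:
            return j
    return neighbors[-1]

# Pheromone update: evaporate everywhere, then reinforce the links of the path just used.
def update_pheromone(tau, path, rho=0.1, deposit=1.0):
    for j in tau:
        tau[j] *= (1.0 - rho)
    for j in path:
        tau[j] += deposit

neighbors = ["B", "C", "D"]
tau = {j: 1.0 for j in neighbors}
energy = {"B": 0.9, "C": 0.4, "D": 0.7}        # residual energy (normalized, assumed)
link_rel = {"B": 0.95, "C": 0.99, "D": 0.80}   # link delivery probability (assumed)
print(choose_next_hop(neighbors, tau, energy, link_rel))
```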

A Study on Establishing Target Reliability Levels for Flammable Gas Transmission Pipelines (가연성가스 수송배관에 대한 목표 신뢰도 수준 설정에 관한 연구)

  • Lee, Jin-Han;Jo, Young-Do;Moon, Jong-Sam
    • Journal of the Korean Institute of Gas
    • /
    • v.22 no.6
    • /
    • pp.52-58
    • /
    • 2018
  • In the reliability-based design and assessment (RBDA) methodology, reliability targets are used to ensure that safety levels are met for the relevant limit states at the design and maintenance stages. Target reliabilities for flammable gas pipelines have not yet been developed in Korea; instead, tolerable criteria for risk measures such as societal and individual risk have been applied in pipeline risk management. This paper introduces procedures for developing target reliabilities from the tolerable societal and individual risk criteria that can be enforced for high-pressure natural gas pipelines in quantitative risk assessment. In addition, we propose target reliabilities for natural gas and hydrogen gas transmission pipelines using these procedures.
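As a rough illustration (not the paper's procedure) of how a tolerable individual-risk criterion can be turned into a target failure frequency, and hence a target reliability, consider the back-of-the-envelope calculation below; every number and factor in it is an assumed placeholder.

```python
# Toy back-calculation: if a person near the pipeline is exposed all year and the chance of a
# fatality given a failure on the nearby section is P_fatal, then keeping individual risk below
# IR_tol bounds the tolerable failure frequency, whose complement is the target reliability.
IR_tol = 1e-4                 # tolerable individual risk per year (hypothetical criterion)
P_fatal = 0.5                 # probability of fatality given failure near the receptor (assumed)
interaction_length_km = 0.2   # pipeline length whose failure can harm the receptor (assumed)

tolerable_failure_rate = IR_tol / (P_fatal * interaction_length_km)   # failures per km-year
target_reliability = 1.0 - tolerable_failure_rate                     # per km-year, when the rate is small
print(f"tolerable failure rate ~ {tolerable_failure_rate:.1e} per km-year")
print(f"target reliability ~ {target_reliability:.6f} per km-year")
```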

Efficient power allocation algorithm in downlink cognitive radio networks

  • Abdulghafoor, Omar;Shaat, Musbah;Shayea, Ibraheem;Mahmood, Farhad E.;Nordin, Rosdiadee;Lwas, Ali Khadim
    • ETRI Journal
    • /
    • v.44 no.3
    • /
    • pp.400-412
    • /
    • 2022
  • In cognitive radio networks (CRNs), the computational complexity of resource allocation algorithms is a significant problem that must be addressed. The high computational complexity of the optimal solution for resource allocation in CRNs makes it inappropriate for practical use. Therefore, this study proposes a power-based pricing algorithm (PPA), primarily to reduce the computational complexity in downlink CRN scenarios while restricting the interference to primary users to permissible levels. A two-stage approach reduces the computational complexity of the proposed mathematical model: Stage 1 assigns subcarriers to the CRN's users, while the utility function in Stage 2 incorporates a pricing method to provide a power allocation algorithm with enhanced reliability. The PPA's performance is simulated and tested for orthogonal frequency-division multiplexing-based CRNs. The results confirm that the proposed algorithm's performance is close to that of the optimal algorithm, albeit with a lower computational complexity of O(M log(M)).
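A toy sketch of the two-stage idea described above: Stage 1 sorts subcarriers by channel gain (the O(M log(M)) step), and Stage 2 applies a price on transmit power, which yields a water-filling-like rule, with the total power scaled down if the interference at the primary user exceeds a threshold. The pricing rule, parameter values, and channel model are assumptions for illustration, not the exact PPA.

```python
import numpy as np

def allocate(gains, interference_gain, I_th, lam=0.5):
    order = np.argsort(gains)[::-1]                  # Stage 1: rank subcarriers, best first (O(M log M))
    p = np.maximum(0.0, 1.0 / lam - 1.0 / gains)     # Stage 2: power priced at lam, water-filling-like
    interference = np.sum(p * interference_gain)     # interference caused at the primary receiver
    if interference > I_th:                          # keep interference at a permissible level
        p *= I_th / interference
    return order, p

rng = np.random.default_rng(1)
gains = rng.exponential(1.0, size=8)                 # CR user channel gains (hypothetical)
interference_gain = rng.exponential(0.1, size=8)     # gains toward the primary receiver (hypothetical)
order, power = allocate(gains, interference_gain, I_th=0.05)
print(order, np.round(power, 3))
```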

A Methodology for SDLC of AI-based Defense Information System (AI 기반 국방정보시스템 개발 생명주기 단계별 보안 활동 수행 방안)

  • Gyu-do Park;Young-ran Lee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.3
    • /
    • pp.577-589
    • /
    • 2023
  • Under the Defense Innovation 4.0 Plan, the Ministry of National Defense plans to harness AI as a key technology to bolster overall defense capability and cultivate an advanced, strong military based on science and technology. However, security threats arising from the characteristics of AI can pose a real risk to AI-based defense information systems. To address them, systematic security activities must be carried out from the development stage onward. This paper proposes the security activities and considerations that must be carried out at each stage of the development life cycle of an AI-based defense information system. This is expected to help prevent the security threats caused by applying AI technology to the defense field and to secure the safety and reliability of defense information systems.

Visualization analysis of the progressive failure mechanism of tunnel face in transparent clay

  • Lei, Huayang;Zhai, Saibei;Liu, Yingnan;Jia, Rui
    • Geomechanics and Engineering
    • /
    • v.29 no.2
    • /
    • pp.193-205
    • /
    • 2022
  • The stability of the tunnel face is the most important control index for safety risk management in shield tunnelling. Building on the demonstrated reliability of the transparent clay (TC) model test, a series of TC model tests at different burial depths was conducted to investigate the progressive failure mechanism of the tunnel face. During the failure process, the support pressure passed through a rapid descent stage, a slow descent stage, and a basically stable stage, accompanied by local failure and then integral failure within the soil. The relationship between the support pressure and the soil movement characteristics of each failure stage was defined. Failure started in the soil in front of the tunnel face and propagated as a slip zone and a loose zone, and fitted formulas were proposed for calculating the failure process. The failure mode in clay was a basin shape with an inverted trapezoid profile in the shallowly buried case and a basin shape with a teardrop-like profile in the deeply buried case. These findings can help in the safety risk management of underground construction.

Efficiency Improvement of the Fixed-Complexity Sphere Decoder

  • Mohaisen, Manar;Chang, Kyung-Hi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.3
    • /
    • pp.494-507
    • /
    • 2011
  • In this paper, we propose two schemes to reduce the complexity of the fixed-complexity sphere decoder (FSD) algorithm in the ordering and tree-search stages, respectively, while achieving quasi-ML performance. In the ordering stage, we propose a QR-decomposition-based FSD signal ordering based on the zero-forcing criterion (FSD-ZF-SQRD) that requires only a small number of additional complex flops compared to the unsorted QRD. The proposed ordering algorithm is also extended using the minimum mean square error (MMSE) criterion to achieve better performance. In the tree-search stage, we introduce a threshold-based complexity reduction approach for the FSD that depends on the reliability of the signal with the largest noise amplification. Numerical results show that in an 8 × 8 MIMO system, the proposed FSD-ZF-SQRD and FSD-MMSE-SQRD require only 19.5% and 26.3% of the computational effort required by Hassibi's scheme, respectively. Moreover, a third threshold vector is outlined which can be used for high-order modulation schemes. In a 4 × 4 MIMO system using 16-QAM and 64-QAM, simulation results show that when the proposed threshold-based approach is employed, the FSD requires only 62.86% and 53.67% of its full complexity, respectively.
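The sorted QR decomposition behind orderings such as FSD-ZF-SQRD is, in its generic zero-forcing form, a modified Gram-Schmidt procedure with column pivoting on the smallest remaining column norm. The sketch below shows only that generic form; the paper's FSD-specific ordering (which places the least reliable symbol at the fully enumerated tree level) differs in detail.

```python
import numpy as np

# Generic ZF sorted-QR decomposition: modified Gram-Schmidt, pivoting at each step on the
# remaining column with the smallest norm. Returns Q, R and the column permutation used.
def zf_sorted_qrd(H):
    m, n = H.shape
    Q, R = H.astype(complex).copy(), np.zeros((n, n), dtype=complex)
    perm = list(range(n))
    for i in range(n):
        k = i + int(np.argmin(np.linalg.norm(Q[:, i:], axis=0)))   # weakest remaining column
        Q[:, [i, k]] = Q[:, [k, i]]
        R[:, [i, k]] = R[:, [k, i]]
        perm[i], perm[k] = perm[k], perm[i]
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]
        for j in range(i + 1, n):                                   # orthogonalize the rest
            R[i, j] = Q[:, i].conj() @ Q[:, j]
            Q[:, j] -= R[i, j] * Q[:, i]
    return Q, R, perm

# Example: 8x8 Rayleigh-fading channel matrix (synthetic).
H = (np.random.randn(8, 8) + 1j * np.random.randn(8, 8)) / np.sqrt(2)
Q, R, perm = zf_sorted_qrd(H)
print(perm, np.allclose(Q @ R, H[:, perm]))
```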
