• Title/Summary/Keyword: reliability based optimization

Search Result 481, Processing Time 0.022 seconds

A Study on the Optimal Number of Air Tanker for Patrol Operations (초계작전을 위한 공중급유기 적정 대수 산정 연구)

  • Park, Sehoon;Chung, Ui-Chang;Chung, Je-Hoon
    • Journal of the Korea Society for Simulation / v.28 no.1 / pp.57-65 / 2019
  • Air refueling is expected to increase the efficiency of air force operations. This follows from the introduction of air refueling aircraft, which should increase operational time by extending the range and endurance of fighter jets. Despite the effectiveness of air refueling aircraft, the astronomical cost of acquiring air tankers calls for careful discussion on whether to acquire any aircraft and, if so, how many. However, to our knowledge there is no academic study on the subject. Thus, we use the ABM (Agent Based Modeling) technique to calculate the optimal number of air tankers for patrol operations. We have enhanced the reliability of the simulation by entering the specifications of the aircraft currently operated by the Korean Air Force. As an optimization tool, we use OptQuest, which is built into the simulation tool, and show that the optimal number of air tankers is four.
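The simulation-optimization loop described above can be sketched as follows. The coverage model, refueling probability, and requirement threshold below are illustrative assumptions, not the paper's actual ABM or OptQuest setup:

```python
import random

def patrol_coverage(n_tankers, n_jets=8, sorties=1000, seed=0):
    """Toy agent-based estimate of patrol coverage (fraction of patrol
    demand met). In each sortie, a jet needs refueling with some
    probability; a free tanker extends its patrol, otherwise the jet
    returns to base. The probabilities are illustrative assumptions."""
    rng = random.Random(seed)
    covered = 0.0
    for _ in range(sorties):
        tankers_free = n_tankers
        on_station = 0
        for _ in range(n_jets):
            if rng.random() < 0.6:        # jet needs refueling mid-patrol
                if tankers_free > 0:
                    tankers_free -= 1     # a tanker services the jet
                    on_station += 1
            else:
                on_station += 1           # jet completes patrol unaided
        covered += on_station / n_jets
    return covered / sorties

def optimal_tanker_count(requirement=0.95, max_tankers=10):
    """Smallest tanker fleet whose expected coverage meets the
    requirement (a stand-in for OptQuest's search over the decision
    variable)."""
    for n in range(max_tankers + 1):
        if patrol_coverage(n) >= requirement:
            return n
    return max_tankers
```

The real study evaluates candidate fleet sizes against a detailed agent-based patrol model; this sketch only shows the shape of that outer search.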

Data abnormal detection using bidirectional long-short neural network combined with artificial experience

  • Yang, Kang;Jiang, Huachen;Ding, Youliang;Wang, Manya;Wan, Chunfeng
    • Smart Structures and Systems / v.29 no.1 / pp.117-127 / 2022
  • Data anomalies seriously threaten the reliability of bridge structural health monitoring systems and may trigger system misjudgment. To overcome this problem, an efficient and accurate data anomaly detection method is needed. Traditional anomaly detection methods extract various abnormal features as the key indicators for identifying data anomalies, and thresholds are then set manually on those features to identify specific anomalies; this is the artificial-experience method. However, limited by its poor generalization ability across sensors, this method often leads to high labor costs. Another approach to anomaly detection is a data-driven approach based on machine learning. Among such methods, the bidirectional long short-term memory neural network (BiLSTM), as an effective classification method, excels at finding complex relationships in multivariate time series data. However, training on unprocessed original signals often leads to low computational efficiency and poor convergence, owing to the lack of appropriate feature selection. Therefore, this article combines the advantages of the two methods by feeding manually designed statistical features into a deep learning model. Experimental comparative studies illustrate that the BiLSTM model with appropriate feature input achieves an accuracy of 87-94%. Meanwhile, this paper provides basic principles of data cleaning and discusses the typical features of various anomalies. Furthermore, optimization strategies for feature space selection based on artificial experience are also highlighted.
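The "artificial experience" stage amounts to computing statistical features per signal window before classification. The abstract does not list the exact features used, so the four below (mean, standard deviation, peak-to-peak range, near-zero ratio) are plausible illustrative choices, not the paper's feature set:

```python
import statistics

def window_features(signal):
    """Statistical features for one window of a monitoring signal,
    to be fed into a classifier instead of the raw samples.
    The specific features here are illustrative assumptions."""
    mean = statistics.fmean(signal)
    std = statistics.pstdev(signal)
    ptp = max(signal) - min(signal)   # peak-to-peak range
    # Fraction of near-zero samples: a dead sensor yields a high value.
    zero_ratio = sum(abs(x) < 1e-6 for x in signal) / len(signal)
    return [mean, std, ptp, zero_ratio]
```

A constant-zero window (a typical "missing data" anomaly) then maps to a distinctive feature vector, which is what lets a downstream BiLSTM separate anomaly classes more easily than raw signals would.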

Development of Bridge Maintenance Method based on Life-Cycle Performance and Cost (생애주기 성능 및 비용에 기초한 교량 유지관리기법 개발)

  • Park, Kyung Hoon;Kong, Jung Sik;Hwang, Yoon Koog;Cho, Hyo Nam
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.6A / pp.1023-1032 / 2006
  • In this paper, a new method for bridge maintenance is proposed to overcome the limits of existing methods and to implement a preventive bridge maintenance system. The proposed method can establish the lifetime optimum maintenance strategy for deteriorating bridges considering the life-cycle performance as well as the life-cycle cost. The lifetime performance of deteriorating bridges is evaluated by a safety index based on structural reliability and a condition index detailing the condition state. The life-cycle cost is estimated by considering not only the direct maintenance cost but also the user and failure costs. A genetic algorithm is applied to generate a set of maintenance scenarios, a multi-objective combinatorial optimization problem over life-cycle cost and performance. The proposed method was examined by establishing a maintenance strategy for an existing bridge, demonstrating its advantages. The results show that the proposed method can be effectively applied in deciding a bridge maintenance strategy.
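A multi-objective GA over (life-cycle cost, performance) ultimately keeps only non-dominated scenarios. The dominance filter below is a minimal sketch of that selection step, assuming lower cost and higher performance are better; the scenario encoding and the GA operators themselves are not shown:

```python
def pareto_front(scenarios):
    """Non-dominated maintenance scenarios, each a
    (life_cycle_cost, performance_index) pair. A scenario is dominated
    if another is at least as good in both objectives and strictly
    better in one. Input order is preserved in the result."""
    front = []
    for c, p in scenarios:
        dominated = any(
            (c2 <= c and p2 >= p) and (c2 < c or p2 > p)
            for c2, p2 in scenarios
        )
        if not dominated:
            front.append((c, p))
    return front
```

For example, of the pairs (100, 0.9), (80, 0.9), (80, 0.7), (120, 0.95), only (80, 0.9) and (120, 0.95) survive: the first two are dominated by a scenario that is cheaper or performs better at no extra cost.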

Development of Biosignal-based Urban Air Mobility Emergency Response System (생체신호 기반 도심 항공 모빌리티 비상 대응 시스템 개발)

  • Gihong Ku;Jeongouk Lee;Hanseong Lim;Sungwook Cho
    • Journal of Aerospace System Engineering / v.18 no.1 / pp.99-107 / 2024
  • This paper introduces an emergency response system for urban air mobility scenarios. A biometric-responsive smartwatch was designed to monitor passengers' heart rates in real time. When an anomaly is detected, the system sends an alert via Morse code vibration and voice notification. The smartwatch was integrated with an assumed control system in the ROS environment, communicating with it to implement a system that generates the shortest emergency-landing path to a nearby vertiport during urban air mobility operations. System stability was verified through high-fidelity simulation environments and testing based on actual geographic locations. The technology improves the reliability and convenience of urban air mobility, demonstrating its effectiveness through simulations and tests in real-world scenarios.
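The Morse-code vibration alert can be sketched as an encoder that turns an alert word into timed vibration pulses. The 100 ms time unit and the letter set are illustrative choices, not taken from the paper:

```python
# International Morse code for a small alphabet (enough for short alerts).
MORSE = {
    "A": ".-", "E": ".", "L": ".-..", "O": "---",
    "R": ".-.", "S": "...", "T": "-",
}

def morse_pattern(word, unit_ms=100):
    """Vibration on/off durations in ms for a word: dot = 1 unit on,
    dash = 3 units on; negative entries are pauses (1 unit between
    symbols, 3 units between letters), matching standard Morse timing."""
    pattern = []
    for i, letter in enumerate(word.upper()):
        if i > 0:
            pattern.append(-3 * unit_ms)       # gap between letters
        for j, symbol in enumerate(MORSE[letter]):
            if j > 0:
                pattern.append(-unit_ms)       # gap between symbols
            pattern.append(unit_ms if symbol == "." else 3 * unit_ms)
    return pattern
```

A word like "ALERT" then expands into a pulse train a smartwatch vibration API could play back directly.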

Optimum Design of the Intake Tower of Reservoir -With Application of Strength Design Method- (저수지 취수탑의 최적설계에 관한 연구(II) -강도설계법을 중심으로-)

  • 김종옥;고재군
    • Magazine of the Korean Society of Agricultural Engineers / v.30 no.3 / pp.82-94 / 1988
  • A growing attention has been paid to the optimum design of structures in recent years. Most studies on the optimum design of reinforced concrete structures have focused mainly on structural members such as beams, slabs, and columns, and few studies deal with the optimum design of large-scale concrete shell structures. The purpose of the present investigation is, therefore, to set up an efficient optimum design method for large-scale reinforced concrete cylindrical shell structures such as reservoir intake towers. The major design variables are the dimensions and steel areas of each member of the structure. The construction cost, composed of the concrete, steel, and formwork costs, is taken as the objective function. The constraint equations for the design of the intake tower are derived on the basis of the strength design method. The results obtained are summarized as follows. 1. Efficient optimization algorithms were developed that can execute the automatic optimum design of a reinforced concrete intake tower based on the strength design method. 2. Since the objective function and design variables converged to their optimum values within the first or second iteration, the optimization algorithms developed in this study appear efficient and stable. 3. With the strength design method, the construction cost could be reduced by about 9% compared with the working stress design method, confirming the reliability of the algorithm. 4. The difference in construction cost between the optimum designs with substructures and with the entire structure was found to be small, so the optimum design with substructures may conveniently be used in practical design. 5. The major active constraints of each structural member were found to be the bending moment constraint for the slab, the minimum longitudinal steel ratio constraint for the tower body, and the shearing force, bending moment, and maximum eccentricity constraints for the footing. 6. The computer program developed in the present study can be used effectively, even by an inexperienced designer, for the optimum design of a reinforced concrete intake tower based on the strength design method.
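The structure of such a design problem (cost objective over member dimensions and steel areas, subject to strength constraints) can be sketched with a coarse grid search. Every unit price, capacity coefficient, and demand value below is a made-up placeholder, not the 1988 study's data or its actual algorithm:

```python
def member_cost(thickness_m, steel_ratio,
                c_concrete=90.0, c_steel=5.0, c_form=25.0):
    """Toy cost per unit wall area: concrete volume, steel weight,
    and formwork. All unit prices are illustrative assumptions."""
    concrete = c_concrete * thickness_m
    steel = c_steel * steel_ratio * thickness_m * 1000.0
    return concrete + steel + c_form

def feasible(thickness_m, steel_ratio, demand=250.0):
    """Stand-in strength constraint: section capacity must cover the
    demand. The linear capacity model is a placeholder for the real
    strength-design-method constraint equations."""
    capacity = 800.0 * thickness_m + 60000.0 * steel_ratio * thickness_m
    return capacity >= demand and 0.002 <= steel_ratio <= 0.04

def optimum_section():
    """Exhaustive search over a coarse grid of the two design
    variables, mirroring the automatic sizing loop in spirit only."""
    best = None
    for t in [0.20, 0.25, 0.30, 0.35, 0.40]:
        for rho in [0.002, 0.005, 0.01, 0.02, 0.04]:
            if feasible(t, rho):
                cost = member_cost(t, rho)
                if best is None or cost < best[0]:
                    best = (cost, t, rho)
    return best
```

The study's algorithm converges in one or two iterations rather than enumerating a grid; this sketch only shows the objective/constraint structure it optimizes.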


Determination and Verification of Flow Stress of Low-alloy Steel Using Cutting Test (절삭실험을 이용한 저합금강의 유동응력 결정 및 검증)

  • Ahn, Kwang-Woo;Kim, Dong-Hoo;Kim, Tae-Ho;Jeon, Eon-Chan
    • Journal of the Korean Society of Manufacturing Process Engineers / v.13 no.5 / pp.50-56 / 2014
  • A technique based on the finite element method (FEM) is used in the simulation of metal cutting processes. This offers the advantages of predicting the cutting force, stresses, temperature, tool wear, and residual stress of the surface, and of optimizing the cutting conditions and tool shape. However, the accuracy and reliability of the prediction depend on the flow stress of the workpiece. There are various models that describe the relationship between the flow stress and the strain; the Johnson-Cook model is a well-known material model capable of doing this. The low-alloy steel considered here was developed for a dry storage container for used nuclear fuel, so a process analysis of its plastic machining capability is necessary. For a plastic processing analysis of machining or forging, the five parameters of the Johnson-Cook model are determined in this paper as follows: (1) the strain-hardening modulus and strain-hardening exponent through a room-temperature tensile test; (2) the thermal softening exponent through a high-temperature tensile test; (3) the cutting forces through orthogonal cutting tests at various cutting speeds; (4) the strain-rate hardening modulus by comparing the orthogonal cutting test results with FEM results; and (5) finally, to validate the Johnson-Cook material parameters, a comparison of the room-temperature tensile test result with a quasi-static simulation using LS-DYNA.
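The Johnson-Cook flow stress model that these five parameters feed is standard and can be written out directly. The parameter values in the usage line are placeholders, not the measured values identified for this low-alloy steel:

```python
import math

def johnson_cook_stress(strain, strain_rate, temp,
                        A, B, n, C, m,
                        ref_rate=1.0, t_room=293.0, t_melt=1723.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps^n) * (1 + C*ln(rate/ref_rate)) * (1 - T*^m),
    with homologous temperature T* = (T - t_room) / (t_melt - t_room).
    A: yield stress; B, n: strain hardening; C: strain-rate hardening;
    m: thermal softening."""
    hardening = A + B * strain ** n
    rate_term = 1.0 + C * math.log(strain_rate / ref_rate)
    t_star = (temp - t_room) / (t_melt - t_room)
    softening = 1.0 - t_star ** m
    return hardening * rate_term * softening

# Placeholder parameters (MPa); not the paper's identified values.
sigma = johnson_cook_stress(strain=0.1, strain_rate=1.0, temp=293.0,
                            A=350.0, B=600.0, n=0.3, C=0.02, m=1.0)
```

At the reference strain rate and room temperature the rate and softening terms both reduce to 1, which is exactly why steps (1) and (2) can isolate the hardening and thermal parameters from tensile tests before the cutting tests pin down C.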

A Study on Standardization of Data Bus for Modular Small Satellite (모듈화 소형위성의 Data Bus 표준화 방안 연구)

  • Jang, Yun-Uk;Chang, Young-Keun
    • Journal of the Korean Society for Aeronautical &amp; Space Sciences / v.38 no.6 / pp.620-628 / 2010
  • Small satellites can be used for various space research, scientific, and educational purposes thanks to their small size, low cost, and rapid development, and they offer many advantages for responsive space applications. Compared to traditional larger satellites, however, small satellites are subject to many constraints due to their limited size, so high performance is difficult to expect. To approach maximum capability with minimal size, weight, and cost, a standard modular platform for small satellites is necessary. Modularity supports a plug-and-play architecture; the result is small satellites that can be combined quickly and reliably using plug-and-play mechanisms. For communication between modules, a standard bus interface is needed. The Controller Area Network (CAN) protocol is considered the optimum data bus for a modular small satellite: CAN provides highly reliable data communication, and design optimization and simplification can also be expected. For ease of assembly and integration, a modular design can be considered. This paper proposes a development method for standardized modular small satellites, and describes the design of a data interface based on CAN and a method of testing for modularity.
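CAN's reliability rests partly on non-destructive bitwise arbitration: identifiers are sent MSB-first, a dominant bit (0) overrides a recessive bit (1), so when several modules transmit at once the frame with the numerically lowest identifier wins without corrupting any frame. A minimal sketch of that rule (the identifiers are made-up module IDs, not from the paper):

```python
def arbitrate(pending_ids):
    """Winner of one CAN arbitration round: the numerically lowest
    identifier gains the bus; all losers back off intact and retry."""
    return min(pending_ids)

def arbitration_order(pending_ids):
    """Order in which pending frames gain the bus if every loser
    immediately retries (ignoring newly queued frames), i.e. a strict
    priority queue keyed on the identifier."""
    return sorted(pending_ids)
```

This is why, in a standard modular bus design, assigning low identifiers to critical subsystems (e.g. attitude control) gives them hard priority over housekeeping telemetry.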

A Modified EGEAS Model with Avoided Cost and the Optimization of Generation Expansion Plan (회피비용을 고려한 EGEAS 모형 개발과 전원개발계획의 최적화)

  • 이재관;홍성의
    • Korean Management Science Review / v.17 no.1 / pp.117-134 / 2000
  • Public utility industries, including the electric utility industry, are facing a new stream of privatization, competition with the private sector, and deregulation. The necessity of solving current and future power supply-and-demand problems has been growing, through a sophisticated generation expansion planning (GEP) approach that considers not only KEPCO's supply-side resources but also outside resources such as non-utility generation (NUG) and demand-side management (DSM). Under the current environment of the electric utility industry, a new approach is needed to acquire multiple resources competitively. This study presents the development of a modified Electric Generation Expansion Analysis System (EGEAS) model with avoided cost, based on the existing EGEAS model, a dynamic program for developing an optimal generation expansion plan for the electric utility. We find the optimal GEP for Korea's case using our modified model and observe the differences in reliability levels, such as the reserve margin (RM), loss of load probability (LOLP), and expected unserved energy percent (EUEP), between the existing EGEAS model and our model. In addition, we calculate the avoided cost for NUG resources, which is a criterion for evaluating them, and test the possibility of coupling the avoided-cost calculation with GEP implementation using our modified model. The results of our case study are as follows. First, the generation expansion plan and reliability measures were largely influenced by the capacity size and loading status of NUG resources. Second, avoided costs, the criteria for evaluating NUG resources, could be calculated by our modified EGEAS model in connection with generation expansion plans.
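One of the reliability indices compared above, loss of load probability (LOLP), can be illustrated with a small capacity-outage enumeration. The unit data and load levels below are made-up numbers, not KEPCO data, and real planning models use recursive convolution rather than full enumeration:

```python
from itertools import product

def lolp(units, loads):
    """Loss of load probability: the expected fraction of load periods
    in which available capacity falls short of load. Each unit is a
    (capacity_mw, forced_outage_rate) pair; unit outages are assumed
    independent and every load period is weighted equally."""
    shortfall_prob = 0.0
    for period_load in loads:
        p_short = 0.0
        # Enumerate every up/down combination of the units.
        for state in product([0, 1], repeat=len(units)):
            prob, capacity = 1.0, 0.0
            for (cap, forced_out), up in zip(units, state):
                prob *= (1.0 - forced_out) if up else forced_out
                capacity += cap * up
            if capacity < period_load:
                p_short += prob
        shortfall_prob += p_short
    return shortfall_prob / len(loads)
```

With two 100 MW units each having a 10% forced outage rate and a 150 MW load, the system is short unless both units are up, giving LOLP = 1 - 0.9 × 0.9 = 0.19.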


A Study on Selecting the Optimal Location of BTB HVDC for Reducing Fault Current in Metropolitan Regions Based on Genetic Algorithm Using Python (Python을 이용한 유전 알고리즘 기반의 수도권 고장전류 저감을 위한 BTB HVDC 최적 위치 선정 기법에 관한 연구)

  • Song, Min-Seok;Kim, Hak-Man;Lee, Byung Ha
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.8 / pp.1163-1171 / 2017
  • Fault currents that exceed the rated capacity of a circuit breaker can cause serious accidents and hurt the reliability of the power system. To solve this issue, current-limiting reactors and circuit breakers with increased capacity are used, but these solutions have technical limitations. Back-to-back high-voltage direct current (BTB HVDC) may be applied to reduce the fault current. When BTB HVDCs are installed for fault current reduction, selecting the optimal location of the BTB HVDC without causing line power overloads becomes the key point. In this paper, we use a genetic algorithm to find the optimal location effectively in a short time, and propose a new methodology for determining the optimal BTB HVDC location to reduce fault current without causing line power overloads in metropolitan areas. The procedure of calculating fault currents and line power flows with PSS/E is automated using Python. A case study shows that this optimization methodology can be applied effectively for determining the optimal BTB HVDC location to reduce fault current without causing line power overloads.
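The genetic-algorithm placement search can be sketched as below. The candidate buses, the per-bus score tables, and the GA settings are illustrative stand-ins for the paper's PSS/E-based fault-current and power-flow evaluation:

```python
import random

# Toy per-bus scores: fault-current reduction if a BTB HVDC is placed
# there, and a line-overload penalty. Both tables are made-up numbers
# standing in for PSS/E fault and power-flow results.
REDUCTION = [5.0, 9.0, 4.0, 7.0, 6.0, 8.0]   # kA reduced per candidate bus
OVERLOAD = [0.0, 12.0, 0.0, 1.0, 0.0, 10.0]  # penalty for overloaded lines

def fitness(chromosome):
    """Net benefit of a binary chromosome marking which candidate
    buses receive a BTB HVDC."""
    return sum(g * (REDUCTION[i] - OVERLOAD[i])
               for i, g in enumerate(chromosome))

def genetic_search(n_bits=6, pop_size=20, generations=40, seed=1):
    """Plain GA: tournament selection, one-point crossover, bit-flip
    mutation. Each fitness call stands in for a PSS/E evaluation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)   # tournament
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if rng.random() < 0.1:                     # bit-flip mutation
                j = rng.randrange(n_bits)
                child[j] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

The appeal of the GA here is exactly what the abstract claims: each fitness evaluation (a full PSS/E fault and power-flow run in the paper) is expensive, so a population-based search over bus subsets beats exhaustive enumeration.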

Assessing the Vulnerability of Network Topologies under Large-Scale Regional Failures

  • Peng, Wei;Li, Zimu;Liu, Yujing;Su, Jinshu
    • Journal of Communications and Networks / v.14 no.4 / pp.451-460 / 2012
  • Natural disasters often lead to regional failures that can cause network nodes and links co-located in a large geographical area to fail. Novel approaches are required to assess network vulnerability under such regional failures. In this paper, we investigate the vulnerability of networks by considering the geometric properties of regional failures and network nodes. To evaluate the criticality of node locations and determine the critical areas in a network, we propose the concept of the α-critical-distance for a given failure impact ratio α, and we formulate two optimization problems based on this concept. By analyzing the geometric properties of the problems, we show that although finding critical nodes or links in a pure graph is an NP-complete problem, the problem of finding critical areas has polynomial time complexity. We propose two algorithms for these problems and analyze their time complexities. Using real city-level Internet topology data, we conducted experiments to compute the α-critical-distances for different networks. The computational results demonstrate the differences in the vulnerability of different networks, and also indicate that the critical area of a network can be estimated by limiting failure centers to the locations of network nodes. Additionally, we find that with the same impact ratio α, the topologies examined have larger α-critical-distances when network performance is measured using the giant component size instead of the other two metrics. Similar results are obtained when network performance is measured using the average two-terminal reliability and the network efficiency, although computing the former entails less time complexity than computing the latter.
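The giant-component metric under a circular regional failure can be computed directly: remove every node within radius r of the failure center and measure the largest surviving connected component. This sketch shows only that evaluation step, on a toy geometric topology, not the paper's α-critical-distance algorithms:

```python
import math
from collections import deque

def giant_component_size(nodes, edges, center, radius):
    """Largest connected component after a circular regional failure.
    nodes: {name: (x, y)}; edges: iterable of (u, v); every node within
    `radius` of `center` fails along with its incident links."""
    dead = {n for n, (x, y) in nodes.items()
            if math.hypot(x - center[0], y - center[1]) <= radius}
    alive = set(nodes) - dead
    adj = {n: set() for n in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    best, seen = 0, set()
    for start in alive:          # BFS over each unvisited component
        if start in seen:
            continue
        comp, queue = 0, deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            comp += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, comp)
    return best
```

Sweeping the radius for a fixed center until the metric drops below a fraction α of its original value is then one way to read off an α-critical-distance for that center.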