• Title/Summary/Keyword: Trade-Off Analysis


Effects of Intermediate Heat Treatment on the Corrosion and Mechanical Properties of Zr Alloy Strip Incorporating Nb (니오븀이 첨가된 Zr 합금 스트립의 부식 및 기계적 특성에 대한 중간열처리 영향)

  • Lee, Myung Ho; Jung, Yang Il; Choi, Byoung Kwon; Park, Sang Yoon; Kim, Hyun Gil; Park, Jeong Yong; Jeong, Yong Hwan
    • Korean Journal of Metals and Materials, v.47 no.8, pp.482-487, 2009
  • In order to investigate the effects of intermediate heat treatment between cold-rolling passes on the hardness and corrosion properties of a Nb-containing Zr alloy (Zr-1.49Nb-0.38Sn-0.20Fe-0.11Cr) strip, three different intermediate heat-treatment processes (580°C × 4 h, 600°C × 2 h, and 620°C × 1 h) were designed based on a recrystallization map and an accumulated annealing parameter. Samples from the different processes were examined by hardness testing, corrosion testing, and microstructure analysis, and appropriate heat-treatment conditions were proposed accordingly. The sample given the 580°C × 4 h intermediate heat treatment was harder than those given 600°C × 2 h and 620°C × 1 h, and its corrosion resistance was also superior to that of the other specimens. Considering the trade-off between hardness and corrosion resistance, an intermediate heat treatment of 600°C × 2 h is proposed to improve the manufacturing process of the alloy strip.
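The accumulated annealing parameter mentioned above is conventionally computed as A = Σ t_i · exp(−Q/(R·T_i)) over the heat-treatment steps. A minimal sketch, assuming the Q/R ≈ 40,000 K value commonly quoted for Zr alloys (the paper's exact constants are not given here, and the function name is mine):

```python
import math

def accumulated_annealing_parameter(steps, q_over_r=40_000.0):
    """Accumulated annealing parameter A = sum of t * exp(-Q/(R*T)).

    steps    : iterable of (temperature_celsius, time_hours) pairs
    q_over_r : activation term Q/R in kelvin; 40,000 K is a value
               commonly quoted for Zr alloys and is an assumption here.
    """
    return sum(t_h * math.exp(-q_over_r / (t_c + 273.15))
               for t_c, t_h in steps)

# The three candidate schedules from the abstract:
for temp_c, hours in [(580, 4), (600, 2), (620, 1)]:
    a = accumulated_annealing_parameter([(temp_c, hours)])
    print(f"{temp_c} C x {hours} h -> A = {a:.3e}")
```

Because the exponential term dominates, a modest temperature increase outweighs a halved soak time, which is why schedules at different temperatures and times can be compared on a single annealing-parameter scale.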

A Study on the Efficient Operation of Harpoon Missile Maintenance Personnel Using Simulation Model (시뮬레이션을 활용한 효율적인 하푼 유도탄 정비인력 운영 연구)

  • Choi, Youngjae; Ma, Jungmok
    • Journal of the Korea Society for Simulation, v.30 no.1, pp.65-73, 2021
  • The maintenance of guided missiles typically requires efficient management of spare parts and maintenance time. This study classifies Harpoon guided-missile maintenance teams by skill level and quantitatively analyzes the impact of skill level, through maintenance time, on operational availability. A simulation model is constructed and analyzed based on real Harpoon maintenance data. The simulation results show the trade-off between maintenance time and operational availability, and the model is expected to help inform maintenance policies for guided missiles.
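The maintenance-time/availability trade-off above can be illustrated with a toy Monte Carlo model of operational availability Ao = uptime / (uptime + downtime). The function name and the exponential time assumptions are mine, not the paper's (the study used real Harpoon maintenance data and a skill-based team structure):

```python
import random

def operational_availability(mean_time_between_failures_h,
                             mean_repair_time_h,
                             n_cycles=10_000, seed=1):
    """Crude Monte Carlo estimate of Ao = uptime / (uptime + downtime).

    Failure and repair times are drawn from exponential distributions,
    purely for illustration of how shorter maintenance time (e.g. a
    more skilled team) raises operational availability.
    """
    rng = random.Random(seed)
    up = down = 0.0
    for _ in range(n_cycles):
        up += rng.expovariate(1 / mean_time_between_failures_h)
        down += rng.expovariate(1 / mean_repair_time_h)
    return up / (up + down)

print(operational_availability(500, 50))   # slower maintenance team
print(operational_availability(500, 20))   # faster maintenance team
```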

Assessment of Historical Earthquake Magnitudes and Epicenters Using Ground Motion Simulations (지진동 모사를 통한 역사지진 규모와 진앙 평가)

  • Kim, Seongryong; Lee, Sang-Jun
    • Journal of the Earthquake Engineering Society of Korea, v.25 no.2, pp.59-69, 2021
  • Historical records of earthquakes are generally used as a basis for extrapolating the instrumental earthquake catalog in time and space in probabilistic seismic hazard analysis (PSHA). However, because the input parameters of historical catalogs are determined from historical descriptions rather than from quantitative measurements, they introduce considerable uncertainty into PSHA, so quantitative assessment of the historical earthquake parameters is essential for refining the reliability of PSHA. This study presents an approach, and its application, for constraining reliable ranges of the magnitude and corresponding epicenter of historical earthquakes. First, ranges (rather than specific values) of ground-motion intensities are estimated at multiple, mutually distant locations for selected historical earthquakes by reviewing the co-seismic natural phenomena, structural damage levels, or felt areas described in the historical records. Based on specific objective criteria, this study selects a single earthquake (July 24, 1643), potentially one of the largest historical earthquakes. Ground-motion simulations are then performed for epicenters distributed sufficiently broadly on a regular grid, to avoid relying on strong assumptions. Peak ground accelerations and velocities calculated in areas with historical descriptions of the corresponding earthquake are converted to intensities with an empirical ground motion-intensity conversion equation and compared with those descriptions. For the ground-motion simulation, ground-motion prediction equations and a frequency-wavenumber method are used to account for the effects of possible source mechanisms and stress drop. From these quantitative calculations, reliable ranges of epicenter and magnitude, and the trade-off between them, are inferred for the earthquake such that they conservatively match the upper and lower bounds of the intensity values from the historical descriptions.
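The conversion step described above, from simulated peak ground motion to intensity, typically uses an empirical ground motion-intensity conversion equation (GMICE). A sketch using the Wald et al. (1999) PGA-to-MMI relation, which may well differ from the equation actually used in the paper; the coefficients are quoted from memory and should be checked before any real use:

```python
import math

def pga_to_mmi(pga_cm_s2):
    """Convert peak ground acceleration (cm/s^2) to Modified Mercalli
    Intensity via the Wald et al. (1999) relation
    MMI = 3.66 * log10(PGA) - 1.66, valid roughly for MMI V-VIII.
    Illustrative only; not necessarily the paper's GMICE."""
    return 3.66 * math.log10(pga_cm_s2) - 1.66

# PGA values simulated over a grid of trial epicenters can be mapped
# to intensities and compared with historically described damage:
for pga in (50, 100, 200):
    print(f"PGA {pga} cm/s^2 -> MMI {pga_to_mmi(pga):.1f}")
```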

Development of Fitness and Interactive Decision Making in Multi-Objective Optimization (다목적 유전자 알고리즘에 있어서 적합도 평가방법과 대화형 의사결정법의 제안)

  • Yeboon Yun; Dong Joon Park; Min Yoon
    • Journal of Korean Society of Industrial and Systems Engineering, v.45 no.4, pp.109-117, 2022
  • Most real-world decision-making processes optimize problems with many conflicting objectives. Since improving some objectives requires sacrificing others, the objectives cannot all be optimized simultaneously; consequently, Pareto solutions are the natural candidate solutions of a multi-objective optimization problem (MOP). Such a problem involves two main procedures: finding Pareto solutions and choosing one solution among them. So-called multi-objective genetic algorithms have proved effective for finding many Pareto solutions. In this study, we suggest a fitness evaluation method based on the achievement level relative to a target value, to improve the solution-search performance of the multi-objective genetic algorithm. Using numerical examples and benchmark problems, we compare the proposed method with conventional Pareto ranking methods and verify that it can generate a highly convergent and diverse solution set. Most existing multi-objective genetic algorithms focus mainly on finding solutions; however, the ultimate aim of an MOP is not to find the entire set of Pareto solutions but to choose one among the many obtained. We therefore further propose an interactive decision-making process based on a visualized trade-off analysis that incorporates the satisfaction of the decision maker. The findings of the study will serve as a reference for building a multi-objective decision-making support system.
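One common way to score an "achievement level relative to a target value" is an achievement scalarizing function over a target (aspiration) vector. The sketch below is a standard Chebyshev-type formulation for a minimization problem, not necessarily the paper's exact fitness:

```python
def achievement_level(objectives, targets, scales=None):
    """Chebyshev-type achievement scalarizing value for minimization:
    max_i (f_i - t_i) / s_i.  Smaller is better; a negative value
    means every target is already met.  This is a textbook
    achievement scalarizing function, offered only as an
    interpretation of the abstract's 'achievement level'."""
    if scales is None:
        scales = [1.0] * len(objectives)
    return max((f - t) / s for f, t, s in zip(objectives, targets, scales))

# Two candidate solutions against targets (10, 5):
print(achievement_level([12, 4], [10, 5]))  # misses objective 1
print(achievement_level([9, 4], [10, 5]))   # meets both targets
```

Ranking a GA population by this value rewards individuals by how close they come to (or how far they exceed) the decision maker's targets, rather than by Pareto rank alone.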

CONNECTING TECHNOLOGY, INDUSTRY AND RESEARCH: A VERTICAL INTEGRAL PROJECT COURSE FOR BIM EDUCATION OPPORTUNITIES

  • F. H. (Bud) Griffis; Mei Liu; Andrew Bates
    • International conference on construction engineering and project management, 2013.01a, pp.252-259, 2013
  • Building Information Modeling (BIM) utilizes CAD technology in a way that ultimately ties all the components of a building together as objects embedded with information, and it has been changing the way we design and build over the last 20-30 years. At the Polytechnic Institute of NYU, four BIM courses provide students with different levels of knowledge of BIM techniques, BIM standards, BIM guidelines and roadmaps for private and public implementation, BIM application in real projects, and the cooperation of BIM and IPD for public works in New York City. With advanced BIM technology, BIM's integration into the construction process and its incorporation into project delivery systems, especially Integrated Project Delivery (IPD), form the bridges between technology, industry, and research. This paper presents an integrated BIM curriculum with three modules: 1) BIM Functions and Bid Preparation; 2) Time-Cost Trade-off Analysis; and 3) Problem Solving in a BIM/IPD Environment. In this project-based curriculum, developed through the joint efforts of academia, a public agency, and industry, the objectives are: (1) to provide the information and skills needed to successfully implement BIM in the construction phase; (2) to identify BIM's role in construction and the project delivery system; (3) to develop a module on bringing BIM into the project delivery system, particularly coordination between BIM and IPD; and (4) to connect technology and research to industry. A course assessment was conducted, and the results indicate a successful reform in construction management education.
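The Time-Cost Trade-off Analysis module above rests on classic CPM crashing: repeatedly shorten the activity with the lowest cost slope until the target duration is met. A sketch for the simplified case of a serial activity chain (function and field names are mine, and a real schedule would need critical-path re-analysis after each step):

```python
def crash_serial_project(activities, target_duration):
    """Greedy time-cost trade-off ("crashing") for a purely serial
    activity chain, where every activity is critical.  Real CPM
    crashing must re-identify the critical path after every cut;
    the serial assumption keeps this sketch short.

    Each activity is a dict with normal_time, crash_time,
    normal_cost, crash_cost.  Returns (duration, total_cost).
    """
    duration = sum(a["normal_time"] for a in activities)
    cost = sum(a["normal_cost"] for a in activities)

    def slope(a):  # extra cost per unit of time saved
        return ((a["crash_cost"] - a["normal_cost"])
                / (a["normal_time"] - a["crash_time"]))

    for act in sorted(activities, key=slope):  # cheapest savings first
        crashable = act["normal_time"] - act["crash_time"]
        cut = min(crashable, max(0, duration - target_duration))
        duration -= cut
        cost += cut * slope(act)
    return duration, cost

acts = [
    {"normal_time": 5, "crash_time": 3, "normal_cost": 100, "crash_cost": 140},
    {"normal_time": 4, "crash_time": 2, "normal_cost": 80, "crash_cost": 160},
]
print(crash_serial_project(acts, 6))  # crash from 9 days down to 6
```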


Montgomery Multiplier with Very Regular Behavior

  • Yoo-Jin Baek
    • International Journal of Internet, Broadcasting and Communication, v.16 no.1, pp.17-28, 2024
  • As the resistance to various side-channel attacks is listed among the most important requirements of the Post-Quantum Cryptography standardization process by the National Institute of Standards and Technology, it is considered critical when deploying cryptosystems in practice. Indeed, cryptosystems can easily be broken by side-channel attacks even when they are secure from a mathematical point of view. The timing attack (TA) and the simple power analysis attack (SPA) are side-channel methods that can reveal sensitive information by analyzing the timing behavior or the power-consumption pattern of cryptographic operations; appropriate countermeasures must therefore be considered early in a cryptosystem's implementation process. The Montgomery multiplier is a classical, widely used gadget for implementing big-number-based cryptosystems, including RSA and ECC, and since big-number multiplication has also been proposed as a building block for post-quantum schemes such as lattice-based cryptography, the Montgomery multiplier still plays a role in modern cryptography. Despite its effectiveness and wide adoption, however, the multiplier is known to be vulnerable to TA and SPA. This paper proposes a new countermeasure for the Montgomery multiplier against TA and SPA. Briefly, the new measure first represents a multiplication operand without 0 digits, so the resulting multiplication behaves in a very regular manner. The new algorithm also removes the extra final reduction (intrinsic to modular multiplication) to make the multiplier more timing-independent. Consequently, the resulting multiplier operates in constant time, removing the TA and SPA vulnerabilities entirely. Since the proposed method can process multiple bits at a time, implementers can also trade off performance against resource usage to obtain desirable implementation characteristics.
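For context, the baseline the paper hardens is the classic word-serial Montgomery multiplication, whose data-dependent final subtraction is one source of the timing leak. A textbook sketch (without the paper's countermeasure), in Python for readability:

```python
def montgomery_multiply(a, b, n, w=32):
    """Word-serial Montgomery multiplication: returns a*b*R^{-1} mod n,
    where R = 2^(w*k) and k is the number of w-bit words in n.
    Requires odd n and 0 <= a, b < n.  Textbook version: note the
    data-dependent final subtraction on the last line, which the
    paper's countermeasure is designed to eliminate.
    Uses 3-argument pow for the modular inverse (Python 3.8+)."""
    k = -(-n.bit_length() // w)          # ceil(bits / w) words
    base = 1 << w
    n0_inv = pow(n, -1, base)            # n^{-1} mod 2^w
    t = 0
    for i in range(k):
        a_i = (a >> (w * i)) & (base - 1)
        t += a_i * b
        m = ((t & (base - 1)) * (base - n0_inv)) % base  # m = -t*n^{-1} mod 2^w
        t = (t + m * n) >> w             # t stays < b + n < 2n
    return t - n if t >= n else t        # the extra final reduction

# To get a plain a*b mod n, feed the result back with R^2 mod n:
n = 101
r2 = (1 << 16) % n                       # R^2 mod n for w=8, k=1
t = montgomery_multiply(55, 77, n, w=8)  # = 55*77*R^{-1} mod n
print(montgomery_multiply(t, r2, n, w=8) == (55 * 77) % n)  # True
```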

Analysis of the Process Capability Index According to the Sample Size of Multi-Measurement (다측정 표본크기에 대한 공정능력지수 분석)

  • Lee, Do-Kyung
    • Journal of Korean Society of Industrial and Systems Engineering, v.42 no.1, pp.151-157, 2019
  • This study concerns the process capability index (PCI). We introduce several indices, including C_PR, and present its characteristics and validity. The difference between C_PR and the other indices lies in how the standard deviation is estimated: most indices use the sample standard deviation, while C_PR uses the range R. The sample standard deviation is generally a better estimator than the range, but for the panel process, C_PR is more consistent than the other indices with respect to the non-conforming ratio, an important quantity in quality control. The better consistency of the range-based C_PR is explained by introducing the concept of a 'flatness ratio'. At least one million cells are present in one panel, so not all of them can be inspected. In estimating the PCI, the inspection cost must therefore be considered together with consistency: a smaller sample size lowers inspection cost but makes the PCI unreliable, so there is a trade-off between inspection cost and the accuracy of the PCI, and one should use as large a sample as the allowed inspection cost permits. For C_PR to be used throughout the industry, its characteristics need to be analyzed; because C_PR is an index built on subgroups, the analysis should focus on the subgroup sample size. We present numerical results for C_PR on randomly generated data and show the difference between the range-based C_PR and C_P, a representative index using the sample standard deviation. Regression analysis was used for the numerical analysis of the sample data, along with residual analysis and equal-variance analysis.
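The two estimation routes the abstract compares can be sketched directly: Cp = (USL − LSL)/(6σ̂), with σ̂ taken either from the sample standard deviation or from the mean subgroup range divided by the standard d2 constant. The paper's exact definition of C_PR may differ from this textbook range-based version:

```python
import statistics

D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}  # standard d2 constants

def cp_sample_sd(samples, lsl, usl):
    """Cp with sigma estimated by the sample standard deviation,
    pooling all subgroup measurements (the conventional route)."""
    flat = [x for subgroup in samples for x in subgroup]
    return (usl - lsl) / (6 * statistics.stdev(flat))

def cp_range(samples, lsl, usl):
    """Cp with sigma estimated as (mean subgroup range) / d2 -- the
    range-based approach behind the paper's C_PR.  Textbook estimator;
    the paper's exact index definition may differ."""
    n = len(samples[0])
    r_bar = statistics.mean(max(s) - min(s) for s in samples)
    return (usl - lsl) / (6 * r_bar / D2[n])

# Three subgroups of five multi-measurements each (illustrative data):
samples = [[1, 2, 3, 4, 5], [2, 3, 4, 3, 2], [1, 3, 5, 3, 1]]
print(cp_sample_sd(samples, -5, 11))
print(cp_range(samples, -5, 11))
```

The two estimates diverge as subgroup size and within-subgroup spread change, which is exactly the subgroup-sample-size effect the paper analyzes.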

DEVELOPMENT OF SAFETY-BASED LEVEL-OF-SERVICE CRITERIA FOR ISOLATED SIGNALIZED INTERSECTIONS (독립신호 교차로에서의 교통안전을 위한 서비스수준 결정방법의 개발)

  • Dr. Tae-Jun Ha
    • Proceedings of the KOR-KST Conference, 1995.02a, pp.3-32, 1995
  • The Highway Capacity Manual specifies procedures for evaluating intersection performance in terms of delay per vehicle. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections, based on the relative hazard of alternative intersection designs and signal timing plans. Conflict opportunity models were developed for the crossing, diverging, and stopping maneuvers associated with left-turn and rear-end accidents. Safety-based level-of-service criteria were then developed from the distribution of conflict opportunities computed with these models. A case-study evaluation of the methodology revealed that the developed safety-based criteria were not as sensitive as the traditional delay-based measure to changes in prevailing traffic, roadway, and signal timing conditions. However, the methodology did permit a quantitative assessment of the trade-off between delay reduction and safety improvement. The Highway Capacity Manual (HCM) specifies procedures for evaluating intersection performance under a wide variety of prevailing conditions, such as traffic composition, intersection geometry, traffic volumes, and signal timing (1). At present, however, performance is measured only in terms of delay per vehicle, a parameter widely accepted as a meaningful and useful indicator of the efficiency with which an intersection serves traffic needs. What is lacking is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. For example, it is well known that changing from permissive to protected left-turn phasing can reduce left-turn accident frequency. However, the HCM only permits a quantitative assessment of the impact of this alternative phasing arrangement on vehicle delay. It is left to the engineer or planner to judge the safety benefits subjectively and to evaluate the trade-off between the efficiency and safety consequences of the alternative phasing plans. Numerous other examples of geometric design and signal timing improvements could also be given. At present, the principal methods available to the practitioner for evaluating the relative safety of signalized intersections are: a) engineering judgement, b) accident analyses, and c) traffic conflicts analysis. Reliance on engineering judgement has obvious limitations, especially when placed against the elaborate HCM procedures for calculating delay. Accident analyses generally require some type of before-after comparison, either for the case-study intersection or for a large set of similar intersections; in either situation, there are problems in compensating for regression-to-the-mean phenomena (2) and in obtaining an adequate sample size. Research has also pointed to potential bias caused by the way exposure to accidents is measured (3, 4). Because of these problems with traditional accident analyses, some have promoted the traffic conflicts technique (5); however, this procedure also has shortcomings, in that it requires extensive field data collection and trained observers to identify the different types of conflicts occurring in the field. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections that would be compatible and consistent with the HCM's existing efficiency-based level of service as measured by delay per vehicle (6). The intent was not to develop a new set of accident prediction models, but to design a methodology to quantitatively predict the relative hazard of alternative intersection designs and signal timing plans.


An Analysis of the Changes in the Housing Instability by the Residential Mobility of Low-Income Households (주거이동을 통한 주거 불안정성 변화에 관한 연구 -저소득층을 대상으로 하여-)

  • Noh, Seung-Chul; Lee, Hee-Yeon
    • Journal of the Economic Geographical Society of Korea, v.12 no.4, pp.507-520, 2009
  • The purpose of this study is to analyze changes in the housing instability of low-income households through their residential mobility. Housing instability is measured by taking into consideration housing type, number of moves, period of homelessness, and a housing affordability index. The results show that the housing instability of low-income owner households stems mainly from old housing built in 1980 or earlier, while that of tenant households is due to the heavy burden of the rent-to-income ratio. Categorizing residential mobility into four types (upward, equivalent, trade-off, and downward migration) and using a multinomial logit model, the study finds that low-income tenant households are more likely to move upward when they are male-headed, older, and relatively higher-income. Considering that the share of homeowners moving downward increases while the share of tenants moving upward decreases the closer they reside to Seoul, low-income households living in big cities are in a worse position to improve their residential stability on their own than those in small and medium local cities. Furthermore, both low-income owners and tenants are less likely to move downward as the share of single-family housing in their former residence increases. This finding has the policy implication that the government needs to maintain an affordable single-family housing stock, rather than supplying excessive, unaffordable multi-family housing, in order to reduce the residential instability of low-income households.
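The four-way classification above is fit with a multinomial logit, i.e. choice probabilities are a softmax over linear utilities. A sketch with hypothetical utilities (the paper's estimated coefficients are not reproduced here):

```python
import math

def multinomial_logit_probs(utilities):
    """Multinomial logit: P(choice k) = exp(V_k) / sum_j exp(V_j),
    the model used for the four mobility outcomes (upward, equivalent,
    trade-off, downward).  Utilities here are illustrative linear
    scores, not estimated coefficients."""
    m = max(utilities)                      # subtract max for stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities for one household:
labels = ["upward", "equivalent", "trade-off", "downward"]
for lab, p in zip(labels, multinomial_logit_probs([0.8, 0.2, -0.1, -0.5])):
    print(f"{lab}: {p:.3f}")
```

In estimation, each utility V_k would be a linear function of household covariates (head's sex, age, income, etc.), with coefficients interpreted relative to a base category.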


Performance analysis and operation simulation of the beamforming antenna applied to cellular CDMA basestation (셀룰러 CDMA 기지국에 beamforming 안테나를 적용하기 위한 동작 시뮬레이션 및 성능해석에 관한 연구)

  • Park, Jae-Jun; Bae, Byeong-Jae; Jang, Tae-Gyu
    • Journal of the Institute of Electronics Engineers of Korea TC, v.37 no.2, pp.32-45, 2000
  • This paper presents an analytic derivation of the SINR when a linear array antenna is incorporated into a cellular CDMA basestation receiver, in relation to the two major performance-determining factors in beamforming (BF) applications: the direction selectivity, i.e., the narrowness of the main-beam width, and the direction-of-arrival (DOA) estimation accuracy. The analytically derived results are compared with an operational simulation of the receiver realized with several BF algorithms; their agreement confirms the correctness of both the analysis and the simulation. To separately investigate the effects of errors in direction estimation and in interference suppression, the two major functional components of general BF algorithms, both the steering BF and the minimum-variance-distortionless-response (MVDR) BF algorithms are applied in the analysis. A signal model reflecting the spatial scattering of the RF waves entering the array antenna, which directly affects the accuracy of the BF algorithm's direction estimation, is also suggested and applied to the analysis and the simulation. The results confirm that enhancing the direction selectivity of the array antenna is not desirable from the viewpoints of both implementation economy and the BF algorithm's robustness to error sources. This trade-off characteristic is significant in that it can be exploited to obtain an economical BF implementation that does not severely degrade performance while remaining robust to error effects; the analysis results of this paper can thus serve as a design reference in developing BF algorithms for cellular CDMA systems.
