• Title/Summary/Keyword: Cost-efficient

Search Result 4,141, Processing Time 0.036 seconds

A Study on the Analysis of Current Situation of Safety Inspections cost in Apartment houses (공동주택 안전점검대가 현황분석에 관한 연구)

  • Go, Seong-Seok;Song, Do-Heom;Yun, Yeong-Chae
    • Korean Journal of Construction Engineering and Management / v.15 no.2 / pp.62-70 / 2014
  • An apartment house, a private facility, is a building in which many people live independently. Because of this density of occupants, it is the type of building in which a safety accident caused by insufficient inspection can lead to the greatest loss of life. Nevertheless, inspections are often carried out only as a formality, owing to the managing bodies' low awareness of safety inspection and the low-price contracts that diagnosing enterprises win through lowest-bid tendering. This matters because considerable cost goes into repair and reinforcement work such as structural maintenance, and a structural safety accident carries a high risk of human casualties. This paper therefore aims to identify points for improvement in the current criteria so that detailed inspections can be executed efficiently. As the method, the design prices and execution prices of 66 Class 2 facility buildings given detailed inspections over the recent three years in Gwangju Metropolitan City and Jeollanam-do were examined and analyzed, and the target buildings for inspection were selected through this process. For the 10 selected apartment houses, the detailed inspection reports and the calculation criteria for detailed inspection execution prices were evaluated, and points for improvement are presented through an analysis of the current problems.

Generation of Testability on High Density/Speed ATM MCM and Its Library Build-up using BCB Thin Film Substrate (고속/고집적 ATM Switching MCM 구현을 위한 설계 Library 구축 및 시험성 확보)

  • 김승곤;지성근;우준환;임성완
    • Journal of the Microelectronics and Packaging Society / v.6 no.2 / pp.37-43 / 1999
  • Modules of systems that require large-capacity, high-speed information processing are implemented as MCMs, which allow high-speed data processing and high-density circuit integration, and are widely applied in fields such as ATM, GPS, and PCS. We developed an ATM switching module consisting of three chips with 2.48 Gbps data throughput, built as 10 Cu/Photo-BCB layers in a 491-pin PBGA of size $48 \times 48 \textrm {mm}^2$. Technologies required for developing the MCM include extracting substrate/package design parameters through interconnect characterization to achieve the high-speed characteristics, thermal management of the high-density MCM, and the generation of testability, one of the most difficult issues in MCM development. For the ATM switching MCM, we extracted signaling delay, via characteristics, and crosstalk parameters through interconnect characterization on the MCM-D. For thermal management of 15.6 W under the high-density structure, we carried out a thermal analysis, formed 1,108 thermal vias through the substrate, and applied heat-proofing to the entire package so that it stays below $85^{\circ}C$. Lastly, to ensure testability, we verified the substrate through fine-pitch probing and applied the Boundary Scan Test (BST) to verify the complex packaging/assembly processes, through which we developed an efficient and cost-effective product.


Line Tracking Method of AGV using Sensor Fusion (센서융합을 이용한 AGV의 라인 트레킹 방법)

  • Jung, Kyung-Hoon;Kim, Jung-Min;Park, Jung-Je;Kim, Sung-Shin;Bae, Sun-Il
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.1 / pp.54-59 / 2010
  • This paper presents a guidance system for an AGV (autonomous guided vehicle): a localization technique using sensor fusion and a line-tracking technique using a virtual line. An existing AGV can drive only along a predetermined line, and the representative guidance systems of this kind are magnet-gyro guidance and wired guidance. However, these have high installation and maintenance costs, and the system is difficult to change when the working environment varies. To solve these problems, we built a localization system that fuses laser navigation with a gyro and encoders. Through sensor fusion, the system is robust against noise and flexible with respect to the working environment. For line tracking with laser navigation and no guide wire, we define a virtual line in software and design a driving controller based on the angle and distance differences between the AGV's position and the chosen virtual line. For the experiment, we used an AGV built by ourselves and repeated the line-tracking test in the same environment. As a result, the maximum distance error between the virtual line and the AGV's position was less than 49.93 mm, verifying that the proposed system is efficient for line tracking on an actual AGV.
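  The abstract states that the driving controller is based on the angle and distance differences relative to the virtual line, but gives no control law. A minimal sketch of a proportional steering law on those two errors, with hypothetical gains `k_dist` and `k_ang`:

```python
import math

def steering_command(pos, heading, line_p0, line_p1, k_dist=1.5, k_ang=2.0):
    """Steering rate from the lateral-distance and heading-angle errors
    relative to a virtual line from line_p0 to line_p1.
    The gains are illustrative assumptions, not the paper's values."""
    # Unit direction vector of the virtual line.
    dx, dy = line_p1[0] - line_p0[0], line_p1[1] - line_p0[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    # Signed lateral distance of the vehicle from the line (2-D cross product).
    rx, ry = pos[0] - line_p0[0], pos[1] - line_p0[1]
    lateral = ux * ry - uy * rx
    # Heading error relative to the line direction, wrapped to [-pi, pi].
    ang_err = (heading - math.atan2(dy, dx) + math.pi) % (2 * math.pi) - math.pi
    # Steer so as to reduce both errors simultaneously.
    return -(k_dist * lateral + k_ang * ang_err)
```

  On the line with the correct heading the command is zero; offset to the left of the line, the command turns negative (steering back toward the line).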

Preceding Research for Estimating the Maximal Fat Oxidation Point through Heart Rate and Heart Rate Variability (심박 및 심박변화를 통한 최대 지방 연소 시점의 추정)

  • Sim, Myeong-Heon;Kim, Min-Yong;Yoon, Chan-Sol;Chung, Joo-Hong;Noh, Yeon-Sik;Park, Sung-Bin;Yoon, Hyung-Ro
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.9 / pp.1340-1349 / 2012
  • Increasing fat oxidation through exercise is the recommended method for weight control. Previous studies have proposed increasing fat usage during steady-state exercise and at maximum exertion through aerobic training. However, such studies require additional gas-analysis equipment to measure subjects' caloric expenditure or gas exchange during exercise; this equipment is highly restrictive for the exerciser and substantially increases cost. Accordingly, we present a method of estimating the maximal fat oxidation point from changes in heart rate and in LF & HF, which reflect the autonomic nervous system, in order to induce fat-burning exercise less restrictively and more efficiently than existing methods. We conducted exercise stress tests on subjects with similar exercise capacities, measuring changes in fat oxidation with a gas analyzer while simultaneously recording ECG signals to detect the changes in heart rate and in LF & HF. Changes in heart rate and HRV during exercise were detected from the electrocardiographic signals alone, and the maximal fat oxidation point, which differs from person to person, was detected. The experiment was carried out on 16 healthy males using the Modified Bruce Protocol, one of the treadmill-based exercise stress test methods. For all subjects, fat oxidation during exercise exceeded 4 kcal/min at exercise intensities from about 5 to 10 minutes. The correlation between the maximal fat oxidation point obtained through gas analysis and the point at which 60% of the R-R interval changes derived from heart rate fall within the range of -0.01 to 0.01 s showed a Kendall coefficient of 0.855 and a Spearman's rho of 0.950, both significant at p < 0.01. Furthermore, for the changes in LF & HF, we determined the point where the normalized area value first equals that at the maximal fat oxidation point; the correlation here was 0.620 by Kendall and 0.780 by Spearman, both significant at p < 0.01.
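  The Kendall coefficients reported above are rank correlations. As an illustration only (not the authors' analysis code), a stdlib-only Kendall tau-a, assuming no tied observations:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs.
    Assumes no ties; x and y are equal-length sequences of observations."""
    conc = disc = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1          # pair ordered the same way in both series
        elif s < 0:
            disc += 1          # pair ordered oppositely
    n_pairs = len(x) * (len(x) - 1) / 2
    return (conc - disc) / n_pairs
```

  Perfectly concordant rankings give 1.0, perfectly reversed rankings give -1.0, matching the scale on which the 0.855 and 0.620 values above are reported.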

An Improved Estimation Model of Server Power Consumption for Saving Energy in a Server Cluster Environment (서버 클러스터 환경에서 에너지 절약을 위한 향상된 서버 전력 소비 추정 모델)

  • Kim, Dong-Jun;Kwak, Hu-Keun;Kwon, Hui-Ung;Kim, Young-Jong;Chung, Kyu-Sik
    • The KIPS Transactions:PartA / v.19A no.3 / pp.139-146 / 2012
  • In a server cluster environment, one way to save energy is to control server power according to traffic conditions, i.e., to determine the ON/OFF state of servers according to the energy usage of the data center and of each server. To do this, we need a way to estimate each server's energy. In this paper, we use a software-based power consumption estimation model, because it is more efficient than a hardware approach using a power meter in terms of energy and cost. The traditional software-based estimation model has a drawback: because it uses only the idle-status field of the CPU, it does not capture the computing status of servers well and therefore does not estimate power consumption effectively. In this paper, we present a CPU-field-based power consumption estimation model that uses the various status fields of the CPU to capture the CPU status of servers and the overall status of the system, estimating more accurately than the two traditional models (the CPU/disk/memory-utilization-based model and the CPU-idle-utilization-based model). We performed experiments using two PCs and compared the power consumption estimated by the model (software) with that measured by a power meter (hardware). The experimental results show that the traditional model has an average error rate of about 8-15%, whereas our proposed model has an average error rate of about 2%.
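  The abstract does not publish the fitted model. A minimal sketch of the general shape of a CPU-field-based estimate, a linear combination of per-field CPU time shares, where the idle power and the weights are hypothetical placeholders rather than the paper's fitted coefficients:

```python
def estimate_power(cpu_fields, p_idle=60.0, weights=None):
    """Estimate server power (watts) from CPU status-field utilizations
    (fractions in [0, 1], e.g. as derived from /proc/stat deltas).
    p_idle and the per-field weights are illustrative assumptions."""
    if weights is None:
        weights = {"user": 45.0, "system": 55.0, "iowait": 20.0}
    # Dynamic power grows with each field's share of CPU time.
    dynamic = sum(weights[f] * cpu_fields.get(f, 0.0) for f in weights)
    return p_idle + dynamic
```

  An all-idle server returns the baseline `p_idle`; a fully user-bound CPU adds the full `user` weight on top of it. In practice the coefficients would be fitted against power-meter readings, as in the paper's experiments.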

Incremental Generation of A Decision Tree Using Global Discretization For Large Data (대용량 데이터를 위한 전역적 범주화를 이용한 결정 트리의 순차적 생성)

  • Han, Kyong-Sik;Lee, Soo-Won
    • The KIPS Transactions:PartB / v.12B no.4 s.100 / pp.487-498 / 2005
  • Recently, attention has focused on decision-tree algorithms that can handle large datasets. However, because most of these algorithms process data in batch mode, they have to rebuild the tree from scratch whenever new data is added. A more efficient approach to reducing this rebuilding cost is to build the tree incrementally. Representative incremental tree-construction algorithms are BOAT and ITI, and most such algorithms use a local discretization method to handle numeric data types. However, because discretization requires sorted numeric data, when processing large datasets a global discretization method that sorts all data only once is more suitable than a local method that sorts at every node. This paper proposes an incremental tree-construction method that efficiently rebuilds the tree using a global discretization method to handle numeric data types. When new data is added, the categories influenced by that data must be recreated, and the tree structure must then be changed in accordance with the category changes. This paper proposes a method that extracts sample points and performs discretization on them to recreate categories efficiently, and uses confidence intervals and a tree-restructuring method to adjust the tree structure to the category changes. In this study, an experiment using a people database was conducted to compare the proposed method with the existing one using local discretization.
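  A minimal sketch of the global-discretization idea described above: one sort of sampled values yields equal-frequency cut points that every node then reuses. This is an illustration, not the paper's algorithm; in particular, the confidence-interval machinery it mentions is omitted:

```python
def global_cut_points(sample, n_bins):
    """Equal-frequency cut points from a single sort of sampled values.
    All tree nodes share these cut points, so sorting happens once
    globally rather than at every node."""
    s = sorted(sample)
    n = len(s)
    # Boundary between bin k-1 and k taken at the (k*n/n_bins)-th value.
    return [s[(k * n) // n_bins] for k in range(1, n_bins)]

def discretize(value, cuts):
    """Map a numeric value to its category index under the global cut points."""
    for i, c in enumerate(cuts):
        if value < c:
            return i
    return len(cuts)
```

  When new data arrives, only the sampled cut points need recomputing, after which the affected categories (and the corresponding subtrees) are adjusted.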

Effects of varying nursery phase-feeding programs on growth performance of pigs during the nursery and subsequent grow-finish phases

  • Lee, Chai Hyun;Jung, Dae-Yun;Park, Man Jong;Lee, C. Young
    • Journal of Animal Science and Technology / v.56 no.7 / pp.24.1-24.6 / 2014
  • The present study investigated the effects of nursery diets of varying durations, differing in their percentages of milk products, on the growth performance of pigs during the nursery phase (NP) and the subsequent grow-finish phase (GFP), to assess the feasibility of reducing the use of nursery diets containing costly milk products. A total of 204 21-d-old weanling female and castrated male pigs were assigned to one of three nursery phase-feeding programs, differing in the durations of the NP 1, NP 2, and GFP diets, which contained 20%, 7%, and 0% lactose and 35%, 8%, and 0% dried whey, respectively, in 6 pens (experimental units) for 33 d: HIGH (NP 1, 2, and 3 diets for 7, 14, and 12 d), MEDIUM (NP 2 and 3 for 14 and 19 d), and LOW (NP 2 and 3 and GFP 1 for 7, 14, and 12 d). Subsequently, 84 randomly selected pigs [14 pigs (replicates)/pen] were fed the GFP 1, 2, and 3 diets during d 54-96, 96-135, and 135-182 of age, respectively. The final body weight (BW) and average daily gain (ADG) of nursery pigs did not differ among the HIGH, MEDIUM, and LOW groups (14.8, 13.3, and 13.7 kg BW and 273, 225, and 237 g ADG, respectively). Average daily feed intake during the nursery phase was greater (p < 0.01) in the HIGH group than in the MEDIUM and LOW groups, whereas the gain:feed ratio did not differ across treatments. BW on d 182 and ADG during d 54-182 were greater in the HIGH and MEDIUM groups than in the LOW group (110.0, 107.6, and 99.6 kg BW, respectively; p < 0.01). Backfat thickness and carcass grade at slaughter on d 183 did not differ across treatments. In conclusion, the MEDIUM program may be inferior to the commonly used HIGH program in supporting nursery pig growth. Nevertheless, the former appears to be more efficient than the latter in production cost per market pig, whereas the LOW program is thought to be inefficient because of its negative effect on post-nursery growth.

Objective Reduction Approach for Efficient Decision Making of Multi-Objective Optimum Service Life Management (다목적 최적화 기반 구조물 수명관리의 효율적 의사결정을 위한 목적감소 기법의 적용)

  • Kim, Sunyong
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.2 / pp.254-260 / 2017
  • The service life of civil infrastructure needs to be maintained or extended through appropriate inspection and maintenance planning, which results from an optimization process. For service life management, multi-objective optimization can lead to more rational and flexible trade-off solutions than single-objective optimization. Recent investigations of the service life management of civil infrastructure have generally been based on minimizing life-cycle cost and maximizing structural performance, and various objectives for service life management have been developed over the last few decades using novel probabilistic concepts and methods. On the other hand, an increase in the number of objectives in a multi-objective optimization problem can cause difficulties in computational efficiency, visualization, and decision making. These difficulties can be overcome by using the objective reduction approach to identify the redundant and the essential objectives, improving efficiency in computation, visualization, and decision making. In this paper, multi-objective optimization with the objective reduction approach is applied to the service life management of concrete bridges. The results show that the four initial objectives can be reduced to two objectives for optimal service life management.
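  One common way to identify redundant objectives, used here purely as an illustration and not necessarily the paper's exact algorithm, is correlation-based reduction: an objective whose values across the Pareto set are almost perfectly correlated with an already-kept objective adds no trade-off information and can be dropped:

```python
def reduce_objectives(objective_columns, threshold=0.95):
    """Keep only non-redundant objectives. objective_columns[j] is the list
    of objective-j values over the Pareto-front solutions. An objective is
    dropped if its |Pearson correlation| with an earlier kept objective
    exceeds `threshold` (a hypothetical cutoff)."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5

    kept = []
    for j, col in enumerate(objective_columns):
        if all(abs(pearson(col, objective_columns[k])) < threshold for k in kept):
            kept.append(j)      # col carries trade-off information of its own
    return kept
```

  With two objectives perfectly correlated, the second is dropped, mirroring the abstract's four-to-two reduction.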

A Node Relocation Strategy of Trajectory Indexes for Efficient Processing of Spatiotemporal Range Queries (효율적인 시공간 영역 질의 처리를 위한 궤적 색인의 노드 재배치 전략)

  • Lim Duksung;Cho Daesoo;Hong Bonghee
    • Journal of KIISE:Databases / v.31 no.6 / pp.664-674 / 2004
  • The trajectory preservation property, which stores only one trajectory per leaf node, is the most important feature of index structures such as the TB-tree for retrieving objects' moving paths in spatio-temporal space. It performs well on trajectory-related queries such as navigational queries and combined queries. However, the MBRs of non-leaf nodes in the TB-tree contain large amounts of dead space, because trajectory preservation is achieved at the expense of the spatial locality of trajectories. As dead space increases, the overlap between nodes also increases, and thus the cost of classical range queries increases. We present a new split policy and entry relocation policies that improve range query performance without degrading performance on trajectory-related queries. To maximally reduce the dead space of a non-leaf node's MBR, the Maximal Area Reduction (MAR) policy is used as the split policy for non-leaf nodes. The entry relocation policy induces non-leaf nodes to exchange entries with each other in order to reduce the dead space in these nodes. We propose two algorithms for the entry relocation policy and evaluate the performance of the new algorithms against the TB-tree under a varying set of spatio-temporal queries.
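  The quantities driving the MAR split and relocation decisions can be sketched concretely. A simplified 2-D illustration (the paper's structures are spatio-temporal, and entry overlap is ignored here for brevity):

```python
def mbr_area(mbr):
    """Area of an axis-aligned MBR given as ((xmin, ymin), (xmax, ymax))."""
    (x0, y0), (x1, y1) = mbr
    return (x1 - x0) * (y1 - y0)

def dead_space(node_mbr, entry_mbrs):
    """Node MBR area not covered by its entries (entry overlap ignored,
    an illustrative simplification)."""
    return mbr_area(node_mbr) - sum(mbr_area(m) for m in entry_mbrs)

def area_reduction(node_mbr, entry_mbrs, removed):
    """Area freed if `removed` were relocated to another node: the node's
    MBR is recomputed over the remaining entries. This is the quantity a
    MAR-style policy would seek to maximize."""
    rest = [m for m in entry_mbrs if m != removed]
    xs = [c for (lo, hi) in rest for c in (lo[0], hi[0])]
    ys = [c for (lo, hi) in rest for c in (lo[1], hi[1])]
    new_mbr = ((min(xs), min(ys)), (max(xs), max(ys)))
    return mbr_area(node_mbr) - mbr_area(new_mbr)
```

  Relocating a far-flung entry lets the node's MBR shrink sharply, reducing dead space and hence the overlap that inflates range-query cost.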

The Study for Performance Analysis of Software Reliability Model using Fault Detection Rate based on Logarithmic and Exponential Type (로그 및 지수형 결함 발생률에 따른 소프트웨어 신뢰성 모형에 관한 신뢰도 성능분석 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.3 / pp.306-311 / 2016
  • Software reliability is an important issue in the software development process. Infinite-failure NHPP software reliability models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, software reliability cost models with logarithmic and exponential fault detection rates, based on observations from the software product testing process, were studied. A model that adds a new-fault probability to the Goel-Okumoto model, which is widely used in reliability problems, is presented; when the software is corrected or modified, it becomes a finite-failure non-homogeneous Poisson process model. To analyze the software reliability models with time-dependent fault detection rates, the parameters were estimated by maximum likelihood estimation from inter-failure time data. The logarithmic and exponential fault detection models were confirmed to be efficient in terms of reliability (the coefficient of determination is 80% or more), so they can be used as alternatives to conventional models in this field. Based on this paper, software developers should consider the life distribution, drawing on prior knowledge of the software, to identify failure modes; this can be of help in practice.
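  The Goel-Okumoto NHPP model referenced above is standard; its mean value function and failure intensity are:

```latex
m(t) = a\,(1 - e^{-bt})
           % expected number of faults detected by time t
\lambda(t) = m'(t) = a\,b\,e^{-bt}
           % failure intensity; a = expected total faults, b = per-fault detection rate
```

  The paper's variants replace the constant per-fault rate $b$ with logarithmic and exponential time-dependent fault detection rates.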