• Title/Summary/Keyword: construction engineers


An Evaluation on Quality of Field Trial Protocol using Pay Factor and Analysis of Fatigue Life (지불계수를 이용한 시험포장구간의 품질평가와 피로수명 분석)

  • Lee, Jae-Hack;Rhee, Suk-Keun;Kim, Seong-Min;Hwang, Sang-Min
    • International Journal of Highway Engineering / v.11 no.4 / pp.133-142 / 2009
  • This research evaluates the quality of a trial pavement section and analyzes its effect on fatigue life using the pay factor. Asphalt content, which is difficult to control during paving, is selected as the pay adjustment standard factor, and the pay factor is calculated from measured asphalt contents. The relationship between asphalt content and fatigue life is also analyzed, and the quality of the road pavement is evaluated by calculating pay factors for trial-section mixtures sampled in two rounds. The results confirm that pavement quality differs as the pay factor changes. The fatigue life of the pavement was then analyzed using the asphalt mixture from the trial section; a higher pay factor corresponded to a longer fatigue life, which indicates that the probability-based pay factor reflects the fatigue life of the road pavement. This study also includes beam fatigue tests on specimens manufactured with the same mix type as the plant that supplied the asphalt mixture to the trial section, compared against the fatigue life of the trial section itself. The fatigue life of the plant-mixed specimens was higher than that of the trial-section specimens, which suggests that pavement performance can be reduced by gradation or other effects. Therefore, to evaluate the quality of road pavement accurately, the pay factor should be calculated by applying various pay adjustment standard factors, such as gradation and air voids, as is done in the U.S. states that have adopted pay adjustment.
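Pay factors of this kind are commonly computed from a percent-within-limits (PWL) estimate of the sampled asphalt contents. The sketch below is a generic illustration: the spec limits, sample values, and linear pay schedule are hypothetical assumptions, not the paper's actual pay adjustment standard.

```python
from statistics import NormalDist, mean, stdev

def percent_within_limits(samples, lsl, usl):
    """Estimate the percent of material within the spec limits,
    assuming asphalt-content measurements are normally distributed."""
    nd = NormalDist(mean(samples), stdev(samples))
    return 100.0 * (nd.cdf(usl) - nd.cdf(lsl))

def pay_factor(pwl):
    """Illustrative linear pay schedule (hypothetical numbers, not the
    paper's schedule): PF = 55 + 0.5 * PWL, capped at 105 percent."""
    return min(55.0 + 0.5 * pwl, 105.0)

# One round of sampled asphalt contents (%) from a trial section,
# with a hypothetical target of 5.3% and tolerance of +/-0.4%
samples = [5.2, 5.4, 5.1, 5.3]
pwl = percent_within_limits(samples, lsl=4.9, usl=5.7)
pf = pay_factor(pwl)
```

Tighter control of asphalt content raises the PWL and hence the pay factor, which is the mechanism the study links to fatigue life.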

A Development of concrete Pavement Material with Low Shrinkage and Reflection, High Strength and Performance (저수축 저반사 고강도 고내구성 콘크리트 포장재료 개발)

  • Kim, Hyo-Sung;Nam, Jeong-Hee;Eum, Ju-Yong;Cho, Yoon-Ho
    • International Journal of Highway Engineering / v.11 no.1 / pp.13-24 / 2009
  • This study developed a high-strength, high-performance concrete pavement material with low shrinkage and low reflection of sunlight. Based on a literature review, a new mix design applying fly ash is suggested to improve the strength and performance of the concrete as well as to reduce drying shrinkage. In addition, black pigment is added to reduce reflection, and the OAG (Optimized Aggregate Gradation) technique is applied. The laboratory experiments indicate that the brightness and reflection, which depend on the black pigment ratio, did not deviate from the normal range. When OAG was considered in the mix design, the strength and performance of the concrete improved greatly. The mix design using fly ash also reduced the drying shrinkage of the concrete and improved its resistance to chloride ion penetration. Furthermore, the mix design using fly ash (25% replacement) and black pigment (3% addition) with OAG was found to be the most effective at reducing shrinkage and reflection while improving the strength and performance of the concrete. An economic analysis indicates that the initial construction cost of the proposed mix is higher than that of normal concrete pavement material; however, it can be more economical in the long run, because normal concrete pavement is likely to cost more owing to a higher probability of maintenance and repair and higher social costs from traffic accidents.

Road Accident Trends Analysis with Time Series Models for Various Road Types (도로종류별 교통사고 추세분석 및 시제열 분석모형 개발)

  • Han, Sang-Jin;Kim, Kewn-Jung
    • International Journal of Highway Engineering / v.9 no.3 / pp.1-12 / 2007
  • Roads in Korea can be classified into four types according to their responsible authorities. For example, Motorways are constructed, managed, and operated by the Korea Highway Corporation; the Ministry of Construction and Transportation is in charge of National Highways; Province Roads are run by each provincial government; and Urban/County Roads are run by the corresponding local governments. This study analyzes the trend of road accidents for each road type. For this purpose, the numbers of accidents, fatalities, and injuries are compared across road types for the last 15 years. The results show that Urban/County Roads are the most dangerous and Motorways the safest when the raw numbers of accidents, fatalities, and injuries are compared. However, when these numbers are divided by total road length, National Highways become the most dangerous and Province Roads the safest. In terms of accidents, fatalities, and injuries per vehicle-kilometer, which is regarded as the most objective comparison measure, National Highways are again the most dangerous. This study also developed time series models to estimate the trend of fatalities for each road type. These models will be useful for setting or evaluating national road safety targets.
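The normalization the study applies (raw counts versus rates per road length and per vehicle-kilometer) can be illustrated as follows; all figures are hypothetical stand-ins, not the paper's data.

```python
# Hypothetical illustrative figures (not the paper's data), showing why
# the danger ranking can flip once exposure is taken into account.
roads = {
    # road type: (accidents/yr, road length in km, vehicle-km/yr in millions)
    "Motorway":         (6_000,  4_000, 60_000),
    "National Highway": (25_000, 14_000, 40_000),
    "Province Road":    (15_000, 18_000, 30_000),
}

for name, (acc, length_km, mveh_km) in roads.items():
    per_km = acc / length_km      # accidents per km of road
    per_mvkm = acc / mveh_km      # accidents per million vehicle-km
    print(f"{name}: {per_km:.2f} per km, {per_mvkm:.3f} per Mveh-km")
```

With these numbers the Motorway has the fewest raw accidents, yet the National Highway is worst on both exposure-normalized rates, mirroring the kind of reversal the study reports.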

n-Gram/2L: A Space and Time Efficient Two-Level n-Gram Inverted Index Structure (n-gram/2L: 공간 및 시간 효율적인 2단계 n-gram 역색인 구조)

  • Kim Min-Soo;Whang Kyu-Young;Lee Jae-Gil;Lee Min-Jae
    • Journal of KIISE:Databases / v.33 no.1 / pp.12-31 / 2006
  • The n-gram inverted index has two major advantages: it is language-neutral and error-tolerant. Due to these advantages, it has been widely used in information retrieval and in similar-sequence matching for DNA and protein databases. Nevertheless, the n-gram inverted index also has drawbacks: its size tends to be very large, and query performance tends to be poor. In this paper, we propose the two-level n-gram inverted index (simply, the n-gram/2L index), which significantly reduces the size and improves query performance while preserving the advantages of the n-gram inverted index. The proposed index eliminates the redundancy of position information that exists in the n-gram inverted index. It is constructed in two steps: 1) extracting subsequences of length m from documents and 2) extracting n-grams from those subsequences. We formally prove that this two-step construction is identical to the relational normalization process that removes the redundancy caused by a non-trivial multivalued dependency. The n-gram/2L index has excellent properties: 1) it significantly reduces the size and improves the performance compared with the n-gram inverted index, and these improvements become more marked as the database size grows; 2) the query processing time increases only very slightly as the query length gets longer. Experimental results using 1 GByte databases show that the size of the n-gram/2L index is reduced by up to 1.9~2.7 times and, at the same time, query performance is improved by up to 13.1 times compared with the n-gram inverted index.
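The two-step construction described in the abstract can be sketched as follows. The overlap rule and the toy in-memory front index are illustrative assumptions; the actual index stores its two levels as inverted-index structures.

```python
def subsequences(doc, m, n):
    """Step 1: split the document into subsequences of length m;
    consecutive subsequences overlap by n - 1 characters so that no
    n-gram spanning a boundary is lost."""
    step = m - (n - 1)
    return [doc[i:i + m] for i in range(0, max(len(doc) - (n - 1), 1), step)]

def ngrams(s, n):
    """Step 2: extract the n-grams of a subsequence."""
    return [s[i:i + n] for i in range(len(s) - n + 1)]

# Front index maps each n-gram to (subsequence id, offset); a back
# index (subsequence id -> document offsets) completes the two levels.
doc, m, n = "abcdefgh", 4, 2
subs = subsequences(doc, m, n)        # ['abcd', 'defg', 'gh']
front = {}
for sid, s in enumerate(subs):
    for off, g in enumerate(ngrams(s, n)):
        front.setdefault(g, []).append((sid, off))
```

Because each n-gram's position is recorded relative to a short subsequence rather than the whole document, repeated n-grams do not repeat document-level position lists, which is the redundancy the normalization argument removes.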

A Classification and Extraction Method of Object Structure Patterns for Framework Hotspot Testing (프레임워크 가변부위 시험을 위한 객체 구조 패턴의 분류 및 추출 방법)

  • Kim, Jang-Rae;Jeon, Tae-Woong
    • Journal of KIISE:Software and Applications / v.29 no.7 / pp.465-475 / 2002
  • An object-oriented framework supports efficient component-based software development by providing a flexible architecture that can be decomposed into easily modifiable and composable classes. Object-oriented frameworks require thorough testing, as they are intended to be reused repeatedly in developing numerous applications. Furthermore, additional testing is needed each time the framework is modified and extended for reuse. To test a framework, it must be instantiated into a complete, executable system. It is, however, practically impossible to test a framework exhaustively against all kinds of framework instantiations, as the possible systems into which a framework can be configured are infinitely diverse. If the possible configurations of a framework can be classified into a finite number of groups such that all configurations in a group share the same structural or behavioral characteristics, all significant test cases for framework testing can be covered effectively by choosing a representative configuration from each group. This paper proposes a systematic method of classifying the object structures of a framework hotspot and extracting structural test patterns from them. It also presents how to select an instance of an object structure from each extracted test pattern for use in hotspot testing. The method is useful for selecting optimal test cases and systematically constructing executable test targets.

Analytical Formula for the Equivalent Mohr-Coulomb Strength Parameters Best-fitting the Generalized Hoek-Brown Criterion in an Arbitrary Range of Minor Principal Stress (임의 최소주응력 구간에서 일반화된 Hoek-Brown 파괴기준식을 최적 근사하는 등가 Mohr-Coulomb 강도정수 계산식)

  • Lee, Youn-Kyou
    • Tunnel and Underground Space / v.29 no.3 / pp.172-183 / 2019
  • The generalized Hoek-Brown (GHB) failure criterion developed by Hoek et al. (2002) is a nonlinear function defining the stress condition at failure of a rock mass. The relevant strength parameter values are systematically determined from the GSI value. Since the GSI index quantifies the condition of the in-situ rock mass, the GHB criterion is a practical failure condition that can take the quality of the in-situ rock mass into consideration. Considering that most rock mechanics engineers are familiar with the linear Mohr-Coulomb criterion and that many rock engineering software packages incorporate it, equations for the equivalent friction angle and cohesion were proposed along with the release of the GHB criterion. The proposed equations, however, fix the lower limit of the minor principal stress range over which the linear best fitting is performed at the tensile strength of the rock mass. Therefore, if tensile stress is not expected in the analysis domain, the equivalent friction angle and cohesion calculated by the equations of Hoek et al. (2002) can be less accurate. To overcome this disadvantage of the existing equations, this study derives an analytical formula that calculates the optimal equivalent friction angle and cohesion over any minor principal stress interval, and the accuracy of the derived formula is verified.
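The idea of best-fitting a line to the GHB curve over an arbitrary minor-principal-stress interval can be sketched numerically. Note this is a plain least-squares fit with assumed rock-mass parameter values, not the paper's closed-form analytical formula.

```python
import numpy as np

def ghb_sigma1(s3, sci, mb, s, a):
    """Generalized Hoek-Brown criterion:
    sigma1 = sigma3 + sci * (mb * sigma3 / sci + s) ** a"""
    return s3 + sci * (mb * s3 / sci + s) ** a

def equivalent_mc(sci, mb, s, a, s3_min, s3_max, npts=200):
    """Fit sigma1 = k * sigma3 + b over the chosen sigma3 interval,
    then convert slope and intercept to Mohr-Coulomb parameters via
    k = (1 + sin phi) / (1 - sin phi), b = 2 c cos phi / (1 - sin phi)."""
    s3 = np.linspace(s3_min, s3_max, npts)
    k, b = np.polyfit(s3, ghb_sigma1(s3, sci, mb, s, a), 1)
    phi = np.arcsin((k - 1.0) / (k + 1.0))
    c = b * (1.0 - np.sin(phi)) / (2.0 * np.cos(phi))
    return np.degrees(phi), c

# Illustrative rock-mass parameters (not taken from the paper)
phi_deg, coh = equivalent_mc(sci=30.0, mb=2.0, s=0.004, a=0.51,
                             s3_min=0.0, s3_max=5.0)
```

Changing `s3_min` and `s3_max` shows how strongly the equivalent friction angle and cohesion depend on the stress interval chosen for the fit, which is the motivation for an interval-flexible formula.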

The Research of Layout Optimization for LNG Liquefaction Plant to Save the Capital Expenditures (LNG 액화 플랜트 배치 최적화를 통한 투자비 절감에 관한 연구)

  • Yang, Jin Seok;Lee, Chang Jun
    • Korean Chemical Engineering Research / v.57 no.1 / pp.51-57 / 2019
  • The plant layout problem has a large impact on the overall construction cost of a plant. When determining a plant layout, various constraints concerning safety, the environment, sufficient maintenance areas, passages for workers, etc., have to be considered together. In general plant layout problems, the main goal is to minimize the length of the piping connecting equipment while satisfying these constraints: since the process suffers from heat and friction losses, the piping runs between equipment should be short. The problem can be expressed as a mathematical formulation, and optimal solutions can be sought with an optimization solver. However, previous studies have overlooked many constraints, such as maintenance spaces and safety distances between equipment, and have tested only benchmark processes, so realistic comparisons are lacking. In this study, the plant layout of a real industrial C3MR (propane-precooled mixed refrigerant) process is studied. A MILP (Mixed Integer Linear Programming) model including various constraints is developed, and penalty functions are introduced to avoid constraint violations. Conventional optimization solvers, which handle the derivatives of an objective function, cannot solve this problem because of the complexity of the equations. Therefore, PSO (Particle Swarm Optimization), which searches for optimal solutions without derivative information, is selected. The results show that the proposed method contributes to saving capital expenditures.
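A minimal sketch of the derivative-free approach described above: PSO minimizing total piping length with a safety-distance constraint enforced by a penalty function. The all-pairs objective, coordinate bounds, and PSO coefficients are illustrative assumptions, not the paper's C3MR model.

```python
import random

random.seed(0)  # reproducible sketch

def penalized_cost(coords, min_dist=5.0, penalty=1e4):
    """Total pairwise piping length plus a penalty whenever two pieces
    of equipment violate the safety/maintenance distance. A real model
    would sum only over equipment pairs that actually share piping."""
    cost = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            dx = coords[i][0] - coords[j][0]
            dy = coords[i][1] - coords[j][1]
            d = (dx * dx + dy * dy) ** 0.5
            cost += d
            if d < min_dist:
                cost += penalty * (min_dist - d)
    return cost

def pso(n_equip=4, n_particles=30, iters=200, bound=50.0):
    """Minimal particle swarm: a particle is a flat vector of (x, y)
    coordinates; velocities are pulled toward personal and global bests
    without using any derivative information."""
    dim = 2 * n_equip
    f = lambda v: penalized_cost([(v[2 * i], v[2 * i + 1]) for i in range(n_equip)])
    X = [[random.uniform(0, bound) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(dim):
                V[p][d] = (0.7 * V[p][d]
                           + 1.5 * random.random() * (pbest[p][d] - X[p][d])
                           + 1.5 * random.random() * (gbest[d] - X[p][d]))
                X[p][d] = min(max(X[p][d] + V[p][d], 0.0), bound)
            if f(X[p]) < f(pbest[p]):
                pbest[p] = X[p][:]
        gbest = min(pbest + [gbest], key=f)[:]
    return gbest, f(gbest)

layout, total_cost = pso()
```

The penalty term steers the swarm away from layouts that violate the minimum spacing, which is how the paper avoids infeasible solutions without giving the solver explicit constraint derivatives.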

Dual CNN Structured Sound Event Detection Algorithm Based on Real Life Acoustic Dataset (실생활 음향 데이터 기반 이중 CNN 구조를 특징으로 하는 음향 이벤트 인식 알고리즘)

  • Suh, Sangwon;Lim, Wootaek;Jeong, Youngho;Lee, Taejin;Kim, Hui Yong
    • Journal of Broadcast Engineering / v.23 no.6 / pp.855-865 / 2018
  • Sound event detection is one of the research areas that model human auditory cognition by recognizing events in an environment with multiple acoustic events and determining the onset and offset time of each event. DCASE, a research community for acoustic scene classification and sound event detection, runs challenges to encourage the participation of researchers and to stimulate sound event detection research. However, the dataset provided by the DCASE Challenge is relatively small compared with ImageNet, the representative dataset for visual object recognition, and there are few open acoustic datasets. In this study, sound events that can occur indoors and outdoors were collected on a larger scale and annotated to construct a dataset. Furthermore, to improve performance on the sound event detection task, we developed a dual CNN structured sound event detection system that adds a supplementary neural network to a convolutional neural network to determine the presence of sound events. Finally, we conducted comparative experiments against the baseline systems of both DCASE 2016 and DCASE 2017.

Deep Learning Structure Suitable for Embedded System for Flame Detection (불꽃 감지를 위한 임베디드 시스템에 적합한 딥러닝 구조)

  • Ra, Seung-Tak;Lee, Seung-Ho
    • Journal of IKEEE / v.23 no.1 / pp.112-119 / 2019
  • In this paper, we propose a deep learning structure suitable for embedded systems. The flame detection process of the proposed structure consists of four steps: flame area detection using a flame color model, flame image classification using a deep learning structure specialized for flame color, $N{\times}N$ cell separation of the detected flame area, and flame image classification using a deep learning structure specialized for flame shape. First, only flame-colored pixels are extracted from the input image and labeled to detect candidate flame areas. Second, each detected area is fed into the color-specialized deep learning structure and is classified as a flame image only if the probability of the flame class at the output exceeds 75%. Third, the detected flame regions of images classified below 75% in the preceding step are divided into $N{\times}N$ cells. Fourth, the cells are fed into the shape-specialized deep learning structure, each cell is judged to be flame or not, and the region is classified as a flame image if more than 50% of its cells are classified as flame. To verify the effectiveness of the proposed structure, we experimented with a flame database drawn from ImageNet. The experimental results show that the proposed structure has an average resource occupancy of 29.86% and a fast flame detection time of 8 seconds. The flame detection rate is on average 0.95% lower than that of the existing deep learning structure, but this is a consequence of the lightweight construction needed for embedded systems. Therefore, the deep learning structure for flame detection proposed in this paper is shown to be suitable for embedded system applications.
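Steps 3 and 4 of the pipeline can be sketched as follows, with the shape-specialized CNN replaced by a trivial stand-in classifier (an assumption for illustration only; the paper uses a trained network here).

```python
def split_cells(image, n):
    """Step 3: divide a detected flame region (2D list of pixels) into n x n cells."""
    h, w = len(image), len(image[0])
    ch, cw = h // n, w // n
    return [[[row[x * cw:(x + 1) * cw] for row in image[y * ch:(y + 1) * ch]]
             for x in range(n)]
            for y in range(n)]

def vote_flame(cells, classify, threshold=0.5):
    """Step 4: classify every cell and call the region a flame image
    when the fraction of flame cells reaches the threshold (50%)."""
    flat = [cell for row in cells for cell in row]
    votes = sum(1 for cell in flat if classify(cell))
    return votes / len(flat) >= threshold

# Stand-in classifier (the paper uses a shape-specialized CNN here):
# a cell counts as "flame" if it contains any flame-colored pixel.
is_flame_cell = lambda cell: sum(sum(row) for row in cell) > 0

region = [[1, 1, 0, 0],
          [1, 1, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
print(vote_flame(split_cells(region, 2), is_flame_cell))  # 1 of 4 cells -> False
```

The per-cell vote is what lets a lightweight classifier recover regions that the whole-image color stage scored below the 75% threshold.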

Comparative Experiment of 2D and 3D DCT Point Cloud Compression (2D 및 3D DCT를 활용한 포인트 클라우드 압축 비교 실험)

  • Nam, Kwijung;Kim, Junsik;Han, Muhyen;Kim, Kyuheon;Hwang, Minkyu
    • Journal of Broadcast Engineering / v.26 no.5 / pp.553-565 / 2021
  • A point cloud is a set of points representing a 3D object, consisting of geometric information (3D coordinates) and attribute information (color, reflectance, and the like). Represented this way, it contains a vast amount of data compared with a 2D image, so point cloud data must be compressed before it can be transmitted or used in various fields. Unlike a 2D image, where color information corresponds to every 2D coordinate, a point cloud carries attribute information such as color in only part of the 3D space, so separate processing of the geometric information is also required. Based on these characteristics, MPEG under ISO/IEC has standardized V-PCC, which converts point clouds into 2D images and compresses them with 2D DCT-based image codecs, as a compression method for high-density point cloud data. This approach has limitations in accurately representing 3D spatial information, since compression proceeds by converting the 3D point cloud to 2D, while a direct 3D DCT raises the difficulty of handling non-existent points. Therefore, in this paper we present 3D Discrete Cosine Transform-based Point Cloud Compression (3DCT PCC), a method that compresses point cloud data as a 3D image using the 3D DCT, and we confirm the efficiency of the 3D DCT compared with the 2D DCT-based V-PCC.
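A minimal sketch of the 3D DCT at the heart of such a codec, built from a separable orthonormal DCT-II. The toy 8x8x8 block, the zero-filling of empty voxels, and the quantization step are illustrative assumptions, not the paper's actual 3DCT PCC design.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def dct3(block):
    """Separable 3D DCT: apply the 1D transform along every axis."""
    D = dct_matrix(block.shape[0])
    return np.einsum("ia,jb,kc,abc->ijk", D, D, D, block)

def idct3(coeffs):
    """Inverse 3D DCT (D is orthonormal, so its transpose inverts it)."""
    D = dct_matrix(coeffs.shape[0])
    return np.einsum("ia,jb,kc,ijk->abc", D, D, D, coeffs)

# Voxelize a toy attribute signal (one color channel, say) into an
# 8x8x8 block. Occupied voxels get values; empty voxels are filled
# with zeros, which is one simple answer to the "non-existent points"
# problem the abstract raises for 3D DCT.
block = np.zeros((8, 8, 8))
for x, y, z, v in [(1, 2, 3, 200.0), (4, 4, 4, 120.0), (7, 0, 2, 90.0)]:
    block[x, y, z] = v

coeffs = dct3(block)                 # energy compacts into few coefficients
quantized = np.round(coeffs / 10.0)  # uniform quantization, step 10
recon = idct3(quantized * 10.0)      # decoder side
```

Because the transform operates directly on the 3D grid, no projection to 2D patches is needed, at the cost of having to code the many zero-filled empty voxels.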