• Title/Summary/Keyword: cost method


The Effective Approach for Non-Point Source Management (효과적인 비점오염원관리를 위한 접근 방향)

  • Park, Jae Hong;Ryu, Jichul;Shin, Dong Seok;Lee, Jae Kwan
    • Journal of Wetlands Research / v.21 no.2 / pp.140-146 / 2019
  • In order to manage non-point sources, the paradigm of the system should be changed so that non-point source management is systematized from the beginning of land use and development. The method of national subsidy support and the operation plan for non-point source management areas need to be changed. To increase the effectiveness of non-point source reduction projects, a minimum support ratio should be provided, with additional support granted according to the performance of the local government. A new system should be established to evaluate the performance of non-point source reduction projects and to monitor their operational effectiveness. Related rules should be established that lead local governments to take responsible administration, so that they faithfully carry out reduction projects, achieve the planned results, and sustain the facilities through maintenance. Alternative solutions are needed for problems such as the use of a 100 μm filter in automatic sampling and analysis, the timely acquisition of water samples and analysis during rainfall, and the effective operation and management of the non-point source monitoring network. As alternatives, improving the performance of sampling and analysis equipment and operating a base station can be considered. In addition, countermeasures are needed if the pollutant reduction achieved by nationally subsidized non-point source reduction facilities is to be counted toward the development load of the TMDLs. As an alternative, part of the maintenance cost of a reduction facility could be supported as an incentive, depending on the amount of pollutants reduced.

A Study on the Development of High Sensitivity Collision Simulation with Digital Twin (디지털 트윈을 적용한 고감도 충돌 시뮬레이션 개발을 위한 연구)

  • Ki, Jae-Sug;Hwang, Kyo-Chan;Choi, Ju-Ho
    • Journal of the Society of Disaster Information / v.16 no.4 / pp.813-823 / 2020
  • Purpose: In order to maximize the stability and productivity of work through simulation prior to high-risk, high-cost tasks such as dismantling the facilities inside a reactor, we intend to use digital twin technology, which allows close control by simulating the specifications of the actual control equipment. Motion control errors, which can arise from the time gap between the precision control equipment and the simulation when applying digital twin technology, can cause hazards such as collisions between hazardous facilities and control equipment. Prior research is needed to eliminate and control these situations. Method: Unity 3D is currently the most popular engine used to develop simulations. However, there are control errors that can be caused by time correction within the Unity 3D engine. The error is expected in many environments and may vary depending on the development environment, such as system specifications. To demonstrate this, we developed a collision simulation using the Unity 3D engine, conducted collision experiments under various conditions, organized and analyzed the results, and derived tolerances for precision control equipment based on them. Result: In the collision simulation experiments, a time correction of 1/1000 second per internal engine function call produces a per-step distance error in the movement control of the colliding objects, and this distance error is proportional to the velocity of the collision. Conclusion: Remote dismantling simulators using digital twin technology should limit the speed of movement according to the required precision of the precision control devices in the hardware and software environment and under manual control. In addition, the system development environment, hardware specifications, the size of modeling data such as the simulated control equipment and facilities, the allowable error of the operational control equipment, and the speed required for the work must also be taken into account.
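The proportionality reported in the result above (distance error grows linearly with object speed for a fixed time-correction step) can be sketched in a few lines; the 1 ms step comes from the abstract, while the speeds are illustrative:

```python
def position_error(speed_m_s: float, time_step_s: float = 0.001) -> float:
    """Worst-case distance an object travels inside one uncorrected 1 ms step."""
    return speed_m_s * time_step_s

# Faster collision objects accumulate proportionally larger per-step errors.
for v in (0.1, 1.0, 10.0):
    print(f"speed={v:5.1f} m/s -> error per step = {position_error(v) * 1000:.3f} mm")
```

This is why the conclusion ties the allowable movement speed to the required control precision: halving the speed halves the per-step error.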

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho;Han, Seokmin
    • Journal of Internet Computing and Services / v.21 no.6 / pp.23-31 / 2020
  • This study was carried out to generate various images of railroad surfaces with random defects as training data, in order to improve defect detection. Defects on railroad surfaces are caused by various factors, such as friction between track binding devices and adjacent tracks, and can lead to accidents such as broken rails, so railroad maintenance for defects is necessary. Therefore, various studies on defect detection and inspection using image processing or machine learning on railway surface images have been conducted to automate railroad inspection and reduce maintenance costs. In general, the performance of image processing analysis methods and machine learning techniques is affected by the quantity and quality of data. For this reason, some studies require specific devices or vehicles that acquire images of the track surface at regular intervals in order to build a database of various railway surface images. In contrast, in this study, to reduce the operating cost of image acquisition, we constructed a 'Defective Railroad Surface Regeneration Model' by applying methods presented in related studies on the Generative Adversarial Network (GAN). We thus aimed to detect defects on the railroad surface even without a dedicated database. The constructed model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface, considering the ground truth of the railroad defects. The generated railroad surface images were used as training data for a defect detection network based on the Fully Convolutional Network (FCN). To validate performance, we clustered and divided the railroad data into three subsets: one subset of original railroad texture images and two subsets of other railroad surface texture images. In the first experiment, we used only the original texture images as the training set for the defect detection model. In the second experiment, we trained on generated images produced by combining the original images with a few railroad textures from the other images. Each defect detection model was evaluated with 'intersection over union (IoU)' and F1-score measures against the ground truths. As a result, the scores increased by about 10-15% when the generated images were used, compared to using only the original images. This shows that it is possible to detect defects using the existing data and a few different texture images, even for railroad surfaces for which no dedicated training database has been constructed.
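The two evaluation measures named in the abstract, IoU and F1-score on pixel masks against ground truth, can be computed as in this minimal sketch (the 2x3 toy masks are invented for illustration):

```python
import numpy as np

def iou_and_f1(pred: np.ndarray, truth: np.ndarray):
    """Pixel-wise IoU and F1 for binary defect masks (1 = defect pixel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()     # defect predicted and present
    fp = np.logical_and(pred, ~truth).sum()    # false alarm
    fn = np.logical_and(~pred, truth).sum()    # missed defect
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return iou, f1

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
iou, f1 = iou_and_f1(pred, truth)
print(f"IoU={iou:.3f}  F1={f1:.3f}")  # IoU=0.500  F1=0.667
```

Note that IoU and F1 are monotonic transformations of each other for binary masks (IoU = F1 / (2 - F1)), which is why both scores move together in such comparisons.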

A Machine Learning-based Total Production Time Prediction Method for Customized-Manufacturing Companies (주문생산 기업을 위한 기계학습 기반 총생산시간 예측 기법)

  • Park, Do-Myung;Choi, HyungRim;Park, Byung-Kwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.177-190 / 2021
  • Due to the development of fourth industrial revolution technology, efforts are being made to improve areas that humans cannot handle by utilizing artificial intelligence techniques such as machine learning. Make-to-order production companies also want to reduce corporate risks, such as delivery delays, by predicting the total production time of orders, but they have difficulty doing so because the total production time differs for every order. The Theory of Constraints (TOC) was developed to find the least efficient areas in order to increase order throughput and reduce total order cost, but it does not provide a forecast of total production time. Because order production varies with diverse customer needs, the total production time of an individual order can be measured after the fact but is difficult to predict in advance. The measured total production times of past orders also differ from one another, so they cannot be used as standard times. As a result, experienced managers rely on intuition rather than on the system, while inexperienced managers use simple management indicators (e.g., 60 days total production time for raw materials, 90 days for steel plates, etc.). Work instructions issued too early on the basis of intuition or such indicators cause congestion, which degrades productivity, while instructions issued too late increase production costs or cause missed delivery dates due to emergency processing. Missing a deadline results in compensation for the delay or adversely affects sales and collections. To address these problems, this study seeks a machine learning model that estimates the total production time of new orders for a company operating a make-to-order production system, using order, production, and process performance data for machine learning. We compared and analyzed the OLS, GLM Gamma, Extra Trees, and Random Forest algorithms as candidates for estimating total production time and present the results.
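The comparison protocol described above can be sketched with scikit-learn on synthetic data. Everything here is invented for illustration (the features stand in for order attributes, the target for total production time in days); GLM Gamma is omitted to keep the example to a single dependency:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-ins for order features (e.g. material type, weight, process count).
X = rng.uniform(0.0, 1.0, size=(500, 4))
# Target: total production time in days, with a skewed (gamma) noise term.
y = 30.0 + 60.0 * X[:, 0] + 25.0 * X[:, 1] ** 2 + rng.gamma(2.0, 2.0, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "OLS": LinearRegression(),
    "Extra Trees": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
results = {}
for name, model in models.items():
    results[name] = mean_absolute_error(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name:13s} MAE = {results[name]:.2f} days")
```

Ranking candidate regressors by a held-out error measure such as MAE, as here, is the standard way to pick the best estimator for a prediction task like this.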

An Outlier Detection Using Autoencoder for Ocean Observation Data (해양 이상 자료 탐지를 위한 오토인코더 활용 기법 최적화 연구)

  • Kim, Hyeon-Jae;Kim, Dong-Hoon;Lim, Chaewook;Shin, Yongtak;Lee, Sang-Chul;Choi, Youngjin;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers / v.33 no.6 / pp.265-274 / 2021
  • Outlier detection research on ocean data has traditionally been performed using statistical and distance-based machine learning algorithms. Recently, AI-based methods have received much attention, and so-called supervised learning methods that require classification information for the data are mainly used. Supervised learning requires much time and cost because classification information (labels) must be manually assigned to all the data required for learning. In this study, an autoencoder based on unsupervised learning was applied for outlier detection to overcome this problem. Two experiments were designed: univariate learning, which used only SST data from the Deokjeok Island observations, and multivariate learning, which used SST, air temperature, wind direction, wind speed, air pressure, and humidity. The data cover 25 years, from 1996 to 2020, and pre-processing that considers the characteristics of ocean data was applied. We then tried to detect outliers in real SST data using the trained univariate and multivariate autoencoders. To compare model performance, various outlier detection methods were applied to synthetic data with artificially inserted errors. Quantitative evaluation showed multivariate/univariate accuracies of about 96%/91%, respectively, indicating that the multivariate autoencoder had better outlier detection performance. Outlier detection using an unsupervised autoencoder is expected to be useful in various ways, in that it can reduce subjective classification errors and the cost and time required for data labeling.
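The core idea described above, flagging observations whose reconstruction error is large, can be sketched without a deep learning framework by using a linear autoencoder (equivalent to PCA). This is a simplified stand-in for the paper's nonlinear autoencoder: the six variables, the clean/test split, and the injected error are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for normalized multivariate observations (6 variables, e.g. SST,
# air temperature, wind, pressure, humidity): a clean training period and a
# test period containing one artificially inserted error.
X_train = rng.normal(size=(250, 6))
X_test = rng.normal(size=(50, 6))
X_test[7] += 8.0                       # injected outlier at test index 7

# Linear "autoencoder": encode to 2 latent dims with the top principal
# components of the clean data, decode back, and score reconstruction error.
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
W = Vt[:2].T                           # 6 -> 2 encoder (decoder is W.T)

def recon_error(X: np.ndarray) -> np.ndarray:
    Z = (X - mu) @ W                   # encode
    return np.linalg.norm((X - mu) - Z @ W.T, axis=1)  # decode + residual

train_err = recon_error(X_train)
threshold = train_err.mean() + 3.0 * train_err.std()   # 3-sigma cutoff
flagged = np.flatnonzero(recon_error(X_test) > threshold)
print("flagged test indices:", flagged)
```

A trained nonlinear autoencoder replaces the matrix `W` with learned encoder/decoder networks, but the detection rule, thresholding the reconstruction error, is the same.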

A Relative Study of 3D Digital Record Results on Buried Cultural Properties (매장문화재 자료에 대한 3D 디지털 기록 결과 비교연구)

  • KIM, Soohyun;LEE, Seungyeon;LEE, Jeongwon;AHN, Hyoungki
    • Korean Journal of Heritage: History & Science / v.55 no.1 / pp.175-198 / 2022
  • With the development of technology, methods of digitally converting various forms of analog information have become common. As a result, the concepts of recording, building, and reproducing data in virtual space, such as digital heritage and digital reconstruction, have been actively used in the preservation and study of various cultural heritages. However, few existing studies suggest optimal scanners for small and medium-sized relics. In addition, scanner prices are not cheap for researchers, so there are not many related studies. The specifications of a 3D scanner have a great influence on the quality of the 3D model. In particular, since the state of light reflected from the surface of an object varies with the type of light source used in the scanner, using a scanner suited to the characteristics of the object is the way to increase the efficiency of the work. Therefore, this paper examined the quality differences among four types of 3D scanners on nine small and medium-sized buried cultural properties of various materials and periods, including earthenware and porcelain. As a result of the study, optical scanners and small and medium-sized object scanners were the most suitable for digitally recording small and medium-sized relics. Optical scanners are excellent in both mesh and texture but have the disadvantage of being very expensive and not portable. The handheld type had the advantages of excellent portability and speed. Considering the results relative to price, the small and medium-sized object scanner was the best, while photogrammetry obtained a 3D model at the lowest cost. 3D scanning technology can largely be used to produce digital drawings of relics, restore and duplicate cultural properties, and build databases. This study is meaningful in that it contributes to the selection of scanners best suited to buried cultural properties by material and period, for the active use of 3D scanning technology in cultural heritage.

A Study on Precision of 3D Spatial Model of a Highly Dense Urban Area based on Drone Images (드론영상 기반 고밀 도심지의 3차원 공간모형의 정밀도에 관한 연구)

  • Choi, Yeon Woo;Yoon, Hye Won;Choo, Mi Jin;Yoon, Dong Keun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.2 / pp.69-77 / 2022
  • The 3D spatial model is an analysis framework for solving urban problems and is used in various fields such as urban planning, the environment, land and housing management, and disaster simulation. The use of drones, which can capture 3D images quickly and at low cost, is increasing for the construction of 3D spatial models. For building a virtual city and utilizing simulation modules, the location accuracy of the aerial survey and the precision of the 3D spatial model are important factors, so methods to increase accuracy have been proposed. This study analyzed the location accuracy of the aerial survey and the precision of the 3D spatial model under different aerial survey conditions for an urban area with densely located buildings. We selected Daerim 2-dong, Yeongdeungpo-gu, Seoul as the target area and varied the shooting angle, shooting altitude, and overlap rate as aerial survey conditions. We calculated the location accuracy of the aerial survey by analyzing the difference between the surveyed values of the checkpoints (CPs) and the values predicted from the 3D spatial model, and we calculated the precision of the 3D spatial model by analyzing the difference between the point cloud positions and the 3D spatial model (3D mesh). As a result, location accuracy tended to be high at a relatively high overlap rate, but the higher the overlap rate, the lower the precision of the 3D spatial model, and the higher the shooting angle, the higher the precision; shooting altitude showed no significant relationship with precision. In terms of the baseline-height ratio, precision tended to improve as the ratio increased.
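The accuracy check described above, comparing surveyed checkpoint coordinates against coordinates read from the 3D model, is essentially an RMSE computation. A minimal sketch with invented coordinates (not the paper's data):

```python
import numpy as np

# Surveyed checkpoint (CP) coordinates vs. coordinates read from the 3D model,
# in metres. All values here are illustrative.
surveyed = np.array([[10.00, 20.00, 5.00],
                     [15.00, 22.00, 5.10],
                     [12.50, 18.00, 4.95]])
model = np.array([[10.03, 19.98, 5.06],
                  [14.96, 22.05, 5.02],
                  [12.52, 18.04, 4.90]])

diff = model - surveyed
rmse_xyz = np.sqrt((diff ** 2).mean(axis=0))                    # per-axis RMSE
rmse_3d = np.sqrt((np.linalg.norm(diff, axis=1) ** 2).mean())   # combined 3D RMSE
print("per-axis RMSE (m):", np.round(rmse_xyz, 4), " 3D RMSE (m):", round(rmse_3d, 4))
```

The same residual computation applies to the precision measure, with point-cloud positions and their nearest mesh points in place of the CP pairs.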

A basic study on explosion pressure of hydrogen tank for hydrogen fueled vehicles in road tunnels (도로터널에서 수소 연료차 수소탱크 폭발시 폭발압력에 대한 기초적 연구)

  • Ryu, Ji-Oh;Ahn, Sang-Ho;Lee, Hu-Yeong
    • Journal of Korean Tunnelling and Underground Space Association / v.23 no.6 / pp.517-534 / 2021
  • Hydrogen fuel is emerging as a new energy source to replace fossil fuels, in that it can solve environmental pollution problems and reduce energy imbalance and cost. Since hydrogen is eco-friendly but highly explosive, there is great concern about fire and explosion accidents involving hydrogen fueled vehicles. In particular, in semi-enclosed spaces such as tunnels, the risk is predicted to increase. Therefore, this study examined the applicability of the equivalent TNT model and a numerical analysis method for evaluating hydrogen explosion pressure in tunnels. Comparing the explosion pressures of six equivalent TNT models with Weyandt's experimental results, the Henrych equation was found to be the closest, with a deviation of 13.6%. Numerical analysis of the effects of hydrogen tank capacity (52, 72, 156 L) and tunnel cross-section (40.5, 54, 72, 95 m2) on the explosion pressure showed that the explosion pressure wave in the tunnel initially propagates in a hemispherical shape, as in open space, and after passing a certain distance it is transformed into a plane wave that propagates with a very gradual decay rate. The Henrych equation agrees well with the numerical results in the section where the explosion pressure decreases rapidly, but it significantly underestimates the pressure after the wave is transformed into a plane wave. For the same hydrogen tank capacity, the explosion pressure decreases as the tunnel cross-sectional area increases, and for the same cross-sectional area, the explosion pressure increases by about 2.5 times when the tank capacity increases from 52 L to 156 L. Evaluating the limiting distances affecting the human body, when a 52 L hydrogen tank explodes, the limiting distance for death was estimated at about 3 m, and the limiting distance for serious injury at 28.5-35.8 m.
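The equivalent-TNT approach mentioned above can be sketched as a back-of-envelope computation: convert the hydrogen inventory to a TNT mass via its energy content, then form the Hopkinson-Cranz scaled distance Z on which formulas such as Henrych's operate. The tank hydrogen mass and the yield factor below are illustrative assumptions, not the paper's values:

```python
H2_ENERGY_MJ_PER_KG = 120.0    # lower heating value of hydrogen, approximate
TNT_ENERGY_MJ_PER_KG = 4.52    # specific detonation energy of TNT, approximate

def tnt_equivalent_kg(h2_mass_kg: float, yield_factor: float = 0.3) -> float:
    """TNT mass releasing the same blast energy as the participating hydrogen."""
    return yield_factor * h2_mass_kg * H2_ENERGY_MJ_PER_KG / TNT_ENERGY_MJ_PER_KG

def scaled_distance(r_m: float, w_tnt_kg: float) -> float:
    """Hopkinson-Cranz scaled distance Z = R / W^(1/3) in m/kg^(1/3)."""
    return r_m / w_tnt_kg ** (1.0 / 3.0)

h2_mass = 2.1                  # roughly a 52 L tank at 70 MPa, illustrative
w = tnt_equivalent_kg(h2_mass)
for r in (3.0, 10.0, 30.0):
    print(f"R={r:5.1f} m  W_TNT={w:5.1f} kg  Z={scaled_distance(r, w):5.2f}")
```

The equivalent TNT models compared in the paper differ in the overpressure-versus-Z curve they apply to this scaled distance; as the abstract notes, such free-field curves stop being valid once the tunnel confines the blast into a plane wave.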

An Evaluation of Allowable Bearing Capacity of Weathered Rock by Large-Scale Plate-Bearing Test and Numerical Analysis (대형평판재하시험 및 수치해석에 의한 풍화암 허용지지력 평가)

  • Hong, Seung-Hyeun
    • Journal of the Korean Geotechnical Society / v.38 no.10 / pp.61-74 / 2022
  • Considering that the number of cases in which a structure's foundation is located on weathered rock has been increasing recently, the allowable bearing capacities of such foundations reported in geotechnical investigation reports were studied to establish an adequate design bearing capacity. According to the results, the allowable bearing capacity of a foundation on weathered rock was approximately 400-700 kN/m2, with large variation, and was considered conservative. Because the allowable bearing capacity of the foundation ground is an important index in determining the foundation type in the early design stage, this initial decision can significantly influence construction cost and schedule. Thus, in this study, six large-scale plate-bearing tests were conducted on weathered rock, and the bearing capacity and settlement characteristics were analyzed. The bearing capacities in all six tests exceeded 1,500 kN/m2, and comparison with various bearing capacity formulas shows that the results are similar to those of the bearing capacity formula based on pressuremeter tests. In addition, the elastic modulus determined by inverse calculation of the load-settlement behavior from the large-scale plate-bearing tests was consistent with the elastic modulus from the pressuremeter tests. Considering the large-scale plate-bearing tests in this study together with other plate-bearing tests on weathered rock in Korea, the allowable bearing capacity of weathered rock is evaluated to be over 1,000 kN/m2. However, because foundation settlement increases with foundation size, the allowable bearing capacity should be limited by the allowable settlement criteria of the upper structure. Therefore, in this study, the anticipated foundation settlements for various foundation sizes and thicknesses of weathered rock were evaluated by numerical analysis, and the foundation sizes and ground conditions with an allowable bearing capacity over 1,000 kN/m2 are proposed in a table. These findings are considered useful in determining the foundation type in early foundation design.
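The inverse calculation of elastic modulus from load-settlement behavior mentioned above can be sketched with the rigid-plate elastic settlement formula, s = q·B·(1 - ν²)·I / E, solved for E. This is a hedged sketch: the Poisson's ratio, the rigid-plate influence factor (~0.79), and the example numbers are illustrative, not the paper's data:

```python
def back_calculated_modulus(q_kpa: float, b_m: float, s_m: float,
                            nu: float = 0.3, shape_factor: float = 0.79) -> float:
    """Ground elastic modulus E (kPa) back-calculated from one plate-bearing
    load-settlement pair: E = q * B * (1 - nu^2) * I / s."""
    return q_kpa * b_m * (1.0 - nu ** 2) * shape_factor / s_m

# e.g. 1,500 kPa applied on a 0.9 m plate settling 10 mm (illustrative):
E = back_calculated_modulus(q_kpa=1500.0, b_m=0.9, s_m=0.010)
print(f"back-calculated E = {E / 1000:.1f} MPa")
```

The same formula, run forward with the back-calculated E, is what lets the settlement of a larger foundation be anticipated: settlement scales with the foundation width B at a given bearing pressure, which is why the allowable capacity must be capped by the settlement criteria.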

A Study on the Revitalization of BIM in the Field of Architecture Using AHP Method (AHP 기법을 이용한 건축분야 BIM 활성화 방안 연구)

  • Kim, Jin-Ho;Hwang, Chan-Gyu;Kim, Ji-Hyung
    • Journal of the Korea Institute of Building Construction / v.22 no.5 / pp.473-483 / 2022
  • BIM (Building Information Modeling) is a technology that can manage information throughout the entire life cycle of the construction industry, and it serves as a platform for improving productivity and integrating the whole construction process. BIM is actively applied in developed countries, and its use at various overseas construction sites is increasing due to shortened construction periods and budget savings. However, the domestic construction sector still lacks an institutional basis and faces technical limitations, which have kept BIM utilization low. Various activation measures and institutional frameworks will need to be established for the early establishment of productive BIM in Korea. Therefore, as part of research on the domestic settlement and revitalization of BIM, this study derived key factors necessary for the development of the construction industry through brainstorming and expert surveys using the AHP technique and analyzed the relative importance of each factor. Preliminary surveys of an expert group yielded 3 items at level 1, 9 items at level 2, and 27 items at level 3, and priorities were analyzed through pairwise comparisons. The AHP analysis found that the relative importance weight of the policy aspect was highest at level 1; among the level 2 factors, the policy factors, and at level 3, the cost-based and incentive system introduction factors, were considered most important. These findings show that the policy guidance and institutions underlying the activation of BIM are relatively more important than research and development or corporate innovation, and that the preparation of policy plans by public institutions should be the first priority. Therefore, the development of a policy system or guidelines must come first before advancing to the next activation stage. The use of BIM technology will contribute not only to improving the productivity of the construction industry but also to its overall development and growth. The results of this study are expected to provide useful information for establishing BIM activation policies in the central government, relevant local governments, and related public institutions.
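The AHP step described above, deriving priority weights from pairwise comparisons and checking their consistency, can be sketched as follows. The 3x3 comparison matrix is invented for illustration (it is not the paper's survey data); the weights come from the principal eigenvector, and the consistency ratio uses Saaty's random index for n = 3:

```python
import numpy as np

# Hypothetical pairwise comparisons of three level-1 aspects
# (e.g. policy vs. technology vs. cost); A[i, j] = importance of i over j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority weights, sum to 1

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)      # consistency index
ri = 0.58                                 # Saaty's random index for n = 3
cr = ci / ri                              # CR < 0.1 is conventionally acceptable
print("weights:", np.round(w, 3), " CR:", round(cr, 3))
```

In a full AHP study, the same computation is repeated for each sub-level matrix, and the level weights are multiplied down the hierarchy to rank the leaf factors, here the 27 level-3 items.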