Title/Summary/Keyword: Optimization process


Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management / 2022.06a / pp.1243-1244 / 2022
  • In recent years, the growing interest in off-site construction has led to factories scaling up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated component production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges remain in deploying this technology for large-scale field applications. One of the issues is collecting and transmitting vast amounts of video data. Continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to handle objects of different sizes and scales within a single scene. Objects of various sizes and types (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, existing object detection algorithms struggle to detect objects with significant differences in size simultaneously, because doing so requires collecting and training on massive amounts of object image data at various scales. This study therefore developed a large-scale site monitoring system using edge computing and small-object detection to solve these problems. Edge computing is a distributed information technology architecture wherein image or video data is processed near the originating source rather than on a centralized server or cloud. By running inference on the AI computing modules attached to the CCTVs and transmitting only the processed information to the server, excessive network traffic can be reduced. Small-object detection handles objects of different sizes by cropping the raw image, with the number of rows and columns for image splitting chosen according to the target object size. This enables small objects to be detected in the cropped and magnified images, and the detections can then be mapped back onto the original image. For inference, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of construction sites, which were detected inaccurately by existing algorithms. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept and safety route optimization that accumulates workers' paths and infers risky areas from their trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management more effectively.

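A minimal sketch of the tile-and-detect idea described above, assuming a YOLOv5 model loaded through torch.hub: the frame is split into a grid of crops, the detector runs on each crop, and the resulting boxes are shifted back into frame coordinates. The function names, grid size, and model choice are illustrative, not the authors' implementation.

```python
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained detector (assumed choice)

def detect_tiled(frame, rows=2, cols=3):
    """frame: H x W x 3 image array; returns boxes (x1, y1, x2, y2, conf, cls) in frame coords."""
    h, w = frame.shape[:2]
    th, tw = h // rows, w // cols
    detections = []
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * th, c * tw
            tile = frame[y0:y0 + th, x0:x0 + tw]      # cropped (and effectively magnified) view
            results = model(tile)                      # run the detector on the small crop
            for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
                # shift tile-local boxes back into original-frame coordinates
                detections.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, conf, cls))
    return detections
```

In an edge-computing setup, only the returned box list (not the video itself) would be sent to the server, which is what keeps network traffic low.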

Performance analysis and prediction through various over-provision on NAND flash memory based storage (낸드 플래시 메모리기반 저장 장치에서 다양한 초과 제공을 통한 성능 분석 및 예측)

  • Lee, Hyun-Seob
    • Journal of Digital Convergence / v.20 no.3 / pp.343-348 / 2022
  • With the recent rapid development of technology, the amount of data generated by various systems is increasing, and enterprise servers and data centers that must handle large amounts of big data need highly stable, high-performance storage devices even at increased cost. In such systems, SSDs (solid state disks), which provide high read/write performance, are often used as storage devices. However, because NAND flash reads and writes on a page basis, erases on a block basis, and must erase before writing, performance degrades when overwrites occur. To delay this performance degradation, over-provisioning is applied inside the SSD. However, since over-provisioning trades storage capacity for performance, applying more over-provisioning than the required performance level leads to excessive cost. In this paper, we propose a method of measuring the performance and cost incurred when various over-provisioning ratios are applied in an SSD, and of predicting the system-optimized over-provisioning ratio based on these measurements. Through this research, we expect to find the cost trade-off needed to meet the performance requirements of systems that process big data.
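The trade-off the paper measures can be pictured with a toy sketch: more over-provisioning (OP) lowers write amplification (better sustained write performance) but raises the cost per usable gigabyte. The write-amplification formula and prices below are placeholders for illustration, not the paper's measured model.

```python
# Toy model of the over-provisioning trade-off: pick the cheapest OP ratio
# that still meets a write-amplification (performance) target.
def write_amplification(op_ratio, k=0.5):
    return 1.0 + k / op_ratio                       # placeholder: WA falls as spare space grows

def cost_per_usable_gb(raw_price_per_gb, op_ratio):
    return raw_price_per_gb * (1.0 + op_ratio)      # spare capacity is paid for but not usable

def pick_op_ratio(candidates, wa_target, raw_price_per_gb=0.08):
    feasible = [op for op in candidates if write_amplification(op) <= wa_target]
    if not feasible:
        return None
    return min(feasible, key=lambda op: cost_per_usable_gb(raw_price_per_gb, op))

print(pick_op_ratio([0.07, 0.15, 0.28, 0.50], wa_target=3.0))   # -> 0.28 under this toy model
```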

Sensitivity Analysis of Wake Diffusion Patterns in Mountainous Wind Farms according to Wake Model Characteristics on Computational Fluid Dynamics (전산유체역학 후류모델 특성에 따른 산악지형 풍력발전단지 후류확산 형태 민감도 분석)

  • Kim, Seong-Gyun;Ryu, Geon Hwa;Kim, Young-Gon;Moon, Chae-Joo
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.2 / pp.265-278 / 2022
  • The global energy paradigm is rapidly shifting toward carbon neutrality, and wind energy is positioning itself as a leading renewable energy-based power source. The success of onshore and offshore wind energy projects hinges on securing the economic feasibility of the project, which depends on high-quality wind resources and an optimal arrangement of wind turbines. When laying out a wind farm, arranging the turbines optimally with respect to the main wind direction is important, as this minimizes the wake effect caused by the flow passing through structures located on the windward side. The accuracy of wake-effect prediction is determined by the wake model and the modeling technique used to simulate it. Therefore, in this paper, the wake diffusion pattern of a proposed onshore wind farm located in mountainous complex terrain in South Korea is analyzed with WindSim, a commercial CFD model, through a sensitivity study of each wake model, and the results are intended to serve as basic research data for future wind energy projects in complex terrain.
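For context, one classical analytical wake model (the Jensen/Park model) is sketched below. It illustrates the kind of quantity a wake-model sensitivity study compares: the velocity deficit behind a turbine decaying with downstream distance at a rate set by the wake decay constant. It is not claimed to be among the specific WindSim models the authors evaluated.

```python
import math

def jensen_deficit(ct, rotor_d, x, k=0.075):
    """Fractional velocity deficit at downstream distance x (k is the wake decay constant)."""
    r0 = rotor_d / 2.0
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

# How sensitive is the predicted deficit to the decay constant, 5 rotor diameters downstream?
for k in (0.04, 0.075, 0.10):
    print(k, round(jensen_deficit(ct=0.8, rotor_d=100.0, x=500.0, k=k), 3))
```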

A Deep Learning-based Real-time Deblurring Algorithm on HD Resolution (HD 해상도에서 실시간 구동이 가능한 딥러닝 기반 블러 제거 알고리즘)

  • Shim, Kyujin;Ko, Kangwook;Yoon, Sungjoon;Ha, Namkoo;Lee, Minseok;Jang, Hyunsung;Kwon, Kuyong;Kim, Eunjoon;Kim, Changick
    • Journal of Broadcast Engineering / v.27 no.1 / pp.3-12 / 2022
  • Image deblurring aims to remove blur, which can be generated while shooting by object movement, camera shake, defocus, and so forth. With the rise in popularity of smartphones, it is common to carry a portable digital camera daily, so image deblurring techniques have become more significant recently. Image deblurring was originally studied using traditional optimization techniques; with the recent attention on deep learning, deblurring methods based on convolutional neural networks have been actively proposed. However, most of them focus on better restoration quality, so their speed makes them difficult to use in real situations. To tackle this problem, we propose a novel deep learning-based deblurring algorithm that can operate in real time at HD resolution. In addition, we improved the training and inference processes so that the model's performance could be increased without any significant effect on its speed, and its speed increased without any significant effect on its performance. As a result, our algorithm achieves real-time performance, processing 33.74 frames per second at 1280×720 resolution. Furthermore, it shows excellent quality for its speed, with a PSNR of 29.78 and an SSIM of 0.9287 on the GoPro dataset.
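A brief sketch of how figures like these are typically measured, assuming a PyTorch model: frames per second by timing repeated forward passes at 1280×720, and PSNR between the network output and the sharp ground truth. The model here is a placeholder, not the proposed network.

```python
import time
import torch

def measure_fps(model, h=720, w=1280, iters=100, device="cuda"):
    """Average frames per second of repeated forward passes on a random HD frame."""
    x = torch.randn(1, 3, h, w, device=device)
    with torch.no_grad():
        for _ in range(10):                 # warm-up iterations
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return iters / (time.time() - start)

def psnr(output, target, max_val=1.0):
    """Peak signal-to-noise ratio between deblurred output and sharp ground truth."""
    mse = torch.mean((output - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```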

Guide for Processing of Textured Piezoelectric Ceramics Through the Template Grain Growth Method

  • Temesgen Tadeyos Zate;Jeong-Woo Sun;Nu-Ri Ko;Hye-Lim Yu;Woo-Jin Choi;Jae-Ho Jeon;Wook Jo
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.36 no.4 / pp.341-350 / 2023
  • The templated grain growth (TGG) method has gained significant attention for its ability to produce highly textured piezoelectric ceramics with greatly enhanced performance, making it a promising method for transducer and actuator applications. However, the texturing process using the TGG method requires the optimization of multiple steps, which can be challenging for beginners in this field. Therefore, in this tutorial, we provide an overview of the TGG method, mainly based on our previously published works, covering its various processing steps: synthesizing anisotropic-shaped templates with controlled size and size distribution using the molten salt synthesis technique, tape casting, and identifying key factors for proper alignment of the templates in the target matrix system. Our goal is to provide a resource that can serve as a basic reference for researchers and engineers looking to improve their understanding and utilization of the TGG method for producing textured piezoelectric ceramics.

Performance Evaluation of Loss Functions and Composition Methods of Log-scale Train Data for Supervised Learning of Neural Network (신경 망의 지도 학습을 위한 로그 간격의 학습 자료 구성 방식과 손실 함수의 성능 평가)

  • Donggyu Song;Seheon Ko;Hyomin Lee
    • Korean Chemical Engineering Research / v.61 no.3 / pp.388-393 / 2023
  • The analysis of engineering data using neural networks based on supervised learning has been utilized in various engineering fields, such as chemical process optimization, prediction of particulate matter pollution concentration, prediction of thermodynamic phase equilibria, and prediction of physical properties for transport phenomena systems. Supervised learning requires training data, and its performance is affected by the composition and configuration of the given training data. Engineering data are frequently given on a log scale, such as DNA length and analyte concentration. In this study, for widely distributed log-scaled training data of virtual 100×100 images, available loss functions were quantitatively evaluated in terms of (i) the confusion matrix, (ii) the maximum relative error, and (iii) the mean relative error. As a result, the mean-absolute-percentage-error and mean-squared-logarithmic-error loss functions were optimal for the log-scaled training data. Furthermore, we found that uniformly selected training data led to the best prediction performance. The optimal loss functions and the training data composition method studied in this work can be applied to engineering problems such as evaluating DNA length, analyzing biomolecules, and predicting the concentration of colloidal suspensions.
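The two loss functions the study identifies as optimal for log-scale targets, written in their standard forms as a NumPy sketch (the study's image data and network are not reproduced here):

```python
import numpy as np

def mape(y_true, y_pred, eps=1e-12):
    # mean absolute percentage error: penalizes relative, not absolute, deviation
    return np.mean(np.abs((y_true - y_pred) / (y_true + eps)))

def msle(y_true, y_pred):
    # mean squared logarithmic error: equal ratios contribute equally across scales
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

# Why they suit log-scale data: a prediction 10% off contributes the same
# whether the true value is 1e2 or 1e6.
y_true = np.array([1e2, 1e4, 1e6])
y_pred = y_true * 1.1
print(mape(y_true, y_pred), msle(y_true, y_pred))
```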

Drape Simulation Estimation for Non-Linear Stiffness Model (비선형 강성 모델을 위한 드레이프 시뮬레이션 결과 추정)

  • Eungjune Shim;Eunjung Ju;Myung Geol Choi
    • Journal of the Korea Computer Graphics Society / v.29 no.3 / pp.117-125 / 2023
  • In the development of clothing design through virtual simulation, it is essential to minimize the differences between the virtual and the real world as much as possible. The most critical task in enhancing the similarity between virtual and real garments is to find simulation parameters that closely emulate the physical properties of the actual fabric in use. The simulation parameter optimization process requires manual tuning by experts, demanding high expertise and a significant amount of time. In particular, considerable time is consumed in repeatedly running simulations to check the results of the tuned simulation parameters. Recently, to tackle this issue, artificial neural network learning models have been proposed that swiftly estimate the results of drape test simulations, which are predominantly used for parameter tuning. In these earlier studies, relatively simple linear stiffness models were used, and instead of estimating the entire drape mesh, they estimated only a portion of the mesh and interpolated the rest. However, there is still a scarcity of research on non-linear stiffness models, which are commonly used in actual garment design. In this paper, we propose a learning model for estimating the results of drape simulations with non-linear stiffness models. Our learning model estimates the full high-resolution drape mesh. To validate the performance of the proposed method, experiments were conducted using three different drape test methods, demonstrating high estimation accuracy.
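Since the abstract does not describe the network architecture, the following is only a minimal sketch of the estimation task with assumed dimensions: a vector of non-linear stiffness parameters goes in, and a full-resolution drape mesh (one 3D position per vertex) comes out.

```python
import torch
import torch.nn as nn

N_PARAMS = 32        # assumed size of the non-linear stiffness parameter vector
N_VERTICES = 20_000  # assumed drape-mesh resolution

class DrapeEstimator(nn.Module):
    """Maps simulation parameters directly to vertex positions of the drape mesh."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PARAMS, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, N_VERTICES * 3),   # x, y, z per vertex
        )

    def forward(self, params):                 # params: (batch, N_PARAMS)
        return self.net(params).view(-1, N_VERTICES, 3)

pred_mesh = DrapeEstimator()(torch.randn(4, N_PARAMS))   # shape: (4, 20000, 3)
```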

Driver Route Choice Models for Developing Real-Time VMS Operation Strategies (VMS 실시간 운영전략 구축을 위한 운전자 경로선택모형)

  • Kim, SukHee;Choi, Keechoo;Yu, JeongWhon
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.3D / pp.409-416 / 2006
  • Real-time traveler information disseminated through Variable Message Signs (VMS) is known to affect driver route choice decisions. In the past, many studies have attempted to optimize system performance using VMS message content as the primary control variable of driver route choice. This research proposes a VMS information provision optimization model that searches for the best combination of VMS message contents and display sequence to minimize the total travel time on the highway network considered. The driver route choice models under VMS information provision are developed using stated preference (SP) survey data in order to realistically capture driver response behavior. A genetic algorithm (GA) is used to find the optimal VMS information provision strategy, which consists of the VMS message contents and the sequence of message display. Within the GA module, the system performance is measured using microscopic traffic simulation. The experimental results highlight the capability of the proposed model to search for the optimal solution in an efficient way. The results show that the traveler information conveyed via VMS can reduce the total travel time on a highway network. They also suggest that as the VMS message update interval gets shorter, a smaller number of VMS message contents performs better in reducing the total travel time, all other things being equal.
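A condensed sketch of the GA search described above, assuming a candidate solution is a sequence of VMS message contents and its fitness is the total network travel time returned by a traffic simulation (stubbed out here). The message codes, rates, and population sizes are illustrative, not the paper's settings.

```python
import random

MESSAGES = ["congestion_ahead", "use_detour", "travel_time", "no_message"]
SEQ_LEN, POP, GENERATIONS = 6, 30, 50

def total_travel_time(sequence):
    # stub for the microscopic traffic simulation used to evaluate a display strategy
    return sum(hash((i, m)) % 100 for i, m in enumerate(sequence))

def evolve():
    pop = [[random.choice(MESSAGES) for _ in range(SEQ_LEN)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=total_travel_time)                 # lower travel time = fitter
        parents = pop[:POP // 2]                        # selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, SEQ_LEN)
            child = a[:cut] + b[cut:]                   # one-point crossover
            if random.random() < 0.1:                   # mutation
                child[random.randrange(SEQ_LEN)] = random.choice(MESSAGES)
            children.append(child)
        pop = parents + children
    return min(pop, key=total_travel_time)

print(evolve())
```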

GIS Optimization for Bigdata Analysis and AI Applying (Bigdata 분석과 인공지능 적용한 GIS 최적화 연구)

  • Kwak, Eun-young;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.171-173 / 2022
  • Fourth industrial revolution technologies are making people's lives more efficient. GIS-based Internet services, such as traffic and travel-time information, help people reach their destinations more quickly. The National Geographic Information Service (NGIS) and each local government are building basic data to investigate SOC accessibility and to analyze optimal locations. To obtain the shortest distance, the accessibility from the starting point to the arrival point is analyzed: given a road network map and the start and end points, the shortest distance and the optimal accessibility are calculated using the Dijkstra algorithm. Analyzing routes from multiple starting points to multiple destinations required more than three steps of manual analysis to determine the optimal location, within about 0.1% error. Processing the many-to-many (M×N) calculation took additional time and required a computer with at least 32 GB of memory. If a more versatile optimal proximity analysis service is provided for desired locations, it becomes possible to efficiently analyze areas with poor access to businesses and living facilities and to support facility site selection for the public.

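A compact sketch of the one-to-one shortest-path computation mentioned above, using Dijkstra's algorithm over a road-network adjacency list. A many-to-many (M×N) analysis repeats this for every origin, which is where the memory and runtime pressure noted in the abstract comes from. The example network is invented.

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, edge_length_m), ...]}; returns shortest distances from source."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(queue, (nd, v))
    return dist

road_net = {"A": [("B", 400), ("C", 900)], "B": [("C", 300)], "C": []}
print(dijkstra(road_net, "A"))   # {'A': 0.0, 'B': 400.0, 'C': 700.0}
```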

Empirical and Numerical Analyses of a Small Planing Ship Resistance using Longitudinal Center of Gravity Variations (경험식과 수치해석을 이용한 종방향 무게중심 변화에 따른 소형선박의 저항성능 변화에 관한 연구)

  • Michael;Jun-Taek Lim;Nam-Kyun Im;Kwang-Cheol Seo
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.7 / pp.971-979 / 2023
  • Small ships (<499 GT) constitute 46% of existing ships; therefore, they account for a relatively high share of CO2 emissions. Operating in optimal trim conditions can reduce a ship's resistance, which results in fewer greenhouse gas emissions. An affordable way to optimize trim is to adjust the weight distribution to obtain an optimum longitudinal center of gravity (LCG). Therefore, in this study, the effect of LCG changes on the resistance of a small planing ship is studied using empirical and numerical analyses. The Savitsky method implemented in Maxsurf Resistance and the STAR-CCM+ commercial computational fluid dynamics (CFD) software are used for the empirical and numerical analyses, respectively. Finally, the total resistance values are compared to obtain the optimum LCG. In summary, in the numerical analysis the optimum LCG is achieved at 46.2% of the length overall (LoA) at Froude number 0.56 and at 43.4% LoA at Froude number 0.63, which provides a significant resistance reduction of 41.12-45.16% compared to the reference point at 29.2% LoA.
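The sweep-and-compare workflow can be pictured as below: evaluate the total resistance for several candidate LCG positions (as a percentage of LoA) and keep the minimum. The resistance function is a stand-in for a Maxsurf/Savitsky or CFD run, not the actual method or its numbers.

```python
def total_resistance(lcg_pct_loa, froude_number):
    # placeholder: in practice this call would run the empirical (Savitsky) or CFD analysis
    return abs(lcg_pct_loa - 45.0) * 0.8 + froude_number   # toy convex resistance curve

def optimum_lcg(candidates, froude_number):
    """Evaluate each candidate LCG position and return the one with minimum resistance."""
    results = {lcg: total_resistance(lcg, froude_number) for lcg in candidates}
    best = min(results, key=results.get)
    return best, results

best, results = optimum_lcg([29.2, 35.0, 40.0, 43.4, 46.2], froude_number=0.56)
print(best, results)
```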