• Title/Summary/Keyword: AI Technology

Search Results: 2,564

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management / 2022.06a / pp.1243-1244 / 2022
  • In recent years, the growing interest in off-site construction has led factories in the construction sector to scale up their manufacturing and production processes. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated component production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges to deploying this technology in large-scale field applications remain. One issue is collecting and transmitting vast amounts of video data. Continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate information on objects of different sizes and scales within a single scene. Various sizes and types of objects (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, existing object detection algorithms struggle to detect objects with significant size differences simultaneously, because doing so requires collecting and training on massive amounts of object image data at various scales. This study therefore developed a large-scale site monitoring system that uses edge computing and small-object detection to solve these problems. Edge computing is a distributed information technology architecture in which image or video data are processed near the originating source rather than on a centralized server or cloud. By running inference on an AI computing module attached to each CCTV and communicating only the processed information to the server, excessive network traffic can be reduced. Small-object detection is a method for detecting objects of different sizes by cropping the raw image, with the number of rows and columns for image splitting set according to the target object size. This enables the detection of small objects in the cropped and magnified images, and the detected small objects can then be mapped back onto the original image. For inference, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of construction sites, which existing algorithms detected inaccurately. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept and safe-route optimization that accumulates workers' paths and infers risky areas from their trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management system more effectively.
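A minimal sketch of the tile-and-detect idea described in this abstract, assuming a YOLOv5 model loaded through torch.hub; the tile grid, confidence threshold, and input file are illustrative assumptions, not the authors' implementation.

```python
# Sketch: split a large frame into a grid of tiles, run YOLOv5 on each tile,
# and map the detections back into original-image coordinates.
# Assumes: torch and opencv-python installed, internet access for torch.hub.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # small pretrained model

def detect_tiled(image_bgr, rows=2, cols=4, conf_thres=0.4):
    h, w = image_bgr.shape[:2]
    tile_h, tile_w = h // rows, w // cols
    detections = []
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * tile_h, c * tile_w
            tile = image_bgr[y0:y0 + tile_h, x0:x0 + tile_w]
            tile_rgb = cv2.cvtColor(tile, cv2.COLOR_BGR2RGB)
            results = model(tile_rgb)
            for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
                if conf < conf_thres:
                    continue
                # shift tile-local box back to full-frame coordinates
                detections.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0,
                                   conf, int(cls)))
    return detections

frame = cv2.imread("site_view.jpg")   # hypothetical CCTV frame
boxes = detect_tiled(frame)
```

Because each tile is effectively a magnified view, small objects such as distant workers occupy more pixels at inference time, which is the effect the abstract relies on.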


A Study on the Fabrication of Heater based on Silicone Rubber (실리콘러버 기반의 히터제작에 관한 연구)

  • Jeong-Oh Hong;Jae Tack Hong;Shin-Hyeong Choi
    • Advanced Industrial Science / v.2 no.2 / pp.9-15 / 2023
  • Because silicone rubber heaters are flexible, they can be attached directly to or installed on objects to be heated, whether those objects are flat, curved, or three-dimensional. The conventional heating method heats the entire object and raises it to the required temperature, ignoring areas or positions where heat is not needed, so partial, intensive heating cannot be performed. With multi-heating zones, only the parts that need heat are intensively heated according to the process rather than the entire object, so each location can be heated quickly by applying a different amount of heat with a small electrical capacity, and heat energy consumption can be reduced. In this study, the temperature and heating time of the partially concentrated region in the multi-heating-zone structure are measured so that a uniform temperature, or the intended temperature difference, is achieved in the region requiring thermal fusion. To determine the optimal power density range and reduce the required electrical capacity, the safety of a silicone rubber heater manufactured with a multi-heating-zone structure is also investigated. If silicone rubber heaters are manufactured with this multi-heating method, multi-zone intensive heating technology can ideally be applied to all heating processes.
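To make the energy-saving argument concrete, here is a small back-of-the-envelope sketch (not from the paper; the plate dimensions, material, and zone sizes are illustrative assumptions) comparing the heat needed to warm a whole aluminium plate versus only the zones that require thermal fusion, using Q = m·c·ΔT.

```python
# Illustrative comparison of whole-surface heating vs. zoned heating.
# All numbers are assumptions for the sake of the example, not measurements
# from the study.
rho_al = 2700.0      # aluminium density, kg/m^3
c_al = 900.0         # specific heat of aluminium, J/(kg*K)
thickness = 0.005    # plate thickness, m
delta_T = 60.0       # required temperature rise, K

plate_area = 0.50 * 0.30          # full plate, m^2
zone_area = 3 * (0.05 * 0.05)     # three small fusion zones, m^2

def heat_energy(area_m2):
    """Q = m * c * dT for the heated portion of the plate."""
    mass = rho_al * area_m2 * thickness
    return mass * c_al * delta_T

q_full = heat_energy(plate_area)
q_zones = heat_energy(zone_area)
print(f"whole plate: {q_full/1e3:.1f} kJ, zones only: {q_zones/1e3:.1f} kJ "
      f"({100 * q_zones / q_full:.0f}% of the whole-plate energy)")
```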

Adverse Effects on EEGs and Bio-Signals Coupling on Improving Machine Learning-Based Classification Performances

  • SuJin Bak
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.133-153 / 2023
  • In this paper, we propose a novel approach to investigating brain-signal measurement technology using electroencephalography (EEG). Traditionally, researchers have combined EEG signals with bio-signals (BSs) to enhance the classification performance of emotional states. Our objective was to explore the synergistic effects of coupling EEG and BSs and to determine whether the EEG+BS combination improves the classification accuracy of emotional states compared with using EEG alone or combining EEG with pseudo-random signals (PS) generated arbitrarily by random generators. Employing four feature extraction methods, we examined four combinations: EEG alone, EEG+BS, EEG+BS+PS, and EEG+PS, utilizing data from two widely used open datasets. Emotional states (task versus rest states) were classified using Support Vector Machine (SVM) and Long Short-Term Memory (LSTM) classifiers. Our results revealed that when using the highest-accuracy method, SVM-FFT, the average error rates of EEG+BS were 4.7% and 6.5% higher than those of EEG+PS and EEG alone, respectively. We also conducted a thorough analysis of EEG+BS by combining numerous PSs. The error rate of EEG+BS+PS displayed a V-shaped curve, initially decreasing due to the deep double descent phenomenon, followed by an increase attributed to the curse of dimensionality. Consequently, our findings suggest that the combination of EEG+BS may not always yield promising classification performance.
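A minimal sketch of the kind of FFT-feature + SVM pipeline the abstract refers to, assuming windowed EEG (optionally concatenated with bio-signal channels) is already available as a NumPy array; the sampling rate, window length, frequency bands, and synthetic data are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: FFT band-power features per channel, then an SVM classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 128  # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(windows):
    """windows: (n_windows, n_channels, n_samples) -> (n_windows, n_channels * len(BANDS))."""
    spectrum = np.abs(np.fft.rfft(windows, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(windows.shape[-1], d=1.0 / FS)
    feats = [spectrum[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.stack(feats, axis=-1).reshape(windows.shape[0], -1)

# Synthetic stand-in for EEG(+BS) windows: 200 windows, 36 channels, 2 s each.
rng = np.random.default_rng(0)
X_windows = rng.standard_normal((200, 36, 2 * FS))
y = rng.integers(0, 2, size=200)          # task (1) vs. rest (0)

X = band_power_features(X_windows)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
print("error rate:", 1.0 - clf.score(X_te, y_te))
```

Adding bio-signal or pseudo-random channels simply widens the feature matrix, which is how a dimensionality-driven increase in error rate such as the one reported can arise.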

Methods for Quantitative Disassembly and Code Establishment of CBS in BIM for Program and Payment Management (BIM의 공정과 기성 관리 적용을 위한 CBS 수량 분개 및 코드 정립 방안)

  • Hando Kim;Jeongyong Nam;Yongju Kim;Inhye Ryu
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.6 / pp.381-389 / 2023
  • One of the crucial components in building information modeling (BIM) is data. To systematically manage these data, various research studies have focused on the creation of object breakdown structures and property sets. Specifically, crucial data for managing programs and payments involves work breakdown structures (WBSs) and cost breakdown structures (CBSs), which are indispensable for mapping BIM objects. Achieving this requires disassembling CBS quantities based on 3D objects and WBS. However, this task is highly tedious owing to the large volume of CBS and divergent coding practices employed by different organizations. Manual processes, such as those based on Excel, become nearly impossible for such extensive tasks. In response to the challenge of computing quantities that are difficult to derive from BIM objects, this study presents methods for disassembling length-based quantities, incorporating significant portions of the bill of quantities (BOQs). The proposed approach recommends suitable CBS by leveraging the accumulated history of WBS-CBS mapping databases. Additionally, it establishes a unified CBS code, facilitating the effective operation of CBS databases.
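A small illustrative sketch of the two ideas this abstract describes: splitting a length-based BOQ quantity across 3D objects in proportion to their lengths, and recommending a CBS code for a WBS item from an accumulated WBS-CBS mapping history. The data structures, codes, and function names are hypothetical, not the paper's actual schema.

```python
# Hypothetical data: per-object lengths taken from BIM geometry, and a history
# of past WBS -> CBS mappings accumulated from earlier projects.
from collections import Counter

def disassemble_by_length(total_quantity, object_lengths):
    """Split a length-based CBS quantity across 3D objects proportionally."""
    total_length = sum(object_lengths.values())
    return {obj: total_quantity * length / total_length
            for obj, length in object_lengths.items()}

def recommend_cbs(wbs_code, mapping_history):
    """Recommend the CBS code most frequently mapped to this WBS code."""
    candidates = Counter(cbs for wbs, cbs in mapping_history if wbs == wbs_code)
    return candidates.most_common(1)[0][0] if candidates else None

object_lengths = {"Girder-01": 12.0, "Girder-02": 18.0, "Girder-03": 10.0}  # metres
print(disassemble_by_length(40.0, object_lengths))   # BOQ quantity of 40 m

history = [("WBS-STR-010", "CBS-C-3100"), ("WBS-STR-010", "CBS-C-3100"),
           ("WBS-STR-010", "CBS-C-3200")]
print(recommend_cbs("WBS-STR-010", history))         # -> "CBS-C-3100"
```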

Signal and Telegram Security Messenger Digital Forensic Analysis study in Android Environment (안드로이드 환경에서 Signal과 Telegram 보안 메신저 디지털 포렌식분석 연구)

  • Jae-Min Kwon;Won-Hyung Park;Youn-sung Choi
    • Convergence Security Journal / v.23 no.3 / pp.13-20 / 2023
  • This study conducted a digital forensic analysis of Signal and Telegram, two secure messengers widely used in the Android environment. As mobile messengers currently play an important role in daily life, data management and security within these apps have become very important issues. Signal and Telegram, among others, are secure messengers that are highly reliable among users, and they safely protect users' personal information based on encryption technology. However, much research is still needed on how to analyze these encrypted data. In order to solve these problems, in this study, an in-depth analysis was conducted on the message encryption of Signal and Telegram and the database structure and encryption method in Android devices. In the case of Signal, we were able to successfully decrypt encrypted messages that are difficult to access from the outside due to complex algorithms and confirm the contents. In addition, the database structure of the two messenger apps was analyzed in detail and the information was organized into a folder structure and file format that could be used at any time. It is expected that more accurate and detailed digital forensic analysis will be possible in the future by applying more advanced technology and methodology based on the analyzed information. It is expected that this research will help increase understanding of secure messengers such as Signal and Telegram, which will open up possibilities for use in various aspects such as personal information protection and crime prevention.
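As a rough illustration of the kind of database work this abstract describes, the sketch below opens an SQLCipher-encrypted SQLite database (the format Signal for Android uses) once the raw key has already been recovered from the device; the file path, key, and cipher settings are placeholders, and this is not the authors' procedure.

```python
# Sketch: open an SQLCipher-encrypted database with a recovered raw key and
# list its tables.  Requires the pysqlcipher3 package (SQLCipher bindings).
from pysqlcipher3 import dbapi2 as sqlcipher

DB_PATH = "signal.db"          # placeholder path to the extracted database
RAW_KEY_HEX = "00" * 32        # placeholder 256-bit raw key in hex

conn = sqlcipher.connect(DB_PATH)
cur = conn.cursor()
# Keying with a raw hex key; cipher settings may need to match the app's
# SQLCipher version (e.g. via PRAGMA cipher_compatibility).
cur.execute(f"PRAGMA key = \"x'{RAW_KEY_HEX}'\"")
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
for (table_name,) in cur.fetchall():
    print(table_name)
conn.close()
```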

A Study on the evaluation technique rubric suitable for the characteristics of digital design subject (디지털 디자인 과목의 특성에 적합한 평가기법 루브릭에 관한 연구)

  • Cho, Hyun Kyung
    • The Journal of the Convergence on Culture Technology / v.9 no.6 / pp.525-530 / 2023
  • In line with the recent movement toward curriculum innovation, digital drawing subjects require finer-grained evaluation elements and graduated levels of evaluation. The purpose of this paper is to present criteria for evaluating drawing and to propose them in the form of a rubric. In the paper, the beginner-level criteria are technical skills such as the accuracy and consistency of lines and the proportion and balance of the picture, while the intermediate-level criteria concern the ability to use various brushes and tools effectively. The advanced-level criteria center on creativity and originality, such as a new perspective or a unique interpretation of a given subject. In addition, as a measure of understanding of design principles, an evaluation of completeness was derived, focusing on the ability to actively use the various functions of digital drawing software through design principles such as layout, color, and shape. The importance of introducing rubric evaluation is that it allows instructors to evaluate objectively and consistently; the key point of rubric evaluation in such art subjects is that it helps learners clearly grasp their strengths and weaknesses, so that through feedback on each item they can identify what needs improvement and develop better drawing skills.

Comparative analysis of the digital circuit designing ability of ChatGPT (ChatGPT을 활용한 디지털회로 설계 능력에 대한 비교 분석)

  • Kihun Nam
    • The Journal of the Convergence on Culture Technology / v.9 no.6 / pp.967-971 / 2023
  • Recently, a variety of AI-based platform services have become available; one of them is ChatGPT, which processes large quantities of natural-language data and generates answers after self-learning. ChatGPT can perform various tasks in the IT sector, including software programming. In particular, it can help generate simple programs and correct errors in C, a major programming language. Accordingly, ChatGPT is also expected to be capable of using Verilog HDL, a hardware description language modeled on C, effectively. Verilog HDL synthesis, however, turns imperative statements into a logic-circuit form, so it must be verified that the synthesized results operate properly. In this paper, we select small-scale logic circuits for ease of experimentation and verify the circuits generated by ChatGPT against human-designed circuits. As for the experimental environment, Xilinx ISE 14.7 was used for module modeling, and the xc3s1000 FPGA chip was used for module implementation. FPGA area utilization and processing time were compared to analyze the performance of the ChatGPT-generated designs against the human-written Verilog HDL designs.

Development of a Prediction Model for Fall Patients in the Main Diagnostic S Code Using Artificial Intelligence (인공지능을 이용한 주진단 S코드의 낙상환자 예측모델 개발)

  • Ye-Ji Park;Eun-Mee Choi;So-Hyeon Bang;Jin-Hyoung Jeong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.6 / pp.526-532 / 2023
  • Falls are fatal accidents that occur more than 420,000 times a year worldwide. To study fall patients, we therefore examined the association between extrinsic injury codes and the principal diagnosis S-codes of fall patients, and developed a prediction model that predicts the extrinsic injury code from the principal diagnosis S-code data of fall patients. In this study, we received two years of data (2020 and 2021) from Institution A, located in Gangneung City, Gangwon Special Self-Governing Province, extracted only the records with fall-related extrinsic injury codes W00 to W19, and built the prediction model on the codes W01, W10, W13, and W18, which had enough principal diagnosis S-codes to develop a model. 80% of the data were used for training and 20% for testing. The model is a multi-layer perceptron (MLP) with 6 variables (gender, age, principal diagnosis S-code, surgery, hospitalization, and alcohol consumption) in the input layer, 2 hidden layers with 64 nodes each, and an output layer with 4 nodes for the W01, W10, W13, and W18 extrinsic injury codes using the softmax activation function. During training, accuracy was 31.2% after the first epoch but reached 87.5% by the 30th epoch, confirming the association between fall-related extrinsic injury codes and the principal diagnosis S-codes of fall patients.
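A minimal sketch of the MLP architecture described in the abstract (6 input variables, two 64-node hidden layers, a 4-class softmax output), assuming the tabular features have already been numerically encoded; the hidden-layer activation, optimizer, and the synthetic data are assumptions, since the abstract does not state them.

```python
# Sketch of the described MLP with Keras; everything except the layer sizes
# and the 80/20 split is an assumption.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(6,)),               # gender, age, S-code, surgery,
                                           # hospitalization, alcohol (encoded)
    layers.Dense(64, activation="relu"),   # hidden layer 1 (activation assumed)
    layers.Dense(64, activation="relu"),   # hidden layer 2
    layers.Dense(4, activation="softmax"), # W01, W10, W13, W18
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data with an 80/20 split, matching the abstract's setup.
rng = np.random.default_rng(42)
X = rng.random((1000, 6)).astype("float32")
y = rng.integers(0, 4, size=1000)
split = int(0.8 * len(X))
model.fit(X[:split], y[:split], epochs=30,
          validation_data=(X[split:], y[split:]))
```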

Feasibility of Three-Dimensional Balanced Steady-State Free Precession Cine Magnetic Resonance Imaging Combined with an Image Denoising Technique to Evaluate Cardiac Function in Children with Repaired Tetralogy of Fallot

  • YaFeng Peng;XinYu Su;LiWei Hu;Qian Wang;RongZhen Ouyang;AiMin Sun;Chen Guo;XiaoFen Yao;Yong Zhang;LiJia Wang;YuMin Zhong
    • Korean Journal of Radiology / v.22 no.9 / pp.1525-1536 / 2021
  • Objective: To investigate the feasibility of cine three-dimensional (3D) balanced steady-state free precession (b-SSFP) imaging combined with a non-local means (NLM) algorithm for image denoising in evaluating cardiac function in children with repaired tetralogy of Fallot (rTOF). Materials and Methods: Thirty-five patients with rTOF (mean age, 12 years; range, 7-18 years) were enrolled to undergo cardiac cine image acquisition, including two-dimensional (2D) b-SSFP, 3D b-SSFP, and 3D b-SSFP combined with NLM. End-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), and ejection fraction (EF) of the two ventricles were measured and indexed to body surface area. Acquisition time and image quality were recorded and compared among the three imaging sequences. Results: 3D b-SSFP with denoising vs. 2D b-SSFP had high correlation coefficients for EDV, ESV, SV, and EF of the left (0.959-0.991; p < 0.001) as well as right (0.755-0.965; p < 0.001) ventricular metrics. The image acquisition time ± standard deviation (SD) was 25.1 ± 2.4 seconds for 3D b-SSFP compared with 277.6 ± 0.7 seconds for 2D b-SSFP, indicating a significantly shorter time with the 3D than the 2D sequence (p < 0.001). Image quality score was better with 3D b-SSFP combined with denoising than with 3D b-SSFP (mean ± SD, 3.8 ± 0.6 vs. 3.5 ± 0.6; p = 0.005). Signal-to-noise ratios for blood and myocardium as well as contrast between blood and myocardium were higher for 3D b-SSFP combined with denoising than for 3D b-SSFP (p < 0.05 for all but septal myocardium). Conclusion: The 3D b-SSFP sequence can significantly reduce acquisition time compared to the 2D b-SSFP sequence for cine imaging in the evaluation of ventricular function in children with rTOF, and its quality can be further improved by combining it with an NLM denoising method.
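As a rough illustration of non-local means (NLM) denoising applied to a single cine frame, here is a sketch using scikit-image; the input file, patch sizes, and smoothing strength are illustrative defaults, not the parameters used in the study.

```python
# Sketch: NLM denoising of one grayscale cine frame with scikit-image.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

frame = np.load("cine_frame.npy").astype(np.float64)    # hypothetical 2D frame
frame = (frame - frame.min()) / (np.ptp(frame) + 1e-8)  # scale to [0, 1]

sigma = float(np.mean(estimate_sigma(frame)))            # estimated noise level
denoised = denoise_nl_means(
    frame,
    h=0.8 * sigma,         # filtering strength tied to the noise estimate
    sigma=sigma,
    patch_size=5,          # small patches compared within...
    patch_distance=6,      # ...a limited search window
    fast_mode=True,
)
```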

Deep Learning-Enabled Detection of Pneumoperitoneum in Supine and Erect Abdominal Radiography: Modeling Using Transfer Learning and Semi-Supervised Learning

  • Sangjoon Park;Jong Chul Ye;Eun Sun Lee;Gyeongme Cho;Jin Woo Yoon;Joo Hyeok Choi;Ijin Joo;Yoon Jin Lee
    • Korean Journal of Radiology / v.24 no.6 / pp.541-552 / 2023
  • Objective: Detection of pneumoperitoneum using abdominal radiography, particularly in the supine position, is often challenging. This study aimed to develop and externally validate a deep learning model for the detection of pneumoperitoneum using supine and erect abdominal radiography. Materials and Methods: A model that can utilize "pneumoperitoneum" and "non-pneumoperitoneum" classes was developed through knowledge distillation. To train the proposed model with limited training data and weak labels, it was trained using a recently proposed semi-supervised learning method called distillation for self-supervised and self-train learning (DISTL), which leverages the Vision Transformer. The proposed model was first pre-trained with chest radiographs to utilize common knowledge between modalities, and then fine-tuned and self-trained on labeled and unlabeled abdominal radiographs. The proposed model was trained using data from supine and erect abdominal radiographs. In total, 191,212 chest radiographs (CheXpert data) were used for pre-training, and 5,518 labeled and 16,671 unlabeled abdominal radiographs were used for fine-tuning and self-supervised learning, respectively. The proposed model was internally validated on 389 abdominal radiographs and externally validated on 475 and 798 abdominal radiographs from two institutions. We evaluated the performance in diagnosing pneumoperitoneum using the area under the receiver operating characteristic curve (AUC) and compared it with that of radiologists. Results: In the internal validation, the proposed model had an AUC, sensitivity, and specificity of 0.881, 85.4%, and 73.3% for the supine position and 0.968, 91.1%, and 95.0% for the erect position, respectively. In the external validation at the two institutions, the AUCs were 0.835 and 0.852 for the supine position and 0.909 and 0.944 for the erect position. In the reader study, the readers' performance improved with the assistance of the proposed model. Conclusion: The proposed model trained with the DISTL method can accurately detect pneumoperitoneum on abdominal radiography in both the supine and erect positions.
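The sketch below illustrates the generic teacher-student self-training loop that DISTL-style methods build on (pseudo-label unlabeled images with a teacher, train the student on a distillation loss, update the teacher as an exponential moving average of the student); it is a simplified stand-in with assumed shapes, model, and hyperparameters, not the authors' DISTL implementation.

```python
# Generic teacher-student self-training sketch (not the authors' DISTL code).
import torch
import torch.nn.functional as F
from torch import nn

def make_model():
    # Tiny CNN stand-in for the Vision Transformer backbone used in the paper.
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

student, teacher = make_model(), make_model()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
ema_decay, temperature = 0.996, 2.0

def self_train_step(labeled_x, labeled_y, unlabeled_x):
    # Supervised loss on the small labeled set.
    sup_loss = F.cross_entropy(student(labeled_x), labeled_y)
    # Distillation loss: match the teacher's soft pseudo-labels on unlabeled data.
    with torch.no_grad():
        soft_targets = F.softmax(teacher(unlabeled_x) / temperature, dim=1)
    distill_loss = F.kl_div(F.log_softmax(student(unlabeled_x) / temperature, dim=1),
                            soft_targets, reduction="batchmean")
    loss = sup_loss + distill_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Teacher follows the student via an exponential moving average.
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(ema_decay).add_(ps, alpha=1 - ema_decay)
    return loss.item()

# One illustrative step on random tensors shaped like small radiograph crops.
loss = self_train_step(torch.randn(4, 1, 64, 64), torch.randint(0, 2, (4,)),
                       torch.randn(8, 1, 64, 64))
```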