• Title/Summary/Keyword: camera

Influence of Mixture Non-uniformity on Methane Explosion Characteristics in a Horizontal Duct (수평 배관의 메탄 폭발특성에 있어서 불균일성 혼합기의 영향)

  • Ou-Sup Han;Yi-Rac Choi;HyeongHk Kim;JinHo Lim
    • Korean Chemical Engineering Research
    • /
    • v.62 no.1
    • /
    • pp.27-35
    • /
    • 2024
  • Fuel gases such as methane and propane are used in explosion-hazardous areas of domestic plants and can form non-uniform mixtures when they leak under varying process conditions. When fire and explosion risk assessments rely on literature data measured for uniform mixtures, the predicted damage can differ from that of actual explosion accidents caused by gas leaks. In this study, explosion characteristics such as explosion pressure and flame velocity were examined for non-uniform gas mixtures whose concentration gradients resemble those produced by a facility leak. The experiments were conducted in a closed 0.82 m long stainless steel duct, with observations recorded by a color high-speed camera and a piezoelectric pressure sensor. We also proposed a method for quantifying mixture non-uniformity, based on a regression model of the change in concentration difference with time in the explosion duct. Under the non-uniform conditions of this study, the flame surface area during methane flame propagation enlarged with increasing concentration non-uniformity and resembled the wrinkled flame structure observed in turbulent flames. The time to peak pressure of methane decreased and the explosion pressure increased as the non-uniformity increased. The deflagration index (KG) of methane under non-uniform concentrations ranged from 1.30 to 1.58 MPa·m/s, an increase of 17.7% relative to the uniform mixture.
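As a rough illustration of how a deflagration index of this kind is obtained, the sketch below applies the cubic-law relation KG = (dP/dt)max · V^(1/3) to a pressure-time trace; the trace, vessel volume, and sampling rate are hypothetical and not taken from the paper.

```python
import numpy as np

def deflagration_index(t, p, volume_m3):
    """Cubic-law deflagration index K_G = (dP/dt)_max * V^(1/3).

    t        : time stamps in seconds
    p        : pressure trace in MPa
    volume_m3: vessel (duct) volume in m^3
    """
    dp_dt_max = np.max(np.gradient(p, t))        # steepest pressure rise, MPa/s
    return dp_dt_max * volume_m3 ** (1.0 / 3.0)  # MPa*m/s

# Hypothetical example: synthetic pressure record from a closed duct.
t = np.linspace(0.0, 0.2, 2001)
p = 0.1 + 0.7 / (1.0 + np.exp(-(t - 0.1) / 0.005))   # S-shaped pressure rise
print(f"K_G = {deflagration_index(t, p, volume_m3=0.01):.2f} MPa*m/s")
```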

A Fusion Sensor System for Efficient Road Surface Monitoring on UGV (UGV에서 효율적인 노면 모니터링을 위한 퓨전 센서 시스템)

  • Seonghwan Ryu;Seoyeon Kim;Jiwoo Shin;Taesik Kim;Jinman Jung
    • Smart Media Journal
    • /
    • v.13 no.3
    • /
    • pp.18-26
    • /
    • 2024
  • Road surface monitoring is essential for maintaining road safety by managing risk factors such as rutting and cracks. Autonomous-driving-based UGVs equipped with high-performance 2D laser sensors enable more precise measurements, but the higher energy consumption of these sensors is constrained by limited battery capacity. In this paper, we propose a fusion sensor system for efficient road surface monitoring with UGVs. The proposed system combines color information from cameras with depth information from line laser sensors to accurately detect surface displacement. Furthermore, a dynamic sampling algorithm controls the scanning frequency of the line laser sensors according to whether the camera has detected a monitoring target, reducing unnecessary energy consumption. A power consumption model of the fusion sensor system is used to analyze its energy efficiency under various crack distributions and sensor characteristics in different mission environments. Performance analysis shows that, when the line laser sensor's active-state power consumption is set to twice its saving-state consumption, power efficiency improves by 13.3% compared with fixed sampling under the condition λ=10, µ=10.
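A minimal sketch of the camera-gated dynamic sampling idea described above; the state names, scan rates, and power figures are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class LineLaserController:
    """Switch the line laser between a saving rate and an active rate
    depending on whether the camera currently sees a monitoring target."""
    saving_hz: float = 5.0       # assumed low scan rate when nothing is detected
    active_hz: float = 20.0      # assumed high scan rate while a target is tracked
    saving_power_w: float = 1.0
    active_power_w: float = 2.0  # active state drawing 2x the saving state

    def scan_rate(self, camera_detected_target: bool) -> float:
        return self.active_hz if camera_detected_target else self.saving_hz

    def power_draw(self, camera_detected_target: bool) -> float:
        return self.active_power_w if camera_detected_target else self.saving_power_w

# Hypothetical per-frame camera detections along a survey run (1 s per frame).
detections = [False, False, True, True, False, True, False, False]
ctrl = LineLaserController()
print("scan rates:", [ctrl.scan_rate(d) for d in detections])
print("energy over run:", sum(ctrl.power_draw(d) for d in detections), "J")
```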

Precision Evaluation of Expressway Incident Detection Based on Dash Cam (차량 내 영상 센서 기반 고속도로 돌발상황 검지 정밀도 평가)

  • Sanggi Nam;Younshik Chung
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.6
    • /
    • pp.114-123
    • /
    • 2023
  • With the development of computer vision technology, video sensors such as CCTV cameras are being used to detect traffic incidents. However, most incidents are currently detected with fixed imaging equipment, so detection remains limited in blind spots beyond the coverage of that equipment. With the recent development of edge-computing technology, real-time analysis of mobile image data has become possible. The purpose of this study is to evaluate the feasibility of detecting expressway incidents by applying computer vision technology to dash cam footage. To this end, annotation data were constructed from 4,388 dash cam still frames collected by the Korea Expressway Corporation and analyzed using the YOLO algorithm. The prediction accuracy for all objects exceeded 70%, and the precision for traffic accidents was about 85%. The mAP (mean Average Precision) was 0.769; by object class, the AP (Average Precision) was highest for traffic accidents at 0.904 and lowest for debris at 0.629.
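For reference, the sketch below shows one common way an Average Precision value of this kind is computed from ranked detections (all-point interpolation over the precision-recall curve); the detection scores and match flags are made up for illustration, and the paper does not state which AP variant it used.

```python
import numpy as np

def average_precision(scores, is_true_positive, num_ground_truth):
    """All-point interpolated AP from ranked detections of one class.

    scores           : confidence of each detection
    is_true_positive : 1 if the detection matched a ground-truth box, else 0
    num_ground_truth : number of ground-truth objects of this class
    """
    order = np.argsort(scores)[::-1]                 # rank by confidence
    tp = np.cumsum(np.asarray(is_true_positive)[order])
    fp = np.cumsum(1 - np.asarray(is_true_positive)[order])
    recall = tp / num_ground_truth
    precision = tp / (tp + fp)
    # make precision monotonically non-increasing, then integrate over recall
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    recall = np.concatenate(([0.0], recall))
    precision = np.concatenate(([precision[0]], precision))
    return float(np.sum(np.diff(recall) * precision[1:]))

# Hypothetical detections for one class (e.g. "traffic accident").
print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1], num_ground_truth=4))
```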

Ex situ combined in situ target strength of Japanese horse mackerel using a broadband echosounder (중심 주파수 200 kHz의 과학어군탐지기를 활용한 전갱이의 광대역 주파수 특성)

  • Myounghee KANG;Hansoo KIM;Dongha KANG;Jihoon JUNG;Fredrich SIMANUNGKALIT;Donhyug KANG
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.60 no.2
    • /
    • pp.142-151
    • /
    • 2024
  • Recently, domestic fishery production of Japanese horse mackerel has been continuously decreasing. To achieve sustainable fishing of this species, it is essential to acquire its target strength (TS) for accurate biomass estimation and to study its ecological characteristics. To date, there has been no TS research on Japanese horse mackerel using a broadband echosounder. In this study, for the first time, we synchronized an underwater camera with a broadband echosounder (nominal center frequency of 200 kHz, range: 160-260 kHz) to measure TS as a function of body size (16.8-35.5 cm) and swimming angle. The relationship between length and body weight showed the general tendency of body weight increasing with length. The averaged frequency spectra exhibited a similar pattern regardless of body length, with no significant fluctuations across the band. The lowest TS value was observed at 243 kHz, while the highest TS values were recorded at 180 and 257.5 kHz. The frequency spectra were flat at swimming angles of -5, 0, 30, 60, 75, and 80°, while more general trends in the spectra with swimming angle proved difficult to identify. The results of this study can serve as fundamental data for Japanese horse mackerel biomass estimation and ecological research.
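As background on the quantity being measured, the sketch below converts a backscattering cross-section into target strength via the standard relation TS = 10 log10(σbs) and fits the conventional TS = 20 log10(L) + b20 length model; the sample values are invented for illustration and are not the paper's measurements.

```python
import numpy as np

def target_strength_db(sigma_bs_m2):
    """Target strength in dB re 1 m^2: TS = 10*log10(sigma_bs)."""
    return 10.0 * np.log10(sigma_bs_m2)

def fit_b20(lengths_cm, ts_db):
    """Fit the conventional model TS = 20*log10(L) + b20 (L in cm),
    i.e. estimate b20 as the mean residual."""
    return float(np.mean(ts_db - 20.0 * np.log10(lengths_cm)))

# Hypothetical fish: lengths (cm) and measured backscattering cross-sections (m^2).
lengths = np.array([16.8, 22.0, 28.5, 35.5])
sigma_bs = np.array([2.0e-5, 3.5e-5, 6.0e-5, 9.5e-5])
ts = target_strength_db(sigma_bs)
print("TS (dB):", np.round(ts, 1))
print("b20 estimate:", round(fit_b20(lengths, ts), 1), "dB")
```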

Potential Effects of Hikers on Activity Pattern of Mammals in Baekdudaegan Protected Area (등산객의 활동이 백두대간보호지역에 서식하는 포유류 군집의 활동 패턴에 미치는 잠재적 영향)

  • Hyun-Su Hwang;Hyoun-Gi Cha;Naeyoung Kim;Hyungsoo Seo
    • Korean Journal of Environment and Ecology
    • /
    • v.37 no.6
    • /
    • pp.418-428
    • /
    • 2023
  • This study was conducted to clarify the overlap in daily activity patterns between hikers and mammals in the Baekdudaegan protected area from 2015 to 2019. To investigate the behavioral relationship between hikers and mammals, we set camera traps on the ridge of the Baekdudaegan protected area. The daily activity patterns of the yellow-throated marten (Martes flavigula) and Siberian chipmunk (Eutamias sibiricus) overlapped highly with those of hikers over the whole study period. The daily activity patterns of Siberian roe deer (Capreolus pygargus) and water deer (Hydropotes inermis) overlapped highly with hikers only in spring, and those of wild boar (Sus scrofa) overlapped with hikers in winter. However, the leopard cat (Prionailurus bengalensis), raccoon dog (Nyctereutes procyonoides), and Eurasian badger (Meles leucurus) did not overlap significantly with hikers during the study period. The daily activity patterns of the eight mammal species differed according to species-specific behavior and temporal characteristics, and the overlap between mammals and hikers differed by season. Differences in activity-pattern overlap between mammals and humans may lead to differences in human impact on mammal populations. Information on species-specific and season-specific interactions between hikers and mammals can provide basic ecological data for the management and conservation of mammal populations and their habitats.
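A common way to quantify this kind of activity overlap is the coefficient of overlapping Δ, the area under the minimum of two activity density curves. The sketch below estimates it with a simple kernel density approach on hypothetical detection times; a circular-time kernel, as used in dedicated packages such as R's overlap, would be more rigorous.

```python
import numpy as np
from scipy.stats import gaussian_kde

def overlap_coefficient(times_a, times_b, grid_points=288):
    """Estimate Delta = integral of min(f_a, f_b) over time of day (hours)."""
    grid = np.linspace(0.0, 24.0, grid_points)
    f_a = gaussian_kde(times_a)(grid)
    f_b = gaussian_kde(times_b)(grid)
    f_a /= np.trapz(f_a, grid)          # renormalize densities on [0, 24)
    f_b /= np.trapz(f_b, grid)
    return float(np.trapz(np.minimum(f_a, f_b), grid))

# Hypothetical camera-trap detection times (hour of day).
hiker_times  = np.random.default_rng(0).normal(13.0, 2.5, 200) % 24
marten_times = np.random.default_rng(1).normal(11.0, 3.0, 80) % 24
print(f"Delta ≈ {overlap_coefficient(hiker_times, marten_times):.2f}")
```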

Improvement of Face Recognition Algorithm for Residential Area Surveillance System Based on Graph Convolution Network (그래프 컨벌루션 네트워크 기반 주거지역 감시시스템의 얼굴인식 알고리즘 개선)

  • Tan Heyi;Byung-Won Min
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.2
    • /
    • pp.1-15
    • /
    • 2024
  • The construction of smart communities is a new approach and an important measure for ensuring the security of residential areas. To address the low face recognition accuracy caused by facial features being distorted by surveillance camera angles and other external factors, this paper proposes the following optimization strategies in designing a face recognition network. First, a global graph convolution module is designed to encode facial features as graph nodes, and a multi-scale feature enhancement residual module is designed to extract facial keypoint features in conjunction with the global graph convolution module. Second, the detected facial keypoints are assembled into a directed graph structure, and graph attention mechanisms are used to enhance the representational power of the graph features. Finally, tensor computations are performed on the graph features of two faces, and the aggregated features are passed to a fully connected layer to determine whether the two identities are the same. In experiments, the proposed network achieves an AUC of 85.65% for facial keypoint localization on the public 300W dataset and 88.92% on a self-built dataset. For face recognition accuracy, it achieves 83.41% on the public IBUG dataset and 96.74% on a self-built dataset. The results demonstrate that the network exhibits high detection and recognition accuracy for faces in surveillance video.
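To make the graph-attention step concrete, here is a minimal single-head graph attention layer in PyTorch in the spirit of GAT; it is a generic sketch, not the authors' architecture, and the feature dimensions and adjacency matrix are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention: score each edge, softmax over neighbors,
    then aggregate the projected neighbor features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) node features, adj: (N, N) 0/1 adjacency with self-loops
        h = self.proj(x)                                    # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.attn(pairs).squeeze(-1))      # (N, N) edge scores
        e = e.masked_fill(adj == 0, float('-inf'))          # keep only real edges
        alpha = torch.softmax(e, dim=-1)                    # attention per neighbor
        return alpha @ h                                    # aggregated node features

# Hypothetical example: 68 facial keypoints, each a 2-D coordinate feature.
keypoints = torch.randn(68, 2)
adj = ((torch.rand(68, 68) > 0.8).float() + torch.eye(68)).clamp(max=1)
out = GraphAttentionLayer(2, 16)(keypoints, adj)
print(out.shape)   # torch.Size([68, 16])
```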

Research on Bridge Maintenance Methods Using BIM Model and Augmented Reality (BIM 모델과 증강현실을 활용한 교량 유지관리방안 연구)

  • Choi, Woonggyu;Pa Pa Win Aung;Sanyukta Arvikar;Cha, Gichun;Park, Seunghee
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.1
    • /
    • pp.1-9
    • /
    • 2024
  • Bridges, as major civil structures, have increased in number from 584 to 38,405 since the 1970s. As the stock grows, the number of bridges in service for more than 30 years is expected to reach 21,737 (71%) by 2030, and facility maintenance that relies largely on human resources can lead to fatal accidents. Accordingly, the importance of bridge safety inspection and maintenance is increasing, and decision-making support is needed for supervisors who manage multiple bridges. Currently, bridge safety inspection and maintenance involve writing damage, condition, location, and specifications on an exterior survey map by hand or recording them with camera photographs. However, mis-recorded damage or defects, supervisor mistakes, and typos may reduce the reliability of the overall safety inspection and diagnosis. To improve this, this study visualizes damage data recorded in a BIM model in an AR environment and proposes a bridge maintenance approach that lets a small workforce maintain bridges by supporting supervisors' maintenance decision-making.
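One simple way to organize the damage records that such a system attaches to a BIM model is a keyed mapping from BIM element identifiers to inspection entries; the sketch below is purely illustrative and does not reflect the authors' data schema or any particular BIM/AR toolkit.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DamageRecord:
    """One inspection finding to be anchored to a BIM element."""
    element_guid: str            # GUID of the BIM element (e.g. a girder or pier)
    damage_type: str             # e.g. "crack", "spalling", "leakage"
    location_note: str           # human-readable position on the element
    width_mm: float | None = None
    photo_path: str | None = None
    inspected_on: date = field(default_factory=date.today)

# Hypothetical inspection log keyed by BIM element GUID, ready to be
# displayed next to the corresponding element in an AR view.
inspection_log: dict[str, list[DamageRecord]] = {}
rec = DamageRecord("girder-guid-placeholder", "crack", "mid-span, soffit", width_mm=0.3)
inspection_log.setdefault(rec.element_guid, []).append(rec)
print(len(inspection_log["girder-guid-placeholder"]), "record(s) for this element")
```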

Research on soil composition measurement sensor configuration and UI implementation (토양 성분 측정 센서 구성 및 UI 구현에 관한 연구)

  • Ye Eun Park;Jin Hyoung Jeong;Jae Hyun Jo;Young Yoon Chang;Sang Sik Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.1
    • /
    • pp.76-81
    • /
    • 2024
  • Recently, agricultural methods have been shifting from experience-based to data-based agriculture. The changes in agricultural production driven by the 4th Industrial Revolution largely occur in three areas: smart sensing and monitoring, smart analysis and planning, and smart control. To realize open-field smart agriculture, information on the physical and chemical properties of soil is essential. Conventional physicochemical measurements are conducted in a laboratory after collecting samples, which consumes considerable cost, labor, and time, so measurement technology that can quickly assess soil in the field is urgently needed. In addition, a soil analysis system is needed that can be carried by the operator and used in Korea's rice paddies, open fields, and greenhouse facilities. To address this, our goal is to develop and commercialize software that can collect soil samples and analyze the resulting information. In this study, basic soil composition measurements were conducted using sensors consisting of a hardness sensor and electrode sensors. In future research, we plan to develop a system that performs soil sampling using a CCD camera, an ultrasonic sensor, and a sampler. Accordingly, we implemented a sensor package and a soil analysis UI that can measure and analyze soil condition in real time, including a hardness display based on a load cell and moisture, pH, and EC displays based on conductivity.
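As a rough sketch of how such real-time readings might be pulled into a UI, the code below polls a serial-connected sensor board and parses one line per reading; the port name, baud rate, and comma-separated "hardness,moisture,pH,EC" frame format are all assumptions for illustration, not the paper's actual protocol.

```python
import serial  # pyserial

FIELDS = ("hardness_kgf", "moisture_pct", "ph", "ec_ds_per_m")

def read_soil_sample(port="/dev/ttyUSB0", baud=9600):
    """Read one comma-separated measurement frame from the sensor board."""
    with serial.Serial(port, baud, timeout=2) as conn:
        line = conn.readline().decode("ascii", errors="ignore").strip()
    values = [float(v) for v in line.split(",")]
    return dict(zip(FIELDS, values))

if __name__ == "__main__":
    sample = read_soil_sample()
    # A minimal text "UI": one formatted line per quantity.
    for name, value in sample.items():
        print(f"{name:>14}: {value:.2f}")
```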

Color Sensing Technology using Arduino and Color Sensor (아두이노와 컬러센서를 이용한 색상 감지 기술)

  • Dusub Song;Hojun Yeom;Sangsoo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.13-17
    • /
    • 2024
  • A color sensor is an optical sensor used to capture images of objects, including the human body, and reproduce them on a monitor. A color sensor quantifies the red, green, and blue light coming from an object, expresses each as a digital number, and can judge the state of the object by comparing the values or their ratios. In this study, standard colors displayed on a monitor were measured using a color sensor, and the magnitudes of the red, green, and blue components (RGB values) were compared with the values specified by the computer. When measured with the TCS34725 color sensor, even when the light generated by the computer consisted of only one or two of the red, green, and blue primaries, the sensor detected all three components. Additionally, when the colors of two monitors set to the same RGB values were measured, different RGB values were obtained. These results can be attributed to imperfections in the color filters used to render colors on the monitor and to the imperfect optical characteristics of the photodiodes in the color sensor. Therefore, when photographing an object and judging its condition based on color, the same type of camera or smartphone must be used.
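A minimal sketch of reading RGB values from a TCS34725 and comparing their ratios, assuming the Adafruit CircuitPython driver (adafruit_tcs34725) on a board with an I2C bus; the driver, property name, and comparison logic are assumptions for illustration only.

```python
import board                 # CircuitPython/Blinka board definitions (assumed available)
import adafruit_tcs34725     # Adafruit TCS34725 driver (assumed installed)

def read_rgb():
    """Return one (R, G, B) reading from the sensor as 0-255 integers."""
    i2c = board.I2C()
    sensor = adafruit_tcs34725.TCS34725(i2c)
    return sensor.color_rgb_bytes

def normalized_ratios(rgb):
    """Express each channel as a fraction of the total detected light."""
    total = max(sum(rgb), 1)
    return tuple(round(c / total, 3) for c in rgb)

if __name__ == "__main__":
    rgb = read_rgb()
    print("raw RGB:", rgb, "ratios:", normalized_ratios(rgb))
```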

Development of Stream Cover Classification Model Using SVM Algorithm based on Drone Remote Sensing (드론원격탐사 기반 SVM 알고리즘을 활용한 하천 피복 분류 모델 개발)

  • Jeong, Kyeong-So;Go, Seong-Hwan;Lee, Kyeong-Kyu;Park, Jong-Hwa
    • Journal of Korean Society of Rural Planning
    • /
    • v.30 no.1
    • /
    • pp.57-66
    • /
    • 2024
  • This study aimed to develop a precise vegetation cover classification model for small streams by combining drone remote sensing with support vector machine (SVM) techniques. The study area was the Idong stream in Geosan-gun, Chunbuk, South Korea. The first stage involved image acquisition with a fixed-wing drone (eBee) carrying two sensors: the S.O.D.A visible camera for detailed visuals and the Sequoia+ multispectral sensor for spectral data. The survey captured the stream's features on August 18, 2023. In the second stage, a range of vegetation indices was calculated from the multispectral images, including the widely used normalized difference vegetation index (NDVI), the soil-adjusted vegetation index (SAVI), which accounts for soil background, and the normalized difference water index (NDWI) for identifying water bodies. In the third stage, an SVM model was developed from the calculated indices; the RBF kernel was chosen, and optimal values for the cost (C) and gamma hyperparameters were determined. The results are as follows. (a) High-resolution imaging: the drone-based acquisition delivered high-resolution images (1 cm/pixel) of the Idong stream that effectively captured its morphology, including stream width, variations in the streambed, and the vegetation cover patterns along the banks and bed. (b) Vegetation insights through indices: the calculated indices revealed distinct spatial patterns in vegetation cover and moisture content; NDVI was the strongest indicator of vegetation cover, while SAVI and NDWI provided insight into moisture variations. (c) Accurate classification with SVM: the SVM model, driven by the combination of NDVI, SAVI, and NDWI, achieved an accuracy of 0.903 calculated from the confusion matrix, translating to precise classification of vegetation, soil, and water within the stream area. The findings demonstrate the effectiveness of drone remote sensing and SVM techniques for developing accurate vegetation cover classification models for small streams. Such models hold potential for stream monitoring, informed management practices, and stream restoration efforts. By incorporating imagery and details from specific drone and sensor technologies, we can gain a deeper understanding of small streams and develop effective strategies for their protection and management.
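To illustrate the index-plus-SVM pipeline described above, here is a compact sketch using the standard NDVI, SAVI, and NDWI formulas and scikit-learn's RBF-kernel SVC with a grid search over C and gamma; the band arrays, labels, and parameter grid are placeholders, not the study's data or tuned values.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

def indices(red, nir, green, L=0.5):
    """Per-pixel NDVI, SAVI (soil-adjustment factor L), and NDWI features."""
    ndvi = (nir - red) / (nir + red)
    savi = (nir - red) / (nir + red + L) * (1.0 + L)
    ndwi = (green - nir) / (green + nir)
    return np.stack([ndvi, savi, ndwi], axis=-1)

# Placeholder reflectance bands and labels (0=water, 1=soil, 2=vegetation).
rng = np.random.default_rng(42)
red, nir, green = (rng.uniform(0.01, 0.6, 3000) for _ in range(3))
X = indices(red, nir, green)
y = rng.integers(0, 3, size=3000)   # in practice: labeled training pixels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                    cv=3)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_, "test accuracy:", grid.score(X_te, y_te))
```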