• Title/Summary/Keyword: Cameras

Search Results: 2,264

A Study on the Impact of Forklift Institutional, Technical, and Educational Factors on Disaster Reduction (지게차의 제도적, 기술적, 교육적 요인이 재해감소에 미치는 영향에 관한 연구)

  • Young Min Park;Jin Eog Kim
    • Journal of the Society of Disaster Information / v.19 no.4 / pp.770-778 / 2023
  • Purpose: To reduce forklift industrial accidents, it is necessary to classify contributing factors as institutional, technical, or educational and to examine whether each factor affects disaster reduction. Method: Descriptive statistical analysis, validity analysis, reliability analysis, and multiple regression analysis were conducted with the SPSS 18 program on an offline questionnaire scored on a 5-point Likert scale. Result: The multiple regression analysis showed that the independent variables of institutional, technical, and educational factors explain about 62.5% of the variance in disaster prevention, the dependent variable. The regression model was statistically significant (F = 118.775, p < 0.01). Conclusion: First, electric forklifts weighing less than 3 tons should be included in the inspection system to prevent disasters. Second, installation of front and rear cameras and forklift line beams should be made mandatory to prevent forklift collision disasters. Third, special training related to forklifts should be conducted every year, and both drivers and nearby workers should be included in that training.
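
A minimal sketch of the analysis described above, using Python's statsmodels in place of SPSS 18: three Likert-scale factor scores regress onto a disaster-reduction score. The column names and synthetic responses are illustrative assumptions, not the study's data.

```python
# Hedged sketch: multiple regression of a disaster-reduction score on three
# factor scores, mirroring the SPSS analysis in the abstract. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical number of questionnaire respondents

# Synthetic 5-point Likert-scale factor scores (1..5)
df = pd.DataFrame({
    "institutional": rng.integers(1, 6, n),
    "technical": rng.integers(1, 6, n),
    "educational": rng.integers(1, 6, n),
})
# Synthetic dependent variable: disaster-reduction score
df["reduction"] = (0.4 * df["institutional"] + 0.3 * df["technical"]
                   + 0.3 * df["educational"] + rng.normal(0, 0.5, n))

X = sm.add_constant(df[["institutional", "technical", "educational"]])
model = sm.OLS(df["reduction"], X).fit()

# R-squared plays the role of the ~62.5% explained variance; the F statistic
# and its p-value correspond to the model verification reported above.
print(model.rsquared, model.fvalue, model.f_pvalue)
```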

Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee;Duk-jin Kim;Junwoo Kim;Juyoung Song
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1245-1254 / 2023
  • Significant research has been conducted on the W-band synthetic aperture radar (SAR) system that utilizes 77 GHz frequency modulation continuous wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, the point cloud acquired from stereo cameras or LiDAR must be transformed along 6 degrees of freedom (DOF) and applied to the SAR signal processing. However, matching images is difficult because images acquired from different sensors have different geometric structures. In this study, we present a method to extract an optimized depth map by obtaining the 6 DOF of the point cloud with a gradient descent method based on the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. The SAR image reconstructed with the entropy-based gradient descent method showed a decrease of 53.2828 in mean square error and an increase of 0.5529 in the structural similarity index compared with SAR images reconstructed from radar coordinates.
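
A rough numerical sketch of the entropy-driven 6-DOF search outlined above. The render() function is only a placeholder for SAR image formation (the paper feeds the transformed point cloud into its actual signal-processing chain), and the learning rate, step count, and toy point cloud are assumptions.

```python
# Hedged sketch of entropy-based 6-DOF optimization. render() is a stand-in
# for SAR image formation, not the authors' processing chain.
import numpy as np
from scipy.spatial.transform import Rotation

def render(points, dof, bins=64):
    """Apply a 6-DOF transform (rx, ry, rz in degrees, tx, ty, tz) and rasterize
    the x-y footprint into a normalized 2-D histogram (placeholder image)."""
    rot = Rotation.from_euler("xyz", dof[:3], degrees=True)
    p = rot.apply(points) + dof[3:]
    img, _, _ = np.histogram2d(p[:, 0], p[:, 1], bins=bins, range=[[-5, 5], [-5, 5]])
    return img / (img.sum() + 1e-12)

def entropy(img):
    p = img[img > 0]
    return -np.sum(p * np.log(p))

def optimize_dof(points, dof0, lr=0.05, eps=1e-2, steps=100):
    """Finite-difference gradient descent on image entropy (lower = sharper)."""
    dof = np.asarray(dof0, dtype=float)
    for _ in range(steps):
        base = entropy(render(points, dof))
        grad = np.zeros(6)
        for i in range(6):
            d = dof.copy()
            d[i] += eps
            grad[i] = (entropy(render(points, d)) - base) / eps
        dof -= lr * grad
    return dof

# Toy usage with a random point cloud standing in for stereo/LiDAR data.
cloud = np.random.default_rng(1).normal(size=(2000, 3))
print(optimize_dof(cloud, dof0=[5, -3, 2, 0.5, -0.2, 0.1]))
```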

A Study on the Development of an Indoor Positioning Support System for Providing Landmark Information (랜드마크 정보 제공을 위한 실내위치측위 지원 시스템 구축에 관한 연구)

  • Ock-Woo NAM;Chang-Soo SHIN;Yun-Soo CHOI
    • Journal of the Korean Association of Geographic Information Studies / v.26 no.4 / pp.130-144 / 2023
  • Recently, various positioning technologies based on signals and on images have been studied to obtain accurate indoor location information. Among these, image-based positioning, which determines the location of a mobile terminal using images acquired through cameras together with sensor data collected as needed, is being actively researched. Image-based positioning determines indoor location by matching mobile terminal photos with virtual landmark images, and for this purpose indoor spatial information must be built for landmarks such as billboards, vending machines, and ATMs. To construct this indoor spatial information, road-view-style panoramic images and accurate 3D survey results were obtained for 13 buildings of the Electronics and Telecommunications Research Institute (ETRI). When the 3D total station results were compared with the terrestrial LiDAR panoramic image coordinates, the coordinate and distance differences were within about 0.10 m, confirming that landmarks accurate enough for use in indoor positioning could be constructed. By using these terrestrial LiDAR results to perform the 3D landmark modeling needed for image-based positioning, landmark information that could not be built from existing as-built drawings alone could be modeled more quickly.

Quantitative Evaluation of Super-resolution Drone Images Generated Using Deep Learning (딥러닝을 이용하여 생성한 초해상화 드론 영상의 정량적 평가)

  • Seo, Hong-Deok;So, Hyeong-Yoon;Kim, Eui-Myoung
    • Journal of Cadastre & Land InformatiX / v.53 no.2 / pp.5-18 / 2023
  • As the development of drones and sensors accelerates, new services and values are created by fusing data acquired from the various sensors mounted on drones. However, spatial information constructed through data fusion depends mainly on imagery, and data quality is determined by the specifications and performance of the hardware. In addition, because expensive equipment is required to construct high-quality spatial information, such approaches are difficult to apply in the field. In this study, super-resolution was performed by applying deep learning to low-resolution images acquired through RGB and thermal (THM) cameras mounted on a drone, and quantitative evaluation and feature-point extraction were performed on the generated high-resolution images. The experiments showed that the high-resolution images generated by super-resolution maintained the characteristics of the original images, and as the resolution improved, more feature points could be extracted than from the original images. Therefore, applying low-resolution images to a super-resolution deep learning model is judged to be a new way to construct high-quality spatial information without being restricted by hardware.
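
A small sketch of the feature-point comparison described above: ORB keypoints are counted in a low-resolution image and in an upscaled version. Bicubic interpolation stands in for the deep-learning super-resolution model, and the synthetic input image is only a placeholder for a real drone frame.

```python
# Hedged sketch: compare feature-point counts before and after upscaling.
# cv2.resize (bicubic) is only a stand-in for the super-resolution network.
import cv2
import numpy as np

# Synthetic low-resolution grayscale image standing in for a drone frame;
# in practice, load a real image with cv2.imread(..., cv2.IMREAD_GRAYSCALE).
rng = np.random.default_rng(0)
low = (rng.random((180, 240)) * 255).astype(np.uint8)

high = cv2.resize(low, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

orb = cv2.ORB_create(nfeatures=5000)
kp_low = orb.detect(low, None)
kp_high = orb.detect(high, None)

# The abstract reports more extractable features after super-resolution;
# this simply prints the keypoint counts for the two versions.
print(len(kp_low), len(kp_high))
```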

Analysis of Infrared Characteristics According to Cavity Depth Using RP Images Converted into Numerical Data (수치 데이터로 변환된 RP 이미지를 활용하여 공동 깊이에 따른 적외선 특성 분석)

  • Jang, Byeong-Su;Kim, YoungSeok;Kim, Sewon;Choi, Hyun-Jun;Yoon, Hyung-Koo
    • Journal of the Korean Geotechnical Society / v.40 no.3 / pp.77-84 / 2024
  • Aging and damaged underground utilities cause cavities and ground subsidence under roads, which can lead to economic losses and endanger road users. This study used infrared cameras to assess the thermal characteristics of such cavities and evaluated their reliability using a CNN algorithm. PVC pipes were embedded at various depths in a test site measuring 400 cm × 50 cm × 40 cm. Concrete blocks were used to simulate road surfaces, and measurements were taken from 4 PM to noon the following day. The initial temperatures measured by the infrared camera were 43.7℃, 43.8℃, and 41.9℃, reflecting atmospheric temperature changes during the measurement period. The RP (recurrence plot) algorithm generated images at four resolutions: 10,000 × 10,000, 2,000 × 2,000, 1,000 × 1,000, and 100 × 100 pixels. The accuracy of the CNN model using these RP images as input was 99%, 97%, 98%, and 96%, respectively. These results represent a considerable improvement over the 73% accuracy obtained using time-series images, an improvement of more than 20% when the RP-based inputs are used.
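
Assuming RP here denotes a recurrence plot, the common way of turning a time series into an image for CNN input, the conversion can be sketched as follows; the temperature series and the recurrence threshold are synthetic, not the study's measurements.

```python
# Hedged sketch: convert a 1-D temperature time series into a recurrence plot
# (RP) image, the kind of numerical-data-to-image conversion the abstract
# describes before CNN classification. Series and threshold are synthetic.
import numpy as np

def recurrence_plot(series, threshold=0.1):
    """Binary RP: pixel (i, j) is 1 when |x_i - x_j| <= threshold."""
    x = np.asarray(series, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])
    return (dist <= threshold).astype(np.uint8)

# Synthetic cooling curve standing in for the 4 PM-to-noon infrared readings.
t = np.linspace(0, 20, 1000)
temps = 43.7 * np.exp(-0.05 * t) + np.random.default_rng(0).normal(0, 0.05, t.size)

rp = recurrence_plot(temps, threshold=0.2)
print(rp.shape)  # 1000 x 1000 here; the paper resizes RPs to several resolutions
```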

A Fusion Sensor System for Efficient Road Surface Monitoring on UGV (UGV에서 효율적인 노면 모니터링을 위한 퓨전 센서 시스템)

  • Seonghwan Ryu;Seoyeon Kim;Jiwoo Shin;Taesik Kim;Jinman Jung
    • Smart Media Journal / v.13 no.3 / pp.18-26 / 2024
  • Road surface monitoring is essential for maintaining road safety through the management of risk factors such as rutting and cracks. Autonomous-driving-based UGVs equipped with high-performance 2D laser sensors enable more precise measurements, but the high energy consumption of these sensors is constrained by limited battery capacity. In this paper, we propose a fusion sensor system for efficient road surface monitoring with UGVs. The proposed system combines color information from cameras with depth information from line laser sensors to accurately detect surface displacement. Furthermore, a dynamic sampling algorithm controls the scanning frequency of the line laser sensors according to whether the camera sensors detect monitoring targets, reducing unnecessary energy consumption. A power consumption model of the fusion sensor system is used to analyze its energy efficiency for various crack distributions and sensor characteristics in different mission environments. The performance analysis shows that, when the power consumption of the line laser sensor in the active state is set to twice that of the saving state, power efficiency increases by 13.3% compared with fixed sampling under the condition λ = 10, µ = 10.
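
A minimal sketch of the camera-gated sampling policy described above: the line laser scans at a low saving-state rate until the camera flags a monitoring target, then switches to the higher active rate. The scan rates, the 2x active-to-saving power ratio, and the camera_detects() stub are illustrative assumptions.

```python
# Hedged sketch of the dynamic sampling idea: raise the line-laser scan rate
# only while the camera reports a monitoring target. Rates, powers, and the
# camera_detects() stub are illustrative, not the paper's values.
import random

SAVING_HZ, ACTIVE_HZ = 5.0, 10.0   # laser scan rates (assumed)
SAVING_W, ACTIVE_W = 1.0, 2.0      # active power = 2x saving state, as in the abstract

def camera_detects() -> bool:
    """Stub for the camera-based crack/rutting detector."""
    return random.random() < 0.2

def run(steps=1000, dt=0.1):
    energy, scans = 0.0, 0.0
    for _ in range(steps):
        if camera_detects():
            rate, power = ACTIVE_HZ, ACTIVE_W
        else:
            rate, power = SAVING_HZ, SAVING_W
        scans += rate * dt
        energy += power * dt
    return scans, energy  # more scans per joule than always-active scanning

print(run())
```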

Implementation of a Vibration Notification System to Support Driving for Drivers with Cognitive Delay Impairment

  • Gyu-Seok Lee;Tae-Sung Kim;Myeong-Chul Park
    • Journal of the Korea Society of Computer and Information / v.29 no.4 / pp.115-123 / 2024
  • In this paper, we propose a vibration notification system that combines navigation information with a wearable band to ensure safe driving for transportation-vulnerable drivers. The system transmits navigation driving information to a linked application, converts it into vibration signals, and provides notifications through the wearable band. Existing navigation systems focus on route guidance and location information, which disperses the driver's attention; for drivers with mobility impairments, safety and convenience suffer because such systems rely on standard visual cues while these drivers recognize stimuli with a delay, contributing to an increasingly high traffic accident rate. To solve this problem, navigation driving information is converted into vibration signals through the linked application, and vibration notifications for events, left turns, right turns, and speeding are provided through the wearable band to ensure driver safety and convenience. In future work, we will use cameras and vehicle sensors to improve safety awareness inside and outside the vehicle by adding a function that provides vibration and LED notifications when the vehicle approaches or recognizes an object, and we will continue research toward a safer driving environment.
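
A small sketch of the event-to-vibration mapping the system performs: navigation events received by the linked application are translated into vibration patterns for the wearable band. The event names, pulse patterns, and send_to_band() stub are hypothetical, not the implemented interface.

```python
# Hedged sketch: map navigation events to vibration patterns (milliseconds of
# alternating on/off pulses) for a wearable band. All names and patterns are
# hypothetical; send_to_band() stands in for the band's wireless interface.
VIBRATION_PATTERNS = {
    "left_turn":  [300, 100, 300],             # two long pulses
    "right_turn": [100, 100, 100, 100, 100],   # three short pulses
    "speeding":   [600],                       # one sustained pulse
    "event":      [200, 200, 200],             # generic route event
}

def send_to_band(pattern):
    print("vibrate:", pattern)  # placeholder for the actual wireless write

def notify(event: str):
    pattern = VIBRATION_PATTERNS.get(event)
    if pattern:
        send_to_band(pattern)

notify("left_turn")
```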

Research on APC Verification for Disaster Victims and Vulnerable Facilities (재난약자 및 취약시설에 대한 APC실증에 관한 연구)

  • Seungyong Kim;Incheol Hwang;Dongsik Kim;Jungjae Shin;Seunggap Yong
    • Journal of the Society of Disaster Information / v.20 no.1 / pp.199-205 / 2024
  • Purpose: This study aims to improve the recognition rate of Auto People Counting (APC) so that the number of evacuees remaining in disaster-vulnerable facilities, such as nursing homes, can be accurately identified and provided to firefighting and other response agencies in the event of a disaster. Methods: A baseline was established using CNN (convolutional neural network) models to improve the algorithm for recognizing images of people entering and leaving, captured by cameras installed in actual disaster-vulnerable facilities operating APC systems. Various algorithms were analyzed, the top seven candidates were selected, and transfer-learning models were used to select the algorithm with the best performance. Results: The experiments confirmed the precision and recall of the DenseNet201 and ResNet152V2 models, which exhibited the best performance in terms of time and accuracy. Both models achieved 100% accuracy for all labels, with the DenseNet201 model showing superior performance. Conclusion: The optimal algorithm applicable to APC was selected from among various artificial intelligence algorithms. Further research on algorithm analysis and learning is required to accurately identify people entering and leaving disaster-vulnerable facilities in various disaster situations, such as future emergencies.
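
A sketch of the transfer-learning setup described above, building a DenseNet201 classifier with frozen ImageNet weights and a new head; the input size, head layout, and two entry/exit labels are assumptions rather than the study's configuration.

```python
# Hedged sketch: transfer learning with DenseNet201 for APC image classification.
# Input size, head design, and the two entering/leaving labels are assumptions.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g., entering / leaving
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown
```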

Towards Efficient Aquaculture Monitoring: Ground-Based Camera Implementation for Real-Time Fish Detection and Tracking with YOLOv7 and SORT (효율적인 양식 모니터링을 향하여: YOLOv7 및 SORT를 사용한 실시간 물고기 감지 및 추적을 위한 지상 기반 카메라 구현)

  • TaeKyoung Roh;Sang-Hyun Ha;KiHwan Kim;Young-Jin Kang;Seok Chan Jeong
    • The Journal of Bigdata / v.8 no.2 / pp.73-82 / 2023
  • With 78% of current fisheries workers being elderly, there is a pressing need to address labor shortages. Consequently, active research on smart aquaculture technologies, centered on object detection and tracking algorithms, is under way. These technologies enable fish size analysis and behavior-pattern forecasting, facilitating the development of real-time monitoring and automated systems. Our study used video data from cameras installed outside the aquaculture facilities and implemented fish detection and tracking algorithms, aiming to avoid the high maintenance costs caused by underwater conditions and by camera corrosion from ammonia and pH levels. We evaluated the performance of a real-time system that uses YOLOv7 for fish detection and the SORT algorithm for movement tracking. The YOLOv7 results demonstrated a trade-off between recall and precision, minimizing false detections caused by lighting, water currents, and shadows. Effective tracking was confirmed through re-identification. This research holds promise for enhancing the operational efficiency of smart aquaculture and improving fishery facility management.
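
The tracking step can be sketched with a simplified IoU-based association solved by the Hungarian algorithm; full SORT additionally predicts each track with a Kalman filter, and the detections here are placeholders for YOLOv7 output.

```python
# Hedged sketch: frame-to-frame association of fish detections by IoU with the
# Hungarian algorithm. This is a simplified stand-in for SORT (which also adds
# Kalman-filter motion prediction); boxes are assumed to come from YOLOv7.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_threshold=0.3):
    """Match existing track boxes to new detection boxes; return index pairs."""
    if not tracks or not detections:
        return []
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_threshold]

# Toy usage: one existing track, two detections in the next frame.
print(associate([(10, 10, 50, 50)], [(12, 11, 52, 49), (200, 200, 240, 240)]))
```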

Development of Urban Wildlife Detection and Analysis Methodology Based on Camera Trapping Technique and YOLO-X Algorithm (카메라 트래핑 기법과 YOLO-X 알고리즘 기반의 도시 야생동물 탐지 및 분석방법론 개발)

  • Kim, Kyeong-Tae;Lee, Hyun-Jung;Jeon, Seung-Wook;Song, Won-Kyong;Kim, Whee-Moon
    • Journal of the Korean Society of Environmental Restoration Technology / v.26 no.4 / pp.17-34 / 2023
  • Camera trapping has been used as a non-invasive survey method that minimizes anthropogenic disturbance to ecosystems. Nevertheless, it is labor-intensive and time-consuming because researchers must quantify species and populations from the images. In this study, we aimed to improve the preprocessing of camera trapping data by utilizing an object detection algorithm. Wildlife monitoring using unmanned sensor cameras was conducted in an urban forest and in green space on a university campus in Cheonan City, Chungcheongnam-do, Korea. The collected camera trapping data were classified by a researcher to identify the occurrence of species, and the data were then used to test the performance of the YOLO-X object detection algorithm for wildlife detection. The camera trapping produced 10,500 images of the urban forest and 51,974 images of the campus green space. Of the total 62,474 images, 52,993 (84.82%) were false positives containing no wildlife, while 9,481 (15.18%) contained wildlife. The monitoring recorded 19 bird species, 5 mammal species, and 1 reptile species within the study area. In addition, the frequency of occurrence of the following species differed significantly between the two types of urban greenery: Parus varius (t = -3.035, p < 0.01), Parus major (t = 2.112, p < 0.05), Passer montanus (t = 2.112, p < 0.05), Paradoxornis webbianus (t = 2.112, p < 0.05), Turdus hortulorum (t = -4.026, p < 0.001), and Sitta europaea (t = -2.189, p < 0.05). The detection performance of the YOLO-X model for wildlife occurrence was then analyzed: it correctly classified 94.2% of the camera trapping data, with 7,809 true positive and 51,044 true negative predictions. The YOLO-X model was used without any additional training, with a class filter activated so that only 10 animal taxa out of the 80 classes trained on the COCO dataset were detected. In future studies, training data for the key occurring species should be created and applied to make the model better suited to wildlife monitoring.
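
The class-filtering step described above, keeping only animal categories from a COCO-pretrained detector without retraining, can be sketched as follows; the ten class names listed are COCO's animal categories (assumed here to be the taxa targeted), and the detection tuples are placeholders for YOLO-X output.

```python
# Hedged sketch: keep only detections whose class is one of COCO's ten animal
# categories, mirroring the filter applied to the pretrained YOLO-X model.
# The detections list below is a placeholder for real YOLO-X output.
COCO_ANIMAL_CLASSES = {
    "bird", "cat", "dog", "horse", "sheep",
    "cow", "elephant", "bear", "zebra", "giraffe",
}

def filter_wildlife(detections, score_threshold=0.5):
    """detections: iterable of (class_name, score, box). Keep animal classes only."""
    return [d for d in detections
            if d[0] in COCO_ANIMAL_CLASSES and d[1] >= score_threshold]

# Placeholder detections standing in for one camera-trap frame.
frame_dets = [("bird", 0.91, (12, 40, 88, 120)),
              ("person", 0.88, (200, 50, 320, 400)),
              ("cat", 0.42, (150, 220, 210, 300))]
print(filter_wildlife(frame_dets))  # -> only the confident bird detection remains
```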