• Title/Summary/Keyword: object detect

Search Results: 932

Ensemble Deep Network for Dense Vehicle Detection in Large Image

  • Yu, Jae-Hyoung;Han, Youngjoon;Kim, JongKuk;Hahn, Hernsoo
    • Journal of the Korea Society of Computer and Information / v.26 no.1 / pp.45-55 / 2021
  • This paper proposes an algorithm for efficiently detecting dense, small vehicles in large images. It consists of two ensemble deep-learning networks arranged in a coarse-to-fine scheme, so that vehicles can be detected precisely within selected sub-images. In the coarse step, each deep-learning network produces its own voting space, and the individual voting spaces are combined into a voting map used to select sub-regions. In the fine step, the sub-regions selected in the coarse step are passed to a final deep-learning network. The sub-regions are defined by dynamic windows; in this paper, a pre-defined mapping table is used to define the dynamic windows for a perspective road image. The identity of a vehicle moving across sub-regions is determined from the closest center point of the bottom edge of the detected vehicle's bounding box, and the vehicle is tracked using its box information across consecutive images. The proposed algorithm was evaluated for detection performance and real-time cost using day and night images captured by roadside CCTV.
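The coarse step combines per-network voting spaces into a single voting map before sub-regions are chosen. The abstract does not give the combination rule, so the following is only a minimal sketch under the assumption that per-detector heatmaps are averaged and thresholded; the function name, grid size, and threshold are illustrative.

```python
import numpy as np

def build_voting_map(voting_spaces, threshold=0.7):
    """Combine per-network voting spaces (H x W heatmaps) into one voting map
    and return candidate sub-region cells. Averaging is an assumption; the
    paper's exact aggregation rule is not given in the abstract."""
    voting_map = np.mean(np.stack(voting_spaces, axis=0), axis=0)
    candidates = np.argwhere(voting_map >= threshold)  # (row, col) cells to refine
    return voting_map, candidates

# Toy usage: three detectors vote on a 4x4 grid of image cells.
spaces = [np.random.rand(4, 4) for _ in range(3)]
vmap, cells = build_voting_map(spaces)
print(cells)  # cells that would be passed to the fine-step detector
```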

Hand Motion Signal Extraction Based on Electric Field Sensors Using PLN Spectrum Analysis (PLN 성분 분석을 통한 전기장센서 기반 손동작신호 추출)

  • Jeong, Seonil;Kim, Youngchul
    • Smart Media Journal / v.9 no.4 / pp.97-101 / 2020
  • Using a passive electric field sensor operating in non-contact mode, we can measure the electric potential induced by the change of electric charge on the sensor caused by movement of the human body or hands. In this study, we propose a new method that utilizes the power line noise (PLN) induced on the sensor around the moving object to detect hand movement and to extract gesture frames from the detected signals. Signals from the EPS sensors include a large amount of power line noise, which is usually present in places such as rooms or buildings. Using the fact that the PLN is partially shielded when a human approaches the sensor, signals caused by body or hand movement are detected. PLN consists mainly of a 60 Hz component and its harmonics. In our proposed method, only the 120 Hz component in the frequency domain is selected and used for detection of hand movement, and the FFT is used to isolate this spectral component. The signals obtained from the sensors in this way are continuously compared with a preset threshold. Once motion signals crossing the threshold are detected, the motion frame is determined from the period between the first and last threshold crossings. The motion detection rate of our proposed method was about 90%, while the correct frame extraction rate was about 85%. Methods like ours, which use the PLN signal to extract useful motion information from non-contact EPS sensors, have rarely been reported in recent literature. These results are expected to be especially useful in environments with surrounding PLN.
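As background, here is a minimal sketch of the kind of processing the abstract describes: a windowed FFT, tracking only the 120 Hz PLN component, and thresholding its deviation from baseline to find a motion frame. The sampling rate, window sizes, and threshold rule are assumptions, not the authors' settings.

```python
import numpy as np

def extract_gesture_frame(signal, fs=1000.0, win=256, hop=128, threshold=0.2):
    """Track the 120 Hz PLN magnitude over sliding windows and return the
    (start, end) sample indices of the detected motion frame, or None."""
    mags, starts = [], []
    for s in range(0, len(signal) - win, hop):
        spec = np.fft.rfft(signal[s:s + win] * np.hanning(win))
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        bin_120 = np.argmin(np.abs(freqs - 120.0))   # 120 Hz harmonic of 60 Hz PLN
        mags.append(np.abs(spec[bin_120]))
        starts.append(s)
    if not mags:
        return None
    mags = np.asarray(mags)
    # Motion partially shields the PLN, so flag windows deviating from baseline.
    active = np.where(np.abs(mags - np.median(mags)) > threshold)[0]
    if active.size == 0:
        return None
    return starts[active[0]], starts[active[-1]] + win  # first to last crossing
```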

Fundamental Study on Algorithm Development for Prediction of Smoke Spread Distance Based on Deep Learning (딥러닝 기반의 연기 확산거리 예측을 위한 알고리즘 개발 기초연구)

  • Kim, Byeol;Hwang, Kwang-Il
    • Journal of the Korean Society of Marine Environment & Safety / v.27 no.1 / pp.22-28 / 2021
  • This is a basic study on the development of deep learning-based algorithms to detect smoke before a smoke detector operates in the event of a ship fire, to analyze and utilize the detected data, and to support fire suppression and evacuation by predicting smoke spread before it reaches remote areas. The proposed algorithms were reviewed according to the following procedure. As a first step, smoke images obtained through fire simulation were applied to the YOLO (You Only Look Once) model, a deep learning-based object detection algorithm. The mean average precision (mAP) of the trained YOLO model was measured to be 98.71%, and smoke was detected at a processing speed of 9 frames per second (FPS). The second step was to estimate the spread of smoke using the bounding-box coordinates output by YOLO, from which the smoke geometry was extracted. This smoke geometry was then applied to a time-series prediction algorithm, long short-term memory (LSTM). Smoke spread data obtained from the bounding-box coordinates between the estimated fire occurrence and 30 s were entered into the LSTM model to predict the smoke spread from 31 s to 90 s in the smoke images of a fast fire obtained from the fire simulation. The root mean square error between the estimated smoke spread and its predicted value was 2.74.
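To illustrate the second step, here is a toy LSTM that maps the first 30 s of per-second smoke-spread values (derived from YOLO bounding-box coordinates) to the following 60 s. The framework (PyTorch), layer sizes, and windowing are assumptions made for the sketch, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SpreadLSTM(nn.Module):
    """Toy LSTM mapping 30 observed smoke-spread values to the next 60."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 60)   # predicted spread at t = 31..90 s

    def forward(self, x):                   # x: (batch, 30, 1)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])             # (batch, 60)

# Usage: spread distances derived from YOLO box coordinates, one per second.
model = SpreadLSTM()
history = torch.rand(1, 30, 1)              # first 30 s of observed spread
future = model(history)                     # predicted spread for 31..90 s
print(future.shape)                         # torch.Size([1, 60])
```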

A Ship-Wake Joint Detection Using Sentinel-2 Imagery

  • Woojin, Jeon;Donghyun, Jin;Noh-hun, Seong;Daeseong, Jung;Suyoung, Sim;Jongho, Woo;Yugyeong, Byeon;Nayeon, Kim;Kyung-Soo, Han
    • Korean Journal of Remote Sensing / v.39 no.1 / pp.77-86 / 2023
  • Ship detection is widely used in areas such as maritime security, maritime traffic, fisheries management, illegal fishing, and border control, and it is important for rapid response and damage minimization as ship accident rates rise with the recent increase in international maritime traffic. Under a number of global and national regulations, ships must currently be equipped with an automatic identification system (AIS), which periodically provides information such as the ship's location and speed. However, most small vessels (less than 300 tons) are not obligated to install the transponder, the signal may not be transmitted, whether intentionally or accidentally, and there are even cases of misuse of a ship's location information. Therefore, in this study, ship detection was performed using high-resolution optical satellite images, which can periodically observe a wide area remotely and detect small ships. Optical images, however, can produce false alarms due to noise on the sea surface such as waves, or to factors with ship-like brightness such as clouds and wakes, so removing these factors is important for improving the accuracy of ship detection. In this study, false alarms were reduced and ship detection accuracy was improved by removing wakes. Ship detection was performed using a machine learning-based random forest (RF) and a convolutional neural network (CNN), techniques that have recently been widely used in object detection, and the detection results of the two models were compared and analyzed. In addition, the results of RF and CNN were combined to mitigate cases where a single ship is split into separate detections and where small ships are missed. The ship detection results of this study are significant in that they improve on the limitations of each model while maintaining accuracy. Furthermore, if satellite images with improved spatial resolution become available in the future, simultaneous ship and wake detection with higher accuracy is expected.
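The abstract states that RF and CNN results were combined but does not specify how. Purely as an illustration, one simple fusion of per-pixel ship masks (a union, recovering pixels one model missed) could look like the sketch below; the actual rule used in the paper may differ.

```python
import numpy as np

def fuse_ship_masks(rf_mask, cnn_mask):
    """Fuse per-pixel binary ship masks from an RF classifier and a CNN.
    A simple union is shown only as an illustration of combining the two
    models' outputs; the paper's fusion rule is not given in the abstract."""
    fused = np.logical_or(rf_mask, cnn_mask)   # keep pixels either model flagged
    return fused.astype(np.uint8)

rf_mask = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)
cnn_mask = np.array([[0, 1, 0], [0, 0, 0], [0, 1, 0]], dtype=bool)
print(fuse_ship_masks(rf_mask, cnn_mask))
```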

Upward, Downward Stair Detection Method by using Oblique Distance (사거리를 이용한 상향, 하향 계단 검출 방법)

  • Gu, Bongen;Lee, Haeun;Kwon, Hyeokmin;Yoo, Jihyeon;Lee, Daho;Kim, Taehoon
    • Journal of Platform Technology / v.10 no.2 / pp.10-19 / 2022
  • Mobility assistance devices for people who have difficulty moving are becoming electric-powered and automated. These devices are designed and manufactured for moving on flat ground, so they are not suitable for stairs, where the height between floor surfaces differs. An electric-powered, automated mobility assistance device should therefore change direction or stop when it approaches stairs in its direction of movement; if the user or the automatic control system does not do so in time, the device can roll over or collide with the stairs. In this paper, we propose a stair detection method that uses the oblique distance measured by a single sensor tilted toward the ground. The proposed method can detect upward or downward stairs from the difference between the predicted and measured oblique distance, taking into account the sensor's tilt angle and its installation height on the moving device. If the proposed method provides information about detected stairs to the device's controller before the device enters the stair region, the controller can take appropriate action to avoid an accident.
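A minimal sketch of the underlying geometry, under the assumption that the sensor sits at a known height and is tilted a fixed angle below horizontal, so that flat ground yields an expected oblique distance of h / sin(tilt): a longer measurement then suggests a drop (downward stairs) and a shorter one a rise (upward stairs). The thresholds and values below are illustrative only, not the paper's parameters.

```python
import math

def classify_stairs(measured_d, sensor_height, tilt_deg, tol=0.05):
    """Classify terrain ahead from one oblique range reading.
    Assumed geometry: a sensor at sensor_height (m), tilted tilt_deg below
    horizontal, expects d = h / sin(tilt) on flat ground."""
    predicted_d = sensor_height / math.sin(math.radians(tilt_deg))
    if measured_d > predicted_d * (1 + tol):
        return "downward stairs"
    if measured_d < predicted_d * (1 - tol):
        return "upward stairs"
    return "flat ground"

# Example: sensor 0.4 m above the floor, tilted 30 degrees downward.
print(classify_stairs(measured_d=0.80, sensor_height=0.4, tilt_deg=30))  # flat ground
print(classify_stairs(measured_d=1.10, sensor_height=0.4, tilt_deg=30))  # downward stairs
```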

Performance Evaluation of YOLOv5s for Brain Hemorrhage Detection Using Computed Tomography Images (전산화단층영상 기반 뇌출혈 검출을 위한 YOLOv5s 성능 평가)

  • Kim, Sungmin;Lee, Seungwan
    • Journal of the Korean Society of Radiology / v.16 no.1 / pp.25-34 / 2022
  • Brain computed tomography (CT) is useful for diagnosing brain lesions such as brain hemorrhage because it is non-invasive, provides 3-dimensional images, and uses a low radiation dose. However, misdiagnoses occur owing to a shortage of radiologists and heavy workloads. Recently, object detection technologies based on artificial intelligence have been developed to overcome the limitations of traditional diagnosis. In this study, the applicability of a deep learning-based YOLOv5s model was evaluated for brain hemorrhage detection using brain CT images, and the effect of hyperparameters on the trained YOLOv5s model was analyzed. The YOLOv5s model consists of backbone, neck, and output modules. The trained model was able to detect a region of brain hemorrhage and provide information about that region. The YOLOv5s model was trained with various activation functions, optimizers, loss functions, and numbers of epochs, and the performance of the trained model was evaluated in terms of brain hemorrhage detection accuracy and training time. The results showed that the trained YOLOv5s model can provide a bounding box for a region of brain hemorrhage together with the confidence of the corresponding box. The performance of the YOLOv5s model was improved by using the Mish activation function, the stochastic gradient descent (SGD) optimizer, and the complete intersection over union (CIoU) loss function. The accuracy and training time of the YOLOv5s model also increased with the number of epochs. Therefore, the YOLOv5s model is suitable for brain hemorrhage detection using brain CT images, and the performance of the model can be maximized by using appropriate hyperparameters.
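For reference, the complete IoU (CIoU) loss named in the abstract follows the standard published formula; the sketch below implements that formula for axis-aligned boxes and is background material, not code from the study.

```python
import math

def ciou_loss(box_p, box_g):
    """CIoU loss for boxes given as (x1, y1, x2, y2):
    1 - IoU + center_distance^2 / enclosing_diagonal^2 + alpha * v."""
    x1, y1, x2, y2 = box_p
    gx1, gy1, gx2, gy2 = box_g
    # Intersection and union areas.
    iw = max(0.0, min(x2, gx2) - max(x1, gx1))
    ih = max(0.0, min(y2, gy2) - max(y1, gy1))
    inter = iw * ih
    union = (x2 - x1) * (y2 - y1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union if union > 0 else 0.0
    # Squared distance between box centers over the enclosing-box diagonal.
    rho2 = ((x1 + x2 - gx1 - gx2) / 2) ** 2 + ((y1 + y2 - gy1 - gy2) / 2) ** 2
    c2 = (max(x2, gx2) - min(x1, gx1)) ** 2 + (max(y2, gy2) - min(y1, gy1)) ** 2
    # Aspect-ratio consistency term.
    v = (4 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1))
                              - math.atan((x2 - x1) / (y2 - y1))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

print(round(ciou_loss((0, 0, 4, 4), (1, 1, 5, 5)), 3))  # 0.649
```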

A Research on Autonomous Mobile LiDAR Performance between Lab and Field Environment (자율주행차량 모바일 LiDAR의 실내외 성능 비교 연구)

  • Ji yoon Kim;Bum jin Park;Jisoo Kim
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.4 / pp.194-210 / 2023
  • LiDAR plays a key role in autonomous vehicles, where it is used to perceive the environment in place of the driver's eyes, and its role is expanding. In recent years, there has been a growing need to test the performance of LiDARs installed in autonomous vehicles. Many LiDAR performance tests have been conducted in simulated and indoor (lab) environments, but tests in outdoor (field) and real-world road environments have been minimal. In this study, we compared LiDAR performance under the same conditions in the lab and in the field to determine the relationship between lab and field tests and to establish the characteristics and role of each test environment. The experimental results showed that LiDAR detection performance varies depending on the lighting environment (direct sunlight, LED) and the detected object. In particular, the decrease in intensity with increasing distance and rainfall is greater outdoors, suggesting that both lab and field experiments are necessary when testing LiDAR detection performance on objects. The results of this study are expected to be useful for organizations conducting research on the use of LiDAR sensors and on test facilities for LiDAR sensors.

Investigation of Antibody Titers after Inoculation with Commercial Equine Influenza Vaccines in Thoroughbred Yearlings (Thoroughbred 1세말에서 상업용 말 인플루엔자 백신접종 후 항체역가 추적)

  • Yang, J.H.;Park, Y.S.
    • Journal of Practical Agriculture & Fisheries Research / v.20 no.1 / pp.89-96 / 2018
  • The object of this study was to evaluate the change in antibody titers against virus strains after inoculation with commercial killed equine influenza (EI) vaccines in horses. Serum antibodies of 20 Thoroughbred yearlings were measured using the hemagglutination inhibition test for 41 weeks. The second vaccination was given 4 weeks after the initial vaccination. Most antibody titers did not increase until 4 weeks after the first vaccination. The highest titers were detected 6-10 weeks after vaccination; the titers then decreased slowly and were maintained for 16 weeks after inoculation. In most cases, antibodies could barely be detected 41 weeks after vaccination. Vaccine anergy appeared in 3 horses (15%), but it depended on the virus strain. The A/Equine/La Plata/93 (H3N8) strain, which induced high and durable antibody responses, was the most effective of the three strains. This study presents the first comprehensive data on the persistence of antibody titers against EI. Our data also suggest that yearlings should be inoculated three times in order to maintain optimal antibody titers against EI. We speculate that the causes of anergy were vaccine breakdown or individual specificity; further research is needed to investigate this immunological unresponsiveness. This was the first study on equine vaccine strains in Korea.

Detection of Steel Ribs in Tunnel GPR Images Based on YOLO Algorithm (YOLO 알고리즘을 활용한 터널 GPR 이미지 내 강지보재 탐지)

  • Bae, Byongkyu;Ahn, Jaehun;Jung, Hyunjun;Yoo, Chang Kyoon
    • Journal of the Korean Geotechnical Society / v.39 no.7 / pp.31-37 / 2023
  • Since tunnels are built underground, it is impossible to visually check the location and degree of deterioration of steel ribs. Therefore, in tunnel maintenance, GPR images are generally used to detect steel ribs. While research on GPR image analysis employing artificial neural networks has primarily focused on detecting underground pipes and road damage, applications to tunnel GPR data, specifically steel rib detection, have been limited both internationally and domestically. In this study, YOLO, a one-stage object detection algorithm based on a convolutional neural network, was used to automate the localization of steel ribs in GPR data, and the performance of the algorithm was analyzed. Two datasets were employed for the analysis: one comprising 512 original images, and another consisting of 2,048 augmented images. The omission rate, defined as the ratio of undetected steel ribs to the total number of steel ribs, was 0.38% for the model trained on the augmented data, whereas it was 7.18% for the model trained only on the original data. Thus, from an automation standpoint, it is more practical to employ an augmented dataset.
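The omission rate reported above is a simple ratio of undetected ribs to all ribs; a small sketch of that computation, with made-up counts rather than the paper's raw numbers, is:

```python
def omission_rate(total_ribs, detected_ribs):
    """Omission rate as defined in the abstract: undetected steel ribs over
    the total number of steel ribs, in percent. Matching detections to
    ground truth is assumed to have been done beforehand."""
    undetected = total_ribs - detected_ribs
    return 100.0 * undetected / total_ribs

# Illustrative counts only:
print(f"{omission_rate(1000, 996):.2f}%")  # model trained on augmented data
print(f"{omission_rate(1000, 928):.2f}%")  # model trained on original data only
```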

Class Classification and Validation of a Musculoskeletal Risk Factor Dataset for Manufacturing Workers (제조업 노동자 근골격계 부담요인 데이터셋 클래스 분류와 유효성 검증)

  • Young-Jin Kang;;;Jeong, Seok Chan
    • The Journal of Bigdata / v.8 no.1 / pp.49-59 / 2023
  • There are various items in the safety and health standards of the manufacturing industry, but they can be divided into work-related diseases and musculoskeletal diseases according to the standards for sickness and accident victims. Musculoskeletal diseases occur frequently in manufacturing and can lead to decreased labor productivity and weakened manufacturing competitiveness. In this paper, to detect musculoskeletal risk factors for manufacturing workers, we defined the musculoskeletal load work factor analysis, harmful load working postures, and key point matching, and constructed data for artificial intelligence (AI) training. To check the effectiveness of the suggested dataset, AI algorithms such as YOLO, Lite-HRNet, and EfficientNet were used for training and validation. In our experimental results, human detection accuracy is 99%, key point matching accuracy for the detected person is 88% at AP@0.5, and the accuracy of working posture evaluation obtained by integrating the inferred key point positions is 72.2% for LEGS, 85.7% for NECK, 81.9% for TRUNK, 79.8% for UPPERARM, and 92.7% for LOWERARM. We also note the need for research on deep learning-based prevention of musculoskeletal diseases.
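As a purely hypothetical illustration of how inferred key points can be turned into a working-posture class (here, a trunk score only), consider the sketch below; the keypoint names, angle computation, and risk thresholds are assumptions, not the dataset's actual scoring rules.

```python
import math

def trunk_flexion_angle(shoulder_xy, hip_xy):
    """Angle of the trunk (shoulder-to-hip line) from vertical, in degrees,
    using image coordinates where y grows downward."""
    dx = shoulder_xy[0] - hip_xy[0]
    dy = hip_xy[1] - shoulder_xy[1]
    return abs(math.degrees(math.atan2(dx, dy)))

def trunk_risk(angle_deg):
    """Map a trunk flexion angle to a coarse load-posture class
    (thresholds are illustrative, not from the dataset)."""
    if angle_deg < 20:
        return "neutral"
    if angle_deg < 60:
        return "moderate load"
    return "harmful load"

# Toy keypoints (pixel coordinates) for one detected worker.
shoulder, hip = (320, 180), (300, 330)
angle = trunk_flexion_angle(shoulder, hip)
print(round(angle, 1), trunk_risk(angle))
```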