• Title/Summary/Keyword: Pedestrian Detection (보행자 검출)

Search Results: 132

A Basic Study on the Fall Direction Recognition System Using Smart phone (스마트폰을 이용한 낙상 방향 검출 시스템의 기초 연구)

  • Na, Ye-Ji;Lee, Sang-Jun;Wang, Chang-Won;Jeong, Hwa-Young;Ho, Jong-Gab;Min, Se-Dong
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.1384-1387 / 2015
  • As society ages, older adults experience frequent falls due to physical changes such as reduced walking ability and muscle weakening caused by the aging process. Accordingly, research on detecting fall accidents is being actively conducted. Preventing falls is important, but so is responding quickly after one occurs: detecting the fall and immediately providing the fall information to medical staff so that follow-up measures can be taken is the key to post-accident response. In this paper, to determine the direction of a user's fall in a smartphone environment, we extracted feature values from two kinds of sensor data and applied five machine-learning algorithms to them. Wearing a smartphone, each user performed fall experiments in four directions (forward, backward, left, right) while the phone's built-in 3-axis accelerometer and 3-axis gyroscope values were recorded. In fall experiments with 11 subjects, k-NN achieved the highest recognition rate among the five classifiers, 98.6%. The extracted feature values and classification algorithms are judged to be useful for detecting the direction of a fall.
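The direction vote described above (accelerometer and gyroscope features fed to a k-NN classifier) can be sketched as follows. The feature layout, all numeric values, and k=3 are illustrative assumptions, not the paper's actual data or tuning:

```python
import math

# Hypothetical training data: one feature vector per fall trial
# (peak acceleration per axis, mean gyro rate per axis), labeled
# with the fall direction. Values are illustrative only.
TRAIN = [
    ([ 2.8,  0.3, 9.1,  1.2,  0.1, 0.2], "forward"),
    ([ 2.6,  0.2, 8.9,  1.1,  0.2, 0.1], "forward"),
    ([-2.7,  0.4, 9.0,  1.0,  0.1, 0.3], "backward"),
    ([-2.5,  0.1, 8.8,  1.3,  0.2, 0.2], "backward"),
    ([ 0.2,  2.9, 9.2,  0.1,  1.2, 0.1], "left"),
    ([ 0.3,  2.7, 9.0,  0.2,  1.1, 0.2], "left"),
    ([ 0.1, -2.8, 9.1,  0.1, -1.2, 0.1], "right"),
    ([ 0.2, -2.6, 8.9,  0.2, -1.0, 0.2], "right"),
]

def knn_predict(x, train, k=3):
    """Classify feature vector x by majority vote of its k nearest neighbors."""
    dists = sorted((math.dist(x, feats), label) for feats, label in train)
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

print(knn_predict([2.7, 0.3, 9.0, 1.1, 0.1, 0.2], TRAIN))  # forward
```

In practice the recognition rate depends heavily on which features are extracted from the raw sensor streams; the vote itself is this simple.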

A Study on the Applicability of Deep Learning Algorithm for Detection and Resolving of Occlusion Area (영상 폐색영역 검출 및 해결을 위한 딥러닝 알고리즘 적용 가능성 연구)

  • Bae, Kyoung-Ho;Park, Hong-Gi
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.11 / pp.305-313 / 2019
  • Recently, spatial information has been actively constructed from images obtained by drones. Because occlusion areas arise from buildings as well as obstacles such as trees, pedestrians, and banners in urban areas, an efficient way to resolve them is needed. Instead of the traditional approach, which replaces the occlusion area with other images obtained at different positions, various deep-learning-based models were examined and compared. A comparison of the HOG feature descriptor with the machine-learning-based SVM and the deep-learning-based DNN, CNN, and RNN showed that the CNN is the most broadly used for detecting and classifying objects. To date, many studies have focused on developing and applying individual models, so no single optimal model can yet be selected. Nevertheless, deep-learning-based detection and classification techniques are expected to keep improving, because many researchers are working to raise model accuracy and reduce computation time. The procedure for generating spatial information will then change to detecting occlusion areas and replacing them with simulated images automatically, improving the efficiency of time, cost, and workforce.
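The HOG descriptor compared above reduces, at its core, to per-cell histograms of gradient orientations. A minimal single-cell sketch (omitting the cell tiling and block normalization of a full HOG pipeline) is:

```python
import math

def hog_cell_histogram(cell, n_bins=9):
    """Simplified HOG: unsigned gradient-orientation histogram for one cell.

    `cell` is a 2D list of grayscale values. A full HOG descriptor would
    tile the image into cells, add block normalization, and concatenate
    all cell histograms into one feature vector.
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # central differences
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / (180.0 / n_bins)) % n_bins] += mag
    return hist

# A vertical edge produces purely horizontal gradients (angle 0),
# so all gradient magnitude lands in the first bin.
cell = [[0, 0, 10, 10]] * 4
print(hog_cell_histogram(cell))
```

Pedestrian detectors in the HOG+SVM family feed such concatenated histograms to a linear classifier.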

Human Tracking Technology using Convolutional Neural Network in Visual Surveillance (서베일런스에서 회선 신경망 기술을 이용한 사람 추적 기법)

  • Kang, Sung-Kwan;Chun, Sang-Hun
    • Journal of Digital Convergence / v.15 no.2 / pp.173-181 / 2017
  • In this paper, we study tracking as a learning problem: estimating the position and scale of a person given its previous position and scale together with the current image patch. Unlike other learning methods, the CNN learns features that combine temporal and spatial information from two consecutive frames. We introduce multiple pathways in the CNN to better fuse local and global information. A shift-variant CNN architecture is designed to alleviate the drift problem when distracting objects similar to the target appear in a cluttered environment. Furthermore, we employ CNNs to estimate the scale through accurate localization of key points. These techniques are object-independent, so the proposed method can be applied to track other types of objects. The tracker's ability to handle complex situations is demonstrated on many test sequences. The accuracy of an SVM classifier using the features learnt by the CNN is equivalent to that of the CNN itself, confirming the importance of automatically optimized features. However, the computation time for classifying a person with the convolutional neural network classifier is less than roughly 1/40 of the SVM's computation time, regardless of the features used.
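The core idea of combining two consecutive frames can be illustrated with a single convolution filter that spans both frames stacked as input channels. This is a toy sketch of the mechanism, not the paper's architecture; the temporal-difference kernel pair is an assumption chosen to make the motion response visible:

```python
def conv2d_two_frames(prev, curr, kernel_prev, kernel_curr):
    """Apply one convolution filter that spans two stacked frames.

    Stacking consecutive frames as input channels is one way a CNN can
    combine spatial and temporal information: a kernel pair of -1 on the
    previous frame and +1 on the current frame responds to motion.
    """
    h, w = len(curr), len(curr[0])
    kh, kw = len(kernel_curr), len(kernel_curr[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            s = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    s += kernel_prev[ky][kx] * prev[y + ky][x + kx]
                    s += kernel_curr[ky][kx] * curr[y + ky][x + kx]
            row.append(s)
        out.append(row)
    return out

# A bright blob moves one pixel right between frames; the temporal
# difference kernel highlights where it left and where it arrived.
prev = [[0, 9, 0, 0],
        [0, 9, 0, 0]]
curr = [[0, 0, 9, 0],
        [0, 0, 9, 0]]
k_neg = [[-1]]
k_pos = [[1]]
print(conv2d_two_frames(prev, curr, k_neg, k_pos))
```

In a trained network such kernels are learned rather than hand-set, and many filters run in parallel.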

Real-Time Step Count Detection Algorithm Using a Tri-Axial Accelerometer (3축 가속도 센서를 이용한 실시간 걸음 수 검출 알고리즘)

  • Kim, Yun-Kyung;Kim, Sung-Mok;Lho, Hyung-Suk;Cho, We-Duke
    • Journal of Internet Computing and Services / v.12 no.3 / pp.17-26 / 2011
  • We have developed a wearable device that can convert sensor data into real-time step counts. Sensor data on gait were acquired using a triaxial accelerometer. A test was performed according to a protocol covering different walking speeds: slow walking, walking, fast walking, slow running, running, and fast running. Each test was carried out for 36 min on a treadmill with the participant wearing both an Actical device and the device developed in this study. The signal vector magnitude (SVM) was used to combine the X, Y, and Z outputs of the triaxial accelerometer into one representative value. In addition, for accurate step-count detection, we used three algorithms: a heuristic algorithm (HA), an adaptive threshold algorithm (ATA), and an adaptive locking period algorithm (ALPA). The recognition rate of our algorithm (97.34%) was 5.6 percentage points higher than that of the Actical device (91.74%).
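The pipeline above (SVM combining the three axes, an adaptive threshold, and a locking period to suppress double counts) can be sketched in a few lines. The thresholds, update rule, and timings below are illustrative assumptions, not the paper's tuned values:

```python
import math

def count_steps(samples, fs=50, lock_s=0.3, init_thresh=11.0):
    """Step-count sketch: SVM peaks above an adaptive threshold,
    with a locking period suppressing double counts.

    `samples` is a list of (x, y, z) accelerations in m/s^2;
    fs is the sampling rate in Hz.
    """
    lock_n = int(lock_s * fs)   # samples to ignore after a detected step
    thresh = init_thresh
    steps, lock = 0, 0
    for x, y, z in samples:
        svm = math.sqrt(x * x + y * y + z * z)  # signal vector magnitude
        if lock > 0:
            lock -= 1
        elif svm > thresh:
            steps += 1
            lock = lock_n
            # adaptive threshold: drift toward a fraction of the peak
            thresh = 0.7 * thresh + 0.3 * (0.8 * svm)
    return steps

# Two simulated impact peaks separated by more than the locking period.
quiet = [(0.0, 0.0, 9.8)] * 30
impact = [(3.0, 3.0, 14.0)]
print(count_steps(quiet + impact + quiet + impact + quiet))  # 2
```

Adapting the threshold to recent peak heights is what lets one detector cover gaits from slow walking to fast running.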

Background and Local Histogram-Based Object Tracking Approach (도로 상황인식을 위한 배경 및 로컬히스토그램 기반 객체 추적 기법)

  • Kim, Young Hwan;Park, Soon Young;Oh, Il Whan;Choi, Kyoung Ho
    • Spatial Information Research / v.21 no.3 / pp.11-19 / 2013
  • Compared with traditional video monitoring systems, which provide video recording as their main service, an intelligent video monitoring system can extract and track objects and detect events such as car accidents, traffic congestion, and pedestrians. Object tracking is thus an essential function for various intelligent video monitoring and surveillance systems. In this paper, we propose a background and local-histogram-based object tracking approach for intelligent video monitoring systems. For robust object tracking in live situations, the results of optical flow and local histogram verification are combined with the result of background subtraction. In the proposed approach, local histogram verification lets the system track target objects more reliably by detecting when the local histogram at the Lucas-Kanade (LK) position is no longer similar to the previous histogram. Experimental results show that the proposed tracking algorithm is robust to object occlusion and scale changes.
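The verification step described above can be sketched as a histogram-intersection check between the object's previous appearance and the patch at the candidate position. Bin counts, patch values, and the similarity threshold are all illustrative:

```python
def histogram(patch, n_bins=4, max_val=256):
    """Grayscale intensity histogram of a flat list of pixel values."""
    hist = [0] * n_bins
    for v in patch:
        hist[v * n_bins // max_val] += 1
    return hist

def hist_intersection(h1, h2):
    """Similarity in [0, 1]: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / max(sum(h1), 1)

def verify_track(candidate_patch, prev_hist, sim_thresh=0.6):
    """Accept a tracked position only if the local histogram at the
    candidate position stays similar to the object's previous histogram
    (threshold is an illustrative choice)."""
    return hist_intersection(histogram(candidate_patch), prev_hist) >= sim_thresh

obj = [200, 210, 205, 198]   # bright object patch
prev_hist = histogram(obj)
road = [40, 35, 50, 45]      # dark background patch: tracker has drifted
print(verify_track(obj, prev_hist), verify_track(road, prev_hist))  # True False
```

When verification fails, a tracker can fall back on the background-subtraction result instead of following the drifted position.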

Design of Pedestrian Detection and Tracking System Using HOG-PCA and Object Tracking Algorithm (HOG-PCA와 객체 추적 알고리즘을 이용한 보행자 검출 및 추적 시스템 설계)

  • Jeon, Pil-Han;Park, Chan-Jun;Kim, Jin-Yul;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.4 / pp.682-691 / 2017
  • In this paper, we propose a fused design methodology for a pedestrian detection and object tracking system realized with the aid of an HOG-PCA-based RBFNN pattern classifier. The proposed system consists of a detection part and a tracking part. In the detection part, HOG features are extracted from input images for pedestrian detection. Dimension reduction with PCA, a typical dimension-reduction method, improves both detection performance and processing speed. The reduced features are used as the input of the FCM-based RBFNN pattern classifier, which carries out the pedestrian detection and consists of condition, conclusion, and inference parts. The FCM clustering algorithm serves as the activation function of the hidden layer. In the conclusion part of the network, polynomial functions (constant, linear, quadratic, and modified quadratic) are regarded as connection weights, and the polynomial coefficients are estimated by LSE-based learning. In the tracking part, object tracking algorithms such as mean shift (MS) and cam shift (CS) trace one of the pedestrian candidates nominated in the detection part. Finally, the INRIA person database is used to evaluate the pedestrian detection performance of the proposed system, while the MIT pedestrian video as well as indoor and outdoor videos obtained from the IC&CI laboratory at Suwon University are used to evaluate the tracking performance.
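The PCA reduction step used above can be sketched without any dependency by finding the leading principal component with power iteration and projecting each feature vector onto it. The toy "HOG" vectors below are made up; a real descriptor has thousands of dimensions and several components are kept:

```python
import random

def top_principal_component(data, iters=100, seed=0):
    """First principal component by power iteration on the covariance
    matrix (applied implicitly, without forming the matrix)."""
    n, d = len(data), len(data[0])
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    rng = random.Random(seed)
    v = [rng.random() for _ in range(d)]
    for _ in range(iters):
        # w = C v  with  C = X^T X / n, computed as X^T (X v) / n
        proj = [sum(x * y for x, y in zip(row, v)) for row in centered]
        w = [sum(p * row[j] for p, row in zip(proj, centered)) / n
             for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, means

# Toy feature vectors whose variance is concentrated in the first dimension.
feats = [[1.0, 0.1], [2.0, 0.1], [3.0, 0.2], [4.0, 0.2]]
pc, means = top_principal_component(feats)
reduced = [sum((x - m) * c for x, m, c in zip(row, means, pc)) for row in feats]
print([round(abs(x), 2) for x in reduced])
```

Each input vector collapses to a single coordinate along the direction of maximum variance; the classifier then operates on these reduced coordinates.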

A Study on Radar Video Fusion Systems for Pedestrian and Vehicle Detection (보행자 및 차량 검지를 위한 레이더 영상 융복합 시스템 연구)

  • Sung-Youn Cho;Yeo-Hwan Yoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.197-205 / 2024
  • Securing driving safety is the most important issue in the development and commercialization of autonomous vehicles, so AI and big-data-based algorithms are being studied to advance and optimize the recognition and detection of the various static and dynamic vehicles in front of and around the ego vehicle. Many studies exploit the complementary strengths of radar and camera to recognize the same vehicle, but they either do not use deep-learning image processing or, because of radar performance limits, can match targets as identical only at short range. A fusion-based vehicle recognition method is therefore needed that builds a dataset collectable from both radar and camera equipment, computes the error between the two data sources, and recognizes detections as the same target. Because data errors arise depending on where the radar and the CCTV (video) are installed, this paper aims to develop a technique that links location information according to the installation positions.
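The same-target decision described above reduces to associating radar and camera detections after compensating the installation offset between the two sensors. A greedy nearest-neighbor sketch, with all coordinates, the offset, and the error threshold as illustrative assumptions:

```python
import math

def associate(radar_objs, camera_objs, offset=(0.0, 0.0), max_err=2.0):
    """Greedy nearest-neighbor association of radar and camera detections.

    Both lists hold (x, y) positions in a shared ground plane; `offset`
    compensates the relative installation positions of the radar and the
    CCTV. Pairs closer than `max_err` metres are declared the same target.
    """
    matches = []
    used = set()
    for i, (rx, ry) in enumerate(radar_objs):
        best, best_d = None, max_err
        for j, (cx, cy) in enumerate(camera_objs):
            if j in used:
                continue
            d = math.dist((rx + offset[0], ry + offset[1]), (cx, cy))
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches

radar = [(10.0, 5.0), (30.0, 2.0)]
camera = [(12.1, 5.2), (55.0, 9.0)]   # camera frame shifted ~2 m in x
print(associate(radar, camera, offset=(2.0, 0.0)))  # [(0, 0)]
```

Without the installation offset the same pair falls outside the error bound and no match is made, which is exactly the error source the paper sets out to correct.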

A Study on the Moving Object Tracking Algorithm of Static Camera and Active Camera in Environment (고정카메라 및 능동카메라 환경에서 이동물체 추적 알고리즘에 관한 연구)

  • Nam, Ki-Hwan;Bae, Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.2 / pp.344-352 / 2003
  • An effective algorithm is proposed that detects moving objects from image sequences, predicts their direction, and drives the camera in real time. In the static-camera case, for robust motion detection against a dynamic background scene, the proposed algorithm statistically models the moving objects and the background, and trains the statistical features of the background on the initial part of the sequence, which contains no moving objects. In the active-camera case, moving objects are segmented using an improved order-adaptive lattice-structured linear predictor. The proposed algorithm shows robust object tracking results in both static-camera and active-camera environments, and can be used for unmanned surveillance systems, traffic monitoring systems, and autonomous vehicles.
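The static-camera half of the method, statistically modeling the background and flagging pixels that deviate from it, can be sketched as a per-pixel running Gaussian model. The learning rate and the k-sigma rule below are common illustrative choices, not the paper's exact formulation:

```python
def update_background(mean, var, pixel, alpha=0.05, k=2.5):
    """Per-pixel running Gaussian background model (illustrative sketch).

    A pixel is foreground when it deviates from the learned background
    mean by more than k standard deviations; the statistics are updated
    only for background pixels so moving objects do not corrupt the model.
    """
    is_fg = abs(pixel - mean) > k * max(var, 1e-6) ** 0.5
    if not is_fg:
        mean = (1 - alpha) * mean + alpha * pixel
        var = (1 - alpha) * var + alpha * (pixel - mean) ** 2
    return mean, var, is_fg

# Train on a quiet scene (the object-free initial frames), then a
# bright moving object passes through.
mean, var = 100.0, 25.0
for sample in [101, 99, 102, 98, 100]:        # background only
    mean, var, _ = update_background(mean, var, sample)
_, _, fg = update_background(mean, var, 200)  # moving object
print(fg)  # True
```

Training on the object-free initial frames, as the paper does, is what makes the background statistics trustworthy before any moving object appears.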

Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles (영상 및 레이저레이더 정보융합을 통한 자율주행자동차의 주행환경인식 및 추적방법)

  • Lee, Minchae;Han, Jaehyun;Jang, Chulhoon;Sunwoo, Myoungho
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.1 / pp.35-45 / 2013
  • An autonomous vehicle requires a more capable and robust perception system than the conventional perception systems of intelligent vehicles. Single-sensor perception systems based on cameras and laser radars, the most representative perception sensors, have been widely studied; these sensors provide object information such as distance and object features. The distance information of the laser radar sensor is used to perceive road structures, vehicles, and pedestrians, while the camera image is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor perception systems suffer from false positives and false negatives caused by sensor limitations and road environments. Accordingly, information fusion is essential to ensure the robustness and stability of perception systems in harsh environments. This paper describes a perception system for autonomous vehicles that performs information fusion to recognize road environments: vision and laser radar sensors are fused to detect lanes, crosswalks, and obstacles. The proposed perception system was validated on various roads and environmental conditions with an autonomous vehicle.
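The geometric core of camera/laser-radar fusion is projecting a range measurement into the image so it can be paired with the pixels it falls on. A pinhole-camera sketch, with made-up example intrinsics (the paper does not give its calibration):

```python
def project_to_image(point, fx=700.0, fy=700.0, cx=640.0, cy=360.0):
    """Pinhole projection of a 3D point in the camera frame to pixels.

    (fx, fy) are focal lengths in pixels and (cx, cy) the principal
    point; all four are illustrative values for a 1280x720 image.
    """
    x, y, z = point
    if z <= 0:
        return None  # behind the camera: not visible
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A point 10 m ahead and 1 m to the right lands right of image centre.
print(project_to_image((1.0, 0.0, 10.0)))  # (710.0, 360.0)
```

Once laser points are mapped to pixels, a vision detection (say, a crosswalk or an obstacle bounding box) can be assigned the range of the laser returns that project inside it.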

A Study on Design and Implementation of Driver's Blind Spot Assist System Using CNN Technique (CNN 기법을 활용한 운전자 시선 사각지대 보조 시스템 설계 및 구현 연구)

  • Lim, Seung-Cheol;Go, Jae-Seung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.2 / pp.149-155 / 2020
  • The Korea Highway Traffic Authority provides statistics analyzing the causes of traffic accidents since 2015 through the Traffic Accident Analysis System (TAAS). TAAS reported that driver inattention to the road ahead was the main cause of traffic accidents in 2018: 51.2 percent involved mobile-phone use or watching DMB while driving, 14 percent involved failure to keep a safe distance, and 3.6 percent involved violations of the duty to protect pedestrians, 68.8 percent in total. In this paper, we propose a system that improves the advanced driver assistance system (ADAS) by utilizing a convolutional neural network (CNN), one of the deep-learning algorithms. The proposed system learns a model that classifies the movement of the driver's face and eyes using Conv2D techniques, which are mainly used for image processing, while recognizing and detecting objects around the vehicle with cameras attached to its front to recognize the driving environment. Then, using the learned gaze model and the driving-environment data, the hazard is classified and detected in three stages, depending on the driver's view and the driving environment, to assist the driver with the forward and blind-spot areas.
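The three-stage hazard decision described above combines two signals: where the driver is looking and where detected objects are. A rule-based stand-in for that final stage (the zones and stage definitions are assumptions, not the paper's actual categories; the paper's classifier is learned, not hand-written):

```python
def hazard_level(driver_looking_at, object_zone):
    """Three-stage hazard rule combining gaze direction and object location.

    The hazard is highest when an object sits in a zone the driver is
    not currently watching, i.e. in the driver's blind spot.
    """
    if object_zone is None:
        return 1   # stage 1: nothing detected near the vehicle
    if object_zone == driver_looking_at:
        return 2   # stage 2: object present, but the driver is watching it
    return 3       # stage 3: object in an unwatched zone: warn the driver

print(hazard_level("left", None),
      hazard_level("left", "left"),
      hazard_level("left", "right"))  # 1 2 3
```

In the full system both inputs come from CNNs: the gaze model supplies `driver_looking_at` and the front-camera detector supplies `object_zone`.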