• Title/Abstract/Keyword: Human Object Detection


이미지 이어붙이기를 이용한 인간-객체 상호작용 탐지 데이터 증강 (Human-Object Interaction Detection Data Augmentation Using Image Concatenation)

  • 이상백;이규철
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 12, No. 2 / pp.91-98 / 2023
  • Human-object interaction detection is a field that must solve object detection and interaction recognition together, and training a detection model requires a large amount of data. Because the currently available public datasets are limited in scale, the demand for data augmentation techniques is growing, yet most studies still reuse augmentation techniques from conventional object detection and image segmentation. In this study, we analyze the characteristics of the datasets used in human-object interaction detection and, based on this analysis, propose a data augmentation technique that is effective for improving the performance of human-object interaction detection models. To validate the proposed augmentation technique, we built an experimental environment and applied the technique to existing training models, confirming that it can improve detection model performance.
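
The abstract above does not include code; the following is a minimal, hypothetical sketch of what concatenation-based augmentation for detection-style annotations can look like (the function name, the equal-height assumption, and the [x1, y1, x2, y2] box format are illustrative assumptions, not the authors' implementation): two annotated images are joined side by side and the boxes of the right-hand image are shifted by the width of the left one, yielding a new composite training sample.

```python
import numpy as np

def concat_augment(img_a, boxes_a, img_b, boxes_b):
    """Concatenate two annotated images horizontally (illustrative sketch).

    img_a, img_b : HxWx3 uint8 arrays (assumed to have roughly equal height)
    boxes_a, boxes_b : lists of [x1, y1, x2, y2] boxes (human or object boxes)
    Returns the composite image and the merged, shifted box list.
    """
    h = min(img_a.shape[0], img_b.shape[0])
    img_a, img_b = img_a[:h], img_b[:h]          # crop both to the common height
    composite = np.concatenate([img_a, img_b], axis=1)

    offset = img_a.shape[1]                      # width of the left image
    shifted_b = [[x1 + offset, y1, x2 + offset, y2] for x1, y1, x2, y2 in boxes_b]
    return composite, list(boxes_a) + shifted_b
```

In an HOI setting the same shift would also apply to human boxes, and the human-object interaction pairs of each source image would simply be carried over into the composite sample.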

인간의 지각적인 시스템을 기반으로 한 연속된 영상 내에서의 움직임 영역 결정 및 추적 (Object Motion Detection and Tracking Based on Human Perception System)

  • 정미영;최석림
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ / pp.2120-2123 / 2003
  • This paper presents a moving object detection and tracking algorithm that uses edge information based on the human perceptual system. The human visual system recognizes shapes and objects easily and rapidly, and perceptual organization is believed to play an important role in this ability. The paper presents an edge model (GCS) based on features extracted according to perceptual organization principles and extracts edge information according to the definition of this edge model. Through this perceptually motivated approach, a technique is introduced by which a computer can recognize a moving object from edge information in much the same way a human would.


다면기법 SPFACS 영상객체를 이용한 AAM 알고리즘 적용 미소검출 설계 분석 (Using a Multi-Faced Technique SPFACS Video Object Design Analysis of The AAM Algorithm Applies Smile Detection)

  • 최병관
    • 디지털산업정보학회논문지 / Vol. 11, No. 3 / pp.99-112 / 2015
  • Digital imaging technology has advanced beyond the limits of the multimedia industry through IT convergence and is developing into a complex industry; in the field of object recognition in particular, various smartphone application technologies related to faces are being actively researched. Recently, face recognition has been evolving into intelligent object recognition through image recognition and detection technology, and object recognition based on image processing techniques is being actively studied for IP cameras through 3D image object recognition for face recognition. In this paper, we first examine the essential human factors, technical factors, and trends in human object recognition, and then study smile detection technology based on SPFACS (Smile Progress Facial Action Coding System) for multi-faceted object recognition. Study method: 1) a 3D object imaging system was designed to analyze the required human cognitive skills; 2) parameter identification and an optimal measurement method for 3D object recognition and face detection using the AAM algorithm are proposed; and 3) the results are applied to face recognition, and the effectiveness of the approach is demonstrated by detecting the subject's tooth area and extracting feature points for expression recognition.

Real-time Human Detection under Omni-directional Camera based on CNN with Unified Detection and AGMM for Visual Surveillance

  • Nguyen, Thanh Binh;Nguyen, Van Tuan;Chung, Sun-Tae;Cho, Seongwon
    • 한국멀티미디어학회논문지 / Vol. 19, No. 8 / pp.1345-1360 / 2016
  • In this paper, we propose a new real-time human detection method for omni-directional cameras for visual surveillance, based on a CNN with unified detection and an AGMM (adaptive Gaussian mixture model). Compared with other CNN-based state-of-the-art object detection methods, YOLO-style unified detection is very fast but less accurate. The proposed method adapts the unified-detection CNN of the YOLO model so that it is strengthened by additional foreground context information obtained from a preceding AGMM stage. The extra computation incurred by the AGMM is compensated by the speed-up gained from using 2-D input data, consisting of grey-level image data and foreground context information, instead of 3-D color input data. Various experiments show that the proposed method is more accurate and more robust to environmental changes than the YOLO-based human detection method, while running at a similar speed; it can therefore be successfully employed in embedded surveillance applications.
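
As a rough illustration of the pre-stage described above, the sketch below builds a two-plane input (grey-level image plus AGMM foreground mask) using OpenCV's MOG2 background subtractor, which is one common adaptive-GMM implementation; the YOLO-style unified detection CNN itself is omitted, and all parameter values are assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

# Adaptive Gaussian mixture model (AGMM) background subtractor from OpenCV.
agmm = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

def make_two_plane_input(frame_bgr):
    """Stack a grey-level image with its AGMM foreground mask (sketch only)."""
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    fg_mask = agmm.apply(frame_bgr)              # 0 = background, 255 = foreground
    planes = np.stack([grey, fg_mask], axis=-1)  # H x W x 2 input
    return planes.astype(np.float32) / 255.0

# The 2-plane tensor would then be fed to a unified detection CNN whose first
# convolution accepts 2 input channels (the detector itself is omitted here).
```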

Simple Online Multiple Human Tracking based on LK Feature Tracker and Detection for Embedded Surveillance

  • Vu, Quang Dao;Nguyen, Thanh Binh;Chung, Sun-Tae
    • 한국멀티미디어학회논문지 / Vol. 20, No. 6 / pp.893-910 / 2017
  • In this paper, we propose a simple online multiple-object (human) tracking method, LKDeep (Lucas-Kanade feature and detection based simple online multiple object tracker), which can run online fast enough on a CPU core alone with acceptable tracking performance for embedded surveillance. The proposed LKDeep is a pragmatic hybrid approach that tracks multiple objects (humans) mainly with LK features, compensated by detection at periodic intervals or when necessary. Compared with other state-of-the-art multiple-object tracking methods based on the tracking-by-detection (TBD) approach, LKDeep is faster because it does not have to run the detector on every frame and uses a simple association rule, yet it still shows good tracking performance. Experiments comparing LKDeep with online state-of-the-art multiple-object tracking (MOT) methods that use the public DPM detector, as reported in the MOT challenge [1], show that the proposed simple online MOT method runs faster while maintaining good tracking performance for surveillance purposes. A further single-object tracking (SOT) experiment on the visual tracker benchmark [2] shows that LKDeep with an optimized deep-learning detector can run online quickly with tracking performance comparable to other state-of-the-art SOT methods.
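
The general pattern the abstract describes, tracking with Lucas-Kanade (LK) optical flow between frames and refreshing the tracks from a detector only periodically or when tracks are lost, can be sketched as follows; the HOG people detector, the refresh interval, and the one-point-per-target simplification are stand-ins, not the LKDeep design.

```python
import cv2
import numpy as np

DETECT_EVERY = 10          # refresh tracks with the detector every N frames (assumed value)

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def run_detector(frame):
    """Stand-in person detector; the paper uses DPM / deep-learning detectors instead."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return boxes                               # array of [x, y, w, h]

def track_video(frames):
    """Yield tracked point positions per frame: LK flow, periodic re-detection."""
    prev_grey, points = None, None
    for idx, frame in enumerate(frames):
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if idx % DETECT_EVERY == 0 or points is None or len(points) == 0:
            boxes = run_detector(frame)
            # seed one tracking point per detection (box centre); real trackers use many
            points = np.array([[[x + w / 2.0, y + h / 2.0]] for x, y, w, h in boxes],
                              dtype=np.float32)
        else:
            # propagate points with pyramidal Lucas-Kanade optical flow
            points, status, _ = cv2.calcOpticalFlowPyrLK(prev_grey, grey, points, None)
            points = points[status.flatten() == 1].reshape(-1, 1, 2)
        prev_grey = grey
        yield points.reshape(-1, 2)
```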

CCD카메라와 적외선 카메라의 융합을 통한 효과적인 객체 추적 시스템 (Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera)

  • 김승훈;정일균;박창우;황정훈
    • 제어로봇시스템학회논문지 / Vol. 17, No. 3 / pp.229-235 / 2011
  • To build a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-ray system and an infrared-ray system is proposed. The proposed system separates the object by combining the ROIs (regions of interest) estimated from the two different images of a heterogeneous sensor that consolidates an ordinary CCD camera and an IR (infrared) camera. The human body and face are detected in both images using different algorithms, such as histogram, optical flow, skin-color model, and Haar model. The pose of the human body is also estimated from the body detection result in the IR image using the PCA algorithm together with the AdaBoost algorithm. The results of each detection algorithm are then fused to extract the best detection result. To verify the heterogeneous sensor fusion system, several experiments were conducted in various environments. The experimental results indicate good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robots or home systems; it also includes surveillance and military systems.
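
A minimal sketch of the ROI-combination idea, assuming both sensors yield axis-aligned [x1, y1, x2, y2] boxes in a common image coordinate frame (which in practice requires calibration between the CCD and IR views): visible-light detections are kept only when confirmed by an overlapping IR detection, and confirmed pairs are merged. This is an illustration of the fusion step only, not the paper's exact rule.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def fuse_rois(ccd_boxes, ir_boxes, thresh=0.3):
    """Keep CCD detections confirmed by an overlapping IR detection and merge them."""
    fused = []
    for c in ccd_boxes:
        best = max(ir_boxes, key=lambda r: iou(c, r), default=None)
        if best is not None and iou(c, best) >= thresh:
            # merge the two boxes by taking their union region
            fused.append([min(c[0], best[0]), min(c[1], best[1]),
                          max(c[2], best[2]), max(c[3], best[3])])
    return fused
```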

An Automatic Camera Tracking System for Video Surveillance

  • Lee, Sang-Hwa;Sharma, Siddharth;Lin, Sang-Lin;Park, Jong-Il
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 2010년도 하계학술대회 / pp.42-45 / 2010
  • This paper proposes an intelligent video surveillance system for human object tracking. The proposed system integrates object extraction, human object recognition, face detection, and camera control. First, the object in the video signal is extracted using background subtraction. Then, the object region is examined to determine whether it is human or not. For this recognition, a region-based shape descriptor, the angular radial transform (ART) in MPEG-7, is used to learn and train the shapes of human bodies. When the object is decided to be a human or something to be investigated, the face region is detected. Finally, the face or object region is tracked in the video, and a pan/tilt/zoom (PTZ) controllable camera tracks the moving object using its motion information. The simulation is performed with real CCTV cameras and their communication protocol. According to the experiments, the proposed system is able to track the moving object (human) automatically, not only in the image domain but also in real 3-D space. The proposed system reduces the need for human supervisors and improves surveillance efficiency through computer vision techniques.
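
The camera-control step can be illustrated with a simple proportional scheme, sketched below under the assumption of an OpenCV MOG2 background subtractor and a placeholder gain; the ART shape descriptor, face detection, and the actual PTZ communication protocol used in the paper are not reproduced.

```python
import cv2

bg_sub = cv2.createBackgroundSubtractorMOG2()

def largest_moving_region(frame):
    """Background subtraction followed by selecting the largest foreground blob."""
    mask = bg_sub.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))   # (x, y, w, h)

def pan_tilt_command(box, frame_shape, gain=0.1):
    """Proportional pan/tilt step that re-centres the tracked region (sketch only)."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    img_cx, img_cy = frame_shape[1] / 2.0, frame_shape[0] / 2.0
    pan = gain * (cx - img_cx)     # positive: pan right
    tilt = gain * (cy - img_cy)    # positive: tilt down
    return pan, tilt
```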


2D Human Pose Estimation based on Object Detection using RGB-D information

  • Park, Seohee;Ji, Myunggeun;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 2 / pp.800-816 / 2018
  • In recent years, video surveillance research has been able to recognize various behaviors of pedestrians and analyze the overall situation of objects by combining image analysis technology and deep learning methods. Human activity recognition (HAR), an important issue in video surveillance research, is a field that detects abnormal behavior of pedestrians in CCTV environments. To recognize human behavior, it is necessary to detect the human in the image and to estimate the pose of the detected human. In this paper, we propose a novel approach for 2D human pose estimation based on object detection using RGB-D information. By adding depth information to RGB information, which has some limitations in detecting objects due to the lack of topological information, we can improve detection accuracy. Subsequently, the rescaled region of the detected object is applied to Convolutional Pose Machines (CPM), a sequential prediction structure based on a convolutional neural network. We utilize CPM to generate belief maps that predict the positions of keypoints representing human body parts, estimating the human pose by detecting 14 key body points. The experimental results show that the proposed method detects target objects robustly under occlusion. It is also possible to perform 2D human pose estimation by providing an accurately detected region as the input of the CPM. As future work, we will estimate the 3D human pose by mapping the 2D coordinate information of the body parts onto 3D space, thereby providing useful human behavior information for HAR research.
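
The final stage the abstract describes, cropping and rescaling the detected region and decoding 14 belief maps into keypoints, can be sketched generically as follows; the 368 × 368 input size is a common CPM resolution assumed here, and the pose network itself is not shown.

```python
import cv2
import numpy as np

def crop_and_rescale(image, box, size=368):
    """Crop the detected person box and rescale it to the pose network's input size."""
    x1, y1, x2, y2 = [int(v) for v in box]
    crop = image[y1:y2, x1:x2]
    return cv2.resize(crop, (size, size))

def decode_belief_maps(belief_maps):
    """Take the argmax of each of the 14 belief maps to get 2-D keypoint positions.

    belief_maps : array of shape (14, H, W), one map per body part.
    Returns a list of (x, y) positions in map coordinates.
    """
    keypoints = []
    for bmap in belief_maps:
        y, x = np.unravel_index(np.argmax(bmap), bmap.shape)
        keypoints.append((int(x), int(y)))
    return keypoints
```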

Three-stream network with context convolution module for human-object interaction detection

  • Siadari, Thomhert S.;Han, Mikyong;Yoon, Hyunjin
    • ETRI Journal / Vol. 42, No. 2 / pp.230-238 / 2020
  • Human-object interaction (HOI) detection is a popular computer vision task that detects interactions between humans and objects. This task can be useful in many applications that require a deeper understanding of semantic scenes. Current HOI detection networks typically consist of a feature extractor followed by detection layers comprising small filters (e.g., 1 × 1 or 3 × 3). Although small filters can capture local spatial features with few parameters, they fail to capture the larger context information relevant for recognizing interactions between humans and distant objects owing to their small receptive regions. Hence, we herein propose a three-stream HOI detection network that employs a context convolution module (CCM) in each stream branch. The CCM can capture larger contexts from input feature maps by adopting combinations of large separable convolution layers and residual-based convolution layers; because only a few large separable filters are used, the number of parameters does not increase. We evaluate our HOI detection method on two benchmark datasets, V-COCO and HICO-DET, and demonstrate its state-of-the-art performance.
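
The paper's exact CCM is not reproduced here; the following PyTorch sketch only combines the two ingredients the abstract names, a large separable convolution branch (k × 1 followed by 1 × k) and a residual convolution branch, with illustrative channel and kernel sizes.

```python
import torch
import torch.nn as nn

class ContextConvModule(nn.Module):
    """Sketch of a context convolution block: a large separable convolution branch
    plus a residual 3 x 3 branch, summed with the input. Channel count and k are
    illustrative choices, not the paper's configuration."""

    def __init__(self, channels, k=15):
        super().__init__()
        pad = k // 2
        self.separable = nn.Sequential(                       # large receptive field, few params
            nn.Conv2d(channels, channels, kernel_size=(k, 1), padding=(pad, 0)),
            nn.Conv2d(channels, channels, kernel_size=(1, k), padding=(0, pad)),
        )
        self.residual = nn.Sequential(                        # local residual refinement
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.separable(x) + self.residual(x))

# Example: a 256-channel feature map keeps its spatial size while gaining context.
feat = torch.randn(1, 256, 32, 32)
out = ContextConvModule(256)(feat)      # -> torch.Size([1, 256, 32, 32])
```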

Activity Object Detection Based on Improved Faster R-CNN

  • Zhang, Ning;Feng, Yiran;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 24, No. 3 / pp.416-422 / 2021
  • Due to the large differences within human activity classes, the large similarity between classes, and the problems of viewing angle and occlusion, it is difficult to extract features manually, and the detection rate of human behavior is low. To better address these problems, an improved Faster R-CNN-based detection algorithm is proposed in this paper. It achieves multi-object recognition and localization through a two-stage detection network and replaces the original feature extraction module with DenseNet, which can fuse multi-level feature information, increase network depth, and avoid vanishing gradients. Meanwhile, the proposal merging strategy is improved with Soft-NMS, in which an attenuation function replaces the conventional NMS algorithm, thereby avoiding missed detections of adjacent or overlapping objects and enhancing detection accuracy when multiple objects are present. In the experiments, the improved Faster R-CNN method achieves an 84.7% target detection result, an improvement over the compared methods, which demonstrates that the proposed recognition method has significant advantages and potential.
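
Soft-NMS itself is a published, generic procedure, so the score-attenuation idea the abstract refers to can be sketched as follows (using the Gaussian decay variant; whether this matches the authors' exact attenuation function is not stated in the abstract). Instead of discarding boxes that overlap a higher-scoring box, their scores are decayed, so adjacent or overlapping objects are less likely to be missed.

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS (generic form): decay the scores of overlapping boxes
    instead of suppressing them outright. boxes: (N, 4) arrays of [x1, y1, x2, y2]."""
    boxes, scores = boxes.astype(float), scores.astype(float).copy()
    keep = []
    while scores.size and scores.max() > score_thresh:
        i = int(np.argmax(scores))
        keep.append((boxes[i].copy(), float(scores[i])))
        # IoU of the selected box with all boxes
        xx1 = np.maximum(boxes[i, 0], boxes[:, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[:, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[:, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[:, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (areas[i] + areas - inter + 1e-9)
        scores *= np.exp(-(iou ** 2) / sigma)   # Gaussian attenuation of overlapping boxes
        scores[i] = 0.0                          # the selected box is done
    return keep
```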