• Title/Summary/Keyword: Object Region Detection

Efficient Object Recognition by Masking Semantic Pixel Difference Region of Vision Snapshot for Lightweight Embedded Systems (경량화된 임베디드 시스템에서 의미론적인 픽셀 분할 마스킹을 이용한 효율적인 영상 객체 인식 기법)

  • Yun, Heuijee;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.6 / pp.813-826 / 2022
  • AI-based image processing has been widely studied across many fields. However, the more constrained the embedded board, the harder it is to shrink image processing algorithms, because they remain computationally heavy. In this paper, we propose a deep-learning-based object recognition method for lightweight embedded boards. The region of interest is first determined with a semantic segmentation network that requires relatively little computation. After masking that region, a more accurate deep learning model performs object detection on it, combining the efficient neural network (ENet) with You Only Look Once (YOLO) so that object recognition can run in real time on lightweight embedded boards. This research is expected to benefit autonomous driving applications, which must be far lighter and cheaper than existing object recognition approaches.
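
The pipeline summarized above masks a semantically segmented region before running a heavier detector on it. Below is a minimal sketch of that masking step with OpenCV and NumPy; `seg_model` and `detector` are hypothetical stand-ins for the ENet and YOLO models, which are not reproduced here.

```python
import numpy as np
import cv2


def mask_and_crop(frame, seg_mask):
    """Zero out pixels outside the segmentation mask and crop to its bounding box.

    frame:    H x W x 3 image
    seg_mask: H x W uint8 mask (non-zero = region of interest)
    """
    masked = cv2.bitwise_and(frame, frame, mask=seg_mask)
    ys, xs = np.nonzero(seg_mask)
    if len(xs) == 0:
        return None  # nothing segmented, so the detector can be skipped
    x0, x1 = xs.min(), xs.max() + 1
    y0, y1 = ys.min(), ys.max() + 1
    return masked[y0:y1, x0:x1]


# Hypothetical usage: seg_model and detector stand in for ENet and YOLO.
# crop = mask_and_crop(frame, seg_model(frame))
# if crop is not None:
#     boxes = detector(crop)
```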

Object Detection Method for The Wild Pig Surveillance System (멧돼지 감시 시스템을 위한 객체 검출 방법)

  • Kim, Dong-Woo;Song, Young-Jun;Kim, Ae-Kyeong;Hong, You-Sik;Ahn, Jae-Hyeong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.5 / pp.229-235 / 2010
  • In this paper, we propose a method to improve the efficiency of moving object detection in a real-time surveillance camera system. Existing methods based on differential images and background images have difficulty detecting moving objects that enter the scene from outside the video stream. The proposed method keeps the background image unchanged while no moving object is detected from the difference between the previous and current frames, and renews the background image once the moving object has left the frame. To distinguish people from wild pigs, the system estimates a bounding box enclosing each moving object in the detection region. Simulation results show that the proposed method outperforms the existing method.
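
A rough sketch of the differencing logic described above, under the assumption of a binary-threshold difference and a simple blob test: the background is renewed only when no inter-frame motion is found, and each moving blob is enclosed in a bounding box. The threshold and minimum area are illustrative values, not the paper's.

```python
import cv2

DIFF_THRESH = 25   # illustrative per-pixel difference threshold
MIN_AREA = 500     # illustrative minimum blob area in pixels


def detect_moving_objects(prev_gray, cur_gray, background):
    """Return (bounding boxes, updated background) for one new grayscale frame."""
    frame_diff = cv2.absdiff(prev_gray, cur_gray)
    motion = cv2.threshold(frame_diff, DIFF_THRESH, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(motion) == 0:
        # No inter-frame motion: renew the background with the current frame.
        return [], cur_gray.copy()

    # Motion present: segment moving objects against the kept background image.
    fg = cv2.threshold(cv2.absdiff(cur_gray, background),
                       DIFF_THRESH, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= MIN_AREA]
    return boxes, background
```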

A New Object Region Detection and Classification Method using Multiple Sensors on the Driving Environment (다중 센서를 사용한 주행 환경에서의 객체 검출 및 분류 방법)

  • Kim, Jung-Un;Kang, Hang-Bong
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1271-1281 / 2017
  • Collecting and analyzing information about targets around the vehicle is essential for autonomous driving. Based on that analysis, environmental information such as object location and heading must be processed in real time to control the vehicle. In particular, occlusion and truncation of objects in the image must be handled to provide accurate information about the vehicle's surroundings and to enable safe operation. In this paper, we propose a method that simultaneously generates 2D and 3D bounding box proposals using a LiDAR edge obtained by filtering the LiDAR sensor data. Each proposal is classified by connecting it to a Region-based Fully Convolutional Network (R-FCN), a deep-learning object classifier that takes two-dimensional images as input. Each 3D box is then refined using its class label and per-class subcategory information to complete the 3D bounding box corresponding to the object. Because the 3D bounding boxes are created in 3D space, object information such as spatial coordinates and object size is obtained at once, and the 2D bounding boxes associated with them are free of problems such as occlusion.
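
The link between each 3D box and its 2D counterpart comes down to projecting the 3D corners into the image plane. A small sketch of that projection with a pinhole camera model follows; the calibration matrices are assumed inputs, and the LiDAR-edge proposal generation and R-FCN classification are outside this snippet.

```python
import numpy as np


def project_box_corners(corners_3d, K, Rt):
    """Project 3D box corners (N x 3, sensor frame) to pixel coordinates.

    K:  3 x 3 camera intrinsic matrix
    Rt: 3 x 4 extrinsic matrix mapping sensor coordinates to the camera frame
    Assumes all corners lie in front of the camera (positive depth).
    """
    homo = np.hstack([corners_3d, np.ones((corners_3d.shape[0], 1))])  # N x 4
    cam = Rt @ homo.T                   # 3 x N points in the camera frame
    pix = K @ cam
    pix = pix[:2] / pix[2]              # perspective divide
    return pix.T                        # N x 2 pixel coordinates


def enclosing_2d_box(corners_3d, K, Rt):
    """2D box (x_min, y_min, x_max, y_max) enclosing the projected 3D box."""
    uv = project_box_corners(corners_3d, K, Rt)
    return uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()
```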

Shadow Removal Based on Chromaticity and Entropy for Efficient Moving Object Tracking (효과적인 이동물체 추적을 위한 색도 영상과 엔트로피 기반의 그림자 제거)

  • Park, Ki-Hong
    • Journal of Advanced Navigation Technology / v.18 no.4 / pp.387-392 / 2014
  • Various studies on intelligent video surveillance systems have recently been proposed, but existing monitoring systems are inefficient because all situational awareness is left to human judgment. In this paper, a shadow-removal-based moving object tracking method is proposed using chromaticity and entropy images. A background subtraction model, effective in context-aware environments, is applied for moving object detection. After the moving object region is detected, the shadow candidate region is estimated and removed using RGB-based chromaticity and minimum cross-entropy images. Highway video is used to validate the proposed method, and the experiments show that both shadow removal and moving object tracking perform well.
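
A minimal sketch of the RGB-chromaticity test implied above: a foreground pixel is a shadow candidate when its chromaticity stays close to the background's while its brightness drops. The fixed tolerances here are illustrative; the paper derives its threshold from minimum cross-entropy rather than constants.

```python
import numpy as np

CHROMA_TOL = 0.03   # illustrative chromaticity tolerance
DARKENING = 0.8     # illustrative brightness-ratio bound for shadow pixels


def chromaticity(img):
    """Per-pixel chromaticity of a float color image (channel order only needs
    to match between frame and background for the comparison to hold)."""
    s = img.sum(axis=2, keepdims=True) + 1e-6
    return img[..., :2] / s


def shadow_candidates(frame, background, foreground_mask):
    """Flag foreground pixels that look like shadow rather than object."""
    frame = frame.astype(np.float32)
    background = background.astype(np.float32)
    chroma_diff = np.abs(chromaticity(frame) - chromaticity(background)).max(axis=2)
    brightness_ratio = (frame.sum(axis=2) + 1e-6) / (background.sum(axis=2) + 1e-6)
    shadow = (chroma_diff < CHROMA_TOL) & (brightness_ratio < DARKENING)
    return shadow & (foreground_mask > 0)
```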

A Saliency Map based on Color Boosting and Maximum Symmetric Surround

  • Huynh, Trung Manh;Lee, Gueesang
    • Smart Media Journal / v.2 no.2 / pp.8-13 / 2013
  • Salient region detection has become a popular research topic because of its use in many applications such as object recognition and object segmentation. Some recent methods boost color saliency by exploiting color distinctiveness, based on an analysis of the statistics of color image derivatives, and can produce good saliency maps. However, when the salient regions cover more than half the pixels of the image or the background is complex, the results degrade. In this paper, we introduce a method that handles these problems by using the maximum symmetric surround. The results show that our method outperforms the previous algorithms. We also show segmentation results obtained with Otsu's method.
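
A compact sketch of the maximum-symmetric-surround idea: each pixel's saliency is its Lab-space distance from the mean of the largest surround region symmetric about it inside the image, which limits the bias toward image borders when salient regions are large. The color-boosting step and the Gaussian smoothing of the standard formulation are omitted for brevity.

```python
import numpy as np
import cv2


def mss_saliency(bgr):
    """Maximum-symmetric-surround saliency map, normalized to [0, 1]."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    h, w = lab.shape[:2]
    integral = cv2.integral(lab)                  # (h+1) x (w+1) x 3 sums
    sal = np.zeros((h, w), np.float32)
    for y in range(h):
        for x in range(w):
            off_x = min(x, w - 1 - x)             # symmetric surround extent
            off_y = min(y, h - 1 - y)
            x0, x1 = x - off_x, x + off_x + 1
            y0, y1 = y - off_y, y + off_y + 1
            area = (x1 - x0) * (y1 - y0)
            mean = (integral[y1, x1] - integral[y0, x1]
                    - integral[y1, x0] + integral[y0, x0]) / area
            sal[y, x] = np.sum((mean - lab[y, x]) ** 2)
    return sal / (sal.max() + 1e-6)
```

The segmentation mentioned in the abstract can then be obtained by scaling the map to 8 bits and thresholding it with OpenCV's Otsu option (cv2.THRESH_OTSU).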

Cluster Based Object Detection in Wireless Sensor Network

  • Rahman, Obaidur;Hong, Choong-Seon
    • Proceedings of the Korean Information Science Society Conference / 2006.10d / pp.56-58 / 2006
  • Sensing and coverage are the two central tasks of a sensor network, and the quality of the network depends entirely on the sensing ability of its sensors. In a monitored region, how successfully the network detects or senses an object indicates how efficient its coverage is. In this paper we propose a clustering algorithm for sensor deployment and, from it, calculate the object detection probability. This allows the current network coverage status to be assessed easily so that action can be taken to improve it.
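
A small sketch of estimating the object detection (coverage) probability under a binary disc sensing model, assuming sensor positions are already given by some deployment scheme: grid points of the monitored region are sampled and the covered fraction is reported. The clustering-based deployment itself is not reproduced.

```python
import numpy as np


def detection_probability(sensors, sensing_range, region_size, grid_step=1.0):
    """Fraction of grid points in a square region covered by at least one sensor.

    sensors:       N x 2 array of (x, y) sensor positions
    sensing_range: binary disc sensing radius
    region_size:   side length of the square monitored region
    """
    xs = np.arange(0, region_size, grid_step)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)  # M x 2 points
    # Distance from every grid point to every sensor.
    d = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
    covered = d.min(axis=1) <= sensing_range
    return covered.mean()


# Example: 30 random sensors with radius 10 in a 100 x 100 field.
# rng = np.random.default_rng(0)
# print(detection_probability(rng.uniform(0, 100, (30, 2)), 10.0, 100.0))
```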

Robust Vision Based Algorithm for Accident Detection of Crossroad (교차로 사고감지를 위한 강건한 비젼기반 알고리즘)

  • Jeong, Sung-Hwan;Lee, Joon-Whoan
    • The KIPS Transactions:PartB / v.18B no.3 / pp.117-130 / 2011
  • The purpose of this study is to detect crossroad accidents more reliably, which requires an efficient way to build background images that account for object movement and to preserve and report candidate accident regions. A prior study used the traffic signal interval at the intersection to detect accidents, but it can fail to detect an accident when an object occludes the accident site. This study adopts inverse perspective mapping to normalize object scale and proposes producing background images robust to surrounding noise, generating candidate accident regions from object movement information, and using edge information to preserve or delete those candidate regions. To measure the performance of the proposed algorithm, a variety of traffic videos were recorded and used in experiments, including rush-hour footage from a DVR installed at an intersection, accident footage recorded in daytime, at night, and on rainy days, and footage containing noise from lighting and shadows. All 20 experimental accident cases were detected, and the effective accident detection rate averaged 76.9%. In addition, the processing rate ranged from 10 to 14 frames/sec depending on the size of the detection region, so real-time image processing poses no problem.
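
A brief sketch of the inverse perspective mapping step used to normalize object scale: four image points on the road plane are mapped to a rectangle in a bird's-eye view via a homography. The point coordinates below are placeholders that would come from calibrating the actual intersection camera.

```python
import numpy as np
import cv2

# Placeholder calibration: image-plane corners of a road-plane region (pixels).
SRC_PTS = np.float32([[420, 310], [860, 310], [1180, 700], [100, 700]])
# Target bird's-eye rectangle (pixels); its size sets the ground-plane resolution.
DST_PTS = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

H = cv2.getPerspectiveTransform(SRC_PTS, DST_PTS)


def to_birds_eye(frame):
    """Warp a camera frame to the bird's-eye (inverse perspective) view."""
    return cv2.warpPerspective(frame, H, (400, 600))
```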

A binocular robot vision system with quadrangle recognition

  • Yabuta, Yoshito;Mizumoto, Hiroshi;Arii, Shiro
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.80-83 / 2005
  • A binocular robot vision system with an autonomously moving active viewpoint is proposed. Using this active viewpoint, the system establishes correspondences between feature points on the right and left retinal images and calculates the spatial coordinates of those points. The system detects straight lines in an image with the Hough transform, searches for a region bounded by four straight lines, and recognizes that region as a quadrangle. It then matches the quadrangles in the right and left images and, from this correspondence, calculates the spatial coordinates of the object. An experiment demonstrates the effectiveness of the Hough-transform line detection, the recognition of the object surface, and the calculation of the object's spatial coordinates.
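
A short sketch of the line-detection ingredient: standard Hough transform lines in (rho, theta) form and the intersection of two such lines, which gives candidate quadrangle corners. Grouping four lines into a quadrangle and the left-right stereo correspondence are left out.

```python
import numpy as np
import cv2


def detect_lines(gray, vote_thresh=150):
    """Detect straight lines with the standard Hough transform."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, vote_thresh)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (rho, theta)


def intersect(line1, line2):
    """Intersection of two lines given as (rho, theta), or None if parallel."""
    (r1, t1), (r2, t2) = line1, line2
    a = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(a)) < 1e-6:
        return None
    x, y = np.linalg.solve(a, np.array([r1, r2]))
    return x, y
```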

Motion Estimation-based Human Fall Detection for Visual Surveillance

  • Kim, Heegwang;Park, Jinho;Park, Hasil;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / v.5 no.5 / pp.327-330 / 2016
  • The world's elderly population continues to grow at a dramatic rate, and as the number of senior citizens increases, fall detection has attracted growing attention in visual surveillance systems. This paper presents a novel fall-detection algorithm using motion estimation and an integrated spatiotemporal energy map of the object region. The proposed method first extracts the human region with a background subtraction method. Next, an optical flow algorithm estimates motion vectors, and an energy map is generated by accumulating the detected human region over a certain period of time. A fall is then detected with k-nearest neighbor (kNN) classification using the estimated motion information and the energy map. Experimental results show that the proposed algorithm can effectively detect a fall in any direction, including directions parallel to the camera's optical axis.
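
A condensed sketch of the ingredients listed above, assuming a foreground (human) mask is already available from background subtraction: dense optical flow supplies motion vectors, the mask is accumulated into a spatiotemporal energy map, and a kNN classifier decides on a simple feature vector. The feature construction here is a simplification, not the paper's exact descriptor.

```python
import numpy as np
import cv2
from sklearn.neighbors import KNeighborsClassifier


def update_and_describe(prev_gray, cur_gray, human_mask, energy_map, decay=0.95):
    """Update the spatiotemporal energy map and return a simple feature vector.

    human_mask: H x W binary mask of the extracted human region
    energy_map: H x W float32 accumulator, decayed and updated in place
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    energy_map *= decay
    energy_map += (human_mask > 0).astype(np.float32)   # accumulate human region
    fg = human_mask > 0
    mean_flow = flow[fg].mean(axis=0) if fg.any() else np.zeros(2, np.float32)
    pooled = cv2.resize(energy_map, (8, 8)).ravel()      # coarse energy descriptor
    return np.concatenate([mean_flow, pooled])


# Fall / no-fall decision with kNN on labeled feature vectors:
# clf = KNeighborsClassifier(n_neighbors=5).fit(train_features, train_labels)
# is_fall = clf.predict([update_and_describe(prev, cur, mask, energy)])[0]
```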

A Study on the Edge Detection using Region Segmentation of the Mask (마스크의 영역 분할을 이용한 에지 검출에 관한 연구)

  • Lee, Chang-Young;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.3 / pp.718-723 / 2013
  • In general, the boundary between background and objects is where the image changes most rapidly and is an important element for analyzing image characteristics. Information about the position or shape of an object in the image is obtained from these boundaries, and many studies have sought to detect them. Existing methods are comparatively simple to implement and fast, but their edge detection performance is limited because the same weights are applied to all pixels in the mask. Therefore, in this paper we propose an algorithm that segments the mask into regions so that edge detection adapts to the image, and the results show superior edge detection in edge areas.
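
One illustrative reading of "region segmentation of the mask", not the paper's exact weighting: the 3x3 neighborhood is split into two intensity regions around the local mean, and the edge strength is the difference of the two region means, so the response adapts to the local pixel distribution instead of using one fixed weight mask.

```python
import numpy as np


def adaptive_edge(gray):
    """Edge magnitude from splitting each 3x3 mask into two intensity regions.

    Illustrative interpretation only: neighbors at or below the local mean form
    one region, the rest form the other, and the edge strength is the difference
    of the two region means.
    """
    gray = gray.astype(np.float32)
    h, w = gray.shape
    out = np.zeros_like(gray)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = gray[y - 1:y + 2, x - 1:x + 2].ravel()
            m = window.mean()
            low, high = window[window <= m], window[window > m]
            if len(low) and len(high):
                out[y, x] = high.mean() - low.mean()
    return out
```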