• Title/Summary/Keyword: Object Localization

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal
    • /
    • v.36 no.6
    • /
    • pp.913-923
    • /
    • 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, using an object extraction method that combines Lucas-Kanade optical flow motion detection with images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data obtained by the individual robots. Global mapping takes a long time to process because map data must be exchanged among the individual robots while all areas are searched. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computational cost of the correction algorithm is reduced compared with existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created for each robot based on the omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps produced by the proposed algorithm with the real maps.
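
As an illustration of the map-merging step described in this abstract (not the authors' implementation), the sketch below assumes each robot's local occupancy grid has already been placed in a common global frame by the SLAM front end; merging then reduces to combining cell evidence, here simply by keeping the strongest occupancy value per cell.

```python
import numpy as np

def merge_local_maps(local_maps, offsets, global_shape):
    """Merge per-robot occupancy grids into one global grid.

    local_maps   : list of 2D arrays with occupancy values in [0, 1]
    offsets      : list of (row, col) positions of each local map's origin
                   in the global grid (assumed known from localization)
    global_shape : (rows, cols) of the global map
    """
    global_map = np.zeros(global_shape)
    for grid, (r0, c0) in zip(local_maps, offsets):
        h, w = grid.shape
        region = global_map[r0:r0 + h, c0:c0 + w]
        # Keep the strongest occupancy evidence seen by any robot.
        np.maximum(region, grid, out=region)
    return global_map

# Two toy 4x4 local maps placed at different offsets in a 10x10 global map.
m1 = np.zeros((4, 4)); m1[1, 1] = 1.0
m2 = np.zeros((4, 4)); m2[2, 3] = 1.0
print(merge_local_maps([m1, m2], [(0, 0), (5, 4)], (10, 10)))
```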

Activity Object Detection Based on Improved Faster R-CNN

  • Zhang, Ning;Feng, Yiran;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.416-422
    • /
    • 2021
  • Due to the large differences in human activity within classes, the large similarity between classes, and problems of viewing angle and occlusion, it is difficult to extract features manually, and the detection rate of human behavior is low. To better solve these problems, an improved Faster R-CNN-based detection algorithm is proposed in this paper. It achieves multi-object recognition and localization through a second-order detection network and replaces the original feature extraction module with DenseNet, which can fuse multi-level feature information, increase network depth, and avoid vanishing gradients. Meanwhile, the proposal merging strategy is improved with Soft-NMS, in which an attenuation function replaces the conventional NMS algorithm, thereby avoiding missed detections of adjacent or overlapping objects and enhancing detection accuracy when multiple objects are present. In the experiments, the improved Faster R-CNN method achieves an 84.7% target detection result, an improvement over the other compared methods, which indicates that the proposed recognition method has significant advantages and potential.
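
The Soft-NMS step mentioned in this abstract can be sketched generically as follows: rather than discarding proposals that overlap the currently highest-scoring box, their scores are attenuated with a Gaussian function of the overlap. This is a minimal illustration of Gaussian Soft-NMS, not the paper's code; the box format and sigma value are assumptions.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay, rather than remove, overlapping proposals."""
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep = []
    while len(boxes) > 0:
        i = np.argmax(scores)
        keep.append((boxes[i], scores[i]))
        best = boxes[i]
        boxes = np.delete(boxes, i, axis=0)
        scores = np.delete(scores, i)
        if len(boxes) == 0:
            break
        # Attenuation function: heavier overlap -> stronger score decay.
        scores *= np.exp(-iou(best, boxes) ** 2 / sigma)
        mask = scores > score_thresh
        boxes, scores = boxes[mask], scores[mask]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]])
scores = np.array([0.9, 0.8, 0.7])
for b, s in soft_nms(boxes, scores):
    print(b, round(float(s), 3))
```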

Position Estimation of Autonomous Mobile Robot Using Geometric Information of a Moving Object (이동물체의 기하학적 위치정보를 이용한 자율이동로봇의 위치추정)

  • Jin, Tae-Seok;Lee, Jang-Myung
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.438-444
    • /
    • 2004
  • The intelligent robots that will be needed in the near future are human-friendly robots able to coexist with humans and support them effectively. To realize this, robots need to recognize their position and posture in both known and unknown environments, and their localization should occur naturally. Estimating the robot's position while resolving the uncertainty inherent in mobile robot navigation is one of the most important problems. In this paper, we describe a method for the localization of a mobile robot using image information of a moving object. The method combines the position observed from dead-reckoning sensors and the position estimated from the images captured by a fixed camera to localize a mobile robot. Using the a priori known path of a moving object in world coordinates and a perspective camera model, we derive geometric constraint equations that represent the relation between the image-frame coordinates of the moving object and the estimated robot position. Since the equations are based on the estimated position, measurement error may exist at all times. The proposed method utilizes the error between the observed and estimated image coordinates to localize the mobile robot, and a Kalman filter scheme is applied. Its performance is verified by computer simulation and experiment.
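
As a generic illustration of the Kalman-filter fusion this abstract describes (not the authors' exact formulation), the sketch below fuses a dead-reckoning position prediction with an external position measurement such as one derived from a fixed camera; the identity motion and measurement models and the noise levels are assumptions.

```python
import numpy as np

class PositionKF:
    """Minimal 2D position Kalman filter: odometry predict, camera update."""
    def __init__(self, x0, P0, Q, R):
        self.x = np.asarray(x0, float)   # state: [px, py]
        self.P = np.asarray(P0, float)   # state covariance
        self.Q = Q                       # process (odometry) noise
        self.R = R                       # measurement (camera) noise

    def predict(self, odom_delta):
        # Dead reckoning: add the odometry displacement.
        self.x = self.x + odom_delta
        self.P = self.P + self.Q

    def update(self, z):
        # Camera-derived position measurement z = x + noise (H = I).
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ (z - self.x)     # correct with the innovation
        self.P = (np.eye(2) - K) @ self.P

kf = PositionKF(x0=[0, 0], P0=np.eye(2), Q=0.05 * np.eye(2), R=0.2 * np.eye(2))
kf.predict(odom_delta=np.array([1.0, 0.5]))   # robot thinks it moved (1, 0.5)
kf.update(z=np.array([0.9, 0.6]))             # camera says it is near (0.9, 0.6)
print(kf.x)
```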

2D Human Pose Estimation based on Object Detection using RGB-D information

  • Park, Seohee;Ji, Myunggeun;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.2
    • /
    • pp.800-816
    • /
    • 2018
  • In recent years, video surveillance research has been able to recognize various behaviors of pedestrians and analyze the overall situation of objects by combining image analysis technology and deep learning methods. Human Activity Recognition (HAR), which is an important issue in video surveillance research, is a field that detects abnormal behavior of pedestrians in CCTV environments. In order to recognize human behavior, it is necessary to detect the human in the image and to estimate the pose of the detected human. In this paper, we propose a novel approach for 2D human pose estimation based on object detection using RGB-D information. By adding depth information to RGB information, which has some limitations in detecting objects due to the lack of topological information, we can improve the detection accuracy. Subsequently, the rescaled region of the detected object is applied to Convolutional Pose Machines (CPM), a sequential prediction structure based on a convolutional neural network. We utilize CPM to generate belief maps that predict the positions of keypoints representing human body parts and to estimate the human pose by detecting 14 key body points. The experimental results show that the proposed method detects target objects robustly under occlusion. It is also possible to perform 2D human pose estimation by providing the accurately detected region as an input to the CPM. As future work, we will estimate the 3D human pose by mapping the 2D coordinate information of the body parts onto 3D space. Consequently, we can provide useful human behavior information for HAR research.
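
To illustrate the final stage of this pipeline, the following sketch converts a stack of 14 belief maps, predicted for a rescaled person region, into keypoint coordinates in the original image by taking each map's peak. It is a hypothetical stand-in for CPM's output handling, not the authors' code; the confidence threshold is an assumption.

```python
import numpy as np

def keypoints_from_belief_maps(belief_maps, box, conf_thresh=0.1):
    """Convert per-part belief maps into image-space keypoints.

    belief_maps : array of shape (14, H, W), one map per body part
    box         : (x1, y1, x2, y2) of the detected person in the image
    """
    num_parts, h, w = belief_maps.shape
    x1, y1, x2, y2 = box
    scale_x, scale_y = (x2 - x1) / w, (y2 - y1) / h
    keypoints = []
    for m in belief_maps:
        r, c = np.unravel_index(np.argmax(m), m.shape)  # peak of the belief map
        if m[r, c] < conf_thresh:
            keypoints.append(None)                      # part not confidently found
        else:
            keypoints.append((x1 + c * scale_x, y1 + r * scale_y))
    return keypoints

maps = np.random.rand(14, 46, 46)           # toy belief maps
print(keypoints_from_belief_maps(maps, box=(100, 50, 192, 234))[:3])
```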

Estimating Distance of a Target Object from the Background Objects with Electric Image (전기장을 이용한 물체의 거리 측정 연구)

  • Sim, Mi-Young;Kim, Dae-Eun
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.47 no.3
    • /
    • pp.56-62
    • /
    • 2010
  • Weakly electric fish use active sensing to detect the distortion of a self-generated electric field in underwater environments. Active electrolocation makes it possible to identify target objects in the surroundings without vision in the dark sea. Weakly electric fish have many electroreceptors over the whole body surface, and the sensor readings from this collection of electroreceptors are represented as an electric image. Many researchers have worked on finding features in the electric image to understand how weakly electric fish identify a target object. In this paper, we suggest a new mechanism for how electrolocation can recognize a given target object among surrounding plants. The approach is based on the differential components of the electric image and has the potential to be applied to underwater robotic systems for object localization.
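
The role of differential components can be shown with a toy one-dimensional electric image. The sketch below models the image as a Gaussian-shaped perturbation along the body axis and computes its spatial derivative together with the maximum-slope to maximum-amplitude ratio, a feature commonly reported to vary with object distance; the profile model and its parameters are purely illustrative assumptions, not the paper's model.

```python
import numpy as np

def electric_image(x, distance, strength=1.0):
    """Toy rostro-caudal electric image of an object at a given distance."""
    width = 0.5 + distance            # farther objects give wider, flatter images
    amp = strength / (distance ** 3)  # amplitude falls off steeply with distance
    return amp * np.exp(-(x ** 2) / (2 * width ** 2))

x = np.linspace(-5, 5, 501)           # position along the body surface (a.u.)
for d in (0.5, 1.0, 2.0):
    img = electric_image(x, d)
    slope = np.gradient(img, x)       # differential component of the image
    ratio = np.max(np.abs(slope)) / np.max(img)
    print(f"distance {d}: max-slope / max-amplitude = {ratio:.3f}")
```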

Online Hard Example Mining for Training One-Stage Object Detectors (단-단계 물체 탐지기 학습을 위한 고난도 예들의 온라인 마이닝)

  • Kim, Incheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.5
    • /
    • pp.195-204
    • /
    • 2018
  • In this paper, we propose both a new loss function and an online hard example mining scheme for improving the performance of single-stage object detectors which use deep convolutional neural networks. The proposed loss function and the online hard example mining scheme can not only overcome the problem of imbalance between the number of annotated objects and the number of background examples, but also improve the localization accuracy of each object. Therefore, the loss function and the mining scheme can provide intrinsically fast single-stage detectors with detection performance higher than or similar to that of two-stage detectors. In experiments conducted with the PASCAL VOC 2007 benchmark dataset, we show that the proposed loss function and the online hard example mining scheme can improve the performance of single-stage object detectors.
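
The online hard example mining idea can be sketched generically as follows: every positive example contributes its loss, but only the highest-loss negatives are kept, up to a fixed negative-to-positive ratio, so that easy background examples do not dominate. This is a simplified stand-in rather than the paper's proposed loss function; the 3:1 ratio is an assumption borrowed from common practice.

```python
import numpy as np

def ohem_loss(per_example_loss, is_positive, neg_pos_ratio=3):
    """Online hard example mining over a batch of candidate-box losses.

    per_example_loss : 1D array of losses, one per candidate box
    is_positive      : boolean mask marking annotated-object matches
    """
    pos_loss = per_example_loss[is_positive]
    neg_loss = per_example_loss[~is_positive]
    num_neg = min(len(neg_loss), neg_pos_ratio * max(len(pos_loss), 1))
    # Hard negatives = background examples with the largest current loss.
    hard_neg = np.sort(neg_loss)[::-1][:num_neg]
    total = pos_loss.sum() + hard_neg.sum()
    return total / max(len(pos_loss), 1)   # normalize by the number of positives

losses = np.array([2.1, 0.1, 0.05, 1.4, 0.2, 0.9])
positives = np.array([True, False, False, False, False, True])
print(ohem_loss(losses, positives))
```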

Attention based Feature-Fusion Network for 3D Object Detection (3차원 객체 탐지를 위한 어텐션 기반 특징 융합 네트워크)

  • Sang-Hyun Ryoo;Dae-Yeol Kang;Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.2
    • /
    • pp.190-196
    • /
    • 2023
  • Recently, following the development of LIDAR technology, which can measure the distance to an object, interest in LIDAR-based 3D object detection networks has been growing. Previous networks produce inaccurate localization results due to the spatial information lost during voxelization and downsampling. In this study, we propose an attention-based fusion method and a camera-LIDAR fusion system to acquire high-level features and high positional accuracy. First, by introducing the attention method into the Voxel-RCNN structure, a grid-based 3D object detection network, the multi-scale sparse 3D convolution features are fused effectively to improve 3D object detection performance. Additionally, we propose a late-fusion mechanism that fuses the outcomes of the 3D object detection network and a 2D object detection network to remove false positives. Comparative experiments with existing algorithms are performed using the KITTI dataset, which is widely used in the field of autonomous driving. The proposed method shows performance improvements in both 2D object detection on BEV and 3D object detection. In particular, the precision is improved by about 0.54% for the car moderate class compared to Voxel-RCNN.
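
A minimal sketch of the late-fusion idea (under assumed box formats, not the paper's implementation): each 3D detection, once projected into the image plane, is kept only if it overlaps some 2D detection above an IoU threshold, and is otherwise discarded as a likely false positive.

```python
import numpy as np

def iou_2d(a, b):
    """IoU of two axis-aligned image boxes [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def late_fusion(projected_3d_boxes, boxes_2d, iou_thresh=0.5):
    """Keep 3D detections whose image projection agrees with a 2D detection."""
    kept = []
    for i, p in enumerate(projected_3d_boxes):
        if any(iou_2d(p, q) >= iou_thresh for q in boxes_2d):
            kept.append(i)   # confirmed by the 2D detector
    return kept

proj = [[100, 80, 180, 160], [400, 300, 450, 360]]   # image projections of 3D boxes
det2d = [[105, 85, 178, 158]]                        # 2D detector output
print(late_fusion(proj, det2d))                      # -> [0]; the second is dropped
```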

Development of a Hovering Robot System for Calamity Observation

  • Kang, M.S.;Park, S.;Lee, H.G.;Won, D.H.;Kim, T.J.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.580-585
    • /
    • 2005
  • A QRT (Quad-Rotor Type) hovering robot system is developed for quick detection and observation of circumstances in calamity environments such as indoor fire spots. The UAV (Unmanned Aerial Vehicle) is equipped with four propellers, each driven by an electric motor, an embedded controller using a DSP, an INS (Inertial Navigation System) using 3-axis rate gyros, a CCD camera with a wireless communication transmitter for observation, and an ultrasonic range sensor for height control. The developed hovering robot shows stable flight performance with the adoption of RIC (Robust Internal-loop Compensator) based disturbance compensation and a vision-based localization method. The UAV can also avoid obstacles using eight IR and four ultrasonic range sensors. The VTOL (Vertical Take-Off and Landing) flying vehicle flies into indoor fire spots and sends the images captured by the CCD camera to the operator. This kind of small-sized UAV can be widely used in various calamity observation fields without endangering human beings in harmful environments.
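
For the height-control loop, the sketch below uses a plain PID controller driven by an ultrasonic range reading purely as an illustrative stand-in; it is not the RIC-based disturbance compensation scheme of the paper, and the gains and toy vertical-axis model are assumptions.

```python
class HeightPID:
    """Simple PID height controller driven by an ultrasonic range reading."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def thrust_command(self, target_height, measured_height):
        error = target_height - measured_height
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Output is a thrust correction around the nominal hover thrust.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = HeightPID(kp=1.2, ki=0.1, kd=0.4, dt=0.02)
height, velocity = 0.0, 0.0
for _ in range(500):                       # toy vertical-axis simulation, 10 s
    u = pid.thrust_command(1.5, height)    # hold 1.5 m using the "sonar" reading
    velocity += u * 0.02                   # unit mass; gravity folded into hover thrust
    height += velocity * 0.02
print(f"height after 10 s: {height:.2f} m")
```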


A Simulation for Robust SLAM to the Error of Heading in Towing Tank (Unscented Kalman Filter을 이용한 Simultaneous Localization and Mapping 기법 적용)

  • Hwang, A-Rom;Seong, Woo-Jae
    • Proceedings of the Korea Committee for Ocean Resources and Engineering Conference
    • /
    • 2006.11a
    • /
    • pp.339-346
    • /
    • 2006
  • The increased usage of autonomous underwater vehicles (AUVs) has led to the development of alternative navigation methods that do not rely on acoustic beacons and dead-reckoning sensors. This paper describes a simultaneous localization and mapping (SLAM) scheme that uses range sonars mounted on a small AUV. SLAM is one such alternative navigation method; it measures the environment the vehicle is passing through and provides the relative position of the AUV by processing the data from sonar measurements. A technique for a SLAM algorithm that uses several ranging sonars is presented. This technique utilizes an unscented Kalman filter to estimate the locations of the AUV and the objects. In order for the algorithm to work efficiently, the nearest-neighbor standard filter is introduced as the data association algorithm in the SLAM for associating the stored targets with the sonar returns at each time step. The proposed SLAM algorithm is tested by simulations under various conditions. The simulation results show that the proposed SLAM algorithm is capable of estimating the positions of the AUV and the objects, and demonstrate that the algorithm will perform well in various environments.
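
The nearest-neighbor data association step can be sketched in simplified form: each sonar return is matched to the closest stored landmark within a gate, and otherwise treated as a new landmark. The paper gates innovations inside an unscented Kalman filter; the plain Euclidean version below is an illustrative simplification, not the authors' code.

```python
import numpy as np

def nearest_neighbor_associate(returns, landmarks, gate=1.0):
    """Associate sonar returns (Nx2 points) with stored landmarks (Mx2 points).

    Returns a list of (return_index, landmark_index or None); None means the
    return fell outside every landmark's gate and should start a new landmark.
    """
    associations = []
    for i, z in enumerate(returns):
        if len(landmarks) == 0:
            associations.append((i, None))
            continue
        d = np.linalg.norm(landmarks - z, axis=1)   # distance to every landmark
        j = int(np.argmin(d))
        associations.append((i, j) if d[j] <= gate else (i, None))
    return associations

stored = np.array([[2.0, 1.0], [5.0, 4.0]])
sonar = np.array([[2.2, 0.9], [9.0, 9.0]])
print(nearest_neighbor_associate(sonar, stored))   # -> [(0, 0), (1, None)]
```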


Development of an Intelligent Security Robot System for Home Surveillance (가정용 지능형 경비 로봇 시스템 개발)

  • Park, Jeong-Ho;Shin, Dong-Gwan;Woo, Chun-Kyu;Kim, Hyung-Chul;Kwon, Yong-Kwan;Choi, Byoung-Wook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.810-816
    • /
    • 2007
  • EGIS-SR is a mobile security robot system developed through one of the new-growth-engine projects in the robotics industry. It allows home surveillance through an autonomous mobile platform using onboard cameras and wireless security sensors. EGIS-SR has many sensors for autonomous navigation, a hierarchical control architecture to handle the many situations encountered in home surveillance, and robust networking to provide unmanned security services. EGIS-SR is tightly coupled with a networked security environment, in which the robot's information is remotely connected to the remote cockpit and the patrol man, achieving an intelligent unmanned security service. The robot is a two-wheeled mobile robot with casters and a suspension for overcoming doorsills. Its dynamic motion is verified through ADAMS™ simulation. For the main controller, a PXA270-based hardware platform running Linux kernel 2.6 is developed. On the Linux platform, data handling for the various sensors and the localization algorithm are performed. A local path planning algorithm for object avoidance with ultrasonic sensors and localization using StarGazer™ are also developed. Finally, for automatic charging, a docking algorithm with an infrared system is implemented.
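
As a rough illustration of local obstacle avoidance with a ring of ultrasonic sensors (a simplified stand-in, not EGIS-SR's planner), the sketch below steers toward the sensor direction with the most clearance and stops when everything is too close; the sensor layout and thresholds are assumptions.

```python
def choose_heading(ranges_m, sensor_angles_deg, stop_dist=0.3):
    """Pick a steering direction from ultrasonic range readings.

    ranges_m          : clearances reported by each ultrasonic sensor (metres)
    sensor_angles_deg : mounting angle of each sensor relative to the robot's front
    Returns a heading in degrees, or None if the robot should stop.
    """
    if max(ranges_m) < stop_dist:
        return None                      # boxed in: stop and wait or replan
    # Steer toward the direction with the most free space.
    best = max(range(len(ranges_m)), key=lambda i: ranges_m[i])
    return sensor_angles_deg[best]

angles = [-90, -45, 0, 45, 90]
readings = [0.5, 0.4, 0.25, 1.8, 1.2]    # obstacle straight ahead, space to the right
print(choose_heading(readings, angles))  # -> 45
```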