• Title/Summary/Keyword: 3D Indoor Map


Estimating Human Size in 2D Image for Improvement of Detection Speed in Indoor Environments (실내 환경에서 검출 속도 개선을 위한 2D 영상에서의 사람 크기 예측)

  • Gil, Jong In; Kim, Manbae
    • Journal of Broadcast Engineering / v.21 no.2 / pp.252-260 / 2016
  • The performance of a human detection system is affected by camera location and view angle. In 2D images acquired under such camera settings, humans appear at different sizes. Detecting humans at all of these diverse sizes makes it difficult to realize a real-time system. However, if the size of a human in an image can be predicted, the processing time of human detection can be greatly reduced. In this paper, we propose a method that estimates human size by constructing the indoor scene in 3D space. Since a human has constant size everywhere in 3D space, accurate human size in the 2D image can be estimated by projecting the 3D human into image space. Experimental results validate that human size can be predicted by the proposed method and that machine-learning-based detection methods can thereby reduce their processing time.
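
The geometric core of this idea can be sketched with the standard pinhole-camera model: a person of fixed real-world height H at depth Z projects to roughly f·H/Z pixels. The focal length and human height below are illustrative assumptions, not values from the paper.

```python
def projected_human_height_px(depth_m, human_height_m=1.7, focal_px=500.0):
    """Pinhole-camera estimate of a person's on-image height in pixels.

    A person of fixed real height appears smaller with distance:
    h_px = f * H / Z. The parameter values are illustrative only.
    """
    if depth_m <= 0:
        raise ValueError("depth must be positive")
    return focal_px * human_height_m / depth_m

# A detector can then restrict its sliding-window scales per image region:
for z in (2.0, 4.0, 8.0):
    print(f"depth {z} m -> ~{projected_human_height_px(z):.0f} px tall")
```

Restricting the detector to the predicted scale at each image location is what removes the cost of scanning all window sizes.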

3D Information based Visualization System for Real-Time Teleoperation of Unmanned Ground Vehicles (무인 지상 로봇의 실시간 원격 제어를 위한 3차원 시각화 시스템)

  • Jang, Ga-Ram; Bae, Ji-Hun; Lee, Dong-Hyuk; Park, Jae-Han
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.220-229 / 2018
  • In disaster sites such as earthquake zones or areas of nuclear radiation exposure, sending human crews carries huge risks. Many robotics researchers have therefore studied sending unmanned ground vehicles (UGVs) to replace human crews in dangerous environments. So far, two-dimensional camera information has been widely used for teleoperation of UGVs. Recently, teleoperation based on three-dimensional information has been attempted to compensate for the limitations of camera-based teleoperation. In this paper, 3D map information of indoor and outdoor environments, reconstructed in real time, is utilized in UGV teleoperation. Further, we apply LTE communication technology to ensure the stability of teleoperation even in deteriorated network environments. The proposed teleoperation system was deployed on explosive-disposal missions, and its feasibility was verified by completing those missions using the UGV together with the Explosive Ordnance Disposal (EOD) team of Busan Port Security Corporation.

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee; Yoo, Sae-Woung; Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect®, an RGB-depth camera, to implementing 3D images and spatial information for sensing a target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters, describing the relationship between the two Kinect cameras, consist of a rotation matrix and a translation vector. The 2D projection-space images are converted to 3D images, resulting in spatial information based on the depth and RGB information. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
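
The depth-to-3D conversion the paper relies on follows the standard pinhole back-projection using the calibrated intrinsics. A minimal sketch (the focal lengths and principal point below are placeholder values, not the paper's calibration results):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a depth-image pixel (u, v) with depth Z into a 3D point in
    the camera frame, using intrinsics from checkerboard calibration:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps onto the optical axis:
print(backproject(320, 240, 1.0, 525.0, 525.0, 320.0, 240.0))
```

Applying this per pixel, with RGB attached to each point, yields the colored 3D spatial information described in the abstract; lens distortion correction is omitted here for brevity.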

High Speed Self-Adaptive Algorithms for Implementation in a 3-D Vision Sensor (3-D 비젼센서를 위한 고속 자동선택 알고리즘)

  • Miche, Pierre; Bensrhair, Abdelaziz; Lee, Sang-Goog
    • Journal of Sensor Science and Technology / v.6 no.2 / pp.123-130 / 1997
  • In this paper, we present an original stereo vision system which comprises two processes: 1. an image segmentation algorithm based on a new concept called declivity, using automatically selected thresholds; 2. a new stereo matching algorithm based on an optimal path search. This path is obtained by a dynamic programming method which uses the threshold values calculated during the segmentation process. At present, a complete depth map of an indoor scene takes only about 3 s on a Sun IPX workstation, and this time should be reduced to a few tenths of a second on a specialized architecture based on several DSPs which is currently under consideration.
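
The optimal-path search can be illustrated with a minimal scanline dynamic program. This is a generic sketch only: the fixed occlusion cost and absolute-difference dissimilarity below are simplifications, not the paper's declivity-based costs or segmentation-derived thresholds.

```python
def dp_scanline_match(left, right, occlusion_cost=10):
    """Match two stereo scanlines (lists of intensities) by dynamic
    programming. Each cell cost[i][j] is the minimal cost of aligning
    the first i left pixels with the first j right pixels; moves are
    match (diagonal) or skip/occlusion (horizontal/vertical).
    Returns the minimal total matching cost."""
    n, m = len(left), len(right)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            c = cost[i][j]
            if c == INF:
                continue
            if i < n and j < m:  # match left[i] with right[j]
                cost[i + 1][j + 1] = min(cost[i + 1][j + 1],
                                         c + abs(left[i] - right[j]))
            if i < n:            # left pixel occluded
                cost[i + 1][j] = min(cost[i + 1][j], c + occlusion_cost)
            if j < m:            # right pixel occluded
                cost[i][j + 1] = min(cost[i][j + 1], c + occlusion_cost)
    return cost[n][m]
```

Backtracking the chosen moves (omitted here) yields the disparity along the scanline, from which depth follows.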


SLAM with Visually Salient Line Features in Indoor Hallway Environments (실내 복도 환경에서 선분 특징점을 이용한 비전 기반의 지도 작성 및 위치 인식)

  • An, Su-Yong; Kang, Jeong-Gwan; Lee, Lae-Kyeong; Oh, Se-Young
    • Journal of Institute of Control, Robotics and Systems / v.16 no.1 / pp.40-47 / 2010
  • This paper presents simultaneous localization and mapping (SLAM) of an indoor hallway environment using a Rao-Blackwellized particle filter (RBPF) with line segments as landmarks. Based on the fact that abundant line features can be extracted around the ceiling and side walls of a hallway using a vision sensor, a horizontal line segment is extracted from an edge image using the Hough transform and tracked continuously by an optical flow method. Successive observations of a line segment give the initial state of the line in 3D space. For data association, a registered feature and an observed feature are matched in image space through the degree of overlap, the orientation of the lines, and the distance between them. Experiments show that a compact environmental map can be constructed in real time with a small number of horizontal line features.
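
The data-association test between a registered and an observed line segment can be sketched as a pair of gating checks. The thresholds, the midpoint-distance simplification, and the omission of the overlap criterion are assumptions for illustration, not the paper's exact formulation.

```python
import math

def lines_match(seg_a, seg_b, max_angle_deg=10.0, max_dist=5.0):
    """Illustrative association test between two image line segments
    ((x1, y1), (x2, y2)): accept when orientations are similar and the
    midpoints are close. Thresholds are placeholder values."""
    def angle(seg):
        (x1, y1), (x2, y2) = seg
        return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

    def midpoint(seg):
        (x1, y1), (x2, y2) = seg
        return ((x1 + x2) / 2, (y1 + y2) / 2)

    da = abs(angle(seg_a) - angle(seg_b))
    da = min(da, 180.0 - da)  # orientation difference is mod 180 degrees
    (ax, ay), (bx, by) = midpoint(seg_a), midpoint(seg_b)
    return da <= max_angle_deg and math.hypot(ax - bx, ay - by) <= max_dist
```

Matched pairs update the landmark estimate inside the RBPF; unmatched observations spawn new landmark candidates.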

Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan; Jalal, Ahmad; Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.5 / pp.1856-1869 / 2015
  • This paper addresses 3D human activity detection, tracking, and recognition from RGB-D video sequences using a structured feature framework. Initially, dense depth images are captured using a depth camera. To track human silhouettes, we consider spatial/temporal continuity and constraints on human motion, and compute the centroid of each activity through a chain-coding mechanism and centroid point extraction. For body skin-joint features, we estimate human skin color to identify body parts (i.e., head, hands, and feet) and extract joint-point information. These joint points are further processed into features, including distance-position features and centroid-distance features. Lastly, self-organizing maps are used to recognize different activities. Experimental results demonstrate that the proposed method is reliable and efficient in recognizing human poses in different realistic scenes. The proposed system is applicable to consumer applications such as healthcare, video surveillance, and indoor monitoring systems which track and recognize the activities of multiple users.
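
The final self-organizing-map stage reduces to a simple rule: each training sample pulls its best-matching unit, and that unit's neighbors, toward itself. The tiny 1-D map over scalar features below is purely illustrative; the paper's maps operate on the multi-dimensional joint features, and all hyper-parameters here are assumptions.

```python
import math
import random

def train_som(data, n_nodes=4, epochs=50, lr=0.5, seed=0):
    """Train a minimal 1-D self-organizing map on scalar samples.

    For each sample: find the best-matching unit (BMU), then move the
    BMU and its neighbors toward the sample, weighted by a Gaussian of
    topological distance. The neighborhood radius shrinks over epochs.
    Returns the learned node weights, sorted."""
    rng = random.Random(seed)
    nodes = [rng.random() for _ in range(n_nodes)]
    for epoch in range(epochs):
        radius = max(1.0, n_nodes / 2 * (1 - epoch / epochs))
        for x in data:
            bmu = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
            for i in range(n_nodes):
                d = abs(i - bmu)
                if d <= radius:
                    h = math.exp(-d * d / (2 * radius * radius))
                    nodes[i] += lr * h * (x - nodes[i])
    return sorted(nodes)
```

At recognition time, an activity's feature vector is assigned the label of its best-matching unit.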

Reliable Autonomous Reconnaissance System for a Tracked Robot in Multi-floor Indoor Environments with Stairs (다층 실내 환경에서 계단 극복이 가능한 궤도형 로봇의 신뢰성 있는 자율 주행 정찰 시스템)

  • Juhyeong Roh; Boseong Kim; Dokyeong Kim; Jihyeok Kim; D. Hyunchul Shim
    • The Journal of Korea Robotics Society / v.19 no.2 / pp.149-158 / 2024
  • This paper presents a robust autonomous navigation and reconnaissance system for tracked robots, designed to handle complex multi-floor indoor environments with stairs. We introduce a localization algorithm that adjusts scan-matching parameters to robustly estimate positions and create maps in feature-scarce environments such as narrow rooms and staircases. Our system also features a path-planning algorithm that calculates distance costs from surrounding obstacles, integrated with a specialized PID controller tuned to the robot's differential kinematics for collision-free navigation in confined spaces. The perception module leverages multi-image fusion and camera-LiDAR fusion to accurately detect and map the 3D positions of objects around the robot in real time. Through practical tests in real settings, we have verified that our system performs reliably, and we expect it to be of practical use in actual disaster situations and in environments that are difficult for humans to access.
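
The controller named above is the textbook PID form; a minimal sketch follows. The gains, like the paper's tuning to the tracked robot's differential kinematics, are placeholders.

```python
class PID:
    """Minimal discrete PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, error, dt):
        """Advance one control step of length dt and return the command."""
        self.integral += error * dt
        deriv = 0.0 if self.prev_err is None else (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

For a differential-drive robot, one such loop typically commands heading (angular velocity) from the cross-track or heading error while forward speed is scheduled separately.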

Probabilistic Object Recognition in a Sequence of 3D Images (연속된 3차원 영상에서의 통계적 물체인식)

  • Jang Dae-Sik; Rhee Yang-Won; Sheng Guo-Rui
    • KSCI Review / v.14 no.1 / pp.241-248 / 2006
  • Recognition of relatively big and rarely moved objects, such as refrigerators and air conditioners, is necessary because these objects can serve as crucial, globally stable features for simultaneous localization and mapping (SLAM) in indoor environments. In this paper, we propose a novel method to recognize such big objects using a sequence of 3D scenes. Particles representing the object to be recognized are scattered over the environment, and the probability of each particle is then calculated by a matching test against the 3D lines of the environment. Based on the probability and the degree of convergence of the particles, we can recognize the object in the environment, and the pose of the object is also estimated. The experimental results show the feasibility of incremental object recognition based on particle filtering and its application to SLAM.
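
The scatter-weight-converge loop can be sketched in one function. Everything here is an illustrative abstraction: poses are reduced to one dimension, the 3D-line matching test is abstracted into a caller-supplied score function, and the survivor and convergence thresholds are invented for the sketch.

```python
import random
import statistics

def recognize_by_particles(score_fn, n_particles=500, threshold=0.5, seed=1):
    """Scatter candidate object poses over a 1-D corridor, weight each by
    a matching score (score_fn stands in for the 3D-line matching test),
    and declare recognition only when enough high-scoring particles
    remain and they have converged. Returns the estimated pose or None."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 10.0) for _ in range(n_particles)]
    survivors = [p for p in particles if score_fn(p) > threshold]
    if len(survivors) < max(1, int(n_particles * 0.02)):
        return None  # too few matches: object not recognized
    pose = statistics.mean(survivors)
    spread = statistics.pstdev(survivors)
    return pose if spread < 1.0 else None  # require convergence
```

In the incremental setting of the paper, the surviving particles would instead be resampled and re-weighted as each new 3D scene arrives.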


The Implementation of Information Providing Method System for Indoor Area by using the Immersive Media's Video Information (실감미디어 동영상정보를 이용한 실내 공간 정보 제공 시스템 구현)

  • Lee, Sangyoon; Ahn, Heuihak
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.3 / pp.157-166 / 2016
  • This paper presents indoor spatial information using 6D 360-degree immersive media video. We implement augmented reality that provides a variety of information, such as position and movement information, for specific indoor locations where GPS signals do not reach. Augmented reality based on the 6D 360-degree immersive media video provides position information and 3D spatial image information to identify the exact location of a user, for moving objects as well as in fixed indoor spaces. This paper constructs a 3D image database based on the 6D 360-degree immersive media video and provides an augmented reality service on top of it. By mapping various information onto the immersive media video, users can inspect a plant in an environment identical to the actual one. The system offers augmented reality services for emergency escape and repairs to passengers and employees.

Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi; In Cheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.9 / pp.407-418 / 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task suffer from the limitation that they cannot utilize an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, we propose a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. Moreover, the model effectively fuses these three heterogeneous feature types into a global multimodal context map using a point-wise convolutional neural network module. Lastly, it adopts an auxiliary task learning module that predicts the observation status, goal direction, and goal distance, which guides efficient learning of the navigation policy. Through various quantitative and qualitative experiments in the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model.
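
A point-wise (1x1) convolution mixes the channels of each map cell independently, which is how per-cell features from several modalities can be fused without spatial mixing. The toy single-output-channel version below is illustrative only; the actual model learns many such filters over high-dimensional feature maps.

```python
def pointwise_conv(features, weights, bias):
    """Apply a 1x1 convolution with one output channel to an H x W x C
    feature map (nested lists). Each output cell is a weighted sum of
    that cell's C input channels plus a bias, so channels are fused
    per cell with no spatial mixing."""
    height = len(features)
    width = len(features[0])
    channels = len(features[0][0])
    assert len(weights) == channels
    return [[bias + sum(features[i][j][k] * weights[k] for k in range(channels))
             for j in range(width)]
            for i in range(height)]
```

Stacking the visual, semantic, and goal feature maps channel-wise and applying a bank of such filters yields the fused multimodal context map the abstract describes.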