• Title/Summary/Keyword: View Finder

Multiple Target Tracking and Forward Velocity Control for Collision Avoidance of Autonomous Mobile Robot (실외 자율주행 로봇을 위한 다수의 동적 장애물 탐지 및 선속도 기반 장애물 회피기법 개발)

  • Kim, Sun-Do;Roh, Chi-Won;Kang, Yeon-Sik;Kang, Sung-Chul;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems, v.14 no.7, pp.635-641, 2008
  • In this paper, we use a laser range finder (LRF) to detect both static and dynamic obstacles for the safe navigation of a mobile robot. The LRF measurements, which contain information about the obstacles' geometry, are first processed to extract characteristic points of the obstacles in the sensor's field of view. The dynamic states of these characteristic points are then approximated with a kinematic model and tracked by associating the measurements using a Probabilistic Data Association Filter. Finally, the collision avoidance algorithm is developed using a fuzzy decision-making algorithm that depends on the states of the obstacles tracked by the proposed obstacle tracking algorithm. The performance of the proposed algorithm is evaluated through experiments with an experimental mobile robot.
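
The abstract only names its building blocks, so the following is a minimal sketch, not the authors' implementation, of tracking one characteristic point with a constant-velocity kinematic model and a heavily simplified probabilistic data association update; the state layout, noise levels, detection probability, and gate threshold are all illustrative assumptions.

    import numpy as np

    def predict(x, P, dt, q=0.5):
        """Constant-velocity prediction for the state x = [px, py, vx, vy]."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], float)
        Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])    # assumed process noise
        return F @ x, F @ P @ F.T + Q

    def pda_update(x, P, zs, r=0.05, p_detect=0.9, gate=9.0):
        """Blend all gated LRF point measurements zs (list of 2-vectors)."""
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # position is measured
        S = H @ P @ H.T + r * np.eye(2)
        K = P @ H.T @ np.linalg.inv(S)
        nus = [z - H @ x for z in zs]
        nus = [v for v in nus if v @ np.linalg.solve(S, v) < gate]  # gating
        if not nus:
            return x, P                                    # missed detection
        w = np.array([np.exp(-0.5 * v @ np.linalg.solve(S, v)) for v in nus])
        w = p_detect * w / (w.sum() + (1 - p_detect))      # crude association weights
        nu = sum(wi * vi for wi, vi in zip(w, nus))        # combined innovation
        return x + K @ nu, P - w.sum() * K @ S @ K.T       # spread term omitted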

Object Recognition-based Global Localization for Mobile Robots (이동로봇의 물체인식 기반 전역적 자기위치 추정)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society, v.3 no.1, pp.33-41, 2008
  • Based on object recognition technology, we present a new global localization method for robot navigation. To do this, we model an indoor environment using the following visual cues obtained with a stereo camera: view-based image features for object recognition and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image, where the optical axis passes through, which is similar to the data from a 2D laser range finder. This allows us to build a hybrid local node for a topological map composed of a metric map of the indoor environment and an object location map. Based on such modeling, we suggest a coarse-to-fine strategy for estimating the global localization of a mobile robot. The coarse pose is obtained by means of object recognition and SVD-based least-squares fitting, and the refined pose is then estimated with a particle filtering algorithm. Real experiments show that the proposed method can be an effective vision-based global localization algorithm.
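
As a rough illustration of the SVD-based least-squares fitting step (not the paper's code), the sketch below estimates the rigid transform between 3D feature points stored in the map and their matches observed by the stereo camera; the correspondences are assumed to come from the object recognition stage, and the resulting coarse pose would then be refined by the particle filter.

    import numpy as np

    def fit_rigid_transform(map_pts, obs_pts):
        """Least-squares rigid transform (R, t) mapping obs_pts onto map_pts.

        map_pts, obs_pts: (N, 3) arrays of matched 3D points, N >= 3.
        """
        mu_m, mu_o = map_pts.mean(axis=0), obs_pts.mean(axis=0)
        H = (obs_pts - mu_o).T @ (map_pts - mu_m)                    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_o
        return R, t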

DEVELOPMENT OF 3-D POSITION DETECTING TECHNIQUE BY PAN/TILT

  • Son, J.R.;Kang, C.H.;Han, K.S.;Jung, S.R.;Kwon, K.Y.
    • Proceedings of the Korean Society for Agricultural Machinery Conference, 2000.11c, pp.698-706, 2000
  • It is very difficult to mechanize tomato harvesting because identifying a tomato partly covered with leaves and stalks is not easy. This research was conducted to develop a tomato harvesting robot that can identify a target tomato, determine its three-dimensional position, and harvest it within a limited time. The following were the major findings of this study. The first visual system of the harvesting robot was composed of two CCD cameras; however, it could not detect tomatoes that were not visible in the view finder of the camera, especially those partly covered by leaves or stalks. The second visual device, which combined two CCD cameras with pan/tilt procedures, was designed to keep the positioning errors within ±10 mm, but this was still not enough to detect tomatoes partly covered with leaves. Finally, a laser distance detector was added to the visual system, which reduced the position detecting errors to within 10 mm in the X-Y direction and 5 mm in the Z direction for the partly covered tomatoes.
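
For illustration only (the authors' hardware and calibration are not reproduced), the sketch below shows how a pan angle, a tilt angle, and a laser-measured range are typically combined into a 3D target position in the sensor frame; the angle conventions and axis orientation are assumptions.

    import math

    def pan_tilt_range_to_xyz(pan_deg, tilt_deg, range_mm):
        """Convert pan/tilt angles and a laser range into Cartesian X, Y, Z (mm).

        Assumes pan rotates about the vertical axis (0 deg = straight ahead)
        and tilt is the elevation above the horizontal plane.
        """
        pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
        x = range_mm * math.cos(tilt) * math.cos(pan)   # forward
        y = range_mm * math.cos(tilt) * math.sin(pan)   # lateral
        z = range_mm * math.sin(tilt)                   # vertical
        return x, y, z

    # Example: a tomato 600 mm away, 15 deg to the right and 10 deg below the axis
    print(pan_tilt_range_to_xyz(-15.0, -10.0, 600.0))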

Development of Precise Localization System for Autonomous Mobile Robots using Multiple Ultrasonic Transmitters and Receivers in Indoor Environments (다수의 초음파 송수신기를 이용한 이동 로봇의 정밀 실내 위치인식 시스템의 개발)

  • Kim, Yong-Hwi;Song, Ui-Kyu;Kim, Byung-Kook
    • Journal of Institute of Control, Robotics and Systems, v.17 no.4, pp.353-361, 2011
  • A precise embedded ultrasonic localization system is developed for autonomous mobile robots in indoor environments, which is essential for the autonomous navigation of mobile robots performing various tasks. Although ultrasonic sensors are more cost-effective than sensors such as an LRF (Laser Range Finder) or vision, they suffer from inaccuracy and directional ambiguity. First, we apply a matched filter to measure distance precisely. To resolve the computational complexity of the matched filter on embedded systems, we propose a new matched filter algorithm that speeds up computation in three respects. Second, we propose an accurate ultrasonic localization system that consists of three ultrasonic receivers on the mobile robot and two or more transmitters on the ceiling. Finally, we add an extended Kalman filter to estimate position and orientation. Various simulations and experimental results show the effectiveness of the proposed system.
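
A minimal sketch of the matched-filter idea named in the abstract: the receiver signal is cross-correlated with a replica of the transmitted pulse, and the correlation peak gives the time of flight and hence the distance. The sampling rate, pulse shape, and speed of sound below are illustrative assumptions, and the fast embedded variant the authors propose is not reproduced.

    import numpy as np

    def matched_filter_distance(rx, tx_pulse, fs_hz=200_000, speed_of_sound=343.0):
        """Estimate the one-way distance (m) from a sampled ultrasonic signal.

        rx:       receiver samples, starting at the transmit instant
        tx_pulse: replica of the transmitted pulse
        """
        corr = np.correlate(rx, tx_pulse, mode="valid")   # matched filtering
        tof = int(np.argmax(np.abs(corr))) / fs_hz        # peak = time of flight
        return tof * speed_of_sound

    # Synthetic check: a 1 ms, 40 kHz pulse arriving after ~2.33 ms (~0.8 m)
    fs = 200_000
    pulse = np.sin(2 * np.pi * 40_000 * np.arange(0, 0.001, 1 / fs))
    rx = np.zeros(2000)
    delay = int(0.8 / 343.0 * fs)
    rx[delay:delay + pulse.size] += pulse
    print(round(matched_filter_distance(rx, pulse, fs), 3))  # ≈ 0.8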

Human Assisted Fitting and Matching Primitive Objects to Sparse Point Clouds for Rapid Workspace Modeling in Construction Automation (-건설현장에서의 시공 자동화를 위한 Laser Sensor기반의 Workspace Modeling 방법에 관한 연구-)

  • Kwon, Soon-Wook
    • Korean Journal of Construction Engineering and Management, v.5 no.5 s.21, pp.151-162, 2004
  • Current methods for construction site modeling employ large, expensive laser range scanners that produce dense range point clouds of a scene from different perspectives. Days of skilled interpretation and automatic segmentation may be required to convert the clouds into a finished CAD model. The dynamic nature of the construction environment requires that a real-time local area modeling system be capable of handling a rapidly changing and uncertain work environment. In practice, however, large, simple, and reasonably accurate bounding volumes are adequate feedback for an operator who, for instance, is attempting to place materials in the midst of obstacles with an occluded view. For real-time obstacle avoidance and automated equipment control functions, such volumes also facilitate computational tractability. In this research, a human operator's ability to quickly evaluate and associate objects in a scene is exploited. The operator directs a laser range finder mounted on a pan-and-tilt unit to collect range points on objects throughout the workspace. These groups of points form sparse range point clouds, which are then used to create geometric primitives for visualization and modeling purposes. Experimental results indicate that these models can be created rapidly and with sufficient accuracy for automated obstacle avoidance and equipment control functions.
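
As a hedged illustration of the primitive-fitting idea (the paper's human-assisted workflow and primitive library are not reproduced), the sketch below fits the simplest possible bounding volume, an axis-aligned box, to a sparse range point cloud, padded slightly so the volume stays conservative for obstacle avoidance.

    import numpy as np

    def fit_bounding_box(points, padding=0.05):
        """Fit an axis-aligned bounding box to sparse range points (N x 3, metres).

        Returns (min_corner, max_corner), grown by `padding` on every side.
        """
        pts = np.asarray(points, dtype=float)
        return pts.min(axis=0) - padding, pts.max(axis=0) + padding

    # A handful of range points collected on a pallet-sized obstacle
    cloud = np.array([[1.2, 0.4, 0.0], [1.9, 0.5, 0.1],
                      [1.4, 1.1, 0.8], [1.8, 0.9, 0.6]])
    print(fit_bounding_box(cloud))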

The New Definition of Creative Leadership in the Communication Design Industry - Focused on the 4th Industrial Revolution

  • Kim, Kyung-won
    • International Journal of Contents, v.15 no.2, pp.53-58, 2019
  • The aim of this paper is to discuss how designers lead and direct a 'technology-driven society' using their creative communication skills. To this end, communication designers are required to take conscious steps to recognize the future direction of their profession. Despite the advancement of technology, there is a human being at the center of all design activities. From a certain point of view, contemporary communication design is an open-ended exploration of the subject matter rather than a finished output. The notion of creative leadership may expand further in terms of improving the methodology of today's visual culture. The paper examines the creative leadership that could be proposed in response to the discourse on the upcoming industrial revolution. Today, communication designers are confronted by new leadership opportunities and challenges. Some leading designers seem to focus on brand-new media technologies to prepare for the 4th industrial revolution. However, communication design cannot be discussed in terms of the medium alone; it can be understood as a process. The top-down and bottom-up processes are always concerned with relationships, since the focus of leadership has changed. In the top-down process, leadership has existed between 'designer and client', because designers have played the role of problem solver. On the other hand, there is a different model of leadership between 'design and technology' based on the bottom-up process, which stems from design authorship. In this regard, the new definition of creative leadership in the 4th industrial revolution proposes the designer as a problem-finder based on the relationship between the 'designer and the public'.

Seam Finding Algorithm using the Brightness Difference Between Pictures in 360 VR (360 VR을 구성하는 영상들 간 밝기 차이를 이용한 seam finding 알고리즘)

  • Nam, Da-yoon;Han, Jong-Ki
    • Journal of Broadcast Engineering, v.23 no.6, pp.896-913, 2018
  • The seam finding algorithm is one of the most important techniques for constructing a high-quality 360 VR image. We found that degradations such as ghost effects are generated when conventional seam finding algorithms (for example, the Voronoi, Dynamic Programming, and Graph Cut algorithms) are applied, because they produce inefficient masks that cross the bodies of the main objects. In this paper, we propose an advanced seam finding algorithm that provides efficient masks passing through the background region instead of the bodies of objects. Simulation results show that the proposed algorithm outperforms the conventional techniques in terms of the quality of the stitched image.
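
The abstract does not spell out its cost function, so the following is only a generic sketch of dynamic-programming seam finding driven by the per-pixel brightness difference between two overlapping pictures; it is not the paper's proposed mask-generation method.

    import numpy as np

    def dp_seam(cost):
        """Return the column of a minimum-cost top-to-bottom seam for each row."""
        h, w = cost.shape
        acc = cost.astype(float).copy()
        for y in range(1, h):
            left = np.r_[np.inf, acc[y - 1, :-1]]
            right = np.r_[acc[y - 1, 1:], np.inf]
            acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
        seam = [int(np.argmin(acc[-1]))]                 # cheapest bottom cell
        for y in range(h - 2, -1, -1):                   # backtrack upwards
            x = seam[-1]
            lo = max(0, x - 1)
            seam.append(lo + int(np.argmin(acc[y, lo:min(w, x + 2)])))
        return seam[::-1]

    def seam_from_brightness(img_a, img_b):
        """Place the seam where the two overlapping images differ least."""
        diff = np.abs(img_a.astype(float) - img_b.astype(float))
        return dp_seam(diff)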

REAL-TIME 3D MODELING FOR ACCELERATED AND SAFER CONSTRUCTION USING EMERGING TECHNOLOGY

  • Jochen Teizer;Changwan Kim;Frederic Bosche;Carlos H. Caldas;Carl T. Haas
    • International conference on construction engineering and project management, 2005.10a, pp.539-543, 2005
  • The research presented in this paper enables real-time 3D modeling to help make construction processes ultimately faster, more predictable, and safer. Initial research efforts used an emerging sensor technology and proved its usefulness in acquiring range information for the detection and efficient representation of static and moving objects. Based on the time-of-flight principle, the sensor acquires range and intensity information for each image pixel within the sensor's entire field of view in real time at frequencies of up to 30 Hz. However, real-time range data processing algorithms need to be developed to rapidly convert range information into meaningful 3D computer models. This research ultimately focuses on the application of safer heavy equipment operation. The paper compares (a) a previous research effort in convex hull modeling using sparse range point clouds from a single-laser-beam range finder with (b) high-frame-rate Flash LADAR (Laser Detection and Ranging) scanning for complete scene modeling. The presented research will demonstrate whether Flash LADAR technology can play an important role in real-time modeling of infrastructure assets in the near future.
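
As a hedged sketch of the kind of real-time range-data processing the abstract calls for (not the authors' algorithm), the code below back-projects one Flash LADAR range frame into a 3D point cloud; the pinhole model, focal length, and frame size are placeholder assumptions.

    import numpy as np

    def range_image_to_points(range_img, focal_px=200.0):
        """Back-project a range image (H x W, metres) into an H x W x 3 point cloud.

        Assumes a pinhole model with the principal point at the image centre and
        treats each pixel's range as the distance along its viewing ray.
        """
        h, w = range_img.shape
        v, u = np.mgrid[0:h, 0:w]
        rays = np.dstack([(u - w / 2.0) / focal_px,
                          (v - h / 2.0) / focal_px,
                          np.ones((h, w))])
        rays /= np.linalg.norm(rays, axis=2, keepdims=True)
        return rays * range_img[..., None]

    # A 30 Hz sensor would hand one such frame to the modeling step every ~33 ms.
    frame = np.full((64, 64), 5.0)              # synthetic flat scene 5 m away
    print(range_image_to_points(frame).shape)   # (64, 64, 3)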

A 2D / 3D Map Modeling of Indoor Environment (실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법)

  • Jo, Sang-Woo;Park, Jin-Woo;Kwon, Yong-Moo;Ahn, Sang-Chul
    • Proceedings of the HCI Society of Korea Conference, 2006.02a, pp.355-361, 2006
  • In large-scale environments such as airports, museums, large warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will provide surveyed information about such large-scale environments and communicate it to a human operator, for example whether an object is present or a window is open. Both for visualization of information and as a human-machine interface for remote control, a 3D model can give much more useful information than the typical 2D maps used in many robotic applications today. It is easier to understand, makes the user feel as if they were at the robot's location so that they can interact with the robot more naturally in a remote setting, and shows structures such as windows and doors that cannot be seen in a 2D model. In this paper we present a simple and easy-to-use method for obtaining a 3D textured model. To express reality, we need to integrate the 3D model with real scenes. Most other 3D modeling methods rely on two data acquisition devices: one for building the 3D model and another for obtaining realistic textures. In such cases, the former device is typically a 2D laser range finder and the latter a common camera. Our algorithm consists of building a measurement-based 2D metric map acquired by the laser range finder, texture acquisition and stitching, and texture mapping onto the corresponding 3D model. The algorithm is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering textures. Our geometric 3D model consists of planes that model the floor and walls, and the geometry of the planes is extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two 1394 cameras with wide fields of view. Image stitching and image cutting are used to generate textured images corresponding to the 3D model. The algorithm is applied to two cases: a corridor and a four-walled, room-like space of a building. The generated 3D map model of the indoor environment is provided in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the WWW.
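
To illustrate the plane-based geometry step in the simplest terms (a toy sketch, not the paper's implementation), the code below extrudes 2D wall segments taken from a metric map into vertical wall quads of an assumed ceiling height; the quads could then be texture-mapped and exported, for example to VRML.

    def extrude_walls(segments_2d, wall_height=2.5):
        """Turn 2D wall segments ((x1, y1), (x2, y2)) from a metric map into
        3D quads: four (x, y, z) corners per wall, with the floor at z = 0."""
        quads = []
        for (x1, y1), (x2, y2) in segments_2d:
            quads.append([(x1, y1, 0.0), (x2, y2, 0.0),
                          (x2, y2, wall_height), (x1, y1, wall_height)])
        return quads

    # Two perpendicular walls at a corridor corner (coordinates in metres)
    walls = [((0.0, 0.0), (6.0, 0.0)), ((6.0, 0.0), (6.0, 4.0))]
    for quad in extrude_walls(walls):
        print(quad)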
