• Title/Summary/Keyword: Vision modeling


3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.6_2 / pp.643-651 / 2012
  • Object recognition belongs to high-level processing, which is one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s. However, the ultimate goal of digital photogrammetry - intelligent and autonomous processing of surface reconstruction - has not yet been achieved. Object recognition requires a robust shape description of objects. However, most shape descriptors are designed for 2D image data. Therefore, such descriptors have to be extended to handle 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space with a hierarchical approach for segmenting point cloud data. The experiment demonstrates the effectiveness and robustness of the proposed method for shape description and point cloud data segmentation. The geometric characteristics of various roof types are well described, which will eventually serve as a basis for object modeling. The segmentation accuracy of the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries. The overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
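
As a purely illustrative companion to the abstract above, the sketch below shows one generic way a chain code can be extended to 3D object space: each step between consecutive points is quantized to the nearest of the 26 discrete neighbor directions. It is a minimal sketch under that assumption, not the authors' hierarchical segmentation method, and the sample roof-edge points are invented.

```python
import numpy as np

# 26-connectivity direction set: every non-zero combination of {-1, 0, 1}^3.
DIRECTIONS = np.array([(dx, dy, dz)
                       for dx in (-1, 0, 1)
                       for dy in (-1, 0, 1)
                       for dz in (-1, 0, 1)
                       if (dx, dy, dz) != (0, 0, 0)], dtype=float)
DIRECTIONS /= np.linalg.norm(DIRECTIONS, axis=1, keepdims=True)

def chain_code_3d(points):
    """Encode an ordered 3D point sequence as indices of the closest
    of the 26 discrete directions between consecutive points."""
    codes = []
    for p, q in zip(points[:-1], points[1:]):
        step = np.asarray(q, dtype=float) - np.asarray(p, dtype=float)
        n = np.linalg.norm(step)
        if n == 0:
            continue  # skip duplicate points
        codes.append(int(np.argmax(DIRECTIONS @ (step / n))))
    return codes

# Example: points climbing along one roof slope produce a constant code.
roof_edge = [(0, 0, 0), (1, 0, 0.5), (2, 0, 1.0), (3, 0, 1.5)]
print(chain_code_3d(roof_edge))
```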

Korean Wide Area Differential Global Positioning System Development Status and Preliminary Test Results

  • Yun, Ho;Kee, Chang-Don;Kim, Do-Yoon
    • International Journal of Aeronautical and Space Sciences / v.12 no.3 / pp.274-282 / 2011

Optical Flow Based Collision Avoidance of Multi-Rotor UAVs in Urban Environments

  • Yoo, Dong-Wan;Won, Dae-Yeon;Tahk, Min-Jea
    • International Journal of Aeronautical and Space Sciences / v.12 no.3 / pp.252-259 / 2011
  • This paper is focused on dynamic modeling and control system design, as well as vision-based collision avoidance, for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are defined as rotary-winged UAVs with multiple rotors. These multi-rotor UAVs can be utilized in various military situations such as surveillance and reconnaissance. They can also be used for obtaining visual information from steep terrains or disaster sites. In this paper, a quad-rotor model is introduced along with its control system, which is designed based on a proportional-integral-derivative (PID) controller and a vision-based collision avoidance control system. Additionally, in order for a UAV to navigate safely in areas such as buildings and offices with a number of obstacles, a collision avoidance algorithm must be installed in the UAV's hardware, which should include the detection of obstacles, avoidance maneuvering, etc. In this paper, the optical flow method, one of the vision-based collision avoidance techniques, is introduced, and multi-rotor UAV collision avoidance simulations in various virtual environments are described in order to demonstrate its avoidance performance.
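
The following is a minimal sketch of one common optical-flow avoidance strategy, steering away from the image half with larger average flow (nearer obstacles produce larger flow). It only illustrates the class of technique named in the abstract; the synthetic flow field, the gain, and the sign convention are assumptions, not the authors' controller.

```python
import numpy as np

def flow_balance(flow):
    """Left/right optical-flow balance in [-1, 1].

    flow: (H, W, 2) per-pixel image motion. Nearer obstacles yield larger
    flow magnitudes, so a positive balance means more flow on the left
    half of the image, i.e. the free space is on the right.
    """
    mag = np.linalg.norm(flow, axis=2)
    half = flow.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    return (left - right) / (left + right + 1e-9)

def yaw_rate_command(flow, k_yaw=0.8):
    # Steer away from the side with larger flow; the gain and the sign
    # convention (positive command = yaw to the right) are assumptions.
    return k_yaw * flow_balance(flow)

# Synthetic flow: a nearby obstacle on the left produces larger flow there,
# so the command is positive (yaw right, toward the free space).
flow = np.zeros((120, 160, 2))
flow[:, :80, 0] = 3.0   # strong horizontal flow on the left half
flow[:, 80:, 0] = 1.0   # weaker flow on the right half
print(yaw_rate_command(flow))
```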

Implementation of Path Finding Method using 3D Mapping for Autonomous Robotic (3차원 공간 맵핑을 통한 로봇의 경로 구현)

  • Son, Eun-Ho;Kim, Young-Chul;Chong, Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.14 no.2 / pp.168-177 / 2008
  • Path finding is a key element in the navigation of a mobile robot. To find a path, a robot should know its position exactly, since position error exposes the robot to many dangerous conditions: it could make the robot move in the wrong direction and be damaged by collisions with surrounding obstacles. We propose a method for obtaining an accurate robot position. The localization of a mobile robot in its working environment is performed using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks located in the environment, and image processing and neural-network pattern-matching techniques are applied to find the location of the robot. After the self-positioning procedure, the 2D scene from the vision system is overlaid onto a VRML scene. This paper describes how to realize the self-positioning and shows the overlay between the 2D and VRML scenes. The suggested method defines a robot's path successfully. An experiment applying the suggested algorithm to a mobile robot has been performed, and the result shows good path tracking.
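
The abstract focuses on the self-positioning step, so the sketch below only illustrates the path-finding side generically, with A* search on a 2D occupancy grid; it is not the paper's VRML- and vision-based implementation, and the grid, start, and goal are invented.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 2D occupancy grid (0 = free, 1 = obstacle).

    Returns the list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]
    g_cost = {start: 0}
    came_from = {start: None}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g_cost[cur] + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g_cost[cur] + 1
                came_from[nxt] = cur
                heapq.heappush(open_set, (g_cost[nxt] + h(nxt), nxt))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```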

Interface Modeling for Digital Device Control According to Disability Type in Web

  • Park, Joo Hyun;Lee, Jongwoo;Lim, Soon-Bum
    • Journal of Multimedia Information System / v.7 no.4 / pp.249-256 / 2020
  • Learning methods using various assistive and smart devices have been developed to enable independent learning by people with disabilities. Pointer control is the most important consideration for people with disabilities when controlling a device and the contents of an existing graphical user interface (GUI) environment; however, difficulties can be encountered when using a pointer, depending on the disability type. Although there are individual differences among people who are blind, have low vision, or have an upper-limb disability, problems commonly arise in the accuracy of object selection and execution. A multimodal interface pilot solution is presented that enables people with various disability types to control web interactions more easily. First, we classify web interaction types using digital devices and derive the essential web interactions among them. Second, to solve the problems that occur when performing web interactions for each disability type, the necessary technology according to the characteristics of that type is presented. Finally, a pilot solution for the multimodal interface for each disability type is proposed. We identified three disability types and developed solutions for each type: a remote-control voice interface for blind people, a voice output interface applying a selective focusing technique for people with low vision, and a gaze-tracking and voice-command interface for GUI operations for people with an upper-limb disability.
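
As a loose illustration only, a voice-command interface like the ones described might map recognized commands to a small set of essential web interactions; the command vocabulary and action descriptions below are hypothetical and are not taken from the paper.

```python
# Hypothetical mapping from spoken commands to essential web interactions.
# The vocabulary and action set are illustrative, not the paper's taxonomy.
ESSENTIAL_INTERACTIONS = {
    "click": "activate the focused element",
    "next": "move focus to the next focusable element",
    "previous": "move focus to the previous focusable element",
    "scroll down": "scroll the page down one viewport",
    "read": "speak the text of the focused element aloud",
}

def dispatch(command: str) -> str:
    """Resolve a recognized voice command to a web interaction, or ask again."""
    action = ESSENTIAL_INTERACTIONS.get(command.strip().lower())
    return action if action else "command not recognized; please repeat"

print(dispatch("Next"))       # move focus to the next focusable element
print(dispatch("zoom in"))    # command not recognized; please repeat
```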

Charter School Principals' Perception on Transformational Leadership Practices

  • Lee, In-Hoi
    • International Journal of Contents / v.7 no.4 / pp.70-76 / 2011
  • The purpose of this study was to investigate charter school principals' perceptions of transformational leadership practices in New York State. The data-generating sample consisted of 44 charter school principals. Descriptive statistics and multiple regression were employed to analyze the data. The results were as follows: first, the transformational leadership practices of charter school principals were in the moderate to high categories, and the greatest gap was in the Inspiring a Shared Vision leadership practice. Second, there were no statistically significant relationships between the leadership practices and the demographic variables of gender, age, ethnicity, and level of education of principals. However, a positive relationship was found between both the Modeling the Way and Encouraging the Heart leadership practices and the educational level of charter school principals. Third, there was a significant relationship between the Inspiring a Shared Vision leadership practice of charter school principals and prior experience as a school principal.

Optimization of Finite Element Retina by GA for Plant Growth Neuro Modeling

  • Murase, H.
    • Agricultural and Biosystems Engineering / v.1 no.1 / pp.22-29 / 2000
  • The development of a bio-response feedback control system, known as the speaking plant approach, has been a challenging task for plant production engineers and scientists. In order to achieve the aim of developing such a bio-response feedback control system, the primary concern should be to develop a practical, non-invasive technique for monitoring plant growth. Those who are skilled in raising plants can sense whether or not their plants are under adequate water conditions, for example, by merely observing minor color and tone changes before the plants wilt. Consequently, using machine vision, it may be possible to recognize changes in indices that describe plant conditions based on the appearance of growing plants. The interpretation of image information about plants may be based on image features extracted from the original pictorial image. In this study, the performance of a finite element retina was optimized by a genetic algorithm. The optimized finite element retina was evaluated based on the performance of a neural plant growth monitor that takes its input data from the finite element retina.
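
To make the optimization machinery concrete, here is a generic genetic-algorithm loop (tournament selection, uniform crossover, Gaussian mutation) of the kind the abstract names. The chromosome encoding and the fitness function standing in for the finite-element-retina evaluation are placeholders, not the paper's formulation.

```python
import random

def genetic_optimize(fitness, n_genes, pop_size=40, generations=100,
                     mutation_rate=0.1):
    """Generic real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation. `fitness` maps a gene list to a score to maximize."""
    pop = [[random.uniform(0.0, 1.0) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                     # elitism: keep the two best
        while len(next_pop) < pop_size:
            p1, p2 = (max(random.sample(scored, 3), key=fitness) for _ in range(2))
            child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                     if random.random() < mutation_rate else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Placeholder fitness: in the paper's setting this would score how well the
# finite-element-retina features feed the neural plant-growth monitor.
best = genetic_optimize(lambda genes: -sum((g - 0.3) ** 2 for g in genes), n_genes=8)
print([round(g, 2) for g in best])
```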

Object Recognition-based Global Localization for Mobile Robots (이동로봇의 물체인식 기반 전역적 자기위치 추정)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.3 no.1 / pp.33-41 / 2008
  • Based on object recognition technology, we present a new global localization method for robot navigation. To do this, we model an indoor environment using the following visual cues obtained with a stereo camera: view-based image features for object recognition and their 3D positions for object pose estimation. We also use the depth information at the horizontal centerline of the image, where the optical axis passes through, which is similar to the data from a 2D laser range finder. Therefore, we can build a hybrid local node for a topological map that is composed of a metric map of the indoor environment and an object location map. Based on such modeling, we suggest a coarse-to-fine strategy for estimating the global localization of a mobile robot. The coarse pose is obtained by means of object recognition and SVD-based least-squares fitting, and its refined pose is then estimated with a particle filtering algorithm. With real experiments, we show that the proposed method can be an effective vision-based global localization algorithm.
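
A minimal sketch of the SVD-based least-squares fitting step mentioned in the abstract, recovering a rigid rotation and translation from matched 3D points (Kabsch-style); the matched point sets are synthetic, and this is not the authors' code.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t,
    solved via SVD of the cross-covariance matrix (Kabsch-style)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: recover a known 90-degree yaw and a translation from 4 matches.
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src @ R_true.T + np.array([2.0, -1.0, 0.5])
R, t = fit_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))
```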

Color Enhancement in Images with Single CCD camera in Night Vision Environment

  • Hwang, Wonjun;Ko, Hanseok
    • Proceedings of the IEEK Conference / 2000.07a / pp.58-61 / 2000
  • In this paper, we describe an effective method for enhancing color night images with a spatio-temporal multi-scale retinex (STMSR), focused on Intelligent Transportation System (ITS) applications such as a single-CCD-based Electronic Toll Collection System (ETCS). The basic spatial retinex is known to provide color constancy while effectively removing local shades. However, it is relatively ineffective for night vision enhancement. Our proposed method, STMSR, exploits iterative time averaging of image sequences to suppress noise, taking the moving vehicles in the image frame into consideration. In the STMSR method, the spatial term makes dark images distinguishable and preserves the color information day and night, while the temporal term reduces the noise effect for a sharper and clearer reconstruction of the contents of each image frame. We show through representative simulations that incorporating both terms in the modeling produces output image sequences that are visually more pleasing than the original dim images.
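
A compact sketch of the two ingredients the abstract combines, a spatial multi-scale retinex applied per frame and a running temporal average across frames; the Gaussian surround scales, the blending factor, and the synthetic frames are assumed values, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(frame, sigmas=(15, 80, 250)):
    """Spatial multi-scale retinex: average of log(I) - log(Gaussian*I)
    over several surround scales, applied per color channel."""
    img = frame.astype(float) + 1.0          # avoid log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        blur = gaussian_filter(img, sigma=(sigma, sigma, 0))
        out += np.log(img) - np.log(blur)
    return out / len(sigmas)

class TemporalAverage:
    """Running (exponential) time average used to suppress frame noise."""
    def __init__(self, alpha=0.2):
        self.alpha, self.state = alpha, None

    def update(self, frame):
        self.state = frame if self.state is None else (
            self.alpha * frame + (1 - self.alpha) * self.state)
        return self.state

# Per-frame pipeline: spatial retinex first, then temporal smoothing.
smoother = TemporalAverage(alpha=0.2)
for _ in range(3):                                      # stand-in for a video stream
    noisy_night_frame = np.random.randint(0, 40, (120, 160, 3)).astype(np.uint8)
    enhanced = smoother.update(multi_scale_retinex(noisy_night_frame))
print(enhanced.shape)
```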

Target Object Image Extraction from 3D Space using Stereo Cameras

  • Yoo, Chae-Gon;Jung, Chang-Sung;Hwang, Chi-Jung
    • Proceedings of the IEEK Conference / 2002.07c / pp.1678-1680 / 2002
  • The stereo matching technique is used in many practical fields such as satellite image analysis and computer vision. In this paper, we suggest a method for extracting a target object image from a complicated background; for example, a human face image can be extracted from a random background. This method can be applied to computer vision tasks such as security systems, dressing simulation using the extracted human face, and 3D modeling. Much research on stereo matching has been performed. Conventional approaches can be categorized into area-based and feature-based methods. In this paper, we start from the area-based method and apply area tracking using a scanning window. Coarse depth information is used for the area merging process using area searching data. Finally, we produce a target object image.
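
As an illustration of the area-based matching with a scanning window that the abstract starts from, the sketch below computes a coarse disparity map by sum-of-absolute-differences block matching; the block size, disparity range, and synthetic image pair are assumptions, not the authors' settings.

```python
import numpy as np

def block_match_disparity(left, right, block=7, max_disp=32):
    """Area-based stereo matching: for each block in the left image, find the
    horizontal shift in the right image minimizing the sum of absolute
    differences (SAD). Returns a coarse disparity map (larger = closer)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
            best, best_d = None, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(int)
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Tiny synthetic example: the right image is the left shifted by 4 pixels,
# so the dominant disparity recovered over the image should be 4.
left = np.random.randint(0, 255, (40, 60)).astype(np.uint8)
right = np.roll(left, -4, axis=1)
disp = block_match_disparity(left, right, block=5, max_disp=8)
print(np.bincount(disp.ravel()).argmax())
```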
