• Title/Summary/Keyword: vision-based techniques

296 search results

A Study on Vision-based Robust Hand-Posture Recognition Using Reinforcement Learning (강화 학습을 이용한 비전 기반의 강인한 손 모양 인식에 대한 연구)

  • Jang Hyo-Young;Bien Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.3 s.309
    • /
    • pp.39-49
    • /
    • 2006
  • This paper proposes a hand-posture recognition method using reinforcement learning to improve the performance of vision-based hand-posture recognition. The difficulties in vision-based hand-posture recognition lie in viewing-direction dependency and self-occlusion caused by the high degree of freedom of the human hand. General approaches to these problems include using multiple cameras and limiting the relative angle between the cameras and the user's hand. When multiple cameras are used, however, a fusion technique must be considered to reach a final decision, and limiting the angle of the user's hand restricts the user's freedom. The proposed method combines angular features and appearance features to describe hand postures through a two-layered data structure and reinforcement learning. The validity of the proposed method is evaluated by applying it to a hand-posture recognition system using three cameras.
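The abstract does not detail the exact reinforcement-learning formulation, but the learning component can be illustrated with a generic tabular Q-learning update. The states, actions, and rewards below are hypothetical stand-ins (e.g., the action chooses which feature layer, angular or appearance, to consult), not the paper's actual design:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy setup: from a "coarse" hypothesis the agent picks which feature layer
# to consult; consulting appearance features is (by construction) rewarded.
states, actions = ["coarse", "refined"], ["angular", "appearance"]
Q = {s: {a: 0.0 for a in actions} for s in states}
for _ in range(100):
    q_update(Q, "coarse", "appearance", 1.0, "refined")
    q_update(Q, "coarse", "angular", 0.0, "refined")
# Q now prefers the rewarded action from the "coarse" state.
```

After training, the learned value of the rewarded action approaches 1 while the unrewarded action stays at 0, which is the mechanism by which reinforcement learning can bias feature selection.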

Kalman Filter-based Sensor Fusion for Posture Stabilization of a Mobile Robot (모바일 로봇 자세 안정화를 위한 칼만 필터 기반 센서 퓨전)

  • Jang, Taeho;Kim, Youngshik;Kyoung, Minyoung;Yi, Hyunbean;Hwan, Yoondong
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.40 no.8
    • /
    • pp.703-710
    • /
    • 2016
  • In robotics research, accurate estimation of the current robot position is important for achieving motion control. In this research, we focus on a sensor fusion method that provides improved position estimation for a wheeled mobile robot by considering two different sensor measurements: we fuse camera-based vision and encoder-based odometry data using Kalman filter techniques. An external camera-based vision system provides global position coordinates (x, y) for the mobile robot in an indoor environment, while internal encoder-based odometry provides the robot's linear and angular velocities. We then use the position estimated by the Kalman filter as input to the motion controller, which significantly improves its performance. Finally, we experimentally verify the proposed sensor-fused position estimation and motion controller on an actual mobile robot system, comparing the Kalman filter-based sensor-fused estimation with two single-sensor estimations (vision-based and odometry-based).
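The fusion scheme can be sketched in one dimension: encoder odometry drives the Kalman prediction step and the camera position measurement drives the update step. The noise variances and motion profile below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def kalman_step(x, P, v_odom, z_vision, dt=0.1, q_var=0.01, r_var=0.25):
    """Predict with odometry velocity, then correct with a camera position fix."""
    x_pred = x + v_odom * dt          # predict: integrate encoder velocity
    P_pred = P + q_var                # grow uncertainty by process noise
    K = P_pred / (P_pred + r_var)     # Kalman gain
    x_new = x_pred + K * (z_vision - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
x_est, P, true_x = 0.0, 1.0, 0.0
for _ in range(200):
    true_x += 1.0 * 0.1                       # robot moves at 1 m/s
    v_meas = 1.0 + rng.normal(0.0, 0.1)       # noisy encoder velocity
    z_meas = true_x + rng.normal(0.0, 0.5)    # noisy camera position
    x_est, P = kalman_step(x_est, P, v_meas, z_meas)
# x_est now tracks true_x far more tightly than either sensor alone.
```

The same predict/update split generalizes to the full (x, y, heading) state with matrix-valued gain; the scalar case is kept here for clarity.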

Optical Flow-Based Marker Tracking Algorithm for Collaboration Between Drone and Ground Vehicle (드론과 지상로봇 간의 협업을 위한 광학흐름 기반 마커 추적방법)

  • Beck, Jong-Hwan;Kim, Sang-Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.3
    • /
    • pp.107-112
    • /
    • 2018
  • In this paper, an optical flow-based keypoint detection and tracking technique is proposed for collaboration between a flying drone with a vision system and ground robots. Target detection with a moving vision system poses many challenges, so we combined an improved FAST algorithm for feature detection with the Lucas-Kanade method for optical flow motion tracking, achieving 40% faster processing than previous work. A proposed image binarization method, tailored to the given marker, also improved marker detection accuracy. We further studied how to optimize the embedded system, which performs complex computations for intelligent functions with very limited resources, while maintaining the drone's present weight and moving speed. In future work, we aim to develop smarter collaborating robots that learn and recognize targets even against complex backgrounds.
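The motion-tracking half of this pipeline can be sketched as a single-window Lucas-Kanade step (in practice one would track FAST keypoints with a pyramidal implementation such as OpenCV's). The synthetic image pair below is an assumption for testing, not data from the paper:

```python
import numpy as np

def lucas_kanade(I0, I1):
    """Solve the 2x2 Lucas-Kanade normal equations for one constant flow (u, v)."""
    Ix = np.gradient(I0, axis=1)              # spatial gradients
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0                              # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)

# Synthetic check: a smooth pattern shifted one pixel to the right.
y, x = np.mgrid[0:32, 0:32].astype(float)
I0 = np.sin(0.3 * x) + np.cos(0.2 * y)
I1 = np.sin(0.3 * (x - 1)) + np.cos(0.2 * y)  # same pattern moved +1 px in x
u, v = lucas_kanade(I0, I1)                   # u close to 1, v close to 0
```

Running this per keypoint window, rather than over the whole frame, gives the sparse tracks used for marker following.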

Vision-Based Dynamic Motion Measurement of a Floating Structure Using Multiple Targets under Wave Loadings (다중 표적을 이용한 부유식 구조물의 영상 기반 동적 응답 계측)

  • Yi, Jin-Hak;Kim, Jin-Ha;Jeong, Weon-Mu;Chae, Jang-Won
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.32 no.1A
    • /
    • pp.19-30
    • /
    • 2012
  • Recently, vision-based dynamic deflection measurement techniques have attracted significant interest and become more popular owing to the development of high-quality, low-price camcorders and image processing algorithms. However, several research issues remain, including the self-vibration of the vision device (i.e., the camcorder) and the image processing algorithm on the device side; on the application side, the application area should be extended to measuring the three-dimensional movement of floating structures. In this study, a vision-based dynamic motion measurement technique using multiple targets is proposed to measure the three-dimensional dynamic motion of floating structures, along with a new scheme for selecting the threshold value that discriminates the background from the raw image containing the targets. The proposed method is applied to measure the dynamic motion of a large concrete floating quay in an open sea area under several wave conditions, and the results are compared with measurements from a conventional RTK-GPS (Real-Time Kinematic Global Positioning System) and an MRU (Motion Reference Unit).
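The abstract does not describe its threshold-selection scheme in detail; as a standard stand-in, Otsu's method picks the threshold that maximizes between-class variance of the intensity histogram, which illustrates the background/target discrimination step. The synthetic frame below is an assumption for testing:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the intensity maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = float(np.dot(np.arange(256), hist))
    w0, sum0, best_t, best_var = 0, 0.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / (total - w0)
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic frame: dark background (~30) plus a bright 20x20 target (~200).
rng = np.random.default_rng(1)
img = np.full((64, 64), 30, dtype=np.uint8)
img[20:40, 20:40] = 200
img += rng.integers(0, 10, img.shape, dtype=np.uint8)  # sensor noise
t = otsu_threshold(img)
mask = img > t   # binarized image isolating the target
```

On a clearly bimodal frame like this, the chosen threshold lands between the background and target clusters, so the binary mask recovers the target region exactly.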

Terrain Geometry from Monocular Image Sequences

  • McKenzie, Alexander;Vendrovsky, Eugene;Noh, Jun-Yong
    • Journal of Computing Science and Engineering
    • /
    • v.2 no.1
    • /
    • pp.98-108
    • /
    • 2008
  • Terrain reconstruction from images is an ill-posed, yet commonly desired, Structure from Motion task when compositing visual effects into live-action photography. These surfaces are required for choreographing a scene, casting physically accurate shadows of CG elements, and handling occlusions. We present a novel framework for generating the geometry of landscapes from extremely noisy point cloud datasets obtained via limited-resolution techniques, particularly optical flow-based vision algorithms applied to live-action video plates. Our contribution is a new statistical approach to removing erroneous tracks ('outliers') by employing a unique combination of well-established techniques, including Gaussian Mixture Models (GMMs) for robust parameter estimation and Radial Basis Functions (RBFs) for scattered data interpolation, to exploit the natural constraints of this problem. Our algorithm offsets the tremendously laborious task of modeling these landscapes by hand, automatically generating a visually consistent, camera-position-dependent, thin-shell surface mesh within seconds for a typical tracking shot.
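A minimal sketch of the two-stage idea: reject grossly wrong tracks, then interpolate the survivors with Gaussian RBFs. A simple residual test against a coarse plane fit stands in for the paper's GMM-based robust estimation, and all data below is synthetic:

```python
import numpy as np

def fit_rbf(pts, vals, eps=1.0):
    """Gaussian RBF interpolant through scattered (x, y) -> height samples."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = np.exp(-(eps * d) ** 2) + 1e-6 * np.eye(len(vals))  # small ridge
    w = np.linalg.solve(K, vals)
    def interp(q):
        dq = np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1)
        return np.exp(-(eps * dq) ** 2) @ w
    return interp

rng = np.random.default_rng(2)
pts = rng.uniform(0, 4, (60, 2))                 # tracked terrain points (x, y)
z = np.sin(pts[:, 0]) + 0.1 * pts[:, 1]          # ground-truth heights
z[:5] += 10.0                                    # five grossly wrong tracks
# Outlier rejection: drop points far from a coarse plane fit
# (a simplified stand-in for the paper's GMM-based robust estimation).
A = np.c_[pts, np.ones(len(pts))]
resid = z - A @ np.linalg.lstsq(A, z, rcond=None)[0]
keep = np.abs(resid) < 2.0 * resid.std()
surface = fit_rbf(pts[keep], z[keep])            # interpolate the inliers
```

The resulting `surface` callable evaluates a smooth height field at arbitrary (x, y) queries, which is the scattered-data interpolation role RBFs play in the paper.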

Recent Developments Involving the Application of Infrared Thermal Imaging in Agriculture

  • Lee, Jun-Soo;Hong, Gwang-Wook;Shin, Kyeongho;Jung, Dongsoo;Kim, Joo-Hyung
    • Journal of Sensor Science and Technology
    • /
    • v.27 no.5
    • /
    • pp.280-293
    • /
    • 2018
  • The conversion of an object's invisible thermal radiation pattern into a visible image using infrared (IR) thermal technology is very useful for understanding phenomena of interest. Although IR thermal imaging was originally developed for military and space applications, it is currently employed to determine thermal properties and heat features in various applications, such as the non-destructive evaluation of industrial equipment, power plants, and electrical systems, military or driver-assist night vision, and medical monitoring of heat generation or loss. Recently, IR imaging-based monitoring systems have been considered for applications in agriculture, including crop care, plant-disease detection, bruise detection in fruits, and the evaluation of fruit maturity. This paper reviews recent progress in the development of IR thermal imaging techniques and suggests possible applications of thermal imaging in agriculture.

A Survey of Face Recognition Techniques

  • Jafri, Rabia;Arabnia, Hamid R.
    • Journal of Information Processing Systems
    • /
    • v.5 no.2
    • /
    • pp.41-68
    • /
    • 2009
  • Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.

Overview of sensor fusion techniques for vehicle positioning (차량정밀측위를 위한 복합측위 기술 동향)

  • Park, Jin-Won;Choi, Kae-Won
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.2
    • /
    • pp.139-144
    • /
    • 2016
  • This paper provides an overview of recent trends in sensor fusion technologies for vehicle positioning. GNSS by itself cannot satisfy the precision and reliability required for autonomous driving. We survey sensor fusion techniques that combine the outputs of the GNSS and inertial navigation sensors such as an odometer and a gyroscope. Moreover, we review landmark-based positioning, which matches landmarks detected by a lidar or stereo vision against high-precision digital maps.

Three-dimensional Shape Recovery from Image Focus Using Polynomial Regression Analysis in Optical Microscopy

  • Lee, Sung-An;Lee, Byung-Geun
    • Current Optics and Photonics
    • /
    • v.4 no.5
    • /
    • pp.411-420
    • /
    • 2020
  • Non-contact three-dimensional (3D) measuring technology is used to identify defects in miniature products, such as optics, polymers, and semiconductors, and has therefore garnered significant attention in computer vision research. In this paper, we focus on shape from focus (SFF), an optical passive method for 3D shape recovery. In existing SFF techniques using interpolation, the entire focus volume dataset is approximated with one model. However, these methods cannot show how well a predefined model fits all image points of an object, and it is not reasonable to explain variously shaped datasets with a single model. Furthermore, noise in the dataset introduces error. Therefore, we propose an algorithm based on polynomial regression analysis to address these disadvantages. Our experimental results indicate that the proposed method is more accurate than existing methods.
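For a single pixel, the SFF refinement can be sketched as fitting a quadratic (polynomial regression) to the focus-measure samples around the coarse peak, with the parabola's vertex giving a sub-frame depth. The Gaussian focus curve below is a synthetic stand-in for a real focus volume, not the paper's model:

```python
import numpy as np

def depth_from_focus(z, f, window=2):
    """Refine the argmax of a focus-measure curve by quadratic regression."""
    k = int(np.argmax(f))                        # coarse peak frame
    lo, hi = max(0, k - window), min(len(z), k + window + 1)
    a, b, _ = np.polyfit(z[lo:hi], f[lo:hi], 2)  # fit f ~ a*z^2 + b*z + c
    return -b / (2.0 * a)                        # parabola vertex = depth

z = np.arange(20, dtype=float)                   # lens positions (frame index)
true_depth = 7.3
f = np.exp(-0.5 * ((z - true_depth) / 2.5) ** 2) # synthetic focus measure
f += np.random.default_rng(3).normal(0.0, 0.005, z.shape)
depth = depth_from_focus(z, f)                   # sub-frame estimate near 7.3
```

Repeating this per pixel over the focus volume yields the recovered depth map; the fitted-model choice (degree, window) is exactly the design question the paper's regression analysis addresses.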

A Study on Classification Performance Analysis of Convolutional Neural Network using Ensemble Learning Algorithm (앙상블 학습 알고리즘을 이용한 컨벌루션 신경망의 분류 성능 분석에 관한 연구)

  • Park, Sung-Wook;Kim, Jong-Chan;Kim, Do-Yeon
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.6
    • /
    • pp.665-675
    • /
    • 2019
  • In this paper, we compare and analyze the classification performance of the deep learning algorithm Convolutional Neural Network (CNN) according to ensemble generation and combining techniques. We used several CNN models (VGG16, VGG19, DenseNet121, DenseNet169, DenseNet201, ResNet18, ResNet34, ResNet50, ResNet101, ResNet152, GoogLeNet) to create 10 ensemble generation combinations and applied six combining techniques (average, weighted average, maximum, minimum, median, product) to the optimal combination. Experimentally, the DenseNet169-VGG16-GoogLeNet combination for ensemble generation and the product rule for ensemble combining showed the best performance. Based on this, we conclude that ensembling different models with high benchmark scores is another way to obtain good results.
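The six combining rules can be sketched directly on per-model softmax outputs (rows: models, columns: classes). The member models and scores below are illustrative, not the paper's measurements:

```python
import numpy as np

def combine(probs, rule="product", weights=None):
    """Fuse per-model class probabilities with one combining rule."""
    rules = {
        "average":  lambda p: p.mean(axis=0),
        "weighted": lambda p: np.average(p, axis=0, weights=weights),
        "maximum":  lambda p: p.max(axis=0),
        "minimum":  lambda p: p.min(axis=0),
        "median":   lambda p: np.median(p, axis=0),
        "product":  lambda p: p.prod(axis=0),
    }
    scores = rules[rule](probs)
    return scores / scores.sum()     # renormalize to a distribution

# Three hypothetical members scoring one image over three classes:
p = np.array([[0.6, 0.3, 0.1],
              [0.5, 0.4, 0.1],
              [0.2, 0.7, 0.1]])
preds = {r: int(np.argmax(combine(p, r)))
         for r in ["average", "maximum", "minimum", "median", "product"]}
```

On this toy input the median rule predicts class 0 while the other rules predict class 1, showing how the choice of combining rule alone can flip the ensemble decision.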