• Title/Summary/Keyword: vision-based method

Event recognition of entering and exiting (출입 이벤트 인식)

  • Cui, Yaohuan;Lee, Chang-Woo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2008.06a
    • /
    • pp.199-204
    • /
    • 2008
  • Visual surveillance is currently an active topic in computer vision, and event detection and recognition is one of its most useful applications. In this paper, we propose a new method for recognizing entering and exiting events based on a person's movement features and the state of the door. Requiring no sensors other than a camera, the proposed approach combines novel but simple vision techniques: edge detection, the motion history image, and the geometric characteristics of the human shape. The proposed method supports several applications, such as access control, in visual surveillance and computer vision.
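The motion history image (MHI) mentioned in the abstract stamps each pixel with the time of its most recent motion, so recent movement appears "brighter" than older movement. A minimal NumPy sketch of the MHI update rule (an illustration, not the authors' implementation):

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration):
    """Update a motion history image: pixels with detected motion are
    stamped with the current timestamp; entries older than `duration`
    are cleared back to zero."""
    mhi = mhi.copy()
    mhi[motion_mask] = timestamp
    mhi[(~motion_mask) & (mhi < timestamp - duration)] = 0
    return mhi

# Toy example: a 1-D strip of pixels with motion sweeping rightward.
frames = [
    np.array([1, 0, 0, 0], dtype=bool),
    np.array([0, 1, 0, 0], dtype=bool),
    np.array([0, 0, 1, 0], dtype=bool),
]
mhi = np.zeros(4)
for t, mask in enumerate(frames, start=1):
    mhi = update_mhi(mhi, mask, timestamp=t, duration=2)
# The gradient of the resulting MHI encodes the direction of motion.
```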

Vision-Based Eyes-Gaze Detection Using Two-Eyes Displacement

  • Ponglanka, Wirote;Kumhom, Pinit;Chamnongthai, Kosin
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.46-49
    • /
    • 2002
  • One problem of vision-based eye-gaze detection is its low resolution. We propose a vision-based eye-gaze detection method based on the displacement of the eyes: as the user looks at different positions on the screen, the distance between the centers of the eyes changes accordingly. This relationship is derived and used to map the measured displacement to a position on the screen. Experiments were performed to measure the accuracy and resolution of the proposed method. The results show that the screen-mapping function for the horizontal plane achieves an accuracy of 76.47%, with an error of 23.53%.
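The screen-mapping idea can be illustrated with a simple linear calibration fitted by least squares. All numbers below are hypothetical, chosen only to show the shape of such a calibration, not values from the paper:

```python
import numpy as np

# Calibration: the user fixates known horizontal screen positions while
# the measured inter-eye-center distance (in pixels) is recorded.
eye_distances = np.array([60.0, 62.0, 64.0, 66.0, 68.0])     # hypothetical
screen_x      = np.array([0.0, 240.0, 480.0, 720.0, 960.0])  # hypothetical

# Fit the screen-mapping function x = slope * d + intercept.
A = np.vstack([eye_distances, np.ones_like(eye_distances)]).T
slope, intercept = np.linalg.lstsq(A, screen_x, rcond=None)[0]

def gaze_x(distance):
    """Map a measured eye displacement to a horizontal screen position."""
    return slope * distance + intercept
```

In practice the fitted map would be evaluated against held-out fixation points to obtain accuracy figures like those reported.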

Microassembly System for the assembly of photonic components (광 부품 조립을 위한 마이크로 조립 시스템)

  • 강현재;김상민;남궁영우;김병규
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.241-245
    • /
    • 2003
  • In this paper, a microassembly system based on hybrid manipulation schemes is proposed and applied to the assembly of a photonic component. To achieve both high precision and dexterity in microassembly, we propose a hybrid microassembly system with visual and force feedback. The system consists of distributed 6-DOF micromanipulation units, a stereo microscope, and a haptic interface for force-feedback-based microassembly. A hybrid assembly method is proposed that combines vision-based microassembly and scaled teleoperated microassembly with force feedback. The feasibility of the proposed method is investigated via experimental studies on assembling micro opto-electrical components. The results show that the hybrid microassembly system offers better flexibility and efficiency for assembling commercial photonic components.

Obstacle Avoidance using Power Potential Field for Stereo Vision based Mobile Robot (PPF를 이용한 4족 로봇의 장애물 회피)

  • 조경수;김동진;기창두
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.10a
    • /
    • pp.554-557
    • /
    • 2002
  • This paper describes a power-potential-field method for collision-free path planning of a stereo-vision-based mobile robot. Area-based stereo matching is performed for obstacle detection in an uncertain environment. The repulsive potential is constructed by distributing source points discretely and evenly on the boundaries of obstacles and superposing the power potential, which is defined so that the source potential has more influence on the robot than the sink potential when the robot is near a source point. The mobile robot approaches the goal point by moving directly in the negative gradient direction of the total potential. We investigated the feasibility of the power-potential method for collision-free path planning through various experiments.
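The general scheme, a potential field built from an attractive well at the goal plus repulsion from source points sampled on obstacle boundaries, with the robot descending the negative gradient, can be sketched as follows. This is a simplified conventional potential-field planner for illustration, not the paper's exact power-potential formulation; all gains and geometry are hypothetical:

```python
import numpy as np

def potential(p, goal, sources, k_att=1.0, k_rep=0.5):
    """Quadratic attractive well at the goal plus inverse-square
    repulsion from source points on obstacle boundaries."""
    att = 0.5 * k_att * np.sum((p - goal) ** 2)
    d = np.linalg.norm(sources - p, axis=1)
    rep = np.sum(k_rep / np.maximum(d, 1e-6) ** 2)
    return att + rep

def grad(p, goal, sources, eps=1e-4):
    """Numerical gradient via central finite differences (for clarity)."""
    g = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2); dp[i] = eps
        g[i] = (potential(p + dp, goal, sources) -
                potential(p - dp, goal, sources)) / (2 * eps)
    return g

goal = np.array([10.0, 0.0])
# Source points distributed evenly on a circular obstacle boundary.
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
sources = np.stack([5 + np.cos(theta), 2 + np.sin(theta)], axis=1)

p = np.array([0.0, 0.0])
for _ in range(400):
    p = p - 0.05 * grad(p, goal, sources)  # descend the potential
```

With the obstacle offset from the straight-line path, the descent skirts it and settles near the goal.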

Vision-Based Collision-Free Formation Control of Multi-UGVs using a Camera on UAV (무인비행로봇에 장착된 카메라를 이용한 다중 무인지상로봇의 충돌 없는 대형 제어기법)

  • Choi, Francis Byonghwa;Ha, Changsu;Lee, Dongjun
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.30 no.1
    • /
    • pp.53-58
    • /
    • 2013
  • In this paper, we present a framework for vision-based collision-free formation control of UGVs. On the image plane of a perspective camera rigidly attached to a stationarily hovering UAV, the image features of the UGVs are controlled so that they proceed to desired locations while avoiding collisions. The UGVs are modeled as unicycle wheeled mobile robots with nonholonomic constraints, and each follows its image feature's movement on the ground plane with a low-level controller. We use a potential function method to guarantee collision avoidance and show its stability. Simulation results are presented to validate the capability and stability of the proposed framework.

Evolutionary Generation Based Color Detection Technique for Object Identification in Degraded Robot Vision (저하된 로봇 비전에서의 물체 인식을 위한 진화적 생성 기반의 컬러 검출 기법)

  • Kim, Kyoungtae;Seo, Kisung
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.7
    • /
    • pp.1040-1046
    • /
    • 2015
  • This paper introduces a GP (Genetic Programming)-based color detection model for object detection in humanoid robot vision. Existing color detection methods use linear or nonlinear transformations of the RGB color model; however, in most cases they classify colors unsatisfactorily because of interference among color channels and sensitivity to illumination variation. These problems are especially pronounced in degraded images from robot vision. To address them, we propose an illumination-robust, non-parametric multi-color detection model evolved with GP. The proposed method is compared to existing color models in various robot-vision environments on a real Nao humanoid.

Diagnosis of the Rice Lodging for the UAV Image using Vision Transformer (Vision Transformer를 이용한 UAV 영상의 벼 도복 영역 진단)

  • Hyunjung Myung;Seojeong Kim;Kangin Choi;Donghoon Kim;Gwanghyeong Lee;Hyunggeun Ahn;Sunghwan Jeong;Byoungjun Kim
    • Smart Media Journal
    • /
    • v.12 no.9
    • /
    • pp.28-37
    • /
    • 2023
  • The main factor behind declines in rice yield is damage caused by localized heavy rain or typhoons. Analyzing rice lodging areas through visual inspection and field surveys of the affected area makes objective results difficult to obtain and requires considerable time and money. In this paper, we propose a method for estimating and diagnosing rice lodging areas using a Vision Transformer-based Segformer on RGB images captured by unmanned aerial vehicles. The proposed method estimates the lodging, normal, and background areas with the Segformer model, and the lodging rate is diagnosed according to the rice-field inspection criteria in the Seed Industry Act. The diagnosis can be used to map the distribution of rice lodging areas, show lodging trends, and support government quality management of certified seed. The proposed rice lodging estimation achieves a mean accuracy of 98.33% and an mIoU of 96.79%.
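The mIoU figure reported above is the mean intersection-over-union across the segmentation classes (lodging, normal, background). A small self-contained sketch of how it is computed, using a toy label map rather than the paper's data:

```python
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Mean intersection-over-union across classes, the standard
    segmentation metric reported for models such as Segformer."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (truth == c))
        union = np.sum((pred == c) | (truth == c))
        if union > 0:                      # skip classes absent from both
            ious.append(inter / union)
    return np.mean(ious)

# Toy 2x3 label maps with classes 0=background, 1=normal, 2=lodging.
truth = np.array([[0, 0, 1], [1, 2, 2]])
pred  = np.array([[0, 1, 1], [1, 2, 2]])
```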

Traffic Light Detection Method in Image Using Geometric Analysis Between Traffic Light and Vision Sensor (교통 신호등과 비전 센서의 위치 관계 분석을 통한 이미지에서 교통 신호등 검출 방법)

  • Choi, Changhwan;Yoo, Kook-Yeol;Park, Yongwan
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.10 no.2
    • /
    • pp.101-108
    • /
    • 2015
  • In this paper, a robust traffic light detection method using a vision sensor and DGPS (Differential Global Positioning System) is proposed. Conventional vision-based detection methods are very sensitive to illumination changes, for instance low visibility at night or strong reflections from bright light. To overcome these limitations of the visual sensor, DGPS is incorporated to determine the location and shape of traffic lights, which are available from a traffic light database. Furthermore, the geometric relationship between the traffic light and the vision sensor is used to locate the traffic light in the image from the DGPS information. Empirical results show that the proposed method improves the detection rate by 51% at night, with a marginal improvement in daytime environments.
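The geometric step, projecting a traffic light's known 3-D position (from the DGPS-referenced database) into the image to constrain the search region, follows the standard pinhole camera model. A minimal sketch with hypothetical camera intrinsics and geometry:

```python
import numpy as np

def project_to_image(point_world, R, t, fx, fy, cx, cy):
    """Project a 3-D world point into the image with a pinhole model:
    world -> camera coordinates, then perspective division."""
    p_cam = R @ point_world + t
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# Hypothetical setup: camera at the origin looking down +Z; a traffic
# light 20 m ahead, 2 m left, 5 m up (camera Y axis points down, so
# "up" is negative Y); 800-pixel focal length, 640x480 image.
R = np.eye(3)
t = np.zeros(3)
u, v = project_to_image(np.array([-2.0, -5.0, 20.0]), R, t,
                        fx=800.0, fy=800.0, cx=320.0, cy=240.0)
# (u, v) gives the pixel around which a small search window is placed.
```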

Edge-based Method for Human Detection in an Image (영상 내 사람의 검출을 위한 에지 기반 방법)

  • Do, Yongtae;Ban, Jonghee
    • Journal of Sensor Science and Technology
    • /
    • v.25 no.4
    • /
    • pp.285-290
    • /
    • 2016
  • Human sensing is an important but challenging technology. Unlike other human-sensing methods, a vision sensor has many advantages, and automatic human detection in camera images has been an active research area. The combination of Histogram of Oriented Gradients (HOG) and Support Vector Machine (SVM) is currently one of the most successful approaches to vision-based human detection. However, extracting HOG features from an image is computationally intensive, making the HOG method hard to employ in real-time applications. This paper describes an efficient solution to this speed problem. Our method obtains the edge information of an image and, based on the distribution pattern of the detected edge points, finds candidate regions where humans are likely to exist. HOG features are then extracted only from these candidate regions. Since the complex HOG processing is applied adaptively under the guidance of the simpler edge detection step, human detection can be performed quickly. Experimental results show that the proposed method is effective on various images.
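The candidate-region step can be illustrated by scanning a binary edge map with overlapping windows and keeping only those whose edge density exceeds a threshold; only these would be handed to the costly HOG+SVM stage. A toy sketch (the window size, stride, and threshold are hypothetical, not the paper's):

```python
import numpy as np

def candidate_windows(edges, win_h, win_w, min_edge_ratio=0.05):
    """Return top-left corners of windows whose edge-point density is
    high enough to plausibly contain a person. Only these windows would
    then be passed to the expensive HOG+SVM classifier."""
    h, w = edges.shape
    found = []
    for y in range(0, h - win_h + 1, win_h // 2):      # 50% overlap
        for x in range(0, w - win_w + 1, win_w // 2):
            density = edges[y:y + win_h, x:x + win_w].mean()
            if density >= min_edge_ratio:
                found.append((y, x))
    return found

# Toy edge map: a dense blob of edge points in the top-left corner.
edges = np.zeros((8, 8))
edges[0:4, 0:4] = 1.0
windows = candidate_windows(edges, 4, 4)
```

Windows far from the edge blob are pruned, which is where the speedup over exhaustive HOG scanning comes from.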

A Double-channel Four-band True Color Night Vision System

  • Jiang, Yunfeng;Wu, Dongsheng;Liu, Jie;Tian, Kuo;Wang, Dan
    • Current Optics and Photonics
    • /
    • v.6 no.6
    • /
    • pp.608-618
    • /
    • 2022
  • By analyzing the signal-to-noise ratio (SNR) theory of the conventional true color night vision system, we found that the output image SNR is limited by the wavelength range of the system response, λ1 to λ2. We therefore built a double-channel four-band true color night vision system that expands the system response to improve the output image SNR. We also propose an image fusion method based on principal component analysis (PCA) and the nonsubsampled shearlet transform (NSST) to obtain true color night vision images. In experiments, a method based on edge extraction of the targets and spatial-dimension decorrelation was used to calculate the SNR of the obtained images, and we calculated the correlation coefficient (CC) between the edge maps of the obtained and reference images. The SNRs of the images of four scenes obtained by our system were 125.0%, 145.8%, 86.0%, and 51.8% higher, respectively, than those of the conventional tri-band system, and the CC was also higher, demonstrating that our system yields true color images of better quality.
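The PCA part of the fusion can be sketched as weighting two registered band images by the first principal component of their joint covariance, so the band carrying more signal variance contributes more to the fused result. This is a minimal illustration of PCA-weighted fusion only; the paper's method additionally uses NSST, which requires a dedicated transform library and is omitted here:

```python
import numpy as np

def pca_fusion(img_a, img_b):
    """Fuse two registered band images with weights from the first
    principal component of their joint covariance."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = np.abs(eigvecs[:, np.argmax(eigvals)])  # principal eigenvector
    w = w / w.sum()                             # normalize to weights
    return w[0] * img_a + w[1] * img_b

# Synthetic stand-ins for two registered band images.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 3.0, (16, 16))   # higher-variance band
b = rng.normal(0.0, 1.0, (16, 16))
fused = pca_fusion(a, b)
```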