• Title/Abstract/Keyword: vision-based method


Vision-Based Eyes-Gaze Detection Using Two-Eyes Displacement

  • Ponglanka, Wirote; Kumhom, Pinit; Chamnongthai, Kosin
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 ITC-CSCC -1 / pp.46-49 / 2002
  • One problem of vision-based eye-gaze detection is its low resolution. We propose a method for vision-based eye-gaze detection based on the displacement of the eyes. While the user looks at different positions on the screen, the distance between the centers of the two eyes changes accordingly. This relationship is derived and used to map the displacement to a position on the screen. Experiments were performed to measure the accuracy and resolution of the proposed method. The results show that the accuracy of the screen mapping function in the horizontal plane is 76.47%, with an error of 23.53%.

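A minimal sketch of the displacement-to-screen mapping idea described above, assuming the eye centers are already detected by a separate step; the linear calibration form, function names, and sample values are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def calibrate_mapping(displacements, screen_xs):
    """Fit a linear map screen_x = a * displacement + b from samples taken
    while the user fixates known screen positions."""
    a, b = np.polyfit(displacements, screen_xs, deg=1)
    return a, b

def gaze_to_screen_x(left_center, right_center, a, b):
    """Map the current two-eye-center displacement to a horizontal screen position."""
    displacement = np.linalg.norm(np.asarray(right_center) - np.asarray(left_center))
    return a * displacement + b

# Hypothetical calibration data: displacements (px) observed at five known screen x positions.
disp = [58.1, 59.4, 60.8, 62.0, 63.3]
xs = [0, 320, 640, 960, 1280]
a, b = calibrate_mapping(disp, xs)
print(gaze_to_screen_x((100.0, 200.0), (161.2, 200.0), a, b))
```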

광 부품 조립을 위한 마이크로 조립 시스템 (Microassembly System for the assembly of photonic components)

  • 강현재; 김상민; 남궁영우; 김병규
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2003년도 춘계학술대회 논문집 / pp.241-245 / 2003
  • In this paper, a microassembly system based on hybrid manipulation schemes is proposed and applied to the assembly of a photonic component. To achieve both high precision and dexterity in microassembly, we propose a hybrid microassembly system with sensory feedback from vision and force. The system consists of distributed 6-DOF micromanipulation units, a stereo microscope, and a haptic interface for force-feedback-based microassembly. A hybrid assembly method, which combines vision-based microassembly and scaled teleoperated microassembly with force feedback, is proposed. The feasibility of the proposed method is investigated through experiments on assembling micro opto-electrical components. The results show that the hybrid microassembly system is applicable to the assembly of commercial photonic components with improved flexibility and efficiency.

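A schematic sketch of the mode-switching loop implied by the hybrid scheme above: autonomous vision-based positioning far from contact, scaled teleoperation with force feedback once contact is sensed. All class names, thresholds, and gains are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class AssemblyState:
    visual_error_um: float   # part-to-target misalignment measured by the stereo microscope
    contact_force_mN: float  # force sensed at the micro tool

def hybrid_step(state, master_cmd_um, force_scale=0.5, contact_thresh_mN=1.0, gain=0.3):
    """One illustrative control step of a hybrid microassembly loop.

    Below the contact threshold, a vision-based proportional controller reduces
    the visual error autonomously; once contact force appears, control switches
    to scaled teleoperation and the scaled force is returned for haptic display.
    """
    if state.contact_force_mN < contact_thresh_mN:
        motion_um = -gain * state.visual_error_um   # vision-based coarse positioning
        haptic_force = 0.0
    else:
        motion_um = force_scale * master_cmd_um      # scale down the operator's motion
        haptic_force = state.contact_force_mN / force_scale  # scale up the felt force
    return motion_um, haptic_force

print(hybrid_step(AssemblyState(visual_error_um=12.0, contact_force_mN=0.2), master_cmd_um=2.0))
print(hybrid_step(AssemblyState(visual_error_um=1.0, contact_force_mN=3.5), master_cmd_um=2.0))
```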

PPF를 이용한 4족 로봇의 장애물 회피 (Obstacle Avoidance using Power Potential Field for Stereo Vision based Mobile Robot)

  • 조경수; 김동진; 기창두
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2002년도 추계학술대회 논문집 / pp.554-557 / 2002
  • This paper describes a power potential field method for the collision-free path planning of a stereo-vision-based mobile robot. Area-based stereo matching is performed for obstacle detection in an uncertain environment. The repulsive potential is constructed by distributing source points discretely and evenly on the boundaries of obstacles and superposing the power potential, which is defined so that the source potential has more influence on the robot than the sink potential when the robot is near a source point. The mobile robot approaches the goal point by moving directly in the negative gradient direction of the total potential. The possibility of the power potential method for collision-free path planning of a mobile robot is investigated through various experiments.

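A minimal numerical sketch of the potential-field navigation described above: source points are sampled on an obstacle boundary, an attractive term pulls toward the goal, and the robot follows the negative gradient of the summed potential. The specific 1/r repulsive form, gains, and step size are assumptions; the paper's power-potential definition may differ.

```python
import numpy as np

def total_potential(p, goal, obstacle_pts, k_att=1.0, k_rep=0.5):
    """Attractive potential toward the goal plus repulsive 1/r terms from
    source points sampled on obstacle boundaries (illustrative form)."""
    u = 0.5 * k_att * np.linalg.norm(p - goal) ** 2
    for q in obstacle_pts:
        u += k_rep / (np.linalg.norm(p - q) + 1e-6)
    return u

def gradient_step(p, goal, obstacle_pts, step=0.05, eps=1e-4):
    """Move one step along the numerically estimated negative gradient."""
    grad = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2)
        dp[i] = eps
        grad[i] = (total_potential(p + dp, goal, obstacle_pts)
                   - total_potential(p - dp, goal, obstacle_pts)) / (2 * eps)
    return p - step * grad / (np.linalg.norm(grad) + 1e-9)

goal = np.array([5.0, 5.0])
obstacle = [np.array([2.5, 2.5 + 0.2 * i]) for i in range(-5, 6)]  # sampled boundary points
p = np.array([0.0, 0.0])
for _ in range(300):
    p = gradient_step(p, goal, obstacle)
print(p)  # ends near the goal while skirting the obstacle
```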

무인비행로봇에 장착된 카메라를 이용한 다중 무인지상로봇의 충돌 없는 대형 제어기법 (Vision-Based Collision-Free Formation Control of Multi-UGVs using a Camera on UAV)

  • 최병화; 하창수; 이동준
    • 한국정밀공학회지 / Vol. 30, No. 1 / pp.53-58 / 2013
  • In this paper, we present a framework for vision-based collision-free formation control of UGVs. On the image plane of a perspective camera rigidly attached to a UAV hovering in place, the image features of the UGVs are controlled by the proposed framework so that they proceed to desired locations while avoiding collisions. The UGVs are modeled as unicycle-type wheeled mobile robots with nonholonomic constraints, and each follows its image feature's motion on the ground plane with a low-level controller. A potential function method is used to guarantee collision avoidance, and its stability is shown. Simulation results are presented to validate the capability and stability of the proposed framework. A rough sketch of the two control layers is given below.
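In the sketch, image features are driven toward desired locations by a potential with pairwise repulsion for collision avoidance, and each UGV tracks its commanded ground-plane point with a simple unicycle controller. The gains, the repulsion form, and the feature-to-ground mapping are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def feature_velocity(p, p_des, others, k_att=1.0, k_rep=0.8, d_safe=0.5):
    """Desired image-feature velocity: attraction to the target plus repulsion
    from nearby features (negative gradient of a collision-avoidance potential)."""
    v = -k_att * (p - p_des)
    for q in others:
        d = np.linalg.norm(p - q)
        if d < d_safe:
            v += k_rep * (p - q) / (d**3 + 1e-9)  # push away when too close
    return v

def unicycle_tracking(pose, target, k_v=1.0, k_w=2.0):
    """Low-level unicycle controller: (v, w) driving the UGV toward the
    ground-plane point commanded by its image feature."""
    x, y, theta = pose
    dx, dy = target[0] - x, target[1] - y
    v = k_v * (np.cos(theta) * dx + np.sin(theta) * dy)
    w = k_w * (np.arctan2(dy, dx) - theta)
    return v, w

p, p_des = np.array([1.0, 1.0]), np.array([3.0, 2.0])
print(feature_velocity(p, p_des, others=[np.array([1.2, 1.1])]))
print(unicycle_tracking((1.0, 1.0, 0.0), (3.0, 2.0)))
```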

저하된 로봇 비전에서의 물체 인식을 위한 진화적 생성 기반의 컬러 검출 기법 (Evolutionary Generation Based Color Detection Technique for Object Identification in Degraded Robot Vision)

  • 김경태; 서기성
    • 전기학회논문지 / Vol. 64, No. 7 / pp.1040-1046 / 2015
  • This paper introduces a GP (Genetic Programming) based color detection model for object detection in humanoid robot vision. Existing color detection methods use linear or nonlinear transformations of the RGB color model. However, they often fail to classify colors satisfactorily because of interference among color channels and susceptibility to illumination variation, and these problems are especially pronounced in degraded images from robot vision. To solve them, we propose an illumination-robust and non-parametric multi-color detection model evolved by GP. The proposed method is compared with existing color models in various robot-vision environments on a real Nao humanoid.
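The GP idea above evolves a non-parametric expression over the color channels instead of a fixed linear or nonlinear transform. Below is a minimal sketch of how one candidate expression could be evaluated as a pixel-wise color classifier; the expression, threshold, and fitness definition are illustrative assumptions, and the evolutionary loop (selection, crossover, mutation) is omitted.

```python
import numpy as np

def candidate_expression(r, g, b):
    """One hypothetical GP individual: a nonlinear channel combination meant to
    respond strongly to the target color (here, reddish objects)."""
    return (r - g) + 0.5 * (r - b) - 0.1 * (r * b) / 255.0

def classify(pixels, expr, threshold=20.0):
    """Apply the evolved expression pixel-wise and threshold the response."""
    r = pixels[..., 0].astype(float)
    g = pixels[..., 1].astype(float)
    b = pixels[..., 2].astype(float)
    return expr(r, g, b) > threshold

def fitness(expr, pixels, labels):
    """Fitness = classification accuracy on labeled pixels; GP would maximize
    this over a population of expression trees."""
    return float(np.mean(classify(pixels, expr) == labels))

# Tiny labeled sample: two reddish pixels (target) and two greenish pixels.
pixels = np.array([[200, 40, 30], [180, 60, 50], [40, 190, 60], [30, 160, 80]], dtype=np.uint8)
labels = np.array([True, True, False, False])
print(fitness(candidate_expression, pixels, labels))
```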

Vision Transformer를 이용한 UAV 영상의 벼 도복 영역 진단 (Diagnosis of the Rice Lodging for the UAV Image using Vision Transformer)

  • 명현정; 김서정; 최강인; 김동훈; 이광형; 안형근; 정성환; 김병준
    • 스마트미디어저널 / Vol. 12, No. 9 / pp.28-37 / 2023
  • Lodging damage caused by heavy rain or typhoons is a major cause of reduced rice yields. Conventional estimation of the lodged area relies on on-site surveys based on visual inspection and judgment, which makes objective results difficult to obtain and requires considerable time and cost. This paper proposes estimating and diagnosing rice lodging areas in UAV-acquired RGB images using Segformer, a Vision Transformer-based model. The proposed method segments lodged, normal, and background regions and diagnoses the lodging rate according to the paddy field inspection procedure in the seed management guidelines. The diagnosed results make it possible to observe the distribution of rice lodging damage and can be used in field inspections of government-supplied seed. The rice lodging segmentation achieved an average accuracy of 98.33% and an mIoU of 96.79%.
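A hedged sketch of the inference step described above using the Hugging Face Segformer implementation. The checkpoint name is a placeholder, and the three-class label convention (background/normal/lodged) and the lodging-rate formula are assumptions, since the authors' fine-tuned weights are not public.

```python
import numpy as np
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

# Placeholder for a fine-tuned checkpoint with 3 classes: 0=background, 1=normal rice, 2=lodged rice.
CKPT = "your-org/segformer-b2-rice-lodging"
processor = SegformerImageProcessor.from_pretrained(CKPT)
model = SegformerForSemanticSegmentation.from_pretrained(CKPT).eval()

def lodging_rate(image_path):
    """Segment a UAV RGB tile and compute lodged area / (lodged + normal) area."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits                      # (1, num_classes, H/4, W/4)
    logits = torch.nn.functional.interpolate(                # upsample back to image size
        logits, size=image.size[::-1], mode="bilinear", align_corners=False)
    pred = logits.argmax(dim=1)[0].cpu().numpy()
    normal, lodged = np.sum(pred == 1), np.sum(pred == 2)
    return lodged / max(normal + lodged, 1)

# rate = lodging_rate("uav_tile_0001.png")
```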

교통 신호등과 비전 센서의 위치 관계 분석을 통한 이미지에서 교통 신호등 검출 방법 (Traffic Light Detection Method in Image Using Geometric Analysis Between Traffic Light and Vision Sensor)

  • 최창환; 유국열; 박용완
    • 대한임베디드공학회논문지 / Vol. 10, No. 2 / pp.101-108 / 2015
  • In this paper, a robust traffic light detection method is proposed using a vision sensor and DGPS (Differential Global Positioning System). Conventional vision-based detection methods are very sensitive to illumination change, for instance, low visibility at night or strong reflections from bright light. To overcome these limitations of the visual sensor, DGPS is incorporated to determine the location and shape of traffic lights, which are available from a traffic light database. Furthermore, the geometric relationship between the traffic light and the vision sensor is used to locate the traffic light in the image from the DGPS information. The empirical results show that the proposed method improves the detection rate by 51% at night, with a marginal improvement in daytime environments.
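A minimal sketch of the geometric step described above: a traffic light's known world position (from the database, combined with the vehicle's DGPS pose) is projected through a pinhole camera model to predict an image region of interest that constrains the visual detector. The intrinsics, coordinate conventions, and ROI heuristic are illustrative assumptions.

```python
import numpy as np

def project_to_image(p_world, R_wc, t_wc, K):
    """Project a 3D world point into pixel coordinates.

    R_wc, t_wc: rotation/translation taking world points into the camera frame
    (derived from the vehicle's DGPS position and heading); K: camera intrinsics.
    """
    p_cam = R_wc @ p_world + t_wc
    if p_cam[2] <= 0:
        return None                          # behind the camera
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def roi_around(uv, light_width_m, depth_m, fx, margin=1.5):
    """Approximate ROI from the light's physical width and its distance."""
    half = margin * fx * light_width_m / depth_m / 2
    return (uv[0] - half, uv[1] - half, uv[0] + half, uv[1] + half)

K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 1.2, 0.0])            # example extrinsics from DGPS pose
light_world = np.array([2.0, -4.0, 30.0])              # light position from the database
uv = project_to_image(light_world, R, t, K)
print(uv, roi_around(uv, light_width_m=0.9, depth_m=30.0, fx=800.0))
```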

영상 내 사람의 검출을 위한 에지 기반 방법 (Edge-based Method for Human Detection in an Image)

  • 도용태; 반종희
    • 센서학회지 / Vol. 25, No. 4 / pp.285-290 / 2016
  • Human sensing is an important but challenging technology. Unlike other methods for sensing humans, a vision sensor has many advantages, and there has been active research on automatic human detection in camera images. The combination of Histogram of Oriented Gradients (HOG) and Support Vector Machine (SVM) is currently one of the most successful approaches to vision-based human detection. However, extracting HOG features from an image is computationally intensive, so it is hard to employ the HOG method in real-time processing applications. This paper describes an efficient solution to this speed problem of the HOG method. Our method obtains edge information from an image and finds candidate regions where humans are likely to exist based on the distribution pattern of the detected edge points. HOG features are then extracted only from the candidate image regions. Since the complex HOG processing is done adaptively under the guidance of the simpler edge detection step, human detection can be performed quickly. Experimental results show that the proposed method is effective on various images.
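A rough OpenCV sketch of the two-stage idea described above: cheap edge analysis proposes candidate windows, and the HOG+SVM people detector runs only inside them. The edge-density heuristic and all thresholds are assumptions standing in for the paper's candidate-generation rule.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def candidate_regions(gray, win=(128, 256), stride=64, density_thresh=0.06):
    """Propose windows whose Canny edge density suggests a possible person."""
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    for y in range(0, h - win[1] + 1, stride):
        for x in range(0, w - win[0] + 1, stride):
            patch = edges[y:y + win[1], x:x + win[0]]
            if patch.mean() / 255.0 > density_thresh:
                yield x, y, win[0], win[1]

def detect_people(image_bgr):
    """Run the HOG+SVM detector only inside edge-based candidate regions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detections = []
    for x, y, w, h in candidate_regions(gray):
        roi = image_bgr[y:y + h, x:x + w]
        rects, _ = hog.detectMultiScale(roi, winStride=(8, 8))
        detections += [(x + rx, y + ry, rw, rh) for rx, ry, rw, rh in rects]
    return detections

# img = cv2.imread("street.jpg"); print(detect_people(img))
```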

A Double-channel Four-band True Color Night Vision System

  • Jiang, Yunfeng; Wu, Dongsheng; Liu, Jie; Tian, Kuo; Wang, Dan
    • Current Optics and Photonics / Vol. 6, No. 6 / pp.608-618 / 2022
  • By analyzing the signal-to-noise ratio (SNR) theory of a conventional true color night vision system, we found that the output image SNR is limited by the wavelength range of the system response, λ1 to λ2. We therefore built a double-channel four-band true color night vision system to expand the system response and improve the output image SNR. In addition, we proposed an image fusion method based on principal component analysis (PCA) and the nonsubsampled shearlet transform (NSST) to obtain true color night vision images. Through experiments, a method based on edge extraction of the targets and spatial-dimension decorrelation was proposed to calculate the SNR of the obtained images, and the correlation coefficient (CC) between the edge maps of the obtained and reference images was calculated. The results show that the SNRs of the images of four scenes obtained by our system were 125.0%, 145.8%, 86.0%, and 51.8% higher, respectively, than those of the conventional tri-band system, and the CC was also higher, demonstrating that our system can obtain true color images of better quality.
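A simplified NumPy sketch of the PCA part of the fusion described above: the four co-registered band images are reduced to three principal components that are rescaled as the output color channels. The NSST stage and the paper's actual channel assignment are omitted, so this only illustrates the general approach.

```python
import numpy as np

def pca_fuse_to_rgb(bands):
    """Fuse four co-registered band images (list of 2-D arrays) into a 3-channel
    image using the first three principal components (illustrative only)."""
    h, w = bands[0].shape
    X = np.stack([b.ravel().astype(float) for b in bands], axis=1)   # (pixels, 4)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pcs = Xc @ Vt[:3].T                                              # first 3 components
    pcs -= pcs.min(axis=0)
    pcs /= pcs.max(axis=0) + 1e-9                                    # rescale to [0, 1]
    return pcs.reshape(h, w, 3)

# Synthetic example: four random 64x64 "band" images.
rng = np.random.default_rng(0)
bands = [rng.integers(0, 255, (64, 64)) for _ in range(4)]
rgb = pca_fuse_to_rgb(bands)
print(rgb.shape, rgb.min(), rgb.max())
```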

Automation of a Teleoperated Microassembly Desktop Station Supervised by Virtual Reality

  • Antoine Ferreira; Fontaine, Jean-Guy; Shigeoki Hirai
    • Transactions on Control, Automation and Systems Engineering / Vol. 4, No. 1 / pp.23-31 / 2002
  • We propose a concept of a desktop micro-device factory for visually servoed, teleoperated microassembly assisted by a virtual reality (VR) interface. It is composed of two micromanipulators equipped with micro tools operating under a light microscope. First, a manipulator control method that makes a micro object follow a planned trajectory in a pushing operation under vision-based position control is proposed. Then, we present a cooperative control strategy for micro handling operations under vision-based force control, integrating a sensor-fusion framework. A guidance system based on a virtual micro-world, exactly reconstructed from the CAD-CAM databases of the real environment, is presented to deal with the imprecisely calibrated micro world. Finally, experimental results of microassembly tasks performed on millimeter-sized components are provided.
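A toy sketch of the vision-based position control step mentioned above: the pushed micro object's position, as observed through the microscope camera, is driven along a planned trajectory with a proportional law. The simulated scene, gains, units, and tolerances are assumptions, not the paper's controller.

```python
import numpy as np

class SimulatedScene:
    """Stand-in for the microscope vision system and the pushing micromanipulator."""
    def __init__(self, start_um):
        self.pos = np.array(start_um, dtype=float)
    def observe(self):
        return self.pos + np.random.normal(0, 0.1, 2)   # noisy image-based measurement (um)
    def push(self, delta_um):
        self.pos += 0.9 * delta_um                       # object moves slightly less than the tool

def visual_servo_push(scene, waypoints, k_p=0.4, tol_um=0.5, max_steps=200):
    """Proportional vision-based position control: at each step the tool is
    commanded a fraction of the image-measured error to the current waypoint."""
    for target in waypoints:
        for _ in range(max_steps):
            error = np.asarray(target, dtype=float) - scene.observe()
            if np.linalg.norm(error) < tol_um:
                break
            scene.push(k_p * error)
    return scene.pos

scene = SimulatedScene(start_um=[0.0, 0.0])
print(visual_servo_push(scene, waypoints=[[20.0, 0.0], [20.0, 15.0]]))
```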