• Title/Abstract/Keyword: Robot Vision

Search results: 878 items (processing time: 0.023 seconds)

실감만남 공간에서의 비전 센서 기반의 사람-로봇간 운동 정보 전달에 관한 연구 (Vision-based Human-Robot Motion Transfer in Tangible Meeting Space)

  • 최유경;나성권;김수환;김창환;박성기
    • 로봇학회논문지, Vol. 2, No. 2, pp.143-151, 2007
  • This paper deals with a tangible interface system that introduces a robot as a remote avatar. It focuses on a new method that makes a robot imitate human arm motions captured from a remote space. Our method is functionally divided into two parts: capturing the human motion and adapting it to the robot. In the capturing part, we propose a modified potential function of metaballs for real-time performance and high accuracy. In the adapting part, we suggest a geometric scaling method for resolving the structural difference between a human and a robot. With this method, we implemented a tangible interface and verified its speed and accuracy through tests.

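As an illustration of the geometric scaling step mentioned in the abstract above, the following sketch rescales each captured arm segment to a robot's link length while keeping its direction. The joint positions, link lengths, and the per-segment rescaling are illustrative assumptions only, not the authors' actual formulation.

```python
import numpy as np

# Hypothetical captured human arm joint positions (shoulder, elbow, wrist) in metres.
human_arm = {
    "shoulder": np.array([0.00, 0.00, 1.40]),
    "elbow":    np.array([0.25, 0.05, 1.20]),
    "wrist":    np.array([0.45, 0.10, 1.05]),
}

# Assumed robot upper-arm / forearm link lengths (m); values are illustrative only.
robot_links = {"upper_arm": 0.30, "forearm": 0.25}

def scale_arm_to_robot(arm, links):
    """Re-scale each human arm segment to the robot's link length
    while keeping the segment's direction (geometric scaling)."""
    shoulder = arm["shoulder"]
    # Upper arm: shoulder -> elbow, rescaled to the robot's upper-arm length
    u = arm["elbow"] - shoulder
    elbow = shoulder + links["upper_arm"] * u / np.linalg.norm(u)
    # Forearm: elbow -> wrist, direction taken from the human pose
    f = arm["wrist"] - arm["elbow"]
    wrist = elbow + links["forearm"] * f / np.linalg.norm(f)
    return {"shoulder": shoulder, "elbow": elbow, "wrist": wrist}

print(scale_arm_to_robot(human_arm, robot_links))
```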

시각 센서 기반의 다 관절 매니퓰레이터 간접교시를 위한 유저 인터페이스 설계 (A User Interface for Vision Sensor based Indirect Teaching of a Robotic Manipulator)

  • 김태우;이후만;김중배
    • 제어로봇시스템학회논문지, Vol. 19, No. 10, pp.921-927, 2013
  • This paper presents a user interface for vision-based indirect teaching of a robotic manipulator with Kinect and IMU (Inertial Measurement Unit) sensors. The user interface system is designed to control the manipulator more easily in joint space, Cartesian space, and the tool frame. We use the skeleton data of the user from the Kinect and wrist-mounted IMU sensors to calculate the user's joint angles and wrist movement for robot control. The interface system proposed in this paper allows the user to teach the manipulator without a pre-programming process. This shortens the robot teaching time and eventually enables increased productivity. Simulation and experimental results are presented to verify the performance of the robot control and interface system.
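
A common way to turn skeleton points into joint angles, as the abstract describes for the Kinect data, is the angle between adjacent segments. The sketch below computes a single elbow angle from three hypothetical skeleton points; the actual interface also fuses wrist-mounted IMU data, which is not modeled here.

```python
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Angle at p_joint formed by the segments to p_prox and p_dist (radians)."""
    a = p_prox - p_joint
    b = p_dist - p_joint
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# Hypothetical Kinect skeleton points for shoulder, elbow, and wrist (metres).
shoulder = np.array([0.00, 0.00, 1.40])
elbow    = np.array([0.25, 0.05, 1.20])
wrist    = np.array([0.45, 0.10, 1.05])

elbow_angle = joint_angle(shoulder, elbow, wrist)
print(f"elbow angle: {np.degrees(elbow_angle):.1f} deg")
```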

로봇 메니퓰레이터의 동력학 시각서보 (Dynamic Visual Servoing of Robot Manipulators)

  • 백승민;임경수;한웅기;국태용
    • 대한전기학회논문지:시스템및제어부문D, Vol. 49, No. 1, pp.41-47, 2000
  • A better tracking performance can be achieved when visual sensors such as CCD cameras are used in controlling a robot manipulator than when only relative sensors such as encoders are used. However, for precise visual servoing of a robot manipulator, an expensive vision system with a fast sampling rate must be used. Moreover, even if a fast vision system is implemented for visual servoing, one cannot obtain reliable performance without a robust and stable inner joint servo loop. In this paper, we propose a dynamic control scheme for robot manipulators with an eye-in-hand camera configuration, where a dynamic learning controller is designed to improve the tracking performance of the robotic system. The proposed control scheme is implemented for the task of tracking moving objects and is shown to be robust to parameter uncertainty, disturbances, low sampling rate, etc.

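The paper's contribution is the dynamic learning controller in the inner joint servo loop, which is not reproduced here. For context, the sketch below shows the classical image-based visual servoing (IBVS) law that an eye-in-hand outer loop typically computes; the interaction matrix is the textbook point-feature form, and the gain, feature points, and depths are made-up values.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical image-Jacobian (interaction matrix) row pair for a point
    feature (x, y) in normalized image coordinates at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) that drives the
    image-feature error to zero at rate `gain` (textbook IBVS law)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Illustrative numbers: two tracked point features and their desired positions.
v = ibvs_velocity(features=[(0.10, 0.05), (-0.08, 0.02)],
                  desired=[(0.00, 0.00), (-0.10, 0.00)],
                  depths=[1.2, 1.1])
print(v)
```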

감시용 로봇의 시각을 위한 인공 신경망 기반 겹친 사람의 구분 (Dividing Occluded Humans Based on an Artificial Neural Network for the Vision of a Surveillance Robot)

  • 도용태
    • 제어로봇시스템학회논문지, Vol. 15, No. 5, pp.505-510, 2009
  • In recent years the space where a robot works has been expanding into the human space, unlike traditional industrial robots that work only at fixed positions apart from humans. A human in this situation may be the owner of a robot or the target of a robotic application. This paper deals with the latter case: when a robot vision system is employed to monitor humans for a surveillance application, each person in a scene needs to be identified. Humans, however, often move together, and occlusions between them occur frequently. Although this problem has not been seriously tackled in the relevant literature, it brings difficulty into later image analysis steps such as tracking and scene understanding. In this paper, a probabilistic neural network is employed to learn the patterns of the best dividing position along the top pixels of an image region of partly occluded people. As this method uses only shape information from an image, it is simple and can be implemented in real time.
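
The abstract's key idea is learning the best dividing position along the top pixels of a partly occluded region. As a rough, hand-coded stand-in for the probabilistic neural network used in the paper, the sketch below extracts the top-pixel profile of a binary silhouette and splits it at the deepest valley; the toy mask and the valley rule are assumptions for illustration only.

```python
import numpy as np

def top_profile(mask):
    """For each column of a binary silhouette mask, return the row index of
    the topmost foreground pixel (or the image height if the column is empty)."""
    h, w = mask.shape
    prof = np.full(w, h)
    rows, cols = np.nonzero(mask)
    for c in range(w):
        rs = rows[cols == c]
        if rs.size:
            prof[c] = rs.min()
    return prof

def split_column(mask):
    """Pick the column where the head-top profile dips lowest (deepest valley)
    as the dividing position between two partly occluded people."""
    prof = top_profile(mask)
    interior = prof[1:-1]
    return 1 + int(np.argmax(interior))  # largest row index = lowest contour point

# Toy silhouette of two people standing close together: head tops at rows 4 and 5,
# with the contour dipping to row 9 where their shoulders overlap.
mask = np.zeros((20, 13), dtype=bool)
mask[4:20, 1:5]  = True   # person A
mask[9:20, 5:8]  = True   # overlapping shoulder region
mask[5:20, 8:12] = True   # person B
print("split at column", split_column(mask))
```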

복잡한 환경에서 자율이동 로봇의 문 통과방법 (Door Traversing for A Mobile Robot in Complex Environment)

  • 김영중;임묘택;서민옥
    • 대한전기학회논문지:시스템및제어부문D, Vol. 54, No. 7, pp.447-452, 2005
  • This paper presents a method by which a mobile robot finds the location of doors in complex environments and safely traverses them. A PCA (Principal Component Analysis) algorithm using vision information is used for the robot to find the location of the door. PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in high-dimensional data. A fuzzy controller using sonar data is used for the robot to avoid obstacles and traverse the doors.
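
PCA on image points, as used here to locate the door, reduces to an eigen-decomposition of the point covariance. The sketch below recovers the dominant axis of a synthetic set of door-edge pixels; the data and the way the edge points would be extracted are assumed, not taken from the paper.

```python
import numpy as np

def principal_axes(points):
    """Return the mean and principal axes (covariance eigenvectors,
    sorted by decreasing eigenvalue) of a 2-D point cloud."""
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    cov = np.cov((pts - mean).T)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    return mean, evecs[:, order], evals[order]

# Hypothetical edge pixels extracted around a nearly vertical door frame (x, y).
rng = np.random.default_rng(0)
ys = rng.uniform(0, 200, 100)
door_edge = np.column_stack([120 + 0.02 * ys + rng.normal(0, 1.0, 100), ys])

mean, axes, variances = principal_axes(door_edge)
print("door edge direction (unit vector):", axes[:, 0])
```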

비전센서를 이용한 강교량 재도장 로봇의 주행 모니터링 모듈 개발 (Development of a Monitoring Module for a Steel Bridge-repainting Robot Using a Vision Sensor)

  • 서명국;이호연;장동욱;장병하
    • 드라이브 ㆍ 컨트롤, Vol. 19, No. 1, pp.1-7, 2022
  • Recently, a re-painting robot was developed to semi-automatically conduct blasting work in bridge spaces, improving work productivity and worker safety. In this study, a vision sensor-based monitoring module was developed so that the re-painting robot can automatically move along its path. The monitoring module provides direction information to the robot by analyzing the boundary between the painted surface and the metal surface. To measure images stably in an unstable environment, various techniques for improving image visibility were applied in this study. The driving performance was then verified in a similar environment.
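
One simple way to derive direction information from the paint/metal boundary, in the spirit of the module described above, is to threshold each image row and average the boundary column. The sketch below does this on a synthetic frame; the threshold, image, and steering convention are assumptions, and the paper's visibility-enhancement steps are omitted.

```python
import numpy as np

def boundary_offset(gray, threshold=128):
    """Estimate how far the paint/metal boundary deviates from the image
    centre column; a positive value suggests steering to the right."""
    h, w = gray.shape
    # For each row, take the first column whose intensity exceeds the threshold
    # (bright bare metal) as the boundary between painted and blasted surface.
    bright = gray > threshold
    boundary_cols = np.argmax(bright, axis=1).astype(float)
    boundary_cols[~bright.any(axis=1)] = np.nan  # rows with no metal visible
    return np.nanmean(boundary_cols) - w / 2.0

# Synthetic frame: dark painted surface on the left, brighter metal on the right,
# with the boundary slightly right of centre.
img = np.full((120, 160), 60, dtype=np.uint8)
img[:, 95:] = 200
print("steering offset (pixels):", boundary_offset(img))
```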

복도 주행 로봇을 위한 단일 카메라 영상에서의 사람 검출 (Human Detection in the Images of a Single Camera for a Corridor Navigation Robot)

  • 김정대;도용태
    • 로봇학회논문지, Vol. 8, No. 4, pp.238-246, 2013
  • In this paper, a robot vision technique is presented to detect obstacles, particularly approaching humans, in the images acquired by a mobile robot that autonomously navigates a narrow building corridor. A single low-cost color camera is attached to the robot, and a trapezoidal area in front of the robot is set as a region of interest (ROI) in the camera image. The lower parts of a human, such as the feet and legs, are first detected in the ROI from their appearance in real time as the distance between the robot and the human becomes smaller. The human detection is then confirmed by detecting his/her face within a small search region specified above the part detected in the trapezoidal ROI. To increase the credibility of detection, a final decision about human detection is made only when a face is detected in two consecutive image frames. We tested the proposed method using images of various people in corridor scenes and obtained promising results. This method can be used by a vision-guided mobile robot to make a detour and avoid collision with a human during indoor navigation.
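
Two elements of the abstract are easy to sketch: the trapezoidal ROI in front of the robot and the rule that a human is confirmed only when a face is found in two consecutive frames. The face detector itself is stubbed out below with a boolean per frame; the image size and ROI proportions are illustrative assumptions.

```python
import numpy as np

def trapezoid_roi_mask(h, w, top_frac=0.35, bottom_frac=0.9):
    """Boolean mask of a trapezoidal region of interest in front of the robot:
    narrow at the top of the image, wide at the bottom."""
    mask = np.zeros((h, w), dtype=bool)
    for r in range(h):
        frac = top_frac + (bottom_frac - top_frac) * r / (h - 1)
        half = int(frac * w / 2)
        mask[r, w // 2 - half: w // 2 + half] = True
    return mask

class HumanConfirmer:
    """Declare a human only when a face is found in two consecutive frames,
    as described in the abstract, to reduce false positives."""
    def __init__(self):
        self.prev_face = False
    def update(self, face_found_this_frame):
        confirmed = face_found_this_frame and self.prev_face
        self.prev_face = face_found_this_frame
        return confirmed

roi = trapezoid_roi_mask(240, 320)
print("ROI covers", int(roi.sum()), "pixels")

confirmer = HumanConfirmer()
for frame, face in enumerate([False, True, True, False]):
    print("frame", frame, "human confirmed" if confirmer.update(face) else "-")
```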

Development of a Hovering Robot System for Calamity Observation

  • Kang, M.S.;Park, S.;Lee, H.G.;Won, D.H.;Kim, T.J.
    • 제어로봇시스템학회:학술대회논문집, 제어로봇시스템학회 2005년도 ICCAS, pp.580-585, 2005
  • A QRT (Quad-Rotor Type) hovering robot system is developed for quick detection and observation of circumstances in calamity environments such as indoor fire spots. The UAV (Unmanned Aerial Vehicle) is equipped with four propellers, each driven by an electric motor, an embedded controller using a DSP, an INS (Inertial Navigation System) using 3-axis rate gyros, a CCD camera with a wireless communication transmitter for observation, and an ultrasonic range sensor for height control. The developed hovering robot shows stable flying performance with RIC (Robust Internal-loop Compensator) based disturbance compensation and a vision-based localization method. The UAV can also avoid obstacles using eight IR and four ultrasonic range sensors. The VTOL (Vertical Take-Off and Landing) flying object flies into indoor fire spots and sends the images captured by the CCD camera to the operator. This kind of small-sized UAV can be widely used in various calamity observation fields without endangering human beings in harmful environments.

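The paper's RIC-based disturbance compensation is not reproduced here. As a generic illustration of the height-control role played by the ultrasonic range sensor, the sketch below applies a simple PD altitude-hold law around a nominal hover thrust; all gains, the thrust convention, and the example numbers are assumed values.

```python
def altitude_hold(target_m, measured_m, vertical_speed, kp=1.2, kd=0.6, hover_thrust=0.5):
    """PD altitude-hold law on top of a nominal hover thrust; the ultrasonic
    range sensor supplies `measured_m`, as in the abstract's height control."""
    error = target_m - measured_m
    thrust = hover_thrust + kp * error - kd * vertical_speed
    return min(max(thrust, 0.0), 1.0)  # clamp to a 0..1 motor command range

# Illustrative call: the robot is 0.3 m below a 1.5 m target and climbing slowly.
print(altitude_hold(1.5, 1.2, 0.1))
```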

홀위치 측정을 위한 레이져비젼 시스템 개발 (A Laser Vision System for the High-Speed Measurement of Hole Positions)

  • 노영식;서영수;최원태
    • 대한전기학회:학술대회논문집, 대한전기학회 2006년도 심포지엄 논문집 정보 및 제어부문, pp.333-335, 2006
  • In this paper, we develop an inspection system for automobile parts using a laser vision sensor. The laser vision sensor obtains two-dimensional information from the vision camera and a third dimension by combining it with the laser. A jig and a robot are used to move the sensor between inspection positions. In addition, a computer-integrated system was developed to control the system components and manage the measurement data. The sensor measurement results are compared with CAD data, and the effectiveness of the measurements is verified by using the CAD model to obtain information about the measurement object.

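The comparison of measured hole positions against CAD data can be sketched as a per-hole deviation check. The hole names, nominal coordinates, measurements, and tolerance below are hypothetical; only the compare-and-verify idea comes from the abstract.

```python
import numpy as np

# Hypothetical CAD nominal hole positions and laser-vision measurements (mm).
cad_holes      = {"H1": (100.0, 50.0), "H2": (160.0, 50.0), "H3": (160.0, 90.0)}
measured_holes = {"H1": (100.1, 50.2), "H2": (159.6, 50.1), "H3": (160.2, 90.6)}

TOLERANCE_MM = 0.5  # assumed acceptance tolerance

def inspect(cad, measured, tol):
    """Compare measured hole centres with CAD nominals and report deviations."""
    report = {}
    for name, nominal in cad.items():
        dev = float(np.linalg.norm(np.subtract(measured[name], nominal)))
        report[name] = (round(dev, 3), "OK" if dev <= tol else "OUT OF TOLERANCE")
    return report

for hole, (dev, verdict) in inspect(cad_holes, measured_holes, TOLERANCE_MM).items():
    print(hole, dev, verdict)
```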

백라이트 유닛의 결함 검사를 위한 비전 시스템 개발 (Development of Vision system for Back Light Unit of Defect)

  • 한창호;오춘석;유영기;조상희
    • 대한전기학회논문지:시스템및제어부문D, Vol. 55, No. 4, pp.161-164, 2006
  • In this thesis we designed a vision system to inspect defects in the back light unit of a flat panel display device. The vision system is divided into hardware and a defect inspection algorithm. The hardware consists of an illumination part, a robot-arm controller part, and an image-acquisition part. The illumination part is made of an acrylic panel for light diffusion, five 36W FPLs (Fluorescent Parallel Lamps), and an electronic ballast with low-frequency harmonics. The CCD (Charge-Coupled Device) camera of the image-acquisition part can acquire bright images under the light coming from the lamps. The image-acquisition part is composed of the CCD camera and a frame grabber. The robot-arm controller part moves the CCD camera to the desired position, so that every nook and corner of the flat panel display surface can be inspected. Images obtained through the robot arm and the image-acquisition board are saved to the hard disk by a Windows program and are tested for defects using the image processing algorithms.
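
As a minimal stand-in for the defect-inspection algorithms mentioned above, the sketch below flags pixels that deviate strongly from the panel's median brightness in a synthetic backlight image. The deviation threshold and the synthetic defects are assumptions; the thesis' actual image-processing pipeline is not reproduced here.

```python
import numpy as np

def find_defects(gray, deviation=40):
    """Flag pixels whose brightness deviates strongly from the panel's median
    as defect candidates (a simple stand-in for the inspection algorithms)."""
    background = np.median(gray)
    defect_mask = np.abs(gray.astype(int) - background) > deviation
    return defect_mask

# Synthetic backlight image: uniform panel with one dark spot and one bright spot.
panel = np.full((100, 100), 180, dtype=np.uint8)
panel[30:33, 40:43] = 90    # dark defect
panel[70:72, 10:12] = 250   # bright defect
mask = find_defects(panel)
print("defect pixels:", int(mask.sum()))
```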