• Title/Abstract/Keywords: vision-based control

683 search results

Improvement Effect on Functional Myopia Using a Vision Training Device (OTUS)

  • 박성용;윤영대;김덕훈;이동희
    • 한국융합학회논문지
    • /
Vol. 11, No. 2
    • /
    • pp.147-154
    • /
    • 2020
• This study concerns the development of an ICT-based wearable device for vision recovery that can induce improvement of functional myopia through accommodation training. The vision training device (OTUS) is a head-mounted wearable device that naturally stimulates contraction and relaxation of the ciliary muscle as well as convergence and divergence of the eyes. Through the device, users can carry out customized vision training based on their stored personal vision data. In the experiment, functional myopia was induced and the improvement of symptoms by accommodation training was compared between two groups (control group, n=16; accommodation training group, n=16). As a result, functional myopia in the accommodation training group improved by an average of 0.44 D ± 0.35 (p<0.05). Although this study demonstrates the effectiveness of the vision training device (OTUS) for functional myopia, additional clinical trials are needed to verify its potential for long-term control of functional myopia.

Supervisory Control for a Multi-Processor-Based Automatic Assembly System

  • 이재혁;유범재;오상록
    • 대한전기학회논문지
    • /
Vol. 39, No. 8
    • /
    • pp.888-897
    • /
    • 1990
  • In this paper, a multi-processor-based supervisory controller for an automatic assembly system is presented. The proposed supervisory controller is implemented in the C language and has a structured, easily expandable organization. The controller is also designed with diagnostic capability, including self-diagnosis of each processor module. The developed supervisory controller has proven very useful in a high-speed automatic assembly system with vision capability.
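
The abstract gives no implementation detail beyond the structured, self-diagnosing organization, so the following is only a minimal Python sketch of a supervisor that polls processor modules and reports failed self-diagnoses; the class and method names are hypothetical, not the paper's C design.

```python
import time

class ProcessorModule:
    """One processor board in the assembly system (illustrative stub)."""
    def __init__(self, name):
        self.name = name

    def self_test(self):
        """Return True if the module's self-diagnosis passes (stubbed here)."""
        return True

def supervisory_loop(modules, cycles=10, period_s=0.1):
    """Poll each module's self-diagnosis and report any fault found."""
    for _ in range(cycles):
        for module in modules:
            if not module.self_test():
                print(f"fault detected in {module.name}")
        time.sleep(period_s)

supervisory_loop([ProcessorModule("vision"), ProcessorModule("motion")])
```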

Vision-Based Autonomous Landing System of an Unmanned Aerial Vehicle on a Moving Vehicle

  • 정성욱;구정모;정광익;김형진;명현
    • 로봇학회논문지
    • /
Vol. 11, No. 4
    • /
    • pp.262-269
    • /
    • 2016
  • Flight of an autonomous unmanned aerial vehicle (UAV) generally consists of four steps: take-off, ascent, descent, and finally landing. Among them, autonomous landing is a challenging task due to its high risk and reliability requirements. When the landing site is moving or oscillating, the situation becomes far more unpredictable and difficult than landing on a stationary site. For these reasons, accurate and precise control is required for a UAV landing autonomously on top of a moving vehicle that is rolling or oscillating while moving. In this paper, a vision-only landing algorithm using dynamic gimbal control is proposed. The camera systems applied in previous studies are fixed, facing either downward or forward, and their main disadvantage is a narrow field of view (FOV). Controlling the gimbal to track the target dynamically ameliorates this problem and also helps the UAV follow the target faster than with a fixed camera alone. Using an artificial tag on the landing pad, the relative position and orientation of the UAV are acquired, and these estimated poses are used for gimbal control and UAV control for a safe and stable landing on a moving vehicle. Outdoor experimental results show that this vision-based algorithm performs well and can be applied to real situations.
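
As a rough illustration of the tag-based pose estimation and gimbal pointing described above, the sketch below recovers the tag pose with OpenCV's solvePnP and converts the target position into pan/tilt commands; the tag size, camera intrinsics, and small-angle pointing rule are assumptions, not the paper's calibration.

```python
import numpy as np
import cv2

# Known geometry of a square landing-pad tag (side length in meters; placeholder).
TAG_SIZE = 0.40
OBJECT_POINTS = np.array([
    [-TAG_SIZE / 2,  TAG_SIZE / 2, 0.0],
    [ TAG_SIZE / 2,  TAG_SIZE / 2, 0.0],
    [ TAG_SIZE / 2, -TAG_SIZE / 2, 0.0],
    [-TAG_SIZE / 2, -TAG_SIZE / 2, 0.0],
], dtype=np.float64)

def relative_pose(corners_px, camera_matrix, dist_coeffs):
    """Estimate the tag pose in the camera frame from its 4 image corners."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS, corners_px,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    return rvec, tvec  # rotation (Rodrigues vector) and translation, camera frame

def gimbal_angles(tvec):
    """Pan/tilt that would center the tag in the image (small-angle pointing)."""
    x, y, z = tvec.ravel()
    pan = np.arctan2(x, z)    # positive: target right of the optical axis
    tilt = np.arctan2(-y, z)  # positive: target above the optical axis
    return pan, tilt
```

A real system would feed these angles into the gimbal's rate loop; the pointing rule here only keeps the tag centered in the image.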

Event Recognition of Entering and Exiting

  • 취야오환;이창우
한국컴퓨터정보학회: Conference Proceedings
    • /
Proceedings of the 38th Summer Conference of 한국컴퓨터정보학회 (2008), Vol. 16, No. 1
    • /
    • pp.199-204
    • /
    • 2008
  • Visual surveillance has recently been an active topic in computer vision, and event detection and recognition is one of its important and useful applications. In this paper, we propose a new method to recognize entering and exiting events based on human movement features and the state of the door. Without additional sensors, the proposed approach uses a simple vision method that combines edge detection, motion history images, and geometric characteristics of the human shape. The proposed method has several applications in visual surveillance and computer vision, such as access control.
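
The motion history image mentioned above can be maintained with a few lines of NumPy; this is a minimal sketch in which the difference threshold and decay duration are illustrative values, not the paper's parameters.

```python
import numpy as np

MHI_DURATION = 1.0  # seconds a motion trace persists (illustrative value)

def update_mhi(mhi, prev_frame, frame, timestamp, motion_thresh=30):
    """Update a motion history image from two consecutive grayscale frames.

    Pixels with recent motion hold the latest timestamp; stale pixels decay to 0.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion_mask = diff > motion_thresh
    mhi[motion_mask] = timestamp               # stamp fresh motion
    mhi[mhi < timestamp - MHI_DURATION] = 0.0  # expire old motion
    return mhi
```

Classifying an event as entering or exiting would then reduce to checking where the freshest motion lies relative to the detected door edges.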

A Study on the Point Placement Task of a Robot System Based on a Vision System

  • 장완식;유창규
    • 한국정밀공학회지
    • /
Vol. 13, No. 8
    • /
    • pp.175-183
    • /
    • 1996
  • This paper presents a three-dimensional robot task using a vision-based control method. A minimum of two cameras is required to place points on the end effectors of n-degree-of-freedom manipulators relative to other bodies. This is accomplished using a sequential estimation scheme that permits placement of these points in each of the two-dimensional image planes of the monitoring cameras. The estimation model is developed from a model that generalizes known three-axis manipulator kinematics to accommodate unknown relative camera position and orientation; it uses six uncertainty-of-view parameters estimated by an iterative method.
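
The paper's exact estimation model is not reproduced in the abstract, so the following is only a generic damped Gauss-Newton iteration over six view parameters; residual_fn and jacobian_fn are placeholders for the image-plane error model and its sensitivities.

```python
import numpy as np

def gauss_newton_step(params, residual_fn, jacobian_fn, damping=1e-6):
    """One damped Gauss-Newton update for a six-parameter view model.

    residual_fn(params) -> (m,) image-plane errors
    jacobian_fn(params) -> (m, 6) sensitivity of errors to parameters
    """
    r = residual_fn(params)
    J = jacobian_fn(params)
    H = J.T @ J + damping * np.eye(J.shape[1])  # damped normal equations
    delta = np.linalg.solve(H, -J.T @ r)
    return params + delta

def estimate_view_params(params0, residual_fn, jacobian_fn, iters=20, tol=1e-8):
    """Iterate until the update stalls, as in a sequential estimation scheme."""
    params = np.asarray(params0, dtype=float)
    for _ in range(iters):
        new_params = gauss_newton_step(params, residual_fn, jacobian_fn)
        if np.linalg.norm(new_params - params) < tol:
            break
        params = new_params
    return params
```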

Depth Estimation Through the Projection of a Rotating Mirror Image onto a Mono-Camera

  • 김형석;송재홍;한후석
    • 제어로봇시스템학회논문지
    • /
Vol. 7, No. 9
    • /
    • pp.790-797
    • /
    • 2001
  • A simple computer vision technique is proposed to measure middle-range depth with a mono camera and a plane mirror. The proposed system places a rotating mirror in front of a fixed mono camera. In contrast to conventional stereo vision, where the disparity of a closer object is larger than that of a distant object, in the proposed system the pixel movement caused by the rotating mirror is larger for pixels of distant objects. Motivated by this distinctive feature, the principle of depth measurement based on the relation between pixel movement and object distance is investigated, and the factors that influence measurement precision are analyzed. The benefits of the proposed system are its low price and reduced chance of occlusion; robustness in practical use is an additional benefit.
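
The abstract does not give the closed-form relation between pixel movement and depth, so the sketch below takes the safer calibration route: tabulate the displacement observed for targets at known distances and interpolate, exploiting the stated fact that displacement grows with distance in this system. All numbers are placeholders.

```python
import numpy as np

# Calibration pairs: pixel displacement observed for targets at known depths
# (placeholder values; in practice these come from a calibration session
# with the rotating-mirror rig).
calib_disp_px = np.array([12.0, 15.5, 18.0, 19.8, 21.0])
calib_depth_m = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

def depth_from_displacement(disp_px):
    """Interpolate depth from measured pixel movement.

    In this system displacement increases with distance (the opposite of
    stereo disparity), so the mapping is monotonic and can be tabulated.
    """
    return np.interp(disp_px, calib_disp_px, calib_depth_m)
```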

Target Tracking Control of a Quadrotor UAV Using a Vision Sensor

  • 유민구;홍성경
    • 한국항공우주학회지
    • /
Vol. 40, No. 2
    • /
    • pp.118-128
    • /
    • 2012
  • In this paper, a target-tracking position controller using a vision sensor is designed for a quadrotor UAV and verified through simulation and experiment. Before designing the controller, the dynamics of the quadrotor were analyzed and a model was identified from experimental data; the model coefficients were obtained by the prediction error method (PEM) using actual flight data. Based on this estimated model, a position controller that follows an arbitrary target was designed using the linear quadratic regulator (LQR) technique. The relative position between the quadrotor and the target was obtained by a color tracking function based on the color information from the vision sensor, and altitude was obtained from an ultrasonic sensor. Finally, tracking experiments on an actual moving target were performed to evaluate the LQR controller's performance.
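
As a minimal illustration of the LQR position control named above, the sketch below solves the continuous-time Riccati equation with SciPy for a per-axis double integrator; the paper's PEM-identified quadrotor model would replace the placeholder A, B, Q, R.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder position dynamics (double integrator per axis).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])  # penalize position error more than velocity
R = np.array([[0.1]])     # control effort weight (illustrative)

# Solve the continuous-time algebraic Riccati equation and form the gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

def control(state, target):
    """u = -K (x - x_ref): drive position/velocity error to zero."""
    error = state - target
    return -(K @ error)
```

With the identified model substituted in, the same two lines produce the gain matrix used for target tracking.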

3D Omni-Directional Vision SLAM Using a Fisheye Lens and a Laser Scanner

  • 최윤원;최정원;이석규
    • 제어로봇시스템학회논문지
    • /
Vol. 21, No. 7
    • /
    • pp.634-640
    • /
    • 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. SLAM performance has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are large and slow at computing depth for omni-directional images. In this paper, we use a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a constant distance from the camera. Fusion points are calculated from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
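
A rough sketch of the laser-to-fisheye fusion step: project 2D laser hits into fisheye pixel coordinates under a generic equidistant model. The focal length, image center, and laser-to-camera transform are placeholders, not the paper's calibration.

```python
import numpy as np

# Equidistant fisheye model: image radius r = f * theta, where theta is the
# angle from the optical axis. All constants below are placeholders.
F_PX = 300.0
CX, CY = 640.0, 480.0
R_CL = np.eye(3)                   # laser -> camera rotation
T_CL = np.array([0.0, 0.0, 0.30])  # laser -> camera translation (m)

def laser_to_fisheye(ranges, angles):
    """Project 2D laser hits (range, bearing) into fisheye pixel coordinates."""
    # Laser points in the laser frame (z = 0 scan plane).
    pts_l = np.stack([ranges * np.cos(angles),
                      ranges * np.sin(angles),
                      np.zeros_like(ranges)], axis=1)
    pts_c = pts_l @ R_CL.T + T_CL  # into the camera frame
    theta = np.arccos(pts_c[:, 2] / np.linalg.norm(pts_c, axis=1))
    phi = np.arctan2(pts_c[:, 1], pts_c[:, 0])
    r = F_PX * theta               # equidistant mapping
    return np.stack([CX + r * np.cos(phi), CY + r * np.sin(phi)], axis=1)
```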

Study on the Localization Improvement of Dead Reckoning Using an INS Calibrated by Fusion Sensor Network Information

  • 최재영;김성관
    • 제어로봇시스템학회논문지
    • /
Vol. 18, No. 8
    • /
    • pp.744-749
    • /
    • 2012
  • In this paper, we suggest how to improve the localization accuracy of a mobile robot by using sensor network information that fuses a machine vision camera, an encoder, and an IMU sensor. The heading value of the IMU comes from a geomagnetic sensor, which is constantly affected by its surrounding environment. To increase its accuracy, we isolate a template of the ceiling with the vision camera, measure angles with a pattern matching algorithm, and calibrate the IMU by comparing these measurements with the IMU values to obtain an offset. The values used to estimate the robot's position (from the encoder, the IMU, and the vision camera's angle measurement) are transferred to a host PC over a wireless network, and the host PC estimates the robot's location from all of these values. As a result, we obtained more accurate position estimates than when using the IMU sensor alone.
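
The offset-based IMU calibration described above can be sketched as a smoothed comparison of the vision-derived heading against the IMU yaw; the smoothing factor is illustrative, and the ceiling-pattern angle is assumed to be given.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class HeadingCalibrator:
    """Track a slowly varying offset between IMU yaw and a vision heading."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha  # smoothing factor (illustrative value)
        self.offset = 0.0

    def update(self, imu_yaw, vision_yaw):
        """Blend the latest vision/IMU discrepancy into the running offset."""
        err = wrap_angle(vision_yaw - imu_yaw)
        self.offset = wrap_angle(self.offset +
                                 self.alpha * wrap_angle(err - self.offset))
        return wrap_angle(imu_yaw + self.offset)  # corrected heading
```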

Auto Parts Visual Inspection under Severe Changes in the Lighting Environment

  • 김기석;박요한;박종섭;조재수
    • 제어로봇시스템학회논문지
    • /
Vol. 21, No. 12
    • /
    • pp.1109-1114
    • /
    • 2015
  • This paper presents an improved learning-based visual inspection method for auto parts under severe lighting changes. Automobile sunroof frames are produced automatically by robots in most production lines. In the sunroof frame manufacturing process, there is a quality problem in which some parts, such as bolts, are missing. Instead of manual sampling inspection with mechanical jig instruments, a learning-based machine vision system was proposed in previous research [1]. However, when applied to the actual sunroof frame production process, the inspection accuracy of that vision system drops considerably because of severe illumination changes. To overcome this capricious environment, selective feature vectors and cascade classifiers are used for each auto part, and inspection accuracy is further improved through a re-learning concept applied to misclassified data. The effectiveness of the proposed visual inspection method is verified through extensive experiments on a real sunroof production line.
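
The re-learning concept for misclassified data can be illustrated with a simple hard-example loop; this stand-in uses scikit-learn's AdaBoostClassifier rather than the paper's selective-feature cascade, and all names are assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_with_relearning(X_train, y_train, X_pool, y_pool, rounds=3):
    """Retrain a boosted classifier, folding misclassified pool samples back in.

    A simplified stand-in for the re-learning step: after each round, samples
    the current model gets wrong are appended to the training set.
    """
    clf = AdaBoostClassifier(n_estimators=100)
    X, y = X_train, y_train
    for _ in range(rounds):
        clf.fit(X, y)
        wrong = clf.predict(X_pool) != y_pool
        if not wrong.any():
            break
        X = np.vstack([X, X_pool[wrong]])  # add hard examples
        y = np.concatenate([y, y_pool[wrong]])
    return clf
```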