• Title/Abstract/Keywords: Vision Model

Search results: 1,322 items (processing time 0.028 s)

로봇 비젼 제어기법에 사용된 카메라의 최적 배치에 대한 실험적 연구 (An Experimental Study on the Optimal Arrangement of Cameras Used for the Robot's Vision Control Scheme)

  • 민관웅;장완식
    • 한국생산제조학회지, Vol. 19, No. 1, pp. 15-25, 2010
  • The objective of this study is to investigate the optimal arrangement of cameras used for a robot vision control scheme. The scheme involves two estimation models: a parameter estimation model and a robot joint-angle estimation model. For this study, the robot's working region is divided into three work spaces: left, central, and right. Cameras are positioned on circular arcs with radii of 1.5 m, 2.0 m, and 2.5 m, with seven cameras placed on each arc. For the experiment, nine cases of camera arrangement are selected in each work space, and each case uses three cameras. Six parameters are estimated for each camera using the developed parameter estimation model in order to show the suitability of the vision system model in the nine cases of each work space. Finally, the robot's joint angles are estimated using the joint-angle estimation model according to the camera arrangement for point-position control. The effect of the camera arrangement used in the vision control scheme is thus shown experimentally for the robot's point-position control.

매니퓰레이터의 조립작업을 위한 비젼시스템 모델 개발 (Development of Vision System Model for Manipulator's Assemble task)

  • 장완식
    • 한국생산제조학회지, Vol. 6, No. 2, pp. 10-18, 1997
  • This paper presents the development of real-time estimation and control details for a computer vision-based robot control method. This is accomplished using a sequential estimation scheme that permits placement of target points in each of the two-dimensional image planes of the monitoring cameras. The estimation model is developed from a model that generalizes the known 4-axis Scorbot manipulator kinematics to accommodate unknown relative camera position and orientation. This model uses six uncertainty-of-view parameters estimated by an iterative method. The method is tested experimentally in two ways: first, the validity of the estimation model is tested using a self-built test model; second, the practicality of the presented control method is verified by performing a 4-axis manipulator's assembly task. The results show that the control scheme is precise and robust. This feature can open the door to a range of multi-axis robot applications such as deburring and welding.
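The iterative estimation of view parameters described in this abstract can be sketched, in a highly simplified form, as Gauss-Newton least squares on a toy pinhole model. The two-parameter model, names, and data below are illustrative only; the paper's actual formulation uses six uncertainty-of-view parameters and the Scorbot kinematics:

```python
import numpy as np

def estimate_view_parameters(xz_ratios, u_obs, iters=10):
    """Iterative least-squares estimation of two view parameters
    (focal length f and principal point u0) in the toy pinhole model
    u = f * (X/Z) + u0.  Stands in for the paper's six-parameter
    uncertainty-of-view estimation per camera."""
    theta = np.array([1.0, 0.0])                          # initial guess [f, u0]
    J = np.column_stack([xz_ratios, np.ones_like(xz_ratios)])  # model Jacobian
    for _ in range(iters):
        residual = u_obs - J @ theta                      # prediction error
        theta = theta + np.linalg.lstsq(J, residual, rcond=None)[0]  # GN step
    return theta

# synthetic image points generated from known parameters f=500, u0=320
rng = np.random.default_rng(0)
xz = rng.uniform(-0.5, 0.5, 50)
u = 500.0 * xz + 320.0
f_hat, u0_hat = estimate_view_parameters(xz, u)
```

With noiseless synthetic data the loop recovers the generating parameters exactly; the real method iterates over sequentially arriving image measurements.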


Integrated Navigation Design Using a Gimbaled Vision/LiDAR System with an Approximate Ground Description Model

  • Yun, Sukchang;Lee, Young Jae;Kim, Chang Joo;Sung, Sangkyung
    • International Journal of Aeronautical and Space Sciences, Vol. 14, No. 4, pp. 369-378, 2013
  • This paper presents a vision/LiDAR integrated navigation system that provides accurate relative navigation performance on a general ground surface in GNSS-denied environments. The ground surface considered during flight is approximated as a piecewise continuous model with flat and sloped surface profiles. The presented system consists of a strapdown IMU and an aiding sensor block comprising a vision sensor and a LiDAR on a stabilized gimbal platform. Two-dimensional optical flow vectors from the vision sensor and range information from the LiDAR to the ground are used to overcome the performance limit of a tactical-grade inertial navigation solution without GNSS signals. In the filter realization, the INS error model is employed, with measurement vectors containing two-dimensional velocity errors and one differenced altitude in the navigation frame. In computing the altitude difference, the ground slope angle is estimated in a novel way through two bisectional LiDAR signals, under a practical assumption representing a general ground profile. Finally, the overall integrated system is implemented based on the extended Kalman filter framework, and its performance is demonstrated through a simulation study with an aircraft flight trajectory scenario.
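The extended Kalman filter framework named in this abstract can be sketched with a generic measurement update; the 3-state error vector, the identity measurement matrix, and all numbers below are illustrative stand-ins, not the paper's actual INS error-state model:

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """Standard EKF measurement update step."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # state correction
    P = (np.eye(len(x)) - K @ H) @ P     # covariance update
    return x, P

# toy error state: [dv_n, dv_e, dh] -- two velocity errors and an altitude error
x = np.zeros(3)
P = np.eye(3)
# measurement: two velocity errors (from optical flow) plus one differenced
# altitude (from LiDAR range corrected by the estimated ground slope)
H = np.eye(3)
R = 0.01 * np.eye(3)
z = np.array([0.5, -0.2, 1.0])
x_new, P_new = ekf_update(x, P, z, H, R)
```

The update pulls the error state toward the aiding measurement while shrinking the covariance, which is the mechanism by which the vision/LiDAR block bounds the free-inertial drift.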

Object Recognition Using Planar Surface Segmentation and Stereo Vision

  • Kim, Do-Wan;Kim, Sung-Il;Won, Sang-Chul
    • 제어로봇시스템학회 학술대회논문집 (ICCAS 2004), pp. 1920-1925, 2004
  • This paper describes a new method for 3D object recognition that uses surface segment-based stereo vision. The position and orientation of objects are identified accurately, enabling a robot to pick them up even when multiple objects are present and partially occluded. Stereo vision is used to obtain 3D information for sensing, and a CAD model with post-processing is used for building object models. Matching is initially performed using the model and object features to roughly calculate the object's position and orientation. Through the fine adjustment step, the accuracy of the position and orientation is improved.
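A fine-adjustment step that refines a rough pose given model-to-scene correspondences can be sketched as a least-squares rigid alignment. The Kabsch-style SVD fit below is a common stand-in, not necessarily the paper's exact algorithm, and the synthetic points are made up:

```python
import numpy as np

def rigid_align(model_pts, scene_pts):
    """Least-squares rigid transform (R, t) mapping model points onto
    matched scene points via SVD (Kabsch method)."""
    mc, sc = model_pts.mean(0), scene_pts.mean(0)
    H = (model_pts - mc).T @ (scene_pts - sc)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # proper rotation
    t = sc - R @ mc
    return R, t

# synthetic check: rotate a model cloud by 30 degrees about z and shift it
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
model = np.random.default_rng(1).normal(size=(20, 3))
scene = model @ R_true.T + np.array([0.1, -0.2, 0.3])
R_est, t_est = rigid_align(model, scene)
```

With exact correspondences the fit recovers the pose to machine precision; with noisy stereo data it returns the least-squares best pose, which is what the adjustment step needs.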


Vision-based technique for bolt-loosening detection in wind turbine tower

  • Park, Jae-Hyung;Huynh, Thanh-Canh;Choi, Sang-Hoon;Kim, Jeong-Tae
    • Wind and Structures, Vol. 21, No. 6, pp. 709-726, 2015
  • In this study, a novel vision-based bolt-loosening monitoring technique is proposed for bolted joints connecting tubular steel segments of the wind turbine tower (WTT) structure. Firstly, a bolt-loosening detection algorithm based on image processing techniques is developed. The algorithm consists of five steps: image acquisition, segmentation of each nut, line detection of each nut, nut angle estimation, and bolt-loosening detection. Secondly, experimental tests are conducted on a lab-scale bolted joint model under various bolt-loosening scenarios. The bolted joint model, which consists of a ring flange and 32 bolt-and-nut sets, simulates the real bolted joint connecting steel tower segments in the WTT. Finally, the feasibility of the proposed vision-based technique is evaluated by monitoring bolt-loosening in the lab-scale bolted joint model.
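The last two steps of the five-step algorithm (nut angle estimation followed by loosening detection) can be sketched as comparing each nut's detected edge orientation against its baseline image, modulo the 60-degree symmetry of a hex nut. The threshold and angle values below are invented for illustration, not the paper's:

```python
def nut_rotation(baseline_deg, current_deg):
    """Smallest rotation between two detected nut-edge orientations,
    accounting for the 60-degree rotational symmetry of a hex nut."""
    d = (current_deg - baseline_deg) % 60.0
    return min(d, 60.0 - d)

def detect_loosening(baseline_angles, current_angles, threshold_deg=5.0):
    """Flag each nut whose orientation changed by more than the threshold,
    taking the baseline angles from the initial (tight) inspection image."""
    return [nut_rotation(b, c) > threshold_deg
            for b, c in zip(baseline_angles, current_angles)]

# three nuts: the middle one has rotated noticeably since the baseline image
flags = detect_loosening([10.0, 30.0, 45.0], [12.0, 98.0, 44.0])
```

The symmetry handling matters: without the modulo-60 reduction, a nut photographed one facet over would be falsely flagged as loosened.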

Vision Transformer를 활용한 비디오 분류 성능 향상을 위한 Fine-tuning 신경망 (Fine-tuning Neural Network for Improving Video Classification Performance Using Vision Transformer)

  • 이광엽;이지원;박태룡
    • 전기전자학회논문지, Vol. 27, No. 3, pp. 313-318, 2023
  • This paper proposes a fine-tuned neural network to improve video classification performance based on the Vision Transformer. The need for deep learning-based real-time video analysis has recently been growing. Conventional CNN models used for image classification have the inherent drawback that correlations between consecutive frames are difficult to analyze. To address this problem, we compare a Vision Transformer with an attention mechanism against a non-local neural network model to find the optimal model. In addition, we apply various fine-tuning methods as transfer learning and propose an optimal fine-tuned network model. In the experiments, the model was trained on the UCF101 dataset, and its performance was then verified by applying transfer learning on the UTA-RLDD dataset.
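The transfer-learning idea underlying fine-tuning (keep a pretrained backbone frozen and retrain only a small head on the target dataset) can be sketched in a framework-free way. The fixed random projection below merely stands in for frozen ViT features, and the synthetic task is invented; none of this reproduces the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# "frozen backbone": a fixed feature extractor standing in for a
# pretrained Vision Transformer whose weights are never updated
W_backbone = 0.1 * rng.normal(size=(16, 8))
def features(x):
    return np.tanh(x @ W_backbone)               # frozen during fine-tuning

def train_head(X, y, lr=0.5, epochs=300):
    """Fine-tune only the classification head (logistic regression)
    on top of the frozen backbone features."""
    w = np.zeros(8)
    F = features(X)                              # backbone outputs, fixed
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-F @ w))         # sigmoid prediction
        w -= lr * F.T @ (p - y) / len(y)         # log-loss gradient step
    return w

# synthetic binary "video-level" task whose labels are expressible
# from the frozen features, so the head alone can learn it
X = rng.normal(size=(200, 16))
w_true = rng.normal(size=8)
y = (features(X) @ w_true > 0).astype(float)
w = train_head(X, y)
acc = np.mean(((features(X) @ w) > 0) == (y > 0.5))
```

Variants of fine-tuning (full, partial, head-only) differ only in which parameter sets receive gradient updates; this sketch shows the head-only extreme.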

터널 막장면 고해상도 DEM(Digital Elevation Model) 생성에 관한 연구 (A Study on Developing a High-Resolution Digital Elevation Model (DEM) of a Tunnel Face)

  • 김광염;김창용;백승한;홍성완;이승도
    • 한국지반공학회 2006년도 춘계 학술발표회 논문집, pp. 931-938, 2006
  • Using a high-resolution stereoscopic imaging system, a digital elevation model (DEM) of the tunnel face is acquired. The images, oriented within a given tunnel coordinate system, are brought into a stereoscopic vision system enabling three-dimensional inspection and evaluation. The digital vision system with a 3D model improves the possibility of prediction ahead of and outside the tunnel face. Interpolated image structures of the rock mass between subsequent stereo images enable modeling of the rock mass surrounding the opening within a short time on site. The models can be used as input to on-site numerical simulations, for comparison of expected and encountered geological conditions, and for the interpretation of geotechnical monitoring results.
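The stereo step behind such a DEM can be sketched with the standard rectified-stereo triangulation relation Z = f·B/d, which turns a disparity map into a grid of depths. The focal length, baseline, and disparity values below are made-up numbers, not the study's calibration:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Classic rectified-stereo triangulation: depth Z = f * B / d.
    A DEM of the tunnel face is essentially this depth grid resampled
    onto the tunnel coordinate system."""
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.nan)      # NaN marks unmatched pixels
    valid = d > 0                        # zero disparity means no stereo match
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

disp = np.array([[8.0, 4.0],
                 [2.0, 0.0]])            # toy 2x2 disparity map in pixels
dem = disparity_to_depth(disp, focal_px=800.0, baseline_m=0.2)
```

Note the inverse relation: small disparities map to large depths, so depth resolution degrades quadratically with range, which is why a short camera-to-face distance helps at the tunnel face.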


독립 비젼 시스템 기반의 축구로봇을 위한 계층적 행동 제어기 (A Hierarchical Motion Controller for Soccer Robots with Stand-alone Vision System)

  • 이동일;김형종;김상준;장재완;최정원;이석규
    • 한국정밀공학회지, Vol. 19, No. 9, pp. 133-141, 2002
  • In this paper, we propose a hierarchical motion controller with a stand-alone vision system to enhance the flexibility of the robot soccer system. In addition, we simplify the model of the robot's dynamic environment using a Petri net and a simple state diagram. Based on the proposed model, we design the robot soccer system with velocity and position controllers organized in a four-level hierarchical structure. Experimental results with the vision system running stand-alone, separately from the host system, show improved controller performance due to the reduced processing time of the vision algorithm.
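The position-over-velocity layering at the bottom of such a hierarchy can be sketched as a cascaded proportional loop; the 1-D kinematics, gains, and limits below are illustrative, and the paper's two upper behavioral levels are not modeled:

```python
def cascaded_step(pos, vel, target, kp_pos=2.0, kp_vel=5.0, v_max=1.0, dt=0.01):
    """One tick of a two-level cascade: the outer position loop commands
    a saturated velocity, and the inner velocity loop commands acceleration."""
    v_cmd = max(-v_max, min(v_max, kp_pos * (target - pos)))  # outer loop
    acc = kp_vel * (v_cmd - vel)                              # inner loop
    vel += acc * dt                       # simple Euler integration
    pos += vel * dt
    return pos, vel

# drive a robot from 0 m to 0.5 m and let the cascade settle
pos, vel = 0.0, 0.0
for _ in range(2000):                     # 20 s of simulated time
    pos, vel = cascaded_step(pos, vel, target=0.5)
```

The saturation on the outer loop is what lets higher behavioral levels retarget the position setpoint at any time without commanding infeasible velocities.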

Real-Time Control of a SCARA Robot by Visual Servoing with the Stereo Vision

  • S. H. Han;Lee, M. H.;K. Son;Lee, M. C.;Park, J. W.;Lee, J. M.
    • 제어로봇시스템학회 1998년도 제13차 학술회의논문집, pp. 238-243, 1998
  • This paper presents a new approach to visual servoing with stereo vision. In order to control the position and orientation of a robot with respect to an object, a new technique is proposed using binocular stereo vision. Stereo vision enables us to calculate an exact image Jacobian not only around a desired location but also at other locations. The suggested technique can guide a robot manipulator to the desired location without a priori knowledge such as the relative distance to the desired location or a model of the object, even when the initial positioning error is large. This paper describes a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results and compared with a conventional method for a SCARA robot.
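The feedback command generation in image-based visual servoing is conventionally the resolved-rate law v = -λ J⁺ e, where J is the image Jacobian that stereo allows to be computed exactly. The Jacobian entries and feature errors below are toy numbers, and the linear feature simulation is a simplification:

```python
import numpy as np

def ibvs_step(J, error, lam=0.5):
    """Classic image-based visual servo law: camera velocity
    v = -lambda * pinv(J) * e, with J the image Jacobian relating
    camera motion to image-feature motion."""
    return -lam * np.linalg.pinv(J) @ error

# toy setup: 4 image-feature coordinates driven by a 3-DOF camera
J = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.1],
              [0.1, 0.0, 1.0],
              [0.3, 0.1, 0.9]])
v_goal = np.array([2.0, -1.0, 3.0])
s = J @ v_goal                           # feature error reachable by camera motion
for _ in range(30):
    v = ibvs_step(J, s)
    s = s + J @ v                        # simulate s_dot = J v over a unit step
final_err = np.linalg.norm(s)
```

Each iteration contracts the reachable part of the feature error by the factor (1 - λ); with an exact Jacobian, as stereo provides, this convergence holds far from the goal, not just near it.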


MEAN Stack 기반의 컴퓨터 비전 플랫폼 설계 (Computer Vision Platform Design with MEAN Stack Basis)

  • 홍선학;조경순;윤진섭
    • 디지털산업정보학회논문지, Vol. 11, No. 3, pp. 1-9, 2015
  • In this paper, we implemented a computer vision platform based on the MEAN stack using a Raspberry Pi 2, an open-source hardware platform. We experimented with face recognition and with logging temperature and humidity sensor data over WiFi on the Raspberry Pi 2, and we fabricated the platform enclosure directly with 3D printing. We used an OpenCV face recognition algorithm based on the Haar cascade feature-extraction machine learning method, and extended the platform with Bluetooth wireless communication to interface with Android mobile devices. The vision platform therefore identifies face characteristics scanned with the Pi camera while gathering temperature and humidity sensor data in an IoT environment. We chose MongoDB to improve the platform's performance, because working with MongoDB is more akin to working with objects in a programming language than with a conventional database. In future work, we plan to enhance the platform with cloud functionality.
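The key trick inside the Haar cascade detector mentioned above, constant-time rectangle sums via an integral image, can be sketched independently of OpenCV. The two-rectangle feature is one of the standard Viola-Jones primitives; the 6x6 test image is synthetic:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r, :c]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) via four table lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_vertical(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: bottom half minus top half,
    responding to horizontal intensity edges."""
    half = h // 2
    return rect_sum(ii, r + half, c, half, w) - rect_sum(ii, r, c, half, w)

img = np.zeros((6, 6))
img[3:, :] = 1.0                          # dark top, bright bottom: an edge
ii = integral_image(img)
resp = haar_two_rect_vertical(ii, 0, 0, 6, 6)
```

Because every feature evaluation costs a fixed handful of lookups regardless of window size, a cascade can scan thousands of windows per frame, which is what makes Haar-based face detection feasible on hardware as modest as a Raspberry Pi.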