• Title/Abstract/Keyword: Vision-based

Search results: 3,444 (processing time: 0.028 s)

PC를 기반으로한 VISION 기능을 갖는 ROBOT 제어기의 개발 (Development of a PC-based Robot Controller with Vision Capability)

  • 서일홍;김재현;정중기;노병옥
    • 한국정밀공학회:학술대회논문집
    • /
    • 한국정밀공학회 1994 Fall Conference Proceedings
    • /
    • pp.537-542
    • /
    • 1994
  • In this study, widely known vision functions were adapted to fit a robot controller, and, so that users already familiar with various windowing systems can handle the robot more easily, these basic functions were implemented as a vision language on X-Windows. By implementing this vision language alongside the existing basic motion language and I/O language, a vision-based intelligent robot system was constructed.


산업용Wire Harness Vision 검사 장비 개발 (Development of Vision Inspection System for Defects of Industrial Wire Harness)

  • 한승철
    • 한국산업융합학회 논문집
    • /
    • Vol.11 No.4
    • /
    • pp.189-194
    • /
    • 2008
  • This paper presents a vision-based inspection system for defects of industrial wire harnesses. Five types of nonconformity factors are considered: barrel deformation, projected wire, overcoating, lack of wire length, and over-stripping. The developed inspection algorithm has been tested on real specimens from a wire harness factory. Experimental results show that the inspection algorithm has good performance.
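One of the listed nonconformities, wire-strip length, can be illustrated with a minimal mask-based check; the mask format, pixel scale, and tolerance thresholds below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def check_strip_length(mask, px_per_mm, min_mm=4.0, max_mm=6.0):
    """Check the stripped-wire length in a binary mask (1 = bare wire).

    Returns (length_mm, ok). Thresholds are illustrative only.
    """
    cols = np.where(mask.any(axis=0))[0]      # image columns containing bare wire
    if cols.size == 0:
        return 0.0, False                     # no bare wire found: reject
    length_mm = (cols[-1] - cols[0] + 1) / px_per_mm
    return length_mm, min_mm <= length_mm <= max_mm

# Synthetic example: bare wire spans 50 columns at 10 px/mm -> 5.0 mm, within tolerance
mask = np.zeros((20, 100), dtype=np.uint8)
mask[8:12, 25:75] = 1
length, ok = check_strip_length(mask, px_per_mm=10.0)
```

A real system would first segment the bare wire from a camera image; here the binary mask is given directly.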


다수의 비전 센서와 INS를 활용한 랜드마크 기반의 통합 항법시스템 (INS/Multi-Vision Integrated Navigation System Based on Landmark)

  • 김종명;이현재
    • 한국항공우주학회지
    • /
    • Vol.45 No.8
    • /
    • pp.671-677
    • /
    • 2017
  • This paper presents an INS/multi-vision integrated navigation system to improve the performance of integrated navigation systems that combine an Inertial Navigation System (INS) with vision sensors. With a conventional single sensor or stereo vision, the filter can diverge when the number of measured landmarks is small. To solve this problem, three vision sensors are installed at 0°, 120°, and -120° with respect to the body frame, and numerical simulation verifies that performance improves compared with the single-sensor case.
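The three-camera mounting described above can be sketched as a simple landmark-visibility check; the 60° half field of view is an assumption for illustration (it gives full 360° coverage with overlap at the seams), not a parameter from the paper:

```python
CAMERA_YAWS_DEG = [0.0, 120.0, -120.0]  # mounting yaws from the paper
HALF_FOV_DEG = 60.0                     # assumed half field of view (not from the paper)

def visible_cameras(bearing_deg):
    """Return indices of cameras whose field of view contains a landmark bearing."""
    seen = []
    for i, yaw in enumerate(CAMERA_YAWS_DEG):
        # wrap the angular difference into [-180, 180)
        diff = (bearing_deg - yaw + 180.0) % 360.0 - 180.0
        if abs(diff) <= HALF_FOV_DEG:
            seen.append(i)
    return seen
```

With this geometry every bearing is seen by at least one camera, which is what keeps the landmark count from dropping to zero and the filter from diverging.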

강체 이동타겟 추적을 위한 일괄처리방법을 이용한 로봇비젼 제어기법 개발 (Development of Robot Vision Control Schemes based on Batch Method for Tracking of Moving Rigid Body Target)

  • 김재명;최철웅;장완식
    • 한국기계가공학회지
    • /
    • Vol.17 No.5
    • /
    • pp.161-172
    • /
    • 2018
  • This paper proposes a robot vision control method for tracking a moving rigid-body target, using a vision system model that can actively compensate for camera parameters even when the relative position between the camera and the robot, as well as the focal length and posture of the camera, change. The proposed scheme uses a batch method that processes all the vision data acquired at each moving point of the robot. Two cases are considered: one gives equal weight to all acquired data, while the other gives greater weight to recent data acquired near the target. Finally, using the two proposed schemes, experiments were performed to estimate the position of a moving rigid-body target whose spatial position is unknown and for which only vision data values are available. The efficiency of each control scheme is evaluated by comparing accuracy across the experimental results.
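The two weighting cases of the batch method can be sketched as a weighted least-squares solve over all acquired data; the toy regression matrix and weight profile below are illustrative, not the paper's vision-system model:

```python
import numpy as np

def batch_estimate(A, y, weights=None):
    """Weighted least-squares batch estimate x minimizing ||W^(1/2)(Ax - y)||.

    weights=None gives the equal-weight case; a weight vector that grows
    toward the most recent measurements gives the recency-weighted case.
    """
    if weights is None:
        weights = np.ones(len(y))
    W = np.diag(weights)
    # Normal equations: (A^T W A) x = A^T W y
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# Toy example: fit a line through noiseless samples under both weightings
t = np.arange(6, dtype=float)
A = np.column_stack([t, np.ones_like(t)])
y = 2.0 * t + 1.0
x_equal  = batch_estimate(A, y)                   # equal weights for all data
x_recent = batch_estimate(A, y, weights=t + 1.0)  # emphasize recent data
```

On noisy data the two weightings would differ; here both recover the exact slope and intercept.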

Chinese-clinical-record Named Entity Recognition using IDCNN-BiLSTM-Highway Network

  • Tinglong Tang;Yunqiao Guo;Qixin Li;Mate Zhou;Wei Huang;Yirong Wu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol.17 No.7
    • /
    • pp.1759-1772
    • /
    • 2023
  • Chinese named entity recognition (NER) is a challenging task that seeks to find, recognize, and classify various types of information elements in unstructured text. Because Chinese text has no natural word boundaries like the spaces in English text, Chinese named entity identification is much more difficult. At present, most deep learning based NER models are developed using a bidirectional long short-term memory network (BiLSTM), yet there is still room to improve their performance. To further improve performance on Chinese NER tasks, we propose a new NER model, IDCNN-BiLSTM-Highway, which combines the BiLSTM, the iterated dilated convolutional neural network (IDCNN), and the highway network. In our model, the IDCNN is used to achieve multiscale context aggregation over a long sequence of words. The highway network is used to effectively connect different layers, allowing information to pass through network layers smoothly without attenuation. Finally, the globally optimal tag sequence is obtained by introducing a conditional random field (CRF). The experimental results show that, compared with other popular deep learning-based NER models, our model shows superior performance on two Chinese NER data sets, Resume and Yidu-S4k; the F1-scores are 94.98 and 77.59, respectively.
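The highway connection the abstract describes (letting information pass through layers without attenuation) reduces to a gated mix of a transformed path and an identity path, y = t·h + (1−t)·x. A minimal NumPy sketch with illustrative shapes, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """Highway layer: y = t * h + (1 - t) * x.

    h = tanh(W_h x + b_h) is the transformed path,
    t = sigmoid(W_t x + b_t) is the transform gate; (1 - t) carries x through.
    """
    h = np.tanh(W_h @ x + b_h)
    t = sigmoid(W_t @ x + b_t)
    return t * h + (1.0 - t) * x

# With a strongly negative gate bias the layer approaches the identity,
# which is what lets information flow through deep stacks without attenuation.
x = np.array([0.5, -0.3, 0.8])
W = np.zeros((3, 3))
y = highway_layer(x, W, np.zeros(3), W, np.full(3, -10.0))
```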

OnBoard Vision Based Object Tracking Control Stabilization Using PID Controller

  • Mariappan, Vinayagam;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International Journal of Advanced Culture Technology
    • /
    • Vol.4 No.4
    • /
    • pp.81-86
    • /
    • 2016
  • In this paper, we propose a simple and effective vision-based tracking controller design for autonomous object tracking using a multicopter. A multicopter-based automatic tracking system is usually unstable when the object moves: the tracking process cannot determine the object's position exactly, so when the object moves the system cannot immediately follow it along the direction of motion and instead searches for the object again from its home position. In this paper, PID control is used to improve the stability of the tracking system, so that object tracking becomes more stable than before, as can be seen from the tracking error. A computer vision and control strategy is applied to detect a diverse set of moving objects on a Raspberry Pi based platform, and a software-defined PID controller is designed to control the yaw, throttle, and pitch of the multicopter in real time. Finally, based on a series of experimental results, we conclude that PID control makes the tracking system more stable in real time.
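The per-axis PID loop can be sketched as follows; the gains, first-order plant model, and setpoint are illustrative assumptions, not the paper's tuning:

```python
class PID:
    """Discrete PID controller of the kind applied per axis (yaw, pitch, throttle)."""

    def __init__(self, kp, ki, kd, setpoint=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant toward the image-center setpoint
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=0.0)
pos = 1.0                      # initial pixel offset of the tracked object
for _ in range(500):
    u = pid.update(pos, dt=0.02)
    pos += u * 0.02            # simple plant: velocity proportional to command
```

The integral term removes the steady-state offset that causes the "restart from home position" behavior the abstract describes, while the derivative term damps the response to sudden object motion.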

엘리트 유전 알고리즘을 이용한 비젼 기반 로봇의 위치 제어 (Vision Based Position Control of a Robot Manipulator Using an Elitist Genetic Algorithm)

  • 박광호;김동준;기석호;기창두
    • 한국정밀공학회지
    • /
    • Vol.19 No.1
    • /
    • pp.119-126
    • /
    • 2002
  • In this paper, we present a new approach based on an elitist genetic algorithm for the task of aligning the position of a robot gripper using CCD cameras. The vision-based control scheme for aligning the gripper with the desired position is implemented using image information. The relationship between the camera-space location and the robot joint coordinates is estimated using a camera-space parameter model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation. To find the joint angles of a robot manipulator for reaching the target position in image space, we apply an elitist genetic algorithm instead of a nonlinear least-squares error method. Since a GA employs parallel search, it performs well on optimization problems. To improve convergence speed, a real-coding method and geometric constraint conditions are used. Experiments are carried out to demonstrate the effectiveness of vision-based control using an elitist genetic algorithm with a real-coding method.
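A minimal real-coded elitist GA of the kind the abstract describes might look like this; the operators, rates, and toy objective (standing in for the image-space position error) are illustrative assumptions, not the paper's implementation:

```python
import random

def elitist_ga(fitness, dim, bounds, pop_size=30, gens=100, elite=2,
               mut_rate=0.2, mut_scale=0.1, seed=0):
    """Minimal real-coded elitist GA minimizing `fitness`.

    The best `elite` individuals survive unchanged each generation; the rest
    are produced by tournament selection, arithmetic crossover, and bounded
    Gaussian mutation on the real-valued genes.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:elite]                                 # elitism: keep the best as-is
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)     # tournament selection
            p2 = min(rng.sample(pop, 3), key=fitness)
            a = rng.random()                              # arithmetic crossover
            child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            child = [min(hi, max(lo, g + rng.gauss(0, mut_scale)))
                     if rng.random() < mut_rate else g for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy objective standing in for the image-space position error
best = elitist_ga(lambda g: sum(x * x for x in g), dim=3, bounds=(-1.0, 1.0))
```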

얇은막대 배치작업에 대한 N-R 과 EKF 방법을 이용하여 개발한 로봇 비젼 제어알고리즘의 평가 (Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKF Methods for Slender Bar Placement)

  • 손재경;장완식;홍성문
    • 대한기계학회논문집A
    • /
    • Vol.37 No.4
    • /
    • pp.447-459
    • /
    • 2013
  • Applying a vision system on a real shop floor involves many problems that must be solved: the accuracy of the kinematic model underlying the robot vision control algorithm, compensation for the camera's focal length and orientation while the robot moves, and an understanding of the mapping from 3-D physical coordinates to 2-D camera coordinates. The vision system model proposed in this paper allows control even when the relative position between camera and robot is unknown, and uses six camera parameters to address the camera calibration problem. Using this model, the N-R and EKF methods were applied to develop robot vision control algorithms. Finally, the position accuracy and data processing time of the algorithms developed with the N-R and EKF methods were compared by performing a slender-bar placement task.
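The N-R branch of the comparison is a standard Newton-Raphson iteration. A generic sketch with a finite-difference Jacobian and a toy two-equation system (the paper applies the iteration to its six-parameter vision system model):

```python
import numpy as np

def newton_raphson(f, x0, tol=1e-10, max_iter=50, eps=1e-6):
    """Newton-Raphson root finding with a finite-difference Jacobian.

    Iterates x <- x - J(x)^-1 f(x) until ||f(x)|| < tol.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = np.asarray(f(x))
        if np.linalg.norm(fx) < tol:
            break
        # Finite-difference Jacobian, one column per parameter
        J = np.empty((fx.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (np.asarray(f(xp)) - fx) / eps
        x = x - np.linalg.solve(J, fx)
    return x

# Toy nonlinear system with root (1, 2)
root = newton_raphson(lambda v: np.array([v[0]**2 + v[1] - 3.0,
                                          v[0] + v[1]**2 - 5.0]),
                      x0=[0.5, 1.5])
```

Unlike the EKF, this batch iteration has no notion of measurement noise, which is one source of the accuracy/processing-time trade-off the paper evaluates.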

Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu;Kim, Jeong-ho;Min, Chan-oh;Han, Dong-in;Cho, Kyeum-rae;Lee, Dae-woo;Seong, kie-jeong
    • International Journal of Aeronautical and Space Sciences
    • /
    • Vol.16 No.4
    • /
    • pp.581-589
    • /
    • 2015
  • This paper describes a monocular vision-based formation flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of a leader aircraft, a monocular camera installed in the front of the follower aircraft captures an image of the leader, and position and attitude are measured from the image using the KLT feature-point tracker and the POSIT algorithm. To verify the feasibility of this vision processing algorithm, a field test was performed using two light sport aircraft, and our experimental results show that the proposed monocular vision-based measurement algorithm is feasible. Performance verification of the proposed formation flight technology was carried out using the X-Plane flight simulator. The formation flight simulation system consists of two PCs playing the roles of leader and follower. When the leader flies according to the user's commands, the follower aircraft tracks the leader using the designed guidance and a PI control law, with all information about the leader measured using monocular vision. The simulation shows that guidance using relative attitude information tracks the leader aircraft better than guidance without attitude information, with absolute average errors for the relative position of 2.88 m (X-axis), 2.09 m (Y-axis), and 0.44 m (Z-axis).
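A drastically simplified version of the monocular measurement step, using only a pinhole model and an assumed known leader wingspan (the paper's full pipeline uses KLT feature tracking and POSIT for the complete relative pose):

```python
import math

def monocular_relative_range(span_m, span_px, focal_px):
    """Estimate range to the leader from its apparent wingspan (pinhole model)."""
    return focal_px * span_m / span_px

def pixel_to_bearing(u, cx, focal_px):
    """Horizontal bearing (radians) of the leader from its image column."""
    return math.atan2(u - cx, focal_px)

# Assumed numbers: a 9 m wingspan appearing 90 px wide at f = 800 px
rng = monocular_relative_range(9.0, 90.0, 800.0)
brg = pixel_to_bearing(1440.0, cx=640.0, focal_px=800.0)
```

Range plus bearing gives the relative position used by the guidance law; POSIT additionally recovers the leader's attitude from the tracked feature points.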

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target;a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • 제어로봇시스템학회:학술대회논문집
    • /
    • ICCAS 2005 (제어로봇시스템학회)
    • /
    • pp.68-73
    • /
    • 2005
  • Various techniques have been proposed for the detection and tracking of targets in order to develop real-world computer vision systems, e.g., visual surveillance systems, intelligent transport systems (ITSs), and so forth. In particular, a distributed vision system is required to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information, along with real-time visual tracking of multiple targets. We construct the ubiquitous vision system as a multiagent system by regarding each vision sensor as an agent (a vision agent), and solve the problem of matching target identities during handover with a protocol-based approach. For this we propose the identified contract net (ICN) protocol, which is independent of the number of vision agents and requires no calibration between them, thereby improving the speed, scalability, and modularity of the system. We applied the ICN protocol in the ubiquitous vision system we constructed; across several experiments the system showed reliable results and the ICN protocol operated successfully.
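The award step of a contract-net style handover can be sketched as follows; the agent names, feature tuples, and match score are hypothetical, and this omits the ICN protocol's actual message exchange:

```python
def icn_handover(target_desc, agents, match):
    """Contract-net style handover sketch (illustrative only).

    The agent tracking a target broadcasts the target's identity description;
    each neighboring vision agent bids with a match score for what it currently
    sees, and the target identity is awarded to the best-matching bidder.
    """
    bids = {name: match(target_desc, seen) for name, seen in agents.items()}
    winner = max(bids, key=bids.get)
    return winner, bids

# Hypothetical identity descriptions: (hue, height) feature tuples per agent
def match(desc, seen):
    return -abs(desc[0] - seen[0]) - abs(desc[1] - seen[1])

agents = {"agent_A": (0.30, 1.74), "agent_B": (0.31, 1.75), "agent_C": (0.90, 1.60)}
winner, _ = icn_handover((0.32, 1.76), agents, match)
```

Because the award depends only on broadcast descriptions and local observations, no geometric calibration between the vision agents is required, which matches the scalability claim in the abstract.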
