• Title/Summary/Keyword: Vision-based


Development of a PC-based Robot Controller with Vision Functions (PC를 기반으로한 VISION 기능을 갖는 ROBOT 제어기의 개발)

  • 서일홍;김재현;정중기;노병옥
    • Proceedings of the Korean Society of Precision Engineering Conference / 1994.10a / pp.537-542 / 1994
  • In this study, widely known vision functions are adapted to a robot controller, and, so that users familiar with various window systems can handle the robot more easily, these basic functions are implemented as a vision language on X-Windows. By implementing this vision language alongside the existing motion language and I/O language, a vision-based intelligent robot system is constructed.


Development of Vision Inspection System for Defects of Industrial Wire Harness (산업용Wire Harness Vision 검사 장비 개발)

  • Han, Seung-Chul
    • Journal of the Korean Society of Industry Convergence / v.11 no.4 / pp.189-194 / 2008
  • This paper presents a vision-based inspection system for defects of industrial wire harnesses. Five types of nonconformity factors are considered: barrel deformation, projected wire, overcoating, lack of wire length, and over-stripping. The developed inspection algorithm has been tested on real specimens from a wire harness factory. Experimental results show that the inspection algorithm has good performance.


INS/Multi-Vision Integrated Navigation System Based on Landmark (다수의 비전 센서와 INS를 활용한 랜드마크 기반의 통합 항법시스템)

  • Kim, Jong-Myeong;Leeghim, Henzeh
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.45 no.8 / pp.671-677 / 2017
  • A new INS/Vision integrated navigation system using multi-vision sensors is addressed in this paper. When the total number of landmarks measured by the vision sensor is smaller than the allowable number, the navigation filter can diverge. To prevent this problem, a multi-vision concept is applied to expand the field of view so that a reliable number of landmarks is always guaranteed. In this work, the orientations of the installed cameras are 0, 120, and -120 degrees with respect to the body frame to improve observability. Finally, the proposed technique is verified by numerical simulation.
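The coverage argument in the abstract above can be sketched numerically: with cameras mounted at 0 and ±120 degrees, each covering an assumed 120-degree field of view (the paper does not state the FOV; this value is an illustrative assumption that makes the rig cover all directions), the combined landmark count never drops to zero as long as landmarks exist somewhere around the vehicle.

```python
# Illustrative sketch, not the paper's implementation: count landmarks
# visible to each of three cameras mounted at 0, +120, and -120 degrees
# in the body frame, each with an assumed 120-degree field of view.

CAMERA_YAWS_DEG = [0.0, 120.0, -120.0]
FOV_DEG = 120.0  # assumed per-camera field of view

def visible_counts(landmark_bearings_deg):
    """Count landmarks seen by each camera, given bearings in the body frame."""
    counts = []
    for yaw in CAMERA_YAWS_DEG:
        n = 0
        for b in landmark_bearings_deg:
            # smallest signed angular difference between bearing and camera axis
            diff = (b - yaw + 180.0) % 360.0 - 180.0
            if abs(diff) <= FOV_DEG / 2.0:
                n += 1
        counts.append(n)
    return counts

bearings = [10.0, 95.0, 170.0, -150.0, -40.0]
counts = visible_counts(bearings)
print(counts, sum(counts))  # → [2, 2, 1] 5
```

Because the three assumed fields of view tile the full 360 degrees, every landmark bearing is counted by at least one camera, which is the property that keeps the filter's measurement count from collapsing.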

Development of Robot Vision Control Schemes based on Batch Method for Tracking of Moving Rigid Body Target (강체 이동타겟 추적을 위한 일괄처리방법을 이용한 로봇비젼 제어기법 개발)

  • Kim, Jae-Myung;Choi, Cheol-Woong;Jang, Wan-Shik
    • Journal of the Korean Society of Manufacturing Process Engineers / v.17 no.5 / pp.161-172 / 2018
  • This paper proposes a robot vision control method to track a moving rigid-body target, using a vision system model that can actively compensate camera parameters even when the relative position between the camera and the robot, the focal length, and the posture of the camera change. The proposed scheme uses a batch method that processes all the vision data acquired at each moving point of the robot, in two variants: one gives equal weight to all acquired data, while the other gives greater weight to the recent data acquired near the target. Finally, using the two proposed schemes, experiments were performed to estimate the position of a moving rigid-body target whose spatial position is unknown and for which only vision data are available. The efficiency of each control scheme is evaluated by comparing estimation accuracy across the experimental results.
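The two batch weighting variants described above can be illustrated with a minimal sketch, assuming a scalar weighted least-squares estimate; the decay factor and function names are illustrative choices, not the paper's actual camera model.

```python
# Hypothetical sketch of the two batch weighting schemes: estimate a
# target position from all vision measurements, once with equal weights
# and once with weights favouring the most recent data.

def weighted_estimate(measurements, weights):
    """Weighted least-squares estimate of a constant: the weighted mean."""
    return sum(w * m for w, m in zip(weights, measurements)) / sum(weights)

def equal_weights(n):
    return [1.0] * n

def recency_weights(n, decay=0.5):
    # older samples (lower index) get geometrically smaller weight
    return [decay ** (n - 1 - i) for i in range(n)]

# a target drifting toward 10.0: recency weighting follows the later samples
meas = [2.0, 4.0, 6.0, 8.0, 10.0]
eq = weighted_estimate(meas, equal_weights(len(meas)))
rc = weighted_estimate(meas, recency_weights(len(meas)))
print(eq, rc)  # equal-weight mean is 6.0; recency estimate lies closer to 10.0
```

For a moving target the equal-weight estimate lags behind, while the recency-weighted variant tracks the latest measurements, which mirrors the trade-off the two proposed schemes are designed to compare.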

Chinese-clinical-record Named Entity Recognition using IDCNN-BiLSTM-Highway Network

  • Tinglong Tang;Yunqiao Guo;Qixin Li;Mate Zhou;Wei Huang;Yirong Wu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1759-1772 / 2023
  • Chinese named entity recognition (NER) is a challenging task that seeks to find, recognize, and classify various types of information elements in unstructured text. Because Chinese text has no natural word boundaries like the spaces in English text, Chinese named entity identification is much more difficult. At present, most deep learning based NER models are built on a bidirectional long short-term memory network (BiLSTM), yet their performance still leaves room for improvement. To further improve performance on Chinese NER tasks, we propose a new NER model, IDCNN-BiLSTM-Highway, which combines the BiLSTM, the iterated dilated convolutional neural network (IDCNN), and the highway network. In our model, IDCNN is used to achieve multiscale context aggregation over a long sequence of words. The highway network is used to effectively connect different layers, allowing information to pass through network layers smoothly without attenuation. Finally, the globally optimal tag sequence is obtained by introducing a conditional random field (CRF). The experimental results show that, compared with other popular deep learning-based NER models, our model shows superior performance on two Chinese NER data sets, Resume and Yidu-S4k, with F1-scores of 94.98 and 77.59, respectively.
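The highway connection the abstract credits with passing information "without attenuation" can be shown with a minimal scalar sketch; the tanh transform and the gate bias here are illustrative assumptions, not the paper's layer dimensions.

```python
import math

# Minimal sketch of a highway unit: a transform gate t blends a
# transformed signal h(x) with the raw input x, so the layer can fall
# back to a near-identity mapping when the gate is closed.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def highway(x, w_h, b_h, w_t, b_t):
    """One scalar highway unit: y = t*h(x) + (1 - t)*x."""
    h = math.tanh(w_h * x + b_h)   # candidate transform
    t = sigmoid(w_t * x + b_t)     # transform gate in (0, 1)
    return t * h + (1.0 - t) * x   # gated mix of transform and carry

# with a strongly negative gate bias, the unit is near-identity:
# the input passes through almost unchanged
y = highway(0.7, w_h=1.2, b_h=0.0, w_t=0.0, b_t=-10.0)
print(round(y, 4))  # ≈ 0.7
```

This carry path is what lets stacked layers connect smoothly: a closed gate simply forwards the input, which is the attenuation-free behaviour the model relies on.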

OnBoard Vision Based Object Tracking Control Stabilization Using PID Controller

  • Mariappan, Vinayagam;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International Journal of Advanced Culture Technology / v.4 no.4 / pp.81-86 / 2016
  • In this paper, we propose a simple and effective vision-based tracking controller design for autonomous object tracking using a multicopter. A multicopter-based automatic tracking system is usually unstable when the object moves: the tracking process cannot determine the object's position exactly, so when the object moves the system cannot follow it along its direction of movement and instead searches for the object again from its home position. In this paper, PID control is used to improve the stability of the tracking system, so that object tracking becomes more stable than before, as can be seen from the tracking error. A computer vision and control strategy is applied to detect a diverse set of moving objects on a Raspberry Pi based platform, and a software-defined PID controller is designed to control the yaw, throttle, and pitch of the multicopter in real time. Finally, based on a series of experimental results, we conclude that the PID controller makes the tracking system more stable in real time.
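A minimal discrete PID sketch in the spirit of the controller above: the tracking error (for instance, the pixel offset between the object and the image centre) drives a correction command. The gains, timestep, and the toy plant are illustrative assumptions, not the paper's tuned values.

```python
# Hypothetical discrete PID controller driving a 1-D tracking error to zero.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                    # accumulated error
        derivative = (error - self.prev_error) / self.dt    # error rate
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# toy closed loop: the command proportionally reduces the object offset
pid = PID(kp=0.6, ki=0.1, kd=0.05, dt=0.1)
offset = 50.0  # pixels between object and image centre
for _ in range(100):
    offset -= pid.update(offset) * 0.1
print(round(offset, 2))
```

The integral term removes steady-state offset and the derivative term damps overshoot when the object moves, which is the stabilising effect the abstract reports over a proportional-only tracker.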

Vision Based Position Control of a Robot Manipulator Using an Elitist Genetic Algorithm (엘리트 유전 알고리즘을 이용한 비젼 기반 로봇의 위치 제어)

  • Park, Kwang-Ho;Kim, Dong-Joon;Kee, Seok-Ho;Kee, Chang-Doo
    • Journal of the Korean Society for Precision Engineering / v.19 no.1 / pp.119-126 / 2002
  • In this paper, we present a new approach based on an elitist genetic algorithm for the task of aligning the position of a robot gripper using CCD cameras. The vision-based control scheme for aligning the gripper with the desired position is implemented using image information. The relationship between the camera-space location and the robot joint coordinates is estimated using a camera-space parameter model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation. To find the joint angles of a robot manipulator for reaching the target position in the image space, we apply an elitist genetic algorithm instead of a nonlinear least-squares error method. Since a GA employs parallel search, it performs well on optimization problems. To improve convergence speed, a real coding method and geometric constraint conditions are used. Experiments are carried out to demonstrate the effectiveness of vision-based control using an elitist genetic algorithm with a real coding method.
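The key ideas named above, real coding and elitism, can be sketched as follows. The fitness function (image-space distance to a target), the GA parameters, and the selection/crossover operators are illustrative assumptions, not the paper's formulation over joint angles.

```python
import random

# Hypothetical elitist GA with real-coded individuals: minimise the
# image-space distance between a gripper position and a target.

random.seed(0)
TARGET = (3.0, -1.5)  # assumed desired position in "image space"

def fitness(ind):
    # negative squared distance to the target: higher is better
    return -((ind[0] - TARGET[0]) ** 2 + (ind[1] - TARGET[1]) ** 2)

def evolve(pop, generations=200, mut_scale=0.5):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]                  # elitism: best individual survives unchanged
        next_pop = [elite]
        while len(next_pop) < len(pop):
            a, b = random.sample(pop[: len(pop) // 2], 2)  # truncation selection
            child = [(x + y) / 2.0 for x, y in zip(a, b)]  # arithmetic crossover
            child = [g + random.gauss(0.0, mut_scale) for g in child]  # mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

pop = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(30)]
best = evolve(pop)
print(best, -fitness(best))
```

Because real-coded genes are mutated directly with Gaussian noise, no binary encoding or decoding step is needed, and elitism guarantees the best solution found is never lost between generations, the two properties the abstract credits for faster convergence.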

Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKF Methods for Slender Bar Placement (얇은막대 배치작업에 대한 N-R 과 EKF 방법을 이용하여 개발한 로봇 비젼 제어알고리즘의 평가)

  • Son, Jae Kyung;Jang, Wan Shik;Hong, Sung Mun
    • Transactions of the Korean Society of Mechanical Engineers A / v.37 no.4 / pp.447-459 / 2013
  • Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematic model in the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve this correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKF methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the N-R and EKF methods are compared experimentally by making the robot perform a slender-bar placement task.
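As background for the comparison above, the N-R (Newton-Raphson) update can be shown in a minimal 1-D sketch: iteratively solve f(x) = 0 via x ← x − f(x)/f′(x). The residual function below is illustrative; the paper applies the method to its six-parameter camera model, not to this toy equation.

```python
# Hypothetical 1-D Newton-Raphson sketch: the iterative update used in
# batch parameter estimation, applied here to a simple scalar residual.

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)   # Newton step: residual over its derivative
        x -= step
        if abs(step) < tol:   # stop once the update is negligible
            break
    return x

# example: recover the parameter x satisfying x**3 = 8 (root x = 2)
f = lambda x: x ** 3 - 8.0
df = lambda x: 3.0 * x ** 2
root = newton_raphson(f, df, x0=3.0)
print(round(root, 6))  # → 2.0
```

The quadratic convergence of this update is why N-R is attractive for batch estimation, while the EKF alternative in the paper instead processes measurements recursively.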

Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu;Kim, Jeong-ho;Min, Chan-oh;Han, Dong-in;Cho, Kyeum-rae;Lee, Dae-woo;Seong, kie-jeong
    • International Journal of Aeronautical and Space Sciences / v.16 no.4 / pp.581-589 / 2015
  • This paper describes a monocular vision-based formation flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of a leader aircraft, a monocular camera installed in the front of the follower aircraft captures an image of the leader, and position and attitude are estimated from the image using the KLT feature-point tracker and the POSIT algorithm. To verify the feasibility of this vision processing algorithm, a field test was performed using two light sport aircraft, and the experimental results show that the proposed monocular vision-based measurement algorithm is feasible. Performance verification of the proposed formation flight technology was carried out using the X-Plane flight simulator. The formation flight simulation system consists of two PCs playing the roles of leader and follower. When the leader flies by user command, the follower tracks the leader using the designed guidance and a PI control law, with all information about the leader measured by monocular vision. The simulation shows that guidance using relative attitude information tracks the leader aircraft better than guidance without it, with average absolute errors for the relative position of 2.88 m (X-axis), 2.09 m (Y-axis), and 0.44 m (Z-axis).

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target; a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.68-73 / 2005
  • Various techniques have been proposed for the detection and tracking of targets in order to develop real-world computer vision systems, e.g., visual surveillance systems and intelligent transport systems (ITSs). In particular, the idea of a distributed vision system is needed to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information, and real-time visual tracking of multiple targets. We construct the ubiquitous vision system as a multiagent system by regarding each vision sensor as an agent (a vision agent). The matching problem for the identity of a target during handover is then solved by a protocol-based approach, for which we propose the identified contract net (ICN) protocol. The ICN protocol is independent of the number of vision agents and requires no calibration between them, which improves the speed, scalability, and modularity of the system. We applied the ICN protocol in the ubiquitous vision system we constructed for experiments. The system shows reliable results, and the ICN protocol operated successfully throughout several experiments.
