• Title/Abstract/Keywords: vision-based techniques

Search results: 290

A vision-based robotic assembly system

  • Oh, Sang-Rok;Lim, Joonhong;Shin, You-Shik;Bien, Zeungnam
    • Institute of Control, Robotics and Systems (ICROS), Conference Proceedings / Proceedings of the 1987 Korean Automatic Control Conference (Korea-Japan Joint Edition); Korea Institute of Technology, Chungnam; 16-17 Oct. 1987 / pp.770-775 / 1987
  • In this paper, design and development experiences of a vision-based robotic assembly system for electronic components are described. The overall system consists of three subsystems, each of which employs a 16-bit MC68000 microprocessor: a supervisory controller, a real-time vision system, and a servo system. The three microprocessors are interconnected through a time-shared common memory bus with a hardwired bus-arbitration scheme and operate in a master-slave configuration in which each slave is functionally fixed in terms of software. With this system architecture, the following were developed and implemented in this research: (i) a system programming language, called 'CLRC', for the man-machine interface, including robot motion and vision primitives; (ii) a real-time vision system using a hardwired chain coder; and (iii) high-precision servo techniques for high-speed DC motors and high-speed stepping motors. The proposed control system was implemented and tested successfully in real time.
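The abstract above mentions a real-time vision system built around a hardwired chain coder. As a purely illustrative software counterpart, the sketch below computes a Freeman 8-direction chain code for the outer boundary of a binary object with OpenCV (version 4+ assumed); the paper's hardwired implementation is not described, so nothing here reflects its actual design.

```python
import cv2
import numpy as np

# Freeman 8-direction codes, counter-clockwise from east, in image coordinates
# (y grows downward, so dy = -1 means "up" in the image).
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def freeman_chain_code(mask: np.ndarray) -> list:
    """Return the Freeman chain code of the largest object's outer contour."""
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return []
    boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)  # ordered (x, y) pixels
    if len(boundary) < 2:
        return []
    code = []
    for (x0, y0), (x1, y1) in zip(boundary, np.roll(boundary, -1, axis=0)):
        code.append(DIRS[(int(x1 - x0), int(y1 - y0))])           # step between neighbours
    return code
```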

A VISION SYSTEM IN ROBOTIC WELDING

  • Absi Alfaro, S. C.
    • The Korean Welding and Joining Society, Conference Proceedings / 2002 Proceedings of the International Welding/Joining Conference-Korea / pp.314-319 / 2002
  • The Automation and Control Group at the University of Brasilia is developing an automatic welding station based on an industrial robot and a controllable welding machine. Several techniques have been applied to improve the quality of the welded joints. This paper deals with the implementation of a laser-based computer vision system to guide the robotic manipulator during the welding process. Currently the robot is taught to follow a prescribed trajectory, which is recorded and repeated over and over, relying on the repeatability specification from the robot manufacturer. The objective of the computer vision system is to monitor the actual trajectory followed by the welding torch and to evaluate deviations from the desired trajectory. The position errors are then transferred to a control algorithm that actuates the robotic manipulator and cancels the trajectory errors. The computer vision system consists of a CCD camera attached to the welding torch, a laser-emitting-diode circuit, a PC-based frame grabber card, and a computer vision algorithm. The laser circuit establishes a sharp luminous reference line whose images are captured by the video camera. The raw image data are then digitized and stored in the frame grabber card for further processing using specifically written algorithms. These image-processing algorithms give the actual welding path, the relative position between the pieces, and the required corrections. Two case studies are considered: the first is the joining of two flat metal pieces; the second concerns joining a cylindrically shaped piece to a flat surface. An implementation of this computer vision system using parallel processing is being studied.
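A minimal sketch of the laser-stripe extraction step described above, assuming a grayscale frame in which the laser line is the brightest feature and shows one intensity peak per row; the function and variable names are illustrative, not taken from the paper, and the pixel deviation would still have to be converted to millimetres through camera calibration before being used by the trajectory controller.

```python
import numpy as np

def laser_line_deviation(frame: np.ndarray, reference_col: float) -> float:
    """Locate the laser stripe in a grayscale frame (one intensity peak per row)
    and return its mean lateral deviation, in pixels, from a reference column."""
    rows_with_stripe = frame.max(axis=1) > 0.5 * frame.max()   # rows where the stripe is visible
    if not rows_with_stripe.any():
        return float("nan")
    stripe_cols = frame[rows_with_stripe].argmax(axis=1)       # per-row peak position
    return float(stripe_cols.mean() - reference_col)
```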

원거리 학습 기반 컴퓨터 비젼 실습 사례연구 (A Case Study on Distance Learning Based Computer Vision Laboratory)

  • 이성열
    • Korean Operations Research and Management Science Society (KORMS), Conference Proceedings / 2005 Fall Conference and Annual General Meeting / pp.175-181 / 2005
  • This paper describes the development of on-line computer vision laboratories for teaching detailed image processing and pattern recognition techniques. The computer vision laboratories cover distant image acquisition, basic image processing and pattern recognition methods, lenses and lighting, and communication. The study introduces a case that teaches computer vision in a distance learning environment, showing a schematic of a distant learning workstation and the contents of the laboratories with image processing examples. The study focuses more on the contents of the vision labs than on the Internet application method. It proposes ways to improve the on-line computer vision laboratories and discusses further research perspectives.
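The abstract does not list the individual exercises, so the snippet below is only a guess at the kind of basic image-processing task such an on-line lab might include (contrast enhancement followed by edge detection with OpenCV); the file names are placeholders.

```python
import cv2

# Hypothetical lab exercise: enhance contrast, then extract edges.
gray = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)   # "sample.png" is a placeholder
equalized = cv2.equalizeHist(gray)                      # histogram equalization
edges = cv2.Canny(equalized, 100, 200)                  # Canny edge detection
cv2.imwrite("edges.png", edges)
```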

수중 로봇을 위한 다중 템플릿 및 가중치 상관 계수 기반의 물체 인식 및 추종 (Multiple Templates and Weighted Correlation Coefficient-based Object Detection and Tracking for Underwater Robots)

  • 김동훈;이동화;명현;최현택
    • The Journal of Korea Robotics Society / Vol. 7, No. 2 / pp.142-149 / 2012
  • A camera has limited visibility in underwater environments because of the restricted light sources and the medium noise of the environment. Nevertheless, its usefulness at close range has been demonstrated in many studies, especially for navigation. In this paper, vision-based object detection and tracking techniques using artificial objects for underwater robots are studied. We employ template matching and mean shift algorithms for object detection and tracking. We also propose adaptive-threshold-based and color-region-aided approaches built on a weighted correlation coefficient to enhance detection performance under various illumination conditions. The color information is incorporated into the template-matched area, and the features of the template are used to compute correlation coefficients robustly. The objects are recognized using a multi-template matching approach. Finally, water basin experiments were conducted to demonstrate the performance of the proposed techniques using the underwater robot platform yShark, made by KORDI.
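A minimal sketch of the multi-template matching step with a normalized correlation coefficient, using OpenCV; the paper's weighting scheme, adaptive threshold, and color-region aid are not reproduced, and the fixed threshold below is an assumption.

```python
import cv2

def detect_best_template(frame_gray, templates):
    """Match several templates against a grayscale frame and return the best
    top-left location together with its normalized correlation coefficient."""
    best_loc, best_score = None, -1.0
    for tmpl in templates:
        result = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(result)
        if score > best_score:
            best_loc, best_score = loc, score
    return best_loc, best_score

# Usage sketch: accept the detection only above a fixed (not adaptive) threshold.
# loc, score = detect_best_template(frame, [template_near, template_far])
# detected = score > 0.6
```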

A completely non-contact recognition system for bridge unit influence line using portable cameras and computer vision

  • Dong, Chuan-Zhi;Bas, Selcuk;Catbas, F. Necati
    • Smart Structures and Systems / Vol. 24, No. 5 / pp.617-630 / 2019
  • Currently, most vision-based structural identification research focuses either on structural input (vehicle location) estimation or on structural output (structural displacement and strain responses) estimation. Structural condition assessment at the global level using only the vision-based structural output cannot give a normalized response irrespective of the type and/or load configuration of the vehicles. Combining the vision-based structural input with structural output from non-contact sensors overcomes this disadvantage while reducing cost, time, and labor, including cable wiring work. In conventional traffic monitoring, traffic closure is sometimes necessary for bridge structures, which may cause other severe problems such as traffic jams and accidents. In this study, a completely non-contact structural identification system is proposed; it mainly targets the identification of bridge unit influence lines (UIL) under operational traffic. Both the structural input (vehicle location information) and output (displacement responses) are obtained using only cameras and computer vision techniques. Multiple cameras are synchronized by audio signal pattern recognition. The proposed system is verified with a laboratory experiment on a scaled bridge model under a small moving truck load and with a field application on a campus footbridge under a moving golf cart load. The UILs are successfully identified in both cases. Pedestrian loads are also estimated with the extracted UIL, and the predicted pedestrian weights are within acceptable ranges.
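One common way to extract a unit influence line from vehicle positions and displacement measurements is a least-squares fit over discretized influence-line ordinates; the sketch below follows that generic formulation, with the discretization, variable names, and units being assumptions rather than the paper's exact procedure.

```python
import numpy as np

def estimate_uil(axle_positions, axle_weights, displacements, span, n_nodes=100):
    """Least-squares estimate of a unit influence line (UIL).

    axle_positions : (n_samples, n_axles) axle locations along the span [m]
    axle_weights   : (n_axles,) axle loads
    displacements  : (n_samples,) measured displacement response
    Returns the node coordinates and the UIL ordinates at n_nodes points.
    """
    nodes = np.linspace(0.0, span, n_nodes)
    A = np.zeros((len(displacements), n_nodes))
    for t, positions in enumerate(axle_positions):
        for w, x in zip(axle_weights, positions):
            if 0.0 <= x <= span:                      # axle is on the bridge
                k = int(np.argmin(np.abs(nodes - x))) # nearest influence-line node
                A[t, k] += w
    uil, *_ = np.linalg.lstsq(A, displacements, rcond=None)
    return nodes, uil
```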

Compensation of Installation Errors in a Laser Vision System and Dimensional Inspection of Automobile Chassis

  • Barkovski Igor Dunin;Samuel G.L.;Yang Seung-Han
    • Journal of Mechanical Science and Technology / Vol. 20, No. 4 / pp.437-446 / 2006
  • Laser vision inspection systems are becoming popular for the automated inspection of manufactured components. The performance of such systems can be enhanced by improving the accuracy of the hardware and the robustness of the software used in the system. This paper presents a new approach for enhancing the capability of a laser vision system by applying hardware compensation and using efficient analysis software. A 3D geometrical model is developed to study and compensate for possible distortions in the installation of the gantry robot on which the vision system is mounted. Appropriate compensation, based on the parameters of the 3D model, is applied to the inspection data obtained from the laser vision system. The present laser vision system is used for the dimensional inspection of a car chassis subframe and a lower arm assembly module. An algorithm based on simplex search techniques is used to analyze the compensated inspection data. The details of the 3D model, the parameters used for compensation, the measurement data obtained from the system, the search algorithm used for analyzing the measurement data, and the results obtained are presented in this paper. The results show that, by applying compensation and using appropriate analysis algorithms, the error in evaluating the inspection data can be significantly reduced, thus lowering the risk of rejecting good parts.
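The abstract names a simplex-search-based analysis but gives no details, so the sketch below only shows the general idea with SciPy's Nelder-Mead simplex method: fitting a planar rotation and translation that best aligns measured inspection points to their nominal positions. The cost function and parameterization are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def fit_rigid_transform(measured, nominal):
    """Nelder-Mead fit of a planar rotation/translation minimizing the RMS
    distance between measured points and their nominal (CAD) positions."""
    def cost(params):
        theta, tx, ty = params
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        aligned = measured @ R.T + np.array([tx, ty])
        return np.sqrt(np.mean(np.sum((aligned - nominal) ** 2, axis=1)))

    result = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    return result.x, result.fun     # (theta, tx, ty) and residual RMS error
```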

임베디드 엣지 플랫폼에서의 경량 비전 트랜스포머 성능 평가 (Performance Evaluation of Efficient Vision Transformers on Embedded Edge Platforms)

  • 이민하;이성재;김태현
    • IEMEK Journal of Embedded Systems and Applications / Vol. 18, No. 3 / pp.89-100 / 2023
  • Recently, on-device artificial intelligence (AI) solutions using mobile devices and embedded edge devices have emerged in various fields, such as computer vision, to address network traffic burdens, low-energy operation, and security problems. Although vision transformer deep learning models have outperformed conventional convolutional neural network (CNN) models in computer vision, they require more computation and more parameters than CNN models and are therefore not directly applicable to embedded edge devices with limited hardware resources. Many researchers have proposed model compression methods or lightweight architectures for vision transformers; however, only a few studies evaluate how these compression techniques affect performance on actual devices. To address this, this paper presents a performance evaluation of vision transformers on embedded platforms. We investigated the behavior of three vision transformers: DeiT, LeViT, and MobileViT. Each model's performance was evaluated by accuracy and inference time on edge devices using the ImageNet dataset. We assessed the effect of quantization on latency improvement and accuracy degradation by profiling the proportion of response time occupied by major operations, and we evaluated the performance of each model on GPU- and EdgeTPU-based edge devices. In our experiments, LeViT showed the best performance on CPU-based edge devices, DeiT-small showed the highest performance improvement on GPU-based edge devices, and only the MobileViT models showed improvement on the EdgeTPU. The profiling results indicate that the degree of performance improvement of each vision transformer depends strongly on the proportion of operations that can be optimized on the target edge device. In summary, to apply vision transformers to on-device AI solutions, both a proper composition of operations and optimizations specific to the target edge devices must be considered.
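The paper's exact toolchain is not stated in the abstract; as one plausible way to reproduce the CPU latency comparison, the sketch below applies PyTorch dynamic INT8 quantization to a DeiT-small model from the timm library and times single-image inference. The model name, input size, and timing loop are assumptions.

```python
import time
import torch
import timm  # assumed available; provides a DeiT reference implementation

# Float32 baseline and a dynamically quantized (INT8 linear layers) copy.
model = timm.create_model("deit_small_patch16_224", pretrained=False).eval()
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear},
                                                dtype=torch.qint8)

def cpu_latency_ms(m, runs=20):
    """Average single-image CPU inference time in milliseconds."""
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        m(x)                                  # warm-up
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
    return (time.perf_counter() - start) / runs * 1e3

print(f"fp32: {cpu_latency_ms(model):.1f} ms, int8 dynamic: {cpu_latency_ms(quantized):.1f} ms")
```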

The Use of Advanced Optical Measurement Methods for the Mechanical Analysis of Shear Deficient Prestressed Concrete Members

  • Wilder, K. De;Roeck, G. De;Vandewalle, L.
    • International Journal of Concrete Structures and Materials / Vol. 10, No. 2 / pp.189-203 / 2016
  • This paper investigates the use of advanced optical measurement methods, i.e. 3D coordinate measurement machines (3D CMM) and stereo-vision digital image correlation (3D DIC), for the mechanical analysis of shear-deficient prestressed concrete members. Firstly, the experimental program is elaborated. Secondly, the working principle, experimental setup, and corresponding accuracy and precision of the considered optical measurement techniques are reported, and a novel way to apply synthesised strain sensor patterns for DIC is introduced. Thirdly, the experimental results are reported and the structural behaviour is analyzed based on the gathered experimental data. Both techniques yielded useful and complete data compared with traditional mechanical measurement techniques and allowed the mechanical behaviour of the reported test specimens to be assessed. The identified structural behaviour presented in this paper can be used to optimize design procedures for shear-critical structural concrete members.
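The core of digital image correlation is tracking a small subset of pixels from a reference image into a deformed image. The sketch below shows a simplified single-camera, integer-pixel version using normalized cross-correlation; real (stereo) DIC adds calibration, subpixel interpolation, and shape functions, none of which are reproduced here.

```python
import cv2

def track_subset(ref, deformed, center, half=15, search=10):
    """Track a (2*half+1)^2 pixel subset centred at `center` = (x, y) from the
    reference image into the deformed image; returns the integer-pixel
    displacement (dx, dy). The centre must lie at least half+search pixels
    from the image border."""
    x, y = center
    subset = ref[y - half:y + half + 1, x - half:x + half + 1]
    region = deformed[y - half - search:y + half + search + 1,
                      x - half - search:x + half + search + 1]
    score = cv2.matchTemplate(region, subset, cv2.TM_CCORR_NORMED)
    _, _, _, (mx, my) = cv2.minMaxLoc(score)
    return mx - search, my - search
```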

이동물체의 정확한 추출을 위한 스테레오 알고리즘 (Stereo Vision Techniques for Correct Extraction of Moving Objects)

  • 김종만
    • The Korean Institute of Electrical Engineers (KIEE), Conference Proceedings / Proceedings of the KIEE 2005 36th Summer Conference, Part D / pp.2531-2533 / 2005
  • The proposed neural network technique is a real-time computation method, based on the theory of inter-node diffusion, for finding safe distances to suddenly appearing objects during operation. Distance computation based on stereo vision, analogous to human binocular vision, involves two main steps: finding the corresponding points of the stereo images, and interpolating full image data from the nonlinear image data of the objects. Both steps demand considerable memory and computation time. Therefore, a reliable neural-network algorithm is derived for real-time matching of objects, composed of a dynamic programming algorithm based on sequence-matching techniques for moving objects.
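The paper's own diffusion/dynamic-programming matcher is not reproduced here; as a stand-in illustration of scanline-optimized stereo matching and distance recovery, the sketch below uses OpenCV's semi-global block matcher and a pinhole depth formula. The file names and calibration values are placeholders.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth from a calibrated rig: depth = focal_length * baseline / disparity.
focal_px, baseline_m = 700.0, 0.12                      # assumed calibration values
depth_m = np.where(disparity > 0, focal_px * baseline_m / np.maximum(disparity, 1e-6), 0.0)
```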

Real-time geometry identification of moving ships by computer vision techniques in bridge area

  • Li, Shunlong;Guo, Yapeng;Xu, Yang;Li, Zhonglong
    • Smart Structures and Systems / Vol. 23, No. 4 / pp.359-371 / 2019
  • As part of a structural health monitoring system, the relative geometric relationship between ships and bridges has been recognized as important for bridge authorities and ship owners in avoiding ship-bridge collisions. This study proposes a novel computer vision method for the real-time geometric parameter identification of moving ships based on a single shot multibox detector (SSD), transfer learning techniques, and monocular vision. The identification framework consists of a ship detection (coarse scale) module and a geometric parameter calculation (fine scale) module. For ship detection, the SSD, a deep learning algorithm, was employed and fine-tuned with ship image samples downloaded from the Internet to obtain rectangular regions of interest at the coarse scale. Subsequently, for the geometric parameter calculation, an accurate ship contour is created using morphological operations within the saturation channel of the hue, saturation, and value (HSV) color space. Furthermore, a local coordinate system was constructed using a projective geometry transformation to calculate the geometric parameters of ships, such as width, length, height, localization, and velocity. The application of the proposed method to in situ video images, obtained from cameras installed on the girder of the Wuhan Yangtze River Bridge above the shipping channel, confirmed its efficiency, accuracy, and effectiveness.
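A minimal sketch of the fine-scale contour step described above: thresholding the saturation channel of the HSV color space inside a detected ship ROI, cleaning the mask with morphological operations, and keeping the largest contour. The thresholding method, kernel size, and function names are assumptions; the projective mapping to metric dimensions is not included.

```python
import cv2
import numpy as np

def ship_contour(roi_bgr: np.ndarray):
    """Largest contour inside a detected ship ROI, from the HSV saturation channel."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1]
    _, mask = cv2.threshold(saturation, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

# cv2.minAreaRect(contour) would then give the pixel-level width/length of the ship,
# which the paper maps to metric dimensions via a projective (homography) transform.
```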