• Title/Summary/Keyword: Vision-based


Development of PCI Vision System (PCI비젼 시스템 개발)

  • Kim, Jeong-Hun;Jeon, Jae-Wook;Byun, Jong-Eun
    • Proceedings of the KIEE Conference
    • /
    • 2000.07d
    • /
    • pp.2868-2870
    • /
    • 2000
  • Vision systems need high-speed transfer methods to move large amounts of image data. Since PCs adopted the PCI bus, PCI has become a more effective method for data transfer, replacing the older ISA bus. This paper proposes a vision-system architecture and a Windows driver based on PCI.


Shape Based Framework for Recognition and Tracking of Texture-free Objects for Submerged Robots in Structured Underwater Environment (수중로봇을 위한 형태를 기반으로 하는 인공표식의 인식 및 추종 알고리즘)

  • Han, Kyung-Min;Choi, Hyun-Taek
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.48 no.6
    • /
    • pp.91-98
    • /
    • 2011
  • This paper proposes an efficient and accurate vision-based recognition and tracking framework for texture-free objects. We approach this problem with a two-phase algorithm: a detection phase and a tracking phase. In the detection phase, the algorithm extracts shape context descriptors that are used to classify objects into predetermined targets of interest; the matching result is then further refined by a minimization technique. In the tracking phase, we use the mean-shift tracking algorithm based on the Bhattacharyya coefficient. In summary, the contributions of our method to underwater robot vision are fourfold: 1) it can cope with camera motion and scale changes of objects in the underwater environment; 2) it is an inexpensive vision-based recognition algorithm; 3) it retains the advantage of a shape-based method over distinct feature-point methods (such as SIFT) in underwater environments with varying turbidity; 4) we provide a quantitative comparison of our method with several well-known methods. The results are quite promising for the map-based underwater SLAM task that is the goal of our research.
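
As a rough illustration of the tracking phase described above (mean-shift over a color histogram, with the Bhattacharyya coefficient as the similarity measure), the following Python/OpenCV sketch tracks a window through a video. It is not the authors' implementation; the file name, initial window, and histogram settings are hypothetical.

```python
import cv2

cap = cv2.VideoCapture("underwater.avi")        # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 60                   # hypothetical initial target window
track_window = (x, y, w, h)

# Model histogram of the target region (hue channel).
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

    # Mean-shift moves the window toward the mode of the back-projection.
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)

    # Bhattacharyya distance between model and current histograms (0 = identical);
    # a large value signals drift and could trigger the detection phase again.
    tx, ty, tw, th = track_window
    cur_hsv = cv2.cvtColor(frame[ty:ty+th, tx:tx+tw], cv2.COLOR_BGR2HSV)
    cur_hist = cv2.calcHist([cur_hsv], [0], None, [180], [0, 180])
    cv2.normalize(cur_hist, cur_hist, 0, 255, cv2.NORM_MINMAX)
    drift = cv2.compareHist(roi_hist, cur_hist, cv2.HISTCMP_BHATTACHARYYA)
```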

Vision-based dense displacement and strain estimation of miter gates with the performance evaluation using physics-based graphics models

  • Narazaki, Yasutaka;Hoskere, Vedhus;Eick, Brian A.;Smith, Matthew D.;Spencer, Billie F.
    • Smart Structures and Systems
    • /
    • v.24 no.6
    • /
    • pp.709-721
    • /
    • 2019
  • This paper investigates a framework for vision-based dense displacement and strain measurement of miter gates, together with an approach for quantitatively evaluating the expected performance. The proposed framework consists of the following steps: (i) estimation of 3D displacement and strain from images taken before and after deformation (a water-fill event), (ii) evaluation of the expected performance of the measurement, and (iii) selection of the measurement setting with the highest expected accuracy. The framework first estimates the full-field optical flow between the images before and after the water-fill event, and projects the flow onto the finite element (FE) model to estimate the 3D displacement and strain. Then, the expected displacement/strain estimation accuracy is evaluated at each node/element of the FE model. Finally, the methods and measurement settings with the highest expected accuracy are selected to achieve the best results from the field measurement. A physics-based graphics model (PBGM) of the miter gates of the Greenup Lock and Dam with an updated texturing step is used to simulate the vision-based measurements in a photo-realistic environment and to evaluate the expected performance of different measurement plans (camera properties, camera placement, and post-processing algorithms). The framework investigated in this paper can be used to analyze and optimize measurement performance for different camera placements and post-processing steps prior to the field test.
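
As a hedged sketch of step (i) above, the snippet below computes dense (full-field) optical flow between a "before" and an "after" image with OpenCV's Farneback method and samples it at the pixel locations of projected FE nodes. The file names, flow parameters, and node coordinates are placeholders, and the back-projection onto the FE mesh is only indicated by a comment.

```python
import cv2
import numpy as np

before = cv2.imread("gate_before.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("gate_after.png", cv2.IMREAD_GRAYSCALE)

# Full-field (dense) optical flow: one 2D displacement vector per pixel.
# Arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(before, after, None,
                                    0.5, 4, 21, 3, 7, 1.5, 0)

# Hypothetical (u, v) pixel coordinates of FE nodes projected into the image.
node_px = np.array([[120, 340], [125, 352], [131, 365]], dtype=int)
node_flow = flow[node_px[:, 1], node_px[:, 0]]   # image-plane displacement per node

# The 2D image displacements would then be back-projected onto the FE mesh
# (camera model + mesh geometry) to obtain 3D displacement and strain.
```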

Semantic crack-image identification framework for steel structures using atrous convolution-based Deeplabv3+ Network

  • Ta, Quoc-Bao;Dang, Ngoc-Loi;Kim, Yoon-Chul;Kam, Hyeon-Dong;Kim, Jeong-Tae
    • Smart Structures and Systems
    • /
    • v.30 no.1
    • /
    • pp.17-34
    • /
    • 2022
  • For steel structures, fatigue cracks are critical damage induced by long-term cyclic loading and distortion effects. Vision-based crack detection can be a solution for ensuring structural integrity and performance through continuous monitoring and non-destructive assessment. A critical issue is distinguishing cracks from other features in captured images, which may contain complex backgrounds such as handwriting and marks made to record crack patterns and lengths during periodic visual inspections. This study presents a parametric study on image-based crack identification for orthotropic steel bridge decks using captured images with complicated backgrounds. Firstly, a framework for vision-based crack segmentation using the atrous convolution-based Deeplabv3+ network (ACDN) is designed. Secondly, features in crack images are labeled to build three databanks that account for the objects in the backgrounds. Thirdly, evaluation metrics computed from the trained ACDN models are used to evaluate the effects of background obstacles on crack-detection results. Finally, various training parameters, including image sizes, hyper-parameters, and the number of training images, are optimized for the ACDN crack-detection model. The results demonstrated that fatigue cracks could be identified by the trained ACDN models and that the accuracy of crack detection was improved by optimizing the training parameters. This supports the applicability of the vision-based technique for the early detection of tiny fatigue cracks in steel structures.
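
As an illustration of the segmentation step only (not the authors' model), the sketch below builds a two-class (background/crack) semantic-segmentation network with torchvision. Note that torchvision ships DeepLabv3, which is atrous-convolution based but is not the Deeplabv3+ variant used in the paper; the image path, input size, and untrained weights are hypothetical.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Two classes: background (0) and crack (1).
model = models.segmentation.deeplabv3_resnet50(weights=None, num_classes=2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),     # training image size is one of the tuned parameters
    transforms.ToTensor(),
])

img = preprocess(Image.open("deck_crack.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    logits = model(img)["out"]         # shape: (1, 2, 512, 512)
    crack_mask = logits.argmax(dim=1)  # per-pixel class labels
```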

Reliability and Validity of the Korean Version of Melbourne Low-Vision ADL Index (한국판 맬버른 저시력 일상생활지수(Melbourne Low-vision ADL Index: MLVAI)의 신뢰도 및 타당도)

  • Yoo, Yeon Hwan;Park, Ji-Hyuk;Jung, Min-Ye;Park, Hae Yean
    • Therapeutic Science for Rehabilitation
    • /
    • v.7 no.2
    • /
    • pp.51-61
    • /
    • 2018
  • Objective: The purpose of this study was to adapt the performance-based assessment tool Melbourne Low-Vision ADL Index (MLVAI), originally developed in Australia, to the Korean culture and to verify its reliability and validity. Methods: The subjects were adults over the age of 20 living in the community: 26 persons with low vision and 42 with normal vision. The Korean MLVAI was completed through expert verification of the translation and validation of the instrument's structure. The validity of the Korean MLVAI was established through content, discriminant, and convergent validity, and reliability was analyzed through internal-consistency reliability of the items, test-retest reliability, and interrater reliability. Results: The Content Validity Index (CVI) was greater than .78. Scores were statistically significantly lower in the low-vision group. Convergent validity was determined from the correlation between the LVQOL and the Korean MLVAI total score, which showed a significant correlation coefficient of .751 (p<.05). Cronbach's α coefficient indicated an internal consistency of .983 (p<.05). Test-retest reliability showed a high, significant correlation of .976, and interrater reliability showed a high intraclass correlation coefficient of .91 (p<.05). Conclusion: The results of this study indicate that the Korean MLVAI, adapted to the Korean context, is an ADL assessment tool with both validity and reliability. Based on this study, the Korean MLVAI can be used as a useful ADL assessment for occupational therapy (OT) interventions in low vision.
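
For readers unfamiliar with the internal-consistency statistic reported above, the short sketch below shows how Cronbach's α is computed from an item-score matrix; the item scores are made up and unrelated to the study's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_subjects, n_items) matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of subjects' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical example: 5 subjects x 4 items.
demo = np.array([[3, 4, 3, 4],
                 [2, 2, 3, 2],
                 [4, 4, 4, 5],
                 [1, 2, 1, 1],
                 [3, 3, 4, 3]], dtype=float)
print(round(cronbach_alpha(demo), 3))
```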

Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology (인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망)

  • Lee, S.W.;Hwang, B.W.;Lim, S.J.;Yoon, S.U.;Kim, T.J.;Kim, K.N.;Kim, D.H.;Park, C.J.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.4
    • /
    • pp.15-22
    • /
    • 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and in machine learning, such as deep learning, have enabled data-driven 3D applications. Research on artificial intelligence has progressed over the past few years, and 3D deep learning has been introduced. This is the result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before the introduction of 3D deep learning, the main targets for deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The research field of deep learning has extended from discriminative models, such as classification/segmentation/reconstruction models, to generative models, such as style transfer and the generation of non-existing data. Unlike 2D learning data, 3D learning data are not easy to acquire. Although low-cost 3D data-acquisition sensors have become increasingly popular owing to advances in 3D vision technology, the generation/acquisition of 3D data is still very difficult. Even if 3D data can be acquired, post-processing remains a significant problem. Moreover, it is not easy to directly apply existing network models, such as convolutional networks, owing to the various ways in which 3D data are represented. In this paper, we summarize technological trends in AI-based 3D content generation.

A Study on Development and Application of Real Time Vision Algorithm for Inspection Process Automation (검사공정 자동화를 위한 실시간 비전알고리즘 개발 및 응용에 관한 연구)

  • Back, Seung-Hak;Hwang, Won-Jun;Shin, Haeng-Bong;Choi, Young-Sik;Park, Dae-Yeong
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.19 no.1
    • /
    • pp.42-49
    • /
    • 2016
  • This study proposes a robot vision system based on non-contact inspection technology for fault inspection of welding states and part shapes. The main focus is the real-time implementation of automatic inspection of machined parts while the robot moves. For this purpose, an automatic test instrument inspects the precision components designated by the vision system. Pattern recognition technology is applied to the vision inspection of precision components, distinguishing good parts from bad ones based on their state and appearance. To realize a real-time integrated automation system for the precision-parts manufacturing process, a robot vision system with an integrated system controller was designed and its reliability was verified through experiments. The robot vision technology presented in this paper for the non-contact inspection of precision components and machine parts is useful for factory automation (FA).
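
The abstract does not detail the inspection algorithm itself; as a loose illustration of this kind of good/bad decision, the sketch below thresholds a part image, measures the largest blob, and compares its area with a reference tolerance. The image name, reference area, and tolerance are hypothetical.

```python
import cv2

REFERENCE_AREA = 12500.0   # expected part/weld area in pixels (hypothetical)
TOLERANCE = 0.05           # +/- 5 % allowed deviation (hypothetical)

img = cv2.imread("part_under_test.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Largest connected blob stands in for the inspected feature.
area = max((cv2.contourArea(c) for c in contours), default=0.0)
is_good = abs(area - REFERENCE_AREA) / REFERENCE_AREA <= TOLERANCE
print("PASS" if is_good else "FAIL", area)
```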

A Study on Visual Feedback Control of a Dual Arm Robot with Eight Joints

  • Lee, Woo-Song;Kim, Hong-Rae;Kim, Young-Tae;Jung, Dong-Yean;Han, Sung-Hyun
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.610-615
    • /
    • 2005
  • Visual servoing is the fusion of results from many elemental areas, including high-speed image processing, kinematics, dynamics, control theory, and real-time computing. It has much in common with research into active vision and structure from motion, but is quite different from the often-described use of vision in hierarchical task-level robot control systems. In this paper, we present a new approach to visual feedback control using image-based visual servoing with stereo vision. In order to control the position and orientation of a robot with respect to an object, a new technique using binocular stereo vision is proposed. The stereo vision enables us to calculate an exact image Jacobian not only around a desired location but also at other locations. The suggested technique can guide a robot manipulator to the desired location without a priori knowledge such as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. This paper describes a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results and compared with a conventional method on a dual-arm robot made by Samsung Electronics Co., Ltd.
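
As a sketch of the core image-based visual-servoing update described above (not the authors' implementation), the snippet below stacks standard point-feature interaction matrices into an image Jacobian and computes a 6-DOF camera velocity from the feature error; in the paper, the stereo pair is what supplies the depth Z exactly. All feature values, depths, and gains here are hypothetical.

```python
import numpy as np

def point_jacobian(x: float, y: float, Z: float) -> np.ndarray:
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Stack point Jacobians and return the 6-DOF camera velocity command."""
    J = np.vstack([point_jacobian(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    return -gain * np.linalg.pinv(J) @ error

# Hypothetical current/desired normalized features for 3 tracked points.
cur = [(0.10, 0.05), (-0.08, 0.12), (0.02, -0.11)]
des = [(0.00, 0.00), (-0.10, 0.10), (0.00, -0.10)]
Z = [1.2, 1.3, 1.1]                      # depths recovered from the stereo pair
v_cmd = ibvs_velocity(cur, des, Z)       # [vx, vy, vz, wx, wy, wz]
```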


A Study on Algorithm for Inspection of Automobile's plastic part locking lever (자동차 플라스틱 부품 락킹레버 검사를 위한 알고리즘 연구)

  • Jang, Bong-Choon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.5
    • /
    • pp.1558-1563
    • /
    • 2010
  • This paper describes a study on an algorithm for the development of a machine vision system for inspecting an automobile's plastic locking-lever part, replacing visual inspection by a human worker. Before developing the PC-based machine vision system, the purpose of this research is to develop an algorithm that decides in real time whether a product is good or bad. NI-LabVIEW software is used for the inspection method, and an inspection program is developed using LabVIEW Vision image functions. The inspection program was built and validated so that the system operator can set up the inspection area and change the criterion value in the program.
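
Since LabVIEW programs are graphical and cannot be reproduced in text, the following Python/OpenCV sketch only mimics the kind of good/bad decision described above: match a reference template of a correctly seated locking lever inside an operator-defined ROI and compare the score with a criterion value. The ROI, criterion, and file names are hypothetical.

```python
import cv2

ROI = (320, 180, 200, 160)     # x, y, w, h of the inspection area (hypothetical)
CRITERION = 0.80               # minimum acceptable match score (hypothetical)

template = cv2.imread("lever_ok_template.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("captured_part.png", cv2.IMREAD_GRAYSCALE)

x, y, w, h = ROI
roi = frame[y:y+h, x:x+w]

# Normalized cross-correlation of the "good lever" template within the ROI.
result = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, _ = cv2.minMaxLoc(result)

print("GOOD" if best_score >= CRITERION else "BAD", round(best_score, 3))
```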

VFH-based Navigation using Monocular Vision (단일 카메라를 이용한 VFH기반의 실시간 주행 기술 개발)

  • Park, Se-Hyun;Hwang, Ji-Hye;Ju, Jin-Sun;Ko, Eun-Jeong;Ryu, Juang-Tak;Kim, Eun-Yi
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.16 no.2
    • /
    • pp.65-72
    • /
    • 2011
  • In this paper, a real-time monocular vision-based navigation system is developed for disabled people, in which online background learning and a vector field histogram are used to identify obstacles and recognize avoidable paths. The proposed system operates in three steps: obstacle classification, occupancy grid map generation, and VFH-based path recommendation. Firstly, obstacles are discriminated in images by subtraction against a background model that is learned in real time. Thereafter, based on the classification results, an occupancy map of size 32 × 24 is produced, each cell of which represents its risk using 10 gray levels. Finally, a polar histogram is computed from the occupancy map, and the sectors corresponding to valleys are chosen as safe paths. To assess the effectiveness of the proposed system, it was tested with a variety of obstacles indoors and outdoors, showing an accuracy of 88%. Moreover, it showed superior performance compared with sensor-based navigation systems, which demonstrated the feasibility of the proposed system as an assistive device for disabled people.
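
As a hedged sketch of the path-recommendation step (not the authors' code), the snippet below collapses a 32 × 24 occupancy grid whose cells hold risk levels into a polar histogram of sector densities and selects low-density "valley" sectors as candidate headings; the sector count, valley threshold, and demo grid are hypothetical.

```python
import numpy as np

GRID_W, GRID_H = 32, 24           # occupancy map size from the abstract
N_SECTORS = 18                    # 180-degree field of view in 10-degree sectors
VALLEY_THRESHOLD = 2.0            # max sector density considered safe (hypothetical)

def polar_histogram(grid: np.ndarray, robot_cell=(GRID_W // 2, GRID_H - 1)):
    """Sum cell risk values into angular sectors measured from the robot cell."""
    hist = np.zeros(N_SECTORS)
    rx, ry = robot_cell
    for cy in range(grid.shape[0]):
        for cx in range(grid.shape[1]):
            if grid[cy, cx] == 0:
                continue
            angle = np.degrees(np.arctan2(ry - cy, cx - rx))   # 0..180 deg ahead
            if 0.0 <= angle <= 180.0:
                sector = min(int(angle // (180 / N_SECTORS)), N_SECTORS - 1)
                hist[sector] += grid[cy, cx]
    return hist

# Demo grid with risk levels 0..9 (10 levels, as in the abstract).
demo_grid = np.zeros((GRID_H, GRID_W), dtype=int)
demo_grid[5:10, 20:26] = 9        # a single high-risk obstacle region

hist = polar_histogram(demo_grid)
safe_sectors = np.flatnonzero(hist <= VALLEY_THRESHOLD)   # candidate headings
```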