• Title/Summary/Keyword: Multi-Vision System


A Study of the B/STUD Inspection System Using the Vision System (비전을 이용한 B/STUD 검사 시스템에 관한 연구)

  • 장영훈;한창수
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1120-1123
    • /
    • 1995
  • In this paper, an automatic B/STUD inspection system has been developed using a computer-aided vision system. An index table is used for rapid measurement, and multiple cameras are used to obtain high resolution in the mechanical system. Camera calibration is proposed to achieve reliable inspection. Image processing and data analysis algorithms for the B/STUD inspection system have been investigated and run quickly with high accuracy. As a result, the B/STUD inspection system can measure with high resolution in real time.
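
The abstract does not include implementation details; as a rough illustration of the camera-calibration step it mentions, the sketch below runs OpenCV's checkerboard calibration for a single camera. The board geometry, square size, and image folder are assumptions for illustration, not values from the paper.

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry is an assumption for illustration only.
BOARD_COLS, BOARD_ROWS = 9, 6          # inner corners per row/column
SQUARE_SIZE_MM = 10.0                  # physical square size

# 3D reference points of the board corners in the board frame (Z = 0).
objp = np.zeros((BOARD_ROWS * BOARD_COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_COLS, 0:BOARD_ROWS].T.reshape(-1, 2) * SQUARE_SIZE_MM

obj_points, img_points = [], []
image_size = None

for path in glob.glob("calib_images/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (BOARD_COLS, BOARD_ROWS))
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Intrinsics and distortion for one camera; repeat per camera in a multi-camera rig.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", K)
```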


Combined Static and Dynamic Platform Calibration for an Aerial Multi-Camera System

  • Cui, Hong-Xia;Liu, Jia-Qi;Su, Guo-Zhong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.6
    • /
    • pp.2689-2708
    • /
    • 2016
  • Multi-camera systems that integrate two or more low-cost digital cameras are adopted to achieve higher ground coverage and improve the base-height ratio in low-altitude remote sensing. To guarantee accurate multi-camera integration, the geometric relationship among the cameras must be determined through platform calibration techniques. This paper proposes a combined two-step platform calibration method. In the first step, static platform calibration is conducted based on the stable relative orientation constraint and convergent conditions among cameras in a static environment. In the second step, a dynamic platform self-calibration approach is proposed based not only on tie points but also on straight lines, in order to correct small changes in the relative relationship among cameras during dynamic flight. Experiments with the proposed two-step platform calibration method were carried out on terrestrial and aerial images from a multi-camera system combining four consumer-grade digital cameras onboard an unmanned aerial vehicle. The experimental results show that the proposed platform calibration approach is able to compensate for the varying relative relationship during flight, achieving a mosaicking accuracy of the virtual images smaller than 0.5 pixel. The proposed approach can be extended to calibrate other low-cost multi-camera systems without a rigorous mechanical structure.
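
The static step of the calibration rests on the relative orientation between cameras staying fixed. As a minimal sketch of that bookkeeping (not the paper's line-aided self-calibration), the snippet below derives the camera-to-camera transform from two per-camera poses measured against a common target; the pose values are illustrative assumptions.

```python
import numpy as np
import cv2

def pose_to_matrix(rvec, tvec):
    """Convert an OpenCV rotation vector / translation into a 4x4 pose (target -> camera)."""
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(tvec, dtype=np.float64).ravel()
    return T

def relative_pose(rvec_a, tvec_a, rvec_b, tvec_b):
    """Relative transform taking points from camera A's frame to camera B's frame,
    assuming both poses are expressed w.r.t. the same calibration target."""
    T_a = pose_to_matrix(rvec_a, tvec_a)     # target -> camera A
    T_b = pose_to_matrix(rvec_b, tvec_b)     # target -> camera B
    return T_b @ np.linalg.inv(T_a)          # camera A -> camera B

# Illustrative numbers only: two cameras converging on one target.
rvec_a, tvec_a = [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]
rvec_b, tvec_b = [0.0, 0.2, 0.0], [0.3, 0.0, 1.0]
T_ab = relative_pose(rvec_a, tvec_a, rvec_b, tvec_b)
print(np.round(T_ab, 3))
```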

A Study on the Analysis of Posture Balance Based on Multi-parameter in Time Variation (시간변화에 따른 다중파라미터기반에서 자세균형의 분석 연구)

  • Kim, Jeong-Lae;Lee, Kyoung-Joung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.5
    • /
    • pp.151-157
    • /
    • 2011
  • This study analyzed posture balance over time while the body exercised for a period of time. Posture balance was measured as the output values of a posture balance system for body movement using multiple parameters. Posture variation was examined with three methods: open and closed eyes, head movement, and upper-body movement. Parameters measuring vision, vestibular, somatosensory, and CNS function were checked, and the system's data were evaluated in terms of stability. The system captures signals on the physical condition of the body through a data acquisition system, data signal processing, and a feedback system. The output signal was analyzed by Fourier analysis over frequency bands of 0.1 Hz, 0.1-0.5 Hz, 0.5-1 Hz, and above 1 Hz. The posture balance system can be used to support the assessment of posture balance of body movement over time, and it is expected to monitor physical parameters for a health verification system.
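
The Fourier analysis over the 0.1 Hz, 0.1-0.5 Hz, 0.5-1 Hz, and above-1 Hz bands can be sketched as summing FFT power within each band. The sampling rate, signal length, and synthetic sway signal below are assumptions for illustration only.

```python
import numpy as np

def band_powers(signal, fs, bands=((0.0, 0.1), (0.1, 0.5), (0.5, 1.0), (1.0, None))):
    """Power of a posture signal in the frequency bands used in the abstract.
    fs is the sampling rate in Hz; band edges in Hz (None = Nyquist)."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()                 # remove DC offset
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(spectrum) ** 2
    results = {}
    for lo, hi in bands:
        hi = fs / 2.0 if hi is None else hi
        mask = (freqs >= lo) & (freqs < hi)
        results[f"{lo:.1f}-{hi:.1f} Hz"] = power[mask].sum()
    return results

# Synthetic example: 0.3 Hz sway plus noise, sampled at 20 Hz for 60 s.
fs = 20.0
t = np.arange(0, 60, 1.0 / fs)
sway = np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.random.randn(t.size)
print(band_powers(sway, fs))
```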

The compensation of kinematic differences of a robot using image information (화상정보를 이용한 로봇기구학의 오차 보정)

  • Lee, Young-Jin;Lee, Min-Chul;Ahn, Chul-Ki;Son, Kwon;Lee, Jang-Myung
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.1840-1843
    • /
    • 1997
  • The task environment of a robot is changing rapidly, and the tasks themselves are becoming more complicated due to the current industrial trend toward multi-product, small-lot-size production. A convenient, user-interfaced off-line programming (OLP) system is being developed in order to overcome the difficulty of teaching robot tasks. Using the OLP system, operators can easily teach robot tasks off-line and verify the feasibility of a task through simulation of the robot prior to on-line execution. However, some task errors are inevitable because of kinematic differences between the robot model in the OLP and the actual robot. Three calibration methods using image information are proposed to compensate for these kinematic differences: a relative position vector method, a three-point compensation method, and a baseline compensation method. To compensate for the kinematic differences, a vision system with one monochrome camera is used in the calibration experiments.
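
The abstract names the three compensation methods without giving formulas. As a loose sketch of the general idea behind a relative-position-vector style correction, the snippet below shifts OLP-taught target positions by the offset between the modeled and the vision-measured location of a reference marker; the function and all numbers are hypothetical.

```python
import numpy as np

def compensate_targets(model_targets, marker_model_pos, marker_measured_pos):
    """Shift OLP-taught target positions by the offset between where the OLP model
    expects a reference marker and where the vision system actually measures it.
    This is only a simplified 'relative position vector' style correction."""
    offset = np.asarray(marker_measured_pos, float) - np.asarray(marker_model_pos, float)
    return [np.asarray(p, float) + offset for p in model_targets]

# Illustrative numbers (mm): the real robot sees the marker 2.0 mm off in X, 1.5 mm off in Y.
taught = [[100.0, 50.0, 30.0], [120.0, 55.0, 30.0]]
corrected = compensate_targets(taught, [0.0, 0.0, 0.0], [2.0, 1.5, 0.0])
print(corrected)
```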


Development of a Fast Alignment Method of Micro-Optic Parts Using Multi Dimension Vision and Optical Feedback

  • Han, Seung-Hyun;Kim, Jin-Oh;Park, Joong-Wan;Kim, Jong-Han
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.273-277
    • /
    • 2003
  • A general electronic assembly process is composed of a series of geometric alignments and bonding/screwing steps; after assembly, the function is tested in a subsequent inspection process. Assembly of micro-optic devices, however, requires both processes to be performed within the same equipment. Coarse geometric alignment is made by using vision, and the optical function is improved by subsequent fine motion based on feedback from a tunable laser interferometer. The overall system is composed of a precision robot system for 3D assembly, a 3D vision-guided system for geometric alignment, and an optical feedback system with a tunable laser. In this study, we propose a new fast alignment algorithm for micro-optic devices covering both visual and optical alignment. The main goal is to find the fastest alignment process and algorithms achievable with state-of-the-art technology. We propose a new approach with an optimal sequence of processes, a visual alignment algorithm, and a search algorithm for optimal optical alignment. A system is designed to show the effectiveness and efficiency of the proposed method.
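
The fine optical-alignment stage amounts to a search that maximizes the measured optical power. The sketch below shows a generic coarse-then-fine, axis-wise hill climb over a two-axis stage; `measure_power`, the step sizes, and the Gaussian stand-in for coupled power are assumptions, not the paper's search algorithm.

```python
import numpy as np

def coarse_then_fine_align(measure_power, start_xy, coarse_step=10.0, fine_step=0.5,
                           max_iters=200):
    """Greedy axis-wise search that maximizes an optical power reading, first with a
    coarse step, then a fine step. measure_power(x, y) stands in for the real sensor."""
    pos = np.asarray(start_xy, dtype=float)
    best = measure_power(*pos)
    for step in (coarse_step, fine_step):
        for _ in range(max_iters):
            improved = False
            for delta in ([step, 0], [-step, 0], [0, step], [0, -step]):
                candidate = pos + delta
                p = measure_power(*candidate)
                if p > best:
                    pos, best, improved = candidate, p, True
            if not improved:
                break
    return pos, best

# Stand-in for coupled power: a Gaussian peak centered at (3.2, -1.7) um.
power = lambda x, y: np.exp(-((x - 3.2) ** 2 + (y + 1.7) ** 2) / 4.0)
print(coarse_then_fine_align(power, start_xy=(20.0, 15.0)))
```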


Real-Time Objects Tracking using Color Configuration in Intelligent Space with Distributed Multi-Vision (분산다중센서로 구현된 지능화공간의 색상정보를 이용한 실시간 물체추적)

  • Jin, Tae-Seok;Lee, Jang-Myung;Hashimoto, Hideki
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.9
    • /
    • pp.843-849
    • /
    • 2006
  • Intelligent Space defines an environment in which many intelligent devices, such as computers and sensors, are distributed. As a result of the cooperation between these smart devices, intelligence emerges from the environment. In such a scheme, a crucial task is to obtain the global location of every device in order to offer useful services. Some tracking systems prepare models of the objects in advance; it is difficult to adopt such a model-based solution when many kinds of objects exist. In this paper, localization is achieved with no prior model, using color properties as the information source. Feature vectors of multiple objects based on color histograms and the tracking method are described. The proposed method is applied to the intelligent environment, and its performance is verified by experiments.
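
One common way to realize model-free color-based tracking is hue-histogram back-projection followed by mean shift, as sketched below with OpenCV. This is an illustration under assumptions (single object, known initial box, hypothetical video path), not necessarily the authors' exact method.

```python
import cv2

def track_with_color_histogram(video_path, init_box):
    """Track one object by its hue histogram using back-projection + mean shift.
    init_box = (x, y, w, h) of the object in the first frame (assumed known)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("could not read first frame")

    x, y, w, h = init_box
    roi = frame[y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Ignore dark / desaturated pixels when building the color model.
    mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
    hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    window = init_box
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(backproj, window, criteria)
        yield window            # (x, y, w, h) of the tracked object in this frame
    cap.release()

# Usage (hypothetical file and initial box):
# for box in track_with_color_histogram("scene.avi", (300, 200, 60, 80)):
#     print(box)
```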

UGV Localization using Multi-sensor Fusion based on Federated Filter in Outdoor Environments (야지환경에서 연합형 필터 기반의 다중센서 융합을 이용한 무인지상로봇 위치추정)

  • Choi, Ji-Hoon;Park, Yong Woon;Joo, Sang Hyeon;Shim, Seong Dae;Min, Ji Hong
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.15 no.5
    • /
    • pp.557-564
    • /
    • 2012
  • This paper presents UGV localization using multi-sensor fusion based on a federated filter in outdoor environments. A conventional GPS/INS integrated system does not guarantee robust localization because GPS is vulnerable to external disturbances. In many environments, however, a vision system is very effective because there are many features compared to open space, and these features can provide much information for UGV localization. Thus, this paper uses scene matching and pose-estimation-based vision navigation, a magnetic compass, and an odometer to cope with GPS-denied environments. An NR-mode federated filter is used for system safety. The experimental results on a predefined path demonstrate enhanced robustness and accuracy of localization in outdoor environments.
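
In a federated filter, local filters process each sensor and a master filter fuses their outputs. The sketch below shows only an information-weighted master fusion step with made-up vision, odometry, and dead-reckoning estimates; it is not the paper's NR-mode implementation.

```python
import numpy as np

def fuse_local_estimates(estimates, covariances):
    """Master-filter fusion step of a federated filter: combine local filter
    estimates by their information (inverse covariance) weights."""
    infos = [np.linalg.inv(P) for P in covariances]
    fused_info = sum(infos)
    fused_cov = np.linalg.inv(fused_info)
    fused_state = fused_cov @ sum(I @ x for I, x in zip(infos, estimates))
    return fused_state, fused_cov

# Illustrative 2D position estimates (m) from vision, odometry, and compass-aided dead reckoning.
x_vision = np.array([10.2, 5.1])
P_vision = np.diag([0.25, 0.25])
x_odom = np.array([10.6, 4.8])
P_odom = np.diag([1.00, 1.00])
x_dr = np.array([9.9, 5.3])
P_dr = np.diag([2.25, 2.25])

x_fused, P_fused = fuse_local_estimates([x_vision, x_odom, x_dr],
                                        [P_vision, P_odom, P_dr])
print(np.round(x_fused, 2), np.round(np.diag(P_fused), 3))
```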

Vision Sensor System for Abnormal Region Detection under Outdoor Environment (옥외 환경 하에서의 이상영역 검출을 위한 시각 감시 시스템의 구축)

  • Seo, Won-Chan
    • Journal of Sensor Science and Technology
    • /
    • v.9 no.1
    • /
    • pp.61-69
    • /
    • 2000
  • In this paper, an algorithm was developed to construct a vision sensor system that can detect abnormal regions under an ever-changing outdoor environment. The algorithm was implemented on a parallel network system consisting of multiple processors, partitioned according to its properties in order to extend its capabilities. In experiments using real scenes, the algorithm adapted to ever-changing outdoor environmental conditions, and it was confirmed that the system is robust and effective.
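
The abstract gives no algorithmic detail; one common adaptive scheme for slowly changing outdoor illumination is running-average background differencing, sketched below purely as an assumption-based illustration (the learning rate, threshold, and minimum blob area are arbitrary).

```python
import cv2
import numpy as np

def detect_abnormal_regions(frames, alpha=0.02, threshold=35, min_area=200):
    """Adaptive background differencing: the background is a running average so it
    slowly follows illumination changes; large residual blobs are flagged as abnormal."""
    background = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if background is None:
            background = gray.copy()
            yield []
            continue
        diff = cv2.absdiff(gray, background)
        _, fg = cv2.threshold(diff.astype(np.uint8), threshold, 255, cv2.THRESH_BINARY)
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        regions = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
        # Blend the current frame into the background to adapt to slow outdoor changes.
        background = (1 - alpha) * background + alpha * gray
        yield regions
```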


Integration of Multi-scale CAM and Attention for Weakly Supervised Defects Localization on Surface Defective Apple

  • Nguyen Bui Ngoc Han;Ju Hwan Lee;Jin Young Kim
    • Smart Media Journal
    • /
    • v.12 no.9
    • /
    • pp.45-59
    • /
    • 2023
  • Weakly supervised object localization (WSOL) is the task of localizing an object in an image using only image-level labels. Previous studies have followed the conventional class activation mapping (CAM) pipeline. However, we reveal that the current CAM approach suffers from problems that prevent the original CAM from capturing complete defect features. This work utilizes a convolutional neural network (CNN) pretrained on image-level labels to generate class activation maps in a multi-scale manner to highlight discriminative regions. Additionally, a pretrained vision transformer (ViT) is used to produce multi-head attention maps as an auxiliary detector. By integrating the CNN-based CAMs and the attention maps, our approach localizes defective regions without requiring bounding-box or pixel-level supervision during training. We evaluate our approach on a dataset of apple images with only image-level labels of defect categories. Experiments demonstrate that our proposed method matches the performance of several object detection models and holds promise for improving localization.
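
The integration step described (combining multi-scale CNN CAMs with ViT attention maps) can be sketched as resizing every map to a common resolution, normalizing, and blending the two families. The weighting, map sizes, and threshold below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
import cv2

def normalize(m):
    """Scale a saliency map to [0, 1]."""
    m = m.astype(np.float32)
    return (m - m.min()) / (m.max() - m.min() + 1e-8)

def fuse_cam_and_attention(cams, attn_maps, out_size, cam_weight=0.5):
    """Fuse multi-scale CAMs (from a CNN) with ViT attention maps by resizing every
    map to out_size=(W, H), normalizing, averaging within each family, then blending."""
    def aggregate(maps):
        resized = [cv2.resize(normalize(m), out_size) for m in maps]
        return normalize(np.mean(resized, axis=0))
    cam_avg = aggregate(cams)
    attn_avg = aggregate(attn_maps)
    return normalize(cam_weight * cam_avg + (1 - cam_weight) * attn_avg)

# Toy inputs standing in for CAMs at two input scales and one attention map.
cams = [np.random.rand(14, 14), np.random.rand(28, 28)]
attn = [np.random.rand(14, 14)]
fused = fuse_cam_and_attention(cams, attn, out_size=(224, 224))
mask = (fused > 0.5).astype(np.uint8)   # crude defect-localization mask
```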