• Title/Summary/Keyword: Multi-Vision System

Mobile Robot Localization using Ubiquitous Vision System (시각기반 센서 네트워크를 이용한 이동로봇의 위치 추정)

  • Dao, Nguyen Xuan;Kim, Chi-Ho;You, Bum-Jae
    • Proceedings of the KIEE Conference
    • /
    • 2005.07d
    • /
    • pp.2780-2782
    • /
    • 2005
  • In this paper, we present a mobile robot localization solution using a Ubiquitous Vision System (UVS). The collective information gathered by multiple strategically placed cameras has many advantages; for example, aggregating information from multiple viewpoints reduces the uncertainty about the robots' positions. We construct the UVS as a multi-agent system by regarding each vision sensor as one vision agent (VA). Each VA performs target segmentation by color and motion information as well as visual tracking of multiple objects. Our modified identified contract-net (ICN) protocol is used for communication between VAs to coordinate multiple tasks. This protocol improves the scalability and modularity of the system, because it is independent of the number of VAs and requires no calibration. Furthermore, handover between VAs using ICN is seamless. Experimental results show the robustness of the solution over a widespread area. The performance in indoor environments shows the feasibility of the proposed solution in real time.

  • PDF
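The contract-net style coordination described in the abstract can be illustrated with a minimal sketch: each vision agent bids on a tracking task according to how well it currently sees the target, and the task is awarded to the highest bidder, so handover needs no camera calibration or shared map. The class and function names (`VisionAgent`, `bid`, `award_handover`) and the visibility-based bid are illustrative assumptions, not the authors' ICN implementation.

```python
class VisionAgent:
    def __init__(self, name, visibility):
        self.name = name
        # visibility: 0.0 (target not in view) .. 1.0 (fully visible)
        self.visibility = visibility

    def bid(self, target_id):
        # A VA's bid is simply how well it currently sees the target.
        return self.visibility

def award_handover(agents, target_id):
    # Award the tracking task to the highest-bidding agent; because only
    # bids are exchanged, adding or removing VAs needs no recalibration.
    bids = {a.name: a.bid(target_id) for a in agents}
    winner = max(bids, key=bids.get)
    return winner, bids

agents = [VisionAgent("VA1", 0.2), VisionAgent("VA2", 0.9), VisionAgent("VA3", 0.5)]
winner, bids = award_handover(agents, target_id="robot-0")
```

As the tracked robot moves, re-running the award with updated visibilities produces the seamless handover: the winning VA simply changes when another agent starts bidding higher.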

The Multipass Joint Tracking System by Vision Sensor (비전센서를 이용한 다층 용접선 추적 시스템)

  • Lee, Jeong-Ick;Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.16 no.5
    • /
    • pp.14-23
    • /
    • 2007
  • Welding fabrication invariably involves three distinct sequential steps: preparation, actual process execution, and post-weld inspection. One of the major problems in automating these steps and developing an autonomous welding system is the lack of proper sensing strategies. Conventionally, machine vision is used in robotic arc welding only to correct pre-taught welding paths in a single pass. In this paper, however, multipass tracking rather than single-pass tracking is performed with both a conventional seam tracking algorithm and a newly developed one, and the tracking performances of the two algorithms are compared in multipass tracking. The results show that, in multipass welding, the conventional seam tracking algorithm outperforms the developed one.

A Study on Adaptive Control to Fill Weld Groove by Using Multi-Torches in SAW (SAW 용접시 다중 토치를 이용한 용접부 적응제어에 관한 연구)

  • 문형순;김정섭;권혁준;정문영
    • Proceedings of the KWS Conference
    • /
    • 1999.10a
    • /
    • pp.47-50
    • /
    • 1999
  • The term adaptive control is often used to describe recent advances in welding process control, but strictly it applies only to systems that are able to cope with dynamic changes in system performance. In welding applications, the term adaptive control may not imply the conventional control theory definition; it may be used in the more descriptive sense to express the need for the process to adapt to changing welding conditions. This paper proposes a methodology for obtaining a good bead appearance based on a multi-torch welding system with a vision system in SAW. The methodologies for adaptive filling control used the welding current/voltage combination, the arc voltage/welding current/wire feed speed combination, and the welding speed, by using the vision sensor. It was shown that the algorithm based on the welding current/voltage combination and welding speed produced a sounder weld bead appearance than that of the voltage/current combination.

  • PDF
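Adaptive filling control of the kind sketched in the abstract ultimately rests on a volume balance: the metal deposited by the wire per unit time must fill the groove cross-section measured by the vision sensor. A minimal sketch, assuming the welding speed is the adapted variable (function name and units are illustrative, not from the paper):

```python
import math

def travel_speed(wire_diameter_mm, wire_feed_mm_s, groove_area_mm2):
    # Volume balance: wire cross-section area * wire feed speed
    # must equal groove cross-section area * travel speed, so the
    # torch slows down over wider grooves and speeds up over narrow ones.
    wire_area = math.pi * (wire_diameter_mm / 2.0) ** 2
    return wire_area * wire_feed_mm_s / groove_area_mm2

# e.g. 4 mm wire fed at 100 mm/s into a 25 mm^2 groove section
speed = travel_speed(wire_diameter_mm=4.0, wire_feed_mm_s=100.0, groove_area_mm2=25.0)
```

In a multi-torch setup the same balance applies per torch, with the remaining groove area after each pass re-measured by the vision sensor.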

An implementation of multi-platform authoring system for augmented reality content (다중 플랫폼 증강현실 콘텐츠 저작 시스템 구현)

  • Park, Go-Gwang;Kim, Sung-Hyun;Kim, Shin-Hyong
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2012.06d
    • /
    • pp.61-63
    • /
    • 2012
  • This paper introduces an augmented reality content authoring system that supports multiple platforms. The authoring system has been implemented in both a standard PC environment and a mobile environment, and the two are mutually compatible because they use the same open-source engine. In addition, a server responsible for storing and transferring content files is included in the system, improving users' access to files when authoring augmented reality content.

Multi-Camera Vision System for Tele-Robotics

  • Park, Changhwn;Kohtaro Ohba;Park, Kyihwan;Sayaka Odano;Hisayaki Sasaki;Nakyoung Chong;Tetsuo Kotoku;Kazuo Tanie
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.25.6-25
    • /
    • 2001
  • A new monitoring system is proposed to give direct visual information of the remote site when working with a tele-operation system. In order to mimic the behavior of a human inspecting an object, multiple cameras with different viewpoints are attached around the robot hand and are switched on and off according to the operator's motion, such as joystick manipulation or head movement. The performance of the system is evaluated by comparison experiments among a single camera (SC) vision system, a head mounted display (HMD) system, and the proposed multiple camera (MC) vision system, by applying a task to several examinees. The reality, depth feeling, and controllability are evaluated for the examinees ...

  • PDF
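The camera-switching idea in the abstract reduces to a selection rule: activate the camera whose viewing direction best matches the operator's current gaze or joystick direction. A minimal sketch, assuming unit direction vectors and a dot-product alignment score (camera names and vectors are illustrative, not from the paper):

```python
def select_camera(head_dir, cameras):
    # cameras: name -> unit view-direction vector.
    # Pick the camera most aligned with the operator's head direction,
    # i.e. the one with the largest dot product.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(cameras, key=lambda name: dot(head_dir, cameras[name]))

cams = {
    "front": (0.0, 0.0, 1.0),
    "left":  (-1.0, 0.0, 0.0),
    "right": (1.0, 0.0, 0.0),
    "top":   (0.0, -1.0, 0.0),
}
best = select_camera((0.9, 0.0, 0.4), cams)  # gaze leaning to the right
```

Switching only the displayed feed (rather than moving a single camera) is what gives the operator the quick viewpoint changes of natural inspection.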

Development and Packaging of Multi-channel Imaging Module for Near-infrared Fluorescence Imaging System (근적외선 형광 영상 시스템용 다채널 영상 모듈 개발 및 패키징)

  • Kim, Taehoon;Seo, Kyung Hwan;Lee, Hak Keun;Jeong, Myung Yung
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.26 no.2
    • /
    • pp.59-64
    • /
    • 2019
  • In this paper, we introduced a near-infrared multi-channel fluorescence imaging system and analyzed the effects of measurement variables such as exposure time, working distance, and intensity of the excitation light. The fluorescence signal increases as the exposure time becomes longer, the excitation light intensity increases, or the working distance becomes smaller. Furthermore, the proper composition of optical filters and precise packaging of the imaging modules prevent an increase of the background signal. Thus, we confirmed an increase in the signal-to-background ratio (SBR). Based on the results of this research, we proposed a method for using a multi-channel fluorescence imaging system.
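The SBR figure of merit used above is simply the mean fluorescence intensity inside the region of interest divided by the mean intensity of the background, so suppressing background (better filters, tighter packaging) raises it just as effectively as boosting signal. A minimal sketch with illustrative pixel values (not data from the paper):

```python
def signal_to_background_ratio(roi_pixels, background_pixels):
    # SBR = mean intensity over the fluorescent region of interest
    #       / mean intensity over a background region.
    roi_mean = sum(roi_pixels) / len(roi_pixels)
    bg_mean = sum(background_pixels) / len(background_pixels)
    return roi_mean / bg_mean

# Illustrative 8-bit intensities: bright fluorescent spot vs. dim background.
sbr = signal_to_background_ratio([200, 220, 210], [20, 30, 25])
```

Longer exposure raises both means, which is why the filter/packaging measures that keep the background mean low matter for the final SBR.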

A Study on the Vision-Based Inspection System for Ball-Stud (비전을 이용한 볼-스터드 검사 시스템에 관한 연구)

  • 장영훈;권태종;한창수;문영식
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.15 no.12
    • /
    • pp.7-13
    • /
    • 1998
  • In this paper, an automatic ball-stud inspection system has been developed using a computer-aided vision system. An index table has been used for rapid measurement, and multiple cameras have been used to obtain high resolution in the physical system. Camera calibration was introduced to make the inspection reliable. Image processing and data analysis algorithms for the ball-stud inspection system have been investigated and executed quickly with high accuracy. As a result, the ball-stud inspection system could be used with high resolution in real time.

  • PDF

Jointly Image Topic and Emotion Detection using Multi-Modal Hierarchical Latent Dirichlet Allocation

  • Ding, Wanying;Zhu, Junhuan;Guo, Lifan;Hu, Xiaohua;Luo, Jiebo;Wang, Haohong
    • Journal of Multimedia Information System
    • /
    • v.1 no.1
    • /
    • pp.55-67
    • /
    • 2014
  • Image topic and emotion analysis is an important component of online image retrieval, which has become very popular in the rapidly growing social media community. However, due to the gaps between images and texts, there is very limited work in the literature that detects an image's topics and emotions in a unified framework, although topics and emotions are two levels of semantics that often work together to comprehensively describe an image. In this work, a unified model, the Joint Topic/Emotion Multi-Modal Hierarchical Latent Dirichlet Allocation (JTE-MMHLDA) model, is proposed; it extends the previous LDA, mmLDA, and JST models to capture topic and emotion information at the same time from heterogeneous data. Specifically, a two-level graphical model is built to share topics and emotions across the whole document collection. The experimental results on a Flickr dataset indicate that the proposed model efficiently discovers images' topics and emotions, outperforming the text-only system by 4.4% and the vision-only system by 18.1% in topic detection, and outperforming the text-only system by 7.1% and the vision-only system by 39.7% in emotion detection.

  • PDF
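The core idea of modeling topics and emotions jointly can be illustrated with a toy generative sketch: every token of an image-document is explained either by a topic or by an emotion, so both semantic levels share one document. The vocabularies, the 50/50 switch, and the function name below are illustrative assumptions in the spirit of JTE-MMHLDA, not the paper's learned parameters or inference procedure.

```python
import random

# Toy topic and emotion vocabularies (illustrative, not from the paper).
TOPIC_WORDS = {"beach": ["sand", "wave", "sun"], "city": ["street", "car", "crowd"]}
EMOTION_WORDS = {"joy": ["bright", "smile"], "calm": ["quiet", "soft"]}

def generate_document(n_tokens, seed=0):
    # Generative story: for each token, first decide whether a topic or an
    # emotion explains it, then draw the token from that latent variable's
    # word distribution (here, uniform over a small vocabulary).
    rng = random.Random(seed)
    tokens = []
    for _ in range(n_tokens):
        if rng.random() < 0.5:                      # token drawn via a topic
            topic = rng.choice(sorted(TOPIC_WORDS))
            tokens.append(rng.choice(TOPIC_WORDS[topic]))
        else:                                       # token drawn via an emotion
            emotion = rng.choice(sorted(EMOTION_WORDS))
            tokens.append(rng.choice(EMOTION_WORDS[emotion]))
    return tokens

doc = generate_document(8)
```

Inference in the actual model runs this story in reverse, recovering which topics and emotions best explain an observed image-document.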

Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment (이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정)

  • Jin, Tae-Seok;Lee, Min-Jung;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.5
    • /
    • pp.434-443
    • /
    • 2007
  • Presently, the exploration of an unknown environment is an important task for the new generation of mobile service robots, and mobile robots are navigated by a number of methods, using navigation systems such as sonar sensing or visual sensing. To fully utilize the strengths of both the sonar and visual sensing systems, this paper presents a technique for localizing a mobile robot using fused data from multiple ultrasonic sensors and a vision system. The mobile robot is designed for operation in a well-structured environment that can be represented by planes, edges, corners, and cylinders in terms of structural features. In the case of ultrasonic sensors, these features carry range information in the form of the arc of a circle, generally called an RCD (Region of Constant Depth). Localization is the continual provision of knowledge of position, deduced from an a priori position estimate. The environment of the robot is modeled as a two-dimensional grid map. We define a vision-based environment recognition and a physically-based sonar sensor model, and employ an extended Kalman filter to estimate the position of the robot. The performance and simplicity of the approach are demonstrated with the results produced by sets of experiments using a mobile robot.
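The fusion step at the heart of such an EKF can be illustrated in one dimension: each measurement update blends a new observation into the current estimate in proportion to their variances, so fusing a sonar RCD range and a vision range both corrects the position and shrinks its uncertainty. A scalar sketch with illustrative numbers (not the paper's full 2D grid-map filter):

```python
def kalman_update(x, P, z, R):
    # Scalar Kalman/EKF measurement update: fuse observation z (variance R)
    # into the state estimate x (variance P).
    K = P / (P + R)            # Kalman gain: trust z more when P >> R
    x_new = x + K * (z - x)    # corrected estimate
    P_new = (1.0 - K) * P      # uncertainty always shrinks
    return x_new, P_new

# Fuse a sonar RCD range and a vision-based range to the same wall:
x, P = 2.0, 0.5                             # prior distance (m) and variance
x, P = kalman_update(x, P, z=2.2, R=0.2)    # sonar measurement
x, P = kalman_update(x, P, z=2.1, R=0.1)    # vision measurement
```

Repeating this update as the robot observes planes, edges, and corners is what keeps the grid-map pose estimate from drifting.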

Vision Inspection for Flexible Lens Assembly of Camera Phone (카메라 폰 렌즈 조립을 위한 비전 검사 방법들에 대한 연구)

  • Lee I.S.;Kim J.O.;Kang H.S.;Cho Y.J.;Lee G.B.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2006.05a
    • /
    • pp.631-632
    • /
    • 2006
  • The assembly of camera lens modules for mobile phones has not been automated so far. They are still assembled manually because of the high precision of all the parts and because the lens is hard to recognize with a vision camera. In addition, the very short life cycle of camera phone lenses requires flexible and intelligent automation. This study proposes a fast and accurate part identification system that distributes cameras for a 4-degree-of-freedom assembly robot system. Single or multiple cameras can be installed according to the part's image capture and processing mode. The system has an agile structure which enables adaptation with minimal job changes. The framework is proposed, and experimental results are shown to prove its effectiveness.

  • PDF