• Title/Summary/Keyword: multi-vision

Benchmarking on High-speed Image Processing Techniques based on Multi-processor (멀티프로세서 기반의 고속 영상처리 기술에 대한 벤치마킹)

  • Cui, Xue-Nan;Park, Eun-Soo;Kim, Jun-Chul;Kim, Hak-Il
    • Proceedings of the KIEE Conference / 2007.10a / pp.111-112 / 2007
  • This paper introduces methods for developing high-speed image processing algorithms based on multi-processors. As image acquisition has advanced, high-resolution and color images have become commonplace, and many image processing applications therefore require faster algorithms. To meet this demand, the top priority is to develop algorithms that make full use of the multi-processors released in recent years. This paper introduces algorithm development methods that use parallel processing and high-speed image processing libraries such as OpenMP, MIL (Matrox Image Library), OpenCV, IPP (Integrated Performance Primitives), and SSE (Streaming SIMD (Single Instruction Multiple Data) Extensions), and analyzes and evaluates the performance of the algorithms produced by each method. The experimental results show that algorithms such as Mean, Dilation, Erosion, Open, Closing, and Sobel implemented with SSE, IPP, and MIL (Thread), applied to a 4057×4048 image, achieve good performance of 7 to 35 msec, outperforming the other methods.

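The entry above benchmarks OpenMP-, MIL-, OpenCV-, IPP-, and SSE-based implementations of basic operators such as Mean, Erosion, and Sobel. The paper's code is not reproduced here; the following is only a minimal sketch of the kind of OpenMP-parallelized 3×3 mean filter such a benchmark would time, assuming an 8-bit grayscale image stored row-major (function and buffer names are illustrative).

    #include <cstdint>
    #include <vector>

    // 3x3 mean filter over an 8-bit grayscale image (borders copied as-is).
    // Rows are distributed across cores with OpenMP, one of the parallel
    // schemes compared in the paper alongside IPP/SSE/MIL implementations.
    void mean3x3(const std::vector<uint8_t>& src, std::vector<uint8_t>& dst,
                 int width, int height)
    {
        dst = src;                                  // keeps the border pixels
        #pragma omp parallel for                    // split rows over threads
        for (int y = 1; y < height - 1; ++y) {
            for (int x = 1; x < width - 1; ++x) {
                int sum = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        sum += src[(y + dy) * width + (x + dx)];
                dst[y * width + x] = static_cast<uint8_t>(sum / 9);
            }
        }
    }

According to the abstract, the SSE, IPP, and MIL (Thread) variants of such operators ran in roughly 7 to 35 msec on a 4057×4048 image, faster than the other approaches tested.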

Calibration for Color Measurement of Lean Tissue and Fat of the Beef

  • Lee, S.H.;Hwang, H.
    • Agricultural and Biosystems Engineering / v.4 no.1 / pp.16-21 / 2003
  • In the agricultural field, machine vision systems have been widely used to automate inspection processes, especially quality grading. Although a machine vision system is very effective in quantifying geometric quality factors, it has a deficiency in quantifying color information. This study was conducted to evaluate the color of beef using a machine vision system. Although measuring the color of beef with a machine vision system has the advantage of covering the whole lean tissue area at once, unlike a colorimeter, it is sensitive to system components such as the type of camera, the lighting conditions, and so on. The effect of the camera's color balancing control was investigated, and a multi-layer BP neural network based color calibration process was developed. The color calibration network model was trained using reference color patches and showed high correlation with the L*a*b* coordinates of a colorimeter. The proposed calibration process adapted successfully to various measurement environments, such as different types of cameras and light sources. Results of the proposed calibration process were also compared with MLR based calibration. The color calibration network was successfully applied to measuring the color of beef. However, it was suggested that the reflectance properties of the calibration reference materials and of the test materials should be considered to achieve more accurate color measurement.

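The calibration described above maps camera RGB values to colorimeter L*a*b* coordinates with a multi-layer BP neural network trained on reference color patches. The paper gives no implementation details, so the sketch below shows only the inference step, assuming a single sigmoid hidden layer with weights already obtained by backpropagation (layer size and variable names are illustrative).

    #include <array>
    #include <cmath>
    #include <vector>

    // Forward pass of a small RGB -> L*a*b* calibration network:
    // 3 inputs, H sigmoid hidden units, 3 linear outputs.
    // w1/b1/w2/b2 are assumed to have been trained by backpropagation
    // against colorimeter readings of reference patches.
    struct ColorCalibNet {
        int H = 6;                               // hidden units (placeholder)
        std::vector<double> w1, b1, w2, b2;      // w1: H x 3, w2: 3 x H

        std::array<double, 3> calibrate(const std::array<double, 3>& rgb) const {
            std::vector<double> h(H);
            for (int j = 0; j < H; ++j) {        // hidden layer, sigmoid
                double s = b1[j];
                for (int i = 0; i < 3; ++i) s += w1[j * 3 + i] * rgb[i];
                h[j] = 1.0 / (1.0 + std::exp(-s));
            }
            std::array<double, 3> lab{};         // linear outputs: L*, a*, b*
            for (int k = 0; k < 3; ++k) {
                double s = b2[k];
                for (int j = 0; j < H; ++j) s += w2[k * H + j] * h[j];
                lab[k] = s;
            }
            return lab;
        }
    };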

Implementation of Virtual Instrumentation based Realtime Vision Guided Autopilot System and Onboard Flight Test using Rotary UAV (가상계측기반 실시간 영상유도 자동비행 시스템 구현 및 무인 로터기를 이용한 비행시험)

  • Lee, Byoung-Jin;Yun, Suk-Chang;Lee, Young-Jae;Sung, Sang-Kyung
    • Journal of Institute of Control, Robotics and Systems / v.18 no.9 / pp.878-886 / 2012
  • This paper investigates the implementation and flight test of a realtime vision guided autopilot system based on a virtual instrumentation platform. A graphical design process on the virtual instrumentation platform is used throughout for the image processing, the communication between systems, the vehicle dynamics control, and the vision coupled guidance algorithms. A significant objective of the algorithm is to achieve an autopilot that is robust to the environment, despite wind and irregular image acquisition conditions. For robust vision guided path tracking and hovering performance, the flight path guidance logic is combined, on a multi-conditional basis, with a position estimation algorithm coupled with the vehicle attitude dynamics. An onboard flight test equipped with the developed realtime vision guided autopilot system was conducted using a rotary UAV system with full attitude control capability. The outdoor flight test demonstrated that the designed vision guided autopilot system succeeded in hovering the UAV over a ground target to within several meters under generally windy conditions.
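
The autopilot above couples a vision-derived target position estimate with the vehicle attitude dynamics to hold a hover over a ground target. The paper's guidance logic is multi-conditional and built on a virtual instrumentation platform; the sketch below is only a rough illustration of the core idea, turning a target offset in the image into horizontal velocity commands with a proportional law (the camera model, gains, and names are assumptions).

    #include <algorithm>
    #include <cmath>

    struct HoverCmd { double vx, vy; };              // body-frame velocity commands [m/s]

    // Hypothetical hover guidance: a downward-looking camera sees the ground
    // target offset from the image center; the offset is scaled to meters with
    // a pinhole model and fed into a proportional velocity command.
    HoverCmd hoverGuidance(double px, double py,     // target offset [pixels]
                           double altitude_m,        // height above target [m]
                           double focal_px,          // focal length [pixels]
                           double kp = 0.8,          // proportional gain [1/s]
                           double v_max = 2.0)       // command limit [m/s]
    {
        double ex = altitude_m * px / focal_px;      // metric offset on the ground
        double ey = altitude_m * py / focal_px;
        HoverCmd cmd{ -kp * ex, -kp * ey };          // fly toward the target
        cmd.vx = std::clamp(cmd.vx, -v_max, v_max);
        cmd.vy = std::clamp(cmd.vy, -v_max, v_max);
        return cmd;
    }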

Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment (이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정)

  • Jin, Tae-Seok;Lee, Min-Jung;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.13 no.5 / pp.434-443 / 2007
  • Presently, the exploration of an unknown environment is an important task for the new generation of mobile service robots, and mobile robots are navigated by a number of methods, using sensing systems such as sonar or vision. To fully utilize the strengths of both the sonar and visual sensing systems, this paper presents a technique for localization of a mobile robot using fused data from multiple ultrasonic sensors and a vision system. The mobile robot is designed to operate in a well-structured environment that can be represented by planes, edges, corners, and cylinders in terms of structural features. In the case of ultrasonic sensors, these features carry range information in the form of the arc of a circle, generally called an RCD (Region of Constant Depth). Localization is the continual provision of knowledge of position, deduced from the robot's a priori position estimate. The environment of the robot is modeled as a two-dimensional grid map. We define a vision-based environment recognition method and a physically-based sonar sensor model, and employ an extended Kalman filter to estimate the position of the robot. The performance and simplicity of the approach are demonstrated with the results produced by sets of experiments using a mobile robot.
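
The localization above fuses sonar RCD features and vision over a two-dimensional grid map with an extended Kalman filter. The paper's sensor models are not reproduced here; the sketch below only shows the structure of the EKF predict step for an (x, y, θ) pose driven by odometry, assuming Eigen for the matrix algebra and placeholder noise values.

    #include <cmath>
    #include <Eigen/Dense>

    // EKF predict step for a planar robot pose [x, y, theta] given odometry
    // (travelled distance d and heading change dtheta). The update step would
    // fuse sonar RCD ranges and vision features, as described in the paper.
    struct PoseEkf {
        Eigen::Vector3d x = Eigen::Vector3d::Zero();             // pose estimate
        Eigen::Matrix3d P = Eigen::Matrix3d::Identity();         // pose covariance
        Eigen::Matrix3d Q = 0.01 * Eigen::Matrix3d::Identity();  // process noise (placeholder)

        void predict(double d, double dtheta) {
            const double th = x(2);
            x(0) += d * std::cos(th);
            x(1) += d * std::sin(th);
            x(2) += dtheta;

            Eigen::Matrix3d F = Eigen::Matrix3d::Identity();
            F(0, 2) = -d * std::sin(th);        // d(x)/d(theta)
            F(1, 2) =  d * std::cos(th);        // d(y)/d(theta)
            P = F * P * F.transpose() + Q;      // propagate uncertainty
        }
    };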

Vision Inspection for Flexible Lens Assembly of Camera Phone (카메라 폰 렌즈 조립을 위한 비전 검사 방법들에 대한 연구)

  • Lee I.S.;Kim J.O.;Kang H.S.;Cho Y.J.;Lee G.B.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2006.05a / pp.631-632 / 2006
  • The assembly of camera lens modules for mobile phones has not been automated so far. They are still assembled manually because of the high precision of all parts and the difficulty of recognizing the lens with a vision camera. In addition, the very short life cycle of camera phone lenses requires flexible and intelligent automation. This study proposes a fast and accurate part identification system that distributes cameras over a 4-degree-of-freedom assembly robot system. Single or multiple cameras can be installed according to the part's image capture and processing mode. It has an agile structure which enables adaptation with minimal job changes. The framework is proposed and experimental results are shown to prove its effectiveness.

A Study on the Pedestrian Detection on the Road Using Machine Vision (머신비전을 이용한 도로상의 보행자 검출에 관한 연구)

  • Lee, Byung-Ryong;Truong, Quoc Bao;Kim, Hyoung-Seok;Bae, Yong-Hwan
    • Journal of Institute of Control, Robotics and Systems / v.17 no.5 / pp.490-498 / 2011
  • In this paper, we present a two-stage vision-based approach to detect multiple views of pedestrians in road scene images. The first stage is HG (Hypothesis Generation), in which potential pedestrians are hypothesized. During the hypothesis generation step, we use vertical and horizontal edge maps, and the color difference between the road background and the pedestrians' clothes, to determine the leg positions of pedestrians; then a novel symmetry-peak processing is performed to determine how many pedestrians are covered by one potential candidate region. Finally, the real candidate regions where pedestrians exist are constructed. The second stage is HV (Hypothesis Verification). In this stage, all hypotheses are verified by a Support Vector Machine classifier, which is robust for multi-view pedestrian detection and recognition problems.
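
In the HG/HV scheme above, candidate regions come from edge and color cues and are then verified with a Support Vector Machine. The paper's own features are not reproduced here; the sketch below illustrates only the verification stage, using OpenCV's SVM on HOG descriptors as a common stand-in for the classifier step (window size, labels, and training are assumptions).

    #include <opencv2/opencv.hpp>
    #include <opencv2/ml.hpp>
    #include <vector>

    // Hypothesis Verification (HV) sketch: classify one candidate region as
    // pedestrian / non-pedestrian. HOG is used as a generic descriptor here;
    // the paper's edge/symmetry features would take its place.
    bool verifyCandidate(const cv::Mat& candidateBgr, const cv::Ptr<cv::ml::SVM>& svm)
    {
        cv::Mat window, gray;
        cv::resize(candidateBgr, window, cv::Size(64, 128));   // HOG default window
        cv::cvtColor(window, gray, cv::COLOR_BGR2GRAY);

        cv::HOGDescriptor hog;                                  // 64x128, 9 orientation bins
        std::vector<float> desc;
        hog.compute(gray, desc);

        cv::Mat sample(1, static_cast<int>(desc.size()), CV_32F, desc.data());
        return svm->predict(sample) > 0.5f;                     // label 1 = pedestrian
    }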

Development of Control Algorithm and Pick & Placer (반도체 소자 Pick & Placer 및 제어 알고리즘 개발)

  • 심성보;김재희;유범상
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.1339-1343 / 2004
  • This paper presents the development of a pick & placer and its control algorithm. The pick & placer provides a powerful multi-task system that includes both a graphical and a remote interface. Users can easily set up sorting parameters and record important data including wafer number, date, and operator information. The system is equipped with a dustproof device and massive machined components to provide an extremely stable sorting environment. Precise resolution and accuracy result from using machine vision, a pneumatic slide drive, and closed-loop positioning.

Development of multi-functioned remote impact wrench (다기능 원격 임팩트 렌치 개발)

  • 윤지섭;이재설;박현수
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1990.10a / pp.298-301 / 1990
  • This paper presents technologies to improve the control of an impact wrench. The impact wrench is a tool held by an electro-mechanical manipulator and used to fasten and loosen bolts for remote maintenance of equipment in hostile environments. A vision system was developed to measure distance and improve the positioning of the impact wrench; it uses two laser beams with a CCTV camera. A torque adjusting method was also developed to limit the fastening torque.

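The distance measurement above relies on two laser beams observed by a CCTV camera. The exact geometry is not given in the abstract; the sketch below assumes the simplest arrangement, two parallel beams mounted a known baseline apart, in which case the pixel separation of the two spots on the target surface gives the range through the pinhole model (all names and parameters are illustrative).

    #include <cmath>

    // Hypothetical two-spot ranging: two parallel laser beams, a known baseline
    // apart, project spots onto the target surface. Under a pinhole camera model
    // the spot separation in the image shrinks with distance:
    //   separation_px = focal_px * baseline_m / range_m
    double rangeFromLaserSpots(double u1, double v1,    // spot 1 image position [px]
                               double u2, double v2,    // spot 2 image position [px]
                               double baseline_m,       // beam separation [m]
                               double focal_px)         // focal length [px]
    {
        double separation_px = std::hypot(u2 - u1, v2 - v1);
        if (separation_px <= 0.0) return -1.0;           // spots not resolved
        return focal_px * baseline_m / separation_px;    // range to the surface [m]
    }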

Recognition of Multi-sensor based Car Driving Patterns for GeoVision (GeoVision을 위한 멀티 센서 기반 운전 패턴 인식)

  • Song, Chung-Won;Nam, Kwang-Woo;Lee, Chang-Woo
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.1185-1187 / 2011
  • This paper proposes a multi-sensor based pattern analysis algorithm for analyzing a driver's driving patterns. Driving patterns are recognized by comparing and analyzing correlations in the driving data obtained from the sensors. The gravity values acting on the accelerometer and the heading data from the geomagnetic sensor are used to improve the accuracy of recognizing each driving pattern.
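
The recognition above correlates accelerometer gravity readings with geomagnetic-sensor heading data. The abstract does not give the decision rules, so the sketch below is only a toy illustration of the idea: a window of samples is labeled as a left turn, right turn, or straight driving from the accumulated heading change, with the accelerometer magnitude used as a crude plausibility check (thresholds are placeholders).

    #include <cmath>
    #include <string>
    #include <vector>

    struct Sample { double heading_deg; double accel_g; };    // magnetometer heading, |accel| in g

    // Toy maneuver classifier over a window of samples: accumulate the signed
    // heading change and label the pattern. All thresholds are illustrative.
    std::string classifyManeuver(const std::vector<Sample>& window)
    {
        if (window.size() < 2) return "unknown";
        double turn = 0.0;
        for (size_t i = 1; i < window.size(); ++i) {
            double d = window[i].heading_deg - window[i - 1].heading_deg;
            while (d > 180.0)  d -= 360.0;                    // unwrap across 0/360
            while (d < -180.0) d += 360.0;
            turn += d;
        }
        double mean_g = 0.0;
        for (const Sample& s : window) mean_g += s.accel_g;
        mean_g /= window.size();
        if (std::fabs(mean_g - 1.0) > 0.5) return "unknown";  // implausible gravity reading
        if (turn >  45.0) return "right turn";                // compass heading grows clockwise
        if (turn < -45.0) return "left turn";
        return "straight";
    }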

A Study on Adaptive Control to Fill Weld Groove by Using Multi-Torches in SAW (SAW 용접시 다중 토치를 이용한 용접부 적응제어에 관한 연구)

  • 문형순;김정섭;권혁준;정문영
    • Proceedings of the KWS Conference / 1999.10a / pp.47-50 / 1999
  • The term adaptive control is often used to describe recent advances in welding process control, but strictly it applies only to systems which are able to cope with dynamic changes in system performance. In welding applications, the term adaptive control may not carry the conventional control-theory meaning, but may be used in a more descriptive sense to express the need for the process to adapt to changing welding conditions. This paper proposes a methodology for obtaining a good bead appearance in SAW, based on a multi-torch welding system with a vision system. The adaptive filling control methodologies used the welding current/voltage combination, the arc voltage/welding current/wire feed speed combination, and the welding speed obtained by using the vision sensor. It was shown that the algorithm based on the welding current/voltage combination and welding speed produced a sound weld bead appearance compared with that of the voltage/current combination.

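The adaptive filling control above adjusts welding parameters from a vision measurement of the groove. The paper's current/voltage and wire-feed schedules are not reproduced here; the sketch below only illustrates the underlying mass balance, in which the deposited cross-section per unit length equals the wire deposition rate divided by the travel speed, so a measured groove area can be matched by adjusting the travel speed (names and limits are placeholders).

    #include <algorithm>
    #include <cmath>

    // Toy fill rule: pick the travel speed so the deposited cross-section
    // matches the groove cross-section measured by the vision sensor.
    // Mass balance per unit weld length:
    //   deposited_area = wire_area * wire_feed_speed / travel_speed
    double travelSpeedForFill(double groove_area_mm2,    // groove cross-section from vision
                              double wire_diameter_mm,   // electrode diameter
                              double wire_feed_mm_s,     // wire feed speed
                              double v_min = 3.0,        // travel speed limits [mm/s]
                              double v_max = 20.0)
    {
        const double pi = 3.14159265358979;
        const double wire_area = pi * wire_diameter_mm * wire_diameter_mm / 4.0;
        double v = wire_area * wire_feed_mm_s / groove_area_mm2;
        return std::clamp(v, v_min, v_max);              // keep within process limits
    }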