• Title/Abstract/Keyword: Vision Based Sensor

424 search results

Feature Extraction for Vision Based Micromanipulation

  • Jang, Min-Soo;Lee, Seok-Joo;Park, Gwi-Tae
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2002 ICCAS / pp.41.5-41 / 2002
  • This paper presents a feature extraction algorithm for vision-based micromanipulation. To guarantee accurate micromanipulation, most micromanipulation systems use a vision sensor. Vision data from an optical microscope or a high-magnification lens carry a vast amount of information; however, characteristics of micro images such as emphasized contours, texture, and noise make it difficult to apply macro image processing algorithms to them. Extracting grasping points is a critical task in micromanipulation, because inaccurate grasping points can break the micro gripper or cause it to miss micro objects. To solve those problems and extract grasping points for micromanipulation...
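The abstract is truncated above, so the authors' actual extraction algorithm is not shown. Purely as a rough illustration of what grasping-point extraction for a two-finger gripper can look like, the sketch below binarizes a micro image, picks the largest contour, and places two candidate grasp points across the object's narrower axis. All thresholds, kernel sizes, and the geometry convention are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def candidate_grasp_points(gray_img):
    """Sketch: extract two opposing grasp-point candidates for a micro object.

    Assumes a bright object on a dark background; parameters are
    placeholders, not taken from the paper.
    """
    # Suppress micro-image noise before thresholding.
    blurred = cv2.GaussianBlur(gray_img, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    target = max(contours, key=cv2.contourArea)  # largest blob = object

    # Fit a rotated rectangle and grasp across its narrower dimension.
    # (OpenCV's angle convention varies by version; treated loosely here.)
    (cx, cy), (w, h), angle = cv2.minAreaRect(target)
    theta = np.deg2rad(angle)
    u = np.array([np.cos(theta), np.sin(theta)])    # along the 'w' side
    v = np.array([-np.sin(theta), np.cos(theta)])   # along the 'h' side
    axis, half = (u, w / 2.0) if w < h else (v, h / 2.0)
    center = np.array([cx, cy])
    return center + half * axis, center - half * axis
```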

Vision-Based Obstacle Collision Risk Estimation of an Unmanned Surface Vehicle

  • 우주현;김낙완
    • Journal of Institute of Control, Robotics and Systems / Vol. 21, No. 12 / pp.1089-1099 / 2015
  • This paper proposes vision-based collision risk estimation method for an unmanned surface vehicle. A robust image-processing algorithm is suggested to detect target obstacles from the vision sensor. Vision-based Target Motion Analysis (TMA) was performed to transform visual information to target motion information. In vision-based TMA, a camera model and optical flow are adopted. Collision risk was calculated by using a fuzzy estimator that uses target motion information and vision information as input variables. To validate the suggested collision risk estimation method, an unmanned surface vehicle experiment was performed.
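The abstract names a fuzzy estimator but not its design. The sketch below shows one plausible shape for such an estimator, assuming two illustrative inputs (obstacle distance and closing speed from TMA), shoulder-shaped membership functions, and a small rule table; none of these choices come from the paper.

```python
import numpy as np

def fall(x, a, b):
    """Left shoulder: 1 below a, linearly down to 0 at b."""
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def rise(x, a, b):
    """Right shoulder: 0 below a, linearly up to 1 at b."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def collision_risk(distance_m, closing_speed_mps):
    """Sketch of a fuzzy collision-risk estimator (assumed, not from paper).

    Returns a risk value in [0, 1] via a weighted average of rule outputs.
    Membership ranges and consequents are illustrative placeholders.
    """
    d_near = fall(distance_m, 40, 120)
    d_far  = rise(distance_m, 40, 120)
    v_slow = fall(closing_speed_mps, 1, 5)
    v_fast = rise(closing_speed_mps, 1, 5)

    # Rule table: (firing strength, consequent risk level).
    rules = [
        (min(d_near, v_fast), 1.0),   # near and closing fast -> high risk
        (min(d_near, v_slow), 0.6),
        (min(d_far,  v_fast), 0.5),
        (min(d_far,  v_slow), 0.1),   # far and slow -> low risk
    ]
    w = sum(s for s, _ in rules)
    return sum(s * r for s, r in rules) / w if w > 0 else 0.0

print(collision_risk(30.0, 4.0))  # near, fast-closing obstacle -> high risk
```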

A Ubiquitous Vision System Based on the Identified Contract Net Protocol

  • 김치호;유범재;김학배
    • The Transactions of the Korean Institute of Electrical Engineers: Systems and Control Section D / Vol. 54, No. 10 / pp.620-629 / 2005
  • In this paper, a new protocol-based approach is proposed for the development of a ubiquitous vision system. The approach treats the ubiquitous vision system as a multiagent system, so each vision sensor can be regarded as an agent (vision agent). Each vision agent independently performs exact segmentation of a target using color and motion information, visual tracking of multiple targets in real time, and location estimation by a simple perspective transform. The problem of matching a target's identity during handover between vision agents is solved by the Identified Contract Net (ICN) protocol implemented for this approach. The protocol-based approach is independent of the number of vision agents and, moreover, requires neither calibration nor overlapping fields of view between vision agents. The ICN protocol therefore improves the speed, scalability, and modularity of the system. The approach was successfully applied to our ubiquitous vision system and operated well in several experiments.
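As a rough illustration of the contract-net pattern the ICN protocol builds on, the sketch below shows the agent currently tracking a target announcing a handover, collecting bids from other camera agents, and awarding the track (tagged with its identity) to the best bidder. All class and method names are invented for illustration; the paper's actual protocol adds the identity management the sketch only hints at.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    agent_id: str
    score: float  # e.g. predicted visibility of the target for this agent

class VisionAgent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.tracked = {}  # target_id -> track state

    def bid(self, target_id, last_position):
        return Bid(self.agent_id, self.visibility_score(last_position))

    def visibility_score(self, position):
        return 1.0  # placeholder; a real agent would use its FOV model

    def award(self, target_id, track_state):
        # The identity travels with the contract, so no re-identification
        # or overlapping field of view is needed at handover.
        self.tracked[target_id] = track_state

def handover(manager, agents, target_id, track_state, last_position):
    """Contract-net style handover: announce, collect bids, award."""
    bids = [a.bid(target_id, last_position) for a in agents if a is not manager]
    winner_id = max(bids, key=lambda b: b.score).agent_id
    winner = next(a for a in agents if a.agent_id == winner_id)
    winner.award(target_id, track_state)
    manager.tracked.pop(target_id, None)
    return winner
```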

Multi-point displacement monitoring of bridges using a vision-based approach

  • Ye, X.W.;Yi, Ting-Hua;Dong, C.Z.;Liu, T.;Bai, H.
    • Wind and Structures / Vol. 20, No. 2 / pp.315-326 / 2015
  • To overcome the drawbacks of traditional contact-type sensors for structural displacement measurement, vision-based technology aided by digital image processing algorithms has received increasing attention from the structural health monitoring (SHM) community. Advanced vision-based systems have been widely used to measure the structural displacement of civil engineering structures owing to their merits of non-contact, long-distance, and high-resolution measurement. However, few currently available vision-based systems can measure structural displacement synchronously at multiple points on the investigated structure. In this paper, a method for vision-based multi-point structural displacement measurement is presented. A series of moving-load experiments on a scale arch bridge model is carried out to validate the accuracy and reliability of the vision-based system for multi-point structural displacement measurement. The structural displacements of five points on the bridge deck are measured by the vision-based system and compared with those obtained by linear variable differential transformers (LVDTs). The comparative study demonstrates that the vision-based system is an effective and reliable means of multi-point structural displacement measurement.
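The abstract does not state the tracking algorithm; a common way to realize synchronous multi-point measurement is normalized cross-correlation template matching on several regions of interest per frame. The sketch below follows that generic approach and is not necessarily the system used in the paper; the pixel-to-millimetre scale factor is a placeholder that would come from calibration in practice.

```python
import cv2
import numpy as np

def track_points(frames, templates, rois, mm_per_pixel=0.1):
    """Track pixel displacement of several targets across video frames.

    frames:    iterable of grayscale images
    templates: list of template patches, one per measurement point
    rois:      list of (x, y, w, h) search windows around each point
    mm_per_pixel: assumed calibration scale (placeholder value)
    Returns an array of shape (n_frames, n_points, 2) in millimetres.
    """
    history = []
    for frame in frames:
        positions = []
        for tmpl, (x, y, w, h) in zip(templates, rois):
            search = frame[y:y + h, x:x + w]
            res = cv2.matchTemplate(search, tmpl, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(res)  # best-match location
            positions.append((x + max_loc[0], y + max_loc[1]))
        history.append(positions)
    disp_px = np.array(history, dtype=float) - np.array(history[0])
    return disp_px * mm_per_pixel  # displacement relative to first frame
```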

Machine Vision Platform for High-Precision Detection of Disease VOC Biomarkers Using a Colorimetric MOF-Based Gas Sensor Array

  • 이준영;오승윤;김동민;김영웅;허정석;이대식
    • Journal of Sensor Science and Technology / Vol. 33, No. 2 / pp.112-116 / 2024
  • Gas-sensor technology for volatile organic compound (VOC) biomarker detection offers significant advantages for noninvasive diagnostics, including rapid response time and low operational cost, and thus shows promising potential for disease diagnosis. Colorimetric gas sensors, which enable intuitive analysis of gas concentration through color changes, offer additional benefits for the development of personal diagnostic kits. However, the traditional method of visually monitoring these sensors limits quantitative analysis and consistency in detection-threshold evaluation, potentially affecting diagnostic accuracy. To address this, we developed a machine vision platform for colorimetric metal-organic framework (MOF) gas sensor arrays, designed to accurately detect disease-related VOC biomarkers. The platform integrates a CMOS camera module, a gas chamber, and a colorimetric MOF sensor jig to quantitatively assess color changes. A specialized machine vision algorithm identifies the color-change region of interest (ROI) in the captured images and monitors the color trends. Performance was evaluated through experiments using four types of low-concentration standard gases, and a limit of detection (LoD) at the 100 ppb level was observed. This approach significantly enhances the potential for noninvasive and accurate disease diagnosis by detecting low-concentration VOC biomarkers and offers a novel diagnostic tool.
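As a rough sketch of the quantitative color readout such a platform performs, the code below averages the RGB values inside one ROI for each captured frame and reports the color difference from the initial frame. The ROI is assumed given, and Euclidean RGB distance is a simplified stand-in for the paper's algorithm; in practice a calibration curve would map this scalar to gas concentration.

```python
import numpy as np

def roi_color_trend(frames, roi):
    """Monitor the mean-color change of one sensor spot over time.

    frames: list of HxWx3 uint8 images captured by the CMOS camera
    roi:    (x, y, w, h) for one colorimetric sensor spot
    Returns the Euclidean color distance of each frame from the first.
    """
    x, y, w, h = roi
    means = np.array([
        frame[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)
        for frame in frames
    ])
    return np.linalg.norm(means - means[0], axis=1)  # one scalar per frame
```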

Implementation of a Stereo Vision Using Saliency Map Method

  • Choi, Hyeung-Sik;Kim, Hwan-Sung;Shin, Hee-Young;Lee, Min-Ho
    • Journal of Advanced Marine Engineering and Technology / Vol. 36, No. 5 / pp.674-682 / 2012
  • A new intelligent stereo vision sensor system was studied for the motion and depth control of unmanned vehicles. A new bottom-up saliency map model for a human-like active stereo vision system, based on the biological visual process, was developed to select a target object. If the left and right cameras successfully find the same target object, the implemented active vision system with two cameras focuses on a landmark and can detect depth and direction information. Using this information, the unmanned vehicle can approach the target autonomously. A number of tests of the proposed bottom-up saliency map were performed, and their results are presented.
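Bottom-up saliency models of this kind typically build on center-surround contrast across scales (in the style of Itti et al.), and stereo depth follows the standard triangulation relation Z = f·B/d. The sketch below is a heavily simplified intensity-only version with illustrative scale parameters, not the authors' model; a full model would add color-opponency and orientation channels.

```python
import cv2
import numpy as np

def saliency_map(gray, levels=5):
    """Simplified bottom-up saliency: intensity center-surround contrast.

    Builds a Gaussian pyramid and accumulates |center - surround|
    differences, upsampled back to the input size.
    """
    img = gray.astype(np.float32) / 255.0
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    h, w = img.shape
    sal = np.zeros((h, w), np.float32)
    for c in (1, 2):           # "center" scales
        for delta in (2, 3):   # "surround" scale = center + delta
            s = c + delta
            if s >= len(pyramid):
                continue
            center = cv2.resize(pyramid[c], (w, h))
            surround = cv2.resize(pyramid[s], (w, h))
            sal += np.abs(center - surround)
    return sal / (sal.max() + 1e-9)

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px
```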

Development of a Lateral Control System for Autonomous Vehicles Using Data Fusion of Vision and IMU Sensors with Field Tests

  • 박은성;유창호;최재원
    • Journal of Institute of Control, Robotics and Systems / Vol. 21, No. 3 / pp.179-186 / 2015
  • In this paper, a novel lateral control system is proposed to improve lane keeping performance independently of GPS signals. Lane keeping is a key function for the realization of unmanned driving systems. To achieve this objective, a vision sensor based real-time lane detection scheme is developed. Furthermore, we employ data fusion, together with the real-time steering angle of the test vehicle, to improve lane keeping performance; the fused direction data are obtained from an IMU sensor and a vision sensor. The performance of the proposed system was verified by computer simulations and by field tests using the MOHAVE, a commercial vehicle from Kia Motors of Korea.
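The abstract names the fusion but not the filter. A common lightweight choice for fusing a drift-free but noisy vision heading with a smooth but drifting IMU yaw rate is a complementary filter, followed by a proportional steering law on lane offset and heading error. The sketch below is that generic pattern, not the paper's controller; gains, signs, and units are placeholder assumptions.

```python
class LateralController:
    """Complementary-filter heading fusion + proportional lane keeping.

    Generic sketch: the vision heading is accurate on average but noisy,
    while the integrated IMU yaw rate is smooth but drifts, so the two
    are blended at every time step.
    """

    def __init__(self, alpha=0.98, k_offset=0.4, k_heading=1.2):
        self.alpha = alpha          # trust in IMU integration per step
        self.k_offset = k_offset    # gain on lateral offset [rad/m]
        self.k_heading = k_heading  # gain on heading error  [rad/rad]
        self.heading = 0.0          # fused heading estimate [rad]

    def fuse(self, yaw_rate_imu, heading_vision, dt):
        predicted = self.heading + yaw_rate_imu * dt
        self.heading = (self.alpha * predicted
                        + (1.0 - self.alpha) * heading_vision)
        return self.heading

    def steering_command(self, lane_offset_m, lane_heading):
        # Steer toward the lane center and align with the lane direction
        # (sign conventions depend on the vehicle frame used).
        heading_error = lane_heading - self.heading
        return self.k_offset * lane_offset_m + self.k_heading * heading_error
```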

Integrated Navigation Design Using a Gimbaled Vision/LiDAR System with an Approximate Ground Description Model

  • Yun, Sukchang;Lee, Young Jae;Kim, Chang Joo;Sung, Sangkyung
    • International Journal of Aeronautical and Space Sciences / Vol. 14, No. 4 / pp.369-378 / 2013
  • This paper presents a vision/LiDAR integrated navigation system that provides accurate relative navigation performance on a general ground surface in GNSS-denied environments. The ground surface considered during flight is approximated as a piecewise continuous model with flat and sloped surface profiles. The presented system consists of a strapdown IMU and an aiding sensor block comprising a vision sensor and a LiDAR on a stabilized gimbal platform. Two-dimensional optical flow vectors from the vision sensor and range information from the LiDAR to the ground are used to overcome the performance limit of the tactical-grade inertial navigation solution without a GNSS signal. In the filter realization, the INS error model is employed, with measurement vectors containing two-dimensional velocity errors and one differenced altitude in the navigation frame. In computing the altitude difference, the ground slope angle is estimated in a novel way through two bisectional LiDAR signals, under a practical assumption representing a general ground profile. Finally, the overall integrated system is implemented within the extended Kalman filter framework, and its performance is demonstrated in a simulation study with an aircraft flight trajectory scenario.
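One simple geometry that realizes the two-beam slope-estimation idea is sketched below, assuming two beams tilted symmetrically about the local vertical that hit the same sloped plane; the slope is the inclination of the line through the two hit points. The beam arrangement and frame conventions here are assumptions for illustration, not the paper's exact derivation.

```python
import math

def ground_slope(r_fwd, r_aft, tilt_rad):
    """Estimate ground slope from two LiDAR ranges (illustrative geometry).

    Two beams tilted +/- tilt_rad from the local vertical hit the ground
    at ranges r_fwd and r_aft. Hit points in a level body frame:
        forward beam: ( r_fwd * sin(tilt), -r_fwd * cos(tilt) )
        aft beam:     (-r_aft * sin(tilt), -r_aft * cos(tilt) )
    """
    x1, z1 = r_fwd * math.sin(tilt_rad), -r_fwd * math.cos(tilt_rad)
    x2, z2 = -r_aft * math.sin(tilt_rad), -r_aft * math.cos(tilt_rad)
    return math.atan2(z1 - z2, x1 - x2)

# Flat ground: equal ranges give zero slope.
print(math.degrees(ground_slope(10.0, 10.0, math.radians(15))))  # ~0.0
```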

Development of a Control Simulator for the Integrated Sensor Module of a Vehicle

  • 전진영;박정연;변형기
    • Journal of Sensor Science and Technology / Vol. 22, No. 1 / pp.65-70 / 2013
  • The integrated sensor module of a vehicle combines the functions of a rain sensor, an auto defog sensor, and a sun angle sensor, which were originally implemented separately, into a single module. The integrated module should match or exceed the performance of each individual module, so it is important to verify its stability and accuracy under various situations. Such verification requires measured data from the actual integrated sensor module, but collecting data under the many relevant operating circumstances costs considerable time and money. By developing a simulator for controlling the integrated sensor module, the various situations can be reproduced and used effectively for its initial verification. In this paper, a simulator has been developed for controlling an integrated sensor module that combines a vision-based rain sensor, an auto defog sensor, an auto light sensor, and a sun angle sensor.
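As a rough sketch of what such a control simulator can do, the code below generates synthetic stimuli (rain intensity, ambient light, sun angle) for a scripted scenario and feeds them to a module-controller stub, so the control logic can be exercised before measured data are available. All names, signal models, and thresholds are invented for illustration.

```python
import math
import random

def scenario(duration_s, dt=0.1):
    """Yield synthetic stimuli for the integrated sensor module.

    Rain ramps up mid-scenario, ambient light dims as the rain arrives,
    and the sun angle sweeps slowly; all models are toy placeholders.
    """
    steps = int(duration_s / dt)
    for i in range(steps):
        t = i * dt
        rain = (max(0.0, min(1.0, (t - duration_s / 2) / 10))
                + random.gauss(0, 0.02))
        light = 1.0 - 0.5 * rain + random.gauss(0, 0.01)  # normalized lux
        sun_angle = 15 * math.sin(2 * math.pi * t / duration_s)  # degrees
        yield t, {"rain": rain, "light": light, "sun_angle": sun_angle}

class ModuleControllerStub:
    """Stand-in for the integrated sensor module's control logic."""
    def step(self, stimuli):
        wiper_on = stimuli["rain"] > 0.3        # rain-sensor function
        headlamps_on = stimuli["light"] < 0.7   # auto-light function
        return wiper_on, headlamps_on

controller = ModuleControllerStub()
for t, stimuli in scenario(60.0):
    wiper, lamps = controller.step(stimuli)
```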