• Title/Summary/Keyword: Vision Sensor


Development of an Automatic Unmanned Target Object Carrying System for ASV Sensor Evaluation Methods (ASV용 센서통합평가 기술을 위한 무인 타겟 이동 시스템의 개발)

  • Kim, Eunjeong;Song, Insung;Yu, Sybok;Kim, Byungsu
    • Journal of Auto-vehicle Safety Association / v.4 no.2 / pp.32-36 / 2012
  • The automatic unmanned target object carrying system (AUTOCS) was developed for testing road-vehicle radar and vision sensors. When developing ASV or ADAS products, it is important for the test target to reflect realistic target characteristics. The AUTOCS moves a pedestrian or motorcycle target at a desired speed and position, and is designed so that only the payload target (a manikin or a motorcycle) is detected by the sensor, not the AUTOCS itself. To keep its radar exposure low, the AUTOCS has a stealthy shape with a low RCS (Radar Cross Section); to deceive vision sensors, its outer skin carries a specially designed pattern that resembles asphalt. The AUTOCS offers three driving modes: remote control, path following, and replay. The AUTOCS V.1 was tested to verify its radar-detection characteristics and successfully demonstrated that it is not detected by a car radar; the results are presented in this paper.

Efficient Digitizing in Reverse Engineering By Sensor Fusion (역공학에서 센서융합에 의한 효율적인 데이터 획득)

  • Park, Young-Kun;Ko, Tae-Jo;Kim, Hrr-Sool
    • Journal of the Korean Society for Precision Engineering / v.18 no.9 / pp.61-70 / 2001
  • This paper introduces a new digitization method using sensor fusion for shape measurement in reverse engineering. Digitization can be classified into contact and non-contact types according to the measurement device; the key concerns are speed and accuracy, with the non-contact type excelling in speed and the contact type in accuracy. Sensor fusion in digitization incorporates the merits of both types so that the system can be automated. First, a non-contact vision sensor rapidly acquires coarse 3D point data; this step identifies and localizes an object placed at an unknown position on the table. Second, accurate 3D point data are obtained automatically with a scanning probe, guided by the previously measured coarse data. In this research, a large number of equidistant measuring points were generated along the line acquired by the vision system. Finally, the digitized 3D point data are approximated by a rational B-spline surface equation, and the free-form surface information can be transferred to a commercial CAD/CAM system via IGES translation in order to machine the modeled geometric shape.

  • PDF
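The two-stage digitization above (a fast vision pass, then contact-probe measurements at equidistant points along the acquired line) can be sketched in a few lines. The resampler below is an illustration, not the paper's implementation: it turns the coarse polyline from the vision system into equally spaced target points for the scanning probe.

```python
import math

def resample_polyline(points, spacing):
    """Resample a coarse polyline (from the vision pass) into
    equidistant probe targets for the contact scanning probe."""
    # Cumulative arc length along the polyline
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    n = int(total // spacing) + 1
    targets, seg = [], 0
    for i in range(n):
        s = i * spacing
        # Advance to the segment containing arc length s
        while seg + 1 < len(dists) - 1 and dists[seg + 1] < s:
            seg += 1
        t = (s - dists[seg]) / (dists[seg + 1] - dists[seg] or 1.0)
        (x0, y0), (x1, y1) = points[seg], points[seg + 1]
        targets.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return targets
```

In practice the probe coordinates would then be handed to the contact scanner, and the dense measurements fitted with the rational B-spline surface.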

Active Peg-in-hole of Chamferless Parts Using Multi-sensors (다중센서를 사용한 챔퍼가 없는 부품의 능동적인 삽입작업)

  • Jeon, Hun-Jong;Kim, Kab-Il;Kim, Dae-Won;Son, Yu-Seck
    • Proceedings of the KIEE Conference / 1993.07a / pp.410-413 / 1993
  • The chamferless peg-in-hole process for cylindrical parts using a force/torque sensor and a vision sensor is analyzed and simulated in this paper. The peg-in-hole process is classified into a normal mode (position error only) and a tilted mode (position and orientation error); the tilted mode is further divided into small and big tilted modes according to the relative orientation error. Because the big tilted mode occurs only rarely, most papers have dealt only with the normal or small tilted mode, yet most failures of the peg-in-hole process occur in the big tilted mode. This problem is analyzed and simulated here using the force/torque sensor and vision sensor. In the normal mode, fuzzy logic is introduced to combine the data from the force/torque sensor and the vision sensor. The complete processing algorithms and simulations are also presented.

  • PDF
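The fuzzy combination of the two sensors in the normal mode can be illustrated with a minimal sketch. The membership functions and thresholds below are assumptions for illustration, not the rules from the paper: the force/torque estimate is trusted when contact is firm, the vision estimate when contact is light.

```python
def fuse_tilt(ft_tilt, vision_tilt, contact_force):
    """Fuzzy-weighted fusion of tilt-angle estimates from the
    force/torque sensor and the vision sensor.

    Illustrative memberships: "firm contact" ramps from 0 at 2 N
    to 1 at 10 N; "light contact" is its complement.
    """
    firm = min(1.0, max(0.0, (contact_force - 2.0) / 8.0))
    light = 1.0 - firm
    # Defuzzify as the membership-weighted average of the estimates
    return (firm * ft_tilt + light * vision_tilt) / (firm + light)
```

A full implementation would use rule tables over several input variables, but the trust-by-contact-state idea is the same.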

Development of A Vision-based Lane Detection System with Considering Sensor Configuration Aspect (센서 구성을 고려한 비전 기반 차선 감지 시스템 개발)

  • Park Jaehak;Hong Daegun;Huh Kunsoo;Park Jahnghyon;Cho Dongil
    • Transactions of the Korean Society of Automotive Engineers / v.13 no.4 / pp.97-104 / 2005
  • Vision-based lane sensing systems require accurate and robust lane detection. In addition, there is a trade-off between computational burden and processor cost that must be considered when implementing such systems in passenger cars. In this paper, a stereo vision-based lane detection system is developed with sensor configuration aspects taken into account. An inverse perspective mapping method is formulated from the relative correspondence between the left and right cameras so that the 3-dimensional road geometry can be reconstructed robustly. A new monitoring model for estimating the road geometry parameters is constructed to reduce the number of measured signals. The selection of the sensor configuration and specifications is investigated using the characteristics of standard highways, and it is shown that an appropriate sensing region on the camera image coordinates can be determined from the chosen configuration. The proposed system is implemented on a passenger car and verified experimentally.
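The core of inverse perspective mapping is a planar homography from the image plane to the road plane. The sketch below assumes an already-calibrated 3x3 homography H (the paper derives its mapping from the left/right camera correspondence); it simply projects a pixel to ground coordinates.

```python
import numpy as np

def ipm_point(H, u, v):
    """Project image pixel (u, v) onto the road plane through the
    inverse perspective homography H (3x3), with the usual
    homogeneous-coordinate normalization."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```

Applying this to every pixel of a lane-marking detection yields the bird's-eye view in which road geometry parameters are estimated.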

Computer Vision Platform Design with MEAN Stack Basis (MEAN Stack 기반의 컴퓨터 비전 플랫폼 설계)

  • Hong, Seonhack;Cho, Kyungsoon;Yun, Jinseob
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.3 / pp.1-9 / 2015
  • In this paper, we implemented a computer vision platform based on the MEAN stack on a Raspberry Pi 2, an open-source platform, and experimented with face recognition and with logging temperature and humidity sensor data over WiFi. The enclosure of the platform was designed and fabricated directly by 3D printing. Face recognition uses an OpenCV Haar-cascade feature-extraction machine learning algorithm, and wireless connectivity is extended with Bluetooth to interface with Android mobile devices. The resulting vision platform identifies face-recognition characteristics scanned with the Pi camera while gathering temperature and humidity sensor data in an IoT environment. We adopted MongoDB to improve the platform's performance because working with MongoDB documents is more akin to working with objects in a programming language than with a conventional database. In future work, we would enhance the platform with cloud functionalities.
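The IoT logging side of such a platform stores each sensor sample as a document. The sketch below shows the kind of JSON document the platform might insert into MongoDB; the field names are illustrative, not taken from the paper.

```python
import json
import time

def make_reading(temp_c, humidity_pct, ts=None):
    """Build one sensor reading as the kind of document the platform
    would insert into MongoDB (field names are illustrative)."""
    return {
        "ts": ts if ts is not None else time.time(),
        "temperature_c": round(temp_c, 2),
        "humidity_pct": round(humidity_pct, 2),
    }

# Serializes directly to JSON for the MEAN-stack (Express/Node) API
doc = make_reading(23.5, 51.25, ts=0)
payload = json.dumps(doc)
```

Because MongoDB stores BSON documents with this same shape, the reading passes from sensor to database to web front end without schema translation, which is the appeal of the MEAN stack here.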

A Study on the Real-Time Vision Control Method for Manipulator's position Control in the Uncertain Circumstance (불확실한 환경에서 매니퓰레이터 위치제어를 위한 실시간 비젼제어기법에 관한 연구)

  • Jang, W.-S.;Kim, K.-S.;Shin, K.-S.;Joo, C.;Yoon, H.-K.
    • Journal of the Korean Society for Precision Engineering / v.16 no.12 / pp.87-98 / 1999
  • This study concentrates on the development of a real-time estimation model and vision control method, together with experimental tests. The proposed method permits a kind of adaptability not otherwise available, in that the relationship between the camera-space location of manipulable visual cues and the vector of manipulator joint coordinates is estimated in real time. This is done with an estimation model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation as well as uncertainty in the manipulator. The vision control method is robust and reliable, overcoming difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the manipulator, and correct knowledge of the position and orientation of the CCD camera with respect to the manipulator base. Finally, evidence of the ability of the real-time vision control method for manipulator position control is provided by performing thin-rod placement in space with a two-cue test model, completed without prior knowledge of camera or manipulator positions. This feature opens the door to a range of manipulation applications, including a mobile manipulator with stationary cameras tracking and providing information for control of the manipulator.

  • PDF
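The camera-space estimation idea (fit the map from joint coordinates to image cue locations from data, without calibrating the camera) can be sketched as a least-squares problem. The linear affine model below is a simplification of the paper's estimation model, which generalizes the full manipulator kinematics.

```python
import numpy as np

def estimate_view_map(joints, cues):
    """Least-squares fit of an affine map from manipulator joint
    coordinates to camera-space cue locations. Refitting as new
    samples arrive gives the real-time adaptation described above
    (simplified: the real camera projection is nonlinear)."""
    J = np.hstack([np.asarray(joints), np.ones((len(joints), 1))])
    A, *_ = np.linalg.lstsq(J, np.asarray(cues), rcond=None)
    return A  # predicted cue location ~ [joint, 1] @ A
```

Neither the camera pose nor the exact kinematics appears explicitly; both are absorbed into the fitted map, which is the source of the method's robustness.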

Vision chip for edge detection with resolution improvement through simplification of unit-pixel circuit (단위 픽셀 회로의 간소화를 통해서 해상도를 향상시킨 이차원 윤곽 검출용 시각칩)

  • Sung, Dong-Kyu;Kong, Jae-Sung;Hyun, Hyo-Young;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology / v.17 no.1 / pp.15-22 / 2008
  • When designing image sensors, including a CMOS vision chip for edge detection, resolution is a significant factor in evaluating performance. It is hard to improve the resolution of a bio-inspired CMOS vision chip that uses a resistive network, because such a chip contains many circuits, such as the resistive network and several signal-processing circuits, in addition to the photocircuits of general image sensors such as the CMOS image sensor (CIS); low resolution restricts the range of application systems. In this paper, we improve the resolution through layout and circuit optimization. Furthermore, we have designed an FPGA-based printed circuit board that controls the vision chip. The vision chip for edge detection was designed and fabricated in a 0.35 μm double-poly four-metal CMOS technology, and its output characteristics have been investigated.
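A software analogue may make the resistive-network edge output concrete: the photoreceptor signal is diffused through a resistive grid (smoothing), and the edge signal is the difference between the raw and the smoothed signal. The 1-D model below is illustrative only; its parameters are not taken from the chip.

```python
import numpy as np

def edge_response(pixels, alpha=0.25, iters=20):
    """Software analogue of a resistive-network edge output:
    diffuse the photoreceptor signal through a resistive grid
    (iterative smoothing with fixed boundaries), then output
    input minus smoothed, which peaks at intensity edges."""
    s = pixels.astype(float).copy()
    for _ in range(iters):
        # One diffusion step of the resistive grid
        s[1:-1] += alpha * (s[:-2] - 2 * s[1:-1] + s[2:])
    return pixels - s
```

On the chip, the diffusion happens instantaneously in the analog resistive network; the per-pixel subtraction circuit is what the unit-pixel simplification in the paper targets.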

Machine Vision Platform for High-Precision Detection of Disease VOC Biomarkers Using Colorimetric MOF-Based Gas Sensor Array (비색 MOF 가스센서 어레이 기반 고정밀 질환 VOCs 바이오마커 검출을 위한 머신비전 플랫폼)

  • Junyeong Lee;Seungyun Oh;Dongmin Kim;Young Wung Kim;Jungseok Heo;Dae-Sik Lee
    • Journal of Sensor Science and Technology / v.33 no.2 / pp.112-116 / 2024
  • Gas-sensor technology for volatile organic compound (VOC) biomarker detection offers significant advantages for noninvasive diagnostics, including rapid response time and low operational cost, and exhibits promising potential for disease diagnosis. Colorimetric gas sensors, which enable intuitive analysis of gas concentrations through changes in color, present additional benefits for the development of personal diagnostic kits. However, the traditional method of visually monitoring these sensors limits quantitative analysis and consistency in detection-threshold evaluation, potentially affecting diagnostic accuracy. To address this, we developed a machine vision platform for colorimetric metal-organic framework (MOF) gas sensor arrays, designed to accurately detect disease-related VOC biomarkers. The platform integrates a CMOS camera module, a gas chamber, and a colorimetric MOF sensor jig to quantitatively assess color changes. A specialized machine vision algorithm accurately identifies the color-change Region of Interest (ROI) in the captured images and monitors the color trends. Performance was evaluated experimentally with four types of low-concentration standard gases, and a limit of detection (LoD) at the 100 ppb level was observed. This approach significantly enhances the potential for noninvasive and accurate disease diagnosis by detecting low-concentration VOC biomarkers and offers a novel diagnostic tool.
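The quantitative step (monitoring the mean color of each sensor-spot ROI over time) can be sketched with a few lines of array code. This is an illustrative stand-in for the platform's machine vision algorithm, which also locates the ROI automatically.

```python
import numpy as np

def roi_color_change(before, after, roi):
    """Mean color shift inside one sensor-spot ROI.
    before/after: HxWx3 uint8 frames; roi: (x, y, w, h).
    Returns the Euclidean distance between the mean RGB vectors,
    a simple proxy for the colorimetric response magnitude."""
    x, y, w, h = roi
    m0 = before[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)
    m1 = after[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(m1 - m0))
```

Tracking this scalar per spot over an exposure run gives the concentration-response trend from which a detection threshold can be calibrated.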

A study on development of automatic welding system for compressor case using vision sensor (시각센서를 이용한 비원형 압축기 케이스의 자동용접시스템 개발에 관한 연구)

  • 박현준;유제용;나석주;홍성준;강형식
    • Journal of Welding and Joining / v.14 no.5 / pp.78-86 / 1996
  • A vision sensor was used to track the weld line of the compressor case. The case was fixed in a jig equipped with a rotating system, and two torches, each with one degree of freedom, were applied in the automatic welding system. Because the radius of rotation of the compressor case varies with the rotation angle while the angular velocity is constant, an algorithm is needed to extract the feature of the compressor case at each rotation angle. To overcome these difficulties, curve fitting and a composite curve were used. Experiments verifying the proposed algorithm showed desirable results in tracking the weld line of the compressor case.

  • PDF
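The varying-radius problem can be made concrete with a small fit: sample the case radius at known rotation angles (from the vision sensor) and fit a smooth r(θ) so the torch offset can be computed at any angle. A single polynomial is used here as a stand-in for the paper's curve fitting and composite curve.

```python
import numpy as np

def fit_radius_profile(angles, radii, deg=4):
    """Fit the radius-vs-angle profile r(theta) of the non-circular
    case from vision measurements; evaluating the fit gives the
    torch position at any rotation angle."""
    return np.polynomial.Polynomial.fit(angles, radii, deg)
```

A composite curve (piecewise fits joined smoothly) serves the same purpose when one polynomial cannot follow the whole profile.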

Development of a Lane Departure Avoidance System using Vision Sensor and Active Steering Control (비전 센서 및 능동 조향 제어를 이용한 차선 이탈 방지 시스템 개발)

  • 허건수;박범찬;홍대건
    • Transactions of the Korean Society of Automotive Engineers / v.11 no.6 / pp.222-228 / 2003
  • A lane departure avoidance system is one of the key technologies for future active-safety passenger cars. It is composed of two subsystems: a lane-sensing algorithm and an active-steering controller. In this paper, the road image is obtained by a vision sensor and the lane parameters are estimated using image processing and a Kalman filter. The active-steering controller is designed to prevent lane departure and can be realized with a steer-by-wire actuator. The lane-sensing algorithm and active-steering controller are implemented in a steering HILS (Hardware-In-the-Loop Simulation), and their performance is evaluated with a human driver in the loop.
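The Kalman-filter part of such a lane-sensing algorithm can be illustrated with a scalar filter tracking one lane parameter (for example, the lateral offset) from noisy per-frame measurements. The noise variances below are assumptions for illustration, not values from the paper.

```python
def kalman_update(x, P, z, Q=1e-3, R=0.05):
    """One predict/update step of a scalar Kalman filter tracking a
    lane parameter from noisy per-frame measurements z.
    Q (process) and R (measurement) variances are illustrative."""
    # Predict: slowly varying parameter, so only the variance grows
    P = P + Q
    # Update with the new image-processing measurement z
    K = P / (P + R)          # Kalman gain
    x = x + K * (z - x)      # corrected estimate
    P = (1 - K) * P          # reduced uncertainty
    return x, P
```

In the full system a vector state (offset, heading, curvature) is tracked the same way, and the smoothed estimate feeds the active-steering controller.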