• Title/Summary/Keywords: vision sensor

Search Results: 294

Smart Ship Container With M2M Technology (M2M 기술을 이용한 스마트 선박 컨테이너)

  • Sharma, Ronesh;Lee, Seong Ro
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.3
    • /
    • pp.278-287
    • /
    • 2013
  • Modern information technologies continue to provide industries with new and improved methods. With the rapid development of Machine to Machine (M2M) communication, smart container supply chain management has been built on high-performance sensors, computer vision, Global Positioning System (GPS) satellites, and Global System for Mobile (GSM) communication. Existing supply chain management is limited in its ability to track containers in real time. This paper focuses on the study and implementation of real-time container chain management through the development of a container identification system and an automatic alert system for interrupts and for normal periodic alerts. The concept and methods of smart container modeling are introduced, and the structure is explained prior to the implementation of the smart container tracking alert system. First, the paper introduces the container code identification and recognition algorithm implemented in Visual Studio 2010 with OpenCV (a computer vision library) and Tesseract (an OCR engine) for real-time operation. Second, it discusses the automatic alert systems currently available for real-time container tracking and their limitations. Finally, the paper summarizes the challenges and possibilities of future work on real-time container tracking solutions that combine ubiquitous mobile and satellite networks with high-performance sensors and computer vision. Together, these components deliver supply chain management with outstanding operation and security.
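The abstract pairs OpenCV detection with Tesseract OCR for container code recognition but does not give the validation logic. Shipping container codes are conventionally verified against the ISO 6346 check digit, which catches most single-character OCR confusions; a minimal sketch of that verification (the function names are illustrative, not from the paper):

```python
def iso6346_check_digit(code):
    """Compute the ISO 6346 check digit for a container code.

    `code` is the first 10 characters (4 owner/category letters +
    6 serial digits); returns the expected 11th (check) digit.
    """
    # Letter values start at A=10 and skip multiples of 11 (11, 22, 33).
    values, v = {}, 10
    for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
        if v % 11 == 0:
            v += 1
        values[letter] = v
        v += 1
    total = 0
    for i, ch in enumerate(code):
        val = values[ch] if ch.isalpha() else int(ch)
        total += val * (2 ** i)   # position-dependent weight 2^i
    return (total % 11) % 10      # a remainder of 10 maps to 0

def is_valid_container_code(code):
    """True if an 11-character container code passes the check digit."""
    return len(code) == 11 and iso6346_check_digit(code[:10]) == int(code[10])
```

A recognition pipeline would run this on each OCR candidate string and discard those that fail, before raising a tracking alert for the identified container.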

Color Temperature Measurement and Classification of Ambient Light Sources Using two Color Sensors, Yellow and Cyan (옐로우와 사이안 두 광센서를 사용한 주위 조명광의 색온도 측정 및 분류)

  • Choi, Duk-Kyu;Kwon, Yong-Dae;Kwon, Ki-Ryong;Sohng, Kyu-Ik
    • Journal of Sensor Science and Technology
    • /
    • v.7 no.6
    • /
    • pp.409-417
    • /
    • 1998
  • Originally, the reference white of the NTSC system was CIE illuminant C (6774 K). However, the reference white of color television receivers has been adjusted to 9300 K, reflecting consumer preference for the very bluish white of monochrome television. Recent studies have revealed that the preferred color temperature of display white should be 3000 K to 4000 K higher than that of the surround illuminant, so the ambient lighting source must be classified. In this paper, an efficient method that can distinguish an ambient incandescent lamp from a fluorescent lamp under television viewing conditions is developed using only two color sensors, yellow and cyan. Experimental results show that the proposed method is very useful for discriminating between ambient lighting sources, a fluorescent lamp of 6000 K and an incandescent lamp of 3000 K. The system was also tested with mixtures of these light sources.
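The paper's two-sensor idea can be reduced to a ratio test: a 3000 K incandescent lamp emits far more long-wavelength (yellow) than short-wavelength (cyan) light, while a 6000 K fluorescent lamp is comparatively rich in cyan. A toy sketch of such a classifier, with an illustrative threshold that is not from the paper and would in practice be calibrated against the actual sensors:

```python
def classify_illuminant(yellow, cyan, ratio_threshold=1.5):
    """Classify an ambient light source from two color-sensor readings.

    The yellow/cyan ratio separates a warm incandescent lamp from a
    cool fluorescent lamp.  The threshold of 1.5 is an assumption for
    illustration, not a value from the paper.
    """
    if cyan <= 0:
        raise ValueError("cyan reading must be positive")
    return "incandescent" if yellow / cyan > ratio_threshold else "fluorescent"
```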

Evaluation of Accident Prevention Performance of Vision and Radar Sensor for Major Accident Scenarios in Intersection (교차로 주요 사고 시나리오에 대한 비전 센서와 레이더 센서의 사고 예방성능 평가)

  • Kim, Yeeun;Tak, Sehyun;Kim, Jeongyun;Yeo, Hwasoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.5
    • /
    • pp.96-108
    • /
    • 2017
  • The current collision warning and avoidance system (CWAS) is one of the representative Advanced Driver Assistance Systems (ADAS), contributing significantly to improving vehicle safety and mitigating accident severity. However, current CWAS have mainly focused on preventing forward collisions in uninterrupted flow, and their prevention performance near intersections and in other accident scenarios has not been extensively studied. In this paper, the safety performance of vision-sensor (VS) and radar-sensor (RS) based collision warning systems is evaluated near intersections with data from the Naturalistic Driving Study (NDS) of the Second Strategic Highway Research Program (SHRP2). Based on the VS and RS data, we derived sixteen new vehicle-to-vehicle accident scenarios near an intersection and evaluated the detection performance of VS and RS within them. The results show that VS and RS can prevent an accident only in limited situations because of their restricted field of view. At an accident prevention rate of 0.7, VS and RS can prevent an accident in five and four scenarios, respectively. Efficient accident prevention requires a system that can detect vehicle movement at longer range than VS and RS, as well as an algorithm that can predict the future movement of other vehicles. To further improve the safety performance of CWAS near intersections, a communication-based collision warning system, such as an algorithm that integrates data from infrastructure and in-vehicle sensors, should be developed.
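The evaluation criterion in the abstract, counting a scenario as prevented when the sensor's prevention rate reaches 0.7, can be sketched as below. The scenario names and per-case outcomes are illustrative, not the sixteen scenarios derived in the paper:

```python
def prevention_rate(case_outcomes):
    """Fraction of observed cases in which the sensor could warn in time."""
    return sum(case_outcomes) / len(case_outcomes)

def scenarios_prevented(scenario_cases, threshold=0.7):
    """Return scenarios whose prevention rate reaches the threshold
    (0.7 in the abstract).  `scenario_cases` maps a scenario name to a
    list of booleans, one per naturalistic-driving case (True = a
    timely warning was possible)."""
    return sorted(name for name, cases in scenario_cases.items()
                  if prevention_rate(cases) >= threshold)
```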

Development of Defect Inspection System for Polygonal Containers (다각형 용기의 결함 검사 시스템 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.485-492
    • /
    • 2021
  • In this paper, we propose the development of a defect inspection system for polygonal containers. The embedded board consists of a main unit, a communication unit, and an input/output unit. The main unit is the main arithmetic unit; the operating system that drives the embedded board is ported to it to control external communication, sensors, and actuation. The input/output unit converts the electrical signals of the sensors installed in the field into digital signals, transmits them to the main module, and controls the external stepper motor. The communication unit sets the trigger of the image capture camera and the drive settings of the control device. The input/output unit also converts the electrical signals of the control switches into digital signals for the main module. In the input circuit that receives pulse inputs related to the operation mode, a photocoupler is designed into each input port to minimize interference from external noise. To objectively evaluate the accuracy of the proposed polygonal container defect inspection system, comparison with other machine vision inspection systems would be required, but this is impossible because no machine vision inspection system for polygonal containers currently exists. Instead, by measuring the operation timing with an oscilloscope, it was confirmed that waveforms such as Test Time, One Angle Pulse Value, One Pulse Time, Camera Trigger Pulse, and BLU brightness control were output accurately.

Development of an Automatic Unmanned Target Object Carrying System for ASV Sensor Evaluation Methods (ASV용 센서통합평가 기술을 위한 무인 타겟 이동 시스템의 개발)

  • Kim, Eunjeong;Song, Insung;Yu, Sybok;Kim, Byungsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.4 no.2
    • /
    • pp.32-36
    • /
    • 2012
  • The automatic unmanned target object carrying system (AUTOCS) is developed for testing road-vehicle radar and vision sensors. It is important for the target to present realistic target characteristics when developing ASV or ADAS products. The AUTOCS moves a pedestrian or motorcycle target at a desired speed and position, and is designed so that only the payload target, a manikin or a motorcycle, is detected by the sensor, not the AUTOCS itself. To keep its radar exposure low, the AUTOCS has a stealthy shape with a low RCS (radar cross section). To deceive the vision sensor, its outer skin carries a specially designed pattern that resembles asphalt. The AUTOCS has three driving modes: remote control, path following, and replay. AUTOCS V.1 was tested to verify its radar detection characteristics and successfully demonstrated that it is not detected by a car radar. The results are presented in this paper.

VRSMS: VR-based Sensor Management System (VRSMS: 가상현실 기반 센서 관리 시스템)

  • Kim, Han-Soo;Kim, Hyung-Seok
    • Journal of the HCI Society of Korea
    • /
    • v.3 no.2
    • /
    • pp.1-8
    • /
    • 2008
  • We introduce VRSMS (VR-based sensor management system), a visualization system for the micro-scale air quality monitoring system Airscope[3]. By adopting a VR-based visualization method, casual users can gain insight into air quality data intuitively, and users can manipulate sensors in the VR space to obtain the specific data they need. For adaptive visualization, we separated the visualization and interaction methods from the air quality data; this separation provides consistent data access, so new visualization and interaction methods are easily attached. As one adaptive visualization method, we constructed a large display system consisting of several small displays, which provides people in public spaces with access to air quality data.

Grouping Images Based on Camera Sensor for Efficient Image Stitching (효율적인 영상 스티칭을 위한 카메라 센서 정보 기반 영상 그룹화)

  • Im, Jiheon;Lee, Euisang;Kim, Hoejung;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2017.06a
    • /
    • pp.256-259
    • /
    • 2017
  • Panoramic images overcome the limited field of view of a camera and are therefore actively studied in fields such as computer vision and stereo cameras. Generating a panoramic image requires image stitching: descriptors are generated for the feature points extracted from multiple images, the similarity between feature points is compared, and the images are joined into one large image. Each feature point carries tens to hundreds of dimensions of information, and as the number of images to be stitched grows, the data processing time increases. To address this, we propose a preprocessing step that groups images expected to have large overlapping regions. By grouping images in advance based on camera sensor information, the number of images stitched at once is reduced, which shortens the data processing time. The groups are then stitched hierarchically into one large panorama. Experimental results verify that the proposed method is faster than conventional stitching.
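The grouping step can be sketched as clustering by recorded camera position: images captured close together are likely to share overlapping views and can be stitched as one group. This is a minimal sketch under that assumption; the distance radius and the exact sensor fields used by the paper are not specified, so they are illustrative here:

```python
import math

def group_by_position(images, radius_m=15.0):
    """Group images whose recorded camera positions are close together.

    `images` maps an image name to its (x, y) capture position in
    metres (e.g. GPS converted to a local frame).  An image within
    `radius_m` of any member of an existing group joins that group;
    otherwise it starts a new one.  The radius is an assumed default.
    """
    groups = []
    for name, (x, y) in images.items():
        placed = False
        for g in groups:
            if any(math.hypot(x - gx, y - gy) <= radius_m
                   for _, (gx, gy) in g):
                g.append((name, (x, y)))
                placed = True
                break
        if not placed:
            groups.append([(name, (x, y))])
    return [sorted(n for n, _ in g) for g in groups]
```

Each returned group would then be stitched independently, and the partial panoramas merged hierarchically.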

Attitude Estimation for the Biped Robot with Vision and Gyro Sensor Fusion (비전 센서와 자이로 센서의 융합을 통한 보행 로봇의 자세 추정)

  • Park, Jin-Seong;Park, Young-Jin;Park, Youn-Sik;Hong, Deok-Hwa
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.6
    • /
    • pp.546-551
    • /
    • 2011
  • A tilt sensor is required to control the attitude of a biped robot walking on uneven terrain. A vision sensor, normally used for recognizing humans or detecting obstacles, can also serve as a tilt angle sensor by comparing the current image with a reference image. However, a vision sensor alone has significant technological limitations for biped robot control, such as a low sampling frequency and estimation time delay. To verify these limitations, an experimental inverted pendulum, which represents the pitch motion of a walking or running robot, is used, and it is shown that a vision sensor alone cannot control the inverted pendulum, mainly because of the time delay. In this paper, to overcome these limitations, a Kalman filter for multi-rate sensor fusion is applied with a low-quality gyro sensor; it compensates for the limitations of the vision sensor and also eliminates the drift of the gyro sensor. Experiments on inverted pendulum control show that the tilt estimation performance of the fused sensors improves enough to control the attitude of the inverted pendulum.
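The multi-rate idea, integrating the fast gyro in the prediction step and correcting with the slow vision measurement whenever a frame arrives, can be sketched in one dimension. This is an illustration of the principle, not the authors' filter; the noise variances and rates are assumed values:

```python
def fuse_tilt(gyro_rates, vision_angles, dt, vision_every,
              q=1e-4, r=1e-2):
    """Multi-rate 1-D Kalman fusion of a fast gyro and a slow vision sensor.

    The gyro rate (rad/s, sampled every `dt` seconds) drives the
    prediction at the high rate; a vision tilt measurement (rad)
    corrects the estimate every `vision_every` gyro samples.
    `vision_angles[k]` is the measurement used at step k, or None when
    no frame is available.  q and r are illustrative noise variances.
    """
    x, p = 0.0, 1.0            # angle estimate and its variance
    for k, rate in enumerate(gyro_rates):
        x += rate * dt          # predict: integrate the gyro
        p += q
        if k % vision_every == 0 and vision_angles[k] is not None:
            kg = p / (p + r)    # update: vision correction removes drift
            x += kg * (vision_angles[k] - x)
            p *= (1.0 - kg)
    return x
```

With a biased gyro, the estimate drifts between vision frames and is pulled back at each update, which is exactly the complementary behavior the abstract describes.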

Development of Vision Sensor Module for the Measurement of Welding Profile (용접 형상 측정용 시각 센서 모듈 개발)

  • Kim C.H.;Choi T.Y.;Lee J.J.;Suh J.;Park K.T.;Kang H.S.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2006.05a
    • /
    • pp.285-286
    • /
    • 2006
  • Essential tasks in operating a welding robot include acquiring the position and/or shape of the parent metal. Many kinds of contact and non-contact sensors are used for seam tracking and robot automation; recently, the vision sensor has become the most popular. This paper describes the development of a system that measures the profile of the welding part. The complete system will be assembled into a compact module that can be attached to the head of a welding robot. The system uses a line-type structured laser diode and a vision sensor, and implements the Direct Linear Transformation (DLT) for camera calibration as well as radial distortion correction. The three-dimensional shape of the parent metal is obtained after a simple linear transformation, so the system operates in real time. Experiments were carried out to evaluate the performance of the developed system.
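The underlying geometry of a line-laser profile sensor is light-plane triangulation: intersect the camera ray through each image point with the known laser plane. A minimal sketch of that triangulation, as an illustration of the geometry rather than the calibrated DLT model used in the paper:

```python
import math

def laser_depth(x_img, f_px, baseline_m, laser_angle_rad):
    """Depth of a laser-line point from its image column.

    Pinhole camera at the origin looking along +Z; the line laser sits
    at (baseline_m, 0, 0) with its light plane tilted by
    `laser_angle_rad` from the optical axis, so plane points satisfy
    X = baseline_m - Z*tan(angle).  A point imaged at column offset
    `x_img` (pixels from the principal point, focal length `f_px` in
    pixels) lies on the ray X = x_img*Z/f_px; intersecting the ray
    with the plane gives Z.
    """
    return baseline_m / (x_img / f_px + math.tan(laser_angle_rad))
```

Sweeping this over every row of the detected laser line yields the cross-sectional weld profile in one frame.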

A Study on IMM-PDAF based Sensor Fusion Method for Compensating Lateral Errors of Detected Vehicles Using Radar and Vision Sensors (레이더와 비전 센서를 이용하여 선행차량의 횡방향 운동상태를 보정하기 위한 IMM-PDAF 기반 센서융합 기법 연구)

  • Jang, Sung-woo;Kang, Yeon-sik
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.8
    • /
    • pp.633-642
    • /
    • 2016
  • It is important for advanced active safety systems and autonomous driving cars to obtain accurate estimates of nearby vehicles in order to increase their safety and performance. This paper proposes a sensor fusion method for radar and vision sensors to accurately estimate the state of preceding vehicles. In particular, we studied how to compensate for the lateral state error of automotive radar sensors by using a vision sensor. The proposed method is based on the Interacting Multiple Model (IMM) algorithm, which stochastically integrates multiple Kalman filters with multiple models, here a lateral-compensation mode and a radar-only mode. In addition, a Probabilistic Data Association Filter (PDAF) is used for data association to improve the reliability of the estimates in a cluttered radar environment. A two-step correction method in the Kalman filter efficiently associates both the radar and vision measurements into single state estimates. Finally, the proposed method is validated through off-line simulations using measurements obtained from a field test on an actual road.
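The core of an IMM filter is the model-probability update: predicted mode probabilities follow a Markov switching matrix and are reweighted by each model's measurement likelihood. A minimal sketch of that single step for the two modes named in the abstract; the numbers are illustrative, and the full IMM also mixes the per-model states, which is omitted here:

```python
def imm_model_probs(mu, trans, likelihoods):
    """One model-probability update of an Interacting Multiple Model filter.

    `mu` are the current model probabilities (e.g. lateral-compensation
    mode vs. radar-only mode), `trans[i][j]` is the probability of
    switching from model i to model j, and `likelihoods[j]` is the
    measurement likelihood produced by model j's Kalman filter.
    """
    n = len(mu)
    # Predicted model probabilities after the Markov switch ...
    pred = [sum(trans[i][j] * mu[i] for i in range(n)) for j in range(n)]
    # ... reweighted by how well each model explains the measurement.
    post = [likelihoods[j] * pred[j] for j in range(n)]
    s = sum(post)
    return [p / s for p in post]
```

The mode whose Kalman filter best explains the current radar/vision measurement gains probability, so the fused estimate shifts toward lateral compensation exactly when the vision sensor is informative.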