• Title/Summary/Keyword: Video sensor


A Comparison Between the Tape Switch Sensor and the Video Images Frame Analysis Method on the Speed Measurement of Vehicle (차량 속도 측정의 실무적용을 위한 테이프스위치 센서 방식과 영상 프레임 분석방법의 비교연구)

  • Kim Man-Bae; Hyun Cheol-Seung; Yoo Sung-Jun; Hong You-Sik
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.9 s.351 / pp.120-127 / 2006
  • In Korea, the vehicle enforcement system (VES) detects speeding vehicles using two inductive loop detectors, and the reliability of the measured speed is evaluated by analyzing image frames captured from a video camera. This method is accepted for evaluating the VES under the Korea Laboratory Accreditation Scheme (KOLAS), but the image frame analysis requires considerable time and expense. Because the number of VES installations is increasing rapidly, a new evaluation method is needed. In this paper, the tape switch sensor is introduced as a substitute for the existing method, and its on-site application is discussed. In a field test, we compared vehicle speed measurements from the tape switch sensor against the video image frame analysis. As a result, we found the tape switch sensor to be a feasible on-site system for measuring the speed of overspeeding vehicles.
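
Both methods under comparison rest on the same underlying measurement: speed follows from the time a vehicle takes to cross two road sensors a known distance apart. A minimal Python sketch of that computation; the 10 m spacing and the trigger times are illustrative assumptions, not values from the paper.

```python
def speed_kmh(t_first_s: float, t_second_s: float, spacing_m: float) -> float:
    """Vehicle speed from the trigger times of two road sensors
    (tape switches or loop detectors) a known distance apart."""
    dt = t_second_s - t_first_s        # seconds between sensor activations
    if dt <= 0:
        raise ValueError("second sensor must trigger after the first")
    return spacing_m / dt * 3.6        # m/s -> km/h

# Illustrative values: sensors 10 m apart, triggers 0.30 s apart -> 120 km/h.
print(speed_kmh(0.00, 0.30, 10.0))
```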

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31 (다중 TMS320C31 DSP를 사용한 3-D 비젼센서 Implementation)

  • Oksenhendler, V.; Bensrhair, Abdelaziz; Miche, Pierre; Lee, Sang-Goog
    • Journal of Sensor Science and Technology / v.7 no.2 / pp.124-130 / 1998
  • High-speed 3D vision systems are essential for autonomous robot and vehicle control applications. In our study, a stereo vision process has been developed. It consists of three steps: extraction of edges in the right and left images, matching of corresponding edges, and calculation of the 3D map. This process is implemented on a VME 150/40 Imaging Technology vision system, a modular system composed of a display card, an acquisition card, a four-Mbyte image frame memory, and three computational cards. The programmable accelerator computational modules run at 40 MHz and are based on the TMS320C31 DSP with a $64{\times}32$ bit instruction cache and two $1024{\times}32$ bit internal RAMs. Each is equipped with 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and a serial port. Data transfers and communications between modules are provided by three 8-bit global video buses and three locally configurable 8-bit pipeline video buses; the VME bus is dedicated to system management. Tasks are distributed among the DSPs as follows: two DSPs perform edge detection, one for the right image and the other for the left, while the third processor computes the matching and the 3D calculation. With $512{\times}512$ pixel images, this sensor generates dense 3D maps at a rate of about 1 Hz, depending on scene complexity. The results could surely be improved by using specially suited multiprocessor cards.
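
The three steps above map onto standard stereo triangulation, where depth follows from disparity as Z = f·B/d. A hedged Python/OpenCV sketch of the same pipeline; OpenCV's stock block matcher stands in for the paper's DSP edge extraction and matching, and the file names, focal length, and baseline are illustrative assumptions.

```python
import cv2
import numpy as np

# Steps 1-2: the stock block matcher stands in for the paper's per-DSP
# edge extraction and edge matching stages.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # illustrative file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Step 3: triangulate the 3D (depth) map, Z = f*B/d. Focal length and
# baseline are assumed values, not the sensor's calibration.
focal_px, baseline_m = 700.0, 0.12
depth_m = np.zeros_like(disparity)
valid = disparity > 0
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```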

Development of Human Following Method of Mobile Robot Using QR Code and 2D LiDAR Sensor (QR 2D 코드와 라이다 센서를 이용한 모바일 로봇의 사람 추종 기법 개발)

  • Lee, SeungHyeon; Choi, Jae Won; Van Dang, Chien; Kim, Jong-Wook
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.1 / pp.35-42 / 2020
  • In this paper, we propose a method to keep the robot at a distance of 30 to 45 cm from the user, in consideration of each individual's minimum personal space and comfort, by using a 2D LiDAR sensor (LDS-01) as a secondary sensor along with a QR code. First, the robot checks the brightness of the video and the presence of a QR code. If the scene is bright and a QR code indicates a person's presence, the scan range of the 2D LiDAR sensor is set based on the position of the QR code in the captured image so that the robot finds and follows the correct target. On the other hand, when the robot cannot recognize the QR code because of low light, the target is followed using only the 2D LiDAR sensor together with a database, built before the experiment, that stores obstacles and human motions. As a result, our robot can follow the target person in four situations based on nine locations with seven types of motion.
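
A minimal sketch of the bright-light branch described above: detect the QR code with OpenCV, convert its horizontal position in the image to a bearing, gate the LiDAR scan to a sector around that bearing, and drive to hold the 30 to 45 cm band. The camera field of view and the one-range-per-degree scan layout are assumptions, not the paper's LDS-01 configuration.

```python
import cv2
import numpy as np

HFOV_DEG = 60.0  # assumed camera horizontal field of view (not from the paper)

def qr_bearing_deg(frame) -> float | None:
    """Bearing of the QR code relative to the camera axis, in degrees,
    or None when no code is found (the paper's low-light branch)."""
    found, points = cv2.QRCodeDetector().detect(frame)
    if not found or points is None:
        return None
    cx = points.reshape(-1, 2)[:, 0].mean()           # QR center, x in pixels
    return (cx / frame.shape[1] - 0.5) * HFOV_DEG     # pixel offset -> degrees

def target_range_m(scan: np.ndarray, bearing_deg: float,
                   half_window_deg: float = 10.0) -> float:
    """Nearest LiDAR return inside a sector around the QR bearing.
    Assumes `scan` holds one range (m) per degree, index 0 = camera axis."""
    offsets = np.arange(-half_window_deg, half_window_deg + 1)
    idx = (offsets + round(bearing_deg)).astype(int) % 360
    return float(scan[idx].min())

def drive_command(range_m: float) -> str:
    """The paper's following band: keep the person 0.30-0.45 m away."""
    if range_m > 0.45:
        return "forward"
    if range_m < 0.30:
        return "backward"
    return "hold"
```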

An Intelligent Wireless Camera Surveillance System with Motion sensor and Remote Control (무선조종과 모션 센서를 이용한 지능형 무선감시카메라 구현)

  • Lee, Young-Woong; Kim, Jong-Nam
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.672-676 / 2009
  • Recently, intelligent surveillance camera systems have become widely needed. However, current research has focused on improving individual modules rather than implementing an integrated system. In this paper, we implemented a wireless surveillance camera system composed of face detection and a motion sensor. In our implementation, we used a camera module from SHARP, a pair of wireless video transmission modules from ECOM, a pair of ZigBee RF wireless transmission modules from ROBOBLOCK, and a motion sensor module (AMN14111) from PANASONIC. We used the OpenCV library for face detection and MFC to implement the software. We verified real-time operation of face detection, PTT control, and motion sensor detection. Thus, the implemented system will be useful for applications in remote control, human detection, and motion sensing.
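
Since the abstract states that face detection was done with OpenCV, the stock Haar-cascade detector gives a plausible sketch of that module; the ZigBee, wireless video, and AMN14111 wiring is hardware-specific and omitted, and camera index 0 merely stands in for the SHARP camera module.

```python
import cv2

# Stock OpenCV Haar cascade, matching the paper's stated use of OpenCV
# for the face-detection module.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # index 0 stands in for the SHARP camera module
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("surveillance", frame)
    if cv2.waitKey(1) == 27:   # ESC quits
        break
cap.release()
```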

Development and Performance Analysis of a Near Real-Time Sensor Model Correction System for Frame Motion Imagery (프레임동영상의 근실시간 센서모델 보정시스템 개발 및 성능분석)

  • Kwon, Hyuk Tae; Koh, Jin-Woo; Kim, Sanghee; Park, Se Hyoung
    • Journal of the Korea Institute of Military Science and Technology / v.21 no.3 / pp.315-322 / 2018
  • Due to the increasing demand for more rapid, precise, and accurate geolocation of targets on video frames from UAVs, an efficient and timely method for correcting the sensor models of motion imagery is required. In this paper, we propose a method to adjust or correct the sensor models of motion imagery frames using space resection via image matching with reference data. The proposed method matches the motion imagery frames against reference frames synthesized from the reference data. Ground control points are generated or selected through the matching process in near real time and are used for space resection to obtain adjusted sensor models. As a result, more precise and accurate geolocation of targets becomes possible on the fly, and our performance analysis shows promising results in terms of geolocation quality.
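
Space resection from matched control points is, in modern computer vision terms, a perspective-n-point solve: given ground control points from the reference data and their matched pixel locations in a frame, recover the frame's exterior orientation. A hedged sketch with OpenCV's solvePnP; the point coordinates and camera intrinsics are illustrative assumptions, not the paper's data.

```python
import cv2
import numpy as np

# Ground control points (X, Y, Z in a local metric frame) matched to pixel
# locations in the motion-imagery frame. All values are illustrative.
object_pts = np.array([[0, 0, 0], [50, 0, 0], [50, 40, 0], [0, 40, 5]], np.float32)
image_pts = np.array([[320, 410], [900, 430], [880, 120], [300, 100]], np.float32)

K = np.array([[1000.0, 0.0, 640.0],   # assumed intrinsics: fx, fy, principal point
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

# Space resection: recover the frame's exterior orientation from the
# matched control points (a perspective-n-point solve).
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
sensor_position = -R.T @ tvec          # sensor position in the ground frame
print(ok, sensor_position.ravel())
```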

An Energy-Aware Cooperative Communication Scheme for Wireless Multimedia Sensor Networks (무선 멀티미디어 센서 네트워크에서 에너지 효율적인 협력 통신 방법)

  • Kim, Jeong-Oh; Kim, Hyunduk; Choi, Wonik
    • Journal of KIISE / v.42 no.5 / pp.671-680 / 2015
  • Numerous clustering schemes have been proposed to increase energy efficiency in wireless sensor networks. Clustering schemes impose a hierarchical structure on the sensor network in order to aggregate and transmit data. However, existing clustering schemes are not suitable for wireless multimedia sensor networks because they consume a large quantity of energy and yield extremely short network lifetimes. To address this problem, we propose the Energy-Aware Cooperative Communication (EACC) method, a novel cooperative clustering method that systematically adapts to various types of multimedia data, including images and video. An evaluation of its performance shows that the proposed method is up to 2.5 times more energy-efficient than existing clustering schemes.
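
The abstract does not spell out EACC's rules, so the following is a generic illustration of energy-aware clustering, not the paper's method: elect the highest-residual-energy nodes as cluster heads each round, then attach every other node to its nearest head so multimedia data travels a short hop before aggregation.

```python
import random

class Node:
    def __init__(self, node_id: int, x: float, y: float, energy: float = 1.0):
        self.id, self.x, self.y, self.energy = node_id, x, y, energy

def elect_cluster_heads(nodes: list, ratio: float = 0.1) -> list:
    """Generic energy-aware election (not the paper's EACC rule): the
    nodes with the most residual energy serve as heads this round."""
    k = max(1, int(len(nodes) * ratio))
    return sorted(nodes, key=lambda n: n.energy, reverse=True)[:k]

def assign_members(nodes: list, heads: list) -> dict:
    """Each non-head joins its nearest head, shortening its radio link."""
    dist = lambda a, b: ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
    return {n.id: min(heads, key=lambda h: dist(n, h)).id
            for n in nodes if n not in heads}

nodes = [Node(i, random.uniform(0, 100), random.uniform(0, 100),
              energy=random.uniform(0.2, 1.0)) for i in range(50)]
heads = elect_cluster_heads(nodes)
membership = assign_members(nodes, heads)
```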

Low-Resolution Depth Map Upsampling Method Using Depth-Discontinuity Information (깊이 불연속 정보를 이용한 저해상도 깊이 영상의 업샘플링 방법)

  • Kang, Yun-Suk; Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.10 / pp.875-880 / 2013
  • When we generate 3D video that gives users an immersive and realistic feeling, depth information of the scene is essential. Since the resolution of the depth map captured by a depth sensor is lower than that of the color image, we need to upsample the low-resolution depth map for high-resolution 3D video generation. In this paper, we propose a depth upsampling method that uses depth-discontinuity information. Using the high-resolution color image and the low-resolution depth map, we detect depth-discontinuity regions. We then define an energy function for depth map upsampling and optimize it with the belief propagation method. Experimental results show that the proposed method outperforms other depth upsampling methods in terms of the bad pixel rate.
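
The paper optimizes an MRF energy with belief propagation; a compact stand-in that captures the same guiding idea, letting color edges mark likely depth discontinuities during upsampling, is joint bilateral upsampling. A sketch under that substitution, not the paper's solver.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, color_hi, scale, sigma_s=2.0, sigma_r=20.0):
    """Upsample a low-resolution depth map guided by the high-resolution
    color image: color edges shrink the smoothing weights, preserving
    depth discontinuities. A simple stand-in for the paper's MRF energy
    minimized with belief propagation."""
    gray = color_hi.mean(axis=2) if color_hi.ndim == 3 else color_hi.astype(float)
    H, W = gray.shape
    out = np.zeros((H, W), np.float32)
    r = int(2 * sigma_s)                      # spatial window radius
    for y in range(H):
        for x in range(W):
            num = den = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < H and 0 <= xx < W):
                        continue
                    ys, xs = yy // scale, xx // scale   # low-res source pixel
                    if ys >= depth_lo.shape[0] or xs >= depth_lo.shape[1]:
                        continue
                    w = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)) *
                         np.exp(-(gray[y, x] - gray[yy, xx]) ** 2 / (2 * sigma_r ** 2)))
                    num += w * depth_lo[ys, xs]
                    den += w
            out[y, x] = num / den if den > 0 else 0.0
    return out
```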

Study on Low Delay and Adaptive Video Transmission for a Surveillance System in Visual Sensor Networks (비디오 센서 망에서의 감시 체계를 위한 저지연/적응형 영상전송 기술 연구)

  • Lee, In-Woong; Kim, Hak-Sub; Oh, Tae-Geun; Lee, Sang-Hoon
    • The Journal of Korean Institute of Communications and Information Sciences / v.39C no.5 / pp.435-446 / 2014
  • Although it is important for surveillance systems to transmit high-rate multimedia information without transmission errors, error-free transmission is difficult to achieve over infrastructure-less ad hoc networks. Reducing transmission errors further requires additional signaling overhead or retransmission, both of which can increase transmission delay. This paper presents a study on low-delay, adaptive video transmission for unmanned surveillance systems through the development of system protocols. In addition, we introduce an efficient, adaptive control algorithm that uses system parameters to operate the unmanned surveillance system properly over multiple channels.
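
The abstract gives no equations for its adaptive control algorithm, so the following is only a generic sketch of the trade-off it describes: step the video rate down under loss or delay, and probe upward when the channel is clean. All thresholds are assumptions, not the paper's parameters.

```python
def adapt_bitrate(bitrate_kbps: float, loss_rate: float, delay_ms: float) -> float:
    """Generic adaptive-rate rule for low-delay video over a lossy ad hoc
    link: back off multiplicatively under loss or delay, probe upward
    additively when clean. Thresholds are illustrative assumptions."""
    if loss_rate > 0.05 or delay_ms > 150:     # channel degrading: cut rate
        bitrate_kbps *= 0.8
    elif loss_rate < 0.01 and delay_ms < 80:   # channel clean: probe upward
        bitrate_kbps += 100
    return min(max(bitrate_kbps, 200), 8000)   # clamp to encoder limits

rate = 2000.0
for loss, delay in [(0.00, 60), (0.08, 200), (0.00, 70)]:
    rate = adapt_bitrate(rate, loss, delay)
    print(f"loss={loss:.2f} delay={delay}ms -> {rate:.0f} kbps")
```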

A Development of 3D video simulation system using GPS (GPS와 9-axis sensor를 이용한 3D 영상 구현 시뮬레이션 시스템)

  • Joo, Sang-Woong; Shim, Kyou-Chul; Kim, Kyeong-Hwan; Zhu, Jiang; Liu, Hao; Liu, Jie; Jeong, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.1021-1023 / 2013
  • Currently, aircraft and automobile training simulators provide a variety of training by creating hypothetical situations on a simulator installed on the ground, and the instructor maximizes the effectiveness of the training by monitoring it and giving the required instruction. However, when trainees are aboard an actual aircraft or automobile, an instructor on the ground cannot monitor them, and assessing the training after it ends is not easy; it is therefore difficult to provide high-quality education to the students. In this paper, we develop simulation software that collects GPS and real-time sensor information from the aircraft or automobile to implement a 3D simulation, renders the current view of the aircraft or automobile on screen in 3D, supports real-time monitoring of the training situation at the control center, and saves 3D video files for analysis and evaluation of the training after it ends.
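
A small sketch of the data-collection step: parsing a standard NMEA $GPGGA sentence from a GPS receiver into latitude, longitude, and altitude for the 3D view. The sample sentence is the well-known NMEA documentation example, not data from the paper.

```python
def parse_gpgga(sentence: str) -> tuple[float, float, float]:
    """Extract latitude and longitude (decimal degrees) plus altitude (m)
    from a NMEA $GPGGA sentence, as a GPS receiver streams them."""
    f = sentence.split(",")
    lat = float(f[2][:2]) + float(f[2][2:]) / 60.0   # ddmm.mmm -> degrees
    lon = float(f[4][:3]) + float(f[4][3:]) / 60.0   # dddmm.mmm -> degrees
    if f[3] == "S":
        lat = -lat
    if f[5] == "W":
        lon = -lon
    return lat, lon, float(f[9])                     # f[9]: altitude in meters

# Standard NMEA sample sentence (illustrative, not from the paper).
print(parse_gpgga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
```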

A Study on Multi-function Implementation using Single Sensor (단일 센서를 사용한 다기능 구현에 관한 연구)

  • Choi, Su-Yeol; Lee, Chang-Hee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.4 / pp.133-137 / 2016
  • Video and audio occupy a large portion of IoT information. Various sensors can be used for more accurate situation awareness, and a way to cope with the absence of a primary information source is required; moreover, resource management demands grow as more sensors are used. As a way to reduce the resources required for communication among the various sensors, we examine whether the information from one sensor can take the place of another sensor's. In this paper, we use the LIS302DL MEMS motion sensor to measure the data produced when a ping-pong ball, a shuttlecock, and a tennis ball are dropped onto a table-tennis table. The data measured for the three objects were confirmed to be proportional to the amount of impact, and the experiment confirmed that the accelerometer can detect changes in the amount of impact. The results show that a multi-function implementation using a single sensor is possible. In addition, such situation-aware recognition should be considered in the early development stage of a multi-function sensor.
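
The experiment reduces to sampling three-axis acceleration during each drop and taking the peak magnitude as the impact measure. A hedged sketch of that computation; the bursts are synthetic stand-ins for LIS302DL readings, not the paper's measurements.

```python
import math

def impact_peak_g(samples) -> float:
    """Peak acceleration magnitude (g) over a burst of 3-axis samples,
    used as the impact measure; samples stand in for LIS302DL readings."""
    return max(math.sqrt(x * x + y * y + z * z) for x, y, z in samples)

# Synthetic bursts (illustrative only): stiffer, heavier balls peak higher.
bursts = {
    "ping-pong ball": [(0.1, 0.0, 1.2), (0.2, 0.1, 2.8), (0.0, 0.0, 1.1)],
    "shuttlecock":    [(0.0, 0.1, 1.1), (0.1, 0.0, 1.9), (0.0, 0.1, 1.0)],
    "tennis ball":    [(0.3, 0.2, 1.5), (0.4, 0.2, 4.6), (0.1, 0.1, 1.3)],
}
for name, burst in bursts.items():
    print(f"{name}: peak {impact_peak_g(burst):.2f} g")
```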