• Title/Summary/Keyword: Video sensor

321 search results

Development of a Portable Multi-sensor System for Geo-referenced Images and its Accuracy Evaluation (Geo-referenced 영상 획득을 위한 휴대용 멀티센서 시스템 구축 및 정확도 평가)

  • Lee, Ji-Hun;Choi, Kyoung-Ah;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.6 / pp.637-643 / 2010
  • In this study, we developed a portable multi-sensor system consisting of a video camera, a GPS/MEMS IMU, and a UMPC to acquire video images and position/attitude data. Using the acquired data, we performed image georeferencing based on bundle adjustment without ground control points, and then evaluated the effectiveness of the system through accuracy verification. The experimental results showed that the RMSE of the relative coordinates of ground points obtained from our system was several centimeters. The system can therefore be used efficiently to obtain 3D models of objects and their relative coordinates. In the future, we plan to improve the accuracy of the absolute coordinates through rigorous calibration of the system and camera.

Enhancement on 3 DoF Image Stitching Using Inertia Sensor Data (관성 센서 데이터를 활용한 3 DoF 이미지 스티칭 향상)

  • Kim, Minwoo;Kim, Sang-Kyun
    • Journal of Broadcast Engineering / v.22 no.1 / pp.51-61 / 2017
  • This paper proposes a method to generate panoramic images by combining conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with data sensed by an inertial sensor to enhance the stitching results. Image stitching becomes more challenging when the images are taken by two different mobile phones with no posture calibration. Using the inertial sensor data obtained by the phones, images with different yaw, pitch, and roll angles are preprocessed and adjusted before the stitching process is performed. The stitching performance (e.g., feature extraction time, number of inlier points, stitching accuracy) of the conventional feature extraction algorithms is reported, along with the stitching performance with and without the inertial sensor data.
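For a (near) pure camera rotation, the IMU-reported yaw/pitch/roll angles can pre-align two images before feature-based stitching via the homography H = K·R_a·R_b^T·K^(-1). The sketch below is a generic illustration of that idea, not the paper's implementation; the intrinsic matrix K, the Z-Y-X angle convention, and the function names are assumptions.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Compose a rotation from yaw (Z), pitch (Y), roll (X), in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def prealign_homography(K, angles_a, angles_b):
    """Homography warping image B toward image A's orientation,
    assuming pure rotation and that R maps world directions into the
    camera frame: H = K @ (R_a @ R_b.T) @ inv(K)."""
    R_ab = rotation_matrix(*angles_a) @ rotation_matrix(*angles_b).T
    return K @ R_ab @ np.linalg.inv(K)
```

Warping image B with this homography (e.g., via `cv2.warpPerspective`) leaves the feature matcher only a small residual misalignment to correct.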

A Study for Video Data Acquisition and Alternate Node using Quadcopter in Disaster Detection System based on Wireless Sensor Networks (무선 센서 네트워크 기반의 재난재해 감지 시스템에서 쿼드콥터를 이용한 영상 데이터 수집 및 대체 노드에 관한 연구)

  • Jeong, Ji-Eun;Lee, Kang-whan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.05a / pp.493-495 / 2016
  • In this paper, we propose a method of collecting image data and replacing nodes with a quadcopter in a ZigBee-based wireless sensor network for disaster detection. In a conventional image-based wireless sensor network system, it is difficult to observe a wide area with fixed video cameras, and there is no way to replace nodes that are lost due to disaster or destruction. By mounting an IP camera and a communication node on a quadcopter, the proposed method improves the reliability of the sensor network by providing a replacement node. The results show that, compared with conventional systems, the proposed approach improves the reliability of the sensor nodes and provides real-time status information through quadcopter video.

Design and Implementation of Emergency Recognition System based on Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템의 설계 및 구현)

  • Kim, Eoung-Un;Kang, Sun-Kyung;So, In-Mi;Kwon, Tae-Kyu;Lee, Sang-Seol;Lee, Yong-Ju;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.14 no.2 / pp.181-190 / 2009
  • This paper presents a multimodal emergency recognition system based on visual information, audio information, and gravity sensor information. It consists of a video processing module, an audio processing module, a gravity sensor processing module, and a multimodal integration module. The video processing module and the gravity sensor processing module detect actions such as moving, stopping, and fainting, and transfer them to the multimodal integration module. The multimodal integration module detects an emergency by fusing the transferred information and verifies it by asking a question and recognizing the answer via the audio channel. The experimental results show that the recognition rate is 91.5% with the video processing module alone and 94% with the gravity sensor processing module alone, but reaches 100% when both kinds of information are combined.
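The decision-level fusion described above (motion state from the video and gravity-sensor modules, verification over the audio channel) can be sketched as a simple rule. The function below is an illustrative assumption of ours, not the authors' implementation; the accepted answer strings are likewise invented.

```python
def detect_emergency(video_action, gravity_action, audio_answer=None):
    """Decision-level fusion sketch: a 'fainting' report from either the
    video module or the gravity-sensor module raises an emergency
    candidate; the audio channel then confirms or cancels it."""
    candidate = "fainting" in (video_action, gravity_action)
    if not candidate:
        return "normal"
    # Verification step: the system asks a question ("Are you OK?")
    # and recognizes the spoken answer; a positive reply cancels the alarm.
    if audio_answer is not None and audio_answer.strip().lower() in ("yes", "ok", "i'm ok"):
        return "false-alarm"
    return "emergency"
```

Fusing the two motion channels with an OR rule and vetoing via audio is one way the reported jump from ~91-94% (single channel) to 100% (combined) could arise: each channel covers the other's misses.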

PSD Sensor Module Based Monocular Motion Capture System (PSD센서모듈 기반 단안 모션캡쳐 시스템)

  • Kim, Yu-Geon;Ryu, Young-Kee;Oh, Choon-Suk
    • Proceedings of the KIEE Conference / 2006.10c / pp.582-584 / 2006
  • This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact and low-cost, and requires only a one-time calibration at the factory. It includes a PSD (Position Sensitive Detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. A micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The results show that the proposed system's compact size, low cost, ease of installation, and high frame rates make it suitable for high-speed motion tracking in games.
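Recovering a 3D position from a single PSD reading relies on the 2D spot position plus the received IR intensity. Assuming an inverse-square intensity falloff calibrated at a reference distance (our assumption for illustration; the paper's actual intensity model may differ), a minimal sketch is:

```python
import math

def marker_distance(intensity, i_ref, d_ref=1.0):
    """Distance from received IR intensity, assuming inverse-square
    falloff calibrated at d_ref: I(d) = i_ref * (d_ref / d)**2,
    hence d = d_ref * sqrt(i_ref / I)."""
    return d_ref * math.sqrt(i_ref / intensity)

def marker_position(u, v, f, distance):
    """Back-project the 2D PSD reading (u, v) at focal length f into a
    3D point by scaling the unit viewing ray to the estimated distance."""
    norm = math.sqrt(u * u + v * v + f * f)
    return tuple(distance * c / norm for c in (u, v, f))
```

The 2D reading fixes the viewing ray; the intensity supplies the missing range along it, which is what lets a single (monocular) sensor output 3D coordinates.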

Capturing Distance Parameters Using a Laser Sensor in a Stereoscopic 3D Camera Rig System

  • Chung, Wan-Young;Ilham, Julian;Kim, Jong-Jin
    • Journal of Sensor Science and Technology / v.22 no.6 / pp.387-392 / 2013
  • Camera rigs for shooting 3D video are classified as manual, motorized, or fully automatic. Even with an automatic camera rig, the process of stereoscopic 3D (S3D) video capture is very complex and time-consuming. One of the key time-consuming operations is capturing the distance parameters: the near distance, far distance, and convergence distance. Traditionally, these distances are measured with a tape measure or by triangular indirect measurement, and both methods take a long time for every scene shot. In this study, a compact laser distance sensing system with long-range sensitivity was developed. The system is small enough to be installed on top of a camera, and its measuring accuracy is within 2% even at a range of 50 m. The shooting time of an automatic camera rig equipped with the laser distance sensing system can be reduced significantly, to less than a minute.
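Once the convergence distance is measured (by tape, triangulation, or a laser sensor as above), the rig's toe-in angle follows from elementary geometry: each camera is rotated by atan((t/2)/d), where t is the interaxial separation and d the convergence distance. A small generic sketch (the parameter names are ours, not the paper's):

```python
import math

def convergence_angle_deg(interaxial_mm, convergence_mm):
    """Toe-in angle of each camera (degrees) for a rig whose optical
    axes cross at the convergence distance: theta = atan((t/2) / d)."""
    return math.degrees(math.atan((interaxial_mm / 2.0) / convergence_mm))
```

For a typical 65 mm interaxial converged at 2 m, each camera toes in by roughly 0.93 degrees, which shows why sub-percent distance accuracy from the laser sensor matters: the angles being set are small.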

Development and Evaluation of 3-Axis Gyro Sensor based Servo motion control (3-Axis Gyro Sensor based on Servo Motion Control 장치의 성능평가기준 및 시험규격개발)

  • Lee, WonBu;Chang, Chulsoon;Kim, JeongKuk;Park, Soohong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.05a / pp.627-630 / 2009
  • This work combines multi-sensor surveillance system technology for marine use with a servo motion control algorithm and a gyro sensor to analyze the movement response in six degrees of freedom. A stabilized motion control scheme is developed, and a nano-precision pan-tilt/gimbal system is combined with ultra-high-speed security positioning cameras to characterize the exact behavior of the device, which is intended for use as essential equipment. The paper describes in detail the optimal design and production of the precision pan-tilt/gimbal for the nano-driving multi-sensor surveillance system, the development of the servo motion control algorithm based on the 3-axis gyro sensor, and the development of the image-tracking video software and hardware. The developed equipment and the integrated system are fully tested and verified.

Method of extracting context from media data by using video sharing site

  • Kondoh, Satoshi;Ogawa, Takeshi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.709-713 / 2009
  • Recently, a lot of research applying data acquired from devices such as cameras and RFIDs to context-aware services has been performed in the fields of Life-Log and sensor networks. A variety of analytical techniques have been proposed to recognize various kinds of information from the raw data, because video and audio data contain a larger volume of information than other sensor data. However, because these techniques generally use supervised learning, manually re-watching a huge amount of media data has been necessary to create supervised data whenever a class is updated or a new class is added. The problem, therefore, was that in most cases applications could only use recognition functions based on fixed supervised data. We propose a method of acquiring supervised data from a video sharing site where users comment on video scenes; such sites are remarkably popular, so many comments are generated. In the first step of this method, words with a high utility value are extracted by filtering the comments about a video. Second, a set of time-series feature data is calculated by applying feature extraction functions to the media data. Finally, our learning system calculates the correlation coefficient between these two kinds of data and stores it in the system's DB. By applying this correlation coefficient to new media data, various applications can include a recognition function that generates collective intelligence based on Web comments. In addition, flexible recognition that adjusts to new objects becomes possible by regularly acquiring and learning both media data and comments from a video sharing site, while reducing manual work. As a result, recognition of not only the name of the seen object but also indirect information, e.g. the impression of or the action toward the object, was enabled.
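The core of the learning step above is a correlation coefficient between a word's time series (from comments) and a feature time series (from the media). As a generic illustration, Pearson's r over two aligned per-second series might look like this; the example series values are invented, and the paper does not specify which correlation coefficient it uses:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: per-second presence of a comment word versus a
# per-second motion-energy feature extracted from the same video.
word_series = [0, 0, 1, 1, 0, 0, 1, 0]
feature_series = [0.1, 0.2, 0.9, 0.8, 0.2, 0.1, 0.7, 0.3]
```

A high coefficient stored in the DB then acts as a learned link: when new media produces a similar feature pattern, the associated word can be emitted as a recognized label.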

Adaptive Multi-view Video Interpolation Method Based on Inter-view Nonlinear Moving Blocks Estimation (시점 간 비선형 움직임 블록 예측에 기초한 적응적 다시점 비디오 보상 보간 기법)

  • Kim, Jin-Soo
    • The Journal of the Korea Contents Association / v.14 no.4 / pp.9-18 / 2014
  • Recently, much research has focused on multi-view video applications and services such as wireless video surveillance networks, wireless video sensor networks, and wireless mobile video. In multi-view video signal processing, exploiting the strong correlation between images acquired by different cameras plays a great role in developing core techniques for multi-view video coding. This paper proposes an adaptive multi-view video interpolation technique applicable to multi-view distributed video coding that requires no cooperation among the cameras. The proposed algorithm estimates the nonlinear moving blocks, employs disparity-compensated view prediction, and then fills in the unreliable blocks. Computer simulations show that the proposed method outperforms conventional methods.

A Fast Image Matching Method for Oblique Video Captured with UAV Platform

  • Byun, Young Gi;Kim, Dae Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.2 / pp.165-172 / 2020
  • There is growing interest in vision-based video image matching owing to the constantly developing technology of unmanned systems. The purpose of this paper is to develop a fast and effective matching technique for oblique UAV video images. We first extracted initial matching points using the NCC (Normalized Cross-Correlation) algorithm and improved its computational efficiency using an integral image. Furthermore, we developed a triangulation-based outlier removal algorithm to extract more robust matching points from the initial matching points. To evaluate the performance of the proposed method, it was quantitatively compared with existing image matching approaches. The experimental results demonstrated that the proposed method can process 2.57 frames per second for video image matching and is up to 4 times faster than existing methods. The proposed method therefore has good potential for various video-based applications that require image matching as a pre-processing step.
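The integral-image speedup mentioned above works because the window sums (and sums of squares) in the NCC normalization become O(1) per window once a summed-area table is built. A minimal sketch of that trick (generic, not the authors' code):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column, so that
    box_sum needs no boundary special-cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Building one table for the image and one for its elementwise square gives the local mean and variance of every candidate window without re-summing pixels, which is where the reported speedup over plain NCC comes from.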