• Title/Summary/Keyword: video navigation


Development of a Robust Video Stabilization Algorithm Based on Optical Flow (Korean title: Development of an Image Shake Compensation Algorithm Using Optical Flow)

  • Cho, Gyeong-Rae; Doh, Deog-Hee; Kim, Hong-Yeob; Jin, Gwang-Ja; Kim, Do-Hyun
    • Journal of the Korean Society of Visualization, v.17 no.3, pp.66-69, 2019
  • An image-compensation algorithm for video with high-vibration movement is proposed, using optical flow and the Kalman filter. The temporal motion vector field is calculated by optical flow, and suspicious vectors are removed or adjusted by Gaussian interpolation. The high-vibration vector field is stabilized by the Kalman filter. Lastly, compensated images are obtained by affine transformation. The proposed algorithm gives well-compensated video images in high-vibration situations.
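
As a rough illustration of the pipeline this abstract describes (optical-flow motion estimation, smoothing of the camera trajectory, affine compensation), the sketch below uses OpenCV feature tracking and a simple one-dimensional Kalman filter. The input file name, the tuning constants, and the use of RANSAC inside `estimateAffinePartial2D` in place of the paper's Gaussian-interpolation outlier handling are all illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def kalman_1d(measurements, q=1e-4, r=1e-2):
    """Smooth a 1-D trajectory with a constant-position Kalman filter."""
    x, p, out = 0.0, 1.0, []
    for z in measurements:
        p += q                      # predict step: grow uncertainty
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the measurement
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

cap = cv2.VideoCapture("shaky_input.avi")            # assumed input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

transforms = []                                      # per-frame (dx, dy, d_angle)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=30)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.flatten() == 1
    m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])  # RANSAC rejects bad vectors
    transforms.append([m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])])
    prev_gray = gray

traj = np.cumsum(transforms, axis=0)                 # raw camera trajectory
smooth = np.stack([kalman_1d(traj[:, i]) for i in range(3)], axis=1)
correction = smooth - traj                           # per-frame compensation offsets
```

Each row of `correction` can then be turned back into a 2x3 affine matrix and applied to the corresponding frame with `cv2.warpAffine` to obtain the stabilized output.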

Observation of bubble dynamics under water in high-magnetic fields using a high-speed video camera

  • Lee, Seung-Hwan; Takeda, Minoru
    • Journal of Navigation and Port Research, v.28 no.2, pp.141-148, 2004
  • Observations of the rapid motion of bubbles under water, lasting approximately 50 ms or less, have been carried out successfully for the first time in high magnetic fields of 10 T. The observation system constructed is composed of a high-speed video camera, a telescope, a cryostat with a split-type superconducting magnet, a light source, a mirror, and a transparent sample cell. Using this system, the influence of the magnetic field on the path and shape of single bubbles of $O_2$ (paramagnetism) and $N_2$ (diamagnetism) has been examined carefully. Experimental values describing the path are in good agreement with theoretical values calculated on the basis of the magneto-Archimedes effect, confirming the effect of magnetism on the bubble path. However, no effect of magnetism on the shape of the bubble is observed. In addition, the influence of the magnetic field on the drag coefficient of the bubble is discussed.
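
For reference, the magneto-Archimedes effect mentioned above is usually written as a modified buoyancy. One common form of the net vertical force on a bubble (density $\rho_b$, volume susceptibility $\chi_b$) immersed in water (density $\rho_w$, susceptibility $\chi_w$) under a vertical field $B(z)$ is given below; the notation and sign convention are ours, not taken from the paper.

```latex
F_z \;=\; V\left[(\rho_w - \rho_b)\,g \;+\; \frac{\chi_b - \chi_w}{\mu_0}\, B\,\frac{dB}{dz}\right]
```

Whether the magnetic term assists or opposes the ordinary buoyancy depends on the sign of $B\,dB/dz$ and on the susceptibility difference, which differs between paramagnetic $O_2$ and diamagnetic $N_2$ bubbles.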

Evaluation of the Use of Inertial Navigation Systems to Improve the Accuracy of Object Navigation

  • Iasechko, Maksym; Shelukhin, Oleksandr; Maranov, Alexandr; Lukianenko, Serhii; Basarab, Oleksandr; Hutchenko, Oleh
    • International Journal of Computer Science & Network Security, v.21 no.3, pp.71-75, 2021
  • The article discusses dead reckoning of the traveled path based on analysis of the video data stream coming from optoelectronic surveillance devices; the use of relief (terrain elevation) data makes it possible to partially compensate for the shortcomings of this method. Using the overlap in the photo/video data stream, the terrain is reconstructed, and comparison with a digital terrain model allows the location of the aircraft to be determined. The use of digital images of the terrain also allows the coordinates and orientation to be determined by comparing the current view with reference imagery. This approach provides high accuracy in determining absolute coordinates even in the absence of relief, and it allows the absolute position of the camera to be found even when its approximate coordinates are not known at all.
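
A minimal sketch of the dead-reckoning idea (assuming a downward-looking camera and a heading taken from another sensor): estimate the frame-to-frame image shift, scale it to metres with an assumed ground sampling distance, rotate it into the navigation frame, and accumulate. The file name, constants, and sign conventions are illustrative assumptions and would need calibration against the actual camera mounting.

```python
import cv2
import numpy as np

GSD = 0.15                        # metres per pixel at the current altitude (assumed)
HEADING = np.deg2rad(40.0)        # aircraft heading from another sensor (assumed)

cap = cv2.VideoCapture("downward_camera.mp4")
ok, prev = cap.read()
prev_gray = np.float32(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))

position = np.zeros(2)            # (north, east) in metres
track = [position.copy()]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    (dx, dy), _ = cv2.phaseCorrelate(prev_gray, gray)   # image shift in pixels
    # camera forward/right axes rotated into north/east by the known heading
    step = GSD * np.array([
        -dy * np.cos(HEADING) - dx * np.sin(HEADING),    # north
        -dy * np.sin(HEADING) + dx * np.cos(HEADING),    # east
    ])
    position += step
    track.append(position.copy())
    prev_gray = gray
```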

How to Search and Evaluate Video Content for Online Learning (Korean title: Video Content Search and Evaluation Methods for Online Learning)

  • Yong, Sung-Jung; Moon, Il-Young
    • Journal of Advanced Navigation Technology, v.24 no.3, pp.238-244, 2020
  • Smartphones have developed and spread so rapidly that virtually the entire nation now uses them, and the smartphone has become an essential medium for consuming domestic media content; many people use a wide range of contents regardless of gender, age, or region. Recently, various media outlets have been delivering video content for online learning, indicating that learners use online video content for study. Previous research examined satisfaction according to the type of content, but an improvement plan was still needed because no research addressed how to evaluate the learning content itself and provide it to learners. In this paper, we propose a system for evaluating and reviewing learning content itself as a way to improve how video content for learning is provided and to secure quality learning content.

Robot vision system for face tracking using color information from video images (Korean title: Face Tracking Using Color Information in Video for a Robot Vision System)

  • Jung, Haing-Sup; Lee, Joo-Shin
    • Journal of Advanced Navigation Technology, v.14 no.4, pp.553-561, 2010
  • This paper proposes a face tracking method that can be applied effectively to a robot's vision system. The proposed algorithm tracks facial areas after detecting the area of motion in the video. Motion is detected by taking the difference image of two consecutive frames and then removing noise with a median filter and erosion/dilation operations. To extract the skin color from the moving area, the color information of sample images is used. The skin-color region and the background are separated by evaluating their similarity with membership functions generated from MIN-MAX values used as fuzzy data. Within the face candidate region, the eyes are detected from the C channel of the CMY color space and the mouth from the Q channel of the YIQ color space. The face region is tracked using the features of the eyes and the mouth detected from a knowledge base. The experiment includes 1,500 frames of video from 10 subjects (150 frames per subject). The results show a detection rate of 95.7% (motion areas detected in 1,435 frames) and successful face tracking in 97.6% of cases (1,401 faces tracked).
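
A minimal OpenCV sketch of the first two stages described above: motion detection by frame differencing followed by a median filter and erosion/dilation, then a skin-colour mask restricted to the moving region. The fuzzy MIN-MAX membership step is simplified here to fixed Cr/Cb ranges, and the eye/mouth detection from the CMY and YIQ channels is omitted; these simplifications are our assumptions, not the authors' method.

```python
import cv2
import numpy as np

def moving_region(prev_frame, frame):
    """Binary mask of moving pixels from two consecutive frames."""
    diff = cv2.absdiff(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    diff = cv2.medianBlur(diff, 5)                        # remove impulse noise
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.erode(mask, kernel)                        # erosion then dilation
    mask = cv2.dilate(mask, kernel, iterations=2)
    return mask

def skin_mask(frame):
    """Coarse skin-colour mask in YCrCb (fixed ranges, an assumed simplification)."""
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def face_candidates(prev_frame, frame):
    """Bounding boxes of moving, skin-coloured regions large enough to be a face."""
    mask = cv2.bitwise_and(moving_region(prev_frame, frame), skin_mask(frame))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```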

Multimodal Approach for Summarizing and Indexing News Video

  • Kim, Jae-Gon; Chang, Hyun-Sung; Kim, Young-Tae; Kang, Kyeong-Ok; Kim, Mun-Churl; Kim, Jin-Woong; Kim, Hyung-Myung
    • ETRI Journal, v.24 no.1, pp.1-11, 2002
  • A video summary abstracts the gist from an entire video and also enables efficient access to the desired content. In this paper, we propose a novel method for summarizing news video based on multimodal analysis of the content. The proposed method exploits the closed caption data to locate semantically meaningful highlights in a news video, and speech signals in the audio stream to align the closed caption data with the video along a timeline. The detected highlights are then described using the MPEG-7 Summarization Description Scheme, which allows efficient browsing of the content through such functionalities as multi-level abstracts and navigation guidance. Multimodal search and retrieval are also supported within the proposed framework: by indexing the synchronized closed caption data, the video clips become searchable with a text query. Intensive experiments with prototypical systems are presented to demonstrate the validity and reliability of the proposed method in real applications.
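
A toy sketch of the caption-based retrieval idea: once the closed-caption segments have been aligned to the video timeline, they can be indexed and searched so that a text query returns clip time ranges. The data structures and field names below are illustrative assumptions and do not reproduce the MPEG-7 Summarization Description Scheme itself.

```python
from dataclasses import dataclass

@dataclass
class CaptionSegment:
    start: float      # seconds, aligned to the video via the speech signal
    end: float
    text: str

def build_index(segments):
    """Simple inverted index: word -> list of segment indices."""
    index = {}
    for i, seg in enumerate(segments):
        for word in seg.text.lower().split():
            index.setdefault(word, []).append(i)
    return index

def search(query, index, segments):
    """Return (start, end) ranges of segments containing every query word."""
    hit_sets = [set(index.get(w, [])) for w in query.lower().split()]
    hits = set.intersection(*hit_sets) if hit_sets else set()
    return [(segments[i].start, segments[i].end) for i in sorted(hits)]

segments = [
    CaptionSegment(12.0, 19.5, "The president announced a new trade agreement"),
    CaptionSegment(95.2, 101.0, "Sports highlights from the weekend"),
]
index = build_index(segments)
print(search("trade agreement", index, segments))   # -> [(12.0, 19.5)]
```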


A Trial Toward Marine Watch System by Image Processing

  • Shimpo, Masatoshi; Hirasawa, Masato; Ishida, Keiichi; Oshima, Masaki
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, v.1, pp.41-46, 2006
  • This paper describes a marine watch system on a ship that is aided by an image processing method. The system detects other ships in a navigational image sequence to prevent oversights, and it measures their bearings to keep track of their movements. The proposed method is described, the detection and bearing-measurement techniques are derived, and the results are reported. The image is divided into small regions on the basis of brightness value and then labeled. Each region is treated as a template, and a template is assumed to be a ship. The template is then compared with frames of the original image after a selected time. A moving vector of the regions is calculated using an Excel table, and ships are detected using the characteristics of the moving vector. The video camera captures 30 frames per second. We segmented one frame into approximately 5,000 regions; from these, approximately 100 regions were presumed to be ships and treated as templates. Each template was compared with frames captured 0.33 s or 0.66 s later. To improve accuracy, this interval was changed on the basis of the magnification of the video camera. The ships' bearings also need to be determined. The proposed method can measure them on the basis of three parameters: (1) the course of the own ship, (2) the arrangement between the camera and the hull, and (3) the coordinates of the ships detected in the image. The course of the own ship can be obtained from a gyrocompass. The camera axis is calibrated along a particular direction using a stable position on the bridge. The field of view of the video camera is measured from the size of a known structure on the hull in the image. Thus, the ships' bearings can be calculated using these parameters.
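
A small sketch of the bearing computation from the three parameters listed above: the target's true bearing is the own ship's course plus the camera mounting offset plus the angle subtended by the target's horizontal image coordinate. The pinhole/field-of-view model and the parameter values are assumptions for illustration, not the authors' calibration.

```python
import math

def ship_bearing(own_course_deg, camera_offset_deg, x_pixel,
                 image_width=1920, hfov_deg=60.0):
    """Return the true bearing (degrees) of a target detected at column x_pixel."""
    half_width = image_width / 2.0
    # focal length in pixels from the horizontal field of view (pinhole model)
    focal_px = half_width / math.tan(math.radians(hfov_deg / 2.0))
    # angle of the target off the camera axis
    off_axis_deg = math.degrees(math.atan2(x_pixel - half_width, focal_px))
    return (own_course_deg + camera_offset_deg + off_axis_deg) % 360.0

# Example: own ship steering 045 deg, camera mounted 10 deg to starboard of the bow,
# target detected slightly left of the image centre.
print(ship_bearing(own_course_deg=45.0, camera_offset_deg=10.0, x_pixel=800))
```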


Fundamental Research for Video-Integrated Collision Prediction and Fall Detection System to Support Navigation Safety of Vessels

  • Kim, Bae-Sung; Woo, Yun-Tae; Yu, Yung-Ho; Hwang, Hun-Gyu
    • Journal of Ocean Engineering and Technology, v.35 no.1, pp.91-97, 2021
  • Marine accidents caused by ships have brought about economic and social losses as well as human casualties. Most of these accidents involve small and medium-sized ships, owing to their poor condition and insufficient equipment compared with larger vessels, and measures are urgently needed to improve these conditions. This paper discusses a video-integrated collision prediction and fall detection system to support the safe navigation of small and medium-sized ships. The system predicts collisions between ships and detects falls by crew members using CCTV, displays the analyzed, integrated information using automatic identification system (AIS) messages, and provides alerts for the risks identified. The design consists of an object recognition algorithm, an interface module, an integrated display module, a collision prediction and fall detection module, and an alarm management module. For this basic research, we implemented a deep learning algorithm to recognize the ship and crew from images, and an interface module to manage messages from the AIS. To verify the implemented algorithm, we conducted tests using 120 images. Object recognition performance is calculated as mean average precision (mAP) by comparing the pre-defined objects with the objects recognized by the algorithm. As a result, the object recognition performance for the ship and crew classes was approximately 50.44 mAP and 46.76 mAP, respectively. The interface module showed that messages from the installed AIS were accurately converted according to the international standard. Therefore, we implemented the object recognition algorithm and interface module of the designed collision prediction and fall detection system and validated their usability through testing.
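
For readers unfamiliar with the metric, a simplified VOC-style mAP calculation is sketched below: detections are matched greedily to ground-truth boxes at an IoU threshold, a precision-recall curve is built per class, and the per-class areas are averaged. This is generic illustration code, not the authors' evaluation script; the 0.5 IoU threshold is an assumption.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def average_precision(detections, gt_boxes, iou_thr=0.5):
    """detections: list of (score, box) for one class; gt_boxes: list of boxes."""
    detections = sorted(detections, key=lambda d: -d[0])   # highest confidence first
    matched, tp, fp = set(), [], []
    for score, box in detections:
        best, best_i = 0.0, -1
        for i, g in enumerate(gt_boxes):
            if i in matched:
                continue
            overlap = iou(box, g)
            if overlap > best:
                best, best_i = overlap, i
        if best >= iou_thr:
            matched.add(best_i); tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(gt_boxes), 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):    # step integration of the PR curve
        ap += p * (r - prev_r)
        prev_r = r
    return ap

def mean_ap(per_class_dets, per_class_gts):
    aps = [average_precision(per_class_dets[c], per_class_gts[c]) for c in per_class_gts]
    return 100.0 * sum(aps) / len(aps)     # on the percentage scale, e.g. ~50 mAP
```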

Locating Intersections for Autonomous Vehicles: A Bayesian Network Approach

  • Choi, Kyoung-Ho; Joo, Sung-Kwan; Cho, Seong-Ik; Park, Jong-Hyun
    • ETRI Journal, v.29 no.2, pp.249-251, 2007
  • A novel idea is presented to locate intersections in a video sequence captured from a moving vehicle. More specifically, we propose a Bayesian network approach to combine evidence extracted from a video sequence and evidence from a database, maximizing evidence from various sensors in a systematic manner and locating intersections robustly.
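
A toy sketch of the evidence-combination idea: treating the video cue and the map/database cue as conditionally independent given the hypothesis "an intersection is ahead", the posterior follows from Bayes' rule. The node structure and the numbers are illustrative assumptions, not the network described in the paper.

```python
def posterior_intersection(p_prior, p_video_given_yes, p_video_given_no,
                           p_map_given_yes, p_map_given_no):
    """P(intersection | video evidence, map evidence), assuming the two evidence
    sources are conditionally independent given the hypothesis."""
    joint_yes = p_prior * p_video_given_yes * p_map_given_yes
    joint_no = (1.0 - p_prior) * p_video_given_no * p_map_given_no
    return joint_yes / (joint_yes + joint_no)

# Example: weak prior, strong visual cue (e.g. a detected crosswalk), supportive map entry
print(posterior_intersection(0.10, 0.80, 0.10, 0.90, 0.30))   # ~0.73
```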


Design of Real-time Video Acquisition for Control of Unmanned Aerial Vehicle

  • Jeong, Min-Hwa
    • Journal of Positioning, Navigation, and Timing, v.9 no.2, pp.131-138, 2020
  • In this paper, we analyze the delay that can occur when controlling an unmanned aerial vehicle using a camera and describe a solution to it. The group of pictures (GOP) value is changed in order to reduce the delay caused by the frame data size in video data transmission. Appropriate GOP values were determined by accumulating experimental data and were validated through a camera self-test, a system integration laboratory (SIL) verification test, and a system integration test.
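
As a hedged illustration of the tuning knob discussed in the abstract, the sketch below encodes a stream with a configurable GOP length via FFmpeg, whose `-g` option sets the keyframe interval (GOP size). The file names and the example GOP of 10 are assumptions; the appropriate value would be chosen from accumulated experimental data, as the paper describes.

```python
import subprocess

def encode_with_gop(src, dst, gop_size, fps=30):
    """Re-encode src to dst with a fixed GOP length using FFmpeg."""
    cmd = [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "libx264",
        "-g", str(gop_size),   # GOP length in frames: shorter -> more I-frames,
        "-r", str(fps),        # higher bitrate, but faster recovery / lower delay
        dst,
    ]
    subprocess.run(cmd, check=True)

encode_with_gop("uav_camera.ts", "uav_camera_gop10.mp4", gop_size=10)
```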