• Title/Summary/Keyword: multi-temporal method

Motion Vector Predictor selection method for multi-view video coding (다시점 비디오 부호화를 위한 움직임벡터 예측값 선택 방법)

  • Choi, Won-Jun;Suh, Doug-Young;Kim, Kyu-Heon;Park, Gwang-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.12 no.6
    • /
    • pp.565-573
    • /
    • 2007
  • In this paper, we propose a method for selecting the motion vector predictor by considering the prediction structure of multi-view content, to improve the coding efficiency of multi-view video coding, which is being standardized in JVT. Motion vectors of different tendencies arise when temporal and inter-view reference prediction are carried out in multi-view video coding: because motion vectors are searched in both the temporal and the view direction, they do not agree with each other, which lowers coding efficiency. This paper describes how the motion vector predictor is selected using information about the prediction structure. With the proposed method, the compression ratio of multi-view video coding is increased, and a PSNR (Peak Signal-to-Noise Ratio) improvement of 0.03~0.1 dB was obtained compared with the JMVM 3.6 method.
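
A minimal sketch of the general idea described in this abstract: prefer motion-vector-predictor candidates whose reference type (temporal vs. inter-view) matches the current block's prediction, falling back to the plain median otherwise. The JMVM-specific details are not given in the abstract, so the data layout and function names below are illustrative assumptions.

```python
# Hedged sketch: choose a motion-vector predictor from neighboring blocks,
# preferring neighbors whose reference type (temporal vs. inter-view) matches
# the current block, as the abstract suggests. Names are illustrative only.
from statistics import median

def select_mv_predictor(neighbors, current_ref_type):
    """neighbors: list of dicts like {'mv': (dx, dy), 'ref_type': 'temporal'|'inter_view'}."""
    same_type = [n['mv'] for n in neighbors if n['ref_type'] == current_ref_type]
    candidates = same_type if same_type else [n['mv'] for n in neighbors]
    pred_x = median(v[0] for v in candidates)
    pred_y = median(v[1] for v in candidates)
    return (pred_x, pred_y)

# Example: two temporal neighbors and one inter-view neighbor.
neighbors = [
    {'mv': (4, 1), 'ref_type': 'temporal'},
    {'mv': (5, 2), 'ref_type': 'temporal'},
    {'mv': (-30, 0), 'ref_type': 'inter_view'},  # disparity-like vector
]
print(select_mv_predictor(neighbors, 'temporal'))  # -> (4.5, 1.5)
```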

An Adaptive Motion Vector Estimation Method for Multi-view Video Coding Based on Spatio-temporal Correlations among Motion Vectors (움직임 벡터들의 시·공간적 상관성을 이용한 다시점 비디오 부호화를 위한 적응적 움직임 벡터 추정 기법)

  • Yoon, Hyo-Sun;Kim, Mi-Young
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.12
    • /
    • pp.35-45
    • /
    • 2018
  • Motion estimation (ME) reduces redundant data in digital video signals and is an important part of a video encoding system. However, it accounts for a huge share of the encoder's computational complexity, and fast motion search methods have been proposed to reduce it. Multi-view video is obtained by capturing a three-dimensional scene with many cameras at different positions, and its complexity increases in proportion to the number of cameras. In this paper, we propose an efficient motion estimation method that chooses a search pattern adaptively by using the spatio-temporal correlation and the characteristics of the block. Experimental results show that the proposed method reduces computational complexity by up to 70~75% compared with the TZ search method and by up to 99% compared with the full search (FS) method, while keeping similar image quality and bit rates.
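
The abstract describes choosing a search pattern adaptively from the spatio-temporal neighborhood of a block. A minimal sketch of that idea follows, assuming illustrative pattern shapes and a hand-picked magnitude threshold; neither is specified in the abstract.

```python
import numpy as np

# Hedged sketch: pick a motion-search pattern from the spatio-temporal
# neighborhood of a block, as the abstract describes at a high level.
# Thresholds and pattern shapes are illustrative assumptions, not the paper's.
SMALL_DIAMOND = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
LARGE_DIAMOND = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
                 (1, 1), (1, -1), (-1, 1), (-1, -1)]

def choose_search_pattern(neighbor_mvs, threshold=2.0):
    """neighbor_mvs: MVs of spatially/temporally adjacent blocks, shape (N, 2)."""
    if len(neighbor_mvs) == 0:
        return LARGE_DIAMOND
    mean_mag = np.mean(np.linalg.norm(np.asarray(neighbor_mvs, float), axis=1))
    # Small, consistent local motion -> small pattern; large motion -> larger pattern.
    return SMALL_DIAMOND if mean_mag < threshold else LARGE_DIAMOND

print(len(choose_search_pattern([(1, 0), (0, 1)])))   # small pattern, 5 points
print(len(choose_search_pattern([(6, 3), (8, 2)])))   # large pattern, 9 points
```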

Adaptive Spatio-Temporal Prediction for Multi-view Coding in 3D-Video (3차원 비디오 압축에서의 다시점 부호화를 위한 적응적 시공간적 예측 부호화)

  • 성우철;이영렬
    • Journal of Broadcast Engineering
    • /
    • v.9 no.3
    • /
    • pp.214-224
    • /
    • 2004
  • In this paper, an adaptive spatio-temporal predictive coding based on H.264 is proposed for 3D immersive media encoding, such as 3D image processing, 3DTV, and 3D videoconferencing. First, we propose a spatio-temporal predictive coding that uses same-view and inter-view images for two GOP (group of pictures) structures, TPPP and IBBP, which differ from the conventional simulcast method. Second, a 2D inter-view direct mode is proposed for efficient prediction when the proposed spatio-temporal prediction uses the IBBP structure. The 2D inter-view direct mode is applied when the temporal direct mode in a B (bi-predictive) picture of H.264 refers to an inter-view image, since the temporal direct mode in the current H.264 standard cannot be applied to inter-view images. The proposed method is compared with the conventional simulcast method in terms of PSNR (peak signal-to-noise ratio) for various 3D test video sequences and shows better PSNR results.
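
For context, a sketch of the H.264-style temporal direct-mode vector scaling that the abstract extends toward inter-view references. Treating the distances as view-index differences in the inter-view case is an illustrative reading of the abstract, not the paper's exact derivation.

```python
# Hedged sketch of H.264 temporal direct-mode vector scaling, which the
# abstract extends to inter-view references ("2D inter-view direct mode").
# Reinterpreting the distances as camera-index differences for the inter-view
# case is an illustrative assumption only.
def direct_mode_mvs(mv_col, dist_b, dist_d):
    """mv_col: co-located block's MV; dist_b: current-to-list0 distance; dist_d: list1-to-list0 distance."""
    scale = dist_b / dist_d
    mv_l0 = (mv_col[0] * scale, mv_col[1] * scale)
    mv_l1 = (mv_l0[0] - mv_col[0], mv_l0[1] - mv_col[1])
    return mv_l0, mv_l1

# Temporal use: frame distances. Inter-view use: view-index distances.
print(direct_mode_mvs(mv_col=(8, 4), dist_b=1, dist_d=2))  # ((4.0, 2.0), (-4.0, -2.0))
```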

Application of Change Detection Techniques using KOMPSAT-1 EOC Images

  • Lee, Kwang-Jae;Kim, Youn-Soo
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.222-227
    • /
    • 2002
  • This research examines the applicability of KOMPSAT-1 EOC images to the urban environment and, on that foundation, investigates urban changes in the study areas. The research is organized in three stages. First, for the application of change detection techniques using multi-temporal remotely sensed data, a data normalization process is carried out. Second, a change detection method is applied for systematic monitoring of land-use changes using multi-temporal EOC images. Lastly, the existing land-use map is updated using the detected land-use changes. Consequently, land-use change patterns are monitored with multi-temporal panchromatic EOC image data, and the potential of ancillary data for updating existing data is presented. In this research, monitoring of urban growth was carried out using the detected land-use changes, and the potential and scope of application of KOMPSAT-1 EOC images were examined. Further expansion of the scope of application of KOMPSAT-1 EOC imagery is anticipated.
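
A toy sketch of the three-stage pipeline the abstract outlines (normalize, difference, threshold into a change mask), assuming a simple mean/std radiometric normalization and a standard-deviation threshold; both choices are illustrative, not the paper's.

```python
import numpy as np

# Hedged sketch: radiometrically normalize two co-registered images,
# difference them, and threshold the result into a change mask.
# Normalization and threshold choices are illustrative assumptions.
def normalize_to_reference(img, ref):
    """Simple linear (mean/std) radiometric normalization of img to ref."""
    return (img - img.mean()) / (img.std() + 1e-8) * ref.std() + ref.mean()

def change_mask(img_t1, img_t2, k=2.0):
    img_t2n = normalize_to_reference(img_t2, img_t1)
    diff = img_t2n - img_t1
    thresh = k * diff.std()                     # symmetric threshold around the mean difference
    return np.abs(diff - diff.mean()) > thresh  # boolean change/no-change mask

rng = np.random.default_rng(0)
t1 = rng.normal(100, 10, (256, 256))
t2 = t1.copy(); t2[100:140, 100:140] += 60      # synthetic "land-use change"
print(change_mask(t1, t2).sum())                # number of pixels flagged as changed
```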

Human Activity Recognition using Multi-temporal Neural Networks (다중 시구간 신경회로망을 이용한 인간 행동 인식)

  • Lee, Hyun-Jin
    • Journal of Digital Contents Society
    • /
    • v.18 no.3
    • /
    • pp.559-565
    • /
    • 2017
  • Many studies have been conducted to recognize the motion state or behavior of a user using the acceleration sensor built into a smartphone. In this paper, we apply neural networks to the 3-axis acceleration data of a smartphone to recognize human activity. Applying time-series data to neural networks raises performance issues. We propose multi-temporal neural networks, in which three neural networks with different time windows are trained for feature extraction and their outputs are used as the input to a new neural network. The proposed method showed better performance than other methods such as the SVM, AdaBoost, and IBk classifiers on real acceleration data.
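
A minimal sketch of the multi-temporal architecture the abstract describes: one network per time-window length over 3-axis acceleration data, with the per-window class probabilities concatenated and fed to a final fusion network. Window lengths, features, network sizes, and the synthetic data are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hedged sketch of the "multi-temporal neural networks" idea: train one network
# per time-window length, then fuse their class probabilities in a final network.
rng = np.random.default_rng(0)
n_samples, n_steps, n_axes = 600, 200, 3
signal = rng.normal(size=(n_samples, n_steps, n_axes))   # synthetic acceleration data
labels = rng.integers(0, 4, size=n_samples)              # 4 synthetic activity classes

def window_features(x, win):
    """Mean/std over the last `win` timesteps of each axis -> 6 features per sample."""
    w = x[:, -win:, :]
    return np.concatenate([w.mean(axis=1), w.std(axis=1)], axis=1)

base_nets, base_probs = [], []
for win in (50, 100, 200):                                # three different time windows
    feats = window_features(signal, win)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(feats, labels)
    base_nets.append((win, net))                          # kept for later inference
    base_probs.append(net.predict_proba(feats))

fusion_input = np.hstack(base_probs)                      # concatenated per-window probabilities
fusion_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(fusion_input, labels)
print(fusion_net.score(fusion_input, labels))
```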

Multi-scale and Interactive Visual Analysis of Public Bicycle System

  • Shi, Xiaoying;Wang, Yang;Lv, Fanshun;Yang, Xiaohang;Fang, Qiming;Zhang, Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.6
    • /
    • pp.3037-3054
    • /
    • 2019
  • The public bicycle system (PBS) is an emerging and popular mode of public transportation, and PBS data can be used to analyze human movement patterns. Previous work usually focused on specific scales, ignoring the relationships between different levels of the hierarchy. In this paper, we introduce a multi-scale, interactive visual analytics system to investigate human cycling movement and PBS usage. The system supports level-of-detail exploratory analysis of spatio-temporal characteristics in the PBS. Visual views are designed at the global, regional, and micro scales. For the regional scale, a bicycle network is constructed to model PBS data, and a flow-based community detection algorithm is applied to the network to determine station clusters. In contrast to the previously used Louvain algorithm, our method avoids producing super-communities and generates better results. We provide two cases to demonstrate how the system helps analysts explore the overall cycling condition in the city and the spatio-temporal aggregation of stations.
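
A toy sketch of the regional-scale step: build a weighted station graph from trip counts and cluster it with a community-detection algorithm. The paper's own flow-based algorithm is not specified in the abstract, so standard greedy modularity from networkx is used here purely as a stand-in.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hedged sketch: station clustering on a trip-weighted bicycle network.
# Greedy modularity is a stand-in, not the paper's flow-based algorithm.
trips = [                                  # (origin station, destination station, trip count)
    ("A", "B", 120), ("B", "A", 90), ("A", "C", 15),
    ("C", "D", 200), ("D", "C", 180), ("B", "D", 10),
]
G = nx.Graph()
for o, d, w in trips:
    G.add_edge(o, d, weight=G.get_edge_data(o, d, {"weight": 0})["weight"] + w)

clusters = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in clusters])       # e.g. station clusters [['A', 'B'], ['C', 'D']]
```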

Real-Time Detection of Moving Objects from Shaking Camera Based on the Multiple Background Model and Temporal Median Background Model (다중 배경모델과 순시적 중앙값 배경모델을 이용한 불안정 상태 카메라로부터의 실시간 이동물체 검출)

  • Kim, Tae-Ho;Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.3
    • /
    • pp.269-276
    • /
    • 2010
  • In this paper, we present a method for detecting moving objects based on two background models, the MBM (Multiple Background Model) and the TMBM (Temporal Median Background Model), which help describe the multi-layered environment contained in images taken by a shaking camera. Because both background models are pixel-based, they are affected by noise caused by camera movement. Therefore, the correlation coefficient is used to measure the similarity between consecutive images and to estimate the camera motion vector that indicates the camera movement. To compute the correlation coefficient, we choose a selected region in the current image and a search area in the previous image, and the correlation process yields a displacement vector for each selected region; the global maximum of the histogram of these displacement vectors is taken as the camera motion vector between consecutive images. The MBM classifies the intensity distribution of each pixel, tracked continuously through the camera motion vector, into multiple clusters. However, the MBM is weakly sensitive to temporal intensity variation, so the TMBM is used to compensate for this weakness. In video-based experiments, we verify that the presented algorithm needs around 49 ms to generate the two background models and detect moving objects.
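
A minimal sketch of two ingredients from this abstract: taking the most frequent per-region displacement as the global camera-motion vector, and building a temporal median background (TMBM) to flag foreground pixels. Thresholds, sizes, and the synthetic frames are illustrative assumptions, and the MBM clustering step is omitted.

```python
import numpy as np
from collections import Counter

# Hedged sketch: (1) histogram peak of per-region displacements as the camera
# motion vector, and (2) a temporal median background model with a simple
# absolute-difference threshold for foreground detection.
def camera_motion(displacements):
    """displacements: per-region (dx, dy) vectors; the histogram peak is the camera motion."""
    return Counter(map(tuple, displacements)).most_common(1)[0][0]

def foreground_mask(frames, current, thresh=25):
    background = np.median(np.stack(frames), axis=0)    # temporal median background model
    return np.abs(current.astype(float) - background) > thresh

print(camera_motion([(2, 1), (2, 1), (2, 0), (2, 1)]))  # -> (2, 1)

rng = np.random.default_rng(0)
frames = [rng.normal(120, 5, (120, 160)) for _ in range(9)]
cur = frames[-1].copy(); cur[40:60, 60:90] += 80         # synthetic moving object
print(foreground_mask(frames, cur).sum())                # pixels flagged as foreground
```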

Multi-View Video Coding Using Illumination Change-Adaptive Motion Estimation and 2D Direct Mode (조명변화에 적응적인 움직임 검색 기법과 2차원 다이렉트 모드를 사용한 다시점 비디오 부호화)

  • Lee, Yung Ki;Hur, Jae Ho;Lee, Yung Lyul
    • Journal of Broadcast Engineering
    • /
    • v.10 no.3
    • /
    • pp.321-327
    • /
    • 2005
  • An MVC (Multi-view Video Coding) method that uses both illumination change-adaptive ME (Motion Estimation)/MC (Motion Compensation) and a 2D (two-dimensional) direct mode is proposed. First, a new SAD (Sum of Absolute Differences) measure for ME/MC is proposed to compensate for luma pixel value changes in spatio-temporal motion vector prediction. Illumination change-adaptive (ICA) ME/MC uses the new SAD to improve both MV (Motion Vector) accuracy and bit savings. Second, the proposed 2D direct mode, usable in inter-view prediction, is an extended version of the temporal direct mode in MPEG-4 AVC. The proposed MVC method obtains approximately a 0.8 dB PSNR (Peak Signal-to-Noise Ratio) gain compared with MPEG-4 AVC simulcast coding.
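
The abstract does not give the new SAD measure itself; a common illumination-compensating cost is a mean-removed SAD, shown below purely as an illustration of how such a measure discounts a global luma shift.

```python
import numpy as np

# Hedged sketch of an illumination-change-adaptive matching cost: a mean-removed
# SAD, in which each block's average luma is subtracted before the absolute
# differences are summed. This is a common formulation used as an illustration;
# the paper's exact SAD measure is not given in the abstract.
def mr_sad(block_cur, block_ref):
    return np.abs((block_cur - block_cur.mean()) - (block_ref - block_ref.mean())).sum()

rng = np.random.default_rng(0)
cur = rng.integers(0, 255, (16, 16)).astype(float)
ref = cur + 30                                   # same content under a global illumination shift
print(np.abs(cur - ref).sum())                   # plain SAD: large cost
print(mr_sad(cur, ref))                          # mean-removed SAD: ~0
```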

Use of Unmanned Aerial Vehicle for Multi-temporal Monitoring of Soybean Vegetation Fraction

  • Yun, Hee Sup;Park, Soo Hyun;Kim, Hak-Jin;Lee, Wonsuk Daniel;Lee, Kyung Do;Hong, Suk Young;Jung, Gun Ho
    • Journal of Biosystems Engineering
    • /
    • v.41 no.2
    • /
    • pp.126-137
    • /
    • 2016
  • Purpose: The overall objective of this study was to evaluate the vegetation fraction of soybeans grown under different cropping conditions, using an unmanned aerial vehicle (UAV) equipped with a red, green, and blue (RGB) camera. Methods: Test plots were prepared based on different cropping treatments, i.e., soybean single-cropping with and without herbicide application, and soybean with barley cover cropping with and without herbicide application. The UAV flights were manually controlled using a remote flight controller on the ground, with 2.4 GHz radio frequency communication. For image pre-processing, the acquired images were pre-treated and georeferenced using a fisheye distortion removal function, and ground control points were collected using Google Maps. Tarpaulin panels of different colors were used to calibrate the multi-temporal images by converting the RGB digital number values into RGB reflectance using a linear regression method. Excess Green (ExG) vegetation indices for each of the test plots were compared with the M-statistic method in order to quantitatively evaluate the greenness of soybean fields under different cropping systems. Results: The reflectance calibration methods used in the study showed high coefficients of determination, ranging from 0.8 to 0.9, indicating the feasibility of a linear regression fitting method for monitoring multi-temporal RGB images of soybean fields. As expected, the ExG vegetation indices changed according to the soybean growth stages, showing clear differences among the test plots with different cropping treatments in the early season of < 60 days after sowing (DAS). With the M-statistic method, the test plots under different treatments could be discriminated in the early season of < 41 DAS, showing a value of M > 1. Conclusion: Therefore, multi-temporal images obtained with a UAV and an RGB camera could be applied for quantifying overall vegetation fractions and crop growth status, and this information could contribute to determining proper treatments for the vegetation fraction.
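
A minimal sketch of two steps from the abstract: fitting a per-band linear regression from calibration-panel digital numbers to reflectance, and computing the Excess Green index (ExG = 2g - r - b on normalized chromatic coordinates). The panel values below are synthetic placeholders, not the study's data.

```python
import numpy as np

# Hedged sketch: (1) per-band linear DN -> reflectance calibration fitted on
# panels, and (2) the Excess Green index on normalized chromatic coordinates.
def fit_band_calibration(panel_dn, panel_reflectance):
    slope, intercept = np.polyfit(panel_dn, panel_reflectance, 1)   # linear regression
    return lambda dn: slope * dn + intercept

def excess_green(r, g, b):
    total = r + g + b + 1e-8
    r_n, g_n, b_n = r / total, g / total, b / total
    return 2 * g_n - r_n - b_n

# Synthetic calibration panels (placeholders, not the study's measurements).
cal = fit_band_calibration(panel_dn=np.array([30, 120, 220]),
                           panel_reflectance=np.array([0.05, 0.45, 0.85]))
r, g, b = cal(np.array([90.0])), cal(np.array([160.0])), cal(np.array([70.0]))
print(float(excess_green(r, g, b)[0]))            # positive -> green vegetation dominates
```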

A Study on Detection of Deforested Land Using Aerial Photographs (항공사진을 이용한 훼손 산지 탐지 연구)

  • Ham, Bo Young;Lee, Chun Yong;Byun, Hye Kyung;Min, Byoung Keol
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.21 no.3
    • /
    • pp.11-17
    • /
    • 2013
  • With high social demand for diverse uses of forest land, illegal forest land-use changes have increased. We studied a change detection technique for detecting changes in forest land use, based on object-oriented segmentation of red-band differences in multi-temporal aerial photographs. The new object-oriented segmentation method consists of five steps: "Image Composite - Segmentation - Reshaping - Noise Remover - Change Detection". The method extracts deforested objects by selecting a suitable threshold that determines whether objects are divided or merged, based on the relations between objects, their spectral characteristics, and contextual information from multi-temporal aerial photographs. The results show that the object-oriented segmentation method detected 12% of the changes in forest land use, with an average detection accuracy of 96% compared with visual interpretation. Therefore, this research showed that spatial data produced by the object-oriented segmentation method can complement data produced by visual interpretation, and demonstrated the possibility of automatically detecting and extracting changes in forest land use from multi-temporal aerial photographs.
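
A toy sketch of the five-step idea ("Image Composite - Segmentation - Reshaping - Noise Remover - Change Detection") at pixel level: difference the red bands of two dates, threshold, group changed pixels into objects, and drop small noise objects. The threshold and minimum object size are assumptions, and the paper's object-relationship rules are not reproduced.

```python
import numpy as np
from scipy import ndimage

# Hedged sketch: red-band differencing, thresholding, object labeling,
# and size-based noise removal for deforestation change detection.
def deforestation_objects(red_t1, red_t2, k=2.0, min_pixels=20):
    diff = red_t2.astype(float) - red_t1.astype(float)          # image composite / differencing
    changed = diff > diff.mean() + k * diff.std()               # segmentation by threshold
    labels, n = ndimage.label(changed)                          # object formation
    sizes = ndimage.sum(changed, labels, index=range(1, n + 1)) # noise removal by object size
    keep = {i + 1 for i, s in enumerate(sizes) if s >= min_pixels}
    return np.isin(labels, list(keep))                          # final change mask

rng = np.random.default_rng(0)
t1 = rng.normal(80, 5, (200, 200))
t2 = t1.copy(); t2[50:90, 50:90] += 40                          # synthetic deforested patch
print(deforestation_objects(t1, t2).sum())                      # changed pixels retained as objects
```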