• Title/Summary/Keyword: Temporal texture


Temporal Texture modeling for Video Retrieval (동영상 검색을 위한 템포럴 텍스처 모델링)

  • Kim, Do-Nyun; Cho, Dong-Sub
    • The Transactions of the Korean Institute of Electrical Engineers D / v.50 no.3 / pp.149-157 / 2001
  • In video retrieval systems, visual cues from still images and motion information from video are employed as feature vectors. We generate temporal textures to express motion information because they are simple to express and easy to compute. The temporal textures are built from wavelet coefficients (M components) that encode the motion information. Temporal texture feature vectors are then extracted using spatial texture features, i.e., spatial gray-level dependence, and motion amount and motion centroid are computed from the temporal textures. Since motion trajectories provide the most important information for describing motion, our modeling system also extracts the main motion trajectory from the temporal textures.
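As a rough illustration of the kind of features described above, the sketch below builds a temporal texture image from a clip and derives co-occurrence (spatial gray-level dependence) statistics plus motion amount and motion centroid. It is a minimal numpy sketch, not the authors' pipeline: plain frame differencing stands in for the wavelet-based M components, and the function names are hypothetical.

```python
import numpy as np

def temporal_texture(frames, levels=16):
    """Collapse a clip (T, H, W) of grayscale frames into one quantized
    'temporal texture' image.  Plain frame differencing stands in here for the
    paper's wavelet-based M components (an assumption of this sketch)."""
    frames = np.asarray(frames, dtype=np.float32)
    activity = np.abs(np.diff(frames, axis=0)).mean(axis=0)          # (H, W)
    q = np.floor(activity / (activity.max() + 1e-8) * (levels - 1))
    return q.astype(np.int32)

def glcm_features(q, levels=16, dx=1, dy=0):
    """Spatial gray-level dependence (co-occurrence) statistics of the
    temporal texture: contrast and energy for one displacement (dx, dy)."""
    glcm = np.zeros((levels, levels), dtype=np.float64)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = float(((i - j) ** 2 * glcm).sum())
    energy = float((glcm ** 2).sum())
    return contrast, energy

def motion_amount_and_centroid(q):
    """Motion amount = total activity; centroid = activity-weighted position."""
    total = float(q.sum())
    ys, xs = np.indices(q.shape)
    cy = float((ys * q).sum()) / (total + 1e-8)
    cx = float((xs * q).sum()) / (total + 1e-8)
    return total, (cy, cx)

# toy usage on a random 10-frame clip
clip = np.random.rand(10, 64, 64)
q = temporal_texture(clip)
print(glcm_features(q), motion_amount_and_centroid(q))
```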


Performance Analysis of Temporal Texture Modeling for Image Database Retrieval (영상 데이터베이스 검색을 위한 Temporal texture 모델링의 성능분석)

  • Hong, Ji-Su; Kim, Do-Nyun; Kim, Yung-Bok; Kim, Dong-Sub
    • Proceedings of the Korea Information Processing Society Conference / 2000.10b / pp.1661-1664 / 2000
  • In content-based video retrieval, texture can serve as an important variable. Because the surface of every object has its own distinctive properties, texture can be used as an important cue alongside shape and color, and correctly extracting, classifying, and representing the features of an image is critical for video retrieval. Temporal textures are complex, abstract motion patterns of indefinite spatio-temporal extent that appear frequently in the natural world. They can therefore be characterized, and how well temporal texture patterns are exploited can strongly affect retrieval performance. This paper selects three temporal texture models with different characteristics and compares and analyzes them, focusing in particular on whether classification based on the extracted features is accurate. Classification performance depends on two factors: the nature of the model and of the video data. Since the differences in classification time among these models are negligible, performance is analyzed mainly in terms of accuracy.
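A minimal sketch of the accuracy-centred comparison described above, assuming three hypothetical models whose predictions on a small labelled clip set are already available; only the evaluation step is shown.

```python
import numpy as np

# Hypothetical stand-in data: ground-truth classes for ten test clips and the
# predictions returned by three different temporal texture models.
true_labels = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])
predictions = {
    "model_A": np.array([0, 0, 1, 1, 2, 2, 1, 1, 0, 2]),
    "model_B": np.array([0, 1, 1, 1, 2, 2, 2, 1, 0, 0]),
    "model_C": np.array([0, 0, 1, 2, 2, 2, 2, 1, 0, 2]),
}

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

# Since the classification-time differences are negligible, rank by accuracy only.
for name, y_pred in predictions.items():
    print(f"{name}: accuracy = {accuracy(true_labels, y_pred):.2f}")
```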


Study on Performance Analysis of Video Retrieval Using Temporal Texture (Temporal texture를 이용한 비디오 검색의 성능분석)

  • Hong, Ji-Su; Kim, Yung-Bok; Kim, Do-Nyun; Cho, Dong-Sub
    • Proceedings of the Korean Information Science Society Conference / 2000.10b / pp.443-445 / 2000
  • Because the surface of every object has its own distinctive properties, texture can be used in video retrieval as an important cue alongside shape and color. What matters in video retrieval is correctly extracting, classifying, and representing the features of an image. Since temporal texture can characterize even complex, abstract motion patterns of indefinite spatio-temporal extent, how well temporal texture patterns are exploited can strongly affect retrieval performance. This paper selects and compares three temporal texture models with different characteristics, focusing in particular on whether classification based on the extracted features is accurate. Classification performance depends on two factors: the nature of the model and of the video data. Since the differences in classification time among these models are negligible, performance is analyzed mainly in terms of accuracy.


Texture Transfer Based on Video (비디오 기반의 질감 전이 기법)

  • Kong, Phutphalla; Lee, Ho-Chang; Yoon, Kyung-Hyun
    • Proceedings of the Korean Information Science Society Conference / 2012.06c / pp.406-407 / 2012
  • Texture transfer is a non-photorealistic rendering (NPR) technique for expressing various styles according to a source (reference) image. Since the late 2000s there has been much research on texture transfer, but video-based work has been less active, and existing methods do not use important cues such as directional information, which is needed to express the detailed characteristics of the target. We therefore propose a new method that generates texture-transfer animation from video with a directional effect, maintaining temporal coherence and controlling the direction of the transferred texture. For temporal coherence we use optical flow together with a confidence map that adapts to occlusion/disocclusion boundaries, and we steer the texture direction to follow the structure of the input. To express different texture effects in different regions, we compute gradients with directional weights. With these techniques our algorithm produces animations that maintain temporal coherence, express directional texture effects, reflect the characteristics of the source and target images well, and set texture directions automatically.
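The temporal-coherence step described above (optical flow plus a confidence map around occlusion/disocclusion boundaries) could look roughly like the following OpenCV sketch. It is a generic illustration under assumed conventions, not the authors' implementation: Farneback flow and a forward/backward consistency check stand in for whatever flow and confidence estimation the paper actually uses, and the function name is hypothetical.

```python
import cv2
import numpy as np

def warp_previous_result(prev_stylized, prev_gray, cur_gray, err_thresh=1.0):
    """Warp the previously stylized frame into the current frame with dense
    optical flow and build a confidence map from forward/backward consistency,
    so low-confidence (occluded/disoccluded) pixels can be re-synthesized.
    prev_gray / cur_gray are uint8 grayscale frames."""
    h, w = cur_gray.shape
    fwd = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)

    # For each pixel of the current frame, the backward flow points to the
    # location it came from in the previous frame.
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    map_x = xs + bwd[..., 0]
    map_y = ys + bwd[..., 1]
    warped = cv2.remap(prev_stylized, map_x, map_y, cv2.INTER_LINEAR)

    # Forward flow sampled at the source location should cancel the backward
    # flow; large residuals mark occlusion/disocclusion boundaries.
    fwd_at_src = cv2.remap(fwd, map_x, map_y, cv2.INTER_LINEAR)
    err = np.linalg.norm(fwd_at_src + bwd, axis=2)
    confidence = (err < err_thresh).astype(np.float32)

    # Synthesis for the new frame would keep `warped` where confidence is 1
    # and re-synthesize texture where it is 0.
    return warped, confidence
```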

Spatial-temporal texture features for 3D human activity recognition using laser-based RGB-D videos

  • Ming, Yue; Wang, Guangchao; Hong, Xiaopeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.3 / pp.1595-1613 / 2017
  • An IR camera combined with a laser-based IR projector provides an effective solution for real-time capture of moving targets in RGB-D videos. Unlike traditional RGB videos, the captured depth videos are not affected by illumination variation. In this paper, we propose a novel feature extraction framework built on this capture method, namely spatial-temporal texture features for 3D human activity recognition. Spatial-temporal texture features with depth information are insensitive to illumination and occlusion and efficient for describing fine motion. The framework begins with video acquisition based on laser projection, preprocesses the video with visual background extraction, and obtains spatial-temporal key images; the texture features encoded from the key images are then used to generate discriminative features for human activities. Experimental results on different databases and practical scenarios demonstrate the effectiveness of the proposed algorithm on large-scale data sets.
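A compact sketch of the pipeline stages named above: a foreground key image followed by a texture encoding. It assumes a simple running-average background model in place of the visual background extraction used by the authors, and an 8-neighbour LBP histogram in place of their exact texture descriptor.

```python
import numpy as np

def key_image(frames, alpha=0.05, thresh=25):
    """Accumulate foreground activity into a spatial-temporal 'key image'.
    A running-average background stands in for the visual background
    extraction step mentioned in the abstract (an assumption)."""
    frames = np.asarray(frames, dtype=np.float32)
    bg = frames[0].copy()
    acc = np.zeros_like(bg)
    for f in frames[1:]:
        acc += (np.abs(f - bg) > thresh).astype(np.float32)
        bg = (1 - alpha) * bg + alpha * f
    return (255 * acc / (acc.max() + 1e-8)).astype(np.uint8)

def lbp_histogram(img):
    """8-neighbour local binary pattern histogram of the key image, used here
    as the texture encoding (the paper's exact descriptor may differ)."""
    c = img[1:-1, 1:-1].astype(np.int32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx].astype(np.int32)
        code |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

# toy usage: a random 30-frame depth clip -> one 256-bin texture feature
clip = np.random.rand(30, 64, 64) * 255
print(lbp_histogram(key_image(clip)).shape)
```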

Digital Video Steganalysis Based on a Spatial Temporal Detector

  • Su, Yuting; Yu, Fan; Zhang, Chengqian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.360-373 / 2017
  • This paper presents a novel digital video steganalysis scheme against spatial-domain video steganography, based on a spatial temporal detector (ST_D) that considers the spatial and temporal redundancies of the video sequence simultaneously. Three descriptors are constructed on the XY, XT and YT planes to depict the relationship between the current pixel and its adjacent pixels in space and time. To account for the impact of local motion intensity and texture complexity on the histogram distributions of the three descriptors, each frame is segmented into non-overlapping 8×8 blocks for motion and texture analysis. Texture and motion factors then provide weights for the histograms of the three descriptors of each block, and after weighted modulation the histogram statistics are concatenated to build the global ST_D description. The experimental results demonstrate the clear advantage of our features over the rich model (RM), the subtractive pixel adjacency model (SPAM) and the subtractive prediction error adjacency matrix (SPEAM), especially for compressed videos, which constitute most Internet videos.
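The following is a much-simplified sketch of the general idea summarised above: difference histograms on the XY, XT and YT planes, weighted per 8×8 block by texture (variance) and motion factors, then concatenated. The actual ST_D descriptors, weighting scheme and dimensionality in the paper differ; everything here is an assumed stand-in.

```python
import numpy as np

def plane_hist(diff, weights, T=3):
    """Weighted histogram of truncated pixel differences in [-T, T]."""
    d = (np.clip(np.rint(diff), -T, T) + T).astype(np.int32)
    hist = np.zeros(2 * T + 1)
    np.add.at(hist, d.ravel(), weights.ravel())
    return hist / (hist.sum() + 1e-8)

def st_descriptor(video, T=3, block=8):
    """Concatenate weighted difference histograms taken in the XY, XT and YT
    planes of the video cube; per-8x8-block variance and mean frame difference
    act as the texture and motion weighting factors."""
    v = np.asarray(video, dtype=np.float32)            # (frames, H, W)
    n, h, w = v.shape
    h, w = h - h % block, w - w % block
    v = v[:, :h, :w]

    # block-wise texture (variance) and motion (mean |frame difference|) factors
    tex = v.reshape(n, h // block, block, w // block, block).var(axis=(2, 4))
    mot = np.abs(np.diff(v, axis=0)).mean(axis=0)
    mot = mot.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    weight = (1.0 + tex / (tex.mean() + 1e-8)) * (1.0 + mot / (mot.mean() + 1e-8))
    weight = np.repeat(np.repeat(weight, block, axis=1), block, axis=2)

    d_xy = v[:, :, 1:] - v[:, :, :-1]                  # neighbour within a frame
    d_xt = v[1:, :, 1:] - v[:-1, :, :-1]               # neighbour in the XT plane
    d_yt = v[1:, 1:, :] - v[:-1, :-1, :]               # neighbour in the YT plane
    return np.concatenate([
        plane_hist(d_xy, weight[:, :, 1:], T),
        plane_hist(d_xt, weight[1:, :, 1:], T),
        plane_hist(d_yt, weight[1:, 1:, :], T),
    ])

# toy usage: 20 random frames -> a 3 * (2T+1) = 21-dimensional descriptor
video = np.random.randint(0, 256, (20, 64, 64))
print(st_descriptor(video).shape)
```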

Shadow Texture Generation Using Temporal Coherence (시간일관성을 이용한 그림자 텍스처 생성방법)

  • Oh, Kyoung-Su; Shin, Byeong-Seok
    • Journal of Korea Multimedia Society / v.7 no.11 / pp.1550-1555 / 2004
  • Shadows increase the visual realism of computer-generated images and are a good hint for the spatial relationships between objects. Previous methods produce the shadow texture for an object by rendering all objects between that object and the light source, so the total time for generating the shadow textures of all objects is O(N²), where N is the number of objects. We propose a novel shadow texture generation method with constant processing time per object using a shadow depth buffer, and we present a further speed-up that exploits temporal coherence: if transitions between the dynamic and static states are infrequent, the depth values of static objects do not change significantly, so we can reuse the cached depth values of static objects and render only the dynamic objects each frame.
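The temporal-coherence idea above (cache the light-view depth of static objects and re-render only dynamic objects each frame) can be sketched as follows. This is a plain-array illustration of the caching logic only; render_object_depth is a hypothetical rasterizer hook, not part of the paper.

```python
import numpy as np

class ShadowDepthCache:
    """Cache the light-view depth of static objects across frames and
    re-render only dynamic objects each frame.  render_object_depth() is a
    hypothetical rasterizer hook returning a light-space depth image."""

    def __init__(self, size):
        self.size = size
        self.static_depth = None

    def build(self, objects, render_object_depth):
        # Rebuild the static layer only when it is missing, e.g. after
        # invalidate() because an object changed between static and dynamic.
        if self.static_depth is None:
            self.static_depth = np.full(self.size, np.inf, dtype=np.float32)
            for obj in objects:
                if not obj["dynamic"]:
                    np.minimum(self.static_depth,
                               render_object_depth(obj, self.size),
                               out=self.static_depth)

        # Per frame: start from the cached static depth, add dynamic objects.
        depth = self.static_depth.copy()
        for obj in objects:
            if obj["dynamic"]:
                np.minimum(depth, render_object_depth(obj, self.size), out=depth)
        return depth  # a point is shadowed if its light-space depth exceeds this map

    def invalidate(self):
        self.static_depth = None

# toy usage with a dummy depth rasterizer
rng = np.random.default_rng(0)
dummy = lambda obj, size: rng.uniform(1.0, 10.0, size).astype(np.float32)
cache = ShadowDepthCache((256, 256))
shadow_map = cache.build([{"dynamic": False}, {"dynamic": True}], dummy)
```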


A Real-time Dual-mode Temporal Synchronization and Compensation based on Reliability Measure in Stereoscopic Video (3D 입체 영상 시스템에서 신뢰도를 활용한 듀얼 모드 실시간 동기 에러 검출 및 보상 방법)

  • Kim, Giseok; Cho, Jae-Soo; Lee, Gwangsoon; Lee, Eung-Don
    • Journal of Broadcast Engineering / v.19 no.6 / pp.896-906 / 2014
  • In this paper, a real-time dual-mode temporal synchronization and compensation method for stereoscopic video, based on a new reliability measure, is proposed. The goal of temporal alignment is to detect temporal asynchrony and recover the synchronization of the two video streams. Because the accuracy of the temporal synchronization algorithm depends on the 3DTV content, it is necessary to judge whether the synchronization result is reliable before compensating for the synchronization error. Building on our recently developed temporal synchronization method[1], we define a new reliability measure for its result and develop a dual-mode method that uses both a conventional texture-matching method and the temporal spatiogram method[1]. The new reliability measure is based on two distinctive features: a dynamic feature for scene change and a matching-distinction feature. Various experimental results show the effectiveness of the proposed method, and the algorithms are evaluated and verified on an experimental system implemented for 3DTV.
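A small sketch of how a reliability score can be attached to temporal-offset estimation, in the spirit of the two features named above (scene dynamics and matching distinction). The per-frame histogram signatures, the cost function and the 0.01 dynamics scale are assumptions of this sketch, not the paper's spatiogram-based method.

```python
import numpy as np

def frame_signatures(frames, bins=32):
    """Per-frame gray-level histogram signatures (a simple stand-in for the
    texture / temporal spatiogram features used by the paper)."""
    return np.stack([np.histogram(f, bins=bins, range=(0, 255), density=True)[0]
                     for f in frames])

def estimate_offset(sig_left, sig_right, max_offset=15):
    """Return the temporal offset that best aligns the two signature sequences
    together with a reliability score combining (1) matching distinction, i.e.
    how much better the best offset is than the runner-up, and (2) scene
    dynamics, since nearly static scenes make the match unreliable."""
    offsets = list(range(-max_offset, max_offset + 1))
    costs = []
    for d in offsets:
        if d >= 0:
            a, b = sig_left[d:], sig_right[:len(sig_right) - d]
        else:
            a, b = sig_left[:d], sig_right[-d:]
        n = min(len(a), len(b))
        if n == 0:
            costs.append(np.inf)
            continue
        costs.append(float(np.mean(np.abs(a[:n] - b[:n]))))
    costs = np.array(costs)

    order = np.argsort(costs)
    best, second = costs[order[0]], costs[order[1]]
    distinction = (second - best) / (second + 1e-8)          # in [0, 1)
    dynamics = np.mean(np.abs(np.diff(sig_left, axis=0)))    # scene-change energy
    reliability = distinction * min(1.0, dynamics / 0.01)    # 0.01: assumed scale
    return offsets[order[0]], float(reliability)

# usage: off, rel = estimate_offset(frame_signatures(left), frame_signatures(right))
```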

Temporal and Spatial Variation of Soil Moisture in Upland Soil using AMSR2 SMC

  • Na, Sang-Il; Lee, Kyoung-Do; Kim, Sook-Kyoung; Hong, Suk-Young
    • Korean Journal of Soil Science and Fertilizer / v.48 no.6 / pp.658-665 / 2015
  • The temporal and spatial variation of soil moisture is important for understanding patterns of climate change, for developing and evaluating land surface models, for designing surface soil moisture observation networks, and for determining the appropriate resolution of satellite-based remote sensing instruments for soil moisture. In this study, we measured soil moisture in upland soil using Advanced Microwave Scanning Radiometer 2 (AMSR2) Soil Moisture Content (SMC) over an eight-month period in Chungbuk province. The upland soil moisture properties were summarized with simple monthly statistics (average, standard deviation and coefficient of variation), and a supplementary analysis examined the effect of topsoil texture on the soil moisture response. If these results are applied to specific cities and counties in Korea, they can help establish countermeasures and action plans for disaster prevention, because the relationship between soil moisture and topsoil texture can be compared by region, and they provide fundamental data for estimating the effects of future agricultural plans.
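The monthly statistics mentioned above (average, standard deviation and coefficient of variation) reduce to a few lines of pandas; the daily SMC series below is synthetic and purely illustrative.

```python
import numpy as np
import pandas as pd

# Synthetic daily AMSR2 SMC series for one upland site over an eight-month period
dates = pd.date_range("2014-03-01", "2014-10-31", freq="D")
smc = pd.Series(20 + 10 * np.random.rand(len(dates)), index=dates, name="SMC")

# Monthly average, standard deviation and coefficient of variation (CV = std/mean),
# the three statistics used to summarise temporal variation.
monthly = smc.resample("MS").agg(["mean", "std"])
monthly["cv"] = monthly["std"] / monthly["mean"]
print(monthly)
```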

Real-time Style Transfer for Video (실시간 비디오 스타일 전이 기법에 관한 연구)

  • Seo, Sang Hyun
    • Smart Media Journal / v.5 no.4 / pp.63-68 / 2016
  • Texture transfer is a method to transfer the texture of an input image into a target image, and is also used for transferring artistic style of the input image. This study presents a real-time texture transfer for generating artistic style video. In order to enhance performance, this paper proposes a parallel framework using T-shape kernel used in general texture transfer on GPU. To accelerate motion computation time which is necessarily required for maintaining temporal coherence, a multi-scaled motion field is proposed in parallel concept. Through these approach, an artistic texture transfer for video with a real-time performance is archived.