• Title/Summary/Keyword: Motion Texture


Temporal Texture modeling for Video Retrieval (동영상 검색을 위한 템포럴 텍스처 모델링)

  • Kim, Do-Nyun;Cho, Dong-Sub
    • The Transactions of the Korean Institute of Electrical Engineers D / v.50 no.3 / pp.149-157 / 2001
  • In a video retrieval system, visual cues from still images and motion information from video are employed as feature vectors. We generate temporal textures to express motion information; their advantages are a compact representation and low computational cost. The temporal textures are built from the wavelet coefficients that carry motion information (M components). Temporal texture feature vectors are then extracted using spatial texture features, i.e., spatial gray-level dependence. Motion amount and motion centroid are also computed from the temporal textures. Since motion trajectories carry the most important information about motion properties, our modeling system also extracts the main motion trajectory from the temporal textures.

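The spatial gray-level dependence features the abstract applies to temporal textures are, in essence, co-occurrence statistics. A minimal sketch in Python/NumPy (the function names, 8-level quantization, and sample pattern are illustrative assumptions, not the paper's code):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one displacement (dx, dy)."""
    img = np.asarray(img)
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    s = m.sum()
    return m / s if s > 0 else m

def glcm_features(p):
    """Classic Haralick-style statistics: energy, contrast, homogeneity."""
    i, j = np.indices(p.shape)
    energy = np.sum(p ** 2)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return energy, contrast, homogeneity

# A tiny quantized 4x4 "temporal texture" with 8 gray levels
tex = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [7, 7, 6, 6],
                [7, 7, 6, 6]])
p = glcm(tex, dx=1, dy=0)
feats = glcm_features(p)
```

In the paper these statistics would be computed on temporal textures built from the wavelet M components; here a small quantized array stands in.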

Frame rate up conversion method using bilateral motion estimation based on texture activity and neighboring motion information (질감 활성도 기반 양방향 움직임 추정과 인접 움직임 정보를 이용한 프레임률 증가 기법)

  • Jung, Youn-Ho;Kim, Jin-Hyung;Ko, Yun-Ho
    • Journal of Korea Multimedia Society / v.17 no.7 / pp.797-805 / 2014
  • In this paper we propose a new frame rate up-conversion scheme designed to overcome the motion blur of liquid crystal displays caused by their slow response. The conventional bilateral motion estimation method commonly used in frame rate up-conversion has a drawback: it cannot find the true motion vector when blocks with simple texture lie in the search range. To solve this problem, a texture-adaptive bilateral motion estimation method that increases the cost value of blocks with simple texture is proposed. A motion estimation scheme that effectively reuses neighboring motion vectors is also proposed to reduce the computation time required for motion estimation. Since the proposed scheme does not test every available motion vector within the search range, the execution time of frame rate up-conversion is reduced dramatically. Experimental results show that the interpolated frames of the proposed method are improved in both subjective and objective quality compared with those of the conventional method.
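A bare-bones version of bilateral motion estimation with a simple-texture penalty can be sketched as follows (the variance-based penalty, block size, and search range are illustrative assumptions; the paper's texture-activity cost is more elaborate):

```python
import numpy as np

def crop(frame, y, x, bs):
    """Return a bs x bs block, or None if it falls outside the frame."""
    if y < 0 or x < 0 or y + bs > frame.shape[0] or x + bs > frame.shape[1]:
        return None
    return frame[y:y + bs, x:x + bs].astype(float)

def bilateral_me(prev, nxt, by, bx, bs=4, search=2, var_thresh=1.0, penalty=50.0):
    """Bilateral motion estimation for one block of the frame to interpolate.

    Symmetric candidates prev[p - v] vs nxt[p + v] are compared by SAD;
    low-variance (simple-texture) matches get an extra cost, a simplified
    stand-in for the paper's texture-activity term."""
    best_v, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            bp = crop(prev, by - dy, bx - dx, bs)
            bn = crop(nxt, by + dy, bx + dx, bs)
            if bp is None or bn is None:
                continue
            cost = np.abs(bp - bn).sum()
            if bp.var() < var_thresh:   # simple texture: inflate the cost
                cost += penalty
            if cost < best_cost:
                best_cost, best_v = cost, (dy, dx)
    return best_v, best_cost

# A textured 4x4 patch moving by (2, 2) between the two frames
prev = np.zeros((12, 12))
nxt = np.zeros((12, 12))
patch = np.arange(16).reshape(4, 4)
prev[4:8, 4:8] = patch
nxt[6:10, 6:10] = patch
v, cost = bilateral_me(prev, nxt, by=5, bx=5)   # expect the symmetric vector (1, 1)
```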

A Data Driven Motion Generation for Driving Simulators Using Motion Texture (모션 텍스처를 이용한 차량 시뮬레이터의 통합)

  • Cha, Moo-Hyun;Han, Soon-Hung
    • Transactions of the Korean Society of Mechanical Engineers A / v.31 no.7 s.262 / pp.747-755 / 2007
  • To improve the realism of motion simulators, data-driven motion generation has been introduced, which simply records and replays the motion of real vehicles. Real samples yield high realism, but allow no interaction between the user and the simulation. In character animation, by contrast, user-controllable motions are generated from a database of motion-capture signals together with appropriate control algorithms. In this study we propose a new motion generation method as a tool for an interactive data-driven driving simulator. We sample motion data from a real vehicle, transform the data into an appropriate data structure (the motion block), and store a series of them in a database. During simulation, the system searches for and synthesizes optimal motion blocks from the database and generates a motion stream reflecting the current simulation conditions and parameterized user demands. We demonstrate the value of the proposed method through experiments with the integrated motion platform system.
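The search over stored motion blocks can be illustrated with a toy nearest-neighbor lookup (the feature vector, block contents, and distance metric are assumptions for illustration, not the paper's actual data structure):

```python
import numpy as np

def make_block(samples):
    """Hypothetical motion block: raw platform samples plus a summary feature."""
    samples = np.asarray(samples, dtype=float)
    feature = np.array([samples.mean(), samples.std(), samples[-1] - samples[0]])
    return {"samples": samples, "feature": feature}

def best_block(db, target_feature):
    """Pick the stored block whose feature is closest (Euclidean) to the
    current simulation state."""
    dists = [np.linalg.norm(b["feature"] - target_feature) for b in db]
    return int(np.argmin(dists))

db = [make_block([0, 0, 0, 0]),   # idle
      make_block([0, 1, 2, 3]),   # steady acceleration
      make_block([3, 2, 1, 0])]   # braking
query = np.array([1.4, 1.0, 2.8])  # state roughly matching "acceleration"
idx = best_block(db, query)
```

In the actual system the matched blocks would then be blended into a continuous motion stream; here only the search step is shown.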

Digital Video Steganalysis Based on a Spatial Temporal Detector

  • Su, Yuting;Yu, Fan;Zhang, Chengqian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.360-373 / 2017
  • This paper presents a novel digital video steganalysis scheme against spatial-domain video steganography based on a spatial temporal detector (ST_D) that considers the spatial and temporal redundancies of video sequences simultaneously. Three descriptors are constructed on the XY, XT and YT planes, respectively, to depict the spatial and temporal relationships between the current pixel and its adjacent pixels. Considering the impact of local motion intensity and texture complexity on the histogram distribution of the three descriptors, each frame is segmented into non-overlapping 8×8 blocks for motion and texture analysis. Texture and motion factors then provide reasonable weights for the histograms of the three descriptors of each block. After weighted modulation, the statistics of the histograms of the three descriptors are concatenated to build the global description of ST_D. The experimental results demonstrate the clear advantage of our features over those of the rich model (RM), the subtractive pixel adjacency model (SPAM) and the subtractive prediction error adjacency matrix (SPEAM), especially for compressed videos, which constitute most Internet videos.
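A much-simplified stand-in for the three plane descriptors is a normalized histogram of first-order differences along each axis of a video volume (the bin layout and the single-difference descriptor are illustrative assumptions; ST_D itself uses richer adjacent-pixel relations plus texture/motion weighting):

```python
import numpy as np

def plane_residual_hist(vol, axis, bins=8, rng=(-4, 4)):
    """Histogram of first-order pixel differences along one axis of the
    T x Y x X volume: axis=0 is temporal, axes 1 and 2 are spatial."""
    d = np.diff(vol.astype(int), axis=axis)
    h, _ = np.histogram(np.clip(d, rng[0], rng[1] - 1), bins=bins, range=rng)
    return h / max(h.sum(), 1)

vol = np.random.default_rng(0).integers(0, 4, size=(5, 8, 8))  # T x Y x X
# Concatenate the three per-axis histograms into one global feature
feat = np.concatenate([plane_residual_hist(vol, a) for a in (0, 1, 2)])
```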

Real-time Style Transfer for Video (실시간 비디오 스타일 전이 기법에 관한 연구)

  • Seo, Sang Hyun
    • Smart Media Journal / v.5 no.4 / pp.63-68 / 2016
  • Texture transfer is a method of transferring the texture of an input image onto a target image, and it is also used to transfer the artistic style of the input image. This study presents real-time texture transfer for generating artistically stylized video. To enhance performance, this paper proposes a parallel GPU framework using the T-shaped kernel of general texture transfer. To accelerate the motion computation required for temporal coherence, a multi-scale motion field is also computed in parallel. Through this approach, artistic texture transfer for video is achieved with real-time performance.

Segmented Video Coding Using Variable Block-Size Segmentation by Motion Vectors (움직임벡터에 의한 가변블럭영역화를 이용한 영역기반 동영상 부호화)

  • 이기헌;김준식;박래홍;이상욱;최종수
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.4 / pp.62-76 / 1994
  • In this paper, a segmentation-based coding technique for video sequences is proposed. The proposed method separates an image into contour and texture parts; the visually sensitive contour part is represented by chain codes, while the visually insensitive texture part is reconstructed from a representative motion vector per region and the mean of the segmented frame difference. A change detector finds moving areas, and variable block sizes are adopted to represent different motions correctly. For better reconstructed image quality, the displaced frame difference between the original image and the motion-compensated image reconstructed from the representative motion vector is segmented. Computer simulation with several video sequences shows that the proposed method outperforms conventional ones in terms of peak signal-to-noise ratio (PSNR) and compression ratio.

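Chain coding of the contour part, as used above, can be shown with the standard 8-directional Freeman code (the traversal order of the sample square is arbitrary):

```python
# 8-directional Freeman chain code in (row, col) steps: 0=E, 1=NE, 2=N, ... 7=SE
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(points):
    """Encode an ordered contour (list of (row, col) points) as Freeman directions."""
    code = []
    for (r0, c0), (r1, c1) in zip(points, points[1:]):
        code.append(DIRS.index((r1 - r0, c1 - c0)))
    return code

# Closed 2x2 square traversed clockwise from the top-left corner
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
code = chain_code(square)  # -> [0, 6, 4, 2]: E, S, W, N
```

Each contour then costs one start point plus 3 bits per step, which is why chain codes are attractive for the visually sensitive contour part.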

Motion Segmentation for Layer Decomposition of Image Sequences (영상 시퀀스의 계층 분리를 위한 움직임 분할)

  • 장정진;오정수;홍현기;최종수
    • Proceedings of the IEEK Conference / 2000.11d / pp.29-32 / 2000
  • This paper proposes a motion segmentation algorithm for layer decomposition of image sequences. The proposed algorithm segments an image into initial regions using color and texture and computes a motion model for each initial region. Each pixel is then assigned either to one of the motions represented by the models or to a motion outside them, which segments the image into motion regions. The proposed algorithm is applied to image sequences and the segmented motions are shown.


Smoke detection in video sequences based on dynamic texture using volume local binary patterns

  • Lin, Gaohua;Zhang, Yongming;Zhang, Qixing;Jia, Yang;Xu, Gao;Wang, Jinjun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.11 / pp.5522-5536 / 2017
  • In this paper, a video-based smoke detection method using dynamic texture features extracted with volume local binary patterns is studied. A block-based method was first used to distinguish smoke frames in high-definition videos obtained experimentally. We then propose a method that extracts dynamic texture features directly from irregular motion regions, reducing the adverse impact of block size and the motion-area-ratio threshold. Several general volume local binary patterns, including LBP-TOP, VLBP, CLBP-TOP and CVLBP, were used to extract dynamic texture in order to study the effect of the number of sample points, the frame interval and the operator modes on smoke detection. A support vector machine was used as the classifier for the dynamic texture features. The results show that dynamic texture is a reliable clue for video-based smoke detection. Increasing the dimension of the feature vector generally helps reduce the false alarm rate, but does not always improve the detection rate. Additionally, the feature computation time was not directly related to the vector dimension in our experiments, which is important for realizing real-time detection.
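The flavor of LBP-TOP, one of the volume local binary patterns compared above, can be sketched by computing ordinary LBP histograms on the three orthogonal central slices of a video volume (real LBP-TOP uses circular sampling over all slices; this single-slice, 8-neighbor version is a simplification):

```python
import numpy as np

def lbp_plane(plane):
    """8-neighbor LBP codes for the interior pixels of one 2-D plane."""
    c = plane[1:-1, 1:-1]
    neigh = [plane[0:-2, 0:-2], plane[0:-2, 1:-1], plane[0:-2, 2:],
             plane[1:-1, 2:],   plane[2:, 2:],     plane[2:, 1:-1],
             plane[2:, 0:-2],   plane[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=int)
    for bit, n in enumerate(neigh):
        codes |= ((n >= c).astype(int) << bit)   # one bit per neighbor
    return codes

def lbp_top(vol):
    """Concatenated LBP histograms from the central XY, XT and YT planes."""
    t, y, x = (s // 2 for s in vol.shape)
    planes = [vol[t, :, :], vol[:, y, :], vol[:, :, x]]  # XY, XT, YT
    hists = [np.bincount(lbp_plane(p).ravel(), minlength=256) for p in planes]
    return np.concatenate(hists)

vol = np.random.default_rng(1).integers(0, 255, size=(7, 9, 9))  # T x Y x X
feat = lbp_top(vol)  # 3 x 256 histogram bins
```

Such a feature vector would then be fed to the SVM classifier mentioned in the abstract.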

Block-based Motion Vector Smoothing for Nonrigid Moving Objects (비정형성 등속운동 객체의 움직임 추정을 위한 블록기반 움직임 평활화)

  • Sohn, Young-Wook;Kang, Moon-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.6 / pp.47-53 / 2007
  • True motion estimation is necessary for deinterlacing, frame-rate conversion, and film judder compensation. Several block-based approaches find true motion vectors by tracing minimum sum-of-absolute-differences (SAD) values while considering spatial and temporal consistency. However, these algorithms cannot find robust motion vectors when the texture of objects changes. To find robust motion vectors in such regions, a recursive vector selection scheme and an adaptive weighting parameter are proposed. Motion vectors of previous frames are recursively averaged and utilized in regions where motion estimation fails. The weighting parameter controls fidelity to the input vectors versus the recursively averaged ones, where the input vectors come from a conventional estimator. If the input vectors are not reliable, the mean vectors of the previous frame are used for temporal consistency. Experimental results show more robust motion vectors than those of conventional methods on objects with time-varying texture.
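The blending idea, weighting the estimator's vector against a temporally averaged one under an adaptive parameter, can be sketched as follows (the SAD-based confidence used for the weight is an illustrative assumption, not the paper's exact parameter):

```python
import numpy as np

def smooth_vector(v_in, v_prev_avg, sad_in, sad_scale=100.0):
    """Blend the input motion vector with the recursively averaged vector of
    the previous frame. A high SAD means a poor match, so the weight shifts
    toward the temporal average."""
    alpha = 1.0 / (1.0 + sad_in / sad_scale)   # confidence in the input vector
    v = alpha * np.asarray(v_in, float) + (1 - alpha) * np.asarray(v_prev_avg, float)
    return v, alpha

# A reliable input (SAD = 0) keeps the estimator's vector unchanged
v, a = smooth_vector([4, 0], [0, 0], sad_in=0.0)
# An unreliable input (huge SAD) falls back toward the temporal average
v2, a2 = smooth_vector([4, 0], [0, 0], sad_in=1e6)
```

Feeding each frame's smoothed vectors back into the running average gives the recursive behavior the abstract describes.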

A Fast Algorithm for Region-Oriented Texture Coding

  • Choi, Young-Gyu;Choi, Chong-Hwan;Cheong, Ha-Young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.6 / pp.519-525 / 2016
  • This paper addresses the framework of object-oriented image coding, describing a new algorithm for texture approximation based on one-dimensional Legendre polynomials. Using 1-D orthogonal basis functions, the computational complexity that usually makes most 2-D region-oriented approaches prohibitive is significantly reduced, while only a slight increase in distortion is introduced. To preserve as much of the two-dimensional intersample correlation of the texture information as possible, suitable pseudo-bidimensional basis functions are used, yielding significant improvements over the straightforward 1-D approach. The algorithm has been tested on still images as well as motion-compensated sequences, showing promising applications for very-low-bitrate video coding.
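The core idea, least-squares texture approximation with low-order 1-D Legendre polynomials, can be sketched with NumPy (row-wise fitting here is a simplification; the paper's pseudo-bidimensional basis functions are more involved):

```python
import numpy as np
from numpy.polynomial import legendre

def approx_rows(texture, degree=3):
    """Approximate each row of a texture region with a low-order Legendre
    polynomial via least-squares fitting on [-1, 1]; the coefficients are
    what a region-oriented coder would transmit."""
    h, w = texture.shape
    x = np.linspace(-1, 1, w)
    recon = np.empty_like(texture, dtype=float)
    coeffs = []
    for r in range(h):
        c = legendre.legfit(x, texture[r], degree)   # (degree + 1) coefficients
        coeffs.append(c)
        recon[r] = legendre.legval(x, c)             # decoder-side reconstruction
    return np.array(coeffs), recon

# A smooth gradient texture is captured almost exactly by a degree-3 fit:
# 4 rows x 16 pixels compress to 4 rows x 4 coefficients
tex = np.outer(np.ones(4), np.linspace(0, 10, 16))
coeffs, recon = approx_rows(tex, degree=3)
err = np.abs(recon - tex).max()
```

Each row shrinks from 16 samples to 4 coefficients, illustrating how the 1-D basis trades a little distortion for much lower complexity than a full 2-D polynomial fit.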