• Title/Summary/Keyword: 비디오시각화 (video visualization)


SPIHT Video Coder Using Perceptual Weight in Wavelet transform (웨이브릿 변환에서 인지적 가중치를 이용한 SPIHT 비디오 부호기)

  • 정용재;강경원;문광석
    • Journal of the Institute of Convergence Signal Processing / v.3 no.1 / pp.15-20 / 2002
  • In a video coder, intra-frame coding has a strong influence on the quality of the whole sequence. Standardized video coders use the DCT, which can produce poor image quality at low bit rates because of blocking artifacts. This paper proposes a video coding method that improves image quality from the standpoint of human vision: a perceptual weight is applied to the frame in the wavelet domain, the weighted coefficients are coded with SPIHT and VLC, and visually noticeable noise is eliminated.

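As a rough illustration of the perceptual weighting step described above, the sketch below scales wavelet subbands by per-subband weights before SPIHT-style coding. It assumes a grayscale frame as a NumPy array and uses the PyWavelets package; the wavelet choice and the weight values are illustrative assumptions, not the paper's.

```python
# Sketch: scale wavelet subbands by perceptual weights before SPIHT coding.
# The wavelet and the weight values below are assumptions for illustration.
import numpy as np
import pywt

def perceptually_weighted_subbands(frame, wavelet="bior4.4", levels=3):
    coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=levels)
    # Hypothetical HVS weights: keep coarse subbands, attenuate fine detail.
    weights = [1.0, 0.9, 0.7, 0.5]
    weighted = [coeffs[0] * weights[0]]
    for i, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
        w = weights[i]
        weighted.append((ch * w, cv * w, cd * w))
    return weighted  # these coefficients would then feed the SPIHT encoder
```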

Video Data Modeling for Supporting Structural and Semantic Retrieval (구조 및 의미 검색을 지원하는 비디오 데이타의 모델링)

  • 복경수;유재수;조기형
    • Journal of KIISE:Databases / v.30 no.3 / pp.237-251 / 2003
  • In this paper, we propose a video retrieval system that efficiently searches the logical structure and semantic contents of video data. The proposed system employs a layered modeling method that organizes video data into a raw data layer, a content layer, and a key frame layer; the content layer represents the logical structure and semantic contents of the video data. The system also supports various types of search, such as text search, visual-feature-based similarity search, spatio-temporal-relationship-based similarity search, and semantic content search.
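
A minimal sketch of the three-layer organization described above (raw data, content, key frame). The class and field names are assumptions for illustration, not the paper's schema.

```python
# Sketch of a layered video model: raw data, content (logical structure and
# semantic annotation), and key frames. Names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class KeyFrame:
    frame_no: int
    color_histogram: List[float]       # visual feature for similarity search

@dataclass
class ContentUnit:                     # shot/scene with a semantic annotation
    start_frame: int
    end_frame: int
    annotation: str
    key_frames: List[KeyFrame] = field(default_factory=list)

@dataclass
class Video:
    raw_data_uri: str                  # raw data layer: the stream itself
    contents: List[ContentUnit] = field(default_factory=list)

    def text_search(self, query: str) -> List[ContentUnit]:
        # Simplest of the supported search types: match the annotations.
        return [c for c in self.contents if query in c.annotation]
```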

A Study on the Digital Video Frame Obfuscation Method for Intellectual Property Protection (저작권 보호를 위한 디지털 비디오 화면 모호화 기법에 관한 연구)

  • Boo, Hee-Hyung;Kim, Sung-Ho
    • Journal of Korea Multimedia Society / v.15 no.1 / pp.1-8 / 2012
  • In this paper, we propose a digital video frame obfuscation method for intellectual property protection that uses the DC component of the intra frame and the motion vector of the inter frame during digital video encoding. The proposed method considers the characteristics of the HVS (human visual system), which is sensitive to low and middle frequencies. The method distorts the signal by XORing an authentication signal with the DC coefficient of the intra frame, which carries the main image information, and with the sign of the motion vector, which carries edge motion, so that the video is displayed correctly only when the proper authentication signal is applied.
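
A minimal sketch of the XOR-based distortion described above, assuming the authentication signal is a sequence of bits; how the bits are mapped onto DC coefficients and motion-vector signs is an assumption here, not the paper's exact scheme.

```python
# Sketch: distort an intra-frame DC coefficient and an inter-frame motion
# vector sign with an authentication bit. Because XOR (and sign flipping) is
# its own inverse, re-applying the correct authentication signal restores the
# original values; a wrong signal leaves the video visibly distorted.
def obfuscate_dc(dc_coeff: int, auth_bit: int) -> int:
    return dc_coeff ^ auth_bit          # assumed: XOR on the low bit of DC

def obfuscate_mv(mv: int, auth_bit: int) -> int:
    return -mv if auth_bit else mv      # assumed: flip the sign when bit is 1

assert obfuscate_dc(obfuscate_dc(412, 1), 1) == 412
assert obfuscate_mv(obfuscate_mv(-3, 1), 1) == -3
```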

The Structure of Synchronized Data Broadcasting Applications (연동형 데이터 방송 애플리케이션의 구조)

  • 정문열;백두원
    • Journal of Broadcast Engineering / v.9 no.1 / pp.74-82 / 2004
  • In digital broadcasting, applications are computer programs executed by the set-top box (TV receiver), and synchronized applications are those that perform tasks at specified moments in the underlying video. This paper describes the key concepts, standards, and techniques needed to implement synchronized applications and explains how to combine them. The paper assumes the European data broadcasting standard, DVB-MHP. In DVB-MHP, scheduled stream events are the recommended means of synchronizing applications with video streams: the application receives each stream event and executes the action associated with it at the time specified in the event. Commercially available stream generators, i.e., multiplexers, do not generate transport streams that support scheduled stream events, so we used a stream generator implemented in our lab. We implemented a synchronized application in which the actions triggered by stream events display graphic images, and found that it processes scheduled stream events successfully. In our experiments, the stream events were synchronized with the video, and the deviation from the intended time was within 240 ms, which is an acceptable tolerance for synchronization skew between graphic images and video.
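
The scheduling logic of scheduled stream events can be pictured with the sketch below. This is not the DVB-MHP Java API; it is only an illustration of executing each event's action once the playback position reaches the time carried in the event.

```python
# Sketch: run each scheduled stream event's action when the media time
# reaches the time specified in the event (illustration, not the MHP API).
import heapq
import itertools

class ScheduledEventQueue:
    def __init__(self):
        self._events = []                     # heap of (time_ms, seq, action)
        self._seq = itertools.count()         # tie-breaker for equal times

    def receive_stream_event(self, event_time_ms, action):
        heapq.heappush(self._events, (event_time_ms, next(self._seq), action))

    def on_media_time(self, media_time_ms):
        # Called periodically with the current playback position.
        while self._events and self._events[0][0] <= media_time_ms:
            _, _, action = heapq.heappop(self._events)
            action()                          # e.g. display a graphic image
```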

Temporal Color Correlograms for Video Retrieval (비디오 검색을 위한 시간 색상 상관관계그래프)

  • Park, Ho-Sik;Lee, Young-Sik;Kim, Jin-Han;Na, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the Korea Information Processing Society Conference / 2003.05a / pp.643-646 / 2003
  • This paper proposes a new video retrieval method based on the color content of segmented video clips. The proposed temporal color correlogram computes the spatio-temporal relationships within a video clip using common statistical data. It builds on the HSV (hue, saturation, value) color correlogram, which has proven very effective for content-based image retrieval, and is constructed from the autocorrelation of the quantized HSV color values of frame samples extracted from a single video clip. For evaluation, a content-based multimedia retrieval system was built to run queries against 11 hours of segmented MPEG-1 video and to judge the relevance of the results. The experiments show that the proposed method outperforms existing retrieval methods when only visual information is available for retrieval.

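A simplified sketch of a temporal color autocorrelogram in the spirit of the abstract above: for each quantized HSV color, the probability that a pixel still has that color Δ frames later. The 8x4x4 quantization, the HSV value range of [0, 1), and the distance set are assumptions for illustration.

```python
# Simplified temporal color autocorrelogram over sampled frames.
# frames_hsv: list of HxWx3 arrays with HSV values in [0, 1).
import numpy as np

def quantize_hsv(hsv_frame, bins=(8, 4, 4)):
    h, s, v = hsv_frame[..., 0], hsv_frame[..., 1], hsv_frame[..., 2]
    qh = np.minimum((h * bins[0]).astype(int), bins[0] - 1)
    qs = np.minimum((s * bins[1]).astype(int), bins[1] - 1)
    qv = np.minimum((v * bins[2]).astype(int), bins[2] - 1)
    return (qh * bins[1] + qs) * bins[2] + qv       # one color index per pixel

def temporal_autocorrelogram(frames_hsv, deltas=(1, 2, 4), n_colors=128):
    quantized = [quantize_hsv(f) for f in frames_hsv]
    corr = np.zeros((len(deltas), n_colors))
    for d_i, d in enumerate(deltas):
        counts = np.zeros(n_colors)
        same = np.zeros(n_colors)
        for t in range(len(quantized) - d):
            a, b = quantized[t], quantized[t + d]
            counts += np.bincount(a.ravel(), minlength=n_colors)
            same += np.bincount(a[a == b], minlength=n_colors)
        corr[d_i] = same / np.maximum(counts, 1)    # P(color kept after d)
    return corr                                     # feature vector per clip
```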

Video WaterMarking Scheme with Adaptive Embedding in 3D-DCT domain (3D-DCT 계수를 적응적으로 이용한 비디오 워터마킹)

  • 한지석;신현빈;문영식
    • Proceedings of the Korean Information Science Society Conference / 2004.04a / pp.349-351 / 2004
  • Video consists of consecutive frames that are similar to their neighbors. If a watermark is embedded in regions where adjacent frames are similar, i.e., regions with no motion, the watermark is easily noticed. In this paper, 3D-DCT coefficients are used in a way that takes these characteristics of video into account, to achieve both imperceptibility and robustness of the watermark. Specifically, a sensitivity measure is derived from the quantization constants used for 3D-DCT compression, the sensitivity of regions whose local motion is large relative to the global motion is adjusted, and the watermark is embedded into visually significant coefficients in proportion to the amount of motion. Experiments show that, compared with an existing method that uses 3D-DCT coefficients without considering video characteristics, the PSNR is similar, but because the embedding is based on the JND the watermark remains imperceptible, and robustness against MPEG compression and temporal attacks improves by about 5%.

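A loose sketch of embedding in the 3D-DCT domain as outlined above: take an 8x8x8 cube of frames, transform it, and add watermark bits to large-magnitude coefficients with a strength scaled by a motion-derived sensitivity factor. The JND model, the coefficient-selection rule and the strength values here are assumptions, not the paper's.

```python
# Sketch: additive watermark in the 3D-DCT of an 8x8x8 video cube, with the
# embedding strength scaled by an externally supplied motion sensitivity.
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(cube, wm_bits, alpha=2.0, motion_sensitivity=1.0):
    coeffs = dctn(cube.astype(float), norm="ortho")
    # Assumed selection: the largest-magnitude coefficients, skipping the DC.
    idx = np.argsort(-np.abs(coeffs), axis=None)[1:1 + len(wm_bits)]
    flat = coeffs.ravel()                   # view; edits write back to coeffs
    for k, bit in zip(idx, wm_bits):
        flat[k] += alpha * motion_sensitivity * (1 if bit else -1)
    return idctn(coeffs, norm="ortho")      # watermarked 8x8x8 cube
```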

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.47-60 / 2012
  • Video data is unstructured and has a complex structure. As efficient management and retrieval of video data become more important, studies on video parsing based on the visual features of the video content have been carried out to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting shot boundaries defined by physical boundaries does not consider the semantic associations within video data. Recently, studies that use clustering methods to group semantically associated shots into video scenes, defined by semantic boundaries, have been actively pursued. Previous approaches to video scene detection apply clustering algorithms with similarity measures between shots that depend mainly on color features. However, correctly identifying a shot or scene and detecting gradual transitions such as dissolves, fades and wipes is difficult, because the color features of video data are noisy and change abruptly when an unexpected object intervenes. In this paper, to solve these problems, we propose the Scene Detector using Color histogram, corner Edge and Object color histogram (SDCEO), which detects video scenes by clustering similar shots that belong to the same event based on visual features including the color histogram, the corner edge and the object color histogram. SDCEO is notable in that it uses an edge feature together with color features, and as a result it effectively detects gradual as well as abrupt transitions. SDCEO consists of a Shot Bound Identifier and a Video Scene Detector. The Shot Bound Identifier comprises a Color Histogram Analysis step and a Corner Edge Analysis step. In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, which records the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shot boundaries by measuring the similarity of the color histograms between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature: it detects associated shot boundaries by comparing the corner edge feature between the last frame of the previous shot boundary and the first frame of the next shot boundary. In the Key-frame Extraction step, SDCEO compares each frame with all frames in the same shot boundary, measures the similarity using the histogram Euclidean distance, and selects the frame most similar to all the others as the key frame. The Video Scene Detector clusters associated shots that belong to the same event using hierarchical agglomerative clustering based on visual features including the color histogram and the object color histogram; the final video scenes are formed by repeated clustering until the similarity distance between shot boundaries falls below a threshold h.
In this paper, we build a prototype of SDCEO and carry out experiments on manually constructed baseline data; the results, a precision of 93.3% for shot boundary detection and 83.3% for video scene detection, are satisfactory.
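
A simplified sketch of the two SDCEO stages described above: shot boundaries from the color-histogram distance between consecutive frames, then hierarchical agglomerative clustering of shot key-frame features into scenes. The 32-bin histogram, the thresholds and average linkage are assumptions; the corner-edge and object-color-histogram steps are only marked by comments.

```python
# Sketch: (1) shot boundaries from color-histogram distance between frames,
# (2) scenes from hierarchical agglomerative clustering of key-frame features.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def color_histogram(frame, bins=32):
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def shot_boundaries(frames, threshold=0.3):
    hists = [color_histogram(f) for f in frames]
    bounds = [0]
    for t in range(1, len(hists)):
        if np.linalg.norm(hists[t] - hists[t - 1]) > threshold:
            bounds.append(t)        # corner-edge confirmation would go here
    return bounds

def cluster_shots_into_scenes(keyframe_features, threshold_h=0.5):
    # keyframe_features: one feature vector (e.g. histogram) per shot.
    z = linkage(np.asarray(keyframe_features), method="average")
    return fcluster(z, t=threshold_h, criterion="distance")
```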

Design and Implementation of Automated Detection System of Personal Identification Information for Surgical Video De-Identification (수술 동영상의 비식별화를 위한 개인식별정보 자동 검출 시스템 설계 및 구현)

  • Cho, Youngtak;Ahn, Kiok
    • Convergence Security Journal / v.19 no.5 / pp.75-84 / 2019
  • Recently, the value of video as important medical-information data has been increasing because of the rich clinical information it carries. On the other hand, video, like other medical images, must be de-identified, but existing methods are mainly specialized for structured data and still images, which makes them difficult to apply to video. In this paper, we propose an automated system that indexes candidate elements of personal identification information on a frame-by-frame basis to solve this problem. The proposed system performs the indexing using text and person detection after preprocessing with scene segmentation and a color-knowledge-based method. The generated index information is provided as metadata according to the purpose of use. To verify the effectiveness of the proposed system, the indexing speed was measured with a prototype implementation and real surgical video. The system processed video more than twice as fast as the playing time of the input, and a surgical-education content production case confirmed that the index supports the required decision making.
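
The frame-by-frame indexing step can be pictured with the sketch below; `detect_text` and `detect_person` stand in for whatever detectors are used and are assumptions, as is the metadata layout.

```python
# Sketch: index candidate personal-identification regions per frame and
# return the result as metadata for the later de-identification step.
def index_identification_candidates(frames, detect_text, detect_person):
    index = []
    for frame_no, frame in enumerate(frames):
        regions = [("text", box) for box in detect_text(frame)]
        regions += [("person", box) for box in detect_person(frame)]
        if regions:
            index.append({"frame": frame_no, "regions": regions})
    return index
```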

Transmission system design for synchronized data service on digital data broadcasting environment (디지털 데이터 방송 환경에서 동기화 데이터 서비스를 위한 전송 시스템 설계)

  • Lee Yong Ju;Park Min Sik;Choi Ji Hoon;Choi Jin Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2003.11a / pp.201-204 / 2003
  • In this paper, we propose a transmission system for providing synchronized data services in digital data broadcasting, together with a synchronized data service method that uses it. Supplementary data delivered through data broadcasting is classified by its characteristics into asynchronous, synchronous, and synchronized data. Unlike asynchronous data, which is presented on the data broadcasting receiver at the user's request, synchronized data must be presented in synchronization with a specific scene of the video or audio; its defining characteristic is that the playback time, i.e., the moment at which the data must be presented, is transmitted together with the data. Because of this characteristic, the transmission systems for asynchronous data services currently used in most data broadcasting are unsuitable for transmitting synchronized data, and a new transmission system is needed. We therefore propose a new synchronized data transmission system that adds, to an existing asynchronous data transmission system, a device that outputs an MPEG-2 audio/video TS (Transport Stream) and a device that multiplexes the synchronized data, together with a synchronized data service method that uses this system.

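The core idea, attaching a playback time to each synchronized-data unit and multiplexing it with the audio/video stream so that it arrives before it must be presented, can be sketched as below. The packet layout and the 500 ms delivery margin are illustrative assumptions, not MPEG-2 section or PES syntax.

```python
# Sketch: carry a playback time with each synchronized-data unit and
# interleave the units into the A/V stream ahead of their playback times.
from dataclasses import dataclass

@dataclass
class SyncDataUnit:
    playback_time_ms: int          # when the receiver must present the data
    payload: bytes

def multiplex(av_packets, sync_units, margin_ms=500):
    # av_packets: iterable of (time_ms, ts_packet) in increasing time order.
    pending = sorted(sync_units, key=lambda u: u.playback_time_ms)
    for time_ms, ts_packet in av_packets:
        while pending and pending[0].playback_time_ms <= time_ms + margin_ms:
            yield ("data", pending.pop(0))
        yield ("av", ts_packet)
```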

H.263-Based Scalable Video Codec (H.263을 기반으로 한 확장 가능한 비디오 코덱)

  • 노경택
    • Journal of the Korea Society of Computer and Information / v.5 no.3 / pp.29-32 / 2000
  • Layered video coding schemes allow video information to be transmitted as multiple bitstreams to achieve scalability. They are attractive in theory for two reasons. First, they naturally accommodate heterogeneity in networks and receivers in terms of client processing capability and network bandwidth. Second, they correspond to optimal utilization of the available bandwidth when several video quality levels are desired. In this paper we propose a scalable video codec architecture with motion estimation that is suitable for real-time audio and video communication over packet networks. The coding algorithm is compatible with ITU-T recommendation H.263+ and includes various techniques to reduce complexity. Fast motion estimation is performed at the H.263-compatible base layer and reused at higher layers, and perceptual macroblock skipping is performed at all layers before motion estimation. Error propagation from packet loss is avoided by periodically rebuilding a valid predictor in intra mode at each layer.

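A minimal sketch of the layered idea only: a coarse base layer plus enhancement layers that each carry the residual left by the layers below, so a receiver decodes as many layers as its bandwidth allows. The step sizes are assumptions, and motion estimation, perceptual macroblock skipping and H.263+ syntax are omitted.

```python
# Sketch of layered (scalable) coding: base layer plus refinement residuals.
import numpy as np

def encode_scalable(frame, steps=(16, 8, 4)):
    # frame: 2D float array; steps: quantizer step per layer, coarse to fine.
    layers, reconstruction = [], np.zeros_like(frame, dtype=float)
    for step in steps:
        residual = frame - reconstruction
        quantized = np.round(residual / step).astype(int)
        layers.append(quantized)
        reconstruction = reconstruction + quantized * step
    return layers

def decode_scalable(layers, steps=(16, 8, 4)):
    # Decoding fewer layers gives a coarser but still valid reconstruction.
    reconstruction = np.zeros_like(layers[0], dtype=float)
    for quantized, step in zip(layers, steps):
        reconstruction += quantized * step
    return reconstruction
```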