• Title/Summary/Keyword: 16K video

Search Results: 486

Rate-Distortion Based Selective Encoding in Distributed Video Coding (율-왜곡 기반 선택적 분산 비디오 부호화 기법)

  • Lee, Byung-Tak;Kim, Jae-Gon;Kim, Jin-Soo;Seo, Kwang-Deok
    • Journal of Broadcast Engineering, v.16 no.1, pp.123-132, 2011
  • Recently, DVC (Distributed Video Coding) has been receiving considerable attention as a low-complexity video encoding technique suited to applications with computation- and/or power-limited environments, and it is being actively studied to improve its coding efficiency. This paper proposes a rate-distortion based selective block encoding scheme. First, motion information is obtained while generating side information at the decoder and is returned to the encoder through the feedback channel; based on this information, the proposed method selectively encodes blocks according to a rate-distortion optimization. Experimental results show that the proposed scheme achieves up to a 2.2 dB PSNR gain over the existing scheme. Moreover, complexity can be reduced by encoding only those regions whose rate-distortion cost justifies it.
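
The selection rule this abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the Lagrangian form J = D + λ·R, the λ value, and the per-block distortion/rate estimates are all assumptions for the example.

```python
# Hypothetical sketch of rate-distortion based block selection:
# a block is encoded only when doing so lowers the Lagrangian cost
# J = D + lambda * R versus keeping the decoder's side information.

LAMBDA = 0.85  # Lagrange multiplier (illustrative value)

def rd_cost(distortion, rate, lam=LAMBDA):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate

def select_blocks(blocks):
    """Return indices of blocks worth encoding.

    Each block is a dict with estimated values:
      d_skip - distortion if the block is skipped (side info used)
      d_enc  - distortion after encoding the block
      r_enc  - bits needed to encode the block
    Skipping costs no rate, so J_skip = d_skip.
    """
    selected = []
    for i, b in enumerate(blocks):
        j_skip = rd_cost(b["d_skip"], 0.0)
        j_enc = rd_cost(b["d_enc"], b["r_enc"])
        if j_enc < j_skip:
            selected.append(i)
    return selected

blocks = [
    {"d_skip": 120.0, "d_enc": 10.0, "r_enc": 40.0},  # encoding pays off
    {"d_skip": 15.0, "d_enc": 5.0, "r_enc": 64.0},    # skipping is cheaper
]
print(select_blocks(blocks))  # [0]
```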

Video Reality Improvement Using Measurement of Emotion for Olfactory Information (후각정보의 감성측정을 이용한 영상실감향상)

  • Lee, Guk-Hee;Kim, ShinWoo
    • Science of Emotion and Sensibility, v.18 no.3, pp.3-16, 2015
  • Will orange scent enhance video reality if it is presented with a video that vividly shows orange juice? Or will a romantic scent improve video reality if it is presented along with a date scene? Whereas the former concerns reality improvement when concrete objects or places appear in a video, the latter concerns cases where they do not. This paper reviews previous research that tested diverse videos and scents to answer these two questions, and discusses implications, limitations, and future research directions. In particular, this paper focuses on measurement methods and results regarding the acceptability of olfactory information, perception of scent similarity, olfactory vividness and video reality, matching between scent and color (or color temperature), and the description of various scents using emotional adjectives. We expect this paper to help researchers and engineers who are interested in using scents to improve video reality.

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents, v.10 no.3, pp.9-16, 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
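
The first step the abstract mentions, dividing the input sequence into cuts with a color histogram, can be sketched as follows. This is an illustrative version under assumed details (bin count, L1 distance, threshold value); the paper's exact histogram and threshold are not given in the abstract.

```python
# Illustrative scene-cut detection by comparing successive color
# histograms: when the histogram difference between neighboring
# frames exceeds a threshold, a new cut begins.

def color_histogram(frame, bins=8, depth=256):
    """Normalized histogram of one channel's pixel values (0..depth-1)."""
    hist = [0] * bins
    for v in frame:
        hist[v * bins // depth] += 1
    total = len(frame) or 1
    return [h / total for h in hist]

def hist_distance(h1, h2):
    """L1 distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(frames, threshold=0.5):
    """Return frame indices where a new cut begins (index 0 included)."""
    cuts = [0]
    prev = color_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = color_histogram(frames[i])
        if hist_distance(prev, cur) > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two "scenes": dark frames followed by bright frames.
dark = [20] * 100
bright = [230] * 100
frames = [dark, dark, bright, bright]
print(detect_cuts(frames))  # [0, 2]
```

After cuts are found, an initial depth gradient would be assigned per cut and refined per frame, as the abstract describes.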

Design of MMT-based Broadcasting System for UHD Video Streaming over Heterogeneous Networks (이 기종 망에서의 UHD 비디오 전송을 위한 MMT 기반 방송 시스템 설계)

  • Sohn, YeJin;Cho, MinJu;Paik, JongHo
    • Journal of Broadcast Engineering, v.20 no.1, pp.16-25, 2015
  • Although demand for ultra-high-quality multimedia content is increasing, it is difficult to produce, encode, play, and transport such content under the existing broadcasting environment. For this reason, various technologies for UHD content have been developed to satisfy users' needs. In this paper, we propose a design for a two-part broadcasting system for UHD services. The transmitter part of the proposed system encodes a video hierarchically into several layered bitstreams and transports each bitstream over heterogeneous networks. The receiver part plays the received video by recombining the separated bitstreams. Using the heterogeneous networks, the proposed system can adaptively provide both HD and UHD content according to the user's reception conditions.

Prototype Design and Development of Intelligent Video Interview System for Online Recruitment (원격 온라인 인력 채용을 위한 지능형 동영상 면접시스템 설계 및 시작품 개발)

  • Cho, Jinhyung
    • Journal of Digital Convergence, v.16 no.2, pp.189-194, 2018
  • This study reflects the current trend of blind-hiring culture, a government initiative that focuses on job competency rather than educational credentials. To overcome the limitations of the existing document-oriented online recruitment process, we propose a system architecture for a video interview system, and we evaluate its effectiveness through a prototype and performance experiments based on it. The proposed online video interview system combines intelligent Web technology to enable customized job matching and remote job coaching, and is designed to reduce both recruitment cost and job seekers' opportunity cost. Based on the results of this study, commercialization of the proposed video interview system can be expected to provide a practical online recruitment solution for competency-based employment.

Channel-Adaptive Mobile Streaming Video Control over Mobile WiMAX Network (모바일 와이맥스망에서 채널 적응적인 모바일 스트리밍 비디오 제어)

  • Pyun, Jae-Young
    • Journal of the Institute of Electronics Engineers of Korea CI, v.46 no.5, pp.37-43, 2009
  • Streaming video service over wireless and mobile communication networks has recently received significant interest from both academia and industry. In particular, mobile WiMAX (IEEE 802.16e) provides high data rates and flexible Quality of Service (QoS) mechanisms, making it very attractive for mobile streaming. However, streaming video can be partially degraded in its macroblocks and/or slices owing to errors on OFDMA subcarriers, since compressed video sequences are generally sensitive to the error-prone channel conditions of wireless and mobile networks. In this paper, we introduce an OFDMA subcarrier-adaptive mobile streaming server based on cross-layer design. This streaming server substantially reduces the deterioration of streaming video carried on subcarriers with low power strength, without any modification of the existing schedulers, packet ordering/reassembly, or subcarrier allocation strategies in the base station.

A Study on Effective Stitching Technique of 360° Camera Image (360° 카메라 영상의 효율적인 스티칭 기법에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence, v.16 no.2, pp.335-341, 2018
  • This study investigates effective stitching techniques for video recorded with a dual-lens 360° camera composed of two fisheye lenses. First, we identified problems in the results of stitching with the camera's bundled program. We then searched for a stitching technique that is more efficient and closer to seamless by comparatively analyzing stitching results from Autopano Video Pro and Autopano Giga, professional stitching programs. The problems with the bundled program turned out to be horizontal and vertical distortion, exposure and color mismatch, and uneven stitching lines. The horizontal and vertical distortion could be corrected with the Automatic Horizon and Verticals tools of Autopano Video Pro and Autopano Giga, the exposure and color mismatch with Levels, Color, and Edit Color Anchors, and the stitching-line problem with the Mask function. We hope that, based on this study, 360° VR video content closer to seamless can be produced through efficient stitching of video recorded with dual-lens 360° cameras.

An effective transform hardware design for real-time HEVC encoder (HEVC 부호기의 실시간처리를 위한 효율적인 변환기 하드웨어 설계)

  • Jo, Heung-seon;Kumi, Fred Adu;Ryoo, Kwang-ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2015.10a, pp.416-419, 2015
  • In this paper, we propose an efficient transform hardware design for a real-time HEVC (High Efficiency Video Coding) encoder. An HEVC encoder determines the transform mode (4×4, 8×8, 16×16, 32×32) by comparing RD costs. Computing RD cost requires a significant amount of computation and time because it depends on the bit-rate and distortion obtained via transform, quantization, dequantization, and inverse transform. This paper therefore proposes a new transform mode decision method based on the sum of transform coefficients. The proposed hardware architecture is also implemented using only multiplexers, recursive adders/subtracters, and shifters to reduce computation. Compared with HM 10.0, the proposed mode decision method yields an increase of 0.096 in BD-PSNR, an increase of 0.057 in BD-Bitrate, and a 9.3% reduction in encoding time. The proposed hardware is implemented with 256K logic gates in a TSMC 130 nm process, and its maximum operating frequency is 200 MHz. At 140 MHz, the proposed hardware can support real-time encoding of 4K Ultra HD video at 60 fps.

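The coefficient-sum mode decision the abstract describes can be sketched as follows. This is a hypothetical illustration: the abstract does not specify how the sums are compared, so this sketch simply picks the candidate transform size with the smallest sum of absolute coefficients as a proxy for the full RD search.

```python
# Hypothetical sketch of a low-cost transform mode decision:
# instead of a full RD search over transform sizes (4x4 ... 32x32),
# pick the mode whose transform yields the smallest sum of absolute
# coefficients (a rough proxy for bit cost).

def abs_coeff_sum(coeffs):
    """Sum of absolute transform coefficients for one candidate mode."""
    return sum(abs(c) for c in coeffs)

def choose_transform_mode(candidates):
    """candidates: {mode_size: coefficient list}. Return cheapest mode."""
    return min(candidates, key=lambda m: abs_coeff_sum(candidates[m]))

# Illustrative coefficient lists per transform size.
candidates = {
    4:  [30, -12, 4, 1],
    8:  [25, -6, 2, 0, 1, 0, 0, 0],
    16: [40, 10, -9, 3, 2, 1, 1, 0],
}
print(choose_transform_mode(candidates))  # 8
```

Avoiding the transform/quantization/dequantization/inverse-transform chain for every candidate is what makes this decision cheap enough for a multiplexer/adder/shifter-only hardware datapath.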

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems, v.16 no.1, pp.6-29, 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers because it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Using the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers that perform the multimodal recognition. Moreover, the proposed technique proved robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
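
The score-level fusion step with missing modalities can be sketched as below. This is a minimal illustration under assumed details (a weighted average over whichever modalities are present); the paper's actual fusion rule and score scales are not given in the abstract, and the modality names and scores here are made up.

```python
# Minimal sketch of score-level fusion: each available modality's
# classifier produces per-identity match scores, and modalities that
# are missing at test time are simply left out of the weighted average.

def fuse_scores(modality_scores, weights=None):
    """Fuse per-modality score dicts {identity: score}.

    modality_scores maps modality name -> score dict, or None if the
    modality was not detected in the probe video.
    """
    available = {m: s for m, s in modality_scores.items() if s is not None}
    if not available:
        raise ValueError("no modality available")
    weights = weights or {m: 1.0 for m in available}
    fused = {}
    for m, scores in available.items():
        w = weights.get(m, 1.0)
        for identity, s in scores.items():
            fused[identity] = fused.get(identity, 0.0) + w * s
    total_w = sum(weights.get(m, 1.0) for m in available)
    return {k: v / total_w for k, v in fused.items()}

scores = {
    "frontal_face": {"alice": 0.9, "bob": 0.4},
    "left_ear": {"alice": 0.7, "bob": 0.6},
    "right_ear": None,  # modality missing at test time
}
fused = fuse_scores(scores)
best = max(fused, key=fused.get)
print(best)  # alice
```

Dropping absent modalities from the average, rather than substituting a default score, is one simple way a fusion stage stays robust when a probe video does not contain every view.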

The Study and Hypothesis of Realize AR Video Calling Method (효과적인 AR 영상통화 구현 방법을 위한 가설 방안과 연구)

  • Guo, Dawei;Chung, Jeanhun
    • Journal of Digital Convergence, v.16 no.9, pp.413-419, 2018
  • Nowadays, the smartphone has become an important communication medium integrated into people's lives. If callers could use helmet-mounted display (HMD) augmented reality techniques to add both users' facial expressions, appearance, and actions to the calling process, they would have a vivid, visualized sensory experience. Such a method could break the limitations of vision, so researching this technical problem can promote the development of the visual arts, which is meaningful. This paper selects and combines several existing technologies to set up two hypotheses for realizing AR video calling. By comparing and analyzing the two hypotheses, we identify their problems and propose design solutions to address them. We also use the case study method to present two cases supporting the conclusion that both hypotheses can be realized in the future. These technologies can bring more convenience and enjoyment to people's lives, and it can be expected that AR video calling will be successfully realized and will continue to develop in the future.