• Title/Summary/Keyword: 2D Video

Face Spoofing Attack Detection Using Spatial Frequency and Gradient-Based Descriptor

  • Ali, Zahid;Park, Unsang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.2
    • /
    • pp.892-911
    • /
    • 2019
  • Biometric recognition systems have been widely used for information security. Among the most popular biometric traits are the fingerprint and the face, owing to their high recognition accuracy. However, security systems that use face recognition as the login method are vulnerable to face-spoofing attacks that use a printed photo or a video of the valid user. In this study, we propose a fast and robust method to detect face-spoofing attacks based on the analysis of spatial-frequency differences between real and fake videos. We found that the effect of a spoofing attack stands out more prominently in certain regions of the 2D Fourier spectrum, and it is therefore sufficient to use the information from those regions to classify the input video or image as real or fake. We adopt a divide-conquer-aggregate approach: we first divide the frequency-domain image into local blocks, classify each local block independently, and then aggregate all the classification results by a weighted sum. The effectiveness of the method is demonstrated on two publicly available databases: 1) the Replay-Attack Database and 2) the CASIA Face Anti-Spoofing Database. Experimental results show that the proposed method provides state-of-the-art performance while processing fewer frames of each video.
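
As an illustration of the divide-conquer-aggregate scheme, here is a minimal Python sketch; the per-block energy statistic, the 4x4 grid, and the weights are placeholders for the authors' trained per-block classifiers and learned weighting, which the abstract does not specify:

```python
import numpy as np

def block_scores(frame_gray, grid=(4, 4)):
    """Split the 2D Fourier magnitude spectrum into local blocks and score each."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame_gray)))
    h, w = spectrum.shape
    bh, bw = h // grid[0], w // grid[1]
    scores = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = spectrum[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            # Placeholder per-block "classifier": log-energy of the block.
            scores.append(np.log1p(block).mean())
    return np.array(scores)

def classify(frame_gray, weights, threshold):
    """Aggregate the block scores by a weighted sum and threshold to real/fake."""
    s = block_scores(frame_gray)
    return float(weights @ s) > threshold  # True => real (assumed convention)
```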

Switched Digital Video for the Efficient Utilization of Bandwidth In Cable Systems (케이블방송의 효율적 주파수 활용을 위한 SDV 전송 기술)

  • Choi, Jin-Chul;Lee, Chae-Woo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.2
    • /
    • pp.305-318
    • /
    • 2011
  • Since switched digital video (SDV) delivers specific programs only to the subscribers who request them, it has attracted considerable interest from MSOs for its bandwidth efficiency. In North America, MSOs serve over 2.3 million households with SDV on their cable networks. In Korea, since demand for HD programs, high-speed Internet, VoD, and VoIP is rising noticeably, SDV is considered an alternative for saving bandwidth and managing it efficiently. In this paper, the characteristics, operating structure, and bandwidth savings of SDV are discussed, and the technical requirements for SDV are introduced. The channel-switching performance and stability of SDV are analyzed on a test-bed.
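
The bandwidth argument behind SDV lends itself to a back-of-the-envelope check: only the channels actually being watched in a service group need to be on the wire. All numbers below are illustrative assumptions, not figures from the paper:

```python
# Toy bandwidth comparison: always-on broadcast vs. SDV delivery.
total_channels = 200        # channels offered in the lineup (assumed)
watched_now = 60            # distinct channels being watched in a service group (assumed)
mbps_per_channel = 19.4     # per-program bit rate (assumed)

broadcast_bw = total_channels * mbps_per_channel  # every channel always transmitted
sdv_bw = watched_now * mbps_per_channel           # only requested channels transmitted

print(f"broadcast: {broadcast_bw:.0f} Mbps, SDV: {sdv_bw:.0f} Mbps, "
      f"saving: {100 * (1 - sdv_bw / broadcast_bw):.0f}%")
```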

A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 추출 및 식별 방법)

  • Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE
    • /
    • v.24 no.2
    • /
    • pp.529-535
    • /
    • 2020
  • As a key technology of the 4th industrial revolution, immersive 360-degree video content is drawing attention. The worldwide market for immersive 360-degree video content is projected to grow from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video content is distributed through illegal channels such as Webhard and Torrent, and the damage caused by illegal reproduction is increasing. The existing 2D video industry uses copyright-filtering technology to prevent such illegal distribution. The technical difficulty with immersive 360-degree videos is that they require ultra-high-quality pictures and merge images captured by two or more cameras into one frame, which creates distortion regions. There are also technical limitations such as the increase in feature-point data due to the ultra-high definition and the resulting processing-speed requirements. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper proposes a feature point extraction and identification technique that selects object-identification areas excluding regions with severe distortion, recognizes objects in those areas using deep learning, and extracts feature points from the identified object information. Compared with the previously proposed method that extracts feature points from the stitching area of immersive content, the proposed technique shows a clear performance gain.
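
A rough Python sketch of the selection step: keep a low-distortion band of the equirectangular (ERP) frame and extract features only there. The fixed latitude band and the ORB detector are stand-ins; the paper selects identification areas and recognizes objects with deep learning:

```python
import cv2
import numpy as np

def identification_area(erp_frame, lat_band=0.25):
    """Keep the central latitude band of an ERP frame, where stitching and
    pole distortion are mildest; drop the severely distorted top and bottom."""
    h = erp_frame.shape[0]
    top, bottom = int(h * lat_band), int(h * (1 - lat_band))
    return erp_frame[top:bottom, :]

def extract_features(erp_frame):
    """Detect feature points only inside the selected identification area."""
    area = identification_area(erp_frame)
    gray = cv2.cvtColor(area, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```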

Effect of Input Data Video Interval and Input Data Image Similarity on Learning Accuracy in 3D-CNN

  • Kim, Heeil;Chung, Yeongjee
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.2
    • /
    • pp.208-217
    • /
    • 2021
  • 3D-CNN is one of the deep learning techniques for learning from time-series data. However, three-dimensional learning can generate many parameters, which requires high-performance hardware or significantly affects learning speed. We use a 3D-CNN to learn hand gestures, find the parameters that yield the highest accuracy, and then analyze how the accuracy of the 3D-CNN varies with changes to the input data, without any structural changes to the network. First, we choose the interval of the input data, which adjusts the ratio of the stop interval to the gesture interval. Second, we obtain the corresponding inter-frame mean value by measuring and normalizing the similarity of images through inter-class 2D cross-correlation analysis. The experiments demonstrate that changes in the input data affect learning accuracy without structural changes to the 3D-CNN. In this paper, we propose these two methods for changing the input data, and the experimental results show that the input data can affect the accuracy of the model.
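
One plausible reading of the inter-frame similarity measure is zero-mean normalized 2D cross-correlation, sketched below; the exact normalization the authors use is not given in the abstract:

```python
import numpy as np

def ncc_2d(frame_a, frame_b):
    """Zero-mean normalized cross-correlation at zero lag; 1.0 = identical."""
    a = frame_a.astype(np.float64) - frame_a.mean()
    b = frame_b.astype(np.float64) - frame_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def mean_interframe_similarity(frames):
    """Average NCC over consecutive frame pairs in a gesture clip."""
    pairs = zip(frames[:-1], frames[1:])
    return float(np.mean([ncc_2d(a, b) for a, b in pairs]))
```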

The effects of emotional matching between video color-temperature and scent on reality improvement (영상의 색온도와 향의 감성적 일치가 영상실감 향상에 미치는 효과)

  • Lee, Guk-Hee;Li, Hyung-Chul O.;Ahn, ChungHyun;Ki, MyungSeok;Kim, ShinWoo
    • Journal of the HCI Society of Korea
    • /
    • v.10 no.1
    • /
    • pp.29-41
    • /
    • 2015
  • Technologies for video reality (e.g., 3D displays, vibration, surround sound) use various sensory inputs, and many of them are now commercialized. However, when it comes to the use of olfaction for video reality, there has not been much progress in either practical or academic respects. Because the olfactory sense is tightly associated with human emotion, proper use of this sense is expected to help achieve a high degree of video reality. This research tested the effect of a scent matched to a video's color temperature on reality improvement when the video contains no apparent object (e.g., coffee, flowers) that suggests a specific smell. To this end, we had participants rate 48 scents on a color-temperature scale of 1,500 K (warm) to 15,000 K (cold) and chose 8 scents (4 warm, 4 cold) that showed a clear correspondence with warm or cold color temperatures (Expt. 1). Then, after applying warm (3,000 K), neutral (6,500 K), or cold (14,000 K) color temperatures to images or videos, we presented warm or cold scents to participants while they rated reality improvement on a 7-point scale, depending on the relatedness of scent and color temperature (related, unrelated, neutral) (Expts. 2-3). The results showed that participants experienced greater reality when the scent and color temperature were related than when they were unrelated or neutral. This research has important practical implications, demonstrating that providing a scent related to the color temperature improves video reality even when there is no concrete object that suggests specific olfactory information.

A Wireless Video Streaming System for TV White Space Applications (TV 유휴대역 응용을 위한 무선 영상전송 시스템)

  • Park, Hyeongyeol;Ko, Inchang;Park, Hyungchul;Shin, Hyunchol
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.26 no.4
    • /
    • pp.381-388
    • /
    • 2015
  • In this paper, a wireless video streaming system is designed and implemented for TV white space (TVWS) applications. It consists of an RF transceiver module, a digital modem, a camera, and an LCD screen. A VGA-resolution video is captured by the camera, modulated by the modem, transmitted by the RF transceiver module, and finally displayed on a 2.6-inch LCD screen at the destination. The RF transceiver is based on a direct-conversion architecture. Image leakage is improved by low-pass filtering the LO, which successfully covers the TVWS. The DC-offset problem is solved by a current-steering technique that controls the common-mode level at the DAC output node. The output power of the transmitter and the minimum sensitivity of the receiver are +10 dBm and -82 dBm, respectively. The channel bandwidth is tunable among 6, 7, and 8 MHz according to regulations and standards. The digital modem is realized in a Kintex-7 FPGA. The data rate is 9 Mbps based on QPSK and 512-channel OFDM. A VGA video is successfully streamed over the air using the developed TVWS RF communication module.
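
A back-of-the-envelope check of the modem's data rate; QPSK, 512 subcarriers, and the 9 Mbps target come from the abstract, while the symbol rate and the share of data-bearing subcarriers are assumptions chosen to make the arithmetic land near 9 Mbps:

```python
# Rough OFDM data-rate estimate under stated assumptions.
bits_per_symbol = 2        # QPSK carries 2 bits per subcarrier symbol
n_subcarriers = 512        # FFT size from the abstract
used_fraction = 0.75       # assumed share of data-bearing subcarriers
symbol_rate = 11.7e3       # assumed OFDM symbols/s, incl. cyclic prefix overhead

rate = bits_per_symbol * n_subcarriers * used_fraction * symbol_rate
print(f"approx. {rate / 1e6:.1f} Mbps")  # ~9.0 Mbps with these assumptions
```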

Improvement of Rainfall Estimation according to the Calibration Bias of Dual-polarimetric Radar Variables (이중편파레이더 관측오차 보정에 따른 강수량 추정값 개선)

  • Kim, Hae-Lim;Park, Hye-Sook;Ko, Jeong-Seok
    • Journal of Korea Water Resources Association
    • /
    • v.47 no.12
    • /
    • pp.1227-1237
    • /
    • 2014
  • Dual-polarization radar can distinguish precipitation types and provides information not only on meteorological phenomena in the atmosphere but also on non-precipitation echoes; it can therefore improve radar estimates of rainfall. However, polarimetric measurements, which transmit vertically and horizontally vibrating waves simultaneously, contain a systematic bias of the radar itself, so calibrating this bias is necessary to improve quantitative precipitation estimation. In this study, the calibration bias of reflectivity (Z) and differential reflectivity ($Z_{DR}$) from the Bislsan dual-polarization radar is calculated using 2-Dimensional Video Disdrometer (2DVD) data, and the improvement in rainfall estimation obtained by applying the derived calibration bias is investigated. A total of 33 rainfall cases occurring in Daegu from 2011 to 2012 were selected. As a result, the calibration bias of Z is about -0.3 to 5.5 dB, and that of $Z_{DR}$ is about -0.1 to 0.6 dB. In most cases, the Bislsan radar observes Z and $Z_{DR}$ lower than the simulated variables. Comparing rainfall estimated from the dual-polarization radar with AWS rain gauges in Daegu before and after bias calibration, the mean bias fell from 1.69 to 1.54 mm/hr and the RMSE decreased from 2.54 to 1.73 mm/hr. Compared to surface rain gauges as ground truth, rainfall estimation improved by about 7-61%.
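
The calibration workflow can be sketched as follows: estimate an additive dB bias against the 2DVD-simulated values, apply it to the radar variable, and score the rainfall estimate against gauges. The data arrays are placeholders, not values from the study:

```python
import numpy as np

def calibration_bias(radar_db, simulated_db):
    """Additive bias (dB) of radar Z or Z_DR relative to 2DVD-simulated values."""
    return float(np.mean(np.asarray(simulated_db) - np.asarray(radar_db)))

def rmse(estimate_mm_hr, gauge_mm_hr):
    """Root-mean-square error of rainfall estimates against gauge truth."""
    diff = np.asarray(estimate_mm_hr) - np.asarray(gauge_mm_hr)
    return float(np.sqrt(np.mean(diff ** 2)))

# Usage with placeholder numbers:
z_radar = np.array([28.0, 31.5, 35.2])  # observed reflectivity (dBZ)
z_sim = np.array([30.1, 33.0, 37.5])    # 2DVD-simulated reflectivity (dBZ)
bias = calibration_bias(z_radar, z_sim)
z_calibrated = z_radar + bias           # bias-corrected reflectivity
```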

Acceleration of Viewport Extraction for Multi-Object Tracking Results in 360-degree Video (360도 영상에서 다중 객체 추적 결과에 대한 뷰포트 추출 가속화)

  • Heesu Park;Seok Ho Baek;Seokwon Lee;Myeong-jin Lee
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.3
    • /
    • pp.306-313
    • /
    • 2023
  • Realistic and graphics-based virtual reality content is built on 360-degree video, and viewport extraction driven by the viewer's intention or by an automatic recommendation function is essential. This paper designs a viewport extraction system based on multiple-object tracking in 360-degree videos and proposes the parallel computing structure needed for multiple viewport extraction. The viewport extraction process is parallelized with pixel-wise threads, via a 3D spherical-surface coordinate transformation from ERP coordinates and a 2D coordinate transformation of the 3D spherical coordinates within the viewport. The proposed structure was evaluated on the computation time for up to 30 simultaneous viewport extractions in aerial 360-degree video sequences, and confirmed up to 5240 times acceleration over the CPU-based computation, whose time grows in proportion to the number of viewports. When high-speed I/O or memory buffers that reduce ERP frame I/O time are used, viewport extraction can be accelerated by a further 7.82 times. The proposed parallelized viewport extraction structure can be applied to simultaneous multi-access services for 360-degree videos or virtual reality content, and to video summarization services for individual users.
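
A hedged sketch of the coordinate chain described above, vectorized over viewport pixels to mirror the pixel-wise parallelism: viewport pixel -> unit ray on the sphere -> ERP longitude/latitude -> ERP pixel. The pinhole viewport model, rotation convention, and nearest-neighbour sampling are assumptions:

```python
import numpy as np

def viewport_from_erp(erp, yaw, pitch, fov_deg=90.0, out_w=640, out_h=360):
    """Render a perspective viewport (yaw/pitch in radians) from an ERP frame."""
    H, W = erp.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels
    x, y = np.meshgrid(np.arange(out_w) - out_w / 2,
                       np.arange(out_h) - out_h / 2)
    v = np.stack([x, y, np.full_like(x, f, dtype=float)], axis=-1)
    v /= np.linalg.norm(v, axis=-1, keepdims=True)     # unit rays, camera frame
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw rotation
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch rotation
    v = v @ (Ry @ Rx).T                                # rotate into world frame
    lon = np.arctan2(v[..., 0], v[..., 2])             # [-pi, pi]
    lat = np.arcsin(np.clip(v[..., 1], -1, 1))         # [-pi/2, pi/2]
    col = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    row = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return erp[row, col]                               # nearest-neighbour sample
```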

A Viewpoint Switching Service for Multi-View Videos based on MPEG-4 System (MPEG-4 시스템 기반의 다시점 동영상 시점 전환 서비스)

  • Park, Kyung-Seok;Kim, Min-Jun;Kang, Sung-Hwan;Kim, Sung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.1
    • /
    • pp.65-69
    • /
    • 2010
  • The Moving Picture Experts Group (MPEG) is the organization founded in 1988 to establish standards for compressing and representing multimedia content. It has established technological standards such as MPEG-1, MPEG-2, MPEG-4, and MPEG-7. Among the 3D-video-related standards is the Multiview Profile, included in MPEG-2 Video in 1996. However, since the MPEG-2 Multiview Profile is a standard for compressing videos of an object from only two viewpoints, it is not enough to meet the requirements of multi-view video technology. In addition, it lacks viewpoint-switching capability, so it cannot provide services such as user interaction. This paper proposes a structure in which a specific viewpoint can be described for video switching, as an extension of the current MPEG-4 system.

Novel IME Instructions and their Hardware Architecture for Fast Search Algorithm (고속 탐색 알고리즘에 적합한 움직임 추정 전용 명령어 및 구조 설계)

  • Bang, Ho-Il;SunWoo, Myung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.12
    • /
    • pp.58-65
    • /
    • 2011
  • This paper presents an ASIP (Application-Specific Instruction-set Processor) for motion estimation that employs dedicated IME instructions and a programmable, reconfigurable hardware architecture for various video codecs such as H.264/AVC and MPEG-4. With the proposed instructions and a variable-point 2D SAD hardware accelerator, it can meet the real-time processing requirements of High-Definition (HD) video. With the SAD unit and its parallel operations using pattern information, the proposed IME instructions support not only full-search algorithms but also other fast-search algorithms. The hardware size is 25.5K gates for each Processing Element Group (PEG), which has 128 SAD Processing Elements (PEs). The proposed ASIP has been verified with the Synopsys Processor Designer and implemented with the Design Compiler using IBM 90 nm process technology. The hardware size is 453K gates for the IME unit, and the operating frequency is 188 MHz for real-time 1080p@30fps processing. The proposed ASIP reduces the hardware size by about 26% and the number of operation cycles by about 18%.
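
For reference, the block-matching computation that IME instructions accelerate is the sum of absolute differences (SAD) over a search window; below is a plain full-search version in Python, not the paper's hardware or its pattern-based fast search:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def full_search(cur, ref, bx, by, bsize=16, srange=8):
    """Best motion vector for the block at (bx, by) within +/- srange pixels."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best = (0, 0, sad(block, ref[by:by + bsize, bx:bx + bsize]))
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - bsize and 0 <= x <= ref.shape[1] - bsize:
                cost = sad(block, ref[y:y + bsize, x:x + bsize])
                if cost < best[2]:
                    best = (dx, dy, cost)
    return best  # (mv_x, mv_y, sad)
```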