• Title/Summary/Keyword: video filtering

Hardware Design of High Performance HEVC Deblocking Filter for UHD Videos (UHD 영상을 위한 고성능 HEVC 디블록킹 필터 설계)

  • Park, Jaeha;Ryoo, Kwangki
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.1
    • /
    • pp.178-184
    • /
    • 2015
  • This paper proposes a hardware architecture for a high-performance deblocking filter (DBF) in High Efficiency Video Coding (HEVC) for UHD (Ultra High Definition) video. The proposed architecture reduces processing time through a 4-stage pipeline with two filters and a parallel boundary-strength module. Clock gating within the 4-stage pipeline also makes the filter suitable for low-power design. A segmented memory architecture resolves the hazard that arises when a single-port SRAM is accessed, and the proposed filtering order shortens the delay incurred when storing data into the single-port SRAM at the pre-processing stage. The DBF hardware was designed in Verilog HDL and synthesized to 22k logic gates using a TSMC 0.18 um CMOS standard cell library. It can process UHD 8K (7680×4320) video at 60 fps with a 150 MHz clock, and the maximum operating frequency is 285 MHz. Analysis shows that the number of operation cycles per coding unit is improved by 32% over the previous design.
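
The boundary-strength decision computed by the parallel boundary-strength module can be illustrated in software. Below is a minimal Python sketch of the standard HEVC luma boundary-strength rule (BS = 2 across an intra block edge, BS = 1 when coded coefficients or sufficiently different motion are present, BS = 0 otherwise); the block data structure and the quarter-pel threshold are illustrative assumptions, not the paper's RTL.

```python
def boundary_strength(p, q):
    """Simplified HEVC luma boundary-strength decision for the edge
    between adjacent blocks p and q.

    p and q are assumed to be dicts with keys:
      'intra'  : bool, block coded in intra mode
      'coeffs' : bool, block has non-zero transform coefficients
      'mv'     : (mvx, mvy) in quarter-pel units
      'ref'    : reference picture index
    Illustrative model only, not the paper's hardware description.
    """
    # Strongest filtering across an intra-coded edge.
    if p['intra'] or q['intra']:
        return 2
    # Filter if either side carries residual coefficients.
    if p['coeffs'] or q['coeffs']:
        return 1
    # Filter if references differ or MVs differ by at least one integer sample.
    if p['ref'] != q['ref']:
        return 1
    if abs(p['mv'][0] - q['mv'][0]) >= 4 or abs(p['mv'][1] - q['mv'][1]) >= 4:
        return 1
    return 0

# Example: an inter block next to an intra block yields the strongest BS.
print(boundary_strength({'intra': False, 'coeffs': False, 'mv': (0, 0), 'ref': 0},
                        {'intra': True,  'coeffs': False, 'mv': (0, 0), 'ref': 0}))  # -> 2
```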

Current Status and Results of In-orbit Function, Radiometric Calibration and INR of GOCI-II (Geostationary Ocean Color Imager 2) on Geo-KOMPSAT-2B (정지궤도 해양관측위성(GOCI-II)의 궤도 성능, 복사보정, 영상기하보정 결과 및 상태)

  • Yong, Sang-Soon;Kang, Gm-Sil;Huh, Sungsik;Cha, Sung-Yong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_2
    • /
    • pp.1235-1243
    • /
    • 2021
  • The Geostationary Ocean Color Imager 2 (GOCI-II) on the Geo-KOMPSAT-2B (GK2B) satellite was developed as the mission successor of GOCI on COMS, which had been operated for around 10 years since its launch in 2010 to observe and monitor ocean color around the Korean peninsula. GOCI-II on GK2B was successfully launched in February 2020 to continue the detection, monitoring, quantification, and prediction of short- and long-term changes in the coastal ocean environment for marine science research and applications. GOCI-II has already completed IAC and IOT, including early in-orbit calibration, and has been handed over to the NOSC (National Ocean Satellite Center) in KHOA (Korea Hydrographic and Oceanographic Agency). Radiometric calibration was conducted periodically using the on-board solar calibration system of GOCI-II. The final calibrated gain and offset were applied and validated during IOT, and three video parameter sets for one day and 12 video parameter sets for a year were selected and transferred to NOSC for normal operation. Star-measurement-based INR (Image Navigation and Registration) navigation filtering and landmark-measurement-based image geometric correction were applied to meet all INR requirements, and the GOCI-II INR software was validated through the INR IOT. In this paper, the status and results of IOT, radiometric calibration, and INR of GOCI-II are analyzed and described.

Frame Rate Up-Conversion with Occlusion Detection Function (폐색영역탐지 기능을 갖는 프레임율 변환)

  • Kim, Nam-Uk;Lee, Yung-Lyul
    • Journal of Broadcast Engineering
    • /
    • v.20 no.2
    • /
    • pp.265-272
    • /
    • 2015
  • A new video frame rate up-conversion (FRUC) technique is presented that combines median filtering and motion estimation (ME) with an occlusion detection (OD) method. First, ME is performed to obtain motion vectors. The OD method is then used to refine the motion vectors in occluded regions. Because incorrect motion vectors are likely in occluded areas, median filtering, which depends less on the motion vectors, is applied there; in non-occluded areas, where the motion vectors are continuous and robust, BDMC (Bi-Directional Motion Compensated interpolation) is applied to obtain the interpolated image, since BDMC performs well when the motion field is continuous and robust. Experimental results show that the proposed algorithm outperforms the conventional approach, with an average PSNR (Peak Signal-to-Noise Ratio) gain of approximately 0.16 dB on the test sequences compared with BDMC alone.
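
As a rough illustration of the interpolation step, the following Python/NumPy sketch forms an intermediate frame by bi-directional motion compensation, averaging the forward- and backward-compensated predictions, and falls back to a median filter inside a given occlusion mask. The single global motion vector and the mask are illustrative assumptions; the paper's ME and OD stages are not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter, shift

def bdmc_interpolate(prev, nxt, mv_fwd, occlusion_mask):
    """Bi-directional motion-compensated interpolation of the mid frame.

    prev, nxt      : grayscale frames as 2-D float arrays
    mv_fwd         : (dy, dx) forward motion from prev to nxt
                     (one global vector for simplicity; the paper uses
                      per-block vectors)
    occlusion_mask : boolean array, True where occlusion was detected
    """
    dy, dx = mv_fwd
    # Move prev forward by half the motion, nxt backward by half the motion.
    fwd = shift(prev, (dy / 2.0, dx / 2.0), order=1, mode='nearest')
    bwd = shift(nxt, (-dy / 2.0, -dx / 2.0), order=1, mode='nearest')
    interp = 0.5 * (fwd + bwd)

    # In occluded areas the motion vector is unreliable, so replace the
    # prediction with a median-filtered blend that depends less on motion.
    fallback = median_filter(0.5 * (prev + nxt), size=5)
    interp[occlusion_mask] = fallback[occlusion_mask]
    return interp
```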

Design of Parallel Processing of Lane Detection System Based on Multi-core Processor (멀티코어를 이용한 차선 검출 병렬화 시스템 설계)

  • Lee, Hyo-Chan;Moon, Dai-Tchul;Park, In-hag;Heo, Kang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.9
    • /
    • pp.1778-1784
    • /
    • 2016
  • We improved performance by parallelizing the lane detection algorithms. Lane detection, as an intelligent driver-assistance function, warns the driver or corrects the steering in response to lane departure. Four algorithms are applied in sequence: Gaussian filtering to remove interference, grayscale conversion to simplify the images, Sobel edge detection to locate the lane regions, and the Hough transform to detect straight lines. Among parallelization methods, data-level parallelism is easy to design but suffers from a bottleneck; a high-speed data-level parallelism scheme is proposed to reduce this bottleneck and yields a noticeable performance improvement. Applying the parallel algorithm to actual black-box (dashcam) road video, the single-core throughput is approximately 30 frames/s, while octa-core data-level parallelism reaches approximately 100 frames/s and the best case approaches 150 frames/s.
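
The four-stage pipeline lends itself to a per-frame, data-level parallel sketch such as the one below, which distributes frames across worker processes with OpenCV. The kernel size, Sobel threshold, Hough settings, and the input file name are illustrative assumptions rather than the paper's tuned configuration.

```python
import cv2
import numpy as np
from multiprocessing import Pool

def detect_lanes(frame):
    """Gaussian filter -> grayscale -> Sobel edges -> Hough lines on one frame."""
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)           # remove interference/noise
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)        # simplify the image
    sobel = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)      # near-vertical lane edges
    edges = (np.abs(sobel) > 80).astype(np.uint8) * 255     # illustrative threshold
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=20)
    return [] if lines is None else lines.reshape(-1, 4).tolist()

def parallel_lane_detection(frames, workers=8):
    """Data-level parallelism: each worker processes whole frames independently."""
    with Pool(processes=workers) as pool:
        return pool.map(detect_lanes, frames)

if __name__ == "__main__":
    cap = cv2.VideoCapture("road_video.mp4")                # hypothetical input clip
    frames = []
    while len(frames) < 64:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    results = parallel_lane_detection(frames)
    print(f"detected lines in {len(results)} frames")
```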

Hardware Architecture and its Design of Real-Time Video Compression Processor for Motion JPEG2000 (Motion JPEG2000을 위한 실시간 비디오 압축 프로세서의 하드웨어 구조 및 설계)

  • 서영호;김동욱
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.53 no.1
    • /
    • pp.1-9
    • /
    • 2004
  • In this paper, we propose a hardware (H/W) architecture that can compress and reconstruct the input image in real time, and implement it on an FPGA platform using VHDL (VHSIC Hardware Description Language). All of the image processing elements required for both compression and reconstruction on an FPGA were considered, and each was mapped to hardware with a structure efficient for FPGA implementation. Since Motion JPEG2000 is the target application, the DWT (discrete wavelet transform), which transforms data from the spatial domain to the frequency domain, is used. The implemented hardware is separated into a datapath part and a control part. The datapath consists of image processing blocks and data processing blocks: the DWT kernel for DWT filtering, the quantizer/Huffman encoder, the inverse adder/buffer that adds the low-frequency coefficients to the high-frequency ones in the inverse DWT, and the Huffman decoder. There are also interface blocks for communicating with the external application environment and timing blocks for buffering between internal blocks. The overall operations of the designed hardware are image compression and reconstruction, performed field by field in synchronization with the A/D converter. The implemented hardware uses 54% (12,943) of the LABs (Logic Array Blocks) and 9% (28,352) of the ESB (Embedded System Block) resources of an ALTERA APEX20KC EP20K600CB652-7 FPGA and operates stably at a 70 MHz clock frequency, verifying real-time operation, i.e., processing 60 fields/s (30 frames/s).
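
To make the datapath concrete, the following Python sketch performs the same conceptual steps in software: a 2-D DWT, uniform quantization, and reconstruction via the inverse DWT. It uses PyWavelets for the transform; the wavelet choice and quantization step are illustrative assumptions, and the entropy (Huffman) coding stage is omitted.

```python
import numpy as np
import pywt

def compress_reconstruct(field, wavelet="haar", q_step=8.0):
    """One-level 2-D DWT, uniform quantization, and inverse DWT.

    field : 2-D float array (one video field)
    Returns the reconstructed field and the quantized subbands.
    """
    # Forward DWT: low-frequency approximation plus three detail subbands.
    cA, (cH, cV, cD) = pywt.dwt2(field, wavelet)

    # Uniform quantization of every subband (stand-in for the quantizer block;
    # the Huffman encoder/decoder of the paper is not modeled here).
    quantized = [np.round(band / q_step) for band in (cA, cH, cV, cD)]

    # Dequantize and run the inverse DWT (the inverse adder/buffer role).
    dq = [band * q_step for band in quantized]
    recon = pywt.idwt2((dq[0], (dq[1], dq[2], dq[3])), wavelet)
    return recon, quantized

if __name__ == "__main__":
    field = np.random.rand(480, 720) * 255.0   # synthetic field-sized input
    recon, _ = compress_reconstruct(field)
    mse = np.mean((field - recon[:480, :720]) ** 2)
    print(f"reconstruction MSE: {mse:.2f}")
```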

Parallel Gaussian Processes for Gait and Phase Analysis (보행 방향 및 상태 분석을 위한 병렬 가우스 과정)

  • Sin, Bong-Kee
    • Journal of KIISE
    • /
    • v.42 no.6
    • /
    • pp.748-754
    • /
    • 2015
  • This paper proposes a sequential state estimation model consisting of continuous and discrete variables, as a generalization of the all-discrete-state factorial HMM, and presents a gait motion model designed on this idea. The discrete state variable implements a Markov chain that models the gait dynamics, and for each state of the Markov chain a Gaussian process is defined over the space of the continuous variable. The Markov chain controls the switching among the Gaussian processes, each of which models the rotation or the various views of a gait state. A particle filter-based algorithm is then presented to provide an approximate filtering solution: given an input vector sequence over time, it finds a trajectory that follows one Gaussian process and occasionally switches to another. Experimental results show that the proposed model provides a very intuitive interpretation of video-based gait as a sequence of poses and a sequence of posture states.
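
A minimal sketch of the filtering idea, under a toy model: each particle carries a discrete Markov state and a continuous value, the discrete state switches according to a transition matrix, and particles are weighted against a per-state observation model. A simple per-state Gaussian likelihood stands in for the paper's per-state Gaussian processes; the transition matrix, dynamics, and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def switching_particle_filter(observations, trans, state_means, n_particles=500):
    """Particle filter over a discrete Markov state with a per-state
    continuous observation model.

    observations : 1-D array of scalar observations over time
    trans        : KxK Markov transition matrix for the discrete state
    state_means  : length-K means of the per-state observation model
                   (a stand-in for the per-state Gaussian processes)
    Returns the dominant discrete state at each time step.
    """
    K = trans.shape[0]
    states = rng.integers(0, K, size=n_particles)          # discrete component
    values = rng.normal(0.0, 1.0, size=n_particles)        # continuous component
    estimates = []
    for y in observations:
        # Propagate: sample the next discrete state, then diffuse the value.
        states = np.array([rng.choice(K, p=trans[s]) for s in states])
        values = 0.9 * values + 0.1 * state_means[states] + rng.normal(0, 0.1, n_particles)
        # Weight by the Gaussian likelihood of the observation for each particle.
        w = np.exp(-0.5 * (y - values) ** 2)
        w /= w.sum()
        # Resample and record the dominant discrete state.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        states, values = states[idx], values[idx]
        estimates.append(np.bincount(states, minlength=K).argmax())
    return estimates

# Example with two gait states whose observations center around 0 and 3.
trans = np.array([[0.95, 0.05], [0.05, 0.95]])
obs = np.concatenate([rng.normal(0, 0.3, 30), rng.normal(3, 0.3, 30)])
print(switching_particle_filter(obs, trans, state_means=np.array([0.0, 3.0])))
```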

A Study on Contents-based Retrieval using Wavelet (Wavelet을 이용한 내용기반 검색에 관한 연구)

  • 강진석;박재필;나인호;최연성;김장형
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.5
    • /
    • pp.1051-1066
    • /
    • 2000
  • With recent advances in digital encoding technology and computing power, large amounts of multimedia information such as images, graphics, audio, and video are used in multimedia systems over the Internet. Consequently, diverse retrieval mechanisms are required for users to search for specific information stored in multimedia systems, and content-based retrieval is generally preferred over text-based keyword retrieval. In this paper, we propose a new content-based indexing and search algorithm that aims for both high efficiency and high retrieval performance. To achieve these objectives, the proposed algorithm first classifies images through a pre-processing stage of edge extraction, range division, and multiple filtering, and then searches for target images using the spatial and textural characteristics of the colors extracted in that stage. In addition, we describe simulation results of search requests and retrieval outputs for several company trademark images using the proposed wavelet-based content-based retrieval algorithm.
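
As a rough software analogue of the indexing step, the sketch below extracts a compact wavelet signature per image (subband energies of a two-level DWT plus a coarse color histogram) and ranks stored images by Euclidean distance to a query signature. The feature choice and distance metric are illustrative assumptions, not the paper's exact descriptors.

```python
import numpy as np
import pywt
import cv2

def wavelet_signature(image_bgr, levels=2, wavelet="haar"):
    """Compact signature: DWT subband energies (texture) + coarse HSV histogram (color)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(float)
    coeffs = pywt.wavedec2(gray, wavelet, level=levels)
    # Energy of the approximation band and of each detail subband.
    energies = [np.mean(coeffs[0] ** 2)]
    for (cH, cV, cD) in coeffs[1:]:
        energies += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [8, 4], [0, 180, 0, 256]).flatten()
    hist /= hist.sum() + 1e-9
    return np.concatenate([np.log1p(energies), hist])

def rank_by_similarity(query, database):
    """Return database keys ordered by signature distance to the query image."""
    q = wavelet_signature(query)
    dists = {name: np.linalg.norm(q - wavelet_signature(img)) for name, img in database.items()}
    return sorted(dists, key=dists.get)
```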

Autonomous Battle Tank Detection and Aiming Point Search Using Imagery (영상정보에 기초한 전차 자율탐지 및 조준점탐색 연구)

  • Kim, Jong-Hwan;Jung, Chi-Jung;Heo, Mira
    • Journal of the Korea Society for Simulation
    • /
    • v.27 no.2
    • /
    • pp.1-10
    • /
    • 2018
  • This paper presents autonomous detection and aiming-point computation for a battle tank using RGB images. The maximally stable extremal regions (MSER) algorithm was implemented to find features of the tank, which are matched against frames extracted from streaming video to determine the region of interest where the tank is present. A median filter was applied to remove noise in the region of interest and reduce the tank's camouflage effects. For tank segmentation, k-means clustering was used to autonomously distinguish the tank from its background, and the erosion and dilation operations of morphology were applied to extract the tank shape without noise and to generate a binary image with 1 for the tank and 0 for the background. Sobel edge detection was then used to measure the outline of the tank, from which the aiming point at the center of the tank was calculated. For performance measurement, accuracy, precision, recall, and F-measure were computed from the confusion matrix, yielding 91.6%, 90.4%, 85.8%, and 88.1%, respectively.
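
The segmentation and aiming-point steps map naturally to OpenCV primitives, as in the hedged sketch below; the ROI is assumed to be given by the earlier MSER matching, and the cluster count, kernel size, and the darker-cluster heuristic are illustrative assumptions.

```python
import cv2
import numpy as np

def aiming_point(roi_bgr, k=2):
    """Segment the tank inside a given ROI and return the aiming point (cx, cy).

    Steps mirror the described pipeline: median filter, k-means segmentation,
    morphological erosion/dilation, Sobel outline, centroid of the outline.
    """
    denoised = cv2.medianBlur(roi_bgr, 5)                       # suppress noise/camouflage
    pixels = denoised.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    # Assume the darker cluster is the tank (illustrative heuristic).
    tank_label = int(np.argmin(centers.sum(axis=1)))
    mask = (labels.reshape(denoised.shape[:2]) == tank_label).astype(np.uint8) * 255

    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)                # remove small speckles
    mask = cv2.dilate(mask, kernel, iterations=1)               # restore the tank body

    gx = cv2.Sobel(mask, cv2.CV_64F, 1, 0, ksize=3)             # outline of the shape
    gy = cv2.Sobel(mask, cv2.CV_64F, 0, 1, ksize=3)
    outline = np.hypot(gx, gy) > 0

    ys, xs = np.nonzero(outline)
    if len(xs) == 0:
        return None
    return int(xs.mean()), int(ys.mean())                       # centroid as aiming point
```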

Automatic Person Identification using Multiple Cues

  • Swangpol, Danuwat;Chalidabhongse, Thanarat
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2005.06a
    • /
    • pp.1202-1205
    • /
    • 2005
  • This paper describes a method for vision-based person identification that can detect, track, and recognize a person from video using multiple cues: height and clothing colors. The method requires neither a constrained target pose nor a fully frontal face image to identify the person. First, the system, which is connected to a pan-tilt-zoom camera, detects the target using motion detection and a human cardboard model. The system keeps tracking the moving target while trying to determine whether it is a human and, if so, which of the registered persons in the database it is. To segment the moving target from the background scene, we employ a variant of background subtraction together with some spatial filtering. Once the target is segmented, it is aligned with the generic human cardboard model to verify that it is a human. If so, the cardboard model is also used to segment the body parts and obtain salient features such as the head, torso, and legs. The whole-body silhouette is also analyzed to obtain shape information such as height and slimness. These multiple cues (currently shirt color, trouser color, and body height) are then used to recognize the target through a supervised self-organization process. We preliminarily tested the system on a set of 5 subjects with multiple sets of clothes. The recognition rate is 100% when a person wears clothes that were learned beforehand, but the system fails when a person wears new clothes, which indicates that height alone is not sufficient to classify persons. We plan to extend the work by adding more cues such as skin color and face recognition, utilizing the zoom capability of the camera to obtain a high-resolution view of the face, and then evaluate the system with more subjects.
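
The detection and cue-extraction front end can be approximated in a few lines of OpenCV, as in the sketch below: a background-subtraction mask isolates the moving target, the bounding-box height serves as a crude height cue, and mean shirt/trouser colors are sampled from fixed body proportions. The MOG2 subtractor and the hard-coded proportions are illustrative assumptions, not the paper's cardboard-model method.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

def extract_cues(frame_bgr):
    """Return (height_px, shirt_color, trouser_color) for the largest moving blob,
    or None if nothing is detected."""
    fg = subtractor.apply(frame_bgr)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)     # drop shadow pixels
    fg = cv2.medianBlur(fg, 5)                                  # simple spatial filtering
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))

    # Crude body proportions standing in for the cardboard model:
    # torso roughly 20-50% of the height, legs roughly 55-90%.
    torso = frame_bgr[y + int(0.2 * h): y + int(0.5 * h), x: x + w]
    legs = frame_bgr[y + int(0.55 * h): y + int(0.9 * h), x: x + w]
    shirt_color = torso.reshape(-1, 3).mean(axis=0) if torso.size else None
    trouser_color = legs.reshape(-1, 3).mean(axis=0) if legs.size else None
    return h, shirt_color, trouser_color
```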

Online Face Pose Estimation based on A Planar Homography Between A User's Face and Its Image (사용자의 얼굴과 카메라 영상 간의 호모그래피를 이용한 실시간 얼굴 움직임 추정)

  • Koo, Deo-Olla;Lee, Seok-Han;Doo, Kyung-Soo;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.4
    • /
    • pp.25-33
    • /
    • 2010
  • In this paper, we propose a simple and efficient algorithm for head pose estimation using a single camera. First, four subimages are extracted from the camera image as face feature templates. The templates are then tracked by Kalman filtering, and the camera projection matrix is computed from the projective mapping between the templates and their coordinates in the 3D coordinate system. The user's face pose is then estimated from the projective mapping (planar homography) between the user's face and the image plane. The accuracy and robustness of the technique are verified by experimental results on several real video sequences.
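
A minimal sketch of the pose-from-homography idea, assuming four tracked template centers, known 2-D face-plane coordinates of those points, and a known camera intrinsic matrix: the homography is estimated with cv2.findHomography and decomposed into rotation and translation. The point values and intrinsics below are illustrative assumptions, and the Kalman tracking of the templates is not reproduced here.

```python
import cv2
import numpy as np

def pose_from_templates(face_pts, image_pts, K):
    """Estimate head rotation/translation from a planar homography.

    face_pts  : 4x2 coordinates of the feature templates on the face plane (z = 0)
    image_pts : 4x2 tracked template centers in the current image
    K         : 3x3 camera intrinsic matrix
    """
    H, _ = cv2.findHomography(face_pts, image_pts)
    # For a planar scene, H = K [r1 r2 t]; recover the columns.
    A = np.linalg.inv(K) @ H
    scale = 1.0 / np.linalg.norm(A[:, 0])
    r1, r2, t = scale * A[:, 0], scale * A[:, 1], scale * A[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    # Re-orthonormalize via SVD to obtain a valid rotation matrix.
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    return R, t

# Illustrative values: a 100x100 face-plane square and its tracked image positions.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
face_pts = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=np.float32)
image_pts = np.array([[300, 200], [380, 205], [375, 290], [295, 285]], dtype=np.float32)
R, t = pose_from_templates(face_pts, image_pts, K)
print(np.round(R, 3), np.round(t, 2))
```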