• Title/Summary/Keyword: 연산 지도 (computation map)


Motion Estimation and Mode Decision Algorithm for Very Low-complexity H.264/AVC Video Encoder (초저복잡도 H.264 부호기의 움직임 추정 및 모드 결정 알고리즘)

  • Yoo Youngil;Kim Yong Tae;Lee Seung-Jun;Kang Dong Wook;Kim Ki-Doo
    • Journal of Broadcast Engineering
    • /
    • v.10 no.4 s.29
    • /
    • pp.528-539
    • /
    • 2005
  • H.264 has been adopted as the video codec for various multimedia services such as DMB and next-generation DVD because of its superior coding performance. However, the reference codec of the standard, the Joint Model (JM), contains quite a few algorithms that are too complex for resource-constrained embedded environments. This paper introduces a very low-complexity H.264 encoding algorithm applicable to such embedded environments. The proposed algorithm was realized by restricting some coding tools, on the condition that this should not cause severe degradation of RD performance, and by adding a few early-termination and bypass conditions to the motion estimation and mode decision processes. When encoding a 7.5 fps QCIF sequence at 64 kbps with the proposed algorithm, the encoder yields PSNRs about 0.4 dB lower than the standard JM, but requires only 15% of the computational complexity and drastically lowers the required memory and power consumption. By porting the proposed H.264 codec to a PDA with an Intel PXA255 processor, we verified the feasibility of H.264-based MMS (Multimedia Messaging Service) on a PDA.
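
The abstract hinges on early-termination and bypass conditions inside the motion estimation loop. A minimal sketch of that general idea in Python (not the paper's actual conditions), assuming a SAD block cost and a hypothetical good-enough threshold `t_early`:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def motion_search_early_exit(cur_block, ref_frame, x, y, search_range=8, t_early=256):
    """Full search over a small window with a hypothetical early exit:
    stop as soon as a candidate SAD drops below t_early."""
    h, w = cur_block.shape
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + h > ref_frame.shape[0] or rx + w > ref_frame.shape[1]:
                continue
            cost = sad(cur_block, ref_frame[ry:ry + h, rx:rx + w])
            if cost < best_cost:
                best_mv, best_cost = (dy, dx), cost
                if best_cost < t_early:  # early termination: accept a good-enough match
                    return best_mv, best_cost
    return best_mv, best_cost
```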

Incremental Frequent Pattern Detection Scheme Based on Sliding Windows in Graph Streams (그래프 스트림에서 슬라이딩 윈도우 기반의 점진적 빈발 패턴 검출 기법)

  • Jeong, Jaeyun;Seo, Indeok;Song, Heesub;Park, Jaeyeol;Kim, Minyeong;Choi, Dojin;Bok, Kyoungsoo;Yoo, Jaesoo
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.2
    • /
    • pp.147-157
    • /
    • 2018
  • Recently, with the advancement of network technologies and the spread of IoT and social network services, large amounts of graph stream data have been generated. As the relationships between objects in a graph stream change dynamically, studies have been conducted to detect or analyze these changes. In this paper, we propose a scheme that incrementally detects frequent patterns by reusing the frequent-pattern information detected in previous sliding windows. The proposed scheme computes, for each frequent pattern detected in previous sliding windows, a value indicating for how many future sliding windows the pattern will remain frequent. Using these values, the scheme reduces the overall amount of computation by performing only the necessary calculations in the next sliding window. In addition, only patterns that are connected to one another are recognized as a single pattern, so that only the more significant patterns are detected. We conduct various performance evaluations to show the superiority of the proposed scheme; it is faster than an existing similar scheme when the amount of duplicated data is large.
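
To make the incremental idea concrete: rather than recounting the whole window after each slide, only the entering and leaving batches are processed and the pattern counts are carried over. A toy sketch treating patterns as opaque keys (the paper's per-pattern "remains frequent for N future windows" bookkeeping is omitted, and the threshold is illustrative):

```python
from collections import Counter

def slide(window, counts, incoming, outgoing):
    """Incrementally update pattern counts as the window slides:
    only the batch leaving and the batch entering are touched."""
    for p in outgoing:
        counts[p] -= 1
        if counts[p] == 0:
            del counts[p]
    counts.update(incoming)
    return window[len(outgoing):] + list(incoming), counts

window = list("abab")
counts = Counter(window)
window, counts = slide(window, counts, incoming=list("bbcc"), outgoing=list("ab"))
frequent = {p for p, c in counts.items() if c >= 3}  # support threshold 3 is illustrative
print(frequent)  # {'b'}
```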

Development of A Recovery Algorithm for Sparse Signals based on Probabilistic Decoding (확률적 희소 신호 복원 알고리즘 개발)

  • Seong, Jin-Taek
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.5
    • /
    • pp.409-416
    • /
    • 2017
  • In this paper, we consider a framework of compressed sensing over finite fields. Each measurement sample is obtained as the inner product of a row of a sensing matrix and a sparse signal vector. The recovery algorithm proposed in this study, based on probabilistic decoding, is used to find a solution of the compressed sensing problem. Until now, compressed sensing theory has dealt with real- or complex-valued systems, but when the original real or complex signals are processed, information is lost through discretization. The motivation of this work lies in efforts to solve inverse problems for discrete signals. The framework proposed in this paper uses a parity-check matrix of low-density parity-check (LDPC) codes, developed in coding theory, as the sensing matrix. We develop a stochastic algorithm to reconstruct sparse signals over a finite field. Unlike LDPC decoding as published in the existing coding literature, we design an iterative algorithm that uses the probability distribution of the sparse signal. With the proposed recovery algorithm, reconstruction performance improves as the size of the finite field increases. Since compressed sensing performs well even with low-density sensing matrices such as parity-check matrices, the framework is expected to be actively used in applications involving discrete signals.
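
A toy version of the measurement model described above: over GF(q), each sample is the inner product of a sparse (LDPC-like) sensing row with a k-sparse signal, taken mod q. The recovery step below is a naive brute-force placeholder, not the paper's probabilistic decoder; the sizes and the density 0.3 are invented:

```python
import itertools
import numpy as np

q, n, m, k = 7, 8, 5, 1  # field size, signal length, measurements, sparsity (toy values)
rng = np.random.default_rng(0)

# sparse sensing matrix over GF(q), in the spirit of an LDPC parity-check matrix
A = (rng.random((m, n)) < 0.3).astype(int) * rng.integers(1, q, size=(m, n))
x = np.zeros(n, dtype=int)
x[rng.choice(n, size=k, replace=False)] = rng.integers(1, q, size=k)
y = A.dot(x) % q  # each measurement: inner product of a sensing row and the signal, mod q

# brute-force recovery placeholder: test every k-sparse candidate over GF(q)
for support in itertools.combinations(range(n), k):
    for vals in itertools.product(range(1, q), repeat=k):
        z = np.zeros(n, dtype=int)
        z[list(support)] = vals
        if np.array_equal(A.dot(z) % q, y):
            print("candidate solution:", z)
```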

Virtual core point detection and ROI extraction for finger vein recognition (지정맥 인식을 위한 가상 코어점 검출 및 ROI 추출)

  • Lee, Ju-Won;Lee, Byeong-Ro
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.3
    • /
    • pp.249-255
    • /
    • 2017
  • Finger vein recognition is a technology that acquires a finger vein image by illuminating the finger with infrared light and authenticates a person through processes such as feature extraction and matching. To recognize a finger vein, a 2D mask-based two-dimensional convolution can be used to detect the finger edges, but it takes too much computation time when applied on a low-cost microprocessor or microcontroller. To solve this problem and improve the recognition rate, this study proposes a method that extracts the region of interest based on virtual core points and uses moving-average filtering based on thresholding the absolute difference between pixels, without 2D convolution or 2D masks. To evaluate the proposed method, 600 finger vein images were used to compare the edge extraction speed and ROI extraction accuracy of the proposed and existing methods. The comparison showed that the processing speed of the proposed method was at least twice that of the existing methods, and its ROI extraction accuracy was 6% higher. From these results, the proposed method is expected to offer high processing speed and a high recognition rate when applied on inexpensive microprocessors.
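
The key computational claim is that a 1-D moving average plus a pixel-difference threshold replaces 2-D convolution for edge detection. A minimal sketch along one image row, with an invented threshold value:

```python
import numpy as np

def smooth_row(row, win=5):
    """Moving-average filter along a single image row (1-D only, no 2-D mask)."""
    return np.convolve(row.astype(np.float64), np.ones(win) / win, mode="same")

def edge_positions(row, thresh=20.0):
    """Flag edge candidates where the absolute difference between
    neighboring smoothed pixels exceeds a threshold."""
    d = np.abs(np.diff(smooth_row(row)))
    return np.where(d > thresh)[0]

row = np.array([10] * 20 + [200] * 20, dtype=np.uint8)  # synthetic step edge
print(edge_positions(row))
```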

Thermodynamics-Based Weight Encoding Methods for Improving Reliability of Biomolecular Perceptrons (생체분자 퍼셉트론의 신뢰성 향상을 위한 열역학 기반 가중치 코딩 방법)

  • Lim, Hee-Woong;Yoo, Suk-I.;Zhang, Byoung-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.12
    • /
    • pp.1056-1064
    • /
    • 2007
  • Biomolecular computing is a new computing paradigm that uses biomolecules such as DNA for information representation and processing. The huge number of molecules in a small volume and their innate massive parallelism inspired a novel computation method, and various computation models and molecular algorithms have been developed for problem solving. Meanwhile, the use of biomolecules for information processing supports the possibility of DNA computing as an application to biological problems; it has potential as an analysis tool for biochemical information such as gene expression patterns. In this context, a DNA computing-based model of a biomolecular perceptron was previously proposed, together with the results of its experimental implementation. The weight encoding and weighted-sum operation, the main components of a biomolecular perceptron, are based on competitive hybridization reactions between the input molecules and weight-encoding probe molecules. However, thermodynamic symmetry of the competitive hybridizations is assumed, so errors in the weight representation can arise depending on the probe species in use. Here we suggest a generalized model of hybridization reactions that accounts for asymmetric thermodynamics in competitive hybridizations, and we present a weight encoding method for the reliable implementation of a biomolecular perceptron based on this model. We compare the accuracy of our weight encoding method with that of the previous one via computer simulations and present the conditions on probe composition that satisfy a given error limit.
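
A small numerical illustration of the asymmetry problem (all values invented): under a simple competitive-binding approximation, the fraction of input captured by each probe scales with K_i·[probe_i], so equal probe concentrations encode equal weights only when the equilibrium constants match; rescaling concentrations by 1/K_i restores the intended weights:

```python
def bound_fractions(probe_conc, K):
    """Fraction of input bound by each probe under a simple
    competitive-hybridization approximation (proportional to K_i * [probe_i])."""
    shares = [k * c for k, c in zip(K, probe_conc)]
    total = sum(shares)
    return [s / total for s in shares]

print(bound_fractions([1.0, 1.0], [1e6, 1e6]))        # symmetric K: [0.5, 0.5]
print(bound_fractions([1.0, 1.0], [1e6, 3e6]))        # asymmetric K skews weights: [0.25, 0.75]
print(bound_fractions([1.0, 1.0 / 3.0], [1e6, 3e6]))  # 1/K_i rescaling restores [0.5, 0.5]
```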

Application of Self-Adaptive Meta-Heuristic Optimization Algorithm for Muskingum Flood Routing (Muskingum 홍수추적을 위한 자가적응형 메타 휴리스틱 알고리즘의 적용)

  • Lee, Eui Hoon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.7
    • /
    • pp.29-37
    • /
    • 2020
  • Meta-heuristic optimization algorithms have been developed to solve problems involving the complex nonlinearities that occur in natural phenomena, and various studies have examined the applicability of these algorithms. The self-adaptive vision correction algorithm (SAVCA) showed excellent performance on mathematical benchmark problems, but it had not yet been applied to complex engineering problems, so its application process needs to be examined. In this study, the recently developed SAVCA was applied to an advanced Muskingum flood routing model (ANLMM-L) to examine its applicability and application process. First, initial solutions were generated by the SAVCA, and their fitness was calculated by ANLMM-L. A new value, selected by local and global search, was fed back into the SAVCA. A new solution was then generated, and ANLMM-L was applied again to calculate its fitness. The final result was obtained by comparing the new solution with the existing solutions and keeping the improvement. The sum of squares (SSQ) was used to calculate the error between the observed and computed runoff, and the results were compared with those of current models. The SAVCA, which showed excellent performance in the Muskingum flood routing model, is expected to perform well on a range of engineering problems.
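
For reference, the classical linear Muskingum routing step and the SSQ fitness the abstract mentions; ANLMM-L is a nonlinear extension, so this sketch is only the baseline form whose parameters (K and x) an optimizer such as the SAVCA would calibrate:

```python
def muskingum_route(inflow, K, x, dt=1.0):
    """Linear Muskingum routing, S = K*(x*I + (1-x)*O), discretized as
    O[t] = C0*I[t] + C1*I[t-1] + C2*O[t-1]."""
    denom = 2.0 * K * (1.0 - x) + dt
    C0 = (dt - 2.0 * K * x) / denom
    C1 = (dt + 2.0 * K * x) / denom
    C2 = (2.0 * K * (1.0 - x) - dt) / denom
    out = [inflow[0]]  # assume initial outflow equals initial inflow
    for t in range(1, len(inflow)):
        out.append(C0 * inflow[t] + C1 * inflow[t - 1] + C2 * out[-1])
    return out

def ssq(observed, computed):
    """Sum of squared errors -- the fitness minimized during calibration."""
    return sum((o - c) ** 2 for o, c in zip(observed, computed))

inflow = [22, 23, 35, 71, 103, 111, 109, 100]  # illustrative inflow hydrograph
print(ssq(inflow, muskingum_route(inflow, K=1.5, x=0.25, dt=1.0)))
```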

Implementation of Parallel Local Alignment Method for DNA Sequence using Apache Spark (Apache Spark을 이용한 병렬 DNA 시퀀스 지역 정렬 기법 구현)

  • Kim, Bosung;Kim, Jinsu;Choi, Dojin;Kim, Sangsoo;Song, Seokil
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.10
    • /
    • pp.608-616
    • /
    • 2016
  • The Smith-Waterman (SW) algorithm is a local alignment algorithm and one of the important operations in DNA sequence analysis. The SW algorithm finds the optimal local alignment with respect to the scoring system being used, but it demands a long execution time. To address this, several methods that perform SW in a distributed and parallel manner have been proposed. ADAM, a distributed and parallel processing framework for DNA sequences, provides a parallel SW implementation. However, ADAM's parallel SW does not take into account that SW is a dynamic programming method, which limits its performance. In this paper, we propose a method to enhance the parallel SW of ADAM. The proposed parallel SW (PSW) runs in two phases. In the first phase, the PSW splits a DNA sequence into partitions and assigns them to multiple nodes; the original Smith-Waterman algorithm is then performed in parallel on each node. In the second phase, the PSW estimates the portions of the sequence that must be recalculated, and these portions are recalculated in parallel on each node. In experiments, we compare the proposed PSW with ADAM's parallel SW to show the superiority of the PSW.
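
The per-partition work in the first phase is the ordinary Smith-Waterman recurrence; a compact single-node version (linear gap penalty, illustrative scores) of the kind each Spark task would run on its partition:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Plain Smith-Waterman local alignment score via dynamic programming."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # match / mismatch
                          H[i - 1][j] + gap,    # gap in b
                          H[i][j - 1] + gap)    # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))
```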

Joint Quality Control of MPEG-2 Video Programs for Digital Broadcasting Services (디지털 방송 서비스를 위한 MPEG-2 비디오 프로그램들의 결합 화질 제어)

  • 홍성훈;김성대
    • Journal of Broadcast Engineering
    • /
    • v.3 no.1
    • /
    • pp.69-84
    • /
    • 1998
  • In digital broadcasting services such as digital satellite TV, cable TV, and digital terrestrial TV, several video programs are compressed with MPEG-2 and then transmitted simultaneously over a conventional CBR (constant bit rate) broadcast channel. In this paper, we propose a joint quality control scheme that accurately controls the relative picture quality among the video programs, achieved by simultaneously controlling the video encoders to generate VBR (variable bit rate) compressed video streams. Our quality control scheme prevents video buffer overflow and underflow through a total target bit allocation process, and it exactly controls the relative picture quality, in terms of PSNR (peak signal-to-noise ratio), between programs requiring higher picture quality and the others through a rate-distortion modification. Furthermore, we present a rate-distortion estimation method for MPEG-2 video, which is the basis of our joint quality control, and verify its performance by experiments. The most attractive features of this estimation method are: 1) its computational complexity is low, because the main operation is computing the histogram of the DCT coefficients input to the quantizer; and 2) its estimates are accurate enough to be applied to practical MPEG-2 video coding applications. Simulation results show that the proposed joint quality control scheme accurately controls the relative picture quality among the video programs transmitted over a single channel, and provides more consistent and higher picture quality than an independent coding scheme that encodes each program separately.
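
One way to read the total target bit allocation step: the CBR channel budget is split across the jointly coded programs according to their estimated coding complexity and the desired relative quality. The weighting rule below is an invented stand-in, not the paper's rate-distortion model:

```python
def allocate_bits(total_bits, complexities, quality_weights):
    """Split a fixed channel budget among jointly coded programs;
    higher estimated complexity or a higher relative-quality target
    draws proportionally more bits (illustrative rule only)."""
    shares = [c * w for c, w in zip(complexities, quality_weights)]
    s = sum(shares)
    return [total_bits * share / s for share in shares]

# three programs share a 15 Mbps channel; program 0 is given a higher quality target
print(allocate_bits(15_000_000, [1.0, 1.2, 0.8], [1.5, 1.0, 1.0]))
```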


Voice Activity Detection using Motion and Variation of Intensity in The Mouth Region (입술 영역의 움직임과 밝기 변화를 이용한 음성구간 검출 알고리즘 개발)

  • Kim, Gi-Bak;Ryu, Je-Woong;Cho, Nam-Ik
    • Journal of Broadcast Engineering
    • /
    • v.17 no.3
    • /
    • pp.519-528
    • /
    • 2012
  • Voice activity detection (VAD) is generally performed by extracting features from the acoustic signal and applying a decision rule. The performance of such VAD algorithms, driven by the input acoustic signal, depends strongly on the acoustic noise. When video signals are also available, VAD performance can be enhanced by using visual information, which is not affected by acoustic noise. Previous visual VAD algorithms usually use a single visual feature to detect lip activity, such as active appearance models, optical flow, or intensity variation. Based on an analysis of the weaknesses of each feature, we propose to combine an intensity-change measure and the optical flow in the mouth region, which compensate for each other's weaknesses. To minimize the computational complexity, we develop simple measures that avoid statistical estimation or modeling. Specifically, the optical flow is the averaged motion vector of some grid regions, and the intensity variation is detected by simple thresholding. To extract the mouth region, we propose a simple algorithm that first detects the two eyes and then uses the intensity profile to locate the center of the mouth. Experiments show that the proposed combination of two simple measures yields higher detection rates, at a given false-positive rate, than methods that use a single feature.
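
A rough sketch of combining the two cheap cues over the mouth ROI. The intensity cue is a mean frame difference with simple thresholding, as in the abstract; the motion cue here is a crude 1-D profile-shift proxy rather than true grid-averaged optical flow, and all thresholds are invented:

```python
import numpy as np

def mouth_active(prev_roi, cur_roi, inten_thresh=8.0, motion_thresh=1):
    """Declare lip activity if either the intensity variation or the
    (proxy) vertical motion of the mouth ROI exceeds its threshold."""
    prev = prev_roi.astype(np.float64)
    cur = cur_roi.astype(np.float64)
    intensity_change = np.abs(cur - prev).mean()  # simple thresholded frame difference
    # motion proxy: vertical shift maximizing correlation of row-mean intensity profiles
    p0, p1 = prev.mean(axis=1), cur.mean(axis=1)
    corr = np.correlate(p1 - p1.mean(), p0 - p0.mean(), mode="full")
    shift = int(np.argmax(corr)) - (len(p0) - 1)
    return intensity_change > inten_thresh or abs(shift) > motion_thresh
```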

Earthquake-induced Liquefaction Areas and Safety Assessment of Facilities (지진으로 인한 액상화 지역 및 시설물 안정성 평가)

  • Jeon, Sang-Soo;Heo, DaeYang;Lee, Sang-Seung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.7
    • /
    • pp.133-143
    • /
    • 2018
  • Liquefaction is a form of secondary earthquake damage that had rarely been reported in Korea until the Mw 5.4 Pohang earthquake of 15 November 2017. In recent years, the Mw 5.8 Gyeongju earthquake of 12 September 2016 and the Mw 5.4 Pohang earthquake of 15 November 2017, which induced liquefaction, occurred in the fault zone of Yangsan City in the south-eastern part of Korea, showing that Korea is not safe from earthquake-induced liquefaction. In this study, the distance between the centroid of each administrative district and an epicenter located on the Yangsan fault, the peak ground acceleration (PGA) induced by earthquakes of Mw 5.0 and Mw 6.5, and the liquefaction potential index (LPI), calculated using groundwater levels and 274 standard penetration test results in the area of Gimhae City, adjacent to the Nakdong River and across the Yangsan fault, were estimated; a kriging method in a geographic information system was then used to evaluate the effects of liquefaction on damage to facilities. This study shows that an Mw 5.0 earthquake induces small, low-level liquefaction resulting in slight damage to facilities, whereas an Mw 6.5 earthquake induces large, high-level liquefaction resulting in severe damage to facilities.
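
For context, the liquefaction potential index mapped in the study is conventionally computed (Iwasaki form) as a depth-weighted integral of the factor-of-safety deficit over the top 20 m. A small sketch, assuming layered factor-of-safety values from the SPT-based analysis:

```python
def lpi(layers):
    """Liquefaction Potential Index (Iwasaki form):
    LPI = sum of F * w(z) * dz over the top 20 m,
    with w(z) = 10 - 0.5*z and F = 1 - FS when FS < 1, else 0."""
    total = 0.0
    for z_top, z_bot, fs in layers:  # (top depth [m], bottom depth [m], factor of safety)
        z_mid = 0.5 * (z_top + z_bot)
        if z_mid >= 20.0 or fs >= 1.0:
            continue
        total += (1.0 - fs) * (10.0 - 0.5 * z_mid) * (z_bot - z_top)
    return total

print(lpi([(2.0, 4.0, 0.80), (6.0, 9.0, 0.95)]))  # two hypothetical liquefiable layers
```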