• Title/Summary/Keyword: recursion

The Sub-Peres Functions for Random Number Generation

  • Pae, Sung-Il
    • Journal of the Korea Society of Computer and Information, v.18 no.2, pp.19-30, 2013
  • We study sub-Peres functions, which are defined recursively in the manner of the Peres function for random number generation. Instead of the two parameter functions used in the Peres function, a sub-Peres function uses only one parameter function. Naturally, these functions produce fewer random bits and hence are not asymptotically optimal. However, the sub-Peres functions run in linear time, i.e., in O(n) time rather than the O(n log n) of Peres's method. Moreover, their implementation is even simpler than that of the Peres function, not only because they use a single parameter function but also because they are tail-recursive: they run in a simple iterative manner rather than by recursion, eliminating the stack and thus further reducing the memory requirement of Peres's method. And yet, the output rate of a sub-Peres function is more than twice that of von Neumann's method, the widely known linear-time method. These functions can therefore be used in place of von Neumann's method in environments with limited computational resources, such as mobile devices. We report analyses of the sub-Peres functions' running times and exact output rates in comparison with the Peres function and other known methods for random number generation, and we discuss how the sub-Peres functions can be implemented.
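
The recursive structure described in this abstract can be made concrete. The Python sketch below shows von Neumann's pairwise extractor, Peres's two-call recursion, and a one-call variant in the spirit of the sub-Peres functions; the function names are illustrative, and the choice to recurse only on the XOR-derived sequence is an assumption, since the abstract does not specify which single parameter function the paper keeps. The one-call variant is written as a loop to show why a tail-recursive definition runs in O(n) time with no stack:

```python
def von_neumann(bits):
    """von Neumann's extractor: for each non-overlapping pair,
    emit the first bit iff the pair is 01 or 10 (10 -> 1, 01 -> 0)."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

def peres(bits):
    """Peres's extractor: von Neumann output plus TWO recursive calls
    on derived sequences, giving O(n log n) total work."""
    if len(bits) < 2:
        return []
    pairs = list(zip(bits[::2], bits[1::2]))
    vn = [a for a, b in pairs if a != b]   # von Neumann bits
    u  = [a ^ b for a, b in pairs]         # XOR of each pair
    v  = [a for a, b in pairs if a == b]   # bits from discarded equal pairs
    return vn + peres(u) + peres(v)

def sub_peres(bits):
    """One-parameter (sub-Peres-style) variant: keep only the recursion
    on the XOR sequence. The call is in tail position, so it unrolls into
    this loop; levels shrink geometrically (n + n/2 + ...), hence O(n)."""
    out = []
    while len(bits) >= 2:
        pairs = list(zip(bits[::2], bits[1::2]))
        out.extend(a for a, b in pairs if a != b)
        bits = [a ^ b for a, b in pairs]   # single derived sequence
    return out
```

For example, `sub_peres([1, 0, 1, 1, 0, 0, 1, 0])` returns `[1, 1, 1, 0]`, twice what `von_neumann` yields on the same input, while `peres` would extract still more bits by also recursing on the equal-pair sequence.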

Lightweight Super-Resolution Network Based on Deep Learning using Information Distillation and Recursive Methods

  • Woo, Hee-Jo;Sim, Ji-Woo;Kim, Eung-Tae
    • Journal of Broadcast Engineering, v.27 no.3, pp.378-390, 2022
  • With the recent development of deep convolutional neural networks, deep learning techniques applied to single-image super-resolution have shown good results, and the strong expressive power of deep networks has enabled complex nonlinear mappings between low-resolution and high-resolution images. However, the growing parameter counts and computational loads that come with heavy use of convolutional layers limit their application to real-time or low-power devices. This paper uses blocks that gradually extract hierarchical features through information distillation and proposes the Recursive Distillation Super-Resolution Network (RDSRN), a lightweight network that improves performance by producing more accurate high-frequency components through high-frequency residual purification blocks. Experiments confirmed that the proposed network restores images of quality similar to RDN's while running 3.5 times faster with about 32 times fewer parameters and about 10 times less computation, and that it achieves 0.16 dB better performance than the existing lightweight network CARN with about 2.2 times fewer parameters and 1.8 times faster processing.
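
The two ideas this abstract combines, channel-wise information distillation and recursive reuse of a block, can be sketched briefly in PyTorch. Everything below (module names, channel counts, split sizes, the number of recursions) is an illustrative assumption, not RDSRN's actual configuration:

```python
import torch
import torch.nn as nn

class DistillationBlock(nn.Module):
    """Information-distillation step (sketch): after each conv, split the
    feature channels, retain ("distill") one part, process the rest."""
    def __init__(self, channels=64, distill=16):
        super().__init__()
        self.distill = distill
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels - distill, channels, 3, padding=1)
        self.fuse  = nn.Conv2d(channels + distill, channels, 1)
        self.act   = nn.LeakyReLU(0.05)

    def forward(self, x):
        y = self.act(self.conv1(x))
        d1, rest = y.split([self.distill, y.size(1) - self.distill], dim=1)
        y = self.act(self.conv2(rest))
        d2, rest = y.split([self.distill, y.size(1) - self.distill], dim=1)
        return self.fuse(torch.cat([d1, d2, rest], dim=1)) + x  # residual

class RecursiveSR(nn.Module):
    """Recursive application of ONE shared block: effective depth grows
    with the number of recursions, the parameter count does not."""
    def __init__(self, scale=4, channels=64, recursions=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.block = DistillationBlock(channels)   # weights shared across passes
        self.recursions = recursions
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))                # sub-pixel upsampling

    def forward(self, lr):
        f = self.head(lr)
        for _ in range(self.recursions):           # same block, reused
            f = self.block(f)
        return self.tail(f)

# e.g. RecursiveSR()(torch.randn(1, 3, 24, 24)).shape -> (1, 3, 96, 96)
```

The lightweighting comes from `self.block` being applied several times with shared weights: the receptive field grows with each pass while the parameter count stays fixed, which is how a recursive design trades computation depth for model size.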

Single Image Super Resolution Based on Residual Dense Channel Attention Block-RecursiveSRNet

  • Woo, Hee-Jo;Sim, Ji-Woo;Kim, Eung-Tae
    • Journal of Broadcast Engineering, v.26 no.4, pp.429-440, 2021
  • With the recent development of deep convolutional neural network learning, deep learning techniques applied to single-image super-resolution are showing good results. One existing deep learning-based super-resolution technique is RDN (Residual Dense Network), in which initial feature information is carried to the last layer through residual dense blocks and each layer is restored using the input information of the previous layers. However, connecting and learning all hierarchical features and stacking a large number of residual dense blocks requires, despite the good performance, a large number of parameters and a huge computational load: the network takes a long time to train, processes slowly, and is not applicable to mobile systems. In this paper, we use the residual dense structure, a continuous-memory structure that reuses previous information, together with residual dense channel attention blocks based on the channel attention method, which determines the importance of each feature map of the image. We propose a method that increases depth to obtain a large receptive field while keeping the model compact. Experiments showed that at 4× magnification the proposed network's average PSNR was only 0.205 dB lower than RDN's, with about 1.8 times faster processing speed, about 10 times fewer parameters, and about 1.74 times less computation.
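
A minimal PyTorch sketch of the building block this abstract describes, a residual dense block with channel attention, is below; channel counts, growth rate, and layer depth are illustrative assumptions rather than the paper's settings:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention (squeeze-and-excitation style): global-average-pool
    each feature map, then learn a per-channel weight in (0, 1)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)   # reweight each feature map by importance

class ResidualDenseCABlock(nn.Module):
    """Residual dense block with channel attention (sketch): each conv sees
    the concatenation of all earlier features (dense, "continuous memory"
    connectivity), a 1x1 conv fuses them, channel attention reweights the
    result, and a residual connection preserves the block input."""
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(layers))
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)
        self.ca   = ChannelAttention(channels)
        self.act  = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return x + self.ca(self.fuse(torch.cat(feats, dim=1)))
```

Stacking a few such blocks rather than RDN's many is the kind of design the abstract describes for keeping the receptive field large while the parameter count stays small; the sigmoid gate scales each feature map by its learned importance before the residual addition.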