• Title/Summary/Keyword: Non-Memory Strategy

The Impact of Interactivity in Smart Signage and Flow on the Engagement and Memory Accessibility (스마트 사이니지의 상호작용성과 플로우(Flow)가 인게이지먼트와 기억 접근성에 미치는 영향)

  • Han, Kwang-Seok
    • Journal of the Korea Convergence Society / v.9 no.2 / pp.171-176 / 2018
  • This study installed smart signage in a fixed space and analyzed how interactivity level (high vs. low) and flow level (high vs. low) affect media engagement and advertising engagement, as well as the memory effect of which information is recalled. The results show that advertising engagement was highest when both the interactivity level and the flow level were high. Media engagement was high when the interactivity level was low and the flow level was high. Finally, when interactivity was low and flow was high, attribute information was recalled more readily than comprehensive evaluation information, whereas when the interactivity of the smart signage was high and user flow was low, comprehensive evaluation information was recalled more. Future smart signage strategy will therefore require detailed tactics for enhancing user flow rather than unconditionally strengthening interactivity.

A Randomness Test by the Entropy (Entropy에 의한 Randomness 검정법)

  • 최봉대;신양우;이경현
    • Proceedings of the Korea Institutes of Information Security and Cryptology Conference / 1991.11a / pp.105-133 / 1991
  • This paper proposes a new statistical test for randomness, related to per-bit entropy, for the case where the source of a binary random number generator is modeled as a $BMS_{p}$ or as a Markov chain with M-memory. Whereas the known randomness tests for binary random number generators can each detect only one kind of non-randomness, either bias in the distribution of 0s and 1s or correlation between consecutive bits, the new test measures the per-bit entropy, a cryptographically important quantity, and can therefore detect cryptographic weaknesses in sequences that pass both of those tests. In addition, by analyzing the optimal strategy of an attacker who searches for the key by exploiting the statistical defects of a symmetric (secret-key) cryptosystem, we show that this optimal strategy is closely related to the per-bit entropy of the binary sequence. We then introduce a new statistic related to per-bit entropy, characterize it, and derive its mean and variance for each of three source models: i.i.d. symmetric, $BMS_{p}$, and a Markov chain with M-memory. Since the new statistic is approximately normally distributed by the well-known central limit theorem, we use these means and variances to apply the test to several simple binary random number generators commonly used as components of stream cipher systems, showing that the entropy-based test is valid as a new randomness test.

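A minimal sketch of an entropy-based randomness check in the spirit of this abstract, assuming an i.i.d. symmetric null hypothesis. The paper derives the analytic mean and variance of its statistic for the i.i.d., $BMS_{p}$, and M-memory Markov models; the toy below instead approximates the null distribution by Monte Carlo, and the block length `m` and plug-in estimator are illustrative choices, not the paper's statistic.

```python
import math
import random
from collections import Counter

def per_bit_entropy(bits, m=4):
    """Plug-in entropy estimate (bits per bit) from non-overlapping m-bit blocks."""
    blocks = [tuple(bits[i:i + m]) for i in range(0, len(bits) - m + 1, m)]
    n = len(blocks)
    counts = Counter(blocks)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / m

def entropy_z_score(bits, m=4, trials=300, seed=0):
    """z-score of the sequence's entropy against a Monte Carlo i.i.d. null."""
    rng = random.Random(seed)
    stat = per_bit_entropy(bits, m)
    null = [per_bit_entropy([rng.getrandbits(1) for _ in range(len(bits))], m)
            for _ in range(trials)]
    mu = sum(null) / trials
    sd = (sum((x - mu) ** 2 for x in null) / (trials - 1)) ** 0.5
    return (stat - mu) / sd

rng = random.Random(42)
biased = [1 if rng.random() < 0.6 else 0 for _ in range(4096)]  # p(1)=0.6 source
fair = [rng.getrandbits(1) for _ in range(4096)]
print("biased z:", round(entropy_z_score(biased), 1))  # large negative: rejected
print("fair   z:", round(entropy_z_score(fair), 1))    # near zero: passes
```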

An Attribute Replicating Vertical File Partition Method by Genetic Algorithm (유전알고리듬을 이용한 속성의 중복 허용 파일 수직분할 방법)

  • 김재련;유종찬
    • The Journal of Information Technology and Database / v.6 no.2 / pp.71-86 / 1999
  • The performance of a relational database is measured by the number of disk accesses necessary to transfer data from disk to main memory. This paper proposes vertically partitioning relations into fragments and allowing attribute replication in order to reduce the number of disk accesses. To reduce computational time, a heuristic search method using a genetic algorithm is employed. The genetic algorithm uses a rank-based-sharing fitness function and elitism. Suitable genetic algorithm parameters are obtained through experiments and used to find solutions. Solutions of the attribute-replication and non-replication problems are compared, and the heuristic solutions from the genetic algorithm are discussed against optimal solutions obtained by the branch and bound method. The proposed method can solve large problems within an acceptable time limit and yields solutions near the optimal value.

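A hedged sketch of the chromosome encoding and GA loop such a formulation might use: each attribute may be assigned to several fragments (replication allowed), and fitness is a toy scan-cost model in place of the paper's disk-access model. The paper's rank-based-sharing fitness function and tuned parameters are not reproduced; plain rank-truncation selection with elitism stands in for them, and all data below are illustrative.

```python
import random

ATTRS, FRAGS = 8, 3
QUERIES = [{0, 1}, {1, 2, 3}, {4, 5}, {5, 6, 7}, {0, 7}]  # attributes each query reads

def cost(chrom):
    """Sum over queries of the size of the smallest fragment covering the query."""
    total = 0
    for q in QUERIES:
        sizes = [sum(row[f] for row in chrom)
                 for f in range(FRAGS)
                 if all(chrom[a][f] for a in q)]
        total += min(sizes) if sizes else ATTRS * 2  # penalty: no covering fragment
    return total

def random_chrom():
    c = [[random.random() < 0.4 for _ in range(FRAGS)] for _ in range(ATTRS)]
    for a in range(ATTRS):                 # every attribute must live somewhere
        if not any(c[a]):
            c[a][random.randrange(FRAGS)] = True
    return c

def mutate(c, rate=0.05):
    c = [row[:] for row in c]
    for a in range(ATTRS):
        for f in range(FRAGS):
            if random.random() < rate:
                c[a][f] = not c[a][f]
        if not any(c[a]):
            c[a][random.randrange(FRAGS)] = True
    return c

def crossover(p1, p2):
    cut = random.randrange(1, ATTRS)       # one-point crossover over attribute rows
    return [row[:] for row in p1[:cut]] + [row[:] for row in p2[cut:]]

random.seed(1)
pop = [random_chrom() for _ in range(40)]
for _ in range(200):
    pop.sort(key=cost)
    elite = pop[:4]                        # elitism: best survive unchanged
    parents = pop[:20]                     # rank-truncation parent pool
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(pop) - len(elite))]
    pop = elite + children
print("best cost:", cost(min(pop, key=cost)))
```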

Observer-Teacher-Learner-Based Optimization: An enhanced meta-heuristic for structural sizing design

  • Shahrouzi, Mohsen;Aghabaglou, Mahdi;Rafiee, Fataneh
    • Structural Engineering and Mechanics / v.62 no.5 / pp.537-550 / 2017
  • Structural sizing is a challenging task due to its non-convex constrained nature in the design space. In order to provide both global exploration and proper search refinement, a hybrid method is developed here based on outstanding features of Evolutionary Computing and Teaching-Learning-Based Optimization. The new method introduces an observer phase for memory exploitation in addition to the vector-sum movements of the original teacher and learner phases. A suitable integer coding is applied for structural size optimization, together with a fly-to-boundary technique and an elitism strategy. Performance of the proposed method is evaluated on a number of truss examples and compared with teaching-learning-based optimization. The results show the method's enhanced capability for efficient and stable convergence toward the optimum and for capturing high-quality solutions in discrete structural sizing problems.
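
A minimal sketch of the teacher and learner phases of TLBO with an added observer phase, as the abstract describes. The paper's exact observer update is not reproduced; a move toward a randomly chosen elite from a small memory is a plausible stand-in, a toy sphere function replaces the discrete truss objective, and simple clipping stands in for the fly-to-boundary technique.

```python
import random

DIM, POP, ITERS = 5, 20, 200
f = lambda x: sum(v * v for v in x)          # toy objective (minimize)

def clip(x):  # stand-in for the paper's fly-to-boundary handling
    return [max(-5.0, min(5.0, v)) for v in x]

random.seed(3)
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
memory = []                                  # elite memory for the observer phase
for _ in range(ITERS):
    pop.sort(key=f)
    memory = sorted(memory + pop[:2], key=f)[:5]
    teacher = pop[0]
    mean = [sum(x[d] for x in pop) / POP for d in range(DIM)]
    new_pop = []
    for x in pop:
        # Teacher phase: move toward the teacher, away from the class mean.
        tf = random.choice((1, 2))
        y = clip([x[d] + random.random() * (teacher[d] - tf * mean[d])
                  for d in range(DIM)])
        x = y if f(y) < f(x) else x
        # Learner phase: vector-sum move relative to a random classmate.
        p = random.choice(pop)
        sign = 1 if f(x) < f(p) else -1
        y = clip([x[d] + sign * random.random() * (x[d] - p[d])
                  for d in range(DIM)])
        x = y if f(y) < f(x) else x
        # Observer phase (hypothetical form): refine toward a memory elite.
        e = random.choice(memory)
        y = clip([x[d] + random.random() * (e[d] - x[d]) for d in range(DIM)])
        new_pop.append(y if f(y) < f(x) else x)
    pop = new_pop
print("best objective:", round(f(min(pop, key=f)), 6))
```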

Adaptive Sampling for ECG Detection Based on Compression Dictionary

  • Yuan, Zhongyun;Kim, Jong Hak;Cho, Jun Dong
    • JSTS: Journal of Semiconductor Technology and Science / v.13 no.6 / pp.608-616 / 2013
  • This paper presents an adaptive sampling method for electrocardiogram (ECG) signal detection. First, by employing a string-matching process with a compression dictionary, we recognize each segment of the ECG with different characteristics. Then, based on a non-uniform sampling strategy, the sampling rate is determined adaptively. As the simulation results indicate, our approach reconstructs the ECG signal at an optimized sampling rate while guaranteeing ECG integrity. Compared with an existing adaptive sampling technique, our approach acquires the ECG signal at a 30% lower sampling rate. Finally, experiments demonstrate its superiority in terms of energy efficiency and memory capacity.
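
A hypothetical sketch of the non-uniform sampling strategy the abstract outlines: segments are classified and sampled densely only where the waveform is active. The paper recognizes segments by string matching against a compression dictionary; a simple slope threshold on a synthetic trace stands in for that classifier here.

```python
import math

def synthetic_ecg(n=1000):
    """Crude stand-in signal: flat baseline with a sharp periodic spike."""
    return [math.exp(-((i % 200) - 100) ** 2 / 20.0) for i in range(n)]

def adaptive_sample(sig, slow=16, fast=2, thresh=0.01):
    """Keep every `fast`-th point in active regions, every `slow`-th elsewhere."""
    kept = []
    i = 0
    while i < len(sig) - 1:
        active = abs(sig[i + 1] - sig[i]) > thresh  # stand-in segment classifier
        kept.append((i, sig[i]))
        i += fast if active else slow
    return kept

sig = synthetic_ecg()
kept = adaptive_sample(sig)
print(f"kept {len(kept)} of {len(sig)} samples")
```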

Demand Paging Method Using Improved Algorithms on Non-OS Embedded System (Non-OS 임베디드 시스템에서 개선된 알고리즘을 적용한 요구 페이징 기법)

  • Lew, Kyeung Seek;Jeon, Chang Kyu;Kim, Yong Deak
    • IEMEK Journal of Embedded Systems and Applications / v.5 no.4 / pp.225-233 / 2010
  • In this paper, we improve the performance of the previously suggested demand-paging loader, which provides demand paging without an operating system. The page-replacement strategies used in existing operating systems can identify recently used pages because multiple processes run concurrently, and replacement policies favoring recently used or frequently demanded pages have been built on that information. These OS-based strategies, however, cannot be applied to a single process running without an operating system, because no context switching ever occurs. This paper therefore proposes page-replacement strategies that can be applied in a paging loader running as a single process. The Return-Prediction-Algorithm improved performance for programs in which long-distance function calls occur frequently, and the Most-Frequently-Used-Page-Remain-Algorithm improved performance for programs in which particular pages are referenced frequently. Together, these algorithms preserve the memory savings of demand paging while reducing the execution-time overhead.
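
A toy sketch of the Most-Frequently-Used-Page-Remain idea for a single-process, no-OS setting: demand counts are tracked per page, and on a fault the least-frequently-demanded resident page is evicted so that frequently demanded pages remain in memory. This is an illustrative model, not the paper's loader, and the Return-Prediction-Algorithm is not shown.

```python
from collections import defaultdict

class DemandPager:
    def __init__(self, frames):
        self.frames = frames
        self.resident = set()
        self.freq = defaultdict(int)   # demand count per page
        self.faults = 0

    def access(self, page):
        self.freq[page] += 1
        if page in self.resident:
            return
        self.faults += 1               # page fault: load page from storage
        if len(self.resident) >= self.frames:
            # Evict the least-frequently-demanded resident page so that
            # frequently demanded pages remain in memory.
            victim = min(self.resident, key=lambda p: self.freq[p])
            self.resident.remove(victim)
        self.resident.add(page)

pager = DemandPager(frames=3)
trace = [0, 1, 0, 2, 0, 3, 0, 1, 4, 0, 1, 5, 0]   # pages 0 and 1 are hot
for p in trace:
    pager.access(p)
print("faults:", pager.faults, "resident:", sorted(pager.resident))
```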

Assessment of computational performance for a vector parallel implementation: 3D probabilistic model discrete cracking in concrete

  • Paz, Carmen N.M.;Alves, Jose L.D.;Ebecken, Nelson F.F.
    • Computers and Concrete / v.2 no.5 / pp.345-366 / 2005
  • This work presents an assessment of the computational performance of a vector-parallel implementation of a 3D probabilistic model for concrete cracking. The paper continues the code-optimization efforts reported in earlier works (Paz et al. 2002a, 2002b, 2003). The probabilistic crack approach is based on the direct Monte Carlo method. Cracking is accounted for by means of 3D interface elements, and the approach assumes that all nonlinearities are restricted to the interface elements modeling cracks. The heterogeneity governs the overall cracking behavior and the related size effects on concrete fracture. The computational kernels of the implementation are an inexact Newton iterative driver for the nonlinear problem and a preconditioned conjugate gradient (PCG) driver for the linearized equations, using an element-by-element (EBE) strategy to compute matrix-vector products. In particular, the paper analyzes code behavior using OpenMP directives on parallel vector processors (PVP) such as the CRAY SV1 and CRAY T94. The impact of the memory architecture on code performance, and the strategies devised to circumvent this issue, are addressed through numerical experiments.
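
A compact sketch of the element-by-element (EBE) kernel named in the abstract: the global matrix is never assembled, and the matrix-vector product inside a Jacobi-preconditioned conjugate gradient loops over element matrices instead. Toy 1D bar elements stand in for the paper's 3D interface elements.

```python
import math

N = 21                                        # nodes of a 1D bar; node 0 is fixed
elems = [(i, i + 1) for i in range(N - 1)]
k_e = [[1.0, -1.0], [-1.0, 1.0]]              # element stiffness matrix

def matvec(x):
    """y = K x computed element by element, without assembling K."""
    y = [0.0] * N
    for a, b in elems:
        y[a] += k_e[0][0] * x[a] + k_e[0][1] * x[b]
        y[b] += k_e[1][0] * x[a] + k_e[1][1] * x[b]
    y[0] = x[0]                               # identity on the fixed dof
    return y

def pcg(b, tol=1e-12, max_iter=500):
    """Jacobi-preconditioned CG; all iterates keep the fixed dof at zero."""
    diag = [0.0] * N
    for a, c in elems:
        diag[a] += k_e[0][0]
        diag[c] += k_e[1][1]
    diag[0] = 1.0
    x = [0.0] * N
    r = b[:]                                  # r = b - K*0
    z = [ri / di for ri, di in zip(r, diag)]  # apply Jacobi preconditioner
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for it in range(max_iter):
        q = matvec(p)
        alpha = rz / sum(pi * qi for pi, qi in zip(p, q))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        if math.sqrt(sum(ri * ri for ri in r)) < tol:
            break
        z = [ri / di for ri, di in zip(r, diag)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x, it + 1

b = [0.0] * N
b[-1] = 1.0                                   # unit load at the free end
x, its = pcg(b)
print(f"tip displacement {x[-1]:.3f} (exact {N - 1}) in {its} CG iterations")
```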

Denoising ISTA-Net: learning based compressive sensing with reinforced non-linearity for side scan sonar image denoising (Denoising ISTA-Net: 측면주사 소나 영상 잡음제거를 위한 강화된 비선형성 학습 기반 압축 센싱)

  • Lee, Bokyeung;Ku, Bonwha;Kim, Wan-Jin;Kim, Seongil;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.39 no.4 / pp.246-254 / 2020
  • In this paper, we propose a learning-based compressive sensing algorithm for side scan sonar image denoising. The proposed method is based on the Iterative Shrinkage and Thresholding Algorithm (ISTA) framework and incorporates a strategy that reinforces the non-linearity of the deep learning network for improved performance. The method consists of three essential modules: the first performs a non-linear transform of the input and initialization; the second contains the ISTA block, which maps the input features to a sparse space and performs the inverse transform; and the third transforms from the non-linear feature space back to pixel space. The superiority of the proposed method in noise removal and memory efficiency is verified through various experiments.
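
For reference, a sketch of the classical ISTA iteration that ISTA-Net-style methods unfold into network layers: a gradient step on the data term followed by soft thresholding. The learned non-linear transform modules of the proposed method are not reproduced; this toy sparse-recovery problem shows only the underlying algorithm, with illustrative step and threshold parameters.

```python
import random

random.seed(7)
M, N, K = 40, 80, 5                         # measurements, signal dim, sparsity
A = [[random.gauss(0, 1 / M ** 0.5) for _ in range(N)] for _ in range(M)]
x_true = [0.0] * N
for i in random.sample(range(N), K):
    x_true[i] = random.choice((-1.0, 1.0))
y = [sum(A[m][n] * x_true[n] for n in range(N)) for m in range(M)]

def soft(v, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

L, lam = 8.0, 0.05                          # Lipschitz bound and sparsity weight
x = [0.0] * N
for _ in range(300):
    # Gradient step on 0.5 * ||A x - y||^2, then shrink: the ISTA iteration.
    r = [sum(A[m][n] * x[n] for n in range(N)) - y[m] for m in range(M)]
    g = [sum(A[m][n] * r[m] for m in range(M)) for n in range(N)]
    x = [soft(x[n] - g[n] / L, lam / L) for n in range(N)]

err = sum((a - b) ** 2 for a, b in zip(x, x_true)) ** 0.5
print("recovery error:", round(err, 3))
```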

Accurate and efficient GPU ray-casting algorithm for volume rendering of unstructured grid data

  • Gu, Gibeom;Kim, Duksu
    • ETRI Journal / v.42 no.4 / pp.608-618 / 2020
  • We present a novel GPU-based ray-casting algorithm for volume rendering of unstructured grid data. Our volume rendering system uses a ray-casting method that guarantees accurate rendering results. We also employ the per-pixel intersection list concept of the Bunyk algorithm to guarantee an accurate result for non-convex meshes. For efficient access to the lists in GPU memory, we represent the intersection lists for all faces as a single array built with our novel construction algorithm. With the intersection lists, we perform ray-casting on a GPU, with one GPU thread handling each ray. To increase ray coherence within a thread block and improve memory access efficiency, we extend a prior image-tile-based work distribution method to fit modern GPU architectures. We also show that a prior approach using a per-thread local buffer to reduce redundant computation is not appropriate for modern GPU architectures; instead, we adopt an on-demand calculation strategy that achieves better performance even though it allows duplicate computations. We applied our method to three unstructured grid datasets with different characteristics. On a GPU, our method achieved up to 36.5 times higher performance for the ray-casting process and 19.7 times higher performance for the whole volume rendering process compared with the Bunyk algorithm on a CPU core. Our approach also showed up to 8.2 times higher performance than a GPU-based cell projection method while generating more accurate rendering results. These results demonstrate the efficiency and accuracy of our method.
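
A CPU toy sketch of the per-pixel intersection-list concept: intersections along each ray are stored in one flattened array with per-pixel offsets (a GPU-friendly layout), sorted by depth, and composited front to back with early ray termination. The paper's list-construction algorithm, tiling scheme, and transfer function are not reproduced; all values below are synthetic.

```python
import random

random.seed(5)
W = 4                                         # a tiny 1-D "image" of W pixels
# (depth, scalar) intersections per pixel, as if rays had crossed cell faces
per_pixel = [sorted((random.random(), random.random())
                    for _ in range(random.randint(2, 5))) for _ in range(W)]

# Flatten to one array plus offsets, mirroring the single-array GPU layout.
offsets, flat = [0], []
for lst in per_pixel:
    flat.extend(lst)
    offsets.append(len(flat))

def composite(pixel):
    """Front-to-back compositing along one ray's sorted intersection list."""
    color, alpha = 0.0, 0.0
    for i in range(offsets[pixel], offsets[pixel + 1] - 1):
        (d0, s0), (d1, s1) = flat[i], flat[i + 1]
        seg_alpha = min(1.0, 0.5 * (s0 + s1) * (d1 - d0))  # toy transfer function
        color += (1.0 - alpha) * seg_alpha * 0.5 * (s0 + s1)
        alpha += (1.0 - alpha) * seg_alpha
        if alpha > 0.99:                      # early ray termination
            break
    return color

print([round(composite(p), 3) for p in range(W)])
```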

High Utility Itemset Mining over Uncertain Datasets Based on a Quantum Genetic Algorithm

  • Wang, Ju;Liu, Fuxian;Jin, Chunjie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.8 / pp.3606-3629 / 2018
  • The discovered high potential utility itemsets (HPUIs) have significant influence in a variety of areas, such as retail marketing, web click analysis, and biological gene analysis. In this paper, we propose an algorithm called HPUIM-QGA (mining high potential utility itemsets based on a quantum genetic algorithm) to mine HPUIs over uncertain datasets using a quantum genetic algorithm (QGA). The proposed algorithm handles the non-downward-closure problem by developing an upper bound on the potential utility (UBPU), which prunes unpromising itemsets at an early stage, and handles the combinatorial-explosion problem by introducing a QGA, which finds optimal solutions quickly and requires only a few parameters. Furthermore, a pruning strategy is designed to avoid the meaningless and redundant itemsets generated during the evolution process of the QGA. To validate HPUIM-QGA, a substantial number of experiments are performed on real-life and synthetic datasets, evaluating runtime, memory usage, the discovered itemsets, and convergence. The results show that the proposed algorithm is reasonable and acceptable for mining meaningful HPUIs from uncertain datasets.
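
A hedged sketch of the quantum-genetic machinery: each individual stores per-item inclusion probabilities (the qubit amplitudes), candidate itemsets are sampled ("observed") from them, and a rotation-style update pulls the probabilities toward the best itemset found. The expected-utility objective over a tiny uncertain dataset is an illustrative stand-in, and the paper's UBPU pruning bound is not reproduced.

```python
import random

ITEMS = 6
# transactions: {item: (quantity, existence probability)}; unit profits below
DB = [{0: (2, 0.9), 1: (1, 0.8), 3: (1, 0.7)},
      {1: (3, 0.6), 2: (1, 0.9), 4: (2, 0.8)},
      {0: (1, 0.5), 3: (2, 0.9), 5: (1, 0.7)}]
PROFIT = [3, 5, 2, 4, 1, 6]

def expected_utility(itemset):
    """Utility of the itemset weighted by its joint existence probability."""
    total = 0.0
    for t in DB:
        if itemset and all(i in t for i in itemset):
            prob = 1.0
            for i in itemset:
                prob *= t[i][1]
            total += prob * sum(t[i][0] * PROFIT[i] for i in itemset)
    return total

random.seed(2)
POP = 8
q = [[0.5] * ITEMS for _ in range(POP)]       # P(include item) per individual
best, best_u = frozenset(), 0.0
for gen in range(100):
    for k in range(POP):
        # "Observe" the qubits: sample a concrete itemset from probabilities.
        s = frozenset(i for i in range(ITEMS) if random.random() < q[k][i])
        u = expected_utility(s)
        if u > best_u:
            best, best_u = s, u
        # Rotation-style update: nudge probabilities toward the global best.
        for i in range(ITEMS):
            target = 1.0 if i in best else 0.0
            q[k][i] = min(0.95, max(0.05, q[k][i] + 0.05 * (target - q[k][i])))
print("best itemset:", sorted(best), "expected utility:", round(best_u, 3))
```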