• Title/Summary/Keyword: benchmark test (벤치마크 테스트)

Search Results: 109

Glitch Reduction Through Path Balancing for Low-Power CMOS Digital Circuits (저전력 CMOS 디지털 회로 설계에서 경로 균등화에 의한 글리치 감소기법)

  • Yang, Jae-Seok;Kim, Seong-Jae;Kim, Ju-Ho;Hwang, Seon-Yeong
    • Journal of KIISE: Computer Systems and Theory, v.26 no.10, pp.1275-1283, 1999
  • This paper presents an efficient algorithm for reducing glitches, the spurious signal transitions that consume power in CMOS logic circuits without affecting circuit operation. The proposed algorithm reduces glitches by achieving path balancing through gate sizing and buffer insertion, without increasing circuit delay. Gate sizing reduces not only glitches but also the effective capacitance of the circuit; when gate sizing alone cannot balance the paths, buffers are inserted. A buffer is inserted only where the power saved by glitch reduction exceeds the additional power consumed by the buffer itself. Because the saving from one buffer depends strongly on which other buffers are inserted, an integer linear program (ILP) is employed to maximize the power reduction with few inserted buffers. Tested on the LGSynth91 benchmark circuits, the algorithm achieved an average power reduction of 30.4% with no increase in circuit delay.
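
The buffer-selection trade-off described above (insert a buffer only where its glitch-power saving exceeds its own consumption, with savings that interact across buffer sites) can be sketched as a tiny 0/1 optimization. This is a hypothetical toy model, not the paper's ILP formulation: the candidate sites, per-site savings/costs, and pairwise interaction terms below are invented, and a real tool would hand the 0/1 problem to an ILP solver rather than enumerate subsets.

```python
from itertools import chain, combinations

def best_buffer_set(sites, saving, cost, interaction):
    """Exhaustively pick the subset of buffer sites maximizing net power saving.

    saving[i]           - glitch power removed if a buffer is placed at site i alone
    cost[i]             - power consumed by the inserted buffer itself
    interaction[(i, j)] - correction when buffers i and j are both placed
                          (one buffer may already balance the other's path)
    """
    best, best_gain = frozenset(), 0.0
    subsets = chain.from_iterable(combinations(sites, r) for r in range(1, len(sites) + 1))
    for subset in subsets:
        gain = sum(saving[i] - cost[i] for i in subset)
        gain += sum(interaction.get((i, j), 0.0) for i, j in combinations(sorted(subset), 2))
        if gain > best_gain:
            best, best_gain = frozenset(subset), gain
    return best, best_gain

sites = ["b1", "b2", "b3"]
saving = {"b1": 5.0, "b2": 4.0, "b3": 1.0}
cost = {"b1": 1.0, "b2": 1.0, "b3": 2.0}
# b1 and b2 balance overlapping paths, so their combined saving is smaller
interaction = {("b1", "b2"): -3.0}
chosen, gain = best_buffer_set(sites, saving, cost, interaction)
```

Here the interaction term makes b2 redundant once b1 is placed, which is exactly why the selection cannot be made site by site.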

Crowd Behavior Detection using Convolutional Neural Network (컨볼루션 뉴럴 네트워크를 이용한 군중 행동 감지)

  • Ullah, Waseem;Ullah, Fath U Min;Baik, Sung Wook;Lee, Mi Young
    • The Journal of Korean Institute of Next Generation Computing, v.15 no.6, pp.7-14, 2019
  • Automatic monitoring and detection of crowd behavior in surveillance video has attracted significant attention in computer vision because of its many applications in security, safety, and the protection of assets, and crowd analysis is a growing research area. In this paper, we propose a deep learning-based method that detects abnormal activities in surveillance cameras installed in a smart city. A fine-tuned VGG-16 model is trained on a publicly available benchmark crowd dataset and tested on real-time streams. The CCTV camera captures the video stream; when abnormal activity is detected, an alert is generated and sent to the nearest police station so that immediate action can be taken before further loss. Experiments show that the proposed method outperforms existing state-of-the-art techniques.
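
The alerting path described above (score each frame, raise a single alert when abnormality persists) can be sketched independently of the CNN. This is an invented illustration: the scores, threshold, and window length are made up, and in the paper the per-frame abnormality score would come from the fine-tuned VGG-16 rather than a hard-coded list.

```python
from collections import deque

def alert_frames(scores, threshold=0.8, window=3):
    """Return frame indices at which an alert would be raised.

    An alert fires when `window` consecutive frame scores exceed `threshold`,
    and re-arms only after the score drops below the threshold again, so one
    abnormal event produces one alert rather than one alert per frame.
    """
    recent = deque(maxlen=window)
    alerts, armed = [], True
    for i, s in enumerate(scores):
        recent.append(s > threshold)
        if armed and len(recent) == window and all(recent):
            alerts.append(i)
            armed = False          # suppress duplicates for the same event
        if s <= threshold:
            armed = True           # event ended; re-arm for the next one
    return alerts

scores = [0.1, 0.9, 0.95, 0.92, 0.97, 0.2, 0.9, 0.9, 0.9]
alerts = alert_frames(scores)
```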

A Representative Pattern Generation Algorithm Based on Evaluation And Selection (평가와 선택기법에 기반한 대표패턴 생성 알고리즘)

  • Yih, Hyeong-Il
    • Journal of the Korea Society of Computer and Information, v.14 no.3, pp.139-147, 2009
  • Memory-based reasoning stores training patterns (or representative patterns derived from them) in memory and classifies a test pattern by computing its distance to the stored patterns. Because it either stores the entire training set or replaces training patterns with representatives, it requires more memory than other machine learning techniques, and classification time grows as the number of stored training patterns increases. In this paper, we propose the EAS (Evaluation And Selection) algorithm to minimize memory usage and improve classification performance. After partitioning the training space, EAS evaluates each partition using the MDL and PM methods; the partition with the best evaluation result is turned into a representative pattern, and the remaining partitions are partitioned again and re-evaluated. We verify the performance of the proposed algorithm using benchmark data sets from the UCI Machine Learning Repository.
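
The core idea above (replace each region of the training space with one representative pattern, then classify by distance to representatives) can be sketched in a few lines. This is a generic one-dimensional illustration, not the paper's EAS algorithm: the equal-width partitioning, centroid representatives, and the toy data are assumptions, and EAS's MDL/PM evaluation and recursive re-partitioning are omitted.

```python
def build_representatives(patterns, labels, n_bins=2):
    """Replace training patterns by one representative per partition.

    The 1-D feature space is split into equal-width bins; each bin is
    represented by the centroid of its patterns and their majority label,
    shrinking the stored training set to n_bins entries.
    """
    lo, hi = min(patterns), max(patterns)
    width = (hi - lo) / n_bins or 1.0
    bins = {}
    for x, y in zip(patterns, labels):
        b = min(int((x - lo) / width), n_bins - 1)
        bins.setdefault(b, []).append((x, y))
    reps = []
    for members in bins.values():
        xs = [x for x, _ in members]
        ys = [y for _, y in members]
        reps.append((sum(xs) / len(xs), max(set(ys), key=ys.count)))
    return reps

def classify(reps, x):
    """Nearest-representative classification, as in memory-based reasoning."""
    return min(reps, key=lambda r: abs(r[0] - x))[1]

train_x = [0.1, 0.2, 0.3, 2.0, 2.1, 2.2]
train_y = ["a", "a", "a", "b", "b", "b"]
reps = build_representatives(train_x, train_y)
```

Six stored patterns collapse to two representatives, which is the memory saving the abstract is after.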

Deep-Learning Seismic Inversion using Laplace-domain wavefields (라플라스 영역 파동장을 이용한 딥러닝 탄성파 역산)

  • Jun Hyeon Jo;Wansoo Ha
    • Geophysics and Geophysical Exploration, v.26 no.2, pp.84-93, 2023
  • Supervised deep-learning seismic inversion has demonstrated successful performance on synthetic data examples targeting small-scale areas. It uses time-domain wavefields as input and subsurface velocity models as output; because time-domain wavefields contain many types of wave information, the input data are considerably large, and supervised deep-learning inversion trained on a significant amount of field-scale data has therefore not yet been attempted. In this study, we predict subsurface velocity models using Laplace-domain wavefields as input instead of time-domain wavefields, so that supervised deep-learning inversion can be applied to field-scale data. Laplace-domain wavefields greatly reduce the size of the input data and thereby accelerate network training, although the resolution of the results is reduced. A large grid interval can also be used to predict velocity models at field-data scale efficiently, and the predictions can serve as initial models for subsequent inversions. The network is trained only on synthetic data, generated as a massive set of synthetic velocity models and Laplace-domain wavefields of the same size as the field-scale data, and a towed-streamer acquisition geometry is adopted to simulate a marine seismic survey. Testing the trained network on numerical examples using the test data and a benchmark model yielded appropriate background velocity models.
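
The data reduction described above follows directly from the definition of the Laplace-domain wavefield: the damped time integral W(s) = ∫ w(t) e^(-st) dt collapses a full time series to one value per damping constant s. A minimal sketch of that transform (the trace, time step, and damping constants below are invented for illustration):

```python
import math

def laplace_wavefield(trace, dt, damping_constants):
    """Laplace transform of a time-domain trace at a few damping constants s.

    W(s) = sum_t w(t) * exp(-s * t) * dt  (discrete approximation).
    A trace of thousands of time samples collapses to one value per damping
    constant, which is why Laplace-domain input is so much smaller than
    time-domain input.
    """
    return [sum(w * math.exp(-s * i * dt) for i, w in enumerate(trace)) * dt
            for s in damping_constants]

trace = [math.sin(0.02 * i) for i in range(2000)]  # 2000 time samples
dt = 0.004
ws = laplace_wavefield(trace, dt, [2.0, 4.0, 8.0])  # 3 values replace 2000
```

Larger damping constants weight only the earliest arrivals, so the transformed values shrink as s grows, consistent with the loss of resolution the abstract notes.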

Design of an Asynchronous Instruction Cache based on a Mixed Delay Model (혼합 지연 모델에 기반한 비동기 명령어 캐시 설계)

  • Jeon, Kwang-Bae;Kim, Seok-Man;Lee, Je-Hoon;Oh, Myeong-Hoon;Cho, Kyoung-Rok
    • The Journal of the Korea Contents Association, v.10 no.3, pp.64-71, 2010
  • Recently, to achieve high processor performance, the cache has been physically split into two parts, one for instructions and one for data. This paper proposes an asynchronous instruction cache architecture based on a mixed delay model: a DI (delay-insensitive) model for cache hits and a bundled-delay model for cache misses. We synthesized the instruction cache at gate level and constructed a test platform with the 32-bit embedded processor EISC to evaluate performance. The cache communicates with main memory and the CPU using a 4-phase handshake protocol. It is an 8-KB, 4-way set-associative memory employing a pseudo-LRU replacement algorithm. Tested on the platform with the MiBench benchmark programs, the designed cache shows a 99% cache hit ratio and reduces latency to 68%.
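
The pseudo-LRU replacement mentioned above is commonly implemented for a 4-way set as a 3-bit binary tree. The sketch below is the textbook tree-PLRU scheme, not the paper's gate-level asynchronous design: one bit selects the less recently used half of the set, and one bit per pair selects within it.

```python
class PseudoLRU4:
    """Tree-based pseudo-LRU state for one 4-way cache set (3 state bits).

    b0 selects the less recently used half (ways 0-1 vs 2-3);
    b1 and b2 select within the left and right pair respectively.
    """
    def __init__(self):
        self.b0 = self.b1 = self.b2 = 0

    def access(self, way):
        # Point every tree bit on the accessed path *away* from the way just used.
        if way in (0, 1):
            self.b0 = 1                    # right half becomes the LRU side
            self.b1 = 1 if way == 0 else 0
        else:
            self.b0 = 0                    # left half becomes the LRU side
            self.b2 = 1 if way == 2 else 0

    def victim(self):
        # Follow the tree bits down to the pseudo-least-recently-used way.
        if self.b0 == 0:
            return 0 if self.b1 == 0 else 1
        return 2 if self.b2 == 0 else 3

plru = PseudoLRU4()
for way in (0, 1, 2, 3):   # touch all four ways in order
    plru.access(way)
```

After touching ways 0 through 3, the tree points back at way 0, matching true LRU for this sequence; the appeal of the scheme is that it needs 3 bits instead of the counters true LRU requires.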

An Iterative Data-Flow Optimal Scheduling Algorithm based on Genetic Algorithm for High-Performance Multiprocessor (고성능 멀티프로세서를 위한 유전 알고리즘 기반의 반복 데이터흐름 최적화 스케줄링 알고리즘)

  • Chang, Jeong-Uk;Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.15 no.6, pp.115-121, 2015
  • In this paper, we propose an iterative data-flow optimal scheduling algorithm based on a genetic algorithm for high-performance multiprocessors. The basic hardware model can be extended to include detailed features of the multiprocessor architecture; this is illustrated by implementing a hardware model that requires routing data transfers over a communication network with limited capacity. The scheduling method consists of three layers. In the top layer, a genetic algorithm takes care of the optimization: it generates different permutations of operations, which are passed on to the middle layer. The global scheduler makes the main scheduling decisions based on a permutation of operations; details of the hardware model are not considered in this layer. They are handled in the bottom layer by the black-box scheduler, which completes the scheduling of each operation and ensures that the detailed hardware model is obeyed. Both schedulers can insert cycles into the schedule to ensure that a valid schedule is always found quickly. Benchmark results on five filters show that the scheduling method finds good-quality schedules in reasonable time.
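
The two upper layers described above (a GA evolving permutations of operations, and a scheduler that turns each permutation into a schedule) can be sketched minimally. This toy, assumed example ignores precedence constraints, the communication network, and the black-box bottom layer: operations are independent tasks with invented durations, list-scheduled onto two processors.

```python
import random

def makespan(order, durations, n_proc=2):
    """List-schedule operations in the given order onto the earliest-free processor."""
    finish = [0] * n_proc
    for op in order:
        p = finish.index(min(finish))
        finish[p] += durations[op]
    return max(finish)

def ga_schedule(durations, pop_size=12, generations=40, seed=0):
    """Tiny elitist GA over operation permutations (swap mutation only)."""
    rng = random.Random(seed)
    n = len(durations)
    base = list(range(n))
    pop = [base[:]] + [rng.sample(base, n) for _ in range(pop_size - 1)]
    best = min(pop, key=lambda o: makespan(o, durations))
    for _ in range(generations):
        children = []
        for parent in pop:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)   # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        # Elitist selection: keep the pop_size best of parents and children.
        pop = sorted(pop + children, key=lambda o: makespan(o, durations))[:pop_size]
        if makespan(pop[0], durations) < makespan(best, durations):
            best = pop[0][:]
    return best, makespan(best, durations)

durations = [4, 3, 3, 2, 2]          # five operations, total work 14
order, span = ga_schedule(durations)
```

A real implementation would replace `makespan` with the global/black-box scheduler pair, so the GA only ever manipulates permutations while all hardware detail stays in the lower layers.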

Improvement of Address Pointer Assignment in DSP Code Generation (DSP용 코드 생성에서 주소 포인터 할당 성능 향상 기법)

  • Lee, Hee-Jin;Lee, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea CI, v.45 no.1, pp.37-47, 2008
  • Exploiting the address generation units typically provided in DSPs plays an important role in DSP code generation, since they perform fast address computation in parallel with the central data path. Offset assignment, which optimizes the memory layout of program variables to take advantage of the address generation units, consists of a memory layout generation step and an address pointer assignment step. In this paper, we propose an effective address pointer assignment method that minimizes the number of address calculation instructions in DSP code generation. The proposed approach reduces the time complexity of a conventional address pointer assignment algorithm with fixed memory layouts by breaking minimum-cost nodes, and employs a powerful pruning technique to reduce memory usage and processing time. Moreover, because the memory layout affects the result of the address pointer assignment algorithm, the approach iteratively improves the initial solution by changing the memory layout at each iteration. We applied the proposed approach to about 3,000 sequences from the OffsetStone benchmarks; experimental results show an average improvement of 25.9% in address code over previous works.
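
The cost being minimized above can be sketched with the standard single-offset-assignment model: an address register moves by ±1 for free (auto-increment/decrement) between consecutive accesses, and any longer jump costs one explicit address instruction. The variables, access sequence, and the two layouts below are invented; the paper's method handles multiple pointers and searches layouts rather than comparing two fixed ones.

```python
def address_cost(layout, accesses):
    """Count explicit address-register updates for one address register.

    layout   - variable name -> memory address
    accesses - sequence of variable names in execution order
    Moving the pointer by +/-1 between consecutive accesses is free
    (auto-increment/decrement); any longer move costs one instruction,
    as does the initial pointer load.
    """
    cost, pos = 0, None
    for var in accesses:
        addr = layout[var]
        if pos is None or abs(addr - pos) > 1:
            cost += 1
        pos = addr
    return cost

accesses = ["a", "b", "c", "a", "d", "b"]
bad = {"a": 0, "c": 1, "d": 2, "b": 3}    # layout ignoring the access pattern
good = {"d": 0, "a": 1, "b": 2, "c": 3}   # layout placing frequent neighbors adjacently
```

Reordering the variables in memory cuts the address instructions from 4 to 3 for the same access sequence, which is exactly the lever offset assignment pulls.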

Shell Finite Element for Nonlinear Analysis of Reinforced Concrete Containment Building (철근콘크리트 격납건물의 비선형 해석을 위한 쉘 유한요소)

  • Choun Young-Sun;Lee Hong-Pyo
    • Journal of the Computational Structural Engineering Institute of Korea, v.19 no.1 s.71, pp.93-103, 2006
  • Safety assessment of containment buildings during their service life is absolutely essential, because containment buildings are the last barrier protecting against the release of radioactive substances in an accident. This study therefore describes an enhanced degenerated shell finite element (FE) developed for nonlinear FE analysis of reinforced concrete (RC) containment buildings with an elasto-plastic material model. For the material nonlinear analysis, the Drucker-Prager failure criterion is adopted in the compression region, and the material parameters that determine the shape of the failure envelope are derived from biaxial stress tests. Reissner-Mindlin (RM) assumptions are adopted in developing the degenerated shell FE so that transverse shear deformation effects are considered. However, RM degenerated shell FEs suffer serious defects such as locking phenomena, since the stiffness matrix is overestimated in some situations. The shell formulation is therefore presented with emphasis on the stiffness-matrix terms, based on the assumed strain method. Finally, the performance of the present shell element in analyzing RC containment buildings is tested and demonstrated with several numerical examples, whose results show good agreement with experimental data and other numerical results.
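
The Drucker-Prager surface referred to above has the standard two-parameter form; a common calibration fits the parameters to the uniaxial and equibiaxial compressive strengths f_c and f_b (this is the textbook criterion and calibration, which is what biaxial tests typically determine; the paper's exact fitting may differ):

```latex
f(I_1, J_2) = \alpha I_1 + \sqrt{J_2} - k = 0, \qquad
\alpha = \frac{f_b - f_c}{\sqrt{3}\,(2 f_b - f_c)}, \qquad
k = \frac{f_b f_c}{\sqrt{3}\,(2 f_b - f_c)}
```

where I_1 is the first stress invariant and J_2 the second deviatoric stress invariant; substituting the uniaxial state (-f_c, 0, 0) and the equibiaxial state (-f_b, -f_b, 0) into f = 0 recovers the two expressions above.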

A Study on the Structural Optimization for Geodesic Dome (지오데식 돔의 구조최적화에 대한 연구)

  • Lee, Sang-Jin;Bae, Jung-Eun
    • Journal of Korean Association for Spatial Structures, v.8 no.4, pp.47-55, 2008
  • This paper presents basic theories and numerical results on structural optimization of geodesic domes. First, the space efficiency of the geodesic dome is investigated using the ratio of the icosahedron's surface area to the internal volume it encloses. A systematic procedure for generating the geodesic dome is also provided and implemented in the design optimization code ISADO-OPT. Mathematical programming is then used to find the optimum pattern of member sizes of a geodesic dome under a point load. The total weight of the structure is taken as the objective function to be minimized, with the displacement at the loading point and the member stresses as constraint functions. The finite difference method is used to calculate the design sensitivity of the objective function with respect to the design variables, and the SLP, SQP, and MFDM methods available in the optimizer DoT are used to search for optimum member size patterns. The proposed design optimization technique is found to obtain optimum member size patterns efficiently, and the numerical results can serve as benchmark reference solutions for design optimization of dome structures.
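
The finite-difference design sensitivity mentioned above is the forward-difference approximation dW/dx_i ≈ (W(x + h e_i) - W(x)) / h, evaluated once per design variable. The sketch below uses an invented toy objective (member weight = length × area for two hypothetical members), not the dome model or the DoT optimizer:

```python
def fd_sensitivity(f, x, h=1e-6):
    """Forward finite-difference sensitivity of f with respect to each design
    variable, as used to feed gradients to SLP/SQP-type optimizers when
    analytic derivatives are unavailable."""
    base = f(x)
    grads = []
    for i in range(len(x)):
        pert = x[:]
        pert[i] += h                      # perturb one design variable
        grads.append((f(pert) - base) / h)
    return grads

# Toy objective: total weight = sum(length_i * area_i) for two hypothetical members.
lengths = [2.0, 3.0]
weight = lambda areas: sum(l * a for l, a in zip(lengths, areas))
g = fd_sensitivity(weight, [1.0, 1.0])
```

For this linear toy objective the sensitivities are simply the member lengths; each gradient evaluation costs one extra function call per design variable, which is the price of the finite-difference approach.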
