• Title/Summary/Keyword: Benchmark system


Real-time hybrid simulation of a multi-story wood shear wall with first-story experimental substructure incorporating a rate-dependent seismic energy dissipation device

  • Shao, Xiaoyun;van de Lindt, John;Bahmani, Pouria;Pang, Weichiang;Ziaei, Ershad;Symans, Michael;Tian, Jingjing;Dao, Thang
    • Smart Structures and Systems
    • /
    • v.14 no.6
    • /
    • pp.1031-1054
    • /
    • 2014
  • Real-time hybrid simulation (RTHS) of a stacked wood shear wall retrofitted with a rate-dependent seismic energy dissipation device (viscous damper) was conducted at the newly constructed Structural Engineering Laboratory at the University of Alabama. This paper describes the implementation process of the RTHS, focusing on development of the controller scheme. An incremental approach was adopted, starting from a controller for conventional slow pseudodynamic hybrid simulation and evolving to one applicable to RTHS. Both benchmark-scale and full-scale tests are discussed to provide a roadmap for future RTHS implementations at different laboratories and/or on different structural systems. The developed RTHS controller was applied to study the effect of a rate-dependent energy dissipation device on the seismic performance of a multi-story wood shear wall system. The test specimen, setup, program, and results are presented, with emphasis on inter-story drift response. At the 100% design basis earthquake (DBE) level, the RTHS showed that the multi-story shear wall with the damper had 32% less inter-story drift and was noticeably less damaged than its undamped counterpart.
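As a rough illustration of the time-stepping at the heart of a (pseudodynamic or real-time) hybrid simulation, the sketch below integrates a single-story numerical substructure with an explicit central-difference scheme while the restoring force comes from a callback that stands in for the physical specimen. The function names, the stand-in wall/damper model, and all numerical values are hypothetical; they are not taken from the paper.

```python
import numpy as np

def pseudodynamic_sdof(m, c, dt, ground_acc, measure_restoring_force):
    """Explicit central-difference pseudodynamic loop for a single DOF.

    measure_restoring_force(u, v) stands in for the force fed back from the
    physical substructure; in an actual (RT)HS it is read from load cells.
    """
    n = len(ground_acc)
    u = np.zeros(n)                      # displacement history
    a0 = m / dt**2 + c / (2.0 * dt)      # effective coefficient on u[i+1]
    a1 = 2.0 * m / dt**2
    a2 = m / dt**2 - c / (2.0 * dt)
    u_prev = 0.0
    for i in range(n - 1):
        v = (u[i] - u_prev) / dt                      # backward-difference velocity
        r = measure_restoring_force(u[i], v)          # "measured" restoring force
        rhs = -m * ground_acc[i] - r + a1 * u[i] - a2 * u_prev
        u_prev, u[i + 1] = u[i], rhs / a0
    return u

# Hypothetical first-story stand-in: elastic wall plus a rate-dependent damper.
def wall_with_damper(u, v, k=1.8e6, c_d=4.0e4, alpha=0.5):
    return k * u + c_d * np.sign(v) * abs(v) ** alpha

if __name__ == "__main__":
    ag = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * np.arange(0, 5, 0.005))  # toy excitation
    drift = pseudodynamic_sdof(m=2.0e4, c=1.0e3, dt=0.005,
                               ground_acc=ag,
                               measure_restoring_force=wall_with_damper)
    print("peak drift [m]:", abs(drift).max())
```

In an actual RTHS the callback would be replaced by the actuator command and load-cell feedback path, and each pass through the loop would have to complete within the real time step.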

Detection of Faces with Partial Occlusions using Statistical Face Model (통계적 얼굴 모델을 이용한 부분적으로 가려진 얼굴 검출)

  • Seo, Jeongin;Park, Hyeyoung
    • Journal of KIISE
    • /
    • v.41 no.11
    • /
    • pp.921-926
    • /
    • 2014
  • Face detection refers to the process of extracting facial regions from an input image; it can improve the speed and accuracy of recognition or authentication systems and has diverse applications. Since conventional methods try to detect faces based on the whole facial shape, their detection performance can be degraded by occlusions caused by accessories or body parts. In this paper, we propose a method combining local feature descriptors and probabilistic modeling in order to detect partially occluded faces effectively. In the training stage, we represent an image as a set of local feature descriptors and estimate a statistical model for normal faces. Given a test image, we find the region most similar to a face using the model constructed in the training stage. Experimental results on a benchmark data set confirm the effectiveness of the proposed method in detecting partially occluded faces.
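The abstract does not specify the exact descriptors or the form of the statistical model, so the following is only a minimal sketch of the general idea: represent each region by a set of local descriptors (here, normalized gray patches), fit a single Gaussian to descriptors from unoccluded training faces, and score candidate windows by average Mahalanobis distance so that an occluded patch only penalizes its own term. All class and function names are hypothetical.

```python
import numpy as np

PATCH = 8  # local descriptor: an 8x8 normalized gray patch, flattened

def local_descriptors(img, step=4):
    """Dense local descriptors: mean/variance-normalized patches (a stand-in
    for the paper's local feature descriptors)."""
    h, w = img.shape
    desc = []
    for y in range(0, h - PATCH + 1, step):
        for x in range(0, w - PATCH + 1, step):
            p = img[y:y + PATCH, x:x + PATCH].astype(float).ravel()
            desc.append((p - p.mean()) / (p.std() + 1e-8))
    return np.array(desc)

class GaussianFaceModel:
    """Single Gaussian over local descriptors from normal (unoccluded) faces."""
    def fit(self, face_images):
        d = np.vstack([local_descriptors(im) for im in face_images])
        self.mu = d.mean(axis=0)
        self.cov = np.cov(d, rowvar=False) + 1e-3 * np.eye(d.shape[1])
        self.icov = np.linalg.inv(self.cov)
        return self

    def score(self, img):
        """Average negated Mahalanobis distance of a region's descriptors;
        higher means more face-like. Occluded patches hurt only their own terms."""
        d = local_descriptors(img) - self.mu
        return -np.einsum('ij,jk,ik->i', d, self.icov, d).mean()

def detect(model, image, win=32, stride=8):
    """Return the window most similar to the face model."""
    best, best_score = None, -np.inf
    h, w = image.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            s = model.score(image[y:y + win, x:x + win])
            if s > best_score:
                best, best_score = (x, y, win, win), s
    return best
```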

An Improved Dynamic Branch Predictor by Selective Access of a Specific Element in 4-Way Cache (4-Way 캐쉬의 선택된 Element를 이용한 향상된 동적 분기 예측기 구현)

  • Hwang, In-Sung;Hwang, Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38A no.12
    • /
    • pp.1094-1101
    • /
    • 2013
  • This paper proposes an improved branch predictor that reduces the number of execution cycles of applications by selectively accessing a specific element in a 4-way set-associative cache. When a branch instruction is fetched, the proposed branch predictor acquires the branch target address from the selected element in the cache by referring to an MRU buffer. Branch prediction rate and application execution speed are considerably improved by increasing the number of BTAC entries under a restricted power budget, compared with a previous branch predictor that accesses all elements. The effectiveness of the proposed dynamic branch predictor is verified by executing benchmark applications on a core simulator. Experimental results show that the number of execution cycles decreases by an average of 10.1%, while power consumption increases by an average of 7.4%, compared to a core without a dynamic branch predictor. Execution cycles are reduced by 4.1% in comparison with a core that employs the previous dynamic branch predictor.
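A behavioral sketch of the selective-access idea is given below: a 4-way BTAC whose lookup probes only the way recorded in a small per-set MRU buffer, so most of the tag/target arrays stay idle on each access. Field widths, indexing, and the victim-selection policy are assumptions for illustration, not the paper's hardware design.

```python
class SelectiveBTAC:
    """4-way branch target address cache that probes only one way per lookup.

    A per-set MRU buffer records which way last hit; lookups read only that way,
    so roughly one quarter of the tag/target arrays is activated per access.
    (Illustrative model only; field widths and policies are assumptions.)
    """
    def __init__(self, sets=256, ways=4):
        self.sets, self.ways = sets, ways
        self.tags = [[None] * ways for _ in range(sets)]
        self.targets = [[None] * ways for _ in range(sets)]
        self.mru = [0] * sets                 # MRU buffer: last-hitting way per set

    def _index(self, pc):
        return (pc >> 2) % self.sets, (pc >> 2) // self.sets   # set index, tag

    def predict(self, pc):
        s, tag = self._index(pc)
        w = self.mru[s]                       # selective access: single way only
        if self.tags[s][w] == tag:
            return self.targets[s][w]         # predicted branch target
        return None                           # treated as a miss / not-taken

    def update(self, pc, target):
        s, tag = self._index(pc)
        for w in range(self.ways):            # on resolution, fill or refresh an entry
            if self.tags[s][w] in (tag, None):
                self.tags[s][w], self.targets[s][w] = tag, target
                self.mru[s] = w
                return
        w = (self.mru[s] + 1) % self.ways     # simple victim choice away from the MRU way
        self.tags[s][w], self.targets[s][w] = tag, target
        self.mru[s] = w
```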

A Fast Least-Squares Algorithm for Multiple-Row Downdatings (Multiple-Row Downdating을 수행하는 고속 최소자승 알고리즘)

  • Lee, Chung-Han;Kim, Seok-Il
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.1
    • /
    • pp.55-65
    • /
    • 1995
  • Existing multiple-row downdating algorithms have adopted a Cholesky factor downdating (CFD) scheme that recursively downdates one row at a time. The CFD-based algorithm requires $\frac{5}{2}pn^{2}$ flops (floating point operations) to downdate a $p \times n$ observation matrix $Z^{T}$. On the other hand, the hybrid CFD (HCFD) based algorithm we propose in this paper requires $pn^{2}+\frac{6}{5}n^{3}$ flops when $p \geq n$. The HCFD-based algorithm first factorizes $Z^{T}$ as $Z^{T}=Q_{Z}R_{Z}$ and then applies the CFD to the upper triangular matrix $R_{Z}$, so that the total number of floating point operations for downdating $Z^{T}$ is significantly reduced compared with that of the CFD-based algorithm. Benchmark tests on the Sun SPARC/2 and the Tolerant System also show that the performance of the HCFD-based algorithm is superior to that of the CFD-based algorithm, especially when the number of rows of the observation matrix is large.
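The sketch below illustrates the HCFD idea under the stated factorization: QR-factor the block of rows to be removed, then apply a standard rank-one Cholesky factor downdate for each of the n rows of $R_{Z}$ instead of each of the p original rows. It is a NumPy illustration of the arithmetic, not the authors' implementation.

```python
import numpy as np

def chol_downdate_row(R, z):
    """Rank-one downdate of an upper-triangular factor:
    returns R1 with R1.T @ R1 == R.T @ R - outer(z, z)."""
    R = R.copy().astype(float)
    z = z.copy().astype(float)
    n = R.shape[0]
    for k in range(n):
        r = np.sqrt(R[k, k] ** 2 - z[k] ** 2)          # hyperbolic rotation parameters
        c, s = r / R[k, k], z[k] / R[k, k]
        R[k, k] = r
        R[k, k + 1:] = (R[k, k + 1:] - s * z[k + 1:]) / c
        z[k + 1:] = c * z[k + 1:] - s * R[k, k + 1:]
    return R

def hcfd_downdate(R, Zt):
    """Hybrid CFD: QR-factor the p-by-n block Zt, then downdate R by the n rows of R_Z."""
    Rz = np.linalg.qr(Zt, mode='r')        # Zt = Q_Z R_Z, so Zt.T @ Zt == Rz.T @ Rz
    for row in Rz:
        R = chol_downdate_row(R, row)
    return R

if __name__ == "__main__":
    X = np.random.randn(50, 6)
    Zt = X[:10]                            # the 10 observation rows to be removed
    R = np.linalg.qr(X, mode='r')          # R.T @ R == X.T @ X
    R_new = hcfd_downdate(R, Zt)
    assert np.allclose(R_new.T @ R_new, X[10:].T @ X[10:])   # Gram matrix of remaining rows
```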


Ontology Alignment by Using Discrete Cuckoo Search (이산 Cuckoo Search를 이용한 온톨로지 정렬)

  • Han, Jun;Jung, Hyunjun;Baik, Doo-Kwon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.12
    • /
    • pp.523-530
    • /
    • 2014
  • Ontology alignment is a way to share and reuse ontology knowledge. Because of concept ambiguity, most ontology alignment systems combine a set of various measures with exhaustive enumeration to provide a satisfactory result. However, the calculation becomes more complex and the required time grows exponentially as the number of concepts increases, and more errors can appear at the same time. Lately, the focus has been on meta-matching using heuristic algorithms. Existing meta-matching systems tune extra parameters, which complicates the calculation; as a consequence, they do not perform well across the varied data of specific domains. In this paper, we propose a high-performance algorithm using discrete cuckoo search (DCS) that solves ontology alignment through a simple process. It provides an efficient search strategy based on the Lévy flight distribution. To evaluate the approach, benchmark data from OAEI 2012 is employed, and the quality of the alignments produced with DCS is compared with that of state-of-the-art ontology matching systems.
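The paper's discrete cuckoo search is tailored to alignment; as a generic illustration of the underlying search strategy, the sketch below implements a basic continuous cuckoo search with Mantegna-style Lévy-flight steps on a toy objective. The alignment-specific encoding and fitness (e.g., an F-measure against a reference alignment) would replace the placeholder objective.

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-flight-distributed step length."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=25, pa=0.25, iters=500, lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with a basic (continuous) cuckoo search."""
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fitness = [f(x) for x in nests]
    best = min(range(n_nests), key=lambda i: fitness[i])
    for _ in range(iters):
        for i in range(n_nests):
            # Generate a new solution by a Levy flight biased toward the best nest.
            cand = [min(hi, max(lo, nests[i][d] + 0.01 * levy_step() *
                                (nests[i][d] - nests[best][d]))) for d in range(dim)]
            j = random.randrange(n_nests)     # compare against a randomly chosen nest
            if f(cand) < fitness[j]:
                nests[j], fitness[j] = cand, f(cand)
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        order = sorted(range(n_nests), key=lambda i: fitness[i], reverse=True)
        for i in order[:int(pa * n_nests)]:
            nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
            fitness[i] = f(nests[i])
        best = min(range(n_nests), key=lambda i: fitness[i])
    return nests[best], fitness[best]

if __name__ == "__main__":
    # Toy usage: the sphere function stands in for an alignment quality objective.
    x, fx = cuckoo_search(lambda x: sum(v * v for v in x), dim=5)
    print(fx)
```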

Design and Implementation of Clusters with Single Process Space (단일 프로세스 공간을 제공하는 클러스터 시스템의 설계 및 구현)

  • Park, Min;Lee, Daewoo;Park, Dong-Gun;JungLok yu;Maeng, Seung-Ryoul
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04a
    • /
    • pp.16-18
    • /
    • 2004
  • Single system image (SSI) clusters have been a mainstay of high-performance computing for many years. SSI requires the integration and aggregation of all types of resources in a cluster to present a single interface to users. In this paper, we describe a cluster computing architecture built on the concept of a single process space (SPS), in which all processes share a uniform process identification scheme. With SPS, a process on any node can create a child process on the same or a different node, or communicate with any other process on a remote node, as if they were on a single node. For this purpose, SPS is built with support for cluster-wide unique pids, signal forwarding, and remote fork. We propose a novel design of an SPS cluster that addresses the scalability and flexibility problems of traditional cluster-wide unique pid implementations by using blocked pid assignment. We have implemented this new design and demonstrate its performance by comparing it to the Beowulf distributed process space. Benchmark results show that our design achieves both the scalability and the flexibility that are essential to building an SPS cluster.
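A minimal model of blocked pid assignment is sketched below: each node leases contiguous pid blocks from a cluster-wide space, so allocating a pid never requires cluster-wide coordination, and mapping a pid back to its owning node (e.g., for signal forwarding) is a single table lookup. Block size, the lease protocol, and all names are assumptions for illustration, not the paper's implementation.

```python
class BlockedPidSpace:
    """Cluster-wide pid allocation by blocks (illustrative model).

    Each node leases contiguous pid blocks of size `block`; a pid maps back to
    its block by integer division, and a block table maps blocks to nodes, so
    locating the node that owns a remote pid is a constant-time lookup.
    """
    def __init__(self, block=4096):
        self.block = block
        self.next_block = 0
        self.block_owner = {}                 # block id -> node id

    def lease_block(self, node_id):
        b = self.next_block
        self.next_block += 1
        self.block_owner[b] = node_id
        return b * self.block, (b + 1) * self.block   # [first, last) pid in the block

    def node_of(self, pid):
        return self.block_owner.get(pid // self.block)

class NodeAllocator:
    """Per-node allocator: hands out cluster-wide pids from its leased blocks."""
    def __init__(self, space, node_id):
        self.space, self.node_id = space, node_id
        self.cursor, self.limit = space.lease_block(node_id)

    def alloc_pid(self):
        if self.cursor == self.limit:         # block exhausted: lease another one
            self.cursor, self.limit = self.space.lease_block(self.node_id)
        pid, self.cursor = self.cursor, self.cursor + 1
        return pid
```

Signal forwarding in this model reduces to `space.node_of(pid)` followed by a remote call when the owner is not the local node.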


Fault Tolerant Cache for Soft Error (소프트에러 결함 허용 캐쉬)

  • Lee, Jong-Ho;Cho, Jun-Dong;Pyo, Jung-Yul;Park, Gi-Ho
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.57 no.1
    • /
    • pp.128-136
    • /
    • 2008
  • In this paper, we propose a new cache structure for effective correction of soft errors. We add a check bit and an SEEB (soft error evaluation block) to evaluate the status of each cache line. The SEEB stores the result of the parity check in a two-bit shift register and sets the check bit to '1' when the parity check fails twice on the same cache line. In that case, the line where the parity check failed twice is treated as vulnerable to soft errors. When data is filled into the cache, a new replacement algorithm is applied that uses only the blocks determined to be valid by the SEEB. This structure prevents vulnerable lines from being used, while lines whose parity check fails only once can still be reused, contributing to efficient use of the cache. We tried to minimize the side effects of the proposed cache: experimental results using the SPEC2000 benchmark show a 3% degradation in hit rate, a 15% timing overhead due to the parity logic, and a 2.7% area overhead. These overheads can be considered minor for the SEEB, since almost all fault-tolerant designs inevitably adopt such a parity method despite some overhead, and using only parity logic offers a 5~10% advantage over ECC logic. With the proposed cache, the system is protected from the threat of soft errors in the cache, and the hit rate can be maintained at the level of a cache without soft errors.
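One possible reading of the SEEB mechanism is sketched below: a two-bit shift register per line records recent parity outcomes, two recorded failures set the check bit, and the replacement policy allocates only into lines whose check bit is still clear. Whether the two failures must be consecutive, and the fallback when every line in a set is marked vulnerable, are assumptions of this sketch rather than details from the paper.

```python
class CacheLineSEEB:
    """Per-line soft-error evaluation (one illustrative reading of the SEEB idea).

    A 2-bit shift register records the outcome of the last two parity checks;
    once two failures have been observed on the line, its check bit is set and
    the line is treated as vulnerable to soft errors.
    """
    def __init__(self):
        self.shift = [0, 0]      # last two parity results (1 = failure)
        self.check_bit = 0       # 1 = vulnerable, excluded from allocation

    def record_parity(self, failed):
        self.shift = [self.shift[1], 1 if failed else 0]
        if sum(self.shift) == 2:                 # assumption: two consecutive failures
            self.check_bit = 1

class SEEBSet:
    """One cache set: fills go only to lines the SEEB still considers valid."""
    def __init__(self, ways=4):
        self.lines = [CacheLineSEEB() for _ in range(ways)]
        self.next_victim = 0

    def choose_victim(self):
        candidates = [i for i, l in enumerate(self.lines) if l.check_bit == 0]
        if not candidates:                       # pathological case: all ways vulnerable
            return None                          # e.g., bypass the cache for this fill
        victim = candidates[self.next_victim % len(candidates)]
        self.next_victim += 1
        return victim
```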

Generalization of Recurrent Cascade Correlation Algorithm and Morse Signal Experiments using new Activation Functions (순환 케스케이드 코릴레이션 알고리즘의 일반화와 새로운 활성화함수를 사용한 모스 신호 실험)

  • Song Hae-Sang;Lee Sang-Wha
    • Journal of Intelligence and Information Systems
    • /
    • v.10 no.2
    • /
    • pp.53-63
    • /
    • 2004
  • Recurrent Cascade Correlation (RCC) is a supervised learning algorithm that automatically determines the size and topology of the network. RCC adds new hidden neurons one by one and creates a multi-layer structure in which each hidden layer has only one neuron. In second-order RCC, new hidden neurons are added to only one hidden layer. These created neurons are not connected to each other. We present a generalization of the RCC architecture by combining the standard RCC architecture and the second-order RCC architecture. Whenever a hidden neuron has to be added, the new RCC learning algorithm automatically determines whether the network topology grows vertically or horizontally. The new algorithm, using sigmoid, tanh, and new activation functions, was tested on the Morse benchmark problem. The experiments with the generalized RCC network using these activation functions showed that the number of hidden neurons was reduced.
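The abstract does not state how the algorithm decides between vertical and horizontal growth; one plausible rule, sketched below, trains a candidate unit for each placement and compares their cascade-correlation scores (Fahlman's $S=\sum_{o}|\sum_{p}(V_{p}-\bar{V})(E_{p,o}-\bar{E}_{o})|$), growing in whichever direction yields the larger correlation with the residual error. This is an illustrative guess, not the authors' criterion, and the candidate-training step itself is omitted.

```python
import numpy as np

def candidate_score(candidate_activations, residual_errors):
    """Fahlman-style candidate correlation S = sum_o | sum_p (V_p - V̄)(E_po - Ē_o) |.

    candidate_activations: shape (P,), the candidate unit's output per pattern.
    residual_errors: shape (P, O), the current network's error per pattern/output.
    """
    V = candidate_activations - candidate_activations.mean()
    E = residual_errors - residual_errors.mean(axis=0)
    return np.abs(V @ E).sum()

def choose_growth(vertical_candidate, horizontal_candidate, residual_errors):
    """Decide whether the network grows vertically (a new one-neuron hidden layer,
    as in standard RCC) or horizontally (another neuron in the last hidden layer,
    as in second-order RCC), by comparing the best candidate correlation of each option."""
    s_v = candidate_score(vertical_candidate, residual_errors)
    s_h = candidate_score(horizontal_candidate, residual_errors)
    return ("vertical", s_v) if s_v >= s_h else ("horizontal", s_h)
```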


Parallel Multithreaded Processing for Data Set Summarization on Multicore CPUs

  • Ordonez, Carlos;Navas, Mario;Garcia-Alvarado, Carlos
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.2
    • /
    • pp.111-120
    • /
    • 2011
  • Data mining algorithms should exploit new hardware technologies to accelerate computations. Such a goal is difficult to achieve in a database management system (DBMS) due to its complex internal subsystems and because data mining numeric computations on large data sets are difficult to optimize. This paper explores taking advantage of the existing multithreaded capabilities of multicore CPUs, as well as caching in RAM, to efficiently compute summaries of a large data set, a fundamental data mining problem. We introduce parallel algorithms working on multiple threads, which overcome the row aggregation processing bottleneck of accessing secondary storage while maintaining linear time complexity with respect to data set size. Our proposal is based on a combination of table scans and parallel multithreaded processing among multiple cores in the CPU. We introduce several database-style and hardware-level optimizations: caching row blocks of the input table, managing available RAM, interleaving I/O and CPU processing, and tuning the number of working threads. We experimentally benchmark our algorithms with large data sets on a DBMS running on a computer with a multicore CPU. We show that our algorithms outperform existing DBMS mechanisms in computing aggregations of multidimensional data summaries, especially as dimensionality grows. Furthermore, we show that local memory allocation (RAM block size) does not have a significant impact when the thread management algorithm distributes the workload among a fixed number of threads. Our proposal is unique in the sense that we do not modify or require access to the DBMS source code; instead, we extend the DBMS with analytic functionality by developing User-Defined Functions.
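A standalone sketch of the block-parallel aggregation idea follows: each thread computes sufficient statistics for a block of rows, here taken to be the count n, the linear sum L, and the quadratic sum Q, and the per-block results are merged; NumPy's matrix multiply releases the GIL, so the blocks can genuinely overlap. The paper's version runs inside the DBMS via User-Defined Functions; this Python version only illustrates the aggregation pattern and uses hypothetical names.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def block_summary(block):
    """Per-block sufficient statistics: count n, linear sum L, quadratic sum Q."""
    n = block.shape[0]
    L = block.sum(axis=0)
    Q = block.T @ block          # d x d; NumPy releases the GIL inside the matmul
    return n, L, Q

def summarize(X, n_threads=4, block_rows=100_000):
    """Compute (n, L, Q) for the whole data set by merging per-block results."""
    blocks = [X[i:i + block_rows] for i in range(0, X.shape[0], block_rows)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        parts = list(pool.map(block_summary, blocks))
    n = sum(p[0] for p in parts)
    L = sum(p[1] for p in parts)
    Q = sum(p[2] for p in parts)
    return n, L, Q

if __name__ == "__main__":
    X = np.random.randn(500_000, 8)
    n, L, Q = summarize(X)
    mean = L / n
    cov = Q / n - np.outer(mean, mean)   # the summaries suffice for mean/covariance
    print(n, mean.shape, cov.shape)
```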

Decision-making of alternative pylon shapes of a benchmark cable-stayed bridge using seismic risk assessment

  • Akhoondzade-Noghabi, Vahid;Bargi, Khosrow
    • Earthquakes and Structures
    • /
    • v.11 no.4
    • /
    • pp.583-607
    • /
    • 2016
  • One of the main applications of seismic risk assessment is that a specific design can be selected for a bridge from different alternatives by considering damage losses alongside primary construction costs. Therefore, in this paper, the focus is on selecting the shape of the pylon, a changeable component in the design of a cable-stayed bridge, as a double-criterion decision-making problem. The pylon shapes considered are H, A, Y, and diamond, and the two criteria are construction cost and probable earthquake losses. In this research, decision-making is performed using a developed seismic risk assessment process as a powerful method. Considering the uncertainties in the seismic risk assessment process, a fragility assessment method combining incremental dynamic analysis (IDA) and uniform design (UD) is proposed, in which the UD method provides logical capacity models of the structure and the IDA method provides the probabilistic seismic demand model. Using these models and by defining damage states, fragility curves of the bridge system are obtained for the different pylon shapes. Finally, by combining the fragility curves with damage losses and implementing the proposed cost-loss-benefit (CLB) method, the seismic risk assessment process is developed with a financial-comparative approach. Thus, the optimal shape of the pylon can be determined using double-criterion decision-making. The final results indicate that the optimal pylon shapes for the studied span of the cable-stayed bridge are, in order, the H shape, the diamond shape, the Y shape, and the A shape.
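As a toy illustration of comparing alternatives by construction cost plus probable earthquake loss, the sketch below evaluates lognormal fragility curves at a single design-level intensity, converts exceedance probabilities into damage-state probabilities, and weights them by damage-state costs. All medians, dispersions, costs, and the intensity value are placeholders, and a full assessment would integrate over the site hazard rather than use one intensity level; this is not the paper's CLB procedure.

```python
import math

def lognormal_cdf(x, median, beta):
    """P(capacity <= x) for a lognormal fragility curve (median, log-std beta)."""
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (beta * math.sqrt(2.0))))

def expected_loss(im, fragility, state_costs):
    """Expected loss at intensity `im`: P(in state k) = P(exceed k) - P(exceed k+1)
    for damage states ordered from slight to severe."""
    p_exceed = [lognormal_cdf(im, m, b) for (m, b) in fragility] + [0.0]
    return sum((p_exceed[k] - p_exceed[k + 1]) * state_costs[k]
               for k in range(len(state_costs)))

# Hypothetical alternatives: (construction cost, fragility (median, beta) per damage
# state, loss per damage state) -- every number is a placeholder, not the paper's data.
alternatives = {
    "H":       (100.0, [(0.45, 0.5), (0.70, 0.5), (1.00, 0.6)], [5.0, 25.0, 120.0]),
    "A":       (110.0, [(0.40, 0.5), (0.65, 0.5), (0.95, 0.6)], [5.0, 25.0, 130.0]),
    "Y":       (105.0, [(0.42, 0.5), (0.68, 0.5), (0.98, 0.6)], [5.0, 25.0, 125.0]),
    "diamond": (108.0, [(0.44, 0.5), (0.69, 0.5), (1.00, 0.6)], [5.0, 25.0, 122.0]),
}

design_im = 0.6   # design-level intensity measure, e.g., Sa(T1) in g (placeholder)
for shape, (cost, frag, losses) in alternatives.items():
    loss = expected_loss(design_im, frag, losses)
    print(f"{shape:8s} construction {cost:6.1f} + expected loss {loss:6.1f} "
          f"= {cost + loss:6.1f}")
```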