• Title/Summary/Keyword: Large tables


Evaluating Join Performance on Relational Database Systems

  • Ordonez, Carlos;Garcia-Garcia, Javier
    • Journal of Computing Science and Engineering / v.4 no.4 / pp.276-290 / 2010
  • The join operator is fundamental in relational database systems. Evaluating join queries on large tables is challenging because records need to be efficiently matched based on a given key. In this work, we analyze SQL join queries on large tables in which a foreign key may be null, invalid, or valid, given a referential integrity constraint. We conduct an extensive join performance evaluation on three DBMSs. Specifically, we study join queries while varying table size, row size, and key probability distribution, inserting null, invalid, or valid foreign key values. We also benchmark three well-known query optimizations: view materialization, secondary index, and join reordering. Our experiments show that certain optimizations perform well across DBMSs, whereas others depend on the DBMS architecture.
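
As a rough illustration of the kind of join workload the abstract describes, the hedged Python sketch below builds a small fact/dimension pair in an in-memory SQLite database, inserts null, invalid, and valid foreign keys, adds a secondary index, and runs the join; the table names, proportions, and sizes are illustrative assumptions, not the paper's benchmark.

```python
# Hypothetical sketch: joining a fact table to a dimension table where the
# foreign key may be NULL, invalid (no matching key), or valid, as in the
# abstract above. Table and column names are illustrative, not the paper's.
import random
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dim (k INTEGER PRIMARY KEY, payload TEXT)")
cur.execute("CREATE TABLE fact (id INTEGER PRIMARY KEY, fk INTEGER, val REAL)")

cur.executemany("INSERT INTO dim VALUES (?, ?)",
                [(k, f"row{k}") for k in range(1000)])

rows = []
for i in range(10000):
    r = random.random()
    if r < 0.1:
        fk = None                        # null foreign key
    elif r < 0.2:
        fk = 10000 + i                   # invalid: no matching key in dim
    else:
        fk = random.randrange(1000)      # valid reference
    rows.append((i, fk, random.random()))
cur.executemany("INSERT INTO fact VALUES (?, ?, ?)", rows)

# Secondary index on the foreign key, one of the optimizations benchmarked.
cur.execute("CREATE INDEX idx_fact_fk ON fact(fk)")

cur.execute("SELECT COUNT(*) FROM fact JOIN dim ON fact.fk = dim.k")
print("matched rows:", cur.fetchone()[0])   # null/invalid keys drop out
```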

Successive Approximated Log Operation Circuit for SoftMax in CNN (CNN의 SoftMax 연산을 위한 연속 근사 방식의 로그 연산 회로)

  • Kang, Hyeong-Ju
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.2 / pp.330-333 / 2021
  • In a CNN for image classification, a SoftMax layer is usually placed at the end. The exponential and logarithmic operations in the SoftMax layer are not well suited to implementation in an accelerator circuit. These operations are usually implemented with look-up tables, and the exponential operation can be implemented with an iterative method. This paper proposes a successive approximation method for calculating a logarithm that removes a very large look-up table. By substituting two very small tables for the large table, the circuit size can be greatly reduced. The experimental results show that an 85% area reduction can be achieved with only a small degradation in accuracy.
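
The sketch below models, in Python, one common successive-approximation scheme for computing a base-2 logarithm bit by bit through repeated squaring; it is only a software illustration of the general idea and makes no claim to reproduce the paper's circuit or its two small tables.

```python
# Illustrative successive-approximation base-2 logarithm (software model of the
# general idea; the paper's hardware circuit and table layout are not shown).
import math


def log2_successive(x: float, bits: int = 16) -> float:
    assert x > 0.0
    # Normalize x to [1, 2): x = m * 2**e, so log2(x) = e + log2(m).
    e = 0
    while x >= 2.0:
        x /= 2.0
        e += 1
    while x < 1.0:
        x *= 2.0
        e -= 1
    # Extract the fractional bits of log2(m) one at a time by squaring.
    frac, weight = 0.0, 0.5
    for _ in range(bits):
        x *= x
        if x >= 2.0:
            frac += weight
            x /= 2.0
        weight /= 2.0
    return e + frac


print(log2_successive(300.0), math.log2(300.0))  # agree to a few decimals
```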

Construction Method of Time-dependent Origin-Destination Traffic Flow for Expressway Corridor Using Individual Real Trip Data (실제 통행기록 자료를 활용한 고속도로 Corridor 시간대별 O-D 구축)

  • Yu, Jeong Whon;Lee, Mu Young
    • KSCE Journal of Civil and Environmental Engineering Research / v.31 no.2D / pp.185-192 / 2011
  • More practical outputs and insights can be obtained through transportation analysis that considers time-dependent traffic movements. This study proposes a method of constructing time-dependent O-D trip tables for an expressway corridor using real-world individual trip data. In this study, time-dependent O-D trip tables for the nationwide highway network are constructed based on toll collection system data. The proposed methodology converts nationwide time-dependent O-D trip tables into Korean expressway corridor O-D trip tables in order to deal with the computational complexity of simulating a large-scale traffic network. The experimental results suggest that actual individual trip records can be used to construct time-dependent O-D trip tables effectively. They also imply that constructing time-dependent O-D trip tables for the national highway network, along with those for the Korean expressway corridor developed in this study, would make transportation analysis more practical and applicable to real-time traffic operation and control.
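
A minimal, hypothetical sketch of how time-dependent O-D trip tables can be aggregated from individual trip records; the column names, hourly binning, and pandas pivot are assumptions for illustration, not the paper's methodology.

```python
# Hypothetical sketch of building time-dependent O-D trip tables from
# individual toll records; column names and the hourly aggregation are
# assumptions, not the paper's exact procedure.
import pandas as pd

trips = pd.DataFrame({
    "entry_station": ["A", "A", "B", "C", "A", "B"],
    "exit_station":  ["B", "C", "C", "A", "B", "A"],
    "entry_time": pd.to_datetime([
        "2011-03-01 07:15", "2011-03-01 07:40", "2011-03-01 08:05",
        "2011-03-01 08:20", "2011-03-01 08:55", "2011-03-01 09:10",
    ]),
})

trips["hour"] = trips["entry_time"].dt.hour
od_by_hour = (trips
              .groupby(["hour", "entry_station", "exit_station"])
              .size()
              .unstack(fill_value=0))   # one O-D matrix slice per hour
print(od_by_hour)
```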

Servo control strategy for uni-axial shake tables using long short-term memory networks

  • Pei-Ching Chen;Kui-Xing Lai
    • Smart Structures and Systems / v.32 no.6 / pp.359-369 / 2023
  • Servo-motor-driven uniaxial shake tables have been widely used for education and research purposes in earthquake engineering. These shake tables are mostly displacement-controlled by a digital proportional-integral-derivative (PID) controller; however, accurate reproduction of acceleration time histories is not guaranteed. In this study, a control strategy is proposed and verified for uniaxial shake tables driven by a servo-motor. This strategy incorporates a deep-learning model, the Long Short-Term Memory (LSTM) network, into a displacement PID feedback controller. The LSTM controller is trained using a large amount of experimental data from a self-made servo-motor-driven uniaxial shake table. After training is completed, the LSTM controller is implemented to directly generate the command voltage for the servo motor that drives the shake table. Meanwhile, a displacement PID controller is tuned and implemented alongside the LSTM controller to prevent the shake table from drifting permanently. The control strategy is named the LSTM-PID control scheme. Experimental results demonstrate that the proposed LSTM-PID improves the acceleration tracking performance of the uniaxial shake table in both the bare condition and the condition loaded with a slender specimen.
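
The following Python sketch shows only the structural idea of combining a learned feed-forward command with a displacement PID correction; the gains, the time step, and the lstm_command stub are placeholders, not the authors' trained controller.

```python
# Structural sketch of the LSTM-PID idea: a learned model proposes the servo
# command and a displacement PID term corrects slow drift. All numbers and the
# lstm_command stub are placeholders, not the paper's tuned controller.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def lstm_command(target_accel_window):
    """Stand-in for a trained LSTM mapping a window of the target
    acceleration history to a command voltage."""
    return 0.0  # a trained network would be evaluated here


def control_step(pid, target_accel_window, target_disp, measured_disp):
    feedforward = lstm_command(target_accel_window)       # learned command
    correction = pid.update(target_disp - measured_disp)  # anti-drift term
    return feedforward + correction


pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=1 / 1024)
voltage = control_step(pid, target_accel_window=[0.0] * 32,
                       target_disp=0.01, measured_disp=0.008)
print(voltage)
```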

An Efficient Query Transformation for Multidimensional Data Views on Relational Databases (관계형 데이타베이스에서 다차원 데이타의 뷰를 위한 효율적인 질의 변환)

  • Shin, Sung-Hyun;Kim, Jin-Ho;Moon, Yang-Sae
    • Journal of KIISE: Databases / v.34 no.1 / pp.18-34 / 2007
  • In order to provide various business analysis methods, OLAP (On-Line Analytical Processing) systems represent their data with multidimensional structures. These multidimensional data are often delivered to users in the horizontal format of tables whose columns correspond to values of dimension attributes. Since horizontal tables may have a large number of columns, they cannot be stored directly in relational database systems. Furthermore, the tables are likely to have many null values (i.e., sparse tables). In order to manage horizontal tables efficiently, we can store them in a vertical table format, which keeps dimension attribute names in a column and thus transforms the columns of horizontal tables into rows. In this way, every query against a horizontal table has to be transformed into one against the vertical tables. This paper proposes a technique for transforming horizontal table queries into vertical table queries by utilizing not only traditional relational algebra operators but also the PIVOT operator that recent DBMS versions provide. To achieve this goal, we designed a relational algebra expression equivalent to the PIVOT operator and formally proved the equivalence. We then developed a transformation technique for horizontal table queries using the PIVOT operator. We also performed experiments to analyze the performance of the proposed method. The experimental results reveal that the proposed method performs better than existing methods.
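
For illustration, the hedged pandas sketch below pivots a vertical (row-per-attribute) table into a horizontal view; the column names are hypothetical, and the comment notes the standard GROUP BY/CASE form that such a PIVOT corresponds to in relational algebra, not the paper's specific transformation rules.

```python
# Illustrative pivot of a vertical (row-per-attribute) table into a horizontal
# (column-per-attribute) view; column names are hypothetical and pandas is used
# here instead of a DBMS PIVOT operator.
import pandas as pd

vertical = pd.DataFrame({
    "entity": ["e1", "e1", "e2", "e2", "e3"],
    "attr":   ["region", "year", "region", "year", "region"],
    "value":  ["east", "2006", "west", "2007", "east"],
})

# Relationally, this corresponds to grouping by the key and selecting, per
# attribute name, the value where attr matches that column, e.g.
#   GROUP BY entity with MAX(CASE WHEN attr='region' THEN value END), ...
horizontal = vertical.pivot(index="entity", columns="attr", values="value")
print(horizontal)   # missing attributes appear as NaN (the sparse nulls)
```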

Design and Implementation of a Efficient Storage Virtualization System based on Distributed Hash Tables (분산 해시 테이블 기반의 효율적인 저장 장치 가상화 시스템의 설계 및 구현)

  • Kim, Jong-Hyeon;Lee, Sang-Jun
    • Journal of Internet Computing and Services / v.10 no.3 / pp.103-112 / 2009
  • This paper proposes an efficient storage virtualization system that allows users to view the hard disk resources of numerous nodes as one large logical space using P2P distributed hash tables. The proposed system is developed at the device level of the Windows operating system and is suitable for users in intranet environments. For user convenience, the system is recognized as a single hard disk in Windows Explorer and does not require a supplementary client program at the application layer. In addition, it enhances security by blocking intrusions from external networks.
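
As a rough illustration of DHT-style placement, the sketch below maps virtual-disk block identifiers onto nodes with a simple hash ring (consistent hashing); the node names and hash function are assumptions, and the paper's Windows device-level implementation is not reproduced.

```python
# Minimal hash-ring (consistent hashing) sketch for placing virtual-disk
# blocks on peer nodes, one common way to realize DHT-based placement.
# Node names and the hash choice are illustrative assumptions.
import bisect
import hashlib


def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)


class HashRing:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)
        self.points = [p for p, _ in self.ring]

    def node_for(self, block_id: str) -> str:
        # First node clockwise from the block's hash position on the ring.
        i = bisect.bisect(self.points, h(block_id)) % len(self.ring)
        return self.ring[i][1]


ring = HashRing(["node-a", "node-b", "node-c"])
for block in ["disk0/block/0", "disk0/block/1", "disk0/block/2"]:
    print(block, "->", ring.node_for(block))
```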


Design tables and charts for uniform and non-uniform tuned liquid column dampers in harmonic pitching motion

  • Wu, Jong-Cheng;Wang, Yen-Po;Chen, Yi-Hsuan
    • Smart Structures and Systems / v.9 no.2 / pp.165-188 / 2012
  • In the first part of the paper, the optimal design parameters for tuned liquid column dampers (TLCDs) in harmonic pitching motion were investigated. The configurations in the design tables include uniform and non-uniform TLCDs with cross-sectional ratios of 0.3, 0.6, 1, 2 and 3 for design in different situations. A closed-form solution of the structural response was used to perform the numerical optimization. The optimization results indicate that the optimal structural response always occurs when the two resonant peaks along the frequency axis are equal. The optimal frequency tuning ratio, optimal head loss coefficient, the corresponding response and other useful quantities are assembled in design tables as a guideline for practitioners. As the value of the head loss coefficient is only available through experiments, the second part of the paper proposes a design chart for predicting head loss coefficients, based on a series of large-scale tests in pitching base motion, so that designers who cannot run experiments still have head loss information available. A wide range of TLCDs with cross-sectional ratios of 0.3, 0.6, 1, 2 and 3 and orifice blocking ratios of 0%, 20%, 40%, 60% and 80% was examined by means of a closed-form solution under harmonic base motion for identification. For convenience in practical use, the corresponding empirical formulas for predicting TLCD head loss coefficients as functions of the cross-sectional ratio and the orifice blocking ratio were also proposed. As supplemental information for horizontal base motion, the relation of head loss values versus blocking ratios and the corresponding empirical formulas were also presented at the end.

Design of Low Cost H.264/AVC Entropy Coding Unit Using Code Table Pattern Analysis (코드 테이블 패턴 분석을 통한 저비용 H.264/AVC 엔트로피 코딩 유닛 설계)

  • Song, Sehyun;Kim, Kichul
    • Journal of IKEEE / v.17 no.3 / pp.352-359 / 2013
  • This paper proposes an entropy coding unit for the H.264/AVC baseline profile. Entropy coding requires code tables for macroblock encoding, and there are patterns in the codewords of each code table. In this paper, the patterns between codewords are analyzed to reduce the hardware cost. The entropy coding unit consists of an Exp-Golomb unit and a CAVLC unit. The Exp-Golomb unit can process five code types in a single unit and can perform Exp-Golomb processing using only two adders. While typical CAVLC units use various code tables that require large amounts of resources, the proposed CAVLC unit exploits relationships between table elements to reduce the table sizes to about 40% or less of those in typical units. After the Exp-Golomb unit and the CAVLC unit generate code values, the entropy coding unit uses a small shifter for bit-stream generation, whereas typical methods use barrel shifters.
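
For reference, the short Python model below encodes the standard order-0 Exp-Golomb code ue(v) used in H.264; the paper's hardware unit additionally maps several code types onto one core and compresses the CAVLC tables, which is not shown here.

```python
# Software reference model of the order-0 Exp-Golomb code ue(v) from H.264
# entropy coding (the paper's low-cost hardware mapping is not reproduced).
def exp_golomb_ue(v: int) -> str:
    assert v >= 0
    code = bin(v + 1)[2:]           # binary representation of v + 1
    prefix = "0" * (len(code) - 1)  # leading zeros, one fewer than suffix bits
    return prefix + code


for v in range(6):
    print(v, exp_golomb_ue(v))
# 0 -> 1, 1 -> 010, 2 -> 011, 3 -> 00100, 4 -> 00101, 5 -> 00110
```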

Large tests of independence in incomplete two-way contingency tables using fractional imputation

  • Kang, Shin-Soo;Larsen, Michael D.
    • Journal of the Korean Data and Information Science Society / v.26 no.4 / pp.971-984 / 2015
  • Imputation procedures fill in missing values, thereby enabling complete-data analyses. Fully efficient fractional imputation (FEFI) and multiple imputation (MI) create multiple versions of the missing observations, thereby reflecting uncertainty about their true values. Methods have been described for hypothesis testing with multiple imputation. Fractional imputation assigns weights to the observed data to compensate for missing values. The focus of this article is the development of tests of independence using FEFI for partially classified two-way contingency tables. Wald and deviance tests of independence under FEFI are proposed. Simulations are used to compare type I error rates and power. The partially observed marginal information is useful for estimating the joint distribution of cell probabilities, but it is not useful for testing association. FEFI compares favorably to other methods in the simulations.
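
As a point of reference, the sketch below computes the complete-data deviance (G²) test of independence for a two-way table; the FEFI weighting for partially classified cells described in the abstract is not implemented.

```python
# Complete-data deviance (G^2) test of independence for a two-way contingency
# table, as a baseline for the FEFI-based tests described above; the fractional
# imputation weights for partially classified cells are not implemented here.
import numpy as np
from scipy.stats import chi2

observed = np.array([[30, 10, 20],
                     [20, 25, 15]], dtype=float)

row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()          # fitted counts under independence

g2 = 2.0 * np.sum(observed * np.log(observed / expected))
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
print("G2 =", g2, "df =", df, "p =", chi2.sf(g2, df))
```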