• Title/Summary/Keyword: Computation cost

Search Results: 647

A Tree-structured XPath Query Reduction Scheme for Enhancing XML Query Processing Performance (XML 질의의 수행성능 향상을 위한 트리 구조 XPath 질의의 축약 기법에 관한 연구)

  • Lee, Min-Soo; Kim, Yun-Mi; Song, Soo-Kyung
    • The KIPS Transactions: PartD / v.14D no.6 / pp.585-596 / 2007
  • XML data generally has a hierarchical tree structure, which is reflected in the mechanisms used to store and retrieve it. Therefore, when storing XML data in a database, the hierarchical relationships among the XML elements are taken into consideration during restructuring and storage. Also, in order to support user search queries, a mechanism is needed to compute the hierarchical relationships between the element structures specified by a query. The structural join operation is one solution to this problem and is an efficient method for computing hierarchical relationships in a database based on a node numbering scheme. However, processing a tree-structured XML query that contains complex nested hierarchical relationships still requires multiple structural joins, which leads to a high query execution cost. Therefore, in this paper we provide a preprocessing mechanism that effectively reduces the cost of multiple nested structural joins by applying the concept of equivalence classes, and we suggest a query path reduction algorithm that shortens path queries expressed as regular expressions. The mechanism is especially devised to reduce path queries containing branch nodes. The experimental results show that the proposed algorithm can reduce the time required to process path queries to 1/3 of the original execution time.
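
As a rough illustration of the idea (not the paper's actual algorithm), the sketch below collapses consecutive XPath location steps whose element tags fall into the same structural equivalence class, so fewer structural joins remain to be evaluated. The tag-to-class map and the sample query are invented for illustration; the published reduction also handles branch nodes.

```python
# Hypothetical sketch: collapse consecutive location steps that belong to the
# same structural equivalence class, reducing the number of structural joins.
def reduce_path(path, eq_class):
    """Collapse runs of steps whose tags share an equivalence class."""
    steps = [s for s in path.split("/") if s]
    reduced = []
    for step in steps:
        cls = eq_class.get(step, step)  # unknown tags form their own class
        if reduced and eq_class.get(reduced[-1], reduced[-1]) == cls:
            reduced[-1] = step          # keep only the last step of the run
        else:
            reduced.append(step)
    return "/" + "/".join(reduced)

if __name__ == "__main__":
    # Invented equivalence classes: 'book' and 'chapter' always nest together.
    eq_class = {"book": "c1", "chapter": "c1", "section": "c2", "title": "c3"}
    print(reduce_path("/book/chapter/section/title", eq_class))
    # -> /chapter/section/title (one structural join saved)
```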

Monitoring-Based Secure Data Aggregation Protocol against a Compromised Aggregator in Wireless Sensor Networks (무선 센서 네트워크에서 Compromised Aggregator에 대응을 위한 모니터링 기반 시큐어 데이터 병합 프로토콜)

  • Anuparp, Boonsongsrikul; Lhee, Kyung-Suk; Park, Seung-Kyu
    • The KIPS Transactions: PartC / v.18C no.5 / pp.303-316 / 2011
  • Data aggregation is important in wireless sensor networks. However, it also introduces many security problems, one of which is that a compromised node may inject false data or drop a message during data aggregation. Most existing solutions rely on encryption, which requires high computation and communication cost, and they can only detect the occurrence of an attack without identifying the attacking node. This makes sensor nodes waste their energy sending false data if attacks occur repeatedly. One existing work can identify the location of a false data injection attack, but it has the limitation that at most 50% of the total sensor nodes can participate in data transmission. Therefore, a novel approach is required that can identify an attacker and also increase the number of nodes that participate in data transmission. In this paper, we propose a monitoring-based secure data aggregation protocol to defend against a compromised aggregator that injects false data or drops a message. The proposed protocol consists of aggregation tree construction and secure data aggregation. In secure data aggregation, we integrate abnormal data detection with monitoring and a minimal cryptographic technique. The simulation results show that the proposed protocol increases the number of nodes participating in data transmission to 95% of the total nodes. The proposed protocol can also identify the location of a compromised node that injects false data or drops a message. The communication overhead for tracing back the location of a compromised node is O(n), where n is the total number of nodes, and this cost is the same as or better than that of other existing solutions.
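
To make the monitoring idea concrete, here is a minimal, hypothetical sketch: a monitor node that overhears both the children's readings and the aggregator's reported value re-computes the aggregate and flags the aggregator when the report deviates beyond a tolerance. Node IDs, readings, and the threshold are invented; the actual protocol additionally uses aggregation tree construction and lightweight cryptography.

```python
# Hypothetical monitor-node check against a compromised aggregator.
def monitor_check(child_readings, reported_aggregate, tolerance=0.0):
    """Recompute the sum from overheard child data and compare with the report."""
    expected = sum(child_readings.values())
    deviation = abs(reported_aggregate - expected)
    return deviation <= tolerance, expected, deviation

if __name__ == "__main__":
    readings = {"n1": 21.5, "n2": 22.0, "n3": 21.8}       # overheard child data
    ok, expected, dev = monitor_check(readings, 80.0)     # aggregator reports false data
    if not ok:
        print(f"aggregator flagged: reported 80.0, expected {expected:.1f} (dev {dev:.1f})")
```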

Advanced Optimization of Reliability Based on Cost Factor and Deploying On-Line Safety Instrumented System Supporting Tool (비용 요소에 근거한 신뢰도 최적화 및 On-Line SIS 지원 도구 연구)

  • Lulu, Addis; Park, Myeongnam; Kim, Hyunseung; Shin, Dongil
    • Journal of the Korean Institute of Gas / v.21 no.2 / pp.32-40 / 2017
  • Safety Instrumented Systems (SIS) have a wide application area. They are of vital importance at process plants to detect the onset of hazardous events, for instance a release of some hazardous material, and to mitigate their consequences to humans, material assets, and the environment. Integrated safety systems, in which electrical, electronic, and/or programmable electronic (E/E/PE) devices interact with mechanical, pneumatic, and hydraulic systems, are governed by international safety standards such as IEC 61508. IEC 61508 organises its requirements according to a Safety Life Cycle (SLC). Fulfilling these requirements by following the SLC can be complex without the aid of SIS supporting tools. This paper presents a simple SIS supporting tool that can greatly help the user implement the design phase of the safety life cycle. The tool is implemented as an Android application that can be integrated with a Web-based data reading and modifying system. It can reduce the computation time spent on the design phase of the SLC and reduce the possible errors that can arise in the process. In addition, this paper presents an optimization approach for SISs based on cost measures. A multi-objective genetic algorithm is used for the optimization to search for the best combinations of solutions without enumerating the entire solution space.
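
The sketch below illustrates the underlying cost-versus-reliability trade-off with a much simpler method than the paper's multi-objective genetic algorithm: it samples redundancy levels for two SIS subsystems at random and keeps the Pareto-optimal (cost, PFD) designs. The failure rates, proof-test interval, unit costs, and the simplified 1oo-N PFD approximation are assumptions made purely for illustration.

```python
# Hypothetical cost/PFD Pareto filter over randomly sampled redundancy designs.
import random

LAMBDA_DU = [2e-6, 5e-6]      # dangerous undetected failure rate per hour, per subsystem
TEST_INTERVAL = 8760.0        # proof-test interval in hours (one year)
UNIT_COST = [10.0, 6.0]       # relative cost of one channel in each subsystem

def evaluate(design):
    """Return (total cost, overall PFD) for a tuple of channel counts."""
    cost = sum(n * c for n, c in zip(design, UNIT_COST))
    pfd = sum((lam * TEST_INTERVAL / 2.0) ** n for lam, n in zip(LAMBDA_DU, design))
    return cost, pfd

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

if __name__ == "__main__":
    random.seed(0)
    candidates = {tuple(random.randint(1, 4) for _ in range(2)) for _ in range(50)}
    scored = {d: evaluate(d) for d in candidates}
    front = [d for d in scored if not any(dominates(scored[o], scored[d]) for o in scored)]
    for d in sorted(front):
        cost, pfd = scored[d]
        print(f"channels {d}: cost={cost:.1f}, PFD={pfd:.2e}")
```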

A Storage and Computation Efficient RFID Distance Bounding Protocol (저장 공간 및 연산 효율적인 RFID 경계 결정 프로토콜)

  • Ahn, Hae-Soon; Yoon, Eun-Jun; Bu, Ki-Dong; Nam, In-Gil
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.9B / pp.1350-1359 / 2010
  • Recently, many researchers have shown that general RFID systems for proximity authentication are vulnerable to various location-based relay attacks such as distance fraud, mafia fraud, and terrorist fraud attacks. A distance-bounding protocol is used to prevent relay attacks by measuring the round-trip time of single challenge-response bits. In 2008, Munilla and Peinado proposed an improved distance-bounding protocol that applies a void-challenge technique based on Hancke and Kuhn's protocol. Compared with Hancke and Kuhn's protocol, Munilla and Peinado's protocol is more secure because the success probability of an adversary is $(5/8)^n$. However, Munilla and Peinado's protocol is inefficient for low-cost passive RFID tags because it requires large storage space and many hash function computations. Thus, this paper proposes a new RFID distance-bounding protocol for low-cost passive RFID tags that reduces both the storage space and the number of hash function computations. As a result, the proposed distance-bounding protocol not only provides storage and computational efficiency, but also provides strong security against relay attacks, because the adversary's success probability can be reduced to $(5/8)^n$.
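
A small numeric illustration of the security claim: with n challenge-response rounds, the adversary succeeds with probability $(5/8)^n$ against the void-challenge protocols discussed here, compared with the commonly cited $(3/4)^n$ for the original Hancke-Kuhn protocol. The choice of round counts below is arbitrary.

```python
# Adversary success probability versus number of rapid-bit-exchange rounds n.
for n in (16, 32, 64):
    hk = 0.75 ** n      # Hancke-Kuhn baseline, (3/4)^n
    void = 0.625 ** n   # void-challenge variant, (5/8)^n
    print(f"n={n:3d}: Hancke-Kuhn (3/4)^n = {hk:.3e}, void-challenge (5/8)^n = {void:.3e}")
```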

Design of Naphtha Cracker Gas Splitter Process in Petlyuk Column (납사열분해 가스분리공정에서의 Petlyuk Column 설계)

  • Lee, Ju Yeong
    • Journal of the Korean Institute of Gas / v.24 no.1 / pp.17-22 / 2020
  • Light naphtha is distilled from the crude oil unit and separated sequentially into methane, ethylene, and propylene by boiling point difference. This separation is conducted using a series of binary-like columns. In this separation method, it is known that the energy consumed in the reboiler is used to separate the heaviest components, and that most of this energy is discarded as vapor condensation in the overhead cooler. In this study, the first two columns of the separation process are replaced with a Petlyuk column. A structural design was carried out by stage-by-stage computation with ideal tray efficiency under equilibrium conditions. Compared with the performance of a conventional 3-column system, the design outcome shows that the procedure is simple and efficient because the composition of the liquid component on each column tray was designed to be similar to the equilibrium distillation curve. The performance of the new process indicates that an energy saving of 12.1% is obtained and a cost saving of 44 million won per day, based on gross domestic product, is achieved with the same total number of trays, while the initial investment cost is also reduced.

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.975-976 / 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which involve two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D Then Do E, and Then Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B Then Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules through the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a small set of programs; modifying such an embedded processor to create an embedded processor for fuzzy control is therefore very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches, even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes, and the ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

Table I. Inference time using 51 rules:

                      MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences     125 s                  49 s                        0.0038 s
  1 inference         20.8 ms                8.2 ms                      6.4 μs
  FLIPS               48                     122                         156,250
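To make the max-min mechanism concrete, here is a small, self-contained Python sketch of Mamdani-style max-min inference with centroid defuzzification; it shows why min and max dominate the inner loops and hence why dedicated min/max instructions pay off (the table above suggests roughly a 2.5x speed-up, 125 s versus 49 s). The two rules and the 8-element membership arrays are invented, and this is of course a software illustration of what the chips do in hardware.

```python
# Illustrative max-min compositional inference with centroid defuzzification.
def mamdani_infer(rules, inputs):
    """rules: list of (antecedent_sets, consequent_set); inputs: crisp element indices."""
    n = len(rules[0][1])
    aggregated = [0.0] * n
    for antecedents, consequent in rules:
        # firing strength = min over antecedent memberships (fuzzy AND)
        strength = min(fs[x] for fs, x in zip(antecedents, inputs))
        # clip the consequent and aggregate with max (fuzzy OR)
        aggregated = [max(a, min(strength, c)) for a, c in zip(aggregated, consequent)]
    # centroid defuzzification over the discrete universe 0..n-1
    total = sum(aggregated)
    return sum(i * m for i, m in enumerate(aggregated)) / total if total else 0.0

if __name__ == "__main__":
    low  = [1.0, 0.8, 0.5, 0.2, 0.0, 0.0, 0.0, 0.0]
    high = [0.0, 0.0, 0.0, 0.2, 0.5, 0.8, 1.0, 1.0]
    rules = [((low, low), low), ((high, high), high)]   # IF A low AND B low THEN E low, etc.
    print(f"defuzzified output: {mamdani_infer(rules, (1, 1)):.2f}")
```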


Scaling up of single fracture using a spectral analysis and computation of its permeability coefficient (스펙트럼 분석을 응용한 단일 균열 규모확장과 투수계수 산정)

  • 채병곤
    • The Journal of Engineering Geology / v.14 no.1 / pp.29-46 / 2004
  • It is important to identify the geometries of fractures that act as conduits of fluid flow in order to characterize groundwater flow in fractured rock. Fracture geometries control the hydraulic conductivity and stream lines in a rock mass. However, it is difficult to acquire the complete geometric data of fractures at field scale because outcrops are discontinuously distributed and subsurface data cannot be collected continuously. Therefore, a method is needed to describe the whole feature of a target fracture geometry. This study suggests a new approach, based on the Fourier transform, to characterize the whole feature of a target fracture geometry. After sampling specimens along a target fracture from borehole cores, effective frequencies among the roughness components were selected by the Fourier transform of each specimen. The selected effective frequencies were then averaged at each frequency. Because the averaged spectrum includes the frequency profiles of every specimen, it represents the characteristic components of the fracture roughness of the target fracture. The inverse Fourier transform is then conducted to reconstruct an averaged whole roughness feature after low-pass filtering. The reconstructed roughness feature shows the representative roughness of the target subsurface fracture, including the geometrical characteristics of each specimen; in other words, it is the overall roughness feature obtained by scaling up the fracture. In order to identify the characteristics of permeability coefficients along the target fracture, fracture models were constructed based on the reconstructed roughness feature. The computation of the permeability coefficient was performed by homogenization analysis, which can calculate accurate permeability coefficients with full consideration of the fracture geometry. The results lie in the range of $10^{-4}$ to $10^{-3}$ cm/sec, which are reasonable values for the permeability coefficient along a large fracture. This approach can be effectively applied to the analysis of permeability characteristics along a large fracture, as well as to the identification of the whole feature of a fracture at field scale.
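
The spectral-averaging step can be sketched in a few lines, assuming NumPy is available: take the FFT of several roughness profiles sampled along the same fracture, average the spectra, keep only the low-frequency components, and inverse-transform to obtain a representative profile. The synthetic profiles and the cutoff index are invented for illustration; the study works with borehole-core specimens and then feeds the reconstructed geometry into a homogenization analysis for permeability.

```python
# Hypothetical spectral averaging and low-pass reconstruction of fracture roughness.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_specimens, cutoff = 256, 4, 20

x = np.linspace(0.0, 1.0, n_points)
profiles = [
    0.5 * np.sin(2 * np.pi * 3 * x) + 0.1 * rng.standard_normal(n_points)
    for _ in range(n_specimens)
]

# Average the complex spectra of all specimens, frequency by frequency.
mean_spectrum = np.mean([np.fft.rfft(p) for p in profiles], axis=0)

# Low-pass filter: drop everything above the cutoff index.
mean_spectrum[cutoff:] = 0.0

# Inverse FFT gives the representative (scaled-up) roughness profile.
representative = np.fft.irfft(mean_spectrum, n=n_points)
print("reconstructed profile (first samples):", representative[:5].round(3))
```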

A stratified random sampling design for paddy fields: Optimized stratification and sample allocation for effective spatial modeling and mapping of the impact of climate changes on agricultural system in Korea (농지 공간격자 자료의 층화랜덤샘플링: 농업시스템 기후변화 영향 공간모델링을 위한 국내 농지 최적 층화 및 샘플 수 최적화 연구)

  • Minyoung Lee; Yongeun Kim; Jinsol Hong; Kijong Cho
    • Korean Journal of Environmental Biology / v.39 no.4 / pp.526-535 / 2021
  • Spatial sampling design plays an important role in GIS-based modeling studies because it increases modeling efficiency while reducing the cost of sampling. In the field of agricultural systems, research demand for high-resolution spatial data-based modeling to predict and evaluate climate change impacts is growing rapidly. Accordingly, the need for and importance of spatial sampling design are increasing. The purpose of this study was to design a spatial sampling of paddy fields (11,386 grids with 1 km spatial resolution) in Korea for use in agricultural spatial modeling. A stratified random sampling design was developed and applied for the 2030s, 2050s, and 2080s under two RCP scenarios (4.5 and 8.5). Twenty-five weather and four soil characteristics were used as stratification variables. Stratification and sample allocation were optimized to ensure a minimum sample size under given precision constraints for 16 target variables such as crop yield, greenhouse gas emission, and pest distribution. The precision and accuracy of the sampling were evaluated through sampling simulations based on the coefficient of variation (CV) and relative bias, respectively. As a result, the paddy fields could be optimally divided into 5 to 21 strata with 46 to 69 samples. The evaluation results showed that the target variables were within the precision constraints (CV < 0.05, except for crop yield) with low bias values (below 3%). These results can contribute to reducing sampling cost and computation time while retaining high predictive power. The samples are expected to be widely used as representative grids in various agricultural spatial modeling studies.
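
As a minimal sketch of the allocation step, the code below applies Neyman allocation, one common way to split a fixed sample across strata in proportion to stratum size and variability. The study's actual optimization jointly chooses strata boundaries and the minimum sample size for many target variables under CV constraints; the stratum sizes, standard deviations, and total sample size here are invented for illustration (the sizes merely sum to the 11,386 grids mentioned above).

```python
# Hypothetical Neyman allocation of a fixed sample across strata.
def neyman_allocation(stratum_sizes, stratum_sds, total_sample):
    """Allocate samples proportional to N_h * S_h within each stratum h."""
    weights = [n * s for n, s in zip(stratum_sizes, stratum_sds)]
    total_w = sum(weights)
    return [max(1, round(total_sample * w / total_w)) for w in weights]

if __name__ == "__main__":
    sizes = [4200, 3100, 2500, 1586]   # grid cells per stratum (sums to 11,386)
    sds   = [0.8, 1.5, 0.6, 2.1]       # within-stratum SD of a target variable
    print(neyman_allocation(sizes, sds, total_sample=60))
```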

Topology Design Optimization and Experimental Validation of Heat Conduction Problems (열전도 문제에 관한 위상 최적설계의 실험적 검증)

  • Cha, Song-Hyun; Kim, Hyun-Seok; Cho, Seonho
    • Journal of the Computational Structural Engineering Institute of Korea / v.28 no.1 / pp.9-18 / 2015
  • In this paper, we verify the optimal topology design for steady-state heat conduction problems, which is obtained numerically using the adjoint design sensitivity analysis (DSA) method. In the adjoint variable method (AVM), the already factorized system matrix is reused to obtain the adjoint solution, so the additional computation cost of the sensitivity is trivial. For the topology optimization, the design variables are parameterized into normalized bulk material densities. The objective function and the constraint are the thermal compliance of the structure and the allowable volume, respectively. For the experimental validation of the optimal topology design, we compare the results with those of a design that has an identical volume but was designed intuitively, using a thermal imaging camera. To manufacture the optimal design, we apply a simple numerical method to convert it into point cloud data and perform CAD modeling using commercial reverse engineering software. Based on the CAD model, we manufacture the optimal topology design by CNC machining.
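
A minimal sketch of why the adjoint approach is cheap here, assuming NumPy and SciPy are available: the conduction system K T = f is factorized once, and the same factorization is reused for the adjoint solve, so each sensitivity costs only a back-substitution. The tiny 3x3 "conductivity" matrix, load vector, and dK/dx below are invented; for a thermal compliance objective J = f^T T, the adjoint load is simply f.

```python
# Hypothetical adjoint sensitivity of thermal compliance, reusing one LU factorization.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])       # assembled conductivity matrix (invented)
f = np.array([1.0, 0.5, 0.25])           # thermal load vector (invented)

lu, piv = lu_factor(K)                   # factorize once (the expensive step)
T = lu_solve((lu, piv), f)               # state solve: K T = f

# Thermal compliance J = f^T T; its adjoint equation is K^T lam = dJ/dT = f
lam = lu_solve((lu, piv), f, trans=1)    # adjoint solve reuses the factorization

# Sensitivity w.r.t. a design variable x with a known dK/dx (made up here):
dK_dx = np.diag([0.1, 0.0, 0.0])
dJ_dx = -lam @ dK_dx @ T                 # standard adjoint sensitivity expression
print(f"J = {f @ T:.4f},  dJ/dx = {dJ_dx:.4f}")
```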

An Empirical Study on Differential factors of Accounting Information (회계정보의 차별적 요인에 관한 실증연구)

  • Oh Sung-Geun; Kim Hyun-Ki
    • Management & Information Systems Review / v.12 / pp.137-160 / 2003
  • The association between accounting earnings and the stock price of an entity is the subject that has been most heavily researched in the accounting literature during the past 25 years. Researchers' common finding is that there is a positive relationship between accounting earnings and stock prices. However, the explanatory power of accounting earnings, as measured by the $R^2$ of the regression functions used, was rather low. In connection with these low results, prior studies propose additional information and errors in variables as explanations. This study empirically investigates the determinants of earnings response coefficients (ERCs), which measure the correlation between earnings and stock prices, using earnings level/change as the dependent variable in the return/earnings regression. Specifically, the thesis tests factors such as earnings persistence, growth, systematic risk, image, information asymmetry, and firm size, and the determinant variables of the ERC are explained in detail. The image and information asymmetry variables are selected from the additional-information standpoint, while the debt and growth variables are selected in connection with errors in variables. The sample of firms listed on the Korean Stock Exchange was drawn from KIS-DATA and was required to meet the following criteria: (1) annual accounting earnings were available over the 1986-1999 period on KIS-FAS to allow computation of the variable parameters; (2) sufficient return data for estimation of market model parameters were available from the KIS-SMAT monthly returns; (3) each firm had a fiscal year ending in December throughout the study period. Implementation of these criteria yielded a sample of 1,141 firm-year observations over the 10-year (1990-1999) period. A conventional regression specification would use stock returns (abnormal returns) as the dependent variable and accounting earnings (unexpected earnings) changes interacted with other factors as independent variables. In this study, I examined the relation between these other factors and the ERC by using reverse regression. For the empirical test, eight hypotheses (including six sub-hypotheses) were tested. The results of the empirical analysis can be summarized as follows. First, the relationship between earnings persistence and the ERC is significant, which accords with prior studies. Second, the relationship between growth and the ERC is not significant. Third, the relationship between image and the ERC is significant, but the predicted sign is not observed; this suggests that image cost does not act to increase market share but is used to prevent a decrease in market occupancy. Fourth, the relationship between the information asymmetry variable and the ERC is significant. Fifth, the relationship between systematic risk $(\beta)$ and the ERC is not significant. Sixth, the relationship between the debt ratio and the ERC is significant, but the predicted sign is not observed; this is judged to be due to the financial leverage effect and interest-related tendencies.
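
For readers unfamiliar with the interaction-style return/earnings specification behind ERC determinant studies, here is a minimal, hypothetical sketch on simulated data: the ERC is allowed to vary with a firm characteristic by interacting unexpected earnings with that factor, and the coefficients are recovered by ordinary least squares. The data, the single determinant, and the coefficient values are invented; the study itself estimates the reversed specification (earnings regressed on returns) on KIS data.

```python
# Hypothetical interaction regression: returns on unexpected earnings and an
# earnings x determinant interaction, estimated by OLS on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 500
ue = rng.normal(0.0, 0.05, n)              # unexpected earnings (simulated)
persistence = rng.uniform(0.0, 1.0, n)     # one ERC determinant (e.g., persistence)
returns = 1.0 * ue + 2.0 * ue * persistence + rng.normal(0.0, 0.05, n)

X = np.column_stack([np.ones(n), ue, ue * persistence])
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
print(f"base ERC = {beta[1]:.2f}, incremental ERC per unit persistence = {beta[2]:.2f}")
```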
