• Title/Summary/Keyword: data structure

Efficient Continuous Skyline Query Processing Scheme over Large Dynamic Data Sets

  • Li, He; Yoo, Jaesoo
    • ETRI Journal, v.38 no.6, pp.1197-1206, 2016
  • Performing continuous skyline queries over dynamic data sets is increasingly challenging as data sets grow and become more volatile due to frequent dynamic updates. Although previous work supports such queries, its efficiency is restricted to small or uniformly distributed data sets. In a production database with many concurrent queries, executing continuous skyline queries degrades query performance because updates must acquire exclusive locks, possibly blocking other query threads, and the computational cost therefore increases. To minimize this cost, we propose a method based on a multi-layer grid structure. First, the relational data objects of an initial data set are processed to obtain the corresponding multi-layer grid structure and the skyline influence regions over the data. Then, dynamic data are processed only when they fall within the skyline influence regions, so a large amount of computation can be pruned by the proposed multi-layer grid structure. A performance evaluation on a variety of data sets confirms the efficiency of the proposed method.
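
A minimal Python sketch of the pruning idea, assuming a two-dimensional min-skyline: a new point is re-evaluated only if no current skyline point dominates it, i.e. it falls inside the skyline influence region. The paper's multi-layer grid, which organizes this test over grid cells, is not reproduced here.

    def dominates(a, b):
        # a dominates b when it is no worse in every dimension and better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def skyline(points):
        # naive skyline: keep the points not dominated by any other point
        return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

    class InfluenceFilter:
        """Prune dynamic updates that fall outside the skyline influence region."""
        def __init__(self, initial_points):
            self.sky = skyline(initial_points)

        def on_update(self, p):
            if any(dominates(s, p) for s in self.sky):
                return False                      # outside the influence region: prune
            self.sky = skyline(self.sky + [p])    # otherwise the skyline must be updated
            return True

    f = InfluenceFilter([(3, 7), (5, 4), (9, 1)])
    print(f.on_update((6, 6)))   # dominated by (5, 4): pruned, skyline unchanged
    print(f.on_update((2, 2)))   # not dominated: enters the skyline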

Estimation of Material Requirement of Piping Materials in an Offshore Structure using Big Data Analysis (빅데이터 분석을 이용한 해양 구조물 배관 자재의 소요량 예측)

  • Oh, Min-Jae; Roh, Myung-Il; Park, Sung-Woo; Kim, Seong-Hoon
    • Journal of the Society of Naval Architects of Korea, v.55 no.3, pp.243-251, 2018
  • In the shipyard, a great deal of data is generated, stored, and managed during the design, construction, and operation phases of ships and offshore structures. However, it is difficult to handle such big data efficiently with existing data-handling technologies. As big data technology has developed, the ship and offshore industries have started to mine their existing data for valuable information. In this paper, a method for estimating the material requirements of offshore structure piping using big data analysis is proposed. A big data platform for data analysis in the shipyard is introduced and applied to material requirement estimation to address the problems piping designers face. A regression model is developed from the big data on piping materials and verified against existing data. This analysis can help a piping designer estimate material requirements accurately and schedule purchases.
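
A hedged sketch of the kind of regression the abstract describes; the feature names and numbers below (pipe length, spool count, pressure class) are illustrative assumptions, not the paper's actual variables or data.

    import numpy as np

    # Hypothetical historical records: one row per past project
    # [pipe length (m), number of spools, design pressure class]; target = tons of pipe material
    X = np.array([[1200,  85, 150],
                  [2400, 160, 300],
                  [1800, 120, 150],
                  [3000, 210, 300]], dtype=float)
    y = np.array([95.0, 190.0, 140.0, 240.0])

    # Multiple linear regression by ordinary least squares: y ~ X @ b + c
    A = np.hstack([X, np.ones((X.shape[0], 1))])       # append an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Estimate the requirement for a new design
    new_design = np.array([2100, 140, 300, 1.0])
    print("estimated material requirement (tons):", new_design @ coef)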

Design of modified Feistel structure for high-capacity and high speed achievement (대용량 고속화 수행을 위한 변형된 Feistel 구조 설계에 관한 연구)

  • Lee Seon-Keun; Jung Woo-Yeol
    • Journal of the Korea Society of Computer and Information, v.10 no.3 s.35, pp.183-188, 2005
  • Parallel processing is difficult in block cipher algorithms because the Feistel structure, the basis of most block ciphers, is inherently sequential. This paper modifies that sequential structure so that the Feistel structure can be processed in parallel, and applies the modified structure to design a DES with a parallel Feistel structure. Because their structure prevents pipelining, block ciphers such as DES have had to trade off data processing speed against data security; the proposed parallel Feistel structure can greatly improve their performance. Applying the proposed approach to ciphers such as SEED, AES's Rijndael, and Twofish is therefore expected to yield better security and higher processing speed than at present.
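
For reference, a textbook Feistel network in Python. It shows why the structure is inherently sequential (each round consumes the previous round's output), which is the limitation the paper's parallel modification targets; the modified parallel structure itself is not reproduced here, and the round function F is a toy stand-in, not DES's.

    def F(half, subkey):
        # toy round function standing in for DES's F (expansion, S-boxes, permutation)
        return (half * 0x9E3779B1 ^ subkey) & 0xFFFFFFFF

    def feistel_encrypt(block64, subkeys):
        """Classic Feistel network: rounds must run one after another because each
        round's input is the previous round's output, which is what blocks pipelining."""
        L, R = block64 >> 32, block64 & 0xFFFFFFFF
        for k in subkeys:
            L, R = R, L ^ F(R, k)
        return (R << 32) | L          # final swap, as in DES

    def feistel_decrypt(block64, subkeys):
        # decryption is the same network with the subkeys in reverse order
        return feistel_encrypt(block64, list(reversed(subkeys)))

    keys = [0x0F1E2D3C + i for i in range(16)]
    ct = feistel_encrypt(0x0123456789ABCDEF, keys)
    assert feistel_decrypt(ct, keys) == 0x0123456789ABCDEF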

An Empirical Analysis on Urban Consumption Structure in Shandong Province, China

  • Gao, Jian
    • Asian Journal of Business Environment, v.2 no.2, pp.23-26, 2012
  • Purpose - Studying the consumption structure of urban residents helps us understand demand laws and grasp people's changing consumption trends. Consumption structure is an important indicator of living standards, so studying the urban consumption structure is of practical significance. Research data and methodology - The study uses data on urban residents from the Shandong Statistical Yearbook for the period 2000-2010, covering eight commodity groups; the Almost Ideal Demand System (AIDS) is one of the principal models of consumption structure. Results - The paper first gives a brief introduction to AIDS and then presents an empirical analysis of urban residents' consumption structure in Shandong province, China, on the basis of the AIDS model. Conclusions - The authorities should control the prices of HC, foodstuff, and housing and encourage the consumption of HC, housing, and EE accordingly. At the same time, the local government should increase the supply of goods related to housing, HA, HC, and EE so as to attract more consumption from urban residents in Shandong.
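
For context, the standard AIDS budget-share equation of Deaton and Muellbauer; the paper's exact specification and choice of price index may differ.

    w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\left(\frac{x}{P}\right)

    \ln P = \alpha_0 + \sum_k \alpha_k \ln p_k
            + \frac{1}{2} \sum_k \sum_j \gamma_{kj} \ln p_k \ln p_j

Here w_i is the budget share of commodity group i, p_j are prices, x is total expenditure, and P is the translog price index; the linear-approximate version replaces P with Stone's index \ln P^* = \sum_k w_k \ln p_k. The usual restrictions are adding-up (\sum_i \alpha_i = 1, \sum_i \beta_i = 0, \sum_i \gamma_{ij} = 0), homogeneity (\sum_j \gamma_{ij} = 0), and symmetry (\gamma_{ij} = \gamma_{ji}).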

The Structure Type Introduced in Java (Java 언어에 structure type의 도입)

  • Lee, Ho-Suk
    • The Transactions of the Korea Information Processing Society, v.5 no.7, pp.1883-1895, 1998
  • The Java programming language is known as a general-purpose concurrent object-oriented language. Its concepts and syntax are concise and uniform, and it generates object code for a virtual machine so that it can be fully exploited in the Internet environment. The most important part of a programming language is its data types. Java supports primitive types and reference types. The primitive types are the boolean type and the numeric types; the numeric types include char, byte, short, int, long, and the single-precision and double-precision floating-point types. The reference types are class types, interface types, and array types. However, Java does not support the structure type that general-purpose programming languages commonly provide; instead, the class type is designed to subsume the structure type. The class type and the structure type are, however, distinct data types. Therefore, for Java to keep its general-purpose character, it is desirable to support the structure type explicitly. This paper proposes adding a structure type to the Java language.

Exchange of the Product Structure Data of STEP between CAD and PDM Systems (CAD와 PDM 시스템 간에 STEP 제품 구조 정보의 교환)

  • 오유천; 한순흥
    • Korean Journal of Computational Design and Engineering, v.5 no.3, pp.215-223, 2000
  • The exchange of product data between heterogeneous CAD and PDM systems is a crucial issue for the integration of product development systems. STEP offers an efficient mechanism for product data exchange between heterogeneous systems. This paper introduces a UML-based mapping methodology for product data models. The suggested mapping method has been applied to exchange product structure data between CAD and PDM systems. Based on the STEP methods, we developed an interface module between a CAD system and a PDM system.
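
A minimal, hypothetical Python illustration of the kind of product structure (an assembly/part tree) that such an interface must carry between CAD and PDM. The names and the flattened parent-child rows are assumptions for illustration, not the paper's UML mapping or an actual STEP schema.

    from dataclasses import dataclass, field

    @dataclass
    class Part:
        name: str
        children: list = field(default_factory=list)   # list of (child Part, quantity)

    def to_exchange_rows(part, parent=None, qty=1, rows=None):
        """Flatten the assembly tree into (parent, child, quantity) rows, the kind of
        parent-child usage records a PDM system typically imports."""
        if rows is None:
            rows = []
        if parent is not None:
            rows.append((parent.name, part.name, qty))
        for child, n in part.children:
            to_exchange_rows(child, part, n, rows)
        return rows

    hull_block = Part("HullBlock", [(Part("Plate"), 12), (Part("Stiffener"), 40)])
    ship = Part("Ship", [(hull_block, 8)])
    for row in to_exchange_rows(ship):
        print(row)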

Genome Scale Protein Secondary Structure Prediction Using a Data Distribution on a Grid Computing

  • Cho, Min-Kyu; Lee, Soojin; Jung, Jin-Won; Kim, Jai-Hoon; Lee, Weontae
    • Proceedings of the Korean Biophysical Society Conference, 2003.06a, pp.65-65, 2003
  • After many genome projects, algorithms and software to process the explosively growing volume of biological information have been developed. Processing such huge amounts of biological information requires high-performance computing equipment. If we use remote resources such as computing power and storage through a Grid that shares resources in the Internet environment, we can process the data efficiently at low cost. Here we present the performance improvement of protein secondary structure prediction (PSIPred) on a Grid platform, distributing protein sequence data across the Grid so that each computing node analyzes its own part of the data and the prediction is sped up. On the Grid, genome-scale secondary structure predictions for Mycoplasma genitalium, Escherichia coli, Helicobacter pylori, Saccharomyces cerevisiae, and Caenorhabditis elegans were performed and analyzed statistically to show the protein structural deviation and a comparison between the genomes. Experimental results show that the Grid is a viable platform for speeding up protein structure prediction and for analyzing the predicted structures.
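
A rough sketch of the data-distribution idea, using Python multiprocessing as a stand-in for Grid nodes; the predictor below is a trivial placeholder, not PSIPRED, and the sequences are invented.

    from multiprocessing import Pool

    def predict_secondary_structure(record):
        # placeholder predictor: labels every residue as coil ('C'); on the real Grid
        # each node would run the actual predictor on its share of the sequences
        seq_id, sequence = record
        return seq_id, "C" * len(sequence)

    def distribute(records, workers=4):
        """Split the sequence set across worker processes, mimicking the paper's
        distribution of protein sequence data across Grid nodes."""
        with Pool(workers) as pool:
            return dict(pool.map(predict_secondary_structure, records))

    if __name__ == "__main__":
        genome = [("MG_001", "MKVLINS"), ("MG_002", "MTTQAPT")]
        print(distribute(genome, workers=2))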

Development of the Abstract Test Cases of Ship STEP

  • Kim Yong-Dae; Hwang Ho-Jin
    • Journal of Ship and Ocean Technology, v.9 no.3, pp.23-32, 2005
  • Ship STEP (Standard for the Exchange of Product Model Data), which is composed of AP 215 (Ship Arrangement), AP 216 (Ship Hull Form), and AP 218 (Ship Structure), has been under development for more than the last 10 years and is now at the stage just before IS (International Standard). Ship STEP is expected to be used for seamless data exchange among the various CAD/CAM/CAE systems of the shipbuilding process. In this paper the large and complicated data structure of ship STEP is briefly reviewed at the level of the ARM (Application Reference Model), and some abstract test cases that will be included as part of the standards are introduced. Ship STEP has a common data model, defined as the modeling framework, that can be used without losing compatibility among the three different ship APs. Typical cases of data exchange during the shipbuilding process, such as hull form data exchange between a design office and a model basin, and midship structure data exchange between a shipyard and a classification society, are reviewed, and STEP physical data are generated using a commercial geometric modeling kernel. Test cases of ship arrangement at the initial design stage and hydrodynamic data of a crude oil carrier are also included.

Conceptual Data Modeling: Entity-Relationship Models as Thinging Machines

  • Al-Fedaghi, Sabah
    • International Journal of Computer Science & Network Security, v.21 no.9, pp.247-260, 2021
  • Data modeling is a process of developing a model to design and develop a data system that supports an organization's various business processes. A conceptual data model represents a technology-independent specification of the structure of the data to be stored within a database. The model aims to provide richer expressiveness and incorporate a set of semantics to (a) support the design, control, and integrity parts of the data stored in data management structures and (b) coordinate the viewing of connections and ideas on a database. The described structure of the data is often represented in an entity-relationship (ER) model, which was one of the first data-modeling techniques and is likely to continue to be a popular way of characterizing entity classes, attributes, and relationships. This paper examines the basic ER modeling notions in order to analyze the concepts they refer to and the ways to represent them. To this end, we apply a new modeling methodology (thinging machine; TM) to ER in terms of its fundamental building constructs: the representation of entities, relationships, and attributes. The goal of this venture is to further the understanding of data models and enrich their semantics. Three specific contributions to modeling in this context are incorporated: (a) using the TM model's five generic actions to inject processing into the ER structure; (b) relating the single ontological element of TM modeling (i.e., a thing/machine or thimac) to ER entities and relationships; and (c) proposing a high-level, integrated, extended ER model that includes structural and time-oriented notions (e.g., events or behavior).
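
As a point of reference, the basic ER building blocks the paper analyzes (entity classes, attributes, and a relationship), rendered as a toy Python model; this illustrates plain ER notions only, not the thinging machine (TM) re-interpretation the paper proposes, and the example schema is invented.

    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        attributes: list = field(default_factory=list)

    @dataclass
    class Relationship:
        name: str
        participants: list            # list of (Entity, cardinality) pairs

    student = Entity("Student", ["student_id", "name"])
    course  = Entity("Course",  ["course_id", "title"])
    enrolls = Relationship("Enrolls", [(student, "N"), (course, "M")])

    print(enrolls.name, [(e.name, c) for e, c in enrolls.participants])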

Interactive analysis tools for the wide-angle seismic data for crustal structure study (Technical Report) (지각 구조 연구에서 광각 탄성파 자료를 위한 대화식 분석 방법들)

  • Fujie, Gou; Kasahara, Junzo; Murase, Kei; Mochizuki, Kimihiro; Kaneda, Yoshiyuki
    • Geophysics and Geophysical Exploration, v.11 no.1, pp.26-33, 2008
  • The analysis of wide-angle seismic reflection and refraction data plays an important role in lithospheric-scale crustal structure studies. However, it is extremely difficult to develop an appropriate velocity structure model directly from the observed data, and the structure model must be improved step by step because crustal structure analysis is an intrinsically non-linear problem. There are several subjective processes in wide-angle crustal structure modelling, such as phase identification and trial-and-error forward modelling. Because these subjective processes reduce the uniqueness and credibility of the resultant models, it is important to reduce subjectivity in the analysis procedure. From this point of view, we describe two software tools, PASTEUP and MODELING, for developing crustal structure models. PASTEUP is an interactive application that facilitates the plotting of record sections, the analysis of wide-angle seismic data, and the picking of phases. PASTEUP is equipped with various filters and analysis functions to enhance the signal-to-noise ratio and to help phase identification. MODELING is an interactive application for editing velocity models and for ray tracing. Synthetic traveltimes computed by MODELING can be compared directly with the observed waveforms in PASTEUP. This reduces subjectivity in crustal structure modelling because traveltime picking, one of the most subjective processes in crustal structure analysis, is not required. MODELING can convert an editable layered structure model into two-way traveltimes, which can be compared with time sections of multi-channel seismic (MCS) reflection data. Direct comparison of the structure model derived from wide-angle data with the reflection data gives the model more credibility. In addition, both PASTEUP and MODELING are efficient tools for handling large datasets. These software tools help us develop more plausible lithospheric-scale structure models from wide-angle seismic data.
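
As a simple illustration of converting a layered velocity model into two-way traveltimes, a vertical-incidence Python sketch using t = sum(2 * h_i / v_i); MODELING's actual ray tracing handles far more general geometries, and the layer values below are invented.

    def two_way_traveltime(layers):
        """layers: list of (thickness_km, velocity_km_per_s) from top to bottom;
        returns the vertical two-way traveltime to the base of the stack."""
        return sum(2.0 * h / v for h, v in layers)

    crust = [(2.0, 2.5),    # sediments
             (10.0, 6.0),   # upper crust
             (15.0, 6.8)]   # lower crust
    print(f"two-way time to the base of the model: {two_way_traveltime(crust):.2f} s")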