• Title/Abstract/Keywords: Data journal

Search results: 189,213 items

클라우드 환경에서의 암호화 데이터에 대한 효율적인 Top-K 질의 수행 기법 (Efficient Top-K Queries Computation for Encrypted Data in the Cloud)

  • 김종욱
    • 한국멀티미디어학회논문지 / Vol. 18, No. 8 / pp. 915-924 / 2015
  • With the growing popularity of cloud computing services, users can more easily manage massive amounts of data by outsourcing them to the cloud, and more efficiently analyze large amounts of data by leveraging the IT infrastructure the cloud provides. This, however, raises security concerns for sensitive data. To provide data security, it is essential to encrypt sensitive data before uploading it to cloud computing services. Although encryption helps provide data security, it degrades the performance of massive data analytics because it prevents the use of indexes and mathematical operations on encrypted data. In this paper, we therefore propose a novel algorithm that can efficiently process large amounts of encrypted data. In particular, we propose a top-k processing algorithm for massive encrypted data in cloud computing environments and verify the performance of the proposed approach with experiments on real data.
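
The abstract does not detail the algorithm itself, but a common building block for server-side top-k over ciphertexts is order-preserving encryption, which lets an untrusted server compare, and thus rank, values it cannot read. The sketch below is a hypothetical illustration of that idea, not the paper's scheme; the toy cipher and all names are assumptions.

```python
import heapq
import random

# Toy order-preserving "encryption": a strictly increasing random mapping.
# Real OPE schemes are far more involved; this stand-in only illustrates
# why order preservation enables server-side top-k without decryption.
class ToyOrderPreservingCipher:
    def __init__(self, seed=42):
        self.rng = random.Random(seed)
        self._table = {}      # plaintext -> ciphertext
        self._inverse = {}    # ciphertext -> plaintext

    def encrypt_all(self, values):
        # Assign ciphertexts in sorted plaintext order so order is preserved.
        cipher = 0
        for v in sorted(set(values)):
            cipher += self.rng.randint(1, 1000)  # random, strictly increasing
            self._table[v] = cipher
            self._inverse[cipher] = v
        return [self._table[v] for v in values]

    def decrypt(self, c):
        return self._inverse[c]

def server_top_k(encrypted_scores, k):
    # The untrusted server sees only ciphertexts, but because the scheme
    # preserves order it can still pick the k largest.
    return heapq.nlargest(k, encrypted_scores)

# Client side: encrypt, outsource, then decrypt the server's answer.
ope = ToyOrderPreservingCipher()
scores = [73, 5, 88, 42, 97, 15, 60]
outsourced = ope.encrypt_all(scores)
top3 = [ope.decrypt(c) for c in server_top_k(outsourced, 3)]
print(top3)  # [97, 88, 73]
```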

Web-based DNA Microarray Data Analysis Tool

  • Ryu, Ki-Hyun;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society / Vol. 17, No. 4 / pp. 1161-1167 / 2006
  • Since microarray data structures are varied and complex, the data are generally stored in databases so that they can be accessed and managed effectively. However, analyzing and managing the data becomes difficult when they are spread across several database management systems. Existing DNA microarray analysis tools suffer from complicated instructions, dependence on particular data types and operating systems, high cost, and other problems. In this paper, we design and implement a web-based analysis tool for extracting useful information from DNA microarray data. With this tool, users can analyze DNA microarray data effectively without specialized knowledge of data types or analytical methods.
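
To give a rough sense of the kind of analysis such a tool automates, the sketch below runs a minimal differential-expression pass over a hypothetical expression matrix; the file layout, gene names, and two-condition design are illustrative assumptions, not the paper's.

```python
import io
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical tab-separated expression matrix: rows are genes, columns are
# samples from two conditions. A web tool like the one described would accept
# such an upload and run the analysis server-side.
raw = io.StringIO(
    "gene\tctrl_1\tctrl_2\tcase_1\tcase_2\n"
    "BRCA1\t5.1\t4.9\t8.2\t7.8\n"
    "TP53\t6.0\t6.2\t6.1\t5.9\n"
    "MYC\t3.2\t3.0\t5.5\t5.9\n"
)
df = pd.read_csv(raw, sep="\t", index_col="gene")

ctrl = df[["ctrl_1", "ctrl_2"]]
case = df[["case_1", "case_2"]]

# Per-gene differential expression: log2 fold change and two-sample t-test.
result = pd.DataFrame({
    "log2_fc": np.log2(case.mean(axis=1) / ctrl.mean(axis=1)),
    "p_value": stats.ttest_ind(case, ctrl, axis=1).pvalue,
})
print(result.sort_values("p_value"))
```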


다기능레이더 데이터 획득 및 분석 장치 개발 (The Development of the Data Acquisition & Analysis System for Multi-Function Radar)

  • 송준호
    • 한국군사과학기술학회지 / Vol. 14, No. 1 / pp. 106-113 / 2011
  • This paper describes a Data Acquisition & Analysis System (DAS) for analyzing a multi-function radar. The device saves various kinds of information: beam probing data, clutter map data, plot data, target tracking data, RT tracking data, radar signal processing data, and interface data. The most important aspect of data analysis is giving the researcher a view of the data as a whole. The DAS integrates all of the data and provides an overall picture of the moment a problem occurs, which makes problems much easier to approach. The system algorithms of the multi-function radar have been improved using this capability: as a result, the range blank region has been reduced by about 72%, and tracking can be maintained in a jammer environment.
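
The abstract's key point, integrating heterogeneous time-stamped streams so a researcher can see everything around the moment a problem occurred, can be illustrated with a small sketch. The stream names and the window helper below are hypothetical, not the DAS's actual record formats.

```python
from bisect import bisect_left, bisect_right

# Hypothetical time-stamped records from the radar's separate data streams;
# the real DAS stores beam probing, clutter map, plot, tracking, signal
# processing, and interface data in equipment-specific formats.
streams = {
    "beam":  [(0.10, "probe az=12"), (0.55, "probe az=14")],
    "plot":  [(0.20, "plot #7"), (0.52, "plot #8")],
    "track": [(0.50, "track update T3"), (0.90, "track drop T3")],
}

def merged_timeline(streams):
    # Integrate all streams into one chronologically ordered timeline.
    events = [(t, name, msg) for name, recs in streams.items() for t, msg in recs]
    return sorted(events)

def window(timeline, t_event, half_width):
    # Everything the radar saw around the moment a problem occurred.
    times = [t for t, _, _ in timeline]
    lo = bisect_left(times, t_event - half_width)
    hi = bisect_right(times, t_event + half_width)
    return timeline[lo:hi]

timeline = merged_timeline(streams)
for t, src, msg in window(timeline, t_event=0.52, half_width=0.05):
    print(f"{t:.2f} [{src}] {msg}")
```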

Functional Requirements of Data Repository for DMP Support and CoreTrustSeal Authentication

  • Kim, Sun-Tae
    • International Journal of Knowledge Content Development & Technology / Vol. 10, No. 1 / pp. 7-20 / 2020
  • For research data to be shared without legal, financial, and technical barriers in the Open Science era, data repositories must meet the functional requirements imposed by DMPs and CoreTrustSeal. To derive functional requirements for a data repository, this study analyzed the Data Management Plan (DMP) and CoreTrustSeal, the certification criteria for research data repositories. Deposit, Ethics, License, Discovery, Identification, Reuse, Security, Preservation, Accessibility, Availability, and (Meta) Data Quality, commonly required by DMP and CoreTrustSeal, were derived as the functional requirements that should be implemented first when building a data repository. Confidentiality, Integrity, Reliability, Archiving, Technical Infrastructure, Documented Storage Procedure, Organizational Infrastructure, (Meta) Data Evaluation, and Policy functions were further derived from CoreTrustSeal. The functional requirements derived in this study can serve as key functions when developing a repository, and as key items for introducing repository functions to researchers who deposit data.
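
One way to make the derived requirements actionable is to encode them as a machine-readable checklist that an implementation can be audited against. The sketch below groups the requirements exactly as the abstract does; the audit helper itself is a hypothetical addition, not part of the paper.

```python
# Requirements derived in the paper, encoded as a checklist a repository
# implementation could be audited against. Grouping mirrors the abstract.
COMMON = [  # required by both DMP and CoreTrustSeal
    "Deposit", "Ethics", "License", "Discovery", "Identification", "Reuse",
    "Security", "Preservation", "Accessibility", "Availability",
    "(Meta) Data Quality",
]
CORETRUSTSEAL_ONLY = [
    "Confidentiality", "Integrity", "Reliability", "Archiving",
    "Technical Infrastructure", "Documented Storage Procedure",
    "Organizational Infrastructure", "(Meta) Data Evaluation", "Policy",
]

def audit(implemented: set) -> dict:
    """Report which derived requirements a repository still lacks."""
    return {
        "missing_common": [r for r in COMMON if r not in implemented],
        "missing_cts": [r for r in CORETRUSTSEAL_ONLY if r not in implemented],
    }

print(audit({"Deposit", "License", "Security", "Policy"}))
```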

데이터사전을 이용한 ERP애플리케이션 개발 (ERP Application Development Using Business Data Dictionary)

  • Minsu Jang;Joo-Chan Sohn;Jong-Myoung Baik
    • 한국전자거래학회지 / Vol. 7, No. 1 / pp. 141-152 / 2002
  • A data dictionary is a collection of meta-data that describes the data produced and consumed while performing business processes. It is an essential element for business process standardization and automation and plays a fundamental role in ERP application management and customization. A data dictionary also facilitates B2B processes by enabling painless integration of business processes across enterprises. We implemented data dictionary support in SEA+, a component-based scalable ERP system developed at ETRI, and found it to be a valuable feature of a business information system. We found that a data dictionary promotes semantic, rather than merely syntactic, data management, which should keep the tool viable as computing becomes more metadata-oriented. We envision the business data dictionary as a firm foundation for adapting business knowledge, applications, and processes to a semantic-web-based enterprise infrastructure.
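
A minimal sketch of the idea follows, assuming a dictionary keyed by standardized element names; the entries and the validate() helper are illustrative, not SEA+'s actual design.

```python
from dataclasses import dataclass

# A business data dictionary: meta-data describing the data items that
# business processes produce and consume, shared across applications.
@dataclass
class DictionaryEntry:
    name: str          # standardized element name shared across applications
    definition: str    # business meaning, not just a column type
    data_type: type
    unit: str = None

DATA_DICTIONARY = {
    "order_amount": DictionaryEntry(
        "order_amount", "Total value of a purchase order", float, "KRW"),
    "customer_id": DictionaryEntry(
        "customer_id", "Unique identifier of a trading partner", str),
}

def validate(record: dict) -> list:
    """Check a business record against the shared dictionary (semantic layer)."""
    errors = []
    for field_name, value in record.items():
        entry = DATA_DICTIONARY.get(field_name)
        if entry is None:
            errors.append(f"unknown element: {field_name}")
        elif not isinstance(value, entry.data_type):
            errors.append(f"{field_name}: expected {entry.data_type.__name__}")
    return errors

print(validate({"order_amount": "oops", "customer_id": "C-001", "fax": 1}))
```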


Schema of Maintenance Data Exchange and Implementation Applied To Ship & Offshore Platform

  • Son, Gum Jun;Lee, Jang Hyun
    • Journal of Advanced Research in Ocean Engineering / Vol. 4, No. 3 / pp. 96-104 / 2018
  • Data management for the efficient maintenance and operation of offshore structures is becoming increasingly important. This paper discusses the data schema and business rules that standardize data exchange among ship design, operation, and maintenance. Technical documentation that meets the international standards of ShipDex and S1000D, which exchange operation and management data in neutral or standard formats, has been introduced into the life cycle management of ships. The data exchange schema is represented in XML (eXtensible Markup Language), and the lifecycle data are implemented as structured documents: data modules defined by an XML schema. To demonstrate feasible data generation, an example technical document is produced with a general-purpose XML authoring tool.
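
To make the data-module idea concrete, the sketch below serializes a toy maintenance data module as XML; the element names and module code are simplified stand-ins, not the actual S1000D/ShipDex schema, which is far more detailed.

```python
import xml.etree.ElementTree as ET

# A toy maintenance "data module" as a structured XML document. In S1000D,
# lifecycle data are split into such modules with controlled identifiers.
dm = ET.Element("dataModule", code="DMC-SHIP-A-00-00-00-00A-040A-D")
ident = ET.SubElement(dm, "identAndStatus")
ET.SubElement(ident, "techName").text = "Ballast pump"
ET.SubElement(ident, "issueDate").text = "2018-09-01"
content = ET.SubElement(dm, "content")
task = ET.SubElement(content, "maintenanceTask", interval="6M")
ET.SubElement(task, "step").text = "Inspect impeller for wear."
ET.SubElement(task, "step").text = "Replace mechanical seal if leaking."

ET.indent(dm)  # pretty-print (Python 3.9+)
print(ET.tostring(dm, encoding="unicode"))
```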

고객군의 지리적 패턴 발견을 위한 데이터마트 구현과 시각적 분석에 관한 연구 (Buying Pattern Discovery Using Spatio-Temporal Data Mart and Visual Analysis)

  • 조재희;하병국
    • 한국IT서비스학회지 / Vol. 9, No. 1 / pp. 127-139 / 2010
  • Owing to the development of information technology and of businesses built on customers' geographical locations, the need to store and analyze geographical location data is increasing rapidly. Geographical location data have a spatio-temporal nature that differs from typical business data, so different methods of storage and analysis are required. This paper proposes a multi-dimensional data model and data visualization for analyzing geographical location data efficiently and effectively. Purchase order data from an online farm-products brokerage business were used to build a prototype data mart. RFM scores are calculated to classify customers, and geocoding is applied to display the information on maps, thereby enhancing data visualization.
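
A minimal sketch of RFM scoring over purchase-order data follows; the column names and the rank-based three-level scoring are illustrative assumptions, not the paper's exact data mart design.

```python
import pandas as pd

# Toy purchase orders; a real data mart would hold far more dimensions.
orders = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2", "C3", "C3", "C3"],
    "order_date": pd.to_datetime(
        ["2010-01-05", "2010-02-20", "2010-01-15",
         "2010-02-01", "2010-02-10", "2010-02-25"]),
    "amount": [120.0, 80.0, 300.0, 50.0, 70.0, 60.0],
})
now = pd.Timestamp("2010-03-01")

# Recency (days since last order), Frequency (order count), Monetary (spend).
rfm = orders.groupby("customer_id").agg(
    recency=("order_date", lambda d: (now - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
)
# Rank-based 1-3 scores for this small example; real marts often use quintiles.
rfm["R"] = pd.qcut(rfm["recency"].rank(method="first"), 3, labels=[3, 2, 1]).astype(int)
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["M"] = pd.qcut(rfm["monetary"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["RFM"] = rfm["R"] * 100 + rfm["F"] * 10 + rfm["M"]
print(rfm)
```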

Analyzing RDF Data in Linked Open Data Cloud using Formal Concept Analysis

  • Hwang, Suk-Hyung;Cho, Dong-Heon
    • 한국컴퓨터정보학회논문지 / Vol. 22, No. 6 / pp. 57-68 / 2017
  • The Linked Open Data (LOD) cloud is quickly becoming one of the largest collections of interlinked datasets and the de facto standard for publishing, sharing, and connecting pieces of data on the Web. Data publishers from diverse domains publish their data using the Resource Description Framework (RDF) data model and provide SPARQL endpoints for querying it, creating a global, distributed, and interconnected dataspace on the LOD cloud. Although SPARQL can extract structured data as query results, users have little support for analyzing and visualizing the RDF data those results contain. To tackle this issue, we propose a novel approach, based on Formal Concept Analysis, for analyzing and visualizing useful information from the LOD cloud. The RDF analysis and visualization technique proposed in this paper can be applied to semantic web data mining: it extracts and analyzes the information and knowledge inherent in LOD and supports their classification and visualization.
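
To see how Formal Concept Analysis applies to such query results, the sketch below builds a toy formal context from (object, attribute) pairs of the kind a SPARQL query might return, then enumerates its formal concepts by brute force; the data and the enumeration strategy are illustrative, not the paper's implementation.

```python
from itertools import combinations

# Toy formal context built from (subject, type) pairs such as a SPARQL
# query over the LOD cloud might return: object -> set of attributes.
context = {
    "Seoul": {"City", "Capital"},
    "Busan": {"City", "Port"},
    "Rotterdam": {"City", "Port"},
}
objects = set(context)
attributes = set().union(*context.values())

def extent(attrs_):   # objects having every attribute in attrs_
    return {o for o in objects if attrs_ <= context[o]}

def intent(objs):     # attributes shared by every object in objs
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

# Enumerate formal concepts: pairs (A, B) with A = extent(B) and B = intent(A).
# Brute force over attribute subsets is fine for tiny contexts like this one.
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        ext = extent(set(attrs))
        concepts.add((frozenset(ext), frozenset(intent(ext))))

for ext, itt in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(ext), "<->", sorted(itt))
```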

국내 마이데이터 활성화를 위한 미국, 유럽 마이데이터 비교 연구 (Comparison of MyData Use Among the U.S., Europe, and the Korean Governments)

  • 이명호
    • 한국비블리아학회지 / Vol. 31, No. 2 / pp. 183-201 / 2020
  • Demand is growing for services that use social data, public data, personal information, and the like. In particular, to both utilize and protect personal information, MyData policies have been attempted in the United States and Europe, and MyData was also launched in Korea in 2019. To suggest a direction for the development of MyData in Korea, this study analyzes MyData in the United States and Europe and, on that basis, proposes directions for development in terms of data interoperability and data quality.

JDL 자료융합 모델의 분산 자료융합 능력 개선 (Improving the Distributed Data Fusion Ability of the JDL Data Fusion Model)

  • 박규동;변영태
    • 한국군사과학기술학회지 / Vol. 15, No. 2 / pp. 147-154 / 2012
  • In this paper, we revise the JDL data fusion model to support distributed data fusion (DDF). Data fusion is a function that produces valuable information from data gathered from multiple sources. Since the network-centric warfare concept was introduced, data fusion has needed to be extended to DDF. We identify data transfer and control between nodes as the core function of DDF. Previous data fusion models cannot be used for DDF because they do not include that function. Therefore, we revise the JDL data fusion model by adding this core function and propose the result as a model for DDF. We show with several examples that our model is adequate and useful for DDF.
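
The core function the revised model adds, data transfer and control between fusion nodes, can be sketched as nodes that push local estimates to peers and fuse what they receive. The variance-weighted fusion rule below is a common textbook choice, not necessarily the scheme used in the paper; all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Estimate:
    target_id: str
    position: float   # 1-D for brevity
    variance: float

@dataclass
class FusionNode:
    name: str
    peers: list = field(default_factory=list)
    inbox: list = field(default_factory=list)

    def transfer(self, estimate):
        # Data transfer & control between nodes: push local data to peers.
        for peer in self.peers:
            peer.inbox.append(estimate)

    def fuse(self, local):
        # Fuse the local estimate with everything received from peers,
        # weighting each estimate by the inverse of its variance.
        ests = [local] + [e for e in self.inbox if e.target_id == local.target_id]
        w = [1.0 / e.variance for e in ests]
        pos = sum(wi * e.position for wi, e in zip(w, ests)) / sum(w)
        var = 1.0 / sum(w)
        return Estimate(local.target_id, pos, var)

a, b = FusionNode("A"), FusionNode("B")
a.peers, b.peers = [b], [a]
b.transfer(Estimate("T3", position=10.4, variance=4.0))
print(a.fuse(Estimate("T3", position=10.0, variance=1.0)))
```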