• Title/Summary/Keyword: distributed database

Search Results: 598

Transcriptome analysis of internal and external stress mechanisms in Aster spathulifolius Maxim.

  • Sivagami, Jean Claude;Park, SeonJoo
    • Proceedings of the Plant Resources Society of Korea Conference / 2019.04a / pp.35-35 / 2019
  • Aster spathulifolius Maxim. belongs to the Asteraceae family and is distributed only in Korea and Japan. It is recognized as a traditional medicinal plant and is economically valuable as an ornamental. However, within the Asteraceae, the genus Aster lacks genomic resources and information on molecular function. We therefore used high-throughput RNA-sequencing transcriptome data of A. spathulifolius to characterize gene function at the molecular level. De novo assembly produced 98,660 unigenes with an N50 of 1,126 bp (a brief sketch of how N50 is computed follows this entry). The unigenes were functionally annotated against the NCBI nucleotide (Nt) and non-redundant protein (Nr) databases, Pfam, UniProt, KEGG, and a transcription factor (TF) database. In addition, the distribution of SSR markers was analyzed for future applications. Finally, comparing A. spathulifolius with two other Asteraceae species, Karelinia caspica and Chrysanthemum morifolium, revealed the numbers of genes regulated under internal and external stresses, namely salt tolerance and heat and drought stress, to help understand the molecular basis of responses to different environmental stresses.

  • PDF
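
The N50 reported above is a standard summary statistic for a de novo assembly: it is the contig length L such that contigs of length L or longer account for at least half of the total assembled length. A minimal sketch of the computation, on toy data rather than the paper's assembly:

```python
def n50(contig_lengths):
    """Return the N50: the length L such that contigs of length >= L
    cover at least half of the total assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Toy example (not the paper's data): five contigs totalling 10,000 bp.
print(n50([4000, 3000, 1500, 1000, 500]))  # -> 3000
```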

Prospect Analysis for Utilization of Virtual Assets using Blockchain Technology

  • Jeongkyu Hong
    • Journal of information and communication convergence engineering / v.22 no.1 / pp.64-69 / 2024
  • Blockchain is a decentralized network in which data blocks are linked. Through a decentralized peer-to-peer network, users can create shared databases, resulting in a trustworthy and aggregated database known as a blockchain that enhances reliability and security. The distributed nature of the blockchain enables data to be stored on multiple nodes, eliminating the need for a central server or platform. This disintermediation significantly reduces the transaction and administrative costs. The blockchain is particularly valuable in applications where reliability and stability are critical because it establishes an open database that ensures data integrity, making it virtually impossible to tamper with or falsify data. This study explores the diverse applications of the blockchain technology in virtual assets, such as cryptocurrency, decentralized finance, central bank digital currency, nonfungible tokens, and metaverses. In addition, it analyzes the potential prospects and developments driven by these innovative technologies.
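
As a hedged illustration of the hash-linking the abstract describes (not tied to any particular system in the paper), the sketch below builds a tiny chain of blocks in which each block commits to the previous block's hash, so any tampering is detectable:

```python
import hashlib, json, time

def make_block(data, prev_hash):
    """Create a block whose hash covers its payload and the previous block's hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({k: block[k] for k in ("timestamp", "data", "prev_hash")},
                   sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and check each link to the previous block."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({k: block[k] for k in ("timestamp", "data", "prev_hash")},
                       sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block({"from": "A", "to": "B", "amount": 1}, genesis["hash"])]
print(verify(chain))         # True
chain[0]["data"] = "forged"  # any tampering breaks verification
print(verify(chain))         # False
```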

Flash-Aware Transaction Management Scheme for flash Memory Database (플래시 메모리 데이터베이스를 위한 플래시인지 트랜잭션 관리 기법)

  • Byun Si Woo
    • Journal of Internet Computing and Services / v.6 no.1 / pp.65-72 / 2005
  • Flash memory is one of the best storage media for portable computers in mobile computing environments. Its non-volatility, low power consumption, and fast read access times make it well suited as a primary database storage component for portable computers. However, traditional transaction management schemes need to be improved because flash write operations are relatively slow compared with RAM. To achieve this goal, we devise a new scheme called flash-aware transaction management (FATM). FATM improves transaction performance by exploiting SRAM and a write cache (W-Cache); a hypothetical sketch of the underlying write-caching idea follows this entry. We also propose a simulation model to evaluate the performance of FATM. Based on the results of the performance evaluation, we conclude that FATM outperforms the traditional scheme.

  • PDF
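
The FATM algorithm itself is not spelled out in the abstract, so the following is only a hypothetical sketch of the general write-caching idea it alludes to: buffer a transaction's writes in fast RAM (standing in for the SRAM-backed W-Cache) and pay the slow flash write cost once, at commit. Class and method names are invented for illustration.

```python
class FlashStore:
    """Stand-in for flash storage with slow writes."""
    def __init__(self):
        self.pages = {}
    def write(self, page_id, value):
        self.pages[page_id] = value  # imagine this costing far more than a RAM write

class WriteCachedTransaction:
    def __init__(self, flash):
        self.flash = flash
        self.w_cache = {}                  # in-RAM buffer of pending writes
    def write(self, page_id, value):
        self.w_cache[page_id] = value      # cheap: stays in RAM until commit
    def read(self, page_id):
        # a transaction sees its own buffered writes first
        return self.w_cache.get(page_id, self.flash.pages.get(page_id))
    def commit(self):
        for page_id, value in self.w_cache.items():
            self.flash.write(page_id, value)   # one slow flash write per dirty page
        self.w_cache.clear()
    def abort(self):
        self.w_cache.clear()               # nothing ever reached flash

flash = FlashStore()
txn = WriteCachedTransaction(flash)
txn.write("p1", "v1"); txn.write("p1", "v2")   # repeated writes coalesce in RAM
txn.commit()
print(flash.pages)  # {'p1': 'v2'}
```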

DESIGN AND IMPLEMENTATION OF METADATA MODEL FOR SENSOR DATA STREAM

  • Lee, Yang-Koo;Jung, Young-Jin;Ryu, Keun-Ho;Kim, Kwang-Deuk
    • Proceedings of the KSRS Conference / v.2 / pp.768-771 / 2006
  • In a WSN (Wireless Sensor Network) environment, a large number of small, heterogeneous sensors continuously generate data streams over a physical space. Sensor data consists of measured values and metadata, where the metadata describes features such as location, sampling time, measurement unit, and sensor type. Until now, wireless sensors have been managed through individual specifications rather than an explicit metadata standard, which makes it difficult to collect data from and communicate between heterogeneous sensors. To solve this problem, the OGC (Open Geospatial Consortium) proposed SensorML (Sensor Model Language), which can manage the metadata of heterogeneous sensors in a uniform format. In this paper, we introduce a metadata model based on the SensorML specification to manage various sensors distributed over a wide area, and we implement the metadata management module as part of a sensor data stream management system. The module provides functions for generating metadata files and for registering and storing them according to the SensorML definitions; a simplified sketch of such a metadata record follows this entry.

  • PDF
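
As a rough illustration of the kind of sensor metadata record described above (location, sampling period, measurement unit, sensor type), the sketch below serializes a record to XML. The element names are simplified placeholders, not the official SensorML vocabulary or the paper's schema.

```python
import xml.etree.ElementTree as ET

def sensor_metadata(sensor_id, sensor_type, lat, lon, unit, sampling_period_s):
    """Build a simplified, SensorML-like metadata record for one sensor."""
    root = ET.Element("SensorMetadata", id=sensor_id)
    ET.SubElement(root, "type").text = sensor_type
    loc = ET.SubElement(root, "location")
    ET.SubElement(loc, "latitude").text = str(lat)
    ET.SubElement(loc, "longitude").text = str(lon)
    ET.SubElement(root, "measurementUnit").text = unit
    ET.SubElement(root, "samplingPeriodSeconds").text = str(sampling_period_s)
    return ET.tostring(root, encoding="unicode")

print(sensor_metadata("temp-001", "temperature", 36.35, 127.38, "degC", 60))
```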

A Study on the Construction of Knowledge-based Digital Library Model in Korea University Library (지식기반 전자도서관 모형구축에 관한 연구 - 대학도서관을 중심으로 -)

  • Lee, Eung-Bong;Lu, Bum-Jong
    • Journal of the Korean Society for Library and Information Science / v.34 no.4 / pp.49-67 / 2000
  • The purpose of this study is to suggest a knowledge-based digital library model applicable to Korean university libraries. The paper briefly reviews research and development trends in digital libraries, referring to major digital library projects that are in progress or recently completed. It then proposes eight essential modules for a knowledge-based digital library system: infrastructure for dissertation presentation and database construction, management and service of collections, infrastructure for journal services, development of a unified viewer, database conversion, a distributed and integrated retrieval system, a cyber campus (private work space), and database connection.

  • PDF

Design and Implementation of a Content Manager in the Multimedia Streaming Framework (멀티미디어 스트리밍 프레임워크에서 컨텐츠 관리자의 설계 및 구현)

  • Hong, Yeong-Rae;Kim, Hyeong-Il;Lee, Seung-Ryong;Jeong, Byeong-Su;Yun, Seok-Hwan;Jeong, Chan-Geun
    • The Transactions of the Korea Information Processing Society / v.7 no.2S / pp.733-743 / 2000
  • This paper describes the design and implementation of a content manager in the Integrated Streaming Framework Architecture (ISSA) proposed by the authors. ISSA provides an environment for developing multimedia streaming applications on heterogeneous distributed systems. Its goal is to overcome the limitations of existing streaming systems: it supports diverse media formats and a high-level programming environment for streaming application developers. Moreover, it is independent of the underlying networks and operating systems, and it is compatible with the global real-time multimedia database system (BeeHive), so that streaming media can be efficiently retrieved, stored, and serviced. The content manager plays an important role in the ISSA environment because it manages information about the media provided by the server and allows users to access media more easily by conveying that information efficiently to the streaming server, web server, and clients. The proposed content manager not only meets these requirements but also provides streaming information to the media source and transport manager to enable efficient streaming, and it supports database transaction processing through the database connector. A hypothetical sketch of such a content registry follows this entry.

  • PDF
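
The abstract describes the content manager as a registry of media information that the streaming server, web server, and clients consult. The sketch below is a purely hypothetical, minimal content registry along those lines; it is not the ISSA implementation, and all names are invented.

```python
class ContentRegistry:
    """Hypothetical minimal content manager: keeps per-content metadata and
    hands it out to whichever component (streaming server, web server, client)
    asks for it."""
    def __init__(self):
        self._catalog = {}

    def register(self, content_id, title, media_format, location):
        # Record what the media server reported about a piece of content.
        self._catalog[content_id] = {
            "title": title,
            "format": media_format,   # e.g. "mp4", "mp3"
            "location": location,     # where the streaming server can fetch it
        }

    def lookup(self, content_id):
        # Components query the registry instead of probing media files directly.
        return self._catalog.get(content_id)

    def list_by_format(self, media_format):
        return [cid for cid, meta in self._catalog.items()
                if meta["format"] == media_format]

registry = ContentRegistry()
registry.register("c-001", "Lecture 1", "mp4", "media-server:/store/c-001.mp4")
print(registry.lookup("c-001"))
print(registry.list_by_format("mp4"))  # ['c-001']
```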

A Database Schema Integration Method Using XML Schema (XML Schema를 이용한 이질의 데이터베이스 스키마 통합)

  • 박우창
    • Journal of Internet Computing and Services / v.3 no.2 / pp.39-56 / 2002
  • In distributed computing environments, many database applications, such as data warehousing and data mining, need to share data with each other while preserving the autonomy of local databases. The first step toward such applications is the integration of heterogeneous database schemas, but there is no widely accepted common data model for the integration, and building integration programs is difficult. In this paper, we use XML Schema as the common data model and exploit XSLT to reduce the programming effort. We define schema integration operations and develop a methodology for semi-automatic schema integration according to the types of schema conflicts. Compared with existing methodologies, the proposed method offers benefits in standardization and extensibility of the schema integration process; a toy illustration of the XSLT-based mapping idea follows this entry.

  • PDF
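
As a toy illustration of the XSLT-based mapping the paper relies on, the sketch below renames one local schema's elements into a shared vocabulary. It assumes the third-party lxml package; the element names and the mapping are made up for illustration and are not the paper's integration operations.

```python
from lxml import etree

# A document conforming to one (invented) local schema.
local_doc = etree.XML("""
<customer>
  <cust_name>Kim</cust_name>
  <cust_phone>010-0000-0000</cust_phone>
</customer>
""")

# An XSLT stylesheet mapping the local element names to an integrated vocabulary.
stylesheet = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/customer">
    <Client>
      <Name><xsl:value-of select="cust_name"/></Name>
      <Phone><xsl:value-of select="cust_phone"/></Phone>
    </Client>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)
print(etree.tostring(transform(local_doc), pretty_print=True).decode())
```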

A Study on the Data Mining Preprocessing Tool For Efficient Database Marketing (효율적인 데이터베이스 마케팅을 위한 데이터마이닝 전처리도구에 관한 연구)

  • Lee, Jun-Seok
    • Journal of Digital Convergence / v.12 no.11 / pp.257-264 / 2014
  • This paper concerns the construction of a data mining preprocessing tool for efficient database marketing. We compare and evaluate commonly used data mining tools with respect to their access to local and remote databases and the exchange of information resources between different computers. The preprocessing capabilities evaluated are those of Answer Tree, Clementine, Enterprise Miner, Kensington, and Weka. We then propose a design principle for an efficient data preprocessing system for data mining on distributed networks, based on Java technology including EJB (Enterprise Java Beans) and XML (eXtensible Markup Language).

Design of Environmental Information Systems Architecture Based on the Internet : The Building of a Database for Environmental Factors and GIS (인터넷 환경에 기반한 환경정보시스템 아키텍쳐 설계 : 환경요인을 Database 구축과 이를 이용한 GIS 구축)

  • Suh, Eui-Ho;Lee, Dae-Ho;Yu, Sung-Ho
    • Asia pacific journal of information systems / v.8 no.2 / pp.1-18 / 1998
  • As the management and preservation of the environment become important social issues, information to support environmental tasks is increasingly in demand, along with appropriate systems to manage it. The vast volume of environmental data is distributed across different knowledge domains and systems, and environmental data objects have a complex structure combining environmental quality data and attribute data. Environmental information systems must be able to address these properties. This research aims at a well-defined schema design for environmental data and a system architecture through which environmental data held by the authorities can be made available to the public. The architecture has three major components: the user interface, catalog libraries, and the communication provider. Web browsers provide consistent and intuitive user interfaces on the Internet. The communication provider is a collection of CGI functions whose main role is to build the interface between the Web and the databases (a minimal sketch of such a CGI script follows this entry). The catalog libraries are libraries of various metadata, including administration metadata, which support environmental administration and the managerial aspects of environmental data rather than describing a database itself or its properties.

  • PDF
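
The communication provider described above is a set of CGI programs sitting between the web server and the databases. The sketch below is a minimal stand-alone CGI-style script along those lines; the in-memory records are placeholders, not data or code from the paper.

```python
#!/usr/bin/env python3
import os
from urllib.parse import parse_qs

# Placeholder records standing in for a real environmental database.
RECORDS = {
    "regionA": [("factor1", "value1"), ("factor2", "value2")],
    "regionB": [("factor1", "value3")],
}

def main():
    # CGI passes the URL query string through the environment.
    query = parse_qs(os.environ.get("QUERY_STRING", ""))
    region = query.get("region", ["regionA"])[0]
    rows = RECORDS.get(region, [])
    # Emit the CGI response: headers, blank line, then the HTML body.
    print("Content-Type: text/html")
    print()
    print(f"<html><body><h1>{region}</h1><table>")
    for factor, value in rows:
        print(f"<tr><td>{factor}</td><td>{value}</td></tr>")
    print("</table></body></html>")

if __name__ == "__main__":
    main()
```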

HBase based Business Process Event Log Schema Design of Hadoop Framework

  • Ham, Seonghun;Ahn, Hyun;Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services / v.20 no.5 / pp.49-55 / 2019
  • Organizations design and operate business process models to achieve their goals efficiently and systematically. With the advancement of IT, the number of items in which computer systems can participate has grown, and processes have become huge and complicated, creating more complex and finely subdivided business process flows. The process instances, which contain workcases and events, are correspondingly larger and carry more data. This event log is an essential resource for process mining and is used directly in model discovery, analysis, and process improvement. As event logs grow bigger and broader, managing them as conventional row-level files or through a relational database runs into problems such as capacity management and I/O load. In this paper, we identify these management limits of file-based and relational-database approaches when the event log becomes big data, and we design and apply a schema to archive and analyze large event logs through Hadoop, an open-source distributed file system, and HBase, a NoSQL database system. An illustrative sketch of how such an event log might be written to HBase follows.
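
As an illustrative sketch of writing process event logs to HBase, the code below uses the third-party happybase client. The row-key layout (process id, zero-padded timestamp, event id) and the single 'event' column family are guesses for illustration only, not the schema the paper proposes.

```python
import happybase

connection = happybase.Connection("localhost")  # assumes a running HBase Thrift server
if b"event_log" not in connection.tables():
    connection.create_table("event_log", {"event": dict()})
table = connection.table("event_log")

def put_event(process_id, timestamp_ms, event_id, activity, performer):
    # Row keys sort lexicographically, so zero-padding the timestamp keeps
    # a process instance's events contiguous and time-ordered.
    row_key = f"{process_id}:{timestamp_ms:013d}:{event_id}".encode()
    table.put(row_key, {
        b"event:activity": activity.encode(),
        b"event:performer": performer.encode(),
    })

put_event("proc-0001", 1561939200000, "ev-42", "approve_order", "alice")

# Scanning with a row prefix pulls back one process instance's whole trace.
for key, data in table.scan(row_prefix=b"proc-0001:"):
    print(key, data)
```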