• Title/Summary/Keyword: file access


A Study on the Construction of Database, Online Management System, and Analysis Instrument for Biological Diversity Data (생물다양성 자료의 데이터베이스화와 온라인 관리시스템 및 분석도구 구축에 관한 연구)

  • Bec Kee-Yul;Jung Jong-Chul;Park Seon-Joo;Lee Jong-Wook
    • Journal of Environmental Science International / v.14 no.12 / pp.1119-1127 / 2005
  • The management of data on biological diversity is presently complex and confusing. This study was initiated to construct a database, an online management system, and an analysis instrument to correct the problems inherent in the current incoherent storage methods. MySQL was used as the DBMS (DataBase Management System), and the program was built primarily with Java technology. The program was also developed so that it could adapt to constantly changing requirements; we hope this was accomplished by making it easy and quick to modify, using advanced programming techniques and patterns. To this end, an effective and flexible database schema was devised to store and analyze diversity data. Even users with no knowledge of databases should be able to access this management instrument and easily manage the database through the World Wide Web. On the basis of databases stored in this manner, the analysis instrument supplied on the Web could be routinely used across various databases. By supplying the derived results in simple tables and visualizing them with simple charts, researchers could easily adapt these methods to various data analyses. As the diversity data was stored in a database rather than in ordinary files, this study enables precise, error-free, high-quality storage in a consistent manner. The methods proposed here should also minimize the errors that might appear in data search, data movement, or data conversion by supplying management instrumentation on the Web. In addition, this study aimed to derive results at the required level and execute comparative analyses without the lengthy time usually needed, by supplying an analytical instrument that yields results comparable to those of various other methods of analysis. The results of this research may be summarized as follows: 1) This study suggests methods of storage that give consistency to diversity data. 2) It prepares a foundation for comparative analysis of various data. 3) It suggests further research, which could lead to better standardization of diversity data and to better methods for predicting changes in species diversity.
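The "effective and flexible database schema" the abstract mentions can be illustrated with a small sketch. The paper used MySQL; here `sqlite3` stands in, and all table and column names (`taxon`, `occurrence`, `occ_attribute`) are hypothetical, not taken from the study. The entity-attribute-value table shows one common way a schema stays flexible: new kinds of measurements can be stored without altering the schema.

```python
import sqlite3

# Hypothetical diversity-data schema; sqlite3 stands in for the paper's MySQL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE taxon (
    taxon_id  INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,      -- scientific name
    rank      TEXT                -- e.g. species, genus
);
CREATE TABLE occurrence (
    occ_id    INTEGER PRIMARY KEY,
    taxon_id  INTEGER REFERENCES taxon(taxon_id),
    site      TEXT,
    observed  TEXT                -- ISO date string
);
-- entity-attribute-value table: new measurement kinds need no schema change
CREATE TABLE occ_attribute (
    occ_id    INTEGER REFERENCES occurrence(occ_id),
    attr_name TEXT,
    attr_val  TEXT
);
""")
cur.execute("INSERT INTO taxon VALUES (1, 'Parus major', 'species')")
cur.execute("INSERT INTO occurrence VALUES (1, 1, 'Site A', '2005-06-01')")
cur.execute("INSERT INTO occ_attribute VALUES (1, 'count', '12')")
row = cur.execute("""
    SELECT t.name, o.site, a.attr_val
    FROM occurrence o
    JOIN taxon t USING (taxon_id)
    JOIN occ_attribute a USING (occ_id)
    WHERE a.attr_name = 'count'
""").fetchone()
print(row)  # ('Parus major', 'Site A', '12')
```

The trade-off of the EAV table is typical for this kind of design: it buys schema flexibility at the cost of typed columns, so analysis tools must cast `attr_val` as needed.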

A Comparative Analysis of Subject Headings Related to Korean Border in the Subject Headings of Major Countries (주요 국가의 주제명표목표에 나타난 한국의 국경관련 주제명 비교 분석)

  • Kim, Jeong-Hyen
    • Journal of Korean Library and Information Science Society / v.44 no.2 / pp.217-239 / 2013
  • This research analyzed the actual condition of subject headings related to Korea's borders as shown in the subject headings of seven countries: the United States, France, Germany, Spain, Russia, China, and Japan. The results are as follows. First, Korean border-related records in most national libraries are in extremely poor condition, except in some countries such as the United States: Amnokgang- and Dumangang-related records could not be found at all in France, and Yellow Sea-, Dumangang-, and Baekdusan-related records could not be found at all in Spain. Second, even for Dokdo, over which Korea exercises effective control, the geographical name 'Korea' is not marked in catalog records except in the United States and France; Germany displays both the geographical names 'Korea' and 'Japan'. Third, the East Sea (Donghae) is already marked as 'Sea of Japan' in most of the national library catalogs, and the Yellow Sea (Huanghai) is marked as 'Yellow Sea'. Fourth, Amnokgang and Dumangang are marked with their Chinese pronunciations in most of the national libraries. Fifth, Baekdusan is marked with its Korean pronunciation in most countries; however, the United States distinguishes between 'Baekdu Mountain' and 'Changbai Mountain', and in Germany 'Changbai Mountain' is marked as a variant access point of 'Baekdusan'.

A Study on Research Data Management Services of Research University Libraries in the U.S. (대학도서관의 연구데이터관리서비스에 관한 연구 - 미국 연구중심대학도서관을 중심으로 -)

  • Kim, Jihyun
    • Journal of the Korean BIBLIA Society for library and Information Science / v.25 no.3 / pp.165-189 / 2014
  • This study examined the current practices of Research Data Management (RDM) services recently built and implemented at research university libraries in the U.S. by analyzing the components of the services and the content presented on their websites. The study analyzed the content of web pages describing the services provided by 31 universities classified as Research Universities/Very High research activity under the Carnegie Classification. The analysis was based on nine service components suggested by previous studies: (1) DMP support; (2) file organization; (3) data description; (4) data storage; (5) data sharing and access; (6) data preservation; (7) data citation; (8) data management training; and (9) intellectual property of data. As a result, the vast majority of the universities offered DMP support. More than half provided services for describing and preserving data, as well as data management training. Specifically, RDM services focused on offering guidance on disciplinary metadata and relevant repositories, or on training via individual consulting services. More research and discussion are necessary to better understand intra- and inter-institutional collaboration for implementing the services, and the knowledge and competencies of the librarians in charge of them.

Code Generation for Integrity Constraint Check in Objectivity/C++ (Objectivity/C++에서 무결성 제약조건 확인을 위한 코드 생성)

  • Kim, In-Tae;Kim, Gi-Chang;Yu, Sang-Bong;Cha, Sang-Gyun
    • Journal of KIISE:Computing Practices and Letters / v.5 no.4 / pp.416-425 / 1999
  • To cope with the complexity of handling integrity constraints, numerous researchers have suggested using a rule-based system, in which integrity constraints are expressed as rules and stored in a rule base. A rule manager and an integrity constraint manager cooperate to check the integrity constraints efficiently. In this approach, however, the integrity constraint manager has to monitor the activity of the application program constantly at run time to catch every database operation, and for each operation it has to check the relevant rules with the help of the rule manager, resulting in considerable delays in database access. We propose instead to insert the constraint-checking code directly into the application program at compile time. With the checking code inserted, the application program can check integrity constraints and access the database by itself, without the intervention of the integrity constraint manager, thereby avoiding the processing delays. We investigate what kinds of statements trigger the checking of constraints, show how the compiler can detect those statements during preprocessing, and show how constraint-checking code can be inserted for each such statement, by modifying the GCC YACC file for Objectivity/C++, an object-oriented database programming language.
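The compile-time insertion idea can be sketched in miniature. The paper's implementation modifies the GCC YACC grammar for Objectivity/C++; the sketch below is not that implementation but a toy source preprocessor in Python that makes the same move: it finds assignments to a constrained attribute and inserts a check call before each one, so the instrumented program validates itself at run time with no external monitor. The attribute name `salary` and the helper `check_constraint` are hypothetical.

```python
import re

CHECKED_ATTRS = {"salary"}  # hypothetical constrained attribute

def instrument(source: str) -> str:
    """Insert a constraint check before each assignment to a checked attribute.

    A naive line/regex pattern stands in here; a real compiler (as in the
    paper) would detect these statements on the parse tree instead.
    """
    out = []
    for line in source.splitlines():
        m = re.match(r"(\s*)(\w+)\.(\w+)\s*=\s*(.+)$", line)
        if m and m.group(3) in CHECKED_ATTRS:
            indent, obj, attr, expr = m.groups()
            out.append(f"{indent}check_constraint({obj!r}, {attr!r}, {expr})")
        out.append(line)
    return "\n".join(out)

def check_constraint(obj, attr, value):
    # hypothetical rule: salary must be non-negative
    if attr == "salary" and value < 0:
        raise ValueError(f"constraint violated: {obj}.{attr} = {value}")

program = "emp.salary = 3000"
print(instrument(program))
# check_constraint('emp', 'salary', 3000)
# emp.salary = 3000
```

Because the check travels inside the program text, no run-time monitor needs to intercept the assignment, which is exactly the delay the paper's approach removes.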

Bitmap Indexes and Query Processing Strategies for Relational XML Twig Queries (관계형 XML 가지 패턴 질의를 위한 비트맵 인덱스와 질의 처리 기법)

  • Lee, Kyong-Ha;Moon, Bong-Ki;Lee, Kyu-Chul
    • Journal of KIISE:Databases / v.37 no.3 / pp.146-164 / 2010
  • Due to the increasing volume of XML data, it is considered prudent to store XML data in an industrial-strength database system instead of relying on a domain-specific application or a file system. For shredded XML data stored in relational tables, however, it may not be straightforward to apply existing algorithms for twig query processing, since most of those algorithms require XML data to be accessed as streams of elements grouped by their tags and sorted in a particular order. In order to support XML query processing within the common framework of relational database systems, we first propose several bitmap indexes and strategies for supporting holistic twig joins on XML data stored in relational tables. Since bitmap indexes are well supported in most commercial and open-source database systems, the proposed bitmap indexes and twig query processing strategies can be incorporated into a relational query processing framework with ease. The proposed query processing strategies are efficient in both time and space, because the compressed bitmap indexes stay compressed during data access. In addition, we propose a hybrid index which computes twig query solutions with bit-vectors only, without accessing the labeled XML elements stored in the relational tables.
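The core mechanism, intersecting per-tag bit-vectors over the rows of a shredded element table, can be shown with a minimal sketch. The table rows, tags, and the level predicate below are hypothetical illustrations, not the paper's actual indexes; real systems would use compressed bitmaps (e.g. WAH/RLE) rather than plain Python integers.

```python
# Rows of a flattened XML element table: (row_id, tag, level).
rows = [
    (0, "book", 1), (1, "title", 2), (2, "author", 2),
    (3, "book", 1), (4, "title", 2), (5, "price", 2),
]

def bitmap(pred):
    """Build an integer bit-vector with bit i set when pred(rows[i]) holds."""
    v = 0
    for i, r in enumerate(rows):
        if pred(r):
            v |= 1 << i
    return v

title_bits = bitmap(lambda r: r[1] == "title")   # one bitmap per tag
level2_bits = bitmap(lambda r: r[2] == 2)        # one bitmap per level
candidates = title_bits & level2_bits            # bitwise AND narrows candidates
matches = [i for i in range(len(rows)) if candidates >> i & 1]
print(matches)  # [1, 4] — row ids of <title> elements at level 2
```

The point of the hybrid index in the abstract is visible even here: the AND is answered entirely on bit-vectors, and the element table itself is only touched to materialize the final matches.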

A Qualitative Study on Librarians' Recognition of the Joint Utilization of National Authority Data (국가전거데이터 공동활용에 대한 사서들의 인식에 관한 질적 탐구)

  • Lee, Sung-Sook
    • Journal of the Korean BIBLIA Society for library and Information Science / v.32 no.1 / pp.443-467 / 2021
  • The purpose of this study was to interview librarians who have experience building local authority data through participation in the national authority sharing system of the National Library of Korea, and to understand librarians' perceptions of, and the support needed for, the joint utilization of national authority data. For this purpose, ten librarians who participated in the national authority sharing system project were interviewed by telephone using semi-structured questionnaires. This made it possible to investigate the benefits, difficulties, utilization plans, plans for revising headings, and opinions on necessary support. The results showed that the participants recognized that the joint utilization of national authority data provides a basis for the authority work of local libraries and makes that work more efficient, but they also noted difficulties in modifying, selecting, and creating new data, a lack of knowledge, and the absence of a support system. The support identified as necessary included education and manuals related to authority work, provision of authority rules that fully consider the position of each institution, budget and manpower support for system development and maintenance, establishment of communication channels and a council, advancement of the system and data, and incentives for participating libraries. Based on the results, the method and direction for the future operation of the joint utilization of national authority data were presented.

A Study on Non-Fungible Token Platform for Usability and Privacy Improvement (사용성 및 프라이버시 개선을 위한 NFT 플랫폼 연구)

  • Kang, Myung Joe;Kim, Mi Hui
    • KIPS Transactions on Computer and Communication Systems / v.11 no.11 / pp.403-410 / 2022
  • Non-Fungible Tokens (NFTs) created on the basis of blockchain have their own unique value, so they cannot be forged or exchanged for other tokens or coins. Using these characteristics, NFTs can be issued for digital assets such as images, videos, artworks, game characters, and items, both to claim ownership of digital assets among the many users and objects in cyberspace and to prove originality. However, interest in NFTs has exploded since early 2020, placing a heavy load on blockchain networks; as a result, users experience problems such as delays in computational processing and very large fees in the mining process. Additionally, all actions of users are stored on the blockchain, and digital assets are stored in a blockchain-based distributed file storage system, which may unnecessarily expose the personal information of users who do not want to be identified on the Internet. In this paper, we propose an NFT platform using cloud computing, an access gate, a conversion table, and cloud IDs to improve the usability and privacy problems that occur in existing systems. For performance comparison between the local and cloud systems, we measured the gas used for smart contract deployment and for NFT-issuing transactions. As a result, even under the same experimental environment and parameters, the cloud system saved about 3.75% of the gas for smart contract deployment and about 4.6% for NFT-issuing transactions, confirming that the cloud system can handle computations more efficiently than the local system.

Signal and Telegram Security Messenger Digital Forensic Analysis study in Android Environment (안드로이드 환경에서 Signal과 Telegram 보안 메신저 디지털 포렌식분석 연구)

  • Jae-Min Kwon;Won-Hyung Park;Youn-sung Choi
    • Convergence Security Journal / v.23 no.3 / pp.13-20 / 2023
  • This study conducted a digital forensic analysis of Signal and Telegram, two secure messengers widely used in the Android environment. As mobile messengers now play an important role in daily life, data management and security within these apps have become very important issues. Signal and Telegram in particular are secure messengers that users trust highly, and they protect users' personal information on the basis of encryption technology. However, much research is still needed on how to analyze such encrypted data. To address this, this study conducted an in-depth analysis of the message encryption of Signal and Telegram and of their database structures and encryption methods on Android devices. In the case of Signal, we were able to successfully decrypt encrypted messages that are difficult to access from the outside due to complex algorithms, and to confirm their contents. In addition, the database structure of the two messenger apps was analyzed in detail, and the information was organized into a folder structure and file format that can be used at any time. Based on this information, more accurate and detailed digital forensic analysis should become possible in the future by applying more advanced technologies and methodologies. This research should help increase understanding of secure messengers such as Signal and Telegram, opening up possibilities for use in areas such as personal information protection and crime prevention.
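The schema-mapping step described in the abstract can be sketched generically. This is not the paper's procedure: Signal's real Android database is SQLCipher-encrypted and would first have to be decrypted, and the table and column names below are hypothetical. The sketch only shows the routine forensic move of enumerating tables and columns in an already-decrypted SQLite file.

```python
import sqlite3

# Stand-in for an already-decrypted messenger database (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE message (id INTEGER PRIMARY KEY, thread_id INTEGER,
                      body TEXT, date_sent INTEGER);
CREATE TABLE thread (id INTEGER PRIMARY KEY, recipient TEXT);
""")

# Map the schema the way an examiner would: every table, every column.
schema = {}
for (name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
    cols = [c[1] for c in conn.execute(f"PRAGMA table_info({name})")]
    schema[name] = cols
print(schema)
# {'message': ['id', 'thread_id', 'body', 'date_sent'],
#  'thread': ['id', 'recipient']}
```

Building such a map first is what lets later queries (message bodies joined to threads, timestamps converted to dates) be written against a known structure instead of guessed column names.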

Research on the Re-organization of the Administration of Labor's Records in the custody of the National Archives (노동청 기록의 재조직에 관한 연구 - 국가기록원 소장 기록을 중심으로 -)

  • Kwak, Kun-Hong
    • The Korean Journal of Archival Studies / no.23 / pp.141-178 / 2010
  • The Administration of Labor was responsible for technical and practical functions such as making labor policy and implementing the relevant laws. However, few records that would help trace the labor policy-making process have been transferred to the National Archives. This is a typical example of the discontinuity, imbalance, and disorderly filing of administrative records in Korea. Naturally, it is almost impossible to retrieve the appropriate content through record file names alone; users must take the trouble to compare the record items and their contents one by one. For the re-organization of the Administration of Labor's records, this research suggests a four-level analysis of the Administration's functions, to which the surviving records can be linked. Publishing a 'Records Abstract Catalog' that provides users with more information about the records would pave the way for easier access. In addition, the research suggests a logical re-filing of the surviving records for which no order or sequence can be found. This re-organization would help establish an acquisition and appraisal policy for labor records, as well as new methods of description and new finding aids. Drawing up a labor history map is a starting point for an acquisition strategy for labor records, one that could give users systematic access to the surviving records; extensive investigation and research on those records is a prerequisite for such a map. It would also be necessary to research the surviving records of other government agencies, including ministries in the economic and social areas, investigative agencies, and the National Assembly, and to arrange and typify the significant incidents and activities of labor history in thematic and periodic frames. If the surviving records and these accomplishments can be understood and connected comprehensively, it would be of great help for the acquisition of labor records and for related oral history projects.

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city that satisfies the desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE/IoT), and it includes many networked video cameras. These networked cameras, together with sensors, support many U-City services as one of the main input sources, and they constantly generate a huge amount of video information, truly big data, for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. It is also often required that the accumulated video data be analyzed to detect an event or find a figure, which demands a lot of computational power and usually takes a long time. Research efforts to reduce the processing time of big video data already exist, and cloud computing can be a good solution to this problem. Among the many cloud computing methodologies that could be applied, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, which leads to exponential growth in the data produced by networked cameras; we are dealing with real big data when we handle video produced by good-quality cameras. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities thanks to such methodologies. Video data is unstructured, so good research results on analyzing it with MapReduce are hard to find. This paper presents an analysis system for video surveillance, a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable.
It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming-IN component. The video monitor consists of a video translator and a protocol manager, and the storage contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The streaming-IN component receives the video data from the networked video cameras and delivers it to the storage client; it also manages network bottlenecks to smooth the data stream. The storage client receives the video data from the streaming-IN component and stores it in the storage; it also helps other components access the storage. The video monitor component transfers the video data by smooth streaming and manages the protocols: the video translator sub-component enables users to manage the resolution, codec, and frame rate of the video image, and the protocol sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage; Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We suggest our own methodology for analyzing video images using MapReduce: the workflow of the video analysis is presented and explained in detail in this paper. The performance evaluation was carried out experimentally, and we found that the proposed system worked well; the evaluation results are presented with analysis. In our cluster system, we used compressed 1920×1080 (FHD) video data, the H.264 codec, and HDFS as the video storage, and we measured the processing time according to the number of frames per mapper.
Tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
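The map-then-reduce workflow over frame splits can be sketched in miniature. The frame records, the `has_event` flag, and the split size below are hypothetical stand-ins for the paper's H.264 frame data and "frames per mapper" parameter; Hadoop would run the mappers in parallel across nodes, whereas this sketch runs them sequentially.

```python
from collections import Counter
from functools import reduce

# Hypothetical frame records: every fifth frame contains an event.
frames = [{"id": i, "has_event": i % 5 == 0} for i in range(20)]

def mapper(split):
    """Analyze one split of frames and emit partial event counts."""
    return Counter("event" if f["has_event"] else "no_event" for f in split)

def reducer(a, b):
    """Merge two partial counts (Counter addition)."""
    return a + b

# Split size plays the role of the 'number of frames per mapper' knob.
splits = [frames[i:i + 5] for i in range(0, len(frames), 5)]
total = reduce(reducer, map(mapper, splits))
print(dict(total))  # {'event': 4, 'no_event': 16}
```

Varying the split size against wall-clock time is the same experiment the abstract describes: too-small splits pay per-mapper overhead, too-large splits lose parallelism, and the optimum sits between.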