• Title/Summary/Keyword: metadata quality (114 results)

Content Adaptation based on Metadata Hiding (메타데이터 은닉에 기반한 컨텐츠 적응변환)

  • Jung, Yong-Ju;Kang, Ho-Kyung;Ro, Yong-Man
    • Proceedings of the IEEK Conference / 2003.11b / pp.31-34 / 2003
  • In this paper, we propose an application of data hiding to content adaptation, in which one can reduce computational time and obtain better transcoding results or quality. Hiding useful information for content adaptation helps the resource tailor perform effective transcoding, achieving lower complexity as well as better quality. Experimental results show that the proposed data-hiding-based method makes content adaptation effective while maintaining reasonable subjective quality. A schematic sketch of the general idea follows this entry.

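The abstract does not state which embedding scheme is used, so the sketch below only illustrates the general idea in Python: transcoding hints (here a hypothetical JSON payload with a region of interest and a target bitrate) are hidden in the least-significant bits of an image, so that a downstream resource tailor can recover them without re-analyzing the content. Plain LSB embedding stands in for whatever data-hiding method the paper actually employs.

```python
import json
import numpy as np

def embed_hints(pixels: np.ndarray, hints: dict) -> np.ndarray:
    """Hide a length-prefixed, JSON-encoded hint string in the LSBs of a uint8 image."""
    payload = json.dumps(hints).encode("utf-8")
    data = len(payload).to_bytes(4, "big") + payload
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = pixels.flatten()                      # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("cover image too small for the hint payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(pixels.shape)

def extract_hints(pixels: np.ndarray) -> dict:
    """Recover the hint string from the LSBs."""
    lsbs = pixels.flatten() & 1
    length = int.from_bytes(np.packbits(lsbs[:32]).tobytes(), "big")
    body = np.packbits(lsbs[32:32 + 8 * length]).tobytes()
    return json.loads(body.decode("utf-8"))

cover = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
stego = embed_hints(cover, {"roi": [120, 80, 320, 240], "target_bitrate_kbps": 256})
print(extract_hints(stego))   # the transcoder reads the hints instead of re-analyzing
```

In a real adaptation pipeline the recovered hints would then drive decisions such as cropping to the region of interest or selecting the target bitrate, which is where the reduction in transcoding complexity would come from.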

Development of the Quality Management Software of Spatial Database (공간데이터베이스 품질유지관리 소프트웨어 개발)

  • 최병길;조광희
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2003.04a / pp.285-290 / 2003
  • This study aims to standardize the process of spatial database construction and to develop a software tool for process management. The know-how of five GIS firms and the provisions of the NGI (National Geography Institute) were analyzed, and the spatial database construction process was standardized. The resulting system can manage the database construction process using GIS and handle the unit-map metadata generated during the process systematically and continuously. The process can also be displayed as a Gantt chart or PERT chart through an interface developed for MS Project. The software was implemented with Visual Basic for Applications.


Quality Evaluation of the Open Standard Data (공공데이터 개방표준 데이터의 품질평가)

  • Kim, Haklae
    • The Journal of the Korea Contents Association / v.20 no.9 / pp.439-447 / 2020
  • Public data refers to all data or information created by public institutions, and to public information that promotes communication and cooperation among citizens. Public data is an important driver of next-generation industries such as artificial intelligence and smart cities, and Korea is consistently ranked high in international evaluations of public data. However, despite these continued efforts, the use and industrial impact of public data remain limited. Quality issues are repeatedly raised in the use of public data, yet criteria for quantitatively evaluating data are lacking. This paper reviews indicators for public data quality evaluation and performs a quantitative evaluation of selected public data. In particular, the quality of open standard data, constructed and released under the public data management guidelines, is examined to determine whether the government guidelines are appropriate. The quality assessment covers both the metadata and the data values of open standard data and is based on completeness and accuracy indicators. Based on the analysis results, the paper proposes policy and technical measures for quality improvement. A simplified sketch of such an assessment follows this entry.
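
As a rough illustration of the two indicators named in the abstract, the sketch below computes completeness (share of non-missing values) and a simple accuracy check (share of values matching a declared format) for a tabular open-data file with pandas. The file name, column names, and format rules are hypothetical; the paper's actual indicator definitions may be more detailed.

```python
import pandas as pd

df = pd.read_csv("open_standard_dataset.csv")   # hypothetical file

# Completeness: share of non-missing values per column.
completeness = df.notna().mean()

# Accuracy: share of (non-missing) values matching a declared format rule.
rules = {
    "zip_code": r"\d{5}",                   # hypothetical column and rule
    "date_designated": r"\d{4}-\d{2}-\d{2}",
}
accuracy = {
    col: df[col].dropna().astype(str).str.fullmatch(pat).mean()
    for col, pat in rules.items() if col in df.columns
}

print("completeness:\n", completeness.round(3))
print("accuracy:", {col: round(val, 3) for col, val in accuracy.items()})
```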

Development of Data Profiling Software Supporting a Microservice Architecture (마이크로 서비스 아키텍처를 지원하는 데이터 프로파일링 소프트웨어의 개발)

  • Chang, Jae-Young;Kim, Jihoon;Jee, Seowoo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.5 / pp.127-134 / 2021
  • Recently, acquiring high-quality data has become an important issue with the expansion of the big data industry. To acquire high-quality data, accurate evaluation of data quality must come first. Data quality can be evaluated through meta-information such as statistics on the data, and the task of extracting such meta-information is called data profiling. Until now, data profiling software has typically been provided as a component or add-on service of traditional data quality or visualization tools, so it was not suitable for direct use in diverse environments. To address this problem, this paper presents data profiling software based on a microservice architecture that can be deployed in various environments. The data profiler provides an easy-to-use interface through which requests for meta-information are served over a RESTful API. Because it is independent of any specific environment, it can be integrated efficiently with various big data platforms and data analysis tools. A minimal sketch of such a profiling service is given below.
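
A minimal sketch of such a stand-alone profiling service is shown below, using FastAPI as the microservice framework. The endpoint path, query parameter, and response fields are illustrative assumptions, not the actual API of the software described in the paper.

```python
import pandas as pd
from fastapi import FastAPI

app = FastAPI()

@app.get("/profile")
def profile(path: str):
    """Return column-level meta-information for a CSV file reachable by the service."""
    df = pd.read_csv(path)
    columns = {}
    for name in df.columns:
        s = df[name]
        info = {
            "dtype": str(s.dtype),
            "missing": int(s.isna().sum()),
            "distinct": int(s.nunique()),
        }
        if pd.api.types.is_numeric_dtype(s):
            info.update(min=float(s.min()), max=float(s.max()), mean=float(s.mean()))
        columns[name] = info
    return {"rows": int(len(df)), "columns": columns}

# Example (assuming this file is saved as profiler.py):
#   uvicorn profiler:app --port 8000
#   curl "http://localhost:8000/profile?path=sample.csv"
```

Because the profiler runs as its own service behind a REST endpoint, any platform that can issue HTTP requests can consume the meta-information, which is the portability argument the abstract makes.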

Design and Implementation of Integrated MapServer Based on GML (GML 기반 통합 맵서버 설계 및 구현)

  • Lee, Hye-Jin;Kim, Dong-Ho;Lee, Hyun-Ah;Kim, Jin-Suk
    • Journal of the Korean Association of Geographic Information Studies / v.6 no.4 / pp.71-84 / 2003
  • This paper designs and implements an integrated map server that produces an integrated map. The proposed map server has two requirements. First, the system should support customization of metadata to meet users' requirements: the map server predefines information such as representations and regions needed for client-side processing in the metadata, which allows clients to easily obtain an integrated map in the form they require. Second, the system should support the integration not only of spatial data but also of the associated non-spatial data; this fusion service is realized through the concept of linking. The integrated map server improves the quality of the fusion service by integrating the results of multiple map servers according to user requests. Our experiments show that the integrated map server reduces response time in web mapping environments. A schematic sketch of these two ideas follows this entry.

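The sketch below only illustrates the two requirements schematically: a customization record of the kind the metadata might predefine (region, representation, layers), and the "linking" of spatial features to non-spatial attributes through a shared key. All field names and sample values are invented for the example; the abstract does not spell out the actual metadata schema.

```python
from dataclasses import dataclass, field

@dataclass
class MapRequestMetadata:
    """Client-side customization predefined in the map-server metadata (hypothetical)."""
    region: tuple                 # bounding box (minx, miny, maxx, maxy)
    representation: str           # output form required by the client, e.g. "SVG"
    layers: list = field(default_factory=list)

meta = MapRequestMetadata(region=(127.0, 36.0, 127.5, 36.5),
                          representation="SVG", layers=["buildings"])

# "Linking": spatial features (e.g. parsed from GML) are fused with
# non-spatial records through a shared feature identifier.
features = [{"fid": "B001", "geometry": "POLYGON((...))"},
            {"fid": "B002", "geometry": "POLYGON((...))"}]
attributes = {"B001": {"use": "office", "floors": 12},
              "B002": {"use": "residential", "floors": 5}}

fused = [{**f, **attributes.get(f["fid"], {})} for f in features]
print(meta)
print(fused)
```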

A Study on Developing a Name Access Point Control System to Improve the Performance of Information Retrieval from Institutional Repositories (기관 리포지터리의 검색기능 향상을 위한 인명 접근점제어 시스템 구축 연구)

  • Kim, Mi-Hyang;Kim, Tae-Soo
    • Journal of the Korean Society for Information Management / v.27 no.3 / pp.125-146 / 2010
  • This study developed a name access point control system to improve information retrieval from institutional repositories, which rely on author-generated metadata for self-archiving. In developing the name access point control data, the primary data were created from existing authority data; unlike conventional authority data, however, they did not designate any authorized forms. Instead, all name forms provided by the resources were used as access points, and field of activity (subject) and title information on authorship were used to distinguish persons with the same name. The results showed that the system improved retrieval performance. The system is also expected to be applied to other metadata provided by libraries, beyond institutional repositories, to provide higher-quality information. A rough sketch of the disambiguation idea follows this entry.
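
A rough sketch of the disambiguation step is given below: records that share the same name string are split into separate identities when their fields of activity (subjects) do not overlap. The greedy clustering rule, the sample name, and the subject sets are illustrative assumptions, not the system's actual algorithm.

```python
from collections import defaultdict

# Hypothetical repository records that all carry the same author name form.
records = [
    {"name": "Hong, Gil-Dong", "subjects": {"information science", "metadata"}},
    {"name": "Hong, Gil-Dong", "subjects": {"metadata", "institutional repositories"}},
    {"name": "Hong, Gil-Dong", "subjects": {"organic chemistry"}},
]

def cluster_by_subject(recs):
    """Greedy clustering: attach a record to the first identity whose subjects overlap;
    otherwise start a new identity for the same name."""
    identities = []
    for rec in recs:
        for ident in identities:
            if ident["subjects"] & rec["subjects"]:
                ident["records"].append(rec)
                ident["subjects"] |= rec["subjects"]
                break
        else:
            identities.append({"subjects": set(rec["subjects"]), "records": [rec]})
    return identities

by_name = defaultdict(list)
for rec in records:
    by_name[rec["name"]].append(rec)

for name, recs in by_name.items():
    print(name, "->", len(cluster_by_subject(recs)), "distinct identities")
```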

Comparison and Analysis of Metadata Schema for Academic Paper Integrated DB (학술논문 통합 DB 구축을 위한 메타데이터 스키마 비교 분석)

  • Choi, Wonjun;Hwang, Hyekyong;Kim, Jeonghwan;Lee, Kangsandajeong;Lim, Seokjong
    • The Journal of the Korea Contents Association / v.20 no.2 / pp.689-699 / 2020
  • The National Science and Technology Information Center (NDSL) database, which provides domestic and international academic papers, collects, builds, and manages data gathered from various sources. In this study, we analyzed the paper schemas and metadata of the currently constructed and distributed databases in order to derive an integrated DB schema that can manage high-value-added papers efficiently. The final academic information data items were also determined through comparison and analysis with the Web of Science and SCOPUS schemas that are currently licensed. The academic information data items constructed and serviced through this study were organized into seven groups (papers, authors, abstracts, institutions, themes, journals, and references) and defined as the core contents to be built. The integrated DB schema created in this study will serve as a basis for constructing an integrated collection of high-quality academic papers and for designing the integrated system. A minimal sketch of such a schema follows this entry.
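
As a minimal sketch of what an integrated schema organized around those seven groups might look like, the snippet below defines a few Python dataclasses. All field names are placeholders; the schema actually derived in the study is not reproduced here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Journal:
    journal_id: str
    title: str
    issn: str = ""

@dataclass
class Author:
    author_id: str
    name: str
    institution_id: str = ""          # institutions group

@dataclass
class Paper:
    paper_id: str
    title: str
    journal_id: str
    year: int
    abstract: str = ""                                      # abstracts group
    themes: List[str] = field(default_factory=list)         # themes group
    author_ids: List[str] = field(default_factory=list)
    reference_ids: List[str] = field(default_factory=list)  # references group

paper = Paper("P0001", "A sample integrated-DB record", "J0001", 2020,
              themes=["library and information science"], author_ids=["A0001"])
print(paper)
```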

Development of the Electronic Book of Korea Standard (EBKS) (한국 전자책 문서표준(EBKS)의 개발)

  • 손원성;고승규;이경호;김재경;김성혁;임순범;최윤철
    • Journal of the Korean Society for Information Management / v.18 no.2 / pp.255-272 / 2001
  • Generally, a digitized paper book, or a method of delivering information to users in the form of a paper book after digitization, is called an ebook. With the rapid spread of the internet and the development of digital information technology, ebooks have become an important issue worldwide. Ebooks not only deliver information efficiently and at low cost, but are also more effective than paper books at conveying information through technologies such as multimedia. However, ebook markets have struggled to grow because of incompatible ebook formats. This study describes the development of the Electronic Book of Korea Standard (EBKS) to solve these problems. The purpose of EBKS is the exact exchange of ebook content. To achieve this, we define an explicit logical structure for documents, recommend metadata suited to the domestic environment, and adopt XSL-FO to provide high-quality layout. We expect that the establishment of EBKS will contribute greatly to the growth of the ebook market by preventing duplicated investment in content and technologies. A toy sketch of the structured-document idea follows this entry.

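The toy snippet below illustrates the general approach of an explicit logical document structure plus a metadata block, built as XML from Python. The element names are hypothetical and do not reproduce the actual EBKS DTD/schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical element names; the real EBKS structure is defined in its own DTD/schema.
book = ET.Element("ebook")

meta = ET.SubElement(book, "metadata")
ET.SubElement(meta, "title").text = "Sample Title"
ET.SubElement(meta, "creator").text = "Hong, Gil-Dong"
ET.SubElement(meta, "language").text = "ko"

body = ET.SubElement(book, "body")
chapter = ET.SubElement(body, "chapter", {"id": "ch1"})
ET.SubElement(chapter, "heading").text = "Chapter 1"
ET.SubElement(chapter, "para").text = "Content exchanged in an explicit, structured form."

print(ET.tostring(book, encoding="unicode"))
```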

A Design of Clinical Information Exchange Framework for Performance Improvement based on Lazy Response Model (지연 응답 모델에 기반한 성능 개선 진료정보 교류 프레임워크의 설계)

  • Lee, Se-Hoon;Shim, Woo-Ho
    • Journal of the Korea Society of Computer and Information / v.17 no.9 / pp.157-164 / 2012
  • In today's medical service environment, clinical information exchange, which contributes to patient safety, service quality, patient convenience, and the efficiency of medical procedures and management, is an essential service model. In practice, however, exchanging clinical information between institutions is difficult because of varying levels of information, the absence of a standardization framework, and heterogeneous information systems. In this paper, we analyze the relevant technical standards and models of clinical information exchange, and design a clinical information exchange system based on a lazy response model aimed at vitalizing the exchange of clinical information within the domestic legal environment. When exchanging clinical information, the system separates the CDA document flow from the metadata flow. Experimental results show a 24% performance improvement over the existing system when the lazy response model is used. A schematic sketch of the lazy response idea follows this entry.
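
The sketch below illustrates the lazy response idea in isolation: lightweight document metadata is exchanged eagerly, while the full CDA document is fetched only when it is first accessed. Class names, fields, and the repository interface are illustrative assumptions, not the paper's implementation.

```python
class Repository:
    """Stands in for the system that stores and serves full CDA documents."""
    def __init__(self, store):
        self.store = store

    def fetch_cda(self, doc_id):
        print(f"fetching CDA document {doc_id} ...")   # the expensive step being deferred
        return self.store[doc_id]

class DocumentEntry:
    """Metadata record exchanged eagerly; the CDA body is retrieved on demand."""
    def __init__(self, doc_id, patient_id, repository):
        self.doc_id = doc_id
        self.patient_id = patient_id
        self._repository = repository
        self._cda = None

    @property
    def cda(self):
        if self._cda is None:                          # lazy fetch on first access only
            self._cda = self._repository.fetch_cda(self.doc_id)
        return self._cda

repo = Repository({"D1": "<ClinicalDocument>...</ClinicalDocument>"})
entry = DocumentEntry("D1", "P123", repo)
print(entry.doc_id, entry.patient_id)   # metadata flow: no document transfer yet
print(entry.cda)                        # document flow: retrieved only here
```

Separating the two flows this way is what lets the metadata exchange stay fast even when the referenced documents are large or rarely opened.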

Metadata Analysis of Open Government Data by Formal Concept Analysis (형식 개념 분석을 통한 공공데이터의 메타데이터 분석)

  • Kim, Haklae
    • The Journal of the Korea Contents Association / v.18 no.1 / pp.305-313 / 2018
  • Open government data is a database or electronic file produced by a public agency or the government. The government releases public data through open data portals and individual agency websites, but from the perspective of data users there are still limits to searching for and using the data they need. In particular, it takes considerable effort and time to understand the characteristics of data sets and to combine different data sets. This study explores the possibility of interlinking data sets by analyzing the common relationships among the item names they contain. Data sets are collected from the open data portal, and the item names included in them are extracted. The extracted item names form a formal context, from which formal concepts are derived through formal concept analysis. Each formal concept has a list of data sets as its extent and a set of item names as its intent; the common items in the intent are analyzed to determine whether the data sets can be connected. The results of the formal concept analysis can be applied effectively to the semantic linking of public data, and to data standardization and quality improvement for public data release. A small sketch of the formal-context idea appears below.
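
A small sketch of the formal-context idea is shown below: data sets act as objects, their item (column) names as attributes, and the attributes shared by a group of data sets, i.e. the derivation of that group, point to candidate keys for linking them. The data set and column names are invented for the example.

```python
from itertools import combinations

# Formal context: objects are data sets, attributes are their item (column) names.
context = {
    "parking_lots":  {"address", "latitude", "longitude", "phone"},
    "public_wifi":   {"address", "latitude", "longitude", "installed_date"},
    "street_lights": {"address", "managing_agency"},
}

def common_items(datasets):
    """Attributes shared by every data set in the group (the derivation operator)."""
    sets = [context[d] for d in datasets]
    return set.intersection(*sets) if sets else set()

# Shared item names such as "address" suggest where the data sets could be interlinked.
for size in range(2, len(context) + 1):
    for group in combinations(context, size):
        shared = common_items(group)
        if shared:
            print(group, "->", sorted(shared))
```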