• Title/Summary/Keyword: Common Data Model (공통데이터모델)


A Study on the Evaluation of Public Librarian's Core Competency Value (공공도서관 사서의 공통역량 평가에 관한 연구)

  • Park, Heejin;Kim, Jinmook;Cha, Sung-Jong
    • Journal of the Korean BIBLIA Society for Library and Information Science
    • /
    • v.32 no.1
    • /
    • pp.335-360
    • /
    • 2021
  • This study aimed to develop a self-diagnosis tool for evaluating the common competency level of public library librarians, apply it to practicing public library librarians, and analyze the factors of competency value evaluation through empirical research methods. To this end, the study modified existing librarian competency value evaluation indicators from a public library perspective and conducted a survey in which public library librarians self-diagnosed their common competencies. The analysis showed that public library librarians themselves rate the level of core competency that professional librarians should acquire as above average. Among the overall competencies, the 'librarians' behavior and attitude' area scored highest on average, followed by the 'librarians' skill' and 'librarians' knowledge' areas. To strengthen the capacity of public library librarians across their various duties, the study suggested establishing a re-education system for librarians, a systematic framework for promoting librarians' duties as professionals, and a personnel system for professional development.

Implementation of GPM Core Model Using OWL DL (OWL DL을 사용한 GPM 핵심 모델의 구현)

  • Choi, Ji-Woong;Park, Ho-Byung;Kim, Hyung-Jean;Kim, Myung-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.1
    • /
    • pp.31-42
    • /
    • 2010
  • GPM (Generic Product Model), developed by Hitachi in Japan, is a common data model for integrating and sharing life-cycle data of nuclear power plants. GPM consists of the GPM core model (an abstract model), an implementation language for the model, and a reference library written in that language. A distinctive feature of the GPM core model is that it can construct a semantic network model consisting of relationships among objects. GPM initially provided GPML as the implementation language supporting this feature, but GPML was later replaced by the XML-based GPM-XML to achieve data interoperability with the heterogeneous applications that access a GPM data model. However, data models written in GPM-XML fall short as semantic network models, because studies supporting GPM-XML and enabling such use are lacking. This paper proposes OWL as the implementation language for the GPM core model, because OWL can describe ontologies similar to semantic network models and is backed by an abundance of technical standards and supporting tools. OWL can also be expressed as RDF/XML, which is XML-based and thus preserves data interoperability. The paper uses OWL DL, one of the three sublanguages of OWL, because it guarantees complete reasoning and maximum expressiveness at the same time. It describes how to overcome the differences between GPM and OWL DL and, based on this approach, how to convert the reference library written in GPML into OWL DL ontologies expressed in RDF/XML.
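As a minimal sketch of the idea above, the snippet below emits an RDF/XML document declaring two OWL classes, an object property linking them, and one individual-to-individual edge, i.e. a single relationship of a semantic network. The namespace and the class/property names (`Plant`, `Component`, `hasComponent`) are hypothetical stand-ins, not terms from the actual GPM reference library.

```python
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
OWL_NS = "http://www.w3.org/2002/07/owl#"
RDFS_NS = "http://www.w3.org/2000/01/rdf-schema#"
GPM_NS = "http://example.org/gpm#"   # hypothetical namespace

for prefix, uri in [("rdf", RDF_NS), ("owl", OWL_NS),
                    ("rdfs", RDFS_NS), ("gpm", GPM_NS)]:
    ET.register_namespace(prefix, uri)

root = ET.Element(f"{{{RDF_NS}}}RDF")

# Two OWL DL classes standing in for GPM core-model object types.
for name in ("Plant", "Component"):
    cls = ET.SubElement(root, f"{{{OWL_NS}}}Class")
    cls.set(f"{{{RDF_NS}}}about", GPM_NS + name)

# An object property: the OWL rendering of a GPM relationship type.
prop = ET.SubElement(root, f"{{{OWL_NS}}}ObjectProperty")
prop.set(f"{{{RDF_NS}}}about", GPM_NS + "hasComponent")
ET.SubElement(prop, f"{{{RDFS_NS}}}domain").set(f"{{{RDF_NS}}}resource", GPM_NS + "Plant")
ET.SubElement(prop, f"{{{RDFS_NS}}}range").set(f"{{{RDF_NS}}}resource", GPM_NS + "Component")

# One individual linked to another: a single semantic-network edge.
unit = ET.SubElement(root, f"{{{GPM_NS}}}Plant")
unit.set(f"{{{RDF_NS}}}about", GPM_NS + "Unit1")
ET.SubElement(unit, f"{{{GPM_NS}}}hasComponent").set(f"{{{RDF_NS}}}resource", GPM_NS + "Pump1")

print(ET.tostring(root, encoding="unicode"))
```

Because RDF/XML is plain XML, any XML-capable application can at least parse the exchange format, while OWL-aware tools can additionally reason over it.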

A Study on Book Metadata Creation and Distribution on Supply Chain (공급사슬상의 도서메타데이터 생성.유통에 관한 고찰)

  • Cho, Jane
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.44 no.3
    • /
    • pp.61-80
    • /
    • 2010
  • The publishing community now recognizes the importance of metadata in customers' buying decisions. As a result, it is increasingly interested in effective metadata creation and quality maintenance, as well as in standardizing the exchange systems used in the supply chain. The library community, likewise investigating the economic effectiveness of metadata creation, is trying to find the best model for simplifying it by using sources close to the original. This study analyzes metadata workflows that start from the same source but serve different fields, each with its own type and standard. It also discusses the issues each sector faces and the possibilities for interoperation. Finally, the paper seeks an effective creation and distribution model for book metadata that can serve both the domestic publishing and library communities.

Effective and Statistical Quantification Model for Network Data Comparing (통계적 수량화 방법을 이용한 효과적인 네트워크 데이터 비교 방법)

  • Cho, Jae-Ik;Kim, Ho-In;Moon, Jong-Sub
    • Journal of Broadcast Engineering
    • /
    • v.13 no.1
    • /
    • pp.86-91
    • /
    • 2008
  • In network data analysis, assessing how well sampled data reflect the population data is essential. This paper compares and analyzes the well-known MIT Lincoln Lab network data, which consists of standard information collectable from the network, with the KDD CUP 99 dataset, which was derived from the MIT/LL data. The protocol information of both datasets was used for the comparison. Correspondence analysis was used for the analysis, SVD for 2-dimensional visualization, and weighted Euclidean distance for network data quantification.
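The three techniques named above fit together as follows: correspondence analysis standardizes a contingency table of counts, an SVD of the standardized residuals yields low-dimensional coordinates for plotting, and the chi-square (weighted Euclidean) distance quantifies how far apart two datasets' protocol profiles are. The sketch below, with hypothetical protocol counts in place of the MIT/LL and KDD CUP 99 tables, shows the standard computation.

```python
import numpy as np

# Toy contingency table: rows = datasets, columns = protocols (tcp, udp, icmp).
# The counts are hypothetical stand-ins for the paper's two datasets.
N = np.array([[5200., 1800., 300.],
              [4900., 2100., 450.]])

P = N / N.sum()                # correspondence matrix
r = P.sum(axis=1)              # row masses
c = P.sum(axis=0)              # column masses

# Standardized residuals, then SVD: the core of correspondence analysis.
S = np.diag(1 / np.sqrt(r)) @ (P - np.outer(r, c)) @ np.diag(1 / np.sqrt(c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal row coordinates; plotting the first two gives the 2-D view
# (with only two rows, at most one dimension is nontrivial).
F = (U * sv) / np.sqrt(r)[:, None]

# Weighted Euclidean (chi-square) distance between the two row profiles,
# with weights 1/c_k: the quantified difference between the datasets.
profiles = P / r[:, None]
d = np.sqrt((((profiles[0] - profiles[1]) ** 2) / c).sum())
print("chi-square distance between datasets:", round(d, 4))
```

A useful sanity check is that the chi-square distance between row profiles equals the ordinary Euclidean distance between their principal coordinates in `F`, which is exactly why the SVD plot faithfully displays the weighted distances.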

Study on HIPAA PHI application method to protect personal medical information in OMOP CDM construction (OMOP CDM 구축 시 개인의료정보 보호를 위한 HIPAA PHI 적용 방법 연구)

  • Kim, Hak-Ki;Jung, Eun-Young;Park, Dong-Kyun
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.13 no.6
    • /
    • pp.66-76
    • /
    • 2017
  • In this study, we investigated how to protect personal healthcare information when constructing an OMOP (Observational Medical Outcomes Partnership) CDM (Common Data Model). Two methods are proposed: restricting data corresponding to HIPAA (Health Insurance Portability and Accountability Act) PHI (Protected Health Information) from being extracted into the CDM, or de-identifying it. Although the processing of sensitive information is restricted by the Korean Personal Information Protection Act and medical law, there is no clear regulation on what counts as sensitive information, which makes it difficult to select the information that must be protected. To solve this problem, we adopted HIPAA PHI as the restriction criterion under Article 23 of the Personal Information Protection Act and mapped the corresponding data to CDM data. We expect this study to contribute to the spread of CDM construction in Korea by offering a solution to the problem of protecting personal healthcare information generated during CDM construction.
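The two strategies described above, withholding PHI fields from extraction versus de-identifying them, can be sketched as a per-row transform applied before loading source records into CDM tables. The field names below are hypothetical source columns, and the salted hash is only one illustrative de-identification choice, not the paper's prescribed method.

```python
import hashlib

# Illustrative subset of HIPAA Safe Harbor identifier fields
# (names, contact details, record numbers, ...).
PHI_DROP = {"patient_name", "phone", "email", "address"}
PHI_PSEUDONYMIZE = {"patient_id"}     # keep linkability, hide identity

def to_cdm_row(source_row, salt="site-secret"):
    """Strategy 1: do not extract PHI fields at all.
    Strategy 2: de-identify (here, a salted one-way hash) before CDM load."""
    cdm_row = {}
    for field, value in source_row.items():
        if field in PHI_DROP:
            continue                   # restricted from extraction
        if field in PHI_PSEUDONYMIZE:  # de-identified instead
            value = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]
        cdm_row[field] = value
    return cdm_row

row = {"patient_id": "P-1002", "patient_name": "Hong Gildong",
       "phone": "010-1234-5678", "diagnosis_code": "E11.9", "visit_year": 2017}
print(to_cdm_row(row))
```

The clinically useful fields (diagnosis code, visit year) pass through unchanged, so CDM-based analyses remain possible while direct identifiers never reach the CDM.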

Development of Reconfigurable Tactical Operation Display Framework by Battery and Battalion (포대/대대 별 재구성 가능한 전술작전화면 프레임워크 개발)

  • Lee, Sangtae;Lee, Seungyoung;Wi, SoungHyouk;Cho, Kyutae
    • Journal of KIISE
    • /
    • v.44 no.5
    • /
    • pp.476-485
    • /
    • 2017
  • The tactical operation centers of future anti-aircraft missile systems provide the environment for research on future air threats, tactical information, integrated battlefield environment creation and management, engagement control, and command-and-control algorithms. Developing the key functional elements, integrated battlefield situation creation and processing and automated tactical operation processing, requires battery/battalion tactical operation control software with a reconfigurable design. Therefore, the algorithm software for each function, the tactical operation display software, and the link software for interworking between equipment were developed to be reconfigurable through a data-centric design. This paper introduces a tactical operation display framework whose operation displays can be reconfigured per battery or battalion. For the framework, a common data model was designed for the reconfigurable structure of multi-role tactical operations with battery/battalion and mission views, and a display configuration tool that builds views on the framework was developed using the MVC pattern. With this framework, view designs can be reused through the common base structure, and reconfigurable views can be developed easily and quickly.
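The combination described above, one common data model driving differently configured role views through the MVC pattern, can be sketched in a few classes. All class names, view roles, and track fields here are hypothetical illustrations, not the framework's actual API.

```python
class TrackModel:
    """Model: the common, role-agnostic data store."""
    def __init__(self):
        self.tracks, self.observers = [], []

    def add_track(self, track):
        self.tracks.append(track)
        for view in self.observers:        # notify every configured view
            view.render(self.tracks)

class TacticalView:
    """View: built from a configuration record, not hard-coded per role."""
    def __init__(self, config):
        self.role, self.columns = config["role"], config["columns"]
        self.last = None

    def render(self, tracks):
        # Each role sees only the columns its configuration selects.
        self.last = [{c: t[c] for c in self.columns} for t in tracks]

class Controller:
    """Controller: wires role configurations to the shared model."""
    def __init__(self, model, view_configs):
        self.model = model
        self.views = [TacticalView(cfg) for cfg in view_configs]
        model.observers.extend(self.views)

# The same model drives differently configured battery/battalion views.
configs = [
    {"role": "battery",   "columns": ["id", "range"]},
    {"role": "battalion", "columns": ["id", "range", "battery"]},
]
ctrl = Controller(TrackModel(), configs)
ctrl.model.add_track({"id": "T1", "range": 42.0, "battery": "B-1"})
print([v.last for v in ctrl.views])
```

Reconfiguring a display then means editing a configuration record rather than writing new view code, which is the reuse benefit the abstract claims for the common base structure.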

Design and Implementation of Ship Application System for Maritime Service Utilizing onboard Ship Collected Data (선내 수집데이터를 활용하는 선박 및 육상 서비스를 위한 선박용 어플리케이션 시스템 설계 및 구현)

  • Kang, Nam-seon;Kim, Yong-dea;Kim, Sang-yong;Lee, Bum-seok
    • Journal of Advanced Navigation Technology
    • /
    • v.20 no.2
    • /
    • pp.116-126
    • /
    • 2016
  • In this study, we designed a ship application system for efficient integrated management of data collected onboard and for ship and shore applications/services that utilize those data, and implemented its modules. To support onboard and shore services based on onboard-collected data and to provide easy access among individual devices, the system applies the XML structure of ISO 16425 and the data-sharing system model discussed in IALA. A common module for system operation, a Windows service for data collection and integrated management, and a web service module for management were implemented.
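The basic pattern, wrapping onboard-collected readings in an XML envelope so heterogeneous ship and shore applications can exchange them, can be sketched as below. The element names are purely illustrative; the actual message structure is defined by ISO 16425, which is not reproduced here.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def wrap_collected_data(readings):
    """Wrap onboard sensor readings in a simple XML envelope for
    ship-to-shore exchange. Element names are hypothetical; a real
    system would follow the ISO 16425 schema instead."""
    root = ET.Element("ShipDataMessage")
    ET.SubElement(root, "Timestamp").text = datetime.now(timezone.utc).isoformat()
    body = ET.SubElement(root, "Readings")
    for sensor, value in readings.items():
        item = ET.SubElement(body, "Reading", attrib={"sensor": sensor})
        item.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_msg = wrap_collected_data({"rpm": 92.5, "fuel_rate_lph": 310.2})
print(xml_msg)
```

Because the payload is self-describing XML, the shore-side web service can parse it without sharing code with the onboard collection service, which is the interoperability point of standardizing the structure.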

Design of CNN-based Gastrointestinal Landmark Classifier for Tracking the Gastrointestinal Location (캡슐내시경의 위치추적을 위한 CNN 기반 위장관 랜드마크 분류기 설계)

  • Jang, Hyeon-Woong;Lim, Chang-Nam;Park, Ye-Seul;Lee, Kwang-Jae;Lee, Jung-Won
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.1019-1022
    • /
    • 2019
  • As deep learning techniques have proven their performance, image processing research increasingly applies them to classification, analysis, and detection tasks across many domains. Expectations are especially high for medical image analysis software that can assist diagnosis, and this study focuses on capsule endoscopy images. Capsule endoscopy mainly targets the small bowel and records for about 8-10 hours from the esophagus to the colon, so unlike other medical images such as CT, MR, and X-ray, a single dataset contains 100,000-150,000 images. Capsule endoscopy video is typically read by first dividing the gastrointestinal tract into landmarks (esophagus, stomach, small bowel, colon) at the gastrointestinal junctions (Z-line, pylorus, ileocecal valve) and then searching each landmark for lesions. Because the image data are so voluminous, physicians and medical experts spend a great deal of time and effort reading them. The goal of this paper is to locate the gastrointestinal landmarks, a step that is performed identically for every patient and takes up much of the reading time. To this end, we designed a CNN model that identifies gastrointestinal landmarks; for more effective training, we removed noisy images that hinder learning as a preprocessing step and analyzed the characteristic features of each landmark. The model was evaluated and validated on data from eight patients: when patient data were randomly sampled for training, the average accuracy was 95%, and under per-patient cross-validation the average accuracy was 67%.
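The classifier's overall shape, convolutional feature extraction followed by a softmax over the four landmark classes, can be sketched framework-free. The forward pass below uses untrained random filters on a random stand-in image, so it only demonstrates the architecture and output format, not the paper's trained model or its accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid 2-D convolution of a single-channel image with one filter."""
    kh, kw = w.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = (x[i:i + kh, j:j + kw] * w).sum()
    return out

def forward(image, kernels, W, b):
    """Conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([conv2d(image, k).clip(min=0).mean() for k in kernels])
    logits = feats @ W + b
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

LANDMARKS = ["esophagus", "stomach", "small_bowel", "colon"]
kernels = rng.normal(size=(8, 3, 3))        # 8 untrained 3x3 filters
W = rng.normal(size=(8, 4))                 # linear head: 8 features -> 4 classes
b = np.zeros(4)

image = rng.random((32, 32))                # stand-in for one endoscopy frame
probs = forward(image, kernels, W, b)
print(dict(zip(LANDMARKS, probs.round(3))))
```

In the paper's pipeline, noisy frames would be filtered out before such a forward pass, and the argmax over the four probabilities assigns the frame to a landmark segment.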

Development of Architecture Products Management System (아키텍처산출물 관리 시스템 개발)

  • Choi, Nam-Yong;Song, Young-Jae
    • The KIPS Transactions: Part D
    • /
    • v.12D no.6 s.102
    • /
    • pp.857-862
    • /
    • 2005
  • MND (Ministry of National Defense) has developed the MND AF (Ministry of National Defense Architecture Framework) and CADM (Core Architecture Data Model) to guarantee interoperability among defense information systems. However, architecture products documented through MND AF and CADM are very difficult to manage, so a modeling tool and repository system are needed to develop architecture products and manage their information in a common repository. In this paper, we developed an architecture product management system that supports the development and management of the meta model and architecture products of MND AF and CADM. With this system, architects in each agency can construct architecture products more effectively and efficiently using its modeling method, and users can search and reference useful architecture product information through its query function. The system also provides a basis for system integration and interoperability through the integration, analysis, and comparison of architecture products.

Data Processing Architecture for Cloud and Big Data Services in Terms of Cost Saving (비용절감 측면에서 클라우드, 빅데이터 서비스를 위한 대용량 데이터 처리 아키텍쳐)

  • Lee, Byoung-Yup;Park, Jae-Yeol;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.5
    • /
    • pp.570-581
    • /
    • 2015
  • In recent years, many institutions have predicted that cloud services and big data will be the dominant IT trends in the near future, and a number of leading IT vendors are focusing on practical solutions and services for both. Cloud computing has the advantage of unrestricted resource selection for a business model on top of a variety of internet-based technologies, which is why provisioning and virtualization technologies for on-demand resource expansion have attracted attention as leading technologies. Big data took data prediction models to another level by providing the basis for analyzing unstructured data that could not be analyzed in the past. Since what cloud services and big data have in common is services and analysis based on massive amounts of data, efficient operation and design for mass data is a critical issue from the early stages of development. In this paper, we therefore establish a data processing architecture based on the technological requirements of mass data for cloud and big data services. In particular, we introduce the requirements a distributed file system must meet to serve cloud computing, the efficient mass-data compression requirements for big data and cloud computing in terms of cost saving, and the requirements of open-source systems available in cloud computing, such as the Hadoop ecosystem's distributed file system and memory databases.
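The cost-saving argument around compression comes down to a measurable trade-off: a stronger codec shrinks storage (and transfer) cost but spends more CPU. A quick way to see the storage side is to compare standard codecs on a sample of the data; the record format below is a hypothetical log batch, not data from the paper.

```python
import gzip, bz2, lzma

# Toy record batch standing in for mass log data. In practice the ratio
# would be measured on real cluster data before choosing a codec for
# distributed-file-system storage.
record = "\t".join(["2015-05-01T12:00:00", "node-07", "GET", "/index", "200"])
raw = ((record + "\n") * 5000).encode()

results = {}
for name, codec in [("gzip", gzip), ("bz2", bz2), ("lzma", lzma)]:
    packed = codec.compress(raw)
    results[name] = len(packed)
    print(f"{name}: {len(packed)} bytes, ratio {len(raw) / len(packed):.1f}x")
```

Repetitive machine-generated records compress extremely well, which is why codec choice has an outsized effect on storage cost at cluster scale; the remaining question, which this micro-benchmark does not capture, is the CPU cost of compressing and decompressing on every read and write path.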