• Title/Summary/Keyword: open data


Platform Business and Value Creation: Using Public Open Data (플랫폼 비즈니스와 가치 창출: 개방형 공공데이터 활용)

  • Han, Junghee
    • Knowledge Management Research
    • /
    • v.20 no.1
    • /
    • pp.155-174
    • /
    • 2019
  • A variety of data sets have been opened or linked by several levels of government. In smart city initiatives, open data becomes the source of new business models. This paper explores ways to foster public open data (POD) by analyzing a start-up company that utilizes POD, adopting a case-study research design. The findings indicate that POD has the potential to validate and further enrich platform business. However, the evidence on which types of public open data are most prevalent is insufficient; more sophisticated analysis of many more cases is needed. Nevertheless, this paper shows that platform businesses built on POD can reduce costs and increase benefits for both providers and customers. From the findings, this paper shows that public open data plays an important role not only in boosting new venture creation, a prevalent route in smart cities, but also in fostering different platforms that enable new value capture and creation as the ICT-based Internet of Things develops.

A Study on the OpenURL META-TAG of Observation Research Data for Metadata Interoperability (관측분야 과학데이터 관련 메타데이터 상호운용성 확보를 위한 OpenURL 메타태그 연구)

  • Kim, Sun-Tae;Lee, Tae-Young
    • Journal of Information Management
    • /
    • v.42 no.3
    • /
    • pp.147-165
    • /
    • 2011
  • This paper presents a core meta-tag of OpenURL, written in Key/Encoded-Value format, for the field of observation research, so that the scientific data produced in many experiments and observations can be distributed on the OpenURL service architecture. So far, OpenURL has not supplied a meta-tag representing scientific data because it has focused on the circulation of scholarly and technological information extracted from theses, proceedings, journals, and other literature. The DataCite consortium metadata were analyzed and compared with the Dublin Core, OECD, and Directory Interchange Format metadata to develop a core meta-tag for observation research.
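The Key/Encoded-Value format mentioned above is defined by the OpenURL 1.0 standard (ANSI/NISO Z39.88-2004). As a rough illustration, a minimal KEV query for a journal article, which meta-tags like the paper's proposed observation-data tags would extend, can be built as below; the `rft.*` keys shown are standard journal-article keys, not the paper's proposed tags:

```python
from urllib.parse import urlencode

# Standard OpenURL 1.0 KEV keys for a journal article (Z39.88-2004).
# A dataset-oriented metadata format, such as the one the paper proposes
# for observation research, would define additional rft.* keys.
kev_fields = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.jtitle": "Journal of Information Management",
    "rft.volume": "42",
    "rft.issue": "3",
    "rft.spage": "147",
    "rft.date": "2011",
}

# urlencode produces the KEV query string a link resolver would receive.
query = urlencode(kev_fields)
print(query)
```

The resulting string is what an OpenURL link resolver parses to identify the referenced resource.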

Non-manifold Modeling Data Structure Based on Open Inventor (Open Inventor에 기초한 비다양체 모델링 자료구조)

  • 박상호;이호영;변문현
    • Korean Journal of Computational Design and Engineering
    • /
    • v.3 no.3
    • /
    • pp.154-160
    • /
    • 1998
  • In this study, we implement a prototype modeler with a non-manifold data structure using Open Inventor. Open Inventor is currently a popular tool for computer graphics applications, even though it cannot store topological information such as a non-manifold data structure, which can represent incomplete three-dimensional shapes, e.g., a wireframe or a dangling surface, during design. Using Open Inventor, our modeler can handle a non-manifold model whose data structure is based on the radial edge data structure. A model editor is also implemented as an application that can construct a non-manifold model from two-dimensional editing.


An Evaluation Study on Artificial Intelligence Data Validation Methods and Open-source Frameworks (인공지능 데이터 품질검증 기술 및 오픈소스 프레임워크 분석 연구)

  • Yun, Changhee;Shin, Hokyung;Choo, Seung-Yeon;Kim, Jaeil
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.10
    • /
    • pp.1403-1413
    • /
    • 2021
  • In this paper, we investigate automated data validation techniques for artificial intelligence training and review open-source frameworks, such as Google's TensorFlow Data Validation (TFDV), that support automated data validation in the AI model development process. We also introduce an experimental study using public data sets to demonstrate the effectiveness of an open-source data validation framework. In particular, we present experimental results of the data validation functions for schema testing and discuss the limitations of current open-source frameworks for semantic data. Lastly, we introduce the latest studies on semantic data validation using machine learning techniques.
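The schema-testing workflow the abstract describes (infer a schema from reference data, then flag anomalies in a new batch) can be sketched without any framework; TFDV automates the same idea at scale with `infer_schema` and `validate_statistics`. This is a minimal, library-free illustration of the principle, not TFDV's API:

```python
# Minimal sketch of schema testing: learn field names and types from a
# reference batch, then report anomalies (wrong types, missing or
# unexpected fields) in a new batch.

def infer_schema(rows):
    """Infer per-field types and observed value domains from reference rows."""
    schema = {}
    for row in rows:
        for key, value in row.items():
            field = schema.setdefault(key, {"type": type(value), "domain": set()})
            field["domain"].add(value)
    return schema

def validate(rows, schema):
    """Return a list of anomaly descriptions for a new batch of rows."""
    anomalies = []
    for i, row in enumerate(rows):
        for key, value in row.items():
            if key not in schema:
                anomalies.append(f"row {i}: unexpected field '{key}'")
            elif not isinstance(value, schema[key]["type"]):
                anomalies.append(f"row {i}: field '{key}' has wrong type")
    for key in schema:
        for i, row in enumerate(rows):
            if key not in row:
                anomalies.append(f"row {i}: missing field '{key}'")
    return anomalies

reference = [{"label": "cat", "size": 3}, {"label": "dog", "size": 5}]
schema = infer_schema(reference)
new_batch = [{"label": "cat", "size": "big"}, {"label": "dog"}]
print(validate(new_batch, schema))
```

Schema testing of this kind catches structural drift between training batches; the semantic validation the paper discusses (is the *meaning* of a value plausible?) is the harder problem that such type- and domain-level checks cannot address.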

A Study about Library-Related Open Data through Public Data Portals (공공데이터 포털을 통해 개방된 도서관 관련 데이터 분석)

  • Cho, Jane
    • Journal of the Korean BIBLIA Society for library and Information Science
    • /
    • v.29 no.2
    • /
    • pp.35-56
    • /
    • 2018
  • This study examines the current state of library-related data opened through public data portals and analyzes how much the data is being utilized according to the type of releasing organization and the level of openness. In addition, dividing the releasing bodies into local governments and national/public institutions, we analyzed the subject clusters of the data and the centrality of the data by performing PathFinder network analysis on the keywords assigned to the data. Based on this, the subject areas of library-related data disclosed by local governments and national/public institutions are characterized, and the main releasing bodies whose data should be opened first are identified by linking these results with the data utilization analysis. Implications for future improvement are then suggested in connection with the library big data business.

207 NEW OPEN STAR CLUSTERS WITHIN 1 KPC FROM GAIA DATA RELEASE 2

  • Sim, Gyuheon;Lee, Sang Hyun;Ann, Hong Bae;Kim, Seunghyeon
    • Journal of The Korean Astronomical Society
    • /
    • v.52 no.5
    • /
    • pp.145-158
    • /
    • 2019
  • We conducted a survey of open clusters within 1 kpc of the Sun using the astrometric and photometric data of Gaia Data Release 2. We found 655 cluster candidates by visual inspection of the stellar distributions in proper motion space and the spatial distributions in l-b space. All but two of the 655 cluster candidates have a well-defined main sequence, if we consider that the main sequence of very young clusters is somewhat broadened by differential extinction. Cross-matching our 653 open clusters with known open clusters in various catalogs resulted in 207 new open clusters. We present the physical properties of the newly discovered open clusters. The majority of them are of young to intermediate age and have fewer than ~50 member stars.

Opening the Nation: Leveraging Open Data to Create New Business and Provide Services

  • Cruz, Ruth Angelie B.;Lee, Hong Joo
    • Knowledge Management Research
    • /
    • v.16 no.4
    • /
    • pp.157-168
    • /
    • 2015
  • Opening government data has been one of the main goals of nations building their e-government structures. Nonetheless, beyond publishing government data for public viewing, the bigger concern now is promoting the use and proving the usefulness of the available public data. To do this, governments must not only publicize data but, more importantly, publish the kinds of data usable by infomediaries and developers for creating new products and services for citizens. This research investigates 30 open data use cases from South Korea as listed on Data.go.kr. This study aims to contribute to a better understanding of open dataset utilization in a technologically advanced and well-developed nation and to provide useful insights on how open data is currently being used, how it is opening up new business, and, more importantly, how it is contributing to civic society by providing services to the public.

A Study on Comparison of Open Application Programming Interface of Securities Companies Supporting Python

  • Ryu, Gui Yeol
    • International journal of advanced smart convergence
    • /
    • v.10 no.1
    • /
    • pp.97-104
    • /
    • 2021
  • Securities and investment services have the most data per company on average and use the most data. Investors increasingly demand to invest using their own analysis methods, so securities and investment companies provide stock data to investors through open APIs. The data received using these open APIs is in text format, and Python is effective and convenient for requesting and receiving text data. Of the 22 major securities and investment companies in Korea that we investigated, only six provide open APIs, and only Daishin Securities Co. supports Python officially. We compare how stock data is received through open APIs using Python, along with the relevant Python programming features. The open APIs studied are those of Daishin Securities Co. and eBest Investment & Securities Co. Comparing the two APIs for receiving current stock data, we find two main differences: the login method and the method of sending and receiving data. As for the login method, CYBOS Plus carries login information, whereas xingAPI does not. As for sending and receiving data, CYBOS Plus sends and receives data by calling a request method and a reply method, while xingAPI sends and receives data in the form of events; as a result, xingAPI requires more code than CYBOS Plus. We also find that CYBOS Plus can be iterated over with loop statements via lists, tuples, and dictionaries, and supports the basic commands provided by Python.
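The two delivery styles the abstract contrasts, blocking request/reply versus event callbacks, can be sketched with plain Python classes. The class and method names below are hypothetical illustrations of the two patterns; the real CYBOS Plus and xingAPI are Windows COM components with different interfaces:

```python
# Hypothetical sketch of the two data-delivery styles compared in the paper.

class RequestReplyFeed:
    """Blocking style: the caller issues a request, then reads the reply."""
    def __init__(self, quotes):
        self._quotes = quotes

    def request(self, symbol):
        self._last = self._quotes[symbol]

    def reply(self):
        return self._last

class EventFeed:
    """Event style: the caller registers a callback and data is pushed to it."""
    def __init__(self, quotes):
        self._quotes = quotes
        self._handlers = []

    def subscribe(self, handler):
        self._handlers.append(handler)

    def publish(self, symbol):
        for handler in self._handlers:
            handler(symbol, self._quotes[symbol])

quotes = {"005930": 70000}  # illustrative symbol/price pair

# Request/reply: two explicit calls, control stays with the caller.
sync = RequestReplyFeed(quotes)
sync.request("005930")
print(sync.reply())

# Event: extra registration and callback plumbing, hence more code.
received = []
feed = EventFeed(quotes)
feed.subscribe(lambda symbol, price: received.append((symbol, price)))
feed.publish("005930")
print(received)
```

The extra subscription and callback machinery in the event style is why, as the paper observes, the event-based API tends to require more code than the request/reply one.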

A Study on Public Data Quality Factors Affecting the Confidence of the Public Data Open Policy (공공데이터 품질 요인이 공공데이터 개방정책의 신뢰에 미치는 영향에 관한 연구)

  • Kim, Hyun Cheol;Gim, Gwang Yong
    • Journal of Information Technology Services
    • /
    • v.14 no.1
    • /
    • pp.53-68
    • /
    • 2015
  • This article aims to identify the quality factors of public data, which has become a growing public issue; analyze their impact on user satisfaction from the perspective of the Technology Acceptance Model (TAM); and investigate the effect of service satisfaction on the government's open policy for public data. Consistent with MIT's Total Data Quality Management (TDQM), this study focuses on three main quality dimensions, excluding Contextual Data Quality (CDQ), and includes seven independent variables: accuracy, reliability, and fairness for Intrinsic Data Quality (IDQ); accessibility and security for Accessibility Data Quality (ADQ); and consistent representation and understandability for Representational Data Quality (RDQ). Based on TAM, a research model was constructed to examine which factors affect perceived usefulness, perceived ease of use, and service satisfaction, and how service satisfaction affects the government's open policy for public data. The results showed that accuracy, fairness, and understandability affect both perceived ease of use and perceived usefulness, while reliability, consistent representation, security, and accessibility affect only perceived ease of use. The influence of perceived ease of use on perceived usefulness, and of these two factors on service satisfaction, was significant and consistent with prior studies, as TAM predicts. Service satisfaction when using public data leads to trust in the public data open policy. As an initial study on the open policy for unstructured public data, this article offers quality factors that public data providers should consider and presents an operational plan for the public data open policy in the future.

Quality Diagnosis of Library-Related Open Government Data: Focused on Book Details API of Data for Library (도서관 공공데이터의 품질에 관한 연구: 도서관 정보나루의 도서 상세 조회 API를 중심으로)

  • Yang, Suwan
    • Journal of the Korean Society for information Management
    • /
    • v.37 no.4
    • /
    • pp.181-206
    • /
    • 2020
  • With the popularization of open government data, library-related open government data is also opened to and utilized by the public. The purpose of this paper is to diagnose the quality of library-related open government data and to propose measures to enhance quality based on the diagnosis results. Diagnosing the completeness of the data identified a number of blanks in the bibliographic elements essential for identifying and searching for a book. Diagnosing the accuracy of the data identified bibliographic elements that do not comply with the data schema. Based on these results, this study suggests improving the data collection procedure, establishing a data set schema, providing details on data collection and data processing, and publishing the raw data.
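The two diagnostics described above, completeness (no blank required elements) and schema accuracy, can be sketched as a simple check over API records. The field names and the ISBN-13 rule below are illustrative assumptions, not the actual schema of the Book Details API:

```python
# Hypothetical sketch of the paper's two quality diagnostics applied to
# bibliographic records: count blank required fields (completeness) and
# values violating a schema rule (accuracy).

REQUIRED_FIELDS = ["title", "author", "publisher", "isbn13"]

def diagnose(records):
    blanks, invalid = 0, 0
    for record in records:
        # Completeness: every required bibliographic element must be non-blank.
        for field in REQUIRED_FIELDS:
            if not record.get(field, "").strip():
                blanks += 1
        # Accuracy: a present ISBN-13 must be exactly 13 digits.
        isbn = record.get("isbn13", "")
        if isbn and not (isbn.isdigit() and len(isbn) == 13):
            invalid += 1
    return {"blank_fields": blanks, "invalid_isbn13": invalid}

records = [
    {"title": "Open Data", "author": "", "publisher": "Press", "isbn13": "9788912345678"},
    {"title": "Quality", "author": "Yang", "publisher": "Press", "isbn13": "12-345"},
]
print(diagnose(records))
```

Running such checks against every harvested record yields the kind of aggregate completeness and accuracy figures on which the paper bases its improvement proposals.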