• Title/Summary/Keyword: Data journal


A Study on the Data-Based Organizational Capabilities by Convergence Capabilities Level of Public Data (공공데이터 융합역량 수준에 따른 데이터 기반 조직 역량의 연구)

  • Jung, Byoungho; Joo, Hyungkun
    • Journal of Korea Society of Digital Industry and Information Management / v.18 no.4 / pp.97-110 / 2022
  • The purpose of this study is to analyze the level of public data convergence capabilities in administrative organizations and to identify the variables that matter for data-based organizational capabilities. The theoretical background covers public data and its use activation, joint use, convergence, administrative organizations, and convergence constraints, drawing on the Public Data Act, the Electronic Government Act, and the Data-Based Administrative Act. The research model posits that data-based organizational capabilities are affected by data-based administrative capability, public data operation capabilities, and public data operation constraints; it also tests whether organizational operation capabilities differ by level of data convergence capability. The analysis was conducted with hierarchical cluster analysis and multiple regression analysis. First, the hierarchical cluster analysis yielded three groups: one that uses only structured public data, one that uses both structured and unstructured public data, and one that uses both public and private data. Second, the critical variables for data-based organizational operation capabilities were data-based administrative planning and administrative technology, supervisory organizations and technical systems for public data convergence, and data sharing and market transaction constraints. Finally, the significant independent variables for data-based organizational capabilities differ by group. As a theoretical implication, this research updates the management information systems literature by explicating the Public Data Act, the Electronic Government Act, and the Data-Based Administrative Act. As a practical implication, reinforcing the use of public data requires promoting data standardization, improving search convenience, and eliminating lukewarm attitudes and self-interested behavior toward data sharing.
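
As a rough illustration only (not the authors' code, data, or variable set), the two-step analysis the abstract describes, hierarchical clustering into three groups followed by a multiple regression within each group, might be sketched in Python as follows; every column name is hypothetical.

    # Hypothetical sketch: Ward hierarchical clustering into three groups,
    # then per-group multiple regression of organizational capability on
    # the paper's predictor themes. All column names are invented.
    import pandas as pd
    import statsmodels.api as sm
    from scipy.cluster.hierarchy import fcluster, linkage

    df = pd.read_csv("survey_responses.csv")  # hypothetical survey file

    # Step 1: hierarchical cluster analysis on data-use capability items
    features = df[["structured_data_use", "unstructured_data_use",
                   "private_data_use"]]
    Z = linkage(features, method="ward")
    df["group"] = fcluster(Z, t=3, criterion="maxclust")  # three groups, as reported

    # Step 2: multiple regression within each cluster
    predictors = ["admin_planning", "admin_technology", "supervisory_org",
                  "technical_system", "sharing_constraint", "market_constraint"]
    for g, sub in df.groupby("group"):
        model = sm.OLS(sub["org_capability"],
                       sm.add_constant(sub[predictors])).fit()
        print(f"group {g}: {model.params.round(3).to_dict()}")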

An Empirical Study on the Effects of Source Data Quality on the Usefulness and Utilization of Big Data Analytics Results (원천 데이터 품질이 빅데이터 분석결과의 유용성과 활용도에 미치는 영향)

  • Park, Sohyun; Lee, Kukhie; Lee, Ayeon
    • Journal of Information Technology Applications and Management / v.24 no.4 / pp.197-214 / 2017
  • This study sheds light on source data quality in big data systems. Previous studies of big data success have called for further examination of quality factors and of the importance of source data. This study extracted the quality factors of source data from the user's viewpoint and empirically tested the effects of source data quality on the usefulness and utilization of big data analytics results. Based on previous research and a focus group evaluation, four quality factors were established: accuracy, completeness, timeliness, and consistency. After setting up 11 hypotheses on how source data quality contributes to the usefulness, utilization, and ongoing use of big data analytics results, an e-mail survey was conducted at the level of independent departments using big data in domestic firms. The results of hypothesis testing identified the characteristics and impact of source data quality in big data systems and yielded some meaningful findings about big data characteristics.
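
A minimal sketch of how the hypothesized chain (quality factors, then usefulness, then utilization, then ongoing use) could be probed with ordinary least squares; the abstract does not state the authors' actual estimation method, and all column names below are hypothetical.

    # Hedged stand-in for the paper's hypothesis tests, using plain OLS.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("bigdata_survey.csv")  # hypothetical e-mail survey data

    quality = "accuracy + completeness + timeliness + consistency"
    m1 = smf.ols(f"usefulness ~ {quality}", data=df).fit()    # quality -> usefulness
    m2 = smf.ols("utilization ~ usefulness", data=df).fit()   # usefulness -> utilization
    m3 = smf.ols("ongoing_use ~ utilization", data=df).fit()  # utilization -> ongoing use

    for name, m in [("usefulness", m1), ("utilization", m2), ("ongoing_use", m3)]:
        print(name, m.params.round(3).to_dict(), "R2 =", round(m.rsquared, 3))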

A case study of ECN data conversion for Korean and foreign ecological data integration

  • Lee, Hyeonjeong; Shin, Miyoung; Kwon, Ohseok
    • Journal of Ecology and Environment / v.41 no.5 / pp.142-144 / 2017
  • In recent decades, as monitoring and researching long-term ecological change has become increasingly important, attempts are being made worldwide to integrate and manage ecological data in a unified framework. Domestic ecological data in South Korea, in particular, should first be standardized based on predefined common protocols, since they are often scattered across many different systems in various forms. Additionally, foreign ecological data should be converted into a unified format so that they can be used alongside domestic data for association studies. In this study, our interest is to integrate ECN data with Korean domestic ecological data under our unified framework. For this purpose, we employed our semi-automatic data conversion tool to standardize the foreign data, using ground beetle (Carabidae) datasets collected from 12 ECN observatory sites. We believe that this systematic approach to converting domestic and foreign ecological data into a standardized format will prove useful for data integration and association analysis in many ecological and environmental studies.
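
The core of such a semi-automatic conversion is a column mapping from the foreign dataset onto the common domestic schema. The sketch below is purely illustrative; the field names and mapping are hypothetical and do not reflect ECN's actual protocol.

    # Illustrative column mapping onto a hypothetical common schema.
    import pandas as pd

    COLUMN_MAP = {            # foreign field -> common-schema field (invented)
        "SITE_CODE": "site_id",
        "SDATE": "observation_date",
        "SPECIES": "species_name",
        "COUNT": "abundance",
    }

    def to_common_schema(path: str) -> pd.DataFrame:
        raw = pd.read_csv(path)
        out = raw.rename(columns=COLUMN_MAP)[list(COLUMN_MAP.values())]
        out["observation_date"] = pd.to_datetime(out["observation_date"]).dt.date
        out["source"] = "ECN"  # provenance tag for later association analysis
        return out

    beetles = to_common_schema("ecn_carabidae_site01.csv")  # hypothetical file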

A Study on Big Data Analytics Services and Standardization for Smart Manufacturing Innovation

  • Kim, Cheolrim; Kim, Seungcheon
    • International Journal of Internet, Broadcasting and Communication / v.14 no.3 / pp.91-100 / 2022
  • Major developed countries are seriously pursuing smart factories to increase their manufacturing competitiveness. A smart factory is a customized factory that incorporates ICT into the entire process from product planning to design, distribution, and sales, which can reduce production costs and allow flexible responses to the consumer market. The smart factory converts physical signals into digital signals; connects machines, parts, factories, manufacturing processes, people, and supply chain partners to one another; and uses the collected data to let the smart factory platform operate intelligently. Enhancing personalized value is the key, so the success or failure of a smart factory depends on whether big data is secured and utilized. Standardized communication and collaboration are required to acquire big data smoothly inside and outside the factory, and the value of big data can be maximized through big data analysis. This study examines big data analysis and standardization in the smart factory. It first reviews manufacturing innovation by country, the smart factory construction framework, key elements of smart factory implementation, and big data analysis and visualization. On this basis, we propose services that can be suggested when building big data infrastructure in smart factories, including the big data infrastructure construction process, big data platform components, big data modeling, big data quality management components, big data standardization, and big data implementation consulting. This proposal is expected to serve as a guide for building big data infrastructure in companies that want to introduce a smart factory.

A Study on the Role and Security Enhancement of the Expert Data Processing Agency: Focusing on a Comparison of Data Brokers in Vermont (데이터처리전문기관의 역할 및 보안 강화방안 연구: 버몬트주 데이터브로커 비교를 중심으로)

  • Kim, Soo Han; Kwon, Hun Yeong
    • Journal of Information Technology Services / v.22 no.3 / pp.29-47 / 2023
  • With the recent advancement of information and communication technologies such as artificial intelligence, big data, cloud computing, and 5G, data is being produced and digitized in unprecedented amounts. As a result, data has emerged as a critical resource for the future economy, and countries overseas have been revising their laws for data protection and utilization. In Korea, the 'Data 3 Act' was revised in 2020 to introduce institutional measures that classify personal information into personal, pseudonymized, and anonymous information for research, statistics, and the preservation of public records. Combining pseudonymized personal information is expected to increase the added value of data, and to this end the "Expert Data Combination Agency" and "Expert Data Agency" (hereinafter the Expert Data Processing Agency) systems were introduced. To compare these domestic systems with similar overseas systems, we note that the state government of Vermont in the United States recently enacted the first U.S. "Data Broker Act" as a measure to protect personal information held by data brokers. In this study, we compare and analyze the roles and functions of the Expert Data Processing Agency and the data broker, and identify differences in designation standards, security measures, and related matters, in order to suggest ways to activate the data economy and enhance information protection.

A Simulation Framework for Wireless Compressed Data Broadcast

  • Im, Seokjin
    • International Journal of Advanced Culture Technology / v.11 no.2 / pp.315-322 / 2023
  • Intelligent IoT environments that accommodate very large numbers of clients require technologies that provide reliable information services regardless of the number of clients. Wireless data broadcast is an information service technique that ensures such scalability by delivering data to all clients simultaneously, regardless of their number. In wireless data broadcasting, clients access the wireless channel linearly to find their data, so client access time is greatly affected by the broadcast cycle. Data broadcasting based on data compression can shorten the broadcast cycle and thus reduce client access time. A simulation framework that can evaluate the performance of data broadcasting under different data compression algorithms is therefore essential. In this paper, we propose a simulation framework for evaluating the performance of data broadcasting with data compression. We design the framework so that different compression algorithms can be applied according to the characteristics of the data. Beyond performance with respect to the data, the proposed framework can also evaluate performance according to the data scheduling technique and the kinds of queries the clients process. We implement the proposed framework and use it to evaluate data broadcasting under various compression algorithms, demonstrating the performance of compressed data broadcasting.
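
A toy version of the framework's central measurement, under the common simplifying assumption that time is counted in bytes of airtime: average client wait over one broadcast cycle, with zlib standing in for any pluggable compression algorithm.

    # Toy access-time measurement for a flat broadcast cycle. A client tunes
    # in at a random moment and waits until its desired item has been fully
    # broadcast; an item caught mid-transmission must wait for the next cycle.
    import random
    import zlib

    def access_time(items, trials=10_000):
        sizes = [len(it) for it in items]
        cycle = sum(sizes)
        starts, pos = [], 0
        for s in sizes:                      # byte offset where each item begins
            starts.append(pos)
            pos += s
        total = 0.0
        for _ in range(trials):
            t = random.uniform(0, cycle)     # tune-in moment
            i = random.randrange(len(items)) # desired item
            end = starts[i] + sizes[i]
            total += end - t if t <= starts[i] else cycle - t + end
        return total / trials

    data = [bytes(f"record-{i}", "ascii") * 50 for i in range(100)]
    plain = access_time(data)
    packed = access_time([zlib.compress(d) for d in data])
    print(f"mean access time: {plain:.0f} -> {packed:.0f} bytes of airtime")

Because compression shrinks every item, the cycle shortens and the mean access time drops, which is the effect the framework is built to quantify across compression algorithms, scheduling techniques, and query types.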

An Investigation on Scientific Data for Data Journal and Data Paper (Scientific Data 학술지 분석을 통한 데이터 논문 현황에 관한 연구)

  • Chung, EunKyung
    • Journal of the Korean Society for Information Management / v.36 no.1 / pp.117-135 / 2019
  • Data journals and data papers have grown and come to be considered an important scholarly practice in the paradigm of open science, in the context of data sharing and data reuse. This study investigates a total of 713 data papers published in Scientific Data in terms of authors, citations, and subject areas. The findings show that the subject areas of core authors are Biotechnology and Physics. The average number of co-authors is 12, and the co-authorship patterns form several closed sub-networks. In terms of citation status, the subject areas of cited publications are highly similar to those of the data paper authors; however, the citation analysis also indicates considerable citation of journals specializing in methodology. The author-keyword network identifies more detailed areas such as marine ecology, cancer, genome, database, and temperature. These results indicate that biology-oriented subjects are the journal's primary areas, although Scientific Data is categorized as multidisciplinary science in the Web of Science database.
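
For readers unfamiliar with this kind of bibliometric step, the author-keyword network analysis mentioned above can be illustrated with networkx on stand-in records; the real input would be the keyword lists of the 713 Scientific Data papers.

    # Keyword co-occurrence network from hypothetical stand-in records.
    import itertools
    import networkx as nx

    papers = [                       # invented author-keyword lists
        ["genome", "database", "cancer"],
        ["marine ecology", "temperature", "database"],
        ["genome", "cancer"],
    ]

    G = nx.Graph()
    for kws in papers:
        for a, b in itertools.combinations(sorted(set(kws)), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

    # The most strongly connected keywords indicate the primary subject areas.
    print(sorted(G.degree(weight="weight"), key=lambda kv: -kv[1])[:5])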

Research on Railway Safety Common Data Model and DDS Topic for Real-time Railway Safety Data Transmission

  • Park, Yunjung; Kim, Sang Ahm
    • Journal of the Korea Society of Computer and Information / v.21 no.5 / pp.57-64 / 2016
  • In this paper, we propose the design of a railway safety common data model that provides a common transformation method for data collected from railway facility fields into the real-time railway safety monitoring and control system. The common data model is divided into five abstract sub-models according to the characteristics of the data: 'StateInfoMessage', 'ControlMessage', 'RequestMessage', 'ResponseMessage', and 'ExtendedXXXMessage'. This structure accommodates the acquisition of diverse heterogeneous data and a common method for converting it to the DDS (Data Distribution Service) format, so that data can be shared with the sub-systems of the real-time railway safety monitoring and control system. The paper presents the design of the common data model and its DDS Topic expression for DDS communication, along with two data transformation case studies for verifying the model design.
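
As a rough sketch of how the five abstract sub-models could be organized, with plain Python dataclasses standing in for DDS IDL Topic types; the fields are hypothetical, not the paper's actual schema.

    # Hypothetical shape of the five abstract sub-models named in the paper.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Message:                      # common header shared by all sub-models
        source_id: str
        timestamp: datetime

    @dataclass
    class StateInfoMessage(Message):    # e.g. a reading from a railway facility
        field_name: str
        value: float

    @dataclass
    class ControlMessage(Message):
        target_id: str
        command: str

    @dataclass
    class RequestMessage(Message):
        request_type: str

    @dataclass
    class ResponseMessage(Message):
        request_id: str
        status: str

    # 'ExtendedXXXMessage' denotes a family of extension points; a concrete
    # extension would subclass one of the models above.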

Applying Bootstrap to Time Series Data Having Trend (추세 시계열 자료의 부트스트랩 적용)

  • Park, Jinsoo; Kim, Yun Bae; Song, Kiburm
    • Journal of the Korean Operations Research and Management Science Society / v.38 no.2 / pp.65-73 / 2013
  • In simulation output analysis, the bootstrap is a resampling technique applicable to data that are insufficient for statistical significance. The moving block bootstrap, the stationary bootstrap, and the threshold bootstrap are typical bootstrap methods for autocorrelated time series data. They are nonparametric methods for stationary time series data that correctly describe the original data. In simulation output analysis, however, they may not be usable because of non-stationarity caused by an increasing or decreasing trend in the data. In such cases, we can remove the trend by differencing the data, which guarantees stationarity. We then bootstrap the differenced stationary data and, by applying the reverse transform to the bootstrapped data, obtain pseudo-samples of the original data. In this paper, we introduce the applicability of bootstrap methods to time series data having a trend, and verify it through statistical analyses.
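
The procedure translates almost directly into code. Below is a minimal NumPy sketch that uses the moving block bootstrap, one of the methods named above, as the stationary resampler; the block length and example series are arbitrary choices.

    # Difference a trended series, block-bootstrap the (stationary)
    # differences, then cumulatively sum back to obtain pseudo-samples.
    import numpy as np

    rng = np.random.default_rng(0)

    def moving_block_bootstrap(x, block_len):
        """Resample a stationary series by concatenating random blocks."""
        n = len(x)
        starts = rng.integers(0, n - block_len + 1,
                              size=int(np.ceil(n / block_len)))
        return np.concatenate([x[s:s + block_len] for s in starts])[:n]

    def bootstrap_trended(y, block_len=5):
        d = np.diff(y)                        # differencing removes the trend
        d_star = moving_block_bootstrap(d, block_len)
        return np.concatenate([[y[0]], y[0] + np.cumsum(d_star)])  # reverse transform

    t = np.arange(100)
    y = 0.5 * t + rng.normal(0, 1, size=100)  # increasing trend plus noise
    pseudo = bootstrap_trended(y)
    print(y[:5].round(2), pseudo[:5].round(2))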

Saliency Score-Based Visualization for Data Quality Evaluation

  • Kim, Yong Ki; Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.4 / pp.289-294 / 2015
  • Data analysts explore collections of data in search of valuable information using various techniques and tricks. 'Garbage in, garbage out' is a well-recognized idiom that emphasizes the importance of data quality in data analysis. It is therefore crucial to validate data quality at an early stage of data analysis, and an effective method of evaluating data quality is needed. In this paper, we introduce a method to visually characterize data quality using the notion of a saliency score, a measure comprising five indexes that capture certain aspects of data quality. Experimental results are presented to show the applicability of the proposed method.
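
The abstract does not name the five indexes behind the saliency score, so the sketch below substitutes generic column-level quality indicators (missing rate, outlier rate, duplication) purely to illustrate the score-and-visualize idea; the actual indexes are defined in the paper.

    # Generic per-column quality indicators as stand-ins for the paper's
    # five saliency indexes, visualized as a bar chart.
    import matplotlib.pyplot as plt
    import pandas as pd

    def quality_scores(df: pd.DataFrame) -> pd.DataFrame:
        rows = {}
        for col in df.select_dtypes("number"):
            s = df[col]
            z = (s - s.mean()) / s.std(ddof=0)
            rows[col] = {
                "missing": s.isna().mean(),          # fraction of missing cells
                "outlier": (z.abs() > 3).mean(),     # |z| > 3 rule of thumb
                "duplicate": 1 - s.nunique() / len(s),
            }
        return pd.DataFrame(rows).T

    df = pd.DataFrame({"a": [1, 2, 2, None, 100], "b": [1, 2, 3, 4, 5]})
    quality_scores(df).plot.bar()    # visual cue to columns needing attention
    plt.tight_layout()
    plt.show()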