• Title/Summary/Keyword: research data


A Lifelog Common Data Reference Model for the Healthcare Ecosystem (디지털 헬스케어 생태계 활성화를 위한 라이프로그 공통데이터 참조모델)

  • Lee, Young-joo;Ko, Yoon-seok
    • Knowledge Management Research / v.19 no.4 / pp.149-170 / 2018
  • Healthcare lifelog, a personal record relating to disease treatment and healthcare, plays an important role in the healthcare paradigm shift in which medical and information technology converge. Healthcare services based on various lifelogs are being launched domestically by both large corporations and small and medium enterprises; however, each is built on an individual platform dependent on its company, so lifelog data terms differ and measurement specifications are not uniform. This study proposes a reference model for the minimum common data required for sharing and utilizing healthcare lifelogs. A literature study and an expert survey derived 3 domains, 17 essential items, and 51 sub-items. The model provides a definition, measurement data format, measurement method, and precautions for each detailed measurement item, offering guidelines for data and service design and construction in healthcare services. This study is significant as basic research supporting the activation of the ecosystem by ensuring data interoperability between heterogeneous healthcare devices linked to a digital healthcare platform.
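
As an illustration of the kind of entry such a common data reference model might contain, the sketch below defines a hypothetical record type; the field names mirror the attributes the abstract lists (definition, measurement data format, measurement method, precautions) but are illustrative, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class LifelogItem:
    """One hypothetical sub-item entry in a common lifelog data reference model."""
    domain: str       # one of the 3 domains
    item: str         # one of the 17 essential items
    sub_item: str     # one of the 51 sub-items
    definition: str
    data_format: str  # measurement data format
    method: str       # measurement method
    precautions: str

# An illustrative entry, not taken from the paper:
steps = LifelogItem(
    domain="physical activity",
    item="walking",
    sub_item="step count",
    definition="number of steps taken per day",
    data_format="integer, steps/day",
    method="accelerometer-based pedometer",
    precautions="exclude non-wear periods",
)
print(steps.sub_item)  # step count
```

Uniform entries of this shape are what would let heterogeneous devices map their measurements onto one shared vocabulary.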

Platform Business and Value Creation: Using Public Open Data (플랫폼 비즈니스와 가치 창출: 개방형 공공데이터 활용)

  • Han, Junghee
    • Knowledge Management Research / v.20 no.1 / pp.155-174 / 2019
  • A variety of data has been opened or connected by several levels of government, and in smart city initiatives open data become the source of new business models. This paper explores ways to foster the use of public open data (POD) by analyzing a start-up company that utilizes POD, adopting a case-study research design. The findings suggest that POD has the potential to validate and further enrich platform business. However, the evidence on which types of public open data are most prevalent is insufficient; more numerous and more sophisticated cases should be examined. Nevertheless, this paper shows that platform business using POD can reduce costs and increase benefits for both providers and customers. From the findings, public open data plays an important role not only in boosting new venture creation, a prevalent path in smart cities, but also in fostering platforms that enable new value capture and creation as the ICT-based Internet of Things develops.

Technical Trends of Time-Series Data Imputation (시계열 데이터 결측치 처리 기술 동향)

  • Kim, E.D.;Ko, S.K.;Son, S.C.;Lee, B.T.
    • Electronics and Telecommunications Trends / v.36 no.4 / pp.145-153 / 2021
  • Data imputation is a crucial issue in data analysis because data quality is highly correlated with the performance of AI models. In particular, it is difficult to collect quality time-series data in uncertain situations (for example, electricity blackouts or network delays), so effective methods of time-series data imputation need to be researched. Studies on time-series data imputation can be divided into five categories: statistics-based, matrix-based, regression-based, RNN-based, and GAN-based methodologies. This study reviews and organizes these methodologies. Recently developed deep learning-based imputation methods show excellent performance, but their computational cost makes them difficult to use in real-time systems. Thus, future work should develop imputation methods with low computational cost but high performance for application in the field.
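
The statistics-based end of this taxonomy can be sketched in a few lines; the two functions below are hypothetical helpers, not from the survey, showing mean imputation and linear interpolation over a time series with gaps.

```python
def mean_impute(series):
    """Replace None gaps with the mean of the observed values."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in series]

def linear_impute(series):
    """Fill interior gaps by linear interpolation between the
    nearest observed neighbors on each side."""
    result = list(series)
    known = [i for i, v in enumerate(series) if v is not None]
    for left, right in zip(known, known[1:]):
        for i in range(left + 1, right):
            t = (i - left) / (right - left)
            result[i] = series[left] * (1 - t) + series[right] * t
    return result

# Hourly sensor readings with dropouts (e.g. from a network delay):
readings = [10.0, None, 14.0, None, 18.0, None, 22.0]
print(linear_impute(readings))  # [10.0, 12.0, 14.0, 16.0, 18.0, 20.0, 22.0]
```

These simple baselines illustrate why the survey's deep learning methods matter: linear interpolation cannot recover nonlinear dynamics inside a long gap, which is exactly where RNN- and GAN-based models aim to do better.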

Design of Secure Information Center Using a Conventional Cryptography

  • Choi, Jun-Hyuk;Kim, Tae-Gap;Go, Byung-Do;Ryou, Jae-Cheol
    • Journal of the Korea Institute of Information Security & Cryptology / v.6 no.4 / pp.53-66 / 1996
  • The World Wide Web is a total solution for multimedia data transmission on the Internet. Because of characteristics such as ease of use, support for multimedia data, and a smart graphical user interface, the WWW has extended to cover all kinds of applications. The Secure Information Center (SIC) is a data transmission system using conventional cryptography between client and server on the WWW. Its main function is to encrypt the data being sent: IDEA (International Data Encryption Algorithm) is used for data encryption, and the MD5 hash function is used for the authentication mechanism. Since the Secure Information Center serves many users, a conventional cryptosystem is efficient for managing their secure interactions. However, sharing the same key and transmitting data between client and server carry some restrictions, such as the risk of key exposure and the difficulty of key-sharing mechanisms. To solve these problems, the Secure Information Center provides encryption mechanisms and key management policies.
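
The encrypt-then-authenticate flow the abstract describes can be sketched as follows. IDEA is not available in the Python standard library, so a keyed XOR stream stands in purely to show the message layout; it is not secure, and the function names are illustrative, not the SIC's actual API.

```python
import hashlib
from itertools import cycle

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher standing in for IDEA; NOT secure, illustration only."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def seal(plaintext: bytes, key: bytes):
    """Encrypt the payload, and attach an MD5 digest of the
    plaintext as the authentication tag."""
    return xor_stream(plaintext, key), hashlib.md5(plaintext).digest()

def open_sealed(ciphertext: bytes, digest: bytes, key: bytes) -> bytes:
    """Decrypt, then recompute and check the MD5 digest to
    authenticate the recovered plaintext."""
    plaintext = xor_stream(ciphertext, key)
    if hashlib.md5(plaintext).digest() != digest:
        raise ValueError("authentication failed")
    return plaintext

ciphertext, digest = seal(b"account balance: 1200", b"shared-key")
print(open_sealed(ciphertext, digest, b"shared-key"))  # b'account balance: 1200'
```

The sketch also makes the abstract's limitation concrete: both `seal` and `open_sealed` need the same `key`, so the whole scheme hinges on the key-sharing and key-exposure problems the SIC's key management policies address.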

A novel watermarking scheme for authenticating individual data integrity of WSNs

  • Guangyong Gao;Min Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.938-957 / 2023
  • The limited computing power of sensor nodes in wireless sensor networks (WSNs) and data tampering during wireless transmission are two important issues. In this paper, we propose a scheme for independent, individual authentication of WSN data based on digital watermarking technology, which suits WSNs well owing to its low computational cost. The proposed scheme generates a digital watermark from each individual data item and embeds the watermark in that item. A sink node then extracts the watermark from the single data item and compares it with a regenerated watermark, thereby verifying data integrity. Individual validation inherently differs from group-level validation and avoids the robustness problems of grouping. The improved performance of individual integrity verification with the proposed scheme is validated through experimental analysis. Compared to other state-of-the-art schemes, our scheme reduces the false negative rate of data verification by an average of 5% and the false positive rate by an average of 80%, and increases the correct verification rate by 50% on average.
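
The embed-and-verify flow can be sketched with a keyed hash as the watermark generator; this is a hedged illustration of per-item watermark checking, not the authors' actual embedding scheme, and all names are hypothetical.

```python
import hashlib

def make_watermark(item_id: int, value: float, key: bytes = b"node-secret") -> str:
    """Derive a keyed watermark from a single data item. A real scheme for
    constrained nodes would truncate or embed this far more compactly."""
    msg = key + str(item_id).encode() + str(value).encode()
    return hashlib.sha256(msg).hexdigest()

def embed(item_id: int, value: float):
    """Sensor side: attach the watermark to the individual data item."""
    return (item_id, value, make_watermark(item_id, value))

def verify(packet) -> bool:
    """Sink side: regenerate the watermark and compare, item by item,
    with no dependence on neighboring items (no grouping)."""
    item_id, value, watermark = packet
    return watermark == make_watermark(item_id, value)

packet = embed(7, 21.5)
print(verify(packet))                # True: item arrived intact
print(verify((7, 99.0, packet[2])))  # tampered value fails verification
```

Because each item verifies on its own, dropping or tampering with one packet cannot invalidate its neighbors, which is the grouping-robustness point the abstract makes.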

Data Science and Deep Learning in Natural Sciences

  • Cha, Meeyoung
    • The Bulletin of The Korean Astronomical Society / v.44 no.2 / pp.56.1-56.1 / 2019
  • We are producing and consuming more data than ever before. Massive data allow us to better understand the world around us, yet they bring a new set of challenges due to their inherent noise and sheer size. Without smart algorithms and infrastructures, big data problems will remain intractable, and the same is true in natural science research. The mission of data science as a research field is to develop and apply computational methods that support or replace costly practices in handling data. In this talk, I will introduce how data science and deep learning have been used to solve various problems in the natural sciences. In particular, I will present a case study of analyzing high-resolution satellite images to infer socioeconomic scales of developing countries.


Methodology for determining optimal data sampling frequencies in water distribution systems (상수관망 데이터 수집의 최적 빈도 결정을 위한 방법론적 접근)

  • Hyunjun Kim;Eunhye Jeong;Kyungyup Hwang
    • Journal of Korean Society of Water and Wastewater / v.37 no.6 / pp.383-394 / 2023
  • Currently, there is no definitive regulation for the appropriate frequency of data sampling in water distribution networks, yet it plays a crucial role in the efficient operation of these systems. This study proposes a new methodology for determining the optimal frequency of data acquisition in water distribution networks. Based on the decomposition of signals into harmonic series, the methodology has been validated using actual data from water distribution networks. Analysis of 12 types of data collected from two points demonstrated that using the factors and cumulative periodograms of harmonic series enables accuracy similar to that of the original signals at lower data acquisition frequencies.
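
The cumulative-periodogram idea can be sketched as follows: compute the signal's harmonic powers, then keep the smallest set of harmonics whose cumulative power reaches an energy threshold. This is a simplified analogue of the paper's methodology, with illustrative function names and a naive O(N²) DFT.

```python
import cmath
import math

def periodogram(signal):
    """Magnitude-squared DFT coefficients for harmonics 1..N//2 (naive DFT)."""
    n = len(signal)
    power = []
    for k in range(1, n // 2 + 1):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        power.append(abs(coeff) ** 2)
    return power

def dominant_harmonics(signal, energy_fraction=0.95):
    """Smallest set of harmonics whose cumulative periodogram
    reaches the given fraction of total signal energy."""
    power = periodogram(signal)
    total = sum(power)
    ranked = sorted(range(len(power)), key=lambda k: power[k], reverse=True)
    kept, acc = [], 0.0
    for k in ranked:
        kept.append(k + 1)  # harmonics are 1-indexed
        acc += power[k]
        if acc >= energy_fraction * total:
            break
    return sorted(kept)

# A daily pattern sampled 24 times: a single harmonic carries nearly all
# the energy, suggesting a much lower acquisition frequency would suffice.
signal = [math.sin(2 * math.pi * t / 24) for t in range(24)]
print(dominant_harmonics(signal))  # [1]
```

The fewer harmonics needed to hit the threshold, the slower the signal truly varies, and the lower the sampling frequency can be set without losing accuracy.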

Feasibility Study of Case-Finding for Breast Cancer by Community Health Workers in Rural Bangladesh

  • Chowdhury, Touhidul Imran;Love, Richard Reed;Chowdhury, Mohammad Touhidul Imran;Artif, Abu Saeem;Ahsan, Hasib;Mamun, Anwarul;Khanam, Tahmina;Woods, James;Salim, Reza
    • Asian Pacific Journal of Cancer Prevention / v.16 no.17 / pp.7853-7857 / 2015
  • Background: Mortality from breast cancer is high in low- and middle-income countries, in part because most patients have advanced-stage disease when first diagnosed. Case-finding may be one approach to changing this situation. Materials and Methods: We conducted a pilot study to explore the feasibility of population-based case-finding for breast cancer by community health workers (CHWs), using different data collection methods and approaches to managing women found to have breast abnormalities. After training 8 CHWs in breast problem recognition, manual paper data collection, and operation of a cell-phone software platform for reporting demographic, history, and physical-finding information, these CHWs visited 3150 women aged 18 and over whom they could find, from 2356 households in 8 villages in rural Bangladesh. By random assignment of villages into 4 groups, data were collected manually (Group 1) or with the cell-phone program, either alone (Group 2) or with management algorithms (Groups 3 and 4); women judged to have a serious breast problem were shown a motivational video (Group 3) or navigated/accompanied to a breast problem center for evaluation (Group 4). Results: Only three visited women refused evaluation. The manual data acquisition group (1) had missing data in 80% of cases and took an average of 5 minutes longer per case, versus no missing data in the cell-phone-reporting groups (2, 3, and 4). One woman was identified with stage III breast cancer and was appropriately treated. Conclusions: Among very poor rural Bangladeshi women, there was very limited reluctance to undergo breast evaluation. The estimated rarity of clinical breast cancer is supported by these population-based findings, as are the feasibility and efficient use of mobile technology in this setting. Successor studies may most appropriately be trials focusing on improving the suggested benefits of motivation and navigation, on increasing the number of cases found, and on stage of disease at diagnosis as the primary endpoint.

Business Intelligence and Marketing Insights in an Era of Big Data: The Q-sorting Approach

  • Kim, Ki Youn
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.2 / pp.567-582 / 2014
  • The purpose of this study is to qualitatively identify the typologies and characteristics of the big data marketing strategies of major Korean companies that are taking advantage of the big data business. Big data refers to the piles of data accumulated from converging platforms such as computing infrastructures, smart devices, social networking, and new media; big data is also an analytic technique in itself. Numerous enterprises have grown conscious that big data can be a most significant resource or capability since the issue surfaced abruptly in Korea. Companies will be obliged to design their own implementation plans for big data marketing and to customize their own analytic skills in the new era of big data, which will fundamentally transform how businesses operate and how they engage with customers, suppliers, partners, and employees. This research employed a Q-study, a methodology, model, and theory used in 'subjectivity' research to interpret professional panels' perceptions or opinions through in-depth interviews. The method includes a series of Q-sorting analysis processes, proposing 40 stimulus statements (Q-sample) compressed out of about 60 (Q-population) and explaining the big data marketing model derived from in-depth interviews with 20 marketing managers at major companies (Q-sorters). As a result, this study contributes new findings and insights for small and medium-sized enterprises (SMEs) and policy makers that need guidelines or direction for future big data business.

Pseudo-standard and Its Implementation for the Maintenance Data of Ship and Offshore Structures (선박 및 해양 구조물에 있어서 유지보수용 데이터 교환을 위한 준표준 분석과 사례 구현)

  • Son, Gum-Jun;Lee, Jang-Hyun;Lee, Jeongyoul;Han, Eun-Jung
    • Korean Journal of Computational Design and Engineering / v.18 no.4 / pp.267-274 / 2013
  • This study focuses on the data schema and data content, including maintenance data, data structures, and illustration data relevant to the maintenance process of ships and offshore structures. Product lifecycle management (PLM) is expected to encompass all product data generated for operation and maintenance as well as for design and production. This paper introduces a data exchange schema for ship and offshore PLM, serving as the basis for the role of standards required by middle-of-life PLM. It also identifies a typology of standards relevant to PLM that addresses the schema of evolving standards, and identifies an XML schema supporting the exchange of data related to maintenance operations. Technical documentation based on the S1000D and Shipdex standards is explained, and a case study illustrating the use of standard data exchange and technical documents is presented.
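
A toy fragment in the spirit of such standards-based maintenance data exchange can be parsed with the standard library; the element names below are illustrative only, not the actual S1000D or Shipdex schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical maintenance-record fragment; real S1000D data modules
# use a far richer, standardized element vocabulary.
doc = """
<maintenanceRecord>
  <equipment id="pump-07">ballast pump</equipment>
  <task interval="P6M">impeller inspection</task>
</maintenanceRecord>
"""

root = ET.fromstring(doc)
print(root.find("equipment").get("id"))   # pump-07
print(root.find("task").get("interval"))  # P6M
```

An agreed XML schema is what makes such fragments exchangeable: any PLM system that knows the schema can extract the same fields without custom per-vendor parsing.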