• Title/Summary/Keyword: Data science

Search Results: 55,244 (Processing Time: 0.063 seconds)

Analysis of Reference Data in Science Guidebooks for Elementary Teachers Developed for 2015 Revised Curriculum - Focusing on Physics Section for the Third-Sixth Grade - (2015 개정 교육과정에 따른 초등학교 과학과 교사용 지도서의 참고자료 분석 - 3~6학년 물리영역을 중심으로 -)

  • Kim, Hyunguk;Song, Jinwoong
    • Journal of Korean Elementary Science Education
    • /
    • v.39 no.2
    • /
    • pp.155-167
    • /
    • 2020
  • This study analyzed the reference data for the physics section in science guidebooks for the third to sixth grades in elementary schools, developed according to the 2015 revised curriculum. The reference data were categorized by subject, objective, and presentation form, and the visual data used within them were categorized by type. The findings show that the ratio of the science-knowledge type was highest (53.8%) among the subjects of reference data, followed by application to real life and then supplementary inquiry experiments and activities. The ratios of other types, such as advanced science, environment, scientists, and science history, were less than 1%, so they need to be improved. The ratio of knowledge provision was highest (40.5%) among the objectives of reference data, while the ratios of conceptual supplementation and deepening were similar. Meanwhile, the expository type (88.4%) accounted for most of the presentation forms of reference data, and photographs and illustrations (93.6%) accounted for most of the visual data presented with it. Thus, more varied presentation forms and an expanded range of visual data appear to be needed. This study is expected to offer suggestions for the meaningful use of reference data in teachers' guidebooks and for the development of elementary science guidebooks for teachers.

Analysis of Computational Science and Engineering SW Data Format for Multi-physics and Visualization

  • Ryu, Gimyeong;Kim, Jaesung;Lee, Jongsuk Ruth
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.889-906
    • /
    • 2020
  • The analysis of multi-physics systems and the visualization of simulation data are crucial and difficult tasks in computational science and engineering. In Korea, the Korea Institute of Science and Technology Information (KISTI) developed EDISON, a web-based computational science simulation platform, which is now in its ninth year of service. Hitherto, the EDISON platform has focused on providing a robust simulation environment and various computational science analysis tools. However, owing to the increasing demands of collaborative research, data format standardization has become more important. In addition, as the visualization of simulation data becomes more important for users' understanding, the need to analyze the input/output data of each software package has grown. It is therefore necessary to organize the data formats and metadata for the representative software provided by EDISON. In this paper, we analyzed computational fluid dynamics (CFD) and computational structural dynamics (CSD) simulation software in the field of mechanical engineering, where several physical phenomena (fluids, solids, etc.) interact. Additionally, in order to visualize various simulation result data, we used existing web visualization tools developed by third parties (an illustrative metadata record of this kind is sketched below). In conclusion, based on the analysis of these data formats, it is possible to provide a foundation for multi-physics analysis and a web-based visualization environment, enabling users to focus on simulation more conveniently.
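
To make the paper's argument concrete, here is a minimal sketch of the kind of input/output metadata record a solver could publish so that a third-party web visualizer can interpret its results. All field names are illustrative assumptions, not EDISON's actual schema.

```python
import json

# Illustrative only: a minimal metadata record of the kind the paper argues
# each solver should publish for web visualization. Every field name here is
# an assumption, not EDISON's actual schema.
metadata = {
    "software": "hypothetical_cfd_solver",    # hypothetical solver name
    "domain": "CFD",
    "input": {"mesh": "wing.msh", "format": "gmsh-ascii"},
    "output": {
        "file": "flow_field.vtk",
        "format": "vtk-legacy",               # a format web tools can render
        "variables": [
            {"name": "pressure", "unit": "Pa", "location": "cell"},
            {"name": "velocity", "unit": "m/s", "location": "node"},
        ],
    },
}
print(json.dumps(metadata, indent=2))
```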

RADIO ASTRONOMICAL DATA PROCESSING USING MARK5B (MARK5B 시스템을 이용한 전파천문 데이터 처리)

  • Oh, Se-Jin;Yeom, Jae-Hwan;Roh, Duk-Gyoo;Chung, Hyun-Soo;Je, Do-Heung;Kim, Kwang-Dong;Kim, Bum-Koog;Hwang, Cheol-Jun;Jung, Gu-Young
    • Publications of The Korean Astronomical Society
    • /
    • v.21 no.2
    • /
    • pp.95-100
    • /
    • 2006
  • In this paper, we describe the implementation and development of a radio astronomical data processing system using Mark5B. KASI (Korea Astronomy and Space Science Institute) is constructing the KVN (Korean VLBI Network), to be completed by the end of 2007, which is the first VLBI (Very Long Baseline Interferometry) facility in Korea and is dedicated to mm-wave VLBI observation. KVN will adopt a DAS (Data Acquisition System) consisting of a digital filter with various functions and a 1 Gsps high-speed sampler that digitizes the radio astronomical data for analysis on the digital filter system. The analyzed data will be recorded at data rates of up to 1 Gbps. To test this, we implemented a system able to process 1 Gbps data rates and carried out a data recording experiment.

Verification Algorithm for the Duplicate Verification Data with Multiple Verifiers and Multiple Verification Challenges

  • Xu, Guangwei;Lai, Miaolin;Feng, Xiangyang;Huang, Qiubo;Luo, Xin;Li, Li;Li, Shan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.2
    • /
    • pp.558-579
    • /
    • 2021
  • Cloud storage provides flexible data storage services that allow data owners to outsource their data remotely, reducing their storage operation and management costs. Such outsourced data raise security concerns for the data owner, since the cloud service provider may maliciously delete or corrupt them. Data integrity verification is an important way to check the integrity of outsourced data. However, existing verification schemes only consider the case in which a single verifier launches multiple verification challenges, and neglect the overhead incurred when multiple verifiers launch challenges at around the same time. In that case, the duplicate data in multiple challenges are verified repeatedly, so verification resources are consumed in vain. We propose a duplicate data verification algorithm based on multiple verifiers and multiple challenges to reduce this overhead. The algorithm dynamically schedules the verifiers' challenges based on verification time and on the frequent itemsets of duplicate verification data in the challenge sets, obtained by applying the FP-Growth algorithm, and computes batch proofs for the frequent itemsets. Each challenge is then split into two parts, duplicate data and unique data, according to the results of the extraction (a simplified sketch of this splitting step appears below). Finally, the proofs of the duplicate and unique data are computed and combined to generate a complete proof for every original challenge. Theoretical analysis and experimental evaluation show that the algorithm reduces the verification cost and ensures the correctness of data integrity verification through flexible batch verification.
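
As a rough illustration of the splitting step, the sketch below finds blocks challenged by two or more verifiers and divides each challenge into duplicate and unique parts. A plain counter stands in for the paper's FP-Growth frequent-itemset mining, and the proof computation itself is omitted.

```python
from collections import Counter

def split_challenges(challenges):
    """Split each verifier's challenge into duplicate and unique block sets.

    challenges: dict mapping verifier id -> set of challenged block ids.
    Blocks requested by two or more verifiers count as 'duplicate'; their
    proof can be computed once in batch and reused across verifiers.
    (Simplified stand-in for the paper's FP-Growth frequent-itemset step.)
    """
    counts = Counter(b for blocks in challenges.values() for b in blocks)
    duplicates = {b for b, n in counts.items() if n >= 2}
    return {v: {"duplicate": blocks & duplicates, "unique": blocks - duplicates}
            for v, blocks in challenges.items()}

# Hypothetical example: three verifiers challenge overlapping blocks.
challenges = {"v1": {1, 2, 3}, "v2": {2, 3, 4}, "v3": {3, 5}}
for verifier, parts in split_challenges(challenges).items():
    print(verifier, parts)   # proofs for 'duplicate' blocks are shared
```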

Deploying Linked Open Vocabulary (LOV) to Enhance Library Linked Data

  • Oh, Sam Gyun;Yi, Myongho;Jang, Wonghong
    • Journal of Information Science Theory and Practice
    • /
    • v.3 no.2
    • /
    • pp.6-15
    • /
    • 2015
  • Since the advent of Linked Data (LD) as a method for building webs of data, there have been many attempts to apply and implement LD in various settings. Efforts have been made to convert bibliographic data in libraries into Linked Data, thereby generating Library Linked Data (LLD). However, when memory institutions have tried to link their data with external sources based on the principles suggested by Tim Berners-Lee, identifying appropriate vocabularies for describing their bibliographic data has proved challenging. The objective of this paper is to discuss the potential role of Linked Open Vocabularies (LOV) in providing better access to various open datasets and facilitating effective linking (a sketch of a vocabulary lookup against LOV follows below). The paper also examines the ways in which memory institutions can utilize LOV to enhance the quality of LLD and LLD-based ontology design.
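
As a rough illustration of such a vocabulary lookup, the sketch below queries LOV's public term-search service. The endpoint URL, parameters, and response layout are assumptions based on the public LOV service, not details from the paper.

```python
import requests

# Assumed endpoint of the public LOV term-search API; check the current
# LOV documentation before relying on it.
LOV_SEARCH = "https://lov.linkeddata.es/dataset/lov/api/v2/term/search"

def find_vocabulary_terms(keyword, term_type="class"):
    """Ask LOV for vocabulary terms matching a keyword (e.g., for LLD design)."""
    resp = requests.get(LOV_SEARCH, params={"q": keyword, "type": term_type},
                        timeout=10)
    resp.raise_for_status()
    # Result field names vary by API version; inspect the raw JSON as needed.
    return resp.json().get("results", [])

# A cataloger searching for a class with which to describe books:
for hit in find_vocabulary_terms("book"):
    print(hit)
```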

Study on the Current Status of Data Science Curriculum in Library and Information Science and its Direction (문헌정보학과의 데이터 사이언스 커리큘럼 개발 실태와 방향성 고찰)

  • Kang, Ji Hei
    • Journal of Korean Library and Information Science Society
    • /
    • v.47 no.3
    • /
    • pp.343-363
    • /
    • 2016
  • This study examines which data science curricula are offered by 69 iSchools and suggests a direction for Korean LIS schools. iSchools have clearly extended their subject territory into areas related to health, technology, and biotechnology; this phenomenon, however, is not actively observed in Korea. iSchools also focus on how to process and manage data, offering courses on data science, data management, and data security. 'Database' courses accounted for a higher proportion than 'data warehouse' courses, and 'data statistics and analysis' courses formed a similar proportion. Based on the analysis of the iSchools' curricula and a comparison with Korean curricula, this study suggests: expanding LIS curricula related to data science; strengthening the role of translational data science; developing curricula that build mathematical analysis capabilities; developing specialized curricula and experimental classes; and supporting new knowledge and skills for interacting with technology.

Development of the software for high speed data transfer of the high-speed, large capacity data archive system for the storage of the correlation data from Korea-Japan Joint VLBI Correlator (KJJVC)

  • Park, Sun-Youp;Kang, Yong-Woo;Roh, Duk-Gyoo;Oh, Se-Jin;Yeom, Jae-Hwan;Sohn, Bong-Won;Yukitoshi, Kanya;Byun, Do-Young
    • Bulletin of the Korean Space Science Society
    • /
    • 2008.10a
    • /
    • pp.37.2-37.2
    • /
    • 2008
  • The Korea-Japan Joint VLBI Correlator (KJJVC), to be used for the Korean VLBI Network (KVN) at the Korea Astronomy & Space Science Institute (KASI), is a high-speed calculator that outputs correlation results at a maximum speed of 1.4 GB/sec. To receive and record this data at full speed with no loss, the design of the software running on the data archive system, which receives and records the correlator's output, is very important. A naive single-threaded program that alternately receives data from the network and records it can cause a bottleneck when processing high-speed data, leading to probable data loss, and cannot exploit hardware that supports multiple cores or hyper-threading, or the operating systems that support such hardware. In this talk we summarize the design of the data transfer software for KJJVC and the high-speed, large-capacity data archive system, which uses general socket programming and multi-threading techniques (a minimal producer-consumer sketch of this design appears below), and the pre-BMT (benchmarking test) results obtained by running this software against the storage providers' proposed products.
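
The sketch below illustrates the multi-threaded design the abstract contrasts with the single-threaded one: a receiver thread drains the socket into a bounded queue while a writer thread records to disk, so network bursts and disk stalls no longer block each other. Host names, ports, and file names are hypothetical; the real KJJVC software would add framing, checksums, and error handling.

```python
import queue
import socket
import threading

CHUNK = 1 << 20                        # 1 MiB receive buffer (illustrative)
buf_queue = queue.Queue(maxsize=256)   # bounded: absorbs bursts, caps memory

def receiver(sock):
    """Producer: drain the socket as fast as possible into the queue."""
    while True:
        data = sock.recv(CHUNK)
        if not data:
            buf_queue.put(None)        # EOF sentinel for the writer
            return
        buf_queue.put(data)

def writer(path):
    """Consumer: write queued buffers to disk, independent of the network."""
    with open(path, "wb") as f:
        while (data := buf_queue.get()) is not None:
            f.write(data)

# Hypothetical endpoint and output file name.
sock = socket.create_connection(("correlator.example", 5000))
threading.Thread(target=receiver, args=(sock,), daemon=True).start()
writer("correlation.dat")
```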

Micro marketing using a cosmetic transaction data (화장품 고객 정보를 이용한 마이크로 마케팅)

  • Seok, Kyoung-Ha;Cho, Dae-Hyeon;Kim, Byung-Soo;Lee, Jong-Un;Paek, Seung-Hun;Jeon, Yu-Joong;Lee, Young-Bae;Kim, Jae-Gil
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.3
    • /
    • pp.535-546
    • /
    • 2010
  • There are two common ways to group customers for micro-marketing promotions: by how much they paid, or by how many times they purchased. In this study we are interested in the repurchase probability of customers. By analyzing customers' transaction and demographic data, we develop a forecasting model of repurchase and compute repurchase indexes from it, using logistic regression as the modeling tool (a brief sketch follows below). Finally, we categorize the customers into five groups according to their repurchase indexes, so that customers can be managed effectively and profit increased.
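
A minimal sketch of this modeling approach, assuming scikit-learn and hypothetical features (the paper's actual predictors come from the cosmetic transaction and demographic data): fit a logistic regression, use the predicted probability as the repurchase index, and cut the index into five quintile groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per customer: [amount spent, purchase count, age].
X = np.array([[120.0, 3, 34], [15.0, 1, 51], [300.0, 9, 28], [80.0, 2, 45],
              [200.0, 6, 39], [10.0, 1, 23], [150.0, 4, 31], [60.0, 2, 57]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])    # 1 = repurchased, 0 = did not

model = LogisticRegression().fit(X, y)
index = model.predict_proba(X)[:, 1]      # repurchase index in [0, 1]

# Five groups by quintiles of the repurchase index, 0 (lowest) .. 4 (highest).
edges = np.quantile(index, [0.2, 0.4, 0.6, 0.8])
group = np.digitize(index, edges)
print(group)
```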

Hybrid Recommendation Algorithm for User Satisfaction-oriented Privacy Model

  • Sun, Yinggang;Zhang, Hongguo;Zhang, Luogang;Ma, Chao;Huang, Hai;Zhan, Dongyang;Qu, Jiaxing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.10
    • /
    • pp.3419-3437
    • /
    • 2022
  • Anonymization is an important technology for privacy protection in the process of data release: before publishing data, the data publisher typically anonymizes the original data and then publishes the result. However, for data publishers with little or no background in anonymization techniques, configuring appropriate parameters for data with different characteristics is difficult. In response, this paper adds a resource pool of historical configuration schemes to the traditional anonymization process, from which configuration parameters can be recommended automatically (a minimal lookup sketch appears below). On this basis, a hybrid privacy-model recommendation algorithm oriented toward user satisfaction is formed. The algorithm includes a forward recommendation process and a reverse recommendation process, which serve users with different levels of anonymization knowledge. The algorithm is suitable for a wide population, providing a simpler, more efficient, and automated solution for data anonymization; it reduces data processing time and improves the quality of anonymized data, thereby enhancing data protection capabilities.
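
A minimal sketch of the forward recommendation idea, under the assumption that the resource pool maps simple dataset traits to previously accepted parameters (here, the k of k-anonymity). The paper's actual similarity measure, parameter set, and reverse recommendation process are not reproduced.

```python
from dataclasses import dataclass

@dataclass
class HistoricalConfig:
    """One resource-pool entry: dataset traits -> accepted anonymization params."""
    n_records: int
    n_quasi_identifiers: int
    k: int                     # k-anonymity parameter that satisfied past users

# Hypothetical pool entries; the paper builds these from past configurations.
pool = [
    HistoricalConfig(10_000, 3, 5),
    HistoricalConfig(100_000, 5, 10),
    HistoricalConfig(1_000_000, 8, 25),
]

def recommend(n_records, n_qi):
    """Forward recommendation: return the most similar past configuration."""
    def distance(c):
        return (abs(c.n_records - n_records) / max(n_records, 1)
                + abs(c.n_quasi_identifiers - n_qi))
    return min(pool, key=distance)

print(recommend(n_records=80_000, n_qi=4))   # -> nearest historical scheme
```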

A Fast and Exact Verification of Inter-Domain Data Transfer based on PKI

  • Jung, Im-Y.;Eom, Hyeon-Sang;Yeom, Heon-Y.
    • Journal of Information Technology Applications and Management
    • /
    • v.18 no.3
    • /
    • pp.61-72
    • /
    • 2011
  • Trust in the data created, processed, and transferred in e-Science environments can be estimated with provenance. The information that forms provenance, which describes how the data was created and reached its current state, grows as the data evolves, and tracing and verifying this massive provenance in order to trust the data is a heavy burden. How to trust the verification of data with provenance is, in turn, another issue. This paper proposes a fast and exact verification of inter-domain data transfer and data origin for e-Science environments based on PKI. The verification, called two-way verification, cuts down the overhead of tracking data along the causality chains of the Open Provenance Model by exploiting the domain structure of e-Science environments supported by the Grid Security Infrastructure (GSI); a minimal sketch of the two checks appears below. The proposed scheme is easy to apply without extra infrastructure, scalable irrespective of the number of provenance records, transparent, and cryptographically secure, with low overhead.
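
A minimal sketch of the two-check idea, assuming the third-party cryptography package and raw Ed25519 keys in place of the paper's X.509/GSI credentials: the receiver verifies one signature for the data origin and one for the inter-domain transfer, instead of replaying the whole provenance chain.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

data = b"simulation-output-v1"

# Check 1: data origin -- the creating domain signed the data once.
origin_key = Ed25519PrivateKey.generate()
origin_sig = origin_key.sign(data)

# Check 2: transfer -- the sending domain signs what it forwards.
sender_key = Ed25519PrivateKey.generate()
sender_sig = sender_key.sign(data)

def verified(pub, sig, payload):
    try:
        pub.verify(sig, payload)
        return True
    except InvalidSignature:
        return False

# The receiver trusts the data after just these two checks rather than
# re-tracing every step of the provenance record.
ok = (verified(origin_key.public_key(), origin_sig, data)
      and verified(sender_key.public_key(), sender_sig, data))
print("trusted:", ok)
```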