• Title/Summary/Keyword: Big data Era

System Construction and Data Development of National Standard Reference for Renewable Energy - Model-Based Standard Meteorological Year (신재생에너지 국가참조표준 시스템 구축 및 개발 - 모델 기반 표준기상년)

  • Boyoung Kim;Chang Ki Kim;Chang-yeol Yun;Hyun-goo Kim;Yong-heack Kang
    • New & Renewable Energy
    • /
    • v.20 no.1
    • /
    • pp.95-101
    • /
    • 2024
  • Since 1990, the Renewable Big Data Research Lab at the Korea Institute of Energy Technology has been observing solar radiation at 16 sites across South Korea. Serving as the National Reference Standard Data Center for Renewable Energy since 2012, it produces essential data for the sector. By 2020, it had standardized meteorological year data from 22 sites. Users, however, demand data for approximately 260 sites, corresponding to South Korea's municipalities, a need that exceeds what measurement-based data alone can provide. In response, our team developed a method to derive solar radiation data from satellite images, covering South Korea with about 400,000 grid cells of 500 m × 500 m each. Utilizing the satellite-derived data together with ERA5-Land reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF), we produced standard meteorological year data for 1,000 sites. Our research also addressed measurement traceability and uncertainty estimation, ensuring the reliability of the model data and the traceability of the existing measurement-based data.
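
The abstract does not specify how the standard meteorological year is assembled; one widely used approach (a sketch under that assumption, not necessarily the authors' method) picks, for each calendar month, the candidate year whose daily-irradiance distribution is closest to the long-term distribution, measured by the Finkelstein-Schafer statistic:

```python
import numpy as np

def finkelstein_schafer(candidate, long_term):
    """Mean absolute difference between the empirical CDF of one
    candidate month's daily values and the long-term empirical CDF."""
    grid = np.sort(long_term)
    cdf_cand = np.searchsorted(np.sort(candidate), grid, side="right") / len(candidate)
    cdf_long = np.arange(1, len(grid) + 1) / len(grid)
    return np.mean(np.abs(cdf_cand - cdf_long))

def select_typical_month(daily_by_year):
    """daily_by_year: {year: 1-D array of daily irradiance for one month}.
    Returns the year whose distribution best matches the pooled record."""
    pooled = np.concatenate(list(daily_by_year.values()))
    return min(daily_by_year,
               key=lambda y: finkelstein_schafer(daily_by_year[y], pooled))
```

Repeating this selection for all twelve months and concatenating the chosen months yields one candidate typical meteorological year per grid cell.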

A Study of Ginseng Culture within 'Joseonwangjosilok' through Textual Frequency Analysis

  • Mi-Hye Kim
    • CELLMED
    • /
    • v.14 no.2
    • /
    • pp.2.1-2.10
    • /
    • 2024
  • Through big data analysis of the 'Joseonwangjosilok', this study examines the perception of ginseng among the ruling class and its utilization during the Joseon era. It aims to provide foundational data for the development of ginseng into a high-value cultural commodity. The focus of this research, the Joseonwangjosilok, comprises 1,968 volumes in 948 books, spanning a record of 518 years. Data was collected through web crawling on the website of the National Institute of Korean History, followed by frequency analysis of significant words. To assess the interest in ginseng across the reigns of 27 kings during the Joseon era, ginseng frequency records were adjusted based on years in power and the number of articles, creating an interest index for comparative rankings across reigns. Analysis revealed higher interest in ginseng during the reigns of King Jeongjo and King Yeongjo in the 18th century, King Sunjo in the 19th century, King Sejong in the 15th century, King Sukjong in the 17th century, and King Gojong in the 19th century. Examining the temporal emergence and changes in ginseng during the Joseon era, general ginseng types like insam and sansam had the highest frequency in the 15th century. It appears that Korea adeptly utilized ceremonial goods in diplomatic relations with China and Japan, meeting the demand for ginseng from their royal and aristocratic societies. Processed ginseng varieties such as hongsam and posam, along with traded and taxed ginseng, showed peak frequency in the 18th century. This coincided with increased cultivation, allowing a higher supply and fostering the development of ginseng processing technologies like hongsam.
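
The abstract does not give the exact normalization used for the interest index; a hypothetical version along the lines described (ginseng mentions scaled by reign length and article count; all figures below are made-up illustrations, not the paper's data) might look like:

```python
def interest_index(freq, reign_years, n_articles, scale=1000.0):
    """Hypothetical normalization: raw mentions per recorded article,
    per year of reign, scaled for readability."""
    return scale * freq / (reign_years * n_articles)

reigns = {
    # (ginseng mentions, years in power, recorded articles) - illustrative only
    "Jeongjo": (420, 24, 9000),
    "Sejong": (310, 32, 16000),
}
ranking = sorted(reigns, key=lambda k: interest_index(*reigns[k]), reverse=True)
```

The normalization matters because a long reign with many recorded articles accumulates more raw mentions even at the same level of interest.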

A Meta Analysis of Innovation Diffusion Theory based on Behavioral Intention of Consumer (혁신확산이론 기반 소비자 행위의도에 관한 메타분석)

  • Nam, Soo-Tai;Kim, Do-Goan;Jin, Chan-Yong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.140-141
    • /
    • 2017
  • Big data analysis refers to the process of discovering meaningful new correlations, patterns, and trends in the large volumes of data stored in data warehouses, and of creating new value from them. It thus encompasses the effective analysis of the many kinds of big data found worldwide, such as social big data, machine-to-machine (M2M) sensor data, and corporate customer relationship management data. In the big data era, it has become increasingly important to analyze not only structured data that is well organized in databases, but also unstructured big data such as web documents, e-mails, and social data generated explosively on the internet, social network services, and mobile environments. A meta-analysis, in turn, is a statistical method for synthesizing the quantitative results of many published empirical studies. We reviewed a total of 750 samples across 50 studies on innovation diffusion theory (IDT) published in Korea between 2000 and 2017.
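
As background, the basic fixed-effect model used in such quantitative syntheses pools inverse-variance-weighted effect sizes; a minimal sketch with illustrative numbers (not the study's data):

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted mean effect size and its standard error,
    the basic fixed-effect model of quantitative meta-analysis."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# three hypothetical per-study effect sizes and their variances
pooled, se = fixed_effect_meta([0.30, 0.45, 0.38], [0.010, 0.020, 0.015])
```

Studies with smaller variance (typically larger samples) receive proportionally more weight in the pooled estimate.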

A Study on the Application of SE Approach to the Design of Health Monitoring Pilot Platform utilizing Big Data in the Nuclear Power Plant (NPP) (원전 상태 감시 및 조기 경보용 빅데이터 시범 플랫폼의 설계를 위한 시스템 엔지니어링 방법론 적용 연구)

  • Cha, Jae-Min;Shin, Junguk;Son, Choong-Yeon;Hwang, Dong-Sik;Yeom, Choong Sub
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.11 no.2
    • /
    • pp.13-29
    • /
    • 2015
  • With the advent of the big data era, big data has been expected to have a large impact on NPP safety. Despite high interest, only a limited number of studies on this issue have been published; in particular, the logical/physical structure of a big data platform for NPP safety, and systematic methods for designing one, have not been addressed. In this research, we design a new big data pilot platform for NPP safety, focusing on health monitoring and early warning services. To manage the inherently high complexity of the platform design, we propose a tailored design process based on systems engineering (SE) approaches. The proposed process consists of several steps, from stakeholder elicitation to integration testing: defining the operational concept, scenarios, and system requirements; designing a conceptual functional architecture; selecting alternative physical modules for the derived functions and assessing their applicability; designing a conceptual physical architecture; and implementing and integrating the physical modules. This paper covers the process up to the conceptual physical architecture design; the remaining steps and field test results will be presented in a following paper.

A Study on Securing Global Big Data Competitiveness based on its Environment Analysis (빅데이터 환경 분석과 글로벌 경쟁력 확보 방안에 대한 연구)

  • Moon, Seung Hyeog
    • The Journal of the Convergence on Culture Technology
    • /
    • v.5 no.2
    • /
    • pp.361-366
    • /
    • 2019
  • The amount of data created in today's intelligent information society is beyond imagination. Big data is highly diverse, ranging from information exchanged via SNS and the internet to data created by governments and enterprises. This data lies close at hand and, like crude oil, holds enormous value. Big data analysis and utilization through data mining across all areas of modern industrial society are becoming increasingly important for finding useful correlations and strengthening forecasting power against future uncertainty. This paper investigates the efficient management and utilization of the big data produced by a complex modern society. It also addresses strategies and methods for securing overall industrial competitiveness, creating synergy among industries, reducing costs, and applying big data effectively in the 4th industrial revolution era.

A Classification Algorithm Based on Data Clustering and Data Reduction for Intrusion Detection System over Big Data

  • Wang, Qiuhua;Ouyang, Xiaoqin;Zhan, Jiacheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.7
    • /
    • pp.3714-3732
    • /
    • 2019
  • With the rapid development of networks, the Intrusion Detection System (IDS) plays an increasingly important role in network applications, and many data mining algorithms are used to build one. However, with the advent of the big data era, massive data are generated, and when dealing with large-scale data sets most data mining algorithms suffer a high computational burden that makes the IDS much less efficient. To build an efficient IDS over big data, we propose a classification algorithm based on data clustering and data reduction. In the training stage, the training data are divided into clusters of similar size by the Mini Batch K-Means algorithm, and the center of each cluster serves as its index. We then select representative instances from each cluster to perform data reduction, and use the clusters of representative instances to build a K-Nearest Neighbor (KNN) detection model. In the detection stage, we sort the clusters by the distance between the test sample and the cluster indexes, and obtain the k nearest clusters in which to find the k nearest neighbors. Experimental results show that searching for neighbors via cluster indexes reduces the computational complexity significantly, and that classification with the reduced data of representative instances not only improves efficiency but also maintains high accuracy.
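
A rough sketch of the cluster-indexed search described in the abstract (plain Lloyd iterations stand in for Mini Batch K-Means, data reduction is omitted, and all names are hypothetical):

```python
import numpy as np

def build_index(train_x, train_y, n_clusters, n_iter=10, seed=0):
    """Cluster the training data; each cluster keeps its members and
    uses its center as the index, as in the paper's training stage."""
    rng = np.random.default_rng(seed)
    centers = train_x[rng.choice(len(train_x), n_clusters, replace=False)].astype(float)
    for _ in range(n_iter):  # Lloyd iterations (stand-in for Mini Batch K-Means)
        assign = np.argmin(((train_x[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(assign == c):
                centers[c] = train_x[assign == c].mean(axis=0)
    assign = np.argmin(((train_x[:, None] - centers) ** 2).sum(-1), axis=1)
    return [(centers[c], train_x[assign == c], train_y[assign == c])
            for c in range(n_clusters)]

def predict(clusters, x, k=3, n_probe=2):
    """Sort clusters by distance from x to their centers, then run KNN
    only inside the n_probe closest clusters."""
    order = np.argsort([((x - c[0]) ** 2).sum() for c in clusters])
    xs = np.vstack([clusters[i][1] for i in order[:n_probe]])
    ys = np.concatenate([clusters[i][2] for i in order[:n_probe]])
    nearest = np.argsort(((xs - x) ** 2).sum(-1))[:k]
    return np.bincount(ys[nearest]).argmax()  # majority vote among k neighbors
```

Probing only a few clusters is what cuts the neighbor-search cost from the full training set down to a small fraction of it.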

Dynamic Personal Knowledge Network Design based on Correlated Connection Structure (결합 연결구조 기반의 동적 개인 지식네트워크 설계)

  • Shim, JeongYon
    • The Journal of Korean Association of Computer Education
    • /
    • v.18 no.6
    • /
    • pp.71-79
    • /
    • 2015
  • In the new era of cloud computing and big data, retrieving useful data from a dynamic, huge data pool at the right time and in the right way has become critically important. Above all, the big data era requires advanced, efficient intelligent knowledge systems that can process dynamic, variable big data. Accordingly, this paper proposes a Dynamic Personal Knowledge Network as one such advanced intelligent-system approach. Adopting the function and neurodynamics of the human brain, we design an intelligent system with structural flexibility. For structure-function association, the personal knowledge network is structured so that it can reorganize itself by connecting common nodes. We also design the system to carry out a reasoning process along optimal paths extracted from the knowledge network.
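
A minimal sketch of the common-node idea, assuming a simple adjacency-set representation (hypothetical names, not the paper's actual data structures): networks merge wherever they share a node label, and a fewest-hops path then serves as one simple notion of an "optimal path" for reasoning.

```python
from collections import deque

def merge_networks(*nets):
    """Join several knowledge networks: shared node labels become the
    connection points, echoing the common-node reorganization idea."""
    merged = {}
    for net in nets:
        for node, neighbors in net.items():
            merged.setdefault(node, set()).update(neighbors)
            for nb in neighbors:  # keep edges symmetric
                merged.setdefault(nb, set()).add(node)
    return merged

def knowledge_path(net, start, goal):
    """Breadth-first search for a fewest-hops path between two concepts."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nb in net.get(path[-1], ()):
            if nb not in seen:
                seen.add(nb)
                queue.append(path + [nb])
    return None  # no connection between the concepts
```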

The Church's Social Responsibility in the IoT Era - Focus on the five essential elements (사물 인터넷 시대의 교회의 사회적 책임 - 5대 본질 요소 중심으로)

  • Lee, MyounJae
    • Journal of Internet of Things and Convergence
    • /
    • v.7 no.1
    • /
    • pp.27-34
    • /
    • 2021
  • Modern society is rapidly developing around the Internet of Things and big data, and some industries have grown even in the corona era. The church, however, has not developed or responded adequately to the corona era, which is shaking the very existence of the church. The essence of the church consists of worship, evangelism, education, service, and fellowship. In the corona era, where non-face-to-face safety matters, this essence does not change, but efforts are needed to embody it in a socially responsible manner. This paper discusses the social responsibility of the church in this respect, focusing on its five essential elements. To this end, we examine the essential elements of the church and study implementation methods suited to the corona era.

Multiple testing and its applications in high-dimension (고차원자료에서의 다중검정의 활용)

  • Jang, Woncheol
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.5
    • /
    • pp.1063-1076
    • /
    • 2013
  • The power of modern technology is opening a new era of big data. The size of these datasets affords us the opportunity to answer many open scientific questions, but also presents some interesting challenges. High-dimensional data such as microarrays are common in big data. In this paper, we give an overview of recent developments in multiple testing, including global and simultaneous testing, and of its applications to high-dimensional data.
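
As a concrete example of simultaneous testing in this setting, the Benjamini-Hochberg step-up procedure (a standard false-discovery-rate method in this literature, though the abstract does not name it) can be sketched as:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject the hypotheses whose
    sorted p-value p_(i) satisfies p_(i) <= (i/m) * q, controlling the
    false discovery rate at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears its threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    return {order[i] for i in range(k)}  # indices of rejected hypotheses
```

Unlike a Bonferroni correction, the per-hypothesis threshold grows with the rank, which preserves power when many of the m hypotheses are false.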

Decombined Distributed Parallel VQ Codebook Generation Based on MapReduce (맵리듀스를 사용한 디컴바인드 분산 VQ 코드북 생성 방법)

  • Lee, Hyunjin
    • Journal of Digital Contents Society
    • /
    • v.15 no.3
    • /
    • pp.365-371
    • /
    • 2014
  • In the era of big data, algorithms designed for the existing IT environment cannot run directly on distributed architectures such as Hadoop. New distributed algorithms built on frameworks such as MapReduce are therefore needed. Lloyd's algorithm, commonly used for vector quantization, has recently been implemented with MapReduce. In this paper, we propose a decombined distributed VQ codebook generation algorithm, based on a MapReduce VQ codebook generation algorithm, to obtain results faster. Applying the proposed algorithm to big data showed higher performance than the conventional method.
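
The "decombined" variant itself is not detailed in the abstract; the sketch below shows only the baseline MapReduce formulation of one Lloyd iteration that it builds on, with hypothetical function names standing in for the mapper and reducer:

```python
import numpy as np

def map_phase(vectors, codebook):
    """Mapper: emit (nearest-codeword-id, (vector, count)) for each input
    vector; in a real job these pairs are shuffled to reducers by key."""
    for v in vectors:
        key = int(np.argmin(((codebook - v) ** 2).sum(axis=1)))
        yield key, (v, 1)

def reduce_phase(pairs, codebook):
    """Reducer: average the vectors assigned to each codeword, producing
    the updated codebook for the next Lloyd iteration."""
    sums = {k: (np.zeros_like(codebook[0]), 0) for k in range(len(codebook))}
    for key, (v, n) in pairs:
        s, c = sums[key]
        sums[key] = (s + v, c + n)
    # keep the old codeword when no vector was assigned to it
    return np.array([s / c if c else codebook[k] for k, (s, c) in sums.items()])
```

Iterating map and reduce until the codebook stops moving reproduces Lloyd's algorithm, with the heavy per-vector assignment work spread across mappers.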