• Title/Summary/Keyword: learning distribution


Study on the Seismic Random Noise Attenuation for the Seismic Attribute Analysis (탄성파 속성 분석을 위한 탄성파 자료 무작위 잡음 제거 연구)

  • Jongpil Won;Jungkyun Shin;Jiho Ha;Hyunggu Jun
    • Economic and Environmental Geology / v.57 no.1 / pp.51-71 / 2024
  • Seismic exploration is one of the most widely used geophysical exploration methods, with applications such as resource development, geotechnical investigation, and subsurface monitoring. It is essential for interpreting the geological characteristics of the subsurface because it provides accurate images of stratigraphic structures. Typically, geological features are interpreted by visually analyzing seismic sections. Recently, however, quantitative analysis of seismic data has been extensively researched to accurately extract and interpret target geological features. Seismic attribute analysis can provide quantitative information for geological interpretation based on seismic data. It is therefore widely used in various fields, including the analysis of oil and gas reservoirs, the investigation of faults and fractures, and the assessment of shallow gas distributions. However, seismic attribute analysis is sensitive to noise within the seismic data, so additional noise attenuation is required to enhance its accuracy. In this study, four seismic noise attenuation methods are applied and compared to mitigate random noise in poststack seismic data and enhance the attribute analysis results. FX deconvolution, DSMF, Noise2Noise, and DnCNN are applied to the Youngil Bay high-resolution seismic data to remove random noise. Energy, sweetness, and similarity attributes are then calculated from the noise-removed data. Subsequently, the characteristics of each noise attenuation method, the noise removal results, and the seismic attribute analysis results are qualitatively and quantitatively analyzed. Based on the advantages and disadvantages of each noise attenuation method and the characteristics of each seismic attribute, we propose a suitable noise attenuation method for improving seismic attribute analysis.
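
The workflow described above (attenuate random noise, then compute a poststack attribute) can be illustrated with a minimal Python sketch. The running-mean filter is only a stand-in for the FX deconvolution, DSMF, Noise2Noise, and DnCNN methods the study actually compares; the synthetic trace, noise level, and window sizes are illustrative assumptions:

```python
import numpy as np

def moving_average_denoise(trace, width=5):
    """Attenuate random noise with a simple running mean (a stand-in for the
    study's FX deconvolution / DSMF / Noise2Noise / DnCNN methods)."""
    kernel = np.ones(width) / width
    return np.convolve(trace, kernel, mode="same")

def energy_attribute(trace, window=25):
    """RMS energy attribute computed in a sliding window."""
    sq = np.convolve(trace ** 2, np.ones(window) / window, mode="same")
    return np.sqrt(sq)

# Synthetic trace: a 30 Hz wavelet under a Gaussian envelope, plus random noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
noisy = clean + 0.3 * rng.standard_normal(t.size)

denoised = moving_average_denoise(noisy)
energy = energy_attribute(denoised)

# Attribute quality depends on noise attenuation: compare misfit to the clean trace.
print("noisy MSE:   ", round(float(np.mean((noisy - clean) ** 2)), 4))
print("denoised MSE:", round(float(np.mean((denoised - clean) ** 2)), 4))
```

The same pattern (denoise, then compute attributes, then compare) underlies the study's qualitative and quantitative evaluation of the four methods.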

Application study of random forest method based on Sentinel-2 imagery for surface cover classification in rivers - A case of Naeseong Stream - (하천 내 지표 피복 분류를 위한 Sentinel-2 영상 기반 랜덤 포레스트 기법의 적용성 연구 - 내성천을 사례로 -)

  • An, Seonggi;Lee, Chanjoo;Kim, Yongmin;Choi, Hun
    • Journal of Korea Water Resources Association / v.57 no.5 / pp.321-332 / 2024
  • Understanding the status of surface cover in riparian zones is essential for river management and flood disaster prevention. Traditional survey methods rely on expert interpretation of vegetation through vegetation mapping or indices. However, these methods are limited in their ability to accurately reflect dynamically changing river environments. Against this backdrop, this study applied the Random Forest method to satellite imagery to assess the distribution of vegetation in rivers over multiple years, focusing on the Naeseong Stream as a case study. Remote sensing data from Sentinel-2 imagery were combined with ground truth data on the Naeseong Stream surface cover in 2016. The Random Forest machine learning algorithm was used to extract and train on 1,000 samples per surface cover class from ten predetermined sampling areas, followed by validation. A sensitivity analysis, an annual surface cover analysis, and an accuracy assessment were conducted to evaluate the method's applicability. The results showed an accuracy of 85.1% based on the validation data. The sensitivity analysis indicated the highest efficiency with 30 trees, 800 samples, and the downstream river section. The annual surface cover analysis accurately reflected the actual river environment. The accuracy analysis identified 14.9% boundary and internal errors, with high accuracy observed in six categories, excluding scattered and herbaceous vegetation. Although this study focused on a single river, applying the surface cover classification method to multiple rivers is necessary to obtain more accurate and comprehensive data.
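
The training and sensitivity-analysis setup described above can be sketched with scikit-learn, assuming it is available. The synthetic feature array below merely stands in for Sentinel-2 band reflectances; the class structure and the tree counts are illustrative (30 trees echoes the reported optimum, but these are not the Naeseong Stream data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Synthetic stand-in for Sentinel-2 band values: 10 bands, 4 cover classes,
# 800 samples per class (the sample size the sensitivity analysis favoured).
n_per_class, n_bands, n_classes = 800, 10, 4
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_bands))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)

# Sensitivity of validation accuracy to the number of trees.
accs = {}
for n_trees in (10, 30, 100):
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    clf.fit(X_tr, y_tr)
    accs[n_trees] = accuracy_score(y_te, clf.predict(X_te))
    print(f"{n_trees:>3} trees: accuracy {accs[n_trees]:.3f}")
```

On real imagery the same loop would be repeated over sample counts and river sections, as in the study's sensitivity analysis.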

Development of a Program for Calculating Typhoon Wind Speed and Data Visualization Based on Satellite RGB Images for Secondary-School Textbooks (인공위성 RGB 영상 기반 중등학교 교과서 태풍 풍속 산출 및 데이터 시각화 프로그램 개발)

  • Chae-Young Lim;Kyung-Ae Park
    • Journal of the Korean earth science society / v.45 no.3 / pp.173-191 / 2024
  • Typhoons are significant meteorological phenomena that involve interactions among the ocean, atmosphere, and land within the Earth system. In particular, wind speed, a key characteristic of typhoons, is influenced by various factors such as central pressure, trajectory, and sea surface temperature; therefore, a comprehensive understanding based on actual observational data is essential. In the 2015 revised secondary school textbooks, typhoon wind speed is presented through text and illustrations, so exploratory activities that promote a deeper understanding of wind speed are necessary. In this study, we developed a data visualization program with a graphical user interface (GUI) to facilitate the understanding of typhoon wind speeds through simple operations during the teaching-learning process. The program uses red-green-blue (RGB) image data of Typhoons Mawar, Guchol, and Bolaven, which occurred in 2023, from the Korean geostationary satellite GEO-KOMPSAT-2A (GK-2A) as input. The program calculates typhoon wind speeds from user-input coordinates of cloud movement around the typhoon and visualizes the wind speed distribution from parameters such as central pressure, storm radius, and maximum wind speed. The GUI-based program developed in this study can be applied to typhoons observed by GK-2A without errors and enables scientific exploration based on actual observations beyond the limitations of textbooks. It allows students and teachers to collect, process, analyze, and visualize real observational data without paid software or professional coding knowledge. This approach is expected to foster digital literacy, an essential competency for the future.
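
The core calculation, wind speed from the displacement of a tracked cloud feature between two consecutive satellite images, reduces to distance over time. A minimal sketch, with hypothetical cloud coordinates and a 10-minute image interval as assumptions (not values from the program itself):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def cloud_motion_wind(lat1, lon1, lat2, lon2, minutes):
    """Wind speed (m/s) from cloud displacement between two images."""
    dist_m = haversine_km(lat1, lon1, lat2, lon2) * 1000.0
    return dist_m / (minutes * 60.0)

# Hypothetical cloud feature tracked across two images 10 minutes apart.
speed = cloud_motion_wind(22.00, 130.00, 22.05, 130.10, 10)
print(f"estimated wind speed: {speed:.1f} m/s")
```

The GUI program wraps exactly this kind of computation behind click-selected coordinates, so students never touch the formula directly.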

Development of an Anomaly Detection Algorithm for Verification of Radionuclide Analysis Based on Artificial Intelligence in Radioactive Wastes (방사성폐기물 핵종분석 검증용 이상 탐지를 위한 인공지능 기반 알고리즘 개발)

  • Seungsoo Jang;Jang Hee Lee;Young-su Kim;Jiseok Kim;Jeen-hyeng Kwon;Song Hyun Kim
    • Journal of Radiation Industry / v.17 no.1 / pp.19-32 / 2023
  • The amount of radioactive waste is expected to increase dramatically with the decommissioning of nuclear power plants such as Kori-1, the first nuclear power plant in South Korea. Accurate nuclide analysis is necessary to manage radioactive waste safely, but research on the verification of radionuclide analysis has yet to be well established. This study aimed to develop a technology that can verify the results of radionuclide analysis based on artificial intelligence. We propose an anomaly detection algorithm for inspecting radionuclide analysis errors. We used the data from 'Updated Scaling Factors in Low-Level Radwaste' (NP-5077), published by EPRI (Electric Power Research Institute), and performed resampling with the SMOTE (Synthetic Minority Oversampling Technique) algorithm to augment the data. The 149,676 data augmented with the SMOTE algorithm were used to train the artificial neural networks (a classification network and an anomaly detection network), and 324 records from the NP-5077 report were used to verify their performance. The anomaly detection algorithm was divided into two modules: one detects cases where radioactive waste is incorrectly classified, and the other discriminates abnormal data such as missing or incorrectly entered values. The classification network was constructed from fully connected layers, and the anomaly detection network was composed of an encoder and a decoder; the latter operates by loading the latent vector from the final layer of the classification network. This study also conducted exploratory data analysis (statistics, histograms, correlation, covariance, PCA, k-means clustering, and DBSCAN). The analysis showed that distinguishing the types of radioactive waste is complicated because the data distributions overlap one another. In spite of these complexities, our deep-learning-based algorithm can distinguish abnormal data from normal data. Radionuclide analysis was verified using our anomaly detection algorithm, and meaningful results were obtained.
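
The SMOTE-style augmentation step can be sketched in a few lines of NumPy. This is a simplified re-implementation of the interpolation idea only (not the EPRI data or the authors' pipeline); the minority-class array, feature count, and neighbour count are illustrative:

```python
import numpy as np

def smote_like_oversample(X, n_new, k=5, rng=None):
    """SMOTE-style oversampling: synthesize new points by interpolating
    between a sampled minority point and one of its k nearest neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    synthetic = np.empty((n_new, X.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X))
        d = np.linalg.norm(X - X[j], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the point itself
        nb = X[rng.choice(neighbours)]
        synthetic[i] = X[j] + rng.random() * (nb - X[j])  # point on the segment
    return synthetic

rng = np.random.default_rng(1)
minority = rng.normal(size=(20, 3))  # stand-in for a scarce waste-record class
augmented = smote_like_oversample(minority, n_new=100, rng=rng)
print(augmented.shape)
```

Because each synthetic point lies on a segment between two real minority points, the augmented set stays inside the minority class's region of feature space, which is what makes SMOTE suitable for balancing classes before training the networks.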

An Analysis of Korean Floral Design Education Program and the Job Satisfaction of Florist and Applicants Florist (우리나라 화훼장식 교육프로그램 분석과 화훼장식가와 지망생 직업만족도 비교)

  • Moon, Hyun Sun;Hong, Jong Won;Han, Koh Woon;Jang, Eu Jean;Pak, Chun Ho
    • FLOWER RESEARCH JOURNAL / v.18 no.4 / pp.315-322 / 2010
  • To analyze Korea's education programs for floral design and the occupational satisfaction of florists, 60 practicing florists and 60 applicants were surveyed. The questionnaire items covered satisfaction with the occupation as experienced by respondents, the contents of related education, recognition from society, and social treatment. The study also analyzed the considerations in selecting the occupation, the job satisfaction of respondents with and without a related major, satisfaction with the present occupation, satisfaction with the education period, the perceived significance of florists' abilities, and the significance of requirements for occupational development. Both practicing florists and applicants considered aptitude for gardening and career prospects important. Among practicing florists, those with a related major were more satisfied than those without, but the difference was not statistically significant; among applicants, as with practicing florists, majors were more satisfied than non-majors. Regarding satisfaction by career length and education period, satisfaction with floral design education, learning experiences, and lecturers' teaching methods was lower among respondents with shorter careers, while among applicants, job satisfaction was similar regardless of education period. Regarding recognition of abilities, practicing florists with a related major considered ability in gardening and making decorations more important than non-majors did. On the other hand, in the fields of quality maintenance, floral design, and flower distribution and management, there was no significant difference between majors and non-majors in the recognition of ability. Among applicants, majors considered ability in gardening, floral design, making decorations, and flower distribution and management more important than non-majors did; for quality maintenance, majors generally rated the significance higher, but the difference was not significant. Based on these results, florists who majored in floral design had higher occupational satisfaction than non-majors, and among the contents of education, gardening-related education was recognized as the most important. At present, however, systematic and specialized education programs for cultivating professional florists are lacking. It is therefore suggested that courses based on systematic educational content integrating theory and practice are needed to address floral design education in this rapidly changing society.

Techniques and Traditional Knowledge of the Korean Onggi Potter (옹기장인의 옹기제작기술과 전통지식)

  • Kim, Jae-Ho
    • Korean Journal of Heritage: History & Science / v.48 no.2 / pp.142-157 / 2015
  • This study examines how traditional knowledge functions in the specific techniques used by Onggi potters to make pottery. It focuses on how traditional pottery manufacturing skills are categorized and what aspects are observed in the techniques. The pottery manufacturing process is divided into the preparation of raw material, the molding of the pottery, and the final firing step, and each step involves unique traditional knowledge. The preparation step mainly comprises knowledge of different kinds of mud: the colors and properties of mud, information on the regional distribution of quality mud, and the techniques to optimize mud for pottery making. The molding step mainly involves the structure and shape of spinning wheels, the techniques for building up mud, the ways of using different kinds of tools, and the techniques for drying the shaped pottery. The firing step involves knowledge of kilns and how to build them, the skills to stack pottery inside the kilns, knowledge of firewood and efficient ways of burning wood, the discrimination of different kinds of fire, and the techniques to stoke the kilns. These kinds of knowledge may be roughly divided into three categories, raw material preparation, molding, and firing, and they are closely connected with one another, because quality pottery becomes difficult to manufacture if even one factor is wrong. The knowledge involved in the manufacturing process mainly concerns raw materials, color, shape, distribution, fusion point, durability, and physical properties, all of which are scientific in nature. It is obtained through the experiential learning of apprenticeship rather than through formal education, and it is not easy to categorize. Most of this knowledge can be understood within the category of ethnoscience; in terms of UNESCO's framework for intangible cultural heritage, it mainly concerns 'knowledge on nature and the universe'. Unique knowledge and skills are, however, identified in the molding step. They can be referred to as 'body techniques', which unify the physical stance of potters, the tools they employ, and the conceived pottery. Potters themselves find it difficult to articulate this knowledge, and even when it is stated, it cannot easily be understood without experience and knowledge of the field. From the preparation of raw material to the finished products, the techniques and traditional knowledge involved in pottery manufacturing are closely connected across numerous categories and levels. Such an aspect can be referred to as a 'techniques chain'. Here, techniques mean not only scientific techniques but also, in addition to skills, the knowledge of various techniques and levels, including the habitual, unconscious behaviors of potters.

A Case Study: Improvement of Wind Risk Prediction by Reclassifying the Detection Results (풍해 예측 결과 재분류를 통한 위험 감지확률의 개선 연구)

  • Kim, Soo-ock;Hwang, Kyu-Hong
    • Korean Journal of Agricultural and Forest Meteorology / v.23 no.3 / pp.149-155 / 2021
  • Early warning systems for weather risk management in the agricultural sector have been developed to predict potential wind damage to crops. These systems take the daily maximum wind speed into account to determine the critical wind speed that causes fruit drop, and provide the risk information to farmers. To increase the accuracy of wind risk predictions, an artificial neural network for binary classification was implemented. In the present study, the daily wind speed and other weather data measured in 2019 at weather stations at sites of interest in Jeollabuk-do and Jeollanam-do as well as Gyeongsangbuk-do and part of Gyeongsangnam-do were used to train the neural network. These stations comprise 210 synoptic and automated weather stations operated by the Korea Meteorological Administration (KMA). The wind speed data collected at the same locations between January 1 and December 12, 2020 were used to validate the neural network model, and the data collected from December 13, 2020 to February 18, 2021 were used to evaluate the wind risk prediction performance before and after applying the network. The critical wind speed for damage risk was set to 11 m/s, the wind speed reported to cause fruit drop and damage. Furthermore, the maximum wind speeds were expressed using the Weibull probability density function for wind damage warnings. The accuracy of wind damage risk prediction improved from 65.36% to 93.62% after reclassification with the artificial neural network, although the error rate also increased from 13.46% to 37.64%. The machine learning approach used in the present study is likely to benefit cases where a failure of the risk warning system to predict damage is a relatively serious issue.
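
The warning step described above, expressing maximum wind speeds with a Weibull distribution and checking exceedance of the 11 m/s threshold, can be sketched as follows. The shape and scale parameters here are hypothetical placeholders, not values fitted in the study:

```python
import math

def weibull_exceedance(v, shape_k, scale_c):
    """P(V > v) for a Weibull-distributed wind speed V:
    the survival function exp(-(v / c) ** k)."""
    return math.exp(-(v / scale_c) ** shape_k)

# Hypothetical fitted parameters for a station's daily maximum wind speed;
# 11 m/s is the damage threshold reported to cause fruit drop.
k, c = 2.0, 6.0
p_damage = weibull_exceedance(11.0, k, c)
print(f"probability of exceeding 11 m/s: {p_damage:.4f}")
```

A warning system would issue an alert when this exceedance probability, computed from the forecast distribution for the coming day, crosses a chosen decision threshold.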

Vegetation classification based on remote sensing data for river management (하천 관리를 위한 원격탐사 자료 기반 식생 분류 기법)

  • Lee, Chanjoo;Rogers, Christine;Geerling, Gertjan;Pennin, Ellis
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.6-7 / 2021
  • Vegetation development in rivers is an important issue not only in academic fields such as geomorphology, ecology, and hydraulics, but also in river management practice. The problem of river vegetation is directly connected to the tension between the conflicting values of flood management and ecosystem conservation. In Korea, since the 2000s, the issue of river vegetation and land formation has been continuously raised under various conditions, such as regulated rivers downstream of dams, small eutrophic tributaries, and the floodplain sites of the four major river projects. Against this background, this study proposes a method for classifying the distribution of vegetation in rivers based on remote sensing data and presents the results of applying it to the Naeseong Stream. The Naeseong Stream is a representative example of a river landscape that has changed due to vegetation development from 2014 onward. The remote sensing data used in the study are images from the Sentinel-1 and Sentinel-2 satellites, which are operated by the European Space Agency (ESA) and provided through Google Earth Engine. For ground truth, a manually classified dataset of the Naeseong Stream surface in 2016 was used, in which the area is divided into eight types, including water, sand, and herbaceous and woody vegetation. The classification used the random forest technique, one of the machine learning algorithms. 1,000 samples were extracted from 10 pre-selected polygon regions, half of which were used as training data and half as verification data. The accuracy based on the verification data was 82~85%. The model established through training was also applied to images from 2016 to 2020, showing the year-by-year changes in vegetation zones. The technical limitations of the approach and possible improvements are also discussed. By providing quantitative information on vegetation distribution, this technique is expected to be useful both in the practical management of vegetation, such as thinning and rejuvenation of river vegetation, and in technical fields such as flood level calculation and coupled flow-vegetation modeling in rivers.
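
The train-once, apply-across-years workflow can be sketched with scikit-learn, assuming it is available. Everything below is synthetic: the band features, the four-class subset of the eight cover types, and the per-year scenes merely illustrate how a model trained on 2016 ground truth would be applied to later imagery to track vegetated area:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
classes = ["water", "sand", "herbaceous", "woody"]  # subset of the 8 types

def synth_pixels(n, cls):
    # Hypothetical band features; each class offset so classes are separable.
    return rng.normal(loc=cls, scale=0.5, size=(n, 6))

# 2016 "ground truth": features plus labels, half train / half verification.
X = np.vstack([synth_pixels(500, c) for c in range(len(classes))])
y = np.repeat(np.arange(len(classes)), 500)
idx = rng.permutation(len(X))
train, verify = idx[: len(X) // 2], idx[len(X) // 2:]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X[train], y[train])
accuracy = float((model.predict(X[verify]) == y[verify]).mean())

# Apply the 2016-trained model to later "years" and tally vegetated fraction.
for year in (2017, 2018):
    scene = np.vstack([synth_pixels(200, c) for c in range(len(classes))])
    pred = model.predict(scene)
    veg_fraction = float(np.isin(pred, [2, 3]).mean())  # herbaceous + woody
    print(year, round(veg_fraction, 2))
```

In the study itself the per-year inputs come from Sentinel scenes in Google Earth Engine rather than synthetic arrays, but the loop structure is the same.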


A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.93-108 / 2014
  • To support business decision making, interest in and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions are evolving into various patterns by taking advantage of information technology. To keep up with this evolution, there have been many efforts to improve fraud detection methods and advanced application systems in terms of both accuracy and ease of detection. As a case study, this paper aims to provide effective fraud detection methods for auction exception agricultural products in the largest Korean agricultural wholesale market. The auction exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades in agricultural products are performed by auction; however, specific products are designated as auction exception products when total product volumes are relatively small, the number of wholesalers is small, or it is difficult for wholesalers to purchase the products. The auction exception products policy, however, raises several problems concerning the fairness and transparency of transactions, which calls for fraud detection. To generate fraud detection rules, real transaction data for agricultural products traded in the market from 2008 to 2010 were analyzed, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first attempt to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules were generated using an outlier detection approach. We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items; quarterly average unit prices of product items for specific wholesalers are also used. The reliability of the generated fraud detection rules was confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and normalized Z-values are applied: a transaction's unit price is transformed to a Z-value to calculate its occurrence probability, approximating the distribution of unit prices by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because for auction exception agricultural products the Z-values are influenced by the outlier fraud transactions themselves, since the number of wholesalers is small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction being checked. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first function of the system is to import transaction databases. The next important functions are for setting fraud detection parameters; by changing these parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found under the chosen parameters, and the potential fraud transactions can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction exception agricultural products, and many research topics remain. First, the scope of the analyzed data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraud more accurately. Next, the scope of fraud detection should be extended to fishery products. There are also many possibilities for applying other data mining techniques; for example, a time series approach is a potential technique for this problem. Finally, although outlier transactions were detected here based on unit prices, it is also possible to derive fraud detection rules based on transaction volumes.
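
The Self-Eliminated Z-score idea described above can be sketched directly: each unit price is standardized against the mean and standard deviation of all the *other* prices, so a fraudulent outlier cannot distort its own score. The price list below is fabricated for illustration, not taken from the market data:

```python
import numpy as np

def self_eliminated_z(prices):
    """Z-score of each unit price computed against the mean and standard
    deviation of the remaining prices (the Self-Eliminated Z-score)."""
    prices = np.asarray(prices, dtype=float)
    z = np.empty_like(prices)
    for i in range(len(prices)):
        others = np.delete(prices, i)
        z[i] = (prices[i] - others.mean()) / others.std(ddof=1)
    return z

# Nine ordinary unit prices and one suspicious transaction.
prices = [100, 102, 98, 101, 99, 103, 97, 100, 102, 180]
z = self_eliminated_z(prices)
flagged = np.where(np.abs(z) > 3)[0]  # flags only the 180 transaction
print(flagged)
```

With an ordinary Z-score the 180 transaction would inflate the sample standard deviation and partly mask itself; excluding each price from its own reference statistics avoids exactly that, which matters when, as here, the number of wholesalers per item is small.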

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents, including Web pages, email messages, news reports, magazine articles, and business papers, do not yet benefit from keywords. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task, as it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are two main approaches to this aim: the keyword assignment approach and the keyword extraction approach. Both use machine learning methods and require, for training, a set of documents with keywords already attached. In the former, there is a given vocabulary, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted using supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with high similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between IVSM-generated keywords and author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
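
Steps (1) to (5) of the assignment process can be sketched with plain-Python term-frequency vectors and cosine similarity. The controlled vocabulary and document below are toy examples, and the keyword-set weights are assumed for illustration rather than taken from the paper:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors (dicts)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical controlled vocabulary: each candidate keyword set is a weighted
# term vector (steps 1 and 4 measure lengths and similarities of these).
keyword_sets = {
    "logistics": {"port": 2, "shipping": 3, "distribution": 2},
    "retrieval": {"document": 3, "query": 2, "keyword": 2},
}

# Steps 2-3: parse the target document and build its term-frequency vector.
document = "keyword generation assigns a keyword set to a document by query similarity"
doc_vec = Counter(document.split())

# Steps 4-5: score every keyword set and keep the most similar one.
scores = {name: cosine(doc_vec, kws) for name, kws in keyword_sets.items()}
best = max(scores, key=scores.get)
print(best)
```

Because the match is against keyword-set vectors rather than the document's own words, a set can be assigned even when some of its terms never appear in the text, which is exactly the "implicit keyword" advantage of the assignment approach.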