• Title/Summary/Keyword: Generate Data


Development of a Downscaling Method of Remotely-Sensed Soil Moisture Data Using Neural Networks and Ancillary Data (신경망기법과 보조 자료를 사용한 원격측정 토양수분자료의 Downscaling기법 개발)

  • Kim, Gwang-Seob; Lee, Eul-Rae
    • Journal of Korea Water Resources Association / v.37 no.1 / pp.21-29 / 2004
  • Advances in water resources engineering related to stable supply, management, and development are essential to overcome the coming water deficit in Korea. Large-scale remote sensing and the analysis of sub-pixel variability of soil moisture fields are necessary to understand the water cycle and to develop appropriate hydrologic models. The target resolution of upcoming global soil moisture monitoring is about 10 km, which is not appropriate for regional-scale hydrologic models. Therefore, a downscaling scheme is needed to generate hydrologic variables suitable for regional hydrologic models. The analysis of sub-pixel soil moisture variability shows that there is only a very weak linear relationship between ancillary data and soil moisture fields. A downscaling scheme was therefore developed using a physically based classification scheme and neural networks, which are able to capture the nonlinear relationship between ancillary data and soil moisture fields. The model is demonstrated by downscaling soil moisture fields from 4 km to 0.2 km resolution using remotely sensed data from the Washita '92 experiment.
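
The nonlinear mapping described above can be illustrated with a small feed-forward network; the sketch below uses synthetic stand-ins for the ancillary features and sub-pixel soil moisture values, so the feature set, network size, and data are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for ancillary data (e.g., elevation, slope, soil texture)
# and the corresponding fine-scale (sub-pixel) soil moisture values.
X = rng.uniform(size=(2000, 3))
y = 0.15 + 0.2 * np.sin(3 * X[:, 0]) * X[:, 1] + 0.05 * X[:, 2] \
    + 0.01 * rng.normal(size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small neural network acting as the nonlinear link between ancillary data
# and sub-pixel soil moisture, analogous in spirit to the paper's downscaler.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out fine-scale samples:", round(model.score(X_test, y_test), 3))
```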

An Experiment for Surface Reflectance Image Generation of KOMPSAT 3A Image Data by Open Source Implementation (오픈소스 기반 다목적실용위성 3A호 영상자료의 지표면 반사도 영상 제작 실험)

  • Lee, Kiwon; Kim, Kwangseob
    • Korean Journal of Remote Sensing / v.35 no.6_4 / pp.1327-1339 / 2019
  • Surface reflectance obtained by absolute atmospheric correction of satellite images is useful for scientific land applications and for analysis ready data (ARD). For Landsat and Sentinel-2 images, many types of radiometric processing methods have been developed, and these images are supported by most commercial and open-source software. However, for KOMPSAT 3/3A images, there are currently no tools or open-source resources for obtaining top-of-atmosphere (TOA) and top-of-canopy (TOC) reflectance. In this study, an atmospheric correction module for KOMPSAT 3/3A images is newly implemented on top of the optical calibration algorithm supported in the Orfeo ToolBox (OTB), an open-source remote sensing tool. This module contains the sensor model and spectral response data of KOMPSAT 3A. Aerosol measurements, such as AERONET data, can be used to generate the TOC reflectance image. Using this module, an experiment was conducted, and TOA and TOC reflectance products with and without AERONET data were obtained. This approach can be used to build an ARD database of surface reflectance derived by absolute atmospheric correction of KOMPSAT 3/3A satellite images.
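
The TOA step of such an absolute calibration can be shown with the standard DN-to-reflectance conversion; in the sketch below the gain, offset, solar irradiance, and geometry values are made-up placeholders rather than KOMPSAT 3A metadata, which in the paper is handled by the OTB-based module.

```python
import numpy as np

def toa_reflectance(dn, gain, offset, esun, sun_elev_deg, earth_sun_dist_au):
    """Convert digital numbers to top-of-atmosphere reflectance.

    radiance L = gain * DN + offset
    rho_TOA  = pi * L * d^2 / (ESUN * cos(solar zenith angle))
    """
    radiance = gain * dn.astype(np.float64) + offset
    sun_zenith = np.deg2rad(90.0 - sun_elev_deg)
    return np.pi * radiance * earth_sun_dist_au ** 2 / (esun * np.cos(sun_zenith))

# Illustrative values only; real gain/offset/ESUN come from the image metadata
# and the sensor's spectral response data.
dn_band = np.array([[1200, 1450], [1330, 1510]], dtype=np.uint16)
rho = toa_reflectance(dn_band, gain=0.02, offset=0.0, esun=1850.0,
                      sun_elev_deg=55.0, earth_sun_dist_au=1.0)
print(rho)
```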

Methodology for Processing GPS-based Bicycle Speed Data for Monitoring Bicycle Traffic (자전거 모니터링을 위한 자료처리 프로세스 개발 및 응용 - GPS기반 자전거 속도자료를 중심으로)

  • Rim, Heesub; Joo, Shinhye; Oh, Cheol
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.13 no.3 / pp.10-24 / 2014
  • The bicycle is a useful transportation mode that is healthy, emission-free, and environmentally friendly. Although great efforts have been made to promote bicycling, various hurdles and limitations still exist. One of the key issues in increasing bicycling is how to gather bicycle-related data from the field and generate valuable information for both users and operating agencies. This study proposes a method to process bicycle trajectory data obtained by tracing a global positioning system (GPS)-equipped bicycle, defined as the probe bicycle. The proposed method is based on the concept of statistical quality control of data. In addition, a data collection and processing scenario in support of a public bicycle system is presented. The outcomes of this study would provide a valuable foundation for developing bicycle traffic information systems as part of future intelligent transportation systems (ITS).
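
The statistical quality control idea can be sketched as follows: compute segment speeds from consecutive GPS fixes and flag samples that deviate too far from the mean. The track, sampling interval, and 3-sigma threshold below are illustrative assumptions, not the paper's actual parameters.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical probe-bicycle track: one GPS fix every 10 s, moving about 35 m
# per step, with one far-off glitch fix appended at the end.
track = [(10 * i, 37.5665 + 0.0002 * i, 126.9780 + 0.0003 * i) for i in range(20)]
track.append((200, 37.5750, 126.9900))  # positioning glitch

speeds = [haversine_m(a[1], a[2], b[1], b[2]) / (b[0] - a[0])
          for a, b in zip(track, track[1:])]  # m/s per segment

# Statistical quality control: flag speed samples outside mean +/- 3 sigma.
mean = sum(speeds) / len(speeds)
sigma = (sum((s - mean) ** 2 for s in speeds) / len(speeds)) ** 0.5
clean = [s for s in speeds if abs(s - mean) <= 3 * sigma]
print(f"{len(speeds) - len(clean)} of {len(speeds)} speed samples flagged as outliers")
```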

Applicability Assessment of Disaster Rapid Mapping: Focused on Fusion of Multi-sensing Data Derived from UAVs and Disaster Investigation Vehicle (재난조사 특수차량과 드론의 다중센서 자료융합을 통한 재난 긴급 맵핑의 활용성 평가)

  • Kim, Seongsam; Park, Jesung; Shin, Dongyoon; Yoo, Suhong; Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.35 no.5_2 / pp.841-850 / 2019
  • The purpose of this study is to strengthen rapid disaster mapping capability by improving positioning accuracy and by fusing multi-sensing point cloud data derived from Unmanned Aerial Vehicles (UAVs) and a disaster investigation vehicle. Positioning accuracy was evaluated for two drone-mapping procedures in Agisoft PhotoScan: 1) conventional geo-referencing with self-calibration, and 2) the proposed geo-referencing with an optimized camera model using accurate fixed Interior Orientation Parameters (IOPs) derived from an indoor camera calibration test and bundle adjustment. The analysis showed that the positioning RMS error improved from 2~3 m to 0.11~0.28 m horizontally and from 2.85 m to 0.45 m vertically. In addition, the proposed height-constrained fusion of multi-sensing point clouds reduced the point matching error to under about 0.07 m. Accordingly, the proposed data fusion approach will enable ortho-imagery and high-resolution three-dimensional geographic data to be generated effectively and in a timely manner for national disaster management.
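
A minimal sketch of height-constrained point cloud matching follows; the synthetic clouds, nearest-neighbour matching, and the vertical tolerance value are illustrative assumptions, not the authors' fusion procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Stand-ins for a UAV-derived cloud and a survey-vehicle cloud of the same
# scene: the second cloud is the first shifted slightly, with added noise.
uav_cloud = rng.uniform(0, 50, size=(5000, 3))
vehicle_cloud = uav_cloud + np.array([0.05, -0.03, 0.10]) \
    + rng.normal(0, 0.02, size=(5000, 3))

# Nearest-neighbour matching in XY, then a height constraint: discard pairs
# whose vertical (Z) difference exceeds the tolerance.
tree = cKDTree(uav_cloud[:, :2])
_, idx = tree.query(vehicle_cloud[:, :2])
dz = np.abs(vehicle_cloud[:, 2] - uav_cloud[idx, 2])
keep = dz < 0.2

matched = np.linalg.norm(vehicle_cloud[keep] - uav_cloud[idx[keep]], axis=1)
print(f"kept {keep.sum()} of {len(dz)} matches, "
      f"mean 3D matching error = {matched.mean():.3f} m")
```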

An Extension of MSDL for Obtaining Weapon Effectiveness Data in a Military Simulation (국방 시뮬레이션에서 무기효과 데이터 획득을 위한 MSDL의 확장)

  • Lee, Sangjin; Oh, Hyun-Shik; Kim, Dohyung; Rhie, Ye Lim; Lee, Sunju
    • Journal of the Korea Society for Simulation / v.30 no.2 / pp.1-9 / 2021
  • Many factors, such as wind direction, wind strength, temperature, and obstacles, affect a munition's trajectory. Since these factors eventually determine the probability of hit and the hitting point on a target, they should be considered to obtain reliable weapon effectiveness data. In this study, we propose an extension of the Military Scenario Definition Language (MSDL) that reflects these factors to improve the reliability of weapon effectiveness data. Based on the existing MSDL, which has been used to set the initial conditions of military simulation scenarios, newly identified subelements are added to the ScenarioID, Environment, Organizations, and Installations elements of the scenario schema. In addition, DamageAssessment and DesignOfExperiments elements are added to facilitate the generation of weapon effectiveness data. The extended MSDL enables the automatic generation of simulation scenarios that reflect the various factors affecting the probability of hit or kill. The extended MSDL is applied to AddSIM version 4.0, an integrated simulation software for weapon systems, to generate weapon effectiveness data.
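
As a rough illustration of what such an extended scenario document might look like, the fragment below builds an XML tree containing the Environment, DesignOfExperiments, and DamageAssessment elements named in the abstract; all child elements, attributes, and values are hypothetical placeholders, not the actual extended MSDL schema.

```python
import xml.etree.ElementTree as ET

scenario = ET.Element("MilitaryScenario")

# Environmental factors that influence the munition's trajectory.
env = ET.SubElement(scenario, "Environment")
ET.SubElement(env, "WindDirectionDeg").text = "270"
ET.SubElement(env, "WindSpeedMps").text = "6.5"
ET.SubElement(env, "TemperatureC").text = "18"

# Experiment design for sweeping factor levels across simulation runs.
doe = ET.SubElement(scenario, "DesignOfExperiments")
ET.SubElement(doe, "Factor", name="WindSpeedMps", levels="0,5,10")
ET.SubElement(doe, "Replications").text = "100"

# Placeholder for the damage assessment settings.
ET.SubElement(scenario, "DamageAssessment", model="probability-of-hit")

ET.indent(scenario)  # Python 3.9+
print(ET.tostring(scenario, encoding="unicode"))
```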

Oil Storage Tank Inspection using 3D Laser Scanner (3D 레이저스캐너를 활용한 유류 저장탱크의 검사)

  • Park, Joon-Kyu; Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.12 / pp.867-872 / 2020
  • Oil storage tanks are major structures in chemical industrial complexes. Damage to these structures from natural disasters or poor management can cause secondary damage such as chemical leakage, fire, and explosion, so it is essential to understand their deformation. In this study, data on an oil storage tank were acquired using a 3D laser scanner, and various analyses for storage tank management were performed by comparing the data with the design data. The oil storage tank was modeled using the scanned data and the design drawings, and the inspection was performed effectively by overlaying the scan data on the model. In addition, cross-sectional and exploded views of the deformation were produced to generate visible data on the deformation of the facility, and quantitative analysis showed that the oil storage tank had a maximum deformation of -7.16 mm. Drawings were also produced so that areas with large deformation could be precisely inspected, providing data that can be used for additional work. In the future, inspection of oil storage tanks using 3D laser scanners will provide quantitative and visible data on tank deformation, which will greatly improve the efficiency of facility management.
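
The core of such a scan-versus-design comparison is a deviation analysis against the design surface; the sketch below measures the signed radial deviation of simulated shell points from a hypothetical cylindrical tank, with all dimensions and the simulated dent chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical design geometry of a vertical cylindrical tank.
design_radius = 15.0  # m
axis_xy = np.array([0.0, 0.0])

# Simulated scan points on the shell: small random noise plus a local dent.
theta = rng.uniform(0, 2 * np.pi, 20000)
z = rng.uniform(0, 12, 20000)
deviation = rng.normal(0, 0.002, 20000)
deviation[(theta > 1.0) & (theta < 1.2) & (z > 4) & (z < 6)] -= 0.007
points = np.column_stack([(design_radius + deviation) * np.cos(theta),
                          (design_radius + deviation) * np.sin(theta), z])

# Deviation analysis: signed radial distance of each point from the design shell.
radial = np.hypot(points[:, 0] - axis_xy[0], points[:, 1] - axis_xy[1]) - design_radius
print(f"max outward deviation: {radial.max() * 1000:+.2f} mm, "
      f"max inward deviation: {radial.min() * 1000:+.2f} mm")
```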

A Study on Verification of Back TranScription(BTS)-based Data Construction (Back TranScription(BTS)기반 데이터 구축 검증 연구)

  • Park, Chanjun; Seo, Jaehyung; Lee, Seolhwa; Moon, Hyeonseok; Eo, Sugyeong; Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.109-117 / 2021
  • Recently, the use of speech-based interfaces as a means of human-computer interaction (HCI) has been increasing. Accordingly, interest in post-processors that correct errors in speech recognition results is also growing. However, a great deal of human labor is required to construct the data needed to build a sequence-to-sequence (S2S) based speech recognition post-processor. To alleviate the limitations of the existing construction methodology, a new data construction method called Back TranScription (BTS) was proposed. BTS is a technique that combines TTS and STT to create a pseudo-parallel corpus. This methodology eliminates the need for a phonetic transcriber and can automatically generate vast amounts of training data, saving cost. Extending the existing BTS research, this paper verifies through experiments that data should be constructed in consideration of text style and domain rather than without any criteria.
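
A minimal sketch of the BTS pipeline, assuming hypothetical `synthesize_speech` (TTS) and `recognize_speech` (STT) placeholders that would be replaced by real engines: clean text is spoken and re-recognized so that the recognition errors yield a (noisy, clean) pseudo-parallel pair for training the post-processor.

```python
def synthesize_speech(text: str) -> bytes:
    """Hypothetical TTS call returning raw audio for the given text."""
    raise NotImplementedError("plug in a real TTS engine here")

def recognize_speech(audio: bytes) -> str:
    """Hypothetical STT call returning the recognized (error-prone) transcript."""
    raise NotImplementedError("plug in a real STT engine here")

def build_pseudo_parallel_corpus(clean_sentences):
    """Back TranScription: TTS then STT, pairing noisy output with clean input."""
    pairs = []
    for clean in clean_sentences:
        audio = synthesize_speech(clean)   # text -> speech
        noisy = recognize_speech(audio)    # speech -> text with recognition errors
        pairs.append((noisy, clean))       # (source, target) for the S2S post-processor
    return pairs
```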

Big Data Analytics in RNA-sequencing (RNA 시퀀싱 기법으로 생성된 빅데이터 분석)

  • Sung-Hun WOO; Byung Chul JUNG
    • Korean Journal of Clinical Laboratory Science / v.55 no.4 / pp.235-243 / 2023
  • As next-generation sequencing has developed and become widely used, RNA-sequencing (RNA-seq) has rapidly emerged as the first-choice tool for global transcriptome profiling. With the significant advances in RNA-seq, various types of RNA-seq have evolved in conjunction with progress in bioinformatic tools. However, it is difficult to interpret the complex data and their underlying biological meaning without a general understanding of the types of RNA-seq and of bioinformatic approaches. In this regard, this paper discusses two main topics. First, two major variants of RNA-seq are described and compared with standard RNA-seq, providing insight into which RNA-seq method is most appropriate for a given study. Second, the most widely used RNA-seq data analyses are discussed: (1) exploratory data analysis and (2) pathway enrichment analysis. The paper introduces the most widely used exploratory data analyses for RNA-seq, such as principal component analysis, heatmaps, and volcano plots, which reveal the overall trends in a dataset. The pathway enrichment analysis section introduces three generations of pathway enrichment analysis and how they generate enriched pathways from an RNA-seq dataset.
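
The exploratory steps mentioned above can be sketched on a synthetic expression matrix (values treated as log2-scale); the data, group sizes, and thresholds below are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Synthetic matrix: 1000 genes x 6 samples (3 control, 3 treated), with the
# first 50 genes up-regulated in the treated group.
expr = rng.normal(5, 1, size=(1000, 6))
expr[:50, 3:] += 2.0

# Exploratory data analysis: PCA on samples to see overall grouping trends.
pcs = PCA(n_components=2).fit_transform(expr.T)
print("sample coordinates on PC1/PC2:\n", np.round(pcs, 2))

# Volcano-plot inputs: log2 fold change and p-value per gene.
log2_fc = expr[:, 3:].mean(axis=1) - expr[:, :3].mean(axis=1)
pvals = ttest_ind(expr[:, 3:], expr[:, :3], axis=1).pvalue
significant = (np.abs(log2_fc) > 1) & (pvals < 0.05)
print("genes passing |log2FC| > 1 and p < 0.05:", int(significant.sum()))
```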

Development of an AutoML Web Platform for Text Classification Automation (텍스트 분류 자동화를 위한 AutoML 웹 플랫폼 개발)

  • Ha-Yoon Song; Jeon-Seong Kang; Beom-Joon Park; Junyoung Kim; Kwang-Woo Jeon; Junwon Yoon; Hyun-Joon Chung
    • The Transactions of the Korea Information Processing Society / v.13 no.10 / pp.537-544 / 2024
  • The rapid advancement of artificial intelligence and machine learning technologies is driving innovation across various industries, with natural language processing offering substantial opportunities for the analysis and processing of text data. The development of effective text classification models requires several complex stages, including data exploration, preprocessing, feature extraction, model selection, hyperparameter optimization, and performance evaluation, all of which demand significant time and domain expertise. Automated machine learning (AutoML) aims to automate these processes, thus allowing practitioners without specialized knowledge to develop high-performance models efficiently. However, current AutoML frameworks are primarily designed for structured data, which presents challenges for unstructured text data, as manual intervention is often required for preprocessing and feature extraction. To address these limitations, this study proposes a web-based AutoML platform that automates text preprocessing, word embedding, model training, and evaluation. The proposed platform substantially enhances the efficiency of text classification workflows by enabling users to upload text data, automatically generate the optimal ML model, and visually present performance metrics. Experimental results across multiple text classification datasets indicate that the proposed platform achieves high levels of accuracy and precision, with particularly notable performance when utilizing a Stacked Ensemble approach. This study highlights the potential for non-experts to effectively analyze and leverage text data through automated text classification and outlines future directions to further enhance performance by integrating large language models.
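
The model selection step at the heart of such a platform can be reduced to a small sketch: vectorize the text, cross-validate a few candidate pipelines, and keep the best one. The tiny corpus and candidate set below are illustrative assumptions, not the platform's actual search space.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny illustrative corpus; a real run would use the user's uploaded dataset.
texts = ["great product, works well", "terrible, broke after a day",
         "excellent quality and fast shipping", "awful customer service",
         "really happy with this purchase", "worst purchase I have made",
         "love it, highly recommended", "do not buy, waste of money",
         "fantastic value for the price", "completely disappointed",
         "superb build quality", "defective and poorly made"]
labels = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

# Minimal AutoML-style loop: score candidate pipelines by cross-validation.
candidates = {
    "logreg": make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
    "naive_bayes": make_pipeline(TfidfVectorizer(), MultinomialNB()),
    "linear_svm": make_pipeline(TfidfVectorizer(), LinearSVC()),
}
scores = {name: cross_val_score(p, texts, labels, cv=3).mean()
          for name, p in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)

best_model = candidates[best].fit(texts, labels)
print(best_model.predict(["broke immediately, very disappointed"]))
```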

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform very well when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling both image comprehension and text generation simultaneously. In spite of its high entry barrier, which requires analysts to be able to process both image and text data, image captioning has established itself as one of the key fields in A.I. research owing to its wide applicability. In addition, much research has been conducted to improve the performance of image captioning in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person viewing it. Moreover, the way the image is interpreted and expressed also differs according to the level of expertise. The general public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. In contrast, domain experts tend to recognize an image by focusing on the specific elements necessary to interpret it based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate captions specialized for each domain by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, simply applying transfer learning with expertise data may invoke another type of problem: simultaneous learning on captions with various characteristics may cause so-called 'inter-observation interference', which makes it difficult to learn each characteristic point of view purely. When learning with a vast amount of data, most of this interference is self-purified and has little impact on the results. In contrast, in fine-tuning, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each character.
To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of images and expertise captions were created, and these data were used for the expertise transplantation experiments. The experimental results confirmed that captions generated by the proposed methodology reflect the perspective of the implanted expertise, whereas captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation. To achieve this goal, we present a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect much research to be actively conducted to mitigate the lack of expertise data and to improve the performance of image captioning.
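
A conceptual sketch of the character-independent transfer learning idea follows: a pretrained encoder is frozen, and a separate caption head is fine-tuned independently for each character so the perspectives do not interfere. The tiny modules, random tensors, and character names are placeholders, not the paper's actual captioning model or data.

```python
import copy
import torch
from torch import nn

class TinyEncoder(nn.Module):
    """Stand-in for a pretrained image encoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())

    def forward(self, images):
        return self.net(images)

class TinyDecoder(nn.Module):
    """Stand-in for a caption decoder (here just next-token logits)."""
    def __init__(self, vocab_size=1000):
        super().__init__()
        self.head = nn.Linear(128, vocab_size)

    def forward(self, features):
        return self.head(features)

encoder = TinyEncoder()             # assume weights come from general pre-training
for p in encoder.parameters():
    p.requires_grad = False         # freeze the general visual knowledge

pretrained_decoder = TinyDecoder()  # assume pre-trained on general captions

# One independent fine-tuning run per character, each starting from a fresh
# copy of the pretrained decoder so the characters do not interfere.
characters = ["expert", "general_public"]
specialized = {}
for character in characters:
    decoder = copy.deepcopy(pretrained_decoder)
    opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)
    for _ in range(10):                         # few steps on a small expertise set
        images = torch.randn(8, 3, 32, 32)      # placeholder expertise images
        targets = torch.randint(0, 1000, (8,))  # placeholder caption tokens
        loss = nn.functional.cross_entropy(decoder(encoder(images)), targets)
        opt.zero_grad(); loss.backward(); opt.step()
    specialized[character] = decoder

print("independently fine-tuned decoders:", list(specialized))
```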