• Title/Summary/Keyword: fast computation

Search Results: 747

Evaluation of the DCT-PLS Method for Spatial Gap Filling of Gridded Data (격자자료 결측복원을 위한 DCT-PLS 기법의 활용성 평가)

  • Youn, Youjeong; Kim, Seoyeon; Jeong, Yemin; Cho, Subin; Lee, Yangwon
    • Korean Journal of Remote Sensing / v.36 no.6_1 / pp.1407-1419 / 2020
  • Long time-series gridded data are crucial for analyzing Earth environmental changes. Climate reanalysis and satellite images are now used as global-scale, periodic, and quantitative information on the atmosphere and land surface. This paper examines the feasibility of DCT-PLS (penalized least squares regression based on the discrete cosine transform) for the spatial gap filling of gridded data through experiments on multiple variables. Because gap-free data are required for an objective comparison of the original with the gap-filled data, we used LDAPS (Local Data Assimilation and Prediction System) daily data and MODIS (Moderate Resolution Imaging Spectroradiometer) monthly products. In the experiments for relative humidity, wind speed, LST (land surface temperature), and NDVI (normalized difference vegetation index), randomly generated gaps were reconstructed very close to the original data, with correlation coefficients over 0.95 for all four variables. Because the DCT-PLS method does not require ancillary data and can exploit both spatial and temporal information with fast computation, it can be applied to operational systems for satellite data processing.
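
To illustrate the core idea only (not the authors' implementation), the sketch below applies a penalized least squares filter in the DCT domain to fill randomly generated gaps in a 2D grid, in the spirit of Garcia's "smoothn" formulation; the smoothing parameter s, the iteration count, and the synthetic field are illustrative assumptions.

```python
# Minimal sketch of DCT-based penalized least squares (DCT-PLS) gap filling.
# Gaps are handled by a weighted iteration (weight 0 at missing cells).
import numpy as np
from scipy.fft import dctn, idctn

def dct_pls_fill(y, mask, s=1.0, n_iter=100):
    """Fill gaps in a 2D grid y where mask==True marks valid cells."""
    ny, nx = y.shape
    # Eigenvalues of the discrete Laplacian in the DCT basis
    lam = (2 - 2 * np.cos(np.pi * np.arange(ny) / ny))[:, None] \
        + (2 - 2 * np.cos(np.pi * np.arange(nx) / nx))[None, :]
    gamma = 1.0 / (1.0 + s * lam**2)            # low-pass filter in DCT space

    w = mask.astype(float)                       # 0 at gaps, 1 at valid cells
    z = np.where(mask, y, np.mean(y[mask]))      # initial guess at gaps
    for _ in range(n_iter):
        # Weighted residual: keep observed values, let gaps follow the smooth field
        resid = w * (np.where(mask, y, 0.0) - z)
        z = idctn(gamma * dctn(z + resid, norm='ortho'), norm='ortho')
    return np.where(mask, y, z)                  # observed cells stay untouched

# Example: fill randomly generated gaps in a synthetic field
field = np.random.rand(64, 64)
valid = np.random.rand(64, 64) > 0.1             # ~10% gaps
filled = dct_pls_fill(field, valid)
```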

Hierarchical Particle Swarm Optimization for Multi UAV Waypoints Planning Under Various Threats (다양한 위협 하에서 복수 무인기의 경로점 계획을 위한 계층적 입자 군집 최적화)

  • Chung, Wonmo; Kim, Myunggun; Lee, Sanha; Lee, Sang-Pill; Park, Chun-Shin; Son, Hungsun
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.6 / pp.385-391 / 2022
  • This paper develops a path planning algorithm that combines gradient descent-based path planning (GBPP) and particle swarm optimization (PSO) to account for prohibited flight areas, terrain information, and the characteristics of fixed-wing unmanned aerial vehicles (UAVs) in 3D space. GBPP can generate a path quickly, but depending on the initial path it often converges to a local minimum and produces an unsafe path. Bio-inspired swarm intelligence algorithms, such as the genetic algorithm (GA) and PSO, can avoid the local-minimum problem by sampling several paths. However, as the number of optimization variables grows with the number of UAVs and waypoints, the number of particles, and hence the computation time and effort, increases accordingly. To overcome the disadvantages of the two algorithms, a hierarchical path planning algorithm based on hierarchical particle swarm optimization (HPSO) is developed by defining the initial path, which is the input of GBPP, in terms of the particle variables. The feasibility of the proposed algorithm is verified by software-in-the-loop simulation (SILS) of a flight control computer (FCC) for UAVs.
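
As a point of reference for the swarm-intelligence half of the method, the sketch below runs a plain (non-hierarchical) PSO over waypoint coordinates with a toy fitness that penalizes path length and proximity to a single threat point; the fitness function, hyperparameters, and geometry are illustrative assumptions, not the paper's HPSO formulation.

```python
# Minimal sketch of particle swarm optimization over 3D waypoint coordinates.
import numpy as np

def fitness(waypoints, start, goal, threat, threat_radius=2.0):
    path = np.vstack([start, waypoints.reshape(-1, 3), goal])
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    # Penalize waypoints that come closer than threat_radius to the threat point
    penalty = np.sum(np.maximum(0.0, threat_radius - np.linalg.norm(path - threat, axis=1)))
    return length + 50.0 * penalty

def pso(start, goal, threat, n_waypoints=3, n_particles=30, n_iter=200):
    dim = 3 * n_waypoints
    x = np.random.uniform(-10, 10, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p, start, goal, threat) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([fitness(p, start, goal, threat) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest.reshape(-1, 3)

best_waypoints = pso(np.zeros(3), np.array([20.0, 0.0, 5.0]), np.array([10.0, 0.0, 2.0]))
```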

Text Classification Using Heterogeneous Knowledge Distillation

  • Yu, Yerin; Kim, Namgyu
    • Journal of the Korea Society of Computer and Information / v.27 no.10 / pp.29-41 / 2022
  • Recently, with the development of deep learning technology, a variety of huge models with excellent performance have been devised by pre-training on massive amounts of text data. However, for such a model to be applied to real-life services, its inference speed must be fast and its amount of computation low, so model compression techniques are attracting attention. Knowledge distillation, a representative model compression technique, transfers the knowledge already learned by a teacher model to a relatively small student model and can be used in a variety of ways. However, knowledge distillation has a limitation: it struggles with problems that have low similarity to previously learned data, because the teacher model learns only the knowledge necessary for solving the given problem and distills it to the student model from the same point of view. Therefore, we propose a heterogeneous knowledge distillation method in which the teacher model learns a higher-level concept rather than the knowledge required for the task the student model needs to solve, and then distills this knowledge to the student model. Through classification experiments on about 18,000 documents, we confirmed that the heterogeneous knowledge distillation method outperforms traditional knowledge distillation in both learning efficiency and accuracy.
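
For context, the sketch below shows the standard knowledge-distillation loss (softened teacher targets via KL divergence plus hard-label cross-entropy) in PyTorch; the temperature, weighting, and toy logits are illustrative assumptions, and the paper's heterogeneous variant, where the teacher is trained on a higher-level concept, is not reproduced here.

```python
# Minimal sketch of a standard knowledge-distillation loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: match the student's softened distribution to the teacher's
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction='batchmean') * (T * T)
    # Hard targets: usual cross-entropy against the ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 3-class problem
s = torch.randn(8, 3, requires_grad=True)
t = torch.randn(8, 3)
y = torch.randint(0, 3, (8,))
loss = distillation_loss(s, t, y)
loss.backward()
```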

A Study about Learning Graph Representation on Farmhouse Apple Quality Images with Graph Transformer (그래프 트랜스포머 기반 농가 사과 품질 이미지의 그래프 표현 학습 연구)

  • Ji Hun Bae; Ju Hwan Lee; Gwang Hyun Yu; Gyeong Ju Kwon; Jin Young Kim
    • Smart Media Journal / v.12 no.1 / pp.9-16 / 2023
  • Recently, convolutional neural network (CNN) based systems have been developed to overcome the limits of human labor in farmhouse apple quality classification. However, since convolutional neural networks accept only images of the same size, preprocessing such as sampling may be required, and in the case of oversampling, information loss of the original image, such as quality degradation and blurring, occurs. In this paper, to minimize these problems, we generate an image-patch-based graph from the original image and propose a random-walk-based positional encoding method for applying a graph transformer model. The method learns position embeddings for patches, which carry no inherent positional information, based on the random walk algorithm, and finds an optimal graph structure by aggregating useful node information through the self-attention mechanism of the graph transformer model. As a result, it is robust and performs well even on new graph structures with random node orderings and on arbitrary graph structures determined by the location of the object in the image. In experiments on 5 apple quality datasets, the learning accuracy was higher than that of other GNN models by 1.3% to 4.7%, and the number of parameters was 3.59M, only about 15% of the 23.52M of the ResNet18 model. The reduced amount of computation therefore also yields fast inference speed, demonstrating the effectiveness of the approach.
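
The random-walk positional encoding referred to above is commonly computed as each node's return probability after k steps of a random walk on the patch graph; the sketch below shows that computation on a toy 4-node adjacency matrix (the graph, the number of steps, and the way patches are connected are illustrative assumptions).

```python
# Minimal sketch of random-walk positional encoding for graph nodes:
# each node's encoding is the probability of returning to itself after
# k = 1..K steps of a random walk (diagonal of powers of D^-1 A).
import numpy as np

def random_walk_pe(adj, k_steps=8):
    deg = adj.sum(axis=1)
    rw = adj / np.maximum(deg, 1e-12)[:, None]   # row-normalized transition matrix
    pe = np.zeros((adj.shape[0], k_steps))
    m = np.eye(adj.shape[0])
    for k in range(k_steps):
        m = m @ rw                               # (k+1)-step transition probabilities
        pe[:, k] = np.diag(m)                    # return probability of each node
    return pe

# Toy usage: a 4-node path graph of image patches
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(random_walk_pe(A, k_steps=4))
```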

Fast Full Search Block Matching Algorithm Using The Search Region Subsampling and The Difference of Adjacent Pixels (탐색 영역 부표본화 및 이웃 화소간의 차를 이용한 고속 전역 탐색 블록 정합 알고리듬)

  • Cheong, Won-Sik; Lee, Bub-Ki; Lee, Kyeong-Hwan; Choi, Jung-Hyun; Kim, Kyeong-Kyu; Kim, Duk-Gyoo; Lee, Kuhn-Il
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.11 / pp.102-111 / 1999
  • In this paper, we propose a fast full search block matching algorithm using search region subsampling and the differences between adjacent pixels in the current block. In the proposed algorithm, we calculate a lower bound of the mean absolute difference (MAD) at each search point using the MAD value of a neighboring search point and the differences between adjacent pixels in the current block, and then perform the block matching process only at the search points that require it. Because the lower bound at a search point needs the MAD of a neighboring search point, the search points are subsampled by a factor of 4 and the MAD values at the subsampled search points are computed by block matching. The lower bounds of MAD at the remaining search points are then calculated from the MAD of the neighboring subsampled search point and the differences between adjacent pixels in the current block. Finally, we discard the search points whose lower bound exceeds the reference MAD, defined as the minimum of the MAD values at the subsampled search points, and perform block matching only at the remaining search points. In this way, the computational complexity is reduced drastically while the motion-compensated error performance is kept the same as that of the full search block matching algorithm (FSBMA). Experimental results show that the proposed method has much lower computational complexity than FSBMA while maintaining the same motion-compensated error performance.
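
For reference, the sketch below implements the plain full-search MAD block matching that the paper accelerates, with only a generic row-wise early exit once the running sum exceeds the current best; the paper's lower-bound pruning based on neighboring-point MADs and adjacent-pixel differences is not reproduced, and the block size, search range, and test frames are illustrative assumptions.

```python
# Minimal sketch of full-search block matching with MAD (the FSBMA baseline).
import numpy as np

def full_search_mad(cur, ref, bx, by, block=16, search=8):
    """Return the motion vector minimizing MAD for the block at (by, bx)."""
    c = cur[by:by + block, bx:bx + block].astype(np.int32)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            r = ref[y:y + block, x:x + block].astype(np.int32)
            sad = 0
            for row in range(block):                     # row-wise early exit
                sad += np.abs(c[row] - r[row]).sum()
                if sad >= best_sad:
                    break
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad / (block * block)           # MAD = SAD / N^2

cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, (2, -3), axis=(0, 1))                  # known shift for testing
print(full_search_mad(cur, ref, bx=24, by=24))
```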

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong; Chung Bu-Heung; Jang Seong-Hyung
    • Geophysics and Geophysical Exploration / v.2 no.1 / pp.26-32 / 1999
  • Among the various seismic data processing steps, velocity analysis is the most time-consuming and labor-intensive. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required, and the tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point; generally, the plot consists of a semblance contour, a super gather, and a stack panel, and the interpreter chooses the velocity function by analyzing the plot. The technique is highly dependent on the interpreter's skill and requires considerable human effort. As high-speed graphics workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with the mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. Also, the velocity analysis must be carried out by carefully choosing the location of the analysis point and by accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence usually must be repeated several times. Therefore, an iterative, interactive, and unified velocity analysis tool is highly desirable. An interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes produce the final stack with a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique of the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted wave, but it has two improvements: no interpolation error and very fast computing time. With this technique, the mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. The program references the Geobit utility libraries and can be installed in a Geobit pre-installed environment. It runs in the X-Window/Motif environment, and its menu is designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for producing high-quality seismic sections.
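
Among the steps listed above, NMO correction has a simple closed form: each trace sample at offset x is moved from the hyperbolic traveltime sqrt(t0^2 + x^2/v^2) back to its zero-offset time t0. The sketch below applies that correction to a synthetic CMP gather; the sample interval, offsets, constant velocity, and toy event are illustrative assumptions, not Geobit/xva code.

```python
# Minimal sketch of normal moveout (NMO) correction for one CMP gather.
import numpy as np

def nmo_correct(gather, offsets, dt, velocity):
    """gather: (n_samples, n_traces); offsets in m; dt in s; velocity in m/s."""
    n_samples, n_traces = gather.shape
    t0 = np.arange(n_samples) * dt                       # zero-offset times
    corrected = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        t_x = np.sqrt(t0**2 + (x / velocity) ** 2)       # hyperbolic moveout time
        # Sample the trace at t_x and place the value at t0 (linear interpolation)
        corrected[:, j] = np.interp(t_x, t0, gather[:, j], left=0.0, right=0.0)
    return corrected

# Toy usage: a synthetic gather with a single hyperbolic event at t0 = 1.0 s
dt, v, n_samples, n_traces = 0.004, 2000.0, 500, 24
offsets = np.linspace(100, 2400, n_traces)
gather = np.zeros((n_samples, n_traces))
t_event = np.sqrt(1.0**2 + (offsets / v) ** 2)
gather[(t_event / dt).astype(int), np.arange(n_traces)] = 1.0
flat = nmo_correct(gather, offsets, dt, v)                # event flattens near 1.0 s
```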

Design and Implementation of an Execution-Provenance Based Simulation Data Management Framework for Computational Science Engineering Simulation Platform (계산과학공학 플랫폼을 위한 실행-이력 기반의 시뮬레이션 데이터 관리 프레임워크 설계 및 구현)

  • Ma, Jin; Lee, Sik; Cho, Kum-won; Suh, Young-kyoon
    • Journal of Internet Computing and Services / v.19 no.1 / pp.77-86 / 2018
  • For the past few years, KISTI has operated an online simulation execution platform, called EDISON, that allows users to conduct simulations for various scientific applications supplied by diverse computational science and engineering disciplines. Typically, these simulations involve large-scale computation and accordingly produce a huge volume of output data. One critical issue in running such simulations on an online platform stems from the fact that many users simultaneously submit simulation requests (or jobs) with the same (or almost unchanged) input parameters or files, placing a significant burden on the platform. In other words, identical computing jobs lead to duplicate consumption of computing and storage resources at an undesirably fast pace. To avoid excessive resource usage by such identical simulation requests, in this paper we introduce a novel framework, called IceSheet, that efficiently manages simulation data based on execution metadata, that is, provenance. The IceSheet framework captures and stores the provenance associated with each conducted simulation. The collected provenance records are used not only to detect duplicate simulation requests but also to search existing simulation results via an open-source search engine, ElasticSearch. In particular, this paper elaborates on the core components of the IceSheet framework that support search and reuse of the stored simulation results. We implemented the proposed framework as a prototype using the search engine in conjunction with the online simulation execution platform, and evaluated it on real simulation execution-provenance records collected on the platform. Once the prototyped IceSheet framework fully functions with the platform, users can quickly search for past parameter values entered into the desired simulation software and, if available, receive existing results for the same input parameter values. Therefore, we expect the proposed framework to eliminate duplicate resource consumption and significantly reduce execution time for requests identical to previously executed simulations.
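
As a rough illustration of the duplicate-detection idea only (not the IceSheet implementation, which indexes provenance in ElasticSearch), the sketch below canonicalizes a request's solver name and input parameters, hashes them, and reuses a stored result when the same hash has already been seen; the record structure and field names are assumptions.

```python
# Minimal sketch of duplicate-request detection keyed on execution provenance.
import hashlib
import json

_provenance_store = {}   # hash -> stored simulation result (stand-in for the search engine)

def request_key(solver, params):
    canonical = json.dumps({"solver": solver, "params": params}, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def submit(solver, params, run_simulation):
    key = request_key(solver, params)
    if key in _provenance_store:                  # duplicate request: reuse the result
        return _provenance_store[key], True
    result = run_simulation(solver, params)       # otherwise actually run the job
    _provenance_store[key] = result
    return result, False

# Toy usage: the second, identical submission is served from stored provenance
fake_run = lambda solver, params: {"energy": sum(params.values())}
print(submit("cfd_solver", {"mach": 0.8, "aoa": 2.0}, fake_run))   # (..., False)
print(submit("cfd_solver", {"aoa": 2.0, "mach": 0.8}, fake_run))   # (..., True)
```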