• Title/Summary/Keyword: high dimensional time series


Comparison of the Wind Speed from an Atmospheric Pressure Map (Na Wind) and Satellite Scatterometer-observed Wind Speed (NSCAT) over the East (Japan) Sea

  • Park, Kyung-Ae;Kim, Kyung-Ryul;Kim, Kuh;Chung, Jong-Yul;Cornillon, Peter C.
    • Journal of the Korean Society of Oceanography / v.38 no.4 / pp.173-184 / 2003
  • Major differences between wind speeds from atmospheric pressure maps (Na wind) and near-surface wind speeds derived from satellite scatterometer (NSCAT) observations over the East (Japan) Sea have been examined. The root-mean-square errors of Na wind and NSCAT wind speeds collocated with Japanese Meteorological Agency (JMA) buoy winds are about $3.84\;ms^{-1}$ and $1.53\;ms^{-1}$, respectively. Time series of NSCAT wind speeds showed a high coherency of 0.92 with the real buoy measurements and contained higher spectral energy at low frequencies (periods longer than 3 days) than the Na wind. The magnitudes of monthly Na winds are lower than NSCAT winds by up to 45%, particularly in September 1996. The spatial structures of the two are mostly coherent on basin-wide large scales; however, significant differences and energy loss are found on spatial scales of less than 100 km. This was evidenced by the temporal EOFs (Empirical Orthogonal Functions) of the two wind speed data sets and by their two-dimensional spectra. Since the Na wind was based on the atmospheric pressures on the weather map, it overlooked small-scale features of less than 100 km. The center of the cold-air outbreak through Vladivostok, as expressed by the Na wind in January 1997, was shifted towards the North Korean coast compared with that of the NSCAT wind, whereas the NSCAT winds revealed both its spatial distribution and its temporal evolution.
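The temporal EOF comparison described above rests on a standard decomposition: remove the temporal mean from a time-by-space wind field and take its SVD, so that the right singular vectors give spatial EOF patterns and the scaled left singular vectors give principal-component time series. A minimal sketch on a synthetic field (the grid size, amplitudes, and noise level are assumptions, not the paper's data):

```python
import numpy as np

# Synthetic wind-speed field: 120 time steps on a 10x8 spatial grid
# (purely illustrative; the actual Na/NSCAT grids differ).
rng = np.random.default_rng(0)
t = np.linspace(0, 12, 120)
pattern = np.outer(np.sin(np.linspace(0, np.pi, 10)),
                   np.cos(np.linspace(0, np.pi, 8)))          # one spatial mode
field = 8.0 + 2.0 * np.sin(2 * np.pi * t)[:, None, None] * pattern \
        + 0.3 * rng.standard_normal((120, 10, 8))

X = field.reshape(120, -1)                  # time x space
anom = X - X.mean(axis=0)                   # remove the temporal mean

# SVD: rows of Vt are spatial EOFs, U*S are the temporal PCs.
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
explained = S**2 / np.sum(S**2)             # fraction of variance per mode

pc1 = U[:, 0] * S[0]                        # leading principal component
eof1 = Vt[0].reshape(10, 8)                 # leading spatial pattern
```

Comparing the leading EOFs and their variance fractions for two fields is what reveals where one product loses small-scale energy.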

Performance of steel beams at elevated temperatures under the effect of axial restraints

  • Liu, T.C.H.;Davies, J.M.
    • Steel and Composite Structures / v.1 no.4 / pp.427-440 / 2001
  • The growing use of unprotected or partially protected steelwork in buildings has caused a lively debate regarding the safety of this form of construction. A good deal of recent research has indicated that steel members have a substantial inherent ability to resist fire, so that additional fire protection can be either reduced or eliminated completely. A performance-based philosophy also extends the study to the effect of structural continuity and the performance of the structural assembly as a whole. As part of the structural system, thermal expansion during the heating phase, or contraction during the cooling phase, in most beams is likely to be restrained by adjacent parts of the whole system or sub-frame assembly due to compartmentation. This has not been properly addressed before. This paper describes an experimental programme in which unprotected steel beams were tested under load while restrained between two columns and by additional horizontal restraints, with particular attention to the effect of catenary action in the beams when subjected to large deflections at very high temperature. This paper also presents three-dimensional mathematical modelling, based on the finite element method, of the series of fire tests on the part-frame. The complete analysis starts with an evaluation of the temperature distribution in the structure at various time levels. It is followed by a detailed 3-D finite element analysis of the structural response to the changing temperature distribution. The principal part of the analysis makes use of an existing finite element package, FEAST. The effect of the columns being fire-protected and the beam being axially restrained has been modelled adequately in terms of their thermal and structural responses.
The consequence of the beam being restrained is that the axial force in the restrained beam starts as a compression, which increases gradually up to the point when the material has deteriorated to such a level that the beam deflects excessively. The axial compressive force then drops rapidly and changes into a tensile force, leading to a catenary action that prevents the beam deflection from running away. Design engineers will benefit from consideration of this catenary action.

Statistical Study and Prediction of Variability of Erythemal Ultraviolet Irradiance Solar Values in Valencia, Spain

  • Gurrea, Gonzalo;Blanca-Gimenez, Vicente;Perez, Vicente;Serrano, Maria-Antonia;Moreno, Juan-Carlos
    • Asia-Pacific Journal of Atmospheric Sciences / v.54 no.4 / pp.599-610 / 2018
  • The goal of this study was to statistically analyse the variability of global irradiance and ultraviolet erythemal (UVER) irradiance and their interrelationships with global and UVER irradiance, global clearness indices, and ozone. A prediction of short-term UVER solar irradiance values was also obtained. Extreme values of UVER irradiance were included in the data set, as well as a time series of ultraviolet irradiance variability (UIV). The study period was from 2005 to 2014, and approximately 250,000 readings were taken at 5-min intervals. The effect of the clearness indices on global irradiance variability (GIV) and UIV was also recorded, and bi-dimensional distributions were used to gather information on the two measured variables. With regard to daily GIV and UIV, it is also shown that for global clearness index ($k_t$) values lower than 0.6, both global and UVER irradiance had greater variability, and that UIV on cloud-free days ($k_t$ higher than 0.65) exceeds GIV. To study the dependence between UIV and GIV, the $\chi^2$ statistical method was used. It can be concluded that there is a 95% probability of a clear dependency between the variabilities. A connection between high $k_t$ (corresponding to cloudless days) and low variabilities was found in the analysis of bi-dimensional distributions. Extreme values of UVER irradiance were also analyzed, and it was possible to calculate probable future values of UVER irradiance by extrapolating the values of the adjustment curve obtained from the Gumbel distribution.
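The $\chi^2$ dependence test mentioned above can be sketched directly: build a contingency table of variability classes, form the expected counts under independence, and compare the statistic against the 95% critical value. The counts below are hypothetical; the paper does not report its table:

```python
import numpy as np

# Hypothetical 2x2 contingency table of daily variability classes
# (rows: low/high GIV, columns: low/high UIV) -- illustrative counts only.
observed = np.array([[820.0, 310.0],
                     [290.0, 780.0]])

row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()        # counts expected under independence

chi2 = np.sum((observed - expected) ** 2 / expected)

# 95% critical value of the chi-square distribution with
# (2-1)*(2-1) = 1 degree of freedom.
CRIT_95 = 3.841
dependent = chi2 > CRIT_95                   # True -> reject independence at 95%
```

Rejecting independence at the 95% level is exactly the "95% probability of a clear dependency" conclusion in the abstract.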

Robust estimation of sparse vector autoregressive models (희박 벡터 자기 회귀 모형의 로버스트 추정)

  • Kim, Dongyeong;Baek, Changryong
    • The Korean Journal of Applied Statistics / v.35 no.5 / pp.631-644 / 2022
  • This paper considers robust estimation of the sparse vector autoregressive model (sVAR), which is useful in high-dimensional time series analysis. First, we generalize the result of Xu et al. (2008) to show that the adaptive lasso is indeed robust in sVAR as well. However, the adaptive lasso in sVAR performs poorly as the number and size of outliers increase. Therefore, we propose new robust estimation methods for sVAR based on least absolute deviation (LAD) and Huber estimation. Our simulation results show that the proposed methods provide more accurate estimates and, in turn, better forecasting performance when outliers exist. In addition, we applied the proposed methods to power usage data and confirmed that it contains non-negligible outliers and that robust estimation taking such outliers into account improves forecasting.
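One of the ideas above, a Huber loss combined with a lasso penalty, fitted equation by equation on lagged values, can be sketched with a plain proximal-gradient loop. This is an illustrative reimplementation, not the authors' code; for simplicity the outliers here are injected only into the responses (vertical outliers), while the paper also considers harder contamination schemes:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def huber_grad(r, delta=1.345):
    # Derivative of the Huber loss in the residual r (clipped beyond delta).
    return np.clip(r, -delta, delta)

def huber_lasso(X, y, lam=0.01, delta=1.345, n_iter=500):
    # Proximal gradient (ISTA) for mean_i huber(y_i - x_i'b) + lam*||b||_1.
    n, p = X.shape
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ b, delta) / n
        b = soft_threshold(b - step * grad, step * lam)
    return b

# Simulate a sparse VAR(1): y_t = B y_{t-1} + e_t with diagonal B.
rng = np.random.default_rng(1)
k, n = 5, 400
B_true = 0.5 * np.eye(k)
Y = np.zeros((n, k))
for t in range(1, n):
    Y[t] = Y[t - 1] @ B_true.T + 0.5 * rng.standard_normal(k)

X_lag, Y_resp = Y[:-1].copy(), Y[1:].copy()
# Contaminate 5% of the responses with additive outliers (vertical only).
idx = rng.choice(len(Y_resp), size=len(Y_resp) // 20, replace=False)
Y_resp[idx] += 8.0

# Fit each of the k equations separately; row j of B_hat is equation j.
B_hat = np.column_stack([huber_lasso(X_lag, Y_resp[:, j]) for j in range(k)]).T
```

The clipping in `huber_grad` is what caps the influence of the contaminated observations, while the soft-thresholding step enforces sparsity.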

Collapse failure mechanism of subway station under mainshock-aftershocks in the soft area

  • Zhen-Dong Cui;Wen-Xiang Yan;Su-Yang Wang
    • Geomechanics and Engineering / v.36 no.3 / pp.303-316 / 2024
  • Seismic records are composed of a mainshock and a series of aftershocks, which often cause incremental damage to underground structures and bring great challenges to post-disaster rescue and post-earthquake repair. In this paper, the repetition method was used to construct the mainshock-aftershock sequence used as the input ground motion for dynamic time-history analysis. Based on the Daikai station, a two-dimensional finite element model of the soil-station system was established to explore the failure process of the station under different seismic precautionary intensities, and the concept of incremental damage was introduced to quantitatively analyze the damage condition of the structure under the action of a mainshock and two aftershocks. An arc rubber bearing was proposed for shock absorption. With the arc rubber bearing, the traditional column-end connection was changed from a fixed connection to a hinged joint, and the ductility of the structure was significantly improved. The results show that the damage condition of the subway station is closely related to the magnitude of the mainshock. When the magnitude of the mainshock is low, the incremental damage to the structure caused by the subsequent aftershocks is small. When the magnitude of the mainshock is high, the subsequent aftershocks cause serious incremental damage to the structure and may even lead to the collapse of the station. The arc rubber bearing can reduce the damage to the station. These results can serve as a reference for the seismic design of subway stations under mainshock-aftershock sequences.
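The repetition method for building a mainshock-aftershock input motion can be sketched as concatenating the mainshock record with scaled copies of itself separated by quiet gaps. The scale factor, gap length, and toy record below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def repeated_sequence(mainshock, scales=(0.85, 0.85), gap_s=20.0, dt=0.01):
    """Build a mainshock-aftershock acceleration sequence by the repetition
    method: append scaled copies of the mainshock record, separated by
    zero-padded quiet gaps. Scales and gap length are assumptions here."""
    gap = np.zeros(int(gap_s / dt))
    parts = [np.asarray(mainshock, float)]
    for s in scales:
        parts += [gap, s * np.asarray(mainshock, float)]
    return np.concatenate(parts)

# Toy mainshock: 10 s of a decaying sine (a stand-in for a real record).
dt = 0.01
t = np.arange(0, 10, dt)
main = 0.3 * np.exp(-0.3 * t) * np.sin(2 * np.pi * 2.0 * t)

# Mainshock followed by two scaled "aftershocks" with 20 s quiet gaps.
seq = repeated_sequence(main, scales=(0.85, 0.85), gap_s=20.0, dt=dt)
```

The quiet gaps let the structure come to rest between events, so the incremental damage of each aftershock can be isolated in the time-history analysis.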

Evaluation of Accuracy of Modified Equivalent Linear Method (수정된 등가선형해석기법의 정확성 평가)

  • Jeong, Chang-Gyun;Kwak, Dong-Yeop;Park, Duhee;Kim, Kwangkyun
    • Journal of the Korean GEO-environmental Society / v.11 no.6 / pp.5-20 / 2010
  • One-dimensional equivalent linear site response analysis is widely used in practice due to its simplicity, its need for only a few input parameters, and its low computational cost. The main limitation of the procedure is that it is essentially a linear method: the time-dependent change in soil properties cannot be modeled, and constant values of shear modulus and damping are used throughout the duration of the analysis. Various forms of modified equivalent linear analysis have been developed to enhance the accuracy of the equivalent linear method by incorporating the dependence of the shear strain on the loading frequency. The methods are identical in that they use the shear strain Fourier spectrum as the backbone of the analysis, but they differ in how the strain Fourier spectrum is smoothed. This study used two domestically measured soil profiles to perform a series of nonlinear, equivalent linear, and modified equivalent linear site response analyses to verify the accuracy of two modified procedures. The results indicate that the modified equivalent linear analysis can greatly overestimate the amplification of the high-frequency components of the ground motion. The degree of overestimation depends on the characteristics of the input ground motion: use of a motion rich in high-frequency content can result in an unrealistic response.
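The equivalent linear core loop, on which the modified procedures build, can be sketched for a single toy layer: assume a modulus, compute the implied strain, take an effective strain (the customary 0.65 factor), update the modulus from a reduction curve, and repeat to convergence. The hyperbolic curve and all numbers are illustrative, not the study's soil profiles:

```python
import numpy as np

def equivalent_linear_toy(tau=10.0, G_max=60_000.0, gamma_ref=2e-4,
                          tol=1e-8, max_iter=100):
    """One-layer toy of the equivalent linear iteration: for an imposed
    shear stress tau (kPa), iterate the secant modulus G (kPa) against a
    hyperbolic modulus-reduction curve G/G_max = 1/(1 + gamma_eff/gamma_ref).
    All values are illustrative."""
    G = G_max
    for _ in range(max_iter):
        gamma = tau / G                      # strain implied by current G
        gamma_eff = 0.65 * gamma             # effective strain (0.65 factor)
        G_new = G_max / (1.0 + gamma_eff / gamma_ref)
        if abs(G_new - G) < tol * G_max:     # converged: strain-compatible G
            return G_new, tau / G_new
        G = G_new
    return G, tau / G

G_eq, gamma_eq = equivalent_linear_toy()
```

For this hyperbolic curve the fixed point is available in closed form ($G = G_{max} - 0.65\,\tau/\gamma_{ref}$), which makes the iteration easy to check; real analyses iterate damping the same way and compute the strain from a wave-propagation solution rather than a single stress.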

Change Detection for High-resolution Satellite Images Using Transfer Learning and Deep Learning Network (전이학습과 딥러닝 네트워크를 활용한 고해상도 위성영상의 변화탐지)

  • Song, Ah Ram;Choi, Jae Wan;Kim, Yong Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.199-208 / 2019
  • As the number of available satellites increases and technology advances, image information outputs are becoming increasingly diverse and a large amount of data is accumulating. In this study, we propose a change detection method for high-resolution satellite images that uses transfer learning and a deep learning network to overcome the limitation caused by insufficient training data through the use of pre-trained information. The deep learning network used in this study comprises convolutional layers to extract the spatial and spectral information and convolutional long short-term memory layers to analyze the time series information. To use the learned information, the two initial convolutional layers of the change detection network are designed to take as initial values the weights learned from 40,000 patches of the ISPRS (International Society for Photogrammetry and Remote Sensing) dataset. In addition, 2D (two-dimensional) and 3D (three-dimensional) kernels were used to find the optimal structure for the high-resolution satellite images. The experimental results for the KOMPSAT-3A (KOrean Multi-Purpose SATellite-3A) satellite images show that this change detection method can effectively extract changed/unchanged pixels but is less sensitive to changes due to shadow and relief displacements. In addition, the change detection accuracy for two sites was improved by using 3D kernels, because a 3D kernel can consider not only the spatial information but also the spectral information. This study indicates that changes in high-resolution satellite images can be effectively detected using the constructed image information and deep learning network. In future work, a pre-trained change detection network will be applied to newly obtained images to extend the scope of the application.

Effects of Spatio-temporal Features of Dynamic Hand Gestures on Learning Accuracy in 3D-CNN (3D-CNN에서 동적 손 제스처의 시공간적 특징이 학습 정확성에 미치는 영향)

  • Yeongjee Chung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.145-151 / 2023
  • 3D-CNN is one of the deep learning techniques for learning time series data. Such three-dimensional learning can generate many parameters, so it requires high-performance computing and can have a large impact on the learning rate. When learning dynamic hand gestures in the spatiotemporal domain, improving the efficiency of dynamic hand gesture learning with a 3D-CNN requires finding the optimal conditions for the input video data by analyzing the learning accuracy under spatiotemporal changes of the input video data, without structural change to the 3D-CNN model. First, the time ratio between dynamic hand gesture actions is adjusted by setting the learning interval of the image frames in the dynamic hand gesture video data. Second, through 2D cross-correlation analysis between classes, the similarity between image frames of the input video data is measured and normalized to obtain an average value between frames, and the learning accuracy is analyzed. Based on this analysis, this work proposes two methods to effectively select input video data for 3D-CNN deep learning of dynamic hand gestures. Experimental results showed that the learning interval of the image data frames and the similarity of image frames between classes can affect the accuracy of the learning model.
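The inter-frame similarity measure described above can be sketched as the zero-lag normalized 2D cross-correlation, i.e. the Pearson correlation of mean-removed pixel intensities. The frames below are synthetic stand-ins for real gesture frames:

```python
import numpy as np

def frame_similarity(a, b):
    """Zero-lag normalized 2D cross-correlation of two equal-size frames:
    subtract each frame's mean, then take the cosine of the centered pixel
    vectors (equivalently, the Pearson correlation over pixels)."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

rng = np.random.default_rng(0)
f1 = rng.random((64, 64))
f2 = f1 + 0.05 * rng.standard_normal((64, 64))   # slightly perturbed frame
f3 = rng.random((64, 64))                        # unrelated frame

s_same = frame_similarity(f1, f1)
s_near = frame_similarity(f1, f2)
s_far = frame_similarity(f1, f3)
```

Averaging such scores over frame pairs within and between gesture classes gives the normalized similarity statistic the study relates to learning accuracy.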

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have tended to give high returns to investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, the credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source regarding such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper used a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory and has thus far shown good performance, especially in generalization capacity for classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, the hyperplane giving the maximum separation between classes; support vectors are the points closest to the maximum-margin hyperplane. If the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, that is, the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the problem of estimating credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one binary classification approach and the two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, obtaining the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when it is difficult to determine the exact class in the actual market. We therefore also present accuracy results within one-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than the binary classification problem, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, the generalization, and the sample size for multi-class problems.
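The one-against-one scheme described above trains one binary classifier per class pair and predicts by majority vote. The sketch below shows that structure with a minimal linear SVM trained by Pegasos-style subgradient descent standing in for the paper's Gaussian-kernel SVM, on synthetic three-class data; all names and numbers are illustrative:

```python
import numpy as np
from itertools import combinations

def train_linear_svm(X, y, lam=0.01, epochs=30, seed=0):
    """Pegasos-style subgradient descent for a linear soft-margin SVM;
    y must be in {-1, +1}. A simple stand-in for a kernel SVM solver."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])     # absorb the bias term
    w, t = np.zeros(Xb.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (Xb[i] @ w) < 1:            # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:
                w = (1 - eta * lam) * w
    return w

def ovo_fit_predict(X, y, X_new):
    """One-against-one: one binary SVM per class pair, majority vote."""
    classes = np.unique(y)
    votes = np.zeros((len(X_new), len(classes)), int)
    Xn = np.hstack([X_new, np.ones((len(X_new), 1))])
    for a, b in combinations(range(len(classes)), 2):
        mask = (y == classes[a]) | (y == classes[b])
        yy = np.where(y[mask] == classes[a], 1, -1)
        w = train_linear_svm(X[mask], yy)
        pred = Xn @ w
        votes[pred >= 0, a] += 1
        votes[pred < 0, b] += 1
    return classes[votes.argmax(axis=1)]

# Three well-separated synthetic "rating" clusters (illustrative only).
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0]])
X = np.vstack([c + 0.5 * rng.standard_normal((60, 2)) for c in centers])
y = np.repeat([0, 1, 2], 60)
pred = ovo_fit_predict(X, y, X)
accuracy = np.mean(pred == y)
```

With $k$ classes the scheme trains $k(k-1)/2$ binary machines, which is why the one-against-one approach is compared against the single-optimization all-together formulations of Weston-Watkins and Crammer-Singer.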

A Study on Development of a GIS based Post-processing System of the EFDC Model for Supporting Water Quality Management (수질관리 지원을 위한 GIS기반의 EFDC 모델 후처리 시스템 개발 연구)

  • Lee, Geon Hwi;Kim, Kye Hyun;Park, Yong Gil;Lee, Sung Joo
    • Spatial Information Research / v.22 no.4 / pp.39-47 / 2014
  • The Yeongsan River estuary has a serious water quality problem due to water stagnation, and it is imperative to predict changes in water quality in order to mitigate water pollution. The EFDC (Environmental Fluid Dynamics Code) model has mainly been utilized to predict the changes of water quality for the estuary. EFDC modeling normally produces a large volume of output. To check the spatial distribution of the modeling results, post-processing to convert the output is a prerequisite, and the main post-processing program is EFDC_Explorer. However, EFDC_Explorer only shows the spatial distribution of the time series and does not support overlaying other thematic maps, which makes combined analysis with various GIS data and high-dimensional analysis impossible. Therefore, this study aims to develop a post-processing system for EFDC output so that it can be used as GIS layers. To achieve this, four modules were developed: a module for editing the main input files, a module for converting the binary format into ASCII format, a module for converting it into a layer format usable in a GIS-based environment, and a module for efficiently visualizing the reconfigured model results. Using the developed system, the result file can be automatically converted into GIS-based layers and utilized for water quality management.
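The binary-to-ASCII conversion step can be sketched as reading a float32 dump and writing an ESRI ASCII grid that GIS packages load as a layer. The assumed input layout (plain row-major float32) and the header values are illustrative; the real EFDC binary format differs:

```python
import io
import numpy as np

def binary_to_ascii_grid(raw_bytes, nrows, ncols, xll=0.0, yll=0.0,
                         cellsize=100.0, nodata=-9999.0):
    """Convert a plain row-major little-endian float32 dump (an assumed
    layout, not the actual EFDC binary format) into an ESRI ASCII grid
    string that GIS packages can load as a raster layer."""
    grid = np.frombuffer(raw_bytes, dtype="<f4").reshape(nrows, ncols)
    out = io.StringIO()
    out.write(f"ncols {ncols}\n")
    out.write(f"nrows {nrows}\n")
    out.write(f"xllcorner {xll}\n")
    out.write(f"yllcorner {yll}\n")
    out.write(f"cellsize {cellsize}\n")
    out.write(f"NODATA_value {nodata}\n")
    for row in grid:
        out.write(" ".join(f"{v:.4f}" for v in row) + "\n")
    return out.getvalue()

# Round-trip demo on a small synthetic concentration field.
field = np.arange(12, dtype="<f4").reshape(3, 4)
ascii_grid = binary_to_ascii_grid(field.tobytes(), nrows=3, ncols=4)
```

Each time step of the model output would be written to its own grid file, which is what lets the converted results be overlaid with other thematic layers in a GIS.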