• Title/Summary/Keyword: Time-series Analysis


A Brief Efficiency Measurement Way for the Korean Container Terminals Using Stochastic Frontier Analysis (확률프론티어분석을 통한 국내컨테이너 터미널의 효율성 측정방법 소고)

  • Park, Ro-Kyung
    • Journal of Korea Port Economic Association / v.26 no.4 / pp.63-87 / 2010
  • The purpose of this paper is to measure the efficiency of Korean container terminals using SFA (Stochastic Frontier Analysis). Inputs (number of employees, quay length, container terminal area, and number of gantry cranes) and one output (TEU) are used for three years (2002, 2003, and 2004) for eight Korean container terminals, applying both SFA and DEA models. The main empirical results are as follows. First, the null hypothesis that technical inefficiency does not exist is rejected, and in the translog model the estimate is significant. Second, the time-series models show significant results. Third, the average technical efficiency of Korean container terminals is 73.49% in the Cobb-Douglas model and 79.04% in the translog model. Fourth, to enhance technical efficiency, Korean container terminals should increase their TEU handling volume. Fifth, the SFA and DEA models show a high Spearman rank correlation coefficient (84.45%). The main policy implication of these findings is that the managers of port investment and management at the Ministry of Land, Transport and Maritime Affairs in Korea should introduce SFA alongside DEA models for measuring the efficiency of Korean ports and terminals.
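
As a rough illustration of the estimation step described above, the sketch below fits a Cobb-Douglas stochastic frontier with a normal/half-normal composed error by maximum likelihood and recovers technical efficiency scores via the Jondrow et al. decomposition. The data are synthetic stand-ins for the paper's inputs (employees, quay length, terminal area, gantry cranes) and output (TEU); this is not the authors' code or data.

```python
# Sketch: Cobb-Douglas stochastic frontier (normal/half-normal), estimated by MLE.
# Data are synthetic stand-ins for the paper's inputs and output, not the study's data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 24                                     # 8 terminals x 3 years, as in the study
X = rng.lognormal(mean=1.0, sigma=0.5, size=(n, 4))   # hypothetical inputs
lnX = np.log(X)
beta_true = np.array([0.3, 0.2, 0.3, 0.1])
v = rng.normal(0, 0.1, n)                  # symmetric noise
u = np.abs(rng.normal(0, 0.2, n))          # one-sided inefficiency (half-normal)
lny = 1.0 + lnX @ beta_true + v - u        # log output (ln TEU)

Z = np.column_stack([np.ones(n), lnX])     # regressors incl. intercept

def negloglik(theta):
    """Negative log-likelihood of the normal/half-normal production frontier."""
    beta, ln_sv, ln_su = theta[:5], theta[5], theta[6]
    sv, su = np.exp(ln_sv), np.exp(ln_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = lny - Z @ beta                   # composed error v - u
    ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

theta0 = np.concatenate([np.linalg.lstsq(Z, lny, rcond=None)[0], [-2.0, -1.5]])
res = minimize(negloglik, theta0, method="BFGS")
beta, sv, su = res.x[:5], np.exp(res.x[5]), np.exp(res.x[6])

# Jondrow et al. (1982) point estimate of inefficiency, then TE = exp(-E[u|eps]).
sigma2 = sv**2 + su**2
eps = lny - Z @ beta
mu_star = -eps * su**2 / sigma2
s_star = np.sqrt(su**2 * sv**2 / sigma2)
E_u = mu_star + s_star * norm.pdf(mu_star / s_star) / norm.cdf(mu_star / s_star)
TE = np.exp(-E_u)
print("mean technical efficiency:", TE.mean())
```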

Application of the Poisson Cluster Rainfall Generation Model to the Urban Flood Analysis (포아송 클러스터 강우 생성 모형을 이용한 도시 홍수 해석)

  • Park, Hyunjin;Yang, Jungsuk;Han, Jaemoon;Kim, Dongkyun
    • Journal of Korea Water Resources Association / v.48 no.9 / pp.729-741 / 2015
  • This study examined the applicability of the MBLRP (Modified Bartlett-Lewis Rectangular Pulse) model, a type of Poisson cluster rainfall generation model, for urban flood simulation. An XP-SWMM model, a two-dimensional pipe network and surface flood simulation program, was constructed for the Namgajwa area of the Hongjecheon basin, and flood discharge and flooded area were computed using 200 years of synthetic rainfall time series generated by the MBLRP model as input. The flood results driven by the synthetic rainfall were compared with the corresponding values based on design rainfall. The results showed that the flooded area computed with the MBLRP model was somewhat smaller than the corresponding design-based values. The degree of underestimation ranged from 8% (5-year return period) to 34% (200-year return period) and increased as the return period increased. This study is meaningful in that it proposes a methodology that enables the quantification of flood-related uncertainty through Monte Carlo analysis of urban flood simulation, along with its applicability and limitations.
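
For orientation, the sketch below generates an hourly rainfall series with a simplified (original, not modified) Bartlett-Lewis rectangular pulse scheme: Poisson storm arrivals, a Poisson cluster of rain cells per storm, and rectangular pulses of exponential duration and intensity. The parameter values are illustrative and are not the calibrated MBLRP parameters of the study.

```python
# Sketch of a simplified Bartlett-Lewis rectangular pulse rainfall generator.
# Parameter values are illustrative only, not the study's MBLRP parameters.
import numpy as np

def bartlett_lewis(hours, lam=0.02, beta=0.3, gamma=0.1, eta=1.0, mu_x=2.0, seed=0):
    """Return an hourly rainfall series (mm/h) of length `hours`."""
    rng = np.random.default_rng(seed)
    rain = np.zeros(hours)
    t = rng.exponential(1.0 / lam)                     # first storm origin
    while t < hours:
        storm_end = t + rng.exponential(1.0 / gamma)   # cell-generation window
        # the first cell starts at the storm origin; further cells arrive at
        # rate beta until the window closes
        cell_starts = [t]
        s = t + rng.exponential(1.0 / beta)
        while s < storm_end:
            cell_starts.append(s)
            s += rng.exponential(1.0 / beta)
        for c in cell_starts:
            dur = rng.exponential(1.0 / eta)           # cell duration (h)
            inten = rng.exponential(mu_x)              # cell intensity (mm/h)
            lo, hi = int(c), min(hours, int(np.ceil(c + dur)))
            rain[lo:hi] += inten                       # rectangular pulse (partial hours counted as full, for brevity)
        t += rng.exponential(1.0 / lam)                # next storm origin
    return rain

series = bartlett_lewis(hours=24 * 365)                # one synthetic year
print("annual total (mm):", series.sum())
```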

Time Series Analysis of Area of Deltaic Barrier Island in Nakdong River Using Landsat Satellite Image (Landsat 위성영상을 활용한 낙동강 삼각주 연안사주의 면적 시계열 분석)

  • Lee, Seulki;Yang, Mihee;Lee, Changwook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.5 / pp.457-469 / 2016
  • The Nakdong River estuary has been affected by artificial interference such as the construction of a port, an industrial complex, and an estuary barrage. These changes in the Nakdong River led to environmental changes and affected the behavior of its barrier islands, so observing changes in the Nakdong River estuary is very important. In this paper, topographic change around the Nakdong River barrage was observed based on Landsat TM and ETM+ images from 1984 to 2015. In addition, this study carried out a comparative analysis of the change in sandy sediment area according to tide level. These results allow the height and volume of sandy sediment accumulated on the lower sand dune to be estimated, and they are expected to serve as a basis for predicting the changing topography of the sand dune. The average change in area for regions 1, 2, and 3 was calculated as 3,015 m2, 167,550 m2, and 14,596 m2, respectively. These results are expected to be very useful for continuous observation of sediment changes in the Nakdong River.
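
A minimal sketch of the area-extraction step is given below, assuming each Landsat scene is available as green and NIR reflectance arrays for the region of interest. It classifies land versus water with a simple NDWI threshold and converts the land-pixel count to area; the water index, the threshold, and the synthetic arrays are assumptions, since the abstract does not state the exact classification rule used.

```python
# Sketch: area time series of a barrier island from a stack of Landsat scenes.
# The NDWI threshold (0.0) and the 30 m pixel size are assumptions for illustration.
import numpy as np

PIXEL_AREA_M2 = 30 * 30        # Landsat TM/ETM+ ground sample distance

def island_area(green, nir, ndwi_threshold=0.0):
    """Classify land vs. water with NDWI and return land area in m^2."""
    ndwi = (green - nir) / (green + nir + 1e-9)     # McFeeters NDWI
    land = ndwi < ndwi_threshold                    # low NDWI -> land/sand
    return land.sum() * PIXEL_AREA_M2

# Hypothetical stack: (date, green band, NIR band) triples for one region of interest
rng = np.random.default_rng(1)
scenes = [(f"19{84 + i}-06-01",
           rng.uniform(0.05, 0.3, (200, 200)),
           rng.uniform(0.05, 0.4, (200, 200))) for i in range(5)]

for date, g, n in scenes:
    print(date, f"{island_area(g, n):,.0f} m2")
```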

Design of Data-centroid Radial Basis Function Neural Network with Extended Polynomial Type and Its Optimization (데이터 중심 다항식 확장형 RBF 신경회로망의 설계 및 최적화)

  • Oh, Sung-Kwun;Kim, Young-Hoon;Park, Ho-Sung;Kim, Jeong-Tae
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.3 / pp.639-647 / 2011
  • In this paper, we introduce a design methodology for data-centroid Radial Basis Function (RBF) neural networks with extended polynomial functions. The two underlying design mechanisms of such networks are the K-means clustering method and Particle Swarm Optimization (PSO): K-means clustering is used for efficient processing of the data, and the optimization of the model is carried out with PSO. As the connection weights of the RBF neural network, four types of polynomials can be used: simplified, linear, quadratic, and modified quadratic. Using K-means clustering, the center values of the Gaussian activation functions are selected. The PSO-based RBF neural network results in a structurally optimized network and offers a higher level of flexibility than conventional RBF neural networks. The PSO-based design procedure, applied at each node of the network, leads to the selection of preferred parameters with specific local characteristics (such as the number of input variables, a specific set of input variables, and the distribution constant of the activation function) available within the RBF neural network. To evaluate the performance of the proposed data-centroid RBF neural network with extended polynomial functions, the model is experimented with nonlinear process data (two-dimensional synthetic data and Mackey-Glass time series data) and machine learning datasets (NOx emission process data from a gas turbine plant, Automobile Miles per Gallon (MPG) data, and Boston housing data). For the characteristic analysis of the given nonlinear datasets as well as the efficient construction and evaluation of the dynamic network model, the data are partitioned in two ways: Division I (training and testing datasets) and Division II (training, validation, and testing datasets). A comparative analysis shows that the proposed RBF neural network produces models with higher accuracy and better predictive capability than other intelligent models presented previously.
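
The sketch below illustrates the data-centroid idea in its simplest form, under the assumption of a fixed spread and the "linear" polynomial weight type: K-means picks the Gaussian centers and the extended-polynomial connection weights are solved by least squares. The PSO-driven structural optimization described in the abstract is not reproduced, and the data are synthetic.

```python
# Sketch: a data-centroid RBF network -- K-means selects the Gaussian centers and
# the "linear" extended-polynomial connection weights are fit by least squares.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (300, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.normal(size=300)   # toy target

def rbf_activations(X, centers, spread):
    """Gaussian activation of every center for every sample."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * spread ** 2))

k, spread = 8, 0.8
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = rbf_activations(X, centers, spread)              # shape (n, k)

# Linear polynomial weights: each hidden unit j contributes a_j + b_j^T x,
# gated by its activation Phi[:, j]; all coefficients are solved jointly.
n, d = X.shape
design = np.hstack([Phi, (Phi[:, :, None] * X[:, None, :]).reshape(n, k * d)])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
y_hat = design @ coef
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```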

Visualization of Korean Speech Based on the Distance of Acoustic Features (음성특징의 거리에 기반한 한국어 발음의 시각화)

  • Pok, Gou-Chol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.3 / pp.197-205 / 2020
  • Korean has the characteristic that the pronunciation of phoneme units such as vowels and consonants is fixed and the pronunciation associated with a notation does not change, so foreign learners can approach the language rather easily. However, when one pronounces words, phrases, or sentences, the pronunciation changes with wide variation and complexity at the boundaries of syllables, and the association between notation and pronunciation no longer holds. Consequently, it is very difficult for foreign learners to learn standard Korean pronunciation. Despite these difficulties, systematic analysis of pronunciation errors for Korean words is believed to be possible, based on the observation that, unlike other languages including English, the relationship between Korean notation and pronunciation can be described as a set of firm rules without exceptions. In this paper, we propose a visualization framework which shows the differences between standard pronunciations and erroneous ones as quantitative measures on the computer screen. Previous research only shows color representations and 3D graphics of speech properties, or an animated view of the changing shapes of the lips and mouth cavity; moreover, the features used in the analysis are only point data such as the average over a speech range. In this study, we propose a method which can directly use the time-series data instead of summarized or distorted data. This is realized with a deep-learning-based technique which combines a self-organizing map, a variational autoencoder model, and a Markov model, and we achieved a superior performance enhancement compared to the method using point-based data.
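
As a loose illustration of the trajectory-based comparison, the sketch below trains a small self-organizing map on frame-level features and measures the grid distance between the unit trajectories of a reference and a learner utterance. The MFCC-like features are synthetic, and the variational autoencoder and Markov-model stages of the paper are not reproduced.

```python
# Sketch: compare two pronunciations as trajectories on a self-organizing map.
# Frame features stand in for MFCCs; the training schedule is deliberately minimal.
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 0.5
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        nh = np.exp(-dist2 / (2 * sigma ** 2))[..., None]     # neighbourhood weight
        weights += lr * nh * (x - weights)
    return weights

def bmu_path(frames, weights):
    h, w, _ = weights.shape
    return [np.unravel_index(np.argmin(((weights - f) ** 2).sum(-1)), (h, w))
            for f in frames]

rng = np.random.default_rng(1)
reference = rng.normal(size=(120, 13))          # hypothetical MFCC frames (standard)
learner = reference + rng.normal(0, 0.6, reference.shape)   # perturbed pronunciation

som = train_som(np.vstack([reference, learner]))
ref_path, lrn_path = bmu_path(reference, som), bmu_path(learner, som)
# frame-wise grid distance between the two trajectories gives a visualizable error score
score = np.mean([np.hypot(a[0] - b[0], a[1] - b[1]) for a, b in zip(ref_path, lrn_path)])
print("mean trajectory distance:", score)
```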

An Adaptive Grid-based Clustering Algorithm over Multi-dimensional Data Streams (적응적 격자기반 다차원 데이터 스트림 클러스터링 방법)

  • Park, Nam-Hun;Lee, Won-Suk
    • The KIPS Transactions:PartD / v.14D no.7 / pp.733-742 / 2007
  • A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. For this reason, memory usage for data stream analysis should be confined to a finite bound even though new data elements are continuously generated. To satisfy this requirement, data stream processing sacrifices some correctness of its analysis result by allowing bounded errors. Old distribution statistics are diminished by a predefined decay rate as time goes by, so that the effect of obsolete information on the current clustering result can be eliminated without physically maintaining any data element. This paper proposes a grid-based clustering algorithm for a data stream. Given a set of initial grid cells, the dense range of a grid cell is recursively partitioned into smaller cells, based on the distribution statistics of data elements, in a top-down manner until the smallest cell, called a unit cell, is identified. Since only the distribution statistics of data elements are maintained in the dynamically partitioned grid cells, the clusters of a data stream can be found effectively without maintaining the data elements themselves. Furthermore, the memory usage of the proposed algorithm is adjusted adaptively to the size of the confined memory space by flexibly resizing the unit cell. As a result, the confined memory space can be fully utilized to generate the clustering result as accurately as possible. The proposed algorithm is analyzed through a series of experiments to identify its various characteristics.
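
A toy version of the two ingredients named above, decayed distribution statistics per grid cell and recursive splitting of dense ranges down to a unit cell, is sketched below for a one-dimensional stream. The decay rate, density threshold, unit-cell width, and per-insertion decay are simplifications, not the paper's parameterization.

```python
# Sketch: decayed counts in grid cells over a 1-D data stream, with dense cells
# recursively split until a minimum ("unit") cell width is reached.
import numpy as np

DECAY = 0.999        # per-element decay applied to old statistics
DENSE = 50.0         # decayed count at which a cell is partitioned
UNIT = 0.01          # smallest allowed cell width (the "unit cell")

class Cell:
    def __init__(self, lo, hi):
        self.lo, self.hi, self.count, self.children = lo, hi, 0.0, None

    def insert(self, x):
        self.count = self.count * DECAY + 1.0          # old statistics diminished, new element added
        if self.children:                               # route to the finer level
            mid = (self.lo + self.hi) / 2
            (self.children[0] if x < mid else self.children[1]).insert(x)
        elif self.count > DENSE and (self.hi - self.lo) / 2 >= UNIT:
            mid = (self.lo + self.hi) / 2               # split a dense range
            self.children = [Cell(self.lo, mid), Cell(mid, self.hi)]

    def leaves(self):
        if not self.children:
            return [self]
        return [c for ch in self.children for c in ch.leaves()]

root = Cell(0.0, 1.0)
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.3, 0.02, 5000), rng.normal(0.7, 0.05, 5000)])
for x in np.clip(rng.permutation(stream), 0.0, 0.999):
    root.insert(x)

dense_leaves = [c for c in root.leaves() if c.count > DENSE / 2]
print([(round(c.lo, 3), round(c.hi, 3)) for c in dense_leaves])
```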

Detection of Forest Fire and NBR Mis-classified Pixel Using Multi-temporal Sentinel-2A Images (다시기 Sentinel-2A 영상을 활용한 산불피해 변화탐지 및 NBR 오분류 픽셀 탐지)

  • Youn, Hyoungjin;Jeong, Jongchul
    • Korean Journal of Remote Sensing / v.35 no.6_2 / pp.1107-1115 / 2019
  • Satellite data play a major role in supporting knowledge about forest fires by rapidly delivering information to map damaged areas. In this study, seven Sentinel-2A images were used to detect burned areas in the forests of Sokcho affected by the fire of April 4, 2019. Forest fire severity was classified into 7 levels using the dNBR (differenced Normalized Burn Ratio) derived from Sentinel-2A. In the process of classifying the damaged areas, three areas with a high vegetation regrowth level were selected, and a detailed spatial analysis of these areas was conducted. According to the dNBR results, regrowth in coniferous forest appeared greater than in broad-leaved forest, yet the NDVI there showed the lowest vegetation level; this indicates a dNBR classification error. The dNBR time series showed that the mapped forest fire damage area decreased to a large extent between April 20th and May 3rd, an example of apparent regrowth caused by newly emerging plants and recovering broad-leaved vegetation. The results show that damaged areas can be detected through change detection by forest category, and the classification errors for coniferous forest were identified by comparing NDVI and dNBR. Therefore, during image classification with dNBR, the precision of the Korean forest fire damage rating table needs to be improved, accompanied by field investigations.
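
The underlying index arithmetic is standard, so a sketch is given below: NBR = (NIR - SWIR)/(NIR + SWIR) from pre- and post-fire scenes, dNBR as their difference, a 7-level severity classification, and an NDVI cross-check of pixels labelled as regrowth. The breakpoints are the commonly cited USGS dNBR classes and may differ from the thresholds the authors applied; the reflectance arrays are synthetic.

```python
# Sketch: NBR/dNBR burn-severity classification from pre- and post-fire imagery,
# with an NDVI cross-check of pixels the dNBR labels as regrowth.
import numpy as np

def nbr(nir, swir):
    return (nir - swir) / (nir + swir + 1e-9)

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

# Commonly used USGS dNBR severity classes (7 levels), regrowth to high severity
BREAKS = [-0.25, -0.10, 0.10, 0.27, 0.44, 0.66]
LABELS = ["regrowth, high", "regrowth, low", "unburned",
          "low severity", "moderate-low", "moderate-high", "high severity"]

rng = np.random.default_rng(0)
shape = (300, 300)
pre_nir, pre_swir = rng.uniform(0.3, 0.5, shape), rng.uniform(0.1, 0.2, shape)
post_nir, post_swir = rng.uniform(0.1, 0.4, shape), rng.uniform(0.1, 0.4, shape)
post_red = rng.uniform(0.05, 0.2, shape)

dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
classes = np.digitize(dnbr, BREAKS)                    # indices 0..6 -> LABELS
veg = ndvi(post_nir, post_red)                         # post-fire vegetation level

# Pixels labelled "regrowth" by dNBR but with low NDVI are candidate mis-classifications
suspect = (classes <= 1) & (veg < 0.2)
for i, lab in enumerate(LABELS):
    print(f"{lab:16s}: {(classes == i).mean():5.1%}")
print("suspect regrowth pixels:", int(suspect.sum()))
```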

A Study on Measurement of TFP and Determinant Factors (IT제조업의 총요소생산성 추정 및 결정요인 분석)

  • Lee, Young-Soo;Kim, Jung-Un;Jung, Hyun-Joon
    • Journal of Korea Society of Industrial Information Systems / v.13 no.1 / pp.76-86 / 2008
  • This paper estimates total factor productivity (TFP) in IT manufacturing by establishment employment size and analyses its determinants. The panel data consist of time series and cross-section data for four establishment employment-size classes over 1990~2004. During the period from 1991 to 1997, TFP increased irrespective of establishment size, but from 1998 to 2004 the TFP growth rate turned negative except for large establishments (more than 300 employees). Macro variables and policy variables are assumed to be the determinants of IT manufacturing TFP. The analysis across all establishment sizes shows that the sales growth rate is significantly positive, which leads us to conclude that there is a learning-by-doing effect and economy of scale. However, some variables (i.e., IT capital stock, policy financing, and openness) are significant in only a few models, so the effect may differ by establishment size. In the TFP determinants analysis by establishment size, we find that the coefficients of the policy financing and openness variables are significantly positive. The larger the establishment, the larger the scale economy. For large establishments (more than 300 employees), IT capital stock helps propel the increase in productivity.
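
The sketch below illustrates one common way to compute such a TFP series, as a Cobb-Douglas residual on a panel indexed by establishment-size class and year; the pooled-OLS elasticities, the size-class labels, and the synthetic panel are assumptions, not the paper's specification or data.

```python
# Sketch: TFP growth as a Cobb-Douglas residual by establishment size class.
# The panel is synthetic and the pooled-OLS elasticities are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
years = np.arange(1990, 2005)
sizes = ["1-9", "10-99", "100-299", "300+"]
rows = []
for s in sizes:
    k = np.cumprod(1 + rng.normal(0.06, 0.02, len(years)))     # capital index
    l = np.cumprod(1 + rng.normal(0.02, 0.02, len(years)))     # labour index
    y = (k ** 0.4) * (l ** 0.6) * np.cumprod(1 + rng.normal(0.02, 0.03, len(years)))
    rows += [{"size": s, "year": t, "lnY": np.log(yy), "lnK": np.log(kk), "lnL": np.log(ll)}
             for t, yy, kk, ll in zip(years, y, k, l)]
panel = pd.DataFrame(rows)

# Pooled OLS for the output elasticities of capital (alpha) and labour (beta)
Z = np.column_stack([np.ones(len(panel)), panel["lnK"], panel["lnL"]])
coef, *_ = np.linalg.lstsq(Z, panel["lnY"], rcond=None)
alpha, beta = coef[1], coef[2]

# ln TFP = lnY - alpha*lnK - beta*lnL; TFP growth = first difference within size class
panel["lnTFP"] = panel["lnY"] - alpha * panel["lnK"] - beta * panel["lnL"]
panel["dlnTFP"] = panel.sort_values(["size", "year"]).groupby("size")["lnTFP"].diff()
print(panel.groupby("size")["dlnTFP"].mean())
```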


An Empirical Study on Predictive Modeling to enhance the Product-Technical Roadmap (제품-기술로드맵 개발을 강화하기 위한 예측모델링에 관한 실증 연구)

  • Park, Kigon;Kim, YoungJun
    • Journal of Technology Innovation / v.29 no.4 / pp.1-30 / 2021
  • Due to the recent development of system semiconductors, technological innovation in automotive electronics is progressing rapidly. In particular, automotive electronics is accelerating technology development competition among automobile parts makers, and the development cycle is also changing rapidly. These changes further strengthen the importance of strategic planning for R&D. Because of the paradigm shift in the automobile industry, the Product-Technical Roadmap (P/TRM), one of the R&D strategy tools, analyzes technology forecasting, technology level evaluation, and the technology acquisition method (Make/Collaborate/Buy) at the planning stage. The product-technical roadmap is a tool that identifies customer needs for products and technologies, selects technologies, and sets development directions. However, most companies develop the product-technical roadmap through qualitative methods that rely mainly on technical papers, patent analysis, and the expert Delphi method. In this study, an empirical analysis was conducted through simulations that can supplement and strengthen the product-technical roadmap, centered on the automobile industry, by fusing Gartner's hype cycle, cumulative-moving-average-based data preprocessing, and deep learning (LSTM) time series analysis techniques. The empirical study presented in this paper can be used not only in the automobile industry but also in other manufacturing fields in general. In addition, from the corporate point of view, it can become a foundation for moving forward as a leading company by providing products to the market in a timely manner through a more accurate product-technical roadmap, moving away from roadmap preparation methods that have relied on qualitative approaches.
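
The two quantitative steps named in the abstract can be sketched as follows: cumulative-moving-average smoothing of an indicator series followed by a small LSTM forecaster. The indicator series is synthetic, and the PyTorch model below is a generic stand-in rather than the architecture or hyper-parameters used in the paper.

```python
# Sketch: cumulative moving average (CMA) preprocessing followed by a small LSTM
# forecaster. The indicator series and model settings are illustrative only.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
raw = np.cumsum(rng.normal(0.1, 1.0, 200)) + 50       # hypothetical interest index
cma = np.cumsum(raw) / np.arange(1, len(raw) + 1)     # cumulative moving average

def make_windows(series, width=12):
    """Slide a window over the series: predict the next value from `width` past values."""
    xs = np.stack([series[i:i + width] for i in range(len(series) - width)])
    ys = series[width:]
    return (torch.tensor(xs, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(ys, dtype=torch.float32).unsqueeze(-1))

X, y = make_windows((cma - cma.mean()) / cma.std())   # standardized CMA windows

class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])                # last time step -> next value

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```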

A Study constructing a Function-Based Records Classification System for Korean Individual Church (한국 개(個)교회기록물의 기능분류 방안)

  • Ma, Won-jun
    • The Korean Journal of Archival Studies / no.10 / pp.145-194 / 2004
  • Church records are evidential instruments for remembering church activity and an important aggregate of information with administrative, legal, financial, historical, and religious value as the collective memory of the church community. They must therefore be managed, and the mandate for their management is based on the Bible. Western churches, which correctly understand the importance of church records and this mandate, have made multilateral efforts to create and manage church archives systematically. Korean churches, on the other hand, do not have records management systems; records created in individual churches are mostly managed unsystematically, exist as backlogs, and are finally destroyed without any reasonable procedure. Given these problems, the purpose of this study is to offer a records classification scheme and a disposition instrument, with the recognition that records management should begin at the time of creation or before it. As a concrete device, I tried to embody a function-based classification method and disposal schedule. I prefer a function-based classification and disposal schedule over an organization-and-function-based one in order to present a stable classification and disposal schedule, since the most prominent feature of modern organizations is that they are multifaceted, and churches share this aspect. For this study, I applied the DIRKS (Designing and Implementing Recordkeeping Systems) manual provided by the National Archives of Australia and the guidelines in the ICA/IRMT series to construct a theory of function-based classification for individual churches. Through them, it was possible to present a model for preliminary investigation, analysis of business activity, records survey, and disposal scheduling, taking as an example Myong Sung Presbyterian Church, which belongs to The Presbyterian Church in Korea. I explained in detail the process and results of the preliminary investigation of Myong Sung Presbyterian Church, the analysis of business activity based on it, and the process of deriving the function-based classification and disposal schedule from all those steps. For establishing the disposal schedule, I planned a 'General Disposal Schedule' and an 'Agency Disposal Schedule', which categorize the general functions and agency-specific functions of an organization, according to DIRKS in Australia and ICA/IRMT. To estimate disposal dates, I made a thorough survey of the important records categories presented in the Constitution of the General Assembly, conducted interviews to determine the importance of tasks, and consulted examples of disposal schedules in Western church archives. This study is significant in that it attempts to embody a function-based classification and disposal schedule suitable for an individual church by applying DIRKS and ICA/IRMT, in the absence of prior theory or examples presenting a function-based classification and disposal schedule for individual churches. It is also meaningful in presenting a model that can classify and dispose of actual records according to function in an individual church that has no awareness of, or procedures for, records management.