Data Architectures


Genetic Design of Granular-oriented Radial Basis Function Neural Network Based on Information Proximity (정보 유사성 기반 입자화 중심 RBF NN의 진화론적 설계)

  • Park, Ho-Sung;Oh, Sung-Kwun;Kim, Hyun-Ki
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.2
    • /
    • pp.436-444
    • /
    • 2010
  • In this study, we introduce and discuss the concept of granular-oriented radial basis function neural networks (GRBF NNs). In contrast to the typical architectures encountered in radial basis function neural networks (RBF NNs), our main objective is to develop a design strategy for GRBF NNs as follows: (a) the architecture of the network is fully reflective of the structure encountered in the training data, which are granulated with the aid of clustering techniques. More specifically, the output space is granulated using K-Means clustering, while the information granules in the multidimensional input space are formed by a so-called context-based Fuzzy C-Means, which takes into account the structure already formed in the output space. (b) The innovative facet of the development involves a dynamic reduction of the dimensionality of the input space: the information granules are formed in a subspace of the overall input space, obtained by selecting a suitable subset of input variables so that this subspace retains the structure of the entire space. As this search is combinatorial in character, we use genetic optimization to determine the optimal input subspaces. A series of numeric studies exploiting nonlinear process data and a dataset coming from the machine learning repository provides detailed insight into the nature of the algorithm and its parameters, and offers some comparative analysis.
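
A minimal illustrative sketch of the granulation pipeline described in this abstract, assuming plain NumPy/scikit-learn: K-Means contexts in the output space, a simplified context-wise clustering standing in for context-based Fuzzy C-Means, and a toy random-subset search standing in for the genetic selection of input subspaces. It is not the authors' implementation.

```python
# Sketch only: context-based FCM and the genetic algorithm are replaced by simpler stand-ins.
import numpy as np
from sklearn.cluster import KMeans

def granulate(X, y, n_contexts=3, n_clusters_per_context=2):
    """Granulate the output space with K-Means, then form input-space prototypes per context."""
    contexts = KMeans(n_clusters=n_contexts, n_init=10).fit_predict(y.reshape(-1, 1))
    prototypes = []
    for c in range(n_contexts):
        Xc = X[contexts == c]
        if len(Xc) == 0:
            continue
        km = KMeans(n_clusters=min(n_clusters_per_context, len(Xc)), n_init=10).fit(Xc)
        prototypes.append(km.cluster_centers_)        # RBF centers induced by output context c
    return np.vstack(prototypes)

def rbf_fit_predict(X_train, y_train, X_test, centers, width=1.0):
    """Least-squares output weights on Gaussian activations (standard RBF NN readout)."""
    def design(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))
    W, *_ = np.linalg.lstsq(design(X_train), y_train, rcond=None)
    return design(X_test) @ W

def select_subspace(X, y, n_vars, n_trials=50, seed=0):
    """Toy stand-in for the genetic search: score random variable subsets by RBF fit error."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(n_trials):
        subset = rng.choice(X.shape[1], size=n_vars, replace=False)
        centers = granulate(X[:, subset], y)
        err = np.mean((rbf_fit_predict(X[:, subset], y, X[:, subset], centers) - y) ** 2)
        if err < best_err:
            best, best_err = subset, err
    return best, best_err
```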

A Study on the MMORPG Server Architecture Applying with Arithmetic Server (연산서버를 적용한 MMORPG 게임서버에 관한 연구)

  • Bae, Sung-Gill;Kim, Hye-Young
    • Journal of Korea Game Society
    • /
    • v.13 no.2
    • /
    • pp.39-48
    • /
    • 2013
  • In MMORPGs (Massively Multi-player Online Role-Playing Games), a large number of players actively interact with one another in a virtual world, so MMORPGs must be able to quickly process real-time access and job requests from numerous gaming users. A key challenge is that the workload of the game server increases as the number of gaming users increases. To address this workload problem, many developers adopt distributed server architectures that use dynamic map partitioning and load balancing according to server function. Accordingly, most MMORPG servers partition the virtual world into zones, and each zone runs on multiple game servers. These methods cause players to move frequently between game servers, which imposes high overhead for data updates. In this paper, we propose a new architecture that employs an arithmetic server dedicated to data operations. This architecture enables the existing game servers to process more access and job requests by reducing their load. Through mathematical modeling and experimental results, we show that our scheme yields higher efficiency than existing ones.
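
A rough, hypothetical load model (not the paper's mathematical model) that illustrates the intended effect of the arithmetic server: offloading data operations lowers the utilization of the game servers so they can absorb more access and job requests. All rates and service times below are made up.

```python
# Illustrative only: compares game-server utilization with and without a dedicated
# arithmetic server that absorbs data-update operations.
def utilization(arrival_rate, service_time):
    """M/M/1-style offered load; must stay below 1.0 for stability."""
    return arrival_rate * service_time

def compare(n_players=10_000, req_per_player=0.5, data_op_fraction=0.4,
            game_service_time=1e-4, arithmetic_service_time=1e-4, n_game_servers=4):
    total_rate = n_players * req_per_player                    # requests per second
    per_server = total_rate / n_game_servers
    # Without offloading: each game server handles gameplay and data operations.
    rho_without = utilization(per_server, game_service_time)
    # With offloading: data operations go to the arithmetic server instead.
    rho_game = utilization(per_server * (1 - data_op_fraction), game_service_time)
    rho_arith = utilization(total_rate * data_op_fraction, arithmetic_service_time)
    return rho_without, rho_game, rho_arith

if __name__ == "__main__":
    # e.g. (0.125, 0.075, 0.2): game servers shed load onto the arithmetic server
    print(compare())
```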

Variations of AlexNet and GoogLeNet to Improve Korean Character Recognition Performance

  • Lee, Sang-Geol;Sung, Yunsick;Kim, Yeon-Gyu;Cha, Eui-Young
    • Journal of Information Processing Systems
    • /
    • v.14 no.1
    • /
    • pp.205-217
    • /
    • 2018
  • Deep learning using convolutional neural networks (CNNs) is being studied in various fields of image recognition, and these studies show excellent performance. In this paper, we compare the performance of two CNN architectures, KCR-AlexNet and KCR-GoogLeNet. The experimental data used in this paper are obtained from PHD08, a large-scale Korean character database. It has 2,187 samples of each Korean character across 2,350 Korean character classes, for a total of 5,139,450 data samples. In the training results, KCR-AlexNet showed an accuracy of over 98% for the top-1 test and KCR-GoogLeNet showed an accuracy of over 99% for the top-1 test after the final training iteration. We made an additional Korean character dataset with fonts that are not in PHD08 to compare the classification success rate with commercial optical character recognition (OCR) programs and ensure the objectivity of the experiment. While the commercial OCR programs showed 66.95% to 83.16% classification success rates, KCR-AlexNet and KCR-GoogLeNet showed average classification success rates of 90.12% and 89.14%, respectively, which are higher than the commercial OCR programs' rates. Considering the time factor, KCR-AlexNet was faster to train on PHD08, whereas KCR-GoogLeNet had a faster classification speed.
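
The KCR-AlexNet and KCR-GoogLeNet definitions are not reproduced in the abstract; the PyTorch snippet below is a much smaller stand-in that only illustrates the shape of the task, assuming 56×56 grayscale glyph images and the 2,350 PHD08 classes.

```python
# Simplified stand-in for a Korean-character CNN classifier (not KCR-AlexNet/GoogLeNet).
import torch
import torch.nn as nn

class SmallKCRNet(nn.Module):
    def __init__(self, n_classes=2350):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 56 -> 28
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 28 -> 14
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 14 -> 7
        )
        self.classifier = nn.Linear(128 * 7 * 7, n_classes)

    def forward(self, x):                      # x: (batch, 1, 56, 56) glyph images
        return self.classifier(torch.flatten(self.features(x), 1))

def top1_accuracy(logits, labels):
    """Top-1 accuracy, the metric reported in the abstract."""
    return (logits.argmax(dim=1) == labels).float().mean().item()

# logits = SmallKCRNet()(torch.randn(4, 1, 56, 56))   # -> (4, 2350)
```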

Cross-Layer Architecture for QoS Provisioning in Wireless Multimedia Sensor Networks

  • Farooq, Muhammad Omer;St-Hilaire, Marc;Kunz, Thomas
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.1
    • /
    • pp.178-202
    • /
    • 2012
  • In this paper, we first survey cross-layer architectures for Wireless Sensor Networks (WSNs) and Wireless Multimedia Sensor Networks (WMSNs). Afterwards, we propose a novel cross-layer architecture for QoS provisioning in clustered, multi-hop WMSNs. The proposed architecture provides support for multiple network-based applications on a single sensor node: an area in memory is reserved where each application can store its network protocol settings. Furthermore, the proposed cross-layer architecture supports heterogeneous flows by classifying WMSN traffic into six traffic classes. The architecture incorporates a service differentiation module for QoS provisioning in WMSNs. The service differentiation module defines the forwarding behavior corresponding to each traffic class. The forwarding behavior is primarily determined by the priority of the traffic class; moreover, the service differentiation module allocates bandwidth to each traffic class with the goals of maximizing network utilization and avoiding starvation of low-priority flows. The proposal also incorporates a congestion detection and control algorithm. Upon detection of congestion, the congested node estimates the data rate that should be used by the node itself and by its one-hop upstream nodes. While estimating the data rate, the congested node considers the characteristics of the different traffic classes along with their total bandwidth usage. The architecture uses a shared database to enable cross-layer interactions; an application's network protocol settings and its interaction with the shared database are handled through a cross-layer optimization middleware.
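
A small sketch of the two behaviors described above, under assumed class names, priority weights, and a minimum-share rule that are not taken from the paper: priority-weighted bandwidth allocation with a floor that avoids starving low-priority flows, and a congestion-time rate estimate that a congested node could push to its one-hop upstream nodes.

```python
# Hypothetical traffic classes and weights; the paper's six classes are not named in the abstract.
TRAFFIC_CLASSES = {          # higher weight = higher priority
    "real_time_video": 6, "real_time_audio": 5, "event_critical": 4,
    "query_response": 3, "periodic_monitoring": 2, "best_effort": 1,
}

def allocate_bandwidth(capacity_kbps, min_share=0.05):
    """Give every class a small guaranteed floor, then split the rest by priority weight."""
    floor = capacity_kbps * min_share
    remaining = capacity_kbps - floor * len(TRAFFIC_CLASSES)
    total_w = sum(TRAFFIC_CLASSES.values())
    return {c: floor + remaining * w / total_w for c, w in TRAFFIC_CLASSES.items()}

def congested_rate(outgoing_capacity_kbps, class_usage_kbps):
    """Scale each class's sending rate so total upstream traffic fits the outgoing link."""
    total = sum(class_usage_kbps.values())
    if total <= outgoing_capacity_kbps:
        return class_usage_kbps                  # no congestion, keep current rates
    scale = outgoing_capacity_kbps / total
    return {c: r * scale for c, r in class_usage_kbps.items()}
```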

Modelling of starch industry wastewater microfiltration parameters by neural network

  • Jokic, Aleksandar I.;Seres, Laslo L.;Milovic, Nemanja R.;Seres, Zita I.;Maravic, Nikola R.;Saranovic, Zana;Dokic, Ljubica P.
    • Membrane and Water Treatment
    • /
    • v.9 no.2
    • /
    • pp.115-121
    • /
    • 2018
  • Artificial neural network (ANN) simulation is used to predict the dynamic change of permeate flux during wheat starch industry wastewater microfiltration with and without a static turbulence promoter. The experimental program spans sedimentation times from 2 to 4 h, feed flow rates from 50 to 150 L/h, and transmembrane pressures from 1×10⁵ to 3×10⁵ Pa. ANN predictions of the wastewater microfiltration are compared with experimental results obtained from two different sets of microfiltration experiments, with and without the static turbulence promoter. The effects of the training algorithm and neural network architecture on ANN performance are discussed. For most of the cases considered, the ANN proved to be an adequate interpolation tool, and excellent prediction was obtained using automated Bayesian regularization as the training algorithm. The optimal ANN architecture was determined to be 4-10-1, with a hyperbolic tangent sigmoid transfer function for the hidden and output layers. The error distributions revealed that experimental results are in very good agreement with computed ones: only 2% of data points had an absolute relative error greater than 20% for the microfiltration without the static turbulence promoter, and only 1% for the microfiltration with the promoter. The contribution of the filtration time variable to the flux values provided by the ANNs was found to be important, in the range of 52-66%, owing to increased membrane fouling over time. In the case of microfiltration with the static turbulence promoter, the relative importance of transmembrane pressure and feed flow rate increased by about 30%.
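
A minimal NumPy sketch of the reported 4-10-1 architecture with hyperbolic tangent transfer functions in the hidden and output layers. The four inputs are assumed to be sedimentation time, feed flow rate, transmembrane pressure, and filtration time, and the weights are random placeholders rather than the trained, Bayesian-regularized model.

```python
# Shape-only sketch of a 4-10-1 tanh network; weights are placeholders, not the fitted model.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 10)), np.zeros(10)   # 4 inputs -> 10 hidden neurons
W2, b2 = rng.normal(size=(10, 1)), np.zeros(1)    # hidden -> permeate-flux output

def predict_flux(x_scaled):
    """x_scaled: (n, 4) inputs scaled to [-1, 1]; returns scaled flux predictions."""
    h = np.tanh(x_scaled @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def sensitivity(x_scaled, i, eps=0.05):
    """Crude analogue of the contribution analysis: perturb input i and measure the response."""
    xp = x_scaled.copy()
    xp[:, i] += eps
    return np.mean(np.abs(predict_flux(xp) - predict_flux(x_scaled))) / eps
```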

Object detection in financial reporting documents for subsequent recognition

  • Sokerin, Petr;Volkova, Alla;Kushnarev, Kirill
    • International journal of advanced smart convergence
    • /
    • v.10 no.1
    • /
    • pp.1-11
    • /
    • 2021
  • Document page segmentation is an important step in building a quality optical character recognition module. This study reviews existing work on page segmentation and focuses on developing a segmentation model that has practical value for an organization and offers broad capabilities for managing model quality. The main problems of document segmentation are highlighted, including complex backgrounds of intersecting objects. As detection classes, not only the classic text, table, and figure were selected, but also additional types such as signature, logo, and table without borders (or with partially missing borders), posing the non-trivial task of detecting non-standard document elements. The authors compared existing neural network architectures for object detection based on published research data; the most suitable architecture was RetinaNet. To enable quality control of the model, a method based on neural network modeling using the RetinaNet architecture is proposed. During the study, several models were built and their quality was assessed on the test sample using the mean average precision (mAP) metric. The best result among the constructed algorithms was shown by a model that combines four neural networks: the first focuses on detecting tables and borderless tables, the second on seals and signatures, the third on pictures and logos, and the fourth on text. This four-network approach showed the best results for most detection classes on the test sample, in accordance with the objectives of the study. The method proposed in the article can be used to recognize other objects. A promising direction for further analysis is the segmentation of tables, where functionally distinct areas of a table would act as classes: heading, cell with a name, cell with data, and empty cell.
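
A sketch of how the four-detector arrangement could be wired together. The detector interface is an assumption (each detector is presumed to return (box, label, score) tuples); actual RetinaNet models would come from an object-detection library and are not reproduced here.

```python
# Hypothetical wiring of four class-specialised detectors into one page-level result.
CLASS_GROUPS = {
    "tables_net":     {"table", "table_without_borders"},
    "signatures_net": {"seal", "signature"},
    "graphics_net":   {"picture", "logo"},
    "text_net":       {"text"},
}

def detect_page(image, detectors, score_threshold=0.5):
    """Run each specialised detector and keep only the classes it is responsible for."""
    merged = []
    for name, detect in detectors.items():          # detectors: {"tables_net": callable, ...}
        for box, label, score in detect(image):     # assumed (box, label, score) interface
            if label in CLASS_GROUPS[name] and score >= score_threshold:
                merged.append({"bbox": box, "class": label, "score": score})
    return merged
```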

Shanghai Containerised Freight Index Forecasting Based on Deep Learning Methods: Evidence from Chinese Futures Markets

  • Liang Chen;Jiankun Li;Rongyu Pei;Zhenqing Su;Ziyang Liu
    • East Asian Economic Review
    • /
    • v.28 no.3
    • /
    • pp.359-388
    • /
    • 2024
  • With the escalation of global trade, the Chinese commodity futures market has ascended to a pivotal role within the international shipping landscape. The Shanghai Containerized Freight Index (SCFI), a leading indicator of the shipping industry's health, is particularly sensitive to the vicissitudes of the Chinese commodity futures sector. Nevertheless, a significant research gap exists regarding the application of Chinese commodity futures prices as predictive tools for the SCFI. To address this gap, the present study employs a comprehensive dataset of daily observations from March 24, 2017, to May 27, 2022, encompassing a total of 29,308 data points. We have crafted a deep learning model that combines Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) architectures. The results show that the CNN-LSTM model captures the nonlinear dynamics and long-term temporal dependencies in the SCFI dataset, and that it is robust to changes in random sample selection, data frequency, and structural shifts within the dataset. It achieved an R² of 96.6% and outperformed the LSTM and CNN models used alone. This research underscores the predictive power of the Chinese futures market for the shipping cost index, deepening our understanding of the relationship between the shipping industry and the financial sphere. Furthermore, it broadens the scope of machine learning applications in maritime transportation management, paving the way for SCFI forecasting research. The study's findings offer decision-support tools and risk management solutions for logistics enterprises, shipping corporations, and governmental entities.
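
A minimal PyTorch sketch of a CNN-LSTM forecaster of the kind described above; the layer sizes, window length, and feature count are assumptions rather than the paper's architecture or data.

```python
# Illustrative CNN-LSTM: 1D convolution over a window of futures features, then an LSTM,
# then a linear head predicting the next SCFI value.
import torch
import torch.nn as nn

class CNNLSTMForecaster(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(                       # local patterns across the window
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU()
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                 # next-step SCFI value

    def forward(self, x):             # x: (batch, window, n_features) futures features
        z = self.conv(x.transpose(1, 2)).transpose(1, 2) # Conv1d expects (batch, channels, time)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])                     # prediction from the last time step

# y_pred = CNNLSTMForecaster()(torch.randn(16, 20, 8))   # -> (16, 1)
```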

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.12
    • /
    • pp.291-306
    • /
    • 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the Internet of Things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires a significant amount of computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud data centers for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises privacy issues, which are a major concern. Edge computing, where distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve the privacy of users. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices. We believe this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of an edge computing platform. It also presents privacy-preserving approaches for deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful. Finally, it discusses open issues and challenges in leveraging deep learning within edge computing.
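
One of the distributed-inference patterns surveyed in reviews of this kind is model partitioning, where early layers run on the end device and the remaining layers on a nearby edge server; the hypothetical PyTorch sketch below illustrates the idea with made-up layer sizes.

```python
# Illustrative model split for distributed edge inference (layer sizes are hypothetical).
import torch
import torch.nn as nn

full_model = nn.Sequential(
    nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),        # runs on the IoT end device
    nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10),  # runs on the edge server
)
device_part, server_part = full_model[:3], full_model[3:]

def edge_inference(x):                 # x: (batch, 28, 28) raw sensor/image data
    intermediate = device_part(x)      # only the 256-value activation leaves the device,
    return server_part(intermediate)   # not the raw data: less traffic, better privacy
```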

A Data-driven Classifier for Motion Detection of Soldiers on the Battlefield using Recurrent Architectures and Hyperparameter Optimization (순환 아키텍쳐 및 하이퍼파라미터 최적화를 이용한 데이터 기반 군사 동작 판별 알고리즘)

  • Joonho Kim;Geonju Chae;Jaemin Park;Kyeong-Won Park
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.107-119
    • /
    • 2023
  • Technology that recognizes a soldier's motion and movement status has recently attracted considerable attention as a combination of wearable technology and artificial intelligence, and it is expected to change the paradigm of troop management. The accuracy of state determination must be kept high to deliver the expected functions both in training situations, where each individual's motion is evaluated and feedback is provided, and in combat situations, where overall troop management is enhanced. However, when the input data are given as a time series or sequence, existing feedforward networks show clear limitations in maximizing classification performance. Since the human behavior data handled for military motion recognition (3-axis accelerations and 3-axis angular velocities) require analysis of their time-dependent characteristics, this study proposes a high-performance data-driven classifier that utilizes long short-term memory (LSTM) to capture the order dependence of the acquired data and learns to classify eight representative military operations (sitting, standing, walking, running, ascending, descending, low crawl, and high crawl). Because accuracy depends strongly on the network's learning conditions and variables, manual adjustment is neither cost-effective nor guaranteed to yield optimal results. Therefore, in this study, we optimized the hyperparameters using Bayesian optimization for maximum generalization performance. As a result, the final architecture reduced the error rate by 62.56% compared to an existing network with a similar number of learnable parameters, reaching a final accuracy of 98.39% across the military operations.
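
A sketch of the approach described above, assuming 6-channel IMU windows (3-axis acceleration plus 3-axis angular velocity) and eight motion classes, with Optuna used as a stand-in tuner for the Bayesian optimization; the synthetic data and training loop are placeholders, not the authors' setup.

```python
# LSTM motion classifier with hyperparameter search; data and search space are placeholders.
import optuna
import torch
import torch.nn as nn

class MotionLSTM(nn.Module):
    def __init__(self, hidden=64, n_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(input_size=6, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time_steps, 6) IMU window
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the last time step

def train_and_validate(model, lr, epochs=3):
    """Toy loop on synthetic data, standing in for training on the real IMU windows."""
    X, y = torch.randn(256, 50, 6), torch.randint(0, 8, (256,))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return (model(X).argmax(1) == y).float().mean().item()

def objective(trial):
    hidden = trial.suggest_int("hidden", 32, 256)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    return train_and_validate(MotionLSTM(hidden=hidden), lr)

# study = optuna.create_study(direction="maximize")
# study.optimize(objective, n_trials=50)
```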

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference
    • /
    • 1999.06a
    • /
    • pp.175-186
    • /
    • 1999
  • Detecting significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering (multi-scale decomposition) methods such as wavelet analysis have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically yields better local information at different time scales from the filtered data. Wavelets can process information effectively at different scales, which implies inherent support for multiresolution analysis and correlates with time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets are, for example, particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets viewed through chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, wavelet analysis has increasingly been applied to many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-scale decomposition for financial time series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems that has to be solved in applying such filtering is the correct choice of filter type and filter parameters: if the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a particular theoretical or statistical criterion, and new, versatile techniques have recently been introduced for this problem. Our study first analyzes thresholding (filtering) methods based on wavelet analysis that use multi-scale decomposition algorithms within neural network architectures, especially in complex financial markets. Second, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization and analyze the critical issues related to optimal filter design, namely finding the optimal filter parameters that extract significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal for use in forecasting models, in particular well-known neural network models.
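
A minimal sketch of one wavelet shrinkage scheme of the kind compared in such studies: multi-level decomposition, soft thresholding of the detail coefficients with the universal (VisuShrink) threshold, and reconstruction of the denoised series. The wavelet choice ('db4') and decomposition level are assumptions; the paper's point is precisely that the thresholding criterion must be chosen carefully.

```python
# Universal-threshold soft shrinkage as one example criterion; other criteria would plug in here.
import numpy as np
import pywt

def wavelet_denoise(series, wavelet="db4", level=3):
    coeffs = pywt.wavedec(series, wavelet, level=level)      # [approx, detail_L, ..., detail_1]
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise scale from finest details
    thr = sigma * np.sqrt(2.0 * np.log(len(series)))         # universal (VisuShrink) threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(series)]

# The denoised series (or the thresholded detail levels themselves) can then be fed to a
# neural network forecaster, e.g. for a KRW/USD exchange-rate series.
```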
