• Title/Summary/Keyword: linear network

Search results: 1,826

Demand Forecast For Empty Containers Using MLP (MLP를 이용한 공컨테이너 수요예측)

  • DongYun Kim;SunHo Bang;Jiyoung Jang;KwangSup Shin
    • The Journal of Bigdata, v.6 no.2, pp.85-98, 2021
  • The COVID-19 pandemic aggravated the imbalance in container import and export volumes among countries, worsening the shortage of empty containers. Because securing empty containers in line with actual demand is essential for stable and efficient port operation, various techniques for forecasting empty-container demand have been studied. However, previous work has focused on long-term forecasts on a monthly or annual basis rather than forecasts that ports and shipping companies can use directly. This study presents a daily and weekly prediction method based on an artificial neural network. Specifically, demand forecasting models were developed using a multi-layer perceptron and a multiple linear regression model. To overcome the limitation of scarce data, the data were augmented by considering the business process linking loaded and empty containers, in which a fully loaded container is converted into an empty one. Numerical experiments show that the resulting model is practically applicable, although its accuracy is not perfect.
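The multiple-linear-regression baseline mentioned in the abstract can be sketched in miniature. The function below fits a single-predictor ordinary-least-squares line in closed form; the container figures are invented for illustration and are not from the paper.

```python
def fit_ols(xs, ys):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    a = cov_xy / var_x          # slope
    b = mean_y - a * mean_x     # intercept
    return a, b

# Toy data: loaded-container volume vs. resulting empty-container demand.
loaded = [10.0, 20.0, 30.0, 40.0]
empty_demand = [21.0, 41.0, 61.0, 81.0]   # exactly 2*x + 1
slope, intercept = fit_ols(loaded, empty_demand)
```

In practice the paper's MLP would replace this closed-form fit, but the regression gives a transparent baseline to compare against.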

A Traffic Equilibrium Model with Area-Based Non Additive Road Pricing Schemes (지역기반의 비가산성 도로통행료 부과에 따른 교통망 균형모형)

  • Jung, Jumlae
    • KSCE Journal of Civil and Environmental Engineering Research, v.28 no.5D, pp.649-654, 2008
  • A path is non-additive when the sum of the travel costs of the links making up the path does not equal the path cost, and there are many transportation settings in which the additivity assumption does not hold. Nonetheless, traffic equilibrium models are generally built on the fundamental hypothesis of additivity, which makes them applicable only under the restrictive condition that path costs are linear functions of link costs. Area-wide road pricing is a realistic example that violates the additivity assumption: because the fare is charged when the driver passes the exit gate but determined at the entry gate, it is not linearly proportional to the link costs. This research proposes a novel Wardrop-type traffic equilibrium model for area-wide road pricing schemes. It introduces a binary indicator variable that transforms the non-additive path cost into an additive one. Since the conventional shortest-path and Frank-Wolfe algorithms can be applied without path enumeration, and no special network representation is required, the model is more general than previously proposed approaches. Theoretical proofs and case studies are presented.
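The binary-indicator idea can be illustrated with a toy path-cost function: link costs add as usual, and a flat area toll is added at most once when the path touches the pricing area. This is only a sketch under assumed data structures, not the paper's exact formulation.

```python
def path_cost(path, link_cost, area_nodes, toll):
    """Additive link costs plus a flat area toll, charged at most once.

    path       -- sequence of node ids, e.g. ["A", "B", "C"]
    link_cost  -- dict mapping (u, v) links to travel cost
    area_nodes -- set of nodes inside the pricing area
    toll       -- flat fare charged if the path enters the area
    """
    base = sum(link_cost[(u, v)] for u, v in zip(path, path[1:]))
    # Binary indicator: 1 if the path enters the pricing area, else 0.
    delta = 1 if any(node in area_nodes for node in path) else 0
    return base + delta * toll

links = {("A", "B"): 3.0, ("B", "C"): 4.0, ("A", "C"): 9.0}
cost_through_area = path_cost(["A", "B", "C"], links, {"B"}, toll=2.0)  # 3 + 4 + toll
cost_bypass = path_cost(["A", "C"], links, {"B"}, toll=2.0)             # 9, no toll
```

Once the toll is folded into the path cost via the indicator, the cost is additive again, which is what lets the standard shortest-path and Frank-Wolfe machinery apply.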

Ashbery's Aesthetics of Difficulty: Information Theory and Hypertext

  • Ryoo, Gi Taek
    • Journal of English Language & Literature, v.58 no.6, pp.1001-1021, 2012
  • This paper is concerned with John Ashbery's poetics of difficulty, questioning in particular the nature of communication in his difficult poems. Ashbery conceives of poetry as 'information' to be transmitted to the reader, with meaning created by a series of selections among equally probable choices. His poetry has been characterized by resistance to interpretive systems of meaning; but that resistance itself, I will argue, can be an effective medium of communication, since the message is not simply transmitted but 'selected', and thus created, by the reader. In Ashbery's poetry, disruptive 'noise' elements can be processed as constructive information: what is normally considered a hindrance can be reversed and added to the information, and random ambiguities can be integrated into the final structure of meaning. Such a stochastic sense of information transfer is embodied in Ashbery's idea of creating a network of verbal elements in his poetry, analogous to the interconnecting web of hypertext, the most dynamic medium 'information technology' has brought us. Ashbery, whose poems are simultaneously incomprehensible and intelligent, employs ambiguity and noise in an attempt to reach through linear language to express nonlinear realities. I therefore examine Ashbery's poetics of difficulty from the perspective of communication, using information theory and the principles of hypertext. His poetry raises precisely the problems confronted in the era of communication and information technology, and his aesthetics of difficulty reflects the culture of our uncertain times, overflowing with information. With his difficult, enigmatic poems, Ashbery moved ahead of the technological advances of his time to propose a new way of perceiving the world and life.

Dietary supplementation of solubles from shredded, steam-exploded pine particles modulates cecal microbiome composition in broiler chickens

  • Chris Major Ncho;Akshat Goel;Vaishali Gupta;Chae-Mi Jeong;Ji-Young Jung;Si-Young Ha;Jae-Kyung Yang;Yang-Ho Choi
    • Journal of Animal Science and Technology, v.65 no.5, pp.971-988, 2023
  • This study evaluated the effects of supplementing solubles from shredded, steam-exploded pine particles (SSPP) on growth performance, plasma biochemistry, and cecal microbial composition in broilers. The birds were reared for 28 days and fed basal diets with or without SSPP from 8 days of age, in three dietary treatments supplementing 0% (0% SSPP), 0.1% (0.1% SSPP), and 0.4% (0.4% SSPP) SSPP. Supplementation did not significantly affect growth or plasma biochemistry, but there was a clear indication of diet-induced microbial shifts. Beta-diversity analysis revealed supplementation-related clustering (ANOSIM: r = 0.31, p < 0.01), with overall lower individual dispersion than in the control group (PERMDISP: p < 0.05). In addition, the proportion of Bacteroides increased, and the relative abundances of the families Vallitaleaceae, Defluviitaleaceae, and Clostridiaceae and the genera Butyricicoccus and Anaerofilum were significantly higher in the 0.4% SSPP group than in the control group (p < 0.05). Furthermore, linear discriminant analysis effect size (LEfSe) identified beneficial bacteria such as Ruminococcus albus and Butyricicoccus pullicaecorum as microbial biomarkers of dietary SSPP inclusion (p < 0.05; |LDA effect size| > 2.0). Finally, network analysis showed strong positive correlations among microbial species of the class Clostridia, whereas Erysipelotrichia and Bacteroidia were mostly negatively correlated with Clostridia. Taken together, the results suggest that SSPP supplementation modulates the cecal microbial composition of broilers toward a "healthier" profile.

Cortical Iron Accumulation as an Imaging Marker for Neurodegeneration in Clinical Cognitive Impairment Spectrum: A Quantitative Susceptibility Mapping Study

  • Hyeong Woo Kim;Subin Lee;Jin Ho Yang;Yeonsil Moon;Jongho Lee;Won-Jin Moon
    • Korean Journal of Radiology, v.24 no.11, pp.1131-1141, 2023
  • Objective: Cortical iron deposition has recently been shown to occur in Alzheimer's disease (AD). In this study, we aimed to evaluate how cortical gray matter iron, measured using quantitative susceptibility mapping (QSM), differs across the clinical cognitive impairment spectrum. Materials and Methods: This retrospective study evaluated 73 participants with normal cognition (NC; mean age ± standard deviation, 66.7 ± 7.6 years; 52 females and 21 males), 158 patients with mild cognitive impairment (MCI), and 48 patients with AD dementia. The participants underwent brain magnetic resonance imaging using a three-dimensional multi-dynamic multi-echo sequence on a 3-T scanner. We employed a deep neural network (QSMnet+) and used automatic segmentation software based on FreeSurfer v6.0 to extract anatomical labels and volumes of interest in the cortex. We used analysis of covariance to investigate differences in susceptibility among the clinical diagnostic groups in each brain region. Multivariable linear regression analysis was performed to study the association between susceptibility values and cognitive scores, including the Mini-Mental State Examination (MMSE). Results: Among the three groups, the frontal (P < 0.001), temporal (P = 0.004), parietal (P = 0.001), occipital (P < 0.001), and cingulate cortices (P < 0.001) showed higher mean susceptibility in patients with MCI and AD than in NC subjects. In the combined MCI and AD group, the mean susceptibilities in the cingulate cortex (β = -216.21, P = 0.019) and insular cortex (β = -276.65, P = 0.001) were significant independent predictors of MMSE scores after correcting for age, sex, education, regional volume, and APOE4 carrier status. Conclusion: Iron deposition in the cortex, as measured by QSMnet+, was higher in patients with AD and MCI than in NC participants. Iron deposition in the cingulate and insular cortices may be an early imaging marker of cognitive impairment-related neurodegeneration.

Research on Performance of Graph Algorithm using Deep Learning Technology (딥러닝 기술을 적용한 그래프 알고리즘 성능 연구)

  • Giseop Noh
    • The Journal of the Convergence on Culture Technology, v.10 no.1, pp.471-476, 2024
  • With the spread of smart and computing devices, big data are being generated on a large scale. Machine learning comprises algorithms that perform inference by learning patterns in data, and among them, deep learning based on neural networks has attracted particular attention, achieving rapid performance improvements across a range of applications. Recently, attempts to analyze data with graph structures using deep learning have been increasing. This study presents a method for generating graphs that can be transferred to a deep-learning network. In the graph generation process, node attributes and edge weights are generalized and converted into a matrix structure suitable for deep-learning input, and a linear transformation matrix that preserves attribute and weight information is applied. Finally, the study presents a deep-learning input structure for general graphs and an approach for performance analysis.
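A minimal sketch of the kind of linear transformation described above: node attributes are collected into a matrix and multiplied by a transform matrix to produce a fixed-width deep-learning input. The matrix values here are toy numbers; the paper's actual generalization scheme may differ.

```python
def matmul(A, B):
    """Plain-Python matrix product of A (n x d) and B (d x k)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Node-attribute matrix X: one row per graph node (toy values).
X = [[1.0, 0.0],
     [0.0, 2.0]]
# Linear transformation W mapping 2 node attributes to 3 input features.
W = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
H = matmul(X, W)   # deep-learning input, one row per node
```

Because W is applied uniformly to every node's attribute row, the relative attribute information is preserved while the output width is fixed for the network's input layer.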

Multi-Variate Tabular Data Processing and Visualization Scheme for Machine Learning based Analysis: A Case Study using Titanic Dataset (기계 학습 기반 분석을 위한 다변량 정형 데이터 처리 및 시각화 방법: Titanic 데이터셋 적용 사례 연구)

  • Juhyoung Sung;Kiwon Kwon;Kyoungwon Park;Byoungchul Song
    • Journal of Internet Computing and Services, v.25 no.4, pp.121-130, 2024
  • As internet and communication technology (ICT) improves exponentially, the types and amount of available data also increase. Although data analysis, including statistics, is essential for utilizing such large amounts of data, conventional methods have inherent limits in processing diverse and complex data. Meanwhile, thanks to improved computational performance and growing demand for autonomous systems, machine learning (ML) is being applied in many fields. In particular, processing the data fed to the model and designing the model for the objective function are critical to achieving good performance. Many studies have presented data-processing methods suited to particular data types and properties, and ML performance varies greatly with the method chosen. Nevertheless, deciding which processing method to use is difficult because data types and characteristics have become more diverse; in particular, multi-variate data processing is essential for solving non-linear problems with ML. In this paper, we present a multi-variate tabular data processing scheme for ML-aided analysis using the Titanic dataset from Kaggle, which contains various kinds of data. We present methods such as input-variable filtering based on statistical analysis and normalization according to data properties, and we analyze the data structure using visualization. Finally, we design an ML model, train it with the proposed multi-variate data processing, and analyze its performance in predicting passenger survival. We expect the proposed processing and visualization scheme to extend to various environments for ML-based analysis.
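One of the processing steps mentioned above, normalization according to data properties, can be sketched as a simple min-max scaler. The fare values below are invented placeholders, not taken from the Titanic dataset.

```python
def min_max_scale(values):
    """Rescale a numeric column to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # constant column: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

fares = [0.0, 25.0, 50.0, 100.0]      # hypothetical fare column
scaled = min_max_scale(fares)
```

Scaling each numeric column to a common range keeps variables with large units (fares) from dominating variables with small units (ages) during model training.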

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.167-181, 2018
  • Over the past decade, deep learning has been in the spotlight among machine-learning algorithms. In particular, CNNs (Convolutional Neural Networks), known as an effective solution for recognizing and classifying images and voices, have been widely applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, we propose applying a CNN to stock market prediction, one of the most challenging tasks in machine-learning research. Since CNNs excel at interpreting images, the proposed model adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) from time-series graphs. In other words, our proposal is a machine-learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future prices. The proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days, and in step 2 it creates time-series graphs for the divided dataset. Each graph is drawn as a 40×40-pixel image, with each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing its color values on the R (red), G (green), and B (blue) scales. In the next step, the graph-image dataset is split into training and validation sets; we used 80% of the data for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the training images.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer and a 2×2 max-pooling filter in the pooling layer. The two hidden layers contain 900 and 32 nodes, respectively, and the output layer contains 2 nodes (one for the upward trend, one for the downward trend). The activation function for the convolution and hidden layers is ReLU (Rectified Linear Unit), and the output layer uses Softmax. To validate CNN-FG, we applied it to predicting KOSPI200 over 2,026 trading days in eight years (2009 to 2016). To balance the two classes of the dependent variable (tomorrow's stock market movement), we selected 1,950 samples by random sampling, then built the training set from 80% of the data (1,560 samples) and the validation set from the remaining 20% (390 samples). The independent variables of the experimental dataset include twelve technical indicators popular in previous studies, such as Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results show that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These results imply that converting time-series business data into graphs and building CNN-based classifiers on those graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep-learning techniques to business problem solving.
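The first preprocessing step of CNN-FG, splitting the series into 5-day intervals and labeling each interval's direction, can be sketched as follows. The price series is invented, and the labeling rule (1 for up, 0 for down) is our reading of the abstract, not code from the paper.

```python
def split_intervals(series, size=5):
    """Split a daily series into consecutive, non-overlapping windows."""
    return [series[i:i + size] for i in range(0, len(series) - size + 1, size)]

def direction_label(window, next_value):
    """1 if the next value rises above the window's last close, else 0."""
    return 1 if next_value > window[-1] else 0

prices = [100, 101, 99, 102, 103, 104, 102, 101, 100, 98]
windows = split_intervals(prices, size=5)
label = direction_label(windows[0], prices[5])   # 104 > 103, so upward
```

In the full pipeline each window would then be rendered as a 40×40 multi-color graph image and unpacked into R/G/B matrices before being fed to the CNN.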

A Performance Comparison of Super Resolution Model with Different Activation Functions (활성함수 변화에 따른 초해상화 모델 성능 비교)

  • Yoo, Youngjun;Kim, Daehee;Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering, v.9 no.10, pp.303-308, 2020
  • The ReLU (Rectified Linear Unit) function has been the dominant standard activation function in most deep neural network models since it was proposed. Later, the Leaky ReLU, Swish, and Mish activation functions were presented as replacements and showed improved performance over ReLU in image classification. We therefore examined whether replacing ReLU with other activation functions could also improve performance in the super-resolution task. In this paper, performance is compared while changing the activation function in the EDSR model, which shows stable performance in super resolution. At 2× upscaling, the existing ReLU showed similar or better performance than the other activation functions tested. At 4× upscaling, Leaky ReLU and Swish performed slightly better than ReLU: in PSNR and SSIM, which quantitatively evaluate image quality, Leaky ReLU yielded average improvements of 0.06% and 0.05%, and Swish 0.06% and 0.03%. At 8× upscaling, Mish showed a slight average improvement over ReLU of 0.06% in PSNR and 0.02% in SSIM. In conclusion, Leaky ReLU and Swish improved on ReLU for 4× super resolution, and Mish improved on ReLU for 8× super resolution. In future work, comparative experiments replacing the activation function with Leaky ReLU, Swish, and Mish should be conducted to improve performance in other super-resolution models.
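The four activation functions compared above can be written down directly; these are the standard textbook definitions, not code from the paper.

```python
import math

def relu(x):
    """max(0, x): zero for negative inputs, identity otherwise."""
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Like ReLU, but lets a small gradient (slope alpha) through for x < 0."""
    return x if x > 0 else alpha * x

def swish(x):
    """x * sigmoid(x); smooth and non-monotonic near zero."""
    return x / (1.0 + math.exp(-x))

def mish(x):
    """x * tanh(softplus(x)); smooth like Swish, slightly different shape."""
    return x * math.tanh(math.log(1.0 + math.exp(x)))
```

All four agree with the identity for large positive inputs; they differ only in how they treat negative and near-zero inputs, which is where the reported PSNR/SSIM differences come from.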

A Study on the Data Driven Neural Network Model for the Prediction of Time Series Data: Application of Water Surface Elevation Forecasting in Hangang River Bridge (시계열 자료의 예측을 위한 자료 기반 신경망 모델에 관한 연구: 한강대교 수위예측 적용)

  • Yoo, Hyungju;Lee, Seung Oh;Choi, Seohye;Park, Moonhyung
    • Journal of Korean Society of Disaster and Security, v.12 no.2, pp.73-82, 2019
  • As sudden floods due to climate change have become more frequent, flood damage to riverside social infrastructure has grown and the threat of overflow has increased, so administrators need rapid predictions of potential flooding. However, most current flood-forecasting models, including hydraulic models, achieve high numerical accuracy at the cost of long simulation times. To alleviate this limitation, data-driven models using artificial neural networks have been widely used, but existing models cannot consider time-series parameters. In this study, the water surface elevation at Hangang River bridge was predicted using a NARX model that considers time-series parameters, and the results of ANN and RNN models were compared with the NARX model to assess its suitability. Using ten years of hydrological data from 2009 to 2018, 70% of the data were used for training and 15% each for testing and evaluation. In predicting the water surface elevation three hours ahead at Hangang River bridge in 2018, the ANN, RNN, and NARX models achieved RMSEs of 0.20 m, 0.11 m, and 0.09 m; MAEs of 0.12 m, 0.06 m, and 0.05 m; and peak errors of 1.56 m, 0.55 m, and 0.10 m, respectively. The error analysis shows that the NARX model, which considers time-series parameters, is the most suitable for predicting water surface elevation: it can learn the trend of the time series and produce accurate predictions even at high water surface elevations by using the hyperbolic tangent and ReLU (Rectified Linear Unit) functions as activation functions. However, the NARX model suffers from vanishing gradients as the sequence length grows. In future work, the accuracy of water-surface-elevation prediction will be examined using an LSTM model.
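What distinguishes NARX from a plain ANN is its lagged-input construction: each target is predicted from past outputs and past exogenous inputs. A minimal sketch is below; the series values and lag depths are illustrative, not the study's configuration.

```python
def narx_inputs(y, x, ny=2, nx=2):
    """Build NARX-style training pairs: each target y[t] is paired with
    the ny previous outputs and the nx previous exogenous inputs.

    y -- output series (e.g. hourly water surface elevation)
    x -- exogenous input series of the same length (e.g. upstream discharge)
    """
    rows, targets = [], []
    for t in range(max(ny, nx), len(y)):
        rows.append(y[t - ny:t] + x[t - nx:t])
        targets.append(y[t])
    return rows, targets

# Toy series standing in for stage and discharge records.
stage = [1.0, 2.0, 3.0, 4.0, 5.0]
discharge = [10.0, 20.0, 30.0, 40.0, 50.0]
rows, targets = narx_inputs(stage, discharge)
```

Feeding past outputs back in as inputs is what lets the model track the trend of the time series, at the cost of the vanishing-gradient issue the abstract notes for long sequences.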