• Title/Summary/Keyword: network optimization model


Optimal Reservoir Operation for Flood Control Using a Hybrid Approach (Case Study: Chungju Multipurpose Reservoir in Korea) (복합 모델링 기법을 이용한 홍수시 저수지 최적 운영 (사례 연구 : 충주 다목적 저수지))

  • Lee, Han-Gu;Lee, Sang-Ho
    • Journal of Korea Water Resources Association
    • /
    • v.31 no.6
    • /
    • pp.727-739
    • /
    • 1998
  • The main objectives of optimal reservoir operation can be described as follows: maximization of benefits through optimal allocation of limited water resources to various purposes, and minimization of the costs of flood damage in potentially damaged regions, the risk of dam failure, etc., through safe drainage of a bulky volume of excess water by proper reservoir operation. Reviewing past research on reservoir operation, we find that the former has been studied far more extensively over the last decades than the latter. This study focuses on developing a methodology of optimal reservoir operation for flood control, with a case study performed on the Chungju multipurpose reservoir in Korea. The final goal of the study is to establish a reservoir optimal operation system that can search for an optimal policy compromising two conflicting objectives: downstream flood damage and dam safety (upstream flood damage). To reach that goal, the following items were studied: (1) validation of hydrological data using HYMOS; (2) establishment of a downstream flood routing model coupling a rainfall-runoff model and the SOBEK system for 1-D hydrodynamic flood routing; (3) replication of a flood damage estimation model by a neural network; (4) development of an integrated reservoir optimization module for an optimal operation policy.

  • PDF
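Any optimization module like the one in item (4) has to respect the reservoir continuity equation when simulating candidate release schedules. A minimal mass-balance sketch; the function name, time step, and storage cap are illustrative assumptions, not values from the paper:

```python
import numpy as np

def route_reservoir(storage0, inflow, release, dt=3600.0, s_max=2.75e9):
    """Mass-balance routing S[t+1] = S[t] + (I[t] - O[t]) * dt,
    with the release adjusted so storage stays within [0, s_max] (m^3)."""
    storage = [storage0]
    actual_release = []
    for i, o in zip(inflow, release):
        s_next = storage[-1] + (i - o) * dt
        if s_next > s_max:              # would over-top: spill the excess
            o += (s_next - s_max) / dt
            s_next = s_max
        elif s_next < 0.0:              # would empty: cut the release
            o += s_next / dt
            s_next = 0.0
        actual_release.append(o)
        storage.append(s_next)
    return np.array(storage), np.array(actual_release)
```

An operation policy would then choose `release` to trade downstream damage against storage risk, using a routine like this as the simulator inside the optimizer.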

AutoML and Artificial Neural Network Modeling of Process Dynamics of LNG Regasification Using Seawater (해수 이용 LNG 재기화 공정의 딥러닝과 AutoML을 이용한 동적모델링)

  • Shin, Yongbeom;Yoo, Sangwoo;Kwak, Dongho;Lee, Nagyeong;Shin, Dongil
    • Korean Chemical Engineering Research
    • /
    • v.59 no.2
    • /
    • pp.209-218
    • /
    • 2021
  • First-principles modeling studies have been performed to improve the heat exchange efficiency of the ORV and optimize its operation, but the heat transfer coefficient of the ORV varies irregularly with time and location, which makes the modeling process complex. In this study, FNN-, LSTM-, and AutoML-based modeling were performed to confirm the effectiveness of data-based modeling for complex systems. Prediction accuracy, measured by MSE, ranked LSTM > AutoML > FNN. The performance of AutoML, an automatic design method for machine learning models, was superior to the developed FNN, and the total time required for model development was 1/15 of that for LSTM, showing the potential of AutoML. Predictions of the NG and seawater discharge temperatures using LSTM and AutoML showed errors of less than 0.5 K. Using the predictive model, real-time optimization of the amount of LNG vaporized that can be processed by the ORV in winter was performed, confirming that up to 23.5% more LNG can be processed, and an ORV optimal operation guideline based on the developed dynamic prediction model was presented.
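Data-based dynamic modeling of the kind compared above (FNN vs. LSTM) starts by slicing the process time series into supervised input/output windows. A minimal sketch; the function name and window sizes are my own, not from the paper:

```python
import numpy as np

def make_windows(series, n_in, n_out=1):
    """Turn a 1-D time series into (X, y) pairs: length-n_in input
    windows and the n_out values that follow each window."""
    X, y = [], []
    for t in range(len(series) - n_in - n_out + 1):
        X.append(series[t:t + n_in])
        y.append(series[t + n_in:t + n_in + n_out])
    return np.array(X), np.array(y)
```

The resulting `X`/`y` arrays feed directly into an FNN, or, with an extra feature axis, an LSTM.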

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kwangjin, Kim;Chilwoo, Lee
    • Smart Media Journal
    • /
    • v.11 no.10
    • /
    • pp.65-75
    • /
    • 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of results such as text, images, and music. In this paper, we propose a method to preprocess audio data, using the Niko's MIDI Pack sound source files as the data set, and to generate music using a Bi-LSTM. Based on the generated root note, the hidden layers are composed of multiple layers to create new notes suitable for the musical composition, and an attention mechanism is applied to the output gate of the decoder to weight the factors that affect the data input from the encoder. Settings such as the loss function and optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses note pitches (generated by separating the treble and bass clefs), note lengths, rests, rest lengths, and chords to improve the efficiency and prediction of the MIDI deep learning process. The trained model generates sound that follows a musical scale, distinct from noise, and we aim to contribute to generating harmonically stable music.
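Before a Bi-LSTM of this kind can be trained, the note/rest tokens extracted from the MIDI files must be encoded numerically. A minimal one-hot sketch; the vocabulary and function name are illustrative, not from the paper:

```python
import numpy as np

def one_hot_sequences(tokens, vocab):
    """Encode a sequence of note/rest tokens as one-hot rows
    over a fixed vocabulary, for sequence-model input."""
    idx = {tok: i for i, tok in enumerate(vocab)}
    out = np.zeros((len(tokens), len(vocab)))
    out[np.arange(len(tokens)), [idx[t] for t in tokens]] = 1.0
    return out
```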

Apartment Price Prediction Using Deep Learning and Machine Learning (딥러닝과 머신러닝을 이용한 아파트 실거래가 예측)

  • Hakhyun Kim;Hwankyu Yoo;Hayoung Oh
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.2
    • /
    • pp.59-76
    • /
    • 2023
  • Since the COVID-19 era, the rise in apartment prices has been unconventional. In this uncertain real estate market, price prediction research is very important. In this paper, a model is created to predict the actual transaction prices of future apartments, after building a vast data set of 870,000 records from 2015 to 2020 through data collection and crawling on various real estate sites, gathering as many variables as possible. This study first solved the multicollinearity problem by removing and combining variables. After that, a total of five variable selection algorithms were used to extract meaningful independent variables: Forward Selection, Backward Elimination, Stepwise Selection, L1 Regularization, and Principal Component Analysis (PCA). In addition, four machine learning and deep learning algorithms, a deep neural network (DNN), XGBoost, CatBoost, and Linear Regression, were used to learn the model after hyperparameter optimization and to compare predictive power between models. In an additional experiment, the number of nodes and layers of the DNN was varied to find the most appropriate configuration. In conclusion, the best-performing model was used to predict the actual transaction prices of apartments in 2021 and compare them with the actual 2021 data. Through this, we are confident that machine learning and deep learning will help investors make the right decisions when purchasing homes in various economic situations.
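The multicollinearity step described above (removing redundant variables before selection) can be sketched with a simple pairwise-correlation filter. The threshold, function, and variable names are illustrative assumptions; the paper may instead have used VIF or manual variable combination:

```python
import numpy as np

def drop_collinear(X, names, threshold=0.9):
    """Greedily keep each column unless its absolute correlation with
    an already-kept column exceeds the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep, dropped = [], []
    for j in range(X.shape[1]):
        if any(corr[j, k] > threshold for k in keep):
            dropped.append(names[j])
        else:
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]
```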

Energy Efficiency Enhancement of Macro-Femto Cell Tier (매크로-펨토셀의 에너지 효율 향상)

  • Kim, Jeong-Su;Lee, Moon-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.1
    • /
    • pp.47-58
    • /
    • 2018
  • The heterogeneous cellular network (HCN) is most significant as a key technology for future fifth-generation (5G) wireless networks. The heterogeneous network considered consists of randomly deployed macrocell base stations (MBSs) overlaid with femtocell base stations (FBSs). Stochastic geometry has been shown to be a very powerful tool to model, analyze, and design networks with random topologies such as wireless ad hoc networks, sensor networks, and multi-tier cellular networks. HCNs can be energy-efficiently designed by deploying various BSs belonging to different networks, which has drawn significant attention as one of the technologies for future 5G wireless networks. In this paper, we propose switching the BSs in the cellular network off and on by introducing active/sleep modes, which reduces the interference and power consumption of the MBSs and FBSs on an individual basis as well as improving the energy efficiency of the cellular network. We formulate the minimization of the power consumption of the MBSs and FBSs, as well as an optimization problem to maximize the energy efficiency subject to throughput outage constraints, which can be solved using the Karush-Kuhn-Tucker (KKT) conditions according to the femto-tier BS density. We also formulate and compare the coverage probability and the energy efficiency in HCN scenarios with and without coordinated multi-point (CoMP) transmission to avoid coverage holes.
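Energy-efficiency maximization of the kind formulated above is a fractional program (rate divided by power). As a hedged single-link sketch, not the paper's multi-tier model, Dinkelbach's method solves max R(p)/(p + p_c) with R(p) = log2(1 + g p / n0); all parameter values here are illustrative assumptions:

```python
import numpy as np

def maximize_ee(g=1.0, n0=1.0, p_c=1.0, p_max=10.0, iters=30):
    """Dinkelbach iteration for max_p R(p) / (p + p_c),
    R(p) = log2(1 + g p / n0), p in [0, p_max]."""
    rate = lambda p: np.log2(1.0 + g * p / n0)
    lam, p = 0.0, p_max
    for _ in range(iters):
        # inner problem: max_p R(p) - lam*(p + p_c); the stationarity
        # (KKT) condition g / ((n0 + g p) ln 2) = lam gives p in closed form
        if lam > 0:
            p = np.clip(1.0 / (lam * np.log(2.0)) - n0 / g, 0.0, p_max)
        lam = rate(p) / (p + p_c)   # update the EE ratio
    return p, lam
```

For g = n0 = p_c = 1 the optimum is p* = e - 1, with EE* = log2(e)/e, which the iteration reaches in a few steps.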

A Novel Compressed Sensing Technique for Traffic Matrix Estimation of Software Defined Cloud Networks

  • Qazi, Sameer;Atif, Syed Muhammad;Kadri, Muhammad Bilal
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.4678-4702
    • /
    • 2018
  • Traffic matrix estimation has always caught the attention of researchers for better network management and future planning. With the advent of high traffic loads due to cloud computing platforms and Software Defined Networking-based tunable routing and traffic management algorithms on the Internet, it is more necessary than ever to be able to predict current and future traffic volumes on the network. For large networks, such origin-destination traffic prediction takes the form of a large under-constrained and under-determined system of equations with a dynamic measurement matrix. Previously, researchers relied on the assumption that the measurement (routing) matrix is stationary, due to which their schemes are not suitable for modern software defined networks. In this work, we present our Compressed Sensing with Dynamic Model Estimation (CS-DME) architecture, suitable for modern software defined networks. Our main contributions are: (1) we formulate an approach in which the measurement matrix in the compressed sensing scheme can be accurately and dynamically estimated through a reformulation of the problem based on traffic demands. (2) By inspection of its eigenspectrum on two real-world datasets, we show that a formulation using a dynamic measurement matrix based on instantaneous traffic demands may be used instead of a stationary binary routing matrix, which is more suitable to modern Software Defined Networks that are constantly evolving in terms of routing. (3) We also show that linking this compressed measurement matrix dynamically with the measured parameters leads to acceptable estimation of origin-destination (OD) traffic flows, with only marginally poorer results than other state-of-the-art schemes relying on fixed measurement matrices. (4) Furthermore, using this compressed reformulated problem, a new strategy for the selection of vantage points for the most efficient traffic matrix estimation is presented through a secondary compression technique based on a subset of link measurements. Experimental evaluation of the proposed technique using the real-world datasets Abilene and GEANT shows that it is practical for use in modern software defined networks. Further, the performance of the scheme is compared with recent state-of-the-art techniques proposed in the research literature.

The Optimization of Hybrid BCI Systems based on Blind Source Separation in Single Channel (단일 채널에서 블라인드 음원분리를 통한 하이브리드 BCI시스템 최적화)

  • Yang, Da-Lin;Nguyen, Trung-Hau;Kim, Jong-Jin;Chung, Wan-Young
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.19 no.1
    • /
    • pp.7-13
    • /
    • 2018
  • In the current study, we proposed an optimized brain-computer interface (BCI) which employs a blind source separation (BSS) approach to remove noise, so that the motor imagery (MI) signal and the steady-state visual evoked potential (SSVEP) signal are easier to detect due to the enhanced signal-to-noise ratio (SNR). Moreover, combining MI and SSVEP increases the number of commands the BCI can generate. To reduce the computational time and bring the BCI closer to real-world applications, the current system uses a single-channel EEG signal. In addition, a convolutional neural network (CNN) was used as the multi-class classification model. We evaluated the accuracy of the BCI with and without BSS. Results show that the accuracy with BSS is 16.15 ± 5.12% higher than without it. Overall, the proposed BCI system demonstrates the feasibility of multi-dimensional control applications with comparable accuracy.
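The abstract does not specify which single-channel BSS algorithm was used, so as a simplified stand-in for separating an SSVEP-band component from a mixed single-channel signal, here is an ideal FFT band mask (frequencies and function name are my assumptions):

```python
import numpy as np

def isolate_band(signal, fs, f_lo, f_hi):
    """Ideal band-pass: zero FFT bins outside [f_lo, f_hi] Hz, then invert."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))
```

For example, masking to 10–14 Hz on a 12 Hz + 40 Hz mixture leaves only the 12 Hz (SSVEP-like) component.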

On Developing the Intelligent Control System of a Robot Manipulator by Fusion of Fuzzy Logic and Neural Network (퍼지논리와 신경망 융합에 의한 로보트매니퓰레이터의 지능형제어 시스템 개발)

  • 김용호;전홍태
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.5 no.1
    • /
    • pp.52-64
    • /
    • 1995
  • A robot manipulator is a highly nonlinear, time-varying system, and a great deal of control theory has therefore been applied to it. Robot manipulator control has two facets: path planning and path tracking. In this paper, we address path tracking and, for this purpose, propose an intelligent controller that combines fuzzy logic and a neural network. Fuzzy logic provides an inference morphology that enables approximate human reasoning to be applied to knowledge-based systems, and also provides mathematical strength to capture the uncertainties associated with human cognitive processes such as thinking and reasoning. Based on this, the fuzzy logic controller (FLC) provides a means of converting a linguistic control strategy based on expert knowledge into an automatic control strategy. But constructing the rule base for a nonlinear, time-varying system such as a robot becomes much more complicated because of model uncertainty and parameter variations. To cope with these problems, a method for auto-tuning the fuzzy rule base is required. In this paper, a GA-based fuzzy-neural control system is proposed, combining fuzzy-neural control theory with the genetic algorithm (GA), which is known to be very effective for optimization problems. The effectiveness of the proposed control system is demonstrated by computer simulations using a two-degree-of-freedom robot manipulator.

  • PDF
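The GA used to auto-tune the fuzzy rule base is not detailed in the abstract; as an illustrative stand-in, a minimal real-coded GA of the kind commonly used for such parameter tuning (all operators, rates, and sizes are my assumptions):

```python
import numpy as np

def genetic_optimize(fitness, dim, pop_size=40, gens=60, seed=0):
    """Minimal real-coded GA: elitism, tournament selection,
    blend crossover, Gaussian mutation. Maximizes `fitness` over R^dim."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        new = [pop[np.argmax(fit)].copy()]        # keep the elite unchanged
        while len(new) < pop_size:
            i, j = rng.choice(pop_size, 2), rng.choice(pop_size, 2)
            p1 = pop[i[np.argmax(fit[i])]]        # size-2 tournaments
            p2 = pop[j[np.argmax(fit[j])]]
            w = rng.random(dim)                   # blend crossover
            child = w * p1 + (1.0 - w) * p2
            child += rng.normal(0.0, 0.1, dim)    # Gaussian mutation
            new.append(child)
        pop = np.array(new)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)]
```

In a fuzzy-neural controller the chromosome would encode rule-base parameters (e.g. membership-function centers and widths), with fitness derived from tracking error.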

A BPM Activity-Performer Correspondence Analysis Method (BPM 기반의 업무-수행자 대응분석 기법)

  • Ahn, Hyun;Park, Chungun;Kim, Kwanghoon
    • Journal of Internet Computing and Services
    • /
    • v.14 no.4
    • /
    • pp.63-72
    • /
    • 2013
  • Business Process Intelligence (BPI) is one of the emerging technologies in the knowledge discovery and analysis area. BPI deals with a series of techniques, from discovering knowledge to analyzing the discovered knowledge, in BPM-supported organizations. By means of BPI technology, we are able to provide the full functionality of control, monitoring, prediction, and optimization of process-supported organizational knowledge. In particular, we focus on the BPM activity-performer affiliation networking knowledge, which represents the affiliated relationships between performers and activities in enacting a specific business process model. In this paper we devise a statistical analysis method to be applied to this knowledge, dubbed the activity-performer correspondence analysis method. The devised method consists of a series of pipelined phases, from the generation of a bipartite matrix to the visualization of the analysis result, and through it we can analyze the degree of correspondence between a group of performers and a group of activities involved in a business process model or a package of business process models. In conclusion, we expect the method to improve the effectiveness and efficiency of human resource allocation, and the correlation between business activities and performers, in planning and designing business process models and packages for BPM-supported organizations.
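Classical correspondence analysis of an affiliation (bipartite) matrix reduces to an SVD of the standardized residuals. A minimal sketch of that pipeline stage, assuming textbook CA rather than the paper's exact phases:

```python
import numpy as np

def correspondence_analysis(N):
    """Classical CA of a contingency/affiliation matrix N: SVD of the
    standardized residuals S = (P - r c^T) / sqrt(r c^T), P = N / N.sum()."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)       # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * sv) / np.sqrt(r)[:, None]     # principal coordinates
    col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
    return row_coords, col_coords, sv
```

Here rows would be performers and columns activities; plotting the first two coordinate axes visualizes which performer groups correspond to which activity groups.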

Deep Neural Network Analysis System by Visualizing Accumulated Weight Changes (누적 가중치 변화의 시각화를 통한 심층 신경망 분석시스템)

  • Taelin Yang;Jinho Park
    • Journal of the Korea Computer Graphics Society
    • /
    • v.29 no.3
    • /
    • pp.85-92
    • /
    • 2023
  • Recently, interest in artificial intelligence has increased due to developments in fields such as ChatGPT and self-driving cars. However, there are still many unknown elements in the training process of artificial intelligence, so optimizing a model requires more time and effort than it should. Therefore, there is a need for a tool or methodology that can analyze the weight changes during the training process and help in understanding them. In this research, I propose a visualization system that helps people understand accumulated weight changes. The system computes the weight changes for each training period, accumulates them, and stores the accumulated changes to plot them in 3D space. This research allows us to explore different aspects of the learning process, such as understanding how the model gets trained, and provides an indicator of which hyperparameters should be changed for better performance. These attempts are expected to shed light on the still poorly understood learning process of artificial intelligence and to contribute to the development and application of artificial intelligence models.
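The core quantity such a system visualizes, cumulative weight change per training period, can be sketched as follows (the function name and snapshot format are my assumptions, not the paper's implementation):

```python
import numpy as np

def accumulated_changes(snapshots):
    """Given weight snapshots W_0..W_T of identical shape (one per epoch),
    return the cumulative absolute change of each weight after each epoch."""
    snaps = np.stack(snapshots)                  # (T+1, *weight_shape)
    deltas = np.abs(np.diff(snaps, axis=0))      # per-epoch change
    return np.cumsum(deltas, axis=0)             # accumulated over epochs
```

Each slice of the result can then be plotted in 3D (layer x unit x accumulated change) to show where the network changed most during training.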