• Title/Summary/Keyword: Stochastic systems

Energy Efficiency Enhancement of Macro-Femto Cell Tier (매크로-펨토셀의 에너지 효율 향상)

  • Kim, Jeong-Su; Lee, Moon-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.18 no.1, pp.47-58, 2018
  • The heterogeneous cellular network (HCN) is regarded as a key technology for future fifth-generation (5G) wireless networks. The heterogeneous network considered consists of randomly deployed macrocell base stations (MBSs) overlaid with femtocell base stations (FBSs). Stochastic geometry has proven to be a very powerful tool to model, analyze, and design networks with random topologies, such as wireless ad hoc networks, sensor networks, and multi-tier cellular networks. HCNs can be designed energy-efficiently by deploying various BSs belonging to different networks, which has drawn significant attention as one of the technologies for future 5G wireless networks. In this paper, we propose off/on switching systems that enable the BSs in the cellular network to consume power efficiently by introducing active/sleep modes, which reduces the interference and power consumption of the MBSs and FBSs on an individual basis as well as improving the energy efficiency of the cellular network. We formulate the minimization of the power consumption for the MBSs and FBSs, as well as an optimization problem that maximizes the energy efficiency subject to throughput outage constraints, which can be solved using the Karush-Kuhn-Tucker (KKT) conditions according to the femto-tier BS density. We also formulate and compare the coverage probability and the energy efficiency in HCN scenarios with and without coordinated multi-point (CoMP) transmission to avoid coverage holes.
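The stochastic-geometry coverage analysis this abstract refers to can be illustrated with a small Monte Carlo sketch (not code from the cited paper): base stations are drawn as a Poisson point process, the user at the origin is served by its nearest BS under Rayleigh fading, and P(SINR > threshold) is estimated empirically. All parameter values below are illustrative assumptions.

```python
import math
import random

def sample_poisson(rng, mean):
    # Knuth's method; adequate for the modest means used here.
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def coverage_probability(density=1e-5, side=2000.0, alpha=4.0,
                         sinr_threshold_db=0.0, trials=2000, seed=7):
    """Estimate P(SINR > threshold) for a user at the origin when base
    stations form a Poisson point process of `density` BSs/m^2 in a
    side x side window, with Rayleigh fading and path-loss exponent
    `alpha` (interference-limited: thermal noise neglected)."""
    rng = random.Random(seed)
    threshold = 10 ** (sinr_threshold_db / 10.0)
    mean_bs = density * side * side
    covered = 0
    for _ in range(trials):
        n = sample_poisson(rng, mean_bs)
        if n == 0:
            continue
        # Distances from the user to each BS (clamped to 1 m to avoid
        # the path-loss singularity at the origin).
        dists = sorted(max(math.hypot(rng.uniform(-side / 2, side / 2),
                                      rng.uniform(-side / 2, side / 2)), 1.0)
                       for _ in range(n))
        signal = rng.expovariate(1.0) * dists[0] ** (-alpha)  # nearest BS serves
        interference = sum(rng.expovariate(1.0) * r ** (-alpha) for r in dists[1:])
        if signal > threshold * interference:
            covered += 1
    return covered / trials
```

Extending this sketch to active/sleep BS modes amounts to thinning the point process before computing interference, which is how switching off BSs reduces interference in such models.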

Effect of Nozzle Shape and Injection Pressure on Performance of Hybrid Nozzle (노즐 형상 및 분사 압력이 하이브리드 노즐 성능에 미치는 영향 연구)

  • Ro, Kyoung-Chul
    • Journal of the Korea Academia-Industrial cooperation Society, v.18 no.12, pp.74-79, 2017
  • The fire-extinguishing performance of hybrid nozzle systems is improved by injecting an extinguishing agent concentrically into the target site, with water mist used as a water curtain to confine the droplets of the agent. In this study, we numerically investigated the effect of the foundation angle and injection pressure on the performance of a hybrid nozzle by evaluating the mean radius of the volume fractions of the agent and water mist. An experiment involving a water mist nozzle was carried out to validate the numerical method, and the droplet behaviors, e.g., stochastic collision, coalescence, and breakup, were then calculated with two-way-interaction Discrete Particle Modeling (DPM) in the steady state for the hybrid nozzle system. The mean radius of the water mist increased by about 40%, whereas that of the agent decreased by about 21%, when the injection pressure was increased from 30 bar to 60 bar. In addition, the mean radius of the agent increased by about 24% as the foundation angle of the hybrid nozzle head increased from 30° to 60°. As a result, it can be inferred that the injection angle and pressure are important factors in hybrid water mist nozzle design.

Evaluation of Subsystem Importance Index considering Effective Supply in Water Distribution Systems (유효유량 개념을 도입한 상수관망 Subsystem 별 중요도 산정)

  • Seo, Min-Yeol; Yoo, Do-Guen; Kim, Joong-Hoon; Jun, Hwan-Don; Chung, Gun-Hui
    • Journal of the Korean Society of Hazard Mitigation, v.9 no.6, pp.133-141, 2009
  • The main objective of a water distribution system is to supply users with enough water at adequate pressure. Hydraulic analysis of a water distribution system can be divided into demand-driven analysis (DDA) and pressure-driven analysis (PDA). Demand-driven analysis can give unrealistic results, such as negative pressures at nodes, because of the assumption that nodal demands are always satisfied. Pressure-driven analysis, often used as an alternative, requires a head-outflow relationship (HOR) to estimate the amount of water that can be supplied at a certain pressure level, but a lack of data makes this relationship difficult to develop. In this study, the effective supply, defined as the amount of water that can be supplied while meeting the nodal pressure requirement, is proposed to estimate the serviceability and user convenience of the network. The effective supply is used to calculate the Subsystem Importance Index (SII), which indicates the effect of isolating a subsystem on the entire network. Harmony Search, a stochastic search algorithm, is linked with EPANET to maximize the effective supply. The proposed approach is applied to example networks to evaluate the capability of the network when a subsystem is isolated, which can also be utilized to prioritize rehabilitation or to evaluate the reliability of the network.
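Harmony Search, the stochastic optimizer the abstract links with EPANET, can be sketched in a few lines. Here a toy quadratic objective stands in for the EPANET-based effective-supply evaluation, and the parameter names and values (hms, hmcr, par, bw) follow common textbook descriptions of the algorithm rather than this particular paper.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iterations=2000, seed=1):
    """Maximise `objective` over box `bounds` with a textbook Harmony
    Search: hms = harmony memory size, hmcr = memory consideration rate,
    par = pitch adjustment rate, bw = pitch bandwidth (fraction of range)."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        candidate = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                    # draw from memory
                v = memory[rng.randrange(hms)][j]
                if rng.random() < par:                 # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi - lo)
            else:                                      # random re-selection
                v = rng.uniform(lo, hi)
            candidate.append(min(max(v, lo), hi))
        score = objective(candidate)
        worst = min(range(hms), key=scores.__getitem__)
        if score > scores[worst]:                      # replace worst harmony
            memory[worst], scores[worst] = candidate, score
    best = max(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy stand-in for the EPANET-based effective-supply objective:
# maximum at (2, -1) with objective value 0.
solution, value = harmony_search(lambda v: -((v[0] - 2) ** 2 + (v[1] + 1) ** 2),
                                 [(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper's setting, the objective call would invoke an EPANET simulation and return the effective supply of the candidate configuration.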

Process Development for Optimizing Sensor Placement Using 3D Information by LiDAR (LiDAR자료의 3차원 정보를 이용한 최적 Sensor 위치 선정방법론 개발)

  • Yu, Han-Seo; Lee, Woo-Kyun; Choi, Sung-Ho; Kwak, Han-Bin; Kwak, Doo-Ahn
    • Journal of Korean Society for Geospatial Information Science, v.18 no.2, pp.3-12, 2010
  • In previous studies, digital measurement systems and analysis algorithms were developed using related techniques such as aerial photograph detection and high-resolution satellite image processing. However, those studies were limited to 2-dimensional geo-processing, so 3-dimensional spatial information and coordinate systems are needed for higher accuracy in recognizing and locating geo-features. The objective of this study was to develop a stochastic algorithm for optimal sensor placement using a 3-dimensional spatial analysis method. The 3-dimensional information from LiDAR was applied in the sensor-field algorithm based on 2- and/or 3-dimensional gridded points. The study comprised three case studies using the optimal sensor placement algorithms: the first was based on 2-dimensional space without obstacles (2D, no obstacles), the second on 2-dimensional space with obstacles (2D, obstacles), and the third on 3-dimensional space with obstacles (3D, obstacles). Finally, this study suggested a methodology for optimal sensor placement, especially for ground-installed sensors, using LiDAR data, and showed the applicability of the algorithm to information collection using sensors.
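The flavor of the gridded placement problem with obstacles can be conveyed by a minimal 2-D sketch: a greedy heuristic (standing in for the paper's stochastic algorithm) places sensors on a grid so that each new sensor covers the most cells within range and line of sight. The grid size, sensing radius, and obstacle layout below are invented for illustration.

```python
import math
from itertools import product

def line_clear(a, b, obstacles, steps=20):
    """Crude line-of-sight test: sample the segment a-b and report it
    blocked if any sample rounds to an obstacle cell."""
    for t in range(1, steps):
        x = a[0] + (b[0] - a[0]) * t / steps
        y = a[1] + (b[1] - a[1]) * t / steps
        if (round(x), round(y)) in obstacles:
            return False
    return True

def greedy_placement(size, obstacles, radius, n_sensors):
    """Greedily place sensors on a size x size grid so that each new
    sensor covers the most not-yet-covered cells (within `radius` and
    with clear line of sight). Returns sensor sites and coverage ratio."""
    cells = set(product(range(size), repeat=2)) - obstacles

    def covers(s):
        return {c for c in cells
                if math.dist(s, c) <= radius and line_clear(s, c, obstacles)}

    covered, sensors = set(), []
    for _ in range(n_sensors):
        best = max(cells, key=lambda s: len(covers(s) - covered))
        sensors.append(best)
        covered |= covers(best)
    return sensors, len(covered) / len(cells)

# Demo: an 8x8 grid with a short vertical wall of obstacle cells.
obstacles = {(3, y) for y in range(2, 6)}
sensors, coverage = greedy_placement(8, obstacles, radius=5.0, n_sensors=2)
```

The 3-D case in the paper would replace the grid cells with LiDAR-derived 3-dimensional gridded points and extend the line-of-sight test accordingly.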

Research Trends in Record Management Using Unstructured Text Data Analysis (비정형 텍스트 데이터 분석을 활용한 기록관리 분야 연구동향)

  • Deokyong Hong; Junseok Heo
    • Journal of Korean Society of Archives and Records Management, v.23 no.4, pp.73-89, 2023
  • This study aims to analyze the frequency of keywords used in Korean abstracts, which are unstructured text data in the domestic record management research field, using text mining techniques, and to identify domestic record management research trends through distance analysis between keywords. To this end, 77,578 keywords from 1,157 articles were visualized; the articles were extracted from 7 journals (28 types) retrieved by major category (interdisciplinary studies) and middle category (library and information science) from the journal statistics (registered and candidate journals) of the Korean Citation Index (KCI). t-Distributed Stochastic Neighbor Embedding (t-SNE) and Scattertext analyses based on Word2vec were performed. As a result of the analysis, first, it was confirmed that keywords such as "record management" (889 occurrences), "analysis" (888), "archive" (742), "record" (562), and "utilization" (449) were treated as significant topics by researchers. Second, the Word2vec analysis generated vector representations of the keywords, and similarity distances were investigated and visualized using t-SNE and Scattertext. In the visualization, the record management research area divided into two groups: keywords such as "archiving," "national record management," "standardization," "official documents," and "record management systems" occurred frequently in the first group (past), whereas keywords such as "community," "data," "record information service," "online," and "digital archives" in the second group (current) were garnering substantial focus.
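The first step of the pipeline described here, counting keyword frequencies in unstructured abstract text, can be sketched as follows (a minimal English-only tokeniser on a tiny invented corpus, not the KCI data; the Word2vec embedding and t-SNE projection steps would follow on the resulting vocabulary):

```python
import re
from collections import Counter

def keyword_frequencies(texts, top_n=5):
    """Count content-word frequencies across a list of abstracts;
    the stopword list here is deliberately minimal."""
    stopwords = {"the", "of", "and", "in", "a", "an", "to", "is", "for", "on"}
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in stopwords:
                counts[token] += 1
    return counts.most_common(top_n)

# Tiny illustrative corpus (not the corpus used in the paper).
docs = ["Record management and archives in record management systems",
        "Digital archives and record information services"]
top = keyword_frequencies(docs, top_n=3)
```

For Korean abstracts, the regex tokeniser would be replaced by a morphological analyzer so that particles and endings do not inflate the counts.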

Climate Change Impact on Nonpoint Source Pollution in a Rural Small Watershed (기후변화에 따른 농촌 소유역에서의 비점오염 영향 분석)

  • Hwang, Sye-Woon; Jang, Tae-Il; Park, Seung-Woo
    • Korean Journal of Agricultural and Forest Meteorology, v.8 no.4, pp.209-221, 2006
  • The purpose of this study is to analyze the effects of climate change on nonpoint source pollution in a small watershed using a mid-range model. The study area is a rural basin covering 384 ha, composed of 50% forest and 19% paddy. Hydrologic and water quality data were monitored from 1996 to 2004, and the feasibility of the GWLF (Generalized Watershed Loading Function) model for the small agricultural watershed was examined using the data obtained from the study area. As one of the studies on climate change, KEI (Korea Environment Institute) has presented the monthly variation ratio of rainfall in Korea based on a climate change scenario for rainfall and temperature. These values, together with forty-one years of observed daily rainfall data (1964 to 2004) at Suwon, were used to generate daily weather data with the stochastic weather generator model (WGEN). Stream runoff was calibrated against the 1996~1999 data and verified against the 2002~2004 data, yielding coefficients of determination (R²) of 0.70~0.91 and root mean square errors (RMSE) of 2.11~5.71. Water quality simulations for SS, TN, and TP showed R² values of 0.58, 0.47, and 0.62, respectively. The results for the impact of climate change on nonpoint source pollution show that, if the watershed factors remain as they are at present, TN and TP pollutant loads would be expected to increase remarkably in the rainy season over the next fifty years.
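The WGEN-style generation mentioned above rests on a simple core mechanism: a two-state Markov chain decides wet/dry occurrence, and a skewed distribution draws wet-day amounts. The sketch below uses an exponential depth distribution, and all transition probabilities and the mean depth are illustrative assumptions, not calibrated Suwon parameters.

```python
import random

def generate_daily_rainfall(days, p_wet_after_dry=0.25, p_wet_after_wet=0.6,
                            mean_wet_depth_mm=8.0, seed=3):
    """Two-state (wet/dry) Markov chain for rainfall occurrence with
    exponentially distributed wet-day depths, the core mechanism of
    WGEN-style stochastic weather generators."""
    rng = random.Random(seed)
    wet = False
    series = []
    for _ in range(days):
        # Today's wet probability depends only on yesterday's state.
        p_wet = p_wet_after_wet if wet else p_wet_after_dry
        wet = rng.random() < p_wet
        series.append(rng.expovariate(1.0 / mean_wet_depth_mm) if wet else 0.0)
    return series

rain = generate_daily_rainfall(365)  # one synthetic year of daily depths (mm)
```

A full WGEN setup fits these parameters month by month from the observed record and scales them with scenario-based variation ratios, which is how the KEI climate change ratios enter the generation.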

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.167-181, 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNNs (Convolutional Neural Networks), known as an effective solution for recognizing and classifying images or voices, have been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, this study proposes applying a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNNs have strength in interpreting images. Thus, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. Each graph is drawn as a 40 × 40 pixel image, with the graph of each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. The CNN classifier is then trained on the images of the training dataset in the final step.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer and a 2×2 max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was the Softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
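Of the twelve technical indicators listed, Stochastic %K and %D have simple closed forms and can be computed directly; the sketch below uses the standard definitions (it is not code from the paper), with a synthetic rising price series as a demo.

```python
def stochastic_oscillator(highs, lows, closes, k_period=14, d_period=3):
    """Stochastic %K and %D:
    %K = 100 * (close - lowest low) / (highest high - lowest low)
    over the trailing `k_period` bars; %D is the `d_period`-bar simple
    moving average of %K."""
    k = []
    for i in range(k_period - 1, len(closes)):
        lo = min(lows[i - k_period + 1:i + 1])
        hi = max(highs[i - k_period + 1:i + 1])
        # Flat windows (hi == lo) are given a neutral 50 to avoid a zero divide.
        k.append(100.0 * (closes[i] - lo) / (hi - lo) if hi > lo else 50.0)
    d = [sum(k[j - d_period + 1:j + 1]) / d_period
         for j in range(d_period - 1, len(k))]
    return k, d

# Demo on a steadily rising synthetic series (high/low = close +/- 1).
closes = [float(c) for c in range(1, 31)]
highs = [c + 1.0 for c in closes]
lows = [c - 1.0 for c in closes]
k, d = stochastic_oscillator(highs, lows, closes)
```

In the CNN-FG pipeline, series like these would be rendered into the colored 40 × 40 graph images rather than fed to the classifier as raw numbers.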