• Title/Summary/Keyword: random network


Performance Analysis of Noncoherent OOK UWB Transceiver for LR-WPAN (저속 WPAN용 비동기 OOK 방식 UWB 송수신기 성능 분석)

  • Ki Myoungoh;Choi Sungsoo;Oh Hui-Myoung;Kim Kwan-Ho
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.11A
    • /
    • pp.1027-1034
    • /
    • 2005
  • IEEE 802.15.4a, launched to define a PHY layer providing high-precision ranging/positioning together with low-rate data communication, requires a simple, low-power transceiver architecture. To satisfy these requirements, a simple noncoherent on-off keying (OOK) UWB transceiver with parallel energy window banks (PEWB), which provide a high-precision signal-processing interface, is proposed. Robustness in multipath fading channel environments is obtained through pulse and bit repetition. To analyze the bit error rate (BER) of the proposed system, the receiver noise is modeled with the commonly used chi-square distribution. A BER of $10^{-5}$ is achieved on the line-of-sight (LOS) residential channel with an integration time of 32 ns and a signal-to-noise ratio (SNR) of 15.3 dB; the non-line-of-sight (NLOS) outdoor channel requires an integration time of 72 ns and an SNR of 16.2 dB. The ratio of integrated energy to total received energy (IRR) giving the best BER performance is about $86\%$.
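
The noise analysis above treats the energy-detector output as (noncentral) chi-square. A minimal Monte Carlo sketch of such a noncoherent OOK energy detector follows; the parameters and threshold rule are illustrative, not the paper's PEWB receiver or its UWB channel models:

```python
import numpy as np

rng = np.random.default_rng(0)

def ook_energy_ber(snr_db, n_bits=50000, m=16):
    """Monte Carlo BER of a noncoherent OOK energy detector.

    m unit-variance noise samples are integrated per bit, so the decision
    statistic is chi-square for a 0 and noncentral chi-square for a 1,
    matching the chi-square noise model described above.
    """
    snr = 10 ** (snr_db / 10)        # per-bit SNR: signal energy / noise power
    a = np.sqrt(snr / m)             # per-sample signal amplitude when bit = 1
    bits = rng.integers(0, 2, n_bits)
    r = rng.normal(0.0, 1.0, (n_bits, m)) + a * bits[:, None]
    t = (r ** 2).sum(axis=1)         # integrated energy per bit
    thr = m + snr / 2.0              # midpoint of the two conditional means
    return float(np.mean((t > thr) != (bits == 1)))
```

Lengthening the integration window `m` at fixed SNR trades accumulated noise against captured signal energy, mirroring the integration-time trade-off in the abstract.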

Energy Saving Characteristics of OSPF Routing Based on Energy Profiles (Energy Profile에 기반한 OSPF 라우팅 방식의 에너지 절약 특성)

  • Seo, Yusik;Han, Chimoon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.7
    • /
    • pp.1296-1306
    • /
    • 2015
  • Energy saving in IP networks has recently been studied with various methods at many research institutes. This paper proposes an energy-saving method for IP networks whose routers have heterogeneous energy profiles and analyzes its energy-saving characteristics in detail. Specifically, it proposes an energy-profile-based OSPF routing method in which the OSPF metric carries a selectable weight between path cost and energy consumption. We analyze the energy-saving effect of various weight settings chosen to minimize energy consumption. The results show that an energy-saving efficiency of about 67% can be obtained at an ingress input load of ${\rho}=0.5$ with random energy profiles. Although routing along minimum-energy paths slightly increases the hop count, the increase is limited to 1.4 hops on average. We confirm that the energy profile of a core router has a larger effect on energy saving than that of an edge router, and that the proposed method has excellent energy-saving characteristics in IP networks.
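
Selectable weighting between path cost and link energy in a routing metric can be sketched with a plain Dijkstra search. The graph format and the `energy_aware_path` helper below are illustrative assumptions, not the paper's OSPF implementation:

```python
import heapq

def energy_aware_path(graph, src, dst, w=0.5):
    """Dijkstra over a combined metric: (1-w)*OSPF cost + w*link energy.

    graph: {node: [(neighbor, ospf_cost, energy), ...]} (illustrative format).
    w = 0 reduces to plain shortest-path routing; w = 1 routes purely on
    energy consumption.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, cost, energy in graph.get(u, []):
            nd = d + (1 - w) * cost + w * energy
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from the destination.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], dist[dst]
```

Sweeping `w` between 0 and 1 reproduces the trade-off in the abstract: minimum-energy paths may take slightly more hops than minimum-cost paths.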

Preliminary Research for Korean Twitter User Analysis Focusing on Extreme Heavy User's Twitter Log (국내 트위터 유저 분석을 위한 예비연구 )

  • Jung, Hye-Lan;Ji, Sook-Young;Lee, Joong-Seek
    • Journal of the HCI Society of Korea
    • /
    • v.5 no.1
    • /
    • pp.37-43
    • /
    • 2010
  • Twitter has grown continuously since October 2006. Not only have the numbers of users and messages increased, but a new form of social networking called 'micro blogging' has diffused. In Korea, services such as 'me2day' have already been introduced, and improved internet accessibility on mobile devices is expected to expand micro blogging further. This research therefore examines the new medium by collecting and analyzing the Twitter logs of Korean users. We were especially curious about the extreme heavy users who adopted Twitter despite the linguistic and cultural barriers of a foreign service: who they are, and why and how they use the micro blog. First, we reviewed the general distribution of followers and messages over a set of random samples. Using the Lorenz curve, we found a strong imbalance among users and, based on this, identified an extreme heavy user group, whose logs we then analyzed in detail. These users used multiple mobile and desktop Twitter clients. Their usage pattern resembled general internet usage time but filled their "micro" moments. They used Twitter not only to spread important information, special events, and emotions, but also as a habitual chatting tool for ordinary personal messages, much like SMS and IM services: 68% of all messages were ordinary personal chat. Moreover, since 24% of the messages were retweets, virtually connected 'people' and 'relationships' acted as the dominant trigger of their articulation.
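
The Lorenz-curve step used above to expose the user imbalance can be sketched as a generic Lorenz/Gini computation over per-user message counts (not the authors' exact procedure):

```python
import numpy as np

def lorenz_gini(counts):
    """Lorenz curve and Gini coefficient of per-user message counts.

    Returns the cumulative-share points (sorted ascending, prefixed with 0)
    and the Gini coefficient: 0 = perfect equality, near 1 = a few users
    produce almost all messages.
    """
    x = np.sort(np.asarray(counts, dtype=float))
    n = len(x)
    lorenz = np.concatenate(([0.0], np.cumsum(x) / x.sum()))
    # Trapezoidal area under the Lorenz curve; Gini = 1 - 2 * area.
    area = (lorenz[:-1] + lorenz[1:]).sum() / (2.0 * n)
    return lorenz, 1.0 - 2.0 * area
```

A high Gini coefficient over the sampled accounts is exactly the imbalance that motivated extracting the extreme heavy user group.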


Optimal Release Problems based on a Stochastic Differential Equation Model Under the Distributed Software Development Environments (분산 소프트웨어 개발환경에 대한 확률 미분 방정식 모델을 이용한 최적 배포 문제)

  • Lee Jae-Ki;Nam Sang-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.7A
    • /
    • pp.649-658
    • /
    • 2006
  • Recently, new approaches to software development have been applied in various forms: client-server systems, web programming, object-oriented concepts, and distributed development over network environments. Distributed development technology and object-oriented methodology in particular have spread, improving software quality and productivity and reducing development effort. We therefore consider distributed software development across many workstations. This paper discusses an optimal release problem based on a stochastic differential equation (SDE) model for distributed software development environments. In the past, software reliability was estimated only roughly from the development process and test progress. Here we determine the optimal release time with two methods: first, a software reliability growth model (SRGM) with an error-counting model of the fault detection phase, formulated as a nonhomogeneous Poisson process (NHPP); second, fault detection modeled as a continuous random variable through an SDE. The optimal release time is chosen to minimize total cost, computed from the failure and debugging data collected during the system test and operational phases. We also discuss the reliability limit implied by the probability distribution of total software cost.
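
The cost-minimizing release time for the NHPP branch can be illustrated with the classic exponential SRGM mean value function $m(t) = a(1-e^{-bt})$; all cost constants below are hypothetical, and the paper's SDE formulation is not reproduced:

```python
import math

def total_cost(T, a=100, b=0.1, c1=1.0, c2=5.0, c3=0.5, t_lc=200.0):
    """Expected total cost of releasing at time T under an exponential
    NHPP SRGM with mean value function m(t) = a * (1 - exp(-b t)).

    c1: cost per fault fixed during testing; c2: cost per fault fixed
    after release (c2 > c1); c3: testing cost per unit time;
    t_lc: software life-cycle length. All values are illustrative.
    """
    m = lambda t: a * (1.0 - math.exp(-b * t))
    return c1 * m(T) + c2 * (m(t_lc) - m(T)) + c3 * T

def optimal_release(step=0.1, horizon=200.0, **kw):
    """Grid search for the release time minimizing total_cost."""
    ts = [i * step for i in range(int(horizon / step) + 1)]
    return min(ts, key=lambda t: total_cost(t, **kw))
```

With these constants the minimum falls where the marginal testing cost `c3` equals the marginal saving `(c2 - c1) * a * b * exp(-b T)` from fixing faults before release.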

Modelling of the noise-added saturated steam table using neural networks (노이즈가 포함된 포화증기표의 신경회로망 모델링)

  • Lee, Tae-Hwan;Park, Jin-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.2
    • /
    • pp.413-418
    • /
    • 2011
  • The thermodynamic properties in the steam table are obtained by measurement or by approximate calculation under appropriate assumptions, so they inherently carry basic measurement errors; for use in numerical analysis they must also be modeled through function approximation. To mimic measurement errors, random numbers were generated, scaled to appropriate magnitudes, and added to the original thermodynamic properties. Both neural networks and quadratic spline interpolation were then used to approximate these noise-added properties of saturated water as functions of pressure and temperature. The spline method gives much smaller relative errors than the neural network at both ends of the data. Excluding the ends, the relative error of the neural network is generally within ${\pm}0.2%$, while that of the spline method lies within ${\pm}0.5$~1.5%. Thus the neural network gives smaller relative errors than quadratic spline interpolation within the range of use, confirming that it traces the original values better and is the more appropriate method for modeling the saturated steam table.
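
The noise-adding step can be sketched as follows, using an illustrative toy saturation curve rather than real steam-table data:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_measurement_noise(values, rel_err=0.002):
    """Perturb tabulated properties with zero-mean Gaussian noise scaled
    to each value's magnitude, mimicking basic measurement error."""
    values = np.asarray(values, dtype=float)
    return values * (1.0 + rng.normal(0.0, rel_err, values.shape))

# Toy Clausius-Clapeyron-shaped saturation curve (illustrative only,
# not actual steam-table values).
T = np.linspace(300.0, 600.0, 31)
p_clean = np.exp(20.0 - 5000.0 / T)
p_noisy = add_measurement_noise(p_clean)
```

The noise-added table `p_noisy` is what the two approximators (neural network and quadratic spline) would then be fitted against, with relative error to the clean values as the yardstick.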

Implementing an Adaptive Neuro-Fuzzy Model for Emotion Prediction Based on Heart Rate Variability(HRV) (심박변이도를 이용한 적응적 뉴로 퍼지 감정예측 모형에 관한 연구)

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of Digital Convergence
    • /
    • v.17 no.1
    • /
    • pp.239-247
    • /
    • 2019
  • Accurate emotion prediction is an important issue for patient-centered medical device development and emotion-related psychology. Although there have been many studies on emotion prediction, none has applied a neuro-fuzzy approach to heart rate variability (HRV). We propose ANFEP (Adaptive Neuro-Fuzzy System for Emotion Prediction) based on HRV. ANFEP builds its core functions on ANFIS (Adaptive Neuro-Fuzzy Inference System), which integrates neural networks with fuzzy systems to train predictive models. To validate the proposed model, 50 participants joined an experiment in which their HRV was measured and used as input to ANFEP. The model with STDRR and RMSSD as inputs and two membership functions per input variable showed the best results. Applied to the HRV metrics, ANFEP proved significantly more robust than benchmark methods such as linear regression, support vector regression, neural networks, and random forests. The results show that reliable emotion prediction is possible with fewer inputs, and point toward a more accurate and reliable emotion recognition system.
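
A forward pass of the kind of first-order Sugeno ANFIS structure that ANFEP builds on, with two inputs and two Gaussian membership functions each (four rules), might look like this; the membership parameters and rule consequents are illustrative placeholders, not trained values from the paper:

```python
import numpy as np

def gauss_mf(x, c, s):
    """Gaussian membership grade of x for a fuzzy set centered at c."""
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def anfis_forward(x1, x2, mfs1, mfs2, consequents):
    """One forward pass of a first-order Sugeno ANFIS: two inputs, two
    membership functions per input, hence four rules.

    mfs1, mfs2: [(center, sigma), (center, sigma)] per input.
    consequents: four (p, q, r) tuples; rule output f = p*x1 + q*x2 + r.
    """
    w = []
    for c1, s1 in mfs1:
        for c2, s2 in mfs2:
            w.append(gauss_mf(x1, c1, s1) * gauss_mf(x2, c2, s2))  # firing strength
    w = np.array(w)
    wn = w / w.sum()                                   # normalized firing strengths
    f = np.array([p * x1 + q * x2 + r for p, q, r in consequents])
    return float(np.dot(wn, f))                        # weighted average of rule outputs
```

In an ANFEP-like setup, `x1` and `x2` would be HRV features (e.g. STDRR and RMSSD) and the membership and consequent parameters would be fitted by the ANFIS hybrid learning rule.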

Trip Assignment for Transport Card Based Seoul Metropolitan Subway Using Monte Carlo Method (Monte Carlo 기법을 이용한 교통카드기반 수도권 지하철 통행배정)

  • Meeyoung Lee;Doohee Nam
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.2
    • /
    • pp.64-79
    • /
    • 2023
  • This study applies the Monte Carlo simulation technique to the trip assignment problem for metropolitan subways. The analysis assumes that the travel times of inter-station samples follow a normal distribution, the basis of the probit model; the mean and standard deviation are calculated separately for each station pair. The simulation weights the in-vehicle times of individual links and the walking and headway (dispatch-interval) components of transfers. For long-distance trips with 50 or fewer samples, the characteristics of similar trips are analyzed instead. The approach was evaluated on the Seoul Metropolitan Subway network in two ways. Travel times between single stations on the Seolleung-Seongsu route were verified by random sampling of in-vehicle and transfer times. Across the inter-station trips of the entire network, the normality assumption was accepted for station pairs with more than 50 samples. For long-distance trips with fewer than 50 samples, the minimum inter-station distance was 122 km; at this distance the sample deviations were judged homogeneous, so the inter-station mean and standard deviation of the transport card data could still be applied.
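
The probit-style Monte Carlo step, sampling normally distributed component times and counting how often each route is fastest, can be sketched as follows; the `route_choice_shares` helper is hypothetical, with means and standard deviations that would in practice come from the transport card data:

```python
import numpy as np

rng = np.random.default_rng(7)

def route_choice_shares(routes, n_draws=20000):
    """Probit-style route shares by Monte Carlo simulation.

    routes: one list per route of (mean, std) pairs, each pair a travel
    component (link in-vehicle time, transfer walking time, headway wait).
    Each draw samples every component from an independent normal, sums
    them per route, and the fastest route wins that draw.
    """
    totals = np.zeros((n_draws, len(routes)))
    for j, comps in enumerate(routes):
        for mean, std in comps:
            totals[:, j] += rng.normal(mean, std, n_draws)
    winners = totals.argmin(axis=1)
    return np.bincount(winners, minlength=len(routes)) / n_draws
```

The resulting shares approximate probit choice probabilities without evaluating the multivariate normal integral analytically, which is the practical appeal of the Monte Carlo approach.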

Development of a Stochastic Precipitation Generation Model for Generating Multi-site Daily Precipitation (다지점 일강수 모의를 위한 추계학적 강수모의모형의 구축)

  • Jeong, Dae-Il
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.5B
    • /
    • pp.397-408
    • /
    • 2009
  • In this study, a stochastic precipitation generation framework for the simultaneous simulation of daily precipitation at multiple sites is presented. Precipitation occurrence at individual sites is generated using a hybrid-order Markov chain model that allows higher-order dependence for dry sequences. Precipitation amounts are reproduced using Anscombe residuals and gamma distributions. Multisite spatial correlations in the occurrence and amount series are represented with spatially correlated random numbers. The proposed model is applied to a network of 17 locations in the middle of the Korean peninsula. Evaluation statistics are computed over 50 realizations of generated precipitation, each equal in length to the observed record. The analysis shows that the model reproduces wet-day numbers, wet and dry spells, and the mean and standard deviation of wet-day amounts fairly well. However, the means over the 50 realizations show about 23% root mean square error (RMSE) against the observed average maximum numbers of consecutive wet and dry days, and 17% RMSE against the observed average annual maximum precipitation for return periods of 100 and 200 years. The model also reproduces the spatial correlations of the observed occurrence and amount series accurately.
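
The spatially correlated occurrence-and-amount idea can be sketched for two sites with a first-order occurrence process; the paper's hybrid-order Markov chain and Anscombe residuals are not reproduced, and all distribution parameters below are illustrative:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)

def gen_two_site_precip(n_days, p_wet=0.3, rho=0.8, shape=0.7, scale=8.0):
    """Daily precipitation (mm) at two correlated sites.

    Occurrence: spatially correlated standard normal deviates are
    thresholded at the wet-day probability quantile, so wet days tend to
    coincide across sites. Amounts: gamma-distributed on wet days.
    """
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n_days)
    thr = NormalDist().inv_cdf(p_wet)   # site is wet when its deviate < threshold
    wet = z < thr
    return np.where(wet, rng.gamma(shape, scale, size=z.shape), 0.0)
```

Raising `rho` strengthens the spatial correlation of both occurrence and amounts while leaving each site's marginal wet-day frequency and amount distribution untouched, which is the core property the paper exploits.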

Increasing Accuracy of Stock Price Pattern Prediction through Data Augmentation for Deep Learning (데이터 증강을 통한 딥러닝 기반 주가 패턴 예측 정확도 향상 방안)

  • Kim, Youngjun;Kim, Yeojeong;Lee, Insun;Lee, Hong Joo
    • The Journal of Bigdata
    • /
    • v.4 no.2
    • /
    • pp.1-12
    • /
    • 2019
  • As artificial intelligence (AI) technology develops, it is being applied to fields such as image, voice, and text, with good results in some areas, and researchers have also tried to use it to predict the stock market. Stock market prediction is known to be a difficult problem because the market is affected by many factors, such as the economy and politics. One line of AI work tries to predict the ups and downs of stock prices by learning price patterns with machine learning techniques. This study proposes predicting stock price patterns with a Convolutional Neural Network (CNN), which classifies images by extracting features through convolutional layers; accordingly, we classify candlestick images generated from stock data. The study has two objectives. Case 1 predicts patterns from images made from the same day's price data; Case 2 predicts the next day's patterns from images made from daily price data. In Case 1, two data augmentation methods, random modification and Gaussian noise, generate additional training data, and the generated images are used to fit the model. Given that deep learning requires a large amount of data, this study thus offers an augmentation method for candlestick images and compares accuracies across noise settings and classification problems. All data were collected through the OpenAPI provided by DaiShin Securities. Case 1 has five labels depending on pattern: up with up closing, up with down closing, down with up closing, down with down closing, and staying.
The images in Case 1 are created by removing the last candle (-1 candle), the last two candles (-2 candles), or the last three candles (-3 candles) from 60-minute, 30-minute, 10-minute, and 5-minute candle charts; in a 60-minute chart, each candle carries 60 minutes of information (open, high, low, and close prices). Case 2 has two labels, up and down, and uses the same chart intervals without removing any candle. Considering the nature of stock data, we propose moving the candles in the images instead of using existing augmentation techniques; the amount moved is called the modified value. Since the average difference between consecutive closing prices was 0.0029, modified values of 0.003, 0.002, 0.001, and 0.00025 were used, doubling the number of images after augmentation. For Gaussian noise, the mean was 0 and the variance 0.01. For both cases the model is based on VGG-Net16, which has 16 layers. Among the chart intervals, 10-minute -1 candle showed the best accuracy, so 10-minute images were used for the remaining Case 1 experiments, with the three-candle-removed images selected for augmentation and Gaussian noise: 10-minute -3 candle reached 79.72% accuracy, images with a 0.00025 modified value and 100% of candles changed reached 79.92%, and applying Gaussian noise raised accuracy to 80.98%. In Case 2, 60-minute candle charts predicted tomorrow's patterns with 82.60% accuracy. In sum, this study should contribute to further research on predicting stock price patterns from images and provides a practical data augmentation method for stock data.
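
The two augmentation recipes, moving whole candles by a modified value and adding Gaussian noise, can be sketched as follows; the `augment_candles` helper is hypothetical and operates on raw OHLC arrays rather than the paper's rendered chart images:

```python
import numpy as np

rng = np.random.default_rng(5)

def augment_candles(ohlc, modified_value=0.003, change_frac=1.0, noise_var=0.01):
    """Two augmentations of candlestick data, following the recipe above.

    (1) Shift a fraction of candles up or down by `modified_value`
        (a relative move applied to the whole candle).
    (2) Add zero-mean Gaussian noise with the given variance.
    ohlc: array of shape (n_candles, 4) = open, high, low, close.
    """
    shifted = ohlc.copy().astype(float)
    n = len(shifted)
    idx = rng.choice(n, size=int(change_frac * n), replace=False)
    signs = rng.choice([-1.0, 1.0], size=len(idx))
    shifted[idx] *= (1.0 + signs[:, None] * modified_value)  # move the whole candle
    noisy = ohlc + rng.normal(0.0, np.sqrt(noise_var), ohlc.shape)
    return shifted, noisy
```

Shifting the entire candle preserves its internal shape (body and wicks), which is why candle-moving is arguably better suited to price data than generic image transforms such as rotation or flipping.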


Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.39-54
    • /
    • 2013
  • The recent explosive growth of electronic commerce offers customers many purchase opportunities, and customers without enough knowledge about a purchase may accept product recommendations. Product recommender systems automatically reflect user preferences and present a recommendation list; in online shopping stores they have become one of the most popular tools for one-to-one marketing. However, systems that do not properly reflect user preferences cause disappointment and waste users' time. In this study we propose a recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting user preferences more precisely. The data come from a real-world online shopping store selling products from famous art galleries and museums in Korea: 5759 transactions initially, 3167 remaining after deleting null records. Categorical variables were transformed into dummy variables and outliers were excluded. The proposed model consists of two steps. The first predicts customers with a high likelihood of purchasing, using logistic regression, decision trees, and artificial neural networks within each product group, implemented in SAS E-Miner. The data were partitioned into modeling and validation sets for logistic regression and decision trees, and into training, test, and validation sets for the neural network; the validation set was identical across all experiments.
The results of the individual predictors are then combined with the multi-model ensemble techniques of bagging and bumping. Bagging ("Bootstrap Aggregation") combines the outputs of several machine learning models to raise the performance and stability of prediction or classification, a special form of averaging; bumping ("Bootstrap Umbrella of Model Parameter") keeps only the model with the lowest error. Bumping outperformed bagging and the individual predictors in every product group except "Poster," where the artificial neural network performed best. The second step uses market basket analysis to extract association rules for co-purchased products. With a minimum transaction support of 5%, at most 4 items per association, and a minimum confidence of 10%, thirty-one rules were extracted; after excluding rules with lift below 1 and removing duplicates, fifteen rules remained. Eleven relate products within the "Office Supplies" group, one links "Office Supplies" with "Fashion," and three link "Office Supplies" with "Home Decoration." Finally, the proposed recommender system presents the recommendation list to the appropriate customers. We tested its usability with a prototype, built with ASP, JavaScript, and Microsoft Access, on real-world transaction and profile data.
In addition, we surveyed user satisfaction with the recommended product lists versus randomly selected lists. The 173 survey participants were users of MSN Messenger, Daum Café, and P2P services, and satisfaction was measured on a five-point Likert scale. A paired-sample t-test showed that the proposed model outperforms random selection at the 1% statistical significance level, i.e., users were significantly more satisfied with the recommended lists. The results suggest the proposed system may be useful in real-world online shopping stores.
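
The bagging-versus-bumping combination step can be sketched with a toy base learner (a bootstrap-fitted straight line, not the paper's SAS E-Miner models):

```python
import numpy as np

rng = np.random.default_rng(2)

def fit(x, y):
    """Toy base learner: least-squares line (slope, intercept)."""
    return np.polyfit(x, y, 1)

def bag_and_bump(x, y, n_boot=25):
    """Compare bagging with bumping over bootstrap-fitted base models.

    Bagging averages the predictions of all bootstrap models; bumping
    keeps only the single bootstrap model with the lowest error on the
    full training set.
    """
    models = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))   # bootstrap resample
        models.append(fit(x[idx], y[idx]))
    preds = np.array([np.polyval(m, x) for m in models])
    bagged = preds.mean(axis=0)                 # bagging: averaged output
    errs = ((preds - y) ** 2).mean(axis=1)
    bumped = preds[errs.argmin()]               # bumping: best single model
    return bagged, bumped
```

Bagging tends to stabilize a high-variance learner by averaging, while bumping can escape a poor fit caused by a few bad training points, which is consistent with bumping winning in most product groups above.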