• Title/Summary/Keyword: Time Weighted Algorithm


Modeling mechanical strength of self-compacting mortar containing nanoparticles using wavelet-based support vector machine

  • Khatibinia, Mohsen;Feizbakhsh, Abdosattar;Mohseni, Ehsan;Ranjbar, Malek Mohammad
    • Computers and Concrete / v.18 no.6 / pp.1065-1082 / 2016
  • The main aim of this study is to predict the compressive and flexural strengths of self-compacting mortar (SCM) containing nano-SiO₂, nano-Fe₂O₃ and nano-CuO using a wavelet-based weighted least squares-support vector machine (WLS-SVM) approach, called WWLS-SVM. The WWLS-SVM regression model is a relatively new metamodel that has been successfully applied as a machine learning algorithm to engineering problems and has yielded encouraging results. To achieve this aim, the WLS-SVM and WWLS-SVM models are first developed from a database in which variables consisting of cement, sand, NS, NF, NC, superplasticizer dosage, slump flow diameter and V-funnel flow time are taken as the input parameters of the models, and the compressive and flexural strengths of SCM are taken as the output parameters. Finally, a statistical analysis is performed to demonstrate the generalization performance of the models for predicting the compressive and flexural strengths. The numerical results show that both metamodels achieve desirable accuracy and applicability. Furthermore, by adopting these predictive metamodels, costly and time-consuming laboratory tests can be avoided.
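
At its core, the WWLS-SVM metamodel described above is a least squares SVM whose kernel is built from a mother wavelet. As a rough illustration only (not the authors' implementation), the sketch below fits an LS-SVM regressor with a Morlet-style wavelet kernel to a toy mix-design table; the feature ordering, kernel parameter and regularization value are assumptions.

```python
import numpy as np

def wavelet_kernel(X, Z, a=1.0):
    """Wavelet kernel: product over features of a Morlet-style mother wavelet
    h(u) = cos(1.75*u) * exp(-u^2/2) evaluated at u = (x_d - z_d)/a."""
    diff = (X[:, None, :] - Z[None, :, :]) / a          # shape (n, m, d)
    h = np.cos(1.75 * diff) * np.exp(-0.5 * diff**2)
    return np.prod(h, axis=2)                           # shape (n, m)

def fit_ls_svm(X, y, gamma=10.0, a=1.0):
    """Least squares SVM regression: solve the KKT system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = wavelet_kernel(X, X, a)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: wavelet_kernel(Xq, X, a) @ alpha + b

# toy usage: 8 mix-design inputs, compressive strength output (hypothetical values)
X = np.random.rand(40, 8)      # e.g. cement, sand, NS, NF, NC, SP dosage, slump flow, V-funnel time
y = 30 + 20 * X[:, 0] - 5 * X[:, 1] + np.random.randn(40)
predict = fit_ls_svm(X, y)
print(predict(X[:3]))
```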

A Study on Real Time Traffic Performance Improvement Considering QoS in IEEE 802.15.6 WBAN Environments (IEEE 802.15.6 WBAN 환경에서 QoS를 고려한 실시간 트래픽 성능향상에 관한 연구)

  • Ro, Seung-Min;Kim, Chung-Ho;Kang, Chul-Ho
    • Journal of the Institute of Electronics Engineers of Korea TC / v.48 no.4 / pp.84-91 / 2011
  • WBAN (Wireless Body Area Network), whose standardization has recently progressed under IEEE 802.15.6, is a network for short-range wireless communication within about 3 meters of, or inside, the human body. Effective QoS control and efficient management of bandwidth-hungry data such as audio and video within limited bandwidth are important for both users and network load in short-range wireless networks. In this paper, for the high-speed IEEE 802.15.6 WBAN standard, dynamic allocation for efficient bandwidth management and a weighted fair queueing algorithm are proposed, realized by adjusting the superframe for bandwidth-limited data and applying queueing-based Quality of Service (QoS) control. The Weighted Fair Queueing (WFQ) algorithm shows robust performance on qualitative metrics while maintaining fairness and maximizing system performance. The performance results show that the dynamic allocation expanded the transmission bandwidth fivefold, and the weighted fair queueing increased throughput by up to 24.3% and also resolved the delay bound problem.
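
Weighted fair queueing, as used above, serves packets in order of a virtual finish time so that each flow receives bandwidth in proportion to its weight. The sketch below is a minimal, generic WFQ scheduler (not the paper's WBAN-specific variant); the flow names, weights and packet sizes are made up for illustration.

```python
import heapq

class WFQScheduler:
    """Minimal weighted fair queueing: each packet gets a virtual finish time
    F = max(V, last_finish[flow]) + size / weight, and packets are served in
    increasing F order, approximating bandwidth shares proportional to weight."""
    def __init__(self, weights):
        self.weights = weights                   # flow id -> weight
        self.last_finish = {f: 0.0 for f in weights}
        self.virtual_time = 0.0
        self.heap = []                           # (finish_time, seq, flow, size)
        self.seq = 0

    def enqueue(self, flow, size):
        start = max(self.virtual_time, self.last_finish[flow])
        finish = start + size / self.weights[flow]
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        finish, _, flow, size = heapq.heappop(self.heap)
        self.virtual_time = finish               # simplified virtual clock update
        return flow, size

# usage: an emergency flow with weight 4 drains ahead of a bulk flow with weight 1
sched = WFQScheduler({"emergency": 4.0, "bulk": 1.0})
for _ in range(3):
    sched.enqueue("emergency", 100)
    sched.enqueue("bulk", 100)
print([sched.dequeue()[0] for _ in range(6)])
```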

A Novel Algorithm of Joint Probability Data Association Based on Loss Function

  • Jiao, Hao;Liu, Yunxue;Yu, Hui;Li, Ke;Long, Feiyuan;Cui, Yingjie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.7 / pp.2339-2355 / 2021
  • In this paper, a joint probabilistic data association algorithm based on a loss function (LJPDA) is proposed so that the computational load and accuracy of the multi-target tracking algorithm can be guaranteed simultaneously. First, data association is divided into three cases based on the relationship among validation gates and the number of measurements in the overlapping area of the validation gates. A contribution coefficient is employed to evaluate the contribution of a measurement to a target, and a loss function, which reflects the cost of the proposed data association algorithm, is defined. The system of equations for the optimal contribution coefficients is obtained by minimizing the loss function, and the optimal contribution coefficients can be computed with the Newton-Raphson method. In this way, the weighted value of each target can be obtained and the data association between measurements and tracks can be realized. Finally, we compare the performance of the proposed LJPDA with the joint probabilistic data association (JPDA) algorithm via numerical simulations, paying particular attention to real-time performance and estimation error. Theoretical analysis and experimental results show that the proposed LJPDA algorithm exhibits small estimation error and low computational complexity.
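
The paper's loss function is not reproduced in the abstract. As a generic illustration of the Newton-Raphson step it refers to, the sketch below minimizes a purely hypothetical convex loss over contribution coefficients, where closer measurements end up with larger weights; the loss form, distances and penalty are assumptions, not the LJPDA definition.

```python
import numpy as np

def newton_minimize(grad, hess, c0, tol=1e-9, max_iter=50):
    """Generic Newton-Raphson: iterate c <- c - H(c)^-1 g(c) until the gradient vanishes."""
    c = np.asarray(c0, dtype=float)
    for _ in range(max_iter):
        g = grad(c)
        if np.linalg.norm(g) < tol:
            break
        c = c - np.linalg.solve(hess(c), g)
    return c

# Hypothetical loss over contribution coefficients c_i of measurements to one target:
#   L(c) = sum_i d_i * c_i^2 + mu * (sum_i c_i - 1)^2
# where d_i is a made-up normalized distance of measurement i from the predicted track.
d = np.array([0.5, 1.5, 3.0])        # assumed measurement-to-track distances
mu = 10.0                            # penalty pushing the coefficients to sum to one

grad = lambda c: 2 * d * c + 2 * mu * (c.sum() - 1.0)
hess = lambda c: np.diag(2 * d) + 2 * mu * np.ones((len(d), len(d)))

c_opt = newton_minimize(grad, hess, np.full(len(d), 1.0 / len(d)))
print(c_opt, c_opt.sum())            # closer measurements receive larger coefficients
```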

Search Algorithm for Efficient Optimal Path based on Time-weighted (시간 가중치 기반 효율적인 최적 경로 탐색 기법 연구)

  • Her, Yu-sung;Kim, Tae-woo;Ahn, Yonghak
    • Journal of the Korea Convergence Society / v.11 no.2 / pp.1-8 / 2020
  • In this paper, we propose an optimal path search algorithm between each node and a midpoint that applies time weighting. Midpoint services usually determine a meeting point based only on the users' locations, which is inefficient in terms of travel time because the purely location-based search considers position alone. To solve this problem of the existing location-based search, the proposed algorithm sets the weights between each node and the midpoint by reflecting both the users' location information and the required travel time, and then uses these weights to search for an optimal path. In addition, to increase search efficiency, it maintains high accuracy by setting the weights adaptively to the given information. Experimental results show that the proposed algorithm finds a better optimal path to the midpoint than the existing method.
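
The abstract does not give the exact weighting formula. As a rough sketch of the general idea (time-weighted shortest paths to a common meeting node), the code below runs Dijkstra over edge travel times from each user's start node and picks the candidate node that minimizes the total weighted travel time; the graph, user weights and travel times are assumptions.

```python
import heapq

def dijkstra(graph, src):
    """Shortest travel times from src over a graph {node: [(neighbor, minutes), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def time_weighted_midpoint(graph, starts, weights):
    """Pick the node minimizing the weighted sum of travel times from all users."""
    tables = [dijkstra(graph, s) for s in starts]
    candidates = set().union(*[set(t) for t in tables])
    return min(candidates,
               key=lambda n: sum(w * t.get(n, float("inf"))
                                 for w, t in zip(weights, tables)))

# toy road graph with travel times in minutes (hypothetical)
graph = {
    "A": [("B", 4), ("C", 10)],
    "B": [("A", 4), ("C", 3), ("D", 12), ("E", 2)],
    "C": [("A", 10), ("B", 3), ("D", 6)],
    "D": [("B", 12), ("C", 6)],
    "E": [("B", 2)],
}
print(time_weighted_midpoint(graph, starts=["A", "D", "E"], weights=[1.0, 1.0, 1.0]))
# -> "B", the node with the smallest total weighted travel time in this toy graph
```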

Just-in-time Scheduling with Multiple Competing Agents (다수의 경쟁이 존재하는 환경에서 적시 스케줄링에 관한 연구)

  • Chung, Dae-Young;Choi, Byung-Cheon
    • Journal of the Korean Operations Research and Management Science Society / v.37 no.1 / pp.19-28 / 2012
  • We consider a multi-agent scheduling problem in which each agent tries to maximize the weighted number of its just-in-time jobs. Two objectives are considered: the first is to find the optimal solution for one agent subject to constraints on the other agents' weight functions, and the second is to find the largest set of efficient schedules whose corresponding objective vectors differ, for the case with identical weights. We show that when the number of agents is fixed, the single-machine case with the first objective is NP-hard in the ordinary sense, and we present a polynomial-time algorithm for the two-machine flow shop case with the second objective and identical weights.
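
For a single agent on a single machine, a job is just-in-time only if it finishes exactly at its due date, so maximizing the weighted number of just-in-time jobs reduces to selecting non-overlapping execution intervals [d_j - p_j, d_j] of maximum total weight. The sketch below solves only that single-agent subproblem via weighted interval scheduling; it does not address the multi-agent constraints or the flow shop case studied in the paper, and the job data are made up.

```python
from bisect import bisect_right

def max_weighted_jit(jobs):
    """jobs: list of (processing_time p, due_date d, weight w).
    A job is just-in-time iff it runs exactly over [d - p, d], so we pick a
    maximum-weight set of non-overlapping such intervals (weighted interval scheduling DP)."""
    intervals = sorted(((d - p, d, w) for p, d, w in jobs), key=lambda t: t[1])
    ends = [d for _, d, _ in intervals]
    best = [0.0] * (len(intervals) + 1)
    for i, (start, d, w) in enumerate(intervals, 1):
        # latest earlier interval finishing no later than this one's start
        j = bisect_right(ends, start, 0, i - 1)
        best[i] = max(best[i - 1], best[j] + w)
    return best[-1]

# hypothetical jobs: (p, d, w)
jobs = [(3, 5, 10), (2, 6, 6), (4, 10, 8), (1, 7, 3)]
print(max_weighted_jit(jobs))   # 18: schedule (2,6,6) over [4,6] and (4,10,8) over [6,10]
```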

Robust Location Estimation based on TDOA and FDOA using Outlier Detection Algorithm (이상치 검출 알고리즘을 이용한 TDOA와 FDOA 기반 이동 신호원 위치 추정 기법)

  • Yoo, Hogeun;Lee, Jaehoon
    • Journal of Convergence for Information Technology / v.10 no.9 / pp.15-21 / 2020
  • This paper presents an outlier detection algorithm for estimating the location and velocity of a moving source with the two-step weighted least-squares method using time difference of arrival (TDOA) and frequency difference of arrival (FDOA) data. Since outliers in the TDOA and FDOA data can degrade the accuracy of the estimated location and velocity, it is important to detect and remove them. In this paper, a method to find a minimal set of inlier data and a method to determine whether each TDOA/FDOA measurement is an inlier or an outlier are presented. Numerical simulations show that the accuracy of the estimated location and velocity is improved by removing the outliers from the TDOA and FDOA data.
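
The paper's two-step TDOA/FDOA estimator is not reproduced here; the sketch below only illustrates the generic pattern it builds on: a weighted least-squares solve followed by residual-based outlier rejection and a re-solve. The linear model, noise levels and the 3-sigma threshold are assumptions standing in for the linearized TDOA/FDOA equations.

```python
import numpy as np

def wls(A, b, w):
    """Weighted least squares: minimize (b - A x)^T W (b - A x) with W = diag(w)."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

def wls_with_outlier_rejection(A, b, w, n_sigma=3.0):
    """Solve, flag measurements whose residual exceeds n_sigma * a robust spread, re-solve on inliers."""
    x = wls(A, b, w)
    r = b - A @ x
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))   # MAD-based robust spread
    inliers = np.abs(r) <= n_sigma * scale
    return wls(A[inliers], b[inliers], w[inliers]), inliers

# synthetic linear measurements standing in for linearized TDOA/FDOA data (hypothetical)
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0, 0.5])
A = rng.normal(size=(30, 3))
b = A @ x_true + 0.01 * rng.normal(size=30)
b[[3, 17]] += 5.0                       # inject two gross outliers
w = np.ones(30)
x_hat, inliers = wls_with_outlier_rejection(A, b, w)
print(x_hat, np.where(~inliers)[0])     # estimate near x_true; the injected outliers should be flagged
```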

An Algorithm for Increasing Worm Detection Effectiveness in Virus Throttling (바이러스 쓰로틀링의 웜 탐지 효율 향상 알고리즘)

  • Kim, Jang-Bok;Kim, Sang-Joong;Choi, Sun-Jung;Shim, Jae-Hong;Chung, Gi-Hyun;Choi, Kyung-Hee
    • Journal of KIISE:Information Networking / v.34 no.3 / pp.186-192 / 2007
  • The virus throttling technique [5,6] is one of the well-known worm early detection techniques. Virus throttling slows worm propagation by artificially delaying connection request packets. However, its detection time is not as fast as expected when a worm generates attack packets at a low rate, because the technique monitors only the length of the delay queue. In this paper, we instead use the trend of the weighted-average delay queue length (TWAQL). By using TWAQL, the detection time for low-rate Internet worms is shortened while the false alarm rate does not increase significantly. Experiments confirm that the proposed algorithm outperforms the original technique.
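
The abstract describes tracking the trend of a weighted average of the delay-queue length rather than the raw length. The sketch below is a generic illustration of that idea, not the paper's exact detector: it keeps an exponentially weighted average of the queue length, treats a run of consecutive increases as the trend, and raises an alarm when the run persists. The smoothing factor, trend window and sample values are assumptions.

```python
from collections import deque

class TrendThrottleDetector:
    """Flag a worm when the weighted average of the delay-queue length rises for
    `window` consecutive samples, catching slow, steady growth that a fixed
    queue-length threshold would miss."""
    def __init__(self, alpha=0.2, window=5):
        self.alpha = alpha            # smoothing factor for the weighted average (assumed)
        self.window = window          # consecutive rises needed to trigger an alarm (assumed)
        self.avg = None
        self.rises = deque(maxlen=window)

    def update(self, queue_len):
        prev = self.avg
        self.avg = queue_len if prev is None else (
            self.alpha * queue_len + (1 - self.alpha) * prev)
        if prev is not None:
            self.rises.append(self.avg > prev)
        return len(self.rises) == self.window and all(self.rises)

# usage: a low-rate worm slowly grows the delay queue while the raw length stays small
det = TrendThrottleDetector()
samples = [1, 1, 2, 2, 3, 3, 4, 4, 5, 6]       # hypothetical queue lengths per tick
alarms = [det.update(q) for q in samples]
print(alarms.index(True) if True in alarms else "no alarm")
```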

A Recommendation System of Exponentially Weighted Collaborative Filtering for Products in Electronic Commerce (지수적 가중치를 적용한 협력적 상품추천시스템)

  • Lee, Gyeong-Hui;Han, Jeong-Hye;Im, Chun-Seong
    • The KIPS Transactions:PartB / v.8B no.6 / pp.625-632 / 2001
  • Electronic stores have realized that they need to understand their customers and respond quickly to their wants and needs. To be successful in the increasingly competitive Internet marketplace, recommender systems are adopting data mining techniques. One of the most successful recommender technologies is the collaborative filtering (CF) algorithm, which recommends products to a target customer based on the information of other customers and employs statistical techniques to find a set of customers known as neighbors. However, such systems are not well suited to seasonal products, which are sensitive to time or season, such as refrigerators or seasonal clothes. In this paper, we propose an adjusted item-based recommendation algorithm called exponentially weighted collaborative filtering recommendation (EWCFR), which computes item-item similarities with seasonal products in mind. Finally, since collaborative filtering systems need to produce high-quality recommendations quickly for very large-scale problems, we present a recommendation system with relatively fast computing time built on a main-memory database (MMDB) in XML.
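
The exponential weighting mentioned above can be illustrated by decaying each rating's contribution to the item-item similarity with its age, so that recent seasonal behavior dominates. The decay rule and the time-weighted cosine similarity below are assumptions for illustration, not the paper's exact EWCFR definition.

```python
import math
from collections import defaultdict

def exp_weight(age_days, half_life=30.0):
    """Exponential decay: a rating loses half its influence every `half_life` days (assumed)."""
    return 0.5 ** (age_days / half_life)

def item_similarity(ratings, now):
    """ratings: list of (user, item, rating, timestamp_days).
    Time-weighted cosine similarity between items, weighting each co-rating
    by the decay of the older of the two ratings."""
    by_user = defaultdict(dict)
    for user, item, r, t in ratings:
        by_user[user][item] = (r, exp_weight(now - t))
    num, den_i, den_j = defaultdict(float), defaultdict(float), defaultdict(float)
    for items in by_user.values():
        for i, (ri, wi) in items.items():
            for j, (rj, wj) in items.items():
                if i < j:
                    w = min(wi, wj)
                    num[(i, j)] += w * ri * rj
                    den_i[(i, j)] += w * ri * ri
                    den_j[(i, j)] += w * rj * rj
    return {p: num[p] / math.sqrt(den_i[p] * den_j[p]) for p in num if den_i[p] and den_j[p]}

# toy data: (user, item, rating, day); recently co-rated winter items score higher together
ratings = [("u1", "coat", 5, 355), ("u1", "scarf", 4, 350),
           ("u2", "coat", 4, 360), ("u2", "fan", 5, 180),
           ("u3", "coat", 5, 100), ("u3", "fan", 4, 100)]
print(item_similarity(ratings, now=365))
```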


Effective Noise Reduction using STFT-based Content Analysis (STFT 기반 영상분석을 이용한 효과적인 잡음제거 알고리즘)

  • Baek, Seungin;Jeong, Soowoong;Choi, Jong-Soo;Lee, Sangkeun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.4 / pp.145-155 / 2015
  • Noise reduction has been actively studied in digital image processing, and block-based denoising algorithms have recently become widely used. In particular, low-rank approximation employing WNNM (Weighted Nuclear Norm Minimization) together with block-based approaches has demonstrated the potential for effective noise reduction. However, the low-rank-approximation-based algorithm generates artifacts in the image restoration step. In this paper, we analyze the image content using the STFT (Short-Time Fourier Transform) and propose an effective method for minimizing the artifacts generated by the conventional algorithm. To evaluate the performance of the proposed scheme, we use test images containing a wide range of noise levels and compare the results with state-of-the-art algorithms.
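
The WNNM step mentioned above is commonly computed as weighted singular value soft-thresholding of a matrix of similar patches. The sketch below shows only that generic proximal step on a random low-rank-plus-noise matrix; the weight rule, constants and test data are assumptions, and the paper's STFT-based artifact analysis is not reproduced.

```python
import numpy as np

def wnnm_denoise(Y, c=5.0, eps=1e-6):
    """Weighted nuclear norm minimization (one proximal step):
    SVD the noisy patch matrix and shrink each singular value by a weight
    that is larger for smaller singular values, so weak (noisy) components
    are suppressed more than strong (structural) ones."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = c / (s + eps)                     # assumed weight rule: small sigma -> big shrinkage
    s_shrunk = np.maximum(s - w, 0.0)     # weighted soft-thresholding
    return U @ np.diag(s_shrunk) @ Vt

# toy patch matrix: rank-2 structure plus Gaussian noise (hypothetical)
rng = np.random.default_rng(1)
L = rng.normal(size=(32, 2)) @ rng.normal(size=(2, 32))
Y = L + 0.3 * rng.normal(size=(32, 32))
X = wnnm_denoise(Y)
# the shrunk matrix is typically closer to the clean one than the noisy input
print(np.linalg.norm(Y - L), np.linalg.norm(X - L))
```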

High Performance Coprocessor Architecture for Real-Time Dense Disparity Map (실시간 Dense Disparity Map 추출을 위한 고성능 가속기 구조 설계)

  • Kim, Cheong-Ghil;Srini, Vason P.;Kim, Shin-Dug
    • The KIPS Transactions:PartA / v.14A no.5 / pp.301-308 / 2007
  • This paper proposes a high-performance coprocessor architecture for real-time dense disparity computation based on a phase-based binocular stereo matching technique called local weighted phase-correlation (LWPC). The algorithm, which consists of four stages, combines the robustness of wavelet-based phase-difference methods with the basic control strategy of phase-correlation methods. For a parallel and efficient hardware implementation, the proposed architecture employs a SIMD (Single Instruction, Multiple Data) structure for each functional stage, and all stages operate in pipelined mode, so that the newly devised pipelined linear array processor is optimized for row-column image processing, eliminating the need for transposed memory while preserving generality and high throughput. The proposed architecture is implemented with the Xilinx HDL tool, and the required hardware resources are calculated in terms of look-up tables, flip-flops, slices, and the amount of memory. The results show that the proposed architecture can be integrated into one chip while maintaining processing at video rate.
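
The local weighted phase-correlation idea can be illustrated in software by phase-correlating small windows of a stereo pair along the same scanline: the peak of the normalized cross-power spectrum gives the horizontal shift (disparity) of each window. The sketch below is a simplified 1-D illustration of that principle, not the paper's four-stage LWPC pipeline or its hardware mapping; the window size and test signal are assumptions.

```python
import numpy as np

def window_disparity(left_row, right_row, x, win=32):
    """Estimate the horizontal shift of a window centered at x by 1-D phase correlation:
    the peak of IFFT( conj(L) * R / |conj(L) * R| ) lies at the displacement of right vs. left."""
    l = left_row[x - win // 2: x + win // 2]
    r = right_row[x - win // 2: x + win // 2]
    L, R = np.fft.fft(l), np.fft.fft(r)
    cross = np.conj(L) * R
    cross /= np.abs(cross) + 1e-12              # keep phase only (normalized cross-power spectrum)
    corr = np.real(np.fft.ifft(cross))
    shift = int(np.argmax(corr))
    return shift - win if shift > win // 2 else shift   # map to a signed shift

# toy scanline pair: the right view is the left view shifted right by 5 pixels (hypothetical)
rng = np.random.default_rng(2)
left = rng.normal(size=256)
right = np.roll(left, 5)
print(window_disparity(left, right, x=128))     # should report a shift close to 5
```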