• Title/Abstract/Keywords: combination-based algorithms

Search results: 233 (processing time: 0.023 s)

Compression-friendly Image Encryption Algorithm Based on Order Relation

  • Ganzorig Gankhuyag;Yoonsik Choe
    • Journal of Internet Technology
    • /
    • Vol. 21, No. 4
    • /
    • pp.1013-1024
    • /
    • 2020
  • In this paper, we introduce an image encryption algorithm that can be used in combination with compression algorithms. Existing encryption algorithms focus on either encryption strength or speed without considering compression, whereas the proposed algorithm improves compression efficiency while ensuring security. Our encryption algorithm decomposes an image into subsets of pixel values and pixel intensities, and computes the order of permutations. The encrypted image becomes unpredictable after permutation, while the order permutation reduces the discontinuity between signals in the image, increasing compression efficiency. Experimental results show that the security strength of the proposed algorithm is similar to that of existing algorithms. Additionally, we tested the algorithm with JPEG and JPEG2000 at varying compression ratios; compared with existing methods applied without encryption, the proposed algorithm significantly increases PSNR and SSIM values.
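
The abstract's core idea, permuting pixels into their value order so the encrypted signal becomes smooth and compression-friendly while the secret order inverts the scramble, can be sketched as follows. The key-dependent tie-breaking and the exact decomposition are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def order_permute(pixels, seed):
    """Sketch: permute pixels into sorted order; the permutation is the secret."""
    rng = np.random.default_rng(seed)
    noise = rng.random(pixels.size)          # key-dependent tie-breaking (assumption)
    order = np.lexsort((noise, pixels))      # sort by pixel value, ties broken by key
    return pixels[order], order              # smooth signal + secret order

def order_restore(permuted, order):
    """Invert the permutation to recover the original image."""
    restored = np.empty_like(permuted)
    restored[order] = permuted
    return restored

img = np.array([200, 3, 3, 7, 200, 1], dtype=np.uint8)
enc, key_order = order_permute(img, seed=42)
assert (order_restore(enc, key_order) == img).all()
assert (np.diff(enc.astype(int)) >= 0).all()  # monotone signal => small residuals
```

The monotone output is what makes the scheme compression-friendly: adjacent samples differ little, so transform coders such as JPEG spend fewer bits on it.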

Optimization-based method for structural damage detection with consideration of uncertainties- a comparative study

  • Ghiasi, Ramin;Ghasemi, Mohammad Reza
    • Smart Structures and Systems
    • /
    • Vol. 22, No. 5
    • /
    • pp.561-574
    • /
    • 2018
  • In this paper, to efficiently reduce the computational cost of model updating during the optimization process of damage detection, the structural response is evaluated using a properly trained surrogate model. Furthermore, in practice, uncertainties in the FE model parameters and modelling errors are inevitable. Hence, an efficient approach based on Monte Carlo simulation is proposed to take the effect of uncertainties into account when developing a surrogate model. The probability of damage existence (PDE) is calculated based on the probability density functions of the undamaged and damaged states. The current work builds a framework for Probability Based Damage Detection (PBDD) of structures based on the best combination of a metaheuristic optimization algorithm and a surrogate model. To reach this goal, three popular metamodeling techniques, including the Cascade Feed Forward Neural Network (CFNN), Least Squares Support Vector Machines (LS-SVM) and Kriging, are constructed, trained and tested in order to inspect the features and faults of each algorithm. Furthermore, three well-known optimization algorithms, including Ideal Gas Molecular Movement (IGMM), Particle Swarm Optimization (PSO) and the Bat Algorithm (BA), are utilized and the comparative results are presented accordingly. In addition, efficient schemes are implemented on these algorithms to improve their performance in handling problems with a large number of variables. Considering various indices for measuring the accuracy and computational time of the PBDD process, the results indicate that the combination of the LS-SVM surrogate model with the IGMM optimization algorithm performs better in predicting damage than the other methods.
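
The Monte Carlo estimate of the probability of damage existence described above can be sketched in its generic form: run the (surrogate-accelerated) identification many times under randomly perturbed model parameters and count how often damage is indicated. The severity distribution and threshold below are hypothetical placeholders; the paper's exact estimator may differ:

```python
import random

def probability_of_damage(identify, n_samples=2000, threshold=0.0, seed=1):
    """Monte Carlo sketch of the probability of damage existence (PDE).

    `identify()` runs one damage identification with randomly perturbed
    model parameters and returns the identified damage severity; the PDE
    is the fraction of runs whose severity exceeds the threshold.
    """
    random.seed(seed)
    hits = sum(identify() > threshold for _ in range(n_samples))
    return hits / n_samples

# Hypothetical element: true severity 0.30, identification noise sigma 0.10.
pde = probability_of_damage(lambda: random.gauss(0.30, 0.10))
assert 0.95 < pde <= 1.0   # a clearly damaged element gets a PDE near 1
```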

구성요소가 서로 종속인 네트워크시스템의 신뢰성모형과 계산알고리즘 (Reliability Modeling and Computational Algorithm of Network Systems with Dependent Components)

  • 홍정식;이창훈
    • 한국경영과학회지
    • /
    • Vol. 14, No. 1
    • /
    • pp.88-96
    • /
    • 1989
  • A general measure in reliability analysis is the k-terminal reliability, the probability that the specified vertices are connected by working edges. To compute the k-terminal reliability, components are usually assumed to be statistically independent. In this study, the modeling and analysis of the k-terminal reliability are investigated when dependency among components is considered. As the size of the network increases, the number of joint probability parameters needed to represent the dependency among components grows exponentially. To avoid this difficulty, the structured-event-based reliability model (SERM) is presented. This model uses a combination of the network topology (physical representation) and the reliability block diagram (logical representation), which makes it possible to represent the dependency among components in network form. Computational algorithms for the k-terminal reliability in SERM are based on the factoring algorithm. Two features of the factoring algorithm are reliability-preserving reduction and the pivoting-edge selection strategy. The pivoting-edge selection strategy is modified in two different ways to handle the replicated edges occurring in SERM. Two algorithms are presented, one for each modified pivoting strategy, and illustrated by a numerical example.

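
The factoring algorithm the abstract builds on pivots on an edge e and recurses on the contracted and deleted graphs: R(G) = p·R(G with e contracted) + (1−p)·R(G with e deleted). The sketch below shows only the classical independent-edge, two-terminal case, not the paper's k-terminal SERM extension with dependent components:

```python
def reliability(edges, s, t):
    """Factoring (pivotal decomposition) for two-terminal network reliability.

    edges: list of (u, v, p) with independent edge reliabilities p.
    """
    if s == t:
        return 1.0
    if not edges:
        return 0.0
    (u, v, p), rest = edges[0], edges[1:]
    # Branch 1: the edge fails -> delete it.
    r_del = reliability(rest, s, t)
    # Branch 2: the edge works -> contract it (merge v into u).
    merged = []
    for (a, b, q) in rest:
        a = u if a == v else a
        b = u if b == v else b
        if a != b:                      # drop self-loops created by the merge
            merged.append((a, b, q))
    r_con = reliability(merged, u if s == v else s, u if t == v else t)
    return p * r_con + (1 - p) * r_del

# Two parallel s-t edges of reliability 0.9: R = 1 - 0.1**2 = 0.99
r = reliability([("s", "t", 0.9), ("s", "t", 0.9)], "s", "t")
assert abs(r - 0.99) < 1e-12
```

The reliability-preserving reductions mentioned in the abstract (series/parallel simplification) would be applied before each pivot to keep the recursion tree small.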

Interest Point Detection Using Hough Transform and Invariant Patch Feature for Image Retrieval

  • ;안영은;박종안
    • 한국ITS학회 논문지
    • /
    • Vol. 8, No. 1
    • /
    • pp.127-135
    • /
    • 2009
  • This paper presents a new technique for corner-shape-based object retrieval from a database. The proposed feature matrix consists of values obtained through a neighborhood operation on detected corners. This results in a significantly smaller feature matrix compared to algorithms using color features, and is thus computationally very efficient. The corners are extracted by finding the intersections of lines detected using the Hough transform. As affine transformations preserve the collinearity of points on a line and their intersection properties, the resulting corner features for image retrieval are robust to affine transformations. Furthermore, the corner features are invariant to noise. The proposed algorithm is expected to produce good results in combination with other algorithms, serving as an incremental verification step for similarity.

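
Corner candidates in this scheme are intersections of Hough lines, which the standard transform returns in Hesse normal form: x·cos(θ) + y·sin(θ) = ρ. A minimal intersection routine (the neighborhood feature computation around each corner is not reproduced here):

```python
import math

def line_intersection(l1, l2, eps=1e-9):
    """Intersect two Hough lines (rho, theta) in Hesse normal form.

    Solves the 2x2 linear system; returns (x, y), or None for
    (near-)parallel lines, which produce no corner.
    """
    (r1, t1), (r2, t2) = l1, l2
    a11, a12 = math.cos(t1), math.sin(t1)
    a21, a22 = math.cos(t2), math.sin(t2)
    det = a11 * a22 - a12 * a21
    if abs(det) < eps:
        return None
    x = (r1 * a22 - r2 * a12) / det
    y = (a11 * r2 - a21 * r1) / det
    return (x, y)

# A vertical line x = 2 (theta = 0) and a horizontal line y = 3
# (theta = pi/2) meet at the corner (2, 3).
corner = line_intersection((2.0, 0.0), (3.0, math.pi / 2))
assert abs(corner[0] - 2.0) < 1e-9 and abs(corner[1] - 3.0) < 1e-9
```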

PageRank 변형 알고리즘들 간의 순위 품질 평가 (Ranking Quality Evaluation of PageRank Variations)

  • 팜민득;허준석;이정훈;황규영
    • 전자공학회논문지CI
    • /
    • Vol. 46, No. 5
    • /
    • pp.14-28
    • /
    • 2009
  • The PageRank algorithm is a key factor in ranking web pages in search engines such as Google. Many variant algorithms have been proposed to improve the ranking quality of PageRank, but it is not clear which variant (or which combination of variants) provides the best ranking quality. In this paper, we evaluate the ranking quality of well-known PageRank variants and their combinations. To this end, we first classify the variants into link-based approaches, which exploit the link structure of the Web, and knowledge-based approaches, which exploit semantic information of the Web. Next, we propose algorithms that combine algorithms from these two categories, and implement both the variants and the combined algorithms. Through experiments on real data consisting of one million web pages, we identify the algorithm that provides the best ranking quality among the PageRank variants and their combinations.
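
The baseline that all the surveyed variants modify is the classic power-iteration PageRank, sketched here with the usual damping factor and uniform dangling-page handling (the specific variants evaluated in the paper are not reproduced):

```python
def pagerank(links, d=0.85, iters=100):
    """Minimal power-iteration PageRank.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of ranks summing to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += d * share
            else:                          # dangling page: spread rank evenly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# A page that everyone links to outranks the others.
r = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
assert r["c"] == max(r.values())
```

Link-based variants change how the `share` term is weighted; knowledge-based variants bias the `(1 - d) / n` teleportation vector toward semantically relevant pages.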

Automatic Switching of Clustering Methods based on Fuzzy Inference in Bibliographic Big Data Retrieval System

  • Zolkepli, Maslina;Dong, Fangyan;Hirota, Kaoru
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 14, No. 4
    • /
    • pp.256-267
    • /
    • 2014
  • An automatic switch among an ensemble of clustering algorithms is proposed as a part of a bibliographic big data retrieval system, utilizing a fuzzy inference engine as a decision-support tool to select the fastest-performing clustering algorithm among fuzzy C-means (FCM) clustering, Newman-Girvan clustering, and the combination of both. It aims to realize the best clustering performance while reducing computational complexity from O($n^3$) to O(n). The automatic switch is developed using a fuzzy logic controller written in Java and accepts three inputs from each clustering result: the number of clusters, the number of vertices, and the time taken to complete the clustering process. Experimental results on a PC (Intel Core i5-3210M at 2.50 GHz) demonstrate that the combination of both clustering algorithms is selected as the best-performing algorithm in 20 out of 27 cases, with the highest percentage of 83.99%, completed in 161 seconds. The self-adapted FCM is selected as the best-performing algorithm in 4 cases and Newman-Girvan in 3 cases. The automatic switch is to be incorporated into the bibliographic big data retrieval system, which focuses on visualization of fuzzy relationships using a hybrid approach combining the FCM and Newman-Girvan algorithms, and is planned to be released to the public through the Internet.
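
The switching idea can be illustrated with a toy fuzzy rule: fuzzify each candidate's runtime with a triangular membership function for "fast" and switch to the algorithm with the highest degree. The membership bounds and the single-input rule are assumptions; the paper's controller also takes the number of clusters and vertices as inputs:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pick_fastest(results):
    """Toy stand-in for the fuzzy inference engine: score each clustering
    result by how strongly its runtime is 'fast' and pick the winner.
    """
    def fast_degree(seconds):
        return tri(seconds, -1.0, 0.0, 300.0)   # 'fast' fades out at 300 s (assumed)
    return max(results, key=lambda name: fast_degree(results[name]))

# Runtimes loosely modeled on the abstract's reported 161-second winner.
runtimes = {"FCM": 412.0, "Newman-Girvan": 350.0, "FCM+NG": 161.0}
assert pick_fastest(runtimes) == "FCM+NG"
```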

Vibration-based delamination detection of composites using modal data and experience-based learning algorithm

  • Luo, Weili;Wang, Hui;Li, Yadong;Liang, Xing;Zheng, Tongyi
    • Steel and Composite Structures
    • /
    • Vol. 42, No. 5
    • /
    • pp.685-697
    • /
    • 2022
  • In this paper, a vibration-based method using the change ratios of modal data and the experience-based learning (EBL) algorithm is presented for quantifying the position, size, and interface layer of delamination in laminated composites. Three types of objective functions are examined and compared: one using frequency changes only, one using mode shape changes only, and one using their combination. A fine three-dimensional FE model with constraint equations is utilized to extract modal data. A series of numerical experiments is carried out on an eight-layer quasi-isotropic symmetric (0/-45/45/90)s composite beam to investigate the influence of the objective function, the number of modal data, the noise level, and the optimization algorithm. Numerical results confirm that the frequency-and-mode-shape-changes-based technique yields excellent results for all three delamination variables of the composites, and that the addition of mode shape information greatly improves the accuracy of interface layer prediction. Moreover, the EBL algorithm outperforms three other state-of-the-art optimization algorithms for vibration-based delamination detection of composites. A laboratory test on six CFRP beams validates the frequency-and-mode-shape-changes-based technique and again confirms its superiority for delamination detection of composites.
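
A common way to build the combined frequency-and-mode-shape objective described above is to sum squared frequency-change-ratio residuals with (1 − MAC) terms, where MAC is the Modal Assurance Criterion. The equal weighting below is an assumption; the paper's exact formulation may differ:

```python
def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors (1 = identical)."""
    num = sum(a * b for a, b in zip(phi_a, phi_b)) ** 2
    den = sum(a * a for a in phi_a) * sum(b * b for b in phi_b)
    return num / den

def objective(freqs_meas, freqs_model, shapes_meas, shapes_model):
    """Combined objective: squared frequency-change ratios plus (1 - MAC) terms."""
    f_term = sum(((fm - fx) / fm) ** 2
                 for fm, fx in zip(freqs_meas, freqs_model))
    s_term = sum(1.0 - mac(sm, sx)
                 for sm, sx in zip(shapes_meas, shapes_model))
    return f_term + s_term

# A model matching the measurements exactly scores zero; any frequency
# mismatch raises the objective, which the optimizer then minimizes.
f = [12.1, 33.4]
s = [[1.0, 0.5], [0.5, -1.0]]
assert objective(f, f, s, s) == 0.0
assert objective(f, [11.0, 33.4], s, s) > 0.0
```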

Multi-objective Optimization Model with AHP Decision-making for Cloud Service Composition

  • Liu, Li;Zhang, Miao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 9, No. 9
    • /
    • pp.3293-3311
    • /
    • 2015
  • Cloud services must often be composed into a single service to fulfill workflow applications. Service composition in the Cloud raises new challenges caused by the diversity of users with different QoS requirements and vague preferences, as well as by the geographically distributed nature of cloud computing. The selection of the best service composition is therefore a complex problem involving trade-offs among various QoS criteria. In this paper, we propose a Cloud service composition approach based on the evolutionary algorithms NSGA-II and MOPSO. We combine multi-objective evolutionary approaches with the AHP decision-making method to solve the Cloud service composition optimization problem: the weights generated by AHP are applied to the crowding distance calculations of the two evolutionary algorithms. Our algorithm beats single-objective algorithms in optimization ability, and, compared with general multi-objective algorithms, it captures users' preferences more precisely. Simulation results also show that our approach achieves better scalability.
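
The paper's key twist, folding AHP-derived weights into NSGA-II's crowding distance, can be sketched as scaling each objective's normalized neighbour gap by its weight before summing. This is a minimal sketch of the standard crowding distance with that one modification, under the assumption of one weight per objective:

```python
def weighted_crowding_distance(front, weights):
    """NSGA-II crowding distance with per-objective AHP weights.

    front: list of objective vectors; weights: one AHP weight per objective.
    Boundary solutions get infinite distance so the extremes are preserved.
    """
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: front[i][j])
        lo, hi = front[order[0]][j], front[order[-1]][j]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for k in range(1, n - 1):
            gap = front[order[k + 1]][j] - front[order[k - 1]][j]
            dist[order[k]] += weights[j] * gap / (hi - lo)
    return dist

# Three points on a 2-objective front; AHP weights favor the first objective.
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
d = weighted_crowding_distance(front, weights=[0.7, 0.3])
assert d[0] == float("inf") and d[2] == float("inf")
```

Selection then prefers larger distances, so crowding in a heavily weighted objective is penalized more, steering the population toward the user's stated preferences.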

EVRC의 고속 구현 알고리듬 (Fast Implementation Algorithms for EVRC)

  • 정성교;최용수;김남건;윤대희
    • 한국음향학회지
    • /
    • Vol. 20, No. 1
    • /
    • pp.43-49
    • /
    • 2001
  • EVRC (Enhanced Variable Rate Codec) has been adopted in the North American and Korean CDMA digital cellular systems and offers excellent performance at a bit rate of 8 kbps. This paper presents algorithms that implement the computationally demanding EVRC coder at high speed without degrading performance. The proposed fast algorithms realize efficient pitch search and fixed codebook search procedures; in the fixed codebook search, limiting the number of pulse-position combinations and using a shortened impulse response reduce the computational load to about 70% of the conventional method. Subjective speech-quality evaluation confirmed that the proposed fast EVRC algorithms require less computation than the conventional method without causing any degradation in speech quality.

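
The combination-limiting idea in the fixed codebook search can be illustrated generically: rather than evaluating every pulse-position combination, keep only the positions most correlated with the target signal and search combinations among those. The ranking criterion and parameters below are illustrative; the real EVRC track structure is not modelled:

```python
from itertools import combinations
from math import comb

def limited_pulse_search(correlation, pulses=3, keep=8):
    """Search pulse-position combinations only among the `keep` positions
    with the largest |correlation| with the target (toy selection criterion).
    """
    ranked = sorted(range(len(correlation)),
                    key=lambda i: abs(correlation[i]), reverse=True)
    candidates = ranked[:keep]
    best, best_score = None, float("-inf")
    for combo in combinations(candidates, pulses):
        score = sum(abs(correlation[i]) for i in combo)  # stand-in for the
        if score > best_score:                           # real search metric
            best, best_score = combo, score
    return best

corr = [0.1, -0.9, 0.2, 0.8, -0.05, 0.7, 0.3, -0.6, 0.4, 0.15,
        -0.25, 0.05, 0.35, -0.45, 0.55, 0.02, -0.65, 0.12, 0.22, -0.32]
best = limited_pulse_search(corr, pulses=3, keep=8)
assert set(best) == {1, 3, 5}          # the three largest-magnitude positions
assert comb(8, 3) < comb(20, 3)        # 56 combinations searched instead of 1140
```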

Fake News Detector using Machine Learning Algorithms

  • Diaa Salama;Yomna Ibrahim;Radwa Mostafa;Abdelrahman Tolba;Mariam Khaled;John Gerges
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 24, No. 7
    • /
    • pp.195-201
    • /
    • 2024
  • With the spread of Covid-19 (coronavirus) around the world, fake news has exploited both propaganda and citizens' desperate need for information about the mysterious virus. Some countries arrested people who spread such fake news, and others fined them. Since social media has become a significant source of news, there is a profound need to detect fake news there. The main aim of this research is to develop a web-based model using a combination of machine learning algorithms to detect fake news. The proposed model includes an advanced framework that identifies tweets containing fake news using context analysis. We assumed that Natural Language Processing (NLP) alone would not suffice for context analysis, as tweets are usually short and do not follow even the most straightforward syntactic rules, so we also used tweet features such as the number of retweets, the number of likes, and tweet length, and added a statistical credibility analysis of Twitter users. The proposed algorithms are tested on four different benchmark datasets. Finally, to get the best accuracy, we combined two of the best-performing algorithms: SVM (widely accepted as a baseline classifier, especially for binary classification problems) and Naive Bayes.
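
The final combination step, fusing the SVM and Naive Bayes classifiers, can be sketched as soft voting: average the two models' class-probability estimates and take the argmax. The abstract does not specify how the two are fused, so the equal weights here are an assumption:

```python
def soft_vote(p_svm, p_nb, w_svm=0.5, w_nb=0.5):
    """Combine two classifiers by weighted-averaging their class probabilities.

    p_svm, p_nb: dicts mapping class label -> predicted probability.
    Returns the label with the highest fused probability.
    """
    classes = p_svm.keys() | p_nb.keys()
    fused = {c: w_svm * p_svm.get(c, 0.0) + w_nb * p_nb.get(c, 0.0)
             for c in classes}
    return max(fused, key=fused.get)

# The SVM leans 'fake' strongly; Naive Bayes leans 'real' weakly:
label = soft_vote({"fake": 0.8, "real": 0.2}, {"fake": 0.45, "real": 0.55})
assert label == "fake"   # fused scores: fake 0.625 vs real 0.375
```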