• Title/Abstract/Keyword: computer algorithms

Search Results: 3,778

An Adaptive Web Caching Method based on the Heterogeneity of Web Object (웹 객체 이질성 기반의 적응형 웹캐싱 기법)

  • Ko, Il-Suk; Na, Yun-Ji; Leem, Chun-Seong
    • Proceedings of the Korea Information Processing Society Conference / 2004.05a / pp.1379-1382 / 2004
  • The use of caches for storing and serving Web objects continues to grow, and many studies address efficient management of cache storage. Web caching algorithms differ in many ways from traditional caching algorithms. In particular, the heterogeneity of Web objects, which are the processing units of Web caching, and the variation of object reference characteristics over time are major causes of performance degradation in existing algorithms. In this study, we propose a new Web caching algorithm. The proposed method reduces the heterogeneity variation of objects by managing Web objects and cache space separately according to their heterogeneity, and it adaptively reflects changes in object reference characteristics over time. Through two experimental models that consider object heterogeneity, we verified that the proposed method outperforms existing algorithms.
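
The abstract describes the approach only at a high level; the Python sketch below illustrates the general idea of dividing cache space by object class and evicting within each class. The class names, capacity ratios, and per-class LRU policy are assumptions for illustration, not the authors' algorithm.

```python
from collections import OrderedDict

class PartitionedWebCache:
    """Toy cache that splits its capacity across object classes
    (e.g. text, image, media) and runs LRU inside each partition.
    Ratios and eviction policy are illustrative, not from the paper."""

    def __init__(self, capacity_bytes, ratios=None):
        ratios = ratios or {"text": 0.3, "image": 0.4, "media": 0.3}
        self.capacity = {cls: capacity_bytes * r for cls, r in ratios.items()}
        self.used = {cls: 0 for cls in ratios}
        self.store = {cls: OrderedDict() for cls in ratios}  # url -> object size

    def get(self, cls, url):
        part = self.store[cls]
        if url in part:
            part.move_to_end(url)  # refresh recency on a hit
            return True
        return False

    def put(self, cls, url, size):
        part, cap = self.store[cls], self.capacity[cls]
        # Evict least-recently-used objects of the same class until the new one fits.
        while part and self.used[cls] + size > cap:
            _, old = part.popitem(last=False)
            self.used[cls] -= old
        if size <= cap:
            part[url] = size
            self.used[cls] += size

cache = PartitionedWebCache(capacity_bytes=1_000_000)
cache.put("image", "/img/logo.png", 120_000)
print(cache.get("image", "/img/logo.png"))  # True: hit within the image partition
```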


Fast Algorithms for Computing Floating-Point Reciprocal Cube Root Functions

  • Leonid Moroz; Volodymyr Samotyy; Cezary Walczyk
    • International Journal of Computer Science & Network Security / v.23 no.6 / pp.84-90 / 2023
  • In this article, the problem of computing floating-point reciprocal cube root functions is considered. Our new algorithms for this task decrease the number of arithmetic operations used for computing $1/{\sqrt[3]{x}}$. A new approach to the selection of magic constants is presented in order to minimize the computation time of reciprocal cube roots for arguments with a movable decimal point. The underlying theory enables partitioning of the base argument range x∈[1,8) into 3 segments, which in turn increases the accuracy of the initial function approximation and decreases the number of iterations to one. The three best algorithms were implemented and carefully tested on a 32-bit microcontroller with an ARM core. Their custom C implementations compared favourably with an algorithm based on the cbrtf(x) function from the C <math.h> library on three different hardware platforms. As a result, a new fast approximation algorithm for the function $1/{\sqrt[3]{x}}$ was identified that outperforms all other algorithms in terms of computation time and cycle count.
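
Only the outline of the technique appears above, so the Python sketch below illustrates the generic magic-constant approach to $1/{\sqrt[3]{x}}$ rather than the paper's optimized three-segment version: a bit-level initial guess refined by Newton-Raphson steps. The constant 0x54AAAAAA (4/3 of the float32 bit pattern of 1.0, 0x3F800000) and the two iterations are textbook assumptions, not the authors' values.

```python
import struct

MAGIC = 0x54AAAAAA  # crude constant = round(4/3 * 0x3F800000); NOT the paper's optimized value

def recip_cbrt(x: float) -> float:
    """Approximate 1/cbrt(x) for x > 0: bit-level initial guess plus
    two Newton-Raphson steps for f(y) = y**-3 - x."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]   # reinterpret float32 bits as uint32
    i = MAGIC - i // 3                                 # initial guess: exponent roughly divided by -3
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    for _ in range(2):                                 # Newton refinement: y <- y*(4 - x*y^3)/3
        y = y * (4.0 - x * y * y * y) / 3.0
    return y

print(recip_cbrt(8.0))   # ~0.5
print(recip_cbrt(27.0))  # ~0.333
```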

Deep Learning-Based Artificial Intelligence for Mammography

  • Jung Hyun Yoon; Eun-Kyung Kim
    • Korean Journal of Radiology / v.22 no.8 / pp.1225-1239 / 2021
  • During the past decade, researchers have investigated the use of computer-aided mammography interpretation. With the application of deep learning technology, artificial intelligence (AI)-based algorithms for mammography have shown promising results in the quantitative assessment of parenchymal density, detection and diagnosis of breast cancer, and prediction of breast cancer risk, enabling more precise patient management. AI-based algorithms may also enhance the efficiency of the interpretation workflow by reducing both the workload and interpretation time. However, more in-depth investigation is required to conclusively prove the effectiveness of AI-based algorithms. This review article discusses how AI algorithms can be applied to mammography interpretation, as well as the current challenges in their implementation in real-world practice.

Improving the Quality of Web Spam Filtering by Using Seed Refinement (시드 정제 기술을 이용한 웹 스팸 필터링의 품질 향상)

  • Qureshi, Muhammad Atif; Yun, Tae-Seob; Lee, Jeong-Hoon; Whang, Kyu-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.6 / pp.123-139 / 2011
  • Web spam has a significant influence on the ranking quality of web search results because it promotes unimportant web pages. Therefore, web search engines need to filter web spam. Web spam filtering refers to identifying spam pages, i.e., web pages contributing to web spam. TrustRank, Anti-TrustRank, Spam Mass, and Link Farm Spam are well-known web spam filtering algorithms in the research literature. The output of these algorithms depends upon the input seed; thus, refining the input seed may improve the quality of web spam filtering. In this paper, we propose seed refinement techniques for the four well-known spam filtering algorithms. We then modify the algorithms, which we call modified spam filtering algorithms, by applying these techniques to the original ones. In addition, we propose a strategy to achieve better quality for web spam filtering; in this strategy, we consider the possibility that the modified algorithms may support one another if placed in appropriate succession. In the experiments, we show the effect of seed refinement. We first show that our modified algorithms outperform the respective original algorithms in terms of the quality of web spam filtering. We then show that the best succession significantly outperforms both the best known original and the best modified algorithms, by up to 1.38 times within typical parameter ranges, in terms of recall while preserving precision.
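
Seed refinement matters because the trust scores these algorithms compute are propagated from the seed set. The Python sketch below shows a generic TrustRank-style propagation, not the paper's refined variants; the toy link graph, damping factor, and seed set are assumptions for illustration.

```python
import numpy as np

def trustrank(adj, seed, alpha=0.85, iters=50):
    """Generic TrustRank-style propagation.
    adj[i][j] = 1 if page i links to page j; seed = indices of trusted pages.
    Trust flows from the seed set along out-links, damped by alpha."""
    n = len(adj)
    A = np.asarray(adj, dtype=float)
    outdeg = A.sum(axis=1, keepdims=True)
    T = np.divide(A, outdeg, out=np.zeros_like(A), where=outdeg > 0)  # row-stochastic transition
    d = np.zeros(n)
    d[list(seed)] = 1.0 / len(seed)      # trust is seeded uniformly on the (refined) seed set
    t = d.copy()
    for _ in range(iters):
        t = alpha * T.T @ t + (1 - alpha) * d
    return t

# Toy graph: page 3 is reachable only indirectly, so it earns less trust than the seed's neighbors.
adj = [[0, 1, 1, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
print(trustrank(adj, seed={0}))
```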

Classifying Social Media Users' Stance: Exploring Diverse Feature Sets Using Machine Learning Algorithms

  • Kashif Ayyub; Muhammad Wasif Nisar; Ehsan Ullah Munir; Muhammad Ramzan
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.79-88 / 2024
  • The use of social media has become part of our daily lives. Social web channels provide content-generation facilities to their users, who can share their views, opinions, and experiences on particular topics. Researchers use social media content in various research areas. Sentiment analysis, one of the most active research areas of the last decade, is the process of extracting people's reviews, opinions, and sentiments. Sentiment analysis is applied in diverse sub-areas such as subjectivity analysis, polarity detection, and emotion detection. Stance classification has emerged as a new and interesting research area, as it aims to determine whether the content writer is in favor of, against, or neutral towards the target topic or issue. Stance classification is significant because it has many research applications, such as rumor stance classification, stance classification in public forums, claim stance classification, neural attention stance classification, online debate stance classification, and dialogic properties stance classification. This study explores different feature sets, such as lexical, sentiment-specific, and dialog-based features, extracted from standard datasets in the relevant area. Supervised learning approaches have been applied, including a generative algorithm (Naïve Bayes) and discriminative machine learning algorithms (Support Vector Machine, Decision Tree, and k-Nearest Neighbor), followed by ensemble-based algorithms (Random Forest and AdaBoost). The empirical results have been evaluated using the standard performance measures of accuracy, precision, recall, and F-measure.
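
As a rough illustration of the kind of pipeline the study describes, the scikit-learn sketch below trains two classifier families on TF-IDF lexical features. The toy texts, labels, and parameters are invented for illustration; the paper's sentiment-specific and dialog-based features and its standard datasets are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

# Tiny hypothetical dataset: text -> stance label (favor / against / neutral).
texts = ["I fully support this policy", "This policy is a disaster",
         "Not sure what to think about it", "Great initiative, long overdue",
         "They should scrap this immediately", "No strong opinion here"]
labels = ["favor", "against", "neutral", "favor", "against", "neutral"]

# Lexical (word n-gram TF-IDF) features feeding a discriminative classifier ...
svm_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
# ... and an ensemble classifier, mirroring the two families compared in the paper.
rf_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                       RandomForestClassifier(n_estimators=100, random_state=0))

for name, clf in [("LinearSVC", svm_clf), ("RandomForest", rf_clf)]:
    clf.fit(texts, labels)
    print(name)
    print(classification_report(labels, clf.predict(texts)))  # training-set report, illustration only
```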

Education of Algorithms Using the RAPTOR Programming Educational Tool (RAPTOR 프로그래밍 교육도구를 이용한 알고리즘 교육)

  • KIM, SungYul; LEE, JongYun
    • The Journal of Korean Association of Computer Education / v.18 no.6 / pp.23-31 / 2015
  • The main aim of software education is to improve problem-solving ability based on computational thinking, together with sound information ethics. For this purpose, many institutions have attempted various educational programs such as educational programming languages, physical computing, and robot education. However, the essence of computer education for computational thinking can be obscured if instruction focuses on particular educational programming languages and products. Therefore, this paper suggests a method of algorithm education using RAPTOR, a flowchart-based visual programming development environment. To verify the effectiveness of algorithm education using RAPTOR, a five-step, twelve-hour educational program was applied to 16 high-school students, and positive results were obtained.

Swarm Intelligence-based Power Allocation and Relay Selection Algorithm for wireless cooperative network

  • Xing, Yaxin; Chen, Yueyun; Lv, Chen; Gong, Zheng; Xu, Ling
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1111-1130 / 2016
  • Cooperative communications can significantly improve wireless transmission performance with the help of relay nodes. In cooperative communication networks, relay selection and power allocation are two key issues. In this paper, we propose a relay selection and power allocation scheme, RS-PA-PSACO (Relay Selection-Power Allocation-Particle Swarm Ant Colony Optimization), based on the PSACO (Particle Swarm Ant Colony Optimization) algorithm. This scheme effectively reduces computational complexity and selects the optimal relay nodes. PSACO, a swarm intelligence algorithm that combines PSO (Particle Swarm Optimization) and ACO (Ant Colony Optimization), is effective for solving non-linear optimization problems through a fast global search at low cost. The proposed RS-PA-PSACO algorithm simultaneously obtains the optimal relay selection and power allocation that minimize the SER (Symbol Error Rate) under a fixed total power constraint, in both AF (Amplify-and-Forward) and DF (Decode-and-Forward) modes. Simulation results show that the proposed scheme significantly improves system performance in both reliability and power efficiency, at low complexity.
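
The abstract does not detail PSACO, so the sketch below shows only a plain PSO search for a power allocation that minimizes a stand-in SER-like objective under a total power constraint. The objective, channel gains, and PSO parameters are assumptions, and the ACO component and relay-selection step of RS-PA-PSACO are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def ser_proxy(p, gains):
    """Stand-in objective: a smooth, decreasing function of per-link SNR.
    The real SER expressions for AF/DF relaying are not reproduced here."""
    return np.sum(np.exp(-gains * p))

def pso_power_allocation(gains, p_total, n_particles=30, iters=200):
    """Plain PSO over power vectors p >= 0 with sum(p) = p_total."""
    dim = len(gains)
    pos = rng.random((n_particles, dim))
    pos = p_total * pos / pos.sum(axis=1, keepdims=True)      # project onto the power budget
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([ser_proxy(p, gains) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1e-9, None)
        pos = p_total * pos / pos.sum(axis=1, keepdims=True)  # re-project after each move
        vals = np.array([ser_proxy(p, gains) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Two candidate relay links with different channel gains share a unit power budget.
print(pso_power_allocation(gains=np.array([2.0, 0.5]), p_total=1.0))
```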

A Parallel Genetic Algorithm for Solving Deadlock Problem within Multi-Unit Resources Systems

  • Ahmed, Rabie; Saidani, Taoufik; Rababa, Malek
    • International Journal of Computer Science & Network Security / v.21 no.12 / pp.175-182 / 2021
  • Deadlock is a situation in which two or more processes competing for resources are each waiting for the others to finish, and none ever does. There are two forms of systems, multi-unit and single-unit resource systems; the difference is the number of instances (or units) of each resource type. The deadlock problem can be modeled as a constrained combinatorial problem that seeks a schedule for the processes through which the system can avoid entering a deadlock state. Several algorithms and techniques have been introduced to solve the deadlock problem, and metaheuristics are among the most powerful. Genetic algorithms have been effective in solving many optimization problems, including the deadlock problem. In this paper, an improved parallel framework for the genetic algorithm is introduced and adapted effectively and efficiently to the deadlock problem. The proposed method is implemented in Java and tested on a specific dataset. The experiments show that the proposed approach can produce optimal solutions in terms of burst time and the number of feasible solutions in each successive generation. Furthermore, the proposed approach enables all types of crossovers to work with high performance.
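
As a rough sketch of the serial core of such an approach (the paper's parallel framework, Java implementation, and dataset are not reproduced), the Python example below runs a small genetic algorithm over process orderings, scoring each ordering by how many processes can complete in a toy multi-unit resource system. The resource numbers, operators, and fitness are assumptions.

```python
import random

random.seed(1)

# Toy multi-unit resource system (values are illustrative, not from the paper):
available = [3, 2]                       # free units of each resource type
allocation = [[1, 0], [2, 1], [0, 2]]    # units currently held by each process
need = [[2, 2], [1, 1], [3, 0]]          # further units each process still needs

def completed(order):
    """Fitness: how many processes finish when run in this order,
    releasing their allocation once their remaining need can be met."""
    free, done = list(available), 0
    for p in order:
        if all(need[p][r] <= free[r] for r in range(len(free))):
            for r in range(len(free)):
                free[r] += allocation[p][r]
            done += 1
    return done

def genetic_schedule(pop_size=20, generations=50):
    """Minimal GA over process orderings (order crossover + swap mutation)."""
    n = len(allocation)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=completed, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + [p for p in b if p not in a[:cut]]  # order crossover
            if random.random() < 0.2:                             # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = parents + children
    best = max(pop, key=completed)
    return best, completed(best)

print(genetic_schedule())   # a deadlock-free ordering for this toy system, if one exists
```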

Incremental Strategy-based Residual Regression Networks for Node Localization in Wireless Sensor Networks

  • Zou, Dongyao; Sun, Guohao; Li, Zhigang; Xi, Guangyong; Wang, Liping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.8 / pp.2627-2647 / 2022
  • The easy scalability and low cost of range-free localization algorithms have attracted wide attention and led to their application in node localization for wireless sensor networks. However, existing range-free localization algorithms still suffer from problems such as large cumulative errors and poor localization performance. To address these problems, an incremental strategy-based residual regression network is proposed for node localization in wireless sensor networks. The algorithm predicts the coordinates of the nodes to be localized by building a deep learning model and fine-tunes the predictions by regression, using a loss function based on the intersection of the communication ranges of the predicted and real coordinates, which improves localization performance. Moreover, a correction scheme is proposed to correct the augmented data in the incremental strategy, which reduces the cumulative error generated during localization. Simulation experiments demonstrate that the proposed algorithm is robust and has clear advantages in localization performance compared with other algorithms.
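
The abstract gives only the outline of the network, so the PyTorch sketch below is a generic residual regression model mapping range-free features (e.g. hop counts to anchor nodes) to 2-D coordinates. The architecture, the plain MSE loss, and the synthetic data are assumptions; the paper's communication-range-intersection loss, incremental strategy, and data-correction scheme are not reproduced.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Fully connected block with a skip connection: y = relu(x + F(x))."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.net(x))

class LocalizationNet(nn.Module):
    """Maps per-node range-free features to 2-D coordinates via stacked residual blocks."""
    def __init__(self, n_features, hidden=64, blocks=3):
        super().__init__()
        self.inp = nn.Linear(n_features, hidden)
        self.body = nn.Sequential(*[ResidualBlock(hidden) for _ in range(blocks)])
        self.out = nn.Linear(hidden, 2)
    def forward(self, x):
        return self.out(self.body(torch.relu(self.inp(x))))

# Synthetic training data: hop-count-like features -> known coordinates.
torch.manual_seed(0)
X, y = torch.rand(256, 8), torch.rand(256, 2)
model, loss_fn = LocalizationNet(8), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)     # plain MSE here; the paper adds a range-intersection term
    loss.backward()
    opt.step()
print(float(loss))
```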

Parallel Multithreaded Processing for Data Set Summarization on Multicore CPUs

  • Ordonez, Carlos; Navas, Mario; Garcia-Alvarado, Carlos
    • Journal of Computing Science and Engineering / v.5 no.2 / pp.111-120 / 2011
  • Data mining algorithms should exploit new hardware technologies to accelerate computations. Such a goal is difficult to achieve in a database management system (DBMS) due to its complex internal subsystems and because the numeric computations of data mining over large data sets are difficult to optimize. This paper explores taking advantage of the multithreaded capabilities of multicore CPUs, as well as caching in RAM, to efficiently compute summaries of a large data set, a fundamental data mining problem. We introduce parallel algorithms working on multiple threads, which overcome the row aggregation bottleneck of accessing secondary storage while maintaining linear time complexity with respect to data set size. Our proposal is based on a combination of table scans and parallel multithreaded processing among multiple cores in the CPU. We introduce several database-style and hardware-level optimizations: caching row blocks of the input table, managing available RAM, interleaving I/O and CPU processing, and tuning the number of working threads. We experimentally benchmark our algorithms with large data sets on a DBMS running on a computer with a multicore CPU. We show that our algorithms outperform existing DBMS mechanisms in computing aggregations of multidimensional data summaries, especially as dimensionality grows. Furthermore, we show that local memory allocation (RAM block size) does not have a significant impact when the thread management algorithm distributes the workload among a fixed number of threads. Our proposal is unique in that we do not modify or require access to the DBMS source code; instead, we extend the DBMS with analytic functionality by developing User-Defined Functions.
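
As a rough illustration of the scan-and-merge pattern described above (outside any DBMS and without User-Defined Functions), the Python sketch below partitions a table into row blocks, summarizes each block in a worker thread, and merges the partial results. The choice of statistics (row count, column sums, sums of squares) is an assumption about what a "summary" contains.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def block_summary(block):
    """Per-block sufficient statistics: row count n, column sums L,
    and sums of squares Q (enough to derive means and variances later)."""
    return block.shape[0], block.sum(axis=0), (block * block).sum(axis=0)

def summarize(table, n_threads=4):
    """Split the table into row blocks, summarize blocks in parallel threads,
    then merge the partial results."""
    blocks = np.array_split(table, n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        parts = list(pool.map(block_summary, blocks))
    n = sum(p[0] for p in parts)
    L = np.sum([p[1] for p in parts], axis=0)
    Q = np.sum([p[2] for p in parts], axis=0)
    return n, L, Q

rows = np.random.default_rng(0).normal(size=(1_000_000, 4))
n, L, Q = summarize(rows)
print("mean:", L / n)
print("variance:", Q / n - (L / n) ** 2)
```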