• Title/Summary/Keyword: 퀵정렬 (Quicksort)

Search results: 16

An Implementation of Efficient Quicksort Utilizing SIMD-Based VBP Technique (SIMD 기반의 VBP 기법을 적용한 효율적인 퀵정렬의 구현)

  • Hong, Gilseok;Kim, Hongyeon;Kang, Seonghyeon;Min, Jun-Ki
    • KIISE Transactions on Computing Practices, v.23 no.8, pp.498-503, 2017
  • SIMD (Single Instruction Multiple Data) is a representative parallel architecture that processes multiple data items loaded into a SIMD register with a single instruction. Quicksort is a sorting algorithm that picks an element of the array as a pivot and reorders the array so that all elements less than the pivot are placed to its left and all elements greater than the pivot are placed to its right, and then performs the same task on both sublists recursively. In this paper, we propose an efficient Quicksort algorithm using SIMD instructions that invokes as few conditional branches as possible, avoiding the performance degradation caused by branch misprediction in a pipelined architecture. In addition, we improve the performance of the Quicksort algorithm by fetching data into a SIMD register byte by byte in order to apply VBP (Vertical Bit Parallel) and an early pruning technique.
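
  As a rough illustration of the branch-light, data-parallel partitioning idea described in this abstract, the following Python/NumPy sketch compares a whole block of elements against the pivot with bulk mask operations rather than a per-element branch. It is not the authors' SIMD/VBP implementation; the function names and structure are assumptions for illustration.

      # Illustrative sketch only (not the authors' SIMD/VBP code): the Quicksort
      # partition step expressed with NumPy boolean masks, so that a whole block of
      # elements is compared against the pivot in one data-parallel operation
      # instead of a per-element conditional branch.
      import numpy as np

      def branch_light_partition(arr, pivot):
          mask = arr < pivot               # one vectorized comparison over the block
          return arr[mask], arr[~mask]     # no per-element if/else in the Python code

      def quicksort_vectorized(arr):
          if arr.size <= 1:
              return arr
          mid = arr.size // 2
          pivot = arr[mid]
          left, right = branch_light_partition(np.delete(arr, mid), pivot)
          return np.concatenate([quicksort_vectorized(left), [pivot],
                                 quicksort_vectorized(right)])

      data = np.random.randint(0, 1000, size=64)
      assert np.array_equal(quicksort_vectorized(data), np.sort(data))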

3-Points Average Pivot Quicksort (3-점 평균 피벗 퀵정렬)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.14 no.6, pp.295-301, 2014
  • In the absence of a sorting algorithm faster than O(n log n), Quicksort remains the best and fastest of its kind in practice. For n given data, Quicksort runs in O(n log n) at best and $O(n^2)$ at worst. In this paper, I propose an algorithm in which the 3-points average P=(L+M+H)/3 is used as the pivot, where L=a[s] is the first element, H=a[e] the last element, and $M=a[\lfloor(s+e)/2\rfloor]$ the middle element, in order to run faster than Quicksort. Test results show that the proposed 3-points average pivot Quicksort has a time complexity of O(n log n) in the best, average, and worst cases. The proposed algorithm thus reduces the $O(n^2)$ worst case of Quicksort to O(n log n).
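
  A minimal Python sketch of the pivot rule described above, assuming numeric keys: the 3-points average P=(L+M+H)/3 is used as the partition value in a Hoare-style quicksort. This is an illustration of the idea, not the author's implementation.

      # Illustrative sketch of the 3-points average pivot rule P = (L + M + H) / 3,
      # where L, M, H are the first, middle, and last elements of the subarray.
      # Not the author's implementation; assumes numeric keys.
      def quicksort_3pt_avg(a, s=0, e=None):
          if e is None:
              e = len(a) - 1
          if s >= e:
              return
          L, M, H = a[s], a[(s + e) // 2], a[e]
          p = (L + M + H) / 3                  # pivot is a value, not necessarily an element
          i, j = s, e
          while i <= j:                        # Hoare-style partition around the value p
              while a[i] < p:
                  i += 1
              while a[j] > p:
                  j -= 1
              if i <= j:
                  a[i], a[j] = a[j], a[i]
                  i += 1
                  j -= 1
          quicksort_3pt_avg(a, s, j)
          quicksort_3pt_avg(a, i, e)

      data = [5, 3, 8, 1, 9, 2, 7]
      quicksort_3pt_avg(data)
      assert data == sorted(data)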

Finding the Worst-case Instances of Some Sorting Algorithms Using Genetic Algorithms (유전 알고리즘을 이용한 정렬 알고리즘의 최악의 인스턴스 탐색)

  • Jeon, So-Yeong;Kim, Yong-Hyuk
    • Proceedings of the Korean Information Science Society Conference, 2010.06b, pp.1-5, 2010
  • Taking the number of element comparisons used by a sorting algorithm as the criterion, we call a permutation that forces a large number of comparisons a worst-case instance, and we use a genetic algorithm to search for such instances. Experiments were performed on the well-known quick sort, merge sort, heap sort, insertion sort, shell sort, and an advanced quick sort. For merge sort and insertion sort, the instances found were very close to worst-case instances. For quick sort, finding worst-case instances became harder as the input size grew. For the remaining sorts, it cannot be theoretically guaranteed that the instances found are worst-case instances, but their comparison counts were far higher than the average comparison count obtained by sorting 1,000 random permutations. This attempt to search for worst-case instances is significant in that it generates test data for verifying the performance of algorithms.
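
  To make the approach concrete, the sketch below is a small, mutation-only genetic search over permutations that tries to maximize the comparison count of a simple quick sort. It is not the paper's implementation; the population size, generation count, and operators are illustrative assumptions.

      # Minimal illustrative sketch, not the paper's implementation: a mutation-only
      # genetic search over permutations that tries to maximize the number of
      # comparisons made by a simple quick sort. Population size, generation count,
      # and operators are assumptions.
      import random

      def quicksort_comparisons(a):
          """Count pivot comparisons made by a basic first-element-pivot quick sort."""
          count = 0
          def qsort(lst):
              nonlocal count
              if len(lst) <= 1:
                  return lst
              pivot = lst[0]
              count += len(lst) - 1            # one comparison per non-pivot element
              left = [x for x in lst[1:] if x < pivot]
              right = [x for x in lst[1:] if x >= pivot]
              return qsort(left) + [pivot] + qsort(right)
          qsort(list(a))
          return count

      def mutate(perm, swaps=3):
          perm = list(perm)
          for _ in range(swaps):               # a few random transpositions
              i, j = random.randrange(len(perm)), random.randrange(len(perm))
              perm[i], perm[j] = perm[j], perm[i]
          return perm

      def search_worst_case(n=32, pop_size=40, generations=200):
          population = [random.sample(range(n), n) for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=quicksort_comparisons, reverse=True)
              survivors = population[:pop_size // 2]          # truncation selection
              offspring = [mutate(random.choice(survivors)) for _ in survivors]
              population = survivors + offspring
          best = max(population, key=quicksort_comparisons)
          return best, quicksort_comparisons(best)

      perm, comparisons = search_worst_case()
      print(comparisons, "comparisons found; the true worst case for n=32 is", 32 * 31 // 2)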

A New Sort Algorithm : Information Block Sort Algorithm(IBSA) (새로운 정렬 알고리즘 : 정보 블록 정렬 알고리즘)

  • 송태옥;김태영
    • Proceedings of the Korean Information Science Society Conference, 2000.10a, pp.560-562, 2000
  • In this paper, we propose the Information Block Sort Algorithm (IBSA), which uses the Information Block Preprocessing Algorithm (IBPA), and evaluate its performance. The time complexity of IBSA is O(N), and it is not affected by the distribution of the data. Performance measurements of IBPA show that, when sorting two million random data items, the case allowing duplicate values (a) could be sorted with only about 32.42% of the comparisons of quick sort and about 9% of those of radix sort, while the case without duplicate values (b) could be sorted with only about 53.12% of the comparisons of quick sort and about 12.79% of those of radix sort.

A New Sort Algorithm : Information Block Sort Algorithm(IBSA) (정보 블록 정렬 알고리즘)

  • Song, Tae-Ok;Jung, Sang-Wuk;Kim, Tae-Young
    • Proceedings of the Korea Information Processing Society Conference, 2000.10a, pp.195-198, 2000
  • In this paper, we propose the Information Block Sort Algorithm (IBSA), which uses the Information Block Preprocessing Algorithm (IBPA), and evaluate its performance. The time complexity of IBSA is O(N), and it is not affected by the distribution of the data. Performance measurements of IBPA show that, when sorting two million random data items, the case allowing duplicate values (a) could be sorted with only about 32.42% of the comparisons of quick sort and about 9% of those of radix sort, while the case without duplicate values (b) could be sorted with only about 53.12% of the comparisons of quick sort and about 12.79% of those of radix sort.

Quicksort Using Range Pivot (범위 피벗 퀵정렬)

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information, v.17 no.4, pp.139-145, 2012
  • Generally, Quicksort selects the pivot from the leftmost, rightmost, middle, or a random location in the array. This paper suggests a Quicksort that uses a middle-of-range pivot $P_0$ and continually divides the range into two. The method first finds the minimum value $L$ and maximum value $H$ of the list $A$ of length n, computes the initial pivot key $P_0=(H+L)/2$, and swaps pairs with $a[i] \geq P_0$ and $a[j]<P_0$ until $i=j$ or $i>j$. After the swaps, the list $A_0$ is separated into two lists $a[1] \leq A_1 \leq a[j]$ and $a[i] \leq A_2 \leq a[n]$, whose pivot values are chosen as $P_1=P_0/2$ and $P_2=P_0+P_1$. This process is repeated until the length of a partial list is two; when a list has length two and $a[1]>a[2]$, the elements are swapped as $a[1] \leftrightarrow a[2]$. This method has a simpler pivot-selection process than Quicksort and improves the worst-case computational complexity from $O(n^2)$ to $O(n \log n)$.
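
  The sketch below is one interpretation of this range-pivot idea, not the author's code: each sublist is partitioned around the midpoint of its value range, the range is recomputed per sublist for robustness rather than updated by the exact $P_1=P_0/2$, $P_2=P_0+P_1$ rule, and a length-two sublist is finished with a single compare-and-swap.

      # One interpretation of the range-pivot idea (not the author's code): the pivot
      # is the midpoint of the current sublist's value range, and a length-two sublist
      # is finished with a single compare-and-swap. The range is recomputed per sublist
      # for robustness instead of using the exact P1 = P0/2, P2 = P0 + P1 update.
      def range_pivot_quicksort(a, s=0, e=None):
          if e is None:
              e = len(a) - 1
          if s >= e:
              return
          if e - s == 1:                       # length-two sublist: compare and swap
              if a[s] > a[e]:
                  a[s], a[e] = a[e], a[s]
              return
          lo, hi = min(a[s:e + 1]), max(a[s:e + 1])
          if lo == hi:                         # all keys equal: already sorted
              return
          p = (lo + hi) / 2                    # pivot key P = (H + L) / 2, a value
          i, j = s, e
          while i <= j:                        # swap a[i] >= P with a[j] < P
              while a[i] < p:
                  i += 1
              while a[j] >= p:
                  j -= 1
              if i < j:
                  a[i], a[j] = a[j], a[i]
          range_pivot_quicksort(a, s, j)       # keys in [lo, P)
          range_pivot_quicksort(a, i, e)       # keys in [P, hi]

      data = [9, 3, 7, 1, 8, 2, 6, 4]
      range_pivot_quicksort(data)
      assert data == sorted(data)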

Probabilistic analysis of efficiencies for sorting algorithms with a finite number of records based on an asymptotic algorithm analysis (점근적 분석 모형에 기초한 유한개 레코드 정렬 알고리즘 효율성의 확률적 분석)

  • 김숙영
    • Journal of the Korea Computer Industry Society, v.5 no.2, pp.325-330, 2004
  • The Big O notation used in sorting algorithm analysis is an asymptotic analysis that only gives a rough mathematical function as the sample size increases without bound, without specifying any probabilistic model. Hence, in applications with a limited, finite number of data, it is necessary to test the efficiencies of sorting algorithms empirically. I estimated probabilistic models that relate the number of exchanges to the input size. The estimated models explaining the relationship between sorting efficiency and sample size (N the sample size, S the number of element exchanges) are S=0.9305 $N^{1.339}$ for the Quick sort algorithm with O(n log n) time complexity and S=0.2232 $N^{2.0130}$ for the Insertion sort algorithm with $O(n^2)$ time complexity. Furthermore, there is strong evidence that more than 99% of the above relationship is explained by the estimated models (p<0.001). These findings suggest that it is necessary to analyze sorting algorithm efficiency in applications with a finite number of data or for a newly developed sorting algorithm.
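
  The sketch below shows, under assumed data and a standard least-squares fit on the log-log scale, how a power-law model S = a*N^b of exchange counts can be estimated; the sort, input sizes, and fitting routine are illustrative assumptions, not the paper's procedure.

      # Illustrative sketch, with assumed data and method: estimate a power-law model
      # S = a * N**b for the number of element exchanges S as a function of input size
      # N by least squares on the log-log scale, in the spirit of the models above.
      import random
      import numpy as np

      def insertion_sort_exchanges(a):
          """Count element exchanges made by a plain insertion sort."""
          a = list(a)
          swaps = 0
          for i in range(1, len(a)):
              j = i
              while j > 0 and a[j - 1] > a[j]:
                  a[j - 1], a[j] = a[j], a[j - 1]
                  swaps += 1
                  j -= 1
          return swaps

      sizes = [100, 200, 400, 800, 1600]
      counts = [insertion_sort_exchanges(random.sample(range(10 * n), n)) for n in sizes]

      # Fit log S = log a + b * log N, i.e. S = a * N**b
      b, log_a = np.polyfit(np.log(sizes), np.log(counts), 1)
      print(f"S = {np.exp(log_a):.4f} * N^{b:.4f}")    # b comes out near 2 for insertion sort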

Proposal of Fast Counting Sort (빠른 계수 정렬법의 제안)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.15 no.5, pp.61-68, 2015
  • Among comparison sorts, no algorithm can beat the established lower bound of O(n log n) operations. Quicksort, the fastest of its kind, has a complexity of O(n log n) in the best and average cases and $O(n^2)$ in the worst case. This paper therefore presents two methods: the first is a simple O(n+k) counting sort that operates much faster than the standard O(n+k) counting sort (k = maximum value), and the second is an O(ln) radix counting sort that counts the frequency of the numbers in digit l of the data and saves it in a corresponding virtual bucket in an array, thereby virtually dividing the array by radix digits. For the six experimental data sets, the proposed algorithms turn the O(n log n) or $O(n^2)$ behavior of Quicksort into O(n+k) or O(ln). In the end, the proposed sorting algorithms proved to be much faster than counting sort and Quicksort.
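
  For reference, the sketch below is the textbook O(n+k) counting sort that the proposed methods build on; the paper's faster simple counting sort and its O(ln) radix counting sort themselves are not reproduced here.

      # Minimal sketch of a standard O(n + k) counting sort (k = maximum key value),
      # shown for reference; the paper's faster "simple counting sort" and its O(ln)
      # radix counting sort are variations on this frequency-counting idea and are
      # not reproduced here.
      def counting_sort(a, k=None):
          """Sort non-negative integers, counting how often each key value occurs."""
          if not a:
              return []
          if k is None:
              k = max(a)
          freq = [0] * (k + 1)
          for x in a:                          # count the frequency of each key
              freq[x] += 1
          out = []
          for value, count in enumerate(freq): # emit each key as many times as it occurred
              out.extend([value] * count)
          return out

      assert counting_sort([4, 1, 3, 4, 0, 2]) == [0, 1, 2, 3, 4]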

A Study on Information Block Sort Algorithm (정보 블록 정렬 알고리즘에 관한 연구)

  • Song, Tae-Ok
    • The Journal of Korean Association of Computer Education, v.6 no.3, pp.1-8, 2003
  • In this paper, I proposed a sort algorithm named the Information Block Sort Algorithm (IBSA), which is not influenced by the distribution of data in the list and has a time complexity of O(N log N). I also evaluated IBSA using a simulator. Performance analysis shows that, when sorting two million randomly generated data items, the number of actual comparisons was about 36% of the number of comparisons of the improved Quick sort algorithm and 22% of that of the Quick sort algorithm.
