• Title/Summary/Keyword: Rank algorithm


A Distributed Vertex Rearrangement Algorithm for Compressing and Mining Big Graphs (대용량 그래프 압축과 마이닝을 위한 그래프 정점 재배치 분산 알고리즘)

  • Park, Namyong;Park, Chiwan;Kang, U
    • Journal of KIISE
    • /
    • v.43 no.10
    • /
    • pp.1131-1143
    • /
    • 2016
  • How can we effectively compress big graphs composed of billions of edges? By concentrating non-zeros in the adjacency matrix through vertex rearrangement, we can compress big graphs more efficiently. Also, we can boost the performance of several graph mining algorithms such as PageRank. SlashBurn is a state-of-the-art vertex rearrangement method. It processes real-world graphs effectively by exploiting the power-law characteristic of real-world networks. However, the original SlashBurn algorithm displays a noticeable slowdown for large-scale graphs, and cannot be used at all when graphs are too large to fit in a single machine, since it is designed to run on a single machine. In this paper, we propose a distributed SlashBurn algorithm to overcome these limitations. Distributed SlashBurn processes big graphs much faster than the original SlashBurn algorithm does. In addition, it scales up well by performing the large-scale vertex rearrangement process in a distributed fashion. In our experiments using real-world big graphs, the proposed distributed SlashBurn algorithm was found to run more than 45 times faster than its single-machine counterpart, and to process graphs 16 times larger than the original method could handle.
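The core idea of SlashBurn, peeling off high-degree hub vertices so that non-zeros concentrate in the reordered adjacency matrix, can be illustrated with a minimal single-machine sketch (function name and the simplification to pure hub peeling are assumptions; the paper's distributed algorithm also separates spokes by connected component):

```python
from collections import defaultdict

def slashburn_order(edges, k=1):
    """Simplified SlashBurn-style vertex ordering: repeatedly peel off the
    k highest-degree hubs of the remaining graph and place them at the
    front, so hub rows/columns cluster together in the adjacency matrix."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    order = []
    remaining = set(adj)
    while remaining:
        # degree counted only within the not-yet-ordered subgraph
        hubs = sorted(remaining,
                      key=lambda n: len(adj[n] & remaining),
                      reverse=True)[:k]
        order.extend(hubs)
        remaining -= set(hubs)
    return order
```

On a star graph the center is peeled first, which is the hub-concentration behavior the abstract describes.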

Comparison of a Deep Learning-Based Reconstruction Algorithm with Filtered Back Projection and Iterative Reconstruction Algorithms for Pediatric Abdominopelvic CT

  • Wookon Son;MinWoo Kim;Jae-Yeon Hwang;Young-Woo Kim;Chankue Park;Ki Seok Choo;Tae Un Kim;Joo Yeon Jang
    • Korean Journal of Radiology
    • /
    • v.23 no.7
    • /
    • pp.752-762
    • /
    • 2022
  • Objective: To compare a deep learning-based reconstruction (DLR) algorithm for pediatric abdominopelvic computed tomography (CT) with filtered back projection (FBP) and iterative reconstruction (IR) algorithms. Materials and Methods: Post-contrast abdominopelvic CT scans obtained from 120 pediatric patients (mean age ± standard deviation, 8.7 ± 5.2 years; 60 males) between May 2020 and October 2020 were evaluated in this retrospective study. Images were reconstructed using FBP, a hybrid IR algorithm (ASiR-V) with blending factors of 50% and 100% (AV50 and AV100, respectively), and a DLR algorithm (TrueFidelity) with three strength levels (low, medium, and high). Noise power spectrum (NPS) and edge rise distance (ERD) were used to evaluate noise characteristics and spatial resolution, respectively. Image noise, edge definition, overall image quality, lesion detectability and conspicuity, and artifacts were qualitatively scored by two pediatric radiologists, and the scores of the two reviewers were averaged. A repeated-measures analysis of variance followed by the Bonferroni post-hoc test was used to compare NPS and ERD among the six reconstruction methods. The Friedman rank sum test followed by the Nemenyi-Wilcoxon-Wilcox all-pairs test was used to compare the results of the qualitative visual analysis among the six reconstruction methods. Results: The NPS noise magnitude of AV100 was significantly lower than that of the DLR, whereas the NPS peak of AV100 was significantly higher than that of the high- and medium-strength DLR (p < 0.001). The NPS average spatial frequencies were higher for DLR than for ASiR-V (p < 0.001). ERD was shorter with DLR than with ASiR-V and FBP (p < 0.001). Qualitative visual analysis revealed better overall image quality with high-strength DLR than with ASiR-V (p < 0.001). Conclusion: For pediatric abdominopelvic CT, the DLR algorithm may provide improved noise characteristics and better spatial resolution than the hybrid IR algorithm.

Cost-based Optimization of Extended Boolean Queries (확장 불리언 질의에 대한 비용 기반 최적화)

  • 박병권
    • Journal of the Korean Society for information Management
    • /
    • v.18 no.3
    • /
    • pp.29-40
    • /
    • 2001
  • In this paper, we suggest a query optimization algorithm to select the optimal method of processing an extended boolean query on inverted files. There can be many methods for processing an extended boolean query, according to the processing sequence of the keywords contained in the query. In this sense, the problem of optimizing an extended boolean query is essentially that of optimizing the keyword sequence in the query. In this paper, we show that the problem is basically analogous to the problem of finding the optimal join order in database query optimization, and apply ideas from that area to the problem. We establish a cost model for processing an extended boolean query and develop an algorithm to find the optimal keyword-processing sequence based on the concept of keyword rank, using the keyword selectivity and the access costs of the inverted file. We prove that the method selected by the optimization algorithm is indeed optimal, and show through experiments that the optimal method is superior to the others in performance. We believe that the suggested optimization algorithm will contribute to a significant enhancement of information retrieval performance.
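The keyword-rank idea is analogous to selectivity-based join ordering: process cheap, highly selective keywords first so intermediate result sets stay small. A minimal sketch, with the rank defined (as an assumption, not the paper's exact cost model) as selectivity weighted by inverted-file access cost:

```python
def keyword_processing_order(stats):
    """Order keywords for processing an extended boolean query.

    stats maps keyword -> (selectivity, access_cost); a lower
    selectivity * access_cost product means the keyword prunes more
    of the candidate set per unit of inverted-file I/O, so it is
    processed earlier (analogous to join ordering in databases)."""
    return sorted(stats, key=lambda k: stats[k][0] * stats[k][1])
```

A rare keyword with moderate access cost is scheduled before a common, cheap one, because it shrinks the intermediate set far more.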


Joint Time Delay and Angle Estimation Using the Matrix Pencil Method Based on Information Reconstruction Vector

  • Li, Haiwen;Ren, Xiukun;Bai, Ting;Zhang, Long
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.5860-5876
    • /
    • 2018
  • A single snapshot provides only a limited amount of information, so the covariance matrix is rank-deficient and traditional super-resolution methods cannot be applied directly to the parameter estimation. To solve this problem, a joint time delay and angle estimation method for orthogonal frequency division multiplexing (OFDM) signals is proposed, using the matrix pencil method based on an information reconstruction vector. Firstly, according to the channel frequency response vector of each array element, the algorithm reconstructs the vector data carrying the delay and angle parameter information from both the frequency and space dimensions. Then the enhanced data matrix for the extended array elements is constructed, and the time delay and angle parameter vector is estimated by the two-dimensional matrix pencil (2D MP) algorithm. Finally, the joint estimation of the two-dimensional parameters is accomplished by parameter pairing. The algorithm does not need a pseudo-spectral peak search, and the location of the target can be determined by a single receiver alone, which reduces the overhead of the positioning system. The theoretical analysis and simulation results show that the estimation accuracy of the proposed method in a single-snapshot, low signal-to-noise ratio environment is much higher than that of the Root Multiple Signal Classification algorithm (Root-MUSIC), and that this method also achieves higher estimation performance and efficiency, with lower complexity, compared to the one-dimensional matrix pencil algorithm.
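The matrix pencil principle the paper builds on can be shown in its one-dimensional form (the paper's 2D MP variant pairs delay and angle; this is only the 1D building block, with names assumed): stack the samples into a Hankel matrix, split it into two shifted sub-matrices, and read the signal pole off the pencil eigenvalues, no spectral peak search required.

```python
import numpy as np

def matrix_pencil_freq(x, L):
    """Estimate the normalized frequency of a single complex exponential
    from samples x via the 1D matrix pencil method with pencil
    parameter L (typically N/3 <= L <= N/2)."""
    N = len(x)
    # Hankel-structured data matrix; consecutive columns differ by one shift
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # pencil eigenvalues: the signal pole z = exp(j*2*pi*f) dominates
    z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    return np.angle(z[np.argmax(np.abs(z))]) / (2 * np.pi)
```

For a noiseless exponential the shift relation Y1 = z·Y0 holds exactly, so the dominant eigenvalue recovers the frequency without any search grid.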

Study on Volume Measurement of Cerebral Infarct using SVD and the Bayesian Algorithm (SVD와 Bayesian 알고리즘을 이용한 뇌경색 부피 측정에 관한 연구)

  • Kim, Do-Hun;Lee, Hyo-Young
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.5
    • /
    • pp.591-602
    • /
    • 2021
  • Acute ischemic stroke (AIS) should be diagnosed within a few hours of the onset of cerebral infarction symptoms using diagnostic radiology. In this study, we evaluated the clinical usefulness of SVD and the Bayesian algorithm for measuring the volume of cerebral infarction using computed tomography perfusion (CTP) imaging and magnetic resonance diffusion-weighted imaging (MR DWI). We retrospectively included 50 patients (male : female = 33 : 17) who visited the emergency department with symptoms of AIS from September 2017 to September 2020. The cerebral infarct volume measured by SVD and the Bayesian algorithm was analyzed using the Wilcoxon signed rank test and expressed as a median value and an interquartile range of 25 - 75 %. The core volume measured by SVD and the Bayesian algorithm using CTP imaging was 18.07 (7.76 - 33.98) cc and 47.3 (23.76 - 79.11) cc, respectively, while the penumbra volume was 140.24 (117.8 - 176.89) cc and 105.05 (72.52 - 141.98) cc, respectively. The mismatch ratio was 7.56 % (4.36 - 15.26 %) and 2.08 % (1.68 - 2.77 %) for SVD and the Bayesian algorithm, respectively, and all the measured values had statistically significant differences (p < 0.05). Spearman's correlation analysis showed that the correlation coefficient of the cerebral infarct volume measured by the Bayesian algorithm using CTP imaging and MR DWI was higher than that of the cerebral infarct volume measured by SVD using CTP imaging and MR DWI (r = 0.915 vs. r = 0.763; p < 0.01). Furthermore, the results of the Bland-Altman plot analysis demonstrated that the slope of the scatter plot of the cerebral infarct volume measured by the Bayesian algorithm using CTP imaging and MR DWI was closer to zero than that of the cerebral infarct volume measured by SVD using CTP imaging and MR DWI (y = -0.065 vs. y = -0.749), indicating that the Bayesian algorithm was more reliable than SVD. In conclusion, the Bayesian algorithm is more accurate than SVD in measuring cerebral infarct volume. Therefore, it can be useful in clinical practice.
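The SVD approach referenced here is truncated-SVD deconvolution of the perfusion signal. A generic sketch of the technique (function name and threshold are assumptions, not the study's implementation): solve the ill-conditioned linear system while discarding near-zero singular values so they do not amplify noise.

```python
import numpy as np

def tsvd_solve(A, b, tol=1e-3):
    """Truncated-SVD solution of A x ~= b: singular values below
    tol * s_max are discarded, regularizing the deconvolution at the
    cost of some bias (the trade-off Bayesian methods try to improve)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
```

On a well-conditioned system the truncation changes nothing and the exact solution is recovered; on a near-singular convolution matrix it suppresses the unstable components.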

Parallelization of sheet forming analysis program using MPI (MPI를 이용한 판재성형해석 프로그램의 병렬화)

  • Kim, Eui-Joong;Suh, Yeong-Sung
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.22 no.1
    • /
    • pp.132-141
    • /
    • 1998
  • A parallel version of a sheet forming analysis program was developed. This version is compatible with any parallel computer that supports MPI, one of the most recent and popular message passing libraries. For this purpose, SERI-SFA, a vector version running on the Cray Y-MP C90, a sequential vector computer, was used as the source code. For the sake of effectiveness, the parallelization was focused on selected parts after ranking the CPU time consumed in an exemplary calculation on the Cray Y-MP C90. The subroutines associated with the contact algorithm were selected as the target parts. For this work, MPI was used as the message passing library. For performance verification, an oil pan and an S-rail forming simulation were carried out. The performance was checked by the kernel and total CPU times, along with the theoretical performance predicted by Amdahl's Law. The results showed some performance improvement within the limits of the selective parallelization.
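The theoretical performance bound invoked here is Amdahl's Law, which explains why selectively parallelizing only the contact subroutines caps the achievable speedup. A one-line formulation:

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: theoretical speedup when a fraction p of the runtime
    is parallelized over n processors. The serial fraction (1 - p) bounds
    the speedup at 1 / (1 - p) no matter how many processors are used."""
    return 1.0 / ((1.0 - p) + p / n)
```

If the contact algorithm accounts for, say, 90% of the runtime, even an unlimited processor count yields at most a 10x speedup, which is the "limits of the selective parallelization" the abstract refers to.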

Determination of optimal accelerometer locations using modal sensitivity for identifying a structure

  • Kwon, Soon-Jung;Woo, Sungkwon;Shin, Soobong
    • Smart Structures and Systems
    • /
    • v.4 no.5
    • /
    • pp.629-640
    • /
    • 2008
  • A new algorithm is proposed to determine optimal accelerometer locations (OAL) when a structure is identified by a frequency-domain system identification (SI) method. As a result, a guideline is presented for selecting OAL which can properly reflect the modal response of a structure. The guideline provides the minimum number of necessary accelerometers as the number of measurable target modes varies. To determine OAL for SI applications effectively, the modal sensitivity effective independence distribution vector (MS-EIDV) is developed with the likelihood function of the measurements. By maximizing the likelihood of the occurrence of the measurements relative to the predictions, the Fisher Information Matrix (FIM) is derived as a function of the mode shape sensitivity. This paper also proposes a statistical approach to determining the structural parameters with a presumed parameter error, which reflects the epistemic paradox between the determination of OAL and the application of an SI scheme. Numerical simulations have been carried out to examine the proposed OAL algorithm. A two-span multi-girder bridge and a two-span truss bridge were used for the simulation studies. To overcome the rank deficiency that frequently occurs when inverting a FIM, the singular value decomposition scheme has been applied.
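The final step, inverting a rank-deficient FIM via singular value decomposition, can be sketched as follows (function and argument names are assumptions; the FIM form S^T S assumes i.i.d. Gaussian measurement noise):

```python
import numpy as np

def fim_pseudoinverse(S, tol=1e-10):
    """Form the Fisher Information Matrix from a mode-shape sensitivity
    matrix S and invert it via SVD, truncating near-zero singular values
    so that rank deficiency does not blow up the inverse."""
    F = S.T @ S                       # FIM under i.i.d. Gaussian noise
    U, s, Vt = np.linalg.svd(F)
    keep = s > tol * s[0]             # drop directions with (near-)zero information
    return Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T
```

The truncation is exactly the Moore-Penrose pseudoinverse restricted to the informative parameter directions; the discarded directions are those the chosen sensor set cannot observe.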

A Study on System Identification of Active Magnetic Bearing Rotor System Considering Sensor and Actuator Dynamics (센서와 작동기를 고려한 자기베어링 시스템의 식별에 관한 연구)

  • Kim, Chan-Jung;Ahn, Hyeong-Joon;Han, Dong-Chul
    • Proceedings of the KSME Conference
    • /
    • 2003.11a
    • /
    • pp.1458-1463
    • /
    • 2003
  • This paper presents an improved identification algorithm for active magnetic bearing (AMB) rotor systems that considers sensor and actuator dynamics. An AMB rotor system has both real and complex poles, so it is very hard to identify them together. In previous research, a linear transformation through a fictitious proportional feedback was used to shift the real poles close to the imaginary axis. However, the identification result depends highly on the fictitious feedback gain, and it is not easy to identify the additional dynamics, including the sensor and actuator dynamics, at the same time. First, this paper discusses the necessity of the fictitious feedback gain and a criterion for selecting it. An appropriate feedback gain minimizes the dominant SVD (Singular Value Decomposition) error by maximizing rank deficiency. Second, further improvement in the identification is achieved by separating the common additional dynamics in all elements of the frequency response matrix. The feasibility of the proposed identification algorithm is demonstrated with two theoretical AMB rotor models. Finally, the proposed scheme is compared with previous identification methods using experimental data, and a great improvement in model quality and substantial time savings can be achieved with the proposed method.


Color Correlogram using Combined RGB and HSV Color Spaces for Image Retrieval (RGB와 HSV 칼라 형태를 조합하여 사용한 칼라 코렐로그램 영상 검색)

  • An, Young-Eun;Park, Jong-An
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.5C
    • /
    • pp.513-519
    • /
    • 2007
  • The color correlogram is widely used in content-based image retrieval (CBIR) because it extracts not only the color distribution of the pixels in an image, as the color histogram does, but also the spatial information of the pixels. The conventional color correlogram uses a single color space and therefore lacks robust discriminative features. In this paper, we use the RGB and HSV color spaces together in the color correlogram to achieve better discriminative features. The proposed algorithm is tested on a large database of images and the results are compared with the single-color-space color correlogram. In the simulation results, the proposed algorithm achieved an average retrieval rank 5.63 lower than that of the single-color-space correlogram.
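The building block of the proposed feature is the autocorrelogram of a color-quantized image; the combined descriptor then concatenates the correlograms computed from the RGB-quantized and HSV-quantized versions of the same image (the quantization step and the concatenation scheme are assumptions here, not the paper's exact pipeline). A minimal sketch for axis-aligned distance d:

```python
import numpy as np

def autocorrelogram(labels, n_colors, d=1):
    """Distance-d autocorrelogram of a quantized image `labels` (2D int
    array of color indices): for each color c, the probability that a
    pixel at axis-aligned distance d from a c-colored pixel is also c."""
    h, w = labels.shape
    hits = np.zeros(n_colors)
    total = np.zeros(n_colors)
    for dy, dx in ((0, d), (d, 0), (0, -d), (-d, 0)):
        # paired source/destination windows offset by (dy, dx)
        src = labels[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
        dst = labels[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
        for c in range(n_colors):
            m = src == c
            total[c] += m.sum()
            hits[c] += (m & (dst == c)).sum()
    return hits / np.maximum(total, 1)
```

Unlike a histogram, this captures spatial coherence: a solid color patch and a scattered dither with the same histogram produce very different correlograms.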

Distorted Image Database Retrieval Using Low Frequency Sub-band of Wavelet Transform (웨이블릿 변환의 저주파수 부대역을 이용한 왜곡 영상 데이터베이스 검색)

  • Park, Ha-Joong;Kim, Kyeong-Jin;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.3 no.1
    • /
    • pp.8-18
    • /
    • 2008
  • In this paper, we propose an efficient algorithm using the wavelet transform for still image database retrieval. In particular, it uses only the lowest-frequency sub-band of a multi-level wavelet transform, so the retrieval system uses a smaller amount of memory and takes less processing time. We extract texture features, namely statistical information such as the mean, variance, and histogram, from the low-frequency sub-band. Then we measure the distances between the query image and the images in the database in terms of these features. To obtain good retrieval performance, we use the first feature (mean and variance of the wavelet coefficients) to filter out most of the unlikely images; the rest of the images are considered candidate images. We then apply the second feature (histogram of the wavelet coefficients) to rank all the candidate images. To evaluate the algorithm, we created various distorted image databases using MIT VisTex texture images and PICS natural images. Through simulations, we demonstrate that our method performs satisfactorily in terms of retrieval accuracy as well as both memory requirements and computational complexity. It is therefore expected to provide a good retrieval solution for JPEG-2000, which uses the wavelet transform.
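The two-stage scheme above can be sketched end to end. As an assumption, the LL band is approximated here by repeated 2x2 block averaging rather than a true Haar transform, and the bin counts, candidate count, and L1 distances are illustrative choices:

```python
import numpy as np

def haar_lowpass(img, levels=2):
    """Approximate the lowest-frequency sub-band by repeated 2x2 block
    averaging (a simple stand-in for a multi-level Haar LL band)."""
    for _ in range(levels):
        h, w = img.shape
        img = img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return img

def two_stage_retrieve(query, database, n_candidates=2, bins=8):
    """Stage 1 filters by (mean, variance) distance of the LL band;
    stage 2 ranks the surviving candidates by histogram distance."""
    def features(img):
        ll = haar_lowpass(img)
        return (np.array([ll.mean(), ll.var()]),
                np.histogram(ll, bins=bins, range=(0.0, 1.0))[0])
    q_sv, q_hist = features(query)
    feats = {k: features(v) for k, v in database.items()}
    # coarse filtering on cheap statistics, then histogram ranking
    coarse = sorted(feats, key=lambda k: np.abs(feats[k][0] - q_sv).sum())
    return sorted(coarse[:n_candidates],
                  key=lambda k: np.abs(feats[k][1] - q_hist).sum())
```

Because only the small LL band is stored and compared, both the memory footprint and the per-comparison cost shrink roughly by a factor of four per decomposition level, which is the efficiency argument the abstract makes.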
