• Title/Summary/Keyword: Linear Time Complexity

Heart Rate Variability in Patients with Coronary Artery Disease (관상동맥질환 환자의 심박동변이도)

  • Kim Wuon-Shik;Bae Jang-Ho;Choi Hyoung-Min;Lee Sang-Tae
    • Science of Emotion and Sensibility, v.8 no.2, pp.95-101, 2005
  • This study builds on previous findings of reduced cardiac vagal activity in patients with coronary artery disease (CAD); of reduced variance (SDNN: standard deviation of all normal RR intervals), low-frequency power (LF), and complexity of heart rate variability (HRV) in patients with chronic heart failure (CHF); and of the normalized high-frequency power of HRV being highest in the right lateral decubitus position among the three recumbent postures in patients with CAD. However, nothing is known about the nonlinear dynamics of HRV across the three recumbent postures in patients with CAD. To investigate the linear and nonlinear characteristics of HRV in patients with CAD, 29 patients in a CAD group and 23 in a control group were studied. A lead II electrocardiogram (ECG) was recorded in each of the three recumbent postures, taken in random order. HRV derived from the ECG was analyzed with linear methods (time and frequency domains) and a nonlinear method. The lower the high-frequency power in normalized units (nHF) in the supine or left lateral decubitus position, the greater the increase in nHF when the position was changed from supine or left lateral decubitus to right lateral decubitus. Among the three recumbent postures in patients with severe CAD, the right lateral decubitus position induced the highest vagal modulation, the lowest sympathetic modulation, and the highest physiological complexity.
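
For context on the linear HRV indices named in this abstract, the following minimal Python sketch computes SDNN and normalized HF power from a plain list of RR intervals, assuming the conventional LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands; it illustrates the general technique only and is not the authors' analysis code.

```python
# Minimal sketch of the linear (time/frequency-domain) HRV measures mentioned
# above; assumes a plain list of RR intervals in milliseconds, not the paper's data.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def linear_hrv(rr_ms, fs=4.0):
    """Return SDNN (ms) and normalized HF power (nHF, %) from RR intervals."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                      # SDNN: std of normal RR intervals

    # Resample the irregularly spaced RR series to an even grid for the Welch PSD.
    t = np.cumsum(rr) / 1000.0                 # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(grid, t, rr)

    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(grid)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = trapezoid(pxx[lf_band], f[lf_band])
    hf = trapezoid(pxx[hf_band], f[hf_band])
    nhf = 100.0 * hf / (lf + hf)               # HF power in normalized units
    return sdnn, nhf
```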

Selection of Cluster Hierarchy Depth and Initial Centroids in Hierarchical Clustering using K-Means Algorithm (K-Means 알고리즘을 이용한 계층적 클러스터링에서 클러스터 계층 깊이와 초기값 선정)

  • Lee, Shin-Won;An, Dong-Un;Chong, Sung-Jong
    • Journal of the Korean Society for Information Management, v.21 no.4 s.54, pp.173-185, 2004
  • Fast, high-quality document clustering algorithms play an important role in data exploration by organizing large amounts of information into a small number of meaningful clusters. Many papers have shown that hierarchical clustering performs well but is limited by its quadratic time complexity. In contrast, even with a large number of variables, K-means has a time complexity that is linear in the number of documents, although it is thought to produce inferior clusters. In this paper, the Condor system based on the K-means algorithm is compared with the conventional method in which the initial centroids are established in advance, and the performance of the proposed method is substantially improved.
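
As a quick illustration of why K-means scales linearly with the number of documents, a minimal scikit-learn sketch of K-means document clustering over a toy TF-IDF corpus is shown below; the corpus, k=2, and parameter choices are assumptions for the example, not the Condor system's configuration.

```python
# Minimal sketch of K-means document clustering (linear in the number of
# documents); the toy corpus and k=2 are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "fast document clustering with k-means",
    "hierarchical clustering has quadratic time complexity",
    "k-means runs in time linear in the number of documents",
    "initial centroid selection affects cluster quality",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # cluster id assigned to each document
```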

A Novel Resource Allocation Algorithm in Multi-media Heterogeneous Cognitive OFDM System

  • Sun, Dawei;Zheng, Baoyu
    • KSII Transactions on Internet and Information Systems (TIIS), v.4 no.5, pp.691-708, 2010
  • An important issue in supporting multiple users with diverse quality-of-service (QoS) requirements over wireless networks is how to optimize scheduling by intelligently utilizing the available network resources while, at the same time, meeting each service's QoS requirement. In this work, we study the support of a variety of communication services over a multimedia heterogeneous cognitive OFDM system. We first divide the communication services into two classes. Multimedia applications such as broadband voice transmission and real-time video streaming are very delay-sensitive (DS) and need guaranteed throughput. On the other hand, services like file transfer and email are relatively delay-tolerant (DT), so varying-rate transmission is acceptable. We then formulate the scheduling as a convex optimization problem and, unlike prior works that disregard computational complexity, propose low-complexity distributed solutions by jointly considering channel assignment, bit allocation, and power allocation. Furthermore, we propose the FAASA (Fairness Assured Adaptive Sub-carrier Allocation) algorithm for both DS and DT users, a dynamic sub-carrier allocation algorithm that maximizes throughput while taking fairness into account. We provide extensive simulation results that demonstrate the effectiveness of the proposed schemes.
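
The FAASA algorithm itself is not specified in this abstract, so the sketch below only illustrates the general idea of fairness-aware sub-carrier allocation: each sub-carrier is given to the currently worst-served user among its strongest candidates. The heuristic, the two-candidate rule, and the Shannon-rate proxy are assumptions for the example.

```python
# Hedged sketch of fairness-aware sub-carrier allocation in OFDM; a generic
# illustration of the idea, not the FAASA algorithm from the paper.
import numpy as np

def fair_subcarrier_allocation(gains):
    """gains: (n_users, n_subcarriers) channel gains. Returns the owner of each sub-carrier."""
    n_users, n_sc = gains.shape
    rate = np.zeros(n_users)          # accumulated proxy rate per user
    owner = np.empty(n_sc, dtype=int)
    # Visit sub-carriers in random order; give each to the currently worst-served
    # user among the two strongest candidates, balancing throughput and fairness.
    for sc in np.random.permutation(n_sc):
        candidates = np.argsort(gains[:, sc])[-2:]       # two best channels
        user = candidates[np.argmin(rate[candidates])]   # least-served of the two
        owner[sc] = user
        rate[user] += np.log2(1.0 + gains[user, sc])     # Shannon-rate proxy
    return owner

owner = fair_subcarrier_allocation(np.random.rand(4, 64))
```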

A Study on Cluster Hierarchy Depth in Hierarchical Clustering (계층적 클러스터링에서 분류 계층 깊이에 관한 연구)

  • Jin, Hai-Nan;Lee, Shin-won;An, Dong-Un;Chung, Sung-Jong
    • Proceedings of the Korea Information Processing Society Conference, 2004.05a, pp.673-676, 2004
  • Fast, high-quality document clustering algorithms play an important role in data exploration by organizing large amounts of information into a small number of meaningful clusters. In particular, hierarchical clustering provides a view of the data at different levels, adapting large document collections to people's intuitive and specific browsing needs. Many papers have shown that hierarchical clustering performs well but is limited by its quadratic time complexity. In contrast, K-means has a time complexity that is linear in the number of documents, but is thought to produce inferior clusters. Considering simplicity, quality, and efficiency, we combine the two approaches in a new system named CONDOR [10], which builds a hierarchical structure on top of K-means document clustering to "get the best of both worlds". The performance of the CONDOR system is compared with that of the VIVISIMO hierarchical clustering system [9], and performance is analyzed with respect to the selection of feature words for specific topics and the optimum hierarchy depth.
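
To make the idea of a fixed cluster-hierarchy depth concrete, the sketch below builds a hierarchy by applying K-means recursively down to a given depth; the branching factor, stopping rule, and random data are illustrative assumptions, not the CONDOR implementation.

```python
# Minimal sketch of building a cluster hierarchy of fixed depth by applying
# K-means recursively (bisecting-style splitting). Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def recursive_kmeans(X, depth, k=2, min_size=4):
    """Return a nested dict describing a K-means hierarchy of the given depth."""
    node = {"size": len(X), "children": []}
    if depth == 0 or len(X) < max(k, min_size):
        return node
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    for c in range(k):
        node["children"].append(recursive_kmeans(X[labels == c], depth - 1, k, min_size))
    return node

tree = recursive_kmeans(np.random.rand(200, 16), depth=3)
```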

Optimal seismic retrofit design method for asymmetric soft first-story structures

  • Dereje, Assefa Jonathan;Kim, Jinkoo
    • Structural Engineering and Mechanics, v.81 no.6, pp.677-689, 2022
  • Generally, the goal of the seismic retrofit design of an existing structure using energy-dissipation devices is to determine the optimum design parameters of a retrofit device that satisfy a specified limit state with minimum cost. However, the presence of multiple parameters to be optimized and the computational cost of nonlinear analysis make it difficult to find the optimal design parameters for a realistic 3D structure. In this study, genetic-algorithm-based optimal seismic retrofit methods for determining the required number, yield strength, and location of steel slit dampers are proposed to retrofit an asymmetric soft first-story structure. These methods use multi-objective and single-objective evolutionary algorithms, which differ in computational complexity and incorporate nonlinear time-history analysis to determine seismic performance. Pareto-optimal solutions of the multi-objective optimization are found using the non-dominated sorting genetic algorithm (NSGA-II). It is demonstrated that the developed multi-objective optimization methods can determine the optimum number, yield strength, and location of dampers that satisfy the given limit state of a three-dimensional asymmetric soft first-story structure. It is also shown that the single-objective distribution method, based on minimizing plan-wise stiffness eccentricity, produces a similar number of dampers in optimal locations without time-consuming nonlinear dynamic analysis.
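
The Pareto fronts mentioned above come from NSGA-II's non-dominated sorting; a minimal plain-Python version of that sorting step is sketched below on hypothetical (damper count, drift) pairs, as an illustration of the general algorithm rather than the study's optimization code.

```python
# Minimal sketch of the fast non-dominated sorting step at the core of NSGA-II.
def non_dominated_sort(objectives):
    """objectives: list of tuples to minimize. Returns a list of Pareto fronts (indices)."""
    n = len(objectives)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    dominated_by = [0] * n                     # how many solutions dominate i
    dominating = [[] for _ in range(n)]        # solutions that i dominates
    for i in range(n):
        for j in range(n):
            if dominates(objectives[i], objectives[j]):
                dominating[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                dominated_by[i] += 1
    fronts = [[i for i in range(n) if dominated_by[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominating[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Example: hypothetical (number of dampers, peak inter-story drift) pairs to minimize.
print(non_dominated_sort([(4, 0.9), (6, 0.7), (5, 0.8), (7, 0.95)]))   # [[0, 1, 2], [3]]
```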

An Optimal Parallel Algorithm for Generating Computation Tree Form on Linear Array with Slotted Optical Buses (LASOB 상에서 계산 트리 형식을 생성하기 위한 최적 병렬 알고리즘)

  • Kim, Young-Hak
    • Journal of KIISE: Computer Systems and Theory, v.27 no.5, pp.475-484, 2000
  • Recently, processor arrays that use optical buses instead of electronic buses to enhance bus bandwidth and reduce hardware complexity have been proposed in many papers. In this paper, we first propose a constant-time algorithm for the parentheses matching problem on a linear array with slotted optical buses (LASOB). Then, given an algebraic expression of length n, we also propose a cost-optimal parallel algorithm that constructs the computation tree form in constant time on a LASOB with n processors, using the parentheses matching algorithm. A cost-optimal parallel algorithm for this problem that runs in constant time has not previously been reported on any parallel computation model.
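
For reference, the sequential stack-based formulation of parentheses matching is sketched below; the paper's contribution is a constant-time parallel algorithm on a LASOB, so this O(n) sketch only shows the baseline problem being solved.

```python
# Sequential reference sketch of parentheses matching: pairs each '(' with its
# matching ')' using a stack in O(n) time.
def match_parentheses(expr):
    """Return {open_index: close_index} for every matched pair in expr."""
    stack, pairs = [], {}
    for i, ch in enumerate(expr):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            if not stack:
                raise ValueError(f"unmatched ')' at position {i}")
            pairs[stack.pop()] = i
    if stack:
        raise ValueError(f"unmatched '(' at position {stack[-1]}")
    return pairs

print(match_parentheses("(a+(b*c))-(d/e)"))   # {3: 7, 0: 8, 10: 14}
```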

Dynamic Survivable Routing for Shared Segment Protection

  • Tapolcai, Janos;Ho, Pin-Han
    • Journal of Communications and Networks, v.9 no.2, pp.198-209, 2007
  • This paper provides a thorough study of shared segment protection (SSP) for mesh communication networks in the complete-routing-information scenario, where the integer linear program (ILP) in [1] is extended so that the following two constraints are well addressed: (a) the restoration time constraint for each connection request, and (b) the switching/merging capacity constraint at each node. A novel approach, called the SSP algorithm, is developed to reduce the extremely high computational complexity of solving the ILP formulation. Basically, our approach derives a good approximation of the ILP parameters by referring to the solution of the corresponding shared path protection (SPP) problem. Thus, the design space can be significantly reduced by eliminating some edges in the graphs. We show in the simulations that, with our approach, optimality is achieved in most cases. To verify the proposed formulation and investigate the performance impairment, in terms of average cost and success rate, caused by the two additional constraints, extensive simulations have been conducted on three network topologies, in which SPP and shared link protection (SLP) are implemented for comparison. We demonstrate that the proposed SSP algorithm can effectively and efficiently solve the survivable routing problem with constraints on the restoration time and the switching/merging capability of each node. The comparison among the three protection types further verifies that SSP yields significant advantages over SPP and SLP without taking much computation time.
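
As a simplified illustration of protection routing (not the SSP ILP of this paper), the sketch below computes a working path and a link-disjoint backup path with networkx; segment sharing, restoration-time, and switching/merging constraints are omitted, and the example graph is hypothetical.

```python
# Simplified illustration of protection routing: compute a working path, then a
# link-disjoint backup path on the residual graph. Real shared segment protection
# adds segment sharing and the constraints discussed above via the ILP.
import networkx as nx

def working_and_backup(G, src, dst):
    work = nx.shortest_path(G, src, dst, weight="weight")
    residual = G.copy()
    residual.remove_edges_from(zip(work, work[1:]))     # forbid the working links
    backup = nx.shortest_path(residual, src, dst, weight="weight")
    return work, backup

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1), ("B", "D", 1), ("A", "C", 2), ("C", "D", 2), ("B", "C", 1)])
print(working_and_backup(G, "A", "D"))   # (['A', 'B', 'D'], ['A', 'C', 'D'])
```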

Non-iterative pulse tail extrapolation algorithms for correcting nuclear pulse pile-up

  • Mohammad-Reza Mohammadian-Behbahani
    • Nuclear Engineering and Technology, v.55 no.12, pp.4350-4356, 2023
  • Radiation detection systems working at high count rates suffer from the overlapping of their output electric pulses, known as the pulse pile-up phenomenon, resulting in spectrum distortion and degradation of the energy resolution. Pulse tail extrapolation is a pile-up correction method that tries to restore the shifted baseline of a piled-up pulse by extrapolating the overlapped part of its preceding pulse. This requires a mathematical model, which is almost always nonlinear and is usually fitted by a nonlinear least squares (NLS) technique. NLS is an iterative, potentially time-consuming method. The main idea of the present study is to replace the NLS technique with an integration-based non-iterative method (NIM) for pulse tail extrapolation with an exponential model. The idea of linear extrapolation, as another non-iterative method, is also investigated. Analysis of experimental data from a NaI(Tl) radiation detector shows that the proposed non-iterative method provides a corrected spectrum quite similar to that of the NLS method, with dramatically reduced computation time and algorithmic complexity. The linear extrapolation approach suffers from poor energy resolution and throughput rate in comparison with the NIM and NLS techniques, but provides the shortest computation time.
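
To make the contrast concrete, the sketch below fits an exponential tail iteratively with nonlinear least squares and also estimates the same parameters non-iteratively from the integral of the samples; the synthetic tail, noise level, and the specific integration formula are assumptions for illustration, not the paper's exact NIM.

```python
# Hedged sketch contrasting iterative NLS fitting of an exponential pulse tail
# with a simple non-iterative estimate based on integrating the samples.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

t = np.linspace(0.0, 2.0, 200)                                  # observed tail window
y = 5.0 * np.exp(-t / 0.7) + 0.02 * np.random.randn(t.size)     # noisy exponential tail

# Iterative NLS fit (needs an initial guess; iterates internally).
model = lambda t, A, tau: A * np.exp(-t / tau)
(A_nls, tau_nls), _ = curve_fit(model, t, y, p0=(1.0, 1.0))

# Non-iterative estimate: for A*exp(-t/tau), the integral over [t0, t1] equals tau*(y0 - y1).
tau_ni = trapezoid(y, t) / (y[0] - y[-1])
A_ni = y[0] * np.exp(t[0] / tau_ni)

print(f"NLS: A={A_nls:.2f}, tau={tau_nls:.2f}")
print(f"NIM: A={A_ni:.2f}, tau={tau_ni:.2f}")
```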

Multi-Mode Reconstruction of Subsampled Chrominance Information using Inter-Component Correlation in YCbCr Colorspace (YCbCr 컬러공간에서 구성성분간의 상관관계를 이용한 축소된 채도 정보의 다중 모드 재구성)

  • Kim, Young-Ju
    • The Journal of the Korea Contents Association, v.8 no.2, pp.74-82, 2008
  • This paper investigates chrominance reconstruction methods that efficiently reconstruct subsampled chrominance information using the correlation between the luminance and chrominance components during decompression of compressed images, and analyzes the drawbacks of adaptive-weighted two-dimensional linear interpolation, which is more efficient in terms of computational complexity than the other methods. To address the fact that two-dimensional linear interpolation ignores the spatial frequency distribution of the decompressed image, and to support low-performance systems, this paper proposes a multi-mode reconstruction method that switches among three reconstruction methods of different computational complexity according to the edge response of the luminance component. Performance evaluation on an embedded development platform showed that the proposed method delivers a similar level of image quality for decompressed images while reducing the overall computation time for chrominance reconstruction compared with two-dimensional linear interpolation.
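
A toy version of mode switching for chroma reconstruction is sketched below: cheap pixel replication when the luminance edge response is weak, bilinear interpolation otherwise. The global edge measure, the threshold, and the use of only two modes are simplifying assumptions; the paper's method uses three modes selected by the local edge response.

```python
# Hedged sketch of mode-switched chroma reconstruction: replication in flat
# regions, bilinear interpolation where the luminance edge response is strong.
import numpy as np
from scipy.ndimage import zoom

def reconstruct_chroma(chroma_sub, luma, edge_thresh=30.0):
    """chroma_sub: (H/2, W/2) subsampled plane; luma: (H, W) full-resolution plane."""
    gy, gx = np.gradient(luma.astype(float))
    edge_strength = np.abs(gx).mean() + np.abs(gy).mean()   # crude global edge response
    if edge_strength < edge_thresh:
        return np.repeat(np.repeat(chroma_sub, 2, axis=0), 2, axis=1)  # pixel replication
    return zoom(chroma_sub.astype(float), 2, order=1)                  # bilinear upsampling

cb = reconstruct_chroma(np.random.randint(0, 255, (64, 64)),
                        np.random.randint(0, 255, (128, 128)))
```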

A Profile Tolerance Usage in GD&T for Precision Manufacturing (정밀제조를 위한 기하공차에서의 윤곽공차 사용)

  • Kim, Kyung-Wook;Chang, Sung-Ho
    • Journal of Korean Society of Industrial and Systems Engineering, v.40 no.2, pp.145-149, 2017
  • One of the challenges facing precision manufacturers is the increasing feature complexity of tight-tolerance parts. All engineering drawings must account for the size, form, orientation, and location of all features to ensure manufacturability, measurability, and design intent. Geometric controls per ASME Y14.5 are typically applied to specify dimensional tolerances on engineering drawings and define the size, form, orientation, and location of features. Many engineering drawings lack the geometric dimensioning and tolerancing needed for timely and accurate inspection and verification. Plus-minus tolerancing is typically ambiguous and requires extra time from engineering, programming, machining, and inspection functions to debate and agree on a single conclusion. Complex geometry can result in long inspection and verification times and put even the most sophisticated measurement equipment and processes to the test. In addition, design, manufacturing, and quality engineers are often frustrated by communication errors over these features. However, an approach called profile tolerancing offers an optimal definition of design intent by explicitly defining uniform boundaries around the physical geometry. It is an efficient and effective method for measurement and quality control. There are several advantages for product designers who use position and profile tolerancing instead of linear dimensioning. When design intent is conveyed unambiguously, manufacturers do not have to field multiple questions from suppliers as they design and build a process for manufacturing and inspection. Profile tolerancing, when applied correctly, provides manufacturing and inspection functions with unambiguously defined tolerancing, and the resulting data are manufacturable and measurable. Customers can see cost and lead-time reductions with parts that consistently meet the design intent. Components can function properly, eliminating costly rework, redesign, and missed market opportunities. However, a supplier that is poised to embrace profile tolerancing will no doubt run into resistance from those who would prefer the way things have always been done. It is not just internal naysayers, but also suppliers, that might fight the change. In addition, the investment for suppliers can be steep in terms of training, equipment, and software.
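
As a minimal illustration of the uniform boundary that a profile tolerance defines, the sketch below checks whether measured points stay within ±T/2 of a nominal profile; the one-dimensional height deviations and the numbers are hypothetical, and a real profile check evaluates deviations normal to the surface.

```python
# Minimal sketch of verifying a profile tolerance: every measured point must lie
# within a uniform boundary of +/- T/2 around the nominal profile.
import numpy as np

def profile_ok(nominal, measured, profile_tol):
    """True if all deviations from the nominal profile fit inside the tolerance zone."""
    deviation = np.asarray(measured, dtype=float) - np.asarray(nominal, dtype=float)
    return bool(np.all(np.abs(deviation) <= profile_tol / 2.0))

nominal  = [10.00, 10.05, 10.10, 10.15]    # hypothetical nominal surface heights (mm)
measured = [10.02, 10.04, 10.08, 10.16]    # hypothetical CMM readings (mm)
print(profile_ok(nominal, measured, profile_tol=0.10))   # True: worst deviation 0.02 <= 0.05
```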