• Title/Summary/Keyword: algorithmic complexity

Automation of M.E.P Design Using Large Language Models (대형 언어 모델을 활용한 설비설계의 자동화)

  • Park, Kyung Kyu;Lee, Seung-Been;Seo, Min Jo;Kim, Si Uk;Choi, Won Jun;Kim, Chee Kyung
    • Proceedings of the Korean Institute of Building Construction Conference / 2023.11a / pp.237-238 / 2023
  • Urbanization and the increase in building scale have amplified the complexity of M.E.P design. Traditional design methods face limitations when considering intricate pathways and variables, leading to an emergent need for research on automated design. Early algorithmic approaches struggled with complex architectural structures and the diversity of M.E.P types. However, the launch of OpenAI's ChatGPT (based on GPT-3.5) in late 2022 opened new opportunities in the automated design sector. ChatGPT, built on a Large Language Model (LLM), can deeply comprehend the logical structures and meanings within its training data. This study analyzed the potential applications and latent value of LLMs in M.E.P design. Ultimately, the application of LLMs to M.E.P design is expected to make genuine automated design feasible and to drive advancements across design practice in the construction sector.

A Study on The Effect of Perceived Value and Innovation Resistance Factors on Adoption Intention of Artificial Intelligence Platform: Focused on Drug Discovery Fields (인공지능(AI) 플랫폼의 지각된 가치 및 혁신저항 요인이 수용의도에 미치는 영향: 신약 연구 분야를 중심으로)

  • Kim, Yeongdae;Kim, Ji-Young;Jeong, Wonkyung;Shin, Yongtae
    • KIPS Transactions on Computer and Communication Systems / v.10 no.12 / pp.329-342 / 2021
  • The pharmaceutical industry is experiencing a productivity crisis: a low probability of success despite long development times and enormous cost. As a strategy to resolve this crisis, the use of Artificial Intelligence (AI) and big data is increasing worldwide, and tangible results are emerging. However, domestic pharmaceutical companies are taking a wait-and-see attitude toward adopting AI platforms for drug research. This study proposed a research model that combines the Value-based Adoption Model and the Innovation Resistance Model to empirically study the effect of value perception and resistance factors on the intention to adopt an AI platform. Empirical verification showed that usefulness, knowledge richness, complexity, and algorithmic opacity had a significant effect on perceived value, while usefulness, knowledge richness, algorithmic opacity, trialability, and technology support infrastructure had a significant effect on innovation resistance.

Fast Self-Similar Network Traffic Generation Based on FGN and Daubechies Wavelets (FGN과 Daubechies Wavelets을 이용한 빠른 Self-Similar 네트워크 Traffic의 생성)

  • Jeong, Hae-Duck;Lee, Jong-Suk
    • The KIPS Transactions:PartC / v.11C no.5 / pp.621-632 / 2004
  • Recent measurement studies of real teletraffic data in modern telecommunication networks have shown that self-similar (or fractal) processes may provide better models of teletraffic than Poisson processes. If this is not taken into account, simulation studies can lead to inaccurate conclusions about network performance. Thus, an important requirement for conducting simulation studies of telecommunication networks is the ability to generate long synthetic stochastic self-similar sequences. A new generator of pseudo-random self-similar sequences, based on fractional Gaussian noise and a wavelet transform, is proposed and analysed in this paper. Specifically, this generator uses Daubechies wavelets, chosen because they match the self-similar structure of long-range dependent processes better than other wavelet families and therefore yield more accurate results. The statistical accuracy and the time required to produce sequences of a given (long) length are studied experimentally. The generator shows a high level of accuracy in the output data (in the sense of the Hurst parameter) and is fast: its theoretical algorithmic complexity is O(n).
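
A faithful FGN-DW implementation is beyond the scope of an abstract, but the idea of wavelet-domain synthesis of long-range dependent noise can be sketched. The snippet below is a minimal illustration, not the authors' algorithm: it draws independent Gaussian wavelet coefficients whose variance scales as 2^(j(2H-1)) with octave j (the standard wavelet-spectrum law for fractional Gaussian noise) and inverts a Daubechies transform with PyWavelets; the function name and the scaling applied to the coarsest approximation band are assumptions.

```python
import numpy as np
import pywt

def fgn_wavelet(n, hurst, wavelet="db10", seed=None):
    """Approximate fractional Gaussian noise by wavelet-domain synthesis.

    Detail coefficients of fGn at octave j have variance proportional to
    2**(j * (2*H - 1)); drawing independent Gaussians with that scaling
    and inverting the transform approximates fGn with Hurst parameter H.
    """
    rng = np.random.default_rng(seed)
    levels = pywt.dwt_max_level(n, pywt.Wavelet(wavelet).dec_len)
    # Decompose a zero signal just to obtain the coefficient array shapes.
    coeffs = pywt.wavedec(np.zeros(n), wavelet, level=levels)
    # coeffs[0] is the coarsest approximation; coeffs[i] for i >= 1 are
    # detail bands ordered from octave `levels` (coarsest) down to 1 (finest).
    coeffs[0] = rng.standard_normal(coeffs[0].shape) * 2 ** (levels * (2 * hurst - 1) / 2)
    for i in range(1, len(coeffs)):
        octave = levels - i + 1
        sigma = 2 ** (octave * (2 * hurst - 1) / 2)
        coeffs[i] = sigma * rng.standard_normal(coeffs[i].shape)
    x = pywt.waverec(coeffs, wavelet)[:n]
    return (x - x.mean()) / x.std()  # normalize: zero mean, unit variance

traffic = fgn_wavelet(2 ** 14, hurst=0.8, seed=42)  # H > 0.5: long-range dependence
```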

A Coherent Algorithm for Noise Revocation of Multispectral Images by Fast HD-NLM and its Method Noise Abatement

  • Hegde, Vijayalaxmi;Jagadale, Basavaraj N.;Naragund, Mukund N.
    • International Journal of Computer Science & Network Security / v.21 no.12spc / pp.556-564 / 2021
  • Numerous conventional denoising algorithms in both the spatial and transform domains struggle to preserve critical, fine structural features of an image, especially at high noise levels. Neural network approaches, although effective, are not always practical, since they demand large quantities of training data, are computationally complicated, and take a long time to train. A new enhanced hybrid filtering framework is developed for denoising color images corrupted by additive white Gaussian noise, with the goals of reducing algorithmic complexity and improving performance. In the first stage of the proposed approach, the noisy image is refined using a high-dimensional non-local means filter based on Principal Component Analysis, and the method noise is then extracted. This method noise is further refined with the wavelet transform and SUREShrink thresholding. The final denoised image is created by combining the results of the two stages. Experiments were carried out on a set of standard color images corrupted by Gaussian noise at multiple standard deviations. Comparative analysis of the empirical outcomes indicates that the proposed method outperforms leading-edge denoising strategies in consistency and performance while maintaining visual quality. The algorithm ensures homogeneous noise reduction that is almost independent of noise variation, harnesses the power of both the spatial and transform domains, operates directly on the multispectral image rather than processing individual color channels, uses minimal resources, and produces superior-quality output in optimal execution time.
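
The two-stage pipeline (a base filter, then wavelet shrinkage of the method noise, then recombination) can be sketched with off-the-shelf components. In the sketch below, scikit-image's `denoise_nl_means` stands in for the paper's PCA-based high-dimensional NLM, and a universal soft threshold stands in for SUREShrink, so this is an approximation of the framework, not the authors' exact filter.

```python
import numpy as np
import pywt
from skimage.restoration import denoise_nl_means, estimate_sigma

def hybrid_denoise(noisy, wavelet="db8", level=3):
    """Two-stage denoising sketch for a float RGB image in [0, 1]:
    NLM pass, wavelet shrinkage of the method noise, recombination."""
    sigma = float(np.mean(estimate_sigma(noisy, channel_axis=-1)))
    base = denoise_nl_means(noisy, h=0.8 * sigma, sigma=sigma,
                            fast_mode=True, channel_axis=-1)
    method_noise = noisy - base            # detail removed by the NLM pass
    recovered = np.empty_like(method_noise)
    for c in range(noisy.shape[-1]):       # shrink each colour channel
        coeffs = pywt.wavedec2(method_noise[..., c], wavelet, level=level)
        # Universal threshold as a stand-in for the paper's SUREShrink.
        thr = sigma * np.sqrt(2 * np.log(method_noise[..., c].size))
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
            for detail in coeffs[1:]
        ]
        rec = pywt.waverec2(coeffs, wavelet)
        recovered[..., c] = rec[:noisy.shape[0], :noisy.shape[1]]
    # Add the salvaged structure back to the NLM estimate.
    return np.clip(base + recovered, 0.0, 1.0)
```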

Quantum-based exact pattern matching algorithms for biological sequences

  • Soni, Kapil Kumar;Rasool, Akhtar
    • ETRI Journal / v.43 no.3 / pp.483-510 / 2021
  • In computational biology, desired patterns are searched for in large text databases, and an exact match is preferable. Classical benchmark algorithms obtain competent solutions for pattern matching in O(N) time, whereas quantum algorithm design is based on Grover's method, which completes the search in $O(\sqrt{N})$ time. This paper briefly explains existing quantum algorithms and defines their processing limitations. Our initial work overcomes existing algorithmic constraints by proposing the quantum-based combined exact (QBCE) algorithm for processing exact patterns. Next, quantum random access memory (QRAM) processing is discussed, and based on it we propose the QRAM-processing-based exact (QPBE) pattern-matching algorithm. We show that, to find all t occurrences of a pattern, the best-case time complexities of the QBCE and QPBE algorithms are $O(\sqrt{t})$ and $O(\sqrt{N})$, and the exceptional worst cases are bounded by O(t) and O(N). Thus, the proposed quantum algorithms achieve computational speedup. Our work is proved mathematically and validated with simulation, and complexity analysis demonstrates that our quantum algorithms improve on existing pattern-matching methods.
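
The quoted bounds are easy to make concrete. The sketch below only illustrates Grover-style query counting, not the QBCE/QPBE algorithms themselves: it evaluates the standard optimal iteration count ⌊(π/4)√(N/t)⌋ for t marked items among N, next to the O(N) probes of a classical linear scan.

```python
import math

def grover_iterations(n_items: int, n_matches: int) -> int:
    """Optimal Grover iteration count for n_matches marked items in a
    search space of n_items: floor(pi/4 * sqrt(N / t))."""
    return math.floor(math.pi / 4 * math.sqrt(n_items / n_matches))

N = 1_000_000  # text positions a classical exact matcher probes: O(N)
for t in (1, 10, 100):
    print(f"t={t:>3}: classical ~ {N} probes, "
          f"Grover ~ {grover_iterations(N, t)} iterations")
```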

Design of a 4kb/s ACELP Codec Using the Generalized AbS Principle (Generalized AbS 구조를 이용한 4kb/s ACELP 음성 부호화기의 설계)

  • 성호상;강상원
    • The Journal of the Acoustical Society of Korea / v.18 no.7 / pp.33-38 / 1999
  • In this paper, we combine a generalized analysis-by-synthesis (AbS) structure with an algebraic excitation scheme to propose a new 4 kb/s speech codec. The codec partly reuses the structure of G.729. We design a line spectrum pair (LSP) quantizer, an adaptive codebook, and an excitation codebook to fit the 4 kb/s bit rate. The codec has a 25 ms algorithmic delay, corresponding to a 20 ms frame size and a 5 ms lookahead. At bit rates below 4 kb/s, most CELP speech codecs using the AbS principle suffer a rapid degradation of speech quality. To overcome this drawback we use the generalized AbS structure, which is efficient for low-bit-rate speech codecs. LP coefficients are converted to LSPs and quantized using a predictive two-stage VQ. A low-complexity algebraic codebook using a shifting method provides the fixed codebook excitation, and the gains of the adaptive and fixed codebooks are quantized using VQ. To evaluate the performance of the proposed codec, A-B preference tests were conducted against the fixed-rate 8 kb/s QCELP; the proposed codec performed comparably to the fixed-rate 8 kb/s QCELP.
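
The analysis-by-synthesis principle behind the codec is simple to illustrate: each candidate excitation is passed through the LPC synthesis filter 1/A(z), and the candidate (with its optimal gain) that minimizes the error against the target signal is kept. The sketch below is a toy version with a made-up first-order filter and a random codebook; a real ACELP coder adds an algebraic pulse codebook, an adaptive codebook, and perceptual weighting.

```python
import numpy as np
from scipy.signal import lfilter

def abs_codebook_search(target, codebook, lpc):
    """Exhaustive AbS search: synthesize every excitation through 1/A(z)
    and keep the index/gain pair with minimal squared error."""
    best_idx, best_gain, best_err = -1, 0.0, np.inf
    for i, exc in enumerate(codebook):
        synth = lfilter([1.0], lpc, exc)                 # 1/A(z) synthesis
        gain = synth @ target / (synth @ synth + 1e-12)  # optimal gain
        err = np.sum((target - gain * synth) ** 2)
        if err < best_err:
            best_idx, best_gain, best_err = i, gain, err
    return best_idx, best_gain

rng = np.random.default_rng(0)
lpc = [1.0, -0.9]                          # toy A(z) = 1 - 0.9 z^-1
codebook = rng.standard_normal((64, 40))   # 64 candidates, 40-sample subframe
target = 0.5 * lfilter([1.0], lpc, codebook[17])
idx, gain = abs_codebook_search(target, codebook, lpc)  # recovers 17, ~0.5
```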

Improving the I/O Performance of Disk-Based Graph Engine by Graph Ordering (디스크 기반 그래프 엔진의 입출력 성능 향상을 위한 그래프 오더링)

  • Lim, Keunhak;Kim, Junghyun;Lee, Eunjae;Seo, Jiwon
    • KIISE Transactions on Computing Practices / v.24 no.1 / pp.40-45 / 2018
  • With the advent of big data and social networks, large-scale graph processing has become a popular research topic. Recently, an optimization technique called Gorder was proposed to improve the performance of in-memory graph processing; it optimizes the graph layout in memory for better cache locality. However, because it is designed for in-memory graph processing systems, the technique is not suitable for disk-based graph engines, and the cost of applying it is significantly high. To solve this problem, we propose a new graph ordering called I/O Order. I/O Order considers the characteristics of I/O accesses on SSDs and HDDs to improve the performance of disk-based graph engines. In addition, the algorithm of I/O Order is simple compared to Gorder, hence it is cheaper to apply. I/O Order reduces pre-processing cost by up to 9.6 times compared to Gorder, while still performing up to 2 times better than random ordering on low-locality graph algorithms.
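
The abstract does not spell out the I/O Order algorithm, so the sketch below only illustrates the general idea of locality-improving graph reordering with a plain BFS relabeling, which gives neighboring vertices nearby IDs; it is a generic stand-in, not the paper's SSD/HDD-aware ordering.

```python
from collections import deque

def bfs_order(adj):
    """Relabel vertices in BFS visit order so that vertices accessed
    together get nearby IDs (better page/cache locality on disk)."""
    order, seen = [], set()
    for src in adj:                      # handle disconnected components
        if src in seen:
            continue
        seen.add(src)
        queue = deque([src])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return {old: new for new, old in enumerate(order)}

adj = {0: [3, 4], 1: [2], 2: [1], 3: [0], 4: [0, 3]}
relabel = bfs_order(adj)                 # {0: 0, 3: 1, 4: 2, 1: 3, 2: 4}
reordered = {relabel[u]: sorted(relabel[v] for v in vs) for u, vs in adj.items()}
```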

Performance Comparison of DCT Algorithm Implementations Based on Hardware Architecture (프로세서 구조에 따른 DCT 알고리즘의 구현 성능 비교)

  • Lee Jae-Seong;Pack Young-Cheol;Youn Dae-Hee
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.6C / pp.637-644 / 2006
  • This paper compares the performance and implementation of the standard and fast DCT algorithms that are commonly used for the subband filter bank in MPEG audio coders. The comparison is made according to the architecture of the implementation hardware. Fast DCT algorithms are known to have much lower computational complexity than the standard method, which computes a vector dot product with the cosine coefficients. However, due to their structural irregularity, fast DCT algorithms require extra cycles to generate operand addresses and to realign interim data. When the algorithms are implemented on DSP processors that provide special operations such as single-cycle MAC (multiply-accumulate) and zero-overhead nested loops, the standard algorithm is more advantageous than the fast algorithms. Also, under finite-precision processing, the error performance of the standard method is far superior to that of the fast algorithms. In this paper, truncation errors and algorithmic suitability are analyzed, and implementation results are provided to support the analysis.
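
The "standard method" here, a plain dot product against a table of cosine coefficients, is the perfectly regular O(n²) loop that maps well onto single-cycle MAC hardware. A minimal sketch, cross-checked against SciPy's fast DCT-II (function and variable names are illustrative):

```python
import numpy as np
from scipy.fft import dct

def dct_ii_naive(x):
    """Unnormalized DCT-II as an explicit matrix-vector dot product:
    O(n^2) multiply-accumulates with a fully regular access pattern."""
    n = len(x)
    k = np.arange(n)[:, None]            # output frequency index
    m = np.arange(n)[None, :]            # input sample index
    basis = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    return 2.0 * (basis @ x)

x = np.random.default_rng(1).standard_normal(32)
assert np.allclose(dct_ii_naive(x), dct(x, type=2))  # matches the fast DCT
```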

Multi-Dimensional Traveling Salesman Problem Scheme Using Top-n Skyline Query (Top-n 스카이라인 질의를 이용한 다차원 외판원 순회문제 기법)

  • Jin, ChangGyun;Oh, Dukshin;Kim, Jongwan
    • KIPS Transactions on Software and Data Engineering / v.9 no.1 / pp.17-24 / 2020
  • The traveling salesman problem (TSP) asks for the shortest route in which a salesman visits each city exactly once and returns to the starting city. Due to its exponential time complexity, TSP is hard to apply to cases like amusement park routing or delivery. Moreover, because it uses only one attribute, the distance between nodes, TSP cannot satisfy user demands that involve multi-dimensional attributes such as travel time, interests, and waiting time. This paper proposes the Top-n Skyline Multi-Dimensional TSP (TS-MDT) to resolve these problems. The proposed algorithm finds the shortest route faster than the existing method by selecting multi-dimensional nodes according to skyline dominance, thereby decreasing the number of operations. In simulation, we compared the computation time of a dynamic programming algorithm with that of the proposed TS-MDT algorithm, and TS-MDT was faster.
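
The core idea, pruning dominated nodes with a skyline test before running an exact solver, can be sketched with textbook components. The dominance filter and the Held-Karp dynamic program below are standard building blocks, not the paper's exact TS-MDT procedure, and the sample data is invented.

```python
from itertools import combinations

def skyline(points):
    """Keep non-dominated points; smaller is better on every attribute."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def held_karp(dist):
    """Exact TSP by dynamic programming over subsets: O(n^2 * 2^n)."""
    n = len(dist)
    dp = {(1 << i, i): dist[0][i] for i in range(1, n)}  # paths from node 0
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = sum(1 << i for i in subset)
            for last in subset:
                prev = mask ^ (1 << last)
                dp[(mask, last)] = min(dp[(prev, k)] + dist[k][last]
                                       for k in subset if k != last)
    full = (1 << n) - 2                                  # every node but 0
    return min(dp[(full, k)] + dist[k][0] for k in range(1, n))

# Prune multi-attribute nodes first, then solve the tour on the survivors.
costs = [(2, 3), (3, 1), (4, 4), (1, 5)]
print(skyline(costs))   # (4, 4) is dominated and removed
dist = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(held_karp(dist))  # 21 for this small 4-city instance
```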

Exploring Efficient Solutions for the 0/1 Knapsack Problem

  • Dalal M. Althawadi;Sara Aldossary;Aryam Alnemari;Malak Alghamdi;Fatema Alqahtani;Atta-ur Rahman;Aghiad Bakry;Sghaier Chabani
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.15-24 / 2024
  • One of the most significant issues in combinatorial optimization is the classical NP-complete conundrum known as the 0/1 Knapsack Problem. This study investigates practical solutions, emphasizing two classic algorithmic paradigms, brute force and dynamic programming, along with a metaheuristic, nature-inspired method, the Genetic Algorithm (GA). The research begins with a thorough analysis of the dynamic programming technique, which exploits overlapping subproblems and optimal substructure. We evaluate the benefits of dynamic programming for the 0/1 Knapsack Problem by carefully dissecting its nuances in contrast to the GA. Simultaneously, the study examines the brute force algorithm, a simple yet exhaustive method, and compares it to Branch & Bound. This strategy entails investigating every potential combination, offering a baseline for comparison with more advanced techniques. The paper explores the computational complexity of the brute force approach, highlighting its limitations and its usefulness in resolving the 0/1 Knapsack Problem in contrast to the algorithms above.
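
The two classical paradigms the study compares are easy to contrast directly. A minimal sketch under invented data: dynamic programming tabulates subproblems in O(n·W) time (W = capacity), while brute force enumerates all 2^n subsets; both are cross-checked on a small instance.

```python
from itertools import combinations

def knapsack_dp(values, weights, capacity):
    """Bottom-up DP over capacities: O(n * capacity) time, O(capacity) space."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # reverse scan: each item once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

def knapsack_brute_force(values, weights, capacity):
    """Enumerate all 2^n subsets: exponential, but an exact baseline."""
    best, n = 0, len(values)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                best = max(best, sum(values[i] for i in subset))
    return best

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50
assert knapsack_dp(values, weights, capacity) == \
       knapsack_brute_force(values, weights, capacity) == 220
```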