• Title/Summary/Keyword: Computation amount


A Study on Computational Efficiency Enhancement by Using Full Gray Code Genetic Algorithm (전 영역 그레이코드 유전자 알고리듬의 효율성 증대에 관한 연구)

  • 이원창;성활경
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.20 no.10
    • /
    • pp.169-176
    • /
    • 2003
  • Genetic algorithms (GA), which have a powerful search ability and are comparatively easy to use and apply, are in the spotlight in the field of mechanical system optimization these days. However, GA also suffers from slow convergence and low efficiency caused by a huge amount of repetitive computation. To improve the efficiency of this repetitive computation, some recent papers have proposed parallelized GA. Other work mentions the use of gray code, or suggests using gray code partially in GA, to remedy its slow convergence. Gray code is an encoding of numbers in which adjacent numbers differ in only a single digit. A binary gray code with n digits corresponds to a Hamiltonian path on an n-dimensional hypercube (including direction reversals). The term gray code is often used to refer to a reflected code, or more specifically still, the binary reflected gray code. However, according to published reports, gray-code GA converges about 10-20% more slowly than binary-code GA, without supporting results being presented. This study proposes a new Full gray code GA (FGGA), which applies gray code throughout all basic operation fields of GA and has good data-processing ability, to improve the slow convergence of binary-code GA.
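
For reference, the reflected binary Gray code mentioned in this abstract can be computed with a single XOR. The minimal Python sketch below is editorial (not taken from the paper) and only illustrates the encoding, its inverse, and the one-bit-difference property between adjacent codes.

```python
def binary_to_gray(n: int) -> int:
    """Reflected binary Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the encoding by cumulatively XOR-ing right shifts."""
    n = g
    g >>= 1
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent integers differ in exactly one bit after Gray encoding.
for i in range(15):
    assert bin(binary_to_gray(i) ^ binary_to_gray(i + 1)).count("1") == 1
    assert gray_to_binary(binary_to_gray(i)) == i
```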

Fast Rate Distortion Optimization Algorithm for Inter Predictive Coding of H.264/AVC (H.264/AVC의 인터 예측 부호화를 위한 고속 율왜곡 최적화 알고리즘)

  • Sin, Se-Ill;Oh, Jeong-Su
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.1C
    • /
    • pp.56-62
    • /
    • 2009
  • In H.264/AVC, a rate-distortion optimization algorithm is used to decide the best block mode among various block modes. It improves the bit rate but greatly increases the amount of computation. This paper proposes a fast rate-distortion optimization algorithm that adaptively omits the rate-distortion optimization by predicting its cost from the cost already calculated for motion estimation. Simulation results show that, on average, the proposed algorithm nearly maintains the image quality and bit rate produced by full rate-distortion optimization while reducing the computation it adds by 69.86% in CIF and 69.63% in QCIF.
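
The skipping idea described in this abstract can be sketched as follows. This is an editorial illustration in Python: the linear cost predictor and the cost callables are hypothetical stand-ins, not the paper's actual H.264/AVC routines or prediction model.

```python
# Illustrative sketch only: `me_cost` and `rd_cost` are caller-supplied
# callables, and the default predictor is a hypothetical linear model.
def select_block_mode(block, modes, me_cost, rd_cost, predict=lambda c: 1.2 * c):
    """Skip full rate-distortion optimization for a mode whenever the RD cost
    predicted from the cheap motion-estimation cost cannot beat the best
    cost found so far."""
    best_mode, best_cost = None, float("inf")
    for mode in modes:
        cheap = me_cost(block, mode)      # cost already computed during ME
        if predict(cheap) >= best_cost:   # predicted RD cost is hopeless,
            continue                      # so omit the expensive RDO step
        cost = rd_cost(block, mode)       # full rate-distortion cost
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```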

Backwater Computation in River Channel by the Runoff-Frequency (유출변화(流出変化)에 의한 배수현상(背水現象) 해석(解析))

  • Suh, Seung Duk;Suk, Ki Hong
    • Current Research on Agriculture and Life Sciences
    • /
    • v.2
    • /
    • pp.77-90
    • /
    • 1984
  • The backwater phenomena of the Geumho river basin were investigated to obtain basic data for the Daegu basin area development plan, with the following results. 1. The relationship between basin area and river length at the Geumho river is A = 0.35 L^1.848 (r = 0.97). 2. Dividing the rainfall of the Geumho river basin into two parts, a first half and a second half, the first-half rainfall accounted for 57.5% of the total rainfall. 3. The maximum flood discharge occurred with 12 hours of continuous rainfall rather than 24 hours. 4. Investigation of the backwater phenomena from the Geumho II bridge to Chungchun showed water-level rises of 69 cm, 55 cm, and 44 cm at section III for starting-point water levels of 1.8 m, 2.4 m, and 4.0 m respectively. 5. Backwater phenomena investigated by flood water level showed a similar pattern, with an average water-level rise of 30 cm at section III. From these computations, it was confirmed that section III was the most strongly affected by backwater among the observed river reaches of the Geumho river. This work should also assist in deciding an economical and safe section for the construction of river banks and an estuary barrage.
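
The reported area-length regression in item 1 can be evaluated directly; a minimal sketch, assuming the units of the original fit:

```python
def basin_area(river_length: float) -> float:
    """Basin area from river length using the fitted power law
    A = 0.35 * L**1.848 (r = 0.97) reported for the Geumho river;
    units follow those of the original regression."""
    return 0.35 * river_length ** 1.848
```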


Color Correction with Optimized Hardware Implementation of CIE1931 Color Coordinate System Transformation (CIE1931 색좌표계 변환의 최적화된 하드웨어 구현을 통한 색상 보정)

  • Kim, Dae-Woon;Kang, Bong-Soon
    • Journal of IKEEE
    • /
    • v.25 no.1
    • /
    • pp.10-14
    • /
    • 2021
  • This paper presents hardware that reduces the computational complexity of the CIE1931 color coordinate transformation. The conventional algorithm has the disadvantage of a growing hardware size due to the 4-split multiply operations needed to handle the large bit widths that arise during the computation. The proposed algorithm instead pre-calculates the R2X and X2R matrix operations defined in the conventional algorithm and folds them into a single matrix. By applying this matrix to images to improve the color, both the amount of computation and the hardware size can be reduced. Comparing the Xilinx synthesis results of the hardware designed in Verilog shows that real-time processing in a 4K environment is achieved with fewer hardware resources. Furthermore, this paper validates the hardware operation by presenting execution results on an FPGA board.
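
A rough illustration of the matrix-folding idea described above: the values of R2X, X2R, and the correction C below are placeholders, not the coefficients used in the paper, and the sketch only shows why pre-multiplying the chain reduces the per-pixel work.

```python
import numpy as np

# Placeholder RGB->XYZ matrix (X2R is derived as its inverse) and a
# hypothetical correction C applied in XYZ space.
R2X = np.array([[0.49, 0.31, 0.20],
                [0.18, 0.81, 0.01],
                [0.00, 0.01, 0.99]])
X2R = np.linalg.inv(R2X)
C = np.diag([1.05, 1.00, 0.95])   # hypothetical correction gains

# Folding the chain into one matrix turns several per-pixel 3x3 products
# into a single one, which is what shrinks the multiplier count in hardware.
COMBINED = X2R @ C @ R2X

def correct(image: np.ndarray) -> np.ndarray:
    """Apply the pre-computed 3x3 matrix to an H x W x 3 float RGB image."""
    return image @ COMBINED.T
```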

A Locally Adaptive HDR Algorithm Using Integral Image and MSRCR Method (적분 영상과 MSRCR 기법을 이용한 국부적응적 HDR 알고리즘)

  • Han, Kyu-Phil
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.9
    • /
    • pp.1273-1283
    • /
    • 2022
  • This paper presents a locally adaptive HDR algorithm using the integral image and MSRCR for LDR images with inadequate exposure. Dynamic-range control methods fall into two categories, global and local tone mapping. Since global methods are relatively simple but limited in their ability to consider regional characteristics, local methods are often used, and MSRCR is a representative one. MSRCR gives moderate results, but it requires a large amount of computation for the multi-scale surround Gaussian functions and produces halo effects around edges. To resolve these two main problems, the proposed algorithm greatly reduces the surround computation by using the integral image, and adopts a set of variable-sized windows, chosen according to the type of each pixel's region, to decrease the halo effect. In addition, an offset control function is presented that mainly affects the subjective image quality and is based on the global input mean and the desired output mean. As a result, the proposed algorithm no longer uses Gaussian functions and reduces both the amount of computation and the halo effect.
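
A minimal sketch of the integral-image (summed-area table) idea the abstract relies on; this is editorial, not the paper's code, but it shows why the cost of a surround mean becomes independent of the window size.

```python
import numpy as np

def integral_image(gray: np.ndarray) -> np.ndarray:
    """Summed-area table of a 2D grayscale image."""
    return gray.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def box_mean(sat: np.ndarray, r0: int, c0: int, r1: int, c1: int) -> float:
    """Mean of gray[r0:r1, c0:c1] using four table lookups."""
    total = sat[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= sat[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= sat[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += sat[r0 - 1, c0 - 1]
    return total / ((r1 - r0) * (c1 - c0))
```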

A Study on Data Sharing Scheme using ECP-ABSC that Provides Data User Traceability in the Cloud

  • Hwang, Yong-Woon;Kim, Taehoon;Seo, Daehee;Lee, Im-Yeong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.12
    • /
    • pp.4042-4061
    • /
    • 2022
  • Recently, various security threats such as data leakage and data forgery have become possible in the communication and storage of data shared in cloud environments. This paper studies the CP-ABSC scheme as a way to address these security threats. In the existing CP-ABSC scheme, if a data user obtains the data through incorrect unsigncryption, the identity of the data owner who uploaded the ciphertext cannot be determined. Likewise, when a leaked secret key is verified, the identity of the data user who leaked it cannot be determined. In terms of efficiency, the ciphertext size depends on the number of attributes, and a large amount of computation is required for the user to unsigncrypt the ciphertext. In this paper, we propose ECP-ABSC, which provides data-user traceability, and use it in a cloud environment to provide an efficient and secure data-sharing scheme. The proposed ECP-ABSC scheme can trace and verify the identity of a data owner who uploaded a ciphertext incorrectly and of the data user who first leaked a secret key. In addition, it outputs a constant-size ciphertext and improves the efficiency of the user's unsigncryption computation.

Straight Line Detection Using PCA and Hough Transform (주성분 분석과 허프 변환을 이용한 직선 검출)

  • Oh, Jeong-su
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.2
    • /
    • pp.227-232
    • /
    • 2018
  • In the Hough transform, a representative algorithm for straight-line detection, the large number of edge pixels generated from noisy or complex images causes an enormous amount of computation and produces pseudo straight lines. This paper proposes a two-step straight-line detection algorithm that improves the conventional Hough transform. In the first step, the proposed algorithm divides an image into non-overlapping blocks and, using principal component analysis (PCA), extracts information related to straight lines from the edge pixels in each block. In the second step, it detects the straight lines by performing a Hough transform restricted to a limited slope range on the pixels associated with the straight lines. Simulation results show that the proposed algorithm reduces the ρ computation by 94.6% on average and prevents pseudo straight lines, although some additional computation is needed.
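
An editorial sketch of the first step described above, assuming a hypothetical helper that receives the edge-pixel coordinates of one block: PCA of those coordinates yields the dominant line direction, around which the subsequent Hough accumulation can be restricted.

```python
import numpy as np

def block_line_direction(edge_rows: np.ndarray, edge_cols: np.ndarray):
    """Return (angle in degrees, linearity in [0, 1]) of the edge pixels
    of one block; linearity near 1 indicates a clean straight line."""
    pts = np.stack([edge_cols, edge_rows], axis=1).astype(float)
    pts -= pts.mean(axis=0)                        # centre the coordinates
    cov = pts.T @ pts / len(pts)                   # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    major = eigvecs[:, -1]                         # principal direction
    angle = np.degrees(np.arctan2(major[1], major[0]))
    linearity = eigvals[-1] / (eigvals.sum() + 1e-12)
    return angle, linearity
```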

Snapshot-Based Offloading for Web Applications with HTML5 Canvas (HTML5 캔버스를 활용하는 웹 어플리케이션의 스냅샷 기반 연산 오프로딩)

  • Jeong, InChang;Jeong, Hyuk-Jin;Moon, Soo-Mook
    • Journal of KIISE
    • /
    • v.44 no.9
    • /
    • pp.871-877
    • /
    • 2017
  • A vast amount of research has been carried out on executing compute-intensive applications on resource-constrained mobile devices. Computation offloading is a method in which heavy computations are dynamically migrated from a mobile device to a server, exploiting the powerful hardware of the server to perform complex computations. An important issue for offloading is the complexity of reconciling the execution state of applications between the server and the client. To address this issue, snapshot-based offloading has recently been proposed, which utilizes the snapshot of a web application as a portable description of the execution state. However, for web applications using the HTML5 canvas, snapshot-based offloading does not function correctly, because the snapshot cannot capture the state of the canvas. In this paper, we propose a code generation technique to save the canvas state as part of a snapshot, so that snapshot-based offloading can be applied to web applications using the canvas.
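
The paper's technique generates JavaScript that restores the HTML5 canvas; the Python sketch below is only an editorial illustration of the underlying record-and-replay code-generation pattern, and all names in it are hypothetical.

```python
class RecordingCanvas:
    """Record drawing calls so that code replaying them can be emitted."""
    def __init__(self):
        self.log = []                        # (method, args) drawing history

    def __getattr__(self, name):
        def call(*args):
            self.log.append((name, args))    # record instead of drawing
        return call

    def snapshot_code(self, var="ctx"):
        """Emit source lines that replay the recorded drawing commands."""
        return "\n".join(
            f"{var}.{m}({', '.join(map(repr, a))})" for m, a in self.log
        )

canvas = RecordingCanvas()
canvas.fill_rect(0, 0, 100, 50)
canvas.line_to(10, 20)
print(canvas.snapshot_code())   # code that recreates the recorded state
```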

Flexible Decision-Making for Autonomous Agent Through Computation of Urgency in Time-Critical Domains (실시간 환경에서 긴급한 정도의 계산을 통한 자율적인 에이전트의 유연한 의사결정)

  • Noh Sanguk
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.9
    • /
    • pp.1196-1203
    • /
    • 2004
  • Autonomous agents need considerable computational resources to perform rational decision-making. The complexity of decision-making becomes prohibitive when a large number of agents are present and decisions have to be made under time pressure. One approach in time-critical domains is to respond to an observed condition with a predefined action. Although such a system may react very quickly to environmental conditions, predefined plans are of little value if the situation changes and re-planning is needed. In this paper we investigate strategies intended to tame the computational burden by using off-line computation in conjunction with on-line reasoning. We use performance profiles computed off-line and the notion of urgency (i.e., the value of time) computed on-line to choose the amount of information to be included during on-line deliberation. This method can adjust to various levels of real-time demand, but incurs some overhead associated with iterative deepening. We test our framework with experiments in a simulated anti-air defense domain. The experiments show that the off-line performance profiles and the on-line computation of urgency are effective in time-critical situations.
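
A hedged sketch of the trade-off described above, not the paper's actual model: an off-line performance profile maps the amount of information considered to expected decision quality, and the on-line urgency (value of time) prices deliberation time. The numbers below are illustrative only.

```python
def choose_deliberation_level(profile, time_per_level, urgency):
    """profile[k]       : expected quality when k pieces of information are used
       time_per_level[k]: deliberation time needed at level k
       urgency          : cost per unit of deliberation time (value of time)"""
    best_level, best_net = 0, float("-inf")
    for k, quality in enumerate(profile):
        net = quality - urgency * time_per_level[k]   # quality minus time cost
        if net > best_net:
            best_level, best_net = k, net
    return best_level

# Higher urgency pushes the agent toward shallower deliberation.
profile = [0.50, 0.70, 0.82, 0.88, 0.90]   # diminishing returns (illustrative)
times   = [0.01, 0.05, 0.20, 0.60, 1.50]
print(choose_deliberation_level(profile, times, urgency=0.1))   # deeper
print(choose_deliberation_level(profile, times, urgency=2.0))   # shallower
```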

Parallel Computation of a Nonlinear Structural Problem using Parallel Multifrontal Solver (다중 프런트 해법을 이용한 비선형 구조문제의 병렬계산)

  • Jeong, Sun Wan;Kim, Seung Jo
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.31 no.2
    • /
    • pp.41-50
    • /
    • 2003
  • In this paper, nonlinear parallel structural analyses using a parallel multifrontal solver are introduced, and damage localization for 2D and 3D crack models is presented as an application of nonlinear parallel computation. Parallel algorithms related to the nonlinear analysis that reduce the amount of memory used are employed, because many variables must be stored for this highly nonlinear damage analysis. In addition, Riks' continuation method is parallelized to search for the solution when strain softening occurs due to damage evolution. For the damage localization problem, several computational models with up to around one million degrees of freedom are used. The parallel performance of the nonlinear parallel algorithm is shown through these examples, and the local variation of damage at the crack tip is compared among models with different numbers of degrees of freedom.
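
For context, Riks-type continuation augments the equilibrium equations with an arc-length constraint so that the load factor becomes an additional unknown; in one common form (notation not taken from the paper):

$$\mathbf{f}_{\mathrm{int}}(\mathbf{u}) - \lambda\,\mathbf{f}_{\mathrm{ext}} = \mathbf{0}, \qquad \Delta\mathbf{u}^{\mathsf{T}}\Delta\mathbf{u} + \psi^{2}\,\Delta\lambda^{2}\,\mathbf{f}_{\mathrm{ext}}^{\mathsf{T}}\mathbf{f}_{\mathrm{ext}} = \Delta\ell^{2}$$

This is what allows the solution path to be traced past limit points when strain softening sets in.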