• Title/Summary/Keyword: Vectorization

Search Results: 56

$H_{\infty}$ Design for Decoupling Controllers Based on the Two-Degree-of-Freedom Standard Model Using LMI Methods (LMI 기법을 이용한 2자유도 표준모델에 대한 비결합 제어기의 $H_{\infty}$ 설계)

  • Gang, Gi-Won;Lee, Jong-Sung;Park, Kiheon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.3
    • /
    • pp.183-192
    • /
    • 2001
  • In this paper, a decoupling $H_{\infty}$ controller that minimizes the maximum energy of the output signal is designed to reduce the coupling between input/output variables that makes it difficult to control a system efficiently. State-space formulas corresponding to the existing transfer-matrix formulas of the controller are derived for computational efficiency. For a given decoupling $H_{\infty}$ problem, an efficient method is sought to find the controller coefficients through the LMI (Linear Matrix Inequalities) approach, by which the problem is formulated as a convex optimization problem. (An illustrative LMI sketch follows this entry.)

  • PDF
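
A minimal sketch of the LMI formulation style the abstract refers to, written in Python with cvxpy (the tooling is an assumption; the paper gives no code). It casts a Lyapunov stability condition as a convex LMI feasibility problem; the paper's decoupling $H_{\infty}$ synthesis involves a larger, more structured system of LMIs.

```python
# Lyapunov stability as an LMI: find P > 0 with A^T P + P A < 0.
# Illustrates the convex-feasibility machinery only, not the paper's
# actual decoupling H-infinity controller synthesis.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # example system matrix (stable)

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6  # margin that approximates the strict inequalities

constraints = [P >> eps * np.eye(n),                 # P > 0
               A.T @ P + P @ A << -eps * np.eye(n)]  # A'P + PA < 0
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)  # "optimal" means a certificate P exists
print(P.value)
```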

Generation of OC and MMA topology optimizer by using accelerating design variables

  • Lee, Dongkyu;Nguyen, Hong Chan;Shin, Soomi
    • Structural Engineering and Mechanics
    • /
    • v.55 no.5
    • /
    • pp.901-911
    • /
    • 2015
  • The goal of this study is to investigate the computational convergence of optimal solutions with respect to the optimality criteria (OC) method and the method of moving asymptotes (MMA), used as optimization models for the non-linear programming of material topology optimization, employing an acceleration method that makes design variables move rapidly toward 0 and 1 values. The 99-line topology optimization MATLAB code uses loop vectorization and memory pre-allocation to properly exploit the strengths of MATLAB, and restructures the program by moving portions of code out of the optimization loop so that they are executed only once. Numerical examples of a simple beam under a lateral load and a given material density limitation show the merits and demerits of the present OC and MMA for the 99-line topology optimization code of continuous material topology optimization design.
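
As a sketch of the loop vectorization the abstract credits to the 99-line MATLAB code, here is the well-known OC design-variable update written with NumPy standing in for MATLAB; the names (x, dc, lmid, move) follow the 99-line code, and the paper's own acceleration scheme is not reproduced.

```python
# Vectorized OC design-variable update: the whole density field is
# updated at once, with no per-element loop, and arrays are allocated
# once outside the optimization loop.
import numpy as np

def oc_update(x, dc, lmid, move=0.2, xmin=0.001):
    """One OC update step, clipped to the move limit and [xmin, 1]."""
    cand = x * np.sqrt(np.maximum(-dc, 0.0) / lmid)  # optimality-criteria step
    upper = np.minimum(1.0, np.minimum(x + move, cand))
    return np.maximum(xmin, np.maximum(x - move, upper))

# Pre-allocate once, then reuse inside the optimization loop.
x = np.full((30, 90), 0.5)     # initial densities on a 30x90 mesh
dc = -np.random.rand(30, 90)   # placeholder compliance sensitivities
x = oc_update(x, dc, lmid=1.0)
```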

CAE Applications in the Automotive Industry II - Utilization of Supercomputing Power for Design Analysis -

  • 이성철;김대영
    • Journal of the KSME
    • /
    • v.30 no.3
    • /
    • pp.267-274
    • /
    • 1990
  • As technology continues to advance, the importance of CAE in industrial design-analysis departments, along with the computing systems that support it, is increasingly emphasized. The need for supercomputing power arises in three areas: first, advanced analysis in applied mechanics; second, processing the results of large engineering analyses (data acquisition & management); and third, real-time response and the development of integrated engineering systems (linkage with CAD/CAM data). As an efficient approach that satisfies cost, stability, and reliability at once, the emergence of hybrid-type mini-supercomputer systems that bridge large supercomputers and microcomputers, i.e., departmental highly-parallel systems, is essential. In addition, to support industrial design analysis, system configurations and processing suited to application problems in structures, fluids, and dynamics, and to the characteristics of their numerical formulations, must be developed. This is expected to make possible "purpose-built dedicated machines" tailored to actual users' situations. Going forward, the availability of vectorized and parallelized software remains a major challenge to be overcome.

  • PDF

Text Area Segmentation and Layout Vectorization of Off-line Handwritten Forms (손으로 설계한 서식 문서의 문자 영역 분리 및 서식 벡터화)

  • Kim, Byeong-Yong;Gwon, O-Seok
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.10
    • /
    • pp.3086-3097
    • /
    • 2000
  • In this paper, we propose a method for separating text areas from freely hand-drawn form documents and vectorizing the line components among them. The proposed method first applies the DRC algorithm to the scanned image to prevent data loss during binarization and thinning. To correct image skew, the Hough transform is applied to the thinned image to estimate and correct the skew angle, after which the line components that form the structure of the form are extracted. The text areas are converted into data representing text regions by connected-component analysis, and the extracted line components are vectorized through alignment, merging, and correction processing. To demonstrate the effectiveness of the proposed method, experiments were conducted on forms prepared by 25 different people with no restriction on writing instruments, each person filling in one form with a ruler and one without. Out of a total of 750 vector sets, 666 form vectors were successfully detected without preprocessing and 746 with preprocessing, confirming the validity of the method. (A Hough-transform skew-correction sketch follows this entry.)

  • PDF
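
A rough sketch of the skew-correction step described above, assuming OpenCV (the paper's DRC binarization and thinning stages are omitted): the Hough transform estimates the dominant line angle of the thinned form image, and the image is rotated to compensate.

```python
# Estimate skew from near-horizontal Hough lines and rotate to correct.
import cv2
import numpy as np

def estimate_and_correct_skew(thinned):
    """thinned: 8-bit single-channel binary (thinned) form image."""
    lines = cv2.HoughLines(thinned, 1, np.pi / 180, 150)
    if lines is None:
        return thinned
    # Keep lines within 10 degrees of horizontal (theta near pi/2).
    angles = [theta for rho, theta in lines[:, 0]
              if abs(theta - np.pi / 2) < np.pi / 18]
    if not angles:
        return thinned
    skew_deg = np.degrees(np.median(angles) - np.pi / 2)
    h, w = thinned.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), skew_deg, 1.0)
    return cv2.warpAffine(thinned, M, (w, h))
```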

Main Points Extraction and Layout Vectorization of Hand-designed Forms (손으로 설계한 서식 문서의 주요점 검출 및 서식 구조 벡터화)

  • Kim, Byeong-Yong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2001.10a
    • /
    • pp.519-522
    • /
    • 2001
  • This paper proposes a method for detecting the main points of freely hand-drawn form documents and vectorizing the form structure. Detecting the main points of line components and vectorizing their structure is well suited to the structural analysis of printed form documents. In contrast, in hand-designed forms the regions around the main points are distorted, so the main points are not easily detected. To solve this problem, this paper proposes thinning the hand-designed form document, applying a mask with a tolerance margin, and compensating for the severe distortion around the main points through post-processing, so that main points can be detected even in hand-designed forms. Experiments to verify the effectiveness of the proposed method showed a vectorization success rate of 91.9% for hand-designed forms and 100% for printed forms, confirming that the proposed method is effective for vectorizing hand-designed form structures. (A junction-detection sketch follows this entry.)

  • PDF
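
A simplified sketch of main-point (endpoint/junction) detection on a thinned binary form image by counting skeleton neighbors under a 3x3 mask, assuming NumPy and SciPy; the paper's tolerance mask and distortion-compensating post-processing are more elaborate than this.

```python
# Candidate main points: skeleton pixels with 1 neighbor (endpoints)
# or 3+ neighbors (junctions of form lines).
import numpy as np
from scipy.ndimage import convolve

def main_points(skel):
    """skel: 0/1 ndarray of a thinned form image; returns boolean mask."""
    s = skel.astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbors = convolve(s, kernel, mode="constant")
    return (s == 1) & ((neighbors == 1) | (neighbors >= 3))
```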

The Accuracy of the Non-continuous I Test for One-Dimensional Arrays with References Created by Induction Variables

  • Zhang, Qing
    • Journal of Information Processing Systems
    • /
    • v.10 no.4
    • /
    • pp.523-542
    • /
    • 2014
  • One-dimensional arrays with subscripts formed by induction variables appear quite frequently in real programs. For most well-known data dependence testing methods, checking whether integer-valued solutions exist for one-dimensional arrays with references created by induction variables is very difficult. The I test, a refined combination of the GCD and Banerjee tests, is an efficient and precise data dependence testing technique for determining whether integer-valued solutions exist for one-dimensional arrays with constant bounds and single increments. In this paper, the non-continuous I test, an extension of the I test, is proposed to determine whether integer-valued solutions exist for one-dimensional arrays with constant bounds and non-singular increments. Experiments with benchmarks cited from Livermore and Vector Loop reveal definitive results for the 67 pairs of one-dimensional arrays that were tested.
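
For orientation, here is the classical GCD test that the I test refines, as a small Python sketch: a dependence equation $ai - bj = c$ (from references X[a*i + c1] and X[b*j + c2]) can have integer solutions only if gcd(a, b) divides c. The Banerjee-style bound checks that the I test adds are omitted.

```python
# GCD test for dependence between X[a*i + c1] and X[b*j + c2].
from math import gcd

def gcd_test(a, c1, b, c2):
    """False: provably independent. True: an integer solution may exist."""
    c = c2 - c1
    return c % gcd(a, b) == 0

print(gcd_test(2, 0, 2, 1))  # False: X[2i] and X[2j+1] never overlap
print(gcd_test(2, 0, 4, 2))  # True: integer solutions exist
```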

Preliminary Study on the Enhancement of Reconstruction Speed for Emission Computed Tomography Using Parallel Processing (병렬 연산을 이용한 방출 단층 영상의 재구성 속도향상 기초연구)

  • Park, Min-Jae;Lee, Jae-Sung;Kim, Soo-Mee;Kang, Ji-Yeon;Lee, Dong-Soo;Park, Kwang-Suk
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.443-450
    • /
    • 2009
  • Purpose: Conventional image reconstruction uses simplified physical models of projection. However, applying realistic physics, for example in fully 3D reconstruction, takes too long to process all the data in the clinic and is infeasible on a common reconstruction machine because complex physical models require large amounts of memory. We propose a realistic distributed-memory model for fast reconstruction using parallel processing on personal computers to enable such large-scale techniques. Materials and Methods: Preliminary feasibility tests on virtual machines and various performance tests on the commercial supercomputer Tachyon were performed. The expectation-maximization algorithm was tested with a common 2D projection model and a realistic 3D line-of-response model. Since processing slowed down (by up to 6 times) after a certain number of iterations, compiler optimization was performed to maximize the efficiency of parallelization. Results: Parallel processing of a program across multiple computers was achieved on Linux with MPICH and NFS. We verified that the differences between the parallel-processed and single-processed images at the same iterations were within the last significant bits of floating-point precision (about 6 bits). Two processors showed good parallel-computing efficiency (a 1.96-fold speedup). The slowdown was resolved by vectorization using SSE. Conclusion: Through this study, a practical parallel computing system for the clinic was established, able to reconstruct with ample memory using realistic physical models that could not be simplified.
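
A schematic of the distributed-memory pattern the study describes, assuming mpi4py (the system and projection models here are placeholders, not the paper's implementation): each rank processes its share of the projection bins, and the partial back-projections are combined with an allreduce before the multiplicative EM-style update.

```python
# Distributed EM-style update: partial results summed across ranks.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_voxels, n_bins = 1000, 4096
image = np.ones(n_voxels)

# Each rank owns a contiguous slice of the measured projection bins.
my_bins = slice(rank * n_bins // size, (rank + 1) * n_bins // size)

for _ in range(10):
    # Real code: forward-project `image` into the bins in `my_bins`,
    # form the measured/estimated ratio, and back-project it. A random
    # stand-in keeps the sketch self-contained:
    local_update = np.random.rand(n_voxels)
    total = np.empty_like(local_update)
    comm.Allreduce(local_update, total, op=MPI.SUM)
    image *= total / size  # multiplicative EM-style update
```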

Accuracy Assessment of Feature Collection Method with Unmanned Aerial Vehicle Images Using Stereo Plotting Program StereoCAD (수치도화 프로그램 StereoCAD를 이용한 무인 항공영상의 묘사 정확도 평가)

  • Lee, Jae One;Kim, Doo Pyo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.2
    • /
    • pp.257-264
    • /
    • 2020
  • Vectorization is currently the main method of feature collection (extraction) in digital mapping with UAV photogrammetry. However, this method is time consuming and prone to gross elevation errors when heights are extracted from a DSM (Digital Surface Model), because the three-dimensional feature coordinates are vectorized separately: plane information from an orthophoto and height from a DSM. Consequently, demand is increasing for a stereo plotting method capable of acquiring three-dimensional spatial information simultaneously. This method, however, requires expensive equipment, a Digital Photogrammetry Workstation (DPW), and the technology itself is still incomplete. In this paper, we evaluated the accuracy of a low-cost stereo plotting system, Menci's StereoCAD, by analyzing the three-dimensional spatial information it acquires. Images were taken with an FC 6310 camera mounted on a Phantom 4 Pro at a 90 m altitude with a Ground Sample Distance (GSD) of 3 cm. The accuracy analysis was performed by comparing coordinate differences between the ground survey and the stereo plotting at check points, and also at corner points by layer. The results showed that the Root Mean Square Error (RMSE) at the check points was 0.048 m for horizontal and 0.078 m for vertical coordinates, and across layers it ranged from 0.104 m to 0.127 m for horizontal and from 0.086 m to 0.092 m for vertical coordinates. In conclusion, the results showed that a 1:1,000 digital topographic map can be generated using a stereo plotting system with UAV images.
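
A small sketch of the accuracy measure reported above: per-axis RMSE between ground-survey and stereo-plotted coordinates at the check points (the coordinates below are illustrative placeholders, not the paper's data).

```python
# Per-axis RMSE between surveyed and plotted (E, N, height) coordinates.
import numpy as np

def rmse(surveyed, plotted):
    d = np.asarray(surveyed) - np.asarray(plotted)
    return np.sqrt(np.mean(d ** 2, axis=0))

survey = np.array([[1000.00, 2000.00, 50.00],
                   [1010.00, 2005.00, 51.00]])   # placeholder points
plot   = np.array([[1000.04, 2000.05, 50.07],
                   [1009.96, 2004.95, 50.92]])
e, n, h = rmse(survey, plot)
horizontal = np.hypot(e, n)  # combined horizontal RMSE
print(horizontal, h)
```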

A Method for Prediction of Quality Defects in Manufacturing Using Natural Language Processing and Machine Learning (자연어 처리 및 기계학습을 활용한 제조업 현장의 품질 불량 예측 방법론)

  • Roh, Jeong-Min;Kim, Yongsung
    • Journal of Platform Technology
    • /
    • v.9 no.3
    • /
    • pp.52-62
    • /
    • 2021
  • Quality control is critical at manufacturing sites and is key to predicting the risk of quality defects before manufacturing. However, the reliability of manual quality control methods is limited by human and physical constraints, because manufacturing processes vary across industries. These limitations become particularly obvious in domains with numerous manufacturing processes, such as the manufacture of major nuclear equipment. This study proposed a novel method for predicting the risk of quality defects using natural language processing and machine learning. Production data collected over 6 years at a factory that manufactures main equipment installed in nuclear power plants were used. In the text preprocessing stage, a mapping method was applied to the word dictionary so that domain knowledge could be appropriately reflected, and a hybrid algorithm combining n-grams, Term Frequency-Inverse Document Frequency, and Singular Value Decomposition was constructed for sentence vectorization. Next, in the experiment classifying the risky processes that result in poor quality, k-fold cross-validation was applied to cases from unigrams to cumulative trigrams. For objective experimental results, Naive Bayes and Support Vector Machine were used as classification algorithms, and a maximum accuracy of 0.7685 and F1-score of 0.8641 were achieved, showing that the proposed method is effective. The performance of the proposed method was also compared with the votes of field engineers, and the results revealed that it outperformed them. Thus, the method can be implemented for quality control at manufacturing sites.
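
A compact sketch of the hybrid sentence-vectorization pipeline described above (n-grams, TF-IDF, SVD) feeding an SVM classifier, using scikit-learn; the domain word-dictionary mapping, the Naive Bayes baseline, and the k-fold cross-validation protocol are omitted, and the documents and labels are placeholders.

```python
# n-gram TF-IDF -> truncated SVD -> SVM, as one scikit-learn pipeline.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC

docs = ["weld seam porosity found in unit",       # placeholder notes
        "surface finish within specification",
        "crack detected near weld joint",
        "dimensions verified within tolerance"]
labels = [1, 0, 1, 0]  # 1 = risky process, 0 = normal

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),  # uni- to cumulative trigram
    ("svd", TruncatedSVD(n_components=2)),           # low-rank semantic space
    ("svm", SVC()),                                  # one of the paper's classifiers
])
pipe.fit(docs, labels)
print(pipe.predict(["porosity near weld seam"]))
```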

A reordering scheme for the vectorizable preconditioner for the large sparse linear systems on the CRAY-2 (CRAY-2에서의 대형희귀행렬 연립방정식의 해법을 위한 벡터준비행렬의 재배열 방법)

  • Ma, Sang-Baek
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.6
    • /
    • pp.960-968
    • /
    • 1995
  • In this paper we present a reordering scheme that can lead to efficient vectorization of preconditioners for the large sparse linear systems arising from partial differential equations on the CRAY-2. The scheme is a line version of the conventional red/black ordering. Coupled with a variant of ILU (Incomplete LU) preconditioning, it can overcome the poor rate of convergence of conventional red/black reordering if a relatively large number of fill-ins is used. We substantiate this claim with various experiments on the CRAY-2 machine; the computed Frobenius norms of the error matrices also agree with it. (A sketch of the line red/black ordering follows this entry.)

  • PDF
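
A sketch of the line red/black idea behind the paper's reordering, in NumPy: alternate grid lines are colored red and black so that all lines of one color can be updated independently, hence vectorized. Only the ordering is shown, not the ILU preconditioner built on it.

```python
# Permutation listing red (even-numbered) grid lines first, then black
# (odd-numbered) lines; points within a line stay contiguous, so sweeps
# over each color vectorize cleanly.
import numpy as np

def line_red_black_order(nx, ny):
    idx = np.arange(nx * ny).reshape(ny, nx)  # natural row-major ordering
    red = idx[0::2].ravel()                   # even-numbered grid lines
    black = idx[1::2].ravel()                 # odd-numbered grid lines
    return np.concatenate([red, black])

perm = line_red_black_order(nx=6, ny=4)
print(perm)
```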