• Title/Summary/Keyword: Automatic computation

Search results: 161

Face Extraction using Genetic Algorithm, Stochastic Variable and Geometrical Model (유전 알고리즘, 통계적 변수, 기하학적 모델에 의한 얼굴 영역 추출)

  • 이상진;홍준표;이종실;홍승홍
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.891-894
    • /
    • 1998
  • This paper introduces an automatic face region extraction method. The method consists of two parts: face region detection and extraction of the facial features, namely the eyes, eyebrows, nose, and mouth. In the first stage, genetic algorithms (GAs) are used to locate the face region against a complex background. In the second stage, a geometrical face model is used to extract the eyes, eyebrows, nose, and mouth. In both stages, a stochastic variable is used to handle problems caused by bad lighting conditions; the number of blurring operations is determined according to this value. The average computation time is less than 1 second, and the method extracts facial features efficiently from images taken under different lighting conditions.

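The abstract names the ingredients (a GA that locates the face region, a stochastic variable that sets the amount of blurring under poor lighting, a geometric face model) but not their encodings or operators. The sketch below is a minimal illustration of the first stage only: a GA evolving candidate rectangles scored by a hypothetical skin-likelihood fitness. The fitness rule, chromosome encoding, and GA operators are assumptions made for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def skin_likelihood(image, box):
    """Hypothetical fitness: fraction of roughly skin-coloured pixels inside the box.
    A crude RGB rule stands in for the paper's statistical lighting model."""
    x, y, w, h = box
    patch = image[y:y + h, x:x + w].astype(float)
    if patch.size == 0:
        return 0.0
    r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    return float(skin.mean())

def ga_face_search(image, pop_size=40, generations=30):
    """Evolve (x, y, w, h) rectangles toward the region with the highest fitness."""
    H, W = image.shape[:2]
    pop = np.column_stack([rng.integers(0, W // 2, pop_size),
                           rng.integers(0, H // 2, pop_size),
                           rng.integers(20, W // 2, pop_size),
                           rng.integers(20, H // 2, pop_size)])
    for _ in range(generations):
        fitness = np.array([skin_likelihood(image, box) for box in pop])
        parents = pop[np.argsort(fitness)[::-1][:pop_size // 2]]        # selection
        n_child = pop_size - len(parents)
        a = parents[rng.integers(0, len(parents), n_child)]
        b = parents[rng.integers(0, len(parents), n_child)]
        mask = rng.integers(0, 2, (n_child, 4)).astype(bool)
        children = np.where(mask, a, b)                                 # uniform crossover
        children = children + rng.integers(-10, 11, children.shape)     # mutation
        pop = np.vstack([parents, np.clip(children, 1, None)])
    return pop[np.argmax([skin_likelihood(image, box) for box in pop])]

img = rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)   # random test image
print(ga_face_search(img))                                  # -> best (x, y, w, h)
```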

Locality-Conscious Nested-Loops Parallelization

  • Parsa, Saeed;Hamzei, Mohammad
    • ETRI Journal
    • /
    • v.36 no.1
    • /
    • pp.124-133
    • /
    • 2014
  • To speed up data-intensive programs, two complementary techniques, namely nested-loop parallelization and data locality optimization, should be considered. Effective parallelization techniques distribute the computation and the necessary data across different processors, whereas locality optimization places data and the computations that use them on the same processor. Therefore, locality and parallelization may demand different loop transformations, and an integrated approach that combines the two can generate much better results than either individual approach. This paper proposes a unified approach that integrates these two techniques to obtain an appropriate loop transformation. Applying this transformation yields coarse-grain parallelism by exploiting the largest possible groups of outer permutable loops, in addition to data locality through dependence satisfaction at the inner loops. These groups can be further tiled to improve data locality by exploiting data reuse in multiple dimensions.
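The paper's unified transformation framework is not reproduced here. As a small illustration of the locality side of the argument, the sketch below tiles a matrix product so that blocks are reused while cache-resident, with an outer tile loop whose iterations carry no dependences and could therefore be distributed across processors; the tile size and the use of plain Python/NumPy are choices made for readability, not the paper's code.

```python
import numpy as np

def tiled_matmul(A, B, tile=32):
    """Blocked (tiled) matrix product: the classic triple loop is reordered so that
    each tile of A, B, and C is reused while it is still cache-resident."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for ii in range(0, n, tile):        # outer tile loop: iterations are independent,
        for kk in range(0, k, tile):    # so they could run on different processors
            for jj in range(0, m, tile):
                C[ii:ii + tile, jj:jj + tile] += (A[ii:ii + tile, kk:kk + tile]
                                                  @ B[kk:kk + tile, jj:jj + tile])
    return C

A = np.random.rand(128, 96)
B = np.random.rand(96, 64)
assert np.allclose(tiled_matmul(A, B), A @ B)   # same result as the untiled product
```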

THE COMPUTATION OF UNSTEADY FLOWS AROUND THREE DIMENSIONAL WINGS ON DYNAMICALLY DEFORMING MESH (변형격자계를 이용한 3차원 날개 주변의 비정상 유동 해석)

  • Yoo, Il-Yong;Lee, Byung-Kwon;Lee, Seung-Soo
    • Journal of computational fluids engineering
    • /
    • v.15 no.1
    • /
    • pp.37-45
    • /
    • 2010
  • A deforming mesh should be used when bodies deform or move relative to each other under aerodynamic forces and moments. In addition, the flow solver for such problems should satisfy the geometric conservation law (GCL) to ensure the accuracy of the solutions. In this paper, a RANS (Reynolds-averaged Navier-Stokes) solver with an automatic mesh deformation capability based on the TFI (transfinite interpolation) method, and satisfying the GCL, is developed and applied to flows induced by wings oscillating at given frequencies. The computations are performed both on deforming meshes and on rigid meshes. The computational results are compared with experimental data and show good agreement.
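The RANS solver and the GCL treatment are beyond a short example, but the transfinite interpolation step mentioned in the abstract has a compact closed form: interior mesh points are rebuilt from the four (moved) boundary curves by a Boolean sum of one-dimensional interpolations. The sketch below shows that 2-D TFI formula; the boundary curves and the oscillating-wall example are invented for illustration.

```python
import numpy as np

def tfi_2d(bottom, top, left, right):
    """2-D transfinite interpolation: fill the interior of a mesh block from its four
    boundary curves. bottom/top have shape (ni, 2), left/right have shape (nj, 2),
    and their corner points must agree."""
    ni, nj = bottom.shape[0], left.shape[0]
    xi = np.linspace(0.0, 1.0, ni)[:, None, None]    # (ni, 1, 1)
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]   # (1, nj, 1)
    B, T = bottom[:, None, :], top[:, None, :]       # curves parameterized by xi
    L, R = left[None, :, :], right[None, :, :]       # curves parameterized by eta
    # Boolean sum of the two 1-D interpolations minus the bilinear corner term.
    return ((1 - eta) * B + eta * T + (1 - xi) * L + xi * R
            - ((1 - xi) * (1 - eta) * bottom[0] + xi * (1 - eta) * bottom[-1]
               + (1 - xi) * eta * top[0] + xi * eta * top[-1]))

# Example: move the bottom boundary (e.g. an oscillating wall) and rebuild the mesh.
ni, nj = 21, 11
s = np.linspace(0, 1, ni)
t = np.linspace(0, 1, nj)
bottom = np.column_stack([s, 0.1 * np.sin(np.pi * s)])   # deformed wall
top = np.column_stack([s, np.ones(ni)])
left = np.column_stack([np.zeros(nj), t])
right = np.column_stack([np.ones(nj), t])
mesh = tfi_2d(bottom, top, left, right)
print(mesh.shape)   # (21, 11, 2) grid of x, y coordinates
```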

Impact of Instance Selection on kNN-Based Text Categorization

  • Barigou, Fatiha
    • Journal of Information Processing Systems
    • /
    • v.14 no.2
    • /
    • pp.418-434
    • /
    • 2018
  • With the increasing use of the Internet and electronic documents, automatic text categorization has become imperative. Several machine learning algorithms have been proposed for text categorization. The k-nearest neighbor algorithm (kNN) is known to be one of the best state-of-the-art classifiers for text categorization. However, kNN suffers from limitations such as its high computational cost when classifying new instances. Instance selection techniques have emerged as highly competitive methods for improving kNN through data reduction. However, previous works have evaluated these approaches only on structured datasets, and their performance has not been examined in the text categorization domain, where the dimensionality and size of the dataset are very high. Motivated by these observations, this paper investigates and analyzes the impact of instance selection on kNN-based text categorization in terms of classification accuracy, classification efficiency, and data reduction.
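The specific instance-selection algorithms evaluated in the paper are not reproduced here. The sketch below only shows the general idea on a toy corpus: a condensed-nearest-neighbour style pass retains the training documents needed to preserve the 1-NN decision, after which kNN classifies with the reduced set. The corpus, labels, and selection rule are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Tiny toy corpus standing in for a text-categorization training set (1 = spam-like).
docs = ["cheap loans apply now", "win a free prize today", "meeting moved to friday",
        "project review notes attached", "free prize cheap offer", "agenda for friday meeting"]
labels = np.array([1, 1, 0, 0, 1, 0])

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()

def condense(X, y):
    """CNN-style instance selection: keep only instances that a 1-NN rule built on the
    kept set misclassifies, shrinking the data while roughly preserving the boundary."""
    keep = [0]                                        # seed with the first instance
    changed = True
    while changed:
        changed = False
        for i in range(len(y)):
            if i in keep:
                continue
            nn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
            if nn.predict(X[i:i + 1])[0] != y[i]:     # misclassified -> must be kept
                keep.append(i)
                changed = True
    return np.array(keep)

kept = condense(X, labels)
print(f"kept {len(kept)} of {len(labels)} training instances")

knn = KNeighborsClassifier(n_neighbors=1).fit(X[kept], labels[kept])
print(knn.predict(vec.transform(["free cheap prize"]).toarray()))   # -> [1] expected
```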

Path Planning for Parking using Multi-dimensional Path Grid Map (다차원 경로격자지도를 이용한 주차 경로계획 알고리즘)

  • Choi, Jong-An;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.12 no.2
    • /
    • pp.152-160
    • /
    • 2017
  • Recent studies on automatic parking have actively adopted technology developed for mobile robots. Among these, the path planning scheme plans a route along which a vehicle can reach a target parking position while satisfying its kinematic constraints. However, previous methods require a large amount of computation and/or cannot easily be applied to different environmental conditions. Therefore, there is a need for a path planning scheme that is fast, efficient, and versatile. In this study, we use a multi-dimensional path grid map to solve this problem. This multi-dimensional path grid map contains routes that take the vehicle's kinematic constraints into account, and it can be used with the A* algorithm to plan an efficient path. The proposed method was verified using Prescan, a MATLAB-based simulation program, and it is shown that the proposed scheme can be applied efficiently to both parallel and vertical parking.
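The multi-dimensional path grid map itself (precomputed, kinematically feasible motions indexed by pose) is the paper's contribution and is not reproduced here; the sketch below only shows the A* search it is combined with, on a plain 4-connected occupancy grid with a Manhattan heuristic.

```python
import heapq

def astar(grid, start, goal):
    """Plain A* on a 4-connected occupancy grid (0 = free, 1 = obstacle). The paper
    instead expands kinematically feasible motions taken from a precomputed
    multi-dimensional path grid map; here the successors are just grid neighbours."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                                       # reconstruct the path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```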

Acceleration method of fission source convergence based on RMC code

  • Pan, Qingquan;Wang, Kan
    • Nuclear Engineering and Technology
    • /
    • v.52 no.7
    • /
    • pp.1347-1354
    • /
    • 2020
  • To improve the efficiency of Monte Carlo (MC) criticality calculations, an acceleration method for fission source convergence that provides an improved initial fission source is proposed. In this method, MC global homogenization is carried out to obtain the macroscopic cross sections of each material mesh, and a nonlinear iterative solution of the SP3 equations is then used to determine the fission source distribution. The calculated fission source, which describes the spatial and energy distribution, is very close to the real fission source. The method is a fully automatic computation process and is tested on the C5G7 benchmark; the results show that the acceleration reduces the number of inactive cycles and the overall running time.
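As a heavily simplified stand-in for the paper's scheme, the sketch below runs a power iteration on a one-group, 1-D diffusion problem and normalizes the converged fission source; in the paper the low-order problem is the SP3 equations on MC-homogenized multigroup cross sections, and the resulting source seeds the Monte Carlo inactive cycles. The cross sections and the diffusion simplification are assumptions for illustration only.

```python
import numpy as np

def diffusion_fission_source(nx=50, dx=1.0, D=1.2, sigma_a=0.10, nu_sigma_f=0.11, tol=1e-8):
    """Power iteration on a 1-D, one-group finite-difference diffusion problem with
    zero-flux boundaries; the converged fission source shape is what would be used
    to initialize the Monte Carlo source."""
    main = np.full(nx, 2 * D / dx**2 + sigma_a)      # -D d2/dx2 + sigma_a operator
    off = np.full(nx - 1, -D / dx**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    S, k = np.ones(nx), 1.0                          # flat initial source
    for _ in range(500):
        phi = np.linalg.solve(A, S / k)              # fixed-source solve
        S_new = nu_sigma_f * phi
        k_new = k * S_new.sum() / S.sum()
        if abs(k_new - k) < tol:
            break
        S, k = S_new, k_new
    return S_new / S_new.sum(), k_new                # normalized source shape, k_eff

shape, keff = diffusion_fission_source()
print(f"k_eff ~ {keff:.5f}, peak-to-average source = {shape.max() * len(shape):.2f}")
```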

Rotor design functional standard of Synchronous Reluctance Motor according to torque/volume using FEM & SUMT (유한요소법과 SUMT를 이용한 동기형 릴럭턴스 전동기의 토크와 부피에 따른 회전자 설계의 함수화)

  • Lee, Rae-Hwa;Lee, Jung-Ho
    • The Transactions of the Korean Institute of Electrical Engineers B
    • /
    • v.55 no.11
    • /
    • pp.577-581
    • /
    • 2006
  • This paper deals with the automatic computation of a rotor design function based on torque per volume for a synchronous reluctance motor (SynRM). The focus is the design with respect to torque per volume at each rated power according to the rotor diameter of a SynRM. Coupled finite element analysis (FEA) and the sequential unconstrained minimization technique (SUMT) are used to evaluate design solutions. The proposed procedure allows the rotor geometric design function to be defined in terms of the rotor diameter and rated power, starting from an existing motor or a preliminary design.
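The FEA coupling is out of scope for a short example, but the SUMT part has a standard form: a sequence of unconstrained minimizations with an increasing penalty on constraint violation. The sketch below shows that exterior-penalty loop on a cheap analytic stand-in; the objective, constraint, and penalty schedule are illustrative assumptions, not the paper's design problem.

```python
import numpy as np
from scipy.optimize import minimize

def sumt(objective, constraints, x0, r0=1.0, growth=10.0, outer_iters=6):
    """Sequential unconstrained minimization technique (exterior penalty form):
    solve a sequence of unconstrained problems with an increasing penalty weight.
    In the paper the objective and constraints come from FEA; here they are analytic."""
    x, r = np.asarray(x0, dtype=float), r0
    for _ in range(outer_iters):
        def penalized(z):
            violation = sum(max(0.0, g(z)) ** 2 for g in constraints)  # g(z) <= 0 feasible
            return objective(z) + r * violation
        x = minimize(penalized, x, method="Nelder-Mead").x             # inner solve
        r *= growth                                                    # tighten penalty
    return x

# Toy stand-in: maximize x0*x1 (a "torque-like" quantity) subject to x0 + x1 <= 2.
objective = lambda z: -z[0] * z[1]
constraints = [lambda z: z[0] + z[1] - 2.0]
print(sumt(objective, constraints, x0=[0.5, 0.5]))   # -> approximately [1, 1]
```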

The Optimization Design of Adder-based Distributed Arithmetic and DCT Processor design (가산기-기반 분산 연산의 최적화 설계 및 이를 이용한 DCT 프로세서 설계)

  • 임국찬;장영진;이현수
    • Proceedings of the IEEK Conference
    • /
    • 2000.11b
    • /
    • pp.116-119
    • /
    • 2000
  • Inner-product computation is widely used in DSP, but it is difficult to implement in dedicated hardware because it requires many multiplication and addition steps. To reduce these steps, an efficient hardware architecture is essential. This paper proposes a design method for adder-based distributed arithmetic for implementing a DCT module, together with the automatic design of the summation network, which is the core block of the proposed method. Finally, it is shown that the proposed design method is more efficient than the conventional ROM-based distributed arithmetic.

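Distributed arithmetic rebuilds an inner product from the bit-planes of its inputs. The sketch below models the bit-serial computation in software: the inner sum over coefficients is the partial sum that a ROM-based DA would look up in a 2^K-entry table and that an adder-based DA, as proposed here, computes with an adder network each cycle. The word length and test values are arbitrary.

```python
def da_inner_product(coeffs, xs, bits=8):
    """Bit-serial distributed arithmetic: y = sum_k c_k * x_k is reassembled from the
    bit-planes of the (two's complement) inputs, one bit position per cycle."""
    acc = 0
    for b in range(bits):
        # Partial sum for bit-plane b: add c_k whenever bit b of x_k is set.
        partial = sum(c for c, x in zip(coeffs, xs) if (x >> b) & 1)
        if b == bits - 1:
            acc -= partial << b          # sign bit of two's complement inputs
        else:
            acc += partial << b
    return acc

coeffs = [3, -1, 4, 2]
xs = [5, -7, 2, 100]                               # must fit in `bits`-bit two's complement
xs_tc = [x & ((1 << 8) - 1) for x in xs]           # two's complement encoding
print(da_inner_product(coeffs, xs_tc),             # DA result ...
      sum(c * x for c, x in zip(coeffs, xs)))      # ... matches the direct inner product
```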

Optimal strapdown coning compensation algorithm (최적 스트랩다운 원추 보상 알고리듬)

  • Park, Chan-Gook;Kim, Kwang-Jin;Lee, Jang-Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.2 no.3
    • /
    • pp.242-247
    • /
    • 1996
  • In this paper, an optimal coning compensation algorithm for strapdown systems is proposed by minimizing the coning error. The proposed algorithm is derived in a generalized form: it contains the class of existing coning algorithms and allows the design of optimal algorithms for various combinations of gyro samples. It is shown that the magnitude of the resulting algorithm error depends mainly on the total number of gyro samples, including both current and previous samples. Based on these results, the proposed algorithm enables designers to easily develop an effective coning compensation algorithm that meets their attitude computation specifications. In addition, a multirate method that implements the algorithm efficiently is presented.

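The paper derives a generalized family of coning algorithms and optimizes their coefficients; those optimal coefficients are not reproduced here. The sketch below shows only the classic two-sample member of that family (cross-product correction with the standard 2/3 coefficient) applied to gyro increments from a pure coning motion; the motion parameters are invented for the demo.

```python
import numpy as np

def two_sample_coning_update(theta1, theta2):
    """Classic two-sample coning compensation: the rotation vector over the attitude
    update interval is the summed gyro angle increments plus a cross-product coning
    correction. The paper generalizes and optimizes the coefficient of this term for
    arbitrary numbers of current and previous gyro samples."""
    theta1, theta2 = np.asarray(theta1), np.asarray(theta2)
    coning = (2.0 / 3.0) * np.cross(theta1, theta2)   # coning correction term
    return theta1 + theta2 + coning                   # rotation vector for the interval

def gyro_increment(t0, dt, a=0.01, w=2 * np.pi * 10.0):
    """Integrated angular rate over [t0, t0+dt] for a pure coning motion
    (sinusoidal rates about x and y, 90 degrees out of phase)."""
    return np.array([a * (np.cos(w * (t0 + dt)) - np.cos(w * t0)),
                     a * (np.sin(w * (t0 + dt)) - np.sin(w * t0)),
                     0.0])

dt = 0.005                                            # gyro sub-interval [s]
th1, th2 = gyro_increment(0.0, dt), gyro_increment(dt, dt)
print(two_sample_coning_update(th1, th2))             # rotation vector incl. z-axis coning term
```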

Generalized Partially Linear Additive Models for Credit Scoring

  • Shim, Ju-Hyun;Lee, Young-K.
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.4
    • /
    • pp.587-595
    • /
    • 2011
  • Credit scoring is an objective and automatic system for assessing the credit risk of each customer. The logistic regression model is one of the popular credit scoring methods for predicting the default probability; however, despite its advantages of interpretability and low computational cost, it may not capture possible nonlinear effects of the predictors. In this paper, we propose to use a generalized partially linear model as an alternative to logistic regression. We also introduce modern ensemble techniques such as bagging, boosting, and random forests. We compare these methods via a simulation study and illustrate them on the German credit dataset.
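The generalized partially linear additive model itself requires a smoothing library and is not sketched here; the snippet below only reproduces the flavour of the paper's comparison, scoring logistic regression against random forest (a bagging-style ensemble) and gradient boosting baselines by AUC on a synthetic stand-in for a credit dataset (the paper uses the German credit data).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for a credit dataset; label 1 = default.
X, y = make_classification(n_samples=2000, n_features=15, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]   # predicted default probability
    print(f"{name:>20s}: AUC = {roc_auc_score(y_te, prob):.3f}")
```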