• Title/Summary/Keyword: Preprocessing Process

A Preprocessing Methodology for Reducing Calculation Errors in 3D Models during Development of a Heat Transfer Analysis Program for 3D Building Structures (건물의 3차원 구조체에 대한 전열해석 프로그램 개발 중 3차원 모델의 해석 오류 저감을 위한 사전 수정 방법 연구)

  • Lee, Kyusung;Lee, Juhee;Lee, Yongjun
    • KIEAE Journal
    • /
    • v.16 no.1
    • /
    • pp.89-94
    • /
    • 2016
  • This study is part of the development of a three-dimensional (3D) heat transfer analysis program. The program is being developed without its own built-in 3D modeller, so 3D models must be created in an external modeller, such as a generic CAD program, and imported into the developed program. A 3D mesh for numerical calculation is then created from the imported model's geometric data. However, models created in external modellers are likely to contain errors in their geometric data, such as position mismatches between vertices or surfaces, and these errors make it difficult to create the 3D mesh. They must therefore be detected and cured in a preprocessing step before mesh creation. In this study, four kinds of filters and functions were developed and tested. First, a 'vertex error filter' detects and cures position-data errors between vertices. Second, a 'normal vector error filter' handles errors in the surface normal vectors of the 3D model. Third, an 'intersection filter' extracts and creates intersection surfaces between adjacent objects. Fourth, a 'polygon-line filter' indicates the outlines of objects in the 3D model. The developed filters and functions were tested on several shapes of 3D models and their applicability was confirmed. They will be applied to the program under development and continuously tested and refined for fewer errors and greater accuracy.
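The 'vertex error filter' described above can be sketched as a tolerance-based vertex merge. This is a minimal illustration of the idea, not the paper's implementation; the tolerance value and the brute-force pairwise comparison are assumptions made here for clarity.

```python
# Sketch of a "vertex error filter": merge vertices that lie within a small
# tolerance of each other, curing position mismatches in imported geometry.

def snap_vertices(vertices, tol=1e-6):
    """Merge vertices closer than `tol`; return (unique_vertices, index_map)."""
    unique = []      # representative vertices kept so far
    index_map = []   # index_map[i] -> index into `unique` for vertices[i]
    for v in vertices:
        for j, u in enumerate(unique):
            if all(abs(a - b) <= tol for a, b in zip(v, u)):
                index_map.append(j)   # v is a duplicate of an existing vertex
                break
        else:
            index_map.append(len(unique))
            unique.append(v)          # v is a new, distinct vertex
    return unique, index_map
```

Faces that referenced the original vertex indices can then be remapped through `index_map`, so near-coincident vertices collapse into one before meshing.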

Bounding Box based Shadow Ray Culling Method for Real-Time Ray Tracer (실시간 광선추적기를 위한 바운딩 박스 기반의 그림자 검사 컬링 기법)

  • Kim, Sangduk;Kim, Jin-Woo;Park, Woo-Chan;Han, Tack-Don
    • Journal of Korea Game Society
    • /
    • v.13 no.3
    • /
    • pp.85-94
    • /
    • 2013
  • In this paper, we propose a scheme to reduce the number of shadow tests conducted while rendering with ray tracing. The shadow test is a very important step in generating photo-realistic images with ray tracing. In the rendering phase, the ray tracer decides whether to cull a shadow test based on information calculated from shadow tests conducted on the kd-tree in the preprocessing phase. The proposed method can be used with little modification to the conventional rendering process. It is suitable for static scenes, in which the geometry and light sources do not change, in the same manner as the conventional method. The validity of the proposed scheme is verified and its performance evaluated through cycle-accurate simulation. Experimental results show that up to 17% of shadow tests can be eliminated.
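The bounding-box culling idea can be illustrated with a standard slab test: before running a full shadow test, check whether the segment from a shaded point to the light even crosses an occluder's axis-aligned bounding box. This is a generic sketch of the principle, not the paper's kd-tree-based precomputation.

```python
def segment_hits_aabb(p0, p1, box_min, box_max):
    """Slab test: does the segment p0 -> p1 intersect the axis-aligned box?

    The segment is parameterized as p0 + t*(p1 - p0), t in [0, 1].
    """
    tmin, tmax = 0.0, 1.0
    for a in range(3):                       # x, y, z slabs
        d = p1[a] - p0[a]
        if abs(d) < 1e-12:                   # segment parallel to this slab
            if p0[a] < box_min[a] or p0[a] > box_max[a]:
                return False
        else:
            t1 = (box_min[a] - p0[a]) / d
            t2 = (box_max[a] - p0[a]) / d
            if t1 > t2:
                t1, t2 = t2, t1
            tmin = max(tmin, t1)
            tmax = min(tmax, t2)
            if tmin > tmax:                  # slab intervals do not overlap
                return False
    return True
```

If the segment misses every occluder's box, the expensive per-triangle shadow test can be culled entirely.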

Multi-FNN Identification Based on HCM Clustering and Evolutionary Fuzzy Granulation

  • Park, Ho-Sung;Oh, Sung-Kwun
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.2
    • /
    • pp.194-202
    • /
    • 2003
  • In this paper, we introduce a category of Multi-FNN (Fuzzy-Neural Network) models, analyze the underlying architectures, and propose a comprehensive identification framework. The proposed Multi-FNNs dwell on the concept of fuzzy rule-based FNNs based on HCM clustering and evolutionary fuzzy granulation, and exploit linear inference as a generic inference mechanism. By this nature, the FNN model is geared toward capturing relationships between information granules known as fuzzy sets. The form of the information granules themselves (in particular, their distribution and the type of membership function) becomes an important design feature of the FNN model, contributing to its structural as well as parametric optimization. The identification environment uses clustering techniques (Hard C-Means, HCM) and exploits genetic optimization as a vehicle of global optimization, augmented by more refined gradient-based learning mechanisms such as standard back-propagation. The HCM algorithm, whose role is to preprocess the process data for system modeling, is used to determine the structure of the Multi-FNNs. The detailed parameters of the Multi-FNN (such as the apexes of membership functions, learning rates, and momentum coefficients) are adjusted using genetic algorithms. An aggregate performance index with a weighting factor is proposed to achieve a sound balance between the approximation and generalization (predictive) abilities of the model. To evaluate the performance of the proposed model, two numeric data sets are used: numerical data describing a certain nonlinear function, and NOx emission process data from a gas turbine power plant.
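The Hard C-Means step that determines the Multi-FNN structure is crisp (non-fuzzy) c-means clustering. A minimal sketch follows; the random initialization and fixed iteration count are illustrative choices, not the paper's settings.

```python
import random

def hcm(data, c, iters=50, seed=0):
    """Hard C-Means: partition points (tuples) into c crisp clusters.

    Returns (centers, clusters): cluster centroids and the member lists.
    """
    rng = random.Random(seed)
    centers = rng.sample(data, c)            # pick c distinct points as seeds
    clusters = [[] for _ in range(c)]
    for _ in range(iters):
        clusters = [[] for _ in range(c)]
        for x in data:                       # assign each point to nearest center
            j = min(range(c),
                    key=lambda k: sum((a - b) ** 2
                                      for a, b in zip(x, centers[k])))
            clusters[j].append(x)
        # recompute centroids; keep old center if a cluster emptied out
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl
                   else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters
```

In the identification framework, the resulting crisp partition would seed the fuzzy granules whose membership functions the genetic algorithm then tunes.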

A Study on Automatic Lexical Acquisition for Multi-lingual Speech Recognition (다국어 음성 인식을 위한 자동 어휘모델의 생성에 대한 연구)

  • 지원우;윤춘덕;김우성;김석동
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.6
    • /
    • pp.434-442
    • /
    • 2003
  • Software internationalization, the process of making software easier to localize for specific languages, has deep implications when applied to speech technology, where the goal of the task lies in the very essence of the particular language. A great deal of work and fine-tuning has gone into language-processing software based on ASCII or on a single language, say English, making a port to different languages difficult. The inherent identity of a language manifests itself in its lexicon, where its character set, phoneme set, and pronunciation rules are revealed. We propose a decomposition of the lexicon-building process into four discrete, sequential steps. As preprocessing, the language-specific character code is translated to Unicode. Then: (step 1) transliterating code points from Unicode; (step 2) applying phonetic standardization rules; (step 3) applying grapheme-to-phoneme rules; (step 4) applying phonological processes.
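The decomposition above can be sketched as a tiny pipeline. The rule tables below are hypothetical stand-ins for the paper's language-specific rules, and the transliteration/standardization steps are collapsed into simple string operations purely for illustration.

```python
import unicodedata

# Hypothetical rule tables -- placeholders, not the paper's actual rules.
G2P = {"ph": "f", "th": "T", "ch": "tS"}   # grapheme -> phoneme (step 3)
PHONO = {"fs": "fz"}                        # phonological process (step 4)

def build_lexical_entry(word):
    """Run one word through the four-step lexicon-building sketch."""
    s = unicodedata.normalize("NFC", word)  # preprocessing: canonical Unicode
    s = s.lower()                           # steps 1-2: transliteration and
                                            # orthographic standardization (toy)
    for g, p in G2P.items():                # step 3: grapheme-to-phoneme rules
        s = s.replace(g, p)
    for seq, out in PHONO.items():          # step 4: phonological processes
        s = s.replace(seq, out)
    return s
```

A real system would replace each table with language-dependent rule sets, which is exactly the point of isolating them as separate steps.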

A Big Data Preprocessing using Statistical Text Mining (통계적 텍스트 마이닝을 이용한 빅 데이터 전처리)

  • Jun, Sunghae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.5
    • /
    • pp.470-476
    • /
    • 2015
  • Big data is used in diverse areas. Computer science and sociology, for example, approach big data with different concerns, yet both analyze it and draw implications from the results, so meaningful analysis and interpretation of big data are needed in most areas. Statistics and machine learning provide various methods for big data analysis. In this paper, we study a process for big data analysis and propose an efficient methodology covering the entire process, from collecting big data to drawing implications from the analysis results. In addition, because patent documents have the characteristics of big data, we propose an approach that applies big data analysis to patent data and uses the results to build R&D strategy. To illustrate how the proposed methodology works on a real problem, we perform a case study using applied and registered patent documents retrieved from patent databases around the world.
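A typical statistical text-mining preprocessing pass of the kind this methodology relies on, tokenization, stop-word removal, and construction of a term-document matrix, can be sketched as follows. The stop-word list is illustrative, not the paper's.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in"}   # illustrative stop list

def preprocess(doc):
    """Lowercase, tokenize, and drop stopwords from one document."""
    tokens = re.findall(r"[a-z]+", doc.lower())
    return [t for t in tokens if t not in STOPWORDS]

def term_document_matrix(docs):
    """Build (vocabulary, rows of term counts) over a document collection."""
    counts = [Counter(preprocess(d)) for d in docs]
    vocab = sorted(set().union(*counts))            # union of all term sets
    return vocab, [[c[t] for t in vocab] for c in counts]
```

For patent documents, the resulting matrix is what downstream statistical models (clustering, topic analysis, and so on) would consume.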

Multi-Document Summarization Method Based on Semantic Relationship using VAE (VAE를 이용한 의미적 연결 관계 기반 다중 문서 요약 기법)

  • Baek, Su-Jin
    • Journal of Digital Convergence
    • /
    • v.15 no.12
    • /
    • pp.341-347
    • /
    • 2017
  • As the amount of document data increases, users need summarized information to understand documents. Existing document-summarization methods, however, rely on overly simple statistics, and research on multi-document summarization that handles sentence ambiguity and generates meaningful sentences is insufficient. In this paper, we investigate semantic connections and a preprocessing process that removes unnecessary information. Based on lexical semantic-pattern information, we propose a multi-document summarization method that enhances the semantic connectivity between sentences using a VAE (variational autoencoder). Using sentence word vectors, sentences are reconstructed after learning from the compressed information and the attribute discriminators generated as latent variables, and semantic-connection processing generates natural summary sentences. Compared with other document-summarization methods, the proposed method showed a small but consistent improvement in performance, demonstrating that semantic sentence generation and connectivity can be increased. In future work, we will study how to extend semantic connections by experimenting with various attribute settings.

A Study on AWGN Removal using Modified Edge Detection (변형된 에지 검출을 이용한 AWGN 제거에 관한 연구)

  • Kwon, Se-Ik;Hwang, Yeong-Yeun;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.790-792
    • /
    • 2017
  • As demand for digital image-processing devices has rapidly increased, excellent image quality is required. However, degradation can occur from multiple causes during transmission and processing, so the need to eliminate noise has grown and noise-elimination technology has become a major study area. In this article, therefore, an image-restoration algorithm is suggested that applies different filters to edge and non-edge areas, using modified edge detection in a preprocessing step, so as to relieve the effect of additive white Gaussian noise (AWGN) added to the image. In addition, the method is compared with existing methods using peak signal-to-noise ratio (PSNR) as the objective measure of the improvement.
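PSNR, the objective measure used above, is defined as 10·log10(peak²/MSE). A minimal implementation for images given as flat lists of grayscale intensities:

```python
import math

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size images,
    supplied as flat sequences of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

Higher PSNR means the restored image is closer to the noise-free original, which is how the filters' improvement is ranked.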

A Genetic Programming Approach to Blind Deconvolution of Noisy Blurred Images (잡음이 있고 흐릿한 영상의 블라인드 디컨벌루션을 위한 유전 프로그래밍 기법)

  • Mahmood, Muhammad Tariq;Chu, Yeon Ho;Choi, Young Kyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.1
    • /
    • pp.43-48
    • /
    • 2014
  • Usually, image deconvolution is applied as a preprocessing step in surveillance systems to reduce the effects of motion or out-of-focus blur. In this paper, we propose a blind image-deconvolution filtering approach based on genetic programming (GP). A numerical expression is developed through a GP process for image restoration, one that optimally combines and exploits dependencies among features of the blurred image. To develop such a function, a set of feature vectors is first formed by considering a small neighborhood around each pixel. In the second stage, the estimator is trained and developed through the GP process, which automatically selects and combines useful feature information under a fitness criterion. The developed function is then applied to estimate the pixel intensities of the degraded image. Its performance is evaluated on various degraded image sequences, and our comparative analysis highlights the effectiveness of the proposed filter.

A Fingerprint Identification System using Large Database (대용량 DB를 사용한 지문인식 시스템)

  • Cha, Jeong-Hee;Seo, Jeong-Man
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.4 s.36
    • /
    • pp.203-211
    • /
    • 2005
  • In this paper, we propose a new automatic fingerprint-identification system that identifies individuals in large databases. The algorithm consists of three steps: preprocessing, classification, and matching. For classification, we present a new technique based on a statistical approach to the directional-image distribution. For matching, we describe an improved minutiae-candidate-pair extraction algorithm that is faster and more accurate than the existing algorithm. In the matching stage, fingerprint minutiae are extracted from the thinned image for accuracy, and a matching process using minutiae-linking information is introduced. Introducing linking information into minutiae matching is a simple but accurate way to quickly solve the problem of selecting reference minutiae pairs when comparing two fingerprints, and the algorithm is invariant to translation and rotation of the fingerprint. The proposed system was tested on 1000 fingerprint images from a semiconductor-chip-style scanner. Experimental results show a decreased false-acceptance rate and an increased genuine-acceptance rate compared with the existing method.

A Study on Condition-based Maintenance Policy using Minimum-Repair Block Replacement (최소수리 블록교체 모형을 활용한 상태기반 보전 정책 연구)

  • Lim, Jun Hyoung;Won, Dong-Yeon;Sim, Hyun Su;Park, Cheol Hong;Koh, Kwan-Ju;Kang, Jun-Gyu;Kim, Yong Soo
    • Journal of Applied Reliability
    • /
    • v.18 no.2
    • /
    • pp.114-121
    • /
    • 2018
  • Purpose: This study proposes a process for evaluating a preventive maintenance policy for a system with degradation characteristics and for calculating the appropriate preventive maintenance cycle using time- and condition-based maintenance. Methods: First, the collected data are divided into maintenance-history lifetime and degradation lifetime, and analysis datasets are extracted through preprocessing. A particle filter algorithm is used to estimate the degradation lifetime from the analysis datasets, with prior information obtained using least-squares estimation (LSE). The suitability and cost of the existing preventive maintenance policy are each evaluated based on the degradation lifetime and by using a minimum-repair block replacement model of time-based maintenance. Results: The process is applied to the degradation of the reverse osmosis (RO) membrane in a seawater reverse osmosis (SWRO) plant to evaluate the existing preventive maintenance policy. Conclusion: This method can be used for facilities or systems that undergo degradation, which can be evaluated in terms of cost and time, and is expected to support decision-making when devising an optimal preventive maintenance policy.
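The cost side of a minimum-repair block replacement policy is commonly modeled with a cost rate C(T) = (c_r + c_m·H(T))/T, where T is the replacement interval, c_r the replacement cost, c_m the minimal-repair cost, and H(T) the cumulative hazard (expected number of failures in [0, T]). The sketch below assumes a Weibull failure process, the textbook form of the model rather than necessarily the paper's exact formulation; the closed-form optimum holds for shape β > 1.

```python
def cost_rate(T, beta, eta, c_r, c_m):
    """Expected cost per unit time for block replacement with minimal repair,
    assuming Weibull cumulative hazard H(T) = (T/eta)**beta."""
    return (c_r + c_m * (T / eta) ** beta) / T

def optimal_interval(beta, eta, c_r, c_m):
    """Closed-form minimizer of cost_rate, valid for beta > 1:
    setting dC/dT = 0 gives T* = eta * (c_r / (c_m * (beta - 1)))**(1/beta)."""
    return eta * (c_r / (c_m * (beta - 1))) ** (1.0 / beta)
```

Comparing C(T) at the policy's current interval with C(T*) gives the kind of cost-based evaluation of an existing preventive maintenance policy that the study describes.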