• Title/Summary/Keyword: Computational Method


Optimization of impeller blade shape for high-performance and low-noise centrifugal pump (고성능 저소음 원심펌프 개발을 위한 임펠러 익형 최적설계)

  • Younguk Song;Seo-Yoon Ryu;Cheolung Cheong;Tae-hoon Kim;Junhyo Koo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.6
    • /
    • pp.519-528
    • /
    • 2023
  • The aim of this study was to enhance the flow rate and noise performance of a centrifugal pump in dishwashers by designing an optimized impeller shape through numerical and experimental investigations. To evaluate the performance of the target centrifugal pump, flow-rate experiments were conducted using a pump performance tester, and noise experiments were carried out in a semi-anechoic chamber with microphones and a reflecting wall behind the dishwasher. Using advanced computational fluid dynamics techniques, numerical simulations were performed to analyze the flow and aeroacoustic performance of the target centrifugal pump impeller. To achieve this, the Reynolds-Averaged Navier-Stokes equations and the Ffowcs Williams and Hawkings equations were used as the governing equations. To ensure the validity of the numerical methods, a thorough comparison of numerical results with experimental results was made. After the reliability of the numerical method was confirmed, the target centrifugal pump impeller was optimized. An improvement in flow rate was confirmed numerically, and a manufactured prototype of the optimized model was used for experimental investigation. Furthermore, it was observed that by applying the fan law, noise levels could be reduced effectively without reducing the flow rate.
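The fan law mentioned above relates rotational speed to flow rate, pressure rise, and (empirically) radiated noise. A minimal sketch, assuming textbook fan-affinity exponents and the commonly quoted 50·log10 speed scaling for sound pressure level, not any values from the paper:

```python
import math

def fan_law_scaling(q1, dp1, spl1, n1, n2):
    """Scale flow rate, pressure rise, and an empirical noise estimate
    when impeller speed changes from n1 to n2 (fan affinity laws).
    The 50*log10 SPL exponent is a textbook approximation, not a
    value taken from the paper."""
    ratio = n2 / n1
    q2 = q1 * ratio                          # flow rate ~ speed
    dp2 = dp1 * ratio ** 2                   # pressure rise ~ speed^2
    spl2 = spl1 + 50.0 * math.log10(ratio)   # empirical SPL scaling
    return q2, dp2, spl2
```

Halving the speed, for example, halves the flow rate, quarters the pressure rise, and lowers the estimated sound pressure level by about 15 dB under these assumptions.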

Study on the Selection of Optimal Operation Position Using AI Techniques (인공지능 기법에 의한 최적 운항자세 선정에 관한 연구)

  • Dong-Woo Park
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.6
    • /
    • pp.681-687
    • /
    • 2023
  • The optimal operation position selection technique is used to present the initial bow and stern draft with minimum resistance, that is, optimal fuel consumption efficiency, at a given operating displacement and speed. The main purpose of this study is to develop a program that selects the optimal operating position with maximum energy efficiency under given operating conditions, based on the effective power data of the target ship. The program was written as a Python-based Graphical User Interface (GUI) using artificial intelligence techniques so that ship owners can use it easily. The introduction of the target ship, the collection of effective power data through computational fluid dynamics (CFD), the training of the effective power model using deep learning, and the program for presenting the optimal operating position using a deep neural network (DNN) model are explained in detail. Ships are loaded and unloaded on each voyage, which changes the cargo load and thus the displacement. Shipowners want to know the operating position with minimum resistance, that is, maximum energy efficiency, for a given speed at each displacement. The developed GUI can be installed on the ship's tablet PC and used to determine the optimal operating position.
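The workflow described above can be sketched end-to-end: train a regressor on effective-power samples, then grid-search bow/stern drafts at a fixed speed for the minimum predicted power. Everything below is an illustrative assumption (the quadratic synthetic data standing in for the CFD results, the network size, the draft and speed ranges), and scikit-learn's MLPRegressor stands in for the paper's DNN:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for CFD-derived effective power data:
# inputs are (bow draft, stern draft, speed); output is effective power.
# The quadratic bowl below is purely illustrative.
X = rng.uniform([4.0, 4.0, 10.0], [8.0, 8.0, 18.0], size=(500, 3))
y = 0.5 * X[:, 2] ** 3 + 2.0 * (X[:, 0] - 6.0) ** 2 + 2.0 * (X[:, 1] - 6.0) ** 2

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                     random_state=0).fit(X, y)

# Grid-search candidate bow/stern drafts at a fixed speed and pick the
# combination with the lowest predicted effective power.
drafts = np.linspace(4.0, 8.0, 21)
grid = np.array([[b, s, 14.0] for b in drafts for s in drafts])
best = grid[np.argmin(model.predict(grid))]
print("optimal bow/stern draft at 14 kn:", best[:2])
```

A GUI such as the one described would wrap exactly this predict-and-minimize step behind input fields for displacement and speed.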

Numerical and experimental investigations on the aerodynamic and aeroacoustic performance of the blade winglet tip shape of the axial-flow fan (축류팬 날개 끝 윙렛 형상의 적용 유무에 따른 공기역학적 성능 및 유동 소음에 관한 수치적/실험적 연구)

  • Seo-Yoon Ryu;Cheolung Cheong;Jong Wook Kim;Byeong Il Park
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.1
    • /
    • pp.103-111
    • /
    • 2024
  • Axial-flow fans are used to transport fluids in relatively low-pressure flow regimes, and a variety of design variables are employed. The tip geometry of an axial fan plays a dominant role in its flow and noise performance, and two of the most prominent flow phenomena are the tip vortex and the tip leakage vortex that occur at the tip of the blade. Various studies have been conducted to control these three-dimensional flow structures, and winglet geometries have been developed in the aircraft field to suppress wingtip vortices and increase efficiency. In this study, a numerical and experimental investigation was conducted to analyze the effect of a winglet geometry applied to an axial fan blade for an air conditioner outdoor unit. The unsteady Reynolds-Averaged Navier-Stokes (RANS) equations and the Ffowcs Williams and Hawkings (FW-H) equation were solved numerically using computational fluid dynamics techniques to analyze the three-dimensional flow structure and flow noise, and the validity of the numerical method was verified by comparison with experimental results. The differences in the formation of the tip vortex and the tip leakage vortex depending on the winglet geometry were compared through the three-dimensional flow field, and the resulting aerodynamic performance was quantitatively compared. In addition, the effect of the winglet geometry on flow noise was evaluated by numerically simulating noise based on the predicted flow field. A prototype of the target fan model was built, and flow and noise experiments were conducted to evaluate the actual performance quantitatively.

Numerical and Experimental Study on the Coal Reaction in an Entrained Flow Gasifier (습식분류층 석탄가스화기 수치해석 및 실험적 연구)

  • Kim, Hey-Suk;Choi, Seung-Hee;Hwang, Min-Jung;Song, Woo-Young;Shin, Mi-Soo;Jang, Dong-Soon;Yun, Sang-June;Choi, Young-Chan;Lee, Gae-Goo
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.2
    • /
    • pp.165-174
    • /
    • 2010
  • The numerical modeling of the coal gasification reactions occurring in an entrained flow coal gasifier is presented in this study. The purpose of this study is to develop a reliable method of evaluating a coal gasifier with computational fluid dynamics (CFD), not only for basic design but also for further optimization of system operation. The coal gasification reaction consists of a series of processes such as water evaporation, coal devolatilization, heterogeneous char reactions, and gas-phase reactions of the coal off-gas in a two-phase, turbulent, radiation-participating medium. Both numerical and experimental studies were made for the 1.0 ton/day entrained flow coal gasifier installed at the Korea Institute of Energy Research (KIER). The comprehensive computer program in this study was built on a commercial CFD program by implementing several subroutines necessary for the gasification process, including the Eddy-Breakup model together with the harmonic mean approach for turbulent reaction. Further, a Lagrangian approach to particle trajectories was adopted, with consideration of the turbulence effect caused by the non-linearity of the drag force, etc. The program developed was successfully evaluated against experimental data such as profiles of temperature and gaseous species concentrations, together with the cold gas efficiency. Further intensive investigation was made in terms of the size distribution of the pulverized coal particles, the slurry concentration, and the design parameters of the gasifier. These parameters were compared and evaluated against each other through the calculated syngas production rate and cold gas efficiency, and appeared to directly affect gasification performance.
Considering the complexity of entrained coal gasification, even though the results of this study look physically reasonable and consistent in the parametric study, more effort at elaborate modeling, together with systematic evaluation against experimental data, is necessary to develop a reliable CFD-based design tool.
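The Eddy-Breakup model with the harmonic mean approach mentioned above blends a kinetic (Arrhenius) rate with a turbulent-mixing rate so that the slower process controls the net reaction rate. A minimal sketch with illustrative constants (none taken from the KIER gasifier model):

```python
import math

def effective_reaction_rate(T, fuel, oxidizer, eps_over_k,
                            A=2.0e8, Ea=1.0e5, R=8.314,
                            A_ebu=4.0, s=2.0):
    """Combine a kinetic (Arrhenius) rate and a turbulent-mixing
    (Eddy-Breakup) rate via the harmonic mean, so that whichever
    process is slower limits the net rate. All constants are
    illustrative placeholders, not calibrated values."""
    r_kin = A * math.exp(-Ea / (R * T)) * fuel * oxidizer   # chemistry-limited
    r_ebu = A_ebu * eps_over_k * min(fuel, oxidizer / s)    # mixing-limited
    return 1.0 / (1.0 / r_kin + 1.0 / r_ebu)                # harmonic-mean blend
```

At low temperature the Arrhenius term dominates the denominator (chemistry-limited); at high temperature the rate saturates toward the mixing-limited Eddy-Breakup value.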

A Hierarchical Cluster Tree Based Fast Searching Algorithm for Raman Spectroscopic Identification (계층 클러스터 트리 기반 라만 스펙트럼 식별 고속 검색 알고리즘)

  • Kim, Sun-Keum;Ko, Dae-Young;Park, Jun-Kyu;Park, Aa-Ron;Baek, Sung-June
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.3
    • /
    • pp.562-569
    • /
    • 2019
  • Raman spectroscopy has been receiving increased attention as a standoff explosive detection technique. In addition, there is a growing need for a fast search method that can identify the Raman spectrum of a measured chemical substance by comparing it with the known Raman spectra in a large database. By far the simplest and most widely used method is to calculate and compare the Euclidean distance between the given spectrum and the spectra in the database, but this is a non-trivial problem because of the inherent high dimensionality of the data. One of the most serious problems is the high computational complexity of searching for the closest spectrum. To overcome this problem, in our previous paper we presented the MPS Sort with Sorted Variance+PDS method as a fast algorithm for finding the closest spectrum. That algorithm uses two significant features of a vector, its mean value and its variance, to reject many unlikely spectra and save a great deal of computation time. In this paper, we present two new fast search algorithms. The PCA+PDS algorithm reduces the amount of computation by reducing the dimension of the data through a PCA transformation, with the same result as a distance calculation using the whole data. The Hierarchical Cluster Tree algorithm builds a binary hierarchical tree using the PCA-transformed spectral data; it then starts searching from the cluster closest to the input spectrum and avoids evaluating the many spectra that cannot be candidates, which saves a great deal of computation time. In the experiments, PCA+PDS shows about a 60.06% performance improvement over the MPS Sort with Sorted Variance+PDS, and the Hierarchical Cluster Tree shows about a 17.74% performance improvement over PCA+PDS. The results obtained confirm the effectiveness of the proposed algorithm.
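The PDS (Partial Distance Search) component common to both proposed methods can be shown in isolation: accumulate the squared distance term by term and abandon a candidate as soon as the partial sum exceeds the best distance found so far. This sketch is only the PDS kernel, not the MPS sort, PCA transform, or cluster tree from the paper:

```python
def nearest_spectrum_pds(query, library):
    """Find the library spectrum closest to `query` in Euclidean
    distance using Partial Distance Search (PDS): abandon a candidate
    as soon as its running squared distance exceeds the best complete
    distance found so far."""
    best_idx, best_d2 = -1, float("inf")
    for i, spec in enumerate(library):
        d2 = 0.0
        for q, s in zip(query, spec):
            d2 += (q - s) ** 2
            if d2 >= best_d2:        # early rejection: cannot win
                break
        else:                        # loop finished: new best match
            best_idx, best_d2 = i, d2
    return best_idx, best_d2
```

The result is identical to an exhaustive Euclidean search; only the number of per-dimension operations shrinks, which is where the reported speedups come from.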

Comparison of Algorithms for Generating Parametric Image of Cerebral Blood Flow Using ${H_2}^{15}O$ Positron Emission Tomography (PET) (${H_2}^{15}O$ PET을 이용한 뇌혈류 파라메트릭 영상 구성을 위한 알고리즘 비교)

  • Lee, Jae-Sung;Lee, Dong-Soo;Park, Kwang-Suk;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.5
    • /
    • pp.288-300
    • /
    • 2003
  • Purpose: To obtain regional blood flow and the tissue-blood partition coefficient from ${H_2}^{15}O$ PET time-activity curves, fitting of the parameters in the Kety model is conventionally accomplished by nonlinear least squares (NLS) analysis. However, NLS requires considerable computation time and is therefore impractical for the pixel-by-pixel analysis needed to generate parametric images of these parameters. In this study, we investigated several fast parameter estimation methods for parametric image generation and compared their statistical reliability and computational efficiency. Materials and Methods: These methods included linear least squares (LLS), linear weighted least squares (LWLS), linear generalized least squares (GLS), linear generalized weighted least squares (GWLS), weighted integration (WI), and a model-based clustering method (CAKS). ${H_2}^{15}O$ dynamic brain PET with a Poisson noise component was simulated using the numerical Zubal brain phantom. Error and bias in the estimation of rCBF and the partition coefficient, and the computation time in various noise environments, were estimated and compared. In addition, parametric images from ${H_2}^{15}O$ dynamic brain PET data acquired from 16 healthy volunteers under various physiological conditions were compared to examine the utility of these methods for real human data. Results: These fast algorithms produced parametric images with similar image quality and statistical reliability. When the CAKS and LLS methods were used in combination, computation time was significantly reduced, to less than 30 seconds for $128{\times}128{\times}46$ images on a Pentium III processor. Conclusion: Parametric images of rCBF and the partition coefficient with good statistical properties can be generated within a computation time short enough to be acceptable in clinical situations.
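The LLS method linearizes the one-tissue Kety model by integrating it, so each pixel reduces to one small linear regression instead of an iterative NLS fit. A minimal per-pixel sketch, assuming the standard integrated form ct(t) = K1·∫ca − k2·∫ct with trapezoidal integration:

```python
import numpy as np

def kety_lls(t, ca, ct):
    """Estimate K1 (rCBF) and k2 from a tissue time-activity curve `ct`
    and arterial input `ca` by linear least squares on the integrated
    one-tissue Kety equation: ct(t) = K1*int(ca) - k2*int(ct).
    The partition coefficient is then K1 / k2."""
    # cumulative trapezoidal integrals of input and tissue curves
    int_ca = np.concatenate([[0.0], np.cumsum(np.diff(t) * (ca[1:] + ca[:-1]) / 2)])
    int_ct = np.concatenate([[0.0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2)])
    A = np.column_stack([int_ca, -int_ct])        # linear design matrix
    k1, k2 = np.linalg.lstsq(A, ct, rcond=None)[0]
    return k1, k2
```

Because the per-pixel cost is one 2-column least-squares solve, sweeping this over a $128{\times}128{\times}46$ volume is what makes the sub-30-second times reported above plausible.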

Fast Full Search Block Matching Algorithm Using The Search Region Subsampling and The Difference of Adjacent Pixels (탐색 영역 부표본화 및 이웃 화소간의 차를 이용한 고속 전역 탐색 블록 정합 알고리듬)

  • Cheong, Won-Sik;Lee, Bub-Ki;Lee, Kyeong-Hwan;Choi, Jung-Hyun;Kim, Kyeong-Kyu;Kim, Duk-Gyoo;Lee, Kuhn-Il
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.11
    • /
    • pp.102-111
    • /
    • 1999
  • In this paper, we propose a fast full search block matching algorithm using search region subsampling and the differences between adjacent pixels in the current block. In the proposed algorithm, we calculate a lower bound on the mean absolute difference (MAD) at each search point using the MAD value of a neighboring search point and the differences between adjacent pixels in the current block, and then perform the block matching process only at the search points that require it. Calculating the lower bound at a search point requires the MAD value of a neighboring search point, so the search points are subsampled by a factor of 4 and the MAD values at the subsampled search points are calculated by the block matching process. The lower bounds of MAD at the remaining search points are then calculated from the MAD values of the neighboring subsampled search points and the differences between adjacent pixels in the current block. Finally, we discard the search points whose lower bound of MAD exceeds the reference MAD, which is the minimum of the MAD values at the subsampled search points, and perform the block matching process only at the remaining search points. By doing so, we reduce the computational complexity drastically while keeping the motion-compensated error performance the same as that of the full search block matching algorithm (FSBMA). Experimental results show that the proposed method has much lower computational complexity than FSBMA while its motion-compensated error performance remains identical.
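The rejection idea above — skip any search point whose MAD lower bound cannot beat the current best — can be sketched with the classical successive-elimination bound |ΣB − ΣW|/N² ≤ MAD standing in for the paper's adjacent-pixel bound; any valid lower bound leaves the result identical to exhaustive full search:

```python
import numpy as np

def block_match(block, ref):
    """Full-search block matching over `ref` for the N x N `block`,
    skipping search points where the successive-elimination lower
    bound |sum(B) - sum(W)| / N^2 <= MAD already exceeds the best MAD.
    This bound is a stand-in for the paper's adjacent-pixel bound;
    either way the result equals exhaustive full search."""
    n = block.shape[0]
    b = block.astype(float)
    r = ref.astype(float)
    b_sum = b.sum()
    # sliding-window sums of ref via a 2-D cumulative sum (integral image)
    ii = np.pad(r.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    win = ii[n:, n:] - ii[:-n, n:] - ii[n:, :-n] + ii[:-n, :-n]
    best_mad, best_mv = np.inf, (0, 0)
    for y in range(r.shape[0] - n + 1):
        for x in range(r.shape[1] - n + 1):
            if abs(b_sum - win[y, x]) / b.size >= best_mad:
                continue          # lower bound says this point cannot win
            m = np.abs(b - r[y:y + n, x:x + n]).mean()
            if m < best_mad:
                best_mad, best_mv = m, (x, y)
    return best_mv, best_mad
```

Because the bound never exceeds the true MAD, skipped points can never hold the optimum, which is exactly the property the paper relies on to match FSBMA quality at a fraction of the cost.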


Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. High-dimensional data requires many computations, which can lead to high computational cost and overfitting in the model, so a dimension reduction process is necessary to improve the model's performance. Diverse methods have been proposed, from merely lessening the noise of the data, such as misspellings or informal text, to including semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data from the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we expect that words similar to them likewise have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules, and constructing word embeddings based on Word2Vec.
To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to find similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally remove words that are similar to the low-information-gain words and build word embeddings. Finally, the filtered text and word embeddings are fed to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets, and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes with a helpful-vote ratio over 70% were classified as helpful reviews; since Yelp shows only the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that use all the words, and showed that one of the proposed methods outperforms the embeddings with all the words. Removing unimportant words improves performance; however, removing too many words lowers it. Future research should consider diverse preprocessing methods and an in-depth analysis of word co-occurrence for measuring similarity values among words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed methods, making it possible to explore the combinations between word embedding methods and elimination methods.

Building Change Detection Methodology in Urban Area from Single Satellite Image (단일위성영상 기반 도심지 건물변화탐지 방안)

  • Seunghee Kim;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_4
    • /
    • pp.1097-1109
    • /
    • 2023
  • Urban areas undergo frequent small-scale changes to individual buildings, so an existing urban building database requires periodic updating to maintain its usability. However, there are limits to collecting data on building changes over a wide urban region. In this study, we examine the possibility of detecting building changes and updating a building database using satellite images, which can capture a wide urban region in a single image. For this purpose, building areas in a satellite image are first extracted by projecting the 3D coordinates of building corners, available in the building database, onto the image. The building areas are then divided into roof and facade areas. By comparing the textures of the projected roof areas, building changes such as a height change or building removal can be detected. New height values are estimated by adjusting building heights until the projected roofs align with the actual roofs observed in the image. If a projected roof appears where no building is observed in the image, it corresponds to a demolished building. New buildings are identified by checking buildings in the original image onto which no roof or facade areas are projected. Based on these results, the building database is updated in three categories: height update, building deletion, and new building creation. This method was tested with a KOMPSAT-3A image over Incheon Metropolitan City and the publicly available Incheon building database. Building change detection and a building database update were carried out, and the updated building corners were then projected onto another KOMPSAT-3 image. It was confirmed that the building areas projected using the updated building information agreed very well with the actual buildings in the image. Through this study, the possibility of semi-automatic building change detection and building database updating based on a single satellite image was confirmed.
In the future, follow-up research is needed on techniques to further automate the computations of the proposed method.
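The corner-projection and height-adjustment steps can be sketched with a generic 3×4 projection matrix; an actual KOMPSAT image would instead be projected through its RPC sensor model, so the matrix and coordinates below are purely illustrative:

```python
import numpy as np

def project_corners(corners_xyz, P):
    """Project 3-D building-corner coordinates into image pixels with
    a 3x4 projection matrix P (a pinhole stand-in for the RPC model
    used with real satellite imagery)."""
    pts = np.hstack([corners_xyz, np.ones((len(corners_xyz), 1))])
    uvw = pts @ P.T
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

def update_height(roof_xy_observed, corner_xyz, P, heights):
    """Pick the candidate height whose projected corner lands closest
    to the observed roof corner: a toy version of the height
    adjustment described in the abstract."""
    errs = [np.linalg.norm(
                project_corners(np.array([[corner_xyz[0], corner_xyz[1], h]]), P)[0]
                - roof_xy_observed)
            for h in heights]
    return heights[int(np.argmin(errs))]
```

In the full workflow this residual would be evaluated over whole roof polygons with texture comparison rather than a single corner, but the projection-then-adjust loop is the same.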

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong;SangYoun Kim;SungKu Heo;Shahzeb Tariq;MinHyeok Shin;ChangKyoo Yoo
    • Korean Chemical Engineering Research
    • /
    • v.61 no.4
    • /
    • pp.523-541
    • /
    • 2023
  • As accessibility to 3D printers increases, exposure to the chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of toxicity prediction using in silico techniques is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in their molecular descriptors. First, the MissForest algorithm was used to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four different machine learning models (decision tree, random forest, XGBoost, SVM), a machine learning (ML)-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated with Tree-SHAP (SHapley Additive exPlanations), one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure dataset approximately 2.5-fold compared to the existing data. Based on the imputed molecular descriptor dataset, the developed data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Lastly, Tree-SHAP analysis demonstrated that the data-centric QSAR model achieved its high prediction performance by identifying the key molecular descriptors highly correlated with the toxicity indices. The proposed QSAR model based on the data-centric XAI approach can therefore be extended to predict the toxicity of potential pollutants in emerging printing chemicals and in chemical, semiconductor, or display processes.
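The impute-then-model pipeline described above can be sketched with scikit-learn, whose IterativeImputer with a random-forest estimator approximates MissForest; the descriptors and the Log P-like target below are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for molecular descriptors with 20% missing values;
# the linear target mimics a Log P-style property.
X = rng.normal(size=(300, 6))
y = X[:, 0] * 2.0 + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=300)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan

# MissForest-style imputation: iterative per-feature regression with a
# random forest as the estimator (scikit-learn's approximation).
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=30, random_state=0),
    max_iter=5, random_state=0)
X_imp = imputer.fit_transform(X_missing)

# QSAR-style model trained on the imputed descriptors.
Xtr, Xte, ytr, yte = train_test_split(X_imp, y, random_state=0)
qsar = RandomForestRegressor(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("R^2 on held-out set:", round(qsar.score(Xte, yte), 3))
```

The Tree-SHAP validation step from the paper would follow by running a SHAP tree explainer over `qsar` to rank descriptor contributions; that step is omitted here to keep the sketch dependency-free.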