• Title/Summary/Keyword: Optimization Technique


Study on Radionuclide Migration Modelling for a Single Fracture in Geologic Medium : Characteristics of Hydrodynamic Dispersion Diffusion Model and Channeling Dispersion Diffusion Model (단일균열 핵종이동모델에 관한 연구 -수리분산확산모델과 국부통로확산모델의 특성-)

  • Keum, D.K.;Cho, W.J.;Hahn, P.S.;Park, H.H.
    • Nuclear Engineering and Technology
    • /
    • v.26 no.3
    • /
    • pp.401-410
    • /
    • 1994
  • Two radionuclide migration models for a single fracture in a geologic medium, the hydrodynamic dispersion diffusion model (HDDM) and the channeling dispersion diffusion model (CDDM), were validated through laboratory-scale migration experiments with tracers in an artificial granite fracture. The tracers used were Uranine and sodium lignosulfonate, both known as nonsorbing materials. The flow rate ranged from 0.4 to 1.5 cc/min. The model parameters were estimated by an optimization technique, and the theoretical breakthrough curves were compared with the experimental data. From the experiments it was deduced that surface sorption did not play an important role for either tracer, while diffusion of Uranine into the rock matrix turned out to be an important mass transfer mechanism. The parameters characterizing rock matrix diffusion in the two models agreed well. The simulated results showed that the flow rate alone could not distinguish the CDDM from the HDDM quantitatively. On the other hand, variation of the fracture length affected the two models to different degrees: the dispersion of the CDDM breakthrough curve was amplified more than that of the HDDM when the fracture length was increased. The good agreement between the models and the experimental data confirmed that both models are very useful for predicting migration through a single fracture.
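
The parameter estimation step described above fits theoretical breakthrough curves to measured tracer concentrations. As a minimal, hedged illustration of that kind of fit (not the authors' HDDM/CDDM equations, which are not given in the abstract), the Python sketch below fits the classic one-dimensional advection-dispersion breakthrough solution to hypothetical concentration data with scipy; the fracture length, tracer data, and the parameter names `v` and `D` are assumptions for illustration only.

```python
# Minimal sketch: estimate breakthrough-curve parameters by least-squares optimization.
# The erfc solution below is a stand-in for the paper's HDDM/CDDM models; all data
# values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

L_FRACTURE = 0.3  # assumed fracture length (m)

def breakthrough(t, v, D):
    """Relative outlet concentration C/C0 at time t for velocity v and dispersion D."""
    return 0.5 * erfc((L_FRACTURE - v * t) / (2.0 * np.sqrt(D * t)))

# Hypothetical observed breakthrough data: time (s) and relative concentration.
t_obs = np.array([600, 900, 1200, 1500, 1800, 2400, 3600], dtype=float)
c_obs = np.array([0.00, 0.02, 0.19, 0.50, 0.76, 0.97, 1.00])

# Estimate the flow velocity and dispersion coefficient from the data.
(v_fit, D_fit), _ = curve_fit(breakthrough, t_obs, c_obs,
                              p0=[2e-4, 1e-6], bounds=(0.0, np.inf))
print(f"fitted v = {v_fit:.2e} m/s, D = {D_fit:.2e} m^2/s")
```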


A Development of Hydrological Model Calibration Technique Considering Seasonality via Regional Sensitivity Analysis (지역적 민감도 분석을 이용하여 계절성을 고려한 수문 모형 보정 기법 개발)

  • Lee, Ye-Rin;Yu, Jae-Ung;Kim, Kyungtak;Kwon, Hyun-Han
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.3
    • /
    • pp.337-352
    • /
    • 2023
  • In general, a rainfall-runoff model parameter set is optimized on the entire record to yield a single parameter set. However, precipitation in Korea varies strongly by season, and this variability is expected to worsen under climate change, so hydrological modelling needs to account for seasonal characteristics. In this study, we conducted a regional sensitivity analysis (RSA) of the conceptual rainfall-runoff model GR4J for the Soyanggang dam basin and clustered the RSA results together with hydrometeorological data using a self-organizing map (SOM). To reflect climatic characteristics in parameter estimation, the data were partitioned according to the clustering, and a calibration approach for the rainfall-runoff model was developed by comparing objective functions within a global optimization method. Calibration performance was evaluated with statistical measures. As a result, it was confirmed that model performance during the cold period (November~April), when flow is relatively low, was improved. This approach is expected to improve the performance and predictability of hydrological models for regions with large seasonal precipitation deviations, such as monsoon climates.
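
As a hedged sketch of the calibration idea in this abstract (calibrate separate parameter sets for seasonally grouped periods with a global optimizer), the code below uses a toy linear-reservoir model in place of GR4J, scipy's differential evolution as the global optimization method, and the Nash-Sutcliffe efficiency as the objective; the data, the wet/cold split, and the parameter names are illustrative assumptions rather than the study's actual setup.

```python
# Sketch: season-wise calibration of a toy rainfall-runoff model with a global optimizer.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 5.0, size=365)                       # hypothetical daily rainfall (mm)
q_obs = np.convolve(precip, [0.3, 0.2, 0.1], mode="same")    # synthetic "observed" flow

def simulate(params, p):
    """Toy reservoir: runoff coefficient c fills storage, recession k releases it."""
    k, c = params
    s, q = 0.0, np.zeros_like(p)
    for i, pi in enumerate(p):
        s += c * pi
        q[i] = k * s
        s -= q[i]
    return q

def neg_nse(params, p, q):
    """Negative Nash-Sutcliffe efficiency, to be minimized."""
    q_sim = simulate(params, p)
    return -(1.0 - np.sum((q - q_sim) ** 2) / np.sum((q - q.mean()) ** 2))

# Calibrate a separate parameter set for each seasonal period (split here by calendar day).
periods = {"wet (May-Oct)": np.r_[120:300], "cold (Nov-Apr)": np.r_[0:120, 300:365]}
for name, idx in periods.items():
    res = differential_evolution(neg_nse, bounds=[(0.01, 1.0), (0.1, 1.0)],
                                 args=(precip[idx], q_obs[idx]), seed=1)
    print(f"{name}: k={res.x[0]:.3f}, c={res.x[1]:.3f}, NSE={-res.fun:.3f}")
```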

Sensitivity Analysis of Wake Diffusion Patterns in Mountainous Wind Farms according to Wake Model Characteristics on Computational Fluid Dynamics (전산유체역학 후류모델 특성에 따른 산악지형 풍력발전단지 후류확산 형태 민감도 분석)

  • Kim, Seong-Gyun;Ryu, Geon Hwa;Kim, Young-Gon;Moon, Chae-Joo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.2
    • /
    • pp.265-278
    • /
    • 2022
  • The global energy paradigm is rapidly shifting toward carbon neutrality, and wind energy is positioning itself as a leading renewable power source. The success of onshore and offshore wind energy projects hinges on securing economic feasibility, which in turn depends on securing high-quality wind resources and on the optimal arrangement of wind turbines. When constructing a wind farm, the optimal arrangement of turbines with respect to the prevailing wind direction is important, because it minimizes the wake effect caused by the flow passing through upwind turbines. The accuracy of wake-effect predictions is determined by the wake model and the modeling technique used to simulate it. Therefore, in this paper, using the commercial CFD model WindSim, the wake diffusion pattern of a proposed onshore wind farm located in mountainous complex terrain in South Korea is analyzed through a sensitivity study of each wake model, and the results are intended to serve as basic research data for future wind energy projects in complex terrain.

Guide for Processing of Textured Piezoelectric Ceramics Through the Template Grain Growth Method

  • Temesgen Tadeyos Zate;Jeong-Woo Sun;Nu-Ri Ko;Hye-Lim Yu;Woo-Jin Choi;Jae-Ho Jeon;Wook Jo
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers
    • /
    • v.36 no.4
    • /
    • pp.341-350
    • /
    • 2023
  • The templated grain growth (TGG) method has gained significant attention for its ability to produce highly textured piezoelectric ceramics with significantly enhanced performance, making it a promising method for transducer and actuator applications. However, the texturing process using the TGG method requires the optimization of multiple steps, which can be challenging for beginners in this field. Therefore, in this tutorial, we provide an overview of the TGG method based mainly on our previously published works, covering its various processing steps, such as synthesizing anisotropically shaped templates with controlled size and size distribution using the molten salt synthesis technique, tape casting, and identifying key factors for proper alignment of the templates in the target matrix system. Our goal is to provide a resource that can serve as a basic reference for researchers and engineers looking to improve their understanding and utilization of the TGG method for producing textured piezoelectric ceramics.

Explainable Artificial Intelligence (XAI) Surrogate Models for Chemical Process Design and Analysis (화학 공정 설계 및 분석을 위한 설명 가능한 인공지능 대안 모델)

  • Yuna Ko;Jonggeol Na
    • Korean Chemical Engineering Research
    • /
    • v.61 no.4
    • /
    • pp.542-549
    • /
    • 2023
  • With the growing interest in surrogate modeling, there has been continuous research on simulating nonlinear chemical processes with data-driven machine learning. However, the opaque nature of machine learning models, which limits their interpretability, poses a challenge for practical application in industry. Therefore, this study analyzes chemical processes using Explainable Artificial Intelligence (XAI), which improves interpretability while maintaining model accuracy. Whereas conventional sensitivity analysis of chemical processes has been limited to calculating and ranking the sensitivity indices of variables, we propose a methodology that uses XAI not only to perform global and local sensitivity analysis but also to examine the interactions among variables and thereby gain physical insights from the data. For the ammonia synthesis process, the target process of the case study, we set the temperature of the preheater leading to the first reactor and the split ratio of the cold shot to the three reactors as process variables. By integrating Matlab and Aspen Plus, we obtained data on ammonia production and the maximum temperatures of the three reactors while systematically varying the process variables. We then trained tree-based models and performed sensitivity analysis on the most accurate model using the SHAP technique, one of the XAI methods. The global sensitivity analysis showed that the preheater temperature had the greatest effect, and the local sensitivity analysis provided insights for defining the ranges of the process variables so as to improve productivity and prevent overheating. By constructing surrogate models for chemical processes and using XAI for sensitivity analysis, this work contributes both quantitative and qualitative feedback for process optimization.
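
As a minimal, hedged sketch of the workflow this abstract describes (fit a tree-based surrogate to process data, then rank variable influence with SHAP), the code below trains a gradient-boosting model on synthetic data and computes mean absolute SHAP values; the variable names and the response function are illustrative assumptions, not the authors' Aspen Plus results.

```python
# Sketch: tree-based surrogate model + SHAP global sensitivity ranking on synthetic data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 500
X = pd.DataFrame({
    "preheater_T":   rng.uniform(200.0, 350.0, n),  # hypothetical preheater temperature (degC)
    "split_ratio_1": rng.uniform(0.1, 0.5, n),      # hypothetical cold-shot split ratios
    "split_ratio_2": rng.uniform(0.1, 0.5, n),
})
# Hypothetical response standing in for simulated ammonia production.
y = (0.8 * X["preheater_T"] - 50.0 * (X["split_ratio_1"] - 0.3) ** 2
     + 20.0 * X["split_ratio_2"] + rng.normal(0.0, 5.0, n))

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# SHAP attributions: per-sample values give local sensitivity; their mean magnitude
# per feature gives a global importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
for name, imp in sorted(zip(X.columns, np.abs(shap_values).mean(axis=0)),
                        key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {imp:.2f}")
```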

$V_H$ Gene Expression and its Regulation on Several Different B Cell Population by using in situ Hybridization technique

  • Jeong, Hyun-Do
    • Journal of fish pathology
    • /
    • v.6 no.2
    • /
    • pp.111-122
    • /
    • 1993
  • The mechanism by which $V_H$ region gene segments are selected in B lymphocytes is not known. Moreover, evidence for both random and nonrandom expression of $V_H$ genes in mature B cells has been presented previously. In this report, the technique of in situ hybridization allowed us to analyze the expressed $V_H$ gene families in normal B lymphocytes at the single-cell level. The analysis of normal B cells in this study eliminated any possible bias resulting from the transformation protocols used previously and minimized the limitations associated with sampling size. Therefore, an accurate measure of the functional and expressed $V_H$ gene repertoire in B lymphocytes could be made. One of the most important controls for the optimization of in situ hybridization is to establish the probe concentration and washing stringency, because the degree of nucleotide sequence similarity between different families can in some cases be as high as 70%. When the radioactive $C{\mu}$ and $V_{H}J558$ RNA probes are tested on LPS-stimulated adult spleen cells, $2{\sim}4{\times}10^6$ cpm/slide shows low background and a reasonable frequency of specific positive cells. For the washing condition, 40~50% formamide at $54^{\circ}C$ is found to be optimal for the $C{\mu}$, $V_{H}S107$, and $V_{H}J558$ probes. The results clearly demonstrate that the expression level of each $V_H$ gene family depends on the complexity, or size, of that family. These findings also extend to the level of $V_H$ gene family expression in separated bone marrow B cells at various stages of differentiation, and show no preferential utilization of a specific $V_H$ gene family. Thus, the utilization of $V_H$ gene segments in B lymphocytes of adult BALB/c mice is random and is not regulated or changed during B cell differentiation.


Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.131-150
    • /
    • 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias such as Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases lowers the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent. An antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made in the third stage to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification over all noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected. However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed in antecedent search in the text. We are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns the sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified. The performance of our system therefore depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor; this is based on binary classification using a regular SVM. The experiment showed that our system's performance is F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. It is expected that future work enabling the system to utilize semantic information can lead to a significant performance improvement.
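
The abstract names a structural SVM trained with a modified Pegasos algorithm. The sketch below shows only the plain binary Pegasos subgradient update on synthetic feature vectors, as a hedged illustration of that optimization style; the structural (sequence-labeling) extension, the authors' modifications, and their noun-phrase features are not reproduced here.

```python
# Sketch: plain binary Pegasos (stochastic subgradient SVM) on synthetic data.
# The paper uses a *structural* SVM over noun-phrase sequences; this only shows
# the underlying Pegasos-style update rule.
import numpy as np

def pegasos(X, y, lam=0.01, epochs=20, seed=0):
    """Train a linear SVM with Pegasos. X: (n, d) features, y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)               # step size schedule 1/(lambda * t)
            if y[i] * X[i].dot(w) < 1.0:        # hinge loss active: subgradient includes y_i x_i
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:                               # only the L2 regularizer contributes
                w = (1.0 - eta * lam) * w
    return w

# Tiny synthetic check with two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+1.0, 1.0, (100, 5)), rng.normal(-1.0, 1.0, (100, 5))])
y = np.array([1] * 100 + [-1] * 100)
w = pegasos(X, y)
print("training accuracy:", np.mean(np.sign(X.dot(w)) == y))
```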

Preparation of EVA/Intumescent/Nano-Clay Composite with Flame Retardant Properties and Cross Laminated Timber (CLT) Application Technology (난연특성을 가지는 EVA/Intumescent/나노클레이 복합재료 제조 및 교호집성재(Cross Laminated Timber) 적용 기술)

  • Choi, Yo-Seok;Park, Ji-Won;Lee, Jung-Hun;Shin, Jae-Ho;Jang, Seong-Wook;Kim, Hyun-Joong
    • Journal of the Korean Wood Science and Technology
    • /
    • v.46 no.1
    • /
    • pp.73-84
    • /
    • 2018
  • Recently, the importance of flame retardant treatment technology has been emphasized due to the increase in urban fire accidents and fire damage caused by building exterior materials. In the use of wood-based building materials in particular, flame retardant treatment technology is regarded as even more important. An intumescent system is a non-halogen flame retardant treatment technology that achieves flame retardancy through foaming and the formation of a carbonization layer. To apply the intumescent system, a composite material was prepared using ethylene vinyl acetate (EVA) as the matrix. To enhance the flame retardant properties of the intumescent system, a nano-clay was applied together with it. The composite materials combining the intumescent system and nano-clay technology were processed into sheet-like test specimens, and a new structure of cross laminated timber (CLT) with improved flame retardant properties was then fabricated. In the evaluation of the combustion characteristics of the composite materials using the intumescent system, it was confirmed that the maximum heat release was reduced efficiently. Depending on the structure attached to the surface, the CLT showed two stages of combustion. It was also confirmed that the maximum calorific value decreased significantly during the deep burning process. These characteristics are expected to delay combustion spread during the burning of CLT. To improve performance further, a flame retardant treatment technique for the surface veneer and an optimization technique for applying the composite material are required. It is expected that a CLT structure with improved fire characteristics can be developed.

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized far less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT technology has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be fragile in practice because whether the patterns found are suitable for trading is a separate question. When such studies find a meaningful pattern, they find a point that matches the pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be large disparities with reality. Existing research tries to find patterns with price prediction power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some patterns have price predictability, no performance in the actual market has been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system. Only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation reflects real trading because performance is measured assuming that both the buy and the sell are actually executed. We tested three ways to calculate the turning points. The first method, the minimum change rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low line zig-zag, the high price that meets the n-day high price line is taken as the peak price, and the low price that meets the n-day low price line is taken as the valley price. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as the peak price, and a central low price lower than the n low prices on its left and right is taken as the valley price. The swing wave method was superior to the other methods in the tests. This is interpreted to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was far too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which treats the test section and the application section separately, so that we could respond appropriately to market changes. In this study, we optimize at the level of a stock portfolio because optimizing the variables for each individual stock risks over-optimization. Therefore, we selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap stock portfolio was the most successful and the high-volatility portfolio was second best. This shows that some price volatility is needed for patterns to form, but higher volatility is not always better.
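
Of the three turning-point methods compared above, the swing wave method (a bar is a peak if its price is higher than the n bars on each side, a valley if lower) is the easiest to state in code. The sketch below is a hedged illustration of that rule on a hypothetical price series; the window size n and the strict-extremum tie handling are assumptions, and the M & W pattern matching and GA layers of the paper are not reproduced.

```python
# Sketch: swing wave turning points on a hypothetical price series.
import numpy as np

def swing_turning_points(prices, n=3):
    """Return (peak_indices, valley_indices): strict local extrema over n bars each side."""
    peaks, valleys = [], []
    for i in range(n, len(prices) - n):
        window = prices[i - n:i + n + 1]
        if prices[i] == window.max() and (window == prices[i]).sum() == 1:
            peaks.append(i)
        elif prices[i] == window.min() and (window == prices[i]).sum() == 1:
            valleys.append(i)
    return peaks, valleys

rng = np.random.default_rng(7)
prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 200))   # random-walk stand-in for a stock
peaks, valleys = swing_turning_points(prices, n=5)
print(f"found {len(peaks)} peaks and {len(valleys)} valleys")
# A sequence of five alternating turning points would then be matched against the
# M & W patterns before a trade is triggered.
```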

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality. Further, the embedded system is called a multi-mode multi-task embedded system if it additionally supports multiple tasks to be executed in a mode. In this paper, we address a HW/SW partitioning problem, namely the HW/SW partitioning of multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimum total system cost for the allocation/mapping of processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. The key to solving the problem successfully is how fully the potential parallelism among module executions is exploited. However, due to the inherently very large search space of the parallelism, and in order to keep schedulability analysis simple, prior HW/SW partitioning methods have not been able to fully exploit the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques which solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of the modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise refinement partitioning technique for single-mode multi-task applications. The proposed technique is then extended to solve the HW/SW partitioning problem of multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single- and multi-mode multi-task applications, respectively, compared with the conventional method.
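
As a hedged, greatly simplified illustration of the decision the partitioning problem involves (map each functional module to a hardware or software implementation so that the timing constraint is met at minimal cost), the sketch below greedily moves the most cost-effective modules of a single task to hardware until its deadline is satisfied; the module data, the cost model, and the serial-execution assumption are illustrative only and ignore the parallelism, resource sharing, and multi-mode aspects that the paper actually addresses.

```python
# Sketch: toy single-task HW/SW partitioning by a greedy time-saved-per-cost rule.
# SW implementations are assumed free and execution serial; this is not the
# paper's stepwise refinement technique.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    sw_time: float   # execution time in software (ms)
    hw_time: float   # execution time if mapped to hardware (ms)
    hw_cost: float   # cost (e.g., area) of the hardware implementation

def partition(modules, deadline):
    """Greedily map modules to HW, best time-saved-per-cost first, until the deadline holds."""
    mapping = {m.name: "SW" for m in modules}
    total_time = sum(m.sw_time for m in modules)
    cost = 0.0
    for m in sorted(modules, key=lambda m: (m.sw_time - m.hw_time) / m.hw_cost,
                    reverse=True):
        if total_time <= deadline:
            break
        mapping[m.name] = "HW"
        total_time -= m.sw_time - m.hw_time
        cost += m.hw_cost
    return mapping, total_time, cost, total_time <= deadline

mods = [Module("fft", 8.0, 1.5, 4.0), Module("fir", 5.0, 1.0, 2.0),
        Module("ctrl", 2.0, 1.8, 3.0)]
mapping, t, c, feasible = partition(mods, deadline=9.0)
print(mapping, f"time={t} ms, cost={c}, feasible={feasible}")
```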