• Title/Summary/Keyword: Performance Optimization

Search results: 5,433

Evaluation of Soil Parameters Using Adaptive Management Technique (적응형 관리 기법을 이용한 지반 물성 값의 평가)

  • Koo, Bonwhee;Kim, Taesik
    • Journal of the Korean GEO-environmental Society
    • /
    • v.18 no.2
    • /
    • pp.47-51
    • /
    • 2017
  • In this study, the inverse-analysis optimization algorithm at the core of the adaptive management technique was adopted to update soil engineering properties based on the ground response observed during construction. The adaptive management technique is a framework in which construction and design procedures are adjusted based on observations and measurements made as construction proceeds. To evaluate the performance of the adaptive management technique, numerical simulations of triaxial tests and of a synthetic deep excavation were conducted with the Hardening Soil model. To conduct the analysis effectively, the most influential of the model's parameters were selected by composite scaled sensitivity analysis. Results from undrained triaxial tests on soft Chicago clays were used for parameter calibration. The synthetic deep excavation was simulated assuming that the soil engineering parameters obtained from the triaxial simulation represent the actual field condition; these values served as the reference values. The observation used in the synthetic deep excavation simulations was the horizontal displacement of the support wall, which has the highest composite scaled sensitivity among the possible observations. The horizontal displacements of the support wall computed with various initial soil properties converged to the reference displacement when the adaptive management technique was applied.
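The composite scaled sensitivity used above to rank parameters can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it uses Hill's definition with unit observation weights, central finite differences for the derivatives, and a toy two-parameter "ground response" model.

```python
import numpy as np

def composite_scaled_sensitivity(model, params, j, eps=1e-6):
    """css_j = sqrt( (1/N) * sum_i ((dy_i/db_j) * b_j)^2 ) with unit weights;
    the derivative dy/db_j is approximated by central finite differences."""
    b = params[j]
    up = list(params); up[j] = b + eps
    dn = list(params); dn[j] = b - eps
    dy_db = (np.asarray(model(up)) - np.asarray(model(dn))) / (2 * eps)
    return float(np.sqrt(np.mean((dy_db * b) ** 2)))

# Toy model: two observations driven by two parameters (illustrative only).
model = lambda p: [2.0 * p[0], p[0] + 3.0 * p[1]]
css0 = composite_scaled_sensitivity(model, [1.0, 1.0], 0)
css1 = composite_scaled_sensitivity(model, [1.0, 1.0], 1)
# The parameter with the higher value is the one the observations constrain best.
```

In an inverse-analysis workflow, only the parameters with high composite scaled sensitivity would be carried into the calibration, as the abstract describes.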

Rapid HPLC Method for the Simultaneous Determination of Eight Urinary Metabolites of Toluene, Xylene and Styrene

  • Lee, Cheol-Woo;Lee, Jeong-Mi;Lee, Jae-Hyun;Eom, Han-Young;Kim, Min-Kyung;Suh, Joon-Hyuk;Yeom, Hye-Sun;Kim, Un-Yong;Youm, Jeong-Rok;Han, Sang-Beom
    • Bulletin of the Korean Chemical Society
    • /
    • v.30 no.9
    • /
    • pp.2021-2026
    • /
    • 2009
  • Toluene, xylene and styrene are volatile organic solvents commonly used as mixtures in many industries. Because these solvents are metabolized and then excreted in urine, their urinary metabolites are regarded as biomarkers of occupational exposure. Therefore, a simple, rapid, yet reliable analytical method for determining the metabolites is required for accurate biological monitoring. In the present study, a simple and rapid HPLC-UV method was developed for the simultaneous determination of eight major metabolites of toluene, xylene and styrene: hippuric acid (HA), mandelic acid (MA), o-, m- and p-methylhippuric acids (o-, m- and p-MHAs), and o-, m- and p-cresols. A monolithic column was employed as the stationary phase, and several conditions, including flow rate, mobile-phase composition and column temperature, were varied to optimize the chromatographic resolution. All eight metabolites were successfully resolved within 5 minutes in 10% aqueous ethanol containing 0.3% acetic acid and 1.6% β-cyclodextrin, using a flow-rate gradient of 1.0-5.0 mL/min at 25 °C. The method was validated for linearity and for intra- and inter-day accuracy and precision. Linearity was observed with correlation coefficients of 0.9998 for HA, 0.9999 for MA, 0.9989 for o-MHA, 0.9998 for m-MHA, 0.9991 for p-MHA, 0.9997 for o-cresol, 0.9998 for m-cresol, and 0.9986 for p-cresol. The intra- and inter-day precision was below 5.89% (CV), and the accuracy ranged from 92.95 to 106.62%. Validity was further confirmed by analysis of reference samples prepared under the inter-laboratory quality assurance program of the Korea Occupational Safety and Health Agency (KOSHA, Seoul, Korea); all measured concentrations of the analytes agreed with the certified values.
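The linearity and precision figures quoted above come from calibration curves and replicate injections. As a minimal sketch of that validation arithmetic (the concentration/peak-area values below are made up for illustration, not data from the paper):

```python
import numpy as np

# Hypothetical calibration points for one metabolite: concentration vs. peak area.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # mg/L (illustrative)
area = np.array([12.1, 24.3, 48.0, 97.1, 193.8])  # arbitrary units (illustrative)

r = np.corrcoef(conc, area)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(conc, area, 1)  # least-squares calibration line

# Precision across repeated injections is reported as a coefficient of variation.
replicates = np.array([96.8, 97.1, 97.5])     # hypothetical repeated areas
cv = 100 * replicates.std(ddof=1) / replicates.mean()
```

A correlation coefficient near 1 and a CV below the reported 5.89% ceiling are what the validation criteria in the abstract correspond to.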

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference
    • /
    • 1999.06a
    • /
    • pp.175-186
    • /
    • 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods based on multi-scale decomposition, such as wavelet analysis, have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically extracts better localized information at different time intervals from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The local properties of wavelets are particularly useful for describing signals with sharp, spiky, discontinuous or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition methods for financial time-series forecasting, and apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of filter type and filter parameters. If the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a theoretical or statistical criterion, and new, versatile techniques have recently been introduced to address this problem.
Our study analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Secondly, by comparing the results of different filtering techniques, we present different filtering criteria of wavelet analysis to support neural network learning optimization and analyze the critical issues in optimal filter design for wavelet analysis, including finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints concerning the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal that is useful in forecasting models, especially well-known neural network models.
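The shrinkage step discussed above (threshold too small → overfit, too large → underfit) can be illustrated with a minimal hand-rolled Haar decomposition and the universal (VisuShrink) threshold. This is a generic sketch of wavelet soft thresholding, not the filtering pipeline of the paper, and the synthetic signal is purely illustrative:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: approximation + detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inverse(a, d):
    """Invert one Haar level, interleaving the reconstructed samples."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, thr):
    """Soft thresholding (wavelet shrinkage): shrink coefficients toward zero."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 4 * t)                 # quasi-cyclical component
noisy = signal + 0.3 * rng.standard_normal(t.size)

# Two-level decomposition; threshold only the detail (high-frequency) bands,
# conserving the low-frequency approximation.
a1, d1 = haar_step(noisy)
a2, d2 = haar_step(a1)
sigma = np.median(np.abs(d1)) / 0.6745             # noise level via MAD
thr = sigma * np.sqrt(2 * np.log(noisy.size))      # universal (VisuShrink) threshold
denoised = haar_inverse(haar_inverse(a2, soft(d2, thr)), soft(d1, thr))
```

In the study's setting, the denoised series (rather than the raw one) would feed the neural network, and the choice of `thr` is exactly the filter-parameter problem the abstract highlights.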


Roles of B-dot Controller and Failure Analysis for Dawn-dusk LEO Satellite (6시 저궤도 위성에서 B-dot 제어기 역할과 고장분석)

  • Rhee, Seung-Wu;Kim, Hong-Joong;Son, Jun-Won
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.41 no.3
    • /
    • pp.200-209
    • /
    • 2013
  • In this paper, the types of B-dot controller and the results of a B-dot controller stability review are summarized. It is confirmed that the B-dot controller is a useful and essential tool, especially when a dawn-dusk low-Earth-orbit (LEO) large satellite must reliably capture the Sun for the required power supply after an anomaly, and that its algorithm is simple enough for on-board implementation. A new physical interpretation of the B-dot controller is presented as a result of extensive theoretical investigation, introducing the concepts of transient control torque and steady-state control torque. Failure-effect analysis results for the magnetic torquers, together with simulation verification, are also included, and a design recommendation is provided for an optimal design that copes with magnetic torquer failure. Nonlinear simulation results justify the controller's capability and performance for application to a dawn-dusk LEO large satellite.
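The control law itself is indeed simple enough for on-board implementation. A minimal sketch of the classic rate-based form m = -k·dB/dt follows; the gain and field values are illustrative, not the satellite's actual parameters:

```python
import numpy as np

def bdot_dipole(B_now, B_prev, dt, k=5.0):
    """B-dot control law: command a magnetic dipole opposing the measured
    rate of change of the body-frame field, m = -k * dB/dt.
    B_* are magnetometer readings in the body frame [T]; k is a gain."""
    B_dot = (B_now - B_prev) / dt      # finite-difference estimate of dB/dt
    return -k * B_dot                  # commanded dipole moment [A*m^2]

def control_torque(m, B):
    """Torque produced by the magnetic torquers: tau = m x B."""
    return np.cross(m, B)

# One control step: as the body tumbles, the measured field direction changes,
# so the commanded torque opposes that motion (detumbling / rate damping).
B_prev = np.array([2.0e-5, 0.0, 3.0e-5])
B_now  = np.array([1.9e-5, 0.3e-5, 3.0e-5])
m = bdot_dipole(B_now, B_prev, dt=0.1)
tau = control_torque(m, B_now)
```

Note that the torque is always perpendicular to the local field, which is why a magnetic-torquer-only controller cannot command torque about the field axis; that limitation motivates the failure analysis described in the abstract.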

Development of a New Advanced Water Treatment Process (PMR) and Assessment of Its Treatment Efficiency (고도정수처리 신(新) 공정(PMR)개발 및 처리효율 평가)

  • Ahn, Hyo-Won;Noh, Soo-Hong;Kwon, Oh-Sung;Park, Yong-Hyo;Wang, Chang-Keun
    • Membrane Journal
    • /
    • v.18 no.2
    • /
    • pp.157-167
    • /
    • 2008
  • Removal of organic substances and control of taste and odor are among the main issues in water supply, which has led to the introduction of advanced processes such as ozone/GAC or PAC. However, as raw water quality deteriorates and new pollutants emerge, water quality is not acceptable even with these existing advanced processes. In this paper, a new advanced water treatment process is introduced that uses a PAC slurry blanket, in which PAC particles stay in the basin as a slurry blanket, coupled with submerged membranes. A pilot plant (80 m³/day) was installed to assess the performance of this new process with actual raw water: DOC removal was higher than 90% in the beginning and 70~80% afterwards, while 2-MIB and geosmin were removed completely. The new process still requires further study on process optimization and long-term assessment, but it appears highly promising as a new advanced process with high removal efficiency.

Optimization of Tank Model Parameters Using Multi-Objective Genetic Algorithm (I): Methodology and Model Formulation (다목적 유전자알고리즘을 이용한 Tank 모형 매개변수 최적화(I): 방법론과 모형구축)

  • Kim, Tae-Soon;Jung, Il-Won;Koo, Bo-Young;Bae, Deg-Hyo
    • Journal of Korea Water Resources Association
    • /
    • v.40 no.9
    • /
    • pp.677-685
    • /
    • 2007
  • The objective of this study is to evaluate the applicability of a multi-objective genetic algorithm (MOGA) for calibrating the parameters of the conceptual rainfall-runoff Tank model. NSGA-II, one of the most widely used MOGA implementations, is combined with the Tank model, and four objective functions are used: minimizing volume error, root mean square error (RMSE), high-flow RMSE, and low-flow RMSE. When NSGA-II is employed with more than three objective functions, the number of Pareto-optimal solutions usually becomes too large, so selecting several preferred Pareto-optimal solutions is essential for stakeholders; a preference-ordering approach is used in this study to obtain the best preferred Pareto-optimal solutions. A sensitivity analysis is performed to examine the effect of the initial genetic parameters, namely generation number and population size, on the performance of NSGA-II in searching for proper Tank model parameters; the results suggest a generation number of 900 and a population size of 1,000 for this study.
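The Pareto-optimality notion underlying NSGA-II can be made concrete with a small dominance check over candidate objective vectors. This is a generic sketch with illustrative numbers, not the study's NSGA-II code:

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy objective vectors, e.g. (volume error, RMSE) pairs for candidate
# Tank-model parameter sets (values are illustrative).
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(pts)   # (3.0, 4.0) is dominated by (2.0, 3.0)
```

With four objectives, fronts like this grow quickly, which is exactly why the study adds a preference-ordering step to pick a handful of solutions for stakeholders.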

Development of Slurry Flow Control and Slot Die Optimization Process for Manufacturing Improved Electrodes in Production of Lithium-ion Battery for Electric Vehicles (전기자동차 리튬이온 배터리 제조공정에서 Loading Level 산포최소화 코팅을 통한 전극 품질개선에 관한 연구)

  • Jang, Chan-Hee;Lee, Jae-Chon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.3
    • /
    • pp.14-20
    • /
    • 2018
  • Electric vehicles are environmentally friendly because, unlike gasoline automobiles, they emit no exhaust gas. However, since they are driven by electric power from batteries, the distance they can travel on a single charge depends on the batteries' energy density, which makes the lithium-ion battery, with its high energy density, a good candidate for electric vehicles. Since the electrode is an essential component governing efficiency, the electrode manufacturing process plays a vital role in the entire production process of lithium-ion batteries; in particular, the coating process is a critical step with a significant influence on electrode performance. In this paper, we propose an innovative process for improving the efficiency and productivity of the coating process in electrode manufacturing and describe the equipment design method and development results. Specifically, we propose a design procedure and development method to improve core-plate coating quality by 25%, using a technology capable of reducing the assembly margin at high output and high capacity while improving product capacity quality and assembly process yield. Using this method, the battery life of the lithium-ion battery cell was improved: compared with the existing coating process, the target loading level is maintained while its dispersion is reduced, preserving the anode capacity (loading-level dispersion reduced from ±0.4 to ±0.3 mg/cm²).

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taek-Soo;Han, In-Goo
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 1999.03a
    • /
    • pp.175-186
    • /
    • 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods based on multi-scale decomposition, such as wavelet analysis, have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically extracts better localized information at different time intervals from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The local properties of wavelets are particularly useful for describing signals with sharp, spiky, discontinuous or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition methods for financial time-series forecasting, and apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of filter type and filter parameters. If the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a theoretical or statistical criterion, and new, versatile techniques have recently been introduced to address this problem.
Our study analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Secondly, by comparing the results of different filtering techniques, we present different filtering criteria of wavelet analysis to support neural network learning optimization and analyze the critical issues in optimal filter design for wavelet analysis, including finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints concerning the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal that is useful in forecasting models, especially well-known neural network models.


Analysis of Optimal Infiltration Route using Genetic Algorithm (유전자 알고리즘을 이용한 최적침투경로 분석)

  • Bang, Soo-Nam;Sohn, Hyong-Gyoo;Kim, Sang-Pil;Kim, Chang-Jae;Heo, Joon
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.1
    • /
    • pp.59-68
    • /
    • 2011
  • The analysis of optimal infiltration paths is one of the representative fields in which GIS technology can serve military purposes. Optimal-path analysis is usually done with network data; for military purposes, however, it often needs to be done with raster data. Because raster data requires far more computation than network data, methods commonly used on network data, such as the Dijkstra algorithm, are difficult to apply. A genetic algorithm, which has shown strong results in optimization problems, was therefore applied to minimize the detection probability along the infiltration route. 2D binary-array genes, together with crossover and mutation operators for them, were proposed to solve this problem with raster data. Thirty tests were performed for each population size of 500, 1,000, 2,000, and 3,000. With each generation, fitter routes survived and produced offspring routes. The results indicate that as the generations increased, the average detection probability decreased and the routes converged to the optimal path; larger populations also found more optimal routes. The proposed genetic algorithm successfully finds the optimal infiltration route, and it performs better with a larger population.
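The 2D binary-array genes and their operators can be sketched as follows. The representation, the row-wise crossover, and the fitness function here are illustrative guesses at one reasonable encoding, not the exact operators defined in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def crossover(parent_a, parent_b):
    """Row-wise one-point crossover for 2D binary route genes: the child
    takes rows above a random cut from one parent, below it from the other."""
    cut = rng.integers(1, parent_a.shape[0])
    return np.vstack([parent_a[:cut], parent_b[cut:]])

def mutate(gene, rate=0.01):
    """Flip each raster cell independently with probability `rate`."""
    flips = rng.random(gene.shape) < rate
    return np.where(flips, 1 - gene, gene)

def fitness(gene, detection):
    """Lower summed detection probability over occupied cells is fitter."""
    return float((gene * detection).sum())

# Toy 5x5 raster of per-cell detection probabilities, and two candidate
# route genes (1 = cell on the route); values are illustrative.
detection = rng.random((5, 5))
a = (rng.random((5, 5)) < 0.3).astype(int)
b = (rng.random((5, 5)) < 0.3).astype(int)
child = mutate(crossover(a, b))
```

A full GA would additionally repair or penalize children whose cells do not form a connected route from start to goal, then select survivors by this detection-probability fitness, as the abstract describes.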

Study on Optimization of Detection System of Prompt Gamma Distribution for Proton Dose Verification (양성자 선량 분포 검증을 위한 즉발감마선 분포측정 장치 최적화 연구)

  • Lee, Han Rim;Min, Chul Hee;Park, Jong Hoon;Kim, Seong Hoon;Kim, Chan Hyeong
    • Progress in Medical Physics
    • /
    • v.23 no.3
    • /
    • pp.162-168
    • /
    • 2012
  • In proton therapy, in vivo dose verification is essential to fully exploit the characteristics of the proton dose distribution, which concentrates a high dose with a steep gradient, and to guarantee patient safety. Currently, in order to image the proton dose distribution, a prompt-gamma distribution detection system is under development, consisting of an array of multiple CsI(Tl) scintillation detectors in the vertical direction, a collimator, and a multi-channel DAQ system. In the present study, the optimal design of the prompt-gamma distribution detection system was studied by Monte Carlo simulations using the MCNPX code. For effective measurement of high-energy prompt gammas with sufficient imaging resolution, the dimensions of the CsI(Tl) scintillator were determined to be 6×6×50 mm³. To maximize the detection efficiency for prompt gammas while minimizing the contribution of background gammas generated by neutron capture, the hole size and length of the collimator were optimized as 6×6 mm² and 150 mm, respectively. Finally, the performance of the optimized detection system was predicted by Monte Carlo simulations for a 150 MeV proton beam. The results show that the detection system with the optimal dimensions can effectively measure the 2D prompt-gamma distribution and determine the beam range within 1 mm error for a 150 MeV proton beam.
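The 1 mm range-determination figure rests on locating the distal falloff of the measured prompt-gamma profile. As a generic illustration of that step (a toy sigmoid depth profile with Poisson counting noise, not MCNPX output), the half-value falloff position can be estimated as:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy prompt-gamma depth profile: a plateau with a sharp distal falloff
# near the beam range (150 mm here); detected counts carry Poisson noise.
depth = np.arange(0.0, 200.0, 1.0)                  # 1 mm bins
true_range = 150.0
profile = 1.0 / (1.0 + np.exp((depth - true_range) / 2.0))
counts = rng.poisson(profile * 500)

# Smooth the counts, then take the depth where the profile crosses
# half of its plateau value as the range estimate.
smooth = np.convolve(counts, np.ones(5) / 5, mode="same")
plateau = smooth[20:100].mean()
roi = depth > 100                                   # search near the falloff only
est_range = depth[roi][np.argmin(np.abs(smooth[roi] - plateau / 2))]
```

With enough counts on the plateau, the steep falloff pins the half-value crossing to within roughly one bin, which is the spirit of the sub-millimeter range accuracy reported in the abstract.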