• Title/Summary/Keyword: Optimal Process Mean

Search Results: 192

The Initial Value Setting-Up Method for Extending the Range of the Optimal Step Parameter under LMS Algorithm (LMS 알고리즘에서 최적 매개변수의 선택 폭 확대를 위한 초기치의 설정방법)

  • Cho, Ki-Ryang;An, Hyuk;Choo, Byoung-Yoon;Lee, Chun-Jae
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.2 / pp.284-292 / 2003
  • In this paper, we numerically examined an initial-value setting method that extends the usable range of the step parameter in an adaptive system controlled by the LMS algorithm. Two initialization schemes were compared: the conventional method, which selects the initial value randomly, and an alternative method, which uses an approximate solution obtained from the direct method as the initial value. We then compared the admissible range of the step parameter, the convergence speed of the mean-square error, and the stability of the convergence process when each initialization was applied to an optimal directivity synthesis problem. According to the numerical simulation results, initializing by means of the direct method provides a wider step-parameter range, faster and more stable convergence, and better error-correction capability than the conventional method.
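
To make the comparison concrete, here is a minimal sketch (not the paper's code) of an LMS adaptive filter identifying an unknown FIR system, with the initial weight vector taken either at random or from a direct least-squares solution computed on a short leading data block. The system, signals, step size, and block length are all hypothetical.

```python
# Minimal sketch: LMS identification of an FIR system, comparing a random
# initial weight vector with one obtained from a direct (least-squares)
# solution on a short block of data, as the abstract describes.
import numpy as np

rng = np.random.default_rng(0)
N, M = 2000, 8                      # samples, filter taps (assumed)
h_true = rng.standard_normal(M)     # unknown system (hypothetical)
x = rng.standard_normal(N)
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)

def regressor(x, n, M):
    """Most recent M input samples at time n (zero-padded)."""
    u = np.zeros(M)
    k = min(M, n + 1)
    u[:k] = x[n::-1][:k]
    return u

def lms(x, d, w0, mu):
    """Standard LMS update: w <- w + mu * e * u. Returns squared-error history."""
    w, err = w0.copy(), []
    for n in range(len(x)):
        u = regressor(x, n, len(w))
        e = d[n] - w @ u
        w += mu * e * u
        err.append(e * e)
    return np.array(err)

# Direct-method initial value: least-squares fit on a short leading block.
B = 100
U = np.array([regressor(x, n, M) for n in range(B)])
w_direct = np.linalg.lstsq(U, d[:B], rcond=None)[0]

mu = 0.05                           # step parameter (hypothetical value)
mse_random = lms(x, d, rng.standard_normal(M), mu).mean()
mse_direct = lms(x, d, w_direct, mu).mean()
print(f"mean squared error: random init {mse_random:.4f}, direct init {mse_direct:.4f}")
```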

Treatment Characteristics of Plating Wastewater Containing Freecyanide, Cyanide Complexes and Heavy Metals (I) (도금폐수내 유리시안과 착염시안 및 중금속의 처리특성 (I))

  • Jung, Yeon-Hoon;Lee, Soo-Koo
    • Journal of Korean Society on Water Environment / v.25 no.6 / pp.979-983 / 2009
  • The mean pH of wastewater discharged from the plating process is 2, so less alkali is required to raise the pH from 2 to 5. In addition, if sodium sulfite is used to raise the pH from 5 to 9 in the secondary treatment, caustic soda or slaked lime is unnecessary, or only a small amount is needed, because sodium sulfite is alkaline. It is therefore considered desirable to use only $FeSO_4{\cdot}7H_2O$ in the primary treatment. Under these conditions, the free cyanide removal rate was highest, at around 99.3%; among heavy metals, Ni showed the highest removal rate, at around 92%, whereas zinc and chromium showed low removal rates. The optimal dose of $FeSO_4{\cdot}7H_2O$ was 0.3 g/L, at which the cyanide removal rate was highest, and the free cyanide removal rate peaked at pH 5. Of the cyanide removed in the primary treatment, the largest part was removed through the precipitation of ferric ferrocyanide, $Fe_4[Fe(CN)_6]_3$, and the rest was precipitated as $Cu_2[Fe(CN)_6]$, $Ni_2[Fe(CN)_6]$, CuCN, etc. Furthermore, mixing $Na_2SO_3$ and $Na_2S_2O_5$ at an optimal ratio and adding the mixture proved more effective at removing residual cyanide from the wastewater than adding them separately; the optimal weight ratio of $Na_2SO_3$ to $Na_2S_2O_5$ was 1:2, at which the oxidative decomposition of residual cyanide was most active. However, further research is required on the simultaneous removal of heavy metals such as chromium and zinc.

CFD Study for the Design of Coolant Path in Cryogenic Etch Chuck

  • Jo, Soo Hyun;Han, Ji Hee;Kim, Jong Oh;Han, Hwi;Hong, Sang Jeen
    • Journal of the Semiconductor & Display Technology / v.20 no.2 / pp.92-97 / 2021
  • The importance of processing in cryogenic environments is increasing as a way to address problems such as critical dimension (CD) shrinkage and bottlenecks in micro-processing. Accordingly, in this paper we design and analyze an electrostatic chuck (ESC) and its coolant path for cryogenic environments, analyze the temperature distribution of the ESC under these conditions, and present the optimal design. The configuration with liquid-nitrogen coolant flowing in a doubled aluminum path, which gave excellent wafer temperature uniformity, was selected as the reference model. A design of simulation (DOS) study was then carried out based on the coolant-turn settings of the reference model, with three cases each for mass flow rate and path diameter. Comparison of factors with p-values below 0.05 indicates that the optimal design point is five coolant turns with a flow rate of 0.3 kg/s and a diameter of 12 mm. ANOVA on the interactions between these factors shows that mass flow rate is the most significant of the parameters of interest. In the variable selection procedure, the five coolant turns were divided into two groups (Case 1: 2+3, Case 2: 3+2), and a two-sample t-test on the means and variances determined Case 2 to be superior. Finally, heat transfer analyses such as the finite difference method (FDM) were also performed to demonstrate the feasibility and adequacy of the analysis process.
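
A minimal sketch of the kind of two-sample comparison the abstract describes, using synthetic wafer-temperature-uniformity samples rather than the paper's simulation results; the group means, spreads, and sample sizes are assumptions.

```python
# Minimal sketch (synthetic data, not the paper's results): comparing two
# coolant-path groupings with a two-sample t-test, as the abstract describes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical wafer-temperature non-uniformity samples (K) for each design.
case1 = rng.normal(loc=1.8, scale=0.20, size=15)   # 2+3 split of coolant turns
case2 = rng.normal(loc=1.6, scale=0.15, size=15)   # 3+2 split of coolant turns

# Welch's t-test (no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(case1, case2, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Case means differ at the 5% level; pick the lower-non-uniformity case.")
```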

The Study of the Cycle Time Improvement by Work-In-Process Statistical Process Control Method for IC Foundry Manufacturing

  • Lin, Yu-Cheng;Tsai, Chih-Hung;Li, Rong-Kwei;Chen, Ching-Piao;Chen, Hsien-Ching
    • International Journal of Quality Innovation / v.9 no.3 / pp.71-91 / 2008
  • Cycle time is defined as the time from wafer start to wafer output. It usually takes one or two months to obtain the product after the customer decides to produce it. Cycle time is a critical factor for customer satisfaction because it represents the response time to the market, and a long cycle time also reflects ineffective use of capital. Cycle time is therefore very important for a foundry: a long cycle time causes customer dissatisfaction and order loss. Consequently, foundries invest substantial manpower in cycle time improvement. Cycle time management decisions are usually made from experience; there is no underlying mechanism or theory. Work-in-process (WIP) management is based on turn rates and standard WIP (STD WIP) levels set from experience. But experience does not guarantee an optimal solution: when the situation changes, the appropriate cycle time or standard WIP also changes, so experience alone is not always applicable, and without a mechanism, management cannot be carried out systematically. Interviews with several foundry fab managers confirmed that none of the fabs can react promptly; all of them suffer an impact period after the product mix or utilization varies. In this study, we develop a formula for standard WIP and use the statistical process control (SPC) concept to set upper and lower WIP limit levels. When WIP exceeds a limit level, action plans are triggered to restore the WIP profile. If the WIP profile is balanced, excess WIP is unnecessary, so the WIP level can be reduced and the cycle time reduced as well.
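
As a rough illustration of the idea (not the paper's formula), the sketch below sets a standard WIP target from a Little's-law style relation (WIP = throughput × target cycle time) and places SPC-style upper and lower control limits at the mean ± 3σ of observed WIP; all numbers are hypothetical.

```python
# Minimal sketch (hypothetical numbers, not the paper's formula): a Little's-law
# style standard WIP target with SPC-type upper/lower control limits on observed WIP.
import numpy as np

throughput = 120.0        # wafers moved per day at a stage (assumed)
target_ct = 2.5           # target cycle time at the stage, days (assumed)
std_wip = throughput * target_ct   # standard WIP via Little's law

rng = np.random.default_rng(2)
observed_wip = rng.normal(std_wip, 25.0, size=30)   # daily WIP snapshots (synthetic)

mean_wip = observed_wip.mean()
sigma = observed_wip.std(ddof=1)
ucl, lcl = mean_wip + 3 * sigma, mean_wip - 3 * sigma   # SPC control limits

for day, wip in enumerate(observed_wip, start=1):
    if wip > ucl:
        print(f"day {day}: WIP {wip:.0f} above UCL {ucl:.0f} -> trigger catch-up actions")
    elif wip < lcl:
        print(f"day {day}: WIP {wip:.0f} below LCL {lcl:.0f} -> feed more starts")
```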

The Bi-directional Least Mean Square Algorithm and Its Application to Echo Cancellation (양방향 최소 평균 제곱 알고리듬과 반향 제거로의 응용)

  • Kwon, Oh-Sang
    • The Journal of the Korea institute of electronic communication sciences / v.9 no.12 / pp.1337-1344 / 2014
  • The objective of an echo canceller connected to either end of a communication line, such as a digital subscriber line (DSL), is to compensate for the portion of the outgoing transmit signal that the hybrid circuit leaks into the receiving path. An echo canceller working in a full-duplex environment is an adaptive system driven by the local signal. A conventional echo canceller implementing the least mean square (LMS) algorithm has a low computational burden but poor convergence properties. The length of the echo canceller directly affects both its performance and the convergence speed of the adaptation process: to cancel long time-varying echoes, the number of tap coefficients of a conventional echo canceller must be large, which slows the convergence of the adaptive filter. This paper proposes an alternative technique for echo cancellation in a telecommunication channel. The new technique employs the bi-directional least mean square (LMS) algorithm, composed of a weighted combination of feedforward and feedback algorithms, to adaptively compute the optimal set of echo-canceller coefficients. Simulation results as well as mathematical analysis demonstrate that the proposed echo canceller has a faster convergence speed than the conventional LMS echo canceller with nearly the same computational complexity.
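
The paper's bi-directional algorithm is not reproduced here; as a loose illustration of a "weighted combination of two adaptive updates," the sketch below blends a fast and a slow LMS echo canceller through a convex combination with an adaptively tuned mixing weight. The echo path, signals, step sizes, and mixing rule are assumptions.

```python
# Loose illustration (not the paper's bi-directional LMS): a convex combination
# of two LMS echo cancellers, one fast and one slow, blended by an adaptive weight.
import numpy as np

rng = np.random.default_rng(3)
N, M = 5000, 16
echo_path = rng.standard_normal(M) * np.exp(-0.3 * np.arange(M))  # hypothetical
far_end = rng.standard_normal(N)
echo = np.convolve(far_end, echo_path)[:N]
near_end = echo + 0.01 * rng.standard_normal(N)    # received signal = echo + noise

w_fast, w_slow = np.zeros(M), np.zeros(M)
mu_fast, mu_slow = 0.05, 0.005
a, mu_a = 0.0, 1.0                                 # mixing parameter, lambda = sigmoid(a)
residual = np.zeros(N)

for n in range(M, N):
    u = far_end[n - M + 1:n + 1][::-1]             # most recent M far-end samples
    y_fast, y_slow = w_fast @ u, w_slow @ u
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y_fast + (1.0 - lam) * y_slow        # combined echo estimate
    e = near_end[n] - y                            # residual after cancellation
    e_fast, e_slow = near_end[n] - y_fast, near_end[n] - y_slow
    w_fast += mu_fast * e_fast * u                 # each filter adapts on its own error
    w_slow += mu_slow * e_slow * u
    # Gradient step on the mixing weight, clipped for numerical stability.
    a = np.clip(a + mu_a * e * (y_fast - y_slow) * lam * (1.0 - lam), -4.0, 4.0)
    residual[n] = e

print(f"mean residual echo power (last 1000 samples): {np.mean(residual[-1000:]**2):.2e}")
```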

6MV Photon Beam Commissioning in Varian 2300C/D with BEAM/EGS4 Monte Carlo Code

  • Kim, Sangroh;Jason W. Sohn;Cho, Byung-Chul;Suh, Tae-Suk;Choe, Bo-Yong;Lee, Hyoung-Koo
    • Proceedings of the Korean Society of Medical Physics Conference / 2002.09a / pp.113-115 / 2002
  • The Monte Carlo method is a numerical solution to a problem that models objects interacting with other objects or their environment through simple object-object or object-environment relationships. In spite of its great accuracy, it was long avoided because of the long calculation time needed to simulate a model, but with advances in computer technology it is now frequently used to simulate linear accelerators. Simulating a linear accelerator requires many input parameters for the Monte Carlo code, and these data can be supplied by the linear accelerator manufacturer. However, even machines of the same model can exhibit different characteristics. We therefore performed a commissioning process for the 6 MV photon beam of a Varian 2300C/D with the BEAM/EGS4 Monte Carlo code. The head geometry data were entered into BEAM/EGS4, and the mean energy and energy spread of the electron beam incident on the target were varied to match the Monte Carlo simulations to measurements. TLDs (thermoluminescent dosimeters) and radiochromic films were used to measure the absorbed dose in a water phantom; the beam profile was obtained for a 40 cm × 40 cm field and the depth dose for a 10 cm × 10 cm field. First, we compared the depth dose between measurements and Monte Carlo simulations while varying the mean energy of the incident electron beam; then we compared the beam profile while adjusting the radius of the incident electron beam in the simulation. The results showed that an incident mean energy of 6 MeV and a beam radius of 0.1 mm matched the measurements well.
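
A minimal sketch of the commissioning loop described above, using a toy analytic depth-dose model in place of BEAM/EGS4 output: candidate incident mean energies are scored by an RMS percent difference against a "measured" curve and the best match is selected. The dose model, candidate energies, and noise level are hypothetical.

```python
# Minimal sketch (synthetic curves, not BEAM/EGS4 output): choosing the incident
# electron-beam mean energy whose simulated depth-dose best matches measurement,
# using an RMS percent-difference figure of merit.
import numpy as np

depth = np.linspace(0, 30, 61)                       # depth in water, cm

def depth_dose(energy_mev, depth_cm):
    """Toy depth-dose shape (build-up then exponential fall-off); hypothetical model."""
    dmax = 0.2 + 0.2 * energy_mev                    # build-up depth grows with energy
    return (1 - np.exp(-depth_cm / dmax)) * np.exp(-0.04 * depth_cm)

measured = depth_dose(6.0, depth) + 0.005 * np.random.default_rng(4).standard_normal(depth.size)

candidates = [5.5, 5.75, 6.0, 6.25, 6.5]             # trial mean energies, MeV

def rms_diff(sim, meas):
    """RMS difference as a percentage of the measured maximum."""
    return np.sqrt(np.mean(((sim - meas) / meas.max()) ** 2)) * 100.0

scores = {E: rms_diff(depth_dose(E, depth), measured) for E in candidates}
best = min(scores, key=scores.get)
print({E: round(s, 3) for E, s in scores.items()})
print(f"best-matching mean energy: {best} MeV")
```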


Analytical Evaluation of FFR-aided Heterogeneous Cellular Networks with Optimal Double Threshold

  • Abdullahi, Sani Umar;Liu, Jian;Mohadeskasaei, Seyed Alireza
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.7 / pp.3370-3392 / 2017
  • Next-generation beyond-4G/5G systems will rely on the deployment of small cells over conventional macrocells to achieve high spectral efficiency and improved coverage, especially in indoor and hotspot environments. In such heterogeneous networks, the expected performance gains can only be realized with efficient interference coordination schemes, such as Fractional Frequency Reuse (FFR), which is attractive for its simplicity and effectiveness. In this work, femtocells are deployed according to a spatial Poisson Point Process (PPP) over hexagonally shaped, 6-sector macro base stations (MeNBs) in an uncoordinated manner, operating in hybrid mode. A newly introduced intermediary region prevents cross-tier, cross-boundary interference and improves user equipment (UE) performance at the boundary between cell center and cell edge. Using tools of stochastic geometry, an analytical framework for the signal-to-interference-plus-noise ratio (SINR) distribution is developed to evaluate the performance of UEs in different spatial locations, considering both co-tier and cross-tier interference. From the SINR distribution, the average network throughput per tier is derived together with a newly proposed harmonic-mean metric, which ensures fairness in resource allocation among all UEs. Finally, the FFR network parameters are optimized to maximize the average network throughput and the harmonic mean under a fair resource assignment constraint. Numerical results verify the proposed analytical framework and provide insight into the design trade-offs between maximizing throughput and user fairness through appropriate adjustment of the spatial partitioning thresholds, the spectrum allocation factor, and the femtocell density.
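
As a small illustration of why a harmonic-mean throughput metric favors fairness, the sketch below compares the arithmetic and harmonic means of per-UE Shannon rates computed from a set of SINR samples; the SINR range and bandwidth are assumptions, not values from the paper.

```python
# Minimal sketch (hypothetical numbers): harmonic-mean versus arithmetic-mean
# throughput as a fairness-aware objective over a set of user SINRs.
import numpy as np

rng = np.random.default_rng(5)
sinr_db = rng.uniform(-3, 25, size=20)               # per-UE SINR samples (assumed)
bandwidth_hz = 180e3                                  # one resource block (assumed)
rate = bandwidth_hz * np.log2(1 + 10 ** (sinr_db / 10))   # Shannon rate per UE, bit/s

arith_mean = rate.mean()
harm_mean = rate.size / np.sum(1.0 / rate)            # harmonic mean

print(f"arithmetic mean throughput: {arith_mean / 1e3:.1f} kbit/s")
print(f"harmonic mean throughput:   {harm_mean / 1e3:.1f} kbit/s (penalizes weak users more)")
```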

Sensitivity Approach of Sequential Sampling Using Adaptive Distance Criterion (적응거리 조건을 이용한 순차적 실험계획의 민감도법)

  • Jung, Jae-Jun;Lee, Tae-Hee
    • Transactions of the Korean Society of Mechanical Engineers A / v.29 no.9 s.240 / pp.1217-1224 / 2005
  • To improve the accuracy of a metamodel, additional sample points can be selected according to a specified criterion, an approach often called sequential sampling. Sequential sampling requires a small computational cost compared with one-stage optimal sampling, and it can monitor the metamodeling process by identifying design regions that are important for the approximation and refining the fidelity there. However, existing criteria such as mean squared error, entropy, and maximin distance depend essentially on the distances between previously selected sample points. Therefore, even when sufficient sample points are selected, these sequential sampling strategies cannot guarantee the accuracy of the metamodel near the optimum, because their criteria are inefficient at approximating the extremum and inflection points of the original model. In this research, a new sequential sampling approach that uses the sensitivity of the metamodel is proposed in order to reflect the behavior of the response. Various functions representing a variety of features of engineering problems are used to validate the sensitivity approach. In addition to the root mean squared error and the maximum error, the error of the metamodel at the optimum points is examined to assess the superiority of the proposed approach; that is, optimum solutions obtained by minimizing the metamodel built with the proposed approach are compared with those of the true functions. For comparison, both the mean squared error approach and the maximin distance approach are also examined.
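
A loose one-dimensional illustration of sensitivity-driven sequential sampling (not the paper's exact criterion): each iteration adds the candidate point that maximizes a product of distance-to-existing-samples and the metamodel's local gradient magnitude, then refits the metamodel. The test function, polynomial metamodel, and scoring rule are assumptions.

```python
# Loose 1-D illustration (not the paper's exact criterion): pick the next sample
# where a product of distance-to-existing-points and metamodel sensitivity
# (gradient magnitude) is largest, then refit the metamodel.
import numpy as np

def f(x):                                   # true response (hypothetical test function)
    return np.sin(3 * x) + 0.5 * x

x_s = np.array([0.0, 0.8, 1.6, 2.4, 3.0])   # initial samples
candidates = np.linspace(0.0, 3.0, 301)

for it in range(5):
    coeff = np.polyfit(x_s, f(x_s), deg=min(4, len(x_s) - 1))   # polynomial metamodel
    sensitivity = np.abs(np.polyval(np.polyder(coeff), candidates))
    dist = np.min(np.abs(candidates[:, None] - x_s[None, :]), axis=1)
    score = dist * (1.0 + sensitivity)      # space-filling term weighted by sensitivity
    x_new = candidates[np.argmax(score)]
    x_s = np.sort(np.append(x_s, x_new))
    print(f"iteration {it + 1}: added x = {x_new:.3f}")
```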

Preparation of Alginate Microspheres by Rotating Membrane Emulsification (회전 막유화에 의한 알지네이트 미소 구체의 제조)

  • Min, Kyoung Won;Youm, Kyung Ho
    • Membrane Journal / v.31 no.1 / pp.52-60 / 2021
  • When preparing calcium alginate microspheres by rotating membrane emulsification, in which an SPG (Shirasu porous glass) tubular membrane rotates in the continuous phase, the optimal process conditions for producing monodisperse microspheres were determined. We examined the effects of the process parameters of rotating membrane emulsification (the rotating speed of the membrane module, the transmembrane pressure, the ratio of dispersed phase to continuous phase, the alginate concentration, the emulsifier concentration, the stabilizer concentration, the crosslinking agent concentration, and the membrane pore size) on the mean size and size distribution of the alginate microspheres. Among these parameters, the size of the microspheres decreased as the rotating speed of the membrane module, the emulsifier concentration, and the crosslinking agent concentration increased; conversely, the size increased as the ratio of dispersed phase to continuous phase, the transmembrane pressure, and the alginate concentration increased. Using an SPG membrane with a pore size of 3.2 ㎛, monodisperse alginate microspheres with a particle size of 4.5 ㎛ could finally be prepared by controlling the process parameters.

Performance Evaluation of Loss Functions and Composition Methods of Log-scale Train Data for Supervised Learning of Neural Network (신경 망의 지도 학습을 위한 로그 간격의 학습 자료 구성 방식과 손실 함수의 성능 평가)

  • Donggyu Song;Seheon Ko;Hyomin Lee
    • Korean Chemical Engineering Research / v.61 no.3 / pp.388-393 / 2023
  • The analysis of engineering data with neural networks based on supervised learning has been utilized in various engineering fields, such as optimization of chemical engineering processes, prediction of particulate matter concentrations, prediction of thermodynamic phase equilibria, and prediction of physical properties in transport phenomena systems. Supervised learning requires training data, and its performance is affected by the composition and configuration of the given training data. Engineering data are frequently given on a log scale, for example DNA lengths or analyte concentrations. In this study, for widely distributed log-scale training data of virtual 100×100 images, the available loss functions were quantitatively evaluated in terms of (i) the confusion matrix, (ii) the maximum relative error, and (iii) the mean relative error. As a result, the mean-absolute-percentage-error and mean-squared-logarithmic-error loss functions were optimal for the log-scale training data. Furthermore, we found that uniformly selected training data led to the best prediction performance. The optimal loss functions and the method for composing training data studied in this work could be applied to engineering problems such as evaluating DNA length, analyzing biomolecules, and predicting the concentration of colloidal suspensions.
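
A minimal sketch of the loss functions highlighted above: mean-absolute-percentage-error (MAPE) and mean-squared-logarithmic-error (MSLE) compared with plain MSE on targets spanning several orders of magnitude, showing why relative-error losses suit log-scale data. The numbers are synthetic.

```python
# Minimal sketch: MAPE and MSLE versus plain MSE on log-scale targets,
# illustrating why relative-error losses suit widely ranging data.
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def msle(y_true, y_pred):
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

# Targets spanning several orders of magnitude (synthetic), each off by +10%.
y_true = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
y_pred = 1.1 * y_true

print(f"MSE : {mse(y_true, y_pred):.3e}   (dominated by the largest target)")
print(f"MAPE: {mape(y_true, y_pred):.2f}% (same relative error everywhere)")
print(f"MSLE: {msle(y_true, y_pred):.4f}  (also scale-insensitive for large targets)")
```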