• Title/Summary/Keyword: Parameter Update

Search Results: 114

Application-aware Design Parameter Exploration of NAND Flash Memory

  • Bang, Kwanhu;Kim, Dong-Gun;Park, Sang-Hoon;Chung, Eui-Young;Lee, Hyuk-Jun
    • JSTS: Journal of Semiconductor Technology and Science / v.13 no.4 / pp.291-302 / 2013
  • NAND flash memory (NFM) based storage devices, e.g. Solid State Drives (SSDs), are rapidly replacing conventional storage devices such as Hard Disk Drives (HDDs). As NAND flash memory technology advances, its specification has evolved to support denser cells and larger pages and blocks. However, efforts to fully understand the impact of these changes on design objectives such as performance, power, and cost for various applications are often neglected. Our research shows that this recent trend can adversely affect the design objectives, depending on the characteristics of the applications. Past works mostly focused on improving specific design objectives of NFM-based systems via various architectural solutions for a given NFM specification. Several other works attempted to model and characterize NFM but did not assess the system-level impact of individual parameters. To the best of our knowledge, this paper is the first work that treats the specification of NFM as the design parameters of NAND flash storage devices (NFSDs) and analyzes the characteristics of various synthesized and real traces and their interaction with these design parameters. Our research shows that the optimal design parameters depend heavily on the characteristics of the applications. The main contribution of this research is to understand the effects of low-level specifications of NFM, e.g. cell type, page size, and block size, on system-level metrics such as performance, cost, and power consumption in various applications with different characteristics, e.g. request length, update ratio, and read-and-modify ratio. Experimental results show that optimized page and block sizes can achieve up to 15 times better performance than the conventional NFM configuration in various applications. The results can be used to optimize the system-level objectives of a system running specific applications, e.g. embedded systems with NFM chips, or to predict the future direction of NFM.
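
To make the kind of design-parameter sweep described above concrete, here is a minimal, purely illustrative Python sketch; the timing constants, the crude cost model, and the tiny trace are assumptions, not values from the paper.

```python
# Hypothetical sketch: sweeping NAND cell type and page size for a synthetic trace.
# Latencies and the cost model are illustrative assumptions only.
from itertools import product

TRACE = [(0, 16 * 1024, "write"), (64 * 1024, 4 * 1024, "read")]  # (offset, length, op)

PAGE_READ_US = {"SLC": 25, "MLC": 50}     # assumed page read latencies (us)
PAGE_PROG_US = {"SLC": 200, "MLC": 900}   # assumed page program latencies (us)

def trace_latency_us(cell, page_kb, trace):
    """Crude cost model: each request touches ceil(length / page_size) pages."""
    page_bytes = page_kb * 1024
    total = 0.0
    for _, length, op in trace:
        pages = -(-length // page_bytes)  # ceiling division
        per_page = PAGE_READ_US[cell] if op == "read" else PAGE_PROG_US[cell]
        total += pages * per_page
    return total

best = min(product(("SLC", "MLC"), (2, 4, 8, 16)),
           key=lambda cfg: trace_latency_us(cfg[0], cfg[1], TRACE))
print("best (cell type, page size in KB):", best)
```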

Modal Test and Finite Element Model Update of Aircraft with High Aspect Ratio Wings (고세장비 항공기의 모드 시험 및 동특성 유한요소모델 개선)

  • Kim, Sang-Yong
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.22 no.5 / pp.480-488 / 2012
  • Aircraft with high-aspect-ratio wings made of composite materials have been developed; they enable high energy efficiency and long-endurance flight by reducing air resistance and structural weight. However, they have difficulty securing aeroelastic stability, such as against flutter, because of their long, flexible wings. Flutter is an unstable self-excited vibration caused by the interaction between structural dynamics and aerodynamics. It should be verified analytically, prior to the first flight test, that flutter does not occur within the range of the flight mission. Normally, a finite element model is used for the flutter analysis, so it is important to construct a finite element model whose dynamic characteristics are close to those of the real aircraft. Accordingly, in this research, a modal test of the aircraft with high-aspect-ratio composite wings was conducted to acquire its dynamic characteristics experimentally. The modal parameters from the finite element analysis (FEA) were then compared with those from the modal test. To bring the analysis results closer to the test results, the finite element model was updated by means of a sensitivity analysis on the model variables and optimization. Finally, it was shown that the updated finite element model is reliable when compared with the results of the modal test.
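
As a rough illustration of frequency-based model updating, the following Python sketch tunes the stiffness parameters of a toy two-degree-of-freedom model so that its natural frequencies match hypothetical test values; the toy model and target frequencies are assumptions, not the aircraft model of the paper.

```python
# Minimal sketch of modal-frequency-based model updating on a toy 2-DOF system.
import numpy as np
from scipy.optimize import least_squares

measured_hz = np.array([4.2, 11.7])          # hypothetical test frequencies

def model_frequencies(k):
    """Natural frequencies (Hz) of a 2-DOF spring-mass chain with unit masses."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    eigvals = np.linalg.eigvalsh(K)          # eigenvalues are omega^2 for unit masses
    return np.sqrt(np.abs(eigvals)) / (2 * np.pi)

def residual(k):
    return model_frequencies(k) - measured_hz

k0 = np.array([2000.0, 1000.0])              # initial design-value guess
result = least_squares(residual, k0, bounds=(1.0, 1e6))
print("updated stiffness parameters:", result.x)
```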

A Parallel Equalization Algorithm with Weighted Updating by Two Error Estimation Functions (두 오차 추정 함수에 의해 가중 갱신되는 병렬 등화 알고리즘)

  • Oh, Kil-Nam
    • Journal of the Institute of Electronics Engineers of Korea TC / v.49 no.7 / pp.32-38 / 2012
  • In this paper, to eliminate the intersymbol interference of the received signal caused by multipath propagation, a parallel equalization algorithm using two error estimation functions is proposed. In the proposed algorithm, multilevel two-dimensional signals are treated as equivalent binary signals, and the error signals are estimated using a sigmoid nonlinearity, which is effective in the initial phase of equalization, and a threshold nonlinearity, which offers good steady-state performance. The two errors are scaled by a weight that depends on the relative accuracy of the two error estimates, and the two filters are updated differentially. As a result, the combined output of the two filters approaches the optimum value, and fast convergence in the initial stage of equalization and a low steady-state error level are achieved at the same time thanks to the smooth combination of the two operating modes. The usefulness of the proposed algorithm was verified through computer simulations and comparison with the conventional method.
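
A hedged sketch of the two-branch idea is given below: one LMS-style filter is driven by a sigmoid-based error, the other by a hard-threshold error, and their outputs are blended by a weight derived from the recent error power of each branch. The channel, step sizes, initialization, and weighting rule are illustrative assumptions, not the authors' exact algorithm.

```python
# Two-branch adaptive equalizer sketch for a binary (+/-1) signal.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)
channel = np.array([1.0, 0.4, 0.2])                    # assumed multipath channel
received = np.convolve(symbols, channel, mode="full")[: len(symbols)]
received += 0.01 * rng.standard_normal(len(symbols))

taps = 11
w_sig = np.zeros(taps); w_sig[taps // 2] = 1.0         # center-spike initialization
w_thr = np.zeros(taps); w_thr[taps // 2] = 1.0
mu = 0.01
p_sig = p_thr = 1.0                                    # running error powers
equalized = []

for n in range(taps, len(received)):
    x = received[n - taps:n][::-1]
    y_sig, y_thr = w_sig @ x, w_thr @ x
    lam = p_thr / (p_sig + p_thr)                      # larger weight for the more accurate branch
    equalized.append(lam * y_sig + (1.0 - lam) * y_thr)
    e_sig = np.tanh(y_sig) - y_sig                     # sigmoid-based error estimate
    e_thr = np.sign(y_thr) - y_thr                     # threshold (decision) error estimate
    p_sig = 0.99 * p_sig + 0.01 * e_sig**2
    p_thr = 0.99 * p_thr + 0.01 * e_thr**2
    w_sig += mu * e_sig * x                            # differential (per-branch) updates
    w_thr += mu * e_thr * x
```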

Design and Performance Evaluation of DGPS Based on Optimal and Sub-optimal Reference Point (Optimal 및 Sub-optimal 기준점을 사용한 DGPS 설계 및 성능평가)

  • 고광섭;홍성래;정세모
    • Journal of the Korea Institute of Information and Communication Engineering / v.2 no.3 / pp.343-352 / 1998
  • The use of DGPS enhances standalone GPS accuracy by removing the errors common to two or more receivers viewing the same satellites. The design of a DGPS system requires a precisely surveyed reference point from which the common errors can be computed to correct the pseudoranges of the user receivers. Providing a reference point of sufficient accuracy takes considerable time and cost: the parameters from the satellites are normally measured with a dedicated survey instrument system and then obtained by post-processing. The purpose of this study is to examine the bounds of the accuracy obtained from RTCM correction data transmitted by a simply designed DGPS system. In this paper, we design and evaluate a DGPS system based on an optimally surveyed reference point, as well as on a sub-optimal reference point obtained with a standalone GPS receiver. The results of the study show that the designed system may be applied to specific civilian and military marine activities.
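
The core DGPS computation can be sketched as follows: the reference station, whose coordinates are known from survey, forms a pseudorange correction (PRC) per satellite, and the user applies it before solving for position. The coordinates and pseudoranges in the sketch are made-up numbers, not data from the paper.

```python
# Simplified pseudorange-correction sketch for a single satellite.
import numpy as np

ref_ecef = np.array([-3042788.0, 4042034.0, 3859959.0])   # assumed surveyed reference point (m, ECEF)
sat_ecef = np.array([15600000.0, 7540000.0, 20140000.0])  # assumed satellite position (m, ECEF)
measured_pr_ref = 21119720.5                               # pseudorange measured at the reference (m)

geometric_range = np.linalg.norm(sat_ecef - ref_ecef)
prc = geometric_range - measured_pr_ref                    # correction broadcast via RTCM

measured_pr_user = 21119915.8                              # user's raw pseudorange (m)
corrected_pr_user = measured_pr_user + prc                 # common errors largely cancel
print(f"PRC = {prc:.1f} m, corrected user pseudorange = {corrected_pr_user:.1f} m")
```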

Dynamic Control of Learning Rate in the Improved Adaptive Gaussian Mixture Model for Background Subtraction (배경분리를 위한 개선된 적응적 가우시안 혼합모델에서의 동적 학습률 제어)

  • Kim, Young-Ju
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.366-369 / 2005
  • Background subtraction is mainly used for the real-time extraction and tracking of moving objects from image sequences. In outdoor environments there are many changing factors, such as gradually varying illumination, swaying trees, and suddenly appearing objects, which must be handled adaptively. Normally, a GMM (Gaussian Mixture Model) is used to subtract the background adaptively under the various changes in the scene, and adaptive GMMs that improve real-time performance have been proposed. For on-line background subtraction, this paper applies the improved adaptive GMM, which uses a small constant learning rate ${\alpha}$ and therefore cannot adapt quickly to suddenly moving objects. Accordingly, this paper proposes and evaluates a method for dynamically controlling ${\alpha}$ based on the adaptive selection of the number of component distributions and the global variance of the pixel values.
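
A simplified sketch of dynamically controlling the learning rate is shown below, using OpenCV's MOG2 background subtractor as a stand-in for the adaptive GMM. The rule that raises ${\alpha}$ when the global frame change is large is a simplified stand-in for the rule proposed in the paper, and the input file name is hypothetical.

```python
# Dynamic learning-rate control for GMM background subtraction (illustrative).
import cv2
import numpy as np

cap = cv2.VideoCapture("scene.avi")        # hypothetical input video
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

prev_gray = None
alpha = 0.005                              # small constant rate as the baseline

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        # Raise alpha when the global scene change is large (e.g. sudden motion),
        # keep it small when the scene is nearly static.
        change = np.mean(cv2.absdiff(gray, prev_gray)) / 255.0
        alpha = float(np.clip(0.005 + 0.5 * change, 0.005, 0.05))
    prev_gray = gray
    fg_mask = mog2.apply(frame, learningRate=alpha)

cap.release()
```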

Neural network based numerical model updating and verification for a short span concrete culvert bridge by incorporating Monte Carlo simulations

  • Lin, S.T.K.;Lu, Y.;Alamdari, M.M.;Khoa, N.L.D.
    • Structural Engineering and Mechanics / v.81 no.3 / pp.293-303 / 2022
  • As infrastructure ages and traffic loads increase, serious public concern has arisen for the well-being of bridges. Current health monitoring practice focuses on large-scale bridges rather than short-span bridges; however, it is critical that more attention be given to these behind-the-scenes bridges, for which the relevant information about construction methods and as-built properties is most likely missing. Additionally, since the condition of a bridge inevitably changes during service, due to weathering and deterioration, the material properties and boundary conditions will also have changed since construction. Therefore, it is not appropriate to continue using the design values of the bridge parameters when undertaking any analysis to evaluate bridge performance. It is imperative to update the finite element (FE) model used in the analysis so that it reflects the current structural condition. In this study, an FE model is established to simulate a concrete culvert bridge in New South Wales, Australia. That model, however, contains a number of parameter uncertainties that would compromise the accuracy of the analytical results. The model is therefore updated with a neural network (NN) optimisation algorithm incorporating Monte Carlo (MC) simulation to minimise the uncertainties in the parameters. The modal frequencies and strain responses produced by the updated FE model are compared with the frequencies and strain values measured on site by sensors. The outcome indicates that NN model updating incorporating MC simulation is a feasible and robust optimisation method for updating numerical models so as to minimise the difference between numerical models and their real-world counterparts.
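
The NN-plus-Monte-Carlo workflow can be sketched as: sample the uncertain parameters, run a (surrogate) model to obtain responses, train a network that maps responses back to parameters, and then feed in the measured responses. The surrogate model, parameter ranges, and network settings below are assumptions, not the bridge FE model of the paper.

```python
# Monte Carlo sampling + neural network inverse mapping for model updating (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def fe_model(params):
    """Toy surrogate: maps (stiffness E, boundary spring k) to [frequency, strain]."""
    E, k = params
    return np.array([np.sqrt(E) * (1 + 0.1 * k), 1.0 / (E * (1 + k))])

# Monte Carlo sampling of the uncertain parameters
samples = rng.uniform([20.0, 0.1], [40.0, 1.0], size=(2000, 2))
responses = np.array([fe_model(p) for p in samples])

# Train the inverse map: responses -> parameters
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(responses, samples)

measured = fe_model(np.array([31.0, 0.55]))            # pretend on-site measurement
updated_params = net.predict(measured.reshape(1, -1))[0]
print("updated parameters:", updated_params)
```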

Multi-Scale finite element investigations into the flexural behavior of lightweight concrete beams partially reinforced with steel fiber

  • Esmaeili, Jamshid;Ghaffarinia, Mahdi
    • Computers and Concrete / v.29 no.6 / pp.393-405 / 2022
  • Lightweight concrete is a superior material due to its light weight and high strength. There remain, however, significant gaps in engineering knowledge with regard to the shear failure of lightweight fiber-reinforced concrete beams. The main aim of the present study is to investigate the optimum usage of steel fibers in lightweight fiber-reinforced concrete (LWFRC). A multi-scale finite element model calibrated with experimental results is developed to study the effect of steel fibers on the mechanical properties of LWFRC beams. To decrease the amount of steel fibers, it is preferable to reinforce only the middle section of the LWFRC beams, where the flexural stresses are higher. For the numerical simulation, a multi-scale finite element model was developed in which the cement matrix was modeled as a homogeneous, uniform material and both the steel fibers and the lightweight coarse aggregates were randomly distributed within the matrix. For more realistic assumptions, the bonding between the fibers and the cement matrix was represented with a Cohesive Zone Model (CZM) whose parameters were determined using the model update method. Furthermore, the agreement between the Load-Crack Mouth Opening Displacement (CMOD) curves obtained from the numerical model and the experimental results of notched beams under center-point loading tests was investigated. After validating the finite element model against the experimental tests, the effects of the fiber volume fraction and of the length of the reinforced middle section on the flexural and residual strengths of LWFRC were studied. The results indicate that by using steel fibers only over a specified length of the concrete beam where the flexural stresses are high, considerable savings in steel fiber usage can be achieved. Reducing the length of the reinforced middle section from 50 to 30 cm in specimens containing 10 kg/m3 of steel fibers decreased the amount of steel fiber used by a factor of four, while only a 7% reduction in bearing capacity was observed. Therefore, the length of the reinforced middle section is an essential parameter for reducing fiber usage and achieving more affordable construction costs.
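
For illustration, a bilinear traction-separation law of the kind used in a cohesive zone model is sketched below; the strength and critical separations are placeholder values, not the calibrated CZM parameters of the paper.

```python
# Bilinear traction-separation law (illustrative CZM sketch).
import numpy as np

def bilinear_czm(delta, t_max=3.0, delta_0=0.01, delta_f=0.15):
    """Traction (MPa) vs. separation (mm): linear rise to t_max, then linear softening."""
    delta = np.asarray(delta, dtype=float)
    rising = t_max * delta / delta_0
    softening = t_max * (delta_f - delta) / (delta_f - delta_0)
    traction = np.where(delta <= delta_0, rising, softening)
    return np.clip(traction, 0.0, t_max)     # zero traction beyond full separation

print(bilinear_czm([0.005, 0.01, 0.08, 0.2]))
```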

Admittance Model-Based Nanodynamic Control of Diamond Turning Machine (어드미턴스 모델을 이용한 다이아몬드 터닝머시인의 초정밀진동제어)

  • Jeong, Sanghwa;Kim, Sangsuk
    • Journal of the Korean Society for Precision Engineering / v.13 no.10 / pp.154-160 / 1996
  • The control of diamond turning is usually achieved through laser-interferometer feedback of the slide position. The limitation of this control scheme is that the feedback signal does not account for the additional dynamics of the tool post and the material removal process. If the tool post is rigid and the material removal process is relatively static, such a non-collocated position feedback control scheme may suffice. However, as the accuracy requirements get tighter and the desired surface contours become more complex, the need for direct tool-tip sensing becomes inevitable. The physical constraints of the machining process prohibit any reasonable implementation of a tool-tip motion measurement. It is proposed that the measured force normal to the face of the workpiece can be filtered through an appropriate admittance transfer function to yield an estimate of the depth of cut. This estimate can be compared with the desired depth of cut to generate an adjustment control action in addition to the position feedback control. In this work, the design methodology for admittance model-based control combined with a conventional controller is presented. The recursive least-squares algorithm with a forgetting factor is proposed to identify the parameters and update the cutting-process model in real time. The normal cutting forces are measured with a precision dynamometer to identify the cutting dynamics in the real diamond turning process. Simulation results based on the parameter estimation of the cutting dynamics and the admittance model-based nanodynamic control scheme are shown.
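
A minimal sketch of recursive least-squares with a forgetting factor, the estimator named above, is given below; the second-order regression model for the cutting force and the signals are assumptions for illustration, not the paper's identified model.

```python
# Recursive least-squares (RLS) with forgetting factor (illustrative).
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One RLS step with forgetting factor lam; theta and phi are 1-D arrays."""
    K = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + K * (y - phi @ theta)      # parameter update
    P = (P - np.outer(K, phi) @ P) / lam       # covariance update with forgetting
    return theta, P

n_params = 3
theta = np.zeros(n_params)
P = 1e3 * np.eye(n_params)

# Assumed model: force[k] ~ a0*force[k-1] + a1*force[k-2] + b0*depth[k]
force = np.sin(0.05 * np.arange(500)) + 0.01 * np.random.default_rng(2).standard_normal(500)
depth = 0.5 * np.ones(500)

for k in range(2, len(force)):
    phi = np.array([force[k - 1], force[k - 2], depth[k]])
    theta, P = rls_update(theta, P, phi, force[k])
print("identified parameters:", theta)
```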

Numerical studies of information about elastic parameter sets in non-linear elastic wavefield inversion schemes (비선형 탄성파 파동장 역산 방법에서 탄성파 변수 세트에 관한 정보의 수치적 연구)

  • Sakai, Akio
    • Geophysics and Geophysical Exploration / v.10 no.1 / pp.1-18 / 2007
  • Non-linear elastic wavefield inversion is a powerful method for estimating the elastic parameters and physical constraints that determine subsurface rock properties. Here, I introduce six elastic-wave velocity models obtained by reconstructing elastic-wave velocity variations from real data and a 2D elastic-wave velocity model. The information in reflection seismic data is often decoupled into short- and long-wavelength components. A local search method has difficulty estimating the longer-wavelength velocity if the starting model is far from the true model, so the source frequencies are changed from lower to higher bands (as in the 'frequency-cascade' scheme) to estimate the model elastic parameters. The elastic parameters are inverted at each inversion step ('simultaneous mode') with a starting model of linear P- and S-wave velocity trends with depth. The elastic parameters are also derived by inversion in three other modes: using a P- and S-wave velocity basis ('$V_P$-$V_S$' mode), a P-impedance and Poisson's ratio basis ('$I_P$-Poisson' mode), and a P- and S-impedance basis ('$I_P$-$I_S$' mode). Density values are updated at each elastic inversion step under three assumptions in each mode. Evaluating the accuracy of the inversion for each parameter set on the elastic models leads to the conclusion that there is no significant difference between the inversion results for the $V_P$-$V_S$ mode and the $I_P$-Poisson mode; the same conclusion is expected for the $I_P$-$I_S$ mode. This gives us a sound basis for full-wavelength elastic wavefield inversion.
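
The standard conversions between the parameter sets compared here ($V_P$-$V_S$, $I_P$-Poisson, $I_P$-$I_S$) can be written as a short sketch; the sample velocity and density values are arbitrary, not from the paper's models.

```python
# Standard elastic parameter-set conversions (illustrative values).
def to_impedance_poisson(vp, vs, rho):
    ip = rho * vp                                            # P-impedance
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))     # Poisson's ratio
    return ip, nu

def to_impedances(vp, vs, rho):
    return rho * vp, rho * vs                                # Ip, Is

vp, vs, rho = 3000.0, 1600.0, 2300.0                         # m/s, m/s, kg/m^3 (arbitrary)
print(to_impedance_poisson(vp, vs, rho))
print(to_impedances(vp, vs, rho))
```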

Evaluation of Soil Parameters Using Adaptive Management Technique (적응형 관리 기법을 이용한 지반 물성 값의 평가)

  • Koo, Bonwhee;Kim, Taesik
    • Journal of the Korean GEO-environmental Society / v.18 no.2 / pp.47-51 / 2017
  • In this study, the inverse-analysis optimization algorithm at the core of the adaptive management technique was adopted to update soil engineering properties based on the ground response observed during construction. The adaptive management technique is a framework in which construction and design procedures are adjusted based on observations and measurements made as construction proceeds. To evaluate the performance of the technique, numerical simulations of triaxial tests and of a synthetic deep excavation were conducted with the Hardening Soil model. To carry out the analysis effectively, the most influential of the model parameters were selected based on a composite scaled sensitivity analysis. The results of undrained triaxial tests performed on soft Chicago clays were used for the parameter calibration. The synthetic deep excavation simulations were conducted under the assumption that the soil engineering parameters obtained from the triaxial simulations represent the actual field condition; these values were used as the reference values. The observation used in the synthetic deep excavation simulations was the horizontal displacement of the support wall, which has the highest composite scaled sensitivity among the possible observations. It was found that the horizontal displacements of the support wall computed with various initial soil properties converged to the reference displacement when the adaptive management technique was used.
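
A sketch of the composite scaled sensitivity computation used to rank parameters is given below, with a toy response model standing in for the Hardening Soil finite element simulation; the parameter values and observation weights are assumptions.

```python
# Composite scaled sensitivity (CSS) via finite differences:
# CSS_j = sqrt(mean_i((dy_i/db_j * b_j * w_i**0.5)**2))
import numpy as np

def model(params):
    """Toy response vector (e.g. wall deflections at several depths)."""
    E50, phi = params
    depths = np.linspace(1.0, 10.0, 5)
    return depths * phi / E50

def composite_scaled_sensitivity(params, weights, rel_step=0.01):
    params = np.asarray(params, dtype=float)
    y0 = model(params)
    css = np.zeros(len(params))
    for j, b in enumerate(params):
        perturbed = params.copy()
        perturbed[j] = b * (1 + rel_step)
        dy_db = (model(perturbed) - y0) / (b * rel_step)     # finite-difference sensitivity
        scaled = dy_db * b * np.sqrt(weights)
        css[j] = np.sqrt(np.mean(scaled**2))
    return css

params = np.array([30000.0, 35.0])     # assumed E50 (kPa) and friction angle (deg)
weights = np.ones(5)
print("composite scaled sensitivities:", composite_scaled_sensitivity(params, weights))
```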