• Title/Abstract/Keyword: Sampling Step Size

Search results: 45

자동차 현가장치 부품에 대한 신뢰성 기반 최적설계에 관한 연구 (A Study for the Reliability Based Design Optimization of the Automobile Suspension Part)

  • 이종홍;유정훈;임홍재
    • 한국자동차공학회논문집
    • /
    • Vol. 12 No. 2
    • /
    • pp.123-130
    • /
    • 2004
  • The automobile suspension system is composed of parts that affect vehicle performance attributes such as ride quality, handling characteristics, straight-line stability, and steering effort. Moreover, using finite element analysis can reduce the cost of the initial design step. In the design of a suspension system, system vibration and structural rigidity must usually be considered together to satisfy dynamic and static requirements simultaneously. In this paper, we consider the weight reduction and the increase of the first eigenfrequency of a suspension part, the upper control arm, using topology optimization and size optimization. First, we obtain an initial design that maximizes the first eigenfrequency using topology optimization. Then, we apply a multi-objective parameter optimization method to achieve both the weight reduction and the increase of the first eigenfrequency. The design variables vary during the multi-objective optimization process, so we obtain deterministic values of the design variables that not only satisfy the variation limits but also optimize the two design objectives at the same time. Finally, we perform reliability-based design optimization of the upper control arm using the Monte Carlo method with importance sampling, obtaining an optimal design with 98% reliability.
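The abstract does not give implementation details, but its final step, estimating reliability by Monte Carlo simulation with importance sampling, can be illustrated generically. The Python sketch below uses a hypothetical two-variable limit-state function g(x) in place of the finite-element model of the upper control arm; the shifted sampling density, the sample count, and the design-variable statistics are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Hypothetical limit-state function: failure when g(x) < 0.
    # Stands in for the FE-based constraint (e.g., first eigenfrequency
    # minus its required lower bound); not the paper's actual model.
    return 3.0 - x[..., 0] - 0.5 * x[..., 1]

mu, sigma = np.array([0.0, 0.0]), np.array([1.0, 1.0])  # nominal design-variable statistics
shift = np.array([2.0, 2.0])                             # importance density centred near the failure region

n = 100_000
x = rng.normal(mu + shift, sigma, size=(n, 2))           # draw from the importance density h

# importance weights w = f(x) / h(x) for independent normals, computed in log form
log_w = (((x - mu - shift) ** 2 - (x - mu) ** 2) / (2 * sigma ** 2)).sum(axis=1)
w = np.exp(log_w)

fail = g(x) < 0.0
pf = np.mean(fail * w)                                   # importance-sampling estimate of P(failure)
print(f"estimated failure probability {pf:.4e}, reliability {1.0 - pf:.4f}")
```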

A Fast Volume Rendering Algorithm for Virtual Endoscopy

  • Ra Jong Beom;Kim Sang Hun;Kwon Sung Min
    • 대한의용생체공학회:의공학회지
    • /
    • Vol. 26 No. 1
    • /
    • pp.23-30
    • /
    • 2005
  • 3D virtual endoscopy has been used as an alternative non-invasive procedure for visualization of hollow organs. However, due to its computational complexity, it is a time-consuming procedure. In this paper, we propose a fast volume rendering algorithm based on perspective ray casting for virtual endoscopy. As a pre-processing step, the algorithm divides a volume into hierarchical blocks and classifies them as opaque or transparent. Then, in the first step, we perform ray casting only for sub-sampled pixels on the image plane and determine their pixel values and depth information. In the next step, by reducing the sub-sampling factor by half, we repeat ray casting for newly added pixels and determine their pixel values and depth information. Here, the previously obtained depth information is utilized to reduce the processing time. This step is performed recursively until a full-size rendering image is acquired. Experiments conducted on a PC show that the proposed algorithm can reduce the rendering time by 70-80% for bronchus and colon endoscopy, compared with the brute-force ray casting scheme. Using the proposed algorithm, interactive volume rendering becomes more realizable in a PC environment without any special hardware.
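The coarse-to-fine refinement schedule described above can be sketched independently of the actual ray caster. In the Python fragment below, cast_ray is a hypothetical placeholder for perspective ray casting, and the depth hint taken from a coarser neighbour stands in for the depth reuse the paper describes; this is an illustration of the schedule only, not the authors' implementation.

```python
import numpy as np

def cast_ray(x, y, depth_hint=None):
    """Placeholder for perspective ray casting of one pixel.
    A depth hint from a coarser pass lets a real implementation start
    the ray march near the first opaque sample."""
    return float((x + y) % 256), float((x * y) % 100)   # hypothetical (value, depth)

def progressive_render(width, height, initial_step=8):
    image = np.zeros((height, width))
    depth = np.full((height, width), np.nan)
    step = initial_step
    while step >= 1:
        for y in range(0, height, step):
            for x in range(0, width, step):
                if step < initial_step and not np.isnan(depth[y, x]):
                    continue                              # already rendered in a coarser pass
                # nearest pixel from the previous (coarser) grid provides the depth hint
                yy = (y // (step * 2)) * step * 2
                xx = (x // (step * 2)) * step * 2
                hint = depth[yy, xx] if not np.isnan(depth[yy, xx]) else None
                image[y, x], depth[y, x] = cast_ray(x, y, hint)
        step //= 2                                        # halve the sub-sampling factor each pass
    return image

img = progressive_render(64, 64)
```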

유통과학분야에서 탐색적 연구를 위한 요인분석 (Factor Analysis for Exploratory Research in the Distribution Science Field)

  • 임명성
    • 유통과학연구
    • /
    • Vol. 13 No. 9
    • /
    • pp.103-112
    • /
    • 2015
  • Purpose - This paper aims to provide a step-by-step approach to factor analytic procedures, such as principal component analysis (PCA) and exploratory factor analysis (EFA), and to offer a guideline for factor analysis. Authors have argued that the results of PCA and EFA are substantially similar. Additionally, they assert that PCA is a more appropriate technique for factor analysis because PCA produces easily interpreted results that are likely to be the basis of better decisions. For these reasons, many researchers have used PCA instead of EFA. However, these techniques are clearly different. PCA should be used for data reduction. On the other hand, EFA has been tailored to identify an underlying factor structure, a set of latent factors that cause the measured variables to covary. Thus, a guideline and procedures for factor analysis are needed. To date, however, these two techniques have been indiscriminately misused. Research design, data, and methodology - This research conducted a literature review. We summarized the meaningful and consistent arguments and drew up guidelines and suggested procedures for rigorous EFA. Results - PCA can be used instead of common factor analysis when all measured variables have high communality. However, common factor analysis is recommended for EFA. First, researchers should evaluate the sample size and check for sampling adequacy before conducting factor analysis. If these conditions are not satisfied, the next steps cannot be followed. The sample size must be at least 100, with communality above 0.5 and a subject-to-item ratio of at least 5:1, with a minimum of five items in the EFA. Next, Bartlett's sphericity test and the Kaiser-Meyer-Olkin (KMO) measure should be assessed for sampling adequacy. The chi-square value for Bartlett's test should be significant, and a KMO of more than 0.8 is recommended. The next step is to conduct the factor analysis itself, which is composed of three stages. The first stage determines an extraction technique; generally, maximum likelihood (ML) or principal axis factoring (PAF) gives the best results, and the choice between the two hinges heavily on data normality: ML requires normally distributed data, whereas PAF does not. The second stage determines the number of factors to retain in the EFA; the best approach is to apply three methods together: eigenvalues greater than 1.0, the scree plot test, and the variance extracted. The last stage is to select one of two rotation methods, orthogonal or oblique. If the research suggests that the factors are correlated with each other, the oblique method should be selected for factor rotation, because it allows the factors to correlate; otherwise, the orthogonal method can be used. Conclusions - Recommendations are offered for the best factor analytic practice in empirical research.
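Two of the preliminary checks recommended above, Bartlett's sphericity test and the eigenvalue-greater-than-1 (Kaiser) rule, are easy to compute directly. The Python sketch below does so with NumPy/SciPy on a synthetic survey matrix; the data, item count, and 200-respondent sample are illustrative stand-ins, and the KMO measure and the sample-size/ratio checks are left out for brevity.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(X):
    """Bartlett's test that the correlation matrix is an identity matrix.
    X: (n_samples, n_variables) data matrix."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df, stats.chi2.sf(chi2, df)

def kaiser_criterion(X):
    """Number of factors with eigenvalue > 1 in the correlation matrix."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return int(np.sum(eigvals > 1.0)), np.sort(eigvals)[::-1]

# toy data: 200 respondents, 10 items driven by 2 latent factors (replace with real survey data)
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.5 * rng.normal(size=(200, 10))

chi2, df, p_value = bartlett_sphericity(X)
n_factors, eigvals = kaiser_criterion(X)
print(f"Bartlett chi2={chi2:.1f}, df={df:.0f}, p={p_value:.3g}; factors to retain: {n_factors}")
```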

큰 유전율을 가지는 유전체의 전자계 해석을 위한 FVTD-LTS 기법 (FVTD-LTS Method for Electromagnetic Field Analysis by Dielectric with large Permittivity)

  • 윤광렬;채용웅
    • 대한전기학회논문지:전기물성ㆍ응용부문C
    • /
    • Vol. 55 No. 6
    • /
    • pp.334-338
    • /
    • 2006
  • The finite volume time domain (FVTD) method gives accurate results for the calculation of electromagnetic wave propagation, but the number of sampling points per wavelength must be increased when more accurate numerical results are required, and the method then demands a large amount of computer memory. In this paper, we propose a modified FVTD method that employs local time subdivision. With the local time-subdivided FVTD (FVTD-LTS) method, the spatial grid can be divided with a larger step size, which reduces computation time and memory requirements. To validate the proposed method, several numerical examples are presented, and the results show that the proposed method yields reasonable solutions.
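The opening point, that a high-permittivity dielectric forces finer spatial sampling, follows from the wavelength shrinking by 1/√(εrμr) inside the material. The tiny Python calculation below illustrates that grid-resolution rule only; the frequency, the 20-cells-per-wavelength target, and the permittivity values are arbitrary examples, and none of this reproduces the FVTD-LTS scheme itself.

```python
from math import sqrt

c0 = 299_792_458.0  # free-space speed of light [m/s]

def required_cell_size(freq_hz, eps_r, mu_r=1.0, cells_per_wavelength=20):
    """Spatial step needed to keep a fixed number of samples per wavelength
    inside a dielectric (a generic grid-resolution rule, not FVTD-LTS itself)."""
    wavelength = c0 / (freq_hz * sqrt(eps_r * mu_r))
    return wavelength / cells_per_wavelength

f = 2.45e9  # example frequency
for eps_r in (1.0, 10.0, 80.0):
    print(f"eps_r={eps_r:5.1f} -> cell size {required_cell_size(f, eps_r) * 1e3:.3f} mm")
```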

중추성 작용 약물의 뇌파 효과의 정량화를 위한 스펙트럼 분석에 필요한 기본적 조건의 검토 (Basic Requirements for Spectrum Analysis of Electroencephalographic Effects of Central Acting Drugs)

  • 임선희;권지숙;김기민;박상진;정성훈;이만기
    • Biomolecules & Therapeutics
    • /
    • Vol. 8 No. 1
    • /
    • pp.63-72
    • /
    • 2000
  • We intended to show some basic requirements for spectrum analysis of the electroencephalogram (EEG) by visualizing how the results differ according to different values of the analysis parameters. Spectrum analysis is the most popular technique applied for the quantitative analysis of electroencephalographic signals. Each step from signal acquisition through spectrum analysis to presentation of parameters was examined while providing different values of the parameters. The steps are: (1) signal acquisition; (2) spectrum analysis; (3) parameter extraction; and (4) presentation of results. In the signal acquisition step, filtering and amplification of the signal should be considered, and the sampling rate for analog-to-digital conversion must be at least twice the highest frequency component of the signal. For the spectrum analysis, the length of the signal, or epoch size, transformed to a function of frequency by the Fourier transform is important. The windowing method applied as pre-processing before the analysis should be considered to reduce the leakage problem. In the parameter extraction step, data reduction has to be considered so that statistical comparisons can be made over an appropriate number of parameters; generally, the log of the power of each band is derived from the spectrum. For good visualization and quantitative evaluation, the time course of the parameters is presented in a chronospectrogram.
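The chain of choices listed above (a sampling rate above twice the highest frequency, an epoch length that fixes the frequency resolution, a window to limit leakage, and log band power as the reduced parameter) can be sketched in a few lines of Python with SciPy. The signal below is a synthetic 10 Hz tone plus noise standing in for a recorded EEG epoch; the sampling rate, epoch length, and band limits are common but illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.signal import welch

fs = 256.0       # sampling rate [Hz]; must exceed twice the highest EEG frequency of interest
epoch_sec = 4.0  # epoch length sets the frequency resolution (1 / epoch = 0.25 Hz here)

# toy EEG-like epoch: 10 Hz alpha rhythm plus noise
t = np.arange(0, epoch_sec, 1.0 / fs)
x = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.default_rng(0).normal(size=t.size)

# Hann window reduces spectral leakage before the Fourier transform
f, pxx = welch(x, fs=fs, window="hann", nperseg=int(fs * epoch_sec))

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    mask = (f >= lo) & (f < hi)
    band_power = np.trapz(pxx[mask], f[mask])  # absolute power in the band
    print(f"{name:>5}: log10 power = {np.log10(band_power):.2f}")
```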


A 12b 100 MS/s Three-Step Hybrid Pipeline ADC Based on Time-Interleaved SAR ADCs

  • Park, Jun-Sang;An, Tai-Ji;Cho, Suk-Hee;Kim, Yong-Min;Ahn, Gil-Cho;Roh, Ji-Hyun;Lee, Mun-Kyo;Nah, Sun-Phil;Lee, Seung-Hoon
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • Vol. 14 No. 2
    • /
    • pp.189-197
    • /
    • 2014
  • This work proposes a 12b 100 MS/s 0.11 μm CMOS three-step hybrid pipeline ADC for high-speed communication and mobile display systems requiring high resolution, low power, and small size. The first stage based on time-interleaved dual-channel SAR ADCs properly handles the Nyquist-rate input without a dedicated SHA. An input sampling clock for each SAR ADC is synchronized to a reference clock to minimize a sampling-time mismatch between the channels. Only one residue amplifier is employed and shared in the proposed ADC for the first-stage SAR ADCs as well as the MDAC of back-end pipeline stages. The shared amplifier, in particular, reduces performance degradation caused by offset and gain mismatches between two channels of the SAR ADCs. Two separate reference voltages relieve a reference disturbance due to the different operating frequencies of the front-end SAR ADCs and the back-end pipeline stages. The prototype ADC in a 0.11 μm CMOS shows the measured DNL and INL within 0.38 LSB and 1.21 LSB, respectively. The ADC occupies an active die area of 1.34 mm² and consumes 25.3 mW with a maximum SNDR and SFDR of 60.2 dB and 69.5 dB, respectively, at 1.1 V and 100 MS/s.
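One design point in the abstract, synchronizing each SAR channel's sampling clock to minimize sampling-time mismatch, can be motivated by a short behavioral simulation: a timing skew between two interleaved channels produces an image spur near fs/2 − fin. The Python sketch below models only that effect, with an ideal sine input and a hypothetical 20 ps skew; it is not a model of the reported circuit.

```python
import numpy as np

fs = 100e6      # aggregate sampling rate (two interleaved channels at fs/2 each)
fin = 12.3e6    # input tone frequency
n = 4096
skew = 20e-12   # hypothetical sampling-time mismatch of the second channel [s]

t = np.arange(n) / fs
t_skewed = t.copy()
t_skewed[1::2] += skew  # odd samples are taken by the mismatched channel

x = np.sin(2 * np.pi * fin * t_skewed)
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
spec_db = 20 * np.log10(spec / spec.max() + 1e-12)
freqs = np.fft.rfftfreq(n, 1 / fs)

# the interleaving image spur appears near fs/2 - fin
spur_bin = np.argmin(np.abs(freqs - (fs / 2 - fin)))
print(f"image spur near {freqs[spur_bin] / 1e6:.2f} MHz: "
      f"{spec_db[spur_bin - 2:spur_bin + 3].max():.1f} dBc")
```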

Power of Variance Component Linkage Analysis to Identify Quantitative Trait Locus in Chickens

  • Park, Hee-Bok;Heo, Kang-Nyeong;Kang, Bo-Seok;Jo, Cheorun;Lee, Jun Heon
    • Journal of Animal Science and Technology
    • /
    • Vol. 55 No. 2
    • /
    • pp.103-107
    • /
    • 2013
  • A crucial first step in the planning of any scientific experiment is to evaluate an appropriate sample size that permits sufficient statistical power to detect the desired effect. In this study, we investigated the optimal sample size for quantitative trait locus (QTL) linkage analysis of simple random sibship samples in pedigreed chicken populations, under the variance component framework implemented in the Genetic Power Calculator program. Using the program, we computed the statistical power achieved at given sample sizes in variance component linkage analysis of random sibship data. For simplicity, an additive model was assumed. Power calculations were performed to relate sample size to the heritability attributable to a QTL. Under the various assumptions, the comparative power curves indicated that the power to detect a QTL with the variance component method is strongly affected by the effect size of the QTL; hence, more power is achievable for a QTL with a larger effect. In addition, a marked improvement in power could be obtained by increasing the sibship size. Thus, the use of chickens is advantageous regarding the sampling-unit issue, since a desirable sibship size can be obtained more easily than with other domestic species.
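The qualitative conclusion, that power rises with both the QTL effect and the number of sibships, can be illustrated with a generic likelihood-ratio power calculation in which the total noncentrality grows linearly with the number of families. The Python sketch below uses SciPy's noncentral chi-square; the per-family noncentrality values, significance level, and single degree of freedom are illustrative assumptions and do not reproduce the Genetic Power Calculator's exact model.

```python
from scipy.stats import chi2, ncx2

def linkage_power(ncp_per_family, n_families, alpha=1e-3, df=1):
    """Approximate power of a likelihood-ratio linkage test:
    the total noncentrality grows linearly with the number of sibships."""
    crit = chi2.ppf(1 - alpha, df)
    return ncx2.sf(crit, df, ncp_per_family * n_families)

# hypothetical per-sibship noncentrality values standing in for QTL effect sizes
for ncp in (0.01, 0.03, 0.05):
    for n in (200, 500, 1000):
        print(f"ncp/family={ncp:.2f}, families={n:5d} -> power={linkage_power(ncp, n):.2f}")
```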

Hue-assisted automatic registration of color point clouds

  • Men, Hao;Pochiraju, Kishore
    • Journal of Computational Design and Engineering
    • /
    • Vol. 1 No. 4
    • /
    • pp.223-232
    • /
    • 2014
  • This paper describes a variant of the extended Gaussian image based registration algorithm for point clouds with surface color information. The method correlates the distributions of surface normals for rotational alignment and grid occupancy for translational alignment, with hue filters applied during the construction of the surface normal histograms and occupancy grids. In this method, the size of the point cloud is reduced with a hue-based down-sampling that is independent of the point sample density or local geometry. Experimental results show that use of the hue filters increases the registration speed and improves the registration accuracy. Coarse rigid transformations determined in this step enable fine alignment with dense, unfiltered point clouds or with Iterative Closest Point (ICP) alignment techniques.
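Of the steps above, the hue-based down-sampling is the easiest to sketch in isolation: bin points by hue and cap the number kept per bin, so the reduction does not depend on point density or local geometry. The Python fragment below is such a sketch on random data; the bin count, per-bin cap, and the use of matplotlib's rgb_to_hsv are choices of this illustration, not details from the paper.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hue_downsample(points, colors, n_bins=36, max_per_bin=500, seed=0):
    """Keep at most max_per_bin points from each hue bin.
    points: (N, 3) xyz coordinates; colors: (N, 3) RGB values in [0, 1]."""
    rng = np.random.default_rng(seed)
    hue = rgb_to_hsv(colors)[:, 0]                          # hue in [0, 1)
    bins = np.minimum((hue * n_bins).astype(int), n_bins - 1)
    keep = []
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        if idx.size > max_per_bin:
            idx = rng.choice(idx, size=max_per_bin, replace=False)
        keep.append(idx)
    keep = np.concatenate(keep)
    return points[keep], colors[keep]

# toy cloud: 100k random points with random colors
pts = np.random.default_rng(1).uniform(size=(100_000, 3))
cols = np.random.default_rng(2).uniform(size=(100_000, 3))
small_pts, small_cols = hue_downsample(pts, cols)
print(small_pts.shape)
```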

화염법을 이용한 Pt/C 촉매 제조 (Pt Coating on Flame-Generated Carbon Particles)

  • 최인대;이동근
    • 대한기계학회논문집B
    • /
    • Vol. 33 No. 2
    • /
    • pp.116-123
    • /
    • 2009
  • Carbon black, activated carbon, and carbon nanotubes have been used as support materials for the precious-metal catalysts used in fuel cell electrodes. A one-step flame synthesis method is used to coat 2-5 nm Pt dots on flame-generated carbon particles. By adjusting the flame temperature, gas flow rates, and residence time of particles in the flame, we can obtain Pt/C nano catalyst-support composite particles. Additional injection of hydrogen gas facilitates pyrolysis of the Pt precursor in the flame. The size of the as-incepted Pt dots increases along the flame due to the longer residence time and sintering in the high-temperature flame. The surface coverage and dispersion of the Pt dots vary at different sampling heights, as confirmed by transmission electron microscopy (TEM), energy-dispersive spectroscopy (EDS), and X-ray diffraction (XRD). The crystallinity and surface bonding groups of the carbon are investigated through X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy.

정착시간 최소화 기법을 적용한 고속 CMOS A/D 변환기 설계 (A High-Speed CMOS A/D Converter Using an Acquisition-Time Minimization Technique)

  • 전병열;전영득;이승훈
    • 전자공학회논문지C
    • /
    • Vol. 36C No. 5
    • /
    • pp.57-66
    • /
    • 1999
  • In this paper, we propose a 12-bit 50 MHz CMOS A/D converter (ADC) that applies a settling-time minimization technique for high-speed signal sampling at the 50 MHz level. The proposed ADC was designed and laid out in a 0.35 μm double-poly five-metal n-well CMOS process, and a multi-stage pipelined architecture was adopted in consideration of the speed, resolution, and area requirements of the target system. In conventional pipelined ADCs, the circuit block that limits the operating speed is the residue amplifier; the proposed settling-time minimization technique shortens the settling time and minimizes irregularities in the output signal by controlling the operating current of the residue amplifier. Simulation at a 3 V supply voltage with a 50 MHz clock shows that the entire ADC, including the input/output stages, consumes 197 mW, and the total chip area including the I/O pads is 3.2 mm × 3.6 mm.
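As a rough illustration of why the residue amplifier's settling behavior limits the conversion rate, the generic first-order calculation below relates the 12-bit accuracy target and the 50 MHz clock to the amplifier time constant: settling to within half an LSB needs about (N+1)·ln 2 time constants inside half a clock period. The single-pole model, the half-period budget, and the derived bandwidth figure are textbook-style assumptions, not numbers from the paper.

```python
from math import log, pi

n_bits = 12
f_clk = 50e6
t_settle_budget = 0.5 / f_clk     # assume the amplification phase lasts half a clock period (10 ns)

# first-order settling: error = exp(-t/tau) must drop below 0.5 LSB = 2^-(N+1)
n_tau = (n_bits + 1) * log(2)     # about 9 time constants
tau_max = t_settle_budget / n_tau
f_3db_min = 1 / (2 * pi * tau_max)  # minimum closed-loop bandwidth of the residue amplifier

print(f"need {n_tau:.1f} time constants -> tau <= {tau_max * 1e9:.2f} ns, "
      f"closed-loop bandwidth >= {f_3db_min / 1e6:.0f} MHz")
```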
