• Title/Summary/Keyword: Square Root

Search Result 2,665, Processing Time 0.035 seconds

Effects of Scale Ratio on Flow Characteristics in Moonpool (축척비가 문풀 내부 유동 특성에 미치는 영향)

  • Lee, Sang Bong
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.22 no.1
    • /
    • pp.118-122
    • /
    • 2016
  • When the geometric size of a moonpool and the inflow velocity are determined based on Froude-number similarity, the Reynolds number depends on the scale ratio of the moonpool geometry. This means that different characteristics of flow fluctuations in the moonpool can be observed depending on the scale ratio even though the Froude number is the same. In the present study, two-dimensional numerical simulations were performed to investigate the influence of scale ratios on the flow characteristics inside the moonpool. The inflow velocity at several scale ratios was determined to keep the Froude number constant. A periodic response was observed in a small moonpool, while a large moonpool showed complicated fluctuations with various amplitudes and frequencies, which made it difficult to distinguish the statistically steady-state response from the temporal responses. Froude-number similarity gave rise to a spectral characteristic inversely proportional to the square root of the scale ratio ($f_{0.5} \approx \sqrt{2}\,f_1 \approx 2f_{2.0}$), but a low-frequency occurrence of strong vortices ($f_{2.0}=0.07$) observed inside the large moonpool depended on the scale ratio.
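The frequency relation reported in the abstract follows directly from Froude similarity; a minimal numerical sketch (function names are illustrative, not from the paper):

```python
import math

def froude_scaled_inflow(u_full, scale):
    """Inflow velocity that keeps the Froude number Fr = U / sqrt(g*L)
    constant when all lengths are scaled by `scale`:
    velocity scales with sqrt(scale)."""
    return u_full * math.sqrt(scale)

def scaled_frequency(f_full, scale):
    """Under Froude similarity time scales with sqrt(scale), so a
    characteristic fluctuation frequency scales with 1/sqrt(scale)."""
    return f_full / math.sqrt(scale)

# Reproduces the reported spectral relation f_0.5 ≈ sqrt(2)*f_1 ≈ 2*f_2.0
f_half = scaled_frequency(1.0, 0.5)    # sqrt(2) times the full-scale value
f_double = scaled_frequency(1.0, 2.0)  # 1/sqrt(2) times the full-scale value
```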

Developing the Accurate Method of Test Data Assessment with Changing Reliability Growth Rate and the Effect Evaluation for Complex and Repairable Products

  • So, Young-Kug;Ryu, Byeong-Jin
    • Journal of Applied Reliability
    • /
    • v.15 no.2
    • /
    • pp.90-100
    • /
    • 2015
  • The reliability growth rate (the slope of the reliability growth curve) either remains constant or changes during reliability growth testing, and the changing case is very common. Reasons for a changing growth rate are that the failures do not follow the NHPP (Non-Homogeneous Poisson Process), and that solutions implemented during the test introduce other problems or do not permanently remove all root causes. If the change is large, the goodness of fit (GOF) of the reliability growth curve to the test data will be very low, reducing the accuracy of the assessment result. In this research, we use the Duane model and the AMSAA model to assess test data and project the reliability level of complex and repairable systems such as construction equipment and vehicles. When the reliability growth rate does not change, it is reasonable for a reliability engineer to apply the original Duane model (1964) and the Crow-AMSAA model (1975) for assessment and projection. When the reliability growth rate changes, however, a method is needed to increase the GOF of the reliability growth curves to the test data. To increase the GOF, a proper parameter calculation method must be found for the reliability growth models of interest that is applicable to a changing growth rate. Since the Duane and AMSAA models are influenced more strongly by the initial test (or failure) data than by the latest data, both models are limited in reflecting the latest test data, which are more important for accurate assessment, especially when the reliability growth rate changes. The main objective of this research is to find a parameter calculation method that reflects the latest test data when the reliability growth rate changes.
In my experience of vehicle and construction equipment development over 18 years, more than 90% of development cases show such changes during developmental testing. The objective of this research was therefore to develop a new assessment method and process that increases the GOF level when the reliability growth rate changes, contributing to more accurate assessment and projection results. We also developed a new GOF evaluation method applicable to both the Duane and AMSAA models, making it possible to compare the models and check the effectiveness of the new parameter calculation methods in any situation of interest. These research results can reduce decision errors in development process and business control through more accurate assessment and projection.
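The Duane model mentioned above fits cumulative MTBF against cumulative test time as a straight line on log-log axes; a minimal sketch with synthetic data (the data and the simple OLS fit are illustrative, not the paper's proposed parameter calculation method):

```python
import math

def fit_duane(times, failures):
    """Least-squares fit of the Duane model on log-log scale:
    log(cumulative MTBF) = b + alpha * log(t).
    `times` are cumulative test times, `failures` are cumulative failure
    counts. Returns (alpha, b); alpha is the reliability growth rate."""
    xs = [math.log(t) for t in times]
    ys = [math.log(t / n) for t, n in zip(times, failures)]  # cumulative MTBF
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return alpha, my - alpha * mx

# Synthetic data with a roughly constant growth rate alpha ≈ 0.3:
# N(t) ~ lambda * t^(1-alpha), illustrative only
times = [100, 300, 700, 1500, 3000]
failures = [round(0.5 * t ** 0.7) for t in times]
alpha, b = fit_duane(times, failures)
```

When the growth rate changes mid-test, a single straight line like this fits poorly, which is exactly the low-GOF situation the paper addresses.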

Calibration of Portable Particulate Matter-Monitoring Device using Web Query and Machine Learning

  • Loh, Byoung Gook;Choi, Gi Heung
    • Safety and Health at Work
    • /
    • v.10 no.4
    • /
    • pp.452-460
    • /
    • 2019
  • Background: Monitoring and control of PM2.5 are recognized as key to addressing the health issues attributed to PM2.5. The availability of low-cost PM2.5 sensors has made it possible to introduce a number of portable PM2.5 monitors based on light scattering to the consumer market at an affordable price. The accuracy of light scattering-based PM2.5 monitors depends significantly on the method of calibration. A static calibration curve is the most popular calibration method for low-cost PM2.5 sensors, particularly because of its ease of application; its drawback, however, is a lack of accuracy. Methods: This study discusses the calibration of a low-cost PM2.5-monitoring device (PMD) to improve its accuracy and reliability for practical use. The proposed method is based on constructing a PM2.5 sensor network using the Message Queuing Telemetry Transport (MQTT) protocol and a web query of reference measurement data available at a government-authorized PM monitoring station (GAMS) in the Republic of Korea. Four machine learning (ML) algorithms, support vector machine, k-nearest neighbors, random forest, and extreme gradient boosting, were used as regression models to calibrate the PMD measurements of PM2.5. The performance of each ML algorithm was evaluated using stratified K-fold cross-validation, with a linear regression model as a reference. Results: Based on the performance of the ML algorithms used, regressing the output of the PMD to the PM2.5 concentration data available from the GAMS through web query was effective. The extreme gradient boosting algorithm showed the best performance, with a mean coefficient of determination (R²) of 0.78 and a standard error of 5.0 µg/m³, corresponding to an 8% increase in R² and a 12% decrease in root mean square error compared with the linear regression model. A minimum calibration period of 100 hours was found to be required to calibrate the PMD to its full capacity.
The proposed calibration method is limited in that the PMD must be located in the vicinity of the GAMS. As the number of PMDs participating in the sensor network increases, however, calibrated PMDs can serve as reference devices for nearby PMDs that require calibration, forming a calibration chain through the MQTT protocol. Conclusions: Calibration of a low-cost PMD based on constructing a PM2.5 sensor network using the MQTT protocol and a web query of reference measurement data available at a GAMS significantly improves the accuracy and reliability of the PMD, thereby making practical use of the low-cost PMD possible.
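The calibration idea, regressing raw PMD readings onto GAMS reference concentrations and scoring the fit by K-fold cross-validation, can be sketched with the linear reference model; the data here are synthetic stand-ins for the MQTT/web-query feeds, and the function names are illustrative:

```python
import random

def fit_linear(x, y):
    """Ordinary least squares for y = a*x + b (the linear reference model)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def kfold_rmse(x, y, k=5):
    """Mean held-out RMSE over k folds: fit the calibration on k-1 folds,
    evaluate on the remaining fold."""
    idx = list(range(len(x)))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        a, b = fit_linear([x[i] for i in train], [y[i] for i in train])
        err = [(a * x[i] + b - y[i]) ** 2 for i in fold]
        scores.append((sum(err) / len(err)) ** 0.5)
    return sum(scores) / k

# Synthetic stand-in data: the PMD reads high by a factor and an offset
rng = random.Random(1)
ref = [rng.uniform(5, 80) for _ in range(200)]      # GAMS PM2.5, ug/m3
pmd = [1.4 * r + 3 + rng.gauss(0, 2) for r in ref]  # raw PMD output
print(round(kfold_rmse(pmd, ref), 2))
```

The paper's gradient-boosting model replaces `fit_linear` with a nonlinear regressor while the validation loop stays the same in spirit.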

Frequency stabilization of 1.5μm laser diode by using double resonance optical pumping (이중공명 광펌핑을 이용한 1.5μm 반도체 레이저 주파수 안정화)

  • Moon, Han-Sub;Lee, Won-Kyu;Lee, Rim;Kim, Joong-Bok
    • Korean Journal of Optics and Photonics
    • /
    • v.15 no.3
    • /
    • pp.193-199
    • /
    • 2004
  • We present the double resonance optical pumping (DROP) spectra in the transitions $5P_{3/2}-4D_{3/2}$ and $5P_{3/2}-4D_{5/2}$ of $^{87}$Rb and the frequency stabilization in the $1.5\,{\mu}m$ region using those spectra. The spectra have a high signal-to-noise ratio and a narrow spectral linewidth of about 10 MHz. We could account for the relative intensities of the hyperfine states of those spectra by the spontaneous emission into the other state. When the frequency of the $1.5\,{\mu}m$ laser diode was stabilized to the DROP spectrum, the frequency fluctuation was about 0.2 MHz for a sampling time of 0.1 s, and the Allan deviation (the square root of the Allan variance) was about $1\times10^{-11}$ for an averaging time of 100 s.
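The Allan deviation quoted above can be computed from a record of frequency samples as follows; this is a generic non-overlapping estimator applied to synthetic white noise, not the authors' measurement chain:

```python
import math
import random

def allan_deviation(freqs, m):
    """Non-overlapping Allan deviation at averaging factor m.
    `freqs` are fractional-frequency samples; the Allan variance is half
    the mean squared difference of successive m-sample averages, and the
    Allan deviation is its square root."""
    means = [sum(freqs[i:i + m]) / m
             for i in range(0, len(freqs) - m + 1, m)]
    diffs = [(b - a) ** 2 for a, b in zip(means, means[1:])]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# White frequency noise at the 1e-11 level: ADEV falls as 1/sqrt(m)
rng = random.Random(0)
y = [rng.gauss(0.0, 1e-11) for _ in range(10000)]
adev_1 = allan_deviation(y, 1)      # short averaging time
adev_100 = allan_deviation(y, 100)  # 100x longer averaging time
```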

Outlier Detection and Treatment for the Conversion of Chemical Oxygen Demand to Total Organic Carbon (화학적산소요구량의 총유기탄소 변환을 위한 이상자료의 탐지와 처리)

  • Cho, Beom Jun;Cho, Hong Yeon;Kim, Sung
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.26 no.4
    • /
    • pp.207-216
    • /
    • 2014
  • Total organic carbon (TOC) is an important indicator used as a direct biological index in research on the marine carbon cycle. Sufficient TOC estimation data can be produced using chemical oxygen demand (COD) data, because the available TOC data are relatively scarce compared with the COD data. Outlier detection and treatment (removal) should be carried out reasonably and objectively, because the COD-TOC conversion equation directly affects the TOC estimation. This study aims to suggest the optimal regression model using the available salinity, COD, and TOC data observed in the Korean coastal zone. The optimal regression model was selected by comparing and analyzing the changes in the number of data points before and after removal, the variation coefficients, and the root mean square (RMS) error across diverse detection methods for outliers and influential observations. The results show that a diagnostic case combining the SIQR (semi-interquartile range) boxplot and Cook's distance method is most suitable for outlier detection. The optimal regression function was estimated as TOC(mg/L) = $0.44{\cdot}COD(mg/L)+1.53$, with a determination coefficient of 0.47 and an RMS error of 0.85 mg/L. The RMS error and the variation coefficients of the leverage values were greatly reduced, to 31% and 80% of their values before outlier removal. The method suggested in this study can provide a more appropriate regression curve because it removes the excessive influence of outliers frequently included in COD and TOC monitoring data.
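The screen-then-regress workflow can be sketched as follows; the asymmetric SIQR fence form, the fence constant, and the crude quartile estimates are assumptions for the sketch, and the synthetic data merely scatter around the reported TOC = 0.44·COD + 1.53 relation:

```python
import random

def siqr_fences(values, k=3.0):
    """Asymmetric outlier fences built from the two semi-interquartile
    ranges: lower = Q1 - k*(Q2 - Q1), upper = Q3 + k*(Q3 - Q2).
    (Fence constant k and this asymmetric form are illustrative
    assumptions; quartiles are crude order statistics.)"""
    s = sorted(values)
    n = len(s)
    q1, q2, q3 = s[n // 4], s[n // 2], s[3 * n // 4]
    return q1 - k * (q2 - q1), q3 + k * (q3 - q2)

def fit_line(x, y):
    """Ordinary least squares for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Synthetic COD/TOC pairs scattered around the reported relation
rng = random.Random(0)
cod = [rng.uniform(1, 10) for _ in range(300)]
toc = [0.44 * c + 1.53 + rng.gauss(0, 0.2) for c in cod]
toc[0] = 30.0                        # inject one gross outlier
lo, hi = siqr_fences(toc)
keep = [i for i, t in enumerate(toc) if lo <= t <= hi]
a, b = fit_line([cod[i] for i in keep], [toc[i] for i in keep])
```

Removing the injected outlier before fitting keeps the slope and intercept near the underlying relation, which is the point the abstract makes about leverage reduction.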

A Study for Effects of Image Quality due to Scatter Ray produced by Increasing of Tube Voltage (관전압 증가에 기인한 산란선 발생의 화질 영향 연구)

  • Park, Ji-Koon;Jun, Je-Hoon;Yang, Sung-Woo;Kim, Kyo-Tae;Choi, Il-Hong;Kang, Sang-Sik
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.7
    • /
    • pp.663-669
    • /
    • 2017
  • In diagnostic medical imaging, reducing scattered radiation is essential for high image quality and low patient dose. In this study, therefore, the influence of scattered radiation on medical images was analyzed as the tube voltage increases. For this purpose, an ANSI chest phantom was used to measure the scatter ratio, and the effect of scattering on image quality was investigated by RMS evaluation, RSD, and NPS analysis. The scatter ratio was found to increase gradually with x-ray tube voltage, reaching 48.8% at a tube voltage of 73 kV and 80.1% at 93 kV. RMS analysis showed that the RMS value increased with tube voltage, resulting in lower image quality. The NPS value at a spatial frequency of 2.5 lp/mm also increased by 20% when the tube voltage was increased to 93 kV compared with 73 kV. This study shows that scattered radiation has a significant effect on image quality as the x-ray tube voltage increases. The results can be used as basic data for improving medical image quality.

Fabrication of Bendable Gd2O2S:Tb Intensifying Screen and Evaluation of Fatigue Properties (유연한 Gd2O2S:Tb 증감지 제작 및 피로누적에 대한 영향)

  • Park, Ji-Koon;Yang, Sung-Woo;Jeon, Je-Hoon;Kim, Joo-Hee;Heo, Ye-Ji;Kang, Sang-Sik;Kim, Kyo-Tae
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.7
    • /
    • pp.611-617
    • /
    • 2017
  • In this study, it was expected that long-term stability against external mechanical force could be secured if the phosphor layer had ductility. A bendable $Gd_2O_2S:Tb$ intensifying screen was fabricated by the screen printing method, and image uniformity was evaluated through RMS and histogram analysis to investigate the effect of fatigue accumulation due to long-term and repetitive external forces. As a result, under long-term external force the dominant pixel area remained constant and the relative standard deviation stayed below 10%. Under repetitive external force, however, the dominant pixel area split into three areas and image uniformity was adversely affected. Based on these results, a curved-surface detector appears feasible given the screen's mechanical stability relative to existing radiation intensifying screens, although further studies are needed before application to a flexible detector. Flexible radiation intensifying screens can thus be applied to various curved surfaces and are expected to be applicable to fields such as nuclear medicine, medical treatment, and industry in the future.

Influence of Regularization Parameter on Algebraic Reconstruction Technique (대수적 재구성 기법에서 정규화 인자의 영향)

  • Son, Jung Min;Chon, Kwon Su
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.7
    • /
    • pp.679-685
    • /
    • 2017
  • Computed tomography is widely used to diagnose patient disease, and the patient dose has also increased rapidly. Various techniques have been applied to reduce the patient dose from CT; iterative reconstruction is one such approach on the image reconstruction side. The image quality of section images reconstructed through the algebraic reconstruction technique (ART), one of the iterative reconstruction methods, was examined using the normalized root mean square (NRMS) error. The computer program was written in Visual C++ under a parallel-beam geometry with a $512{\times}512$ Shepp-Logan head phantom, 360 projections, and 1,024 detector pixels. Forward and backward projection were realized by the Joseph method. The minimum NRMS of 0.108 was obtained after 10 iterations with a regularization parameter of 0.09-0.12, and the optimum image was obtained after 8 and 6 iterations for 0.1% and 0.2% noise, respectively. The optimum value of the regularization parameter varied with the phantom used; if ART is used for reconstruction, the optimal regularization parameter should be found case by case. By finding the optimal regularization parameter in the algebraic reconstruction technique, the reconstruction time can be reduced.
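The ART update with a regularization (relaxation) parameter can be sketched on a toy linear system; this generic relaxed-Kaczmarz form stands in for the paper's Visual C++ implementation, and the parallel-beam geometry and Joseph-method projectors are omitted:

```python
def art_reconstruct(A, b, lam=0.1, iters=10):
    """Relaxed Kaczmarz sweep, the row-action form of ART:
    for each row a_i of A, update
        x <- x + lam * (b_i - a_i . x) / ||a_i||^2 * a_i,
    where lam plays the role of the regularization (relaxation)
    parameter."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            norm2 = sum(v * v for v in a_i)
            if norm2 == 0.0:
                continue
            r = (b_i - sum(v * xj for v, xj in zip(a_i, x))) / norm2
            x = [xj + lam * r * v for xj, v in zip(x, a_i)]
    return x

def nrms_error(x, ref):
    """Normalized root mean square error against a reference image."""
    num = sum((a - b) ** 2 for a, b in zip(x, ref))
    den = sum(b * b for b in ref)
    return (num / den) ** 0.5

# Toy consistent system with true solution x = (1, 2)
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = art_reconstruct(A, b, lam=0.5, iters=200)
```

Smaller `lam` damps each correction, which slows convergence but suppresses noise amplification; that trade-off is why the paper searches for the optimal value per phantom and noise level.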

Development of Free Flow Speed Estimation Model by Artificial Neural Networks for Freeway Basic Sections (인공신경망을 이용한 고속도로 기본구간 자유속도 추정모형개발)

  • Kang, Jin-Gu;Chang, Myung-Soon;Kim, Jin-Tae;Kim, Eung-Cheol
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.3 s.74
    • /
    • pp.109-125
    • /
    • 2004
  • In recent decades, microscopic simulation models have become powerful tools to analyze traffic flow on highways and to assist investigation of the level of service. Existing microscopic simulation models simulate an individual vehicle's speed based on a constant free-flow speed, dominantly specified by users, and driver behavior models reflecting vehicle interactions such as car following and lane changing. They set a single free-flow speed for a single vehicle on a given link and neglect the effects of highway design elements on it in their internal simulation. As a result, the existing models are limited to providing identical simulation results on both curved and tangent sections of highways. This paper presents a model developed to estimate the change of free-flow speeds based on highway design elements. Nine neural network models were trained on field data collected from seven different freeway curve sections, at three different locations per section, to capture the percentage changes of free-flow speed: 100 m upstream of the point of curve (PC) and the middle of the curve. The model employing seven highway design elements as its input variables was selected as the best: radius of curve, length of curve, superelevation, the number of lanes, grade variations, and the approaching free-flow speed 100 m upstream of the PC. Tests showed that the free-flow speeds estimated by the proposed model were statistically identical to those from the field at the 95% confidence level at each of the three locations described above. The root mean square errors at the start and the middle of the curve section were 6.68 and 10.06, and the R-squares at these points were 0.77 and 0.65, respectively. It was concluded that the proposed model is a potential tool for introducing the effects of highway design elements on free-flow speeds in simulation.

Effect of Kinetic Degrees of Freedom of the Fingers on the Task Performance during Force Production and Release: Archery Shooting-like Action

  • Kim, Kitae;Xu, Dayuan;Park, Jaebum
    • Korean Journal of Applied Biomechanics
    • /
    • v.27 no.2
    • /
    • pp.117-124
    • /
    • 2017
  • Objective: The purpose of this study was to examine the effect of changes in the degrees of freedom of the fingers (i.e., the number of fingers involved in the task) on task performance during a force production and release task. Method: Eight right-handed young men (age: $29.63{\pm}3.02yr$, height: $1.73{\pm}0.04m$, weight: $70.25{\pm}9.05kg$) participated in this study. The subjects were required to press the transducers with three combinations of fingers: index-middle (IM), index-middle-ring (IMR), and index-middle-ring-little (IMRL). During the trials, they were instructed to maintain a steady-state level of both normal and tangential forces within the first 5 sec. After the first 5 sec, the subjects were instructed to release the fingers from the transducers as quickly as possible in a self-selected manner within the next 5 sec, resulting in zero force at the end. Customized MATLAB codes (MathWorks Inc., Natick, MA, USA) were written for data analysis. The following variables were quantified: 1) finger force sharing pattern, 2) root mean square error (RMSE) of force relative to the target force in three axes during the aiming phase, 3) the duration of the release phase (release time), and 4) the accuracy and precision indexes of the virtual firing position. Results: The RMSE decreased as the number of fingers increased, in both normal and tangential forces, at the steady-state phase. The precision index was smaller (more precise) in the IMR condition than in the IM condition, while no significant difference in the accuracy index was observed between the conditions. In addition, no significant difference in release time was found between the conditions. Conclusion: The study provides evidence that an increased number of fingers resulted in better error compensation during the aiming phase and more consistent shooting (i.e., a smaller precision index).
However, the increased number of fingers did not affect the release time, which may influence the consistency of terminal performance. Thus, the number of fingers led to positive results for the current task.