• Title/Summary/Keyword: Time averaging


Frequency stabilization of 1.5μm laser diode by using double resonance optical pumping (이중공명 광펌핑을 이용한 1.5μm 반도체 레이저 주파수 안정화)

  • Moon, Han-Sub;Lee, Won-Kyu;Lee, Rim;Kim, Joong-Bok
    • Korean Journal of Optics and Photonics
    • /
    • v.15 no.3
    • /
    • pp.193-199
    • /
    • 2004
  • We present the double resonance optical pumping (DROP) spectra of the 5P3/2-4D3/2 and 5P3/2-4D5/2 transitions of ⁸⁷Rb, and the frequency stabilization of a laser diode in the 1.5 μm region using those spectra. The spectra have a high signal-to-noise ratio and a narrow spectral linewidth of about 10 MHz. We could account for the relative intensities of the hyperfine components of those spectra by spontaneous emission into the other state. When the frequency of the 1.5 μm laser diode was stabilized to the DROP spectrum, the frequency fluctuation was about 0.2 MHz for a sampling time of 0.1 s, and the Allan deviation (the square root of the Allan variance) was about 1×10⁻¹¹ for an averaging time of 100 s.
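The Allan deviation quoted above can be illustrated with a minimal sketch (this is not the paper's code; the fractional-frequency samples passed in would come from the stabilized laser's beat-note measurement):

```python
def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency samples y.

    sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>, where ybar_k is the
    average of y over consecutive blocks of m samples (tau = m * tau0).
    """
    n = len(y) // m
    bars = [sum(y[k * m:(k + 1) * m]) / m for k in range(n)]
    diffs = [(bars[k + 1] - bars[k]) ** 2 for k in range(n - 1)]
    return (0.5 * sum(diffs) / len(diffs)) ** 0.5
```

A perfectly constant frequency record gives zero deviation; increasing `m` trades resolution for longer averaging times, which is how a deviation at 100 s is read off from 0.1 s samples.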

Performance Analysis of New LMMSE Channel Interpolation Scheme Based on the LTE Sidelink System in V2V Environments (V2V 환경에서 LTE 기반 사이드링크 시스템의 새로운 LMMSE 채널 보간 기법에 대한 성능 분석)

  • Chu, Myeonghun;Moon, Sangmi;Kwon, Soonho;Lee, Jihye;Bae, Sara;Kim, Hanjong;Kim, Cheolsung;Kim, Daejin;Hwang, Intae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.10
    • /
    • pp.15-23
    • /
    • 2016
  • To support telematics and infotainment services, vehicle-to-everything (V2X) communication requires a robust and reliable network, and the 3rd Generation Partnership Project (3GPP) has recently developed V2X communication for this purpose. Reliable communication requires accurate channel estimation, but because vehicle speeds are very high, the radio channel changes rapidly with time and is therefore difficult to estimate accurately. In this paper, we propose a new linear minimum mean square error (LMMSE) channel interpolation scheme based on the Long Term Evolution (LTE) sidelink system in vehicle-to-vehicle (V2V) environments. In our proposed reduced decision error (RDE) channel estimation scheme, LMMSE channel estimation is applied at the pilot symbols, and then smoothing and LMMSE channel interpolation are applied at the data symbols. After that, time- and frequency-domain averaging are applied to obtain the whole channel frequency response. In addition, the LMMSE equalizer at the receiver side can reduce the error propagation caused by decision errors, making reliable data detection possible. Analysis and simulation results demonstrate that the proposed scheme outperforms conventional schemes in normalized mean square error (NMSE) and bit error rate (BER).
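The time- and frequency-domain averaging step can be sketched in isolation. This is a simplified illustration, not the RDE scheme itself (the LMMSE estimation and interpolation stages are omitted, and the window sizes are arbitrary):

```python
def tf_average(H, wt=3, wf=3):
    """Average raw channel estimates H[symbol][subcarrier] over a
    (wt x wf) time-frequency neighborhood to suppress estimation noise."""
    T, F = len(H), len(H[0])
    out = [[0.0] * F for _ in range(T)]
    for t in range(T):
        for f in range(F):
            vals = [H[i][j]
                    for i in range(max(0, t - wt // 2), min(T, t + wt // 2 + 1))
                    for j in range(max(0, f - wf // 2), min(F, f + wf // 2 + 1))]
            out[t][f] = sum(vals) / len(vals)
    return out
```

Averaging over a small neighborhood assumes the channel is roughly constant within it, which is exactly the assumption that fast V2V fading strains and that motivates the interpolation stage.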

Percentile-Based Analysis of Non-Gaussian Diffusion Parameters for Improved Glioma Grading

  • Karaman, M. Muge;Zhou, Christopher Y.;Zhang, Jiaxuan;Zhong, Zheng;Wang, Kezhou;Zhu, Wenzhen
    • Investigative Magnetic Resonance Imaging
    • /
    • v.26 no.2
    • /
    • pp.104-116
    • /
    • 2022
  • The purpose of this study is to systematically determine an optimal percentile cut-off in histogram analysis for calculating the mean parameters obtained from a non-Gaussian continuous-time random-walk (CTRW) diffusion model for differentiating individual glioma grades. This retrospective study included 90 patients with histopathologically proven gliomas (42 grade II, 19 grade III, and 29 grade IV). We performed diffusion-weighted imaging using 17 b-values (0-4000 s/mm2) at 3T, and analyzed the images with the CTRW model to produce an anomalous diffusion coefficient (Dm) along with temporal (𝛼) and spatial (𝛽) diffusion heterogeneity parameters. Given the tumor ROIs, we created a histogram of each parameter; computed the P-values (using a Student's t-test) for the statistical differences in the mean Dm, 𝛼, or 𝛽 for differentiating grade II vs. grade III gliomas and grade III vs. grade IV gliomas at different percentiles (1% to 100%); and selected the highest percentile with P < 0.05 as the optimal percentile. We used the mean parameter values calculated from the optimal percentile cut-offs to do a receiver operating characteristic (ROC) analysis based on individual parameters or their combinations. We compared the results with those obtained by averaging data over the entire region of interest (i.e., 100th percentile). We found the optimal percentiles for Dm, 𝛼, and 𝛽 to be 68%, 75%, and 100% for differentiating grade II vs. III and 58%, 19%, and 100% for differentiating grade III vs. IV gliomas, respectively. The optimal percentile cut-offs outperformed the entire-ROI-based analysis in sensitivity (0.761 vs. 0.690), specificity (0.578 vs. 0.526), accuracy (0.704 vs. 0.639), and AUC (0.671 vs. 0.599) for grade II vs. III differentiations and in sensitivity (0.789 vs. 0.578) and AUC (0.637 vs. 0.620) for grade III vs. IV differentiations, respectively. 
Percentile-based histogram analysis, coupled with the multi-parametric approach enabled by the CTRW diffusion model using high b-values, can improve glioma grading.
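The percentile cut-off mean used above can be sketched as follows. This is an illustrative reconstruction, not the study's pipeline (the actual analysis compared grades with a Student's t-test at each percentile from 1% to 100% and kept the highest percentile with P < 0.05):

```python
def percentile_mean(values, pct):
    """Mean of the lowest pct% of voxel values (a percentile cut-off mean).

    pct=100 reduces to the ordinary whole-ROI mean the study compares against.
    """
    v = sorted(values)
    k = max(1, round(len(v) * pct / 100))
    return sum(v[:k]) / k
```

For a tumor ROI's Dm histogram, `percentile_mean(dm_values, 68)` would give the grade II vs. III discriminator reported above, while `percentile_mean(dm_values, 100)` is the entire-ROI baseline.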

The gene expression programming method to generate an equation to estimate fracture toughness of reinforced concrete

  • Ahmadreza Khodayari;Danial Fakhri;Adil Hussein Mohammed;Ibrahim Albaijan;Arsalan Mahmoodzadeh;Hawkar Hashim Ibrahim;Ahmed Babeker Elhag;Shima Rashidi
    • Steel and Composite Structures
    • /
    • v.48 no.2
    • /
    • pp.163-177
    • /
    • 2023
  • Complex and intricate preparation techniques, the imperative for utmost precision and sensitivity in instrumentation, premature sample failure, and fragile specimens collectively contribute to the arduous task of measuring the fracture toughness of concrete in the laboratory. The objective of this research is to introduce and refine an equation based on the gene expression programming (GEP) method to calculate the fracture toughness of reinforced concrete, thereby minimizing the need for costly and time-consuming laboratory experiments. To accomplish this, various types of reinforced concrete, each incorporating distinct ratios of fibers and additives, were subjected to diverse loading angles relative to the initial crack (α) in order to ascertain the effective fracture toughness (Keff) of 660 samples utilizing the central straight notched Brazilian disc (CSNBD) test. Within the datasets, six pivotal input factors influencing the Keff of concrete, namely sample type (ST), diameter (D), thickness (t), length (L), force (F), and α, were taken into account. The ST and α parameters represent crucial inputs in the model presented in this study, marking the first instance that their influence has been examined via the CSNBD test. Of the 660 datasets, 460 were utilized for training purposes, while 100 each were allotted for testing and validation of the model. The GEP model was fine-tuned based on the training datasets, and its efficacy was evaluated using the separate test and validation datasets. In subsequent stages, the GEP model was optimized, yielding the most robust models. Ultimately, an equation was derived by averaging the most exemplary models, providing a means to predict the Keff parameter. This averaged equation exhibited exceptional proficiency in predicting the Keff of concrete. 
The significance of this work lies in the possibility of obtaining the Keff parameter without investing copious amounts of time and resources into the CSNBD test, simply by inputting the relevant parameters into the equation derived for diverse samples of reinforced concrete subject to varied loading angles.

Comparative study of laminar and turbulent models for three-dimensional simulation of dam-break flow interacting with multiarray block obstacles (다층 블록 장애물과 상호작용하는 3차원 댐붕괴흐름 모의를 위한 층류 및 난류 모델 비교 연구)

  • Chrysanti, Asrini;Song, Yangheon;Son, Sangyoung
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.spc1
    • /
    • pp.1059-1069
    • /
    • 2023
  • Dam-break flow occurs when an elevated dam suddenly collapses, resulting in the catastrophic release of rapid and uncontrolled impounded water. This study compares laminar and turbulent closure models for simulating three-dimensional dam-break flows using OpenFOAM. The Reynolds-Averaged Navier-Stokes (RANS) model, specifically the k-ε model, is employed to capture turbulent dissipation. Two scenarios are evaluated based on a laboratory experiment and a modified multi-layered block obstacle scenario. Both models effectively represent dam-break flows, with the turbulent closure model reducing oscillations. However, excessive dissipation in turbulent models can underestimate water surface profiles. Improving numerical schemes and grid resolution enhances flow recreation, particularly near structures and during turbulence. Model stability is more significantly influenced by numerical schemes and grid refinement than the use of turbulence closure. The k-ε model's reliance on time-averaging processes poses challenges in representing dam-break profiles with pronounced discontinuities and unsteadiness. While simulating turbulence models requires extensive computational efforts, the performance improvement compared to laminar models is marginal. To achieve better representation, more advanced turbulence models like Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) are recommended, necessitating small spatial and time scales. This research provides insights into the applicability of different modeling approaches for simulating dam-break flows, emphasizing the importance of accurate representation near structures and during turbulence.
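The time-averaging underlying the RANS/k-ε approach discussed above is the Reynolds decomposition u = ū + u′, which splits each field into a mean and a fluctuation. A minimal sketch on a sampled velocity signal (illustrative only, not OpenFOAM code):

```python
def reynolds_decompose(u):
    """Split a sampled velocity signal into its time mean and fluctuations.

    RANS models solve for the mean and model the effect of the
    fluctuations; by construction the fluctuations average to zero.
    """
    mean = sum(u) / len(u)
    fluct = [x - mean for x in u]
    return mean, fluct
```

The abstract's point about discontinuous, unsteady dam-break fronts is visible here: a single time mean is a poor summary when the signal itself jumps, which is why LES/DNS (resolving the fluctuations rather than averaging them away) are recommended.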

Quantitative Assessment using SNR and CNR in Cerebrovascular Diseases : Focusing on FRE-MRA, CTA Imaging Method (뇌혈관 질환에서 신호대 잡음비와 대조도대 잡음비를 이용한 정량적평가 : FRE-MRA, CTA 영상기법중심으로)

  • Goo, Eun-Hoe
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.6
    • /
    • pp.493-500
    • /
    • 2017
  • In this study, data were analyzed with the INFINITT program to quantitatively evaluate the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of flow related enhancement (FRE) MRA and computed tomography angiography (CTA) in cerebrovascular diseases. From the cerebrovascular imaging results of 63 patients (January to April 2017, at C University Hospital), we selected 19 patients who underwent both FRE-MRA and CTA; 2 of the 19 were excluded because of motion artifacts in their cerebrovascular images. For the analysis, we set five regions of interest (the anterior cerebral artery, the right and left middle cerebral arteries, and the right and left posterior cerebral arteries) to evaluate the SNR and CNR, and the results were validated with an independent t-test. The average SNR/CNR values for FRE-MRA were: anterior cerebral artery (1500.73±12.23/970.43±14.55), right middle cerebral artery (1470.16±11.46/919.44±13.29), left middle cerebral artery (1457.48±17.11/903.96±14.53), right posterior cerebral artery (1385.83±16.52/852.11±14.58), and left posterior cerebral artery (1318.52±13.49/756.21±10.88). The average SNR/CNR values for CTA were: anterior cerebral artery (159.95±12.23/123.36±11.78), right middle cerebral artery (236.66±17.52/202.37±15.20), left middle cerebral artery (224.85±13.45/193.14±11.88), right posterior cerebral artery (183.65±13.47/151.44±11.48), and left posterior cerebral artery (177.7±16.72/144.71±11.43) (p < 0.05). In conclusion, MRA had high SNR and CNR values in all five regions regardless of cerebral infarction or cerebral hemorrhage. Although FRE-MRA takes longer, it has fewer contrast-media side effects than CTA.
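The SNR and CNR figures above follow common ROI-based definitions, sketched below. This is an assumption about the measurement convention (the study's exact INFINITT procedure is not described), using mean ROI signal over the standard deviation of a background noise region:

```python
def snr(signal_roi, noise_sd):
    """Signal-to-noise ratio: mean ROI signal over background noise SD."""
    return (sum(signal_roi) / len(signal_roi)) / noise_sd


def cnr(roi_a, roi_b, noise_sd):
    """Contrast-to-noise ratio: ROI mean difference over background noise SD."""
    mean_a = sum(roi_a) / len(roi_a)
    mean_b = sum(roi_b) / len(roi_b)
    return abs(mean_a - mean_b) / noise_sd
```

Averaging each metric over the five arterial regions, as the study does, then gives a single per-modality figure for the t-test comparison.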

Novel LTE based Channel Estimation Scheme for V2V Environment (LTE 기반 V2V 환경에서 새로운 채널 추정 기법)

  • Chu, Myeonghun;Moon, Sangmi;Kwon, Soonho;Lee, Jihye;Bae, Sara;Kim, Hanjong;Kim, Cheolsung;Kim, Daejin;Hwang, Intae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.3
    • /
    • pp.3-9
    • /
    • 2017
  • Recently, in the 3rd Generation Partnership Project (3GPP), Long Term Evolution (LTE) based vehicle communication has been actively studied to provide transport efficiency, telematics, and infotainment. Because vehicle communication is closely related to safety, it requires reliable communication. Since vehicle speeds are very high, the radio channel, unlike in ordinary user mobility, changes rapidly and causes problems such as transmission quality degradation, so the channel estimates must be updated continuously. There are five conventional channel estimation schemes. Least Squares (LS) estimation uses pilot symbols known to both transmitter and receiver. Decision Directed Channel Estimation (DDCE) uses the data signal for channel estimation. Constructed Data Pilot (CDP) uses the correlation between two adjacent data symbols. Spectral Temporal Averaging (STA) uses the frequency-time domain average of the channel. Smoothing reduces the peak error of the data decision. In this paper, we propose a novel channel estimation scheme for the LTE based Vehicle-to-Vehicle (V2V) environment. In our Hybrid Reliable Channel Estimation (HRCE) scheme, the DDCE and Smoothing schemes are combined, and finally the Linear Minimum Mean Square Error (LMMSE) scheme is applied to minimize the channel estimation error, making reliable data detection possible. Simulation results show that overall performance is improved in terms of Normalized Mean Square Error (NMSE) and Bit Error Rate (BER).
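Two of the building blocks named above, LS estimation at pilots and smoothing, are simple enough to sketch. This is an illustrative simplification (real LTE channel estimates are complex-valued, and the HRCE combination of DDCE, smoothing, and LMMSE is not reproduced here):

```python
def ls_estimate(rx, tx):
    """Least-squares estimate at pilot positions: H_LS[k] = Y[k] / X[k]."""
    return [y / x for y, x in zip(rx, tx)]


def smooth(h, w=3):
    """Moving-average smoothing across subcarriers to damp peak
    decision errors in a raw channel estimate."""
    n = len(h)
    out = []
    for k in range(n):
        lo, hi = max(0, k - w // 2), min(n, k + w // 2 + 1)
        out.append(sum(h[lo:hi]) / (hi - lo))
    return out
```

The LS step is unbiased but noisy; smoothing trades a little frequency resolution for noise reduction, which is the same trade the STA scheme makes jointly in time and frequency.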

A Smoothing Data Cleaning based on Adaptive Window Sliding for Intelligent RFID Middleware Systems (지능적인 RFID 미들웨어 시스템을 위한 적응형 윈도우 슬라이딩 기반의 유연한 데이터 정제)

  • Shin, DongCheon;Oh, Dongok;Ryu, SeungWan;Park, Seikwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.1-18
    • /
    • 2014
  • Over the past years, RFID/SN has been an elementary technology in a diversity of applications for ubiquitous environments, especially for the Internet of Things. However, one of the obstacles to widespread deployment of RFID technology is the inherent unreliability of the RFID data streams produced by tag readers. In particular, the problem of false readings, such as lost and mistaken readings, needs to be treated by RFID middleware systems, because false readings ultimately degrade the quality of application services through the dirty data delivered by the middleware. As a result, for higher quality of service, an RFID middleware system is responsible for intelligently dealing with false readings so that clean data are delivered to applications in accordance with the tag reading environment. One popular technique used to compensate for false readings is a sliding window filter. In a sliding window scheme, intelligently determining the optimal window size is a nontrivial and important task for RFID middleware systems aiming to reduce false readings, especially in mobile environments. In this paper, for the purpose of reducing false readings by intelligent window adaptation, we propose a new adaptive RFID data cleaning scheme based on window sliding for a single tag. Unlike previous works based on a binomial sampling model, we introduce weighted averaging. Our insight starts from the need to differentiate past readings from current readings, since more recent readings may indicate more accurate tag transitions. Owing to the weighted averaging, our scheme is expected to adapt the window size dynamically and efficiently even for non-homogeneous reading patterns in mobile environments. In addition, we analyze reading patterns within the window and the effects of a decreased window so that a more accurate and efficient decision on window adaptation can be made. 
With our scheme, RFID middleware systems can be expected to provide applications with cleaner data and thus ensure a high quality of the intended services.
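The recency-weighted averaging idea can be sketched as follows. The decay factor, thresholds, and window bounds here are hypothetical illustrations, not the paper's parameters:

```python
def weighted_presence(readings, decay=0.7):
    """Weighted average of a window of binary tag readings (1 = seen),
    weighting recent readings more heavily than older ones."""
    weights = [decay ** age for age in range(len(readings) - 1, -1, -1)]
    score = sum(w * r for w, r in zip(weights, readings))
    return score / sum(weights)


def adapt_window(size, presence, low=0.3, high=0.7, min_size=2, max_size=16):
    """Shrink the window when the tag has clearly left (low presence),
    grow it when readings look unreliable (intermediate presence)."""
    if presence < low:
        return max(min_size, size // 2)
    if presence < high:
        return min(max_size, size * 2)
    return size
```

Because recent readings dominate the score, a tag that genuinely moves away pulls the presence down quickly, letting the window shrink before stale readings are smoothed into a false "present".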

Building battery deterioration prediction model using real field data (머신러닝 기법을 이용한 납축전지 열화 예측 모델 개발)

  • Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.243-264
    • /
    • 2018
  • Although the worldwide battery market has recently spurred the development of lithium secondary batteries, lead-acid batteries (rechargeable batteries), which perform well and can be reused, are consumed in a wide range of industry fields. However, lead-acid batteries have a serious problem: deterioration progresses quickly once even one of the several cells packed in a battery begins to degrade. To overcome this problem, previous research has attempted to identify the mechanism of battery deterioration in many ways. However, most previous studies have analyzed this mechanism using data obtained in a laboratory rather than data obtained in the real world. The use of real data can increase the feasibility and applicability of a study's findings. Therefore, this study aims to develop a model which predicts battery deterioration using data obtained in the real world. To this end, we collected data on changes of battery state by attaching sensors, which monitor the battery condition in real time, to dozens of golf carts operated on a real golf course. As a result, a total of 16,883 samples were obtained. We then developed a model which predicts a precursor phenomenon of battery deterioration by analyzing the sensor data with machine learning techniques. 
  • As initial independent variables, we used 1) inbound time of a cart, 2) outbound time of a cart, 3) duration (from outbound time to charge time), 4) charge amount, 5) used amount, 6) charge efficiency, 7) lowest temperature of battery cells 1 to 6, 8) lowest voltage of battery cells 1 to 6, 9) highest voltage of battery cells 1 to 6, 10) voltage of battery cells 1 to 6 at the beginning of operation, 11) voltage of battery cells 1 to 6 at the end of charge, 12) used amount of battery cells 1 to 6 during operation, 13) used amount of the battery during operation (Max-Min), 14) duration of battery use, and 15) highest current during operation. Since the per-cell variables (lowest temperature, lowest voltage, highest voltage, voltage at the beginning of operation, voltage at the end of charge, and used amount during operation of cells 1 to 6) take similar values across cells, we conducted principal component analysis with varimax orthogonal rotation to mitigate the multicollinearity problem. Based on the results, we created new variables by averaging the values of the independent variables clustered together, and used them as the final independent variables instead of the original ones, thereby reducing the dimensionality. We used decision tree, logistic regression, and Bayesian network algorithms to build prediction models, and also built models using bagging and boosting of each of them, as well as RandomForest. Experimental results show that the model using bagging of decision trees yields the best accuracy of 89.3923%. This study has some limitations: additional variables which affect battery deterioration, such as weather (temperature, humidity) and driving habits, were not considered; we would like to consider them in future research. 
However, the battery deterioration prediction model proposed in the present study is expected to enable effective and efficient management of batteries used in the real field and to reduce the cost incurred by failing to detect battery deterioration.
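The dimension-reduction step, averaging the variables that cluster together, can be sketched in isolation. This is an illustrative helper (the cluster assignments would come from the study's PCA with varimax rotation; the names and indices below are hypothetical):

```python
def composite_features(row, clusters):
    """Replace groups of correlated variables with their averages.

    `clusters` maps a composite-feature name to the indices of the
    original variables it averages (e.g. the six per-cell voltages).
    """
    return {name: sum(row[i] for i in idx) / len(idx)
            for name, idx in clusters.items()}
```

Averaging within a cluster keeps one value per group of near-duplicate cell measurements, which removes the multicollinearity without discarding the underlying signal.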

Experimental Study of Overtopping Void Ratio by Wave Breaking (쇄파에 의한 월파의 기포분율에 대한 실험적 연구)

  • Ryu, Yong-Uk;Lee, Jong-In
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.20 no.2
    • /
    • pp.157-167
    • /
    • 2008
  • The aeration of an overtopping wave on a vertical structure generated by a plunging wave was investigated through laboratory measurements of void fraction. The overtopping wave occurring after wave breaking becomes multi-phased and turbulent with significant aeration, so the void fraction of the flow is of importance. In this study, a fiber optic reflectometer and bubble image velocimetry were employed to measure the void fraction, velocity, and layer thickness of the overtopping flow. Mean properties were obtained by ensemble- and time-averaging the repeated instantaneous void fractions and velocities. The mean void fractions show that the overtopping wave is highly aerated near the overtopping wave front and relatively weakly aerated near the deck surface and the rear free surface of the wave. The flow rate and momentum of the overtopping flow estimated from the measured data show that the void ratio is an important parameter to consider in this multiphase flow. From the similarity profiles of the depth-averaged void fraction, velocity, and layer thickness, one-dimensional empirical equations were obtained and used to estimate the flow rate and momentum of the overtopping flow.
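The ensemble- and time-averaging described above can be sketched as follows. This is an illustrative reduction (the actual analysis also resolves the vertical profile; here each run is collapsed to a single time series of instantaneous void fractions):

```python
def ensemble_time_average(runs):
    """Mean void fraction from repeated instantaneous measurements.

    `runs` is a list of repeated experiments, each a time series of
    instantaneous void fractions sampled at the same instants:
    first average across runs at each instant (ensemble average),
    then average the result over time.
    """
    n_t = len(runs[0])
    ensemble = [sum(run[t] for run in runs) / len(runs) for t in range(n_t)]
    return sum(ensemble) / n_t
```

Ensemble averaging first suppresses run-to-run turbulent variability, so the subsequent time average characterizes the repeatable structure of the overtopping event rather than one noisy realization.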