• Title/Summary/Keyword: Real-Time Correction

A new approach for quantitative damage assessment of in-situ rock mass by acoustic emission

  • Kim, Jin-Seop;Kim, Geon-Young;Baik, Min-Hoon;Finsterle, Stefan;Cho, Gye-Chun
    • Geomechanics and Engineering
    • /
    • v.18 no.1
    • /
    • pp.11-20
    • /
    • 2019
  • The purpose of this study was to propose a new approach for quantifying in situ rock mass damage, comprising a degree-of-damage and the degraded strength of a rock mass, along with its prediction based on real-time Acoustic Emission (AE) observations. The basic idea is to normalize the measured AE energy by the maximum AE energy; this ratio is called the degree-of-damage in this study. For AE energy estimation, an AE crack source location algorithm combining the Wigner-Ville Distribution with Biot's wave dispersion model was applied for more reliable AE crack source localization in the rock mass. In situ AE wave attenuation was also taken into account to correct the AE energy for the propagation distance of the AE wave. To infer the maximum AE energy, fractal theory was used for scale-independent AE energy estimation. In addition, the Weibull model was applied to statistically determine the AE crack size in a jointed rock mass. Subsequently, the proposed methodology was calibrated against an in situ test under controlled incremental cyclic loading, carried out in the Underground Research Tunnel at the Korea Atomic Energy Research Institute as part of a preceding study. It was found that the inferred degree-of-damage agreed quite well with the results of the in situ test. The methodology proposed in this study can therefore be regarded as a reasonable approach for quantifying rock mass damage.
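
The core quantity in the entry above is a ratio: attenuation-corrected AE energy divided by an inferred maximum AE energy. Below is a minimal, hypothetical sketch of that normalization; the exponential attenuation law, the `alpha` coefficient, and the fixed `max_energy` value are assumptions for illustration, not the paper's calibrated in-situ model.

```python
import numpy as np

def degree_of_damage(measured_energy, distance_m, alpha, max_energy):
    """Toy degree-of-damage: attenuation-corrected AE energy normalized by
    an inferred maximum AE energy (all values in arbitrary consistent units).

    distance_m : source-to-sensor propagation distance [m]
    alpha      : assumed exponential attenuation coefficient [1/m]
    max_energy : maximum AE energy, e.g. inferred via fractal scaling
    """
    # Undo propagation loss with a simple exponential attenuation law;
    # the paper's in-situ attenuation correction may differ from this.
    source_energy = measured_energy * np.exp(alpha * distance_m)
    return source_energy / max_energy

# Example: 0.8 energy units recorded 3 m from the located crack source
print(degree_of_damage(0.8, distance_m=3.0, alpha=0.15, max_energy=5.0))
```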

History of the Photon Beam Dose Calculation Algorithm in Radiation Treatment Planning System

  • Kim, Dong Wook;Park, Kwangwoo;Kim, Hojin;Kim, Jinsung
    • Progress in Medical Physics
    • /
    • v.31 no.3
    • /
    • pp.54-62
    • /
    • 2020
  • Dose calculation algorithms play an important role in radiation therapy and form the basis for optimizing treatment plans, an important feature in the development of complex treatment technologies such as intensity-modulated radiation therapy. We reviewed the past and current status of dose calculation algorithms used in treatment planning systems for radiation therapy. Dose calculation algorithms can be broadly classified into three main groups based on the mechanism used: (1) factor-based, (2) model-based, and (3) principle-based. Factor-based algorithms are a type of empirical dose calculation that interpolates or extrapolates the dose from a set of basic measurements. Model-based algorithms, represented by the pencil beam convolution, analytical anisotropic, and collapsed cone convolution algorithms, use a simplified physical model in which the primary photon energy fluence is convolved with a dose deposition kernel. Because they account for lateral scatter when beams traverse heterogeneous media, model-based algorithms provide more precise dose calculations than factor-based (correction-based) algorithms. Principle-based algorithms, represented by Monte Carlo dose calculation, simulate all the real physical processes of the beam particles during transport; their dose calculations are therefore accurate but time-consuming. Over approximately 70 years of development in dose calculation algorithms and computing technology, the accuracy of dose calculation has become close to clinical needs. Next-generation dose calculation algorithms are expected to include biologically equivalent or biologically effective doses, which clinicians expect to use to improve the quality of treatment in the near future.
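
As a toy illustration of the model-based class described above, the following sketch convolves an idealized 1D primary fluence profile with an assumed exponential scatter kernel. It is a one-dimensional caricature of the pencil-beam/collapsed-cone idea, not any clinical treatment planning system's implementation; the field width and kernel shape are made up.

```python
import numpy as np

# Toy 1D illustration of a model-based (convolution) dose calculation:
# dose(x) = sum_x' fluence(x') * kernel(x - x'). Clinical algorithms
# (pencil beam, collapsed cone) work in 3D with heterogeneity scaling.
x = np.linspace(-10, 10, 201)                    # off-axis position [cm]
fluence = np.where(np.abs(x) <= 5, 1.0, 0.0)     # idealized 10 cm open field
kernel = np.exp(-np.abs(x) / 1.5)                # assumed exponential scatter kernel
kernel /= kernel.sum()                           # normalize to unit total weight

dose = np.convolve(fluence, kernel, mode="same") # primary fluence (*) kernel
print(f"central-axis dose: {dose[100]:.3f}, dose at field edge (x=5 cm): {dose[150]:.3f}")
```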

The Manufacture of Digital X-ray Devices and Implementation of Image Processing Algorithm (디지털 X-ray 장치 제작 및 영상 처리 알고리즘 구현)

  • Kim, So-young;Park, Seung-woo;Lee, Dong-hoon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.21 no.4
    • /
    • pp.195-201
    • /
    • 2020
  • This study addresses scoliosis, one of the most common modern conditions, associated with the lifestyle of office workers who sit in front of computers all day and of people who use smartphones frequently. Scoliosis is a typical complication that affects more than 80% of the nation's population at least once. X-ray imaging, a non-destructive examination method, is used to test for this condition; it allows regions such as the chest, abdomen, and bones to be imaged easily without contrast agents or other instruments. We used an NI DAQ to build a miniaturized digital X-ray imaging device with an image intensifier in a self-shielding housing, and used Vision Assistant to draw lines along the top and the bottom of the spine and acquire the angle between them, i.e. the curvature, in real time. In this way, the research was conducted to make the condition of scoliosis patients easy to assess and to support rapid treatment and posture correction.
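
The curvature measurement described above amounts to computing the angle between two lines drawn along the upper and lower ends of the curved spine segment. A small Python sketch of that geometry is given below; the coordinate values are hypothetical image points, and the original system performs this step with NI Vision Assistant rather than with this code.

```python
import math

def line_angle_deg(p1, p2):
    """Angle of the line through p1 -> p2 relative to the horizontal, in degrees."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def curvature_angle(top_line, bottom_line):
    """Cobb-style curvature: angle between the lines drawn along the top and
    bottom of the curved spine segment; each line is ((x1, y1), (x2, y2))."""
    diff = abs(line_angle_deg(*top_line) - line_angle_deg(*bottom_line))
    return min(diff, 180.0 - diff)   # keep the acute intersection angle

# Hypothetical line endpoints picked on an X-ray image (pixel coordinates)
top = ((120, 80), (220, 95))
bottom = ((110, 400), (215, 370))
print(f"estimated curvature: {curvature_angle(top, bottom):.1f} degrees")
```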

Assessment of real-time bias correction method for rainfall forecast using the Backward-Forward tracking (Backward-Forward tracking 기반 예측강우 편의보정 기법의 실시간 적용 및 평가)

  • Na, Wooyoung;Kang, Minseok;Kim, Yu-Min;Yoo, Chulsang
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.371-371
    • /
    • 2021
  • Forecast rainfall is used as input data for flash flood warning systems. The Korea Meteorological Administration and the Ministry of Environment produce MAPLE (McGill Algorithm for Precipitation nowcasting and Lagrangian Extrapolation) forecasts for very-short-range forecasting. MAPLE forecasts are reasonably accurate up to a lead time of about 30 minutes, but the forecast quality drops sharply after that, particularly beyond three hours. There have been several attempts at bias correction of forecast rainfall, but cases that consider the scale and movement characteristics of the storm are limited. The movement of the storm must be considered for the following reasons. First, by the nature of forecasting, the storm moves during the time in which the forecast rainfall is generated and the bias correction is carried out. Second, as the storm moves, it is difficult to determine a correction factor appropriate for the region targeted by the bias correction. Finally, flash floods are caused not by frontal precipitation such as the monsoon rain front but by fast-moving, intense, localized rainfall. To overcome these problems, this study determines the bias correction factor for forecast rainfall by considering the movement characteristics of the storm and proposes a method for applying it to the forecast rainfall in real time. In this procedure, backward tracking is used to trace, from the region the storm will reach in the future (the target region), the region where the storm is currently located, and the correction factor is determined in the tracked region. Forward tracking is then used to trace the target region again from the region where the storm is currently located, and the previously determined correction factor is applied to the forecast rainfall for the target region. The methodology was applied in real time to major storm events that occurred in 2019 and evaluated. The results confirmed that, when the Backward-Forward tracking based bias correction was applied, the corrected forecast rainfall was very similar to the observed rainfall.
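
A stripped-down sketch of the backward-forward idea described above: backward tracking locates the cell the storm currently occupies, a correction factor is computed there from the ratio of observed to forecast rainfall, and the factor is applied to the forecast at the target cell. The dictionary-based grid, the uniform motion vector, and the factor cap are illustrative assumptions, not the paper's implementation.

```python
def bias_correct_forecast(obs, fcst, target_cell, motion, lead_steps):
    """Toy backward-forward tracking bias correction on a gridded field.

    obs, fcst   : dicts mapping grid cell (i, j) -> rainfall [mm]
    target_cell : cell for which the corrected forecast is wanted
    motion      : assumed storm motion (di, dj) in cells per time step
    lead_steps  : forecast lead time in time steps
    """
    # Backward tracking: which cell holds the rain that will reach the target?
    src = (target_cell[0] - motion[0] * lead_steps,
           target_cell[1] - motion[1] * lead_steps)
    # Correction factor estimated where the storm currently is
    raw = obs.get(src, 0.0) / max(fcst.get(src, 0.0), 1e-6)
    factor = min(raw, 3.0)   # arbitrary cap to keep the factor sensible
    # Forward tracking: apply the factor to the forecast at the target cell
    return fcst.get(target_cell, 0.0) * factor

obs = {(10, 12): 8.0}                    # observed rainfall at the tracked cell [mm]
fcst = {(10, 12): 5.0, (13, 15): 6.0}    # forecast rainfall field [mm]
# Storm moving (+1, +1) cells per step, corrected 3 steps ahead at cell (13, 15)
print(bias_correct_forecast(obs, fcst, (13, 15), motion=(1, 1), lead_steps=3))
```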

The Effect of Wireless Channel Models on the Performance of Sensor Networks (채널 모델링 방법에 따른 센서 네트워크 성능 변화)

  • 안종석;한상섭;김지훈
    • Journal of KIISE:Information Networking
    • /
    • v.31 no.4
    • /
    • pp.375-383
    • /
    • 2004
  • As wireless mobile networks have been widely adopted owing to their ease of deployment, research on improving their performance has been actively conducted. Since their throughput is constrained by the packet corruption rate rather than by congestion as in wired networks, however, network simulations for performance evaluation need to select an appropriate wireless channel model that represents the propagation-error behavior of the evaluated channel. The selection of the right model depends on various factors such as the adopted frequency band, the signal power level, the existence of obstacles to signal propagation, the sensitivity of protocols to bit errors, and so on. This paper analyzes 10-day bit traces collected from real sensor channels exhibiting high bit error rates to determine a suitable sensor channel model. For the selection, it also evaluates the performance of two types of error recovery algorithms, a link-layer FEC algorithm and three TCP variants (Tahoe, Reno, and Vegas), over several channel models. The comparative analysis shows that the CM (Chaotic Map) model predicts a BER variance about 3 times smaller and a PER (Packet Error Rate) about 10 times larger than the traces, whereas the differences between the other models and the traces are larger than a factor of 10. The simulation experiments furthermore show that the CM model evaluates the performance of these algorithms over sensor channels at least 10 times more accurately than any of the other models.
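
For context, the comparison above rests on trace statistics such as BER and PER. The sketch below shows one plausible way to compute them from a recorded bit-error trace under an assumed fixed packet length and no FEC; it is not the paper's analysis code, and the synthetic trace is purely illustrative.

```python
import numpy as np

def ber_and_per(error_trace, packet_bits=256):
    """BER and PER from a 0/1 bit-error trace (1 = corrupted bit).

    packet_bits is an assumed fixed packet length; a packet counts as lost
    if any of its bits is in error (no FEC applied)."""
    trace = np.asarray(error_trace, dtype=np.uint8)
    ber = trace.mean()
    n_pkts = len(trace) // packet_bits
    packets = trace[: n_pkts * packet_bits].reshape(n_pkts, packet_bits)
    per = (packets.sum(axis=1) > 0).mean()
    return ber, per

# Synthetic 1 Mbit trace with independent errors (real sensor traces are bursty)
rng = np.random.default_rng(0)
trace = (rng.random(1_000_000) < 1e-3).astype(np.uint8)
print(ber_and_per(trace))
```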

Analysis of Respiratory Motion Artifacts in PET Imaging Using Respiratory Gated PET Combined with 4D-CT (4D-CT와 결합한 호흡게이트 PET을 이용한 PET영상의 호흡 인공산물 분석)

  • Cho, Byung-Chul;Park, Sung-Ho;Park, Hee-Chul;Bae, Hoon-Sik;Hwang, Hee-Sung;Shin, Hee-Soon
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.3
    • /
    • pp.174-181
    • /
    • 2005
  • Purpose: Reduction of respiratory motion artifacts in PET images was studied using respiratory-gated PET (RGPET) with a moving phantom. In particular, a method of generating simulated helical CT images from 4D-CT datasets was developed and applied to phase-specific RGPET images for more accurate attenuation correction. Materials and Methods: Using a motion phantom with a period of 6 seconds and a linear motion amplitude of 26 mm, PET/CT (Discovery ST; GEMS) scans with and without respiratory gating were obtained for one syringe and two vials with volumes of 3, 10, and 30 ml, respectively. RPM (Real-Time Position Management, Varian) was used for tracking motion during PET/CT scanning. Ten datasets of RGPET and 4D-CT corresponding to 10% phase intervals were acquired. From the positions, sizes, and uptake values of each object in the resulting phase-specific PET and CT datasets, the correlations between motion artifacts in the PET and CT images and the size of the motion relative to the size of the object were analyzed. Results: The center positions of the three vials in RGPET and 4D-CT agreed well with the actual positions within the estimated error. However, object volumes in non-gated PET images increased in proportion to the relative motion size and were overestimated by as much as 250% when the motion amplitude was increased to twice the size of the object. Conversely, the corresponding maximum uptake value was reduced to about 50%. Conclusion: RGPET was demonstrated to remove respiratory motion artifacts in PET imaging; moreover, more precise image fusion and more accurate attenuation correction are possible by combining it with 4D-CT.
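
Respiratory gating as described above sorts acquisition data into phase bins defined by the breathing cycle. The following sketch bins sample times into ten 10% phase intervals using trigger times from a respiratory trace; the trigger source, bin count, and timing values are assumptions chosen to mirror the 6 s phantom period, not the RPM/scanner implementation.

```python
import numpy as np

def phase_bins(event_times, cycle_starts, n_bins=10):
    """Assign acquisition times to respiratory phase bins 0..n_bins-1.

    event_times  : acquisition times of PET data samples [s]
    cycle_starts : successive breathing-cycle trigger times [s], e.g. taken
                   from a respiratory monitoring trace
    """
    event_times = np.asarray(event_times)
    cycle_starts = np.asarray(cycle_starts)
    idx = np.clip(np.searchsorted(cycle_starts, event_times, side="right") - 1,
                  0, len(cycle_starts) - 2)
    period = cycle_starts[idx + 1] - cycle_starts[idx]
    phase = (event_times - cycle_starts[idx]) / period          # 0.0 .. 1.0
    return np.clip((phase * n_bins).astype(int), 0, n_bins - 1)

# 6 s breathing period as in the phantom study; one sample every 0.5 s
triggers = np.arange(0.0, 60.1, 6.0)
samples = np.arange(0.25, 59.75, 0.5)
print(np.bincount(phase_bins(samples, triggers), minlength=10))
```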

Correction of TDC Position for Engine Output Measuring in Marine Diesel Engines (선박용 디젤엔진의 출력산정을 위한 TDC 위치보정에 관한 연구)

  • Jung, Kyun-Sik;Choi, Jun-Young;Jeong, Eun-Seok;Choi, Jae-Sung
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.36 no.4
    • /
    • pp.459-466
    • /
    • 2012
  • Accurate engine output is one of the basic factors in the analysis of engine performance. Nowadays, in-cylinder pressure analysis of internal combustion engines is also an indispensable tool for engine research and development, environmental regulation, and engine maintenance. Above all, finding the correct TDC (Top Dead Center) position is essential for accurately determining the output of a diesel engine. This study therefore analyzes the factors affecting the TDC position in a large two-stroke low-speed engine and suggests a new method for determining the correct TDC position. In a previous paper, it was noted that the accuracy of the engine output depends on determining the exact TDC position, and that the 'angle based sampling' method is more precise than the 'time based sampling' method. It was confirmed that there is a 'loss of angle', a difference between the compression pressure peak and the real TDC caused by heat loss and blow-by gas leakage. Consequently, we devised a new method, called 'an improved method of time based sampling', which can obtain the correct engine output. The results of this method, with compensation for the loss of angle, matched those of the 'angle based sampling' method in the cylinder fitted with the encoder. This study thus suggests a new measuring method for the exact engine output and examines the reliability of its results.
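
The 'loss of angle' correction described above can be illustrated with a simple peak shift: locate the compression pressure peak on the crank-angle axis and add an engine-specific offset to estimate the true TDC. The sketch below uses a synthetic pressure trace and an assumed 0.7 degree offset; the paper's improved time-based sampling method is more involved than this.

```python
import numpy as np

def estimate_tdc_angle(pressure, crank_angle_deg, loss_of_angle_deg=0.7):
    """Estimate the TDC crank angle from a compression pressure trace.

    The compression pressure peak occurs slightly before true TDC because of
    heat loss and blow-by ('loss of angle'); loss_of_angle_deg is an assumed
    engine-specific offset added to the peak-pressure angle.
    """
    peak_angle = crank_angle_deg[int(np.argmax(pressure))]
    return peak_angle + loss_of_angle_deg

# Synthetic trace with its pressure peak at -0.7 deg, so estimated TDC ~ 0 deg
theta = np.linspace(-30, 30, 601)
p = np.exp(-((theta + 0.7) / 8.0) ** 2)
print(f"estimated TDC: {estimate_tdc_angle(p, theta):.2f} deg")
```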

Research for Calibration and Correction of Multi-Spectral Aerial Photographing System(PKNU 3) (다중분광 항공촬영 시스템(PKNU 3) 검정 및 보정에 관한 연구)

  • Lee, Eun Kyung;Choi, Chul Uong
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.7 no.4
    • /
    • pp.143-154
    • /
    • 2004
  • Researchers who seek geological and environmental information depend on remote sensing and aerial photographic data from various commercial satellites and aircraft. However, adverse weather conditions and expensive equipment can prevent researchers from collecting data anywhere and at any time. To allow better flexibility, we previously developed a compact, automatic multi-spectral aerial photographic system (PKNU 2). Its multi-spectral camera captures visible (RGB) and near-infrared (NIR) band images of 3032×2008 pixels. Visible and infrared band images were obtained from separate cameras and combined into color-infrared composite images for environmental monitoring, but the resulting data were not very good. Moreover, although the PKNU 2 system could take large-capacity photographs, it had the drawback that the 12 s storage time per image prevented a 60% stereoscopic overlap from being achieved. Therefore, we have been developing an advanced version of PKNU 2 (PKNU 3) that consists of a color-infrared spectral camera that can photograph the visible and near-infrared bands with a single sensor, a thermal infrared camera, two 40 GB computers to store the images, and an MPEG board to compress and transfer data to the computers in real time; the system can be attached to and detached from a helicopter. Verification and calibration of each sensor (REDLAKE MS 4000, Raytheon IRPro) were conducted before the aerial photography to obtain more valuable data, and corrections for the spectral characteristics and radial lens distortion of the sensors were carried out.
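
Radial lens distortion correction, mentioned at the end of the entry above, is commonly modeled with a polynomial in the radial distance. The sketch below applies the standard two-coefficient model as a first-order undistortion of sensor-centered points; the coefficients are hypothetical and would in practice come from calibration with a target, and exact undistortion would require iterating the inverse mapping.

```python
import numpy as np

def undistort_points(xy, k1, k2, center=(0.0, 0.0)):
    """First-order radial distortion correction using the standard polynomial
    model x_u = x_d * (1 + k1*r^2 + k2*r^4) about the principal point.

    xy     : (N, 2) array of distorted, sensor-centered point coordinates
    k1, k2 : radial distortion coefficients obtained from camera calibration
    """
    xy = np.asarray(xy, dtype=float) - center
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    corrected = xy * (1.0 + k1 * r2 + k2 * r2**2)
    return corrected + center

# Hypothetical coefficients; real values come from imaging a calibration target
pts = np.array([[0.0, 0.0], [1.5, 2.0], [-3.0, 1.0]])
print(undistort_points(pts, k1=-2.5e-3, k2=1.0e-5))
```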

Enhanced Indoor Localization Scheme Based on Pedestrian Dead Reckoning and Kalman Filter Fusion with Smartphone Sensors (스마트폰 센서를 이용한 PDR과 칼만필터 기반 개선된 실내 위치 측위 기법)

  • Harun Jamil;Naeem Iqbal;Murad Ali Khan;Syed Shehryar Ali Naqvi;Do-Hyeun Kim
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.4
    • /
    • pp.101-108
    • /
    • 2024
  • Indoor localization is a critical component for numerous applications, ranging from navigation in large buildings to emergency response. This paper presents an enhanced Pedestrian Dead Reckoning (PDR) scheme using smartphone sensors, integrating neural network-aided motion recognition, Kalman filter-based error correction, and multi-sensor data fusion. The proposed system leverages data from the accelerometer, magnetometer, gyroscope, and barometer to accurately estimate a user's position and orientation. A neural network processes sensor data to classify motion modes and provide real-time adjustments to stride length and heading calculations. The Kalman filter further refines these estimates, reducing cumulative errors and drift. Experimental results, collected using a smartphone across various floors of a university building, demonstrate the scheme's ability to accurately track vertical movements and changes in heading direction. Comparative analyses show that the proposed CNN-LSTM model outperforms conventional CNN and Deep CNN models in angle prediction. Additionally, the integration of barometric pressure data enables precise floor-level detection, enhancing the system's robustness in multi-story environments. The proposed comprehensive approach significantly improves the accuracy and reliability of indoor localization, making it viable for real-world applications.
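
As a rough illustration of the PDR-plus-Kalman idea above, the sketch below advances the position by one stride along the current heading and fuses a gyro-predicted heading with a magnetometer measurement in a scalar Kalman step. The noise variances, stride length, and sensor values are assumptions, and angle wrap-around and the neural-network motion classifier are omitted.

```python
import math

def fuse_heading(heading, gyro_rate, dt, mag_heading, p, q=0.01, r=0.25):
    """One scalar Kalman step: predict the heading with the gyroscope and
    correct it with a magnetometer reading. Angles in radians; q and r are
    assumed process/measurement noise variances (angle wrap-around ignored)."""
    pred = heading + gyro_rate * dt      # gyro prediction
    p = p + q
    k = p / (p + r)                      # Kalman gain
    fused = pred + k * (mag_heading - pred)
    return fused, (1.0 - k) * p

def pdr_step(x, y, heading, stride_m):
    """Dead-reckoning position update for one detected step."""
    return x + stride_m * math.sin(heading), y + stride_m * math.cos(heading)

# One illustrative step: 0.7 m stride, heading fused from gyro and magnetometer
heading, p = fuse_heading(0.0, gyro_rate=0.05, dt=0.5, mag_heading=0.1, p=0.1)
print(pdr_step(0.0, 0.0, heading, stride_m=0.7))
```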

Adaptive Hard Decision Aided Fast Decoding Method in Distributed Video Coding (적응적 경판정 출력을 이용한 고속 분산 비디오 복호화 기술)

  • Oh, Ryang-Geun;Shim, Hiuk-Jae;Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.6
    • /
    • pp.66-74
    • /
    • 2010
  • Recently, distributed video coding (DVC) has attracted attention for environments in which computing resources at the encoder are limited. Wyner-Ziv (WZ) coding is a representative DVC scheme. The WZ encoder encodes key frames and WZ frames independently, using conventional intra coding and a channel code, respectively. The WZ decoder generates side information from the two reconstructed key frames (t-1, t+1) based on temporal correlation. The side information is regarded as a noisy version of the original WZ frame, and the virtual channel noise can be removed by the channel decoding process, so the performance of WZ coding depends heavily on the performance of the channel code. Among existing channel codes, turbo codes and LDPC codes have the most powerful error correction capability; both use an iterative probabilistic decoding process. However, the iterative decoding process is quite time-consuming, so the complexity of the WZ decoder increases considerably. An analysis of LDPCA complexity on real video data shows that LDPCA decoding accounts for more than 60% of the total WZ decoding complexity. Using the HDA (Hard Decision Aided) method proposed in the channel coding field, channel decoding complexity can be much reduced, but considerable RD performance loss is possible depending on the threshold, whose proper value differs for each sequence. In this paper, we propose an adaptive HDA method which sets a proper threshold according to the sequence. The proposed method achieves about 62% and 32% time savings in the LDPCA and WZ decoding processes, respectively, with little loss in RD performance.
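
A hard-decision-aided early-stopping test of the general kind discussed above can be sketched as follows: stop iterating once the hard decisions are stable between iterations and every LLR magnitude exceeds a reliability threshold. This is an illustrative variant, not the paper's adaptive HDA rule; the threshold here is fixed and the LLR values are placeholders rather than the output of a real LDPCA decoder.

```python
import numpy as np

def hda_stop(llr_prev, llr_curr, threshold):
    """Hard-decision-aided early-stopping test for iterative channel decoding.

    Decoding stops once the hard decisions (LLR signs) are unchanged between
    two consecutive iterations and every |LLR| exceeds `threshold`, an assumed
    fixed reliability margin (the paper instead adapts it per sequence)."""
    same_bits = np.array_equal(np.signbit(llr_prev), np.signbit(llr_curr))
    reliable = np.min(np.abs(llr_curr)) > threshold
    return same_bits and reliable

# Placeholder LLRs from two consecutive iterations (not a real LDPCA run)
prev = np.array([2.1, -3.4, 1.8, -2.9])
curr = np.array([2.6, -3.9, 2.2, -3.3])
print(hda_stop(prev, curr, threshold=1.5))   # True -> terminate iterations early
```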