• Title/Summary/Keyword: Sampling Theorem

Application of Biosignal Data Compression for u-Health Sensor Network System (u-헬스 센서 네트워크 시스템의 생체신호 압축 처리)

  • Lee, Yong-Gyu;Park, Ji-Ho;Yoon, Gil-Won
    • Journal of Sensor Science and Technology / v.21 no.5 / pp.352-358 / 2012
  • A sensor network system can be an efficient tool for healthcare telemetry for multiple users because of its power efficiency. One drawback is its limited data size. This paper proposes a real-time data compression/decompression method for a u-Health monitoring system in order to improve network efficiency. High priority was given to maintaining high-quality signal reconstruction, since it is important to receive an undistorted waveform. Our method consists of down-sampling coding and differential Huffman coding. Down sampling was applied based on the Nyquist-Shannon sampling theorem, and the signal amplitude was taken into account to increase the compression ratio (CR) in the differential Huffman coding. Our method was successfully tested in a ZigBee and WLAN dual network. Electrocardiogram (ECG) data had an average CR of 3.99:1 with 0.24% percentage root mean square difference (PRD), and photoplethysmogram (PPG) data showed an average CR of 37.99:1 with 0.16% PRD. Our method produced an outstanding PRD compared with previous reports.
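
A minimal sketch, in Python, of the compression chain the abstract describes: down-sample (admissible only while the reduced rate still satisfies the Nyquist-Shannon criterion), take successive differences, then Huffman-code the differences. The toy samples, the 2:1 decimation factor, and the helper names (`downsample`, `delta_encode`, `huffman_code`) are illustrative assumptions, not the authors' implementation.

```python
import heapq
from collections import Counter

def downsample(signal, factor):
    """Keep every `factor`-th sample; valid only while the reduced rate
    still exceeds twice the highest signal frequency (Nyquist-Shannon)."""
    return signal[::factor]

def delta_encode(samples):
    """Differential coding: keep the first sample, then successive differences."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def huffman_code(symbols):
    """Build a Huffman table (symbol -> bitstring) from symbol frequencies."""
    freq = Counter(symbols)
    heap = [[weight, [sym, ""]] for sym, weight in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                              # degenerate single-symbol stream
        return {heap[0][1][0]: "0"}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

ecg = [512, 515, 519, 520, 518, 514, 511, 509, 510, 513]   # toy ECG samples
deltas = delta_encode(downsample(ecg, 2))                   # hypothetical 2:1 decimation
table = huffman_code(deltas)
bits = "".join(table[d] for d in deltas)
print(deltas, table, f"{len(bits)} bits")
```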

Front-End Design for Underwater Communication System with 25 kHz Carrier Frequency and 5 kHz Symbol Rate (25kHz 반송파와 5kHz 심볼율을 갖는 수중통신 수신기용 전단부 설계)

  • Kim, Seung-Geun;Yun, Chang-Ho;Park, Jin-Young;Kim, Sea-Moon;Park, Jong-Won;Lim, Young-Kon
    • Journal of Ocean Engineering and Technology / v.24 no.1 / pp.166-171 / 2010
  • In this paper, the front-end of a digital receiver with a 25 kHz carrier frequency, a 5 kHz symbol rate, and arbitrary excess bandwidth is designed using two basic facts. The first is the uniform bandpass sampling theorem, which states that a sampled sequence can be free of aliasing even when the sampling rate is lower than the Nyquist rate, provided the analog signal is a bandpass signal. The second is that if the sampling rate is 4 times the center frequency of the sampled sequence, the front-end processing complexity can be reduced dramatically, because half of the samples are multiplied by zero in the demixing process. Furthermore, the designed front-end is simplified by introducing sub-filters and sub-sampled sequences. The designed front-end is composed of an A/D converter, which samples the bandpass-filtered signal at a 20 kHz rate; a serial-to-parallel converter, which converts the sampled bandpass sequence into 4 parallel sub-sample sequences; 4 sub-filter blocks, which act as a frequency shifter and lowpass filter for a complex sequence; 4 synchronized switches; and 2 adders. The designed front-end reduces the computational complexity of the frequency-shifting and lowpass-filtering operations by more than 50%: a conventional front-end requires one frequency-shifting and two lowpass-filtering operations to obtain one lowpass complex sample, whereas the proposed front-end requires only four sub-filtering operations to obtain four lowpass complex samples, which is equivalent to one filtering operation per sample.
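
A small numeric sketch of the fs = 4·fc demixing trick the abstract relies on, assuming (as in the paper) that the 25 kHz carrier aliases to a 5 kHz center frequency once sampled at 20 kHz. The flat 16-tap lowpass filter is a placeholder, not the authors' sub-filter design; the paper further splits this filtering into 4 sub-filters fed by 4 sub-sampled sequences.

```python
import numpy as np

fs = 20_000        # A/D rate (Hz)
fc = 5_000         # center frequency of the sampled sequence (25 kHz aliased by 20 kHz sampling)
assert fs == 4 * fc

n = np.arange(400)
rx = np.cos(2 * np.pi * 25_000 * n / fs + 0.3)     # bandpass-filtered signal sampled at 20 kHz

# Demixing to complex baseband: cos(pi*n/2) = 1, 0, -1, 0, ... and -sin(pi*n/2) = 0, -1, 0, 1, ...
# so every other multiplication is by zero and can simply be skipped.
i_mix = rx * np.tile([1, 0, -1, 0], len(n) // 4)
q_mix = rx * np.tile([0, -1, 0, 1], len(n) // 4)

taps = np.ones(16) / 16                            # placeholder lowpass filter
baseband = np.convolve(i_mix + 1j * q_mix, taps, mode="same")
print(baseband[:4])
```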

IMPLEMENTATION OF DATA ASSIMILATION METHODOLOGY FOR PHYSICAL MODEL UNCERTAINTY EVALUATION USING POST-CHF EXPERIMENTAL DATA

  • Heo, Jaeseok;Lee, Seung-Wook;Kim, Kyung Doo
    • Nuclear Engineering and Technology / v.46 no.5 / pp.619-632 / 2014
  • The Best Estimate Plus Uncertainty (BEPU) method has been widely used to evaluate the uncertainty of a best-estimate thermal-hydraulic system code against a figure of merit. This uncertainty is typically evaluated based on physical-model uncertainties determined by expert judgment. This paper introduces the application of data assimilation methodology to determine the uncertainty bands of the physical models, e.g., the mean values and standard deviations of the parameters, based on a statistical approach rather than expert judgment. Data assimilation provides a mathematical methodology for estimating the best-estimate bias and the uncertainties of the physical models that optimize the system response following the calibration of model parameters and responses. The mathematical approaches include deterministic and probabilistic methods of data assimilation to solve both linear and nonlinear problems, with the a posteriori distribution of parameters derived from Bayes' theorem. The inverse problem was solved analytically to obtain the mean value and standard deviation of the parameters assuming Gaussian distributions for the parameters and responses, and a sampling method was used to illustrate non-Gaussian a posteriori distributions of the parameters. The SPACE code is used to demonstrate the data assimilation method by determining the bias and the uncertainty bands of the physical models using Bennett's heated-tube test data and Becker's post-critical-heat-flux experimental data. Based on the results of the data assimilation process, the major sources of modeling uncertainty were identified for further model development.
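
As a concrete illustration of the Gaussian branch of this approach (the analytic posterior for the parameters when parameters and responses are assumed Gaussian and the response model is linearized), here is a hedged sketch; the sensitivity matrix, prior moments, and measurements below are invented numbers, and the SPACE code itself is not involved.

```python
import numpy as np

# Prior on two physical-model parameters (mean, covariance) -- illustrative values.
x_prior = np.array([1.0, 1.0])
P = np.diag([0.2**2, 0.3**2])

# Linearized response model y = H x + error, with measurement covariance R.
H = np.array([[1.5, 0.4],
              [0.2, 1.1],
              [0.9, 0.7]])          # hypothetical sensitivity matrix
R = np.diag([0.05**2] * 3)
y_obs = np.array([1.8, 1.4, 1.7])   # hypothetical experimental responses

# Gaussian a posteriori mean and covariance (analytic solution of the inverse problem).
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # gain matrix
x_post = x_prior + K @ (y_obs - H @ x_prior)
P_post = (np.eye(2) - K @ H) @ P

print("posterior mean:", x_post)
print("posterior std :", np.sqrt(np.diag(P_post)))
```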

Connection between Fourier of Signal Processing and Shannon of 5G SmartPhone (5G 스마트폰의 샤논과 신호처리의 푸리에의 표본화에서 만남)

  • Kim, Jeong-Su;Lee, Moon-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.6 / pp.69-78 / 2017
  • Shannon of the 5G smartphone and Fourier of signal processing meet in the sampling theorem (sampling at twice the highest frequency). In this paper, we note that the original Shannon theorem gives the channel capacity of a point-to-point link, whereas 5G shows, on the relay channel, that the technology has evolved into multi-point MIMO. The Fourier transform is a signal-processing tool with fixed parameters. We analyze the performance by proposing a 2N-1 multivariate Fourier-Jacket transform for the multimedia age. In this study, the authors tackle the signal-processing complexity issue by proposing a Jacket-based fast method for reducing the precoding/decoding complexity in terms of computation time. Jacket transforms have found applications in signal processing and coding theory. Jacket transforms are defined as $n{\times}n$ matrices $A=(a_{jk})$ over a field F with the property $AA^{\dagger}=nI_n$, where $A^{\dagger}$ is the transpose of the element-wise inverse of A, that is, $A^{\dagger}=(a^{-1}_{kj})$; they generalise Hadamard transforms and centre-weighted Hadamard transforms. In particular, exploiting the Jacket transform properties, the authors propose a new eigenvalue decomposition (EVD) method with application to precoding and decoding of distributive multi-input multi-output channels in relay-based DF cooperative wireless networks in which the transmission is based on single-symbol-decodable space-time block codes. The authors show that the proposed Jacket-based EVD method gives a significant reduction in computation time compared to the conventional EVD method. Performance in terms of computation-time reduction is evaluated quantitatively through mathematical analysis and numerical results.
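
A quick sketch verifying the defining Jacket property $AA^{\dagger}=nI_n$ (with $A^{\dagger}$ the transpose of the element-wise inverse) for a 4x4 Hadamard matrix and for a centre-weighted Hadamard matrix; the weight w = 2 is just an example value.

```python
import numpy as np

def reciprocal_transpose(A):
    """A^dagger: transpose of the element-wise inverse, (a_kj^{-1})."""
    return (1.0 / A).T

def is_jacket(A):
    n = A.shape[0]
    return np.allclose(A @ reciprocal_transpose(A), n * np.eye(n))

# 4x4 Hadamard matrix: entries are +-1, so the element-wise inverse equals A itself.
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

# Centre-weighted Hadamard matrix with weight w (here w = 2), a classic Jacket example.
w = 2
CW4 = np.array([[1,  1,  1,  1],
                [1, -w,  w, -1],
                [1,  w, -w, -1],
                [1, -1, -1,  1]])

print(is_jacket(H4), is_jacket(CW4))   # both True
```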

Registration of Aerial Image with Lines using RANSAC Algorithm

  • Ahn, Y.;Shin, S.;Schenk, T.;Cho, W.
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.6_1 / pp.529-536 / 2007
  • Registration between image and object space is a fundamental step in photogrammetry and computer vision. Along with the rapid development of sensors - multi/hyperspectral, laser scanning, radar, etc. - the need for registration between different sensors is ever increasing. There are two important considerations in multi-sensor registration: sensor-invariant feature extraction and the correspondence between the extracted features. Since point-to-point correspondence does not exist between image and laser scanning data, higher-level entities are needed for extraction and correspondence. This requires modifying, first, the existing mathematical and geometrical models, which were suited to point measurements, to handle line measurements, and second, the matching scheme. In this research, linear features are selected as the sensor-invariant features and matching entities. Linear features are incorporated into the mathematical model in the form of an extended collinearity equation for the registration problem known as photo resection, which calculates the exterior orientation parameters. The other emphasis is on the scheme for finding matched entities with the aid of RANSAC (RANdom SAmple Consensus) in the absence of known correspondences. To relieve the computational load, a common problem in random sampling schemes, a deterministic sampling technique that selects 4 line features from 4 sectors is applied.
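
A generic RANSAC skeleton of the kind the matching scheme builds on, with a toy 2D line-fitting problem standing in for the line-based resection; `fit_line` and `point_line_residual` are illustrative stand-ins for the extended-collinearity resection solver and a line-to-line misfit measure, and the paper replaces the random draw with a deterministic selection of 4 line features, one per sector.

```python
import math
import random

def ransac(data, fit_model, residual, min_samples, threshold, max_iters=500):
    """Hypothesize a model from a minimal sample, keep the one with the most inliers."""
    best_model, best_inliers = None, []
    for _ in range(max_iters):
        sample = random.sample(data, min_samples)   # the paper draws this deterministically
        model = fit_model(sample)
        if model is None:
            continue
        inliers = [d for d in data if residual(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy stand-in: robustly fit a 2D line a*x + b*y + c = 0 to noisy points with outliers.
def fit_line(pts):
    (x1, y1), (x2, y2) = pts
    a, b = y2 - y1, x1 - x2
    norm = math.hypot(a, b)
    if norm == 0.0:
        return None
    return a / norm, b / norm, -(a * x1 + b * y1) / norm

def point_line_residual(model, pt):
    a, b, c = model
    return abs(a * pt[0] + b * pt[1] + c)

pts = [(x, 2 * x + 1 + random.gauss(0, 0.05)) for x in range(20)] + [(5, 30), (12, -4)]
model, inliers = ransac(pts, fit_line, point_line_residual, min_samples=2, threshold=0.3)
print(model, len(inliers))
```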

Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers (다중 섬광결정을 이용한 고해상도 PET의 불균일/불완전 데이터 보정기법 연구)

  • Lee, Jae-Sung;Kim, Soo-Mee;Lee, Kwon-Song;Sim, Kwang-Souk;Rhe, June-Tak;Park, Kwang-Suk;Lee, Dong-Soo;Hong, Seong-Jong
    • Nuclear Medicine and Molecular Imaging / v.42 no.1 / pp.52-60 / 2008
  • Purpose: To establish methods for sinogram formation and correction in order to appropriately apply the filtered backprojection (FBP) reconstruction algorithm to data acquired using a PET scanner with multiple scintillation crystal layers. Materials and Methods: Formats for raw PET data storage and methods for converting list-mode data into histograms and sinograms were optimized. To solve the various problems that occurred while the raw histogram was converted into a sinogram, an optimal sampling strategy and a sampling efficiency correction method were investigated. Gap compensation methods unique to this system were also investigated. All the sinogram data were reconstructed using a 2D filtered backprojection algorithm and compared to estimate the improvements from the correction algorithms. Results: The optimal radial sampling interval and number of angular samples, in terms of the sampling theorem and the sampling efficiency correction algorithm, were pitch/2 and 120, respectively. By applying the sampling efficiency correction and gap compensation, artifacts and background noise in the reconstructed image could be reduced. Conclusion: A method for converting the histogram into a sinogram was investigated for the FBP reconstruction of data acquired using multiple scintillation crystal layers. This method will be useful for the fast 2D reconstruction of multiple-crystal-layer PET data.
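
The pitch/2 radial interval and the number of angular views follow from a standard sampling-theorem rule of thumb for sinograms, sketched below; the field of view and crystal pitch used here are hypothetical, so the output does not reproduce the paper's figure of 120 views.

```python
import math

def sinogram_sampling(fov_mm, crystal_pitch_mm):
    """Rule-of-thumb sinogram sampling from the sampling theorem:
    radial bin width = pitch/2 (interleaved sampling), and the number of
    angular views >= (pi/2) * (number of radial bins across the FOV)."""
    radial_interval = crystal_pitch_mm / 2.0
    n_radial = int(math.ceil(fov_mm / radial_interval))
    n_angular = int(math.ceil(math.pi / 2.0 * n_radial))
    return radial_interval, n_radial, n_angular

# Hypothetical scanner geometry (mm):
print(sinogram_sampling(fov_mm=200.0, crystal_pitch_mm=6.0))
```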

Improvement of a Pound-Drever-Hall Technique to Measure Precisely the Free Spectral Range of a Fabry-Perot Etalon

  • Seo, Dong-Sun;Park, Chongdae;Leaird, Daniel E.;Weiner, Andrew M.
    • Journal of the Optical Society of Korea / v.19 no.4 / pp.357-362 / 2015
  • We examine the principle of a modified Pound-Drever-Hall (PDH) technique to measure the free spectral range of a Fabry-Perot etalon (FPE). The FPE's periodic transmission of phase-modulated light allows us to adopt a sampling theorem to develop a new relationship for the PDH error signal. This leads us to find the key parameters governing the measurement accuracy: the phase modulation index ${\beta}$ and the FPE finesse. Without any additional complexity for background noise reduction, we achieve a measurement accuracy of 0.5 ppm. The improvement is mainly attributed to the wide-band phase modulation approaching ${\beta}=10$, and partly to the use of both reflected and transmitted light from the FPE and good FPE finesse.
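
A short numeric aside (not from the paper) showing why a modulation index approaching $\beta=10$ counts as wide-band: phase modulation distributes the optical power over sidebands with amplitudes $J_n(\beta)$, so for $\beta=10$ many sidebands carry appreciable power and can probe many FPE transmission orders at once.

```python
import numpy as np
from scipy.special import jv

beta = 10.0                       # phase-modulation index from the paper
orders = np.arange(0, 16)
power = jv(orders, beta) ** 2     # relative power in the carrier (n=0) and each sideband order

for n, p in zip(orders, power):
    print(f"order {n:2d}: J_n(beta)^2 = {p:.3f}")
# Total power satisfies J_0^2 + 2*sum_{n>=1} J_n^2 = 1, so the remainder beyond order 15 is:
print("power beyond order 15:", 1.0 - (power[0] + 2 * power[1:].sum()))
```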

A SHARP BOUND FOR ITO PROCESSES

  • Choi, Chang-Sun
    • Journal of the Korean Mathematical Society / v.35 no.3 / pp.713-725 / 1998
  • Let X and Y be Ito processes with $dX_s = \phi_s dB_s + \psi_s ds$ and $dY_s = \zeta_s dB_s + \xi_s ds$. Burkholder obtained a sharp bound on the distribution of the maximal function of Y under the assumptions that $|Y_0| \leq |X_0|$, $|\zeta| \leq |\phi|$, $|\xi| \leq |\psi|$, and that X is a nonnegative local submartingale. In this paper we consider a wider class of Ito processes, replace the assumption $|\xi| \leq |\psi|$ by the more general one $|\xi| \leq \alpha|\psi|$, where $\alpha \geq 0$ is a constant, and obtain a weak-type inequality between X and the maximal function of Y. This inequality, being sharp for all $\alpha \geq 0$, extends the work of Burkholder.
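
Purely as an illustration of the hypotheses (not of the sharp constant, which the abstract does not state), here is an Euler-Maruyama sketch of a pair (X, Y) satisfying $|\zeta| \leq |\phi|$ and $|\xi| \leq \alpha|\psi|$ with arbitrary coefficient choices; note that the theorem additionally requires X to be a nonnegative local submartingale, which this toy does not enforce.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, T, n_steps, n_paths = 2.0, 1.0, 1_000, 5_000
dt = T / n_steps

X = np.full(n_paths, 1.0)            # X_0 = 1.0
Y = np.full(n_paths, 0.5)            # |Y_0| <= |X_0|
Y_max = np.abs(Y).copy()             # running maximal function of Y

phi, psi = 1.0, 0.3                  # arbitrary coefficients for X
zeta, xi = 0.8 * phi, alpha * psi    # chosen so that |zeta| <= |phi| and |xi| <= alpha*|psi|

for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    X = X + phi * dB + psi * dt
    Y = Y + zeta * dB + xi * dt
    Y_max = np.maximum(Y_max, np.abs(Y))

lam = 2.0
print("P(sup|Y| >= lambda) ~", np.mean(Y_max >= lam))
```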

Combined membrane and flexural reinforcement design in RC shells and ultimate behavior (막응력과 휨을 고려한 RC 쉘의 설계와 극한거동)

  • 민창식
    • Proceedings of the Korea Concrete Institute Conference / 1998.10a / pp.405-411 / 1998
  • An iterative numerical computational algorithm is presented to design a plate or shell element subjected to membrane and flexural forces. Based on equilibrium considerations, equations for the capacities of the top and bottom reinforcements in two orthogonal directions have been derived. The amount of reinforcement is determined locally, i.e., for each sampling point, from the equilibrium between applied and internal forces. One design case is performed for a hyperbolic paraboloid saddle shell (originally used by Lin and Scordelis) to check the design strength against a consistent design load and thereby verify the adequacy of design practice for reinforced concrete shells. Based on the nonlinear analyses performed, the analytically calculated ultimate load exceeded the design ultimate load by 14-43% for analyses with relatively low to high tension stiffening (${\gamma}$ = 5~20). For these cases, the design method gives a lower bound on the ultimate load with respect to the lower bound theorem. This shows the adequacy of current practice, at least for the saddle shell case studied. To generalize the conclusion, many more design-analysis cases are performed with different shell configurations.
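
For the membrane part of the equilibrium argument, the classical reinforcement rule (Nielsen-type equations) evaluated per sampling point looks like the sketch below; this is the textbook in-plane form only, whereas the paper's algorithm iterates between top and bottom layers to cover combined membrane and flexural forces, so treat the function as an assumption-laden illustration.

```python
import math

def membrane_reinforcement(nx, ny, nxy):
    """Equilibrium-based reinforcement rule for one sampling point of a membrane
    element. Returns required tensile capacities (fx, fy) in the two orthogonal
    bar directions and the concrete compressive force nc (tension positive,
    forces per unit width)."""
    a = abs(nxy)
    if nx >= -a and ny >= -a:                      # steel needed in both directions
        return nx + a, ny + a, -2.0 * a
    if nx < -a and ny + nxy ** 2 / abs(nx) > 0:    # no x-steel; y-steel carries the shear
        return 0.0, ny + nxy ** 2 / abs(nx), nx + nxy ** 2 / nx
    if ny < -a and nx + nxy ** 2 / abs(ny) > 0:    # no y-steel
        return nx + nxy ** 2 / abs(ny), 0.0, ny + nxy ** 2 / ny
    # Biaxial compression: no steel; concrete carries the minor principal force.
    return 0.0, 0.0, 0.5 * (nx + ny) - math.sqrt((0.5 * (nx - ny)) ** 2 + nxy ** 2)

# Hypothetical membrane forces (kN/m) at one sampling point:
print(membrane_reinforcement(nx=150.0, ny=-40.0, nxy=80.0))
```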

Comparison of interpretation methods for large amplitude oscillatory shear response

  • Kim Hyung-Sup;Hyun Kyu;Kim Dae-Jin;Cho Kwang-Soo
    • Korea-Australia Rheology Journal / v.18 no.2 / pp.91-98 / 2006
  • We compare FT (Fourier Transform) and SD (Stress Decomposition), two interpretation methods for LAOS (Large Amplitude Oscillatory Shear). Although the two methods are mathematically equivalent, they differ significantly in their numerical procedures. The precision of FT depends greatly on the sampling rate and the length of the data, because the FT of experimental data is a discrete version of the Fourier integral theorem. FT also inevitably involves unnecessary frequencies that must not appear in LAOS. On the other hand, SD is free from the problems from which FT suffers, because SD involves only odd harmonics of the primary frequency. SD is based on two axioms on shear stress: [1] shear stress is a sufficiently smooth function of strain and its time derivatives; [2] shear stress satisfies macroscopic time-reversal symmetry. In this paper, we compare the numerical aspects of the two interpretation methods for LAOS.
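
The numerical contrast between the two routes can be sketched on a synthetic one-period LAOS signal: the FT route reads harmonic intensities off a discrete Fourier transform (and is therefore sensitive to sampling rate and record length), while the SD route splits the stress using the time-reversal symmetries of strain and strain rate. The cubic toy stress below is an assumption for illustration, not the paper's data.

```python
import numpy as np

# Synthetic one-period LAOS stress with a cubic (3rd-harmonic) nonlinearity -- a toy, not data.
omega, gamma0 = 1.0, 2.0
t = np.linspace(0.0, 2 * np.pi / omega, 2048, endpoint=False)
gamma = gamma0 * np.sin(omega * t)
gdot = gamma0 * omega * np.cos(omega * t)
stress = 1.0 * gamma + 0.5 * gdot + 0.1 * gamma ** 3

# FT route: harmonic intensities from the discrete Fourier transform of one period.
spectrum = np.fft.rfft(stress) / len(t)
harmonics = {k: 2 * abs(spectrum[k]) for k in (1, 2, 3, 5)}   # even harmonics stay ~0

# SD route: split the stress using time-reversal symmetry.
#   t -> -t        : gamma -> -gamma, gdot ->  gdot  => odd part is the elastic stress
#   t -> pi/w - t  : gamma ->  gamma, gdot -> -gdot  => odd part is the viscous stress
n = len(t)
sigma_elastic = 0.5 * (stress - stress[(-np.arange(n)) % n])
sigma_viscous = 0.5 * (stress - stress[(n // 2 - np.arange(n)) % n])

print(harmonics)
print(np.allclose(stress, sigma_elastic + sigma_viscous))
```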