• Title/Summary/Keyword: offset error

Search Results: 448

Muscle Fatigue Assessment using Hilbert-Huang Transform and an Autoregressive Model during Repetitive Maximum Isokinetic Knee Extensions (슬관절의 등속성 최대 반복 신전시 Hilbert-Huang 변환과 AR 모델을 이용한 근피로 평가)

  • Kim, H.S.; Choi, S.W.; Yun, A.R.; Lee, S.E.; Shin, K.Y.; Choi, J.I.; Mun, J.H.
    • Journal of Biosystems Engineering / v.34 no.2 / pp.127-132 / 2009
  • In the working population, muscle fatigue and musculoskeletal discomfort are common and, in the case of insufficient recovery, may lead to musculoskeletal pain. Workers suffering from musculoskeletal pain need rehabilitation for recovery. Isokinetic testing has been used in physical strengthening, rehabilitation, and post-operative orthopedic surgery. Frequency analysis of electromyography (EMG) signals using the mean frequency (MNF) has been widely used to characterize muscle fatigue. During isokinetic contractions, EMG signals are strongly nonstationary. The Hilbert-Huang transform (HHT) and the autoregressive (AR) model are known to be more suitable than Fourier or wavelet transforms for nonstationary signals. Moreover, several analyses have been performed within each active phase during isokinetic contractions. Thus, the aims of this study were (i) to determine which of the HHT and the AR model is better suited to MNF analysis during repetitive maximum isokinetic extensions, and (ii) to investigate whether the analysis can be repeated over sequential fixed epoch lengths. Seven healthy volunteers (five males and two females) performed isokinetic knee extensions at $60^{\circ}/s$ and $240^{\circ}/s$ until 50% of the maximum peak torque was reached. Surface EMG signals were recorded from the rectus femoris of the right thigh. An algorithm detecting the onset and offset of EMG signals was applied to extract each active phase of the muscle. Slopes from the least-squares linear regression of MNF values showed that muscle fatigue occurred in all subjects. The AR model is better suited than the HHT for estimating MNF from nonstationary EMG signals during isokinetic knee extensions. Moreover, the linear regression can be extracted from MNF values calculated over sequential fixed epoch lengths (p > 0.01).
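
The abstract above describes estimating the mean frequency (MNF) of each EMG epoch with an AR model and then fitting a linear regression to the MNF trend. The Python sketch below illustrates that pipeline under simple assumptions (synthetic epochs, an assumed 1 kHz sampling rate, a least-squares AR fit of order 6); it is not the authors' implementation. A negative MNF slope over successive contractions is the usual marker of muscle fatigue.

```python
# Minimal sketch (not the authors' code): estimate the mean frequency (MNF) of an EMG
# epoch from an autoregressive (AR) power spectrum, then fit a linear trend of MNF over
# successive contractions. Epoch data, sampling rate and AR order are assumptions.
import numpy as np

def ar_coefficients(x, order):
    """Least-squares fit of AR coefficients: x[n] ~ sum_k a[k] * x[n-k-1]."""
    x = x - np.mean(x)
    rows = [x[i:len(x) - order + i] for i in range(order)]
    X = np.column_stack(rows[::-1])            # lagged samples, most recent lag first
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def mnf_from_ar(x, fs, order=6, nfreq=512):
    """Mean frequency of the AR power spectrum of one EMG epoch."""
    a = ar_coefficients(x, order)
    f = np.linspace(0, fs / 2, nfreq)
    z = np.exp(-2j * np.pi * f / fs)
    denom = 1 - sum(a[k] * z ** (k + 1) for k in range(order))
    psd = 1.0 / np.abs(denom) ** 2
    return np.sum(f * psd) / np.sum(psd)

fs = 1000.0                                     # assumed sampling rate [Hz]
rng = np.random.default_rng(0)
epochs = [rng.standard_normal(int(fs)) for _ in range(20)]   # placeholder epochs
mnf = np.array([mnf_from_ar(e, fs) for e in epochs])
slope, intercept = np.polyfit(np.arange(len(mnf)), mnf, 1)
print(f"MNF slope per contraction: {slope:.3f} Hz (a negative slope indicates fatigue)")
```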

Broadband Processing of Conventional Marine Seismic Data Through Source and Receiver Deghosting in Frequency-Ray Parameter Domain (주파수-파선변수 영역에서 음원 및 수신기 고스트 제거를 통한 전통적인 해양 탄성파 자료의 광대역 자료처리)

  • Kim, Su-min; Koo, Nam-Hyung; Lee, Ho-Young
    • Geophysics and Geophysical Exploration / v.19 no.4 / pp.220-227 / 2016
  • Marine seismic data contain not only primary signals from the subsurface but also ghost signals reflected from the sea surface. The ghost decreases the temporal resolution of seismic data because it attenuates specific frequency components. To eliminate ghost signals effectively, the exact ghost delay times and reflection coefficients are required. Because of the undulation of the sea surface and the vertical movements of airguns and streamers, the ghost delay time varies spatially and randomly during acquisition. The reflection coefficient is a function of frequency, the incidence angle of the plane wave, and the sea state. To estimate ghost delay times that account for these characteristics, we compared, on synthetic data, the delay times estimated with the L-1 norm, the L-2 norm, and the kurtosis of the deghosted trace and of its autocorrelation. The L-1 norm of the autocorrelation showed the smallest error, and the reflection coefficient was calculated using the Kirchhoff approximation, which can handle the effect of wave height. We applied the estimated ghost delay times and the calculated reflection coefficients to remove the source and receiver ghost effects. By removing the ghost signals, we reconstructed the frequency components attenuated near the notch frequency and produced a migrated stack section with enhanced temporal resolution.
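
As a rough illustration of the delay-time estimation criterion mentioned above, the sketch below scans candidate ghost delays, applies a stabilized frequency-domain deghosting operator, and keeps the delay that minimizes the L-1 norm of the autocorrelation of the deghosted trace. The ghost operator, reflection coefficient, stabilization factor, and synthetic trace are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch (not the authors' code): scan candidate ghost delay times and keep the
# one whose deghosted trace has the smallest L1 norm of its autocorrelation.
import numpy as np

def deghost(trace, dt, delay, r=-0.9, eps=0.01):
    """Frequency-domain deghosting with ghost operator G(f) = 1 + r * exp(-2*pi*i*f*delay)."""
    n = len(trace)
    f = np.fft.rfftfreq(n, d=dt)
    G = 1.0 + r * np.exp(-2j * np.pi * f * delay)
    spec = np.fft.rfft(trace)
    inv = np.conj(G) / (np.abs(G) ** 2 + eps)   # stabilized inverse near notch frequencies
    return np.fft.irfft(spec * inv, n)

def estimate_delay(trace, dt, candidates, r=-0.9):
    """Pick the delay minimizing the L1 norm of the deghosted trace autocorrelation."""
    best, best_cost = None, np.inf
    for tau in candidates:
        d = deghost(trace, dt, tau, r)
        cost = np.sum(np.abs(np.correlate(d, d, mode="full")))
        if cost < best_cost:
            best, best_cost = tau, cost
    return best

dt = 0.002                               # 2 ms sampling (assumed)
trace = np.zeros(1000)
trace[100] = 1.0                         # primary
trace[108] = -0.9                        # synthetic ghost 16 ms after the primary
candidates = np.arange(0.004, 0.030, dt)
print(f"estimated ghost delay: {estimate_delay(trace, dt, candidates, r=-0.9):.3f} s")
```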

A Study on A Deep Learning Algorithm to Predict Printed Spot Colors (딥러닝 알고리즘을 이용한 인쇄된 별색 잉크의 색상 예측 연구)

  • Jun, Su Hyeon; Park, Jae Sang; Tae, Hyun Chul
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.2 / pp.48-55 / 2022
  • A brand's color image is often its first impression and an important visual element that leads consumers to purchase the product. To express more effectively what the brand wants to convey through design, the printing market strives to print accurate colors that match the design intent. In offset printing, the process most commonly used, colors are usually printed with CMYK (Cyan, Magenta, Yellow, Key) inks. However, more accurate colors can be printed by manufacturing an ink of the desired color instead of overprinting CMYK dots; the resulting ink is called 'spot color' ink. Spot color ink is manufactured by repeatedly mixing existing inks, and this trial and error increases the manufacturing cost of the ink, resulting in economic loss, while the wasted ink causes environmental pollution. In this study, a deep learning algorithm to predict printed spot colors was designed to solve this problem. The algorithm uses a single DNN (Deep Neural Network) model to predict printed spot colors from information about the paper and the proportions of the inks to be mixed. More than 8,000 spot color ink data points were used for training, and every color was quantified as the reflectance in each of 31 sections of the visible wavelength range. The proposed algorithm predicted more than 80% of spot color inks as very similar colors. The average difference between the actual and predicted colors, computed with the CIE Delta E metric, was 5.29; it is known that when Delta E is less than 10, the difference in printed color is difficult to distinguish with the naked eye. The algorithm is more accurate than previous studies and can be extended flexibly when new inks are added. It can be used directly in industrial settings, reducing the number of trial mixes an operator must make by allowing ink colors to be checked in a virtual environment. This will lower the manufacturing cost of spot color inks, improve working conditions for workers, and help reduce environmental pollution caused by unnecessarily wasted ink.
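
To make the model structure concrete, the sketch below shows one possible single-DNN layout mapping ink proportions (plus a paper descriptor) to 31-band reflectance, together with a CIE76 Delta E helper for evaluation. The layer sizes, input dimensions, and training data are placeholders and are not taken from the paper.

```python
# Illustrative sketch only (not the paper's model): a single DNN predicting 31-band
# spectral reflectance from an ink mixing recipe, plus a CIE76 Delta E helper.
import torch
import torch.nn as nn

N_INKS = 16    # assumed number of base inks (placeholder)
N_PAPER = 4    # assumed one-hot paper descriptor (placeholder)
N_BANDS = 31   # visible spectrum split into 31 sections, as in the abstract

# Single DNN mapping (ink proportions + paper descriptor) -> 31-band reflectance.
model = nn.Sequential(
    nn.Linear(N_INKS + N_PAPER, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_BANDS), nn.Sigmoid(),   # reflectance bounded to [0, 1]
)

def delta_e_cie76(lab_pred, lab_true):
    """CIE76 color difference: Euclidean distance between Lab triples."""
    return torch.linalg.norm(lab_pred - lab_true, dim=-1)

# One training step on a hypothetical batch of (mixing recipe, measured reflectance).
x = torch.rand(32, N_INKS + N_PAPER)
y = torch.rand(32, N_BANDS)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("batch MSE:", float(loss))

# Evaluation would convert predicted and measured spectra to Lab and compare with Delta E:
print("example Delta E:", float(delta_e_cie76(torch.tensor([50.0, 10.0, 10.0]),
                                               torch.tensor([52.0, 12.0, 8.0]))))
```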

The Prime Counting Function (소수계량함수)

  • Lee, Sang-Un; Choi, Myeong-Bok
    • Journal of the Korea Society of Computer and Information / v.16 no.10 / pp.101-109 / 2011
  • Riemann's zeta function $\zeta(s)$ is known to describe the number of primes $\pi(x)$ less than a given number $x$. The prime number theorem provides further approximation functions $\frac{x}{\ln x}$, $Li(x)$, and $R(x)$, whose errors relative to $\pi(x)$ satisfy $R(x) < Li(x) < \frac{x}{\ln x}$. The logarithmic integral function is $Li(x)=\int_{2}^{x}\frac{1}{\ln t}\,dt \sim \frac{x}{\ln x}\sum\limits_{k=0}^{\infty}\frac{k!}{(\ln x)^k}=\frac{x}{\ln x}\left(1+\frac{1!}{(\ln x)^1}+\frac{2!}{(\ln x)^2}+\cdots\right)$. This paper shows that $\pi(x)$ can be represented with a finite form of $Li(x)$ and presents a generalized prime counting function $\sqrt{\alpha x}\pm\beta$. Firstly, $\pi(x)$ is represented by $Li_3(x)=\frac{x}{\ln x}\left(\sum\limits_{k=0}^{\alpha}\frac{k!}{(\ln x)^k}\pm\beta\right)$ and $Li_4(x)=\left\lfloor\frac{x}{\ln x}\left(1+\alpha\frac{k!}{(\ln x)^k}\pm\beta\right)\right\rfloor$, $k\geq 2$, with $0\leq\alpha\leq 2k$. Then $Li_3(x)$ is adjusted so that $\pi(x)\simeq Li_3(x)$ using $\alpha$ and the error compensation value $\beta$. As a result, this paper obtains $Li_3(x)=Li_4(x)=\pi(x)$ for $x=10^k$. Finally, the paper suggests the generalized function $\pi(x)=\sqrt{\alpha x}\pm\beta$, which is superior to Riemann's zeta function in representing the prime counting function.
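
The numerical comparison below is only a sketch of the kind of experiment the abstract describes: it counts primes exactly with a sieve, evaluates $\frac{x}{\ln x}$ and $Li(x)$, and fits a $\sqrt{\alpha x}+\beta$ form by ordinary least squares. The fitting procedure and the chosen range of $x$ are assumptions for illustration, not the paper's method.

```python
# Numerical sketch (assumptions, not the paper's code): compare the exact prime count
# pi(x) with x/ln x, the logarithmic integral Li(x), and a least-squares sqrt(alpha*x)+beta fit.
import numpy as np

def prime_pi(x):
    """Exact prime count via a simple sieve of Eratosthenes."""
    sieve = np.ones(x + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return int(sieve.sum())

def li(x, n=100000):
    """Li(x) = integral from 2 to x of dt / ln t, by the trapezoidal rule."""
    t = np.linspace(2.0, float(x), n)
    y = 1.0 / np.log(t)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t) / 2.0))

xs = [10 ** k for k in range(2, 7)]
pis = np.array([prime_pi(x) for x in xs], dtype=float)

# Least-squares fit of pi(x) ~ sqrt(alpha * x) + beta (illustrative only).
A = np.column_stack([np.sqrt(xs), np.ones(len(xs))])
coef, *_ = np.linalg.lstsq(A, pis, rcond=None)
alpha, beta = coef[0] ** 2, coef[1]

for x, p in zip(xs, pis):
    print(f"x={x:>8}  pi={p:>9.0f}  x/lnx={x / np.log(x):>11.1f}  "
          f"Li={li(x):>11.1f}  sqrt(a*x)+b={np.sqrt(alpha * x) + beta:>11.1f}")
```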

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong; Chung Bu-Heung; Jang Seong-Hyung
    • Geophysics and Geophysical Exploration / v.2 no.1 / pp.26-32 / 1999
  • Among the various seismic data processing steps, velocity analysis is the most time-consuming and labor-intensive. For production seismic data processing, a good velocity analysis tool is required as well as a high-performance computer; the tool must provide fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel, and the interpreter chooses the velocity function by analyzing the plot. The technique is highly dependent on the interpreter's skill and requires considerable human effort. As high-speed graphic workstations become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with the mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed; the analysis must also be carried out by carefully choosing the location of the analysis point and computing the spectrum accurately. The analyzed velocity function must be verified by mute and stack, and the sequence usually must be repeated. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes yield the final stack with a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A technique for transforming the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and the refracted wave, but it has two improvements: no interpolation error and very fast computing time. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. It references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. The program runs in the X-Window/Motif environment, and its menu is designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for producing high-quality seismic sections.
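
The core computation behind the velocity plot described above is the semblance velocity spectrum. The sketch below computes a semblance spectrum for a small synthetic CDP gather using hyperbolic NMO moveout; the gather, offsets, and velocity scan range are assumptions, and the code is unrelated to the xva source.

```python
# Conceptual sketch (not part of xva): semblance-based velocity spectrum for one CDP
# gather using hyperbolic moveout t(x) = sqrt(t0^2 + x^2 / v^2).
import numpy as np

def semblance_spectrum(gather, offsets, dt, velocities, win=5):
    """gather: (n_samples, n_traces) CDP gather; returns (n_samples, n_velocities)."""
    n_samp, n_tr = gather.shape
    t0 = np.arange(n_samp) * dt
    spec = np.zeros((n_samp, len(velocities)))
    for iv, v in enumerate(velocities):
        # NMO-correct every trace to this trial velocity, then measure coherence.
        nmo = np.zeros_like(gather)
        for itr, x in enumerate(offsets):
            t = np.sqrt(t0 ** 2 + (x / v) ** 2)        # reflection traveltime
            idx = np.clip((t / dt).astype(int), 0, n_samp - 1)
            nmo[:, itr] = gather[idx, itr]
        num = np.convolve(np.sum(nmo, axis=1) ** 2, np.ones(win), mode="same")
        den = n_tr * np.convolve(np.sum(nmo ** 2, axis=1), np.ones(win), mode="same")
        spec[:, iv] = num / (den + 1e-12)
    return spec

# Synthetic gather: one flat reflector at t0 = 0.8 s with v = 2000 m/s.
dt, offsets, n_samp = 0.004, np.arange(12) * 100.0, 500
gather = np.zeros((n_samp, len(offsets)))
for itr, x in enumerate(offsets):
    gather[int(np.sqrt(0.8 ** 2 + (x / 2000.0) ** 2) / dt), itr] = 1.0
spec = semblance_spectrum(gather, offsets, dt, np.arange(1500.0, 3000.0, 50.0))
it, iv = np.unravel_index(np.argmax(spec), spec.shape)
print(f"semblance peak at t0={it * dt:.2f} s, v={1500 + 50 * int(iv)} m/s")
```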


Quantification of Temperature Effects on Flowering Date Determination in Niitaka Pear (신고 배의 개화기 결정에 미치는 온도영향의 정량화)

  • Kim, Soo-Ock; Kim, Jin-Hee; Chung, U-Ran; Kim, Seung-Heui; Park, Gun-Hwan; Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / v.11 no.2 / pp.61-71 / 2009
  • Most deciduous trees in the temperate zone are dormant during the winter to overcome the cold and dry environment. Dormancy of deciduous fruit trees is usually separated into a period of rest, governed by physiological conditions, and a period of quiescence, governed by unfavorable environmental conditions. Inconsistent and reduced budburst in pear orchards has recently been reported in South Korea and Japan, and insufficient chilling due to warmer winters is suspected to play a role. An accurate prediction of flowering time under climate change scenarios may be critical to planning adaptation strategies for the pear industry. However, existing methods for predicting budburst depend on spring temperature and neglect the potential effects of warmer winters on rest release and subsequent budburst. We adapted a dormancy clock model that uses daily temperature data to calculate thermal time for simulating the winter phenology of deciduous trees, and tested its feasibility for predicting budburst and flowering of Niitaka pear, one of the favorite cultivars in Korea. To derive model parameter values suitable for Niitaka, the mean time of rest release was estimated by observing budburst of field-collected twigs in a controlled environment. The thermal time (in chill-days) was calculated and accumulated over a predefined temperature range from fall harvest until the chilling requirement (the maximum accumulated chill-days, a negative number) was met. The chilling requirement is then offset by anti-chill days (positive numbers) until the accumulated chill-days become null, which is assumed to be the budburst date. Calculations were repeated with threshold temperatures from $4^{\circ}C$ to $10^{\circ}C$ (at an interval of $0.1^{\circ}C$), and a set of threshold temperature and chilling requirement was selected when the estimated budburst date coincided with the field observation. A heating requirement (accumulation of anti-chill days since budburst) for flowering was also determined from an experiment based on historical observations. The dormancy clock model optimized with the selected parameter values was used to predict flowering of Niitaka pear grown in Suwon for the most recent 9 years. The predicted dates of full bloom were within the range of observed dates, with a root mean square error of 1.9 days.
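
The chill-day bookkeeping described above can be sketched as follows: chill-days accumulate as negative values until the chilling requirement is met (rest release), after which anti-chill days accumulate until the sum returns to zero, which marks the predicted budburst date. The simplified Python sketch below uses daily mean temperatures and placeholder parameter values; the actual model is driven by daily maximum and minimum temperatures and calibrated thresholds.

```python
# Simplified sketch of the chill-day bookkeeping (not the paper's calibrated model):
# chill-days accumulate as negative values while the mean temperature is below a
# threshold; once the chilling requirement is met, anti-chill days (positive) offset the
# total, and budburst is flagged when the sum returns to zero. Threshold, requirement
# and the temperature series are placeholder assumptions.
import numpy as np

def predict_budburst(daily_mean_temp, threshold=7.0, chill_requirement=-100.0):
    total, rest_released = 0.0, False
    for day, t in enumerate(daily_mean_temp):
        if not rest_released:
            if t < threshold:
                total += t - threshold           # chill-days are negative
            if total <= chill_requirement:
                rest_released = True             # rest release reached
        else:
            if t > threshold:
                total += t - threshold           # anti-chill days are positive
            if total >= 0.0:
                return day                       # predicted budburst (day index)
    return None

# Placeholder temperature series: a cold winter followed by a warming spring.
days = np.arange(180)
temps = 2.0 + 10.0 * np.sin(2 * np.pi * (days - 60) / 365.0) \
        + np.random.default_rng(1).normal(0, 2, 180)
print("predicted budburst on day index:", predict_budburst(temps))
```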

Real-time Nutrient Monitoring of Hydroponic Solutions Using an Ion-selective Electrode-based Embedded System (ISE 기반의 임베디드 시스템을 이용한 실시간 수경재배 양액 모니터링)

  • Han, Hee-Jo; Kim, Hak-Jin; Jung, Dae-Hyun; Cho, Woo-Jae; Cho, Yeong-Yeol; Lee, Gong-In
    • Journal of Bio-Environment Control / v.29 no.2 / pp.141-152 / 2020
  • The rapid on-site measurement of hydroponic nutrients allows for more efficient use of crop fertilizers. This paper reports on the development of an embedded on-site system consisting of multiple ion-selective electrodes (ISEs) for the real-time measurement of macronutrient concentrations in hydroponic solutions. The system included a combination of PVC ISEs for the detection of NO3, K, and Ca ions, a cobalt electrode for the detection of H2PO4, a double-junction reference electrode, a solution container, and a sampling system consisting of pumps and valves. An Arduino Due board was used to collect data and to control the sample volume. Prior to the measurement of each sample, a two-point normalization method was employed to adjust the sensitivity and offset of each electrode, minimizing the potential drift that might occur during continuous measurement. The predictive capabilities of the NO3 and K ISEs based on PVC membranes were satisfactory, producing results in close agreement with those of standard analyzers (R2 = 0.99). Although the Ca ISE fabricated with Ca ionophore II underestimated the Ca concentration by an average of 55%, its strong linear response (R2 > 0.84) makes it possible for the embedded system to be used for hydroponic NO3, K, and Ca sensing. The cobalt-rod-based phosphate electrodes exhibited a relatively high error of 24.7±9.26% in the phosphate concentration range of 45 to 155 mg/L compared to standard methods, due to inconsistent signal readings between replicates, illustrating the need for further research on the signal conditioning of cobalt electrodes to improve their predictive ability in hydroponic P sensing.
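
The two-point normalization step mentioned above can be illustrated with a simple Nernstian calibration: the electrode slope and offset are re-estimated from two standard solutions and then used to convert a measured EMF into a concentration. The sketch below uses hypothetical calibration values and is not the system's firmware.

```python
# Hedged sketch of two-point normalization (not the actual firmware): re-estimate the
# electrode slope and offset from two standards, then invert the Nernst relation
# E = E0 + S * log10(C) to estimate an unknown concentration. All values are placeholders.
import math

def two_point_calibration(c1, e1, c2, e2):
    """Return (slope, offset) from two standards of known concentration (mg/L) and EMF (mV)."""
    slope = (e2 - e1) / (math.log10(c2) - math.log10(c1))
    offset = e1 - slope * math.log10(c1)
    return slope, offset

def to_concentration(e_measured, slope, offset):
    """Invert the calibrated Nernst relation to estimate a concentration."""
    return 10 ** ((e_measured - offset) / slope)

# Example: hypothetical NO3 ISE readings for two standards, then an unknown sample.
slope, offset = two_point_calibration(c1=10.0, e1=210.0, c2=100.0, e2=155.0)
print(f"slope = {slope:.1f} mV/decade, offset = {offset:.1f} mV")
print(f"sample at 170 mV -> {to_concentration(170.0, slope, offset):.1f} mg/L (estimated)")
```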

Capacity Comparison of Two Uplink OFDMA Systems Considering Synchronization Error among Multiple Users and Nonlinear Distortion of Amplifiers (사용자간 동기오차와 증폭기의 비선형 왜곡을 동시에 고려한 두 상향링크 OFDMA 기법의 채널용량 비교 분석)

  • Lee, Jin-Hui; Kim, Bong-Seok; Choi, Kwonhue
    • The Journal of Korean Institute of Communications and Information Sciences / v.39A no.5 / pp.258-270 / 2014
  • In this paper, we investigate the channel capacity of two uplink OFDMA (Orthogonal Frequency Division Multiple Access) schemes that are robust to access timing offset (TO) among multiple users: ZCZ (Zero Correlation Zone) code time-spread OFDMA and sparse SC-FDMA (Single Carrier Frequency Division Multiple Access). To reflect practical conditions, we consider not only the access TO among users but also the peak-to-average power ratio (PAPR), one of the key issues in uplink OFDMA. When there is an access TO among users, a signal amplified by power control may cause severe interference to the signals of other users. Meanwhile, a signal amplified according to the distance between the user and the base station may be distorted by the limits of the amplifier, degrading performance. To achieve the maximum channel capacity, we investigate combinations of transmit power, the so-called adaptive scaling factor (ASF), through numerical simulations. We confirm that the channel capacity with ASF increases compared to the case that considers only distance, i.e., ASF = 1. From the simulation results, at high signal-to-noise ratio (SNR), ZCZ code time-spread OFDMA achieves higher channel capacity than sparse block SC-FDMA; at low SNR, sparse block SC-FDMA performs better than ZCZ time-spread OFDMA.
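
As a toy illustration of the ASF search described above, the sketch below models a two-user uplink in which a timing offset leaks a fraction of each user's power into the other user's signal and the amplifier clips power beyond a saturation level, and it scans a scaling factor on one user's transmit power to maximize the sum of Shannon capacities. The leakage fraction, channel gains, noise power, and saturation level are assumptions, not values from the paper's simulation.

```python
# Toy sketch of the ASF search (not the paper's simulation): a timing offset leaks a
# fraction of each user's power into the other user's band, and the amplifier clips
# power beyond a saturation level, turning the excess into distortion noise. Scan an
# adaptive scaling factor (ASF) on user 1 and pick the value maximizing sum capacity.
import numpy as np

def sum_capacity(asf, gain=(1.0, 0.6), leak=0.1, noise=0.05, sat=2.0):
    """Sum of log2(1 + SINR) for two users under TO leakage and amplifier clipping."""
    p_tx = np.array([asf, 1.0])                               # scaled transmit powers
    p_useful = np.minimum(p_tx, sat) * np.array(gain)         # power within amplifier limits
    p_dist = np.maximum(p_tx - sat, 0.0) * np.array(gain)     # clipped power -> distortion
    sinr1 = p_useful[0] / (leak * p_useful[1] + p_dist[0] + noise)
    sinr2 = p_useful[1] / (leak * p_useful[0] + p_dist[1] + noise)
    return np.log2(1 + sinr1) + np.log2(1 + sinr2)

asfs = np.linspace(0.2, 4.0, 77)
caps = np.array([sum_capacity(a) for a in asfs])
best = asfs[np.argmax(caps)]
print(f"best ASF ~= {best:.2f}: {caps.max():.2f} bit/s/Hz "
      f"(vs. {sum_capacity(1.0):.2f} bit/s/Hz at ASF = 1)")
```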