• Title/Summary/Keyword: Non-Gaussian


Gaussian Filtering Effects on Brain Tissue-masked Susceptibility Weighted Images to Optimize Voxel-based Analysis (화소 분석의 최적화를 위해 자화감수성 영상에 나타난 뇌조직의 가우시안 필터 효과 연구)

  • Hwang, Eo-Jin;Kim, Min-Ji;Jahng, Geon-Ho
    • Investigative Magnetic Resonance Imaging
    • /
    • v.17 no.4
    • /
    • pp.275-285
    • /
    • 2013
  • Purpose: The objective of this study was to investigate the effects of different smoothing kernel sizes on brain tissue-masked susceptibility-weighted images (SWI) obtained from normal elderly subjects, using voxel-based analyses. Materials and Methods: Twenty healthy volunteers (mean age ± SD = 67.8 ± 6.09 years; 14 females, 6 males) were studied after informed consent. A fully first-order flow-compensated three-dimensional (3D) gradient-echo sequence was run to obtain axial magnitude and phase images for generating the SWI data. In addition, sagittal 3D T1-weighted images were acquired with a magnetization-prepared rapid acquisition of gradient-echo sequence for brain tissue segmentation and image registration. Both paramagnetically (PSWI) and diamagnetically (NSWI) phase-masked SWI data were obtained with non-brain tissues masked out. Finally, both tissue-masked PSWI and NSWI data were smoothed using isotropic Gaussian kernels of 0, 2, 4, and 8 mm. Voxel-based comparisons between PSWI and NSWI were performed using a paired t-test for each smoothing kernel size. Results: The significance of the comparisons increased with increasing kernel size. Signals from NSWI were greater than those from PSWI. A smoothing kernel size of 4 mm was optimal for the voxel-based comparisons. Bilaterally different areas were found in multiple brain regions. Conclusion: The paramagnetic (positive) phase mask reduced signals from high-susceptibility areas. To minimize partial volume effects and contributions from large vessels, voxel-based analysis of SWI should be performed with non-brain components masked out.
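The smoothing step can be sketched with SciPy's `gaussian_filter`; the FWHM-to-sigma conversion and the 1 mm voxel size below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_swi(volume, fwhm_mm, voxel_size_mm):
    """Apply an isotropic Gaussian smoothing kernel to a 3D SWI volume.

    fwhm_mm : kernel full width at half maximum in millimetres
              (0 mm means no smoothing, as in the paper's comparison).
    """
    if fwhm_mm == 0:
        return volume.copy()
    # Convert FWHM to the standard deviation gaussian_filter expects,
    # expressed in voxels: sigma = FWHM / (2 * sqrt(2 * ln 2)) / voxel size.
    sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_size_mm
    return gaussian_filter(volume, sigma=sigma_vox)

# Compare the kernel sizes used in the study: 0, 2, 4, and 8 mm.
rng = np.random.default_rng(0)
vol = rng.normal(size=(32, 32, 16))           # toy noise volume
smoothed = {k: smooth_swi(vol, k, voxel_size_mm=1.0) for k in (0, 2, 4, 8)}
# Larger kernels suppress more voxel-level noise, so variance decreases.
variances = [smoothed[k].var() for k in (0, 2, 4, 8)]
```

The shape is preserved at every kernel size, so the smoothed volumes can be fed directly into a voxel-wise paired t-test.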

Implementation and Performance Comparison for an Underwater Robot Localization Methods Using Seabed Terrain Information (해저 지형정보를 이용하는 수중 로봇 위치추정 방법의 구현 및 성능 비교)

  • Noh, Sung Woo;Ko, Nak Yong;Choi, Hyun Taek
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.1
    • /
    • pp.70-77
    • /
    • 2015
  • This paper proposes an application of the unscented Kalman filter (UKF) to localization of an underwater robot. The method compares bathymetric measurements from the robot with seabed terrain information. To measure the bathymetric range to the seabed, it uses a DVL, which typically yields four range data together with the robot's velocity. The usual extended Kalman filter is not appropriate for terrain navigation, since it is not feasible to derive a Jacobian for the bathymetric range measurement. Although the particle filter (PF) requires no Jacobian and can deal with nonlinear, non-Gaussian systems and measurements, it suffers from a heavy computational burden. The paper compares the localization performance and computation time of the UKF and PF approaches. Although some UKF methods have been used for underwater navigation, application of the UKF to bathymetric localization is rare. In particular, the proposed method uses only four range data, whereas many bathymetric navigation methods have used multibeam sonar, which yields hundreds of scanned range data. The results show the feasibility of the UKF approach for terrain-based navigation using a small number of range data.
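The core of the UKF, the unscented transform that avoids the Jacobian mentioned above, can be sketched as follows; the flat-seabed range function `h`, the state, and the covariances are hypothetical stand-ins for the paper's DVL measurement model:

```python
import numpy as np

def unscented_transform(mean, cov, h, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function h
    using 2n+1 sigma points, as the UKF does for a bathymetric range
    measurement whose Jacobian is impractical to derive."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    pts = [mean] + [mean + sqrt_cov[:, i] for i in range(n)] \
                 + [mean - sqrt_cov[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wm[0] = lam / (n + lam)
    wc = wm.copy()
    wc[0] += 1.0 - alpha**2 + beta
    ys = np.array([h(p) for p in pts])
    y_mean = wm @ ys
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(wc, ys))
    return y_mean, y_cov

# Hypothetical range to a flat seabed 50 m below the surface; for brevity
# only the vertical component is used, so h happens to be linear here.
h = lambda x: np.array([50.0 - x[2]])
mean = np.array([10.0, -3.0, 20.0])   # robot position (x, y, depth)
cov = np.diag([4.0, 4.0, 1.0])
y_mean, y_cov = unscented_transform(mean, cov, h)
```

For a linear `h` the transform is exact, which makes the sketch easy to check: the predicted range is 30 m with variance 1.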

Blind Watermarking by Using Circular Input Method and Binary Image (이진영상과 Circular Input 방식을 이용한 Blind 워터마킹)

  • Kim Tae-Ho;Kim Young-Hee;Jin Kyo-Hong;Ko Bong-Jin;Park Mu-Hun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.8
    • /
    • pp.1407-1413
    • /
    • 2006
  • The field of medical imaging has been digitized with the development of computers and the digitalization of medical instruments. As a result, problems have arisen, such as illegal copying of medical images and disputes over their ownership. Digital watermarking is therefore used to determine whether the data have been modified, and to protect both the property rights in medical images and the privacy of patients. The previously proposed non-blind and blind methods have two problems: one requires the original image, and the other uses a Gaussian watermark. This paper proposes a new blind watermarking scheme using binary images, so that the extracted watermark can be recognized easily. In this algorithm, a binary-image watermark is wavelet-transformed, and the transformed watermark is inserted into the mid-frequency bands of the original image's frequency domain by the circular input method. The method is lossless when the image has not been attacked, so the watermark can be extracted perfectly. The maximum PSNR value is improved by 3.35 dB. The algorithm will be extended to gray-level and color images.
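A minimal sketch of wavelet-domain binary watermarking in the spirit described here, assuming a one-level Haar transform and simple scan-order insertion into the HL (mid-frequency) subband in place of the paper's circular input method:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar DWT: returns LL, LH, HL, HH subbands."""
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    HL = (a[:, 0::2] - a[:, 1::2]) / 2.0
    LH = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + HL, LL - HL
    d[:, 0::2], d[:, 1::2] = LH + HH, LH - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def embed(img, bits, strength=8.0):
    """Embed binary watermark bits into mid-band (HL) coefficients."""
    LL, LH, HL, HH = haar2d(img)
    flat = HL.flatten()
    flat[: bits.size] = np.where(bits == 1, strength, -strength)
    return ihaar2d(LL, LH, flat.reshape(HL.shape), HH)

def extract(img, n_bits):
    """Blind extraction: only the coefficient signs are needed,
    not the original image."""
    _, _, HL, _ = haar2d(img)
    return (HL.flatten()[:n_bits] > 0).astype(int)

rng = np.random.default_rng(3)
img = rng.uniform(0, 255, size=(64, 64))
bits = rng.integers(0, 2, size=32)
marked = embed(img, bits)
recovered = extract(marked, 32)
```

Because the transform pair is exactly invertible, extraction without any attack is perfect, mirroring the lossless claim in the abstract.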

Recent Research Trends of Process Monitoring Technology: State-of-the Art (공정 모니터링 기술의 최근 연구 동향)

  • Yoo, ChangKyoo;Choi, Sang Wook;Lee, In-Beum
    • Korean Chemical Engineering Research
    • /
    • v.46 no.2
    • /
    • pp.233-247
    • /
    • 2008
  • Process monitoring technology detects faults and process changes that occur unpredictably, making it possible to identify and remove their causes and thereby maintain stable operation and high-quality products. Statistical process monitoring based on data has the merit of easily supervising a process with statistics, and can be used to analyze process data when high-quality data are available. Because a real process has the inherent characteristics of nonlinearity, non-Gaussianity, multiple operation modes, sensor faults, and process changes, however, the conventional multivariate statistical process monitoring methods give inefficient results, degraded supervision performance, or often unreliable monitoring results. Because the conventional methods cannot properly supervise such processes, several advanced monitoring methods have been developed recently. This review introduces the theories and application results of several notable monitoring methods: nonlinear monitoring with kernel principal component analysis (KPCA), an adaptive model for process changes, a mixture model for multiple operation modes, and sensor fault detection and reconstruction, all intended to address the weak points of the conventional methods.
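As a baseline for the conventional multivariate statistical monitoring that the review starts from, a linear PCA model with Hotelling's T² and SPE (Q) charts can be sketched as follows; the data, component count, and fault vector are illustrative assumptions, not from the review:

```python
import numpy as np

def fit_pca_monitor(X, n_comp):
    """Fit a conventional PCA monitoring model on normal-operation data."""
    mu, sd = X.mean(0), X.std(0)
    Z = (X - mu) / sd
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_comp].T                         # loading matrix
    lam = (S[:n_comp] ** 2) / (len(X) - 1)    # retained eigenvalues
    return mu, sd, P, lam

def t2_spe(x, mu, sd, P, lam):
    """Hotelling's T^2 (within-model) and SPE/Q (residual) statistics."""
    z = (x - mu) / sd
    t = P.T @ z
    t2 = float(np.sum(t**2 / lam))
    resid = z - P @ t
    spe = float(resid @ resid)
    return t2, spe

# Normal data: x2 tracks 2*x1, plus an independent variable x3.
rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
X = np.column_stack([x1,
                     2 * x1 + 0.1 * rng.normal(size=500),
                     rng.normal(size=500)])
model = fit_pca_monitor(X, n_comp=2)
t2_ok, spe_ok = t2_spe(X[0], *model)
# A fault that breaks the x2 = 2*x1 correlation inflates the SPE statistic.
t2_f, spe_f = t2_spe(np.array([1.0, -2.0, 0.0]), *model)
```

The nonlinear, adaptive, and mixture-model extensions surveyed in the review replace or augment exactly this T²/SPE machinery.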

Palatability Grading Analysis of Hanwoo Beef using Sensory Properties and Discriminant Analysis (관능특성 및 판별함수를 이용한 한우고기 맛 등급 분석)

  • Cho, Soo-Hyun;Seo, Gu-Reo-Un-Dal-Nim;Kim, Dong-Hun;Kim, Jae-Hee
    • Food Science of Animal Resources
    • /
    • v.29 no.1
    • /
    • pp.132-139
    • /
    • 2009
  • The objective of this study was to identify the most effective method for palatability grading of Hanwoo beef by comparing the results of discriminant analyses with sensory data. The sensory data were obtained from 1,300 consumers who evaluated the tenderness, juiciness, flavor-likeness, and overall acceptability of Hanwoo beef samples prepared by boiling, roasting, and grilling. For discriminant analysis with the single factor of overall acceptability, linear discriminant functions and a non-parametric discriminant function with a Gaussian kernel were estimated. The linear discriminant functions were simple and easy to understand, while the non-parametric discriminant functions were not explicit and posed the problem of selecting a kernel function and bandwidth. With the three palatability factors of tenderness, juiciness, and flavor-likeness, canonical discriminant analysis was used, and classification ability was assessed by the accurate classification rate and the error rate. Canonical discriminant analysis needs no specific distributional assumptions, using only principal components and canonical correlations. It also yields a function of the three factors (tenderness, juiciness, and flavor-likeness), and its accurate classification rate was similar to that of the other discriminant methods. Therefore, canonical discriminant analysis was the most appropriate method for analyzing the palatability grading of Hanwoo beef.
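Canonical (Fisher) discriminant analysis with nearest-centroid classification can be sketched as below; the three simulated "grades" and their score distributions are hypothetical, not the study's consumer data:

```python
import numpy as np

def canonical_discriminant(X, y):
    """Canonical (Fisher) discriminant analysis: find directions that
    maximise between-class scatter relative to within-class scatter."""
    classes = np.unique(y)
    mu = X.mean(0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in classes:
        Xc = X[y == c]
        d = Xc - Xc.mean(0)
        Sw += d.T @ d                                  # within-class scatter
        m = (Xc.mean(0) - mu)[:, None]
        Sb += len(Xc) * (m @ m.T)                      # between-class scatter
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1]
    W = vecs.real[:, order[: len(classes) - 1]]        # canonical variates
    centroids = {c: (X[y == c] @ W).mean(0) for c in classes}
    return W, centroids

def classify(X, W, centroids):
    """Assign each sample to the nearest class centroid in canonical space."""
    Z = X @ W
    labels = list(centroids)
    D = np.stack([np.linalg.norm(Z - centroids[c], axis=1) for c in labels])
    return np.array(labels)[D.argmin(0)]

# Hypothetical palatability grades 1-3 driven by three sensory scores
# (tenderness, juiciness, flavor-likeness).
rng = np.random.default_rng(4)
y = np.repeat([1, 2, 3], 50)
X = y[:, None] + 0.5 * rng.normal(size=(150, 3))
W, centroids = canonical_discriminant(X, y)
accuracy = float((classify(X, W, centroids) == y).mean())
```

With three classes the canonical space has at most two dimensions, which is what makes this approach convenient for grading on a few sensory factors.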

A Comparison on the Positioning Accuracy from Different Filtering Strategies in IMU/Ranging System (IMU/Range 시스템의 필터링기법별 위치정확도 비교 연구)

  • Kwon, Jay-Hyoun;Lee, Jong-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.26 no.3
    • /
    • pp.263-273
    • /
    • 2008
  • The precision of a sensor's position is particularly important in applications such as road extraction and digital map generation. In general, various ranging systems such as GPS, total stations, and laser rangers have been employed to determine the sensor's position. However, a ranging system's signal may be blocked or degraded by environmental circumstances, and it has low temporal resolution. To overcome these limitations, an IMU/range integrated system can be introduced. In this paper, after pointing out the limitations of the extended Kalman filter, which has been the workhorse of the navigation and geodetic communities, two sampling-based nonlinear filters are implemented and compared with the extended Kalman filter in a simulation test: the sigma-point Kalman filter, which uses a nonlinear transformation of carefully chosen sigma points, and the particle filter, which drops the Gaussian assumption. GPS and a total station were selected as the ranging systems, and three grades of IMU (IMU400C, HG1700, LN100) were chosen for the simulation. For all ranging systems and IMUs, the sampling-based nonlinear filters yielded improved position results, and their superiority is more noticeable at low temporal resolution, such as 5 s. Therefore, nonlinear filters are recommended for determining the sensor's position with low-grade position sensors.
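A minimal bootstrap particle filter of the kind compared here can be sketched as follows, assuming a one-dimensional vehicle and a range-to-beacon measurement |x| as a toy stand-in for the paper's IMU/range setup:

```python
import numpy as np

def particle_filter_step(particles, weights, control, z, rng,
                         q_std=0.5, r_std=1.0):
    """One bootstrap particle-filter cycle: predict, weight by the range
    likelihood, and resample. No Jacobian or Gaussian assumption on the
    measurement is needed."""
    # Predict: propagate each particle through the noisy motion model.
    particles = particles + control + rng.normal(0.0, q_std, particles.shape)
    # Update: weight by the likelihood of the nonlinear range measurement
    # h(x) = |x|, the distance to a beacon at the origin.
    pred = np.abs(particles)
    weights = weights * np.exp(-0.5 * ((z - pred) / r_std) ** 2)
    weights /= weights.sum()
    # Resample (systematic) to combat particle degeneracy.
    positions = (rng.random() + np.arange(len(particles))) / len(particles)
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, len(particles) - 1)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(2)
particles = rng.uniform(0.0, 20.0, 2000)
weights = np.full(2000, 1.0 / 2000)
true_x = 5.0
for _ in range(30):
    true_x += 1.0                                 # vehicle moves 1 m/step
    z = abs(true_x) + rng.normal(0.0, 1.0)        # noisy range measurement
    particles, weights = particle_filter_step(particles, weights, 1.0, z, rng)
estimate = float(np.mean(particles))
```

The per-step cost grows linearly with the particle count, which is the computational burden the paper weighs against the sigma-point filter's fixed 2n+1 evaluations.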

Improvement of the Dose Calculation Accuracy Using MVCBCT Image Processing (Megavoltage Cone-Beam CT 영상의 변환을 이용한 선량 계산의 정확성 향상)

  • Kim, Min-Joo;Cho, Woong;Kang, Young-Nam;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.23 no.1
    • /
    • pp.62-69
    • /
    • 2012
  • Dose re-calculation using megavoltage cone-beam CT (MVCBCT) images is an essential step in adaptive radiation therapy (ART). The purpose of this study is to improve the accuracy of dose re-calculation from MVCBCT images by applying an intensity calibration method, a three-dimensional rigid-body transform, and filtering. The 3D rigid-body transform and a Gaussian smoothing filter were applied to MVCBCT images of a Rando phantom to reduce image-orientation error and noise. Then, to obtain the predefined calibration level, cheese-phantom images were acquired with kilovoltage CT (kV CT) and MVCBCT. From these images, a calibration table for MVCBCT was defined from the relationship between the Hounsfield units (HUs) of the kV CT and MVCBCT images at the same electron-density plugs. The intensity of the MVCBCT Rando-phantom images was then calibrated using this table to match the intensity of the kV CT images, so that the two image sets shared the same intensity range, as if they had been obtained from the same modality. Finally, dose calculations using kV CT and MVCBCT with and without intensity calibration were performed in a radiation treatment planning system. As a result, the percentage difference between dose distributions calculated on kV CT and on intensity-calibrated MVCBCT was reduced compared with the difference between kV CT and uncalibrated MVCBCT. For the head-and-neck and lung images, the percentage differences between kV CT and uncalibrated MVCBCT were 1.08% and 2.44%, respectively. In summary, our method quantitatively improved the accuracy of dose calculation and could be a useful way to enhance dose-calculation accuracy with MVCBCT images.
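The intensity calibration step can be sketched as a piecewise-linear lookup through phantom plug measurements; the MVCBCT/kV CT pairs below are made-up illustrative numbers, not the paper's calibration table:

```python
import numpy as np

# Hypothetical calibration pairs measured on the cheese phantom's electron
# density plugs: mean MVCBCT intensity vs. kV CT Hounsfield units.
mvcbct_means = np.array([-850.0, -480.0, -60.0, 0.0, 220.0, 900.0])
kvct_hu      = np.array([-790.0, -420.0, -35.0, 0.0, 280.0, 1100.0])

def calibrate(mvcbct_image):
    """Map MVCBCT intensities onto the kV CT HU scale by piecewise-linear
    interpolation through the phantom calibration table, so the planning
    system's HU-to-density conversion applies to both modalities."""
    return np.interp(mvcbct_image, mvcbct_means, kvct_hu)

img = np.array([[-850.0, 0.0],
                [220.0, 900.0]])
hu = calibrate(img)
```

Because `np.interp` passes exactly through the table points, any plug imaged in MVCBCT maps back to its measured kV CT value.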

Population Phenology and an Early Season Adult Emergence model of Pumpkin Fruit Fly, Bactrocera depressa (Diptera: Tephritidae) (호박과실파리 발생생태 및 계절초기 성충우화시기 예찰 모형)

  • Kang, Taek-Jun;Jeon, Heung-Yong;Kim, Hyeong-Hwan;Yang, Chang-Yeol;Kim, Dong-Soon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.10 no.4
    • /
    • pp.158-166
    • /
    • 2008
  • The pumpkin fruit fly, Bactrocera depressa (Diptera: Tephritidae), is one of the most important pests of cucurbit plants. This study was conducted to investigate the basic ecology of B. depressa and to develop a forecasting model for predicting the time of adult emergence in early season. In green-pumpkin-producing farms, punctures caused by the oviposition of B. depressa first occurred between mid- and late July, peaked in late August, decreased in mid-September, and disappeared in late September; during this period the oviposition activity of B. depressa is considered active. In full-ripened-pumpkin-producing farms, damaged fruits increased abruptly from early August, because decay of pumpkins caused by larval development began at that time. B. depressa produced a mean of 2.2 oviposition punctures per fruit and a total of 28.8-29.8 eggs per fruit. Adult emergence from overwintering pupae, monitored using a ground emergence trap, was first observed between mid- and late May and peaked from late May to early June. The development time from overwintering pupa to adult emergence decreased with increasing temperature: 59.0 days at 15°C, 39.3 days at 20°C, 25.8 days at 25°C, and 21.4 days at 30°C. The pupae did not develop to adults at 35°C. The lower developmental threshold temperature was estimated as 6.8°C by linear regression, and the thermal constant was 482.3 degree-days. A non-linear Gaussian model explained the relationship between development rate and temperature well, and the Weibull function provided a good fit to the distribution of development times of overwintering pupae. The date of 50% adult emergence predicted by a degree-day model deviated by one day from the observed date. Also, the output of a rate-summation model, consisting of the developmental model and the Weibull function, followed the actual cumulative-frequency curve of B. depressa adult emergence well. Consequently, the present results can be used to establish a management strategy for B. depressa.
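The degree-day forecast can be sketched directly from the paper's estimates (6.8°C lower developmental threshold, 482.3 degree-days thermal constant); the daily mean temperature series below is synthetic:

```python
# Degree-day accumulation with the paper's estimates: lower developmental
# threshold 6.8 C and thermal constant 482.3 degree-days from overwintering
# pupae to adult emergence.

THRESHOLD_C = 6.8
THERMAL_CONSTANT_DD = 482.3

def degree_days(daily_mean_temps):
    """Daily degree-day contributions above the lower threshold."""
    return [max(0.0, t - THRESHOLD_C) for t in daily_mean_temps]

def emergence_day(daily_mean_temps):
    """First day (1-indexed) on which cumulative degree-days reach the
    thermal constant, i.e., the predicted adult-emergence date."""
    total = 0.0
    for day, dd in enumerate(degree_days(daily_mean_temps), start=1):
        total += dd
        if total >= THERMAL_CONSTANT_DD:
            return day
    return None  # not enough heat accumulated in the record

# A synthetic spring warming from 8 C to ~23 C over 150 days.
temps = [8.0 + 0.1 * d for d in range(150)]
day = emergence_day(temps)
```

A rate-summation forecast replaces the linear `max(0, t - threshold)` term with the fitted nonlinear development-rate curve, which is why it tracks the emergence distribution more closely.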

Laryngeal Cancer Screening using Cepstral Parameters (켑스트럼 파라미터를 이용한 후두암 검진)

  • 이원범;전경명;권순복;전계록;김수미;김형순;양병곤;조철우;왕수건
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.14 no.2
    • /
    • pp.110-116
    • /
    • 2003
  • Background and Objectives: Laryngeal cancer discrimination using voice signals is a non-invasive method that can be carried out rapidly and simply without discomfort to the patient. If appropriate analysis parameters and classifiers are developed, the method can be used effectively in various applications, including telemedicine. This study examines voice-analysis parameters used to discriminate laryngeal diseases from voice signals, and evaluates the laryngeal cancer discrimination performance of a Gaussian mixture model (GMM) classifier based on statistical modelling of those parameters. Materials and Methods: The Multi-Dimensional Voice Program (MDVP) parameters, which have been widely used for analyzing laryngeal cancer voices, sometimes fail to analyze the voice of a laryngeal cancer patient whose periodicity is seriously damaged; accordingly, a new, highly reliable analysis method is needed for the voice signals that the MDVP cannot handle. For the laryngeal cancer discrimination experiments, the authors used three types of voices collected at the Department of Otorhinolaryngology, Pusan National University Hospital: voice data from 50 normal males, 50 males with benign laryngeal diseases, and 105 males with laryngeal cancer. The experiments also included voice data from 11 males with laryngeal cancer that the MDVP could not analyze; since there were only 11 such voices, they were used only for discrimination. Only the monosyllabic vowel /a/ was used as voice data. This study examined the linear predictive cepstral coefficients (LPCC) and the mel-frequency cepstral coefficients (MFCC), the two major cepstral analysis methods in acoustic recognition. 
Results: The results showed that mel-frequency scaling is effective in acoustic recognition but not useful for laryngeal cancer discrimination. Accordingly, the linear frequency cepstral coefficients (LFCC), which exclude the mel-frequency scaling from the MFCC, were introduced. The LFCC showed better discrimination performance than the MFCC in predicting laryngeal cancer. Conclusion: The parameters applied in this study can accurately discriminate even terminal laryngeal cancer, in which periodicity is disturbed. Future studies on various classification algorithms and on parameters representing the pathophysiology of the vocal cords should make it possible to discriminate benign laryngeal diseases as well.
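The LFCC computation described in the Results, i.e., the MFCC pipeline with the mel warping replaced by a linear frequency axis, can be sketched as follows; the frame length, filter count, and synthetic /a/-like frame are assumptions for illustration:

```python
import numpy as np
from scipy.fft import dct

def linear_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced linearly in frequency (the LFCC variant:
    the mel scaling of the MFCC filterbank is removed)."""
    edges = np.linspace(0, sr / 2, n_filters + 2)
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    return fb

def lfcc(frame, sr, n_filters=24, n_ceps=12):
    """Linear-frequency cepstral coefficients of one windowed frame:
    power spectrum -> linear filterbank -> log -> DCT."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    energies = linear_filterbank(n_filters, len(frame), sr) @ spec
    return dct(np.log(energies + 1e-10), type=2, norm='ortho')[:n_ceps]

# A sustained /a/-like frame: harmonic series on a 120 Hz fundamental.
sr, n = 16000, 512
t = np.arange(n) / sr
frame = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 6))
coeffs = lfcc(frame, sr)
```

Swapping `np.linspace` for a mel-spaced edge array turns the same pipeline back into MFCC, which makes the two parameter sets directly comparable.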


The Principles of Fractal Geometry and Its Applications for Pulp & Paper Industry (펄프·제지 산업에서의 프랙탈 기하 원리 및 그 응용)

  • Ko, Young Chan;Park, Jong-Moon;Shin, Soo-Jung
    • Journal of Korea Technical Association of The Pulp and Paper Industry
    • /
    • v.47 no.4
    • /
    • pp.177-186
    • /
    • 2015
  • Until Mandelbrot introduced the concepts of fractal geometry and fractal dimension in the early 1970s, it was generally considered that the geometry of nature was too complex and irregular to describe analytically or mathematically. A fractal dimension is a non-integer number such as 0.5, 1.5, or 2.5, rather than one of the integers of traditional Euclidean geometry: 0 for a point, 1 for a line, 2 for an area, and 3 for a volume. Since his pioneering work, the geometry of nature has been found to be fractal; examples include mountains, coastlines, clouds, lightning, earthquakes, turbulence, trees, and plants. Even human organs are found to be fractal. This suggests that fractal geometry is the rule of nature rather than the exception. Fractal geometry has a hierarchical structure consisting of elements of the same shape but different sizes, from the largest to the smallest, so it can be characterized by self-similarity and hierarchy. A process requires driving energy to proceed; otherwise, it stops. A hierarchical structure is considered ideal for generating such a driving force, which explains why natural processes and phenomena such as lightning, thunderstorms, earthquakes, and turbulence have fractal geometry. It is not surprising, then, that even human organs such as the brain, the lung, and the circulatory system have fractal geometry. Until now, a normal (Gaussian) frequency distribution has commonly been used to describe the frequencies of an object. However, a log-normal frequency distribution is found most often in natural phenomena and chemical processes such as corrosion and coagulation, and it can be shown mathematically that if an object has a log-normal frequency distribution, it has fractal geometry: the two go hand in hand. Lastly, the application of fractal principles to the pulp and paper industry is discussed. The principles should be applicable to characterizing surface roughness, particle-size distributions, and formation, as well as to wet-end chemistry for ideal mixing; felt and fabric design for the papermaking process; dewatering, drying, and creping; and post-converting operations such as laminating, embossing, and printing.
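The fractal dimension discussed here can be estimated numerically by box counting; the sketch below, under arbitrary choices of box sizes and sampling density, checks that a smooth curve comes out close to the Euclidean value of 1:

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the fractal (box-counting) dimension of a 2-D point set:
    count occupied boxes N(s) at each box size s and fit the slope of
    log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        # A box is identified by its integer grid coordinates at scale s.
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A smooth curve is 1-dimensional; its box-counting estimate should be ~1.
t = np.linspace(0.0, 1.0, 20000)
curve = np.column_stack([t, 0.5 * t])
dim = box_counting_dimension(curve, sizes=[0.1, 0.05, 0.025, 0.0125])
```

Applied to a digitized paper-surface profile or a particle-size boundary, a non-integer slope on the same log-log fit is the signature of fractal structure.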