• Title/Summary/Keyword: Gaussian function


Gaussian Noise Reduction Method using Adaptive Total Variation : Application to Cone-Beam Computed Tomography Dental Image (적응형 총변이 기법을 이용한 가우시안 잡음 제거 방법: CBCT 치과 영상에 적용)

  • Kim, Joong-Hyuk;Kim, Jung-Chae;Kim, Kee-Deog;Yoo, Sun-K.
    • Journal of the Institute of Electronics Engineers of Korea SC, v.49 no.1, pp.29-38, 2012
  • Noise generated in the process of acquiring a medical image obstructs image interpretation and diagnosis. To restore the true image from an image corrupted by noise, the total variation optimization algorithm was proposed by Rudin, Osher, and Fatemi (ROF). This method removes noise by balancing regularity against fidelity. However, the blurring of border areas that arises during the iterative computation cannot be avoided. In this paper, we propose an adaptive total variation method that maps the control parameter through a proposed transfer function in order to minimize the boundary error. The transfer function is determined by the noise variance and the local properties of the image. The proposed method was applied to 464 tooth images. To evaluate its performance, PSNR, an indicator of the signal-to-noise power ratio, was used. The experimental results show that the proposed method outperforms other methods.

Pulse Broadening and Intersymbol Interference of the Optical Gaussian Pulse Due to Atmospheric Turbulence in an Optical Wireless Communication System (광 무선통신시스템에서 대기 교란으로 인한 광 가우시안 펄스의 펄스 퍼짐과 부호 간 간섭에 관한 연구)

  • Jung, Jin-Ho
    • Korean Journal of Optics and Photonics, v.16 no.5, pp.417-422, 2005
  • When an optical pulse propagates through the atmospheric channel, it is attenuated and spread by atmospheric turbulence. This pulse broadening produces intersymbol interference (ISI) between adjacent pulses: adjacent pulses overlap, and the achievable bit rate and repeaterless transmission length are limited by the ISI. In this paper, the ISI as a function of the refractive-index structure constant, which represents the strength of atmospheric turbulence, is found using the temporal moment function and is numerically analyzed for the basic SONET transmission rates. The numerical results show that, as turbulence grows stronger, ISI increases gradually at transmission rates below that of the OC-192 (9.953 Gb/s) system, while at rates above that of the OC-768 (39.813 Gb/s) system it converges slowly after a rapid increase. We also find that accurate information transmission is possible up to 10 km for the OC-48 (2.488 Gb/s) system under any atmospheric turbulence, but becomes impossible, because of severe ISI, for turbulence stronger than $10^{-14}[m^{-2/3}]$ for the 100 Gb/s system, $10^{-13}[m^{-2/3}]$ for the OC-768 system, and $10^{-12}[m^{-2/3}]$ for the OC-192 system.
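The link between Gaussian pulse width and ISI can be illustrated with a toy calculation: the fraction of a unit-energy Gaussian pulse's power that spills outside its own bit slot. This is a stand-in only; the paper derives ISI from the temporal moment function and the refractive-index structure constant, which this sketch does not model.

```python
import math

def isi_fraction(bit_period, sigma):
    """Fraction of a Gaussian pulse's energy falling outside its bit slot
    [-bit_period/2, bit_period/2]; the intensity envelope has RMS width
    sigma. As turbulence broadens the pulse (sigma grows), this
    out-of-slot fraction -- a toy proxy for ISI -- rises."""
    in_slot = math.erf(bit_period / (2.0 * math.sqrt(2.0) * sigma))
    return 1.0 - in_slot
```

Higher bit rates shrink `bit_period`, so the same turbulence-induced broadening produces more overlap, matching the abstract's observation that ISI sets in first for the faster OC-768 and 100 Gb/s systems.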

Suspension of Sediment over Swash Zone (Swash대역에서의 해빈표사 부유거동에 관한 연구)

  • Cho, Yong Jun;Kim, Kwon Soo;Ryu, Ha Sang
    • KSCE Journal of Civil and Environmental Engineering Research, v.28 no.1B, pp.95-109, 2008
  • We numerically analyzed nonlinear shoaling, a plunging breaker and its accompanying energetic suspension of sediment at the bed, and the redistribution of suspended sediments by the downrush of preceding waves and the following plunger, using SPH with a Gaussian kernel function, the Lagrangian Dynamic Smagorinsky model (LDS), and Van Rijn's pick-up function. In the process, we concluded that conventional models for the tractive force at the bottom, such as a quadratic law, cannot accurately describe the rapidly accelerating flow over a swash zone, and we propose a new methodology for accurately estimating the bottom tractive force. Using the wave model newly proposed in this study, we can successfully reproduce the severely deformed water-surface profile, free-falling water particles, the queuing splash after water particles land on the free surface, the wave finger due to the structured vortex on the rear side of the wave crest (Narayanaswamy and Dalrymple, 2002), the circulation of suspended sediments over a swash zone, and the net offshore transfer of sediment clouds suspended over a swash zone, all of which have so far been regarded as very difficult features to mimic in computational fluid mechanics.
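The Gaussian smoothing kernel used in SPH has a standard closed form; a sketch follows (the paper's exact normalization and support radius may differ):

```python
import numpy as np

def gaussian_kernel_3d(r, h):
    """3-D Gaussian SPH smoothing kernel
        W(r, h) = exp(-(r/h)^2) / (pi^(3/2) * h^3),
    normalized so that its integral over all space equals 1."""
    return np.exp(-(r / h) ** 2) / (np.pi ** 1.5 * h ** 3)
```

Each particle's contribution to a field quantity is weighted by W of its distance, so the kernel's unit integral keeps summed quantities such as density and momentum consistent.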

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems, v.19 no.2, pp.139-155, 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ on the Korea Exchange. In addition, this paper used a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. This paper is therefore constructed on the basis of the following two ideas for classifying which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The support vector machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical theory, and thus far the method has shown good performance, especially in generalization capacity on classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, the hyperplane with the maximum separation between classes; the support vectors are the points closest to this hyperplane. If the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, that is, the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the problem of estimating credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one approach, a binary classification method, and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market on the Korea Exchange, and obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-class classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when it is difficult to identify the exact class in the actual market. We therefore also present accuracy within one-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, notwithstanding its efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class problems.
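The Gaussian radial basis function kernel at the heart of the SVM can be written directly; a minimal sketch, where the bandwidth `sigma` is a hypothetical choice (the paper itself flags kernel-parameter selection as future work):

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    Implicitly maps inputs into a high-dimensional dot-product space, which
    is what lets the SVM separate nonlinear class boundaries."""
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma ** 2))
```

In a one-against-one scheme, one binary SVM with this kernel is trained per pair of efficiency classes and the pairwise predictions are combined by voting.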

Analysis of Trading Performance on Intelligent Trading System for Directional Trading (방향성매매를 위한 지능형 매매시스템의 투자성과분석)

  • Choi, Heung-Sik;Kim, Sun-Woong;Park, Sung-Cheol
    • Journal of Intelligence and Information Systems, v.17 no.3, pp.187-201, 2011
  • The KOSPI200 index is a Korean stock price index consisting of 200 actively traded stocks in the Korean stock market; its base value of 100 was set on January 3, 1990. The Korea Exchange (KRX) developed derivatives markets on the KOSPI200 index. The KOSPI200 index futures market, introduced in 1996, has become one of the most actively traded index futures markets in the world. Traders can make a profit by entering a long position on a KOSPI200 index futures contract if they expect the KOSPI200 index to rise; likewise, they can profit by entering a short position if they expect it to decline. Basically, KOSPI200 index futures trading is a short-term zero-sum game, and therefore most futures traders use technical indicators. Advanced traders make stable profits by using the system trading technique, also known as algorithmic trading. Algorithmic trading uses computer programs to receive real-time stock market data, analyze stock price movements with various technical indicators, and automatically enter trading orders, including the timing, price, and quantity of the order, without any human intervention. Recent studies have shown the usefulness of artificially intelligent systems in forecasting stock prices and investment risk. KOSPI200 index data are numerical time-series data: a sequence of data points measured at successive uniform time intervals such as a minute, day, week, or month. KOSPI200 index futures traders use technical analysis to find patterns in the time-series chart. Although there are many technical indicators, their results indicate the market state among bull, bear, and flat. Most strategies based on technical analysis are divided into trend-following and non-trend-following strategies. Both decide the market state based on the patterns of the KOSPI200 index time-series data, which fits well with a Markov model (MM).
The next price is known to be higher than, lower than, or similar to the last price, and to be influenced by it; however, nobody knows in advance whether the next price will actually go up, go down, or stay flat. A hidden Markov model (HMM) is therefore better suited than an MM. HMMs are divided into discrete HMMs (DHMM) and continuous HMMs (CHMM); the only difference between them lies in their representation of state probabilities. A DHMM uses a discrete probability density function, whereas a CHMM uses a continuous probability density function such as a Gaussian mixture model. KOSPI200 index values are real numbers following a continuous probability density function, so a CHMM is more appropriate than a DHMM for the KOSPI200 index. In this paper, we present an artificially intelligent trading system based on a CHMM for KOSPI200 index futures system traders. Traders have accumulated experience in technical trading ever since the introduction of the KOSPI200 index futures market, applying many strategies to profit from trading KOSPI200 index futures. Some strategies are based on technical indicators such as moving averages or stochastics; others are based on candlestick patterns such as three outside up, three outside down, harami, or doji star. We present a trading system based on a moving-average cross strategy with a CHMM and compare it to a traditional algorithmic trading system, setting the moving-average parameters to common values used by market practitioners. Empirical results compare the simulation performance with that of the traditional algorithmic trading system using long-term daily KOSPI200 index data spanning more than 20 years. Our suggested trading system shows higher trading performance than the naive system trading.
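The baseline moving-average cross strategy can be sketched in a few lines. The window lengths 5 and 20 are illustrative common values, not the parameters used in the paper, and the CHMM layer on top is not reproduced here.

```python
import numpy as np

def ma_cross_signals(prices, short=5, long=20):
    """Return +1 (long) where the short moving average is above the long
    moving average and -1 (short) otherwise, aligned to the last bars."""
    def sma(x, w):
        c = np.cumsum(np.insert(x, 0, 0.0))
        return (c[w:] - c[:-w]) / w            # rolling mean, window w
    s, l = sma(prices, short), sma(prices, long)
    n = min(len(s), len(l))                    # align both series at the end
    return np.where(s[-n:] > l[-n:], 1, -1)
```

The paper's system feeds such market-state sequences into a CHMM, whose Gaussian-mixture emission densities model the continuous index values.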

A noise reduction method for MODIS NDVI time series data based on statistical properties of NDVI temporal dynamics (MODIS NDVI 시계열 자료의 통계적 특성에 기반한 NDVI 데이터 잡음 제거 방법)

  • Jung, Myunghee;Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society, v.18 no.9, pp.24-33, 2017
  • Multitemporal MODIS vegetation index (VI) data are widely used in vegetation monitoring research into environmental and climate change, since they provide a profile of vegetation activity. However, MODIS data inevitably contain disturbances caused by the presence of clouds, atmospheric variability, and instrument problems, which impede the analysis of the NDVI time series data and limit its application utility. For this reason, preprocessing to reduce the noise and reconstruct high-quality temporal data streams is required for VI analysis. In this study, a data reconstruction method for MODIS NDVI is proposed to restore bad or missing data based on the statistical properties of the oscillations in the NDVI temporal dynamics. The first derivatives enable us to examine the monotonic properties of a function in the data stream and to detect anomalous changes, such as sudden spikes and drops. In this approach, only noisy data are corrected, while the other data are left intact to preserve the detailed temporal dynamics for further VI analysis. The proposed method was successfully tested and evaluated with simulated data and NDVI time series data covering Baekdu Mountain, located in the northern part of North Korea, over the period of interest from 2006 to 2012. The results show that it can be effectively employed as a preprocessing method for data reconstruction in MODIS NDVI analysis.
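The first-derivative test for anomalous NDVI points can be sketched as follows. The threshold value is illustrative; the paper sets it from the statistical properties of the NDVI temporal oscillations.

```python
import numpy as np

def despike_ndvi(series, thresh=0.2):
    """Flag a point as noise when the first differences to BOTH of its
    neighbours exceed `thresh` with opposite signs (a sudden spike or
    drop), and replace only flagged points by the mean of their
    neighbours, leaving all other data intact."""
    out = np.asarray(series, dtype=float).copy()
    d = np.diff(out)
    for i in range(1, len(out) - 1):
        if abs(d[i - 1]) > thresh and abs(d[i]) > thresh and d[i - 1] * d[i] < 0:
            out[i] = 0.5 * (out[i - 1] + out[i + 1])
    return out
```

Correcting only the flagged points preserves the detailed temporal dynamics for downstream VI analysis, as the abstract emphasizes.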

A Study on the Analysis of the Error in Photometric Stereo Method Caused by the General-purpose Lighting Environment (測光立體視法에서 범용조명원에 기인한 오차 해석에 관한 연구)

  • Kim, Tae-Eun;Chang, Tae-Gyu;Choi, Jong-Soo
    • Journal of the Korean Institute of Telematics and Electronics B, v.31B no.11, pp.53-62, 1994
  • This paper presents a new approach to analyzing the errors that result from a nonideal, general-purpose lighting environment when the Photometric Stereo Method (PSM) is applied to estimate the surface orientation of a three-dimensional object. The approach introduces explicit modeling of the lighting environment, including a circular-disk-type irradiance object plane, and direct simulation of the error distribution with that model. The light source is modeled as a point source with a certain beam angle, and the luminance distribution on the irradiance plane is modeled as a Gaussian function with different deviation values. A simulation algorithm is devised to estimate the light-source orientation by computing the average luminance intensities obtained from irradiance object planes positioned in three different orientations. The effect of the nonideal lighting model is directly reflected in the simulation because of the analogy between the PSM and the proposed algorithm. With an instrumental tool designed to provide arbitrary orientations of the object plane at the origin of the coordinate system, experiments can be performed systematically for error analysis and compensation. Simulations are performed to find the error distribution by widely varying the lighting model and the orientation set of the object plane, and the simulation results are compared with those of experiments performed in the same way. The experiments confirm that a fair amount of error is due to the general-purpose lighting environment.
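The Gaussian luminance model on a circular-disk irradiance plane can be sketched numerically. The function name and grid size are illustrative, and the paper's calibration and beam-angle geometry are not modeled.

```python
import numpy as np

def mean_disk_luminance(radius, sigma, n=400):
    """Average luminance over a circular-disk irradiance plane whose
    luminance falls off as a radial Gaussian with deviation sigma
    (peak normalized to 1), computed on an n-by-n grid."""
    ax = np.linspace(-radius, radius, n)
    X, Y = np.meshgrid(ax, ax)
    r2 = X ** 2 + Y ** 2
    inside = r2 <= radius ** 2                 # keep only points on the disk
    return np.exp(-r2 / (2.0 * sigma ** 2))[inside].mean()
```

Averaging such planes at three orientations is the measurement the simulation algorithm uses to estimate the light-source orientation.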


Calculation of Gamma-ray Energy Spectrum for Spherical BGO Scintillation Detector (구형 BGO 섬광 검출기에 대한 감마선 에너지 스펙트럼 계산)

  • Doh, Sih-Hong;Kim, Jong-Il;Park, Hung-Ki;Chu, Min-Cheal;Jeong, Jung-Hyun;Kim, Gi-Dong;Lee, Dae-Won
    • Journal of Sensor Science and Technology, v.4 no.4, pp.1-9, 1995
  • The ${\gamma}$-ray deposition spectra were calculated by the Monte Carlo method to obtain the scintillation characteristics of ${\gamma}$-rays for a BGO scintillation detector of spherical shape with a 1.25 cm radius. The code used to calculate the ${\gamma}$-ray deposition spectra was written for a personal computer in the QBasic language. The ${\gamma}$-ray energy spectra of $^{22}Na$, $^{137}Cs$ and $^{207}Bi$ were also measured with the detector. The energy-dependent resolution of the detector below 2000 keV was determined by estimating the standard deviation of the photopeak fitted with a Gaussian function, and by ${\chi}^{2}$ fitting using Nardi's empirical formula. The measured spectra of $^{22}Na$ and $^{137}Cs$ were compared with the broadened spectra obtained by broadening the calculated ${\gamma}$-ray deposition spectra with the energy-dependent resolution. The absolute efficiency and the intrinsic peak efficiency of the detector were obtained by calculating the ${\gamma}$-ray deposition spectrum with the code.
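Broadening a calculated deposition spectrum with an energy-dependent Gaussian resolution can be sketched as below. The `resolution` callable stands in for the fitted empirical formula, whose coefficients the paper determines from measured photopeaks.

```python
import numpy as np

def broaden_spectrum(energies, counts, resolution):
    """Redistribute each bin's counts as a Gaussian centred on the bin
    energy with sigma = resolution(E); normalizing each Gaussian over the
    energy grid conserves the total number of counts."""
    out = np.zeros(len(counts), dtype=float)
    for e, c in zip(energies, counts):
        if c == 0:
            continue
        g = np.exp(-0.5 * ((energies - e) / resolution(e)) ** 2)
        out += c * g / g.sum()
    return out
```

Comparing such broadened spectra against the measured $^{22}Na$ and $^{137}Cs$ spectra is the consistency check described in the abstract.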


Analytical Methods of Levoglucosan, a Tracer for Cellulose in Biomass Burning, by Four Different Techniques

  • Bae, Min-Suk;Lee, Ji-Yi;Kim, Yong-Pyo;Oak, Min-Ho;Shin, Ju-Seon;Lee, Kwang-Yul;Lee, Hyun-Hee;Lee, Sun-Young;Kim, Young-Joon
    • Asian Journal of Atmospheric Environment, v.6 no.1, pp.53-66, 2012
  • A comparison of analytical approaches for Levoglucosan ($C_6H_{10}O_5$, commonly formed from the pyrolysis of carbohydrates such as cellulose and used as a molecular marker for biomass burning) is made among four different analytical systems: 1) a spectrothermography technique evaluating thermograms of carbon using an Elemental Carbon & Organic Carbon Analyzer; 2) a mass spectrometry technique using a gas chromatography/mass spectrometer (GC/MS); 3) an Aerosol Mass Spectrometer (AMS) for identifying the particle size distribution and chemical composition; and 4) two-dimensional gas chromatography with time-of-flight mass spectrometry (GC${\times}$GC-TOFMS) for defining the signature of Levoglucosan in the chemical analytical process. First, a spectrothermogram, defined as the graphical representation of carbon measured as a function of temperature during the thermal separation process, can be evaluated by spectrothermographic analysis. GC/MS can detect mass fragment ions of Levoglucosan characterized by base peaks at m/z 60 and 73 in mass fragmentograms after methylation, and at m/z 217 and 204 after trimethylsilyl derivatization (TMS derivatives). AMS can be used to analyze the base peaks at m/z 60.021 and 73.029 in mass fragmentograms with a multiple-peak Gaussian curve-fit algorithm. In the analysis of TMS derivatives by GC${\times}$GC-TOFMS, m/z 73 is detected as the base ion for the identification of Levoglucosan; m/z 217 and 204 are also observed, together with m/z 333. Although the ratios of m/z 217 and m/z 204 to the base ion (m/z 73) in the GC${\times}$GC-TOFMS mass spectrum are lower than those of GC/MS, Levoglucosan can be separated and characterized from D (-) +Ribose in a mixture of sugar compounds.
Finally, the environmental significance of Levoglucosan is discussed with respect to health effects, offering important opportunities for clinical and potential epidemiological research toward reducing the incidence of cardiovascular and respiratory diseases.
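The model behind a multiple-peak Gaussian curve fit of the AMS fragments (peaks near m/z 60.021 and 73.029) is a sum of Gaussians; a sketch with purely illustrative amplitudes and widths:

```python
import numpy as np

def multi_gauss(mz, params):
    """Sum of Gaussian peaks: params is a list of (amplitude, center,
    sigma) triples, one per peak. Fitting these parameters to the
    measured fragment signal resolves overlapping peaks at nearby m/z."""
    mz = np.asarray(mz, dtype=float)
    out = np.zeros_like(mz)
    for amp, mu, sigma in params:
        out += amp * np.exp(-0.5 * ((mz - mu) / sigma) ** 2)
    return out
```

A least-squares fit of this model to the fragmentogram yields the per-peak amplitudes that quantify each ion's contribution.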

Feature Vector Extraction and Classification Performance Comparison According to Various Settings of Classifiers for Fault Detection and Classification of Induction Motor (유도 전동기의 고장 검출 및 분류를 위한 특징 벡터 추출과 분류기의 다양한 설정에 따른 분류 성능 비교)

  • Kang, Myeong-Su;Nguyen, Thu-Ngoc;Kim, Yong-Min;Kim, Cheol-Hong;Kim, Jong-Myon
    • The Journal of the Acoustical Society of Korea, v.30 no.8, pp.446-460, 2011
  • The use of induction motors has recently been increasing with automation in the aeronautical and automotive industries, where they play a significant role. This has motivated many researchers to study fault detection and classification systems for induction motors in order to minimize the economic damage caused by faults. For this reason, this paper proposed feature vector extraction methods based on STE (short-time energy)+SVD (singular value decomposition) and DCT (discrete cosine transform)+SVD techniques to detect and diagnose faults of induction motors early, and classified the faults into different types by using the extracted features as inputs to a BPNN (back propagation neural network) and a multi-layer SVM (support vector machine). When a BPNN and a multi-layer SVM are used as classifiers for fault classification, many settings affect classification performance: the number of input layers, the number of hidden layers, and the learning algorithm for the BPNN, and the standard deviation values of the Gaussian radial basis function for the multi-layer SVM. This paper therefore carried out quantitative simulations to find the settings for these classifiers that yield the highest classification performance.
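The STE+SVD feature extraction described above can be sketched in plain numpy. The frame length, hop, matrix shape, and `k` are illustrative assumptions, not the settings evaluated in the paper.

```python
import numpy as np

def ste_svd_features(signal, frame=64, hop=32, k=4):
    """Frame the signal, compute short-time energy (STE) per frame,
    arrange the STE sequence into a matrix, and keep the top-k singular
    values as a compact feature vector for the classifier."""
    frames = np.array([signal[i:i + frame]
                       for i in range(0, len(signal) - frame + 1, hop)])
    ste = (frames ** 2).sum(axis=1)            # short-time energy per frame
    m = len(ste) // 8 * 8                      # trim to fill an 8-row matrix
    mat = ste[:m].reshape(8, -1)
    return np.linalg.svd(mat, compute_uv=False)[:k]   # top-k singular values
```

The resulting vectors would serve as inputs to the BPNN or multi-layer SVM classifiers compared in the paper.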