• Title/Summary/Keyword: Output index


Improved prediction of soil liquefaction susceptibility using ensemble learning algorithms

  • Satyam Tiwari;Sarat K. Das;Madhumita Mohanty;Prakhar
    • Geomechanics and Engineering
    • /
    • v.37 no.5
    • /
    • pp.475-498
    • /
    • 2024
  • The prediction of soil liquefaction susceptibility from a limited set of parameters, particularly when dealing with highly unbalanced databases, is a challenging problem. The current study focuses on different ensemble learning classification algorithms applied to highly unbalanced databases of results from in-situ tests: the standard penetration test (SPT), shear wave velocity (Vs) test, and cone penetration test (CPT). The input parameters for these datasets consist of earthquake intensity parameters, strong ground motion parameters, and in-situ soil testing parameters, with the liquefaction index serving as the binary output parameter. After a rigorous comparison with the existing literature, extreme gradient boosting (XGBoost), bagging, and random forest (RF) emerge as the most efficient models for liquefaction instance classification across the different datasets. Notably, for the SPT- and Vs-based models, XGBoost exhibits superior performance, followed by the light gradient boosting machine (LightGBM) and bagging, while for the CPT-based models, bagging ranks highest, followed by gradient boosting and random forest; the CPT-based models demonstrate a lower Gmean(error), rendering them preferable for soil liquefaction susceptibility prediction. Key parameters influencing model performance include the internal friction angle of soil (ϕ) and the percentage of fines smaller than 75 µm (F75) for the SPT and Vs data, and the normalized average cone tip resistance (qc) and peak horizontal ground acceleration (amax) for the CPT data. It was also observed that adding Vs measurements to the SPT data increased prediction efficiency compared with SPT data alone. Furthermore, to enhance usability, a graphical user interface (GUI) for seamless classification based on the provided input parameters was proposed.
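
A minimal sketch of the kind of imbalanced-classification workflow described above, using XGBoost and a G-mean score computed from the confusion matrix. The file name and column names (phi, F75, amax, liquefaction) are hypothetical placeholders, not the study's actual dataset or code.

```python
# Minimal sketch of imbalanced binary classification with XGBoost and a G-mean score.
# The CSV name and columns (phi, F75, amax, liquefaction) are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from xgboost import XGBClassifier

df = pd.read_csv("spt_dataset.csv")                  # hypothetical file
X = df.drop(columns=["liquefaction"])                # e.g. phi, F75, amax, ...
y = df["liquefaction"]                               # 1 = liquefied, 0 = not liquefied

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

# scale_pos_weight counteracts the class imbalance emphasized in the abstract.
imbalance = (y_tr == 0).sum() / max((y_tr == 1).sum(), 1)
model = XGBClassifier(n_estimators=300, max_depth=4, scale_pos_weight=imbalance)
model.fit(X_tr, y_tr)

# G-mean = sqrt(sensitivity * specificity); the Gmean(error) criterion mentioned
# in the abstract is based on this balance-aware measure.
tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print("G-mean:", round(np.sqrt(tp / (tp + fn) * tn / (tn + fp)), 3))
```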

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output, and current search tools cannot retrieve, from the gigantic body of documents, the documents related to a retrieved document. The most important problem for many current search systems is therefore to increase the quality of search, i.e., to provide related documents and to keep the number of unrelated documents in the results as low as possible. Addressing this problem, CiteSeer proposed autonomous citation indexing (ACI) of the articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this setting, the references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles: a citation index indexes the citations that an article makes, linking the article with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independent of language or of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). However, CiteSeer cannot index links between articles that researchers do not make explicitly, because it indexes only the links created when researchers cite other articles, and for the same reason it does not scale easily. All these problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in a document. A document is converted into a tabular form in which each extracted predicate is checked against its possible subjects and objects. We build a hierarchical graph of a document using this table and then integrate the graphs of the documents. Using the graph of the entire document collection, the area of each document is calculated relative to the integrated documents, and relations among the documents are marked by comparing these areas. The paper also proposes a method for the structural integration of documents that retrieves documents from the graph, making it easier for the user to find information. We compared the performance of the proposed approaches with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, which is better by about 15%.
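
As a rough, simplified illustration of the table-then-graph idea (not the paper's FCA machinery), the sketch below stores (subject, predicate, object) triples per document in a tabular form, scores the relation between two documents by their shared entries, and shows how an F-measure combines precision and recall.

```python
# Toy illustration only: a (subject, predicate, object) table per document, an
# overlap-based relation score, and the F-measure used for evaluation.
# The real method builds an FCA-based hierarchical graph; this is a simplification.
from collections import defaultdict

def document_table(triples):
    """Tabular form: predicate -> set of (subject, object) pairs."""
    table = defaultdict(set)
    for subj, pred, obj in triples:
        table[pred].add((subj, obj))
    return table

def relate(table_a, table_b):
    """Score two documents by their shared (predicate, subject, object) area."""
    shared = sum(len(table_a[p] & table_b[p]) for p in table_a.keys() & table_b.keys())
    total = sum(len(v) for v in table_a.values()) + sum(len(v) for v in table_b.values())
    return 2 * shared / total if total else 0.0

def f_measure(precision, recall):
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

doc1 = [("web", "contains", "pages"), ("index", "links", "articles")]
doc2 = [("index", "links", "articles"), ("citation", "credits", "work")]
print(relate(document_table(doc1), document_table(doc2)))   # overlap-based relation score
print(f_measure(0.66, 0.55))                                # example precision/recall -> ~0.60
```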

Assessment of Parameters Measured with Volumetric Pulmonary Artery Catheter as Predictors of Fluid Responsiveness in Patients with Coronary Artery Occlusive Disease (관상동맥 질환을 가진 환자에서 폐동맥카테터로 측정한 전부하 지표들은 수액부하 반응을 예상할 수 있는가?)

  • Lee, Ji-Yeon;Lee, Jong-Hwa;Shim, Jae-Kwang;Yoo, Kyung-Jong;Hong, Seung-Bum;Kwak, Young-Lan
    • Journal of Chest Surgery
    • /
    • v.41 no.1
    • /
    • pp.41-48
    • /
    • 2008
  • Background: Accurate assessment of preload and fluid responsiveness is of great importance for optimizing cardiac output, especially in patients with coronary artery occlusive disease (CAOD). In this study, we evaluated the relationship between the parameters of preload and the changes in the stroke volume index (SVI) after fluid loading in patients undergoing coronary artery bypass grafting (CABG). The purpose of this study was to find predictors of fluid responsiveness in order to assess the feasibility of using certain parameters of preload as a guide to fluid therapy. Material and Method: We studied 96 patients undergoing CABG. After induction of anesthesia, the hemodynamic parameters were measured before (T1) and 10 min after (T2) volume replacement by an infusion of 6% hydroxyethyl starch 130/0.4 (10 mL/kg) over 20 min. Result: The right ventricular end-diastolic volume index (RVEDVI), as well as the central venous pressure (CVP) and pulmonary capillary wedge pressure (PCWP), failed to demonstrate a significant correlation with the changes in the SVI (%). Only the right ventricular ejection fraction (RVEF) measured at T1 showed a significant correlation with the changes in the SVI by linear regression (r=0.272, p=0.017). However, when the area under the receiver operating characteristic (ROC) curve was evaluated, none of the parameters exceeded 0.7. The volume-induced increase in the SVI was 10% or greater in 31 patients (responders) and under 10% in 65 patients (non-responders). None of the parameters of preload measured at T1 showed a significant difference between the responders and non-responders, except for the RVEF. Conclusion: The conventional parameters measured with a volumetric pulmonary artery catheter failed to predict the response of the SVI following fluid administration in patients with CAOD.
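
The responder/non-responder analysis outlined above can be reproduced in principle with standard statistical tooling; the sketch below uses made-up placeholder values, not the study's measurements, to show how the correlation and ROC AUC for one preload parameter would be computed.

```python
# Illustrative only: testing one preload parameter as a predictor of fluid
# responsiveness (delta-SVI >= 10%) with correlation and ROC AUC.
# The arrays are made-up placeholders, not the study's measurements.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rvef_t1 = np.array([38.0, 42.0, 35.0, 50.0, 44.0, 31.0, 47.0, 40.0])   # baseline RVEF (%)
delta_svi = np.array([4.0, 12.0, -2.0, 15.0, 8.0, 1.0, 11.0, 6.0])     # change in SVI (%)

r, p = pearsonr(rvef_t1, delta_svi)                 # linear association
responders = (delta_svi >= 10).astype(int)          # >=10% increase -> responder
auc = roc_auc_score(responders, rvef_t1)            # discriminative ability

print(f"r={r:.3f}, p={p:.3f}, AUC={auc:.3f}")       # AUC below ~0.7 -> weak predictor
```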

Performance Improvement on Short Volatility Strategy with Asymmetric Spillover Effect and SVM (비대칭적 전이효과와 SVM을 이용한 변동성 매도전략의 수익성 개선)

  • Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.119-133
    • /
    • 2020
  • Fama asserted that in an efficient market we cannot devise a trading rule that consistently outperforms the average stock market return. This study aims to suggest a machine learning algorithm that improves the trading performance of an intraday short volatility strategy exploiting the asymmetric volatility spillover effect, and to analyze the resulting performance improvement. Generally, stock market volatility is negatively related to stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: upward and downward moves in US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, VIX, KOSPI 200 index, and V-KOSPI 200 from 2008 to 2018. We found a negative relation between the S&P 500 and the VIX, and between the KOSPI 200 and the V-KOSPI 200, and documented a strong volatility spillover effect from the VIX to the V-KOSPI 200. Interestingly, this spillover was asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall is only partially reflected at the open and its influence lasts until the Korean market close. If the stock market were efficient, such an asymmetric volatility spillover effect should not exist; it is a counterexample to the efficient market hypothesis. To exploit this anomalous spillover pattern, we analyzed an intraday short volatility strategy (SVS). This strategy sells the Korean volatility market short in the morning after US stock market volatility closes down and takes no position after the VIX closes up. It produced a profit every year between 2008 and 2018, with 68% of trades profitable. The strategy showed a higher average annual return of 129% relative to the benchmark's 33%, a maximum drawdown (MDD) of -41% versus the benchmark's -101%, and a Sharpe ratio of 0.32 versus 0.08 for the benchmark. The Sharpe ratio considers return and risk simultaneously (return divided by risk), so a higher Sharpe ratio indicates better performance when comparing strategies with different risk-return structures. Real-world trading incurs trading costs, including brokerage and slippage; when these costs are considered, the performance gap between average annual returns of 76% and -10% becomes clear. To improve the performance of the suggested volatility trading strategy, we used the well-known SVM algorithm. The input variables are the VIX close-to-close return at day t-1, the VIX open-to-close return at day t-1, and the V-KOSPI 200 open return at day t; the output is the up/down classification of the V-KOSPI 200 open-to-close return at day t. The training period is 2008 to 2014 and the testing period is 2015 to 2018. The kernel functions are the linear, radial basis, and polynomial functions. We suggested a modified short volatility strategy (m-SVS) that sells the V-KOSPI 200 in the morning when the SVM output is Down and takes no position when the output is Up. Trading performance was remarkably improved: over the testing period, the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy. The annual return of the m-SVS strategy is 123%, higher than that of the SVS strategy, and the MDD also improved significantly, from -41% to -29%.
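
A minimal sketch of the SVM setup described above, assuming a hypothetical CSV of daily data with placeholder column names (vix_cc_ret_lag1, vix_oc_ret_lag1, vk_open_ret, vk_oc_up); it is not the author's code or data.

```python
# Minimal sketch: SVM classification of the V-KOSPI 200 open-to-close direction.
# Assumes a hypothetical CSV of daily rows with the columns named below;
# file name and column names are placeholders, not the study's data or code.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("vol_dataset.csv", parse_dates=["date"], index_col="date")
features = ["vix_cc_ret_lag1", "vix_oc_ret_lag1", "vk_open_ret"]
target = "vk_oc_up"              # 1 if the V-KOSPI open-to-close return is up, else 0

train = df.loc["2008":"2014"]    # training period stated in the abstract
test = df.loc["2015":"2018"]     # testing period stated in the abstract

for kernel in ["linear", "rbf", "poly"]:            # the three kernels compared
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    clf.fit(train[features], train[target])
    print(kernel, "test accuracy:", round(clf.score(test[features], test[target]), 3))

# m-SVS rule: sell volatility in the morning only when the prediction is Down (0).
```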

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper presents a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can still occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the points common to several fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (to give good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for these specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values at any element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to store on each memory row the membership value of every fuzzy set for that element; the fuzzy-set word dimension would be 8 × 5 bits, and the memory dimension would therefore have been 128 × 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets; elements 32, 64, and 96 of the universe of discourse, for example, are memorized in this compact form. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on the memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values at any element of the universe of discourse is limited, a constraint that is usually not very restrictive since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
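
The word-length arithmetic above can be checked in a few lines. The sketch below reproduces the Length = nfm × (dm(m) + dm(fm)) calculation for the example term set and packs the non-null memberships of one hypothetical universe element into a 24-bit row, purely as an illustration of the storage scheme.

```python
# Illustration of the antecedent-memory sizing described in the abstract:
# Length = nfm * (dm(m) + dm(fm)); the numbers follow the example term set.
from math import ceil, log2

universe_size = 128      # elements of the universe of discourse (7-bit resolution)
n_fuzzy_sets = 8         # membership functions in the term set (3-bit index)
truth_levels = 32        # discretization levels for membership values (5 bits)
nfm = 3                  # max non-null memberships per universe element

dm_m = ceil(log2(truth_levels))       # bits per membership value  -> 5
dm_fm = ceil(log2(n_fuzzy_sets))      # bits per function index    -> 3
word_length = nfm * (dm_m + dm_fm)    # -> 24 bits per memory row
print(word_length, universe_size * word_length)     # 24, 3072 bits (proposed scheme)
print(universe_size * n_fuzzy_sets * dm_m)          # 5120 bits (full vectorial storage)

# Packing one memory row: keep only (index, value) pairs for non-null memberships.
memberships = {2: 7, 3: 19, 5: 1}     # hypothetical non-null values at one element
row = 0
for idx, val in memberships.items():
    row = (row << (dm_m + dm_fm)) | (idx << dm_m) | val
print(f"packed row: {row:024b}")
```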


EFFICIENCY OF ENERGY TRANSFER BY A POPULATION OF THE FARMED PACIFIC OYSTER, CRASSOSTREA GIGAS IN GEOJE-HANSAN BAY (거제·한산만 양식굴 Crassostrea gigas의 에너지 전환 효율)

  • KIM Yong Sool
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.13 no.4
    • /
    • pp.179-183
    • /
    • 1980
  • The efficiency of energy transfer by a population of the farmed Pacific oyster, Crassostrea gigas, was studied during a culture period of 10 months (July 1979-April 1980) in Geoje-Hansan Bay near Chungmu City. Energy use by the farmed oyster population was calculated from estimates of the age-specific natural mortality rate (in half-month units) and data on growth, gonad output, shell organic matter production, and respiration. Total mortality during the culture period was estimated at approximately 36% from data on the number of surviving individuals per cluster. Growth appeared to consist of two phases: a curvilinear phase during the first half of the culture period (July-November) and a linear phase in the latter half (December-April). The first-phase growth was approximated by the von Bertalanffy growth model, shell height $SH = 6.33\,(1 - e^{-0.2421(t+0.54)})$, where t is age in half-month units; in the later growth phase shell height was related to t by $SH = 4.44 + 0.14t$. Dry meat weight (DW) was related to shell height by $\log DW = -2.2907 + 2.589\,\log SH$ for the smaller size range and $\log DW = -5.8153 + 7.208\,\log SH$ for the larger size range. Size-specific gonad output (G), calculated from the condition index before and after the spawning season, was related to shell height by $G = 0.0145 + 3.95\times10^{-3}\,SH^{2.9861}$. Shell organic matter production (SO) was related to shell height by $\log SO = -3.1884 + 2.527\,\log SH$. The size- and temperature-specific respiration rate (R), determined in a biotron system with controlled temperature, was related to dry meat weight and temperature (T) by $\log R = (0.386T - 0.5381) + (0.6409 - 0.0083T)\,\log DW$. The energy used in metabolism was calculated from the size- and temperature-specific respiration and data on body composition. The calorie content of oyster meat was estimated by bomb calorimetry with nitrogen correction. The assimilation efficiency of the oyster, estimated directly by an insoluble crude silicate method, was 55.5%; from the information currently available from other workers, assimilation efficiency ranges between 40% and 70%. Of the filtered food material, expressed as energy, 27.4% was estimated to have been rejected as pseudofaeces, 17.2% was passed as faeces, 35.04% was respired and lost as heat, 0.38% was bound up in shell organics, 2.74% was released as gonad output, and 2.06% was lost as meat through mortality. The remaining 15.28% went to meat production. The net efficiency of energy transfer from assimilation to meat production (yield/assimilation) of a farmed population of the oyster was estimated to be 28% during the culture period July 1979-April 1980. The gross efficiency of energy transfer from ingestion to meat production (yield/food filtered) is probably between 11% and 20%.
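
As an arithmetic illustration only (not the original analysis), the sketch below evaluates the reconstructed growth and allometric relations at a few ages and checks that the reported energy-budget fractions sum to roughly 100%; the phase-switch age and the units (shell height in cm) are assumptions.

```python
# Arithmetic illustration of the reported relations (not the original analysis);
# shell height assumed in cm, and the phase-switch age is an assumption.
import math

def shell_height(t):
    """Shell height vs. age t in half-month units, per the two growth phases."""
    if t <= 10:                       # assumed cutoff: first half, July-November
        return 6.33 * (1 - math.exp(-0.2421 * (t + 0.54)))
    return 4.44 + 0.14 * t            # later, linear phase (December-April)

def dry_meat_weight(sh):
    """Allometric dry meat weight from shell height (smaller-size relation)."""
    return 10 ** (-2.2907 + 2.589 * math.log10(sh))

for t in (4, 10, 14, 19):
    sh = shell_height(t)
    print(f"t={t}: SH={sh:.2f}, DW={dry_meat_weight(sh):.3f} g")

# Energy-budget fractions reported in the abstract (% of filtered food energy).
budget = {"pseudofaeces": 27.4, "faeces": 17.2, "respiration": 35.04,
          "shell organics": 0.38, "gonad output": 2.74,
          "mortality loss": 2.06, "meat production": 15.28}
print(sum(budget.values()))           # ~100.1, i.e. the budget closes near 100%
```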


Changes of Brain Natriuretic Peptide Levels according to Right Ventricular Hemodynamics after a Pulmonary Resection (폐절제술 후 우심실의 혈역학적 변화에 따른 BNP의 변화)

  • Na, Myung-Hoon;Han, Jong-Hee;Kang, Min-Woong;Yu, Jae-Hyeon;Lim, Seung-Pyung;Lee, Young;Choi, Jae-Sung;Yoon, Seok-Hwa;Choi, Si-Wan
    • Journal of Chest Surgery
    • /
    • v.40 no.9
    • /
    • pp.593-599
    • /
    • 2007
  • Background: The correlation between brain natriuretic peptide (BNP) levels and the effect of pulmonary resection on the right ventricle of the heart is not yet widely known. This study aims to assess the relationship between changes in the hemodynamic values of the right ventricle and increased BNP levels, as a compensatory mechanism for right heart failure following pulmonary resection, and to evaluate the role of the BNP level as an index of right heart failure after pulmonary resection. Material and Method: In 12 non-small cell lung cancer patients who had received a lobectomy or pneumonectomy, the level of NT-proBNP was measured using an immunochemical method (Elecsys 1010®, Roche, Germany) and compared with hemodynamic variables determined with a Swan-Ganz catheter prior to and following the surgery. Echocardiography was performed prior to and following the surgery to measure changes in right ventricular and left ventricular pressures. For statistical analysis, the Wilcoxon rank sum test and linear regression analysis were conducted using SPSSWIN (version 11.5). Result: The level of postoperative NT-proBNP (pg/mL) increased significantly at 6 hours and at 1, 2, 3, and 7 days after the surgery (p=0.003, 0.002, 0.002, 0.006, 0.004). Of the hemodynamic variables measured with the Swan-Ganz catheter, the mean pulmonary artery pressure after the surgery, compared with the pressure prior to surgery, increased significantly at 0 hours, 6 hours, and 1, 2, and 3 days after the surgery (p=0.002, 0.002, 0.006, 0.007, 0.008). The right ventricular pressure increased significantly at 0 hours, 6 hours, and 1 and 3 days after the surgery (p=0.000, 0.009, 0.044, 0.032). The pulmonary vascular resistance index [pulmonary vascular resistance index = (mean pulmonary artery pressure - mean pulmonary capillary wedge pressure)/cardiac output index] increased significantly at 6 hours and 2 days after the surgery (p=0.008, 0.028). When a regression analysis was conducted for the changes in the mean pulmonary artery pressure and NT-proBNP levels after the surgery, significance was evident at 6 hours (r=0.602, p=0.038) but not thereafter. Echocardiography displayed no significant changes after the surgery. Conclusion: There was a significant correlation between the changes in the mean pulmonary artery pressure and the NT-proBNP level 6 hours after a pulmonary resection. Therefore, changes in the NT-proBNP level after a pulmonary resection can serve as an index that reflects early hemodynamic changes in the right ventricle.
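
The pulmonary vascular resistance index defined in brackets above is a simple ratio; the snippet below merely evaluates it for placeholder numbers, not the study's measurements.

```python
# Pulmonary vascular resistance index as defined in the abstract:
# PVRI = (mean PAP - mean PCWP) / cardiac (output) index. Values are placeholders.
def pvri(mean_pap_mmHg, mean_pcwp_mmHg, cardiac_index):
    return (mean_pap_mmHg - mean_pcwp_mmHg) / cardiac_index

print(pvri(25.0, 10.0, 2.5))   # -> 6.0 (mmHg per L/min/m^2, i.e. Wood units x m^2)
```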

Vasopressin in Young Patients with Congenital Heart Defects for Postoperative Vasodilatory Shock (선천성 심장병 수술 후 발생한 혈관확장성 쇼크에 대한 바소프레신의 치료)

  • 황여주;안영찬;전양빈;이재웅;박철현;박국양;한미영;이창하
    • Journal of Chest Surgery
    • /
    • v.37 no.6
    • /
    • pp.504-510
    • /
    • 2004
  • Background: Vasodilatory shock after cardiac surgery may result from vasopressin deficiency following cardiopulmonary bypass and from sepsis, and it may not respond to the usual intravenous inotropes. In contrast to adult patients, the effectiveness of vasopressin for vasodilatory shock in children is not well established, so we reviewed our experience with vasopressin therapy in small babies with cardiac disease. Material and Method: Between February and August 2003, intravenous vasopressin was administered to 6 patients for vasodilatory shock despite support with intravenous inotropes after cardiac surgery. Median age at operation was 25 days (range, 2∼41 days) and median body weight was 2,870 grams (range, 900∼3,530 grams). Preoperative diagnoses were complete transposition of the great arteries in 2 patients, hypoplastic left heart syndrome in 1, Fallot-type double-outlet right ventricle in 1, aortic coarctation with severe atrioventricular valve regurgitation in 1, and total anomalous pulmonary venous return in 1. Total repair and palliative repair were undertaken in 3 patients each. Result: Most patients showed vasodilatory shock not responding to inotropes and required vasopressin therapy within 24 hours after cardiac surgery, with readministration for septic shock. The dosing range for vasopressin was 0.0002∼0.008 unit/kg/minute, with a median total administration time of 59 hours (range, 26∼140 hours). Systolic blood pressure before, 1 hour after, and 6 hours after administration was 42.7±7.4 mmHg, 53.7±11.4 mmHg, and 56.3±13.4 mmHg, respectively, a significant increase (systolic pressure at 1 hour and 6 hours after administration compared with before administration; p=0.042 for both). Inotropic indexes before, 6 hours after, and 12 hours after administration were 32.3±7.2, 21.0±8.4, and 21.2±8.9, respectively, a significant decrease (inotropic indexes at 6 hours and 12 hours after administration compared with before administration; p=0.027 for both). Significant metabolic acidosis and decreased urine output related to systemic hypoperfusion were not observed after vasopressin administration. Conclusion: In young children suffering from vasodilatory shock that does not respond to common inotropes despite normal ventricular contractility, intravenous vasopressin proved to be an effective vasoconstrictor that increases systolic blood pressure and mitigates the complications related to higher doses of inotropes.

The Performance Comparison of MMA and S-MMA Adaptive Equalization Algorithm for QAM Signal (QAM 신호에대한 MMA와 S-MMA 적응 등화 알고리즘의 성능 비교)

  • Kang, Dae-Soo;Lim, Seung-Gag
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.1
    • /
    • pp.19-26
    • /
    • 2013
  • This paper deals with the performance comparison of two blind adaptive equalization algorithms, MMA and S-MMA, which are used to compensate simultaneously for the amplitude and phase distortion that occurs in a time-dispersive channel. The conventional CMA algorithm can compensate only for the amplitude, not the phase, so an additional circuit is needed for phase compensation. To overcome this shortcoming, an improved cost function is applied in the MMA algorithm. In MMA the error is formed from the dispersion constant only, whereas in S-MMA the error is formed from the dispersion constant together with the output of the decision device (the sliced symbol) when updating the tap coefficients. These two kinds of error signal give the adaptive equalization algorithms different performance. In this paper, we compare the performance of the algorithms using the recovered constellation, the residual ISI, the maximum distortion (MD), and the SER as indices, when the transmitted signal is 16- or 64-QAM and passes through the same communication channel. The simulation results show that S-MMA improves robustness in SER performance compared with MMA for the higher-order QAM signal.
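
A minimal NumPy sketch of an MMA-style blind equalizer update, using the standard multimodulus error in which the real and imaginary parts are driven toward separate dispersion constants; the channel, noise, step size, and signal below are placeholder assumptions, not the paper's simulation setup, and the S-MMA variant (which additionally uses the sliced symbol) is only noted in a comment.

```python
# Minimal sketch of a blind multimodulus (MMA) equalizer update; the setup
# (channel, noise, step size) is an assumed placeholder, not the paper's simulation.
import numpy as np

rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])               # 16-QAM per-dimension levels
symbols = rng.choice(levels, 5000) + 1j * rng.choice(levels, 5000)

h = np.array([1.0, 0.25 + 0.1j, 0.1])                   # assumed dispersive channel
x = np.convolve(symbols, h)[: len(symbols)]
x += 0.01 * (rng.standard_normal(len(x)) + 1j * rng.standard_normal(len(x)))

# MMA dispersion constant, applied separately to the real and imaginary parts.
R = np.mean(levels**4) / np.mean(levels**2)             # 8.2 for 16-QAM

n_taps, mu = 11, 1e-4
w = np.zeros(n_taps, dtype=complex)
w[n_taps // 2] = 1.0                                    # center-spike initialization

for n in range(n_taps, len(x)):
    u = x[n - n_taps:n][::-1]                           # regressor (most recent first)
    y = np.dot(w, u)                                    # equalizer output
    e = y.real * (y.real**2 - R) + 1j * y.imag * (y.imag**2 - R)
    # S-MMA would additionally shape this error using the sliced (decision) symbol.
    w -= mu * e * np.conj(u)                            # stochastic-gradient tap update

print("final taps:", np.round(w, 3))
```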

The Performance Analysis of CCA Adaptive Equalization Algorithm for 16-QAM Signal (16-QAM 신호에 대한 CCA 적응 등화 알고리즘 성능 분석)

  • Lim, Seung-Gag
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.1
    • /
    • pp.27-34
    • /
    • 2013
  • This paper deals with the performance analysis of the CCA adaptive equalization algorithm, which is used at the receiver to reduce the intersymbol interference that occurs in a time-dispersive communication channel. This algorithm was introduced to solve the phase-recovery problem of the CMA equalizer, and it combines the concepts of the decision directed algorithm (DDA) and the reduced constellation algorithm (RCA). The DDA has stable convergence characteristics for a single-level signal, but becomes unstable as the number of levels grows in a multilevel signal such as QAM. The RCA does not provide reliable initial convergence, and even after convergence the equalization noise due to its steady-state misadjustment is very high compared with the DDA. The CCA adaptive equalization algorithm was introduced to address these points. To analyze the performance of the CCA algorithm, the recovered signal constellation at the equalizer output, the convergence characteristics measured by the residual ISI and the maximum distortion (MD), and the SER characteristics were obtained by computer simulation and compared with those of the DDA and RCA. The simulation results show that the DDA has superior performance to the other algorithms, but its convergence is not guaranteed and it is unstable for multilevel signals; the CCA, which resolves this problem, shows better performance than the RCA in every performance index.
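
The residual ISI and maximum distortion (MD) indices mentioned above have commonly used definitions based on the combined channel-equalizer response; the helper below illustrates those common definitions with assumed channel and tap values, which may differ in detail from the paper's exact formulation.

```python
# Commonly used convergence indices for blind equalizers, computed from the
# combined channel-plus-equalizer response s = h * w; an illustration with
# assumed values, which may differ in detail from the paper's exact definitions.
import numpy as np

def residual_isi_db(h, w):
    s = np.abs(np.convolve(h, w)) ** 2
    return 10 * np.log10((s.sum() - s.max()) / s.max())

def maximum_distortion(h, w):
    s = np.abs(np.convolve(h, w))
    return (s.sum() - s.max()) / s.max()

h = np.array([1.0, 0.25 + 0.1j, 0.1])          # assumed channel
w = np.array([0.0, 1.0, -0.24, 0.05])          # assumed (partially converged) equalizer taps
print(residual_isi_db(h, w), maximum_distortion(h, w))
```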