• Title/Summary/Keyword: Learning curve


Comparison of Pattern Design Functions in YUKA and CLO for CAD Education: Focusing on Skirt Patterns (캐드 교육을 위한 YUKA와 CLO의 패턴 제도 기능 비교: 스커트패턴을 중심으로)

  • Younglim Choi
    • Fashion & Textile Research Journal
    • /
    • v.26 no.1
    • /
    • pp.65-77
    • /
    • 2024
  • This study aimed to propose effective ways to integrate CLO into educational settings by conducting a comparative analysis of pattern functions in YUKA and CLO, specifically focusing on skirt prototypes and variations. CLO, a 3D virtual sample CAD tool, is mainly used in education to facilitate the creation of 3D virtual clothing. To explore the applicability of CLO's pattern functions in pattern education, CAD education experts were asked to produce two types of skirt prototypes and two skirt variations, and in-depth interviews were then conducted. In addition, the skirt pattern creation process was recorded on video and used for a comparative analysis of the YUKA and CLO pattern functions. The comparison revealed that CLO provides the pattern tools necessary for drafting skirt prototypes. The learning curve for acquiring the skills needed to draft and transform skirt prototypes was found to be relatively shorter for CLO than for YUKA. However, because of CLO's surface-based pattern drawing method, it is difficult to move or copy only specific parts of an outline, and there are some limitations in drawing right-angle lines. In the pattern transformation process, CLO's preview function proved advantageous, and it was rated highly for user convenience owing to its intuitive UI. Thus, CLO shows promise for pattern drafting education and is deemed to have high scalability as it is directly linked to 3D virtual clothing.

Predictive modeling algorithms for liver metastasis in colorectal cancer: A systematic review of the current literature

  • Isaac Seow-En;Ye Xin Koh;Yun Zhao;Boon Hwee Ang;Ivan En-Howe Tan;Aik Yong Chok;Emile John Kwong Wei Tan;Marianne Kit Har Au
    • Annals of Hepato-Biliary-Pancreatic Surgery
    • /
    • v.28 no.1
    • /
    • pp.14-24
    • /
    • 2024
  • This study aims to assess the quality and performance of predictive models for colorectal cancer liver metastasis (CRCLM). A systematic review was performed to identify relevant studies from various databases. Studies that described or validated predictive models for CRCLM were included. The methodological quality of the predictive models was assessed. Model performance was evaluated by the reported area under the receiver operating characteristic curve (AUC). Of the 117 articles screened, seven studies comprising 14 predictive models were included. The distribution of included predictive models was as follows: radiomics (n = 3), logistic regression (n = 3), Cox regression (n = 2), nomogram (n = 3), support vector machine (SVM, n = 2), random forest (n = 2), and convolutional neural network (CNN, n = 2). Age, sex, carcinoembryonic antigen, and tumor staging (T and N stage) were the most frequently used clinicopathological predictors for CRCLM. The mean AUCs ranged from 0.697 to 0.870, with 86% of the models demonstrating clear discriminative ability (AUC > 0.70). A hybrid approach combining clinical and radiomic features with SVM provided the best performance, achieving an AUC of 0.870. The overall risk of bias was identified as high in 71% of the included studies. This review highlights the potential of predictive modeling to accurately predict the occurrence of CRCLM. Integrating clinicopathological and radiomic features with machine learning algorithms demonstrates superior predictive capabilities.
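As a rough illustration of how the discrimination metrics reported above are typically obtained, the sketch below trains an SVM on combined clinical and radiomic features and computes the AUC. The feature groups, data, and threshold are hypothetical illustrations, not taken from the reviewed studies.

```python
# Minimal sketch: AUC for a hybrid clinical + radiomic SVM classifier.
# Feature names and the synthetic data are hypothetical illustrations only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: age, sex, CEA, T stage, N stage + radiomic texture features
clinical = rng.normal(size=(n, 5))
radiomic = rng.normal(size=(n, 20))
X = np.hstack([clinical, radiomic])
y = rng.integers(0, 2, size=n)           # 1 = liver metastasis, 0 = none

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")                # AUC > 0.70 is read as clear discriminative ability
```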

Aortic valve replacement through right anterior mini-thoracotomy in patients with chronic severe aortic regurgitation: a retrospective single-center study

  • Eun Yeung Jung;Ji Eun Im;Ho-Ki Min;Seok Soo Lee
    • Journal of Yeungnam Medical Science
    • /
    • v.41 no.3
    • /
    • pp.213-219
    • /
    • 2024
  • Background: Aortic valve replacement (AVR) has recently been performed at many centers using a minimally invasive approach to reduce postoperative mortality, morbidity, and pain. Most previous reports on minimally invasive AVR (MiAVR) have mainly focused on aortic stenosis, and those exclusively dealing with aortic regurgitation (AR) are few. The purpose of this study was to investigate early surgical results and review our experience with patients with chronic severe AR who underwent AVR via right anterior mini-thoracotomy (RAT). Methods: Data were retrospectively collected in this single-center study. Eight patients who underwent RAT AVR between January 2020 and January 2024 were enrolled. Short-term outcomes, including the length of hospital stay, in-hospital mortality, postoperative complications, and echocardiographic data, were analyzed. Results: No in-hospital mortality was observed. Postoperative atrial fibrillation occurred temporarily in three patients (37.5%), but none required permanent pacemaker implantation or renal replacement therapy. The median ventilator time, length of intensive care unit stay, and hospital stay were 17 hours, 34.5 hours, and 9 days, respectively. Preoperative and postoperative measurements of left ventricular ejection fraction were similar. However, the left ventricular end-systolic and end-diastolic diameters decreased significantly postoperatively, from 42 mm to 35.5 mm (p=0.018) and from 63 mm to 51 mm (p=0.012), respectively. Conclusion: MiAVR via RAT is a safe and reproducible procedure with acceptable morbidity and complication rates in patients with chronic severe AR. Despite some limitations, such as a narrow surgical field and a demanding learning curve, MiAVR is a competent method for AR.

The Surgical Outcome for Gastric Submucosal Tumors: Laparoscopy vs. Open Surgery (위 점막하 종양에 대한 개복 및 복강경 위 절제술의 비교)

  • Lim, Chai-Sun;Lee, Sang-Lim;Park, Jong-Min;Jin, Sung-Ho;Jung, In-Ho;Cho, Young-Kwan;Han, Sang-Uk
    • Journal of Gastric Cancer
    • /
    • v.8 no.4
    • /
    • pp.225-231
    • /
    • 2008
  • Purpose: Laparoscopic gastric resection (LGR) is increasingly being used instead of open gastric resection (OGR) as the standard surgical treatment for gastric submucosal tumors, yet there are few reports on which technique yields better postoperative outcomes. This study was performed to compare the two treatment modalities for gastric submucosal tumors by evaluating postoperative outcomes. We also provide an analysis of the learning curve for LGR. Materials and Methods: Between April 2003 and August 2008, 103 patients with a gastric submucosal tumor underwent either LGR (n=78) or OGR (n=25). A retrospective review was performed on a prospectively collected database of these 103 patients. We reviewed the data with regard to operative time, intraoperative blood loss, time to first soft diet, postoperative hospital stay, tumor size, and tumor location. Results: The clinicopathologic and tumor characteristics of the patients were similar for both groups. There was no open conversion in the LGR group. The mean operation time and blood loss did not differ between the LGR and OGR groups. The time to first soft diet (3.27 vs. 6.16 days, P<0.001) and the length of postoperative hospital stay (7.37 vs. 8.88 days, P=0.002) were shorter in the LGR group than in the OGR group. Tumor size was larger in the OGR group than in the LGR group (6.44 vs. 3.65 cm, P<0.001). When performing laparoscopic gastric resection of gastric SMT, the surgeon was able to decrease the operation time and blood loss with increasing experience. We divided the total cases into three periods to compare operation time, blood loss, and complications; the third period showed the shortest operation time, the least blood loss, and the fewest complications. Conclusion: LGR for a gastric submucosal tumor was superior to OGR in terms of postoperative outcomes. An operator needs some experience to perform a complete laparoscopic gastric resection. Laparoscopic resection could be considered the first-line treatment for gastric submucosal tumors.
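The period-based learning-curve analysis described above (cases split into three consecutive periods and compared on operation time, blood loss, and complications) follows a common pattern; the sketch below illustrates it with a hypothetical per-case operation-time series, not the study's data.

```python
# Minimal sketch: comparing operation time across three consecutive case periods,
# as in a surgical learning-curve analysis. The data below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
op_time = np.concatenate([
    rng.normal(180, 30, 26),   # period 1: earliest cases
    rng.normal(150, 25, 26),   # period 2
    rng.normal(120, 20, 26),   # period 3: most recent cases
])
periods = np.repeat([1, 2, 3], 26)

for p in (1, 2, 3):
    print(f"period {p}: mean operation time {op_time[periods == p].mean():.1f} min")

# One-way ANOVA across the three periods
f_stat, p_val = stats.f_oneway(*(op_time[periods == p] for p in (1, 2, 3)))
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
```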

The Learning Curve of Laparoscopy-assisted Distal Gastrectomy (LADG) for Cancer (학습곡선을 기준으로 한 복강경 보조 원위절제술에 대한 결과)

  • Kim, Kab-Choong;Yook, Jeong-Hwan;Choi, Ji-Eun;Cheong, Oh;Lim, Jeong-Taek;Oh, Sung-Tae;Kim, Byung-Sik
    • Journal of Gastric Cancer
    • /
    • v.8 no.4
    • /
    • pp.232-236
    • /
    • 2008
  • Purpose: Laparoscopic surgery for gastric cancer was introduced in the past decade because it is considered less invasive than open surgery, resulting in less postoperative pain, faster recovery, and an improved quality of life. Several studies have demonstrated the safety and feasibility of this procedure. We examined the outcomes of laparoscopic surgery for gastric cancer over the last two years. Materials and Methods: From April 2004 to December 2006, 329 patients with gastric adenocarcinoma underwent a laparoscopy-assisted distal gastrectomy with lymph node dissection. The data were retrospectively reviewed in terms of clinicopathologic findings, perioperative outcomes, and complications. Results: The patient group comprised 196 men (59.6%) and 133 women (40.4%). The mean BMI was 23.6 and the mean tumor size was 2.7 cm. The mean number of harvested lymph nodes was 22.7: 18.6 for the first 30 cases and 23.1 thereafter, a significant difference (P=0.02). The mean operation time was 180.9 min: 287.9 min for the first 30 cases and 170.2 min thereafter, a significant improvement after 30 cases (P<0.01). The mean incision length after 30 cases was also shorter than that before 30 cases (P<0.01). Postoperative complications occurred in 24 (7.3%) of the 329 patients, and there was no conversion to open surgery. Conclusion: Even though LADG involves a difficult learning curve, we successfully performed 329 LADG procedures over the past two years, and we believe that LADG is a safe, feasible operation for treating most early gastric cancers (EGC).

Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.139-153
    • /
    • 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Through economic crises, bankruptcies have increased and bankruptcy prediction models have become increasingly important; corporate bankruptcy has therefore been regarded as one of the major research topics in business management, and it is also an active concern in industry. Previous studies attempted various methodologies to improve prediction accuracy and to resolve the overfitting problem, such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM), which are statistically based. More recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN), and fuzzy theory and genetic algorithms have also been applied; as a result, many bankruptcy models have been developed and performance has improved. In general, a company's financial and accounting information changes over time, as does the market situation, so it is difficult to predict bankruptcy using information from only a single point in time. Ignoring this time effect yields biased results, so a static model may not be suitable for bankruptcy prediction; a dynamic model, by contrast, has the potential to improve prediction, yet it has received little study. In this paper, we propose an RNN (Recurrent Neural Network), one of the deep learning methodologies, which learns from time series data and is known to perform well. For estimation of the bankruptcy prediction model and comparison of forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ, and KONEX markets from 2010 to 2016. To avoid predicting bankruptcy from financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. Bankruptcy was defined as delisting due to sluggish earnings, confirmed through KIND, the corporate stock information website. Variables were selected from previous papers: the first set consists of Z-score variables, which have become traditional predictors of bankruptcy, and the second set is a dynamic variable set. We ultimately selected 240 normal companies and 226 bankrupt companies for the first variable set, and 229 normal companies and 226 bankrupt companies for the second. We built a model that reflects dynamic changes in time-series financial data and, by comparing the suggested model with existing bankruptcy prediction models, found that it can help improve the accuracy of bankruptcy predictions. We used financial data from KIS Value (a financial database) and selected Multivariate Discriminant Analysis (MDA), the Generalized Linear Model known as logistic regression (GLM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN) as benchmarks. The experiment showed that the RNN's performance was better than the comparative models: its accuracy was high for both variable sets, the Area Under the Curve (AUC) was also high, and in the hit-ratio table the proportion of distressed companies that the RNN correctly predicted as bankrupt was higher than for the other models. A limitation of this paper is that an overfitting problem occurs during RNN training, which we expect to address by selecting more training data and appropriate variables. From these results, we expect this research to contribute to the development of bankruptcy prediction by proposing a new dynamic model.
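To make the modeling idea concrete, the following is a minimal sketch of an RNN-style classifier over several years of per-firm financial ratios. The framework (PyTorch), tensor shapes, and variable names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of an RNN-based bankruptcy classifier on yearly financial ratios.
# Shapes, variable names, and data are hypothetical; the paper's exact setup is not reproduced.
import torch
import torch.nn as nn

class BankruptcyRNN(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, years, n_features)
        _, (h, _) = self.rnn(x)              # final hidden state summarizes the sequence
        return self.head(h[-1]).squeeze(-1)  # logit: bankrupt vs. normal

# Hypothetical batch: 8 firms, 5 years of 12 Z-score-style ratios each
x = torch.randn(8, 5, 12)
y = torch.randint(0, 2, (8,)).float()

model = BankruptcyRNN(n_features=12)
loss_fn = nn.BCEWithLogitsLoss()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(5):                           # tiny training loop for illustration
    optim.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optim.step()
print(f"training loss: {loss.item():.4f}")
```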

Multi-classification of Osteoporosis Grading Stages Using Abdominal Computed Tomography with Clinical Variables : Application of Deep Learning with a Convolutional Neural Network (멀티 모달리티 데이터 활용을 통한 골다공증 단계 다중 분류 시스템 개발: 합성곱 신경망 기반의 딥러닝 적용)

  • Tae Jun Ha;Hee Sang Kim;Seong Uk Kang;DooHee Lee;Woo Jin Kim;Ki Won Moon;Hyun-Soo Choi;Jeong Hyun Kim;Yoon Kim;So Hyeon Bak;Sang Won Park
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.3
    • /
    • pp.187-201
    • /
    • 2024
  • Osteoporosis is a major health issue globally, often remaining undetected until a fracture occurs. To facilitate early detection, deep learning (DL) models were developed to classify osteoporosis using abdominal computed tomography (CT) scans. This study used retrospectively collected data from 3,012 contrast-enhanced abdominal CT scans. The DL models were constructed using image data, demographic/clinical information, and multi-modality data, respectively. Patients were categorized into normal, osteopenia, and osteoporosis groups based on T-scores obtained from dual-energy X-ray absorptiometry. The models showed high accuracy and effectiveness, with the combined-data model performing best, achieving an area under the receiver operating characteristic curve of 0.94 and an accuracy of 0.80. The image-based model also performed well, while the demographic-data model had lower accuracy and effectiveness. In addition, the DL model was interpreted with gradient-weighted class activation mapping (Grad-CAM) to highlight clinically relevant features in the images, revealing the femoral neck as a common site for fractures. The study shows that DL can accurately identify osteoporosis stages from clinical data, indicating the potential of abdominal CT scans for early osteoporosis detection and for reducing fracture risk through prompt treatment.
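As a rough illustration of the multi-modality setup described above (a CNN branch for the CT image fused with a small branch for clinical variables ahead of a three-class head), here is a minimal PyTorch sketch. The layer sizes, input shapes, and variable counts are assumptions, not the paper's architecture.

```python
# Minimal sketch of a multi-modality classifier that fuses CT image features with
# demographic/clinical variables for 3-class grading (normal / osteopenia / osteoporosis).
# The architecture and sizes are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class MultiModalOsteoNet(nn.Module):
    def __init__(self, n_clinical: int = 6, n_classes: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(               # small CNN branch for a CT slice
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.classifier = nn.Linear(32 + 16, n_classes)

    def forward(self, image, tabular):
        feats = torch.cat([self.cnn(image), self.clinical(tabular)], dim=1)
        return self.classifier(feats)           # logits over the three grading stages

# Hypothetical batch: 4 single-channel CT slices plus 6 clinical variables each
model = MultiModalOsteoNet()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 6))
print(logits.shape)                             # torch.Size([4, 3])
```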

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.137-154
    • /
    • 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. To prevent this, the quarantine authorities have made various human and material efforts, but the diseases have continued to occur. Avian influenza was first reported in 1878 and has become a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally; in countries free of the disease, it is recognized as an economic or political disease because it restricts international trade by complicating the import of processed and unprocessed livestock, and because quarantine is costly. In a society where the whole nation is connected as one living zone, there is no way to fully prevent the spread of infectious disease, so the occurrence of disease must be recognized and acted upon before it spreads. Epidemiological investigation of confirmed cases is carried out, and measures to prevent the spread of disease are taken according to the results, for both human and animal infectious diseases. The foundation of an epidemiological investigation is figuring out where an individual has been and whom he or she has met. From a data perspective, this can be defined as predicting the cause of an outbreak, the outbreak location, and future infections by collecting and analyzing geographic and relational data. Recently, attempts have been made to develop infectious disease prediction models using big data and deep learning technology, but there is little active research on model building and few case reports. KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. A more accurate prediction model was then constructed using machine learning algorithms such as logistic regression, Lasso, support vector machine, and random forest. In particular, the 2017 prediction model added the risk of diffusion to facilities, and its performance was improved by tuning the modeling hyper-parameters in various ways. The confusion matrix and ROC curve show that the model constructed in 2017 is superior to the 2016 machine learning model. The difference between the 2016 and 2017 models is that the later model also used visiting information on facilities such as feed factories and slaughterhouses, together with poultry information that was previously limited to chickens and ducks but was expanded to include geese and quail. In addition, an explanation of the results was added in 2017 to help the authorities make decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of big data on hazardous vehicle movement, farms, and the environment. The significance of this study is that it describes the evolution of a prediction model using big data that is used in the field; the model is expected to become more complete if the form of the viruses is taken into consideration. This will contribute to data utilization and analysis model development in related fields, and we expect the system constructed in this study to support more effective prevention.
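As a rough illustration of the model comparison described above (a regression baseline versus a tree-based ensemble, evaluated with a confusion matrix and ROC AUC), here is a minimal scikit-learn sketch. The features and data are hypothetical stand-ins for the vehicle-movement and facility-visit variables.

```python
# Minimal sketch: comparing logistic regression and a random forest on
# hypothetical vehicle-movement / farm-visit features, evaluated with ROC AUC
# and a confusion matrix.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))          # e.g., visit counts to feed mills, slaughterhouses, farms
y = rng.integers(0, 2, size=1000)        # 1 = outbreak at the farm, 0 = no outbreak

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(n_estimators=200, random_state=42))]:
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    print(name, "AUC:", round(roc_auc_score(y_te, proba), 3))
    print(confusion_matrix(y_te, (proba > 0.5).astype(int)))
```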

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintaining ICT infrastructure and preventing failures through anomaly detection is becoming important. System monitoring data are multidimensional time series, which makes it difficult to account simultaneously for the characteristics of multidimensional data and those of time series data. With multidimensional data, correlations between variables must be considered, and existing probability-based, linear, and distance-based methods degrade due to the curse of dimensionality. In addition, time series data are commonly preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis, and these techniques further increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when the data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect anomalies by comparing predicted and actual values; their performance drops when the model is not solid or when the data contain noise or outliers, and they are restricted to training data that is free of such noise and outliers. An autoencoder built on artificial neural networks is trained to reproduce its input as closely as possible. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy a probability distribution or linearity assumption, and it can learn without labeled training data. However, it is limited in identifying local outliers in multidimensional data, and the dimensionality of the data increases greatly due to the characteristics of time series data. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, a Multimodal Autoencoder (MAE) was applied to improve the identification of local outliers in multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder bottleneck and thereby learn correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs usually use categorical variables, but in this study time was used as the condition to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance of the autoencoders over 41 variables was checked for the proposed and comparison models. Reconstruction performance differs by variable; reconstruction generally works well, with small loss values for the memory, disk, and network modals in all three autoencoder models. The process modal showed no significant difference across the three models, and the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, the performance ranked CMAE, MAE, then UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. The accuracy of the model also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has advantages beyond the performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the resulting dimensional increase can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
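To make the architecture concrete, here is a minimal sketch of a conditional multimodal autoencoder in PyTorch: each modal gets its own encoder and decoder, the modals share one bottleneck, and a time condition is concatenated at the bottleneck. The modal grouping, sizes, and sin/cos time encoding are assumptions for illustration, not the paper's exact model.

```python
# Minimal sketch of a conditional multimodal autoencoder: per-modal encoders
# (e.g., CPU, memory, disk, network) share one bottleneck, and a time-of-day
# condition is concatenated at the bottleneck. Sizes and modal names are assumptions.
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims=(10, 10, 10, 11), cond_dim=2, latent=8):
        super().__init__()
        self.encoders = nn.ModuleList(nn.Linear(d, 16) for d in modal_dims)
        self.to_latent = nn.Linear(16 * len(modal_dims), latent)
        self.from_latent = nn.Linear(latent + cond_dim, 16 * len(modal_dims))
        self.decoders = nn.ModuleList(nn.Linear(16, d) for d in modal_dims)

    def forward(self, modals, cond):          # modals: list of tensors, cond: (batch, cond_dim)
        hidden = [torch.relu(enc(m)) for enc, m in zip(self.encoders, modals)]
        z = torch.relu(self.to_latent(torch.cat(hidden, dim=1)))
        h = torch.relu(self.from_latent(torch.cat([z, cond], dim=1)))
        chunks = h.chunk(len(self.decoders), dim=1)
        return [dec(c) for dec, c in zip(self.decoders, chunks)]

# Hypothetical batch: 41 metrics split into four modals, sin/cos time-of-day as condition
modals = [torch.randn(32, d) for d in (10, 10, 10, 11)]
cond = torch.randn(32, 2)
recon = CMAE()(modals, cond)
loss = sum(nn.functional.mse_loss(r, m) for r, m in zip(recon, modals))
print(loss.item())   # at inference, high reconstruction error flags an anomaly
```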

A Distributed Web-DSS Approach for Coordinating Interdepartmental Decisions - Emphasis on Production and Marketing Decision (부서간 의사결정 조정을 위한 분산 웹 의사결정지원시스템에 관한 연구)

  • 이건창;조형래;김진성
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 1999.10a
    • /
    • pp.291-300
    • /
    • 1999
  • To adapt to the rapid development of Internet-based information and communication technology, companies are not only migrating all of their management systems onto the Internet but are also transforming their organizations into globally distributed enterprises. This rapid change in the business environment has created a need for new forms of interdepartmental decision coordination within the firm. Although many studies have examined support for coordinated decision making in conventional firms, research on decision support systems for new network-type organizations such as global enterprises has largely been limited to simple group decision support systems or distributed decision support systems. This study therefore proposes a mechanism that can efficiently support interdepartmental decision coordination arising from Internet-based (in particular, web-based) global and distributed management, and implements a prototype system based on it to verify its performance. In particular, a coordination mechanism was developed and tested for the production and marketing departments, where interdepartmental decision support is most representative within the firm. As a result, we propose a web-based distributed decision support system (Web-DSS) built on an improved PROMISE (PROduction and Marketing Interface Support Environment), a coordination mechanism that can efficiently support decision making between the production and marketing departments of a global enterprise.
