• Title/Abstract/Keyword: Learning curve


A Performance Evaluation of mDSE-MMA Adaptive Equalization Algorithm in QAM Signal

  • 임승각
    • 한국인터넷방송통신학회논문지 / Vol. 20, No. 2 / pp.103-108 / 2020
  • This paper evaluates the performance of the mDSE-MMA (modified Dithered Signed Error-MMA) adaptive equalization algorithm, which reduces the distortion caused by additive noise, intersymbol interference, and fading in nonlinear communication channels during QAM signal transmission. The DSE-MMA algorithm reduces the computational load of the conventional MMA, but at the cost of degraded equalization performance. To remedy this degradation, mDSE-MMA adjusts the adaptation step size according to whether the equalizer output lies within a given radius around the transmitted constellation point. To compare the performance of the proposed mDSE-MMA with the conventional DSE-MMA algorithm, computer simulations were performed under the same channel and noise environment, using as performance indices the recovered constellation of the equalizer output at the receiver; the residual ISI, MD, and MSE learning curves indicating convergence performance; and the SER. The simulation results confirm that mDSE-MMA outperforms DSE-MMA on every performance index.
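The radius-gated step-size control described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the QPSK slicer, the dither amplitude, the decision radius, and the two step sizes are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def mdse_mma_step(w, x, R2=2.0, mu_near=1e-4, mu_far=1e-3,
                  radius=0.5, dither=0.01):
    """One dithered signed-error MMA-style tap update with the step size
    gated by the distance from the equalizer output to the nearest
    constellation point (the mDSE-MMA idea sketched in the abstract)."""
    y = np.vdot(w, x)                        # equalizer output
    # MMA error, computed on the real and imaginary parts separately
    e = y.real * (y.real**2 - R2) + 1j * (y.imag * (y.imag**2 - R2))
    # DSE: dither the error before taking its sign
    d = dither * (rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1))
    se = np.sign((e + d).real) + 1j * np.sign((e + d).imag)
    # mDSE: small step near a decision point, large step far from it
    nearest = np.sign(y.real) + 1j * np.sign(y.imag)  # QPSK slicer (assumed)
    mu = mu_near if abs(y - nearest) < radius else mu_far
    return w - mu * se * np.conj(x)

w0 = np.array([1.0 + 0j, 0, 0, 0])
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
w1 = mdse_mma_step(w0, x)
```

The signed error keeps the per-tap update multiplier-free apart from the step size, which is what makes the DSE family cheaper than plain MMA.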

WCDMA Interference Cancellation Wireless Repeater Using Variable Stepsize Complex Sign-Sign LMS Algorithm

  • 홍승모;김종훈
    • 대한전자공학회논문지TC / Vol. 47, No. 9 / pp.37-43 / 2010
  • An interference-cancellation wireless repeater extends the coverage between a base station and a terminal by directly amplifying and retransmitting weak RF signals; since part of the transmitted signal is reflected by the surroundings and fed back into the input, cancelling this interference signal is essential. This paper proposes a Variable Stepsize Complex Sign-Sign (VSCSS) LMS adaptive algorithm for estimating the feedback channel. The proposed algorithm can be implemented without multiplication or division, drastically reducing the logic resources required for an FPGA implementation. Its performance is analyzed against the CSS-LMS algorithm, and the validity of the analysis is verified with learning curves obtained from simulation. Simulations with WCDMA signals over a fading feedback channel further show performance nearly identical to the widely used NLMS algorithm in both convergence speed and error.
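The multiplier-free flavor of the update can be sketched as below. The power-of-two step size stands in for a hardware bit-shift; the variable-step schedule itself (how `shift` would change over time) is not reproduced, since the abstract does not state the paper's exact rule.

```python
import numpy as np

def csgn(z):
    """Complex signum: sign of the real and imaginary parts separately."""
    return np.sign(z.real) + 1j * np.sign(z.imag)

def vscss_lms_step(w, x, d, shift):
    """One complex sign-sign LMS tap update with a power-of-two step.

    Taking signs of both the error and the regressor removes the data
    multiplies, and a step of 2**-shift is a bit-shift in fixed point,
    which is why the algorithm maps cheaply onto an FPGA.
    """
    y = np.vdot(w, x)              # filter output
    e = d - y                      # estimation error
    mu = 2.0 ** (-shift)           # would be a right-shift in hardware
    return w + mu * csgn(e) * np.conj(csgn(x))

w = np.zeros(8, dtype=complex)
x = np.exp(1j * np.arange(8))      # toy regressor
w1 = vscss_lms_step(w, x, d=1 + 1j, shift=4)
```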

The application of convolutional neural networks for automatic detection of underwater object in side scan sonar images

  • 김정문;최지웅;권혁종;오래근;손수욱
    • 한국음향학회지 / Vol. 37, No. 2 / pp.118-128 / 2018
  • This paper addresses detecting underwater objects by training a convolutional neural network on side scan sonar images. Augmenting the traditional practice of human analysts inspecting side scan sonar images with a convolutional neural network can improve analysis efficiency. The side scan sonar image data used in this study were released by the U.S. Naval Surface Warfare Center and consist of four types of synthetic underwater objects. The convolutional network is based on Faster R-CNN (Region based Convolutional Neural Networks), which learns from regions of interest, with the network details configured to suit the available data. The results are compared with precision-recall curves, and the applicability of convolutional networks to underwater object detection is examined by studying how changing the designated regions of interest in the sonar data affects detection performance.
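The precision-recall comparison mentioned above can be computed from ranked detections as in this sketch; the scores and labels are made-up toy values, not the paper's sonar results.

```python
import numpy as np

def precision_recall_curve(scores, labels):
    """Precision-recall pairs over descending score thresholds.

    scores : detector confidence per candidate (higher = more confident)
    labels : 1 for a true object, 0 for a false alarm
    """
    order = np.argsort(-np.asarray(scores))
    tp = np.cumsum(np.asarray(labels)[order])       # true positives so far
    fp = np.cumsum(1 - np.asarray(labels)[order])   # false positives so far
    precision = tp / (tp + fp)
    recall = tp / max(np.sum(labels), 1)
    return precision, recall

p, r = precision_recall_curve([0.9, 0.8, 0.6, 0.4], [1, 0, 1, 1])
```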

Selection of ROI for the AF using by Learning Algorithm and Stabilization Method for the Region

  • 한학용;장원우;하주영;허강인;강봉순
    • 융합신호처리학회논문지 / Vol. 10, No. 4 / pp.233-238 / 2009
  • This paper proposes a method for stably selecting the detected region required by systems that use the face as the region of interest (ROI) of an auto-focus digital camera. The method treats the face region as the ROI and focuses on it automatically in the progressive input video processed in real time by the ISP (Image Signal Processor) of digital and mobile cameras. The AdaBoost algorithm is used as the learning algorithm for face detection. We propose a detection method for tilted faces not included in the training set, a post-processing method for the detection results, and a stabilization scheme that keeps the ROI steady rather than jittering. To evaluate the proposed ROI stabilization algorithm, we show the deviation from a reference trajectory for a moving face and use the RMS error against a regression curve of each trajectory as the stabilization metric.
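The stabilization metric, the RMS error between an ROI trajectory and its regression curve, can be sketched as follows. The quadratic curve model and the toy trajectories are assumptions; the abstract does not state which regression curve was fitted.

```python
import numpy as np

def trajectory_rms_error(t, centers, degree=2):
    """RMS deviation of an ROI-center trajectory from its regression curve.

    t       : frame indices
    centers : ROI center coordinate per frame (one axis)
    degree  : polynomial degree of the regression curve (assumed)
    """
    coeffs = np.polyfit(t, centers, degree)     # least-squares fit
    fitted = np.polyval(coeffs, t)
    return float(np.sqrt(np.mean((centers - fitted) ** 2)))

rng = np.random.default_rng(0)
t = np.arange(20.0)
smooth = 0.5 * t + 3.0                          # a stable ROI track
jitter = smooth + rng.normal(0, 2.0, t.size)    # a jittery ROI track
err = trajectory_rms_error(t, smooth)
err_jitter = trajectory_rms_error(t, jitter)
```

A smaller RMS error against the fitted curve indicates a steadier ROI, which is the sense in which the paper uses it as a stabilization score.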


Learning fiberoptic intubation for awake nasotracheal intubation

  • Kim, Hyuk;So, Eunsun;Karm, Myong-Hwan;Kim, Hyun Jeong;Seo, Kwang-Suk
    • Journal of Dental Anesthesia and Pain Medicine / Vol. 17, No. 4 / pp.297-305 / 2017
  • Background: Fiberoptic nasotracheal intubation (FNI) is performed when it is difficult to open the mouth or when intubation with a laryngoscope is expected to be difficult. However, training is necessary because intubation performed by inexperienced operators leads to complications. Methods: Every resident performed intubation in 40 patients. Success of FNI was evaluated by the intubation time. The first attempt was limited to 2 min 30 s. If the second attempt was unsuccessful, the case was considered a failure and a specialist performed the nasotracheal intubation. If the general method of intubation was expected to be difficult, awake intubation was performed. The degree of nasal bleeding during intubation was also evaluated. Results: The mean age of the operators (11 men, 7 women) was 27.8 years. FNI was performed in a total of 716 patients. The success rate was 88.3% on the first attempt and 94.6% on the second attempt. The failure rate was 4.9% in anesthetized patients and 13.6% in awake patients. For intubation in anesthetized patients, the failure rate over a resident's first five trials was 9.6%, which decreased to 0.7% once the number of trials exceeded 30. For awake intubation, there were no failed attempts once a resident had performed more than 30 FNIs. The number of FNIs performed and nasal bleeding were important factors influencing the failure rate. Conclusion: The success rate of FNI increased as the number of FNIs performed by residents increased, despite nasal bleeding.

Forecasting of the COVID-19 pandemic situation of Korea

  • Goo, Taewan;Apio, Catherine;Heo, Gyujin;Lee, Doeun;Lee, Jong Hyeok;Lim, Jisun;Han, Kyulhee;Park, Taesung
    • Genomics & Informatics / Vol. 19, No. 1 / pp.11.1-11.8 / 2021
  • For the novel coronavirus disease 2019 (COVID-19), predictive modeling in the literature broadly uses susceptible-exposed-infected-recovered (SEIR)/SIR, agent-based, and curve-fitting models. Governments and legislative bodies rely on insights from prediction models to suggest new policies and to assess the effectiveness of enforced policies. Therefore, access to accurate outbreak prediction models is essential to obtain insights into the likely spread and consequences of infectious diseases. The objective of this study is to predict the future COVID-19 situation of Korea. Here, we employed six models for this analysis: SEIR, local linear regression (LLR), negative binomial (NB) regression, segmented Poisson, a deep-learning-based long short-term memory model (LSTM), and a tree-based gradient boosting machine (GBM). After prediction, model performance was compared using relative mean squared errors (RMSE) for two sets of training data (January 20, 2020-December 31, 2020 and January 20, 2020-January 31, 2021) and testing data (January 1, 2021-February 28, 2021 and February 1, 2021-February 28, 2021). Except for the segmented Poisson model, all models predicted a decline in daily confirmed cases in the country for the coming months. Comparison of RMSE values showed that LLR, GBM, SEIR, NB, and LSTM, in that order, performed well in forecasting the pandemic situation of the country. A good understanding of the epidemic dynamics would greatly enhance the control and prevention of COVID-19 and other infectious diseases. Therefore, with daily confirmed cases increasing since this year, these results could help the pandemic response by informing decisions about planning, resource allocation, and social distancing policies.
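Of the models compared, the SEIR compartment model is the easiest to sketch. The parameter values and population size below are illustrative placeholders, not the values fitted to the Korean case counts in the paper.

```python
import numpy as np

def simulate_seir(beta, sigma, gamma, s0, e0, i0, r0, steps, dt=0.1):
    """Forward-Euler integration of the SEIR compartment model.

    beta : transmission rate, sigma : incubation rate (1 / latent period),
    gamma : recovery rate.
    """
    s, e, i, r = float(s0), float(e0), float(i0), float(r0)
    n = s + e + i + r
    traj = [(s, e, i, r)]
    for _ in range(steps):
        new_exposed = beta * s * i / n          # S -> E flow
        new_infectious = sigma * e              # E -> I flow
        new_recovered = gamma * i               # I -> R flow
        s -= dt * new_exposed
        e += dt * (new_exposed - new_infectious)
        i += dt * (new_infectious - new_recovered)
        r += dt * new_recovered
        traj.append((s, e, i, r))
    return np.array(traj)

# 120 simulated days at dt = 0.1 with a toy population of 51 million
traj = simulate_seir(beta=0.3, sigma=0.2, gamma=0.1,
                     s0=51e6, e0=100.0, i0=50.0, r0=0.0, steps=1200)
```

In a forecasting setting such as the paper's, beta, sigma, and gamma would be estimated from the observed daily confirmed cases rather than fixed as here.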

Comparison of survival prediction models for pancreatic cancer: Cox model versus machine learning models

  • Kim, Hyunsuk;Park, Taesung;Jang, Jinyoung;Lee, Seungyeoun
    • Genomics & Informatics / Vol. 20, No. 2 / pp.23.1-23.9 / 2022
  • A survival prediction model has recently been developed to evaluate the prognosis of resected nonmetastatic pancreatic ductal adenocarcinoma based on a Cox model using two nationwide databases: Surveillance, Epidemiology and End Results (SEER) and Korea Tumor Registry System-Biliary Pancreas (KOTUS-BP). In this study, we applied two machine learning methods, random survival forests (RSF) and support vector machines (SVM), for survival analysis and compared their prediction performance using the SEER and KOTUS-BP datasets. Three schemes were used for model development and evaluation. First, we utilized data from SEER for model development and data from KOTUS-BP for external evaluation. Second, the two datasets were swapped, taking data from KOTUS-BP for model development and data from SEER for external evaluation. Finally, we mixed the two datasets half and half and utilized the mixed datasets for model development and validation. We used 9,624 patients from SEER and 3,281 patients from KOTUS-BP to construct a prediction model with seven covariates: age, sex, histologic differentiation, adjuvant treatment, resection margin status, and the American Joint Committee on Cancer 8th edition T-stage and N-stage. Comparing the three schemes, the performance of the Cox model, RSF, and SVM was better when using the mixed datasets than when using the unmixed datasets. When using the mixed datasets, the C-index and the 1-year, 2-year, and 3-year time-dependent areas under the curve for the Cox model were 0.644, 0.698, 0.680, and 0.687, respectively. The Cox model performed slightly better than RSF and SVM.
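The C-index reported above is the fraction of comparable patient pairs that the model orders correctly. A toy version, which skips tied event times and does not reproduce the paper's evaluation pipeline:

```python
import itertools

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: a higher risk score should mean an earlier event.

    times       : observed follow-up times
    events      : 1 if the event occurred, 0 if censored
    risk_scores : model-predicted risk (higher = worse prognosis)
    """
    concordant, comparable = 0.0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue                      # tied times skipped in this sketch
        first = i if times[i] < times[j] else j
        second = j if first == i else i
        if not events[first]:
            continue                      # earlier subject censored: pair not comparable
        comparable += 1
        if risk_scores[first] > risk_scores[second]:
            concordant += 1
        elif risk_scores[first] == risk_scores[second]:
            concordant += 0.5             # ties in score count half
    return concordant / comparable

c = concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.4, 0.2])
```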

Machine learning based anti-cancer drug response prediction and search for predictor genes using cancer cell line gene expression

  • Qiu, Kexin;Lee, JoongHo;Kim, HanByeol;Yoon, Seokhyun;Kang, Keunsoo
    • Genomics & Informatics / Vol. 19, No. 1 / pp.10.1-10.7 / 2021
  • Although many models have been proposed in recent years to accurately predict the response of drugs in cell lines, understanding the genomics of drug response is also key to precision oncology. In this paper, based on cancer cell line gene expression and drug response data, we established a reliable and accurate drug response prediction model and found predictor genes for several drugs of interest. To this end, we first performed pre-selection of genes based on the Pearson correlation coefficient and then used an ElasticNet regression model for drug response prediction and fine gene selection. To find a more reliable set of predictor genes, we performed the regression twice for each drug, once with IC50 and once with the area under the curve (AUC) (or activity area). For the 12 drugs we tested, the predictive performance in terms of the Pearson correlation coefficient exceeded 0.6, and the highest was for 17-AAG, with a Pearson correlation coefficient of 0.811 for IC50 and 0.81 for AUC. We also identified predictor genes common to IC50 and AUC, with which the performance was similar to that with genes found separately for IC50 and AUC, but with a much smaller number of predictor genes. Using only common predictor genes, the best performance was for AZD6244 (0.8016 for IC50, 0.7945 for AUC) with 321 predictor genes.
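The Pearson-correlation pre-selection step can be sketched as below; the matrix sizes, the planted gene, and `top_k` are made-up illustration values, and the subsequent ElasticNet regression is not reproduced here.

```python
import numpy as np

def preselect_genes(expr, response, top_k=100):
    """Rank genes by |Pearson correlation| with the drug response and
    keep the top_k, mirroring the pre-selection step described above.

    expr     : (n_cell_lines, n_genes) expression matrix
    response : (n_cell_lines,) IC50 or AUC per cell line
    """
    ex = expr - expr.mean(axis=0)            # center each gene
    ry = response - response.mean()          # center the response
    cov = ex.T @ ry
    corr = cov / (np.linalg.norm(ex, axis=0) * np.linalg.norm(ry))
    return np.argsort(-np.abs(corr))[:top_k]

rng = np.random.default_rng(1)
y = rng.standard_normal(50)                  # toy drug response
X = rng.standard_normal((50, 200))           # toy expression matrix
X[:, 7] = y + 0.01 * rng.standard_normal(50) # plant one correlated gene
top = preselect_genes(X, y, top_k=10)
```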

Malware Detection Using Deep Recurrent Neural Networks with no Random Initialization

  • Amir Namavar Jahromi;Sattar Hashemi
    • International Journal of Computer Science & Network Security / Vol. 23, No. 8 / pp.177-189 / 2023
  • Malware detection is an increasingly important operational focus in cyber security, particularly given the fast pace of such threats (e.g., new malware variants introduced every day). There has been great interest in exploring the use of machine learning techniques in automating and enhancing the effectiveness of malware detection and analysis. In this paper, we present a deep recurrent neural network solution, a stacked Long Short-Term Memory (LSTM) network with pre-training as a regularization method to avoid random network initialization. In our proposal, we use global and short-term dependencies of the inputs. With pre-training, we avoid random initialization and are able to improve the accuracy and robustness of malware threat hunting. The proposed method speeds up convergence (in comparison to a stacked LSTM) by reducing the length of malware OpCode or bytecode sequences, which reduces the complexity of the final method. This leads to better accuracy, higher Matthews Correlation Coefficients (MCC), and higher Area Under the Curve (AUC) than a standard LSTM with similar detection time. Our proposed method can be applied to real-time malware threat hunting, particularly for safety-critical systems such as eHealth or the Internet of Military Things, where poor convergence of the model could lead to catastrophic consequences. We evaluate the effectiveness of our proposed method on Windows, ransomware, Internet of Things (IoT), and Android malware datasets using both static and dynamic analysis. For IoT malware detection, we also present a comparative summary of the performance of our proposed method and the standard stacked LSTM method on an IoT-specific dataset. More specifically, our proposed method achieves an accuracy of 99.1% in detecting IoT malware samples, with an AUC of 0.985 and an MCC of 0.95, thus outperforming standard LSTM-based methods on these key metrics.
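Of the reported metrics, the Matthews correlation coefficient is easy to state from confusion-matrix counts; the counts passed in below are invented for illustration, not the paper's results.

```python
import math

def matthews_corrcoef(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts.

    A balanced single-value summary of binary detection quality;
    returns 0.0 when any marginal is empty (undefined case).
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

mcc = matthews_corrcoef(tp=95, fp=5, tn=90, fn=10)
```

Unlike accuracy, MCC stays near zero for a detector that ignores the minority class, which is why it is favored on imbalanced malware datasets.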

An Experimental Study on AutoEncoder to Detect Botnet Traffic Using NetFlow-Timewindow Scheme: Revisited

  • 강구홍
    • 정보보호학회논문지 / Vol. 33, No. 4 / pp.687-697 / 2023
  • Botnets, whose attack patterns have become ever more intelligent and diverse, are recognized as one of today's most serious cyber-security threats. This paper revisits botnet detection experiments on the UGR and CTU-13 datasets using an autoencoder, a semi-supervised deep learning model. To prepare the autoencoder's input vectors, NetFlow records were grouped by source IP address over sliding windows and overlapped to produce data points carrying the extracted traffic features. In particular, we found a power-law property in which the number of data points with the same flow-degree is proportional to the number of NetFlow records overlapped into those data points, with a correlation coefficient above 97% on the real datasets. This power-law property strongly affects autoencoder training and, in turn, botnet detection performance. The autoencoder's performance was validated using the area under the curve (AUC) of the receiver operating characteristic (ROC).
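The power-law check described above amounts to a linear fit in log-log space plus its correlation coefficient. The synthetic degree/count data here merely illustrate the procedure; they are not the UGR/CTU-13 statistics from the paper.

```python
import numpy as np

def powerlaw_fit(x, y):
    """Fit y ≈ c * x**k by linear regression in log-log space.

    Returns (exponent k, correlation coefficient of the log-log data);
    a |correlation| near 1 indicates a good power-law fit, which is the
    kind of evidence the paper reports (above 97%).
    """
    lx, ly = np.log(x), np.log(y)
    k, logc = np.polyfit(lx, ly, 1)        # slope = exponent
    r = np.corrcoef(lx, ly)[0, 1]
    return k, r

degree = np.arange(1, 50, dtype=float)     # toy flow-degree values
count = 1000.0 * degree ** -1.8            # an exact power law
k, r = powerlaw_fit(degree, count)
```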