• Title/Summary/Keyword: Computational Complexity (계산의 복잡성)

Search Results: 1,096

Analysis of Uncertainty in Ocean Color Products by Water Vapor Vertical Profile (수증기 연직 분포에 의한 GOCI-II 해색 산출물 오차 분석)

  • Kyeong-Sang Lee;Sujung Bae;Eunkyung Lee;Jae-Hyun Ahn
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1591-1604
    • /
    • 2023
  • In ocean color remote sensing, atmospheric correction is a vital process for ensuring the accuracy and reliability of ocean color products. Furthermore, in recent years, the remote sensing community has intensified its requirements for understanding errors in satellite data. Accordingly, current research addresses errors in remote sensing reflectance (Rrs) resulting from inaccuracies in the meteorological variables (total ozone, pressure, wind field, and total precipitable water) used as auxiliary data for atmospheric correction. However, the error in Rrs caused by the variability of the water vapor profile has not been investigated, despite being a recognized error source. In this study, we used the Second Simulation of a Satellite Signal in the Solar Spectrum, Vector (6SV) version 2.1 to compute errors in water vapor transmittance arising from variations in the water vapor profile within the GOCI-II observation area, and then analyzed the associated errors in ocean color products. The observed water vapor profiles not only exhibited complex shapes but also varied significantly near the surface, leading to transmittance differences of up to 0.007 compared to the US Standard 62 water vapor profile used in the GOCI-II atmospheric correction. The resulting variation in water vapor transmittance changed the aerosol reflectance estimate, consequently introducing errors in Rrs across all GOCI-II bands. However, the error in Rrs in the 412-555 nm bands due to the difference in the water vapor profile was below 2%, which is lower than the required accuracy. Similar errors appeared in other ocean color products such as chlorophyll-a concentration, colored dissolved organic matter, and total suspended matter concentration. These results indicate that the variability in water vapor profiles has minimal impact on the accuracy of atmospheric correction and ocean color products. Therefore, improving the accuracy of the input data for water vapor column concentration is even more critical for enhancing the accuracy of ocean color products in terms of water vapor absorption correction.
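As a back-of-the-envelope illustration of the error metric involved, here is a minimal Python sketch of a band-by-band relative Rrs error check; all values, band choices, and the 5% requirement threshold are hypothetical stand-ins, not figures from the paper:

```python
# Minimal sketch (hypothetical values): relative Rrs error per band,
# screened against an assumed required-accuracy threshold.
import numpy as np

def relative_error_pct(rrs_ref, rrs_perturbed):
    return 100.0 * np.abs(rrs_perturbed - rrs_ref) / rrs_ref

bands_nm = np.array([412, 443, 490, 555, 660, 680])
rrs_ref = np.array([0.0065, 0.0072, 0.0081, 0.0043, 0.0009, 0.0007])  # sr^-1, hypothetical
# perturbed Rrs after a changed water vapor transmittance (hypothetical factors)
rrs_pert = rrs_ref * (1 + np.array([0.012, 0.010, 0.008, 0.015, 0.030, 0.040]))

REQUIREMENT_PCT = 5.0  # assumed threshold; the abstract does not state the value
for band, err in zip(bands_nm, relative_error_pct(rrs_ref, rrs_pert)):
    print(f"{band} nm: {err:.1f}% {'OK' if err < REQUIREMENT_PCT else 'exceeds requirement'}")
```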

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • A Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such network, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Our primary pipeline has three steps. First, an image from the target task is fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, since it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096 + 4096 + 1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple-ConvNet-layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% by the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% by the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% by the FC7 layer on the SUN397 dataset. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
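A minimal sketch of the described pipeline, assuming torchvision's pre-trained AlexNet (the fc6/fc7/fc8 names follow common convention for its fully connected layers); this is an illustration under those assumptions, not the authors' implementation:

```python
# Minimal sketch: extract activations from AlexNet's three fully connected
# layers, concatenate to a 9192-dim vector, then reduce with PCA.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from PIL import Image

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

activations = {}
def hook(name):
    def fn(module, inp, out):
        activations[name] = out.detach()
    return fn

# In torchvision's AlexNet, classifier[1]=FC6, classifier[4]=FC7, classifier[6]=FC8.
model.classifier[1].register_forward_hook(hook("fc6"))
model.classifier[4].register_forward_hook(hook("fc7"))
model.classifier[6].register_forward_hook(hook("fc8"))

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def extract(img_path):
    x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(x)
    # concatenate FC6 (4096) + FC7 (4096) + FC8 (1000) -> 9192-dim feature
    return torch.cat([activations["fc6"], activations["fc7"],
                      activations["fc8"]], dim=1).squeeze(0).numpy()

# Stack extract() outputs into a (n_images, 9192) matrix, then keep the
# salient directions with PCA before training the classifier, e.g.:
# pca = PCA(n_components=512).fit(features)
# reduced = pca.transform(features)
```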

Koreanized Analysis System Development for Groundwater Flow Interpretation (지하수유동해석을 위한 한국형 분석시스템의 개발)

  • Choi, Yun-Yeong
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.3 no.3 s.10
    • /
    • pp.151-163
    • /
    • 2003
  • In this study, an algorithm for the groundwater flow process was established to develop a Koreanized groundwater program that handles the geographic and geologic conditions of aquifers showing dynamic behavior in the groundwater flow system. All input data settings of the 3-DFM model developed in this study are organized in Korean, and the model provides a help function for each input item: detailed information about each input parameter appears when the mouse pointer is placed over it. The model also makes it easy to specify the geologic boundary condition for each stratum, or the initial head data, in the worksheet. In addition, it displays input boxes for each analysis condition, so that setting each parameter is not as complicated as in the existing MODFLOW when steady and unsteady flow analyses, as well as analyses of the characteristics of each stratum, are performed. Descriptions of the input data are displayed on the right side of the window, the analysis results on the left, and the results can also be viewed as a TXT file. The developed model is a numerical model using the finite difference method, and its applicability was examined by comparing observed groundwater heads against heads simulated with real recharge amounts and estimated parameters. The 3-DFM model was applied to the Sehwa-ri and Songdang-ri areas of Jeju, Korea, to analyze the groundwater flow system under pumping; the observed and computed groundwater heads were almost in accordance with each other, with error rates of 0.03-0.07%. From the equipotentials and velocity vectors computed in a simulation performed for conditions before pumping started in the study area, the groundwater flow was found to be distributed evenly from Nopen-orum and Munseogi-orum toward Wolang-bong, Yongnuni-orum, and Songja-bong. These results are in accordance with those of MODFLOW.
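As a rough illustration of the numerical core such a model rests on, here is a minimal sketch of a steady-state 2D finite-difference head solver (Jacobi iteration on a homogeneous confined aquifer); the 3-DFM model itself is three-dimensional and far more general:

```python
# Minimal sketch: steady-state 2D groundwater flow, Laplace equation for
# hydraulic head solved with a five-point finite-difference Jacobi iteration.
import numpy as np

def solve_head(nx=50, ny=50, h_left=10.0, h_right=5.0, tol=1e-6, max_iter=20000):
    h = np.full((ny, nx), (h_left + h_right) / 2)
    h[:, 0], h[:, -1] = h_left, h_right      # fixed-head (Dirichlet) boundaries
    for _ in range(max_iter):
        h_old = h.copy()
        # five-point stencil on interior nodes
        h[1:-1, 1:-1] = 0.25 * (h_old[:-2, 1:-1] + h_old[2:, 1:-1] +
                                h_old[1:-1, :-2] + h_old[1:-1, 2:])
        # no-flow (Neumann) top/bottom boundaries via edge copy
        h[0, 1:-1], h[-1, 1:-1] = h[1, 1:-1], h[-2, 1:-1]
        if np.max(np.abs(h - h_old)) < tol:
            break
    return h

heads = solve_head()   # heads vary linearly from 10 m to 5 m across the domain
```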

Development of an Eating Habit Checklist for Screening Elementary School Children at Risk of Inadequate Micronutrient Intake (초등학생의 미량영양소 섭취부족 위험 진단을 위한 간이 식습관평가표 개발)

  • Yon, Mi-Yong;Hyun, Tai-Sun
    • Journal of Nutrition and Health
    • /
    • v.42 no.1
    • /
    • pp.38-47
    • /
    • 2009
  • The purpose of this study was to develop an eating habit checklist for screening elementary school children at risk of inadequate micronutrient intake. Eating habits, food intake, and anthropometric data were collected from 142 children (80 boys and 62 girls) in the 4th to 6th grades of elementary schools. The percentage of Recommended Intakes (RI) and the Mean Adequacy Ratio (MAR) of six micronutrients (vitamin A, riboflavin, vitamin C, calcium, iron, and zinc), together with the number of these nutrients consumed below the EAR, were used as indices of the risk of inadequate micronutrient intake. Pearson correlation coefficients between eating habit scores and the inadequate-intake indices were calculated in order to select the questions included in the checklist. Meal frequency, enough time for breakfast, regularity of dinner, appetite, and the eating frequencies of Kimchi, milk, fruits, and beans showed significant correlations with the indices of inadequate micronutrient intake. Stepwise regression analysis was performed to weight each item by its predictive strength. To determine the cut-off point of the test score, sensitivity, specificity, and positive predictive values were calculated. An 8-item checklist with scores ranging from 0 to 12 points was developed; children scoring 6 points or higher were classified as the high-risk group for inadequate micronutrient intake, and those with 4 or 5 points as the moderate-risk group. Among our subjects, 14.1% were classified as high-risk and 30.3% as moderate-risk. The proportions of subjects consuming below the EAR of every micronutrient except vitamin C were highest in the high-risk group, and the proportions consuming below the EAR differed significantly among the three groups for all micronutrients except vitamin B6. This checklist will provide a useful screening tool for identifying children at risk of inadequate micronutrient intake.
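A minimal sketch of the cut-off selection step, computing sensitivity, specificity, and positive predictive value at candidate thresholds; the data below are randomly generated stand-ins, not the study's:

```python
# Minimal sketch: evaluate screening metrics for each candidate cut-off score.
import numpy as np

def screen_metrics(scores, at_risk, cutoff):
    pred = scores >= cutoff                   # flagged by the checklist
    tp = np.sum(pred & at_risk)
    fp = np.sum(pred & ~at_risk)
    fn = np.sum(~pred & at_risk)
    tn = np.sum(~pred & ~at_risk)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    return sensitivity, specificity, ppv

rng = np.random.default_rng(0)
scores = rng.integers(0, 13, size=142)        # checklist scores 0-12 (simulated)
at_risk = rng.random(142) < 0.3               # reference standard, e.g. intake below EAR
for cutoff in range(3, 9):
    sens, spec, ppv = screen_metrics(scores, at_risk, cutoff)
    print(f"cutoff {cutoff}: sens={sens:.2f} spec={spec:.2f} ppv={ppv:.2f}")
```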

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality, and a multi-mode multi-task embedded system if it additionally supports multiple tasks to be executed within a mode. In this paper, we address the HW/SW partitioning problem for multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimum-cost allocation and mapping of processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. Success in solving the problem is closely tied to how fully the potential parallelism among module executions is exploited. However, because the search space of that parallelism is inherently very large, and in order to keep schedulability analysis tractable, prior HW/SW partitioning methods have not fully exploited the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of the modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise-refinement partitioning technique for single-mode multi-task applications, and then extend it to the HW/SW partitioning problem of multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single- and multi-mode multi-task applications, respectively, over the conventional method.
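For intuition only, here is a toy greedy sketch of the basic HW/SW partitioning trade-off (move modules to hardware until the deadline holds, maximizing time saved per unit of hardware cost); this is not the paper's stepwise-refinement technique, and all module data are hypothetical:

```python
# Toy sketch: greedy HW/SW partitioning of modules under a task deadline.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    sw_time: float   # execution time in software
    hw_time: float   # execution time in hardware
    hw_cost: float   # hardware implementation cost

def partition(modules, deadline):
    in_hw = set()
    total_time = sum(m.sw_time for m in modules)   # all-SW starting point
    while total_time > deadline:
        candidates = [m for m in modules if m.name not in in_hw]
        if not candidates:
            raise ValueError("deadline unreachable even with all modules in HW")
        # move the module with the best time-saved-per-cost ratio to HW
        best = max(candidates, key=lambda m: (m.sw_time - m.hw_time) / m.hw_cost)
        in_hw.add(best.name)
        total_time -= best.sw_time - best.hw_time
    cost = sum(m.hw_cost for m in modules if m.name in in_hw)
    return in_hw, cost, total_time

mods = [Module("fft", 8.0, 2.0, 5.0), Module("filter", 4.0, 1.0, 4.0),
        Module("ctrl", 2.0, 1.5, 6.0)]
print(partition(mods, deadline=10.0))   # ({'fft'}, 5.0, 8.0)
```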

Review on Usefulness of EPID (Electronic Portal Imaging Device) (EPID (Electronic Portal Imaging Device)의 유용성에 관한 고찰)

  • Lee, Choong Won;Park, Do Keun;Choi, A Hyun;Ahn, Jong Ho;Song, Ki Weon
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.25 no.1
    • /
    • pp.57-67
    • /
    • 2013
  • Purpose: Replacing the film that used to be used for checking patient set-up and for dosimetry during radiation therapy, more and more EPID-equipped devices are now in use. Accordingly, this article evaluated the accuracy of position verification and the usefulness of dosimetry with an electronic portal imaging device. Materials and Methods: Based on 50 articles retrieved from the Korean Society of Radiotherapeutic Technology, the Korean Society for Radiation Oncology, and PubMed using "EPID", "Portal dosimetry", "Portal image", "Dose verification", "Quality control", "Cine mode", "Quality assurance", and "In vivo dosimetry" as index terms, the usefulness of EPID was analyzed by classifying the literature into the history of EPID and dosimetry, set-up verification, and the characteristics of EPID. Results: EPID developed from the first generation (liquid-filled ionization chamber), through the second generation (camera-based fluoroscopy), to the third generation (amorphous-silicon EPID). Imaging modes can be divided into EPID mode, cine mode, and integrated mode. In evaluations of absolute dose accuracy, EPID showed errors within 1% while EDR2 film showed errors within 3%, confirming that EPID measures errors more accurately than film. In gamma analyses comparing the dose plane calculated by the treatment planning system against the planes measured with EDR2 film and EPID, both film and EPID showed fewer than 2% of pixels with gamma values exceeding 1 (γ > 1) for criteria of 3%/3 mm and 2%/2 mm, respectively. Comparing workloads, a full course of IMRT QA required approximately 110 minutes with EDR2 film versus approximately 55 minutes with EPID. Conclusion: EPID can readily replace the conventional, complicated, and cumbersome film and ionization chamber used for dosimetry and set-up verification, and it proved to be a very efficient and accurate dosimetry device for quality assurance of intensity-modulated radiation therapy (IMRT). As cine-mode imaging with EPID allows tumors to be located in real time without additional dose, in the lung and liver, which move with the diaphragm, and in rectal cancer patients with unstable positioning, it may help implement optimal radiotherapy for patients.
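A minimal sketch of the gamma evaluation these comparisons rely on, in a simplified 1D form with toy dose profiles; criteria such as 3%/3 mm enter as the dose and distance normalizations, and a point passes when γ ≤ 1:

```python
# Minimal sketch: simplified 1D gamma index combining dose difference
# and distance-to-agreement.
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_crit=0.03, dist_crit=3.0):
    """ref_dose/eval_dose: profiles on the same grid; positions in mm."""
    gammas = np.empty_like(ref_dose)
    norm = ref_dose.max()                              # global normalization
    for i, (r, x) in enumerate(zip(ref_dose, positions)):
        dd = (eval_dose - r) / (dose_crit * norm)      # dose-difference term
        dx = (positions - x) / dist_crit               # distance term
        gammas[i] = np.sqrt(dd**2 + dx**2).min()       # best match over all points
    return gammas

x = np.linspace(-50, 50, 201)                # positions in mm
ref = np.exp(-x**2 / (2 * 20.0**2))          # toy Gaussian dose profile
ev = np.roll(ref, 2) * 1.01                  # shifted, scaled copy as "measured"
g = gamma_1d(ref, ev, x)
print("passing rate:", np.mean(g <= 1.0) * 100, "%")
```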


Development of Multimedia Annotation and Retrieval System using MPEG-7 based Semantic Metadata Model (MPEG-7 기반 의미적 메타데이터 모델을 이용한 멀티미디어 주석 및 검색 시스템의 개발)

  • An, Hyoung-Geun;Koh, Jae-Jin
    • The KIPS Transactions:PartD
    • /
    • v.14D no.6
    • /
    • pp.573-584
    • /
    • 2007
  • As multimedia information increases rapidly, various types of multimedia data retrieval are becoming issues of great importance. Efficient multimedia data processing requires semantics-based retrieval techniques that can extract the meaningful content of multimedia data. Existing retrieval methods for multimedia data are annotation-based retrieval, feature-based retrieval, and retrieval integrating annotations and features. These systems demand a lot of effort and time from the annotator, feature extraction requires complicated computation, and the created data have the shortcoming that only static, unchanging search is possible; user-friendly, semantic search techniques are also not supported. This paper proposes S-MARS (Semantic Metadata-based Multimedia Annotation and Retrieval System), which can represent and retrieve multimedia data efficiently using MPEG-7. The system provides a graphical user interface for annotating, searching, and browsing multimedia data. It is implemented on the basis of a semantic metadata model for representing multimedia information. The semantic metadata about multimedia data are organized according to a multimedia description schema, expressed in XML Schema, that complies with the MPEG-7 standard. In conclusion, the proposed scheme can be easily implemented on any multimedia platform supporting XML technology. It can enable efficient sharing of semantic metadata between systems, and it will contribute to improving retrieval correctness and user satisfaction with the embedding-based multimedia retrieval method.
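A minimal sketch of annotation and retrieval over MPEG-7-style XML; the element names below follow the general shape of MPEG-7 descriptions but are illustrative and not validated against the standard schema:

```python
# Minimal sketch: build an MPEG-7-style semantic annotation for a video
# segment, then retrieve segments whose keywords match a query term.
import xml.etree.ElementTree as ET

desc = ET.Element("Mpeg7")                    # illustrative tag names, simplified
seg = ET.SubElement(desc, "VideoSegment", id="scene_01")
ET.SubElement(seg, "MediaTime", start="00:01:10", duration="PT15S")
ann = ET.SubElement(seg, "TextAnnotation")
ET.SubElement(ann, "FreeTextAnnotation").text = "a goal scene in a soccer match"
ET.SubElement(ann, "KeywordAnnotation").text = "soccer goal"

xml_doc = ET.tostring(desc, encoding="unicode")

# semantic retrieval then reduces to matching annotations against a query
root = ET.fromstring(xml_doc)
hits = [s.get("id") for s in root.iter("VideoSegment")
        if any("goal" in (k.text or "") for k in s.iter("KeywordAnnotation"))]
print(hits)   # ['scene_01']
```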

Spatial Genetic Structure at a Korean Pine (Pinus koraiensis) Stand on Mt. Jumbong in Korea Based on Isozyme Studies (점봉산(點鳳山) 잣나무임분(林分)의 개체목(個體木) 공간분포(空間分布)에 따른 유전구조(遺傳構造))

  • Hong, Kyung-Nak;Kwon, Young-Jin;Chung, Jae-Min;Shin, Chang-Ho;Hong, Yong-Pyo;Kang, Bum-Yong
    • Journal of Korean Society of Forest Science
    • /
    • v.90 no.1
    • /
    • pp.43-54
    • /
    • 2001
  • Genetic differentiation among populations results from environmental effects, genetic effects, and the interactions between them, whereas the major factors influencing genetic differentiation within populations are gene flow induced by seed or pollen dispersal, microsite heterogeneity, and the density-dependent distribution of individuals. To study the spatial genetic structure and distribution pattern of Korean pine (Pinus koraiensis), we set up one 100 × 100 m plot in a Korean pine stand within a Quercus mongolica community on Mt. Jumbong in Korea. To estimate coefficients of spatial autocorrelation, Moran's index and an analogue, the simple block distance, were computed from isozyme markers analyzed in 325 Korean pines. For the 11 polymorphic loci observed in 9 enzyme systems, the average percentage of polymorphic loci and the observed and expected heterozygosities were 72.2%, 0.200, and 0.251, respectively. An excess of homozygotes was observed in the plot, suggesting that there may be more consanguineous trees than expected. On the basis of the observed isozyme genotypes, the 325 trees were classified into 147 groups, the largest of which contained 34 trees. Genetic heterogeneity began to increase from the 24-32 m distance class, and the variation of the simple block distance against growth performance (tree height and diameter) showed the same trend at the 24-32 m class. Given the high fixation index (F = 0.204), the spatial genetic structure within the stand, the analysis of growth performance, and the distribution pattern of identical genotypes, we infer that the genetic structure of the Korean pine stand on Mt. Jumbong has been maintained by a density-dependent mechanism rather than by gene flow such as pollen dispersal or heavy seed input following forest gaps. The genetic patch size was determined to be 24-32 m, which suggests that, for the ex situ conservation of Korean pine on Mt. Jumbong, individuals should be selected with a spatial distance of over 37 meters between trees.
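A minimal sketch of a Moran's I correlogram over distance classes, the kind of spatial autocorrelation statistic used here, following the standard estimator I = (n/W) Σᵢⱼ wᵢⱼ(xᵢ − x̄)(xⱼ − x̄) / Σᵢ(xᵢ − x̄)²; the tree positions and genotypes below are simulated placeholders, not the study's data:

```python
# Minimal sketch: Moran's I for a binary allele indicator over trees,
# with a 0/1 weight matrix selecting pairs in a given distance class.
import numpy as np

def morans_i(values, coords, d_min, d_max):
    x = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = ((d > d_min) & (d <= d_max)).astype(float)   # pairs in the distance class
    np.fill_diagonal(w, 0.0)
    n, W = len(values), w.sum()
    return (n / W) * (x @ w @ x) / (x @ x)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(325, 2))        # tree positions, 100 x 100 m plot
alleles = (rng.random(325) < 0.5).astype(float)    # presence of a marker allele
for lo in range(0, 64, 8):                         # correlogram in 8 m classes
    print(f"{lo}-{lo + 8} m: I = {morans_i(alleles, coords, lo, lo + 8):.3f}")
```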


Assessment for the Utility of Treatment Plan QA System according to Dosimetric Leaf Gap in Multileaf Collimator (다엽콜리메이터의 선량학적엽간격에 따른 치료계획 정도관리시스템의 효용성 평가)

  • Lee, Soon Sung;Choi, Sang Hyoun;Min, Chul Kee;Kim, Woo Chul;Ji, Young Hoon;Park, Seungwoo;Jung, Haijo;Kim, Mi-Sook;Yoo, Hyung Jun;Kim, Kum Bae
    • Progress in Medical Physics
    • /
    • v.26 no.3
    • /
    • pp.168-177
    • /
    • 2015
  • To evaluate treatment planning accurately, quality assurance of the treatment plan is recommended when patients are treated with IMRT, which is complex and delicate. For this purpose, treatment plan quality assurance software can be used to verify the delivered dose accurately before and after treatment. The purpose of this study is to evaluate the accuracy of treatment plan quality assurance software for each IMRT plan according to the MLC dosimetric leaf gap (DLG). A Novalis Tx with a built-in HD120 MLC was used to acquire the MLC DynaLog files imported into MobiusFx. IMRT plans were created in the Eclipse RTP system, and target and organ structures (multi-target, mock prostate, mock head/neck, and C-shape cases) were contoured in an I'mRT phantom. To examine how the dose distribution changes with DLG, the MLC DynaLog files were imported into MobiusFx and the DLG value was varied (0.5, 0.7, 1.0, 1.3, and 1.6 mm) within MobiusFx. Dose distributions were evaluated with the 3D gamma index using a 3% dose criterion and a 3 mm distance-to-agreement, and point doses were acquired with a CC13 ionization chamber at the isocenter of the I'mRT phantom. For the point dose, the mock head/neck and multi-target cases showed differences of about 4% and 3% at DLGs of 0.5 and 0.7 mm, respectively, while the other DLGs showed differences of less than 3%. The gamma passing rates of the mock head/neck case were below 81% for the PTV and cord, and those of the multi-target case were below 30% for the center and superior targets at DLGs of 0.5 and 0.7 mm; however, the inferior target of the multi-target case and the parotid of the mock head/neck case had 100.0% passing rates at all DLGs. The point dose of the mock prostate case differed by less than 3.0% at all DLGs, yet the PTV passing rates were below 95% at the 0.5 and 0.7 mm DLGs and above 98% at the other DLGs; the rectum and bladder had 100.0% passing rates at all DLGs. For the C-shape case, point-dose differences were 3-9% except at the 1.3 mm DLG, and the PTV passing rates at 1.0 and 1.3 mm were 96.7% and 93.0%, respectively, while the passing rates at the other DLGs were below 86% and the core had a 100.0% passing rate at all DLGs. In this study, we verified that the accuracy of a treatment planning QA system can be affected by the DLG value. For precise quality assurance of treatment techniques that use MLC motion, such as IMRT and VMAT, an appropriate DLG value should be used in the linear accelerator and RTP system.
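A trivial sketch of the point-dose screening implied by the study design: sweep DLG settings and flag the percent difference against a 3% tolerance (all numbers hypothetical, not the study's measurements):

```python
# Minimal sketch: percent point-dose difference per DLG setting vs a 3% tolerance.
measured = 2.000  # Gy, chamber reading at the isocenter (hypothetical)
computed_by_dlg = {0.5: 1.918, 0.7: 1.942, 1.0: 1.976, 1.3: 1.995, 1.6: 2.012}

for dlg, dose in sorted(computed_by_dlg.items()):
    diff = 100 * (dose - measured) / measured
    status = "pass" if abs(diff) <= 3.0 else "fail"
    print(f"DLG {dlg:.1f} mm: {diff:+.1f}% ({status})")
```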

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 daily observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. When forecasting KOSPI 200 Index return volatility, the MSE metric favored the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution with fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values, which is somewhat unrealistic because historical volatility itself cannot be traded; however, our simulation results remain meaningful because the Korea Exchange introduced a volatility futures contract in November 2014, so volatility can now be traded. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems over the testing period. The percentage of profitable trades ranges from 47.5% to 50.0% for the MLE-based GARCH IVTS models and from 51.8% to 59.7% for the SVR-based GARCH IVTS models. The MLE-based symmetric S-GARCH returns +150.2% versus +526.4% for the SVR-based symmetric S-GARCH; the MLE-based asymmetric E-GARCH returns -72% versus +245.6% for the SVR-based asymmetric E-GARCH; and the MLE-based asymmetric GJR-GARCH returns -98.7% versus +126.3% for the SVR-based asymmetric GJR-GARCH.
The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is not fully realistic, since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
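A minimal sketch of the forecasting-plus-trading loop: a GARCH(1,1) one-step variance forecast, σ²ₜ₊₁ = ω + α rₜ² + β σ²ₜ, feeding the IVTS entry rules. The parameters and returns below are illustrative placeholders, and the paper estimates the parameters with SVR (or MLE) rather than fixing them:

```python
# Minimal sketch: GARCH(1,1) variance recursion driving the IVTS rule
# (buy volatility on a rising forecast, sell on a falling one, else hold).
import numpy as np

def garch_forecast(returns, omega, alpha, beta):
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()                     # initialize at sample variance
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r**2 + beta * sigma2[t]
    return sigma2                                 # sigma2[t+1] forecasts day t+1

rng = np.random.default_rng(42)
returns = rng.normal(0, 0.01, 300)                # stand-in for KOSPI 200 daily returns
sigma2 = garch_forecast(returns, omega=1e-6, alpha=0.09, beta=0.89)

signals = []
for t in range(1, len(returns)):
    if sigma2[t + 1] > sigma2[t]:
        signals.append(1)                         # forecast up -> buy volatility
    elif sigma2[t + 1] < sigma2[t]:
        signals.append(-1)                        # forecast down -> sell volatility
    else:
        signals.append(signals[-1] if signals else 0)  # unchanged -> hold position
```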