• Title/Summary/Keyword: parallel analysis


Influence of Microcracks in Geochang Granite on Brazilian Tensile Strength (거창화강암의 미세균열이 압열인장강도에 미치는 영향)

  • Park, Deok-Won
    • Korean Journal of Mineralogy and Petrology
    • /
    • v.34 no.3
    • /
    • pp.193-208
    • /
    • 2021
  • The characteristics of the microcrack lengths (①), microcrack spacings (②) and Brazilian tensile strengths (③) related to the six directions of rock cleavages (H2~R1) in Geochang granite were analyzed. First, 18 cumulative graphs for the above three major factors, representing the unique characteristics of the rock cleavages, were made. Through the general chart for these graphs, classified into three planes and three rock cleavages, 28 parameters on length, spacing and Brazilian tensile strength were determined. The results of correlation analysis among these parameters are summarized as follows. Second, the above parameters were classified into six groups (I~VI) according to the sorting order of parameter values among the three rock cleavages and the three planes. The values of parameters belonging to groups I and II are in the order of R (rift) < G (grain) < H (hardway) and H < G < R, respectively. The values of the eight parameters on line length (os2, Δs, ΔL and oSmean), exponent (λLmean and λSmean), slope (amean) and anisotropy coefficient (Anmean) are in the order of R < G < H and H' (hardway plane) < G' (grain plane) < R' (rift plane). Third, the noticeable differences in distribution patterns among the six types of charts for the three planes and three rock cleavages are as follows. In the chart for the three planes, the values of ΔL, Δs and Δσt, corresponding to the distance between the two points where the two fitting lines meet the X-axis, increase in the order of R' < H' < G'. In particular, the two graphs of R2 and G2 related to length and Brazilian tensile strength are almost parallel to each other and show the distribution characteristics of the hardway plane. Among the graphs related to Brazilian tensile strength, the overall shape for the hardway plane is similar to that for the grain plane.
In the chart for the three rock cleavages, the slopes of the graphs related to length increase in the order of R < G < H, while those of the graphs related to spacing and Brazilian tensile strength decrease in the order of R < G < H. Lastly, the variation among the six rock cleavages, the three planes and the three rock cleavages was visualized through a correlation chart of the above parameters.

Effect of Chlorine Dioxide (ClO2) on the Malodor Suppression of Chicken Feces (이산화염소(ClO2) 처리가 계분의 악취 억제에 미치는 영향)

  • Ji Woo, Park;Gyeongjin, Kim;Tabita Dameria, Marbun;Duhak, Yoon;Changsu, Kong;Sang Moo, Lee;Eun Joong, Kim
    • Korean Journal of Poultry Science
    • /
    • v.49 no.4
    • /
    • pp.287-298
    • /
    • 2022
  • This study evaluated the efficacy of chlorine dioxide (ClO2) as an oxidant to reduce malodor emission from chicken feces. Two experiments were performed with the following four treatments in parallel: 1) fresh chicken feces with only distilled water added as a control, 2) a commercial germicide as a positive control, and 3) 2,000 or 4) 3,000 ppm of ClO2 supplementation. Aluminum gas bags containing chicken feces sealed with a silicone plug were used in both experiments, and each treatment was tested in triplicate. In Experiment 1, 10 mL of each additive was added on the first day of incubation, and malodor emissions were assessed after 10 days of incubation. In Experiment 2, 1 mL of each additive was added daily during a 14-day incubation period. At the end of incubation, gas production, malodor-causing substances (H2S and NH3 gases), dry matter, pH, volatile fatty acids (VFAs), and microbial enumeration were analyzed. Supplementing ClO2 at 2,000 and 3,000 ppm significantly reduced the pH and the ammonia-N, total VFA, H2S, and ammonia gas concentrations in chicken feces compared with the control feces (P<0.05). Additionally, microbial analysis indicated that the number of coliform bacteria decreased after ClO2 treatment (P<0.05). In conclusion, ClO2 at 2,000 and 3,000 ppm was effective at reducing malodor emission from chicken feces. However, further studies are warranted to examine the effects of ClO2 at various concentrations and its effects on malodor emission from a poultry farm.

Effect of Live Commerce Characteristics on Purchase Intention : Focusing on the Parallel Multiple Mediating Effect of Trust and Flow (라이브 커머스 특성이 구매 의도에 미치는 영향 : 신뢰와 몰입의 이중매개 효과를 중심으로)

  • Kim, Sung-jong;Chung, Byoung-gyu
    • Journal of Venture Innovation
    • /
    • v.5 no.1
    • /
    • pp.59-73
    • /
    • 2022
  • Contact-free ("untact") marketing has been energized by COVID-19, and live commerce, a contact-free sales channel, is accordingly active in the e-commerce market. Therefore, in this study, we sought to identify the factors that influence consumers when they purchase through live commerce. In particular, since consumers' trust in and flow with live commerce platforms and products are important, their mediating effects were analyzed. The research model was established by deriving common variables from the characteristics of live commerce identified in previous studies. An online survey was conducted for empirical analysis, and 200 users who had made at least one purchase through live commerce were analyzed. The study results are as follows. Among the characteristics of live commerce, entertainment, economics, and professionalism were found to have a positive (+) effect on purchase intention, whereas ease of use did not significantly affect purchase intention. The influence was strongest for entertainment, followed by professionalism and economics. Trust was found to mediate the effects of entertainment, economics, and professionalism on purchase intention, while no significant mediating effect was found between ease of use and purchase intention. Flow was found to mediate the effects of entertainment and professionalism on purchase intention, while no mediating effect of flow was found for ease of use or economics. As for the parallel multiple mediating effects of flow and trust, the mediating effect of flow was stronger than that of trust when entertainment affected purchase intention, and likewise when professionalism affected purchase intention.
In contrast, only trust had a mediating effect when economics affected purchase intention. The results empirically confirm that entertainment, the fun and interesting element of live commerce content, is the most important factor when consumers use live commerce. In addition, various patterns were observed, including cases where trust and flow act as mediators simultaneously or not at all. The practical implication is that the study provides a clue about what live commerce platforms should prioritize in order to reach consumers.
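The parallel multiple mediation analysis described above follows the standard product-of-coefficients logic. A minimal sketch on synthetic data is shown below; the variable roles, sample size of 200, and effect sizes are illustrative assumptions, not the study's survey data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # sample size, mirroring the study's 200 respondents

# Synthetic illustration: x = entertainment, m = flow (mediator),
# y = purchase intention.
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.5, size=n)            # path a: X -> M
y = 0.5 * m + 0.2 * x + rng.normal(scale=0.5, size=n)  # paths b and c'

def ols_slope(X, y):
    """Least-squares slope coefficients for y ~ X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols_slope(x, m)[0]                               # X -> M
b, c_prime = ols_slope(np.column_stack([m, x]), y)   # M -> Y and direct X -> Y
indirect = a * b                                     # mediated (indirect) effect
total = ols_slope(x, y)[0]                           # total effect

print(round(indirect, 3), round(c_prime, 3), round(total, 3))
```

For a single linear mediator estimated by ordinary least squares, the total effect decomposes exactly into direct plus indirect effect (total = c' + a·b), which is why comparing the mediated and direct paths, as the study does for trust and flow, is meaningful.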

Development of Strategies to Improve Water Quality of the Yeongsan River in Connection with Adaptation to Climate Change (기후변화의 적응과 연계한 영산강 수질개선대책 개발)

  • Yong Woon Lee;Won Mo Yang;Gwang Duck Song;Yong Uk Ryu;Hak Young Lee
    • Korean Journal of Ecology and Environment
    • /
    • v.56 no.3
    • /
    • pp.187-195
    • /
    • 2023
  • Almost all of the water from agricultural dams located upstream on the Yeongsan River is supplied as irrigation water for farmland and thus is not discharged into the main stream of the river. Moreover, most of the irrigation water does not return to the river after use, adding to the lack of flow in the main stream. As a result, the water quality and aquatic health of the river have become the poorest among the four major rivers in Korea. Therefore, in this study, several strategies for improving the river's water quality were developed considering both pollution reduction and flow rate increase, and their effects were analyzed using a water quality model. The results showed that the target water quality of the Yeongsan River could be achieved if flow increase strategies (FISs) are pursued intensively in parallel with pollution reduction. This is because the river's water quality has been steadily improved through pollution reduction, but that approach is now nearing its limit. In addition, rainfall-dependent FISs such as dam construction and water distribution adjustment may become less effective, or fail entirely, if a megadrought caused by climate change persists and rainfall does not occur for a long time. Therefore, in the future, when the application conditions for FISs are otherwise similar, seawater desalination facilities, which are independent of rainfall, should be considered the priority installation target among the FISs. Seawater desalination facilities can replace the water supply function of dams, which are difficult to build anew in Korea, and can serve as climate change adaptation facilities by preventing water-related disasters in the event of a long-term megadrought.

The Presence of Related Personnel Effects on the IPO of Special Listed Firms on KOSDAQ Market: Based on the Signal Effect of Third-party Social Recognition (관계인사 영입이 코스닥 기술특례기업 IPO성과에 미치는 영향: 제3자 사회적 인정의 신호 효과를 바탕으로)

  • Kiyong, Kim;Young-Hee, Ko
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.17 no.6
    • /
    • pp.13-24
    • /
    • 2022
  • The purpose of this study is to examine whether the presence of related personnel in KOSDAQ technology special listed firms has a signal effect on the market and affects performance at listing. The KOSDAQ technology special listing system was introduced to enable future growth of technology-based startups and venture companies by allowing them to secure financing through a public offering based on their technology and marketability. An analysis of 135 technology special listed companies listed from 2005 to 2021 (excluding SPAC mergers and foreign companies) showed that the presence of related personnel did not significantly affect corporate value or the time to listing. The same held when the scope was narrowed to related personnel such as former public officials and staff of related agencies. However, under certain conditions, the presence of related personnel did yield significant results. The combination of related personnel and VC investment had a significant effect on corporate value, and in the bio industry there was a marginally significant effect on the time to listing. This study is significant in that it is the first to systematically analyze the signal effect of related personnel across all 135 such companies. The results also suggest that internalized efforts to secure technology and marketability, such as pursuing VC investment in parallel, are more important than simply recruiting related personnel.

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output. Current search tools also cannot retrieve documents related to an already-retrieved document from the gigantic pool of documents. The most important problem for many current search systems is therefore search quality: providing related documents while keeping the number of unrelated documents in the results as low as possible. To address this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the articles with the cited works.
Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independently of language and of the words in the title, keywords or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article). However, CiteSeer cannot index links that researchers do not make, because it indexes only the links researchers create when they cite other articles; for the same reason, CiteSeer also does not scale easily. All these problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence of a document. Each document is converted into a tabular form in which each extracted predicate is checked against its possible subjects and objects. Using this table, a hierarchical graph of the document is built, and the graphs of individual documents are then integrated. The graph of the entire document collection is used to compute the area each document occupies relative to the integrated documents, and relations among the documents are marked by comparing these areas. The paper also proposes a method for structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, an improvement of about 15%.
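Since the hierarchical structure above is based on Formal Concept Analysis (per the title), a minimal sketch of deriving formal concepts (extent/intent pairs) from a toy document-feature incidence table may clarify the idea. The documents and subject:predicate feature names below are illustrative assumptions, not data from the paper:

```python
from itertools import combinations

# Toy incidence table: document -> set of extracted subject:predicate features.
docs = {
    "d1": {"web:grow", "search:retrieve", "index:link"},
    "d2": {"search:retrieve", "index:link"},
    "d3": {"index:link", "citation:navigate"},
}

def concepts(docs):
    """Enumerate the formal concepts (extent, intent) of a small incidence table.

    Every concept intent is an intersection of document feature sets (plus the
    full attribute set for the bottom concept); the extent is then the set of
    documents whose features include that intent.
    """
    obj_intents = {d: frozenset(a) for d, a in docs.items()}
    intents = {frozenset.union(*obj_intents.values())}  # bottom concept's intent
    for r in range(1, len(obj_intents) + 1):
        for combo in combinations(obj_intents, r):
            intents.add(frozenset.intersection(*(obj_intents[d] for d in combo)))
    return [(frozenset(d for d, a in obj_intents.items() if intent <= a), intent)
            for intent in intents]

lattice = concepts(docs)
# Ordering concepts by extent size exposes the hierarchical (lattice) structure
# along which documents can be compared and integrated.
for extent, intent in sorted(lattice, key=lambda c: len(c[0])):
    print(sorted(extent), "->", sorted(intent))
```

Each concept pairs a set of documents (extent) with exactly the features they all share (intent); the subset order on extents yields the hierarchy into which individual document graphs are merged.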

Characteristics and Distribution Pattern of Carbonate Rock Resources in Kangwon Area: The Gabsan Formation around the Mt. Gachang Area, Chungbuk, Korea (강원 지역에 분포하는 석회석 자원의 특성과 부존환경: 충북 가창산 지역의 갑산층을 중심으로)

  • Park, Soo-In;Lee, Hee-Kwon;Lee, Sang-Hun
    • Journal of the Korean earth science society
    • /
    • v.21 no.4
    • /
    • pp.437-448
    • /
    • 2000
  • The Middle Carboniferous Gabsan Formation is distributed in the Cheongrim area of southern Yeongwol and the Mt. Gachang area of Chungbuk Province. This study was carried out to investigate the lithological characters and geochemical composition of the limestones of the formation and to identify the structures controlling their distribution. The limestones of the Gabsan Formation are characterized by light gray to light brown color and fine, dense textures. The limestone grains are composed of crinoid fragments, small foraminifers, fusulinids, gastropods, ostracods, etc. Due to recrystallization, some limestones consist of finely crystalline calcite. Chemical analysis of the limestones was conducted to determine the contents of CaO, MgO, Al₂O₃, Fe₂O₃ and SiO₂. The CaO content ranges from 49.78 to 60.63% and the MgO content from 0.74 to 4.63%. The contents of Al₂O₃ and Fe₂O₃ are 0.02~0.55% and 0.02~0.84%, respectively. The SiO₂ content varies from 1.55 to 4.80%, but some samples contain more than 6.0%. The limestones of the formation can be grouped into two according to CaO content: one group in which CaO ranges from 49.78 to 56.26%, and another in which CaO varies from 59.36 to 60.38%. In the first group, the contents of Al₂O₃, Fe₂O₃ and SiO₂ vary irregularly with CaO content; in the second group, the values of MgO, Al₂O₃, Fe₂O₃ and SiO₂ are nearly the same. Detailed analysis of mesoscopic structures and microstructures indicates five phases of deformation in the study area. The first phase of deformation (D₁) is characterized by regional-scale isoclinal folds and a bedding-parallel S₁ axial plane foliation that is locally developed in the mudstone and sandstone.
Based on microstructural observations, the S₁ foliations appear to have developed by grain preferred orientation accompanying pressure solution. During the second phase of deformation, outcrop-scale E-W trending folds with associated foliations and lineations developed. Microstructural observations indicate that crenulation foliations were formed by pressure solution, grain boundary sliding and grain rotation. NNW- and SSE-trending outcrop-scale folds, axial plane foliations, crenulation foliations, crenulation lineations and intersection lineations developed during the third phase of deformation. On the microscale F₃ folds, axial plane foliations formed by pressure solution are well developed. The fourth phase of deformation is characterized by map-scale NNW-trending folds; the pre-existing planar and linear structures are reoriented by the F₄ folds. The fifth phase of deformation developed joints and faults. The distribution pattern of the limestones is mostly controlled by the F₁ and F₄ folds.


Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is a powerful class of deep neural network that can analyze and learn hierarchies of visual features. An early neural network of this kind, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, reviving interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation; the second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these same factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and takes a lot of effort. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. Both obstacles can be addressed with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning approaches: using a ConvNet as a fixed feature extractor, and fine-tuning a ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor.
However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, which carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple ConvNet layer representation.
Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% for the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% for the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1% and 3.1% on Caltech-256, VOC07, and SUN397 respectively compared to existing work.
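The three-step pipeline (extract fully connected layer activations, concatenate, reduce with PCA) can be sketched as follows. The random arrays stand in for real AlexNet activations, and the batch size and component count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_images = 50  # small synthetic batch standing in for real images

# Stand-ins for AlexNet FC6 (4096-d), FC7 (4096-d) and FC8 (1000-d) activations;
# in the paper these come from feed-forwarding images through pre-trained AlexNet.
fc6 = rng.normal(size=(n_images, 4096))
fc7 = rng.normal(size=(n_images, 4096))
fc8 = rng.normal(size=(n_images, 1000))

# Step 2: concatenate the three layer representations -> 9192-d features.
features = np.hstack([fc6, fc7, fc8])
assert features.shape == (n_images, 9192)

# Step 3: PCA via SVD to keep the most salient directions and drop redundancy.
def pca_reduce(X, k):
    Xc = X - X.mean(axis=0)                # center the features
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T                   # project onto the top-k components

reduced = pca_reduce(features, k=32)
print(reduced.shape)
```

In practice the reduced features would then be fed to a classifier trained on the target dataset; the point of the sketch is that concatenation preserves complementary layer information while PCA strips the redundancy that concatenation introduces.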

The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan;Kwon, Taehoon;Jun, Seung-pyo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.1-33
    • /
    • 2019
  • Small and medium-sized enterprises (hereinafter SMEs) are already at a competitive disadvantage compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for the new product development that sustains growth and survival, but also seek networking to overcome their resource constraints; both efforts, however, are limited by their size. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. To address these problems, government-funded research institutes play an important role in solving the information asymmetry problem of SMEs. The purpose of this study is to identify the differentiating characteristics of SMEs that utilize the public information support infrastructure provided to enhance SMEs' innovation capacity, and to determine how that use contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government-funded institutions; in this study, we specifically identify the target of such a policy and empirically demonstrate the effects of such policy-based efforts. Our goal is to help establish strategies for building the information supporting infrastructure. To this end, we first classified the characteristics of SMEs that have been found to utilize the information supporting infrastructure provided by government-funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which clarifies the interpretative limits of our results.
Next, we performed mediator and moderator effect analyses on multiple variables to analyze the process through which use of the information supporting infrastructure improved external networking capabilities and, in turn, product competitiveness. This analysis identifies the key factors to focus on when offering indirect support to SMEs through the information supporting infrastructure, which helps government-funded institutions manage research on SME-supporting policies more efficiently. The results were as follows. First, SMEs that used the information supporting infrastructure differed significantly in size from domestic R&D SMEs, but no significant difference appeared in a cluster analysis considering various variables. Based on these findings, we confirmed that SMEs using the information supporting infrastructure are larger and include a relatively higher share of companies that transact extensively with large companies, compared with the general population of SMEs. We also found that companies already receiving support from the information infrastructure include a high concentration of companies that need collaboration with government-funded institutions. Second, among SMEs using the information supporting infrastructure, increased external networking capability contributed to enhanced product competitiveness; this was not a direct effect, but an indirect contribution made by increasing open marketing capability, that is, an indirect-only mediator effect.
In addition, the number of times a company received additional support through mentoring on information utilization had a mediated moderator effect on improving external networking capabilities and, in turn, strengthening product competitiveness. These results offer several insights for policy. KISTI's information support infrastructure might suggest that such support is already well underway, but it intentionally supports groups positioned to achieve good performance; the government should therefore set clear priorities on whether to support underdeveloped companies or to help already strong performers do better. Through this research, we identified how public information infrastructure contributes to product competitiveness, from which some policy implications can be drawn. First, the public information support infrastructure should be able to enhance users' ability to interact with, or find, experts who can provide the required information. Second, if utilization of the public (online) information support infrastructure is effective, it is not necessary to continuously provide informational mentoring as a parallel offline support; rather, offline support such as mentoring should be used as a device for monitoring abnormal symptoms. Third, SMEs should improve their ability to utilize the infrastructure, because the effect of enhancing networking capability and product competitiveness through the public information support infrastructure appears across most types of companies rather than only in specific SMEs.

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, compensating for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it structurally avoids investment risk, provides stability in the management of large funds, and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but also trains very fast compared to traditional boosting methods, and it is frequently used across many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. In doing so, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets comprise the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return, with ample sample data thanks to the long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and estimation errors are reduced in 9 out of 10 industry sectors. Reducing estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. While there are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization, we suggest a new way to reduce them in an optimized asset allocation model using machine learning.
This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
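The core weighting step, plugging predicted volatilities into the covariance used for risk parity, can be sketched as follows. The volatility values are illustrative, the XGBoost prediction stage is omitted, and correlations are ignored (a diagonal covariance), under which assumption risk parity reduces to inverse-volatility weights:

```python
import numpy as np

# Predicted next-period volatilities for ten sector indices; in the proposed
# model these would come from XGBoost, here they are illustrative values.
pred_vol = np.array([0.12, 0.18, 0.22, 0.15, 0.30, 0.10, 0.25, 0.20, 0.16, 0.14])

# Plug the predicted risk into the covariance estimate (diagonal sketch;
# the full model would scale an estimated correlation matrix instead).
cov = np.diag(pred_vol ** 2)

# Under a diagonal covariance, risk parity reduces to inverse-volatility weights.
w = (1.0 / pred_vol) / (1.0 / pred_vol).sum()

# Risk contribution of asset i: w_i * (Cov w)_i / (w' Cov w); risk parity
# requires these contributions to be equal across assets.
rc = w * (cov @ w) / (w @ cov @ w)
print(np.round(rc, 4))
```

With a full (non-diagonal) covariance the equal-risk-contribution weights must be found numerically, but the diagonal case shows how the predicted volatilities directly reshape the allocation: assets whose risk is forecast to rise receive smaller weights before the fact, which is the mechanism claimed to reduce estimation error.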