• Title/Summary/Keyword: optimal parameters


Phase Behavior Study of Fatty Acid Potassium Cream Soaps (지방산 칼륨 Cream Soaps 의 상거동 연구)

  • Noh, Min Joo;Yeo, Hye Lim;Lee, Ji Hyun;Park, Myeong Sam;Lee, Jun Bae;Yoon, Moung Seok
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.48 no.1
    • /
    • pp.55-64
    • /
    • 2022
  • Potassium cream soap based on fatty acids, commonly called cleansing foam, has a crystalline gel structure; unlike an emulsion system, it is vulnerable to shear stress and tends to separate easily under high-temperature storage conditions. The crystalline gel structure of cleansing foams is strongly influenced by the type and proportion of fatty acids, the degree of neutralization, and the type and proportion of polyols. To investigate the effect of these parameters on the crystalline gel structure, a ternary system consisting of water/KOH/fatty acid was examined in this study. Differential scanning calorimetry (DSC) revealed a eutectic point at a myristic acid (MA) : stearic acid (SA) ratio of 3 : 1, and the ternary systems were most stable at this eutectic point, whereas increasing the fatty acid content had little effect on stability. Based on viscosity and polarized optical microscopy (POM) measurements, the optimum degree of neutralization was found to be about 75%. The system was stable when the melting point (Tm) of the ternary system was higher than the storage temperature and the crystal phase was transformed into a lamellar gel phase. The addition of polyols to the ternary system played an important role in changing the Tm and inducing phase transitions. The structure of the cleansing foams was confirmed by cryogenic scanning electron microscopy (Cryo-SEM) and small- and wide-angle X-ray scattering (SAXS and WAXS) analysis. Since butylene glycol (BG), propylene glycol (PG), and dipropylene glycol (DPG) lowered the Tm and hindered lamellar gel formation, they were unsuitable for forming a stable cleansing foam. In contrast, glycerin, PEG-400, and sorbitol increased the Tm and facilitated the formation of the lamellar gel phase, leading to a stable ternary system. Glycerin was found to be the optimal agent for preparing a cleansing foam with enhanced stability.

Investigating Data Preprocessing Algorithms of a Deep Learning Postprocessing Model for the Improvement of Sub-Seasonal to Seasonal Climate Predictions (계절내-계절 기후예측의 딥러닝 기반 후보정을 위한 입력자료 전처리 기법 평가)

  • Uran Chung;Jinyoung Rhee;Miae Kim;Soo-Jin Sohn
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.2
    • /
    • pp.80-98
    • /
    • 2023
  • This study explores the effectiveness of various data preprocessing algorithms for improving subseasonal to seasonal (S2S) climate predictions from six climate forecast models and their Multi-Model Ensemble (MME) using a deep learning-based postprocessing model. A pipeline of data transformation algorithms was constructed to convert raw S2S prediction data into training data processed with several statistical transformations, and a dimensionality reduction algorithm selected features by ranking the correlation coefficients between the observed and the input data. The training model was designed with a TimeDistributed wrapper applied to all convolutional layers of a U-Net: the TimeDistributed wrapper allows a U-Net convolutional layer, which normally requires input of at least three dimensions, to be applied directly to 5-dimensional time series data while preserving the time axis. We found that the Robust and Standard transformation algorithms are most suitable for improving S2S predictions. Dimensionality reduction based on feature selection did not significantly improve predictions of daily precipitation for the six climate models and even worsened predictions of daily maximum and minimum temperatures. While deep learning-based postprocessing also improved MME S2S precipitation predictions, it did not have a significant effect on temperature predictions, particularly for lead times of weeks 1 and 2. Further research is needed to develop an optimal deep learning model for improving S2S temperature predictions by testing various models and parameters.
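
  A minimal sketch of the two ideas named above: rescaling gridded predictors (here with scikit-learn's RobustScaler) and applying Keras' TimeDistributed wrapper so 5-dimensional (sample, lead time, lat, lon, channel) input keeps its time axis through convolutional layers. The array shapes, layer sizes, and the simplified encoder-decoder (no skip connections) are illustrative assumptions, not the paper's U-Net configuration.

```python
# Sketch only: assumed shapes and a simplified stand-in for the U-Net.
import numpy as np
from sklearn.preprocessing import RobustScaler
from tensorflow.keras import layers, models

# Toy S2S-like predictor: (samples, lead_time, lat, lon, channels)
x = np.random.rand(8, 4, 32, 32, 1).astype("float32")

# Robust scaling of all values at once (a simplification; per-variable
# scaling would be more typical in practice).
scaler = RobustScaler()
x_scaled = scaler.fit_transform(x.reshape(-1, 1)).reshape(x.shape)

# TimeDistributed applies the same 2-D convolution to every lead-time slice,
# so the 5-D input keeps its time axis through the convolutional stack.
inputs = layers.Input(shape=x.shape[1:])                      # (time, lat, lon, ch)
h = layers.TimeDistributed(layers.Conv2D(16, 3, padding="same",
                                         activation="relu"))(inputs)
h = layers.TimeDistributed(layers.MaxPooling2D(2))(h)
h = layers.TimeDistributed(layers.Conv2DTranspose(16, 3, strides=2,
                                                  padding="same",
                                                  activation="relu"))(h)
outputs = layers.TimeDistributed(layers.Conv2D(1, 1, padding="same"))(h)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```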

A Comparative Study on Topic Modeling of LDA, Top2Vec, and BERTopic Models Using LIS Journals in WoS (LDA, Top2Vec, BERTopic 모형의 토픽모델링 비교 연구 - 국외 문헌정보학 분야를 중심으로 -)

  • Yong-Gu Lee;SeonWook Kim
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.58 no.1
    • /
    • pp.5-30
    • /
    • 2024
  • The purpose of this study is to extract topics from experimental data using three topic modeling methods (LDA, Top2Vec, and BERTopic) and to compare the characteristics of and differences between these models. The experimental data consist of 55,442 papers published in 85 academic journals in the field of library and information science that are indexed in the Web of Science (WoS). The experimental process was as follows: the first topic modeling results were obtained using the default parameters for each model, and the second topic modeling results were obtained by setting the same optimal number of topics for each model. In the first stage of topic modeling, the LDA, Top2Vec, and BERTopic models generated markedly different numbers of topics (100, 350, and 550, respectively); the Top2Vec and BERTopic models divided the topics roughly three to five times more finely than the LDA model. There were also substantial differences among the models in the average and standard deviation of documents per topic: the LDA model assigned many documents to a relatively small number of topics, while the BERTopic model showed the opposite trend. In the second stage of topic modeling, generating the same 25 topics for all models, the Top2Vec model tended to assign more documents per topic on average and showed small deviations between topics, resulting in an even distribution across the 25 topics. When comparing the creation of similar topics between models, the LDA and Top2Vec models generated 18 similar topics (72%) out of 25; this high percentage suggests that the Top2Vec model is more similar to the LDA model. For a more comprehensive comparative analysis, expert evaluation is needed to determine whether the documents assigned to each topic are thematically accurate.
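
  A minimal sketch of the two-stage procedure the abstract describes (default parameters first, then a fixed, shared number of topics), using the gensim and BERTopic libraries; Top2Vec would be fitted analogously. The placeholder corpus, the trivial preprocessing, and all settings other than the 25-topic target are illustrative assumptions, not the paper's setup.

```python
# Sketch only: the study used 55,442 LIS abstracts from WoS; here a public
# corpus stands in so the example is self-contained.
from sklearn.datasets import fetch_20newsgroups
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from bertopic import BERTopic

docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:2000]

# --- LDA (bag-of-words input), fixed to the shared optimal topic number ---
tokenized = [d.lower().split() for d in docs]
dictionary = Dictionary(tokenized)
corpus = [dictionary.doc2bow(t) for t in tokenized]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=25, random_state=42)

# --- BERTopic: stage 1 uses defaults, stage 2 reduces to the same 25 topics ---
default_model = BERTopic()                      # stage 1: default parameters
topics_default, _ = default_model.fit_transform(docs)

fixed_model = BERTopic(nr_topics=25)            # stage 2: same topic count as LDA
topics_fixed, _ = fixed_model.fit_transform(docs)

# Documents per topic, for comparing how evenly each model spreads the corpus.
print(fixed_model.get_topic_info().head())
```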

Shading Effects on the Growth and Physiological Characteristics of Osmanthus insularis Seedlings, a Rare Species (희귀 식물 박달목서 유묘의 생장 및 생리적 특성에 대한 차광 효과)

  • Da-Eun Gu;Sim-Hee Han;Eun-Young Yim;Jin Kim;Ja-Jung Ku
    • Journal of Korean Society of Forest Science
    • /
    • v.113 no.1
    • /
    • pp.88-96
    • /
    • 2024
  • This study was conducted to determine the optimal light conditions for the in situ and ex situ conservation and restoration of Osmanthus insularis, a rare plant species in South Korea. Evaluations included the growth performance, leaf morphological features, photosynthetic characteristics, and photosynthetic pigment contents of seedlings grown from April to November under different light conditions (100%, 55%, 20%, and 10% relative light intensity). The shoot lengths and root collar diameters did not differ significantly with relative light intensity. The dry weights of leaves, stems, and roots and the leaf number were highest at 55% relative light intensity. The leaf shape showed morphological acclimation to light intensity, with leaf area decreasing and thickness increasing as the relative light intensity increased. Several leaf parameters, including photosynthetic rate and stomatal conductance at light saturation point, net apparent quantum yield, and dark respiration, as well as chlorophyll a, chlorophyll b, and carotenoid contents, were all highest at 55% relative light intensity. Under full light conditions, the leaves were the smallest and thickest, but the chlorophyll content was lower than at 55% relative light intensity, resulting in lower photosynthetic ability. Plants grown at 10% and 20% relative light intensity showed lower chlorophyll a, chlorophyll b, and carotenoid contents, as well as decreased photosynthetic and dark respiration rates. In conclusion, O. insularis seedlings exhibited morphological adaptations in response to light intensity; however, no physiological responses indicating enhanced photosynthetic efficiency in shade were evident. The most favorable light condition for vigorous photosynthesis and maximum biomass production in O. insularis seedlings appeared to be 55% relative light intensity. Therefore, shading to approximately 55% of full light is suggested for the growth of O. insularis seedlings.

Optimum Radiotherapy Schedule for Uterine Cervical Cancer Based on the Detailed Information of Dose Fractionation and Radiotherapy Technique (처방선량 및 치료기법별 치료성적 분석 결과에 기반한 자궁경부암 환자의 최적 방사선치료 스케줄)

  • Cho, Jae-Ho;Kim, Hyun-Chang;Suh, Chang-Ok;Lee, Chang-Geol;Keum, Ki-Chang;Cho, Nam-Hoon;Lee, Ik-Jae;Shim, Su-Jung;Suh, Yang-Kwon;Seong, Jinsil;Kim, Gwi-Eon
    • Radiation Oncology Journal
    • /
    • v.23 no.3
    • /
    • pp.143-156
    • /
    • 2005
  • Background: The best dose-fractionation regimen for definitive radiotherapy of cervix cancer remains to be clearly determined, partly owing to the complexity of the affecting factors and the lack of detailed information on external and intracavitary fractionation. To find optimal practice guidelines, our experience with the combination of external beam radiotherapy (EBRT) and high-dose-rate intracavitary brachytherapy (HDR-ICBT) was reviewed, with detailed information on the various treatment parameters obtained from a large cohort of women treated homogeneously at a single institute. Materials and Methods: The subjects were 743 cervical cancer patients (Stage IB 198, IIA 77, IIB 364, IIIA 7, IIIB 89 and IVA 8) treated by radiotherapy alone between 1990 and 1996. A total EBRT dose of 23.4~59.4 Gy (median 45.0) was delivered to the whole pelvis. HDR-ICBT was also performed using various fractionation schemes. A midline block (MLB) was initiated after the delivery of 14.4~43.2 Gy (median 36.0) of EBRT in 495 patients, while it could not be used in the other 248 patients because of slow tumor regression or a huge initial tumor bulk. The point A, actual bladder, and rectal doses were individually assessed in all patients. The biologically effective dose (BED) to the tumor ($\alpha/\beta$ = 10) and to late-responding tissues ($\alpha/\beta$ = 3) was calculated for both EBRT and HDR-ICBT, and the total BED values to point A and to the actual bladder and rectal reference points were obtained as the sum of the EBRT and HDR-ICBT components. In addition to all the details of dose fractionation, other factors that can affect the schedule of definitive radiotherapy (i.e., the overall treatment time and physicians' preference) were also thoroughly analyzed. The association between MD-BED $Gy_3$ and the risk of complication was assessed using serial multiple logistic regression models. The associations between R-BED $Gy_3$ and rectal complications and between V-BED $Gy_3$ and bladder complications were assessed using multiple logistic regression models after adjustment for age, stage, tumor size, and treatment duration. Serial Cox proportional hazards regression models were used to estimate the relative risks of recurrence due to MD-BED $Gy_{10}$ and the treatment duration. Results: The overall complication rate for RTOG Grade 1~4 toxicities was 33.1%. The 5-year actuarial pelvic control rate for all 743 patients was 83%. The midline cumulative BED, the sum of the external midline BED and the HDR-ICBT point A BED, ranged from 62.0 to 121.9 $Gy_{10}$ (median 93.0) for tumors and from 93.6 to 187.3 $Gy_3$ (median 137.6) for late-responding tissues. The median cumulative values of the actual rectal (R-BED $Gy_3$) and bladder point BED (V-BED $Gy_3$) were 118.7 $Gy_3$ (range 48.8~265.2) and 126.1 $Gy_3$ (range 54.9~267.5), respectively. MD-BED $Gy_3$ showed a good correlation with rectal (p = 0.003) but not with bladder complications (p = 0.095). R-BED $Gy_3$ had a very strong association (p < 0.0001) and was more predictive of rectal complications than A-BED $Gy_3$. B-BED $Gy_3$ also showed significance for the prediction of bladder complications in a trend test (p = 0.0298). No statistically significant dose-response relationship for pelvic control was observed.
The Sandwich and Continuous techniques, which differ in when the ICR was inserted during the course of EBRT and reflect physicians' preference, showed no differences in local control or complication rates; neither did the 3 Gy versus 5 Gy fraction sizes of HDR-ICBT. Conclusion: The main reasons optimal dose-fractionation guidelines are not easily established are the absence of a dose-response relationship for tumor control resulting from the high dose gradient of HDR-ICBT, individual differences in tumor response to radiation therapy, and the complexity of the affecting factors. Therefore, in our opinion, individualized, tailored therapy is needed alongside general guidelines in the definitive radiation treatment of cervix cancer. This study also demonstrated the strong predictive value of the actual rectal and bladder reference doses; therefore, vaginal gauze packing may be very important. To keep the BED below the threshold for complications, early midline shielding and reduction of the HDR-ICBT total dose and fraction size should be considered.
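
  For reference, the linear-quadratic biologically effective dose used above is conventionally computed per fractionation scheme and then summed over the EBRT and HDR-ICBT components; a generic form (not this paper's institution-specific calculation) is:

```latex
% Linear-quadratic BED for a scheme of n fractions of d Gy each;
% alpha/beta = 10 Gy for tumor (Gy_10) and 3 Gy for late-responding tissue (Gy_3).
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),
\qquad
\mathrm{BED}_{\text{total}} = \mathrm{BED}_{\text{EBRT}} + \mathrm{BED}_{\text{HDR-ICBT}}
```

  For example, 45 Gy of EBRT delivered in 1.8 Gy fractions gives 45 × (1 + 1.8/10) = 53.1 $Gy_{10}$ to tumor and 45 × (1 + 1.8/3) = 72.0 $Gy_3$ to late-responding tissue under this formula.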

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has been receiving attention recently. A representative deep learning technique, applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo, is the Convolutional Neural Network (CNN). CNN is characterized by dividing the input image into small regions, recognizing partial features, and combining them to recognize the image as a whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been largely limited to image recognition and natural language processing, and the use of deep learning for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, bankruptcy prediction, and so on. It is therefore a meaningful experiment to assess the possibility of solving business problems with deep learning, based on the case of online shopping companies, which hold big data, allow customer behavior to be identified relatively easily, and offer high practical value. In online shopping companies in particular, the competitive environment is changing rapidly and becoming more intense, so the analysis of customer behavior for maximizing profit is becoming increasingly important. In this study, we propose a 'heterogeneous information integration CNN model' as a way to improve the prediction of customer behavior in online shopping enterprises. The model learns from a convolutional neural network combined with a multi-layer perceptron structure by integrating structured and unstructured information; to optimize its performance, we examine 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', evaluate the performance of each architecture, and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churn, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and voice-of-customer (VOC) data from a specific online shopping company in Korea. The data cover 47,947 customers who registered at least one VOC in January 2011 (one month); their customer profiles, a total of 19 months of trading data from September 2010 to March 2012, and the VOCs posted during that month are used. The experiment is divided into two stages: in the first, we evaluate the three architectures that affect the performance of the proposed model and select the optimal parameters; in the second, we evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to business problems as well as to image recognition and natural language processing.
The experiments also confirm that CNN is effective in understanding and interpreting contextual meaning in textual VOC data, and that empirical research based on actual e-commerce data can extract very meaningful information for customer behavior prediction from VOC text written directly by customers. Finally, the various experiments suggest that the proposed model provides useful information for future research on parameter selection and performance.
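
  A rough Keras sketch of the kind of heterogeneous-integration architecture described above: a CNN branch over tokenized VOC text and a dense branch over structured customer features, concatenated before a sigmoid output for one binary target. Input dimensions, vocabulary size, layer widths, and the target name are illustrative assumptions, not the paper's settings.

```python
# Sketch only: a two-branch model joining unstructured (VOC text) and
# structured customer features. Shapes and sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

MAX_LEN, VOCAB, N_STRUCT = 200, 20000, 30   # assumed dimensions

# Unstructured branch: embedded VOC text -> 1-D convolution -> pooled vector
text_in = layers.Input(shape=(MAX_LEN,), name="voc_tokens")
t = layers.Embedding(VOCAB, 64)(text_in)
t = layers.Conv1D(128, 5, activation="relu")(t)
t = layers.GlobalMaxPooling1D()(t)

# Structured branch: transaction/profile features -> small dense layer
struct_in = layers.Input(shape=(N_STRUCT,), name="structured_features")
s = layers.Dense(64, activation="relu")(struct_in)

# Heterogeneous information integration: concatenate and classify
h = layers.concatenate([t, s])
h = layers.Dense(64, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid", name="repurchase")(h)  # one binary target

model = models.Model([text_in, struct_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```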

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • Accurate stock market forecasting has long been studied in academia, and various forecasting models using a variety of techniques now exist. Recently, many attempts have been made to predict stock indices using machine learning methods, including deep learning. Although both fundamental and technical analysis are used in traditional stock investment, technical analysis is more suitable for short-term trading prediction and for statistical and mathematical modeling. Most studies using technical indicators have modeled stock price prediction as a binary classification - rising or falling - of future market movement (usually the next trading day). However, such binary classification has many drawbacks for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we extend the existing binary scheme to a multi-class formulation of the stock index trend (upward trend, boxed, downward trend). Rather than relying on techniques such as Multinomial Logistic Regression (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN), we propose an optimization model that uses a Genetic Algorithm as a wrapper around Multi-class Support Vector Machines (MSVM), which have proven superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and of training instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method is more effective than the conventional multi-class SVM, which has been known to show the best prediction performance so far, as well as existing artificial intelligence / data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was found to play a very important role in predicting the stock index trend, contributing more to the improvement of the model than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's KOSPI200 stock index. Our research primarily aims at predicting trend segments in order to capture trading signals or short-term trend transition points. The experimental data set includes technical indicators of the KOSPI200 stock index, such as price and volatility indices (2004~2017), and macroeconomic data (interest rates, exchange rates, the S&P 500, etc.). Using statistical methods including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for verification. To verify the performance of the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted.
The MSVM adopted the One-Against-One (OAO) approach, known to be the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all comparative models.
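
  A minimal sketch of a genetic-algorithm wrapper around a one-vs-one multi-class SVM, jointly selecting features and tuning C and gamma, in the spirit of the model described above. Instance selection is omitted for brevity, the data set is a public 3-class placeholder rather than the KOSPI200 indicators, and the chromosome encoding and all GA settings are illustrative assumptions.

```python
# Sketch only: GA wrapper over a one-vs-one SVM (feature selection + C/gamma).
import numpy as np
from sklearn.datasets import load_iris          # placeholder 3-class data
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)               # stand-in for the KOSPI200 set
n_features = X.shape[1]

def decode(chrom):
    """First n_features genes: feature mask; last two genes encode C and gamma."""
    mask = chrom[:n_features] > 0.5
    C = 10 ** (chrom[n_features] * 4 - 2)            # C in [1e-2, 1e2]
    gamma = 10 ** (chrom[n_features + 1] * 4 - 3)    # gamma in [1e-3, 1e1]
    return mask, C, gamma

def fitness(chrom):
    mask, C, gamma = decode(chrom)
    if not mask.any():
        return 0.0
    clf = SVC(C=C, gamma=gamma, decision_function_shape="ovo")  # one-vs-one SVM
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop_size, n_genes, n_gen = 20, n_features + 2, 15
pop = rng.random((pop_size, n_genes))

for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection
    parents = pop[[max(rng.choice(pop_size, 3), key=lambda i: scores[i])
                   for _ in range(pop_size)]]
    # Uniform crossover between consecutive parents
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        swap = rng.random(n_genes) < 0.5
        children[i, swap], children[i + 1, swap] = parents[i + 1, swap], parents[i, swap]
    # Mutation: replace a few genes with fresh random values in [0, 1]
    mut = rng.random(children.shape) < 0.1
    children[mut] = rng.random(mut.sum())
    # Elitism: keep the best individual of this generation
    children[0] = pop[scores.argmax()]
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best CV accuracy:", fitness(best), "selected features:", decode(best)[0])
```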

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy, which is inevitable since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN produced more accurate predictions than statistical methods like MRA. However, ANN has also been criticized because of overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ${\varepsilon}$-insensitive loss function and a grid search to find the optimal values of parameters such as C, d, ${\sigma}^2$, and ${\varepsilon}$. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum of the ANN were set to 10%, and the sigmoid function was used as the transfer function of the hidden and output nodes. We repeated the experiments while varying the number of hidden nodes over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared with MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive/negative valence), SVR showed the best performance on the hold-out data set.
ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers or practitioners who wish to build models for recognizing human emotions.
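
  A minimal sketch of epsilon-insensitive SVR tuned by grid search, in the spirit of the setup described above. The synthetic features, the grid values, and the target variable are illustrative assumptions; the study used 297 cases of facial-feature data.

```python
# Sketch only: RBF-kernel SVR with a grid search over C, gamma, and epsilon.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
X = rng.normal(size=(297, 10))                          # stand-in for facial features
y = X[:, 0] * 0.8 + rng.normal(scale=0.3, size=297)     # stand-in for arousal level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Grid over C, gamma (RBF width, ~1/sigma^2), and epsilon of the insensitive loss.
param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [0.001, 0.01, 0.1, 1],
    "epsilon": [0.01, 0.1, 0.5],
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_absolute_error", cv=5)
search.fit(X_tr, y_tr)

pred = search.predict(X_te)
print("best params:", search.best_params_)
print("hold-out MAE:", mean_absolute_error(y_te, pred))
```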

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media continue to grow rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis, an important technology for distinguishing low- from high-quality content in product text data, has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and it has been studied in various directions with respect to accuracy, from simple rule-based to dictionary-based approaches using predefined labels. It is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available to others, so the information is not only easy to collect but also affects business. In marketing, real-world information from customers is gathered from websites rather than surveys, and customer responses, positive or negative, are reflected in sales. However, many reviews on a website are not always well written and can be difficult to interpret. Earlier studies in this area used review data from the Amazon.com shopping mall, whereas more recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, accuracy remains a recognized problem because sentiment scores change with the subject, the paragraph, the direction of the sentiment lexicon, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, as comparative models for text classification, we adopt popular machine learning algorithms such as NB (Naïve Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to a bag-of-words approach when processing a sentence in vector format, but it does not consider the sequential nature of the data. RNN handles order well because it takes the temporal information of the data into account, but it suffers from long-term dependency problems; LSTM is used to address them. For comparison, CNN and LSTM were chosen as simple deep learning models, and the classical machine learning algorithms, CNN, LSTM, and the integrated model were all analyzed. Although these algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to understand how the models behave in sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining the two algorithms are as follows: CNN can extract classification features automatically through convolutional layers and massively parallel processing, whereas LSTM is not capable of such highly parallel processing.
Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The LSTM memory block may not store all of the data, but it can compensate for CNN's inability to capture long-term dependencies. Furthermore, when LSTM is applied after CNN's pooling layer, the network has an end-to-end structure in which spatial and temporal features can be modeled simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN alone but faster than LSTM alone, and the proposed model was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can compensate for the weaknesses of each individual model, and the end-to-end structure with LSTM offers the advantage of improving learning layer by layer. For these reasons, this study uses the integrated CNN-LSTM model to enhance the classification accuracy of movie reviews.
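
  A minimal sketch of an integrated CNN-LSTM sentiment classifier on the IMDB review data set (binary positive/negative), following the Conv1D-then-LSTM layout described above. Layer sizes, sequence length, and training settings are illustrative assumptions, not the paper's configuration.

```python
# Sketch only: CNN extracts local n-gram features, LSTM models the remaining order.
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences

VOCAB, MAX_LEN = 20000, 200
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=VOCAB)
x_train = pad_sequences(x_train, maxlen=MAX_LEN)
x_test = pad_sequences(x_test, maxlen=MAX_LEN)

model = models.Sequential([
    layers.Embedding(VOCAB, 128),
    layers.Conv1D(64, 5, activation="relu"),   # convolution over word embeddings
    layers.MaxPooling1D(4),                    # pooled sequence is fed to the LSTM
    layers.LSTM(64),                           # temporal features on top of CNN
    layers.Dense(1, activation="sigmoid"),     # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=2,
          validation_data=(x_test, y_test))
```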