• Title/Summary/Keyword: Stock management


Optimum Management Plan of Swine Wastewater Treatment Plant for the Removal of High-concentration Nitrogen (고농도 질소제거를 위한 축산폐수 처리시설 적정관리 방안)

  • Shin, Nam-Cheol;Jung, Yoo-Jin;Sung, Nak-Chang
    • Korean Journal of Environmental Agriculture
    • /
    • v.19 no.3
    • /
    • pp.194-200
    • /
    • 2000
  • The amount of swine wastewater reaches about $197,000m^3$ per day at livestock houses across the country. About half of these swine wastewater sources are too small to be regulated legally, and this untreated wastewater causes eutrophication in water bodies. In swine wastewater treatment, solid-liquid separation must be performed because feces (solid phase) and urine (liquid phase) differ greatly in nitrogen and phosphorus concentration, and the pollutant concentrations must be assessed accurately when planning treatment facilities. A full-scale operation was carried out in K city; the plant consists of a conventional process, a supplementary flocculation basin for chemical treatment, and an $anaerobic{\cdot}aerobic$ basin for nitrogen removal. The improved full-scale plant reduced total nitrogen (T-N) from $1,500{\sim}3,000mg/l$ to 120mg/l and total phosphorus (T-P) from $131{\sim}156mg/l$ to $0.15{\sim}1.00mg/l$. Accordingly, as a result of the operational improvement, the removal efficiencies of T-N and T-P were $92{\sim}96%$ and over 99%, respectively. The continuous supply of organic carbon sources and pH control played important roles in maintaining balanced metabolism in the anaerobic basin, whose pH stayed at about 9.0 throughout the study period.
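The removal efficiencies quoted above follow from the standard influent/effluent formula; a minimal sketch, using the concentrations reported in the abstract:

```python
def removal_efficiency(influent_mg_l: float, effluent_mg_l: float) -> float:
    """Percent of a pollutant removed between influent and effluent."""
    return (influent_mg_l - effluent_mg_l) / influent_mg_l * 100

# T-N: 1,500~3,000 mg/l influent reduced to 120 mg/l effluent
print(round(removal_efficiency(1500, 120), 1))  # 92.0
print(round(removal_efficiency(3000, 120), 1))  # 96.0
# T-P: 131 mg/l influent reduced to 1.00 mg/l effluent
print(round(removal_efficiency(131, 1.00), 1))  # 99.2
```

This reproduces the reported 92~96% (T-N) and over-99% (T-P) figures.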


A Study on the Availability of Spatial and Statistical Data for Assessing CO2 Absorption Rate in Forests - A Case Study on Ansan-si - (산림의 CO2 흡수량 평가를 위한 통계 및 공간자료의 활용성 검토 - 안산시를 대상으로 -)

  • Kim, Sunghoon;Kim, Ilkwon;Jun, Baysok;Kwon, Hyuksoo
    • Journal of Environmental Impact Assessment
    • /
    • v.27 no.2
    • /
    • pp.124-138
    • /
    • 2018
  • This research was conducted to examine the availability of spatial data for assessing $CO_2$ absorption rates in the forests of Ansan-si and to evaluate the validity of methods that analyze $CO_2$ absorption. To assess the annual $CO_2$ absorption rates statistically, the 1:5,000 Digital Forest Map (Lim5000) and the Standard Carbon Removal of Major Forest Species (SCRMF) were employed, and the Land Cover Map (LCM) was also used to verify the annual $CO_2$ absorption estimates. Great variations in $CO_2$ absorption rates occurred before and after 2010, owing to improvements in the precision and accuracy of the Forest Basic Statistics (FBS) in 2010, which produced a rapid increase in recorded growing stock; data prior to 2010 should therefore be calibrated to recent FBS standards. Previous studies that employed Lim5000 and FBS (2015, 2010) did not take into account the $CO_2$ absorption rates of different tree species. The combination of SCRMF and Lim5000 yielded a $CO_2$ absorption of 42,369 tons, whereas LCM with SCRMF yielded 40,696 tons. Homoscedasticity tests for Lim5000 and LCM resulted in a p-value <0.01, with a difference in $CO_2$ absorption of 1,673 tons. Given that forest $CO_2$ absorption is an important factor in reducing greenhouse gas emissions, the findings of this study should provide fundamental information for supporting a wide range of decision-making processes in land use and management.
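The stock-based estimate boils down to multiplying per-species forest area by a per-hectare annual absorption coefficient and summing. A minimal sketch, with hypothetical species and coefficients (not the actual SCRMF values):

```python
# ton CO2 per ha per year -- illustrative placeholders, not SCRMF figures
ABSORPTION_COEF = {
    "Pinus densiflora": 10.8,
    "Quercus spp.": 12.1,
    "Larix kaempferi": 11.9,
}

def total_absorption(area_by_species: dict) -> float:
    """Sum of area_i (ha) * coefficient_i over all mapped species."""
    return sum(area_by_species[sp] * ABSORPTION_COEF[sp]
               for sp in area_by_species)

areas = {"Pinus densiflora": 1200.0, "Quercus spp.": 800.0}  # ha, illustrative
print(round(total_absorption(areas), 1))  # 22640.0
```

Swapping Lim5000 areas for LCM areas in `area_by_species` is what produces the two totals (42,369 vs. 40,696 tons) compared in the study.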

A Study on Problems with the ROK's Bioterrorism Response System and Ways to Improve it (생물테러 대응체제의 문제점과 개선방안 연구)

  • Jung, Yook-Sang
    • Korean Security Journal
    • /
    • no.22
    • /
    • pp.113-144
    • /
    • 2010
  • Bioterrorism is becoming more attractive to terrorist groups owing to the dramatic increase in the utility and lethality of biological weapons enabled by today's cutting-edge biological science and technology. The Republic of Korea faces both internal and external terrorist threats, as well as possible biological warfare by North Korea, so it is essential to establish an effective bioterrorism response system in the ROK. To design an adequate response system, an in-depth study was conducted on the current bioterrorism response system of the U.S., whose preparedness is considered relatively robust. The following facts were found: (1) U.S. legislation regarding bioterrorism has been established or amended to match the current situation; (2) counter-terrorism activities have been integrated under the Department of Homeland Security as the central agency in order to maximize national CT capacity; (3) specific procedures and instructions for coping with bioterrorism have been compiled into manuals to enhance working-level response capabilities. Next, the ROK's bioterrorism response system was analyzed in various categories, including the legislative system, task distribution, cooperative relations, and resource application. It turned out that the ROK's legislative basis is relatively weak and that it lacks an apparatus to integrate bioterrorism response activities at the national level; the shortage of adequate response facilities and resources, as well as poor management of manpower, also emerged as problems that hinder effective CT implementation. Through this analytical and comparative study of the U.S. and ROK systems, this paper presents several ways to improve the current system in the ROK: (1) establish an anti-terrorism law as the basic legal foundation for bioterrorism-related matters, and revise the disaster-related legislation relevant to bioterrorism response activities; (2) establish an integrated body with the authority to coordinate the relevant CT agencies, and converge the decentralized functions to maximize overall response capacity; (3) install laboratories with a high biosafety level and secure a sufficient strategic medical stockpile; (4) enhance the ability of inexperienced response personnel by providing them with a manual containing detailed instructions.


Short-term Effect of Thinning on Aboveground Carbon Storage in Korean Pine (Pinus koraiensis) Plantation (간벌이 잣나무 조림지 지상부 탄소저장량에 미치는 초기 영향)

  • Hwang, Jaehong;Bae, Sang-Won;Lee, Kyung Jae;Lee, Kwang-Soo;Kim, Hyun-Seop
    • Journal of Korean Society of Forest Science
    • /
    • v.97 no.6
    • /
    • pp.605-610
    • /
    • 2008
  • This study investigated the short-term (3-year) effect of thinning on aboveground carbon storage in 34-year-old (site 1) and 45-year-old (site 2) Korean pine (Pinus koraiensis Siebold et Zuccarini) plantations with different diameter classes and site qualities, located in the Gwangneung experimental forest. Thinning was carried out manually on a basal-area basis in 2004 (site 1: 30% and 60% of basal area removed; site 2: 60% of basal area removed). In 2004 and 2007, DBH and tree height were measured to analyze the changes in carbon storage after thinning. In the plots with 60% of basal area removed, although the mean DBH of site 1 was higher than that of site 2, the mean annual carbon storage increment in site 2 ($6.5Mg\;C\;ha^{-1}yr^{-1}$) was about 3 times higher than that in site 1 ($2.3Mg\;C\;ha^{-1}yr^{-1}$), probably because of the higher stem density and site quality of site 2. In site 2, the mean annual carbon storage increment in the thinned plot ($6.5Mg\;C\;ha^{-1}yr^{-1}$) was about 1.3 times higher than in the control ($5.2Mg\;C\;ha^{-1}yr^{-1}$). The results suggest that stem density and site quality may be much more closely related to aboveground carbon storage than diameter class, and that both factors should be considered when determining whether thinning is a feasible management alternative for increasing aboveground carbon sequestration.
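The mean annual increment is simply the change in carbon storage divided by the measurement interval; a minimal sketch with illustrative plot values (not the study's raw data) chosen to reproduce the reported $6.5Mg\;C\;ha^{-1}yr^{-1}$:

```python
def mean_annual_increment(c_start: float, c_end: float, years: int) -> float:
    """Mean annual aboveground carbon storage increment (Mg C/ha/yr)."""
    return (c_end - c_start) / years

# Illustrative plot: 100.2 Mg C/ha measured in 2004, 119.7 Mg C/ha in 2007
print(round(mean_annual_increment(100.2, 119.7, 3), 1))  # 6.5
```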

An Economic Analysis of Wildlife Rearing Farmhouses in Korea (Deer, Pheasant, Wild Boar and Fox Rearing Farmhouses) (야생조수(野生鳥獸) 인공사육농가(人工飼育農家)의 경영실태분석(經營實態分析)(사슴, 꿩, 멧돼지와 여우 사육농가(飼育農家)를 중심(中心)으로))

  • Kwak, Kyung Ho;Cho, Eung Hyouk;Kim, Se Bin;Oh, Kyoung Su
    • Korean Journal of Agricultural Science
    • /
    • v.20 no.1
    • /
    • pp.25-33
    • /
    • 1993
  • This study was conducted to obtain information necessary for improving wildlife rearing management. Data were gathered by questionnaire survey of one hundred and eighty farmers (60 rearing deer, 60 rearing pheasants, and 30 each rearing wild boar and foxes) during the summer of 1992. The results are as follows: 1. Most managers considered wildlife rearing a side job, with agriculture named as the main job by most of them except the wild boar managers. 2. The major cost items were breeding stock and feed, which together accounted for more than half of total costs. 3. The yearly profit rate was highest for deer (25.5%) and lowest for wild boar (10.3%). 4. The break-even point was highest for wild boar (24 mil. won) and lowest for pheasants (7.3 mil. won). 5. The optimum yearly sales scale was 11 head for deer, 1,027 head for pheasants, 69 head for wild boar, and 102 head for foxes.
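The break-even point in item 4 is conventionally computed as fixed costs divided by the contribution margin ratio; a minimal sketch with hypothetical cost figures (not the survey data):

```python
def break_even_sales(fixed_costs: float, variable_cost_ratio: float) -> float:
    """Break-even sales = fixed costs / contribution margin ratio."""
    return fixed_costs / (1.0 - variable_cost_ratio)

# Hypothetical: 12 mil. won fixed costs, variable costs at 50% of sales
print(round(break_even_sales(12.0, 0.5), 1))  # 24.0 (mil. won)
```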


WHICH INFORMATION MOVES PRICES: EVIDENCE FROM DAYS WITH DIVIDEND AND EARNINGS ANNOUNCEMENTS AND INSIDER TRADING

  • Kim, Chan-Wung;Lee, Jae-Ha
    • The Korean Journal of Financial Studies
    • /
    • v.3 no.1
    • /
    • pp.233-265
    • /
    • 1996
  • We examine the impact of public and private information on price movements using the thirty DJIA stocks and twenty-one NASDAQ stocks. We find that the standard deviation of daily returns on information days (dividend announcement, earnings announcement, insider purchase, or insider sale) is much higher than on no-information days. Only public information matters at the NYSE, probably due to masked identification of insiders. Earnings announcements have the greatest impact for both DJIA and NASDAQ stocks, and there is some evidence of a positive impact of insider sales on the return volatility of NASDAQ stocks. There has been considerable debate, e.g., French and Roll (1986), over whether market volatility is due to public information or private information, the latter gathered through costly search and revealed only through trading. Public information is composed of (1) marketwide public information, such as regularly scheduled federal economic announcements (e.g., employment, GNP, leading indicators), and (2) company-specific public information, such as dividend and earnings announcements. Policy makers and corporate insiders have better access to marketwide private information (e.g., a new monetary policy decision made in a Federal Reserve Board meeting) and company-specific private information, respectively, compared to the general public. Ederington and Lee (1993) show that marketwide public information accounts for most of the observed volatility patterns in interest rate and foreign exchange futures markets. Company-specific public information is explored by Patell and Wolfson (1984) and Jennings and Starks (1985), who show that dividend and earnings announcements induce higher-than-normal volatility in equity prices. Kyle (1985), Admati and Pfleiderer (1988), Barclay, Litzenberger and Warner (1990), Foster and Viswanathan (1990), Back (1992), and Barclay and Warner (1993) show that the private information held by informed traders and revealed through trading influences market volatility. Cornell and Sirri (1992) and Meulbroek (1992) investigate actual insider trading activities in a tender offer case and in prosecuted illegal trading cases, respectively. This paper examines the aggregate and individual impact of marketwide information, company-specific public information, and company-specific private information on equity prices. Specifically, we use the thirty common stocks in the Dow Jones Industrial Average (DJIA) and twenty-one National Association of Securities Dealers Automated Quotations (NASDAQ) common stocks to examine how their prices react to information. Marketwide information (public and private) is estimated by the movement in the Standard and Poor's (S&P) 500 Index price for the DJIA stocks and the movement in the NASDAQ Composite Index price for the NASDAQ stocks. Dividend and earnings announcements are used as a subset of company-specific public information. Corporate insiders (major corporate officers, members of the board of directors, and owners of at least 10 percent of any equity class) have access to private information but cannot legally trade on it; therefore, most insider transactions are not necessarily based on private information. Nevertheless, we hypothesize that market participants observe how insiders trade in order to infer information that they cannot themselves possess, because insiders tend to buy (sell) when they have good (bad) information about their company. For example, Damodaran and Liu (1993) show that insiders of real estate investment trusts buy (sell) after they receive favorable (unfavorable) appraisal news, before the information in these appraisals is released to the public.
Price discovery in a competitive multiple-dealership market (NASDAQ) would be different from that in a monopolistic specialist system (NYSE). Consequently, we hypothesize that NASDAQ stocks are affected more by private information (or more precisely, insider trading) than the DJIA stocks. In the next section, we describe our choices of the fifty-one stocks and the public and private information set. We also discuss institutional differences between the NYSE and the NASDAQ market. In Section II, we examine the implications of public and private information for the volatility of daily returns of each stock. In Section III, we turn to the question of the relative importance of individual elements of our information set. Further analysis of the five DJIA stocks and the four NASDAQ stocks that are most sensitive to earnings announcements is given in Section IV, and our results are summarized in Section V.
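The volatility comparison described above amounts to splitting daily returns into information and no-information days and comparing standard deviations; a minimal sketch on simulated returns (not the DJIA/NASDAQ data):

```python
import numpy as np

def event_day_volatility(returns, is_info_day):
    """Std. dev. of daily returns on information vs. no-information days."""
    info = returns[is_info_day].std(ddof=1)
    quiet = returns[~is_info_day].std(ddof=1)
    return info, quiet

rng = np.random.default_rng(0)
flags = rng.random(2000) < 0.1                 # ~10% simulated event days
r = np.where(flags,
             rng.normal(0.0, 0.03, 2000),      # noisier returns on event days
             rng.normal(0.0, 0.01, 2000))
info, quiet = event_day_volatility(r, flags)
print(info > quiet)  # True: volatility is higher on information days
```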


Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.139-153
    • /
    • 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Through successive economic crises, bankruptcies have increased and bankruptcy prediction models have become more and more important; corporate bankruptcy has therefore been regarded as one of the major research topics in business management, and many industry studies are also in progress. Previous studies attempted various methodologies to improve prediction accuracy and to resolve the overfitting problem, such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM), which are based on statistics. More recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN), along with fuzzy theory and genetic algorithms. As a result, many bankruptcy models have been developed and their performance has improved. In general, a company's financial and accounting information changes over time, as does the market situation, so predicting bankruptcy from information at a single point in time is difficult. Although traditional static models ignore this time effect and can therefore produce biased results, dynamic models have not been studied much; a dynamic model thus offers a chance to improve bankruptcy prediction. In this paper, we propose the RNN (Recurrent Neural Network), one of the deep learning methodologies, which learns time-series data and is known to perform well. For the estimation of the bankruptcy prediction model and the comparison of forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ and KONEX markets from 2010 to 2016.
To avoid mistakenly predicting bankruptcy from financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. We defined bankruptcy as delisting due to sluggish earnings, confirmed through KIND, a corporate stock information website. We then selected variables from previous papers: the first set consists of Z-score variables, which are traditional in bankruptcy prediction, and the second is a dynamic variable set. We selected 240 normal companies and 226 bankrupt companies for the first variable set, and 229 normal companies and 226 bankrupt companies for the second. We created a model that reflects dynamic changes in time-series financial data, and by comparing the suggested model with existing bankruptcy prediction models, we found that it could help improve the accuracy of bankruptcy predictions. We used financial data from KIS Value (a financial database) and selected Multivariate Discriminant Analysis (MDA), the Generalized Linear Model known as logistic regression (GLM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN) as benchmarks. The experiment showed that the RNN performed better than the comparative models: its accuracy was high for both variable sets, its Area Under the Curve (AUC) value was also high, and in the hit-ratio table the RNN identified failing companies as bankrupt at a higher rate than the other models. A limitation of this paper is that an overfitting problem occurs during RNN learning.
We expect the overfitting problem can be addressed by selecting more training data and appropriate variables. From these results, we expect this research to contribute to the development of bankruptcy prediction by proposing a new dynamic model.
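A single forward pass of the kind of recurrent model described above can be sketched as follows; the weights and the five-year sequence of eight financial ratios are random placeholders, not the paper's trained model:

```python
import numpy as np

def rnn_bankruptcy_prob(X, Wx, Wh, b, w_out, b_out):
    """Tanh RNN over a sequence of yearly feature vectors (X: T x d),
    ending in a sigmoid bankruptcy probability."""
    h = np.zeros(Wh.shape[0])
    for x_t in X:                              # one step per reporting year
        h = np.tanh(x_t @ Wx + h @ Wh + b)     # hidden state carries history
    logit = h @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-logit))        # P(bankrupt)

rng = np.random.default_rng(42)
T, d, hid = 5, 8, 16                           # 5 years of 8 ratios (placeholders)
X = rng.normal(size=(T, d))
p = rnn_bankruptcy_prob(X,
                        rng.normal(size=(d, hid)) * 0.1,
                        rng.normal(size=(hid, hid)) * 0.1,
                        np.zeros(hid),
                        rng.normal(size=hid) * 0.1,
                        0.0)
print(0.0 < p < 1.0)  # True
```

The hidden state is what lets the model use the whole time series rather than a single year's snapshot, which is the dynamic effect the paper exploits.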

Evaluation on the Immunization Module of Non-chart System in Private Clinic for Development of Internet Information System of National Immunization Programme in Korea (국가 예방접종 인터넷정보시스템 개발을 위한 의원정보시스템의 예방접종 모듈 평가연구)

  • Lee, Moo-Sik;Lee, Kun-Sei;Lee, Seok-Gu;Shin, Eui-Chul;Kim, Keon-Yeop;Na, Bak-Ju;Hong, Jee-Young;Kim, Yun-Jeong;Park, Sook-Kyung;Kim, Bo-Kyung;Kwon, Yun-Hyung;Kim, Young-Taek
    • Journal of agricultural medicine and community health
    • /
    • v.29 no.1
    • /
    • pp.65-75
    • /
    • 2004
  • Objectives: Immunization has been one of the most effective measures for preventing infectious diseases, and keeping the immunization rate high and monitoring it continuously are important elements of national infectious disease prevention policy. To this end, the Korean CDC introduced the National Immunization Registry Program (NIRP), which has been implemented since 2000 at Public Health Centers (PHC). The NIRP will be nearly complete once vaccination data are shared, connected, and transferred between the public and private sectors. The aim of this study was to evaluate the immunization module of the non-chart system used in private clinics against the health information system of the public health center (made by POSDATA Co., Ltd.) and the immunization registry program (made by BIT Computer Co., Ltd.). Methods: The analysis and survey were conducted from November 2001 to January 2002 by specialists in the medical, health, and health information fields, producing an analysis of and recommendations for the immunization module of the non-chart system in private clinics. Results and Conclusions: To improve the immunization module, the system should be revised in various functions: receipt and registration, preliminary medical examination, reference and inquiry, vaccine registration, printing of various sheets, transfer of vaccination data, issuance of vaccination certificates, reminder and recall, statistical calculation, and vaccine stock management. An accurate assessment of the current immunization module of each private non-chart system is needed, and further studies will be necessary to keep the system accurate under changing health policy related to the national immunization program. We hope the results of this study contribute to establishing the National Immunization Registry Program.


A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for them. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models, such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, offers stability in the management of large funds, and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible, it scales to billions of examples in limited-memory environments, learns very fast compared to traditional boosting methods, and is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation process. Because an optimized asset allocation model estimates the investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. While there are various studies on parametric estimation methods to reduce estimation errors in portfolio optimization, we suggest a new machine learning method for reducing the estimation errors of an optimized asset allocation model.
This study is thus meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
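The risk-based weighting at the heart of the model can be illustrated with a naive inverse-volatility version of risk parity; the predicted volatilities below are placeholders standing in for XGBoost forecasts, and cross-asset correlations are ignored for simplicity:

```python
import numpy as np

def inverse_vol_weights(pred_vol):
    """Naive risk parity: weight each sector inversely to its predicted
    volatility so each contributes roughly equal risk (correlations ignored)."""
    inv = 1.0 / np.asarray(pred_vol)
    return inv / inv.sum()

# Placeholder volatility forecasts for 4 sectors (e.g. XGBoost outputs)
vol = np.array([0.10, 0.20, 0.25, 0.40])
w = inverse_vol_weights(vol)
print(np.round(w, 3))            # low-vol sectors get the largest weights
print(round(float(w.sum()), 6))  # 1.0
```

Replacing historical volatilities with next-period forecasts in this step is what the paper argues reduces the gap between the estimation and investment periods.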

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated in text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and it has been studied in various directions, from simple rule-based approaches to dictionary-based approaches using predefined labels, with a focus on accuracy. Indeed, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available, easy to collect, and directly affect business: in marketing, real-world customer information is gathered from websites rather than surveys, and whether a website's posts are positive or negative is reflected in customer response and sales, so firms try to identify this information. However, many reviews on a website are not well written and are difficult to classify. Earlier studies in this area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. Nevertheless, accuracy remains limited because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set.
First, for the text classification algorithms related to sentiment analysis, we adopt popular machine learning algorithms such as NB (Naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can process a sentence in vector format much like a bag-of-words model, but it does not consider the sequential attributes of the data. An RNN handles ordering well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem, which LSTM was designed to solve. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to understand how well the models work for sentiment analysis and why. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text. The reasons for combining these two algorithms are as follows. A CNN can extract classification features automatically through its convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem.
Furthermore, when LSTM is used in the CNN's pooling layer, the network has an end-to-end structure, so spatial and temporal features can be modeled simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, each word embedding layer can be improved by training the kernels step by step. CNN-LSTM can compensate for the weaknesses of each model, with the further advantage of layer-wise learning through the end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
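The CNN-then-LSTM pipeline described above can be sketched with random weights; this is an untrained forward pass showing the data flow (convolutional n-gram features feeding a recurrent layer), not the paper's model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv1d_relu(E, K):
    """Valid 1-D convolution of embeddings E (T, d) with kernels K (k, d, f)."""
    k, _, f = K.shape
    T = E.shape[0] - k + 1
    out = np.zeros((T, f))
    for t in range(T):
        # sum over window position k and embedding dim d for each filter f
        out[t] = np.maximum(0.0, np.einsum("kd,kdf->f", E[t:t + k], K))
    return out

def lstm_last_state(X, Wi, Wf, Wo, Wc):
    """Run an LSTM over feature rows X (T, f); each weight acts on [x_t, h_t]."""
    h = np.zeros(Wi.shape[1])
    c = np.zeros_like(h)
    for x in X:
        z = np.concatenate([x, h])
        i, fg, o = sigmoid(z @ Wi), sigmoid(z @ Wf), sigmoid(z @ Wo)
        c = fg * c + i * np.tanh(z @ Wc)       # gated memory cell
        h = o * np.tanh(c)
    return h

rng = np.random.default_rng(7)
T, d, k, f, hid = 20, 16, 3, 8, 12             # toy review of 20 tokens
E = rng.normal(size=(T, d))                    # word embeddings (placeholders)
K = rng.normal(size=(k, d, f)) * 0.1
feats = conv1d_relu(E, K)                      # CNN extracts local n-gram features
h = lstm_last_state(feats,
                    *(rng.normal(size=(f + hid, hid)) * 0.1 for _ in range(4)))
w_out = rng.normal(size=hid) * 0.1
p = sigmoid(h @ w_out)                         # positive-review probability
print(0.0 < p < 1.0)  # True
```

The convolution captures short local patterns while the LSTM accumulates them in order, which is the complementarity the integrated model relies on.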