• Title/Summary/Keyword: numerical studies

Search Results: 3,272

Improving the Accuracy of the Mohr Failure Envelope Approximating the Generalized Hoek-Brown Failure Criterion (일반화된 Hoek-Brown 파괴기준식의 근사 Mohr 파괴포락선 정확도 개선)

  • Youn-Kyou Lee
    • Tunnel and Underground Space
    • /
    • v.34 no.4
    • /
    • pp.355-373
    • /
    • 2024
  • The Generalized Hoek-Brown (GHB) criterion is a nonlinear failure criterion specialized for rock engineering applications and has recently seen increased use. However, the GHB criterion expresses the relationship between the minimum and maximum principal stresses at failure, and when GSI≠100 it has the disadvantage of being difficult to express as an explicit relationship between the normal and shear stresses acting on the failure plane, i.e., as a Mohr failure envelope. This makes it challenging to apply the GHB criterion in numerical analysis techniques such as limit equilibrium analysis, upper-bound limit analysis, and the critical plane approach. Consequently, recent studies have attempted to express the GHB Mohr failure envelope as an approximate analytical formula, and continued research interest in this area is needed. This study presents improved formulations for the approximate GHB Mohr failure envelope, offering higher accuracy in predicting shear strength than existing formulas. The improved formulation process employs a method to enhance the approximation accuracy of the tangential friction angle and utilizes the tangent line equation of the nonlinear GHB failure envelope to improve the accuracy of the shear strength approximation. In the latter part of this paper, the advantages and limitations of the proposed approximate GHB failure envelopes in terms of shear strength prediction accuracy and calculation time are discussed.
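The conversion the abstract describes, from the GHB principal-stress form to a point on the Mohr envelope, can be sketched numerically. This is a minimal illustration using Balmer's equations with a finite-difference tangent, not the paper's improved closed-form approximation; the rock parameters below are hypothetical examples.

```python
import math

# GHB criterion in principal-stress form, converted to a Mohr-envelope point
# (sigma_n, tau) via Balmer's equations. Parameter values are hypothetical.
sci, mb, s, a = 100.0, 5.0, 0.01, 0.51   # sigma_ci (MPa) and GHB constants

def ghb_sigma1(s3):
    """Major principal stress at failure under the Generalized Hoek-Brown criterion."""
    return s3 + sci * (mb * s3 / sci + s) ** a

def mohr_point(s3, h=1e-6):
    """Tangential friction angle (deg) and (sigma_n, tau) for a given sigma_3."""
    s1 = ghb_sigma1(s3)
    ds1 = (ghb_sigma1(s3 + h) - ghb_sigma1(s3 - h)) / (2 * h)  # d(sigma1)/d(sigma3)
    sin_phi = (ds1 - 1.0) / (ds1 + 1.0)              # tangential friction angle
    half_diff = 0.5 * (s1 - s3)
    sigma_n = 0.5 * (s1 + s3) - half_diff * sin_phi  # normal stress on failure plane
    tau = half_diff * math.sqrt(1.0 - sin_phi ** 2)  # shear strength
    return math.degrees(math.asin(sin_phi)), sigma_n, tau

phi, sigma_n, tau = mohr_point(2.0)   # e.g. sigma_3 = 2 MPa
```

Because the derivative is taken numerically, this sketch gives the exact envelope point rather than an analytical approximation, which is what makes the closed-form approximations studied in the paper attractive for repeated evaluation.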

Spatio-temporal enhancement of forest fire risk index using weather forecast and satellite data in South Korea (기상 예보 및 위성 자료를 이용한 우리나라 산불위험지수의 시공간적 고도화)

  • KANG, Yoo-Jin;PARK, Su-min;JANG, Eun-na;IM, Jung-ho;KWON, Chun-Geun;LEE, Suk-Jun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.4
    • /
    • pp.116-130
    • /
    • 2019
  • In South Korea, forest fires are increasing in size and duration due to various factors such as the accumulation of fuel materials and frequent dry conditions in forests. Therefore, it is necessary to minimize the damage caused by forest fires by appropriately providing the probability of forest fire risk. The purpose of this study is to improve the Daily Weather Index(DWI) provided by the current forest fire forecasting system in South Korea. A new Fire Risk Index(FRI) is proposed, provided on a 5 km grid through the synergistic use of numerical weather forecast data, satellite-based drought indices, and forest fire-prone areas. The FRI is calculated from the product of the Fine Fuel Moisture Code(FFMC) optimized for Korea, an integrated drought index, and spatio-temporal weights. To improve the temporal accuracy of forest fire risk, monthly weights were applied based on forest fire occurrences by month; similarly, spatial weights were applied using forest fire density information to improve spatial accuracy. In a time series analysis of the number of monthly forest fires against the FRI, the relationship between the two was well simulated. In addition, the FRI on the 5 km grid provided spatially more detailed information on forest fire risk than the DWI based on administrative units. The research findings from this study can help appropriate decision-making before and after forest fire occurrences.
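The multiplicative structure of the FRI described above can be sketched in a few lines. This is a hypothetical illustration of the idea only; the normalization, weight values, and function signature are made-up examples, not the study's calibrated formulation.

```python
# Hypothetical sketch of a multiplicative fire-risk index: a fuel-moisture
# term, a drought term, and monthly/spatial weights combined by product.
# All inputs are assumed to be pre-normalized to 0..1; values are examples.
def fire_risk_index(ffmc, drought, month_w, density_w):
    """Combine normalized inputs (0..1) into a 0..100 risk score."""
    return 100.0 * ffmc * drought * month_w * density_w

# Example: a dry spring grid cell in a fire-prone district
risk = fire_risk_index(ffmc=0.9, drought=0.8, month_w=1.0, density_w=0.95)
```

A product form like this means any single low factor (e.g. wet fuels) pulls the whole index down, which matches the intent of weighting risk by both moisture state and historical fire occurrence.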

Calculation of Surface Heat Flux in the Southeastern Yellow Sea Using Ocean Buoy Data (해양부이 자료를 이용한 황해 남동부 해역 표층 열속 산출)

  • Kim, Sun-Bok;Chang, Kyung-Il
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.19 no.3
    • /
    • pp.169-179
    • /
    • 2014
  • Monthly mean surface heat fluxes in the southeastern Yellow Sea are calculated using directly observed air-sea variables from an ocean buoy station, including short- and longwave radiation, and the COARE 3.0 bulk flux algorithm. The calculated monthly mean heat fluxes are then compared with previous estimates of climatological monthly mean surface heat fluxes near the buoy location. The sea surface receives heat through net shortwave radiation ($Q_i$) and loses heat through net longwave radiation ($Q_b$), sensible heat flux ($Q_h$), and latent heat flux ($Q_e$). $Q_e$ is the largest contributor to the total heat loss at about 51%, while $Q_b$ and $Q_h$ account for 34% and 15% of the total heat loss, respectively. The net heat flux ($Q_n$) shows a maximum in May ($191.4W/m^2$), when $Q_i$ reaches its annual maximum, and a minimum in December ($-264.9W/m^2$), when the heat loss terms peak. The annual mean $Q_n$ is estimated to be $1.9W/m^2$, which is negligibly small considering instrument errors (maximum of ${\pm}19.7W/m^2$). In the previous estimates, summertime incoming radiation ($Q_i$) is underestimated by about $10{\sim}40W/m^2$, and wintertime heat losses due to $Q_e$ and $Q_h$ are overestimated by about $50W/m^2$ and $30{\sim}70W/m^2$, respectively. Consequently, compared to $Q_n$ from the present study, other studies underestimate the net heat gain during the period of net oceanic heat gain between April and August and overestimate the ocean's net heat loss in winter; the difference in $Q_n$ is as large as $70{\sim}130W/m^2$ in December and January. Analysis of a long-term reanalysis product (MERRA) indicates that the difference in monthly mean heat fluxes between the present and previous studies is due not to the temporal variability of the fluxes but to inaccurate data used for the flux calculations. This study suggests that caution should be exercised in using the previously documented climatological monthly mean surface heat fluxes for various research and numerical modeling purposes.
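The surface heat budget bookkeeping above is simple arithmetic. The flux values in this sketch are illustrative numbers chosen to reproduce the quoted loss fractions ($Q_e$ ~51%, $Q_b$ ~34%, $Q_h$ ~15%); they are not the buoy measurements.

```python
# Net surface heat flux = shortwave gain minus the three loss terms.
# Values below are illustrative, not the study's observations.
Qi = 180.0                        # net shortwave gain (W/m^2)
Qe, Qb, Qh = 102.0, 68.0, 30.0    # latent, net longwave, sensible losses (W/m^2)

total_loss = Qe + Qb + Qh
Qn = Qi - total_loss              # net flux; positive means the ocean gains heat
fractions = {name: flux / total_loss
             for name, flux in {"Qe": Qe, "Qb": Qb, "Qh": Qh}.items()}
```

With these example numbers the loss partition is Qe 51%, Qb 34%, Qh 15%, matching the proportions reported in the abstract, and the month runs at a small net loss.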

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, anomaly detection was dominated by methods that judged whether an observation was abnormal based on statistics derived from specific data. This methodology worked because data used to be low-dimensional, so classical statistical methods were effective. However, as data characteristics have grown complex in the era of big data, it has become more difficult to accurately analyze and predict the data generated throughout industry in the conventional way. Supervised learning algorithms such as SVM and Decision Tree were therefore adopted. However, a supervised model predicts test data accurately only when the class distribution of the training data reflects that of the test data, and most data generated in industry is class-imbalanced, so the predictions of a supervised model are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model built on convolutional neural networks that performs anomaly detection on medical images. In contrast, anomaly detection for sequence data using generative adversarial networks has far fewer published studies than for image data. Li et al. (2018) proposed a model based on LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it was not applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that anomaly classification of sequence data with generative adversarial networks still leaves much to be studied. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-stacked LSTM with 32-dim and 64-dim hidden unit layers, and the discriminator is an LSTM with a 64-dim hidden unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probability of the actual data; in this paper, as mentioned earlier, anomaly scores are instead derived using the feature matching technique. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it can learn the data distribution from real categorical sequence data, it is not swayed by a single dominant normal pattern, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the adversarial network 51%. Experiments were also conducted to measure how much performance changes with the structure used to optimize the latent variables; sensitivity improved by about 1%. These results offer a new perspective on optimizing the latent variables, which had previously received relatively little attention.
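The feature-matching anomaly score mentioned above can be sketched minimally in numpy. In the paper this score is computed from the discriminator's intermediate LSTM features and the generator's reconstruction; here `features()` is a random stand-in projection, so the sketch shows only the scoring idea, not the trained model.

```python
import numpy as np

# Feature-matching anomaly score: L2 distance between intermediate-layer
# features of a real sequence and of its generated reconstruction.
# W is a hypothetical stand-in for the discriminator's hidden layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))

def features(x):
    """Stand-in for the discriminator's intermediate activations."""
    return np.tanh(0.5 * x @ W)

def anomaly_score(x_real, x_gen):
    """Large when the reconstruction's features don't match the input's."""
    return float(np.linalg.norm(features(x_real) - features(x_gen)))

normal = rng.normal(size=8)
score_good = anomaly_score(normal, normal + 0.01 * rng.normal(size=8))  # close reconstruction
score_bad = anomaly_score(normal, normal + 2.0 * rng.normal(size=8))    # poor reconstruction
```

A well-reconstructed (normal) input yields a near-zero score, while an input the generator cannot reproduce (an anomaly) yields a large one; the threshold between the two is a tuning choice.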

Review of Erosion and Piping in Compacted Bentonite Buffers Considering Buffer-Rock Interactions and Deduction of Influencing Factors (완충재-근계암반 상호작용을 고려한 압축 벤토나이트 완충재 침식 및 파이핑 연구 현황 및 주요 영향인자 도출)

  • Hong, Chang-Ho;Kim, Ji-Won;Kim, Jin-Seop;Lee, Changsoo
    • Tunnel and Underground Space
    • /
    • v.32 no.1
    • /
    • pp.30-58
    • /
    • 2022
  • The deep geological repository for high-level radioactive waste disposal is a multi-barrier system composed of engineered barriers and a natural barrier. The long-term integrity of the deep geological repository is affected by the coupled interactions between the individual barrier components. Erosion and piping in the compacted bentonite buffer due to buffer-rock interactions result in the removal of bentonite particles via groundwater flow and can negatively impact the integrity and performance of the buffer. Rapid groundwater inflow at the early stages of disposal can lead to piping in the bentonite buffer due to the buildup of pore water pressure. The physicochemical processes between the bentonite buffer and groundwater lead to bentonite swelling and gelation, resulting in bentonite erosion from the buffer surface. Hence, evaluating the occurrence of erosion and piping and their effects on the integrity of the bentonite buffer is crucial in determining the long-term integrity of the deep geological repository. Previous studies on bentonite erosion and piping failed to consider the complex coupled thermo-hydro-mechanical-chemical behavior of bentonite-groundwater interactions and lacked a comprehensive model that can reproduce the complex phenomena observed in experimental tests. In this technical note, previous studies on the mechanisms, lab-scale experiments, and numerical modeling of bentonite buffer erosion and piping are introduced, and the challenges expected in future investigations of bentonite buffer erosion and piping are summarized.

A preliminary study on the village landscape in Baengpo Bay, Haenam Peninsula - Around the Bronze Age - (해남반도 백포만일대 취락경관에 대한 시론 - 청동기시대를 중심으로 -)

  • KIM Jinyoung
    • Korean Journal of Heritage: History & Science
    • /
    • v.56 no.3
    • /
    • pp.62-74
    • /
    • 2023
  • Much attention has been focused on the Baekpo Bay area due to past archaeological achievements, but studies of the prehistoric period in which villages began to form are insufficient; to supplement this, the Bronze Age village landscape was examined. In the Baekpo Bay area, the natural geographical boundary connected to the inland was culturally confirmed by the distribution density of dolmens, and the general character of Bronze Age settlement was confirmed at the Hwangsan-ri settlement. Bunto Village in Hwangsan-ri represents a farming-based village in the Baekpo Bay area: the residential group and the tomb group are located on the same hill, the village is composed of three individual residential groups, and its landscape included attached buildings used as warehouses and storage facilities. Songgukri culture and dolmen culture, integrated in the Tamjin River and Yeongsan River basins, spread into the Baekpo Bay area, and the density of villages was considered to correspond to the distribution density of dolmens. To examine the landscape of village distribution, the Sochon-Jungchon-Daechon classification was applied; since most Bronze Age villages here comprised fewer than five dwellings, they are classified as Sochon, the sub-unit constituting a village. There are numerical differences between Jungchon and Daechon, and the distribution pattern does not necessarily coincide with a hierarchy. The three individual residential groups of Bunto Village in Hwangsan-ri form a Jungchon composed of allied kin-based family communities, and a stabilized village landscape was created in the Gusancheon area. In the Baekpo Bay area, Bronze Age villages formed a landscape in which small villages were scattered along the rivers in a single-layered relationship. Dolmens (tombs) were erected between the villages and seem to have coexisted with them. The Sochon were family communities based on agriculture, and it is believed that self-sufficient, stabilized rural villages living by acquiring various wild resources from the rivers, mountains, and the sea formed the landscape.

The Impact of Market Environments on Optimal Channel Strategy Involving an Internet Channel: A Game Theoretic Approach (시장 환경이 인터넷 경로를 포함한 다중 경로 관리에 미치는 영향에 관한 연구: 게임 이론적 접근방법)

  • Yoo, Weon-Sang
    • Journal of Distribution Research
    • /
    • v.16 no.2
    • /
    • pp.119-138
    • /
    • 2011
  • Internet commerce has been growing at a rapid pace for the last decade. Many firms try to reach wider consumer markets by adding the Internet channel to their existing traditional channels. Despite the various benefits of the Internet channel, a significant number of firms have failed in managing the new type of channel. Previous studies could not clearly explain these conflicting results associated with the Internet channel. One major reason is that most previous studies conducted analyses under a specific market condition and claimed the outcome as the impact of Internet channel introduction, so their results are strongly influenced by the specific market settings. However, firms face various market conditions in the real world. The purpose of this study is to investigate the impact of various market environments on a firm's optimal channel strategy by employing a flexible game theory model. We capture various market conditions with consumer density and the disutility of using the Internet.

    The channel structures analyzed in this study are as follows. Before the Internet channel is introduced, a monopoly manufacturer sells its products through an independent physical store. From this structure, the manufacturer could introduce its own Internet channel (MI). The independent physical store could also introduce its own Internet channel and coordinate it with the existing physical store (RI). An independent Internet retailer such as Amazon could enter this market (II); in this case, two types of independent retailers compete with each other. In this model, consumers are uniformly distributed on the two-dimensional space. Consumer heterogeneity is captured by a consumer's geographical location (ci) and his disutility of using the Internet channel (${\delta}_{N_i}$).
    The two consumer heterogeneities capture various market conditions. Case (a) illustrates a market with symmetric consumer distributions; the model also explicitly captures asymmetric distributions of consumer disutility. In a market such as case (c), the average consumer disutility of using an Internet store is relatively smaller than that of using a physical store. For example, this case represents a market in which 1) the product is suitable for Internet transactions (e.g., books) or 2) the level of E-Commerce readiness is high, as in Denmark or Finland. On the other hand, the average consumer disutility of using an Internet store is relatively greater than that of using a physical store in a market like case (b). Countries like Ukraine and Bulgaria, or the market for "experience goods" such as shoes, could be examples of this market condition. The scenarios of consumer distributions analyzed in this study are as follows: the range for disutility of using the Internet (${\delta}_{N_i}$) is held constant, while the range of consumer distribution (${\chi}_i$) varies from -25 to 25, from -50 to 50, from -100 to 100, from -150 to 150, and from -200 to 200.
    The analysis results are as follows. As the average travel cost in a market decreases while the average disutility of Internet use remains the same, average retail price, total quantity sold, physical store profit, monopoly manufacturer profit, and thus total channel profit increase. On the other hand, the quantity sold through the Internet and the profit of the Internet store decrease with a decreasing average travel cost relative to the average disutility of Internet use. We find that a channel that has an advantage over the other kind of channel serves a larger portion of the market. In a market with a high average travel cost, in which the Internet store has a relative advantage over the physical store, for example, the Internet store becomes a mass-retailer serving a larger portion of the market. This result implies that the Internet becomes a more significant distribution channel in markets characterized by greater geographical dispersion of buyers, or as consumers become more proficient in Internet usage. The results indicate that the degree of price discrimination also varies depending on the distribution of consumer disutility in a market. The manufacturer in a market in which the average travel cost is higher than the average disutility of using the Internet has a stronger incentive for price discrimination than the manufacturer in a market where the average travel cost is relatively lower. We also find that the manufacturer has a stronger incentive to maintain a high price level when the average travel cost in a market is relatively low. Additionally, the retail competition effect due to Internet channel introduction strengthens as average travel cost in a market decreases. This result indicates that a manufacturer's channel power relative to that of the independent physical retailer becomes stronger with a decreasing average travel cost.
This implication is counter-intuitive, because it is widely believed that the negative impact of Internet channel introduction on a competing physical retailer is more significant in a market like Russia, where consumers are more geographically dispersed, than in a market like Hong Kong, that has a condensed geographic distribution of consumers.
    The analysis also illustrates how this happens. When managers consider the overall impact of the Internet channel, however, they should consider not only channel power, but also sales volume. When both are considered, the introduction of the Internet channel is revealed as more harmful to a physical retailer in Russia than to one in Hong Kong, because the sales volume decrease for a physical store due to Internet channel competition is much greater in Russia than in Hong Kong. The results show that the manufacturer is always better off with any type of Internet store introduction. The independent physical store benefits from opening its own Internet store when the average travel cost is high relative to the disutility of using the Internet. Under the opposite market condition, however, the independent physical retailer could be worse off when it opens its own Internet outlet and coordinates both outlets (RI). This is because the low average travel cost significantly reduces the channel power of the independent physical retailer, further aggravating the already weak channel power caused by myopic inter-channel price coordination. The results imply that channel members and policy makers should explicitly consider the factors determining the relative distributions of both kinds of consumer disutility when they make a channel decision involving an Internet channel. These factors include the suitability of a product for Internet shopping, the level of E-Commerce readiness of a market, and the degree of geographic dispersion of consumers in a market. Despite the academic contributions and managerial implications, this study is limited in the following ways. First, a series of numerical analyses were conducted to derive equilibrium solutions due to the complex forms of the demand functions. In the process, we set V=100, ${\lambda}$=1, and ${\beta}$=0.01. Future research may change this parameter value set to check the generalizability of this study.
Second, the five different scenarios for market conditions were analyzed. Future research could try different sets of parameter ranges. Finally, the model setting allows only one monopoly manufacturer in the market. Accommodating competing multiple manufacturers (brands) would generate more realistic results.
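The consumer trade-off driving the model, travel cost to a physical store versus disutility of Internet use, can be sketched as a simple choice rule. The abstract does not reproduce the paper's demand functions, so everything below is a hypothetical illustration; only V=100 and ${\lambda}$=1 follow the parameter values quoted above, and the prices and disutility values are made up.

```python
# Hypothetical consumer choice rule: buy from whichever channel gives
# higher net utility, or abstain if both are negative. V and lam follow
# the quoted parameter set (V=100, lambda=1); all other numbers are examples.
V, lam = 100.0, 1.0

def chosen_channel(distance, delta_net, p_store, p_net):
    u_store = V - lam * distance - p_store   # travel cost to the physical store
    u_net = V - delta_net - p_net            # disutility of using the Internet
    if max(u_store, u_net) < 0:
        return "none"
    return "store" if u_store >= u_net else "internet"

# A geographically remote consumer (high travel cost) picks the Internet channel
ch = chosen_channel(distance=80.0, delta_net=20.0, p_store=10.0, p_net=10.0)
```

Aggregating this rule over a consumer distribution is what produces the market-share results in the abstract: when average travel cost is high relative to Internet disutility, the Internet store serves the larger share.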

  • A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

    • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
      • Journal of Intelligence and Information Systems
      • /
      • v.23 no.4
      • /
      • pp.127-146
      • /
      • 2017
    • Services using artificial intelligence have begun to appear in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs used Kensho's artificial intelligence technology to improve its stock trading process: for example, two stock traders could handle the work of 600, and analytical work that took 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning is actively applied throughout the financial industry, and stock market analysis and investment modeling through machine learning theory are actively studied. The limits of linearity in financial time series studies are overcome by using machine learning approaches such as artificial intelligence prediction models. Quantitative studies of financial data based on past stock-market-related numerical data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of individual stock prices by learning from large amounts of text data such as news and comments related to the stock market. Investing in commodity assets, a class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio, yet there is relatively little research on investment models for commodity assets compared with mainstream assets like equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and driving change across the whole financial area.
In this study we built an investment model using the Support Vector Machine (SVM), one of the machine learning models. Some research on commodities focuses on price prediction for a specific commodity, but research on machine-learning investment models for commodity asset allocation is hard to find. We propose a method of forecasting four major commodity indices, a portfolio made of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index(GSCI), Dow Jones UBS Commodity Index(DJUI), Thomson Reuters/Core Commodity CRB Index(TRCI), and Rogers International Commodity Index(RI). We selected two individual futures from each of three sectors, energy, agriculture, and metals, that are actively traded on the CME market and have enough liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We made an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. Because commodity assets are closely related to macroeconomic activity, we set 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, as the input data of the model: 14 US, two Chinese, and two Korean economic indicators. The data period is from January 1990 to May 2017. We set the first 195 monthly observations as training data and the remaining 125 as test data. In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the other commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%.
The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors. An individual commodity futures portfolio excluding the energy sector outperformed the full three-sector individual futures portfolio. To verify the validity of the model, the analysis results should remain similar despite variations in the data period, so we also used odd-numbered-year data for training and even-numbered-year data for testing and confirmed that the results are similar. As a result, when allocating commodity assets within a traditional portfolio composed of stocks, bonds, and cash, more effective investment performance comes from investing in commodity futures rather than commodity indices, and especially from a rebalanced commodity futures portfolio designed by the SVM model.
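The forecasting setup described above, macroeconomic indicators in, next-month up/down direction out, with a chronological train/test split, can be sketched with scikit-learn. The data here is synthetic (random indicators with a planted linear signal), not the study's 19 real indicators, and the linear kernel is just one of the kernel choices the study compares.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the study's data: 320 months x 19 macro indicators,
# with the direction label driven by the first two indicators plus noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(320, 19))
signal = X[:, 0] - 0.5 * X[:, 1]
y = (signal + 0.3 * rng.normal(size=320) > 0).astype(int)  # next-month up/down

# Chronological split, as in the study: earlier months train, later months test.
X_train, y_train = X[:195], y[:195]
X_test, y_test = X[195:], y[195:]

model = SVC(kernel="linear").fit(X_train, y_train)
accuracy = model.score(X_test, y_test)   # out-of-sample directional accuracy
```

With a planted signal the accuracy is well above 50%; on real commodity data, as the abstract reports, directional accuracy hovers near 50% for indices and only slightly above it for the futures portfolio, which is why even a 53% edge matters.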

    Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

    • Park, Ho-yeon;Kim, Kyoung-jae
      • Journal of Intelligence and Information Systems
      • /
      • v.25 no.4
      • /
      • pp.141-154
      • /
      • 2019
    • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish low- and high-quality content from the text data about products, and it has proliferated with the growth of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied from various angles in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels, and is one of the most active research topics in natural language processing and text mining. Online reviews are easy to collect openly and directly affect a business: in marketing, real-world information from customers is gathered on websites rather than through surveys, and whether a website's posts are positive or negative is reflected in sales, so firms try to identify this information. However, many reviews on a website are not always good, and they are difficult to identify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set.
First, for text classification related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and Gradient Boosting were adopted as comparative models. Second, deep learning can extract discriminative, complex features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can process a sentence in vector format similarly to BoW, but does not consider the sequential attributes of the data. RNN handles order well because it takes the time information of the data into account, but suffers from long-term dependency problems; LSTM is used to solve them. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to figure out how well, and why, the models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can automatically extract features for classification by applying convolution layers with massively parallel processing, while LSTM is not capable of highly parallel processing. Like faucets, the LSTM's input, output, and forget gates control the flow of information at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for CNN's inability to capture long-term dependencies.
Furthermore, when LSTM is used on top of CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be designed simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, each word embedding layer can be improved by training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, with the added advantage of layer-wise learning through LSTM's end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
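The CNN-into-LSTM data flow described above can be sketched in toy form with numpy: a 1D convolution extracts local n-gram features from embedded tokens, pooling shortens the sequence, and a recurrent cell reads the pooled features in order. All weights here are random and the recurrent cell is a plain tanh RNN rather than a gated LSTM, so this shows only the shape of the pipeline, not a trained model.

```python
import numpy as np

# Toy CNN -> pooling -> recurrent read-out over an embedded "review".
rng = np.random.default_rng(1)
T, E, F, H = 12, 8, 6, 4        # tokens, embedding dim, conv filters, hidden size
x = rng.normal(size=(T, E))     # embedded review: T token vectors

# Width-3 convolution with ReLU: each output row summarizes a 3-token window.
conv_w = rng.normal(size=(3, E, F))
feat = np.stack([np.maximum(0.0, np.einsum("kef,ke->f", conv_w, x[t:t + 3]))
                 for t in range(T - 2)])        # shape (T-2, F) = (10, 6)

pooled = feat.reshape(5, 2, F).max(axis=1)      # max-pool adjacent pairs -> (5, F)

# Simple recurrent cell reads pooled features in order (an LSTM would add gates).
Wx, Wh = rng.normal(size=(F, H)), rng.normal(size=(H, H))
h = np.zeros(H)
for step in pooled:
    h = np.tanh(step @ Wx + h @ Wh)

p_positive = 1.0 / (1.0 + np.exp(-(h @ rng.normal(size=H))))  # sentiment probability
```

The end-to-end property the abstract mentions corresponds to training all three weight sets (convolution, recurrent, output) jointly by backpropagation, which this forward-only sketch omits.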

    A Study on Cognition and Investigation of Silla Tumuli in the Japanese Imperialistic Rule (일제강점기의 신라고분조사연구에 대한 검토)

    • Cha, Soon Chul
      • Korean Journal of Heritage: History & Science
      • /
      • v.39
      • /
      • pp.95-130
      • /
      • 2006
    • Japanese government college researchers, including Sekino Tadashi (關野貞), conducted research studies and collected data on Korean cultural relics overall, as well as Silla tumuli (新羅古墳), in the early modern period under Japanese imperialistic rule. They were supported by the Meiji government in the early stage of research, and by the Chosun government-general and its related organizations after Korea was colonized, to carry out investigations of Korean antiquities, fine arts, architecture, anthropology, folklore, and so on. The objective for which they pursued inquiries into Korean cultural relics, including Silla tumuli, may be attributed to the intent to find the data needed for a theoretical foundation to justify the colonization of Korea, and for this reason their work often showed locally biased or distorted views. Investigations and surveys had been incessantly carried out since 1886 by those Japanese scholars who took a keen interest in Korean tumuli and excavated relics. The 'Korea Architecture Survey Reports' compiled in 1904 by Sekino give a brief introduction to the contents of Korean tumuli, including the Five Royal Mausoleums (五陵). In 1906 Imanishi Ryu (今西龍) launched the first excavation surveys, of the Buksan Tumulus (北山古墳) in Sogeumgangsan (小金剛山) and of 'Namchong (南塚)' in Hwangnam-dong, which greatly contributed to the foundation of a basic understanding of wooden chamber tombs with stone mound (積石木槨墳) and stone chambers with tunnel entrance (橫穴式石室墳). The ground plan and cross section of stone chambers drawn in 1909 during the excavation survey of Seokchimchong (石枕塚) by Yazui Seiyichi (谷井第一), who majored in architecture, were the first such drawings made in an excavation survey in Korea, and their numerical expressions are sharply distinguished from the previous sketched ones. This kind of drawing continued in the following excavation surveys.
Imanishi and Yazui elucidated that wooden chamber tombs with stone mound chronologically differ from stone chambers with tunnel entrance, on the basis of surveys of the locational characteristics of Silla tumuli, the forms and sizes of tomb entrances, excavated relics, and so forth. The government-general put into force the 'Historic Spots and Relics Preservation Rules' and the 'Historic Spots Survey Council Regulations' in 1916, establishing the 'Historic Spots Survey Council' and the 'Museum Conference'. When museums initiated their activities, they exhibited relics excavated from tumuli and conducted surveys of relics with the permission of the Chosun government-general. The Gold Crown Tomb (金冠塚) was excavated and surveyed in 1921, and the Seobong Tomb (瑞鳳塚) in 1927; with these, large wooden chamber tombs with stone mound attracted strong public attention. Furthermore, a variety of surveys of spots throughout the country were carried out, but publication of the tumuli reports was not realized. Recently some researchers' endeavors have led to the publication of previously unpublished reports; the reason why reports on such significant tumuli as the Seobong Tomb had not been published may be ascribed to the limitations of those days. The Gyeongju Tumuli Distribution Chart made by Nomori Ken (野守健) on the basis of the land register in the late 1920s is of much significance in that it specifies the sizes and locations of 155 tumuli and shows the overall shape of the tumuli groups within the city, and it is still used in today's distribution charts. In the 1930s Arimitsu Kyoichi (有光敎一) and Saito Tadashi (齋藤忠) identified, through excavation surveys of many wooden chamber tombs with stone mound and stone chambers with tunnel entrance, that there were several forms of tombs within a single tomb system.
In particular, their experience in excavation surveys of wooden chamber tombs with stone mound, which were exposed in complicated and overlapping forms, shows features more developed than those of the preceding excavation surveys and report publications. This paper is the result of reviewing the contents of the many historic spots surveyed at that time; this reexamination is therefore considered a significant project in organizing the history of archaeology in Korea.

