• Title/Summary/Keyword: Paper machine


Analysis of Trading Performance on Intelligent Trading System for Directional Trading (방향성매매를 위한 지능형 매매시스템의 투자성과분석)

  • Choi, Heung-Sik;Kim, Sun-Woong;Park, Sung-Cheol
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.187-201 / 2011
  • The KOSPI200 index is a Korean stock price index consisting of 200 actively traded stocks in the Korean stock market. Its base value of 100 was set on January 3, 1990. The Korea Exchange (KRX) developed derivatives markets on the KOSPI200 index. The KOSPI200 index futures market, introduced in 1996, has become one of the most actively traded index futures markets in the world. Traders can profit by entering a long position on a KOSPI200 index futures contract if they expect the index to rise, and by entering a short position if they expect it to decline. KOSPI200 index futures trading is essentially a short-term zero-sum game, and therefore most futures traders rely on technical indicators. Advanced traders make stable profits by using system trading, also known as algorithmic trading. Algorithmic trading uses computer programs to receive real-time stock market data, analyze price movements with various technical indicators, and automatically enter trading orders (timing, price, and quantity) without human intervention. Recent studies have shown the usefulness of artificial intelligence systems in forecasting stock prices and investment risk. KOSPI200 index data is numerical time-series data: a sequence of data points measured at successive uniform intervals such as a minute, day, week, or month. KOSPI200 index futures traders use technical analysis to find patterns on the time-series chart. Although there are many technical indicators, their outputs indicate the market state as one of bull, bear, or flat. Most strategies based on technical analysis are divided into trend-following and non-trend-following strategies. Both decide the market state from the patterns of the KOSPI200 index time-series data, which fits naturally with a Markov model (MM).
Everybody knows that the next price will be higher than, lower than, or similar to the last price, and that the next price is influenced by the last price. However, nobody knows the exact state of the next price: whether it will go up, go down, or stay flat. A hidden Markov model (HMM) is therefore a better fit than an MM. HMMs are divided into discrete HMMs (DHMM) and continuous HMMs (CHMM); the only difference between them is the representation of state probabilities. A DHMM uses a discrete probability density function, while a CHMM uses a continuous one such as a Gaussian mixture model. KOSPI200 index values are real numbers following a continuous probability density function, so a CHMM is better suited than a DHMM for the KOSPI200 index. In this paper, we present an artificial intelligence trading system based on a CHMM for KOSPI200 index futures system traders. Traders have gained experience in technical trading ever since the introduction of the KOSPI200 index futures market, applying many strategies to make a profit. Some strategies are based on technical indicators such as moving averages or stochastics, and others are based on candlestick patterns such as three outside up, three outside down, harami, or doji star. We show a trading system implementing a moving-average cross strategy based on a CHMM and compare it to a traditional algorithmic trading system, setting the moving-average parameters to values commonly used by market practitioners. Empirical results compare the simulation performance with the traditional algorithmic trading system using more than 20 years of daily KOSPI200 index data. Our suggested trading system shows higher trading performance than naive system trading.
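The moving-average cross strategy used as the baseline can be sketched as follows. The window lengths and the crossing rule here are illustrative assumptions, not the parameter values used in the paper:

```python
# Sketch of a naive moving-average cross strategy on a price series.
# Window lengths (fast=5, slow=20 by default) are illustrative only.

def moving_average(prices, window):
    """Simple moving average; returns None until the window fills."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def cross_signals(prices, fast=5, slow=20):
    """Return (index, direction) pairs: +1 = long entry (golden cross),
    -1 = short entry (dead cross)."""
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    signals = []
    for i in range(1, len(prices)):
        if fast_ma[i - 1] is None or slow_ma[i - 1] is None:
            continue
        prev_diff = fast_ma[i - 1] - slow_ma[i - 1]
        curr_diff = fast_ma[i] - slow_ma[i]
        if prev_diff <= 0 < curr_diff:
            signals.append((i, +1))   # fast MA crosses above slow MA
        elif prev_diff >= 0 > curr_diff:
            signals.append((i, -1))   # fast MA crosses below slow MA
    return signals
```

A CHMM-based system would replace this fixed crossing rule with a market-state estimate; the sketch above only shows the naive baseline being compared against.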

A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.43-57 / 2012
  • To maintain a competitive advantage in a constantly changing business environment, management must make the right decisions in many business activities based on both internal and external information, so providing accurate information plays a prominent role in decision making. Intuitively, historical data can provide a feasible estimate through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as personnel, parts, and other facilities, and the production department can build a load map for improving product quality. Obtaining an accurate service forecast therefore appears critical to manufacturing companies. Numerous investigations of this problem have generally employed statistical methods, such as regression or autoregressive and moving-average models. However, these methods are only effective for data that are seasonal or cyclical; if the data are dominated by the special characteristics of the product, they are not feasible. In our research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining case-based reasoning (CBR) with an unsupervised artificial neural network based clustering analysis (Self-Organizing Maps; SOM). We believe this is one of the first attempts at applying unsupervised artificial neural network based machine-learning techniques in the service forecasting domain. Our proposed approach has several appealing features: (1) we applied CBR and SOM in a new forecasting domain, service demand forecasting.
(2) We combined CBR and SOM to overcome the limitations of traditional statistical forecasting methods, and we developed a service forecasting tool based on the proposed approach. We conducted an empirical study on a real digital TV manufacturer (Company A), empirically evaluating the proposed approach and tool using real sales and service-related data. In our experiments, we compared the performance of our framework against two other service forecasting methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecast 144 times; each time, input data were randomly sampled for each framework. To evaluate forecasting accuracy, we used Mean Absolute Percentage Error (MAPE) as the primary performance measure. We conducted a one-way ANOVA test with the 144 MAPE measurements for the three approaches: the F-ratio is 67.25 with a p-value of 0.000, meaning the difference among the MAPEs of the three approaches is statistically significant. Since a significant difference exists, we conducted Tukey's HSD post hoc test to determine exactly which MAPE means differ significantly from which others.
In terms of MAPE, Tukey's HSD post hoc test grouped the three service forecasting approaches into three different subsets in the following order: our proposed approach > the traditional CBR-based approach > the existing approach used by Company A. Consequently, our empirical experiments show that our proposed approach outperformed both the traditional CBR-based forecasting model and the existing service forecasting model used by Company A. The rest of this paper is organized as follows. Section 2 provides research background, including summaries of CBR and SOM. Section 3 presents a hybrid service forecasting framework based on case-based reasoning and Self-Organizing Maps, and the empirical evaluation results are summarized in Section 4. Conclusions and future research directions are discussed in Section 5.
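The MAPE measure used as the primary performance metric above is standard; a minimal implementation (in percent, as reported by the authors):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent.

    Assumes no actual value is zero (MAPE is undefined there).
    """
    if len(actual) != len(forecast) or not actual:
        raise ValueError("series must have the same non-zero length")
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)
```

For example, forecasts of 110 and 180 against actuals of 100 and 200 are each 10% off, giving a MAPE of 10%.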

An Identity Analysis of Mechanic Design through the Japanese Animation <Neon Genesis Evangelion> (일본 애니메이션<신세기 에반게리온>으로 본 메카닉 디자인의 정체성 분석)

  • Lee, Jong-Han;Liu, Si-Jie
    • Cartoon and Animation Studies / s.50 / pp.275-297 / 2018
  • Japan's mechanic animation is widely known throughout the world. Japan's first mechanic animation and first TV animation, <Astro Boy (Tetsuwan Atom)>, has been popular since its creation in 1952; Atom, a big hit at the time, influenced many people. Japanese mechanic animations convey their unique traits and worldview to the public. In this paper, we discuss the change in Japanese mechanic design by comparing the mechanic design of <Neon Genesis Evangelion>, which has been booming in Japan since the 1990s, with earlier works. I expect the results of this analysis to depict the Japanese culture and thought reflected in animation, which is a good indication of the worldwide cultural view of animation. <Neon Genesis Evangelion> unexpectedly influenced the Japanese animation industry after it screened in 1995, and there are still people constantly reinterpreting and analyzing it. This reflects the audience's fascination with the mysteries and open-ended conclusions of the work itself. The design elements of Evangelion are distinguished from other mechanic objects. Mechanic design based on human biology can overcome the limitations of the machine and feel more human. The pilot's boarding structure, which can contain human nature, is reinforced in the form of the entry plug, and the unit's posture makes humanity more prominent than a rigid robot would. Thus, <Neon Genesis Evangelion> pursues a mechanic design that can reflect human identity. The representative mechanic animation of the 1980s and <Neon Genesis Evangelion> of the 1990s show completely different designs. By comparing the mechanic design of the two works, we examine the correlation between the message of a work and its design. <Neon Genesis Evangelion> presents the close relationship between the identity of mechanic design and the content. I would like to point out that mechanic design can be a good example and theoretical basis for the future.

In Vitro Evaluation of Shear Bond Strengths of Zirconia Ceramic with Various Types of Cement after Thermocycling on Bovine Dentin Surface (지르코니아 표면 처리와 시멘트 종류에 따른 치면과의 전단 결합 강도 비교 연구)

  • Cho, Soo-Hyun;Cho, In-Ho;Lee, Jong-Hyuk;Nam, Ki-Young;Kim, Jong-Bae;Hwang, Sang-Hee
    • Journal of Dental Rehabilitation and Applied Science / v.23 no.3 / pp.249-257 / 2007
  • Statement of problem: The use of zirconium oxide all-ceramic material provides several advantages, including high flexural strength (>1000 MPa) and desirable optical properties, such as shade adaptation to the basic shades and a reduction in layer thickness. Along with the strength of the material, the cementation technique is also important to the clinical success of a restoration. Nevertheless, little information is available on the effect of different surface treatments on the bonding of zirconium high-crystalline ceramics and resin luting agents. Purpose: The aim of this study was to test the effects of surface treatments of zirconia on shear bond strengths between bovine teeth and a zirconia ceramic, and to evaluate differences among cements. Material and methods: 54 sound bovine teeth, extracted within 1 month, were used. They were frozen in distilled water, rinsed with tap water to confirm that no granulation tissue remained, and kept refrigerated at 4°C until tested. Each tooth was placed horizontally in a plastic cylinder (diameter 20 mm) and embedded in epoxy resin. Teeth were sectioned with diamond burs to expose dentin and ground with #600 silicon carbide paper. To make sure no enamel was left, each was observed under an optical microscope. 54 prefabricated zirconium oxide ceramic copings (Lava, 3M ESPE, USA) were assigned to 3 groups: control, airborne-abraded with 110 μm Al2O3, and scratched with diamond burs in 4 directions. They were cemented with a seating force of 10 kg per tooth, using a resin luting cement (Panavia F®), a resin cement (Superbond C&B®), and a resin-modified GI cement (Rely X Luting®). Specimens were thermocycled between 5°C and 55°C for 5000 cycles with a 30-second dwell time, and shear bond strength was then determined in a universal testing machine (Model 4200, Instron Co., Canton, USA) at a crosshead speed of 1 mm/min.
The results were analyzed with one-way analysis of variance (ANOVA) and the Tukey test at a significance level of P<0.05. Results: Superbond C&B® with diamond-bur scratching showed the highest shear bond strength (p<.05). For Panavia F®, the scratched and sandblasted groups showed significantly higher shear bond strength than the control group (p<.05). For Rely X Luting®, a significant difference in shear bond strength was observed only between the scratched and control groups (p<.05). Conclusion: Within the limitations of this study, Superbond C&B® showed clinically acceptable shear bond strength between bovine teeth and zirconia ceramic regardless of surface treatment. Among the surface treatments, scratching increased shear bond strength, while the increase from sandblasting with 110 μm Al2O3 was not statistically significant.

P300 speller using a new stimulus presentation paradigm (새로운 자극제시방법을 사용한 P300 문자입력기)

  • Eom, Jin-Sup;Yang, Hye-Ryeon;Park, Mi-Sook;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.16 no.1 / pp.107-116 / 2013
  • In the implementation of a P300 speller, the rows-and-columns paradigm (RCP) is most commonly used. However, the RCP remains subject to adjacency-distraction errors and the double-flash problem. This study suggests a novel P300 speller stimulus presentation, the sub-block paradigm (SBP), which is likely to solve these problems effectively. Fifteen subjects participated in an experiment in which both SBP and RCP were used to implement the P300 speller. Electroencephalography (EEG) activity was recorded from Fz, Cz, Pz, Oz, P3, P4, PO7, and PO8. Each paradigm consisted of a training phase to train a classifier and a testing phase to evaluate the speller. Eighteen characters were used as target stimuli in the training phase; in the testing phase, 5 subjects were required to spell 50 characters and the rest 25 characters. Classification results show that average accuracy was significantly higher with SBP (83.73%) than with RCP (66.40%). Grand mean event-related potentials (ERPs) at Pz show that the positive peak amplitude for the target stimuli was greater with SBP than with RCP, indicating that subjects tended to attend more to the characters in SBP. According to the participants' ratings of how comfortable each paradigm was on a 7-point Likert scale, most subjects responded 'very difficult' for RCP while responding 'medium' or 'easy' for SBP, showing that SBP felt more comfortable than RCP. In sum, SBP was both more accurate in P300 speller performance and more convenient for users than RCP. Limitations of the study are discussed in the last part of this paper.


Effects of laser-irradiated dentin on shear bond strength of composite resin (레이저 처리가 상아질과 복합 레진의 결합에 미치는 영향)

  • Kim, Sung-Sook;Park, Jong-Il;Lee, Jae-In;Kim, Gye-Sun;Cho, Hye-Won
    • The Journal of Korean Academy of Prosthodontics / v.46 no.5 / pp.520-527 / 2008
  • Purpose: This study was conducted to evaluate the shear bond strength of composite resin to dentin etched with a laser instead of phosphoric acid. Material and methods: Forty recently extracted molars, completely free of dental caries, were embedded in acrylic resin. After exposing dentin with a diamond saw, the tooth surfaces were polished with a series of SiC papers. The teeth were divided into four groups of 10 specimens each: 1) no surface treatment (control); 2) acid-etched with 35% phosphoric acid; 3) Er:YAG laser treated; 4) Er,Cr:YSGG laser treated. A dentin bonding agent (Adper Single Bond 2, 3M/ESPE) was applied to the specimens, and transparent plastic tubes (3 mm in height and diameter) were placed on each dentin surface. The composite resin was inserted into the tubes and cured. All specimens were stored in distilled water at 37°C for 24 hours, and the shear bond strength was measured using a universal testing machine (Z020, Zwick, Germany). The bond strength data were statistically analyzed by one-way ANOVA and Duncan's test at α = 0.05. Results: The bond strength of the Er:YAG laser-treated group was 3.98 ± 0.88 MPa, and the Er,Cr:YSGG laser-treated group showed 3.70 ± 1.55 MPa; there was no significant difference between the two laser groups. The control group showed the lowest bond strength, 1.52 ± 0.42 MPa, and the highest shear bond strength was found in the acid-etched group, 7.10 ± 1.86 MPa (P < .05). Conclusion: The laser-etched groups exhibited significantly higher bond strength than the control group, though still weaker than the phosphoric acid-etched group.

An Implementation Method of the Character Recognizer for the Sorting Rate Improvement of an Automatic Postal Envelope Sorting Machine (우편물 자동구분기의 구분율 향상을 위한 문자인식기의 구현 방법)

  • Lim, Kil-Taek;Jeong, Seon-Hwa;Jang, Seung-Ick;Kim, Ho-Yon
    • Journal of Korea Society of Industrial Information Systems / v.12 no.4 / pp.15-24 / 2007
  • The recognition of postal address images is indispensable for the automatic sorting of postal envelopes. Address image recognition is composed of three steps: address image preprocessing, character recognition, and address interpretation. The character images extracted in the preprocessing step are forwarded to the character recognition step, in which multiple candidate characters with reliability scores are obtained for each extracted character image. Utilizing those candidates and scores, the final valid address for the input envelope image is obtained through the address interpretation step. The envelope sorting rate depends on the performance of all three steps, among which character recognition is particularly important: a good character recognizer produces valid candidates with reliable scores that make address interpretation easy. In this paper, we propose a method of generating character candidates with reliable recognition scores. We utilize the existing MLP (multilayer perceptron) neural network of the address recognition system in current automatic postal envelope sorters as the classifier for each image from the preprocessing step. The MLP is well known to be one of the best classifiers in terms of processing speed and recognition rate. However, false alarms may occur in the recognition results, which makes address interpretation difficult. To ease address interpretation and improve the envelope sorting rate, we propose two promising methods to re-estimate the recognition score (confidence) of the existing MLP classifier: generating statistical recognition properties of the classifier, and combining the MLP with a subspace classifier that serves as a confidence re-estimator.
To confirm the superiority of the proposed method, we used character images from real postal envelopes collected from sorters in post offices. The experimental results show that the proposed method produces highly reliable scores, in terms of error and rejection rates, for both individual characters and non-characters.
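The general idea of re-estimating an MLP's confidence with a second (subspace-style) source of evidence can be sketched as follows. The blending rule, the weight `alpha`, and all function names here are illustrative assumptions, not the combination scheme actually proposed in the paper:

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def reestimate_confidence(mlp_scores, subspace_residuals, alpha=0.5):
    """Blend MLP class evidence with subspace-classifier evidence.

    A small projection residual onto a class subspace is strong
    evidence for that class, so residuals are negated before the
    softmax. alpha weights the MLP term; 0.5 is an arbitrary
    illustrative choice.
    """
    p_mlp = softmax(mlp_scores)
    p_sub = softmax([-r for r in subspace_residuals])
    return [alpha * a + (1 - alpha) * b for a, b in zip(p_mlp, p_sub)]
```

A candidate whose MLP score is high but whose subspace residual is large would see its blended confidence pulled down, which is the kind of false-alarm suppression the abstract describes.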


Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvements and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVMs. In that research, DT ensemble studies have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown comparably remarkable performance. Recently, several works have reported that ensemble performance can be degraded when the classifiers of an ensemble are highly correlated, resulting in a multicollinearity problem that degrades the ensemble; they have also proposed differentiated learning strategies to cope with this problem. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms but does not show remarkable improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data: small changes in the training data can yield large changes in the generated classifiers. Therefore, ensembles of unstable learning algorithms can guarantee some diversity among the classifiers.
By contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which degrades ensemble performance. Kim's work (2009) compared the performance of traditional prediction algorithms such as NN, DT, and SVM in bankruptcy prediction on Korean firms. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, while in ensemble learning the DT ensemble shows more improvement than the NN and SVM ensembles. Further analysis with variance inflation factors (VIF) empirically shows that the performance degradation of the ensemble is due to multicollinearity, and proposes that the ensemble be optimized to cope with this problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve NN ensemble performance. Coverage optimization chooses a sub-ensemble from an original ensemble to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to handle the coverage optimization problem. The GA chromosomes are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the standard measures of multicollinearity, is added to ensure classifier diversity by removing high correlation among the classifiers. We used Microsoft Excel and the GA software package Evolver.
Experiments on company failure prediction have shown that CO-NN achieves stable performance enhancement of NN ensembles by choosing classifiers with the correlations of the ensemble in mind. Classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN shows higher performance than a single NN classifier and the full NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies for dealing with data noise should be introduced in future research.
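The VIF constraint used in the fitness function can be computed directly from the ensemble members' outputs. This is a generic VIF implementation (regress each column on the others and take 1/(1-R²)), not the paper's Excel/Evolver setup:

```python
import numpy as np

def vif(outputs):
    """Variance inflation factor of each classifier's output column.

    outputs: (n_samples, n_classifiers) array of member predictions.
    VIF_i = 1 / (1 - R_i^2), where R_i^2 comes from regressing
    column i on all remaining columns plus an intercept.
    """
    X = np.asarray(outputs, dtype=float)
    n, k = X.shape
    vifs = []
    for i in range(k):
        y = X[:, i]
        others = np.delete(X, i, axis=1)
        A = np.column_stack([np.ones(n), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        ss_res = float(resid @ resid)
        ss_tot = float(((y - y.mean()) ** 2).sum())
        r2 = 1.0 - ss_res / ss_tot
        vifs.append(1.0 / (1.0 - r2) if r2 < 1.0 else float("inf"))
    return vifs
```

Near-uncorrelated members give VIFs near 1, while nearly duplicated members give very large VIFs, which is why bounding the VIF in the GA fitness enforces diversity.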

The Excluded from Public Pension : Problem, Cause and Policy Measures (공적연금의 사각지대 : 실태, 원인과 정책방안)

  • Seok, Jae-Eun
    • Korean Journal of Social Welfare / v.53 / pp.285-310 / 2003
  • As the National Pension Scheme was completed in 1999 by expanding coverage to urban areas, public pensions, including public occupational pensions, became the main axis of old-age income maintenance. Four years later, however, only half of all National Pension insured persons have qualified to receive a pension through participation and contribution; the other half remain excluded from the public pension. This paper identifies the scale and characteristics of those excluded from the public pension, analyzes the causes, and explores policy measures to solve the problem. For the current recipient generation over 60 years old, the excluded amount to no less than 86% of those over 60, and the probability of exclusion is highest for the older elderly and for women. For the future recipient generation of workers aged 18-59, the excluded amount to no less than 61% of the total population aged 18-59, and the probability of exclusion is highest for those aged 18-29 and for women. A logistic regression analysis of the determinants of paying pension contributions shows that, for the current working generation, the probability of exclusion is higher for the young, for those with lower educational attainment, and for irregular employees; for those working in the agriculture-forestry-fishery, construction, wholesale-retail trade, restaurants and hotels, and finance, insurance, real estate, renting, and leasing sectors compared with manufacturing; and for those in elementary occupations, professional/technician and associate professional occupations, sales and service work, plant and machine operation and assembly, and legislator/senior official/manager occupations compared with clerks.
Policy measures for the current recipient generation need to reinforce the supplementary role of the Seniors' Pension (a non-contributory pension) until the public pension matures, because this generation has had no chance to participate in the public pension. Policy measures for the future recipient working generation need to restructure social security fundamentally, corresponding to socio-economic changes in the labour market and family structure. The pension system needs to shift from one-earner-one-pension to one-citizen-one-pension grounded in citizenship rights. At that point, the public pension should be managed by combining the insurance contribution principle with a citizenship principle financed by taxes. Then the public pension will become a substantially universal social safety net for old-age income maintenance, and a real solution for the excluded can be found.


Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong;Shin, DongIl;Shin, DongKyoo;Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.271-278 / 2021
  • As cyber attacks become more intelligent, existing intrusion detection systems have difficulty detecting intelligent attacks that deviate from stored patterns. To address this, deep learning-based intrusion detection models that learn the patterns of intelligent attacks from data have emerged. Intrusion detection systems are divided into host-based and network-based systems depending on the installation location. Unlike network-based systems, host-based intrusion detection systems have the disadvantage of having to observe the whole inside and outside of the system, but they have the advantage of being able to detect intrusions that a network-based system cannot. Therefore, this study focuses on a host-based intrusion detection system. To evaluate and improve the performance of the model, we used the host-based Leipzig Intrusion Detection Data Set (LID-DS), published in 2018. In the performance evaluation, the 1D vector data are converted into 3D image data so that the similarity of samples can be assessed and normal and abnormal data distinguished. Deep learning models also have the drawback of needing to be retrained whenever a new cyber attack method appears, which is inefficient because learning a large amount of data takes a long time. To solve this problem, this paper proposes a Siamese Convolutional Neural Network (Siamese-CNN) using Few-Shot Learning, which performs well while learning from small amounts of data. Siamese-CNN determines whether attacks are of the same type from the similarity score between samples of cyber attacks converted into images.
Accuracy was calculated using the Few-Shot Learning technique, and the performance of a Vanilla Convolutional Neural Network (Vanilla-CNN) was compared with that of Siamese-CNN. Measuring Accuracy, Precision, Recall, and F1-Score, we confirmed that the recall of the proposed Siamese-CNN model improved by about 6% over the Vanilla-CNN model.
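The Siamese decision rule described above — embed both inputs with a shared network, then threshold a similarity score on the embedding distance — can be sketched in miniature. The linear-plus-ReLU branch, the exp(-distance) score, and the 0.5 threshold are illustrative stand-ins for the paper's trained CNN, not its actual architecture:

```python
import numpy as np

def embed(x, W):
    """Shared embedding branch: one linear layer + ReLU (illustrative)."""
    return np.maximum(W @ x, 0.0)

def similarity_score(x1, x2, W):
    """Siamese-style score in (0, 1]; 1 means identical embeddings."""
    d = np.linalg.norm(embed(x1, W) - embed(x2, W))
    return float(np.exp(-d))

def same_attack_type(x1, x2, W, threshold=0.5):
    """Decide 'same class' by thresholding the similarity score."""
    return similarity_score(x1, x2, W) >= threshold
```

Because only the pairwise decision is learned, a new attack type can be recognized from a handful of support samples without retraining the whole model, which is the Few-Shot advantage the abstract emphasizes.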