• Title/Summary/Keyword: matching number


Comparison of Laparoscopic and Open Gastrectomy for Patients With Gastric Cancer Treated With Neoadjuvant Chemotherapy: A Multicenter Retrospective Study Based on the Korean Gastric Cancer Association Nationwide Survey

  • Seul Ki Oh;Chang Seok Ko;Seong-A Jeong;Jeong Hwan Yook;Moon-Won Yoo;Beom Su Kim;In-Seob Lee;Chung Sik Gong;Sa-Hong Min;Na Young Kim;the Information Committee of the Korean Gastric Cancer Association
    • Journal of Gastric Cancer / v.23 no.3 / pp.499-508 / 2023
  • Purpose: Despite scientific evidence regarding laparoscopic gastrectomy (LG) for advanced gastric cancer treatment, its application in patients receiving neoadjuvant chemotherapy remains uncertain. Materials and Methods: We used the 2019 Korean Gastric Cancer Association nationwide survey database to extract data from 489 patients with primary gastric cancer who received neoadjuvant chemotherapy. After propensity score matching analysis, we compared the surgical outcomes of 97 patients who underwent LG and 97 patients who underwent open gastrectomy (OG). We investigated the risk factors for postoperative complications using multivariate analysis. Results: The operative time was significantly shorter in the OG group. Patients in the LG group had significantly less blood loss than those in the OG group. Hospital stay and overall postoperative complications were similar between the two groups. The incidence of Clavien-Dindo grade ≥3 complications in the LG group was comparable with that in the OG group (1.03% vs. 4.12%, P=0.215). No statistically significant difference was observed in the number of harvested lymph nodes between the two groups (38.60 vs. 35.79, P=0.182). Multivariate analysis identified body mass index (odds ratio [OR], 1.824; 95% confidence interval [CI], 1.029-3.234; P=0.040) and extent of resection (OR, 3.154; 95% CI, 1.084-9.174; P=0.035) as independent risk factors for overall postoperative complications. Conclusions: Using a large nationwide multicenter survey database, we demonstrated that LG and OG had comparable short-term outcomes in patients with gastric cancer who received neoadjuvant chemotherapy.
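
The study above compares groups after propensity score matching and then reports odds ratios from a multivariate model. As a purely illustrative aid (not the authors' code or dataset; the column names lg, bmi, cstage, complication, and total_gastrectomy are hypothetical), the sketch below shows greedy 1:1 caliper matching on an estimated propensity score, followed by a logistic model from which odds ratios and confidence intervals could be read off.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df, treatment_col, covariates, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on an estimated propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment_col])
    df = df.assign(ps=ps.predict_proba(df[covariates])[:, 1])
    treated = df[df[treatment_col] == 1]
    control = df[df[treatment_col] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        diffs = (control["ps"] - row["ps"]).abs()
        if diffs.empty:
            break
        best = diffs.idxmin()
        if diffs[best] <= caliper:
            pairs.append((idx, best))
            control = control.drop(index=best)        # match without replacement
    return pairs

# pairs = match_one_to_one(patients, "lg", ["age", "bmi", "cstage"])   # hypothetical frame
# Odds ratios for complications could then come from a logistic model on the matched sample:
# matched = patients.loc[[i for pair in pairs for i in pair]]
# fit = sm.Logit(matched["complication"],
#                sm.add_constant(matched[["bmi", "total_gastrectomy", "lg"]])).fit()
# odds_ratios, ci = np.exp(fit.params), np.exp(fit.conf_int())
```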

An application of MMS in precise inspection for safety and diagnosis of road tunnel (도로터널에서 MMS를 이용한 정밀안전진단 적용 사례)

  • Jinho Choo;Sejun Park;Dong-Seok Kim;Eun-Chul Noh
    • Journal of Korean Tunnelling and Underground Space Association / v.26 no.2 / pp.113-128 / 2024
  • Items of road tunnel PISD (Precise Inspection for Safety and Diagnosis) were reviewed and analyzed using newly enhanced MMS (Mobile Mapping System) technology. Items that MMS can cover include visual inspection, survey and non-destructive testing, structural analysis, and maintenance planning. The resolution of the 3D point cloud decreases when the MMS vehicle speed is too high, whereas the calibration error increases when it is too low; a measurement speed of 50 km/h was found to be effective in this study. Although the image resolution of MMS is too limited to evaluate crack width with high precision, the images can be used to identify the status of facilities in the tunnel and to determine whether they meet the tunnel disaster-prevention management code. The MMS 3D point cloud can be used for cross-section matching and for tracking variation along the longitudinal survey, which allows vehicle clearance to be checked intuitively throughout the road tunnel. Whereas the current PISD samples the number of tests and survey locations more or less at random, continuous measurement of the tunnel environment with MMS can be effective and meaningful for precise estimation in various analyses.

Lineament and Fault-related Landforms of the Western Chungcheongnamdo (충남 서부지역의 선형구조와 단층지형)

  • Tae-Suk Kim;Cho-Hee Lee;Yeong Bae Seong
    • Journal of the Korean earth science society / v.45 no.3 / pp.224-238 / 2024
  • This study analyzed lineaments and fault-related landforms in Chungcheongnam-do, central Korean Peninsula, based on historical and instrumental records, given its susceptibility to future earthquakes. We extracted 151 lineaments associated with fault-related landforms. In regions with the Dangjin and Yesan faults, lineaments with strikes matching these faults were densely distributed. Conversely, in the Hongseong Fault area, the number of lineaments was smaller, and those with strikes similar to the fault were less discernible. This is likely due to the extensive distribution of alluvium and surface deformation from long-term weathering, erosion, and cultivation, which obscures geomorphic evidence of faults. At five key fault points, we identified fault-related landforms, such as fault saddles, knickpoints in Quaternary alluvium, and linear valleys, along the lineament, which may indicate an actual fault. However, the displacements of the Quaternary layer within the lineaments appear to be influenced more by external factors, such as artificial disturbances (e.g., cultivation) or stream erosion, than by direct fault movement. The differences between the fault-related landforms in this study area and those in the southeastern Korean Peninsula suggest a specific relationship between fault types and their associated landforms.

Matching prediction on Korean professional volleyball league (한국 프로배구 연맹의 경기 예측 및 영향요인 분석)

  • Heesook Kim;Nakyung Lee;Jiyoon Lee;Jongwoo Song
    • The Korean Journal of Applied Statistics / v.37 no.3 / pp.323-338 / 2024
  • This study analyzes the Korean professional volleyball league and predicts match outcomes using popular machine learning classification methods. Match data, including match details, were collected from the 2012/2013 to 2022/2023 seasons for both the men's and women's leagues. Two data structures were fed to the models: match results separated by team, and performance differentials between the home and away teams. These two structures were used to construct a total of four predictive models covering both leagues. Because the variable values used in the models are unavailable until a match ends, the results of the most recent three to four matches, up to just before the match to be predicted, were preprocessed and used as predictors. Logistic Regression, Decision Tree, Bagging, Random Forest, XGBoost, AdaBoost, and LightGBM were employed for classification, and the Random Forest model showed the highest predictive performance. The results indicated that while the significant variables varied by gender and data structure, set success rate, blocking points scored, and number of faults were consistently important. Notably, the distinctiveness of the win-loss prediction model lies in its ability to provide pre-match forecasts rather than post-event predictions.
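
As a rough illustration of the pre-match setup described above (not the authors' code; the feature names and the file kovo_matches.csv are hypothetical), the sketch below builds home-minus-away differential features from pre-computed rolling statistics of the previous three matches and fits the Random Forest classifier that performed best in the study.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# One row per match. The pre-match features are assumed to be rolling means of
# each team's previous three matches, computed beforehand; the differential
# columns mirror the second data structure described in the abstract.
def add_differentials(matches: pd.DataFrame, stats):
    for s in stats:
        matches[f"{s}_diff"] = matches[f"home_{s}"] - matches[f"away_{s}"]
    return matches

stats = ["set_success_rate", "blocking_points", "faults"]             # hypothetical names
matches = add_differentials(pd.read_csv("kovo_matches.csv"), stats)   # hypothetical file

X = matches[[f"{s}_diff" for s in stats]]
y = matches["home_win"]                        # 1 if the home team won the match

# Keep chronological order so training never peeks at later seasons.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```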

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.17 no.2 / pp.39-60 / 2011
  • Since the introduction of Web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of Web 2.0 has changed who creates content: in the earlier web, content creators were service providers, whereas in the recent web they are the service users themselves. Users share experiences with other users and improve content quality, which has increased the importance of social networks. As a result, diverse forms of social network service have emerged from the relations and experiences of users. A social network is a network that constructs and expresses social relations among people who share interests and activities. Today's social network services are not confined to showing user interactions; they have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated from social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. First, the representational power of objects in the social network is insufficient. Second, the diverse connections among users cannot be expressed adequately. Third, it is difficult to reflect dynamic change in the social network caused by changes in user interests. Lastly, there is no method capable of integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved by using FOAF, a tool for describing ontology-based user profiles for the construction of social networks. Solving the second and third problems, however, requires a novel technique that reflects dynamic changes in user interests and relations. In this paper, we propose a method that overcomes these problems of existing social network extraction by applying FOAF (a tool for describing user profiles) and RSS (a publishing mechanism for web content) to an OLAP system in order to dynamically update and manage FOAF. We exploit data interoperability, an important characteristic of FOAF, and use RSS to reflect changes over time and in user interests; RSS provides a standard vocabulary for distributing site and content information in RDF/XML form. We collect personal information and relations of users with FOAF, collect user content with RSS, and insert the collected data into a database organized as a star schema. The proposed system generates an OLAP cube from the data in the database, and the cube is processed by the Dynamic FOAF Management Algorithm, which consists of two functions: find_id_interest() and find_relation(). find_id_interest() extracts user interests during the input period, and find_relation() extracts users matching those interests. Finally, the system reconstructs FOAF by reflecting the extracted relationships and interests. To justify the suggested idea, we present the implemented result together with its analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com as input. The implemented result shows that users' foaf:interest entries increased by an average of 19 percent over four weeks, and in proportion to this change, the number of users' foaf:knows links grew by an average of 9 percent over the same period. Because FOAF and RSS, which are widely supported in Web 2.0 and social network services, are used as the base data, the method has a clear advantage in utilizing user data distributed across diverse web sites and services regardless of language and computer type. Using the suggested method, better services can be provided that cope with rapid changes in user interests through the automatic application of FOAF.
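
The abstract names two functions, find_id_interest() and find_relation(), that operate on the OLAP cube built from the star schema. The sketch below is only a rough Python approximation of their behavior over a flat fact table; the original system was written in C# against MS-SQL, and the column names user_id, week, and interest are hypothetical.

```python
import pandas as pd

# facts: one row per RSS item, e.g. columns user_id, week, interest; roughly the
# fact table from which the paper's OLAP cube is aggregated.
def find_id_interest(facts: pd.DataFrame, user_id, start_week, end_week, top_n=5):
    """Top interests of one user during the input period (a slice of the cube)."""
    window = facts[(facts.user_id == user_id)
                   & facts.week.between(start_week, end_week)]
    counts = window.groupby("interest").size().sort_values(ascending=False)
    return list(counts.head(top_n).index)

def find_relation(facts: pd.DataFrame, interests, exclude_user=None):
    """Users whose items match the given interests (candidate foaf:knows links)."""
    hits = facts[facts.interest.isin(interests)]
    if exclude_user is not None:
        hits = hits[hits.user_id != exclude_user]
    return sorted(hits.user_id.unique())

# interests = find_id_interest(facts, "alice", start_week=1, end_week=4)
# new_links = find_relation(facts, interests, exclude_user="alice")
# The reconstructed FOAF would then gain these as foaf:interest / foaf:knows entries.
```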

Social Tagging-based Recommendation Platform for Patented Technology Transfer (특허의 기술이전 활성화를 위한 소셜 태깅기반 지적재산권 추천플랫폼)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.53-77 / 2015
  • Korea has witnessed an increasing number of domestic patent applications, but a majority of them are not utilized to their maximum potential and end up becoming obsolete. According to the 2012 National Congress' Inspection of Administration, about 73% of the patents held by universities and publicly funded research institutions failed to create social value and remain latent. One of the main causes is that patent creators such as individual researchers, universities, or research institutions lack the ability to commercialize their patents into viable businesses with the enterprises that need them, while on the enterprise side it is hard to find the appropriate patents by keyword search alone. This study proposes a patent recommendation system that can identify and recommend intellectual property rights relevant to a user's fields of interest, among a rapidly accumulating pool of patent assets, in an easier and more efficient manner. The proposed system extracts core contents and technology sectors from the existing pool of patents and combines them with secondary social knowledge derived from tag information created by users in order to find the best patents to recommend. In an early stage, when no tag information has accumulated, recommendation relies on content characteristics, identified by analyzing keywords contained in attributes such as 'Title of Invention' and 'Claims'. To do this, the system extracts only nouns from the patents, assigns each noun a weight according to its importance across all patents using TF-IDF analysis, and then finds patents whose weights are similar to those of the patents preferred by a user; this similarity is called the 'Domain Similarity'. Next, the system extracts each technology sector's characteristics from the patent documents by analyzing the International Patent Classification (IPC) codes. Every patent has one or more IPC codes, and each user can attach tags to the patents they like, so each user has a set of IPC codes contained in the tagged patents. The system uses this IPC set to analyze the technology preference of each user and calculates a 'Technology Similarity' between the user's IPC set and the IPC codes of all other patents. Then, once tag information from multiple users has accumulated, the system expands the recommendations by considering other users' social tags relating to the patents tagged by the target user; the similarity between the tag information of the user's preferred patents and other patents is called the 'Social Similarity'. Lastly, a 'Total Similarity' is calculated by adding these three similarities, and the patents with the highest Total Similarity are recommended to each user. The suggested system was applied to a total of 1,638 Korean patents obtained from the Korea Industrial Property Rights Information Service (KIPRIS) run by the Korea Intellectual Property Office. Since this original dataset does not include tag information, virtual tag information was created and used to construct a semi-virtual dataset. The proposed recommendation algorithm was implemented in the JAVA programming language, and a prototype graphical user interface was also designed for this study. Because the proposed system has no dependent variable and uses virtual data, it cannot be verified with a statistical method; therefore, the study uses a scenario test to verify the operational feasibility and recommendation effectiveness of the system. The results of this study are expected to improve the chances of matching promising patents with the most suitable businesses, and users' experiential knowledge can be accumulated, managed, and utilized in the current patent system, which at present manages only standardized patent information.
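
To make the scoring concrete, the sketch below combines the three similarities described above into a 'Total Similarity'. It is only an illustration: the TF-IDF/cosine step follows the abstract, but the abstract does not give exact formulas for the Technology and Social similarities, so simple set-overlap (Jaccard) measures stand in for them here, and all variable names are hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(patent_texts, patent_ipcs, patent_tags, liked_idx, user_tags, top_n=5):
    # Domain Similarity: TF-IDF over the patent text (the paper uses nouns from
    # 'Title of Invention' and 'Claims'), cosine against the user's liked patents.
    tfidf = TfidfVectorizer().fit_transform(patent_texts)
    domain = cosine_similarity(tfidf[liked_idx], tfidf).mean(axis=0)

    # Technology Similarity: overlap between IPC codes of liked patents and each
    # candidate patent (set overlap used here only as a placeholder measure).
    user_ipcs = set().union(*(patent_ipcs[i] for i in liked_idx))
    tech = np.array([jaccard(user_ipcs, ipcs) for ipcs in patent_ipcs])

    # Social Similarity: overlap between the user's tags and each patent's tags.
    social = np.array([jaccard(user_tags, tags) for tags in patent_tags])

    total = domain + tech + social            # 'Total Similarity' in the paper
    ranked = np.argsort(-total)
    return [int(i) for i in ranked if i not in liked_idx][:top_n]

# recs = recommend(texts, ipcs, tags, liked_idx=[3, 17], user_tags={"battery", "anode"})
```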

A Prospective Randomized Comparative Clinical Trial Comparing the Efficacy between Ondansetron and Metoclopramide for Prevention of Nausea and Vomiting in Patients Undergoing Fractionated Radiotherapy to the Abdominal Region (복부 방사선치료를 받는 환자에서 발생하는 오심 및 구토에 대한 온단세트론과 메토클로프라미드의 효과 : 제 3상 전향적 무작위 비교임상시험)

  • Park Hee Chul;Suh Chang Ok;Seong Jinsil;Cho Jae Ho;Lim John Jihoon;Park Won;Song Jae Seok;Kim Gwi Eon
    • Radiation Oncology Journal / v.19 no.2 / pp.127-135 / 2001
  • Purpose: This study is a prospective randomized clinical trial comparing the efficacy and complications of anti-emetic drugs for the prevention of nausea and vomiting after radiotherapy of moderate emetogenic potential. The aim was to investigate whether the anti-emetic efficacy of ondansetron (Zofran®) 8 mg bid (Group O) is better than that of metoclopramide 5 mg tid (Group M) in patients undergoing fractionated radiotherapy to the abdominal region. Materials and Methods: Study entry was restricted to patients who met the following eligibility criteria: histologically confirmed malignant disease; no distant metastasis; performance status of not more than ECOG grade 2; and no previous chemotherapy or radiotherapy. Between March 1997 and February 1998, 60 patients were enrolled in this study. All patients signed a written statement of informed consent prior to enrollment. Blinding was maintained by dosing an identical number of tablets, including one dose of matching placebo for Group O. The extent of nausea, appetite loss, and the number of emetic episodes were recorded every day using a diary card, and the mean scores of nausea and appetite loss and the mean number of emetic episodes were obtained at weekly intervals. Results: A prescription error occurred in one patient, diary cards were not returned by three patients who refused further treatment, and the card of one patient was excluded because she had a history of treatment for neurosis; as a result, the analysis included 55 patients. Patient and radiotherapy characteristics were similar except that the mean age was 52.9±11.2 years in Group M and 46.5±9.5 years in Group O, a statistically significant difference. The mean weekly scores of nausea, appetite loss, and emetic episodes were higher in Group M than in Group O, and in Group M the symptoms were most pronounced at the 5th week. In a panel data analysis using a mixed procedure, treatment group was the only significant factor explaining the difference in weekly scores for all three symptoms. Ondansetron (Zofran®) 8 mg bid and metoclopramide 5 mg tid were both well tolerated without significant side effects, and there were no clinically important changes in vital signs or clinical laboratory parameters with either drug. Conclusion: Considering that younger patients have higher emetogenic potential, the age difference between the two treatment groups may have lowered the statistical power of the analysis. There were significant differences favoring the ondansetron group with respect to the severity of nausea, vomiting, and loss of appetite. We conclude that ondansetron is a more effective anti-emetic agent for controlling radiotherapy-induced nausea, vomiting, and loss of appetite, without significant toxicity, compared with the commonly used drug metoclopramide. However, some patients suffered emesis despite the administration of ondansetron, and possible strategies to improve the prevention and treatment of radiotherapy-induced emesis must be studied further.


An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.47-73 / 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a rolling stock failure, the knowledge and experience of the maintainer make a difference in the time and quality of the work required to solve the problem, and hence in the resulting availability of the vehicle. Although problem solving is generally based on fault manuals, experienced and skilled professionals can diagnose quickly and take action by applying personal know-how. Since this knowledge exists in a tacit form, it is difficult to pass it on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on the main line, and on systems that extract the meaning of text to search for similar cases, is still lacking. This study therefore proposes an intelligent support system that provides an action guide for newly occurring failures, using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed from rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was built from the case base to include the essential terminology and failure codes specific to the railway rolling stock sector. Given the deployed case base, a new failure is compared against past cases and the three most similar failure cases are retrieved so that their actual actions can be proposed as a diagnostic guide. To compensate for the limitation of previous case-based-reasoning expert system studies for rolling stock failures, which search cases by keyword matching, this study applies several dimensionality reduction techniques that capture the semantic relationships among failure descriptions when computing similarity, and verifies their usefulness through experiments. Three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, were applied to extract the characteristics of each failure, and similar cases were retrieved by measuring the cosine distance between the resulting vectors. Precision, recall, and F-measure were used to assess the performance of the proposed actions. To compare the dimensionality reduction techniques, an analysis of variance confirmed that the performance differences among five algorithms were statistically significant; the comparison also included a baseline that randomly extracts failure cases with identical failure codes and an algorithm that applies cosine similarity directly to the word vectors. In addition, differences in performance depending on the number of dimensions used for reduction were examined to derive settings suitable for practical application. The analysis showed that direct cosine similarity performed better than NMF and LSA, and that the algorithm using Doc2Vec performed best; furthermore, within an appropriate range, performance improved as the number of dimensions increased. This study confirms the usefulness of methods for extracting the characteristics of data and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are text. Text mining is being studied for use in many areas, but studies using such text data are still rare in environments with many specialized terms and limited access to data, such as the one addressed here. In this regard, it is significant that this study first presents an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract failure characteristics, complementing keyword-based case search. The results are expected to serve as a basic study for developing diagnostic systems that can be used immediately in the field.
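
As an illustration of the retrieval step only (not the authors' implementation, and showing just the LSA variant among the three techniques compared above), the sketch below vectorizes failure descriptions with TF-IDF, reduces them with truncated SVD, and returns the three most similar past cases whose recorded actions would then be proposed; the corpus and variable names are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def build_case_base(descriptions, n_dims=50):
    """Vectorize past failure descriptions; n_dims must stay below the vocabulary size."""
    vec = TfidfVectorizer()            # the paper plugs in a railway-specific dictionary here
    X = vec.fit_transform(descriptions)
    svd = TruncatedSVD(n_components=n_dims, random_state=0)
    return vec, svd, svd.fit_transform(X)

def retrieve(new_failure, vec, svd, case_vectors, top_k=3):
    """Return indices of the top_k most similar past cases."""
    q = svd.transform(vec.transform([new_failure]))
    sims = cosine_similarity(q, case_vectors)[0]
    return sims.argsort()[::-1][:top_k]

# vec, svd, cases = build_case_base(past_failure_texts)       # hypothetical corpus
# for i in retrieve("pantograph does not rise after ...", vec, svd, cases):
#     print(past_actions[i])                                  # propose the recorded action
```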

Early Results of Heart Transplantation: A Review of 20 Patients (심장이식술 20례의 조기성적)

  • Park, Chong-Bin;Song, Hyun;Song, Meong-Gun;Kim, Jae-Joong;Lee, Jay-Won;Seo, Dong-Man;Sohn, Kwang-Hyun
    • Journal of Chest Surgery / v.30 no.2 / pp.164-171 / 1997
  • Heart transplantation is now accepted as a definitive therapeutic modality in patients with terminal heart failure. The first successful heart transplantation in humans was performed in 1967, and the first case in Korea was performed in November 1992; since then, more than 50 cases have been performed in Korea. A total of 20 patients underwent orthotopic heart transplantation at Asan Medical Center after November 1992. The purpose of this study is to evaluate the early results and follow-up course of these 20 cases. The average age of the 20 patients was 39.9±11.8 years (range 20-58), and the mean follow-up duration was 14.4±11.2 months (1-41). All patients are alive to date. The blood type was identical in 14 patients and compatible in 6. The original heart disease was dilated cardiomyopathy in 16, valvular heart disease in 2, ischemic cardiomyopathy in 1, and giant cell myocarditis in 1 patient. HLA cross-matching between recipient and donor was done in 18 cases; the results were negative for T cells and B cells in 16 patients and positive for warm B cells in 2 patients. Among the six loci of A, B, and DR, one locus was matched in 8 cases, two loci in 5 cases, and three loci in 1 case. The number of acute allograft rejections averaged 2.8±0.5 (0-6) per case, and the number of acute allograft rejections requiring treatment averaged 1.0±0.9 (1-3) per case. The interval from operation to the first acute rejection requiring treatment was 35.5±20.4 days (5-60). Acute humoral rejection was strongly suspected in one case and was successfully treated. The left ventricular ejection fraction measured by echocardiography and/or MUGA scan increased dramatically from 17.5±6.8 (9-32)% to 58.9±2.0 (55-62)% after heart transplantation. Temporary pacing was needed for more than 24 hours in 5 patients, but normal sinus rhythm returned within 7 days in all cases. One patient underwent permanent pacemaker implantation because of complete AV block that appeared 140 days after heart transplantation. One patient had cyclosporine-associated neurotoxicity during the immediate postoperative period and recovered after 27 hours. The heart transplantation program at Asan Medical Center is still at a developing stage, but the early results are comparable to those of well-established centers in other countries, although the long-term follow-up results must be reevaluated. We conclude that heart transplantation is a promising therapeutic option in patients with terminal heart failure.


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users among overflowing contents is becoming ever more important. In this flood of information, efforts are being made to better reflect the user's intention in search results rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and difficult to extract high-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and to improve the semantic performance of searches for stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. The study thus has three contributions: a practical and simple automatic knowledge extraction method that can be applied directly; a simple problem definition that makes performance evaluation possible; and increased expressiveness of the knowledge, obtained by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports about 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function is trained per stock. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function returns the highest score is predicted as the item related to that entity. To evaluate the presented model, we check its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. The presented model achieves 69.3% hit accuracy on the testing set of 2,526 reports, which is meaningfully high despite several constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average, possibly because of interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network, without learning a corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. There remain limitations and points to complement; notably, the especially poor performance for a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
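
For readers unfamiliar with the scoring form, the sketch below is a compact NumPy illustration of a neural tensor network score function of the kind described above, with one scorer per stock applied to one-hot entity vectors. The dimensions, initialization, and variable names are illustrative, and the way the two input vectors are paired is an assumption of this sketch, not a reproduction of the trained model from the study.

```python
import numpy as np

class NTNScorer:
    """One score function per stock, as described in the abstract (illustrative only)."""
    def __init__(self, n_entities, k_slices=4, seed=0):
        rng = np.random.default_rng(seed)
        d = n_entities                                         # one-hot input dimension
        self.W = rng.normal(scale=0.1, size=(k_slices, d, d))  # bilinear tensor slices
        self.V = rng.normal(scale=0.1, size=(k_slices, 2 * d)) # linear part
        self.b = np.zeros(k_slices)
        self.u = rng.normal(scale=0.1, size=k_slices)          # output weights

    def score(self, e1, e2):
        """u^T tanh(e1^T W[1..k] e2 + V [e1; e2] + b) for one-hot vectors e1, e2."""
        bilinear = np.einsum("i,kij,j->k", e1, self.W, e2)
        linear = self.V @ np.concatenate([e1, e2]) + self.b
        return float(self.u @ np.tanh(bilinear + linear))

# One trained scorer per stock; a new entity is scored by every function and
# assigned to the stock whose function returns the highest value.
# scorers = {stock: NTNScorer(n_entities=100) for stock in top30_stocks}   # hypothetical
# best = max(scorers, key=lambda s: scorers[s].score(new_entity_vec, context_entity_vec))
```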