• Title/Summary/Keyword: High Accuracy

The Role of Camera-Based Coincidence Positron Emission Tomography in Nodal Staging of Non-Small Cell Lung Cancer (비소세포폐암의 림프절 병기 결정에서 Coincidence PET의 역할)

  • Lee, Sun-Min;Choi, Young-Hwa;Oh, Yoon-Jung;Cheong, Seong-Cheoll;Park, Kwang-Joo;Hwang, Sung-Chul;Lee, Yi-Hyeong;Park, Chan-H;Hahn, Myung-Ho
    • Tuberculosis and Respiratory Diseases
    • /
    • v.47 no.5
    • /
    • pp.642-649
    • /
    • 1999
  • Background: Accurate staging of non-small cell lung cancer (NSCLC) is essential for assessing operability and prognosis. However, tumor involvement of the mediastinal lymph nodes is difficult to evaluate accurately with noninvasive imaging modalities. PET is a sensitive and specific imaging modality, but its use is limited by the prohibitive cost of its operation. Recently, hybrid SPECT/PET (single photon emission computed tomography/positron emission tomography) camera-based PET imaging was introduced at relatively low cost. We evaluated the usefulness of coincidence detection (CoDe) PET for detecting metastasis to the mediastinal lymph nodes in patients with NSCLC. Methods: Twenty-one patients with NSCLC were evaluated by CT or MRI and considered operable. CoDe PET was performed in all 21 patients prior to surgery. Tomographic slices in the axial, coronal, and sagittal planes were analysed visually. At surgery, mediastinal lymph nodes were removed and diagnosed histologically, and the CoDe PET findings were correlated with the histological findings. Results: CoDe PET detected 20 of the 21 primary tumor masses, and mediastinal lymph node metastasis was correctly diagnosed in 13 of the 21 patients. Fourteen cases were pathologically N0, and the specificity of CoDe PET for N0 disease was 64.3%. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 83.3%, 73.3%, 55.6%, 91.7%, and 76.2% for N1 nodes, and 60.0%, 87.5%, 60.0%, 87.5%, and 90.0% for N2 nodes, respectively. There were 3 false negative nodes, all smaller than 1 cm; the true positive nodes measured 1.1 cm, 1.0 cm, and 0.5 cm, respectively. Among the 12 lymph nodes larger than 1 cm there was 1 false positive. The false positive cases consisted of 1 tuberculosis, 1 pneumoconiosis, and 1 anthracosis case. Conclusion: CoDe PET has a relatively high negative predictive value for enlarged mediastinal nodes in the staging of NSCLC and is therefore useful for ruling out metastasis in enlarged N3 nodes. However, further study with a larger number of patients is needed.
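
To make the reported diagnostic figures concrete, the sketch below computes sensitivity, specificity, PPV, NPV, and accuracy from a 2x2 confusion matrix. The counts are back-calculated so the output reproduces the N1 figures quoted above; the study's actual per-node counts are not given in the abstract, so treat them as an assumption.

```python
# Diagnostic-accuracy metrics as reported in the abstract (sensitivity,
# specificity, PPV, NPV, accuracy), computed from a 2x2 confusion matrix.
# The counts below are back-calculated to reproduce the quoted N1 figures
# and are an assumption, not the study's raw data.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return the five standard diagnostic-accuracy metrics."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical N1 counts over the 21 patients.
for name, value in diagnostic_metrics(tp=5, fp=4, fn=1, tn=11).items():
    print(f"{name}: {value:.1%}")   # 83.3%, 73.3%, 55.6%, 91.7%, 76.2%
```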

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have predicted the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded with the rapid growth of online media, companies carry out campaigns of many types on a scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of the campaigns themselves is also decreasing: investment costs rise while actual success rates stay low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system's ultimate purpose is to increase the success rate of campaigns by collecting and analyzing customer-related data and using it for targeting; in particular, recent attempts have been made to predict campaign response with machine learning. Because campaign data have many features, selecting appropriate ones is critical. If all input data are used when classifying a large amount of data, training time grows as the classification task expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may degrade due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they suffer from poor classification performance and long training times. Therefore, in this study we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method when searching for the feature subsets that underpin machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first, features with a negative effect are removed, and the sequential method is then applied to increase search efficiency and enable generalized prediction; a baseline SFS sketch is shown below. The proposed model showed better search and prediction performance than the traditional greedy algorithm: campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to help analyze and interpret prediction results by providing the importance of the derived features. The important features included age, customer rating, and sales, which were already known to matter statistically. Unexpectedly, features such as the combined product name, the average 3-month data consumption rate, and the last 3 months' wireless data usage, which campaign planners had rarely used to select targets, were also selected as important for campaign response. This confirmed that base attributes can be very important features depending on the campaign type, making it possible to analyze and understand the important characteristics of each campaign type.
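
For reference, here is a minimal sketch of plain Sequential Forward Selection (SFS), the greedy baseline the abstract starts from, not the paper's improved algorithm. The dataset and classifier are placeholders.

```python
# Minimal Sequential Forward Selection (SFS) sketch: the classic greedy
# baseline the paper improves on, NOT the proposed algorithm itself.
# Dataset and classifier are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000)

selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0
while remaining:
    # Try adding each remaining feature; keep the best single addition.
    scores = {
        f: cross_val_score(model, X[:, selected + [f]], y, cv=5).mean()
        for f in remaining
    }
    f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:      # stop when no feature improves CV score
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = s_best

print("selected features:", selected, "CV accuracy:", round(best_score, 3))
```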

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon;Jung, Sang Hyung;Kim, Jun Ho;Min, Eun Joo;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.97-117
    • /
    • 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, which enable users and manufacturers to measure and understand them. When companies analyze their products or compare them with competitors', appropriate criteria must be selected for objective evaluation. The criteria should reflect the product features consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect how consumers' opinions differ from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics for use as evaluation criteria, but they still produce criteria irrelevant to the products because improperly extracted words are not refined. To overcome this limitation, this research proposes an LDA-k-NN model that extracts candidate criteria words from online reviews using LDA and refines them with a k-nearest neighbor classifier (see the sketch after this abstract). The proposed approach starts with a preparation phase consisting of six steps. First, review data are collected from e-commerce websites. Most such websites classify their items into high-, middle-, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later merged to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morpheme analysis module. After preprocessing, per-topic words are obtained with LDA, and only the nouns among the topic words are kept as potential criteria words. These words are then tagged according to their plausibility as criteria for each middle-level category, every tagged word is vectorized with a pre-trained word embedding model, and finally a k-nearest neighbor case-based approach classifies each word against the tags. After the preparation phase, the criteria extraction phase is conducted on low-level categories. It starts by crawling the reviews of the corresponding low-level category, applies the same preprocessing with the morpheme analysis module and LDA, extracts candidate criteria words by taking the nouns, and vectorizes them with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words with the k-nearest neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from '11st', one of the biggest e-commerce companies in Korea, using the 'Electronics/Digital' section, one of 11st's high-level categories. Three other models were compared with the suggested one: the actual criteria of 11st; a model that extracts nouns with the morpheme analysis module and refines them by word frequency; and a model that extracts nouns from LDA topics and refines them by word frequency. The evaluation predicted the evaluation criteria of 10 low-level categories with each of the four models. The criteria words extracted from each model were combined into a single word set used for survey questionnaires, in which respondents chose every item they considered an appropriate criterion for each category; each model scored when its extracted words were chosen. The suggested model had higher scores than the other models in 8 of the 10 low-level categories, and paired t-tests confirmed better performance in 26 of 30 tests. In addition, the suggested model was the best in terms of accuracy. This research proposes an evaluation criteria extraction method that combines topic extraction with LDA and refinement with a k-nearest neighbor approach, overcoming the limits of previous dictionary-based and frequency-based refinement models. The study can contribute to improving review analysis for deriving business insights in the e-commerce market.
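
The following sketch illustrates the two-stage idea under stated assumptions: LDA proposes candidate criteria words from reviews, and a k-NN classifier over word vectors refines them. The tiny corpus, the tagged words, and the embed() stand-in for a pre-trained embedding are all hypothetical.

```python
# Sketch of the two-stage pipeline: LDA proposes candidate criteria words,
# then k-NN over word vectors filters them. Corpus, tags, and embeddings
# are illustrative placeholders, not the paper's 11st data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

reviews = ["battery lasts long and screen is bright",
           "fast delivery but the screen scratches easily",
           "great sound quality for the price"]

# 1) LDA over the review corpus; top words per topic become candidates.
counts = CountVectorizer().fit(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts.transform(reviews))
vocab = np.array(counts.get_feature_names_out())
candidates = {vocab[i] for topic in lda.components_
              for i in topic.argsort()[-5:]}

# 2) k-NN refinement: classify each candidate as criterion / non-criterion
# using word vectors. embed() stands in for a real pre-trained embedding.
def embed(word: str) -> np.ndarray:
    """Placeholder for a pre-trained word embedding (e.g., word2vec)."""
    rng = np.random.default_rng(abs(hash(word)) % (2**32))
    return rng.normal(size=8)

tagged = {"battery": 1, "screen": 1, "price": 1, "delivery": 0, "great": 0}
knn = KNeighborsClassifier(n_neighbors=3).fit(
    [embed(w) for w in tagged], list(tagged.values()))
criteria = [w for w in candidates if knn.predict([embed(w)])[0] == 1]
print("extracted criteria:", criteria)
```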

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought machines could not beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was spotlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains; in particular, the deep learning techniques at the core of the AlphaGo algorithm drew interest. Deep learning is already being applied to many problems and shows especially good performance in image recognition, as well as in high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to perform well. In contrast, deep learning research on traditional business data and structured data analysis is hard to find. In this study, we investigate whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target recording whether the customer opened an account. To evaluate deep learning algorithms on this binary classification problem, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment used restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used instead of overall accuracy to show how well the models classify the class of interest. The deep learning techniques were applied as follows. A CNN recognizes features by reading values adjacent to a given value, but in business data the distance between fields carries little meaning because the fields are usually independent; we therefore set the CNN filter size to the number of fields so the network learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the derived features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first to reduce the influence of field position. For the dropout technique, each hidden-layer neuron was dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. The experiments yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNNs performed well on a binary classification problem to which they have rarely been applied, as well as in fields where their effectiveness is proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
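
As a concrete instance of one experimental condition, here is a minimal Keras sketch of an MLP with two hidden layers and 0.5 dropout, evaluated with the F1 score. The layer sizes and synthetic data are illustrative assumptions, not the paper's exact settings or the Portuguese bank data.

```python
# Minimal sketch of one experimental condition: an MLP with two hidden
# layers and 0.5 dropout, evaluated by F1 score. Assumes TensorFlow/Keras;
# layer sizes and the synthetic data are illustrative, not the paper's setup.
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16)).astype("float32")   # stand-in for bank data
y = (X[:, 0] + X[:, 1] > 0).astype("float32")       # binary target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),                   # drop neurons with p=0.5
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X[:800], y[:800], epochs=10, verbose=0)

pred = (model.predict(X[800:], verbose=0).ravel() > 0.5).astype(int)
print("F1:", f1_score(y[800:], pred))               # class-focused metric
```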

Effects of Anti-thyroglobulin Antibody on the Measurement of Thyroglobulin : Differences Between Immunoradiometric Assay Kits Available (면역방사계수법을 이용한 Thyroglobulin 측정시 항 Thyroglobulin 항체의 존재가 미치는 영향: Thyroglobulin 측정 키트에 따른 차이)

  • Ahn, Byeong-Cheol;Seo, Ji-Hyeong;Bae, Jin-Ho;Jeong, Shin-Young;Yoo, Jeong-Soo;Jung, Jin-Hyang;Park, Ho-Yong;Kim, Jung-Guk;Ha, Sung-Woo;Sohn, Jin-Ho;Lee, In-Kyu;Lee, Jae-Tae;Kim, Bo-Wan
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.4
    • /
    • pp.252-256
    • /
    • 2005
  • Purpose: Thyroglobulin (Tg) is a valuable and sensitive marker for the diagnosis and follow-up of several thyroid disorders, especially in the follow-up of patients with differentiated thyroid cancer (DTC), and clinical decisions often rely entirely on the serum Tg concentration. However, the Tg assay is one of the most challenging laboratory measurements to perform accurately, owing to interference from anti-thyroglobulin antibody (Anti-Tg). In this study, we compared the degree of Anti-Tg interference on Tg measurement between available Tg assay kits. Materials and Methods: Tg levels of a standard Tg solution were measured with two commercially available immunoradiometric assay kits (A and B), in the absence or presence of three different concentrations of Anti-Tg. Tg in patient serum was also measured with the same kits; the serum samples were prepared as mixtures of a serum with high Tg levels and a serum with high Anti-Tg concentrations. Results: In measurements of the standard Tg solution, the presence of Anti-Tg produced falsely low Tg levels with both kits. The underestimation was more prominent with the A kit than with the B kit; with the B kit it was statistically significant but trivial in degree and therefore clinically insignificant. Adding Anti-Tg to patient serum produced falsely low Tg levels with the A kit only. Conclusion: Tg levels can be underestimated in the presence of Anti-Tg, and the effect varies with the assay kit used. Therefore, an accuracy test must be performed for each individual Tg assay kit.
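
A minimal worked example of the kit comparison follows: percent bias of the measured Tg against the known standard, with and without Anti-Tg. All concentrations are hypothetical, since the abstract reports no raw values.

```python
# Worked example of the kit comparison: percent bias of measured Tg relative
# to the known standard, with and without added Anti-Tg. All numbers are
# illustrative placeholders; the abstract gives no raw concentrations.
known_tg = 50.0  # ng/mL, standard Tg solution (hypothetical value)

measured = {                      # (kit, Anti-Tg added) -> measured Tg
    ("A", False): 49.5, ("A", True): 31.0,   # marked underestimation
    ("B", False): 49.8, ("B", True): 47.9,   # trivial underestimation
}

for (kit, with_ab), value in measured.items():
    bias = 100.0 * (value - known_tg) / known_tg
    label = "with Anti-Tg" if with_ab else "no Anti-Tg"
    print(f"kit {kit}, {label}: bias {bias:+.1f}%")
```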

Development of Adjustable Head holder Couch in H&N Cancer Radiation Therapy (두경부암 방사선 치료 시 Set-Up 조정 Head Holder 장치의 개발)

  • Shim, JaeGoo;Song, KiWon;Kim, JinMan;Park, MyoungHwan
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.1
    • /
    • pp.43-50
    • /
    • 2014
  • For all patients who receive radiation therapy, a treatment plan is established and every step of treatment is carried out under the same geometrical conditions. Head and neck cancer patients who undergo simulation through computed tomography (CT) are fixed onto a planning table, but are laid on top of the treatment table in the radiation therapy room. This study designed and fabricated an adjustable head holder for head and neck cancer patients to correct positional and geometrical discrepancies during radiation therapy, compared the errors before and after adjusting patient position under different weights, and evaluated the correlation between patient weight and the range of error. A computed tomography system (High Advantage, GE, USA) was used to image a phantom in the supine position at the treatment site for IMRT. IMRT was delivered with 4 MV X-rays from a linear accelerator (21EX, Varian, USA), and a treatment planning system (Pinnacle, ver. 9.1h, Philips, Madison, USA) was used. To assess setup accuracy, each measurement was repeated five times for each weight (0, 15, and 30 kg), and CBCT was performed 30 times to find the mean and standard deviation of errors before and after the adjustment at each weight. The SPSS ver. 19.0 (SPSS Inc., Chicago, IL, USA) statistics program was used to perform the Wilcoxon rank test for significance and the Spearman analysis for the correlation with weight, as sketched below. Measuring the CBCT error values before the position adjustment, the X, Y, Z axis errors for 0 kg were $0.4{\pm}0.8mm$, $0.8{\pm}0.4mm$, and 0. For 15 kg, the errors before adjustment were $0.2{\pm}0.8mm$, $1.2{\pm}0.4mm$, and $2.0{\pm}0.4mm$, and after adjustment $0.2{\pm}0.4mm$, $0.4{\pm}0.5mm$, and $0.4{\pm}0.5mm$. For 30 kg, the errors before adjustment were $0.8{\pm}0.4mm$, $2.4{\pm}0.5mm$, and $4.4{\pm}0.8mm$, and after adjustment $0.6{\pm}0.5mm$, $1.0{\pm}0mm$, and $0.6{\pm}0.5mm$. When the head and neck holder was used to adjust the above errors, the CBCT error values were $0.2{\pm}0.8mm$ for the X axis, $0.40{\pm}0.54mm$ for the Y axis, and 0 for the Z axis. Statistical analysis of the values before and after adjustment showed significance with p<0.034 on the Z axis at 15 kg, and with p<0.038 and p<0.041 on the Y and Z axes, respectively, at 30 kg. A Kruskal-Wallis analysis of the adjusted values of the three weight groups showed a significant difference with p<0.008. By reducing these errors, setup reproducibility can be improved for more precise and accurate radiation therapy. The development of an adjustable device for head and neck cancer patients is significant because it improves the reproducibility of existing equipment by reducing errors in patient position.
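
To illustrate the significance testing, here is a small sketch of the Wilcoxon signed-rank comparison of paired before/after setup errors, with SciPy standing in for SPSS; the error samples are invented for illustration, not the study's CBCT data.

```python
# Sketch of the statistical comparison: paired before/after setup errors
# tested with the Wilcoxon signed-rank test (SciPy stands in for SPSS).
# The error samples are illustrative, not the study's CBCT measurements.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical Z-axis errors (mm) for the 30 kg load, 5 repeats.
before = np.array([4.0, 5.0, 4.5, 3.8, 4.7])   # before holder adjustment
after = np.array([0.5, 1.0, 0.0, 0.8, 0.6])    # after holder adjustment

stat, p = wilcoxon(before, after)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.3f}")
# A small p (e.g., p < 0.05) indicates the adjustment significantly
# reduced the setup error along this axis.
```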

System Development for Measuring Group Engagement in the Art Center (공연장에서 다중 몰입도 측정을 위한 시스템 개발)

  • Ryu, Joon Mo;Choi, Il Young;Choi, Lee Kwon;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.45-58
    • /
    • 2014
  • Korean cultural content is spreading worldwide as the Korean Wave sweeps across the globe, and such content stands at the center of that wave. Every country is working to improve its national brand and add value through its culture industry. Performance content is an important source of arousal in the entertainment industry: raising audiences' arousal, confidence in a product, and positive attitudes is a key goal for advertisers, and cultural content is no different. If cultural content earns people's trust, they pass information to those around them and spread word of mouth. Accordingly, many researchers have studied how to measure a person's arousal through statistical surveys, physiological responses, body movement, and facial expression. First, statistical surveys cannot measure each person's arousal in real time, and reliable responses are hard to obtain after the audience has watched the content. Second, physiological responses require sensors installed on each person's chair or space, and the volume of sensor data is difficult to handle in real time. Third, body movement is easy to capture with a camera, but the experimental conditions are hard to set up and the meaning of the body language is hard to interpret. Lastly, many researchers study facial expression, measuring expressions, eye tracking, and head pose. Most previous studies of arousal and interest are limited to the reaction of a single person and are difficult to apply to multiple audience members; they rely on particular methods, for example requiring room lighting, and are restricted to one person under special laboratory conditions. Moreover, arousal should be measured with respect to the content itself, which is hard to define, and audience reactions are not easy to collect immediately. Because many audience members watch a performance together in a theater, we propose a system that measures the reactions of multiple audience members in real time during a performance. We use difference-image analysis for the multi-audience setting, but it is weak in a dark field; to overcome the dark recording environment, an IR camera captures images in dark areas (a sketch of this step follows the abstract). In addition, we present a Multi-Audience Engagement Index (MAEI) computed from sound, audience movement, and eye-tracking values. The algorithm estimates audience arousal from the mobile survey, sound level, audience reaction, and eye tracking. To improve the accuracy of the index, we compare the MAEI with the mobile survey results, and the results are sent to a reporting system and offered to interested persons. Mobile surveys are easy and fast, minimize visitors' discomfort, and can provide additional information. The mobile application communicates with the database, storing visitors' attitudes toward the content in real time, and the database can provide a different survey each time based on the information gathered. Example survey items include: Impressive scene, Satisfied, Touched, Interested, and Didn't pay attention. The proposed system consists of three parts: an external device, a server, and an internal device. The external device records the multi-audience in the dark field with an IR camera and captures the sound signal, and the mobile-application survey data are sent to the server database. The server contains the content data, such as per-scene weights and group audience weight indices, the camera control program, and the algorithm that calculates the Multi-Audience Engagement Index. The internal device presents the MAEI through a web UI, print, and a field monitor. The system was test-operated by Mogencelab in the DMC display exhibition hall located in Sangam-dong, Mapo-gu, Seoul, and visitor data are still being collected daily. If the audience arousal factors identified with this system are confirmed, it will be very useful for creating content.
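
A minimal sketch of the difference-image step that could drive the movement component of such an index is shown below, assuming OpenCV; the video file name and threshold are placeholders, and the paper's exact MAEI weighting is not reproduced.

```python
# Sketch of the difference-image step behind the movement component of the
# engagement index: frame-to-frame differencing on (IR) camera frames.
# Assumes OpenCV; the video source and threshold are illustrative.
import cv2

cap = cv2.VideoCapture("audience_ir.mp4")   # placeholder IR recording
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

movement = []                                # per-frame movement score
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)           # pixel-wise frame difference
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    movement.append(mask.mean() / 255.0)     # fraction of changed pixels
    prev = gray

cap.release()
print("mean movement score:", sum(movement) / max(len(movement), 1))
```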

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. Amid this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier it is obtained, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the information flow is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. There are 5,600 reports in total; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using a named entity recognition tool, the KKMA. For each stock, the top 100 entities by appearance frequency are selected and vectorized with one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we check its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports, a meaningfully high ratio despite some constraints on the research. Looking at prediction performance per stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average, possibly because of interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search related information according to the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, there remain limits and things to complement; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
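
For readers unfamiliar with the scoring model, below is a minimal sketch of a neural tensor network score function in the style of Socher et al.; the dimensionality and random inputs are placeholders, and the paper's training procedure (one function per stock over one-hot entity vectors) is only approximated.

```python
# Minimal Neural Tensor Network scoring function (Socher et al.-style):
# score(e1, e2) = u^T tanh(e1^T W[1:k] e2 + V[e1; e2] + b).
# One such score function would be trained per stock; inputs and sizes
# here are illustrative placeholders, not the paper's exact setup.
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # tensor slices
        self.V = nn.Linear(2 * dim, k)                          # linear term + bias
        self.u = nn.Linear(k, 1, bias=False)                    # output layer

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        # Bilinear term: one scalar per tensor slice.
        bilinear = torch.einsum("d,kde,e->k", e1, self.W, e2)
        hidden = torch.tanh(bilinear + self.V(torch.cat([e1, e2])))
        return self.u(hidden).squeeze()

dim = 100                       # e.g., one-hot over a stock's top-100 entities
score_fn = NTNScore(dim)        # in the paper, one score function per stock
entity, stock_vec = torch.randn(dim), torch.randn(dim)
print("score:", float(score_fn(entity, stock_vec)))
```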

Evaluation of Factors Used in AAPM TG-43 Formalism Using Segmented Sources Integration Method and Monte Carlo Simulation: Implementation of microSelectron HDR Ir-192 Source (미소선원 적분법과 몬테칼로 방법을 이용한 AAPM TG-43 선량계산 인자 평가: microSelectron HDR Ir-192 선원에 대한 적용)

  • Ahn, Woo-Sang;Jang, Won-Woo;Park, Sung-Ho;Jung, Sang-Hoon;Cho, Woon-Kap;Kim, Young-Seok;Ahn, Seung-Do
    • Progress in Medical Physics
    • /
    • v.22 no.4
    • /
    • pp.190-197
    • /
    • 2011
  • Currently, the dose distribution calculation used by commercial treatment planning systems (TPSs) for high-dose-rate (HDR) brachytherapy is derived from the point- and line-source approximation methods recommended by AAPM Task Group 43 (TG-43). However, Monte Carlo (MC) simulation is required to assess the accuracy of the dose calculation around a three-dimensional Ir-192 source. In this study, the geometry factor was calculated with a segmented source integration method, dividing the microSelectron HDR Ir-192 source into smaller parts. The Monte Carlo code (MCNPX 2.5.0) was used to calculate the dose rate $\dot{D}(r,\theta)$ at a point ($r,\theta$) away from an HDR Ir-192 source in a spherical water phantom of 30 cm diameter, and the anisotropy function and radial dose function were calculated from the results. The obtained geometry factor was compared with that of the line-source approximation, and the anisotropy and radial dose functions were compared with the MCPT results of Williamson. The geometry factors from the segmented source integration method and the line-source approximation agreed within 0.2% for $r{\geq}0.5$ cm and within 1.33% for r=0.1 cm. The relative root mean square error (R-RMSE) between the anisotropy functions obtained in this study and by Williamson was 2.33% for r=0.25 cm and within 1% for r>0.5 cm. The R-RMSE of the radial dose function was 0.46% over radial distances from 0.1 to 14.0 cm. The geometry factors from the two methods were thus in good agreement for $r{\geq}0.1$ cm; nevertheless, the segmented source integration method seems preferable, since modeling the three-dimensional Ir-192 source yields a more realistic geometry factor. The anisotropy and radial dose functions estimated with MCNPX in this study and with MCPT by Williamson agree within the uncertainty of the Monte Carlo codes, except at a radial distance of r=0.25 cm. The Monte Carlo code used in this study is expected to be applicable to other brachytherapy sources.
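
The sketch below contrasts the TG-43 line-source geometry factor $G_L(r,\theta)=\beta/(Lr\sin\theta)$ with a segmented numerical integration over the active length. Note that segmenting a one-dimensional line simply converges to the analytic formula; the paper segments the full three-dimensional source, which this simplified sketch does not model, and the ~3.6 mm active length is an assumption.

```python
# Sketch of the geometry factor G(r, theta): the TG-43 line-source
# approximation versus a segmented numerical integration over the active
# length. Segments are points on a 1-D line here, so the sum converges to
# the analytic formula; the paper's full 3-D source model is not reproduced.
import numpy as np

L = 0.36  # assumed active source length in cm (~3.6 mm)

def g_line(r: float, theta: float) -> float:
    """TG-43 line-source approximation: G_L = beta / (L * r * sin(theta))."""
    z, y = r * np.cos(theta), r * np.sin(theta)
    beta = np.arctan2(z + L / 2, y) - np.arctan2(z - L / 2, y)
    return beta / (L * r * np.sin(theta))

def g_segmented(r: float, theta: float, n: int = 1000) -> float:
    """Average inverse-square distance over n source segments."""
    z0 = np.linspace(-L / 2, L / 2, n)           # segment centers on the axis
    z, y = r * np.cos(theta), r * np.sin(theta)
    return np.mean(1.0 / ((z - z0) ** 2 + y ** 2))

for r in (0.1, 0.5, 1.0):
    gl, gs = g_line(r, np.pi / 2), g_segmented(r, np.pi / 2)
    print(f"r={r} cm: line={gl:.4f}, segmented={gs:.4f}, "
          f"diff={100 * (gs / gl - 1):.3f}%")
```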

Development and Validation of Analytical Method for Wogonin, Quercetin, and Quercetin-3-O-glucuronide in Extracts of Nelumbo nucifera, Morus alba L., and Raphanus sativus Mixture (연잎, 상엽, 건조 무 혼합 추출물의 지표성분 wogonin, quercetin 및 quercetin-3-O-glucuronide의 분석법 개발 및 검증)

  • Jang, Gill-Woong;Park, Eun-Young;Choi, Seung-Hyun;Choi, Sun-il;Cho, Bong-Yeon;Sim, Wan-Sup;Han, Xinggao;Cho, Hyun-Duk;Lee, Ok-Hwan
    • Journal of Food Hygiene and Safety
    • /
    • v.33 no.4
    • /
    • pp.289-295
    • /
    • 2018
  • The aim of this study was to develop and validate an analytical method for determining wogonin, quercetin, and quercetin-3-O-glucuronide in extracts of Nelumbo nucifera, Morus alba L., and Raphanus sativus mixtures. We evaluated the specificity, linearity, precision, accuracy, limit of detection (LOD), and limit of quantification (LOQ) of the method for wogonin, quercetin, and quercetin-3-O-glucuronide using high-performance liquid chromatography. Our results showed that the correlation coefficients of the calibration curves for wogonin, quercetin, and quercetin-3-O-glucuronide were all 0.9999. The LODs ranged from 0.09 to $0.16{\mu}g/mL$ and the LOQs from 0.26 to $0.48{\mu}g/mL$. The inter-day and intra-day precision values of wogonin, quercetin, and quercetin-3-O-glucuronide ranged from 0.74 to 1.87% and from 0.28 to 1.12%, respectively, and the inter-day and intra-day accuracies were 99.96~115.88% and 99.73~114.81%, respectively. Therefore, the analytical method was validated for the detection of wogonin, quercetin, and quercetin-3-O-glucuronide in extracts of Nelumbo nucifera, Morus alba L., and Raphanus sativus mixtures.
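
As a worked example of how an LOD and LOQ can be derived, the sketch below applies the common ICH convention LOD = 3.3σ/S and LOQ = 10σ/S to a hypothetical calibration curve; the abstract does not state which computation the authors used, and the calibration points are invented.

```python
# Worked example of LOD/LOQ from a calibration curve using the common ICH
# convention LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where sigma is the
# residual standard deviation and S the slope. The calibration points are
# illustrative; the abstract gives neither the raw data nor the exact method.
import numpy as np

conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0])        # ug/mL standards
area = np.array([10.2, 20.5, 50.8, 101.9, 203.4])  # peak areas (hypothetical)

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)        # ddof=2: two fitted parameters

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"r = {np.corrcoef(conc, area)[0, 1]:.4f}, "
      f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")
```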