
Dosimetry of the Low Fluence Fast Neutron Beams for Boron Neutron Capture Therapy (붕소-중성자 포획치료를 위한 미세 속중성자 선량 특성 연구)

  • Lee, Dong-Han;Ji, Young-Hoon;Lee, Dong-Hoon;Park, Hyun-Joo;Lee, Suk;Lee, Kyung-Hoo;Suh, So-Heigh;Kim, Mi-Sook;Cho, Chul-Koo;Yoo, Seong-Yul;Yu, Hyung-Jun;Gwak, Ho-Shin;Rhee, Chang-Hun
    • Radiation Oncology Journal
    • /
    • v.19 no.1
    • /
    • pp.66-73
    • /
    • 2001
  • Purpose : For research on Boron Neutron Capture Therapy (BNCT), fast neutrons generated from the MC-50 cyclotron at Korea Cancer Center Hospital, with a maximum energy of 34.4 MeV, were moderated by 70 cm of paraffin, and the dose characteristics of the resulting beam were investigated. Using these results, we hope to establish a protocol for dose measurement of epi-thermal neutrons, to lay a basis for the dose characteristics of epi-thermal neutrons emitted from nuclear reactors, and to assess the feasibility of accelerator-based BNCT. Method and Materials : To measure the absorbed dose and dose distribution of the fast neutron beams, we used a Unidos 10005 (PTW, Germany) electrometer with IC-17 (Far West, USA), IC-18, and EIC-1 ion chambers made of A-150 plastic, and an IC-17M ion chamber made of magnesium for the gamma dose. These chambers were flushed with tissue-equivalent gas and argon gas, respectively, at a flow rate of 5 cc per minute. Using the Monte Carlo N-Particle (MCNP) code, a transport program for mixed neutron, photon, and electron fields, the two-dimensional dose and energy fluence distributions were calculated, and these results were compared with the measurements. Results : The absorbed dose of the fast neutron beams was $6.47\times10^{-3}$ cGy per 1 MU at 4 cm depth in the water phantom, which is assumed to be the effective depth for BNCT. The gamma contamination intermingled with the fast neutron beams was $65.2{\pm}0.9\%$ at the same depth. In the depth-dose distribution in water, the neutron dose decreased linearly and the gamma dose decreased exponentially with increasing depth. The factor expressing the energy level, $D_{20}/D_{10}$, of the total dose was 0.718. Conclusion : Through direct measurement using the two ion chambers with different wall materials, and computer calculation of isodose distributions using MCNP simulation, we characterized the dose of low-fluence fast neutron beams. If a power supply and target material capable of generating high voltage and current are developed, and the gamma contamination is reduced by lead or bismuth shielding, accelerator-based BNCT may become feasible.
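The paired-chamber idea above (a tissue-equivalent chamber sensitive to both neutrons and gammas, plus a magnesium chamber sensitive mainly to gammas) reduces to simple arithmetic. This is an illustrative simplification, not the full dosimetry protocol: the function name and the relative-sensitivity factor `k_gamma` are assumptions, and real measurements apply chamber-specific correction factors.

```python
def separate_doses(d_te, d_mg, k_gamma=1.0):
    """Simplified paired-chamber dose separation (illustrative only).

    d_te: dose read by the tissue-equivalent (A-150) chamber, which
          responds to both neutrons and gammas.
    d_mg: dose read by the magnesium/argon chamber, which responds
          mainly to gammas.  k_gamma is a hypothetical relative gamma
          sensitivity; real dosimetry uses chamber-specific factors
          from a dosimetry protocol.
    """
    gamma = d_mg / k_gamma
    neutron = d_te - gamma
    return neutron, gamma

# Example using the paper's figures at 4 cm depth: total absorbed dose
# 6.47e-3 cGy/MU with ~65.2% gamma contamination.
total = 6.47e-3
gamma_reading = 0.652 * total
neutron, g = separate_doses(total, gamma_reading)
print(f"neutron: {neutron:.2e} cGy/MU, gamma fraction: {g / total:.1%}")
```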


The Evaluation of IL-8 in the Serum of Pneumoconiotic patients (진폐증 환자에서의 혈청내 IL-8 농도)

  • Ahn, Hyeong Sook;Kim, Ji Hong;Chang, Hwang Sin;Kim, Kyung Ah;Lim, Young
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.6
    • /
    • pp.945-953
    • /
    • 1996
  • Background : Many acute and chronic lung diseases, including pneumoconiosis, are characterized by increased numbers of activated macrophages. These macrophages generate several inflammatory cell chemoattractants, by which neutrophils migrate from the vascular compartment to the alveolar space. Recruited neutrophils secrete toxic oxygen radicals and proteolytic enzymes and induce an inflammatory response. A continuing inflammatory response results in alteration of pulmonary structure and irreversible fibrosis. Recently, a polypeptide with specific neutrophil chemotactic activity, interleukin-8 (IL-8), has been cloned and isolated from a number of cells, including monocytes, macrophages, and fibroblasts. IL-1 and/or TNF-${\alpha}$ precede the synthesis of IL-8, and we have already observed high levels of IL-1 and TNF-${\alpha}$ in the pneumoconioses. We therefore hypothesized that IL-8 may play a central role in the pathogenesis of pneumoconiosis. In order to evaluate the clinical utility of IL-8 as a biomarker for the early diagnosis of pneumoconiosis, we investigated the increase of IL-8 in pneumoconiotic patients and the correlation between IL-8 level and progression of pneumoconiosis. Method : We measured IL-8 in the serum of 48 patients with pneumoconiosis and 16 persons without dust exposure history as a control group. Pneumoconiotic cases were divided into 3 groups according to the ILO Classification: a suspicious group (n=16), a small opacity group (n=16), and a large opacity group (n=16). IL-8 was measured by a sandwich enzyme immunoassay technique. All data are expressed as the $mean{\pm}standard$ deviation. Results : 1) The mean age was higher in the small opacity and large opacity groups than in the comparison group, but smoking history was comparable. Duration of dust exposure did not differ among the 3 pneumoconiosis groups. 2) The IL-8 level was $70.50{\pm}53.63pg/m{\ell}$ in the suspicious group, $107.50{\pm}45.88pg/m{\ell}$ in the small opacity group, $132.50{\pm}73.47pg/m{\ell}$ in the large opacity group, and $17.85{\pm}33.85pg/m{\ell}$ in the comparison group. The IL-8 concentration in all pneumoconiosis groups was significantly higher than that in the comparison group (p<0.001). 3) The IL-8 level tended to increase with the progression of pneumoconiosis. Multiple comparison testing using ANOVA/Scheffe analysis showed a significant difference between the suspicious group and the large opacity group (p<0.05). 4) The level of IL-8 was correlated with the progression of pneumoconiosis (r=0.4199, p<0.05). Conclusion : IL-8 is thought to be a good biomarker for the early diagnosis of pneumoconiosis.
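The correlation reported in result 4) is an ordinary Pearson coefficient. A minimal sketch follows; the stage codes and IL-8 values below are hypothetical illustrations, not the study's raw data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: pneumoconiosis stage coded 0-3
# (control, suspicious, small opacity, large opacity) vs. serum IL-8 (pg/ml).
stage = [0, 0, 1, 1, 2, 2, 3, 3]
il8 = [15, 20, 65, 75, 100, 115, 125, 140]
print(round(pearson_r(stage, il8), 3))
```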


Sesquiterpenoids Bioconversion Analysis by Wood Rot Fungi

  • Lee, Su-Yeon;Ryu, Sun-Hwa;Choi, In-Gyu;Kim, Myungkil
    • 한국균학회소식:학술대회논문집
    • /
    • 2016.05a
    • /
    • pp.19-20
    • /
    • 2016
  • Sesquiterpenoids are defined as $C_{15}$ compounds derived from farnesyl pyrophosphate (FPP), and their complex structures are found in the tissues of many diverse plants (Degenhardt et al. 2009). FPP's long chain length and additional double bond enable its conversion to a huge range of mono-, di-, and tri-cyclic structures. A number of cyclic sesquiterpenes with alcohol, aldehyde, and ketone derivatives have key biological and medicinal properties (Fraga 1999). Fungi, such as the wood-rotting Polyporus brumalis, are excellent sources of pharmaceutically interesting natural products such as sesquiterpenoids. In this study, we investigated the biosynthesis of P. brumalis sesquiterpenoids on modified medium. Fungal suspensions of 11 white rot species were inoculated in modified medium containing $C_6H_{12}O_6$, $C_4H_{12}N_2O_6$, $KH_2PO_4$, $MgSO_4$, and $CaCl_2$ for 20 days. Cultivation was stopped, and the mycelium was separated and solvent-extracted. The metabolites were identified as follows: propionic acid (1), mevalonic acid lactone (2), ${\beta}$-eudesmane (3), and ${\beta}$-eudesmol (4) (Figure 1). The main peaks of ${\beta}$-eudesmane and ${\beta}$-eudesmol, which are indicative of sesquiterpene structures, were consistently detected at 5, 7, 12, and 15 days. These results demonstrated the existence of terpene metabolism in the mycelium of P. brumalis. Polyporus spp. are known to generate flavor components such as methyl 2,4-dihydroxy-3,6-dimethyl benzoate; 2-hydroxy-4-methoxy-6-methyl benzoic acid; 3-hydroxy-5-methyl phenol; and 3-methoxy-2,5-dimethyl phenol in submerged cultures (Hoffmann and Esser 1978). Drimanes of sesquiterpenes were reported as metabolites from P. arcularius and shown to exhibit antimicrobial activity against Gram-positive bacteria such as Staphylococcus aureus (Fleck et al. 1996). The main metabolites of P. brumalis, ${\beta}$-eudesmol and ${\beta}$-eudesmane, are categorized as eudesmane-type sesquiterpene structures. The eudesmane skeleton can be biosynthesized from FPP-derived IPP, and approximately 1,000 such structures have been identified in plants as essential oils. The biosynthesis of eudesmol by P. brumalis may thus be an important tool for the production of useful natural compounds, given the potent bioactivity identified for them in plants. Essential oils comprising eudesmane-type sesquiterpenoids have been extensively researched (Wu et al. 2006). ${\beta}$-Eudesmol is a well-known and important eudesmane alcohol with an anticholinergic effect in the vascular endothelium (Tsuneki et al. 2005). Additionally, recent studies demonstrated that ${\beta}$-eudesmol acts as a channel blocker for nicotinic acetylcholine receptors at the neuromuscular junction, and that it can inhibit angiogenesis in vitro and in vivo by blocking the mitogen-activated protein kinase (MAPK) signaling pathway (Seo et al. 2011). Nutrient variation experiments were conducted to determine the optimum conditions for the biosynthesis of sesquiterpenes by P. brumalis. Genes encoding terpene synthases, which are crucial to the terpene synthesis pathway, generally respond to environmental factors such as pH, temperature, and available nutrients (Hoffmeister and Keller 2007, Yu and Keller 2005). Calvo et al. described the effect of the major nutrients, carbon and nitrogen, on the synthesis of secondary metabolites (Calvo et al. 2002). P. brumalis did not synthesize sesquiterpenes under all growth conditions. The differences in metabolites observed when P. brumalis was grown in PDB versus modified medium highlighted the potential effect of inorganic sources such as $C_4H_{12}N_2O_6$, $KH_2PO_4$, $MgSO_4$, and $CaCl_2$ on sesquiterpene synthesis. ${\beta}$-Eudesmol was apparent throughout cultivation except when P. brumalis was grown on $MgSO_4$-free medium. These results demonstrated that $MgSO_4$ can specifically control the biosynthesis of ${\beta}$-eudesmol. Magnesium has been reported as a cofactor that binds to sesquiterpene synthase (Agger et al. 2008). Specifically, the $Mg^{2+}$ ions bind to two conserved metal-binding motifs. These metal ions complex with the substrate pyrophosphate, thereby promoting the ionization of the leaving group of FPP and resulting in the generation of a highly reactive allylic cation. The effect of the magnesium source on sesquiterpene biosynthesis was also identified via analysis of the concentration of total carbohydrates. Our study offers further insight that fungal sesquiterpene biosynthesis can be controlled by nutrients. To profile the metabolites of P. brumalis, the cultures were extracted at points along the growth curve. Although metabolites were produced during mycelial growth, it was difficult to detect significant changes in metabolite production, especially for compounds at low concentrations. These compounds may be of interest for understanding their synthetic mechanisms in P. brumalis. The synthesis of terpene compounds began during the growth phase at day 9. Sesquiterpene synthesis occurred after growth was complete. At day 9, drimenol, farnesol, and mevalonic acid lactone were identified. Mevalonic acid lactone is the precursor of the mevalonate pathway and, in particular, a precursor for a number of biologically important lipids, including cholesterol and hormones (Buckley et al. 2002). Farnesol is the precursor of sesquiterpenoids. Drimenol compounds, bi-cyclic sesquiterpene alcohols, can be synthesized from trans,trans-farnesol via cyclization and rearrangement (Polovinka et al. 1994). They have also been identified in the basidiomycete Lentinus lepideus as secondary metabolites. After 12 days, in the growth phase, ${\beta}$-elemene, caryophyllene, ${\delta}$-cadinene, and eudesmane were detected along with ${\beta}$-eudesmol. The data showed the synthesis of sesquiterpene hydrocarbons with bi-cyclic structures. These compounds can be synthesized from FPP by cyclization. Cyclic terpenoids are synthesized through the formation of a carbon skeleton from linear precursors by terpene cyclase, followed by chemical modification such as oxidation, reduction, and methylation. Sesquiterpene cyclase is a key branch-point enzyme that catalyzes the complex intramolecular cyclization of the linear prenyl diphosphate into cyclic hydrocarbons (Toyomasu et al. 2007). After 20 days, in the stationary phase, the oxygenated structures eudesmol, elemol, and caryophyllene oxide were detected. Thus, sesquiterpenes were identified after growth. From these results, we showed that terpene metabolism in wood-rotting fungi occurs in the stationary phase and that it can be controlled by magnesium supplementation of the growth medium. In conclusion, we identified P. brumalis as a wood-rotting fungus that can produce sesquiterpenes. To mechanistically understand eudesmane-type sesquiterpene biosynthesis in P. brumalis, further research into the genes regulating such biosynthesis is warranted.


A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.93-108
    • /
    • 2014
  • To support business decision making, interest in and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions evolve into various patterns by taking advantage of information technology. To keep up with this evolution, much effort has gone into fraud detection methods and advanced application systems that improve the accuracy and ease of fraud detection. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are assigned as auction-exception products when the total volume of the product is relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing the product. However, the auction-exception products policy creates several problems for the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real large-scale trade transaction data for agricultural products in the market from 2008 to 2010 are analyzed, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data has unique characteristics, such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first trial to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules are generated using an outlier detection approach. We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items. Quarterly average unit prices of product items for specific wholesalers are also used to identify outlier transactions. The reliability of the generated fraud detection rules is confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed into a Z-value to calculate its occurrence probability when we approximate the distribution of unit prices by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because in the case of auction-exception agricultural products, Z-values are influenced by the outlier fraud transactions themselves, since the number of wholesalers is small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction being checked for fraud. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is to import transaction databases. The next important functions set up the fraud detection parameters; by changing these parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found with the given parameters. Potential fraud transactions can be viewed on screen or exported as files. This study is an initial trial to identify fraudulent transactions in auction-exception agricultural products, and many research topics remain. First, the scope of the analyzed data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, the scope of fraud transaction detection should be extended to fishery products. There are also many possibilities for applying different data mining techniques to fraud detection; for example, a time series approach is a potential technique for this problem. Finally, although outlier transactions are detected here based on unit prices, it is also possible to derive fraud detection rules based on transaction volumes.
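The Self-Eliminated Z-score idea above (scoring a unit price against the statistics of all the *other* transactions, so an extreme price cannot mask itself) can be sketched as follows. The function name and example prices are hypothetical, and the paper's exact formula may differ.

```python
import math

def self_eliminated_z(prices, i):
    """Z-score of prices[i] computed against the mean and standard
    deviation of all other unit prices, so that an extreme transaction
    does not dilute its own score (the 'Self-Eliminated Z-score' idea;
    this is an illustrative sketch, not the paper's exact formula)."""
    rest = prices[:i] + prices[i + 1:]
    n = len(rest)
    mean = sum(rest) / n
    sd = math.sqrt(sum((p - mean) ** 2 for p in rest) / n)
    if sd == 0:
        return 0.0  # degenerate case: all other prices identical
    return (prices[i] - mean) / sd

# A suspiciously cheap transaction among otherwise stable unit prices:
prices = [100, 102, 98, 101, 99, 40]
z = self_eliminated_z(prices, 5)
print(f"self-eliminated z = {z:.1f}")  # strongly negative: outlier candidate
```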

An Ontology-Based Movie Contents Recommendation Scheme Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.25-44
    • /
    • 2013
  • Accessing movie contents has become easier and more frequent with the advent of smart TV, IPTV, and web services that can be used to search for and watch movies. In this situation, users increasingly search for movie contents matching their preferences. However, since the amount of available movie content is so large, users need considerable effort and time to find it. Hence, there has been much research on recommending personalized items through analysis and clustering of user preferences and user profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only the relations between the metadata of movies but also the relations between the metadata and the user's profile. The relations between metadata can indicate similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we chose as the main metadata the genre, actors/actresses, keywords, and synopsis, since these influence which movies users find interesting. The user model contains demographic information about the user and the relations between the user and movie metadata. In our model, the movie ontology model consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For our knowledge base, we entered individual data for 14,374 movies for each concept in the contents ontology model. This movie metadata knowledge base is used to search for movies related to the metadata the user finds interesting, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation, consisting of four components. The first component searches for candidate movies based on the demographic information of the user. In this component, we divide users into groups according to demographic information in order to recommend movies for each group, and we define the rules that decide a user's group. This component generates the query used to search for candidate movies for recommendation. The second component searches for candidate movies based on user preferences. When users choose a movie, they consider metadata such as genre, actor/actress, synopsis, and keywords. Users input their preferences, and this component then searches for movies based on them. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of a recommended candidate movie has a weight that is used to decide the recommendation order. The third component merges the results of the first and second components. In this step, we calculate the weight of each movie using the weight values of its metadata and sort the movies by total weight. The fourth component analyzes the result of the third component, decides the level of contribution of each metadata type, and applies the contribution weight to the metadata. Finally, we use the result of this step as the recommendation for users. We tested the usability of the proposed scheme using a web application implemented with JSP, JavaScript, and the Protégé API. In our experiment, we collected results from 20 men and women, ranging in age from 20 to 29, using 7,418 movies with a rating of at least 7.0. We provided the Top-5, Top-10, and Top-20 recommended movies to each user, who then chose the movies that interested them. On average, users chose 2.1 interesting movies in the Top-5, 3.35 in the Top-10, and 6.35 in the Top-20. These results are better than those yielded by using each metadata type alone.
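The merge-and-sort step of the third component (scaling each candidate's per-metadata scores by a contribution weight and ordering by total) might look like the sketch below. The contribution weights, score values, and names are hypothetical, not values from the paper.

```python
# Hypothetical contribution weights per metadata type (assumed values).
CONTRIB = {"genre": 0.4, "actor": 0.3, "keyword": 0.2, "synopsis": 0.1}

def rank_candidates(candidates):
    """candidates: {title: {metadata_type: score}}.
    Returns titles sorted by weighted total score, highest first."""
    def total(scores):
        return sum(CONTRIB.get(k, 0.0) * v for k, v in scores.items())
    return sorted(candidates, key=lambda t: total(candidates[t]), reverse=True)

movies = {
    "Movie A": {"genre": 0.9, "actor": 0.2, "keyword": 0.5},
    "Movie B": {"genre": 0.3, "actor": 0.9, "keyword": 0.1},
    "Movie C": {"genre": 0.6, "actor": 0.6, "keyword": 0.6},
}
print(rank_candidates(movies))  # Movie C edges out Movie A on total weight
```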

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading; as a result, big data analysis is expected to be performed by the requesters themselves. Along with this, interest in various types of unstructured data is continually increasing, with particular attention focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection; thus, it is essential to analyze the entire collection at once to identify the topic of each document. This requirement leads to long processing times when topic modeling is applied to a large number of documents, and it causes a scalability problem: an exponential increase in processing time as the number of analysis objects grows. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without first being combined. However, despite its many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming that the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied as thoroughly as other topic modeling research. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced entire document cluster (RGS, reduced global set) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire collection, and we also proposed a reasonable method for comparing the results of the two approaches.
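One way to map local topics to global topics, as the approach above requires, is to represent each topic as a term-weight vector over a shared vocabulary and assign every local topic to its most similar global topic. The cosine-similarity mapping below is an assumed implementation sketch, not the paper's exact method; the toy vectors are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def map_local_to_global(local_topics, global_topics):
    """Assign each local topic (term-weight vector) to the index of the
    most similar global topic."""
    mapping = {}
    for li, lt in enumerate(local_topics):
        best = max(range(len(global_topics)),
                   key=lambda gi: cosine(lt, global_topics[gi]))
        mapping[li] = best
    return mapping

# Toy term-weight vectors over a 4-word shared vocabulary.
global_topics = [[0.9, 0.1, 0.0, 0.0],   # global topic 0
                 [0.0, 0.0, 0.8, 0.2]]   # global topic 1
local_topics = [[0.7, 0.3, 0.0, 0.0],
                [0.1, 0.0, 0.6, 0.3]]
print(map_local_to_global(local_topics, global_topics))
```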

Construction of Consumer Confidence index based on Sentiment analysis using News articles (뉴스기사를 이용한 소비자의 경기심리지수 생성)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.1-27
    • /
    • 2017
  • It is known that the economic sentiment index and macroeconomic indicators are closely related, because economic agents' judgments and forecasts of business conditions affect economic fluctuations. For this reason, consumer sentiment or confidence provides steady fodder for business and is treated as an important piece of economic information. In Korea, where private consumption accounts for a large share of economic activity, the consumer sentiment index is a very important economic indicator for evaluating and forecasting the domestic economic situation. However, despite offering relevant insights into private consumption and GDP, the traditional survey-based approach to measuring consumer confidence has several limitations. One weakness is that it takes considerable time to research, collect, and aggregate the data: if urgent issues arise, timely information will not be announced until the end of the month. In addition, the survey only contains information derived from the questionnaire items, which means it can be difficult to capture the direct effects of newly arising issues. The survey also faces potential declines in response rates and erroneous responses. Therefore, it is necessary to find a way to complement it. For this purpose, we construct and assess an index designed to measure consumer economic sentiment using sentiment analysis. Unlike the survey-based measures, our index relies on textual analysis to extract sentiment from economic and financial news articles. Text data such as news articles and SNS posts are timely and cover a wide range of issues; because such sources can quickly capture the economic impact of specific economic issues, they have great potential as economic indicators. Of the two main approaches to the automatic extraction of sentiment from text, we apply the lexicon-based approach, using sentiment lexicon dictionaries of words annotated with their semantic orientations. In creating the sentiment lexicon dictionaries, we entered the semantic orientation of individual words manually and did not attempt a full linguistic analysis (one involving word senses or argument structure); this is a limitation of our research, and further work in that direction remains possible. In this study, we generate a time series index of economic sentiment in the news. The construction of the index consists of three broad steps: (1) collecting a large corpus of economic news articles on the web, (2) applying lexicon-based sentiment analysis to score each article by its sentiment orientation (positive, negative, or neutral), and (3) constructing an economic sentiment index of consumers by aggregating the monthly time series of each sentiment word. In line with existing scholarly assessments of the relationship between the consumer confidence index and macroeconomic indicators, the new index should be assessed for its usefulness, so we compare it with other economic indicators and the CSI. To check the usefulness of the new sentiment-based index, trend and cross-correlation analyses are carried out to examine the relations and the lagged structure. Finally, we analyze its forecasting power using one-step-ahead out-of-sample prediction. As a result, the news sentiment index correlates strongly with related contemporaneous key indicators in almost all experiments, and news sentiment shocks predict future economic activity in most cases; in head-to-head comparisons, the news sentiment measures outperform the survey-based sentiment index (CSI). Policy makers want to understand consumer and public opinions about existing or proposed policies, and such opinions enable relevant government decision-makers to respond quickly by monitoring various web media, SNS, and news articles. Textual data such as news articles and social network posts (Twitter, Facebook, and blogs) are generated at high speed and cover a wide range of issues. Although research using unstructured data in economic analysis is in its early stages, the utilization of such data is expected to greatly increase once its usefulness is confirmed.
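The three index-construction steps described in the abstract (collect articles, score each with a sentiment lexicon, aggregate monthly) can be sketched with a toy scorer. The lexicon entries, article texts, and simple averaging rule below are hypothetical stand-ins for the paper's dictionaries and corpus.

```python
from collections import defaultdict

# Hypothetical mini sentiment lexicon; a real dictionary would hold
# thousands of manually annotated words.
LEXICON = {"growth": 1, "recovery": 1, "surge": 1,
           "recession": -1, "slump": -1, "crisis": -1}

def article_score(text):
    """Net count of positive minus negative lexicon words in one article."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

def monthly_index(articles):
    """articles: list of (month, text) pairs.
    Returns month -> average article sentiment: a crude news-based index."""
    totals, counts = defaultdict(float), defaultdict(int)
    for month, text in articles:
        totals[month] += article_score(text)
        counts[month] += 1
    return {m: totals[m] / counts[m] for m in totals}

news = [("2017-01", "signs of recovery and export growth"),
        ("2017-01", "construction slump deepens"),
        ("2017-02", "consumer spending surge lifts growth")]
print(monthly_index(news))
```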

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has undergone an extremely fast evolution. It is equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, and it includes other features such as a GPS sensor and a digital compass. This evolution helps application developers use the power of smart-phones to create rich environments that offer a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude, but these systems have a major limitation: the AR contents are rarely overlaid precisely on the real target. Another line of research is context-based AR services using image recognition and tracking, in which the AR contents are precisely overlaid on the real target; however, real-time performance is restricted by the retrieval time, and such systems are hard to implement over large areas. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system mainly consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media) such as text, pictures, and video in their smart-phone viewfinder when they point the smart-phone at a certain building or landmark. For this, a landmark recognition technique is applied. SURF point-based features are used in the matching process because of their robustness. To ensure that the image retrieval and matching processes are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks in the pointed orientation. The queried image is matched only against this selected data, so the matching speed increases significantly.
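The GPS/compass pre-filtering step can be sketched as below. This is an illustrative reconstruction under stated assumptions: the 300 m search radius, the 60-degree field of view, and the landmark tuples are hypothetical values, not parameters from the paper.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from the user to a landmark, 0-360 degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def candidate_landmarks(user, heading, landmarks, max_m=300.0, fov=60.0):
    """Keep only landmarks near the user and inside the pointed field of view.

    user: (lat, lon); heading: compass heading in degrees;
    landmarks: iterable of (name, lat, lon)."""
    out = []
    for name, lat, lon in landmarks:
        if haversine_m(user[0], user[1], lat, lon) > max_m:
            continue
        diff = abs((bearing_deg(user[0], user[1], lat, lon) - heading + 180) % 360 - 180)
        if diff <= fov / 2:
            out.append(name)
    return out
```

Only the surviving candidates would then enter the SURF matching stage, which is what keeps retrieval fast enough for real-time tracking.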
The second part is the annotation module. Instead of only viewing the augmented information media, users can create virtual annotations based on linked data. Full knowledge of the landmark is not required; users can simply look for an appropriate topic by searching for a keyword in the linked data. This helps the system find the target URI needed to generate correct AR contents. To recognize target landmarks, images of the selected building or landmark are captured from different angles and distances; this procedure effectively builds a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates. A grid-based clustering method and user location information are used to restrict the retrieval range. Compared with existing research using clusters and GPS information, where the retrieval time is around 70~80 ms, experimental results show that our approach reduces the retrieval time to around 18~20 ms on average. The total processing time is therefore reduced from 490~540 ms to 438~480 ms. The performance improvement will become more obvious as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.

Dynamic Virtual Ontology using Tags with Semantic Relationship on Social-web to Support Effective Search (효율적 자원 탐색을 위한 소셜 웹 태그들을 이용한 동적 가상 온톨로지 생성 연구)

  • Lee, Hyun Jung;Sohn, Mye
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.19-33
    • /
    • 2013
  • In this research, the proposed Dynamic Virtual Ontology using Tags (DyVOT) supports dynamic search of resources according to users' requirements, using tags from social-web resources. In general, tags are annotations, a series of descriptive words, attached by social users to information resources such as web pages, images, YouTube videos, etc. Tags therefore characterize and mirror the information resources, and as meta-data they can be matched to those resources. Consequently, we can extract semantic relationships between tags, owing to the dependency relationships between tags as representatives of resources. However, there is a limitation: tags, usually marked as a series of words, include synonyms and homonyms. Thus, research on folksonomies has applied tags to the classification of words by semantic-based synonymy, and some research focuses on clustering and/or classification of resources by semantic-based relationships among tags. These studies are nevertheless limited because they focus on semantic-based hyper/hypo relationships or clustering among tags without considering the conceptual associative relationships between the classified or clustered groups, which makes it difficult to search resources effectively according to user requirements. In this research, the proposed DyVOT uses tags and constructs an ontology for effective search. We assume that tags are extracted from user requirements and are used to construct multiple sub-ontologies as combinations of some or all of the tags. In addition, the proposed DyVOT constructs an ontology based on hierarchical and associative relationships among tags for an effective search for a solution. The ontology is composed of a static ontology and a dynamic ontology.
The static ontology defines semantic-based hierarchical hyper/hypo relationships among tags, as in (http://semanticcloud.sandra-siegel.de/), with a tree structure. From the static ontology, DyVOT extracts multiple sub-ontologies using multiple sub-tags, which are constructed from subsets of the tags; each sub-ontology is then built from the hierarchy paths that contain its sub-tag. To create the dynamic ontology, DyVOT must define associative relationships among the multiple sub-ontologies extracted from the hierarchical relationships of the static ontology. An associative relationship is defined by the resources shared between tags that are linked by the sub-ontologies, and the association is measured by the degree of shared resources allocated to the tags of the sub-ontologies. If the value of the association is larger than a threshold value, a new associative relationship between the tags is created. These associative relationships are used to merge the multiple sub-ontologies and construct a new hierarchy. To construct the dynamic ontology, it is essential to define a new class linked to two or more sub-ontologies; the class is generated from merged tags proved to be highly associative through their shared resources. This class is then used to generate a new hierarchy with the extracted sub-ontologies, creating the dynamic ontology: the newly created class belongs to the dynamic ontology and establishes new hyper/hypo hierarchical relationships between the class and the tags linked to the sub-ontologies. Finally, DyVOT is developed from the newly defined associative relationships extracted from the hierarchical relationships among tags. Resources are matched into DyVOT, which narrows the search boundary and shortens the search paths.
While a static data catalog (Dean and Ghemawat, 2004; 2008) searches resources statically according to user requirements, the proposed DyVOT searches resources dynamically using multiple sub-ontologies with parallel processing. In this light, DyVOT improves the correctness and agility of search and decreases search effort by reducing the search path.
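The thresholded association between tags of different sub-ontologies can be sketched as below. The abstract only states that association is "the degree of shared resources" compared against a threshold; the Jaccard-style overlap measure and the threshold value 0.3 used here are assumptions for illustration.

```python
def association(tag_resources, tag_a, tag_b):
    """Degree of shared resources between two tags.

    tag_resources: tag -> set of resource ids annotated with that tag.
    Overlap is measured Jaccard-style (an assumption; the paper does
    not fix the exact formula)."""
    a, b = tag_resources[tag_a], tag_resources[tag_b]
    return len(a & b) / len(a | b) if a | b else 0.0

def link_sub_ontologies(tag_resources, pairs, threshold=0.3):
    """Create an associative edge for each tag pair whose resource
    overlap reaches the threshold; these edges drive the merging of
    sub-ontologies into the dynamic ontology."""
    return [(a, b) for a, b in pairs
            if association(tag_resources, a, b) >= threshold]
```

Pairs that pass the threshold would then be merged under a new class in the dynamic ontology's hierarchy.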

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels like social media and SNS create enormous amounts of data. Among all kinds of data, the portion of unstructured data represented as text has increased geometrically. Because it is difficult to check all text data, it is important to access the data rapidly and grasp the key points of the text. Due to this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms have recently been proposed to generate summaries objectively and effectively, so-called "automatic summarization". However, almost all text summarization methods proposed to date construct the summary based on the frequency of contents in the original documents. Such summaries tend to omit small-weight subjects that are mentioned less in the original text. If a summary includes only the contents of the major subjects, bias occurs and information is lost, so it is hard to ascertain every subject the documents contain. To avoid this bias, it is possible to summarize with a balance between the topics of a document so that every subject can be ascertained, but an imbalance in the distribution of those subjects still remains. To retain the balance of subjects in the summary, it is necessary to consider the proportion of every subject the documents originally have, and also to allocate the portions of the subjects equally, so that even sentences of minor subjects are sufficiently included in the summary. In this study, we propose a "subject-balanced" text summarization method that procures balance between all subjects and minimizes the omission of low-frequency subjects. For the subject-balanced summary, we use two summary evaluation metrics, "completeness" and "succinctness".
Completeness means that the summary should fully include the contents of the original documents, and succinctness means that the summary has minimum internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, it is possible to identify highly related terms for every topic, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; in this method, they are called "seed terms". However, these terms are too few to explain each subject fully, so sufficient terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for word expansion: it finds terms similar to the seed terms. Word vectors are created by Word2Vec modeling, and from those vectors the similarity between all terms can be derived using cosine similarity; the higher the cosine similarity between two terms, the stronger their relationship. Terms with high similarity to the seed terms of each subject are selected, and by filtering these expanded terms the subject dictionary is finally constructed. The next phase allocates subjects to every sentence of the original documents. To grasp the contents of all sentences, a frequency analysis is first conducted with the specific terms that compose the subject dictionaries. The TF-IDF weight of each subject is calculated after the frequency analysis, making it possible to determine how much each sentence explains each subject. However, a TF-IDF weight can grow without bound, so the TF-IDF weights of every subject in each sentence are normalized to values between 0 and 1.
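The seed-term expansion step, cosine similarity over word vectors against a cutoff, can be sketched as follows. The tiny hand-made 2-d vectors and the 0.7 threshold are assumptions for illustration; in the paper the vectors come from a trained Word2Vec model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def expand_seed_terms(vectors, seeds, threshold=0.7):
    """Add every vocabulary term whose cosine similarity to some seed
    term reaches the threshold.

    vectors: term -> vector (here toy tuples standing in for Word2Vec
    output); seeds: the hand-picked seed terms for one subject."""
    expanded = set(seeds)
    for term, vec in vectors.items():
        if term in expanded:
            continue
        if any(cosine(vec, vectors[s]) >= threshold
               for s in seeds if s in vectors):
            expanded.add(term)
    return expanded
```

The expanded set, after filtering, becomes the subject dictionary used in the allocation phase.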
Then, by allocating to each sentence the subject with the maximum TF-IDF weight among all subjects, sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between the subject sentences, and a similarity matrix is formed. By repetitive sentence selection, it is possible to generate a summary that fully covers the contents of the original documents while minimizing duplication within the summary itself. For the evaluation of the proposed method, 50,000 TripAdvisor reviews are used to construct the subject dictionaries, and 23,087 reviews are used to generate summaries. A comparison between the summary from the proposed method and a frequency-based summary is also performed; the results verify that the summary from the proposed method better retains the balance of all the subjects the documents originally have.
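The allocation phase, scoring each sentence against each subject dictionary with TF-IDF, normalizing per subject, and grouping by the maximum weight, can be sketched as below. This is a simplified stand-in under stated assumptions: the exact TF-IDF variant and normalization (here, dividing by the per-subject maximum) may differ from the paper's.

```python
import math
from collections import Counter, defaultdict

def allocate_sentences(sentences, dictionaries):
    """Group sentences by their dominant subject.

    sentences: list of token lists; dictionaries: subject -> set of
    dictionary terms. Each sentence gets a TF-IDF score per subject,
    scores are normalized into [0, 1] per subject, and the sentence is
    allocated to the subject with the maximum weight."""
    n = len(sentences)
    # document frequency of each term, treating sentences as documents
    df = Counter(t for s in sentences for t in set(s))
    weights = defaultdict(dict)  # subject -> {sentence index: weight}
    for subj, terms in dictionaries.items():
        raw = [sum(s.count(t) * math.log(n / df[t]) for t in terms if df[t])
               for s in sentences]
        top = max(raw) or 1.0
        for i, w in enumerate(raw):
            weights[subj][i] = w / top  # normalize into [0, 1]
    groups = defaultdict(list)
    for i in range(n):
        best = max(dictionaries, key=lambda subj: weights[subj][i])
        groups[best].append(i)
    return dict(groups)
```

The per-subject groups produced here are what the final phase samples from, which is how minor subjects keep a guaranteed share of the summary.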