• Title/Summary/Keyword: Library system

Search Results: 3,057

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful to them. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. On the other hand, in the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems such as Extractor and Kea were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document; as a result, keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to the experimental results of Turney, about 64% to 90% of keywords assigned by the authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of the keywords assigned by the authors do not appear in the article and thus cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of keywords assigned by the authors are not included in the full text. This is why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system was implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. Also, IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
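
The five-step assignment procedure above amounts to representing each candidate keyword set and the target document as term-weight vectors and ranking keyword sets by cosine similarity. Below is a minimal, illustrative Python sketch of that idea; the tokenizer, the toy keyword sets, and the uniform weighting are assumptions made for the example, not the authors' implementation.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Naive lower-cased word tokenizer; the paper's preprocessing/parsing step is richer.
    return re.findall(r"[a-z]+", text.lower())

def cosine(vec_a, vec_b):
    # Cosine similarity between two sparse term-weight vectors stored as dicts.
    dot = sum(vec_a[t] * vec_b[t] for t in set(vec_a) & set(vec_b))
    norm_a = math.sqrt(sum(w * w for w in vec_a.values()))
    norm_b = math.sqrt(sum(w * w for w in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical keyword sets from a controlled vocabulary, weighted uniformly here
# (step 1 of IVSM would use per-keyword weights instead).
keyword_sets = {
    "information retrieval": {"information": 1.0, "retrieval": 1.0, "query": 1.0},
    "machine learning": {"learning": 1.0, "classification": 1.0, "training": 1.0},
}

def assign_keywords(document, keyword_sets, top_k=1):
    # Steps 2-5: parse the document, build its term-frequency vector,
    # score every keyword set by cosine similarity, and keep the best ones.
    doc_vec = dict(Counter(tokenize(document)))
    scored = sorted(
        ((cosine(vec, doc_vec), name) for name, vec in keyword_sets.items()),
        reverse=True,
    )
    return [name for score, name in scored[:top_k] if score > 0]

print(assign_keywords("A study of query processing for information retrieval systems",
                      keyword_sets))
# -> ['information retrieval']
```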

A Study on the Development of Environmental Printing Technology and the $CO_2$ Emissions of Printed Matter (환경 인쇄 기술의 발전과 인쇄물의 $CO_2$ 발생량에 관한 연구)

  • Lee, Mun-Hag
    • Journal of the Korean Graphic Arts Communication Society / v.30 no.3 / pp.89-114 / 2012
  • Concern about environmental problems is now greater worldwide than at any other time in the past, and the environmental problem has become a global issue that threatens the survival of mankind and the very presence of the earth. Preparing for climate change is no longer a matter of choice but a necessity, whether from a personal or an economic point of view. In company management, the era of green growth has arrived, in which a company that excludes environmental management cannot sustain its business; in other words, environmental management now sits at the center of the shifting management paradigm. Like all industries, the printing and publishing industry generates greenhouse gases, which are emitted in the manufacturing processes of the various materials used as raw materials for books and in the operation of print shops. In particular, the felling of trees to obtain material for paper is known to have a direct influence on global warming. Based on these facts, this study identifies and organizes the environmental parameters of the publishing industry. It is intended as a reference that provides basic material for preparing for the case in which a carbon emission trading scheme (cap and trade) is enforced across all industries, for sustaining the management of the publishing industry, for reducing environmental risk among the many risk management elements a company faces, and for planning and enforcing publication-related policy. This study examines the elements of the printing and publishing industry that discharge environmental pollutants or greenhouse gases. In the production process of printed publications, it also examines the need for environment-friendly efforts with respect to printing paper and printing ink, the various printing materials regarded as sources of greenhouse gases, the operation of print shops and their machines, and recycling processes. These considerations are meant to make industry employees aware of the significance of conservation and environmental protection, and to assist subsequent studies that quantify the greenhouse gas emissions of each environmental parameter, which would ultimately make calculation of the relative $CO_2$ output of printed matter possible. Meanwhile, a good deal of research related to protecting and improving the environment is in progress in the printing and publishing field. Some of it reflects a voluntary intent to protect the environment, but much of it is driven by external factors, namely environment-related laws and regulations. Because the rising cost of responding to environmental requirements acts as a pressure on company management, research aimed at keeping this cost low, covering the procurement and development of low-environmental-load materials, the introduction of environmental management systems, quality control standards, and the standardization of materials, has been actively in progress. For the green growth era, this paper sets out the tasks facing the print publication industry and the alternatives for addressing them as follows. The tasks are, first, the new digital environment surrounding the industry and the issues it generates, and second, the fact that green growth has become the largest topic in global industry. As alternatives for resolving them, the printing and publishing industry first has to prepare a bridgehead for environment-friendly green growth; such support in each industry is becoming an obligation rather than a choice. Presstek, known for its practice of sustainable print publishing, emphasized the importance of green printing in its 2008 white paper and insisted that a company has to achieve green growth through environment-friendly activities. The core principles for a sustainable printing and publishing industry presented in the Presstek white paper are compacted into four words: remove, reduce, recover, and recycle. Second, digital printing (print on demand, POD) systems should be actively utilized. In the worldwide print market, digital printing still accounts for only around 10%, but the conversion from analog to digital and the growth of that market will accelerate in the future. Jeff Hayes, CEO of Infoland, remarked that traditional offset printing has grown old and that digital printing, with new applications such as customized publications, is leading the market. In conclusion, print publishers have to grasp the market flow well in a situation where digitalization is becoming general and unavoidable, keep pace with the digital age, and raise their awareness of the efforts required for development and for environmental problems. In particular, an active green strategy should be employed.
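
The quantification step the study calls for, turning each environmental parameter into a relative $CO_2$ figure for printed matter, corresponds to standard activity-data-times-emission-factor greenhouse gas accounting. Below is a minimal, illustrative Python sketch of that bookkeeping; the parameter names and every emission-factor value are hypothetical placeholders, not figures from the paper.

```python
# Illustrative CO2-equivalent accounting for a print job:
# total emissions = sum(activity amount * emission factor) over each parameter.
# All emission factors below are hypothetical placeholders, not measured values.
EMISSION_FACTORS = {          # kg CO2e per unit of activity
    "paper_kg": 1.2,          # per kg of printing paper
    "ink_kg": 2.5,            # per kg of printing ink
    "electricity_kwh": 0.5,   # per kWh of press-room electricity
    "transport_tkm": 0.1,     # per tonne-kilometre of distribution
}

def co2_equivalent(activity_data: dict) -> float:
    """Return total kg CO2e for a print job described by activity amounts."""
    return sum(
        amount * EMISSION_FACTORS[parameter]
        for parameter, amount in activity_data.items()
    )

job = {"paper_kg": 500, "ink_kg": 12, "electricity_kwh": 300, "transport_tkm": 80}
print(f"Estimated footprint: {co2_equivalent(job):.1f} kg CO2e")
```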

Source Term Characterization for Structural Components in $17{\times}17$ KOFA Spent Fuel Assembly ($17{\times}17$ KOFA 사용후핵연료집합체내 구조재의 방사선원항 특성 분석)

  • Cho, Dong-Keun;Kook, Dong-Hak;Choi, Heui-Joo;Choi, Jong-Won
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT) / v.8 no.4 / pp.347-353 / 2010
  • Source terms of the metal waste comprising a spent fuel assembly become relatively important when the spent fuel is pyroprocessed, because cesium, strontium, and transuranics are no longer a concern with respect to the source term for permanent disposal. In this study, the characteristics of the radiation source terms for each structural component in the spent fuel assembly were analyzed using ORIGEN-S under the assumption that 10 metric tons of uranium are pyroprocessed. First, the mass and volume of each structural component of the fuel assembly were calculated in detail. An activation cross-section library was generated using the KENO-VI/ORIGEN-S module for the top-end piece and the bottom-end piece, because these are located outside the active core, where the neutron spectrum differs from that of the in-core region. As a result, the radioactivity, decay heat, and hazard index were revealed to be $1.40{\times}10^{15}$ Becquerels, 236 Watts, and $4.34{\times}10^9\;m^3$-water, respectively, at 10 years after discharge. These values correspond to 0.7%, 1.1%, and 0.1%, respectively, of those of the spent fuel itself. The Inconel 718 grid plates were shown to be the most important component in all aspects of radioactivity, decay heat, and hazard index, although their mass occupies only 1% of the total. It was also shown that if the Inconel 718 grid plates are managed separately, the radioactivity and hazard index of the metal waste could be decreased to 20~45% and 30~45%, respectively. As a whole, the decay heat of the metal waste was shown to be negligible from the viewpoint of disposal system design, while the radioactivity and hazard index are important.
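
The percentage statements above (the metal waste contributing 0.7%, 1.1%, and 0.1% of the spent fuel's radioactivity, decay heat, and hazard index, and the reduction obtainable by segregating the Inconel 718 grid plates) are simple ratios over component-wise source terms. The short Python sketch below only illustrates that bookkeeping; the per-component breakdown is a hypothetical placeholder (only its sum is set to the reported $1.40{\times}10^{15}$ Bq total), not the ORIGEN-S results of the paper.

```python
# Hypothetical per-component radioactivity at 10 years after discharge (Bq).
# The split between components is a placeholder chosen only to show the arithmetic.
component_activity = {
    "top_end_piece": 1.0e14,
    "bottom_end_piece": 8.0e13,
    "inconel_718_grid_plates": 9.0e14,
    "zircaloy_guide_tubes": 3.2e14,
}

total = sum(component_activity.values())
print(f"Total metal-waste activity: {total:.2e} Bq")

# Fractional contribution of each component to the metal-waste total.
for name, activity in component_activity.items():
    print(f"  {name}: {100 * activity / total:.1f} % of total")

# Effect of managing the Inconel 718 grid plates separately.
remaining = total - component_activity["inconel_718_grid_plates"]
print(f"Remaining activity without grid plates: {100 * remaining / total:.1f} % of original")
```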

Cloning of a Glutathione S-Transferase Decreasing During Differentiation of HL60 Cell Line (HL60 세포주의 분화 시 감소 특성을 보이는 Glutathione S-Transferase의 클로닝)

  • Kim Jae Chul;Park In Kyu;Lee Kyu Bo;Sohn Sang Kyun;Kim Moo Kyu;Kim Jung Chul
    • Radiation Oncology Journal / v.17 no.2 / pp.151-157 / 1999
  • Purpose: By sequencing the Expressed Sequence Tags of a human dermal papilla cDNA library, we identified a clone named K872 whose expression decreased during differentiation of the HL60 cell line. Materials and Methods: K872 plasmid DNA was isolated with the QIA plasmid extraction kit (Qiagen GmbH, Germany). Nucleotide sequencing of the K872 plasmid DNA was performed by Sanger's method. The most up-to-date GenBank and EMBL nucleic acid databases were searched over the internet using the BLAST (Basic Local Alignment Search Tool) program. Northern blots were performed using RNA isolated from various human tissues and cancer cell lines. Expression of the fusion protein was achieved with the His-Patch ThioFusion expression system, and the protein product was identified on SDS-PAGE. Results: The K872 clone is 1006 nucleotides long, with a coding region of 675 nucleotides and a 3' non-coding region of 280 nucleotides. The presumed open reading frame starting at the 5' terminus of K872 encodes 226 amino acids, including the initiation methionine residue. The amino acid sequence deduced from the open reading frame of K872 shares 70% identity with that of rat glutathione S-transferase kappa 1 (rGSTK1). The transcripts were expressed in a variety of human tissues and cancer cells. Transcript levels were relatively high in tissues such as heart, skeletal muscle, and peripheral blood leukocytes. It is noteworthy that K872 was found to be abundantly expressed in colorectal cancer and melanoma cell lines. Conclusion: Homology search results suggest that the K872 clone is the human homolog of rGSTK1, which is known to be involved in resistance to cytotoxic therapy. We propose that meticulous functional analysis should follow to confirm this.

Korean Practice Guidelines for Gastric Cancer 2022: An Evidence-based, Multidisciplinary Approach

  • Tae-Han Kim;In-Ho Kim;Seung Joo Kang;Miyoung Choi;Baek-Hui Kim;Bang Wool Eom;Bum Jun Kim;Byung-Hoon Min;Chang In Choi;Cheol Min Shin;Chung Hyun Tae;Chung sik Gong;Dong Jin Kim;Arthur Eung-Hyuck Cho;Eun Jeong Gong;Geum Jong Song;Hyeon-Su Im;Hye Seong Ahn;Hyun Lim;Hyung-Don Kim;Jae-Joon Kim;Jeong Il Yu;Jeong Won Lee;Ji Yeon Park;Jwa Hoon Kim;Kyoung Doo Song;Minkyu Jung;Mi Ran Jung;Sang-Yong Son;Shin-Hoo Park;Soo Jin Kim;Sung Hak Lee;Tae-Yong Kim;Woo Kyun Bae;Woong Sub Koom;Yeseob Jee;Yoo Min Kim;Yoonjin Kwak;Young Suk Park;Hye Sook Han;Su Youn Nam;Seong-Ho Kong;The Development Working Group for the Korean Practice Guidelines for Gastric Cancer 2022 Task Force Team
    • Journal of Gastric Cancer / v.23 no.1 / pp.3-106 / 2023
  • Gastric cancer is one of the most common cancers in Korea and the world. This is the fourth gastric cancer guideline published in Korea since 2004 and is a revision of the previous evidence-based guideline of 2018. The current guideline is a collaborative work of an interdisciplinary working group including experts in the fields of gastric surgery, gastroenterology, endoscopy, medical oncology, abdominal radiology, pathology, nuclear medicine, radiation oncology, and guideline development methodology. A total of 33 key questions were updated or proposed after a collaborative review by the working group, and 40 statements were developed according to systematic reviews using the MEDLINE, Embase, Cochrane Library, and KoreaMed databases. The level of evidence and the grading of recommendations were categorized according to the Grading of Recommendations, Assessment, Development and Evaluation approach. Evidence level, benefit, harm, and clinical applicability were considered the significant factors for recommendation. The working group reviewed the recommendations and discussed them to reach consensus. In the earlier part, the general considerations cover screening, diagnosis, and staging by endoscopy, pathology, radiology, and nuclear medicine. A flowchart is depicted together with the statements, which are supported by meta-analyses and references. Since clinical trials and systematic reviews were not suitable for postoperative oncologic and nutritional follow-up, the working group agreed to conduct a nationwide survey investigating the clinical practice of all tertiary or general hospitals in Korea. The purpose of this survey was to provide baseline information on follow-up. Herein we present a multidisciplinary, evidence-based gastric cancer guideline.

Discovery of UBE2I as a Novel Binding Protein of a Premature Ovarian Failure-Related Protein, FOXL2 (조기 난소 부전증 유발 관련 단백질인 FOXL2의 새로운 결합 단백질 UBE2I의 발견)

  • Park, Mira;Jung, Hyun Sook;Kim, Hyun-Lee;Pisarska, Margareta D.;Ha, Hye-Jeong;Lee, Kangseok;Bae, Jeehyeon;Ko, Jeong-Jae
    • Development and Reproduction / v.12 no.3 / pp.289-296 / 2008
  • BPES (Blepharophimosis/Ptosis/Epicanthus inversus Syndrome) is an autosomal dominant disorder caused by mutations in FOXL2. Affected individuals have premature ovarian failure (POF) in addition to small palpebral fissures, drooping eyelids, and a broad nasal bridge. FOXL2 is a member of the forkhead family of transcription factors. In FOXL2-deficient ovaries, granulosa cell differentiation does not progress, leading to arrest of folliculogenesis and oocyte atresia. Using yeast two-hybrid screening of a rat ovarian cDNA library with FOXL2 as bait, we found that the small ubiquitin-related modifier (SUMO)-conjugating E2 enzyme UBE2I interacted with the FOXL2 protein. UBE2I, also known as UBC9, is an essential protein for processing SUMO modification. Sumoylation is a form of post-translational modification involved in diverse signaling pathways, including the regulation of the transcriptional activities of many transcription factors. In the present study, we confirmed the protein-protein interaction between FOXL2 and UBE2I in human 293T cells by in vivo immunoprecipitation. In addition, we generated truncated FOXL2 mutants and identified the region of FOXL2 required for its association with UBE2I using the yeast two-hybrid system. Therefore, the identification of UBE2I as an interacting protein of FOXL2 further suggests the presence of a novel regulatory mechanism of FOXL2 by sumoylation.

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.79-92 / 2015
  • Knowledge map is widely used to represent knowledge in many domains. This paper presents a method of integrating the national R&D data and assisting users in navigating the integrated data via a knowledge map service. The knowledge map service is built by using a lightweight ontology and a topic modeling method. The national R&D data is integrated with the research project at its center, i.e., the other R&D data such as research papers, patents, and reports are connected with the research project as its outputs. The lightweight ontology is used to represent the simple relationships between the integrated data, such as project-output relationships, document-author relationships, and document-topic relationships. The knowledge map enables us to infer further relationships such as co-author and co-topic relationships. To extract the relationships between the integrated data, a Relational Data-to-Triples transformer is implemented. Also, a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is a knowledge map used in the area of knowledge management to store, manage, and process an organization's data as knowledge; the other is a knowledge map for analyzing and representing knowledge extracted from science & technology documents. This research focuses on the latter. In this research, a knowledge map service is introduced for integrating the national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map. Using the lightweight ontology enables us to represent and process knowledge as a simple network, and it fits in with the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology is used to represent the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected by the national R&D data through the author relationships and the performer relationships. A knowledge map for displaying the researchers' network is created, and the network is built from the co-authoring relationships in the national R&D documents and the co-participation relationships in the national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about the national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system has three goals: 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic- and topic-based information search on the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, research reports, patents, and GTB data is updated daily from NDSL, and the R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and integrated into the integrated database. The knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them. The topic modeling approach enables us to extract these relationships and topic keywords based on semantics, not on simple keyword matching. Lastly, we show an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and the knowledge map services created based on the knowledge base are also introduced.
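
A minimal sketch of the two knowledge-base construction steps described above: converting relational project-output rows into RDF triples and extracting document-topic relationships with a topic model. It uses rdflib and gensim's LDA as stand-ins; the namespace, predicate names, and sample records are illustrative assumptions, not the paper's actual R&D ontology or data.

```python
from rdflib import Graph, Namespace, Literal, URIRef
from gensim import corpora
from gensim.models import LdaModel

# --- Relational data to triples (hypothetical rows and predicates) ---
EX = Namespace("http://example.org/rnd/")
rows = [
    {"project": "P001", "output": "paper_42", "author": "Kim"},
    {"project": "P001", "output": "patent_7", "author": "Lee"},
]

g = Graph()
for r in rows:
    project = URIRef(EX[r["project"]])
    output = URIRef(EX[r["output"]])
    g.add((project, EX.hasOutput, output))                # project-output relationship
    g.add((output, EX.hasAuthor, Literal(r["author"])))   # document-author relationship

print(g.serialize(format="turtle"))

# --- Topic modeling for document-topic relationships ---
docs = [
    ["ontology", "knowledge", "map", "triple", "store"],
    ["topic", "modeling", "lda", "document", "keyword"],
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

# Link each document to its dominant topic (document-topic relationship).
for i, bow in enumerate(corpus):
    topic_id, weight = max(lda.get_document_topics(bow), key=lambda t: t[1])
    print(f"doc_{i} -> topic_{topic_id} (weight {weight:.2f})")
```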

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPUs". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the inception of Tensorflow, however, it seems that companies such as Microsoft and Facebook have started to join the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare some of the deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, in terms of convenience of coding, the order is CNTK, Tensorflow, and then Theano. This criterion is based simply on code length; the learning curve and the ease of coding are not the main concern. According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We need to mention, however, that low-level coding is not always bad; it gives us flexibility in coding. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three types of deep learning framework: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers is also important. And when learning a deep learning framework, the availability of sufficient examples and references matters as well.
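
The abstract's central point, that a framework represents a model as a computational graph of nodes and edges and obtains gradients by combining per-edge partial derivatives with the chain rule, can be illustrated with a tiny reverse-mode automatic differentiation sketch in plain Python. This is a toy illustration of the mechanism the three frameworks automate, not code from any of them.

```python
class Node:
    """A scalar node in a computational graph with reverse-mode autodiff."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_node, local_partial_derivative)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value, [(self, other.value), (other, self.value)])

    def backward(self, upstream=1.0):
        # Chain rule: accumulate the upstream gradient times the local partial
        # derivative along every edge leading into this node.
        # (This simple recursion suffices for tree-shaped graphs like the example below;
        # real frameworks traverse the graph in reverse topological order.)
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)

# f(x, w, b) = w * x + b, evaluated at x=2, w=3, b=1
x, w, b = Node(2.0), Node(3.0), Node(1.0)
y = w * x + b
y.backward()
print(y.value)   # 7.0
print(w.grad)    # df/dw = x = 2.0
print(x.grad)    # df/dx = w = 3.0
print(b.grad)    # df/db = 1.0
```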

Assessment of Physiological Activity of Entomopathogenic Fungi with Insecticidal Activity Against Locusts (풀무치에 대하여 살충활성을 보유한 곤충병원성 진균의 생리활성 평가)

  • Lee, Mi Rong;Kim, Jong Cheol;Lee, Se Jin;Kim, Sihyeon;Lee, Seok Ju;Park, So Eun;Lee, Wang Hyu;Kim, Jae Su
    • Korean Journal of Applied Entomology / v.56 no.3 / pp.301-308 / 2017
  • Locusts, Locusta migratoria (Orthoptera: Acrididae), are periodic, unpredictable agricultural pests worldwide and cause serious damage to crop production; however, little consideration has been given to the management of this pest. Herein, we constructed a locust-pathogenic fungal library and confirmed that some fungi could be used as resources for locust management. First, entomopathogenic fungi were collected from sampled soils using a Tenebrio molitor-based baiting system. For the locust assay, a locust colony was obtained from the National Institute of Agricultural Science and Technology. Granules of a total of 34 entomopathogenic fungal isolates, produced by solid culture, were placed in plastic insect-rearing boxes (2 g/box) containing locust nymphs. Within 3-7 days, mycosis was observed on the membranous cuticles of the head, abdomen, and legs of the locusts. In particular, Metarhizium anisopliae, M. lepidiotae, and Clonostachys rogersoniana exhibited high virulence against the locusts. Given that the 34 isolates could be used in field applications, their conidial production and stability (thermotolerance) were further characterized. In the thermotolerance assay, Paecilomyces and Purpureocillium isolates had higher thermotolerance than the other isolates. Most of the fungal isolates produced ca. >$1{\times}10^8$ conidia/g on millet grain medium. In a greenhouse trial, granular application of an M. anisopliae isolate on the soil surface resulted in 85.7% control efficacy. This work suggests that entomopathogenic fungi in granular form can be effectively used to control the migratory locust.
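
The 85.7% control efficacy reported for the greenhouse trial is the kind of figure typically derived from treated versus untreated survival counts, for example with Abbott's correction. The snippet below only illustrates that standard calculation under assumed, hypothetical proportions; the paper does not state which formula or raw counts were used.

```python
def abbott_efficacy(survival_treated: float, survival_control: float) -> float:
    """Abbott-corrected control efficacy (%) from survival proportions."""
    return 100.0 * (1.0 - survival_treated / survival_control)

# Hypothetical example values, chosen only to show the arithmetic:
# 20% of nymphs survive in treated boxes vs 90% in untreated controls.
print(f"{abbott_efficacy(0.20, 0.90):.1f} % control efficacy")  # -> 77.8 %
```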

Recent Study of Acupuncture in the Treatment of Urinary Disturbance (배뇨장애(排尿障碍)에 대한 침구치료(鍼灸治療)의 연구동향(硏究動向))

  • Kim, Kyung-tai;Ko, Young-jin;Kim, Yong-suk;Kim, Chang-hwan
    • Journal of Acupuncture Research / v.22 no.3 / pp.123-135 / 2005
  • Objective: The aim of this study was to systematically review the literature and clinical trials on the treatment of urinary incontinence and lower urinary tract symptoms (LUTS). Methods: Computerized literature searches were carried out on two electronic databases, together with searches of several Korean Oriental medicine journals in the library of Kyung-Hee Medical Center. Results: 1. Three review studies, six experimental studies, and fourteen clinical trial reports were collected and reviewed. The three review studies were all published in Korean Oriental medicine journals. Since 2000, research on this topic has increased in quantity and improved in quality. 2. Urinary disturbance covers various lower urinary tract symptoms and urinary incontinence; from the viewpoint of Oriental medicine these symptoms include anuria, dysuria, urinary incontinence, nocturnal enuresis, uracratia, and so on. 3. Roughly, the physiological mechanism of acupuncture in the treatment of urinary disturbance may be that acupuncture stimulation of the parasympathetic nerves, the sleep-arousal system in the cerebrum, the pontine and spinal urination centers, and the pudendal and pelvic nerves affects the bladder by expanding bladder capacity, inhibiting urinary contraction, and influencing the periurethral muscles through continuous excitement of the spinal annular circuit and neuronal synapses. 4. The clinical results for acupuncture treatment of urinary disturbance can be summarized as follows: acupuncture treatment of urination disturbances such as neurogenic bladder, incontinence, cystitis, nocturnal enuresis, and prostatitis/pelvic pain syndrome has shown significance in clinical trials and technique. Conclusion: Hereafter, in an aging society, the number of patients with these various urinary disturbances will increase, and the demand for treatment will also increase. Therefore, further study of diverse and standardized treatments and techniques is needed.
