
Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.69-94, 2017
  • Recently, growing demand for big data analysis has driven the vigorous development of related technologies and tools, while advances in IT and the spread of smart devices are producing enormous volumes of data. As a result, data analysis technology is rapidly becoming popular, attempts to gain insight through data analysis continue to increase, and big data analysis is expected to grow in importance across many industries. Big data analysis has generally been performed by a small number of experts and delivered to those who requested it. However, rising interest in big data analysis has stimulated computer programming education and the development of many analysis tools. The entry barriers to big data analysis are therefore gradually lowering and analysis technology is spreading, so that analysis is increasingly performed by the demanders themselves. At the same time, interest in various kinds of unstructured data, and text data in particular, keeps growing. New web-based platforms and techniques are mass-producing text data and prompting active attempts to analyze it, and the results of text analysis are being used in many fields. Text mining embraces the various theories and techniques for text analysis; among the many text mining techniques applied for different research purposes, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters. It is considered very useful because it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire corpus, so the whole document set must be analyzed at once to identify the topic of each document. When topic modeling is applied to a large number of documents, this requirement makes the analysis very time-consuming; moreover, processing time increases steeply with the number of analysis objects, creating a scalability problem. The problem is particularly noticeable when the documents are distributed across multiple systems or locations. To overcome it, a divide-and-conquer approach can be applied to topic modeling: a large document set is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method allows topic modeling on large document sets with limited system resources, improves processing speed, and can significantly reduce analysis time and cost because documents can be analyzed where they reside, without first being combined. Despite these advantages, however, the approach has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire set is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of such a procedure must be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this line of research has received comparatively little study relative to other work on topic modeling. In this paper, we propose a topic modeling approach that addresses both problems.
First, we divide the entire document set (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirms that the proposed methodology yields results similar to topic modeling over the entire set, and we also propose a reasonable method for comparing the results of the two approaches.
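The divide-and-conquer flow described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `extract_topics` is a trivial frequency-based placeholder standing in for a real topic model such as LDA, and the partition size and delegate count are arbitrary assumptions.

```python
# Sketch of divide-and-conquer topic modeling with a reduced global set (RGS).
from collections import Counter

def extract_topics(docs, n_topics=2):
    """Placeholder topic extractor: returns the n most frequent terms."""
    counts = Counter(w for d in docs for w in d.split())
    return [w for w, _ in counts.most_common(n_topics)]

def divide_and_conquer(global_set, n_partitions=2, delegates_per_set=1):
    # 1. Split the global set into local sets.
    size = (len(global_set) + n_partitions - 1) // n_partitions
    local_sets = [global_set[i:i + size]
                  for i in range(0, len(global_set), size)]
    # 2. Run topic modeling independently on each local set.
    local_topics = [extract_topics(ls) for ls in local_sets]
    # 3. Build the reduced global set from delegate documents of each local set.
    rgs = [doc for ls in local_sets for doc in ls[:delegates_per_set]]
    # 4. RGS topics approximate the global topics; local topics are then
    #    mapped onto them (here by simple term overlap).
    rgs_topics = extract_topics(rgs, n_topics=n_partitions)
    mapping = {i: [t for t in lt if t in rgs_topics]
               for i, lt in enumerate(local_topics)}
    return rgs_topics, local_topics, mapping
```

In a real pipeline each local set could live on a separate system, with only the delegate documents and topic descriptors exchanged.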

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae; Kim, Seonghyeon; Tak, Onsik; Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.95-118, 2017
  • Recently, centered on downtown areas, transactions in row houses and multiplex houses have become more active, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing is nevertheless a blind spot for real estate information, and the resulting information asymmetry, combined with changes in market size and demand, creates a social problem. Moreover, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korea Appraisal Board (hereafter, KAB) were drawn along administrative boundaries and have been used in existing real estate studies, but because they are urban-planning zones, they are not a district classification suited to real estate research. Building on prior work, this study finds that Seoul's spatial structure needs to be redefined when estimating future housing prices. We therefore attempt to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing; simple division by the existing administrative districts has proven inefficient. The aim of this study is thus to cluster Seoul into new areas that support more efficient real estate analysis. We apply a hedonic model to actual transaction price data for row and multiplex housing, and use the K-means clustering algorithm to cluster the spatial structure of Seoul. The data cover actual transaction prices of Seoul row and multiplex housing from January 2014 to December 2016, together with the official land value for 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Preprocessing involved removing underground transactions, standardizing prices per unit area, and removing outlier transaction cases (above 5 and below -5).
Preprocessing reduced the data from 132,707 to 126,759 cases; R was used as the analysis tool. After preprocessing, the data model was constructed: K-means clustering was performed first, then a regression analysis was conducted using the hedonic model, followed by a cosine similarity analysis. Based on this model, we clustered Seoul by longitude and latitude and compared the result with the existing districts. The goodness of fit of the model exceeded 75%, and the variables used in the hedonic model were significant. In other words, the existing administrative areas of 5 or 25 districts were reorganized into 16 districts. This study thus derives a clustering method for row and multiplex housing in Seoul that combines the K-means clustering algorithm with a hedonic model reflecting property price characteristics. The academic implications are that clustering by price characteristics improves on the areas used by the Seoul Metropolitan Government, KAB, and existing real estate research, and that, whereas apartments have been the main subject of prior real estate studies, this study proposes a method of classifying areas in Seoul using public Government 3.0 information (i.e., MOLIT transaction data). The practical implications are that the result can serve as baseline data for research on row and multiplex housing, that it is expected to stimulate such research, and that it should increase the accuracy of models of actual transactions.
Future research should conduct further analyses to overcome the limitations of this study and pursue the topic in greater depth.
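The two modeling steps described above, a hedonic price regression and K-means clustering on coordinates, can be sketched roughly as below. The data are synthetic stand-ins for the MOLIT transaction records and the variable names are illustrative assumptions; only k = 16 and the 75% goodness-of-fit threshold follow the abstract.

```python
# Hedonic regression + K-means clustering sketch on synthetic housing data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins for transaction records (Seoul-like coordinates).
lon = rng.uniform(126.8, 127.2, n)    # longitude
lat = rng.uniform(37.4, 37.7, n)      # latitude
area = rng.uniform(20, 85, n)         # exclusive area (m^2)
age = rng.uniform(0, 30, n)           # building age (years)
price = 3000 + 40 * area - 25 * age + rng.normal(0, 50, n)

# Hedonic model: price explained by property characteristics.
X = np.column_stack([area, age])
hedonic = LinearRegression().fit(X, price)
r2 = hedonic.score(X, price)          # goodness of fit (abstract: > 75%)

# K-means on longitude/latitude: 16 spatial clusters, as in the study.
km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(
    np.column_stack([lon, lat]))
labels = km.labels_                   # cluster assignment per transaction
```

In the study the clustering replaces the 5/25 administrative districts with 16 data-driven districts; here the synthetic prices make the hedonic fit trivially good.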

Construction of Consumer Confidence index based on Sentiment analysis using News articles (뉴스기사를 이용한 소비자의 경기심리지수 생성)

  • Song, Minchae; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.1-27, 2017
  • The economic sentiment index and macroeconomic indicators are known to be closely related because economic agents' judgments and forecasts of business conditions affect economic fluctuations. For this reason, consumer sentiment or confidence provides steady fodder for business and is treated as an important piece of economic information. In Korea, where private consumption accounts for a large share of GDP, the consumer sentiment index is a very important indicator for evaluating and forecasting the domestic economic situation. However, despite offering relevant insights into private consumption and GDP, the traditional survey-based approach to measuring consumer confidence has several limitations. One weakness is that researching, collecting, and aggregating the data takes considerable time; when urgent issues arise, timely information is not announced until the end of the month. In addition, the survey only captures information covered by its questionnaire items, so it can fail to catch the direct effects of newly arising issues, and it faces declining response rates and erroneous responses. It is therefore necessary to find a way to complement it. For this purpose, we construct and assess an index designed to measure consumer economic sentiment using sentiment analysis. Unlike survey-based measures, our index relies on textual analysis to extract sentiment from economic and financial news articles. Text data such as news articles and SNS posts are timely and cover a wide range of issues; because such sources can quickly capture the economic impact of specific economic issues, they have great potential as economic indicators. Of the two main approaches to automatic sentiment extraction from text, we apply the lexicon-based approach, using sentiment lexicon dictionaries of words annotated with their semantic orientations.
In creating the sentiment lexicon, we enter the semantic orientation of individual words manually, without attempting a full linguistic analysis (one involving word senses or argument structure); this is a limitation of our research, and further work in that direction remains possible. In this study, we generate a time series index of economic sentiment in the news. Construction of the index consists of three broad steps: (1) collecting a large corpus of economic news articles from the web, (2) applying lexicon-based sentiment analysis to score each article's orientation (positive, negative, or neutral), and (3) constructing a consumer economic sentiment index by aggregating the monthly time series of each sentiment word. In line with existing scholarly assessments of the relationship between the consumer confidence index and macroeconomic indicators, the new index must be assessed for its usefulness, which we do by comparing it with other economic indicators and the CSI. Trend and cross-correlation analyses are carried out to examine the relationships and lag structure, and forecasting power is analyzed with one-step-ahead out-of-sample prediction. In almost all experiments, the news sentiment index correlates strongly with related contemporaneous key indicators, and in most cases news sentiment shocks predict future economic activity; in head-to-head comparisons, the news sentiment measures outperform the survey-based CSI. Policy makers want to understand consumer and public opinion about existing or proposed policies.
Monitoring web media, SNS, and news articles enables government decision-makers to respond quickly to such opinions. Textual data from news articles and social networks (Twitter, Facebook, and blogs) are generated at high speed, cover a wide range of issues, and can quickly capture the economic impact of specific events. Although research using unstructured data in economic analysis is in its early stages, its use is expected to increase greatly once its usefulness is confirmed.
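The three construction steps can be made concrete with a small sketch. The six-word lexicon, the articles, and the index formula (positive share minus negative share per month) are illustrative assumptions; the paper's actual lexicon and aggregation are much richer.

```python
# Lexicon-based sentiment scoring and monthly index aggregation (sketch).
from collections import defaultdict

# Toy lexicon: +1 for positive orientation, -1 for negative.
LEXICON = {"growth": 1, "recovery": 1, "rise": 1,
           "recession": -1, "decline": -1, "crisis": -1}

def score_article(text):
    """Step 2: sum word orientations, then bucket the article's polarity."""
    s = sum(LEXICON.get(w, 0) for w in text.lower().split())
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"

def monthly_index(articles):
    """Step 3: articles is a list of (year_month, text) pairs.
    Index per month = (positive count - negative count) / total count."""
    counts = defaultdict(lambda: {"positive": 0, "negative": 0, "neutral": 0})
    for month, text in articles:
        counts[month][score_article(text)] += 1
    return {m: (c["positive"] - c["negative"]) / sum(c.values())
            for m, c in counts.items()}
```

Step 1 (collecting the corpus) is omitted; the resulting monthly series is what would be compared against the CSI and other indicators.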

Suggestion of Urban Regeneration Type Recommendation System Based on Local Characteristics Using Text Mining (텍스트 마이닝을 활용한 지역 특성 기반 도시재생 유형 추천 시스템 제안)

  • Kim, Ikjun; Lee, Junho; Kim, Hyomin; Kang, Juyoung
    • Journal of Intelligence and Information Systems, v.26 no.3, pp.149-169, 2020
  • "The Urban Renewal New Deal project," one of the government's major national projects, aims to develop underdeveloped areas by investing 50 trillion won in 100 locations in the first year and 500 over the next four years, and is drawing keen attention from the media and local governments. However, the project model fails to reflect the original characteristics of each area, as it divides project areas into only five categories: Our Neighborhood Restoration, Housing Maintenance Support, General Neighborhood, Central Urban, and Economic Base. The keywords for successful urban regeneration in Korea ("resident participation," "regional specialization," "ministerial cooperation," and "public-private cooperation") show that when local governments propose urban regeneration projects to the central government, it is most important to understand the characteristics of the city accurately and to carry out the projects in a way that suits those characteristics, with the help of local residents and private companies. In addition, considering gentrification, one of the side effects of urban regeneration projects, it is important to select and implement regeneration types suited to the characteristics of the area. To supplement the limitations of the Urban Renewal New Deal methodology, this study proposes a system that recommends urban regeneration types suitable for candidate sites using various machine learning algorithms, referring to the urban regeneration types of the '2025 Seoul Metropolitan Government Urban Regeneration Strategy Plan', which are defined by regional characteristics. Seoul distinguishes four types of urban regeneration: low-use and low-level development, abandonment, deteriorated housing, and specialization of historical and cultural resources (Shon and Park, 2017).
To identify regional characteristics, approximately 100,000 text records were collected for 22 regions where projects of the four urban regeneration types had been carried out. From these data we extracted key keywords for each region by regeneration type and conducted topic modeling to explore whether the types differ. The results confirmed that many topics related to real estate and the economy appeared in old residential areas, while declining and underdeveloped areas showed topics reflecting the industrial activity of their past. In historical and cultural resource areas, which contain traces of the past, many government-related keywords appeared, along with political topics and cultural topics arising from various events. Finally, low-use and underdeveloped areas produced many topics on real estate and accessibility; these are typically well-connected areas where development is planned or likely. Furthermore, a model was implemented that recommends urban regeneration types tailored to regional characteristics for regions outside Seoul. The model was built with machine learning, with training and test data randomly split at an 8:2 ratio. To compare performance across models, input variables were prepared in two ways (count vectors and TF-IDF vectors) and five classifiers were applied: SVM (Support Vector Machine), Decision Tree, Random Forest, Logistic Regression, and Gradient Boosting, giving a comparison of ten models in total. The best-performing model was Gradient Boosting with TF-IDF input, with an accuracy of 97%.
The recommendation system proposed in this study is therefore expected to recommend urban regeneration types based on the regional characteristics of new project sites as urban regeneration projects are carried out.
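The best configuration reported above (TF-IDF vectors fed to a Gradient Boosting classifier, 8:2 split) can be sketched as below. The mini corpus and four type labels are synthetic placeholders, not the study's 100,000-record dataset, so the perfect accuracy on this toy data says nothing about the reported 97%.

```python
# TF-IDF + Gradient Boosting pipeline sketch with an 8:2 train/test split.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic documents, one distinctive vocabulary per regeneration type.
texts = (["old housing price estate"] * 10
         + ["factory industry decline"] * 10
         + ["heritage culture history"] * 10
         + ["transit access development"] * 10)
labels = (["deteriorated"] * 10 + ["abandoned"] * 10
          + ["historical"] * 10 + ["low-use"] * 10)

# 8:2 random split, stratified by type, as described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels)

model = make_pipeline(TfidfVectorizer(),
                      GradientBoostingClassifier(random_state=0))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)  # accuracy on the held-out 20%
```

Swapping `TfidfVectorizer` for `CountVectorizer` and the classifier for the other four models reproduces the study's 2x5 comparison grid.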

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems, v.26 no.3, pp.109-125, 2020
  • The Ministry of National Defense pursues the Defense Acquisition Program to build strong defense capabilities, spending more than 10 trillion won annually on force improvement. Because the program is directly related to national security as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of related laws and regulations has made it difficult for many working-level officials to carry out the program smoothly; many reportedly discover relevant regulations they were unaware of only after their work is under way. In addition, statutory statements related to the program can cause serious problems if even a single expression within a sentence is wrong. Despite this, efforts to build a sentence comparison system that corrects such issues in real time have been minimal. This paper therefore proposes an implementation plan for a "Comparison System between the Statements of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program documents and the related statutory provisions, classify the risk of illegality, and make users aware of the consequences. Several neural network models (Bi-LSTM, Self-Attention, and D_Bi-LSTM) were studied using 3,442 pairs of an "original sentence" (as it appears in the statutes) and an "edited sentence" (derived from the original).
Among the many statutes related to the Defense Acquisition Program, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The original sentences comprise the 83 clauses most accessible to working-level officials in their work, and for each clause 30 to 50 edited sentences were written as variants likely to appear, in modified form, in military reports. The edited sentences were produced by modifying the originals according to 12 rules, in proportion to the number of original sentences affected by each rule. One-to-one sentence similarity experiments showed that each edited sentence could be classified as legal or illegal with considerable accuracy. However, because the edited-sentence dataset is characterized by those 12 rules, models trained only on the original and edited sentences could not effectively classify other sentences of the kind that appear in actual military reports; the dataset is not ample enough for the models to recognize new incoming sentences. The models' performance was therefore reassessed on an additional 120 newly written sentences that more closely resemble those in actual military reports while remaining associated with the original sentences. Even when trained only on the original and edited sentence data, the models' performance on these new sentences exceeded a certain level.
If sufficient learning is achieved by improving and expanding the training data with sentences that actually appear in reports, the models will be able to classify sentences from military reports as legal or illegal more accurately. Based on the experimental results, this study confirms the feasibility and value of building a real-time automated comparison system between military documents and related laws. The system can identify which specific clause, among the several that appear in the related laws, is most similar to a sentence in a Defense Acquisition Program report, helping determine whether the report's contents risk illegality when compared with the statutory clauses.
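The comparison pipeline can be illustrated with a drastically simplified stand-in: the paper's Siamese Bi-LSTM/Self-Attention encoders are replaced here by a shared bag-of-words encoding with cosine similarity, and the 0.5 threshold is an arbitrary assumption, purely to make the report-sentence vs. statute-clause matching concrete.

```python
# Siamese-style comparison sketch: both sentences pass through the SAME
# encoder, and their similarity decides the match. A real system would use
# the learned Bi-LSTM encoder; a term-count vector stands in for it here.
import math
from collections import Counter

def encode(sentence):
    """Shared encoder (the 'Siamese' twin): here, a term-count vector."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def legality_check(report_sentence, statute_sentences, threshold=0.5):
    """Return the most similar statute clause and whether the similarity
    clears the threshold (a low score would flag the sentence for review)."""
    scored = [(cosine(encode(report_sentence), encode(s)), s)
              for s in statute_sentences]
    best_score, best_clause = max(scored)
    return best_clause, best_score >= threshold
```

In the study the comparison is 1:1 between each edited sentence and its statutory original; the function above generalizes that to finding the closest clause among many.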

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won; Park, Sei-Kwon; Ryu, Seung-Wan; Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems, v.20 no.1, pp.149-161, 2014
  • The export of domestic public services to overseas markets faces many potential obstacles, stemming from different export procedures, target services, and socio-economic environments. To alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument for supporting the decisions of participants and stakeholders. In this paper, we propose an ontology model and its implementation process for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with a vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service.
The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation. Sub-attributes of requirements are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase. The activity category also has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business law, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. 
The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. A priority list of target services for a certain country and/or a priority list of target countries for a certain public service is generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and government policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability indexes of enterprises. For financial packages, a mix of various foreign aid funds can be simulated at this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market, and could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
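The matching step that generates the priority lists can be sketched with toy attribute sets. Every service, country, and attribute value below is a hypothetical example, and the overlap score is a simplistic stand-in for the paper's matching algorithm and gap analysis.

```python
# Toy ontologies as attribute dictionaries, plus a matching algorithm that
# ranks target countries for a given public service (the 'priority list').

services = {
    "e-procurement": {"requirements": {"broadband", "e-payment"},
                      "constraints": {"procurement-law"}},
    "e-customs":     {"requirements": {"broadband", "port-system"},
                      "constraints": {"customs-law"}},
}

countries = {
    "country-A": {"infrastructure": {"broadband", "e-payment"},
                  "law": {"procurement-law", "customs-law"}},
    "country-B": {"infrastructure": {"port-system"},
                  "law": {"customs-law"}},
}

def match_score(service, country):
    """Fraction of the service's needs that the country ontology covers."""
    covered = country["infrastructure"] | country["law"]
    needs = service["requirements"] | service["constraints"]
    return len(needs & covered) / len(needs)

def priority_list(service_name):
    """Rank target countries for one service, best match first."""
    svc = services[service_name]
    return sorted(countries,
                  key=lambda c: match_score(svc, countries[c]),
                  reverse=True)
```

In the platform, the uncovered needs (the set difference) would feed the gap analysis and the consortium-formation simulation.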

Effects of Recipient Oocytes and Electric Stimulation Condition on In Vitro Development of Cloned Embryos after Interspecies Nuclear Transfer with Caprine Somatic Cell (수핵난자와 전기적 융합조건이 산양의 이종간 복제수정란의 체외발달에 미치는 영향)

  • 이명열; 박희성
    • Reproductive and Developmental Biology, v.28 no.1, pp.21-27, 2004
  • This study was conducted to investigate the developmental ability of caprine embryos produced by interspecies somatic cell nuclear transfer. Recipient bovine and porcine oocytes were obtained from a slaughterhouse and matured in vitro according to established protocols. Donor cells were obtained from a caprine ear-skin biopsy, digested with 0.25% trypsin-EDTA in PBS, and primary fibroblast cultures were established in TCM-199 with 10% FBS. The matured oocytes were dipped in D-PBS plus 10% FBS, 7.5 µg/ml cytochalasin B, and 0.05 M sucrose. Enucleation was accomplished by aspirating the first polar body and the portion of cytoplasm containing the metaphase II chromosomes, using a micropipette with an outer diameter of 20-30 µm. A single donor cell was transferred into the perivitelline space of each enucleated oocyte. The reconstructed oocytes were electrofused in 0.3 M mannitol fusion medium, and after electrofusion the embryos were activated by electric stimulation. Interspecies nuclear transfer embryos with bovine cytoplasts were cultured for 7 to 9 days in TCM-199 medium supplemented with 10% FBS and bovine oviduct epithelial cells; those with porcine cytoplasts were cultured for 6 to 8 days in NCSU-23 medium supplemented with 10% FBS, at 39°C in 5% CO₂ in air. For interspecies nuclear transfer with recipient bovine oocytes, fusion was performed at field strengths of 1.95 kV/cm and 2.10 kV/cm; there was no significant difference between the two in fusion rate (47.7% and 44.6%) or cleavage rate (41.9% and 54.5%). For caprine-porcine NT oocytes at 1.95 kV/cm and 2.10 kV/cm, there was likewise no significant difference between the two treatments in fusion rate (51.3% and 46.1%) or cleavage rate (75.0% and 84.9%). The fusion rate of caprine-bovine NT oocytes was lower (P<0.05) with 1 pulse of 60 µsec (19.3%) than with 1 pulse of 30 µsec (50.8%) or 2 pulses of 30 µsec (31.0%).
The cleavage rate was higher (P<0.05) with 1 pulse of 30 µsec (53.3%) and 2 pulses of 30 µsec (50.0%) than with 1 pulse of 60 µsec (18.2%). For caprine-porcine NT oocytes, the fusion rate was 48.1% with 1 pulse of 30 µsec, 45.2% with 2 pulses of 30 µsec, and 48.6% with 1 pulse of 60 µsec; the cleavage rate was higher (P<0.05) with 1 pulse of 30 µsec (78.4%) and 1 pulse of 60 µsec (79.4%) than with 2 pulses of 30 µsec (53.6%). For caprine-bovine NT embryos, the rate of development to the morula and blastocyst stages was 22.6% for interspecies nuclear transfer and 30.6% for parthenotes, with no significant difference. For caprine-porcine NT embryos, the rate of development to the morula and blastocyst stages was lower (P<0.05) for interspecies nuclear transfer (5.1%) than for parthenotes (37.4%).

Correlation analysis of radiation therapy position and dose factors for left breast cancer (좌측 유방암의 방사선치료 자세와 선량인자의 상관관계 분석)

  • Jeon, Jaewan;Park, Cheolwoo;Hong, Jongsu;Jin, Seongjin;Kang, Junghun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.29 no.1
    • /
    • pp.37-48
    • /
    • 2017
  • Purpose: One of the most basic requirements of radiation therapy is to prevent unnecessary exposure of normal tissue, so it is important to evaluate the dose delivered to the lung and heart during radiation therapy for breast cancer. This study therefore compares the dose factors of normal tissue according to radiation treatment position and, through analysis of the correlations between them, seeks an effective radiation treatment position for breast cancer. Materials and Methods: Computed tomography was performed on 30 patients with left breast cancer in the supine and prone positions, and computerized treatment plans were established with the Eclipse Treatment Planning System (ver. 11). The doses delivered to normal tissue in each position were compared using DVHs. Based on these results, the dose factors of each normal tissue were analyzed with SPSS (ver. 18): correlations between the variables were examined, and associations were tested with the independent-samples t-test. Finally, the HI and CI values for the supine and prone positions were compared using MIRADA RTx (ver. ad 1.6). Results: In the supine position, the computerized treatment plans showed a lung V20 of $16.5{\pm}2.6%$, V30 of $13.8{\pm}2.2%$ and mean dose of $779.1{\pm}135.9cGy$ (absolute value); in the prone position the corresponding values were $3.1{\pm}2.2%$, $1.8{\pm}1.7%$ and $241.4{\pm}138.3cGy$. The prone position showed lower doses overall, with a mean lung dose 537.7 cGy lower. For the heart, V30 was $8.1{\pm}2.6%$ and $5.1{\pm}2.5%$, and the mean dose was $594.9{\pm}225.3$ and $408{\pm}183.6cGy$, in the supine and prone positions respectively. In the statistical analysis, Cronbach's alpha as a reliability index was 0.563. Correlation analysis between the variables showed correlations of about 0.89 or more between position and the lung dose factors, indicating a high correlation; for the heart, the correlations with V30 (0.488) and mean dose (0.418) were weaker. 
Finally, the independent-samples t-test showed that the differences between positions in the dose factors of both lung and heart were significant at the 99% confidence level. Conclusion: State-of-the-art linear accelerators and a variety of treatment-planning technologies are currently being developed for radiation therapy, and protection of the normal tissue around the PTV is a basic premise of that development. Treating a breast cancer patient in the prone position certainly takes more time and raises problems with set-up reproducibility. Nevertheless, as the experimental results show, the prone position can reduce the dose delivered to the lungs and heart. In conclusion, given sufficient treatment time and correct position verification, radiation treatment in the prone position will be more effective for the patient.


Diagnosis of the Field-Grown Rice Plant -[1] Diagnostic Criteria by Flag Leaf Analysis- (포장재배(圃場栽培) 수도(水稻)의 영양진단(營養診斷) -1. 지엽분석(止葉分析)에 의(依)한 진단(診斷)-)

  • Park, Hoon
    • Applied Biological Chemistry
    • /
    • v.16 no.1
    • /
    • pp.18-30
    • /
    • 1973
  • The flag and lower leaves (4th or 5th) of rice plants from NPK simple-trial fields and from three low-productivity areas were analyzed in order to establish diagnostic criteria for nutritional status at harvest. 1. Nutrient contents in the leaves from no-fertilizer, minus-nutrient and fertilizer plots revealed criteria for induced deficiency (severe deficiency induced by other nutrients), deficiency (below the critical concentration), insufficiency (hidden-hunger region), sufficiency (luxury-consumption stage) and excess (harmful or toxic level). 2. Nitrogen contents for the above five statuses were less than 1.0%, 1.0 to 1.2, 1.2 to 1.6, 1.6 to 1.9 and greater than 1.9, respectively. 3. For phosphorus $(P_2O_5)$ they were less than 0.3%, 0.3 to 0.4, 0.4 to 0.55 and greater than 0.55, but the excess level was not clear. 4. For potassium they were below 0.5%, 0.5 to 0.9, 0.9 to 1.2, 1.2 to 1.4 and above 1.4. 5. For silicate $(SiO_2)$ they were below 4%, 4 to 6, 6 to 11 and above 11, and no excess appeared. 6. Potassium in the flag leaf seemed to crowd nitrogen out to the ear, resulting in better growth of the ear through inhibition of overgrowth of the flag leaf. 7. Phosphorus accelerated the transport of Mg, Si, Mn and K, in this order, from the lower leaf to the flag leaf, and retarded that of Ca and N, in this order, at flowering; potassium accelerated transport in the order of Mn and Ca, and retarded it in the order of Mg, Si, P and N, at the milky stage. 8. The transport acceleration index (TAI), expressed as $(F_2L_1-F_1L_2)\;100/F_1L_1$, where F and L stand for the content of another nutrient in the flag and lower leaf and the subscripts indicate the rate of the applied nutrient, appears suitable for expressing the effect of one nutrient on the translocation of others. 9. The content of silicate $(SiO_2)$ in the flag leaf was lower than that of the lower leaf in early-season cultivation, indicating hindrance of translocation or absorption; the reverse held in normal-season cultivation. 10. 
The infection rate of Helminthosporium, which frequently occurred in potassium-deficient fields, seemed to be related more to the silicate and nitrogen contents of the flag leaf than to its potassium content. 11. Deficiency of one nutrient occurred simultaneously with deficiency of a few others. 12. Nutritional disorders under field conditions seem mainly attributable to macronutrients; the role of micronutrients appears to be nil or secondary.
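The TAI formula in point 8 can be made concrete with a small numeric example. The formula itself comes from the abstract; the leaf contents below are invented for illustration:

```python
# Transport acceleration index (TAI) from point 8:
#   TAI = (F2*L1 - F1*L2) * 100 / (F1*L1)
# F, L: content of another nutrient in the flag and lower leaf;
# subscripts 1, 2: lower and higher application rate of the nutrient of interest.

def tai(f1: float, l1: float, f2: float, l2: float) -> float:
    return (f2 * l1 - f1 * l2) * 100 / (f1 * l1)

# Hypothetical Mg contents (% dry weight): flag/lower leaf at the low
# application rate (f1, l1) and at the high rate (f2, l2).
print(round(tai(0.20, 0.30, 0.26, 0.28), 1))  # -> 36.7
```

A positive TAI means the applied nutrient shifted the other nutrient toward the flag leaf (accelerated transport); a negative value means retarded transport, matching the accelerate/retard language of point 7.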


A study of compaction ratio and permeability of soil with different water content (축제용흙의 함수비 변화에 의한 다짐율 및 수용계수 변화에 관한 연구)

  • 윤충섭
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.13 no.4
    • /
    • pp.2456-2470
    • /
    • 1971
  • Compaction of soil is very important for the construction of soil structures such as highway fills and the embankments of reservoirs and sea dikes. With increasing compaction effort, the strength of the soil, its internal friction and its cohesion increase greatly, while the reduction of permeability is evident. Factors which may influence compaction effort are moisture content, grain size, grain-size distribution and other physical properties, as well as the method of compaction; among these parameters the moisture content is the most important. To achieve the maximum density for a given soil, the corresponding optimum water content is required. If the water content deviates even slightly from the optimum, the compaction ratio decreases and the corresponding mechanical properties change markedly. The results of this study of soil compaction with different water contents are summarized as follows. 1) The maximum dry density increased and the corresponding optimum moisture content decreased with an increasing coarse-grain fraction, and the compaction curve was steeper than with an increasing fine-grain fraction. 2) The maximum dry density decreased with increasing optimum water content, with the relationship $r_{d\,max}=2.232-0.02785\,W_0$; this relationship changes to $r_d=ae^{-bw}$ when the water content deviates from the optimum. 3) For most soils a dry condition is better than a wet condition for applying compactive effort, but the wet condition is preferable when the liquid limit of the soil exceeds 50 percent. 4) The compaction ratio of cohesive soil is greater than that of cohesionless soil even when the coarse-grain fractions are the same. 5) The relationship between the maximum dry density and porosity is $r_{d\,max}=2.186-0.872e$, but it changes to $r_d=ae^{be}$ when the water content varies from the optimum water content. 
6) The void ratio increased with increasing optimum water content as $n=15.85+1.075\,w$, but the relation becomes $n=ae^{bw}$ if the water content varies. 7) The increase in permeability is large when the soil is highly plastic or coarse. 8) The coefficient of permeability of soil compacted in a wet condition is lower than that of soil compacted in a dry condition. 9) Cohesive soil has higher permeability than cohesionless soil even when the coarse-particle fractions are the same. 10) In general, a soil with a high optimum water content has a lower coefficient of permeability than one with a low optimum water content. 11) The coefficient of permeability has certain relations with density, gradation and void ratio, and it increases with increasing degree of saturation.
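The exponential model $r_d=ae^{-bw}$ in point 2) can be fitted by taking logarithms, which turns it into the straight line $\ln r_d = \ln a - bw$. A minimal standard-library sketch on invented (water content, dry density) pairs, not the study's actual data:

```python
# Fitting r_d = a * exp(-b * w) via linear least squares on ln(r_d) = ln(a) - b*w.
# The (w, rd) data below are invented for illustration; only the model
# form comes from the abstract.
import math

w  = [10.0, 12.0, 14.0, 16.0, 18.0]   # water content (%)
rd = [1.95, 1.88, 1.81, 1.74, 1.68]   # dry density (t/m^3)

y = [math.log(v) for v in rd]
n = len(w)
wm, ym = sum(w) / n, sum(y) / n
slope = sum((wi - wm) * (yi - ym) for wi, yi in zip(w, y)) / \
        sum((wi - wm) ** 2 for wi in w)
b = -slope                       # decay constant of the exponential
a = math.exp(ym - slope * wm)    # intercept back-transformed from ln(a)
print(f"r_d = {a:.3f} * exp(-{b:.4f} * w)")
```

The fitted curve should reproduce the input points closely, which is a quick sanity check that the linearization was applied correctly.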
