• Title/Summary/Keyword: Multiple object

Search results: 1,031 (processing time: 0.026 seconds)

Investigation of Scatter and Septal Penetration in I-131 Imaging Using GATE Simulation (GATE 시뮬레이션을 이용한 I-131 영상의 산란 및 격벽통과 보정방법 연구)

  • Jung, Ji-Young;Kim, Hee-Joung;Yu, A-Ram;Cho, Hyo-Min;Lee, Chang-Lae;Park, Hye-Suk
    • Progress in Medical Physics / v.20 no.2 / pp.72-79 / 2009
  • Scatter correction for I-131 plays an important role in improving image quality and quantitation. I-131 emits gamma rays at multiple, relatively high energies. Image quality and quantitative accuracy in I-131 imaging are degraded by object scatter as well as by scatter and septal penetration in the collimator. The purpose of this study was to estimate scatter and septal penetration and to investigate two scatter correction methods using Monte Carlo simulation. The gamma camera simulated in this study was a FORTE system (Philips, The Netherlands) with a high-energy, general-purpose, parallel-hole collimator. We simulated two types of high-energy collimators: one composed of lead, and the other composed of an artificial material with high atomic number (Z) and high density. We simulated the energy spectrum using a point source in air, and estimated the full width at half maximum (FWHM) and full width at tenth maximum (FWTM) of the line spread function (LSF) in a cylindrical water phantom. We applied two scatter correction methods: triple energy window (TEW) scatter correction and extended triple energy window (ETEW) scatter correction. The TEW method is a pixel-by-pixel correction that is easy to implement clinically; it estimates scatter from abutted scatter-rejection windows and can therefore overestimate or underestimate the scatter. The ETEW method is a modification of the TEW method that corrects for this. The FWHM and FWTM were estimated as 41.2 mm and 206.5 mm for the lead collimator, and as 27.3 mm and 45.6 mm for the artificial high-Z, high-density collimator, respectively. The ETEW estimates of the scatter components were close to the true scatter components. In conclusion, correction for septal penetration and scatter is important to improve image quality and quantitative accuracy in I-131 imaging, and the ETEW scatter correction method appears to be useful in I-131 imaging.

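The abstract names the TEW method but does not spell out its formula. Below is a minimal Python sketch of the standard triple-energy-window estimate, in which the scatter under the photopeak window is approximated as a trapezoid spanned by two narrow sub-windows abutting it; the window widths and count values are hypothetical, and the ETEW variant, which re-weights the sub-window estimates, is not implemented here.

```python
import numpy as np

def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main):
    """Triple-energy-window (TEW) scatter estimate for the photopeak window.

    c_lower, c_upper : counts (scalars or pixel arrays) in the narrow sub-windows
                       abutting the photopeak window on each side
    w_lower, w_upper : widths (keV) of those sub-windows
    w_main           : width (keV) of the main photopeak window
    """
    # Trapezoidal approximation of the scatter component under the photopeak
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

# Example: per-pixel correction of a small projection image (hypothetical numbers)
c_main = np.array([[120.0, 95.0], [88.0, 140.0]])   # 364 keV photopeak window counts
c_low  = np.array([[ 20.0, 14.0], [12.0,  25.0]])   # lower sub-window counts
c_high = np.array([[ 10.0,  8.0], [ 7.0,  12.0]])   # upper sub-window counts

scatter = tew_scatter_estimate(c_low, c_high, w_lower=6.0, w_upper=6.0, w_main=73.0)
primary = np.clip(c_main - scatter, 0.0, None)       # subtract scatter, clamp negatives
print(primary)
```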

A Study for Alexithymia in the Patients with Panic Disorder (공황장애환자에서 감정표현불능증에 대한 연구)

  • Choi, Young-Hee;Jang, Hyuck-Jin;Kim, Min-Sook
    • Korean Journal of Psychosomatic Medicine / v.14 no.1 / pp.53-61 / 2006
  • Objectives: This study was designed to evaluate differences in alexithymia between panic patients and normal controls by examining the relationships between the components of the alexithymia construct and levels of anxiety and depression in both groups. Methods: The subjects were 167 patients who met DSM-IV criteria for panic disorder and 110 normal controls. They completed symptom checklists and self-rating scales and were assessed with the Anxiety Disorders Interview Schedule-Panic attack & Agoraphobia (ADIS-P & A), the Korean version of the Toronto Alexithymia Scale (TAS-20K), the Spielberger State-Trait Anxiety Inventory-State & Trait (STAI-S & T), the Beck Depression Inventory (BDI), and the Revised Anxiety Sensitivity Index (ASI-R). For statistical analysis, t-tests were performed to compare the sociodemographic characteristics and the scores of the self-reported scales between panic patients and normal controls. Pearson correlations were computed between the TAS-20K and its subfactors, the STAI-S & T, the ASI-R, and the BDI in panic patients and normal controls, and stepwise multiple regression analysis was performed to explain the correlation results for alexithymia. Results: The panic patients were significantly more alexithymic (p<0.001) and reported more difficulty identifying feelings (p<0.001) and describing feelings (p=0.001) than normal controls. Furthermore, panic patients were significantly more anxious, more sensitive to anxious feelings, and more depressive than normal controls. The alexithymia of panic patients was explained by trait anxiety $(\Delta R^2=0.255)$ and anxiety sensitivity $(\Delta R^2=0.062)$, whereas that of normal controls was predicted by depression $(\Delta R^2=0.144)$ and anxiety sensitivity $(\Delta R^2=0.033)$. Conclusion: The panic patients were more anxious and more sensitive to anxious feelings, and these symptoms predicted alexithymia in panic patients. However, the alexithymia of normal controls was explained by depression more than by anxiety sensitivity; this result is not consistent with previous studies and may be mainly due to differences in the homogeneity of the study populations.

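For readers unfamiliar with the $\Delta R^2$ values reported above, the following sketch shows, on simulated data, how the gain in explained variance is computed when predictors are entered hierarchically; the variable names and numbers are hypothetical and this is not the study's actual stepwise procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-ins for the questionnaire scores (hypothetical data)
rng = np.random.default_rng(0)
n = 167
df = pd.DataFrame({
    "trait_anxiety":       rng.normal(50, 10, n),   # STAI-T-like score
    "anxiety_sensitivity": rng.normal(30,  8, n),   # ASI-R-like score
})
df["alexithymia"] = (                               # TAS-20K-like outcome
    0.6 * df["trait_anxiety"] + 0.2 * df["anxiety_sensitivity"] + rng.normal(0, 8, n)
)

def r_squared(y, X):
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

# Hierarchical entry: Delta R^2 is the gain in R^2 when a predictor is added
r2_step1 = r_squared(df["alexithymia"], df[["trait_anxiety"]])
r2_step2 = r_squared(df["alexithymia"], df[["trait_anxiety", "anxiety_sensitivity"]])
print(f"Delta R^2 (trait anxiety):       {r2_step1:.3f}")          # gain over the null model
print(f"Delta R^2 (anxiety sensitivity): {r2_step2 - r2_step1:.3f}")
```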

Mapping Categories of Heterogeneous Sources Using Text Analytics (텍스트 분석을 통한 이종 매체 카테고리 다중 매핑 방법론)

  • Kim, Dasom;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.193-215 / 2016
  • In recent years, the proliferation of diverse social networking services has led users to use many mediums simultaneously, depending on their individual purposes and tastes. While collecting information about particular themes, they usually employ various mediums such as social networking services, Internet news, and blogs. From a management standpoint, however, each document circulated through these mediums is placed in different categories on the basis of each source's policy and standards, hindering any attempt to conduct research on a specific category across different kinds of sources. For example, documents containing content on "Application for a foreign travel" can be classified into "Information Technology," "Travel," or "Life and Culture" according to the particular standard of each source. Likewise, because each source defines and structures its categories from its own viewpoint and at its own level of detail, similar categories can be named and organized differently across sources. To overcome these limitations, this study proposes a method for mapping categories between sources with different mediums while keeping each medium's existing category system intact. Specifically, by re-classifying individual documents from the viewpoints of the other sources and storing the results of such classification as extra attributes, this study proposes a logical layer through which users can search for a specific document from multiple heterogeneous sources with different category names as if they belonged to the same source. In addition, 6,000 news articles were collected from two Internet news portals, and experiments were conducted to compare accuracy across sources, between supervised and semi-supervised learning, and between homogeneous and heterogeneous learning data. It is particularly interesting that, for some categories, the classification accuracy of semi-supervised learning using heterogeneous learning data proved to be higher than that of supervised and semi-supervised learning using homogeneous learning data. This study has the following significance. First, it proposes a logical scheme for integrating and managing heterogeneous mediums with different classification systems while keeping the existing physical classification systems as they are. The results exhibit very different classification accuracies depending on the heterogeneity of the learning data, which is expected to spur further studies on enhancing the performance of the proposed methodology through analysis of category-level characteristics. In addition, with the increasing demand for searching, collecting, and analyzing documents from diverse mediums, Internet search is no longer restricted to one medium. However, since each medium has its own categorical structure and names, it is very difficult to search a specific category across heterogeneous mediums. The proposed methodology is therefore also significant in that it allows users, once they select a desired site, to query documents from all sources according to that site's classification standard, while keeping each site's own characteristics and structure intact. The proposed methodology needs to be further complemented in the following respects. First, since only an indirect comparison and evaluation of its performance was made, future studies need to test its accuracy more directly. That is, after re-classifying the documents of a target source on the basis of the category system of an existing source, the accuracy of that classification needs to be verified through evaluation by actual users. In addition, classification accuracy needs to be increased by making the methodology more sophisticated. Furthermore, the characteristics of the categories for which heterogeneous semi-supervised learning showed higher classification accuracy than supervised learning need to be understood, as they may assist in obtaining heterogeneous documents from diverse mediums and in devising ways to enhance the accuracy of document classification.
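
As an illustration of the core idea of re-classifying one source's documents under another source's category system and storing the result as an extra attribute (the "logical layer"), here is a minimal supervised sketch using scikit-learn; the documents, labels, and field names are hypothetical, and the paper's semi-supervised variant is not shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: documents from source A, labeled with A's own categories
docs_a   = ["mobile app for booking overseas flights",
            "new smartphone OS update released",
            "best museums to visit in Rome"]
labels_a = ["Information Technology", "Information Technology", "Travel"]

# Documents from source B, which uses a different category scheme of its own
docs_b = ["guide to applying for a foreign travel visa online",
          "review of a budget airline's mobile check-in app"]

# Train a classifier on A's category system ...
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(docs_a, labels_a)

# ... and re-classify B's documents from A's viewpoint; the prediction is stored as an
# extra attribute, leaving B's own physical categories untouched
mapped = [{"text": d, "category_as_seen_by_A": c} for d, c in zip(docs_b, clf.predict(docs_b))]
for record in mapped:
    print(record)
```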

Analysis of User's Satisfaction to the Small Urban Spaces by Environmental Design Pattern Language (환경디자인 패턴언어를 통해 본 도심소공간의 이용만족도 분석에 관한 연구)

  • 김광래;노재현;장동주
    • Journal of the Korean Institute of Landscape Architecture / v.16 no.3 / pp.21-37 / 1989
  • The environmental design patterns of nine small urban spaces in the C.B.D. of Seoul were surveyed, and user satisfaction and behavior were analyzed as an environmental design evaluation using Christopher Alexander's Pattern Language. Small urban spaces, as part of the streetscape, are formed by physical factors as well as by the visual environment and interacting user behavior. Therefore, user satisfaction and behavior at the nine small urban spaces were investigated while further exploring the possibilities of applying those pattern languages. A pattern language has the structure of a network. It is used in sequence, going through the patterns, moving always from large patterns to smaller ones. Its power comes simply from the observation that most of the wonderful places of the city were made not by architects but by the people. It defines the limited number of arrangements of space that make sense in any given culture, and it actually gives us the power to generate these coherent arrangements of space. As a result, design patterns related to 'Plaza', 'Seats', and 'Accessibility' were highly evaluated by pattern frequency, pattern interaction, and their composition ranks, thus reconfirming Whyte's praise of small urban spaces in our inner-city design environments. According to a multiple regression analysis of users' evaluations, the environmental functions related to satisfaction were 'Plaza', 'Accessibility', and 'Paving'. According to the free responses, users prefer visually pleasing environmental design objects such as 'Waterscape' and 'Setting'. In addition, the basic needs in small urban spaces are amenity facilities such as benches, drinking water, and shade for rest.


An Analysis of the Cognition of Professionals Regarding the Validity of Planting Design Change that Occurred in the Landscape Construction of a Major Private Company (민간기업 조경공사에서 나타나는 식재설계 변경 타당성에 대한 전문가 인식 분석)

  • Park, Jae-Young;Cho, Se-Hwan
    • Journal of the Korean Institute of Landscape Architecture / v.42 no.6 / pp.101-110 / 2014
  • This study classifies the types of design changes that occurred in apartment landscape planting construction completed by a major private company and analyzes their validity, with the aim of providing basic data for efficiently managing the design changes that arise in landscape planting work in the future. The results are as follows. First, design changes in planting construction accounted for 61.8% of the design changes that occurred in the apartment landscape construction carried out in the private sector, indicating that planting is the major area of design change. Second, the causes of such design changes were associated with field conditions such as an insufficient main construction period. In particular, because changes were often requested verbally, 7 to 48 individual design changes appeared per design change approval review, and design changes in planting construction tended to occur in multiple instances at the same time. Third, the seven types of design changes in planting design were delineated as 'design changes for consideration of the user', 'design changes for image improvement', 'design changes for ease of maintenance', 'design changes due to the mismatch of the design statement', 'design changes due to the relationship with other work types', 'design changes due to lack of field study', and 'design changes due to the consideration of feasibility.' Fourth, 'design changes for consideration of the user' and 'design changes for image improvement' together accounted for more than half of the overall frequency of changes, which differed from the results reported for public corporations. Fifth, in the design change process for planting construction, private companies were found to follow the practice of constructing first and obtaining approval afterward in order to save construction time and cost, whereas public corporations exhibited a different aspect, following a design change procedure in which changes are reflected after the design change events have occurred in the field. The above results indicate that the seven types of planting design change are the same for public enterprises and the private sector, but that the main reasons for and procedures of design change differ. In applying design changes, it may be desirable to allow flexibility, from the standpoint of rationality and efficiency, depending on the nature of the landscape construction.

A Semantic Interpretation of the Design Language in the ChwuiseokJeong Wonlim of Gochang - Focusing on the Allegory and Mimesis in 'Chwuiseok' and 'Chilseongam' - (취석정원림에 담긴 조형언어의 의미론적 해석 - '취석'과 '칠성암'에 담긴 알레고리와 미메시스를 중심으로 -)

  • Rho, Jae-Hyun;Lee, Hyun-Woo;Lee, Jung-Han
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.30 no.1 / pp.76-89 / 2012
  • This study aimed at carrying out a semantic interpretation of the core design language that appears to have deeply influenced the creation of the ChwuiseokJeong wonlim of Gochang. In particular, it aimed to infer how the spiritual culture of seclusion in the 16th century influenced the creation of the wonlim by understanding the metaphor and symbolism of keywords such as Eunil (隱逸: seclusion), Chwuiseok (醉石), and Chilseongam (七星巖), and by grasping the meanings transmitted and received by the creator and the people concerned. 'Building up a wall' was intentionally carried out in order to represent the 'Seven Stars (the Big Dipper)' inside the wonlim. This is a kind of two-dimensional 'enframement' and a result of the active creation of a meaningful landscape. From Chilseongam, which was created by assembling rocks, we presumed that the creator, Kyung-Hee Kim, Nohgye (蘆溪), expressed the astronomical awareness and thought of a Confucian scholar, namely that the ChwuiseokJeong Wonlim where he secluded himself is the center of the universe. The interpretation of words in Nohgyezip, an anthology, showed that the articles and writings of Nohgye, his descendants, and the people associated with ChwuiseokJeong mention drinking, Chwuiseok, Yeon-Myung Do, and Yuli (栗里), where Do secluded himself; this means that Nohgye ranked himself with Do, because Nohgye also lived in peace by drinking and enjoying nature as Do did. 'Drinking' expressed the mind of Nohgye, who wanted to be free and to have the joy of enjoying mountains, water, and their landscape as Do did. In other words, 'Drinking' is the symbol of freedom that makes him forget himself and equate himself with nature. These are the representation, imitation, and mimesis of respecting Yeon-Myung Do. As the allegory of 'saying one thing by means of another' suggests, it is possible to read 'Chwuiseok', which came from the story of Yeon-Myung Do, in multiple ways: superficially it refers to 'a rock on which he lay when he was drinking', but it can also be interpreted as 'an object' that made him forget his personal troubles. In addition, it mentally signifies the free will that protects an unselfish mind through the spiritual release of drinking, 'Chwui (醉)'; it can also be interpreted metaphorically and more broadly as a means by which Nohgye reaches the state of nature through the contented mind of Yeon-Myung Do. 'Chwuiseok' was a design language that showed Nohgye's situation by comparing his mind with that of Yeon-Myung Do from the Confucian point of view, and a kind of behavioral mimesis based on his respect for Do and on the 'aesthetic representation of objective reality.' It is not coincidental that this mimesis appears in the words engraved on Chwuiseok and in the creation of ChwuiseokJeong, which shares its name with Chwuiseok in both Korea and China.

A Study on Recognition and Practice of Taekyo by Pregnant Women (임부의 태교인식과 태교실천에 관한 조사연구)

  • Shin, Yong-Bun;Koh, Hyo-Jung
    • Women's Health Nursing / v.6 no.1 / pp.142-152 / 2000
  • This is a descriptive study intended to provide basic data for nurses' prenatal nursing intervention strategies so that pregnant women in Korea may practice Taekyo (prenatal education) effectively, by examining the levels of recognition and practice of Taekyo among pregnant women and the relationship between them. Questionnaires were collected from 801 pregnant women who visited the obstetrics and gynecology outpatient departments of general hospitals in 10 areas (Seoul, Daejon, Chunan, Daegu, Kummi, Kyŏngju, Pŏhang, Busan, Jŏnju, and Yŏnkwang) for prenatal care from July 15 to August 30, 1999. Recognition of Taekyo was measured with the tool of Lee, Ki Young (1993), revised and complemented by the investigator, and practice of Taekyo was measured with the tool of Jang, Shun Buk and Park, Young Ju (1996), revised and complemented by the investigator. Cronbach's alpha was .88 for recognition of Taekyo and .90 for practice of Taekyo. For data analysis, descriptive statistics, Pearson correlation, t-test, ANOVA, Tukey's post hoc contrast, and stepwise multiple regression were used with the SPSS Win 7.5 program, in accordance with the purpose of the study. The results were as follows: 1. The practice of Taekyo was low compared with the recognition of Taekyo, with recognition averaging 4.28 points (SD 0.48) on a 5-point scale and practice averaging 2.81 points (SD 0.36) on a 4-point scale. 2. Recognition of Taekyo was higher for pregnant women with a higher educational background (F=3.735, p=.005), Roman Catholicism (F=4.570, p=.002), a satisfying married life (F=5.448, p=.004), high monthly income (F=6.096, p=.000), and a wanted pregnancy (F=2.525, p=.012). 3. Practice of Taekyo was higher for pregnant women with a higher educational background (F=2.883, p=.022), Roman Catholicism (F=3.616, p=.032), a satisfying married life (F=19.924, p=.000), good health condition (F=2.386, p=.017), a wanted pregnancy (F=0.677, p=.000), a pregnancy planned with the husband (F=3.024, p=.001), regular prenatal care before delivery (F=0.241, p=.005), maternal breast feeding (F=9.132, p=.000), and fewer children (F=2.763, p=.041). 4. Examination of the correlation between recognition and practice of Taekyo showed a statistically significant correlation: the higher the pregnant women's recognition of Taekyo, the higher their practice of Taekyo. 5. In examining the related factors affecting recognition and practice of Taekyo, practice of Taekyo explained 16.8 percent of the variance in recognition of Taekyo, purpose of practicing Taekyo 8.5 percent, and monthly income 1.9 percent, for a total explained variance of 27.2 percent. Recognition of Taekyo explained 16.1 percent of the variance in practice of Taekyo, time of starting Taekyo 3.2 percent, health condition 2.2 percent, wanted pregnancy 1.1 percent, satisfaction with married life 0.8 percent, and religion 0.6 percent, for a total explained variance of 24.0 percent.


Development of Neuropsychological Model for Spatial Ability and Application to Light & Shadow Problem Solving Process (공간능력에 대한 신경과학적 모델 개발 및 빛과 그림자 문제 해결 과정에의 적용)

  • Shin, Jung-Yun;Yang, Il-Ho;Park, Sang-woo
    • Journal of The Korean Association For Science Education / v.41 no.5 / pp.371-390 / 2021
  • The purpose of this study is to develop a neuropsychological model for the spatial ability factors and, based on that model, to divide the brain activation areas involved in the light & shadow problem-solving process into domain-general and domain-specific abilities. Twenty-four male college students participated in the study; synchronized eye movements and electroencephalograms (EEG) were measured while they performed the spatial ability test and the light & shadow tasks. The neuropsychological model for the spatial ability factors and the light & shadow problem-solving process was developed by integrating the measurements of the participants' eye movements and brain activation areas with interview findings regarding their thoughts and strategies. The results of this study are as follows. First, the spatial visualization and mental rotation factors mainly required activation of the parietal lobe, and the spatial orientation factor required activation of the frontal lobe. Second, in the light & shadow problem-solving process, participants used both spatial ability, as domain-general thinking, and the application of scientific principles, as domain-specific thinking. The brain activity patterns observed when participants inferred the shadow cast by a parallel light source, or inferred the shadow when the direction of the light changed, were similar to the neuropsychological model for the spatial visualization factor. The brain activity pattern observed when inferring an object from its shadow under light from multiple directions was similar to the neuropsychological model for the spatial orientation factor, and the pattern observed when inferring a shadow cast by a point light source was similar to the neuropsychological model for the spatial visualization factor. In addition, when solving the light & shadow tasks, the middle temporal gyrus, precentral gyrus, inferior frontal gyrus, and middle frontal gyrus were additionally activated; these areas are responsible for deductive reasoning, working memory, and planning of action.

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.171-193 / 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in LODs to be reflected in search results without omission. An LOD publishes detailed descriptions of entities in RDF triple form. An RDF triple is composed of a subject, a predicate, and an object, and presents a detailed description of an entity. Links in the LOD cloud, called identity links, are realized by asserting that entities in different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple that associates its subject and object with the source and target entities; link triples are then appended to the LOD. With identity links, knowledge acquired from one LOD can be expanded with knowledge from other LODs. The goal of the LOD cloud is to provide users with this opportunity for knowledge expansion. Appending link triples to an LOD, however, requires discovering identity links between entities one by one, which is seriously difficult given the enormous scale of the LOD cloud, and newly added entities cannot be reflected in search results until identity links pointing to them are serialized and published to the LOD cloud. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in those target LODs. At search time, it becomes possible to access newly added entities and reflect them in the search results without omission by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs. For the link policy specification, we have suggested a set of vocabularies that conform to RDFS and OWL. Identity between entities is evaluated according to the similarity of the source and target entities' objects associated with the predicate pairs in the link policy. We implemented a system, the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's search request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of LODs, CAIDS then proceeds to in-depth searching in the LODs of the next depths. To supplement the identity links derived from the link policies, CAIDS uses explicit link triples as well. Following the identity links, CAIDS's in-depth searching progresses, and the content of an entity obtained from the depth_0 LOD is expanded with the contents of entities in other LODs that have been discovered to be identical to the depth_0 LOD entity. Expanding the content of the depth_0 LOD entity without the user being aware of those other LODs is the implementation of knowledge expansion, which is the goal of the LOD cloud. The more identity links in the LOD cloud, the wider the content expansion; we have therefore suggested a new way to create identity links abundantly and supply them to the LOD cloud. Experiments with CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion ratios and inclusion ratios as long as the degree of similarity between source and target objects is 0.8 to 0.9. The expansion ratio, for each depth, is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD; the inclusion ratio, for each depth, is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies. With similarity degrees under 0.8, the expansion becomes excessive and the contents become distorted; a similarity degree of 0.8 to 0.9 also yields an appropriate number of retrieved RDF triples. The experiments also evaluated the confidence degree of the contents expanded through in-depth searching. The confidence degree of content is directly coupled with the identity ratio of an entity, which means the degree of identity to the entity of the depth_0 LOD. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to the entities in the LOD. In evaluating the identity ratio, the concept of identity agreement, meaning that multiple identity links head to a common entity, is also considered. With the identity agreement concept, the experimental results show that the identity ratio decreases as the depth deepens, but rebounds as the depth deepens further. For each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than 8 identity links per entity would lead users to trust the expanded contents. The proposed link-policy-based in-depth searching method is expected to contribute abundant identity links to the LOD cloud.
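
The abstract states that an entity's identity ratio is the product of the source LOD's confidence and the source entity's identity ratio, and that identity agreement (several links reaching the same entity) pushes the ratio back toward 1. A minimal sketch of that propagation is given below; the noisy-OR style combination rule for agreement is an assumption made for illustration only, since the abstract does not give the exact formula.

```python
def propagate_identity_ratio(source_ratio: float, source_lod_confidence: float) -> float:
    """Identity ratio of a newly discovered entity: the product of the source LOD's
    confidence and the source entity's identity ratio (as stated in the abstract)."""
    return source_lod_confidence * source_ratio

def combine_with_agreement(ratios: list[float]) -> float:
    """Identity agreement: several identity links head to the same target entity.
    The noisy-OR combination below is an illustrative assumption; the paper only
    states that agreement raises the ratio toward 1 as links accumulate."""
    result = 0.0
    for r in ratios:
        result = result + r - result * r
    return result

# A depth_0 entity starts with identity ratio 1.0; two independent paths reach the
# same depth-2 entity through LODs with confidences 0.9/0.85 and 0.8 (hypothetical).
path_a = propagate_identity_ratio(propagate_identity_ratio(1.0, 0.9), 0.85)
path_b = propagate_identity_ratio(1.0, 0.8)
print(combine_with_agreement([path_a, path_b]))   # agreement pushes the ratio toward 1
```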

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing large amounts of data, and data analysis technology is rapidly becoming popular accordingly. Attempts to acquire insights through data analysis have been continuously increasing, which means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party that requests the analysis. However, the growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is expected to be performed by those who need the analysis themselves. Along with this, interest in various types of unstructured data is continually increasing, and a lot of attention is focused on using text data in particular. The emergence of new web-based platforms and techniques is bringing about mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so it is essential to analyze the entire collection at once to identify the topic of each document. This makes the analysis take a long time when topic modeling is applied to a large number of documents, and it causes a scalability problem: the processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location or place without combining all the analysis objects. Despite these many advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document collection is unclear; local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established: assuming that the global topics are the ideal answer, the difference between a local topic and a global topic needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other studies dealing with topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced entire document cluster (RGS, reduced global set) consisting of delegated documents extracted from each local set. We try to solve the first problem by mapping RGS topics and local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the results of the global and local sets. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we also confirmed that the proposed methodology can provide results similar to those of topic modeling on the entire collection, and we proposed a reasonable method for comparing the results of both methods.
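
To make the divide-and-conquer scheme concrete, the sketch below runs LDA locally on two sub-sets of a toy corpus, builds a reduced global set (RGS) from delegate documents (the most representative document per local topic), derives global topics from the RGS, and maps each local topic to its most similar global topic. The corpus, topic counts, and the cosine-similarity mapping rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def fit_lda(docs, vectorizer, n_topics):
    X = vectorizer.transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    return lda, lda.transform(X)                       # model and doc-topic distributions

corpus = ["stock market rises on earnings", "central bank cuts interest rate",
          "team wins the championship final", "star striker signs new contract",
          "new graphics card released", "chipmaker unveils faster processor"]
vec = CountVectorizer().fit(corpus)                    # shared vocabulary across all sets

# Local sets (e.g., documents held at different locations), modeled independently
local_sets = [corpus[:3], corpus[3:]]
local_models = [fit_lda(docs, vec, n_topics=2) for docs in local_sets]

# Reduced global set: delegate documents = the most representative document per local topic
rgs = []
for docs, (lda, theta) in zip(local_sets, local_models):
    for k in range(theta.shape[1]):
        rgs.append(docs[int(np.argmax(theta[:, k]))])

# Global topics come from the RGS only; each local topic is then mapped to the most
# similar global topic by cosine similarity of the topic-word distributions
global_lda, _ = fit_lda(rgs, vec, n_topics=3)
g = global_lda.components_ / global_lda.components_.sum(axis=1, keepdims=True)
for i, (lda, _) in enumerate(local_models):
    l = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
    sims = (l @ g.T) / (np.linalg.norm(l, axis=1)[:, None] * np.linalg.norm(g, axis=1)[None, :])
    print(f"local set {i}: local topic -> global topic", np.argmax(sims, axis=1))
```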