• Title/Summary/Keyword: Next-Generation System

Highly Doped Nano-crystal Embedded Polymorphous Silicon Thin Film Deposited by Using Neutral Beam Assisted CVD at Room Temperature

  • Jang, Jin-Nyeong;Lee, Dong-Hyeok;So, Hyeon-Uk;Hong, Mun-Pyo
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.154-155
    • /
    • 2012
  • The promise of nanocrystallites (nc) as a technological material, for applications including display backplanes and solar cells, may ultimately depend on tailoring their behavior through doping and crystallinity. Impurities can strongly modify the electronic and optical properties of both bulk and nc semiconductors, and heavy doping also affects the structural properties (grain size and crystal fraction) of nc-Si thin films. As discussed in several reports, P atoms and radicals tend to reside on the surface of nanocrystals. This P-radical segregation on nano-grain surfaces, called self-purification, may suppress new nucleation because of the five-fold coordination of P. In addition, a P doping level of ~2×10^21 at/cm^3 is the solubility limit of P in Si, and the solubility in nc thin films should be even smaller. Non-activated P therefore tends to segregate at grain boundaries and on nc surfaces. These mechanisms can prevent new nucleation on existing grain surfaces, and most studies have shown that highly doped nc-Si films deposited by conventional PECVD have low crystallinity, because the nucleation formation energy there is higher than on the nc surface of the intrinsic material. A deposition technology that could produce highly doped and, at the same time, highly crystallized nc-Si at low temperature would enable processes for next-generation flexible devices. We have recently been developing a novel CVD technology with a neutral particle beam (NPB) source, named neutral beam assisted CVD (NBaCVD), which controls the energy of incident neutral particles in the range of 1-300 eV in order to enhance atomic activation and the crystallinity of thin films at low temperature.
During formation of the nc-/pm-Si thin films by NBaCVD under various process conditions, the NPB energy, directly controlled by the reflector bias, effectively increased the crystal fraction (~80%) through uniformly distributed nc grains 3-10 nm in size. In phosphorus-doped Si thin films, the doping efficiency also increased with increasing reflector bias (i.e., increasing NPB energy). At a reflector bias of 330 V, the activation energy of the doped nc-Si thin film fell as low as 0.001 eV, meaning that the dopants fully occupy substitutional sites even though the film has a nano-sized grain structure. The activated dopant concentration reached up to 10^20 /cm^3 at a very low process temperature (< 80 °C) without any post-annealing; dopant concentrations of order 10^20 /cm^3 in Si thin films can normally be reached only by a high-temperature process or post-annealing above 650 °C. In general, as the grain size decreases, the dopant binding energy increases roughly as the inverse of the grain diameter and the dopant is hardly activated. The highly doped nc-Si thin film grown by the low-temperature NBaCVD process had an average grain size below 10 nm (measured by GIWAXS, GISAXS and TEM), yet achieved very high activation of the phosphorus dopant; the NB transfers its energy efficiently enough to drive doping and crystallization even without additional thermal energy. TEM images show that no incubation layer forms between the nc-Si film and the SiO2 underlayer, and that the highly crystallized nc-Si film consists of uniformly distributed nano-grains embedded in polymorphous tissue. Nucleation should start in the first layer on the SiO2, but it hardly grows into cone-shaped micron-size grains. The pm-Si thin film with evenly embedded nc grains forms through competition between nucleation and crystal growth, which depends on the NPB energy.
In light-soaking tests of photoconductivity degradation, conventional intrinsic and n-type doped a-Si thin films showed the typical degradation, whereas all of the nc-Si thin films processed by NBaCVD degraded by only a few percent. FTIR and Raman spectra indicate that energetic hydrogen NB atoms passivate the nano-grain boundaries during the NBaCVD process, owing to the high diffusivity and chemical potential of hydrogen atoms.

Current status of Brassica A genome analysis (Brassica A genome의 최근 연구 동향)

  • Choi, Su-Ryun;Kwon, Soo-Jin
    • Journal of Plant Biotechnology
    • /
    • v.39 no.1
    • /
    • pp.33-48
    • /
    • 2012
  • Out of scientific curiosity to understand the structure and function of crops, and through experimental efforts to apply this knowledge to plant breeding, genetic maps have been constructed for various crops. In Brassica crops especially, genetic mapping has accelerated since the genetic information of the model plant Arabidopsis became available. As a result, sequencing of the whole B. rapa genome (A genome) has recently been completed. The genome sequence offers opportunities to develop molecular markers for genetic analysis in Brassica crops. RFLP markers were widely used as the basis for genetic map construction, but their detection system is inefficient; the technical efficiency and analysis speed of PCR-based markers make them preferable for many forms of Brassica genome study. Sequence-informative markers such as SSRs, SNPs and InDels are also available in large numbers to increase marker density for high-resolution genetic analysis. High-density maps are invaluable resources for QTL analysis, marker-assisted selection (MAS), map-based cloning, and comparative analysis within Brassica as well as with related crop species. Additionally, the advent of next-generation sequencing technology has given momentum to molecular breeding. Here we summarize genetic and genomic resources and suggest their applications for molecular breeding in Brassica crops.

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.121-139
    • /
    • 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction matters greatly to financial institutions, and many researchers have dealt with the topic over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining, but few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) for improving the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution: it uses the idea of survival of the fittest by progressively accepting better solutions, and it searches by maintaining a population of solutions from which better solutions are created, rather than making incremental changes to a single solution. The initial population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover and mutation; solutions encoded as strings are evaluated by a fitness function.
The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA selects an optimal instance subset that is used as the input data of the bagging model. The chromosome is encoded as a binary string over the instance subset; the population size was set to 100, the maximum number of generations to 150, and the crossover and mutation rates to 0.7 and 0.1, respectively. The prediction accuracy of the model served as the fitness function: an SVM is trained on the training set using the selected instance subset, and its accuracy on the test set is used as the fitness value in order to avoid overfitting. In the second phase, the optimal instance subset selected in the first phase is used as the input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to bankruptcy prediction using a real dataset of Korean companies containing 1,832 externally non-audited firms: 916 bankruptcy cases and 916 non-bankruptcy cases. Financial ratios categorized as stability, profitability, growth, activity and cash flow were investigated through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables. The whole dataset was separated into training, test and validation subsets. The proposed model was compared with several baselines, including a single SVM, a simple bagging model, and an instance-selection-based SVM, and McNemar tests were used to examine whether the proposed model significantly outperforms the others. The experimental results show that the proposed model outperforms the other models.
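
The two-phase design described above can be sketched in a few lines of Python. This is an illustrative sketch on synthetic data, not the paper's implementation: the dataset, the shrunken population and generation counts, and the one-point crossover scheme are assumptions made to keep the demo fast, while the crossover rate (0.7), mutation rate (0.1), SVM base classifier, and majority-voting combiner follow the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(mask):
    # Fitness = accuracy of an SVM trained on the selected instances,
    # evaluated on held-out data (the abstract's guard against overfitting).
    sel = mask.astype(bool)
    if sel.sum() < 10 or len(np.unique(y_tr[sel])) < 2:
        return 0.0
    return SVC().fit(X_tr[sel], y_tr[sel]).score(X_te, y_te)

# Phase 1: GA over binary instance masks (population and generation counts
# are shrunk from the paper's 100/150 to keep the demo fast).
pop = rng.integers(0, 2, size=(20, len(y_tr)))
for _ in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    pop = pop[np.argsort(scores)[::-1]]          # elitist sort, best first
    children = []
    for _ in range(len(pop) // 2):               # breed from the top half
        p1, p2 = pop[rng.integers(0, 10)], pop[rng.integers(0, 10)]
        if rng.random() < 0.7:                   # crossover rate 0.7
            cut = rng.integers(1, pop.shape[1])  # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
        else:
            child = p1.copy()
        flip = rng.random(child.shape) < 0.1     # per-gene mutation rate 0.1
        children.append(np.where(flip, 1 - child, child))
    pop[len(pop) // 2:] = children               # replace the bottom half
best = pop[0].astype(bool)

# Phase 2: bagging of SVMs on the GA-selected subset; BaggingClassifier
# combines the base SVMs by majority voting, as in the paper.
bag = BaggingClassifier(SVC(), n_estimators=10, random_state=0)
bag.fit(X_tr[best], y_tr[best])
print(round(bag.score(X_te, y_te), 3))
```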

A Proposal for Promotion of Research Activities by Analysis of KOSEF's Basic Research Supports in Agricultural Sciences (한국과학재단의 농수산분야 기초연구지원 추이분석을 통한 연구활동지원 활성화 제언)

  • Min, Tae-Sun;Choi, Hyung-Kyoon;Kim, Seong-Yong;Bai, Sung-Chul;Kim, Yoo-Yong;Yang, Moon-Sik;Chung, Bong-Hyun;Hwang, Joon-Young;Han, In-Kyu
    • Applied Biological Chemistry
    • /
    • v.48 no.1
    • /
    • pp.23-33
    • /
    • 2005
  • The agricultural sciences field in South Korea has many strengths, such as numerous researchers, an established research infrastructure, research competitiveness, and a high technological level. However, there are also many weaknesses, including insufficient leadership at related societies and institutes, a deficient next generation of researchers, and insufficient research productivity. Opportunities include the growing importance of the biotechnology industry, the activation of international cooperative research, and the multitude of possible research areas to be explored. Some threats still exist, such as pressure from the governments of developed countries to open the agricultural market, the decrease in specialized farms, and intensifying pressure for research to satisfy economic and social demands. To encourage research activities in the agricultural sciences in Korea, the following actions and systems are required: 1) formulation of mid- and long-term research master plans, 2) development of a database of manpower in related fields, 3) activation of top-down research topics and an associated increase in individual research grants, 4) development of special national programs for basic research in the agricultural sciences, 5) organization of a committee for policy and planning within the related societies, and 6) development of a system for the fair evaluation of research results.

Evaluation of Contrast and Resolution on the SPECT of Pre and Post Scatter Correction (산란보정 전, 후의 SPECT 대조도 및 분해능 평가)

  • Seo, Myeong-Deok;Kim, Yeong-Seon;Jeong, Yo-Cheon;Lee, Wan-Kyu;Song, Jae-Beom
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.127-132
    • /
    • 2010
  • Purpose: Because of limitations in image acquisition methods and acquisition time, scatter correction is not easily performed in SPECT studies. In our hospital, however, scatter-corrected images can now be provided to clinicians thanks to the introduction of a new-generation gamma camera with a simple scatter-correction function. Taking this opportunity, we compared scatter-corrected and uncorrected images from the viewpoint of image quality. Materials and Methods: We acquired 'Hoffman brain phantom' and '1 mm line phantom' SPECT images, 18 times each, with a GE Infinia Hawkeye 4 SPECT-CT gamma camera. First, we calculated the contrast of axial slices of the scatter-corrected and uncorrected Hoffman brain phantom images. Next, we calculated the horizontal and vertical FWHM of axial slices of the scatter-corrected and uncorrected 1 mm line phantom images. We then performed t-tests in SAS on the contrast and resolution values of the corrected and uncorrected images. Results: The contrast of the scatter-corrected images was 0.3979 versus 0.3509 without correction, and the FWHM was 3.6375 versus 3.4822. The p-values were 0.0097 for contrast and <0.0001 for resolution; from this we concluded that contrast and resolution improve with scatter correction. Conclusion: We obtained improved SPECT images through a simple and easy method, scatter correction, and expect to provide our clinicians with images improved in contrast and resolution.
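
The statistical comparison described above, paired contrast measurements followed by a t-test, can be reproduced in Python. The 18 paired values below are simulated around the reported means, and the Michelson contrast formula is an assumption: the abstract does not state which contrast definition was used, and the original analysis was done in SAS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def michelson_contrast(img):
    # Assumed contrast definition: (max - min) / (max + min).
    return (img.max() - img.min()) / (img.max() + img.min())

# Demo of the formula on a synthetic 64x64 "slice".
phantom = rng.random((64, 64))
print(round(michelson_contrast(phantom), 3))

# 18 paired acquisitions, simulated around the reported mean contrasts
# (0.3509 uncorrected vs 0.3979 scatter-corrected).
uncorrected = rng.normal(0.3509, 0.01, 18)
corrected = rng.normal(0.3979, 0.01, 18)
t, p = stats.ttest_rel(corrected, uncorrected)   # paired t-test
print(f"t = {t:.2f}, p = {p:.2e}")
```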

Application of Oceanic Camp Program for the Enhancement of Inquisitiveness and Affection to Ocean: from 2004 to 2012 (해양에 대한 호기심과 친근감 향상을 위한 해양캠프 프로그램의 적용: 2004~2012년)

  • Park, Kyung-Ae;Woo, Hye-Jin;Kim, Kyung-Ryul;Lee, Soo-Kwang;Chung, Jong-Yul;Cho, Byung-Cheol;Kang, Hyun-Joo
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.18 no.3
    • /
    • pp.142-161
    • /
    • 2013
  • To enhance scientific interest in, and a sense of affinity for, the ocean, 'oceanic summer school' camp programs were developed and applied to 4th- to 9th-grade elementary and middle school students over 9 years, from 2004 to 2012. The programs comprised snorkeling training, tours of oceanic institutes and museums near the camp site, experiential learning in ocean-related fields, field trips for ocean and earth sciences, and lectures on various ocean subjects. We developed and administered 9 kinds of surveys to evaluate changes in the cognitive and affective characteristics, and the ocean literacy, of the students who participated in the camp. Statistical analysis showed that affective characteristics such as interest, inquisitiveness and passion were enhanced, and analysis of ocean literacy revealed that the students' cognitive characteristics increased by 40%. We also present parents' responses to the summer school programs. Some students with little initial interest in the ocean changed positively, making up their minds several years later to become oceanographers. In this light, the summer school can be judged to have functioned successfully as a long-term support system for potentially talented young students in ocean science. This study shows that long-term implementation of a summer oceanic camp may steer students with potential talent toward in-depth science, even when it brings no immediate effect, and it argues for policy support so that various programs like this scientific camp can be developed and run for the next generation of manpower in oceanographic fields.

Handover Functional Architecture for Next Generation Wireless Networks (차세대 무선 네트워크를 위한 핸드오버 기능 구조 제안)

  • Baek, Joo-Young;Kim, Dong-Wook;Kim, Hyun-Jin;Choi, Yoon-Hee;Kim, Duk-Jin;Kim, Woo-Jae;Suh, Young-Joo;Kang, Suk-Yang;Kim, Kyung-Suk;Shin, Kyung-Chul
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10d
    • /
    • pp.268-273
    • /
    • 2006
  • Next-generation wireless networks (4G) are an area requiring extensive research alongside the development of new radio access technologies. Among the issues, handover technology for providing terminals with seamless mobility is arguably the most important. Next-generation wireless networks are expected to combine new radio access technologies with existing networks such as WLAN and cellular networks, and to use Mobile IPv6 for mobility support at the network layer. Providing seamless mobility in such networks requires, beyond the handover functions and architectures studied so far, a comprehensive handover function that takes into account the more diverse network environments and QoS. In this paper, we identify the functions needed to provide seamless handover in next-generation wireless networks, define the relationships among them, and propose a comprehensive handover functional architecture that considers diverse network environments, user priorities, and the QoS requirements of applications. The proposed architecture is divided into three modules: Monitoring, Triggering, and Handover, each further subdivided into sub-modules as needed. Its most distinctive feature is that it considers the various factors that can trigger a handover comprehensively and compares them in multiple stages rather than in a single flat comparison, enabling more accurate triggering. In addition, functions for guaranteeing the terminal's QoS requirements and for controlling network congestion and load balancing are added to the handover function, so that network resources can be used efficiently.
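
The multi-stage triggering idea, filtering handover candidates through successive comparisons of signal quality, QoS, and load rather than one flat score, can be illustrated with a small sketch. The field names and thresholds below are hypothetical; the paper defines the modules and sub-modules but not this concrete interface.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    rssi_dbm: float        # signal strength, from the Monitoring module
    bandwidth_mbps: float  # available bandwidth
    load: float            # cell load: 0.0 (idle) .. 1.0 (congested)

def trigger(cands, min_rssi=-80.0, qos_mbps=2.0, max_load=0.9):
    # Stage 1: drop networks below the signal threshold.
    stage1 = [c for c in cands if c.rssi_dbm >= min_rssi]
    # Stage 2: keep only networks meeting the application's QoS demand.
    stage2 = [c for c in stage1 if c.bandwidth_mbps >= qos_mbps]
    # Stage 3: exclude congested cells (load balancing), then pick the
    # least-loaded survivor, breaking ties by signal strength.
    stage3 = [c for c in stage2 if c.load <= max_load]
    return max(stage3, key=lambda c: (-c.load, c.rssi_dbm), default=None)

cands = [
    Candidate("WLAN-A", rssi_dbm=-60, bandwidth_mbps=20, load=0.95),
    Candidate("WLAN-B", rssi_dbm=-70, bandwidth_mbps=10, load=0.40),
    Candidate("3G", rssi_dbm=-85, bandwidth_mbps=2, load=0.30),
]
best = trigger(cands)
print(best.name)  # WLAN-A is congested, 3G is out of range
```

Here WLAN-A survives the signal and QoS stages but is rejected by the load stage, so the strong, lightly loaded WLAN-B is chosen; a flat weighted sum could have picked the congested cell.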

Discussions about Expanded Fests of Cartoons and Multimedia Comics as Visual Culture: With a Focus on New Technologies (비주얼 컬처로서 만화영상의 확장된 장(場, fest)에 대한 논의: 뉴 테크놀로지를 중심으로)

  • Lee, Hwa-Ja;Kim, Se-Jong
    • Cartoon and Animation Studies
    • /
    • s.28
    • /
    • pp.1-25
    • /
    • 2012
  • The rapid digitalization across all aspects of society since 1990 has led to the digitalization of cartoons. As the medium of cartoons moved from paper to the web, a powerful visual culture emerged, and the encounter between cartoons and multimedia technologies has helped cartoons evolve into a video culture. Cartoons today are no longer a purely literate culture; it is critical to pay attention to cartoons as an "expanded fest" and as a visual and video culture with much broader significance. In this paper, the investigator set out to assess the current position of cartoons in the rapidly changing digital age and to discuss the future directions they should pursue. She therefore examined cases of change from 1990, when colleges began to provide specialized education in cartoons and animation, to the present day, when cartoon and multimedia comics fests exist alongside the digitalization of cartoons. The encounter between new technologies and cartoons broke down the conventional forms of cartoons. In particular, the massive appearance of artists who make active use of new technologies in their work has driven changes in the content and forms of cartoons and expanded the uses of characters. The development of high technology extends its influence beyond the artists' works to the roles of appreciators: today readers voice their opinions about works actively, build fan bases, promote the works and artists they favor, and help them rise to stardom. As artist groups formed across various genres, the possibilities of new stories and texts and the appearance of diverse styles and world views have expanded the essence of cartoon texts and the overall cartoon system of culture, industry, education, institutions, and technology. Cartoons and multimedia comics are expected to continue contributing as a messenger that reflects, mediates, and communicates with the culture of the next generation.
There is no longer a distinction between print and video cartoons. Cartoons will expand into every field through a wide range of forms and styles, given current developments involving installation-concept cartoons, blockbuster digital videos, fancy goods, and narrative-based characters at theme parks. It is therefore necessary to diversify cartoon and multimedia comics education. Educators now face the task of bringing up future generations of talent capable of leading a culture of all the senses, grounded in literate and video culture, by incorporating the humanities, social studies, and new-technology education into their creative artistic abilities.

Water droplet generation technique for 3D water drop sculptures (3차원 물방울 조각 생성장치의 구현을 위한 물방울 생성기법)

  • Lin, Long-Chun;Park, Yeon-yong;Jung, Moon Ryul
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.143-152
    • /
    • 2019
  • This paper presents two new techniques for solving two problems of the water curtain: 'shape distortion' caused by gravity and 'resolution degradation' caused by fine satellite droplets around the shape. In the first method, when the user converts a three-dimensional model into a vertical sequence of slices, the slices are evenly spaced, and the method adjusts the times at which the equidistant slices are created by the nozzle array. Even though the velocity of a water drop increases over time under gravity, the drop slices then maintain equal intervals at the moment the whole shape forms, preventing distortion. The second method is called the minimum-time-interval technique. The minimum time interval is the time between one open command of a nozzle and its next open command such that consecutive water drops are created cleanly, without satellite drops. When the user converts a three-dimensional model into a sequence of slices, the slices are placed as close together as possible, not evenly spaced, subject to the minimum time interval between consecutive drops; slices are arranged at short intervals in the top area of the shape and at long intervals in the bottom area. The minimum time interval is determined in advance by experiment and consists of the time from the open command until the nozzle is fully open, the time the fully open state is maintained, and the time from the close command until the nozzle is fully closed. The second method produces water-drop sculptures with higher resolution than the first.
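
The first technique's timing adjustment follows directly from free-fall kinematics: a drop that must sit y metres below the nozzle when the whole shape forms at time T has to be released at T − √(2y/g). The sketch below assumes drops start at rest and ignores drag and the nozzle opening delay, which the paper handles experimentally.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def release_times(n_slices, spacing_m, formation_time_s):
    """Release time of each slice so that, at formation_time_s, the drops
    sit at equal vertical intervals below the nozzle (top slice first)."""
    times = []
    for i in range(n_slices):
        depth = (i + 1) * spacing_m              # target depth at time T
        fall = math.sqrt(2 * depth / G)          # free-fall time to reach it
        times.append(formation_time_s - fall)
    return times

ts = release_times(n_slices=5, spacing_m=0.05, formation_time_s=1.0)
print([round(t, 3) for t in ts])  # -> [0.899, 0.857, 0.825, 0.798, 0.774]
# Deeper slices are released earlier; because fall time grows only as the
# square root of depth, the gaps between successive releases shrink.
```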

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • In step with the rapidly increasing demand for text data analysis, research and investment in text mining are being actively pursued not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text-mining studies focused on applications of the second step. However, with the realization that the text-structuring process substantially influences the quality of the results, various embedding methods have been actively studied to improve analysis quality by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly into a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can understand. Mapping arbitrary objects to a specific dimensional space while maintaining algebraic properties, for the purpose of structuring text data, is called "embedding". Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents in various ways. In particular, as demand for document embedding grows rapidly, many supporting algorithms have been developed. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding represented by doc2Vec generates a vector for each document from the entire text of the document.
This causes the limitation that the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, so a complex document with multiple subjects is difficult to represent accurately. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after keywords are extracted through various analysis methods, but since that is not the core subject of the proposed method, we describe the process for documents with predefined keywords. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. The process is as follows: all text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the traditional method's sensitivity to miscellaneous words, the vectors corresponding to each document's keywords are extracted to form a keyword-vector set per document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because of interference among subjects within each vector.
With the proposed multi-vector-based method, we ascertained that complex documents can be vectorized more accurately by eliminating this interference among subjects.
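
The five-step pipeline can be sketched as follows. In real use, step (2) would look up vectors from a word2vec model trained on a corpus; here a hash-seeded random vector stands in for each word's embedding, purely to keep the example self-contained, and KMeans plays the role of the keyword-clustering step.

```python
import hashlib
import numpy as np
from sklearn.cluster import KMeans

DIM = 16

def embed(word):
    # Stand-in for a trained word2vec lookup: a deterministic random
    # vector seeded from a hash of the word (assumption for this demo).
    seed = int(hashlib.md5(word.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).normal(size=DIM)

def multi_vector(keywords, n_subjects=2):
    # Steps (3)-(5): extract keyword vectors, cluster them into subjects,
    # and return one vector (the cluster centroid) per subject.
    vecs = np.array([embed(k) for k in keywords])
    km = KMeans(n_clusters=n_subjects, n_init=10, random_state=0).fit(vecs)
    return km.cluster_centers_

# A "complex document" whose keywords mix two subjects:
keywords = ["neural", "network", "gradient", "stock", "market", "portfolio"]
doc_vecs = multi_vector(keywords, n_subjects=2)
print(doc_vecs.shape)  # -> (2, 16): one vector per identified subject
```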