• Title/Summary/Keyword: concept extracting

Search Results: 135

A Study on the Clearance Level for the Metal Waste from the KRR-1 & 2 Decommissioning (연구로 1,2호기 해체 금속폐기물의 규제해제농도기준(안) 도출을 위한 연구)

  • 홍상범;이봉재;정운수
    • Proceedings of the Korean Radioactive Waste Society Conference
    • /
    • 2003.11a
    • /
    • pp.660-664
    • /
    • 2003
  • The exposure dose from recycling the large amount of steel scrap arising from the KRR-1 & 2 decommissioning activities was evaluated, and the clearance level was derived. The maximum individual dose and the collective dose were evaluated by modifying the internal dose conversion factors based on the effective-dose concept of ICRP 60, using the RESRAD-RECYCLE ver. 3.06 computing code, IAEA Safety Series 111-P-1.1, and NUREG-1640 as the assessment tools. The assessed individual dose and collective dose are 23.9 ${\mu}Sv$ per year and 0.11 man$\cdot$Sv per year, respectively. The clearance levels were ultimately determined by extracting the most conservative values from the results of the generic-assessment and specific-assessment methodologies. The resulting clearance level for the radionuclides ($^{60}Co$, $^{137}Cs$) is less than $1.67{\times}10^{-1}$ Bq/g, complying with the clearance criterion (maximum individual dose: 10 ${\mu}Sv$ per year; collective dose: 1 man$\cdot$Sv per year) provided for in the Korean Atomic Energy Act and the relevant regulations.

  • PDF
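Dose models of the kind used above are linear in specific activity, so a generic clearance concentration can be sketched as the individual-dose criterion divided by the assessed dose per unit concentration. The dose factor below is a hypothetical number chosen for illustration, not a value taken from the paper:

```python
def clearance_level(dose_limit_usv, dose_per_bq_g_usv):
    """Clearance concentration (Bq/g) from a linear dose model:
    the individual-dose criterion divided by the assessed annual dose
    per unit specific activity. This mirrors the generic-assessment
    scaling used with tools like RESRAD-RECYCLE; the input numbers
    below are illustrative, not the paper's assessment results."""
    return dose_limit_usv / dose_per_bq_g_usv

# 10 uSv/y criterion; assumed 60.0 uSv/y per Bq/g for the limiting nuclide
level = clearance_level(10.0, 60.0)
```

With these illustrative inputs the sketch yields about 0.167 Bq/g, the same order as the clearance level reported in the abstract.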

The Characteristics of Group and Classroom Discussions in the Scientific Modeling of the Particulate Model of Matter (물질의 입자성에 대한 모형 구성 과정에서 나타나는 소집단 토론과 전체 학급 토론의 특징)

  • Yang, Chanho;Kim, SooHyun;Jo, Minjin;Noh, Taehee
    • Journal of The Korean Association For Science Education
    • /
    • v.36 no.3
    • /
    • pp.361-369
    • /
    • 2016
  • In this study, we investigated the characteristics of group discussion and classroom discussion in the scientific modeling of the particulate model of matter. 7th graders in Seoul participated in this study. We implemented science instruction based on the GEM cycle of scientific modeling. We analyzed the differences between group discussion and classroom discussion in three steps: exploring thoughts, comparing thoughts, and drawing conclusions. We also looked into the levels of argumentation of the students in the modeling activities. The analysis indicated that students generated a group model by extracting commonalities from each group member's model, and then evaluated and modified the group model by comparing the differences among the models in classroom discussion. The main step in group discussion was 'exploring thoughts', whereas in classroom discussion it was 'comparing thoughts'. Although the levels of argumentation among the students were generally low, most students participated with enthusiasm, expressing interest in and positive perceptions of the modeling activities. As a result, the modeling activities were found to have a positive influence on concept development. Some suggestions for implementing modeling activities effectively in science teaching are discussed.

Automatic Extraction of Abstract Components for supporting Model-driven Development of Components (모델기반 컴포넌트 개발방법론의 지원을 위한 추상컴포넌트 자동 추출기법)

  • Yun, Sang Kwon;Park, Min Gyu;Choi, Yunja
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.8
    • /
    • pp.543-554
    • /
    • 2013
  • Model-Driven Development (MDD) helps developers verify requirements and design issues of a software system in the early stages of the development process by taking advantage of a software model, the most highly abstracted form of a software system. In practice, however, many software systems have been developed through a code-centric method that builds a software system bottom-up rather than top-down, so without the support of appropriate tools it is not easy to introduce MDD into a real development process. Although there are many studies on extracting a model from code to help developers introduce MDD into code-centrically developed systems, most of them extract only base-level models. Using the concept of an abstract component, however, one can continuously extract higher-level models from the base-level model. In this paper, we propose a practical method for the automatic extraction of base-level abstract components from source code, which is the first stage of the continuous extraction process, and validate the method by implementing an extraction tool based on it. The chosen target is the source code of TinyOS, an operating system for wireless sensor networks written in the nesC language, to which the tool is applied.

Analysis on the Water Footprint of Crystalline Silicon PV System (결정질 실리콘 태양광시스템의 물 발자국 산정에 대한 연구)

  • Na, Won-Cheol;Kim, Younghwan;Kim, Kyung Nam;Lee, Kwan-Young
    • Clean Technology
    • /
    • v.20 no.4
    • /
    • pp.449-456
    • /
    • 2014
  • There have been increasing concerns about water security in many countries, caused by the frequent occurrence of localized droughts due to climate change and the uncertainty of the water balance. The importance of fresh water is emphasized because a considerable amount of usable fresh water is consumed by the power generation sector to produce electricity. A PV power system, a source of renewable energy, consumes water at every step of its life cycle: manufacturing, installation, and operation. However, it uses relatively less water than traditional energy sources such as thermal and nuclear power. In this study, to determine the water use of the entire PV power system process, from extracting raw materials to operating the system, the water footprint of the whole process was measured and analyzed. The PV water footprint of the value chain was $0.989m^3/MWh$, and the water footprint was especially high in the poly-Si and solar cell processes, for two reasons: the poly-Si process is energy-intensive and consumes a great deal of cooling water, and the solar cell process uses considerable deionized water for washing high-efficiency crystalline silicon. PV systems are thus identified as using less water than traditional sources, which is of critical value in saving water. In discussing future energy policy, it is vital to introduce the concept of the water footprint as a supplementary value of renewable energy.
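The life-cycle accounting described in the abstract reduces to a simple ratio: total water consumed across the value-chain stages divided by lifetime electricity generation. The stage names and figures below are illustrative placeholders, not the paper's inventory data:

```python
def water_footprint(stage_water_m3, lifetime_generation_mwh):
    """Whole-value-chain water footprint of a PV system in m^3/MWh:
    total water consumed across life-cycle stages divided by the
    electricity generated over the system's lifetime."""
    return sum(stage_water_m3.values()) / lifetime_generation_mwh

# Hypothetical stage inventory (m^3) over a hypothetical 1000 MWh lifetime
stages = {"poly-Si": 400.0, "ingot/wafer": 120.0,
          "solar cell": 250.0, "module": 80.0, "operation": 50.0}
footprint = water_footprint(stages, lifetime_generation_mwh=1000.0)
```

A per-stage breakdown of the same dictionary is what lets the authors attribute most of the footprint to the poly-Si and solar cell stages.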

An Adaptive Business Process Mining Algorithm based on Modified FP-Tree (변형된 FP-트리 기반의 적응형 비즈니스 프로세스 마이닝 알고리즘)

  • Kim, Gun-Woo;Lee, Seung-Hoon;Kim, Jae-Hyung;Seo, Hye-Myung;Son, Jin-Hyun
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.3
    • /
    • pp.301-315
    • /
    • 2010
  • Recently, competition between companies has intensified, and the need to create new business value has increased, so a number of business organizations are beginning to realize the importance of business process management. Processes, however, often do not run the way they were initially designed, or an inefficient process model may have been designed in the first place, due to a lack of cooperation and understanding between business analysts and system developers. To solve this problem, business process mining, which can serve as the basis of business process re-engineering, has been recognized as an important concept. Current process mining research has focused only on extracting workflow-based process models from completed process logs, and thus has limitations in expressing various forms of business processes. A further disadvantage is that process discovery and log scanning themselves take a considerable amount of time, because the process logs are re-scanned with each new update. In this paper, we present a modified FP-Tree algorithm for FP-Tree-based business processes; FP-Trees are used for association analysis in data mining. Our modified algorithm supports the discovery of a process model at the level of detail appropriate to the user's needs, without re-scanning the entire process log after updates.
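The paper's modified FP-Tree algorithm is not reproduced in the abstract; as background, a minimal sketch of ordinary FP-tree construction over process traces (treating each trace as a transaction of activities) looks like this:

```python
from collections import defaultdict

class FPNode:
    """A node in a frequency-pattern tree: one activity plus a support count."""
    def __init__(self, item):
        self.item = item
        self.count = 0
        self.children = {}

def build_fp_tree(traces, min_support):
    """Build a basic FP-tree from process traces (lists of activities).

    Activities below min_support are dropped; the remaining activities in
    each trace are sorted by global frequency so that shared prefixes
    collapse into shared paths - the compression the FP-tree exploits.
    """
    freq = defaultdict(int)
    for trace in traces:
        for act in set(trace):
            freq[act] += 1
    keep = {a for a, c in freq.items() if c >= min_support}

    root = FPNode(None)
    for trace in traces:
        path = sorted((a for a in set(trace) if a in keep),
                      key=lambda a: (-freq[a], a))
        node = root
        for act in path:
            node = node.children.setdefault(act, FPNode(act))
            node.count += 1
    return root

# Hypothetical event-log traces, not data from the paper
traces = [["register", "check", "pay"],
          ["register", "check", "reject"],
          ["register", "pay"]]
tree = build_fp_tree(traces, min_support=2)
```

The paper's contribution is avoiding a full rebuild of such a tree when the log is updated; this sketch only shows the baseline structure being modified.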

A Research on the Paradigm of Interaction Based on Attributes (인터렉션 속성에 기초한 인터렉션 범식화 연구)

  • Shan, Shu Ya;Pan, Young Hwan
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.5
    • /
    • pp.127-138
    • /
    • 2021
  • The aim of this study is to demonstrate that interaction is a describable field and to understand interaction from the perspective of attributes, thus building a theoretical basis that helps interaction designers understand this field through common rules rather than spending enormous time and labor on iteration. Since the concept of an interaction language was introduced in 2000, there have been various related academic studies, but all with defects: the proposed theoretical models are built on non-uniform scales, or the analytical perspective is based mainly on the researcher's personal experience and is too subjective. The value of this study is its clustered body of research, based mainly on academic review. It collected 21 papers on interaction paradigms or interaction attributes published since 2000, extracting 19 interaction attribute models containing 174 interaction attributes. These 174 attributes were then re-clustered on a more unified standard scale, and the two theoretical models summarized from them focus respectively on interaction control and interaction experience, each covering 6 independent attributes. The proposed theoretical models and the analysis of the clustering statistics contribute to further revealing the importance of interaction attributes and the attention they have received. In this regard, interaction designers can reasonably allocate their effort during the design process, and the future potential of various directions of interaction design can be discussed.

Development of PSC I Girder Bridge Weigh-in-Motion System without Axle Detector (축감지기가 없는 PSC I 거더교의 주행중 차량하중분석시스템 개발)

  • Park, Min-Seok;Jo, Byung-Wan;Lee, Jungwhee;Kim, Sungkon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.5A
    • /
    • pp.673-683
    • /
    • 2008
  • This study improved the existing method, which uses the longitudinal strain and the concept of the influence line, to develop a Bridge Weigh-in-Motion system without an axle detector, using the dynamic strain of the bridge girders and concrete slab. This paper first describes the considered algorithms for extracting passing-vehicle information from the dynamic strain signals measured at the bridge slab, girders, and cross beams. Two analysis methods, 1) the influence line method and 2) the neural network method, are considered, and a parameter study of measurement locations is also performed. The procedures and results of the field tests are then described. The field tests were performed to acquire training and test sets for the neural networks, and also to verify and compare the performance of the considered algorithms. Finally, a comparison between the results of the different algorithms is presented and discussed. For a PSC I-girder bridge, the vehicle weight can be calculated within a reasonable error range using the dynamic strain gauges installed on the girders. The passing lane and passing speed of the vehicle can be accurately estimated using the strain signal from the concrete slab. The passing speed and peak duration were added to the input variables to reflect the influence of the dynamic interaction between the bridge and vehicles and the impact of the distance between axles, respectively, thus improving the accuracy of the weight calculation.
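As background for the influence line method mentioned above, a single-load least-squares sketch (in the spirit of Moses's classic B-WIM formulation, not the paper's exact multi-axle algorithm) estimates gross weight by fitting the measured girder strain to the unit-load influence line:

```python
def estimate_gross_weight(strain, influence, calibration=1.0):
    """Least-squares estimate of a vehicle's gross weight from a girder
    strain record and the bridge's strain influence line.

    strain[k] is the measured strain with the load at position k;
    influence[k] is the unit-load strain at that position; calibration
    converts the fitted load factor to weight units. Minimizing
    sum((strain - W * influence)^2) over W gives the ratio below.
    """
    num = sum(s * i for s, i in zip(strain, influence))
    den = sum(i * i for i in influence)
    return calibration * num / den

# Hypothetical influence line and a noise-free record at twice unit load
influence = [0.0, 0.5, 1.0, 0.5, 0.0]
strain = [0.0, 1.0, 2.0, 1.0, 0.0]
weight = estimate_gross_weight(strain, influence)
```

In practice the record is noisy and multiple axles overlap, which is why the study supplements this idea with measured calibration runs and a neural network alternative.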

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Furthermore, Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value in business terms, ABSA is drawing attention from both academic and industrial organizations. When there is a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. In order to perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence included 'pasta', 'steak', or 'grilled chicken special', these could all be aspect terms for the aspect category 'food'. As such, an aspect category referred to by one or more specific aspect terms is called an explicit aspect.
On the other hand, an aspect category like 'price', which has no specific aspect terms but can be indirectly guessed from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and use the word 'aspect' for convenience. One thing to note is that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, whereas ACSC treats not only explicit aspects but also implicit ones. This study seeks answers to the following issues, ignored in previous studies when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the tokens for aspect categories than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence including the aspect category in the QA- or NLI-type sentence-pair configuration? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived. In addition, it was found that it is more effective to reflect the output vector of the aspect category token than to use only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI, and that the order of the sentence with the aspect category in the QA type is irrelevant to performance.
There may be some differences depending on the characteristics of the dataset, but when using NLI type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The new methodology for designing the ACSC model used in this study could be similarly applied to other studies such as ATSC.
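The QA and NLI sentence-pair configurations discussed above can be illustrated with a small helper that builds the raw input string for a BERT-style encoder; the templates below are illustrative stand-ins, not the exact ones used in the study:

```python
def make_acsc_pair(review, aspect, style="QA", aspect_first=False):
    """Construct the sentence pair fed to a BERT-style encoder for ACSC.

    QA style phrases the aspect as an auxiliary question; NLI style
    states the aspect as a short pseudo-hypothesis. aspect_first
    controls which segment comes first, the third question the study
    examines. Templates here are hypothetical examples.
    """
    if style == "QA":
        aux = f"what do you think of the {aspect} of it ?"
    else:  # NLI
        aux = aspect
    first, second = (aux, review) if aspect_first else (review, aux)
    # BERT sentence-pair input: [CLS] segment A [SEP] segment B [SEP]
    return f"[CLS] {first} [SEP] {second} [SEP]"

pair = make_acsc_pair("The restaurant is expensive but the food is fantastic",
                      "price", style="NLI")
```

A real model would tokenize this pair, assign the two segments different token-type IDs, and classify either from the [CLS] output vector or, as the study recommends, from the output vectors of the aspect tokens.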

An Efficient Estimation of Place Brand Image Power Based on Text Mining Technology (텍스트마이닝 기반의 효율적인 장소 브랜드 이미지 강도 측정 방법)

  • Choi, Sukjae;Jeon, Jongshik;Subrata, Biswas;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.113-129
    • /
    • 2015
  • Place branding is a very important income-generating activity that gives special meaning to a specific location, producing identity and communal value based on an understanding of the place-branding concept and methodology. Many other areas, such as marketing, architecture, and city construction, exert an influence in creating an impressive brand image. A place brand that is highly recognized by both Koreans and foreigners creates significant economic effects. There has been research on creating a strategic and detailed place brand image; the representative work was carried out by Anholt, who surveyed two million people from 50 different countries. However, such investigation, including survey research, requires a great deal of effort from the workforce and significant expense. As a result, there is a need for more affordable, objective, and effective research methods. The purpose of this paper is to find a way to measure the intensity of a brand image objectively and at low cost through text mining. The proposed method extracts the keywords and the factors constructing the place brand image from related web documents; in this way, we can measure the brand image intensity of a specific location. The performance of the proposed methodology was verified through comparison with Anholt's city image consistency index ranking of 50 cities around the world. Four methods were applied in the test. First, the RANDOM method artificially ranks the cities included in the experiment. The HUMAN method first prepares a questionnaire and selects 9 volunteers who are well acquainted with brand management and with the cities to be evaluated; they are then requested to rank the cities, and the results are compared with Anholt's evaluation. The TM method applies the proposed method to evaluate the cities with all evaluation criteria.
TM-LEARN, the extended version of TM, selects significant evaluation items from the items in every criterion and then evaluates the cities with the selected evaluation criteria. RMSE is used as the metric to compare the evaluation results. The experimental results are as follows. First, compared to the evaluation method that targets ordinary people, this method appeared to be more accurate. Second, compared to the traditional survey method, the time and cost are much lower because automated means were used. Third, the proposed methodology is timely because it can be run repeatedly at any time. Fourth, compared to Anholt's method, which evaluated only an already-specified set of cities, the proposed methodology is applicable to any location. Finally, the proposed methodology has relatively high objectivity because the research was conducted on open-source data. As a result, our city-image-evaluation text mining approach was found to be valid in terms of accuracy, cost-effectiveness, timeliness, scalability, and reliability. The proposed method provides managers with clear guidelines for brand management in the public and private sectors. In the public sector, local officials could use the proposed method to formulate strategies and enhance the image of their places in an efficient manner: rather than conducting heavy questionnaires, they could quickly monitor the current place image a priori, and then decide to proceed with a formal place-image test only if the evaluation results from the proposed method are out of the ordinary, whether the results indicate an opportunity or a threat to the place.
Moreover, by combining the proposed method with morphological analysis, extraction of meaningful facets of the place brand from text, sentiment analysis, and more, marketing strategy planners or civil engineering professionals may obtain deeper and more abundant insights for better place brand images. In the future, a prototype system will be implemented to show the feasibility of the idea proposed in this paper.
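The RMSE comparison against Anholt's reference ranking can be sketched as follows; the city ranks below are made-up illustrations, not the study's data:

```python
import math

def rmse(ranks_a, ranks_b):
    """Root-mean-square error between two rankings of the same cities.

    Each argument maps a city to its rank position; a lower RMSE means
    the method's ranking is closer to the reference ranking (e.g. the
    Anholt index), which is how RANDOM, HUMAN, TM, and TM-LEARN are
    compared in the study.
    """
    cities = ranks_a.keys() & ranks_b.keys()
    return math.sqrt(sum((ranks_a[c] - ranks_b[c]) ** 2 for c in cities)
                     / len(cities))

reference = {"Paris": 1, "London": 2, "Seoul": 3}   # hypothetical reference
text_mined = {"Paris": 1, "London": 3, "Seoul": 2}  # hypothetical TM result
error = rmse(reference, text_mined)
```

Intersecting the key sets makes the metric robust to a method that happens to rank a city the reference does not cover.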

Analysis and Performance Evaluation of Pattern Condensing Techniques used in Representative Pattern Mining (대표 패턴 마이닝에 활용되는 패턴 압축 기법들에 대한 분석 및 성능 평가)

  • Lee, Gang-In;Yun, Un-Il
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.77-83
    • /
    • 2015
  • Frequent pattern mining, one of the major areas actively studied in data mining, is a method for extracting useful pattern information hidden in large data sets or databases. Frequent pattern mining approaches have been actively employed in a variety of application fields because their results allow us to analyze various important characteristics within databases more easily and automatically. However, traditional frequent pattern mining methods, which simply extract all possible frequent patterns whose support values are not smaller than a user-given minimum support threshold, have the following problems. First, traditional approaches have to generate an enormous number of patterns depending on the features of a given database and the threshold settings, and this number can increase in geometric progression, wasting runtime and memory resources. Furthermore, the excessively large pattern results also make analysis of the mining results troublesome. To solve these issues of traditional frequent pattern mining approaches, the concept of representative pattern mining and various related works have been proposed. In contrast to the traditional approaches that find all possible frequent patterns in a database, representative pattern mining approaches selectively extract a smaller number of patterns that represent the general frequent patterns. In this paper, we describe the details and characteristics of pattern condensing techniques that consider the maximality or closure property of the generated frequent patterns, and conduct a comparison and analysis of these techniques.
Given a frequent pattern, satisfying maximality means that every proper superset of the pattern has a support value smaller than the user-specified minimum support threshold, while satisfying the closure property means that no superset has a support equal to that of the pattern. By mining maximal frequent patterns or closed frequent patterns, we can achieve effective pattern compression and perform mining operations with much smaller time and space resources. In addition, compressed patterns can be converted back into the original frequent pattern forms if necessary; in particular, the closed frequent pattern notation can recover the original patterns without any information loss. That is, we can obtain the complete set of original frequent patterns from the closed frequent ones. Although the maximal frequent pattern notation does not guarantee a complete recovery rate in the pattern conversion process, it has the advantage of extracting a smaller number of representative patterns more quickly than the closed frequent pattern notation. In this paper, we show the performance and characteristics of the aforementioned techniques in terms of pattern generation, runtime, and memory usage, conducting a performance evaluation on various real data sets collected from the real world. For a more exact comparison, we implement the algorithms for these techniques on the same platform and at the same implementation level.
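The maximality and closure properties defined above can be made concrete with a brute-force filter over a set of frequent itemsets and their supports (a naive post-processing sketch for illustration, not the mining algorithms the paper evaluates):

```python
def closed_and_maximal(frequent):
    """Filter a {pattern: support} dict of frequent itemsets into its
    closed and maximal representatives.

    A pattern is closed if no proper superset has the same support,
    and maximal if no proper superset is frequent at all. Every
    maximal pattern is therefore also closed.
    """
    closed, maximal = {}, {}
    for p, s in frequent.items():
        supersets = [q for q in frequent if p < q]   # proper supersets
        if all(frequent[q] < s for q in supersets):
            closed[p] = s
        if not supersets:
            maximal[p] = s
    return closed, maximal

# Hypothetical frequent itemsets with their support counts
freq = {frozenset("a"): 3, frozenset("b"): 3,
        frozenset("ab"): 3, frozenset("abc"): 2}
closed, maximal = closed_and_maximal(freq)
```

Here {a, b} absorbs {a} and {b} because all three share support 3, leaving two closed patterns but only one maximal one, which illustrates why the maximal notation compresses harder while the closed notation preserves exact supports.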