• Title/Summary/Keyword: Modeling step

Analysis of the thermal-mechanical behavior of SFR fuel pins during fast unprotected transient overpower accidents using the GERMINAL fuel performance code

  • Vincent Dupont;Victor Blanc;Thierry Beck;Marc Lainet;Pierre Sciora
    • Nuclear Engineering and Technology
    • /
    • v.56 no.3
    • /
    • pp.973-979
    • /
    • 2024
  • In the framework of the Generation IV research and development project, in which the French Alternative Energies and Atomic Energy Commission (CEA) is involved, a main objective for the design of Sodium-cooled Fast Reactors (SFRs) is to meet the safety goals for severe accidents. Among severe accidents, Unprotected Transient OverPower (UTOP) accidents can lead very quickly to a global melting of the core. UTOP accidents can be considered either as slow, during a Control Rod Withdrawal (CRW), or as fast. The paper focuses on fast UTOP accidents, which occur within a few milliseconds, and three different scenarios are considered: rupture of the core support plate, uncontrolled passage of a gas bubble through the core, and core mechanical distortion such as core flowering/compaction during an earthquake. Several levels and rates of reactivity insertion are also considered, and the thermal-mechanical behavior of a fuel pin from the ASTRID CFV core is simulated with the GERMINAL code. Two types of fuel pins are simulated, inner and outer core pins, and three different burn-ups are considered. Moreover, the feedback from the CABRI programs on these types of transients is used to evaluate the failure mechanism in terms of kinetics of energy injection and fuel melting. The CABRI experiments complete the analysis made with the GERMINAL calculations and have shown that three dominant mechanisms can be considered responsible for pin failure or the onset of pin degradation during ULOF/UTOP accidents: molten cavity pressure loading, fuel-cladding mechanical interaction (FCMI), and fuel break-up. This study is one of the first steps in modelling fast UTOP accidents with GERMINAL, and it has shown that the code can already model these types of scenarios up to the sodium boiling point. The modelling of the radial propagation of the melting front, validated by comparison with CABRI tests, is already very effective.
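
GERMINAL itself is a dedicated fuel performance code whose models are not given in the abstract, but the idea of tracking a radial melting front during a power pulse can be illustrated with a toy calculation. The sketch below is a minimal explicit finite-difference solve of 1D radial heat conduction with a volumetric heat source; every material value, the geometry, and the boundary conditions are assumed round numbers for illustration, not ASTRID or CABRI data.

```python
import numpy as np

# Toy illustration only (not GERMINAL): explicit finite-difference solution of
# 1D radial heat conduction with a volumetric heat source,
#   dT/dt = alpha * (d2T/dr2 + (1/r) dT/dr) + q / (rho*cp),
# used to show how a radial melting front can be tracked during a power pulse.
# All material values below are assumed round numbers, not ASTRID or CABRI data.
R = 3.5e-3        # pellet radius [m] (assumed)
N = 100           # radial nodes
alpha = 8e-7      # thermal diffusivity [m^2/s] (assumed)
q_vol = 2e10      # volumetric heat rate during the transient [W/m^3] (assumed)
rho_cp = 3.0e6    # volumetric heat capacity rho*cp [J/(m^3 K)] (assumed)
T_melt = 3100.0   # fuel melting temperature [K] (assumed)

r = np.linspace(R / N, R, N)      # start slightly off-centre to avoid the r = 0 singularity
dr = r[1] - r[0]
dt = 0.2 * dr**2 / alpha          # respect the explicit stability limit dt <= dr^2 / (2*alpha)
T = np.full(N, 1200.0)            # initial temperature profile [K] (assumed uniform)

def melt_radius(T, r, T_melt):
    """Outermost radius at which the fuel temperature exceeds the melting point."""
    molten = np.where(T >= T_melt)[0]
    return r[molten[-1]] if molten.size else 0.0

for _ in range(2000):             # march the transient forward in time
    d2T = np.zeros(N)
    dTdr = np.zeros(N)
    d2T[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
    dTdr[1:-1] = (T[2:] - T[:-2]) / (2 * dr)
    T = T + dt * (alpha * (d2T + dTdr / r) + q_vol / rho_cp)
    T[0] = T[1]                   # symmetry condition at the pellet centre
    T[-1] = 900.0                 # fixed temperature at the pellet surface (assumed)

print(f"toy melt-front radius: {melt_radius(T, r, T_melt) * 1e3:.2f} mm")
```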

Evaluation of Application of 3D Printing Phantom According to Manufacturing Method (구성 물질에 따른 3D 프린팅 팬텀의 적용 평가)

  • Young Sang Kim;Ju Young Lee;Hoon Hee Park
    • Journal of Radiation Industry
    • /
    • v.17 no.2
    • /
    • pp.173-181
    • /
    • 2023
  • 3D printing is a technology that can transform and process computerized data obtained through modeling or 3D scanning via CAD. In the medical field, studies on customized 3D printing technology continue for clinical use and for specific patients and diseases. The importance of research on filaments and molding methods is increasing, but research on manufacturing methods and available raw materials is not being actively conducted. In this study, we compared the characteristics of each material according to the manufacturing method of phantoms produced with 3D printing technology and evaluated their usefulness. We manufactured phantoms of the same size using polymethyl methacrylate (PMMA), acrylonitrile butadiene styrene (ABS), and polylactic acid (PLA), based on the international standard aluminum step-wedge phantom. We used SITEC's radiation generator (DigiRAD-FPC R-1000-150) and compared the shielding rate and linear attenuation coefficient using the average of 10 exposures. As a result, the dose transmitted through each phantom was confirmed to decrease linearly as the thickness increased under each condition. The sensitivity also decreased as the number of steps increased for each phantom, and it was confirmed to differ depending on thickness and material. Through this study, we confirmed that 3D printing technology can be usefully applied to phantom production in the medical field. If printing technology is developed further and studies on various materials are conducted, it is believed they will contribute to the development of the medical research environment.
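
The shielding rate and linear attenuation coefficient compared in the abstract follow from the Beer-Lambert relation $I = I_0 e^{-\mu x}$. The snippet below is a minimal sketch of how such quantities could be derived from averaged transmitted-dose readings; all numbers are invented placeholders, not measurements from the study.

```python
import numpy as np

# Minimal sketch with invented numbers: estimating the shielding rate and the linear
# attenuation coefficient mu from averaged transmitted doses, assuming the
# Beer-Lambert law I = I0 * exp(-mu * x).
I0 = 100.0                                      # incident (unshielded) dose reading (assumed)
thickness_cm = np.array([0.5, 1.0, 1.5, 2.0])   # step-wedge thicknesses in cm (assumed)
I = np.array([82.0, 67.5, 55.4, 45.6])          # mean transmitted dose over 10 exposures (assumed)

shielding_rate = 1.0 - I / I0                   # fraction of the beam removed at each step
# Least-squares fit through the origin of ln(I0/I) = mu * x gives mu as the slope.
mu = np.sum(thickness_cm * np.log(I0 / I)) / np.sum(thickness_cm**2)

for x, s in zip(thickness_cm, shielding_rate):
    print(f"{x:.1f} cm: shielding rate {s:.1%}")
print(f"estimated linear attenuation coefficient: {mu:.3f} cm^-1")
```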

Quantum transport of doped rough-edged graphene nanoribbons FET based on TB-NEGF method

  • K.L. Wong;M.W. Chuan;A. Hamzah;S. Rusli;N.E. Alias;S.M. Sultan;C.S. Lim;M.L.P. Tan
    • Advances in nano research
    • /
    • v.17 no.2
    • /
    • pp.137-147
    • /
    • 2024
  • Graphene nanoribbons (GNRs) are considered a promising alternative to graphene for future nanoelectronic applications. However, GNR-based device modeling is still at an early stage. This research models the electronic properties of n-doped rough-edged 13-armchair graphene nanoribbons (13-AGNRs) and the quantum transport properties of n-doped rough-edged 13-armchair graphene nanoribbon field-effect transistors (13-AGNRFETs) at different doping concentrations. Step-up and edge doping are used to incorporate doping within the nanostructure. The numerical real-space nearest-neighbour tight-binding (NNTB) method constructs the Hamiltonian operator matrix, which is used to compute electronic properties, including the sub-band structure and bandgap. Quantum transport properties are subsequently computed using the self-consistent solution of the two-dimensional Poisson and Schrödinger equations within the non-equilibrium Green's function method. The finite difference method solves the Poisson equation, while the successive over-relaxation method speeds up convergence. Performance metrics of the device are then computed. The results show that highly doped, rough-edged 13-AGNRs exhibit a lower bandgap. Moreover, n-doped rough-edged 13-AGNRFETs with a more highly doped channel have better gate control and are less affected by leakage current, as demonstrated by a higher current ratio and lower off-current. Furthermore, highly n-doped rough-edged 13-AGNRFETs have better channel control and are less affected by the short-channel effect due to their lower subthreshold swing and drain-induced barrier lowering. The inclusion of dopants enhances the on-current by introducing more charge carriers in the highly n-doped, rough-edged channel. This research highlights the importance of optimizing doping concentrations for enhancing GNRFET-based device performance, making them viable for applications in nanoelectronics.
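
The nearest-neighbour tight-binding step described above can be illustrated by diagonalising $H(k) = H_{00} + H_{01}e^{ika} + H_{01}^{\dagger}e^{-ika}$ over the Brillouin zone. The sketch below uses a simplified square-lattice ladder rather than a real 13-AGNR unit cell, the commonly quoted carbon hopping of about 2.7 eV, and an arbitrary onsite shift standing in for edge doping; it shows the machinery, not the paper's actual Hamiltonian.

```python
import numpy as np

# Minimal sketch of the nearest-neighbour tight-binding band step:
# subbands from E_n(k) = eig( H00 + H01*exp(ika) + H01^dagger*exp(-ika) ).
# The unit cell here is a simplified square-lattice ladder of width N_W,
# not a real 13-AGNR cell; t and the onsite "edge doping" shift are assumptions.
t = 2.7            # nearest-neighbour hopping [eV] (commonly quoted value)
N_W = 13           # atoms per unit cell across the ribbon width
a = 1.0            # lattice constant along the transport direction (arbitrary units)

onsite = np.zeros(N_W)
onsite[[0, N_W - 1]] = -0.5        # crude onsite shift mimicking edge doping [eV] (assumed)

H00 = np.diag(onsite) - t * (np.eye(N_W, k=1) + np.eye(N_W, k=-1))  # couplings within the cell
H01 = -t * np.eye(N_W)                                              # couplings to the next cell

ks = np.linspace(-np.pi / a, np.pi / a, 201)
bands = np.empty((len(ks), N_W))
for i, k in enumerate(ks):
    Hk = H00 + H01 * np.exp(1j * k * a) + H01.conj().T * np.exp(-1j * k * a)
    bands[i] = np.linalg.eigvalsh(Hk)   # real eigenvalues of the Hermitian Bloch Hamiltonian

print(f"{N_W} subbands computed over {len(ks)} k-points; "
      f"energy range [{bands.min():.2f}, {bands.max():.2f}] eV")
```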

Comparison with the 6th and 7th Science Curricular for Inquiry Skill Elements in the Elementary and Secondary School (초.중.고등학교 탐구 기능 요소에 대한 6차와 7차 과학 교육 과정의 비교)

  • Ha, So-Hyun;Kwack, Dae-Oh;Sung, Min-Wung
    • Journal of The Korean Association For Science Education
    • /
    • v.21 no.1
    • /
    • pp.102-113
    • /
    • 2001
  • To compare the 6th and 7th science curricula with respect to inquiry skill elements in elementary and secondary school, we divided the skill domains into five classes: process skills, step skills for inquiry instruction, inquiry activity skills, manipulative skills, and breeding-farming skills. We then investigated the kinds and frequencies of the inquiry skill elements of the 6th and 7th curricula in elementary and secondary school. The results were as follows. 1. There were 17 kinds of inquiry skill elements in the 6th curriculum and 23 kinds in the 7th; the 7th curriculum therefore contained about 1.4 times as many kinds of skill elements as the 6th. 2. The total frequencies of the inquiry skill elements were 408 in the 6th curriculum and 729 in the 7th, about 1.8 times as many. 3. By school level, the elementary school course showed 14 kinds in the 6th curriculum and 18 kinds in the 7th; the middle school course showed 7 kinds in the 6th and 16 kinds in the 7th; the integrated science course of high school showed 10 kinds in both; and the four science courses of high school showed 11 kinds in total in the 6th and 21 kinds in the 7th. Thus, in the middle and high school courses, the 7th curriculum contained about twice as many kinds of inquiry skill elements as the 6th. Among school levels, the increase in skill elements was greatest in the middle school course, followed by the high school course. 4. The 17 kinds of skill elements from elementary school to high school in the 6th science curriculum, in order from highest to lowest rate, were: experimenting 20%, observing 15%, interpreting and analyzing data 13%, investigating 9%, measuring 7%, drawing a conclusion and assessment 7%, discussion 6%, communicating 5%, classifying 4%, recognizing problems and formulating hypotheses 4%, predicting 3%, designing and carrying out an experiment 3%, collecting and treating data 2%, manipulative skill 1%, modeling 0.5%, breeding and farming 0.3%, and inferring 0.2%. 5. The 23 kinds of skill elements from elementary school to high school in the 7th curriculum, in order from highest to lowest rate, were: drawing a conclusion and assessment 31%, investigating 14%, collecting and treating data 8%, observing 7%, experimenting 7%, recognizing problems and formulating hypotheses 6%, interpreting and analyzing data 4%, measuring 3%, discussion 3%, manipulative skill 3%, modeling 3%, classifying 2%, project 2%, educational visits 1%, controlling variables 1%, predicting 1%, inferring 1%, operational definition 1%, communicating 1%, designing and carrying out an experiment 0.3%, breeding and farming 0.3%, applying numbers 0.2%, and relating time and space 0.2%. In conclusion, the 7th curriculum added six kinds of skill elements to the 6th: operational definition, applying numbers, relating time and space, controlling variables, educational visits, and project.

The Development of Education Model for CA-RP(Cognitive Apprenticeship-Based Research Paper) to Improve the Research Capabilities for Majors Students of Radiological Technology (방사선 전공학생의 연구역량 증진을 위한 인지적 도제기반 논문작성 교육 모형 개발)

  • Park, Hoon-Hee;Chung, Hyun-Suk;Lee, Yun-Hee;Kim, Hyun-Soo;Kang, Byung-Sam;Son, Jin-Hyun;Min, Jung-Hwan;Lyu, Kwang-Yeul
    • Journal of radiological science and technology
    • /
    • v.36 no.2
    • /
    • pp.99-110
    • /
    • 2013
  • In the medical field, the need to strengthen education for professional radiologic technologists has been emphasized so that they can become experts on radiation, a field of growing importance to society. In hospitals and companies, the importance placed on research papers is also increasing in order to respond actively to rapidly changing internal and external environments, and for more in-depth expert training a new teaching and learning model that can cope with these changes more proactively has become necessary. Conventional thesis-writing classes limit in-depth learning because they start and finish within a single semester and rely only on specific programs, which inevitably leads to passive participation. In addition, instructor-led classes offer few opportunities to present, and little actual opportunity to write and discuss. This has a direct impact on the quality of the thesis and also limits opportunities to participate in conferences. To solve these problems, this study organized thesis writing as a consistent, gradually deepening course of learning and proposed an operational plan based on connected, integrated operation together with an effective training program and instructional tools for improving the ability to actually write a thesis. The teaching and learning model was developed around four components: modeling, scaffolding, articulation, and exploration. Depending on the nature of the course, teams were organized around personal interests and topics so that subjects could be connected; on this basis, research capacity was promoted through step-by-step evaluation and feedback, problem-solving skills were fundamentally strengthened through journal study, a wiki space helped solve problems in real time and use time efficiently, and the quality of the thesis was raised by activating cooperation through mentoring, thereby promoting a positive partnership with academia. The support system consisted of three stages based on cognitive apprenticeship: topic planning, progress and writing, and thesis writing and presentation. Ongoing coaching and reflection by professors and experts were applied to keep these activities running smoothly. The results of this study are expected to encourage learners to participate actively, voluntarily, and substantially, thereby cultivating creativity, originality, and the ability to cooperate, and, by enhancing knowledge-based expertise, to help improve their comprehensive abilities.

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of analysis. Until recently, text mining studies focused on the second step, such as document classification, clustering, and topic modeling. However, with the discovery that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve that quality by preserving the meaning of words and documents when text data is represented as vectors. Unlike structured data, which can be directly used in a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can understand before analysis. Mapping arbitrary objects into a space of a specific dimension while maintaining algebraic properties, for the purpose of structuring text data, is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents in various ways. In particular, as demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, traditional document embedding methods represented by doc2Vec generate a vector for each document using the whole corpus of words included in the document. This imposes a limitation: the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document into a single corresponding vector, which makes it difficult to accurately represent a complex document with multiple subjects as a single vector. In this paper, we propose a new multi-vector document embedding method to overcome these limitations of traditional document embedding methods. This study targets documents that explicitly separate body content and keywords. In the case of a document without keywords, the method can be applied after extracting keywords through various analysis methods; since this is not the core subject of the proposed method, however, we introduce the process of applying the proposed method to documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as a vector of N-dimensional real values through word embedding. Then, to overcome the limitation of the traditional document embedding method of being affected by miscellaneous words as well as core words, vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors for each document. Next, clustering is conducted on the set of keyword vectors for each document to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector-based traditional approach cannot properly map complex documents because of interference among subjects within each vector. With the proposed multi-vector-based method, we ascertained that complex documents can be vectorized more accurately by eliminating the interference among subjects.
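
The five-step pipeline above can be sketched with off-the-shelf tools. In the snippet below, gensim's Word2Vec (v4 API assumed) stands in for the word-embedding step and scikit-learn's KMeans for the keyword clustering; the tiny documents, dimensions, and cluster counts are placeholders, not the paper's actual setup.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Sketch of the five-step pipeline with stand-in tools (assumed, not the paper's exact setup):
# (1) parse/tokenize, (2) word embedding, (3) keyword vector extraction,
# (4) keyword clustering, (5) one document vector per keyword cluster.
documents = [
    {"body": "deep learning models improve text classification and topic modeling",
     "keywords": ["deep", "learning", "topic", "modeling"]},
    {"body": "convolutional networks and recurrent networks for document classification",
     "keywords": ["networks", "document", "classification"]},
]

# Steps (1)-(2): tokenize the bodies and train a word embedding model on them.
corpus = [doc["body"].split() for doc in documents]
w2v = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50, seed=0)

def multi_vectors(doc, n_clusters=2):
    # Step (3): keep only the keyword vectors, dropping miscellaneous body words.
    kw_vecs = np.array([w2v.wv[w] for w in doc["keywords"] if w in w2v.wv])
    k = min(n_clusters, len(kw_vecs))
    # Step (4): cluster the keyword vectors to separate the document's subjects.
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(kw_vecs)
    # Step (5): one vector per subject, the mean of that cluster's keyword vectors.
    return [kw_vecs[labels == c].mean(axis=0) for c in range(k)]

for i, doc in enumerate(documents):
    vecs = multi_vectors(doc)
    print(f"document {i}: {len(vecs)} subject vectors of dimension {vecs[0].shape[0]}")
```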

Estimating the Yield of Marketable Potato of Mulch Culture using Climatic Elements (시기별 기상값 활용 피복재배 감자 상서수량 예측)

  • Lee, An-Soo;Choi, Seong-Jin;Jeon, Shin-Jae;Maeng, Jin-Hee;Kim, Jong-Hwan;Kim, In-Jong
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.61 no.1
    • /
    • pp.70-77
    • /
    • 2016
  • The objective of this study was to evaluate the effects of climatic elements on potato yield and to create a model for estimating it. We used 35 yield records of the Sumi variety produced under mulch cultivation in 17 regions over 11 years. Some climatic elements showed significant correlation coefficients with the marketable yield of potato; in total, 22 climatic elements were significant. In particular, precipitation for 20 days after planting (Prec_1 & 2), relative humidity during 11~20 days after planting (RH_2), precipitation for 20 days before harvest (Prec_9 & 10), sunshine hours during 50~41 days before harvest (SH_6) and 20 days before harvest (SH_9 & 10), and days of rain during 10 days before harvest (DR_10) were highly significant in the quadratic regression analysis. Twenty-two predicted-yield items ($Y_i = aX_i^2 + bX_i + c$) were derived from the 22 climatic elements (step 1). The predicted yields were then entered into a stepwise regression against marketable yield using the statistical program SPSS, and we selected a model (step 2) in which 4 independent variables ($Y_i$) were used. Subsequently, each $Y_i$ was replaced with its step-1 expression $aX_i^2 + bX_i + c$. Finally, we derived the following model to predict the marketable yield of potato: $$Y = -336\,\mathrm{DR}_{10}^2 + 854\,\mathrm{DR}_{10} - 0.422\,\mathrm{Prec}_9^2 + 43.3\,\mathrm{Prec}_9 - 0.0414\,\mathrm{RH}_2^2 + 46.2\,\mathrm{RH}_2 - 0.0102\,\mathrm{Prec}_2^2 - 7.00\,\mathrm{Prec}_2 - 10039.$$
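
The two-step procedure (per-element quadratic fits, then stepwise selection over the fitted predictors) can be sketched as follows. The data are random placeholders, and a simple greedy forward-selection loop stands in for the SPSS stepwise procedure; none of this reproduces the paper's coefficients.

```python
import numpy as np

# Sketch of the two-step procedure on placeholder data (not the paper's observations):
# step 1 - fit Y_i = a*X_i^2 + b*X_i + c for each climatic element,
# step 2 - greedy forward selection of the fitted predictors against marketable yield,
#          standing in for the SPSS stepwise procedure, stopping at 4 predictors.
rng = np.random.default_rng(0)
n_obs, n_elements = 35, 22
X = rng.normal(size=(n_obs, n_elements))            # climatic elements (placeholder)
y = rng.normal(loc=2500, scale=300, size=n_obs)     # marketable yield (placeholder)

# Step 1: quadratic fit per element; each fitted curve becomes a candidate predictor Y_i.
preds = np.column_stack([
    np.polyval(np.polyfit(X[:, j], y, deg=2), X[:, j]) for j in range(n_elements)
])

def rss(design, target):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    return np.sum((target - design @ beta) ** 2)

# Step 2: add, one at a time, the candidate predictor that most reduces the RSS.
selected = []
while len(selected) < 4:
    remaining = [j for j in range(n_elements) if j not in selected]
    best = min(remaining, key=lambda j: rss(
        np.column_stack([np.ones(n_obs), preds[:, selected + [j]]]), y))
    selected.append(best)

print("indices of selected climatic-element predictors:", selected)
```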

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.39-54
    • /
    • 2013
  • The recent explosive increase of electronic commerce provides many advantageous purchase opportunities to customers. In this situation, customers who do not have enough knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists to users. Thus, product recommender systems in online shopping stores have become one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect users' preferences cause disappointment and wasted time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' preferences precisely. The research data were collected from a real-world online shopping store that sells products from famous art galleries and museums in Korea. The dataset initially contained 5,759 transactions; 3,167 transactions remained after deletion of null data. We transformed the categorical variables into dummy variables and excluded outliers. The proposed model consists of two steps. The first step predicts customers who are highly likely to purchase products in the online shopping store. In this step, we use logistic regression, decision trees, and artificial neural networks to predict such customers for each product group, performing these data mining techniques with SAS E-Miner. We partition the data into modeling and validation sets for logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network model; the validation set is the same for all experiments. Then we combine the results of each predictor using multi-model ensemble techniques, namely bagging and bumping. Bagging, short for "bootstrap aggregating," combines the outputs of several machine learning models to raise the performance and stability of prediction or classification; it is a special form of averaging. Bumping, short for "bootstrap umbrella of model parameters," keeps only the model with the lowest error. The results show that bumping outperforms bagging and the individual predictors except for the "Poster" product group, where the artificial neural network performs best. In the second step, we use market basket analysis to extract association rules for co-purchased products. Thirty-one association rules were extracted according to the lift, support, and confidence measures, with the minimum transaction frequency (support) set to 5%, the maximum number of items in an association set to 4, and the minimum confidence for rule generation set to 10%. We also excluded rules with a lift value below 1 and finally obtained fifteen association rules after removing duplicates. Among them, eleven rules involve products within the "Office Supplies" group, one rule links "Office Supplies" and "Fashion," and the other three link "Office Supplies" and "Home Decoration." Finally, the proposed recommender system provides a list of recommendations to the appropriate customers. We tested the usability of the proposed system using a prototype with real-world transaction and profile data; the prototype was built with ASP, JavaScript, and Microsoft Access. In addition, we surveyed user satisfaction with the product list recommended by the proposed system and with randomly selected product lists. The survey participants were 173 users of MSN Messenger, Daum Café, and P2P services. We evaluated user satisfaction on a five-point Likert scale and performed a paired-sample t-test on the survey results. The results show that the proposed model outperforms the random selection model at the 1% significance level, meaning that users were significantly more satisfied with the recommended product list. The results also suggest that the proposed system may be useful in real-world online shopping stores.
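
Bagging is available off the shelf, but bumping (keeping only the best bootstrap model) usually has to be coded by hand. The sketch below shows both on placeholder purchase data, with scikit-learn logistic regression standing in for the SAS E-Miner models described above; the `estimator` keyword assumes scikit-learn 1.2 or later.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Sketch on placeholder data (scikit-learn stands in for the SAS E-Miner models):
# predict "likely purchaser" with bagging, and with bumping, i.e. keeping only the
# single bootstrap model with the lowest training error.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                                   # customer features (placeholder)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_train, y_train, X_valid, y_valid = X[:400], y[:400], X[400:], y[400:]

# Bagging: average many models fitted on bootstrap resamples of the training set.
bagging = BaggingClassifier(estimator=LogisticRegression(max_iter=1000),
                            n_estimators=25, random_state=0).fit(X_train, y_train)

def bumping(X, y, n_boot=25):
    """Fit on bootstrap resamples and keep only the model with the lowest error on the full set."""
    best_model, best_err = None, np.inf
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))                   # one bootstrap resample
        model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        err = 1.0 - accuracy_score(y, model.predict(X))
        if err < best_err:
            best_model, best_err = model, err
    return best_model

bumped = bumping(X_train, y_train)
print("bagging accuracy :", accuracy_score(y_valid, bagging.predict(X_valid)))
print("bumping accuracy :", accuracy_score(y_valid, bumped.predict(X_valid)))
```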

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.143-163
    • /
    • 2016
  • The demographics of Internet users are the most basic and important sources for target marketing and personalized advertisements on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although the marketing department can obtain demographics using online or offline surveys, these approaches are expensive, slow, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data allow us to see what pages users visited, how long they stayed, how often they visited, when they usually visited, which sites they prefer, what keywords they used to find a site, whether they made a purchase, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with the demographics, including search keywords, frequency and intensity by time, day, and month, variety of websites visited, and text information from visited web pages. The demographic attributes to predict also vary across studies, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, were used for prediction model building. However, previous research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated to build the best prediction model. The objective of this study is to choose the clickstream attributes most likely to be correlated with the demographics from the results of previous research, and then to identify which data mining method is best suited to predicting each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job. Based on previous research, 64 clickstream attributes are used to predict the demographic attributes. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction of the clickstream variables to address the curse of dimensionality and overfitting; we use three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data representing 5 demographic attributes and 16,962,705 online activities of 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross-validation was conducted to enhance the reliability of the experiments. The experimental results verify that a specific data mining method is well suited to each demographic variable. For example, age is best predicted using decision-tree-based dimension reduction with a neural network, whereas gender and marital status are predicted most accurately by applying SVM without dimension reduction. We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used to predict their demographics and thus be utilized in digital marketing.
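
Steps three and four above (alternative models per demographic attribute, evaluated by accuracy with 5-fold cross-validation) can be sketched with scikit-learn standing in for IBM SPSS Modeler. The profile data below are random placeholders, and PCA is shown as one of the three dimension-reduction options mentioned in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Sketch on placeholder profiles (scikit-learn stands in for IBM SPSS Modeler):
# compare SVM, a neural network and logistic regression for one demographic attribute,
# with and without PCA dimension reduction, using 5-fold cross-validation.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))          # 64 clickstream attributes per user (placeholder)
gender = (X[:, :5].sum(axis=1) + rng.normal(scale=2, size=2000) > 0).astype(int)

models = {
    "SVM": SVC(),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    for label, steps in [("raw", [StandardScaler(), model]),
                         ("PCA", [StandardScaler(), PCA(n_components=10), model])]:
        acc = cross_val_score(make_pipeline(*steps), X, gender, cv=5).mean()
        print(f"{name:>20s} ({label}): mean accuracy {acc:.3f}")
```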

Technology Acceptance Modeling based on User Experience for Autonomous Vehicles

  • Cho, Yujun;Park, Jaekyu;Park, Sungjun;Jung, Eui S.
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.2
    • /
    • pp.87-108
    • /
    • 2017
  • Objective: The purpose of this study was to conduct an acceptance study based on automation levels and user experience, which was lacking in past studies of ADAS, the core technology of autonomous vehicles. The first objective was to construct an acceptance model of ADAS technology and to identify factors affecting behavioral intention through user experience-based evaluation using a driving simulator. The second was to observe how these factors change across the automation levels of autonomous vehicles through the UX/UA score. Background: The number of vehicles equipped with ADAS is increasing, and this has changed the interaction between vehicle and driver as particular driving functions become automated. For this reason, it is becoming important to study technology acceptance, namely how drivers can actively accept giving up parts of the driving task and handing over authority to the vehicle. Method: We organized the study model, items, and scenarios according to the four stages of autonomous-vehicle automation through a literature review, and conducted the acceptance assessment using a driving simulator. A total of 68 men and women participated in the experiment. Results: Performance Expectancy (PE), Social Influence (SI), Perceived Safety (PS), Anxiety (AX), Trust (T), and Affective Satisfaction (AS) were identified as the factors affecting Behavioral Intention (BI). The UX/UA scores of these factors also differ statistically significantly across the automation stages: UX/UA tends to rise up to stage 2 of automation, drops to its lowest level at stage 3, and increases slightly or stays steady at stage 4. Conclusion and Application: First, we presented an acceptance model of ADAS, the core technology of autonomous vehicles, which can serve as a basis for future acceptance studies of ADAS technology because it was verified through user experience-based assessment with a driving simulator. Second, the model can help guide future ADAS development by tracing how the factors change and predicting the acceptance level across automation stages through the UX/UA score, and it can also identify and avoid problems that affect the acceptance level. These results can be used to test the validity of functions before an ADAS supplier launches its products. They will also help prevent problems that could arise when applying autonomous-vehicle technology and help establish technology that drivers can readily accept, thereby improving driver safety and convenience.
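
As a rough illustration of how the six factors above can be related to Behavioral Intention, the sketch below fits an ordinary least-squares regression on placeholder Likert-style scores; the study's actual analysis may use a different statistical model (e.g. structural equation modelling), so this is only an assumption about the general shape of such an analysis.

```python
import numpy as np
import statsmodels.api as sm

# Sketch only: an ordinary least-squares fit of Behavioral Intention (BI) on the six
# factors reported above, using placeholder 7-point Likert-style scores for 68 participants.
rng = np.random.default_rng(0)
n = 68
factors = ["PE", "SI", "PS", "AX", "T", "AS"]
X = rng.integers(1, 8, size=(n, len(factors))).astype(float)      # simulated responses
weights = np.array([0.4, 0.2, 0.3, -0.3, 0.35, 0.25])             # assumed signs: anxiety negative
BI = X @ weights + rng.normal(scale=0.8, size=n)                  # simulated behavioral intention

model = sm.OLS(BI, sm.add_constant(X)).fit()
for name, coef, p in zip(["const"] + factors, model.params, model.pvalues):
    print(f"{name:>6s}: coefficient {coef:+.2f}, p = {p:.3f}")
```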