• Title/Summary/Keyword: Computational

Development of Elementary Maker Education Program using WeDo Robot (WeDo 로봇 활용 초등 메이커 교육 프로그램 개발)

  • Kweon, Soonhwan;Park, Jungho
    • Proceedings of the Korean Association of Information Education Conference (한국정보교육학회 학술대회논문집)
    • /
    • 2021.08a
    • /
    • pp.335-340
    • /
    • 2021
  • This study investigated the creation of an environment for maker education programs for robot and SW education, and the development and application of maker education programs for lower-grade elementary school students in farming and fishing villages. Based on a preceding maker education model, the OMCSI model was developed for the lower elementary grades, and five WeDo-based elementary maker education programs were built on it. From April 1, 2020 to October 30, 2020, the elementary maker education program using WeDo Robot 2.0 was applied to 10 second graders of 10 Elementary School in Gyeongsangnam-do, with the following results. Mean scores on three of the sub-elements increased by 3.40 points (t=-2.378, p=0.034), 3.30 points (t=-2.329, p=0.040), and 3.40 points (t=-2.458, p=0.038), respectively, and the score for reasoning ability rose by 3.70 points (t=-2.449, p=0.037). That is, all four sub-elements of computational thinking showed p-values below 0.05, indicating statistically significant differences between pre- and post-test computational thinking scores. Therefore, the elementary maker education program using WeDo robots was highly effective in improving students' computational thinking skills.
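
For readers unfamiliar with the statistics reported above, the pre/post comparison is a paired t-test on each computational-thinking sub-score. A minimal Python sketch of such a test follows; the score vectors are hypothetical, not the study's data:

```python
# Illustrative only: hypothetical pre/post scores for one computational-thinking
# sub-element measured on the same 10 students (values are made up).
from scipy import stats

pre  = [12, 10, 14, 11, 9, 13, 12, 10, 11, 12]
post = [15, 14, 16, 15, 12, 17, 15, 13, 15, 16]

# Paired (dependent-samples) t-test: tests whether the mean pre/post difference is zero.
res = stats.ttest_rel(pre, post)
print(f"t = {res.statistic:.3f}, p = {res.pvalue:.3f}")  # p < 0.05 would indicate a significant gain
```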

Development of transient Monte Carlo in a fissile system with β-delayed emission from individual precursors using modified open source code OpenMC(TD)

  • J. Romero-Barrientos;F. Molina;J.I. Marquez Damian;M. Zambra;P. Aguilera;F. Lopez-Usquiano;S. Parra
    • Nuclear Engineering and Technology
    • /
    • v.55 no.5
    • /
    • pp.1593-1603
    • /
    • 2023
  • In deterministic and Monte Carlo transport codes, β-delayed emission is included using a group structure in which all precursors are lumped into 6 groups or families, but given the increase in computational power, there is no longer a reason to keep this structure. Furthermore, there have been recent efforts to compile and evaluate all available β-delayed neutron emission data and to measure new and improved data on individual precursors. To perform a transient Monte Carlo simulation, data from individual precursors need to be implemented in a transport code. This work is the first step towards developing a tool to explore the effect of individual precursors in a fissile system. Specifically, individual precursor data are included by expanding the capabilities of the open source Monte Carlo code OpenMC. In the modified code, named Time Dependent OpenMC or OpenMC(TD), the time dependence related to β-delayed neutron emission is handled by forced decay of precursors and combing of the particle population. Continuous-energy neutron cross-sections were taken from the JEFF-3.1.1 library. For the individual precursors, cumulative yields were taken from JEFF-3.1.1, and delayed neutron emission probabilities and delayed neutron spectra were taken from ENDF/B-VIII.0. OpenMC(TD) was tested in a monoenergetic system, in an energy-dependent unmoderated system where the precursors were treated individually or in a group structure, and in a light-water-moderated energy-dependent system using the 6-group structure and 50 and 40 individual precursors. Neutron flux as a function of time was obtained for each of the systems studied. These results show the potential of OpenMC(TD) as a tool to study the impact of individual precursor data on fissile systems, motivating further research to simulate more complex fissile systems.
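
The "forced decay of precursors" mentioned above is a standard variance-reduction idea: a precursor is forced to emit its delayed neutron within the current time bin, and the emitted neutron carries a weight equal to the probability that the decay actually occurs there. A minimal sketch of that sampling step, assuming a single precursor with decay constant `lam` and a time bin `[t0, t1]` (illustrative only, not the OpenMC(TD) source):

```python
# Minimal sketch of forced precursor decay in a time bin [t0, t1]
# (illustrative only; this is not the OpenMC(TD) implementation).
import numpy as np

rng = np.random.default_rng(42)

def forced_decay(weight, lam, t0, t1):
    """Force a precursor with decay constant lam (1/s) to decay in [t0, t1].

    Returns a decay time sampled from the exponential distribution truncated
    to the bin, and the emitted delayed neutron's weight, reduced by the
    probability that the decay actually occurs inside the bin.
    """
    p_decay = np.exp(-lam * t0) - np.exp(-lam * t1)   # P(decay in [t0, t1])
    u = rng.random()
    # Inverse-CDF sampling of the exponential distribution truncated to [t0, t1]
    t_decay = -np.log(np.exp(-lam * t0) - u * p_decay) / lam
    return t_decay, weight * p_decay

# Example with a Br-87-like half-life of ~55.6 s (hypothetical single precursor)
t, w = forced_decay(weight=1.0, lam=np.log(2) / 55.6, t0=0.0, t1=1.0)
print(f"decay time = {t:.3f} s, emitted weight = {w:.4e}")
```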

Numerical study on conjugate heat transfer in a liquid-metal-cooled pipe based on a four-equation turbulent heat transfer model

  • Xian-Wen Li;Xing-Kang Su;Long Gu;Xiang-Yang Wang;Da-Jun Fan
    • Nuclear Engineering and Technology
    • /
    • v.55 no.5
    • /
    • pp.1802-1813
    • /
    • 2023
  • Conjugate heat transfer between liquid metal and solid is a common phenomenon in the fuel assemblies and heat exchangers of a liquid-metal-cooled fast reactor, and it strongly affects the reactor's safety and economy. Comprehensively studying this sophisticated conjugate heat transfer is therefore of great importance. However, it has been shown that the traditional Simple Gradient Diffusion Hypothesis (SGDH), which assumes a constant turbulent Prandtl number (Prt, usually 0.85-1.0), is inappropriate for Computational Fluid Dynamics (CFD) simulations of liquid metal. In recent decades, numerous studies have addressed the four-equation model, which is expected to improve the precision of liquid-metal CFD simulations but had not yet been introduced into conjugate heat transfer calculations between liquid metal and solid. Consequently, a four-equation model, consisting of the Abe k-ε turbulence model and the Manservisi kθ-εθ heat transfer model, is applied in the present work to study conjugate heat transfer involving liquid metal. To verify the numerical validity of the four-equation model in conjugate heat transfer simulations, we reproduce Johnson's experiments on liquid lead-bismuth-cooled turbulent pipe flow using the four-equation model and the traditional SGDH model. The simulation results obtained with the different models are compared with the available experimental data, revealing that the relative errors of the local Nusselt number and the mean heat transfer coefficient obtained with the four-equation model are considerably reduced compared with the SGDH model. The thermal-hydraulic characteristics of liquid-metal turbulent pipe flow obtained with the four-equation model are then analyzed. Moreover, the impact of the turbulence model used within the four-equation model on overall simulation performance is investigated. Finally, the effectiveness of the four-equation model in CFD simulations of liquid sodium conjugate heat transfer is assessed. This paper demonstrates that the four-equation model is feasible for studying liquid metal conjugate heat transfer and provides a reference for research on conjugate heat transfer in liquid-metal-cooled fast reactors.
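
For context, the SGDH closure criticized above models the turbulent heat flux with a constant turbulent Prandtl number, while a four-equation model builds a local thermal eddy diffusivity from two additional transported scalars (kθ and εθ). A compact statement of that difference, in generic notation rather than the paper's exact symbols:

```latex
% SGDH: turbulent heat flux closed with a constant turbulent Prandtl number
\begin{align*}
  \overline{u_j' T'} &= -\frac{\nu_t}{\mathrm{Pr}_t}\,\frac{\partial \bar{T}}{\partial x_j},
  & \mathrm{Pr}_t &\approx 0.85\text{--}1.0 \\[4pt]
% Four-equation model: a local thermal eddy diffusivity replaces the fixed Pr_t,
% built from the transported quantities k, epsilon, k_theta, epsilon_theta
  \overline{u_j' T'} &= -\alpha_t\,\frac{\partial \bar{T}}{\partial x_j},
  & \alpha_t &= f\!\left(k,\ \varepsilon,\ k_\theta,\ \varepsilon_\theta\right)
\end{align*}
```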

Development of Homogenization Data-based Transfer Learning Framework to Predict Effective Mechanical Properties and Thermal Conductivity of Foam Structures (폼 구조의 유효 기계적 물성 및 열전도율 예측을 위한 균질화 데이터 기반 전이학습 프레임워크의 개발)

  • Wonjoo Lee;Suhan Kim;Hyun Jong Sim;Ju Ho Lee;Byeong Hyeok An;Yu Jung Kim;Sang Yung Jeong;Hyunseong Shin
    • Composites Research
    • /
    • v.36 no.3
    • /
    • pp.205-210
    • /
    • 2023
  • In this study, we developed a transfer learning framework based on homogenization data for the efficient prediction of the effective mechanical properties and thermal conductivity of cellular foam structures. Mean-field homogenization (MFH) based on Eshelby's tensor allows efficient prediction of the properties of porous structures containing ellipsoidal inclusions, but accurately predicting the properties of cellular foam structures with it is challenging. Finite element homogenization (FEH), on the other hand, is more accurate but comes with a relatively high computational cost. In this paper, we propose a data-driven transfer learning framework that combines the advantages of mean-field homogenization and finite element homogenization. Specifically, we generate a large amount of mean-field homogenization data to build a pre-trained model and then fine-tune it using a relatively small amount of finite element homogenization data. Numerical examples were conducted to validate the proposed framework and verify the accuracy of the analysis. The results of this study are expected to be applicable to the analysis of materials with various foam structures.
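
The pre-train/fine-tune split described above (abundant, cheap MFH data first, scarce and expensive FEH data second) can be sketched with any standard regression surrogate. The following minimal PyTorch sketch is hypothetical: the network size, feature count, and placeholder datasets are assumptions, not the authors' setup:

```python
# Minimal sketch of the MFH -> FEH transfer-learning idea (hypothetical data
# and architecture; not the authors' implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_surrogate(n_in=4, n_out=2):
    # Inputs: microstructure descriptors (e.g. porosity); outputs: effective properties.
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, n_out))

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# 1) Pre-train on abundant (cheap) mean-field homogenization data.
x_mfh, y_mfh = torch.rand(5000, 4), torch.rand(5000, 2)   # placeholder MFH dataset
model = make_surrogate()
train(model, x_mfh, y_mfh, epochs=200, lr=1e-3)

# 2) Fine-tune on scarce (expensive) finite element homogenization data,
#    freezing the first layer and using a smaller learning rate.
x_feh, y_feh = torch.rand(100, 4), torch.rand(100, 2)     # placeholder FEH dataset
for p in model[0].parameters():
    p.requires_grad = False
train(model, x_feh, y_feh, epochs=100, lr=1e-4)
```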

Metagenomic analysis of viral genes integrated in whole genome sequencing data of Thai patients with Brugada syndrome

  • Suwalak Chitcharoen;Chureerat Phokaew;John Mauleekoonphairoj;Apichai Khongphatthanayothin;Boosamas Sutjaporn;Pharawee Wandee;Yong Poovorawan;Koonlawee Nademanee;Sunchai Payungporn
    • Genomics & Informatics
    • /
    • v.20 no.4
    • /
    • pp.44.1-44.13
    • /
    • 2022
  • Brugada syndrome (BS) is an autosomal dominant inherited cardiac arrhythmia disorder associated with sudden death in young adults. Thailand has the highest prevalence of BS worldwide, and over 60% of patients with BS still have an unclear disease etiology. Here, we developed a new viral metagenome analysis pipeline called VIRIN and validated it with whole genome sequencing (WGS) data from HeLa cell lines and hepatocellular carcinoma. The VIRIN pipeline was then applied to identify viral integration positions from unmapped WGS data of Thai males, including 100 BS patients (cases) and 100 controls. Even though the sample preparation had no viral enrichment step, several viral genes could be identified with our analysis pipeline. A predominance of human endogenous retrovirus K (HERV-K) was found in both cases and controls by blastn and blastx analysis. This study is the first report of full-length HERV-K assembled genomes in the Thai population. Furthermore, the HERV-K integration breakpoint positions were validated and compared between the case and control datasets. Interestingly, Brugada cases contained HERV-K integration breakpoints at promoters five times more often than controls. Overall, the highlight of this study is the BS-specific HERV-K breakpoint positions found in the gene coding regions "NBPF11" (n = 9) and "NBPF12" (n = 8) and in the long non-coding RNA (lncRNA) "PCAT14" (n = 4) region. These genes and the lncRNA have been reported to be associated with congenital heart and arterial diseases. These findings provide another perspective on BS etiology associated with viral genome integrations within the human genome.
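
The case/control contrast on promoter breakpoints ("five times more often") is the kind of comparison that a simple 2x2 test can formalize. A hedged sketch with hypothetical counts (not the study's actual numbers):

```python
# Hypothetical 2x2 contingency check of promoter-breakpoint enrichment in cases
# versus controls (counts are illustrative, not the study's data).
from scipy.stats import fisher_exact

#           in promoter, elsewhere
cases    = [15, 85]
controls = [3, 97]

odds_ratio, p_value = fisher_exact([cases, controls], alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.4f}")
```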

System Reliability-Based Design Optimization Using Performance Measure Approach (성능치 접근법을 이용한 시스템 신뢰도 기반 최적설계)

  • Kang, Soo-Chang;Koh, Hyun-Moo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.3A
    • /
    • pp.193-200
    • /
    • 2010
  • Structural design requires simultaneously ensuring safety, by quantitatively accounting for uncertainties in the applied loadings, material properties, and fabrication errors, and maximizing economic efficiency. As a solution, system reliability-based design optimization (SRBDO), which takes both uncertainties and economic efficiency into consideration, has been extensively researched, and numerous attempts have been made to apply it to structural design. In contrast to conventional deterministic optimization, SRBDO involves the evaluation of component and system probabilistic constraints. However, because of the complicated algorithms for calculating component reliability indices and system reliability, excessive computational time is required when large-scale finite element analysis is involved in evaluating the probabilistic constraints. Accordingly, an SRBDO algorithm with improved stability and efficiency needs to be developed for large-scale problems. In this study, a more stable and efficient SRBDO based on the performance measure approach (PMA) is developed. PMA performs well when applied to reliability-based design optimization (RBDO) with only component probabilistic constraints. However, PMA could not be applied directly to SRBDO, because PMA only calculates the probabilistic performance measure for the limit state functions and does not evaluate the reliability indices. To overcome this difficulty, a decoupled algorithm is proposed in which RBDO based on PMA is performed sequentially with updated target component reliability indices until the calculated system reliability index approaches the target system reliability index. Through a mathematical problem and a ten-bar truss problem, the proposed method shows better convergence and efficiency than other approaches.
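
For reference, the PMA constraint evaluation discussed above can be written as an inverse reliability problem in the standard-normal space; the formulation below uses generic symbols and is the standard RBDO/PMA form, not necessarily the paper's notation:

```latex
% Performance measure approach (PMA): each component probabilistic constraint is
% evaluated as an inverse reliability problem in standard-normal space u
\begin{align*}
  \min_{\mathbf{d}}\ & c(\mathbf{d}) \\
  \text{s.t.}\ \ & g_{p,i}(\mathbf{d})
    = \min_{\|\mathbf{u}\| = \beta_i^{t}} g_i(\mathbf{d},\mathbf{u}) \;\ge\; 0,
    \qquad i = 1,\dots,m
\end{align*}
% In the decoupled SRBDO loop, the component targets beta_i^t are updated after
% each RBDO pass until the computed system reliability index reaches the target
% system reliability index.
```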

An Improved Structural Reliability Analysis using Moving Least Squares Approximation (이동최소제곱근사법을 이용한 개선된 구조 신뢰성 해석)

  • Kang, Soo-Chang;Koh, Hyun-Moo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.6A
    • /
    • pp.835-842
    • /
    • 2008
  • The response surface method (RSM) is widely adopted for structural reliability analysis because of its numerical efficiency. However, the RSM is still time-consuming for large-scale applications and sometimes shows large errors in the calculated sensitivity of the reliability index with respect to the random variables. Therefore, this study proposes a new RSM in which moving least squares (MLS) approximation is applied. The least squares approximation generally used in the common RSM weights all experimental points equally when determining the coefficients of the response surface function (RSF). The MLS approximation, on the other hand, gives higher weight to the experimental points closer to the design point, which yields an RSF that more closely matches the limit state function near the design point. In the proposed procedure, a linear RSF is constructed initially, and a quadratic RSF is then formed using axial experimental points selected from the reduced region where the design point is likely to exist. The RSF is updated successively by adding one more experimental point to the previously sampled experimental points. To demonstrate the effectiveness of the proposed method, mathematical problems and a ten-bar truss are considered as numerical examples. As a result, the proposed method shows better accuracy and computational efficiency than the common RSM.
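
The contrast drawn above between the ordinary and moving least squares fits comes down to a weight matrix. In generic notation (not necessarily the paper's symbols), with design matrix X assembled from the experimental points and responses y:

```latex
% Ordinary least squares (common RSM): every experimental point weighted equally
\begin{align*}
  \hat{\mathbf{a}} &= \left(\mathbf{X}^{\mathsf{T}}\mathbf{X}\right)^{-1}
                      \mathbf{X}^{\mathsf{T}}\mathbf{y} \\[4pt]
% Moving least squares: a diagonal weight matrix W(x_D) emphasizes points near the
% design point x_D, with weights decaying with distance
  \hat{\mathbf{a}}(\mathbf{x}_D) &= \left(\mathbf{X}^{\mathsf{T}}\mathbf{W}(\mathbf{x}_D)\,\mathbf{X}\right)^{-1}
      \mathbf{X}^{\mathsf{T}}\mathbf{W}(\mathbf{x}_D)\,\mathbf{y},
  \qquad W_{ii} = w\!\left(\lVert \mathbf{x}_i - \mathbf{x}_D \rVert\right)
\end{align*}
```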

Statistical Data Extraction and Validation from Graph for Data Integration and Meta-analysis (데이터통합과 메타분석을 위한 그래프 통계량 추출과 검증)

  • Sung Ryul Shim;Yo Hwan Lim;Myunghee Hong;Gyuseon Song;Hyun Wook Han
    • The Journal of Bigdata
    • /
    • v.6 no.2
    • /
    • pp.61-70
    • /
    • 2021
  • The objective of this study was to describe specific approaches for extracting data from graphs when statistical information is not directly reported in an article, enabling data integration and meta-analysis for quantitative data synthesis. Meta-analysis in particular is an important analysis tool that supports correct decision making in evidence-based medicine by systematically and objectively selecting target literature, quantifying the results of individual studies, and providing an overall effect size. For data integration and meta-analysis, we examined the strengths, introduction, and application of Adobe Acrobat Reader and Python-based JupyterLab, computer tools that extract accurate statistical figures from graphs. As an example, we used data that had been statistically verified in previous studies and whose original data could be obtained from ClinicalTrials.gov. A meta-analysis of the original data and of the values extracted with each software tool showed no statistically significant difference between the extraction methods. In addition, the inter-rater reliability between researchers was confirmed and consistency was high. Therefore, in terms of maintaining the integrity of statistical information, measurement using a computational tool is recommended over the classically used methods.
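
Digitizing a point from a published graph reduces to calibrating a linear (or log) mapping on two known axis ticks and applying it to the picked pixel coordinates. A minimal sketch of that conversion (all pixel values hypothetical; this is not the specific Acrobat/JupyterLab workflow described above):

```python
# Minimal sketch of graph digitization: map pixel coordinates to data values using
# two calibration ticks per axis (pixel values below are hypothetical).
def pixel_to_value(p, p_ref1, v_ref1, p_ref2, v_ref2):
    """Linear interpolation between two calibration ticks on one axis."""
    return v_ref1 + (p - p_ref1) * (v_ref2 - v_ref1) / (p_ref2 - p_ref1)

# Calibration: x-axis ticks at pixels 100 and 500 correspond to 0 and 10 (e.g. weeks);
# y-axis ticks at pixels 400 and 80 correspond to 0 and 50 (e.g. effect size).
x = pixel_to_value(260, 100, 0.0, 500, 10.0)
y = pixel_to_value(240, 400, 0.0, 80, 50.0)
print(f"digitized point: x = {x:.2f}, y = {y:.2f}")  # -> x = 4.00, y = 25.00
```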

In-silico annotation of the chemical composition of Tibetan tea and its mechanism on antioxidant and lipid-lowering in mice

  • Ning Wang ;Linman Li ;Puyu Zhang;Muhammad Aamer Mehmood ;Chaohua Lan;Tian Gan ;Zaixin Li ;Zhi Zhang ;Kewei Xu ;Shan Mo ;Gang Xia ;Tao Wu ;Hui Zhu
    • Nutrition Research and Practice
    • /
    • v.17 no.4
    • /
    • pp.682-697
    • /
    • 2023
  • BACKGROUND/OBJECTIVES: Tibetan tea is a kind of dark tea. Due to the inherent complexity of natural products, the chemical composition and beneficial effects of Tibetan tea are not fully understood. The objective of this study was to unravel the composition of Tibetan tea using knowledge-guided multilayer network (KGMN) techniques and to explore its potential antioxidant and hypolipidemic mechanisms in mice. MATERIALS/METHODS: C57BL/6J mice were continuously gavaged with Tibetan tea extract (T group), green tea extract (G group), or ddH2O (H group) for 15 days. Total antioxidant capacity (T-AOC) and superoxide dismutase (SOD) activity in the mice were measured. Transcriptome sequencing was used to investigate the molecular mechanisms underlying the antioxidant and lipid-lowering effects of Tibetan tea in mice. Furthermore, the expression levels of liver antioxidant and lipid metabolism related genes in the various groups were measured by real-time quantitative polymerase chain reaction (qPCR). RESULTS: A total of 42 flavonoids were provisionally annotated in Tibetan tea using KGMN strategies. Tibetan tea significantly reduced body weight gain and increased T-AOC and SOD activities in mice compared with the H group. Based on the transcriptome and qPCR results, it was confirmed that Tibetan tea could play a key role in antioxidant and lipid-lowering activity by regulating oxidative stress and lipid metabolism related pathways such as insulin resistance, the P53 signaling pathway, the insulin signaling pathway, fatty acid elongation, and fatty acid metabolism. CONCLUSIONS: This study was the first to use computational tools to deeply explore the composition of Tibetan tea and reveal its potential antioxidant and hypolipidemic mechanisms, and it provides new insights into the composition and bioactivity of Tibetan tea.

An Analysis of Pattern Activities of a Finding Rules Unit in Government-Authorized Mathematics Curricular Materials for Fourth Graders (4학년 수학 검정 교과용 도서의 규칙 찾기 단원에 제시된 패턴 활동의 지도 방안 분석)

  • Pang, JeongSuk;Lee, Soojin
    • Education of Primary School Mathematics
    • /
    • v.26 no.1
    • /
    • pp.45-63
    • /
    • 2023
  • The activity of finding rules is useful for enhancing the algebraic thinking of elementary school students. This study analyzed the pattern activities of a finding rules unit in 10 different government-authorized mathematics curricular materials for fourth graders aligned to the 2015 revised national mathematics curriculum. The analytic elements included three main activities: (a) activities of analyzing the structure of patterns, (b) activities of finding a specific term by finding a rule, and (c) activities of representing the rule. The three activities were mainly presented regarding growing numeric patterns, growing geometric patterns, and computational patterns. The activities of analyzing the structure of patterns were presented when dealing mainly with growing geometric patterns and focused on finding the number of models constituting the pattern. The activities of finding a specific term by finding a rule were evenly presented across the three patterns and the specific term tended to be close to the terms presented in the given task. The activities of representing the rule usually encouraged students to talk about or write down the rule using their own words. Based on the results of these analyses, this study provides specific implications on how to develop subsequent mathematics curricular materials regarding pattern activities to enhance elementary school students' algebraic thinking.