• Title/Summary/Keyword: advanced benchmark


Multi-scale heat conduction models with improved equivalent thermal conductivity of TRISO fuel particles for FCM fuel

  • Mouhao Wang;Shanshan Bu;Bing Zhou;Zhenzhong Li;Deqi Chen
    • Nuclear Engineering and Technology
    • /
    • v.55 no.3
    • /
    • pp.1140-1151
    • /
    • 2023
  • Fully Ceramic Microencapsulated (FCM) fuel is an emerging advanced fuel material for future nuclear reactors. The fuel pellet in FCM fuel is composed of a SiC matrix and a large number of TRIstructural-ISOtropic (TRISO) fuel particles randomly dispersed in the matrix. The minimum layer thickness in a TRISO fuel particle is on the order of 10^-5 m, while the length of the FCM pellet is on the order of 10^-2 m; hence, heat transfer in the FCM pellet is a multi-scale phenomenon. In this study, three multi-scale heat conduction models for the FCM pellet were constructed: the Multi-region Layered (ML) model, the Multi-region Non-layered (MN) model, and the Homogeneous model. In the ML model, the randomly distributed TRISO fuel particles and their coating layers are fully resolved; in the MN model, the TRISO fuel particles with coating layers are homogenized; and in the Homogeneous model, the whole fuel pellet is treated as a homogeneous material. Taking the results of the ML model as the benchmark, the abilities of the MN model and the Homogeneous model to predict the maximum and average temperatures were assessed. It was found that the MN model and the Homogeneous model greatly underestimate the temperature of the TRISO fuel particles, mainly because conventional equivalent thermal conductivity (ETC) models do not take the internal heat source into account and are therefore not suitable for TRISO fuel particles. Improved ETCs considering the internal heat source were then derived. With the improved ETCs, the MN model captures the peak temperature as well as the average temperature over a wide range of linear powers (165 W/cm to 415 W/cm) and packing fractions (20%-50%). With the improved ETCs, the Homogeneous model predicts the average temperature well at different linear powers and packing fractions, and predicts the peak temperature at high packing fractions (45%-50%).
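The conventional ETC that the abstract criticizes treats the TRISO coating layers as thermal resistances in series with no internal heat generation. A minimal Python sketch of that conventional series-resistance ETC, the quantity the paper's improved ETCs correct, is given below; all radii and conductivities are hypothetical placeholders, not values from the paper:

```python
import math

def spherical_shell_resistance(r_in, r_out, k):
    """Conduction resistance of a spherical shell [K/W]."""
    return (1.0 / r_in - 1.0 / r_out) / (4.0 * math.pi * k)

def conventional_etc(layers):
    """Conventional series-resistance equivalent thermal conductivity
    for concentric spherical shells, layers = [(r_in, r_out, k), ...].
    This ignores any internal heat source, which (per the paper) is
    why it underestimates TRISO particle temperatures."""
    r0 = layers[0][0]
    rn = layers[-1][1]
    total_r = sum(spherical_shell_resistance(ri, ro, k) for ri, ro, k in layers)
    # k_eq is the single conductivity reproducing the same total resistance
    return (1.0 / r0 - 1.0 / rn) / (4.0 * math.pi * total_r)

# Hypothetical TRISO-like coating stack (radii in m, k in W/(m.K)):
layers = [
    (250e-6, 350e-6, 0.5),   # buffer
    (350e-6, 390e-6, 4.0),   # inner PyC
    (390e-6, 425e-6, 30.0),  # SiC
    (425e-6, 465e-6, 4.0),   # outer PyC
]
k_eq = conventional_etc(layers)
print(round(k_eq, 3))
```

The series combination is dominated by the low-conductivity buffer layer, which is one reason a single ETC value without a heat-source correction misrepresents the particle interior.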

Evaluation of Edge-Based Data Collection System through Time Series Data Optimization Techniques and Universal Benchmark Development (Development of a Lightweight Anomaly Data Detection Alert System Based on Collected Data)

  • Woojin Cho;Jae-hoi Gu
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.453-458
    • /
    • 2024
  • Due to global issues such as the climate crisis and rising energy costs, there is an increasing focus on energy conservation and management. In South Korea, approximately 53.5% of total energy consumption comes from industrial complexes. To address this, we aimed to find energy-saving points through a 'Shared Network Utility Plant' among companies using similar energy utilities. Various techniques are used for effective energy conservation, and a stable data supply is crucial for the reliable operation of factories. Many anomaly detection and alert systems for checking the stability of the data supply have depended on Energy Management Systems (EMS), which has limitations: constructing an EMS involves large-scale systems, making it difficult to implement in small factories with space and energy constraints. In this paper, we aim to overcome these challenges by building the data collection system and the anomaly detection alert system on embedded devices that consume minimal space and power. We explore the possibility of using such anomaly detection alert systems for data collection in typical institutions and describe the construction process.
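A lightweight anomaly-detection alert of the kind described, sized for an embedded device, can be sketched with a rolling z-score; the window size, warm-up count, and threshold below are hypothetical choices, not from the paper:

```python
from collections import deque

def make_anomaly_detector(window=30, threshold=3.0):
    """Rolling z-score detector: flags a reading whose deviation from
    the recent mean exceeds `threshold` standard deviations. Memory is
    bounded by `window`, so it fits small embedded devices."""
    history = deque(maxlen=window)

    def check(value):
        if len(history) >= 5:  # need a few samples before judging
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = var ** 0.5
            anomalous = std > 0 and abs(value - mean) / std > threshold
        else:
            anomalous = False
        history.append(value)
        return anomalous

    return check

check = make_anomaly_detector()
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 10.3, 9.8, 10.1, 50.0]
alerts = [i for i, v in enumerate(readings) if check(v)]
print(alerts)  # the spike at index 8 is flagged
```

In a deployed system, `check` would be wired to each incoming sensor reading and a flagged index would trigger the alert channel instead of a print.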

Compiling Lazy Functional Programs to Java on the basis of Spineless Tagless G-Machine with Eval-Apply Model (A Technique for Compiling Lazy Functional Programs to Java Based on the STGM with the Eval-Apply Model)

  • Nam, Byeong-Gyu;Choi, Kwang-Hoon;Han, Tai-Sook
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.5
    • /
    • pp.326-335
    • /
    • 2002
  • Recently there have been a number of studies providing code mobility to lazy functional language (LFL) programs by translating them to Java programs. These approaches are basically based on architectural similarities between the abstract machines of LFLs and Java. The respective abstract machines, the Spineless Tagless G-Machine (STGM) and the Java Virtual Machine (JVM), share important common features such as a built-in garbage collector and a stack machine architecture; thus, code mobility can be provided to LFLs by translating them to Java while exploiting these common features. In this paper, we propose a new translation scheme that fully utilizes the architectural common features of the STGM and the JVM. By redefining the STGM as an eval-apply evaluation model, we define a new translation scheme that uses the Java Virtual Machine stack for function evaluation and totally eliminates the stack simulation that causes array manipulation overhead in Java. Benchmark programs translated to Java by our scheme run faster on JDK 1.3 than those translated by previous schemes.
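The STGM's lazy (call-by-need) evaluation and the eval-apply calling convention can be illustrated with a small sketch; this is a Python analogy of the general technique, not the paper's Java translation scheme:

```python
class Thunk:
    """Delayed computation evaluated at most once (call-by-need),
    the evaluation discipline the STGM implements."""
    def __init__(self, compute):
        self._compute = compute
        self._done = False
        self._value = None

    def force(self):
        if not self._done:           # evaluate on first demand only
            self._value = self._compute()
            self._compute = None     # drop the closure, keep the value
            self._done = True
        return self._value

calls = []

def expensive():
    calls.append(1)                  # record each actual evaluation
    return 6 * 7

t = Thunk(expensive)
# In the eval-apply model the caller first evaluates the function to a
# closure, then applies it to exactly the arguments it expects:
add = lambda x, y: x.force() + y.force()
result = add(t, t)                   # t is computed once, reused once
print(result, len(calls))
```

The memoized `force` corresponds to the update-after-evaluation step of the STGM; the direct call `add(t, t)` is the eval-apply discipline the paper maps onto the JVM operand stack.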

VKOSPI Forecasting and Option Trading Application Using SVM (Forecasting Intraday VKOSPI Changes Using SVM and Its Application to Real Option Trading)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence. It refers to the area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by biological neural networks. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is mainly used for classification and regression analysis, which fits our study well. The core principle of SVM is to find a reasonable hyperplane that separates different groups in the data space. Given data from two groups, the SVM model judges to which group a new data point belongs based on the hyperplane obtained from the given data set. Thus, the more meaningful data there is, the better the machine learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide Robo-Advisor services (a compound of Robot and Advisor) that perform various financial tasks through advanced algorithms on rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage portfolios automatically.
In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model and applying it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices; it is similar to the VIX index in the United States, which is based on S&P 500 option prices. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices: VKOSPI and option prices move in the same direction regardless of option type (call and put options with various strike prices). If volatility increases, both call and put option premiums increase, because the probability of option exercise increases. Investors can observe the change in an option price with respect to a change in volatility in real time through Vega, the Black-Scholes measure of an option's sensitivity to volatility. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified with real option data that accurate forecasts of VKOSPI can yield large profits in real option trading. To the best of our knowledge, there have been no studies on predicting the direction of VKOSPI with machine learning and applying the predictions to actual option trading. We predicted daily VKOSPI changes with the SVM model and then entered an intraday option strangle position, which profits as option prices fall, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether trading based on the SVM's predictions is applicable to real option trading.
The results showed that the average prediction accuracy for VKOSPI was 57.83%, and the number of position entries was 43.2, less than half of the benchmark (100). A small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
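The decision rule described above, classify each day by which side of a learned hyperplane it falls on and enter the strangle only on a predicted decline, can be sketched as follows; the weights and feature vectors are hypothetical stand-ins for a trained SVM, not values from the study:

```python
def hyperplane_side(w, b, x):
    """Sign of w.x + b: which side of the separating hyperplane the
    feature vector x falls on (+1 = 'VKOSPI will decline')."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1

def backtest(days, w, b):
    """Enter the intraday strangle only on predicted declines; count
    entries and prediction accuracy. Each day is (features, actual),
    where actual is +1 if VKOSPI actually declined that day."""
    entries, hits = 0, 0
    for features, actual in days:
        pred = hyperplane_side(w, b, features)
        if pred == 1:                # trade only on a predicted decline
            entries += 1
        if pred == actual:
            hits += 1
    return entries, hits / len(days)

# Hypothetical weights and toy feature vectors (e.g., yesterday's
# VKOSPI change and overnight index return):
w, b = [0.8, -0.5], 0.1
days = [
    ([0.5, 0.2], 1),
    ([-0.6, 0.4], -1),
    ([0.3, -0.1], 1),
    ([-0.2, 0.6], 1),
]
entries, accuracy = backtest(days, w, b)
print(entries, accuracy)
```

Restricting entries to predicted declines is what keeps the trade count below the benchmark while the hit rate drives the performance.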

Prefetching based on the Type-Level Access Pattern in Object-Relational DBMSs (A Prefetching Strategy Using Type-Level Access Patterns in Object-Relational DBMSs)

  • Han, Wook-Shin;Moon, Yang-Sae;Whang, Kyu-Young
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.529-544
    • /
    • 2001
  • Prefetching is an effective method to minimize the number of roundtrips between the client and the server in database management systems. In this paper we propose the new notions of the type-level access pattern and type-level access locality, and develop an efficient prefetching policy based on them. A type-level access pattern is a sequence of attributes that are referenced in accessing the objects; type-level access locality is the phenomenon that regular and repetitive type-level access patterns exist. Existing prefetching methods are based on object-level or page-level access patterns, which consist of the object-ids or page-ids of the objects accessed. The drawback of these methods is that they work only when exactly the same objects or pages are accessed repeatedly. In contrast, even when the same objects are not accessed repeatedly, our technique effectively prefetches objects if the same attributes are referenced repeatedly, i.e., if there is type-level access locality. Many navigational applications in Object-Relational Database Management Systems (ORDBMSs) have type-level access locality, so our technique can be employed in ORDBMSs to effectively reduce the number of roundtrips and thereby significantly enhance performance. We have conducted extensive experiments in a prototype ORDBMS to show the effectiveness of our algorithm. Experimental results using the OO7 benchmark and a real GIS application show that our technique provides orders-of-magnitude improvements in roundtrips and several-fold improvements in overall performance over on-demand fetching and context-based prefetching, a state-of-the-art prefetching method. These results indicate that our approach is significant and practical enough to be implemented in commercial ORDBMSs.
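The idea of type-level prefetching can be sketched as follows: record which attributes of each type are referenced and, once the pattern recurs on a new object, fetch the whole attribute set in one roundtrip. This is an illustrative sketch of the general idea, not the paper's algorithm:

```python
class TypeLevelPrefetcher:
    """Track the attributes referenced per *type* (not per object).
    When a later object of the same type touches an attribute already
    in the learned pattern, prefetch the whole pattern in a single
    roundtrip instead of one roundtrip per attribute."""
    def __init__(self):
        self.patterns = {}    # type name -> set of attributes seen
        self.roundtrips = 0

    def access(self, type_name, obj, attrs_needed):
        pattern = self.patterns.setdefault(type_name, set())
        fetched = set()
        for attr in attrs_needed:
            if attr in fetched:
                continue          # already brought over this roundtrip
            if attr in pattern:
                # pattern hit: one roundtrip fetches the learned set
                self.roundtrips += 1
                fetched |= pattern
            else:
                # pattern miss: on-demand fetch of a single attribute
                self.roundtrips += 1
                fetched.add(attr)
            pattern.add(attr)
        return fetched

p = TypeLevelPrefetcher()
# Navigating three *distinct* Employee objects, same attributes each time:
for obj in ("e1", "e2", "e3"):
    p.access("Employee", obj, ["name", "dept", "salary"])
print(p.roundtrips)
```

On-demand fetching would cost 9 roundtrips here; the type-level pattern cuts it to 5 even though no object is visited twice, which is exactly the case object-level and page-level prefetching cannot exploit.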


Development of Two-Dimensional Near-field Integrated Performance Assessment Model for Near-surface LILW Disposal (Development of a Two-Dimensional Integrated Performance Assessment Model for the Near-field of a Near-surface LILW Disposal Facility)

  • Bang, Je Heon;Park, Joo-Wan;Jung, Kang Il
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.12 no.4
    • /
    • pp.315-334
    • /
    • 2014
  • The Wolsong Low- and Intermediate-Level radioactive Waste (LILW) disposal center has two different types of disposal facilities and interacts with the neighboring Wolsong nuclear power plant. This situation imposes a high level of complexity, which requires an in-depth understanding of the phenomena involved in the safety assessment of the disposal facility. In this context, a multidimensional radionuclide transport model and a hydraulic performance assessment model should be developed to capture the behavior of the complex system more realistically and to reduce unnecessary conservatism in the conventional performance assessment models developed for the 1st-stage underground disposal. In addition, the advanced performance assessment model must be able to calculate many cases in order to treat uncertainties or study parameter importance. To fulfill these requirements, this study introduces a two-dimensional integrated near-field performance assessment model combining a near-field hydraulic performance assessment model and a radionuclide transport model for the 2nd-stage near-surface disposal. The hydraulic and radionuclide transport behaviors were evaluated with PORFLOW and GoldSim. The GoldSim radionuclide transport model was verified through benchmark calculations against the PORFLOW radionuclide transport model. The GoldSim model was shown to be computationally efficient and provided a better understanding of the radionuclide transport behavior than the conventional model.

Policy Suggestions for Establishing Culture Technology Institute from Economic Point of View (Policy Suggestions for the Establishment of a Culture Technology Institute from an Economic Point of View: Focusing on Organizational Structure, Size, and Timing)

  • Lee, Yong-Kyu;Ju, Hye-Seong
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.10
    • /
    • pp.257-266
    • /
    • 2011
  • The establishment of a Culture Technology Institute is now under discussion. Advanced countries (including the USA, Japan, and the EU) practice market-oriented policies in the culture and entertainment industry, so it is hard to find a government-funded institute that Korea could benchmark. In this situation, there are many different opinions about the institute's organizational structure, the size of its organization and budget, and when it should be established. After evaluating four different models against four criteria, this study proposes a Consolidated-Decentralized model as the proper organizational structure. Based on the results produced by the model developed for our research, this study suggests that the research personnel and the budget should be larger than 300 persons and 1200 billion won, respectively. However, the establishment of the institute should be decided not only by economic factors but also by various other factors such as political elements, externalities, private company investment, and production inducement effects. If the new institute focuses on areas different from ETRI's, then the timing and size of the institute could be decided by the results of policy analysis.

Biaxial Buckling Analysis of Magneto-Electro-Elastic(MEE) Nano Plates using the Nonlocal Elastic Theory (Biaxial Buckling Analysis of Magneto-Electro-Elastic Nano Plates Using Nonlocal Elasticity Theory)

  • Han, Sung-Cheon;Park, Weon-Tae
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.30 no.5
    • /
    • pp.405-413
    • /
    • 2017
  • In this paper, we study the biaxial buckling of nonlocal magneto-electro-elastic (MEE) nano plates based on the first-order shear deformation theory. The in-plane electric and magnetic fields can be ignored for MEE nano plates. According to the magneto-electric boundary condition and the Maxwell equation, the variation of the magnetic and electric potentials along the thickness direction of the MEE plate is determined. To reformulate the elastic theory of the MEE nano plate, the nonlocal differential constitutive relations of Eringen are used. Using the variational principle, the governing equations of the nonlocal theory are derived. The relations between the nonlocal and local theories are investigated through computational results, as are the effects of the nonlocal parameter, in-plane load directions, and aspect ratio on the structural responses. The computational results show the effects of the electric and magnetic potentials. These results can be useful in the design and analysis of advanced structures constructed from MEE materials and may serve as benchmark tests for future studies.
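The Eringen nonlocal differential constitutive relation the abstract refers to is commonly written as follows (standard form and symbols, not reproduced from the paper; for MEE materials the right-hand side additionally couples the electric and magnetic fields):

```latex
\left(1 - \mu \nabla^2\right)\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl},
\qquad \mu = (e_0 a)^2,
```

where $\mu$ is the nonlocal parameter, $e_0$ a material constant, $a$ an internal characteristic length, $\sigma_{ij}$ the stress, $\varepsilon_{kl}$ the strain, and $C_{ijkl}$ the elastic stiffness tensor; setting $\mu = 0$ recovers the local theory, which is the limit against which the nonlocal results are compared.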

A practical analysis approach to the functional requirements standards for electronic records management system (A Practical Interpretation of the Functional Requirements Standards for Electronic Records Management Systems)

  • Yim, Jin-Hee
    • The Korean Journal of Archival Studies
    • /
    • no.18
    • /
    • pp.139-178
    • /
    • 2008
  • The functional requirements standards for electronic records management systems (ERMS) published recently describe their specifications very precisely, covering not only the core functions of records management but also system management functions and optional modules. The fact that these standards are similar to each other in the functions they describe reflects the global standardization trend in electronic records practice. In addition, because these standards were built with the collaboration of archivists from many national archives, IT specialists, consultants, and records management application vendors, they are not only of high quality but can also easily serve as certification criteria. Though there are many ways to benchmark the functional requirements standards developed from advanced electronic records management practice, this paper shows the possibility of gaining useful practical ideas by examining electronic records management practices related to the standards, with meaningful business cases. The business cases explore central records management functions and the intellectual control of records, such as classification schemes and disposal schedules. The first example concerns the classification scheme: should the records classification be fixed at the same number of levels, and should a record item be filed only at the last node of the classification scheme? The second example addresses a precise disposition schedule that can impose an event-driven chronological retention period on records and that can operate using an inheritance concept between parent and child nodes in the classification scheme.
The third example shows the use of the function that holds (or freezes) and releases records that must be kept as evidence for compliance purposes, such as e-Discovery or organizational risk management, under the premise that records management should be the basis for legal compliance. The last case shows examples of the bulk batch operations required if records managers are to use the ERMS as a truly useful tool. Records managers need to understand and interpret the specifications of the functional requirements standards for ERMS from a practical point of view, and to review the standards and extract the specifications required for upgrading their own ERMS. The National Archives of Korea should provide the various stakeholders with a sound basis for implementing effective and efficient electronic records management practices by expanding the usage scope of the functional requirements standard for ERMS and building a common understanding of its implications.
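The disposal-schedule inheritance between parent and child nodes of the classification scheme described above can be sketched as a tree walk; class and attribute names are hypothetical:

```python
class ClassNode:
    """Node of a records classification scheme. A node inherits its
    retention period from the nearest ancestor that sets one, so a
    disposal schedule defined once at a parent node applies to all
    descendants unless a child overrides it."""
    def __init__(self, name, parent=None, retention_years=None):
        self.name = name
        self.parent = parent
        self._retention = retention_years

    def retention_years(self):
        node = self
        while node is not None:
            if node._retention is not None:
                return node._retention   # nearest explicit schedule wins
            node = node.parent
        return None                      # no schedule anywhere on the path

root = ClassNode("Finance", retention_years=10)
audit = ClassNode("Audit", parent=root)                          # inherits 10
payroll = ClassNode("Payroll", parent=root, retention_years=5)   # overrides
print(audit.retention_years(), payroll.retention_years())
```

The same nearest-ancestor lookup would apply to event-driven triggers: the schedule object stored at a node could carry the triggering event as well as the period.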

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (Development of an Information Extraction System from Multi-Source Unstructured Documents for Knowledge Base Expansion)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple web sources, in order to expand a knowledge base. The proposed methodology consists of the following steps. 1) Collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries, and classify the proper documents. 2) Determine whether each sentence is suitable for information extraction and derive its confidence. 3) Based on the predicate feature, extract the information from suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker. Compared with the baseline model, the proposed system shows a higher performance index. The contribution of this study is a sequence tagging model based on bidirectional LSTM-CRF that uses the predicate feature of the query; with it, we developed a robust model that maintains high recall even on the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types. The proposed methodology extracts information effectively from various document types compared to the baseline model, whereas previous research performs poorly when extracting information from document types different from the training data.
In addition, this study prevents unnecessary extraction attempts on documents that do not include the answer information, through the step that predicts the extraction suitability of documents and sentences before extraction. It is meaningful that we provide a method by which precision can be maintained even in a real web environment. Information extraction for knowledge base expansion cannot guarantee that a document includes the correct answer, because it targets unstructured documents on the real web. When question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents that contain no correct answer. The policy of predicting the extraction suitability of documents and sentences is meaningful in that it helps maintain extraction performance in a real web environment. The limitations of this study and future research directions are as follows. First, data preprocessing: in this study, the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can go wrong when the morphological analysis is not performed properly. To improve the extraction results, an advanced morphological analyzer needs to be developed. Second, entity ambiguity: the information extraction system of this study cannot distinguish entities that share the same name but have different referents. If several people with the same name appear in the news, the system may not extract information about the intended query.
Future research therefore needs to take measures to identify persons with the same name. Third, evaluation query data: in this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and developed an evaluation data set using 800 documents (400 questions * 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether a correct answer is included or not. To ensure the external validity of the study, it is desirable to evaluate the system on more queries; as this is a costly manual activity, future research needs to evaluate the system on more queries. It is also necessary to develop a Korean benchmark data set for information extraction from multi-source web documents, to build an environment in which results can be evaluated more objectively.
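The suitability gate that protects precision can be sketched as a threshold check before extraction; the toy suitability and extractor functions below are hypothetical stand-ins for the paper's learned models:

```python
def gated_extract(sentences, suitability, extractor, threshold=0.7):
    """Attempt extraction only on sentences whose predicted suitability
    clears the threshold. Documents with no suitable sentence yield
    nothing, avoiding the forced wrong answers that hurt precision on
    real web documents."""
    results = []
    for s in sentences:
        conf = suitability(s)
        if conf >= threshold:          # gate: skip unsuitable sentences
            answer = extractor(s)
            if answer is not None:
                results.append((answer, conf))
    return results

# Toy stand-ins for the learned suitability and tagging models:
suitability = lambda s: 0.9 if "born in" in s else 0.2
extractor = lambda s: (s.split("born in ")[-1].rstrip(".")
                       if "born in" in s else None)

doc = [
    "He studied at several universities.",
    "He was born in Seoul.",
    "The weather was fine.",
]
results = gated_extract(doc, suitability, extractor)
print(results)
```

An ungated extractor would be forced to emit something for every sentence; the gate trades a little recall for the precision the paper reports maintaining on the real web.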