• Title/Summary/Keyword: Model-based Optimization


Dead Layer Thickness and Geometry Optimization of HPGe Detector Based on Monte Carlo Simulation

  • Suah Yu;Na Hye Kwon;Young Jae Jang;Byungchae Lee;Jihyun Yu;Dong-Wook Kim;Gyu-Seok Cho;Kum-Bae Kim;Geun Beom Kim;Cheol Ha Baek;Sang Hyoun Choi
    • Progress in Medical Physics / v.33 no.4 / pp.129-135 / 2022
  • Purpose: Accurate radioactivity measurement requires a full-energy-peak (FEP) efficiency correction through Monte Carlo simulation that accounts for the geometrical characteristics of the detector and the sample. However, when the detector is modeled using the data provided by the manufacturer, a relative deviation (RD) arises between the measured and calculated efficiencies because the dead layer forms irregularly during fabrication. This study aims to optimize the structure of the detector by determining the dead layer thickness through Monte Carlo simulation. Methods: The high-purity germanium (HPGe) detector used in this study was a coaxial p-type GC2518 model, and a certified reference material (CRM) was used to measure the FEP efficiency. Using the Monte Carlo N-Particle (MCNP) transport code, the FEP efficiency was calculated while increasing the thicknesses of the outer and inner dead layers in proportion to the thickness of the electrode. Results: As the outer and inner dead layer thicknesses were increased in steps of 0.1 mm and 0.1 μm, respectively, the efficiency difference decreased by 2.43% on average up to 1.0 mm and 1.0 μm and increased by 1.86% thereafter. The detector structure was therefore optimized with dead layer thicknesses of 1.0 mm and 1.0 μm. Conclusions: The effect of the dead layer on the FEP efficiency was evaluated, and excellent agreement between the measured and calculated efficiencies was confirmed, with RDs of less than 4%. This suggests that the optimized HPGe detector model can be used for accurate radioactivity measurement in the dismantling and disposal of medical linear accelerators.
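
The optimization loop this abstract describes is, in effect, a grid search over dead layer thicknesses that minimizes the deviation between measured and calculated efficiencies. Below is a minimal Python sketch of that sweep; calc_fep_efficiency is a hypothetical stand-in for an MCNP run, and all numbers are illustrative only.

    import numpy as np

    # Measured FEP efficiencies at a few gamma peaks (toy values).
    measured = np.array([0.0150, 0.0052, 0.0031])

    def calc_fep_efficiency(outer_mm, inner_um):
        # Hypothetical stand-in for an MCNP calculation: a thicker dead
        # layer attenuates photons and lowers the calculated efficiency.
        return (measured * np.exp(0.02 * (1.0 - outer_mm))
                         * np.exp(0.002 * (1.0 - inner_um)))

    best = None
    for outer in np.arange(0.1, 2.01, 0.1):        # outer dead layer, mm
        for inner in np.arange(0.1, 2.01, 0.1):    # inner dead layer, um
            calc = calc_fep_efficiency(outer, inner)
            rd = np.mean(np.abs(measured - calc) / measured) * 100.0
            if best is None or rd < best[0]:
                best = (rd, outer, inner)

    print(f"min mean |RD| = {best[0]:.2f}% at outer {best[1]:.1f} mm, "
          f"inner {best[2]:.1f} um")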

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.147-168 / 2017
  • There have been many academic studies on accurate stock market forecasting for a long time, and various forecasting models using a range of techniques now exist. Recently, many attempts have been made to predict stock indices using machine learning methods, including deep learning. Although both fundamental and technical analysis are used in traditional stock investment, technical analysis is better suited to short-term trading prediction and to statistical and mathematical techniques. Most studies based on technical indicators have modeled stock price prediction as a binary classification - rising or falling - of future market movement (usually the next trading day). However, such binary classification has many shortcomings for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we extend the existing binary scheme to a multi-class one and predict the stock index trend as one of three states: upward, boxed (sideways), and downward. This multi-class problem could be addressed with techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA), or artificial neural networks (ANN); instead, we build on multi-class support vector machines (MSVM), which have proved superior in prediction performance, and propose an optimization model that uses a genetic algorithm (GA) as a wrapper to improve them. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and of training cases (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that GA-MSVM is more effective than the conventional MSVM, previously known to give the best prediction performance, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection proved to play a very important role in predicting the stock index trend, contributing more to the model's improvement than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's real KOSPI200 stock index. Our research primarily aims at predicting trend segments to capture signal acquisition or short-term trend transition points. The experimental data set covers the KOSPI200 index from 2004 to 2017 and includes technical indicators such as price and volatility measures, along with macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, trend classification, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for verification. To verify the performance of the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted. The MSVM adopts the one-against-one (OAO) approach, known to be the most accurate among the various MSVM decompositions. Although there are some limitations, the final experimental results demonstrate that the proposed GA-MSVM performs at a significantly higher level than all comparative models.
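
The GA-as-wrapper idea encodes feature and instance choices as one binary chromosome and scores each candidate by cross-validated MSVM accuracy. Below is a minimal sketch of that scheme with scikit-learn, whose SVC uses the one-against-one decomposition mentioned above; the synthetic data, GA settings, and fixed kernel parameters are illustrative assumptions (the paper also evolves the kernel parameters), not the paper's configuration.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Synthetic stand-in for 15 indicators and 3 trend classes:
    X, y = make_classification(n_samples=300, n_features=15, n_informative=6,
                               n_classes=3, n_clusters_per_class=1, random_state=0)
    N_FEAT, N_INST = X.shape[1], X.shape[0]

    def fitness(chrom):
        # chrom = [feature mask | instance mask]; fitness = 3-fold CV accuracy.
        f_mask = chrom[:N_FEAT].astype(bool)
        i_mask = chrom[N_FEAT:].astype(bool)
        if f_mask.sum() == 0 or i_mask.sum() < 30:
            return 0.0
        Xs, ys = X[i_mask][:, f_mask], y[i_mask]
        if np.bincount(ys, minlength=3).min() < 3:
            return 0.0                               # keep all classes represented
        return cross_val_score(SVC(kernel="rbf"), Xs, ys, cv=3).mean()

    pop = rng.integers(0, 2, size=(30, N_FEAT + N_INST))
    for gen in range(20):
        scores = np.array([fitness(c) for c in pop])
        elite = pop[np.argsort(scores)[::-1][:10]]   # selection: keep best 10
        children = []
        while len(children) < len(pop) - len(elite):
            p1, p2 = elite[rng.integers(10)], elite[rng.integers(10)]
            cut = rng.integers(1, N_FEAT + N_INST)   # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            child[rng.random(child.size) < 0.01] ^= 1  # bit-flip mutation
            children.append(child)
        pop = np.vstack([elite] + children)

    best = pop[np.argmax([fitness(c) for c in pop])]
    print("selected features:", np.flatnonzero(best[:N_FEAT]))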

Optimization of Medium for the Carotenoid Production by Rhodobacter sphaeroides PS-24 Using Response Surface Methodology (반응 표면 분석법을 사용한 Rhodobacter sphaeroides PS-24 유래 carotenoid 생산 배지 최적화)

  • Bong, Ki-Moon;Kim, Kong-Min;Seo, Min-Kyoung;Han, Ji-Hee;Park, In-Chul;Lee, Chul-Won;Kim, Pyoung-Il
    • Korean Journal of Organic Agriculture / v.25 no.1 / pp.135-148 / 2017
  • Response surface methodology (RSM), combining a Plackett-Burman design with a Box-Behnken experimental design, was applied to optimize the ratios of nutrient components for carotenoid production by Rhodobacter sphaeroides PS-24 in liquid-state fermentation. Nine nutrient ingredients (yeast extract, sodium acetate, NaCl, K2HPO4, MgSO4, mono-sodium glutamate, Na2CO3, NH4Cl, and CaCl2) were selected for optimizing the medium composition based on their statistical significance and positive effects on carotenoid yield. The Box-Behnken design was then employed to further optimize the selected nutrient components and increase carotenoid production. Based on the Box-Behnken assay data, a second-order coefficient model was fitted to investigate the relationship between carotenoid productivity and the nutrient ingredients. The optimal medium constituents for carotenoid production by Rhodobacter sphaeroides PS-24 were determined as follows: yeast extract 1.23 g, sodium acetate 1 g, NH4Cl 1.75 g, NaCl 2.5 g, K2HPO4 2 g, MgSO4 1.0 g, mono-sodium glutamate 7.5 g, Na2CO3 3.71 g, NH4Cl 3.5 g, and CaCl2 0.01 g per liter. A maximum carotenoid yield of 18.11 mg/L was measured in a confirmatory experiment in liquid culture using a 500 L fermenter.
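
A second-order response-surface fit of the kind described can be sketched with ordinary polynomial regression. The three-factor coded matrix below is the standard Box-Behnken layout (12 edge points plus center replicates); the yield values are toy numbers, not the paper's data.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    # Coded levels (-1, 0, +1) of a three-factor Box-Behnken design:
    X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
                  [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
                  [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
                  [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
    y = np.array([12.1, 13.0, 12.8, 14.2, 11.5, 13.9, 12.4, 14.8,
                  12.0, 13.2, 12.9, 14.0, 16.5, 16.8, 16.2])  # mg/L (toy)

    quad = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(quad.fit_transform(X), y)

    # Search the coded design cube for the predicted optimum.
    g = np.linspace(-1, 1, 21)
    grid = np.array(np.meshgrid(g, g, g)).T.reshape(-1, 3)
    pred = model.predict(quad.transform(grid))
    print(f"predicted max {pred.max():.2f} mg/L at coded levels {grid[np.argmax(pred)]}")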

Applications of Fuzzy Theory on The Location Decision of Logistics Facilities (퍼지이론을 이용한 물류단지 입지 및 규모결정에 관한 연구)

  • 이승재;정창무;이헌주
    • Journal of Korean Society of Transportation / v.18 no.1 / pp.75-85 / 2000
  • In existing optimization models, crisp data have been used in the objective function or constraints to derive optimal solutions, and subjective factors are excluded because complex and uncertain circumstances are treated as probabilistic ambiguity. In other words, the optimal solutions of existing models could be regarded as completely satisfying the objective function when industrial engineering methods are applied to minimize decision-making risks. As a result, decision-makers in location problems could not respond appropriately to variations in demand and other variables, and were offered little room for choice because of insufficient information. Under these circumstances, this study develops a model for the location and size decision problems of logistics facilities using fuzzy theory, with the intention of making the most reasonable decision from a subjective point of view under ambiguous circumstances, building on existing decision-making problems that must satisfy constraints while optimizing an objective function under strictly given conditions. After establishing a general mixed integer programming (MIP) model, based on the results of existing studies, to decide location and size simultaneously, a fuzzy mixed integer programming (FMIP) model was developed using fuzzy theory. The general linear programming software LINDO 6.01 was used to simulate and evaluate the developed model with examples and to judge the appropriateness and adaptability of the FMIP model to the real world.
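
One common way to turn a fuzzy MIP into a crisp one is the max-lambda (Zimmermann) formulation: each fuzzy constraint gets a linear membership function, and the minimum satisfaction degree lambda is maximized. The sketch below illustrates this with PuLP as a stand-in for the LINDO 6.01 solver used in the paper; the sites, costs, and fuzzy bounds are invented for illustration.

    import pulp

    sites = ["A", "B"]
    cost = {"A": 100.0, "B": 140.0}       # facility cost if built (toy)
    cap = {"A": 60.0, "B": 90.0}          # capacity if built (toy)
    demand_lo, demand_hi = 80.0, 100.0    # fuzzy demand: ideal 100, tolerable down to 80
    budget_lo, budget_hi = 150.0, 250.0   # fuzzy budget: ideal under 150, hard limit 250

    prob = pulp.LpProblem("fuzzy_location", pulp.LpMaximize)
    open_ = pulp.LpVariable.dicts("open", sites, cat="Binary")
    lam = pulp.LpVariable("lambda", 0, 1)  # overall satisfaction degree

    # Demand membership: served capacity >= demand_lo + lam * (hi - lo)
    prob += pulp.lpSum(cap[s] * open_[s] for s in sites) >= demand_lo + lam * (demand_hi - demand_lo)
    # Budget membership: total cost <= budget_hi - lam * (hi - lo)
    prob += pulp.lpSum(cost[s] * open_[s] for s in sites) <= budget_hi - lam * (budget_hi - budget_lo)
    prob += lam                            # maximize the minimum satisfaction

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("lambda =", lam.value(), {s: open_[s].value() for s in sites})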


Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea: Journal of the Korean Society of Oceanography / v.27 no.3 / pp.127-143 / 2022
  • Recently, there have been many active attempts to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for high-performance computing (HPC). These new features facilitate ocean modeling experimentation on commercial cloud computing systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting appropriate systems, as it can help minimize execution time and the amount of resources used. The effect of cache memory is large in the processing structure of an ocean numerical model, which performs input/output of data in multidimensional array structures, and network speed is important because of the communication patterns that move large amounts of data. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for migrating other ocean models into cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory, and that memory latency is also important. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small ones. Our analysis results will be a helpful reference for constructing the best cloud computing system to minimize the time and cost of numerical ocean modeling.
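
The core bookkeeping behind such comparisons is strong-scaling arithmetic: speedup and parallel efficiency relative to a baseline core count. A small sketch, with invented timings, shows why adding cores pays off more for large grids than for small ones.

    cores = [16, 32, 64, 128]
    wall_small = [220.0, 130.0, 95.0, 88.0]      # seconds, small grid (toy)
    wall_large = [3600.0, 1850.0, 980.0, 560.0]  # seconds, large grid (toy)

    def scaling(cores, walls):
        # Speedup and parallel efficiency relative to the smallest core count.
        base_c, base_t = cores[0], walls[0]
        return [(c, base_t / t, (base_t / t) / (c / base_c))
                for c, t in zip(cores, walls)]

    for label, walls in [("small grid", wall_small), ("large grid", wall_large)]:
        for c, speedup, eff in scaling(cores, walls):
            print(f"{label}: {c:4d} cores  speedup {speedup:5.2f}  efficiency {eff:5.0%}")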

Classification of Carbon-Based Global Marine Eco-Provinces Using Remote Sensing Data and K-Means Clustering (K-Means Clustering 기법과 원격탐사 자료를 활용한 탄소기반 글로벌 해양 생태구역 분류)

  • Young Jun Kim;Dukwon Bae;Jungho Im ;Sihun Jung;Minki Choo;Daehyeon Han
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.1043-1060 / 2023
  • The acceleration of climate change in recent years has brought increased attention to 'blue carbon', the carbon captured by the ocean; however, our comprehension of marine ecosystems is still incomplete. This study classified and analyzed global marine eco-provinces with k-means clustering, taking carbon cycling into account. We used five input variables spanning the past 20 years (2001-2020): Carbon-based Productivity Model (CbPM) net primary production (NPP), particulate inorganic and organic carbon (PIC and POC), sea surface salinity (SSS), and sea surface temperature (SST). A total of nine eco-provinces were identified through an optimization process, and the spatial distribution and environmental characteristics of each province were analyzed. Five of the provinces showed characteristics of open oceans, while four reflected characteristics of coastal and high-latitude regions. Furthermore, a qualitative comparison with previous studies of marine ecological zones was conducted to analyze the features of the nine eco-provinces in detail with respect to carbon cycling. Finally, we examined the changes in the nine eco-provinces over four past periods (2001-2005, 2006-2010, 2011-2015, and 2016-2020). Rapid changes in coastal ecosystems were observed; in particular, significant decreases were identified in eco-provinces whose high productivity is driven by large freshwater inflows. Our findings can serve as valuable reference material for marine ecosystem classification and coastal management that accounts for carbon cycling and ongoing climate change, and can also inform guidelines for the systematic management of coastal regions vulnerable to climate change.
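
The clustering step itself is straightforward to reproduce in outline: standardize the five variables, then run k-means with k = 9 as in the paper. In this minimal sketch, a random matrix stands in for the 20-year satellite composites.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    # Rows = ocean grid cells; columns = NPP, PIC, POC, SSS, SST (toy values).
    X = rng.normal(size=(10000, 5))

    Xs = StandardScaler().fit_transform(X)   # z-score each variable
    km = KMeans(n_clusters=9, n_init=10, random_state=0).fit(Xs)

    labels = km.labels_                      # eco-province id per grid cell
    print(np.bincount(labels))               # cells per province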

Opportunity Tree Framework Design For Optimization of Software Development Project Performance (소프트웨어 개발 프로젝트 성능의 최적화를 위한 Opportunity Tree 모델 설계)

  • Song Ki-Won;Lee Kyung-Whan
    • The KIPS Transactions: Part D / v.12D no.3 s.99 / pp.417-428 / 2005
  • Today, IT organizations perform projects with a vision related to marketing and financial profit. Realizing this vision requires improving project-performing ability in terms of QCD, and organizations have made great efforts to achieve this through process improvement. Large companies such as IBM, Ford, and GE have attributed over 80% of their success to business process re-engineering using information technology, rather than to the improvement effects of computerization alone. Collecting, analyzing, and managing data on performed projects is important to this end, but quantitative measurement is difficult because software is intangible and the effects and efficiency gains of process change are not directly visible, so it is not easy to derive an improvement strategy. This paper measures and analyzes project performance, focusing on organizations' external effectiveness and internal efficiency (Quality, Delivery, Cycle time, and Waste). Based on the measured project performance scores, an Opportunity Tree (OT) model was designed for optimizing project performance. The design process is as follows. First, metadata are derived from projects and analyzed through a quantitative GQM (Goal-Question-Metric) questionnaire. Then, the project performance model is built from the questionnaire data and the organization's performance score for each area is calculated. Each value is revised by integrating the measured scores with per-area vision weights from all stakeholders (CEO, middle managers, developers, investors, and customers). From this, routes for improvement are presented and an optimized improvement method is suggested, as sketched below. Existing software process improvement methods have been highly effective at the level of individual processes, but somewhat unsatisfactory as a structural means of developing and systematically managing strategies when applying those processes to projects. The proposed OT model provides a solution to this problem. It offers an optimal improvement method in line with the organization's goals and, applied with the proposed methods, can reduce the risks that may occur in the course of process improvement. In addition, satisfaction with the improvement strategy can be increased by collecting vision-weight input from all stakeholders through the qualitative questionnaire and reflecting it in the calculation. The OT model is also useful for optimizing market expansion and financial performance by controlling Quality, Delivery, Cycle time, and Waste.
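
The score-revision step described above amounts to weighting each area's measured score by vision weights aggregated across stakeholder groups. A small sketch with invented scores and weights:

    import numpy as np

    areas = ["Quality", "Delivery", "Cycle time", "Waste"]
    scores = np.array([72.0, 65.0, 58.0, 80.0])      # measured area scores (toy)

    # Vision weights per stakeholder group (rows sum to 1):
    # CEO, middle managers, developers, investors, customers.
    weights = np.array([
        [0.40, 0.30, 0.20, 0.10],
        [0.25, 0.35, 0.25, 0.15],
        [0.20, 0.25, 0.35, 0.20],
        [0.35, 0.30, 0.15, 0.20],
        [0.45, 0.30, 0.15, 0.10],
    ])

    group_weight = np.full(5, 0.2)                   # equal say per group (assumption)
    combined = group_weight @ weights                # integrated vision weights
    revised = combined * scores                      # revised area scores

    for a, w, r in zip(areas, combined, revised):
        print(f"{a:10s} weight {w:.3f} weighted score {r:5.1f}")
    print("lowest weighted area -> first improvement route:",
          areas[int(np.argmin(revised))])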

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.53-65 / 2019
  • Object tracking is one of the important steps in building video-based surveillance systems and is considered an essential task alongside object detection and recognition. Various machine learning methods (e.g., least squares, perceptron, and support vector machine) can be applied in different designs of tracking systems. Generative methods (e.g., principal component analysis) have commonly been utilized for their simplicity and effectiveness, but they focus only on modeling the target object. Because of this limitation, discriminative methods (e.g., binary classification) have been adopted to distinguish the target object from the background. Among machine learning methods for binary classification, total error rate minimization has been notably successful: it can achieve a global minimum thanks to a quadratic approximation of the step function, whereas other methods (e.g., support vector machines) seek local minima of nonlinear functions (e.g., the hinge loss). This quadratic approximation gives total error rate minimization appropriate properties for solving binary classification optimization problems. However, total error rate minimization was originally formulated in a batch-mode setting, which limits it to offline learning; with limited computing resources, offline learning cannot handle large-scale data sets. Compared to offline learning, online learning can update its solution without storing all training samples, which has become essential as data sets grow. Since object tracking must handle data samples in real time, online total error rate minimization methods are needed to address tracking problems efficiently. An online version of total error rate minimization was previously developed, but it relied on an approximate reweighting technique. Although the approximation achieved good performance in biometric applications, the method assumes that the total error rate minimum is reached only asymptotically, as the number of training samples goes to infinity. In practice, the approximation can continuously accumulate learning errors as training samples arrive, eventually leading to a wrong solution, and a wrong solution can cause significant errors in surveillance systems. In this paper, we propose an exactly reweighted technique that recursively updates the solution of total error rate minimization in an online manner; compared with the approximately reweighted online version, exact reweighting is achieved. The proposed exact online learning method is then applied to object tracking, using particle filtering; our observation model combines generative and discriminative methods to leverage the advantages of both. In our experiments, the proposed tracking system achieves promising performance on 8 public video sequences compared with competing systems, and a paired t-test is reported to assess the quality of the results. The proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks, and other online learning methods that require exact reweighting can adopt our reweighting technique. Beyond object tracking, the method can easily be applied to object detection and recognition, so our work can contribute to the online learning community as well as the tracking, detection, and recognition communities.
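
The abstract does not spell out the update equations, but the idea of exact reweighting can be sketched as follows: keep per-class sufficient statistics, so that when the class weights (1/N+ and 1/N-) change with each new sample, the quadratic objective is re-solved under the current weights instead of patching the stale solution. This is an illustration of the idea on synthetic data, not the paper's exact algorithm; in practice a recursive Sherman-Morrison-style update can produce the same solution more cheaply.

    import numpy as np

    rng = np.random.default_rng(1)
    d, lam = 5, 1e-2                      # feature dimension, ridge regularizer

    # Per-class sufficient statistics: [sum of x x^T, sum of x, count].
    stats = {+1: [np.zeros((d, d)), np.zeros(d), 0],
             -1: [np.zeros((d, d)), np.zeros(d), 0]}

    def update(x, y):
        A, b, n = stats[y]
        stats[y] = [A + np.outer(x, x), b + x, n + 1]

    def solve():
        # Re-solve the quadratic objective under the *current* weights
        # 1/N+ and 1/N- (this is the "exact" reweighting; an approximate
        # variant would leave old contributions at their stale weights).
        H, g = lam * np.eye(d), np.zeros(d)
        for y in (+1, -1):
            A, b, n = stats[y]
            if n:
                H += A / n
                g += y * b / n            # class targets are +1 and -1
        return np.linalg.solve(H, g)

    for _ in range(500):                  # synthetic online stream
        y = 1 if rng.random() < 0.3 else -1
        x = rng.normal(loc=0.8 * y, size=d)
        update(x, y)
        w = solve()

    correct = 0
    for _ in range(200):                  # quick holdout check
        y = 1 if rng.random() < 0.3 else -1
        x = rng.normal(loc=0.8 * y, size=d)
        correct += int(np.sign(w @ x) == y)
    print(f"holdout accuracy ~ {correct / 200:.2f}")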

Software Development for Optimal Productivity and Service Level Management in Ports (항만에서 최적 생산성 및 서비스 수준 관리를 위한 소프트웨어 개발)

  • Park, Sang-Kook
    • Journal of Navigation and Port Research / v.41 no.3 / pp.137-148 / 2017
  • Port service level is a measure of competitiveness among ports for operating and managing bodies such as the terminal operation company (TOC), the port authority, or the government, and is an important indicator for shipping companies and freight haulers when selecting a port. Given the importance of such metrics, we developed software to objectively define and manage six important service indicators specific to container and bulk terminals: berth occupancy rate, ship's waiting ratio, berth throughput, number of berths, average number of vessels waiting, and average waiting time. We computed the six indicators for berths 1 through 5 in the container terminals and berths 1 through 4 in the bulk terminals. The software allows easy computation of the expected ship's waiting ratio as a function of berth occupancy rate, berth throughput, number of berths, average number of vessels waiting, and average waiting time. It also predicts yearly throughput from the ship's waiting ratio and other productivity indicators, based on the arrival patterns of ship traffic. As a result, a TOC can make strategic decisions about the trade-offs in the optimal operating level of the facility, with better predictors of the service factors (ship's waiting ratio) and productivity factors (yearly throughput). Successful implementation of the software would attract more shipping companies and shippers and maximize TOC profits.
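
For a flavor of the arithmetic that links these indicators, an M/M/c queueing model can serve as a stand-in (the software itself works from actual ship arrival patterns; the rates below are invented, and "waiting ratio" is taken here as waiting time over service time, an assumption):

    import math

    def mmc_metrics(arrival_rate, service_rate, berths):
        # Standard Erlang-C quantities for an M/M/c queue.
        rho = arrival_rate / (berths * service_rate)      # berth occupancy rate
        a = arrival_rate / service_rate                   # offered load
        p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(berths))
                    + a**berths / (math.factorial(berths) * (1 - rho)))
        lq = p0 * a**berths * rho / (math.factorial(berths) * (1 - rho) ** 2)
        wq = lq / arrival_rate                            # average waiting time
        return rho, lq, wq, wq * service_rate             # last: wait / service time

    # e.g. 4 berths, 0.4 ships/hour arriving, 8-hour mean service time:
    rho, lq, wq, wratio = mmc_metrics(0.4, 1 / 8, 4)
    print(f"occupancy {rho:.0%}, avg ships waiting {lq:.2f}, "
          f"avg wait {wq:.1f} h, waiting ratio {wratio:.2f}")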

Process Alignment between MND-AF and ADDMe for Products Reusability (산출물 재사용성을 위한 MND-AF와 ADDMe 프로세스 정렬)

  • Bu, Yong-Hee;Lee, Tae-Gong
    • Journal of the Military Operations Research Society of Korea / v.32 no.2 / pp.131-142 / 2006
  • Nowadays, most enterprises have introduced both enterprise architecture (EA) methodology, to optimize the enterprise as a whole, and component-based development (CBD) methodology, to improve software reusability. The Korean government has not only developed many EA guidance products, such as an EA framework, reference models, and guidelines, but has also enacted a law to optimize the government-wide enterprise. The Ministry of National Defense (MND) has developed MND-AF as its standard methodology for EA and ADDMe as its standard methodology for CBD. However, MND-AF and ADDMe artifacts can be developed redundantly because the two processes are not fully aligned. The purpose of this paper is to present a scheme by which ADDMe can reuse the artifacts of MND-AF, based on an analysis of the relationships between the two processes. To identify these relationships, we first relate the 'definition' parts of the two processes and then relate the 'attribute' parts based on the relation of the 'detailed definition' parts. As a result, we found that 113 attributes of MND-AF are related to 49 attributes of ADDMe. The proposed approach will therefore reduce development cost and time and serves as a good example of aligning EA and CBD processes.
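
A cross-methodology attribute mapping like the 113-to-49 relation found in the paper can be represented as a simple lookup table and queried for reuse coverage; the artifact and attribute names below are hypothetical, chosen only to show the shape of such a table.

    # ADDMe attribute -> MND-AF attributes it can reuse (hypothetical names).
    mapping = {
        "component.interface.name": ["system.interface.id", "system.interface.label"],
        "component.data.entity":    ["data.entity.name"],
        "usecase.actor":            ["org.role.name", "org.unit.name"],
    }

    reused = sorted({src for sources in mapping.values() for src in sources})
    print(f"{len(mapping)} ADDMe attributes reuse {len(reused)} MND-AF attributes")
    for target, sources in mapping.items():
        print(f"  {target} <- {', '.join(sources)}")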