• Title/Summary/Keyword: Automatic model selection


Automatic Selection of the Tuning Parameter in the Minimum Density Power Divergence Estimation

  • Hong, Changkon;Kim, Youngseok
    • Journal of the Korean Statistical Society
    • /
    • v.30 no.3
    • /
    • pp.453-465
    • /
    • 2001
  • It is often the case that one wants to estimate the parameters of a distribution that follows a certain parametric model while the data are contaminated. It is well known that maximum likelihood estimators are not robust to contamination. Basu et al. (1998) proposed a robust method called the minimum density power divergence estimation. In this paper, we investigate data-driven selection of the tuning parameter $\alpha$ in the minimum density power divergence estimation. A criterion is proposed and its performance is studied through simulation. The simulation includes three cases of estimation problems.
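
Below is a minimal sketch of MDPDE for a normal model, with a simple grid search over the tuning parameter. The selection criterion used here (closeness to a highly robust pilot fit) is an illustrative stand-in, not the criterion proposed in the paper.

```python
# Minimum density power divergence estimation (MDPDE) for N(mu, sigma^2),
# with an illustrative grid search over the tuning parameter alpha.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(params, x, alpha):
    """Empirical density power divergence for a normal model."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # keep sigma positive
    # Closed form of the integral term: \int f^{1+alpha} dx
    integral = (2 * np.pi * sigma**2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    density = norm.pdf(x, mu, sigma)
    return integral - (1 + 1 / alpha) * np.mean(density ** alpha)

def mdpde(x, alpha):
    start = np.array([np.median(x), np.log(np.std(x))])
    res = minimize(dpd_objective, start, args=(x, alpha), method="Nelder-Mead")
    mu, log_sigma = res.x
    return mu, np.exp(log_sigma)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(10, 1, 5)])  # 5% outliers

pilot = mdpde(x, alpha=1.0)  # very robust pilot estimate
candidates = [0.1, 0.25, 0.5, 0.75, 1.0]
best = min(candidates,
           key=lambda a: np.sum((np.array(mdpde(x, a)) - np.array(pilot))**2))
print("selected alpha:", best, "estimate:", mdpde(x, best))
```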


The Optimization of the Production Ratio by the Mean-variance Analysis of the Chemical Products Prices (화학 제품 가격의 변동으로 인한 위험을 최소화하며 수익을 극대화하기 위한 생산 비율 최적화에 관한 연구)

  • Park, Jeong-Ho;Park, Sun-Won
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.12
    • /
    • pp.1169-1172
    • /
    • 2006
  • The prices of chemical products fluctuate due to several factors. Chemical companies cannot predict or prepare for all of these changes, so they are exposed to the risk of profit fluctuation. They can, however, reduce this risk by building a well-diversified product portfolio. This problem can be viewed as an optimization of the product portfolio. We assume that profits come from the 'spread' between naphtha and each chemical product. We calculate the mean and variance of each spread and develop an automatic module to compute the optimal proportion of each product. The theory is based on Markowitz portfolio management: it maximizes the expected return while minimizing the volatility. Finally, we draw an investment selection curve to compare the alternatives and demonstrate the superiority of the approach, and we suggest that the investment selection curve can serve as a decision-making tool.
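
A minimal sketch of the underlying mean-variance step follows: production ratios over product spreads are chosen to minimize portfolio variance subject to a target expected return. The spread data are synthetic, and the paper's actual price series and constraints are not reproduced.

```python
# Markowitz-style optimization of production ratios over product "spreads".
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
spreads = rng.normal([50, 80, 65], [5, 20, 10], size=(250, 3))  # toy daily spreads
mu = spreads.mean(axis=0)            # expected spread per product
cov = np.cov(spreads, rowvar=False)  # spread covariance matrix

def portfolio_variance(w):
    return w @ cov @ w

target_return = 65.0
constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # ratios sum to 1
    {"type": "eq", "fun": lambda w: w @ mu - target_return},  # hit target return
]
bounds = [(0.0, 1.0)] * 3  # no negative "production"
res = minimize(portfolio_variance, np.ones(3) / 3,
               bounds=bounds, constraints=constraints)
print("optimal production ratios:", res.x.round(3))
```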

Strategies for the Automatic Decision of Railway Shunting Routes Based on the Heuristic Search Method (휴리스틱 탐색기법에 근거한 철도입환진로의 자동결정전략 설계)

  • Ko, Yun-Seok
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.52 no.5
    • /
    • pp.283-289
    • /
    • 2003
  • This paper proposes an expert system that can automatically determine the shunting routes corresponding to given shunting works by comprehensively considering the train operating environment in the station. The expert system proposes multiple shunting routes with selection priorities based on a heuristic search strategy. Accordingly, the system operator can select a safe and efficient shunting route from among those routes. The expert system consists of a main inference engine and a sub inference engine. The main inference engine determines the shunting routes with selection priority using the segment routes obtained from the sub inference engine. The heuristic rules are extracted from the operating knowledge of veteran route operators and from the station topology. The inference engine is implemented in the C language using a dynamic memory allocation technique. Finally, the validity of the built expert system is demonstrated with a test case for a model station.
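
The sketch below illustrates the kind of heuristic (best-first) route search described above on a toy station graph. The paper's system is written in C, but Python is used here for brevity; the node names, costs, and heuristic values are invented.

```python
# Best-first search returning multiple shunting routes ordered by priority.
import heapq

graph = {  # track segment -> [(neighbor, traversal cost), ...]
    "yard": [("sw1", 1)],
    "sw1": [("track2", 2), ("track3", 3)],
    "track2": [("platform", 2)],
    "track3": [("platform", 1)],
    "platform": [],
}
heuristic = {"yard": 4, "sw1": 3, "track2": 2, "track3": 1, "platform": 0}

def find_routes(start, goal, k=2):
    """Return up to k routes ordered by estimated cost (route priority)."""
    routes, frontier = [], [(heuristic[start], 0, [start])]
    while frontier and len(routes) < k:
        est, cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            routes.append((cost, path))
            continue
        for nxt, c in graph[node]:
            if nxt not in path:  # no cycles within one shunting route
                heapq.heappush(frontier,
                               (cost + c + heuristic[nxt], cost + c, path + [nxt]))
    return routes

for cost, path in find_routes("yard", "platform"):
    print(cost, " -> ".join(path))
```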

The Comparative Study of NHPP Extreme Value Distribution Software Reliability Model from the Perspective of Learning Effects (NHPP 극값 분포 소프트웨어 신뢰모형에 대한 학습효과 기법 비교 연구)

  • Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.7 no.2
    • /
    • pp.1-8
    • /
    • 2011
  • In this study, the learning effects that software testing managers and test tools contribute during the testing process are studied from the perspective of NHPP software reliability models. Finite-failure non-homogeneous Poisson process models are presented in which the life distribution is the extreme value distribution, which is used to find the minimum (or maximum) of a number of samples from various distributions. The models compare two influencing factors for the errors found: an automatic error-detection factor, for detection techniques known in advance, and a learning factor, by which the testing manager uses prior experience to pinpoint the error-generating factors precisely. As a result, it is confirmed that models in which the learning factor is greater than the automatic error-detection factor are generally more efficient. As a numerical example, time-between-failures data are applied, parameters are estimated by the maximum likelihood method, and, after assessing the data through trend analysis, the efficient model is selected using the mean square error.
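
A minimal sketch of fitting a finite-failure NHPP model by maximum likelihood follows, using a Gumbel (extreme value) distribution function as the mean value function $m(t) = \theta F(t)$. The failure times are synthetic, and the paper's learning-effect terms are not modeled.

```python
# Maximum likelihood fit of a finite-failure NHPP with an extreme value
# (Gumbel) cdf as mean value function: intensity(t) = theta * f(t).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

times = np.array([3., 8., 15., 27., 40., 58., 77., 99., 130., 165.])
T = 180.0  # end of the observation period

def neg_loglik(params):
    log_theta, mu, log_beta = params
    theta, beta = np.exp(log_theta), np.exp(log_beta)
    lam = theta * gumbel_r.pdf(times, loc=mu, scale=beta)  # intensity at failures
    m_T = theta * gumbel_r.cdf(T, loc=mu, scale=beta)      # expected failures by T
    return -(np.sum(np.log(lam)) - m_T)

res = minimize(neg_loglik, [np.log(12), 80.0, np.log(50)], method="Nelder-Mead")
log_theta, mu, log_beta = res.x
print("estimated total fault count theta:", np.exp(log_theta).round(2))
```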

Automatic order selection procedure for count time series models (계수형 시계열 모형을 위한 자동화 차수 선택 알고리즘)

  • Ji, Yunmi;Seong, Byeongchan
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.2
    • /
    • pp.147-160
    • /
    • 2020
  • In this paper, we study an algorithm that automatically determines the orders of past observations and conditional mean values, which play an important role in count time series models. Based on the orders of the ARIMA model, the algorithm constructs a group of order candidates for time series generalized linear models and selects the final model among the combinations of candidate orders based on an information criterion. To evaluate the proposed algorithm, we perform small simulations and an empirical analysis across underlying models and time series, and compare forecasting performance with the ARIMA model. The results of the comparison confirm that the time series generalized linear model offers better performance than the ARIMA model for count time series analysis. In addition, the empirical analysis shows better performance in mid- and long-term forecasting than the ARIMA model.
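
The sketch below illustrates the flavor of such automatic order selection: a Poisson time series GLM regressing counts on their own lags, with the lag order chosen by AIC over a candidate set. The paper's conditional-mean terms and ARIMA-based candidate construction are not reproduced.

```python
# AIC-based lag-order selection for a Poisson regression on lagged counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
y = np.empty(300)
y[0] = 5
for t in range(1, 300):  # toy autoregressive Poisson counts
    y[t] = rng.poisson(1.0 + 0.7 * y[t - 1])

def fit_poisson_ar(y, p):
    """Poisson GLM of y_t on its p most recent lags."""
    Y = y[p:]
    X = sm.add_constant(
        np.column_stack([y[p - k: len(y) - k] for k in range(1, p + 1)]))
    return sm.GLM(Y, X, family=sm.families.Poisson()).fit()

candidates = [1, 2, 3, 4]
best_p = min(candidates, key=lambda p: fit_poisson_ar(y, p).aic)
print("selected order p:", best_p)
```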

Empirical Study on Analyzing Training Data for CNN-based Product Classification Deep Learning Model (CNN기반 상품분류 딥러닝모델을 위한 학습데이터 영향 실증 분석)

  • Lee, Nakyong;Kim, Jooyeon;Shim, Junho
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.1
    • /
    • pp.107-126
    • /
    • 2021
  • In e-commerce, rapid and accurate automatic product classification based on product information is important. Recent developments in deep learning technology have been actively applied to automatic product classification. In order to develop a deep learning model with good performance, the quality of the training data and data preprocessing suitable for the model are crucial. In this study, when categories are inferred from textual product data using a deep learning model, the effects of both data preprocessing and training-data selection are extensively compared and analyzed. We employ a CNN model as our example of a deep learning model. In the experimental analysis, we use real e-commerce data to ensure the validity of the study results. The empirical analysis and results shown in this study may serve as a useful reference for improving performance when developing a deep learning product classification model.
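
For concreteness, here is a minimal sketch of a CNN text classifier of the kind the study evaluates; the vocabulary size, embedding width, and category count are placeholders, and the paper's actual architecture and preprocessing are not specified here.

```python
# A small CNN over token embeddings for product-title classification.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, num_classes=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Parallel convolutions over 3-, 4-, and 5-token windows
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, 100, kernel_size=k) for k in (3, 4, 5)])
        self.fc = nn.Linear(300, num_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))   # class logits

model = TextCNN()
logits = model(torch.randint(0, 20000, (8, 40)))  # 8 product titles, 40 tokens
print(logits.shape)                               # torch.Size([8, 50])
```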

A Study on the Selection of Optimum Auto-design Data using FEA (유한요소해석을 이용한 최적자동설계 데이터 선정에 관한 연구)

  • 박진형;이승수;김민주;김순경;전언찬
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2001.04a
    • /
    • pp.406-409
    • /
    • 2001
  • This study investigates optimum design of the ADS using FEA. We write a program that expresses the ADS completely and minimizes the time required for correcting the model during analysis and manufacturing. We complete an algorithm that can derive the optimum form of a model by feeding error information back through CAE. We then correct the model with the data obtained in the solution process, repeat the stress analysis, and thereby model a ratchet wheel with the optimum form. In the ratchet wheel, the greatest equivalent stress originates at the key groove corner, and the design is verified as safe against the KS standard.
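
A minimal sketch of the feedback idea follows: a design parameter is corrected from stress-analysis results until the maximum equivalent stress falls below an allowable value. The run_fea function is a hypothetical stand-in for a real FEA call, and all numbers are invented.

```python
# Iterative design correction driven by stress-analysis feedback.
def run_fea(fillet_radius_mm):
    """Hypothetical FEA stub: a larger key-groove fillet lowers peak stress."""
    return 420.0 / (1.0 + 0.8 * fillet_radius_mm)  # max equivalent stress, MPa

allowable = 180.0  # MPa, illustrative allowable stress
radius = 0.5       # mm, initial key-groove fillet radius
while run_fea(radius) > allowable:
    radius += 0.25  # feed the overstress back as a geometry correction
print(f"final radius {radius:.2f} mm, stress {run_fea(radius):.1f} MPa")
```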


A New Dynamic Auction Mechanism in the Supply Chain: N-Bilateral Optimized Combinatorial Auction (N-BOCA)

  • Choi, Jin-Ho;Chang, Yong-Sik;Han, In-Goo
    • Proceedings of the Korea Intelligent Information Systems Society Conference
    • /
    • 2005.11a
    • /
    • pp.379-390
    • /
    • 2005
  • In this paper, we introduce a new combinatorial auction mechanism - N-Bilateral Optimized Combinatorial Auction (N-BOCA). N-BOCA is a flexible iterative combinatorial auction model that offers optimized trading for multiple suppliers and multiple purchasers in the supply chain. We design the N-BOCA system from the perspectives of architecture, protocol, and trading strategy. Under the given N-BOCA architecture and protocol, auctioneers and bidders have diverse decision strategies for winner determination, which requires flexible modeling environments. Hence, we propose an optimization modeling agent for bid and auctioneer selection. The agent has the capability of automatic model formulation for integer programming. Finally, we show the viability of N-BOCA through a prototype and experiments. The results show both higher allocation efficiency and effectiveness compared with 1-to-N general combinatorial auction mechanisms.
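
A minimal sketch of the winner-determination integer program at the core of such a combinatorial auction follows: each bid names a bundle and a price, each item may be awarded at most once, and total price is maximized. The bids are invented, and N-BOCA's multi-bilateral protocol and agent layer are not modeled.

```python
# Winner determination for a combinatorial auction as a 0-1 integer program.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

items = ["A", "B", "C"]
bids = [({"A"}, 6.0), ({"B", "C"}, 9.0), ({"A", "B"}, 10.0), ({"C"}, 4.0)]

prices = np.array([p for _, p in bids])
# cover[i, j] = 1 if bid j contains item i
cover = np.array([[1 if item in bundle else 0 for bundle, _ in bids]
                  for item in items])

res = milp(c=-prices,                          # milp minimizes, so negate
           constraints=LinearConstraint(cover, ub=np.ones(len(items))),
           integrality=np.ones(len(bids)),     # all variables integer
           bounds=Bounds(0, 1))                # binary accept/reject per bid
winners = [bids[j] for j in np.flatnonzero(res.x > 0.5)]
print("accepted bids:", winners, "revenue:", -res.fun)
```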


A Study On User Skin Color-Based Foundation Color Recommendation Method Using Deep Learning (딥러닝을 이용한 사용자 피부색 기반 파운데이션 색상 추천 기법 연구)

  • Jeong, Minuk;Kim, Hyeonji;Gwak, Chaewon;Oh, Yoosoo
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.9
    • /
    • pp.1367-1374
    • /
    • 2022
  • In this paper, we propose an automatic cosmetic foundation recommendation system that suggests a suitable foundation product based on the user's skin color. The proposed system receives and preprocesses user images and detects the skin color with OpenCV and machine learning algorithms. The system then compares the performance of training models using XGBoost, Gradient Boost, Random Forest, and Adaptive Boost (AdaBoost), based on 550 data records collected from essential bestsellers in the United States. Based on the comparison results, this paper implements the recommendation system using the best-performing machine learning model. Experimental results show that our system can effectively recommend a foundation suited to the user's skin color, achieving 98% accuracy. Furthermore, our system reduces the number of trials in selecting a foundation for the user's skin color, and thus saves time in selecting foundations.
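
The sketch below illustrates the model-comparison step: four classifiers map skin-color features to a foundation shade class, and the best performer is kept. The features and labels are synthetic; the paper's 550-record dataset and OpenCV preprocessing are not reproduced.

```python
# Compare four classifiers on skin-color features and keep the best one.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier  # requires the xgboost package

# 3 features standing in for detected skin-color channels, 4 shade classes
X, y = make_classification(n_samples=550, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "XGBoost": XGBClassifier(),
    "GradientBoost": GradientBoostingClassifier(),
    "RandomForest": RandomForestClassifier(),
    "AdaBoost": AdaBoostClassifier(),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
best = max(scores, key=scores.get)
print(scores, "-> recommend with:", best)
```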

A Decision Support System for Product Design Common Attribute Selection under the Semantic Web and SWCL (시맨틱 웹과 SWCL하의 제품설계 최적 공통속성 선택을 위한 의사결정 지원 시스템)

  • Kim, Hak-Jin;Youn, Sohyun
    • Journal of Information Technology Services
    • /
    • v.13 no.2
    • /
    • pp.133-149
    • /
    • 2014
  • Firms must provide products that meet customers' needs and wants in order to survive the competition of this globalized market. This paper focuses on how to set the levels of the attributes that compose a product so that firms may offer the best products to customers. In particular, its main issues are how to determine the common attributes and the remaining attributes, with their appropriate levels, to maximize firms' profits, and how to construct a decision support system that eases decision makers' decisions about optimal common attribute selection using the Semantic Web and SWCL technologies. Parameter data in problems and the relationships in the data are expressed as an ontology data model and a set of constraints by using the Semantic Web and SWCL technologies. These generate a quantitative decision-making model through the automatic process in the proposed system, which is fed into a solver using the Logic-based Benders Decomposition method to obtain an optimal solution. The system finally provides the generated solution to the decision makers. This work suggests the opportunity of integrating the proposed system with broader structured data networks and other decision-making tools, thanks to the easy data sharing, standardized data structure, and ease of machine processing of Semantic Web technology.
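
As an illustration of the first half of this pipeline, the sketch below reads parameter data from RDF triples with rdflib and derives a tiny level-selection decision from them. The namespace and predicates are invented, and neither the SWCL constraint vocabulary nor the Logic-based Benders Decomposition solver is reproduced.

```python
# Read attribute-level parameters from an RDF graph, then pick the most
# profitable level per attribute (a stand-in for the generated decision model).
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/design#")
ttl = """
@prefix ex: <http://example.org/design#> .
ex:level1 ex:ofAttribute ex:screen ; ex:profit 30 .
ex:level2 ex:ofAttribute ex:screen ; ex:profit 45 .
ex:level3 ex:ofAttribute ex:battery ; ex:profit 20 .
ex:level4 ex:ofAttribute ex:battery ; ex:profit 15 .
"""
g = Graph()
g.parse(data=ttl, format="turtle")

# Group candidate levels by attribute
levels = {}
for lv, attr in g.subject_objects(EX.ofAttribute):
    profit = float(g.value(lv, EX.profit))
    levels.setdefault(attr, []).append((profit, lv))

# Select the most profitable level of each attribute
for attr, cands in levels.items():
    best_profit, best_level = max(cands)
    print(attr, "->", best_level, f"(profit {best_profit})")
```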