• Title/Summary/Keyword: feature models

Development of Accident Model by Traffic Violation Type in Korea 4-legged Circular Intersections (국내 4지 원형교차로 법규위반별 사고모형 개발)

  • Park, Byung Ho;Kim, Kyeong Yong
    • Journal of the Korean Society of Safety / v.30 no.2 / pp.70-76 / 2015
  • This study deals with traffic accidents at circular intersections. Its purpose is to develop accident models by traffic violation type. To this end, the study gives particular attention to analyzing the various factors that influence traffic accidents and to developing optimal models such as Poisson and negative binomial regression models. The main results are as follows. First, four statistically significant negative binomial models were developed; the negative binomial form was adopted because the over-dispersion coefficients had values greater than 1.96. Second, no variables were common to all models. The model-specific variables were found to be traffic volume, conflicting ratio, number of circulatory lanes, width of the circulatory lane, number of traffic islands per access road, number of reduction facilities, and features of the central island and crosswalk.
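
The choice of a negative binomial over a Poisson model hinges on over-dispersion, i.e., the variance of the accident counts exceeding their mean. A minimal sketch of that diagnostic, using made-up counts rather than the paper's data:

```python
import statistics

# Hypothetical accident counts per intersection (illustrative only)
counts = [0, 1, 3, 0, 7, 2, 0, 5, 1, 9, 0, 4]

mean = statistics.mean(counts)
var = statistics.pvariance(counts)

# A Poisson model assumes variance == mean; a ratio well above 1
# suggests over-dispersion and motivates a negative binomial model.
dispersion = var / mean
print(f"mean={mean:.2f}, variance={var:.2f}, dispersion={dispersion:.2f}")
```

In practice the full regression would be fit with a statistics package, but the mean-variance comparison above is the intuition behind the paper's model choice.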

Review of Stormwater Quality, Quantity and Treatment Methods Part 1: Stormwater Quantity Modelling

  • Aryal, Rupak;Kandasamy, J.;Vigneswaran, S.;Naidu, R.;Lee, S.H.
    • Environmental Engineering Research / v.14 no.2 / pp.71-78 / 2009
  • A review of stormwater quantity and quality in the urban environment is presented. The review is in three parts. The first part reviews mathematical methods for stormwater quantity modelling by examining a number of stormwater models in current use. The important features of these models, their applications, and their management are discussed. Different types of stormwater management models have been presented in the literature. Generally, models are classified as conceptual or empirical depending on whether or not they are based on physical laws. In either case, if any of the variables in the model are regarded as random variables with a probability distribution, the model is stochastic; otherwise it is deterministic (based on process descriptions). The analytical techniques are presented in this paper.
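
The deterministic/stochastic distinction above can be illustrated with the rational method (Q = C·i·A), a standard peak-runoff formula: treating rainfall intensity as a random variable turns the deterministic calculation into a stochastic one. The coefficients below are illustrative, not values from the review:

```python
import random

def runoff_rational(C, i, A):
    """Deterministic rational method: peak flow Q = C * i * A.
    C: runoff coefficient, i: rainfall intensity, A: catchment area
    (illustrative units; consistency is assumed)."""
    return C * i * A

# Stochastic variant: rainfall intensity as a random variable with a
# probability distribution, per the review's definition of a stochastic model.
random.seed(0)
samples = [runoff_rational(0.8, random.gauss(50, 10), 2.5) for _ in range(1000)]
print(sum(samples) / len(samples))  # clusters around the deterministic Q at i=50
```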

A sensitivity analysis of machine learning models on fire-induced spalling of concrete: Revealing the impact of data manipulation on accuracy and explainability

  • Mohammad K. al-Bashiti;M.Z. Naser
    • Computers and Concrete / v.33 no.4 / pp.409-423 / 2024
  • Using an extensive database, a sensitivity analysis across fifteen machine learning (ML) classifiers was conducted to evaluate the impact of various data manipulation techniques, evaluation metrics, and explainability tools. The results of this sensitivity analysis reveal that the examined models can achieve an accuracy of 72-93% in predicting the fire-induced spalling of concrete and identify the light gradient boosting machine, extreme gradient boosting, and random forest algorithms as the best-performing models. Across these models, the six key factors influencing spalling were maximum exposure temperature, heating rate, compressive strength of concrete, moisture content, silica fume content, and the quantity of polypropylene fiber. Our analysis also documents some conflicting results observed with the deep learning model. As such, this study highlights the necessity of selecting suitable models and carefully evaluating possible outcome biases.
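
As a toy illustration of the kind of sensitivity analysis described (not the paper's fifteen-classifier pipeline), the sketch below measures how the accuracy of a simple nearest-centroid classifier responds to one data manipulation, dropping a feature. Data, labels, and the choice of classifier are invented:

```python
# Two illustrative features per sample, e.g. heating rate and temperature
train = [((2.0, 10.0), "spall"), ((2.2, 11.0), "spall"),
         ((0.5, 1.0), "no"),     ((0.4, 1.2), "no")]
test = [((2.1, 10.5), "spall"), ((0.45, 1.1), "no")]

def centroid(points):
    return tuple(sum(xs) / len(xs) for xs in zip(*points))

def accuracy(train, test):
    # Nearest-centroid classification: assign each test point to the
    # class whose training centroid is closest in squared distance.
    cents = {label: centroid([x for x, y in train if y == label])
             for label in {y for _, y in train}}
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    preds = [min(cents, key=lambda c: dist(x, cents[c])) for x, _ in test]
    return sum(p == y for p, (_, y) in zip(preds, test)) / len(test)

# Data manipulation under study: remove feature column i everywhere
drop = lambda data, i: [(tuple(v for j, v in enumerate(x) if j != i), y)
                        for x, y in data]
print(accuracy(train, test))                    # full feature set
print(accuracy(drop(train, 1), drop(test, 1)))  # after dropping feature 1
```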

Extending the Scope of Automatic Time Series Model Selection: The Package autots for R

  • Jang, Dong-Ik;Oh, Hee-Seok;Kim, Dong-Hoh
    • Communications for Statistical Applications and Methods / v.18 no.3 / pp.319-331 / 2011
  • In this paper, we propose automatic procedures for model selection across various univariate time series data. Automatic model selection is important, especially in data mining with a large number of time series, for example, thousands of signals accessing a web server during a specific time period. Several methods have been proposed for automatic model selection of time series; however, most existing methods focus on linear time series models such as exponential smoothing and autoregressive integrated moving average (ARIMA) models. The key feature that distinguishes the proposed procedures from previous approaches is that they can be used both for linear time series models and for nonlinear time series models such as threshold autoregressive (TAR) and autoregressive moving average-generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) models. The proposed methods select a model from among the candidate models in the prediction-error sense. We also provide an R package, autots, that implements the proposed automatic model selection procedures. In this paper, we illustrate these algorithms with artificial and real data and describe the implementation of the autots package for R.
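
The idea of selecting a model "in the prediction-error sense" can be sketched without the autots package itself: fit several candidate forecasters, score each on a holdout segment, and keep the one with the smallest error. The candidates and data below are illustrative stand-ins, not autots's actual model set or API:

```python
# Illustrative monthly series; the last 4 points form the holdout
series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]
train, test = series[:-4], series[-4:]

def forecast_mean(hist, h):   # forecast with the training mean
    m = sum(hist) / len(hist)
    return [m] * h

def forecast_naive(hist, h):  # repeat the last observed value
    return [hist[-1]] * h

def forecast_drift(hist, h):  # extrapolate the average step size
    slope = (hist[-1] - hist[0]) / (len(hist) - 1)
    return [hist[-1] + slope * (i + 1) for i in range(h)]

def mse(pred, actual):
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

candidates = {"mean": forecast_mean, "naive": forecast_naive, "drift": forecast_drift}
# Select the candidate minimizing holdout prediction error
best = min(candidates, key=lambda k: mse(candidates[k](train, len(test)), test))
print(best)
```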

Forecasting Volatility of Stocks Return: A Smooth Transition Combining Forecasts

  • HO, Jen Sim;CHOO, Wei Chong;LAU, Wei Theng;YEE, Choy Leng;ZHANG, Yuruixian;WAN, Cheong Kin
    • The Journal of Asian Finance, Economics and Business / v.9 no.10 / pp.1-13 / 2022
  • This paper empirically explores the predictive ability of the newly proposed smooth transition (ST) time-varying combining forecast methods. The proposed method allows the "weight" of the combined forecasts to change gradually over time through its distinctive feature, the transition variable. Stock market returns from seven countries were applied to ad hoc models, the well-known Generalized Autoregressive Conditional Heteroskedasticity (GARCH) family models, and the Smooth Transition Exponential Smoothing (STES) models. Of the individual models, GJR-GARCH and STES-E&AE emerged as the best and were therefore chosen for constructing the combined forecast models, from which a total of nine ST combining methods were developed. The robustness of the ST combining forecasts is also validated by the Diebold-Mariano (DM) test. The post-sample forecasting performance shows that the ST combining forecast methods outperformed all the individual models and the fixed-weight combining models. This study contributes in two ways: 1) the ST combining methods statistically outperformed all the individual forecast methods and the existing traditional combining methods based on simple averaging and the Bates & Granger method; 2) trading volume as a transition variable in the ST methods was superior to the individual models as well as to the ST models using the sign or size of past shocks as transition variables.
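
The core mechanism, a combining weight that moves smoothly with a transition variable, can be sketched with a logistic transition function. The smoothness parameter gamma, threshold c, and forecast values below are illustrative, not parameters estimated in the paper:

```python
import math

def st_combine(f1, f2, transition, gamma=2.0, c=0.0):
    """Smooth-transition combination of two forecasts: the weight on f1
    moves smoothly with a transition variable (e.g., trading volume).
    gamma controls how sharp the transition is; c is the threshold."""
    w = 1.0 / (1.0 + math.exp(-gamma * (transition - c)))
    return w * f1 + (1.0 - w) * f2

# A large transition value tilts the combination toward f1;
# a very negative one tilts it toward f2.
print(st_combine(1.0, 3.0, transition=5.0))   # close to 1.0
print(st_combine(1.0, 3.0, transition=-5.0))  # close to 3.0
```

Unlike fixed-weight combining, the weight here is re-evaluated every period as the transition variable changes, which is what lets the combination adapt over time.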

A Feature Selection Method Based on Fuzzy Cluster Analysis (퍼지 클러스터 분석 기반 특징 선택 방법)

  • Rhee, Hyun-Sook
    • The KIPS Transactions:PartB / v.14B no.2 / pp.135-140 / 2007
  • Feature selection is a preprocessing technique commonly used on high-dimensional data. Feature selection studies how to select a subset or list of attributes that are used to construct models describing the data. Feature selection methods attempt to explore the data's intrinsic properties by employing statistics or information theory. Recent developments include approaches such as correlation methods, dimensionality reduction, and mutual information techniques. Feature selection has become the focus of much research in application areas with massive and complex data sets. In this paper, we provide a feature selection method that considers data characteristics and generalization capability. It provides a computational approach to feature selection based on fuzzy cluster analysis of attribute values and on performance measures. We apply it to a system for classifying computer viruses and compare it with a heuristic method using the contrast concept. Experimental results show that the proposed approach can produce a feature ranking, select features, and improve system performance.
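
The paper's ranking criterion is based on fuzzy cluster analysis of attribute values; as a simpler stand-in for that criterion, the sketch below ranks features by a Fisher-style separability score (between-class distance over within-class spread). The data and feature names are invented:

```python
import statistics

# feature_name -> (values in class 0, values in class 1); illustrative data
samples = {
    "f1": ([0.1, 0.2, 0.15], [0.9, 1.0, 0.95]),   # well separated classes
    "f2": ([0.4, 0.6, 0.5],  [0.5, 0.45, 0.55]),  # heavily overlapping
}

def fisher_score(c0, c1):
    # Large when class means are far apart relative to within-class variance
    m0, m1 = statistics.mean(c0), statistics.mean(c1)
    v0, v1 = statistics.pvariance(c0), statistics.pvariance(c1)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

# Feature ranking: most discriminative feature first
ranking = sorted(samples, key=lambda f: fisher_score(*samples[f]), reverse=True)
print(ranking)
```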

A Neural Network Model for Visual Selection: Top-down mechanism of Feature Gate model (시각적 선택에 대한 신경망 모형 FeatureGate 모형의 하향식 기제)

  • 김민식
    • Korean Journal of Cognitive Science / v.10 no.3 / pp.1-15 / 1999
  • Based on known physiological and psychophysical results, a neural network model for visual selection, called FeatureGate, is proposed. The model consists of a hierarchy of spatial maps, and the flow of information from each level of the hierarchy to the next is controlled by attentional gates. The gates are jointly controlled by a bottom-up system favoring locations with unique features and a top-down mechanism favoring locations with features designated as target features. The present study focuses on the top-down mechanism of the FeatureGate model, which produces results similar to Moran and Desimone's (1985) that many current models have failed to explain. The FeatureGate model allows a consistent interpretation of many different experimental results in visual attention, including parallel feature searches and serial conjunction searches, attentional gradients triggered by cuing, feature-driven spatial selection, split attention, inhibition of distractor locations, and flanking inhibition. This framework can be extended to produce a model of shape recognition using upper-level units that respond to configurations of features.
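
The top-down mechanism can be caricatured in a few lines: each location's gate opens in proportion to how many of its features match the designated target features. The locations, features, and scoring rule below are illustrative, not the model's actual dynamics:

```python
# Illustrative feature maps: each location carries a color and an orientation
locations = {
    "loc1": {"color": "red",   "orientation": "vertical"},
    "loc2": {"color": "green", "orientation": "vertical"},
    "loc3": {"color": "red",   "orientation": "horizontal"},
}
target = {"color": "red", "orientation": "vertical"}  # designated target features

def top_down_gate(features):
    # Gate opens in proportion to the fraction of target features matched,
    # so the target location passes the strongest signal upward.
    matches = sum(features[k] == v for k, v in target.items())
    return matches / len(target)

gates = {loc: top_down_gate(f) for loc, f in locations.items()}
print(gates)  # loc1 matches both target features; loc2 and loc3 match one each
```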

A Study on Applying Feature-Oriented Analysis Model to Video-On Demand (VOD) Service Development (주문형 비디오 서비스 개발의 피처지향 분석모델 적용 연구)

  • KO, Kwangil
    • Journal of Digital Contents Society / v.18 no.3 / pp.457-463 / 2017
  • VOD service provides an additional revenue model for digital broadcasting companies beyond the existing subscription-fee and advertisement-based revenue models. Accordingly, each digital broadcasting company develops its own VOD service and performs frequent improvement work, and developers seek to make VOD service development more efficient. To address these needs, this study conducted basic research on applying the feature-oriented analysis model to the development of VOD services. The feature-oriented analysis model is recognized, through a number of case studies, as an effective tool for analyzing the requirements of software whose functions are organically interconnected. In this paper, we developed a feature model of the VOD service and designed the primary functions of each feature and the test cases that verify these functions, laying a foundation for developing VOD services based on the feature-oriented analysis model.
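
A feature model of the kind the study builds can be sketched as a tree of mandatory and optional features plus a configuration-validity check. The VOD features below are hypothetical, not the paper's actual feature model:

```python
# Hypothetical fragment of a VOD-service feature model: each feature
# lists its mandatory and optional child features.
feature_model = {
    "VODService": {"mandatory": ["Browse", "Play"],
                   "optional": ["Download", "ParentalControl"]},
    "Play": {"mandatory": ["Streaming"],
             "optional": ["SubtitleSelection"]},
}

def valid_configuration(selected):
    # A product configuration is valid only if every selected feature
    # also includes all of its mandatory children.
    for feature, children in feature_model.items():
        if feature in selected:
            if any(m not in selected for m in children["mandatory"]):
                return False
    return True

print(valid_configuration({"VODService", "Browse", "Play", "Streaming", "Download"}))
print(valid_configuration({"VODService", "Play"}))  # missing mandatory Browse
```

Test cases for each feature then reduce to checking the designed functions under every valid configuration that includes that feature.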

Improving Hypertext Classification Systems through WordNet-based Feature Abstraction (워드넷 기반 특징 추상화를 통한 웹문서 자동분류시스템의 성능향상)

  • Roh, Jun-Ho;Kim, Han-Joon;Chang, Jae-Young
    • The Journal of Society for e-Business Studies / v.18 no.2 / pp.95-110 / 2013
  • This paper presents a novel feature engineering technique that can improve conventional machine learning-based text classification systems. The proposed method extends the initial set of features by using hyperlink relationships in order to effectively categorize hypertext web documents. Web documents are connected to each other through hyperlinks, and in many cases hyperlinks exist among highly related documents. Such hyperlink relationships can be used to enhance the quality of the features from which classification models are built. The basic idea of the proposed method is to generate a kind of abstracted concept feature that subsumes a few raw feature words; for this, the method computes the semantic similarity between a target document and its neighbor documents by utilizing hierarchical relationships in the WordNet ontology. In developing classification models, the abstracted concept features are treated like other raw features, and they can play a significant role in building more accurate classification models. Through extensive experiments with the Web-KB test collection, we show that the proposed methods outperform the conventional ones.
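
The WordNet-based similarity step can be approximated without the WordNet data itself: similarity falls off with the path length through the lowest common ancestor in a hypernym hierarchy. The toy hierarchy and the 1/(1+path) scoring below are illustrative (NLTK's WordNet interface offers a comparable path_similarity measure):

```python
# Hand-made hypernym tree (child -> parent); illustrative, not WordNet
parent = {
    "dog": "canine", "wolf": "canine", "canine": "carnivore",
    "cat": "feline", "feline": "carnivore",
    "carnivore": "mammal", "mammal": "animal",
}

def ancestors(word):
    # Chain from the word up to the root of the hierarchy
    chain = [word]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

def path_similarity(a, b):
    """1 / (1 + shortest path length through the lowest common ancestor)."""
    pa, pb = ancestors(a), ancestors(b)
    for depth_a, node in enumerate(pa):
        if node in pb:
            return 1.0 / (1.0 + depth_a + pb.index(node))
    return 0.0

print(path_similarity("dog", "wolf"))  # meet at 'canine': shorter path
print(path_similarity("dog", "cat"))   # only meet at 'carnivore'
```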

3D Model Retrieval using Distribution of Interpolated Normal Vectors on Simplified Mesh (간략화된 메쉬에서 보간된 법선 벡터의 분포를 이용한 3차원 모델 검색)

  • Kim, A-Mi;Song, Ju-Whan;Gwun, Ou-Bong
    • Journal of Korea Multimedia Society / v.12 no.11 / pp.1692-1700 / 2009
  • This paper proposes the direction distribution of surface normal vectors as a feature descriptor for three-dimensional models. The proposed feature descriptor achieves rotation invariance using principal component analysis (PCA) and performs mesh simplification to make the descriptor robust and insensitive to added noise. Our method draws samples for the distribution of normal vectors in proportion to the area of each polygon, applies weights to the normal vectors, and applies interpolation to enhance discrimination, so that surfaces with smaller area contribute less to the feature descriptor. Similarity between models is measured with the L1-norm on the probability density histogram after the distances of the feature descriptors are normalized. Experimental results show that the proposed method improves retrieval performance by about 17.2% in average normalized modified retrieval rank (ANMRR) and by 9.6%-17.5% on a quantitative discrimination scale compared with the existing method.
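
The heart of the descriptor, an area-weighted distribution of surface normal directions, can be sketched on a toy tetrahedron. The binning of normal z-components below is an illustrative choice, not the paper's parameterization, and the PCA alignment, simplification, and interpolation steps are omitted:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def face_normal_area(a, b, c):
    # Cross product of two edges gives the face normal; half its length
    # is the triangle's area.
    n = cross([b[i] - a[i] for i in range(3)], [c[i] - a[i] for i in range(3)])
    norm = sum(x * x for x in n) ** 0.5
    return tuple(x / norm for x in n), norm / 2.0  # unit normal, area

# Toy tetrahedron: vertices and triangular faces (vertex index triples)
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]

# Histogram of normal z-components, weighted by face area so that large
# faces contribute proportionally more, mirroring the area-proportional sampling.
bins = [0.0] * 4  # z in [-1,-0.5), [-0.5,0), [0,0.5), [0.5,1]
for f in faces:
    n, area = face_normal_area(*(verts[i] for i in f))
    idx = min(int((n[2] + 1.0) / 0.5), 3)
    bins[idx] += area
total = sum(bins)
print([round(b / total, 3) for b in bins])  # normalized direction distribution
```

A full descriptor would bin the complete direction (not just z) and compare two models' histograms with the L1-norm, as the abstract describes.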
