• Title/Summary/Keyword: Count model

Vector Quantization of Image Signal using Learning Count Control Neural Networks (학습 횟수 조절 신경 회로망을 이용한 영상 신호의 벡터 양자화)

  • 유대현;남기곤;윤태훈;김재창
    • Journal of the Korean Institute of Telematics and Electronics C / v.34C no.1 / pp.42-50 / 1997
  • Vector quantization has been shown to be useful for compressing data in a wide range of applications such as image processing, speech processing, and weather satellite imagery. For vector quantization of images, this paper proposes an efficient neural network learning algorithm, called the learning count control algorithm, based on the frequency-sensitive learning algorithm. With this algorithm, more codewords can be assigned to the regions to which the human visual system is sensitive, and the quality of the reconstructed image can be improved. We use a human visual system model that is a cascade of a nonlinear intensity mapping function and a modulation transfer function with a bandpass characteristic.

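A minimal sketch of the frequency-sensitive idea underlying the entry above: each codeword's distortion is scaled by its win count, so frequently chosen codewords become less attractive and the codebook spreads over the input distribution. The function and parameters are illustrative, not the paper's algorithm (which additionally controls the learning count using a human-visual-system weighting).

```python
import numpy as np

def fsl_vq(blocks, n_codewords=64, epochs=10, lr=0.05, seed=0):
    """Frequency-sensitive competitive learning for VQ codebook design.

    blocks: (N, d) array of training vectors (e.g. flattened 4x4 image
    blocks). Each codeword's squared distance is scaled by its win
    count, so heavily used codewords become less attractive and the
    codebook spreads over the input distribution.
    """
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), n_codewords, replace=False)].copy()
    wins = np.ones(n_codewords)            # per-codeword learning counters
    for _ in range(epochs):
        for x in blocks[rng.permutation(len(blocks))]:
            d2 = ((codebook - x) ** 2).sum(axis=1)
            j = np.argmin(wins * d2)       # frequency-sensitive distortion
            codebook[j] += lr * (x - codebook[j])
            wins[j] += 1
    return codebook

# toy usage: 1000 random 16-dimensional "image blocks"
blocks = np.random.default_rng(1).random((1000, 16))
print(fsl_vq(blocks).shape)                # (64, 16)
```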

DMAC Implementation on Excalibur™ (Excalibur™ 상에서의 DMAC 구현)

  • Hwang, In-Ki
    • Proceedings of the KIEE Conference / 2003.11c / pp.959-961 / 2003
  • In this paper, we describe a DMAC (Direct Memory Access Controller) architecture implemented on Altera's Excalibur™, which includes the industry-standard ARM922T™ 32-bit RISC processor core operating at 200 MHz. The DMAC is based on the AMBA (Advanced Microcontroller Bus Architecture) AHB (Advanced High-performance Bus) interface. It has eight channels, and the supported channel count can be extended according to the user application. Round-robin arbitration is used for priority selection. The DMAC supports memory-to-memory, memory-to-peripheral, and peripheral-to-memory transfers. The maximum transfer count is 1024 per transaction, and byte, half-word, and word transfers are supported according to the AHB protocol (HSIZE signals). The design was implemented in VHDL and functionally verified with ModelSim™, then synthesized with LeonardoSpectrum™ using the Altera Excalibur™ library; FPGA place-and-route and device targeting were done with Quartus™. The implemented DMAC module can be used in any system that needs high-speed, broad-bandwidth data transfers.

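A behavioral sketch of the round-robin channel arbitration described above, in Python rather than the paper's VHDL; the class and its interface are hypothetical, illustrating only the policy of resuming the search just past the previous winner.

```python
class RoundRobinArbiter:
    """Round-robin priority selection over DMA channels (behavioral model).

    The search for the next requesting channel starts just after the
    last granted channel, so no requester is starved.
    """
    def __init__(self, n_channels=8):
        self.n = n_channels
        self.last = self.n - 1    # so channel 0 is checked first

    def grant(self, requests):
        """requests: sequence of bools, one per channel."""
        for offset in range(1, self.n + 1):
            ch = (self.last + offset) % self.n
            if requests[ch]:
                self.last = ch
                return ch
        return None               # no channel requesting

arb = RoundRobinArbiter()
reqs = [False, True, False, True, False, False, False, True]
print(arb.grant(reqs))  # 1 (first requester after channel 7)
print(arb.grant(reqs))  # 3 (search resumes after channel 1)
```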

A study on the characteristics of acoustic emission signal in dynamic cutting process (동적 절삭과정에서 AE 신호의 특성에 관한 연구)

  • Kim, Jeong-Suk;Kang, Myeong-Chang;Kim, Duk-Whan
    • Journal of the Korean Society for Precision Engineering / v.11 no.4 / pp.69-76 / 1994
  • AE (Acoustic Emission) signals are correlated with the workpiece material, cutting conditions, and tool geometry during metal cutting. The relationship between the AE signal and the cutting parameters can be obtained through a theoretical model and experiments. The value of the CR (Count Rate) is nearly constant during stable cutting, but when chatter vibration occurs, the CR increases rapidly due to the vibrating deformation zone. Experimental AE signal processing shows that detecting the onset of chatter vibration by CR measurement is more effective than by RMS (Root Mean Square) measurement.

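A small sketch of a count-rate computation and a threshold-based chatter flag, assuming CR counts rising threshold crossings of the AE signal per time window; the window size, threshold, and 3x rule below are illustrative, not the paper's values.

```python
import numpy as np

def count_rate(signal, threshold, window):
    """Count rate (CR): rising threshold crossings of the AE signal per window."""
    above = signal > threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:])
    n_windows = len(signal) // window
    cr = np.zeros(n_windows)
    for idx in crossings:
        w = idx // window
        if w < n_windows:
            cr[w] += 1
    return cr

def chatter_onset(cr, factor=3.0, n_baseline=5):
    """First window whose CR exceeds 'factor' times the stable-cutting mean."""
    baseline = cr[:n_baseline].mean()      # assume early windows are stable cutting
    hits = np.flatnonzero(cr > factor * baseline)
    return hits[0] if hits.size else None

# synthetic AE record: stable cutting, then larger bursts ("chatter")
rng = np.random.default_rng(0)
ae = np.concatenate([rng.normal(0, 1.0, 12000), rng.normal(0, 3.0, 8000)])
cr = count_rate(ae, threshold=2.0, window=1000)
print(chatter_onset(cr))                   # -> 12, the first chatter window
```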

A Study on Estimating Function Point Count of Domestic Software Development Projects (국내 소프트웨어 개발사업에 적합한 기능점수규모 예측방법에 관한 연구)

  • 박찬규;신수정;이현옥
    • Korean Management Science Review / v.20 no.2 / pp.179-196 / 2003
  • Function point analysis is the international standard method for measuring software size, one of the most important factors in determining software development cost. The function point model can be applied successfully only when a detailed specification of users' requirements is available. In the domestic public sector, however, budgeting for software projects is carried out before the requirements of the software are specified in detail. Therefore, an efficient function point estimation method is required to apply the function point model at the early stage of software development projects. The purpose of this paper is to compare various function point estimation methods and analyse their accuracy on domestic software projects. We consider four methods: the NESMA model, the ISBSG model, the simplified function point model, and the backfiring method. The methods are applied to about one hundred domestic projects and their estimation errors are compared. The results can be used as a criterion for selecting an adequate estimation model for function point counts.
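
Of the four methods compared above, backfiring is the simplest to illustrate: function points are estimated from source lines of code via language-specific gearing factors. The factors below are commonly cited illustrative values; published tables vary.

```python
# Backfiring: estimate function points from source lines of code using
# language "gearing factors" (illustrative values; published tables vary).
GEARING = {"C": 128, "COBOL": 107, "Java": 53, "SQL": 13}

def backfire(sloc, language):
    """Rough function point count: SLOC divided by the gearing factor."""
    return sloc / GEARING[language]

print(round(backfire(64000, "C")))     # ~500 FP
print(round(backfire(26500, "Java")))  # ~500 FP
```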

A Generalized Markov Chain Model for IEEE 802.11 Distributed Coordination Function

  • Zhong, Ping;Shi, Jianghong;Zhuang, Yuxiang;Chen, Huihuang;Hong, Xuemin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.2 / pp.664-682 / 2012
  • To improve the accuracy and enhance the applicability of existing models, this paper proposes a generalized Markov chain model for the IEEE 802.11 Distributed Coordination Function (DCF) under the widely adopted assumption of an ideal transmission channel. The IEEE 802.11 DCF is modeled by a two-dimensional Markov chain that takes into account unsaturated traffic, backoff freezing, retry limits, the difference between the maximum retransmission count and the maximum backoff exponent, and limited buffer size based on the M/G/1/K queueing model. We show that existing models can be treated as special cases of the proposed generalized model. Furthermore, simulation results validate the accuracy of the proposed model.
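
A Monte Carlo sketch of the saturated DCF backoff behavior such chains model, with binary exponential backoff, a contention-window cap, and a retry limit; backoff freezing and unsaturated arrivals are idealized away, and all parameters are illustrative. It estimates the conditional collision probability that the two-dimensional chain solves for analytically.

```python
import random

def dcf_collision_prob(n_stations=10, cw_min=16, m=5, retry_limit=7,
                       slots=200000, seed=0):
    """Idealized slotted simulation of saturated 802.11 DCF backoff.

    Each station draws a backoff from [0, CW-1]; on collision CW doubles
    up to 2^m * cw_min, and after 'retry_limit' failures the frame is
    dropped and CW resets. Returns the empirical conditional collision
    probability.
    """
    rng = random.Random(seed)
    cw = [cw_min] * n_stations
    retries = [0] * n_stations
    backoff = [rng.randrange(cw_min) for _ in range(n_stations)]
    attempts = collisions = 0
    for _ in range(slots):
        tx = [i for i in range(n_stations) if backoff[i] == 0]
        for i in range(n_stations):
            if backoff[i] > 0:
                backoff[i] -= 1
        if not tx:
            continue
        attempts += len(tx)
        success = len(tx) == 1
        if not success:
            collisions += len(tx)
        for i in tx:
            if success or retries[i] >= retry_limit:
                retries[i], cw[i] = 0, cw_min        # next frame, CW reset
            else:
                retries[i] += 1
                cw[i] = min(cw[i] * 2, cw_min << m)  # double CW up to the cap
            backoff[i] = rng.randrange(cw[i])
    return collisions / attempts

print(dcf_collision_prob())
```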

Defect Severity-based Ensemble Model using FCM (FCM을 적용한 결함심각도 기반 앙상블 모델)

  • Lee, Na-Young;Kwon, Ki-Tae
    • KIISE Transactions on Computing Practices / v.22 no.12 / pp.681-686 / 2016
  • Software defect prediction is an important factor in efficient project management and success. The severity of a defect usually determines the degree to which the project is affected. However, existing studies focus only on the presence or absence of defects, not on their severity. In this study, we propose an ensemble model using FCM based on defect severity. The defect severities of the NASA data set PC4 were reclassified. To select the input columns that affect defect severity, we extracted the important defect factors of the data set using Random Forest (RF). We evaluated the performance of the model by changing the parameters in 10-fold cross-validation. The evaluation results were as follows. First, defect severities were reclassified from 58, 40, and 80 to 30, 20, and 128. Second, BRANCH_COUNT was an important input column for severity in terms of accuracy and node impurity. Third, a smaller number of trees required more variables to achieve good performance.
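
A sketch of the feature-importance and 10-fold cross-validation steps described above, using scikit-learn (assumed available) on synthetic data; the column names, including BRANCH_COUNT, stand in for the PC4 metrics, and the label is a synthetic severity class for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
cols = ["BRANCH_COUNT", "LOC_TOTAL", "CYCLOMATIC_COMPLEXITY", "NUM_OPERANDS"]
X = rng.random((298, len(cols)))
# synthetic severity label driven mostly by the first column
y = (X[:, 0] + 0.2 * rng.standard_normal(298) > 0.5).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(rf, X, y, cv=10).mean())       # 10-fold accuracy
rf.fit(X, y)
for name, imp in sorted(zip(cols, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:25s} {imp:.3f}")                   # BRANCH_COUNT should rank first
```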

A Study on Optimal Software Release Time Based on a Superposition NHPP Model (중첩 NHPP 모형에 근거한 소프트웨어 최적방출시기에 관한 연구)

  • Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management / v.6 no.3 / pp.9-17 / 2010
  • The decision problem known as the optimal release policy, namely when to stop testing a software system in the development phase and transfer it to the user, is studied. The release-time model applied here exploits an infinite non-homogeneous Poisson process, a model that reflects the possibility of introducing new faults when correcting or modifying the software. For complicated systems, the failure life-cycle distribution uses a superposition of processes with different intensities. The software release policy that minimizes the total average cost of development and maintenance, under the constraint of satisfying a software reliability requirement, then becomes the optimal release policy. In a numerical example, after applying a trend test and estimating the parameters by maximum likelihood from inter-failure time data, the optimal software release time was estimated. By comparing the superposition model with the simple model, this study helps software developers determine the optimal release time using the superposition model.
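
A numerical sketch of the release-time trade-off: with a superposed NHPP mean value function m(t), here assumed to be a sum of two Goel-Okumoto components with made-up parameters, the total cost of fixing faults during testing versus in the field, plus testing-time cost, is minimized over the release time T. Cost coefficients and the life-cycle length are illustrative, not the paper's estimates.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Superposed NHPP mean value function: sum of Goel-Okumoto components.
comps = [(30.0, 0.05), (15.0, 0.20)]           # (a_i, b_i) per component

def m(t):
    return sum(a * (1 - np.exp(-b * t)) for a, b in comps)

# c1: fix cost in testing, c2: fix cost in the field (c2 > c1),
# c3: testing cost per unit time, t_lc: software life-cycle length
c1, c2, c3, t_lc = 10.0, 50.0, 0.5, 500.0

def total_cost(T):
    return c1 * m(T) + c2 * (m(t_lc) - m(T)) + c3 * T

res = minimize_scalar(total_cost, bounds=(0.0, t_lc), method="bounded")
print(res.x, total_cost(res.x))                # optimal release time, minimum cost
```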

Joint Modeling of Death Times and Number of Failures for Repairable Systems using a Shared Frailty Model (공유환경효과를 고려한 수리가능한 시스템의 수명과 고장회수의 결합모형 개발)

  • 박희창;이석훈
    • Journal of Korean Society for Quality Management / v.26 no.4 / pp.111-123 / 1998
  • We consider the problem of modeling count data where the observation period is determined by the lifetime of the system under study. We assume random effects, or a frailty model, to allow for possible association between the death times and the counts. We assume that, given the frailty, the death times follow a Weibull distribution whose hazard rate is scaled by the frailty. For the counts, given the frailty, a Poisson process is assumed with an intensity that depends on time. A gamma distribution is assumed for the frailty. Maximum likelihood estimators of the model parameters are obtained. A joint model for the time to death and the number of failures the system received is constructed, and the consequences of the model are examined.

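A simulation sketch of the shared-frailty construction above: a gamma frailty with mean one multiplies both the Weibull hazard of the death time and the Poisson failure intensity, inducing dependence between the two outcomes. All parameter values are made up; the printed correlation merely demonstrates the induced association.

```python
import numpy as np

rng = np.random.default_rng(0)
n, shape, scale, rate, frailty_var = 5000, 1.5, 10.0, 0.4, 0.5

# Gamma frailty with mean 1 and variance 'frailty_var'.
z = rng.gamma(1.0 / frailty_var, frailty_var, size=n)

# Weibull death time with cumulative hazard z * (t / scale)**shape:
# inverting H(T) = E with E ~ Exp(1) gives T = scale * (E / z)**(1/shape).
death = scale * (rng.exponential(size=n) / z) ** (1.0 / shape)

# Poisson failure count over [0, death] with intensity rate * z.
counts = rng.poisson(rate * z * death)

print(np.corrcoef(death, counts)[0, 1])  # association induced by the shared frailty
```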

A Study for Recent Development of Generalized Linear Mixed Model (일반화된 선형 혼합 모형(GENERALIZED LINEAR MIXED MODEL: GLMM)에 관한 최근의 연구 동향)

  • 이준영
    • The Korean Journal of Applied Statistics / v.13 no.2 / pp.541-562 / 2000
  • The generalized linear mixed model framework handles count-type categorical data as well as clustered or overdispersed non-Gaussian data and non-linear model data. In this study, we review its general formulation and estimation methods based on quasi-likelihood and Monte Carlo techniques. Current research areas and topics for further development are also mentioned.

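A sketch of the Monte Carlo technique mentioned above for a random-intercept Poisson GLMM: the marginal likelihood of one cluster is approximated by averaging the conditional Poisson likelihood over draws of the random effect. The data and parameter values are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def mc_marginal_loglik(y, x, beta, sigma, n_draws=2000, seed=0):
    """Monte Carlo marginal log-likelihood of one cluster in a
    random-intercept Poisson GLMM: y_j ~ Poisson(exp(beta * x_j + u)),
    u ~ N(0, sigma^2). The random effect is integrated out by averaging
    the conditional likelihood over draws of u (log-mean-exp for stability).
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma, n_draws)
    mu = np.exp(beta * x[None, :] + u[:, None])          # (draws, obs)
    loglik = (y * np.log(mu) - mu - gammaln(y + 1)).sum(axis=1)
    m = loglik.max()
    return m + np.log(np.exp(loglik - m).mean())

x = np.array([0.0, 0.5, 1.0, 1.5])
y = np.array([1, 2, 2, 4])
print(mc_marginal_loglik(y, x, beta=0.8, sigma=0.3))
```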

Developing an Accident Model for Rural Signalized Intersections Using a Random Parameter Negative Binomial Method (RPNB모형을 이용한 지방부 신호교차로 교통사고 모형개발)

  • PARK, Min Ho;LEE, Dongmin
    • Journal of Korean Society of Transportation / v.33 no.6 / pp.554-563 / 2015
  • This study developed an accident model for rural signalized intersections using a random parameter negative binomial method. The limitation of previous count models (especially Poisson/negative binomial models) is that they cannot capture variation over time or the distinct characteristics of a specific point/segment. This drawback of the traditional count models results in underestimation of the standard errors (t-value inflation) of the derived coefficients, and ultimately lowers the reliability of the whole model. To overcome this limitation, this study uses random parameters, which account for the heterogeneity of each point/segment. The analyses found that increases in traffic flow and in pedestrian facilities on minor streets had positive effects on the number of traffic accidents, while left-turn lanes and medians on major streets reduced the number of accidents. The results show that random parameter modeling is an effective method for investigating the influence of road geometry on traffic accidents. However, this study could not analyze the effects of sequential changes in driving conditions, including geometry and safety facilities.
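
As a point of comparison with the random-parameter model above, a fixed-parameter negative binomial regression is easy to sketch with statsmodels (assumed available); the covariates and data below are synthetic stand-ins for intersection attributes such as traffic volume and left-turn lanes.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
aadt = rng.uniform(2000, 20000, n)                # major-street traffic volume
left_turn = rng.integers(0, 2, n)                 # left-turn lane present (0/1)

# synthetic accident counts with NB dispersion alpha = 0.5
mu = np.exp(-6.0 + 0.8 * np.log(aadt) - 0.3 * left_turn)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))  # mean mu, var mu + 0.5*mu^2

X = sm.add_constant(np.column_stack([np.log(aadt), left_turn]))
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(fit.summary())  # coefficients should recover roughly (-6.0, 0.8, -0.3)
```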