• Title/Summary/Keyword: Computer optimization


Improving Generalization Performance of Neural Networks using Natural Pruning and Bayesian Selection (자연 프루닝과 베이시안 선택에 의한 신경회로망 일반화 성능 향상)

  • 이현진;박혜영;이일병
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.3_4
    • /
    • pp.326-338
    • /
    • 2003
  • The objective of neural network design and model selection is to construct an optimal network with good generalization performance. However, training data include noise and are limited in number, which results in a difference between the true probability distribution and the empirical one. This difference makes the learning parameters over-fit the training data and deviate from the true distribution, which is called the overfitting phenomenon. An overfitted neural network approximates the training data well but gives bad predictions on untrained new data. As the complexity of the neural network increases, this overfitting phenomenon becomes more severe. In this paper, taking a statistical viewpoint, we propose an integrative process of neural network design and model selection in order to improve generalization performance. First, using natural gradient learning with adaptive regularization, we try to obtain optimal parameters that are not overfitted to the training data, with fast convergence. By applying natural pruning to the obtained optimal parameters, we generate several candidate network models of different sizes. Finally, we select an optimal model among the candidates based on the Bayesian Information Criterion. Through computer simulations on benchmark problems, we confirm the generalization and structure-optimization performance of the proposed integrative process of learning and model selection.
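The final selection step described above can be sketched as follows, with hypothetical candidate models and sample count; `bic` follows the standard BIC definition, while the paper's natural-gradient training and pruning steps are not reproduced here:

```python
import math

def bic(log_likelihood, num_params, num_samples):
    # Bayesian Information Criterion: lower is better.
    return -2.0 * log_likelihood + num_params * math.log(num_samples)

# Hypothetical candidates produced by pruning: (name, log-likelihood, #parameters).
candidates = [
    ("full",      -120.0, 50),
    ("pruned-30", -124.0, 30),
    ("pruned-15", -140.0, 15),
]
n = 200  # number of training samples (assumed)

scores = {name: bic(ll, k, n) for name, ll, k in candidates}
best = min(scores, key=scores.get)  # model with the lowest BIC
```

Note how BIC trades fit against size: the smallest candidate wins here despite its lower likelihood, because the `k * ln(n)` penalty dominates.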

Automated Schedulability-Aware Mapping of Real-Time Object-Oriented Models to Multi-Threaded Implementations (실시간 객체 모델의 다중 스레드 구현으로의 스케줄링을 고려한 자동화된 변환)

  • Hong, Sung-Soo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.2
    • /
    • pp.174-182
    • /
    • 2002
  • Object-oriented design methods and their CASE tools are widely used in practice by many real-time software developers. However, object-oriented CASE tools require an additional step of identifying tasks from a given design model. Unfortunately, it is difficult to automate this step for two reasons: (1) there are inherent discrepancies between objects and tasks; and (2) it is hard to derive tasks while maximizing real-time schedulability, since this derivation is a non-trivial optimization problem. As a result, in practical object-oriented CASE tools, task identification is usually performed in an ad-hoc manner using hints provided by human designers. In this paper, we present a systematic, schedulability-aware approach that helps map real-time object-oriented models to multi-threaded implementations. In our approach, a task contains a group of mutually exclusive transactions that may possess different periods and deadlines. For this new task model, we provide a new schedulability analysis algorithm. We also show how the run-time system is implemented and how executable code is generated in our framework. We have performed a case study that shows the difficulty of the task derivation problem and the utility of automated synthesis of implementations, as well as the inappropriateness of single-threaded implementations.
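The paper's schedulability analysis is defined for its transaction-based task model; as a simpler illustration of the kind of test involved, here is the classical fixed-priority response-time iteration for periodic tasks (not the paper's own algorithm):

```python
import math

def response_time(tasks, i):
    # tasks: list of (C, T) = (worst-case execution time, period),
    # sorted by priority (index 0 = highest priority).
    # Classical fixed-point iteration: R = C_i + sum_{j<i} ceil(R/T_j) * C_j
    C_i = tasks[i][0]
    R = C_i
    while True:
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next == R:
            return R
        R = R_next

def schedulable(tasks, deadlines):
    # A task set is schedulable if every worst-case response time
    # meets the corresponding deadline.
    return all(response_time(tasks, i) <= d for i, d in enumerate(deadlines))

taskset = [(1, 4), (2, 6), (3, 10)]  # illustrative set, rate-monotonic order
```

For this set the lowest-priority task has a worst-case response time of 10, exactly meeting its deadline, so the set is deemed schedulable.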

Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role in Model Selection

  • Bajwa, Waheed U.;Calderbank, Robert;Jafarpour, Sina
    • Journal of Communications and Networks
    • /
    • v.12 no.4
    • /
    • pp.289-307
    • /
    • 2010
  • The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence, termed the worst-case coherence and the average coherence, among the columns of a design matrix. It utilizes these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection, and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, termed the coherence property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization, such as the lasso, fail due to rank deficiency of submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence, then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries. In particular, this part of the analysis implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals irrespective of the phases of the nonzero entries, even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
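The OST algorithm described above is simple enough to sketch directly: correlate the columns of the design matrix with the observation and keep the largest correlations. The matrix sizes, support, coefficients, and noise level below are illustrative:

```python
import numpy as np

def one_step_thresholding(X, y, k):
    # OST: correlate the columns of the design matrix with the observation
    # and keep the k columns with the largest correlation magnitudes.
    correlations = X.T @ y
    support = np.argsort(np.abs(correlations))[-k:]
    return np.sort(support)

rng = np.random.default_rng(0)
n, p, k = 256, 512, 3
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)              # unit-norm columns
beta = np.zeros(p)
true_support = np.array([5, 200, 400])
beta[true_support] = [3.0, -2.5, 2.8]       # illustrative nonzero entries
y = X @ beta + 0.01 * rng.standard_normal(n)

estimated = one_step_thresholding(X, y, k)
```

A Gaussian design of this size has small worst-case and average coherence, so with well-separated nonzero entries a single matched-filtering pass recovers the support without any convex optimization.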

Optimization of White Pan Bread Preparation via Addition of Purple Barley Flour and Olive Oil by Response Surface Methodology (자맥가루와 올리브유 첨가 식빵의 제조조건 최적화)

  • Kim, Jin Kon;Kim, Young-Ho;Oh, Jong Chul;Yu, Hyeon Hee
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.41 no.12
    • /
    • pp.1813-1822
    • /
    • 2012
  • The purpose of this study was to determine the optimal mixing amounts of purple barley flour ($X_1$) and olive oil ($X_2$) in baking white pan bread. The experiment was designed according to the central composite design of response surface methodology, which yielded 10 experimental points including 2 replicates. The more purple barley flour was added, the more weight, yellowness (b-value), hardness, gumminess, and chewiness increased, while volume, specific loaf volume, lightness (L-value), and springiness decreased. The greater the amount of olive oil added, the more hardness, cohesiveness, gumminess, and chewiness increased, while yellowness (b-value) and springiness decreased. The physical and mechanical properties were affected more by the amount of purple barley flour than by the amount of olive oil. Sensory properties, except flavor, were also more affected by the amount of purple barley flour than by the amount of olive oil.

A study on the design exploration of Optical Image Stabilization (OIS) for Smart phone (스마트폰을 위한 광학식 손떨림 보정 설계 탐색에 관한 연구)

  • Lee, Seung-Kwon;Kong, Jin-Hyeung
    • Journal of Digital Contents Society
    • /
    • v.19 no.8
    • /
    • pp.1603-1615
    • /
    • 2018
  • In order to achieve low complexity, area, and power in the design of Optical Image Stabilization (OIS) suitable for smart phones, this paper presents the following design explorations: optimization of the gyroscope sampling rate, simple and accurate gyroscope filters, a reduced operating frequency for motion compensation, optimized bit widths in the ADC and DAC, and an evaluation of noise effects due to PWM driving. In experiments on gyroscope sampling frequencies, error values are found to be unchanged at frequencies above 5 kHz. The gyroscope filter is efficiently designed by incorporating a fuzzy algorithm, yielding reasonable compensation for angle and phase errors. Further, in the PWM design, the power consumption of 2 MHz driving is shown to decrease by up to 50% with respect to linear driving, and imaging noise is reduced at driving frequencies above 2 MHz. For motion compensation, the operating frequency could be reduced to 5 kHz in the controller and 10 kHz in the driver. For the ADC and DAC, the optimized exploration experiments verify minimum bit widths of 11 bits in the ADC and 10 bits in the DAC without performance degradation.
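As a rough way to compare candidate converter bit widths like those explored above, the ideal quantization SNR of an N-bit converter (6.02N + 1.76 dB for a full-scale sine) can be computed; the paper's 11-bit/10-bit result is empirical, so this is only a back-of-the-envelope check:

```python
def ideal_quantization_snr_db(bits):
    # Ideal SNR of a full-scale sine through an N-bit quantizer:
    # SNR = 6.02*N + 1.76 dB (quantization noise only).
    return 6.02 * bits + 1.76

# Compare candidate bit widths around the values found in the paper.
snr_by_bits = {bits: ideal_quantization_snr_db(bits) for bits in (10, 11, 12)}
```

Each extra bit buys about 6 dB of ideal SNR, so the gap between 10 and 11 bits is the margin the design-exploration experiments were probing.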

Bayesian Image Restoration Using a Continuation Method (연속방법을 사용한 Bayesian 영상복원)

  • Lee, Soo-Jin
    • The Journal of Engineering Research
    • /
    • v.3 no.1
    • /
    • pp.65-73
    • /
    • 1998
  • One approach to improved image restoration methods has been the incorporation of additional source information via Gibbs priors that assume a source that is piecewise smooth. A natural Gibbs prior for expressing such constraints is an energy function defined on binary valued line processes as well as source intensities. However, the estimation of both continuous variables and binary variables is known to be a difficult problem. In this work, we consider the application of the deterministic annealing method. Unlike other methods, the deterministic annealing method offers a principled and efficient means of handling the problems associated with mixed continuous and binary variable objectives. The application of the deterministic annealing method results in a sequence of objective functions (defined only on the continuous variables) whose sequence of solutions approaches that of the original mixed variable objective function. The sequence is indexed by a control parameter (the temperature). The energy functions at high temperatures are smooth approximations of the energy functions at lower temperatures. Consequently, it is easier to minimize the energy functions at high temperatures and then track the minimum through the variation of the temperature. This is the essence of a continuation method. We show experimental results, which demonstrate the efficacy of the continuation method applied to a Bayesian restoration model.
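The continuation idea can be illustrated on a one-dimensional toy objective: minimize a smooth convex surrogate first, then track the minimizer as the objective is deformed toward the true nonconvex energy. Everything below (the functions, temperature schedule, and step sizes) is a hypothetical sketch, not the paper's Gibbs-prior energy:

```python
def f(x):
    # Nonconvex target: local minimum near x = 1, global minimum near x = -1
    # (the 0.3*x tilt breaks the symmetry).
    return (x * x - 1.0) ** 2 + 0.3 * x

def g(x):
    # Convex surrogate used at "high temperature".
    return x * x

def gradient(func, x, h=1e-6):
    return (func(x + h) - func(x - h)) / (2.0 * h)

def plain_minimize(x0=2.0, steps=1000, lr=0.01):
    # Plain gradient descent gets trapped in the local minimum near x = 1.
    x = x0
    for _ in range(steps):
        x -= lr * gradient(f, x)
    return x

def continuation_minimize(x0=2.0, steps=200, lr=0.01):
    # Lower the temperature t from 1 (smooth surrogate) to 0 (true objective),
    # minimizing f_t(x) = t*g(x) + (1 - t)*f(x) at each stage and reusing
    # the previous stage's minimizer as the starting point.
    x = x0
    for t in (1.0, 0.75, 0.5, 0.25, 0.0):
        for _ in range(steps):
            x -= lr * (t * gradient(g, x) + (1.0 - t) * gradient(f, x))
    return x
```

Started from the same point, plain descent stalls in the local basin near x = 1, while the continuation schedule tracks the minimum into the global basin near x = -1, which is the essence of the method described above.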


Optimization of White Pan Bread Preparation by Addition of Black Barley Flour and Olive Oil using Response Surface Methodology (흑맥가루와 올리브유 첨가 식빵의 제조조건 최적화)

  • Kim, Jin Kon;Kim, Young-Ho;Oh, Jong Chul;Yu, Hyeon Hee
    • Korean Journal of Food Science and Technology
    • /
    • v.45 no.2
    • /
    • pp.180-190
    • /
    • 2013
  • The purpose of this study was to determine the optimal amount of 2 ingredients, i.e., black barley flour ($X_1$), and olive oil ($X_2$), for the production of white pan bread from black barley flour. The experiment was designed according to the central composite design of response surface methodology, which showed 10 experimental points, including 2 replicates for black barley flour and olive oil. Significant differences were found in the results of the physical and mechanical properties analysis of each sample, including weight (p<0.05), volume (p<0.01), specific loaf volume (p<0.01), color L (p<0.01), color a (p<0.001), color b (p<0.05), hardness (p<0.001), springiness (p<0.01), cohesiveness (p<0.01), gumminess (p<0.001) and chewiness (p<0.05). Significant differences in the sensory measurements were observed in color (p<0.01), appearance (p<0.01), texture (p<0.05), taste (p<0.05) and overall quality (p<0.05). The optimum formulation, which was calculated using the numerical and graphical methods, was determined to be 18.00% black barley flour and 1.80% olive oil.
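A minimal sketch of the response-surface step common to both bread studies: fit a second-order model to a two-factor central composite design by least squares. The design is the standard coded CCD; the responses are hypothetical:

```python
import numpy as np

# Coded central composite design for two factors (axial distance sqrt(2)),
# with two center replicates: 10 experimental points, as in the study.
a = np.sqrt(2.0)
design = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial points
    [-a, 0], [a, 0], [0, -a], [0, a],     # axial points
    [0, 0], [0, 0],                        # center replicates
])

# Hypothetical responses (e.g., an overall-quality score).
y = np.array([5.1, 6.0, 5.5, 6.2, 5.0, 6.1, 5.6, 5.9, 6.5, 6.4])

x1, x2 = design[:, 0], design[:, 1]
# Second-order model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(p1, p2):
    return coef @ np.array([1.0, p1, p2, p1**2, p2**2, p1 * p2])
```

The fitted surface can then be searched numerically or graphically for its optimum, which is how the study arrives at recommendations like 18.00% flour and 1.80% oil.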

Protocol Optimization of Coronary CT Angiography (심혈관 CT 조영술의 프로토콜 최적화)

  • Lee, Hae-Kag;Yoo, Heung-Joon;Lee, Sun-Yeob;Goo, Eun-Hoe;Seok, Jong-Min;Han, Man-Seok;Lee, Kwang-Sung;Cho, Jae-Hwan;Kim, Bo-Hui;Park, Cheol-Soo
    • Journal of the Korean Society of Radiology
    • /
    • v.5 no.2
    • /
    • pp.51-58
    • /
    • 2011
  • This research compared and analyzed the heart rate and contrast arrival time of patients with an LVEF value of 40% or less against normal patients. For LVEF of 40% or less, the relation between heart rate and LVEF shows that the time for the contrast agent to reach 100 HU becomes longer. Therefore, in such patients, the premonitoring delay could be set about 5 seconds longer. A contrast volume of scan time ${\times}$ 4 cc + 30 cc was injected at 4 cc/sec, followed by 40 cc of saline injected at 4 cc/sec for all patients. Excluding heart rates of 80 or greater, the low-LVEF group showed a large difference in contrast enhancement. In addition, in this group the time difference was large, and it could be predicted that the contrast agent had already washed out of the left ventricle by the time it reached 100 HU and the scan started. There is thus a wide difference between patients with LVEF under 40% and normal patients at the start of the scan, so the contrast injection protocol for CCTA should be determined accordingly. In the case of low LVEF, a pitch lower than the routine pitch is recommended in order to reduce scan failures caused by the low LVEF.

RSSI-based Location Determination via Segmentation-based Linear Spline Interpolation Method (분할기반의 선형 호 보간법에 의한 RSSI기반의 위치 인식)

  • Lau, Erin-Ee-Lin;Chung, Wan-Young
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2007.10a
    • /
    • pp.473-476
    • /
    • 2007
  • Location determination of a mobile user via the RSSI approach has received ample attention from researchers lately. However, it remains a challenging issue due to the complexities of RSSI signal propagation characteristics, which are easily exacerbated by the mobility of the user. Hence, a segmentation-based linear spline interpolation method is proposed to cater for the dynamic fluctuation pattern of the radio signal in complex environments. This optimization algorithm is proposed in addition to the current radiolocation algorithm of the CC2431 (Chipcon, Norway), which runs on the IEEE 802.15.4 standard. The enhancement algorithm involves four phases. The first phase consists of a calibration model in which RSSI values at different static locations are collected and processed to obtain the mean and standard deviation values for predefined distances. An RSSI smoothing algorithm is proposed to minimize the dynamic fluctuation of the radio signal received from each reference node while the user is moving. Distances are then computed using the segmentation formula obtained in the first phase. In situations where an RSSI value falls in more than one segment, the ambiguity of distance is resolved by a probability approach: the distance probability density function (pdf) for each candidate distance is computed, and the distance with the highest pdf at a particular RSSI is the estimated distance. Finally, with the distances obtained from each reference node, an iterative trilateration algorithm is used for position estimation. Experimental results position the proposed algorithm as a viable alternative for location tracking.
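The distance-estimation and trilateration phases can be sketched as follows, assuming a log-distance path-loss model with illustrative parameters (the paper instead uses its calibrated segmentation formula) and a Gauss-Newton solver for the iterative trilateration:

```python
import numpy as np

def rssi_to_distance(rssi, rssi_d0=-40.0, path_loss_exp=2.5, d0=1.0):
    # Log-distance path-loss model (parameters are illustrative, not the
    # paper's calibrated segmentation formula):
    #   rssi = rssi_d0 - 10 * n * log10(d / d0)
    return d0 * 10.0 ** ((rssi_d0 - rssi) / (10.0 * path_loss_exp))

def trilaterate(anchors, distances, iterations=50):
    # Iterative (Gauss-Newton) trilateration from ranges to known anchors.
    pos = anchors.mean(axis=0)                  # initial guess: centroid
    for _ in range(iterations):
        diffs = pos - anchors                   # (k, 2)
        ranges = np.linalg.norm(diffs, axis=1)  # current range to each anchor
        J = diffs / ranges[:, None]             # Jacobian of the ranges
        step, *_ = np.linalg.lstsq(J, ranges - distances, rcond=None)
        pos = pos - step
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
distances = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free ranges
estimate = trilaterate(anchors, distances)
```

With noise-free ranges the iteration converges to the true position; with real RSSI-derived ranges, the smoothing and pdf-based disambiguation described above keep the range errors small enough for the solver to remain stable.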


Human Motion Tracking by Combining View-based and Model-based Methods for Monocular Video Sequences (하나의 비디오 입력을 위한 모습 기반법과 모델 사용법을 혼용한 사람 동작 추적법)

  • Park, Ji-Hun;Park, Sang-Ho;Aggarwal, J.K.
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.657-664
    • /
    • 2003
  • Reliable tracking of moving humans is essential to motion estimation, video surveillance, and human-computer interfaces. This paper presents a new approach to human motion tracking that combines appearance-based and model-based techniques. Monocular color video is processed at both the pixel level and the object level. At the pixel level, a Gaussian mixture model is used to train and classify individual pixel colors. At the object level, a 3D human body model projected onto a 2D image plane is used to fit the image data. Our method does not use inverse kinematics due to the singularity problem. While many others use stochastic sampling for model-based motion tracking, our method depends purely on nonlinear programming: we convert the human motion tracking problem into a nonlinear programming problem. A cost function for parameter optimization is used to estimate the degree of overlap between the foreground input image silhouette and the projected 3D model body silhouette. The overlap is computed using computational geometry by converting a set of pixels from the image domain to a polygon in the real projection-plane domain. Our method is used to recognize various human motions, and motion tracking results from video sequences are very encouraging.
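The silhouette-overlap cost can be illustrated with a rasterized stand-in: the paper computes overlap between polygons in the projection plane, but 1 − IoU of binary masks conveys the same idea on a toy example:

```python
import numpy as np

def overlap_cost(foreground_mask, model_mask):
    # 1 - IoU of the two silhouettes: the cost shrinks as the projected
    # model silhouette covers the foreground silhouette.
    intersection = np.logical_and(foreground_mask, model_mask).sum()
    union = np.logical_or(foreground_mask, model_mask).sum()
    return 1.0 - intersection / union if union else 1.0

# Toy silhouettes: two overlapping rectangles on a small grid.
fg = np.zeros((10, 10), dtype=bool)
mm = np.zeros((10, 10), dtype=bool)
fg[2:8, 2:8] = True   # foreground silhouette, 36 pixels
mm[4:9, 4:9] = True   # projected model silhouette, 25 pixels

cost = overlap_cost(fg, mm)  # intersection 16, union 45
```

In a tracker of this kind, a nonlinear programming solver would adjust the body-model pose parameters to drive this cost toward zero frame by frame.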