• Title/Summary/Keyword: Optimal Process Mean

Determination of the Optimal Aggregation Interval Size of Individual Vehicle Travel Times Collected by DSRC in Interrupted Traffic Flow Section of National Highway (국도 단속류 구간에서 DSRC를 활용하여 수집한 개별차량 통행시간의 최적 수집 간격 결정 연구)

  • PARK, Hyunsuk;KIM, Youngchan
    • Journal of Korean Society of Transportation / v.35 no.1 / pp.63-78 / 2017
  • The purpose of this study is to determine the optimal aggregation interval that maximizes reliability when estimating a representative value of the individual vehicle travel times collected by DSRC equipment on interrupted-flow sections of national highways. Using bimodal asymmetric distribution data, the most representative distribution of individual vehicle travel times on interrupted-flow sections, we estimate the MSE (mean square error) as the aggregation interval varies and select the interval that minimizes it. The MSE estimate uses the maximum-estimation-error formula of the t-distribution, which remains applicable to asymmetric distributions. Only aggregation intervals of 3 minutes or longer were analyzed: intervals of 1-2 minutes routinely lose data during signal stops on interrupted-flow sections, and intervals that produce such gaps introduce additional error in the missing-data correction step, so they were excluded. The resulting minimum-MSE aggregation interval was 3~5 minutes. Considering both the efficiency of system operation and the reliability of the computed travel times, it is effective to operate with a basic aggregation interval of 5 minutes under normal conditions and to shorten it to 3 minutes under congestion.
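
As a rough illustration of this selection logic, the sketch below scores each candidate aggregation interval by the squared t-distribution maximum estimation error of every bin mean, averaged over bins. It is a minimal proxy under stated assumptions, not the paper's estimator; the synthetic data, names, and parameters are hypothetical.

```python
import numpy as np
from scipy import stats

def mse_for_interval(collect_min, travel_s, interval, alpha=0.05):
    """Proxy MSE for one aggregation interval: average, over all bins,
    of the squared t-distribution maximum estimation error of the bin mean."""
    edges = np.arange(0, collect_min.max() + interval, interval)
    errs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        bin_tt = travel_s[(collect_min >= lo) & (collect_min < hi)]
        n = len(bin_tt)
        if n < 2:              # bins emptied by signal stops are excluded
            continue
        t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
        errs.append((t_crit * bin_tt.std(ddof=1) / np.sqrt(n)) ** 2)
    return float(np.mean(errs)) if errs else float("inf")

# hypothetical DSRC records: collection time (min) and a bimodal asymmetric
# mix of travel times (s), loosely imitating an interrupted-flow section
rng = np.random.default_rng(0)
t = rng.uniform(0, 120, 2000)
tt = np.where(rng.random(2000) < 0.6,
              rng.normal(90, 10, 2000), rng.normal(150, 25, 2000))

for T in (3, 4, 5, 10, 15):
    print(f"{T:2d} min -> proxy MSE {mse_for_interval(t, tt, T):.2f}")
```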

An Optimal Process Design Using a Robust Desirability Function (RDF) Model to Improve a Process/Product Quality on a Pharmaceutical Manufacturing Process (제약공정에서 공정 및 제품의 품질향상을 위해 강건 호감도 함수 모형을 이용한 최적공정설계)

  • Park, Kyung-Jin;Shin, Sang-Mun;Jeong, Hea-Jin
    • Journal of Korean Society of Industrial and Systems Engineering / v.33 no.1 / pp.1-9 / 2010
  • Quality design methodologies have received consistent attention from researchers and practitioners for more than twenty years. In particular, quality design for drug products must be considered carefully because of the hazards involved in the pharmaceutical industry. Conventional pharmaceutical formulation design problems with mixture experiments have typically been studied under the assumption of an unconstrained experimental region with a single quality characteristic. However, real-world pharmaceutical settings impose many physical limitations, and we often face multiple quality characteristics over constrained experimental regions. To address these issues, this paper proposes a robust desirability function (RDF) model that combines a desirability function (DF) with the mean square error (MSE) to consider multiple quality characteristics simultaneously. The paper then presents L-pseudocomponents and U-pseudocomponents to handle the physical constraints. Finally, a numerical example shows that the proposed RDF can be applied efficiently to a pharmaceutical process design.
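
A minimal sketch of folding MSE into a desirability score follows, assuming a Derringer-Suich-style smaller-is-better desirability applied to each characteristic's MSE = (mean - target)^2 + variance; the paper's actual RDF formulation, names, and numbers may differ.

```python
import numpy as np

def d_smaller_is_better(x, lo, hi, r=1.0):
    """Derringer-Suich smaller-is-better desirability: 1 at lo, 0 at hi."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return ((hi - x) / (hi - lo)) ** r

def robust_desirability(means, variances, targets, mse_bounds):
    """Illustrative RDF: score each quality characteristic by the desirability
    of its MSE = (mean - target)^2 + variance, combine by geometric mean."""
    ds = [d_smaller_is_better((m - t) ** 2 + v, lo, hi)
          for m, v, t, (lo, hi) in zip(means, variances, targets, mse_bounds)]
    return float(np.prod(ds) ** (1.0 / len(ds)))

# two hypothetical quality characteristics of a formulation
D = robust_desirability(means=[4.9, 101.0], variances=[0.04, 2.5],
                        targets=[5.0, 100.0],
                        mse_bounds=[(0.0, 1.0), (0.0, 25.0)])
print(f"overall robust desirability: {D:.3f}")
```

The geometric mean is the usual choice here because it drives the overall score to zero whenever any single characteristic is unacceptable.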

The Comparative Study of Software Optimal Release Time Based on Log-Logistic Distribution (Log-Logistic 분포 모형에 근거한 소프트웨어 최적방출시기에 관한 비교연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information / v.13 no.7 / pp.1-9 / 2008
  • This paper studies the decision problem of an optimal release policy: when to stop testing a software system in the development phase and transfer it to the user. Because correcting or modifying the software may introduce new faults, infinite-failure non-homogeneous Poisson process models are presented, and an optimal release policy is proposed for a life distribution based on the log-logistic distribution, which can capture the increasing/decreasing nature of the failure-occurrence rate per fault. The paper discusses optimal software release policies that minimize the total average cost of development and maintenance under the constraint of satisfying a software reliability requirement. In a numerical example, a trend test is applied to inter-failure time data, the parameters are estimated by maximum likelihood, and the optimal software release time is estimated.
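
The cost trade-off behind such release policies can be sketched as below. This assumes a simple bounded mean value function built from the log-logistic CDF and invented cost coefficients, purely to show the shape of the optimization; the paper's infinite-failure NHPP model and cost structure differ in detail.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# hypothetical parameters; the paper's actual model and costs differ
a, lam, kappa = 100.0, 0.05, 1.5   # expected faults, rate, shape
c1, c2, c3 = 1.0, 5.0, 0.5         # fix-in-test, fix-in-field, test cost/hour
T_life = 2000.0                    # operational life horizon (hours)

def m(t):
    """Illustrative mean value function using the log-logistic CDF:
    m(t) = a * (lam*t)^kappa / (1 + (lam*t)^kappa)."""
    x = (lam * t) ** kappa
    return a * x / (1.0 + x)

def total_cost(T):
    """Cost of fixing faults found in test, fixing faults surfacing in the
    field after release, plus testing time itself."""
    return c1 * m(T) + c2 * (m(T_life) - m(T)) + c3 * T

res = minimize_scalar(total_cost, bounds=(1.0, T_life), method="bounded")
print(f"optimal release time ~ {res.x:.0f} h, total cost {res.fun:.1f}")
```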

A Study for Co-channel Interference Cancelation Algorithm with Channel Estimation for WBAN System Application (WBAN 환경에서 채널 추정 기반의 공용 채널 간섭 제거 기술)

  • Choi, Won-Seok;Kim, Jeong-Gon
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.6C / pp.476-482 / 2012
  • In this paper, we analyze and compare several co-channel interference mitigation algorithms for WBAN applications in the 2.4 GHz ISM band. ML (maximum likelihood), OC (optimal combining), and MMSE (minimum mean square error) are considered as candidate interference cancellation techniques, weighing performance against implementation complexity. Based on the channel model of the IEEE 802.15.6 standard, simulation results show that ML and OC attain lower BER than MMSE when perfect channel estimation is assumed. However, ML and OC additionally require estimating both the desired user's and the other users' channels; because WBAN calls for simple, small-sized equipment, implementation complexity and sensitivity to channel estimation error must be considered alongside BER. Moreover, with a realistic (imperfect) channel estimation process, the BER gap between ML, OC, and MMSE narrows considerably. The trade-off between BER performance and implementation complexity should therefore be weighed carefully when selecting the co-channel interference cancellation scheme for a WBAN system.
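
For reference, here is a minimal sketch of the MMSE combining step for one desired user in the presence of a co-channel interferer. The antenna count, channel model, and constellation are hypothetical and far simpler than the IEEE 802.15.6 setting.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rx, n_users, snr_db = 2, 2, 15        # hypothetical 2-antenna receiver
sigma2 = 10 ** (-snr_db / 10)           # noise power

# Rayleigh channel matrix: column 0 = desired user, column 1 = interferer
H = (rng.normal(size=(n_rx, n_users))
     + 1j * rng.normal(size=(n_rx, n_users))) / np.sqrt(2)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_users) / np.sqrt(2)
y = H @ s + np.sqrt(sigma2 / 2) * (rng.normal(size=n_rx)
                                   + 1j * rng.normal(size=n_rx))

# MMSE combining weight for user 0: w = (H H^H + sigma^2 I)^-1 h_0
w = np.linalg.solve(H @ H.conj().T + sigma2 * np.eye(n_rx), H[:, 0])
s0_hat = w.conj() @ y
print(f"sent {s[0]:.2f}, MMSE estimate {s0_hat:.2f}")
```

Note that the MMSE weight needs only the desired user's channel plus the received covariance, which is one reason it is cheaper to implement than ML or OC.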

Optimization of Energy Modulation Filter for Dual Energy CBCT Using Geant4 Monte-Carlo Simulation

  • Ju, Eun Bin;Ahn, So Hyun;Choi, Sang Gyu;Lee, Rena
    • Progress in Medical Physics / v.27 no.3 / pp.125-130 / 2016
  • Dual-energy computed tomography (DECT) is used to classify two materials and quantify the mass density of each material in the human body. An energy-modulation-filter-based DECT can acquire two images, generated by the low- and high-energy photon spectra, in one scan with a single tube and detector. For such a system, the filter material and thickness must be optimized to generate the two photon spectra. In this study, the Geant4 Monte Carlo simulation toolkit was used to optimize the properties of the energy modulation filter. The candidate filter materials were copper (Cu, 8.96 g/cm³), niobium (Nb, 8.57 g/cm³), tin (Sn, 7.31 g/cm³), gold (Au, 19.32 g/cm³), and lead (Pb, 11.34 g/cm³), with thicknesses varied from 0.1 mm to 1.0 mm. Geant4 simulation was used to evaluate the overlap between the low- and high-energy spectra, as well as the photon flux and the mean energy of the spectrum transmitted through each filter. For an 80 kVp primary spectrum, the optimal modulation filter was 0.1 mm of lead, which raises the mean energy of the transmitted spectrum to that of a 140 kVp spectrum. Because the 0.1 mm lead filter attenuates 77.1% of the beam, a dual-energy CBCT based on it requires the tube current to be increased to 4.37 times the original value.
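
The role of the modulation filter can be imitated analytically, without Geant4, by filtering a toy spectrum through the Beer-Lambert law. The spectrum shape and the E^-3 attenuation model below are crude assumptions (no absorption edges), so the numbers are illustrative only.

```python
import numpy as np

# hypothetical 80 kVp bremsstrahlung-like spectrum (keV bins, relative flux)
E = np.arange(20, 81, 1.0)              # photon energy grid, keV
phi = np.maximum(0.0, (80 - E) * E)     # crude Kramers-shaped spectrum
phi /= phi.sum()

# hypothetical mass attenuation for a Pb-like filter, mu/rho ~ E^-3
mu_rho = 60.0 * (30.0 / E) ** 3         # cm^2/g, illustrative only
rho = 11.34                             # g/cm^3, lead

for t_mm in (0.1, 0.3, 0.5, 1.0):
    trans = np.exp(-mu_rho * rho * t_mm / 10.0)  # Beer-Lambert, cm thickness
    phi_f = phi * trans
    mean_E = (phi_f * E).sum() / phi_f.sum()
    print(f"Pb {t_mm:.1f} mm: mean energy {mean_E:.1f} keV, "
          f"flux kept {phi_f.sum() / phi.sum():.1%}")
```

The hardening/attenuation trade-off visible in the printout is exactly what drives the reported 4.37x tube-current compensation.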

Monte Carlo Simulation based Optimal Aiming Point Computation Against Multiple Soft Targets on Ground (몬테칼로 시뮬레이션 기반의 다수 지상 연성표적에 대한 최적 조준점 산출)

  • Kim, Jong-Hwan;Ahn, Nam-Su
    • Journal of the Korea Society for Simulation / v.29 no.1 / pp.47-55 / 2020
  • This paper presents real-time autonomous computation of the number of shots and the aiming points against multiple soft targets on the ground, by applying unsupervised learning (k-means clustering) and Monte Carlo simulation. For this computation, a 100 × 200 m virtual battlefield is created in which an augmented enemy infantry platoon attacks, defends, and scatters, and a virtual weapon with a lethal range of 15 m is modeled. To determine the damage state of each enemy unit (no damage, light wound, heavy wound, or death), Monte Carlo simulation applies the Carlton damage function to model the damage effect on soft targets. To achieve the damage effectiveness intended by the commander, the optimal number of shots and the aiming-point locations are calculated in under 0.4 seconds by combining k-means clustering with repeated Monte Carlo simulation. This study should help in developing systems that shorten the decision time of the 'detection-decision-shoot' process in battalion-scale combat units operating the Dronebot combat system.
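
A compact sketch of the pipeline: k-means centroids serve as candidate aiming points, and Monte Carlo sampling of impact dispersion scores them with a Carlton-type circular damage function. The damage-function form, the dispersion, and all parameters are assumptions, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(2)
R_L = 15.0                               # lethal radius, m (from the abstract)

# hypothetical scattered platoon positions on a 100 x 200 m field
targets = np.column_stack([rng.uniform(0, 100, 30), rng.uniform(0, 200, 30)])

def kmeans(X, k, iters=50):
    """Plain k-means; cluster centers double as candidate aiming points."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return C

def carlton_pk(r):
    """Carlton-type circular damage function (illustrative form)."""
    return np.exp(-np.pi * r ** 2 / (4 * R_L ** 2))

def expected_casualties(aims, n_mc=2000, cep=5.0):
    """Monte Carlo: sample impact points around each aim with dispersion cep,
    then draw each target's damage outcome from the damage function."""
    kills = 0.0
    for _ in range(n_mc):
        hits = aims + rng.normal(0, cep, aims.shape)
        r = np.linalg.norm(targets[:, None] - hits[None], axis=-1).min(1)
        kills += (rng.random(len(targets)) < carlton_pk(r)).sum()
    return kills / n_mc

for k in (2, 3, 4):                      # number of shots = number of clusters
    aims = kmeans(targets, k)
    print(f"{k} shots: expected casualties {expected_casualties(aims):.1f}/30")
```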

Development of curing facility to improve environment for burley curing (I. Changes in microclimate during air-curing) (버어리종 잎담배 건조 환경 개선을 위한 건조실 개발 (I. 건조기간중의 미기상 변화))

  • Cha, Kwang-Ho;Jang, Soo-Won;Yang, Jin-Chul;Oh, Kyoung-Hwan;Shin, Seung-Ku;Jo, Chun-Joon
    • Journal of the Korean Society of Tobacco Science / v.29 no.2 / pp.66-73 / 2007
  • This study investigated changes in the curing microclimate (temperature and relative humidity) during the curing of burley tobacco leaves. The developed facility, a ridge-opening type, was designed so that the central top roof can be opened. The air-cured variety (N. tabacum cv. KB111) was grown normally at the Eumseong tobacco experiment station in 2007. Over the entire curing period, the mean daily temperature in the ridge-opening-type facility was 3°C lower than in the conventional facility, whereas the mean daily relative humidity was 12.6% RH lower in the conventional facility. The frequency of optimal daytime air temperature was 37.5% higher in the ridge-opening-type facility than in the conventional one, while the frequency of optimal relative humidity was 8.2% lower. In the ridge-opening-type facility, excessive leaf drying was low while the price per kilogram was high. These results suggest that the newly developed facility can improve the microclimate inside the curing facility for burley curing.

Post-processing of vector quantized images using the projection onto quantization constraint set (양자화 제약 집합에 투영을 이용한 벡터 양자화된 영상의 후처리)

  • 김동식;박섭형;이종석
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.4 / pp.662-674 / 1997
  • To post-process vector-quantized images using the theory of projections onto convex sets or constrained minimization techniques, both the projector onto the QCS (quantization constraint set) and the filter that smooths block boundaries must be investigated theoretically. The basic idea behind the projection onto the QCS is to prevent the processed data from diverging from the original quantization region, limiting the blurring introduced by the filtering operation. However, since the Voronoi regions of a vector quantizer are arbitrarily shaped unless the codebook is structured, implementing the projection onto the QCS is very complicated. This paper mathematically analyzes the projection onto the QCS from the viewpoint of minimizing the mean square error. The analysis reveals that projecting onto a suitable subset of the QCS yields lower distortion than projecting onto the full QCS. Searching for the optimal constraint set is not easy, and its projector is complicated to operate, since the shape of the optimal set depends on the statistical relationship between the filtered and original images. We therefore propose a hyper-cube as a constraint set that enables a simple projection, and show by theory and experiment that a proper filtering technique followed by projection onto the hyper-cube reduces the quantization distortion.
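
A minimal sketch of the proposed simplification, assuming the hyper-cube is axis-aligned and centred on each block's codevector so that the projection reduces to a componentwise clip; the codebook, dimensions, and widths are hypothetical.

```python
import numpy as np

def project_to_hypercube(filtered, codevectors, labels, half_width):
    """Project each filtered block vector onto a hyper-cube of side
    2*half_width centred on its original codevector: a cheap stand-in
    for the exact (Voronoi-shaped) QCS projection."""
    centers = codevectors[labels]            # original quantizer output
    return np.clip(filtered, centers - half_width, centers + half_width)

# hypothetical 4-D block vectors, a 2-codeword codebook, "filtered" blocks
rng = np.random.default_rng(3)
codebook = np.array([[0.2, 0.2, 0.2, 0.2], [0.8, 0.8, 0.8, 0.8]])
labels = rng.integers(0, 2, size=10)         # each block's codeword index
filtered = codebook[labels] + rng.normal(0, 0.15, (10, 4))  # smoothed blocks

post = project_to_hypercube(filtered, codebook, labels, half_width=0.1)
print(np.abs(post - codebook[labels]).max())  # never exceeds 0.1
```

The clip is why the hyper-cube constraint is attractive: it costs two comparisons per component, whereas projecting onto an arbitrary Voronoi region requires a search.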

A new rock brittleness index on the basis of punch penetration test data

  • Ghadernejad, Saleh;Nejati, Hamid Reza;Yagiz, Saffet
    • Geomechanics and Engineering / v.21 no.4 / pp.391-399 / 2020
  • Brittleness is one of the most important rock properties, with a major impact not only on the failure process of intact rock but also on the response of a rock mass to tunneling and mining projects. Because there is no universally accepted definition of rock brittleness, a wide range of direct and indirect measurement methods has been developed. Direct measurement requires special equipment that is costly and unavailable in most rock mechanics laboratories. Accordingly, this study develops a new strength-based index for predicting rock brittleness. To this end, an algorithm implemented in the Matlab environment searches for the optimal index over an open-access dataset comprising punch penetration test (PPT) results together with uniaxial compressive and Brazilian tensile strengths. The proposed index was validated with the coefficient of determination (R²), the root mean square error (RMSE), and the variance accounted for (VAF). Among the brittleness indices compared, the suggested equation is the most accurate, with R², RMSE, and VAF of 0.912, 3.47, and 89.8%, respectively. Using the proposed index, rock brittleness can therefore be predicted reliably with a high level of accuracy.
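
The three validation metrics are standard and can be computed as follows; the data below are made up, only the metric definitions matter.

```python
import numpy as np

def r2_rmse_vaf(y_true, y_pred):
    """R^2, RMSE, and VAF between measured and predicted values."""
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot                      # coefficient of determination
    rmse = np.sqrt(np.mean(resid ** 2))
    vaf = 100 * (1 - resid.var() / y_true.var())  # variance accounted for, %
    return r2, rmse, vaf

# hypothetical measured vs. predicted brittleness values
y = np.array([12.1, 15.4, 9.8, 20.3, 17.6])
yhat = np.array([11.5, 16.0, 10.4, 19.1, 18.2])
print("R2 %.3f  RMSE %.2f  VAF %.1f%%" % r2_rmse_vaf(y, yhat))
```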

Sea Cucumber (Stichopus japonicus) Grading System Based on Morphological Features during Rehydration Process (수화 시의 형태학적 특징에 따른 건해삼의 등급 분류 시스템 개발)

  • Lee, Choong Uk;Yoon, Won Byong
    • Journal of the Korean Society of Food Science and Nutrition / v.46 no.3 / pp.374-380 / 2017
  • Image analysis and k-means clustering were used to develop a grading system for dried sea cucumber (SC) based on rehydration rate. SC images were taken in a box under controlled lighting. The region of interest was extracted to depict the shape of the SC as a 2D outline, and those 2D shapes were rendered to build a 3D model. The image analysis provided the morphological features of the SC, including length, width, surface area, and volume, which served as the weighted inputs to the k-means clustering. The clustering classified the SC samples into three grades. Each SC sample was rehydrated at 30°C for 40 h, and the flux of each grade was analyzed during rehydration. Our study demonstrates that the mass transfer rate of SC increases with surface area and that the grade of SC can be classified by rehydration rate. This suggests that an optimal rehydration process for SC can be achieved by applying a suitable grading system.
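
A sketch of the grading step, assuming grades are assigned by k-means over whitened morphological features; the three-grade setup follows the abstract, but all feature values are invented.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, whiten

# hypothetical morphological features per dried sea cucumber:
# [length mm, width mm, surface area mm^2, volume mm^3]
rng = np.random.default_rng(4)
small = rng.normal([60, 15, 2800, 9000], [5, 2, 250, 900], (10, 4))
mid = rng.normal([80, 20, 5000, 20000], [5, 2, 400, 1500], (10, 4))
large = rng.normal([100, 25, 7800, 39000], [5, 2, 500, 2500], (10, 4))
X = np.vstack([small, mid, large])

# whiten so no single feature (e.g. volume) dominates the distance metric
centroids, grade = kmeans2(whiten(X), 3, minit="++", seed=4)
for g in range(3):
    print(f"grade {g}: {np.sum(grade == g)} samples, "
          f"mean surface area {X[grade == g, 2].mean():.0f} mm^2")
```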