• Title/Summary/Keyword: Error sum


Thermodynamics-Based Weight Encoding Methods for Improving Reliability of Biomolecular Perceptrons (생체분자 퍼셉트론의 신뢰성 향상을 위한 열역학 기반 가중치 코딩 방법)

  • Lim, Hee-Woong;Yoo, Suk-I.;Zhang, Byoung-Tak
    • Journal of KIISE: Software and Applications / v.34 no.12 / pp.1056-1064 / 2007
  • Biomolecular computing is a new computing paradigm that uses biomolecules such as DNA for information representation and processing. The huge number of molecules in a small volume and their innate massive parallelism inspired a novel computation method, and various computation models and molecular algorithms have been developed for problem solving. Moreover, the use of biomolecules for information processing suggests the applicability of DNA computing to biological problems; it has potential as an analysis tool for biochemical information such as gene expression patterns. In this context, a DNA computing-based model of a biomolecular perceptron was proposed previously, together with the result of its experimental implementation. The weight encoding and weighted-sum operation, which are the main components of a biomolecular perceptron, are based on competitive hybridization reactions between the input molecules and weight-encoding probe molecules. However, thermodynamic symmetry in the competitive hybridizations is assumed, so the weight representation can be in error depending on the probe species in use. Here we suggest a generalized model of hybridization reactions that accounts for the asymmetric thermodynamics of competitive hybridization, and we present a weight encoding method for the reliable implementation of a biomolecular perceptron based on this model. We compare the accuracy of our weight encoding method with that of the previous one via computer simulations, and we present the condition on probe composition that satisfies the error limit.
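The computation that the probe mixtures realize chemically is the standard perceptron weighted sum with a threshold. A minimal sketch in ordinary code, with illustrative values (the paper's contribution is the chemical weight encoding, not the perceptron model itself):

```python
def weighted_sum(inputs, weights):
    """Weighted sum, as realized chemically by competitive hybridization
    between input molecules and weight-encoding probe molecules."""
    return sum(x * w for x, w in zip(inputs, weights))

def perceptron(inputs, weights, threshold=0.0):
    """Fire (1) if the weighted sum exceeds the threshold, else 0."""
    return 1 if weighted_sum(inputs, weights) > threshold else 0

# Hypothetical weights, standing in for probe mixing ratios
inputs = [1, 0, 1]
weights = [0.6, -0.4, 0.3]
out = perceptron(inputs, weights, threshold=0.5)
```

Weight-encoding errors of the kind the abstract analyzes would show up here as perturbations of `weights`, which can flip the thresholded output near the decision boundary.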

A study on the scheduling of multiple products production through a single facility (단일시설에 의한 다품종소량생산의 생산계획에 관한 연구)

  • Kwak, Soo-Il;Lee, Kwang-Soo;Won, Young-Jong
    • Journal of the Korean Operations Research and Management Science Society / v.1 no.1 / pp.151-170 / 1976
  • There are many production processes which intermittently produce several different kinds of products for stock through one set of physical facilities. In this case, an important question is what size of production run should be produced once we set up for a product, in order to minimize the total cost, that is, the sum of the set-up, carrying, and stock-out costs. This problem is called scheduling of multiple products through a single facility in the production management field. Despite the very common occurrence of this type of production process, no one has yet devised a method for determining the optimal production schedule. The purpose of this study is to develop quantitative analytical models which can be used practically and give us rational production schedules. The study shows improved models with application to a can-manufacturing plant. In this thesis the economic production quantity (EPQ) model was used as a basic model to develop quantitative analytical models for this scheduling problem, and two cases, one with stock-out cost and the other without, were taken into consideration. The first analytical model was developed for the scheduling of products through a single facility. In this model we first calculate No, the optimal number of production runs per year minimizing the total annual cost, and then No$_{i}$, the corresponding optimal number for the ith item. If No$_{i}$ is significantly different from No, the schedule can be adjusted by trial and error to fit the product into the basic No schedule either more or less frequently, as dictated by No$_{i}$. But such a trial-and-error schedule is inefficient. The second analytical model was developed by reinterpreting the calculation process of the economic production quantity model.
In this model we obtained two relationships: one between the optimal number of set-ups for the ith item and the optimal total number of set-ups, and the other between the optimal average inventory investment for the ith item and the optimal total average inventory investment. From these relationships we can determine how much average inventory investment per year would be required if a rational policy based on m No set-ups per year for m products were followed and, alternatively, how many set-ups per year would be required by a rational policy with an established total average inventory investment. We also found that the relationship between the number of set-ups and the average inventory investment takes the form of a hyperbola. There is, however, no reason to say that the first analytical model is superior to the second. The first model is useful for a basic production schedule, while the second model is efficient for obtaining an improved production schedule, in the sense of reducing the total cost. Another merit of the second model is that, unlike the first model, where we have to know all the inventory costs for each product, we can obtain an improved production schedule with unknown inventory costs. The application of these quantitative analytical models to a Pohang can-manufacturing plant illustrates this point.
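The trade-off behind No can be sketched with the textbook common-cycle (rotation) EPQ model: annual set-up cost grows linearly in the number of runs per year, annual carrying cost shrinks inversely, and the minimizer is a square root. This is a standard formulation, not necessarily the paper's exact model, and all field names and sample data below are illustrative:

```python
from math import sqrt

def optimal_runs_per_year(products):
    """Common-cycle EPQ: each product is a dict with annual demand D,
    production rate P, set-up cost A, and annual unit holding cost h.
    Returns No, the number of runs per year minimizing total cost."""
    setup = sum(p["A"] for p in products)
    carry = sum(p["h"] * p["D"] * (1 - p["D"] / p["P"]) / 2 for p in products)
    return sqrt(carry / setup)

def total_cost(n, products):
    """Annual set-up cost (rises with n) plus carrying cost (falls with n)."""
    setup = sum(p["A"] for p in products)
    carry = sum(p["h"] * p["D"] * (1 - p["D"] / p["P"]) / 2 for p in products)
    return n * setup + carry / n

# Illustrative two-product instance
products = [
    {"D": 1000, "P": 5000, "A": 50, "h": 2},
    {"D": 500, "P": 4000, "A": 30, "h": 1},
]
n_star = optimal_runs_per_year(products)
```

The hyperbolic set-ups-versus-inventory relationship the abstract mentions is visible in `total_cost`: the carrying term is proportional to 1/n while the set-up term is proportional to n.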


A Feasibility study on the Simplified Two Source Model for Relative Electron Output Factor of Irregular Block Shape (단순화 이선원 모델을 이용한 전자선 선량율 계산 알고리듬에 관한 예비적 연구)

  • 고영은;이병용;조병철;안승도;김종훈;이상욱;최은경
    • Progress in Medical Physics / v.13 no.1 / pp.21-26 / 2002
  • A practical algorithm that calculates the relative output factor (ROF) for irregularly shaped electron fields has been developed, and its accuracy has been evaluated. The algorithm adopts a two-source model, which assumes that the electron dose can be expressed as the sum of a primary source component and a component scattered from the shielding block. The original two-source model was modified to make the algorithm simpler and to reduce the number of parameters needed in the calculation, while keeping the calculation error within the clinically tolerable range. The primary source is assumed to have a Gaussian distribution, while the scattered component follows the inverse square law; depth and angular dependencies of the primary and scattered components are ignored. The ROF can then be calculated with three parameters: the effective source distance, the variance of the primary source, and the scattering power of the block. The coefficients are obtained from square-shaped block measurements, and the algorithm is validated on the rectangular or irregularly shaped fields used in the clinic. The results showed less than 1.0% difference between calculation and measurement for most cases, and no case exceeded 2.1%. By improving the algorithm for the aperture region, which shows the largest error, the algorithm could be used practically in the clinic, since the parameters can be acquired with a minimum of measurements (5-6 measurements per cone) and the algorithm generates accurate results within the clinically acceptable range.
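The primary term of such a two-source model (a Gaussian source viewed through a rectangular aperture) has a closed form via the error function. A minimal sketch under that reading, treating the Gaussian width as a standard deviation and lumping block scatter into a single additive constant; parameter names and values are illustrative, not the paper's fitted coefficients:

```python
from math import erf, sqrt

def primary_fraction(half_x, half_y, sigma):
    """Fraction of a separable 2D Gaussian primary source transmitted
    through a rectangular aperture of half-widths (half_x, half_y)."""
    gx = erf(half_x / (sqrt(2) * sigma))
    gy = erf(half_y / (sqrt(2) * sigma))
    return gx * gy

def relative_output_factor(half_x, half_y, sigma, scatter_k, ref_half=5.0):
    """ROF = (primary + block scatter) for the field, normalized to an
    open reference field (default 10x10 cm, i.e. half-width 5 cm).
    scatter_k stands in for the block scattering-power term."""
    dose = primary_fraction(half_x, half_y, sigma) + scatter_k
    ref = primary_fraction(ref_half, ref_half, sigma) + scatter_k
    return dose / ref
```

By construction the reference field has ROF = 1, and smaller apertures cut off more of the Gaussian primary, giving ROF < 1, which matches the qualitative behavior the model is fitted to.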


An Effect for Sequential Information Processing by the Anxiety Level and Temporary Affect Induction (불안수준 및 일시적 유발정서가 서열정보 어휘처리에 미치는 효과)

  • Kim, Choong-Myung
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.4 / pp.224-231 / 2019
  • The current study was conducted to unravel the influence of affect induction, as a background emotion, on a cognitive task judging degree of sequence in groups with or without anxiety symptoms. Four types of affect induction and two sequential task types were used as within-subject variables, and two groups of college students classified by the Beck Anxiety Inventory (BAI) served as the between-subject variable, in order to determine reaction times in sequential judgments of lexical relevance information. DmDx5 was used to present the stimuli and record responses. Repeated measures ANOVA revealed that reaction times and error rates were significantly larger for anxious participants than for the normal group, regardless of affect and task type. Among within-subject effects, a specific affect type (the sorrow condition) and the number-related task type each showed faster responses than the other affect types and the magnitude-related task type, respectively. In sum, these findings confirmed differences in reaction times and error rates that varied as a function of the accompanying affect type as well as anxiety level and task type, suggesting that the underlying background affect plays a major role in processing affect-cognitive association tasks.

Salinity and water level measuring device using fixed type buoyancy (고정식 부력을 이용한 염도 및 수위 측정 방식에 대한 연구)

  • Yang, Seung-Young;Byun, Kyung-Seok
    • Journal of the Institute of Convergence Signal Processing / v.21 no.1 / pp.1-6 / 2020
  • To build an automated system for a salt field, it is necessary to measure the salinity and water level of the evaporation site. In this paper, a method to simultaneously measure salinity and water level by measuring the buoyancy forces of two fixed buoyancy bodies is proposed. The proposed method measures the buoyancy of a main part and a reference part when the measuring device is immersed in the salty water, and determines the salinity and water level from the sum and difference of the two buoyancy forces. Since there is no mechanical movement in the buoyancy measurement, measurement errors and maintenance needs can be reduced in the muddy environment of a salt field. By applying the proposed method, we developed a system that can simultaneously and remotely measure salinity and water level at the evaporation site of a salt field. In measurement experiments using reference salty water of various salinity levels, a salinity error of 0% and a water level error of 2 mm were obtained, verifying the effectiveness of the proposed salinity and water level measuring device. When an automated system is constructed using the developed device, labor reduction, work environment improvement, and productivity improvement are expected.
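The physics behind recovering two unknowns from two buoyancy readings can be sketched as follows: a fully submerged reference body of known volume fixes the water density (hence salinity), after which the main body's buoyancy gives the submerged depth. This is a simplified reading of the device (the paper works with the sum and difference of the forces); all symbols are illustrative:

```python
G = 9.81  # gravitational acceleration, m/s^2

def density_and_level(f_main, f_ref, v_ref, a_main):
    """Recover water density and water level from two buoyancy forces (N):
    f_ref on a fully submerged reference body of known volume v_ref (m^3),
    f_main on a partially submerged main body of cross-section a_main (m^2).
    Archimedes: F = rho * g * V_submerged."""
    rho = f_ref / (G * v_ref)            # density from the reference body
    level = f_main / (rho * G * a_main)  # submerged depth of the main body
    return rho, level
```

A round-trip check: forces synthesized from a known density and level should be inverted exactly, which is what makes a salinity error near 0% plausible in principle.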

Preoperative estimation of hemi-liver volume using standard liver volume and portal vein diameter ratio in living donor liver transplantation

  • Sung-Min Kim;Amro Hasan Ageel;Shin Hwang;Dong-Hwan Jung;Tae-Yong Ha;Gi-Won Song;Gil-Chun Park;Chul-Soo Ahn;Deok-Bog Moon
    • Annals of Hepato-Biliary-Pancreatic Surgery / v.26 no.4 / pp.308-312 / 2022
  • Backgrounds/Aims: Although body surface area (BSA)-based standard liver volume (SLV) formulae have been used for living donor liver transplantation and hepatic resection, hemi-liver volume (HLV) is needed more frequently. HLV can be assessed using the right or left portal vein diameter (RPVD or LPVD). The aim of this study was to validate the reliability of using the portal vein diameter ratio (PVDR) for assessing HLV in living liver donors. Methods: This study included 92 living liver donors (59 males and 33 females) who underwent surgery between January 2020 and December 2020. Computed tomography (CT) images were used for measurements. Results: Mean age of the donors was 35.5 ± 7.2 years. CT volumetry-measured total liver volume (TLV), right HLV, left HLV, and percentage of right HLV in TLV were 1,442.9 ± 314.2 mL, 931.5 ± 206.4 mL, 551.4 ± 126.5 mL, and 64.6% ± 3.6%, respectively. RPVD, LPVD, and main portal vein diameter were 12.2 ± 1.5 mm, 10.0 ± 1.3 mm, and 15.3 ± 1.7 mm, respectively (corresponding squared values: 149.9 ± 36.9 mm², 101.5 ± 25.2 mm², and 237.2 ± 52.2 mm², respectively). The sum of RPVD² and LPVD² was 251.1 ± 56.9 mm². BSA-based SLV was 1,279.5 ± 188.7 mL (error rate: 9.1% ± 14.4%). SLV formula- and PVDR-based right HLV was 760.0 ± 130.7 mL (error rate: 16.2% ± 13.3%). Conclusions: Combining BSA-based SLV and PVDR appears to be a simple method to predict right or left HLV in living donors or split liver transplantation.
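Reading the abstract, the PVDR split apportions the BSA-based SLV between the hemi-livers in proportion to the squared portal vein diameters. A sketch under that assumption (the function name is ours; actual clinical use would rely on per-donor measurements, not the cohort means used in the check below):

```python
def hemi_liver_volumes(slv, rpvd, lpvd):
    """Split a standard liver volume slv (mL) into (right, left)
    hemi-liver volumes using squared portal vein diameters (mm):
    right = slv * RPVD^2 / (RPVD^2 + LPVD^2), and symmetrically for left."""
    total = rpvd ** 2 + lpvd ** 2
    right = slv * rpvd ** 2 / total
    left = slv * lpvd ** 2 / total
    return right, left
```

By construction the two hemi-volumes sum back to the SLV, and with the cohort-mean diameters (12.2 mm vs 10.0 mm) the right lobe gets the larger share, consistent with the ~60-65% right-lobe fraction reported.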

Development of QSAR Model Based on the Key Molecular Descriptors Selection and Computational Toxicology for Prediction of Toxicity of PCBs (PCBs 독성 예측을 위한 주요 분자표현자 선택 기법 및 계산독성학 기반 QSAR 모델 개발)

  • Kim, Dongwoo;Lee, Seungchel;Kim, Minjeong;Lee, Eunji;Yoo, ChangKyoo
    • Korean Chemical Engineering Research / v.54 no.5 / pp.621-629 / 2016
  • Recently, research on quantitative structure-activity relationships (QSAR), which describe the toxicities or activities of chemicals based on structural characteristics, has been widely carried out in order to estimate the toxicity of chemicals in multiuse facilities. Because the toxicity of a chemical is explained by many kinds of molecular descriptors, an important step in QSAR model development is how to select significant molecular descriptors. This research proposes a statistical selection of significant molecular descriptors and a new QSAR model based on partial least squares (PLS). The proposed QSAR model is applied to estimate the logarithm of the partition coefficient (log P) of 130 polychlorinated biphenyls (PCBs) and the lethal concentration ($LC_{50}$) of 14 PCBs, and its prediction accuracy is compared to that of a conventional QSAR model provided by the OECD QSAR Toolbox. To select significant molecular descriptors that correlate highly with the activity information of the chemicals of interest, the correlation coefficient (r) and variable importance in projection (VIP) are applied, and a PLS model of the selected molecular descriptors and activity information is then used to predict the toxicities and activity information of chemicals. In terms of the coefficient of determination ($R^2$) and the predicted residual error sum of squares (PRESS), the proposed QSAR model improved the prediction performance for log P and $LC_{50}$ by 26% and 91%, respectively, over the conventional QSAR model. The proposed QSAR method based on computational toxicology can improve the prediction of the toxicities and activity information of chemicals, contributing to the health and environmental risk assessment of toxic chemicals.
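The correlation-based screening step can be sketched in pure Python: compute Pearson's r between each descriptor and the activity, and keep descriptors whose |r| clears a threshold. This covers only the r-based half of the paper's selection (the PLS VIP scores are omitted), and the threshold and data below are illustrative:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def select_descriptors(descriptors, activity, r_min=0.7):
    """Keep molecular descriptors whose |r| with the activity >= r_min.
    descriptors: dict mapping descriptor name -> list of values per chemical."""
    return {name: vals for name, vals in descriptors.items()
            if abs(pearson_r(vals, activity)) >= r_min}
```

The retained descriptors would then feed the PLS regression; screening first keeps weakly related descriptors from diluting the latent variables.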

Development and Analysis of COMS AMV Target Tracking Algorithm using Gaussian Cluster Analysis (가우시안 군집분석을 이용한 천리안 위성의 대기운동벡터 표적추적 알고리듬 개발 및 분석)

  • Oh, Yurim;Kim, Jae Hwan;Park, Hyungmin;Baek, Kanghyun
    • Korean Journal of Remote Sensing / v.31 no.6 / pp.531-548 / 2015
  • Atmospheric Motion Vectors (AMV) derived from satellite images have shown a Slow Speed Bias (SSB) in comparison with rawinsonde observations. The SSB originates from tracking, selection, and height assignment errors, of which height assignment has been considered the leading error. However, recent work has shown that height assignment error cannot fully explain the SSB. This paper attempts a new approach, examining the possibility of reducing the SSB of COMS AMV by using a new target tracking algorithm. Tracking error can be caused by averaging of various wind patterns within a target and by the changing of cloud shape during the search process over time. To overcome this problem, a Gaussian Mixture Model (GMM) is adopted to extract the coldest cluster as the target, since the shape of such a target is less subject to transformation. An image filtering scheme is then applied to weight the selected coldest pixels more than the others, which makes the target easier to track. When AMV derived from our algorithm with the sum of squared distance matching method and current COMS AMV are compared with rawinsonde, our products show a noticeable improvement over COMS products, with mean wind speed increased by $2.7ms^{-1}$ and the SSB reduced by 29%. However, the bias statistics show a negative impact for mid/low levels with our algorithm, and the number of vectors is reduced by 40% relative to COMS. Therefore, further study is required to improve accuracy for mid/low-level winds and to increase the number of AMVs.
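The sum-of-squared-distance matching named in the abstract is plain template matching: slide the target over the search image and take the offset with the minimum SSD. A toy version with lists of lists standing in for brightness-temperature arrays (the operational algorithm additionally applies the GMM clustering and pixel weighting described above):

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equal-size 2D patches."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def best_match(image, target):
    """Slide target over image; return the (row, col) offset minimizing SSD."""
    th, tw = len(target), len(target[0])
    best_score, best_pos = None, None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = ssd(patch, target)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

The displacement between the target's original position and `best_match` in the next image, divided by the time step, gives the motion vector.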

The Comparative Analysis of External Dose Reconstruction in EPID and Internal Dose Measurement Using Monte Carlo Simulation (몬테 카를로 전산모사를 통한 EPID의 외부적 선량 재구성과 내부 선량 계측과의 비교 및 분석)

  • Jung, Joo-Young;Yoon, Do-Kun;Suh, Tae-Suk
    • Progress in Medical Physics / v.24 no.4 / pp.253-258 / 2013
  • The purpose of this study is to evaluate and analyze the relationship between the external dose reconstructed from the radiation transmitted through a patient under treatment, as measured by an electronic portal imaging device (EPID), and the internal dose derived from Monte Carlo simulation. The comparative analysis of the two cases is performed to provide a basic indicator for similar studies. The geometry of the experiment and of the radiation source were entered into Monte Carlo n-particle (MCNPX), the simulation tool, and a tally card in MCNPX was used to visualize and image the dose information for deriving the EPID images. The water phantom was set at a source to surface distance (SSD) of 100 cm for the internal measurement, and the EPID was set at an SSD of 90 cm, 10 cm below. The internal dose was collected from the water phantom using the mesh tally function in MCNPX, and accumulated dose data were acquired with four-portal beam exposures. At the same time, after obtaining the dose that had passed through the water phantom, dose reconstruction was performed using the back-projection method. To compare the two cases, we evaluated the calibrated transmitted dose against the absorbed dose, i.e., the dose reconstructed from the EPID against the partially accumulated (overlapped) dose in the water phantom from the four-portal beam exposures. The summed doses for the two cases were 3.4580 MeV/g (absorbed dose in water) and 3.4354 MeV/g (EPID reconstruction), showing good agreement with a dose error of 0.6536%.
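The back-projection step can be illustrated with a toy 2D version: each portal's 1D transmission profile is spread back uniformly along its beam direction, and the four exposures are summed on the grid. This is only the geometric skeleton of the method; the study's actual reconstruction works on calibrated EPID images, and the beam directions and normalization here are illustrative:

```python
def back_project(projections, size):
    """Sum four back-projected 1D profiles on a size x size grid.
    projections maps a beam direction ('left', 'right', 'top', 'bottom')
    to a 1D profile of length `size`; each profile is spread uniformly
    along its beam path (rows for horizontal beams, columns for vertical)."""
    grid = [[0.0] * size for _ in range(size)]
    for direction, profile in projections.items():
        horizontal = direction in ("left", "right")
        for i in range(size):
            for j in range(size):
                idx = i if horizontal else j
                grid[i][j] += profile[idx] / size
    return grid
```

Cells crossed by high-intensity rays from several directions accumulate the most dose, which is how overlapping portals concentrate the reconstructed dose at the target.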

Parameter Estimation of Coastal Water Quality Model Using the Inverse Theory (역산이론을 이용한 연안 수질모형의 매개변수 추정)

  • Cho, Hong-Yeon;Cho, Bum-Jun;Jeong, Shin-Taek
    • Journal of Korean Society of Coastal and Ocean Engineers / v.17 no.3 / pp.149-157 / 2005
  • Typical water quality (WQ) parameters appearing in the governing equation of a WQ model are the pollutant loads from the atmosphere and watersheds, pollutant release rates from sediment, the diffusion coefficient, and the reaction coefficient. Direct measurement of these parameters is very difficult and costly. In this study, the pollutant budget equation including these parameters was used to construct linear simultaneous equations. Based on these equations, inverse problems were formulated, and a WQ parameter estimation method minimizing the sum of squared errors between the computed and observed amounts of mass change was suggested. The WQ parameters, i.e., the atmospheric pollutant loads, sediment release rates, diffusion coefficients, and reaction coefficient, were estimated with this method using the vertical concentration profile data observed in Cheonsu Bay and Ulsan Port. The estimated parameters show large temporal variation. However, the technique is persuasive in that the RMS (root mean square) error was less than $5.0\%$ of the observed value range and the agreement index was greater than 0.95.
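Minimizing the sum of squared errors over a set of linear simultaneous equations is an overdetermined least-squares problem, solvable through the normal equations. A pure-Python sketch of that generic step (the actual design matrix would be assembled from the pollutant budget equation; the toy system below is illustrative):

```python
def solve_least_squares(A, b):
    """Solve min ||Ax - b||^2 via the normal equations A^T A x = A^T b,
    using Gaussian elimination with partial pivoting. A is a list of
    rows (one per observation), b the observed values."""
    m, n = len(A), len(A[0])
    # Assemble the normal equations
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (atb[i] - sum(ata[i][j] * x[j]
                             for j in range(i + 1, n))) / ata[i][i]
    return x
```

Here x would collect the WQ parameters (loads, release rates, coefficients), with each row of A encoding one observed mass-change balance.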