• Title/Summary/Keyword: selection function

Search Results: 1,530

COMPARISON OF FLUX AND RESIDENT CONCENTRATION BREAKTHROUGH CURVES IN STRUCTURED SOIL COLUMNS (구조토양에서의 침출수와 잔존수농도의 파과곡선에 관한 비교연구)

  • Kim, Dong-Ju
    • Journal of Korea Soil Environment Society / v.2 no.2 / pp.81-94 / 1997
  • In many solute transport studies, either flux or resident concentration has been used, the choice of concentration mode depending on the monitoring device in the displacement experiment. It has generally been accepted that neither concentration mode takes priority in the study of solute transport. It is questionable, however, whether the solute transport parameters derived from flux and resident concentrations are equivalent in structured soils exhibiting preferential movement of solute. In this study, we investigate how the two modes differ in the monitored breakthrough curves (BTCs) and transport parameters for a given boundary and flow condition by performing solute displacement experiments on a number of undisturbed soil columns. Flux and resident concentrations were obtained simultaneously by monitoring the effluent and the resistance of horizontally positioned TDR probes. Two solute transport models, the convection-dispersion equation (CDE) and the convective lognormal transfer function (CLT) model, were fitted to the observed breakthrough data in order to quantify the difference between the two concentration modes. The study reveals that soil columns with relatively high flux densities exhibited large differences between flux and resident concentrations in both the peak concentration and the travel time of the peak: the peak concentration in flux mode was several times higher than in resident mode. Accordingly, the estimated parameters of flux mode differed greatly from those of resident mode, and the difference was more pronounced for the CDE than for the CLT model. In the CDE model especially, the parameters of flux mode were much higher than those of resident mode. This was mainly due to the bypassing of solute through soil macropores and the failure of the equilibrium CDE model to adequately describe solute transport in the studied soils. In the domain of the relationship between the ratio of hydrodynamic dispersion to molecular diffusion and the Peclet number, both concentration modes fall in a zone of predominantly mechanical dispersion. However, molecular diffusion appears to contribute more to solute spreading in the matrix region than in the macropore region, owing to the nonlinearity of the relationship between pore water velocity and the dispersion coefficient. (See the fitting sketch below.)

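A minimal sketch of what fitting the equilibrium CDE to a breakthrough curve can look like, assuming the classical Ogata-Banks step-input solution and made-up column data; the paper's actual columns, boundary conditions, and the CLT model are not reproduced here:

```python
# Sketch: fit the 1-D equilibrium CDE (Ogata-Banks step-input solution)
# to an observed BTC. Column length, data, and initial guesses are invented.
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

L = 0.20  # column length [m] (assumed)

def cde_step(t, v, D):
    """Relative flux concentration C/C0 at depth L for a step input."""
    t = np.maximum(t, 1e-9)                      # guard against t = 0
    a = (L - v * t) / (2.0 * np.sqrt(D * t))
    b = (L + v * t) / (2.0 * np.sqrt(D * t))
    # clip the exponent so extreme intermediate guesses cannot overflow
    return 0.5 * (erfc(a) + np.exp(np.minimum(v * L / D, 700.0)) * erfc(b))

# hypothetical observed BTC: times [h] and relative effluent concentrations
t_obs = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0])
c_obs = np.array([0.01, 0.05, 0.22, 0.45, 0.62, 0.82, 0.91, 0.98])

(v_fit, D_fit), _ = curve_fit(cde_step, t_obs, c_obs, p0=[0.05, 1e-3],
                              bounds=([1e-4, 1e-6], [10.0, 1.0]))
print(f"pore-water velocity v = {v_fit:.4f} m/h, dispersion D = {D_fit:.6f} m^2/h")
```

For the CLT model one would instead fit a lognormal travel-time density, but the fitting machinery is the same.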

Morbidity Pattern of Residents in Urban Poor Area by Health Screening (도시 영세지역 주민의 건강진단 결과)

  • Kim, Chang-Yoon;SaKong, Jun;Kim, Seok-Beom;Kang, Pock-Soo;Chung, Jong-Hak
    • Journal of Yeungnam Medical Science / v.8 no.2 / pp.150-157 / 1991
  • The purpose of this study was to assess the morbidity pattern of residents of an urban poor area through health screening, for use in community diagnosis. The screening items were history taking and physical examination by a medical doctor, a hearing test, blood pressure, hematocrit, liver function (sGOT, sGPT), urine sugar and protein, and chest X-ray. The examinees numbered 437 persons, 16.9% of the total residents of the area: 129 males (9.9% of total residents) and 308 females (23.9%). In the age group above sixty years, 42.0% of residents participated, but only 5.9% in the age group of 10 to 19 years. Of the 437 examinees, 191 (43.7%) had one or more abnormal findings. Among males, 38.7% had abnormal findings, somewhat lower than among females (45.8%). The age group above sixty years had the highest rate of abnormal findings (69.8%), in contrast to the 10 to 19 age group (10.9%). Diseases of the digestive system were the most common, accounting for 23.7% of all abnormal findings, followed by diseases of the circulatory system at 19.7%. Among the screening tests (hematocrit, blood pressure, hearing test, sGOT/sGPT, urine protein and sugar, chest X-ray), low hematocrit was the most common abnormal finding (14.6% of the 437 participants), followed by high blood pressure (10.1%), hearing impairment (5.5%), abnormal liver function (4.1%), sugar in urine (2.3%), protein in urine (1.4%), and abnormal chest X-ray (0.9%). The rate of abnormal findings in health screening was very high compared with the morbidity rate obtained by health interview. Part of this high rate is presumably due to selection bias among the examinees, especially the high participation rate of older residents, and part to the low socioeconomic status and poor environment of the residents of the area. These findings will be useful for research on, and development of, the health care system in urban poor areas.


The Flow-rate Measurements in a Multi-phase Flow Pipeline by Using a Clamp-on Sealed Radioisotope Cross Correlation Flowmeter (투과 감마선 계측신호의 Cross correlation 기법 적용에 의한 다중상 유체의 유량측정)

  • Kim, Jin-Seop;Kim, Jong-Bum;Kim, Jae-Ho;Lee, Na-Young;Jung, Sung-Hee
    • Journal of Radiation Protection and Research / v.33 no.1 / pp.13-20 / 2008
  • Flow rates in a multi-phase flow pipeline were evaluated quantitatively by means of clamp-on sealed radioisotope sources and a cross-correlation signal-processing technique. The flow rates were calculated by determining the transit time between two sealed gamma sources using a cross-correlation function after FFT filtering, and were then corrected with the vapor fraction in the pipeline, measured by the γ-ray attenuation method. The pipeline model was manufactured from acrylic resin (ID 8 cm, L = 3.5 m, t = 10 mm), and the multi-phase flow patterns were produced by injecting compressed N₂ gas. Two sealed ¹³⁷Cs gamma sources (E = 0.662 MeV, Γ factor = 0.326 R·h⁻¹·m²·Ci⁻¹) of 20 mCi and 17 mCi, and 2″×2″ NaI(Tl) scintillation counters (Eberline SP-3), were used for this study. Under the given conditions (distance between the two sources: 4D, where D is the inner diameter; N/S ratio: 0.12-0.15; sampling time Δt: 4 ms), the measured flow rates showed a maximum relative error of 1.7% relative to the true values after the vapor-content corrections (6.1%-9.2%). A subsequent experiment proved that the closer the two sealed sources are to each other, the more precise the measured flow rates. Provided additional studies on the selection of radioisotopes and their activity and on the optimization of the experimental geometry are carried out, radioisotope-based flow rate measurement is expected to serve as an important tool for monitoring multi-phase facilities in the petrochemical and refinery industries, and to contribute economically to their maintenance and control. (A cross-correlation sketch follows below.)
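A toy sketch of the transit-time estimation step, with synthetic detector traces in place of real gamma-count signals; the spacing, sampling interval, and vapor fraction below are illustrative values taken loosely from the abstract:

```python
# Estimate the transit time between two detector signals by cross-correlation,
# then convert to a (liquid-phase) volumetric flow rate. Signals are synthetic.
import numpy as np

dt = 0.004                                # sampling interval [s] (abstract: 4 ms)
spacing = 0.32                            # source spacing [m]: 4D for ID = 8 cm
pipe_area = np.pi * (0.08 / 2) ** 2       # inner cross-section [m^2]
vapor_fraction = 0.08                     # from gamma-ray attenuation (assumed)

rng = np.random.default_rng(0)
n = 4096
upstream = rng.normal(size=n)                     # density fluctuations
true_lag = 50                                     # samples, i.e. 0.2 s
downstream = np.roll(upstream, true_lag) + 0.3 * rng.normal(size=n)

# mean-removed cross-correlation; the peak lag is the transit time in samples
u = upstream - upstream.mean()
d = downstream - downstream.mean()
lag = np.argmax(np.correlate(d, u, mode="full")) - (n - 1)

transit = lag * dt
velocity = spacing / transit
flow_rate = velocity * pipe_area * (1.0 - vapor_fraction)   # liquid-phase flow
print(f"lag = {lag} samples, transit = {transit:.3f} s, Q = {flow_rate*1000:.2f} L/s")
```

In the paper the vapor fraction itself comes from the γ-ray attenuation measurement; here it is simply assumed.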

The Comparison of Image Quality between Computed Radiography(CR) and Direct Digital Radiography(DDR) which Follows the Proper Exposure Conditions in General Photographing under the Digital Radiography(DR) (Digital Radiography 환경하에서 일반촬영시 적정 노출조건에 따른 CR과 DDR의 Image Quality 비교)

  • Kim, Jin-Bae;Kang, Chung-Hwan;Kang, Sung-Jin;Park, Soo-In;Park, Jong-Won;Kim, Yeong-Su;Kim, Seung-Sik
    • Korean Journal of Digital Imaging in Medicine / v.5 no.1 / pp.64-77 / 2002
  • DR has become an important factor not only in the radiology department but also in the productivity and work efficiency of the hospital as a whole. The DR environment has more adjustable parameters than CR, so it can deliver a higher quality of medical service. The radiology environment in hospitals has been changing from film-screen systems to DR through Full-PACS. This hospital, which uses Full-PACS, studied the proper exposure conditions for CR and DDR and how their image quality compares among the general radiography systems in the DR environment. In this experiment, the image quality of DDR was better than that of CR under the same exposure conditions. In the DDR system, the image score with AEC was somewhat higher than without it; in particular, the AEC function of DDR proved useful for improving image quality in skull and chest examinations. (AEC detects the ionization current of the X-rays passing through the object by means of an ion chamber in the detector, and terminates the X-ray exposure when the proper density is reached.) Because the proper density is achieved automatically, images can be taken much more easily, without adjusting the exposure conditions for the varying thickness of objects. The results show that the selection of proper exposure conditions plays an important role in obtaining good image quality. Further research on the DDR system, which has considerable potential, will be necessary. (A conceptual AEC sketch follows below.)

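A purely conceptual sketch of the AEC termination logic described above, with an invented attenuation model, threshold, and backup timer (no real detector or generator interface is implied):

```python
# Conceptual AEC sketch: integrate a toy ion-chamber signal during the
# exposure and terminate once a preset dose threshold is reached.
import math

dt = 0.001          # integration step [s]
threshold = 0.05    # preset chamber dose for "proper density" (arbitrary units)

def chamber_signal(thickness_cm):
    """Ion-chamber signal behind the object; thicker objects attenuate more."""
    return math.exp(-0.1 * thickness_cm)      # toy exponential attenuation

def aec_exposure_time(thickness_cm, backup_time=2.0):
    dose, t = 0.0, 0.0
    while dose < threshold and t < backup_time:   # backup timer guards the loop
        dose += chamber_signal(thickness_cm) * dt
        t += dt
    return t

for thickness in (5, 15, 25):
    t_ms = aec_exposure_time(thickness) * 1000
    print(f"{thickness} cm object -> exposure terminated at {t_ms:.0f} ms")
```

The point of the sketch is only that exposure time adapts to object thickness automatically, which is why AEC frees the operator from picking exposure conditions per patient.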

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In that literature, DT ensemble studies have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown performance as remarkable as DT ensembles. Recently, several works have reported that ensemble performance can degrade when the classifiers in an ensemble are highly correlated with one another, resulting in a multicollinearity problem, and have proposed differentiated learning strategies to cope with this degradation. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but shows no remarkable improvement for stable ones. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data: small changes in the data can yield large changes in the generated classifiers, so ensembles of unstable learners guarantee some diversity among the classifiers. By contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which degrades ensemble performance. Kim's work (2009) compared traditional prediction algorithms such as NN, DT, and SVM for bankruptcy prediction on Korean firms; it reports that the stable algorithms NN and SVM have higher predictability than the unstable DT, while for ensemble learning the DT ensemble shows greater improvement than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically showed that the performance degradation of the ensemble is due to multicollinearity, and proposed that ensemble optimization is needed to cope with the problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve NN ensemble performance. Coverage optimization chooses a sub-ensemble from an original ensemble so as to guarantee the diversity of classifiers. CO-NN uses a GA, which has been widely applied to optimization problems, to solve the coverage optimization problem; the GA chromosomes are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the standard measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them (see the GA sketch below). We used Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction show that CO-NN stably enhances the performance of NN ensembles by choosing classifiers with the ensemble's correlations taken into account: classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN outperformed a single NN classifier and the NN ensemble at the 1% significance level, and the DT ensemble at the 5% significance level. Further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
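A minimal sketch of this kind of GA, assuming simulated classifier outputs rather than real NN ensemble members; the fitness here combines majority-vote accuracy with a VIF penalty (the paper itself uses Evolver with an error-reduction fitness, so the details below are illustrative):

```python
# GA over binary chromosomes: each bit selects one ensemble member; fitness
# rewards majority-vote accuracy and penalizes multicollinearity via mean VIF.
import numpy as np

rng = np.random.default_rng(0)
n_clf, n_samples = 12, 300
y = rng.integers(0, 2, n_samples)                     # true labels
# simulated member outputs: each ~75% accurate, hence mutually correlated
preds = np.where(rng.random((n_clf, n_samples)) < 0.75, y, 1 - y)

def mean_vif(mask):
    """Mean variance inflation factor of the selected members' outputs."""
    if mask.sum() < 2:
        return 1.0
    R = np.corrcoef(preds[mask.astype(bool)].astype(float))
    return float(np.diag(np.linalg.pinv(R)).mean())   # VIF_j = [R^-1]_jj

def fitness(mask):
    if mask.sum() == 0:
        return -1.0
    vote = (preds[mask.astype(bool)].mean(axis=0) > 0.5).astype(int)
    acc = float((vote == y).mean())
    return acc - 0.1 * max(0.0, mean_vif(mask) - 10.0)  # VIF cap of 10 (assumed)

pop = rng.integers(0, 2, (30, n_clf))                 # binary chromosomes
for gen in range(100):
    fits = np.array([fitness(ind) for ind in pop])
    new = [pop[fits.argmax()].copy()]                 # elitism
    while len(new) < len(pop):
        i = rng.integers(len(pop), size=2)            # two size-2 tournaments
        j = rng.integers(len(pop), size=2)
        p1, p2 = pop[i[fits[i].argmax()]], pop[j[fits[j].argmax()]]
        child = np.where(rng.random(n_clf) < 0.5, p1, p2)             # uniform crossover
        child = np.where(rng.random(n_clf) < 0.01, 1 - child, child)  # bit-flip mutation
        new.append(child)
    pop = np.array(new)

best = max(pop, key=fitness)
print("selected classifiers:", np.flatnonzero(best), " fitness:", round(fitness(best), 3))
```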

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reduce the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are demonstrated in computer simulation and experiment. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there is a drawback to consider when using the GA: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation is time-intensive is the expense of computing the cost function, which must Fourier-transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, an artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications demanding high precision. We therefore attempt to combine the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration over selection, crossover, and mutation operators [2]. Notably, the evaluation of the cost function, which selects the better holograms, plays an important role in the implementation of the GA, but this evaluation wastes much time Fourier-transforming the encoded hologram parameters into the value to be scored; depending on the speed of the computer, it can last up to ten minutes (a sketch of this evaluation step appears below). It is more effective if, instead of merely generating random holograms in the initialization, a set of approximately desired holograms is employed: the initial population then contains fewer trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm is proposed that uses a trained neural network to initialize the GA's procedure, so that the initial population contains fewer random holograms, compensated by approximately desired ones. Figure 1 is a flowchart of the hybrid algorithm compared with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, holograms are simulated with the ANN method [1] to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained network attains approximately desired holograms in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initialization; hence the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size. A reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is evaluated in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows the diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured, in fairly good agreement with the simulation. In this paper, the genetic algorithm and a neural network have been successfully combined in designing CGHs. The method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.

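A small sketch of the costly evaluation step the abstract highlights: scoring a binary phase hologram by Fourier-transforming it and measuring how much energy lands in the desired reconstruction window. Hologram size and target pattern are invented; the paper's actual fitness also tracks uniformity, which is omitted here:

```python
# One GA fitness evaluation = one 2-D FFT of the candidate hologram.
import numpy as np

N = 64
rng = np.random.default_rng(1)
target = np.zeros((N, N), dtype=bool)
target[24:40, 24:40] = True                      # desired reconstruction window

def diffraction_efficiency(bits):
    """bits: N x N array of {0, 1} mapped to phases {0, pi}."""
    field = np.exp(1j * np.pi * bits)            # binary phase hologram
    recon = np.fft.fftshift(np.fft.fft2(field))  # far-field reconstruction
    intensity = np.abs(recon) ** 2
    return intensity[target].sum() / intensity.sum()

hologram = rng.integers(0, 2, (N, N))
print(f"random hologram efficiency: {diffraction_efficiency(hologram):.3f}")
```

Because every chromosome in every generation pays for one such transform, seeding the initial population with ANN-generated, approximately correct holograms directly reduces the number of FFT evaluations the GA needs, which is the paper's central speed-up.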

The Precision Test Based on States of Bone Mineral Density (골밀도 상태에 따른 검사자의 재현성 평가)

  • Yoo, Jae-Sook;Kim, Eun-Hye;Kim, Ho-Seong;Shin, Sang-Ki;Cho, Si-Man
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.67-72 / 2009
  • Purpose: The ISCD (International Society for Clinical Densitometry) requires users to perform a precision test to assure quality, although it gives no recommendation on patient selection for the test. We therefore investigated the effect of bone density status on the precision test by measuring reproducibility in three bone density groups (normal, osteopenia, osteoporosis). Materials and Methods: Four users performed precision tests on 420 patients (age: 57.8 ± 9.02) undergoing BMD measurement at Asan Medical Center (Jan 2008 - Jun 2008). In the first group (A), each of the 4 users selected 30 patients regardless of bone density status and measured 2 regions (L-spine, femur) twice. In the second group (B), each of the 4 users measured the bone density of 10 patients in the same manner as in group A, but with the patients divided into the 3 categories (normal, osteopenia, osteoporosis). In the third group (C), 2 users each measured 30 patients in the same manner as in group A, taking bone density status into account. We used a GE Lunar Prodigy Advance (Encore V11.4) and analyzed the results by comparing %CV to the LSC using the precision tool from the ISCD; verification was done with SPSS. Results: In group A, the %CV values calculated by the 4 users (a, b, c, d) were 1.16, 1.01, 1.19, and 0.65 in the L-spine and 0.69, 0.58, 0.97, and 0.47 in the femur. In group B, the %CV values calculated by the 4 users were 1.01, 1.19, 0.83, and 1.37 in the L-spine and 1.03, 0.54, 0.69, and 0.58 in the femur. Comparing groups A and B, we found no considerable difference. In group C, user 1's %CV values for normal, osteopenia, and osteoporosis were 1.26, 0.94, and 0.94 in the L-spine and 0.94, 0.79, and 1.01 in the femur; user 2's were 0.97, 0.83, and 0.72 in the L-spine and 0.65, 0.65, and 1.05 in the femur. The analysis showed almost no difference in reproducibility by bone density status, though differences between the two users' individual results affected the total reproducibility. Conclusions: The precision test is an important factor in bone density follow-up: the better the machine's and the user's reproducibility, the smaller the range of deviation, which is clinically useful. Users should check the machine's reproducibility before testing and maintain a consistent technique when performing BMD tests on patients. In precision testing, differences in measured values usually arise from ROI changes caused by patient positioning. For osteoporosis patients it is more difficult to set the initial ROI accurately than for normal or osteopenic patients, owing to poor bone recognition, even though the ROI is drawn automatically by the software; since the ROI Copy function is used at follow-up, the initial ROI is very important and users must draw it consistently. In this study we performed the precision test taking bone density status into account and found that the LSC stayed within 3%, with no considerable difference between groups. Thus, patients for the precision test can be selected regardless of bone density status. (A %CV/LSC computation sketch follows below.)

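A short sketch of the standard ISCD-style precision arithmetic: pooled RMS precision error from duplicate scans, and LSC = 2.77 × precision error at 95% confidence for paired measurements. The BMD values below are made up:

```python
# Pooled precision error and least significant change from duplicate scans.
import numpy as np

# duplicate L-spine BMD scans [g/cm^2] for a few patients (hypothetical values)
scan1 = np.array([1.021, 0.934, 0.778, 1.112, 0.865])
scan2 = np.array([1.034, 0.921, 0.792, 1.098, 0.871])

pair = np.stack([scan1, scan2])
means = pair.mean(axis=0)
sds = pair.std(axis=0, ddof=1)        # per-patient SD of the duplicate scans

rms_sd = np.sqrt((sds ** 2).mean())                   # pooled precision error
rms_cv = np.sqrt(((sds / means) ** 2).mean()) * 100   # pooled %CV

print(f"RMS-SD = {rms_sd:.4f} g/cm^2, RMS-%CV = {rms_cv:.2f} %")
print(f"LSC(95%) = {2.77 * rms_sd:.4f} g/cm^2 = {2.77 * rms_cv:.2f} % (CV terms)")
```

A follow-up change in BMD smaller than the LSC cannot be distinguished from measurement noise, which is why the paper benchmarks each user's %CV against it.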

Relationships on Magnitude and Frequency of Freshwater Discharge and Rainfall in the Altered Yeongsan Estuary (영산강 하구의 방류와 강우의 규모 및 빈도 상관성 분석)

  • Rhew, Ho-Sang;Lee, Guan-Hong
    • The Sea: Journal of the Korean Society of Oceanography / v.16 no.4 / pp.223-237 / 2011
  • The intermittent freshwater discharge has a critical influence on the biophysical environment and ecosystems of the Yeongsan Estuary, where the estuary dam interrupted the continuous mixing of saltwater and freshwater. Though the freshwater discharge is controlled by humans, the extreme events are driven mainly by heavy rainfall in the river basin, and their impacts vary with magnitude and frequency. This research aims to evaluate the magnitude and frequency of extreme freshwater discharges and to establish magnitude-frequency relationships between basin-wide rainfall and freshwater inflow. Daily discharge and daily basin-averaged rainfall from Jan 1, 1997 to Aug 31, 2010 were used to determine the relations between discharge and rainfall. Consecutive daily discharges were grouped into independent events using a well-defined event-separation algorithm, and partial duration series were extracted to obtain a proper probability distribution for extreme discharges and the corresponding rainfall events. Extreme discharge events over the threshold of 133,656,000 m³ number 46 over the 13.7-year record, following a Weibull distribution with k = 1.4. The 3-day accumulated rainfall ending one day before the peak discharge (the 1day-before-3day-sum rainfall) was chosen as the control variable for discharge, because its magnitude correlates best with that of the extreme discharge events. The minimum corresponding 1day-before-3day-sum rainfall, 50.98 mm, was initially set as the threshold for selecting discharge-inducing rainfall cases; however, the number of rainfall cases selected this way exceeds the number of extreme discharge events. Canonical discriminant analysis indicates that the water level above the target level (-1.35 m EL.) is useful for dividing the 1day-before-3day-sum rainfall cases into discharge-inducing and non-discharge ones, and shows that a newly set threshold of 104 mm separates the two cases without error. The magnitude-frequency relationships between rainfall and discharge were then established from the newly selected 1day-before-3day-sum rainfalls: $D = 1.111\times10^{8} + 1.677\times10^{6}\,\overline{r}_{3day}$ ($\overline{r}_{3day} \geq 104$, $R^{2} = 0.459$), $T_d = 1.326\,T_{r3}^{0.683}$, and $T_d = 0.117\,\exp(0.0155\,\overline{r}_{3day})$, where $D$ is the discharge volume, $\overline{r}_{3day}$ the 1day-before-3day-sum rainfall, and $T_{r3}$ and $T_d$ the return periods of the 1day-before-3day-sum rainfall and the freshwater discharge, respectively. These relations provide a framework for evaluating the effect of freshwater discharge on estuarine flow structure, water quality, and ecosystem responses from the perspective of magnitude and frequency (see the sketch below).
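The fitted relations quoted above can be applied directly; a small sketch using the abstract's constants, with an arbitrary example rainfall:

```python
# Apply the magnitude-frequency relations from the abstract.
import math

def discharge_volume(r3day_mm):
    """D [m^3] from the linear fit (valid for r3day >= 104 mm)."""
    return 1.111e8 + 1.677e6 * r3day_mm

def discharge_return_period(r3day_mm):
    """T_d [yr] from T_d = 0.117 exp(0.0155 r3day)."""
    return 0.117 * math.exp(0.0155 * r3day_mm)

def td_from_rainfall_period(t_r3_yr):
    """T_d [yr] from the rainfall return period: T_d = 1.326 T_r3^0.683."""
    return 1.326 * t_r3_yr ** 0.683

r = 150.0   # example 1day-before-3day-sum rainfall [mm] (arbitrary)
print(f"D   = {discharge_volume(r):.3e} m^3")
print(f"T_d = {discharge_return_period(r):.2f} yr (from rainfall depth)")
print(f"T_d = {td_from_rainfall_period(2.0):.2f} yr (for a 2-yr rainfall event)")
```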

A Time Series Analysis of Urban Park Behavior Using Big Data (빅데이터를 활용한 도시공원 이용행태 특성의 시계열 분석)

  • Woo, Kyung-Sook;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture / v.48 no.1 / pp.35-45 / 2020
  • This study focused on the park as a space supporting the behavior of citizens in modern urban society. A modern city park does not play one fixed role; since it is used by many people, its function and meaning can change with users' behavior, and online data may now shape both the choice of parks to visit and how they are used. This study therefore analyzed the change of usage behavior in Yeouido Park, Yeouido Hangang Park, and Yangjae Citizen's Forest from 2000 to 2018 by means of a time series analysis, using Big Data techniques such as text mining and social network analysis. The summary of the study is as follows. The dominant usage behavior of Yeouido Park shifted over time from "Ride" (dynamic behavior) in period I, to "Take" (information and communication service behavior) in period II, "See" (communicative behavior) in period III, and "Eat" (energy-source behavior) in period IV. In Yangjae Citizens' Forest, the dominant behavior was "Walk" (dynamic behavior) in periods I, II, and III and "Play" (dynamic behavior) in period IV. Regarding the factors affecting behavior, Yeouido Park had more varied factors related to sports, leisure, culture, art, and spare time than Yangjae Citizens' Forest, whose main usage behavior was instead shaped by various elements of natural resources. Second, behavior in the target areas was found to concentrate on certain main behaviors over time, which in turn select or limit future behaviors. This indicates that the spaces and facilities of the target areas have not been used evenly: rather than a variety of behaviors, a few main behaviors appear. The significance of this study is that it analyzes urban park usage with Big Data techniques and finds that urban parks are being transformed into play spaces where consumption takes place, beyond their role for rest and walking. The behavior occurring in modern urban parks is changing in both quantity and content, so discussion based on behavior collected through Big Data can improve our understanding of how citizens use city parks. The study also found that behaviors associated with static behavior in both parks had a great impact on other behaviors. (A minimal text-mining sketch follows below.)
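A minimal illustration of the period-wise text-mining step, with toy stand-ins for the crawled online documents; the study's actual corpus, Korean-language processing, and keyword scheme are not reproduced:

```python
# Count behavior terms per analysis period and report the dominant one,
# mirroring the study's "Ride"/"Take"/"See"/"Eat" period labels.
from collections import Counter

posts_by_period = {
    "I":  ["ride a bike along the han river", "ride and walk in yeouido park"],
    "II": ["take photos at the park", "walk there and take a picture"],
}
behavior_terms = {"ride", "take", "see", "eat", "walk", "play"}

for period, posts in posts_by_period.items():
    counts = Counter(w for p in posts for w in p.lower().split()
                     if w in behavior_terms)
    term, n = counts.most_common(1)[0]
    print(f"period {period}: dominant behavior '{term}' ({n} mentions)")
```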

Applications of Fuzzy Theory on The Location Decision of Logistics Facilities (퍼지이론을 이용한 물류단지 입지 및 규모결정에 관한 연구)

  • 이승재;정창무;이헌주
    • Journal of Korean Society of Transportation / v.18 no.1 / pp.75-85 / 2000
  • In existing optimization models, crisp data have been used in the objective function or constraints to derive the optimal solution, and subjective factors are excluded because complex and uncertain circumstances are treated as probabilistic ambiguity. In other words, the optimal solutions of existing models can completely satisfy the objective function only because industrial engineering methods are applied to minimize decision-making risk under strictly given conditions. As a result, in location problems decision-makers cannot respond appropriately to variation in demand and other variables, and are not offered a wide range of choices, because of insufficient information. Under these circumstances, this study develops a model for the location and size decision problems of logistics facilities using fuzzy theory, with the intention of supporting the most reasonable decision from a subjective point of view under ambiguous circumstances, building on existing decision-making problems that must satisfy the constraints while optimizing the objective function under strictly given conditions. After establishing a general mixed integer programming (MIP) model, based on the results of existing studies, that decides location and size simultaneously, a fuzzy mixed integer programming (FMIP) model is developed using fuzzy theory (a sketch of one common fuzzification follows below). The general linear programming software LINDO 6.01 is used to simulate and evaluate the developed model on examples and to judge the appropriateness and adaptability of the FMIP model for the real world.

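One common way to turn such an FMIP into a crisp problem is Zimmermann's max-λ formulation, in which the objective and soft constraints get linear membership functions and the minimum satisfaction λ is maximized. A sketch under invented costs, demands, and tolerances, assuming the open-source PuLP modeling library rather than LINDO:

```python
# Zimmermann-style fuzzy facility location: maximize the minimum satisfaction
# lambda over a fuzzified cost goal and soft capacity constraints.
import pulp

sites, customers = ["A", "B"], ["c1", "c2", "c3"]
open_cost = {"A": 100.0, "B": 120.0}
ship_cost = {("A", "c1"): 4, ("A", "c2"): 6, ("A", "c3"): 9,
             ("B", "c1"): 7, ("B", "c2"): 3, ("B", "c3"): 4}
demand = {"c1": 20, "c2": 30, "c3": 25}
cap = {"A": 60, "B": 60}
z_good, z_bad = 450.0, 550.0   # fully satisfactory / barely acceptable cost

m = pulp.LpProblem("fuzzy_location", pulp.LpMaximize)
lam = pulp.LpVariable("lam", 0, 1)                       # overall satisfaction
y = pulp.LpVariable.dicts("open", sites, cat="Binary")   # site open/closed
x = pulp.LpVariable.dicts("ship", (sites, customers), lowBound=0)

cost = (pulp.lpSum(open_cost[s] * y[s] for s in sites)
        + pulp.lpSum(ship_cost[s, c] * x[s][c] for s in sites for c in customers))

m += lam                                             # maximize min satisfaction
m += cost + (z_bad - z_good) * lam <= z_bad          # fuzzy cost goal
for c in customers:
    m += pulp.lpSum(x[s][c] for s in sites) >= demand[c]
for s in sites:
    # soft capacity: 10 units of tolerance that shrink as satisfaction rises
    m += pulp.lpSum(x[s][c] for c in customers) + 10 * lam <= cap[s] + 10
    m += pulp.lpSum(x[s][c] for c in customers) <= (cap[s] + 10) * y[s]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("lambda =", pulp.value(lam), " total cost =", pulp.value(cost))
print("open sites:", [s for s in sites if pulp.value(y[s]) > 0.5])
```

Widening the tolerance band (z_good, z_bad, or the capacity slack) changes how the solver trades cost against constraint satisfaction, which is exactly the subjective flexibility the abstract argues crisp MIP formulations lack.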