• Title/Summary/Keyword: evolution algorithm

Land Use Optimization using Genetic Algorithms - Focused on Yangpyeong-eup - (유전 알고리즘을 적용한 토지이용 최적화 배분 연구 - 양평군 양평읍 일대를 대상으로 -)

  • Park, Yoonsun;Lee, Dongkun;Yoon, Eunjoo;Mo, Yongwon;Leem, Jihun
    • Journal of Environmental Impact Assessment
    • /
    • v.26 no.1
    • /
    • pp.44-56
    • /
    • 2017
  • Sustainable development is important because the ultimate objective is efficient development that combines the economic, social, and environmental aspects of urban conservation. Despite Korea's rapid urbanization and economic development, the distribution of resources is inefficient, and land use is no exception. Land-use distribution is difficult, as it must consider a variety of purposes, and its solutions lie in a multi-objective optimization process. In this study, Yangpyeong-eup, Yangpyeong, Gyeonggi-do, is selected as the site because it is ecologically balanced, well preserved, and has the potential to support population increases. We use the genetic algorithm method because it helps evolve solutions for complex spatial problems such as the planning and distribution of land use; this study also modifies the mutation operator. With four goals and area restrictions, namely the spatial objectives of minimizing land-use conversion, conserving ecology, and maximizing economic profit, together with restricting area to a specific land use and setting a fixed area, we developed an optimal planning map. No urban areas at the site required preservation, and the high urban growth rate coincided with the optimization objective of maximizing economic profit. Taking the minimum point of the fitness score as the convergence point, we found that optimization occurred at approximately 1,500 generations. The results of this study can support planning in Yangpyeong-eup.
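
A minimal sketch of the kind of grid-based genetic algorithm this abstract describes, assuming a cell-wise land-use chromosome, made-up profit and conservation values, and a simple weighted-sum fitness; it illustrates the general approach, not the authors' implementation or their modified mutation operator.

```python
# Grid-based GA for land-use allocation (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

N_CELLS, N_CLASSES = 100, 4                  # e.g. 0=urban, 1=agriculture, 2=forest, 3=water
current_map = rng.integers(0, N_CLASSES, N_CELLS)
profit = np.array([5.0, 2.0, 0.5, 0.1])      # hypothetical economic profit per class
eco_value = np.array([0.1, 0.5, 5.0, 3.0])   # hypothetical conservation value per class

def fitness(plan):
    conversion = np.mean(plan != current_map)      # minimize land-use change
    economy = profit[plan].mean()                  # maximize economic profit
    ecology = eco_value[plan].mean()               # maximize ecological conservation
    return economy + ecology - 2.0 * conversion    # weighted combination (weights assumed)

def evolve(pop_size=60, generations=1500, p_mut=0.01):
    pop = rng.integers(0, N_CLASSES, (pop_size, N_CELLS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # tournament selection
        parents = pop[[max(rng.choice(pop_size, 3), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        # one-point crossover on consecutive parent pairs
        children = parents.copy()
        for k, c in enumerate(rng.integers(1, N_CELLS, pop_size // 2)):
            a, b = 2 * k, 2 * k + 1
            children[a, c:], children[b, c:] = parents[b, c:], parents[a, c:]
        # mutation: reassign a random cell to a random class
        mask = rng.random(children.shape) < p_mut
        children[mask] = rng.integers(0, N_CLASSES, mask.sum())
        pop = children
    return pop[np.argmax([fitness(ind) for ind in pop])]

best_plan = evolve()
```

In practice, the constraint handling (fixed areas, restricted uses) and the changed mutation scheme would follow the study's own formulation.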

Implementation of Intelligent Characters adapting to Action Patterns of Opponent Characters (상대캐릭터의 행동패턴에 적응하는 지능캐릭터의 구현)

  • Lee, Myun-Sub;Cho, Byeong-Heon;Jung, Sung-Hoon;Seong, Yeong-Rak;Oh, Ha-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.42 no.3
    • /
    • pp.31-38
    • /
    • 2005
  • This paper proposes an implementation method for intelligent characters that can properly adapt to the action patterns of opponent characters in fighting games by using a genetic algorithm. For such intelligent characters, the past action patterns of opponent characters should be included in the learning process. To verify the effectiveness of the proposed method, two types of experiments are performed and their results are compared. In the first experiment (exp-1), intelligent characters consider only the current action and its step of an opponent character. In the second experiment (exp-2), they additionally take the past actions of the opponent character into account. As a performance index, the ratio of the score obtained by an intelligent character to that of the opponent character is adopted. The experimental results show that although the performance index of exp-1 is better than that of exp-2 at the beginning of the stages, the performance index of exp-2 outperforms that of exp-1 as the stages go on. Moreover, optimum solutions are always found in all experimental cases in exp-2. Furthermore, intelligent characters in exp-2 could learn moving actions (forward and backward) and waiting actions to obtain more points through self-evolution.
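
A minimal sketch of the exp-2 idea, assuming a hypothetical action set, payoff rule, and opponent pattern: the chromosome maps (previous opponent action, current opponent action) pairs to responses, and fitness is the score ratio between the intelligent character and the opponent.

```python
# GA for a lookup-table character conditioned on past + current opponent actions (toy sketch).
import random

ACTIONS = ["punch", "kick", "guard", "back"]                  # hypothetical action set
COUNTER = {"punch": "guard", "kick": "back", "guard": "kick", "back": "punch"}

def opponent_pattern(t):
    # opponent cycles through a fixed action pattern
    return ACTIONS[[0, 0, 1, 2, 3, 1][t % 6]]

def fitness(chromosome, rounds=120):
    """Score ratio of the intelligent character to the opponent."""
    my_score = opp_score = 1e-9
    prev = ACTIONS[0]
    for t in range(rounds):
        cur = opponent_pattern(t)
        my_action = chromosome[(prev, cur)]                   # keyed on past + current action
        if my_action == COUNTER[cur]:
            my_score += 1
        else:
            opp_score += 1
        prev = cur
    return my_score / opp_score

def random_chromosome():
    return {(p, c): random.choice(ACTIONS) for p in ACTIONS for c in ACTIONS}

def evolve(pop_size=30, generations=200, p_mut=0.05):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            child = {k: random.choice([a[k], b[k]]) for k in a}   # uniform crossover
            for k in child:                                        # mutation
                if random.random() < p_mut:
                    child[k] = random.choice(ACTIONS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```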

A Schematic Map Generation System Using Centroidal Voronoi Tessellation and Icon-Label Replacement Algorithm (중심 보로노이 조각화와 아이콘 및 레이블 배치 알고리즘을 이용한 도식화된 지도 생성 시스템)

  • Ryu Dong-Sung;Uh Yoon;Park Dong-Gyu
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.2
    • /
    • pp.139-150
    • /
    • 2006
  • A schematic map is a special-purpose map generated so that its objects can be recognized easily and conveniently by simplifying and highlighting the logical geometric information of a map. To produce a schematic map with roads, labels, and icons, we must generate a simplified route map and place many geometric objects. In performing this task, however, overlap between geometric objects arises whenever the objects are placed, so the objects need to be placed without overlap; this requires considerable computational resources because of the high complexity of the original geometric map. We propose a schematic map generation system whose maps consist of icons and labels. The proposed system has the following steps: 1) eliminating the kinks that are least relevant to the shape of the polygonal curve using the DCE (Discrete Curve Evolution) method; 2) making an evenly distributed route using CVT (Centroidal Voronoi Tessellation) and grid snapping, which preserves the structural information of the route map; and 3) placing icon and label information with a collision-avoidance algorithm. As a result, we can place the vertices at a uniform distance, guarantee the space available for placing icons and labels, minimize the overlap between icons and labels, and obtain a more schematized map.
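
A minimal sketch of the CVT step via Lloyd's algorithm, which relaxes points toward the centroids of their Voronoi regions; this is a generic illustration of step 2, not the authors' implementation, and the sample-based centroid estimate is an assumption.

```python
# Lloyd's algorithm for an approximate Centroidal Voronoi Tessellation in the unit square.
import numpy as np

rng = np.random.default_rng(1)

def lloyd_cvt(seeds, n_samples=20000, iterations=30):
    """Relax 2D seed points toward the centroids of their Voronoi regions,
    approximated with dense random samples of the unit square."""
    samples = rng.random((n_samples, 2))
    for _ in range(iterations):
        # assign each sample to its nearest seed (i.e. its Voronoi region)
        d = np.linalg.norm(samples[:, None, :] - seeds[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        # move each seed to the centroid of its region
        for i in range(len(seeds)):
            members = samples[nearest == i]
            if len(members):
                seeds[i] = members.mean(axis=0)
    return seeds

route_vertices = rng.random((40, 2))        # e.g. simplified route vertices after DCE
even_vertices = lloyd_cvt(route_vertices.copy())
```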

Particle Based Discrete Element Modeling of Hydraulic Stimulation of Geothermal Reservoirs, Induced Seismicity and Fault Zone Deformation (수리자극에 의한 지열저류층에서의 유도지진과 단층대의 변형에 관한 입자기반 개별요소법 모델링 연구)

  • Yoon, Jeoung Seok;Hakimhashemi, Amir;Zang, Arno;Zimmermann, Gunter
    • Tunnel and Underground Space
    • /
    • v.23 no.6
    • /
    • pp.493-505
    • /
    • 2013
  • This numerical study investigates seismicity and fault slip induced by fluid injection in a deep geothermal reservoir with pre-existing fractures and a fault. Particle Flow Code 2D is used, extended with a coupled hydro-mechanical fluid flow algorithm and an acoustic emission moment tensor inversion algorithm. The output of the model includes the spatio-temporal evolution of induced seismicity (hypocenter locations and magnitudes) and fault deformation (failure and slip) in relation to the fluid pressure distribution. The model is applied to a case of fluid injection at constant rates changing in three steps, using fluids of different characteristics, i.e. viscosity, and different injection locations. In a fractured reservoir, the spatio-temporal distribution of the induced seismicity differs significantly depending on the viscosity of the fracturing fluid: injection of a low-viscosity fluid results in a larger induced seismicity cloud, as the fluid migrates easily into the reservoir and causes a larger number and magnitude of induced events in the post-shut-in period. In a faulted reservoir, fault deformation (co-seismic failure and aseismic slip) can be induced by a small perturbation of the fracturing fluid pressure (<0.1 MPa) when the injection location is set close to the fault. The presented numerical modeling technique can be used in the geothermal industry to predict the induced seismicity pattern and magnitude distribution resulting from hydraulic stimulation of geothermal reservoirs prior to the actual injection operation.
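
The fault-slip observation can be illustrated with a schematic Mohr-Coulomb check (not the particle-based PFC2D model): pore pressure reduces the effective normal stress on a critically stressed fault patch, so a perturbation well below 0.1 MPa can trigger slip. All numbers below are invented for illustration.

```python
# Schematic Coulomb failure check with pore-pressure weakening (illustrative values).

def coulomb_slip(shear_stress_mpa, normal_stress_mpa, pore_pressure_mpa,
                 friction=0.6, cohesion_mpa=0.0):
    """Return True if the fault patch fails (slips) under the given stress state."""
    effective_normal = normal_stress_mpa - pore_pressure_mpa
    strength = cohesion_mpa + friction * effective_normal
    return shear_stress_mpa > strength

# A critically stressed patch near the injection point: a pressure increase of
# less than 0.1 MPa is enough to push it past the failure line.
tau, sigma_n, p0 = 35.96, 60.0, 0.0
print(coulomb_slip(tau, sigma_n, p0))          # False: stable before injection
print(coulomb_slip(tau, sigma_n, p0 + 0.08))   # True: slips after a ~0.08 MPa increase
```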

Prediction of Adfreeze Bond Strength Using Artificial Neural Network (인공신경망을 활용한 동착강도 예측)

  • Ko, Sung-Gyu;Shin, Hyu-Soung;Choi, Chang-Ho
    • Journal of the Korean Geotechnical Society
    • /
    • v.27 no.11
    • /
    • pp.71-81
    • /
    • 2011
  • Adfreeze bond strength is a primary design parameter that determines the bearing capacity of pile foundations in frozen ground. It is reported that adfreeze bond strength is influenced by various affecting factors such as freezing temperature, confining pressure, characteristics of the pile surface, and soil type. However, only a limited number of studies have been performed to obtain adfreeze bond strength, and past studies considered only a few affecting factors, such as freezing temperature and pile type. There is therefore a limitation in estimating this design parameter for pile foundations in frozen ground when various factors are involved. In this study, an artificial neural network algorithm was employed to predict adfreeze bond strength from various affecting factors. From five past studies, 137 data points covering various experimental conditions were collected and randomly divided into 100 training data and 37 testing data. Based on the analysis results, it was found that various affecting factors must be considered for the prediction of adfreeze bond strength, and that prediction with the artificial neural network algorithm provides sufficient reliability. In addition, a parametric study showed that temperature and pile type are the primary affecting factors for adfreeze bond strength, that vertical stress has an influence only in a certain temperature zone, and that different soil types and loading speeds may change the evolution trend of adfreeze bond strength.
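
A minimal sketch of this kind of ANN regression, using scikit-learn rather than the authors' implementation; the feature set mirrors the affecting factors named above, but the data are synthetic, not the 137 collected test results.

```python
# MLP regression of adfreeze bond strength from affecting factors (synthetic data sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 137
X = np.column_stack([
    rng.uniform(-20, -1, n),      # freezing temperature [deg C]
    rng.uniform(0, 200, n),       # vertical (confining) stress [kPa]
    rng.integers(0, 3, n),        # pile surface type (coded)
    rng.integers(0, 4, n),        # soil type (coded)
    rng.uniform(0.1, 5.0, n),     # loading speed [mm/min]
])
# a made-up target so the example runs end to end
y = -15 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 10, n)

# 100 training / 37 testing points, split at random as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=100, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```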

Improved Method for Learning Context-Free Grammar using Tabular representation

  • Jung, Soon-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.2
    • /
    • pp.43-51
    • /
    • 2022
  • In this paper, we propose improvements to an existing method for learning a context-free grammar (CFG) in grammatical inference that uses a tabular representation (TBL) as the chromosome of a genetic algorithm, and we show more efficient experimental results. There are two improvements. The first is an improved fitness function whose formula reflects the evaluation of positive and negative learning examples at the same time. The second is to classify the partitions corresponding to the TBLs generated from positive learning examples according to the length of the learning string, run the evolution process per class, and adjust the composition ratio according to the success rate, so that the learning method is linked to survival in the next generation. These improvements provide better efficiency than the existing method by resolving the complexity and difficulty of the crossover and generalization steps between individuals of different learning-example sizes. We experimented with the languages proposed in the existing method, and the results show a faster generation rate: learning completes in fewer generations at the same success rate as the existing method. In the future, this method can be tried on extended CYK, and it further suggests the possibility of being applied to more complex parsing tables.
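
A minimal sketch of the fitness idea, assuming a CNF grammar encoding and an equal weighting of positive and negative examples (the paper's exact formula may differ): membership is tested with a CYK table, the same tabular structure the TBL chromosome is based on.

```python
# CYK membership test plus a fitness combining positive and negative examples (sketch).
from itertools import product

def cyk_accepts(grammar, start, word):
    """grammar: dict nonterminal -> set of productions, each a 1-tuple (terminal)
    or a 2-tuple (nonterminal, nonterminal), in Chomsky normal form."""
    n = len(word)
    if n == 0:
        return False
    # table[i][j] = set of nonterminals deriving word[i:i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = {A for A, prods in grammar.items() if (ch,) in prods}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                for A, prods in grammar.items():
                    if any((B, C) in prods for B, C in product(left, right)):
                        table[i][span - 1].add(A)
    return start in table[0][n - 1]

def fitness(grammar, positives, negatives, start="S"):
    pos_ok = sum(cyk_accepts(grammar, start, w) for w in positives)
    neg_ok = sum(not cyk_accepts(grammar, start, w) for w in negatives)
    return pos_ok / len(positives) + neg_ok / len(negatives)

# toy CNF grammar for the language a^n b^n (n >= 1)
g = {"S": {("A", "B"), ("A", "T")}, "T": {("S", "B")},
     "A": {("a",)}, "B": {("b",)}}
print(fitness(g, ["ab", "aabb", "aaabbb"], ["a", "ba", "abb"]))   # 2.0 = perfect score
```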

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.9
    • /
    • pp.351-360
    • /
    • 2018
  • The LWR (Locally Weighted Regression) model is traditionally a lazy learning model: it obtains a prediction for a given input variable, the query point, from a regression equation fitted over a short interval, in which training points closer to the query point receive higher weights. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed incremental ensemble learning method for LWR sequentially generates and integrates LWR models over time, using a genetic algorithm to obtain the solution at a specific query point. A weakness of existing LWR models is that multiple LWR models can be generated depending on the indicator function and the data sample selection, and the quality of the predictions varies with the chosen model; however, no research has been conducted on the selection or combination of multiple LWR models. In this study, after generating an initial LWR model according to the indicator function and the sample data set, we iterate the evolutionary learning process to obtain a proper indicator function, and we assess the LWR models on other sample data sets to overcome data set bias. We adopt an eager learning approach to gradually generate and store LWR models as data are generated for all sections. To obtain a prediction at a specific point in time, an LWR model is generated from newly generated data within a predetermined interval and then combined with the existing LWR models of that section using a genetic algorithm. The proposed method shows better results than combining multiple LWR models by simple averaging. The results of this study are also compared with predictions from multiple regression analysis on real data, such as the hourly traffic volume of a specific area and the hourly sales of a highway rest area.
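
A minimal sketch of a single locally weighted regression prediction, the base model being ensembled above: a weighted least-squares fit in which training points closer to the query point receive higher Gaussian-kernel weights. The bandwidth and data are illustrative, and the GA-based combination of models is not shown.

```python
# One locally weighted regression (LWR) prediction at a query point (illustrative sketch).
import numpy as np

def lwr_predict(X, y, query, bandwidth=1.0):
    """Predict y at `query` with a locally weighted linear fit."""
    Xb = np.column_stack([np.ones(len(X)), X])            # add intercept column
    qb = np.concatenate([[1.0], np.atleast_1d(query)])
    dist = np.linalg.norm(X - query, axis=-1) if X.ndim > 1 else np.abs(X - query)
    w = np.exp(-dist ** 2 / (2 * bandwidth ** 2))          # Gaussian kernel weights
    W = np.diag(w)
    theta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ y   # weighted least squares
    return qb @ theta

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 10, 80))                        # e.g. hour of day
y = np.sin(X) * 10 + rng.normal(0, 1, 80)                  # e.g. traffic volume (toy)
print(lwr_predict(X, y, query=4.5))
```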

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural network (ANN), and multiclass support vector machine (MSVM) have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that attempts to simulate the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. In particular, the mutation operator prevents GA from falling into local optima, so the globally optimal or near-optimal solution can be found. GA has been popularly applied to search for the optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is in bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using various statistical methods, including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as the candidate independent variables. The dependent variable, i.e. credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the data for each class was used for training and the remaining 20 percent for validation, and to overcome the small sample size, we applied five-fold cross validation to our dataset. To examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM.
In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source software package, and Evolver 5.5, a commercial software package that provides GA. The other comparative models were experimented with using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the comparative models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were found to be almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
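
A minimal sketch of the GAMSVM idea, assuming an RBF-kernel SVM from scikit-learn and synthetic four-class data in place of LIBSVM/Evolver and the real bond-rating data: one chromosome encodes both the kernel parameters (C, gamma) and a binary feature-selection mask, and fitness is cross-validated accuracy.

```python
# GA jointly optimizing SVM kernel parameters and feature subset (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=14, n_informative=6,
                           n_classes=4, random_state=0)     # stand-in for the 14 ratios

def fitness(chrom):
    log_c, log_gamma, mask = chrom
    if not mask.any():
        return 0.0
    model = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)    # one-vs-one multiclass by default
    return cross_val_score(model, X[:, mask], y, cv=5).mean()

def random_chrom():
    return [rng.uniform(-2, 3), rng.uniform(-4, 1), rng.random(14) < 0.5]

def mutate(chrom, p=0.1):
    log_c, log_gamma, mask = chrom
    mask = mask ^ (rng.random(14) < p)                       # flip some feature bits
    return [log_c + rng.normal(0, 0.3), log_gamma + rng.normal(0, 0.3), mask]

pop = [random_chrom() for _ in range(20)]
for _ in range(15):                                          # a few GA generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    children = []
    for _ in range(10):
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        child = [a[0], b[1], np.where(rng.random(14) < 0.5, a[2], b[2])]   # crossover
        children.append(mutate(child))
    pop = elite + children

best = max(pop, key=fitness)
print("best cross-validated accuracy:", fitness(best))
```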

The Understanding and Application of Noise Reduction Software in Static Images (정적 영상에서 Noise Reduction Software의 이해와 적용)

  • Lee, Hyung-Jin;Song, Ho-Jun;Seung, Jong-Min;Choi, Jin-Wook;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.54-60
    • /
    • 2010
  • Purpose: Nuclear medicine manufacturers provide various software packages that shorten imaging time using their own image processing techniques, such as UltraSPECT, ASTONISH, Flash3D, Evolution, and nSPEED. Seoul National University Hospital has introduced software from Siemens and Philips, but it was still hard to understand the algorithmic difference between the two packages. Thus, the purpose of this study was to identify the difference between the two packages in planar images and to investigate the possibility of applying them to images produced with high-energy isotopes. Materials and Methods: First, a phantom study was performed to understand the difference between the packages in static studies. Images with various count levels were acquired and analyzed quantitatively after applying PIXON (Siemens) and ASTONISH (Philips), respectively. We then applied the packages to applicable static studies and examined their merits and demerits, and also applied them to images produced with high-energy isotopes. Finally, a blind test, excluding the phantom images, was conducted by nuclear medicine physicians. Results: In the FWHM test using a capillary source, there was nearly no difference between the pre- and post-processing images with PIXON, whereas ASTONISH showed an improvement. However, both the standard deviation (SD) and the variance decreased with PIXON, while they increased markedly with ASTONISH. In the background variability comparison test using the IEC phantom, PIXON decreased overall, while ASTONISH increased somewhat. The contrast ratio of each sphere increased for both methods. In terms of image scale, the window width increased 4~5 times after processing with PIXON, while ASTONISH showed nearly no difference. From the phantom test analysis, ASTONISH appeared applicable to studies requiring quantitative analysis or high contrast, and PIXON to studies with insufficient counts or long acquisition times. Conclusion: Quantitative values used in routine analysis generally improved after applying the two software packages; however, it seems hard to maintain consistency across all nuclear medicine studies because the resulting images differ due to the characteristics of the algorithms rather than differences between the gamma cameras. It is also hard to expect high image quality with time-shortening methods such as whole-body scans. Nevertheless, the packages can be applied to static studies in consideration of the algorithm characteristics, and a change in image quality can be expected through application to high-energy isotope images.
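
For reference, the image-quality measures mentioned above (FWHM of a line profile, percent contrast, and background variability) can be computed as in this brief sketch; the formulas follow common NEMA-style definitions and the numbers are illustrative, not the study's measurements.

```python
# Common image-quality metrics for the capillary and IEC phantom tests (illustrative sketch).
import numpy as np

def fwhm(profile, pixel_size_mm=1.0):
    """Full width at half maximum of a 1D count profile, by linear interpolation."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the half-maximum crossings on each side of the peak
    l = left - (p[left] - half) / (p[left] - p[left - 1])
    r = right + (p[right] - half) / (p[right] - p[right + 1])
    return (r - l) * pixel_size_mm

def percent_contrast(hot_mean, bkg_mean, activity_ratio=4.0):
    """NEMA-style percent contrast for a hot sphere versus background ROIs."""
    return (hot_mean / bkg_mean - 1.0) / (activity_ratio - 1.0) * 100.0

def background_variability(bkg_roi_means):
    """Percent background variability: SD of background ROI means over their mean."""
    b = np.asarray(bkg_roi_means, dtype=float)
    return b.std(ddof=1) / b.mean() * 100.0

profile = [2, 5, 20, 80, 100, 85, 25, 6, 3]        # counts across a capillary source
print(fwhm(profile),
      percent_contrast(300, 100),
      background_variability([98, 102, 95, 105]))
```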

Optimization of Single-stage Mixed Refrigerant LNG Process Considering Inherent Explosion Risks (잠재적 폭발 위험성을 고려한 단단 혼합냉매 LNG 공정의 설계 변수 최적화)

  • Kim, Ik Hyun;Dan, Seungkyu;Cho, Seonghyun;Lee, Gibaek;Yoon, En Sup
    • Korean Chemical Engineering Research
    • /
    • v.52 no.4
    • /
    • pp.467-474
    • /
    • 2014
  • Preliminary design of a chemical process establishes economic feasibility through calculation of mass and energy balances and makes it possible to produce the desired product under the given conditions. Through this design stage, the process acquires characteristics that are hard to change later, since the materials, reactions, unit configuration, and operating conditions are determined. These characteristics may be very economical, but they also imply various potential risk factors. Therefore, it is extremely important to design the process considering both economics and safety by integrating process simulation and quantitative risk analysis at the preliminary design stage. The target of this study is an LNG liquefaction process. Through simulation using Aspen HYSYS and quantitative risk analysis, the design variables of the process were determined so as to minimize the inherent explosion risk and the operating cost. Instead of the built-in optimization tool of Aspen HYSYS, the optimization was performed with a stochastic optimization algorithm (Covariance Matrix Adaptation Evolution Strategy, CMA-ES) implemented through automation between Aspen HYSYS and Matlab. The study found that the key variable for enhancing inherent safety was the operating pressure of the mixed refrigerant. The inherent risk could be reduced by about 4~18% by increasing the operating cost by about 0.5~10%. As the operating cost increased, the absolute value of risk decreased as expected, but the cost-effectiveness of risk reduction declined. Integrating process simulation and quantitative risk analysis made it possible to design an inherently safer process, and it is expected to be useful for designing less risky processes since risk factors can be monitored numerically during the preliminary process design stage.
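
A minimal sketch of the CMA-ES ask-and-tell loop used for this kind of black-box design optimization, with the pycma Python package standing in for the Matlab/HYSYS automation; the objective below is a toy stand-in, not the HYSYS process simulation or the quantitative risk model.

```python
# CMA-ES loop over process design variables (toy objective, illustrative only).
import math
import cma   # the pycma package: pip install cma

def objective(x):
    """Stand-in for running the process simulation and risk model at design point x:
    returns operating cost plus a weighted inherent-explosion-risk penalty."""
    mr_pressure, mr_flow, condenser_duty = x              # hypothetical design variables
    operating_cost = (0.2 * mr_pressure ** 2
                      + (mr_flow - 2.0) ** 2
                      + 0.1 * (condenser_duty - 3.0) ** 2)
    explosion_risk = math.exp(-mr_pressure)               # toy risk model
    return operating_cost + 5.0 * explosion_risk          # weight trades cost vs. risk

es = cma.CMAEvolutionStrategy([1.0, 1.0, 1.0], 0.5)       # initial guess, initial step size
while not es.stop():
    candidates = es.ask()                                 # sample a population of design points
    es.tell(candidates, [objective(x) for x in candidates])
es.result_pretty()                                        # report the best design found
```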