• Title/Abstract/Keyword: Initial Search Point

Search results: 82 items (processing time: 0.025 s)

유전과 기울기 최적화기법을 이용한 퍼지 파라메터의 자동 생성 (Automatic Generation of Fuzzy Parameters Using Genetic and Gradient Optimization Techniques)

  • 유동완;라경택;전순용;서보혁
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 1998년도 하계학술대회 논문집 B
    • /
    • pp.515-518
    • /
    • 1998
  • This paper proposes a new hybrid algorithm for auto-tuning fuzzy controllers to improve their performance. The presented algorithm automatically estimates the optimal values of the membership functions, fuzzy rules, and scaling factors of fuzzy controllers using a genetic-MGM algorithm. The goal of the proposed algorithm is to improve search efficiency by combining genetic and modified gradient optimization techniques. The proposed genetic-MGM algorithm is based on both the standard genetic algorithm and a gradient method. If the maximum point does not change around an optimal value over a given number of generations, the genetic-MGM algorithm switches from the genetic algorithm to the MGM (Modified Gradient Method), which reduces the number of variables, and searches for the optimum starting from the current maximum point. Because the number of variables is reduced, this algorithm not only computes faster than a pure genetic algorithm but also overcomes the genetic algorithm's disadvantages. Simulation results verify the validity of the presented method.


유전 알고리듬을 이용한 자동 동조 퍼지 제어기의 하이브리드 최적화 기법 (Hybrid Optimization Techniques Using Genetic Algorithms for Auto-Tuning Fuzzy Logic Controllers)

  • 유동완;이영석;박윤호;서보혁
    • 대한전기학회논문지:전력기술부문A
    • /
    • Vol. 48 No. 1
    • /
    • pp.36-43
    • /
    • 1999
  • This paper proposes a new hybrid genetic algorithm for auto-tuning fuzzy controllers to improve their performance. In general, fuzzy controllers use membership functions, fuzzy rules, and scaling factors pre-determined by trial and error. The presented algorithm automatically estimates the optimal values of the membership functions, fuzzy rules, and scaling factors of fuzzy controllers using a hybrid genetic algorithm. The goal of the proposed algorithm is to improve search efficiency through the hybrid optimization technique. The proposed hybrid genetic algorithm is based on both the standard genetic algorithm and a modified gradient method. If the maximum point does not change around an optimal value over a given number of generations, the hybrid genetic algorithm switches from the genetic algorithm to the MGM (Modified Gradient Method), which reduces the number of variables, and searches for the optimum starting from the current maximum point. Because the number of variables is reduced, this algorithm not only computes faster than a pure genetic algorithm but also overcomes the genetic algorithm's disadvantages. Simulation results verify the validity of the presented method.
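
The switching scheme these two abstracts describe (run a genetic algorithm until the best point stalls, then hand the incumbent over to gradient refinement) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the objective, the GA operators, and the stall criterion are assumptions, and the papers tune fuzzy membership functions, rules, and scaling factors rather than a single scalar.

```python
import random

def fitness(x):
    # Toy one-dimensional objective to maximize; the papers optimize
    # fuzzy-controller parameters instead (assumed stand-in).
    return -(x - 2.0) ** 2

def gradient_refine(x, lr=0.1, steps=100, h=1e-5):
    # Finite-difference gradient ascent standing in for the MGM stage.
    for _ in range(steps):
        g = (fitness(x + h) - fitness(x - h)) / (2 * h)
        x += lr * g
    return x

def genetic_mgm(pop_size=20, generations=200, stall_limit=15):
    """Run a simple real-coded GA; once the best point stalls for
    `stall_limit` generations, switch to the gradient stage seeded with
    the incumbent maximum (the genetic-MGM switching idea)."""
    pop = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    best, stall = max(pop, key=fitness), 0
    for _ in range(generations):
        survivors = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        pop = survivors + [s + random.gauss(0.0, 0.5) for s in survivors]
        new_best = max(pop, key=fitness)
        if fitness(new_best) > fitness(best) + 1e-9:
            best, stall = new_best, 0
        else:
            stall += 1
        if stall >= stall_limit:
            break                    # GA has converged; stop evolving
    return gradient_refine(best)     # refine with the gradient method
```

The gradient stage works on a single retained point rather than the whole population, which is where the claimed reduction in variables and computing time comes from.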


능동적 블록정합기법을 이용한 객체의 움직임 검출에 관한 연구 (A Study on Motion Detection of Object Using Active Block Matching Algorithm)

  • 이창수;박미옥;이경석
    • 한국통신학회논문지
    • /
    • Vol. 31 No. 4C
    • /
    • pp.407-416
    • /
    • 2006
  • Detecting the motion of an object through a camera is difficult because of unwanted noise and changes in illumination, and an object that remains stationary for a certain period after entering the scene may be absorbed into the background. An algorithm is therefore needed that can extract the exact object and detect its motion from images captured in real time. In this paper, to detect the exact motion of an object, part of the background image that changes over time is replaced, and at the moment the object enters the scene its contour points are extracted by a mesh-type pixel test over the object region. From the extracted contour points, a minimum block enclosing the object's rectangular region is generated, along with a variable search block for fast motion detection, so that the object's motion is detected accurately. In an experimental performance evaluation, the designed and implemented system showed a high accuracy of over 95%.
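
The search inside the variable search block is a block matching problem. Below is a minimal sum-of-absolute-differences (SAD) sketch in Python; the block size, search radius, and cost function are illustrative assumptions, not the paper's mesh-type pixel test or its block generation rules.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))

def best_match(ref_block, frame, top, left, radius):
    """Search `frame` within +/-`radius` pixels of (top, left) for the
    candidate block most similar to `ref_block`; return the displacement
    (dy, dx) of the best match."""
    h, w = len(ref_block), len(ref_block[0])
    best_disp, best_cost = None, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y and y + h <= len(frame) and 0 <= x and x + w <= len(frame[0]):
                cand = [row[x:x + w] for row in frame[y:y + h]]
                cost = sad(ref_block, cand)
                if cost < best_cost:
                    best_disp, best_cost = (dy, dx), cost
    return best_disp
```

Enlarging or shrinking the radius trades detection speed against the largest displacement that can be tracked, which is the motivation for a variable search block.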

토크제어를 이용한 풍력발전시스템의 적응 최대 출력 제어 (The Adaptive Maximum Power Point Tracking Control in Wind Turbine System Using Torque Control)

  • 현종호;김경연
    • 전기전자학회논문지
    • /
    • Vol. 19 No. 2
    • /
    • pp.225-231
    • /
    • 2015
  • In maximum power point tracking (MPPT) with torque control, the parameter K, which determines how much wind energy is converted into electrical energy, varies with changes in blade geometry, air density, and so on. When K is not at its optimal value, output power is lost, so finding the optimal K despite this variation is an important problem for reducing losses in a wind turbine system. This paper considers a wind turbine system using bidirectional converter control and torque control, and proposes an adaptive maximum power point tracking algorithm that performs fast control using an initial K, estimates the mechanical output power with a Kalman filter, and feeds the estimate back as the input of the MPPT algorithm, thereby achieving optimal maximum power point tracking.
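
The torque law at the heart of such MPPT control, and a crudely simplified adaptation loop, can be sketched as follows. The update rule here is an assumed simple integrator standing in for the paper's Kalman-filter power estimate, and all numbers are illustrative.

```python
def torque_command(omega, K):
    """Optimal-torque MPPT law: tau = K * omega^2, which settles the
    turbine at the tip-speed ratio of maximum power capture."""
    return K * omega ** 2

def update_K(K_est, omega, p_measured, gain=0.05):
    """One adaptation step (assumed simple integrator, not the paper's
    Kalman filter): the torque law predicts mechanical power
    P = tau * omega = K * omega^3, so nudge K until the prediction
    agrees with the measured power."""
    p_pred = torque_command(omega, K_est) * omega
    return K_est + gain * (p_measured - p_pred)
```

Repeating the update at each control period drives K toward the value consistent with the measured power, which is the adaptive element the abstract describes.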

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • 유통과학연구
    • /
    • Vol. 8 No. 3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the on-line advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost and low-efficiency problems with banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the beginning of the 2000s, when Internet advertising came to be activated, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising showed rapid growth and started to outdo display advertising as of 2005. Keyword advertising refers to the advertising technique that exposes relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals like banner advertising, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than previous advertising in that, instead of the seller discovering customers and running an advertisement for them as with TV, radio, or banner advertising, it exposes advertisements to visiting customers. Keyword advertising makes it possible for a company to seek publicity on line simply by making use of a single word and to achieve maximum efficiency at minimum cost.
The strong point of keyword advertising is that customers are allowed to directly contact the products in question through its more efficient advertising compared to the advertisements of mass media such as TV and radio. The weak point of keyword advertising is that a company must have its advertisement registered on each and every portal site and may find it hard to exercise substantial supervision over its advertisement, there being a possibility of its advertising expenses exceeding its profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at a low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the metered-rate system: a company pays for the number of clicks on the keyword that users have searched. This is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on the flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures, not the number of clicks. This method fixes a price for advertisement on the basis of 1,000 exposures and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. The weak point of the CPC method is that advertising costs can rise through constant clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn its visitors into prospective customers.
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords into use when running ads. When first running an ad, he or she should give priority to which keyword to select, considering how many individuals using a search engine will click the keyword in question and how much the advertisement will cost. As the popular keywords that users of search engines frequently use are expensive in terms of unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords, also referred to as peripheral keywords or extension keywords, are combinations of major keywords. Most keywords are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, causing little antipathy to it; but it fails to attract much attention because most keyword advertising is in the form of text. Image-embedded advertising is easy to notice because of its images, but it is exposed on the lower part of a web page and recognized as an advertisement, which leads to a low click-through rate. However, its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that is easy for people to recognize, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on the events of the sites in question and the composition of products as a vehicle for monitoring their behavior in detail.
Besides, keyword advertising allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, derived from the numbers of visitors and page views and from cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data; as it is almost impossible to analyze them directly, one analyzes them using log-analysis solutions. The generic information that can be extracted from log-analysis tools includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. These data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers the chance to purchase the keywords in question once the advertising contract is over.
On sites that give priority to established advertisers, if an advertiser relies on keywords sensitive to seasons and timeliness, he or she would do well to purchase a vacant advertising slot so as not to miss the appropriate timing for advertising. However, Naver does not give priority to existing advertisers as far as keyword advertisements are concerned; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period for advertising. This study is designed to take a look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that advertisements are registered at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has its weak points, too: the CPC method is not a perfect advertising model among the search advertisements in the on-line market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create a point of contact with customers.
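
The two billing models described above can be compared with a few lines of arithmetic. The figures used in the example are illustrative, not market rates.

```python
def cpc_cost(clicks, cost_per_click):
    """CPC (metered-rate) billing: pay per click on the keyword ad."""
    return clicks * cost_per_click

def cpm_cost(impressions, cost_per_thousand):
    """CPM (flat-rate) billing: pay per 1,000 exposures of the ad."""
    return impressions / 1000 * cost_per_thousand

def cheaper_model(impressions, click_through_rate,
                  cost_per_click, cost_per_thousand):
    """For the same number of exposures, report which billing model is
    cheaper given an expected click-through rate."""
    clicks = impressions * click_through_rate
    cpc = cpc_cost(clicks, cost_per_click)
    cpm = cpm_cost(impressions, cost_per_thousand)
    return "CPC" if cpc < cpm else "CPM"
```

The break-even point depends on the click-through rate: below it, paying per click is cheaper; above it, the flat per-exposure rate wins, which is one reason an advertiser should estimate expected clicks before choosing a model.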


Extraction of Passive Device Model Parameters Using Genetic Algorithms

  • Yun, Il-Gu;Carastro, Lawrence A.;Poddar, Ravi;Brooke, Martin A.;May, Gary S.;Hyun, Kyung-Sook;Pyun, Kwang-Eui
    • ETRI Journal
    • /
    • Vol. 22 No. 1
    • /
    • pp.38-46
    • /
    • 2000
  • The extraction of model parameters for embedded passive components is crucial for designing and characterizing the performance of multichip module (MCM) substrates. In this paper, a method for optimizing the extraction of these parameters using genetic algorithms is presented. The results of this method are compared with optimization using the Levenberg-Marquardt (LM) algorithm used in the HSPICE circuit modeling tool. A set of integrated resistor structures is fabricated, and their scattering parameters are measured for a range of frequencies from 45 MHz to 5 GHz. Optimal equivalent circuit models for these structures are derived from the s-parameter measurements using each algorithm. Predicted s-parameters for the optimized equivalent circuit are then obtained from HSPICE. The difference between the measured and predicted s-parameters in the frequency range of interest is used as a measure of the accuracy of the two optimization algorithms. It is determined that the LM method is extremely dependent upon the initial starting point of the parameter search and is thus prone to becoming trapped in local minima. This drawback is alleviated and the accuracy of the parameter values obtained is improved using genetic algorithms.
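
The sensitivity to the initial search point that the abstract reports can be reproduced on a toy objective. Everything below is an assumption made for illustration: the paper fits equivalent-circuit parameters to measured s-parameters rather than a one-dimensional curve, and the local optimizer here is plain finite-difference gradient descent standing in for Levenberg-Marquardt.

```python
import random

def error(x):
    # Toy multimodal fitting error: a shallow local minimum near x = +2
    # and the global minimum near x = -2.
    return x ** 4 - 8 * x ** 2 + 3 * x

def gradient_descent(x, lr=0.01, steps=500, h=1e-5):
    """Local search (LM-like in spirit): fast, but it stays in whichever
    basin the starting point x lies in."""
    for _ in range(steps):
        g = (error(x + h) - error(x - h)) / (2 * h)
        x -= lr * g
    return x

def genetic_minimize(pop_size=30, generations=100):
    """Crude GA: sampling the whole interval makes the result largely
    insensitive to any single starting point."""
    pop = [random.uniform(-4.0, 4.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop = sorted(pop, key=error)[:pop_size // 2]      # selection
        pop += [p + random.gauss(0.0, 0.3) for p in pop]  # mutation
    return min(pop, key=error)
```

Started near +2, the gradient method settles in the shallow local basin; the GA, with no privileged starting point, reliably lands in the deeper basin near -2, which mirrors the comparison the paper draws.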


Numerical convergence and validation of the DIMP inverse particle transport model

  • Nelson, Noel;Azmy, Yousry
    • Nuclear Engineering and Technology
    • /
    • Vol. 49 No. 6
    • /
    • pp.1358-1367
    • /
    • 2017
  • The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization that reconciles detector responses predicted via the adjoint transport solution with measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well for the first experimental validation exercise after a collimation factor was applied and the extent of the source search volume was reduced enough to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.
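
The reconciliation of predicted and measured detector responses at DIMP's core can be illustrated in drastically simplified form for a single unknown source strength. The linear response model and closed-form least-squares fit below are assumptions for illustration; DIMP itself optimizes a full source distribution with quasi-Newton iterations and Bayesian inference.

```python
def predicted_responses(source_strength, adjoint_weights):
    """For fixed geometry, detector responses are linear in the source:
    r_i = s * w_i, where w_i comes from the adjoint transport solution."""
    return [source_strength * w for w in adjoint_weights]

def fit_source_strength(measured, adjoint_weights):
    """Least-squares reconciliation of predicted and measured responses
    (a one-parameter stand-in for DIMP's optimization): minimizing
    sum_i (s*w_i - m_i)^2 over s gives s = (w.m)/(w.w) in closed form."""
    num = sum(w * m for w, m in zip(adjoint_weights, measured))
    den = sum(w * w for w in adjoint_weights)
    return num / den
```

With many unknowns the problem is no longer convex, which is why the full method needs iterative optimization and the safeguards against local minima described above.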

천안호 침몰해역의 해상조건 분석 (Analysis of the Sea Condition on the Patrol Ship Cheonan Sinking Waters)

  • 김강민;이중우;김규광;권소현;이형하
    • 한국항해항만학회지
    • /
    • Vol. 34 No. 5
    • /
    • pp.349-354
    • /
    • 2010
  • At around 21:45 on March 26, 2010, the Republic of Korea Navy patrol ship Cheonan sank from an unexplained cause about 1.6 km (1 mile) southwest of Baengnyeong Island. From a coastal engineer's standpoint, it is clearly meaningful to provide the sea conditions that form the basic data needed for search and rescue, together with more detailed predictions and inferences obtained through simulation. In this study, the weather, waves, tides and tidal currents, bottom sediment, and suspended-sediment conditions in the waters around Baengnyeong and Daecheong Islands were surveyed and analyzed, and the characteristics of the area were assessed on that basis. The current conditions at the time of the incident fell between neap and mean tide, and from March 26 until April 3-4 the spring tide with the strongest currents was setting in, which was found to hinder search and rescue operations. In addition, since the ebb was in progress around 21:00-22:00, material transport was expected to be dominant toward the southeast, and because the irregular seabed topography was judged to generate strong eddies, a particle tracking experiment was conducted. The results showed that, depending on the current conditions, particles initially move southeastward but, in the long-term prediction, drift toward the open sea. This suggests that the search area should subsequently be extended toward the open sea.
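
The particle tracking experiment mentioned above advects virtual particles through a simulated current field. A minimal forward-Euler version is sketched below; the actual study drives particles with modeled tidal currents, whereas the velocity field, step size, and units here are placeholders.

```python
def advect(position, velocity_field, dt, steps):
    """Forward-Euler particle tracking: step a particle through a current
    field and record its track. `velocity_field(x, y)` returns the local
    (u, v) current; a real model would make it vary with the tide."""
    x, y = position
    track = [(x, y)]
    for _ in range(steps):
        u, v = velocity_field(x, y)
        x, y = x + u * dt, y + v * dt
        track.append((x, y))
    return track
```

Releasing many particles at the incident position and comparing short- and long-horizon tracks is what lets such a study distinguish initial southeastward transport from eventual drift toward the open sea.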

MOC-NA 영상의 영역기준 영상정합 (Area based image matching with MOC-NA imagery)

  • 윤준희;박정환
    • 한국측량학회지
    • /
    • Vol. 28 No. 4
    • /
    • pp.463-469
    • /
    • 2010
  • Because the MOLA sensor, which provides elevation data for Mars, does not cover the entire planet, image matching with MOC-NA imagery must be performed to produce a digital elevation model. However, due to the characteristics of Mars imagery, which has few features and low contrast, fully automatic image matching is difficult. This paper addresses a semi-automatic image matching algorithm for MOC-NA imagery based on area-based matching. Seed points representing conjugate points are added manually to the stereo images, and feature points are inserted automatically based on them. The feature points of each image are used as each other's initial conjugate points and are refined by area-based matching. Points that fail during matching are refined after recalculating the position of the initial conjugate point using six neighboring points that were matched successfully. A quality assessment of the matching, performed with the roles of the target and search images exchanged, showed that 97.3% of the points had an absolute distance of less than one pixel.
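
Area-based matching of the kind used to refine conjugate points typically maximizes normalized cross-correlation (NCC) between image windows, which tolerates the brightness offsets of low-contrast imagery. The one-dimensional sketch below is illustrative only; the paper matches two-dimensional windows, and the window size and search radius are assumptions.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length windows;
    invariant to gain and offset changes in brightness."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def refine_match(template, search_row, start, radius):
    """Slide the template along a search window centred on `start` and
    return the offset with maximum correlation (a 1-D stand-in for
    area-based refinement of an initial conjugate point)."""
    w = len(template)
    best_off, best_score = 0, -2.0
    for off in range(-radius, radius + 1):
        i = start + off
        if 0 <= i and i + w <= len(search_row):
            score = ncc(template, search_row[i:i + w])
            if score > best_score:
                best_off, best_score = off, score
    return best_off
```

A poor initial conjugate point only shifts `start`; as long as the true match stays inside the search radius, the correlation peak still recovers it, which is why recomputing failed points from successfully matched neighbours helps.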

집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법 (Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach)

  • 윤영수
    • 지능정보연구
    • /
    • Vol. 19 No. 4
    • /
    • pp.55-79
    • /
    • 2013
  • This study proposes a hybrid genetic algorithm (HGA) approach for efficiently solving a reverse logistics network with centralized centers (RLNCC). In the proposed HGA, a genetic algorithm (GA) serves as the main algorithm; the GA run uses a new bit-string representation scheme taking values of 0 or 1, the elitist strategy in enlarged sampling space proposed by Gen and Cheng (1997), a two-point crossover operator, and a random mutation operator. To apply the hybrid concept, the HGA also employs the iterative hill climbing method (IHCM) proposed by Michalewicz (1994). The IHCM is a local search technique that searches finely within the region of the search space to which the GA has converged. The RLNCC consists of collection centers, remanufacturing centers, redistribution centers, and secondary markets in a reverse logistics network, where only one of each type of center and one secondary market are opened. An RLNCC of this form is formulated as a mixed integer programming (MIP) model whose objective function minimizes the sum of transportation costs, fixed costs, and product handling costs. The transportation cost arises from shipping products between each center and secondary market, while the fixed cost is determined by whether each center and secondary market is opened. For example, if three collection centers are considered (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively) and only collection center 1 is opened, the total fixed cost is 10.5. The product handling cost arises when products recovered from customers are processed at each center and secondary market. In numerical experiments, the HGA approach proposed in this study and the GA approach of Yun (2013) are compared and analyzed under various performance measures. The GA of Yun (2013) has no local search technique such as the IHCM used in the HGA. To run both approaches under identical conditions, a total of 10,000 generations, a population size of 20, a crossover probability of 0.5, a mutation probability of 0.1, and an IHCM search range of 2.0 were used, and each run was repeated 20 times to remove the randomness of the search. The GA and HGA were each run on the two RLNCC instances presented as case studies, and the experimental results proved that the proposed HGA outperforms the existing GA approach. However, since only relatively small RLNCCs were considered in this study, comparative analyses of larger RLNCCs should be carried out in future research.
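
The division of labor the abstract describes (global GA search followed by fine local search via iterative hill climbing) can be sketched on a generic bit-string problem. The cost function and parameter choices below are illustrative assumptions; the paper's cost is the MIP objective over transportation, fixed, and handling costs, and its operators are as listed above.

```python
import random

def hill_climb(bits, cost):
    """Iterative hill climbing (after Michalewicz): keep accepting the
    best improving single-bit flip until no flip improves the solution."""
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            cand = bits[:i] + [1 - bits[i]] + bits[i + 1:]
            if cost(cand) < cost(bits):
                bits, improved = cand, True
    return bits

def hybrid_ga(cost, n_bits, pop_size=20, generations=50, p_mut=0.1):
    """Bit-string GA with elitism, two-point crossover, and random
    mutation, plus hill-climbing refinement of the elite individual
    each generation (a sketch of the HGA idea)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = [hill_climb(pop[0], cost)]          # local refinement
        children = []
        while len(children) < pop_size - 1:
            a, b = random.sample(pop[:pop_size // 2], 2)
            c1, c2 = sorted(random.sample(range(n_bits), 2))
            child = a[:c1] + b[c1:c2] + a[c2:]      # two-point crossover
            child = [1 - g if random.random() < p_mut else g
                     for g in child]                # random mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)
```

The GA supplies diverse candidate openings of centers, while the hill climber polishes the best candidate, which is the same complementarity the study credits for the HGA's advantage over the plain GA.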