• Title/Summary/Keyword: System efficiency

Application of unmanned helicopter on pest management in rice cultivation (무인 항공기 이용 벼 병해충 방제기술 연구)

  • Park, K.H.;Kim, J.K.
    • Journal of Practical Agriculture & Fisheries Research
    • /
    • v.10 no.1
    • /
    • pp.43-58
    • /
    • 2008
  • This research was conducted to evaluate an alternative to conventional chemical spraying for rice cultivation, using an unmanned helicopter (Yamaha R-Max Type 2G, remote-controlled) in farmers' fields in Korea. The unmanned helicopter tested was introduced from Japan. In Korea, chemicals for pest management in rice cultivation are ordinarily applied with machine sprayers at the farm level; however, this involves relatively high cost and labor for the small cultivation scale per farm household. The farm population had fallen to 7.5% of the total by 2002 and was expected to decline rapidly to 3.5% by 2012. In Japan, pest control using unmanned helicopters has grown by leaps and bounds, due in part to low-cost production technology fostered by agricultural policy and to demand for environmentally friendly farm products. The practicability of the unmanned helicopter in terms of efficiency and effectiveness has been proven there, and farmers regard it as indispensable to the future farming system they envision. In Japan the unmanned helicopter has also been applied to rice, wheat, soybean, vegetables, fruit trees, and pine trees for spraying chemicals and/or fertilizers. In this study, disease control by unmanned helicopter was partially confirmed against rice blast and sheath blight, although the result was not fully satisfactory owing to weather conditions and cultural practices. Spray density was also determined at 0, 15, 30, and 60 cm above the paddy soil surface: there were 968 spots at 0 cm, 1,560 at 15 cm, 1,923 at 30 cm, and 2,999 at 60 cm, with no significant difference among the treatments. At the same time, no phytotoxicity was observed from the chemical spray applied with the unmanned helicopter, nor was the rice plant damaged by the rotor wind during operation.

Evaluation of surface dose comparison by treatment equipment (치료 장비 별 표면 선량 비교평가)

  • Choi Eun Ha;Yoon Bo Reum;Park Byoung Suk;An Ye Chan;Park Myoung Hwan;Park Yong Chul
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.34
    • /
    • pp.31-42
    • /
    • 2022
  • Purpose: This study measured and compared surface doses in a virtual target volume delivered by Tomotherapy, Halcyon, and TrueBeam equipment using 6 MV flattening-filter-free (FFF) energy. Materials and Methods: CT scans were performed under three conditions, without bolus, with 0.5 cm bolus, and with 1 cm bolus, using an IMRT phantom (IBA, Germany). The Planning Target Volume (PTV) was set at the virtual target depth, and a treatment plan of 200 cGy per fraction was established. For surface dosimetry, Gafchromic EBT3 film was placed in the same section as in the treatment planning system, and measurements were repeated 10 times and then analyzed. Result: Measured surface doses, in the order without bolus / 0.5 cm / 1 cm bolus, were 115.2±2.0, 194.4±3.3, and 200.7±2.9 cGy for Tomotherapy; 104.7±3.0, 180.1±10.8, and 187.0±10.1 cGy for Halcyon; and 92.4±3.2, 148.6±5.7, and 155.8±6.1 cGy for TrueBeam. Under all three conditions, and consistent with the treatment planning system, doses were highest for Tomotherapy, then Halcyon, then TrueBeam. Conclusion: Higher surface doses were measured for Tomotherapy and Halcyon than for TrueBeam. If the characteristics of each piece of equipment are considered according to the treatment site and treatment purpose, treatment efficiency as well as patient satisfaction can be expected to improve.
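The values above are means and standard deviations over the 10 repeated film readings per condition. A minimal sketch of that aggregation step, assuming the readings are simply averaged (the ten values below are illustrative placeholders, not the paper's data):

```python
import statistics

# Illustrative EBT3 film readings (cGy) for one equipment/bolus condition;
# the actual 10 repeated readings are not published in the abstract.
readings = [114.1, 116.3, 115.8, 113.5, 117.0, 114.9, 116.1, 112.8, 115.5, 116.0]

mean_dose = statistics.mean(readings)
sd_dose = statistics.stdev(readings)  # sample standard deviation over the 10 repeats

print(f"surface dose: {mean_dose:.1f} +/- {sd_dose:.1f} cGy")
```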

Effect of Varying Excessive Air Ratios on Nitrogen Oxides and Fuel Consumption Rate during Warm-up in a 2-L Hydrogen Direct Injection Spark Ignition Engine (2 L급 수소 직접분사 전기점화 엔진의 워밍업 시 공기과잉률에 따른 질소산화물 배출 및 연료 소모율에 대한 실험적 분석)

  • Jun Ha;Yongrae Kim;Cheolwoong Park;Young Choi;Jeongwoo Lee
    • Journal of the Korean Institute of Gas
    • /
    • v.27 no.3
    • /
    • pp.52-58
    • /
    • 2023
  • With growing awareness of the importance of carbon neutrality in response to global climate change, the use of hydrogen as a carbon-free fuel is also growing. Hydrogen is commonly used in fuel cells (FC), but it can also be burned in internal combustion engines (ICE). In particular, ICEs, which already have an established production and supply infrastructure, can greatly expand hydrogen energy utilization where it is difficult to rely solely on fuel cells or to extend their infrastructure. A disadvantage of utilizing hydrogen through combustion, however, is the potential formation of nitrogen oxides (NOx), harmful emissions produced when nitrogen in the air reacts with oxygen at high temperature. For the EURO-7 emission regulation in particular, which includes cold-start operation, efforts to reduce emissions during warm-up are required. Therefore, this study investigated nitrogen oxide emissions and fuel consumption while the coolant warmed from room temperature to 88℃ in a 2-liter direct-injection spark-ignition (SI) engine fueled with hydrogen. One advantage of hydrogen over conventional fuels such as gasoline, natural gas, and liquefied petroleum gas (LPG) is its wide flammability range, which allows leaner control of the excess air ratio. In this study the excess air ratio was varied over 1.6/1.8/2.0 during warm-up and the results were analyzed. The experiments show that as the mixture becomes leaner during warm-up, NOx emission per unit time decreases and thermal efficiency rises somewhat; however, because the time needed to reach the final temperature grows, cumulative emissions and fuel consumption can worsen.
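The trade-off described above, lower instantaneous NOx at leaner operation but a longer warm-up, can be made concrete with a toy calculation. All rates and times below are made-up placeholders, not the paper's measurements:

```python
# Hypothetical warm-up scenarios: (excess air ratio, NOx rate [g/min], warm-up time [min]).
# Leaner mixtures emit less NOx per minute but take longer to reach 88 degC.
scenarios = [
    (1.6, 0.50, 10.0),
    (1.8, 0.30, 14.0),
    (2.0, 0.20, 20.0),
]

for lam, nox_rate, warmup_min in scenarios:
    cumulative_nox = nox_rate * warmup_min  # grams emitted over the whole warm-up
    print(f"lambda={lam}: {nox_rate} g/min over {warmup_min} min -> {cumulative_nox:.1f} g total")
```

With these placeholder numbers the leanest case has the lowest rate yet the highest cumulative emission, which is exactly the effect the abstract reports.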

Professional Speciality of Communication Administration, and Occupational Group and Series Classes of Position in the National Public Official Law - For Efficiency of Telecommunication Management - (통신행정의 전문성과 공무원법상 직군렬 - 전기통신의 관리들 중심으로 -)

  • 조정현
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.3 no.1
    • /
    • pp.26-27
    • /
    • 1978
  • Intelligence and knowledge can be expected to form the core of the coming post-industrial society. Accordingly, the age of intelligence will accelerate until we find ourselves in an age of 'communication' service enterprise. Communication activities will increase in efficiency and multiply in utility thanks to their scientific principles and legal foundations. The two basic elements of communication activity, communication stations and communication personnel, can perform their function only when properly supported and managed by government administration. Since communication activity is composed of various factors, these stations and officials must be cultivated and managed by specialists and experts engaged in continuous and extensive study of the field. With this in mind, this study reviewed the Korean public service officials law with a view to improving it, offering suggestions so that communication experts and researchers can find suitable positions within the framework of government administration. The study proposes an 'Occupational Group of Communication' consisting of a series of communication management positions and research positions in parallel to the existing series of communication technical positions. Communication specialists would be required to possess the necessary scientific knowledge and techniques of communication as well as the prerequisites for government service. They would first have to obtain the relevant government licence, in accordance with law, regulation, and international custom, before being appointed to official positions; this licence-prior-to-appointment principle would apply chiefly to communication management positions. Communication research positions would be for those engaged in study and research of both a managerial and a technical nature. It is hoped that efficient and extensive management of communication activities, as well as scientific and continuous study of the communication enterprise, will thereby be upgraded at the national level.

A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun;Lee, Su-Dong;Kim, Ju-Young
    • Journal of Distribution Research
    • /
    • v.17 no.1
    • /
    • pp.37-63
    • /
    • 2012
  • In the franchise business, exclusive sales territory (EST) protection is an important issue from economic, social, and political points of view. It affects the growth and survival of both franchisor and franchisee and often raises social and political conflict. When the franchisee is unfamiliar with the relevant laws and regulations, the franchisor has ample opportunity to exploit this. Exclusive sales territory protection by the manufacturer and its distributors (wholesalers or retailers) means a sales-area restriction under which only certain distributors have the right to sell products or services: a distributor granted an exclusive sales territory can protect its own territory but may be prohibited from entering other regions. Even though the exclusive sales territory is a critical problem in franchise business, there is little rigorous research on its causes, results, and evaluation, or on future directions, based on empirical data. This paper addresses the problem not only in terms of logical and nomological validity but also through empirical validation. In pursuing the empirical analysis, we take into account the difficulties of real data collection and of statistical analysis: instead of the conventional survey method, often criticized for measurement error, we use disclosure-document data collected by the Korea Fair Trade Commission. Existing theories about exclusive sales territories can be summarized into two groups. The first concerns the effectiveness of exclusive sales territories from both the franchisor's and the franchisee's points of view: the outcome can be positive for franchisors but negative for franchisees, and positive in terms of sales but negative in terms of profit, so variables and viewpoints must be set carefully. The second concerns the motives for protecting exclusive sales territories, which can be classified into four groups: industry characteristics, franchise-system characteristics, the capability to maintain an exclusive sales territory, and strategic decision; within these four groups there are more specific variables and theories. Based on these theories we develop nine hypotheses. To test them, data were collected from the Fair Trade Commission's public website. The sample consists of 1,896 franchisors and contains about three years of operating data, from 2006 to 2008. Within the sample, 627 franchisors have an exclusive sales territory protection policy, and these are not evenly distributed over the 19 representative industries. Additional data were collected from other government sources such as Statistics Korea, and data from various secondary sources were combined to create meaningful variables. All variables were dichotomized, by mean or median split where not inherently dichotomous, since each hypothesis involves multiple variables and there is no solid statistical technique that incorporates all these conditions at once. This paper uses a simple chi-square test because the hypotheses and theories rest on quite specific conditions such as industry type, economic conditions, company history, and various strategic purposes.
It is almost impossible to find samples satisfying all of those conditions, and they cannot be manipulated in experimental settings; more advanced statistical techniques work well on clean data without exogenous variables but poorly on complex real data. The chi-square test is applied by grouping the samples into four cells along two criteria: whether they protect an exclusive sales territory, and whether they satisfy the conditions of each hypothesis. The test then asks whether the proportion of franchisors that satisfy the conditions and protect exclusive sales territories significantly exceeds the proportion that satisfy the conditions but do not protect. In fact, the chi-square test is equivalent to a Poisson regression, which allows more flexible application. As a result, only three hypotheses were accepted. When attitude toward risk is high, so that the royalty fee is tied to sales performance, EST protection yields poor results, as expected. When the franchisor protects the EST in order to recruit franchisees more easily, EST protection yields better results. And when EST protection is intended to improve the efficiency of the franchise system as a whole, it shows better performance: high efficiency is achieved because the EST prevents free riding by franchisees who would exploit others' marketing efforts, encourages proper investment, and distributes franchisees evenly across regions. The other hypotheses were not supported by the significance tests. Exclusive sales territories should be protected from proper motives and administered for mutual benefit. Legal restrictions driven by a government agency such as the FTC can be misused and cause misunderstanding, so real practices need more careful monitoring and more rigorous study by both academics and practitioners.
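The grouping-and-testing procedure described above amounts to a 2x2 contingency test: EST protection (yes/no) against hypothesis condition (satisfied/not). A minimal sketch using the standard scipy routine; the cell counts are illustrative placeholders (only their total, 1,896, and the 627 EST protectors come from the abstract):

```python
from scipy.stats import chi2_contingency

# Rows: condition satisfied / not satisfied; columns: protects EST / does not.
# Placeholder counts summing to the 1,896 sampled franchisors (627 protectors).
table = [
    [250, 420],  # condition satisfied
    [377, 849],  # condition not satisfied
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
```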

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the start of the 21st century, various high-quality services have emerged alongside the growth of the Internet and information and communication technologies. In particular, the e-commerce industry, in which Amazon and eBay stand out, is exploding in scale. As e-commerce grows, customers can easily find what they want to buy while comparing products, because ever more products are registered at online shopping malls. However, this growth has created a problem: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a generalized keyword, too many products come up; conversely, few products are found if customers type in product details, because concrete product attributes are rarely registered as text. In this situation, automatically recognizing the text inside images can be a solution. Because the bulk of product details is written in catalogs stored as images, most product information cannot be found by the current text-based search systems; if the information in images can be converted to text, customers can search by product details, which makes shopping more convenient. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail under certain conditions, for example when the text is small or the fonts are inconsistent. Therefore, this research proposes a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the early 2010s. The Single Shot MultiBox Detector (SSD), a well-regarded model for object detection, can be used with its structure redesigned to account for the differences between text and generic objects. One issue is that, like other supervised deep learning models, SSD needs a large amount of labeled training data. Labeling the location and class of text in catalogs manually raises many problems: keywords may be missed through human error; collecting the required volume of data is too time-consuming, or too costly if many workers are hired to shorten the time; and if specific keywords must be trained, finding images that contain those words is also difficult. To solve this data problem, this research developed a program that creates training data automatically. The program composes catalog-like images containing various keywords and pictures while saving the location information of each keyword. With this program, not only can data be collected efficiently, but the SSD model also performs better: it recorded an 81.99% recognition rate when trained on 20,000 samples created by the program. Moreover, this research tested the efficiency of the SSD model under different data conditions, to analyze which features of the data influence text-recognition performance.
As a result, the number of labeled keywords, the addition of overlapping keyword labels, the presence of unlabeled keywords, the spacing between keywords, and differences in background images all proved to be related to SSD performance. These findings can guide performance improvements for the SSD model, or for other deep-learning text recognizers, through higher-quality data. The SSD model redesigned to recognize text in images, together with the program developed for creating training data, is expected to contribute to better search systems in e-commerce: suppliers can spend less time registering keywords for products, and customers can search by the product details written in the catalog.
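A minimal sketch of the kind of generator the paper describes: drawing keywords onto a blank background and recording each keyword's bounding box as a label. The keyword list, file names, and layout rules here are hypothetical stand-ins for the paper's program:

```python
import json
import random
from PIL import Image, ImageDraw, ImageFont

KEYWORDS = ["cotton", "waterproof", "free size", "machine wash"]  # hypothetical vocabulary

def make_sample(path_img, path_label, size=(600, 800)):
    """Render a catalog-like image and save keyword bounding boxes alongside it."""
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    labels = []
    y = 20
    for word in random.sample(KEYWORDS, k=3):
        x = random.randint(20, 200)
        draw.text((x, y), word, fill="black", font=font)
        left, top, right, bottom = draw.textbbox((x, y), word, font=font)
        labels.append({"keyword": word, "bbox": [left, top, right, bottom]})
        y += random.randint(60, 120)  # vertical spacing between keywords
    img.save(path_img)
    with open(path_label, "w") as f:
        json.dump(labels, f)  # location info saved at the same time as the image

make_sample("sample_000.png", "sample_000.json")
```

Generating thousands of such image/label pairs replaces manual annotation, which is the data-collection bottleneck the paper identifies.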

Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.1-10
    • /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. Tags can be divided into active and passive: active tags carry a power source and operate on their own, while passive tags are small and low-cost, which makes them more suitable than active tags for the distribution industry. The reader processes the information received from tags, and the system achieves fast identification of multiple tags over radio frequency. RFID systems have been applied in a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage their adoption, several problems (price, size, power consumption, security) must be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by simultaneous responses of multiple tags. Anti-collision schemes fall into three categories: probabilistic, deterministic, and hybrid. ALOHA-based protocols are the representative probabilistic method, and tree-based protocols the deterministic one. In ALOHA-based protocols, time is divided into multiple slots and tags transmit their IDs in randomly chosen slots; being probabilistic, they cannot guarantee that all tags are identified. In contrast, tree-based protocols guarantee that the reader identifies every tag within its transmission range. In a tree-based protocol, the reader sends a query and tags whose IDs match respond; when two or more tags respond, a collision occurs and the reader issues a new, refined query. Frequent collisions degrade identification performance, so identifying tags quickly requires reducing collisions efficiently. Each RFID tag carries a 96-bit EPC (Electronic Product Code) ID, and the tags of one company or manufacturer have similar IDs sharing the same prefix, so unnecessary collisions occur while identifying multiple tags with the Query Tree protocol; query-responses and idle time grow, and the identification time increases significantly. To address this, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, the Collision Tree and Query Tree protocols identify only one bit per query-response, and when similar tag IDs exist the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves identification performance using m-bit recognition, collision information from tag IDs, and a prediction technique. We compare the proposed scheme with other tree-based protocols under the same conditions and show that it outperforms them in identification time and identification efficiency.
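A minimal simulation of the basic Query Tree idea that the paper builds on (not the proposed Adaptive M-ary protocol): the reader extends a binary prefix whenever two or more tags match it, and each prefix query counts as one query-response round. Tag IDs are shortened to a few bits for readability:

```python
def query_tree(tags, prefix="", stats=None):
    """Identify all tag IDs sharing `prefix`; count queries and collisions."""
    if stats is None:
        stats = {"queries": 0, "collisions": 0, "identified": []}
    stats["queries"] += 1
    matching = [t for t in tags if t.startswith(prefix)]
    if len(matching) == 1:
        stats["identified"].append(matching[0])   # exactly one response: success
    elif len(matching) > 1:
        stats["collisions"] += 1                  # collision: split on the next bit
        query_tree(tags, prefix + "0", stats)
        query_tree(tags, prefix + "1", stats)
    return stats

# Tags with a shared prefix, as in a single manufacturer's EPC range:
tags = ["0010", "0011", "0100", "0111"]
print(query_tree(tags))
```

Because the shared "0" prefix forces several collision rounds before any tag is singled out, this small run already shows why similar IDs inflate query-responses, the inefficiency the Adaptive M-ary Query Tree protocol targets.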

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.35-52
    • /
    • 2019
  • As public services are provided in various forms, including e-government, public demand for service quality is increasing. Continuous measurement and improvement are needed, but traditional surveys are costly, time-consuming, and limited; an analytical technique is therefore needed that can measure public service quality quickly and accurately, at any time, from the data the services themselves generate. In this study, we analyzed the quality of a public service from data, applying process mining to the building licensing complaint service of N city, chosen because the necessary data could be secured and because the approach can spread to other institutions through public service quality management. We ran process mining on a total of 3,678 building license complaints filed in N city over two years from January 2014 and identified the process maps and the departments with high frequency and long processing times. The analysis shows that some departments were crowded at certain points in time while others were comparatively idle, and there were reasonable grounds to suspect that growth in the number of complaints lengthens completion times. Completion times ranged from the same day to one year and 146 days. The cumulative frequency of the top four departments (Sewage Treatment, Waterworks, Urban Design, and Green Growth) exceeded 50%, and that of the top nine departments exceeded 70%; the load was concentrated in a small number of departments and highly unbalanced among them. Most complaints followed their own distinct process patterns. The analysis shows that the number of 'complement' decisions has the greatest impact on the duration of a complaint: a 'complement' decision requires a physical period in which the complainant supplements and resubmits documents, which stretches the time to completion of the whole complaint. The overall processing time can therefore be reduced drastically by thorough preparation before filing. By clarifying and disclosing, in the system, the causes of 'complement' decisions and their remedies, complainants can prepare in advance with confidence that documents prepared from the disclosed information will pass, making complaint handling transparent and predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by removing renegotiation and duplicated work. The results of this study can be used to spot departments carrying a heavy complaint load at particular times and to manage workforce allocation between departments flexibly. In addition, the analysis of which departments participate in consultations for each type of complaint can feed automation or recommendation when a consultation department must be designated.
In addition, by using the various data generated during complaint processing together with machine learning techniques, the patterns of the complaint process can be learned; implemented as an algorithm in the system, this could automate and add intelligence to civil complaint handling. This study is expected to inform future public service quality improvement through process mining analysis of civil services.
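A minimal sketch of the frequency-and-duration profiling step on an event log with pandas; this is not the paper's toolchain, and the CSV file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical event log: one row per processing step of a complaint case.
log = pd.read_csv("building_license_log.csv",
                  parse_dates=["start_time", "end_time"])

# Department-level load: how often each department appears, and how long it takes.
by_dept = (log.assign(duration_days=(log["end_time"] - log["start_time"]).dt.days)
              .groupby("department")
              .agg(frequency=("case_id", "count"),
                   mean_duration_days=("duration_days", "mean"))
              .sort_values("frequency", ascending=False))

# Cumulative share of workload, to see how few departments carry most cases.
by_dept["cum_share"] = by_dept["frequency"].cumsum() / by_dept["frequency"].sum()
print(by_dept.head(9))
```

The cumulative-share column is exactly how one would surface the paper's finding that the top four departments exceed 50% of the load and the top nine exceed 70%.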

Evaluating the efficiency of skin flash application for left breast IMRT (왼쪽 유방암 세기변조방사선 치료시 Skin Flash 적용에 대한 유용성 평가)

  • Lim, Kyoung Dal;Seo, Seok Jin;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.49-63
    • /
    • 2018
  • Purpose : The purpose of this study is to investigate how the treatment plan changes and to compare skin doses with and without a skin flash. To find the optimal application, skin doses were also measured and analyzed for plans with various skin flash thicknesses. Methods and Material : An anthropomorphic phantom was CT-scanned for this study. A 2-field hybrid IMRT plan (2F-hIMRT) and a 6-field static IMRT plan (6F-sIMRT) were generated in the Eclipse RTP system (ver. 13.7.16, Varian, USA). Additional plans were generated from each IMRT plan by changing the skin flash thickness to 0.5, 1.0, 1.5, 2.0, and 2.5 cm; MU and maximum doses were also recorded. Treatment was delivered at 6 MV on a VitalBeam (Varian Medical Systems, USA), and doses were measured with a metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeter. Skin doses were measured at the upper (1), middle (2), and lower (3) positions from the center of the phantom's left breast, and at points shifted 0.5 cm medially and laterally from each. Results : The reference values for 2F-hIMRT were 206.7 cGy at point 1, 186.7 cGy at point 2, and 222 cGy at point 3; for 6F-sIMRT they were 192, 213, and 215 cGy. Against these references, the first measurement point in 2F-hIMRT read 261.3 cGy with the 2.0 cm and 2.5 cm skin flashes, with the highest dose differences being 26.1 %diff. and 5.6 %diff., respectively; the third measurement point read 245.3 cGy, 10.5 %diff., with the 2.5 cm skin flash. In 6F-sIMRT, the highest dose difference at the first measurement point, 216.3 cGy and 12.7 %diff., occurred with the 2.0 cm skin flash, and at every measurement point the largest difference occurred at 2.0 cm rather than 2.5 cm. At the points shifted 0.5 cm medially without a skin flash, the measured values were -75.2 %diff. and -70.1 %diff. for 2F-hIMRT, and -14.8, -12.5, and -21.0 %diff. at the first, second, and third measurement points for 6F-sIMRT, respectively. In general, both treatment plans showed increasing total MU, maximum dose, and %diff. as the skin flash thickness increased, with some exceptions. The skin-dose difference with the 0.5 cm skin flash was the lowest, remaining below 20 % in every condition. Conclusion : Minimizing the skin flash thickness to 0.5 cm appears most ideal, since it keeps MUs down and maximum doses low. In addition, MUs, maximum doses, and skin-dose differences were found not to increase without bound as the skin flash thickness grew. If the margin of error from the PTV or other factors is less than 1.0 cm, the skin flash technique is considered to offer many advantages over omitting it.
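The %diff. figures above appear to be percentage differences of each measurement against its no-flash reference at the same point; under that assumption, a one-line sketch of the calculation (small deviations from the quoted values would come from rounding in the published doses):

```python
def percent_diff(measured, reference):
    """Percentage difference of a measured dose against its reference dose."""
    return (measured - reference) / reference * 100

# e.g. 261.3 cGy measured vs. the 206.7 cGy no-flash reference at point 1 (2F-hIMRT)
print(f"{percent_diff(261.3, 206.7):.1f} %diff.")  # prints 26.4; the abstract quotes 26.1
```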

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising was in fashion. As the Internet became central to daily life and competition in the online advertising market grew fierce, space for banner advertising, which rushed to portal sites, ran short, and advertising prices surged. The resulting high-cost, low-efficiency problems of banner advertising led to the emergence of keyword advertising as a new type of Internet advertising to replace it. In the early 2000s, when Internet advertising took off, display advertising, including banners, dominated the net; display advertising then declined gradually, registering negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising from 2005. Keyword advertising is the technique of showing relevant advertisements at the top of the results page when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted technique, shows advertisements only to customers searching for the keyword, so only highly prospective customers see them; in this context it is also called search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms because, rather than the seller finding customers and running advertisements at them as with TV, radio, or banners, it exposes advertisements to customers who come visiting. Keyword advertising makes it possible for a company to gain online publicity with a single word and to achieve maximum efficiency at minimum cost. Its strength is that customers reach the products in question directly, making it more efficient than mass-media advertising on TV, radio, and the like. Its weakness is that a company must register its advertisement on each and every portal site, finds it hard to exercise real supervision over the advertisement, and risks advertising expenses exceeding profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low cost. At present, keyword advertising divides into CPC advertising and CPM advertising. The former, known as the most efficient technique and also described as metered-rate advertising, has the company pay for each click on the searched keyword; it is adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks, among others. CPM advertising uses a flat-rate payment system: a company pays on the basis of the number of exposures, not clicks, with the price fixed per 1,000 exposures; it is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted.
The weak point of the CPC method is that advertising costs can be inflated by repeated clicks from the same IP. If a company makes good use of strategies that maximize the strengths of keyword advertising and complement its weaknesses, it is highly likely to turn visitors into prospective customers. Accordingly, an advertiser should analyze customer behavior and approach customers in a variety of ways, trying hard to find out what they want, and should use multiple keywords when running ads. When first running an ad, the advertiser must decide which keyword to select, considering how many search engine users will click it and how much the advertisement will cost. Since the popular keywords that search engine users enter most often carry a high unit cost per click, advertisers with small budgets at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also called peripheral or extension keywords, are combinations of major keywords. Most keyword ads take the form of text. The biggest strength of text-based advertising is that it looks like search results and so provokes little antipathy; for the same reason, however, it fails to attract much attention. Image-embedded advertising is easy to notice because of its images, but it is exposed on the lower part of the page and is clearly recognizable as advertising, which leads to a low click-through rate; its strength is that its prices are lower than those of text-based advertising. A company with a logo or product that people recognize easily is well advised to use image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs, examining customer responses to site events and product assortments as a way of monitoring behavior in detail. Keyword advertising also lets them analyze the advertising effect of exposed keywords through log analysis. Log analysis means a close analysis of a site's current situation based on visitor information: visitor counts, page views, and cookie values. A user's IP, the pages used, the times of use, and cookie values are stored in the log files generated by each web server. Because log files contain huge amounts of data that are practically impossible to analyze directly, they are analyzed with log-analysis solutions. The generic information that log-analysis tools can extract includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, repeat visitors, and average usage hours.
Such data are also useful for analyzing the situation and status of rival companies and for benchmarking. Because keyword advertising appears exclusively on search-result pages, competition among advertisers to occupy popular keywords is fierce. Some portal sites keep giving priority to existing advertisers, whereas others offer the keywords to all advertisers once the advertising contract expires. On sites that favor established advertisers, an advertiser relying on seasonal or time-sensitive keywords may as well purchase a vacant advertising slot rather than miss the right timing for advertising. Naver, however, gives no priority to existing advertisers for keyword advertisements; there, one can occupy keywords by entering into a contract after confirming the contract period. This study sets out to examine keyword advertising marketing and to present effective strategies for it. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strengths are its CPC charging model and the registration of advertisements at the top of Korea's most representative portal sites, advantages that make it the most suitable medium for small and medium enterprises. However, Overture's CPC method has weaknesses too; it is not the one perfect advertising model among search advertisements in the online market. It is therefore essential that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and exploit strategies that maximize its strengths, so as to increase their sales and create points of contact with customers.
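The cost logic of the two pricing models described above can be made concrete with a toy comparison: CPC bills per click, CPM bills per 1,000 exposures regardless of clicks. All prices and rates below are illustrative placeholders, not actual Overture or Naver rates:

```python
# Hypothetical campaign: 100,000 ad exposures with a 1.5% click-through rate.
impressions = 100_000
ctr = 0.015
clicks = impressions * ctr

cpc_price = 300        # won per click (illustrative)
cpm_price = 2_000      # won per 1,000 exposures (illustrative)

cpc_cost = clicks * cpc_price
cpm_cost = impressions / 1_000 * cpm_price

print(f"CPC: {cpc_cost:,.0f} won for {clicks:,.0f} clicks")
print(f"CPM: {cpm_cost:,.0f} won regardless of clicks")
```

Which model is cheaper depends entirely on the click-through rate, which is why a low-budget advertiser's keyword selection matters so much under CPC.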
