• Title/Summary/Keyword: Direct effects


Construction of Consumer Confidence index based on Sentiment analysis using News articles (뉴스기사를 이용한 소비자의 경기심리지수 생성)

  • Song, Minchae; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.1-27, 2017
  • It is known that the economic sentiment index and macroeconomic indicators are closely related, because economic agents' judgments and forecasts of business conditions affect economic fluctuations. For this reason, consumer sentiment or confidence provides steady fodder for business and is treated as an important piece of economic information. In Korea, private consumption accounts for a large share of GDP and is closely tied to consumer sentiment, so the consumer sentiment index is a very important economic indicator for evaluating and forecasting the domestic economic situation. However, despite offering relevant insights into private consumption and GDP, the traditional survey-based approach to measuring consumer confidence has several limits. One weakness is that it takes considerable time to research, collect, and aggregate the data; if urgent issues arise, timely information is not announced until the end of each month. In addition, the survey only contains information derived from questionnaire items, which makes it difficult to capture the direct effects of newly arising issues. The survey also faces potential declines in response rates and erroneous responses. Therefore, it is necessary to find a way to complement it. For this purpose, we construct and assess an index designed to measure consumer economic sentiment using sentiment analysis. Unlike survey-based measures, our index relies on textual analysis to extract sentiment from economic and financial news articles. Text data such as news articles and SNS posts are timely and cover a wide range of issues; because such sources can quickly capture the economic impact of specific economic issues, they have great potential as economic indicators. There are two main approaches to the automatic extraction of sentiment from text; we apply the lexicon-based approach, using sentiment lexicon dictionaries of words annotated with their semantic orientations. In creating the sentiment lexicon dictionaries, we enter the semantic orientation of individual words manually and do not attempt a full linguistic analysis (one involving word senses or argument structure); this is a limitation of our research, and further work in that direction remains possible. In this study, we generate a time series index of economic sentiment in the news. The construction of the index consists of three broad steps: (1) collecting a large corpus of economic news articles on the web, (2) applying lexicon-based sentiment analysis to score each article in terms of its sentiment orientation (positive, negative, or neutral), and (3) constructing a consumer economic sentiment index by aggregating the monthly time series for each sentiment word. In line with existing scholarly assessments of the relationship between the consumer confidence index and macroeconomic indicators, any new index should be assessed for its usefulness. We examine the new index's usefulness by comparing it with the CSI and other economic indicators: trend and cross-correlation analyses are carried out to examine their relationships and lag structure, and forecasting power is analyzed using one-step-ahead out-of-sample prediction.
In almost all experiments, the news sentiment index correlates strongly with related contemporaneous key indicators. Furthermore, in most cases, news sentiment shocks predict future economic activity, and in head-to-head comparisons the news sentiment measures outperform the survey-based sentiment index (CSI). Policy makers want to understand consumer or public opinion about existing or proposed policies; monitoring various web media, SNS, and news articles enables relevant government decision-makers to respond quickly to such opinions. Although research using unstructured data in economic analysis is still in its early stages, the utilization of such data is expected to increase greatly once its usefulness is confirmed.
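
The three construction steps described above (collecting articles, lexicon-based scoring of each article, and monthly aggregation into an index) can be illustrated with a minimal Python sketch. This is only a toy illustration under stated assumptions: the lexicon entries, the sample articles, and the "share of positive minus share of negative articles" index definition are hypothetical placeholders, not the authors' actual dictionary or aggregation formula.

```python
from collections import defaultdict

# Hypothetical sentiment lexicon: word -> semantic orientation (+1 positive, -1 negative).
LEXICON = {"growth": 1, "recovery": 1, "surge": 1,
           "recession": -1, "slump": -1, "unemployment": -1}

def score_article(text):
    """Classify one article as positive (+1), negative (-1) or neutral (0)
    by summing the orientations of the lexicon words it contains."""
    total = sum(LEXICON.get(token, 0) for token in text.lower().split())
    return (total > 0) - (total < 0)  # sign of the summed orientation

def monthly_sentiment_index(articles):
    """articles: iterable of (month, text) pairs, e.g. ("2017-03", "...").
    Returns month -> (share of positive articles) - (share of negative articles)."""
    counts = defaultdict(lambda: [0, 0, 0])  # month -> [positive, negative, total]
    for month, text in articles:
        score = score_article(text)
        counts[month][2] += 1
        if score > 0:
            counts[month][0] += 1
        elif score < 0:
            counts[month][1] += 1
    return {m: (pos - neg) / total for m, (pos, neg, total) in counts.items()}

corpus = [("2017-01", "Exports show strong growth and recovery"),
          ("2017-01", "Unemployment rises as recession fears deepen"),
          ("2017-02", "Consumer spending surge lifts retail sales")]
print(monthly_sentiment_index(corpus))  # {'2017-01': 0.0, '2017-02': 1.0}
```

The resulting monthly series could then be compared with the CSI and other indicators via cross-correlation, as the abstract describes.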

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems, v.27 no.1, pp.23-46, 2021
  • Collaborative filtering, which is often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarity for new customers or products, because it calculates similarities based on direct connections and common features among customers. For this reason, hybrid techniques were designed that use content-based filtering together with collaborative filtering. Separately, efforts have been made to solve these problems by applying the structural characteristics of social networks. This approach calculates similarities indirectly through other similar customers placed between two customers: a customer network is created based on purchasing data, and the similarity between two customers is calculated from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether the target customer will accept a recommendation. The centrality metrics of networks can be utilized to calculate these similarities. Different centrality metrics are important in that they may have different effects on recommendation performance; furthermore, the effect of these centrality metrics on recommendation performance may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to improve recommendation performance not only for new customers or products but also for all customers and products. By considering a customer's purchase of an item as a link generated between the customer and the item on the network, predicting user acceptance of a recommendation becomes a problem of predicting whether a new link will be created between them. Because classification models fit the purpose of solving the binary problem of whether a link is created or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for this research. The data for performance evaluation were order data collected from an online shopping mall over four years and two months. The first three years and eight months of records were organized into the social network used in the experiment, and the following four months of records were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the centrality metrics differ meaningfully across algorithms. In this work, we analyzed only four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks in the middle across the models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality is characterized by distinct differences in performance according to the model.
It ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but records very low rankings with low performance in the support vector machine and KNN models. As the experimental results reveal, in a classification model, network centrality metrics over the subnetwork connecting two nodes can effectively predict the connectivity between those nodes in a social network. Furthermore, each metric performs differently depending on the classification model type. This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness (proximity) centrality could be considered to obtain higher performance in certain models.
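
As an illustration of the approach summarized above (centrality metrics of a purchase network used as features for link-prediction classifiers), the following minimal sketch assumes networkx and scikit-learn. The toy customer-item graph, the way endpoint centralities are combined into features, and the single logistic regression model are simplifications for illustration, not the paper's exact pipeline or data.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy customer-item purchase network; each edge represents a past purchase.
G = nx.Graph()
G.add_edges_from([("c1", "i1"), ("c1", "i2"), ("c2", "i2"),
                  ("c2", "i3"), ("c3", "i1"), ("c3", "i3")])

# The four centrality metrics compared in the study.
degree      = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
closeness   = nx.closeness_centrality(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)

def pair_features(u, v):
    """Combine the centralities of the two endpoints into one feature vector."""
    return [degree[u] + degree[v], betweenness[u] + betweenness[v],
            closeness[u] + closeness[v], eigenvector[u] + eigenvector[v]]

# Positive examples: existing links; negative examples: absent customer-item pairs.
customers, items = ["c1", "c2", "c3"], ["i1", "i2", "i3"]
X, y = [], []
for c in customers:
    for i in items:
        X.append(pair_features(c, i))
        y.append(1 if G.has_edge(c, i) else 0)

clf = LogisticRegression().fit(np.array(X), y)
print(clf.predict_proba([pair_features("c1", "i3")])[:, 1])  # estimated acceptance likelihood
```

In practice, decision tree, KNN, neural network, and SVM classifiers would be trained on the same features so that the centrality metric best suited to each algorithm can be compared, as the study does.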

Studies on Grain Filling and Quality Changes of Hard and Soft Wheat Grown under the Different Environmental Conditions (환경 변동에 따른 경ㆍ연질 소맥의 등숙 및 품질의 변화에 관한 연구)

  • Young-Soo Han
    • KOREAN JOURNAL OF CROP SCIENCE, v.17, pp.1-44, 1974
  • These studies were made at Suwon in 1972 and at Suwon, Iri, and Kwangju in 1973 to investigate the grain filling process and the variation in grain quality of NB 68513 and Caprock as hard red winter wheat varieties, Suke #169 as a soft red winter wheat variety, and Yungkwang as a semi-hard winter variety, grown under three different fertilizer levels and seeding dates. Other experiments were conducted at Suwon in 1973 and 1974 to find the effects of temperature, humidity and light intensity on the grain filling process and grain quality of the Yungkwang and NB 68513 varieties. 1. Grain filling process of wheat cultivars: 1) The frequency distribution of grain weight showed that a wider distribution of grain weight was associated with the large-grain group rather than the small-grain group. In the large-grain group, the frequency was mostly concentrated near the mean value, while in the small-grain group the frequency was dispersed over the range of values. 2) Grain weight was more affected by grain thickness and width than by grain length. 3) Grain weight during the ripening period increased rapidly from 14 to 35 days after flowering in Yungkwang and from 14 to 28 days after flowering in NB 68513. The large-grain variety, Yungkwang, increased more slowly and took a longer period to increase the endosperm ratio of the grain than the small-grain variety, NB 68513. 4) In general, the 1,000-grain weight was reduced under high temperature and low humidity, while it increased under low-temperature, high-humidity conditions and under high-temperature, high-humidity conditions. The effect of shading on grain weight was greater under high-temperature than under low-temperature conditions, and no definite tendency was found under high-humidity conditions. 5) The effects of temperature, humidity and shading on 1,000-grain weight were greater in the large-grain variety, Yungkwang, than in the small-grain variety, NB 68513. A highly significant positive correlation was found between 1,000-grain weight and days to ripening. 6) The 1,000-grain weight and test weight increased somewhat as the fertilizer levels applied were increased. However, the rate of increase in 1,000-grain weight was low when fertilizer levels were increased from standard to double. The 1,000-grain weight was high when planted early, and this tendency was greater in Suwon than in the Kwangju or Iri areas. 2. Milling quality: 7) The milling rate within the same group of varieties was higher under conditions of low temperature, high humidity and early-maturing culture, which were responsible for increasing 1,000-grain weight. No definite relations were found among locations. 8) Among the varieties tested, the higher milling rate was found in the large-grain variety, Yungkwang, and the lowest milling rate was obtained from Suke #169, the small-grain variety. However, the small-grained hard wheat varieties such as Caprock and NB 68513 showed higher milling rates compared with the soft wheat variety, Suke #169. 9) There were no great differences in ash content due to location, fertilizer level or seeding date, while remarkable differences due to variety were found. The ash content was high in the hard wheat varieties such as NB 68513 and Caprock and low in the soft wheat varieties such as Yungkwang and Suke #169. 3. Protein content: 10) Protein content increased under conditions of high temperature, low humidity and shading, which were responsible for reduction of 1,000-grain weight.
The varietal differences in protein content due to high temperature, low humidity and shading conditions were greater in Yungkwang than in NB 68513. 11) The high protein content in grain within one to two weeks after flowering might be due to the high ratio of pericarp and embryo to endosperm. As grains ripened, the effects of embryo and pericarp on protein content decreased, reducing protein content. However, protein content increased again from three or four weeks after flowering and was maximized at seven weeks after flowering. The protein content of grain at three to four weeks after flowering increased with the increase in 1,000-grain weight, but the protein content of matured grain appeared to be affected by daily calendar temperature rather than by the duration of the ripening period. 12) A highly significant positive correlation was found between grain protein content and flour protein content. 13) Protein content increased under high levels of fertilizer and late seeding. The local differences in protein content were greater in Suwon than in Kwangju and Iri. 14) Protein content among the varieties tested was high in Yungkwang, NB 68513 and Caprock, and low in Suke #169. However, variation in protein content due to cultural methods was low in Suke #169. 15) Protein yield per unit area increased with increasing fertilizer levels and early-maturing culture. Nitrogen fertilizer was utilized rather effectively in early-maturing culture, and Yungkwang was the highest in protein yield per unit area. 4. Physio-chemical properties of wheat flour: 16) Sedimentation value was higher under conditions of high temperature, low humidity and high levels of fertilizer than under conditions of low temperature, high humidity and low levels of fertilizer. Such differences in sedimentation values were more apparent in NB 68513 and Caprock than in Yungkwang and Suke #169. The local difference in sedimentation value was greater in Suwon than in Kwangju and Iri. Even though the sedimentation value was highly correlated with the protein content of grain, high humidity was considered one of the factors affecting sedimentation value. 17) Changes in Pelshenke values due to differences in cultural practices and locations generally coincided with sedimentation values. 18) The mixing time required for the mixogram was four to six minutes in NB 68513 and five to seven minutes in Caprock. Great variation in mixing time for Yungkwang and Suke #169 due to location and planting conditions was found. The mixing height and area were higher in hard wheat than in soft wheat. Variation in protein content due to cultural methods was inconsistent; however, the pattern of the mixogram was much the same regardless of the treatments applied. In this regard, it could be concluded that the mixogram is a method expressing the specific character of the variety. 19) Even though the milling properties of NB 68513 and Caprock deteriorated under high temperature and low humidity or under high fertilizer levels and late seeding conditions, baking quality was better due to improved physio-chemical properties of the flour. In contrast, early-maturing culture deteriorated the physio-chemical properties, while the milling property of the grain and the grain protein yield per unit area increased. It might thus be concluded that hard wheat production of NB 68513 and Caprock for baking purposes could be done better in Suwon than in the Iri or Kwangju areas. 5.
Interrelationships between the physio-chemical characters of wheat flour: 20) The physio-chemical properties of flour did not have a direct relationship with milling rate or ash content. Low grain weight produced high protein content and better physio-chemical flour properties. 21) In hard wheat varieties such as NB 68513 and Caprock, protein content was significantly correlated with sedimentation value, Pelshenke value and mixing height; gluten strength and baking quality were improved by increased protein content. In Yungkwang and Suke #169, protein content was correlated with sedimentation value, but no correlations were found with Pelshenke value or mixing height. Consequently, an increase in protein content did not improve gluten strength in soft wheat. 22) The highly significant relationships between protein content, gluten strength and sedimentation value, and between Pelshenke value, mixogram and gluten strength, indicate that determination of the mixogram and Pelshenke value is useful for distinguishing soft and hard types of varieties. Determination of the sedimentation value is considered a useful method for quality evaluation of wheat grain under different cultural practices.


Studies on Neck Blast Infection of Rice Plant (벼 이삭목도열병(病)의 감염(感染)에 관(關)한 연구(硏究))

  • Kim, Hong Gi; Park, Jong Seong
    • Korean Journal of Agricultural Science, v.12 no.2, pp.206-241, 1985
  • Attempts were made to determine the infection period and infection speed within the tissue for neck blast of the rice plant, the location of inoculum sources, and the effects of several conditions related to the leaf sheath of rice plants on neck blast incidence. 1. The most infectious period for neck blast incidence was the booting stage just before the heading date, and most necks were infected during the booting stage and on the heading date. However, Indica × Japonica hybrid varieties always showed a high possibility of infection after the booting stage. 2. The incubation period for neck blast of rice plants under natural conditions was rather long, ranging from 10 to 22 days. Under artificial inoculation, the incubation period in young panicles was shorter than in old panicles. Panicles that had emerged from the sheath of the flag leaf had a long incubation period with a low infection rate, and also showed slow infection speed within the tissue. 3. Considering the incubation period of neck blast, we assumed that the most effective application periods for chemicals are 5-10 days before heading for immediately effective chemicals and 10-15 days before heading for slowly effective chemicals. 4. Infiltration of conidia into the leaf sheath of the rice plant was carried out by a water-saturation effect through the suture of the upper three leaves. The number of conidia observed in the leaf sheath during the booting stage was higher than during other stages. The ligule protected against infiltration of conidia into the leaf sheath. 5. When conidia infiltrated the leaf sheath, the highest number of attached conidia was observed on the panicle base and panicle axis with hairs and on degenerated panicles, which seemed to promote infection by neck blast. 6. The lowest spore concentration causing neck blast incidence varied with the rice varietal group. Indica × Japonica hybrid varieties in particular were infected easily compared with the Japonica-type varieties; the number of spores needed for neck blast incidence in Indica × Japonica hybrid varieties was less than 100, and the disease index was also higher in Indica × Japonica hybrids than in Japonica-type varieties. 7. Nitrogen content and silicate content, which changed during the growing period, were related to blast incidence in the necks of rice plants at the different growing stages. Nitrogen content increased from the booting stage to the heading date and then decreased gradually with time, while silicate content increased with time from the booting stage through after heading. These changes promoted neck blast infection. 8. Conidia moved to the rice plant by ascending and descending dispersal and then attached to the plant; horizontally transferred conidia were negligible. We therefore presumed that the infection rate of neck blast was very low after emergence of the panicle base from the leaf sheath. An ascending air current caused by the temperature difference between the upper and lower parts of the rice plant also seemed to increase the liberation of spores. 9. The number of conidia of the blast fungus collected just before and after the heading date was closely related to neck blast incidence. Lesions on the top three leaves were closely related to neck blast incidence, because they had a high potential for conidia formation of the rice blast fungus and were direct inoculum sources for neck blast. 10.
The conditions inside the leaf sheath were very favorable for the incidence of neck blast, and neck blast incidence in the leaf sheath increased as the level of fertilizer applied increased. Therefore, the infection rate of neck blast on all panicle parts inside the leaf sheath, such as the panicle base, panicle branches, spikelets, nodes, and internodes, did not show differences due to varietal resistance or the fertilizers applied. 11. Among the dominant fungal species in the leaf sheath, only Gerlachia oryzae appeared to promote the incidence of neck blast. It was assumed that the days to heading of varieties were related to neck blast incidence.


A Study on the long-term Hemodialysis patient's hypotension and prevention of blood loss in the coil during Hemodialysis (장기혈액투석환자의 투석중 혈압하강과 Coil내 혈액손실 방지를 위한 기초조사)

  • 박순옥
    • Journal of Korean Academy of Nursing, v.11 no.2, pp.83-104, 1981
  • Hemodialysis is an essential treatment for the long-term care of chronic renal failure patients and for patient management before and after kidney transplantation. It sustains the life of end-stage renal failure patients who do not improve despite a strict regimen, and furthermore it is an essential treatment for maintaining daily life. Nursing implementation in hemodialysis may have a significant effect on the patient's life. The purpose of this study was to obtain basic data for solving the hypotension problem encountered by patients and the blood loss problem, caused by incomplete rinsing of blood from the coil throughout the hemodialysis process, which affects the hemodialysis patient's anemic state. The subjects for this study were 44 patients treated with hemodialysis 691 times in the hemodialysis unit. The data were collected at Gang Nam St. Mary's Hospital from January 1, 1981 to April 30, 1981 using the direct observation method, clinical laboratory tests for laboratory data, and body weight, and were analysed using the chi-square test, t-test and analysis of variance. The results obtained were as follows. A. On clinical laboratory data and other data by dialysis procedure: The average initial body weight was 2.37±0.97kg, and the average body weight after every dialysis was 2.33±0.9kg. The subjects' average hemoglobin was 7.05±1.93gm/dl and average hematocrit was 20.84±3.82%. Average initial blood pressure was 174.03±23.75mmHg and after dialysis was 158.45±25.08mmHg. The subjects' average blood loss due to blood samples for laboratory data was 32.78±13.49cc/month, and their average blood replacement for blood complementation was 1.31±0.88 pints/month per patient. B. On the hypotensive state and the coping approaches: The occurrence rate of hypotension was 28.08%, that is, 194 cases among 691 dialysis sessions. 1. By degree of initial blood pressure, the largest share, 36.6%, was in the 150-179mmHg group, and by degree of hypotension during dialysis, the largest share, 28.9%, was in the 40-50mmHg group. In particular, when the initial blood pressure was under 180mmHg, 59.8% of clinical symptoms appeared in the group with hypotension of more than 20mmHg, and when the initial blood pressure was above 180mmHg, 34.2% of clinical symptoms appeared in the group with hypotension of more than 40mmHg. These tendencies showed that the higher the initial blood pressure, the stronger the degree of hypotension, and the results showed statistically significant differences (P=0.0000). 2. Of the times at which hypotension occurred, 29.4% were after 3 hours; the longer the dialysis procedure, the stronger the degree of hypotension, and these showed statistically significant differences (P=0.0142). 3. Of the symptoms observed, sweating and flushing accounted for 43.3%, and yawning and dizziness for 37.6%; accordingly, these were the important symptoms implying hypotension during hemodialysis. The stages of procedures for coping with hypotension were as follows: 45.9% were recovered by reducing the blood flow rate from 200cc/min to 100cc/min and by reducing venous pressure to 0-30mmHg; 33.51% were recovered by controlling (adjusting) the blood flow rate and infusing 300cc of 0.9% normal saline; 4.1% were recovered by infusion of over 300cc of 0.9% normal saline; 3.6% by norepinephrine, 5.7% by blood transfusion, and 7.2% by albumin. The stronger the degree of symptoms observed in hypotension, the more treatments were required for recovery, and these showed statistically significant differences (P=0.0000). C.
On the effects of the changes of blood pressure and osmolality by albumin and hemofiltration: 1. Changes in blood pressure in the group that did not require treatment for hypotension and the group that required treatment averaged 21.5mmHg and 44.82mmHg respectively; the difference was bigger in the latter than in the former, and this showed a statistically significant difference (P=0.002). Changes in osmolality averaged 12.65mOsm and 17.57mOsm; the difference was bigger in the latter than in the former, but this did not show statistical significance (P=0.323). 2. Changes in blood pressure in the albumin-infused group and in the group that did not require treatment for hypotension averaged 30mmHg and 21.5mmHg; there was no statistically significant difference (P=0.503). Changes in osmolality averaged 5.63mOsm and 12.65mOsm; the difference was smaller in the former, but there was no statistical significance (P=0.287). Changes in blood pressure in the albumin-infused group and in the group that required treatment for hypotension averaged 30mmHg and 44.82mmHg; the difference was smaller in the former, but there was no significant difference (P=0.061). Changes in osmolality averaged 8.63mOsm and 17.59mOsm; the difference was smaller in the former, but this did not show statistical significance (P=0.093). 3. Changes in blood pressure in the group that underwent hemofiltration and in the group that did not require treatment for hypotension averaged 22mmHg and 21.5mmHg; there was no statistically significant difference (P=0.320). Changes in osmolality averaged 0.4mOsm and 12.65mOsm; the difference was smaller in the former, but this did not show statistical significance (P=0.199). Changes in blood pressure in the group that underwent hemofiltration and in the group that required treatment for hypotension averaged 22mmHg and 44.82mmHg; the difference was smaller in the former, and this showed a statistically significant difference (P=0.035). Changes in osmolality averaged 0.4mOsm and 17.59mOsm; the difference was smaller in the former, but this did not show statistical significance (P=0.086). D. On the changes of body weight and blood pressure between the hemofiltration and hemodialysis groups: 1. Changes in body weight in the groups that underwent hemofiltration and hemodialysis averaged 3.340 and 3.320; there was no statistically significant difference (P=0.185), but the comparison of the standard deviations of the body weight differences showed a statistically significant difference (P=0.0000). Changes in blood pressure in the groups that underwent hemofiltration and hemodialysis averaged 17.81mmHg and 19.47mmHg; there was no statistically significant difference (P=0.119), but the comparison of the standard deviations of the blood pressure differences showed a statistically significant difference (P=0.0000). E. On the blood infusion method for the coil after hemodialysis and methods of reducing residual blood loss in the coil: 1. Comparing and analysing the Hct of residual blood in the coil by factors influencing the blood infusion method, the 200cc saline infusion method reduced residual blood in the coil in a quantitative comparison of 0cc, 50cc, 100cc and 200cc of saline, and the differences showed statistical significance (P < 0.001).
The shaking-coil method reduced residual blood in the coil in comparison with the non-shaking method, and this showed a statistically significant difference (P < 0.05). Adjusting the pressure in the coil to 0mmHg reduced residual blood in the coil in comparison with 200mmHg, and this showed a statistically significant difference (P < 0.001). 2. Comparing the blood infusion methods, divided into 10 methods, with respect to each factor, there was little difference within the group choosing 100cc saline infusion with the coil at 0mmHg; the measured quantity of blood loss averaged 13.49cc. The shaking-coil method with 50cc saline infusion while adjusting the pressure in the coil to 0mmHg was the most effective in reducing residual blood; the measured quantity of blood loss averaged 15.18cc.
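
The statistical battery named in the abstract (chi-square test, t-test, and analysis of variance) can be reproduced with standard tools. The sketch below uses scipy and entirely hypothetical numbers in place of the study's clinical data; it only shows the shape of each test, not the study's actual comparisons.

```python
import numpy as np
from scipy import stats

# Hypothetical blood-pressure drops (mmHg) in two patient groups.
no_treatment = np.array([18, 20, 23, 25, 21])  # hypotension resolved without treatment
treatment    = np.array([40, 43, 47, 50, 44])  # treatment required

# Independent two-sample t-test, as used to compare mean changes between groups.
t_stat, t_p = stats.ttest_ind(no_treatment, treatment)

# Chi-square test of independence for categorical outcomes
# (rows: initial blood-pressure group, columns: hypotension / no hypotension); counts are made up.
table = np.array([[30, 70],
                  [55, 45]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

# One-way ANOVA across three saline rinsing-volume groups (residual blood Hct, hypothetical).
anova_f, anova_p = stats.f_oneway([9.1, 8.7, 9.4], [6.2, 6.8, 6.5], [3.9, 4.1, 4.3])

print(f"t-test p={t_p:.4f}, chi-square p={chi_p:.4f}, ANOVA p={anova_p:.4f}")
```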


A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science, v.8 no.3, pp.49-56, 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market became fierce, there was not enough space for banner advertisements, which rushed to portal sites only; all these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost and low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the beginning of the 2000s, when Internet advertising came to be activated, display advertising including banner advertising dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and started to outdo display advertising as of 2005. Keyword advertising refers to the advertising technique that exposes relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals like banner advertising, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than previous forms in that, instead of the seller discovering customers and running an advertisement for them as with TV, radio or banner advertising, it exposes advertisements to customers who visit on their own. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word and to achieve maximum efficiency at minimum cost. Its strong point is that it allows customers to contact the products in question directly, making it more efficient than advertisements in mass media such as TV and radio. Its weak point is that a company must have its advertisement registered on each and every portal site and finds it hard to exercise substantial supervision over its advertisement, with the possibility of its advertising expenses exceeding its profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at a low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the metered-rate system: a company pays according to the number of clicks on the keyword that users have searched. This is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks, etc. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures, not the number of clicks. This method fixes a price for the advertisement on the basis of 1,000 exposures, and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup, etc. At present, the CPC method is most frequently adopted.
The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn its visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords into use when running ads. When first running an ad, he or she should give priority to deciding which keyword to select. The advertiser should consider how many individuals using a search engine will click the keyword in question and how much he or she has to pay for the advertisement. As the popular keywords that search engine users frequently use are expensive in terms of unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords, also referred to as peripheral keywords or extension keywords, can be seen as combinations of major keywords. Most keywords are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, causing little antipathy. But it fails to attract much attention because most keyword advertising is in the form of text. Image-embedded advertising is easy to notice because of its images, but it is exposed on the lower part of a web page and is regarded as an advertisement, which leads to a low click-through rate; however, its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that is easy for people to recognize, the company is well advised to make good use of image-embedded advertising so as to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses, based on the site's events and the composition of its products, as a vehicle for monitoring customer behavior in detail. Keyword advertising also allows them to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, including the number of visitors, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data; as it is almost impossible to analyze these log files directly, one analyzes them using log analysis solutions. The generic information that can be extracted from log analysis tools includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, the average number of visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours, etc.
These data are deemed to be useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to secure popular keywords is very fierce. Some portal sites keep giving priority to the existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question after the advertising contract is over. In the case of sites giving priority to established advertisers, an advertiser who relies on keywords sensitive to season and timeliness may as well purchase a vacant advertising slot in advance, lest he or she miss the appropriate timing for advertising. However, Naver does not give priority to existing advertisers for any keyword advertisement; in this case, one can secure keywords by entering into a contract after confirming the contract period for advertising. This study is designed to look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that its advertisements are registered at the top of the most representative portal sites in Korea; these advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has its weak points, too: the CPC method is not the only, or a perfect, advertising model among the search advertisements in the online market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create a point of contact with customers.
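
The CPC/CPM distinction above is essentially arithmetic: CPC charges per click on the searched keyword, while CPM charges a flat rate per 1,000 exposures. The short sketch below compares campaign costs under the two charging models; the rates, click-through rate, and impression count are hypothetical examples.

```python
def cpc_cost(clicks, cost_per_click):
    """CPC: the advertiser pays for each click on the searched keyword."""
    return clicks * cost_per_click

def cpm_cost(impressions, cost_per_thousand):
    """CPM: the advertiser pays a flat rate per 1,000 exposures, regardless of clicks."""
    return impressions / 1000 * cost_per_thousand

# Hypothetical campaign: 100,000 exposures with a 2% click-through rate.
impressions = 100_000
clicks = int(impressions * 0.02)

print("CPC cost:", cpc_cost(clicks, 500))        # assuming 500 won per click
print("CPM cost:", cpm_cost(impressions, 3000))  # assuming 3,000 won per 1,000 exposures
print("Effective cost per click under CPM:", cpm_cost(impressions, 3000) / clicks)
```

Comparing the effective cost per click under CPM with the quoted CPC rate is one simple way an advertiser can decide which charging model suits a given keyword.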


The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan; Kwon, Taehoon; Jun, Seung-pyo
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.1-33, 2019
  • Small and medium sized enterprises (hereinafter SMEs) are already at a competitive disadvantage compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for new product development, sustainable growth and survival, but also seek networking to overcome their resource limitations; however, they face constraints because of their size. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. To solve these problems, government funded research institutes play an important role and have a duty to address the information asymmetry problem of SMEs. The purpose of this study is to identify the differentiating characteristics of SMEs that utilize the public information support infrastructure provided to SMEs to enhance their innovation capacity, and to examine how it contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government funded institutions; in this study, we specifically identify the target of such a policy and empirically demonstrate the effects of such policy-based efforts. Our goal is to help establish strategies for building the information supporting infrastructure. To achieve this purpose, we first classified the characteristics of SMEs that have been found to utilize the information supporting infrastructure provided by government funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which helps clarify the interpretative limits of our study results. Next, we performed mediator and moderator effect analyses for multiple variables to analyze the process through which the use of the information supporting infrastructure led to an improvement in external networking capabilities and resulted in enhanced product competitiveness. This analysis helps identify the key factors we should focus on when offering indirect support to SMEs through the information supporting infrastructure, which in turn helps us more efficiently manage research related to SME supporting policies implemented by government funded institutions. The results of this study are as follows. First, SMEs that used the information supporting infrastructure were found to differ significantly in size from domestic R&D SMEs, but there was no significant difference in the cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs that use the information supporting infrastructure are larger and include a relatively higher share of companies that transact to a greater degree with large companies, compared with the general group of SMEs. We also found that companies that already receive support from the information infrastructure include a high concentration of companies that need collaboration with government funded institutions.
Secondly, among the SMEs that use the information supporting infrastructure, we found that increasing external networking capabilities contributed to enhancing product competitiveness; while this was not a direct effect of the assistance, indirect contributions were made by increasing open marketing capabilities: in other words, the result was an indirect-only mediator effect. Also, the number of times a company received additional support in this process through mentoring related to information utilization was found to have a mediated moderating effect on improving external networking capabilities and, in turn, strengthening product competitiveness. The results of this study provide several insights that will help establish policies. The findings on KISTI's information support infrastructure may lead to the conclusion that it intentionally supports groups whose marketing is already well underway and that are able to achieve good performance. As a result, the government should set clear priorities on whether to support underdeveloped companies or to aid better performance. Through our research, we have identified how the public information infrastructure contributes to product competitiveness, from which we can draw some policy implications. First, the public information support infrastructure should have the capability to enhance the ability to interact with, or to find, the experts who provide required information. Second, if the utilization of the public information support (online) infrastructure is effective, it is not necessary to continuously provide informational mentoring, which is a parallel offline support; rather, offline support such as mentoring should be used as an appropriate device for monitoring abnormal symptoms. Third, SMEs should improve their ability to utilize the infrastructure, because the effect of enhancing networking capacity and product competitiveness through the public information support infrastructure appears in most types of companies rather than only in specific SMEs.
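
For readers unfamiliar with the mediator analysis described above (infrastructure use, external networking capability, product competitiveness), the following is a minimal regression-based (Baron-Kenny style) sketch in Python. The variable names, effect sizes, and simulated data are hypothetical illustrations only, not the study's survey data, measurement model, or the moderated-mediation specification it actually estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical standardized scores for the three constructs.
infra_use  = rng.normal(size=n)                               # use of the information support infrastructure
networking = 0.5 * infra_use + rng.normal(scale=0.8, size=n)  # external networking capability (mediator)
competitiveness = 0.6 * networking + 0.05 * infra_use + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Ordinary least squares with an intercept."""
    X = sm.add_constant(np.column_stack(xs))
    return sm.OLS(y, X).fit()

# Baron-Kenny style steps for checking an indirect (mediated) effect.
path_a  = ols(networking, infra_use)                   # infrastructure use -> networking
path_bc = ols(competitiveness, infra_use, networking)  # direct effect while controlling for the mediator
total   = ols(competitiveness, infra_use)              # total effect

print("a  (infra -> networking):          ", round(path_a.params[1], 3))
print("b  (networking -> competitiveness):", round(path_bc.params[2], 3))
print("c' (direct effect):                ", round(path_bc.params[1], 3))
print("c  (total effect):                 ", round(total.params[1], 3))
```

A small, insignificant direct effect c' alongside significant a and b paths is the pattern the abstract refers to as an "indirect-only" mediator effect.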