• Title/Summary/Keyword: type of information

Search Results: 14,315

Crystallographic and Magnetic Properties of Brownmillerite Ca1-xSrxFeO2.5(x=0, 0.3, 0.5, 0.7, 1.0) (Brownmillerite Ca1-xSrxFeO2.5(x=0, 0.3, 0.5, 0.7, 1.0)의 결정학적 및 자기적 성질에 관한 연구)

  • Yoon, Sung-Hyun;Yang, Ju-Il;Kim, Chul-Sung
    • Journal of the Korean Magnetics Society, v.14 no.2, pp.76-82, 2004
  • Crystallographic and magnetic properties of the brownmillerite-type oxides Ca$_{1-x}$Sr$_x$FeO$_{2.5}$ (x = 0, 0.3, 0.5, 0.7, 1.0) were investigated using x-ray diffraction (XRD) and Mössbauer spectroscopy. Polycrystalline samples were prepared by the conventional solid-state reaction method. Information on the exact crystal structures, lattice parameters, bond lengths, and bond angles was obtained by refining the XRD profiles with the Rietveld method. The crystal structures were all found to be orthorhombic, with space group Pcmn (x = 0, 0.3) and Icmm (x = 0.5, 0.7, 1.0). The lattice parameters increased monotonically with increasing Sr concentration. Both the tetrahedral and the octahedral sites were considerably distorted and elongated along the b-axis. While the bond lengths and the O-Fe-O bond angles increased slightly with Sr content, the Fe-O-Fe bond angles decreased accordingly. Below the magnetic transition temperature T$_N$, the Mössbauer spectra showed two sets of sharp sextets originating from ferric ions occupying the tetrahedral and the octahedral sites. Regardless of the composition x, the electric quadrupole splittings were -0.3 mm/s and 0.4 mm/s for the octahedral and the tetrahedral sites, respectively. Above T$_N$, the spectra showed paramagnetic doublets whose electric quadrupole splittings were about 1.6 mm/s, irrespective of x. T$_N$ was found to decrease monotonically with increasing Sr concentration. The ratios of the absorption areas of the two sites remained almost 1:1 up to as high as 0.95 T$_N$ for all x. The Debye temperatures indicated that the inter-atomic binding force for the Fe atoms at the tetrahedral site is stronger than that at the octahedral site.
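
As background for the Debye-temperature statement above, the following is the standard Debye-model expression for the Mössbauer recoil-free fraction (a general textbook relation, not necessarily the authors' exact working formula); a slower decrease of the absorption area with temperature corresponds to a larger Debye temperature $\theta_D$ and hence stronger binding of the Fe atoms at that site:

```latex
f(T) = \exp\!\left[-\frac{6E_R}{k_B\theta_D}\left(\frac{1}{4}
      + \left(\frac{T}{\theta_D}\right)^{2}\!\int_{0}^{\theta_D/T}\frac{x\,dx}{e^{x}-1}\right)\right]
\;\xrightarrow{\;T\,\gtrsim\,\theta_D/2\;}\;
\exp\!\left(-\frac{6E_R\,T}{k_B\theta_D^{2}}\right),
\qquad E_R = \text{recoil energy of the 14.4 keV } {}^{57}\mathrm{Fe} \text{ transition.}
```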

Treatment and Follow-up of Human Papillomavirus Infected Women in a Municipality in Southern Brazil

  • Ruggeri, Joao Batista;Agnolo, Catia Millene Dell;Gravena, Angela Andreia Franca;Demitto, Marcela de Oliveira;Lopes, Tiara Cristina Romeiro;Delatorre, Silvana;Carvalho, Maria Dalva de Barros;Consolaro, Marcia Edilaine Lopes;Pelloso, Sandra Marisa
    • Asian Pacific Journal of Cancer Prevention, v.16 no.15, pp.6521-6526, 2015
  • Background: This study aimed to analyze the risk behavior for cervical cancer (CC) and the human papillomavirus (HPV) prevalence and resolution among women who received care through the private healthcare network of a municipality in southern Brazil. Materials and Methods: This descriptive and retrospective study was conducted with 25 women aged 20 to 59 years who received care through the private healthcare network and were treated at a specialty clinic from January to December 2012 in a municipality in northwest Parana, southern Brazil. Data from medical records with cytological and HPV results were used. Following treatment, these women were followed up and reassessed after 6 months. Data were statistically analyzed using the t-test and chi-squared test at a 5% significance level. Results: The mean age of the studied women was 27.8 ± 7.75 years, and the majority were married, had paid employment, and were non-smokers. The mean age at menarche was 13.0 ± 0.50 years, and the mean age at first intercourse was 17.5 ± 1.78 years, with only 8.0% (2) initiating sexual activity at an age of 15 years or younger. The majority had 1 to 2 children (60.0%), 88.0% reported having had one sexual partner in their lifetime, and all the women were sexually active. A total of 68.0% used a hormonal contraceptive method. All the women had leukorrhea and pain and were infected by a single HPV type. Regarding the lesion grade, 80.0% showed high risk and 20.0% low risk. The most prevalent high-risk HPV strain was 16. Conclusions: These findings provide relevant information on HPV risk factors and infection, as well as the treatment and 6-month follow-up results, in economically and socially advantaged women with no traditional risk factors, corroborating previous reports that different risk factors may be described in different populations. Thus, this study reinforces the fact that even women without the traditional risk factors should undergo HPV monitoring and assessment to determine the persistence of infection, promoting early diagnosis of the lesions presented and appropriate treatment to prevent the occurrence of CC.
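
The statistical analysis described above (t-test and chi-squared test at a 5% significance level) can be reproduced in outline with standard tools; the sketch below uses SciPy with purely hypothetical placeholder numbers, since the study's individual-level data are not given here.

```python
# Minimal sketch of a t-test and a chi-squared test at alpha = 0.05.
# All numbers are hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats

# Hypothetical example: age at first intercourse in two illustrative groups
group_a = np.array([16.2, 17.5, 18.1, 17.0, 19.3])
group_b = np.array([17.8, 18.4, 16.9, 19.0, 18.2])
t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=False)

# Hypothetical 2x2 contingency table, e.g. lesion grade (high/low) by persistence
table = np.array([[12, 8],
                  [3, 2]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

alpha = 0.05
print(f"t-test p={t_p:.3f}, significant={t_p < alpha}")
print(f"chi-squared p={chi_p:.3f}, significant={chi_p < alpha}")
```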

Prediction of Entrance Surface Dose in Chest Digital Radiography (흉부 디지털촬영에서 입사표면선량 예측)

  • Lee, Won-Jeong;Jeong, Sun-Cheol
    • Journal of the Korean Society of Radiology, v.13 no.4, pp.573-579, 2019
  • The purpose of this study was to easily predict the entrance surface dose (ESD) in chest digital radiography. Two detector types were used: a flat-panel detector (FP) and an imaging-plate detector (IP). The ESD was measured with a dosimeter attached to a human phantom for each exposure condition (combinations of tube voltage and tube current), and each measurement was repeated three times. Phantom images were evaluated independently by three chest radiologists after blinding. The dose-area product (DAP) or exposure index (EI) was read from the Digital Imaging and Communications in Medicine (DICOM) header of the phantom images. Statistical analysis was performed with linear regression using SPSS ver. 19.0. The ESD differed significantly between FP and IP (85.7 μGy vs. 124.6 μGy, p = 0.017) and was positively correlated with image quality for both FP and IP. For FP, the adjusted R-squared was 0.978 (97.8%) and the linear regression model was ESD = 0.407 + 68.810 × DAP; a DAP of 4.781 was obtained from DAP = 0.021 + 0.014 × 340 μGy. For IP, the adjusted R-squared was 0.645 (64.5%) and the linear regression model was ESD = -63.339 + 0.188 × EI; an EI of 1748.97 was obtained from EI = 565.431 + 3.481 × 340 μGy. In chest digital radiography, the ESD can thus be easily predicted from DICOM header information.
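
The prediction rules quoted in the abstract can be written directly as two small functions; the sketch below simply evaluates the reported regression models (coefficients taken from the abstract, variable and function names illustrative).

```python
# Evaluate the fitted models reported in the abstract: ESD is estimated from
# DICOM header values, using DAP for the flat-panel (FP) detector and EI for
# the imaging-plate (IP) detector.

def esd_from_dap_fp(dap):
    """FP detector: ESD (uGy) = 0.407 + 68.810 * DAP (abstract's fitted model)."""
    return 0.407 + 68.810 * dap

def esd_from_ei_ip(ei):
    """IP detector: ESD (uGy) = -63.339 + 0.188 * EI (abstract's fitted model)."""
    return -63.339 + 0.188 * ei

# Evaluate at the example values quoted in the abstract
print(esd_from_dap_fp(4.781))    # about 329 uGy
print(esd_from_ei_ip(1748.97))   # about 265 uGy
```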

Analysis of Characteristics of Horizontal Response Spectrum of Velocity Ground Motions from 5 Macro Earthquakes (5개 중규모 지진의 속도 관측자료를 이용한 수평 응답스펙트럼 특성 분석)

  • Kim, Jun-Kyoung
    • Tunnel and Underground Space, v.21 no.6, pp.471-479, 2011
  • The horizontal velocity response spectra of ground motions recorded during five recent moderate earthquakes (magnitude 4.8 or greater) around the Korean Peninsula were analysed and compared with the horizontal acceleration response spectra, with the seismic design response spectrum applied to domestic nuclear power plants (Reg. Guide 1.60), and with the Korean Standard Design Response Spectrum for general structures and buildings. A total of 102 horizontal velocity records (NS and EW components) were used for the velocity response spectra, each normalized with respect to its peak velocity. First, the results showed that the velocity response spectra have larger values in the medium natural-period range, whereas the acceleration response spectra have larger values in the short-period range. Second, the velocity response spectra exceed Reg. Guide 1.60 in the longer-period range, at natural frequencies below about 6-7 Hz. Finally, compared with the Korean Standard Design Response Spectrum for three soil types (SC, SD, and SE), the velocity response spectra showed much higher values for period bands below 1.5 s (SC), 2.0 s (SD), and 3.0 s (SE), respectively. These results suggest that the general tendency for acceleration, velocity, and displacement response spectra to have their largest values in the short-, medium-, and long-period ranges, respectively, also holds consistently for domestic ground motions, in particular for the velocity records. Information on the response spectrum in this medium-period range is very important because domestic design of buildings and structures has recently emphasized medium and long natural periods over short ones, owing to the increasing number of super high-rise buildings.
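
For readers unfamiliar with how such spectra are obtained, the sketch below shows one common way to compute a relative-velocity response spectrum for a single ground-motion record: each natural period is represented by a damped SDOF oscillator integrated with the Newmark average-acceleration method, and the spectral ordinate is the peak relative velocity. This is a generic illustration with a synthetic signal and 5% damping, not the paper's processing code.

```python
import numpy as np

def velocity_response_spectrum(ag, dt, periods, zeta=0.05):
    """Peak relative velocity of a damped SDOF oscillator for each natural period."""
    sv = np.zeros(len(periods))
    for i, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        u = v = 0.0
        a = -ag[0]                      # initial oscillator acceleration (u = v = 0)
        vmax = 0.0
        denom = 1.0 + zeta * wn * dt + (wn * dt) ** 2 / 4.0
        for g in ag[1:]:
            # Newmark average-acceleration step (gamma = 1/2, beta = 1/4)
            rhs = (-g - 2.0 * zeta * wn * (v + dt / 2.0 * a)
                   - wn ** 2 * (u + dt * v + dt ** 2 / 4.0 * a))
            a_new = rhs / denom
            v_new = v + dt / 2.0 * (a + a_new)
            u_new = u + dt * v + dt ** 2 / 4.0 * (a + a_new)
            u, v, a = u_new, v_new, a_new
            vmax = max(vmax, abs(v))
        sv[i] = vmax
    return sv

# Illustrative usage with a synthetic record (not real data)
dt = 0.01
t = np.arange(0, 20, dt)
ag = 0.1 * 9.81 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.2 * t)
periods = np.linspace(0.05, 5.0, 100)
sv = velocity_response_spectrum(ag, dt, periods)
```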

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.127-148, 2020
  • The data center is a physical facility that houses computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, a proportional expansion of data center infrastructure is inevitable. Monitoring the health of these facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the equipment concerned but also other connected equipment, and it can cause enormous damage. IT equipment in particular fails irregularly because of these interdependencies, which makes the cause difficult to identify. Previous studies on failure prediction in data centers looked at a single server as a single state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring inside servers. Failures outside the server include power, cooling, and user errors; since such failures can be prevented at the early stages of data center construction, various solutions are already being developed. On the other hand, the causes of failures occurring inside the server are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a failure on one server can cause failures on other servers, or be triggered by failures on other servers. In other words, while existing studies analyzed failures under the assumption of a single server that does not affect other servers, this study assumes that failures have effects between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when a failure on one device was followed by a failure on another device within five minutes, the two were defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, five devices that frequently failed together within these sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state, was used. In addition, unlike the single-server case, a Hierarchical Attention Network model structure was adopted to reflect the fact that the level of influence on a complex failure differs from server to server; this method improves prediction accuracy by giving more weight to servers with a greater impact on the failure. The study began by defining the types of failure and selecting the analysis targets.
In the first experiment, the same collected data were treated once as a single-server state and once as a multiple-server state, and the two were compared. The second experiment improved the prediction accuracy for complex failures by optimizing a separate threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to fail. This result supports the hypothesis that there are effects between servers. Overall, the prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's influence is different, improved the analysis, and applying a different threshold for each server further improved the prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance by using the results of this study.
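
The model structure described above (a per-server sequence encoder plus an attention layer that weights servers by their influence on a complex failure) can be sketched as follows. This is an illustrative PyTorch outline under assumed tensor shapes and hyperparameters, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    def __init__(self, n_features, hidden=64, attn_dim=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)  # shared per-server encoder
        self.attn_proj = nn.Linear(hidden, attn_dim)
        self.attn_vec = nn.Linear(attn_dim, 1, bias=False)
        self.classifier = nn.Linear(hidden, 1)                         # failure / no failure logit

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.view(b * s, t, f))          # last hidden state per server
        h = h[-1].view(b, s, -1)                                # (batch, n_servers, hidden)
        scores = self.attn_vec(torch.tanh(self.attn_proj(h)))  # (batch, n_servers, 1)
        weights = torch.softmax(scores, dim=1)                  # attention over servers
        context = (weights * h).sum(dim=1)                      # weighted server summary
        return self.classifier(context).squeeze(-1)             # logit for a complex failure

# Illustrative forward pass: 8 samples, 5 servers, 30 time steps, 12 resource metrics
model = ServerAttentionNet(n_features=12)
logits = model(torch.randn(8, 5, 30, 12))
```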

A study on the rock mass classification in boreholes for a tunnel design using machine learning algorithms (머신러닝 기법을 활용한 터널 설계 시 시추공 내 암반분류에 관한 연구)

  • Lee, Je-Kyum;Choi, Won-Hyuk;Kim, Yangkyun;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association, v.23 no.6, pp.469-484, 2021
  • Rock mass classification results have a great influence on the construction schedule and budget as well as on tunnel stability in tunnel design. A total of 3,526 tunnels have been constructed in Korea, and the associated design and construction techniques have been continuously developed; however, few studies have examined how to assess rock mass quality and grade more accurately, so results often differ widely depending on the inspector's experience and judgement. This study therefore aims to suggest a more reliable rock mass classification (RMR) model using machine learning algorithms, whose use is rapidly expanding, based on the various rock and rock mass information collected from boring investigations. For this, 11 learning parameters (depth, rock type, RQD, electrical resistivity, UCS, Vp, Vs, Young's modulus, unit weight, Poisson's ratio, and RMR) from 13 local tunnel cases were selected, 337 training data sets and 60 test data sets were prepared, and 6 machine learning algorithms (DT, SVM, ANN, PCA & ANN, RF, XGBoost) were tested with various hyperparameters for each algorithm. The results show that the mean absolute errors in RMR value for the five algorithms other than the Decision Tree were less than 8, with the Support Vector Machine model performing best. The applicability of the model established through this study was confirmed, and this prediction model can be applied for more reliable rock mass classification as additional data are continuously accumulated.
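
A minimal sketch of the kind of pipeline described above, using scikit-learn: borehole parameters are used as features to predict the RMR value with a Support Vector Machine regressor, scored with the mean absolute error quoted in the abstract. The data below are synthetic stand-ins (the paper's dataset is not reproduced here), and rock type is assumed to be already numerically encoded.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# Hypothetical stand-in for the borehole table: 10 numeric features
# (depth, encoded rock type, RQD, resistivity, UCS, Vp, Vs, E, unit weight,
# Poisson's ratio) and the RMR target; 337 training + 60 test samples as in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(397, 10))
y = 50 + 10 * X[:, 2] + 5 * X[:, 4] + rng.normal(scale=3, size=397)  # synthetic RMR

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=60, random_state=42)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```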

Inexpensive Visual Motion Data Glove for Human-Computer Interface Via Hand Gesture Recognition (손 동작 인식을 통한 인간 - 컴퓨터 인터페이스용 저가형 비주얼 모션 데이터 글러브)

  • Han, Young-Mo
    • The KIPS Transactions: Part B, v.16B no.5, pp.341-346, 2009
  • The motion data glove is a representative human-computer interaction tool that inputs human hand gestures to computers by measuring their motions. It is essential equipment for new computer technologies including home automation, virtual reality, biometrics, and motion capture. To promote its popular use, this paper attempts to develop an inexpensive visual-type motion data glove that can be used without any special equipment. The proposed approach has the special feature that it can be built at low cost because it does not use the expensive motion-sensing fibers used in conventional approaches, which makes easy production and widespread use possible. Instead of a mechanical method based on motion-sensing fibers, it adopts a visual method obtained by improving conventional optical motion capture technology. Compared to conventional visual methods, the proposed method has the following advantages and original features. First, conventional visual methods use many cameras and much equipment to reconstruct the 3D pose while eliminating occlusions, whereas the proposed method adopts a mono-vision approach that allows simple, low-cost equipment. Second, conventional mono-vision methods have difficulty reconstructing the 3D pose of occluded parts of the image because they are weak against occlusion, whereas the proposed approach can reconstruct occluded parts by using originally designed thin-bar-shaped optic indicators. Third, many conventional methods use nonlinear numerical image analysis algorithms and are therefore inconvenient in terms of initialization and computation time, whereas the proposed method removes these inconveniences by using a closed-form image analysis algorithm obtained from an original formulation. Fourth, many conventional closed-form algorithms use approximations in their formulation and thus suffer from low accuracy and restricted applicability due to singularities, whereas the proposed method avoids these disadvantages through an original formulation in which a closed-form algorithm is derived using exponential-form twist coordinates instead of approximations or local parameterizations such as Euler angles.
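
For reference, the "exponential-form twist coordinates" mentioned above follow the standard rigid-body kinematics formulation; the display below states the usual closed-form twist exponential (textbook background, not the paper's specific derivation), where $\omega$ is a unit rotation axis, $\theta$ the rotation angle, and $v$ the translational part of the twist $\xi = (v, \omega)$:

```latex
e^{[\hat{\omega}]\theta} = I + [\hat{\omega}]\sin\theta + [\hat{\omega}]^{2}(1-\cos\theta),
\qquad
e^{\hat{\xi}\theta} =
\begin{pmatrix}
e^{[\hat{\omega}]\theta} & \bigl(I - e^{[\hat{\omega}]\theta}\bigr)(\omega \times v) + \omega\,\omega^{\top} v\,\theta \\
0 & 1
\end{pmatrix}
```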

A Study on UX-centered Smart Office Phone Design Development Process Using Service Design Process (서비스디자인 프로세스를 활용한 UX중심 오피스 전화기 디자인개발 프로세스 연구)

  • Seo, Hong-Seok
    • Science of Emotion and Sensibility, v.25 no.1, pp.41-54, 2022
  • The purpose of this study was to propose a "user experience (UX)-centered product development process" so that a product design development process based on the service design process can be systematized and used in practice. Given that usability research on office phones is lacking compared with ordinary home phones, this study expands from simple product development to a product-based service design point of view, aiming to explore ways to provide user experience value through office phone design in the smart office. The study focused on extracting UX-centered user needs using the service design process and on developing a product design that realizes user experience value. In particular, the service design process was applied to systematically extract user needs and experience value elements in the product development process and to discover ideas converged with product-based services. For this purpose, the "Double Diamond Design Process Model," which is widely used in the service design field, was adopted. In addition, a product design development process was established so that usability improvement plans, user experience value elements, and product-service connected ideas could be extracted through a workflow in which real users and people from various fields participate. Based on the double diamond design process, design trends were identified in the "Discover" information-collection stage, mainly in the office phone market. In the "Define" analysis and extraction stage, user needs were analyzed through user observation, interviews, and a usability survey, and design requirements and user experience issues were extracted; personas were set through user type analysis, and user scenarios were presented. In the "Develop" stage, ideation workshops and concept renderings were conducted to embody the design, and people from various fields within the company participated in setting the design direction, reflecting design preferences and usability improvement plans. In the "Deliver" improvement/prototype development/evaluation stage, a working mock-up of a design prototype was produced, and design and usability evaluations were conducted in consultation with external design experts. The significance of this work is that it established a "UX-centered product development process" model that converges the existing product design development process with the service design process. Ultimately, a service design-based product design development process was presented so that I Corp.'s products could realize user experience value through service convergence.

Trends in LCD Research and Development (LCD 연구 개발 동향)

  • 이종천
    • The Magazine of the IEIE, v.29 no.6, pp.76-80, 2002
  • The phase transition and optical anisotropy of liquid crystals were first reported by F. Reinitzer and O. Lehmann in 1888 and 1889 in Monatsch Chem. and Z. Physikal. Chem., respectively, but until the 1950s, after the end of World War II, liquid crystals were treated only as a subject of basic laboratory research. In 1963, Williams filed the first patent for a liquid crystal device, and in 1968 Heilmeier and colleagues at RCA invented the first DSM (Dynamic Scattering Mode) LCD (Liquid Crystal Display), exploiting the "dynamic scattering" phenomenon in which a transparent nematic liquid crystal turns turbid when a low-frequency voltage is applied. Although commercialization failed because of the high drive voltage of more than 150 V and excessive power consumption, the Guest-Host effect and the memory effect were discovered in the process. In the 1970s, liquid crystal materials that were stable at room temperature were synthesized (MBBA by H. Kelker, cyano-biphenyl liquid crystals by G. Gray), and the invention of the CMOS transistor and the development of peripheral technologies such as transparent conductive films (ITO) and mercury batteries enabled full-scale commercialization of LCDs. In 1971, M. Shadt, W. Helfrich, and J.L. Fergason invented the TN (Twisted Nematic) LCD, which was applied to electronic calculators and wristwatches, and in the late 1970s Sharp released a dot-matrix portable computer. Because simple-drive TN LCDs were limited in quality for displaying graphic information, research on a-Si TFT (amorphous silicon thin-film transistor) LCDs was started by Le Comber in the UK in 1979, the STN (Super Twisted Nematic) LCD was devised by T.J. Scheffer, J. Nehring, and G. Waters in 1983, and ferroelectric LCDs were introduced by N. Clark and S. Lagerwall in 1980 and by K. Yossino in 1983, greatly contributing to increasing the information content that LCDs could display. Color displays were commercialized through A.G. Ficher's 1972 method of attaching RGB (red, green, blue) filters outside the cell and T. Uchida's 1981 method of attaching RGB filters inside the cell. In 1985, the polymer-dispersed LCD was invented by J.L. Fergason; by the mid-1980s, prototypes of a-Si TFT LCDs capable of displaying moving images had been developed, and full-scale mass production began in 1990. In the early 1990s, thanks to the colorization, enlargement, and higher quality of STN LCDs, LCDs were widely adopted in notebook PCs, and in the late 1990s TFT LCDs came to dominate the notebook PC market owing to their price competitiveness relative to display quality. Since then, enlarging TFT LCDs has become a key issue; in 1995 Samsung Electronics developed what was then the world's largest 22-inch TFT LCD. Poly-Si TFT LCDs were developed for higher resolution, digitizer-integrated LCDs broadened the range of applications, and for larger screens Canon developed 14.8-inch and 21-inch FLCDs in 1994. Tiled LCD technology has also been developed for enlargement: in 1995 Sharp exhibited a 28-inch TFT LCD made by joining two 21-inch panels, in 1996 development of a 40-inch-class display joining four 21-inch panels was attempted, and recently Samsung Electronics developed a single-panel 40-inch TFT LCD based on improved LCD characteristics, better production equipment, and stable process control. For projection displays, poly-Si TFT LCDs have been used in rear-projection and front-projection systems ranging from 25 to 100 inches, leading the large-screen TV market. With the arrival of the digital broadcasting era in the 21st century, consumer electronics makers worldwide are concentrating on mass production to capture the next-generation ultra-thin "wall-hanging TV" market, comprising plasma display panel (PDP) TVs, liquid crystal display (LCD) TVs, and ferroelectric liquid crystal (FLCD) TVs, which is expected to reach about 15 million units in 2005. Even if the wall-hanging TV market takes off in earnest, PDP TVs and LCD TVs are unlikely to compete directly, because the prevailing view is that once the digital TV market opens fully, LCD TVs will lead the mid-to-large segment below 40 inches while PDP TVs will lead the large-screen segment above 40 inches. However, such direct-view mid- and large-size displays are so expensive that replacing current CRT TVs is expected to take considerable time. A promising alternative is the high-resolution projection TV, which can deliver high-quality digital images at a relatively low price; DMD (Digital Micro-mirror Display), poly-Si TFT LCD, and LCOS (Liquid Crystals on Silicon) devices are being commercialized for such high-resolution projection TVs. With the development of the Internet and information and communication technologies, the market for portable displays is growing unexpectedly fast, and the required display quality is expanding from simple character display to high-resolution graphic moving images, color, and even three-dimensional images. As shown in Table 1, the LCD market is expected to keep growing in each application field, and the growth of new application markets can also be predicted to some extent. Accordingly, the direction of LCD R&D can be broadly divided into two categories: first, reducing cost and improving display quality to strengthen the competitiveness of LCD products currently in mass production; and second, developing new types of LCDs to replace existing products or create new markets. From this point of view, ongoing LCD technology development can be classified as follows: 1) cost reduction, 2) performance improvement, and 3) development of new types of LCDs.

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science, v.8 no.3, pp.49-56, 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet became a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only; all of these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost and low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising took off, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising as of 2005. Keyword advertising refers to the technique of exposing relevant advertisements at the top of search results when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them; in this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms, in that instead of the seller discovering customers and running advertisements for them, as with TV, radio, or banner advertising, it exposes advertisements to customers who are already visiting. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word and to achieve maximum efficiency at minimum cost. Its strong point is that customers can reach the products in question directly, making it more efficient than advertising in mass media such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over the advertisement, so advertising expenses may exceed profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former, known as the most efficient technique and also referred to as advertising based on a metered-rate system, makes a company pay according to the number of clicks on a searched keyword; it is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks. CPM advertising relies on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures rather than the number of clicks; it fixes the price per 1,000 exposures and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is the most frequently adopted.
The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies that maximize the strong points of keyword advertising and complement its weak points, it is highly likely to turn its visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want; with this in mind, he or she has to put multiple keywords into use when running ads. When first running an ad, the advertiser should give priority to which keyword to select, considering how many search engine users will click the keyword in question and how much the advertisement will cost. Since the popular keywords that search engine users frequently enter are expensive in terms of unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral keywords or extension keywords, are combinations of major keywords. Most keywords are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, causing little antipathy, but it fails to attract much attention precisely because most keyword advertising is text-based. Image-embedded advertising is easy to notice because of its images, but it is exposed on the lower part of the web page and is clearly recognized as an advertisement, which leads to a low click-through rate; its strong point, however, is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people can easily recognize, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on the events of the sites in question and the composition of products, as a means of monitoring customer behavior in detail. Keyword advertising also allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, such as the number of visitors, page views, and cookie values. A user's IP address, the pages used, the access times, and cookie values are stored in the log files generated by each Web server. These log files contain a huge amount of data, and because it is almost impossible to analyze them directly, log analysis solutions are used. The generic information that can be extracted from log analysis tools includes the total number of page views, the average number of page views per day, the number of basic page views, the number of page views per visit, the total number of hits, the average number of hits per day, the number of hits per visit, the number of visits, the average number of visits per day, the net number of visitors, the average number of visitors per day, one-time visitors, visitors who have come more than twice, and average usage time.
Such data are also useful for analyzing the situation and current status of rival companies and for benchmarking. Because keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question after the advertising contract is over. On sites that give priority to established advertisers, an advertiser who relies on keywords sensitive to season and timeliness may as well purchase a vacant advertising slot in advance so as not to miss the appropriate timing. Naver, however, does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period. This study is designed to take a look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that its advertisements are registered at the top of the most representative portal sites in Korea; these advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has its weak points, too: the CPC method is not the only, or a perfect, advertising model among search advertisements in the online market. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create points of contact with customers.
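
The cost arithmetic behind the two billing models discussed above (CPC pays per click, CPM pays per 1,000 exposures regardless of clicks) can be illustrated with a few lines of code; the figures below are hypothetical and not taken from the article.

```python
# Compare advertising cost under the two billing models described above.
# All prices, impression counts, and the click-through rate are hypothetical.

def cpc_cost(clicks, price_per_click):
    """CPC: the advertiser pays per click."""
    return clicks * price_per_click

def cpm_cost(impressions, price_per_1000):
    """CPM: the advertiser pays per 1,000 exposures, regardless of clicks."""
    return impressions / 1000.0 * price_per_1000

impressions = 200_000          # hypothetical exposures in a month
ctr = 0.012                    # hypothetical click-through rate
clicks = impressions * ctr

print("CPC model:", cpc_cost(clicks, price_per_click=300), "KRW")        # 720,000 KRW
print("CPM model:", cpm_cost(impressions, price_per_1000=2000), "KRW")   # 400,000 KRW
```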
