Title/Summary/Keyword: image information

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science / v.8 no.3 / pp.49-56 / 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet became a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising took off, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising as of 2005. Keyword advertising is the technique of exposing relevant advertisements at the top of search-result pages when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms in that, instead of the seller discovering customers and running advertisements at them as TV, radio, or banner advertising does, it exposes advertisements to customers who come to it. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word and to achieve maximum efficiency at minimum cost. Its strong point is that customers are brought into direct contact with the products in question more efficiently than through mass-media advertising such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over the advertisement, so advertising expenses may exceed profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former, known as the most efficient technique and also referred to as advertising based on the meter-rate system, has a company pay for the number of clicks a searched keyword receives; it is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay on the basis of the number of exposures rather than the number of clicks, with the price fixed per 1,000 exposures; it is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted.
The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want; with this in mind, he or she should put multiple keywords into use when running ads. When first running an ad, the advertiser should give priority to which keyword to select, considering how many search-engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search-engine users frequently use carry a high unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keyword ads are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, arousing little antipathy; but it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easy to notice thanks to its images, but it is exposed on the lower part of a web page and recognized as an advertisement, which leads to a low click-through rate; its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people readily recognize, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should examine customers' responses to site events and product composition as a vehicle for monitoring their behavior in detail. Keyword advertising also allows them to measure the advertising effect of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on visitor information such as the number of visitors, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data; as it is almost impossible to analyze them directly, one is supposed to analyze them using log-analysis solutions. The generic information that log-analysis tools can extract includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, the average number of visits per day, the net number of visitors, average visitors per day, one-time visitors, repeat visitors, and average usage hours.
Such data are deemed useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers the chance to purchase a keyword once the advertising contract is over. If an advertiser relies on keywords sensitive to season and timeliness on sites that give priority to established advertisers, he or she may as well purchase a vacant advertising slot in advance so as not to miss the appropriate timing. Naver, however, does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy a keyword by entering into a contract after confirming the contract period. This study is designed to take a look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that it is based on the CPC charging model and that its advertisements are registered at the top of the most representative portal sites in Korea; these advantages make it the most appropriate medium for small and medium enterprises. However, Overture's CPC method has weak points, too: the CPC method is not a perfect advertising model among the search advertisements in the online market. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create a point of contact with customers.
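
As a concrete illustration of the two billing models described above, the following Python sketch compares CPC and CPM costs for the same campaign; all rates, volumes, and the click-through figure are invented for illustration and do not come from the paper.

```python
# Hypothetical comparison of CPC (pay per click) and CPM (pay per 1,000
# exposures) billing; every number here is an invented example.

def cpc_cost(clicks: int, cost_per_click: float) -> float:
    """Meter-rate (CPC) billing: pay only for clicks received."""
    return clicks * cost_per_click

def cpm_cost(impressions: int, cost_per_thousand: float) -> float:
    """Flat-rate (CPM) billing: pay per 1,000 exposures, clicked or not."""
    return impressions / 1000 * cost_per_thousand

impressions = 50_000              # times the ad appears on result pages
clicks = int(impressions * 0.02)  # assumed 2% click-through rate -> 1,000 clicks

print(f"CPC cost: {cpc_cost(clicks, 0.50):,.2f}")       # 1,000 * 0.50 = 500.00
print(f"CPM cost: {cpm_cost(impressions, 5.00):,.2f}")  # 50 * 5.00 = 250.00
```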

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data are multidimensional time series, which makes it difficult to consider the characteristics of multidimensional data and of time series data at the same time. When dealing with multidimensional data, correlation between variables should be considered; existing methods such as probability-based, linear, and distance-based approaches degrade because of the curse of dimensionality. In addition, time series data are usually preprocessed with sliding windows and time series decomposition for autocorrelation analysis, techniques that increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and machine learning and artificial neural network techniques are now actively applied. Statistically based methods are difficult to apply when data are non-homogeneous, and they do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or when the data contain noise or outliers, and they are restricted to training data free of noise and outliers. An autoencoder built from artificial neural networks is trained to reproduce its input as closely as possible. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy a probability distribution or linearity assumption, and it can learn in an unsupervised manner without labeled data. However, it is limited in identifying local outliers in multidimensional data, and the dimensionality of the data grows greatly because of the characteristics of time series. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to mitigate the limitation in identifying local outliers in multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and images; the modals share the autoencoder's bottleneck and thereby learn cross-modal correlation. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the dimensionality. Conditional inputs are generally categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance over 41 variables was confirmed for the proposed and comparison models; restoration quality differs by variable, and restoration works well for the Memory, Disk, and Network modals in all three models because their loss values are small.
The Process modal showed no significant difference across the three models, while the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, recall was 0.9828 for CMAE, confirming that it detects almost all of the abnormalities. The model's accuracy also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond performance: techniques such as time series decomposition and sliding windows require extra procedures to manage, and the dimensional increase they cause can slow inference. The proposed model is thus easy to apply to practical tasks in terms of inference speed and model management.
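
The architecture described above can be sketched as follows in PyTorch; the layer widths, the four-way modal split of the 41 variables, and the sin/cos time encoding are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal PyTorch sketch of a Conditional Multimodal Autoencoder (CMAE):
# per-modal encoders share one bottleneck (learning cross-modal correlation),
# and a time condition is concatenated instead of widening the input with
# sliding windows. Sizes and the 4-way modal split are illustrative.
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims, cond_dim, bottleneck=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 16), nn.ReLU()) for d in modal_dims)
        self.shared = nn.Sequential(
            nn.Linear(16 * len(modal_dims) + cond_dim, bottleneck), nn.ReLU())
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(bottleneck + cond_dim, 16), nn.ReLU(),
                          nn.Linear(16, d)) for d in modal_dims)

    def forward(self, modals, cond):
        hs = [enc(x) for enc, x in zip(self.encoders, modals)]
        z = self.shared(torch.cat(hs + [cond], dim=1))   # shared bottleneck
        return [dec(torch.cat([z, cond], dim=1)) for dec in self.decoders]

# 41 monitoring variables split (arbitrarily) into CPU/Memory/Disk/Network
# modals, conditioned on time-of-day encoded as sin/cos to expose periodicity.
dims = [10, 10, 10, 11]
model = CMAE(dims, cond_dim=2)
modals = [torch.randn(32, d) for d in dims]
hour = torch.rand(32, 1) * 24
cond = torch.cat([torch.sin(2 * torch.pi * hour / 24),
                  torch.cos(2 * torch.pi * hour / 24)], dim=1)
recon = model(modals, cond)
loss = sum(nn.functional.mse_loss(r, x) for r, x in zip(recon, modals))
loss.backward()  # at inference, per-sample reconstruction error = anomaly score
```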

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.111-126 / 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than securing new customers: retaining existing customers is far more economical than acquiring new ones, and the acquisition cost of a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent customer churn and improve retention rates are known to increase profitability and also improve brand image through higher customer satisfaction. Customer churn prediction, long conducted as a sub-area of CRM research, has recently become more important as a big-data-based performance-marketing theme owing to the development of business machine learning technology. Until now, research on churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and games, which are highly competitive and where churn management is urgent. These studies focused on improving the performance of the churn prediction model itself, for example by comparing the performance of various models, exploring features effective for forecasting churn, or developing new ensemble techniques, and they were limited in practical utility because most treated the entire customer base as a single group when developing a predictive model. The main purpose of the existing research, in other words, was to improve model performance itself, and there was relatively little research on improving the overall churn prediction process. In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat the entire customer base as a single group. It is therefore desirable, for effective churn prediction in heterogeneous industries, to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model to each group. Although this produces better predictions than a single model for the whole customer population, there is still room for improvement: clustering is a mechanical, exploratory grouping technique that computes distances over inputs and does not reflect strategic intent, such as loyalty, on the part of the firm. This study proposes a segment-based churn prediction process (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation) built on two-dimensional customer loyalty, on the assumption that successful churn management is achieved better through improvements to the overall process than through model performance alone.
CCP/2DL is a churn prediction process that segments customers on two dimensions of loyalty, quantitative and qualitative, performs secondary grouping of the segments according to churn patterns, and then independently applies heterogeneous churn prediction models to each churn-pattern group. To assess the relative merit of the proposed process, its performance was compared with the commonly applied general churn prediction process and with a clustering-based churn prediction process. The general process used in this study treats the customers to be predicted as a single group and applies the most commonly used churn prediction method as a machine learning model, while the clustering-based process first segments customers with clustering techniques and then builds a churn prediction model for each group. In cooperation with a global NGO, the proposed CCP/2DL showed better churn prediction performance than the other methodologies. This churn prediction process is not only effective in predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out related performance-marketing activities.
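
A minimal sketch of the segment-then-model idea behind CCP/2DL, using scikit-learn; the synthetic data, column names, fixed 0.5 thresholds, and classifier choice are all assumptions, and the paper's actual loyalty segmentation and secondary churn-pattern grouping are richer than this.

```python
# Sketch: split customers on two loyalty axes, then fit a separate churn
# model per segment instead of one global model. All data are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "quant_loyalty": rng.random(1000),   # e.g., transaction-based score
    "qual_loyalty": rng.random(1000),    # e.g., attitudinal/engagement score
    "feat1": rng.normal(size=1000),
    "feat2": rng.normal(size=1000),
    "churned": rng.integers(0, 2, size=1000),
})

# Two-dimensional segmentation: high/low on each axis -> four segments.
df["segment"] = ((df["quant_loyalty"] > 0.5).astype(int) * 2
                 + (df["qual_loyalty"] > 0.5).astype(int))

models = {}
for seg, grp in df.groupby("segment"):
    # A heterogeneous model per segment could be selected here.
    models[seg] = GradientBoostingClassifier().fit(
        grp[["feat1", "feat2"]], grp["churned"])

# Score each customer with the model belonging to their own segment.
df["churn_prob"] = df.apply(
    lambda r: models[r["segment"]].predict_proba(
        [[r["feat1"], r["feat2"]]])[0, 1], axis=1)
print(df.groupby("segment")["churn_prob"].mean())
```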

Analysis of Oceanic Current Maps of the East Sea in the Secondary School Science Textbooks (중등 과학 교과서의 동해 해류도 분석)

  • Park, Kyung-Ae;Park, Ji-Eun;Seo, Kang-Sun;Choi, Byoung-Ju;Byun, Do-Seong
    • Journal of the Korean Earth Science Society / v.32 no.7 / pp.832-859 / 2011
  • The importance of scientific education on accurate ocean currents and circulation has been increasingly emphasized because the currents play a significant role in climate change and the global energy balance. The objectives of this study are to analyze the errors in the oceanic current maps in textbooks, to discuss the variety of error sources, and to suggest how to produce a unified oceanic current map of the East Sea for students. Twenty-seven textbooks based on the 7th National Curriculum were analyzed, and the characteristics of their current maps were quantitatively investigated by comparison with both the previous literature and up-to-date scientific knowledge. All the maps in the textbooks, drawn with different mappings, were converted to digitized image data in a Mercator projection using geolocation information. Detailed analyses were performed to investigate the patterns of the Tsushima Warm Current (TWC) in the Korea Strait, to examine how closely the nearshore branch of the TWC flows along the Japanese coast, to scrutinize the features of the offshore branch of the TWC south of the subpolar front in the East Sea, to quantify the northern extent of the northward-propagating East Korea Warm Current and the latitude at which it turns east, and to examine the outflow of the TWC near the Tsugaru and Soya Straits. In addition, the origins, southern limits, and distances from the coast of the Liman Current and the North Korea Cold Current were analyzed, and other erroneous expressions of the currents in the textbooks were presented. These analyses revealed problems in the current maps of the present textbooks that might lead students to misconceptions. The study also addressed the need for a bridge between scientists holding up-to-date scientific results and educators in need of educational materials.
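
The map-normalization step mentioned above (converting differently projected maps onto one Mercator grid) rests on the standard spherical Mercator formulas; a minimal sketch follows, with the georeferencing of each scanned map assumed already done.

```python
# Standard spherical Mercator projection: x is the longitude offset,
# y = ln(tan(pi/4 + phi/2)). This is only the projection step; the paper's
# georeferencing of scanned textbook maps is assumed.
import math

def to_mercator(lat_deg: float, lon_deg: float, lon0_deg: float = 0.0):
    """Return (x, y) in radians on a unit-radius spherical Mercator grid."""
    lam = math.radians(lon_deg - lon0_deg)
    phi = math.radians(lat_deg)
    return lam, math.log(math.tan(math.pi / 4 + phi / 2))

# Example: a point in the East Sea (~40N, 133E), referenced to 128E.
print(to_mercator(40.0, 133.0, lon0_deg=128.0))
```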

The Relationship among Country of Origin, Brand Equity and Brand Loyalty: Comparison among USA, China and Korea (원산지효과, 상표자산 및 상표충성 간의 관계에 관한 연구: 미국, 중국, 한국의 비교분석)

  • Ko, Eun-Ju;Kim, Kyung-Hoon;Kim, Sook-Hyun;Li, Guo-Feng;Zou, Peng;Zhang, Hao
    • Journal of Global Scholars of Marketing Science / v.19 no.1 / pp.47-58 / 2009
  • The marketing environment has become competitive to an extent that requires firms to target their products at markets that span national boundaries. However, competitive clout cannot be achieved in global consumer markets unless firms thoroughly understand and adequately respond to the core values and needs of those consumers. Brand equity is one of a company's most important assets, and in sportswear markets in particular it is the crucial value added to a product by its brand name. Factors such as country of origin also influence customers' attitudes toward brand equity. This paper therefore discusses the relationship between the country-of-origin effect and brand equity, and how they influence consumers' loyalty to respective brands. It focuses on the sports shoes market, an area of growing opportunity for world manufacturers. The objectives of this study were: (1) to test the effect of country of origin on brand equity; (2) to test how brand equity influences consumers' brand loyalty; (3) to find whether the effects of country of origin and brand equity differ among the three countries; and (4) to find whether those effects differ among different lifestyles. Based on the literature review, the hypotheses were formulated as follows. H1-a: Country image has a positive influence on country of origin. H1-b: Product perception has a positive influence on country of origin. H2-a: Perceived quality has a positive effect on brand equity. H2-b: Perceived price has a positive effect on brand equity. H3: Country of origin has a positive effect on brand equity. H4: Brand equity has a positive impact on brand loyalty. A research model was constructed (see Fig. 1). After data analysis, the following results were obtained: sports shoes purchase behavior showed significant differences among Korean, Chinese, and American consumers in favorite brand, purchased brand, place of purchase, information usage, and favorite sports games. The results also extend research on the relationship among country of origin, brand equity, and brand loyalty to the sports shoes market. Brand equity proved to have a significant relationship with brand loyalty in all countries, while the factors that influence brand equity differ by country. A third finding is the identification of three different lifestyles, adventurer, follower, and laggard, among Korean, Chinese, and American consumers. More important than national boundaries is the emergence of new groups of consumers with similar preferences who buy similar brands. All of these consumers weigh brand equity in maintaining their brand loyalty. Perceived price is the only factor that influences brand equity for adventurers, for whom brand matters most; laggards were not influenced by any factor; and all factors except perceived price matter for followers. Marketing managers should consider brand equity when introducing their brand into a new market. Localization is the basic strategy that all sports shoes companies should understand, but as a global brand, understanding the characteristics shared across countries is more important for building a global strategy.
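
The H1-H4 chain can be illustrated as three regression stages. The abstract does not state the estimation method, so the OLS approach, variable names, and synthetic data below are assumptions for illustration only.

```python
# Illustrative three-stage test of the hypothesized paths with statsmodels OLS;
# all data are synthetic and variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "country_image": rng.normal(size=n),
    "product_perception": rng.normal(size=n),
    "perceived_quality": rng.normal(size=n),
    "perceived_price": rng.normal(size=n),
})
df["coo"] = 0.5 * df.country_image + 0.5 * df.product_perception + rng.normal(size=n)
df["brand_equity"] = (0.4 * df.perceived_quality + 0.3 * df.perceived_price
                      + 0.3 * df.coo + rng.normal(size=n))
df["brand_loyalty"] = 0.6 * df.brand_equity + rng.normal(size=n)

# H1-a/H1-b, then H2-a/H2-b + H3, then H4, as three regression stages:
print(smf.ols("coo ~ country_image + product_perception", df).fit().params)
print(smf.ols("brand_equity ~ perceived_quality + perceived_price + coo", df).fit().params)
print(smf.ols("brand_loyalty ~ brand_equity", df).fit().params)
```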

Wavelet Transform-based Face Detection for Real-time Applications (실시간 응용을 위한 웨이블릿 변환 기반의 얼굴 검출)

  • 송해진;고병철;변혜란
    • Journal of KIISE: Software and Applications / v.30 no.9 / pp.829-842 / 2003
  • In this paper, we propose a new face detection and tracking method based on template matching for real-time applications such as teleconferencing, telecommunications, the front stage of face-recognition-based surveillance systems, and video-phone applications. Since the main purpose of the paper is to track a face regardless of environment, we use a template-based face tracking method. To generate robust face templates, we apply a wavelet transform to the average face image and extract three types of wavelet template from the transformed low-resolution average face. Because template matching is generally sensitive to changes in illumination, we apply min-max normalization with histogram equalization according to the variation in intensity. A tracking method is also applied to reduce computation time and to predict a precise face candidate region. Finally, facial components are detected, and from the relative distance between the two eyes we estimate the size of the facial ellipse.
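
A minimal sketch of the pipeline described above (a wavelet template from an average face, illumination normalization, then template matching), assuming pywt and OpenCV are available; the paper's three template types and its tracking step are not reproduced, and random arrays stand in for the real images.

```python
# Sketch: low-resolution wavelet template from an average face, histogram
# equalization + min-max normalization against lighting, template matching.
import cv2
import numpy as np
import pywt

def wavelet_template(avg_face_gray: np.ndarray) -> np.ndarray:
    """2-D Haar DWT; the low-low (approximation) band serves as the template."""
    ll, _ = pywt.dwt2(avg_face_gray.astype(np.float32), "haar")
    return cv2.normalize(ll, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def normalize(img_gray_u8: np.ndarray) -> np.ndarray:
    """Histogram equalization then min-max scaling to [0, 1], reducing
    sensitivity to illumination change before matching."""
    eq = cv2.equalizeHist(img_gray_u8)
    return cv2.normalize(eq.astype(np.float32), None, 0.0, 1.0, cv2.NORM_MINMAX)

rng = np.random.default_rng(0)
avg_face = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in average face
frame = rng.integers(0, 256, (240, 320), dtype=np.uint8)   # stand-in video frame

template = normalize(wavelet_template(avg_face))           # 32x32 after the DWT
small = cv2.pyrDown(frame)                                 # frame at template scale
scores = cv2.matchTemplate(normalize(small), template, cv2.TM_CCOEFF_NORMED)
_, best, _, loc = cv2.minMaxLoc(scores)
print("best match score:", round(float(best), 3), "at", loc)
```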

The research of promotion plan about regional design innovation center - focusing on the establishment and role - (지역디자인 혁신센터의 활성화 방안에 대한 연구 - 설립과 역할(활동)을 중심으로 -)

  • Yun, Young-Tae
    • Archives of Design Research / v.18 no.4 s.62 / pp.85-94 / 2005
  • The purpose of this research is to propose ways to activate the regional design innovation centers established as part of national design policy. To this end, I studied the process by which the regional design innovation centers were established and then analyzed their present condition in order to draw up a promotion plan. As a result, three basic elements must be in place to activate a regional design center: first, understanding the local characteristics; second, setting the management direction of the center; and third, winning the sympathy of the local administration and local people so as to secure their active support. On top of these conditions, a regional design innovation center has to arrange the following infrastructure: (1) design development facilities for lending to local designers; (2) professional designers for developing the design industry; (3) program development for various activities; (4) trend research to supply to local companies; (5) one-stop design service support; (6) construction of a network between the design administration and design companies for active communication; (7) innovation of the design center toward a profitable model; (8) establishment of local design policy together with the local administration; and (9) independent, responsible management. For its promotion, a regional design innovation center must also make continuous efforts on the following: 1. design support for local industry; 2. various design campaigns to spread public recognition of design; 3. support for design companies and local companies with the established facilities and expensive equipment; 4. construction of design information infrastructure for local companies; 5. development of new industry-university cooperation programs; and 6. development of local character and innovation of the local image to renew the locality.

Hue Shift Model and Hue Correction in High Luminance Display (고휘도 디스플레이의 색상이동모델과 색 보정)

  • Lee, Tae-Hyoung;Kwon, Oh-Seol;Park, Tae-Yong;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.4 s.316 / pp.60-69 / 2007
  • The human eye loses color sensitivity when subjected to high levels of luminance and perceives a discrepancy in color between high- and normal-luminance displays, generally known as a hue shift. Accordingly, this paper models the hue-shift phenomenon and proposes a hue-correction method to provide a perceptual match between high- and normal-luminance displays. The amount of hue shift is determined by perceived hue matching experiments. The phenomenon was first observed at three lightness levels, with the luminance ratio between the high- and normal-luminance displays held constant while the perceived hue matching experiments were performed. To quantify the hue shift over the whole hue angle, color patches of equal lightness, equally spaced in hue angle, are first created. These patches are then displayed one by one on both displays at the given luminance ratio. Next, the hue value of each patch on the high-luminance display is adjusted by observers until the perceived hue of the patches on both displays appears visually the same. After the hue-shift values are obtained, they are fit piecewise so that the shifted hue amount can be determined approximately for an arbitrary pixel hue in a high-luminance display and then used for correction. In essence, the input RGB values of an image are converted to CIELAB values, from which LCh (lightness, chroma, hue) values are calculated to obtain the hue of every pixel. These hue values are shifted by the amount given by the hue-shift model functions. Finally, corrected CIELAB values are calculated from the corrected hue values, and output RGB values are estimated for all pixels. For evaluation, an observer preference test was performed on the hue-shift results, and almost all observers concluded that images corrected by the hue-shift model were visually matched with images on the normal-luminance display.
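
The correction pipeline described above can be sketched as follows; the piecewise hue-shift model here uses placeholder sample points interpolated with np.interp, since the paper's fitted values are not given in the abstract.

```python
# Sketch: RGB -> CIELAB -> LCh, shift hue by a piecewise model, convert back.
# The (hue, shift) sample points are placeholders, not the paper's fitted data.
import numpy as np
from skimage import color

model_h = np.array([0.0, 90.0, 180.0, 270.0, 360.0])  # hue angle, degrees
model_shift = np.array([4.0, -3.0, 2.0, -5.0, 4.0])   # hypothetical shift, degrees

def correct_hue(rgb: np.ndarray) -> np.ndarray:
    """rgb: float image in [0, 1], shape (H, W, 3)."""
    lab = color.rgb2lab(rgb)
    a, b = lab[..., 1], lab[..., 2]
    chroma = np.hypot(a, b)                     # C = sqrt(a^2 + b^2)
    hue = np.degrees(np.arctan2(b, a)) % 360.0  # h = atan2(b, a)
    hue = (hue + np.interp(hue, model_h, model_shift)) % 360.0  # apply model
    lab[..., 1] = chroma * np.cos(np.radians(hue))              # corrected a*
    lab[..., 2] = chroma * np.sin(np.radians(hue))              # corrected b*
    return color.lab2rgb(lab)

img = np.random.rand(4, 4, 3)  # stand-in for a high-luminance display image
print(correct_hue(img).shape)
```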

Analysis of Consumer Consumption Status and Demand of Rice-wine (약주에 대한 소비자의 소비실태 및 요구도 분석)

  • Kim, Eun-Hae;Ahn, Byung-Hak;Lee, Min-A
    • Journal of the Korean Society of Food Science and Nutrition / v.42 no.3 / pp.478-486 / 2013
  • The purpose of this study was to analyze consumer consumption of Korean rice-wine and the product concepts consumers demand. An online survey, conducted from April 28 to May 6, 2010, targeted 200 consumers in Seoul and the Gyeonggi-do area. More than half of the respondents (51.3%) drank rice-wine for its taste. The most common reasons for dissatisfaction with rice-wine were hangovers (35.7%) and taste (16.9%). In terms of preferences, the most preferred ingredient was rice (57.8%), while the most preferred aroma and taste were fruit-derived (48.7% and 58.4%, respectively). The measures consumers most often cited for promoting rice-wine consumption were "development and management of rice-wine brands" (59.7%) and "continuous promotion" (44.8%). On a 5-point Likert scale, the most important attribute of a rice-wine product was its taste (4.60), followed by its quality (4.41). An importance-performance analysis (IPA) of 17 rice-wine attributes identified targets for product management strategies, including "usage of domestic ingredients", "ease of purchase", clarity of "product information", and "external image". Developing solid product concepts in marketing strategy is therefore required and may be achieved by understanding consumer preferences and demands for rice-wine.
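
The IPA step mentioned above reduces to placing each attribute in a quadrant by comparing its mean importance and performance ratings against the grand means; a minimal sketch follows, in which only the taste and quality importance scores come from the abstract and everything else is invented.

```python
# Importance-performance analysis (IPA) sketch: attributes fall into quadrants
# by comparing their mean ratings with the grand means. Only the taste and
# quality importance values come from the abstract; the rest are placeholders.
import pandas as pd

ratings = pd.DataFrame({
    "attribute": ["taste", "quality", "ease of purchase", "external image"],
    "importance": [4.60, 4.41, 4.10, 3.80],   # 5-point Likert scale
    "performance": [4.20, 4.00, 3.20, 3.00],  # assumed performance ratings
})
imp_mean = ratings["importance"].mean()
perf_mean = ratings["performance"].mean()

def quadrant(row: pd.Series) -> str:
    if row.importance >= imp_mean:
        return ("keep up the good work" if row.performance >= perf_mean
                else "concentrate here")
    return "possible overkill" if row.performance >= perf_mean else "low priority"

ratings["quadrant"] = ratings.apply(quadrant, axis=1)
print(ratings)
```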

Radiation Absorbed Dose Calculation Using Planar Images after Ho-166-CHICO Therapy (Ho-166-CHICO 치료 후 평면 영상을 이용한 방사선 흡수선량의 계산)

  • 조철우;박찬희;원재환;왕희정;김영미;박경배;이병기
    • Progress in Medical Physics / v.9 no.3 / pp.155-162 / 1998
  • Ho-166 was produced by neutron reaction in a reactor at the Korea Atomic Energy Institute (Taejon, Korea). Ho-166 emits high-energy beta particles with a maximum energy of 1.85 MeV and a small proportion of gamma rays (80 keV); the radiation absorbed dose can therefore be estimated from in-vivo quantification of the activity in tumors using gamma camera images. Approximately 1 mCi of Ho-166 in solution was mixed into a flood phantom, and planar scintigraphic images were acquired with and without the patient interposed between the phantom and the scintillation camera. The transmission factor over an area of interest was calculated from the ratio of counts in selected regions of the two images described above. A dual-head gamma camera (Multispect2, Siemens, Hoffman Estates, IL, USA) equipped with medium-energy collimators was used for imaging (80 keV ± 10%). A fifty-nine-year-old female patient with hepatoma was enrolled in the therapeutic protocol after informed consent was obtained. Thirty millicuries (1110 MBq) of Ho-166-CHICO was injected into the right hepatic arterial branch supplying the hepatoma. After the injection was completed, anterior and posterior scintigraphic views of the chest and pelvic regions were obtained on 3 successive days. Regions of interest (ROIs) were drawn over the organs in both the anterior and posterior views, and the activity in those ROIs was estimated from the geometric mean, the calibration factor, and the transmission factors. Absorbed dose was calculated using the Marinelli formula and the Medical Internal Radiation Dose (MIRD) schema. The tumor dose for the patient treated with 1110 MBq (30 mCi) of Ho-166 was calculated to be 179.7 Gy; the doses to normal liver, spleen, lung, and bone were 9.1, 10.3, 3.9, and 5.0% of the tumor dose, respectively. In conclusion, the tumor dose and the absorbed dose to surrounding structures were calculated by daily external imaging after Ho-166 therapy for hepatoma. Absorbed dose calculation provides useful information for keeping the dose to each surrounding organ below its threshold.
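
The conjugate-view quantification described above (the geometric mean of anterior and posterior counts corrected by the measured transmission factor) can be sketched as follows; the input numbers are hypothetical, and this simplified single-time-point formula stands in for the Marinelli/MIRD calculation with multi-day imaging actually used in the paper.

```python
# Conjugate-view activity estimate: A = sqrt(Ia * Ip) / sqrt(T) / calibration,
# where Ia/Ip are anterior/posterior ROI counts and T is the transmission
# factor from the flood-phantom images. All inputs below are hypothetical.
import math

def activity_mbq(ant_counts: float, post_counts: float,
                 transmission: float, cal_cps_per_mbq: float,
                 acq_time_s: float) -> float:
    """Organ/tumor activity in MBq from a pair of planar views."""
    geo_mean_cps = math.sqrt(ant_counts * post_counts) / acq_time_s
    return geo_mean_cps / math.sqrt(transmission) / cal_cps_per_mbq

# Hypothetical one-day tumor-region measurement.
a = activity_mbq(ant_counts=5.2e5, post_counts=3.1e5, transmission=0.25,
                 cal_cps_per_mbq=8.0, acq_time_s=600)
print(f"estimated tumor activity: {a:.1f} MBq")
```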
