• Title/Summary/Keyword: qualitative study


A Study on the Determination of Scan Speed in Whole Body Bone Scan Applying Oncoflash (Oncoflash를 적용한 전신 뼈 영상 검사의 스캔 속도 결정에 관한 연구)

  • Yang, Gwang-Gil;Jung, Woo-Young
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.3
    • /
    • pp.56-60
    • /
    • 2009
  • Purpose: Various studies and program-development efforts are under way in nuclear medicine to reduce scan time. Oncoflash is one such program used in whole-body bone scans; it shortens the scan while maintaining image quality. Because both image quality and scan-time reduction must be weighed when such applications are used clinically, the purpose of this study was to determine criteria for a proper scan speed. Materials and Methods: The subjects were patients who underwent whole-body bone scans at the Department of Nuclear Medicine of Asan Medical Center, Seoul, from July 1 to July 10, 2008. Whole-body bone images acquired at a scan speed of 30 cm/min were classified by total counts into under 800 K and over 800 K, 900 K, 1,000 K, 1,500 K, and 2,000 K. Image quality was assessed qualitatively, and the percentage of images with total counts of 1,000 K or less was calculated. To compare resolution according to total counts with and without Oncoflash, FWHM was analyzed on images of a $^{99m}Tc$ flood source and a four-quadrant bar phantom; doses of 2~5 mCi were used in consideration of the counts of the whole-body bone scan. From August 7 to 26, 2008, Patient Positioning Monitor (PPM) counts were measured for 152 patients over a region including the head and the upper chest, the starting point of the whole-body scan, and their correlation with the total counts acquired at 30 cm/min was analyzed (patients imaged more than six hours after radiotracer administration or given low doses were excluded). Results: Of 329 patients scanned at 30 cm/min, 17.6% (n=58) had a geometric mean of total counts under 1,000 K.
Qualitative analysis of the image groups by whole-body counts showed that images under 1,000 K had coarse particles and increased noise. Analysis of FWHM before and after applying Oncoflash showed that, for PPM counts under 3.6 K, FWHM after Oncoflash was higher than before, whereas for PPM counts of 3.6 K and over it was not. The average total counts for PPM ranges of 2.5~3.0 K, 3.1~3.5 K, 3.6~4.0 K, 4.1~4.5 K, 4.6~5.0 K, 5.1~6.0 K, 6.1~7.0 K, and over 7.1 K were $965{\pm}173\;K$, $1084{\pm}154\;K$, $1242{\pm}186\;K$, $1359{\pm}170\;K$, $1405{\pm}184\;K$, $1640{\pm}376\;K$, $1,771{\pm}324\;K$, and $1,972{\pm}385\;K$, respectively, and PPM counts showed a strong correlation with the total counts of images acquired at 30 cm/min (r=.775, p<.01). Conclusions: For PPM counts over 3.6 K, image quality acquired at 30 cm/min with Oncoflash applied was similar to that acquired at 15 cm/min. When total counts exceed 1,000 K, scan time can be reduced without loss of image quality; when total counts are under 1,000 K, however, image quality degrades even with Oncoflash, so re-imaging at a scan speed of 15 cm/min is recommended.
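The thresholds reported above amount to a simple decision rule. A minimal Python sketch, assuming anterior/posterior counts are combined by geometric mean as in the study (the function names and sample numbers are hypothetical illustrations, not the study's protocol):

```python
import math

def geometric_mean_counts(anterior, posterior):
    """Geometric mean of anterior/posterior whole-body counts (in K)."""
    return math.sqrt(anterior * posterior)

def recommend_scan(ppm_counts_k, total_counts_k):
    """Suggest a protocol from the study's thresholds: PPM pre-scan counts of
    3.6 K and total counts of 1,000 K."""
    if ppm_counts_k >= 3.6 and total_counts_k >= 1000:
        return "30 cm/min with Oncoflash"  # quality comparable to 15 cm/min
    return "re-scan at 15 cm/min"          # quality degrades even with Oncoflash

total = geometric_mean_counts(1200, 1100)  # ~1149 K, illustrative values
print(recommend_scan(4.2, total))
```

This mirrors the conclusion only schematically; in practice the PPM measurement precedes the scan and predicts the expected total counts via the reported correlation (r=.775).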


Actual Conditions and Perception of Safety Accidents by School Foodservice Employees in Chungbuk (충북지역 학교급식 조리종사원의 안전사고 실태 및 인식)

  • Cho, Hyun A;Lee, Young Eun;Park, Eun Hye
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.43 no.10
    • /
    • pp.1594-1606
    • /
    • 2014
  • The purpose of this study was to examine safety accidents related to school foodservice, the working and operating environments of school foodservice, the status and awareness of safety education, educational needs, and information for the qualitative improvement of school foodservice. The subjects were 234 cooks in charge of cooking at elementary and secondary schools in Chungbuk. A survey was conducted from July 30 to August 8, 2012; of 202 questionnaires gathered, 194 completed questionnaires were analyzed. Statistical analyses were performed using SPSS version 19.0. The main results were as follows: 44.3% of workers had experienced safety accidents. The most common accident frequency was 'once' (60.5%), and most safety accidents took place between June and August (31.4%) and between 8 and 11 am. Most safety accidents happened during cooking (52.3%) and while using a soup pot or frying pot (52.4%). The most common injuries were 'burns', 'wrist and arm pain', and 'slips and falls'. Among respondents who experienced safety accidents, 57.6% dealt with injuries at their own expense, and only 35.3% utilized industrial accident insurance. In terms of the operating environment, the score for 'offering information and application' was highest (3.76 points), whereas that for 'security of budget' was lowest (1.77 points). As for accident education, employees received safety education approximately 3.45 times and 5.10 hours per year. Improving the working environment of school foodservice cooks requires administrative and financial support. Furthermore, educational materials and guidelines based on the working environment and safety-accident status of school foodservice cooks are required in order to minimize potential risk factors and control safety accidents in school foodservice.

A study on Broad Quantification Calibration to various isotopes for Quantitative Analysis and its SUVs assessment in SPECT/CT (SPECT/CT 장비에서 정량분석을 위한 핵종 별 Broad Quantification Calibration 시행 및 SUV 평가를 위한 팬텀 실험에 관한 연구)

  • Ko, Hyun Soo;Choi, Jae Min;Park, Soon Ki
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.26 no.2
    • /
    • pp.20-31
    • /
    • 2022
  • Purpose: Broad Quantification Calibration (B.Q.C.) is the procedure that enables quantitative analysis, i.e. measurement of the Standardized Uptake Value (SUV), on a SPECT/CT scanner. B.Q.C. was performed with Tc-99m, I-123, I-131, and Lu-177, and phantom images were then acquired to verify whether the SUVs were measured accurately. Because there is no standard SUV test for SPECT, we used the ACR Esser PET phantom as an alternative. The purpose of this study was to lay the groundwork for quantitative analysis with various isotopes on a SPECT/CT scanner. Materials and Methods: Siemens Symbia Intevo 16 and Intevo Bold SPECT/CT scanners were used. The B.Q.C. procedure has two steps: first, a point-source sensitivity calibration; second, a volume sensitivity calibration that calculates the Volume Sensitivity Factor (VSF) using a cylinder phantom. To verify the SUV, we acquired images of the ACR Esser PET phantom and measured SUVmean on the background and SUVmax on the hot vials (25, 16, 12, and 8 mm). SPSS was used to analyze the difference in SUV between Intevo 16 and Intevo Bold with the Mann-Whitney test. Results: The sensitivities (CPS/MBq) of Detectors 1 and 2 and the VSF for the four isotopes (Intevo 16 D1 sensitivity/D2 sensitivity/VSF, then Intevo Bold) were 87.7/88.6/1.08 and 91.9/91.2/1.07 for Tc-99m; 79.9/81.9/0.98 and 89.4/89.4/0.98 for I-123; 124.8/128.9/0.69 and 130.9/126.8/0.71 for I-131; and 8.7/8.9/1.02 and 9.1/8.9/1.00 for Lu-177. The SUV test with the ACR Esser PET phantom (Intevo 16 background SUVmean/25 mm SUVmax/16 mm/12 mm/8 mm, then Intevo Bold) yielded 1.03/2.95/2.41/1.96/1.84 and 1.03/2.91/2.38/1.87/1.82 for Tc-99m; 0.97/2.91/2.33/1.68/1.45 and 1.00/2.80/2.23/1.57/1.32 for I-123; 0.96/1.61/1.13/1.02/0.69 and 0.94/1.54/1.08/0.98/0.66 for I-131; and 1.00/6.34/4.67/2.96/2.28 and 1.01/6.21/4.49/2.86/2.21 for Lu-177. There was no statistically significant difference in SUV between Intevo 16 and Intevo Bold (p>0.05). Conclusion: In the past, only qualitative analysis was possible with a gamma camera.
A SPECT/CT scanner, on the other hand, provides not only anatomical localization and 3D tomography but also quantitative analysis with SUV measurements. By carrying out B.Q.C. we laid the groundwork for quantitative analysis with various isotopes (Tc-99m, I-123, I-131, and Lu-177), and we verified the SUV measurement with the ACR phantom. Periodic calibration is needed to maintain the precision of quantitative evaluation. As a result, we can provide quantitative analysis on follow-up SPECT/CT exams and evaluate the therapeutic response in theranostics.
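For reference, the quantity the calibration targets is the body-weight-normalized SUV. A minimal Python sketch, assuming the standard definition (tissue activity concentration divided by injected activity per gram of body weight) and an illustrative VSF computed from a uniformly filled cylinder phantom; the exact VSF formula in the vendor software may differ:

```python
def suv(tissue_kbq_per_ml, injected_mbq, body_weight_kg):
    """Body-weight SUV: concentration / (injected activity / body weight),
    taking 1 ml of tissue as 1 g. In a uniform phantom the SUV should be ~1."""
    injected_kbq = injected_mbq * 1000.0
    body_weight_g = body_weight_kg * 1000.0
    return tissue_kbq_per_ml / (injected_kbq / body_weight_g)

def volume_sensitivity_factor(true_kbq_per_ml, measured_kbq_per_ml):
    """Illustrative VSF: ratio of the known cylinder concentration to the
    concentration the scanner reconstructs, applied as a global scale factor."""
    return true_kbq_per_ml / measured_kbq_per_ml

# 100 MBq uniformly distributed in a 10 L (10 kg) phantom -> 10 kBq/ml -> SUV 1.0
print(suv(10.0, 100.0, 10.0))  # → 1.0
```

The phantom check above is why the reported background SUVmean values near 1.0 indicate a successful calibration.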

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, focusing on only new cars or only used cars risks producing biased estimates of the cross elasticity between them. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both the new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices, and some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, so that new and used cars of different models are all substitutable at the first stage.
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 to June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both the calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since its dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo. The new car then settles down to a lowered market share due to the used car's reaction.
The method enables us to find the amount of price discount that maintains the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reactive cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and consequently suggests a less aggressive used car price discount in response to new car rebates than the proposed nested logit model. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response for the Elantra's new and used cars would be. Interestingly, the Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). Future research might explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification, or due to the data structure generated by a typical car dealership, where both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers in this market environment first choose a dealership (brand) and then choose between new and used cars. However, if there were dealerships that carried both new and used cars of various models, the NUB model might fit the data as well as the BNU model.
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
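As a sketch of the choice structure being compared, the BNU nesting (car model at the first stage, new vs. used at the second) can be written as a standard two-level nested logit. The utilities and the dissimilarity parameter below are illustrative assumptions, not the paper's estimates:

```python
import math

def nested_logit(utilities, lam):
    """utilities: {model: {"new": V, "used": V}}; lam in (0, 1] is the
    dissimilarity (inclusive-value) parameter. Returns P(model, condition).
    lam < 1 makes new/used of the same model closer substitutes than
    alternatives across models; lam = 1 collapses to the IIA (plain logit)."""
    # Inclusive value of each nest: I_b = lam * log(sum_c exp(V_bc / lam))
    inclusive = {b: lam * math.log(sum(math.exp(v / lam) for v in conds.values()))
                 for b, conds in utilities.items()}
    denom = sum(math.exp(iv) for iv in inclusive.values())
    probs = {}
    for b, conds in utilities.items():
        p_model = math.exp(inclusive[b]) / denom          # first-stage choice
        within = sum(math.exp(v / lam) for v in conds.values())
        for cond, v in conds.items():
            probs[(b, cond)] = p_model * math.exp(v / lam) / within
    return probs

p = nested_logit({"Jetta": {"new": 1.0, "used": 0.5},
                  "Elantra": {"new": 0.8, "used": 0.6}}, lam=0.5)
# The four probabilities sum to 1; lowering a used-car price (raising its V)
# draws share mostly from the new car of the same model when lam < 1.
```

The mis-specification test mentioned above corresponds to an estimated lam greater than 1, which violates the (0, 1] consistency condition for utility maximization.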


An Expert System for the Estimation of the Growth Curve Parameters of New Markets (신규시장 성장모형의 모수 추정을 위한 전문가 시스템)

  • Lee, Dongwon;Jung, Yeojin;Jung, Jaekwon;Park, Dohyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.17-35
    • /
    • 2015
  • Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase over a certain period of time. Developing precise forecasting models is considered important, since corporations can make strategic decisions about new markets based on the future demand estimated by the models. Many studies have developed market growth curve models, such as the Bass, Logistic, and Gompertz models, which estimate future demand when a market is in its early stage. Among these, the Bass model, which explains demand in terms of two types of adopters, innovators and imitators, has been widely used in forecasting. Such models require sufficient demand observations to ensure reliable results. In the early stage of a new market, however, the observations are not sufficient for the models to estimate the market's future demand precisely. For this reason, demand inferred from the most adjacent markets is often used as a reference in such cases. Reference markets can be those whose products are developed with the same categorical technologies. A market's demand can be expected to follow a pattern similar to that of a reference market when the adoption pattern of the product is determined mainly by its underlying technology. However, this process does not always give satisfactory results, because the similarity between markets is judged by intuition and/or experience. There are two major difficulties that human experts cannot effectively handle in this approach. One is the abundance of candidate reference markets to consider, and the other is the difficulty of calculating the similarity between markets. First, there can be too many markets to consider when selecting reference markets. Mostly, markets in the same category of an industrial hierarchy can serve as reference markets because they are usually based on similar technologies.
However, markets can be classified into different categories even if they are based on the same generic technologies, so markets in other categories also need to be considered as potential candidates. Next, even domain experts cannot consistently calculate the similarity between markets with their own qualitative standards. This inconsistency implies missed adjacent reference markets, which may lead to imprecise estimation of future demand. Even when no reference markets are missed, the new market's parameters can hardly be estimated from the reference markets without quantitative standards. For these reasons, this study proposes a case-based expert system that helps experts overcome these difficulties in discovering reference markets. First, the study proposes the Euclidean distance measure for calculating the similarity between markets. Based on their similarities, markets are grouped into clusters, and missing markets with the characteristics of each cluster are then searched for. Potential candidate reference markets are extracted and recommended to users. After iterating these steps, the definite reference markets are determined according to the user's selection among the candidates. Finally, the new market's parameters are estimated from the reference markets. Two techniques are used in this procedure: one is the clustering technique from data mining, and the other is content-based filtering from recommender systems. The proposed system, implemented with these techniques, can determine the most adjacent markets based on whether a user accepts the candidate markets. Experiments involving five ICT experts were conducted to validate the usefulness of the system. In the experiments, the experts were given a list of 16 ICT markets whose parameters were to be estimated. For each market, the experts first estimated the parameters of the growth curve models by intuition, and then with the system.
Comparison of the experimental results shows that the estimated parameters are more accurate when the experts use the system than when they rely on intuition alone.
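The two quantitative pieces the system relies on, the market-similarity measure and the growth curve whose parameters are transferred, can be sketched as follows. The feature vectors and parameter values are illustrative assumptions, not the paper's data:

```python
import math

def market_distance(features_a, features_b):
    """Euclidean distance between two markets' feature vectors; the system
    clusters markets by this similarity measure."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(features_a, features_b)))

def bass_cumulative(t, p, q):
    """Bass model cumulative adoption fraction F(t): p is the innovation
    (innovator) coefficient and q the imitation (imitator) coefficient."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

# Transfer the closest reference market's parameters to the new market
# (p=0.03, q=0.38 are commonly cited illustrative magnitudes):
reference = [0.8, 0.2, 0.5]   # hypothetical normalized market features
candidate = [0.7, 0.3, 0.5]
d = market_distance(reference, candidate)
f10 = bass_cumulative(10, 0.03, 0.38)  # adoption fraction after 10 periods
```

In the system, the distance drives cluster membership and candidate recommendation; the Bass (or Logistic/Gompertz) parameters are then estimated from the user-confirmed reference markets.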

A study on evaluation of the image with washed-out artifact after applying scatter limitation correction algorithm in PET/CT exam (PET/CT 검사에서 냉소 인공물 발생 시 산란 제한 보정 알고리즘 적용에 따른 영상 평가)

  • Ko, Hyun-Soo;Ryu, Jae-kwang
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.1
    • /
    • pp.55-66
    • /
    • 2018
  • Purpose: In PET/CT exams, a washed-out artifact can occur due to severe patient motion or high specific activity, degrading both qualitative reading and quantitative analysis. Scatter limitation correction, by GE, is an algorithm that corrects the washed-out artifact and recovers the images in PET scans. The purpose of this study was to measure, in a phantom experiment, the threshold of specific activity below which an image with a washed-out artifact recovers its original uptake values, and to compare the quantitative analysis of clinical patient data before and after correction. Materials and Methods: PET and CT images were acquired with no misalignment (D0) and with 1, 2, 3, and 4 cm of misalignment (D1, D2, D3, D4), at 20 levels of specific activity from 20 to 20,000 kBq/ml, using a $^{68}Ge$ cylinder phantom. In 34 patients who underwent $^{18}F-FDG$ fusion whole-body PET/CT exams, we also measured, before and after correction, the distance of misalignment of the Foley catheter line between the CT and PET images, the specific activity that produced the washed-out artifact, the $SUV_{mean}$ of muscle in the artifact slice, the $SUV_{max}$ of lesions in the artifact slice, and the $SUV_{max}$ of other lesions outside the artifact slice. SPSS 21 was used to analyze the difference in SUV before and after scatter limitation correction with a paired t-test. Results: In the phantom experiment, the $SUV_{mean}$ of the $^{68}Ge$ cylinder decreased as the specific activity of $^{18}F$ increased, and decreased further as the distance of misalignment between CT and PET increased. On the other hand, the effect of the correction increased with the distance. There was no washed-out artifact below 50 kBq/ml, and $SUV_{mean}$ was unchanged from the original. At D0 and D1, $SUV_{mean}$ recovered to the original value (0.95) below 120 kBq/ml when scatter limitation correction was applied.
At D2 and D3, $SUV_{mean}$ recovered to the original value below 100 kBq/ml, and at D4 below 80 kBq/ml. In the 34 clinical patients, the average distance of misalignment was 2.02 cm and the average specific activity that produced the washed-out artifact was 490.15 kBq/ml. The average $SUV_{mean}$ of muscles and the average $SUV_{max}$ of lesions in the artifact slice before and after correction showed significant differences by paired t-test (t=-13.805, p=0.000 and t=-2.851, p=0.012, respectively), whereas the average $SUV_{max}$ of lesions outside the artifact slice showed no significant difference (t=-1.173, p=0.250). Conclusion: The scatter limitation correction algorithm of the GE PET/CT scanner helps to correct the washed-out artifact caused by patient motion or high specific activity and to recover the PET images. When reading an image with a washed-out artifact, measuring the distance of misalignment between the CT and PET images and the specific activity, and then applying the scatter limitation algorithm, allows more accurate analysis of the images without repeating the scan.
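The paired comparison used on the patient data is a standard paired t-test on the same lesions measured before and after correction. A minimal stdlib sketch with illustrative SUV values (not the study's measurements):

```python
import math
import statistics

def paired_t(before, after):
    """Paired t statistic: mean of the per-subject differences divided by the
    standard error of the differences (sample stdev / sqrt(n))."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation (n - 1)
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical lesion SUVmax values before/after scatter limitation correction
before = [1.10, 0.95, 1.30, 1.05, 0.88]
after = [1.45, 1.20, 1.62, 1.33, 1.10]
t = paired_t(before, after)  # positive t: values rose after correction
```

The negative t values reported in the abstract reflect the same test with the difference taken in the opposite direction (before minus after).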

Developmental Plans and Research on Private Security in Korea (한국 민간경비 실태 및 발전방안)

  • Kim, Tea-Hwan;Park, Ok-Cheol
    • Korean Security Journal
    • /
    • no.9
    • /
    • pp.69-98
    • /
    • 2005
  • The security industry for civilians (private security) was first introduced to Korea via the US Army's security system in the early 1960s. Official police laws were enforced shortly afterward, in 1973, and private security finally started to develop with the passing of the 'service security industry' law in 1976. Korea's private security industry grew rapidly in the 1980s with the support of foreign funds and products, and approximately 2,000 private security enterprises are now thought to be operating in Korea. Nowadays, however, the majority of these enterprises experience difficulties such as lack of funds, insufficient management, and lack of control over employees; as a result, some find it hard to avoid low output and bankruptcy. These enterprises consequently often resort to illegitimate measures, such as excessive dumping, or sidestep problems by hiring inappropriate employees who lack the right skills or qualifications for the job. The main problem with the establishment of this kind of security service is how easy it is to enter the private service market. All these hindering factors inhibit market growth and impede qualitative development. For these reasons, I researched this area, and here analyze and critique the present condition of Korean private security and present a possible development plan by referring to cases from the US and Japan. My research method was to investigate related documentary records and articles and to interview people for the necessary evidence.
The theoretical study involved investigating books and dissertations published inside and outside the country, and studying the complete collection of laws and regulations, internet data, various study reports, and the documentary records and statistical data of many institutions, such as the National Police Office, the judicial training institute, and private security enterprises. In addition, interviews with professionals in charge of practical affairs in the field were used to overcome the limitations of documentary records. I tried to get a firm grasp of the problems and difficulties that people in these enterprises experience; this, I thought, would be done most effectively by interviewing the workers, for example about how they feel in the workplace and about the elements that impede development. I also interviewed police officers in charge of supervising private escort enterprises, in an effort to identify the problems and differences of opinion between the domestic private security service and the police. From this investigation and research, I pinpoint the major problems of private security and present a developmental plan. First, the private police law and the private security service law should be unified. Second, it is essential to introduce a 'specialty certificate' system for the quality improvement of the private security service. Third, a new private security market must be opened up by improving the old system. Fourth, the competitive power of security service enterprises must be built up on the basis of efficient management. Fifth, a special marketing strategy is needed to retain customers. Sixth, positive research based on theoretical studies is needed. Seventh, consistent and even training according to effective market demand is needed. Eighth, an interrelationship with the police department must be maintained.
Ninth, the system of the Korean private security service association must be reinforced. Tenth, a private security laboratory must be established. Based on these suggestions, the private security service should improve.


A Study on the Funerary Mean of the Vertical Plate Armour from the 4th Century - Mainly Based on the Burial Patterns Shown by the Ancient Tombs No.164 and No.165 in Bokcheon-dong - (종장판갑(縱長板甲) 부장의 다양성과 의미 - 부산 복천동 164·165호분 출토 자료를 중심으로 -)

  • Lee, Yu Jin
    • Korean Journal of Heritage: History & Science
    • /
    • v.44 no.3
    • /
    • pp.178-199
    • /
    • 2011
  • The ancient tombs found in Bokcheon-dong, Busan date from between the $4^{th}$ and $5^{th}$ centuries, the Three Kingdoms period. They are known as the tombs where the Vertical Plate Armour was mainly buried. In 2006, two units of the Vertical Plate Armour were additionally investigated in tombs No.164 and No.165, which had been constructed at the end of the eastern slope near the hill of the group of ancient tombs in Bokcheon-dong. In this study, the contents of the two units of the Vertical Plate Armour, whose conservation treatment has been completed, are set out, and the group of ancient tombs constructed in Bokcheon-dong in the $4^{th}$ century is examined through consideration of the burial pattern. The units of the Vertical Plate Armour from tombs No.164 and No.165 can be classified as IIa-type armour showing the Gyeongju and Ulsan patterns, according to the attributes of the manufacturing technology. Chronologically, they can be assigned to the early part of Stage II of the three-stage chronology of the Vertical Plate Armour. While more than two units of the Vertical Plate Armour were buried in the large-sized tombs on the top of the hill of the group of ancient tombs, one unit was buried in the small-sized tombs. This trend suggests that, at the stage of burying armour showing the Gyeongju and Ulsan patterns (I-type and IIa-type), different numbers of units of the Vertical Plate Armour were buried according to the size of the tomb; once the armour showing the Busan pattern (IIb-type) was established, however, only one unit was buried. Meanwhile, tombs No.164 and No.165 can be classed with the wooden chamber tombs showing the Gyeongju pattern, that is, slender rectangular wooden chamber tombs with an aspect ratio of more than 1:3.
However, judging from the buried earthenware, there seem to be common types and patterns shared with the earthenware found in the Gimhae area, which is said to show the Geumgwan Gaya pattern. In other words, the subject tombs appear closely related to tomb No.3 in Gujeong-dong and tomb No.55 in Sara-ri, Gyeongju, with regard to the types of armour and tombs and the arrangement of the buried artifacts, while the buried earthenware shows a relationship with the Busan and Gimhae areas. The combination of Gyeongju and Gimhae elements found in a single tomb makes it possible to assume that the group who constructed the ancient tombs in Bokcheon-dong was actively related to both areas. Until now it has been thought that the Vertical Plate Armour was the exclusive property of the upper hierarchy, since it was buried in the large-sized tombs located on the top of the hill of the group of ancient tombs in Bokcheon-dong. However, as the case of tombs No.164 and No.165 shows, the Vertical Plate Armour was also buried in small-sized tombs, as judged by such factors as location, size, the amount of buried artifacts, and their qualitative aspect. Therefore, it is impossible to discuss the hierarchical character of a tomb solely on the basis of the number of buried units of the Vertical Plate Armour, and it is difficult to assume that the armour symbolized command of military forces. The hierarchical character of the group of ancient tombs constructed in Bokcheon-dong in the $4^{th}$ century can be verified from the location and size of each tomb, and as a result there do seem to be some differences in the number of buried units of the Vertical Plate Armour.
However, a more multilateral examination is needed in order to determine whether the burial of the Vertical Plate Armour can be regarded as symbolizing the status or class of the deceased.

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers, because retaining existing customers is far more economical: the acquisition cost of a new customer is known to be five to six times higher than the retention cost of an existing one. Companies that effectively prevent customer churn and improve customer retention rates are known not only to increase profitability but also to improve their brand image through higher customer satisfaction. Customer churn prediction, which had been conducted as a sub-area of CRM research, has recently become more important as a big-data-based performance marketing theme due to the development of machine learning technology for business. Until now, research on customer churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and gaming, which are highly competitive and where churn management is urgent. These churn prediction studies focused on improving the performance of the churn prediction model itself, for example by simply comparing the performance of various models, exploring features that are effective in forecasting churn, or developing new ensemble techniques, and were limited in practical terms because most studies treated the entire customer base as a single group when developing a predictive model. The main purpose of the existing related research was thus to improve the performance of the predictive model itself, and there was relatively little research on improving the overall customer churn prediction process.
In practice, customers in a business exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat all customers as a single group. To carry out effective churn prediction in such heterogeneous settings, it is therefore desirable to segment customers according to strategic criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model to each group. Although this approach can yield better predictions than a single model for the entire customer population, there is still room for improvement: clustering is a mechanical, exploratory grouping technique that computes distances from input variables and does not reflect the strategic intent of the firm, such as loyalty. Assuming that successful churn management is better achieved through improvements to the overall process than through the performance of the model alone, this study proposes a segment-based customer churn prediction process based on two-dimensional customer loyalty (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation). CCP/2DL is a series of steps that segments customers along two loyalty dimensions, quantitative and qualitative, performs secondary grouping of the segments according to their churn patterns, and then independently applies heterogeneous churn prediction models to each churn-pattern group. To assess the relative merit of the proposed process, its performance was compared with the two most commonly applied alternatives: the general churn prediction process and the clustering-based churn prediction process. 
The general churn prediction process used in this study refers to the most common approach, in which a single machine learning model is applied to the entire customer population. The clustering-based churn prediction process first segments customers using clustering techniques and then builds a churn prediction model for each group. In an experiment conducted in cooperation with a global NGO, the proposed CCP/2DL outperformed the other methodologies in predicting churn. The proposed process is not only effective for churn prediction but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other performance marketing activities.
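As an illustration only (none of this code comes from the paper), the segment-then-model idea behind a loyalty-based churn prediction process can be sketched as follows. The loyalty scores, the median split, and the logistic-regression base model are all assumptions for the sketch, not the study's actual variables or models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600

# Hypothetical data: a quantitative loyalty score (e.g. purchase frequency),
# a qualitative loyalty score (e.g. engagement), and one behavioral feature.
quant = rng.random(n)
qual = rng.random(n)
extra = rng.random(n)
X = np.column_stack([quant, qual, extra])
y = (rng.random(n) < 0.3).astype(int)  # synthetic churn labels

# Two-dimensional loyalty segmentation: a median split on each axis
# yields four strategic segments (low/low, low/high, high/low, high/high).
seg = 2 * (quant > np.median(quant)) + (qual > np.median(qual))

# Fit one independent churn model per segment instead of a single global model.
models = {s: LogisticRegression().fit(X[seg == s], y[seg == s])
          for s in np.unique(seg)}

def predict_churn(x, s):
    """Route a customer to its segment's model and return its churn label."""
    return int(models[s].predict(x.reshape(1, -1))[0])
```

A clustering-based variant would replace the median split with, for example, k-means on the input features; the point of the two-dimensional split is that the segment boundaries encode a strategic notion of loyalty rather than geometric distance.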

Different Look, Different Feel: Social Robot Design Evaluation Model Based on ABOT Attributes and Consumer Emotions (각인각색, 각봇각색: ABOT 속성과 소비자 감성 기반 소셜로봇 디자인평가 모형 개발)

  • Ha, Sangjip;Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.55-78
    • /
    • 2021
  • To solve complex and diverse social problems and to ensure individuals' quality of life, social robots that can interact with humans are attracting attention. In the past, robots were recognized as machines that provide labor, deployed at industrial sites in place of humans. Today, however, the concept of the robot has been extended, with the advent of smart technology, an important driver in most industries, to social robots that coexist with humans and enable social interaction. Examples include service robots that respond to customers, robots designed for edutainment, and emotional robots that can interact intimately with humans. Yet, despite the modern ICT service environment and the fourth industrial revolution, robots have not become widely popularized. Given that social interaction with users is a key function of social robots, factors beyond the robot's technology must be considered. In particular, the robot's design elements are more important than other factors in leading consumers to actually purchase a social robot. Existing studies on social robots, however, remain at the level of proposing robot development methodologies or piecemeal testing of the effects social robots provide to users. Meanwhile, the consumer emotions evoked by a robot's appearance strongly influence the formation of users' perceptions, reasoning, evaluations, and expectations, and can further affect attitudes toward robots, favorable feelings, performance inferences, and so on. This study therefore aims to verify the effect of a social robot's appearance and the resulting consumer emotions on consumers' attitudes toward the robot. To this end, a social robot design evaluation model is constructed by combining heterogeneous data from different sources. 
Specifically, three quantitative indicators of social robot appearance from the ABOT Database are included in the model. Consumer emotions toward social robot design were collected through (1) the existing design evaluation literature, (2) online buzz such as product reviews and blogs, and (3) qualitative interviews on social robot design. We then gathered consumer emotion and attitude scores for various social robots through a large-scale consumer survey. First, six major dimensions of consumer emotion were derived from 23 detailed emotions through dimension reduction. Next, statistical analysis was performed to verify the effect of the derived consumer emotions on attitudes toward social robots. Finally, moderated regression analysis was performed to verify how the quantitatively collected indicators of social robot appearance moderate the relationship between consumer emotions and attitudes. Interestingly, several significant moderation effects were identified, and they are visualized as two-way interaction effects so that they can be interpreted from multidisciplinary perspectives. This study makes a theoretical contribution by empirically verifying every stage from technical properties to consumer emotions and attitudes toward social robots, linking data from heterogeneous sources. It also has practical significance in that the results help develop emotion-based design guidelines at the design stage of social robot development.
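The moderated regression step described in the abstract can be sketched on synthetic data as an ordinary least-squares fit with an interaction term. The variable names, effect sizes, and noise level below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Illustrative variables: one consumer-emotion score, one ABOT-style
# appearance indicator, and an attitude score generated with a
# built-in interaction effect (coefficients 0.5, 0.3, 0.4 are assumed).
emotion = rng.normal(size=n)
appearance = rng.normal(size=n)
attitude = (0.5 * emotion + 0.3 * appearance
            + 0.4 * emotion * appearance
            + rng.normal(scale=0.5, size=n))

# Moderated regression: main effects plus the emotion x appearance
# interaction term. A nonzero interaction coefficient indicates that
# appearance moderates the emotion -> attitude relationship.
X = np.column_stack([np.ones(n), emotion, appearance, emotion * appearance])
beta, *_ = np.linalg.lstsq(X, attitude, rcond=None)
intercept, b_emotion, b_appearance, b_interaction = beta
```

Plotting predicted attitude against emotion at low versus high values of the appearance indicator gives the kind of two-way interaction visualization the abstract mentions.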