
Usability assessment of thermoplastic Bolus for skin VMAT radiotherapy (피부 병변에 대한 VMAT 치료 시 열가소성 bolus의 유용성 평가: case review)

  • Kim, Min Soo;Kim, Joo Ho;Shin, Hyun Kyung;Cho, Min Seok;Park, Ga Yeon
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.85-92
    • /
    • 2020
  • Purpose: To identify the advantages of a thermoplastic bolus over the conventional bolus mainly used in clinical practice, we evaluated two cases in terms of dose and positional reproducibility, assessing the usability of the thermoplastic bolus for skin VMAT radiotherapy. Materials and Methods: Two patients treated for left breast skin lesions were simulated with a thermoplastic bolus and planned with 2-arc VMAT; the prescription dose was delivered to 95% or more of the target volume. We evaluated the reproducibility of the bolus position by measuring the length of the air gap in cone-beam CT (CBCT) images. To evaluate dose reproducibility, we compared the dose distributions of the plan and the CBCT, and performed in vivo dosimetry for patient 2. Results: For patient 1, the difference between the air gap on the simulation CT and the mean air gap (M1) on CBCT over 10 treatments was -0.42±1.24 mm. For patient 2, over 14 treatments, the mean difference in the air gap between the skin and the bolus (M2) was -1.08±1.3 mm, and in the air gap at the bolus (M3) it was 0.49±1.16 mm. The difference in dose distribution between the planning CT and CBCT was -1.38% for PTV1 D95 and 0.39% for SKIN (max) in patient 1; in patient 2, PTV1 D95 differed by 0.63% and SKIN (max) by -0.53%. The in vivo measurement differed from the planned dose by -1.47%. Conclusion: The thermoplastic bolus is simpler and faster to manufacture than a 3D-printed bolus. Compared with the conventional bolus, it also shows high set-up reproducibility and stable dose delivery.
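The reproducibility metrics above reduce to simple arithmetic: the air-gap statistics are the mean and standard deviation of per-fraction differences from the simulation-CT reference, and the dose comparisons are percent differences relative to the planned value. A minimal sketch (function names and sample values are illustrative, not the study's data):

```python
def percent_difference(planned, measured):
    """Relative difference (%) of a delivered or CBCT-recalculated
    dose metric versus its planned value, as in the PTV1 D95 comparison."""
    return (measured - planned) / planned * 100.0

def air_gap_stats(gaps_mm, reference_mm):
    """Mean and (population) standard deviation of the per-fraction
    air-gap differences from the simulation-CT reference gap."""
    diffs = [g - reference_mm for g in gaps_mm]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return mean, var ** 0.5
```

For example, a planned D95 of 100% of prescription and a CBCT-recalculated 98.62% give `percent_difference(100.0, 98.62) = -1.38`, matching the sign convention of the reported -1.38%.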

Does Brand Experience Affect Consumer's Emotional Attachments? (브랜드의 총체적 체험이 소비자-브랜드의 정서적 유대관계에 미치는 영향)

  • Lee, Jieun;Jeon, Jooeon;Yoon, Jaeyoung
    • Asia Marketing Journal
    • /
    • v.12 no.2
    • /
    • pp.53-81
    • /
    • 2010
  • Brand experience has received considerable attention in marketing research. When consumers consume and use brands, they are exposed to various specific brand-related stimuli, including brand identity and brand communication components (e.g., colors, shapes, designs, slogans, mascots, brand characters). Brakus, Schmitt, and Zarantonello (2009) conceptualized brand experience as subjective, internal consumer responses evoked by brand-related stimuli, and demonstrated that it can be broken down into four dimensions (sensory, affective, intellectual, and behavioral). Because experiences result from stimulation and lead to pleasurable outcomes, we expect consumers to want to repeat them. That is, brand experiences stored in consumer memory should affect brand loyalty: consumers with positive experiences should be more likely to buy a brand again and less likely to buy an alternative brand (Fournier 1998; Oliver 1997). Brand attachment, one dimension of the consumer-brand relationship, is defined as an emotional bond to a specific brand (Thomson, MacInnis, and Park 2005). It is a target-specific bond between the consumer and the brand, so strong attachment is accompanied by a rich set of schemas linking the brand to the consumer. Previous research proposes that brand attachment should affect consumers' commitment to the brand. Brand experience differs from affective constructs such as brand attachment: brand attachment is based on interaction between a consumer and the brand, whereas brand experience occurs whenever there is a direct or indirect interaction with the brand, and it is not an emotional relationship concept. Brakus et al. (2009) suggest that brand experience may result in brand attachment. This study aims to distinguish the brand experience dimensions and investigate the effects of brand experience on brand attachment and brand commitment.
We test the research problems with data from 265 customers having brand experiences in various product categories, using multiple regression and a structural equation model. The empirical results can be summarized as follows. First, the paths from affective, behavioral, and intellectual experience to brand attachment were positively significant, whereas the effect of sensory experience on brand attachment was not supported. In the consumer literature, sensory experiences are often equated with aesthetic pleasure; over time, these pleasurable experiences can affect consumer satisfaction, but sensory pleasures are not linked to attachment in the sense of a strong emotional bond (i.e., hot affect). These empirical results confirm the findings of previous studies. Second, brand attachment in the form of passion and connection influences brand commitment positively, but affection does not. In a marketing context, consumers with brand attachment are willing to stay in the relationship. The results also imply that consumers' emotional attachment is characterized by a set of brand experience dimensions, and that consumers who are emotionally attached to the brand become committed. The findings help differentiate brand experience from brand attachment and provide practical implications for brand experience management. Recently, many brand managers have taken a short-term view; this study suggests that effective brand experience management requires taking a long-term view of marketing decisions.
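The path estimates reported above come from multiple regression and SEM on the 265-customer survey; the core computation behind any single path coefficient is an OLS slope. A toy pure-Python illustration (the function name and the hypothetical 1–7 scale scores are assumptions, not the study's data):

```python
def ols_path(x, y):
    """OLS slope and intercept of y on x -- the simple-regression analogue
    of one path (e.g., affective experience -> brand attachment)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical affective-experience and attachment scores:
slope, intercept = ols_path([2, 4, 5, 7], [3, 4, 5, 6])
```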


Effects of firm strategies on customer acquisition of Software as a Service (SaaS) providers: A mediating and moderating role of SaaS technology maturity (SaaS 기업의 차별화 및 가격전략이 고객획득성과에 미치는 영향: SaaS 기술성숙도 수준의 매개효과 및 조절효과를 중심으로)

  • Chae, SeongWook;Park, Sungbum
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.151-171
    • /
    • 2014
  • Firms today seek management effectiveness and efficiency by utilizing information technologies (IT). Numerous firms outsource specific information systems functions to cope with a shortage of information resources or IT experts, or to reduce capital cost. Recently, Software-as-a-Service (SaaS), a new type of information system, has become one of the most powerful outsourcing alternatives. SaaS is software deployed as a hosted service and accessed over the internet. It embodies the ideas of on-demand, pay-per-use, utility computing and is now being applied to support the core competencies of clients in areas ranging from individual productivity to vertical industries and e-commerce. In this study, therefore, we seek to quantify the value SaaS has for business performance by examining the relationships among firm strategies, SaaS technology maturity, and the business performance of SaaS providers. We begin by drawing on prior literature on SaaS, technology maturity, and firm strategy. SaaS technology maturity is classified into three phases: application service providing (ASP), Web-native application, and Web-service application. Firm strategies are operationalized as a low-cost strategy and a differentiation strategy, and customer acquisition is taken as the business performance measure. The specific objectives of this study are as follows. First, we examine the relationships between customer acquisition performance and both the low-cost and differentiation strategies of SaaS providers. Second, we investigate the mediating and moderating effects of SaaS technology maturity on those relationships. For this purpose, the study collects data from SaaS providers, and their lines of applications registered in the database of CNK (Commerce net Korea), using a questionnaire administered by a professional research institution.
The unit of analysis in this study is the SBU (strategic business unit) within the software provider; a total of 199 SBUs are used for analyzing and testing our hypotheses. To measure firm strategy, we take three items for differentiation strategy, namely application uniqueness (whether an application aims to differentiate within just one or a small number of target industries), supply channel diversification (whether the SaaS vendor has a diversified supply chain), and the number of specialized experts, and two items for low-cost strategy, namely subscription fee and initial set-up fee. We employ hierarchical regression analysis to test the moderating effects of SaaS technology maturity, and follow Baron and Kenny's procedure to determine whether firm strategies affect customer acquisition through technology maturity. The empirical results revealed, first, that when a differentiation strategy is applied to attain business performance such as customer acquisition, its effects are moderated by the technology maturity level of the SaaS provider. In other words, securing a higher level of SaaS technology maturity is essential for higher business performance: given that firms implement application uniqueness or distribution channel diversification as a differentiation strategy, they acquire more customers when their level of SaaS technology maturity is higher. Second, the results indicate that pursuing either a differentiation or a low-cost strategy effectively helps SaaS providers obtain customers; continuously differentiating their service from others, or lowering their service fees (subscription fee or initial set-up fee), supports business success in terms of customer acquisition. Lastly, the results show that the level of SaaS technology maturity mediates the relationship between the low-cost strategy and customer acquisition. That is, under our research design, customers perceive the real value of a low subscription fee or initial set-up fee only through the SaaS service provided by the vendor, and this in turn affects their decision on whether to subscribe.
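The Baron and Kenny procedure named above checks whether the strategy → customer-acquisition effect shrinks once SaaS technology maturity enters the regression. A minimal pure-Python sketch of the path estimates (the function names and synthetic data are illustrative; significance testing is omitted):

```python
def _slope(x, y):
    """OLS slope of y on x (single predictor, with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def _two_pred_slopes(x1, x2, y):
    """Closed-form OLS coefficients for y ~ x1 + x2 (with intercept)."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

def baron_kenny(strategy, maturity, acquisition):
    """Path estimates for Baron & Kenny's mediation steps:
    c  = total effect (strategy -> acquisition),
    a  = strategy -> maturity,
    b, c' = maturity and strategy effects in the joint regression.
    Mediation is suggested when |c'| < |c|."""
    c = _slope(strategy, acquisition)
    a = _slope(strategy, maturity)
    c_prime, b = _two_pred_slopes(strategy, maturity, acquisition)
    return {"c": c, "a": a, "b": b, "c_prime": c_prime}
```

With synthetic data in which acquisition depends on strategy only through maturity, c' collapses to zero (full mediation) and the indirect effect a·b equals the total effect c.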

Research Framework for International Franchising (국제프랜차이징 연구요소 및 연구방향)

  • Kim, Ju-Young;Lim, Young-Kyun;Shim, Jae-Duck
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.4
    • /
    • pp.61-118
    • /
    • 2008
  • The purpose of this research is to construct a research framework for international franchising based on the existing literature and to identify the research components in that framework. A franchise can be defined as a management arrangement that allows a franchisee to use various management assets of the franchisor in order to make or sell a product or service. It can be divided into product distribution franchising, designed to sell products, and business format franchising, designed to run the operation as a business whatever its form. International franchising can be defined as a way for a franchisor to internationalize into a foreign country by providing its business format or package to a franchisee in the host country. International franchising has grown fast over the last four decades, but academic research on it is quite limited. In Korea especially, research on international franchising has been carried out either as single-case studies or as survey-based empirical studies grounded in domestic franchise theory. Therefore, this paper reviews the existing literature on international franchising research, provides a research framework, and aims to stimulate new research in this field. The research components of international franchising include the motives and environmental factors behind the decision to expand into international franchising, entry modes and development plans, contracts and management strategy, and various performance measures from different perspectives. First, the motives for international franchising include fee collection from franchisees and an easier route to expansion into foreign countries; other motives include increasing total sales volume, occupying a better strategic position, acquiring quality resources, and improving efficiency. Environmental factors that facilitate international franchising encompass economic conditions, trends, and legal or political factors in the host and/or home countries.
In addition, the franchisor's control power and risk management capability play a critical role in a successful franchising contract. The final decision to enter a foreign country via franchising is determined by numerous factors such as the franchisor's history, size, growth, competitiveness, management system, bonding capability, and industry characteristics. After deciding to enter a foreign country, the franchisor needs to choose an entry mode for international franchising. Within the contractual mode there are master franchising, area development franchising, licensing, direct franchising, and joint ventures. Theories of entry mode selection draw on concepts of efficiency, the knowledge-based approach, the competence-based approach, agency theory, and governance cost. The next step after the entry decision is operating strategy, which starts with selecting a target country and a target city for franchising. To find and screen targets, the franchisor needs to collect information about candidates; critical information includes brand patents, commercial laws, regulations, market conditions, country risk, and industry analysis. After selecting a target city in the target country, the franchisor needs to select a franchisee, in other words a partner. The first important criteria for selecting partners are financial credibility and capability and possession of real estate; cultural similarity and knowledge about the franchisor and/or home country are also recognized as critical criteria. The most important element of operating strategy is the legal documentation between franchisor and franchisee across home and host countries. The terms and conditions in legal documents provide objective information about the characteristics of a franchising agreement for academic research. Legal documents include definitions of terminology, territory and exclusivity, term of agreement, initial fee, continuing fees, clearing currency, and rights regarding sub-franchising.
Legal documents may also contain terms on softer elements, such as training programs and operation manuals, and on harder elements, such as the competent court and terms of expiration. The next element of operating strategy concerns product and service. Especially for business format franchising, product/service deliverables, benefit communicators, system identifiers (architectural features), and format facilitators are listed as product/service strategic elements. Another important product/service decision is standardization versus customization. The rationale behind standardization is cost reduction, efficiency, consistency, image congruence, brand awareness, and price competitiveness; standardization also enables large-scale R&D and innovative change in management style. A further element of operating strategy is control management. The simplest way to control a franchise contract is to rely on legal terms, i.e., a contractual control system; there are also administrative and ethical control systems. A contractual control system is a coercive source of power, but franchisors usually do not want to use legal power, since it does not help build a positive relationship; instead, self-regulation is widely used. An administrative control system uses control mechanisms from the ordinary work relationship. Its main components are supporting activities for the franchisee and communication methods: for example, the franchisor provides advertising, training, manuals, and delivery, and the franchisee follows the franchisor's direction. Another component is building the franchisor's brand power. The last research element is the performance of international franchising. Performance elements can be divided into the franchisor's and the franchisee's performance. The conceptual performance measures of the franchisor are simple but not easy to obtain objectively: profit, sales, cost, experience, and brand power. The performance measures of the franchisee are mostly about benefits to the host country.
These include small business development, promotion of employment, introduction of new business models, and upgrading of technology status. There are also indirect benefits, such as increased tax revenue, refinement of corporate citizenship, regional economic clustering, and improvement of the international balance of payments. Beyond these economic effects, the host country undergoes socio-cultural change, including demographic change, social trends, customer value change, social communication, and social globalization; this is sometimes called the westernization or McDonaldization of society. The paper also reviews theories that have frequently been applied to international franchising research, such as agency theory, the resource-based view, transaction cost theory, organizational learning theory, and international expansion theories. Resource-based theory informs strategic decisions based on resources, such as decisions about entry and cooperation depending on the resources of franchisee and franchisor. Transaction cost theory can be applied to the determination of mutual trust or satisfaction among franchising players. Agency theory explains strategic decisions aimed at reducing problems caused by the use of agents, for example research on control systems in franchising agreements. Organizational learning theory is relatively new in franchising research; it assumes that organizations try to maximize performance and learning. In addition, internalization theory advocates the strategic decision of direct investment to remove the inefficiency of market transactions and is applied in research on contract terms, and oligopolistic competition theory is used to explain the various entry modes for international expansion. Competency theory supports strategic decisions that utilize key competitive advantages. Furthermore, both qualitative and quantitative methodologies are suggested for more rigorous international franchising research.
Quantitative research needs more real data beyond survey data, which usually reflect respondents' judgments; to verify theory rigorously, research based on real data is essential, but such data are quite hard to obtain. Qualitative research beyond the single case study is also highly recommended. Since international franchising has a limited number of cases, scientific research based on grounded theory and ethnographic study can be used. A scientific case study is differentiated from a single case study by its data collection and analysis methods; the key concepts are triangulation in measurement, logical coding, and comparison. Finally, the paper provides an overall research direction for international franchising after summarizing the research trend in Korea. International franchising research in Korea is of two types: studies of Korean franchisors going overseas and studies of Korean franchisees of foreign franchisors. Among research on Korean franchisors, two common patterns are observed: the studies usually deal with the success story of a single franchisor, and they focus on the same industry and country. International franchising research therefore needs to extend its focus to broader subjects, with scientific research methodology as well as the development of new theory.


The Effect of Partially Used High Energy Photon on Intensity-modulated Radiation Therapy Plan for Head and Neck Cancer (두경부암 세기변조방사선치료 계획 시 부분적 고에너지 광자선 사용에 따른 치료계획 평가)

  • Chang, Nam Joon;Seok, Jin Yong;Won, Hui Su;Hong, Joo Wan;Choi, Ji Hun;Park, Jin Hong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.25 no.1
    • /
    • pp.1-8
    • /
    • 2013
  • Purpose: Selecting the proper energy in treatment planning is important because the dose distribution in the body varies with photon energy. Generally, low-energy photons have been used in intensity-modulated radiation therapy (IMRT) for head and neck (H&N) cancer. The aim of this study was to evaluate the effect of partially using high-energy photons in the posterior oblique fields of IMRT plans for H&N cancer. Materials and Methods: The study was carried out on 10 patients (5 nasopharyngeal cancer, 5 tonsilar cancer) treated with IMRT at Seoul National University Bundang Hospital. CT images were acquired at 3 mm thickness under the same conditions, and treatment planning was performed with Eclipse (Ver. 7.1, Varian, Palo Alto, USA). Two plans were generated under the same planning objectives, dose-volume constraints, and eight-field setting: (1) the low-energy plan (LEP), created using the 6 MV beam alone, and (2) the partially-used-high-energy plan (PHEP), created using the 15 MV beam in two posterior oblique fields with deeper penetration depths and the 6 MV beam in the remaining fields. The LEP and PHEP were compared in terms of coverage, conformity index (CI), and homogeneity index (HI) for the planning target volume (PTV). For organs at risk (OARs), $D_{mean}$ and $D_{50%}$ were analyzed for both parotid glands, and $D_{max}$ and $D_{1%}$ for the spinal cord. Integral dose (ID) and total monitor units (MU) were compared as additional parameters. To compare the dose to the normal tissue of the posterior neck, a posterior normal-tissue volume (P-NTV) was defined for each patient, and its $D_{mean}$, $V_{20Gy}$ and $V_{25Gy}$ were evaluated using dose-volume histograms (DVH). Results: The dose distributions were similar with regard to coverage, CI, and HI for the PTV between the LEP and PHEP, and no evident difference was observed in the spinal cord.
However, the $D_{mean}$ and $D_{50%}$ for both parotid glands were slightly reduced, by 0.6% and 0.7%, in the PHEP. The ID was reduced by 1.1% in the PHEP, and the total MU for the PHEP was 1.8% lower than that for the LEP. In the P-NTV, the $D_{mean}$, $V_{20Gy}$ and $V_{25Gy}$ of the PHEP were 1.6%, 1.8%, and 2.9% lower than those of the LEP. Conclusion: The doses to some OARs and normal tissue, as well as the total monitor units, were reduced in the IMRT plan with partially used high-energy photons. Although it is unclear how much clinical benefit these reductions bring to the patient, partial use of high-energy photons could improve the overall plan quality of IMRT for head and neck cancer.
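The plan-quality indices compared above have simple standard forms. The abstract does not give its exact definitions, so the sketch below assumes the common RTOG-style conformity index, the ICRU 83 homogeneity index, and integral dose as mean dose times volume:

```python
def conformity_index(prescription_isodose_volume_cc, ptv_volume_cc):
    """RTOG-style CI: volume enclosed by the prescription isodose
    divided by the PTV volume (1.0 = perfectly conformal)."""
    return prescription_isodose_volume_cc / ptv_volume_cc

def homogeneity_index(d2_gy, d98_gy, d50_gy):
    """ICRU 83 HI: (D2% - D98%) / D50%; 0 = perfectly homogeneous."""
    return (d2_gy - d98_gy) / d50_gy

def integral_dose_gy_l(mean_dose_gy, volume_cc):
    """Integral dose as mean dose x irradiated volume, in Gy.L."""
    return mean_dose_gy * volume_cc / 1000.0
```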


Effectiveness Assessment on Jaw-Tracking in Intensity Modulated Radiation Therapy and Volumetric Modulated Arc Therapy for Esophageal Cancer (식도암 세기조절방사선치료와 용적세기조절회전치료에 대한 Jaw-Tracking의 유용성 평가)

  • Oh, Hyeon Taek;Yoo, Soon Mi;Jeon, Soo Dong;Kim, Min Su;Song, Heung Kwon;Yoon, In Ha;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.31 no.1
    • /
    • pp.33-41
    • /
    • 2019
  • Purpose: To evaluate the effectiveness of the jaw-tracking (JT) technique in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) for esophageal cancer by analyzing the dose-volume of surrounding normal organs, including the low-dose regions. Materials and Methods: A total of 27 patients who received radiation therapy for esophageal cancer with $VitalBeam^{TM}$ (Varian Medical System, U.S.A.) in our hospital were selected. Using the Eclipse system (Ver. 13.6, Varian, U.S.A.), treatment plans were generated with the jaw-tracking technique (JT) and the non-jaw-tracking technique (NJT) for patients with a T-shaped planning target volume (PTV) including the supraclavicular lymph nodes (SCL). PTVs were classified by whether the celiac area was included, to identify its influence on the radiation field. To compare the treatment plans, the organs at risk (OARs) were defined as both lungs, the heart, and the spinal cord, and the conformity index (CI) and homogeneity index (HI) were evaluated. Portal dosimetry was performed to verify clinical applicability using an electronic portal imaging device (EPID), and gamma analysis was performed with the low-dose threshold of the radiation field as a parameter, varied over 0 %, 5 %, and 10 %. Results: All treatment plans were verified against a gamma pass-rate criterion of 95 % with the 3 mm/3 % criteria. At a threshold of 10 %, both JT and NJT passed with rates above 95 %, and in IMRT both gamma passing rates decreased by more than 1 % as the low-dose threshold was reduced to 5 % and 0 %. For JT in IMRT on PTVs without the celiac area, $V_5$ and $V_{10}$ of both lungs decreased by 8.5 % and 5.3 % on average, respectively, and by up to 14.7 %. The $D_{mean}$ decreased by $72.3{\pm}51cGy$, and the dose reduction was larger for PTVs including the celiac area. The $D_{mean}$ of the heart decreased by $68.9{\pm}38.5cGy$ and that of the spinal cord by $39.7{\pm}30cGy$.
For JT in VMAT, $V_5$ decreased by 2.5 % on average in the lungs, with small decreases in the heart and spinal cord; the dose reduction from JT increased when the PTV included the celiac area. Conclusion: In radiation treatment planning for esophageal cancer, IMRT showed a significant decrease in $V_5$ and $V_{10}$ of both lungs when JT was applied, and the dose reduction was greater when the irradiated low-dose area was larger. Therefore, JT is more advantageous in IMRT than in VMAT for radiation therapy of esophageal cancer, and it can protect normal organs from MLC leakage and transmission doses in the low-dose region.
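The portal-dosimetry verification above uses gamma analysis with 3 mm/3 % criteria and low-dose thresholds of 0/5/10 %. A simplified 1-D, global-normalization sketch of the gamma pass rate (a brute-force illustration; real EPID analysis is 2-D and interpolated, and the function name is an assumption):

```python
def gamma_pass_rate(ref_dose, eval_dose, positions_mm,
                    dd_pct=3.0, dta_mm=3.0, threshold_pct=10.0):
    """Percent of reference points with gamma <= 1. Points below
    threshold_pct of the reference maximum are excluded, mirroring
    the 0/5/10 % low-dose thresholds varied in the study."""
    dmax = max(ref_dose)
    dd_abs = dd_pct / 100.0 * dmax        # global dose criterion
    cutoff = threshold_pct / 100.0 * dmax
    evaluated = passed = 0
    for p, dr in zip(positions_mm, ref_dose):
        if dr < cutoff:
            continue                      # below the low-dose threshold
        gamma_sq = min(
            ((p - q) / dta_mm) ** 2 + ((de - dr) / dd_abs) ** 2
            for q, de in zip(positions_mm, eval_dose)
        )
        evaluated += 1
        passed += gamma_sq <= 1.0
    return 100.0 * passed / evaluated if evaluated else 100.0
```

Lowering `threshold_pct` pulls more low-dose points into the statistic, which is why the reported pass rates drop as the threshold moves from 10 % toward 0 %.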

Analysis of Variation for Parallel Test between Reagent Lots in in-vitro Laboratory of Nuclear Medicine Department (핵의학 체외검사실에서 시약 lot간 parallel test 시 변이 분석)

  • Chae, Hong Joo;Cheon, Jun Hong;Lee, Sun Ho;Yoo, So Yeon;Yoo, Seon Hee;Park, Ji Hye;Lim, Soo Yeon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.23 no.2
    • /
    • pp.51-58
    • /
    • 2019
  • Purpose: In the in-vitro laboratories of nuclear medicine departments, a comparability or parallel test is performed when the reagent lot changes, to determine whether the results between lots are reliable. The standard most commonly used in domestic laboratories is to obtain the %difference between the results of the two reagent lots; many laboratories then set the criterion at less than 20% at low concentrations and less than 10% at medium and high concentrations. If a result deviates from this range, the test is considered failed and is repeated until the result falls within the acceptable range. In this study, several tests performed in nuclear medicine in-vitro laboratories were selected to analyze parallel-test results and to establish customized %difference criteria for each test. Materials and Methods: From January to November 2018, parallel-test results for reagent lot changes were analyzed for 7 items: thyroid-stimulating hormone (TSH), free thyroxine (FT4), carcinoembryonic antigen (CEA), CA-125, prostate-specific antigen (PSA), HBs-Ab, and insulin. The RIA-MAT 280 system, based on the immunoradiometric assay (IRMA) principle, was used for TSH, FT4, CEA, CA-125, and PSA. TECAN automated dispensing equipment and the GAMMA-10 counter were used for the insulin test, and for HBs-Ab, HAMILTON automated dispensing equipment and the Cobra gamma-ray counter were used. Separate reagents, customized calibrators, and quality control materials were used in this experiment. Results (p-value by t-test > 0.05 for every item; values are %difference [Max / Mean / Median]):
1. TSH: C-1 (low concentration) [14.8 / 4.4 / 3.7 / 0.0], C-2 (middle concentration) [10.1 / 4.2 / 3.7 / 0.0]
2. FT4: C-1 (low) [10.0 / 4.2 / 3.9 / 0.0], C-2 (high) [9.6 / 3.3 / 3.1 / 0.0]
3. CA-125: C-1 (middle) [9.6 / 4.3 / 4.3 / 0.3], C-2 (high) [6.5 / 3.5 / 4.3 / 0.4]
4. CEA: C-1 (low) [9.8 / 4.2 / 3.0 / 0.0], C-2 (middle) [8.7 / 3.7 / 2.3 / 0.3]
5. PSA: C-1 (low) [15.4 / 7.6 / 8.2 / 0.0], C-2 (middle) [8.8 / 4.5 / 4.8 / 0.9]
6. HBs-Ab: C-1 (middle) [9.6 / 3.7 / 2.7 / 0.2], C-2 (high) [8.9 / 4.1 / 3.6 / 0.3]
7. Insulin: C-1 (middle) [8.7 / 3.1 / 2.4 / 0.9], C-2 (high) [8.3 / 3.2 / 1.5 / 0.1]
In some low-concentration measurements, the %difference rose above 10 and to nearly 15 percent when the target value was calculated at a lower concentration. In addition, when a sample was measured after Standard level 6, the highest standard in the dispensing sequence, the result may have been affected by a hook effect. Overall, there was no significant difference with the lot change of quality control material (p-value > 0.05). Conclusion: Variation between reagent lots is not large in immunoradiometric assays, likely because the selected items have relatively high detection rates in the immunoradiometric method and because of repeated remeasurements. In most test results, the difference was less than 10 percent, within the standard range. TSH control level 1 and PSA control level 1, which have low-concentration target values, exceeded 10 percent more than twice, but never approached 20 percent. Consequently, a longer observation period is required for more homogeneous average results, along with laboratory-specific acceptance criteria for each item, and further observation considering various variables is advised.
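The acceptance rule described above (pass if the %difference is under 20% at low concentration and under 10% at medium/high) is straightforward to encode. A minimal sketch; the function names are assumptions:

```python
def percent_difference(old_lot, new_lot):
    """%difference between paired results from two reagent lots,
    taken relative to the current (old) lot."""
    return abs(new_lot - old_lot) / old_lot * 100.0

def parallel_test_passes(old_lot, new_lot, low_concentration):
    """Common domestic criterion from the abstract: < 20 % at low
    concentration, < 10 % at medium/high concentration."""
    limit = 20.0 if low_concentration else 10.0
    return percent_difference(old_lot, new_lot) < limit
```

A 15% difference thus passes at low concentration but fails at medium concentration, which is why such a test would be repeated until the result falls within range.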

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently AlphaGo, Google DeepMind's Go-playing artificial intelligence program, won a resounding victory against Lee Sedol. Many people thought machines could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came to be seen as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and performs especially well in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to achieve good performance with existing machine learning techniques. In contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account.
In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network architecture. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of the dropout technique. The F1 score, rather than overall accuracy, was used to evaluate how well each model classifies the class of interest. The details of applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes local features, but the distance between business data fields carries no meaning because the fields are usually independent. In this experiment, we therefore set the CNN filter size to the number of fields so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first in order to reduce the influence of each field's position. For the dropout technique, neurons in each hidden layer were dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout.
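The design choice of a CNN filter as wide as the record can be illustrated with a small sketch (hypothetical field values, not the paper's data): a 1-D convolution whose kernel spans every field collapses to a single weighted sum per filter, i.e. a fully connected unit over the whole record, which is why it captures global rather than local field combinations.

```python
import numpy as np

rng = np.random.default_rng(0)

n_fields = 16                        # e.g. age, occupation, loan status, ...
record = rng.normal(size=n_fields)   # one customer record (hypothetical values)

# A 1-D convolution filter exactly as wide as the record:
kernel = rng.normal(size=n_fields)

# "Valid" convolution of a length-16 signal with a length-16 kernel
# yields exactly one output value per filter ...
conv_out = np.convolve(record, kernel, mode="valid")
assert conv_out.shape == (1,)

# ... which equals a dense (fully connected) unit over all fields
# (convolution flips the kernel, so compare against the reversed weights).
dense_out = record @ kernel[::-1]
assert np.isclose(conv_out[0], dense_out)
```

With several such full-width filters feeding a hidden layer, the network learns feature combinations across the entire record at once, matching the design described above.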
In this study, we obtained several findings as the experiment proceeded. First, models using the dropout technique make slightly more conservative predictions than those without it, and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well in a binary classification problem, a setting to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
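The reason the study evaluates with the F1 score rather than overall accuracy can be seen in a minimal sketch (hypothetical labels, not the paper's data): on an imbalanced target like telemarketing response, a trivial majority-class predictor scores high accuracy but zero F1 on the class of interest.

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 10 customers, only 2 opened an account (the interesting class).
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
always_no = [0] * 10                          # trivial majority-class "model"
model_pred = [0, 0, 0, 0, 0, 1, 0, 0, 1, 1]  # finds both positives, one false alarm

accuracy = sum(t == p for t, p in zip(y_true, always_no)) / len(y_true)
print(accuracy)                       # 0.8 despite finding no positives
print(f1_score(y_true, always_no))    # 0.0
print(f1_score(y_true, model_pred))   # precision 2/3, recall 1 -> F1 = 0.8
```

The F1 score thus rewards the model that actually identifies responders, which is what the comparison of CNN, LSTM, and MLP variants above measures.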

Comparison and evaluation between 3D-bolus and step-bolus, the assistive radiotherapy devices for the patients who had undergone modified radical mastectomy surgery (변형 근치적 유방절제술 시행 환자의 방사선 치료 시 3D-bolus와 step-bolus의 비교 평가)

  • Jang, Wonseok;Park, Kwangwoo;Shin, Dongbong;Kim, Jongdae;Kim, Seijoon;Ha, Jinsook;Jeon, Mijin;Cho, Yoonjin;Jung, Inho
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.1
    • /
    • pp.7-16
    • /
    • 2016
  • Purpose: This study aimed to compare and evaluate the efficiency of two devices, a 3D-bolus and a step-bolus, used in electron beam therapy of the chest wall for patients who had undergone modified radical mastectomy (MRM). Materials and Methods: Six breast cancer patients were selected as subjects, each planned with the reverse hockey stick technique using photon and electron beams. The prescribed electron beam dose to the anterior chest wall was 180 cGy per treatment, and both the 3D-bolus, produced with a 3D printer (CubeX, 3D Systems, USA), and a self-made conventional step-bolus were used. The surface dose under each bolus was measured at five spots (iso-center, lateral, medial, superior, and inferior) using GAFCHROMIC EBT3 film (International Specialty Products, USA), and the measured values were compared and analyzed. Separate treatment plans were also devised for the 3D-bolus and the step-bolus, and the resulting plans were compared. Results: The average surface dose was 179.17 cGy with the 3D-bolus and 172.02 cGy with the step-bolus. The average error against the prescribed dose of 180 cGy was -0.47% with the 3D-bolus and -4.43% with the step-bolus. The maximum error at the iso-center was 2.69% with the 3D-bolus and 5.54% with the step-bolus. The maximum discrepancy in treatment accuracy was about 6% with the step-bolus and about 3% with the 3D-bolus.
The difference in average target dose to the chest wall between the 3D-bolus and step-bolus treatment plans was insignificant, at only 0.3%. However, the average dose to the lung and heart was lower with the 3D-bolus: by 11% for the lung and by 8% for the heart, compared to the step-bolus. Conclusion: This study confirmed that dose uniformity is better with the 3D-bolus than with the step-bolus, because the 3D-bolus, produced to match the skin surface of the chest wall, conforms more closely to the patient's skin and guarantees the chest wall thickness more accurately. The 3D-bolus can be highly valued clinically because it reduces the dose to adjacent organs and protects normal tissue without reducing the dose to the chest wall.
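The reported error rates follow directly from the mean surface doses against the 180 cGy prescription; a quick sketch of the arithmetic:

```python
PRESCRIBED = 180.0  # cGy per treatment

def error_rate(measured, prescribed=PRESCRIBED):
    """Relative error of the measured surface dose, in percent."""
    return (measured - prescribed) / prescribed * 100

# 3D-bolus mean 179.17 cGy: simple mean gives -0.46%; the paper reports
# -0.47%, presumably from averaging per-spot errors before rounding.
print(round(error_rate(179.17), 2))

# Step-bolus mean 172.02 cGy: matches the reported -4.43%.
print(round(error_rate(172.02), 2))
```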

  • PDF

The Impact of the Internet Channel Introduction Depending on the Ownership of the Internet Channel (도입주체에 따른 인터넷경로의 도입효과)

  • Yoo, Weon-Sang
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.1
    • /
    • pp.37-46
    • /
    • 2009
  • The Census Bureau of the Department of Commerce announced in May 2008 that U.S. retail e-commerce sales for 2006 reached $107 billion, up from $87 billion in 2005, an increase of 22 percent. From 2001 to 2006, retail e-commerce sales grew at an average annual rate of 25.4 percent. The explosive growth of e-commerce has caused profound changes in marketing channel relationships and structures in many industries. Despite the great potential implications for both academicians and practitioners, a great deal of uncertainty still exists about the impact of Internet channel introduction on distribution channel management. The purpose of this study is to investigate how the ownership of a new Internet channel affects the existing channel members and consumers. To explore these research questions, this study conducts well-controlled mathematical experiments that isolate the impact of the Internet channel by comparing conditions before and after its entry. The model consists of a monopolist manufacturer selling its product through a channel system that includes one independent physical store before the entry of an Internet store. The addition of the Internet store results in a mixed channel composed of two different types of stores. The new Internet store can be launched by the independent physical store, such as Bestbuy; in this case, the physical retailer coordinates the two types of stores to maximize the joint profits from the two stores. The Internet store can also be introduced by an independent Internet retailer such as Amazon; in this case, retail-level competition occurs between the two types of stores. Although the manufacturer sells only one product, consumers view each product-outlet pair as a unique offering; thus, the introduction of the Internet channel provides two product offerings for consumers. The channel structures analyzed in this study are illustrated in Fig. 1.
It is assumed that the manufacturer acts as a Stackelberg leader, maximizing its own profits with foresight of the independent retailer's optimal responses, as typically assumed in previous analytical channel studies. As a Stackelberg follower, the independent physical retailer or independent Internet retailer maximizes its own profits conditional on the manufacturer's wholesale price. Price competition between the two independent retailers is assumed to be a Bertrand-Nash game. For simplicity, the marginal cost is set at zero, as is typical in this type of study. To explore the research questions above, this study develops a game-theoretic model with the following three key characteristics. First, the model explicitly captures the fact that an Internet channel and a physical store exist in two independent dimensions (one in physical space and the other in cyberspace). This allows the model to demonstrate that the effect of adding an Internet store differs from that of adding another physical store. Second, the model reflects the fact that consumers are heterogeneous in their preferences for using a physical store versus an Internet channel. Third, the model captures the vertical strategic interactions between an upstream manufacturer and a downstream retailer, making it possible to analyze the channel structure issues discussed in this paper. Although numerous previous models capture this vertical dimension of marketing channels, none simultaneously incorporates all three characteristics reflected in this model. The analysis results are summarized in Table 1. When the new Internet channel is introduced by the existing physical retailer and the retailer coordinates both types of stores to maximize their joint profits, retail prices increase due to a combination of coordinated retail pricing and wider market coverage.
The quantity sold does not increase significantly despite the wider market coverage, because the excessively high retail prices partially offset the market coverage effect. Interestingly, the coordinated total retail profits are lower than the combined profits of two competing independent retailers. This implies that when a physical retailer opens an Internet channel, the retailer could be better off managing the two channels separately rather than coordinating them, unless it has foresight of the manufacturer's pricing behavior. It is also found that the introduction of an Internet channel affects the power balance of the channel. Retail competition is strong when an independent Internet store joins a channel with an independent physical retailer, which implies that each retailer in this structure has weak channel power. Due to the intense retail competition, the manufacturer uses its channel power to raise its wholesale price and extract more profit from the total channel profit, while the retailers cannot raise retail prices accordingly because of the intense retail-level competition. In this case, consumer welfare increases due to the wider market coverage and the lower retail prices caused by retail competition. The model employed in this study is not designed to capture every characteristic of the Internet channel; the theoretical model can also be applied to any store that is not geographically constrained, such as TV home shopping or catalog sales by mail. The reasons the model in this study is labeled "Internet" are as follows: first, the most representative example of a store that is not geographically constrained is the Internet; second, catalog sales usually determine their target markets using pre-specified mailing lists, and in this respect the model used here is closer to the Internet than to catalog sales.
However, it would be a desirable future research direction to distinguish, mathematically and theoretically, the core differences among stores that are not geographically constrained. The model is simplified by a set of assumptions to maintain mathematical tractability. First, this study assumes that price is the only strategic tool for competition; in the real world, various marketing variables can be used for competition, so a more realistic model could incorporate other variables such as service levels or operating costs. Second, this study assumes a market with one monopoly manufacturer, so the results should be interpreted carefully in light of this limitation; future research could relax it by introducing manufacturer-level competition. Finally, some of the results derive from the assumption that the monopoly manufacturer is the Stackelberg leader. Although this is a standard assumption in game-theoretic studies of this kind, deeper understanding and more general findings could be gained by analyzing the model under different game rules.
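The Stackelberg leader-follower logic described above can be sketched with the textbook linear-demand version of the pre-entry channel (one manufacturer, one physical retailer). The demand function q = 1 - p and zero marginal cost are illustrative assumptions, not the paper's exact specification: the retailer's best response to wholesale price w is p = (1 + w)/2, and the leader, anticipating this, chooses w = 1/2.

```python
# Backward induction for a one-manufacturer / one-retailer channel.
# Assumptions (illustrative, not the paper's model): demand q = 1 - p,
# marginal cost 0, manufacturer is the Stackelberg leader.

def retailer_price(w):
    # Follower maximizes (p - w) * (1 - p)  =>  p* = (1 + w) / 2
    return (1 + w) / 2

def manufacturer_profit(w):
    p = retailer_price(w)
    return w * (1 - p)  # leader earns w per unit sold

# The leader anticipates the follower's response; a coarse grid search
# recovers the closed-form optimum w* = 1/2.
best_w = max((i / 1000 for i in range(1001)), key=manufacturer_profit)

p = retailer_price(best_w)
q = 1 - p
print(best_w, p, q)                 # 0.5 0.75 0.25
print(manufacturer_profit(best_w))  # 0.125
print((p - best_w) * q)             # retailer profit 0.0625
```

The retail price (0.75) exceeding the wholesale price (0.5) is the double-marginalization effect; after Internet entry, the same backward-induction logic runs with two retail prices and a Bertrand-Nash retail stage, producing the coordination and competition outcomes summarized in Table 1.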

  • PDF