• Title/Summary/Keyword: Actual data


The Effects of Evaluation Attributes of Cultural Tourism Festivals on Satisfaction and Behavioral Intention (문화관광축제 방문객의 평가속성 만족과 행동의도에 관한 연구 - 2006 광주김치대축제를 중심으로 -)

  • Kim, Jung-Hoon
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.55-73
    • /
    • 2007
  • Festivals are an indispensable feature of cultural tourism (Formica & Uysal, 1998). Cultural tourism festivals are increasingly being used as instruments for promoting tourism and boosting the regional economy, so much research related to festivals has been undertaken from a variety of perspectives. Plans to revisit a particular festival have been viewed as an important research topic both in academia and in the tourism industry. Festivals have frequently been labeled as cultural events, and cultural tourism festivals have become a crucial component in constituting the attractiveness of tourism destinations (Prentice, 2001). As a result, a considerable number of tourist studies have been carried out on diverse cultural tourism festivals (Backman et al., 1995; Crompton & McKay, 1997; Park, 1998; Clawson & Knetch, 1996). Much of the previous literature empirically shows a close linkage between tourist satisfaction and behavioral intention at festivals. The main objective of this study is to investigate the effects of the evaluation attributes of cultural tourism festivals on satisfaction and behavioral intention. To accomplish this objective, evaluation items for cultural tourism festivals were identified through a literature review and an empirical study. Using a varimax rotation with Kaiser normalization, the research obtained four factors from the 18 evaluation attributes of cultural tourism festivals. Some empirical studies have examined the relationship between behavioral intention and actual behavior. To understand the link between tourist satisfaction and behavioral intention, this study suggests five hypotheses and a hypothesized model. The analysis is based on primary data collected from visitors who participated in the 2006 Gwangju Kimchi Festival. In total, 700 self-administered questionnaires were distributed and 561 usable questionnaires were obtained. Respondents rated the 18 satisfaction items on a scale from 1 (strongly disagree) to 7 (strongly agree). 
Dimensionality and stability of the scale were evaluated by a factor analysis with varimax rotation. Four factors emerged with eigenvalues greater than 1, explaining 66.40% of the total variance, with Cronbach's alpha values ranging from 0.774 to 0.876. The four factors were named: advertisement and guides, programs, food and souvenirs, and convenient facilities. To test and estimate the hypothesized model, a two-step approach with an initial measurement model and a subsequent structural model for Structural Equation Modeling was used. The AMOS 4.0 analysis package was used to conduct the analysis, with the maximum likelihood procedure for estimation. The Chi-square test, the most common model goodness-of-fit test, was used. In addition, following the literature on Structural Equation Modeling, this study used further fit indexes to assess the suggested model: the goodness-of-fit index (GFI) and root mean square error of approximation (RMSEA) as absolute fit indexes, and the normed fit index (NFI) and non-normed fit index (NNFI) as incremental fit indexes. The results of t-tests and ANOVAs revealed significant differences (at the 0.05 level); therefore H1 (tourist satisfaction levels differ across demographic traits) is supported. According to the multiple regression analysis and the AMOS results, H2 (tourist satisfaction positively influences revisit intention), H3 (tourist satisfaction positively influences word of mouth), H4 (evaluation attributes of cultural tourism festivals influence tourist satisfaction), and H5 (tourist satisfaction positively influences behavioral intention) are also supported. The conclusions of this study are as follows. First, there were differences in satisfaction levels according to the demographic characteristics of visitors. Not all visitors had the same degree of satisfaction with their cultural tourism festival experience. 
Therefore it is necessary to understand the satisfaction of tourists if the experiences provided are to meet their expectations. In making festival plans, organizers should consider demographic variables when explaining and segmenting visitors to cultural tourism festivals. Second, satisfaction with the evaluation attributes of cultural tourism festivals had a significant direct impact on visitors' intention to revisit such festivals and on the word-of-mouth publicity they shared. The results indicated that visitor satisfaction is a significant antecedent of the intention to revisit. Festival organizers should strive to forge long-term relationships with visitors. In addition, it is also necessary to understand how the intention to revisit a festival changes over time and to identify the critical satisfaction factors. Third, it was confirmed that behavioral intention is enhanced by satisfaction. The strong link between satisfaction and the behavioral intentions of visitors is ensured by high-quality advertisement and guides, programs, food and souvenirs, and convenient facilities. Thus, examining revisit intention from a temporal viewpoint may be of great significance for both practical and theoretical reasons. Additionally, festival organizers should give special attention to visitor satisfaction, as satisfied visitors are more likely to return sooner. The findings of this research have several practical implications for festival managers. The promotion of cultural festivals should be based on an understanding of tourist satisfaction for the long-term success of tourism. This study can help managers carry out this task in a more informed and strategic manner by examining the effects of demographic traits on the level of tourist satisfaction and on behavioral intention. In other words, differentiated marketing strategies should be stressed and executed by the relevant parties. 
The limitations of this study are as follows: the results cannot be generalized to other cultural tourism festivals because we have not explored the many different kinds of festivals. A future study should provide a comparative analysis of other festivals and of different visitor segments. Also, further efforts should be directed toward developing more comprehensive temporal models that can explain the behavioral intentions of tourists.
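The retention rule (eigenvalues greater than 1 under the Kaiser criterion) and the reliability statistic (Cronbach's alpha) used above can be sketched in a few lines. This is a minimal illustrative reimplementation with hypothetical function names, not the AMOS/SPSS analysis the authors actually ran:

```python
import numpy as np

def kaiser_retained_factors(X):
    """Number of factors retained under the Kaiser criterion:
    eigenvalues of the item correlation matrix greater than 1."""
    corr = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    return int(np.sum(np.linalg.eigvalsh(corr) > 1.0))

def cronbach_alpha(X):
    """Cronbach's alpha for a block of items X (observations x items)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Applied per factor to the corresponding subset of the 18 items, `cronbach_alpha` would yield the 0.774–0.876 range the abstract reports.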


Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such findings as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produces more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcend geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing will enhance the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and is extended to reflect work characteristics. 
All analyses were conducted on actual data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables (Pareto ratio and Gini coefficient) with seven control variables, such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to the featured-article level, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles are those citing at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. 
We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect on collaboration efficiency is more pronounced for more academic tasks.
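The two focal measures defined above, the Pareto ratio and the Gini coefficient of per-editor contribution counts, can be computed directly. This is an illustrative sketch with hypothetical function names, not the authors' analysis code:

```python
import numpy as np

def pareto_ratio(contributions):
    """Share of all contributions made by the top 20% of contributors."""
    c = np.sort(np.asarray(contributions, dtype=float))[::-1]  # descending
    top_n = max(1, int(np.ceil(0.2 * len(c))))                 # upper 20% of people
    return c[:top_n].sum() / c.sum()

def gini(contributions):
    """Gini coefficient of contribution counts (0 = perfect equality)."""
    c = np.sort(np.asarray(contributions, dtype=float))        # ascending
    n = len(c)
    index = np.arange(1, n + 1)
    return (2 * (index * c).sum()) / (n * c.sum()) - (n + 1) / n
```

For example, a group where one of five editors makes 16 of 20 edits has a Pareto ratio of 0.8, the classic 80-20 split; these per-article values would then enter the Cox models as focal covariates.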

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands, I studied the following topics, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to construct a unit hydrograph, but I explain here how to derive one from the actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph every two hours, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, recording an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gage stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gage record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff flow. I also tried to keep the difference between the calculated discharge amount and the measured discharge below 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimensions. According to the principles of design presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structure damage. The total flow into the reservoir is the summation of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. 
To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. The mean standard tide is adequate for determining the sluice dimensions, because the spring tide is the worst case and the neap tide the best case for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. 
When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is that point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner. Using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h=\frac{V^2}{2g}$, and this must equal the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir." The flow over the weir is then dependent upon the higher water level and not on the difference between high and low water levels. When the weir is "submerged," that is, when the higher water level is less than 2/3 of the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, due to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec. 
As the maximum velocities are higher than this limit, we must use other construction methods in closing the gap. This can be done with dump cars from each side or by using a cableway.
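The two core calculations described above, dividing the direct-runoff ordinates by the rainfall intensity to obtain a unit hydrograph, and converting a head difference into a gap velocity via $h = V^2/2g$, can be sketched as follows. Function names and the simplified unit handling (ordinates scaled per mm of effective rainfall) are illustrative assumptions, not the author's original computation:

```python
import math

def unit_hydrograph(total_flow, base_flow, rain_depth_mm):
    """Unit hydrograph ordinates: subtract base flow from the total runoff,
    then divide each direct-runoff ordinate by the effective rainfall depth."""
    return [(q - b) / rain_depth_mm for q, b in zip(total_flow, base_flow)]

def gap_velocity(head_m, g=9.81):
    """Velocity through the closing gap from the head difference,
    inverting h = V^2 / (2g)."""
    return math.sqrt(2 * g * head_m)
```

In the closure calculation, `gap_velocity` applied to the inner/outer level difference would be compared against the current-derived velocity, and against the 3 m/sec barge limit mentioned above.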


STUDIES ON AVIAN VISCERAL LYMPHOMATOSIS I. THE INCREASED INCIDENCE AMONG CHICKEN FLOCKS AND PATHOLOGIC PICTURES (장기형임파종증(臟器型淋巴腫症)에 관(關)한 연구(硏究) 1. 계군(鷄群)에서의 임파종증(淋巴腫症)의 발생(發生) 및 병리학적소견(病理學的所見))

  • Kim, Uh Ho;Lim, Chang Hyeong
    • Korean Journal of Veterinary Research
    • /
    • v.4 no.1
    • /
    • pp.35-42
    • /
    • 1964
  • 1) An analysis was made of 3,500 postmortem diagnoses for the three years 1961 through 1963 to determine the actual incidence of avian visceral lymphomatosis in the field. Autopsied chickens showing gross alterations amounted to 7.6 percent, or 266 cases. The diminished incidence of the disease in the second and third years seemed due to the decreasing total number of chicken flocks year by year, owing to difficult feed supply. 2) Because the breeds and lines of the chickens autopsied in this study were not clearly known, no distinct data on the incidence in various breeds could be produced; the known breeds were too few in number to have any statistical significance. Unexpectedly, no types of avian leukosis other than visceral lymphomatosis were observed in any appreciable number in this analysis. 3) Pathologic analysis of the affected organs was made grossly and microscopically. In the gross pictures, the liver, spleen, kidney, ovary, and in some cases the intestine principally showed lesions, but their manifestation varied among organs. Livers were affected most frequently, followed by spleens. The organs were classified and arranged according to their gross alterations: among livers, one-half were of the diffuse variety, one-fourth nodular, about one-seventh mixed, and the granular variety followed. Among the spleen samples, two-thirds were of the diffuse variety, one-fourth nodular, and only three cases follicular. Ovaries almost all showed follicular lesions; the diffuse variety accounted for less than one-fifth of the total specimens. Kidneys occurred almost entirely in the diffuse variety, and intestines showed only nodular tumors. Microscopically, 42 cases of visceral lymphomatosis, comprising 24 livers, 10 spleens, 3 kidneys, 3 intestines, and 2 ovaries, were examined. The tumor cells were lymphoid cells varying in size, shape, and stainability. Mitotic figures were usually present. 
The proportion of the component cells varied in all cases, and there were variations in the distribution of the tumor cells. The types of distribution were classified according to the standard proposed by Horiuchi as nodular, infiltrative, and diffuse proliferation. In cases of visceral lymphomatosis of the liver and spleen, the infiltrative, nodular, and diffuse proliferation types could all be classified. In the kidney cases, the diffuse and nodular proliferation types were observed. In the intestine and ovary cases, the infiltrative and diffuse proliferation types were observed, respectively.


Chinese Communist Party's Management of Records & Archives during the Chinese Revolution Period (혁명시기 중국공산당의 문서당안관리)

  • Lee, Won-Kyu
    • The Korean Journal of Archival Studies
    • /
    • no.22
    • /
    • pp.157-199
    • /
    • 2009
  • The organization for managing records and archives did not emerge together with the founding of the Chinese Communist Party. Such management became active with the establishment of the Department of Documents (文書科) and its affiliated offices overseeing the reading and safekeeping of official papers, after the formation of the Central Secretariat (中央秘書處) in 1926. Improving the work of the Secretariat's organization became the focus of critical discussions in the early 1930s. The main criticism was that the Secretariat had failed to be cognizant of its political role and had degenerated into a mere "functional organization." The solution to this was the "politicization of the Secretariat's work." Moreover, influenced by the "Rectification Movement" in the 1940s, the party emphasized the responsibility of the Resources Department (材料科), which extended beyond managing documents to collecting, organizing, and providing various kinds of important information. In the meantime, security in composing documents continued to be emphasized through such methods as using different names for figures and organizations or employing special inks for document production. In addition, communication between the central political organs and regional offices was emphasized through regular reports on work activities and local conditions. The General Secretary not only composed the drafts of the major official documents but also handled the reading and examination of all documents, and thus played a central role in record processing. The records, called archives after undergoing document processing, were placed in safekeeping. This function was handled by the "Document Safekeeping Office (文件保管處)" of the Central Secretariat's Department of Documents. 
Although the Document Safekeeping Office, also called the "Central Repository (中央文庫)," could no longer accept additional archive transfers beginning in the early 1930s, the Resources Department continued to strengthen its role of safekeeping and providing documents and publication materials throughout the 1940s. In particular, collections of materials for research and study were carried out, and with the recovery of regions that had been under Japanese rule, massive amounts of archives and documents were collected. After being stipulated by rules in 1931, archive classification and cataloguing methods became actively systematized, especially in the 1940s. Basically, "subject" classification methods and fundamental cataloguing techniques were adopted. The principle of taking "importance" and "confidentiality" as the criteria of management emerged from a relatively early period, but the concept and process of appraisal that differentiated preservation from discarding of documents were not clear. While implementing a system of secure management and restricted access for confidential information, the view that archive materials should also be provided for use was very strong, as can be seen in the slogan, "the unification of preservation and use." Even during the revolutionary movement and wars, the Chinese Communist Party continued its efforts to strengthen the management and preservation of records and archives. The results were not always desirable, nor was there any reason for such experiences to lead to stable development. The historical conditions in which the Chinese Communist Party found itself probably made this inevitable. 
The most pronounced characteristics of this process can be found in the fact that they not only pursued efficiency of records & archives management at the functional level but, while strengthening their self-awareness of the political significance impacting the Chinese Communist Party's revolution movement, they also paid attention to the value possessed by archive materials as actual evidence for revolutionary policy research and as historical evidence of the Chinese Communist Party.

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical auxiliary indexes. However, pattern analysis is difficult, and it has been computerized less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved so far, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that were not recognized by past technology, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. When such studies find a meaningful pattern, they locate a point that matches the pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be many disparities with reality. Existing research tries to find patterns with stock price predictive power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some of these patterns have price predictability, there have been no reports of performance in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. 
In this study, the 16 patterns of upward conversion and 16 patterns of downward conversion are reclassified into ten groups so that they can be easily implemented in the system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a realistic situation because it assumes that both the buy and the sell were actually executed. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low-line zig-zag, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in the test results. We interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was far too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the Walk-forward Analysis (WFA) method, which separates the test section from the application section, so we were able to respond appropriately to market changes. In this study, we optimize at the portfolio level because there is a risk of over-optimization if we tune the variables for each individual stock. 
Therefore, we set the number of constituent stocks at 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but that higher volatility is not necessarily better.

Performance Evaluation of Radiochromic Films and Dosimetry CheckTM for Patient-specific QA in Helical Tomotherapy (나선형 토모테라피 방사선치료의 환자별 품질관리를 위한 라디오크로믹 필름 및 Dosimetry CheckTM의 성능평가)

  • Park, Su Yeon;Chae, Moon Ki;Lim, Jun Teak;Kwon, Dong Yeol;Kim, Hak Joon;Chung, Eun Ah;Kim, Jong Sik
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.93-109
    • /
    • 2020
  • Purpose: The radiochromic film (Gafchromic EBT3, Ashland Advanced Materials, USA) and the 3-dimensional analysis system Dosimetry CheckTM (DC, MathResolutions, USA) were evaluated for patient-specific quality assurance (QA) of helical tomotherapy. Materials and Methods: Depending on the tumors' positions, three types of targets, the abdominal tumor (130.6 ㎤), the retroperitoneal tumor (849.0 ㎤), and the whole abdominal metastasis tumor (3131.0 ㎤), were applied to the humanoid phantom (Anderson Rando Phantom, USA). We established a total of 12 comparative treatment plans under four geometric conditions of beam irradiation: field widths (FW) of 2.5 cm and 5.0 cm and pitches of 0.287 and 0.43. Ionization measurements (1D) and EBT3 measurements made by inserting the film into the cheese phantom (2D) were compared to DC measurements of the 3D dose reconstruction on CT images from beam fluence log information. For the clinical feasibility evaluation of the DC, dose reconstruction was performed using the same cheese phantom as in the EBT3 method. The recalculated dose distributions revealed the dose error during actual irradiation, quantitatively compared to the treatment plan on the same CT images. The thread effect, which may appear in helical tomotherapy, was analyzed by ripple amplitude (%). We also performed gamma index analysis (DD: 3%, DTA: 3 mm, pass threshold: 95%) to check the pattern of the dose distribution. Results: The ripple amplitude measurement showed the highest average, 23.1%, in the peritoneum tumor. In the radiochromic film analysis, the absolute dose agreed on average within 0.9±0.4%, and the gamma index analysis averaged 96.4±2.2% (passing rate >95%), although this could be limited for large target sizes such as the whole abdominal metastasis tumor. In the DC analysis with the humanoid phantom for the FW of 5.0 cm, the three regions' average was 91.8±6.4% between the 2D and 3D plans. 
The three planes (axial, coronal, and sagittal) and the dose profile could be analyzed for the entire peritoneum tumor and the whole abdominal metastasis target, together with the planned dose distributions. The dose errors based on the dose-volume histogram in the DC evaluations increased with FW and pitch. Conclusion: The DC method could perform dose error analysis on the 3D patient image data using only the measured beam fluence log information, without any additional dosimetry tools, for patient-specific quality assurance. Also, there may be no limit on applicable tumor location and size; therefore, the DC could be useful for patient-specific QA during helical tomotherapy of large and irregular tumors.
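The gamma index criterion used in the abstract combines a dose-difference (DD) tolerance with a distance-to-agreement (DTA) tolerance; a point passes when its minimum combined discrepancy is at most 1. A minimal 1D sketch of this analysis, using hypothetical dose profiles rather than the paper's data:

```python
import math

def gamma_index(ref, meas, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """1D global gamma: for each reference point, find the minimum
    combined dose-difference / distance-to-agreement discrepancy over
    the measured profile. Dose differences are normalized to the
    reference maximum (global normalization)."""
    d_max = max(ref)
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_meas in enumerate(meas):
            dist = abs(i - j) * spacing_mm            # spatial term (mm)
            ddiff = 100.0 * (d_meas - d_ref) / d_max  # dose term (%)
            g = math.sqrt((dist / dta_mm) ** 2 + (ddiff / dd_pct) ** 2)
            best = min(best, g)
        gammas.append(best)
    return gammas

# Hypothetical profiles (arbitrary units), not the paper's measurements
ref = [10.0, 50.0, 100.0, 50.0, 10.0]
meas = [11.0, 52.0, 99.0, 49.0, 10.0]
g = gamma_index(ref, meas, spacing_mm=1.0)
passing_rate = 100.0 * sum(x <= 1.0 for x in g) / len(g)  # % of points with gamma <= 1
```

A clinical analysis works the same way in 2D or 3D, with the passing rate compared against the chosen threshold (95% in the abstract).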

Dual Path Model in Store Loyalty of Discount Store (대형마트 충성도의 이중경로모형)

  • Ji, Seong-Goo;Lee, Ihn-Goo
    • Journal of Distribution Research
    • /
    • v.15 no.1
    • /
    • pp.1-24
    • /
    • 2010
  • I. Introduction The domestic discount store industry was reorganized into two large players and one mid-sized player when Home Plus took over Home Ever in 2008. As of October 2008, E-Mart has 118 outlets, Home Plus 112, and Lotte Mart 60. With 403 outlets in total, the industry is approaching a saturation point, and we can see that it has entered the mature stage of the retail life cycle. Much effort now goes into retaining existing customers rather than acquiring new ones. This competition leads firms to recognize 'store loyalty' as the primary strategic tool for sustainable competitiveness. In other words, the strategic goal of a discount store is to raise customers' repurchase rate by increasing store loyalty. If retailers can identify the main drivers of store loyalty, they can craft more efficient and effective retail strategies that yield higher sales and profits. In this practical sense, many papers focus on the antecedents of store loyalty. Researchers have examined causal relationships between store loyalty and antecedents such as store characteristics, store image, in-store atmosphere, in-store sales promotion, service quality, customer characteristics, crowding, switching cost, trust, satisfaction, and commitment. Recently, academic researchers and practitioners have become interested in the 'dual path model for service loyalty'. There are two paths to store loyalty: the first emphasizes the symbolic and emotional dimension of the service brand, and the second focuses on the quality of products and services. We call the former the extrinsic path and the latter the intrinsic path. This means that consumers' cognitive path to store loyalty is not single but dual.
Existing studies on the dual path model are as follows. First, on the extrinsic path, some papers in domestic settings show a 'store personality → identification → loyalty' path. Second, service quality affects loyalty, a behavioral variable, through the mediation of customer satisfaction; however, it is very difficult to find an empirical paper applying this mediating model to domestic discount stores. Domestic research on store loyalty covers both paths, but attention to the intrinsic path in the discount store context is relatively scarce, and we see a clear need to integrate the extrinsic and intrinsic paths. In terms of the retail industry, this study is also meaningful because retailers seek competitiveness through store loyalty. The purpose of this paper is therefore to integrate and complement the two existing paths into one specific model, the dual path model, which includes both the intrinsic and extrinsic paths to store loyalty. With this research, we expect to clarify the full process by which customers' store loyalty forms, which had not been clearly explained. In other words, we propose a dual path model for discount store loyalty originating from store personality and service quality. The model comprises an extrinsic path, discount store personality → store identification → store loyalty, and an intrinsic path, service quality of the discount store → customer satisfaction → store loyalty. II. Research Model The dual path model integrates the intrinsic and extrinsic paths into one specific model. The intrinsic path emphasizes quality characteristics and is based on an information processing perspective, while the extrinsic path focuses on brand characteristics and emphasizes the symbolic and emotional dimension of the brand.
This model is composed of an extrinsic path, discount store personality → store identification → store loyalty, and an intrinsic path, service quality of the discount store → customer satisfaction → store loyalty. The hypotheses are as follows. Hypothesis 1: Service quality perceived by customers in a discount store has a positive effect on customer satisfaction. Hypothesis 2: Store personality perceived by customers in a discount store has a positive effect on store identification. Hypothesis 3: Customer satisfaction in a discount store has a positive effect on store loyalty. Hypothesis 4: Store identification has a positive effect on store loyalty. III. Results and Implications We sampled consumers who patronize discount stores. Using structural equation model (SEM) analysis, we empirically tested the validity and fit of the dual path model for store loyalty in discount stores. The fit indices of the model matched the data well. On the intrinsic path, service quality (SQ) is positively related to customer satisfaction (CS), and customer satisfaction (CS) has a highly significant positive effect on store loyalty (SL). On the extrinsic path, store personality (SP) is positively related to store identification (SI), which shows a significant effect on store loyalty. Table 1 shows these results. There are several theoretical and practical implications. First, many studies on discount store loyalty have been conducted from various perspectives, but without an integrative view. This research was theoretically designed to integrate various and controversial arguments into one systematic model. We empirically tested the dual path model of store loyalty formation and offer a systematic, integrative framework that we hope will stimulate creative and active future research.
Second, established papers focus on the relationship between store loyalty and antecedents such as store characteristics, atmosphere, in-store sales promotion, service quality, trust, and commitment, but a single path, intrinsic or extrinsic, limits a thorough understanding of how store loyalty forms. Going beyond these limits, we propose a new, dual path for store loyalty, which is meaningful in itself. Third, discount store firms design and execute marketing strategies to increase store loyalty. Because this paper presents an integrated and systematic path to store loyalty, it provides practitioners with a reference framework for actual strategy formation. A special feature of this study is that it represents six sub-dimensions of service quality on the intrinsic path and four sub-dimensions of store personality on the extrinsic path, so marketers can plan more analytically with these concrete sub-dimensions. When discount store marketers plan MPR, advertising, campaigns, or sales promotions, they can use items that are more competitive than their competitors'.
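The dual path structure above (SQ → CS → SL and SP → SI → SL) implies that store loyalty is reached through two mediated chains, each with an indirect effect equal to the product of its path coefficients. A toy illustration with hypothetical standardized coefficients (not the paper's SEM estimates):

```python
# Hypothetical standardized path coefficients for the dual path model;
# real values would come from the fitted SEM.
paths = {
    ("SQ", "CS"): 0.60,  # service quality -> customer satisfaction (intrinsic)
    ("CS", "SL"): 0.50,  # customer satisfaction -> store loyalty
    ("SP", "SI"): 0.45,  # store personality -> store identification (extrinsic)
    ("SI", "SL"): 0.30,  # store identification -> store loyalty
}

def indirect_effect(chain, paths):
    """Product of coefficients along a mediation chain, e.g. SQ -> CS -> SL."""
    effect = 1.0
    for a, b in zip(chain, chain[1:]):
        effect *= paths[(a, b)]
    return effect

intrinsic = indirect_effect(["SQ", "CS", "SL"], paths)  # 0.60 * 0.50 = 0.30
extrinsic = indirect_effect(["SP", "SI", "SL"], paths)  # 0.45 * 0.30 = 0.135
total = intrinsic + extrinsic  # combined dual-path effect on store loyalty
```

Under these assumed numbers the intrinsic chain would dominate, which is how such a model lets marketers compare where loyalty investments pay off.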

  • PDF

Problems in the Korean National Family Planning Program (한국가족계획사업(韓國家族計劃事業)의 문제점(問題點))

  • Hong, Jong-Kwan
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.2 no.2
    • /
    • pp.27-36
    • /
    • 1975
  • The success of the family planning program in Korea is reflected in the decrease in the growth rate from 3.0% in 1962 to 2.0% in 1971, and in the decrease in the fertility rate from 43/1,000 in 1960 to 29/1,000 in 1970. However, it would be erroneous to attribute these reductions entirely to the family planning program. Other socio-economic factors, such as the rising age at marriage and the increasing use of induced abortion, definitely had an impact on the lowered growth and fertility rates. Despite the relative success of the program to date in meeting its goals, there is no room for complacency. Meeting the goal of a further reduction in the population growth rate to 1.3% by 1981 is a much more difficult task than any faced in the past. Not only must fertility be lowered further, but the size of the target population itself will expand tremendously in the late seventies, as the post-war baby boom of the 1950s reaches reproductive age. Furthermore, it is doubtful that the age at marriage will continue to rise as in the past or that the incidence of induced abortion will continue to increase. Consequently, future reductions in fertility will depend more on the performance of the national family planning program, with less assistance from these non-program factors. This paper describes various approaches to solving these current problems. 1. PRACTICE RATE IN FAMILY PLANNING In 1973, the attitude (approval) and knowledge rates were quite high, at 94% and 98% respectively, but a large gap exists between these and the actual practice rate of only 36%. Two factors must be considered in attempting to close this KAP gap. The first is that social norms still favor a larger family, so the practice rate cannot be raised very quickly. The second is that the family planning program has not yet reached all eligible women.
A 1973 study determined that a large portion of all eligible women, 30% in fact, do not want more children but are not practicing family planning. Thus, future efforts to close the KAP gap must focus attention and services on this important, large group of potential acceptors. 2. CONTINUATION RATES Dissatisfaction with the loop and the pill has resulted in high discontinuation rates. For example, a 1973 survey revealed that within the first six months of initial loop acceptance, nearly 50% were dropouts, and within the first four months of initial pill acceptance, nearly 50% were dropouts. These discontinuation rates have risen over the past few years. The high rate of discontinuance obviously decreases contraceptive effectiveness and has resulted in many unwanted births, which is directly related to the increase in induced abortions. In the future, the family planning program must emphasize improved quality of initial and follow-up services, rather than greater quantity, to ensure higher continuation rates and thus more effective contraceptive protection. 3. INDUCED ABORTION As noted earlier, the use of induced abortion has increased yearly. In 1960, the average number of abortions was 0.6 per woman aged 15-44; by 1970, that had increased to 2 per woman. In 1966, 13% of all women aged 15-44 had experienced at least one abortion; by 1971, that figure had jumped to 28%. In 1973 alone, the total number of abortions was 400,000. Besides the ever-increasing number of induced abortions, another change is that since 1965 those who use abortion have come to include not only the middle class but also rural and low-income women. In the future, in response to the demand for abortion services among rural and low-income women, the government must provide and support abortion services for these women as part of the national family planning program. 4.
TARGET SYSTEM Since 1962, the nationwide target system has been used to set a target for each method; the target number of acceptors is then apportioned among sub-areas according to the number of eligible couples in each area. Because these targets are set without consideration for demographic factors, particular tastes, prejudices, and previous patterns of acceptance in each area, the result is a high discontinuation rate for all methods and a high wastage rate for the oral pill and condom. In the future, to alleviate these problems of the method-based target system, an alternative such as the weighted-credit system should be adopted nationwide. In this system, each contraceptive method is assigned a specific number of points based upon the couple-years of protection (CYP) provided by the method, and no specific targets for each method are given. 5. INCREASE OF STERILIZATION TARGET Two special projects, the hospital-based family planning program and the armed forces program, have greatly contributed to increasing acceptance of female and male sterilization respectively. From January to September 1974, 28,773 sterilizations were performed; during the same period in 1975, 46,894 were performed, a 63% increase. If this trend continues, approximately 70,000 sterilizations will have been performed by the end of 1975. Sterilization is a much better method than either the loop or the pill, in terms of more effective contraceptive protection and an almost zero dropout rate. In the future, the family planning program should continue to stress the special programs that make more sterilizations possible. In particular, it should add laparoscopic techniques to raise female sterilization acceptance rates. 6. INCREASE NUMBER OF PRIVATE ACCEPTORS Among current family planning users, approximately one third are in the private sector and thus do not require government subsidy.
The number of private acceptors increases with urbanization and economic growth. To speed this process, the government initiated the special hospital-based family planning program, which is utilized mostly by the private sector. In the future, to further hasten the increase of private acceptors, the government should encourage doctors in private practice to provide family planning services and should supply the contraceptives. This way, those who use the private medical system will also be able to receive, and pay for, family planning services. Another means of increasing the number of private acceptors is to greatly expand the commercial outlets for pills and condoms beyond the existing service points of drugstores, hospitals, and health centers. 7. IE&C PROGRAM The currently preferred family size is nearly twice as high as that needed to achieve a stable population. A strong preference for boys also hinders small family size, as nearly all couples feel they must have at least one son. The IE&C program must, in the future, strive to emphasize the values of the small family and the equality of the sexes. A second problem for the IE&C program to address is the large group of people who approve of family planning and want no more children but do not practice contraception; the IE&C program must work to motivate these people to accept family planning. Finally, for those who already practice, the IE&C program must stress continuation of use. To ensure the highest effectiveness, the IE&C campaign should be based on a detailed factor analysis of contraceptive discontinuance. In conclusion, Korea faces a seriously unfavorable socio-demographic situation unless the population growth rate can be curtailed. In the future, the decrease in fertility will depend solely on the family planning program, as the effect of the other socio-economic factors has already been felt to the maximum.
A second serious factor to consider is the increasing number of eligible women due to the 1950s baby boom. Thus, to meet these challenges, the program target must be increased, and the program must improve the effectiveness of its current activities and develop new programs.
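The weighted-credit system proposed in section 4 amounts to a simple method-neutral accounting: each acceptance earns points proportional to the couple-years of protection it provides, and sub-areas are judged on total credit rather than per-method quotas. A sketch with illustrative CYP weights (the actual program's point values are not given in the abstract):

```python
# Hypothetical CYP (couple-years of protection) credit per acceptance.
# Real programs calibrate these weights empirically; values here are
# illustrative assumptions only.
CYP_POINTS = {
    "sterilization": 10.0,       # long-lasting protection -> high credit
    "loop": 2.5,
    "pill_cycle": 1.0 / 13.0,    # ~13 pill cycles per couple-year
    "condom_unit": 1.0 / 100.0,  # ~100 condoms per couple-year
}

def total_credit(acceptances):
    """Sum CYP credits over reported acceptances, given as {method: count}."""
    return sum(CYP_POINTS[method] * n for method, n in acceptances.items())

# A sub-area reports a mix of acceptances; no per-method target applies.
report = {"sterilization": 3, "loop": 10, "pill_cycle": 130, "condom_unit": 500}
credit = total_credit(report)  # 30 + 25 + 10 + 5 = 70 couple-years of protection
```

Because credit is earned in a single currency, workers are not pushed toward methods with poor continuation just to hit a method-specific quota, which is the drawback of the target system the paper criticizes.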

  • PDF

A Study on the Effect of Water Soluble Extractive upon Physical Properties of Wood (수용성(水溶性) 추출물(抽出物)이 목재(木材)의 물리적(物理的) 성질(性質)에 미치는 영향(影響))

  • Shim, Chong-Supp
    • Journal of the Korean Wood Science and Technology
    • /
    • v.10 no.3
    • /
    • pp.13-44
    • /
    • 1982
  • 1. It has long been said that soaking wood in water for an extended time helps reduce defects such as checking, cupping, and bow caused by undue shrinking and swelling. There are, however, no actual data establishing this definitively, although it has been conjectured that water-soluble extractives may play a role. On the other hand, little work has been done on the effect of water-soluble extractives on the physical properties of wood and its relation to this problem. If it can be determined whether long soaking in water actually reduces defects due to undue shrinking and swelling compared with unsoaked wood, it may contribute greatly to the rational use of wood. To account for the effect of water-soluble extractives on the physical properties of wood, this study was conducted at the wood technology laboratory, School of Forestry, Yale University, under the competent guidance of Dr. F. F. Wangaard, with the following three species provided by the same laboratory: 1. Pinus strobus 2. Quercus borealis 3. Hymenaea courbaril 2. The physical properties investigated in this study are as follows. a. Equilibrium moisture content at different relative humidity conditions. b. Shrinkage value from the green condition to different relative humidity conditions and the oven-dry condition. c. Swelling value from the oven-dry condition to different relative humidity conditions. d. Specific gravity. 3. To investigate the effect of water-soluble extractives on the physical properties of wood, the experiment was carried out with two differently treated sets of specimens, one soaked in water and the other in sugar solution, together with control specimens. 4.
The quantity of water-soluble extractives of each species and the groups of chemical compounds in the liquid extracted from each species are shown in Table 36. Species differ somewhat in the quantity of extractives and in the groups of chemical compounds. 5. For equilibrium moisture content at different relative humidity conditions: (a) Except for the desorption case at 80% R.H.C. (relative humidity condition), there is a definite separation between untreated and treated specimens; untreated specimens hold more water than treated specimens at the same R.H.C. (b) The specimens treated in sugar solution show almost the same tendency as the untreated specimens. (c) Between species there is no definite relation in equilibrium moisture content, except that the E.M.C. in heartwood of pine is lower than in sapwood, which may arise from differences in anatomical structure. 6. For shrinkage: (a) The shrinkage value of specimens treated in water is greater than that of untreated specimens, except in one case, heartwood of pine at 80% R.H.C. (b) The shrinkage value of specimens treated in sugar solution is less than that of the others and shows almost the same tendency as the untreated specimens; this suggests that penetration of sugar into the wood can decrease its shrinkage. (c) Between species, the shrinkage of pine heartwood is less than that of pine sapwood; oak shrinks the most, and Hymenaea shrinks less than oak but more than pine. (d) The directional differences in shrinkage seen in all species here match those of other species previously tested. (e) There is a definite relation between the treated-untreated difference in shrinkage and the amount of extractives: more extractives give a greater difference in shrinkage between treated and untreated specimens. 7.
For swelling: (a) The swelling value of treated specimens is greater than that of untreated specimens in all cases. (b) Comparing directions within a species, swelling in the tangential direction is larger than in the radial direction. (c) Between species, oak swells the most and pine heartwood the least; species that shrink more also tend to swell more, and vice versa. 8. For specific gravity: (a) The specific gravity of treated specimens is larger than that of untreated specimens; this reversal results from the oven-dry volume of the specimens. (b) Between species there are differences: the specific gravity of Hymenaea is the largest, and that of pine sapwood is the smallest. 9. From this investigation, it is concluded that soaking wood in plain water before use, without special precautions, may bring more harmful results than not soaking. However, soaking wood in specially prepared solutions, such as salt water or solutions of dissolved inorganic matter, can help decrease shrinkage and swelling, checking, shake, and bow. If soaking wood in plain water does reduce defects, it would be because shrinking and swelling become more even across all dimensions.

  • PDF