Transfer of Isolated Mitochondria to Bovine Oocytes by Microinjection
Journal of Life Science, v.27 no.12, pp.1445-1451, 2017
Mitochondria play a central role in energy generation through electron transport coupled with oxidative phosphorylation. They also participate in other important cellular functions, including metabolism, apoptosis, signaling, and reactive oxygen species production. Therefore, mitochondrial dysfunction is known to contribute to a variety of human diseases. Furthermore, there are various inherited diseases of energy metabolism due to mitochondrial DNA (mtDNA) mutations. Unfortunately, therapeutic options for these inherited mtDNA diseases are extremely limited. In this regard, mitochondrial replacement techniques are taking on increased importance in developing a clinical approach to inherited mtDNA diseases. In this study, green fluorescent protein (GFP)-tagged mitochondria were isolated by differential centrifugation from a mammalian cell line. Using a microinjection technique, the isolated GFP-tagged mitochondria were then transferred into bovine oocytes that had been triggered for early development. During the early developmental period from bovine oocytes to blastocysts, the transferred mitochondria were observed using fluorescence microscopy. The microinjected mitochondria dispersed rapidly into the cytoplasm of the oocytes and were passed down to the subsequent cells of the 2-cell, 4-cell, 8-cell, morula, and blastocyst stages. Together, these results demonstrate a successful in vitro transfer of isolated mitochondria into oocytes and provide a model for the mitochondrial replacement implicated in inherited mtDNA diseases and animal cloning.
The aim of this paper is to analyze the mechanics of price formation in tramp shipping. For the purpose of this study, the main characteristics of tramp freight rates and the market are examined, and a brief examination is given of the nature of the costs of operation, which are essential for understanding the functioning of shipping firms as well as developments in the tramp freight market. The demand and supply relationships in the market are also analysed in detail. Tramp shipping is an industry whose market functions under conditions not dissimilar to the theoretical model of perfect competition. However, this does not mean that the tramp shipping market is a perfectly competitive market. It is apparent that this real-world competitive system has its imperfections; the market for tramp shipping is merely near to being perfectly competitive on an international scale, and freight rates are therefore subject to the laws of supply and demand. In theory, the minimum freight rate in the short term is that at which the lowest-cost vessels will lay up in preference to operating, and is equal to the variable costs minus lay-up costs; this would imply that at all times, except those of full employment for ships, there is a tendency for newer, low-cost and probably faster vessels to drive the older, high-cost vessels into the breaker's yards. In practice, shipowners may be reluctant to lay up their ships because of obligations to crews, or because they would lose credibility with shippers or financiers, or simply because of lost prestige. Mainly, however, the decision is made on strictly economic grounds.
When, for example, the total operating costs minus the likely freight earnings are greater than the cost of taking the ship out of service, maintaining it, and recommissioning it, then a ship may be considered for laying-up; shipowners will, in other words, run their ships at freight earnings below operating costs by as much as the cost of laying them up. As described above, the freight rates fixed on the tramp shipping market are subject to the laws of supply and demand. In other words, the basic properties of supply and demand are of significance so far as price or rate fluctuations in the tramp freight market are concerned. In connection with the nature of the demand for tramp shipping services, the following points should be borne in mind: (a) the magnitude of demand for sea transport of dry cargoes in general, and for tramp shipping services in particular, is increasing in the long run; (b) owing to external factors, the demand for tramp shipping services is capable of varying sharply at a given point in time; (c) the demand for the industry's services tends to be price inelastic in the short run, whereas the demand for the services offered by the individual shipping firm tends, as a rule, to be infinitely price elastic. Meanwhile, the properties of the supply of tramp shipping facilities are that it cannot expand or contract in the short run and that, in the long run, there is a time lag between entrepreneurs' decision to expand their fleets and the actual delivery of the new vessels. Thus, supply is inelastic and incapable of responding to demand and price changes at a given period of time. In conclusion, it can safely be stated that short-run changes in freight rates are a direct result of variations in the magnitude of demand for tramp shipping facilities, whilst the average level of freight rates is brought down to relatively low levels over prolonged periods of time.
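The lay-up rule described above (run at a loss only up to the cost of laying up) can be sketched as a simple comparison. All figures below are hypothetical and chosen only to illustrate the decision; they do not come from the paper.

```python
# Hypothetical monthly figures (thousands of USD); invented for illustration.
operating_costs = 300.0   # total operating costs while trading
freight_earnings = 120.0  # likely freight earnings while trading
layup_cost = 150.0        # cost of taking out of service, maintaining,
                          # and recommissioning the ship (amortized per month)

# Loss incurred if the ship keeps trading
operating_loss = operating_costs - freight_earnings

# The rule: lay up when the operating loss exceeds the cost of laying up;
# otherwise keep trading even at freight earnings below operating costs.
decision = "lay up" if operating_loss > layup_cost else "keep trading"
print(decision)  # here: loss of 180 exceeds lay-up cost of 150
```

With these numbers the ship is laid up; had freight earnings been, say, 200, the loss of 100 would be below the lay-up cost and the ship would keep trading.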
Currently, our country operates gifted education only as a special curriculum, which results in many problems: there are few beneficiaries of gifted education, considerable time and effort are required of gifted students, and gifted students' educational needs are ignored during the operation of the regular curriculum. In order to solve these problems, and finding it advisable to conduct gifted education in elementary regular classrooms within the scope of the regular curriculum, the present study formulates the following research questions. A. To devise a teaching plan for gifted students in mathematics in the elementary school regular classroom. B. To develop a learning program for gifted students in the elementary school regular classroom. C. To apply an in-depth learning program to gifted students in mathematics and analyze its effectiveness. In order to answer these questions, a teaching plan for gifted students in mathematics was devised using a differentiated instruction type. This type was developed from literature reviews, primarily those on the characteristics of gifted students in mathematics and on teaching-learning models for gifted education. In order to instruct gifted students in mathematics in regular classrooms, an in-depth learning program was developed. The gifted students were selected through teachers' recommendations and an advanced placement test. Furthermore, the effectiveness of gifted education in mathematics and the feasibility of the differentiated instruction type in regular classrooms were determined by applying the in-depth learning program to the selected gifted students. To this end, the in-depth learning program developed in the present study was applied, through ten periods of instruction, to six gifted students in mathematics in one first-grade class of D Elementary School, located in Nowon-gu, Seoul.
Thereafter, learning outputs, math diaries, the teacher's checklist, interviews, and videotape recordings of the instruction were collected and analyzed. Based on the instruction research and data analysis stated above, the following results were obtained. First, it was possible to implement gifted education in mathematics using a differentiated instruction type in regular classrooms without incurring any significant difficulty for the teachers, the gifted students, or the non-gifted students. In particular, this instruction was effective for the gifted students in mathematics. Since gifted students have self-directed learning capability, the teacher can teach the gifted students individually or in a group while teaching lessons to the non-gifted students, and can take time to check the gifted students' learning and advise them while the non-gifted students are solving their problems. Second, an in-depth learning program connected with the regular curriculum was developed for the gifted students and was greatly effective in developing their mathematical thinking skills and creativity. The in-depth learning program held the interest of the gifted students and stimulated their mathematical thinking; it led to creative learning results and positively changed their attitude toward mathematics. Third, the gifted students with the most favorable results, selected through both teachers' recommendations and the advanced placement test, were more capable of self-directed learning and more task-committed, and they also showed favorable results in the in-depth learning program. Based on the foregoing study results, the conclusions are as follows. First, gifted education using a differentiated instruction type can be conducted for gifted students in mathematics in elementary regular classrooms. This type of instruction conforms to the characteristics of gifted students in mathematics and is greatly effective.
Since gifted students in mathematics have self-directed learning capabilities and task commitment, their mathematical thinking skills and creativity were enhanced during individual exploration and learning through an in-depth learning program under differentiated instruction. Second, when a differentiated instruction type is implemented, the number of beneficiaries of gifted education will increase, gifted students' and their parents' satisfaction with what the children are learning at school will increase, and teachers will gain a better understanding of gifted education. Third, an in-depth learning program for gifted students in mathematics in regular classrooms should conform to a teaching-learning model for gifted education and should include varied and creative contents that deepen the regular curriculum. Fourth, if an in-depth learning program is applied to gifted students in mathematics in regular classrooms, it can enhance their gifted abilities, change their attitude toward mathematics positively, and increase their creativity.
The purpose of this research was to analyze the contents of housing teaching-learning studies in Home Economics in secondary schools since 2001. The 22 studies, drawn from the database 'riss4u', were analyzed in terms of general information about the paper (institution and year, implementation and evaluation, subject of study and size) and the specific contents of the teaching-learning plans (theme, curricula and textbooks, method and number of lessons, resources). The results showed that most studies were reported during the 7th or the 2007 revised curriculum periods. All except one doctoral dissertation were master's theses from a few universities. In all studies, ranging from 2 to 15 lessons, the teaching-learning plans were implemented and evaluated in the researcher's own class, while some were applied in other schools as well. The themes of the teaching-learning plans varied but were concentrated on one of two content elements and two of six learning elements. The 2007 revised curriculum seems to be an important turning point, not only reinforcing the analyses of the curricula and textbooks in the analysis stage but also facilitating the use of various methods in the development stage. The practical problem-based model was the most frequently adopted, while cooperative learning and ICT served as fundamentals, although not always mentioned. Various teaching resources, such as UCC, reading materials, and PPT slides, were developed for the teacher; activity sheets were the most frequently used for the students, followed by reading materials. Because teaching-learning is an essential core of education, teaching-learning studies should be conducted more actively, and a greater variety of subject topics, methods, and resources should be pursued by more researchers.
This study was conducted to determine the effect of low air temperature and low light intensity at the early stage of growth on the yield and quality of tomato in Korea. In plastic greenhouses, low-temperature and low-temperature-with-shade treatments were applied from 17 to 42 days after transplanting. Tomato growing degree days decreased by 5.5% due to the cold treatment during the treatment period, and light intensity decreased by 74.7% due to shading. After the treatments commenced, plant growth was reduced by low temperature and low radiation, except for height. Analysis of the yield showed that the first harvest date was the same across treatments, but the yield of the control was 3.3 times that of the low-temperature-with-shade treatment. The cumulative yields at 87 days after transplanting were 1,734, 1,131, and 854 g per plant for the control, low-temperature, and low-temperature-with-shade treatments, respectively. The sugar content and acidity of the tomatoes did not differ among treatments or harvest seasons. To investigate the photosynthetic characteristics under each treatment, the CO2 response curve was analyzed using a biochemical model of the photosynthetic rate. The results showed that the maximum photosynthetic rate, J (electron transport rate), TPU (triose phosphate utilization), and Rd (dark respiration rate) did not differ with temperature but were reduced by shading, while Vcmax (maximum carboxylation rate) decreased under both low temperature and shade. These results indicate that low temperature and low light intensity at the early growth stage can inhibit early growth, although this may be recovered afterward; the yield was reduced by low temperature and low light intensity, and there was no difference in quality.
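The parameters named above (Vcmax, J, TPU, Rd) are those of the standard Farquhar-von Caemmerer-Berry biochemical model commonly fitted to CO2 (A-Ci) response curves. A minimal sketch of that model follows; the kinetic constants are typical 25 degree C literature values and the parameter values in the example are illustrative, not the study's estimates.

```python
def fvcb_assimilation(Ci, Vcmax, J, TPU, Rd,
                      Kc=404.9, Ko=278.4, O=210.0, gamma_star=42.75):
    """Net CO2 assimilation (umol m-2 s-1) as the minimum of the Rubisco-,
    RuBP-regeneration-, and TPU-limited rates (FvCB model).
    Ci in umol mol-1; Kc, Ko, gamma_star are typical values at 25 C."""
    Ac = Vcmax * (Ci - gamma_star) / (Ci + Kc * (1 + O / Ko))  # Rubisco-limited
    Aj = J * (Ci - gamma_star) / (4 * Ci + 8 * gamma_star)     # RuBP-limited
    Ap = 3 * TPU                                               # TPU-limited
    return min(Ac, Aj, Ap) - Rd

# Illustrative parameters (not the paper's fitted values)
print(round(fvcb_assimilation(Ci=300, Vcmax=60, J=120, TPU=8, Rd=1.5), 2))  # 13.78
```

Fitting these four parameters to measured A-Ci points is what allows the study to attribute the shading effect to reduced J, TPU, and Rd and the combined effect to reduced Vcmax.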
National forests have an advantage over private forests in terms of higher investment in capital, technology, and labor, allowing for more intensive management. As such, national forests are expected to serve not only as a strategic reserve of forest resources to address the long-term demand for timber but also to stably perform various essential forest functions demanded by society. However, most forest stands in the current national forests belong to the fourth age class or above, indicating an imminent timber harvesting period amid an imbalanced age class structure. Therefore, if timber harvesting is not conducted based on systematic management planning, it will become difficult to ensure the continuity of the national forests' diverse functions. This study was conducted to determine the optimal volume of timber production in the national forests to improve the age-class structure while sustainably maintaining their economic and public functions. To achieve this, the study first identified areas within the national forests suitable for timber production. Subsequently, a forest management planning model was developed using multi-objective linear programming, taking into account both the national forests' economic role and their public benefits. The findings suggest that approximately 488,000 hectares within the national forests are suitable for timber production. By focusing on management of these areas, it is possible to not only improve the age-class distribution but also to sustainably uphold the forests' public benefits. Furthermore, the potential volume of timber production from the national forests for the next 100 years would be around 2 million m3 per year, constituting about 44% of the annual domestic timber supply.
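A multi-objective linear program of the kind described is often handled by weighting the objectives into a single linear objective. The toy model below sketches that idea with SciPy; the stand areas, yields, benefit scores, and weights are invented and the real model's constraints are not reproduced.

```python
# Toy weighted-objective harvest-scheduling LP; all numbers are invented.
import numpy as np
from scipy.optimize import linprog

# Three stand groups: harvestable area (ha) and timber yield (m3/ha)
area = np.array([100.0, 150.0, 200.0])
timber_yield = np.array([250.0, 200.0, 150.0])
public_benefit = np.array([1.0, 1.5, 2.0])  # benefit score per retained ha

w_timber, w_public = 0.7, 0.3
# Decision variable x_i = hectares harvested in group i.
# Harvesting earns timber value but forfeits the retained-forest benefit.
c = -(w_timber * timber_yield - w_public * public_benefit)  # linprog minimizes

# Constraint: harvest at most 60% of the land base in this period.
A_ub = [np.ones(3)]
b_ub = [0.6 * area.sum()]
bounds = [(0.0, a) for a in area]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
harvest = res.x  # hectares to harvest per group
```

Under these invented numbers the solver harvests the two highest-value groups fully and fills the remaining area budget from the third, illustrating how the weighted objective trades timber production against public benefits.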
Digital convergence means integration between industry, technology, and contents; in marketing, it usually accompanies the creation of new types of products and services on the basis of digital technology, as digitalization progresses in electro-communication industries including the telecommunication, home appliance, and computer industries. One can see digital convergence not only in instruments such as PCs, AV appliances, and cellular phones, but also in the contents, networks, and services required for the production, modification, distribution, and re-production of information. Convergence in contents started around 1990; convergence in networks and services began as broadcasting and telecommunication integrated, and DMB (digital multimedia broadcasting), born in May 2005, is the symbolic icon of this trend. There are both positive and negative expectations about DMB. The reason two opposite expectations exist is that DMB did not come out of customers' needs but out of technology development; therefore, customers might have a hard time interpreting the real meaning of DMB. Time is quite critical to a high-tech product like DMB, because another product with the same function from a different technology can replace the existing product within a short period of time. If DMB is not positioned well in customers' minds quickly, other products such as WiBro, IPTV, or HSDPA could replace it before it even spreads out. Therefore, a positioning strategy is critical for the success of DMB. To make a correct positioning strategy, one needs to understand how consumers interpret DMB and how consumers' interpretation can be changed via communication strategy. In this study, we investigate how consumers perceive a new product like DMB and how AD strategy changes consumers' perception. More specifically, the paper segments consumers into sub-groups based on their DMB perceptions and compares their characteristics in order to understand how they perceive DMB.
Then, we expose them to different printed ADs whose messages guide consumers to think of DMB in specific ways, either as a cellular phone or as a personal TV. Research Question 1: Segment consumers according to their perceptions of DMB and compare the characteristics of the segments. Research Question 2: Compare perceptions of DMB after an AD that induces categorization of DMB in a given direction for each segment. If one can understand and predict the direction in which consumers will perceive a new product, a firm can select target customers easily. We segment consumers according to their perception and analyze their characteristics in order to find variables that can influence perceptions, such as prior experience, usage, or habit. Marketing people can then use these variables to identify target customers and predict their perceptions. If one knows how customers' perception is changed via AD messages, a communication strategy can be constructed properly. In particular, information from segmented customers helps to develop an efficient AD strategy for each segment with its prior perception. The research framework consists of two measurements and one treatment, O1 X O2. The first observation collects information about consumers' perceptions and their characteristics. Based on the first observation, the paper segments consumers into two groups: one group perceives DMB as similar to a cellular phone, and the other group perceives DMB as similar to a TV. We then compare the characteristics of the two segments in order to find out why they perceive DMB differently. Next, we expose subjects to two kinds of AD: one AD describes DMB as a cellular phone, and the other describes DMB as a personal TV. When the two ADs are exposed to subjects, each subject's prior perception of DMB, i.e., whether the subject belongs to the 'similar-to-cellular-phone' segment or the 'similar-to-TV' segment, is not yet known; however, we analyze the AD's effect separately for each segment. In the research design, the final observation investigates the AD effect.
Perception before the AD is compared with perception after the AD; comparisons are made for each segment and for each AD. For the segment that perceives DMB as similar to a TV, the AD that describes DMB as a cellular phone could change the prior perception, while the AD that describes DMB as a personal TV could reinforce it. For data collection, subjects were selected from undergraduate students, because they have basic knowledge about most digital equipment and an open attitude toward new products and media. The total number of subjects is 240. In order to measure perception of DMB, we use an indirect measurement: comparison with other similar digital products. To select similar digital products, we pre-surveyed students and finally selected the PDA, Car-TV, cellular phone, MP3 player, TV, and PSP. A quasi-experiment was conducted in several classes with the instructors' permission. After a brief introduction, prior knowledge, awareness, and usage of DMB as well as the other digital instruments were asked, and their similarities and perceived characteristics were measured. Then, two kinds of manipulated color-printed ADs were distributed, and the similarities and perceived characteristics of DMB were re-measured. Finally, purchase intention, AD attitude, manipulation checks, and demographic variables were asked. Subjects were given a small gift for participation. The stimuli are color-printed advertisements; their actual size is A4, and they were made after several pre-tests with AD professionals and students. As a result, consumers are segmented into two subgroups based on their perceptions of DMB. The similarity measure between DMB and the cellular phone and the similarity measure between DMB and the TV are used to classify consumers. If a subject's first measure is less than the second, she is classified into segment A, which is characterized as perceiving DMB like a TV; otherwise, she is classified into segment B, which perceives DMB like a cellular phone.
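The segment-assignment rule just described can be written as a two-line function. The ratings below are invented, and we assume (as an illustration only) a scale on which a smaller first measure means the subject sees DMB as less like a cellular phone than like a TV.

```python
# Sketch of the classification rule from the study design; ratings invented.
def assign_segment(sim_to_phone: float, sim_to_tv: float) -> str:
    """Segment A perceives DMB like a TV (first measure < second measure);
    everyone else falls into segment B (perceives DMB like a cellular phone)."""
    return "A" if sim_to_phone < sim_to_tv else "B"

# Three hypothetical subjects: (similarity to phone, similarity to TV)
subjects = [(2.0, 6.0), (6.5, 3.0), (4.0, 4.0)]
segments = [assign_segment(p, t) for p, t in subjects]
print(segments)  # ties fall into segment B under this rule
```

Note that the rule sends exact ties to segment B; the abstract does not say how ties were handled, so this is an assumption of the sketch.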
Discriminant analysis on these groups with their characteristics of usage and attitude shows that segment A knows much about DMB and uses many digital instruments, whereas segment B, which thinks of DMB as a cellular phone, does not know DMB well and is not familiar with other digital instruments. Thus, consumers with higher knowledge perceive DMB as similar to a TV, because the launch advertising for DMB led consumers to think of DMB as a TV, while consumers with less interest in digital products do not know the DMB ADs well and therefore think of DMB as a cellular phone. In order to investigate perceptions of DMB as well as the other digital instruments, we apply Proxscal analysis, a multidimensional scaling technique, in the SPSS statistical package. In the first step, subjects are presented with the 21 pairs of the 7 digital instruments and evaluate similarity judgments on a 7-point scale; for each segment, the similarity judgments are averaged and a similarity matrix is made. Second, Proxscal analyses of segments A and B are performed. In the third stage, similarity judgments between DMB and the other digital instruments are obtained after AD exposure. Lastly, the similarity judgments of groups A-1, A-2, B-1, and B-2 are labeled 'after DMB' and put into the matrix made in the first stage; Proxscal analysis is then applied to these matrices to check the positional difference between DMB and 'after DMB'. The results show that the map of segment A, which perceives DMB as similar to a TV, positions DMB closer to the TV than to the cellular phone, as expected, and the map of segment B, which perceives DMB as similar to a cellular phone, positions DMB closer to the cellular phone than to the TV, as expected. The stress values and R-squares are acceptable. The changes after the stimuli (the manipulated advertising) show that the AD shifts the DMB perception toward the cellular phone when the cellular-phone-like AD is exposed, and that the DMB position moves toward the Car-TV, the more personalized product, when the TV-like AD is exposed. This holds consistently for both segments A and B.
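The Proxscal step above embeds products in a low-dimensional map from a (dis)similarity matrix. A minimal analogue can be sketched with scikit-learn's metric MDS; the four products and the dissimilarity values below are invented stand-ins for the study's averaged 7-point judgments over 7 products.

```python
# Sketch of the MDS embedding step (analogous to SPSS Proxscal);
# products and dissimilarities are invented for illustration.
import numpy as np
from sklearn.manifold import MDS

products = ["DMB", "CellPhone", "TV", "MP3"]
# Symmetric dissimilarities (0 on the diagonal); DMB is rated
# much closer to the cellular phone than to the TV in this toy data.
D = np.array([
    [0.0, 2.0, 5.0, 4.0],
    [2.0, 0.0, 6.0, 3.0],
    [5.0, 6.0, 0.0, 5.0],
    [4.0, 3.0, 5.0, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)  # one 2-D point per product

# On the resulting map, DMB should land closer to CellPhone than to TV,
# mirroring the input dissimilarities.
d_phone = np.linalg.norm(coords[0] - coords[1])
d_tv = np.linalg.norm(coords[0] - coords[2])
```

Comparing such maps before and after AD exposure, as the study does with its 'after DMB' rows, amounts to checking how the DMB point moves relative to the anchor products.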
Furthermore, the paper applies correspondence analysis to the same data and finds almost the same results. The paper answers its two main research questions: first, perception of a new product is formed mainly from prior experience; second, ADs are effective in changing and reinforcing perception. In addition, we extend perception change to purchase intention: purchase intention is high when the AD reinforces the original perception, and the AD that presents DMB as a TV produces the lowest intention. This paper has limitations and issues to be pursued in the near future. Methodologically, the current methodology cannot provide a statistical test of the perceptual change, since classical MDS models like Proxscal and correspondence analysis are not probability models; a new probabilistic MDS model for testing hypotheses about configurations needs to be developed. Next, the advertising messages need to be developed more rigorously from theoretical and managerial perspectives. The experimental procedure could also be improved for more realistic data collection; for example, web-based experiments, real product stimuli, and multimedia presentations could be employed, or products could be displayed together in a simulated shop. In addition, demand and social-desirability threats to internal validity could influence the results; to handle these threats, the results of the model-intended advertising could be compared with those of other "pseudo" advertising. Furthermore, one could try various levels of innovativeness in order to check whether they make any difference in the results (cf. Moon 2006). Finally, if one can create a hypothetical product that is genuinely innovative and new for research purposes, it helps to create a blank initial impression and thus to study how impressions are formed in a more rigorous way.
Over the past decade, there has been a rapid diffusion of technological devices and a rising number of various devices, resulting in an escalation of virtual reality technology. The technological market has rapidly shifted from smartphones to wearable devices based on virtual reality. Virtual reality can make users feel a real situation through sensing interaction, voice, motion capture, and so on. Facebook, Google, Samsung, LG, Sony, and others have invested in developing virtual reality platforms, and the prices of virtual reality devices have also decreased by about 30% since their launch. Thus, the market infrastructure for virtual reality has rapidly developed to create a marketplace. However, most consumers recognize that virtual reality devices are not easy to purchase or use, which has not led consumers to a positive attitude toward the devices or to purchasing them in the early market. Among previous studies related to virtual reality, few have focused on why virtual reality devices have remained at an early stage in an adoption-and-diffusion context in the market. Most previous studies considered the reasons innovative products are hard to adopt from the viewpoints of the typology of innovation resistance, MIR (management of innovation resistance), and UTAUT and UTAUT2. However, product-based antecedents are also important for increasing user intention to purchase and use products in the technological market. In this study, we focus on user acceptance and resistance in order to promote the purchase and usage of wearable virtual reality devices based on headgear products like the Galaxy Gear. In particular, we added variables such as attitude confidence as a dimension of user resistance. The research questions of this study are as follows. First, how do attitude confidence and innovativeness resistance affect user intention to use? Second, what factors related to the content and brand contexts can affect user intention to use?
This research collected data from participants in South Korea, aged from their 20s to their 50s, who have experience using virtual reality headgear. In order to collect data, this study used a pilot test, and through face-to-face interviews with three specialists, the face validity and content validity of the questionnaire were evaluated. In cleansing the data, we dropped outliers and irrelevant responses; in total, 156 responses were used to test the suggested hypotheses. With the collected data, demographics and the relationships among variables were analyzed through structural equation modeling with PLS. Of the respondents, 86 (55.1%) were male and 70 (44.9%) were female; the ages of the respondents were mostly in the 20s (74.4%) and 30s (16.7%), and 126 respondents (80.8%) had used virtual reality devices. The results of our model estimation are as follows. With the exception of Hypotheses 1 and 7, which deal with the two relationships from brand awareness to attitude confidence and from quality of content to perceived enjoyment, all of our hypotheses were supported. In line with our hypotheses, perceived ease of use (H2) and use innovativeness (H3) were supported, with positive influences on attitude confidence. This finding indicates that the greater the ease of use and innovativeness of the devices, the more users' attitude confidence increased. Perceived price (H4), enjoyment (H5), and quantity of contents (H6) significantly affected user resistance: perceived price positively affected user innovativeness resistance, whereas perceived enjoyment and quantity of contents negatively affected it. In addition, aesthetic exterior (H6) was positively associated with perceived price (p<0.01), and projection quality (H8) increased perceived enjoyment (p<0.05).
Finally, attitude confidence (H10) increased user intention to use virtual reality devices, whereas user resistance (H11) negatively affected it. The findings of this study show that attitude confidence and user innovativeness resistance influence customer intention to use virtual reality devices in different ways. There are two distinct antecedents of attitude confidence: perceived ease of use and use innovativeness. This study also identified antecedents with different roles for perceived price (aesthetic exterior) and perceived enjoyment (quality of contents and projection quality). The findings indicate that brand awareness and quality of contents for virtual reality have not yet been established in the virtual reality market; therefore, firms should develop brand awareness for their products in the virtual reality market to increase market share.
Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction, and the risk of negative consequences from providing such information. More recently, the frequent disclosure of private information has raised concerns about privacy and its impacts. This has motivated researchers in various fields to explore information privacy issues to address these concerns. Accordingly, the necessity for information privacy policies and technologies for collecting and storing data, and information privacy research in various fields such as medicine, computer science, business, and statistics, has increased. The occurrence of various information security accidents has made finding experts in the information security field an important issue. Objective measures for finding such experts are required, as the process is currently rather subjective. Based on social network analysis, this paper focused on a framework to evaluate the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially collecting about 2,000 papers covering the period between 2005 and 2013. Outliers and the data of irrelevant papers were dropped, leaving 784 papers to test the suggested hypotheses. The co-authorship network data for co-author relationships, publisher, affiliation, and so on were analyzed using social network measures including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported.
In line with our hypotheses, degree centrality (H1) was supported, with a positive influence on the researchers' publishing performance (p<0.001). This finding indicates that as the degree of cooperation increased, so did the publishing performance of researchers. In addition, closeness centrality (H2) was also positively associated with researchers' publishing performance (p<0.001), suggesting that as the efficiency of information acquisition increased, so did researchers' publishing performance. This paper identified the differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field. The co-authorship network for information privacy can aid in understanding the deep relationships among researchers. In addition, by extracting the characteristics of publishers and affiliations, this paper suggested an understanding of the social network measures and their potential for finding experts in the information privacy field. Social concerns about securing the objectivity of experts have increased, because experts in the information privacy field frequently participate in political consultation, business education support, and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field, and is useful for people who are in charge of managing research human resources. This study has some limitations, providing opportunities and suggestions for future research. The difference in information diffusion according to media and proximity is difficult to generalize due to the small sample size. Therefore, further studies could consider an increased sample size and media diversity, and the differences in information diffusion according to media type and information proximity could be explored in more detail.
Moreover, previous network research has commonly assumed a causal relationship between the independent and dependent variables (Kadushin, 2012). In this study, degree centrality as an independent variable might have a causal relationship with performance as a dependent variable. In network analysis research, however, network indices can be computed only after the network relationships have formed. An annual analysis could help mitigate this limitation.
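The centrality and structural-hole measures behind H1–H3 can be computed directly on a co-authorship graph. A sketch with networkx on a small hypothetical network (the node names and edges are illustrative, not the study's data):

```python
import networkx as nx

# Hypothetical toy co-authorship graph: researcher A collaborates widely,
# while E is reachable only through D.
G = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")])

degree = nx.degree_centrality(G)        # direct collaboration ties (H1)
closeness = nx.closeness_centrality(G)  # efficiency of reaching others (H2)
eigen = nx.eigenvector_centrality(G)    # ties to well-connected others (H3)

# Burt's constraint is the usual structural-hole measure: lower constraint
# means the researcher brokers between otherwise-disconnected partners.
constraint = nx.constraint(G)

# A holds the most direct ties, so A tops degree centrality.
assert max(degree, key=degree.get) == "A"
```

Regressing publishing counts on these per-researcher indices is then an ordinary regression problem, which is how positive coefficients such as those reported for H1 and H2 would be estimated.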
1. Introduction

Today the Internet is recognized as an important channel for transacting products and services. According to data from the National Statistical Office, online transactions in 2007 totaled 15.7656 trillion won, a 17.1% (2.3060 trillion won) increase over the previous year; of this, B2C transactions increased 12.0% to 10.2258 trillion won. Because the entry barrier to Korea's online market is low, many retailers can easily enter it: the bigger the market grows, the tougher its competition becomes. Particularly owing to the Internet and IT innovation, the existing market has changed into a nearly perfectly competitive one (Srinivasan, Rolph & Kishore, 2002). In the early years of online business, a moderate price was thought to be the main reason for success, but tough competition has awakened firms to the importance of online service quality. If customers cannot be sure that a Web site will provide what they want, or cannot trust the products they have already bought there, they doubt its viability (Parasuraman, Zeithaml & Malhotra, 2005). Customers can reserve and issue air tickets directly, irrespective of place and time, at the Web sites of travel agencies or airlines, but empirical studies of these reservation and ticketing Web sites are insufficient. This study therefore pursues the following specific objectives. The first is to measure the service quality and service recovery of Web sites for reserving and issuing air tickets. The second is to examine whether this online service quality and online service recovery affect overall service quality. The third is to investigate the relationship between overall service quality and customer satisfaction, and between customer satisfaction and loyalty intention.

2.
Theoretical Background

2.1 On-line Service Quality

Barnes & Vidgen (2000; 2001a; 2001b; 2002) developed a tool for measuring Web site quality, WebQual, refining it over four iterations. In the first step, WebQual 1.0 provided measurement items for information quality based on QFD and was validated with students of a UK business school. In the second step, WebQual 2.0 was developed for interaction quality and was evaluated by customers of an online bookshop. In the third step, WebQual 3.0 consolidated WebQual 1.0 (information quality) and WebQual 2.0 (interaction quality); it comprised three quality dimensions, information quality, interaction quality, and site design, and was assessed and confirmed on auction sites (eBay, Amazon, QXL). Subsequently, drawing on these empirical studies, the authors replaced site design with usability, judging usability to be a concept of how customers interact with and perceive Web sites, and one widely used in assessing them. Through this process WebQual 4.0 was developed, consisting of three quality dimensions (information quality, interaction quality, and usability) measured by 22 items. However, because WebQual 4.0 focuses on the technical side, it is useful for evaluating Web site design but not for evaluating the pleasantness of the Web site experience. Parasuraman, Zeithaml & Malhotra (2002; 2005) developed measures of online service quality in 2002 and 2005. The 2002 study divided online service quality into five dimensions, but these were not well organized and required a thorough reworking. Parasuraman, Zeithaml & Malhotra (2005) therefore revised the online service quality measure based on the 2002 study and developed E-S-QUAL. After constructing a preliminary measure, they surveyed customers who had purchased at amazon.com and walmart.com and reassessed it.
They thereby completed E-S-QUAL, consisting of 22 items across four dimensions: efficiency, system availability, fulfillment, and privacy. Efficiency measures ease of access to and use of the site; system availability measures the correct technical functioning of the site; fulfillment measures the promptness of product delivery and the sufficiency of stock; and privacy measures the degree to which customer data are protected.

2.2 Service Recovery

Service industries try to minimize losses by coping with service failures promptly. These responses of service providers to service failure constitute service recovery (Kelly & Davis, 1994). Bitner (1990) studied, from the customer's viewpoint, how service providers' behavior at the point of service leads customers to perceive satisfaction or dissatisfaction. According to that work, managing service failure successfully requires exact recognition of the service problem, an apology, a sufficient explanation of the failure, and some tangible compensation. Parasuraman, Zeithaml & Malhotra (2005) approached service recovery from the standpoint of measurement rather than management, moved the setting from the off-line to the online market, and developed E-RecS-QUAL, a tool for measuring online service recovery.

2.3 Customer Satisfaction

Definitions of customer satisfaction fall into two points of view. The first approaches customer satisfaction as an outcome of consumption. Howard & Sheth (1969) defined satisfaction as 'a cognitive state of feeling adequately or inadequately rewarded for one's sacrifice,' and Westbrook & Reilly (1983) defined customer satisfaction/dissatisfaction as 'a psychological reaction to the pattern of shopping and purchasing behavior, the display condition of the retail store, and the outcome of purchased goods and services as well as the market as a whole.' The second approaches customer satisfaction as a process.
Engel & Blackwell (1982) defined satisfaction as 'an assessment that the chosen alternative is consistent with prior beliefs about it,' and Tse & Wilton (1988) defined customer satisfaction as 'a customer's reaction to the discrepancy between advance expectation and ex post facto outcome.' In this view, customer satisfaction as a process centers on comparing and assessing what consumers expect against the outcome of consumption. Unlike the outcome-oriented approach, the process-oriented approach has many advantages. Because it deals with the customer's whole consumption experience, it can examine the main process by measuring, one by one, each factor that plays an essential role at each step, and it enables us to examine the perceptual and psychological processes that form customer satisfaction. Because of these advantages, many studies now adopt the process-oriented approach (Yi, 1995).

2.4 Loyalty Intention

Loyalty has been studied under behavioral, attitudinal, and composite approaches (Dekimpe et al., 1997). Early studies defined loyalty in behavioral terms: behavioral approaches regard customer loyalty as 'a tendency to purchase periodically within a certain period of time at a specific retail store.' But because behavioral approaches focus only on the outcome of customer behavior, some have pointed out the limitation that the customer's decision-making situation and process are neglected (Enis & Paul, 1970; Raj, 1982; Lee, 2002). Attitudinal approaches were therefore suggested. Attitudinal approaches consider loyalty to contain cognitive, emotional, and volitional factors (Oliver, 1997) and define customer loyalty as 'favorable attitudes toward a specific retail store.' These attitudinal approaches can explain how customer loyalty forms and changes, but they cannot say positively whether it translates into actual future purchasing.
This is a shortcoming (Oh, 1995).

3. Research Design

3.1 Research Model

Based on the objectives of this study, a research model was derived. In the mediation test, Steps 1 and 2 must be significant, and at Step 3 both the mediating variable and the independent variable must have significant effects on the dependent variable; to establish a partial mediation effect, the independent variable's explanatory power at Step 3 (standardized coefficient, β) must also be smaller than at Step 1.
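The three-step mediation test described above (significant Steps 1 and 2, plus a reduced β for the independent variable at Step 3) can be sketched with simulated data. The variable names and effect sizes below are illustrative assumptions, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated constructs standing in for the paper's variables:
# X = overall service quality, M = customer satisfaction (mediator),
# Y = loyalty intention. Effect sizes are arbitrary for illustration.
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.5, size=n)             # Step 2: X -> M
Y = 0.4 * X + 0.5 * M + rng.normal(scale=0.5, size=n)   # partial mediation

def beta(y, *preds):
    """OLS standardized coefficients of y on the given predictors."""
    Z = np.column_stack([(p - p.mean()) / p.std() for p in preds])
    ys = (y - y.mean()) / y.std()
    coef, *_ = np.linalg.lstsq(Z, ys, rcond=None)
    return coef

b1 = beta(Y, X)[0]           # Step 1: total effect of X on Y
b2 = beta(M, X)[0]           # Step 2: effect of X on M
b3_x, b3_m = beta(Y, X, M)   # Step 3: X and M predicting Y together

# Partial mediation: X's standardized beta shrinks at Step 3
# but remains positive, while the mediator's beta is significant.
assert 0 < b3_x < b1 and b3_m > 0 and b2 > 0
```

In practice each construct would be a survey-scale score and the significance of each β would be checked with its t-statistic; the sketch only shows the shrinking-coefficient logic of the test.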