An extracorporeal life support (ECLS) system is a device for treating respiratory and/or heart failure, and many development and clinical trials have been conducted worldwide. Currently, a non-pulsatile blood pump is the standard for ECLS systems. Although a pulsatile blood pump is advantageous in physiologic terms, the high pressure generated in the circuit and the resultant blood cell trauma remain major concerns that make clinicians reluctant to use a pulsatile blood pump in artificial lung circuits containing a membrane oxygenator. This study was designed to evaluate the hypothesis that placing a pressure-relieving compliance chamber between a pulsatile pump and a membrane oxygenator might reduce these side effects while providing physiologic pulsatile blood flow. The study was performed in a canine model of oleic acid-induced acute lung injury (N=16). The animals were divided into three groups according to the type of pump used and the presence of the compliance chamber. In group 1, a non-pulsatile centrifugal pump was used as a control (n=6). In group 2 (n=4), a single-pulsatile pump was used. In group 3 (n=6), a single-pulsatile pump equipped with a compliance chamber was used. The experimental model was a partial bypass between the right atrium and the aorta at a pump flow of 1.8∼2 L/min for 2 hours. The observed parameters focused on hemodynamic changes, intra-circuit pressure, laboratory studies of blood profile, and blood cell trauma. In hemodynamics, the pulsatile groups 2 and 3 generated higher arterial pulse pressure (47
Calcium fluoride, created by topical fluoride application, serves as a reservoir of fluoride ion regulated by pH in the oral environment. Therefore, the amount of calcium fluoride and its maintenance play an important role in preventing dental caries. The aim of this study was to evaluate the effect of Nd:YAG laser irradiation on the generation of calcium fluoride and on the acid resistance of tooth enamel. Bovine anterior permanent teeth were prepared (n=276) and divided into the following groups: no treatment (control), fluoride application alone, laser irradiation alone, laser irradiation after fluoride application, and fluoride application after laser irradiation. Each group was subdivided by the application time of 1.23% acidulated phosphate fluoride (APF) (5 min and 30 min) and the irradiation energy of the Nd:YAG laser (
Virtual communities (VCs) have developed rapidly, with more and more people participating in them to exchange information and opinions. A virtual community is a group of people who may or may not meet one another face to face, and who exchange words and ideas through the mediation of computer bulletin boards and networks. A business-to-consumer virtual community (B2CVC) is a commercial group that creates a trustworthy environment intended to motivate consumers to be more willing to buy from an online store. B2CVCs create a social atmosphere through information contributions such as recommendations, reviews, and ratings of buyers and sellers. Although the importance of B2CVCs has been recognized, few studies have examined members' word-of-mouth behavior within these communities. This study proposes a model of involvement, satisfaction, trust, "stickiness," and word-of-mouth in a B2CVC and explores the relationships among these elements based on empirical data. The objectives are threefold: (i) to empirically test a B2CVC model that integrates measures of beliefs, attitudes, and behaviors; (ii) to better understand the nature of these relationships, specifically through word-of-mouth as a measure of revenue generation; and (iii) to better understand the role of B2CVC stickiness in CRM marketing. The model incorporates three key elements concerning community members: (i) their beliefs, measured in terms of their involvement assessment; (ii) their attitudes, measured in terms of their satisfaction and trust; and (iii) their behavior, measured in terms of site stickiness and their word-of-mouth. Involvement is considered the motivation for consumers to participate in a virtual community. For B2CVC members, information searching and posting have been proposed as the main purposes of their involvement.
Satisfaction has been regarded as an important indicator of a member's overall evaluation of the community, and it has been conceptualized at different levels of member interaction with the VC. The formation and expansion of a VC depend on the willingness of members to share information and services. Researchers have found that trust is a core component facilitating anonymous interaction in VCs and e-commerce, and trust-building in VCs has therefore been a common research topic. It is clear that the success of a B2CVC depends on the stickiness of its members to enhance purchasing potential. Opinions communicated and information exchanged between members may represent a type of written word-of-mouth; word-of-mouth is therefore one of the primary factors driving the diffusion of B2CVCs across the Internet. Figure 1 presents the research model and hypotheses. The model was tested through an online survey of CTrip Travel VC members. A total of 243 collected questionnaires were reduced to 204 usable questionnaires through data cleaning. The study's hypotheses examined the extent to which involvement, satisfaction, and trust influence B2CVC stickiness and members' word-of-mouth. Structural equation modeling was used to test the hypotheses, and the structural model fit indices were within accepted thresholds:
In smart education, the role of the digital textbook is very important as a face-to-face medium for learners. Standardization of digital textbooks will promote the industrialization of digital textbooks for content providers and distributors as well as for learners and instructors. This study explores ways to standardize digital textbooks around the following three objectives. (1) Digital textbooks should serve as media for blended learning that supports both on- and off-line classes, should operate on a common EPUB viewer without a special dedicated viewer, and should utilize the existing frameworks for e-learning content and learning management. The reason to consider EPUB as the standard for digital textbooks is that digital textbooks then do not need to specify another standard for the form of books, and can take advantage of the industrial base of the EPUB standard's rich content and distribution structure. (2) Digital textbooks should provide a low-cost open-market service built on currently available standard open software. (3) To provide appropriate learning feedback to students, digital textbooks should provide a foundation that accumulates and manages all learning-activity information according to a standard infrastructure for educational big data processing. In this study, the digital textbook in a smart education environment is referred to as the open digital textbook.
The components of the open digital textbook service framework are (1) digital textbook terminals such as smart pads, smart TVs, smartphones, and PCs; (2) a digital textbook platform to display and run digital content on those terminals; (3) a learning content repository, residing in the cloud, that maintains accredited learning content; (4) an app store through which learning content companies provide and distribute secondary learning content and learning tools; and (5) an LMS as a learning support/management tool that classroom teachers use to create instructional materials. In addition, locating all of the hardware and software that implement a smart education service in the cloud takes advantage of cloud computing for efficient management and reduced expense. The open digital textbook for smart education can be considered an e-book-style interface to the LMS for learners. In an open digital textbook, the representation of text, images, audio, video, equations, and so on is a basic function, but painting, writing, problem solving, and the like are beyond the capabilities of a simple e-book. Teacher-to-student, learner-to-learner, and team-to-team communication is also required through the open digital textbook. To represent student demographics, portfolio information, and class information, using the standards established in e-learning is desirable. To pass learner tracking information about the learner's activities to the LMS (Learning Management System), the open digital textbook must have a recording function and a function for communicating with the LMS. DRM is a function for protecting various copyrights. Currently, the DRM of an e-book is controlled by the corresponding book viewer. If the open digital textbook accommodates the various DRM standards used by different e-book viewers, the implementation of redundant features can be avoided.
Security/privacy functions are required to protect information about study or instruction from third parties. UDL (Universal Design for Learning) is a learning support function for those whose disabilities make learning courses difficult. The open digital textbook, which is based on the e-book standard EPUB 3.0, must (1) record learning-activity log information and (2) communicate with the server to support learning activities. While the recording and communication functions, which are not defined in the current standards, can be implemented in JavaScript and used in current EPUB 3.0 viewers, a strategy of proposing such recording and communication functions for the next generation of the e-book standard, or for a special standard (EPUB 3.0 for education), is needed. Future research will implement an open source program based on the proposed open digital textbook standard and present new educational services, including big data analysis.
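The recording and communication functions described above amount to capturing learning-activity events in a standard structure and sending them to the LMS. A minimal sketch in Python, loosely modeled on xAPI-style statements; the field names, identifiers, and record structure are illustrative assumptions, not part of any proposed standard:

```python
import json
from datetime import datetime, timezone

def make_activity_record(learner_id, verb, object_id):
    """Build a minimal learning-activity log entry.
    Field names (actor/verb/object) are assumed for illustration."""
    return {
        "actor": learner_id,       # hypothetical learner identifier
        "verb": verb,              # e.g. "opened", "answered", "highlighted"
        "object": object_id,       # the textbook page or problem touched
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def serialize_for_lms(records):
    """Serialize a batch of records for transmission to a hypothetical LMS endpoint."""
    return json.dumps({"statements": records})

record = make_activity_record("student-42", "answered", "ch3-problem-7")
payload = serialize_for_lms([record])
```

In an EPUB 3.0 viewer the same structure would be built in JavaScript and posted to the LMS; the point of the sketch is only that a standard record schema lets any viewer and any LMS interoperate.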
Corporate defaults affect not only stakeholders such as managers, employees, creditors, and investors of the bankrupt company, but also ripple through the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even afterward, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on certain main variables such as the 'debt ratio'. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables involved in corporate defaults vary over time. This is confirmed by Deakin's (1972) study which, following the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, the changing importance of predictive variables was likewise found for Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider the changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively.
In order to construct a consistent bankruptcy model across time, we first train a time series deep learning model using data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data covering the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and exhibits excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model, trained over nine years, is evaluated and compared using the test data (2009). In this way, the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis, the logit model), we show that the deep learning time series model based on these three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data suffer from 'nonlinear variables', 'multicollinearity' among variables, and 'lack of data'.
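The chronological partition and retraining procedure described above can be sketched as a simple rule on the fiscal year of each firm-year observation. This is a sketch under stated assumptions: the tuple layout and the sample data are hypothetical, standing in for the financial-ratio records used in the study.

```python
# Partition firm-year observations chronologically, as described:
# 2000-2006 train, 2007-2008 validation (financial-crisis period), 2009 test.
def split_by_year(observations):
    """Each observation is a (year, features, default_label) tuple; years are ints."""
    train = [o for o in observations if 2000 <= o[0] <= 2006]
    valid = [o for o in observations if 2007 <= o[0] <= 2008]
    test  = [o for o in observations if o[0] == 2009]
    return train, valid, test

# Hypothetical one-record-per-year sample; real data would hold many firms per year.
sample = [(year, {"debt_ratio": 0.5}, 0) for year in range(2000, 2010)]
train, valid, test = split_by_year(sample)

# After validation-based tuning, training and validation data are merged
# (2000-2008) for the final fit, and 2009 is held out for evaluation.
final_train = train + valid
```

Splitting by calendar year rather than at random is the point of the design: it prevents post-crisis information from leaking into models that are supposed to be fitted only on pre-crisis data.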
The logit model addresses nonlinearity, the Lasso regression model mitigates the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling, and it is also more effective in predictive power. In the course of the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
In this study, the formation background of biodiversity and its changes over geologic history, as well as the effects of climate change on biodiversity and humans, were discussed, and alternatives to reduce the effects of climate change were suggested. Biodiversity is 'the variety of life' and refers collectively to variation at all levels of biological organization. That is, biodiversity encompasses genes, species, and ecosystems and their interactions. It provides the basis for ecosystems and the services on which all people fundamentally depend. Nevertheless, today biodiversity is increasingly threatened, usually as a result of human activity. The diverse organisms on Earth, estimated at 10 to 30 million species, are the result of adaptation and evolution to various environments through the four-billion-year history since the birth of life. The countless organisms composing biodiversity each have specific characteristics and are interrelated through diverse relationships. The environment of the Earth on which we live has likewise been created over long ages through the extensive relationships and interactions of those organisms. We humans also live as organisms in interrelationship with other organisms; humans cannot live without them. Even so, in recent years human beings have accelerated the mean extinction rate to about 1,000 times that of the past. We have to conserve biodiversity for the plentiful life of our future generations and are responsible for its sustainable use. Korea has achieved faster economic growth than almost any other country in the world. On the other hand, Korea was originally rich in biodiversity, as it is not only a peninsula stretching from north to south but also surrounded by sea on three sides. But this biodiversity has increasingly disappeared in the process of rapid economic growth.
Koreans have created a distinctive culture by coexisting with nature through a long history of agriculture, forestry, and fishery. In recent years, however, the relationship between Koreans and nature has grown distant through the introduction of Western culture and the development of science and technology, and the distinctive natural landscape born from the harmonious combination of nature and culture is disappearing more and more. The population of Korea is expected to shrink, in contrast to the continuously growing world population. At this point, we need to restore the biodiversity damaged during rapid population growth and economic development, in concert with the recovery of natural ecosystems that population decline allows. There have been five mass extinction events since the birth of life on Earth. The modern extinction is very rapid, and human activity is its major cause; in these respects it is distinguished from past extinctions. Climate change is real, and biodiversity is very vulnerable to it. If organisms cannot find a way to survive in the changed environment, such as 'adaptation through evolution' or 'movement to another place where they can exist', they will go extinct. In this respect, if climate change continues, biodiversity will be greatly damaged. Furthermore, climate change will also influence human life and the socio-economic environment through changes in biodiversity. Therefore, we need to grasp more actively the effects of climate change on biodiversity and, further, to prepare alternatives to reduce the damage. Changes in phenology, changes in distribution range including vegetation shifts, disharmony in interactions among organisms, reduced reproduction and growth rates due to mismatched food chains, degradation of coral reefs, and so on have emerged as effects of climate change on biodiversity.
The expansion of infectious diseases, reduction of food production, changes in the cultivation ranges of crops, changes in fishing grounds and seasons, and so on appear as effects on humans. To solve the climate change problem, first of all, we need to mitigate climate change by reducing emissions of greenhouse gases. But even if we stopped those emissions now, climate change is expected to continue for the time being. In this respect, preparing adaptive strategies for climate change may be more realistic. Continuous monitoring to observe the effects of climate change on biodiversity, and the establishment of a monitoring system, must come before all else. Securing diverse ecological spaces where biodiversity can establish itself, assisted migration, and the establishment of horizontal (south-to-north) and vertical (lowland-to-upland) ecological networks are recommended as alternatives to aid the adaptation of biodiversity to the changing climate.
The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures in IT facilities are irregular because of interdependence, and it is difficult to identify their cause. Previous studies predicting failures in data centers treated each server as a single, independent state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, user errors, and so on. Since such failures can be prevented in the early stages of data center construction, various solutions have been developed. On the other hand, the causes of failures occurring within servers are difficult to determine, and adequate prevention has not yet been achieved. A key reason is that server failures do not occur in isolation: a failure can cause other server failures or be triggered by failures originating on other servers. In other words, while existing studies analyzed failures on the assumption of a single server that does not affect other servers, this study analyzes failures on the assumption that servers affect one another.
In order to define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes of that time is defined as a simultaneous failure. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed simultaneously within the constructed sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, a Hierarchical Attention Network deep learning model structure was used in consideration of the fact that each server contributes differently to a complex failure. This approach increases prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was modeled both as a single-server state and as a multiple-server state, and the results were compared. The second experiment improved prediction accuracy for the complex-server case by optimizing the threshold for each server.
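The simultaneity rule above, under which failures on different devices within 5 minutes are grouped into one complex-failure event, can be sketched as follows. One reasonable reading anchors the 5-minute window at the first failure of each event; the timestamps and device names are hypothetical.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def group_simultaneous(failures):
    """failures: list of (timestamp, device) pairs. Sort chronologically, then
    start a new event whenever the gap from the event's first failure exceeds
    the 5-minute window (one possible reading of the simultaneity rule)."""
    events, current = [], []
    for ts, device in sorted(failures):
        if current and ts - current[0][0] > WINDOW:
            events.append(current)
            current = []
        current.append((ts, device))
    if current:
        events.append(current)
    return events

log = [
    (datetime(2020, 1, 1, 3, 0), "server-A"),
    (datetime(2020, 1, 1, 3, 2), "server-B"),   # within 5 min of A: same event
    (datetime(2020, 1, 1, 9, 30), "server-C"),  # hours later: a new event
]
events = group_simultaneous(log)
```

Each resulting event is a sequence of co-failing devices; counting how often device pairs co-occur across events is what would drive the selection of the 5 most frequently co-failing devices.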
In the first experiment, which modeled both a single server and multiple servers, the single-server model predicted that three of the five servers had no failure even though failures actually occurred, whereas the multiple-server model correctly predicted failures on all five servers. This result supports the hypothesis that servers affect one another. The study confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that the effect of each server differs, improved the analysis. In addition, applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using these results.
Trust has been studied extensively in psychology, economics, and sociology, and its importance has been emphasized not only in marketing but in business disciplines in general. Unlike past relationships between suppliers and buyers, which took considerable advantage of private networks and sometimes involved unethical business practices, partnerships between suppliers and buyers are at the core of success in industrial marketing amid the intense global competition of the 21st century. A high level of mutual cooperation arises through exchange relationships based on trust, which bring long-term benefits, competitive enhancement, and transaction cost reductions, among other benefits, to both buyers and suppliers. In spite of the important role of trust, existing studies of buyer-supplier situations overlook that role and do not systematically analyze the effect of trust on relational performance. Consequently, an in-depth study of the relation of trust to relational performance between buyers and suppliers of business services is sorely needed. Business services, which in this study include those supporting the manufacturing industry, are drawing attention as an economic growth engine for the next generation, and the Korean government has selected business services as a strategic area for the development of the manufacturing sector. Since demands to open business services markets are becoming fiercer, the competitiveness of the business services industry must be promoted now more than ever. The purpose of this study is to investigate the effect of mutual trust between buyers and suppliers on relational performance. Specifically, this study proposes a theoretical model of trust and relational performance in transactions of business services, empirically tests the hypotheses derived from that framework, and suggests strategic implications based on the findings.
Empirical data were collected via multiple methods, including telephone, mail, and in-person interviews. The sample companies were knowledge-based companies supplying and purchasing business services in Korea. The present study collected data on a dyadic basis: each pair of sample companies includes a buying company and its corresponding supplying company, and mutual trust was traced for each pair. This study proposes a model of the trust-relational performance of buying and supplying business services, consisting of trust together with its antecedents and consequences. Buyers' trust is classified into trust toward the supplying company and trust toward salespersons; viewing trust at both the individual level and the organizational level follows Doney and Cannon (1997). Normally, buyers are treated as the subject of trust, but this study supposes that suppliers are subjects of trust as well. Since transactions are normally bilateral, suppliers' trust in buyers is as important as buyers' trust in suppliers, and the study accordingly adopts a bilateral perspective. Suppliers' trust is likewise influenced by the extent to which they trust the buying company and the individual buyers, classified at the individual and organizational levels in the same way. Trust affects the process of supplier selection, which works in a bilateral manner: suppliers are actively involved in the selection process, working closely with buyers, and the process is affected by the extent to which each party trusts its partner. The selection process consists of the following steps: recognition, information search, supplier selection, and performance evaluation.
As a result of the process, both buyers and suppliers evaluate the performance and take corrective actions on the basis of outcomes such as tangible, intangible, and/or side effects. The measurement of trust used in the present study was developed on the basis of Mayer, Davis, and Schoorman (1995) and Mayer and Davis (1999). Following their recommendations, the three dimensions of trust used in the study are ability, benevolence, and integrity. The original questions were adjusted to the context of transactions in business services. For example, a question such as "He/she has professional capabilities" was changed to "The salesperson showed professional capabilities while we talked about our products." The measurement used in this study differs from those used in previous studies (Rotter 1967; Sullivan and Peterson 1982; Dwyer and Oh 1987). The measurements of the antecedents and consequences of trust were developed on the basis of Doney and Cannon (1997), with the original questions again adjusted to the context of transactions in business services. In particular, questions were developed for both buyers and suppliers to address the following factors: reputation (integrity, customer care, goodwill), market standing (company size, market share, positioning in the industry), willingness to customize (product, process, delivery), information sharing (proprietary information, private information), willingness to maintain relationships, perceived professionalism, authority empowerment, buyer-seller similarity, and contact frequency. As the consequence variable of trust, relational performance was measured and classified into tangible effects, intangible effects, and side effects.
Tangible effects include financial performance; intangible effects include improvements in relations, network development, and internal employee satisfaction; side effects include those not covered by either the tangible or the intangible effects. Three hundred fifty pairs of companies were contacted, and one hundred ten pairs responded. After five pairs were deleted because of incomplete responses, one hundred five pairs were used for data analysis. The response ratio for the companies used in the analysis is 30% (105/350), which is above the average response ratio in industrial marketing research. As for the characteristics of the respondent companies, the majority operate service businesses, for both buyers (85.4%) and suppliers (81.8%). The majority of buyers (76%) deal with consumer goods, while the majority of suppliers (70%) deal with industrial goods. This may imply that buyers process incoming materials, parts, and components to produce finished consumer goods. Judging from the reported length of acquaintance with their partners, suppliers appear to have longer business relationships than buyers do. Hypothesis 1 tested the effects of buyer-supplier characteristics on trust. The salesperson's professionalism (t=2.070, p<0.05) and authority empowerment (t=2.328, p<0.05) positively affected buyers' trust toward suppliers. On the other hand, authority empowerment (t=2.192, p<0.05) positively affected suppliers' trust toward buyers. For both buyers and suppliers, the degree of authority empowerment plays a crucial role in maintaining trust in each other. Hypothesis 2 tested the effects of buyer-seller relational characteristics on trust. Buyers tend to trust suppliers when suppliers make every effort to maintain contact (t=2.212, p<0.05), a tendency that has been shown to be even stronger for suppliers (t=2.591, p<0.01).
On the other hand, suppliers trust buyers because suppliers perceive buyers as being similar to themselves (t=2.702, p<0.01). This finding confirms the results of Crosby, Evans, and Cowles (1990), who reported that suppliers and buyers build relationships through regular meetings, whether for business or personal matters. Hypothesis 3 tested the effects of trust on perceived risk. It was found that for both suppliers and buyers, the lower the trust, the higher the perceived risk (t=-6.621, p<0.01 for buyers; t=-2.437, p<0.05 for suppliers). Interestingly, this tendency was much stronger for buyers than for suppliers. One possible explanation for this higher level of perceived risk is that buyers normally perceive higher risks than do suppliers in transactions involving business services. For this reason, it is necessary for suppliers to implement risk-reduction strategies for buyers. Hypothesis 4 tested the effects of information searching on trust. It was found that for both suppliers and buyers, contrary to expectation, trust depends on the partner's reputation (t=2.929, p<0.01 for buyers; t=2.711, p<0.05 for suppliers). This finding shows that suppliers with good reputations tend to be trusted. Prior experience did not show any significant relationship with trust for either buyers or suppliers. Hypothesis 5 tested the effects of supplier/buyer selection on trust. Unlike buyers, suppliers tend to trust buyers when they consider previous transactions with those buyers important (t=2.913, p<0.01). However, this study did not show any significant relationship between source loyalty and buyers' trust in suppliers. Hypothesis 6 tested the effects of trust on relational performance. For both buyers and suppliers, financial performance reportedly improved when they trusted their partners (t=2.301, p<0.05 for buyers; t=3.692, p<0.01 for suppliers). It is interesting that this tendency was much stronger for suppliers than it was for buyers.
Similarly, competitiveness was reported to improve when buyers and suppliers trusted their partners (t=3.563, p<0.01 for buyers; t=3.042, p<0.01 for suppliers). For suppliers, efficiency and productivity were reportedly improved when they trusted buyers (t=2.673, p<0.01). Other performance indices showed insignificant relationships with trust. The findings of this study have some strategic implications. First and most importantly, trust-based transactions are beneficial for both suppliers and buyers. As verified in the study, financial performance can be improved through efforts to build and maintain mutual trust. Similarly, competitiveness can be increased through the same kinds of effort. Second, trust-based transactions can facilitate the reduction of perceived risks inherent in the purchasing situation. This finding has implications for both suppliers and buyers. It is generally believed that buyers perceive higher risks in a highly involved purchasing situation. To reduce risks, previous studies have recommended that suppliers devise risk-reducing tactics. Moving beyond these recommendations, the present study uniquely focused on the bilateral perspective of perceived risk. In other words, suppliers are also susceptible to perceived risks, especially when they supply services that require very technical and sophisticated manipulations and maintenance. Consequently, buyers and suppliers must solve problems together in close collaboration. Hence, mutual trust plays a crucial role in the problem-solving process. Third, as found in this study, the more authority a salesperson has, the more he or she can be trusted. This finding is very important with regard to tactics. Building trust is a long-term assignment; however, when mutual trust has not been developed, suppliers can overcome the problems they encounter by empowering a salesperson with the authority to make certain decisions. This finding applies to suppliers as well.
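Since the study reports many individual statistics, a short sketch can sanity-check the quoted figures. The sample counts below are taken from the text; the critical values 1.96 and 2.58 are an assumption on my part (conventional large-sample, two-tailed cutoffs), since the degrees of freedom of the tests are not reported:

```python
# Two quick consistency checks on the figures quoted in the study.
# The sample numbers come from the text; the cutoffs are assumed
# large-sample normal values, as degrees of freedom are not reported.

# 1) Sample accounting: 350 pairs contacted, 105 responded, 5 dropped.
contacted, responded, dropped = 350, 105, 5
usable = responded - dropped            # pairs used for analysis
response_ratio = responded / contacted  # 105/350

# 2) Reported t-values vs. assumed two-tailed critical values.
CRIT_05, CRIT_01 = 1.96, 2.58

def significance(t: float) -> str:
    """Return the strongest conventional label that |t| clears."""
    if abs(t) >= CRIT_01:
        return "p < .01"
    if abs(t) >= CRIT_05:
        return "p < .05"
    return "n.s."

reported = {  # path: t-value, as quoted in the text
    "professionalism -> buyer trust": 2.070,
    "authority empowerment -> buyer trust": 2.328,
    "contact frequency -> supplier trust": 2.591,
    "similarity -> supplier trust": 2.702,
    "trust -> perceived risk (buyers)": -6.621,
}

print(f"usable pairs: {usable}, response ratio: {response_ratio:.0%}")
for path, t in reported.items():
    print(f"{path}: t = {t:.3f}, {significance(t)}")
```

Under these assumed cutoffs, each reported t-value clears the significance level quoted for it in the text.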
1. The 'Kao Zheng Pai'(考證派) comes from the 'Zhe Zhong Pai'(折衷派) and is a school influenced by the Confucianism of the Qing dynasty. In Japan, Inoue Kinga(井上金峨) and Yoshida Koton(吉田篁墩) became its central members; the rise of the methodology of historical research(考證學) influenced the members of the 'Zhe Zhong Pai', and the trend of historical research spread from Confucianism to medicine, producing a school of medicine based on the study of texts and on proving the correctness of the classics. 2. Driven by 'Nei Qu Li'(內驅力), the 'Kao Zheng Pai', in the spirit of 'use Confucianism as the base', researched letters, meanings, and historical origins. Because they were influenced by the methodology of historical research(考證學) of the Qing era, they valued the evidential research of classic texts, and one branch, the 'Ru Xue Kao Zheng Pai'(儒學考證派), did only historical research. The 'Yi Xue Kao Zheng Pai'(醫學考證派) also appeared under the influence of Yoshida Koton and Kariya Ekisai(狩谷掖齋). 3. In the theories and views of the 'Kao Zheng Pai'(考證派), the 'Yi Xue Kao Zheng Pai' did not study medical scriptures like the "Huang Di Nei Jing"("黃帝內經") and did not do research on medicine-related areas like acupuncture, the meridians, and medicinal herbs. Since they were doctors who used medicines, they naturally based themselves on 'formulas'(方劑), and since their thought rested on historical ideologies, they valued the "Shang Han Za Bing Lun", revered as the 'ancestor of all formulas'(衆方之祖). 4. The lives of the important doctors of the 'Kao Zheng Pai', Meguro Dotaku(目黑道琢), Yamada Seichin(山田正珍), Yamada Kyoko(山田業廣), Mori Ritsi(森立之), and Kitamura Naohara(喜多村直寬), are as follows. 1) Meguro Dotaku(目黑道琢 1739