• Title/Summary/Keyword: 작동 신뢰도 (operational reliability)

The Moderating Role of Need for Cognitive Closure and Temporal Self-Construal in Consumer Satisfaction and Repurchase Consistency (만족도와 재구매 간 관계에 있어서 상황적 영향의 조절효과에 관한 연구 - 인지 종결 욕구와 일시적 자아 해석의 조절효과를 중심으로 -)

  • Lee, Min Hoon; Ha, Young Won
    • Asia Marketing Journal / v.11 no.4 / pp.95-119 / 2010
  • Although there have been many studies regarding the inconsistency between consumers' attitudes and behavior, prior research has focused almost exclusively on the relationship between pre-behavior attitude and initial behavior. Relatively little research has examined consumer satisfaction after purchase and post-purchase behavior. This research proposes that the relationship between satisfaction and post-purchase behavior is moderated by consumers' psychological characteristics, namely need for cognitive closure (NCC) and temporal self-construal (SC). Need for cognitive closure refers to an individual's desire for a firm answer to a question and an aversion toward ambiguity. We treated NCC as a major moderating variable because the cognitive effort required clearly differs between repurchasing the same product and seeking a new alternative. Individuals who tend to close off cognition because of time constraints or unfavorable conditions may display considerable cognitive impatience or impulsivity and are more likely to repurchase the same product than consumers without such limitations: they avoid further consideration of new alternatives, so the likelihood of repurchasing the prior alternative increases. As hypothesized, a significant moderating effect of NCC was confirmed. This result carries a clear implication for companies establishing marketing strategies. For a company or brand that entered the market early and already occupies it, it would be effective to keep existing consumers' need for cognitive closure high, thereby preventing them from becoming interested in new alternatives. Conversely, new brands that have just entered the market need to lower potential consumers' need for cognitive closure so that those consumers become interested in new alternatives. Along with NCC, temporal self-construal also turned out to moderate the satisfaction-repurchase relationship. Temporal SC reflects the extent to which individuals view themselves either as an individuated entity or in relation to others. Consumers under a temporarily independent SC tend to repurchase the former alternative in line with their prior satisfaction and evaluation. In contrast, consumers under a temporarily interdependent SC tended to switch to a new alternative because they value interpersonal relationships above all else and rely heavily on in-group opinions; when confronted with additional opinions, they are highly likely to choose a new product instead. By demonstrating the impact of temporal self-construal on repurchase behavior, this study provides marketers with new criteria for successful promotional strategies. For example, if the buyer and the user of a product are the same person, it would be effective for the seller to encourage a temporarily independent self-construal so that the consumer decides subjectively. Conversely, a purchase may be made by an individual while the product is consumed by a group: a housewife, for instance, is more likely to choose the products or brands that her husband or children prefer rather than the ones she likes herself. In that case, emphasizing how the whole family can be satisfied and happy with the product would be effective for promoting repurchase.
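For readers who want to see how a moderating effect such as the satisfaction × NCC interaction reported above is conventionally tested, a minimal regression sketch follows. It uses synthetic data and assumed variable names; it is not the authors' analysis.

```python
# Illustrative only: testing a moderation (interaction) effect such as
# satisfaction x need-for-cognitive-closure (NCC) on repurchase intention.
# The data are synthetic; the paper's actual variables and model may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
satisfaction = rng.normal(0, 1, n)   # mean-centered satisfaction score
ncc = rng.normal(0, 1, n)            # mean-centered NCC score
# Simulate a positive interaction: satisfaction drives repurchase more
# strongly when NCC is high (high-NCC consumers avoid re-evaluation).
repurchase = (0.4 * satisfaction + 0.1 * ncc
              + 0.3 * satisfaction * ncc + rng.normal(0, 1, n))

df = pd.DataFrame({"repurchase": repurchase,
                   "satisfaction": satisfaction,
                   "ncc": ncc})
# The satisfaction:ncc coefficient is the moderation effect of interest.
model = smf.ols("repurchase ~ satisfaction * ncc", data=df).fit()
print(model.summary().tables[1])
```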

Association between Texture Analysis Parameters and Molecular Biologic KRAS Mutation in Non-Mucinous Rectal Cancer (원발성 비점액성 직장암 환자에서 자기공명영상 기반 텍스처 분석 변수와 KRAS 유전자 변이와의 연관성)

  • Sung Jae Jo; Seung Ho Kim; Sang Joon Park; Yedaun Lee; Jung Hee Son
    • Journal of the Korean Society of Radiology / v.82 no.2 / pp.406-416 / 2021
  • Purpose To evaluate the association between magnetic resonance imaging (MRI)-based texture parameters and Kirsten rat sarcoma viral oncogene homolog (KRAS) mutation in patients with non-mucinous rectal cancer. Materials and Methods Seventy-nine patients who had pathologically confirmed non-mucinous rectal adenocarcinoma, with or without KRAS mutation, and had undergone rectal MRI were divided into a training dataset (n = 46) and a validation dataset (n = 33). Texture analysis was performed on the axial T2-weighted images. The association was assessed with the Mann-Whitney U test. To extract an optimal cut-off value for predicting KRAS mutation, receiver operating characteristic (ROC) curve analysis was performed, and the cut-off value was then verified on the validation dataset. Results In the training dataset, skewness in the mutant group (n = 22) was significantly higher than in the wild-type group (n = 24) (0.221 ± 0.283 vs. -0.006 ± 0.178, p = 0.003). The area under the curve for skewness was 0.757 (95% confidence interval, 0.606 to 0.872), with a maximum accuracy of 71%, a sensitivity of 64%, and a specificity of 78%. None of the other texture parameters were associated with KRAS mutation (p > 0.05). When a cut-off value of 0.078 was applied to the validation dataset, it yielded an accuracy of 76%, a sensitivity of 86%, and a specificity of 68%. Conclusion Skewness was associated with KRAS mutation in patients with non-mucinous rectal cancer.
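The workflow above (group comparison, ROC-based cut-off extraction on the training set, verification on the validation set) can be sketched as follows. The data are synthetic, and the cut-off criterion (Youden's J) is an assumption, since the abstract does not state which criterion was used.

```python
# Illustrative only: derive a cut-off from a training ROC curve and
# verify it on a held-out validation set, mirroring the workflow above.
# Synthetic data; the study's real skewness values are not reproduced.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Training set: skewness for mutant (higher) vs. wild-type tumors.
train_mut = rng.normal(0.22, 0.28, 22)
train_wt = rng.normal(0.00, 0.18, 24)
x_train = np.concatenate([train_mut, train_wt])
y_train = np.concatenate([np.ones(22), np.zeros(24)])

# Group comparison, as in the paper (Mann-Whitney U test).
u, p = mannwhitneyu(train_mut, train_wt, alternative="two-sided")
print(f"Mann-Whitney U p-value: {p:.4f}")

# ROC analysis; pick the threshold maximizing Youden's J = TPR - FPR
# (an assumed criterion, common for "optimal cut-off" extraction).
fpr, tpr, thresholds = roc_curve(y_train, x_train)
cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"AUC: {roc_auc_score(y_train, x_train):.3f}, cut-off: {cutoff:.3f}")

# Apply the frozen cut-off to an independent validation set
# (the mutant/wild-type split here is assumed, not from the paper).
val_mut = rng.normal(0.22, 0.28, 14)
val_wt = rng.normal(0.00, 0.18, 19)
x_val = np.concatenate([val_mut, val_wt])
y_val = np.concatenate([np.ones(14), np.zeros(19)])
pred = (x_val >= cutoff).astype(int)
acc = (pred == y_val).mean()
sens = pred[y_val == 1].mean()
spec = 1 - pred[y_val == 0].mean()
print(f"validation: accuracy {acc:.2f}, sensitivity {sens:.2f}, "
      f"specificity {spec:.2f}")
```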

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created while operating computer systems, are used in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most of the log data generated in banking operations come from handling clients' business. Therefore, a separate log processing system is needed to gather, store, categorize, and analyze the log data generated while processing clients' business. However, in existing computing environments it is difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to build a cloud-based log processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage periods or rapid growth in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system can automatically restore itself and continue operating after a malfunction. Finally, by building a distributed database on the NoSQL database MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have rigid schemas that are ill-suited to unstructured log data; moreover, such strict schemas make it hard to add nodes and distribute stored data across them when the amount of data grows rapidly. NoSQL databases do not provide the complex computations that relational databases do, but they can easily scale out through node expansion as data volume grows; they are non-relational databases with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, a representative document-oriented database with a schema-free structure. MongoDB was chosen because its flexible schema makes unstructured log data easy to process, it supports flexible node expansion as data grow rapidly, and it provides an Auto-Sharding function that automatically expands storage (see the minimal pymongo sketch after this abstract).
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects the data, classifies them by log type, and distributes them to the MongoDB module and the MySQL module (a sketch of this routing step also follows the abstract). The log graph generator module generates the analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module, organized by analysis time and type of the aggregated log data, and presents them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log processing system that uses only MySQL, measuring log-insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified by evaluating MongoDB's log-insert performance over various chunk sizes.
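As a concrete illustration of the schema-free, document-oriented storage the abstract attributes to MongoDB, here is a minimal pymongo sketch. The connection string, database and collection names, and document fields are all assumptions, not details from the paper.

```python
# Illustrative only: storing heterogeneous (unstructured) log records in
# MongoDB, whose schema-free documents need no common column layout.
# Connection string, database/collection names, and fields are assumed.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["raw_logs"]

# Two log records with different shapes coexist in the same collection;
# a relational table would force both into one rigid schema.
logs.insert_one({
    "type": "transaction",
    "ts": datetime.now(timezone.utc),
    "branch": "seoul-01",
    "amount": 125000,
})
logs.insert_one({
    "type": "login",
    "ts": datetime.now(timezone.utc),
    "user_agent": "Mozilla/5.0",
    "success": True,
})

# A per-type count aggregation, the kind of summary the log graph
# generator module would plot. In a sharded deployment, MongoDB's
# Auto-Sharding would spread this collection across nodes.
pipeline = [{"$group": {"_id": "$type", "count": {"$sum": 1}}}]
for row in logs.aggregate(pipeline):
    print(row)
```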
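The log collector's classify-and-route step can be sketched in the same spirit. Which log types are treated as real-time (MySQL) versus bulk (MongoDB) is not specified in the abstract, so the routing rule below is an assumed example.

```python
# Illustrative only: a log collector that classifies incoming records by
# type and routes them to the real-time store (MySQL) or the bulk store
# (MongoDB), mirroring the module layout described above. The set of
# "real-time" types is an assumed example, not taken from the paper.
from typing import Any, Callable

REALTIME_TYPES = {"error", "fraud_alert"}  # assumed: need instant graphs

def make_collector(to_mysql: Callable[[dict[str, Any]], None],
                   to_mongodb: Callable[[dict[str, Any]], None]):
    """Return a collector that routes one log record per call."""
    def collect(record: dict[str, Any]) -> None:
        log_type = record.get("type", "unknown")
        if log_type in REALTIME_TYPES:
            to_mysql(record)     # served in real time by the graph module
        else:
            to_mongodb(record)   # aggregated per unit time, Hadoop-analyzed
    return collect

# Usage with stub sinks; in the real system these would be DB writers.
collector = make_collector(
    to_mysql=lambda r: print("MySQL <-", r),
    to_mongodb=lambda r: print("MongoDB <-", r),
)
collector({"type": "error", "msg": "timeout on transfer"})
collector({"type": "transaction", "amount": 99000})
```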