• Title/Summary/Keyword: Information process


Analysis of the Influence of Role Models on College Students' Entrepreneurial Intentions: Exploring the Multiple Mediating Effects of Growth Mindset and Entrepreneurial Self-Efficacy (대학생 창업의지에 대한 롤모델의 영향 분석: 성장마인드셋과 창업자기효능감의 다중매개효과를 중심으로)

  • Jin Soo Maing;Sun Hyuk Kim
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.5
    • /
    • pp.17-32
    • /
    • 2023
  • The entrepreneurial activities of college students play a significant role in modern economic and social development, particularly as a solution to the changing economic landscape and youth unemployment issues. Introducing innovative ideas and technologies into the market through entrepreneurship can contribute to sustainable economic growth and social value. Additionally, the entrepreneurial intentions of college students are shaped by various factors, making it crucial to deeply understand and appropriately support these elements. To this end, this study systematically explores the importance and impact of role models through a multiple serial mediation analysis. Through a survey of 300 college students, the study analyzed how two psychological variables, growth mindset and entrepreneurial self-efficacy, mediate the influence of role models on entrepreneurial intentions. The presence and success stories of role models were found to enhance the growth mindset of college students, which in turn boosts their entrepreneurial self-efficacy and ultimately strengthens their entrepreneurial intentions. The analysis revealed that exposure to role models significantly influences the formation of a growth mindset among college students. This mindset fosters a positive attitude towards viewing challenges and failures in entrepreneurship as learning opportunities. Such a mindset further enhances entrepreneurial self-efficacy, thereby strengthening the intention to engage in entrepreneurial activities. This research offers insights by integrating various theories, such as mindset theory and social learning theory, to deeply understand the complex process of forming entrepreneurial intentions. Practically, this study provides important guidelines for the design and implementation of college entrepreneurship education. 
Utilizing role models can significantly enhance students' entrepreneurial intentions, and educational programs can strengthen students' growth mindset and entrepreneurial self-efficacy by sharing entrepreneurial experiences and knowledge through role models. In conclusion, this study provides a systematic and empirical analysis of the various factors and their complex interactions that impact the entrepreneurial intentions of college students. It confirms that psychological factors like growth mindset and entrepreneurial self-efficacy play a significant role in shaping entrepreneurial intentions, beyond mere information or technical education. This research emphasizes that these psychological factors should be comprehensively considered when developing and implementing policies and programs related to college entrepreneurship education.
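The serial mediation path described above (role model exposure → growth mindset → entrepreneurial self-efficacy → entrepreneurial intention) can be sketched with ordinary least squares regressions, as in standard multiple serial mediation analysis. The data below are simulated purely for illustration; the effect sizes and variable values are hypothetical, not the study's survey data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # sample size matching the study's survey

# Simulated illustrative data (hypothetical, not the study's data):
# X = role-model exposure, M1 = growth mindset,
# M2 = entrepreneurial self-efficacy, Y = entrepreneurial intention
X = rng.normal(size=n)
M1 = 0.5 * X + rng.normal(size=n)
M2 = 0.4 * M1 + 0.2 * X + rng.normal(size=n)
Y = 0.6 * M2 + 0.1 * X + rng.normal(size=n)

def ols(y, *preds):
    """Least-squares slopes; predictors first, intercept column last."""
    A = np.column_stack([*preds, np.ones_like(y)])
    return np.linalg.lstsq(A, y, rcond=None)[0]

a1 = ols(M1, X)[0]           # X -> M1
d21 = ols(M2, X, M1)[1]      # M1 -> M2, controlling for X
b2 = ols(Y, X, M1, M2)[2]    # M2 -> Y, controlling for X and M1

# Serial indirect effect of X on Y through M1 then M2
serial_indirect = a1 * d21 * b2
print(round(serial_indirect, 3))
```

A positive serial indirect effect corresponds to the study's finding that role models raise intentions through the growth-mindset and self-efficacy chain.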


Region of Interest Extraction and Bilinear Interpolation Application for Preprocessing of Lipreading Systems (입 모양 인식 시스템 전처리를 위한 관심 영역 추출과 이중 선형 보간법 적용)

  • Jae Hyeok Han;Yong Ki Kim;Mi Hye Kim
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.189-198
    • /
    • 2024
  • Lipreading is an important component of speech recognition, and several studies have been conducted to improve the performance of lipreading systems. Recent studies have improved recognition performance by modifying the model architecture of the lipreading system. Unlike previous research that improves recognition performance by modifying the model architecture, we aim to improve recognition performance without any change to the model architecture. To do so, we refer to the cues humans use in lipreading and set additional regions such as the chin and cheeks as regions of interest, alongside the lip region that is the conventional region of interest of lipreading systems, and compare the recognition rate of each region of interest to identify the best-performing one. In addition, assuming that differences in normalization results caused by the choice of interpolation method during normalization of the region-of-interest size affect recognition performance, we interpolate the same region of interest using nearest-neighbor interpolation, bilinear interpolation, and bicubic interpolation, and compare the recognition rate of each interpolation method to identify the best-performing one. Each region of interest was detected by training an object detection neural network, and dynamic time warping templates were generated by normalizing each region of interest, extracting and combining features, and mapping the dimensionality-reduced combined features into a low-dimensional space. The recognition rate was evaluated by comparing the distance between the generated dynamic time warping templates and the data mapped to the low-dimensional space.
In the comparison of regions of interest, the region of interest containing only the lip region showed an average recognition rate of 97.36%, which is 3.44 percentage points higher than the 93.92% average recognition rate of the previous study. In the comparison of interpolation methods, bilinear interpolation achieved 97.36%, which is 14.65 percentage points higher than nearest-neighbor interpolation and 5.55 percentage points higher than bicubic interpolation. The code used in this study can be found at https://github.com/haraisi2/Lipreading-Systems.
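Of the three interpolation methods compared, bilinear interpolation performed best. A minimal sketch of bilinear resizing applied to a 2-D region of interest follows; this is an illustrative NumPy implementation (align-corners sampling), not the authors' code:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D array with bilinear interpolation (align-corners style)."""
    in_h, in_w = img.shape
    # Sample positions in source coordinates for each output pixel
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights
    wx = (xs - x0)[None, :]  # horizontal blend weights
    tl = img[np.ix_(y0, x0)]; tr = img[np.ix_(y0, x1)]
    bl = img[np.ix_(y1, x0)]; br = img[np.ix_(y1, x1)]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

# Toy 2x2 "region of interest" upscaled to 3x3; the center becomes 1.5
roi = np.array([[0.0, 1.0], [2.0, 3.0]])
print(bilinear_resize(roi, 3, 3))
```

The same routine could be swapped for nearest-neighbor (rounding instead of blending) or bicubic (4x4 neighborhoods) to reproduce the comparison in spirit.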

Research on Archive Opening and Sharing Projects of Korean Terrestrial Broadcasters and External Users of Shared Archives : Focusing on the Case of the 5.18 Footage Video Sharing Project 〈May Story(Owol-Iyagi)〉 Contest Organized by KBS (국내 지상파 방송사의 아카이브 개방·공유 사업과 아카이브 이용자 연구 KBS 5.18 아카이브 시민공유 프로젝트 <5월이야기> 공모전 사례를 중심으로)

  • Choi, Hyojin
    • The Korean Journal of Archival Studies
    • /
    • no.78
    • /
    • pp.197-249
    • /
    • 2023
  • This paper focuses on the demand for broadcast and video archive content by users outside broadcasters, as the archive opening and sharing projects of terrestrial broadcasters have become more active in recent years. In the process of creating works using broadcasters' released video footage, the study examined the criteria by which footage is selected and the methods and processes used for editing. To this end, the study analyzed the case of the 5.18 footage video sharing project 〈May Story(Owol-Iyagi)〉 contest organized by KBS in 2022, in which KBS released its footage about the May 18 Democratic Uprising and invited external users to create new content using it. In addition to analyzing the works selected as winners of the contest, the research conducted in-depth interviews with the creators of each work. As a result, the following points were identified. Many of the submitted works deal with direct or indirect experiences of the May 18 Democratic Uprising and focus on the impact of this historical event on individuals and on present-day society. The study also examined the ways in which broadcasters' footage is used in secondary works: video is used as a means of sharing historical events, or presented as evidence or metaphor. The findings point to the need for broadcasters to provide a wider range of public video materials such as those on the May 18 Democratic Uprising, to describe more metadata, including copyright information, before releasing selected footage, to ensure high-definition, high-fidelity videos that can be used for editing, and to strengthen streaming and downloading functions for user friendliness.
Through this, the study explores the future direction of broadcasters' video archive opening and sharing projects, and confirms that broadcasters' archival projects can be an alternative means of fulfilling public responsibilities, such as strengthening social integration across regions, generations, and classes through moving images.

Cytotoxicity test on human contact area with L-929 cells using extracorporeal shock wave therapy cartridge (체외충격파치료기 카트리지의 L-929 세포를 통한 인체접촉부의 세포독성시험)

  • Jun-tae Kim;Se-jin Yoon;So-hyun Park;Kyung-ah Kim;Jae-hyun Jo;Jin-hyoung Jeong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.5
    • /
    • pp.389-395
    • /
    • 2024
  • This study was conducted to determine the cytotoxicity of extracts from the human-contact area of the test substance, an extracorporeal shock wave therapy (ESWT) cartridge, in a test conducted under Good Laboratory Practice (GLP), the medical device safety evaluation standard, using L-929 cells. The test and control substances were extracted with 1×MEM culture medium containing 10% FBS at 37±1℃ for 24±2 hours. The test substance extract (test group), negative control substance extract (negative control group), positive control substance extract (positive control group), and blank test solution extract (solvent control group) were applied to L-929 cells and cultured for 48±2 hours in a 37±1℃, 5±1% CO2 incubator. When cell reactions were observed under a microscope, the cells treated with the blank test solution extract and the negative control substance extract were grade 0, the cells treated with the positive control substance extract were grade 4, and the cells treated with the test substance extract were grade 0. In the quantitative evaluation by cell counting, the cell viability of the cells treated with the negative control substance extract was 106.28% relative to the blank test solution extract, that of the cells treated with the positive control substance extract was 0.00%, and that of the cells treated with the test substance extract was 99.58%. Since the results of the negative and positive control groups confirmed that the test process was appropriate, and since the qualitative evaluation was below grade 2 and the quantitative evaluation showed a cell viability above 70%, it was determined that the test substance does not cause cytotoxicity.
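The quantitative judgment above reduces to comparing each group's cell count with the blank (solvent) control. A minimal sketch with hypothetical cell counts chosen only to reproduce the reported percentages; the 70% pass criterion is the threshold commonly associated with ISO 10993-5-style cytotoxicity tests:

```python
def viability_percent(test_count, blank_count):
    """Cell viability of a group relative to the blank (solvent) control."""
    return 100.0 * test_count / blank_count

# Hypothetical counts chosen to reproduce the reported percentages
blank = 10000
print(round(viability_percent(10628, blank), 2))  # negative control: 106.28
print(round(viability_percent(0, blank), 2))      # positive control: 0.0
print(round(viability_percent(9958, blank), 2))   # test substance: 99.58

# Pass criterion used in the study: viability above 70% -> non-cytotoxic
assert viability_percent(9958, blank) > 70
```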

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.125-148
    • /
    • 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields. In addition, many analysts are interested in text because the amount of such data is very large and it is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification, which classifies documents into predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents, have been actively studied. In particular, text summarization is actively applied in business through news summary services, privacy policy summary services, etc. In academia, much research has been done on both the extraction approach, which selectively presents the main elements of a document, and the abstraction approach, which extracts elements of a document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not made much progress compared to automatic text summarization itself. Most existing studies dealing with quality evaluation of summarization carried out manual summarization of documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and the quality of the automatic summary is measured by comparison with the reference document, which is an ideal summary.
Reference documents are provided in two major ways; the most common is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention in preparing the summary, it takes a lot of time and cost, and the evaluation result may differ depending on who writes the summary. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. As a representative attempt, a method has recently been devised that reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. In this method, the more often frequent terms in the full text appear in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a "good summary" judged only by frequency does not always mean a "good summary" in the essential sense. To overcome the limitations of these previous studies on summarization evaluation, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little duplicated content there is among the sentences of the summary, and completeness is defined as an element indicating how little of the original content is missing from the summary. In this paper, we propose a method for automatic quality evaluation of text summarization based on these concepts of succinctness and completeness.
To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor's hotel reviews, the reviews of each hotel were summarized, and the quality of the summaries was evaluated in accordance with the proposed methodology. The paper also provides a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and proposes a method for performing optimal summarization by changing the sentence-similarity threshold.
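The integration of completeness and succinctness into an F-Score can be sketched as a standard F-measure-style harmonic combination. The exact weighting the authors use is not given in the abstract, so the formula below is an assumption modeled on the precision/recall F-measure:

```python
def f_score(completeness, succinctness, beta=1.0):
    """Harmonic-style combination of completeness and succinctness,
    analogous to the F-measure over precision and recall.
    beta > 1 weights completeness more heavily; beta < 1, succinctness."""
    if completeness == 0 and succinctness == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * completeness * succinctness / (b2 * completeness + succinctness)

# A summary that covers 80% of the content with 60% non-redundancy
print(round(f_score(0.8, 0.6), 3))
```

Because the two measures trade off against each other, a single combined score like this lets the sentence-similarity threshold be tuned to the maximizing point.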

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.1-8
    • /
    • 2013
  • Data mining techniques are used to find important and meaningful information from huge databases, and pattern mining is one of the significant data mining techniques. Pattern mining is a method of discovering useful patterns from huge databases. Frequent pattern mining, one of the pattern mining techniques, extracts patterns whose frequencies exceed a minimum support threshold; such patterns are called frequent patterns. Traditional frequent pattern mining is based on a single minimum support threshold for the whole database. This single-support model implicitly supposes that all of the items in the database have the same nature. In real-world applications, however, each item in a database can have its own characteristics, and thus an appropriate pattern mining technique that reflects these characteristics is required. In the framework of frequent pattern mining, where the natures of items are not considered, the single minimum support threshold must be set to a very low value to mine patterns containing rare items, which leads to too many patterns including meaningless items. In contrast, if a very high threshold is used, no such patterns can be mined. This dilemma is called the rare item problem. To solve this problem, initial research proposed approximate approaches that split data into several groups according to item frequencies or group related rare items. However, being based on approximation, these methods cannot find all of the frequent patterns, including rare frequent patterns. Hence, a pattern mining model with multiple minimum supports was proposed to solve the rare item problem. In this model, each item has a corresponding minimum support threshold, called MIS (Minimum Item Support), which is calculated from item frequencies in the database.
The multiple minimum supports model finds all of the rare frequent patterns without generating meaningless patterns or losing significant patterns by applying the MIS. Meanwhile, candidate patterns are extracted during the mining process, and in the single minimum support model only the single minimum support is compared with the frequencies of the candidate patterns. Therefore, the characteristics of the items that constitute a candidate pattern are not reflected, and the rare item problem occurs. To address this issue, the multiple minimum supports model uses the minimum MIS value among the items in a candidate pattern as the minimum support threshold for that pattern, thereby reflecting its characteristics. To efficiently mine frequent patterns, including rare frequent patterns, with this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to those of the single minimum support model, where items are ordered in frequency-descending order. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple-minimum-supports-based algorithm outperforms the single-minimum-support-based one but demands more memory for MIS information. Moreover, both algorithms show good scalability.
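The core frequency test of the multiple minimum supports model, comparing a candidate pattern's support with the smallest MIS among its items, can be sketched as follows. The item names and MIS values here are hypothetical illustrations:

```python
def is_frequent(pattern, support, mis):
    """In the multiple-minimum-supports model, a candidate pattern is
    frequent if its support meets the smallest MIS among its items."""
    return support >= min(mis[item] for item in pattern)

# Hypothetical MIS values derived from item frequencies:
# a rare item ("caviar") gets a correspondingly low MIS
mis = {"bread": 0.20, "milk": 0.15, "caviar": 0.02}

print(is_frequent(("bread", "caviar"), 0.03, mis))  # rare pattern kept: 0.03 >= 0.02
print(is_frequent(("bread", "milk"), 0.10, mis))    # pruned: 0.10 < 0.15
```

Under a single minimum support low enough to keep the ("bread", "caviar") pattern, every pattern with support 0.02 or more would survive, which is exactly the flood of meaningless patterns the MIS model avoids.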

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.125-140
    • /
    • 2013
  • We call a data set in which the number of records belonging to a certain class far outnumbers that of the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records form the majority class and 'churn' records the minority class. Sensitivity measures the proportion of actual retentions that are correctly identified as such. Specificity measures the proportion of churns that are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to low specificity. Many previous studies on imbalanced data sets employed an 'oversampling' technique, in which members of the minority class are sampled more heavily than those of the majority class in order to produce a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity decreases. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model'. The process of construction and prediction of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I model and ANN_I model are constructed using the imbalanced data set, and SVM_B model is constructed using the balanced data set. SVM_I model is superior in sensitivity and SVM_B model is superior in specificity. For a record on which both SVM_I model and SVM_B model make the same prediction, that prediction becomes the final solution.
If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and a decision tree. For a record on which SVM_I model and SVM_B model make different predictions, a decision tree model is constructed using the ANN_I output value as input and the actual retention or churn as target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research. The result we present in this research is the structure or framework of our hybrid SVM model, not a specific threshold value such as 0.285; the threshold in the above rules can therefore be changed to any value depending on the data. To evaluate the performance of our hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, better than that of the SVM_I or SVM_B model alone. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%. The sensitivity of the SVM_I model is 94.65%, and the specificity of the SVM_B model is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of the SVM_B model while maintaining the sensitivity of the SVM_I model.
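The combination rule described above, where agreement between SVM_I and SVM_B wins and disagreements are settled by the ANN_I output against the learned threshold, can be sketched directly (the function name is illustrative; 0.285 is the threshold reported for the study's data):

```python
def hybrid_predict(pred_i, pred_b, ann_i_output, threshold=0.285):
    """Combination rule of the hybrid SVM model:
    - if SVM_I and SVM_B agree, their shared prediction is final;
    - otherwise, the ANN_I output value decides via the threshold."""
    if pred_i == pred_b:
        return pred_i
    return "Retention" if ann_i_output < threshold else "Churn"

print(hybrid_predict("Churn", "Churn", 0.9))       # agreement -> Churn
print(hybrid_predict("Retention", "Churn", 0.10))  # disagreement -> Retention
print(hybrid_predict("Retention", "Churn", 0.40))  # disagreement -> Churn
```

As the paper notes, the threshold is data-dependent: it is the split point learned by the decision tree on the disagreement records, not a universal constant.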

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.219-240
    • /
    • 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields. As good examples, data-driven surveys are based on analyzing the subjectivity of text data posted by users, and market research is conducted by analyzing users' review posts to quantify a target product's reputation. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabulary with positive, neutral, or negative semantics. In general, the meaning of many sentiment words is likely to differ across domains. For example, the sentiment word 'sad' carries a negative meaning in many domains, but not necessarily in the movie domain. To perform accurate sentiment analysis, we need to build a sentiment dictionary for the given domain. However, building such a sentiment lexicon is time-consuming, and without a general-purpose sentiment lexicon many sentiment vocabulary items are missed. To address this problem, several studies have constructed sentiment lexicons for specific domains based on 'OPEN HANGUL' and 'SentiWordNet', which are general-purpose sentiment lexicons. However, OPEN HANGUL is no longer in service, and SentiWordNet does not work well because of language differences in the process of converting Korean words into English words. There are thus restrictions on using such general-purpose sentiment lexicons as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons.
The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built so that the sentiment dictionary for a target domain can be constructed quickly. In particular, it constructs sentiment vocabulary by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having either positive or negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model reaches 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603. Furthermore, we add sentiment information about frequently used coined words and emoticons that appear mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment entries, each of which is a 1-gram, a 2-gram, a phrase, or a sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing sentiment dictionaries has gradually declined. However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, allowing sentiment analysis to be performed with higher accuracy (Teng, Z., 2016). This result indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features for improving the accuracy of deep learning models.
The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as a source of features for deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
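The basic dictionary-lookup scoring that such a lexicon enables can be sketched as follows. The entries below are illustrative English stand-ins with hypothetical polarities, not actual KNU-KSL vocabulary, and the matching strategy (longest n-gram first) is an assumption:

```python
# Minimal lexicon-based scoring sketch (entries are illustrative only)
lexicon = {"thank you": 1, "worthy": 1, "impressed": 1, "sad": -1}

def sentiment_score(text):
    """Sum the polarities of lexicon entries found in the text,
    matching longer (n-gram) entries before shorter ones so that
    multi-word entries are not double-counted by their parts."""
    score = 0
    t = text.lower()
    for entry, polarity in sorted(lexicon.items(), key=lambda kv: -len(kv[0])):
        if entry in t:
            score += polarity
            t = t.replace(entry, " ")  # consume the matched span
    return score

print(sentiment_score("Thank you, I was impressed"))  # 2
print(sentiment_score("A sad ending"))                # -1
```

In practice, scores from a lexicon like this can also feed a deep learning model as additional features, as the abstract notes.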

Color-related Query Processing for Intelligent E-Commerce Search (지능형 검색엔진을 위한 색상 질의 처리 방안)

  • Hong, Jung A;Koo, Kyo Jung;Cha, Ji Won;Seo, Ah Jeong;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.109-125
    • /
    • 2019
  • As interest in intelligent search engines increases, various studies have been conducted to extract and utilize product-related features intelligently. In particular, when users search for goods in e-commerce search engines, the 'color' of a product is an important feature that describes the product. Therefore, it is necessary to deal with synonyms of color terms in order to produce accurate results for users' color-related queries. Previous studies have suggested a dictionary-based approach to processing synonyms for color features. However, the dictionary-based approach has a limitation in that it cannot handle color-related terms not registered in the dictionary. To overcome this limitation, this research proposes a model that extracts RGB values from an internet search engine in real time and outputs similar color names based on the designated color information. First, a color term dictionary was constructed that includes color names and the R, G, B values of each color, drawn from the Korean color standard digital palette program and the Wikipedia color list, for basic color search. The dictionary was made more robust by adding 138 color names transliterated into Korean from English color names, with their corresponding RGB values. The final color dictionary thus includes a total of 671 color names and corresponding RGB values. The method proposed in this research starts by searching for the specific color a user queried. Then, the presence of the searched color in the built color dictionary is checked. If the color exists in the dictionary, its RGB values in the dictionary are used as the reference values of the retrieved color. If the searched color does not exist in the dictionary, the top-5 Google image search results for the searched color are crawled, and average RGB values are extracted from a central area of each image.
To extract the RGB values from the images, a variety of approaches were attempted, since simply taking the average of the RGB values of the center area of the images has its limits. As a result, clustering the RGB values in a certain area of the image and taking the average value of the densest cluster as the reference values showed the best performance. Based on the reference RGB values of the searched color, the RGB values of all the colors in the previously constructed color dictionary are compared, and a color list is created with colors within a range of ±50 for each of the R, G, and B values. Finally, using the Euclidean distance between these candidates and the reference RGB values of the searched color, up to five colors with the highest similarity become the final outcome. To evaluate the usefulness of the proposed method, we performed an experiment in which 300 color names and their corresponding RGB values were obtained through questionnaires. They were used to compare the RGB values obtained from four different methods, including the proposed one. The average Euclidean distance in CIE-Lab space using our method was about 13.85, a relatively low distance compared to 30.88 when using the synonym dictionary only and 30.38 when using the dictionary together with the Korean synonym website WordNet. The variant of the proposed method without the clustering step showed an average Euclidean distance of 13.88, which implies that the DBSCAN clustering of the proposed method can reduce the Euclidean distance. This research suggests a new color synonym processing method based on RGB values that combines the dictionary method with real-time synonym processing for new color names, overcoming the limits of the dictionary-based approach, the conventional synonym processing method.
This research can contribute to improving the intelligence of e-commerce search systems, especially in the color search feature.
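The candidate-filtering and ranking step described above, keeping dictionary colors within ±50 on each RGB channel and ranking them by Euclidean distance to the reference values, can be sketched as follows. The dictionary here is a tiny illustrative subset, not the study's 671-entry dictionary:

```python
import math

# Tiny illustrative color dictionary (the study's version holds 671 entries)
color_dict = {"red": (255, 0, 0), "crimson": (220, 20, 60), "navy": (0, 0, 128)}

def closest_colors(ref_rgb, k=5):
    """Keep colors within ±50 on each channel of the reference RGB,
    then rank the candidates by Euclidean distance (ascending)."""
    candidates = [
        (name, math.dist(ref_rgb, rgb))
        for name, rgb in color_dict.items()
        if all(abs(a - b) <= 50 for a, b in zip(ref_rgb, rgb))
    ]
    return sorted(candidates, key=lambda x: x[1])[:k]

# A reference color crawled for an unknown query term
print(closest_colors((250, 10, 20)))
```

Here "navy" is excluded by the ±50 channel filter before any distance is computed, which is what keeps the ranking step cheap.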

The Effects of Self-regulatory Resources and Construal Levels on the Choices of Zero-cost Products (자아조절자원 및 해석수준이 공짜대안 선택에 미치는 영향)

  • Lee, Jinyong;Im, Seoung Ah
    • Asia Marketing Journal
    • /
    • v.13 no.4
    • /
    • pp.55-76
    • /
    • 2012
  • Most people prefer to choose zero-cost products that they can get without paying any money. This 'zero-cost effect' can be explained with a 'zero-cost model' in which consumers attach special value to zero-cost products, in a departure from general economic models (Shampanier, Mazar and Ariely 2007). If two different products at the regular prices of ₩200 and ₩400 simultaneously offer ₩200 discounts, the prices will be changed to ₩0 and ₩200, respectively. Despite the same ₩200 price gap between the two products after the discounts, people are much more likely to select the free alternative than the same product at the price of ₩200. Although prior studies have examined the 'zero-cost effect' in isolation from other factors, this study investigates the moderating effects of self-regulatory resources and construal levels on the selection of free products. Self-regulatory resources induce people to control or regulate their behavior. However, since self-regulatory resources are limited, they are easily depleted when exerted (Muraven, Tice, and Baumeister 1998). Without these resources, consumers tend to become less sensitive to price changes and to spend money more extravagantly (Vohs and Faber 2007). Under this condition, they are also likely to invest less effort in their information processing and to make more intuitive decisions (Pocheptsova, Amir, Dhar, and Baumeister 2009). Therefore, context effects such as price changes and the zero-cost effect are less likely under resource depletion. In addition, construal levels have profound effects on the way information is processed (Trope and Liberman 2003, 2010). At a high construal level, people tend to attune their minds to core features and desirability aspects, whereas at a low construal level, they are more likely to process information based on secondary features and feasibility aspects (Khan, Zhu, and Kalra 2010).
A product's perceived value relates more to desirability, whereas a zero cost or a price level relates more to feasibility. Thus, context effects that rely on feasibility (for instance, the zero-cost effect) should be diminished at a high construal level but remain at a low construal level. When people make decisions, these two factors can therefore influence the magnitude of the 'zero-cost effect'. This study ran two experiments to investigate the effects of self-regulatory resources and construal levels on the selection of a free product. Kisses and Ferrero-Rocher, the alternatives adopted in the prior study (Shampanier et al. 2007), were also used in Experiments 1 and 2. Experiment 1 was designed to test whether self-regulatory resource depletion moderates the zero-cost effect. The level of self-regulatory resources was manipulated with two different tasks: a Sudoku task in the depletion condition and a diagram-drawing task in the non-depletion condition. Upon completing the manipulation task, subjects were randomly assigned to either a decision set with a zero-cost option (i.e., Kisses ₩0 and Ferrero-Rocher ₩200) or a set without a zero-cost option (i.e., Kisses ₩200 and Ferrero-Rocher ₩400). The pairs of alternatives in the two decision sets have the same ₩200 price gap between the low-priced Kisses and the high-priced Ferrero-Rocher. Subjects in the non-depletion condition selected Kisses over Ferrero-Rocher more often when Kisses was free (71.88%) than when it was priced at ₩200 (34.88%). However, the zero-cost effect disappeared when subjects lacked self-regulatory resources. Experiment 2 was conducted to investigate whether construal levels influence the magnitude of the 'zero-cost effect'. To manipulate construal level, subjects answered four successive 'why' (high construal level condition) or 'how' (low construal level condition) questions about health management. 
They were presented with four boxes connected by downward arrows. The box at the top contained the question 'Why do I maintain good physical health?' or 'How do I maintain good physical health?' Subjects wrote a response to the why or how question, and similar tasks were repeated for the second, third, and fourth responses. After the manipulation task, subjects were randomly assigned either to a decision set with a zero-cost option or to a set without one, as in Experiment 1. When a low construal level was primed with 'how', subjects chose the free Kisses over Ferrero-Rocher more often (60.66%) than they chose the ₩200 Kisses over the ₩400 Ferrero-Rocher (42.19%). In contrast, the zero-cost effect was no longer observed when a high construal level was primed with 'why'.
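The choice-share contrasts reported in the abstract (e.g., 71.88% vs. 34.88% in Experiment 1) are the kind of 2×2 frequency comparison usually tested with a Pearson chi-square test of independence. The sketch below is illustrative only: the abstract reports percentages but not per-cell sample sizes, so the counts used here (23 of 32 ≈ 71.88%; 15 of 43 ≈ 34.88%) are hypothetical reconstructions, not figures from the paper.

```python
# Minimal sketch: Pearson chi-square for a 2x2 contingency table,
# applied to hypothetical counts consistent with the reported percentages.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table [[a, b], [c, d]]
    (no continuity correction)."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Rows: price condition (Kisses free vs. Kisses at W200).
# Columns: chose Kisses vs. chose Ferrero-Rocher.
# Counts are assumed for illustration: 23/32 = 71.88%, 15/43 = 34.88%.
stat = chi_square_2x2(23, 9, 15, 28)
print(f"chi2 = {stat:.2f}, significant at .05 (df=1): {stat > 3.841}")
```

With these assumed counts the statistic exceeds the 3.841 critical value for one degree of freedom, matching the abstract's claim that the free option shifted choice shares in the non-depletion condition.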
