• Title/Summary/Keyword: Information Theory


Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. The study approaches the model building from two perspectives. The first is the analysis period: we divide the data into the periods before and after the IMF financial crisis and examine whether the two periods differ. The second is the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two or three years ahead. In total, therefore, six prediction models are developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method; it builds trees that label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capabilities. Well-known decision tree induction algorithms include CHAID, CART, QUEST and C5.0. Among them, we use the C5.0 algorithm, which is the most recently developed and yields better performance than the others. We obtained the rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A financial analysis record consists of 89 variables: 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables from the financial analysis data were selected as input variables for each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for testing. The experimental results show that the prediction accuracies on data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those on data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experiments also show that stability-related indices have a major impact on the decision to conduct a rights issue in the case of short-term prediction, whereas long-term prediction of a rights issue is affected by financial analysis indices on profitability, stability, activity and productivity. All the prediction models include the industry code as a significant variable, which means that companies in different industries show different patterns of rights issues.
We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a broader range of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy should be compared against other data mining techniques such as neural networks, logistic regression and SVM. Second, new prediction models should be developed and evaluated that include the variables which capital structure theory has identified as relevant to rights issues.
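The C5.0 workflow described above is specific to PASW Modeler. As a rough, hypothetical sketch of the same pipeline in Python, the snippet below uses scikit-learn's CART-style DecisionTreeClassifier as a stand-in for C5.0; the CSV file, its input columns and the `rights_issue` label are invented, and only the 60/40 build/test split and the rule-like printout of the tree mirror the abstract.

```python
# Hypothetical sketch: the paper built C5.0 rule sets in PASW Modeler 13; here a
# CART-style tree from scikit-learn stands in for C5.0. The data file, its 84
# financial-index columns and the `rights_issue` label are invented for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("financial_indices.csv")            # hypothetical data file
X = df.drop(columns=["rights_issue"])                 # 84 financial analysis indices
y = df["rights_issue"]                                # 1 = issued, 0 = not issued

# 60% of the records for model building, 40% for testing, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=42, stratify=y)

tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=42)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))   # rule-like view of the tree
```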

An Analysis of the Moderating Effects of User Ability on the Acceptance of an Internet Shopping Mall (인터넷 쇼핑몰 수용에 있어 사용자 능력의 조절효과 분석)

  • Suh, Kun-Soo
    • Asia pacific journal of information systems
    • /
    • v.18 no.4
    • /
    • pp.27-55
    • /
    • 2008
  • Due to increasing and intensifying competition in the Internet shopping market, developing an effective policy and strategy for acquiring loyal customers has become very important. For this reason, web site designers need to know whether a new Internet shopping mall (ISM) will be accepted. Researchers have been working on identifying factors that explain and predict user acceptance of an ISM. Some studies, however, have revealed inconsistent findings on the antecedents of user acceptance of a website. A lack of consideration for individual differences in user ability is believed to be one of the key reasons for the mixed findings. The elaboration likelihood model (ELM) and several studies suggest that individual differences in ability play a moderating role in the relationship between the antecedents and user acceptance. Despite the critical role of user ability, little research has examined it in the Internet shopping mall context. The purpose of this study is to develop a user acceptance model that considers the moderating role of user ability in the context of Internet shopping. The study was initiated to see how well the technology acceptance model (TAM) explains the acceptance of a specific ISM. According to TAM, one of the most influential models for explaining user acceptance of IT, the intention to use IT is determined by usefulness and ease of use. Given that interaction between user and website takes place through the web interface, the decisions to accept and continue using an ISM depend on these beliefs. However, TAM neglects the fact that many users will not stick with an ISM until they trust it, even if they find it useful and easy to use. The importance of trust for user acceptance of an ISM has been raised by relational views, which emphasize the trust-building process between the user and the ISM and regard the user's trust in the website as a major determinant of user acceptance. The proposed model extends and integrates TAM and the relational views by incorporating usefulness, ease of use, and trust. User acceptance is defined as a user's intention to reuse a specific ISM, and user ability is introduced into the model as a moderating variable, defined as the degree of experience, knowledge and skill regarding Internet shopping sites. The research model proposes that ease of use, usefulness and trust in an ISM are key determinants of user acceptance. In addition, this paper hypothesizes that the effects of these antecedents on user acceptance may differ among users; in particular, it proposes a moderating effect of a user's ability on the relationship between the antecedents and the intention to reuse. The research model with eleven hypotheses was derived and tested through a survey of 470 university students. For each research variable, this paper used measurement items recognized for reliability and widely used in previous research, slightly modified to fit the research context. The reliability and validity of the research variables were tested using Cronbach's alpha and internal consistency reliability (ICR) values, standardized factor loadings from confirmatory factor analysis, and average variance extracted (AVE) values. The LISREL method was used to test the fit of the research model and its six related hypotheses.
Key findings are summarized as follows. First, TAM's two constructs, ease of use and usefulness, directly affect user acceptance. In addition, ease of use indirectly influences user acceptance by affecting trust. This implies that users tend to trust a shopping site, and visit it repeatedly, when they perceive the ISM as easy to use. Accordingly, designing a shopping site that allows users to navigate heuristically, with minimal clicks to find information and products, is important for improving the site's trust and acceptance. Usefulness, however, was not found to influence trust. Second, among the three belief constructs (ease of use, usefulness, and trust), trust was empirically supported as the most important determinant of user acceptance. This implies that users require trustworthiness from an Internet shopping site to become repeat visitors. Providing a sense of safety and eliminating online shoppers' anxiety regarding privacy, security, delivery, and product returns are critically important conditions for acquiring repeat visitors. Hence, in addition to usefulness and ease of use as in TAM, trust should be treated as a fundamental determinant of user acceptance in the context of Internet shopping. Third, the user's ability in using an Internet shopping site played a moderating role. For users with low ability, ease of use was the more important factor in the decision to reuse the shopping mall, whereas usefulness and trust had stronger effects for users with high ability. Applying ELM theory to these findings, we can suggest that experienced and knowledgeable ISM users tend to elaborate on usefulness aspects, such as efficient and effective shopping performance, and on trust factors, such as the ability, benevolence, integrity, and predictability of a shopping site, before they become repeat visitors. In contrast, novice users tend to rely on low-elaboration features such as perceived ease of use. The existence of moderating effects indicates that different individuals evaluate an ISM from different perspectives. Expert users are more interested in the outcome of the visit (usefulness) and in trustworthiness (trust) than novice visitors, who evaluate the ISM in a more superficial manner, focusing on the novelty of the site and on other instrumental beliefs (ease of use). This is consistent with the insights of the heuristic-systematic model, according to which users act on the principle of minimum effort: a user considers an ISM heuristically, focusing on those aspects that are easy to process and evaluate (ease of use), and switches to systematic processing, evaluating more complex aspects of the site (its usefulness and trustworthiness), once the user has sufficient experience and skill. This implies that an ISM has to provide a minimum level of ease of use to make it possible for a user to evaluate its usefulness and trustworthiness; ease of use is a necessary but not sufficient condition for the acceptance and use of an ISM. Overall, the empirical results generally support the proposed model and identify the moderating effect of user ability. More detailed interpretations and implications of the findings are discussed, and the limitations of the study are discussed to provide directions for future research.
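As a side note on the reliability checks mentioned above, Cronbach's alpha can be computed directly from raw item scores. The short sketch below uses simulated responses; the sample size and the "trust" items are purely illustrative and are not the study's data.

```python
# Minimal sketch of Cronbach's alpha for a scale; the data below are simulated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 470 respondents, 4 correlated items measuring "trust".
rng = np.random.default_rng(0)
base = rng.normal(size=(470, 1))
trust_items = base + 0.5 * rng.normal(size=(470, 4))
print(round(cronbach_alpha(trust_items), 3))
```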

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for them. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it structurally avoids concentrated investment risk, offers stability in the management of large funds, and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It scales to billions of examples in limited memory environments, learns much faster than traditional boosting methods, and is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment weights from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect portfolio performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model; in doing so it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data covering a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care and staples sectors. Predictions were accumulated with a moving-window method using 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return, and the long test period provided a large sample. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 of the 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results thus show that portfolio performance improves when the estimation errors of the optimized asset allocation model are reduced. Many financial and asset allocation models are of limited use in practical investment because of the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market.
This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning. The study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
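The abstract does not include the authors' code, so the following is only a minimal sketch of the core idea: compute risk parity (equal risk contribution) weights from a covariance matrix whose volatilities are assumed to come from a forecasting model such as xgboost.XGBRegressor fit on rolling return features. The correlation matrix, volatility numbers and three-asset universe below are hypothetical.

```python
# Minimal sketch, not the authors' implementation: equal-risk-contribution weights
# from a covariance matrix whose diagonal uses (hypothetically) predicted volatilities,
# e.g. the output of an XGBoost regressor, while keeping the sample correlations.
import numpy as np
from scipy.optimize import minimize

def risk_parity_weights(cov: np.ndarray) -> np.ndarray:
    n = cov.shape[0]

    def objective(w):
        port_var = w @ cov @ w
        rc = w * (cov @ w)                     # risk contribution of each asset
        return np.sum((rc - port_var / n) ** 2)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, np.full(n, 1.0 / n), bounds=bounds, constraints=cons)
    return res.x

# Hypothetical inputs: a sample correlation matrix and model-predicted volatilities.
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
predicted_vol = np.array([0.18, 0.22, 0.12])    # e.g. XGBoost volatility forecasts
cov_hat = np.outer(predicted_vol, predicted_vol) * corr
print(risk_parity_weights(cov_hat).round(3))
```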

The Changing Aspects of North Korea's Terror Crimes and Countermeasures : Focused on Power Conflict of High Ranking Officials after Kim Jong-Il Era (북한 테러범죄의 변화양상에 따른 대응방안 -김정일 정권 이후 고위층 권력 갈등을 중심으로)

  • Byoun, Chan-Ho;Kim, Eun-Jung
    • Korean Security Journal
    • /
    • no.39
    • /
    • pp.185-215
    • /
    • 2014
  • Since North Korea has used terror crime as a means of communist unification against South Korea, South Korea has suffered considerable damage, and the possibility of terror crimes by the North Korean authorities is now higher than ever. The North Korean terror crimes of the Kim Il Sung era were committed on the dictator's instruction with the object of securing governing funds. Looking at the terror crimes committed over the decades of the Kim Jung Il regime, however, reveals that they emerged as criminal behavior out of conflicts in which less powerful groups sought power and economic advantage. This study focuses on power conflict among the various causes of terror crime by applying George B. Vold's (1958) theory, which explains power conflict between groups as a factor of crime, and traces the changing aspects over time of terror crimes by the North Korean authorities as well as plans for responding to future North Korean terror crime. For decades in the Kim Il Sung era, North Korea's high-ranking officials came from the Labor Party centered on the Juche Idea. Afterwards, at the beginning of the Kim Jung Il regime, high-ranking officials were drawn from the military authorities following the Military First Policy, and rapid power changes have occurred over the last ten years. Tracing the terror crimes that followed these power shifts: after 1995, party executives alienated by Kim Jung Il's active support of the military-first authorities could not object to the military's forcible terror crimes, and the first and second Yeonpyeong naval battles of this period were driven by the military-first authorities to display the military's power. After 2006, a conservative party coalition enforced censorship and inspection of the military authorities' trade businesses and foreign-currency earning while carrying out drastic purges; the shooting of a Keumkangsan tourist in this period was a forcible terror crime by the military authorities under pressure from the conservative party. After October 2008, the first military-reign coalition carried out the launch of the Gwangmyungsung No. 2 long-range missile, the second nuclear test, the Daechung naval battle, and the terror attack on the Cheonanham in order to highlight the importance and role of the military authorities. After September 2010, a new ruling coalition went through severe competition between a new military authority and a new mainstream, and the new military authority executed highly professionalized terror crimes, such as cyber and electronic terror, unlike the past military authorities. After July 2012, the ICBM test launch, the third nuclear test, and the cyber attack on the Cheongwadae homepage by the new mainstream coalition reflected Kim Jung Eun's intention to display his ability and to check and adjust the power of the party, military, cabinet and public security organs, and he may attempt unexpected terror crimes in the future. North Korean terror crime has continued since the 1980s, when Kim Jung Il's succession to power was carried out; the power structure has changed rapidly since Kim Il Sung's death in 1994, and terror crime has intensified with the struggles among high-ranking officials over power and privilege.
South Korea should now establish a specialized department that synthesizes and analyzes information on North Korean high-ranking officials, and should reinforce its comprehensive information-collecting system through the protection and management of North Korean defectors and secret agents, in order to determine the causes of North Korean terror crime and respond to it. South Korea should also participate actively in international collaboration related to North Korean terror and make direct efforts to win international agreement on building cooperation against North Korean terror crime. Finally, more realistic countermeasures should be prepared against North Korean cyber and electronic terror, which has diversified beyond conventional forcible terror into expert forms, through the enactment and revision of laws on cyber terror crime, the organization of relevant institutions and budgets, the training of professional manpower, and technical development.


If This Brand Were a Person, or Anthropomorphism of Brands Through Packaging Stories (가설품패시인(假设品牌是人), 혹통과고사포장장품패의인화(或通过故事包装将品牌拟人化))

  • Kniazeva, Maria;Belk, Russell W.
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.3
    • /
    • pp.231-238
    • /
    • 2010
  • The anthropomorphism of brands, defined as seeing human beings in brands (Puzakova, Kwak, and Rosereto, 2008) is the focus of this study. Specifically, the research objective is to understand the ways in which brands are rendered humanlike. By analyzing consumer readings of stories found on food product packages we intend to show how marketers and consumers humanize a spectrum of brands and create meanings. Our research question considers the possibility that a single brand may host multiple or single meanings, associations, and personalities for different consumers. We start by highlighting the theoretical and practical significance of our research, explain why we turn our attention to packages as vehicles of brand meaning transfer, then describe our qualitative methodology, discuss findings, and conclude with a discussion of managerial implications and directions for future studies. The study was designed to directly expose consumers to potential vehicles of brand meaning transfer and then engage these consumers in free verbal reflections on their perceived meanings. Specifically, we asked participants to read non-nutritional stories on selected branded food packages, in order to elicit data about received meanings. Packaging has yet to receive due attention in consumer research (Hine, 1995). Until now, attention has focused solely on its utilitarian function and has generated a body of research that has explored the impact of nutritional information and claims on consumer perceptions of products (e.g., Loureiro, McCluskey and Mittelhammer, 2002; Mazis and Raymond, 1997; Nayga, Lipinski and Savur, 1998; Wansik, 2003). An exception is a recent study that turns its attention to non-nutritional packaging narratives and treats them as cultural productions and vehicles for mythologizing the brand (Kniazeva and Belk, 2007). The next step in this stream of research is to explore how such mythologizing activity affects brand personality perception and how these perceptions relate to consumers. These are the questions that our study aimed to address. We used in-depth interviews to help overcome the limitations of quantitative studies. Our convenience sample was formed with the objective of providing demographic and psychographic diversity in order to elicit variations in consumer reflections to food packaging stories. Our informants represent middle-class residents of the US and do not exhibit extreme alternative lifestyles described by Thompson as "cultural creatives" (2004). Nine people were individually interviewed on their food consumption preferences and behavior. Participants were asked to have a look at the twelve displayed food product packages and read all the textual information on the package, after which we continued with questions that focused on the consumer interpretations of the reading material (Scott and Batra, 2003). On average, each participant reflected on 4-5 packages. Our in-depth interviews lasted one to one and a half hours each. The interviews were tape recorded and transcribed, providing 140 pages of text. The products came from local grocery stores on the West Coast of the US and represented a basic range of food product categories, including snacks, canned foods, cereals, baby foods, and tea. The data were analyzed using procedures for developing grounded theory delineated by Strauss and Corbin (1998). As a result, our study does not support the notion of one brand/one personality as assumed by prior work. 
Thus, we reveal multiple brand personalities peacefully cohabiting in the same brand as seen by different consumers, despite marketer attempts to create more singular brand personalities. We extend Fournier's (1998) proposition, that one's life projects shape the intensity and nature of brand relationships. We find that these life projects also affect perceived brand personifications and meanings. While Fournier provides a conceptual framework that links together consumers’ life themes (Mick and Buhl, 1992) and relational roles assigned to anthropomorphized brands, we find that consumer life projects mold both the ways in which brands are rendered humanlike and the ways in which brands connect to consumers' existential concerns. We find two modes through which brands are anthropomorphized by our participants. First, brand personalities are created by seeing them through perceived demographic, psychographic, and social characteristics that are to some degree shared by consumers. Second, brands in our study further relate to consumers' existential concerns by either being blended with consumer personalities in order to connect to them (the brand as a friend, a family member, a next door neighbor) or by distancing themselves from the brand personalities and estranging them (the brand as a used car salesman, a "bunch of executives.") By focusing on food product packages, we illuminate a very specific, widely-used, but little-researched vehicle of marketing communication: brand storytelling. Recent work that has approached packages as mythmakers, finds it increasingly challenging for marketers to produce textual stories that link the personalities of products to the personalities of those consuming them, and suggests that "a multiplicity of building material for creating desired consumer myths is what a postmodern consumer arguably needs" (Kniazeva and Belk, 2007). Used as vehicles for storytelling, food packages can exploit both rational and emotional approaches, offering consumers either a "lecture" or "drama" (Randazzo, 2006), myths (Kniazeva and Belk, 2007; Holt, 2004; Thompson, 2004), or meanings (McCracken, 2005) as necessary building blocks for anthropomorphizing their brands. The craft of giving birth to brand personalities is in the hands of writers/marketers and in the minds of readers/consumers who individually and sometimes idiosyncratically put a meaningful human face on a brand.

Differential Effects of Recovery Efforts on Product Attitudes (제품태도에 대한 회복노력의 차별적 효과)

  • Kim, Cheon-Gil;Choi, Jung-Mi
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.1
    • /
    • pp.33-58
    • /
    • 2008
  • Previous research has presupposed that the evaluation of a consumer who receives some recovery after experiencing a product failure should be better than that of a consumer who receives no recovery. The major purposes of this article are to examine the impact of product defect failures, rather than service failures, and to explore the effects of recovery on post-recovery product attitudes. First, the article deals with the occurrence of severe and non-severe failures, and the corresponding recovery, for tangible products rather than intangible services. Contrary to intangible services, purchase and usage are separable for tangible products. This difference means that a recovery strategy for tangible products cannot plausibly be executed right after consumers discover the failure; during the time gap between product failure and recovery, consumers may think about the background and causes of the unpleasant event, and this deliberation may dilute the positive effects of recovery efforts. The recovery strategies offered to consumers experiencing product failures can be classified into three types. A firm can provide consumers with a new product replacing the old defective one, a complimentary product for free, a discount at the time of the failure incident, or a coupon that can be used on the next visit; this strategy is defined as a "rewarding effort." Meanwhile, a product failure may arise in exchange for a benefit; the product provider can then explain in detail that the defect is hard to avoid because it is closely tied to a specific advantage of the product, a strategy that may be called a "strengthening effort." Another possible strategy is to recover negative attitudes toward one's own brand by giving prominence to the disadvantages of a competing brand rather than the advantages of one's own brand; this is labeled a "weakening effort." This paper emphasizes that, in order to confirm its effectiveness, a recovery strategy should be compared with doing nothing in response to the product failure, so the three types of recovery efforts are discussed in comparison with a no-recovery condition. The strengthening strategy claims that the product failure is highly related to another advantage of the product and expects this two-sidedness to ease consumers' complaints. The weakening strategy emphasizes that the product failure is not aversive enough to justify choosing a competing brand. The two strategies can be effective in restoring the original state, by providing plausible motives to accept the condition of product failure or by informing consumers of the firm's non-responsibility for the failure. However, they may be less effective than the rewarding strategy, which directly addresses consumers' need for rehabilitation. In particular, the relative effect of the strengthening and weakening efforts may differ with the severity of the product failure. A consumer who perceives a highly severe failure is likely to attach importance to the property that caused the failure, which implies that the strengthening effort would be less effective under high failure severity. Under low failure severity, by contrast, the failing property is not diagnostic information; consumers pay little attention to non-diagnostic information and are unlikely to change their attitudes based on it.
This implies that the strengthening effort would be more effective under the condition of low failure severity. A 2 (product failure severity: high or low) x 4 (recovery strategy: rewarding, strengthening, weakening, or doing nothing) between-subjects design was employed. The particular levels of product failure severity and the types of recovery strategies were determined after a series of expert interviews. The dependent variable was product attitude after the recovery effort was provided. Subjects were 284 consumers who had experience with cosmetics. Subjects were first given a product failure scenario and were asked to rate the comprehensibility of the scenario, the probability of raising complaints against the failure, and the subjective severity of the failure. After a recovery scenario was presented, its comprehensibility and overall evaluation were measured. Subjects assigned to the no-recovery condition were instead shown a short news article on the cosmetics industry. Next, subjects answered filler questions: 42 items on need for cognitive closure and 16 items on need to evaluate. On the following page, product attitude was measured on a five-item, six-point scale, and repurchase intention on a three-item, six-point scale. After the demographic variables of age and sex were recorded, ten items measuring the subject's objective knowledge were checked. The results showed that subjects formed more favorable evaluations after receiving rewarding efforts than after receiving either strengthening or weakening efforts. This is consistent with Hoffman, Kelley, and Rotalsky (1995) in that a tangible recovery can be more effective than intangible efforts. Strengthening and weakening efforts were also effective compared with no recovery effort, so in general any recovery increased product attitudes. These results hint that a recovery strategy such as a strengthening or weakening effort, although it does not contain a specific reward, may still have an effect on consumers experiencing severe dissatisfaction and strong complaints. Meanwhile, strengthening and weakening efforts were not expected to increase product attitudes under the condition of low failure severity; we can conclude that only a physical recovery effort may be recognized favorably as a firm's willingness to correct its fault by consumers with low involvement. The results of the experiment are explained in terms of attribution theory. This article has the limitation of using fictitious scenarios; future research should test the effect of recovery for actual consumers in realistic settings. Recovery involves a direct, firsthand experience of ex-users and does not apply to non-users. The experience of receiving recovery efforts can be relatively more salient and accessible for ex-users than for non-users, so a recovery effort might be more likely to improve product attitude for ex-users than for non-users. The present experiment also did not include consumers who had no experience with the products or who did not perceive the occurrence of the product failure. For non-users and unaware consumers, recovery efforts might even decrease product attitude and purchase intention, because the recovery attempt may give them an opportunity to notice the product failure.
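For readers unfamiliar with the 2 x 4 between-subjects layout described above, the sketch below runs a two-way ANOVA on simulated data with statsmodels; the generated scores, cell sizes and effect sizes are hypothetical and do not reproduce the authors' results.

```python
# Minimal sketch of analyzing a 2 (severity) x 4 (recovery strategy)
# between-subjects design on simulated data; not the authors' data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
severity = np.repeat(["high", "low"], 142)                      # 284 subjects in total
strategy = np.tile(["reward", "strengthen", "weaken", "none"], 71)
attitude = 3.5 + (strategy == "reward") * 0.8 + rng.normal(0, 1, 284)

df = pd.DataFrame({"severity": severity, "strategy": strategy, "attitude": attitude})
model = smf.ols("attitude ~ C(severity) * C(strategy)", data=df).fit()
print(anova_lm(model, typ=2))     # main effects and the severity x strategy interaction
```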


A Study on the Archives and Records Management in Korea - Overview and Future Direction - (한국의 기록관리 현황 및 발전방향에 관한 연구)

  • Han, Sang-Wan;Kim, Sung-Soo
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.2 no.2
    • /
    • pp.1-38
    • /
    • 2002
  • This study examines the status quo of Korean archives and records management, covering both governmental and professional activities for the development of the field in relation to the new legislation on records management. Among many concerns, the study primarily explores four perspectives: 1) the Government Archives and Records Service (GARS); 2) the Korean Association of Archives; 3) the Korean Society of Archives and Records Management; and 4) the Journal of the Korean Society of Archives and Records Management. One of the primary tasks of the GARS is to build the special depository within which the Presidential Library should be located. As a result, the position of the GARS can be elevated so that it is directed by an official at the vice-minister level, directly under the president, as the governmental representative for managing public records; in this manner the GARS can sustain its independence and take custody of public records across government agencies. The Korean Association of Archives has made efforts regarding the preservation of paper records, the preservation of digital resources in new media formats, facilities and equipment, the education of archivists, continuing training of practitioners, and policy-making for records preservation; for further development, academia and industry should continue to cooperate in facing current problems. The Korean Society of Archives and Records Management has held three international conferences to date, on the following topics: 1) records management and archival education in Korea, Japan, and China; 2) knowledge management and metadata for the fulfillment of archives and information science; and 3) electronic records management and preservation, with an overview of ongoing archival research in the United States, Europe, and Asia. The Society continues to play a leading role in both theory and practice for the development of archival science in Korea; it should also suggest an educational model of archival curricula that fits the Korean context. The Journal of Records Management & Archives Society of Korea has so far published on six major topics. Findings suggest that "special archives" for regional or topical collections are desirable because they can house subject holdings on specialties or particular figures in a region. In addition, archival education at the undergraduate level is more desirable for the Korean situation, where practitioners are strongly needed and professionals with master's degrees move into managerial positions. Departments of Library and Information Science in universities therefore need to open an archival science major or track at the undergraduate level to meet current market demands, and the qualification requirements for professional archivists should be kept moderate as well.

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality, and a multi-mode multi-task embedded system if it additionally supports multiple tasks to be executed within a mode. In this paper, we address the HW/SW partitioning problem for multi-mode multi-task embedded applications with task timing constraints. The objective of the optimization problem is to find a minimal total system cost for the allocation and mapping of processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. The key to solving the problem is the degree to which the potential parallelism among module executions can be exploited. However, because the search space of this parallelism is inherently very large, and to keep schedulability analysis tractable, prior HW/SW partitioning methods have not been able to fully exploit the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise refinement partitioning technique for single-mode multi-task applications. The proposed technique is then extended to solve the HW/SW partitioning problem for multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single-mode and multi-mode multi-task applications, respectively, compared with the conventional method.
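As a much-simplified illustration of the cost-versus-timing trade-off at the heart of HW/SW partitioning (not the stepwise refinement technique proposed in the paper), the toy sketch below greedily moves modules from software to hardware until a task deadline is met, preferring modules that save the most time per unit of hardware cost; the module figures and the serial schedule are invented assumptions.

```python
# Toy illustration of the HW/SW partitioning trade-off, not the paper's method:
# each module runs slower but at no extra cost in SW, and faster but at an area
# cost in HW; greedily move modules to HW until the task's deadline is met.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    sw_time: float   # execution time if mapped to software
    hw_time: float   # execution time if mapped to hardware
    hw_cost: float   # area/price cost of a hardware implementation

def partition(modules, deadline):
    mapping = {m.name: "SW" for m in modules}
    total_time = sum(m.sw_time for m in modules)    # serial schedule for simplicity
    total_cost = 0.0
    # Consider candidates in order of time saved per unit of hardware cost.
    for m in sorted(modules, key=lambda m: (m.sw_time - m.hw_time) / m.hw_cost,
                    reverse=True):
        if total_time <= deadline:
            break
        mapping[m.name] = "HW"
        total_time -= m.sw_time - m.hw_time
        total_cost += m.hw_cost
    return mapping, total_time, total_cost

mods = [Module("fft", 8.0, 2.0, 5.0), Module("filter", 4.0, 1.0, 2.0),
        Module("ctrl", 2.0, 1.5, 4.0)]
print(partition(mods, deadline=9.0))
```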

An Application-Specific and Adaptive Power Management Technique for Portable Systems (휴대장치를 위한 응용프로그램 특성에 따른 적응형 전력관리 기법)

  • Egger, Bernhard;Lee, Jae-Jin;Shin, Heon-Shik
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.367-376
    • /
    • 2007
  • In this paper, we introduce an application-specific and adaptive power management technique for portable systems that support dynamic voltage scaling (DVS). We exploit both the idle time of multitasking systems running soft real-time tasks and memory- or CPU-bound code regions. Detailed power and execution time profiles guide an adaptive power manager (APM) that is linked to the operating system. A post-pass optimizer marks candidate regions for DVS by inserting calls to the APM. At runtime, the APM monitors the CPU's performance counters to dynamically determine the affinity of each marked region. For each region, the APM computes the voltage and frequency setting that is optimal in terms of energy consumption and switches the CPU to that setting during the execution of the region. Idle time is exploited by monitoring system idle time and switching to the most energy-economical setting without prolonging execution. We show that our method is most effective for periodic workloads such as video or audio decoding. We have implemented our method in a multitasking operating system (Microsoft Windows CE) running on an Intel XScale processor and achieved total system power savings of up to 9% over the standard power management policy, which puts the CPU into a low-power mode during idle periods.
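A heavily simplified model of the frequency/voltage selection step (not the paper's APM, which relies on runtime performance counters and measured profiles): for a CPU-bound region with a known cycle budget and a deadline, choose the operating point that meets the deadline with the least dynamic energy under the rough relation E ≈ C·V²·f·t. The operating points and capacitance value below are hypothetical.

```python
# Simplified DVS illustration, not the paper's adaptive power manager.
OPERATING_POINTS = [          # hypothetical (frequency in MHz, voltage in V) pairs
    (200, 1.0), (333, 1.1), (400, 1.3), (600, 1.5),
]

def pick_setting(cycles, deadline_s, cap_f=1e-9):
    """Return the (MHz, V, energy) point that meets the deadline with least energy."""
    best = None
    for f_mhz, volt in OPERATING_POINTS:
        exec_time = cycles / (f_mhz * 1e6)               # CPU-bound: time ~ cycles / f
        if exec_time > deadline_s:
            continue                                      # this point misses the deadline
        energy = cap_f * volt**2 * f_mhz * 1e6 * exec_time   # E ~ C * V^2 * f * t
        if best is None or energy < best[2]:
            best = (f_mhz, volt, energy)
    return best

print(pick_setting(cycles=3.0e8, deadline_s=1.0))         # lowest feasible point wins
```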

An Improved Online Algorithm to Minimize Total Error of the Imprecise Tasks with 0/1 Constraint (0/1 제약조건을 갖는 부정확한 태스크들의 총오류를 최소화시키기 위한 개선된 온라인 알고리즘)

  • Song, Gi-Hyeon
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.10
    • /
    • pp.493-501
    • /
    • 2007
  • An imprecise real-time system provides flexibility in scheduling time-critical tasks. Most scheduling problems that must satisfy both the 0/1 constraint and timing constraints while minimizing total error are NP-complete when the optional tasks have arbitrary processing times. Liu suggested a reasonable strategy for scheduling tasks with the 0/1 constraint on uniprocessors so as to minimize total error, and Song et al. suggested a corresponding strategy for multiprocessors; both, however, are off-line algorithms. For online scheduling, the NORA algorithm can find a schedule with minimum total error for an imprecise online task system; NORA adopts the EDF strategy for scheduling optional tasks. For a task system with the 0/1 constraint, however, EDF scheduling may not be optimal in the sense of minimizing total error. In particular, when the optional tasks are scheduled in ascending order of their required processing times, the EDF-based NORA algorithm may not produce the minimum total error. In this paper, therefore, an online algorithm is proposed to minimize the total error of an imprecise task system with the 0/1 constraint. A series of experiments compares the performance of the proposed algorithm with that of NORA. The comparison shows that the proposed algorithm produces total error similar to NORA when the optional tasks are scheduled in random order of their required processing times, but produces less total error than NORA, especially when the optional tasks are scheduled in ascending order of their required processing times.
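To make the role of the 0/1 constraint concrete (a toy illustration, not NORA or the proposed algorithm): each optional part must run completely or be skipped, and a skipped part contributes its whole processing time to the total error, so the order in which optional parts are considered for the available slack changes the resulting error. The task set and slack value below are invented.

```python
# Toy illustration of the 0/1 constraint: optional parts are run fully or skipped,
# and a skipped part adds its entire processing time to the total error. Picking
# optional parts in descending order of processing time often leaves less total
# error than picking them in ascending order for the same amount of slack.
def total_error(optional_times, slack, ascending):
    remaining = slack
    error = 0.0
    for t in sorted(optional_times, reverse=not ascending):
        if t <= remaining:
            remaining -= t        # schedule this optional part completely
        else:
            error += t            # skipped entirely under the 0/1 constraint
    return error

opts = [5, 4, 3, 1]
print("ascending :", total_error(opts, slack=6, ascending=True))    # runs 1, 3 -> error 9
print("descending:", total_error(opts, slack=6, ascending=False))   # runs 5, 1 -> error 7
```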