• Title/Summary/Keyword: flow information

Search Result 5,726, Processing Time 0.037 seconds

The Impacts of Need for Cognitive Closure, Psychological Wellbeing, and Social Factors on Impulse Purchasing (인지폐합수요(认知闭合需要), 심리건강화사회인소대충동구매적영향(心理健康和社会因素对冲动购买的影响))

  • Lee, Myong-Han;Schellhase, Ralf;Koo, Dong-Mo;Lee, Mi-Jeong
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.4
    • /
    • pp.44-56
    • /
    • 2009
  • Impulse purchasing is defined as an immediate purchase with no pre-shopping intentions. Previous studies of impulse buying have focused primarily on factors linked to marketing mix variables, situational factors, and consumer demographics and traits. In previous studies, marketing mix variables such as product category, product type, and atmospheric factors including advertising, coupons, sales events, promotional stimuli at the point of sale, and media format have been used to evaluate product information. Some authors have also focused on situational factors surrounding the consumer. Factors such as the availability of credit card usage, time available, transportability of the products, and the presence and number of shopping companions were found to have a positive impact on impulse buying and/or impulse tendency. Research has also been conducted to evaluate the effects of individual characteristics such as the age, gender, and educational level of the consumer, as well as perceived crowding, stimulation, and the need for touch, on impulse purchasing. In summary, previous studies have found that all products can be purchased impulsively (Vohs and Faber, 2007), that situational factors affect and/or at least facilitate impulse purchasing behavior, and that various individual traits are closely linked to impulse buying. The recent introduction of new distribution channels such as home shopping channels, discount stores, and Internet stores that are open 24 hours a day increases the probability of impulse purchasing. However, previous literature has focused predominantly on situational and marketing variables and thus studies that consider critical consumer characteristics are still lacking. To fill this gap in the literature, the present study builds on this third tradition of research and focuses on individual trait variables, which have rarely been studied. 
More specifically, the current study investigates whether impulse buying tendency has a positive impact on impulse buying behavior, and evaluates how consumer characteristics such as the need for cognitive closure (NFCC), psychological wellbeing, and susceptibility to interpersonal influences affect the tendency of consumers towards impulse buying. The survey results reveal that while consumer affective impulsivity has a strong positive impact on impulse buying behavior, cognitive impulsivity has no impact on impulse buying behavior. Furthermore, affective impulse buying tendency is driven by sub-components of NFCC such as decisiveness and discomfort with ambiguity, psychological wellbeing constructs such as environmental control and purpose in life, and by normative and informational influences. In addition, cognitive impulse tendency is driven by sub-components of NFCC such as decisiveness, discomfort with ambiguity, and close-mindedness, and the psychological wellbeing constructs of environmental control, as well as normative and informational influences. The present study has significant theoretical implications. First, affective impulsivity has a strong impact on impulse purchase behavior. Previous studies based on affectivity and flow theories proposed that low to moderate levels of impulsivity are driven by reduced self-control or a failure of self-regulatory mechanisms. The present study confirms the above proposition. Second, the present study also contributes to the literature by confirming that impulse buying tendency can be viewed as a two-dimensional concept with both affective and cognitive dimensions, and illustrates that impulse purchase behavior is explained mainly by affective impulsivity, not by cognitive impulsivity. Third, the current study accommodates new constructs such as psychological wellbeing and NFCC as potential influencing factors in the research model, thereby contributing to the existing literature. 
Fourth, by incorporating multi-dimensional concepts such as psychological wellbeing and NFCC, more diverse aspects of consumer information processing can be evaluated. Fifth, the current study also extends the existing literature by confirming the two competing routes of normative and informational influences. Normative influence occurs when individuals conform to the expectations of others or seek to enhance their self-image, whereas informational influence occurs when individuals search for information from knowledgeable others or make inferences based on observations of others' behavior. The present study shows that these two competing routes of social influence can be attributed to different sources of influence power. The current study also has many practical implications. First, it suggests that people with affective impulsivity may be primary targets to whom companies should pay closer attention. Cultivating a more amenable and mood-elevating shopping environment will appeal to this segment. Second, the present results demonstrate that NFCC is closely related to the cognitive dimension of impulsivity. These people are driven by careless thoughts, not by feelings or excitement, and rational advertising at the point of purchase will attract these customers. Third, people susceptible to normative influences are another potential target market. Retailers and manufacturers could appeal to this segment by advertising their products and/or services as means of identifying with, or conforming to the expectations of, an aspiration group. However, retailers should avoid targeting people susceptible to informational influences as a market segment. These people engage in an extensive information search relevant to their purchase, and therefore more elaborate, long-term rational advertising messages, which can be internalized into these consumers' thought processes, will appeal to this segment. 
The current findings should be interpreted with caution for several reasons. The study used a small convenience sample, and only investigated behavior in two dimensions. Accordingly, future studies should incorporate a sample with more diverse characteristics and measure different aspects of behavior. Future studies should also investigate personality traits closely related to affectivity theories. Trait variables such as sensory curiosity, interpersonal curiosity, and atmospheric responsiveness are interesting areas for future investigation.


An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.149-161
    • /
    • 2014
  • The export of domestic public services to overseas markets contains many potential obstacles, stemming from different export procedures, the target services, and socio-economic environments. In order to alleviate these problems, the business incubation platform as an open business ecosystem can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for the business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationship between ontologies, and key attributes. For the implementation and test of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, with which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of the public service export. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service. 
The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation. Sub-attributes of requirements are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase. The activity category also has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business law, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. 
The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for a certain public service are generated by a matching algorithm. These lists are used as input seeds to simulate the consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered an alternative choice, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
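The matching step described above can be sketched in a few lines of Python. The service entries, attribute names, and the overlap-count scoring rule below are entirely hypothetical stand-ins for illustration; the paper's actual matching algorithm is not specified in the abstract.

```python
# Minimal sketch of the priority-list matching idea: each service entry
# carries attribute categories from the service ontology (requirements,
# activity); a country entry carries a needs profile from the country/
# environment ontologies. Services are ranked by attribute overlap.
# All names, attributes, and the scoring rule are illustrative.

SERVICES = {
    "e-procurement": {"requirements": {"platform", "architecture"},
                      "activity": {"software", "project unit"}},
    "civil-registry": {"requirements": {"user", "application"},
                       "activity": {"facility", "software"}},
}

COUNTRY = {"name": "X", "needs": {"user", "application", "software"}}

def priority_list(services, country):
    """Rank services by overlap between their attribute pool and the
    target country's needs (a stand-in for the matching algorithm)."""
    def score(attrs):
        pool = set().union(*attrs.values())
        return len(pool & country["needs"])
    return sorted(services, key=lambda s: score(services[s]), reverse=True)

print(priority_list(SERVICES, COUNTRY))  # civil-registry ranks first
```

The ranked list would then seed the simulator module, where gap analysis and consortium formation refine it further.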

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • The deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPUs". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given the trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. The criterion is simply the length of the code; the learning curve and the ease of coding are not the main concern. 
According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We need to mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning models or search methods that we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept identical: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for users still learning deep learning models, the availability of examples and references matters as well.
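The computational-graph mechanism the abstract describes, storing a partial derivative on each edge and combining them with the chain rule, can be illustrated with a toy reverse-mode sketch. This is a didactic stand-in, not the API of any of the three frameworks compared.

```python
# Toy reverse-mode automatic differentiation: each node records, for
# every incoming edge, the local partial derivative; backward() then
# accumulates gradients along the edges via the chain rule.

class Node:
    def __init__(self, value, parents=()):
        self.value = value        # forward value
        self.parents = parents    # (parent_node, local_gradient) pairs
        self.grad = 0.0           # accumulated d(output)/d(node)

def mul(a, b):
    # d(ab)/da = b, d(ab)/db = a
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def add(a, b):
    # d(a+b)/da = d(a+b)/db = 1
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def backward(out):
    """Push gradients from the output toward the leaves.
    Sufficient for this small example; a real framework would process
    nodes in reverse topological order to handle arbitrary graphs."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local  # chain rule on each edge
            stack.append(parent)

x, y = Node(2.0), Node(3.0)
z = add(mul(x, y), x)   # z = x*y + x
backward(z)
print(x.grad, y.grad)   # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```

Frameworks differ mainly in how much of this bookkeeping they hide, which is exactly the abstraction gap between Theano-style and CNTK/Tensorflow-style coding that the abstract discusses.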

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.121-139
    • /
    • 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is greatly important for financial institutions. Many researchers have dealt with bankruptcy prediction over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers in order to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers. In bagging, different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining; however, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) for improving the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. GA uses the idea of survival of the fittest by progressively accepting better solutions to the problem, and it searches by maintaining a population of solutions from which better solutions are created, rather than by making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation. The solutions, coded as strings, are evaluated by the fitness function. 
The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset, which is then used as the input data of the bagging model. In this study, the chromosome is encoded as a binary string representing the instance subset. In this phase, the population size was set to 100 and the maximum number of generations to 150; the crossover rate and mutation rate were set to 0.7 and 0.1, respectively. We used the prediction accuracy of the model as the fitness function of the GA: the SVM model is trained on the training data set using the selected instance subset, and its prediction accuracy over the test data set is used as the fitness value in order to avoid overfitting. In the second phase, we used the optimal instance subset selected in the first phase as the input data of the bagging model, with the SVM model as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real data set from Korean companies. The research data contain 1832 externally non-audited firms, comprising bankruptcy (916 cases) and non-bankruptcy (916 cases) firms. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and we selected 8 financial ratios as the final input variables. We separated the whole data into three subsets: training, test, and validation data sets. We compared the proposed model with several comparative models, including a simple individual SVM model, a simple bagging model, and an instance-selection-based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the other models.
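The two phases above can be sketched with standard-library stand-ins. A nearest-centroid rule replaces the SVM base classifier, the GA is reduced to elitist mutation (no crossover), and the toy data, population size, and mutation rate are all illustrative, far smaller than the paper's settings.

```python
# Phase 1: a tiny GA selects a training-instance subset (chromosome =
# binary mask), scored by held-out accuracy. Phase 2: bagging trains
# base classifiers on bootstrap samples of that subset and combines
# them by majority vote. Everything here is an illustrative stand-in.
import random
random.seed(0)

# Toy 1-D data: class 0 near 0.0, class 1 near 1.0; last two points are
# mislabeled noise that instance selection should tend to drop.
X = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1, 0.95, 0.05]
y = [0,   0,   0,   1,   1,   1,   0,    1]

def train(idx):
    """Nearest-centroid classifier (stand-in for SVM) on instances idx."""
    c0 = [X[i] for i in idx if y[i] == 0]
    c1 = [X[i] for i in idx if y[i] == 1]
    if not c0 or not c1:
        return None
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
    return lambda v: 0 if abs(v - m0) <= abs(v - m1) else 1

VALID = range(6)  # clean points used as the held-out fitness set

def fitness(mask):
    idx = [i for i in range(len(X)) if mask[i]]
    clf = train(idx)
    return sum(clf(X[i]) == y[i] for i in VALID) / len(VALID) if clf else 0.0

# Elitist GA: keep the fittest chromosome, refill by bit-flip mutation.
pop = [[random.randint(0, 1) for _ in X] for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    pop = [pop[0]] + [[b ^ (random.random() < 0.1) for b in p]
                      for p in pop[:10] for _ in (0, 1)]
best = max(pop, key=fitness)
selected = [i for i in range(len(X)) if best[i]]

# Phase 2: bagging over the selected subset with majority voting.
def bagged_predict(v, n_models=9):
    votes = []
    for _ in range(n_models):
        boot = [random.choice(selected) for _ in selected]
        clf = train(boot)
        if clf:
            votes.append(clf(v))
    return max(set(votes), key=votes.count) if votes else 0

print(bagged_predict(0.05), bagged_predict(0.95))
```

Scoring fitness on a held-out set rather than the training set mirrors the paper's reason for using test-set accuracy as the fitness value: it keeps the GA from rewarding subsets that merely memorize the noise.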

Analyzing the Issue Life Cycle by Mapping Inter-Period Issues (기간별 이슈 매핑을 통한 이슈 생명주기 분석 방법론)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.25-41
    • /
    • 2014
  • Recently, the number of social media users has increased rapidly because of the prevalence of smart devices. As a result, the amount of real-time data has been increasing exponentially, which, in turn, is generating more interest in using such data to create added value. For instance, several attempts are being made to analyze the search keywords that are frequently used on portal sites and the words that are regularly mentioned on various social media in order to identify social issues. The technique of "topic analysis" is employed to identify topics and themes from a large number of text documents. As one of the most prevalent applications of topic analysis, the technique of issue tracking investigates changes in the social issues that are identified through topic analysis. Currently, traditional issue tracking is conducted by identifying the main topics of the documents covering the entire period at once and then analyzing the occurrence of each topic by period. However, this traditional approach has two limitations. First, when a new period is added, topic analysis must be repeated for all the documents of the entire period, rather than being conducted only on the new documents of the added period. This creates practical limitations in the form of significant time and cost burdens, so the traditional approach is difficult to apply in most settings that require analysis of additional periods. Second, issues are not only constantly created and terminated; a single issue can sometimes split into several issues, and multiple issues can merge into one. In other words, each issue has a life cycle consisting of the stages of creation, transition (merging and segmentation), and termination. Existing issue tracking methods do not address the connections and effect relationships between these issues. 
The purpose of this study is to overcome these two limitations of the existing issue tracking method: the limitation of the analysis method itself and the lack of consideration of the changeability of issues. Suppose we perform a separate topic analysis for each of multiple periods. It is then essential to map the issues of different periods in order to trace issue trends. However, it is not easy to discover connections between the issues of different periods, because the issues derived for different periods are mutually heterogeneous. In this study, to overcome these limitations, the analysis is performed independently for each period, without having to analyze the entire period's documents simultaneously, and issue mapping is then performed to link the identified issues across periods. In this way, an integrated view of the individual periods is presented, and the issue flow over the entire integrated period is depicted. Because the entire issue life cycle, including the stages of creation, transition (merging and segmentation), and extinction, is identified and examined systematically, the changeability of issues is also analyzed. The proposed methodology is highly efficient in terms of time and cost while sufficiently considering the changeability of issues, and the results of this study can be used to adapt the methodology to practical situations. By applying the proposed methodology to actual Internet news, its potential practical applications are analyzed. The proposed methodology was able to extend the period of the analysis and to follow the course of each issue's life cycle, and it can facilitate a clearer understanding of complex social phenomena through topic analysis.
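The inter-period issue mapping idea can be sketched as follows. Cosine similarity over topic-word distributions is one plausible linking criterion, not necessarily the paper's exact measure, and the topic labels, word weights, and threshold are all invented for illustration.

```python
# Sketch of inter-period issue mapping: topics are derived independently
# per period (here, hand-made word-weight dicts), and issues of adjacent
# periods are linked when the cosine similarity of their word
# distributions clears a threshold. A one-to-many link indicates
# segmentation; many-to-one indicates merging; an unlinked issue has
# either just been created or just terminated.
from math import sqrt

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

period1 = {"T1": {"election": 0.5, "vote": 0.3, "poll": 0.2},
           "T2": {"flood": 0.6, "rain": 0.4}}
period2 = {"U1": {"election": 0.4, "vote": 0.4, "recount": 0.2},
           "U2": {"flood": 0.5, "damage": 0.5},
           "U3": {"rain": 0.5, "forecast": 0.5}}

def map_issues(prev, curr, threshold=0.3):
    return [(p, c) for p in prev for c in curr
            if cosine(prev[p], curr[c]) >= threshold]

links = map_issues(period1, period2)
print(links)  # T2 linking to both U2 and U3 marks a segmentation
```

Because each period is analyzed independently, adding a new period only requires one new topic analysis plus one mapping pass, which is the cost advantage the methodology claims over re-analyzing the whole corpus.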

Problems with ERP Education at College and How to Solve the Problems (대학에서의 ERP교육의 문제점 및 개선방안)

  • Kim, Mang-Hee;Ra, Ki-La;Park, Sang-Bong
    • Management & Information Systems Review
    • /
    • v.31 no.2
    • /
    • pp.41-59
    • /
    • 2012
  • ERP is a new technique of process innovation. It stands for enterprise resource planning, whose purpose is integrated total management of enterprise resources. ERP can also be seen as one of the latest management systems: using computers, it organically connects all business processes, including marketing, production, and delivery, and controls those processes on a real-time basis. Currently, however, it is not easy for local enterprises to find operators who can take charge of ERP programs, even when they want to introduce the resource management system. This suggests that it is urgently necessary to train such operators through ERP education at school. In the field of education, however, the lack of professional ERP instructors and the ineffectiveness of learning programs for industrial applications of ERP are obstacles to training ERP workers as competent as enterprises require. Within ERP, accounting is more important than any other function, and accountants are assuming more and more roles in ERP, so there is a rapidly increasing demand for experts in ERP accounting. This study examined previous research and literature concerning ERP education, identified problems with current ERP education at college, and proposed how to solve them. The proposed improvements are as follows. First, prerequisite learning for ERP, that is, education in the principles of accounting, should be intensified so that students acquire a sufficient basic theoretical knowledge of ERP. Second, many different scenarios designed for trying ERP programs in business settings should be created, and students should be taught to understand the incidents and events that take place in those scenarios and to apply that understanding when trying ERP for themselves. Third, as mentioned earlier, ERP is a system that integrates all enterprise resources, such as marketing, procurement, personnel management, remuneration, and production, under the framework of accounting. 
It should be noted that under ERP, business activities are organically connected with accounting modules. More importantly, those modules should be recognized not individually but as parts comprising a whole flow of accounting. This study has a limitation in that it is a literature review that relied heavily on previous studies, publications, and reports. This suggests the need to compare the efficiency of ERP education before and after applying what this study proposes. It is also necessary to determine students' and professors' perceived effectiveness of current ERP education and to compare and analyze the difference in perception between the two groups.


Accounting Conservatism and Excess Executive Compensation (회계 보수주의와 경영자 초과보상)

  • Byun, Seol-Won;Park, Sang-Bong
    • Management & Information Systems Review
    • /
    • v.37 no.2
    • /
    • pp.187-207
    • /
    • 2018
  • This study examines the negative relationship between accounting conservatism and excess executive compensation and tests whether this relationship strengthens as the intensity of managerial incentive compensation increases. For this purpose, a total of 2,755 company-years of companies listed on the Korea Stock Exchange from December 2012 to 2016 were selected as the final sample. The results of this study are as follows. First, there is a statistically significant negative relationship between accounting conservatism and manager overcompensation. This implies that, when manager compensation is linked to firm performance, managers have incentives to distort future cash flow estimates by overbooking assets or accounting profits in order to maximize their compensation. In this sense, accounting conservatism can reduce opportunistic behavior by restricting managerial accounting choices, which can be interpreted as a reduction in overcompensation to managers. Second, we found that the relationship between accounting conservatism and excess executive compensation strengthens with the intensity of incentive compensation for accounting performance. The higher the intensity of managerial incentive compensation for accounting performance, the more likely the manager is to have an incentive to make earnings adjustments; thus a high level of incentive compensation for accounting performance means that the ex post settling-up problem due to over-compensation can become serious. In this case, the higher the managerial incentive compensation intensity for accounting performance, the greater the role and utility of conservatism in manager compensation contracts. This study presents empirical evidence on the usefulness of accounting conservatism in managerial compensation contracts, as theoretically proposed by Watts (2003), and provides an additional basis for viewing conservatism as a useful tool for investment decisions.

Reproducibility of Regional Pulse Wave Velocity in Healthy Subjects

  • Im, Jae-Joong;Lee, Nak-Bum;Rhee, Moo-Yong;Na, Sang-Hun;Kim, Young-Kwon;Lee, Myoung-Mook;Cockcroft, John R.
    • International Journal of Vascular Biomedical Engineering
    • /
    • v.4 no.2
    • /
    • pp.19-24
    • /
    • 2006
  • Background: Pulse wave velocity (PWV), which is inversely related to the distensibility of an arterial wall, offers a simple and potentially useful approach for the evaluation of cardiovascular diseases. In spite of the clinical importance and widespread use of PWV, there exists no standard either for pulse sensors or for the system requirements for accurate pulse wave measurement. The objective of this study was to assess the reproducibility of PWV values using a newly developed PWV measurement system in healthy subjects prior to a large-scale clinical study. Methods: The system used for the study was the PP-1000 (Hanbyul Meditech Co., Korea), which provides regional PWV values based on simultaneous measurements of electrocardiography (ECG), phonocardiography (PCG), and pulse waves from four different arterial sites (carotid, femoral, radial, and dorsalis pedis). Seventeen healthy male subjects with a mean age of 33 years (range 22 to 52 years) without any cardiovascular disease participated in the experiment. Two observers (A and B) performed two consecutive measurements on the same subject in random order. To evaluate system reproducibility, two analyses (within-observer and between-observer) were performed, expressed in terms of mean difference ± 2SD, as described by Bland and Altman plots. Results: Mean and SD of PWVs for the aorta, arm, and leg were 7.07 ± 1.48 m/sec, 8.43 ± 1.14 m/sec, and 8.09 ± 0.98 m/sec as measured by observer A, and 6.76 ± 1.00 m/sec, 7.97 ± 0.80 m/sec, and 7.97 ± 0.72 m/sec by observer B, respectively. Between-observer differences (mean ± 2SD) for the aorta, arm, and leg were 0.14 ± 0.62 m/sec, 0.18 ± 0.84 m/sec, and 0.07 ± 0.86 m/sec, and the correlation coefficients were high, especially 0.93 for aortic PWV. 
Within-observer differences (mean ± 2SD) for the aorta, arm, and leg were 0.01 ± 0.26 m/sec, 0.02 ± 0.26 m/sec, and 0.08 ± 0.32 m/sec for observer A, and 0.01 ± 0.24 m/sec, 0.04 ± 0.28 m/sec, and 0.01 ± 0.20 m/sec for observer B, respectively. All the measurements showed significantly high correlation coefficients, ranging from 0.94 to 0.99. Conclusion: The PWV measurement system used for the study offers comfortable and simple operation and provides accurate analysis results with high reproducibility. Since the reproducibility of the measurement is critical for clinical diagnosis, it is necessary to provide an accurate algorithm for the detection of additional features such as the flow wave, reflection wave, and dicrotic notch from a pulse waveform. This study will be extended to compare PWV values from patients with various vascular risks for clinical application. The data acquired from the study could be used to determine the appropriate sample size for further studies of various types of arteriosclerosis-related vascular disease.
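The two quantities the paper reports can be sketched directly: regional PWV as arterial path length over pulse transit time, and Bland-Altman limits of agreement (mean difference ± 2SD) for the repeatability analysis. The numbers below are illustrative, not the paper's data.

```python
# PWV = arterial path length / pulse transit time; Bland-Altman
# agreement = mean difference between paired readings, with limits at
# mean ± 2SD of the differences. Example values are made up.
from statistics import mean, stdev

def pwv(distance_m, transit_time_s):
    """Pulse wave velocity in m/sec."""
    return distance_m / transit_time_s

def bland_altman(a, b):
    """Mean difference and (lower, upper) ± 2SD limits of agreement."""
    diffs = [x - y for x, y in zip(a, b)]
    d, s = mean(diffs), stdev(diffs)
    return d, (d - 2 * s, d + 2 * s)

print(pwv(0.5, 0.07))  # 0.5 m carotid-femoral path, 70 ms transit

obs_a = [7.1, 6.8, 7.5, 8.0, 6.9]   # observer A readings (m/sec)
obs_b = [7.0, 6.9, 7.3, 8.2, 6.8]   # observer B readings (m/sec)
print(bland_altman(obs_a, obs_b))
```

With real data, the same two readings per subject per observer would feed the within-observer analysis, and the observer-A versus observer-B means would feed the between-observer analysis.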

  • PDF

Study on the Consequence Effect Analysis & Process Hazard Review at Gas Release from Hydrogen Fluoride Storage Tank (최근 불산 저장탱크에서의 가스 누출시 공정위험 및 결과영향 분석)

  • Ko, JaeSun
    • Journal of the Society of Disaster Information
    • /
    • v.9 no.4
    • /
    • pp.449-461
    • /
    • 2013
  • As the hydrofluoric acid leak in Gumi-si, Gyeongsangbuk-do and the hydrochloric acid leak in Ulsan, Gyeongsangnam-do demonstrated, chemical accidents are mostly caused by large amounts of volatile toxic substances leaking from damaged storage tanks or transport pipelines. Safety assessment is the most important concern because such toxic-material accidents cause human and material damage to the environment and atmosphere of the surrounding area. Therefore, in this study, hydrofluoric acid leaking from a storage tank was selected as the example case, the dispersion of the leaked substance into the atmosphere was simulated, and the consequences were analyzed through numerical analysis and dispersion simulation with ALOHA (Areal Locations of Hazardous Atmospheres). The qualitative HAZOP (Hazard and Operability) evaluation found that flange leaks, operation delays due to leakage of valves and hoses, and toxic gas leaks were the main danger factors. The possibilities of fire arising from temperature, pressure, and corrosion, of nitrogen supply overpressure, and of toxic leaks from internal corrosion of the tank or pipe joints were also found to be high. The ALOHA results differed somewhat depending on the input data of the dense gas model; however, wind direction and speed, rather than atmospheric stability, played the bigger role, with higher wind speed promoting the dispersion of the contaminant. In terms of dispersion concentration, both liquid and gas leaks resulted in almost the same $LC_{50}$ and ALOHA AEGL-3 (Acute Exposure Guideline Level) values, and each scenario showed almost identical results in the ALOHA model. Therefore, a buffer distance for the toxic gas can be determined by comparing the numerical analysis and the dispersion concentration against the IDLH (Immediately Dangerous to Life or Health) level. Such a study will help perform risk assessments of toxic leaks more efficiently and can be used to establish a proper community emergency response system.
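The buffer-distance idea above can be illustrated with a much simpler model than ALOHA's dense gas treatment: walk downwind along the plume centerline until the predicted concentration falls below the IDLH threshold. This is only a sketch; the release rate, wind speed, Briggs-style dispersion coefficients (rural, stability class D), and the IDLH conversion for HF are all illustrative assumptions, not values from the study.

```python
# Simplified Gaussian-plume sketch of buffer-distance estimation against IDLH.
# ALOHA's dense gas model is far more elaborate; this only shows the comparison
# logic. All numeric inputs below are illustrative assumptions.
import math

def centerline_conc(q_g_per_s, u_m_per_s, x_m):
    """Ground-level centerline concentration (g/m^3) with ground reflection,
    using Briggs open-country sigmas for stability class D."""
    sigma_y = 0.08 * x_m / math.sqrt(1 + 0.0001 * x_m)
    sigma_z = 0.06 * x_m / math.sqrt(1 + 0.0015 * x_m)
    return q_g_per_s / (math.pi * u_m_per_s * sigma_y * sigma_z)

def buffer_distance(q, u, threshold_g_m3, step=10.0, max_x=20000.0):
    """Step downwind until the concentration drops below the threshold."""
    x = step
    while x < max_x and centerline_conc(q, u, x) > threshold_g_m3:
        x += step
    return x

IDLH_HF = 0.025  # g/m^3, roughly 30 ppm HF (illustrative conversion)
d = buffer_distance(q=500.0, u=3.0, threshold_g_m3=IDLH_HF)
print(f"estimated buffer distance: {d:.0f} m downwind")
```

Because the sigmas grow roughly linearly with distance, the concentration falls off roughly as 1/x², so doubling the wind speed or halving the release rate shrinks the buffer distance noticeably, consistent with the abstract's observation that wind speed dominates the dispersion result.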

Development of Cyber R&D Platform on Total System Performance Assessment for a Potential HLW Repository ; Application for Development of Scenario through QA Procedures (고준위 방사성폐기물 처분 종합 성능 평가 (TSPA)를 위한 Cyber R&D Platform 개발 ; 시나리오 도출 과정에서의 품질보증 적용 사례)

  • Seo Eun-Jin;Hwang Yong-soo;Kang Chul-Hyung
    • Proceedings of the Korean Radioactive Waste Society Conference
    • /
    • 2005.06a
    • /
    • pp.311-318
    • /
    • 2005
  • Transparency of the Total System Performance Assessment (TSPA) is the key issue for enhancing public acceptance of a permanent high-level radioactive waste repository. To achieve this, all TSPA work must pass through Quality Assurance. The integrated Cyber R&D Platform was developed by KAERI using the T2R3 principles applicable to the five major steps of R&D. The proposed system is implemented as a web-based system so that all TSPA participants can access it. It is composed of FEAS (FEPs to Assessment through Scenario development), which presents the systematic flow from FEPs to assessment methods; PAID (Performance Assessment Input Databases), which presents the PA (Performance Assessment) input data sets on the web; and a QA system recording those data. All information is integrated into the Cyber R&D Platform so that any data in the system can be checked whenever necessary. To make the system more user-friendly, an upgrade including an input data and documentation package is under development. In the next R&D phase, the Cyber R&D Platform will be connected with the TSPA assessment tool, so that all information can be searched in one unified system.
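The traceability idea behind FEAS can be sketched as a small data model: a scenario accepts a FEP only if that FEP carries a documented screening decision, so every entry remains auditable. This is a minimal illustration of the QA principle, not KAERI's actual implementation; all class and field names are hypothetical.

```python
# Minimal sketch of FEP-to-scenario screening with a QA traceability rule.
# Not the FEAS implementation; names and fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class FEP:
    fep_id: str
    description: str
    screened_in: bool   # QA screening decision
    justification: str  # documented rationale (the QA record)

@dataclass
class Scenario:
    name: str
    feps: list = field(default_factory=list)

    def add(self, fep: FEP):
        # QA rule: only screened-in FEPs with a recorded justification
        # may enter a scenario, keeping every entry traceable.
        if not (fep.screened_in and fep.justification):
            raise ValueError(f"{fep.fep_id} lacks a QA screening record")
        self.feps.append(fep)

base = Scenario("base case")
base.add(FEP("FEP-1.2", "groundwater flow through buffer", True, "QA-doc 0123"))
print([f.fep_id for f in base.feps])
```

The same gate-keeping pattern extends naturally to the PAID input data sets: an input value without provenance is rejected rather than silently used in the assessment.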

  • PDF