• Title/Summary/Keyword: Processing variables


A Study on the Overall Economic Risks of a Hypothetical Severe Accident in Nuclear Power Plant Using the Delphi Method (델파이 기법을 이용한 원전사고의 종합적인 경제적 리스크 평가)

  • Jang, Han-Ki;Kim, Joo-Yeon;Lee, Jai-Ki
    • Journal of Radiation Protection and Research
    • /
    • v.33 no.4
    • /
    • pp.127-134
    • /
    • 2008
  • The potential economic impact of a hypothetical severe accident at a nuclear power plant (Uljin units 3/4) was estimated by applying the Delphi method, which is based on expert judgements and opinions, in the process of quantifying uncertain factors. For the purpose of this study, it is assumed that the radioactive plume moves inland. Since the economic risk can be divided into direct costs and indirect effects, and more uncertainties are involved in the latter, the direct costs were estimated first and the indirect effects were then estimated by applying a weighting factor to the direct costs. The Delphi method, however, is subject to a risk of distortion or discrimination of variables because of human behavior patterns. A mathematical approach based on Bayesian inference was therefore employed for data processing to improve the Delphi results, and a model for data processing was developed for this task. One-dimensional Monte Carlo analysis was applied to obtain a distribution of values of the weighting factor. The mean and median values of the weighting factor for the indirect effects appeared to be 2.59 and 2.08, respectively. These values are higher than the value suggested by OECD/NEA, 1.25. Factors such as the small territory and a public attitude sensitive to radiation could affect the judgement of the panel. The parameters of the model for estimating the direct costs were then classified as U- and V-types, and two-dimensional Monte Carlo analysis was applied to quantify the overall economic risk. The resulting median of the overall economic risk was about 3.9% of the gross domestic product (GDP) of Korea in 2006. When the cost of electricity loss, the highest direct cost, was not taken into account, the overall economic risk was reduced to 2.2% of GDP. This assessment can be used as a reference for justifying radiological emergency planning and preparedness.
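The one-dimensional Monte Carlo step, sampling a distribution of the indirect-effect weighting factor, can be sketched as follows. The lognormal form and its parameters are an illustrative back-fit to the reported median (2.08) and mean (2.59), not the distribution the authors derived from their Delphi panel.

```python
import random
import statistics

def simulate_weighting_factor(n_samples=100_000, mu=0.73, sigma=0.67, seed=42):
    """One-dimensional Monte Carlo draw of the indirect-effect weighting factor.

    A lognormal is assumed purely for illustration; mu and sigma were chosen so
    that the median (exp(mu) ~ 2.08) and mean (exp(mu + sigma^2/2) ~ 2.60)
    roughly match the values reported in the abstract.
    """
    rng = random.Random(seed)
    samples = [rng.lognormvariate(mu, sigma) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.median(samples)

mean_w, median_w = simulate_weighting_factor()
```

Under these assumed parameters both statistics exceed the OECD/NEA suggestion of 1.25, consistent with the abstract's observation that the panel judged indirect effects to be relatively large.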

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.3
    • /
    • pp.153-163
    • /
    • 2017
  • The objective of this study is to enhance the speed of a model estimating weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model) based precipitation) that are applied in the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Nonetheless, a server cannot even be dedicated to handling a single county, indicating that very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. In order to reduce this overhead, several caching and parallelization techniques were used to measure performance and check applicability. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) is significantly greater than that for calculation, suggesting the need for a technique that reduces disk I/O bottlenecks; (2) when there are many I/O operations, it is advantageous to distribute them across several servers; however, each server must have a cache for input data so that the servers do not compete for the same resource; and (3) a GPU-based parallel processing method is most suitable for models with large computational loads, such as PRISM.
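Finding (1), that simple accumulations like Growing Degree Days are dominated by disk I/O rather than arithmetic, suggests caching each grid tile after its first read. A minimal sketch, with a hypothetical `load_grid_tile` standing in for the system's real raster reader:

```python
import functools

DISK_READS = {"count": 0}  # instrument how often we touch "disk"

@functools.lru_cache(maxsize=256)
def load_grid_tile(county_id, date):
    """Stand-in for reading one high-resolution grid tile from disk."""
    DISK_READS["count"] += 1
    # Placeholder 10x10 tile of daily mean temperatures (degrees C).
    return tuple(tuple(15.0 for _ in range(10)) for _ in range(10))

def accumulate_gdd(county_id, dates, base_temp=10.0):
    """Growing Degree Days: trivial per-cell arithmetic, so cost is I/O-bound."""
    total = 0.0
    for d in dates:
        tile = load_grid_tile(county_id, d)
        total += sum(max(c - base_temp, 0.0) for row in tile for c in row)
    return total

dates = tuple(f"2017-07-{day:02d}" for day in range(1, 11))
first = accumulate_gdd("gokseong", dates)   # 10 tile reads hit "disk"
second = accumulate_gdd("gokseong", dates)  # all 10 served from cache
```

Per finding (2), when work is spread across servers each server would hold its own such cache for the tiles it owns, so servers do not contend for the same input files.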

An Empirical Analysis on the Persistent Usage Intention of Chinese Personal Cloud Service (개인용 클라우드 서비스에 대한 중국 사용자의 지속적 사용의도에 관한 실증 연구)

  • Yu, Hexin;Sura, Suaini;Ahn, Jong-chang
    • Journal of Internet Computing and Services
    • /
    • v.16 no.3
    • /
    • pp.79-93
    • /
    • 2015
  • With the rapid development of information technology, the ways of using it have changed drastically. The ways and efficiency with which traditional services process data can no longer satisfy the requirements of modern users. Nowadays, users understand the importance of data, so the processing and storage of big data have become a main research focus of Internet service companies. In China, the rise and explosion of 115 Cloud has led other technology companies to join the battle for the cloud services market. Although Chinese cloud services are still dominated mainly by cloud storage, the series of services built on cloud storage has been affirmed by users, and users are willing to try these new kinds of services. Thus, how to keep users using cloud services has become a topic worth exploring and researching. Academia often uses the TAM model with statistical analysis to examine users' attitudes toward a system. However, the basic TAM model can no longer accommodate the increasing scale of systems, so appropriate expansion and adjustment of the TAM model (i.e., TAM2 or TAM3) is necessary. This study drew on the status of Chinese Internet users and related research in other areas to expand and improve the TAM model, adding brand influence, hardware environment and external environments to fulfill the purpose of this study. Based on the research model, questionnaires were developed and an online survey was conducted targeting cloud service users in four major Chinese cities. Data obtained from 210 respondents were used in the analysis to validate the research model. The analysis results show that the external factors of service contents and brand influence have a positive influence on perceived usefulness and perceived ease of use.
However, the external factor of hardware environment has a positive influence only on perceived ease of use. Furthermore, perceived security, which is influenced by brand influence, has a positive influence on persistent intention to use. Persistent intention to use was also influenced by perceived usefulness and by perceived ease of use. Finally, this research analyzed the attributes of the external variables from another perspective and tried to explain them. It shows that Chinese cloud service users are more interested in fundamental cloud services than in extended services. For personal cloud services, both an increased user base and cooperation among companies are important in this study. This study presents useful opinions for strengthening the attitudes of personal cloud service users so that they use the service persistently. Overall, it can be summarized that considering all three external factors could keep Chinese users using personal cloud services. In addition, the results of this study can provide strong references for technology companies whose main clients are Chinese users, including cloud service providers, Internet service providers, and smartphone service providers.

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of studies have been conducted to improve firms' short-term performance and to enhance firms' long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. Discovery of promising technology depends on how a firm evaluates the value of technologies, and thus many evaluation methods have been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. While this approach provides in-depth analysis and ensures the validity of the analysis results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Considerable research has attempted to forecast the value of technology by using patent information to overcome the limitations of the experts' opinion based approach. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of a technology in a uniform structure. Furthermore, it provides information that is not divulged in any other source. Although the patent information based approach has contributed to our understanding of the prediction of promising technologies, it has some limitations, because prediction has been made based on past patent information and the interpretations of patent analyses are not consistent. In order to fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of technology promisingness, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, technology promisingness is evaluated from three different and complementary dimensions: impact, fusion, and diffusion.
The impact of a technology refers to its influence on future technology development and improvement, and is also clearly associated with its monetary value. The fusion of technologies denotes the extent to which a technology fuses different technologies, and represents the breadth of search underlying the technology. Fusion can be calculated per technology or per patent, so this study measures two fusion indexes: fusion index per technology and fusion index per patent. Finally, the diffusion of technologies denotes their degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, ...) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, this study recommends final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promisingness index for each technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error for predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes; for the other indexes, however, the mean absolute error of the proposed methodology is slightly higher than that of multiple regression analysis.
These unexpected results may be explained, in part, by the small number of patents. Since this study only uses patent data in class G06F, the number of sample patents is relatively small, leading to learning that is incomplete for the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial intelligence network. It helps managers engaged in technology development planning, and policy makers who want to implement technology policy, by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex field of technological forecasting.
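The second module, a network trained by backpropagation to map lagged index values onto the five indexes at time t, can be sketched with synthetic data (the paper's real inputs are indexes computed from G06F patents, which are not reproduced here; all dimensions and learning-rate choices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_lags, n_idx, hidden = 3, 5, 16           # lags t-n .. t-n-2; five indexes
X = rng.normal(size=(200, n_lags * n_idx))           # lagged index values
true_W = rng.normal(size=(n_lags * n_idx, n_idx))
Y = np.tanh(0.1 * X @ true_W)                        # synthetic "time t" targets

W1 = rng.normal(scale=0.1, size=(n_lags * n_idx, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, n_idx))

def forward(X):
    h = np.tanh(X @ W1)                              # hidden layer
    return h, h @ W2                                 # linear output layer

_, pred0 = forward(X)
mae_before = np.abs(pred0 - Y).mean()

for _ in range(2000):                                # plain gradient descent
    h, pred = forward(X)
    err = pred - Y                                   # output-layer error
    gW2 = h.T @ err / len(X)
    gh = (err @ W2.T) * (1.0 - h**2)                 # backprop through tanh
    gW1 = X.T @ gh / len(X)
    W2 -= 0.1 * gW2
    W1 -= 0.1 * gW1

_, pred = forward(X)
mae_after = np.abs(pred - Y).mean()                  # analogue of the paper's MAE
```

Mean absolute error on the training data drops substantially, mirroring the evaluation metric the abstract compares against multiple regression.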

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in competitions such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolution Neural Network (CNN). CNN is characterized in that the input image is divided into small sections to recognize partial features, which are then combined to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing. The use of deep learning techniques for business problems is still at an early research stage. If their performance is proved, they can be applied to traditional business problems such as marketing response prediction, fraud transaction detection, bankruptcy prediction, and so on. It is therefore a very meaningful experiment to diagnose the possibility of solving business problems using deep learning technologies, based on the case of online shopping companies, which have big data, whose customer behavior is relatively easy to identify, and which have high utilization value. In online shopping companies especially, the competitive environment is changing rapidly and becoming more intense, so the analysis of customer behavior for maximizing profit is becoming more and more important. In this study, we propose a 'CNN model of heterogeneous information integration' using CNN as a way to improve the prediction of customer behavior in online shopping enterprises.
The proposed model combines structured and unstructured information and learns through a convolution neural network built on a multi-layer perceptron structure. To optimize its performance, we design three architectural elements, 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', evaluate the performance of each architecture, and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC data of a specific online shopping company in Korea. Data extraction criteria were defined for the 47,947 customers who registered at least one VOC in January 2011 (one month). The customer profiles of these customers, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month were used. The experiment is divided into two stages. In the first stage, we evaluate the three architectures that affect the performance of the proposed model and select optimal parameters; in the second stage, we evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to solve business problems as well as image recognition and natural language processing problems.
The experiments also confirm that CNN is effective in understanding and interpreting the meaning of context in textual VOC data. It is significant that empirical research based on the actual data of an e-commerce company can extract very meaningful information for the prediction of customer behavior from VOC data written in text format directly by customers. Finally, through various experiments, the proposed model provides useful information for future research related to parameter selection and performance.
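A forward-pass sketch of the integration idea: a 1-D convolution with max-over-time pooling summarizes a VOC text given as word vectors, and the pooled features are concatenated with structured customer features before a binary decision (here, churn). All dimensions and weights are illustrative random values, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, emb_dim, n_filters, width = 30, 8, 4, 3

voc_text = rng.normal(size=(seq_len, emb_dim))     # VOC text as word vectors
structured = rng.normal(size=5)                    # e.g. recency, spend, refunds

filters = rng.normal(scale=0.1, size=(n_filters, width, emb_dim))

# ReLU convolution over every window of `width` consecutive words
conv = np.array([
    [max((voc_text[i:i + width] * f).sum(), 0.0)
     for i in range(seq_len - width + 1)]
    for f in filters
])
pooled = conv.max(axis=1)                          # max-over-time pooling

features = np.concatenate([pooled, structured])    # heterogeneous integration
w = rng.normal(scale=0.1, size=features.shape)
p_churn = 1.0 / (1.0 + np.exp(-(features @ w)))    # sigmoid: P(churn)
```

In the paper's setting the concatenated features would feed a trained multi-layer perceptron and there would be one such output per binary classification task; this sketch keeps a single linear-sigmoid head for brevity.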

A Double-Blind Comparison of Paroxetine and Amitriptyline in the Treatment of Depression Accompanied by Alcoholism : Behavioral Side Effects during the First 2 Weeks of Treatment (주정중독에 동반된 우울증의 치료에서 Paroxetine과 Amitriptyline의 이중맹 비교 : 치료초기 2주 동안의 행동학적 부작용)

  • Yoon, Jin-Sang;Yoon, Bo-Hyun;Choi, Tae-Seok;Kim, Yong-Bum;Lee, Hyung-Yung
    • Korean Journal of Biological Psychiatry
    • /
    • v.3 no.2
    • /
    • pp.277-287
    • /
    • 1996
  • Objective : It has been proposed that cognition and related aspects of mental functioning are decreased in depression as well as in alcoholism. The objective of the study was to compare behavioral side effects of paroxetine and amitriptyline in depressed patients accompanied by alcoholism. The focused comparisons were drug effects on psychomotor performance, cognitive function, sleep and daytime sleepiness during the first 2 weeks of treatment. Methods : After an alcohol detoxification period (3 weeks) and a washout period (1 week), a total of 20 male inpatients with alcohol use disorder (DSM-IV), who also had a major depressive episode (DSM-IV), were treated double-blind with paroxetine 20mg/day (n=10) or amitriptyline 25mg/day (n=10) for 2 weeks. All patients were required to have a score of at least 18 on both the Hamilton Rating Scale for Depression (HAM-D) and the Beck Depression Inventory (BDI) at pre-drug baseline. Patients randomized to paroxetine received active medication in the morning and placebo in the evening, whereas those randomized to amitriptyline received active medication in the evening and placebo in the morning. All patients performed the various tasks in a test battery at baseline and at days 3, 7 and 14. The test battery included: critical flicker fusion threshold for sensory information processing capacity; choice reaction time for gross psychomotor performance; tracking accuracy and latency of response to a peripheral stimulus as a measure of fine sensorimotor co-ordination and divided attention; and digit symbol substitution as a measure of sustained attention and concentration. To rate perceived sleep and daytime sleepiness, 10cm visual analogue scales were employed at baseline and at days 3, 7 and 14. The subjective rating scales were adapted for this study from the Leeds Sleep Evaluation Questionnaire and the Epworth Sleepiness Scale.
In addition, a comprehensive side effect assessment using the UKU side effect rating scale was carried out at baseline and at days 7 and 14. The efficacy of treatment was evaluated using the HAM-D, BDI and clinical global impression of severity and improvement at days 7 and 14. Results : The pattern of results indicated that paroxetine improved performance on most of the test variables and also improved sleep, with no effect on daytime sleepiness over the study period. In contrast, amitriptyline disrupted performance on some tests and improved sleep with increased daytime sleepiness, particularly at day 3. On the UKU side effect rating scale, more side effects were registered on amitriptyline. Therapeutic efficacy was observed in favor of paroxetine as early as day 7. Conclusion : These results demonstrate that paroxetine is much better than amitriptyline for the treatment of depressed patients accompanied by alcoholism, at least in terms of behavioral safety and tolerability. Furthermore, the results may help explain the therapeutic outcome of paroxetine; for example, an earlier onset of antidepressant action of paroxetine may be caused by early improvement of cognitive function or by contributing to good compliance with treatment.


The Effects of Price Salience on Consumer Perception and Purchase Intentions (개격현저대소비자감지화구매의도적영향(价格显著对消费者感知和购买意图的影响))

  • Martin-Consuegea, David;Millan, Angel;Diaz, Estrella;Ko, Eun-Ju
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.2
    • /
    • pp.149-163
    • /
    • 2010
  • Previous studies have shown that retail price promotions change consumers' purchase behavior and that retailers use price promotions more frequently. Keeping the benefits received by consumers constant, there are several ways for retailers to communicate a price promotion. For example, retailers can present a price reduction in absolute terms ($, €), percentage terms (%), or some combination of these two methods (Della Bitta et al. 1981). Communicating a price promotion in different ways is similar to the framing of purchase decisions (Monroe 1990). Framing effects refer to the finding that subjects respond differently to different descriptions of the same decision question (Frisch 1993). Thus, the presentation of the promotion has an impact on consumer deal evaluation and hence on retail sales. In fact, much research in marketing attests to the effects of price presentation on deal perception (Lichtenstein and Bearden 1989; Urbany et al. 1988; Yadav and Monroe 1993). In this sense, a number of marketing researchers have argued that deal perceptions are also determined by the degree to which consumers are able to calculate the discounts and final purchase prices accurately (Estelami 2003a; Morwitz et al. 1998), which suggests that marketers may be able to enhance responses to discounts by improving calculation accuracy. Consequently, since calculation inaccuracies in the aggregate lead to the underestimation of discounts (Kim and Kramer 2006), consumers are more likely to appreciate a discounted offer following deeper processing of price information that enables them to evaluate a price discount more accurately. The purpose of this research is to examine the effect of different presentations of discount prices on consumer price perceptions.
To be more precise, the purpose of this study is to investigate how different implementations of the same price promotion (semantic and visual salience) affect consumers' perceptions of the promotion and their purchase decisions. Specifically, the analysis focuses on the effect of price presentation on evaluation, purchase intentions and perception of savings. In order to verify the hypotheses proposed in the research, this paper presents an experimental analysis dealing with several discount presentations. In this sense, a 2 (numerical salience presentation: absolute vs. relative) × 2 (worded salience presentation: novel vs. traditional) × 2 (visual salience: red vs. blue) design was employed to investigate the effects of discount presentation on three dependent variables: evaluation, purchase intentions and perception of savings. Respondents were exposed to a hypothetical advertisement that they had to evaluate and were informed of the offer conditions. Once the sample finished evaluating the advertisement, they answered a questionnaire related to price salience and the dependent dimensions. Then, manipulation checks were conducted to ensure that respondents remembered their treatment conditions. Next, a 2 × 2 × 2 MANOVA and follow-up univariate tests were conducted to verify the research hypotheses and to examine the effects of the individual factors (price salience) on evaluation, purchase intentions and perceived savings. The results of this research show that semantic and visual salience presentations have significant main effects and interactions on evaluation, purchase intentions and perception of savings. Significant numerical salience interactions affected evaluation and purchase intentions. Additionally, a significant worded salience main effect on perception of savings, and interactions on evaluation and purchase intentions, were found. Finally, visual salience interactions have significant effects on evaluation.
The main findings of this research suggest practical implications that firms should consider when planning promotion-based discounts to attract consumer attention. Because price presentation has important effects on consumer perception, retailers should consider which effect is wanted in order to design an effective discount presentation. Specifically, retailers should present discounts in a traditional style that facilitates final price calculation. It is thus important to investigate ways in which marketers can enhance the accuracy of consumers' mental arithmetic to improve responses to price discounts. This preliminary study on the effect of price presentation on consumer perception and purchase intentions opens a line for further research. The results obtained in this research may have been determined by a number of limiting conceptual and methodological factors. The research deals with a variety of discount presentations as well as with their effects; however, the analysis could include additional salience dimensions and effects on consumers. Furthermore, a similar study could be carried out with a larger, more inclusive and heterogeneous sample of consumers. In addition, the experiment did not require sample individuals to actually buy the product, so it is advisable to compare the effects obtained in the research with real consumer behavior and perception.

The Validity and Reliability of 'Computerized Neurocognitive Function Test' in the Elementary School Child (학령기 정상아동에서 '전산화 신경인지기능검사'의 타당도 및 신뢰도 분석)

  • Lee, Jong-Bum;Kim, Jin-Sung;Seo, Wan-Seok;Shin, Hyoun-Jin;Bai, Dai-Seg;Lee, Hye-Lin
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.11 no.2
    • /
    • pp.97-117
    • /
    • 2003
  • Objective: The aim of this study was to examine the validity and reliability of the Computerized Neurocognitive Function Test among normal children in elementary school. Methods: The K-ABC, K-PIC, and Computerized Neurocognitive Function Test were administered to 120 normal children (10 males and 10 females in each grade) from June 2002 to January 2003. These children had above-average intelligence and were not excluded by the rule-out criteria. To verify test-retest reliability, the Computerized Neurocognitive Function Test was carried out again 4 weeks later on 30 randomly selected children. Results: In the correlation analysis for validity, results on the four continuous performance tests matched those reported for adults. In the memory tests, the results replicated previous research, with a difference between the forward and backward tests of short-term memory. The higher cognitive function tests each consisted of tests with different purposes. After performing factor analysis on 43 variables from the 12 tests, 10 factors were extracted, accounting for 75.5% of the total variance. The factors were, in order: sustained attention, information processing speed, vigilance, verbal learning, allocation of attention and concept formation, flexibility, concept formation, visual learning, short-term memory, and selective attention. In the correlation with the K-ABC, conducted to prepare explanatory criteria, selectively significant correlations (p<.05-.001) were found with the K-ABC subscales. In the test-retest reliability analysis, practice effects were found, especially prominent in the higher cognitive function tests. However, the split-half reliability (r=0.548-0.7726, p<.05) and internal consistency (0.628-0.878, p<.05) of each group examined were significantly high. Conclusion: The performance of the Computerized Neurocognitive Function Test in normal children showed different developmental characteristics than in adults. Basic information for preparing explanatory criteria could be acquired by examining the relation with a standardized intelligence test that has a neuropsychological background.


Complex Terrain and Ecological Heterogeneity (TERRECO): Evaluating Ecosystem Services in Production Versus water Quantity/quality in Mountainous Landscapes (산지복잡지형과 생태적 비균질성: 산지경관의 생산성과 수자원/수질에 관한 생태계 서비스 평가)

  • Kang, Sin-Kyu;Tenhunen, John
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.12 no.4
    • /
    • pp.307-316
    • /
    • 2010
  • Complex terrain refers to irregular surface properties of the earth that influence gradients in climate, lateral transfer of materials, the landscape distribution of soil properties, habitat selection by organisms, and, via human preferences, the patterning of land use development. The complex terrain of mountainous areas represents ca. 20% of the Earth's terrestrial surface, and such regions provide fresh water to at least half of humankind. Most major river systems originate in such terrain, and their resources are often associated with socio-economic competition and political disputes. The goals of the TERRECO-IRTG focus on building a bridge between ecosystem understanding in complex terrain and spatial assessments of ecosystem performance with respect to derived ecosystem services. More specifically, a coordinated assessment framework will be developed from landscape- to regional-scale applications to quantify trade-offs, and will be applied to determine how shifts in climate and land use in complex terrain influence naturally derived ecosystem services. Within the scope of TERRECO, abiotic and biotic studies of water yield and quality, production and biodiversity, soil processing of materials, and trace gas emissions in complex terrain are merged. There is a need to quantitatively understand 1) the ecosystem services derived in regions of complex terrain, 2) the process regulation that maintains those services, and 3) the sensitivities that define thresholds critical to the stability of these systems. The TERRECO-IRTG is dedicated to the joint study of ecosystems in complex terrain from landscape to regional scales. Our objectives are to reveal the spatial patterns in the driving variables of essential ecosystem processes involved in the ecosystem services of complex terrain regions, to evaluate the resulting ecosystem services, and to provide new tools for understanding and managing such areas.

Study of Heating Methods for Optimal Taste and Swelling of Sea-cucumber (가열방법에 따른 해삼의 최대 팽윤 및 기호성 향상 연구)

  • Jung, Yeon-Hun;Yoo, Seung-Seok
    • Korean journal of food and cookery science
    • /
    • v.30 no.6
    • /
    • pp.670-678
    • /
    • 2014
  • The purpose of this study was to find the optimal swelling method and conditions for sea-cucumber to improve its taste and texture, accommodating the rapid increase in consumption. Another purpose was to determine an easy way to soak dried sea-cucumber under different conditions and to identify the influence of swelling time on the texture of sea-cucumber, in order to reduce preparation time and provide basic data for easy handling. After boiling or steaming for six different periods, including 5, 15, 30 and 60 minutes, the textures of the sea-cucumbers were compared. For the additive test, the sea-cucumbers were boiled for a 30-minute period with 4 different additives and the textures were compared. Since texture is an important characteristic of sea-cucumber, many variables affect this property, including the drying and preservation methods. This study provides a basic understanding of the influence of heating method, time and temperature on the swelling of sea-cucumber for handy use at processing sites.