
Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined category labels such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Because real online reviews are openly available to others, they are not only easy to collect but can also affect a business. In marketing, real-world information from customers is gathered from websites rather than surveys. Whether a website's posts are positive or negative is reflected in customer response and sales, so firms try to identify this information. However, many reviews on a website are not always good, and they are difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon orientation, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set. 
First, as comparative models, the text classification algorithms related to sentiment analysis adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning can extract complex, discriminative features from data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. RNN handles ordered data well because it takes the time information of the data into account, but it suffers from the long-term dependency problem. To solve the problem of long-term dependence, LSTM is used. For the comparison, CNN and LSTM were chosen as simple deep learning models. In addition to classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and we tried to figure out how well the models work for sentiment analysis and how they work. This study proposes integrated CNN and LSTM algorithms to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing. LSTM is not capable of highly parallel processing, but, like faucets, its input, output, and forget gates can be opened and closed at the desired time. These gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the RNN's long-term dependency problem. 
Furthermore, when the LSTM is attached to the CNN's pooling layer, the model has an end-to-end structure, so spatial and temporal features can be modeled simultaneously. The combined CNN-LSTM achieved 90.33% accuracy. It is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, each word embedding layer can be improved by training the kernel step by step. CNN-LSTM can offset the weaknesses of each model, with the additional advantage of improving learning layer by layer through the end-to-end structure. Based on these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
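The pipeline the abstract describes — an embedded review passed through a convolution layer, pooled, then fed to an LSTM whose last hidden state drives a single logistic polarity unit — can be sketched as a shape check in plain NumPy. Everything here (dimensions, the single conv/pool/LSTM stack, random weights) is an illustrative assumption, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; not the paper's hyperparameters)
batch, seq_len, embed_dim = 4, 100, 32   # a mini-batch of embedded reviews
n_filters, kernel = 16, 5                # 1-D convolution settings
hidden = 8                               # LSTM hidden size

x = rng.standard_normal((batch, seq_len, embed_dim))

# --- CNN part: 1-D convolution over the word axis + max-pooling ---
W_conv = rng.standard_normal((kernel, embed_dim, n_filters)) * 0.1
conv_len = seq_len - kernel + 1
conv = np.zeros((batch, conv_len, n_filters))
for t in range(conv_len):
    # each window of `kernel` word vectors is projected onto n_filters feature maps
    conv[:, t] = np.tensordot(x[:, t:t+kernel], W_conv, axes=([1, 2], [0, 1]))
conv = np.maximum(conv, 0)                                             # ReLU
pooled = conv.reshape(batch, conv_len // 2, 2, n_filters).max(axis=2)  # pool size 2

# --- LSTM part: consume the pooled feature sequence, keep the last hidden state ---
def lstm_last_hidden(seq, hidden):
    d = seq.shape[-1]
    Wx = rng.standard_normal((d, 4 * hidden)) * 0.1
    Wh = rng.standard_normal((hidden, 4 * hidden)) * 0.1
    h = np.zeros((seq.shape[0], hidden)); c = np.zeros_like(h)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(seq.shape[1]):
        z = seq[:, t] @ Wx + h @ Wh
        i, f, o, g = np.split(z, 4, axis=1)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # forget/input gates
        h = sigmoid(o) * np.tanh(c)                    # output gate
    return h

h_last = lstm_last_hidden(pooled, hidden)

# --- Classifier head: one logistic unit for positive/negative polarity ---
w_out = rng.standard_normal((hidden, 1)) * 0.1
prob_positive = 1.0 / (1.0 + np.exp(-(h_last @ w_out)))
print(prob_positive.shape)  # one polarity score per review
```

In real training the whole stack would be differentiated end to end, which is exactly the property the abstract credits for letting spatial and temporal features be learned together.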

Geochemical Equilibria and Kinetics of the Formation of Brown-Colored Suspended/Precipitated Matter in Groundwater: Suggestion to Proper Pumping and Turbidity Treatment Methods (지하수내 갈색 부유/침전 물질의 생성 반응에 관한 평형 및 반응속도론적 연구: 적정 양수 기법 및 탁도 제거 방안에 대한 제안)

  • 채기탁;윤성택;염승준;김남진;민중혁
    • Journal of the Korean Society of Groundwater Environment / v.7 no.3 / pp.103-115 / 2000
  • The formation of brown-colored precipitates is one of the serious problems frequently encountered in the development and supply of groundwater in Korea, because the water then exceeds the drinking water standard in terms of color, taste, turbidity and dissolved iron concentration, and it often results in scaling problems within the water supply system. In groundwaters from the Pajoo area, brown precipitates typically form a few hours after pumping-out. In this paper we examine the process of the brown precipitates' formation using equilibrium thermodynamic and kinetic approaches, in order to understand the origin and geochemical pathway of the generation of turbidity in groundwater. The results of this study are used to suggest not only a proper pumping technique to minimize the formation of precipitates but also an optimal design of water treatment methods to improve the water quality. The bed-rock groundwater in the Pajoo area belongs to the Ca-$HCO_3$ type that evolved through water/rock (gneiss) interaction. Based on SEM-EDS and XRD analyses, the precipitates are identified as amorphous, Fe-bearing oxides or hydroxides. In multi-step filtration with pore sizes of 6, 4, 1, 0.45 and 0.2 $\mu\textrm{m}$, the precipitates mostly fall in the colloidal size range (1 to 0.45 $\mu\textrm{m}$) but are concentrated (about 81%) in the range of 1 to 6 $\mu\textrm{m}$ in terms of mass (weight) distribution. The large amounts of dissolved iron possibly originated from the dissolution of clinochlore in cataclasite, which contains high amounts of Fe (up to 3 wt.%). The calculation of the saturation index (using the computer code PHREEQC), as well as the examination of pH-Eh stability relations, also indicates that the final precipitates are Fe-oxy-hydroxides formed by the change of water chemistry (mainly, oxidation) due to exposure to oxygen during the pumping-out of Fe(II)-bearing, reduced groundwater. 
After pumping-out, the groundwater shows progressive decreases of pH, DO and alkalinity with elapsed time, whereas turbidity increases and then decreases with time. The decrease of dissolved Fe concentration as a function of elapsed time after pumping-out is expressed by the regression equation Fe(II) = 10.1 exp(-0.0009t). The oxidation reaction due to the influx of free oxygen during the pumping and storage of groundwater results in the formation of brown precipitates, which depends on time, $P_{O_2}$ and pH. In order to obtain drinkable water quality, therefore, the precipitates should be removed by filtering after stepwise storage and aeration in tanks of sufficient volume for sufficient time. The particle size distribution data also suggest that step-wise filtration would be cost-effective. To minimize scaling within wells, continued (if possible) pumping within the optimum pumping rate is recommended, because this technique is most effective for minimizing the mixing between deep Fe(II)-rich water and shallow $O_2$-rich water. The simultaneous pumping of shallow $O_2$-rich water in different wells is also recommended.
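The reported first-order decay, Fe(II) = 10.1 exp(-0.0009t), implies a fixed half-life of ln(2)/0.0009 ≈ 770 units of elapsed time, which is one way to size the storage/aeration time suggested above. A minimal sketch (the units for t and for concentration are assumed to follow the study's measurements, which the abstract does not spell out):

```python
import math

# Regression reported above: Fe(II) = 10.1 * exp(-0.0009 t)
def fe_concentration(t, c0=10.1, k=0.0009):
    """Dissolved Fe(II) remaining at elapsed time t after pumping-out."""
    return c0 * math.exp(-k * t)

# First-order decay implies a fixed half-life, ln(2)/k, regardless of
# the starting concentration.
half_life = math.log(2) / 0.0009
print(round(half_life))  # ≈ 770 time units
```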


An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising (SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.167-194 / 2019
  • This research starts from the four basic concepts of incentive incompatibility, limited information, myopia and decision variable, which are confronted when making decisions in keyword bidding. In order to make these concepts concrete, four framework approaches are designed as follows: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, from the sponsor's perspective and through empirical tests, a statistical optimization model for constructing the portfolio of Sponsored Search Advertising (SSA) which can be used in portfolio decision making. Previous research to date formulates the CTR estimation model using CPC, Rank, Impression, CVR, etc., individually or collectively as the independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable. However, this classical model faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty usually raises questions about the credibility of CTR, along with practical management problems. Sponsors make decisions in keyword bids under limited information, and a strategic portfolio approach based on statistical models is necessary. In order to solve the problem in the classical SSA model, the new SSA model frame is designed on the basic assumption that Rank is the decision variable. Rank is proposed in many papers as the best decision variable for predicting CTR. Further, most of the search engine platforms provide the options and algorithms that make it possible to bid with Rank. 
Sponsors can therefore participate in keyword bidding with Rank, and this paper tries to test the validity of this new SSA model and its applicability to constructing the optimal portfolio in keyword bidding. The research process is as follows: in order to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing the keywords, selects the representative keywords for each category, shows the non-linear relationship, screens the scenarios for CTR and CPC estimation, selects the best-fit model through Goodness-of-Fit (GOF) tests, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover together with some strategic recommendations. Tests of the optimization models using these CTR/CPC estimation models are empirically performed with the objective functions of (1) maximizing CTR (CTR optimization model) and (2) maximizing expected profit reflecting CVR (CVR optimization model). Both the CTR and CVR optimization test results show that the suggested SSA model yields significant improvements and that this model is valid for constructing the keyword portfolio using the CTR/CPC estimation models suggested in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio due to myopia over their immediate low profit at present. In order to solve this problem, a Markov chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. The revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows: brand keywords are usually dominant in almost every aspect of CTR, CVR, expected profit, etc. 
It is now found that the generic keywords are the CTK and have spillover potential which might increase consumers' awareness and lead them to brand keywords; this is why generic keywords should be the focus in keyword bidding. The contributions of the thesis are to propose the novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of keywords, to propose statistical modelling and management based on Rank in constructing the keyword portfolio, and to perform empirical tests, propose new strategic guidelines focusing on the CTK, and propose the modified CVR optimization objective function reflecting the spillover effect instead of the previous expected profit models.
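With Rank as the decision variable, the portfolio construction described above reduces to choosing one Rank per keyword so as to maximize total clicks (or profit) under a spend constraint. The sketch below brute-forces that choice on a toy table; the keyword names, CTR/CPC numbers and budget are invented for illustration, and the paper's GOF-selected estimation models are not reproduced:

```python
from itertools import product

# Hypothetical per-keyword estimates of CTR and CPC at each Rank (1 = top).
ctr = {"brand":   {1: 0.080, 2: 0.050, 3: 0.030},
       "generic": {1: 0.040, 2: 0.030, 3: 0.020},
       "niche":   {1: 0.020, 2: 0.015, 3: 0.010}}
cpc = {"brand":   {1: 1.20, 2: 0.80, 3: 0.50},
       "generic": {1: 0.90, 2: 0.60, 3: 0.40},
       "niche":   {1: 0.50, 2: 0.35, 3: 0.25}}
impressions = {"brand": 1000, "generic": 2000, "niche": 500}
budget = 150.0  # total spend cap

keywords = list(ctr)
best_value, best_plan = -1.0, None
# Rank is the decision variable: enumerate one Rank choice per keyword
for ranks in product([1, 2, 3], repeat=len(keywords)):
    plan = dict(zip(keywords, ranks))
    clicks = {k: impressions[k] * ctr[k][r] for k, r in plan.items()}
    spend = sum(clicks[k] * cpc[k][r] for k, r in plan.items())
    total_clicks = sum(clicks.values())
    if spend <= budget and total_clicks > best_value:
        best_value, best_plan = total_clicks, plan

print(best_plan, round(best_value, 1))
```

For realistic portfolio sizes the enumeration would be replaced by an integer program, but the structure — Rank choices as decisions, estimated CTR/CPC as coefficients, budget as the constraint — stays the same.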

A Study on the Impact of Artificial Intelligence on Decision Making : Focusing on Human-AI Collaboration and Decision-Maker's Personality Trait (인공지능이 의사결정에 미치는 영향에 관한 연구 : 인간과 인공지능의 협업 및 의사결정자의 성격 특성을 중심으로)

  • Lee, JeongSeon;Suh, Bomil;Kwon, YoungOk
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.231-252 / 2021
  • Artificial intelligence (AI) is a key technology that will change the future the most. It affects industry as a whole and daily life in various ways. As data availability increases, artificial intelligence finds optimal solutions and infers/predicts through self-learning. Research and investment related to automation that discovers and solves problems on its own are ongoing continuously. Automation through artificial intelligence has benefits such as cost reduction and minimization of human intervention and of differences in human capability. However, there are side effects, such as limits on the artificial intelligence's autonomy and erroneous results due to algorithmic bias, and in the labor market it raises the fear of job replacement. Prior studies on the utilization of artificial intelligence have shown that individuals do not necessarily use the information (or advice) it provides. People are more sensitive to algorithm errors than to human errors, so they avoid algorithms after seeing them err, which is called "algorithm aversion." Recently, artificial intelligence has begun to be understood from the perspective of the augmentation of human intelligence, and interest has shifted to human-AI collaboration rather than AI alone. A study of 1,500 companies in various industries found that human-AI collaboration outperformed AI alone. In medicine, pathologist-deep learning collaboration reduced the pathologists' cancer diagnosis error rate by 85%. Leading AI companies, such as IBM and Microsoft, are starting to frame AI as augmented intelligence. Human-AI collaboration is emphasized in the decision-making process because artificial intelligence is superior in information-based analysis, while intuition remains a uniquely human capability, so human-AI collaboration can make optimal decisions. 
In an environment where change is getting faster and uncertainty increases, the need for artificial intelligence in decision-making will grow, and active discussions are expected on approaches that utilize artificial intelligence for rational decision-making. This study investigates the impact of artificial intelligence on decision-making, focusing on human-AI collaboration and the interaction between decision-makers' personal traits and advisor type. The advisors were classified into three types: human, artificial intelligence, and human-AI collaboration. We investigated the perceived usefulness of advice, the utilization of advice in decision making, and whether the decision-maker's personal traits are influencing factors. Three hundred and eleven adult male and female participants performed a task predicting the age of faces in photos, and the results showed that the advisor type does not directly affect the utilization of advice: decision-makers utilized advice only when they believed it could improve prediction performance. In the case of human-AI collaboration, decision-makers rated the perceived usefulness of advice higher regardless of their personal traits, and the advice was utilized more actively. If the advisor was artificial intelligence alone, decision-makers who scored high in conscientiousness, high in extroversion, or low in neuroticism rated the perceived usefulness of the advice higher and thus utilized it actively. This study has academic significance in that it focuses on human-AI collaboration, the subject of recent growing interest in artificial intelligence research. It has expanded the relevant research area by considering the role of artificial intelligence as an advisor in decision-making and judgment research, and, in terms of practical significance, it suggests views that companies should consider in order to enhance AI capability. 
To improve the effectiveness of AI-based systems, companies must not only introduce high-performance systems but also need employees who properly understand the digital information presented by AI and can add non-digital information to make decisions. Moreover, to increase the utilization of AI-based systems, task-oriented competencies, such as analytical skills and information technology capabilities, are important. In addition, it is expected that greater performance will be achieved if employees' personal traits are considered.
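Judge-advisor experiments like the one above commonly quantify the utilization of advice as "weight of advice": how far the final judgment moves from the initial estimate toward the advisor's value. The abstract does not state the exact measure used, so this formula is an assumption rather than the study's definition:

```python
def weight_of_advice(initial, advice, final):
    """WOA = |final - initial| / |advice - initial|.
    0 means the advice was ignored; 1 means it was adopted outright.
    (A common judge-advisor metric, assumed here; not confirmed by the abstract.)"""
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return abs(final - initial) / abs(advice - initial)

# E.g., a participant first guesses a face's age as 30, the advisor
# (human, AI, or human-AI team) says 40, and the final answer is 35:
print(weight_of_advice(30, 40, 35))  # 0.5
```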

A Study on Dance Historical Value of Jaein Line Dance by Han Seong-jun (한성준을 통해 본 재인 계통춤의 무용사적 가치 연구)

  • Choung, Soung Sook
    • (The) Research of the performance art and culture / no.19 / pp.347-378 / 2009
  • Those who were from Jaeincheon and Jaein line entertainers played great roles during the transition period from traditional society to modern society, and even at present their dances are the representative traditional dances of Korea and the matrix of Korean originality. Nevertheless, the Korean dance field has given little importance to these dances, and too much importance to Gibang dance, in studying traditional dances, which causes the studies on Jaein line dances to be superficial or fragmentary. Therefore, the aims of this article are to analyze the dances of the Jaein line through Han Seong-jun, who was representative of the dances, and to appraise their historical value. Han Seong-jun (1874-1942) was the most influential drummer and dancer of his day in the Japanese colonial period, and has been recognized as one of the masters of traditional dances. He established the autonomy of traditional dances by reorganizing, collecting and stage-formalizing the dances, and systemized ways of transmitting various folk dances including a Buddhist dance, which made it possible for those dances to become traditional dances of Korea and the basis for creative dances. The values of the Jaein line dances, which were transmitted through Han, are the following: First, the dances have been designated as national or regional intangible cultural assets, and, as representative traditional arts, we proudly show them to the world. Second, the dances, as one of the genres of Korean dance, are the subjects of younger scholars' studies. Third, the dances have become one of the representative examples of revivals of traditional dances, which tended toward extinction during the modernization period, and they contribute to establishing national identity and subjectivity. In addition, they contribute to discovering and transmitting other traditional dances. 
Fourth, the dances have led many dancers to form an association, namely the Association for the Preservation of Traditional Dances, for transmitting the dances, and to distribute them and induce many dancers to transmit them. Furthermore, as new performance repertoires, they give another pleasure to the audience. In addition to the above, as a base for the expansion of Korean creative dances, Han's dances have other values such as the following: First, in searching for a new methodology for creation, he played an important role in rediscovering the foundation in the tradition, and tried to discover national identity by employing the traditional dances for the expression of themes. Second, he contributed to drastically dissolving the genres by expanding the gesture language from the motion factors of traditional dances, which can be compared to modern dance. Third, he tried new challenging approaches to re-create the tradition, and contributed to pursuing the simple elements of our traditional dances as traditional aesthetics. While the dances of the Jaein line have such values as the above, there are also some problems around the dances, such as the confusion in the process of transmission resulting from different transmission forms and transmitters, which we must no longer leave as it is. Furthermore, it is urgent that the rest of the Jaein line dances be recovered and designated as intangible cultural assets for the sound transmission of the traditional dances.

An Interpretation of the Korean Fairy-Tale "Borrowed Fortune From Heaven" From the Perspective of Analytical Psychology (한국민담 <하늘에서 빌려온 복>에 대한 분석심리학적 이해)

  • Kihong Baek
    • Sim-seong Yeon-gu / v.38 no.1 / pp.112-160 / 2023
  • This study examined the Korean folk tale "Borrowed Fortune from Heaven" from the perspective of Analytical Psychology, considering it a manifestation of the human psyche, and tried to gain a deeper understanding of what happens in our mind. Through this exploration, the researcher was able to re-identify the ongoing psychological process operating in the depths of our mind, pertaining to the emergence of a new dimension of consciousness. In particular, the researcher was able to gain some insights into how the potential psychic elements for the new consciousness are prepared in the unconscious, how they get integrated into conscious life, and what is essential for the accomplishment of the process. The tale begins with a poor woodcutter who, in order to escape from poverty, starts gathering twice as much firewood. However, the newly acquired amount disappears overnight, so the woodcutter becomes perplexed and curious about where it goes and who is taking it. He seeks to find out the truth, which leads him to an unexpected journey to Heaven. There he learns the truth concerning his very tiny amount of fortune, and discovers another big fortune destined for an unborn person. By pleading with the ruler of Heaven, the woodcutter borrows that grand fortune, on the condition that he must return it to the owner when the time comes. After that, the woodcutter's life undergoes a series of changes in which he finally becomes a wealthy farmer, but he is gradually reminded more and more that the destined time is approaching. In the end, the fortune is completely transferred to the original owner, resulting in a dramatic twist and the creation of new life circumstances. The overall plot can be understood as a reflection of the psychological process aiming at the evolution of consciousness through renewal. In this context, the woodcutter can be considered a psychic element that undergoes a continuous transformation in preparation for participating in the upcoming new consciousness. 
In other words, the changes brought about by this figure can be interpreted as a gradual and increasingly detailed foreshadowing of what the forthcoming new consciousness would be like. Interestingly, as the destined time approaches, the protagonist's anguish and conflict reach their climax, despite his good performance in his role until then. This effectively portrays the difficulty of achieving a new dimension of consciousness, which requires moving past the last step. All the events in the story ultimately converge at this point. After all, the resolution occurs when the protagonist lets go of everything he has and follows the will of Heaven. This implies what is essential for the renewal of consciousness: only by completely complying with the whole of the psyche can the potential constituents of the new consciousness, which should play important roles in its renewal and evolution, participate through experience in the ultimate outcome. As long as they remain trapped in any intermediate stage, the totality of the psyche will develop another detour aiming at the final destination, which means the beginning of another period of suffering carrying a purposeful meaning. The tale suggests that this truth applies wherever renewal of consciousness is directed, whether for an individual or a society.

A Study on the Long-term Hemodialysis Patient's Hypotension and the Prevention of Blood Loss in the Coil during Hemodialysis (장기혈액투석환자의 투석중 혈압하강과 Coil내 혈액손실 방지를 위한 기초조사)

  • 박순옥
    • Journal of Korean Academy of Nursing / v.11 no.2 / pp.83-104 / 1981
  • Hemodialysis is an essential treatment for the long-term care of chronic renal failure patients and for patient management before and after kidney transplantation. It sustains the life of the end-stage renal failure patient who does not improve despite a strict regimen, and furthermore it is an essential treatment for maintaining civil life. Nursing implementation in hemodialysis may have a significant effect on the patient's life. The purpose of this study was to obtain basic data to solve the hypotension problem encountered by patients and the blood loss problem, affecting hemodialysis patients' anemic state, caused by incomplete rinsing of blood from the coil throughout the hemodialysis process. The subjects for this study were 44 patients treated with hemodialysis 691 times in the hemodialysis unit. The data were collected at Gangnam St. Mary's Hospital from January 1, 1981 to April 30, 1981 by direct observation and clinical laboratory tests for laboratory data and body weight, and were analyzed using the chi-square test, t-test and analysis of variance. The results obtained are as follows: A. On clinical laboratory data and other data by dialysis procedure. The average initial body weight was 2.37±0.97 kg, and the average body weight after every dialysis was 2.33±0.9 kg. The subjects' average hemoglobin was 7.05±1.93 gm/dl and average hematocrit was 20.84±3.82%. Average initial blood pressure was 174.03±23.75 mmHg and after dialysis it was 158.45±25.08 mmHg. The subjects' average blood loss due to blood sampling for laboratory data was 32.78±13.49 cc/month. The subjects' average blood replacement for blood complementation was 1.31±0.88 pints/month per patient. B. On the hypotensive state and the coping approaches. The occurrence rate of hypotension was 28.08%, i.e., 194 cases among 691 dialyses. 1. 
In degrees of initial blood pressure, the largest group (36.6%) was 150-179 mmHg, and in degrees of hypotension during dialysis, the largest group (28.9%) was 40-50 mmHg. In particular, if the initial blood pressure was under 180 mmHg, 59.8% of clinical symptoms appeared in the group with "above 20 mmHg of hypotension"; if the initial blood pressure was above 180 mmHg, 34.2% of clinical symptoms appeared in the group with "above 40 mmHg of hypotension". These tendencies showed that the higher the initial blood pressure, the stronger the degree of hypotension, and the results showed statistically significant differences (P=0.0000). 2. Of the times at which hypotension occurred, "after 3 hrs" accounted for 29.4%; the longer the dialysis procedure, the stronger the degree of hypotension, and these showed statistically significant differences (P=0.0142). 3. Of the symptoms observed, sweating and flushing accounted for 43.3%, and yawning and dizziness 37.6%; these were accordingly the important symptoms implying hypotension during hemodialysis. The stages of procedures for coping with hypotension were as follows: 45.9% recovered on reducing the blood flow rate from 200 cc/min to 100 cc/min and reducing venous pressure to 0-30 mmHg; 33.51% recovered on adjusting the blood flow rate and infusing 300 cc of 0.9% normal saline; 4.1% recovered on infusion of over 300 cc of 0.9% normal saline; 3.6% recovered with norepinephrine, 5.7% with blood transfusion, and 7.2% with albumin. The stronger the degree of symptoms observed in hypotension, the more treatments were required for recovery, and these showed statistically significant differences (P=0.0000). C. On the effects of albumin and hemofiltration on the changes of blood pressure and osmolality. 1. Changes of blood pressure in the group which did not require treatment for hypotension and the group which did averaged 21.5 mmHg and 44.82 mmHg. 
The difference was bigger in the latter than in the former, and this was statistically significant (P=0.002). Changes of osmolality averaged 12.65 mOsm and 17.57 mOsm; the difference was bigger in the latter, but this was not statistically significant (P=0.323). 2. Changes of blood pressure in the albumin-infused group and the group that did not require treatment for hypotension averaged 30 mmHg and 21.5 mmHg; there was no statistically significant difference (P=0.503). Changes of osmolality averaged 5.63 mOsm and 12.65 mOsm; the difference was smaller in the former, but not statistically significant (P=0.287). Changes of blood pressure in the albumin-infused group and the group that required treatment averaged 30 mmHg and 44.82 mmHg; the difference was smaller in the former, but not statistically significant (P=0.061). Changes of osmolality averaged 8.63 mOsm and 17.59 mOsm; the difference was smaller in the former, but not statistically significant (P=0.093). 3. Changes of blood pressure in the group implementing hemofiltration and the group that did not require treatment averaged 22 mmHg and 21.5 mmHg; there was no statistically significant difference (P=0.320). Changes of osmolality averaged 0.4 mOsm and 12.65 mOsm; the difference was smaller in the former, but not statistically significant (P=0.199). Changes of blood pressure in the group implementing hemofiltration and the group that required treatment averaged 22 mmHg and 44.82 mmHg; the difference was smaller in the former, and this was statistically significant (P=0.035). Changes of osmolality averaged 0.4 mOsm and 17.59 mOsm; the difference was smaller in the former, but not statistically significant (P=0.086). D. 
On the changes of body weight and blood pressure between the hemofiltration and hemodialysis groups. 1. Changes of body weight in the hemofiltration and hemodialysis groups averaged 3.340 and 3.320; there was no statistically significant difference (P=0.185), but the comparison of the standard deviations of the body weight differences was statistically significant (P=0.0000). Changes of blood pressure in the hemofiltration and hemodialysis groups averaged 17.81 mmHg and 19.47 mmHg; there was no statistically significant difference (P=0.119), but the comparison of the standard deviations of the blood pressure differences was statistically significant (P=0.0000). E. On the blood infusion method for the coil after hemodialysis and methods for reducing residual blood loss in the coil. 1. On comparing and analyzing the Hct of residual blood in the coil by the factors influencing the blood infusion method: infusion of 200 cc of saline reduced residual blood in the coil in the quantitative comparison of 0 cc, 50 cc, 100 cc and 200 cc of saline, and the differences were statistically significant (P < 0.001). The coil-shaking method reduced residual blood in the coil compared with the non-shaking method, a statistically significant difference (P < 0.05). Adjusting the pressure in the coil to 0 mmHg reduced residual blood in the coil compared with 200 mmHg, a statistically significant difference (P < 0.001). 2. Comparing the ten blood infusion methods for the coil with respect to each factor, there was little difference within the group choosing 100 cc saline infusion with the coil at 0 mmHg; the measured quantity of blood loss averaged 13.49 cc. 
Shaking the coil while infusing 50 cc of saline with the coil pressure adjusted to 0 mmHg was the most effective in reducing residual blood; the measured blood loss averaged 15.18 cc.
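The pairwise group comparisons above (mean changes reported with P-values) are consistent with independent two-sample t-tests. A minimal sketch with hypothetical data — the values below are illustrative, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical blood-pressure changes (mmHg) for two groups;
# NOT the study's data, only placeholders for illustration.
hemofiltration = np.array([20.0, 25.0, 18.0, 22.0, 24.0, 21.0])
treated_hypotension = np.array([40.0, 48.0, 42.0, 50.0, 45.0, 44.0])

# Independent two-sample t-test on the mean change in each group
t_stat, p_value = stats.ttest_ind(hemofiltration, treated_hypotension)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```

A P-value below the chosen significance level (here 0.05, as in the abstract) would indicate that the mean changes differ between the two groups.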


Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflow is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as the unrestricted availability of context information. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing. Existing studies have focused on only a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. 
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitation of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge of context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Thus, conducting a survey on the assumption that the participants have sufficient experience with or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from the experts and to produce a rank-order list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for the Delphi study proposed by Okoli and Pawlowski. 
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not as panels. Adapting the process from Okoli and Pawlowski, we outlined the administration of the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern. To assist them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data were the most important among the main factors and sub-factors, respectively. 
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to what extent users have privacy concerns. The traditional questionnaire method was not selected because users were considered to have an absolute lack of understanding of and experience with this new technology in context-aware personalized services. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensor networks as the most important factors among the technological characteristics of context-aware personalized services. 
In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology: which technologies, and in what sequence, are needed to acquire which types of users' context information. Most studies, presupposing the development of context-aware technology, focus on which services and systems should be provided and developed by utilizing context information. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. To follow up on the sub-factor evaluation results, additional studies would be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as a highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked at the next highest level of importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display presenting services to users in context-aware personalized services, toward the anywhere-anytime-any-device concept, have been regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
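The concordance analysis described above — measuring the consistency of the experts' rank-order responses — is commonly computed as Kendall's coefficient of concordance (W). A minimal sketch, assuming a hypothetical experts-by-items rank matrix (the function name and data are illustrative, not the study's):

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance W (no tie correction)
    for an (experts x items) matrix of ranks, 1 = most important."""
    m, n = ranks.shape                          # m experts, n items
    rank_sums = ranks.sum(axis=0)               # column rank sums R_j
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))     # W in [0, 1]

# Hypothetical rankings of five privacy-concern factors by four experts
ranks = np.array([
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 2, 4, 3, 5],
])
print(round(kendalls_w(ranks), 3))
```

W close to 1 indicates strong consensus among the panel, which is the stopping signal typically used across successive Delphi rounds.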

Changes in Agricultural Extension Services in Korea (한국농촌지도사업(韓國農村指導事業)의 변동(變動))

  • Fujita, Yasuki;Lee, Yong-Hwan;Kim, Sung-Soo
    • Journal of Agricultural Extension & Community Development
    • /
    • v.7 no.1
    • /
    • pp.155-166
    • /
    • 2000
  • When the researcher visited Korea in fall 1994, he was shocked to see high-rise apartment buildings around the capital region, including Seoul and Suwon, resulting from the rising demand for housing caused by urban migration following secondary and tertiary industrial development. Six years later, in March 2000, the researcher witnessed more apartment buildings and vinyl greenhouse complexes, evidence of continued economic progress in Korea. Korea had to receive rescue financing from the International Monetary Fund (IMF) because of the financial crisis in 1997. However, signs of recovery were seen within a year, and the growth rate of Gross Domestic Product (GDP) in 1999 was as high as 10.7 percent. During this period, the Korean government worked on restructuring the banking, corporate, labour, and public sectors. The major directions of the government were localization, reducing administrative manpower, limiting agricultural budgets, privatization of public enterprises, integration of agricultural organizations, and easing of various regulations. Thus, the power of the central government shifted to local governments, resulting in a power increase for city mayors and county chiefs. Agricultural extension services were one of the targets of government restructuring and were transferred from the central government to local governments. At the same time, the number of extension offices was reduced by 64 percent, extension personnel were reduced by 24 percent, and extension budgets were cut. In the process of restructuring, the basic direction of extension services was set by the central Rural Development Administration; personnel management, technology development, and support were transferred to provincial Rural Development Administrations, and operational responsibilities were transferred to city/county governments. Agricultural extension services at the local level were renamed Agricultural Technology Extension Centers, established under the jurisdiction of the city mayor or county chief. 
The function of technology development was added while, at the same time, the number of educators for agriculture and rural life was reduced. As a result of observations of rural areas and agricultural extension services at various levels, the functional responsibilities of extension were not well recognized throughout the central, provincial, and local levels. Central agricultural extension services should be more concerned about effective rural development by monitoring provincial- and local-level extension activities more thoroughly. At county-level extension services, it may be desirable to add a research function to reflect local agricultural technological needs. Sometimes, adding administrative tasks for extension educators may be helpful for farmers. However, tasks such as inspection and investigation should be avoided, since they may hinder the effectiveness of extension educational activities. It appeared that the major contents of the agricultural extension service in Korea were focused on saving agricultural materials, developing new agricultural technology, enhancing agricultural exports, increasing production, and establishing market-oriented farming. However, these kinds of efforts may lead to non-sustainable agriculture. It would be better to put more emphasis on sustainable agriculture in the future. Agricultural extension methods in Korea may be better classified into two approaches or functions: a consultation function for advanced farmers and a technology transfer or educational function for small farmers. Advanced farmers were more interested in technology and management information, while small farmers were more concerned about information on farm management directions and the timely diffusion of agricultural technology information. Agricultural extension services should put more emphasis on small-farmer groups and the active participation of farmers in these groups. 
Providing information and moderate advice in selecting alternatives should be the major consultation activity for advanced farmers, while problem-solving processes may be the major educational function for small farmers. Systems such as the internet and e-mail should be utilized for information exchange. These activities may not be an easy task for the decreased number of extension educators facing increased administrative tasks. It may be difficult to practice a one-to-one approach; however, group guidance may improve the task to a certain degree.


The Three Types of Clinical Manifestation of Cow's Milk Allergy with Predominantly Intestinal Symptoms (위장관 증세 위주로 발현하는 영유아기 우유 알레르기 질환의 3가지 임상 유형에 관한 고찰)

  • Lee, Jeong-Jin;Lee, Eun-Joo;Kim, Hyun-Hee;Choi, Eun-Jin;Hwang, Jin-Bok;Han, Chang-Ho;Chung, Hai-Lee;Kwon, Young-Dae;Kim, Yong-Jin
    • Pediatric Gastroenterology, Hepatology & Nutrition
    • /
    • v.3 no.1
    • /
    • pp.30-40
    • /
    • 2000
  • Purpose: During the first year of life, cow's milk protein is the major offender causing food allergy. Cow's milk allergy (CMA) affects 2~7% of infants, of whom approximately one-half show predominantly gastrointestinal symptoms. We studied the clinical types of cow's milk allergy with predominantly gastrointestinal symptoms (CMA-GI) in childhood. Methods: A retrospective study was performed on 30 patients (22 male, 8 female) who had been diagnosed with CMA-GI during the 2 years and 3 months from March 1995 to June 1997. Results: 1) Children with CMA-GI presented in three types of clinical manifestation on the basis of the time to reaction after milk ingestion: Quick (Q) onset (5 cases), Slow (S) onset (20 cases), and Quick & Slow (Q&S) (5 cases). 2) Age on admission differed significantly among the three groups (p<0.05): Q onset $81.4{\pm}67.1$ days, S onset $31.9{\pm}12.7$ days, Q&S $366.0{\pm}65.0$ days. Although body weight at birth was in the 10th~95th percentile in all patients, body weight on admission differed: Q onset 10th~50th percentile, S onset below the 10th percentile, Q&S 10th~25th percentile. The S onset group was significantly different from the other groups (p<0.05), and 90% of this group showed failure to thrive below the 3rd percentile. 3) Peripheral leukocyte counts were as follows: Q onset $5,700{\sim}12,300/mm^3$, S onset $10,000{\sim}33,400/mm^3$, Q&S $5,200{\sim}14,900/mm^3$. The S onset group was significantly different from the other groups (p<0.05). Serum albumin levels on admission were as follows: Q onset $4.2{\pm}0.4\;g/dl$, S onset $3.0{\pm}0.3\;g/dl$, Q&S $4.0{\pm}0.3\;g/dl$. The S onset group was significantly different from the other groups (p<0.05), and 85% of this group was below 3.5 g/dl. 4) Although morphometric analysis of the small intestinal mucosa did not show enteropathy in the Q onset and Q&S groups, all cases of S onset revealed enteropathy: 45% showed subtotal villous atrophy and 55% showed partial villous atrophy. 
5) Allergy testing with other foods was not performed in the S onset group because of ethical problems and the high risk given their general condition. The Q onset group showed allergic reactions to one or two other foods: soy formula, weaning formula, and eggs. The Q&S group revealed allergic reactions to several foods, or to most foods except protein hydrolysate formula: eggs, potatoes, some kinds of seafood, apples, carrots, beef, and chicken. 6) Serum IgE levels, peripheral eosinophil counts, milk RAST, soy RAST, and skin tests were not significantly different among the groups. Conclusion: CMA-GI may present in three clinical ways on the basis of the time to reaction after milk ingestion, typical clinical findings, and morphologic changes in small bowel mucosal biopsy specimens. This clinical subdivision might be helpful in diagnostic and therapeutic approaches to CMA-GI. Early suspicion is mandatory, especially in the S onset type, because of the high risks of malnutrition and enteropathy.
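The three-group comparisons above (e.g., serum albumin across the Q, S, and Q&S onset groups, significant at p<0.05) can be sketched with a one-way ANOVA. The values below are hypothetical placeholders chosen only to mirror the reported group means, not the study's data:

```python
from scipy import stats

# Hypothetical serum albumin values (g/dl) per onset group;
# NOT the study's measurements, only illustrative placeholders.
quick = [4.0, 4.3, 4.5, 4.1, 3.9]
slow = [3.0, 2.8, 3.2, 3.1, 2.9]
quick_and_slow = [4.0, 3.8, 4.2, 3.9, 4.1]

# One-way ANOVA tests whether the group means differ
f_stat, p_value = stats.f_oneway(quick, slow, quick_and_slow)
print(f"F = {f_stat:.1f}, P = {p_value:.6f}")
```

A significant overall ANOVA would normally be followed by pairwise post-hoc tests to confirm which group (here, S onset) differs from the others, matching the abstract's per-group statements.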
