• Title/Summary/Keyword: data management system


Clinical Practice Guideline for Endoscopic Resection of Early Gastrointestinal Cancer (조기위장관암 내시경 치료 임상진료지침)

  • Park, Chan Hyuk;Yang, Dong-Hoon;Kim, Jong Wook;Kim, Jie-Hyun;Kim, Ji Hyun;Min, Yang Won;Lee, Si Hyung;Bae, Jung Ho;Chung, Hyunsoo;Choi, Kee Don;Park, Jun Chul;Lee, Hyuk;Kwak, Min-Seob;Kim, Bun;Lee, Hyun Jung;Lee, Hye Seung;Choi, Miyoung;Park, Dong-Ah;Lee, Jong Yeul;Byeon, Jeong-Sik;Park, Chan Guk;Cho, Joo Young;Lee, Soo Teik;Chun, Hoon Jai
    • Journal of Digestive Cancer Research
    • /
    • v.8 no.1
    • /
    • pp.1-50
    • /
    • 2020
  • Although surgery was once the standard treatment for early gastrointestinal cancers, endoscopic resection is now a standard treatment for those without regional lymph node metastasis. Before deciding on endoscopic resection, high-definition white-light endoscopy, chromoendoscopy, and image-enhanced endoscopy such as narrow-band imaging are performed to assess the margins and depth of early gastrointestinal cancers, delineate resection boundaries, and predict the likelihood of lymph node metastasis. Endoscopic mucosal resection and/or endoscopic submucosal dissection can then remove early gastrointestinal cancers completely in an en bloc fashion. Histopathological evaluation should be performed carefully to identify risk factors for lymph node metastasis, such as depth of cancer invasion and lymphovascular invasion. Additional treatment, such as radical surgery with regional lymphadenectomy, should be considered if the endoscopically resected specimen shows risk factors for lymph node metastasis. This is the first Korean clinical practice guideline for endoscopic resection of early gastrointestinal cancer. It was developed mainly by de novo methods and covers the endoscopic management of superficial esophageal squamous cell carcinoma, early gastric cancer, and early colorectal cancer. The guideline will be revised as new data on early gastrointestinal cancer accumulate.

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.195-211
    • /
    • 2014
  • Cloud computing services provide IT resources as on-demand services. Cloud computing is considered a key concept that will lead to a shift from an ownership-based paradigm to a pay-for-use paradigm, reducing the fixed cost of IT resources and improving flexibility and scalability. As IT services, cloud services have evolved from earlier computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research on cloud computing is closely related to, and combined with, these computing research areas. To identify promising research issues and topics in cloud computing, the research trends need to be understood more comprehensively. In this study, we collect bibliographic and citation information for cloud computing research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and changes in the citation network among papers and the co-occurrence network of keywords using social network analysis measures. Through this analysis, we identify the relationships and connections among research topics in cloud computing related areas and highlight potential new research topics. In addition, we visualize dynamic changes of research topics using a proposed cloud computing "research trend map." A research trend map positions research topics in a two-dimensional space whose dimensions are keyword frequency (X-axis) and the rate of increase in the degree centrality of keywords (Y-axis). Based on these two dimensions, the space is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; an area where both keyword frequency and the rate of increase in degree centrality are high is a growth technology area; an area where keyword frequency is low but the rate of increase in degree centrality is high is a promising technology area; and an area where both are low is a declining technology area. Cloud computing research trend maps built this way make it possible to grasp the main research trends at a glance and to explain the evolution of research topics. According to the citation analysis, research papers on security, distributed processing, and optical networking for cloud computing rank at the top by the PageRank measure. The keyword analysis shows that cloud computing and grid computing had high centrality in 2009, and keywords covering main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction had high centrality in 2010-2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing increases gradually over the period. The annual research trend maps confirm that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area.
The results indicate that distributed systems and grid computing received much attention as similar computing paradigms in the early stage of cloud computing research, a period focused on understanding and investigating cloud computing as an emergent technology and linking it to established computing concepts. After this early stage, security and virtualization became the main issues, reflected in their movement from the promising area to the growth area on the research trend maps. Moreover, the study reveals that current cloud computing research has rapidly shifted from technical issues toward application issues such as SLAs (Service Level Agreements).
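A minimal sketch, not the authors' code, of how the research trend map's quadrant assignment could work: each keyword is placed by its frequency (X) and the rate of increase of its degree centrality (Y), then classified into the four areas named in the abstract. The median thresholds, data structure, and demo values are assumptions for illustration.

```python
# Quadrant classification for a "research trend map":
# X = keyword frequency, Y = rate of increase in degree centrality.
from dataclasses import dataclass
from statistics import median

@dataclass
class KeywordStats:
    keyword: str
    frequency: int            # how often the keyword appears in the corpus
    centrality_growth: float  # rate of increase of its degree centrality

def classify(stats: list[KeywordStats]) -> dict[str, str]:
    """Assign each keyword to maturation / growth / promising / decline."""
    x_cut = median(s.frequency for s in stats)          # assumed threshold
    y_cut = median(s.centrality_growth for s in stats)  # assumed threshold
    areas = {}
    for s in stats:
        if s.frequency >= x_cut and s.centrality_growth >= y_cut:
            areas[s.keyword] = "growth"
        elif s.frequency >= x_cut:
            areas[s.keyword] = "maturation"
        elif s.centrality_growth >= y_cut:
            areas[s.keyword] = "promising"
        else:
            areas[s.keyword] = "decline"
    return areas

# Toy data, not values from the paper.
demo = [KeywordStats("security", 40, 0.9), KeywordStats("grid computing", 55, 0.1),
        KeywordStats("virtualization", 60, 0.8), KeywordStats("utility computing", 10, 0.05)]
print(classify(demo))
```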

Critical Success Factor of Noble Payment System: Multiple Case Studies (새로운 결제서비스의 성공요인: 다중사례연구)

  • Park, Arum;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.59-87
    • /
    • 2014
  • In the MIS field, research on payment services has focused on adoption factors, using behavioral theories such as TRA (Theory of Reasoned Action), TAM (Technology Acceptance Model), and TPB (Theory of Planned Behavior). Previous studies presented different adoption factors depending on the type of payment service, nation, and culture, and even adoption factors for the same payment service differed across researchers. Because the payment industry has strong path dependency on existing payment methods, research results on the same payment service vary with a nation's payment culture. This paper aims to identify, and provide evidence for, a success factor for the adoption of novel payment services that holds regardless of national culture and payment characteristics. In previous research, the common adoption factors of payment services are convenience, ease of use, security, and speed. Real cases, however, show that these factors are not always critical for market penetration. For example, PayByPhone, an NFC-based parking payment service, successfully penetrated its early market and grew, whereas Google Wallet failed to be adopted by users despite an NFC-based payment method offering convenience, security, and ease of use. Such cases leave an unexplained aspect, which leads to the research question: what more essential and fundamental factor should take precedence over convenience, security, and ease of use for successful market penetration? We analyze four cases against the following hypothesis and demonstrate it: "To successfully penetrate a market and grow sustainably, a new payment service should find non-customers of the existing payment services and provide a payment method that they can actually use." We give plausible explanations for the hypothesis using multiple case studies. Diners Club, Danal, PayPal, and Square were selected as typical, successful cases in each category of payment service. The case discussion is primarily a non-customer analysis of the groups each new payment service targeted, to find the most crucial factor in the early market; we do not attempt to consider factors for later business growth. We identified three tiers of non-customers of the existing payment methods and elaborated how each new payment service satisfied them. The credit card targeted first-tier non-customers who temporarily had no cash to pay with but had a regular income; the credit card gave them the opportunity to carry out economic activities by deferring the payment date. The case study of wireless phone payment shows that the service targeted second-tier non-customers who could not use online payment because they worried about security or had to go through a complex process and learn how to use online payment methods; wireless phone payment therefore provided a very convenient payment method and, in particular, let young people pay small amounts without a credit card. The case study of PayPal, an online payment service, shows that it targeted second-tier non-customers who refused to use online payment services out of concern about leaks of sensitive information such as passwords and credit card details.
Accordingly, the PayPal service allows users to pay online without providing sensitive information. Finally, the Square case, a mobile POS-based payment service, shows that it targeted second-tier non-customers who could not transact offline between individuals because of a shortage of cash; Square therefore provides a dongle that functions as a POS terminal when plugged into the earphone jack. As a result, all four cases turned non-customers into customers, penetrated their early markets, and extended their market share. Consequently, all cases supported the hypothesis, which is highly probable according to the 'analytic generalization' that case study methodology suggests. For judging the quality of the research design we apply the criteria common to all social science methods: construct validity, internal validity, external validity, and reliability, as summarized in numerous textbooks (Yin, 2014); in case study methodology these have also served as a framework for assessing large groups of case studies (Gibbert, Ruigrok & Wicki, 2008). Construct validity means identifying correct operational measures for the concepts being studied; to satisfy it, we use multiple sources of evidence such as academic journals, magazines, and articles. Internal validity seeks to establish causal relationships, whereby certain conditions are believed to lead to other conditions, as distinguished from spurious relationships; to satisfy it, we build explanations through the four case analyses. External validity defines the domain to which a study's findings can be generalized; to satisfy it, replication logic across multiple case studies is used. Reliability demonstrates that the operations of a study, such as the data collection procedures, can be repeated with the same results; to satisfy it, we use a case study protocol. In Korea, competition among stakeholders in the mobile payment industry is intensifying. Not only the three main telecom companies but also smartphone makers and service providers such as KakaoTalk have announced that they will enter the mobile payment industry, and the market is becoming highly competitive. Yet it still lacks a momentum effect, notwithstanding positive expectations of fast growth. Mobile payment services are categorized by underlying technology, such as IC mobile cards and application payment services based on the cloud, NFC, sound waves, BLE (Bluetooth Low Energy), and biometric recognition. Mobile payment is a discontinuous innovation: users must change their behavior and new infrastructure must be installed, which requires users to learn how to use it and imposes installation costs on shopkeepers, and the payment industry additionally has strong path dependency. In spite of these obstacles, mobile payment services, which as discontinuous innovations should deliver dramatically improved value, keep focusing on convenience and security. For a mobile payment service to succeed, we suggest the following: first, identify non-customers of the existing payment services; second, address their needs; then provide a payment method that non-customers who cannot pay with the existing methods can actually use. In conclusion, a mobile payment service built this way can create a new market and extend the payment market.

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. Since Black Monday in 1987, however, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with an MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 daily observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel shows exceptionally low forecasting accuracy. We then suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but the simulation results are still meaningful because the Korea Exchange introduced volatility futures contracts in November 2014 that traders can actually trade. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period: the profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for SVR-based asymmetric E-GARCH; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for SVR-based asymmetric GJR-GARCH.
The linear kernel shows higher trading returns than the radial kernel. The best performance of SVR-based IVTS is +526.4%, and that of MLE-based IVTS is +150.2%. SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider trading costs, including brokerage commissions and slippage. The IVTS trading performance is not fully realistic because we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine learning-based GARCH models can give stock market investors better information.
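A minimal sketch, under assumptions, of the IVTS entry rule described in the abstract: buy volatility today when the forecast for tomorrow rises, sell when it falls, and hold the existing position otherwise. The forecasts here are a toy list, and comparing successive forecast values is one reading of the rule; in the paper the forecasts would come from the MLE- or SVR-estimated GARCH models.

```python
# IVTS entry rule sketch: +1 = long volatility, -1 = short volatility,
# position carried forward when the forecasted direction does not change.
def ivts_positions(vol_forecast: list[float]) -> list[int]:
    positions = []
    current = 0
    for today, tomorrow in zip(vol_forecast[:-1], vol_forecast[1:]):
        if tomorrow > today:
            current = 1      # forecasted volatility increase -> buy volatility
        elif tomorrow < today:
            current = -1     # forecasted volatility decrease -> sell volatility
        # unchanged forecast -> hold the existing buy or sell position
        positions.append(current)
    return positions

print(ivts_positions([18.2, 19.0, 19.0, 17.5, 16.9]))  # -> [1, 1, -1, -1]
```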

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet became a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only; all these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising took off, display advertising including banner advertising dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising from 2005. Keyword advertising is the technique of exposing relevant advertisements at the top of search result pages when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers see them; in this context it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms because, instead of the seller discovering customers and running advertisements for them as on TV, radio, or banners, it exposes advertisements to customers who come looking. Keyword advertising makes it possible for a company to seek publicity online with a single word and to achieve maximum efficiency at minimum cost. Its strong point is that customers can contact the products in question directly, making it more efficient than mass-media advertising such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site, finds it hard to exercise substantial supervision over its advertisement, and risks its advertising expenses exceeding its profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former, known as the most efficient technique, is also referred to as advertising based on a metered-rate system: a company pays for the number of clicks on a searched keyword. It is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay on the basis of the number of exposures rather than the number of clicks; the price is fixed per 1,000 exposures, and it is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted.
The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want, and should use multiple keywords when running ads. When first running an ad, the advertiser should give priority to keyword selection, considering how many search engine users will click the keyword in question and how much the advertisement will cost. Because the popular keywords that search engine users frequently use are expensive in terms of unit cost per click, advertisers without much money at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keyword ads are in the form of text. The biggest strength of text-based advertising is that it looks like search results and so causes little antipathy, but it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easy to notice because of its images, but it is exposed on the lower part of a web page and is clearly recognized as an advertisement, which leads to a low click-through rate; its strength, however, is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people easily recognize, it is well advised to use image-embedded advertising to attract Internet users' attention. Advertisers should analyze their web logs and examine customers' responses based on site events and the composition of products as a way of monitoring behavior in detail. Keyword advertising also allows them to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, such as the number of visitors and page views, and on cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. Because the log files contain a huge amount of data and are almost impossible to analyze directly, one is supposed to analyze them with log-analysis solutions. The generic information that can be extracted from log-analysis tools includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours.
These data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. Because keyword advertising exposes advertisements exclusively on search result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others let all advertisers purchase the keywords in question once the advertising contract is over. If an advertiser relies on keywords sensitive to season and timeliness on sites that give priority to established advertisers, he or she should purchase a vacant advertising slot in advance so as not to miss the appropriate timing. Naver, however, does not give priority to existing advertisers for keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period. This study is designed to examine marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that it is based on the CPC charging model and that advertisements are registered at the top of the most representative portal sites in Korea, which makes it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has weak points too: it is not the only, or a perfect, advertising model among search advertisements in the online market. It is therefore essential that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create points of contact with customers.
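A minimal sketch contrasting the two billing models described above: CPC charges per click on the keyword ad, CPM charges per 1,000 exposures regardless of clicks. The rates, currency, and click-through figure are hypothetical, not actual Overture, Naver, or Daum prices.

```python
# CPC vs CPM cost comparison with made-up rates.
def cpc_cost(clicks: int, cost_per_click: float) -> float:
    """CPC (metered rate): the advertiser pays for each click on the keyword ad."""
    return clicks * cost_per_click

def cpm_cost(impressions: int, rate_per_1000: float) -> float:
    """CPM (flat rate): the advertiser pays per 1,000 exposures; clicks do not matter."""
    return impressions / 1000 * rate_per_1000

# Example: 100,000 exposures with a 1.5% click-through rate (all values hypothetical).
impressions, ctr = 100_000, 0.015
print(f"CPC: {cpc_cost(int(impressions * ctr), 300):,.0f} KRW")  # 1,500 clicks x 300 KRW
print(f"CPM: {cpm_cost(impressions, 2_000):,.0f} KRW")           # 100 x 2,000 KRW
```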


Analysis of Patient Effective Dose in PET/CT; Using CT Dosimetry Programs (CT 선량 측정 프로그램을 이용한 PET/CT 검사 환자의 예측 유효 선량의 분석)

  • Kim, Jung-Sun;Jung, Woo-Young;Park, Seung-Yong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.77-82
    • /
    • 2010
  • Purpose: As PET/CT has come into wide use, radiation exposure in clinical practice has increased, and the Korea Food and Drug Administration has issued a patient DRL (Diagnostic Reference Level) for CT scans. In this study, to build a basis for patient dose reduction, we analyzed the effective dose of the CT transmission scan. Materials and Methods: From February to March 2010, 180 patients (age 55±16 years, weight 61.0±10.4 kg) who underwent 18F-FDG PET/CT at Asan Medical Center were included. A Biograph Truepoint 40 (Siemens, Germany), a Biograph Sensation 16 (Siemens, Germany), and a Discovery STe8 (GE Healthcare, USA) were used. For each scanner, the doses of 30 male and 30 female patients were analyzed. Because the automatic exposure control system that adjusts the dose is affected most strongly by body weight, patients were also divided into three groups (under 50 kg, 50-60 kg, and over 60 kg) and their mean doses compared. Effective doses calculated with CT-Expo v1.7 and ImPACT v1.0 were compared, and the relationship between body weight and effective dose was analyzed. Results: With CT-Expo v1.7, the effective doses for the BIO40, BIO16, and DSTe8 were 6.46±1.18 mSv, 9.36±1.96 mSv, and 9.36±1.96 mSv, respectively, for the 30 male patients, and 6.29±0.97 mSv, 10.02±2.42 mSv, and 9.05±2.27 mSv, respectively, for the 30 female patients. With ImPACT v1.0, the effective doses for the BIO40, BIO16, and DSTe8 were 6.54±1.21 mSv, 8.36±1.69 mSv, and 9.74±2.55 mSv, respectively, for the male patients, and 5.87±1.09 mSv, 8.43±1.89 mSv, and 9.19±2.29 mSv, respectively, for the female patients. For the three weight groups (under 50 kg, 50-60 kg, and over 60 kg), the effective doses were 6.27 mSv, 7.67 mSv, and 9.33 mSv with CT-Expo v1.7, and 5.62 mSv, 7.22 mSv, and 8.91 mSv with ImPACT v1.0. Body weight and effective dose showed a very strong positive correlation (r=0.743, r=0.693). Conclusion: Such dose evaluation programs make it possible to predict and evaluate the effective dose more easily without performing a phantom study and can be used to collect basic data for CT dose management.
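A minimal sketch, with made-up numbers rather than the study's patient data, of the weight-versus-effective-dose correlation reported above, computed as a Pearson correlation coefficient.

```python
# Pearson correlation between body weight and effective dose (hypothetical data).
from math import sqrt

def pearson_r(x: list[float], y: list[float]) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

weights_kg = [48, 55, 58, 63, 70, 82]          # hypothetical patient weights
doses_msv = [5.9, 6.8, 7.1, 8.0, 9.1, 10.4]    # hypothetical effective doses
print(f"r = {pearson_r(weights_kg, doses_msv):.3f}")  # strong positive correlation
```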


The Empirical Exploration of the Conception on Nursing (간호개념에 대한 기초조사)

  • 백혜자
    • Journal of Korean Academy of Nursing
    • /
    • v.11 no.1
    • /
    • pp.65-87
    • /
    • 1981
  • This study aimed at exploring the concept of nursing held by clinical nurses. Data were collected from 225 nurses conveniently selected from the population of nurses working in Kang Won province. Findings include: 1) Nurses' qualifications: the respondents view specialized knowledge as a more important qualification of the nurse than a warm personality; 92.9% of the respondents indicated specialized knowledge as the most important qualification, while only 43.1% indicated warm personality. 2) On the nursing profession: the respondents view the nursing profession as health-service oriented rather than as an independent profession, which suggests that the nursing profession is not consistent with the present health care delivery system and does not support nurses working independently. 3) On clients of nursing care: the respondents include patients, families, and community residents in the category of nursing care clients; 92.0% of the respondents view the patient as the client, while only 67.1% included the nursing student and 74.7% the nurse herself, indicating a lack of recognition of who the nurse's clients are. 4) On the priority of nursing care: most respondents view the client's physical and psychological aspects as important components of nursing care, but not the spiritual ones; 96.0% indicated the physical aspects and 93% the psychological ones, while 64.1% indicated the spiritual ones, showing the lack of a comprehensive conception of the nursing dimension. 5) On nursing care: 91.6% of the respondents indicated that nursing care is the activity of decreasing pain or helping recovery from illness, while only 66.2% indicated carrying out physicians' medical orders. 6) On the purpose of nursing care: 89.8% of the respondents indicated preventing illness and 76.6% decreasing clients' pain, whereas maintaining health had the lowest selection rate at 13.8%, showing a lack of recognition of maintaining health as the most important point. 7) On knowledge needed in nursing care: most respondents view knowledge needed at the point of care as necessary; 81.3% indicated simple curing methods, and 75.1%, 73.3%, and 71.6% indicated child nursing, maternal nursing, and communicable disease control, respectively. On the other hand, knowledge that has been neglected in specialized nursing education, namely thinking patterns among community members, coping with stress and personal relations in the home, and administration and management, had low selection rates of 48.9%, 41.8%, and 41.3%. 8) On nursing ideas: the highest selection was for knowing oneself rightly (mean score 4.205/5); the lowest scores were 3.016/5 for devotion being the essential element of nursing, 2.860/5 for religious problems that human beings cannot settle, such as fatal ones, and 2.810/5 for the nursing profession being worth pursuing in one's life, reflecting the peculiarly essential ideas of the professional sense of value. 9) On nursing services: the mean scores were 2.132/5 for inserting a mechanical airway, 2.892/5 for the technique and knowledge of cardiopulmonary resuscitation, and 3.021/5 for preventing air pollution; notably, 41.1% of the respondents did not reply. 10) On nurses' qualifications: the respondents selected five items as the most important qualifications; 17.4% indicated specialized knowledge, 15.3% the nurse's health, 10.6% satisfaction with the nursing profession, 9.8% the need for experience, and 9.2% comprehension and cooperation, while a warm personality tended to be treated lightly as a nursing qualification. 11) On the priority of nursing care: the respondents selected three items as the most important components; most viewed the client's physical, spiritual, and economic aspects as important components of nursing care (36.8%, 27.6%, and 13.8%, respectively), while educational aspects received 1.8%. 12) On the purpose of nursing care: the respondents selected four items as the most important purposes; 29.3% indicated curing clients' illness, 21.3% preventing illness, 17.4% decreasing pain, and 15.3% survival. 13) On the analysis of important nursing care: ranging from 5 to 25 points, the nurses' qualifications are concentrated at 95.1%; ranging from 3 to 25 points, the priorities of nursing care are concentrated at 96.4%; ranging from 4 to 16 points, the purpose of nursing care is concentrated at 84.0%. 14) On the analysis of general characteristics and facets of the nursing concept: the correlation between a high educational level and nursing care was significant (p < 0.0262); the correlation between a low educational level and the purpose of nursing care was significant (p < 0.002); and the correlation between nurses' working years and the degree of importance of the purpose of nursing care was significant (p < 0.0155), with the most affirmative answers from those with two to four years of experience. 15) On nurses' qualifications and their degree of importance: the correlation between nurses' qualifications and their degree of importance was significant (r = 0.2172, p < 0.001). B. General characteristics of the subjects: the mean age of the subjects was 39; 38.6% were within the age range 20-29; 52.6% were male; 57.9% had schizophrenia; 35.1% had graduated from high school or were high school dropouts; 56.1% had no religion; 52.6% were unmarried; 47.4% were on their first admission; and 91.2% were involuntary-admission patients. C. Measurement of anxiety variables: 1. The measurement tool for affective anxiety in this study demonstrated high reliability (.854). 2. The measurement tool for somatic anxiety demonstrated high reliability (.920). D. Relationship between the anxiety variables and general characteristics: 1. Affective anxiety: 1) the level of female patients was higher than that of male patients (t = 5.41, p < 0.05); 2) frequency of admission was related to affective anxiety, the anxiety level being highest at first admission (F = 5.50, p < 0.005). 2. Somatic anxiety: 1) the age range 30-39 had the highest level of somatic anxiety (F = 3.95, p < 0.005); 2) frequency of admission was related to somatic anxiety, the anxiety level being highest at first admission (F = 9.12, p < 0.005). E. Analysis of significant anxiety symptoms for nursing intervention: 1. Seven items (dizziness, mental integration, sweating, restlessness, anxiousness, urinary frequency, and initial insomnia) accounted for 96% of the variation within the first 24 hours after admission. 2. Seven items (fear, paresthesias, restlessness, sweating, initial insomnia, tremors, and body aches and pains) accounted for 84% of the variation on the 10th day after admission.


Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times thanks to the tremendous reduction of distribution and inventory costs enabled by the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration, whose importance is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can matter as much as the absolute amount of individual knowledge sharing. In particular, whether a larger contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study examines the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and extends the analysis to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents income inequality within a group of people, was applied to capture the inequality of knowledge contribution. Hypotheses were set up on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each focal variable, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured article status, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary with the characteristics of the group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles are those that refer to at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect is more pronounced for more academic tasks.
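A minimal sketch, not the authors' pipeline, of the two focal measures for one article's edit history: the Pareto ratio (share of all contributions made by the top 20 percent of editors, per the definition in the abstract) and the Gini coefficient of contribution counts. The edit counts are hypothetical.

```python
# Pareto ratio and Gini coefficient of knowledge contributions per editor.
def pareto_ratio(contributions: list[int]) -> float:
    """Fraction of all edits made by the top 20% of contributors."""
    ranked = sorted(contributions, reverse=True)
    top_n = max(1, round(0.2 * len(ranked)))
    return sum(ranked[:top_n]) / sum(ranked)

def gini(contributions: list[int]) -> float:
    """Gini coefficient: 0 = perfectly equal contribution, near 1 = one editor does it all."""
    x = sorted(contributions)
    n = len(x)
    cum = sum((i + 1) * v for i, v in enumerate(x))
    return (2 * cum) / (n * sum(x)) - (n + 1) / n

edits_per_editor = [120, 45, 9, 7, 5, 3, 2, 2, 1, 1]   # hypothetical article
print(f"Pareto ratio: {pareto_ratio(edits_per_editor):.2f}")
print(f"Gini:         {gini(edits_per_editor):.2f}")
```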

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin;Hwang, Ji Won;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.1-22
    • /
    • 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment based on the aspects that appear in a text, is drawing attention because it can be used across various business industries. ABSA analyzes sentiment by aspect for the multiple aspects a text contains, and it is studied in various forms depending on the purpose, such as analyzing all targets or only aspects and sentiments. Here, an aspect refers to a property of a target, and a target refers to the expression in the text that triggers the sentiment. For restaurant reviews, for example, the aspects could be food taste, food price, quality of service, mood of the restaurant, and so on; in a review that says, "The steak was delicious, but the salad was not," the words "steak" and "salad," which are directly mentioned in the sentence, are the targets. So far, most ABSA studies have analyzed sentiment based only on aspects or only on targets. However, even with the same aspects or targets, sentiment analysis may be inaccurate, for instance when aspects or sentiments are divided or when sentiment exists without a target. In a sentence such as "Pizza and the salad were good, but the steak was disappointing," the aspect is limited to "food," yet conflicting sentiments coexist. In a sentence such as "Shrimp was delicious, but the price was extravagant," the target is "shrimp," yet opposite sentiments coexist depending on the aspect. Finally, in a sentence like "The food arrived too late and is cold now," there is no target (NULL), but a negative sentiment toward the aspect "service" is conveyed. Failing to consider both aspects and targets in these situations creates a dual dependency problem. To address this problem, this research analyzes sentiment by considering both aspects and targets (Target-Aspect-Sentiment Detection, hereafter TASD). This study identified two limitations of existing TASD research: local contexts are not fully captured, and the number of epochs and the batch size dramatically affect the F1-score. The existing model excels at capturing the overall context and the relations between words, but it struggles with phrases in the local context and is relatively slow to train. Therefore, this study tries to improve the model's performance by adding an auxiliary loss for aspect-sentiment classification, computed through CNN (Convolutional Neural Network) layers constructed in parallel with the existing model. Where existing models analyze aspect-sentiment through BERT encoding, Pooler, and Linear layers, this research adds CNN layers with adaptive average pooling, and training proceeds by adding an additional aspect-sentiment loss value to the existing loss. In other words, during training the auxiliary loss computed through the CNN layers allows the local context to be captured more accurately; after training, the model performs aspect-sentiment analysis through the existing method. To evaluate the model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared with the existing models. When the batch size was 8 and the number of epochs was 5, the gap was largest, with F1-scores of 29 for the existing models and 45 for this study.
Even when the batch size and number of epochs were adjusted, the F1-scores remained higher than those of the existing models. In other words, even with small batch and epoch numbers the model can be trained effectively compared with the existing models, so it can be useful in situations where resources are limited. Through this study, aspect-based sentiments can be analyzed more accurately. Through various business uses, such as product development or marketing strategy, both consumers and sellers will be able to make efficient decisions. In addition, since the approach uses a pre-trained model and recorded a relatively high F1-score even with limited resources, the model can be fully trained and utilized by small businesses that do not have much data.
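A rough sketch, under assumed shapes and hyperparameters, of the idea described above: a CNN branch with adaptive average pooling running in parallel to a BERT-style encoder, whose aspect-sentiment loss is added to the main loss as an auxiliary term. The hidden size of 768, the 256 convolution channels, the kernel size, and `aux_weight` are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CNNAuxiliaryHead(nn.Module):
    """CNN branch over token states, used only to compute an auxiliary loss."""
    def __init__(self, hidden_size: int = 768, num_classes: int = 4, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(hidden_size, 256, kernel_size, padding=kernel_size // 2)
        self.pool = nn.AdaptiveAvgPool1d(1)        # adaptive average pooling over tokens
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden) from the BERT-style encoder
        x = self.conv(token_states.transpose(1, 2))   # -> (batch, 256, seq_len), local context
        x = self.pool(torch.relu(x)).squeeze(-1)      # -> (batch, 256)
        return self.classifier(x)                     # aspect-sentiment logits

def combined_loss(main_logits: torch.Tensor, aux_logits: torch.Tensor,
                  labels: torch.Tensor, aux_weight: float = 0.3) -> torch.Tensor:
    """Existing loss from the Pooler/Linear head plus the weighted CNN auxiliary loss."""
    ce = nn.CrossEntropyLoss()
    return ce(main_logits, labels) + aux_weight * ce(aux_logits, labels)
```

At inference time only the main head would be used, matching the abstract's note that the auxiliary branch serves training rather than the final aspect-sentiment prediction.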