• Title/Summary/Keyword: open technique


A Study of The Regulations on The Use of University Royalties using Delphi Technique (델파이 기법을 활용한 대학의 기술료 사용제도 개선방안 연구)

  • Lee, Jae-Heung;Shin, Jun-Woo
    • Journal of Korea Technology Innovation Society / v.16 no.1 / pp.323-345 / 2013
  • In this paper, problems with the Korean system regulating the use of university royalties are identified and investigated in order to suggest measures to improve the system in a way that provides a better R&D environment at universities. The Delphi technique was used to gather data from royalty specialists at universities and government ministries. The first Delphi survey used open-ended questions to identify problems in the use of university royalties; closed-ended questions were then used for the second Delphi survey. The number of responses and the frequency of answers were analyzed after the first survey, and validity, stability, and reliability analyses were conducted for the second survey. The measures suggested to improve the system regulating the use of university royalties are as follows: First, bonuses for researchers, which are currently 50% or more of collected royalties, need to be decreased, as they are rather high compared to similar bonuses in developed countries, which are around 30% of collected royalties. A guideline limiting the bonuses to XX% or less of collected royalties is suggested to prevent the excessive use of royalties. Second, rewards for those who contribute to technology transfer and commercialization should be increased. It is also important to build a consensus around the need to reward these contributors. Third, the scale of re-investment into R&D needs to increase. Regulations on royalties should be applied in a way that creates a positive feedback structure for R&D: research, R&D outcomes, technology transfer, collection of royalties, rewarding of researchers, and re-investment in R&D. To build a university's R&D capability, re-investment into R&D needs to be regularized as XX% or more of royalties. Fourth, the royalty regulations of ministries and universities need to be unified. Each category of royalty use needs to be regularized, with detailed matters such as the guideline, process, and method for using royalties specified. Universities also need to make their own specific regulations. Fifth, specific priorities for the use of royalties need to be suggested. Regulation is necessary for the categories that lack guidelines and priorities for the use of royalties. It is hoped that the findings of this research will contribute to reinforcing the R&D capability of universities.
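  The second-round analysis the abstract describes (validity, stability, reliability) can be illustrated with a minimal Python sketch that scores stability as the per-item coefficient of variation, a common Delphi criterion. The items, ratings, and the CV <= 0.5 cutoff below are illustrative assumptions, not the paper's data.

      # Minimal sketch: second-round Delphi stability analysis via coefficient
      # of variation. Items, ratings, and the CV <= 0.5 cutoff are hypothetical.
      import statistics

      responses = {  # 5-point Likert ratings from royalty specialists (hypothetical)
          "cap_researcher_bonus":    [4, 5, 4, 4, 5, 4, 3, 4],
          "reward_transfer_staff":   [5, 4, 4, 5, 4, 4, 4, 5],
          "mandate_rd_reinvestment": [4, 4, 3, 4, 5, 4, 4, 4],
      }

      for item, scores in responses.items():
          mean = statistics.mean(scores)
          cv = statistics.stdev(scores) / mean   # coefficient of variation
          stable = cv <= 0.5                     # common Delphi stability cutoff
          print(f"{item}: mean={mean:.2f}, CV={cv:.2f}, stable={stable}")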


7- to 22-Year Follow-up of Anterior Cruciate Ligament Reconstruction: From the Standpoint of OA (전방 십자 인대 재건술 7년에서 22년 장기 추시: 관절염 관점에서)

  • Yang, Sang-Hoon;Sim, Jae-Ang;Kwak, Ji-Hoon;Kim, Byung-Kag;Ahn, Byung-Moon;Lee, Beom-Koo
    • Journal of the Korean Arthroscopy Society / v.14 no.1 / pp.20-24 / 2010
  • Purpose: To evaluate the long-term outcomes of ACL reconstruction from the standpoint of osteoarthritis. Materials and Methods: We evaluated 31 patients who underwent ACL reconstruction from April 1986 to April 1999 and could be followed up for more than 7 years. The mean follow-up period was 10.1 years (7-22 years). In terms of the graft, 11 cases were treated with an autologous hamstring tendon graft and 20 cases with an autologous bone-patellar tendon-bone graft. For the femoral tunnel, 11 cases were placed through a transtibial tunnel and 20 cases through the anteromedial portal using mini-open arthrotomy. Functional and radiographic evaluations were performed. Results: The mean Lysholm score was 89.2±11.7 points. The mean KT-2000 side-to-side difference was 2.1±1.9 mm. IKDC ligament evaluation showed 38.7% type A, 48.3% type B, 6.5% type C, and 6.5% type D. Femoral tunnels were placed at the 11 or 1 o'clock position in the transtibial technique and at the 10 to 10:30 or 2 to 2:30 o'clock position in the anteromedial portal technique. Radiographic analysis for degenerative arthritis revealed that in the anteromedial tunnel group 50.0% were excellent and 25.0% were good, while in the transtibial tunnel group 18.2% were excellent and 18.2% were good. Conclusion: In 87.1% of cases, the long-term result of ACL reconstruction was good or excellent on IKDC evaluation. In particular, the group with the tunnel placed through the anteromedial portal showed good results with respect to degenerative arthritis.


Corrections on CH4 Fluxes Measured in a Rice Paddy by Eddy Covariance Method with an Open-path Wavelength Modulation Spectroscopy (개회로 파장 변조 분광법과 에디 공분산 방법으로 논에서 관측된 CH4 플럭스 자료의 보정)

  • Kang, Namgoo;Yun, Juyeol;Talucder, M.S.A.;Moon, Minkyu;Kang, Minseok;Shim, Kyo-Moon;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology / v.17 no.1 / pp.15-24 / 2015
  • CH₄ is a trace gas and one of the key greenhouse gases, and it requires continuous and systematic monitoring. The application of the eddy covariance technique to CH₄ flux measurement requires fast-response, laser-based spectroscopy. Eddy covariance measurements have long been used to monitor CO₂ fluxes, and their data processing procedures have been standardized and well documented; however, such procedures for CH₄ fluxes are still lacking. In this note, we report the first measurement of CH₄ flux in a rice paddy by the eddy covariance technique with a recently commercialized wavelength modulation spectroscopy. CH₄ fluxes were measured for five consecutive days before and after rice transplanting at the Gimje flux monitoring site in 2012. The commercially available EddyPro™ program was used to process these data, following the KoFlux protocol for data processing. In this process, we quantified and documented the effects of three key corrections: (1) frequency response correction, (2) air density correction, and (3) spectroscopic correction. The effects of these corrections differed between daytime and nighttime, and their magnitudes were greater with larger CH₄ fluxes. Overall, the magnitude of CH₄ flux increased on average by 20-25% after the corrections. The National Center for AgroMeteorology (www.ncam.kr) will soon release an updated KoFlux program to public users, which includes the spectroscopic correction and the gap-filling of CH₄ flux.
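  Of the three corrections, the air density (WPL) correction is the most standardized; the sketch below applies it after a frequency-response factor and before a spectroscopic factor, following Webb, Pearman & Leuning (1980). All numeric inputs are placeholders, and the instrument-specific spectroscopic correction is reduced to a single multiplicative factor, which is a simplification of what the EddyPro/KoFlux processing actually does.

      # Sketch of the correction chain for an open-path CH4 flux. The WPL terms
      # follow Webb, Pearman & Leuning (1980); freq_factor and spec_factor are
      # illustrative multiplicative placeholders, not instrument values.
      MU = 28.97 / 18.02   # molar mass ratio, dry air to water vapor (~1.61)

      def corrected_ch4_flux(cov_w_ch4, cov_w_h2o, cov_w_T,
                             rho_ch4, rho_v, rho_d, T_air_K,
                             freq_factor=1.05, spec_factor=1.10):
          """Return CH4 flux after frequency-response, WPL, and (assumed
          multiplicative) spectroscopic corrections. Covariances are raw w'x'
          terms; densities in consistent units; T_air_K in kelvin."""
          f = cov_w_ch4 * freq_factor                      # (1) frequency response
          sigma = rho_v / rho_d
          f += MU * (rho_ch4 / rho_d) * cov_w_h2o          # (2a) vapor dilution term
          f += (1.0 + MU * sigma) * (rho_ch4 / T_air_K) * cov_w_T  # (2b) thermal term
          return f * spec_factor                           # (3) spectroscopic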

The Non-Destructive Determination of Heavy Metals in Welding Fume by EDXRF (EDXRF에 의한 용접흄 중의 중금속의 비파괴 정량)

  • Park, Seunghyun;Jeong, Jee Yeon;Ryoo, Jang Jin;Lee, Naroo;Yu, Il Je;Song, Kyung Seuk;Lee, Yong Hag;Han, Jeong Hee;Kim, Sung Jin;Park, Jung sun;Chung, Ho Keun
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.11 no.3 / pp.229-234 / 2001
  • The EDXRF (Energy Dispersive X-ray Fluorescence Spectrometer) technique was applied to the determination of heavy metals in welding fume. The EDXRF method designed in this study is a non-destructive analysis method: samples were analyzed directly by EDXRF without any pre-treatment such as digestion and dilution. The samples used to evaluate this method were laboratory samples exposed in a chamber connected to a welding fume generator. The samples were first analyzed using the non-destructive EDXRF method and subsequently analyzed using the AAS method to verify the accuracy of the EDXRF method. The purpose of this study was to evaluate the possibility of the non-destructive analysis of heavy metals in welding fume by EDXRF. The results of this study were as follows: 1. When the samples were collected under the open-face sampling condition, the surface distribution of welding fume particles on the sample filters was uniform, which made non-destructive analysis possible. 2. The method was statistically evaluated according to the NIOSH (National Institute for Occupational Safety and Health) and HSE (Health and Safety Executive) methods. 3. The overall precision of the EDXRF method, as relative standard deviation (RSD), was calculated at 3.45% for Cr, 2.57% for Fe, and 3.78% for Mn. The limits of detection were calculated at 0.46 μg/sample for Cr, 0.20 μg/sample for Fe, and 1.14 μg/sample for Mn. 4. A comparison between the results for Cr, Fe, and Mn analyzed by EDXRF and by AAS was made in order to assess the accuracy of the EDXRF method. The correlation coefficient between the results of EDXRF and AAS was 0.9985 for Cr, 0.9995 for Fe, and 0.9982 for Mn. The overall uncertainty was determined to be ±12.31%, 8.64%, and 11.91% for Cr, Fe, and Mn, respectively. In conclusion, this study showed that Cr, Fe, and Mn in welding fume were successfully analyzed by EDXRF without any sample pre-treatment such as digestion and dilution, and that a good correlation between the results of EDXRF and AAS was obtained. It is thus possible to use the EDXRF technique as an analysis method for working environment samples. The EDXRF method is an efficient method for the non-destructive analysis of heavy metals in welding fume.
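  The validation statistics quoted above (precision as %RSD, limit of detection, correlation with AAS) are straightforward to reproduce; a minimal Python sketch with hypothetical replicate values follows. The 3 × SD-of-blank / slope detection-limit convention is an assumption here, since the NIOSH and HSE procedures referenced in the paper may define it differently.

      # Sketch of the validation statistics for the EDXRF method: %RSD precision,
      # limit of detection, and correlation against AAS. All values hypothetical.
      import numpy as np

      def rsd_percent(x):
          x = np.asarray(x, float)
          return 100.0 * x.std(ddof=1) / x.mean()      # precision as %RSD

      def lod(blank_signals, slope):
          """Detection limit as 3 * SD of blank signal / calibration slope."""
          return 3.0 * np.std(blank_signals, ddof=1) / slope

      edxrf_cr = [10.2, 10.5, 9.9, 10.4, 10.1]         # ug/sample, hypothetical
      aas_cr   = [10.0, 10.6, 9.8, 10.5, 10.0]
      print(f"Cr precision: {rsd_percent(edxrf_cr):.2f} %RSD")
      print(f"Cr LOD: {lod([0.05, 0.07, 0.04, 0.06], slope=0.5):.2f} ug/sample")
      print(f"EDXRF vs AAS r = {np.corrcoef(edxrf_cr, aas_cr)[0, 1]:.4f}")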


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish contents involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and the contents are released in real time on the Internet. For that reason, many researchers and marketers regard social media contents as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text contents, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for helping business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conduct opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media. Each target medium requires different ways for analysts to gain access: open APIs, search tools, DB-to-DB interfaces, content purchasing, and so on. The second phase is pre-processing, to generate useful materials for meaningful analysis. If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated contents but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major focus and purpose of this phase are to explain the results of the analysis and to help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear, and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with 66.5% of market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades.
We collected a total of 11,869 pieces of content, including blogs, forum contents, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified contents into more detailed categories such as marketing features, environment, reputation, etc. In this phase, we used freeware programs such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As the result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples built with open library packages of the R project. Business actors can detect at a glance the areas that are weak, strong, positive, negative, quiet, or loud. The heat map explains the movement of sentiment or volume in a category-by-time matrix, where the density of color shows intensity across time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation, since a tree map can present buzz volume and sentiment in a hierarchical structure for a certain period. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
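  The four-phase cycle (collecting, qualifying, analyzing, visualizing) can be sketched end to end. The toy Python version below mirrors only the structure, since the study itself used R packages (TM, KoNLP, ggplot2, plyr); the posts and lexicon entries are hypothetical.

      # Toy sketch of the collect -> qualify -> analyze -> visualize cycle.
      # The paper used R; this Python version only mirrors the pipeline shape.
      import re
      from collections import Counter

      posts = ["This ramen is delicious!!",        # phase 1: collected contents
               "Too salty... disappointed",
               "delicious and cheap"]

      def qualify(text):                           # phase 2: cleaning/normalization
          return re.sub(r"[^a-z ]", "", text.lower()).split()

      LEXICON = {"delicious": +1, "cheap": +1, "salty": -1, "disappointed": -1}

      def analyze(tokens):                         # phase 3: lexicon polarity score
          return sum(LEXICON.get(t, 0) for t in tokens)

      scores = [analyze(qualify(p)) for p in posts]
      volume = Counter("positive" if s > 0 else "negative" if s < 0 else "neutral"
                       for s in scores)
      print(volume)                                # phase 4 would chart these counts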

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If someone intends to discover the trend of an issue in SNS Big Data, this information can be used as an important new source for the creation of new value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and established to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic for the duration of a month; (3) provide the importance of a topic through a treemap based on the score system and frequency; (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, which is an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale up from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, the visualization of Big Data is attractive to the Big Data community because it helps analysts examine such data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data Driven Documents that bind the document object model (DOM) to data; interaction with the data is easy, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, which consists of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries, and it is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS).
Based on this, we can confirm the utility of storytelling and time series analysis. Third, we develop a web-based system, and make the system available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
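  The topic-extraction step at the core of TITS can be illustrated with a minimal sketch: fit a topic model over tweet text and list the top keywords per topic. The sketch uses scikit-learn's LDA as a stand-in for the paper's topic modeling pipeline; the tweets, topic count, and keyword count are placeholders.

      # Minimal topic-extraction sketch in the spirit of TITS: fit LDA over tweet
      # text and print top keywords per topic. All inputs are hypothetical.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      tweets = ["subway fare hike announced today", "fare protest downtown",
                "new phone released", "phone battery complaints",
                "subway delays again"]

      vec = CountVectorizer(stop_words="english")
      X = vec.fit_transform(tweets)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

      terms = vec.get_feature_names_out()
      for k, topic in enumerate(lda.components_):
          top = [terms[i] for i in topic.argsort()[::-1][:4]]  # top-4 words/topic
          print(f"topic {k}: {', '.join(top)}")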

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.219-240 / 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields. As good examples, data-driven surveys are based on analyzing the subjectivity of text data posted by users, and market research is conducted by analyzing users' review posts to quantify users' reputation of a target product. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words is likely to differ across domains. For example, the sentiment word 'sad' indicates a negative meaning in many fields, but not in the movie domain. In order to perform accurate sentiment analysis, we need to build the sentiment dictionary for a given domain. However, such a method of building a sentiment lexicon is time-consuming, and many sentiment vocabularies are missed without the use of a general-purpose sentiment lexicon. In order to address this problem, several studies have been carried out to construct sentiment lexicons suitable for a specific domain based on 'OPEN HANGUL' and 'SentiWordNet', which are general-purpose sentiment lexicons. However, OPEN HANGUL is no longer being serviced, and SentiWordNet does not work well because of language differences in the process of converting Korean words into English words. There are thus restrictions on the use of such general-purpose sentiment lexicons as seed data for building the sentiment lexicon for a specific domain. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons. The proposed dictionary, which is a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to quickly construct the sentiment dictionary for a target domain. In particular, it constructs sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having either positive or negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model is up to 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603. Furthermore, we add sentiment information about frequently used coined words and emoticons that are used mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is one of 1-grams, 2-grams, phrases, and sentence patterns. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the importance of developing sentiment dictionaries has gradually declined.
However, one recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, resulting in sentiment analysis with higher accuracy (Teng, Z., 2016). This result indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features for improving the accuracy of deep learning models. The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as features of deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
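  A minimal sketch of the Bi-LSTM gloss classifier described above follows: each tokenized dictionary gloss is mapped to a probability of positive polarity. The vocabulary size, layer widths, and training data are illustrative assumptions, not the paper's configuration.

      # Sketch of a Bi-LSTM gloss classifier: each gloss is labeled positive or
      # negative. Vocabulary size, widths, and data are hypothetical stand-ins.
      import numpy as np
      import tensorflow as tf
      from tensorflow.keras import layers

      VOCAB, MAXLEN = 5000, 30                 # assumed tokenizer settings
      model = tf.keras.Sequential([
          layers.Embedding(VOCAB, 64),
          layers.Bidirectional(layers.LSTM(32)),   # reads the gloss both ways
          layers.Dense(1, activation="sigmoid"),   # P(gloss is positive)
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=["accuracy"])

      # Toy stand-ins for tokenized glosses and their polarity labels.
      x = np.random.randint(0, VOCAB, size=(100, MAXLEN))
      y = np.random.randint(0, 2, size=(100,))
      model.fit(x, y, epochs=1, batch_size=16, verbose=0)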

Steroid Effect on the Brain Protection During Open Heart Surgery Using Hypothermic Circulatory Arrest in the Rabbit Cardiopulmonary Bypass Model (저체온순환정지법을 이용한 개심술시 스테로이드의 뇌보호 효과 - 토끼를 이용한 심폐바이패스 실험모델에서 -)

  • Kim, Won-Gon;Lim, Cheong;Moon, Hyun-Jong;Chun, Eui-Kyung;Chi, Je-Geun;Won, Tae-Hee;Lee, Young-Tak;Chee, Hyun-Keun;Kim, Jun-Woo
    • Journal of Chest Surgery / v.30 no.5 / pp.471-478 / 1997
  • Introduction: The use of rabbits as a cardiopulmonary bypass (CPB) animal model is extremely difficult, mainly due to technical problems. On the other hand, deep hypothermic circulatory arrest (CA) is used to facilitate surgical repair in a variety of cardiac diseases. Although steroids are generally known to be effective in the treatment of cerebral edema, the protective effects of steroids on the brain during CA have not been conclusively established. The objectives of this study are twofold: the establishment of a CPB technique in rabbits and the evaluation of the preventive effect of a steroid on the development of brain edema during CA. Material & Methods: Fifteen New Zealand white rabbits (average body weight 3.5 kg) were divided into three experimental groups: a control CA group (n=5), a CA with Trendelenburg position group (n=5), and a CA with Trendelenburg position plus steroid (methylprednisolone 30 mg/kg) administration group (n=5). After anesthetic induction and tracheostomy, a median sternotomy was performed. An aortic cannula (3.3 mm) and a venous cannula (14 Fr) were inserted in the ascending aorta and the right atrium, respectively. The CPB circuit consisted of a roller pump and a bubble oxygenator. The priming volume of the circuit was approximately 450 ml, including 120-150 ml of blood. CPB was initiated at a flow rate of 80-85 ml/kg/min. Ten minutes after the start of CPB, CA was established for a duration of 40 minutes at a rectal temperature of 20°C. After CA, CPB was restarted with a 20-minute period of rewarming. Ten minutes after weaning, the animal was sacrificed. One-to-2 g portions of the following tissues were rapidly dissected, and their water contents were examined and compared among groups: brain, cervical spinal cord, kidney, duodenum, lung, heart, liver, spleen, pancreas, and stomach. Statistical significance was analyzed by the Kruskal-Wallis nonparametric test. Results: CPB with CA was successfully performed in all cases. A flow rate of 60-100 ml/kg/min was maintained throughout CPB. During CPB, no significant metabolic acidosis was detected, and aortic pressure ranged between 35 and 55 mmHg. After weaning from CPB, all hearts resumed normal beating spontaneously. There were no statistically significant differences in the water contents of the tissues, including the brain, among the three experimental groups. Conclusion: These results indicate that (1) CPB can be reliably administered in rabbits if a proper technique is used, and (2) the effect of a steroid on protection against brain edema related to the Trendelenburg position during CA is not established within the scope of this experiment.
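  The group comparison reported above reduces to a Kruskal-Wallis test on tissue water content across the three groups; a minimal Python sketch with hypothetical brain water percentages follows.

      # Sketch of the group comparison: Kruskal-Wallis test on brain water
      # content across the three groups. Percentages below are hypothetical.
      from scipy.stats import kruskal

      control           = [78.1, 77.9, 78.4, 78.0, 78.2]  # % water content, n=5
      trendelenburg     = [78.3, 78.5, 78.0, 78.2, 78.4]
      trendelen_steroid = [78.0, 77.8, 78.1, 77.9, 78.2]

      stat, p = kruskal(control, trendelenburg, trendelen_steroid)
      print(f"H = {stat:.2f}, p = {p:.3f}")   # p >= 0.05 -> no significant difference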


A Study on the Revitalization of Tourism Industry through Big Data Analysis (한국관광 실태조사 빅 데이터 분석을 통한 관광산업 활성화 방안 연구)

  • Lee, Jungmi;Liu, Meina;Lim, Gyoo Gun
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.149-169 / 2018
  • Korea is currently accumulating a large amount of data in public institutions based on the public data open policy and "Government 3.0". In particular, a lot of data has accumulated in the tourism field. However, academic discussions utilizing tourism data are still limited. Moreover, the openness of data on restaurants, hotels, and online tourism information, as well as the use of SNS Big Data in tourism, is still limited, so utilization through tourism big data analysis remains low. In this paper, we analyzed the factors influencing foreign tourists' satisfaction in Korea from numerical data using data mining techniques and R programming. We tried to find ways to revitalize the tourism industry by analyzing about 36,000 records of big data from the "Survey on the actual situation of foreign tourists from 2013 to 2015" conducted by the Korea Culture & Tourism Research Institute. To do this, we analyzed the factors that strongly influence the 'Satisfaction', 'Revisit intention', and 'Recommendation' variables of foreign tourists, and we further analyzed the practical influence of these variables. As the procedure of this study, we first integrated the survey data of foreign tourists conducted by the Korea Culture & Tourism Research Institute, stored in the tourist information system from 2013 to 2015, and eliminated unnecessary variables that were inconsistent with the research purpose. Some variables were modified to improve the accuracy of the analysis. We then analyzed the factors affecting the dependent variables by using data mining methods: decision trees (C5.0, CART, CHAID, QUEST), artificial neural networks, and logistic regression analysis in IBM SPSS Modeler 16.0. The seven variables that have the greatest effect on each dependent variable were derived. As a result of the data analysis, it was found that the seven major variables influencing 'overall satisfaction' were sightseeing spot attraction, food satisfaction, accommodation satisfaction, traffic satisfaction, guide service satisfaction, number of visited places, and country. The most influential variables were food satisfaction and sightseeing spot attraction. The seven variables that had the greatest influence on 'revisit intention' were country, travel motivation, activity, food satisfaction, best activity, guide service satisfaction, and sightseeing spot attraction. The most influential variables were food satisfaction and travel motivation related to the Korean Wave. Lastly, the seven variables with the greatest influence on 'recommendation intention' were country, sightseeing spot attraction, number of visited places, food satisfaction, activity, tour guide service satisfaction, and cost; among these, the most influential were country, sightseeing spot attraction, and food satisfaction. In addition, in order to grasp the influence of each independent variable more deeply, we used R programming to identify the influence of the independent variables. As a result, it was found that food satisfaction and sightseeing spot attraction scored higher than the other variables for overall satisfaction and had a greater effect than the other influential variables. For revisit intention, travel motivation related to the Korean Wave had a higher β value than the other variables.
It will be necessary to have a policy that leads to substantial revisits by tourists by enhancing tourist attractions related to the Korean Wave. Lastly, recommendation showed the same pattern as satisfaction: sightseeing spot attraction and food satisfaction had higher β values than the other variables. From this analysis, we found that the 'food satisfaction' and 'sightseeing spot attraction' variables were common factors influencing the three dependent variables mentioned above ('Overall satisfaction', 'Revisit intention', and 'Recommendation'), and that those factors significantly affected the satisfaction of travel in Korea. The purpose of this study is to examine how to attract more foreign tourists to Korea through big data analysis. The findings are expected to serve as basic data for analyzing tourism data and establishing effective tourism policy, and as material for an activation plan that can contribute to tourism development in Korea in the future.
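  A minimal sketch of the kind of analysis described above follows: a CART-style decision tree and a logistic regression predicting revisit intention from satisfaction variables, with the tree's feature importances and the regression's β coefficients standing in for the influence measures discussed. The survey rows are hypothetical, not the institute's 36,000-record data.

      # Sketch: CART-style tree and logistic regression predicting revisit
      # intention from satisfaction scores. Survey rows are hypothetical.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.linear_model import LogisticRegression

      # columns: food satisfaction, sightseeing spot attraction, guide service (1-5)
      X = np.array([[5, 5, 4], [2, 3, 2], [4, 5, 5],
                    [1, 2, 3], [5, 4, 4], [2, 2, 1]])
      y = np.array([1, 0, 1, 0, 1, 0])     # revisit intention (1 = yes)

      tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
      logit = LogisticRegression().fit(X, y)

      print("tree feature importances:", tree.feature_importances_)
      print("logit coefficients (beta):", logit.coef_[0])  # cf. the beta values above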

Arthroscopic Treatment of Metallic Suture Anchor Failures after Bankart Repair (Bankart 수술 후 발생한 금속 봉합 나사못 합병증의 관절경적 치료)

  • Shin, Sang-Jin;Jung, Jae-Hoon;Kim, Sung-Jae;Yoo, Jae-Doo
    • Journal of the Korean Arthroscopy Society / v.10 no.1 / pp.70-76 / 2006
  • Purpose: This study presents 5 patients who had metallic anchor protrusion on the glenoid after Bankart repair for anterior shoulder instability and reviews the cause, the clinical features, and the arthroscopic removal technique. Method and Materials: Five males with an average age of 22 years (range, 19 to 25 years) were included. Four patients had arthroscopic Bankart repair and 1 patient had open repair for anterior shoulder instability. They had metallic suture anchors protruding from the glenoid, and the protruding anchors were removed arthroscopically using the empty inserter of a larger suture anchor. Results: Four patients had a painful clicking sound on abduction and external rotation, and 1 patient showed shoulder instability. Range of motion was normal except for a loss of a few degrees of external rotation. The position of the protruding metallic anchor was at the 2, 3, and 5 o'clock positions in three patients and at the 4 o'clock position in 2 patients. In 2 patients, the metallic suture anchor was malpositioned about 5 mm medial to the anterior glenoid edge. All had Outerbridge grade II-III chondral damage on the humeral head, and 1 patient showed glenoid cartilage destruction. None had shoulder instability after 2 years of follow-up. The Constant score was 65 preoperatively and 89 postoperatively. The ASES score was 67 preoperatively and 88 postoperatively. Conclusion: Symptoms of a protruding suture anchor are not accompanied by instability. Most symptoms appeared during the rehabilitation period and were confused with postoperative pain. Prompt diagnosis and early arthroscopic removal or impaction of a protruding metallic suture anchor is recommended because of serious glenohumeral cartilage destruction. This is an easy, simple, and reproducible method for removing a protruding metallic suture anchor arthroscopically.
