• Title/Summary/Keyword: park management

Search Results: 21,788

Breeding and Development of the Tscherskia triton in Jeju Island (제주도 서식 비단털쥐(Tscherskia triton)의 번식과 발달)

  • Park, Jun-Ho;Oh, Hong-Shik
    • Korean Journal of Environment and Ecology / v.31 no.2 / pp.152-165 / 2017
  • The greater long-tailed hamster, Tscherskia triton, is widely distributed in northern China, Korea and adjacent areas of Russia. Apart from its distribution, the biological characteristics of this species related to life history, behavior, and ecological influence have rarely been studied in Korea. This study was conducted to obtain biological information on breeding, growth and development that is basic to species-specific studies. The study adopted laboratory management of a breeding programme for T. triton collected on Jeju Island from March 2015 to December 2016. The conception rate was 31.67%, and hamsters in large cages had a higher conception rate than those in small cages (56.7 vs. 6.7%). The gestation period was 22 ± 1.6 days (range 21 to 27 days), and litter size ranged from 2 to 7, with a mean of 4.26 ± 1.37. The minimum weaning age was 19.2 ± 1.4 days (range 18-21 days). There were no significant sex differences in mean body weight or external body measurements at birth. However, significant sexual differences were found from weaning (21 days old) in head and body length as well as tail length (HBL at weaning, 106.50 ± 6.02 vs. 113.34 ± 4.72 mm, p<0.05; HBL at 4 months, 163.93 ± 5.42 vs. 182.83 ± 4.32 mm, p<0.05; TL at 4 months, 107.23 ± 3.25 vs. 93.95 ± 2.15 mm, p<0.05). Gompertz and logistic growth curves were fitted to data for body weight and the lengths of head and body, tail, ear, and hind foot. In both types of growth curves, males exhibited greater asymptotic values (164.840 ± 7.453 vs. 182.830 ± 4.319 mm, p<0.0001; 163.936 ± 5.415 vs. 182.840 ± 4.333 mm, p<0.0001), faster maximum growth rates (1.351 ± 0.065 vs. 1.435 ± 0.085, p<0.05; 2.870 ± 0.253 vs. 3.211 ± 0.635, p<0.05), and a later age of maximum growth than females in head and body length (5.121 ± 0.318 vs. 5.520 ± 0.333, p<0.05; 6.884 ± 0.336 vs. 7.503 ± 0.453, p<0.05). However, females exhibited greater asymptotic values (105.695 ± 5.938 vs. 94.150 ± 2.507 mm, p<0.001; 111.609 ± 14.881 vs. 93.960 ± 2.150 mm, p<0.05) and a longer length at inflection (60.306 ± 1.992 vs. 67.859 ± 1.330 mm, p<0.0001; 55.714 ± 7.458 vs. 46.975 ± 1.074 mm, p<0.05) than males in tail length. The growth rate constants for the morphological characters and weights of males and females were similar between the two types of growth curves. These results provide baseline biological data for studies of the species specificity of T. triton.
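The two growth models fitted above can be sketched as follows. This is a minimal illustration of the Gompertz and logistic forms; the parameter values below are hypothetical stand-ins, not the study's fitted coefficients.

```python
import math

def gompertz(t, A, k, ti):
    # Gompertz growth: asymptote A, growth rate constant k, age at inflection ti.
    # At t = ti the curve reaches A/e and the growth rate is maximal (A*k/e).
    return A * math.exp(-math.exp(-k * (t - ti)))

def logistic(t, A, k, ti):
    # Logistic growth: at t = ti the curve reaches A/2 and the maximum growth rate is A*k/4.
    return A / (1.0 + math.exp(-k * (t - ti)))

# Illustrative only: asymptote taken from the male head-and-body length reported above,
# with hypothetical k and ti (in weeks).
hbl_at_20_weeks = gompertz(20, A=182.8, k=0.5, ti=7.5)
```

In practice the parameters would be estimated by nonlinear least squares (e.g. `scipy.optimize.curve_fit`) against the measured body lengths.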

Development of an Offline Based Internal Organ Motion Verification System during Treatment Using Sequential Cine EPID Images (연속촬영 전자조사 문 영상을 이용한 오프라인 기반 치료 중 내부 장기 움직임 확인 시스템의 개발)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Huh, Woong;Kim, Min-Kyu;Han, Young-Yih;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jing-Sung;Park, Hee-Chul;Ahn, Sung-Hwan;Lim, Do-Hoon;Choi, Doo-Ho
    • Progress in Medical Physics / v.23 no.2 / pp.91-98 / 2012
  • Verification of internal organ motion during treatment, and its feedback, is essential for accurate dose delivery to a moving target. We developed an offline internal organ motion verification system (IMVS) using cine EPID images and evaluated its accuracy and availability through a phantom study. For verification of organ motion using live cine EPID images, the self-developed analysis software employed a pattern matching algorithm based on an internal surrogate that is easily distinguishable and represents organ motion in the treatment field, such as the diaphragm. For the system performance test, we developed a linear motion phantom consisting of a human-body-shaped phantom with a fake tumor in the lung, a linear motion cart, and control software. The phantom was operated with a motion of 2 cm at 4 sec per cycle, and cine EPID images were obtained at rates of 3.3 and 6.6 frames per sec (2 MU/frame) with 1,024 × 768 pixels on a linear accelerator (10 MV X-rays). Organ motion of the target was tracked using the self-developed analysis software. Results were compared with the planned data of the motion phantom and with data from a video image based tracking system (RPM, Varian, USA) using an external surrogate in order to evaluate accuracy. For quantitative analysis, we analyzed the correlation between the two data sets in terms of average cycle (peak to peak), amplitude, and pattern (RMS, root mean square) of motion. Averages for the cycle of motion from the IMVS and RPM systems were 3.98 ± 0.11 (IMVS 3.3 fps), 4.005 ± 0.001 (IMVS 6.6 fps), and 3.95 ± 0.02 (RPM) sec, respectively, showing good agreement with the actual value (4 sec/cycle). The average amplitude of motion tracked by our system was 1.85 ± 0.02 cm (3.3 fps) and 1.94 ± 0.02 cm (6.6 fps), differing slightly from the actual value (2 cm) by 0.15 cm (7.5% error) and 0.06 cm (3% error), respectively, owing to the time resolution of image acquisition. In the analysis of motion pattern, the RMS value from the cine EPID images at 3.3 fps (0.1044) was slightly larger than that at 6.6 fps (0.0480). The organ motion verification system using sequential cine EPID images with an internal surrogate represented motion well, within 3% error, in this preliminary phantom study. The system can be implemented for clinical purposes including organ motion verification during treatment, comparison with 4D treatment planning data, and feedback for accurate dose delivery to the moving target.
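The pattern (RMS) comparison described above can be sketched in a few lines. This is a hedged illustration that models the planned phantom motion as a sinusoid with 2 cm peak-to-peak amplitude and a 4 s period; the function names are ours, not part of the IMVS software.

```python
import math

def planned_position(t, amplitude_cm=1.0, period_s=4.0):
    # Planned phantom motion: 2 cm peak-to-peak (1 cm amplitude), 4 s per cycle.
    return amplitude_cm * math.sin(2 * math.pi * t / period_s)

def rms_error(tracked, frame_rate_fps):
    # Root-mean-square deviation between tracked positions and the planned trace,
    # sampling the planned trace at the same frame times.
    n = len(tracked)
    sq = [(p - planned_position(i / frame_rate_fps)) ** 2 for i, p in enumerate(tracked)]
    return math.sqrt(sum(sq) / n)

# A perfectly tracked trace at 6.6 frames per second has zero RMS error;
# a trace with a constant 0.5 mm offset has an RMS error equal to that offset.
trace = [planned_position(i / 6.6) for i in range(66)]
offset_trace = [p + 0.05 for p in trace]
```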

The Prognostic Value of the First Day and Daily Updated Scores of the APACHE III System in Sepsis (패혈증환자에서 APACHE III Scoring System의 예후적 가치)

  • Lim, Chae-Man;Lee, Jae-Kyun;Lee, Sung-Soon;Koh, Youn-Suck;Kim, Woo-Sung;Kim, Dong-Soon;Kim, Won-Dong;Park, Pyung-Hwan;Choi, Jong-Moo
    • Tuberculosis and Respiratory Diseases / v.42 no.6 / pp.871-877 / 1995
  • Background: An index that can predict the prognosis of critically ill patients is needed to identify high-risk patients and to individualize their treatment. The APACHE III scoring system was established in 1991, but there have been only a few studies concerning its prognostic value. We wanted to know whether APACHE III scores have prognostic value in discriminating survivors from nonsurvivors in sepsis. Methods: In 48 patients meeting Bone's criteria for sepsis, we retrospectively surveyed the day 1 (D1), day 2 (D2) and day 3 (D3) scores of patients who were admitted to the intensive care unit. The scores of the sepsis survivors and nonsurvivors were compared with respect to the D1 score, and also with respect to the changes of the updated D2 and D3 scores. Results: 1) Of the 48 sepsis patients, 21 (43.5%) survived and 27 (56.5%) died. The nonsurvivors were older (62.7 ± 12.6 vs 51.1 ± 18.1 yrs), presented with lower mean arterial pressure (56.9 ± 26.2 vs 67.7 ± 14.2 mmHg), and showed a greater number of multisystem organ failures (1.2 ± 0.8 vs 0.2 ± 0.4) than the survivors (p<0.05, respectively). There were no significant differences in sex or initial body temperature between the two groups. 2) The D1 score was lower in the survivors (n=21) than in the nonsurvivors (44.1 ± 14.6 vs 78.5 ± 18.6, p=0.0001). The D2 and D3 scores decreased significantly in the survivors (D1 vs D2, 44.1 ± 14.6 : 37.9 ± 15.0, p=0.035; D2 vs D3, 37.9 ± 15.0 : 30.1 ± 9.3, p=0.0001) but showed a tendency to increase in the nonsurvivors (D1 vs D2 (n=21), 78.5 ± 18.6 : 81.3 ± 23.0, p=0.1337; D2 vs D3 (n=11), 68.2 ± 19.3 : 75.3 ± 18.8, p=0.0078). 3) The D1 scores of 12 survivors and 6 nonsurvivors were in the same range of 42-67 (mean D1 score, 53.8 ± 10.0 in the survivors, 55.3 ± 10.3 in the nonsurvivors). Age, sex, initial body temperature, and mean arterial pressure did not differ between these two groups. In this group, however, the D2 and D3 scores decreased significantly in the survivors (D1 vs D2, 53.3 ± 10.0 : 43.6 ± 16.4, p=0.0278; D2 vs D3, 43.6 ± 16.4 : 31.2 ± 10.3, p=0.0005) but showed a tendency to increase in the nonsurvivors (D1 vs D2 (n=6), 55.3 ± 10.3 : 66.7 ± 13.9, p=0.1562; D2 vs D3 (n=4), 64.0 ± 16.4 : 74.3 ± 18.6, p=0.1250). Among the individual items of the first-day APACHE III score, only the respiratory rate score was capable of discriminating nonsurvivors from survivors (5.5 ± 2.9 vs 1.9 ± 3.7, p=0.046) in this group. Conclusion: In sepsis, nonsurvivors had a higher first-day APACHE III score, and their updated scores on the following days failed to decline and instead tended to increase. Survivors, on the other hand, had a lower first-day score and showed a decline in the updated APACHE scores. These results suggest that the first-day and daily updated APACHE III scores are useful in predicting the outcome and assessing the response to management in patients with sepsis.
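The trend the authors describe, declining updated scores in survivors versus non-declining scores in nonsurvivors, can be stated as a simple rule. This is an illustrative sketch of our own, not a validated prediction model.

```python
def apache_trend(d1, d2):
    """Direction of the updated APACHE III score from day 1 to day 2.
    In this cohort a declining score accompanied survival and a rising
    score accompanied a poor outcome. Illustrative only, not a clinical tool."""
    return "declining" if d2 < d1 else "rising"

# Mean scores reported above: survivors 44.1 -> 37.9, nonsurvivors 78.5 -> 81.3.
survivor_trend = apache_trend(44.1, 37.9)
nonsurvivor_trend = apache_trend(78.5, 81.3)
```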


The Effect of External PEEP on Work of Breathing in Patients with Auto-PEEP (Auto-PEEP이 존재하는 환자에서 호흡 일에 대한 External PEEP의 효과)

  • Chin, Jae-Yong;Lim, Chae-Man;Koh, Youn-Suck;Park, Pyung-Whan;Choi, Jong-Moo;Lee, Sang-Do;Kim, Woo-Sung;Kim, Dong-Soon;Kim, Won-Dong
    • Tuberculosis and Respiratory Diseases / v.43 no.2 / pp.201-209 / 1996
  • Background : Auto-PEEP, which develops when expiratory lung emptying is not finished before the beginning of the next inspiration, is frequently found in patients on mechanical ventilation. Its presence imposes an increased risk of barotrauma and hypotension, as well as increased work of breathing (WOB), by adding an inspiratory threshold load and/or adversely affecting inspiratory trigger sensitivity. The aim of this study was to evaluate the relationship of auto-PEEP with WOB and the effect of PEEP applied by the ventilator (external PEEP) on WOB in patients with auto-PEEP. Method : 15 patients who required mechanical ventilation for management of acute respiratory failure were studied. First, the differences in WOB and other indices of respiratory mechanics were examined between 7 patients with auto-PEEP and 8 patients without auto-PEEP. Then, we applied 3 cm H2O of external PEEP to the patients with auto-PEEP and evaluated its effects on lung mechanics as well as WOB. Indices of respiratory mechanics, including tidal volume (VT), respiratory rate, minute ventilation (VE), peak inspiratory flow rate (PIFR), peak expiratory flow rate (PEFR), peak inspiratory pressure (PIP), TI/TTOT, auto-PEEP, dynamic compliance of the lung (Cdyn), expiratory airway resistance (RAWe), mean airway resistance (RAWm), P0.1, work of breathing performed by the patient (WOB), and pressure-time product (PTP), were obtained with a CP-100 Pulmonary Monitor (Bicore, USA). Values are expressed as mean ± SEM (standard error of the mean). Results : 1) Comparison of WOB and other indices of respiratory mechanics in patients with and without auto-PEEP : There were significant increases in WOB (1.71 ± 0.24 vs 0.50 ± 0.19 J/L, p=0.007), PTP (317 ± 70 vs 98 ± 36 cm H2O·sec/min, p=0.023), RAWe (35.6 ± 5.7 vs 18.2 ± 2.3 cm H2O/L/sec, p=0.023), RAWm (28.8 ± 2.5 vs 11.9 ± 2.0 cm H2O/L/sec, p=0.001) and P0.1 (6.2 ± 1.0 vs 2.9 ± 0.6 cm H2O, p=0.021) in patients with auto-PEEP compared with patients without auto-PEEP. The differences in the other indices, including VT, PEFR, VE and TI/TTOT, were not significant. 2) Effect of 3 cm H2O external PEEP on respiratory mechanics in patients with auto-PEEP : When 3 cm H2O of external PEEP was applied, there were significant decreases in WOB (1.71 ± 0.24 vs 1.20 ± 0.21 J/L, p=0.021) and PTP (317 ± 70 vs 231 ± 55 cm H2O·sec/min, p=0.038). RAWm showed a tendency to decrease (28.8 ± 2.5 vs 23.9 ± 2.1 cm H2O, p=0.051), but PIP increased with the application of 3 cm H2O of external PEEP (16 ± 2 vs 22 ± 3 cm H2O, p=0.008). VT, VE, PEFR, TI/TTOT and Cdyn did not change significantly. Conclusion : The presence of auto-PEEP in mechanically ventilated patients was accompanied by increased WOB performed by the patient, and this WOB was decreased by 3 cm H2O of externally applied PEEP. However, with 3 cm H2O of external PEEP, increased PIP was noted, implying the importance of close monitoring of airway pressure during application of external PEEP.


Study for Treatment Effects and Prognostic Factors of Bronchial Asthma -Follow Up Over 2 Years- (2년 이상 관찰중인 성안 기관지 천식환자의 치료 효과 및 예후인자에 관한 연구)

  • Choung, Bo-Young;Park, Jung-Won;Kim, Sung-Kyu;Hong, Chein-Soo
    • Tuberculosis and Respiratory Diseases / v.44 no.3 / pp.559-573 / 1997
  • Background : Asthma causes recurrent episodes of wheezing, breathlessness, chest tightness, and cough. These symptoms are usually associated with widespread but variable airflow limitation that is at least partly reversible, either spontaneously or with treatment. The inflammation also causes an associated increase in airway responsiveness to a variety of stimuli. Method : Of the 403 adult bronchial asthma patients enrolled from March 1992 to March 1994 in the Allergy Clinic of Severance Hospital, Yonsei University, this study reviewed 97 cases to evaluate treatment effects and to analyse prognostic factors. The patients were classified into five groups according to treatment response ; group 1 (non-control group) : patients who were not controlled during follow-up ; group 2 (high step treatment group) : patients who were controlled for longer than 3 months by step 3 or 4 treatment of the "Global Initiative for Asthma, Global Strategy for Asthma Management and Prevention" (NHLBI/WHO) with PFR (%) larger than 80% ; group 3 (short term control group) : patients who were controlled for less than 1 year by step 1 or 2 treatment of NHLBI/WHO ; group 4 (intermediate term control group) : patients who were controlled for more than 1 year but less than 2 years by step 1 or 2 treatment of NHLBI/WHO ; group 5 (long term control group) : patients who were controlled for more than 2 years by step 1 or 2 treatment of NHLBI/WHO. Patients who were controlled for more than 1 year with a negatively converted methacholine test and no eosinophils in sputum were classified as the methacholine negative conversion group. We reviewed patients' history, atopy score, total IgE, specific IgE, methacholine PC20, peripheral blood eosinophil count, pulmonary function tests, steroid doses, and the number of aggravations after treatment. Results : On analysis of the 97 patients, 20 cases (20.6%) were classified into group 1, 26 cases (26.8%) into group 2, 23 cases (23.7%) into group 3, 15 cases (15.5%) into group 4, and 13 cases (13.4%) into group 5. There were no differences in sex, asthma type, family history, smoking history, allergic rhinitis or aspirin allergy among the groups. In the long term control group, asthma onset age was younger, symptom duration was shorter, and initial pulmonary function was better. The long term control group required lower amounts of oral steroid, had fewer aggravations during the first 3 months after starting treatment, and had a shorter duration from enrollment to control. Atopy, allergic skin test results, sputum and blood eosinophils, total IgE, and nonspecific bronchial responsiveness were not significantly different among the groups. Seven of the 28 patients who were controlled for more than 1 year showed a negatively converted methacholine test and no eosinophils in the sputum. Their mean control duration was 20.3 ± 9.7 months and relapse did not occur. Conclusion : Patients with a younger onset age, shorter symptom duration, better initial pulmonary function, lower initial treatment steps, lower steroid requirements, and fewer aggravations after starting treatment were classified into the long term control group compared to the others.
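The five response groups defined in the Method section follow a simple decision rule, sketched below. This is our own encoding of the stated criteria; the function and variable names are hypothetical.

```python
def classify_asthma_group(controlled, treatment_step, control_months):
    """Assign a patient to one of the five response groups defined in the study.
    controlled: whether asthma was controlled during follow-up
    treatment_step: NHLBI/WHO treatment step (1-4) that achieved control
    control_months: duration of control in months"""
    if not controlled:
        return 1          # non-control group
    if treatment_step >= 3:
        return 2          # high step treatment group (controlled >= 3 months by step 3/4)
    if control_months < 12:
        return 3          # short term control group
    if control_months < 24:
        return 4          # intermediate term control group
    return 5              # long term control group
```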


Effects of firm strategies on customer acquisition of Software as a Service (SaaS) providers: A mediating and moderating role of SaaS technology maturity (SaaS 기업의 차별화 및 가격전략이 고객획득성과에 미치는 영향: SaaS 기술성숙도 수준의 매개효과 및 조절효과를 중심으로)

  • Chae, SeongWook;Park, Sungbum
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.151-171 / 2014
  • Firms today have sought management effectiveness and efficiency by utilizing information technologies (IT). Numerous firms outsource specific information systems functions to cope with their shortage of information resources or IT experts, or to reduce their capital cost. Recently, Software-as-a-Service (SaaS), a new type of information system, has become one of the most powerful outsourcing alternatives. SaaS is software deployed as a hosted service and accessed over the internet. It embodies the ideas of on-demand, pay-per-use, utility computing and is now being applied to support the core competencies of clients in areas ranging from individual productivity to vertical industry and e-commerce. In this study, therefore, we seek to quantify the value that SaaS has on business performance by examining the relationships among firm strategies, SaaS technology maturity, and the business performance of SaaS providers. We begin by drawing from prior literature on SaaS, technology maturity and firm strategy. SaaS technology maturity is classified into three phases: application service providing (ASP), Web-native application, and Web-service application. Firm strategies are operationalized as low-cost strategy and differentiation strategy. Finally, we consider customer acquisition as the business performance measure. The specific objectives of this study are as follows. First, we examine the relationships between customer acquisition performance and both the low-cost strategy and the differentiation strategy of SaaS providers. Second, we investigate the mediating and moderating effects of SaaS technology maturity on those relationships. For this purpose, the study collected data on SaaS providers and their lines of applications registered in the CNK (Commerce net Korea) database, using a questionnaire administered by a professional research institution. The unit of analysis in this study is the SBU (strategic business unit) within the software provider. A total of 199 SBUs are used for analyzing and testing our hypotheses. With regard to the measurement of firm strategy, we take three measurement items for differentiation strategy, namely application uniqueness (whether an application aims to differentiate within just one or a small number of target industries), supply channel diversification (whether the SaaS vendor has a diversified supply chain), and the number of specialized experts, and two items for low-cost strategy, namely subscription fee and initial set-up fee. We employ a hierarchical regression analysis technique for testing the moderation effects of SaaS technology maturity and follow Baron and Kenny's procedure for determining whether firm strategies affect customer acquisition through technology maturity. Empirical results revealed that, first, when a differentiation strategy is applied to attain business performance such as customer acquisition, the effect of the strategy is moderated by the technology maturity level of SaaS providers. In other words, securing a higher level of SaaS technology maturity is essential for higher business performance. For instance, given that firms implement application uniqueness or distribution channel diversification as a differentiation strategy, they can acquire more customers when their level of SaaS technology maturity is higher rather than lower. Second, results indicate that pursuing either a differentiation strategy or a low-cost strategy effectively works for SaaS providers in acquiring customers, which means that continuously differentiating their service from others or making their service fees (subscription fee or initial set-up fee) lower is helpful for business success in terms of customer acquisition. Lastly, results show that the level of SaaS technology maturity mediates the relationship between low-cost strategy and customer acquisition. That is, based on our research design, customers usually perceive the real value of a low subscription fee or initial set-up fee only through the SaaS service provided by the vendor, and this in turn affects their decision whether or not to subscribe.
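The Baron and Kenny mediation logic used in the study can be sketched with a toy ordinary-least-squares routine. The data below are fabricated for illustration only, not the study's data; the point is merely that when the mediator (technology maturity) is added, the direct coefficient of the strategy variable shrinks.

```python
def ols(y, xs):
    """Ordinary least squares with intercept, via normal equations and Gaussian
    elimination. xs is a list of predictor columns; returns [intercept, b1, b2, ...]."""
    n = len(y)
    cols = [[1.0] * n] + [list(x) for x in xs]
    k = len(cols)
    a = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    for p in range(k):                      # forward elimination with partial pivoting
        piv = max(range(p, k), key=lambda r: abs(a[r][p]))
        a[p], a[piv] = a[piv], a[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = a[r][p] / a[p][p]
            for c in range(p, k):
                a[r][c] -= f * a[p][c]
            b[r] -= f * b[p]
    beta = [0.0] * k                        # back substitution
    for p in reversed(range(k)):
        beta[p] = (b[p] - sum(a[p][c] * beta[c] for c in range(p + 1, k))) / a[p][p]
    return beta

# Hypothetical data: X = low-cost strategy intensity, M = SaaS technology maturity,
# Y = customers acquired. Y is driven almost entirely by M, which is driven by X.
X = [1, 2, 3, 4, 5, 6]
M = [x * 0.9 + e for x, e in zip(X, [0.1, -0.2, 0.0, 0.2, -0.1, 0.0])]
Y = [m * 2.0 + e for m, e in zip(M, [0.0, 0.1, -0.1, 0.0, 0.1, -0.1])]
step1 = ols(Y, [X])       # Baron & Kenny step 1: X must predict Y
step3 = ols(Y, [X, M])    # step 3: with M included, X's direct coefficient shrinks
```

A moderation test in the same framework would add an interaction column (products of the strategy and maturity values) as a further predictor.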

A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.93-108 / 2014
  • To support business decision making, interest and efforts in analyzing and using transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing, but are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions are evolving into various patterns by taking advantage of information technology, and to reflect this evolution there are many efforts to improve fraud detection methods and advanced application systems in terms of accuracy and ease of use. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are assigned as auction-exception products when the total volume of the product is relatively small, the number of wholesalers is small, or it is difficult for wholesalers to purchase the products. However, the auction-exception products policy creates several problems for fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real large-scale agricultural product trade transaction data from 2008 to 2010 in the market are analyzed, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first trial to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules were generated using an outlier detection approach. We assume that outlier transactions have a higher probability of being fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items. Quarterly average unit prices of product items for specific wholesalers are also used to identify outlier transactions. The reliability of the generated fraud detection rules was confirmed by domain experts. To determine whether a transaction is fraudulent or not, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed to a Z-value to calculate its occurrence probability when we approximate the distribution of unit prices by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because in the case of auction-exception agricultural products, Z-values are influenced by the outlier fraud transactions themselves, since the number of wholesalers is small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction which is being checked for fraud. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first function of the system is to import transaction databases. The next important functions are to set up fraud detection parameters; by changing these parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found based on the fraud detection parameters, and the potential fraud transactions can be viewed on screen or exported as files. This study is an initial trial to identify fraudulent transactions in auction-exception agricultural products, and many research topics remain. First, the scope of the analyzed data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, the scope of fraud transaction detection should be extended to fishery products. There are also many possibilities for applying different data mining techniques to fraud detection; for example, a time series approach is a potential technique to apply to the problem. Finally, although outlier transactions are detected here based on the unit prices of transactions, it is also possible to derive fraud detection rules based on transaction volumes.
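The Self-Eliminated Z-score described above can be sketched as follows. This is a minimal illustration with made-up unit prices, assuming the population standard deviation and a ±3 threshold; the original system's exact parameters may differ.

```python
from statistics import fmean, pstdev

def self_eliminated_z(prices, i):
    """Z-score of prices[i] computed against the *other* transactions only,
    so an outlier cannot mask itself by inflating the mean and deviation."""
    others = prices[:i] + prices[i + 1:]
    mu, sigma = fmean(others), pstdev(others)
    if sigma == 0.0:
        return 0.0  # no spread among the other transactions
    return (prices[i] - mu) / sigma

def flag_outliers(prices, threshold=3.0):
    # Flag transactions whose self-eliminated Z-score exceeds the threshold.
    return [i for i in range(len(prices)) if abs(self_eliminated_z(prices, i)) > threshold]

unit_prices = [100, 102, 98, 101, 99, 103, 100, 97, 250]  # the last price is suspicious
```

Note that an ordinary Z-score computed over all nine prices would be diluted by the outlier's own contribution to the mean and standard deviation, which is exactly the masking effect the self-eliminated variant avoids.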

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.19-43 / 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business and is widely applied, for example in the observation that 20 percent of customers account for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization that transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma arising from the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a larger contribution by the upper 20 percent of participants in knowledge sharing will enhance the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and is extended to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables (Pareto ratio and Gini coefficient) with seven control variables, such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured article level, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles cite at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and of the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks in an online community.
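The two focal measures can be computed directly from editors' contribution counts. This is a minimal sketch with hypothetical edit counts, following the definitions given above.

```python
def pareto_ratio(contribs):
    """Share of total contributions made by the top 20% of contributors."""
    ordered = sorted(contribs, reverse=True)
    top_n = max(1, round(len(ordered) * 0.2))
    return sum(ordered[:top_n]) / sum(ordered)

def gini(contribs):
    """Gini coefficient of contribution counts (0 = perfectly equal sharing)."""
    x = sorted(contribs)
    n = len(x)
    cum = sum((i + 1) * v for i, v in enumerate(x))
    return (2 * cum) / (n * sum(x)) - (n + 1) / n

# Hypothetical edit counts of 10 editors of one article.
edits = [120, 40, 15, 10, 5, 4, 3, 2, 1, 1]
```

Each featured article group would yield one Pareto ratio and one Gini coefficient, which then enter the Cox regressions as focal covariates.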

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.93-111 / 2013
  • As Internet and information technology (IT) continues to develop and evolve, the issue of big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed and analyzed by existing conventional information systems and it also refers to the new technologies designed to effectively extract values from such data. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry such as R&D, manufacturing, and finance to collect and analyze immense quantities of data in order to extract meaningful information and to use this information to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data that are currently receiving the most attention include information available within companies, such as information on consumer characteristics, information on purchase records, logistics information and log information indicating the usage of products and services by consumers, as well as information accumulated outside companies, such as information on the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes because consumers search for information on the internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. 
Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and in industry. Studies using web search traffic information can be broadly classified into two fields. The first field consists of empirical demonstrations that show how web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, and so on. The other field focuses on using web search traffic information to observe consumer behavior, for example by identifying the attributes of a product that consumers regard as important or by tracking changes in consumers' expectations, but relatively little research has been completed in this field. In particular, to the best of our knowledge, hardly any brand-related studies have yet attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword, but they also often input multiple keywords to seek related information (this is referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand. Web search traffic information shows that the quantity of simultaneous searches using certain keywords increases when the relation between them is closer in the consumer's mind, so it is possible to derive the relations between keywords by collecting this relational data and subjecting it to network analysis. 
Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, with a focus on tablet PCs, which belong to an innovative product group.
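The simultaneous-search idea above can be sketched as a keyword co-occurrence graph: count how often two keywords appear together in the same search, and treat heavier edges as closer relations in consumers' minds. The search log below is invented purely for illustration, not data from the study.

```python
from collections import Counter
from itertools import combinations

# Hypothetical co-search log: each entry is the set of keywords one user
# entered together in a single simultaneous search (illustrative data only).
co_searches = [
    {"iPad", "Galaxy Tab"},
    {"iPad", "Galaxy Tab"},
    {"iPad", "battery life"},
    {"Galaxy Tab", "price"},
    {"iPad", "price"},
]

def co_search_edges(logs):
    """Count how often each keyword pair occurs in the same search."""
    edges = Counter()
    for keywords in logs:
        for a, b in combinations(sorted(keywords), 2):
            edges[(a, b)] += 1
    return edges

edges = co_search_edges(co_searches)
# The heaviest edge suggests the closest relation in the consumer's mind.
closest = max(edges, key=edges.get)
```

The resulting weighted edge list can then be fed into standard network-analysis tooling (centrality, clustering) to position brands relative to one another and to product attributes.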

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content through the text data about products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning them to predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels; indeed, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Online reviews are easy to collect openly and directly affect a business: in marketing, real-world information from customers is gathered on websites rather than through surveys, and depending on whether a website's posts are positive or negative, the customer response is reflected in sales. However, many reviews on a website are noisy and difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while more recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, accuracy remains limited because sentiment scores change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of polarity analysis using the pretrained IMDB review data set. 
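Before the deep-learning models, the abstract mentions classical baselines such as naive Bayes. As a minimal sketch of what such a baseline looks like, the following is a multinomial naive Bayes polarity classifier with Laplace smoothing, written in plain Python on a tiny invented toy corpus (not the IMDB data used in the paper):

```python
import math
from collections import Counter

# Toy labeled reviews (illustrative only, not the paper's data set).
train = [
    ("great movie wonderful acting", "pos"),
    ("loved it great fun", "pos"),
    ("terrible boring plot", "neg"),
    ("awful waste boring", "neg"),
]

def train_nb(data):
    """Count words per class and build the vocabulary."""
    word_counts = {"pos": Counter(), "neg": Counter()}
    class_counts = Counter()
    for text, label in data:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Pick the class with the highest smoothed log-probability."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

wc, cc, vocab = train_nb(train)
print(predict("great acting", wc, cc, vocab))  # "pos" on this toy corpus
```

Dictionary-based and rule-based approaches mentioned above differ mainly in where the word-to-polarity evidence comes from (a fixed lexicon versus counts learned from labeled data, as here).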
First, for text classification related to sentiment analysis, this study adopts popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider the sequential nature of the data. RNN handles ordered data well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models, so in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to understand how well these models work for sentiment analysis and why. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can capture the long-term dependencies that CNN cannot model. 
Furthermore, when LSTM is applied after CNN's pooling layer, the result is an end-to-end structure in which spatial and temporal features can be modeled simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, each word embedding layer can be improved by training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each individual model, and the end-to-end structure offers the advantage of layer-wise learning. For these reasons, this study attempts to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
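The CNN-then-LSTM pipeline described above can be sketched in NumPy: a 1D convolution with ReLU extracts local n-gram features, local max-pooling compresses the sequence, and an LSTM cell then scans the pooled features to produce a final representation that is scored for polarity. All shapes and weights below are illustrative (random, untrained), not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (the paper's real settings differ).
seq_len, emb_dim, n_filters, kernel, hidden = 10, 8, 6, 3, 4

x = rng.standard_normal((seq_len, emb_dim))  # embedded tokens of one review

# --- CNN stage: 1D convolution + ReLU + local max-pooling (n-gram features)
W_conv = rng.standard_normal((n_filters, kernel, emb_dim)) * 0.1
conv = np.stack(
    [np.array([np.sum(W_conv[f] * x[t:t + kernel])
               for t in range(seq_len - kernel + 1)])
     for f in range(n_filters)],
    axis=1,
)                                             # (seq_len - kernel + 1, n_filters)
conv = np.maximum(conv, 0)                    # ReLU
pool = conv.reshape(-1, 2, n_filters).max(axis=1)  # max over pairs of timesteps

# --- LSTM stage: gated recurrence over the pooled feature sequence
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = rng.standard_normal((4, hidden, n_filters + hidden)) * 0.1
h, c = np.zeros(hidden), np.zeros(hidden)
for v in pool:
    z = np.concatenate([v, h])
    i, f, o = sigmoid(W[0] @ z), sigmoid(W[1] @ z), sigmoid(W[2] @ z)
    g = np.tanh(W[3] @ z)
    c = f * c + i * g                         # cell state carries long-term memory
    h = o * np.tanh(c)

# Final hidden state scored for the positive class (untrained weights).
score = sigmoid(rng.standard_normal(hidden) @ h)
```

In a trained model the convolution filters, gate weights, and output weights would be learned end to end; the point of the sketch is the data flow, convolution output feeding the recurrent gates, that the abstract's "spatial and temporal features" claim refers to.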