• Title/Summary/Keyword: language data

Search Results: 3,807

Accuracy of implant digital scans with different intraoral scanbody shapes and library merging according to different oral exposure height (구내 스캔바디의 형태에 따른 임플란트의 디지털 스캔 정확도 및 구강 내 노출 높이에 따른 라이브러리 중첩 정확도 비교 연구)

  • Jeong, Byungjoon;Lee, Younghoo;Hong, Seoung-Jin;Paek, Janghyun;Noh, Kwantae;Pae, Ahran;Kim, Hyeong-Seob;Kwon, Kung-Rock
    • The Journal of Korean Academy of Prosthodontics, v.59 no.1, pp.27-35, 2021
  • Purpose: The purpose of this study was to compare the accuracy of digital implant scans made with different scanbody shapes, and to compare the accuracy of library merging at different oral exposure heights. Materials and methods: A master model with a single-tooth edentulous site was prepared. For the first experiment, three types of intraoral scanbody were prepared and divided into three groups, and the following procedure was conducted for each group: an internal-hex implant was placed; the master model with the scanbody connected was scanned with a model scanner to create a master reference file (control group); and ten files (experimental group) were created by performing ten consecutive scans with an intraoral scanner. After superimposing the control and experimental groups, the following values were calculated: 1) the distance deviation of a designated point on the scanbody and 2) the angular deviation of the major axis of the scanbody. For the second experiment, scanbody scan data were prepared at six different heights, and library files were merged with each set of scan data. Distance and angular deviations were calculated using the 7 mm scan data as the control group. Results: In the first experiment, there were no significant differences in distance deviation between A and B (P=.278), B and C (P=.568), or C and A (P=.711), and no significant differences in angular deviation between A and B (P=.568), B and C (P=.546), or C and A (P=.112). The scanbody showed significantly higher library merging accuracy in the groups with greater oral exposure height (P<.05). Conclusion: Scan accuracy did not differ significantly across scanbody shapes, and library merging accuracy increased with the exposure height of the scanbody in the oral cavity.
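
A minimal sketch of the two deviation metrics described above, assuming the control and experimental scans have already been superimposed and the designated point and major-axis vector of each scanbody are available; all names and numbers are illustrative stand-ins, not the study's software.

```python
import numpy as np

def distance_deviation(p_control: np.ndarray, p_test: np.ndarray) -> float:
    """Euclidean distance between a designated scanbody point on two scans."""
    return float(np.linalg.norm(p_test - p_control))

def angular_deviation(axis_control: np.ndarray, axis_test: np.ndarray) -> float:
    """Angle (degrees) between the scanbody major axes of two scans."""
    u = axis_control / np.linalg.norm(axis_control)
    v = axis_test / np.linalg.norm(axis_test)
    cos_theta = np.clip(np.dot(u, v), -1.0, 1.0)  # guard against rounding error
    return float(np.degrees(np.arccos(cos_theta)))

# Hypothetical example: a 0.1 mm point offset and a slightly tilted major axis.
print(distance_deviation(np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])))
print(angular_deviation(np.array([0.0, 0.0, 1.0]), np.array([0.02, 0.0, 1.0])))
```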

Topic Modeling Insomnia Social Media Corpus using BERTopic and Building Automatic Deep Learning Classification Model (BERTopic을 활용한 불면증 소셜 데이터 토픽 모델링 및 불면증 경향 문헌 딥러닝 자동분류 모델 구축)

  • Ko, Young Soo;Lee, Soobin;Cha, Minjung;Kim, Seongdeok;Lee, Juhee;Han, Ji Yeong;Song, Min
    • Journal of the Korean Society for Information Management, v.39 no.2, pp.111-129, 2022
  • Insomnia is a chronic disease in modern society, with the number of new patients increasing by more than 20% over the last five years. It is a serious condition that requires diagnosis and treatment, because the individual and social problems that arise from lack of sleep are severe and the triggers of insomnia are complex. This study collected 5,699 posts from 'insomnia', a community on Reddit, a social media platform where users freely express their opinions. Based on the International Classification of Sleep Disorders (ICSD-3) and guidelines developed with expert help, an insomnia corpus was constructed by tagging each document as insomnia-tendency or non-insomnia-tendency. Five deep learning language models (BERT, RoBERTa, ALBERT, ELECTRA, XLNet) were trained on the constructed corpus. In the performance evaluation, RoBERTa showed the highest accuracy at 81.33%. For an in-depth analysis of the insomnia social data, topic modeling was performed using BERTopic, a recent method that addresses weaknesses of the widely used LDA. The analysis identified eight topic groups: 'negative emotions', 'advice, help, and gratitude', 'insomnia-related diseases', 'sleeping pills', 'exercise and eating habits', 'physical characteristics', 'activity characteristics', and 'environmental characteristics'. Users expressed negative emotions and sought help and advice from the Reddit insomnia community. They also mentioned diseases related to insomnia, shared discourse on the use of sleeping pills, and expressed interest in exercise and eating habits. Among insomnia-related characteristics, we found physical characteristics such as breathing, pregnancy, and the heart; activity characteristics such as feeling like a zombie, hypnic jerks, and grogginess; and environmental characteristics such as sunlight, blankets, temperature, and naps.
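
The topic-modeling half of this pipeline can be sketched with the open-source BERTopic library. The corpus below is a public stand-in (the study's Reddit data are not reproduced here), and the RoBERTa fine-tuning step is omitted; this only illustrates the BERTopic interface.

```python
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

# Stand-in corpus; the study used 5,699 posts from Reddit's insomnia community.
docs = fetch_20newsgroups(subset="all",
                          remove=("headers", "footers", "quotes")).data[:2000]

# BERTopic embeds documents with a sentence transformer, clusters the
# embeddings, and extracts topic words per cluster, which addresses the
# bag-of-words weaknesses of LDA mentioned above.
topic_model = BERTopic(language="english", min_topic_size=20)
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head(9))  # the study reported 8 topic groups
```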

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.183-203, 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as information and communication technology has given rise to many kinds of online news media, the volume of news about events in society has grown greatly, so automatically summarizing key events from massive amounts of news data helps users survey many events at a glance. Furthermore, an event network built on the relevance between events can greatly help readers understand current affairs. In this study, we propose a method for extracting event networks from large news text corpora. We first collected Korean political and social articles from March 2016 to March 2017, and in preprocessing kept only meaningful words and integrated synonyms using NPMI and Word2Vec. Latent Dirichlet allocation (LDA) topic modeling was then used to calculate the topic distribution by date, and events were detected by finding the peaks of each topic's distribution. A total of 32 topics were extracted, and the occurrence time of each event was inferred from the point at which its topic distribution surged. As a result, 85 events were detected in total, of which 16 final events were retained after filtering with Gaussian smoothing. We then calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we computed the relevance between events and connected them, setting each event as a vertex and the relevance score as the weight of the edge connecting the vertices. The resulting event network allowed us to lay out, in chronological order, the major political and social events in Korea over the preceding year and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it easy to analyze large amounts of data and to identify relations between events that were difficult to detect before. We also applied various text mining techniques and Word2vec in preprocessing to improve the extraction of proper nouns and compound nouns, which has been a difficulty in analyzing Korean text. The event detection and network construction techniques in this study have the following practical advantages. First, LDA topic modeling, an unsupervised learning method, makes it easy to obtain topics, topic words, and their distributions from a huge amount of data, and the date information of the collected news articles allows the per-topic distribution to be expressed as a time series. Second, by computing relevance scores and constructing an event network from the co-occurrence of topics, which is difficult to capture with existing event detection, the connections between events can be presented in a clear and summarized form. This is supported by the fact that the relevance-based event network proposed in this study was in fact constructed in order of occurrence time; the network also makes it possible to identify which event served as the starting point of a chain of events.
A limitation of this study is that LDA topic modeling yields different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the researcher's subjective judgment. Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies should calculate the relevance between events not covered here, or between events that belong to the same topic.
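
A condensed sketch of this pipeline under simplifying assumptions (a toy corpus, an arbitrary topic count and edge threshold, topic-level cosine similarity standing in for the paper's event co-occurrence measure); the NPMI/Word2Vec preprocessing is omitted.

```python
import numpy as np
import networkx as nx
from scipy.ndimage import gaussian_filter1d
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for a year of preprocessed news articles with publication days.
articles = ["election campaign rally speech", "court ruling scandal probe",
            "parliament passes budget bill", "protest rally downtown police"] * 50
days = np.repeat(np.arange(50), 4)  # hypothetical day index per article

X = CountVectorizer().fit_transform(articles)
lda = LatentDirichletAllocation(n_components=8, random_state=0)
doc_topic = lda.fit_transform(X)               # per-document topic mixture

# Daily topic intensity, Gaussian-smoothed; a surge (peak) marks an event.
daily = np.vstack([doc_topic[days == d].mean(axis=0) for d in np.unique(days)])
smoothed = gaussian_filter1d(daily, sigma=2, axis=0)
peaks = smoothed.argmax(axis=0)                # peak day for each topic's event

# Event network: connect events whose topics co-occur across documents.
sim = cosine_similarity(doc_topic.T)           # topic-topic relatedness
G = nx.Graph()
for i in range(sim.shape[0]):
    for j in range(i + 1, sim.shape[1]):
        if sim[i, j] > 0.3:                    # arbitrary relevance threshold
            G.add_edge(f"event_{i}@day{peaks[i]}", f"event_{j}@day{peaks[j]}",
                       weight=float(sim[i, j]))
print(G.edges(data=True))
```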

Structural Properties of Social Network and Diffusion of Product WOM: A Sociocultural Approach (사회적 네트워크 구조특성과 제품구전의 확산: 사회문화적 접근)

  • Yoon, Sung-Joon;Han, Hee-Eun
    • Journal of Distribution Research, v.16 no.1, pp.141-177, 2011
  • I. Research Objectives: Most previous studies on diffusion have concentrated on the efficacy of WOM communication using variables at the individual level (Iacobucci 1996; Midgley et al. 1992). However, few studies have investigated a network's structural properties as antecedents of WOM from the perspective of consumers' sociocultural propensities. Against this backdrop, this study attempts to link a network's structural properties to consumers' WOM behavior on a cross-national basis. The major objective is to examine the relationship between network properties and WOM by comparing Korean and Chinese consumers. The specific objectives are threefold: first, to examine whether network properties (tie strength, centrality, range) affect WOM (WOM intention and WOM quality); second, to explore the moderating effects of cultural orientation (uncertainty avoidance and individualism) on the relationship between network properties and WOM; and third, to substantiate the role of innovativeness as an antecedent of both network properties and WOM. II. Research Hypotheses: Based on these objectives, the study puts forth the following hypotheses:
H1-1: The strength of the tie between two counterparts within a network will positively influence WOM effectiveness.
H1-2: Network centrality will positively influence WOM effectiveness.
H1-3: Network range will positively influence WOM effectiveness.
H2-1: The consumer's uncertainty avoidance tendency will moderate the relationship between network properties and WOM effectiveness.
H2-2: The consumer's individualism tendency will moderate the relationship between network properties and WOM effectiveness.
H3-1: The consumer's innovativeness will positively influence social network properties.
H3-2: The consumer's innovativeness will positively influence WOM effectiveness.
III. Methodology: Through a pilot study and back-translation, two versions of the questionnaire were prepared, one in Korean and one in Chinese. The Chinese data were collected from Chinese students enrolled in language schools in Suwon, Korea, while the Korean data were collected from students taking classes at a major university in Seoul. A total of 277 questionnaires were used for the Korean analysis and 212 for the Chinese. Chinese students living in Korea rather than in China were selected for two reasons: to neutralize differences (e.g., retail channel availability) that may arise from living in separate countries, and to minimize differences in communication venues such as internet accessibility and cell phone usability. SPSS 12.0 and AMOS 7.0 were used for analysis. IV. Results: Prior to hypothesis testing, mean differences between the two countries on the major constructs were examined. For the network properties (tie strength, centrality, and range), Koreans scored higher on all three constructs; for the cultural orientation traits, Koreans scored higher than the Chinese only on uncertainty avoidance.
For the first research objective, concerning the relationship between network properties and WOM effectiveness, on the Korean side tie strength (Beta=.116; t=1.785) and centrality (Beta=.499; t=6.776) significantly influenced WOM intention, and a similar finding was obtained on the Chinese side, with tie strength (Beta=.246; t=3.544) and centrality (Beta=.247; t=3.538) being significant. With regard to WOM argument quality, however, the Korean data showed only centrality (Beta=.82; t=7.600) having a significant impact, whereas the Chinese data showed both tie strength (Beta=.142; t=2.052) and centrality (Beta=.348; t=5.031) being influential. For the second research objective, addressing the moderating role of cultural orientation, moderated regression analysis showed that uncertainty avoidance moderated between network range and WOM intention for both Korea and China; for Korea, uncertainty avoidance additionally moderated between tie strength and WOM quality, while for China it moderated between network range and WOM intention. Innovativeness moderated between tie strength and WOM intention for Korea but between network range and WOM intention for China. For the third research objective, we found that for Korea innovativeness positively influenced only centrality (Beta=.546; t=10.808), while for China it influenced both tie strength (Beta=.203; t=2.998) and centrality (Beta=.518; t=8.782). For both countries alike, innovativeness positively influenced WOM (both WOM intention and WOM quality). V. Implications: The study yields two practical implications. First, the results suggest that companies targeting multinational customers need to identify the segments that are susceptible to positive WOM and WOM information based on individual traits such as uncertainty avoidance and individualism, and to develop their marketing communication strategies accordingly. Second, companies can segment the market along Rogers' five innovation-adoption stages and, based on this information, pursue marketing strategies that utilize social networking tools such as public media and WOM. For instance, innovators and early adopters, if provided with new product information, will be able to capitalize on network advantages and add informational value to network operations using SNS or a corporate blog.
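
The moderated regression used to test the H2 hypotheses can be sketched as follows, on simulated data with assumed variable names; a significant interaction coefficient indicates moderation. This is an illustration of the analysis type, not the authors' dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 277  # size of the Korean sample in the study
df = pd.DataFrame({
    "tie_strength": rng.normal(size=n),
    "uncertainty_avoidance": rng.normal(size=n),
})
# Simulated outcome with a built-in moderation effect for demonstration.
df["wom_intention"] = (0.3 * df.tie_strength
                       + 0.2 * df.uncertainty_avoidance
                       + 0.15 * df.tie_strength * df.uncertainty_avoidance
                       + rng.normal(size=n))

# The '*' expands to both main effects plus their interaction term;
# a significant interaction coefficient is evidence of moderation.
model = smf.ols("wom_intention ~ tie_strength * uncertainty_avoidance", df).fit()
print(model.summary().tables[1])
```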


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and difficult to extract good-quality triples. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult owing to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of searching for stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates the result. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to address the problems of previous research and enhance the model's effectiveness. From these processes, the study has three significances: first, a practical and simple automatic knowledge extraction method that can actually be applied; second, the possibility of performance evaluation through a simple problem definition; and finally, increased expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on the 30 individual stocks with the highest publication frequency from May 30, 2017 to May 21, 2018 were used. Of the 5,600 reports in total, 3,074 (about 55%) were designated as the training set and the remaining 45% as the test set. Before constructing the model, all reports in the training set were classified by stock, and their entities were extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency were selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock was trained.
Thus, when a new entity from the test set appears, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we checked its predictive power and whether the score functions were well constructed by calculating the hit ratio over all reports in the test set. In the empirical study, the presented model showed 69.3% hit accuracy on the test set of 2,526 reports, which is meaningfully high despite several constraints on the research. Looking at the model's performance by stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, performed far below average, possibly because of interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, needed to search for related information in line with the user's investment intention. Graph data are generated using only the named entity recognition tool and fed to the neural tensor network, without a learned corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. Some limits remain, however; most notably, the model's especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
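
A minimal PyTorch sketch of a neural tensor network score function of the kind described: one scoring head per stock over entity vectors. Dimensions and the bilinear form follow the standard NTN formulation (Socher et al., 2013) and are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    """One NTN score function; the study trains one such head per stock."""
    def __init__(self, dim: int = 100, k: int = 4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear slices
        self.V = nn.Linear(2 * dim, k)                          # linear term
        self.u = nn.Linear(k, 1, bias=False)                    # output layer

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        # e1, e2: (batch, dim); bilinear term computes e1^T W_i e2 per slice i.
        bilinear = torch.einsum("bd,kde,be->bk", e1, self.W, e2)
        return self.u(torch.tanh(bilinear + self.V(torch.cat([e1, e2], dim=-1))))

# Hypothetical usage: score a one-hot entity vector against a stock vector;
# the stock whose head scores highest is predicted as the related item.
score_fn = NTNScore(dim=100)
entity = torch.zeros(1, 100); entity[0, 17] = 1.0  # one-hot encoded entity
stock_vec = torch.randn(1, 100)                    # learned stock representation
print(score_fn(entity, stock_vec).item())
```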

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.73-85, 2013
  • In today's information society, the importance of knowledge services that use information to create value grows day by day, and with the development of IT it has become easy to collect and use information; many companies across a variety of industries actively use customer information in marketing. In the 21st century, companies have also been actively using culture and the arts to manage their corporate image, linking them closely to their commercial interests, because it is difficult to attract or maintain consumers' interest through technology alone; performing cultural activities has become a common tool of differentiation among firms. Many firms have turned the customer's experience into a new marketing strategy in order to respond effectively to a competitive market. Accordingly, the need for personalized services that provide people with a new experience, based on personal profile information describing the characteristics of the individual, is emerging rapidly. Personalized service using a customer's individual profile, covering language, symbols, behavior, and emotions, is very important today; through it, we can assess the interaction between people and content and maximize the customer's experience and satisfaction. Among the various related efforts to provide customer-centered services, emotion recognition research has been emerging recently. Existing research has mostly performed emotion recognition using bio-signals, and most studies target the voice and the face, which show large emotional changes. However, limitations of equipment and service environments make it difficult to predict people's emotions this way, so in this paper we develop an emotion prediction model based on a vision-based interface to overcome these limitations. Emotion recognition based on people's gestures and posture has been studied by several researchers; this paper developed a model that recognizes people's emotional states through body gesture and posture using the difference image method, and identified the best-validated model for predicting four kinds of emotion. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in the lobby of KOCCA, and suitable stimulating films were shown to collect participants' body gestures and postures as their emotions changed. Body movements were then extracted using the difference image method, and the data were refined to build the proposed model with a neural network. The model was trained on three time-frame settings (20, 30, and 40 frames), and the model with the best performance was adopted. Before building the three models, the entire set of 97 samples was divided into learning, test, and validation sets. The proposed emotion prediction model was constructed as an artificial neural network: we used the back-propagation algorithm as the learning method, set the learning rate to 10% and the momentum rate to 10%, and used the sigmoid function as the transfer function. The network was a three-layer perceptron with one hidden layer and four output nodes. Based on the test set, training was stopped at 50,000 iterations, after the minimum error had been reached, in order to explore the stopping point of learning.
We then computed each model's accuracy and identified the best model for predicting each emotion. The results showed prediction accuracies of 100% for sadness and 96% for joy with the 20-frame model, and 88% for surprise and 98% for disgust with the 30-frame model. The findings of this research are expected to provide an effective algorithm for personalized services in industries such as advertising, exhibitions, and performances.
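
A minimal sketch of the two stages described above: the difference image method for extracting body movement between consecutive frames, and a small one-hidden-layer network mapping movement features to four emotions. Frames, features, and labels here are simulated stand-ins for the booth recordings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def difference_image(frame_prev: np.ndarray, frame_curr: np.ndarray,
                     threshold: int = 30) -> np.ndarray:
    """Binary motion mask: pixels whose intensity changed more than threshold."""
    return (np.abs(frame_curr.astype(int) - frame_prev.astype(int))
            > threshold).astype(np.uint8)

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(21, 64, 64))  # fake grayscale video clip

# Hypothetical feature vector: total motion per frame over a 20-frame window.
features = np.array([difference_image(frames[i], frames[i + 1]).mean()
                     for i in range(20)]).reshape(1, -1)

# Three-layer perceptron (one hidden layer, four emotion classes), trained by
# back-propagation with the paper's stated learning and momentum rates;
# 97 random samples stand in for the collected audience data.
X = rng.random((97, 20))
y = rng.integers(0, 4, size=97)  # 0: sadness, 1: surprise, 2: joy, 3: disgust
clf = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                    solver="sgd", learning_rate_init=0.1, momentum=0.1,
                    max_iter=2000)
clf.fit(X, y)
print(clf.predict(features))
```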

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.1-17, 2017
  • A deep learning framework is software designed to help develop deep learning models; among its important functions are automatic differentiation and the utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities, but since the arrival of Tensorflow, companies such as Microsoft and Facebook appear to have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense the predecessor of the other two. The most common and important function of a deep learning framework is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs consisting of nodes and edges; partial derivatives can be obtained on each edge of such a graph, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by the chain rule of calculus. First, in convenience of coding the order is CNTK, Tensorflow, then Theano. This criterion is based simply on code length; the learning curve and ease of coding are not the main concern. By this criterion Theano was the most difficult to implement with, while CNTK and Tensorflow were somewhat easier, because those frameworks provide more abstraction than Theano (although with Tensorflow, weight variables and biases must still be defined explicitly). We should mention, however, that low-level coding is not always bad: it gives flexibility, and with low-level coding such as Theano's one can implement and test any new deep learning model or search method one can think of. As for execution speed, our assessment is that there is no meaningful difference. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes; among the important attributes are the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, DBNs, and so on.
For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for those learning deep learning, the availability of sufficient examples and references matters as well.
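
The automatic differentiation idea at the center of this comparison can be illustrated with current TensorFlow, whose API differs from the 2017-era versions compared in the paper: the framework records the computational graph of an expression and derives gradients by the chain rule.

```python
import tensorflow as tf

w = tf.Variable([[0.5, -0.3]])   # weights (defined explicitly, as in Tensorflow)
b = tf.Variable([0.1])           # bias
x = tf.constant([[1.0], [2.0]])

with tf.GradientTape() as tape:
    y = tf.sigmoid(tf.matmul(w, x) + b)    # nodes and edges of the graph
    loss = tf.reduce_sum((y - 1.0) ** 2)

# d(loss)/dw and d(loss)/db via reverse-mode autodiff over the recorded graph.
dw, db = tape.gradient(loss, [w, b])
print(dw.numpy(), db.numpy())
```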

Marketing Standardization and Firm Performance in International E-Commerce (国际电子商务中的营销标准化和公司表现)

  • Fritz, Wolfgang;Dees, Heiko
    • Journal of Global Scholars of Marketing Science, v.19 no.3, pp.37-48, 2009
  • The standardization of marketing has been one of the most discussed research topics in international marketing. The term "global marketing" has often been used to mean an internationally standardized marketing strategy based on similarities between foreign markets, and marketing standardization was long discussed only within the context of traditional physical marketplaces. The digital "marketspace" of the Internet then emerged in the 1990s and became one of the most important drivers of globalization, opening new opportunities for standardizing global marketing activities. On the other hand, the view that greater adoption of the Internet by customers may lead to more customization and differentiation of products, rather than standardization, is also quite popular. Given this disagreement, it is notable that comprehensive studies focusing on marketing standardization in the context of global e-commerce are largely missing. Against this background, the study addresses two basic research questions: (1) To what extent do companies standardize their marketing in international e-commerce? (2) Does marketing standardization affect the performance (or success) of these companies? The following hypotheses were generated from the literature review: H1: Internationally engaged e-commerce firms show a growing readiness for marketing standardization. H2: Marketing standardization exerts positive effects on the success of companies in international e-commerce. H3: In international e-commerce, marketing mix standardization exerts a stronger positive effect on the economic as well as the non-economic success of companies than marketing process standardization. H4: The higher the non-economic success of international e-commerce firms, the higher their economic success. The data were obtained from a questionnaire survey conducted from February to April 2005, covering international e-commerce companies of various industries in Germany and all subsidiaries or headquarters of foreign e-commerce companies based in Germany; 118 of 801 companies responded. For structural equation modeling (SEM), the Partial Least Squares (PLS) approach was applied in the version PLS-Graph 3.0 (Chin 1998a; 2001). All four research hypotheses were supported by the data analysis. The results show that companies engaged in international e-commerce standardize the brand name, web page design, product positioning, and product program to a high degree; they intend to intensify their marketing mix standardization in the future and also want to standardize their marketing processes to a greater degree, especially information systems, corporate language, and online marketing control procedures. In this study, marketing standardization exerts a positive overall impact on company performance in international e-commerce: standardization of the marketing mix has a stronger positive impact on non-economic success than standardization of marketing processes, which in turn contributes slightly more to economic success. Furthermore, the findings clearly support the assumption that non-economic success is highly relevant to the economic success of the firm in international e-commerce.
The empirical findings indicate that marketing standardization is relevant to companies' success in international e-commerce, but that marketing mix and marketing process standardization contribute to economic and non-economic success in different ways. The findings indicate that companies do standardize numerous elements of their marketing mix on the Internet. This practice is in part contrary to the popular concept of "differentiated standardization", which argues that some elements of the marketing mix should be adapted locally while others are standardized internationally. Furthermore, the findings suggest that it is the overall standardization of marketing, rather than the standardization of any particular marketing mix element, that brings about a positive overall impact on success.
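
The paper estimated a PLS structural equation model in PLS-Graph 3.0, which has no direct Python equivalent shown here. As a loose illustration of the underlying partial least squares idea only (latent components linking a predictor block to an outcome block), not the authors' PLS-SEM procedure, scikit-learn's PLSRegression can be run on simulated indicator blocks:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 118  # number of responding firms in the survey

# Hypothetical indicator blocks: marketing standardization items (X) and
# economic / non-economic success items (Y), driven by one common factor.
latent = rng.normal(size=(n, 1))
X = latent @ rng.normal(size=(1, 6)) + 0.5 * rng.normal(size=(n, 6))
Y = latent @ rng.normal(size=(1, 3)) + 0.5 * rng.normal(size=(n, 3))

pls = PLSRegression(n_components=2)
pls.fit(X, Y)
print(pls.score(X, Y))  # variance in the success block explained by X
```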


Determination of Tumor Boundaries on CT Images Using Unsupervised Clustering Algorithm (비교사적 군집화 알고리즘을 이용한 전산화 단층영상의 병소부위 결정에 관한 연구)

  • Lee, Kyung-Hoo;Ji, Young-Hoon;Lee, Dong-Han;Yoo, Seoung-Yul;Cho, Chul-Koo;Kim, Mi-Sook;Yoo, Hyung-Jun;Kwon, Soo-Il;Chun, Jun-Chul
    • Journal of Radiation Protection and Research, v.26 no.2, pp.59-66, 2001
  • Determining the spatial location and shape of the tumor boundary is a key issue in fractionated stereotactic radiotherapy (FSRT). Consecutive transaxial plane images were obtained from a paraffin phantom and from 4 patients with brain tumors using helical computed tomography (HCT). The K-means classification algorithm was applied to convert the raw pixel values in the CT images into classified average pixel values. The classified images consist of 5 regions: a tumor region (TR), normal region (NR), combination region (CR), uncommitted region (UR), and artifact region (AR). The major concern was how to separate the normal region from the tumor region within the combination area. Relative average deviation analysis was applied to reduce the average pixel values of the 5 regions to the 2 regions of normal and tumor, by locating the maximum point among the average-deviation pixel values. The gross tumor volume (GTV) boundary was then drawn by connecting the maximum points in the images, using a semi-automatic contour method implemented in an IDL (Interactive Data Language) program. The error limit of the ROI boundary in the homogeneous phantom was estimated to be within ±1%. For the 4 patients, the tumor lesions delineated by the physician and those delineated automatically by the K-means classification algorithm with relative average deviation analysis were confirmed to be similar. These methods can turn an uncertain boundary between normal and tumor regions into a clear one, and should therefore be useful in CT image-based treatment planning, especially when CT images intermittently fail to visualize the tumor volume compared with MRI images.
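
A minimal sketch of the first stage described above: K-means classification of CT pixel values into a small number of average-intensity regions. The relative average deviation step and the IDL contouring are omitted; the slice here is synthetic, with five clusters mirroring the paper's five regions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ct_slice = rng.normal(loc=100, scale=40, size=(128, 128))  # fake CT intensities

# Cluster individual pixel intensities and replace each pixel with its
# cluster's mean value, yielding a classified (average-pixel-value) image.
pixels = ct_slice.reshape(-1, 1)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
classified = km.cluster_centers_[km.labels_].reshape(ct_slice.shape)
print(np.unique(classified.round(1)))  # the five classified average values
```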


The Correlation between Depression and Physical Health in the Elderly (노인의 신체적 건강과 우울과의 관계)

  • Kim, Hyo-Jung
    • Journal of Agricultural Medicine and Community Health, v.26 no.2, pp.193-203, 2001
  • The purpose of this study was to identify the relationship between depression and physical health in the elderly and to provide fundamental data for programs that improve the health of this population. The subjects were 168 elderly people (55 years and older) residing at home in Taegu. They were surveyed by interview using a closed-ended questionnaire between September 16 and October 16, 2000. The instruments used were general characteristics, the Short Form Geriatric Depression Scale (SGDS), the Barthel Index, a muscular-skeletal symptoms scale, and Northern Illinois University's Health Self-Rating Scale. The data were analyzed using descriptive statistics, t-tests, ANOVA, Pearson correlation coefficients, and multiple regression with SPSS PC 10.0 for Windows. The findings were as follows. 1. Compared with the 65-74 age group, the 75-84 age group scored significantly higher for depression (F=3.17, p=.026); compared with elderly people with a spouse, those without a spouse scored significantly higher for depression (t=-2.44, p=.016). 2. The elderly with more limitation in activities of daily living (ADL) (t=3.93, p=.000), pain from muscular-skeletal symptoms (F=5.33, p=.002), and poorer perceived health status (F=17.04, p=.000) showed more severe depression than those without. 3. ADL correlated negatively with depression (r=-.293, p=.000), pain from muscular-skeletal symptoms correlated positively (r=.251, p=.001), and perceived health status correlated negatively (r=-.522, p=.000). 4. The combination of perceived health status and ADL explained 29.1% of the variance in depression. On the basis of these findings, the following recommendations are made: 1. Health programs should be developed that consider ADL, pain from muscular-skeletal symptoms, perceived health status, and the demographic variables (age, spouse status) that have significant effects on depression in the elderly. 2. Subsequent studies should use varied scales that reflect the physical status of the elderly at home.
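
The study's main analyses, Pearson correlation and multiple regression, can be sketched as follows on simulated data; variable names and effect sizes are assumptions, with the sample size matching the study's 168 subjects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 168
df = pd.DataFrame({
    "adl": rng.normal(size=n),              # activities of daily living score
    "perceived_health": rng.normal(size=n), # self-rated health status
})
# Simulated depression score with negative dependence on both predictors.
df["depression"] = (-0.3 * df.adl - 0.5 * df.perceived_health
                    + rng.normal(size=n))

r, p = pearsonr(df.depression, df.adl)
print(f"depression vs ADL: r={r:.3f}, p={p:.3f}")

# Multiple regression; R-squared parallels the reported 29.1% explained variance.
model = smf.ols("depression ~ perceived_health + adl", df).fit()
print(model.rsquared)
```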
