• Title/Summary/Keyword: National standard classification system

Search Results: 216

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min; Na, Chul-Won; Choi, Min-Seong; Lee, Da-Hee; On, Byung-Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.219-240 / 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting the subjective content embedded in text documents. Sentiment analysis methods have recently been widely used in many fields. For example, data-driven surveys are based on analyzing the subjectivity of text data posted by users, and market research is conducted by analyzing users' review posts to quantify their opinion of a target product. The basic approach to sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies labeled with positive, neutral, or negative semantics. In general, the meaning of many sentiment words differs across domains. For example, the sentiment word 'sad' carries a negative meaning in most domains, but not in the movie domain. To perform accurate sentiment analysis, the sentiment dictionary therefore needs to be built for the given domain. However, building such a lexicon from scratch is time-consuming, and many sentiment vocabularies are missed unless a general-purpose sentiment lexicon is used as a starting point. To address this problem, several studies have constructed sentiment lexicons for specific domains based on the general-purpose sentiment lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer in service, and SentiWordNet works poorly because of language differences that arise when Korean words are converted into English. These limitations restrict the use of such general-purpose sentiment lexicons as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that improves on existing general-purpose lexicons. The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built so that a sentiment dictionary for a target domain can be constructed quickly. In particular, sentiment vocabularies are constructed by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having either positive or negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model reaches 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603. Furthermore, we add sentiment information about frequently used coined words and emoticons that appear mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, so the perceived importance of developing sentiment dictionaries has been declining gradually. However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, resulting in sentiment analysis with higher accuracy (Teng, Z., 2016). This result indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features for improving the accuracy of deep learning models. The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as features of deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
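For readers who want to prototype the gloss-classification step described above, the following is a minimal sketch of a Bi-LSTM binary classifier in Keras. The vocabulary size, sequence length, embedding and hidden dimensions, and the dummy data are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal Bi-LSTM gloss classifier sketch (hypothetical hyperparameters).
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000  # assumed vocabulary size
MAX_LEN = 30         # assumed maximum gloss length in tokens

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),        # token embeddings
    layers.Bidirectional(layers.LSTM(64)),    # read the gloss in both directions
    layers.Dense(1, activation="sigmoid"),    # P(gloss has positive meaning)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x: integer-encoded glosses, y: 1 = positive gloss, 0 = negative gloss (dummy data here)
x = np.random.randint(1, VOCAB_SIZE, size=(8, MAX_LEN))
y = np.random.randint(0, 2, size=(8,))
model.fit(x, y, epochs=1, batch_size=4, verbose=0)
```

In the paper's pipeline, the trained classifier's positive/negative labels on SKLD glosses are what drive the extraction of sentiment words and phrases.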

Analysis of Research Trends in Journal of Distribution Science (유통과학연구의 연구 동향 분석 : 창간호부터 제8권 제3호까지를 중심으로)

  • Kim, Young-Min; Kim, Young-Ei; Youn, Myoung-Kil
    • Journal of Distribution Science / v.8 no.4 / pp.5-15 / 2010
  • This study investigated the research trends of the Journal of Distribution Science (JDS), published by KODISA, and drew implications for raising the quality of the journal. In other words, the study classified the scientific system of the distribution field to examine research trends, compared JDS with other distribution journals, and suggested ways to raise its level. KODISA first published JDS Vol.1 No.1 in 1999 and had published through Vol.8 No.3 by September 2010, for a total of 109 papers. The study examined the subjects, research institutions, number of authors, methodology, frequency of papers in Korean and in English, participation of Korean and foreign authors, and use of references, and it also examined JDR of KODIA, JKDM (The Journal of Korean Distribution & Management), and JDA, which cover distribution research, to identify directions for development. To investigate the research trends of JDS, main categories were defined based on the national science and technology standard classification system of MEST (Ministry of Education, Science and Technology), the research-area classification table of the NRF (National Research Foundation of Korea), the research classification systems of KOREADIMA and the KLRA (Korea Logistics Research Association), and the areas of distribution science that KODISA pursues. The distribution economy area was divided into general distribution, distribution economy, distribution, distribution information, and others, and the distribution management area was divided into distribution management, marketing, MD and purchasing, consumer behavior, and others. The findings were as follows. First, of the 109 papers, 47 (43.1%) fell under distribution economy and 62 (56.9%) under distribution management. Within distribution economy, the most active sub-areas were distribution information (14 papers, 12.8%) and distribution economy (9 papers, 8.3%), with distribution and distribution information studied actively every year. Within distribution management, distribution management accounted for 25 papers (22.9%) and marketing for 20 papers (18.3%). Research on distribution management, marketing, distribution, distribution information, and other sub-areas has been increasing in recent years. Second, by author type, 55 papers (50.5%) were written by a professor alone, 12 (11.0%) were joint work between professors and businesses, 9 (8.3%) were by professors and students, 5 (4.6%) by researchers, 5 (4.6%) by businesses, 4 (3.7%) by professors, researchers, and businesses together, and 2 (1.8%) by students. Over time, professors published relatively fewer papers, while businesses, research institutes, and graduate students participated more steadily. By number of authors, 43 papers (39.5%) had a single author, 42 (38.5%) had two authors, and 24 (22.0%) had three or more. Third, professors published the most papers in most areas. In the distribution economy category, the authors were professors (25 papers, 53.2%), professors and businesses (7 papers, 14.9%), professors and researchers (6 papers, 12.8%), and professors and students (3 papers, 6.3%). In the distribution management category, the authors were professors (30 papers, 48.4%), professors and businesses (10 papers, 16.1%), and professors and researchers as well as professors and students (6 papers each, 9.7%). The author composition in distribution management was varied, including professors alone, professors and businesses, professors and researchers, and researchers and businesses. Professors mainly studied marketing, MD and purchasing, and consumer behavior, areas that call for more active participation by businesses and researchers. Fourth, regarding research methodology, literature research was most common (45 papers, 41.3%), followed by empirical research based on questionnaire surveys (44 papers, 40.4%). General distribution, distribution economy, distribution, and distribution management mostly adopted literature research, whereas marketing relied most on questionnaire-based empirical research. Fifth, papers in Korean accounted for 92.7% (101 papers) and papers in English for 7.3% (8 papers). Only one English-language paper was published up to 2006, while 7 (11.9%) were published after 2007, an encouraging increase. Foreign researchers published only one paper (0.9%) on their own, and Korean and foreign researchers jointly published two papers (1.8%), so foreign participation was very low. Sixth, a JDS paper had 27.5 references on average, consisting of 11.1 domestic and 16.4 foreign references, and the average number of citations of JDS itself was only 0.4. Distribution economy papers cited 24.2 references on average (9.4 domestic and 14.8 foreign), including 0.6 references to JDS, while distribution management papers cited 30.0 references on average (12.1 domestic and 17.9 foreign), including 0.3 references to JDS. Seventh, comparing the languages of publication across similar journals: JDR (Journal of Distribution Research) of KODIA (Korea Distribution Association) published 95 papers in total, 92 in Korean (96.8%) and 3 in English (3.2%); JKDM of KOREADIMA published 132 papers in total, 93 in Korean (70.5%) and 39 in English (29.5%), and since 2008 JKDM has published one English-language issue every year; JDS published 59 papers in total, 52 in Korean (88.1%) and 7 in English (11.9%). Eighth, comparing research methodologies across similar journals: JDR published 65 questionnaire-based empirical studies (68.4%), 17 literature studies (17.9%), and 11 quantitative analyses (11.6%); JKDM used a wider range of methodologies, with 60 questionnaire surveys (45.5%), 40 literature studies (30.3%), 21 quantitative analyses (15.9%), 6 system analyses (4.5%), and 5 case studies (3.8%); and JDS used 30 questionnaire surveys (50.8%), 15 literature studies (25.4%), 7 case studies (11.9%), and 6 quantitative analyses (10.2%). Ninth, comparing Korean and foreign authorship: JDR published 93 papers (97.8%) by Korean researchers, with 1 paper by a foreign researcher alone and 1 by Korean and foreign researchers jointly; JKDM had no papers by foreign researchers alone but 13 papers (9.8%) of joint Korean-foreign research, giving it more foreign participation than the other journals; and JDS published 56 papers (94.9%) by Korean researchers, 1 paper (1.7%) by a foreign researcher alone, and 2 papers (3.4%) of joint Korean-foreign research. Tenth, comparing references and self-citation: JDR papers cited 42.5 references on average, consisting of 10.9 domestic (25.7%) and 31.6 foreign (74.3%) references, with an average of 1.1 self-citations, a declining figure; JKDM papers cited 10.5 Korean (36.3%) and 18.4 foreign (63.7%) references, with no more than 1.1 self-citations on average, a figure that peaked at 2.9 in 2008 and has decreased continuously since; and JDS papers cited 26.8 references on average, consisting of 10.9 domestic (40.7%) and 15.9 foreign (59.3%) references, with self-citation at 0.2 references until 2009 before rising to 2.1 in 2010. Based on these research trends and the comparison with similar journals, the authors draw the following implications. First, JDS should actively invite foreign contributors in preparation for SSCI listing. Second, the proportion of English-language papers should increase substantially. Third, a wider variety of research methodologies should be accepted to raise the quality of the journal. Fourth, to increase citation counts, exposure through Google and other web search services should be strengthened so that the journal reaches foreign readers. By acting on these implications, a domestic journal can grow into an internationally recognized one.

Development Process and Methods of Audit and Certification Toolkit for Trustworthy Digital Records Management Agency (신뢰성 있는 전자기록관리기관 감사인증도구 개발에 관한 연구)

  • Rieh, Hae-young; Kim, Ik-han; Yim, Jin-Hee; Shim, Sungbo; Jo, YoonSun; Kim, Hyojin; Woo, Hyunmin
    • The Korean Journal of Archival Studies / no.25 / pp.3-46 / 2010
  • Digital records management is a whole system in which many social and technical elements interact. To maintain trustworthiness, a digital repository needs periodic audit and certification, and each electronic records management agency therefore needs a toolkit with which it can continuously self-evaluate its trustworthiness and assess its environment and systems to recognize deficiencies. The purpose of this study is to develop a self-certification toolkit for repositories, synthesizing and analyzing four international standards and best practices, namely the OAIS Reference Model (ISO 14721), TRAC, DRAMBORA, and the assessment report conducted and published by TNA/UKDA, as well as MoReq2 and current national laws and standards. As this paper describes and demonstrates the development process and framework of the self-certification toolkit, other electronic records management agencies can follow the process to develop their own toolkits reflecting their situation and use the self-assessment results in-house. As a result of this research, 12 assessment areas were set: (organizational) operation management, classification system and master data management, acquisition, registration and description, storage and preservation, disposal, services, providing finding aids, system management, access control and security, monitoring/audit trail/statistics, and risk management. For each of the 12 areas, process maps or functional charts were drawn and business functions were analyzed, and 54 'evaluation criteria', consisting of the main business functional units in each area, were derived. Under each evaluation criterion, a total of 208 'specific evaluation criteria', intended to be implementable, measurable, and provable in self-evaluation, were derived. The audit and certification toolkit developed in this research can be used by digital repositories to conduct periodic self-assessment, the results of which can be used to remedy identified deficiencies and to inform the organization's development strategy.
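The three-level structure described above (12 assessment areas, 54 evaluation criteria, 208 specific evaluation criteria) can be modeled as a simple nested data structure for in-house scoring. The sketch below is illustrative only: the criterion wording and the met/not-met scoring are assumptions, not part of the published toolkit.

```python
# Illustrative three-level assessment structure: areas -> criteria -> specific criteria.
# Criterion texts and the pass/fail scoring scheme are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SpecificCriterion:
    text: str
    met: bool = False          # outcome of the self-assessment

@dataclass
class EvaluationCriterion:
    name: str
    specifics: List[SpecificCriterion] = field(default_factory=list)

@dataclass
class AssessmentArea:
    name: str                  # one of the 12 areas, e.g. "Storage and preservation"
    criteria: List[EvaluationCriterion] = field(default_factory=list)

    def score(self) -> float:
        """Fraction of specific criteria met in this area."""
        specifics = [s for c in self.criteria for s in c.specifics]
        return sum(s.met for s in specifics) / len(specifics) if specifics else 0.0

area = AssessmentArea(
    name="Storage and preservation",
    criteria=[EvaluationCriterion(
        name="Preservation copies are maintained",                      # hypothetical wording
        specifics=[SpecificCriterion("Checksums are verified on a fixed schedule", met=True)],
    )],
)
print(f"{area.name}: {area.score():.0%} of specific criteria met")
```

Per-area scores of this kind could then be rolled up across the 12 areas to highlight where deficiencies remain.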

Drainage Analysis for the Anyang-cheon Upper-watershed Management Planning (유역관리계획수립(流域管理計劃樹立)에 관(關)한 기초적(基礎的) 연구(硏究))

  • Woo, Bo Myeong
    • Journal of Korean Society of Forest Science / v.42 no.1 / pp.39-54 / 1979
  • Stream characteristics such as the numbers, lengths, and orders of stream channels, and the drainage density, are essential elements for the analysis of drainages in watershed management planning for a drainage basin. The drainage net is the pattern of tributaries and master streams in a drainage basin as delineated on a planimetric map. Stream order is a measure of the position of a stream in the hierarchy of tributaries. Drainage density is given by the quotient of the cumulative stream length and the total drainage area; drainage density is therefore simply a length per unit of area. In this study, the Anyang-cheon upper-watershed was selected for a survey and analysis of the stream system and drainage density, with a view to collecting data useful for effective watershed management planning. The Anyang-cheon upper-watershed consists of about 12,600 hectares of drainage area including 13 Sub-streams. The total length of the Stream (as defined in the Stream Law) in the survey area was measured as 71.2 km, and that of the Small-stream as described in the Saemaul Stream Survey Book (1972) was calculated as 43,010 meters. In addition to these lengths, about 43,410 meters of Small-stream and about 71,900 meters of Torrential valley were measured in this study. The drainage density among the 13 Sub-streams with sub-watersheds ranges from 14.79 to 24.10, and the average drainage density of the entire watershed is calculated as 18.21 when the length of the Torrential valley is included and 12.50 when it is excluded. A standard classification system for distinguishing among the Stream, Sub-stream, Small-stream, Torrent, and Torrential valley should be established through a joint study by the authorities concerned.
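The reported average densities can be roughly reproduced from the lengths above. The short sketch below assumes the density is expressed as meters of channel per hectare of drainage area; that unit is inferred from the numbers, since the abstract does not state it.

```python
# Reproducing the reported drainage densities from the channel lengths above.
# Assumption: density = total channel length (m) / drainage area (ha), i.e. m per hectare.
area_ha = 12_600                 # drainage area of the Anyang-cheon upper-watershed

lengths_m = {
    "Stream (Stream Law)":            71_200,
    "Small-stream (Saemaul Survey)":  43_010,
    "Small-stream (this study)":      43_410,
    "Torrential valley (this study)": 71_900,
}

with_torrential = sum(lengths_m.values()) / area_ha
without_torrential = (sum(lengths_m.values())
                      - lengths_m["Torrential valley (this study)"]) / area_ha

print(f"Including torrential valleys: {with_torrential:.2f} m/ha")    # ~18.22 (reported 18.21)
print(f"Excluding torrential valleys: {without_torrential:.2f} m/ha")  # ~12.51 (reported 12.50)
```

The small differences from the reported 18.21 and 12.50 are within rounding of the surveyed lengths.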

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum; Kim, Jeongmin; Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research efforts to make use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare uses such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One challenge in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities by using only a single sensor, namely the smartphone accelerometer. The approach we take to this ten-class problem is to use the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how a set of classes is split into two subsets at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than others; however, the best tree can hardly be identified without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest has more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected at 0.1-second intervals for 2 minutes for each activity from 5 volunteers. Of the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
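As a rough illustration of the method, the sketch below implements the window-statistics features and a small ensemble of random nested dichotomies with a random-forest base learner using scikit-learn. The window length, tree count, forest size, and the probability-combination scheme are illustrative assumptions rather than the authors' documented settings.

```python
# Sketch: window features + Ensemble of Nested Dichotomies (END) with random-forest base learners.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc_xyz, win=20):
    """Acceleration-magnitude features: value plus max/min/std over the last `win` samples."""
    mag = np.linalg.norm(acc_xyz, axis=1)          # acc_xyz: (T, 3) raw accelerometer samples
    feats = [[mag[t], mag[t - win:t].max(), mag[t - win:t].min(), mag[t - win:t].std()]
             for t in range(win, len(mag))]
    return np.array(feats)

class NestedDichotomy:
    """One random dichotomy tree: recursively split the class set in two."""
    def __init__(self, rng):
        self.rng = rng

    def fit(self, X, y, classes=None):
        self.classes = list(classes) if classes is not None else list(np.unique(y))
        if len(self.classes) > 1:
            split = list(self.rng.permutation(self.classes))
            self.left, self.right = split[:len(split) // 2], split[len(split) // 2:]
            mask = np.isin(y, self.left)           # True = sample belongs to the left subset
            self.clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, mask)
            self.l_child = NestedDichotomy(self.rng).fit(X[mask], y[mask], self.left)
            self.r_child = NestedDichotomy(self.rng).fit(X[~mask], y[~mask], self.right)
        return self

    def predict_proba(self, X):
        """Per-class probabilities, multiplying branch probabilities down the tree."""
        if len(self.classes) == 1:
            return {self.classes[0]: np.ones(len(X))}
        p_left = self.clf.predict_proba(X)[:, list(self.clf.classes_).index(True)]
        probs = {c: p_left * p for c, p in self.l_child.predict_proba(X).items()}
        probs.update({c: (1 - p_left) * p for c, p in self.r_child.predict_proba(X).items()})
        return probs

class END:
    """Ensemble of randomly built nested dichotomies; average their class probabilities."""
    def __init__(self, n_trees=10, seed=0):
        self.trees = [NestedDichotomy(np.random.default_rng(seed + i)) for i in range(n_trees)]

    def fit(self, X, y):
        for tree in self.trees:
            tree.fit(X, y)
        return self

    def predict(self, X):
        classes = self.trees[0].classes
        scores = np.mean([[t.predict_proba(X)[c] for c in classes] for t in self.trees], axis=0)
        return np.asarray(classes)[np.argmax(scores, axis=0)]

# Usage sketch: X = window_features(raw_accelerometer); y = activity label per feature row
# clf = END(n_trees=10).fit(X_train, y_train); accuracy = (clf.predict(X_test) == y_test).mean()
```

Because each node's binary classifier is itself a random forest, the ensemble is, as the abstract puts it, a committee of committees of decision trees.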

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho; Choi, Sangwoo; Chae, Moon-jung; Park, Heewoong; Lee, Jaehong; Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep-learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status is defined as a redefined subset of user interaction behavior, namely whether the user is accompanying an acquaintance at close range and whether the user is actively conversing with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced that consists of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation is applied to synchronize the timestamps of the data collected from the different sensors, normalization is performed on each x, y, and z axis value of the sensor data, and sequence data are generated with the sliding-window method. The sequence data then become the input of the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consists of 3 convolutional layers and has no pooling layer, so as to preserve the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are fed to a softmax classifier. The loss function of the model is the cross-entropy function, and the weights of the model are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (ADAM) optimization algorithm, with a mini-batch size of 128. Dropout is applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate is set to 0.001 and decreases exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will study transfer learning methods that enable models trained on the training data to be transferred to evaluation data that follow a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in the data that were not considered in the model learning stage.
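To make the described pipeline concrete, here is a minimal Keras sketch of the sliding-window preprocessing and the CNN + LSTM classifier. The layer layout (three convolutional layers without pooling, two 128-cell LSTM layers, a softmax output), the N(0, 0.1) weight initialization, Adam with a 0.001 learning rate, batch size 128, and the 0.99-per-epoch decay follow the abstract; the window length, convolution filter counts, kernel sizes, and dropout rate are assumptions.

```python
# Sketch of the CNN + LSTM accompanying-status classifier (assumed shapes and filter sizes).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, initializers, callbacks

WINDOW = 128          # assumed sliding-window length (time steps)
CHANNELS = 9          # accelerometer + magnetic field + gyroscope, x/y/z each
N_CLASSES = 2         # one binary task (e.g. accompanying vs. not); conversation uses a twin model

def sliding_windows(seq, win=WINDOW, step=WINDOW // 2):
    """Cut a normalized (T, CHANNELS) sensor sequence into overlapping windows."""
    return np.stack([seq[i:i + win] for i in range(0, len(seq) - win + 1, step)])

init = initializers.RandomNormal(mean=0.0, stddev=0.1)   # N(0, 0.1) weight initialization

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Dropout(0.5),                                  # dropout on the LSTM input
    layers.LSTM(128, return_sequences=True, kernel_initializer=init),
    layers.LSTM(128, kernel_initializer=init),
    layers.Dense(N_CLASSES, activation="softmax", kernel_initializer=init),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",     # cross-entropy loss
              metrics=["accuracy"])

# Learning rate decreases exponentially by a factor of 0.99 per epoch, as in the abstract.
lr_decay = callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.99)
# model.fit(X_windows, y_labels, batch_size=128, epochs=50, callbacks=[lr_decay])
```

Omitting pooling between the convolutional layers keeps the full temporal resolution of the window, which is what allows the LSTM layers to model long-range dependencies over the raw time steps.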