• Title/Summary/Keyword: learning

Search Results: 35,312

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research efforts to make use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities by using the data of only a single sensor, i.e., the smartphone accelerometer. The approach that we take to deal with this ten-class problem is to use the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by using a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by using another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how a set of classes is split into two subsets at each node, the final tree that we obtain can be different. Since some classes can be correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning, and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window of the last 2 seconds, etc. For experiments to compare the performance of END with those of other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers.
Among the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they do not have time window data), 4,700 have been used for training and the rest for testing. Although 'Walking Uphill' is often confused with other similar activities, END has been found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.

CLINICAL STUDY OF THE ABUSE IN PSYCHIATRICALLY HOSPITALIZED CHILDREN AND ADOLESCENTS (소아청소년 정신과병동 입원아동의 학대에 대한 임상 연구)

  • Lee, Soo-Kyung;Hong, Kang-E
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.10 no.2 / pp.145-157 / 1999
  • This study was performed on children and adolescents admitted to a child and adolescent psychiatric ward who had been physically or emotionally abused or neglected. We investigated the number of such cases among admitted children and adolescents, and also observed the characteristics of symptoms, developmental history, characteristics of the abuse, characteristics of abusers, family dynamics, and psychopathology. We hypothesized that all kinds of abuse would influence emotional and behavioral problems and developmental courses in victims, with interactive effects on family dynamics and psychopathology. The subjects were 22 victims identified through clinical observation and clinical notes. The results of the study were as follows: 1) Demographic characteristics of victims: the sex ratio was 1:6.3 (male:female), and the mean age was $11.1{\pm}2.5$. By birth order, 1st was 12 (54.5%), 2nd was 5 (23%), 3rd was 2 (9%), and only child was 3 (13.5%). 2) Characteristics of family: by socioeconomic status, middle-to-high class was 3 (13.5%), middle was 9 (41%), middle-to-low was 9 (41%), and low was 1 (0.5%). By number of family members, 3 or fewer was 3 (13.5%), 4-5 was 17 (77.5%), and 6-7 was 2 (9%). By marital status of parents, divorce or separation was 5 (23%), remarriage 2 (9%), and severe marital discord 19 (86.5%). Among fathers, antisocial behavior was 7 (32%) and alcohol dependence 10 (45.5%). Among mothers, alcohol abuse was 5 (23%), depression 17 (77.3%), and a history of psychiatric management 6 (27%). 3) Characteristics of abuse: physical abuse was 18 (81.8%); combined physical and emotional abuse and neglect were 4 (18.2%). By onset of abuse, before 3 years was 15 (54.5%), 3-6 years was 5 (27.5%), and school age was 1 (15%). Father as the only offender was 2 (19%), mother as the only offender 8 (35.4%), and both parents 8 (35.4%); cases accompanied by spouse abuse were 7 (27%), and cases accompanied by abuse of other siblings were 4 (18.2%).
4) General characteristics and developmental history of victims: unwanted baby was 12 (54.5%), developmental delay before abuse was 9 (41%), and comorbid developmental disorder was 15 (68%). There were 6 (27.5%) who did not show definite signs of developmental delay before the abuse. 5) Main diagnosis and comorbid diagnosis: by main diagnosis, conduct disorder 6 (27.3%), borderline child 5 (23%), depression 4 (18%), attention deficit hyperactivity disorder (ADHD) 4 (18%), pervasive developmental disorder not otherwise specified 2 (9%), and selective mutism 1 (5%). Comorbid diagnoses included ADHD, borderline intelligence, mental retardation, learning disorder, developmental language disorder, oppositional defiant disorder, chronic tic disorder, functional enuresis and encopresis, anxiety disorder, dissociative disorder, and personality disorder due to a medical condition. 6) Course of treatment: the mean duration of admission was $2.4{\pm}1.5$ months. 11 (50%) showed improvement of symptoms, whereas 11 (50%) showed no change in symptoms.


A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.93-108 / 2014
  • To support business decision making, interest and efforts in analyzing and using transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions are evolving into various patterns by taking advantage of information technology. To keep up with this evolution, there are many efforts on fraud detection methods and advanced application systems to improve the accuracy and ease of fraud detection. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are designated as auction-exception products when total volumes of the products are relatively small, the number of wholesalers is small, or it is difficult for wholesalers to purchase the products. However, the auction-exception products policy creates several problems for the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real large-scale agricultural product trade transaction data from 2008 to 2010 in the market are analyzed, amounting to more than 1 million transactions and 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first trial to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules are generated using an outlier detection approach.
We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily average unit price, weekly average unit price, and quarterly average unit price of product items. Quarterly average unit prices of product items for specific wholesalers are also used to identify outlier transactions. The reliability of the generated fraud detection rules is confirmed by domain experts. To determine whether a transaction is fraudulent or not, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed to a Z-value to calculate its occurrence probability when we approximate the distribution of unit prices by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because in the case of auction-exception agricultural products, Z-values are influenced by the outlier fraud transactions themselves, as the number of wholesalers is small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction that is being checked for fraud. To show the usefulness of the proposed approach, a prototype fraud transaction detection system is developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is to import transaction databases. The next important functions are to set up fraud detection parameters. By changing fraud detection parameters, system users can control the number of potential fraud transactions. Execution functions provide fraud detection results found based on the fraud detection parameters. The potential fraud transactions can be viewed on screen or exported as files. This study is an initial trial to identify fraudulent transactions in auction-exception agricultural products.
Many research topics on this issue remain. First, the scope of the analyzed data was limited due to data availability. It is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, we need to extend the scope of fraud transaction detection to fishery products. There are also many possibilities to apply different data mining techniques for fraud detection; for example, a time series approach is a potential technique for the problem. Although outlier transactions are detected here based on the unit prices of transactions, it is also possible to derive fraud detection rules based on transaction volumes.
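The Self-Eliminated Z-score idea can be sketched in a few lines. This is an illustrative reconstruction, not the Delphi system's code: the flagging threshold of 3 and the toy price list are assumptions for the example.

```python
import numpy as np

def self_eliminated_z(prices):
    """Z-score of each price against the mean/std of all *other* prices,
    so an extreme price cannot mask itself by inflating the statistics."""
    prices = np.asarray(prices, dtype=float)
    z = np.empty(len(prices))
    for i in range(len(prices)):
        rest = np.delete(prices, i)          # exclude the transaction being checked
        mu, sigma = rest.mean(), rest.std(ddof=1)
        z[i] = (prices[i] - mu) / sigma if sigma > 0 else 0.0
    return z

prices = [100, 102, 98, 101, 99, 250]        # one suspicious unit price
z = self_eliminated_z(prices)
flagged = [p for p, zi in zip(prices, z) if abs(zi) > 3]
print(flagged)
```

With an ordinary Z-score the 250 outlier would drag the mean and standard deviation towards itself; excluding it from its own reference statistics makes the deviation stand out sharply, which is exactly the motivation the abstract gives for the modified score.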

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.35-52 / 2019
  • As public services are provided in various forms, including e-government, public demand for public service quality is increasing. Although continuous measurement and improvement of the quality of public services is needed, traditional surveys are costly and time-consuming and have limitations. Therefore, there is a need for an analytical technique that can measure the quality of public services quickly and accurately at any time based on the data generated from public services. In this study, we analyzed the quality of public services based on data, using process mining techniques, for the building licensing complaint service in N city. The N city building license complaint service was chosen because the data necessary for analysis can be secured and the approach can be spread to other institutions through public service quality management. This study conducted process mining on a total of 3,678 building license complaints in N city over two years from January 2014, and identified process maps and departments with high frequency and long processing times. According to the analysis results, some departments were overloaded at certain points in time while others handled relatively few complaints. In addition, there was reasonable suspicion that an increase in the number of complaints would increase the time required to complete them. The time required to complete a complaint varied from the same day to one year and 146 days. The cumulative frequency of the top four departments, the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division, exceeded 50%, and the cumulative frequency of the top nine departments exceeded 70%. The load was concentrated in a limited number of departments, resulting in a severe imbalance among them. Most complaint services follow a variety of different process patterns.
The analysis shows that the number of 'supplement' decisions has the greatest impact on the processing time of a complaint. This is interpreted as follows: a 'supplement' decision requires a physical period in which the complainant supplements the documents and submits them again, so a lengthy period is needed until the completion of the entire complaint. To alleviate this, the overall processing time of complaints can be drastically reduced by preparing thoroughly before filing. By clarifying and disclosing the causes of 'supplement' decisions and their solutions, drawn from the data in the system, complainants can prepare in advance and be confident that documents prepared according to the disclosed information will be accepted, making complaint processing sufficiently predictable. Documents prepared using pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by eliminating re-examination and duplicated tasks. The results of this study can be used to find departments with high complaint workloads at certain points in time and to flexibly manage workforce allocation between departments. In addition, by analyzing the patterns of the departments participating in consultations by complaint characteristics, the results can be used for automation or recommendation when selecting a consultation department. Furthermore, by using the various data generated during the complaint process together with machine learning techniques, the patterns of the complaint process can be discovered; implementing such algorithms in the system can support the automation and intelligence of civil complaint processing.
This study is expected to suggest directions for future public service quality improvement through process mining analysis of civil services.
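The two core measurements above, throughput time per complaint and the workload concentration across departments, can be sketched from an event log. The log below is hypothetical (the case IDs, department names, and dates are invented for illustration, not taken from the N city data).

```python
from collections import Counter
from datetime import datetime

# Hypothetical event log: (case_id, department, timestamp)
log = [
    ("c1", "Sewage Treatment", "2014-01-02"), ("c1", "Waterworks", "2014-01-10"),
    ("c2", "Urban Design", "2014-01-03"),     ("c2", "Green Growth", "2014-03-01"),
    ("c3", "Waterworks", "2014-01-05"),       ("c3", "Sewage Treatment", "2014-01-05"),
]

def ts(s):
    return datetime.strptime(s, "%Y-%m-%d")

# Throughput time per complaint: first event to last event
cases = {}
for cid, dept, when in log:
    cases.setdefault(cid, []).append(ts(when))
durations = {cid: (max(times) - min(times)).days for cid, times in cases.items()}

# Department workload and cumulative share (the '50% in four departments' view)
freq = Counter(dept for _, dept, _ in log)
total = sum(freq.values())
cum = 0.0
for dept, n in freq.most_common():
    cum += n / total
    print(dept, n, round(cum, 2))
print(durations)
```

The same two aggregations, applied to the real 3,678-case log, are what reveal the same-day-to-511-day spread of completion times and the heavy concentration of work in a few divisions.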

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the character information of the device ID and gas usage amount by selective optical character recognition based on deep learning technology. In general, there are many types of characters in an image, and optical character recognition technology extracts all character information in an image. But some applications need to ignore characters that are not of interest and focus only on specific types of characters. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount character information from gasometer images to send bills to users. Character strings that are not of interest, such as device type, manufacturer, manufacturing date, specification, etc., are not valuable information to the application. Thus, the application has to analyze the region of interest and specific types of characters to extract valuable information only. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built up 3 neural networks for the application system.
The first is a convolutional neural network which detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network which transforms the spatial information of a region of interest into spatial sequential feature vectors; and the third is a bi-directional long short-term memory network which converts the spatial sequential information into character strings using time-series analysis, mapping from feature vectors to character strings. In this research, the character strings of interest are the device ID and the gas usage amount. A device ID consists of 12 Arabic numerals and a gas usage amount consists of 4-5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and NVIDIA TESLA V100 GPUs. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on an Intel Xeon CPU and pushes reading requests from mobile devices into an input queue with a FIFO (First In First Out) structure. The slave process consists of the 3 types of deep neural networks which conduct the character recognition process and runs on an NVIDIA GPU module. The slave process continually polls the input queue for recognition requests. If there are requests from the master process in the input queue, the slave process converts the image in the input queue into the device ID character string, the gas usage amount character string, and the position information of the strings, returns the information to the output queue, and switches to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the 3 types of deep neural networks.
22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant). Normal data are clean images; noise means images with noise signals; reflex means images with light reflection in the gasometer region; scale means images with small object size due to long-distance capturing; and slant means images which are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
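The master-slave FIFO structure described above can be sketched with standard queues and a worker thread. This is a single-process toy under stated assumptions, not the AWS deployment: the queue names are illustrative, and a placeholder dictionary stands in for the CNN detector and CRNN recognizer.

```python
import queue
import threading

# Minimal sketch of the master/slave FIFO structure (names are illustrative)
input_q, output_q = queue.Queue(), queue.Queue()

def slave():
    """Polls the input queue, 'recognizes' the image, pushes the result."""
    while True:
        req = input_q.get()
        if req is None:                # shutdown signal
            break
        # Stand-in for the CNN region detector + CRNN string recognizer
        result = {"request": req, "device_id": "0" * 12, "usage": "1234"}
        output_q.put(result)

worker = threading.Thread(target=slave)
worker.start()

for i in range(3):                     # master pushes reading requests in FIFO order
    input_q.put(f"image-{i}")
input_q.put(None)
worker.join()

results = [output_q.get() for _ in range(3)]
print(len(results))
```

In the real architecture the master and slave are separate processes on CPU and GPU hosts, and the slave returns three items per request (device ID string, usage string, and string positions); the queue discipline, however, is the same first-in-first-out flow shown here.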

Multi-Dimensional Analysis Method of Product Reviews for Market Insight (마켓 인사이트를 위한 상품 리뷰의 다차원 분석 방안)

  • Park, Jeong Hyun;Lee, Seo Ho;Lim, Gyu Jin;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.57-78 / 2020
  • With the development of the Internet, consumers can easily check product information through e-commerce. Product reviews used in the process of purchasing goods are based on user experience, allowing consumers to act as producers of information as well as to refer to it. This can increase the efficiency of purchasing decisions from the perspective of consumers, and from the seller's point of view, it can help develop products and strengthen competitiveness. However, it takes a lot of time and effort for consumers to grasp, from the vast amount of product reviews offered by e-commerce sites, the overall assessment and the assessment dimensions they consider important for the products they want to compare. This is because product reviews are unstructured information, and it is difficult to read the sentiment and assessment dimension of a review immediately. For example, consumers who want to purchase a laptop would like to check the assessment of comparable products along each dimension, such as performance, weight, delivery, speed, and design. Therefore, in this paper, we propose a method to automatically generate multi-dimensional product assessment scores from the reviews of products to be compared. The method presented in this study consists largely of two phases: a pre-preparation phase and an individual product scoring phase. In the pre-preparation phase, a dimension classification model and a sentiment analysis model are created based on reviews of the large-category product group. By combining word embedding and association analysis, the dimension classification model complements a limitation of the word embedding methods in existing studies, which consider only the distance between words in sentences when finding the relevance between dimensions and words.
The sentiment analysis model is a CNN trained on learning data tagged with positives and negatives at the phrase level for accurate polarity detection. In the individual product scoring phase, the pre-prepared models are applied to phrase-level reviews. Phrases judged to describe a specific dimension are grouped by assessment dimension, and multi-dimensional assessment scores are obtained by aggregating the sentiment of these phrases in proportion to the reviews in each dimension. In the experiments of this paper, approximately 260,000 reviews of the large-category product group were collected to build the dimension classification model and the sentiment analysis model. In addition, reviews of the laptops of companies S and L sold on e-commerce sites were collected and used as experimental data. The dimension classification model classified individual product reviews, broken down into phrases, into six assessment dimensions, combining the existing word embedding method with an association analysis indicating the frequency between words and dimensions. As a result of combining word embedding and association analysis, the accuracy of the model increased by 13.7%. The sentiment analysis model analyzed the assessments more closely when trained at the phrase level rather than on sentences; as a result, its accuracy was confirmed to be 29.4% higher than that of the sentence-based model. Through this study, both sellers and consumers can expect efficient decision making in purchasing and product development, given that they can make multi-dimensional comparisons of products. In addition, text reviews, which are unstructured data, were transformed into objective values such as frequencies and morphemes, and analyzed using word embedding and association analysis together, improving the objectivity of more precise multi-dimensional analysis.
This will be an attractive analysis model in terms of enabling more effective service deployment in the evolving and fiercely competitive e-commerce market, while satisfying both customers and sellers.
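The association-analysis half of the dimension classification model can be sketched as follows. This is a toy under stated assumptions, not the paper's model: it omits the word-embedding component entirely, the labeled phrases and dimension names are invented, and a simple confidence P(dimension | word) stands in for the full association measures.

```python
from collections import Counter

# Toy phrase-level training data: (phrase, assessment dimension) — invented labels
phrases = [
    ("battery lasts long", "performance"),
    ("fast cpu performance", "performance"),
    ("light and thin", "weight"),
    ("heavy to carry", "weight"),
    ("arrived quickly", "delivery"),
]

# Association analysis: how often each word co-occurs with each dimension
word_dim = Counter()
word_cnt = Counter()
for text, dim in phrases:
    for w in text.split():
        word_dim[(w, dim)] += 1
        word_cnt[w] += 1

def dimension_score(phrase, dim):
    """Mean confidence P(dim | word) over the words of the phrase;
    unseen words contribute nothing."""
    words = phrase.split()
    return sum(word_dim[(w, dim)] / word_cnt[w]
               for w in words if word_cnt[w]) / len(words)

best = max(["performance", "weight", "delivery"],
           key=lambda d: dimension_score("cpu is fast", d))
print(best)
```

In the paper this frequency-based signal is combined with word-embedding similarity, which is what lifted the classifier's accuracy by 13.7% over embeddings alone; the sketch shows only the association side of that combination.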

A Study on the 'Zhe Zhong Pai'(折衷派) of the Traditional Medicine of Japan (일본(日本) 의학醫學의 '절충파(折衷派)'에 관(關)한 연구(硏究))

  • Park, Hyun-Kuk;Kim, Ki-Wook
    • Journal of Korean Medical classics / v.20 no.3 / pp.121-141 / 2007
  • The outline and characteristics of the important doctors of the 'Zhe Zhong Pai' (折衷派) are as follows. Part 1. In the late Edo (江戶) period, the 'Zhe Zhong Pai' appeared, which tried to take the theory and clinical treatment of the 'Hou Shi Pai' (後世派) and the 'Gu Fang Pai' (古方派) and combine their strong points to perfect treatment. Their principle was 'the ancient methods are the main part; the later prescriptions are to be used' (以古法爲主, 後世方爲用), and the "Shang Han Lun" (傷寒論) was revered for its treatments, though in actual use they did not stop there. As mentioned above, the 'Zhe Zhong Pai' viewed treatment as the base, which was the view of most doctors in the Edo period. However, the reason the 'Zhe Zhong Pai' is not valued as highly as the 'Gu Fang Pai' by medical history books in Japan is that the 'Zhe Zhong Pai' lacks the substantiation and uniqueness of the 'Gu Fang Pai', and that its view of 'gathering as well as storing up' was the same as that of the 'Kao Zheng Pai'. Moreover, the 'compromise' (折衷) point of view came from taking in both Chinese and Western medical knowledge systems (漢蘭折衷). Generally, the pioneer of the 'Zhe Zhong Pai' is seen as Mochizuki Rokumon (望月鹿門), followed by Fukui Futei (福井楓亭), Wada Tokaku (和田東郭), Yamada Seichin (山田正珍), and Taki Motohiro (多紀元簡). Part 2. The lives of Wada Tokaku (和田東郭), Nakagame Kinkei (中神琴溪), and Nei Teng Xi Zhe (內藤希哲), the important doctors of the 'Zhe Zhong Pai', are as follows. First, Wada Tokaku (和田東郭, 1743-1803) was born when the 'Hou Shi Pai' was already declining and the 'Gu Fang Pai' was flourishing, and learned medicine from a 'Hou Shi Pai' doctor, Hu Tian Xu Shan (戶田旭山), and a 'Gu Fang Pai' doctor, Yoshimasu Todo (吉益東洞).
He was not hindered by 'the old methods' (古方) and did not lean towards 'the new methods' (後世方), but formed a way of compromise that 'regarded hardness and softness as complementary' (剛柔相摩) by setting 'the cure of the disease' as the base; he said that to cure diseases 'the old methods' must be used, but 'the new methods' were necessary to supplement their shortcomings. His works include "Dao Shui Suo Yan", "Jiao Chiang Fang Yi Je", and "Yi Xue Sho" (醫學說). Second, Nakagame Kinkei (中神琴溪, 1744-1833) was famous for leaving Yoshimasu Todo (吉益東洞) and changing to the 'Zhe Zhong Pai'; in his early years he used qing fen (輕粉) to cure geisha (妓女) of syphilis. His arguments were that "the "Shang Han Lun" must be revered but needs to be adapted", that "Zhongjing can be made into a follower, but I cannot become his follower", and that "later medical texts such as the "Ru Men Shi Qin" (儒門事親) should only be used for their prescriptions and not their theories". His works include "Shang Han Lun Yue Yan" (傷寒論約言). Third, Nei Teng Xi Zhe (內藤希哲, 1701-1735) learned medicine from Qing Shui Xian Sheng (淸水先生) and went out to Edo. In his book "Yi Jing Jie Huo Lun" (醫經解惑論) he tells of how he went from 'learning' (學) to 'skepticism' (惑) and how skepticism made him learn, in 'the six skepticisms' (六惑). In his later years Xi Zhe (希哲) combined the "Shen Nong Ben Cao Jing" (神農本草經), the main text for herbal medicine, the "Ming Tang Jing" (明堂經) of acupuncture, and the basic theory texts "Huang Di Nei Jing" (黃帝內經) and "Nan Jing" (難經) with the "Shang Han Za Bing Lun", a book that the 'Gu Fang Pai' saw as opposed to the rest, and became 'an expert of the five scriptures' (五經一貫). Part 3. Asada Showhaku (淺田宗伯, 1815-1894) started medicine under Zhong Cun Zhong (中村中倧), learned 'the old methods' (古方) from Yoshimasu Todo, gained experience through Chuan Yue (川越) and Fu Jing (福井), and received teachings in texts, history, and Wang Yangming's principles (陽明學) from famous teachers.
Showhaku (宗伯) met a medical official of the bakufu (幕府), Ben Kang Zong Yuan (本康宗圓), received help from the 3 great doctors of the Edo period, Taki Motokato (多紀元堅), Xiao Dao Xue Gu (小島學古), and Xi Duo Cun Kao Chuang, and further developed his arts. At 47 he diagnosed the shogun Jia Mao (家茂) with 'heart failure from beriberi' (脚氣衝心) and became a Zheng Shi (徵I); at 51 he cured a minister from France and received a present from Napoleon; at 65 he became the court physician and saved Ming Gong (明宮) Jia Ren Qin Wang (嘉仁親王, later the 大正天皇) from bodily convulsions, becoming 'the vassal of merit who saved the national polity (國體)'. In the 7th year of the Meiji (明治) era he became the 2nd owner of Wen Zhi She (溫知社) and took part in the 'kampo continuation movement'. In his later years he saw 14,000 patients a year, from which we can estimate the quality and quantity of his clinical skills. Showhaku (宗伯) wrote over 80 books, including the "Ju Chuang Shu Ying" (橘窓書影), "Wu Wu Yao Shi Fang Han" (勿誤藥室方函), "Shang Han Bian Shu" (傷寒辨術), "Jing Qi Shen Lun" (精氣神論), "Huang Guo Ming Yi Chuan" (皇國名醫傳), and the "Xian Zhe Yi Hua" (先哲醫話). Especially in the "Ju Chuang Shu Ying" (橘窓書影) he says "the old methods are the main; the later prescriptions are to be used" (以古法爲主, 後世方爲用), stating the 'Zhe Zhong Pai' way of thinking. In the first volume of "Shang Han Bian Shu" (傷寒辨術) and in the 'Zong Ping' (總評) of "Za Bing Lun Shi" (雜病論識), he discerns the parts that are not Zhang Zhongjing's writings and emphasizes his theories and their practical uses.


Spatial effect on the diffusion of discount stores (대형할인점 확산에 대한 공간적 영향)

  • Joo, Young-Jin;Kim, Mi-Ae
    • Journal of Distribution Research / v.15 no.4 / pp.61-85 / 2010
  • Introduction: Diffusion is the process by which an innovation is communicated through certain channels over time among the members of a social system (Rogers 1983). Bass (1969) suggested the Bass model describing the diffusion process. The Bass model assumes that potential adopters of an innovation are influenced by mass media and by word-of-mouth from communication with previous adopters. Various extensions of the Bass model have been proposed. Some of them introduce a third factor affecting diffusion. Others propose multinational diffusion models stressing interactive effects on diffusion among several countries. We add a spatial factor to the Bass model as a third communication factor. In a situation where we cannot control the interaction between markets, we need to consider that diffusion within a certain market can be influenced by diffusion in contiguous markets. The process by which a certain type of retail expands across markets can be described by the retail life cycle. Diffusion of retail follows the three phases of spatial diffusion: adoption of the innovation happens near the diffusion center first, spreads to the vicinity of the diffusing center, and is then completed in peripheral areas in the saturation stage. So we expect the spatial effect to be important in describing the diffusion of domestic discount stores. We define a spatial diffusion model based on the multinational diffusion model and apply it to the diffusion of discount stores. Modeling: To define the spatial diffusion model, we extend the learning model (Kumar and Krishnan 2002) and separate the diffusion process in the diffusion center (market A) from the diffusion process in the vicinity of the diffusing center (market B). The proposed spatial diffusion model is shown in equations (1a) and (1b).
Equation (1a) describes the diffusion process in the diffusion center, and equation (1b) describes the process in the vicinity of the diffusing center: $$S_{i,t}=\left(p_i+q_i\frac{Y_{i,t-1}}{m_i}\right)(m_i-Y_{i,t-1}),\quad i\in\{1,\cdots,I\}\qquad(1a)$$ $$S_{j,t}=\left(p_j+q_j\frac{Y_{j,t-1}}{m_j}+\sum_{i=1}^{I}\gamma_{ij}\frac{Y_{i,t-1}}{m_i}\right)(m_j-Y_{j,t-1}),\quad j\in\{I+1,\cdots,I+J\}\qquad(1b)$$ Here $S$ denotes period sales (new adopters), $Y$ cumulative adopters, $m$ market potential, $p$ and $q$ the innovation and imitation coefficients, and $\gamma_{ij}$ the spatial effect of diffusing center $i$ on vicinity market $j$. We raise two research questions: (1) Is the proposed spatial diffusion model more effective than the Bass model in describing the diffusion of discount stores? (2) Is the spatial effect of the diffusing center on diffusion in a contiguous market larger when the retail environment of the diffusing center is more similar to that of the contiguous market? To examine these questions, we first fit the Bass model to the diffusion of discount stores, then fit the spatial diffusion model in which the spatial factor is added to the Bass model, and finally compare the two models to determine which better describes the diffusion of discount stores. In addition, we investigate the relationship between similarity of retail environment (conceptual distance) and the impact of the spatial factor using correlation analysis. Result and Implication: To examine the proposed spatial diffusion model, we use data on 347 domestic discount stores and divide the nation into five districts, Seoul-Gyeongin (SG), Busan-Gyeongnam (BG), Daegu-Gyeongbuk (DG), Gwangju-Jeonla (GJ), and Daejeon-Chungcheong (DC); the results are as follows.

    In the results of the Bass model (I), the estimated innovation coefficient (p) is 0.017 and the estimated imitation coefficient (q) is 0.323, while the estimated market potential is 384. The results of the Bass model (II), fitted for each district, show that the estimated innovation coefficient (p) in SG, 0.019, is the lowest among the five districts; this is because SG is the diffusion center. The estimated imitation coefficient (q) in BG, 0.353, is the highest. The imitation coefficient in the vicinity of the diffusing center, such as BG, is higher than in the diffusing center itself because more information flows through various paths as diffusion progresses. In the results of the spatial diffusion model (IV), we can observe changes between the coefficients of the Bass model and those of the spatial diffusion model: except for GJ, the estimated innovation and imitation coefficients in Model IV are lower than those in Model II. These changes in the innovation and imitation coefficients are absorbed by the spatial coefficient (${\gamma}$), from which we can infer that diffusion in the vicinity of the diffusing center is influenced by diffusion in the diffusing center. The difference between the Bass model (II) and the spatial diffusion model (IV) is statistically significant, with a ${\chi}^2$-distributed likelihood ratio statistic of 16.598 (p=0.0023), which implies that the spatial diffusion model is more effective than the Bass model in describing the diffusion of discount stores.
So research question (1) is supported. In addition, correlation analysis shows a statistically significant relationship between the similarity of retail environments and the spatial effect, so research question (2) is also supported.
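As a rough illustration of equations (1a) and (1b), the two-market case (one diffusion center and one vicinity market) can be simulated in discrete time. The sketch below is not the paper's estimation procedure, and all parameter values are hypothetical.

```python
# Discrete-time sketch of the spatial diffusion model, equations (1a)/(1b),
# for one diffusion center (market a) and one vicinity market (market b).
# p: innovation coefficient, q: imitation coefficient, m: market potential,
# gamma: spatial effect of the center on the vicinity market.
# All parameter values below are invented for demonstration.

def simulate(p_a, q_a, m_a, p_b, q_b, m_b, gamma, periods):
    ya = yb = 0.0                      # cumulative adopters in each market
    sales_a, sales_b = [], []
    for _ in range(periods):
        # (1a): the diffusion center is driven only by its own adopters
        sa = (p_a + q_a * ya / m_a) * (m_a - ya)
        # (1b): the vicinity market adds the spatial term gamma * ya / m_a
        sb = (p_b + q_b * yb / m_b + gamma * ya / m_a) * (m_b - yb)
        ya += sa
        yb += sb
        sales_a.append(sa)
        sales_b.append(sb)
    return sales_a, sales_b

sales_a, sales_b = simulate(0.02, 0.32, 150, 0.01, 0.35, 80, 0.05, 30)
```

Comparing the fit of this model with ${\gamma}$ fixed at zero (the plain Bass model) against the fit with ${\gamma}$ free corresponds to the likelihood ratio comparison reported above.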

  • A Study on the 1889 'Nanjukseok' (Orchid, Bamboo and Rock) Paintings of Seo Byeong-o (석재 서병오(1862-1936)의 1889년작 난죽석도 연구)

    • Choi, Kyoung Hyun
      • Korean Journal of Heritage: History & Science
      • /
      • v.51 no.4
      • /
      • pp.4-23
      • /
      • 2018
    • Seo Byeong-o (徐丙五, 1862-1936) played a central role in the formation of the Daegu artistic community, which advocated artistic styles combining poetry, calligraphy and painting, during the Japanese colonial period, when the introduction of the Western concept of 'art' led to the adoption of Japanese and Western styles of painting in Korea. Seo first entered the world of calligraphy and painting after meeting Lee Ha-eung (李昰應, 1820-1898) in 1879, but his career as a scholar-artist only began in earnest after Korea was annexed by Japan in 1910. Seo's oeuvre can be broadly divided into three periods. In his initial period of learning, from 1879 to 1897, his artistic activity was largely confined to copying works from Chinese painting albums and painting works in the "Four Gentlemen" genre, influenced by the work of Lee Ha-eung, in his spare time. This may have been because Seo's principal aim at this time was to further his career as a government official. His subsequent period of development, which lasted from 1898 until 1920, saw him play a leading social role in such areas as the patriotic enlightenment movement until 1910, after which he reoriented his life to become a scholar-artist. During this period, Seo explored new styles based on the orchid paintings of Min Yeong-ik (閔泳翊, 1860-1914), whom he met during his second trip to Shanghai, and on the bamboo paintings of Chinese artist Pu Hua (蒲華, 1830-1911). At the same time, he painted in various genres including landscapes, flowers, and gimyeong jeolji (器皿折枝; still life with vessels and flowers). In his final mature period, from 1921 to 1936, Seo divided his time between Daegu and Seoul, becoming a highly active calligrapher and painter in Korea's modern art community. By this time his unique personal style, characterized by broad brush strokes and the use of abundant ink in orchid and bamboo paintings, was fully formed.
Records on, and extant works from, Seo's early period are particularly rare, thus confining knowledge of his artistic activities and painting style largely to the realm of speculation. In this respect, eleven recently revealed nanjukseok (蘭竹石圖; orchid, bamboo and rock) paintings, produced by Seo in 1889, provide important clues about the origins and standards of his early-period painting style. This study uses a comparative analysis to confirm that Seo's orchid paintings show the influence of the early gunran (群蘭圖; orchid) and seongnan (石蘭圖; rock and orchid) paintings produced by Lee Ha-eung before his arrest by Qing troops in July 1882. Seo's bamboo paintings appear to show both that he adopted the style of Zheng Xie (鄭燮, 1693-1765) of the Yangzhou School (揚州畵派), a style widely known in Seoul from the late eighteenth century onward, and of Heo Ryeon (許鍊, 1809-1892), a student of Joseon artist Kim Jeong-hui (金正喜,1786-1856), and that he attempted to apply a modified version of Lee Ha-eung's seongnan painting technique. It was not possible to find other works by Seo evincing a direct relationship with the curious rocks depicted in his 1889 paintings, but I contend that they show the influence of both the late-nineteenth-century-Qing rock painter Zhou Tang (周棠, 1806-1876) and the curious rock paintings of the middle-class Joseon artist Jeong Hak-gyo (丁學敎, 1832-1914). In conclusion, this study asserts that, for his 1889 nanjukseok paintings, Seo Byeong-o adopted the styles of contemporary painters such as Heo Ryeon and Jeong Hak-gyo, whom he met during his early period at the Unhyeongung through his connection with its occupant, Lee Ha-eung, and those of artists such as Zheng Xie and Zhou Tang, whose works he was able to directly observe in Korea.

  • Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

    • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
      • Journal of Intelligence and Information Systems
      • /
      • v.25 no.2
      • /
      • pp.141-166
      • /
      • 2019
    • Recently, channels such as social media and SNS have been creating enormous amounts of data, and the portion of unstructured data represented as text has grown geometrically. Because it is difficult to read all of this text, it is important to access it rapidly and grasp its key points. To meet this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms, so-called "automatic summarization," have recently been proposed to generate summaries objectively and effectively. However, almost all text summarization methods proposed to date construct summaries based on the frequency of contents in the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often in the original text. If a summary includes only the major subjects, bias occurs and information is lost, making it hard to ascertain every subject the documents contain. To avoid this bias, one can summarize with a balance between the topics a document contains so that all subjects can be ascertained, but an unbalanced distribution among subjects still remains. To retain a balance of subjects in the summary, it is necessary to consider the proportion of every subject the documents originally contain and to allocate the portions of subjects accordingly, so that even sentences on minor subjects are sufficiently included in the summary. In this study, we propose a "subject-balanced" text summarization method that ensures balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summarization, we use two summary evaluation metrics, "completeness" and "succinctness".
Completeness is the degree to which the summary fully covers the contents of the original documents, and succinctness means the summary contains minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, highly related terms for every topic can be identified, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; we call these "seed terms". However, these terms alone are too few to describe each subject sufficiently, so additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion: after Word2Vec modeling produces word vectors, the similarity between any two terms can be derived using cosine similarity, where a higher cosine similarity indicates a stronger relationship between the terms. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms, the subject dictionary is finally constructed. The next phase allocates a subject to every sentence in the original documents. To grasp the contents of each sentence, frequency analysis is first conducted with the specific terms that compose the subject dictionaries. The TF-IDF weight of each subject is then calculated, making it possible to determine how much each sentence explains each subject. However, TF-IDF weights can grow without bound, so the subject weights of every sentence are normalized to values between 0 and 1.
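The dictionary-expansion step above, selecting terms whose word vectors lie close to a subject's seed term, can be sketched as follows. The toy vectors and the 0.7 threshold are illustrative assumptions, standing in for vectors learned by an actual Word2Vec model.

```python
# Sketch of seed-term expansion by cosine similarity over word vectors.
# The three-dimensional vectors below are fabricated for illustration;
# in the described method they would come from a trained Word2Vec model.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

vectors = {
    "room":    [0.9, 0.1, 0.0],
    "bedroom": [0.8, 0.2, 0.1],
    "meal":    [0.1, 0.9, 0.2],
}

def expand(seed, vocab, threshold=0.7):
    # keep every term whose similarity to the seed exceeds the threshold
    return [w for w in vocab if w != seed and cosine(vocab[seed], vocab[w]) >= threshold]

print(expand("room", vectors))  # → ['bedroom']
```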
Each sentence is then assigned the subject with its maximum TF-IDF weight among all subjects, finally forming a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between subject sentences, forming a similarity matrix, and through repetitive sentence selection it is possible to generate a summary that fully covers the contents of the original documents while minimizing internal duplication. To evaluate the proposed method, 50,000 TripAdvisor reviews are used to construct the subject dictionaries and 23,087 reviews are used to generate summaries. A comparison between the summaries of the proposed method and frequency-based summaries verifies that summaries from the proposed method better retain the balance of all the subjects the documents originally contain.
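The allocation and balancing steps can be illustrated with a toy example. The sentence weights, subject names, and the two-sentence budget below are fabricated, and the simple proportional quota stands in for the paper's balancing rule (which also uses Sen2Vec similarity to avoid duplication).

```python
# Toy sketch of subject allocation and proportional (subject-balanced)
# sentence selection. Normalized subject weights per sentence are invented.

weights = {
    "s1": {"service": 0.9, "food": 0.2},
    "s2": {"service": 0.1, "food": 0.8},
    "s3": {"service": 0.6, "food": 0.5},
    "s4": {"service": 0.2, "food": 0.7},
}

# allocate each sentence to the subject with its maximum weight
groups = {}
for sent, w in weights.items():
    subject = max(w, key=w.get)
    groups.setdefault(subject, []).append(sent)

# proportional quota: each subject's share of a fixed summary budget,
# with at least one sentence per subject so minor subjects survive
total = sum(len(g) for g in groups.values())
budget = 2
quota = {s: max(1, round(budget * len(g) / total)) for s, g in groups.items()}

# draw sentences per subject up to its quota (the described method would
# rank candidates by Sen2Vec similarity instead of taking the first ones)
summary = [sent for s, g in groups.items() for sent in g[:quota[s]]]
```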

