
Development of User Based Recommender System using Social Network for u-Healthcare (사회 네트워크를 이용한 사용자 기반 유헬스케어 서비스 추천 시스템 개발)

  • Kim, Hyea-Kyeong;Choi, Il-Young;Ha, Ki-Mok;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.181-199
    • /
    • 2010
  • With the rapid progress of population aging and growing interest in health, the demand for new healthcare services is increasing. Until now, healthcare services have provided post-hoc treatment in a face-to-face manner, but related research shows that proactive treatment is more effective for preventing disease. In particular, existing healthcare services have limitations in preventing and managing metabolic syndrome, a lifestyle disease, because its causes are rooted in life habits. With the advent of ubiquitous technology, patients with metabolic syndrome can improve life habits such as poor eating habits and physical inactivity, without the constraints of time and space, through u-healthcare services. Therefore, much u-healthcare research focuses on providing personalized healthcare services for preventing and managing metabolic syndrome. For example, Kim et al. (2010) proposed a healthcare model that provides customized calories and nutrition-factor ratios by analyzing the user's food preferences. Lee et al. (2010) suggested a customized diet recommendation service that considers basic information, vital signs, family history of diseases, and food preferences to prevent and manage coronary heart disease. And Kim and Han (2004) demonstrated that web-based nutrition counseling affects the food intake and lipid levels of patients with hyperlipidemia. However, existing u-healthcare research focuses on providing predefined, one-way u-healthcare services, so users tend to easily lose interest in improving their life habits. To solve this problem, this research suggests a u-healthcare recommender system based on the collaborative filtering principle and a social network.
This research follows the principle of collaborative filtering but preserves local networks (consisting of small groups of similar neighbors) for target users to recommend context-aware healthcare services. Our research consists of the following five steps. In the first step, a user profile is created using the usage history data for life-habit improvement. A set of users known as neighbors is then formed by the degree of similarity between users, calculated with the Pearson correlation coefficient. In the second step, the target user obtains service information from his/her neighbors. In the third step, a top-N recommendation list of services is generated for the target user. In making the list, we use multi-filtering based on the user's psychological context information and body mass index (BMI) for detailed recommendation. In the fourth step, the personal information, i.e., the service usage history, is updated when the target user uses a recommended service. In the final step, the social network is reformed to continually provide qualified recommendations. For example, neighbors may be excluded from the social network if the target user does not like the recommendation list received from them. That is, this step updates each user's neighbors locally, always maintaining up-to-date local neighbors to give context-aware recommendations in real time. The characteristics of our research are as follows. First, we develop a u-healthcare recommender system for improving life habits such as poor eating habits and physical inactivity. Second, the proposed recommender system uses autonomous collaboration, which prevents user dropout and keeps users interested in improving their life habits. Third, the reformation of the social network is automated to maintain recommendation quality.
Finally, this research implemented a mobile prototype system using Java and Microsoft Access 2007 to recommend the prescribed foods and exercises for chronic disease prevention, which are provided by A university medical center. This research intends to prevent diseases such as chronic illnesses and to improve users' lifestyles by providing context-aware, personalized food and exercise services with the help of similar users' experience and knowledge. We expect that users of this system can improve their life habits with the help of handheld smartphones, because the system uses autonomous collaboration to arouse interest in healthcare.
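The neighbor-formation and top-N steps above can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: the user profiles, service names, and scoring rule (summing neighbors' ratings for unseen services) are assumptions, and the multi-filtering by psychological context and BMI, as well as the network reformation step, are omitted.

```python
import math

def pearson(a, b):
    """Pearson correlation between two users' service-usage ratings (dicts)."""
    common = set(a) & set(b)
    if len(common) < 2:
        return 0.0
    xs = [a[s] for s in common]
    ys = [b[s] for s in common]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)) * math.sqrt(sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def recommend_top_n(target, profiles, n=3, k=2):
    """Form the k most similar neighbors of `target`, then rank services
    the target has not used yet by the sum of neighbors' ratings."""
    others = {u: p for u, p in profiles.items() if u != target}
    sims = sorted(((pearson(profiles[target], p), u) for u, p in others.items()), reverse=True)
    neighbors = [u for s, u in sims[:k] if s > 0]  # keep only positively correlated users
    scores = {}
    for u in neighbors:
        for svc, rating in profiles[u].items():
            if svc not in profiles[target]:
                scores[svc] = scores.get(svc, 0.0) + rating
    return [svc for svc, _ in sorted(scores.items(), key=lambda kv: -kv[1])][:n]
```

Excluding disliked neighbors (the final step of the paper) would simply remove entries from each user's local neighbor list before the next call.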

Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.55-75
    • /
    • 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of data elements in a sequence is considered, so it can easily find simple sequential patterns but has a limited ability to find the more interesting sequential patterns widely used in real-world applications. One of the essential research topics that compensates for this limit is weighted sequential pattern mining, in which not only the generation order of data elements but also their weights are considered, to obtain more interesting sequential patterns. Recently, data has increasingly taken the form of continuous data streams rather than finite stored data sets in various application fields, and the database research community has begun focusing its attention on processing over data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once, and memory usage should remain finitely bounded even though new data elements are continuously generated. Moreover, newly generated data elements should be processed as fast as possible to produce an up-to-date analysis result that can be instantly utilized upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering the changes in the form of data generated in real-world application fields, much research has been actively performed to find various kinds of knowledge embedded in data streams.
Such research mainly focuses on efficient mining of frequent itemsets and sequential patterns over data streams, which have proven useful in conventional data mining over finite data sets. In addition, mining algorithms have been proposed to efficiently reflect changes of data streams over time in their mining results. However, they have targeted naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, taking no interest in mining novel patterns that better express the characteristics of target data streams. Therefore, defining novel interesting patterns and developing a mining method for them is a valuable research topic in the field of mining data streams. This paper proposes a gap-based weighting approach for a sequential pattern and a mining method for weighted sequential patterns over sequence data streams via this weighting approach. A gap-based weight of a sequential pattern can be computed from the gaps between data elements in the sequential pattern, without any pre-defined weight information. That is, the gaps between data elements in each sequential pattern, as well as their generation orders, are used to compute the weight of the sequential pattern, which helps to obtain more interesting and useful sequential patterns. Since most computer application fields now generate data in the form of data streams rather than finite data sets, the proposed method mainly focuses on sequence data streams.
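The core idea of a gap-based weight can be illustrated as follows. The decay formula $1/(1+\text{gap})$ below is an assumed example, not the paper's exact definition: it merely captures the principle that elements occurring close together in a sequence yield a weight near 1, while widely separated occurrences decay toward 0.

```python
def find_occurrence(sequence, pattern):
    """Greedily find the first occurrence of `pattern` as a subsequence of
    `sequence`; return the list of matched positions, or None if absent."""
    positions, start = [], 0
    for item in pattern:
        try:
            idx = sequence.index(item, start)
        except ValueError:
            return None
        positions.append(idx)
        start = idx + 1
    return positions

def gap_based_weight(sequence, pattern):
    """Illustrative gap-based weight: average of 1/(1+gap) over the gaps
    between consecutive matched elements (assumed formula)."""
    positions = find_occurrence(sequence, pattern)
    if positions is None:
        return 0.0
    if len(positions) == 1:
        return 1.0
    gaps = [b - a - 1 for a, b in zip(positions, positions[1:])]
    return sum(1.0 / (1 + g) for g in gaps) / len(gaps)
```

In a stream setting, these weights would be accumulated incrementally per pattern as new elements arrive, rather than recomputed over stored sequences.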

A Case Study on Forecasting Inbound Calls of Motor Insurance Company Using Interactive Data Mining Technique (대화식 데이터 마이닝 기법을 활용한 자동차 보험사의 인입 콜량 예측 사례)

  • Baek, Woong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.99-120
    • /
    • 2010
  • Due to the widespread customer use of non-face-to-face services, there have been many attempts to improve customer satisfaction using the huge amounts of data accumulated through non-face-to-face channels. A call center is usually regarded as one of the most representative non-face-to-face channels. Therefore, it is important that a call center has enough agents to offer a high level of customer satisfaction. However, employing too many agents increases the operational costs of a call center through higher labor costs. Therefore, predicting and calculating the appropriate size of a call center's human resources is one of the most critical success factors of call center management. For this reason, most call centers currently establish a WFM (Workforce Management) department to estimate the appropriate number of agents, directing much effort toward predicting the volume of inbound calls. In real-world applications, inbound call prediction is usually performed based on the intuition and experience of a domain expert. In other words, a domain expert usually predicts the volume of calls by calculating the average number of calls over certain periods and adjusting the average according to his/her subjective estimation. However, this kind of approach has a fundamental limitation: the result of prediction may be strongly affected by the expert's personal experience and competence. It is often the case that one domain expert predicts inbound calls quite differently from another if the two experts hold different opinions on selecting influential variables and on the priorities among those variables. Moreover, it is almost impossible to logically clarify the process of an expert's subjective prediction. Currently, to overcome the limitations of subjective call prediction, most call centers are adopting a WFMS (Workforce Management System) package in which experts' best practices are systematized.
With a WFMS, a user can predict the volume of calls by calculating the average calls for each day of the week, excluding some eventful days. However, a WFMS requires too much capital during the early stage of system establishment. Moreover, it is hard to reflect new information in the system when factors affecting the volume of calls change. In this paper, we attempt to devise a new model for predicting inbound calls that is not only theoretically grounded but also easily applicable to real-world applications. Our model was mainly developed with the interactive decision tree technique, one of the most popular techniques in data mining. Therefore, we expect that our model can predict inbound calls automatically based on historical data while utilizing experts' domain knowledge during tree construction. To analyze the accuracy of our model, we performed intensive experiments on a real case from one of the largest car insurance companies in Korea. In the case study, the prediction accuracy of the two devised models and the traditional WFMS is analyzed with respect to various allowable error rates. The experiments reveal that our two data mining-based models outperform the WFMS in predicting the volume of accident calls and fault calls in most experimental situations examined.
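The split-selection step at the heart of a regression tree for call volumes can be sketched as below. This is an illustration under assumptions, not the authors' model: the features (day of week, an eventful-day flag) and the data are hypothetical, and the "interactive" part of the technique would let the WFM expert accept or override each proposed split during tree construction.

```python
def variance(ys):
    """Population variance of a list of call counts."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(rows, ys, feature_names):
    """Return (gain, feature_name, value) for the equality split that most
    reduces the variance of daily call volumes; None if no valid split."""
    best = None
    base = variance(ys)
    for f, name in enumerate(feature_names):
        for v in set(r[f] for r in rows):
            left = [y for r, y in zip(rows, ys) if r[f] == v]
            right = [y for r, y in zip(rows, ys) if r[f] != v]
            if not left or not right:
                continue
            gain = base - (len(left) * variance(left) + len(right) * variance(right)) / len(ys)
            if best is None or gain > best[0]:
                best = (gain, name, v)
    return best
```

A full tree would recurse on each side of the chosen split; prediction at a leaf is then the mean call volume of the historical days that fall into it.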

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of data, which has a significant influence on the performance of sentence classification. Higher-dimensional data requires many computations, which can cause high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve model performance. Diverse methods have been proposed, from merely lessening the noise of data, such as misspellings or informal text, to incorporating semantic and syntactic information. Moreover, the representation and selection of text features affect classifier performance in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. For improving performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once a feature selection algorithm marks certain words as unimportant, we assume that words similar to those words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec.
To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words that have comparatively low information gain values from the raw text and form word embeddings. Second, we additionally remove words that are similar to the words with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets, and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes, with a ratio of helpful votes over 70%, were classified as helpful reviews; Yelp shows only the number of helpful votes. We extracted 100,000 reviews that received more than five helpful votes, using random sampling among 750,000 reviews. Minimal preprocessing was applied to each dataset, such as removing numbers and special characters from the text data. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that used all the words. We showed that one of the proposed methods is better than the embeddings with all the words: by removing unimportant words, we obtain better performance. However, removing too many words lowered performance. For future research, diverse preprocessing methods and in-depth analysis of word co-occurrence for measuring similarity between words should be considered. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, making it possible to explore the combinations of word embedding methods and elimination methods.
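The two elimination rules above (drop low-information-gain words, then drop words similar to them) can be sketched as follows. This is a toy illustration: the documents, labels, 2-dimensional "embeddings", and thresholds are all invented for the example, and a real pipeline would use learned Word2Vec vectors.

```python
import math

def entropy(pos, neg):
    """Binary entropy of a (positive, negative) label count pair."""
    total = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

def information_gain(docs, labels, word):
    """Information gain of observing `word` for a binary label, computed
    from presence/absence of the word in each document (a set of words)."""
    n = len(docs)
    with_w = [y for d, y in zip(docs, labels) if word in d]
    without = [y for d, y in zip(docs, labels) if word not in d]
    base = entropy(sum(labels), n - sum(labels))
    cond = 0.0
    for part in (with_w, without):
        if part:
            cond += len(part) / n * entropy(sum(part), len(part) - sum(part))
    return base - cond

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / den if den else 0.0

def eliminate(vocab, docs, labels, embeddings, ig_threshold, sim_threshold):
    """Rule 1: drop words with low information gain. Rule 2: additionally
    drop words whose embedding is similar to an already-dropped word."""
    low = {w for w in vocab if information_gain(docs, labels, w) < ig_threshold}
    similar = {w for w in vocab - low
               if any(cosine(embeddings[w], embeddings[u]) > sim_threshold for u in low)}
    return vocab - low - similar
```

The surviving vocabulary would then be used to filter the text before embedding and classification.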

Change in Growth of Chrysanthemum zawadskii var. coreanum as Effected by Different Green Roof System under Rainfed Conditions (빗물활용 옥상녹화 식재지반에 따른 한라구절초의 생육 변화)

  • Ju, Jin-Hee;Kim, Won-Tae;Yoon, Yong-Han
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.39 no.1
    • /
    • pp.117-123
    • /
    • 2011
  • This study aims to suggest a suitable soil thickness and soil mixture ratio for a green roof system by examining the growth of Chrysanthemum zawadskii var. coreanum under different rainwater-fed green roof systems. Experimental planting grounds were prepared with different soil thicknesses (15cm, 25cm) and soil mixing ratios (SL, $P_7P_1L_2$, $P_6P_2L_2$, $P_5P_3L_2$, $P_4P_4L_2$), and Chrysanthemum zawadskii var. coreanum, which has excellent drought tolerance and ornamental value, was planted. Changes in plant height, green coverage ratio, chlorophyll content, fresh weight, dry weight, and dry T/R ratio were investigated from April to October 2009. For the 15cm soil thickness, plant height did not differ significantly by soil mixing ratio; however, it was generally higher in the amended soil mixtures $P_7P_1L_2$, $P_6P_2L_2$, $P_5P_3L_2$ and $P_4P_4L_2$ than in the sandy loam soil (SL). For the 25cm soil thickness, plant height differences were in the order SL < $P_7P_1L_2$, $P_6P_2L_2$, $P_5P_3L_2$ < $P_4P_4L_2$. The green coverage ratio did not differ by soil mixing ratio at the 15cm soil thickness, though it was lowest in SL. At the 25cm soil thickness, the green coverage ratio was 86-89%, a good coverage rate overall. Chlorophyll content at the 15cm soil thickness was highest in the SL treatment and lowest in the $P_5P_3L_2$ treatment; at 25cm, it was highest in $P_4P_4L_2$ and SL and lowest in $P_7P_1L_2$. Fresh weight and dry weight were larger in the 25cm soil. Therefore, the growth of Chrysanthemum zawadskii var. coreanum in a rainwater-fed green roof system was better in 25cm-deep soil than in 15cm, and in PPL-amended soil than in sandy loam.

Conflicts between the Conservation and Removal of the Modern Historic Landscapes - A Case of the Demolition Controversy of the Japanese General Government Building in Seoul - (근대 역사 경관의 보존과 철거 - 구 조선총독부 철거 논쟁을 사례로 -)

  • Son, Eun-Shin;Pae, Jeong-Hann
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.46 no.4
    • /
    • pp.21-35
    • /
    • 2018
  • In recent years, there has been a tendency to reuse 'landscapes of memory,' including industrial heritages, modern cultural heritages, and post-industrial parks, as public spaces in many cities. Among the various types of landscapes, 'modern historic landscapes,' formed in the 19th and 20th centuries, are those where the debate between conservation and removal is most frequent, as evaluations and perceptions of modern history change. This study examines conflicts between conservation and removal around modern historic landscapes and explores the value judgment criteria and the formation process of those landscapes, as highlighted by the demolition controversy over the old Japanese general government building in Seoul, which was dismantled in 1995. First, this study reviews newspaper articles, television news and debate programs from 1980-1999, and articles related to the controversy over the Japanese general government building. It then identifies the following six factors as the main issues of the demolition controversy: symbolic location; discoveries of, and responses to, new historical facts; the reaction and intervention of a related country; financial conditions; the function and usage of the landscape; and changes in urban, historical, and architectural policies. Based on these issues, this study examines the conflict between symbolic values, which play an important role in the formation of modern historic landscapes and determine conservation or removal, and the utility of functional values, which solve problems and respond to criticisms arising in the process of forming the modern historic landscape. Notably, although decisions on the conservation or removal of modern historic landscapes have shifted with changing perceptions of modern history, the most decisive factor remains symbolic value.
Today, the modern historic landscape is an important site for urban design and still carries historical issues to be agreed upon and addressed. This study has contemporary significance in that it divides the many values of modern historic landscapes into symbolic and functional values, evaluates them, and reviews the social context behind them.

The effect of cavity wall property on the shear bond strength test using iris method (Iris 법을 이용한 전단접착강도 측정에서 와동벽의 영향)

  • Kim, Dong-Hwan;Bae, Ji-Hyun;Cho, Byeong-Hoon;Lee, In-Bog;Baek, Seung-Ho;Ryu, Hyun-Mi;Son, Ho-Hyun;Um, Chung-Moon;Kwon, Hyuck-Choon
    • Restorative Dentistry and Endodontics
    • /
    • v.29 no.2
    • /
    • pp.170-176
    • /
    • 2004
  • Objectives: In the unique metal iris method, the interfacial gap that develops at the cavity floor due to the cavity wall property during composite resin polymerization might affect the nominal shear bond strength values. The aim of this study is to evaluate whether the iris method reduces cohesive failure in the substrates and whether the cavity wall property affects shear bond strength tests using the iris method. Materials and Methods: The occlusal dentin of 64 extracted human molars was randomly divided into 4 groups to simulate two different cavity wall properties (metal and dentin iris) and two different materials ($ONE-STEP^{\circledR}$ and $ALL-BOND^{\circledR}$ 2) for each wall property. After positioning the iris on the dentin surface, composite resin was packed and light-cured. After 24 hours, the shear bond strength was measured at a crosshead speed of 0.5 mm/min. Fracture analysis was performed using a microscope and SEM. The data were analyzed statistically by two-way ANOVA and t-test. Results: The shear bond strength with the metal iris was significantly higher than that with the dentin iris (p=0.034). With $ONE-STEP^{\circledR}$, the shear bond strength with the metal iris was significantly higher than that with the dentin iris (p=0.005), but not with $ALL-BOND^{\circledR}$ 2 (p=0.774). The incidence of cohesive failure was much lower than in other shear bond strength tests that did not use the iris method. Conclusions: The iris method may significantly reduce cohesive failures in the substrates. Depending on the bonding agent system, the shear bond strength was affected by the cavity wall property.

MICROLEAKAGE OF COMPOSITE RESIN RESTORATION ACCORDING TO THE NUMBER OF THERMOCYCLING (열순환 횟수에 따른 복합레진의 미세누출)

  • Kim, Chang-Youn;Shin, Dong-Hoon
    • Restorative Dentistry and Endodontics
    • /
    • v.32 no.4
    • /
    • pp.377-384
    • /
    • 2007
  • Present tooth bonding systems can be categorized into total-etching bonding systems (TE) and self-etching bonding systems (SE) based on their treatment of the smear layer. The purposes of this study were to compare the effectiveness of these two systems and to evaluate the effect of the number of thermocycles on the microleakage of class V composite resin restorations. Forty class V cavities in total were prepared on single-rooted bovine teeth and randomly divided into four experimental groups by bonding system and number of thermocycles. Half of the cavities were filled with Z250 following the use of the TE system, Single Bond, and the other twenty cavities were filled with Metafil and AQ Bond, an SE system. All composite restoratives were cured using a light curing unit (XL2500, 3M ESPE, St. Paul, MN, USA) for 40 seconds at a light intensity of $600mW/cm^2$. Teeth were stored in distilled water for one day at room temperature and were finished and polished with the Sof-Lex system. Half of the teeth were thermocycled 500 times and the other half 5,000 times between $5^{\circ}C$ and $55^{\circ}C$, for 30 seconds at each temperature. Teeth were isolated with two layers of nail varnish, except for the restoration surface and 1 mm of surrounding margins. Electrical conductivity (${\mu}A$) was recorded in distilled water by an electrochemical method. Microleakage scores were compared and analyzed using two-way ANOVA at the 95% level. From this study, the following results were obtained: there was no interaction between the bonding system and the number of thermocycles (p = 0.485), and microleakage was not affected by the number of thermocycles (p = 0.814). However, the composite restorations of Metafil and AQ Bond, the SE system, showed less microleakage than those of Z250 and Single Bond, the TE system (p = 0.005).

A Fast Algorithm for Computing Multiplicative Inverses in $GF(2^m)$ using Factorization Formula and Normal Basis (인수분해 공식과 정규기저를 이용한 $GF(2^m)$ 상의 고속 곱셈 역원 연산 알고리즘)

  • 장용희;권용진
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.5_6
    • /
    • pp.324-329
    • /
    • 2003
  • Public-key cryptosystems such as Diffie-Hellman key distribution and elliptic curve cryptosystems are built on the operations defined in $GF(2^m)$: addition, subtraction, multiplication, and multiplicative inversion. These operations should be computed at high speed in order to implement such cryptosystems efficiently. Among them, multiplicative inversion is the most time-consuming and has therefore been the object of much investigation. Fermat's theorem says $\beta^{-1} = \beta^{2^m-2}$, where $\beta^{-1}$ is the multiplicative inverse of $\beta \in GF(2^m)$. Therefore, to compute the multiplicative inverse of arbitrary elements of $GF(2^m)$, it is most important to reduce the number of multiplications by decomposing $2^m-2$ efficiently. Among the many algorithms on this subject, the algorithm proposed by Itoh and Tsujii [2] reduced the required number of multiplications to O(log m) by using a normal basis. Furthermore, a few papers have presented algorithms improving on Itoh and Tsujii's, but they have demerits such as complicated decomposition processes [3,5]. In this paper, for the case of $2^m-2$, which is mainly used in practical applications, an efficient algorithm is proposed for computing the multiplicative inverse at high speed by using both the factorization formula $x^3-y^3=(x-y)(x^2+xy+y^2)$ and a normal basis. The number of multiplications of the algorithm is smaller than that of the algorithm proposed by Itoh and Tsujii, and the algorithm decomposes $2^m-2$ more simply than other proposed algorithms.
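As a concrete illustration of Fermat-based inversion $\beta^{-1} = \beta^{2^m-2}$, the sketch below computes inverses in $GF(2^8)$ by plain square-and-multiply over a polynomial-basis representation, using the AES polynomial $x^8+x^4+x^3+x+1$ as an illustrative modulus. It does not implement the paper's normal-basis factorization shortcut, which further reduces the multiplication count; it only shows the baseline exponentiation that such decompositions improve upon.

```python
def gf_mul(a, b, poly=0x11B, m=8):
    """Carry-less multiplication modulo an irreducible polynomial (here the
    AES polynomial x^8 + x^4 + x^3 + x + 1 for GF(2^8))."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) current shift of a
        b >>= 1
        a <<= 1
        if a >> m:          # degree reached m: reduce modulo poly
            a ^= poly
    return r

def gf_inv_fermat(beta, poly=0x11B, m=8):
    """Multiplicative inverse via Fermat: beta^(2^m - 2), square-and-multiply."""
    if beta == 0:
        raise ZeroDivisionError("0 has no multiplicative inverse")
    e = (1 << m) - 2
    result, base = 1, beta
    while e:
        if e & 1:
            result = gf_mul(result, base, poly, m)
        base = gf_mul(base, base, poly, m)
        e >>= 1
    return result
```

This naive loop performs on the order of 2m multiplications; decompositions of $2^m-2$ such as Itoh-Tsujii's reduce the count of general multiplications to O(log m), which is the point of the paper.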

A Hardware Implementation of the Underlying Field Arithmetic Processor based on Optimized Unit Operation Components for Elliptic Curve Cryptosystems (타원곡선을 암호시스템에 사용되는 최적단위 연산항을 기반으로 한 기저체 연산기의 하드웨어 구현)

  • Jo, Seong-Je;Kwon, Yong-Jin
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.1
    • /
    • pp.88-95
    • /
    • 2002
  • In recent years, the security of hardware and software systems has become one of the most essential factors of a safe network community. As elliptic curve cryptosystems, proposed independently by N. Koblitz and V. Miller in 1985, require fewer bits for the same security as existing cryptosystems such as RSA, there is a net reduction in cost, size, and time. In this thesis, we propose an efficient hardware architecture of an underlying field arithmetic processor for elliptic curve cryptosystems, and a useful method for implementing the architecture, especially the multiplicative inverse operator over $GF(2^m)$, on FPGA and furthermore VLSI, where the method is based on optimized unit operation components. We optimize the arithmetic processor for speed so that it has a reasonable number of gates to implement. The proposed architecture can be applied to any finite field $F_{2^m}$. According to the simulation results, although the number of gates is increased by a factor of 8.8, the multiplication speed and inversion speed have been improved by factors of 150 and 480, respectively, compared with the thesis presented by Sarwono Sutikno et al. [7]. The designed underlying arithmetic processor can also be applied to implementing other crypto-processors and various finite field applications.