• Title/Summary/Keyword: User evaluation


A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for their future use or sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher ranking to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? The resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more high-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are somewhat heterogeneous (i.e., users, resources, and tags). Therefore, uniform application of the voting notion of PageRank and HITS based on the links to a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in an active or a passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to the evaluation method of humans, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subject and object. In the case that many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collections. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should have more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and that can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems to be preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, applying the time concept to the expertise weights, as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground in situations where the domain consists of more than two classes, or where other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms. While the matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed with the ranking factors combined, and our approach can work even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
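
As a rough illustration of the mutual-reinforcement idea described above (user expertise and document quality reinforcing each other, with recent data weighted more heavily), here is a minimal Python sketch. The bipartite user-document graph, the exponential time-decay form, and all names are illustrative assumptions, not the paper's actual class-oriented property-weight scheme.

```python
import math
from collections import defaultdict

def rank_mutual_reinforcement(assignments, now, half_life=30.0, iters=50):
    """Toy mutual-reinforcement ranking on a user-document graph.

    assignments: list of (user, doc, day) tuples, i.e. "user added doc on day".
    Edge weights decay with age so that recent data counts more
    (an assumed exponential decay with the given half-life, in days).
    Returns (user_expertise, doc_quality) dictionaries.
    """
    # Build weighted edges: weight halves every `half_life` days.
    edges = [(u, d, 0.5 ** ((now - day) / half_life)) for u, d, day in assignments]

    users = {u for u, _, _ in edges}
    docs = {d for _, d, _ in edges}
    expertise = {u: 1.0 for u in users}
    quality = {d: 1.0 for d in docs}

    for _ in range(iters):
        # Document quality is reinforced by the expertise of users holding it.
        new_quality = defaultdict(float)
        for u, d, w in edges:
            new_quality[d] += w * expertise[u]
        # User expertise is reinforced by the quality of documents they hold.
        new_expertise = defaultdict(float)
        for u, d, w in edges:
            new_expertise[u] += w * new_quality[d]
        # Normalize to keep the scores bounded across iterations.
        qn = math.sqrt(sum(v * v for v in new_quality.values())) or 1.0
        en = math.sqrt(sum(v * v for v in new_expertise.values())) or 1.0
        quality = {d: v / qn for d, v in new_quality.items()}
        expertise = {u: v / en for u, v in new_expertise.items()}
    return expertise, quality

if __name__ == "__main__":
    data = [("alice", "doc1", 95), ("alice", "doc2", 90),
            ("bob", "doc1", 50), ("carol", "doc3", 99)]
    exp, qual = rank_mutual_reinforcement(data, now=100)
    print(sorted(qual.items(), key=lambda kv: -kv[1]))
```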

Demonstration of Disaster Information and Evacuation Support Model for the Safety Vulnerable Groups (안전취약계층을 위한 재난정보 및 대피지원 모델 실증)

  • Son, Min Ho;Kweon, Il Ryong;Jung, Tae Ho;Lee, Han Jun
    • Journal of the Society of Disaster Information
    • /
    • v.17 no.3
    • /
    • pp.465-486
    • /
    • 2021
  • Purpose: Since most disaster information systems are centered on non-disabled people, there is a lack of disaster information delivery systems for the safety vulnerable groups, such as the disabled, the elderly, and children, who are relatively vulnerable to disasters. The purpose of the service is to improve the safety of the disabled and the elderly by eliminating blind spots of informatization and establishing customized disaster information services to respond to disasters through IoT-based integrated control technology. Method: The models at the core of this study are the disaster alert propagation model and the evacuation support model, which are developed by reflecting the behavioral characteristics of the disabled and the elderly in the event of a disaster. The disaster alert propagation model spreads disaster situations collected using IoT technology, and the evacuation support model uses geomagnetic field-based measuring technology to identify the user's indoor location and help the disabled and the elderly evacuate safely. Results: The demonstration of the model yielded a qualitative evaluation of indoor location accuracy, the suitability of evacuation route guidance, and satisfaction with the services from the user's perspective. Conclusion: Disaster information and evacuation support services for the safety vulnerable groups were implemented as a mobile app for model verification. The demonstration was limited to a fire situation and conducted with experts in the related fields and with disabled participants. The service was rated "satisfactory" for the adequacy of disaster information delivery and evacuation support, while functional satisfaction and the user UI were rated "normal" due to the nature of the pilot model. Through this, the disaster information and evacuation support services presented in this study were evaluated to help the safety vulnerable groups evacuate from a disaster faster, without missing the golden time of disaster evacuation.

A Study of 'Hear Me Later' VR Content Production to Improve the Perception of the Visually-Impaired (시각 장애인에 대한 인식 개선을 위한 'Hear me later' VR 콘텐츠 제작 연구)

  • Kang, YeWon;Cho, WonA;Hong, SeungA;Lee, KiHan;Ko, Hyeyoung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.3
    • /
    • pp.99-109
    • /
    • 2020
  • This study was conducted to improve education methods for raising awareness of the visually impaired. 'Hear me later' was designed and implemented as VR content that allows users to experience the eyes and environment of the visually impaired. The main target ranges from middle and high school students to young adults in their twenties. The content consists of a student user's daily life: waking up at home in the morning, going to school, taking classes at school, and returning home late in the dark. In addition, 10 quests are placed on each map to induce users' participation and activity. These quests are everyday activities for non-disabled people, but they let users experience how uncomfortable the same activities are for visually impaired people. In order to verify the effect of 'Hear me later', the perception of visually impaired people held by 8 participants in their early teens to early 20s was measured through pre- and post-evaluation of the VR content experience. As a result, it was found that in the post-evaluation of the VR content experience, the perception of the visually impaired improved by 30% compared to the pre-evaluation. In particular, changes in misunderstandings and prejudice toward the visually impaired were remarkable. Through this study, the possibility of a VR-based disability experience education program that can freely construct space-time and maximize the experience was verified. In addition, it laid the foundation for expanding such programs to various fields of disability awareness improvement.

Recent Progress in Air-Conditioning and Refrigeration Research : A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2015 (설비공학회 분야의 최근 연구 동향 : 2015년 학회지 논문에 대한 종합적 고찰)

  • Lee, Dae-Young;Kim, Sa Ryang;Kim, Hyun-Jung;Kim, Dong-Seon;Park, Jun-Seok;Ihm, Pyeong Chan
    • Korean Journal of Air-Conditioning and Refrigeration Engineering
    • /
    • v.28 no.6
    • /
    • pp.256-268
    • /
    • 2016
  • This article reviews the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2015. It is intended to understand the status of current research in the areas of heating, cooling, ventilation, sanitation, and indoor environments of buildings and plant facilities. Conclusions are as follows. (1) Research works on thermal and fluid engineering were carried out in the areas of flow, heat and mass transfer, cooling and heating, air-conditioning, renewable energy systems, and the flow inside building rooms. Research dealing with air-conditioning machines and with fire and smoke exhaust decreased. CFD seems to be spreading to more research areas. (2) Research works in the heat transfer area were carried out in the categories of heat transfer characteristics, pool boiling and condensing heat transfer, and industrial heat exchangers. Research on heat transfer characteristics included the economic analysis of GHG emission, a micro-channel heat exchanger, the effect of rib angle on thermal performance, the airside performance of fin-and-tube heat exchangers, the theoretical analysis of a rotary heat exchanger, a heat exchanger in a cryogenic environment, and the performance of a cross-flow-type indirect evaporative cooler made of paper/plastic film. In the area of pool boiling and condensing, the bubble jet loop heat pipe was studied. In the area of industrial heat exchangers, research was performed on a fin-tube heat exchanger, the KSTAR PFC and vacuum vessel at the baking phase, the performance of a small-sized dehumidification rotor, the design of the gas-injection port of an asymmetric scroll compressor, and the effect of slot discharge-angle change on the exhaust efficiency of a range hood system with an air curtain. (3) In the field of refrigeration, various studies were carried out in the categories of refrigeration cycle, alternative refrigeration/energy systems, and system control. In the refrigeration cycle category, a cold-climate heat pump system, CO2 cascade systems, ejector cycles, and a PCM-based continuous heating system were investigated. In the alternative refrigeration/energy system category, a polymer adsorption heat pump, an alcohol absorption heat pump, and a desiccant-based hybrid refrigeration system were investigated. In the system control category, turbo-refrigerator capacity controls and absorption chiller fault diagnostics were investigated. (4) In the building mechanical system research field, eighteen studies were reported on achieving effective design of mechanical systems and on maximizing the energy efficiency of buildings. The topics of the studies included energy performance, HVAC systems, ventilation, renewable energies, and piping in buildings. Proposed designs and performance tests using numerical methods and experiments provide useful information and key data which can improve the energy efficiency of buildings. (5) The field of architectural environment was mostly focused on indoor environment and building energy. The main research on indoor environment was related to a dimming lighting control system applying user and location awareness technology, the lighting performance evaluation of light-shelves, the evaluation of air quality improvement through analysis of ventilation efficiency, and the evaluation of the airtightness of sliding and LS window systems.
The studies on building energy addressed the energy saving estimation of existing buildings, the development of a model to predict heating energy usage in domestic city areas, and the performance evaluation of cooling with economizer control. Studies were also performed on the experimental measurement of weight variation and thermal conductivity in polyurethane foam, the development of a flame spread prevention system for sandwich panels, and the utilization of heat from a waste-incineration facility in large-scale horticultural facilities.

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users among the overflowing contents is becoming more important as content generation continues to grow. In the flood of information, efforts are being made to better reflect the intention of the user in search results, rather than treating the information request as a simple string. Also, large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, which provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to benefit from text data analysis because it constantly generates new information, and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data by hand becomes more difficult as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, problem definition for automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. So, in order to overcome the limits described above and improve the semantic performance of stock-related information searching, this study attempts to extract knowledge entities by using a neural tensor network and to evaluate their performance. Different from other references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous researches and to enhance the effectiveness of the model. From these processes, this study has the following three significances. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, the possibility of performance evaluation is presented through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study to confirm the usefulness of the presented model, experts' reports about 30 individual stocks, which are the top 30 items based on frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, which account for about 55% of the total, are designated as the training set, and the other 45% of the reports are designated as the testing set. Before constructing the model, all reports of the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities based on appearance frequency are selected and vectorized using one-hot encoding. After that, by using the neural tensor network, the same number of score functions as stocks are trained.
Thus, if a new entity from the testing set appears, we can calculate its score by putting it into every score function, and the stock of the function with the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its prediction power and determine whether the score functions are well constructed by calculating the hit ratio for all reports of the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy for the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on conducting the research. Looking at the prediction performance of the model for each stock, only 3 stocks, which are LG ELECTRONICS, KiaMtr, and Mando, show much lower performance than the average. This result may be due to the interference effect with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find out key entities, or their combinations, which are necessary to search related information in accordance with the user's investment intention. Graph data is generated by using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. From the empirical test, we confirm the effectiveness of the presented model as described above. However, there also exist some limits and things to complement. Representatively, the phenomenon that the model performance is especially bad for only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used for the purpose of matching new text information semantically with the related stocks.
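
For readers unfamiliar with the neural tensor network scoring form, the following minimal Python sketch shows how an entity vector could be scored against per-stock score functions and the highest-scoring stock selected, mirroring the prediction step described above. The dimensions, the random stand-in parameters, and the generic NTN form (bilinear tensor term plus linear layer) are illustrative assumptions, not the trained model from the paper.

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Generic neural tensor network score: u^T tanh(e1^T W[k] e2 + V [e1;e2] + b).

    e1, e2 : entity vectors of dimension d
    W      : tensor of shape (k, d, d)
    V      : matrix of shape (k, 2d)
    b, u   : vectors of shape (k,)
    """
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)   # e1^T W[k] e2 for each slice k
    linear = V @ np.concatenate([e1, e2]) + b       # standard layer on [e1; e2]
    return float(u @ np.tanh(bilinear + linear))

def predict_stock(entity_vec, stock_vecs, params):
    """Score the entity against every stock's score function; return the best stock."""
    scores = {s: ntn_score(entity_vec, sv, *params[s]) for s, sv in stock_vecs.items()}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    d, k = 8, 4
    rng = np.random.default_rng(0)
    stocks = ["STOCK_A", "STOCK_B"]
    # Randomly initialized parameters stand in for trained per-stock score functions.
    params = {s: (rng.normal(size=(k, d, d)), rng.normal(size=(k, 2 * d)),
                  rng.normal(size=k), rng.normal(size=k)) for s in stocks}
    stock_vecs = {s: rng.normal(size=d) for s in stocks}
    entity = rng.normal(size=d)   # e.g. a one-hot or embedded entity vector
    best, scores = predict_stock(entity, stock_vecs, params)
    print(best, scores)
```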

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions for the system to continually operate after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes in the case wherein the stored data are distributed to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide but can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of NoSQL databases are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require a real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation proves the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
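
A small pymongo sketch of the schema-free storage and per-unit-time aggregation described above. The connection string, database and collection names, and field names are illustrative assumptions, not the proposed system's actual configuration.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Unstructured log records with differing fields are stored in one collection
# (MongoDB's free schema), then aggregated per hour and type, the kind of
# per-unit-time summary the log graph generator module would plot.
client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["client_business"]

records = [
    {"type": "transfer", "branch": "A01", "elapsed_ms": 120,
     "ts": datetime(2013, 11, 1, 9, 0, tzinfo=timezone.utc)},
    {"type": "login", "channel": "web",   # different fields are fine: schema-free
     "ts": datetime(2013, 11, 1, 9, 1, tzinfo=timezone.utc)},
]
logs.insert_many(records)

# Group log counts per hour and per log type.
pipeline = [
    {"$group": {
        "_id": {"type": "$type",
                "hour": {"$dateToString": {"format": "%Y-%m-%d %H", "date": "$ts"}}},
        "count": {"$sum": 1},
    }},
    {"$sort": {"_id.hour": 1}},
]
for row in logs.aggregate(pipeline):
    print(row)
```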

Different Look, Different Feel: Social Robot Design Evaluation Model Based on ABOT Attributes and Consumer Emotions (각인각색, 각봇각색: ABOT 속성과 소비자 감성 기반 소셜로봇 디자인평가 모형 개발)

  • Ha, Sangjip;Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.55-78
    • /
    • 2021
  • To solve complex and diverse social problems and ensure the quality of life of individuals, social robots that can interact with humans are attracting attention. In the past, robots were recognized as beings that provide labor, as they were put into industrial sites on behalf of humans. However, the concept of today's robot has been extended to social robots that coexist with humans and enable social interaction, with the advent of smart technology, which is considered an important driver in most industries. Specifically, there are service robots that respond to customers, robots that have the purpose of edutainment, and emotional robots that can interact with humans intimately. However, the popularization of robots is not yet felt despite the modern ICT service environment and the 4th industrial revolution. Considering that social interaction with users is an important function of social robots, not only the technology of the robots but also other factors should be considered. The design elements of the robot are more important than other factors in making consumers actually purchase a social robot. In fact, existing studies on social robots are at the level of proposing a "robot development methodology" or testing the effects provided by social robots to users in a piecemeal way. On the other hand, consumer emotions felt from the robot's appearance have an important influence in the process of forming users' perception, reasoning, evaluation, and expectation. Furthermore, they can affect attitude toward robots, good feeling, performance reasoning, etc. Therefore, this study aims to verify the effect of the appearance of a social robot and consumer emotions on consumers' attitude toward the social robot. At this time, a social robot design evaluation model is constructed by combining heterogeneous data from different sources. Specifically, three quantitative indicators for the appearance of social robots from the ABOT Database are included in the model. The consumer emotions of social robot design have been collected through (1) the existing design evaluation literature, (2) online buzz such as product reviews and blogs, and (3) qualitative interviews on social robot design. Later, we collected the scores of consumer emotions and attitudes toward various social robots through a large-scale consumer survey. First, we derived six major dimensions of consumer emotions from 23 detailed emotions through a dimension reduction methodology. Then, statistical analysis was performed to verify the effect of the derived consumer emotions on attitude toward social robots. Finally, moderated regression analysis was performed to verify the effect of the quantitatively collected indicators of social robot appearance on the relationship between consumer emotions and attitudes toward social robots. Interestingly, several significant moderation effects were identified; these effects are visualized with two-way interaction plots to interpret them from multidisciplinary perspectives. This study makes a theoretical contribution by empirically verifying all stages from technical properties to consumers' emotions and attitudes toward social robots by linking data from heterogeneous sources. It has practical significance in that the results help to develop design guidelines based on consumer emotions in the design stage of social robot development.
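
The moderated regression step can be illustrated with a short statsmodels sketch: an interaction term between an emotion dimension and an appearance indicator tests whether the indicator moderates the emotion-attitude relationship. The column names and simulated data are illustrative assumptions, not the study's variables or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: one derived emotion dimension, one ABOT-style appearance
# indicator (here called "human_likeness"), and an attitude score built with a
# known interaction effect so the example has something to recover.
rng = np.random.default_rng(1)
n = 200
emotion = rng.normal(size=n)
human_likeness = rng.uniform(0, 1, size=n)
attitude = (0.5 * emotion + 0.3 * human_likeness
            + 0.4 * emotion * human_likeness
            + rng.normal(scale=0.5, size=n))
df = pd.DataFrame({"attitude": attitude, "emotion": emotion,
                   "human_likeness": human_likeness})

# "emotion * human_likeness" expands to both main effects plus the interaction;
# the emotion:human_likeness coefficient carries the moderation effect.
model = smf.ols("attitude ~ emotion * human_likeness", data=df).fit()
print(model.summary().tables[1])
```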

Numerical Calculations of IASCC Test Worker Exposure using Process Simulations (공정 시뮬레이션을 이용한 조사유기응력부식균열 시험 작업자 피폭량의 전산 해석에 관한 연구)

  • Chang, Kyu-Ho;Kim, Hae-Woong;Kim, Chang-Kyu;Park, Kwang-Soo;Kwak, Dae-In
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.6
    • /
    • pp.803-811
    • /
    • 2021
  • In this study, the exposure of IASCC test workers was evaluated by applying process simulation technology. Using DELMIA Version 5, a commercial process simulation code, the IASCC test facility, hot cells, and workers were modeled, IASCC test activities were implemented, and the cumulative exposure of workers passing through the dose-distributed space could be evaluated through user coding. To simulate the behavior of workers, human manikins with 200 or more degrees of freedom, imitating the human musculoskeletal system, were applied. To calculate a worker's exposure, the coordinates, start time, and retention period for each posture were extracted by accessing the sub-information of the human manikin task, and the cumulative exposure was calculated by multiplying the spatial dose value by the posture retention time. The spatial dose for the exposure evaluation was calculated using MCNP6 Version 1.0, and the calculated spatial dose was embedded into the process simulation domain. Comparing the exposure evaluation by process simulation with a typical exposure evaluation, the annual exposures for daily test work at the regular entrance were predicted at similar levels, 0.388 mSv/year and 1.334 mSv/year, respectively. Exposure assessment was also performed on special tasks performed in areas with high spatial doses; tasks with high exposure could be easily identified, and work improvement plans could be derived intuitively through visualization of the human manikin postures and the spatial dose of the tasks.
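
The cumulative-exposure bookkeeping described above (spatial dose value multiplied by posture retention time, summed over postures) can be sketched as follows. The dose-rate grid, coordinates, and retention times are made-up illustrative values, not results from the study.

```python
# Spatial dose map: (x, y, z) grid point -> dose rate in mSv/h (illustrative).
dose_map = {
    (0, 0, 0): 0.002,
    (1, 0, 0): 0.010,
    (1, 1, 0): 0.050,
}

def nearest_dose(pos):
    """Look up the dose rate at the nearest grid point (simple nearest-neighbour)."""
    nearest = min(dose_map, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, pos)))
    return dose_map[nearest]

# Each posture: (coordinates, retention time in hours), the kind of data the
# study extracts from the human-manikin task sub-information.
postures = [
    ((0.1, 0.0, 0.0), 0.50),   # waiting near the regular entrance
    ((1.0, 0.1, 0.0), 0.25),   # handling a specimen
    ((1.0, 0.9, 0.0), 0.10),   # close to the hot cell window
]

# Cumulative exposure = sum over postures of dose rate x retention time.
cumulative = sum(nearest_dose(pos) * t for pos, t in postures)
print(f"cumulative exposure for this task: {cumulative:.4f} mSv")
```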

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet, the danger of overflowing personal information is increasing because the data retrieved by the sensors usually contain privacy information. Various technical characteristics of context-aware applications have more troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as those over the unrestricted availability of context information, have also increased. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of a need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors of context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have only focused on a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify factors of information privacy in most studies despite the limitation of users' knowledge and experience of context-aware computing technology. To date, since context-aware services have not been widely deployed on a commercial scale yet, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Nevertheless, conducting a survey, assuming that the participants have sufficient experience or understanding of the technologies shown in the survey, may not be entirely valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they only consider location information as context data). A better understanding of information privacy concern in context-aware personalized services is highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from the experts and to produce a rank-order list. It, therefore, lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners who have expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for the Delphi study proposed by Okoli and Pawlowski. This involved three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapted from Okoli and Pawlowski, we outlined the process of administering the study. We performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern. To do so, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factors to determine the final sub-factors from the candidate factors. The sub-factors were found from the literature survey. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important factors among the main factors and sub-factors, respectively. Additional important sub-factors included diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. However, our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on specific technical characteristics in context-aware personalized services. Since a context-aware service differs in its technical characteristics compared to other services, we selected specific characteristics that had a higher potential to increase users' privacy concerns. Secondly, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to what extent users have privacy concerns.
The reason a traditional questionnaire survey was not selected is the absolute lack of user understanding of and experience with the new technology in context-aware personalized services. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensory networks as the most important factors among the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, along with the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Considering the results of the sub-factor evaluation, additional studies would be necessary on approaches for reducing users' privacy concerns toward technological characteristics such as a highly identifiable level of identical data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked at the next highest level of importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display of services to users in context-aware personalized services, toward the anywhere-anytime-any device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
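
The concordance analysis mentioned above is commonly done with Kendall's coefficient of concordance W in Delphi studies; the abstract does not name its exact statistic, so the following Python sketch of W is an illustrative assumption.

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for m raters ranking n items.

    ranks: array of shape (m, n), where ranks[i, j] is rater i's rank of item j.
    W = 12 * S / (m^2 * (n^3 - n)), with S the sum of squared deviations of the
    column rank sums from their mean.  W close to 1 means strong consensus.
    (No correction for tied ranks in this sketch.)
    """
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    col_sums = ranks.sum(axis=0)
    s = ((col_sums - col_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

if __name__ == "__main__":
    # Three experts ranking five privacy-concern factors (1 = most important).
    panel = [
        [1, 2, 3, 4, 5],
        [1, 3, 2, 5, 4],
        [2, 1, 3, 4, 5],
    ]
    print(f"Kendall's W = {kendalls_w(panel):.3f}")
```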

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.109-131
    • /
    • 2014
  • As the demand for nuclear power plant equipment continuously grows worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is dramatically increasing, preadjudication (or prescreening, for simplicity) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. Toward alleviating the problem of relying on costly human experts only, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares the features with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method was conducted using TF-IDF, which is a widely used de facto standard method for representative keyword extraction in text mining. TF (Term Frequency) is based on the frequency count of the term within a document, showing how important the term is within a document, while IDF (Inverse Document Frequency) is based on the infrequency of the term within a document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, which is based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student who majors in nuclear engineering. Moreover, we propose a new approach for computing nuclear document similarity along with a new framework of document analysis. The proposed algorithm of nuclear document similarity considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in order to derive the final score (γ) for the decision of whether the presented case concerns a strategic material or not. The final score (γ) represents a document similarity between the past cases and the new case. The score is induced not only by exploiting conventional TF-IDF, but also by utilizing a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are considered the most similar cases with regard to the new case, and provides them with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are more worth looking up, so that the user can make a proper decision at relatively lower cost. The evaluation of the system has been conducted by developing a prototype and testing it with field data.
The system workflows and outcomes have been verified by the field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials and that can be considered a meaningful example of a knowledge service application.
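
A minimal sketch of how a final score γ could combine a TF-IDF document-to-document similarity (α) with a document-to-nuclear-system similarity (β) and retrieve the top-3 most similar past cases. The weighted-sum combination, the weight, and the toy β values are illustrative assumptions; the paper defines its own combination and nuclear system similarity.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy case base of past export cases and assumed beta scores (document-to-
# nuclear-system similarity) for each of them; all values are illustrative.
case_base = [
    "zirconium alloy cladding tube export for fuel assembly",
    "pressure vessel steel forging for reactor coolant system",
    "general purpose stainless steel pipe for water treatment",
]
beta = np.array([0.9, 0.8, 0.1])

new_case = "export of cladding tubes made of zirconium alloy"

# alpha: conventional TF-IDF cosine similarity between the new case and each past case.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(case_base + [new_case])
alpha = cosine_similarity(doc_vectors[-1], doc_vectors[:-1]).ravel()

w = 0.7                              # assumed weight on the document similarity
gamma = w * alpha + (1 - w) * beta   # final score for each past case

# Retrieve the top-3 most similar past cases for the reviewer to inspect.
top3 = np.argsort(gamma)[::-1][:3]
for rank, idx in enumerate(top3, start=1):
    print(f"{rank}. gamma={gamma[idx]:.3f}  case: {case_base[idx]}")
```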