• Title/Summary/Keyword: Current-sharing

Search results: 845 items (processing time: 0.026 seconds)

온라인 장례 플랫폼의 초기 사용자 경험 분석 및 서비스 개발 제안 (Analysis of the First Time User Experience of the online memorial platform and suggestion of service developments)

  • 이주은;황진도
    • 서비스연구 / Vol. 14, No. 1 / pp. 44-62 / 2024
  • The growth of contact-free ("ontact") services driven by COVID-19 and the social issue of eco-friendly funeral culture have raised awareness of the need for a new culture of online funerals. Domestic institutions and companies have made several attempts to promote online funeral services, but the effects have been limited. The purpose of this study is to identify usability problems through an analysis of the first-time user experience of an online funeral platform and to propose service developments that can improve the platform's accessibility and usability. To this end, this study reviewed the literature on user experience (UX), OOBE, and FTUE theory to identify the factors that affect the accessibility and usability of online funeral platforms, and selected the representative online funeral app '메모리얼' (Memorial) as the subject of the experiment. Before analyzing the first-time user experience, the information architecture (IA) of the app was compared with that of similar services in order to understand its UX service characteristics. In addition, ten participants with no prior experience of online funeral platforms performed tasks corresponding to the Unpack-Setup/Configure-First Use stages, and the experimental process was expressed as a UX Curve to identify the points and factors where negative experiences occurred. The main problem factors were unnecessary UI elements, requests for sensitive personal information at the sign-up stage, and a lack of immersion in the service; suggested improvements included strengthening community features to promote emotional sharing and smooth communication among users. Reflecting these insights, a service development that can solve the problems of the existing app service was proposed. To verify the validity of the developed prototype, interviews with three service design experts were conducted. This study was carried out to contribute to the qualitative improvement and activation of the recently emerging online funeral services, and it is significant in that it clarifies the current state of online funeral services and identifies the factors needed to strengthen service accessibility and usability.

U-마켓에서의 사용자 정보보호를 위한 매장 추천방법 (A Store Recommendation Procedure in Ubiquitous Market for User Privacy)

  • 김재경;채경희;구자철
    • Asia Pacific Journal of Information Systems / Vol. 18, No. 3 / pp. 123-145 / 2008
  • Recently, as information and communication technology develops, the ubiquitous environment is being discussed from diverse perspectives. A ubiquitous environment is one in which data can be transferred through networks regardless of physical space, virtual space, time, or location. Realizing such an environment requires Pervasive Sensing technology, which recognizes users' data without a border between physical and virtual space. In addition, the latest technologies are necessary, such as Context-Awareness, which constructs the context around the user by sharing the data obtained through Pervasive Sensing, and linkage technology, which prevents information loss across wired and wireless networks and databases. In particular, Pervasive Sensing is regarded as an essential technology that enables user-oriented services by recognizing users' needs even before they ask. These technologies give the ubiquitous environment a number of characteristics, such as ubiquity, abundance of data, mutuality, high information density, individualization, and customization. Among them, information density refers to the amount and quality of accessible information; through Pervasive Sensing, information is stored in bulk with its quality ensured. Using this, companies can provide personalized contents (or information) to a target customer. Above all, a growing number of studies address recommender systems that provide what customers need even when they do not explicitly express their needs. Recommender systems are well known for their positive effect of enlarging selling opportunities and reducing customers' search costs, since they find and provide information in advance according to customers' traits and preferences in a commerce environment. Recommender systems have proven their usefulness through various methodologies and experiments in many different fields since the mid-1990s. Most research on recommender systems so far has taken products or information in the Internet or mobile context as its object, but there is not enough research on recommending an adequate store to customers in a ubiquitous environment. In a ubiquitous environment, customers' behaviors can be tracked even when they are purchasing in an offline marketplace, in the same way as in an online market space. Unlike the existing Internet space, in a ubiquitous environment there is growing interest in stores that provide information according to customers' traffic lines. In other words, the same product can be purchased in several different stores, and the preferred store can differ among customers according to personal preferences such as the traffic line between stores, location, atmosphere, quality, and price. Krulwich (1997) developed Lifestyle Finder, which recommends a product and a store by using demographic and purchasing information generated in Internet commerce. Fano (1998) created Shopper's Eye, an information-providing system that shows information about the store closest to the customer's present location when the customer has sent a to-buy list. Sadeh (2003) developed MyCampus, which recommends appropriate information and a store in accordance with the schedule saved in a customer's mobile device. Keegan and O'Hare (2004) proposed EasiShop, which provides suitable store information, including price, after-sales service, and accessibility, after analyzing the to-buy list and the customer's current location. However, Krulwich (1997) does not reflect the characteristics of physical space because it is based on the online commerce context, Keegan and O'Hare (2004) only provide information about stores related to a product, and Fano (1998) does not fully consider the relationship between the preference toward stores and the store itself. The most recent of these, Sadeh (2003), experimented on a campus with a recommender system that reflects situation and preference information as well as the characteristics of physical space. Yet there is a potential problem, since these studies rely on customers' location and preference information, which is connected to the invasion of privacy. The invasion of privacy and personal information in a ubiquitous environment is a primary point of controversy, according to Al-Muhtadi (2002), Beresford and Stajano (2003), and Ren (2006); in addition, individuals want to remain anonymous to protect their personal information, as noted by Srivastava (2000). Therefore, in this paper we suggest a methodology to recommend stores in a U-market, on the basis of the ubiquitous environment, without using personal information, in order to protect individual information and privacy. The main idea behind the suggested methodology is the Feature Matrices model (FM model; Shahabi and Banaei-Kashani, 2003), which uses clusters of customers' similar transaction data, similar to Collaborative Filtering. Unlike Collaborative Filtering, however, this methodology overcomes the problems of personal information and privacy because it does not know exactly who the customer is. The methodology is compared with a single-trait model (vector model) based on visitor logs, to examine the actual improvement of the recommendation when context information is used. Since it is not easy to obtain real U-market data, we experimented with factual data from a real department store, with context information added. The recommendation procedure for the U-market proposed in this paper is divided into four major phases. The first phase collects and preprocesses data for analyzing the shopping patterns of customers; the traits of shopping patterns are expressed as feature matrices of N dimensions. In the second phase, similar shopping patterns are grouped into clusters and the representative pattern of each cluster is derived; the distance between shopping patterns is calculated by the Projected Pure Euclidean Distance (Shahabi and Banaei-Kashani, 2003). The third phase finds a representative pattern that is similar to the target customer, while the shopping information of the customer is traced and saved dynamically. In the fourth phase, the next store is recommended based on the physical distance between the stores of the representative pattern and the present location of the target customer. In this research, we evaluated the accuracy of the recommendation method based on factual data derived from a department store. Because of the technological difficulty of real-time tracking, we extracted purchasing-related information and added context information to each transaction. As a result, recommendation based on the FM model, which uses purchasing and context information, is more stable and accurate than that of the vector model. In addition, the recommendation results become more precise as more shopping information is accumulated. Realistically, because of the limitations of realizing a ubiquitous environment, we were not able to reflect all kinds of context, but a more explicit analysis is expected to become attainable once a practical system is embodied.
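As a rough illustration of the four-phase procedure summarized above, the following Python sketch clusters toy shopping-pattern vectors, matches a target customer to the nearest representative pattern, and recommends the nearest unvisited store. The data, cluster count, and scoring rule are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the four-phase store recommendation flow described above.
# Data shapes and scoring are illustrative assumptions, not the paper's actual code.
import numpy as np
from sklearn.cluster import KMeans

# Phase 1: shopping patterns as feature vectors (rows = customers, columns = store-visit counts)
patterns = np.array([
    [3, 0, 1, 2],
    [2, 1, 0, 3],
    [0, 4, 2, 0],
    [1, 3, 2, 0],
])

# Phase 2: group similar patterns into clusters and keep a representative (centroid) per cluster
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(patterns)
representatives = kmeans.cluster_centers_

# Phase 3: match the target customer's dynamically traced (partial) pattern
# to the closest representative pattern by Euclidean distance
target = np.array([2, 0, 1, 1])
closest = representatives[np.argmin(np.linalg.norm(representatives - target, axis=1))]

# Phase 4: among stores favored by the representative pattern but not yet visited,
# recommend the one that is physically closest to the customer's current location
store_locations = np.array([[0, 0], [5, 1], [2, 4], [1, 1]])  # assumed (x, y) per store
current_location = np.array([1, 0])
unvisited = np.where(target == 0)[0]
scores = closest[unvisited] / (1 + np.linalg.norm(store_locations[unvisited] - current_location, axis=1))
print("recommended store index:", unvisited[np.argmax(scores)])
```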

국가 예방접종 인터넷정보시스템 개발을 위한 의원정보시스템의 예방접종 모듈 평가연구 (Evaluation on the Immunization Module of Non-chart System in Private Clinic for Development of Internet Information System of National Immunization Programme in Korea)

  • 이무식;이건세;이석구;신의철;김건엽;나백주;홍지영;김윤정;박숙경;김보경;권윤형;김영택
    • 농촌의학ㆍ지역보건 / Vol. 29, No. 1 / pp. 65-75 / 2004
  • The immunization registration program, currently centered on public health centers, can be completed as a national immunization program only when it is expanded to private medical institutions nationwide, so that public and private institutions are interconnected and their data are integrated and operated together. It is therefore very important to anticipate the problems that may arise in computerizing an immunization program based on private medical institutions and to develop strategies for resolving them. In particular, analyzing the immunization modules of the non-chart systems used by private institutions, deriving the basic problems and improvement measures for immunization computerization, and providing baseline data for integrated operation with public health institutions is one of the core tasks of the immunization registration computerization project. To evaluate immunization-related programs, we examined four non-chart systems (insurance-claim and medical-record management programs) mainly used by private clinics (internal medicine, pediatrics, obstetrics and gynecology, and family medicine) and two immunization registration programs currently used by health centers: the health center information system of (주)포스테이터 and the immunization registration system of (주)미드컴퓨터. The immunization software currently used by health centers served as the standard against which the immunization-related programs and modules of the private institutions were analyzed; the modules were analyzed, based on the health center information system and the immunization registration program, according to the workflow, utilization, and functions of immunization services. For reception and personal registration, it appears desirable to supplement the basic data-entry items based on what private institutions record; in particular, personal identification details such as e-mail addresses need to be captured fully in preparation for later search, reminder, and recall functions. The pre-immunization screening section was missing from all programs, and the essential items of the screening questionnaire must be included. For individual immunization records and retrieval, the immunization chart format should be simplified, with a printable per-person immunization chart screen and the essential immunization history fields, so that it is convenient to use. Output of report forms for immunization targets and performance should be processed automatically through the non-chart system in accordance with the relevant legislation, and modules for automated output forms should be provided. Since submission of an immunization certificate is scheduled to become mandatory for elementary school entry from 2005, a certificate issuing function must be added. For transmission of immunization data, a function based on the EDI claim transmission used for medical insurance should be added, together with a later function for converting immunization data into a database and transmitting it. Reminder and recall functions are an essential part of the immunization registration program, and methods using e-mail, telephone, or mailed letters should be added. A vaccine registration and inventory management function is also needed, since it is linked to vaccine production by various pharmaceutical companies, efficient vaccine supply, and administration within the expiry date; in addition, records need to be classified by age, dose, and vaccine type.
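To make the reminder and recall requirement above more concrete, the following Python sketch checks an immunization registry for overdue next doses and produces recall messages. The field names, vaccine intervals, and notification text are hypothetical and are not taken from the evaluated systems.

```python
# Hypothetical sketch of a reminder/recall check for an immunization registry.
# Field names, vaccine schedule, and message format are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImmunizationRecord:
    name: str
    email: str          # captured at registration for reminder/recall use
    vaccine: str
    dose_number: int
    date_given: date

# Assumed minimum interval (in days) until the next dose, per vaccine
NEXT_DOSE_INTERVAL = {"DTaP": 56, "HepB": 28}

def due_for_recall(records: list[ImmunizationRecord], today: date) -> list[str]:
    """Return recall messages for records whose next dose is overdue."""
    messages = []
    for r in records:
        interval = NEXT_DOSE_INTERVAL.get(r.vaccine)
        if interval and today > r.date_given + timedelta(days=interval):
            messages.append(f"Recall {r.name} <{r.email}>: {r.vaccine} dose {r.dose_number + 1} is due")
    return messages

print(due_for_recall(
    [ImmunizationRecord("Hong Gildong", "hong@example.com", "DTaP", 1, date(2004, 1, 10))],
    today=date(2004, 4, 1),
))
```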


키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법 (A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model)

  • 조원진;노상규;윤지영;박진수
    • Asia Pacific Journal of Information Systems / Vol. 21, No. 1 / pp. 103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques; keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document; as a result, keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to the experimental results of Turney, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries by vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
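The five assignment steps above map naturally onto a few lines of code. The following Python fragment is a minimal sketch of that cosine-similarity matching, using assumed keyword weights and a toy document; it is illustrative only and is not the authors' IVSM implementation.

```python
# Minimal sketch of the IVSM-style matching described above (illustrative, not the paper's code).
import math
from collections import Counter

# Step 1: keyword sets represented as weighted term vectors (weights are assumed)
keyword_sets = {
    "logistics": {"shipping": 0.8, "port": 0.6, "distribution": 0.4},
    "retail":    {"store": 0.7, "customer": 0.7, "distribution": 0.3},
}

def vector_length(vec):
    return math.sqrt(sum(w * w for w in vec.values()))

def cosine(a, b):
    dot = sum(a.get(term, 0.0) * w for term, w in b.items())
    return dot / (vector_length(a) * vector_length(b) or 1.0)

# Steps 2-3: preprocess/parse the target document and build a term-frequency vector
document = "port logistics and shipping networks reshape distribution for every customer"
doc_vec = dict(Counter(document.lower().split()))

# Steps 4-5: rank keyword sets by cosine similarity and keep the best matches
ranked = sorted(keyword_sets.items(), key=lambda kv: cosine(doc_vec, kv[1]), reverse=True)
print([name for name, _ in ranked[:1]])  # keyword set with the highest similarity score
```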

폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근 (A Folksonomy Ranking Framework: A Semantic Graph-based Approach)

  • 박현정;노상규
    • Asia Pacific Journal of Information Systems / Vol. 21, No. 2 / pp. 89-116 / 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or for sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign a higher rank to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm; two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS, based on links, to a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard to each class is more reasonable. This is similar to the evaluation method of humans, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can also check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the assignment order. This idea comes from studies in psychology in which expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his or her collection; such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should carry more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and that can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data, whose ranking results can be predicted, into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction appears preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking; the expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms: while the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with ours. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
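To make the combination of property weights and time weights more concrete, here is a small Python sketch of an iterative, direction-agnostic (mutual-reinforcement) scoring loop over a toy user-resource-tag graph. The graph, weight values, and exponential decay are assumptions for illustration and do not reproduce the authors' algorithm or its expertise weighting.

```python
# Illustrative sketch of iterative graph-based scoring with property and time weights.
# The toy graph, weight values, and decay function are assumptions, not the paper's algorithm.
import math
from datetime import date

# Edges: (source entity, target entity, property weight, date of the interaction)
edges = [
    ("user1", "doc1", 0.8, date(2011, 5, 1)),
    ("user2", "doc1", 0.8, date(2010, 1, 1)),
    ("user2", "doc2", 0.8, date(2011, 4, 1)),
    ("doc1", "tag1", 0.5, date(2011, 5, 1)),
    ("doc2", "tag1", 0.5, date(2011, 4, 1)),
]

def time_weight(d, today=date(2011, 6, 1), half_life_days=180):
    """Newer interactions count more (assumed exponential half-life decay)."""
    return 0.5 ** ((today - d).days / half_life_days)

entities = {e for edge in edges for e in edge[:2]}
scores = {e: 1.0 for e in entities}

# Mutual reinforcement: propagate scores in both directions (ignoring edge direction),
# normalize, and repeat until the scores stabilize.
for _ in range(50):
    new = {e: 0.0 for e in entities}
    for src, dst, w, d in edges:
        contribution = w * time_weight(d)
        new[dst] += contribution * scores[src]
        new[src] += contribution * scores[dst]
    norm = math.sqrt(sum(v * v for v in new.values())) or 1.0
    scores = {e: v / norm for e, v in new.items()}

print(sorted(scores.items(), key=lambda kv: -kv[1]))
```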