• Title/Summary/Keyword: User Tagging

76 search results

A Lifelog Management System Based on the Relational Data Model and its Applications (관계 데이터 모델 기반 라이프로그 관리 시스템과 그 응용)

  • Song, In-Chul;Lee, Yu-Won;Kim, Hyeon-Gyu;Kim, Hang-Kyu;Haam, Deok-Min;Kim, Myoung-Ho
    • Journal of KIISE: Computing Practices and Letters / v.15 no.9 / pp.637-648 / 2009
  • As the cost of disks decreases, PCs are soon expected to be equipped with a disk of 1TB or more. Assuming that a single person generates 1GB of data per month, 1TB is enough to store data for the entire lifetime of a person. This has led to growing research on lifelog management, which deals with what people see and listen to in everyday life. Although many different lifelog management systems have been proposed, including those based on the relational data model, on ontology, and on file systems, each has advantages and disadvantages: those based on the relational data model provide good query processing performance but do not support complex queries properly; those based on ontology handle more complex queries but their performance is not satisfactory; those based on file systems support only keyword queries. Moreover, these systems lack support for lifelog group management and do not provide a convenient user interface for modifying and adding tags (metadata) to lifelogs for effective lifelog search. To address these problems, we propose a lifelog management system based on the relational data model. The proposed system models lifelogs with the relational data model and transforms queries on lifelogs into SQL statements, which results in good query processing performance. To overcome the disadvantage of not supporting complex queries properly, it also supports a simplified relationship query that finds a lifelog based on other lifelogs directly related to it. In addition, the proposed system supports the management of lifelog groups by providing ways to create, edit, search, play, and share them. Finally, it is equipped with a tagging tool that helps the user modify and add tags conveniently through the suggestion of various tags. This paper describes the design and implementation of the proposed system and its various applications.
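
For illustration, a minimal sketch of the relational modeling idea described in the abstract: lifelogs stored in a relational table and a simple lifelog query translated into a parameterized SQL statement. The schema, field names, and query shape are assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch: storing lifelogs in a relational table and translating a
# simple "find lifelogs by tag and time range" query into SQL.
# The schema and query shape are illustrative assumptions, not the paper's design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lifelog (
        id INTEGER PRIMARY KEY,
        kind TEXT,          -- e.g. 'photo', 'music', 'document'
        path TEXT,
        created_at TEXT,    -- ISO-8601 timestamp
        tag TEXT            -- simplified: one tag per row
    )
""")
conn.executemany(
    "INSERT INTO lifelog (kind, path, created_at, tag) VALUES (?, ?, ?, ?)",
    [
        ("photo", "/logs/p1.jpg", "2009-05-01T10:00:00", "trip"),
        ("music", "/logs/m1.mp3", "2009-05-01T10:05:00", "trip"),
        ("photo", "/logs/p2.jpg", "2009-06-02T09:00:00", "family"),
    ],
)

def find_lifelogs(tag, since, until):
    """Translate a high-level lifelog query into a parameterized SQL statement."""
    sql = ("SELECT kind, path, created_at FROM lifelog "
           "WHERE tag = ? AND created_at BETWEEN ? AND ?")
    return conn.execute(sql, (tag, since, until)).fetchall()

print(find_lifelogs("trip", "2009-05-01T00:00:00", "2009-05-31T23:59:59"))
```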

Inferring the Transit Trip Destination Zone of Smart Card User Using Trip Chain Structure (통행사슬 구조를 이용한 교통카드 이용자의 대중교통 통행종점 추정)

  • SHIN, Kangwon
    • Journal of Korean Society of Transportation / v.34 no.5 / pp.437-448 / 2016
  • Previous studies have suggested a transit trip destination inference method that constructs trip chains from incomplete (missing-destination) smart card data obtained from entry-based fare control systems. To explore the feasibility of this method, transit trip chains are constructed from the pre-paid smart card tagging data collected in Busan on weekdays in October 2014 by tracing card IDs, tagging times (boarding, alighting, transfer), and the trip linking distances between two consecutive transit trips in daily sequences. Assuming that most trips in the transit trip chains are linked successively, each transit trip's destination zone is inferred as the origin zone of the consecutive linking trip. Applying the model to the complete trips with observed OD reveals that about 82% of the inferred trip destinations match the observed trip destinations, and the inference error, defined as the distance between the inferred and observed alighting stops, is minimized when the trip linking distance is less than or equal to 0.5 km. When applying the model to the incomplete trips with missing destinations, the overall destination missing rate decreases from 71.40% to 21.74%, and approximately 77% of the remaining destination-missing trips are single transit trips for which the destination cannot be inferred. In addition, the model remarkably reduces the destination missing rate of multiple incomplete transit trips, from 69.56% to 6.27%. Spearman's rank correlation and chi-squared goodness-of-fit tests show that the ranks of transit trips in each zone are not significantly affected by the inferred trips, but the transit trip distributions based only on the small set of complete trips differ significantly from those using both complete and inferred trips. Therefore, it is concluded that the model can be applied to derive realistic transit trip patterns in cities with incomplete smart card data.
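
The core inference rule described in the abstract can be sketched as follows: within one card's daily trip sequence, the alighting stop of a trip is inferred from the boarding location of the next trip, accepted only when the linking distance is within 0.5 km. The data structures and the nearest-stop heuristic below are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch of trip-chain destination inference.
# For a card's daily trip sequence, the alighting stop of each trip is inferred
# as the stop on the current route nearest to the next trip's boarding location,
# accepted only when the linking distance is within a threshold (<= 0.5 km per
# the abstract). All field names are assumptions.

def dist_km(a, b):
    """Euclidean distance between two (x, y) points given in km."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def infer_alighting_stops(daily_trips, max_link_km=0.5):
    """daily_trips: list of dicts with 'route_stops' (list of (stop_id, xy))
    and 'board_xy', sorted by boarding time for a single card and day."""
    inferred = []
    for cur, nxt in zip(daily_trips, daily_trips[1:]):
        # candidate alighting stop: stop on the current route nearest to the
        # next boarding location
        stop_id, xy = min(cur["route_stops"],
                          key=lambda s: dist_km(s[1], nxt["board_xy"]))
        link = dist_km(xy, nxt["board_xy"])
        inferred.append(stop_id if link <= max_link_km else None)
    inferred.append(None)  # the last trip of the day has no following trip
    return inferred

# Example: two trips of one card; the second boards near stop "B2" of the first route.
trips = [
    {"route_stops": [("B1", (0.0, 0.0)), ("B2", (1.0, 0.0))], "board_xy": (0.0, 0.0)},
    {"route_stops": [("C1", (1.2, 0.1)), ("C2", (2.0, 0.0))], "board_xy": (1.1, 0.0)},
]
print(infer_alighting_stops(trips))  # ['B2', None]
```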

Effects of SNS user's Personality on Usage patterns and SNS commitment: A case study of Facebook (SNS 이용자의 성격이 SNS 이용유형과 SNS 몰입에 미치는 영향에 관한 연구: 페이스북을 중심으로)

  • Choi, Yena;Hwang, HaSung
    • Journal of Internet Computing and Services / v.17 no.3 / pp.95-106 / 2016
  • The purpose of this study was to examine how college students use Facebook and the extent to which they feel commitment while using it. The Big Five Personality Model has been widely used in psychology, and researchers have begun to explore the role of personality factors in an individual's use of social media such as Facebook, which has become one of the most popular social networking sites in the world. Therefore, the current study aims to specify the links between the Big Five Personality Model and both usage patterns and commitment on Facebook. Two hundred thirty-five college students participated in a survey, with the following results. First, participants high in extraversion and agreeableness were more likely to engage in information-sharing activities such as sharing posts with their friends and writing comments on others' posts. In addition, participants high in openness to experience, conscientiousness, and neuroticism were more likely to engage in information-producing activities, including creating events, groups, or public pages to meet people both online and offline. Second, regarding the relationship between personality traits and commitment to Facebook, the study found that extraversion and neuroticism were related to users' commitment. These findings are consistent with the existing literature, in which extraversion and neuroticism are the representative personality factors for media commitment. Specifically, those high in neuroticism were more likely to produce information, such as repeatedly posting photos or tagging their friends in posts, and were also more likely to feel commitment on Facebook. These findings confirm that personality is a highly relevant factor in determining individuals' behavior and degree of commitment on Facebook. Based on these findings, the implications and limitations of the study are discussed.

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.19-28 / 2012
  • After the Internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables audiences to interact naturally with the computing environment in exhibition spaces such as galleries, museums, and parks. There are also attempts to provide additional services based on the audience's location information, or to improve interaction between exhibits and audience by analyzing people's usage patterns. In order to provide multimodal interaction services to the audience at an exhibition, it is important to distinguish individuals and trace their location and route. For outdoor location tracking, GPS is widely used: it can obtain the real-time location of fast-moving subjects, so it is one of the key technologies in fields requiring location tracking services. However, because GPS relies on satellite signals, it cannot be used indoors, where those signals are not received. For this reason, indoor location tracking studies rely on short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. However, these technologies require the audience to carry additional sensor devices, and they become difficult and expensive to deploy as the density of the target area increases. In addition, a typical exhibition environment has many obstacles for the network, which degrades system performance. Above all, interaction methods based on such devices cannot provide a natural experience to users, and because these systems rely on sensor recognition, every user must carry a device, which limits the number of users who can use the system simultaneously. To make up for these shortcomings, this study proposes a technology that obtains accurate user location information through location mapping using smartphone Wi-Fi and 3D cameras. We use the signal strength of wireless LAN access points to develop an indoor location tracking system at lower cost: APs are cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the phone itself serves as the tracking device. We use the Microsoft Kinect sensor as the 3D camera. Kinect can discriminate depth and human information within its field of view, so it is suitable for extracting users' body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the base device for the location service, we eliminate the need for additional tagging devices and provide an environment in which multiple users can receive the interaction service simultaneously. 3D cameras located in each cell area obtain the exact location and status information of the users; they are connected to a Camera Client, which computes the mapping information aligned to each cell and obtains the users' exact location, status, and pattern information. The Camera Client's location mapping technique decreases the error rate of the indoor location service, increases the accuracy of individual discrimination within the area by using body information, and lays the foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive the appropriate interaction service through the main server.
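
The two-stage localization idea (coarse Wi-Fi cell assignment refined by a 3D-camera detection inside that cell) can be sketched roughly as follows; the data structures and selection rules are assumptions for illustration, not the authors' system.

```python
# Minimal sketch of the two-stage idea: coarse localization by the Wi-Fi cell
# (strongest access point) followed by refinement with a 3D-camera detection
# inside that cell. All data structures and values are illustrative assumptions.

def coarse_cell(rssi_by_ap, ap_to_cell):
    """Pick the cell of the access point with the strongest signal."""
    best_ap = max(rssi_by_ap, key=rssi_by_ap.get)
    return ap_to_cell[best_ap]

def refine_with_camera(cell_id, camera_detections):
    """Return the most confident camera detection reported for the given cell."""
    candidates = [d for d in camera_detections if d["cell"] == cell_id]
    return max(candidates, key=lambda d: d["confidence"], default=None)

# Example usage
ap_to_cell = {"AP-01": "cell-A", "AP-02": "cell-B"}
rssi = {"AP-01": -48, "AP-02": -71}               # dBm readings from a phone
detections = [
    {"cell": "cell-A", "user_xy": (2.3, 1.1), "confidence": 0.92},
    {"cell": "cell-B", "user_xy": (7.8, 4.0), "confidence": 0.88},
]
cell = coarse_cell(rssi, ap_to_cell)
print(cell, refine_with_camera(cell, detections))
```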

Current Trends for National Bibliography through Analyzing the Status of Representative National Bibliographies (주요국 국가서지 현황조사를 통한 국가서지의 최신 경향 분석)

  • Lee, Mihwa;Lee, Ji-Won
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.32 no.1 / pp.35-57 / 2021
  • This paper examines current trends in national bibliographies by analyzing representative national bibliographies through a literature review, analysis of national bibliography web pages, and a survey. First, to conform to the definition of a national bibliography as a record of a nation's publications, national bibliographies attempt to include a variety of materials from print to electronic resources, but in reality they cannot contain all materials, so there are exceptions. A general selection guide for national bibliography coverage is impossible to create; instead, a plan is needed that reflects national characteristics and establishes valid, comprehensive coverage based on analysis. Second, cooperation with publishers and libraries is under way to generate national bibliographies efficiently. For efficient generation, changes should be sought such as standardization and consistency, collection-level metadata description for digital resources, and creation of national bibliographies as linked data. Third, national bibliographies are published through national bibliographic online search systems, linked data search, MARC download using PDF, OAI-PMH, SRU, and Z39.50, and mass download in RDF/XML format, and they are either integrated with the online public access catalog or built separately. Above all, national bibliographies and online public access catalogs need to be built so that data can be reused through an integrated library system. Fourth, as differentiated functions, services such as user tagging and national bibliographic statistics are provided along with various browsing functions. In addition, analysis of national bibliographic big data, links to electronic publications, and mass download of linked data should be provided, and users' needs should be identified and reflected in open services in order to develop differentiated services. The current trends and considerations identified in this study can inform changes in national bibliographies at home and abroad.
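
As an illustration of one of the distribution channels mentioned above, here is a minimal sketch of harvesting bibliographic records over OAI-PMH; the endpoint URL is a placeholder, and a real national bibliography service defines its own base URL and metadata formats.

```python
# Minimal sketch of harvesting bibliographic records over OAI-PMH, one of the
# distribution channels mentioned above. The endpoint URL and metadata prefix
# are placeholders; a real national bibliography service defines its own.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://example.org/oai"   # hypothetical endpoint
OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def list_records(metadata_prefix="oai_dc", resumption_token=None):
    """Fetch one page of records; return (records, resumption token or None)."""
    params = {"verb": "ListRecords"}
    if resumption_token:
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = metadata_prefix
    url = OAI_ENDPOINT + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    records = root.findall(f".//{OAI_NS}record")
    token = root.find(f".//{OAI_NS}resumptionToken")
    return records, (token.text if token is not None else None)

# records, token = list_records()  # page through by passing the token back in
```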

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the proper documents; 2) determine whether each sentence is suitable for extracting information and derive its confidence; 3) based on the predicate feature, extract the information from the proper sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker. Compared with the baseline model, the proposed system shows a higher performance index. The contribution of this study is a sequence tagging model based on a bidirectional LSTM-CRF that uses the predicate feature of the query; with it, we developed a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types, and previous research has the limitation that performance degrades when extracting information from document types different from the training data. The proposed methodology proved to extract information effectively from various document types compared to the baseline model. In addition, by predicting the suitability of documents and sentences for extraction before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer. It is meaningful that we provide a method whose precision can be maintained even in a real web environment. Because the task targets unstructured documents on the real web, there is no guarantee that a given document contains the correct answer; when question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents without one. The policy of predicting the suitability of document and sentence extraction is meaningful in that it helps maintain extraction performance in such an environment. The limitations of this study and future research directions are as follows. First, there is a data preprocessing issue: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction results can suffer when the morphological analysis is inaccurate. To improve extraction performance, a more advanced morphological analyzer is needed. Second, there is the problem of entity ambiguity: the system cannot distinguish different entities that share the same name. If several people with the same name appear in the news, the system may not extract information about the intended one, and future research needs to take measures to disambiguate such entities. Third, there is the issue of evaluation query data. In this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the system, and built an evaluation data set of 2,800 documents (400 questions × 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news) by judging whether each contains a correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, but this is a costly activity that must be done manually; future research needs to evaluate the system on more queries. It is also necessary to develop a Korean benchmark data set for information extraction from multi-source web documents in order to build an environment in which results can be evaluated more objectively.
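
A minimal sketch of the sequence-tagging component in the spirit of the abstract: a bidirectional LSTM tagger over token IDs. The paper's model adds a CRF output layer and a predicate feature, both simplified away here; the tag set, vocabulary size, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a BiLSTM sequence tagger for answer-span extraction.
# The paper's model adds a CRF layer on top and uses a predicate feature of the
# query; both are simplified away here, and the vocabulary, tag set, and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, tagset_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim // 2, bidirectional=True,
                            batch_first=True)
        self.out = nn.Linear(hidden_dim, tagset_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)          # (batch, seq, emb_dim)
        h, _ = self.lstm(x)                # (batch, seq, hidden_dim)
        return self.out(h)                 # per-token tag scores (emissions)

# Toy usage: tags follow a BIO scheme over answer tokens.
tags = {"O": 0, "B-ANS": 1, "I-ANS": 2}
model = BiLSTMTagger(vocab_size=1000, tagset_size=len(tags))
tokens = torch.randint(0, 1000, (1, 12))   # one sentence of 12 token IDs
scores = model(tokens)
pred = scores.argmax(-1)                   # greedy decoding (CRF omitted)
print(pred.shape)                          # torch.Size([1, 12])
```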