• Title/Summary/Keyword: Multimedia Search System (멀티미디어 검색 시스템)


Enhanced Grid-Based Trajectory Cloaking Method for Efficiency Search and User Information Protection in Location-Based Services (위치기반 서비스에서 효율적 검색과 사용자 정보보호를 위한 향상된 그리드 기반 궤적 클로킹 기법)

  • Youn, Ji-Hye; Song, Doo-Hee; Cai, Tian-Yuan; Park, Kwang-Jin
    • KIPS Transactions on Computer and Communication Systems / v.7 no.8 / pp.195-202 / 2018
  • With the development of location-based applications such as smartphones and GPS navigation, research on protecting location and trajectory privacy has become active. To receive location-related services, users must disclose their exact location to the server. Such disclosure, however, exposes not only the user's current location but also the user's trajectory, which raises privacy concerns. Furthermore, users request from the server not only location information but also multimedia information about the location (photographs, reviews, etc.), which increases both the server's processing cost and the amount of data the user must receive. To solve these problems, this study proposes the EGTC (Enhanced Grid-based Trajectory Cloaking) technique. As in the existing GTC (Grid-based Trajectory Cloaking) technique, the EGTC method divides the user trajectory into grid cells according to the user privacy level (UPL) and creates a cloaking region in which a random query sequence is determined. In the next step, the sub-grid cell c(x, y) corresponding to the path along which the user intends to move is considered, and only the necessary information is received as an index. The proposed method guarantees trajectory privacy, as the existing GTC method does, while reducing the amount of information the user must receive. Experimental results demonstrate the effectiveness of the proposed method.
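
A minimal sketch of grid-based trajectory cloaking, illustrating the idea behind GTC/EGTC as described in the abstract rather than the authors' exact algorithm: each trajectory point is mapped to a grid cell c(x, y), and the true cell is hidden inside a cloaking region of k cells queried in random order. The cell size, the value of k, and the candidate-region shape are assumptions made only for illustration.

```python
# Illustrative sketch of grid-based trajectory cloaking (not the authors' exact
# EGTC algorithm). Cell naming c(x, y) follows the abstract; cell size, k, and the
# candidate-region shape are assumptions for illustration only.
import random
from typing import List, Tuple

Cell = Tuple[int, int]

def to_cell(lat: float, lon: float, cell_size: float) -> Cell:
    """Map a GPS point to its grid cell c(x, y) for a given cell size (in degrees)."""
    return (int(lon // cell_size), int(lat // cell_size))

def cloaking_region(cell: Cell, k: int) -> List[Cell]:
    """Hide the true cell inside a k-cell cloaking region and shuffle the query
    order so the server cannot tell which cell the user actually occupies."""
    x, y = cell
    neighbours = [(x + dx, y + dy) for dx in range(-1, 2) for dy in range(-1, 2)]
    region = random.sample(neighbours, k=min(k, len(neighbours)))
    if cell not in region:                       # the true cell must be covered
        region[random.randrange(len(region))] = cell
    random.shuffle(region)                       # randomized query sequence
    return region

# Example: cloak a short trajectory with 0.01-degree cells and a privacy level of k = 4.
trajectory = [(37.5665, 126.9780), (37.5670, 126.9795), (37.5681, 126.9810)]
for lat, lon in trajectory:
    c = to_cell(lat, lon, cell_size=0.01)
    print(c, "->", cloaking_region(c, k=4))
```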

A Study on the Operating Conditions of Lecture Contents in Contactless Online Classes for University Students (대학생 대상 비대면 온라인 수업에서의 강의 콘텐츠 운영 실태 연구)

  • Lee, Jongmoon
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.32 no.4 / pp.5-24 / 2021
  • The purpose of this study was to investigate and analyze the operating conditions of lecture contents in contactless online classes for university students. First, an analysis of the 93 responses showed that 93.3% of the respondents took either real-time online lectures (47.7%) or recorded video lectures (45.6%). Second, an analysis of the contents used as textbooks showed that e-books (materials) and paper books (materials) were used together (36.6%), or that e-books or other electronic materials were used (36.6% and 37.6%, respectively), in both liberal arts courses (47.3%) and major subjects (39.8%). In addition to textbooks, both major subjects and liberal arts courses made heavy use of web materials (47.6% and 40.5%, respectively) and YouTube materials (33.3% and 48.0%, respectively) as external materials. Third, to share lecture contents, both liberal arts and major subjects used 'electronic files in the form of PPT or text organized and written by instructors' (62.9% and 58.1%, respectively), 'internet materials' (16.7% and 19%, respectively), and 'paper books or materials' (10.4% and 12.3%, respectively). Regarding the on-screen presentation of lecture contents, 93.5% of the respondents were satisfied in major subjects and 90.2% in liberal arts courses. These results suggest developing multimedia-based lecture contents and an evaluation solution capable of real-time exam supervision; developing an assignment management system capable of AI-based plagiarism detection, assignment guidance, and assignment evaluation; and institutionalizing a solution to the copyright problems involved in digitizing lecture materials so that lectures can be delivered in a ubiquitous environment.

Implementation of Security Information and Event Management for Realtime Anomaly Detection and Visualization (실시간 이상 행위 탐지 및 시각화 작업을 위한 보안 정보 관리 시스템 구현)

  • Kim, Nam Gyun; Park, Sang Seon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.5 / pp.303-314 / 2018
  • In the past few years, government agencies and corporations have succumbed to stealthy, tailored cyberattacks designed to exploit vulnerabilities, disrupt operations, and steal valuable information. Security Information and Event Management (SIEM) is a useful tool against such cyberattacks. Commercial SIEM solutions are available, but they are expensive and difficult to use. We therefore implemented basic SIEM functions as a basis for research and development of future security solutions. We focus on the collection, aggregation, and analysis of real-time logs from hosts. The tool supports parsing and searching of log data for forensics. Beyond simple log management, it performs intrusion detection, prioritizes security events, and informs and alerts the user. We selected the Elastic Stack to process and visualize this security information. The Elastic Stack is a very useful tool for extracting information from large volumes of data, identifying correlations, and creating rich visualizations for monitoring. We also suggest using vulnerability check results in our SIEM. We attacked the host and obtained real-time user activity for monitoring, alerting, and security auditing based on this security information management.
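
A minimal sketch of the kind of log-collection and alerting flow described above, assuming the elasticsearch-py 8.x client and a local Elasticsearch node; the index name, field names, and failure threshold are illustrative assumptions, not details of the authors' implementation.

```python
# Sketch only: index one parsed host log event and flag hosts with an unusually
# high number of failed logins, in the spirit of the SIEM functions described above.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch  # assumes elasticsearch-py 8.x

es = Elasticsearch("http://localhost:9200")

# Collection: ship one parsed host log event (normally done by Beats or Logstash).
es.index(
    index="auth-logs",                      # illustrative index name
    document={
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "host": "web-01",
        "event": "ssh_login_failed",
        "user": "root",
        "source_ip": "203.0.113.7",
    },
    refresh=True,                           # make the event searchable immediately
)

# Analysis and alerting: count failed logins per host over the last 10 minutes
# and alert when a host exceeds a simple threshold.
THRESHOLD = 20
resp = es.search(
    index="auth-logs",
    size=0,
    query={
        "bool": {
            "filter": [
                {"term": {"event.keyword": "ssh_login_failed"}},
                {"range": {"@timestamp": {"gte": "now-10m"}}},
            ]
        }
    },
    aggs={"failed_by_host": {"terms": {"field": "host.keyword"}}},
)

for bucket in resp["aggregations"]["failed_by_host"]["buckets"]:
    if bucket["doc_count"] > THRESHOLD:
        print(f"ALERT: {bucket['key']} had {bucket['doc_count']} failed logins in 10 minutes")
```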

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho; Oh, Kyeong-Jin; Sean, Vi-Sal; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.203-219 / 2011
  • Previously, users simply watched video content without any specific requirements or purposes. Today, however, users watching a video try to learn and discover more about the things that appear in it. Therefore, with the increasing use of multimedia such as video, which is available not only on internet-capable devices such as computers but also on smart TVs and smartphones, the demand for finding multimedia or browsing information about objects of interest is spreading. To meet users' requirements, labor-intensive annotation of objects in video content is inevitable. For this reason, many researchers have actively studied methods of annotating the objects that appear in video. In keyword-based annotation, related information about an object appearing in the video content is added directly, and annotation data containing all of the related information about the object must be managed individually. Users have to enter all of this related information themselves. Consequently, when a user browses for information related to the object, only the limited resources that exist in the annotation data can be found. Moreover, placing annotations on objects requires a huge workload from the user. To reduce this workload and minimize the effort involved in annotation, existing object-based annotation approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, the wide variety of objects that appear in video content must all be detected and recognized, and fully automated annotation still faces difficulties. To overcome these difficulties, we propose a system consisting of two modules. The first is an annotation module that enables many annotators to collaboratively annotate the objects in video content so that the semantic data can be accessed through Linked Data. Annotation data managed by the annotation server are represented using an ontology so that the information can easily be shared and extended. Since the annotation data do not include all of the relevant information about an object, the objects that appear in the video content are simply connected to existing resources in Linked Data to obtain all of the related information. In other words, the annotation server stores only a URI and metadata such as position, time, and size; when the user needs other related information about the object, that information is retrieved from Linked Data through the object's URI. The second module enables viewers to browse interesting information about an object while watching the video, using the annotation data collaboratively generated by many users. With simple user interaction, a query is generated automatically, the related information is retrieved from Linked Data, and the additional information about the object is presented to the user. In the future Semantic Web environment, the proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device, such as a PC, smart TV, or smartphone.
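
A minimal sketch of the annotation idea described above, not the authors' system: an annotation stores only a Linked Data URI plus position, time, and size metadata, and further facts are retrieved from Linked Data (DBpedia here) through that URI. The field names, example URI, and SPARQL query are illustrative assumptions; the sketch uses the SPARQLWrapper package.

```python
# Sketch only: a URI-plus-metadata annotation record and a query that pulls the
# object's related information from Linked Data (DBpedia) via its URI.
from SPARQLWrapper import SPARQLWrapper, JSON

# Annotation record as it might be stored on the annotation server: URI + metadata only.
annotation = {
    "video_id": "sample-video-001",                       # illustrative identifier
    "uri": "http://dbpedia.org/resource/Eiffel_Tower",    # link into Linked Data
    "time": 73.5,                                         # seconds into the video
    "position": {"x": 120, "y": 80},                      # bounding-box origin (pixels)
    "size": {"w": 64, "h": 140},                          # bounding-box size (pixels)
}

def fetch_related_info(uri: str, limit: int = 3) -> list:
    """Retrieve English abstracts for the annotated object from DBpedia,
    with the query generated automatically from the stored URI."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?abstract WHERE {{
            <{uri}> dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "en")
        }} LIMIT {limit}
    """)
    results = sparql.query().convert()
    return [b["abstract"]["value"] for b in results["results"]["bindings"]]

# When the viewer selects the annotated region, related information is fetched and shown.
for text in fetch_related_info(annotation["uri"]):
    print(text[:200], "...")
```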