• Title/Summary/Keyword: 데이터 분산 서비스 (data distribution service)


Channel Sorting Based Transmission Scheme For D2D Caching Networks (채널 정렬을 활용한 D2D 캐싱 네트워크용 전송 기법)

  • Jeong, Moo-Woong;Ryu, Jong Yeol;Kim, Seong Hwan;Lee, Woongsup;Ban, Tae-Won
    • Journal of the Korea Institute of Information and Communication Engineering, v.22 no.11, pp.1511-1517, 2018
  • Mobile Device-to-Device (D2D) caching networks can transmit multimedia data to users directly, without passing through any network infrastructure, by storing multimedia contents that are popular among many mobile users in advance at caching server devices (CSDs) in a distributed manner. Thus, mobile D2D caching networks can significantly reduce backhaul traffic in wired networks and the service latency experienced by mobile users. In this paper, we propose an efficient transmission scheme that can enhance the transmission efficiency of mobile D2D caching networks by using multiple CSDs that cache popular contents. By sorting the multiple CSDs caching a content that a mobile user wants to receive according to their channel gains, the proposed scheme can significantly reduce algorithmic complexity compared to an optimal scheme based on brute-force search, and can also achieve much higher network transmission efficiency than the existing blanket and opportunistic transmission schemes.
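The sorting idea above can be sketched as follows: instead of a brute-force search over all 2^N subsets of CSDs, only the N prefixes of the gain-sorted order need to be evaluated. This is a minimal illustration of that complexity reduction; the rate function and per-CSD activation cost below are assumptions, not the paper's exact metric.

```python
import math

def best_prefix(gains, rate):
    """Evaluate only prefixes of the gain-sorted CSD list (N candidates)
    instead of all 2**N subsets, and return the best-scoring prefix."""
    order = sorted(gains, reverse=True)
    best, best_val = [], float("-inf")
    for k in range(1, len(order) + 1):
        val = rate(order[:k])
        if val > best_val:
            best, best_val = order[:k], val
    return best

# Illustrative metric: sum-rate minus a hypothetical per-CSD activation cost.
rate = lambda g: math.log2(1 + sum(g)) - 0.5 * len(g)
```

With gains `[0.1, 2.0, 1.0]` and this cost, the single strongest CSD wins; a cheaper activation cost would favor longer prefixes.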

A Model for Analyzing Time-Varying Passengers' Crowdedness Degree of Subway Platforms Using Smart Card Data (스마트카드자료를 활용한 지하철 승강장 동적 혼잡도 분석모형)

  • Shin, Seongil;Lee, Sangjun;Lee, Changhun
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.18 no.5, pp.49-63, 2019
  • Crowdedness management at subway platforms is essential to improve services, including the prevention of train delays and ensuring passenger safety. Establishing effective crowdedness mitigation measures for platforms requires accurate estimation of the congestion level. Conventional assessment has temporal and spatial constraints, since crowdedness on subway platforms is measured at certain locations only every 1-2 years by hand counting. However, smart cards generate real-time big data 24 hours a day and can be used to estimate congestion. This study proposes a model based on transit card data to estimate crowdedness dynamically. Crowdedness was defined as demand, which can be translated into passengers dynamically moving along a subway network. The trajectory of an individual passenger can be identified through this model. Passenger flow that concentrates at or disperses from a platform is also calculated every minute. Lastly, the platform congestion level is estimated based on the effective waiting area of each platform structure.
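The per-minute concentration/dispersion counting can be sketched minimally as below. The tap-record format is an assumption, and a real platform-level estimate would also need the passenger-trajectory inference the abstract describes.

```python
from collections import Counter

def minute_flows(tap_records):
    """Count passengers arriving at (tap-in) and leaving (tap-out) each
    station per minute from smart-card events.
    tap_records: iterable of (minute, station, kind), kind in {'in', 'out'}."""
    inflow, outflow = Counter(), Counter()
    for minute, station, kind in tap_records:
        if kind == "in":
            inflow[(station, minute)] += 1
        else:
            outflow[(station, minute)] += 1
    return inflow, outflow
```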

A Video Encryption Based Approach for Privacy Protection of Video Surveillance Service (개인정보보호를 위한 영상 암호화 아키텍처 연구)

  • Kim, Jeongseok;Lee, Jaeho
    • KIPS Transactions on Computer and Communication Systems, v.9 no.12, pp.307-314, 2020
  • Video surveillance services are widely deployed around our lives, and such services store sensitive data such as video streams in the cloud over the Internet or in a centralized data store in an on-premise environment. The main concern with these services is that users must trust the service provider regarding how securely the video and data are stored and handled, without any concrete evidence. In this paper, we propose an approach to protecting video with a PKI (public key infrastructure) combined with a blockchain network. The video is encrypted with a symmetric key, and the key is then shared through a blockchain network, taking advantage of the PKI mechanism. Therefore, the user can be assured that the sensitive data is kept secure and traceable throughout its lifecycle.
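The encrypt-then-share-the-key pattern can be illustrated with a stdlib-only toy: a SHA-256 counter-mode keystream stands in for a real cipher such as AES-GCM, and the final comment marks where the paper's PKI/blockchain key sharing would occur. This is a sketch of the pattern, not the authors' implementation, and the toy cipher is not secure for real use.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode: a toy stand-in for a real cipher (e.g. AES-GCM).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR stream cipher; applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

content_key = secrets.token_bytes(32)             # symmetric content key
ciphertext = xor_encrypt(b"frame-data", content_key)
# In the paper's design, content_key would itself be encrypted with the
# viewer's public key (PKI) and shared via a blockchain transaction,
# making the key exchange traceable over the data's lifecycle.
```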

Empirical Study for Causal Relationship between Weather and e-Commerce Purchase Behavior

  • Hyun-Jin Yeo
    • Journal of the Korea Society of Computer and Information, v.29 no.4, pp.155-160, 2024
  • Weather indexes such as temperature, humidity, wind speed, and air pressure have been studied for diverse life-related factors: food poisoning, discomfort, and others. Accordingly, the Korea Meteorological Administration (KMA) has released indexes such as 'life industrial weather information', 'safety weather information', and even 'picnic weather information', which shows how suitable the weather is for a picnic. These weather-life effects also appear in shopping preferences: weather affects offline purchase behavior, especially at big marts, because visiting them has an outdoor leisure attribute. However, since online shopping has no physical attribute, weather factors may not act in the same way as offline. Whereas previous research has focused on psychological factors used as marketing criteria, this research utilizes the KMA weather dataset, which affects those psychological factors, together with 1,033 online survey responses, in an SEM analysis to clarify the relationships between weather factors and online shopping purchase behavior. As a result, online purchase intention is affected by temperature and humidity.

Preliminary Study on the Enhancement of Reconstruction Speed for Emission Computed Tomography Using Parallel Processing (병렬 연산을 이용한 방출 단층 영상의 재구성 속도향상 기초연구)

  • Park, Min-Jae;Lee, Jae-Sung;Kim, Soo-Mee;Kang, Ji-Yeon;Lee, Dong-Soo;Park, Kwang-Suk
    • Nuclear Medicine and Molecular Imaging, v.43 no.5, pp.443-450, 2009
  • Purpose: Conventional image reconstruction uses simplified physical models of projection. However, reconstruction with realistic physics, for example full 3D reconstruction, takes too long to process all the data in a clinical setting and is infeasible on a common reconstruction machine because of the large memory required by complex physical models. We propose a practical distributed-memory model for fast reconstruction using parallel processing on personal computers to enable such large-scale computations. Materials and Methods: Preliminary feasibility tests on virtual machines and various performance tests on the commercial supercomputer Tachyon were performed. An expectation-maximization algorithm was tested with both a common 2D projection model and a realistic 3D line-of-response model. Since processing slowed down (by up to 6 times) after a certain number of iterations, compiler optimization was performed to maximize the efficiency of parallelization. Results: Parallel processing of a program on multiple computers was possible on Linux with MPICH and NFS. We verified that the differences between the parallel-processed image and the single-processed image at the same iteration were below the significant digits of floating-point numbers (about 6 bits). Two processors showed good parallel efficiency (a 1.96-times speedup). The slowdown was resolved by vectorization using SSE. Conclusion: Through this study, a practical parallel computing system for clinical use was established, providing enough memory to reconstruct images with realistic physical models that cannot be simplified.
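The expectation-maximization reconstruction the abstract refers to is, in its standard MLEM form (stated here for reference; the paper's exact variant may differ), the iterative update

```latex
\lambda_j^{(k+1)} \;=\; \frac{\lambda_j^{(k)}}{\sum_i c_{ij}}
\sum_i c_{ij}\,\frac{y_i}{\sum_{j'} c_{ij'}\,\lambda_{j'}^{(k)}}
```

where $y_i$ are the measured projection counts, $c_{ij}$ is the system matrix (probability that emission from voxel $j$ is detected in projection bin $i$), and $\lambda_j^{(k)}$ is the image estimate at iteration $k$. The sums over projection bins $i$ are what a distributed-memory implementation partitions across nodes, which is why the large 3D line-of-response system matrix can be split among many machines' memory.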

Open GIS Component Software Ensuring an Interoperability of Spatial Information (공간정보 상호운용성 지원을 위한 컴포넌트 기반의 개방형 GIS 소프트웨어)

  • Choe, Hye-Ok;Kim, Gwang-Su;Lee, Jong-Hun
    • The KIPS Transactions: Part D, v.8D no.6, pp.657-664, 2001
  • Information technology has progressed toward open architectures, components, and multimedia services on the Internet, demanding interoperability, reusability, and real-time operation. A GIS is a system that processes geo-spatial information such as natural resources, buildings, roads, and many kinds of facilities on the earth. Spatial information, characterized by complexity and diversity, requires interoperability and reusability of pre-built databases under an open architecture. This paper describes the development of component-based open GIS software. The goal of the open GIS component software is a GIS middleware combining open-architecture and component technology, ensuring interoperability of spatial information and reusability of elementary pieces of GIS software. The open GIS component conforms to the distributed open architecture for spatial information proposed by the OGC (Open GIS Consortium). The system consists of data provider components, kernel (MapBase) components, clearinghouse components, and five kinds of GIS applications for local governments. The data provider component offers a unified OLE DB interface to connect to and access diverse data sources independent of their formats and locations. The MapBase component supports core and common GIS technology feasible for various applications. The clearinghouse component provides functionality for the discovery of and access to spatial information over the Internet. The system was implemented using ATL/COM and Visual C++ under Microsoft's Windows environment and consists of more than 20 components. A case study for the KSDI (Korea Spatial Data Infrastructure), sharing spatial information between local governments, demonstrated the advantages of component-based open GIS software. We are now undertaking another case study on sharing seven kinds of underground facilities using the open GIS component software.


Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems, v.22 no.4, pp.109-122, 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and social problem solving by analyzing data that is currently open or collected directly. In Korea, various companies and individuals are attempting big data analysis, but it is difficult even at the initial stage because of limited big data disclosure and collection difficulties. Nowadays, system improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly as services for opening public data, such as the domestic Government 3.0 portal (data.go.kr). In addition to these government efforts, services that share data held by corporations or individuals are running, but it is difficult to find useful data because of the small amount of shared data. Moreover, big traffic problems can occur because the entire dataset must be downloaded and examined in order to grasp the attributes of and basic information about the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper to solve the problem of sharing big data; it means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data can be improved by providing information that conveys the properties and characteristics of big data when a data user searches for it. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that may occur when the original data is disclosed can be avoided, enabling big data sharing between the data provider and the data user.
Second, it is necessary to quickly generate appropriate preprocessing results according to the disclosure level or network status of the raw data, and to deliver the results to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time. When preprocessing the data requested by a user, the data is reduced to a size transferable over the current network before transmission, so that no big traffic occurs. In this paper, we present various data sizes according to the disclosure level determined through pre-analysis. This method is expected to show a low traffic volume compared with the conventional method of sharing only raw data across many systems. We describe how to solve the problems that occur when big data is released and used, and how to facilitate its sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, with a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent is the agent required by the data provider: it performs pre-analysis of big data to generate a Data Descriptor containing information on the Sample Data, Summary Data, and Raw Data. It also performs fast and efficient big data preprocessing through distributed processing and continuously monitors network traffic. The Client Agent is the agent placed on the data user's side. It can search big data through the Data Descriptor, the result of the pre-analysis, and can thus find data quickly. The desired data can then be requested from the server and downloaded according to its disclosure level. Separating the Server Agent and the Client Agent lets the data provider publish data for use by data users.
In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem; we construct the detailed modules of the client-server model and present the design of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses it into new data. By sharing the newly processed data through the Server Agent, the data user changes role and becomes a data provider. The data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis on the sample data. In this way, raw data is processed and the processed big data is utilized by users, naturally forming a shared environment. The roles of data provider and data user are not fixed, yielding an ideal shared service in which everyone can be both a provider and a user. The client-server model thus solves the problem of sharing big data, provides a free sharing environment for secure big data disclosure, and offers a shared service in which big data is easy to find.
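The pre-analysis step that produces a Data Descriptor can be sketched as below. The field names and chosen statistics are illustrative assumptions, since the abstract names the descriptor's parts (sample, summary, raw-data information) but not its schema.

```python
import random
import statistics

def make_descriptor(values, sample_size=3):
    """Build a hypothetical Data Descriptor for one numeric column: summary
    statistics plus a small random sample, letting a data user judge the
    dataset's properties without downloading the raw file."""
    return {
        "rows": len(values),
        "summary": {
            "min": min(values),
            "max": max(values),
            "mean": statistics.mean(values),
        },
        "sample": random.sample(values, min(sample_size, len(values))),
    }
```

Sharing only this descriptor (and serving the raw file separately, sized to the current network) is the traffic-saving idea the model describes.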

A Study on the Development of Reference Linking System Based on Digital Object Identifier for Korean Journal Articles (국내 학술지 논문의 DOI 기반 연계시스템 구축에 관한 연구)

  • 한혜영;정동열
    • Journal of the Korean Society for Information Management, v.17 no.4, pp.207-227, 2000
  • Recently, major international STM (Scientific, Technical, and Medical) publishers have been developing prototype systems that can provide reference linking of journal articles within the scholarly literature on a cross-publisher basis using the URN (Universal Resource Name). In Korea, it is hard to find efforts to link the scattered digitized documents for individual users through a unified web. In this study, a linking model for an integrated gateway from bibliographic information to full text has been designed, and the 'Electronic Research Resources Linking System (E3R/LS)' has been developed as a prototype of a centralized static reference linking system. There are three major components for constructing reference linking systems. The first component, the Digital Object Identifier (DOI), is introduced as the public identifier intended to be applied wherever an item needs to be identified. For identifying Korean journal articles, an extended SICI (Serial Item and Contribution Identifier) has been newly defined in this study and is used as the suffix of the DOI. The reference database contains the second component, metadata, to be implemented by all information providers. The CNRI resolution system is used as the third component, resolving a DOI into a URL.
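The DOI-with-SICI-suffix idea can be sketched as below. The field layout and the prefix `10.9999` are illustrative placeholders, not the extended-SICI syntax the paper actually defines.

```python
def doi_from_sici(prefix, issn, year, volume, issue, start_page, code):
    """Form a DOI whose suffix is a SICI-like serial-item identifier.
    The layout is illustrative only; the paper defines its own extended
    SICI for Korean journal articles."""
    sici = f"{issn}({year}){volume}:{issue}<{start_page}:{code}>"
    return f"{prefix}/{sici}"

# A resolver (e.g. the CNRI Handle System) would then map the DOI to a URL.
```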


The National Digital Library DL Case (국가전자도서관 DL 사례)

  • 공봉석
    • Proceedings of the Korea Database Society Conference, 1998.09a, pp.293-312, 1998
  • $\square$ Highlighted as a means of raising public awareness of the information revolution and as a major application service of the high-speed information network $\square$ Introduction of information and communication technology as a way to revitalize libraries, following changes in users' perceptions of traditional information use $\square$ National promotion of digital library construction in advanced countries $\square$ Construction and expanded deployment of the domestic high-speed information network $\square$ All citizens can access libraries and obtain the materials they need, regardless of region or time $\square$ Improved research capability of domestic researchers through shortened information acquisition time $\square$ Resolution of regional gaps in informatization $\square$ Promotion of digital library projects at domestic libraries $\square$ Establishment of an integrated interworking framework among major digital libraries $\square$ Prevention of duplicate investment by coordinating the subject areas covered by each major library $\square$ Realization as a major application service of the high-speed information network - pioneering public visibility of the network - thereby promoting private-sector participation and investment $\square$ Construction of a key foundation for the informatization of the nation, industry, and daily life $\square$ Resolution of regional information imbalance - information concentrated in certain regions is provided over the Internet without constraints of time or space $\square$ Presentation of a basic model for digital libraries - by implementing inter-institution data interworking and retrieval systems, the major functions of a digital library, a reference model is presented for digital libraries to be built in the future $\square$ Establishment of a data-sharing framework among digital libraries - a standard framework for sharing the information managed in a distributed manner by the libraries participating in the pilot project (abridged)


WebPR : A Dynamic Web Page Recommendation Algorithm Based on Mining Frequent Traversal Patterns (WebPR :빈발 순회패턴 탐사에 기반한 동적 웹페이지 추천 알고리즘)

  • Yoon, Sun-Hee;Kim, Sam-Keun;Lee, Chang-Hoon
    • The KIPS Transactions: Part B, v.11B no.2, pp.187-198, 2004
  • The World-Wide Web is the largest distributed information space and has grown to encompass diverse information resources. However, although the Web is growing exponentially, an individual's capacity to read and digest content is essentially fixed. From the viewpoint of Web users, they can be confused by the explosion of Web information, by constantly changing Web environments, and by a lack of understanding of their needs. In these environments, mining traversal patterns is an important problem in Web mining, with a host of application domains including system design and information services. Conventional traversal-pattern mining systems use inter-page associations within sessions with only a very restricted mechanism (based on vectors or matrices) for generating frequent k-pagesets. We develop a family of novel algorithms (termed WebPR - Web Page Recommend) for mining frequent traversal patterns and then selecting pagesets to recommend. Our algorithms provide Web users with new page views that include recommended pagesets, so that users can traverse a Web site effectively. The main distinguishing features are a scheme that consistently applies inter-page associations for mining frequent traversal patterns and a highly efficient tree model. Our experiments with two real data sets, from the Lady Asiana and KBS media server sites, clearly validate that our method outperforms conventional methods.
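The core counting step of frequent traversal-pattern mining can be sketched minimally as below; this covers only contiguous length-k page sequences with a support threshold, not WebPR's tree model or its recommendation step.

```python
from collections import Counter

def frequent_traversals(sessions, k, min_support):
    """Count contiguous length-k page sequences across user sessions and
    keep those whose support (occurrence count) meets min_support."""
    counts = Counter()
    for session in sessions:
        for i in range(len(session) - k + 1):
            counts[tuple(session[i:i + k])] += 1
    return {p: c for p, c in counts.items() if c >= min_support}
```

A recommender would then, given a user's current traversal, suggest pages that extend one of the surviving frequent patterns.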