• Title/Summary/Keyword: 메모리(memory)

Broadcast Content Recommender System based on User's Viewing History (사용자 소비이력기반 방송 콘텐츠 추천 시스템)

  • Oh, Soo-Young;Oh, Yeon-Hee;Han, Sung-Hee;Kim, Hee-Jung
    • Journal of Broadcast Engineering / v.17 no.1 / pp.129-139 / 2012
  • This paper introduces a recommender system for broadcast content. The system uses each user's viewing history to produce personalized recommendations. Broadcast content has characteristics that distinguish it from books, music, and movies: there are two types, series programs and episode programs. A series program consists of several programs that deal with the same topic or story, whereas an episode program covers a variety of topics, with each program generally addressing a different topic. Our recommender system therefore recommends TV programs according to the type of broadcast content. Recommendations are based on the user's viewing history, from which content-to-content similarity is calculated with a collaborative filtering algorithm. The system uses a Java sparse-array structure and performs memory-based processing, and the results are stored in an index structure. Recommendation items are provided through open APIs over the HTTP protocol. Finally, the paper describes the implementation of the recommender system and a web demo.
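
The abstract does not give the exact similarity measure, so the following is only a minimal sketch of memory-based, item-to-item collaborative filtering over viewing history, assuming a binary watched/not-watched matrix and cosine similarity; all class and identifier names are hypothetical.

```java
import java.util.*;

// Minimal sketch (not the authors' code): item-to-item similarity from viewing
// history, as used in memory-based collaborative filtering. Assumes a binary
// "watched" relation; all names here are hypothetical.
public class ViewingHistorySimilarity {

    // programId -> set of userIds who watched it (a sparse representation)
    private final Map<String, Set<String>> viewers = new HashMap<>();

    public void addView(String userId, String programId) {
        viewers.computeIfAbsent(programId, k -> new HashSet<>()).add(userId);
    }

    // Cosine similarity between two programs over the binary viewing matrix.
    public double similarity(String programA, String programB) {
        Set<String> a = viewers.getOrDefault(programA, Collections.emptySet());
        Set<String> b = viewers.getOrDefault(programB, Collections.emptySet());
        if (a.isEmpty() || b.isEmpty()) return 0.0;
        long common = a.stream().filter(b::contains).count();
        return common / Math.sqrt((double) a.size() * b.size());
    }

    public static void main(String[] args) {
        ViewingHistorySimilarity sim = new ViewingHistorySimilarity();
        sim.addView("u1", "drama-ep1");
        sim.addView("u1", "drama-ep2");
        sim.addView("u2", "drama-ep1");
        sim.addView("u2", "drama-ep2");
        sim.addView("u3", "news-0101");
        System.out.println(sim.similarity("drama-ep1", "drama-ep2")); // prints 1.0
    }
}
```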

Mobile Cloud Context-Awareness System based on Jess Inference and Semantic Web RL for Inference Cost Decline (추론 비용 감소를 위한 Jess 추론과 시멘틱 웹 RL기반의 모바일 클라우드 상황인식 시스템)

  • Jung, Se-Hoon;Sim, Chun-Bo
    • KIPS Transactions on Software and Data Engineering / v.1 no.1 / pp.19-30 / 2012
  • A context-aware service recognizes a user's surroundings via computing and communication and makes decisions autonomously in order to provide useful information. However, in the current mobile environment, a context awareness system (CAS) has only small-scale context processing capacity because of restricted mobile functionality, limited memory space, and rising inference cost. In this paper, we propose a mobile cloud context-awareness system built on Google App Engine, a PaaS (Platform as a Service), so that context services can be used on various mobile devices without being tied to a specific platform. The inference design of the proposed system combines a knowledge-based framework for semantic inference, expressed with SWRL rules and an OWL ontology, with the rule-based inference engine Jess. To overcome the drawbacks of the previous approach, SPARQL query-based reasoning for semantic search, the system maps SWRL reasoning onto the Jess engine: the Classes, Properties, and Individuals expressed in SWRL are connected to Jess through the JessTab plug-in, which shortens the reasoning time of the context service.
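
As a rough illustration of the mapping described above, the sketch below asserts ontology individuals and properties as facts in the Jess engine and lets a forward-chaining rule stand in for an SWRL rule; the rule, fact names, and context values are invented for illustration, and only the basic jess.Rete API (executeCommand, assertString, run) is assumed.

```java
import jess.JessException;
import jess.Rete;

// Rough illustration (not the authors' code): ontology individuals and
// properties asserted as Jess facts, with a forward-chaining rule standing in
// for an SWRL rule mapped onto the Jess engine. Fact and rule names are made up.
public class JessContextReasoner {
    public static void main(String[] args) throws JessException {
        Rete engine = new Rete();

        // A rule corresponding to an SWRL rule such as
        // locatedIn(?u, Office) ^ time(?u, Evening) -> context(?u, Overtime)
        engine.executeCommand("(defrule infer-overtime" +
                              "  (locatedIn ?u Office)" +
                              "  (time ?u Evening)" +
                              "  => (assert (context ?u Overtime)))");

        // Individuals and properties taken from the ontology, asserted as facts.
        engine.assertString("(locatedIn user1 Office)");
        engine.assertString("(time user1 Evening)");

        engine.run();                              // forward-chaining inference
        engine.executeCommand("(facts)");          // print working memory for inspection
    }
}
```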

Novel Radix-2^6 DIF FFT Processor with Low Computational Complexity (연산복잡도가 적은 radix-2^6 FFT 프로세서)

  • Cho, Kyung-Ju
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.1 / pp.35-41 / 2020
  • Fast Fourier transform (FFT) processors are widely used in applications such as communications, image processing, and biomedical signal processing. In particular, high-performance, low-power FFT processing is indispensable in OFDM-based communication systems. This paper presents a novel radix-2^6 FFT algorithm with low computational complexity and high hardware efficiency. By applying a 7-dimensional index mapping, the twiddle factor is decomposed and the radix-2^6 FFT algorithm is derived. The proposed algorithm has a simple twiddle-factor sequence and a small number of complex multiplications, which reduces the memory required to store the twiddle factors. When the twiddle-factor coefficients are small, complex constant multipliers can be used instead of general complex multipliers, and such constant multipliers can be designed efficiently with canonic signed digit (CSD) representation and the common subexpression elimination (CSE) algorithm. An efficient constant-multiplier design method applying CSD and CSE is proposed for the twiddle-factor multiplications used in the proposed radix-2^6 algorithm. To compare the proposed method with previous work, a 256-point single-path delay feedback (SDF) FFT was designed and synthesized for an FPGA; the proposed algorithm uses about 10% less hardware than the previous algorithm.
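
As background for the constant-multiplier design mentioned above, the following sketch shows a standard canonic signed digit (CSD) recoding of an integer constant; it is a generic illustration under our own assumptions, not code from the paper.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch (assumption, not code from the paper): canonic signed digit
// (CSD) recoding of a non-negative integer constant. CSD uses digits {-1, 0, +1}
// with no two adjacent non-zero digits, which minimizes the adders/subtractors
// needed for a constant multiplier.
public class CsdRecoder {

    // Returns CSD digits from least significant to most significant.
    public static List<Integer> toCsd(int n) {
        List<Integer> digits = new ArrayList<>();
        while (n != 0) {
            int d = 0;
            if ((n & 1) != 0) {
                d = 2 - (n & 3);   // +1 if n mod 4 == 1, -1 if n mod 4 == 3
                n -= d;
            }
            digits.add(d);
            n >>= 1;
        }
        return digits;
    }

    public static void main(String[] args) {
        // 7 = 8 - 1, so CSD is (+1 0 0 -1): x*7 = (x << 3) - x, one subtractor.
        List<Integer> csd = toCsd(7);
        Collections.reverse(csd);          // print most significant digit first
        System.out.println(csd);           // [1, 0, 0, -1]
    }
}
```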

Analysis of Factors for Korean Women's Cancer Screening through Hadoop-Based Public Medical Information Big Data Analysis (Hadoop기반의 공개의료정보 빅 데이터 분석을 통한 한국여성암 검진 요인분석 서비스)

  • Park, Min-hee;Cho, Young-bok;Kim, So Young;Park, Jong-bae;Park, Jong-hyock
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.10 / pp.1277-1286 / 2018
  • In this paper, we build an Apache Hadoop based cloud environment for analyzing public medical information big data, with flexible scalability of computing resources. The environment can quickly and flexibly extend storage, memory, and other resources as log data accumulates and grows over time. When real-time analysis of the accumulated unstructured log data is required, the system adopts a Hadoop-based analysis module to overcome the processing limits of existing analysis tools, providing fast and reliable parallel distributed processing of large volumes of log data. For the big data analysis, frequency analysis and chi-square tests are performed. In addition, multivariate logistic regression at a 0.05 significance level and multivariate logistic regression on the significant variables (p < 0.05) are performed, with the multivariate logistic regression carried out for each of the three models.
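
A hedged sketch of the kind of Hadoop-based frequency analysis described above is shown below: a MapReduce job that counts how often each value of a survey field occurs. The CSV layout, column position, and class names are assumptions for illustration.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hedged sketch of the Hadoop frequency analysis described above: counting how
// often each value of a survey field appears. The CSV field position is hypothetical.
public class FactorFrequency {

    public static class FactorMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text outKey = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            if (fields.length < 2) return;              // skip malformed lines
            outKey.set("screened=" + fields[1].trim()); // hypothetical column
            context.write(outKey, ONE);
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "factor frequency");
        job.setJarByClass(FactorFrequency.class);
        job.setMapperClass(FactorMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```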

Linear Resource Sharing Method for Query Optimization of Sliding Window Aggregates in Multiple Continuous Queries (다중 연속질의에서 슬라이딩 윈도우 집계질의 최적화를 위한 선형 자원공유 기법)

  • Baek, Seong-Ha;You, Byeong-Seob;Cho, Sook-Kyoung;Bae, Hae-Young
    • Journal of KIISE: Databases / v.33 no.6 / pp.563-577 / 2006
  • A stream processor uses resource sharing to make efficient use of limited resources across multiple continuous queries. Previous methods organize aggregate queries into a level structure, so an insert operation pays the cost of reconstructing that structure, and a search operation pays the cost of looking up aggregate information for each sliding-window size. This paper instead uses a linear structure to optimize sliding-window aggregates. The method proceeds through pane-size decision, pane generation, and pane deletion. The decision phase determines the optimum pane size for holding accurate aggregate information; the generation phase stores per-pane aggregate information computed from the stream buffer; and the deletion phase removes panes that are no longer used. Because it uses a linear data layout, the proposed method consumes fewer resources than methods based on level structures. The insertion cost is reduced because only one pane's worth of data is aggregated even when a large amount of stream data arrives, and the search cost is reduced because a linear scan suffices even when the queries use different sliding-window sizes. Experiments show that the proposed method uses less memory and increases query processing speed.
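
The following is a minimal sketch, under our own assumptions, of the pane idea described above for a SUM aggregate: each pane stores a partial sum over a fixed number of tuples, several window sizes share the same pane sequence, and expired panes are deleted. Names and the tuple-based pane size are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch (assumption, not the paper's code) of pane-based sliding-window
// aggregation: each pane holds a partial SUM over a fixed number of tuples, and a
// window aggregate is the combination of the panes it covers, so several queries
// with different window sizes can share the same pane sequence.
public class PaneAggregator {
    private final int paneSize;                 // tuples per pane
    private final Deque<Long> paneSums = new ArrayDeque<>();
    private long currentSum = 0;
    private int currentCount = 0;

    public PaneAggregator(int paneSize) {
        this.paneSize = paneSize;
    }

    public void insert(long value) {
        currentSum += value;
        if (++currentCount == paneSize) {       // close the current pane
            paneSums.addLast(currentSum);
            currentSum = 0;
            currentCount = 0;
        }
    }

    // Aggregate over the most recent `panes` closed panes (one sliding window).
    public long windowSum(int panes) {
        long sum = 0;
        int skip = paneSums.size() - panes;
        for (long s : paneSums) {
            if (skip-- > 0) continue;
            sum += s;
        }
        return sum;
    }

    // Delete panes that no longer fall inside the largest registered window.
    public void expire(int maxPanes) {
        while (paneSums.size() > maxPanes) paneSums.removeFirst();
    }

    public static void main(String[] args) {
        PaneAggregator agg = new PaneAggregator(2);
        for (long v = 1; v <= 8; v++) agg.insert(v);   // pane sums: 3, 7, 11, 15
        System.out.println(agg.windowSum(2));          // last 4 tuples: 26
        System.out.println(agg.windowSum(4));          // last 8 tuples: 36
    }
}
```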

Spark based Scalable RDFS Ontology Reasoning over Big Triples with Confidence Values (신뢰값 기반 대용량 트리플 처리를 위한 스파크 환경에서의 RDFS 온톨로지 추론)

  • Park, Hyun-Kyu;Lee, Wan-Gon;Jagvaral, Batselem;Park, Young-Tack
    • Journal of KIISE / v.43 no.1 / pp.87-95 / 2016
  • Recently, due to the development of the Internet and electronic devices, there has been an enormous increase in the amount of available knowledge and information. As this growth has proceeded, studies on large-scale ontological reasoning have been actively carried out. In general, a machine learning program or knowledge engineer measures and provides a degree of confidence for each triple in a large ontology. However, the collected ontology data contains inherent uncertainty, and reasoning over such data can make the results vague. To address this issue, we propose an RDFS reasoning approach that utilizes confidence values indicating the degree of uncertainty in the collected data. Unlike conventional approaches that do not take data uncertainty into account, our approach uses the in-memory cluster computing framework Spark to compute confidence values for the data inferred through RDFS-based reasoning by applying uncertainty estimation methods; the computed confidence values represent the uncertainty of the inferred data. To evaluate the approach, ontology reasoning was carried out over the LUBM standard benchmark data set with arbitrary confidence values added to the ontology triples. Experimental results indicate that the proposed system can reason over the largest data set, LUBM3000, in 1,179 seconds while inferring 350K triples.
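
As an illustration of the approach, the sketch below applies one RDFS entailment rule (rdfs9: subclass membership propagation) on Spark's Java API and combines confidence values by multiplication; the sample triples and the choice of product as the combination function are assumptions, not details taken from the paper.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

// Hedged sketch of one RDFS entailment rule (rdfs9) with confidence propagation:
//   (C rdfs:subClassOf D), (x rdf:type C)  =>  (x rdf:type D)
// The derived confidence is taken here as the product of the input confidences.
public class Rdfs9WithConfidence {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("rdfs9-confidence").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {

            // (subclass, (superclass, confidence))
            List<Tuple2<String, Tuple2<String, Double>>> subClassList = Arrays.asList(
                    new Tuple2<>("GraduateStudent", new Tuple2<>("Student", 0.9)));
            JavaPairRDD<String, Tuple2<String, Double>> subClassOf = sc.parallelizePairs(subClassList);

            // (class, (instance, confidence))
            List<Tuple2<String, Tuple2<String, Double>>> typeList = Arrays.asList(
                    new Tuple2<>("GraduateStudent", new Tuple2<>("alice", 0.8)));
            JavaPairRDD<String, Tuple2<String, Double>> typeTriples = sc.parallelizePairs(typeList);

            // Join on the class, derive (instance rdf:type superclass) with combined confidence.
            JavaPairRDD<String, Tuple2<String, Double>> derived = typeTriples.join(subClassOf)
                    .mapToPair(t -> {
                        String instance = t._2()._1()._1();
                        double instanceConf = t._2()._1()._2();
                        String superClass = t._2()._2()._1();
                        double subClassConf = t._2()._2()._2();
                        return new Tuple2<>(instance,
                                new Tuple2<>(superClass, instanceConf * subClassConf));
                    });

            derived.collect().forEach(t -> System.out.println(
                    t._1() + " rdf:type " + t._2()._1() + " [" + t._2()._2() + "]"));
        }
    }
}
```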

Building a Log Framework for Personalization Based on a Java Open Source (JAVA 오픈소스 기반의 개인화를 지원하는 Log Framework 구축)

  • Sin, Choongsub;Park, Seog
    • KIISE Transactions on Computing Practices / v.21 no.8 / pp.524-530 / 2015
  • A log is text used to monitor a system and detect its issues during the development and operation of a program; based on the log, system developers and operators can trace the cause of an issue. In the development phase, tracing a log is relatively simple because only a small number of people, such as developers and testers, use the system. In the operation phase, however, many people use the system and tracing becomes difficult; in many cases tracing is abandoned because the relevant log entries cannot be tracked. This study proposes simplified log tracing during system operation. The goal is to create logs at run time keyed by user ID/IP, using features provided by Logback. The ID/IP of each tracked user is saved in a DB and loaded into memory when the WAS starts. Before an online service runs, an Interceptor decides whether a separate log file should be used, and the services requested by that user are then written to a separate log file. Although every service request must pass through the Interceptor, the overhead is insignificant because the work is a simple in-JVM operation.
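
One way to realize the separate per-user log file described above is a Spring HandlerInterceptor that places the tracked user's ID/IP into SLF4J's MDC, so that a Logback SiftingAppender keyed on that MDC entry writes a distinct file per tracked user. The sketch below follows that assumption; the discriminator key, the tracked-user set, and the use of the remote address are hypothetical.

```java
import java.util.Set;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.ModelAndView;

// Hedged sketch: an interceptor that tags requests from tracked users with an MDC
// entry; a Logback SiftingAppender keyed on "traceUser" can then route each tracked
// user's log events to a separate file. The tracked-ID set stands in for the DB lookup.
public class PerUserLogInterceptor implements HandlerInterceptor {

    // IDs/IPs of users to trace, assumed to be loaded from the DB at WAS startup.
    private final Set<String> trackedUsers;

    public PerUserLogInterceptor(Set<String> trackedUsers) {
        this.trackedUsers = trackedUsers;
    }

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        String userKey = request.getRemoteAddr();          // or a session/login ID
        if (trackedUsers.contains(userKey)) {
            MDC.put("traceUser", userKey);                 // SiftingAppender discriminator key
        }
        return true;                                       // always continue to the service
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response,
                           Object handler, ModelAndView modelAndView) {
        // no-op; separation into log files is handled by the Logback configuration
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
                                Object handler, Exception ex) {
        MDC.remove("traceUser");                           // avoid leaking across pooled threads
    }
}
```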

A Study on a Secure Internet Service Provider Model Using Smart Secure-Pad (스마트 보안패드를 이용한 안전한 인터넷 서비스 제공 모델에 관한 연구)

  • Lee, Jae-Sik;Kim, Hyung-Joo;Jun, Moon-Seog
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.3 / pp.1428-1438 / 2013
  • Services delivered over the Internet require a trust relationship to be formed between the user and the service provider. Various authentication schemes have been proposed to establish this trust, such as Public Key Infrastructure certificate authentication and simple ID/PW user authentication; for electronic financial transactions, transaction integrity and non-repudiation are also provided. Although these Internet services employ various measures to ensure safety, attacks have become difficult to prevent with existing security technology because of the emergence of man-in-the-browser (MITB) attacks, which manipulate the memory area of the web browser, and social engineering attacks such as phishing and pharming, so new security technologies are required. In this paper, we propose the concept of a smart secure-pad and a model that uses it to safely form a trust relationship between user and service provider and to ensure the safety of data transmission. The security evaluation of the proposed model shows that it withstands MITB and phishing/pharming attacks that existing security technology cannot prevent. In addition, service providers can easily apply the model, and representative services applying it show that Internet services can be provided in a safe environment.

Update Protocols for Web-Based GIS Applications (웹 기반 GIS 응용을 위한 변경 프로토콜)

  • An, Seong-U;Seo, Yeong-Deok;Kim, Jin-Deok;Hong, Bong-Hui
    • Journal of KIISE: Databases / v.29 no.4 / pp.321-333 / 2002
  • As web-based services become more and more popular, concurrent updates of spatial data should be possible in web-based environments. Web-based GIS applications provide large quantities of data, and these data must be continuously updated according to various users' requirements. In such a large data-providing system, it is inefficient for the server to perform all of the spatial-data update work requested by clients. In addition, the HTTP protocol used on the web is connectionless and stateless, so many problems can occur if a transaction processing scheme designed for the LAN environment is applied directly to the web. In particular, for long transactions that update spatial data, it is very difficult to control concurrency among clients and to keep the server data consistent. This paper proposes a way to keep the data consistent while spatial data is updated directly on the client side, by resolving the dormancy region lock problem caused by the connectionless and stateless nature of HTTP. An RX (Region-eXclusive) lock combined with periodically sent ALIVE_CLIENTi messages solves this problem. The protocol is shown to be effective through an implementation in the main-memory spatial database system CyberMap.
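
The paper's protocol details are not reproduced here; the sketch below only illustrates, under our own assumptions, how a server-side region lock table could treat an RX lock as dormant unless it is refreshed by periodic ALIVE_CLIENT messages. Class names, the lease length, and the message handling are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of the idea behind an RX (Region-eXclusive) lock kept alive by
// periodic ALIVE_CLIENT messages: because HTTP is connectionless, a lock that is
// not refreshed within a timeout is treated as dormant and can be reclaimed.
public class RegionLockTable {

    private static final long LEASE_MILLIS = 30_000;       // hypothetical lease length

    private static final class Lease {
        final String clientId;
        volatile long lastAliveAt;
        Lease(String clientId, long now) { this.clientId = clientId; this.lastAliveAt = now; }
    }

    private final Map<String, Lease> locks = new ConcurrentHashMap<>();  // regionId -> lease

    // Client requests an exclusive lock on a spatial region before editing it.
    public synchronized boolean acquire(String regionId, String clientId) {
        long now = System.currentTimeMillis();
        Lease held = locks.get(regionId);
        if (held != null && now - held.lastAliveAt <= LEASE_MILLIS && !held.clientId.equals(clientId)) {
            return false;                                   // region locked by a live client
        }
        locks.put(regionId, new Lease(clientId, now));
        return true;
    }

    // Handler for a periodic ALIVE_CLIENT message: refresh all leases of that client.
    public void alive(String clientId) {
        long now = System.currentTimeMillis();
        for (Lease lease : locks.values()) {
            if (lease.clientId.equals(clientId)) lease.lastAliveAt = now;
        }
    }

    // Explicit release when the client commits or aborts its long update transaction.
    public void release(String regionId, String clientId) {
        locks.computeIfPresent(regionId, (r, lease) -> lease.clientId.equals(clientId) ? null : lease);
    }
}
```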

Extension of Wright-based Connector Considering Efficiency Characteristics of Component (컴포넌트 효율성 특성을 고려한 Wright기반의 커넥터 확장)

  • 정화영;송영재
    • Journal of KIISE: Software and Applications / v.30 no.12 / pp.1185-1192 / 2003
  • In component assembly and composition for software architecture, existing architecture-based composition techniques such as ACME and Wright connect components directly through a connector's Role and serve requests in FIFO order. However, when asynchronous requests arrive from components with different characteristics, a FIFO connector handles them poorly: if a low-performance component is scheduled first, a high-performance component must wait its turn. To improve the operation of assembled components, the connector therefore needs to process requests according to a priority that reflects the characteristics of each calling component. In this research, we extend the connector part of the existing Wright specification to support a multiplexed connection structure. The connector is designed and implemented so that component service requests are processed in priority order, where the priority is a weighted combination of efficiency attributes of the assembled components: CPU utilization, bean request processing time, and memory utilization. To verify the efficiency of the designed connector, we implemented 20 sample EJB components with different efficiency characteristics and applied them to the connector. The results show that the whole system operates efficiently, although processing takes 481 ms longer than with the existing FIFO technique.
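
The exact weighting used by the extended connector is not given in the abstract, so the sketch below only illustrates the general idea: queued component requests are ordered by a weighted score over CPU utilization, bean request processing time, and memory utilization instead of FIFO. The weights, score formula, and names are assumptions.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Hedged sketch of priority scheduling in the connector: instead of FIFO, queued
// component requests are ordered by a weighted score over CPU utilization, bean
// request processing time, and memory utilization. Weights are hypothetical.
public class PriorityConnector {

    static final class ComponentRequest {
        final String componentId;
        final double cpuUseRate;       // 0.0 .. 1.0
        final double processTimeMs;    // expected bean processing time
        final double memUseRate;       // 0.0 .. 1.0

        ComponentRequest(String componentId, double cpuUseRate, double processTimeMs, double memUseRate) {
            this.componentId = componentId;
            this.cpuUseRate = cpuUseRate;
            this.processTimeMs = processTimeMs;
            this.memUseRate = memUseRate;
        }

        // Lower score = lighter request = served first (hypothetical weights).
        double score() {
            return 0.4 * cpuUseRate + 0.4 * (processTimeMs / 1000.0) + 0.2 * memUseRate;
        }
    }

    private final PriorityQueue<ComponentRequest> queue =
            new PriorityQueue<>(Comparator.comparingDouble(ComponentRequest::score));

    public void submit(ComponentRequest request) { queue.add(request); }

    public ComponentRequest next() { return queue.poll(); }

    public static void main(String[] args) {
        PriorityConnector connector = new PriorityConnector();
        connector.submit(new ComponentRequest("slowBean", 0.8, 900, 0.7));
        connector.submit(new ComponentRequest("fastBean", 0.2, 50, 0.1));
        System.out.println(connector.next().componentId);   // fastBean is served first
    }
}
```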