• Title/Summary/Keyword: 정보소스 (information source)

An Improved Multi-Keyword Search Protocol to Protect the Privacy of Outsourced Cloud Data (아웃소싱된 클라우드 데이터의 프라이버시를 보호하기 위한 멀티 키워드 검색 프로토콜의 개선)

  • Kim, Tae-Yeon; Cho, Ki-Hwan; Lee, Young-Lok
    • KIPS Transactions on Computer and Communication Systems / v.6 no.10 / pp.429-436 / 2017
  • Recently, there has been a growing tendency to outsource sensitive or important data in cloud computing, which makes protecting the privacy of the outsourced data very important. So far, a variety of secure and efficient multi-keyword search schemes have been proposed for cloud computing environments composed of a single data owner and multiple data users. Zhang et al. recently proposed a multi-keyword search protocol for cloud computing environments composed of multiple data owners and data users, but their protocol has two problems. One is that the cloud server can illegally infer the relevance between data files by examining the keyword index and the users' trapdoors; the other is that responses to user requests are delayed because the cloud server must execute complicated operations as many times as the size of the keyword index. In this paper, we propose an improved multi-keyword search protocol that protects the privacy of outsourced data under the assumption that the cloud server is completely untrusted. Our experiments show that the proposed protocol is more secure than Zhang's in terms of relevance inference between data files and more efficient in terms of processing time.
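
To make the relevance-inference weakness concrete, here is a minimal sketch of a generic trapdoor-based keyword index (an assumed toy construction, not the paper's or Zhang's actual protocol). Because the keyword tags are deterministic, the server can observe which files share tags, which is exactly the kind of file-relevance leakage the improved protocol aims to prevent.

```python
# Toy sketch of trapdoor-based keyword matching (NOT the paper's protocol):
# the owner tags each file's keywords with a keyed PRF; the server matches a
# user's trapdoor against the index without seeing the plaintext keywords.
import hmac, hashlib

KEY = b"shared-secret"  # hypothetical key shared by data owner and users

def tag(keyword: str) -> bytes:
    """Deterministic PRF tag for a keyword (HMAC-SHA256)."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

def build_index(files: dict[str, list[str]]) -> dict[str, set[bytes]]:
    """file id -> set of keyword tags; this index is outsourced to the server."""
    return {fid: {tag(kw) for kw in kws} for fid, kws in files.items()}

def search(index: dict[str, set[bytes]], trapdoor: set[bytes]) -> list[str]:
    """Server-side multi-keyword match: all trapdoor tags must be present.
    Note the leakage: the server sees which files share identical tags."""
    return [fid for fid, tags in index.items() if trapdoor <= tags]

index = build_index({"f1": ["cloud", "privacy"], "f2": ["cloud", "dash"]})
print(search(index, {tag("cloud"), tag("privacy")}))  # -> ['f1']
```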

Effective Bandwidth Measurement for Dynamic Adaptive Streaming over HTTP (DASH시스템을 위한 유효 대역폭 측정 기법)

  • Kim, Dong Hyun; Jung, Jong Min; Huh, Jun Hwan; Kim, Jong Deok
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.1 / pp.42-52 / 2017
  • DASH (Dynamic Adaptive Streaming over HTTP) is an adaptive streaming technique that transmits multimedia content when clients request it from a server. In this system, the residual bandwidth must be measured precisely to deliver the best content quality and satisfy users. However, the residual bandwidth measured by DASH, which does not consider the transmission characteristics of TCP, varies with the size of the previous media segment, making it hard to guarantee users' QoE. In this paper, we exclude the TCP slow-start range from the residual bandwidth measurement and propose a new DASH bandwidth measurement method that reduces this error. We implemented the method in an open-source DASH system and compared it with the existing measurement method. The new method increased measurement accuracy by 20% and improved users' QoE in terms of service quality and the number of segment-quality changes.
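
As a rough illustration of the idea, the following is a minimal sketch assuming cumulative byte counts sampled during a segment download. The fixed skip fraction standing in for the slow-start region is an assumption made for the example, not the paper's actual detection method.

```python
# Minimal sketch (assumed details, not the paper's exact method): estimate
# per-segment throughput from the tail of the download, skipping the initial
# TCP slow-start ramp-up that biases measurements on small segments.

def throughput_excluding_slow_start(samples, skip_fraction=0.25):
    """samples: list of (elapsed_seconds, cumulative_bytes) observed during a
    segment download. Skip the first `skip_fraction` of the transfer (a crude
    stand-in for slow start) and estimate bandwidth from the remainder."""
    total_bytes = samples[-1][1]
    cutoff = total_bytes * skip_fraction
    # first sample past the assumed slow-start region
    start = next(s for s in samples if s[1] >= cutoff)
    end = samples[-1]
    return (end[1] - start[1]) / (end[0] - start[0])  # bytes per second

# e.g. a 1 MB segment downloaded in 0.8 s with a slow first quarter
samples = [(0.0, 0), (0.4, 250_000), (0.6, 600_000), (0.8, 1_000_000)]
print(throughput_excluding_slow_start(samples))  # steady-state estimate
```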

Comparison of physics-based and data-driven models for streamflow simulation of the Mekong river (메콩강 유출모의를 위한 물리적 및 데이터 기반 모형의 비교·분석)

  • Lee, Giha; Jung, Sungho; Lee, Daeeop
    • Journal of Korea Water Resources Association / v.51 no.6 / pp.503-514 / 2018
  • Recently, the hydrological regime of the Mekong river has been changing drastically due to climate change and haphazard watershed development, including dam construction. Information on hydrologic features such as the streamflow of the Mekong river is required for water-disaster prevention and sustainable water resources development in the countries sharing the river. In this study, runoff simulations at the Kratie station on the lower Mekong river are performed using SWAT (Soil and Water Assessment Tool), a physics-based hydrologic model, and LSTM (Long Short-Term Memory), a data-driven deep learning algorithm. The SWAT model was set up based on globally available databases (topography: HydroSHED; land use: GLCF-MODIS; soil: FAO soil map; rainfall: APHRODITE; etc.) and then simulated daily discharge from 2003 to 2007. The LSTM was built using the deep learning open-source library TensorFlow, and its deep-layer neural networks were trained solely on daily water-level data from 10 stations upstream of Kratie during two periods: 2000~2002 and 2008~2014. The LSTM then simulated daily discharge for 2003~2007, as with the SWAT model. The simulation results show that the Nash-Sutcliffe Efficiency (NSE) of the two models was 0.9 (SWAT) and 0.99 (LSTM), respectively. To simulate the hydrological time series of large ungauged watersheds simply, a data-driven model such as LSTM is more applicable than a physics-based hydrological model, whose complexity stems from its demand for diverse databases, because the LSTM can memorize preceding time-series sequences and reflect them in its predictions.
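
For reference, the Nash-Sutcliffe Efficiency used to score both models is a standard goodness-of-fit metric; a minimal implementation (with invented example values) looks like this. NSE = 1 means a perfect fit, and NSE = 0 means the simulation is no better than predicting the observed mean.

```python
# Nash-Sutcliffe Efficiency: NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
import numpy as np

def nse(observed, simulated) -> float:
    """Nash-Sutcliffe Efficiency of a simulated series against observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

obs = np.array([100.0, 120.0, 150.0, 130.0])  # invented discharge values
print(nse(obs, obs * 0.98))  # close to 1 for a near-perfect simulation
```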

Study on Point and Line Tunneling in Si, Ge, and Si-Ge Hetero Tunnel Field-Effect Transistor (Si, Ge과 Si-Ge Hetero 터널 트랜지스터의 라인 터널링과 포인트 터널링에 대한 연구)

  • Lee, Ju-chan; Ann, TaeJun; Sim, Un-sung; Yu, YunSeop
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.5 / pp.876-884 / 2017
  • The current-voltage characteristics of silicon (Si), germanium (Ge), and hetero tunnel field-effect transistors (TFETs) with a source-overlapped gate structure were investigated in terms of tunneling using TCAD simulations. A Si TFET with $SiO_2$ gate oxide showed the hump effect, in which line and point tunneling appear simultaneously, whereas one with $HfO_2$ gate oxide showed only line tunneling due to the reduced threshold voltage and performed better than the $SiO_2$ device. The tunneling mechanisms of Ge and hetero TFETs with either $SiO_2$ or $HfO_2$ gate oxide are dominated by point tunneling and show higher leakage currents, and the Si TFET performs better than the Ge and hetero TFETs in terms of subthreshold swing (SS). These simulation results for Si, Ge, and hetero TFETs with a source-overlapped gate structure can provide a guideline for optimal TFET structures with non-silicon channel materials.

A Study on the Research Model for the Standardization of Software-Similarity-Appraisal Techniques (소프트웨어 복제도 감정기법의 표준화 모델에 관한 연구)

  • Bahng, Hyo-Keun; Cha, Tae-Own; Chung, Tai-Myoung
    • The KIPS Transactions: Part D / v.13D no.6 s.109 / pp.823-832 / 2006
  • The purpose of similarity (reproduction) degree appraisal is to determine the equality or similarity between two programs; it presents the technical grounds of judgment needed to support the resolution of software intellectual-property disputes through expert review. The most important things in software appraisal are not to rely too heavily on an expert's subjective judgment and to obtain accurate appraisal results. However, standards for systematic appraisal techniques have not yet been properly developed, and because different experts may approach the problem in widely different ways, even the techniques for the various appraisal types have not been clearly defined. Moreover, analysis of previously completed appraisal cases shows that objectivity and accuracy suffered in some of the results owing to problems in existing appraisal procedures and techniques, or to a lack of expert knowledge. In this paper we present a model for standardizing software-similarity-appraisal techniques and objective evaluation methods, to reduce the variance that can arise when different experts evaluate the same items. In particular, the model analyzes and evaluates techniques from various points of view: the standard appraisal process, setting the scope of an appraisal, setting appraisal domains and detailed items based on unit processes, weighting each object to be appraised, and measuring the degree of logical and physical similarity, based on practical solutions to the problems of existing appraisal techniques and their objective and quantitative standardization. We believe this standardization model will minimize mistakes caused by an expert's subjective judgment and offer a tool for improving the objectivity and reliability of appraisal results.
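
To illustrate the weighting idea in such a model, here is a hypothetical sketch that combines per-item similarity scores into a single appraisal score using expert-set weights. The item names and weight values are invented for the example and are not taken from the paper.

```python
# Hypothetical illustration: aggregate per-item similarity scores (each in
# [0, 1]) into one appraisal score with expert-assigned weights. Items and
# weights below are invented for the example.

def appraisal_score(item_scores: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Weighted average of per-item similarity scores."""
    total_weight = sum(weights[item] for item in item_scores)
    return sum(item_scores[item] * weights[item]
               for item in item_scores) / total_weight

scores = {"source_code": 0.85, "file_structure": 0.60, "ui_layout": 0.40}
weights = {"source_code": 0.6, "file_structure": 0.25, "ui_layout": 0.15}
print(round(appraisal_score(scores, weights), 3))  # -> 0.72
```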

A Concise Korean Programming Language "Sprout" (간결한 한글 프로그래밍 언어 "새싹")

  • Cheon, Junseok; Kang, Dohun; Kim, Gunwoo; Woo, Gyun
    • Journal of KIISE / v.42 no.4 / pp.496-503 / 2015
  • Most programming languages are designed based on English, which becomes another barrier to learning programming in non-English-speaking countries. If a programming language is presented in a native language, the cost of programming education can be much lower and programming itself can be much more fun. However, designing programming languages based on native languages has received little attention or publication up to now, partly because popular programming languages evolve so fast, and partly because program efficiency is stressed more than source code. But designing programming languages based on a native language is not a small issue, especially when we consider programming education. In fact, significant efforts on Korean programming languages have been reported, but they have not been used in education in practice. This paper introduces yet another Korean programming language, Sprout, which is concise and can be easily learned by beginners. To demonstrate the conciseness of Sprout, we performed two experiments. First, we compared the sizes of programs written in Sprout with those in former Korean programming languages. Second, we compared the size of Sprout itself with those of popular programming languages such as C and Python. According to the experiments, Sprout programs are 10% more concise on average than those in former Korean languages, and Sprout itself is 24% more compact on average than the popular programming languages.

A Web Accessibility Compliance Framework for Website Development: A Case of W Bank Internet Banking Project (웹사이트 개발을 위한 웹접근성 준수 프레임워크: W 은행 인터넷 뱅킹 시스템 구축 사례)

  • Kim, Yoosin; Jeong, Seung Ryul
    • Journal of Internet Computing and Services / v.14 no.5 / pp.87-99 / 2013
  • As the Internet advances, websites with simple HTML pages are changing into complex web application systems with enormous content and various services. With this trend, situations in which the web accessibility of the elderly and the disabled is hindered are growing. To address this problem, the Disability Discrimination Act has been in force since April 2013, triggering massive website reorganization efforts. However, for huge and sophisticated web applications and websites to ensure web accessibility, a framework is required throughout website development. Based on a thorough review of website development methodologies, web accessibility compliance standards, and various accessibility issues related to website characteristics, this study proposes a practice-oriented Web Accessibility Compliance Framework. The study also examines the usefulness and value of this framework by applying it to the internet banking development project of W bank, which received a certificate as a high-quality website complying with web accessibility standards.

Emergence of Social Networked Journalism Model: A Case Study of Social News Site, "wikitree" (소셜 네트워크 저널리즘 모델의 출현: 소셜 뉴스사이트, "위키트리" 사례연구)

  • Seol, Jinah
    • Journal of Internet Computing and Services / v.16 no.1 / pp.83-90 / 2015
  • This paper examines the rising value of social networked journalism and analyzes the case of a social news site based on the theory of networked journalism. Social networked journalism allows the public to be involved in every aspect of journalism production through crowd-sourcing and interactivity. The networking effect with the public is driving journalism to transform into a more open, more networked, and more responsive venue. "wikitree" is a social networking news service on which anybody can write news and disseminate it via Facebook and Twitter. It is operated as an open-source program that incorporates "Google Translate" to automatically convert all its content, enabling any global citizen with Internet access to contribute to news production and to share either their own creative content or content gathered from other sources. Since its inception, the "wikitree global" site has been expanding its coverage rapidly, with access points arising from 160 countries. Analyzing its international coverage by country and by news category, as well as the unique visit numbers via SNS, the results of the case study imply that networking with the global public can increase news traffic to the social news site as well as to specific news items. The results also suggest that using Twitter and Facebook in social networked journalism can break the boundary between the local and global public by extending news-gathering ability while growing the audience's interest in the site, and can engender a feasible business model for local online journalism.

Implementation of Policy based In-depth Searching for Identical Entities and Cleansing System in LOD Cloud (LOD 클라우드에서의 연결정책 기반 동일개체 심층검색 및 정제 시스템 구현)

  • Kim, Kwangmin; Sohn, Yonglak
    • Journal of Internet Computing and Services / v.19 no.3 / pp.67-77 / 2018
  • This paper suggests that each LOD establish its own link policy and publish it to the LOD cloud to provide identity among entities in different LODs. For specifying the link policy, we also propose a vocabulary set founded on the RDF model. We implemented a Policy-based In-depth Searching and Cleansing (PISC) system that performs in-depth searching across LODs by referencing the link policies. PISC has been published on GitHub. Because LODs participate in the LOD cloud voluntarily, the degree of entity identity needs to be evaluated. PISC therefore evaluates identity and cleanses the searched entities, keeping only those that exceed the user's criterion for the entity-identity level. As search results, PISC provides an entity's detailed contents collected from diverse LODs, together with an ontology customized to the contents. A simulation of PISC was performed on five DBpedia LODs. We found that a similarity of 0.9 between the objects of source and target RDF triples provided an appropriate expansion ratio and inclusion ratio of search results. For sufficient identity of the searched entities, three or more target LODs should be specified in the link policy.
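
As a hedged sketch of the 0.9 object-similarity test described above, the following compares the object values of a source and a target RDF triple and accepts a link only when a string-similarity score meets the threshold. The similarity measure (difflib's ratio) and the example triples are assumptions for illustration, not PISC's actual implementation.

```python
# Sketch: accept an identity link between entities in two LODs only when the
# objects of their RDF triples are sufficiently similar (threshold 0.9, as in
# the paper's finding). Similarity measure and triples are invented examples.
from difflib import SequenceMatcher

def object_similarity(obj_a: str, obj_b: str) -> float:
    """String similarity in [0, 1] between two RDF triple objects."""
    return SequenceMatcher(None, obj_a, obj_b).ratio()

def same_entity(obj_a: str, obj_b: str, threshold: float = 0.9) -> bool:
    """Link-acceptance test used during in-depth searching."""
    return object_similarity(obj_a, obj_b) >= threshold

src = ("dbr:Seoul", "rdfs:label", "Seoul Special City")        # source triple
tgt = ("dbr-ko:Seoul", "rdfs:label", "Seoul Special Metropolitan City")
print(same_entity(src[2], tgt[2]))  # False: similarity falls below 0.9 here
```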

A Comparative Study on Similarity Measure Techniques for Cross-Project Defect Prediction (교차 프로젝트 결함 예측을 위한 유사도 측정 기법 비교 연구)

  • Ryu, Duksan; Baik, Jongmoon
    • KIPS Transactions on Software and Data Engineering / v.7 no.6 / pp.205-220 / 2018
  • Software defect prediction helps allocate valuable project resources effectively for software quality-assurance activities by focusing on the identified fault-prone modules. If sufficient historical data has been collected within a company, Within-Project Defect Prediction (WPDP) can be used for accurate fault-prone module prediction. If a company does not maintain historical data, it may be helpful to build a classifier based on Cross-Project Defect Prediction (CPDP). Since CPDP uses project data collected from other organizations to build a classifier, the main obstacle to building an accurate classifier is that the distributions of the source and target projects are not similar. Because identifying effective similarity measure techniques is crucial to obtaining high CPDP performance, in this paper we aim to identify them. We compare various similarity measure techniques and evaluate the effectiveness of the similarity weights they calculate. The results are verified using statistical significance tests and effect-size tests. They show that the k-Nearest Neighbor (k-NN), LOcal Correlation Integral (LOCI), and Range methods are the top three performers, and that predictive performance using these three methods is comparable to that of WPDP.
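
To make the notion of similarity weights concrete, here is a minimal sketch of one k-NN-style weighting scheme, with details assumed for illustration rather than taken from the paper: each source-project instance is weighted by its distance to the target project's instances, so that training emphasizes source data resembling the target distribution.

```python
# Hedged sketch of k-NN-style similarity weighting for CPDP (assumed details):
# weight each source-project instance by its mean distance to its k nearest
# target-project instances, so target-like source data counts more in training.
import numpy as np

def knn_similarity_weights(source: np.ndarray, target: np.ndarray,
                           k: int = 3) -> np.ndarray:
    """source: (n, d) metric vectors of source-project modules;
    target: (m, d) vectors of target-project modules.
    Returns one weight per source row: 1 / (1 + mean distance to its
    k nearest target rows)."""
    # pairwise Euclidean distances, shape (n, m)
    dists = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    k_nearest = np.sort(dists, axis=1)[:, :k]
    return 1.0 / (1.0 + k_nearest.mean(axis=1))

source = np.random.rand(100, 5)   # e.g. 100 modules, 5 code metrics each
target = np.random.rand(40, 5)
weights = knn_similarity_weights(source, target)  # higher = more target-like
```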