• Title/Summary/Keyword: Information search cost

A Multi-Stage Approach to Secure Digital Image Search over Public Cloud using Speeded-Up Robust Features (SURF) Algorithm

  • AL-Omari, Ahmad H.;Otair, Mohammed A.;Alzwahreh, Bayan N.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.12
    • /
    • pp.65-74
    • /
    • 2021
  • Digital image processing and retrieval have become increasingly popular on the Internet and are receiving growing attention from various multimedia fields. This places additional privacy requirements on efficient image-matching techniques in various applications. Several search methods have therefore been developed for cases where confidential images are matched between pairs of security agencies, but most of these methods are limited in either cost or precision. This study proposes a secure and efficient method that preserves image privacy and confidentiality between two communicating parties. To retrieve an image, a feature vector is extracted from the given query image, and its similarities with the feature vectors of the stored database images are calculated to retrieve the matching images based on an indexing scheme and a matching strategy. We used the Speeded-Up Robust Features (SURF) algorithm, a content-based image retrieval feature detector, over a public cloud to extract the features, together with the Honey Encryption algorithm. The purpose of using an encrypted image database is to provide accurate searching through encrypted documents without requiring decryption. Progress in this area helps protect the privacy of sensitive data stored in the cloud. The experimental results (conducted on a well-known image set) show that the proposed methodology achieves a noticeable enhancement in precision, recall, F-measure, and execution time.
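
The following is a minimal sketch of the SURF extraction-and-matching step described above, assuming `opencv-contrib-python` (SURF lives in the non-free `xfeatures2d` contrib module); the Honey Encryption layer and the cloud-side indexing scheme are omitted, and the file names, Hessian threshold, and ratio-test value are illustrative assumptions rather than the paper's settings.

```python
# Sketch of SURF feature extraction and similarity scoring for image retrieval.
# Assumes opencv-contrib-python with the non-free xfeatures2d module enabled.
import cv2

def surf_features(image_path, hessian_threshold=400):
    """Extract SURF keypoints and descriptors from a grayscale image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    return surf.detectAndCompute(img, None)

def match_score(query_desc, db_desc, ratio=0.75):
    """Count Lowe ratio-test matches between two descriptor sets."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(query_desc, db_desc, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

# Rank database images by the number of good matches against the query.
_, q_desc = surf_features("query.jpg")                  # placeholder file names
db_files = ["db_001.jpg", "db_002.jpg"]
scores = {f: match_score(q_desc, surf_features(f)[1]) for f in db_files}
print(max(scores, key=scores.get))
```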

The Comparative Software Development Cost Model Considering the Change in the Shape Parameter of the Erlang Distribution (어랑분포의 형상모수 변화에 따른 소프트웨어 개발 비용모형에 관한 비교 연구)

  • Yang, Tae-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.6
    • /
    • pp.566-572
    • /
    • 2016
  • Software reliability is one of the most important issues in software development. In finite-failure NHPP software reliability models for software failure analysis, the hazard function, which represents the failure rate, may be constant, non-increasing, or non-decreasing with respect to failure time. In this study, software development cost analysis was performed considering a variable shape parameter of the Erlang distribution as the failure life distribution in the software product testing process. The software failure model was based on a finite-failure non-homogeneous Poisson process (NHPP), and the parameters were approximated using maximum likelihood estimation. This paper thus presents a comparative analysis of the development cost model, obtained by applying software failure time data while varying the shape parameter of the Erlang distribution. When the cost curves are compared according to the shape parameter, the model with the smaller shape parameter shows a later optimal software release time and a higher cost. This study can serve as preliminary information to help software developers estimate development cost according to the software shape parameter.
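
As a worked illustration of the cost-curve comparison, the sketch below evaluates a standard software release-cost model C(T) = c1·m(T) + c2·(θ − m(T)) + c3·T, where m(t) = θ·F(t) is the finite-failure NHPP mean value function and F is the Erlang CDF; all parameter values are illustrative assumptions, not the paper's data.

```python
# Release-cost curves for a finite-failure NHPP with Erlang(k, lam) life
# distribution. All numbers are illustrative, not estimated from real data.
import math

def erlang_cdf(t, k, lam):
    """CDF of the Erlang distribution with integer shape k and rate lam."""
    return 1.0 - math.exp(-lam * t) * sum((lam * t) ** j / math.factorial(j)
                                          for j in range(k))

def cost(T, k, lam=0.1, theta=50.0, c1=1.0, c2=5.0, c3=0.2):
    """c1: fix cost per fault in testing; c2: per fault in the field (c2 > c1);
    c3: testing cost per unit time."""
    m = theta * erlang_cdf(T, k, lam)            # expected faults found by time T
    return c1 * m + c2 * (theta - m) + c3 * T

for k in (1, 2, 3):                              # compare shape parameters
    ts = [t / 10 for t in range(1, 2001)]        # grid over (0, 200]
    t_star = min(ts, key=lambda t: cost(t, k))
    print(f"shape k={k}: optimal release time ~ {t_star:.1f}, "
          f"minimum cost {cost(t_star, k):.2f}")
```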

A Study on the Attributes of Software Reliability Cost Model with Shape Parameter Change of Type-2 Gumbel Life Distribution (Type-2 Gumbel 수명분포의 형상모수 변화에 따른 소프트웨어 신뢰성 비용모형의 속성에 관한 연구)

  • Yang, Tae-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.3
    • /
    • pp.211-217
    • /
    • 2019
  • In this study, we compare and analyze the attributes of the software development cost model according to changes in the shape parameter of the Type-2 Gumbel lifetime distribution, using the NHPP model. To analyze the software failure phenomena, the parameters were estimated by maximum likelihood, and the resulting nonlinear equations were solved using the bisection method. As a result, when the cost curves are compared according to the change in the shape parameter, it is found that the larger the shape parameter, the lower the software development cost and the earlier the release time. This study is expected to help software developers estimate the development cost according to changes in the software shape parameter and to provide the necessary information on the attributes of the software development cost.
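
The sketch below illustrates the bisection step mentioned in the abstract: the maximum-likelihood score equation for the parameter α in the Type-2 Gumbel CDF F(t) = exp(−α·t^(−β)) is nonlinear, so its root is bracketed and bisected. The failure times, θ, and β below are placeholder assumptions, not the paper's data set.

```python
# Bisection solution of the MLE score equation for alpha in the Type-2 Gumbel
# CDF F(t) = exp(-alpha * t**(-beta)), inside a finite-failure NHPP with mean
# value function m(t) = theta * F(t). Data and fixed parameters are placeholders.
import math

failures = [5.0, 12.0, 31.0, 55.0, 89.0]   # cumulative failure times (placeholder)
T, beta, theta = 100.0, 1.0, 10.0          # observation end, fixed shape, fault count

def log_likelihood(alpha):
    """log L = sum log(theta * f(t_i)) - theta * F(T) for an NHPP."""
    F = lambda t: math.exp(-alpha * t ** (-beta))
    f = lambda t: F(t) * alpha * beta * t ** (-beta - 1)   # dF/dt
    return sum(math.log(theta * f(t)) for t in failures) - theta * F(T)

def score(alpha, h=1e-6):
    """Numerical derivative of the log-likelihood with respect to alpha."""
    return (log_likelihood(alpha + h) - log_likelihood(alpha - h)) / (2 * h)

def bisect(g, lo, hi, tol=1e-8):
    """Standard bisection; assumes g(lo) and g(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(f"MLE for alpha ~= {bisect(score, 0.1, 50.0):.4f}")
```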

An Efficient MBR Compression Technique for Main Memory Multi-dimensional Indexes (메인 메모리 다차원 인덱스를 위한 효율적인 MBR 압축 기법)

  • Kim, Joung-Joon;Kang, Hong-Koo;Kim, Dong-Oh;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • v.9 no.2
    • /
    • pp.13-23
    • /
    • 2007
  • Recently, there has been growing interest in LBS (Location Based Services), which require real-time services, and in spatial main-memory DBMSs for efficient telematics services. To optimize the existing disk-based multi-dimensional indexes of the spatial main-memory DBMS for main memory, multi-dimensional index structures have been proposed that minimize cache misses by reducing the entry size. However, because reducing the entry size requires compression based on the MBR of the parent node or the removal of redundant MBRs, the cost of MBR reconstruction increases during index updates and search efficiency is lowered during index search. To reduce the cost of MBR reconstruction, this paper proposes the RSMBR (Relative-Sized MBR) compression technique, which applies the base point of compression differently for broad and narrow distributions. For a broad distribution, compression is based on the lower-left point of the extended MBR of the parent node; for a narrow distribution, the whole MBR is divided into cells of equal size and compression is based on the lower-left point of each cell. In addition, MBRs are compressed using relative coordinates and sizes to reduce the cost of index search. Lastly, we evaluated the performance of the proposed RSMBR compression technique using real data and demonstrated its superiority.
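
The sketch below illustrates relative MBR compression in the spirit of RSMBR: each child MBR is stored as a quantized lower-left offset plus a quantized size relative to the parent MBR's lower-left point, so entries shrink to a few small integers while the decoded MBR still conservatively covers the child. The 8-bit quantization width is an assumption for illustration; the paper's exact encoding (including the cell-based narrow-distribution case) may differ.

```python
# Relative-sized MBR compression sketch: children are encoded as quantized
# offsets/sizes relative to the parent MBR's lower-left point.
import math
from dataclasses import dataclass

@dataclass
class MBR:
    x1: float
    y1: float
    x2: float
    y2: float

def compress(child, parent, bits=8):
    """Encode child as integers (dx, dy, w, h) relative to parent's lower-left."""
    scale = (1 << bits) - 1
    sx = scale / (parent.x2 - parent.x1)
    sy = scale / (parent.y2 - parent.y1)
    dx = int((child.x1 - parent.x1) * sx)              # relative coordinate (floor)
    dy = int((child.y1 - parent.y1) * sy)
    w = math.ceil((child.x2 - parent.x1) * sx) - dx    # relative size, rounded up
    h = math.ceil((child.y2 - parent.y1) * sy) - dy    # so decoding covers the child
    return dx, dy, w, h

def decompress(code, parent, bits=8):
    """Decode back to a slightly enlarged absolute MBR."""
    dx, dy, w, h = code
    scale = (1 << bits) - 1
    ux = (parent.x2 - parent.x1) / scale
    uy = (parent.y2 - parent.y1) / scale
    return MBR(parent.x1 + dx * ux, parent.y1 + dy * uy,
               parent.x1 + (dx + w) * ux, parent.y1 + (dy + h) * uy)

parent = MBR(0, 0, 1000, 1000)
child = MBR(120.3, 410.7, 250.9, 480.2)
code = compress(child, parent)                         # 4 small ints vs 4 floats
print(code, decompress(code, parent))
```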

Systematic Literature Review on Cloud Adoption

  • Bagiwa, Idris Lawal;Ghani, Imran;Younas, Muhammad;Bello, Mannir
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.8 no.2
    • /
    • pp.1-22
    • /
    • 2016
  • While many organizations believe that cloud computing has the potential to reduce operational cost by abstracting capital assets such as data storage centers and processing systems into readily available, on-demand, affordable operating expenses, many of these organizations are still not aware of the factors that determine the performance of cloud computing technology. This paper provides a systematic literature review focusing on the factors determining the performance of cloud computing. To compile this review, the following sources were searched for relevant articles: ScienceDirect, Scientific.Net, ACM Digital Library, IEEE Xplore, Springer, World Scientific Journal, Wiley Online Library, Academic Search Premier (via EBSCOHost), and EdITLib (Education & Information Technology Digital Library). In the first search strategy, approximately 100 keywords related to the research domain, such as "Cloud Computing" and "Cloud Services", were used. In the second search strategy, 65 keywords more closely related to the research domain were selected. In the third search strategy, the primary materials were identified and classified according to paper type (journal or conference), year of publication, and so on. Based on this study, twenty (20) factors were found that determine the performance of cloud computing. IT organizations need to consider these twenty (20) factors in order to adopt cloud computing.

A Probabilistic Filtering Technique for Improving the Efficiency of Local Search (국지적 탐색의 효율향상을 위한 확률적 여과 기법)

  • Kang, Byoung-Ho;Ryu, Kwang-Ryel
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.3
    • /
    • pp.246-254
    • /
    • 2007
  • Local search algorithms start from a certain candidate solution and probe its neighborhood to find ones of improved quality. This paper proposes a method of probabilistically filtering out bad-looking neighbors based on a simple, low-cost preliminary evaluation heuristic. The probabilistic filtering saves the time otherwise wasted on fully evaluating solutions that would eventually be discarded, and thus improves search efficiency by allowing more time to be spent examining better-looking solutions. Experiments with two large-scale real-world problems, a traffic signal control problem in a traffic network and a load balancing problem in production scheduling, have shown that the proposed method finds better-quality solutions given the same amount of CPU time.
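
A minimal sketch of the idea follows, assuming a cheap preliminary heuristic `quick_score` that approximates an expensive `full_evaluate`: worse-looking neighbors get only a small probability of full evaluation. The Boltzmann-style filter rule and the toy problem are illustrative assumptions, not the paper's exact scheme.

```python
# Probabilistic filtering for local search: candidates that look worse under a
# cheap heuristic are fully evaluated only with a small probability.
import math
import random

def filtered_local_search(start, neighbors, quick_score, full_evaluate,
                          temperature=1.0, iterations=2000):
    best, best_cost = start, full_evaluate(start)
    for _ in range(iterations):
        cand = random.choice(neighbors(best))
        delta = quick_score(cand) - quick_score(best)
        # Filter: the worse the candidate looks, the less likely we pay for
        # the expensive full evaluation.
        if delta > 0 and random.random() > math.exp(-delta / temperature):
            continue
        cost = full_evaluate(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

# Toy usage: minimize x**2 over the integers with a noisy cheap estimate.
neighbors = lambda x: [x - 1, x + 1]
quick = lambda x: x * x + random.uniform(-5.0, 5.0)   # cheap, approximate
full = lambda x: x * x                                # expensive, exact
print(filtered_local_search(40, neighbors, quick, full))
```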

An Alternative Modeling for Lot-sizing and Scheduling Problem with a Decomposition Based Heuristic Algorithm (로트 크기 결정 문제의 새로운 혼합정수계획법 모형 및 휴리스틱 알고리즘 개발)

  • Han, Junghee;Lee, Youngho;Kim, Seong-in;Park, Eunkyung
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.33 no.3
    • /
    • pp.373-380
    • /
    • 2007
  • In this paper, we consider a new lot-sizing and scheduling problem (LSSP) that minimizes the sum of production cost, setup cost, and inventory cost. Setup carry-over and overlapping, as well as demand splitting, are considered. Also, the maximum number of setups for each time period is not limited. For this LSSP, we have formulated a mixed integer programming (MIP) model whose size does not increase even if we divide a time period into a number of micro time periods. We have also developed an efficient heuristic algorithm by combining a decomposition scheme with a local search procedure. Test results show that the developed heuristic algorithm finds good-quality (in practice, even better) feasible solutions using far less computation time than CPLEX, a competitive MIP solver.
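
For orientation, the sketch below formulates a basic capacitated lot-sizing MIP in PuLP; the paper's richer model (setup carry-over, overlapping, demand splitting) is omitted here, and all demands, costs, and capacities are toy numbers.

```python
# A basic capacitated lot-sizing MIP in PuLP (solved with the bundled CBC).
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

items, periods = range(2), range(4)
demand = [[30, 0, 50, 20], [10, 40, 0, 30]]          # demand[i][t]
p_cost, f_cost, h_cost, cap = 2.0, 100.0, 1.0, 80    # unit, setup, holding, capacity

m = LpProblem("lot_sizing", LpMinimize)
x = [[LpVariable(f"x_{i}_{t}", lowBound=0) for t in periods] for i in items]    # production
y = [[LpVariable(f"y_{i}_{t}", cat=LpBinary) for t in periods] for i in items]  # setup
s = [[LpVariable(f"s_{i}_{t}", lowBound=0) for t in periods] for i in items]    # inventory

m += lpSum(p_cost * x[i][t] + f_cost * y[i][t] + h_cost * s[i][t]
           for i in items for t in periods)
for i in items:
    for t in periods:
        prev = s[i][t - 1] if t > 0 else 0
        m += prev + x[i][t] - demand[i][t] == s[i][t]   # inventory balance
        m += x[i][t] <= cap * y[i][t]                   # produce only after setup
for t in periods:
    m += lpSum(x[i][t] for i in items) <= cap           # shared capacity

m.solve()
print("total cost:", m.objective.value())
```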

Optimal Number of Super-peers in Clustered P2P Networks (클러스터 P2P 네트워크에서의 최적 슈퍼피어 개수)

  • Kim Sung-Hee;Kim Ju-Gyun;Lee Sang-Kyu;Lee Jun-Soo
    • The KIPS Transactions:PartC
    • /
    • v.13C no.4 s.107
    • /
    • pp.481-490
    • /
    • 2006
  • In a super-peer based P2P network, the network is clustered and each cluster is managed by a special peer, called a super-peer, which has information about all peers in its cluster. This clustered P2P model is known to provide efficient information search and a lower traffic load. In this paper, we first estimate the message traffic cost caused by peers' query, join, and update actions within a cluster as well as between clusters, and with these values we present the optimal number of super-peers that minimizes the traffic cost for various sizes of super-peer based P2P networks.
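
The underlying trade-off can be illustrated with a toy cost model, as in the sketch below: intra-cluster join/update traffic falls as clusters shrink (more super-peers), while inter-cluster query traffic grows with the number of super-peers. The cost model and rates are an illustrative stand-in, not the paper's estimates.

```python
# Toy traffic model: k super-peers over n peers. Intra-cluster join/update
# traffic scales with cluster size n/k; inter-cluster query traffic scales
# with k (queries forwarded among super-peers). Rates q and u are assumptions.
def traffic_cost(k, n, q=1.0, u=0.5):
    cluster_size = n / k
    intra = n * u * cluster_size      # join/update messages handled inside clusters
    inter = n * q * (k - 1)           # queries forwarded to the other super-peers
    return intra + inter

n = 10_000
k_star = min(range(1, n + 1), key=lambda k: traffic_cost(k, n))
print(f"optimal number of super-peers for n={n}: {k_star}")
```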

An Update-Efficient, Disk-Based Inverted Index Structure for Keyword Search on Data Streams (데이터 스트림에 대한 키워드 검색을 위한, 효율적인 갱신이 가능한 디스크 기반 역색인 구조)

  • Park, Eun Ju;Lee, Ki Yong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.4
    • /
    • pp.171-180
    • /
    • 2016
  • As social networking services such as Twitter become increasingly popular, data streams are widely prevalent these days. To efficiently search the data accumulated from data streams, the use of an index structure is essential. In this paper, we propose an update-efficient, disk-based inverted index structure for efficient keyword search on data streams. When new data arrive on the data stream, the index needs to be updated to incorporate them. The traditional inverted index is very inefficient to update in terms of disk I/O, because all index data stored on disk must be read and written back each time the index is updated. To solve this problem, we divide the whole inverted index into a sequence of inverted indices of exponentially increasing size. New data are first inserted into the smallest index and, later, the small indices are merged with the larger ones, which leads to a small amortized update cost for each new data item. Furthermore, when indices stored on disk are merged with each other, we minimize the disk I/O cost incurred by the merge operation, resulting in an even smaller update cost. Through various experiments, we compare the update efficiency of the proposed index structure with that of the previous one, and show the performance advantage of the proposed structure in terms of update cost.
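
The sketch below illustrates the exponentially sized index sequence (the same logarithmic-merge idea used in LSM trees): new postings go into the smallest index, and same-sized indices are merged into the next level, so each document pays only a small amortized update cost. Disk segments are simulated with in-memory dictionaries for brevity; this is a sketch of the general technique, not the paper's exact structure.

```python
# Logarithmic-merge inverted index: level i holds roughly base * 2**i documents.
from collections import defaultdict

class LogMergeInvertedIndex:
    def __init__(self, base_capacity=4):
        self.base = base_capacity
        self.memory = defaultdict(set)   # smallest index: term -> doc ids
        self.levels = []                 # each entry: (segment dict, doc count) or None
        self.mem_docs = 0

    def add(self, doc_id, terms):
        for t in terms:
            self.memory[t].add(doc_id)
        self.mem_docs += 1
        if self.mem_docs >= self.base:
            self._flush()

    def _flush(self):
        segment, size = dict(self.memory), self.mem_docs
        self.memory, self.mem_docs = defaultdict(set), 0
        level = 0
        # Merge with each occupied level until an empty slot is found.
        while level < len(self.levels) and self.levels[level] is not None:
            old_seg, old_size = self.levels[level]
            for term, docs in old_seg.items():
                segment.setdefault(term, set()).update(docs)
            size += old_size
            self.levels[level] = None
            level += 1
        if level == len(self.levels):
            self.levels.append(None)
        self.levels[level] = (segment, size)

    def search(self, term):
        result = set(self.memory.get(term, ()))
        for entry in self.levels:
            if entry:
                result |= entry[0].get(term, set())
        return result

idx = LogMergeInvertedIndex()
for i, text in enumerate(["cloud cost", "search cost", "data stream", "keyword search"]):
    idx.add(i, text.split())
print(idx.search("cost"))   # -> {0, 1}
```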

Redesigning Radio Networks Considering Frequency Demands and Frequency Reassignment Cost (주파수 수요와 주파수 재할당 비용을 고려한 무선통신 네트워크 재설계)

  • Han, Junghee
    • Journal of Information Technology Services
    • /
    • v.10 no.1
    • /
    • pp.117-133
    • /
    • 2011
  • In this paper, we present a frequency reassignment problem (FRP) arising from the reconfiguration of radio networks, such as adding new base stations (BSs) and changing the number of frequencies assigned to BSs. For this problem, we develop an integer programming (IP) model that minimizes the sum of the frequency reassignment cost and the cost of unsatisfied frequency demands, while avoiding interference among frequencies. To obtain tight lower bounds, we develop some valid inequalities and devise an objective function relaxation scheme. We also develop a simple but efficient heuristic procedure to solve large instances. Computational results show that the developed valid inequalities are effective in improving the lower bounds, and that the proposed tabu search heuristic finds tight upper bounds with an average optimality gap of 2.3%.
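
A minimal tabu search sketch for frequency (re)assignment follows: it minimizes the reassignment cost plus a penalty for interfering station pairs assigned frequencies closer than a required separation. The instance data, move rule, and weights are illustrative assumptions, not the paper's heuristic.

```python
# Tabu search sketch for frequency reassignment: cost = number of changed
# frequencies + heavy penalty per interfering pair violating its separation.
stations = range(5)
current = [1, 2, 1, 7, 3]                  # existing assignment (placeholder)
freqs = range(1, 11)                       # available channels
separation = {(0, 1): 2, (1, 2): 2, (2, 3): 1, (3, 4): 2}   # min channel distance

def cost(assign, reassign_w=1.0, interfere_w=100.0):
    moved = sum(a != b for a, b in zip(assign, current)) * reassign_w
    clash = sum(interfere_w for (i, j), sep in separation.items()
                if abs(assign[i] - assign[j]) < sep)
    return moved + clash

def tabu_search(iters=300, tenure=7):
    sol, best = list(current), list(current)
    tabu = {}                               # (station, freq) -> release iteration
    for it in range(iters):
        moves = [(i, f) for i in stations for f in freqs
                 if f != sol[i] and tabu.get((i, f), -1) < it]
        i, f = min(moves, key=lambda m: cost(sol[:m[0]] + [m[1]] + sol[m[0] + 1:]))
        tabu[(i, sol[i])] = it + tenure     # forbid undoing this move for a while
        sol[i] = f
        if cost(sol) < cost(best):
            best = list(sol)
    return best, cost(best)

print(tabu_search())
```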