• Title/Summary/Keyword: fast search


A Study on the Establishment of Cybercrime Business Model(CBM) through a Systematic Literature Review (체계적 문헌 연구를 통한 사이버범죄 비즈니스 모델(CBM) 구축)

  • Park, Ji-Yong;Lee, Heesang
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.6
    • /
    • pp.646-661
    • /
    • 2020
  • Technological innovation and fast-growing internet businesses are changing the paradigm of traditional business management and affecting society in many ways. The development of internet technology also amplifies the adverse side effects of innovation; in particular, computer-related cybercrime continues to increase with each wave of technological change. The purpose of this study is to construct a cybercrime business model (CBM) by applying business model canvas (BMC) theory to cybercrime, in order to help reduce it, and to apply and analyze the model against the types of cybercrime found in Korea. A systematic literature review was conducted to determine the components of cybercrime, and 60 relevant documents were selected through a keyword-based literature search. Qualitative analysis of the classified literature then derived 18 sub-blocks and nine building blocks of cybercrime. Applying BMC theory to these components and redefining them appropriately, the study builds the CBM. Finally, the developed CBM is applied to cybercrime in Korea to help cyber incident-response staff understand cybercrime analytically. This study contributes a new analysis framework that can help reduce cybercrime.

2-Dimensional Bitmap Tries for Fast Packet Classification (고속 패킷 분류를 위한 2차원 비트맵 트라이)

  • Seo, Ji-hee;Lim, Hye-sook
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.9
    • /
    • pp.1754-1766
    • /
    • 2015
  • Packet classification carried out in Internet routers is a challenging task, because it must be performed at wire speed over five header fields simultaneously. In this paper, we propose a leaf-pushed AQT bitmap trie. The proposed architecture applies leaf-pushing to an area-based quad-trie (AQT) to reduce unnecessary off-chip memory accesses. It also applies a bitmap trie, a kind of multi-bit trie, to improve search performance and scalability. For performance evaluation, simulations were conducted using the rule sets ACL, FW, and IPC, with sizes of 1k, 5k, and 10k. Simulation results show that the number of off-chip memory accesses is less than one regardless of set type or set size. Additionally, because the proposed architecture uses a bitmap trie, the required number of on-chip memory accesses is 50% of that of the leaf-pushed AQT trie. The proposed architecture also scales well in on-chip memory size: the required memory grows stably as the rule sets grow.
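
The paper's structure is a two-dimensional leaf-pushed AQT, whose internals go beyond the abstract; as a minimal illustration of the bitmap-trie idea it builds on, here is a sketch of a one-dimensional stride-2 bitmap trie for longest-prefix matching, where each node keeps a child bitmap and a compacted child array instead of a full pointer array. The stride and all names are illustrative assumptions, not the paper's design.

```python
# Minimal sketch of a stride-2 bitmap trie for longest-prefix matching.
# Each node keeps a bitmap over its 4 possible children plus a compact
# child list, which is the space-saving idea behind bitmap tries.
# (Illustrative only; the paper's structure is a 2-D leaf-pushed AQT.)

class BitmapTrieNode:
    def __init__(self):
        self.child_bitmap = 0       # bit i set => child exists for 2-bit value i
        self.children = []          # compacted child list, in bitmap order
        self.rule = None            # rule stored at this node, if any

    def child(self, idx):
        """Return the child for 2-bit value idx, or None."""
        if not (self.child_bitmap >> idx) & 1:
            return None
        # rank = number of set bits below idx = position in compact list
        rank = bin(self.child_bitmap & ((1 << idx) - 1)).count("1")
        return self.children[rank]

    def set_child(self, idx, node):
        rank = bin(self.child_bitmap & ((1 << idx) - 1)).count("1")
        if (self.child_bitmap >> idx) & 1:
            self.children[rank] = node
        else:
            self.child_bitmap |= 1 << idx
            self.children.insert(rank, node)

def insert(root, bits, rule):
    """Insert a rule under an even-length bit-string prefix, e.g. '1011'."""
    node = root
    for i in range(0, len(bits), 2):        # consume 2 bits per level
        idx = int(bits[i:i + 2], 2)
        nxt = node.child(idx)
        if nxt is None:
            nxt = BitmapTrieNode()
            node.set_child(idx, nxt)
        node = nxt
    node.rule = rule

def longest_match(root, bits):
    """Walk the trie, remembering the deepest rule seen."""
    node, best = root, root.rule
    for i in range(0, len(bits), 2):
        node = node.child(int(bits[i:i + 2], 2))
        if node is None:
            break
        if node.rule is not None:
            best = node.rule
    return best

root = BitmapTrieNode()
insert(root, "10", "rule-A")
insert(root, "1011", "rule-B")
print(longest_match(root, "10111100"))      # -> rule-B
```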

Design and Implementation of a Main-Memory Database System for Real-time Mobile GIS Application (실시간 모바일 GIS 응용 구축을 위한 주기억장치 데이터베이스 시스템 설계 및 구현)

  • Kang, Eun-Ho;Yun, Suk-Woo;Kim, Kyung-Chang
    • The KIPS Transactions:PartD
    • /
    • v.11D no.1
    • /
    • pp.11-22
    • /
    • 2004
  • As random access memory chips get cheaper, it becomes affordable to realize main memory-based database systems. Consequently, reducing cache misses has emerged as the most important issue in current main memory databases, in which CPU speeds have been increasing at 60% per year while memory speeds increase at 10% per year. In this paper, we design and implement a main-memory database system for real-time mobile GIS. Our system is composed of five modules: the interface manager provides the interface for PDA users; the memory data manager controls spatial and non-spatial data in main memory using virtual memory techniques; the query manager processes spatial and non-spatial queries; the index manager manages the MR-tree index for spatial data and the T-tree index for non-spatial data; and the GIS server interface provides the interface to disk-based GIS. The proposed MR-tree propagates node splits upward only if one of the internal nodes on the insertion path has empty space; thus, the internal nodes of the MR-tree are almost 100% full. Our experimental study shows that the two-dimensional MR-tree performs search up to 2.4 times faster than the ordinary R-tree. To use virtual memory techniques, the memory data manager maintains page tables for spatial data, non-spatial data, the T-tree, and the MR-tree, and it uses indirect addressing techniques for fast reloading from disk.
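
Among the structures mentioned, the T-tree used for the non-spatial index is a classic main-memory index: each node holds a bounded sorted array of keys, and search descends like a binary tree by comparing the key against the node's smallest and largest entries. A minimal sketch of T-tree search follows (not the paper's code; node capacity limits and rebalancing are omitted):

```python
# Minimal sketch of T-tree search: each node keeps a small sorted array
# of keys; search descends left/right by comparing against the node's
# bounding (min, max) keys, then binary-searches inside the node.
import bisect

class TTreeNode:
    def __init__(self, keys):
        self.keys = sorted(keys)          # bounded sorted array
        self.left = None                  # subtree with keys < self.keys[0]
        self.right = None                 # subtree with keys > self.keys[-1]

def search(node, key):
    while node is not None:
        if key < node.keys[0]:
            node = node.left              # too small: go left
        elif key > node.keys[-1]:
            node = node.right             # too large: go right
        else:                             # bounded by this node
            i = bisect.bisect_left(node.keys, key)
            return i < len(node.keys) and node.keys[i] == key
    return False

root = TTreeNode([40, 50, 60])
root.left = TTreeNode([10, 20, 30])
root.right = TTreeNode([70, 80, 90])
print(search(root, 20), search(root, 55))   # -> True False
```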

An Efficient Data Structure to Obtain Range Minima in Constant Time in Constructing Suffix Arrays (접미사 배열 생성 과정에서 구간 최소간 위치를 상수 시간에 찾기 위한 효율적인 자료구조)

  • Park, Hee-Jin
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.3_4
    • /
    • pp.145-151
    • /
    • 2004
  • We present an efficient data structure for obtaining the range minima of an array in constant time. Recently, suffix arrays have been used extensively for fast searching of DNA sequences in bioinformatics, and solving the range minima problem is necessary for constructing suffix arrays. When we construct suffix arrays, we should solve the range minima problem not only time-efficiently but also space-efficiently, because DNA sequences consist of millions or billions of bases. Until now, the most efficient data structure for finding the range minima of an array in constant time has been based on the method that converts the range minima problem on an array into the LCA (lowest common ancestor) problem on a Cartesian tree, and then converts the LCA problem back into a range minima problem on a specific array. This data structure occupies O(n) space and is constructed in O(n) time. However, since it includes the intermediate data structures required for these conversions, it requires a large amount of space (13n) and considerable time. Our data structure is based on a method that solves the range minima problem directly; thus it requires less space (5n) and less time in practice, while still requiring O(n) time and space theoretically.
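
The abstract does not spell out the layout of the paper's 5n-space structure, so the sketch below shows the textbook sparse-table approach to constant-time range-minimum queries (O(n log n) preprocessing and space rather than the paper's O(n)) as a point of comparison:

```python
# Standard sparse-table RMQ: O(n log n) preprocessing/space, O(1) query.
# (The paper's structure achieves O(n) space; this is the textbook
# baseline for constant-time range-minimum queries.)
def build_sparse_table(a):
    n = len(a)
    table = [list(range(n))]             # level 0: min index of a[i:i+1]
    k = 1
    while (1 << k) <= n:                 # level k covers length-2^k ranges
        prev, half, row = table[-1], 1 << (k - 1), []
        for i in range(n - (1 << k) + 1):
            l, r = prev[i], prev[i + half]
            row.append(l if a[l] <= a[r] else r)
        table.append(row)
        k += 1
    return table

def range_min_index(a, table, i, j):
    """Index of the minimum of a[i..j], inclusive, in O(1) time."""
    k = (j - i + 1).bit_length() - 1     # largest power of two <= length
    l, r = table[k][i], table[k][j - (1 << k) + 1]
    return l if a[l] <= a[r] else r

a = [3, 1, 4, 1, 5, 9, 2, 6]
t = build_sparse_table(a)
print(range_min_index(a, t, 2, 6))       # -> 3 (a[3] == 1)
```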

Development of a back analysis program for reasonable derivation of tunnel design parameters (합리적인 터널설계정수 산정을 위한 역해석 프로그램 개발)

  • Kim, Young-Joon;Lee, Yong-Joo
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.15 no.3
    • /
    • pp.357-373
    • /
    • 2013
  • In this paper, a back analysis program for analyzing the behavior of the tunnel-ground system and evaluating material properties and tunnel design parameters was developed. The program implements back analysis of underground structures by combining FLAC with an optimization algorithm in a direct approach. In particular, the Rosenbrock method, which performs a direct search without requiring derivatives, was adopted as the back analysis algorithm among the optimization methods. The program was applied in the field to evaluate design parameters, with back analysis carried out using field measurement results from five sites. In the course of the back analysis, nonlinear regression was performed to identify the optimal function describing the measured ground displacement; an exponential function and a fractional function were used for the regression, and the total displacement calculated from the optimal function was used as the back analysis input. As a result, displacement recalculated through back analysis from the measured structural displacement agreed with the measured data within an error of 4.5%. Hence, the program developed in this study proved to be effectively applicable to tunnel analysis.
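
The regression step fits measured displacements with an exponential or fractional function and feeds the converged (total) displacement back in. A minimal sketch of that step, assuming an exponential convergence form u(t) = a(1 - exp(-bt)) and hypothetical measurement data (neither is from the paper):

```python
# Sketch of the regression step: fit an assumed exponential convergence
# curve u(t) = a * (1 - exp(-b * t)) to measured displacements, then take
# the asymptote a as the "total displacement" for the back analysis input.
import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Hypothetical convergence measurements: day vs. crown displacement (mm)
days = np.array([1, 2, 4, 7, 14, 28], dtype=float)
disp = np.array([2.1, 3.5, 5.2, 6.6, 7.6, 8.0])

(a, b), _ = curve_fit(exp_model, days, disp, p0=(8.0, 0.2))
print(f"total displacement ~ {a:.2f} mm (rate {b:.3f}/day)")
```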

Determining the number of Clusters in On-Line Document Clustering Algorithm (온라인 문서 군집화에서 군집 수 결정 방법)

  • Jee, Tae-Chang;Lee, Hyun-Jin;Lee, Yill-Byung
    • The KIPS Transactions:PartB
    • /
    • v.14B no.7
    • /
    • pp.513-522
    • /
    • 2007
  • Clustering divides given data and automatically discovers hidden meanings in the data. It analyzes data that are difficult for people to examine in detail and forms clusters of data with similar characteristics. An on-line document clustering system, which groups similar documents from search engine results, aims to improve the convenience of information retrieval. Document clustering is done automatically without human intervention, so the number of clusters, which affects the clustering result, should be decided automatically too. Moreover, one requirement of an on-line system is a guaranteed fast response time. This paper proposes a method for determining the number of clusters automatically from geometrical information. The proposed method consists of two stages: in the first stage, cluster centers are projected onto a low-dimensional plane, and in the second stage, clusters are merged based on the distances between the projected cluster centers. Experiments on real data show that clustering performance improves and the response time is suitable for an on-line environment.
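
As a rough sketch of the two-stage idea (the projection method, merge rule, and threshold below are illustrative assumptions, not the paper's): project the cluster centers onto a 2-D plane, then merge clusters whose projected centers fall within a distance threshold:

```python
# Sketch of the two-stage idea: (1) project cluster centers onto a 2-D
# plane with PCA, (2) merge clusters whose projected centers are close.
# The merge threshold is an illustrative assumption, not the paper's rule.
import numpy as np
from sklearn.decomposition import PCA

def merge_close_clusters(centers, threshold):
    centers = np.asarray(centers)
    proj = PCA(n_components=2).fit_transform(centers)   # stage 1
    # stage 2: union-find over pairs of close projected centers
    parent = list(range(len(proj)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]               # path halving
            i = parent[i]
        return i
    for i in range(len(proj)):
        for j in range(i + 1, len(proj)):
            if np.linalg.norm(proj[i] - proj[j]) < threshold:
                parent[find(i)] = find(j)               # merge clusters
    groups = {}
    for i in range(len(proj)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())        # merged groups of cluster indices

centers = [[0, 0, 0], [0.1, 0, 0], [5, 5, 5], [5, 5.2, 5]]
print(merge_close_clusters(centers, threshold=1.0))
# -> [[0, 1], [2, 3]]: the determined number of clusters is 2
```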

Extracting Silhouettes of a Polyhedral Model from a Curved Viewpoint Trajectory (곡선 궤적의 이동 관측점에 대한 다면체 모델의 윤곽선 추출)

  • Kim, Gu-Jin;Baek, Nak-Hun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.8 no.2
    • /
    • pp.1-7
    • /
    • 2002
  • Fast extraction of the silhouettes of a model is very useful for many applications in computer graphics and animation. In this paper, we present an efficient algorithm to compute a sequence of perspective silhouettes of a polyhedral model from a moving viewpoint. The viewpoint is assumed to move along a trajectory q(t), a space curve in a time parameter t. We can then compute, for each edge of the model, the time intervals during which it is contained in the silhouette, by two major computations: (i) intersecting q(t) with two planes and (ii) a number of dot products. If q(t) is a curve of degree n, there are at most n + 1 time intervals during which an edge belongs to the silhouette. For each time point $t_i$, we extract silhouette edges by searching for the intervals containing $t_i$ among the computed intervals. For efficient search, we propose two data structures for storing the intervals: an interval tree and an array. Our algorithm can be easily extended to compute parallel silhouettes with minor modifications.
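
For a fixed viewpoint, an edge of a polyhedron lies on the silhouette exactly when one of its two adjacent faces is front-facing and the other is back-facing, which is the dot-product test underlying the interval computation. A small sketch of that per-viewpoint test (the paper extends it to the whole trajectory q(t) by solving for the sign changes over time; the geometry below is illustrative):

```python
# Sketch of the per-viewpoint silhouette test: an edge is on the
# silhouette when its two adjacent faces face the viewpoint with
# opposite signs. The paper turns these sign conditions into time
# intervals along the trajectory q(t); here we test one viewpoint.
import numpy as np

def facing_sign(normal, point_on_face, viewpoint):
    # > 0: front-facing, < 0: back-facing (perspective test)
    return np.dot(normal, viewpoint - point_on_face)

def is_silhouette_edge(face1, face2, viewpoint):
    """Each face is a (normal, point_on_face) pair of np arrays."""
    s1 = facing_sign(*face1, viewpoint)
    s2 = facing_sign(*face2, viewpoint)
    return s1 * s2 < 0                  # opposite orientation signs

# Two faces sharing an edge at a cube corner (illustrative data)
top  = (np.array([0., 0., 1.]), np.array([0., 0., 1.]))
side = (np.array([1., 0., 0.]), np.array([1., 0., 0.]))
eye  = np.array([5.0, 0.0, 0.5])        # sees 'side' but not 'top'
print(is_silhouette_edge(top, side, eye))   # -> True
```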


A Study on Tracking Algorithm for Moving Object Using Partial Boundary Line Information (부분 외곽선 정보를 이용한 이동물체의 추척 알고리즘)

  • Jo, Yeong-Seok;Lee, Ju-Sin
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.539-548
    • /
    • 2001
  • In this paper, we propose a fast tracking algorithm that separates a moving object from the background using partial boundary line information. After detecting boundary lines in the input image, we track the moving object using an algorithm that takes boundary line information as the feature of the moving object. We extract motion vectors from input images with environmental variation using a high-performance block matching algorithm (BMA), and extract the moving object based on these motion vectors. Next, as the initial feature-vector generation step, we extract the boundary line of the moving object and take a part of the boundary line in each direction as the feature vector. Then, we extract motion vectors from the feature vectors generated from the boundary line information of the moving object in the previous frame, and track the moving object in the current frame. As a result, the proposed algorithm, using feature vectors generated from the directional boundary lines, has a lower tracking cost than previous active contour tracking algorithms, whose processing time varies with the boundary line size of the moving object. Simulations show that BMA computation is reduced by about 39% on real images and the tracking error is less than 2 pixels when the feature vector size is [$10{\times}5$] using the directional boundary line information. Moreover, the proposed algorithm needs only 200 search operations, whereas the processing cost of the previous algorithm varies with the boundary line size.
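
A minimal sketch of the block-matching step (BMA) the tracker relies on: find a block's displacement between frames by minimizing the sum of absolute differences (SAD) over a search window. Block size, search radius, and the test data are illustrative, not the paper's settings:

```python
# Minimal block-matching (BMA) sketch: find the motion vector of a block
# by exhaustive SAD search in a window around its previous position.
import numpy as np

def block_match(prev, curr, top, left, size=8, radius=4):
    """Return (dy, dx) minimizing SAD for the block at (top, left)."""
    block = prev[top:top + size, left:left + size].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue                    # candidate block out of frame
            cand = curr[y:y + size, x:x + size].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))     # scene shifted by (2, 3)
print(block_match(prev, curr, top=16, left=16))     # -> (2, 3)
```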


Core Competency of Content Intermediary and Improvement in Content Distribution Channel: Focused on Broadcasting Content Download Market (온라인 방송콘텐츠 유통 중개업자의 핵심 역량과 유통구조 개선효과에 관한 사례 (방송콘텐츠 다운로드 시장을 중심으로))

  • Kim, Yoo-Jung;Kim, Kwan-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.9
    • /
    • pp.254-266
    • /
    • 2011
  • Wired and mobile Internet access has increased the size of the online broadcasting content market. In particular, fast-growing smart devices such as smartphones and tablet PCs in the mobile Internet market have accelerated the growth of the online broadcasting content service market. Meanwhile, illegal distribution of online broadcasting content has become widespread, and the market carries significant transaction costs, search costs, and contracting and coordination costs. The MCP (master content provider), a content distributor, has played a critical role in preventing illegal content distribution, reducing these costs, and removing inefficiencies from the online broadcasting content market. The purpose of this study is therefore to investigate, from the resource-based view, the competencies an MCP needs to streamline the online broadcasting content market. The study uses a case study to describe the state of the online broadcasting content market and to define its problems and issues systematically. The case study also shows how MCP competency helps reduce administrative and transaction costs and resolve illegal content distribution and other inefficiencies in the online broadcasting content market.

A Design and Development of Big Data Indexing and Search System using Lucene (루씬을 이용한 빅데이터 인덱싱 및 검색시스템의 설계 및 구현)

  • Kim, DongMin;Choi, JinWoo;Woo, ChongWoo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.6
    • /
    • pp.107-115
    • /
    • 2014
  • Recently, increased use of the internet has generated large and diverse types of data, owing to the growth of social media, expanding convergence among industries, and the use of various smart devices. It is difficult to manage and analyze such data with previous data processing techniques, because the volume of data is huge and its forms vary and evolve rapidly; a new approach is needed. Among the many approaches being studied on this issue, we describe an effective design and implementation of the indexing engine of a big data platform. Our goal is to build a system that can effectively manage huge data sets exceeding the range of previous data processing, and that can reduce data analysis time. We used large SNMP log data for the experiment and reduced data analysis time through a fast indexing and searching approach. We also expect our approach to help in analyzing user data through visualization of the analyzed data.
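
The actual system builds on Lucene; as a library-agnostic sketch of what the indexing/search core does (this is not the Lucene API), here is a tiny inverted index over SNMP-style log lines:

```python
# Library-agnostic sketch of the indexing/search core (NOT the Lucene
# API): build an inverted index mapping terms to document ids, so that
# search becomes dictionary lookups plus a set intersection.
from collections import defaultdict

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # term -> set of doc ids
        self.docs = []

    def add(self, text):
        doc_id = len(self.docs)
        self.docs.append(text)
        for term in text.lower().split():  # trivial whitespace tokenizer
            self.postings[term].add(doc_id)

    def search(self, query):
        """AND-query: return docs containing every query term."""
        terms = query.lower().split()
        if not terms:
            return []
        hits = set.intersection(*(self.postings[t] for t in terms))
        return [self.docs[i] for i in sorted(hits)]

idx = InvertedIndex()
idx.add("snmp linkDown trap from switch-3 port 12")
idx.add("snmp coldStart trap from router-1")
print(idx.search("snmp trap switch-3"))    # -> the linkDown log line
```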