• Title/Summary/Keyword: fast-algorithm


Development of Earthquake Early Warning System nearby Epicenter based on P-wave Multiple Detection (진원지 인근 지진 조기 경보를 위한 선착 P파 다중 탐지 시스템 개발)

  • Lee, Taehee;Noh, Jinseok;Hong, Seungseo;Kim, YoungSeok
    • Journal of the Korean Geosynthetics Society / v.18 no.4 / pp.107-114 / 2019
  • In this paper, a P-wave multiple-detection system for fast and accurate earthquake early warning near the epicenter was developed. The developed systems were installed in five selected public buildings for validation. During the monitoring period, a magnitude 2.3 earthquake occurred in Pohang on 26 September 2019. The P-wave initial-detection algorithm operated in three of the four systems installed in the Pohang area and recorded the event as a seismic event. At the nearest station, 5.5 km from the epicenter, the P-wave signal was detected 1.2 seconds after the earthquake occurred, and the S-wave arrived 1.02 seconds after the P-wave, providing some alarm time. The maximum accelerations recorded at the three stations were 6.28 gal, 6.1 gal, and 5.3 gal, respectively. The alarm algorithm did not trigger because the maximum-ground-acceleration threshold for activation (25.1 gal) was set too high. If continuous monitoring and analysis are carried out in the future, the developed system could serve as a highly effective earthquake warning system suited to the domestic situation.
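
The abstract does not spell out the detection algorithm; a classical STA/LTA (short-term over long-term average) trigger is the standard technique behind such first-arrival P-wave detectors, so the sketch below is an illustrative stand-in (the window lengths and trigger ratio are assumed values, not the paper's):

```python
# Minimal STA/LTA first-arrival trigger sketch (illustrative parameters).

def sta_lta_trigger(samples, sta_len=5, lta_len=20, ratio=3.0):
    """Return the first index where the short-term average of absolute
    amplitude exceeds `ratio` times the long-term average, or None."""
    for i in range(lta_len, len(samples)):
        sta = sum(abs(x) for x in samples[i - sta_len:i]) / sta_len
        lta = sum(abs(x) for x in samples[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta >= ratio:
            return i
    return None

# Quiet background followed by a sudden onset trips the detector
# a few samples after the onset at index 40.
trace = [0.1] * 40 + [2.0] * 10
onset = sta_lta_trigger(trace)
```

In practice the window lengths and trigger ratio are tuned per station and sampling rate.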

Big Data Based Dynamic Flow Aggregation over 5G Network Slicing

  • Sun, Guolin;Mareri, Bruce;Liu, Guisong;Fang, Xiufen;Jiang, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.10 / pp.4717-4737 / 2017
  • Today, smart grids, smart homes, smart water networks, and intelligent transportation are infrastructure systems that connect our world more than we ever thought possible, and they are associated with a single concept: the Internet of Things (IoT). The number of devices connected to the IoT, and hence the number of traffic flows, increases continuously, along with the emergence of new applications. Although cutting-edge hardware can be employed to handle these huge data streams quickly, there will always be a limit on the traffic volume supported by a given architecture. Fortunately, recent cloud-based big data technologies offer an ideal environment to handle this issue. Moreover, the ever-increasing volume of traffic created on demand presents great challenges for flow management. As a solution, flow aggregation decreases the number of flows the network needs to process. Previous works in the literature show that most aggregation strategies designed for smart grids aim at optimizing system operation performance: they aggregate traffic on each device under a common identifier, with each device keeping its own independent static aggregation policy. In this paper, we propose a dynamic approach that aggregates flows based on traffic characteristics and device preferences. Our algorithm runs on a big data platform to provide end-to-end network visibility of flows, performing high-speed, high-volume computations to identify clusters of similar flows and aggregate a massive number of mice flows into a few meta-flows. Compared with existing solutions, our approach dynamically aggregates a large number of such small flows into fewer flows, based on traffic characteristics and access-node preferences. Using this approach, we alleviate the problem of processing a large number of micro flows and also significantly improve the accuracy of meeting access-node QoS demands. We conducted experiments using a dataset of up to 100,000 flows and studied the performance of our algorithm analytically. The experimental results show the promising effectiveness and scalability of the proposed approach.
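
As a rough illustration of the aggregation idea (not the paper's clustering algorithm, which runs on a big data platform), small flows that share an access node and a similar rate can be collapsed into meta-flows; the features and bucket size here are assumptions of this sketch:

```python
# Sketch: aggregate "mice" flows into meta-flows by (access node,
# quantized rate). Feature choice and bucket size are illustrative.
from collections import defaultdict

def aggregate_flows(flows, rate_bucket=10.0):
    """flows: list of (access_node, rate). Flows falling in the same
    (node, rate-bucket) cell become one meta-flow whose rate is the
    sum of its members."""
    meta = defaultdict(float)
    for node, rate in flows:
        key = (node, int(rate // rate_bucket))
        meta[key] += rate
    return dict(meta)

# Four flows collapse into three meta-flows.
flows = [("A", 3.0), ("A", 4.0), ("A", 12.0), ("B", 2.0)]
meta = aggregate_flows(flows)
```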

A Study on Multi-modal Near-IR Face and Iris Recognition on Mobile Phones (휴대폰 환경에서의 근적외선 얼굴 및 홍채 다중 인식 연구)

  • Park, Kang-Ryoung;Han, Song-Yi;Kang, Byung-Jun;Park, So-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.2 / pp.1-9 / 2008
  • As the security requirements of mobile phones have been increasing, there has been extensive research on authentication using a single biometric feature (e.g., an iris, a fingerprint, or a face image). Due to the limitations of uni-modal biometrics, we propose a method that combines face and iris images to improve accuracy in mobile environments. This paper presents four advantages and contributions over previous research. First, to capture both face and iris images quickly and simultaneously, we use the conventional built-in megapixel camera of a mobile phone, revised to capture NIR (Near-InfraRed) face and iris images. Second, to increase the authentication accuracy of face and iris, we propose a score-level fusion method based on an SVM (Support Vector Machine). Third, to reduce the classification complexity of the SVM and the intra-class variation of the face and iris data, we normalize the input face and iris data, respectively. For the face, an NIR illuminator and an NIR-passing filter on the camera reduce the illumination variation caused by environmental visible lighting, and the saturated facial region caused by the NIR illuminator is normalized by a low-cost logarithmic algorithm suited to the mobile phone. For the iris, transformation of the image into polar coordinates and iris-code shifting are used to obtain robust identification accuracy irrespective of the capture conditions. Fourth, to increase processing speed on the mobile phone, we use integer-based face and iris authentication algorithms. Experiments were conducted with face and iris images captured by the megapixel camera of a mobile phone. They showed that the authentication accuracy using the SVM was better than that of the uni-modal approaches (face or iris alone) and the SUM, MAX, MIN, and weighted-SUM rules.
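
At test time, a trained linear SVM used for score-level fusion reduces to the sign of a weighted sum of the two matching scores. The weights and bias below are illustrative placeholders, not values from the paper:

```python
# Score-level fusion sketch: accept when the fused decision value of
# the (face, iris) score pair is positive. Weights are illustrative.

def svm_fuse(face_score, iris_score, w=(0.6, 0.8), b=-0.7):
    """Linear-SVM-style decision: accept (True) iff w . s + b > 0."""
    value = w[0] * face_score + w[1] * iris_score + b
    return value > 0

# A strong iris score compensates for a weak face score,
# while two weak scores are rejected.
accept = svm_fuse(0.3, 0.9)  # 0.18 + 0.72 - 0.7 > 0
reject = svm_fuse(0.3, 0.3)  # 0.18 + 0.24 - 0.7 < 0
```

The fixed SUM/MAX/MIN rules the paper compares against would replace the learned weighted sum with `s1 + s2`, `max(s1, s2)`, or `min(s1, s2)` against a threshold.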

Page Group Search Model : A New Internet Search Model for Illegal and Harmful Content (페이지 그룹 검색 그룹 모델 : 음란성 유해 정보 색출 시스템을 위한 인터넷 정보 검색 모델)

  • Yuk, Hyeon-Gyu;Yu, Byeong-Jeon;Park, Myeong-Sun
    • Journal of KIISE:Computer Systems and Theory / v.26 no.12 / pp.1516-1528 / 1999
  • Illegal and harmful content on the Internet, especially content for adults, causes social problems in many countries. To protect children from harmful content, two components are necessary: filtering software, which blocks user access to harmful content based on a blocking list, and a harmful-content search system, a special-purpose Internet search system that generates the blocking list. We found that current Internet search models do not satisfy the requirements of the harmful-content search system: high accuracy in document analysis, fast search time, and low overhead in the filtering software. In this paper, we point out that these problems are caused by a mistaken definition of "document" in current Internet search models, and we propose a new Internet search model, the Page Group Search Model, based on a new definition of a document on the World Wide Web: a set of pages made for one subject. We suggest a Group Construction algorithm and a Group Evaluation algorithm, and we show through experiments and analysis that the Page Group Search Model achieves higher accuracy and faster search than existing Internet search models and produces a blocking list that is easy to manage in the filtering software.
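
As a loose sketch of the page-group idea, pages can be grouped into one logical document and then flagged as a group; approximating a group by URL host is an assumption of this sketch (the paper's Group Construction algorithm is more elaborate), as are the scores and threshold:

```python
# Sketch: group pages into per-subject "documents" (approximated here
# by URL host) and flag a whole group when its mean score is high.
from urllib.parse import urlsplit

def build_groups(pages):
    """pages: list of (url, harmfulness_score in [0, 1])."""
    groups = {}
    for url, score in pages:
        groups.setdefault(urlsplit(url).netloc, []).append(score)
    return groups

def harmful_groups(groups, threshold=0.5):
    """Return hosts whose mean page score reaches the threshold."""
    return sorted(host for host, scores in groups.items()
                  if sum(scores) / len(scores) >= threshold)

groups = build_groups([("http://a.example/1", 0.9),
                       ("http://a.example/2", 0.7),
                       ("http://b.example/1", 0.1)])
flagged = harmful_groups(groups)
```

Emitting one entry per group, rather than per page, is what keeps the blocking list small and manageable for the filtering software.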

Development of a back analysis program for reasonable derivation of tunnel design parameters (합리적인 터널설계정수 산정을 위한 역해석 프로그램 개발)

  • Kim, Young-Joon;Lee, Yong-Joo
    • Journal of Korean Tunnelling and Underground Space Association / v.15 no.3 / pp.357-373 / 2013
  • In this paper, a back analysis program for analyzing the behavior of the tunnel-ground system and evaluating material properties and tunnel design parameters was developed. The program implements back analysis of underground structures by combining FLAC with an optimization algorithm as a direct method. In particular, the Rosenbrock method, which performs a direct search without computing derivatives, was adopted as the back-analysis algorithm among the optimization methods. The program was applied in the field to evaluate design parameters, with back analysis carried out using field measurement results from five sites. In the course of the back analysis, nonlinear regression was performed to identify the optimum function fitting the measured ground displacement; an exponential function and a fractional function were used for the regression, and the total displacement calculated from the optimum function served as the back-analysis input. As a result, the displacement recalculated through back analysis from the measured structural displacement showed an error of 4.5% compared with the measured data. Hence, the program developed in this study proved to be effectively applicable to tunnel analysis.
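
The Rosenbrock method's key property, direct search without derivatives, can be sketched as a step-adaptive probe along each axis (the axis-rotation step of the full method is omitted for brevity, and the misfit function below is an illustrative quadratic, not a FLAC model):

```python
# Derivative-free direct search in the spirit of the Rosenbrock method:
# probe each axis, expand the step on success, contract on failure.

def direct_search(f, x0, step=0.5, tol=1e-6, max_iter=2000):
    x = list(x0)
    steps = [step] * len(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (steps[i], -steps[i]):
                trial = x[:]
                trial[i] += s
                if f(trial) < f(x):
                    x, improved = trial, True
                    steps[i] *= 3.0   # expand on success
                    break
            else:
                steps[i] *= 0.5       # contract on failure
        if not improved and max(steps) < tol:
            break
    return x

# Minimize a quadratic misfit between "measured" and "computed" values;
# the minimum is at (2, -1).
best = direct_search(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2,
                     [0.0, 0.0])
```

In the paper's setting `f` would be the misfit between FLAC-computed and measured displacements, with `x` the ground parameters being back-calculated.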

Classifier Selection using Feature Space Attributes in Local Region (국부적 영역에서의 특징 공간 속성을 이용한 다중 인식기 선택)

  • Shin Dong-Kuk;Song Hye-Jeong;Kim Baeksop
    • Journal of KIISE:Software and Applications / v.31 no.12 / pp.1684-1690 / 2004
  • This paper presents a method for classifier selection that uses distribution information of the training samples in a small region surrounding a sample. The conventional DCS-LA (Dynamic Classifier Selection - Local Accuracy) selects a classifier dynamically by comparing the local accuracy of each classifier at test time, which inevitably requires a long classification time. In the proposed approach, by contrast, the best classifier for each local region is stored in an FSA (Feature Space Attribute) table during training, and testing is done by simply referring to the table. This enables fast classification because no local-accuracy estimation is needed at test time. Two feature space attributes are used: the entropy and the density of the k training samples around each sample. Each sample in the feature space is mapped to a point in the attribute space defined by these two attributes. The attribute space is divided into regular rectangular cells, to each of which the local accuracy of each classifier is attached; the cells with their associated local accuracies comprise the FSA table. At test time, the cell to which a test sample belongs is determined by calculating its two attributes, and the most accurate classifier for that cell is chosen from the FSA table. To show the effectiveness of the proposed algorithm, it is compared with the conventional DCS-LA using the Elena database. The experiments show that the accuracy of the proposed algorithm is almost the same as that of DCS-LA, while classification is about four times faster.
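
A minimal sketch of the FSA-table mechanism: training fills a cell-indexed table with the locally best classifier, and testing is a single lookup. The cell size, attribute values, and classifier names here are illustrative:

```python
# FSA-table sketch: map (entropy, density) attributes to a grid cell,
# remember the best classifier per cell, select by lookup at test time.

def build_fsa_table(train_points, cell=0.25):
    """train_points: list of ((entropy, density), best_classifier)."""
    table = {}
    for (e, d), clf in train_points:
        table[(int(e // cell), int(d // cell))] = clf
    return table

def select_classifier(table, entropy, density, cell=0.25, default="knn"):
    """Single-lookup selection; fall back to a default for empty cells."""
    return table.get((int(entropy // cell), int(density // cell)), default)

table = build_fsa_table([((0.1, 0.9), "svm"), ((0.8, 0.2), "mlp")])
choice = select_classifier(table, 0.12, 0.88)  # same cell as (0.1, 0.9)
```

The speedup over DCS-LA comes precisely from moving the local-accuracy computation from test time (per query) to training time (per cell).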

Determining the number of Clusters in On-Line Document Clustering Algorithm (온라인 문서 군집화에서 군집 수 결정 방법)

  • Jee, Tae-Chang;Lee, Hyun-Jin;Lee, Yill-Byung
    • The KIPS Transactions:PartB / v.14B no.7 / pp.513-522 / 2007
  • Clustering divides given data and automatically finds the hidden meanings in them: it analyzes data that are difficult for people to inspect in detail and forms several clusters of data with similar characteristics. An on-line document clustering system, which groups similar documents from search-engine results, aims to increase convenience in the information retrieval area. Document clustering is performed automatically, without human interference, so the number of clusters, which affects the clustering result, should be decided automatically too. In addition, an on-line system must guarantee a fast response time. This paper proposes a method of determining the number of clusters automatically from geometrical information. The proposed method is composed of two stages: in the first stage, the cluster centers are projected onto a low-dimensional plane, and in the second stage, clusters are combined based on the distances between cluster centers in that plane. Experiments with real data showed that clustering performance improved and that the response time is suitable for on-line circumstances.
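
The second stage, combining clusters whose projected centers lie close together, can be sketched as follows (the distance threshold and the 2D centers are assumed values for illustration):

```python
# Sketch: merge cluster centers (already projected to a 2D plane)
# whenever they are closer than a threshold; the number of surviving
# groups becomes the cluster count.

def merge_centers(centers, threshold=1.0):
    """Union-find style merging of 2D centers by pairwise distance."""
    parent = list(range(len(centers)))

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            (x1, y1), (x2, y2) = centers[i], centers[j]
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < threshold:
                parent[find(j)] = find(i)
    return len({find(i) for i in range(len(centers))})

# Two nearby centers merge, leaving two clusters out of three.
k = merge_centers([(0.0, 0.0), (0.4, 0.3), (5.0, 5.0)])
```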

Adaptive Image Content-Based Retrieval Techniques for Multiple Queries (다중 질의를 위한 적응적 영상 내용 기반 검색 기법)

  • Hong Jong-Sun;Kang Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.3 s.303 / pp.73-80 / 2005
  • Recently there have been many efforts to support searching and browsing based on the visual content of images and multimedia data. Most existing approaches to content-based image retrieval rely on query-by-example or on user-specified low-level features such as color, shape, and texture, but such queries are neither easy to use nor expressive. In this paper, we propose a method for automatic color-object extraction and labelling to support multiple queries in a content-based image retrieval system. Our approach simplifies the regions within images using a single colorizing algorithm and extracts color objects using the proposed Color and Spatial based Binary tree map (CSB tree map). By scanning a large number of the processed regions, an index for the database is created with the proposed labelling method. This allows very fast indexing of images by their color content and spatial attributes. Furthermore, information about the labelled regions, such as color set, size, and location, enables versatile multiple queries that combine both color content and the spatial relationships of regions. We demonstrated the high performance of the proposed system through experiments comparing it with another algorithm on the 'Washington' image database.
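
A hedged sketch of color-based region labelling in this spirit (the CSB tree map itself is not reproduced; the coarse two-level palette and flood-fill labelling below are stand-ins for the paper's colorizing and labelling steps):

```python
# Sketch: quantize pixels to a small palette ("single colorizing"),
# then label 4-connected runs of the same palette index as color
# objects. Palette resolution is an illustrative choice.

def quantize(pixel, levels=2):
    """Map an (r, g, b) pixel to a coarse palette index."""
    r, g, b = (c * levels // 256 for c in pixel)
    return (r * levels + g) * levels + b

def label_regions(image):
    """image: 2D list of (r, g, b). Flood-fill same-palette regions
    and return (label grid, number of labelled color objects)."""
    h, w = len(image), len(image[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if labels[y][x] < 0:
                key = quantize(image[y][x])
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w \
                            and labels[cy][cx] < 0 \
                            and quantize(image[cy][cx]) == key:
                        labels[cy][cx] = next_label
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
                next_label += 1
    return labels, next_label

# A 2x2 image with a red L-shape and one blue pixel yields two objects.
img = [[(250, 0, 0), (250, 0, 0)],
       [(250, 0, 0), (0, 0, 250)]]
labels, count = label_regions(img)
```

Per-object records (palette index, size, bounding box) built from such labels are what make combined color-and-spatial queries answerable from the index alone.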

Extracting Silhouettes of a Polyhedral Model from a Curved Viewpoint Trajectory (곡선 궤적의 이동 관측점에 대한 다면체 모델의 윤곽선 추출)

  • Kim, Gu-Jin;Baek, Nak-Hun
    • Journal of the Korea Computer Graphics Society / v.8 no.2 / pp.1-7 / 2002
  • Fast extraction of the silhouettes of a model is very useful for many applications in computer graphics and animation. In this paper, we present an efficient algorithm to compute a sequence of perspective silhouettes of a polyhedral model from a moving viewpoint. The viewpoint is assumed to move along a trajectory q(t), a space curve in a time parameter t. We can then compute the time intervals during which each edge of the model is contained in the silhouette by two major computations: (i) intersecting q(t) with two planes and (ii) a number of dot products. If q(t) is a curve of degree n, then there are at most n + 1 time intervals during which an edge is in a silhouette. For each time point $t_i$, we can extract the silhouette edges by searching, among the computed intervals, those containing $t_i$. For efficient searching, we propose two data structures for storing the intervals: an interval tree and an array. Our algorithm can easily be extended to compute parallel silhouettes with minor modifications.
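
The per-edge test underlying the interval computation can be sketched directly: an edge lies on the silhouette exactly when the viewpoint is on opposite sides of the two face planes adjacent to that edge (a sign test on the plane equations, i.e. a dot product plus offset). The cube-edge planes below are illustrative:

```python
# Silhouette-edge test sketch: evaluate the two adjacent face planes
# at the viewpoint; opposite signs mean one face is front-facing and
# the other back-facing, so the shared edge is a silhouette edge.

def side(plane, point):
    """Evaluate plane (a, b, c, d) at point: a*x + b*y + c*z + d."""
    a, b, c, d = plane
    x, y, z = point
    return a * x + b * y + c * z + d

def is_silhouette_edge(plane1, plane2, viewpoint):
    return side(plane1, viewpoint) * side(plane2, viewpoint) < 0

# Edge shared by the top face (z = 1) and front face (y = -1) of a
# unit cube, with outward normals +z and -y respectively:
top = (0.0, 0.0, 1.0, -1.0)
front = (0.0, -1.0, 0.0, -1.0)
on = is_silhouette_edge(top, front, (0.0, -5.0, 0.0))   # sees front only
off = is_silhouette_edge(top, front, (0.0, -5.0, 5.0))  # sees both faces
```

Substituting the curve q(t) for the viewpoint turns each plane evaluation into a degree-n polynomial in t, whose sign changes delimit the paper's time intervals.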


Development of M2M Simulator for Mobile Network using Knapsack Algorithm (Knapsack 알고리즘을 이용한 모바일 네트워크용 M2M 시뮬레이터 개발)

  • Lee, Sun-Sik;Jang, Jong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.11 / pp.2661-2667 / 2013
  • Recently, at home and abroad, the Internet of Things (IoT/M2M) era, in which things participate as communication subjects in the formerly human-centered communication paradigm, is in full swing. Communication functions are being installed in cars, refrigerators, bicycles, and even shoes, generating information and creating a new converged IT service infrastructure. Its uses and applications are broadening into various areas, and the number of devices, and thus the amount of information transmitted per object, keeps increasing. When traffic reaches its limit while data are transmitted over the mobile network from devices divided into groups, M2M communication services might not be processed smoothly. This study used the Knapsack Problem algorithm to build a virtual simulator for maintaining smooth M2M service when the mobile network used for M2M communications reaches its capacity limit. When data arrive from each group of devices, the simulator gives priority to the M2M communications that must be processed first and defers subsequent services. As M2M technology develops and many objects become more compact, this approach would help M2M services be processed smoothly on mobile networks with fast-increasing traffic.
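
The underlying Knapsack formulation, choosing the highest-priority subset of pending messages that fits the remaining bandwidth, can be sketched with the classic 0/1 dynamic program (the sizes, priorities, and capacity below are illustrative, not values from the simulator):

```python
# 0/1 knapsack DP sketch: maximize total priority of the M2M message
# groups admitted under a bandwidth capacity.

def knapsack(items, capacity):
    """items: list of (size, priority), integer sizes.
    Returns the maximum total priority that fits in `capacity`."""
    best = [0] * (capacity + 1)
    for size, priority in items:
        # Iterate capacity downward so each item is used at most once.
        for cap in range(capacity, size - 1, -1):
            best[cap] = max(best[cap], best[cap - size] + priority)
    return best[capacity]

# Three message groups compete for 5 units of bandwidth; admitting
# the single high-priority group beats the two smaller ones combined.
total = knapsack([(2, 3), (3, 4), (4, 8)], 5)
```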