• Title/Summary/Keyword: Split-Algorithm

Search Results: 316

An Implementation of an Edge-based Algorithm for Separating and Intersecting Spherical Polygons (구 볼록 다각형 들의 분리 및 교차를 위한 간선 기반 알고리즘의 구현)

  • Ha, Jong-Seong;Cheon, Eun-Hong
    • Journal of KIISE: Computer Systems and Theory / v.28 no.9 / pp.479-490 / 2001
  • In this paper, we consider the method of partitioning a sphere into faces with a set of spherical convex polygons $\Gamma = \{P_1, \ldots, P_n\}$ for determining the maximum or minimum intersection. This problem is closely related to five geometric problems: finding the densest hemisphere containing the maximum subset of $\Gamma$, a great circle separating $\Gamma$, a great circle bisecting $\Gamma$, and a great circle intersecting the minimum or maximum subset of $\Gamma$. In order to efficiently compute the minimum or maximum intersection of spherical polygons, we take the approach of edge-based partition, in which the ownerships of edges rather than faces are manipulated as the sphere is incrementally partitioned by each of the polygons. Finally, by gathering the unordered split edges with the maximum number of ownerships, we approximately obtain the centroids of the solution faces without constructing their boundaries. Our algorithm for finding the maximum intersection is analyzed to have an efficient time complexity O(nv), where n and v, respectively, are the numbers of polygons and all vertices. Furthermore, it is practical from the viewpoint of implementation, since it computes numerical values robustly and deals with all the degenerate cases. Using a similar approach, the boundary of a general intersection can be constructed in O(nv + L log L) time, where L is the output-sensitive number of solution edges.
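
A basic primitive that an edge-based spherical-partition scheme like this relies on is testing whether a point on the unit sphere lies inside a spherical convex polygon. The sketch below is illustrative only (it is not the paper's algorithm and ignores the degenerate cases the paper handles); it assumes the polygon's vertices are unit vectors listed counter-clockwise as seen from outside the sphere:

```python
import numpy as np

def inside_spherical_convex_polygon(p, vertices, eps=1e-12):
    """Test whether unit vector p lies inside a spherical convex polygon.

    Each edge (a, b) spans a great circle whose plane passes through the
    origin; p is inside exactly when it lies on the non-negative side of
    every such edge plane.
    """
    n = len(vertices)
    for i in range(n):
        a, b = vertices[i], vertices[(i + 1) % n]
        # cross(a, b) is the normal of the plane through the sphere's
        # center and the edge; its sign tells which side p falls on.
        if np.dot(np.cross(a, b), p) < -eps:
            return False
    return True

# The positive octant as a spherical triangle.
octant = [np.array([1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0]),
          np.array([0.0, 0.0, 1.0])]
center = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # inside the octant
outside = np.array([-1.0, 0.0, 0.0])                # outside the octant
```
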

Making Cache-Conscious CCMR-trees for Main Memory Indexing (주기억 데이타베이스 인덱싱을 위한 CCMR-트리)

  • 윤석우;김경창
    • Journal of KIISE: Databases / v.30 no.6 / pp.651-665 / 2003
  • Reducing cache misses has emerged as the most important issue for today's main memory databases, in which CPU speeds have been increasing at 60% per year while memory speeds increase at only 10% per year. Recent research has demonstrated that cache-conscious index structures such as the CR-tree outperform the R-tree variants. The CR-tree's search performance can be poorer than that of the original R-tree, however, since it uses a lossy compression scheme. In this paper, we propose an alternative cache-conscious version of the R-tree, which we call the MR-tree. The MR-tree propagates node splits upward only if one of the internal nodes on the insertion path has empty room; thus, the internal nodes of the MR-tree are almost 100% full. In case there is no empty room on the insertion path, a newly-created leaf simply becomes a child of the split leaf. Because the height of the MR-tree then depends on the sequence of inserted objects, the HeightBalance algorithm is executed when unbalanced heights of child nodes are detected. Additionally, we propose the CCMR-tree in order to build an even more cache-conscious MR-tree. Our experimental and analytical study shows that the two-dimensional MR-tree performs search up to 2.4 times faster than the ordinary R-tree while maintaining slightly better update performance and using similar memory space.
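
The split rule described above can be mimicked with a toy decision function (hypothetical fanout and node layout, not the paper's implementation): a leaf split propagates upward only when some internal node on the insertion path still has room; otherwise the new leaf hangs under the split leaf, which keeps internal nodes full but can unbalance subtree heights until a HeightBalance pass runs.

```python
FANOUT = 4  # hypothetical maximum number of entries per node

def handle_leaf_split(path_occupancies):
    """Decide what to do after a leaf split in this MR-tree sketch.

    path_occupancies: entry counts of the internal nodes on the
    insertion path, root first.  Returns 'propagate' when some internal
    node can absorb the extra entry, else 'attach-under-split-leaf'.
    """
    if any(count < FANOUT for count in path_occupancies):
        return 'propagate'
    # No room anywhere on the path: the newly-created leaf simply
    # becomes a child of the split leaf, so internal nodes stay full
    # (the source of the temporary height imbalance noted above).
    return 'attach-under-split-leaf'
```
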

Development of a real-time surface image velocimeter using an android smartphone (스마트폰을 이용한 실시간 표면영상유속계 개발)

  • Yu, Kwonkyu;Hwang, Jeong-Geun
    • Journal of Korea Water Resources Association / v.49 no.6 / pp.469-480 / 2016
  • The present study aims to develop a real-time surface image velocimeter (SIV) using an Android smartphone, which can measure river surface velocity using the phone's built-in sensors and processors. First, the SIV system determines the location of the site using the phone's GPS. It also measures the pitch and roll angles of the device using its orientation sensors, to determine the coordinate transform from real-world coordinates to image coordinates. The only parameter to be entered manually is the height of the phone above the water surface. After setup, the phone's camera takes a series of images. With the help of OpenCV, an open source computer vision library, we split the video into frames and analyzed the image frames to obtain the water surface velocity field. The image processing algorithm, similar to the traditional STIV (spatio-temporal image velocimetry), is based on a correlation analysis of spatio-temporal images. The SIV system can measure an instantaneous velocity field (a 1-second-averaged velocity field) once every 11 seconds. By averaging these instantaneous measurements over a sufficient amount of time, we can obtain an average velocity field. A series of tests performed in an experimental flume showed that the developed measurement system is effective and convenient. Compared with measurements by a traditional propeller velocimeter, the system's results showed a maximum error of 13.9% and an average error of less than 10%.
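
The STIV-style correlation analysis described above boils down to finding the displacement that maximizes the correlation between intensity profiles sampled a known time apart; velocity is then displacement over time. A minimal numpy sketch of that idea (illustrative only, with synthetic data; not the app's actual pipeline):

```python
import numpy as np

def estimate_velocity(line_t0, line_t1, dt, pixel_size):
    """Estimate surface velocity from two 1-D intensity profiles taken
    along the flow direction, dt seconds apart.

    The displacement is the lag maximizing the cross-correlation of the
    mean-removed profiles; velocity = displacement * pixel_size / dt.
    """
    a = line_t0 - line_t0.mean()
    b = line_t1 - line_t1.mean()
    corr = np.correlate(b, a, mode='full')   # lags -(N-1) .. N-1
    lag = np.argmax(corr) - (len(a) - 1)     # pixels moved during dt
    return lag * pixel_size / dt

# Synthetic check: a tracer pattern shifted by 5 pixels between frames.
rng = np.random.default_rng(0)
frame0 = rng.random(200)
frame1 = np.roll(frame0, 5)   # surface moved 5 pixels during dt
v = estimate_velocity(frame0, frame1, dt=1.0, pixel_size=0.01)  # m/pixel
```
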

Estimation of Trip Matrices from Traffic Counts : An Equilibrium Approach (교통망 평형 조건하에서 링크 교통량 자료를 이용한 기종점 통행표 추정방법에 관한 연구)

  • 오재학
    • Journal of Korean Society of Transportation / v.10 no.1 / pp.55-62 / 1992
  • Travel demand strongly influences the formulation and evaluation of transport policies and transport infrastructure plans, so travel demand forecasting occupies an important place in transport research. In recent years, many techniques have been developed for estimating the origin-destination (OD) trip matrix of a planning area using, as the main input, link traffic counts collected automatically by electronic vehicle detectors installed in the road. Compared with the conventional four-stage estimation method (trip generation, trip distribution, modal split, and traffic assignment), which is based on data collected through home interviews and road-side interviews, these new techniques have several advantages: first, highly accurate link traffic counts can be obtained without a separate survey, so they are economical, with almost no survey cost; second, they are simple, requiring none of the complicated model building and parameter calibration demanded by conventional demand forecasting methods; and third, an OD trip matrix built long ago can easily be updated using only link traffic counts, which enables continuous data ageing and, furthermore, so-called continuous transport planning and management. For these reasons they have attracted much attention and have steadily been applied in practice. This study reviews the existing techniques for estimating OD trip matrices from link traffic counts, particularly under the capacity-restrained condition, proposes a new estimation technique that satisfies Wardrop's first network equilibrium principle, and discusses its test results. The fundamental goal of such techniques is to find, among the OD trip matrices capable of reproducing assigned link volumes closest to the observed traffic counts, the optimal OD trip matrix. The choice of a realistic network traffic assignment model that best reflects travelers' route choices in the network is therefore an important element, and when the network is heavily congested, special treatment is required to account for the network traffic equilibrium condition. To overcome the drawback of the ME2 (Maximum Entropy Matrix Estimation) technique developed by Willumsen (Hall, Van Vliet and Willumsen, 1980), namely that its sequential estimation method cannot obtain an OD trip matrix satisfying Wardrop's equilibrium condition, this study proposes a new mathematical model that solves the entropy maximization problem and the network equilibrium condition simultaneously, together with a simultaneous solution algorithm for it. The proposed model and algorithm were tested on an example network, and the results were compared with the OD trip matrix obtained by ME2's sequential estimation method.

Algorithm for Primary Full-thickness Skin Grafting in Pediatric Hand Burns

  • Park, Yang Seo;Lee, Jong Wook;Huh, Gi Yeun;Koh, Jang Hyu;Seo, Dong Kook;Choi, Jai Koo;Jang, Young Chul
    • Archives of Plastic Surgery / v.39 no.5 / pp.483-488 / 2012
  • Background Pediatric hand burns are a difficult problem because they lead to serious hand deformities with functional impairment due to rapid growth during childhood. Therefore, adequate management is required beginning in the acute stage. Our study aims to establish surgical guidelines for primary full-thickness skin grafting (FTSG) in pediatric hand burns, based on long-term observation periods and existing studies. Methods From January 2000 to May 2011, 210 patients underwent primary FTSG. We retrospectively studied the clinical course and treatment outcomes based on the patients' medical records. The patients' demographics, age, sex, injury site of the fingers, presence of web space involvement, incidence of postoperative late deformities, and time to revision were critically analyzed. Results The mean age of the patients was 24.4 months (range, 8 to 94 months); there were 141 males and 69 females. The overall observation period was 6.9 years (range, 1 to 11 years) on average. At the time of the burn, 56 cases involved a single finger, 73 two fingers, 45 three fingers, and 22 more than three. Among these cases, 70 burns included a web space (33.3%). During the observation, 25 cases underwent corrective operations, after an average period of 40.6 months. Conclusions In the volar area, primary full-thickness skin grafting can be a good indication for an isolated injured finger, excluding the web spaces, and for injuries of fewer than three fingers including the web spaces. In the dorsal area, full-thickness skin grafting can also be a good indication. However, if the donor site is insufficient and the wound is large, split-thickness skin grafting can be considered.

A New Wideband Speech/Audio Coder Interoperable with ITU-T G.729/G.729E (ITU-T G.729/G.729E와 호환성을 갖는 광대역 음성/오디오 부호화기)

  • Kim, Kyung-Tae;Lee, Min-Ki;Youn, Dae-Hee
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.2 / pp.81-89 / 2008
  • Wideband speech, characterized by a bandwidth of about 7 kHz (50-7000 Hz), provides a substantial quality improvement in terms of naturalness and intelligibility. Although higher data rates are required, its applications have extended to audio and video conferencing, high-quality multimedia communications over mobile links or packet-switched transmissions, and digital AM broadcasting. In this paper, we present a new bandwidth-scalable coder for wideband speech and audio signals. The proposed coder splits the 8 kHz signal bandwidth into two narrow bands, and different coding schemes are applied to each band. The lower-band signal is coded using the ITU-T G.729/G.729E coder, and the higher-band signal is compressed using a new algorithm based on the gammatone filter bank with an invertible auditory model. Due to the split-band architecture and the completely independent coding schemes for each band, the decoder output can be selected to be narrowband or wideband according to the channel condition. Subjective tests showed that, for wideband speech and audio signals, the proposed coder at 14.2/18 kbit/s produces quality superior to ITU-T 24 kbit/s G.722.1, with a shorter algorithmic delay.
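
The split-band architecture described above separates the signal into a lower and a higher band that are coded independently and recombined at the decoder. A minimal FFT-based sketch of such a band split (illustrative only; the actual coder uses G.729/G.729E for the low band and a gammatone filter bank for the high band):

```python
import numpy as np

def split_bands(x, fs, cutoff_hz):
    """Split signal x (sampled at fs) into low and high bands by
    zeroing FFT bins above/below cutoff_hz.  The bins partition the
    spectrum, so low + high reconstructs x exactly, mirroring how the
    decoder can sum the two decoded bands into a wideband output."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    low_bins = freqs <= cutoff_hz
    low = np.fft.irfft(np.where(low_bins, X, 0.0), n=len(x))
    high = np.fft.irfft(np.where(low_bins, 0.0, X), n=len(x))
    return low, high

# 16 kHz "wideband" test signal: a 300 Hz tone plus a 5 kHz tone.
fs = 16000
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 5000 * t)
low, high = split_bands(x, fs, cutoff_hz=4000)
```
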

Dynamic Block Reassignment for Load Balancing of Block Centric Graph Processing Systems (블록 중심 그래프 처리 시스템의 부하 분산을 위한 동적 블록 재배치 기법)

  • Kim, Yewon;Bae, Minho;Oh, Sangyoon
    • KIPS Transactions on Software and Data Engineering / v.7 no.5 / pp.177-188 / 2018
  • The scale of graph data has increased rapidly because of the growth of mobile Internet applications and the proliferation of social network services. This creates an imminent need for efficient distributed and parallel graph processing, since large-scale graphs easily exceed the capacity of a single machine. Currently, there are two popular parallel graph processing approaches: vertex-centric and block-centric graph processing. While a vertex-centric approach can easily be applied to a parallel processing system, the block-centric approach was proposed to compensate for the drawbacks of the vertex-centric approach. In these systems, the initial quality of the graph partition affects the overall performance significantly. However, dividing the graph into an optimal state in the initial phase is a very difficult problem. Thus, several dynamic load balancing techniques have been studied that perform progressive partitioning during graph processing. In this paper, we present a load balancing algorithm for the block-centric approach, whereas most dynamic load balancing techniques have focused on vertex-centric systems. Our proposed algorithm focuses on improving graph partition quality by dynamically reassigning blocks at runtime, and suggests a block split strategy for escaping local optimum solutions.
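
The dynamic reassignment idea above can be sketched as a greedy rebalancing step: move blocks from the most loaded worker to the least loaded one as long as the move improves balance. This is a toy with hypothetical load numbers, not the paper's algorithm, which also accounts for partition quality and block splitting:

```python
def rebalance(workers):
    """workers: dict name -> list of (block_id, load).  Greedily move
    the lightest block from the most loaded worker to the least loaded
    one while the move strictly reduces the load gap."""
    def total(w):
        return sum(load for _, load in workers[w])
    while True:
        heavy = max(workers, key=total)
        light = min(workers, key=total)
        if heavy == light or not workers[heavy]:
            break
        block = min(workers[heavy], key=lambda b: b[1])
        gap_before = total(heavy) - total(light)
        gap_after = abs((total(heavy) - block[1]) - (total(light) + block[1]))
        if gap_after >= gap_before:
            break  # no further move improves balance
        workers[heavy].remove(block)
        workers[light].append(block)
    return workers

# Hypothetical initial assignment: w0 is overloaded.
workers = {'w0': [('b0', 5), ('b1', 4), ('b2', 3)],
           'w1': [('b3', 2)]}
rebalance(workers)
```
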

Adaptive Segmentation Approach to Extraction of Road and Sky Regions (도로와 하늘 영역 추출을 위한 적응적 분할 방법)

  • Park, Kyoung-Hwan;Nam, Kwang-Woo;Rhee, Yang-Won;Lee, Chang-Woo
    • Journal of the Korea Society of Computer and Information / v.16 no.7 / pp.105-115 / 2011
  • In vision-based Intelligent Transportation Systems (ITS), segmentation of the road region is a very basic functionality. Accordingly, in this paper, we propose a region segmentation method using an adaptive pattern extraction technique to segment road and sky regions from original images. The proposed method consists of three steps: first, initial segmentation using the Mean Shift algorithm; second, candidate region selection based on a static-pattern matching technique; and third, region growing based on a dynamic-pattern matching technique. The proposed method obtains more reliable results than classic region segmentation methods based on the existing split-and-merge strategy, because we measure region homogeneity with adaptive patterns extracted from the neighboring regions of the currently segmented regions. To evaluate the advantages of the proposed method, we compared it with the classical pattern matching method using static patterns. In the experiments, using adaptive patterns instead of static patterns improved performance by 8.12%. We expect that the proposed method can segment road and sky areas stably under various road conditions and play an important role in vision-based ITS applications.
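
The adaptive idea behind the dynamic-pattern step, comparing candidates against a pattern taken from the region itself rather than a fixed template, can be illustrated with a toy pixel-level region grower (hypothetical tolerance and data; the paper works on Mean-Shift regions, not raw pixels):

```python
from collections import deque

def grow_region(image, seed, tol=10.0):
    """Grow a region from seed on a 2-D grid: a neighboring pixel joins
    when its value is within tol of the region's running mean, which is
    updated as the region grows (the 'adaptive pattern')."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    total = float(image[seed[0]][seed[1]])
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                mean = total / len(region)   # current adaptive pattern
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region

# A bright 'sky' band (200) over a dark 'road' band (50).
image = [[200, 200, 200],
         [200, 200, 200],
         [50,  50,  50]]
sky = grow_region(image, seed=(0, 0))
```
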

Design of Classifier for Sorting of Black Plastics by Type Using Intelligent Algorithm (지능형 알고리즘을 이용한 재질별 검정색 플라스틱 분류기 설계)

  • Park, Sang Beom;Roh, Seok Beom;Oh, Sung Kwun;Park, Eun Kyu;Choi, Woo Zin
    • Resources Recycling / v.26 no.2 / pp.46-55 / 2017
  • In this study, a design methodology for Radial Basis Function Neural Networks (RBFNNs) is developed with the aid of Laser Induced Breakdown Spectroscopy (LIBS) and applied to a practical plastics sorting system. To identify black plastics such as ABS, PP, and PS, an RBFNNs classifier, a kind of intelligent algorithm, is designed. The dimensionality of the obtained input variables is reduced by using PCA, and the data is divided into several groups by using K-means clustering. The entire data set is split into training and test data at a ratio of 4:1. The 5-fold cross validation method is used to evaluate the performance as well as the reliability of the proposed classifier. With five input variables and five clusters, the classification performance of the proposed classifier reaches 96.78%. The proposed classifier also showed superior classification performance compared to other classifiers.
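
The evaluation protocol above (a 4:1 train/test split plus 5-fold cross validation) can be sketched with plain numpy index handling (illustrative; the RBFNN classifier itself is not reproduced here):

```python
import numpy as np

def split_4_to_1(n_samples, seed=0):
    """Shuffle indices and split them 4:1 into train and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_test = n_samples // 5
    return idx[n_test:], idx[:n_test]

def five_fold_indices(train_idx):
    """Partition the training indices into 5 disjoint CV folds; each
    fold serves once as validation and four times as training data."""
    return np.array_split(train_idx, 5)

train, test = split_4_to_1(100)
folds = five_fold_indices(train)
```
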

A $CST^+$ Tree Index Structure for Range Search (범위 검색을 위한 $CST^+$ 트리 인덱스 구조)

  • Lee, Jae-Won;Kang, Dae-Hee;Lee, Sang-Goo
    • Journal of KIISE: Databases / v.35 no.1 / pp.17-28 / 2008
  • Recently, main memory access has become a performance bottleneck for many computer applications. Cache memory was introduced in order to reduce memory access latency; however, it can do so only when the desired data are located in the cache. The CST tree was proposed to solve this problem by improving the T tree. However, when performing a range search, the CST tree has to visit unnecessary nodes. Therefore, this paper proposes the $CST^+$ tree, which keeps the merits of the CST tree and supports range search by linking its data nodes with linked lists. Experiments show that the $CST^+$ tree is $4{\sim}10$ times as fast as the CST tree and the $CSB^+$ tree. In addition, rebuilding an index is an essential step in database recovery from system failure. In this paper, we also propose a fast tree index rebuilding algorithm called MaxPL. MaxPL has no node-split overhead and employs parallelism for reading the data records and inserting the keys into the index. We show that MaxPL is $2{\sim}11$ times as fast as sequential insert and batch insert.
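
The range-search idea above, linking the data nodes so a range scan can walk sibling leaves without revisiting internal nodes, can be sketched with a simplified linked-leaf structure (a binary search over leaf minima stands in for the tree descent; this is not the paper's cache-conscious layout):

```python
import bisect

class Leaf:
    def __init__(self, keys):
        self.keys = sorted(keys)
        self.next = None  # link to the right sibling leaf

def build_linked_leaves(sorted_keys, leaf_size):
    """Pack sorted keys into fixed-size leaves chained left to right."""
    leaves = [Leaf(sorted_keys[i:i + leaf_size])
              for i in range(0, len(sorted_keys), leaf_size)]
    for a, b in zip(leaves, leaves[1:]):
        a.next = b
    return leaves

def range_search(leaves, lo, hi):
    """Locate the first leaf that may contain lo, then follow the
    next-links, so no internal node is touched during the scan."""
    minima = [leaf.keys[0] for leaf in leaves]
    start = max(bisect.bisect_right(minima, lo) - 1, 0)
    out, leaf = [], leaves[start]
    while leaf is not None and leaf.keys[0] <= hi:
        out.extend(k for k in leaf.keys if lo <= k <= hi)
        leaf = leaf.next
    return out

leaves = build_linked_leaves(list(range(0, 100, 2)), leaf_size=8)
hits = range_search(leaves, 10, 25)
```
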