• Title/Summary/Keyword: Point-Based Data Processing


The Design and Implementation of GIS Data Processing using 3-Tiers Architecture for selecting Route (3계층 구조를 이용한 GIS 자료처리 설계 및 구현 -도로의 노선선정을 중심으로-)

  • 이형석;배상호
    • Journal of the Korea Society of Computer and Information, v.7 no.3, pp.23-29, 2002
  • The design of GIS data processing requires an efficient method with a clear analysis procedure. The proposed system, built as a graphical-user-interface window application on a three-tier object-oriented architecture, is easy to use and manage for presenting routes that satisfy given conditions. The data tier is responsible for the classes that exchange, extract, and store data between GeoMedia and the application tier. A route-selection algorithm is applied in the application tier; it considers all conditions required to select a route between a start point and an end point, and modules for data handling, road conditions, buffering, clothoid fitting, and AHP were added so that alternative routes can be selected under new conditions. The user tier presents the data produced by the application tier. The three-tier architecture was thus demonstrated by implementing an efficient design for GIS data processing.
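
The three-tier separation described in this abstract can be illustrated with a minimal sketch. The class names and the stub route-selection logic below are hypothetical; the paper's actual data tier is built on GeoMedia, whose API is not reproduced here.

```python
class DataTier:
    """Data tier: exchanges, extracts, and stores spatial data (stand-in for GeoMedia access)."""
    def __init__(self, features):
        self._features = features                # e.g. [(point, attributes), ...]

    def load_points(self):
        return list(self._features)


class ApplicationTier:
    """Application tier: applies the route-selection logic to data from the data tier."""
    def __init__(self, data_tier):
        self.data_tier = data_tier

    def select_route(self, start, end):
        # Placeholder for the paper's route-selection modules
        # (data handling, road conditions, buffering, clothoid fitting, AHP weighting).
        points = self.data_tier.load_points()
        return [start] + [p for p, _ in points] + [end]


class UserTier:
    """User tier: presents the result produced by the application tier."""
    @staticmethod
    def show(route):
        print(" -> ".join(str(p) for p in route))


app = ApplicationTier(DataTier([((2, 3), {"road": "local"})]))
UserTier.show(app.select_route((0, 0), (5, 5)))
```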


Prerequisite Research for the Development of an End-to-End System for Automatic Tooth Segmentation: A Deep Learning-Based Reference Point Setting Algorithm (자동 치아 분할용 종단 간 시스템 개발을 위한 선결 연구: 딥러닝 기반 기준점 설정 알고리즘)

  • Kyungdeok Seo;Sena Lee;Yongkyu Jin;Sejung Yang
    • Journal of Biomedical Engineering Research, v.44 no.5, pp.346-353, 2023
  • In this paper, we propose an approach that leverages deep learning to find the optimal reference points for precise tooth segmentation in three-dimensional tooth point cloud data. A dataset of 350 aligned maxillary and mandibular point clouds was used as input, and the two end coordinates of each tooth were used as ground truth. A two-dimensional image was created by projecting the rendered point cloud data along the Z axis, and images of individual teeth were then extracted from it with an object detection algorithm. The proposed algorithm is designed by adding modules to the U-Net model that allow it to learn a narrow region effectively, and it detects the two end points of each tooth from the generated tooth image. In an evaluation using DSC, Euclidean distance, and MAE as indicators, it achieved superior performance compared with other U-Net-based models. In future research, we will develop an algorithm that finds the reference points of the point cloud by back-projecting the reference points detected in the image into three dimensions, and, based on this, an algorithm that segments individual teeth in the point cloud through image-processing techniques.
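
The Z-axis projection step described in the abstract can be sketched as follows; the image resolution, the max-depth projection rule, and the synthetic input cloud are assumptions for illustration, not details from the paper.

```python
import numpy as np

def project_to_image(points, resolution=256):
    """Project an (N, 3) point cloud along the Z axis onto a 2D depth image."""
    xy, z = points[:, :2], points[:, 2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    # Normalize XY coordinates to pixel indices.
    idx = ((xy - mins) / (maxs - mins + 1e-9) * (resolution - 1)).astype(int)
    image = np.zeros((resolution, resolution), dtype=np.float32)
    # Keep the highest (non-negative) Z value per pixel, i.e. a top-down view.
    for (col, row), depth in zip(idx, z):
        image[row, col] = max(image[row, col], depth)
    return image

cloud = np.random.rand(1000, 3)          # stand-in for an aligned dental point cloud
img = project_to_image(cloud)            # 2D image fed to the object detector
```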

An Empirical Study of Qualities of Association Rules from a Statistical View Point

  • Dorn, Maryann;Hou, Wen-Chi;Che, Dunren;Jiang, Zhewei
    • Journal of Information Processing Systems, v.4 no.1, pp.27-32, 2008
  • Minimum support and minimum confidence have been used as the criteria for generating association rules in all association rule mining algorithms. These criteria have natural appeal, such as simplicity, and few researchers have questioned the quality of the rules they generate. In this paper, we examine the rules from a more rigorous point of view by conducting statistical tests. Specifically, we use contingency tables and the chi-square test to analyze the data. Experimental results show that one third of the association rules derived under the support and confidence criteria are not significant; that is, the antecedent and consequent of these rules are not correlated. This indicates that minimum support and minimum confidence do not provide adequate discovery of meaningful associations. The chi-square test can be considered an enhancement or an alternative solution.
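
The statistical check described in the abstract amounts to a chi-square test of independence on a contingency table built from the transaction counts of a candidate rule. A minimal sketch follows; the counts and the 0.05 significance level are illustrative, not taken from the paper's experiments.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 contingency table for a candidate rule A -> B:
# rows = A present / absent, columns = B present / absent.
table = np.array([[60, 40],      # A and B,     A and not-B
                  [55, 45]])     # not-A and B, not-A and not-B

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")

# A rule can meet minimum support/confidence yet still fail this test:
if p_value > 0.05:
    print("Antecedent and consequent are not significantly correlated.")
```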

Logic circuit design for high-speed computing of dynamic response in real-time hybrid simulation using FPGA-based system

  • Igarashi, Akira
    • Smart Structures and Systems, v.14 no.6, pp.1131-1150, 2014
  • One of the issues in extending the range of problems to which real-time hybrid simulation can be applied is the computation speed of the simulator when large-scale computational models with many DOF are used. In this study, real-time dynamic simulation of MDOF systems is achieved by creating a logic circuit that performs the step-by-step numerical time integration of the system's equations of motion. The designed logic circuit can be implemented on an FPGA-based system; an FPGA (Field Programmable Gate Array) allows large-scale parallel computing by providing a large number of arithmetic operators within the device. The operator-splitting method is used as the numerical time integration scheme. The logic circuit consists of blocks that perform the numerical operations appearing in the integration scheme, including addition and multiplication of floating-point numbers, registers that store intermediate data, and data buses connecting these elements to transmit information, including the floating-point numerical data, among them. A case study on several types of linear and nonlinear MDOF system models shows that the use of resource sharing in logic synthesis is crucial for the effective application of FPGAs to real-time dynamic simulation of structural response with a time-step interval of 1 ms.
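
The per-step arithmetic that the logic circuit parallelizes can be sketched in software. The paper implements the operator-splitting scheme in FPGA logic; the sketch below substitutes the simpler explicit central-difference scheme and an arbitrary two-DOF model purely to show the pattern of matrix-vector multiplications and additions performed at every 1 ms step.

```python
import numpy as np

def central_difference_step(M, C, K, f, x, x_prev, dt):
    """One explicit step for M*a + C*v + K*x = f; returns the displacement at t + dt."""
    lhs = M / dt**2 + C / (2 * dt)
    rhs = f - (K - 2 * M / dt**2) @ x - (M / dt**2 - C / (2 * dt)) @ x_prev
    return np.linalg.solve(lhs, rhs)

# Two-DOF example with a 1 ms time step (matching the interval quoted above).
M = np.diag([1.0, 1.0])
C = 0.02 * np.eye(2)
K = np.array([[4.0, -2.0], [-2.0, 2.0]])
dt, x, x_prev = 1e-3, np.zeros(2), np.zeros(2)
x_next = central_difference_step(M, C, K, np.array([1.0, 0.0]), x, x_prev, dt)
```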

Design of Heavy Rain Advisory Decision Model Based on Optimized RBFNNs Using KLAPS Reanalysis Data (KLAPS 재분석 자료를 이용한 진화최적화 RBFNNs 기반 호우특보 판별 모델 설계)

  • Kim, Hyun-Myung;Oh, Sung-Kwun;Lee, Yong-Hee
    • Journal of the Korean Institute of Intelligent Systems, v.23 no.5, pp.473-478, 2013
  • In this paper, we develop a heavy rain advisory decision model based on RBFNNs, an intelligent neuro-fuzzy algorithm, using KLAPS (Korea Local Analysis and Prediction System) reanalysis data. The prediction ability of existing heavy-rainfall forecasting systems is strongly affected by how the meteorological data are processed. To address this drawback of conventional systems, we introduce a heavy-rain forecasting method built on pre-processing of the meteorological data. The pre-processing consists of point conversion, cumulative-precipitation generation, time-series processing, and heavy-rain-warning extraction based on the KLAPS data. Finally, the proposed system forecasts the cumulative rainfall for the six hours following a future time t (t = 1, 2, 3 hours) and provides the information needed to decide whether a heavy rain advisory should be issued. The essential parameters of the proposed model, such as the polynomial order, the number of rules, and the fuzzification coefficient, are optimized by means of Differential Evolution.
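
The cumulative-precipitation step mentioned among the pre-processing techniques can be sketched as a sliding six-hour sum over hourly rainfall at one grid point; the rainfall values and the advisory threshold below are illustrative assumptions, not values from KLAPS or from the paper.

```python
import numpy as np

hourly = np.array([0.0, 2.5, 11.0, 18.5, 22.0, 9.0, 1.0, 0.0])   # mm per hour at one grid point
window = 6
cum6h = np.convolve(hourly, np.ones(window), mode="valid")        # 6-hour accumulations

ADVISORY_THRESHOLD_MM = 60.0            # hypothetical decision threshold
advisory = cum6h >= ADVISORY_THRESHOLD_MM
print(cum6h, advisory)
```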

Grid-based Index Generation and k-nearest-neighbor Join Query-processing Algorithm using MapReduce (맵리듀스를 이용한 그리드 기반 인덱스 생성 및 k-NN 조인 질의 처리 알고리즘)

  • Jang, Miyoung;Chang, Jae Woo
    • Journal of KIISE, v.42 no.11, pp.1303-1313, 2015
  • MapReduce provides a high level of system scalability and fault tolerance for large-scale data processing. A MapReduce-based k-nearest-neighbor (k-NN) join algorithm produces the k nearest neighbors of each point of one dataset from another dataset, and is considered an important operation in big data analysis. However, the existing k-NN join query-processing algorithm suffers from a high index-construction cost that makes it unsuitable for big data. To solve this problem, we propose a new grid-based k-NN join query-processing algorithm. Our algorithm retrieves only the data neighboring a query cell and sends them to each MapReduce task, which reduces the data-transmission and computation overhead. Our performance analysis shows that our algorithm outperforms the existing scheme by up to seven-fold in terms of query-processing time while also achieving high query-result accuracy.
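
The grid-based candidate retrieval described in the abstract can be sketched on a single machine as follows; in the MapReduce setting each cell's data would be routed to a separate task. The cell size, k, and the random data are illustrative.

```python
import numpy as np
from collections import defaultdict

def build_grid(points, cell):
    """Bucket 2D points into square grid cells."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return grid

def knn_from_neighbor_cells(grid, q, cell, k):
    """Collect k-NN candidates only from the query cell and its adjacent cells."""
    cx, cy = int(q[0] // cell), int(q[1] // cell)
    candidates = [p for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  for p in grid[(cx + dx, cy + dy)]]
    # A full algorithm would widen the search when too few candidates are found.
    candidates.sort(key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
    return candidates[:k]

data = np.random.rand(10000, 2)
grid = build_grid(data, cell=0.1)
print(knn_from_neighbor_cells(grid, q=(0.5, 0.5), cell=0.1, k=5))
```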

A study on the 2D floor plan derivation of the indoor Point Cloud based on pixelation (포인트 클라우드 데이터의 픽셀화 기반 건축물 실내의 2D도면 도출에 관한 연구)

  • Jung, Yong-Il;Oh, Sang-Min;Ryu, Min-Woo;Kang, Nam-Woo;Cho, Hun-hee
    • Proceedings of the Korean Institute of Building Construction Conference, 2020.06a, pp.105-106, 2020
  • Recently, methods for efficiently deriving 2D floor plans have been attracting attention for remodeling old buildings whose existing plans are inaccurate, and studies on reverse engineering of indoor point cloud data (PCD) have therefore been actively conducted. However, for indoor PCD, interference from indoor objects limits the available equipment to mobile laser scanners (MLS), which reduces the efficiency of data processing. This study therefore proposes an algorithm that automatically derives a 2D floor plan from indoor PCD based on pixelation. First, the scanned indoor PCD is projected onto the XY coordinate plane. Second, the point distribution of each pixel of the projected PCD is derived through pixelation. Lastly, the 2D floor plan is derived from this pixel distribution by the proposed algorithm.
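
The pixelation step described in the abstract can be sketched as follows: project the point cloud onto the XY plane and count the points falling in each pixel, so that wall pixels (which accumulate many points from vertical surfaces) stand out. The pixel size, the count threshold, and the synthetic cloud are assumptions for illustration.

```python
import numpy as np

def pixelate(points, pixel=0.05, min_count=50):
    """Project an (N, 3) indoor cloud onto the XY plane and return a boolean wall mask."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = ((xy - mins) / pixel).astype(int)          # pixel index of every point
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    counts = np.zeros((h, w), dtype=np.int32)
    np.add.at(counts, (idx[:, 1], idx[:, 0]), 1)     # point distribution per pixel
    return counts >= min_count                       # dense pixels -> wall candidates

pcd = np.random.rand(200000, 3) * [10.0, 8.0, 3.0]   # stand-in for a scanned room (m)
wall_mask = pixelate(pcd)                            # basis of the 2D floor plan
```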


Text Detection in Scene Images Based on Interest Points

  • Nguyen, Minh Hieu;Lee, Gueesang
    • Journal of Information Processing Systems, v.11 no.4, pp.528-537, 2015
  • Text in images is one of the most important cues for understanding a scene. In this paper, we propose a novel approach based on interest points to localize text in natural scene images. The main ideas of this approach are as follows: first, interest-point detection techniques, which extract the corner points of characters and the center points of edge-connected components, are used to select candidate regions. Second, these candidate regions are verified using tensor voting, which is capable of extracting perceptual structures from noisy data. Finally, area, orientation, and aspect ratio are used to filter out non-text regions. The proposed method was tested on the ICDAR 2003 dataset and on images of wine labels. The experimental results show the validity of this approach.
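
The interest-point and geometric-filtering steps described in the abstract can be sketched with OpenCV as below; the tensor-voting verification stage is omitted, and the thresholds and the synthetic test image are illustrative.

```python
import cv2
import numpy as np

# Synthetic grayscale scene containing some text.
img = np.zeros((200, 300), dtype=np.uint8)
cv2.putText(img, "TEXT", (30, 120), cv2.FONT_HERSHEY_SIMPLEX, 2, 255, 3)

# 1. Interest points: corners, which cluster densely on character strokes.
corners = cv2.goodFeaturesToTrack(img, maxCorners=500, qualityLevel=0.01, minDistance=3)

# 2. Candidate regions from edge-connected components.
edges = cv2.Canny(img, 100, 200)
n, labels, stats, _ = cv2.connectedComponentsWithStats(edges)

# 3. Filter candidates by area and aspect ratio (the orientation check is omitted here).
text_candidates = []
for x, y, w, h, area in stats[1:]:                   # skip the background component
    if 20 < area < 5000 and 0.1 < w / float(h) < 10:
        text_candidates.append((x, y, w, h))
print(len(corners), len(text_candidates))
```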

Development of LiDAR Simulator for Backpack-mounted Mobile Indoor Mapping System

  • Chung, Minkyung;Kim, Changjae;Choi, Kanghyeok;Chung, DongKi;Kim, Yongil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.35 no.2, pp.91-102, 2017
  • Backpack-mounted mapping systems were first introduced to allow flexible movement in indoor spaces where satellite-based localization is not available. With advances in miniaturization and weight reduction, the use of LiDAR (Light Detection and Ranging) sensors on mobile platforms has been increasing, and they provide high-precision information on indoor environments and their surroundings. Previous research on the development of backpack-mounted mapping systems has concentrated mostly on improving data-processing methods or algorithms, whereas the practical system components have been determined empirically. Thus, in the present study, a simulator for a LiDAR sensor (Velodyne VLP-16) was developed to compare the effects of diverse conditions on the backpack system and its operation. The simulated data were analyzed by visual inspection and by comparing the statistics of the data sets, which differed according to the LiDAR arrangement and moving speed. The data were also used as input to a point cloud registration algorithm, ICP (Iterative Closest Point), to validate their applicability as pre-analysis data. The results indicated centimeter-level accuracy, demonstrating the potential of simulated data as a tool for performance comparison of point-data processing methods.
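
The ICP-based check mentioned in the abstract can be sketched with a small point-to-point ICP loop; a real evaluation would use a full registration implementation, and the synthetic clouds and the offset applied below are illustrative.

```python
import numpy as np

def icp(source, target, iters=20):
    """Brute-force point-to-point ICP; returns the aligned cloud and its mean residual."""
    src = source.copy()
    for _ in range(iters):
        # Nearest-neighbour correspondences.
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # Best-fit rigid transform via SVD (Kabsch).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                     # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t
    return src, np.linalg.norm(src - matched, axis=1).mean()

target = np.random.rand(500, 3)                      # reference cloud
a = np.deg2rad(5)                                    # small simulated misalignment
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.0])
aligned, residual = icp(source, target)
print(f"mean residual after ICP: {residual:.4f}")
```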

Design of Parallel Decimal Floating-Point Arithmetic Unit for High-speed Operations (고속 연산을 위한 병렬 구조의 십진 부동소수점 연산 장치 설계)

  • Yun, Hyoung-Kie;Moon, Dai-Tchul
    • Journal of the Korea Institute of Information and Communication Engineering, v.17 no.12, pp.2921-2926, 2013
  • In this paper, a decimal floating-point arithmetic unit (DFP) is proposed and redesigned to support high-speed operation using a parallel processing technique. The basic architecture of the proposed DFP is based on L.K. Wang's DFP and improves on it by processing two operands with the same exponent in parallel, enabling high-speed operation. The proposed DFP was synthesized for the xc2vp30-7ff896 target device using Xilinx ISE and verified by simulation with the Flowrian tool from System Centroid Co. Compared with L.K. Wang's DFP and the method of reference [6], the proposed DFP improved the data-processing speed by about 8.4% and 3%, respectively, for the same input data.
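
The same-exponent case that the parallel datapath exploits can be illustrated with Python's decimal module: when two decimal floating-point operands carry the same exponent, their significands add directly with no alignment shift. The operand values below are illustrative.

```python
from decimal import Decimal

def add_same_exponent(a_sig, b_sig, exponent):
    """Add two decimal significands that share one exponent (no alignment step needed)."""
    digits = tuple(int(d) for d in str(a_sig + b_sig))
    return Decimal((0, digits, exponent))            # sign 0 = positive

a_sig, b_sig, exp = 123456, 654321, -3               # 123.456 and 654.321
fast = add_same_exponent(a_sig, b_sig, exp)
reference = Decimal("123.456") + Decimal("654.321")
assert fast == reference
print(fast)                                          # 777.777
```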