• Title/Abstract/Keyword: point dataset


Bayesian Multiple Change-Point Estimation of Multivariate Mean Vectors for Small Data

  • Cheon, Sooyoung;Yu, Wenxing
    • 응용통계연구
    • /
• Vol. 25, No. 6
    • /
    • pp.999-1008
    • /
    • 2012
  • A Bayesian multiple change-point model for small data is proposed for multivariate means as an extension of the univariate case of Cheon and Yu (2012). The proposed model assumes data from a multivariate noncentral $t$-distribution and conjugate priors for the distributional parameters. We apply a Metropolis-Hastings-within-Gibbs sampling algorithm to the proposed model to detect multiple change-points. The performance of the proposed algorithm is investigated on simulated data and a real dataset, bivariate Hanwoo fat content data.
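
As a rough, non-authoritative illustration of sampling over change-point configurations, the sketch below runs a plain Metropolis sampler with Gaussian segments and a crude complexity penalty; it is a simplified stand-in, not the authors' noncentral-$t$ / conjugate-prior model.

```python
# Illustrative sketch: Metropolis sampling over multiple change points in a
# multivariate mean sequence. Simplified stand-in (Gaussian segments, known
# covariance) for the paper's noncentral-t / conjugate-prior model.
import numpy as np

rng = np.random.default_rng(0)

def segment_loglik(x):
    """Log-likelihood of one segment under N(mean, I) with the segment mean plugged in."""
    resid = x - x.mean(axis=0)
    return -0.5 * np.sum(resid ** 2)

def loglik(data, cps):
    bounds = [0] + sorted(cps) + [len(data)]
    return sum(segment_loglik(data[a:b]) for a, b in zip(bounds[:-1], bounds[1:]))

def sample_change_points(data, n_iter=5000, max_cp=5, penalty=3.0):
    n = len(data)
    cps, best, best_ll = [], [], loglik(data, [])
    cur_ll = best_ll
    for _ in range(n_iter):
        prop = list(cps)
        move = rng.choice(["add", "remove", "shift"])
        if move == "add" and len(prop) < max_cp:
            prop.append(int(rng.integers(1, n)))
        elif move == "remove" and prop:
            prop.pop(rng.integers(len(prop)))
        elif move == "shift" and prop:
            i = rng.integers(len(prop))
            prop[i] = int(np.clip(prop[i] + rng.integers(-3, 4), 1, n - 1))
        prop = sorted(set(prop))
        prop_ll = loglik(data, prop) - penalty * len(prop)   # crude complexity penalty
        if np.log(rng.random()) < prop_ll - cur_ll:          # Metropolis accept/reject
            cps, cur_ll = prop, prop_ll
        if cur_ll > best_ll:
            best, best_ll = list(cps), cur_ll
    return best

# Toy bivariate data with mean shifts at 30 and 60
data = np.vstack([rng.normal(m, 1.0, size=(30, 2)) for m in (0.0, 2.0, -1.0)])
print(sample_change_points(data))
```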

Improving the Quality of Response Surface Analysis of an Experiment for Coffee-Supplemented Milk Beverage: I. Data Screening at the Center Point and Maximum Possible R-Square

  • Rheem, Sungsue;Oh, Sejong
    • 한국축산식품학회지
    • /
• Vol. 39, No. 1
    • /
    • pp.114-120
    • /
    • 2019
  • Response surface methodology (RSM) is a useful set of statistical techniques for modeling and optimizing responses in food science research. As a design for a response surface experiment, a central composite design (CCD) with multiple runs at the center point is frequently used. However, there are situations where some of the responses at the center point are outliers, and these outliers are overlooked. Since the responses from center runs come from the same experimental conditions, there should be no outliers at the center point; outliers there ruin the statistical analysis. Thus, the responses at the center point need to be inspected, and if outliers are observed, they have to be examined. If the outliers are not due to errors in measurement or data entry, they need to be deleted; if they are due to such errors, they have to be corrected. Through a re-analysis of a dataset published in the Korean Journal for Food Science of Animal Resources, we show that outlier elimination increased the maximum possible R-square that modeling of the data can attain, which improves the quality of the response surface analysis.
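
The "maximum possible R-square" discussed here is bounded by the pure-error variation among replicated center runs, R²_max = 1 − SS_PE / SS_total; a small sketch of that computation with made-up numbers (not the paper's coffee-supplemented milk beverage data):

```python
# Illustrative computation of the maximum possible R-square for a response
# surface experiment with replicated center runs (values are made up, not the
# paper's data).
import numpy as np

y_all = np.array([7.1, 7.9, 8.4, 6.8, 9.0, 8.2, 8.3, 8.1, 8.2])  # all responses
center = np.array([8.2, 8.3, 8.1, 8.2])                          # center-point replicates

ss_total = np.sum((y_all - y_all.mean()) ** 2)
ss_pure_error = np.sum((center - center.mean()) ** 2)  # variation no model can explain

r2_max = 1.0 - ss_pure_error / ss_total
print(f"Maximum possible R-square: {r2_max:.4f}")

# A center replicate that is an outlier inflates ss_pure_error and lowers r2_max,
# which is why the paper screens and removes such outliers before modeling.
```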

Neural Network Modeling supported by Change-Point Detection for the Prediction of the U.S. Treasury Securities

  • Oh, Kyong-Joo;Ingoo Han
    • 한국경영과학회:학술대회논문집
    • /
    • 한국경영과학회 2000년도 추계학술대회 및 정기총회
    • /
    • pp.37-39
    • /
    • 2000
  • The purpose of this paper is to present a neural network model based on change-point detection for the prediction of U.S. Treasury securities. Interest rates have been studied by a number of researchers because they strongly affect other economic and financial variables. Unlike other chaotic financial data, the movement of interest rates exhibits a series of change points due to the monetary policy of the U.S. government. The basic idea of the proposed model is to obtain intervals divided by change points, to identify them as change-point groups, and to use them in interest rate forecasting. The proposed model consists of three stages. The first stage detects successive change points in the interest rate dataset. The second stage forecasts the change-point group with a backpropagation neural network (BPN). The final stage forecasts the output with a BPN. This study then examines the predictability of the integrated neural network model for interest rate forecasting using change-point detection.
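
A rough sketch of the three-stage pipeline described above, with a naive mean-shift detector and scikit-learn MLPs standing in for the paper's change-point procedure and backpropagation networks:

```python
# Stage 1: detect change points; Stage 2: classify the change-point group;
# Stage 3: forecast the rate with the group label as an extra input.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

def naive_change_points(series, window=12, z=2.5):
    """Flag indices where the windowed mean shifts by more than z pooled standard deviations."""
    cps = []
    for t in range(window, len(series) - window):
        left, right = series[t - window:t], series[t:t + window]
        pooled_sd = np.sqrt((left.var() + right.var()) / 2) + 1e-9
        if abs(right.mean() - left.mean()) > z * pooled_sd:
            if not cps or t - cps[-1] > window:   # keep only well-separated detections
                cps.append(t)
    return cps

rng = np.random.default_rng(1)
rates = np.concatenate([rng.normal(6.0, 0.1, 60), rng.normal(5.0, 0.1, 60)])

# Stage 1: detect change points and label each observation with its change-point group.
cps = naive_change_points(rates)
groups = np.zeros(len(rates), dtype=int)
for cp in cps:
    groups[cp:] += 1

# Stages 2 and 3: classify the group from lagged rates, then forecast the next rate
# with the predicted group appended as an extra input.
lags = 4
X = np.array([rates[t - lags:t] for t in range(lags, len(rates))])
y_group, y_next = groups[lags:], rates[lags:]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y_group)
reg = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(
    np.column_stack([X, clf.predict(X)]), y_next)
print(reg.predict(np.column_stack([X[-1:], clf.predict(X[-1:])])))
```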


Object Detection with LiDAR Point Cloud and RGBD Synthesis Using GNN

  • Jung, Tae-Won;Jeong, Chi-Seo;Lee, Jong-Yong;Jung, Kye-Dong
    • International journal of advanced smart convergence
    • /
• Vol. 9, No. 3
    • /
    • pp.192-198
    • /
    • 2020
  • The 3D point cloud is a key technology for object detection in virtual reality and augmented reality. To apply object detection in various areas, 3D information and even color information must be obtainable more easily. In general, a 3D point cloud is acquired with an expensive scanner device; however, 3D and characteristic information such as RGB and depth can be obtained easily on a mobile device. A GNN (Graph Neural Network) can be used for object detection based on these characteristics. In this paper, we generated RGB and RGBD inputs by detecting basic and characteristic information from the KITTI dataset, which is widely used in 3D point cloud object detection. We generated an RGB-GNN with i-GNN, which uses the most widely used LiDAR characteristic information together with the color information characteristics that can be obtained from mobile devices, and we compared and analyzed object detection accuracy using an RGBD-GNN, which additionally characterizes color and depth information.
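
As a loose illustration of the graph side of this approach (not the authors' network), the sketch below builds a k-nearest-neighbor graph over toy points and runs one untrained mean-aggregation message-passing step on RGB versus RGBD node features:

```python
# Minimal sketch of graph construction and one message-passing step over point
# features, in the spirit of comparing RGB vs. RGBD node features. Random toy
# points stand in for KITTI; this is not the authors' network.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n = 200
xyz = rng.uniform(0, 10, size=(n, 3))               # point coordinates
rgb = rng.uniform(0, 1, size=(n, 3))                # color from a camera
depth = np.linalg.norm(xyz, axis=1, keepdims=True)  # stand-in depth channel

def knn_graph(points, k=8):
    """Indices of the k nearest neighbors of every point (excluding itself)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    return idx[:, 1:]

def message_passing(features, neighbors, weight):
    """One GNN layer: aggregate neighbor features by mean, then apply a linear map + ReLU."""
    agg = features[neighbors].mean(axis=1)
    return np.maximum(np.concatenate([features, agg], axis=1) @ weight, 0.0)

neighbors = knn_graph(xyz, k=8)

for name, feats in {"RGB": np.hstack([xyz, rgb]),
                    "RGBD": np.hstack([xyz, rgb, depth])}.items():
    w = rng.normal(scale=0.1, size=(2 * feats.shape[1], 16))  # untrained toy weights
    out = message_passing(feats, neighbors, w)
    print(name, "node embeddings:", out.shape)
```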

라이다 점군의 효율적 검색을 위한 CUDA 기반 옥트리 알고리듬 구현 (Implementation of CUDA-based Octree Algorithm for Efficient Search for LiDAR Point Cloud)

  • 김형우;이양원
    • 대한원격탐사학회지
    • /
• Vol. 34, No. 6-1
    • /
    • pp.1009-1024
    • /
    • 2018
  • With the increasing use of LiDAR, the volume of point cloud data is expected to grow rapidly, and dimensionality-reduction methods for efficient point cloud search and data analysis are becoming increasingly important. Accordingly, this study defines the limitations of existing CPU- and GPU-based octrees with respect to the characteristics of the parametric algorithm, which traverses octree nodes using an input origin and direction vector, and presents a search technique that overcomes them. A parametric algorithm that can exploit a GPU octree environment was implemented and its performance evaluated, and a method for projecting noise-removed 2D images using the retrieved points was also implemented.
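
The CUDA parametric traversal itself is not reproduced here, but a minimal CPU-side sketch of the kind of octree point index that such a search operates on might look like this:

```python
# Simple CPU-side octree for points with an axis-aligned box query. This only
# illustrates the data structure the paper accelerates; the CUDA parametric
# ray-traversal algorithm itself is not reproduced here.
import numpy as np

class OctreeNode:
    def __init__(self, center, half, depth, max_depth=6, leaf_size=32):
        self.center, self.half = np.asarray(center, float), float(half)
        self.depth, self.max_depth, self.leaf_size = depth, max_depth, leaf_size
        self.points, self.children = [], None

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.leaf_size and self.depth < self.max_depth:
                self._split()
        else:
            self._child(p).insert(p)

    def _split(self):
        self.children = []
        for dx in (-1, 1):
            for dy in (-1, 1):
                for dz in (-1, 1):
                    c = self.center + 0.5 * self.half * np.array([dx, dy, dz])
                    self.children.append(OctreeNode(c, self.half / 2, self.depth + 1,
                                                    self.max_depth, self.leaf_size))
        pts, self.points = self.points, []
        for p in pts:
            self._child(p).insert(p)

    def _child(self, p):
        idx = (p[0] > self.center[0]) * 4 + (p[1] > self.center[1]) * 2 + (p[2] > self.center[2])
        return self.children[idx]

    def query_box(self, lo, hi, out):
        node_lo, node_hi = self.center - self.half, self.center + self.half
        if np.any(node_hi < lo) or np.any(node_lo > hi):
            return out                      # node does not overlap the query box
        for p in self.points:
            if np.all(p >= lo) and np.all(p <= hi):
                out.append(p)
        if self.children:
            for ch in self.children:
                ch.query_box(lo, hi, out)
        return out

rng = np.random.default_rng(0)
cloud = rng.uniform(-50, 50, size=(10000, 3))        # toy LiDAR-like point cloud
root = OctreeNode(center=(0, 0, 0), half=50, depth=0)
for p in cloud:
    root.insert(p)
print(len(root.query_box(np.array([0, 0, 0]), np.array([10, 10, 10]), [])))
```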

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국정보통신학회 2021년도 추계학술대회
    • /
    • pp.422-424
    • /
    • 2021
  • This paper presents an approach that fuses multiple RGB cameras for visual object recognition, based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and match it to a 3D world, estimating distance and position in the form of a point cloud map. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, which assists the AV in navigating toward its goal. Running object detection on numerous cameras can slow real-time computation, so the computer vision convolutional neural network chosen to overcome this problem must also suit the capacity of the hardware. The localization of the classified detected objects is derived from a 3D point cloud environment: the LiDAR point cloud data first undergo parsing, and the algorithm used is based on the 3D Euclidean clustering method, which localizes the objects accurately. We evaluated the method using our own dataset collected from a VLP-16 and multiple cameras, and the results demonstrate the feasibility of the method and the multi-sensor fusion strategy.
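
A minimal sketch of the 3D Euclidean clustering step described above (a KD-tree flood fill over toy points, not the VLP-16 data or the authors' implementation):

```python
# Sketch of 3D Euclidean clustering over a LiDAR-style point cloud, the kind of
# step described for localizing detected objects. Toy points, not VLP-16 data.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tolerance=0.5, min_size=10):
    """Group points whose neighbors lie within `tolerance` meters (flood fill on a KD-tree)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, cluster = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], tolerance):
                if nb in unvisited:
                    unvisited.remove(nb)
                    frontier.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters

rng = np.random.default_rng(0)
objects = [rng.normal(center, 0.2, size=(200, 3)) for center in ([0, 0, 0], [5, 2, 0], [-3, 4, 1])]
cloud = np.vstack(objects)
for i, c in enumerate(euclidean_clusters(cloud)):
    print(f"cluster {i}: {len(c)} points, centroid {cloud[c].mean(axis=0).round(2)}")
```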


Deep Learning for Weeds' Growth Point Detection based on U-Net

  • Arsa, Dewa Made Sri;Lee, Jonghoon;Won, Okjae;Kim, Hyongsuk
    • 스마트미디어저널
    • /
• Vol. 11, No. 7
    • /
    • pp.94-103
    • /
    • 2022
  • Weeds are detrimental to crops since they can damage them, and a clean treatment with less pollution and contamination should be developed. Artificial intelligence gives new hope to agriculture for achieving smart farming. This study delivers automated weed growth-point detection using deep learning. It proposes a combination of semantic graphics for generating data annotations and a U-Net with a pre-trained deep learning backbone for locating the growth points of weeds in a given field scene. The dataset was collected from an actual field. We measured intersection over union (IoU), F1-score, precision, and recall to evaluate the method. Moreover, MobileNet V2 was chosen as the backbone and compared with ResNet 34. The results show that the proposed method is accurate enough to detect growth points and handle brightness variation. The best performance was achieved with MobileNet V2 as the backbone: IoU 96.81%, precision 97.77%, recall 98.97%, and F1-score 97.30%.
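
The reported IoU, precision, recall, and F1 are standard pixel-wise metrics; a small sketch of how they can be computed from a predicted mask and its ground truth (toy masks, not the authors' U-Net outputs):

```python
# Pixel-wise IoU, precision, recall, and F1 from a predicted growth-point mask
# and its ground truth. Toy masks, not the authors' U-Net predictions.
import numpy as np

def mask_metrics(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return dict(iou=iou, precision=precision, recall=recall, f1=f1)

gt = np.zeros((64, 64), dtype=bool)
gt[30:34, 30:34] = True                     # ground-truth growth-point blob
pred = gt.copy()
pred[33, 30:34] = False                     # simulated miss on one row
pred[29, 30:34] = True                      # simulated spill-over on another
print({k: round(v, 4) for k, v in mask_metrics(pred, gt).items()})
```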

포인트 클라우드 기반 선체 구조 변형 탐지 알고리즘 적용 연구 (Application of Point Cloud Based Hull Structure Deformation Detection Algorithm)

  • 송상호;이갑헌;한기민;장화섭
    • 대한조선학회논문집
    • /
• Vol. 59, No. 4
    • /
    • pp.235-242
    • /
    • 2022
  • As ship condition inspection technology has developed, research on collecting, analyzing, and diagnosing condition information has become active. For ships, related research has analyzed, detected, and classified major hull failures such as cracks and corrosion using 2D and 3D data. However, for geometric deformation such as indents and bulges, 2D data is limited for detection, so 3D data is needed to exploit spatial feature information. In this study, we aim to detect the positions of hull structural deformation. A specimen is built based on actual hull structure deformation, and a point cloud is acquired by scanning the model with a 3D scanner. In the obtained point cloud, deformations (outliers) are found with a combination of an octree data structure and a RANSAC algorithm that finds the best-matching model in the dataset.
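
As a simplified stand-in for the deformation-detection idea (a single RANSAC plane fit with distance-based outlier flagging, omitting the paper's octree partitioning):

```python
# Fit the dominant plane of a locally flat hull plate with RANSAC and flag
# points far from it as deformation candidates. Simplified stand-in for the
# paper's octree + RANSAC pipeline.
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.01, rng=None):
    """Return (normal, d, inlier_mask) of the best plane n.x + d = 0 found by RANSAC."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                          # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

rng = np.random.default_rng(1)
plate = np.column_stack([rng.uniform(0, 1, 2000), rng.uniform(0, 1, 2000),
                         rng.normal(0, 0.002, 2000)])    # flat plate with sensor noise
dent = plate[:50].copy()
dent[:, 2] -= 0.05                                       # simulated indentation
cloud = np.vstack([plate[50:], dent])

normal, d, inliers = ransac_plane(cloud)
print("deformation candidates (outliers):", (~inliers).sum())
```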

ASPPMVSNet: A high-receptive-field multiview stereo network for dense three-dimensional reconstruction

  • Saleh Saeed;Sungjun Lee;Yongju Cho;Unsang Park
    • ETRI Journal
    • /
• Vol. 44, No. 6
    • /
    • pp.1034-1046
    • /
    • 2022
  • The learning-based multiview stereo (MVS) methods for three-dimensional (3D) reconstruction generally use 3D volumes for depth inference. The quality of the reconstructed depth maps and the corresponding point clouds is directly influenced by the spatial resolution of the 3D volume. Consequently, these methods produce point clouds with sparse local regions because of the lack of memory required to encode a high volume of information. Here, we apply the atrous spatial pyramid pooling (ASPP) module in MVS methods to obtain dense feature maps with multiscale, long-range, contextual information using high receptive fields. For a given 3D volume with the same spatial resolution as that in the MVS methods, the dense feature maps from the ASPP module, encoded with superior information, can produce dense point clouds without a high memory footprint. Furthermore, we propose a 3D loss for training the MVS networks, which improves the predicted depth values by 24.44%. The ASPP module provides state-of-the-art qualitative results by constructing relatively dense point clouds, improving the DTU MVS dataset benchmarks by 2.25% compared with previous MVS methods.
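
A generic sketch of an ASPP block of the kind named in the abstract (parallel dilated convolutions plus image-level pooling), written in PyTorch and not taken from the authors' MVS network:

```python
# Minimal ASPP (atrous spatial pyramid pooling) block: parallel dilated
# convolutions plus global pooling, concatenated and projected.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1)] +
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
        self.image_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                        nn.Conv2d(in_ch, out_ch, 1))
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=x.shape[-2:], mode="bilinear",
                               align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

# Feature map from a 2D image encoder (batch, channels, height, width).
x = torch.randn(1, 32, 64, 80)
print(ASPP(32, 32)(x).shape)   # -> torch.Size([1, 32, 64, 80])
```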

INTERACTIVE FEATURE EXTRACTION FOR IMAGE REGISTRATION

  • Kim Jun-chul;Lee Young-ran;Shin Sung-woong;Kim Kyung-ok
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2005년도 Proceedings of ISRS 2005
    • /
    • pp.641-644
    • /
    • 2005
  • This paper introduces an Interactive Feature Extraction (IFE) approach for the registration of satellite imagery by matching extracted point and line features. The IFE method includes both point extraction by cross-correlation matching of singular points and line extraction by the Hough transform. The purpose of this study is to minimize the user's intervention in feature extraction and to easily apply the extracted features to image registration. Experiments with these imagery datasets proved the feasibility and efficiency of the suggested method.
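
A hedged sketch of the two ingredients the abstract names, cross-correlation point matching and Hough-transform line extraction, using OpenCV on a synthetic image rather than the satellite data:

```python
# Point matching by normalized cross-correlation and line extraction by the
# Hough transform, demonstrated on a synthetic image (not the IFE tool itself).
import cv2
import numpy as np

rng = np.random.default_rng(0)
ref = rng.integers(0, 20, size=(200, 200)).astype(np.uint8)  # faint background texture
cv2.rectangle(ref, (60, 60), (70, 70), 255, -1)              # point-like feature
cv2.line(ref, (20, 150), (180, 120), 180, 2)                 # line feature

# Point extraction: locate a template patch around the feature in a shifted image.
template = ref[55:76, 55:76]
shifted = np.roll(ref, shift=(5, 8), axis=(0, 1))            # stand-in for the second image
score = cv2.matchTemplate(shifted, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(score)
print("matched point (x, y):", max_loc, "score:", round(max_val, 3))

# Line extraction: Canny edges followed by the probabilistic Hough transform.
edges = cv2.Canny(ref, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=50, maxLineGap=5)
print("lines found:", 0 if lines is None else len(lines))
```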
