• Title/Abstract/Keyword: point dataset


A HIERARCHICAL APPROACH TO HIGH-RESOLUTION HYPERSPECTRAL IMAGE CLASSIFICATION OF LITTLE MIAMI RIVER WATERSHED FOR ENVIRONMENTAL MODELING

  • Heo, Joon; Troyer, Michael; Lee, Jung-Bin; Kim, Woo-Sun
    • Korean Society of Remote Sensing Conference Proceedings / Proceedings of ISRS 2006 PORSEC Volume II / pp.647-650 / 2006
  • Compact Airborne Spectrographic Imager (CASI) hyperspectral imagery was acquired over the Little Miami River Watershed (1756 square miles) in Ohio, U.S.A., one of the largest hyperspectral image acquisitions. For the development of a 4m-resolution land cover dataset, a hierarchical approach was employed using two different classification algorithms: 'Image Object Segmentation' for level-1 and 'Spectral Angle Mapper' for level-2. This classification scheme was developed to overcome the spectral inseparability of urban and rural features and to deal with radiometric distortions due to cross-track illumination. The land cover classes were lentic, lotic, forest, corn, soybean, wheat, dry herbaceous, grass, urban barren, rural barren, urban/built, and unclassified. The final product was completed after an extensive Quality Assurance and Quality Control (QA/QC) phase. With respect to the eleven land cover classes, the overall accuracy with a total of 902 reference points was 83.9% at 4m resolution. The dataset is available for public research, and applications of this product represent an improvement over more commonly utilized data of coarser spatial resolution such as the National Land Cover Data (NLCD).
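The level-2 'Spectral Angle Mapper' step scores each pixel by the angle between its spectrum and a class reference spectrum; because the angle ignores vector magnitude, it is relatively insensitive to the illumination-driven brightness distortions noted above. A minimal sketch (the reference spectra, class names, and rejection threshold are illustrative assumptions, not values from the paper):

```python
import math

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a
    reference spectrum: arccos of the normalized dot product."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    cos = dot / (norm_p * norm_r)
    return math.acos(max(-1.0, min(1.0, cos)))  # clamp for fp safety

def classify_sam(pixel, references, threshold=0.1):
    """Assign the class whose reference makes the smallest angle with
    the pixel; angles above the threshold stay unclassified."""
    best_class, best_angle = "unclassified", threshold
    for name, ref in references.items():
        angle = spectral_angle(pixel, ref)
        if angle < best_angle:
            best_class, best_angle = name, angle
    return best_class
```

A pixel that is a brighter or darker multiple of a reference spectrum has an angle of zero to it, which is exactly the property that counters cross-track illumination effects.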


Establishing the Process of Spatial Informatization Using Data from Social Network Services

  • Eo, Seung-Won; Lee, Youngmin; Yu, Kiyun; Park, Woojin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / Vol. 34, No. 2 / pp.111-120 / 2016
  • Prior knowledge about SNS (Social Network Services) datasets is often required to conduct valuable analysis using social media data, yet the characteristics of the information extracted from SNS datasets remain poorly understood. This paper aims to analyze the target social network services, Twitter, Instagram, and YouTube, in detail in order to establish a spatial informatization process that integrates social media information with existing spatial datasets. In this study, valuable information in SNS datasets was selected, and a total of 12,938 records were collected in Seoul via Open API. The dataset was geocoded and converted into point form. We also removed duplicate values from the dataset to conduct spatial integration with the existing building layers. The result of this spatial integration process can be utilized in various industries and serve as a fundamental resource for further studies on geospatial integration using social media datasets.
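The de-duplication and building-layer integration steps can be sketched in a few lines; the record fields, rounding precision, and nearest-centroid join below are illustrative assumptions rather than the authors' exact procedure:

```python
def deduplicate_points(records, precision=5):
    """Keep the first record for each rounded (lon, lat) pair,
    dropping overlapped geocoded points (hypothetical dedup rule)."""
    seen, unique = set(), []
    for rec in records:
        key = (round(rec["lon"], precision), round(rec["lat"], precision))
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def assign_to_building(point, buildings):
    """Attach a geocoded point to the nearest building centroid
    (planar squared distance; a stand-in for a real spatial join)."""
    return min(buildings,
               key=lambda b: (b["x"] - point["lon"]) ** 2
                           + (b["y"] - point["lat"]) ** 2)["id"]
```

In practice a GIS spatial join against the building footprint polygons would replace the nearest-centroid shortcut shown here.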

Feature Voting for Object Localization via Density Ratio Estimation

  • Wang, Liantao; Deng, Dong; Chen, Chunlei
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 12 / pp.6009-6027 / 2019
  • Support vector machine (SVM) classifiers have been widely used for object detection. These methods usually locate the object by finding the region with the maximal score in an image. With a bag-of-features representation, the SVM score of an image region can be written as the sum of its inside feature-weights. As a result, the search can be executed efficiently using strategies such as branch-and-bound. However, the feature-weights derived by optimizing region classification cannot really reveal the category knowledge of a feature-point, which can cause poor localization. In this paper, we represent a region in an image by a collection of local feature-points and determine the object by the region with the maximum posterior probability of belonging to the object class. Based on Bayes' theorem and Naive-Bayes assumptions, the posterior probability is reformulated as a sum of feature-scores, where each feature-score takes the form of the logarithm of a probability ratio. Instead of estimating the numerator and denominator probabilities separately, we employ density ratio estimation techniques directly, overcoming the above limitation. Experiments on a car dataset and the PASCAL VOC 2007 dataset validated the effectiveness of our method compared to the baselines. In addition, the performance can be further improved by taking advantage of recently developed deep convolutional neural network features.
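Under the Naive-Bayes assumption described above, a region's posterior score is just the sum of per-feature log density ratios, so localization reduces to picking the region with the largest sum. A toy sketch (the ratio values are invented for illustration; a real system would estimate them with a density ratio estimator):

```python
import math

def region_score(feature_ratios):
    """Posterior score of a region under the Naive-Bayes assumption:
    the sum of log density ratios p(f|object) / p(f|background)
    over the feature points inside the region."""
    return sum(math.log(r) for r in feature_ratios)

def localize(candidate_regions):
    """Detection picks the candidate region with the maximal score."""
    return max(candidate_regions, key=region_score)
```

Because the score is additive over feature points, branch-and-bound style search over regions remains applicable, as with the SVM feature-weights.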

Effects of Job Satisfaction on Organizational Commitment and Turnover Intention Among Vietnamese Employees in Foreign Direct Investment Enterprises

  • TRAN, Thi Phuong Diu; NGUYEN, Thi Van Khanh; DO, Thanh Quang; NGUYEN, Cong Nghiep; LUONG, Thu Thuy
    • Journal of Distribution Science / Vol. 20, No. 10 / pp.31-38 / 2022
  • Purpose: This article focuses on exploring the associations between job satisfaction, organizational commitment, and turnover intention. Specifically, this study estimates the impacts of Vietnamese employees' job satisfaction on their organizational commitment and turnover intention in FDI enterprises. Research design, data and methodology: The measures are adapted from previous studies to develop a questionnaire with a seven-point Likert scale. The dataset is directly collected from 227 respondents who are employees at FDI enterprises situated in the North of Vietnam. The dataset is analyzed by quantitative approaches using SPSS 24.0 and AMOS 24.0. Results: The results show that while turnover intention is positively correlated with monthly income, it is negatively correlated with job satisfaction and organizational commitment. Also, organizational commitment is positively associated with job satisfaction among employees at FDI enterprises in Vietnam. Conclusions: The findings of this study will serve as useful references for administrators of FDI enterprises and policymakers to promote employees' job satisfaction and retain skilled employees.
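The reported positive and negative correlations rest on standard correlation analysis. As a minimal illustration, Pearson's r on toy seven-point Likert scores (the numbers are invented, not drawn from the study's 227 respondents):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equally sized score vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy seven-point Likert scores: higher satisfaction, lower intention.
satisfaction = [7, 6, 5, 2, 1]
turnover_intention = [1, 2, 3, 6, 7]
```

A negative r between satisfaction and turnover intention mirrors the sign pattern the study reports.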

Bird's Eye View Semantic Segmentation based on Improved Transformer for Automatic Annotation

  • Tianjiao Liang; Weiguo Pan; Hong Bao; Xinyue Fan; Han Li
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 8 / pp.1996-2015 / 2023
  • High-definition (HD) maps can provide precise road information that enables an autonomous driving system to effectively navigate a vehicle. Recent research has focused on leveraging semantic segmentation to achieve automatic annotation of HD maps. However, existing methods suffer from low recognition accuracy in autonomous driving scenarios, leading to inefficient annotation processes. In this paper, we propose a novel semantic segmentation method for automatic HD map annotation. Our approach introduces a new encoder, the convolutional transformer hybrid encoder, to enhance the model's feature extraction capabilities. Additionally, we propose a multi-level fusion module that enables the model to aggregate different levels of detail and semantic information. Furthermore, we present a novel decoupled boundary joint decoder to improve the model's ability to handle the boundaries between categories. To evaluate our method, we conducted experiments on the Bird's Eye View point cloud image dataset and the Cityscapes dataset. Comparative analysis against state-of-the-art methods demonstrates that our model achieves the highest performance. Specifically, our model achieves an mIoU of 56.26%, surpassing SegFormer by 1.47%. This innovation promises to significantly enhance the efficiency of automatic HD map annotation.
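The headline metric, mIoU, averages the per-class intersection-over-union computed from a confusion matrix of pixel counts. A small sketch of how such a figure is computed (the matrix below is a toy example, not the paper's data):

```python
def mean_iou(confusion):
    """Mean intersection-over-union from a confusion matrix C, where
    C[i][j] counts pixels of true class i predicted as class j:
    IoU_i = TP_i / (TP_i + FP_i + FN_i), averaged over classes."""
    n = len(confusion)
    ious = []
    for i in range(n):
        tp = confusion[i][i]
        fn = sum(confusion[i]) - tp
        fp = sum(confusion[j][i] for j in range(n)) - tp
        denom = tp + fp + fn
        if denom:  # skip classes absent from both truth and prediction
            ious.append(tp / denom)
    return sum(ious) / len(ious)
```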

자동 치아 분할용 종단 간 시스템 개발을 위한 선결 연구: 딥러닝 기반 기준점 설정 알고리즘 (Prerequisite Research for the Development of an End-to-End System for Automatic Tooth Segmentation: A Deep Learning-Based Reference Point Setting Algorithm)

  • 서경덕; 이세나; 진용규; 양세정
    • Journal of Biomedical Engineering Research (Korean Society of Medical and Biological Engineering) / Vol. 44, No. 5 / pp.346-353 / 2023
  • In this paper, we propose an innovative approach that leverages deep learning to find optimal reference points for achieving precise tooth segmentation in three-dimensional tooth point cloud data. A dataset consisting of 350 aligned maxillary and mandibular point clouds was used as input, and the two end coordinates of each individual tooth served as ground truth. A two-dimensional image was created by projecting the rendered point cloud data along the Z-axis, and images of individual teeth were then obtained using an object detection algorithm. The proposed algorithm is designed by adding various modules to the U-Net model that allow effective learning of a narrow range, and it detects both end points of a tooth from the generated tooth image. In the evaluation using DSC, Euclidean distance, and MAE as indicators, we achieved superior performance compared to other U-Net-based models. In future research, we will develop an algorithm to find reference points in the point cloud by back-projecting the reference points detected in the image into three dimensions and, based on this, an algorithm to segment individual teeth in the point cloud through image processing techniques.
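The projection step described above, rendering the point cloud along the Z-axis into a 2D image, can be sketched as a top-down depth map that keeps the highest Z value per grid cell; the grid size and resolution below are illustrative assumptions:

```python
def project_to_image(points, resolution=0.5, size=64):
    """Project 3D points along the Z-axis onto an XY grid, keeping the
    highest Z per cell, which yields a top-down depth image
    (grid size and cell resolution are assumed values)."""
    image = [[None] * size for _ in range(size)]
    for x, y, z in points:
        col, row = int(x / resolution), int(y / resolution)
        if 0 <= row < size and 0 <= col < size:
            if image[row][col] is None or z > image[row][col]:
                image[row][col] = z
    return image
```

Back-projection, the planned future step, would invert this mapping: a 2D reference point selects a grid cell, and the stored depth recovers the corresponding 3D coordinate.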

Center point prediction using Gaussian elliptic and size component regression using small solution space for object detection

  • Yuantian Xia; Shuhan Lu; Longhe Wang; Lin Li
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 8 / pp.1976-1995 / 2023
  • The anchor-free object detector CenterNet regards each object as a center point and predicts it based on a Gaussian circle region. For each object's center point, CenterNet directly regresses the width and height of the object and finally obtains the object's boundary. However, the critical range of the object's center point cannot be accurately limited by using the Gaussian circle region to constrain the prediction region, resulting in many low-quality center-point predictions. In addition, because of the large difference between the width and height of different objects, directly regressing the width and height makes the model difficult to converge and loses the intrinsic relationship between them, thereby reducing the stability and consistency of accuracy. To address these problems, we propose a center point prediction method based on a Gaussian elliptic region and a size component regression method based on a small solution space. First, we construct a Gaussian ellipse region that can accurately predict the object's center point. Second, we re-encode the width and height of objects, which significantly reduces the regression solution space and improves the convergence speed of the model. Finally, we jointly decode the predicted components, enhancing the internal relationship between the size components and improving the consistency of accuracy. Experiments show that when using CenterNet as the baseline and Hourglass-104 as the backbone, our improved model achieves 44.7% on the MS COCO dataset, which is 2.6% higher than the baseline.
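A Gaussian elliptic center region replaces CenterNet's circular one by giving the Gaussian separate standard deviations along the box's width and height, so a wide, short object tolerates more horizontal than vertical center offset. A minimal sketch of the ground-truth heatmap (the sigma-to-box-size factor is an assumption, not the paper's exact formula):

```python
import math

def elliptic_center_heatmap(h, w, cx, cy, box_w, box_h, alpha=0.1):
    """Ground-truth center heatmap with an elliptic Gaussian whose
    standard deviations follow the box width and height (the alpha
    scale factor is an illustrative assumption)."""
    sx, sy = alpha * box_w, alpha * box_h
    return [[math.exp(-((x - cx) ** 2 / (2 * sx * sx)
                      + (y - cy) ** 2 / (2 * sy * sy)))
             for x in range(w)] for y in range(h)]
```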

Temporal Search Algorithm for Multiple-Pedestrian Tracking

  • Yu, Hye-Yeon; Kim, Young-Nam; Kim, Moon-Hyun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 5 / pp.2310-2325 / 2016
  • In this paper, we provide a trajectory-generation algorithm that can identify pedestrians in real time. Typically, the contours for the extraction of pedestrians from the foreground of images are not clear due to factors including brightness and shade; furthermore, pedestrians move in different directions and interact with each other. These issues mean that the identification of pedestrians and the generation of trajectories are somewhat difficult. We propose a new method for trajectory generation regarding multiple pedestrians. The first stage of the method distinguishes between those pedestrian-blob situations that need to be merged and those that require splitting, followed by the use of trained decision trees to separate the pedestrians. The second stage generates the trajectories of each pedestrian by using the point-correspondence method; however, we introduce a new point-correspondence algorithm for which the A* search method has been modified. By using fuzzy membership functions, a heuristic evaluation of the correspondence between the blobs was also conducted. The proposed method was implemented and tested with the PETS 2009 dataset to show an effective multiple-pedestrian-tracking capability in a pedestrian-interaction environment.
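The heuristic evaluation with fuzzy membership functions can be sketched as triangular memberships over blob displacement and area change, combined with a min operator; the membership ranges below are illustrative assumptions, not the paper's tuned values:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises over [a, b], falls over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def correspondence_score(displacement, area_ratio):
    """Heuristic plausibility of matching two blobs across frames:
    min of the 'small displacement' and 'similar area' memberships
    (ranges here are illustrative assumptions)."""
    near = tri_membership(displacement, -30.0, 0.0, 30.0)
    similar = tri_membership(area_ratio, 0.5, 1.0, 2.0)
    return min(near, similar)
```

A score of this kind can serve as the heuristic term in an A*-style search over candidate blob correspondences, which is the role the modified A* method plays above.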

3차원 얼굴 인식을 위한 PSO와 다중 포인트 특징 추출을 이용한 RBFNNs 패턴분류기 설계 (Design of RBFNNs Pattern Classifier Realized with the Aid of PSO and Multiple Point Signature for 3D Face Recognition)

  • 오성권; 오승훈
    • The Transactions of the Korean Institute of Electrical Engineers / Vol. 63, No. 6 / pp.797-803 / 2014
  • In this paper, a 3D face recognition system is designed using polynomial-based RBFNNs. In 2D face recognition, recognition performance is reduced by external environmental factors such as illumination and facial pose; 3D face recognition is used to compensate for these shortcomings. In the preprocessing part, the 3D face image shapes obtained at various position angles are transformed into frontal image shapes through pose compensation. The depth data of the face shape are then extracted using Multiple Point Signature, and overall face depth information is obtained by using two or more reference points. Directly using the extracted high-dimensional data leads to deterioration of both learning speed and recognition performance, so we exploit the principal component analysis (PCA) algorithm to reduce the dimensionality of the data. Parameter optimization is carried out with the aid of PSO for effective training and recognition. The proposed pattern classifier is experimented with and evaluated using a dataset obtained in the IC & CI Lab.
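The PCA step projects the high-dimensional depth features onto their leading principal directions before classification. A minimal stand-in that recovers the first principal component by power iteration on the sample covariance matrix (full PCA would keep several components):

```python
def first_principal_component(data, iters=100):
    """First principal component via power iteration on the sample
    covariance matrix: a minimal sketch of the PCA reduction step."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix of the centered data.
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting each feature vector onto the leading components yields the low-dimensional inputs that keep RBFNN training fast.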

CKGS: A Way Of Compressed Key Guessing Space to Reduce Ghost Peaks

  • Li, Di; Li, Lang; Ou, Yu
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 3 / pp.1047-1062 / 2022
  • Differential power analysis (DPA) is disturbed by ghost peaks: a phenomenon in which the mean absolute difference (MAD) value of a wrong key is higher than that of the correct key. We propose a compressed key guessing space (CKGS) scheme to solve this problem and analyze the AES algorithm. The DPA based on this scheme is named CKGS-DPA. Unlike traditional DPA, CKGS-DPA uses two power leakage points for a combined attack. The first power leakage point is used to determine the key candidate interval, and the second is used for the final attack. First, we study the law governing the distribution of MAD values when the attack point is AddRoundKey and explain why this point is not suitable for DPA. According to this law, we modify the selection function to change the distribution of MAD values. Then a key-related value screening algorithm is proposed to obtain key information. Finally, we construct two key candidate intervals of size 16 and reduce the key guessing space of the SubBytes attack from 256 to 32. Simulation results show that CKGS-DPA reduces the power trace demand by 25% compared with DPA. Experiments performed on the ASCAD dataset show that CKGS-DPA reduces the power trace demand by at least 41% compared with DPA.
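The MAD statistic at the heart of the attack compares the mean power traces of the two groups a key hypothesis induces via the selection function; the guess with the largest peak difference is selected, and ghost peaks occur when a wrong guess produces a larger peak. A minimal sketch (toy traces, not real power measurements):

```python
def mad_score(traces, labels):
    """Peak mean absolute difference between the average power traces
    of the two groups a key hypothesis splits the traces into;
    DPA keeps the key guess with the largest peak."""
    g0 = [t for t, l in zip(traces, labels) if l == 0]
    g1 = [t for t, l in zip(traces, labels) if l == 1]
    m0 = [sum(col) / len(g0) for col in zip(*g0)]
    m1 = [sum(col) / len(g1) for col in zip(*g1)]
    return max(abs(a - b) for a, b in zip(m0, m1))
```

In the CKGS scheme, a first pass of such scores at one leakage point narrows each key byte to a candidate interval, so the final SubBytes attack scores only 32 guesses instead of 256.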