• Title/Summary/Keyword: 3-D features

Influence of Two-Dimensional and Three-Dimensional Acquisitions of Radiomic Features for Prediction Accuracy

  • Ryohei Fukui;Ryutarou Matsuura;Katsuhiro Kida;Sachiko Goto
    • Progress in Medical Physics / v.34 no.3 / pp.23-32 / 2023
  • Purpose: In radiomics analysis, the pixel values of lesions depicted in computed tomography (CT) and magnetic resonance imaging (MRI) images are used to compute features and to predict genetic characteristics and survival time. Because CT and MRI produce three-dimensional images, three-dimensional features (Features_3d) can be extracted; however, published reports disagree on whether Features_3d or two-dimensional features (Features_2d) are superior. In this study, we aimed to investigate whether the prediction accuracy of radiomics analysis of lung cancer differs between Features_2d and Features_3d. Methods: A total of 38 cases of large cell carcinoma (LCC) and 40 cases of squamous cell carcinoma (SCC) were selected for this study. Two- and three-dimensional lesion segmentations were performed, yielding a total of 774 features. Using least absolute shrinkage and selection operator (LASSO) regression, seven Features_2d and six Features_3d were selected. Results: Linear discriminant analysis showed sensitivities to LCC of 86.8% for Features_2d and 89.5% for Features_3d. The coefficients of determination from multiple regression analysis were 0.68 and 0.70, and the areas under the receiver operating characteristic curve (AUC) were 0.93 and 0.94, respectively. The P-value for the difference between the estimated AUCs was 0.87. Conclusions: No difference was found in the prediction accuracy for LCC and SCC between Features_2d and Features_3d.
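
As a quick illustration of the pipeline this abstract describes, the sketch below selects features with LASSO regression and classifies with linear discriminant analysis using scikit-learn. The synthetic data, alpha value, and variable names are assumptions for the example, not the authors' settings.

```python
# Minimal sketch: LASSO feature selection followed by LDA, loosely
# mirroring the radiomics workflow above; data and alpha are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import Lasso
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(78, 774))    # 78 cases x 774 radiomic features (toy)
y = rng.integers(0, 2, size=78)   # 0 = LCC, 1 = SCC (toy labels)

# LASSO drives most coefficients to zero; keep the surviving features.
selected = np.flatnonzero(Lasso(alpha=0.01).fit(X, y).coef_)
print(f"{selected.size} features selected")

# Classify on the selected features and report the AUC.
lda = LinearDiscriminantAnalysis().fit(X[:, selected], y)
print("AUC:", roc_auc_score(y, lda.decision_function(X[:, selected])))
```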

Evaluation of Volumetric Texture Features for Computerized Cell Nuclei Grading

  • Kim, Tae-Yun;Choi, Hyun-Ju;Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.11 no.12 / pp.1635-1648 / 2008
  • The extraction of important features in cancer cell image analysis is a key process in grading renal cell carcinoma. In this study, we applied three-dimensional (3D) texture feature extraction methods to cell nuclei images and evaluated their validity for computerized cell nuclei grading. Individual images of 2,423 cell nuclei were extracted from 80 renal cell carcinomas (RCCs) using confocal laser scanning microscopy (CLSM). First, we applied the 3D texture mapping method to render the volume of entire tissue sections. Then, we quantified the chromatin texture by calculating 3D gray-level co-occurrence matrices (3D GLCM) and 3D run-length matrices (3D GLRLM). Finally, to demonstrate the suitability of 3D texture features for grading, we performed a discriminant analysis. In addition, we conducted a principal component analysis to obtain optimized texture features. Automatic grading of cell nuclei using 3D texture features had an accuracy of 78.30%. Combining 3D textural and 3D morphological features improved the accuracy to 82.19%. As a comparative study, we also performed stepwise feature selection; using the four optimized features, accuracy improved further to 84.32%. Three-dimensional texture features therefore have potential as fundamental elements of a new nuclear grading system for accurate diagnosis and prognosis prediction.
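
As a rough sketch of the 3D co-occurrence statistics used above, the snippet below builds a 3D gray-level co-occurrence matrix for one voxel offset and derives a single Haralick-style energy value. The offset, the number of gray levels, and the random volume are assumptions for illustration.

```python
# Count co-occurrences of quantized voxel pairs along one 3D offset,
# then compute a texture-energy statistic from the normalized matrix.
import numpy as np

def glcm_3d(volume, levels=8, offset=(0, 0, 1)):
    q = np.minimum((volume / volume.max() * levels).astype(int), levels - 1)
    dz, dy, dx = offset
    nz, ny, nx = q.shape
    a = q[: nz - dz, : ny - dy, : nx - dx]   # reference voxels
    b = q[dz:, dy:, dx:]                     # voxels at the offset
    glcm = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    return glcm

vol = np.random.rand(16, 16, 16)             # toy nucleus volume
p = glcm_3d(vol).astype(float)
p /= p.sum()
print("energy:", np.sum(p ** 2))             # one 3D GLCM texture feature
```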

Three-Dimensional Shape Recognition and Classification Using Local Features of Model Views and Sparse Representation of Shape Descriptors

  • Kanaan, Hussein;Behrad, Alireza
    • Journal of Information Processing Systems / v.16 no.2 / pp.343-359 / 2020
  • In this paper, a new algorithm is proposed for three-dimensional (3D) shape recognition using local features of model views and their sparse representation. The algorithm starts with the normalization of 3D models and the extraction of 2D views from uniformly distributed viewpoints. The 2D views are then stacked over each other to form view cubes. The algorithm employs the descriptors of 3D local features in the view cubes, obtained by applying Gabor filters in various directions, as the initial features for 3D shape recognition. In the training stage, we store some 3D local features to build a prototype dictionary of local features. To extract an intermediate feature vector, we measure the similarity between the local descriptors of a shape model and the local features of the prototype dictionary. We then represent the intermediate feature vectors of the 3D models in the sparse domain to obtain the final descriptors of the models. Finally, support vector machine classifiers are used to recognize the 3D models. Experimental results on the Princeton Shape Benchmark database showed an average recognition rate of 89.7% using 20 views. We compared the proposed approach with state-of-the-art approaches, and the results showed the effectiveness of the proposed algorithm.
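
The intermediate-feature idea above can be sketched compactly: describe each model by its similarity to a prototype dictionary of local features, sparse-code that vector, and classify with an SVM. All dimensions, the cosine similarity, and the random data below are assumptions, not the paper's configuration.

```python
# Dictionary-similarity features + sparse coding + SVM (toy version).
import numpy as np
from sklearn.decomposition import sparse_encode
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
prototypes = rng.normal(size=(64, 128))      # prototype dictionary

def intermediate_feature(local_descs):
    """Best cosine similarity of any local descriptor to each prototype."""
    a = local_descs / np.linalg.norm(local_descs, axis=1, keepdims=True)
    b = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (a @ b.T).max(axis=0)             # one value per prototype

# Toy set: 40 shapes, each with 200 local descriptors, 4 classes.
feats = np.stack([intermediate_feature(rng.normal(size=(200, 128)))
                  for _ in range(40)])
labels = np.repeat(np.arange(4), 10)

# Final descriptor: sparse code of the intermediate vector over a basis.
basis = rng.normal(size=(32, 64))
codes = sparse_encode(feats, basis, algorithm="omp", n_nonzero_coefs=5)
clf = LinearSVC().fit(codes, labels)
print("training accuracy:", clf.score(codes, labels))
```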

Terrain Classification Using Three-Dimensional Co-occurrence Features (3차원 Co-occurrence 특징을 이용한 지형분류)

  • Jin Mun-Gwang;Woo Dong-Min;Lee Kyu-Won
    • The Transactions of the Korean Institute of Electrical Engineers D / v.52 no.1 / pp.45-50 / 2003
  • Texture analysis has been used efficiently in terrain classification, where features have conventionally been obtained in the 2D image domain. This paper suggests 3D co-occurrence texture features by extending the concept of co-occurrence to the 3D world. The suggested 3D features are described by a co-occurrence matrix built from the co-occurrence histogram of digital elevations at two contiguous positions. Practical construction of the co-occurrence matrix limits the number of levels of digital elevation, and if the elevations are quantized into that number of levels over the whole DEM (Digital Elevation Map), distinctive features cannot be obtained. To resolve this quantization problem, we employ a local quantization technique that preserves the variation of elevations. Experiments have been carried out to verify the proposed 3D co-occurrence features, and adding the suggested features significantly improves the classification accuracy.
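
The local quantization step is the key trick here, so a small sketch may help: each elevation is quantized against the min/max of its own neighborhood before the co-occurrence counts are accumulated. The window size, level count, and random DEM are assumptions for illustration.

```python
# Locally quantize a DEM, then build a co-occurrence matrix at a
# one-cell east offset from the quantized elevations.
import numpy as np

def local_quantize(dem, levels=8, win=9):
    """Quantize each cell relative to elevations in its local window."""
    half = win // 2
    padded = np.pad(dem, half, mode="edge")
    q = np.zeros(dem.shape, dtype=int)
    for i in range(dem.shape[0]):
        for j in range(dem.shape[1]):
            w = padded[i:i + win, j:j + win]
            lo, hi = w.min(), w.max()
            span = (hi - lo) or 1.0          # guard against flat windows
            q[i, j] = int((dem[i, j] - lo) / span * (levels - 1))
    return q

dem = np.cumsum(np.random.randn(64, 64), axis=0)   # toy rolling terrain
q = local_quantize(dem)
cooc = np.zeros((8, 8), dtype=int)
np.add.at(cooc, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
print(cooc)
```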

3D RECONSTRUCTION OF LANDSCAPE FEATURES USING LiDAR DATA AND DIGITAL AERIAL PHOTOGRAPH FOR 3D BASED VISIBILITY ANALYSIS

  • Song, Chul-Chul;Lee, Woo-Kyun;Jeong, Hoe-Seong;Lee, Kwan-Kyu
    • Proceedings of the KSRS Conference / 2007.10a / pp.548-551 / 2007
  • Among the components of the digital topographic maps officially used in Korea, only contours carry 3D values; buildings and trees, which are needed in landscape planning, do not. This study presents a series of processes for 3D reconstruction of landscape features such as terrain, buildings, and standing trees using LiDAR (Light Detection And Ranging) data and digital aerial photographs. The 3D reconstruction processes comprise 1) building a terrain model, 2) delineating the outlines of landscape features, 3) extracting height values, and 4) shaping and coloring the landscape features using the aerial photographs and a 3D virtual database. The LiDAR data and aerial photographs were acquired in November 2006 over a 50 km² area in Sorak National Park in the eastern part of Korea. The average scanning density of the LiDAR pulses was 1.32 points per square meter, and the aerial photographs, with RGB bands, have a 0.35 m × 0.35 m spatial resolution. Using the reconstructed 3D landscape features, visibility was analyzed as the trees grow over time and from different viewpoints. The area visible from a viewpoint could be estimated effectively by considering the 3D information of the landscape features. This process can be applied to landscape planning, such as setting building scale in consideration of the surrounding landscape features.
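
The visibility test at the end of this abstract can be sketched as a simple line-of-sight check over a height grid: sample points along the sight line and see whether the surface ever rises above it. The grid, sampling step, and observer height below are assumptions, not the study's implementation.

```python
# Line-of-sight visibility between two cells of a height grid.
import numpy as np

def visible(height, viewer, target, eye=1.7, samples=200):
    """True if the target cell can be seen from the viewer cell."""
    (r0, c0), (r1, c1) = viewer, target
    z0, z1 = height[r0, c0] + eye, height[r1, c1]
    for t in np.linspace(0.0, 1.0, samples)[1:-1]:
        r, c = r0 + (r1 - r0) * t, c0 + (c1 - c0) * t
        if height[int(round(r)), int(round(c))] > z0 + (z1 - z0) * t:
            return False                     # terrain blocks the sight line
    return True

h = np.zeros((50, 50))
h[25, :] = 60.0                              # a ridge across the middle
print(visible(h, (5, 25), (45, 25)))         # blocked by the ridge -> False
print(visible(h, (5, 25), (20, 25)))         # same side -> True
```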

Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images (멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합)

  • Hye-Lim Bae;Incheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.12 / pp.505-518 / 2023
  • 3D point cloud semantic segmentation is a computer vision task that divides a point cloud into different objects and regions by predicting the class label of each point. Existing 3D semantic segmentation models are limited in performing sufficient fusion of multi-modal features while preserving the characteristics of both the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. Therefore, in this paper, we propose MMCA-Net, a novel 3D semantic segmentation model using 2D-3D multi-modal features. The proposed model effectively fuses the two heterogeneous feature types, 2D visual features and 3D geometric features, by using an intermediate fusion strategy and a multi-modal cross-attention-based fusion operation. The proposed model also extracts context-rich 3D geometric features from input point clouds consisting of irregularly distributed points by adopting PTv2 as its 3D geometric encoder. We conducted both quantitative and qualitative experiments on the benchmark dataset ScanNetv2 to analyze the performance of the proposed model. In terms of the mIoU metric, the proposed model showed a 9.2% improvement over the PTv2 model, which uses only 3D geometric features, and a 12.12% improvement over the MVPNet model, which uses 2D-3D multi-modal features. These results demonstrate the effectiveness and usefulness of the proposed model.
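
The cross-attention fusion described here can be sketched with a generic attention block in which 3D point features form the queries and 2D visual features supply the keys and values. This is not the actual MMCA-Net code; all dimensions and names are illustrative.

```python
# Cross-modal fusion: 3D point features attend to 2D image features.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat3d, feat2d):
        # Each 3D point gathers relevant 2D visual context via attention.
        fused, _ = self.attn(query=feat3d, key=feat2d, value=feat2d)
        return self.norm(feat3d + fused)     # residual fusion

points = torch.randn(2, 4096, 256)           # (batch, n_points, channels)
pixels = torch.randn(2, 1024, 256)           # (batch, n_image_tokens, channels)
print(CrossModalFusion()(points, pixels).shape)   # torch.Size([2, 4096, 256])
```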

For the Association between 3D VAR Model and 2D Features

  • Kiuchi, Yasuhiko;Tanaka, Masaru;Fujiki, Jun;Mishima, Taketoshi
    • Proceedings of the IEEK Conference / 2002.07c / pp.1404-1407 / 2002
  • Although we see objects as 2D images through our eyes, we can reconstruct their shape and/or depth. To realize this ability with computers, a method is required that can estimate the 3D features of an object from 2D images. As a feature that represents 3D shapes effectively, the three-dimensional vector autoregressive (VAR) model has been proposed. If this feature could be associated with a feature of the 2D shape, the above aim might be achieved. On the other hand, quasi-moment features have been proposed to represent 2D shapes. As a first step toward associating these features, we constructed a real-time simulator that computes both features concurrently from object data (3D curves). The simulator can also rotate the object and estimate the rotation. The method using the 3D VAR model estimates the rotation correctly, but the estimation by quasi-moment features includes large errors. The likely reason is that the projected images consist of points only and are not large enough to estimate the correct 3D rotation parameters.
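
To make the 3D VAR feature concrete, the sketch below fits a first-order vector autoregressive model with intercept to a sampled 3D curve; the coefficient matrix is the kind of shape feature the abstract discusses. The order-1 model and the helix data are assumptions for illustration.

```python
# Fit p[k] ~= A @ p[k-1] + b to a 3D curve by least squares.
import numpy as np

t = np.linspace(0, 4 * np.pi, 200)
curve = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)  # toy 3D helix

X = np.hstack([curve[:-1], np.ones((len(curve) - 1, 1))])  # add intercept
Y = curve[1:]
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
A, b = coef[:3].T, coef[3]

resid = Y - (curve[:-1] @ A.T + b)
print("coefficient matrix A:\n", A)
print("mean residual norm:", np.linalg.norm(resid, axis=1).mean())
```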

Model-based 3-D object recognition using Hopfield neural network (Hopfield 신경회로망을 이용한 모델 기반형 3차원 물체 인식)

  • 정우상;송호근;김태은;최종수
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.5 / pp.60-72 / 1996
  • In this paper, a new model-based three-dimensional (3-D) object recognition method using a Hopfield network is proposed. To minimize the deformation of feature values under 3-D rotation, we select 3-D shape features and 3-D relational features that are rotation invariant. These feature values are then normalized to also be scale invariant. The input features are matched with model features through the optimization process of a Hopfield network in the form of a two-dimensional array of neurons. Experimental results on object classification and object matching with 3-D rotated, scale-changed, and partially occluded objects show the good performance of the proposed method.
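
A loose sketch of the matching idea follows: a 2D array of neurons in which neuron (i, j) means "input feature i matches model feature j", relaxed so that compatible pairs switch on while row and column competition suppresses multiple assignments. The compatibility term, constants, and damped update are assumptions, not the paper's exact formulation.

```python
# Hopfield-style relaxation for feature matching on a 2D neuron array.
import numpy as np

rng = np.random.default_rng(2)
model = rng.normal(size=(5, 4))              # 5 model features (4-D values)
scene = model[[2, 0, 4, 1, 3]] + 0.01 * rng.normal(size=(5, 4))

# Compatibility is high (near zero) when feature values are similar.
comp = -np.linalg.norm(scene[:, None, :] - model[None, :, :], axis=2)

V = np.full((5, 5), 0.5)                     # neuron states in [0, 1]
for _ in range(300):
    # Drive each neuron by compatibility minus row/column competition.
    inhib = V.sum(1, keepdims=True) + V.sum(0, keepdims=True) - 2 * V
    u = comp + 2.0 - 2.0 * inhib
    V = 0.7 * V + 0.3 / (1.0 + np.exp(-4.0 * u))   # damped sigmoid update
print(np.argmax(V, axis=1))                  # should recover [2 0 4 1 3]
```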

Geometric Snapping for 3D Triangular Meshes and Its Applications (3차원 삼각형 메쉬에 대한 기하학적 스내핑과 그의 응용)

  • 유관희;하종성
    • Journal of KIISE: Computer Systems and Theory / v.31 no.3_4 / pp.239-246 / 2004
  • Image snapping moves the cursor location to nearby features in an image, such as edges. In this paper, we propose geometric snapping for 3D triangular meshes, which extends image snapping. Like image snapping, geometric snapping moves the cursor naturally to a location that represents the main geometric features of the 3D triangular mesh. The movement of the cursor is based on approximate curvatures, which indicate geometric features on the 3D triangular mesh. The proposed geometric snapping can be applied to extract the main geometric features of 3D triangular meshes. Moreover, it can be applied to extract the geometric features of a tooth that are necessary for generating occlusal surfaces in dental prostheses.
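
A minimal sketch of geometric snapping: estimate curvature at each vertex by its angle deficit and move the cursor to the 1-ring neighbor where the curvature is largest. The angle-deficit proxy and the toy pyramid mesh are assumptions for illustration.

```python
# Snap the cursor to the nearby vertex with the strongest curvature.
import numpy as np

def angle_deficit(vertices, faces, vid):
    """Approximate curvature: 2*pi minus the incident triangle angles."""
    total = 0.0
    for f in faces:
        if vid in f:
            i = list(f).index(vid)
            p = vertices[f[i]]
            a = vertices[f[(i + 1) % 3]] - p
            b = vertices[f[(i + 2) % 3]] - p
            cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            total += np.arccos(np.clip(cosang, -1.0, 1.0))
    return 2 * np.pi - total

def snap(vertices, faces, neighbors, cursor):
    ring = list(neighbors[cursor]) + [cursor]
    return max(ring, key=lambda v: abs(angle_deficit(vertices, faces, v)))

# Toy square pyramid: four base corners and a sharp apex (vertex 4).
V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0.5, 0.5, 1.0]], float)
F = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4), (0, 2, 1), (0, 3, 2)]
nbrs = {0: {1, 2, 3, 4}, 1: {0, 2, 4}, 2: {0, 1, 3, 4}, 3: {0, 2, 4}, 4: {0, 1, 2, 3}}
print(snap(V, F, nbrs, 0))                   # snaps to the apex, vertex 4
```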

Geometric LiveWire and Geometric LiveLane for 3D Meshes (삼차원 메쉬에 대한 기하학 라이브와이어와 기하학 라이브레인)

  • Yoo Kwan-Hee
    • The KIPS Transactions: Part A / v.12A no.1 s.91 / pp.13-22 / 2005
  • Similarly to the edges defined in a 2D image, we can define geometric features that represent the boundaries of distinctive parts appearing on 3D meshes. Such geometric features have been used as basic primitives in applications such as mesh simplification, mesh deformation, and mesh editing. In this paper, we propose geometric livewire and geometric livelane for extracting geometric features in a 3D mesh, which are extensions of the livewire and livelane methods for images. In these methods, approximate curvatures are adopted to represent the geometric features of a 3D mesh, and the mesh itself is represented as a weighted directed graph in which cost functions define the weights of the edges. Using a well-known shortest-path algorithm on this weighted directed graph, we extract geometric features of the 3D mesh between points selected by a user. We also visualize the results of applying these techniques to extracting geometric features in general meshes modeled after human faces, cows, shoes, and single teeth.
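
The graph formulation here maps directly onto Dijkstra's algorithm: edges that run along high-curvature features get low costs, so the shortest path clings to the feature. The cost function and the tiny graph below are assumptions for illustration.

```python
# Geometric livewire as a shortest path over a curvature-weighted graph.
import heapq

def livewire_path(neighbors, curvature, src, dst):
    """Dijkstra where edges into high-curvature vertices are cheap."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in neighbors[u]:
            w = d + 1.0 / (1.0 + abs(curvature[v]))  # cheap along features
            if w < dist.get(v, float("inf")):
                dist[v], prev[v] = w, u
                heapq.heappush(heap, (w, v))
    path, v = [dst], dst
    while v != src:
        v = prev[v]
        path.append(v)
    return path[::-1]

# Toy 5-vertex graph: vertex 2 lies on a sharp feature.
nbrs = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 4], 4: [2, 3]}
curv = {0: 0.1, 1: 0.1, 2: 3.0, 3: 0.1, 4: 0.2}
print(livewire_path(nbrs, curv, 0, 4))       # -> [0, 2, 4], via the feature
```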