• Title/Summary/Keyword: 3-D object retrieval


A Sketch-based 3D Object Retrieval Approach for Augmented Reality Models Using Deep Learning

  • Ji, Myunggeun;Chun, Junchul
    • Journal of Internet Computing and Services / v.21 no.1 / pp.33-43 / 2020
  • Retrieving a 3D model from a database and simultaneously augmenting the retrieved model in an Augmented Reality (AR) system has become an important issue in building plausible AR environments conveniently. Sketch-based 3D object retrieval is an intuitive way to search for 3D objects, using human-drawn sketches as queries. In this paper, we propose a novel deep-learning-based approach to sketch-based 3D object retrieval for AR models. We introduce a method that combines a Sketch CNN, a Wasserstein CNN, and a Wasserstein center loss; in particular, the Wasserstein center loss learns the center of each object category and reduces the Wasserstein distance between that center and the features of the same category. The proposed retrieval and augmentation pipeline consists of three major steps. First, the Wasserstein CNN extracts features from 2D images rendered from various directions of the 3D object and forms the 3D descriptor by computing the Wasserstein barycenter of the per-view features. Second, the sketch features are extracted with a separate Sketch CNN. Finally, a sketch-based object matching method localizes a natural marker in the scene images to register the 3D virtual object in the AR system; using the detected marker, the retrieved 3D virtual object is augmented automatically. The experiments show that the proposed method retrieves and augments objects efficiently.
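
  The multi-view aggregation step described above can be illustrated with a small sketch (not the authors' code): per-view CNN features, assumed non-negative as after a ReLU, are treated as histograms and fused with an entropic Wasserstein barycenter via the POT library. The feature dimension, the bin-index ground metric, and the regularization value are illustrative assumptions.

      import numpy as np
      import ot  # POT: Python Optimal Transport (pip install pot)

      def object_descriptor(view_features, reg=1e-2):
          """Fuse per-view CNN features (n_views, d), assumed non-negative, into one 3D descriptor."""
          A = np.asarray(view_features, dtype=np.float64).T + 1e-12   # (d, n_views)
          A /= A.sum(axis=0, keepdims=True)                           # each view becomes a distribution
          d = A.shape[0]
          bins = np.arange(d, dtype=np.float64).reshape(-1, 1)
          M = ot.dist(bins, bins)                                     # ground cost between feature bins
          M /= M.max()
          return ot.bregman.barycenter(A, M, reg)                     # entropic Wasserstein barycenter, (d,)

  Sketch features from the Sketch CNN would then be compared against these per-object descriptors to rank the 3D models before the AR registration step.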

Web-based 3D Object Retrieval from User-drawn Sketch Query (스케치를 이용한 웹 환경에서의 3차원 모델 검색)

  • Song, Jonghun;Ju, Jae Ho;Yoon, Sang Min
    • Journal of KIISE / v.41 no.10 / pp.838-846 / 2014
  • Three-dimensional (3D) object retrieval from user-drawn sketch queries is an important research issue in pattern recognition and computer graphics, with applications in simulation, visualization, and Computer-Aided Design. The performance of a content-based 3D object retrieval system depends on the availability of effective descriptors and similarity measures for this kind of data. In this paper, we present a sketch-based 3D object retrieval system that extracts a hybrid edge descriptor robust against rotation and translation. Experimental results on a system built with HTML5 and WebGL show that the proposed sketch-based 3D object retrieval method is highly effective at searching for and ranking 3D objects according to the user's intention.
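
  The abstract does not spell out the hybrid edge descriptor, so the sketch below uses a classical stand-in with the same invariance goals: a Fourier descriptor of the dominant edge contour, translation-invariant by centering and rotation-robust by discarding phase. The Canny thresholds and coefficient count are arbitrary assumptions.

      import cv2
      import numpy as np

      def fourier_edge_descriptor(gray_image, n_coeffs=32):
          edges = cv2.Canny(gray_image, 100, 200)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
          boundary = max(contours, key=cv2.contourArea).squeeze(1)  # (N, 2) points of the largest contour
          z = boundary[:, 0] + 1j * boundary[:, 1]                  # complex boundary signal
          z = z - z.mean()                                          # translation invariance
          mags = np.abs(np.fft.fft(z))[1:n_coeffs + 1]              # dropping phase gives rotation robustness
          return mags / (mags[0] + 1e-12)                           # scale normalization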

Sketch-based 3D object retrieval using Wasserstein Center Loss (Wasserstein Center 손실을 이용한 스케치 기반 3차원 물체 검색)

  • Ji, Myunggeun;Chun, Junchul;Kim, Namgi
    • Journal of Internet Computing and Services / v.19 no.6 / pp.91-99 / 2018
  • Sketch-based 3D object retrieval is a convenient way to search for diverse 3D data using human-drawn sketches as queries. In this paper, we propose a new method that uses a Sketch CNN, a Wasserstein CNN, and a Wasserstein center loss for sketch-based 3D object retrieval. Specifically, the Wasserstein center loss learns the center of each object category and reduces the Wasserstein distance between that center and the features of the same category. The proposed retrieval proceeds as follows. First, the Wasserstein CNN extracts features from 2D images rendered from various directions of a 3D object and computes the Wasserstein barycenter of the per-view features as the 3D descriptor. Second, the sketch features are extracted with a separate Sketch CNN. Finally, the extracted 3D object features and the sketch features are trained jointly with the proposed Wasserstein center loss. To demonstrate the superiority of the proposed method, we evaluated it on two benchmark datasets, SHREC 13 and SHREC 14; the proposed method outperforms state-of-the-art methods on all standard metrics.
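
  A plausible formalization of the loss described above, assuming the usual center-loss structure with the squared Euclidean term replaced by a Wasserstein distance W (the exact weighting and any regularization terms are not given in the abstract):

      % f(x_i): Sketch CNN or Wasserstein CNN feature of sample x_i
      % c_{y_i}: learnable center of its category y_i;  L_cls: classification loss
      \mathcal{L}_{wc} = \sum_{i=1}^{m} W\bigl(f(x_i),\, c_{y_i}\bigr),
      \qquad
      \mathcal{L} = \mathcal{L}_{cls} + \lambda\, \mathcal{L}_{wc}

  Minimizing the second term pulls sketch features and 3D-object features of the same category toward a shared center, which is what makes the cross-domain matching feasible.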

Combining Shape and SIFT Features for 3-D Object Detection and Pose Estimation (효과적인 3차원 객체 인식 및 자세 추정을 위한 외형 및 SIFT 특징 정보 결합 기법)

  • Tak, Yoon-Sik;Hwang, Een-Jun
    • The Transactions of The Korean Institute of Electrical Engineers / v.59 no.2 / pp.429-435 / 2010
  • Three-dimensional (3-D) object detection and pose estimation from a single-view query image is an important issue in fields such as medical applications, robot vision, and manufacturing automation. However, most existing methods are not suitable for real-time environments, since object detection and pose estimation require extensive information and computation. In this paper, we present a fast 3-D object detection and pose estimation scheme based on images of objects captured from surrounding camera viewpoints. Our scheme has two parts. First, we detect database images similar to the query image based on a shape feature and compute candidate poses. Second, we perform accurate pose estimation for the candidate poses using the scale-invariant feature transform (SIFT) method. We carried out extensive experiments on our prototype system, achieved excellent performance, and report some of the results.
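
  The second, SIFT-based stage can be sketched with OpenCV as below (a simplified stand-in, not the authors' implementation): the query image is scored against each candidate database view, and the 0.75 ratio threshold is an assumed value.

      import cv2

      def sift_match_score(query_gray, candidate_gray, ratio=0.75):
          """Count distinctive SIFT correspondences between the query and one candidate view."""
          sift = cv2.SIFT_create()
          _, des1 = sift.detectAndCompute(query_gray, None)
          _, des2 = sift.detectAndCompute(candidate_gray, None)
          if des1 is None or des2 is None:
              return 0
          matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
          good = [p[0] for p in matches
                  if len(p) == 2 and p[0].distance < ratio * p[1].distance]  # Lowe's ratio test
          return len(good)

  The candidate pose whose view collects the most surviving correspondences would be kept as the final estimate.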

Visual Semantic Based 3D Video Retrieval System Using HDFS

  • Ranjith Kumar, C.;Suguna, S.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.8 / pp.3806-3825 / 2016
  • This paper presents a new framework for visual-semantic 3D video search and retrieval. Most recent 3D retrieval work focuses on shape analysis, such as object matching, classification, and retrieval, rather than on video retrieval itself. In this context, we investigate content-based 3D video retrieval (3D-CBVR) and combine a bag-of-visual-words (BOVW) representation with MapReduce in a 3D framework. Shape, color, and texture are combined for feature extraction: a combination of geometric and topological features describes shape, and a 3D co-occurrence matrix describes color and texture. After the local descriptors are extracted, a Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm generates the visual codebook. Matching is then performed with a soft weighting scheme and the L2 distance function, and the retrieved results are ranked according to their index values. To handle the large volume of data and to make retrieval efficient, HDFS is incorporated into the system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it produces accurate results while reducing time complexity.
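
  The soft-weighting and L2-matching steps can be sketched as follows. This is a simplification: the codebook here may come from any clustering method, whereas the paper uses TB-PCT, the HDFS layer is omitted, and the kernel width sigma is an assumed parameter.

      import numpy as np

      def soft_bovw_histogram(descriptors, codebook, sigma=1.0):
          """descriptors: (n, d) local features; codebook: (k, d) visual words."""
          d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, k) squared distances
          w = np.exp(-d2 / (2.0 * sigma ** 2))                                  # soft assignment weights
          w /= w.sum(axis=1, keepdims=True)                                     # each descriptor sums to 1
          hist = w.sum(axis=0)
          return hist / (np.linalg.norm(hist) + 1e-12)

      def rank_by_l2(query_hist, database_hists):
          return np.argsort(np.linalg.norm(database_hists - query_hist, axis=1))  # best match first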

3D Object Retrieval System Using 2D Shape Information (2차원 모양 정보를 이용한 3차원 물체 검색 시스템)

  • Lim, Sam;Choo, Hyon-Gon;Choi, Min-Seok;Kim, Whoi-Yul
    • Proceedings of the IEEK Conference / 2001.06d / pp.57-60 / 2001
  • In this paper, we propose a new 3D object retrieval system that uses the shape information of 2D silhouette images. 2D images rendered from different viewpoints are derived from each 3D model and linked to it. The shape feature of each 2D image is extracted with a region-based descriptor. In the experiments, we compare the results of the proposed system with those of a system using the curvature scale space (CSS) descriptor to show the efficiency of our approach.
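
  The abstract does not name the region-based descriptor, so the sketch below uses Hu moments of a binary silhouette as a simple region-based stand-in; each 3D model would be indexed by such descriptors computed from its rendered viewpoint silhouettes and matched against the query view.

      import cv2
      import numpy as np

      def silhouette_descriptor(binary_silhouette):
          """binary_silhouette: single-channel image, nonzero inside the object region."""
          hu = cv2.HuMoments(cv2.moments(binary_silhouette, binaryImage=True)).flatten()
          # Log scaling keeps the widely ranging Hu moments numerically comparable.
          return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)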


An Analysis of 3-D Object Characteristics Using Locally Linear Embedding (시점별 형상의 지역적 선형 사상을 통한 3차원 물체의 특성 분석)

  • Lee, Soo-Chahn;Yun, Il-Dong
    • Journal of Broadcast Engineering / v.14 no.1 / pp.81-84 / 2009
  • This paper explores the possibility of describing objects by how their shape changes as the viewpoint changes. Specifically, we sample shapes from various viewpoints of a 3-D model and apply dimension reduction with locally linear embedding. A low-dimensional distribution of points is constructed, and the characteristics of the object are described from this distribution. We also propose two 3-D retrieval methods, one applying the iterative closest point algorithm and the other applying the Fourier transform and measuring similarity with the modified Hausdorff distance, and present experimental results. The proposed method shows that the change of shape with viewpoint can describe the characteristics of an object.
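
  A minimal sketch of the two core pieces, assuming scikit-learn for the embedding: viewpoint-wise shape descriptors are mapped to a low-dimensional point distribution with locally linear embedding, and two such distributions are compared with the modified Hausdorff distance. The neighbor and dimension counts are assumed values.

      import numpy as np
      from sklearn.manifold import LocallyLinearEmbedding

      def viewpoint_embedding(view_descriptors, n_neighbors=8, n_components=2):
          """view_descriptors: (n_views, d) shape features, one row per sampled viewpoint."""
          lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_components)
          return lle.fit_transform(np.asarray(view_descriptors))      # (n_views, n_components) points

      def modified_hausdorff(A, B):
          """Modified Hausdorff distance between two point sets A (m, k) and B (n, k)."""
          d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
          return max(d.min(axis=1).mean(), d.min(axis=0).mean())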

3D Cross-Modal Retrieval Using Noisy Center Loss and SimSiam for Small Batch Training

  • Yeon-Seung Choo;Boeun Kim;Hyun-Sik Kim;Yong-Suk Park
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.3 / pp.670-684 / 2024
  • 3D Cross-Modal Retrieval (3DCMR) is the task of retrieving 3D objects regardless of modality, such as images, meshes, and point clouds. One of the most prominent methods for 3DCMR is the Cross-Modal Center Loss Function (CLF), which applies the conventional center-loss strategy to 3D cross-modal search and retrieval. Because CLF is based on center loss, its center features are susceptible to subtle changes in hyperparameters and to external influences; for instance, performance degrades when the batch size is too small. Furthermore, the Mean Squared Error (MSE) used in CLF cannot adapt to changes in batch size and, because it relies on a simple Euclidean distance between multi-modal features, is vulnerable to the data variations that occur during actual inference. To address the problems that arise from small-batch training, we propose a Noisy Center Loss (NCL) method to estimate optimal center features. In addition, we apply the simple Siamese representation learning method (SimSiam) during center estimation to compare projected features, making the proposed method robust to changes in batch size and variations in data. As a result, the proposed approach demonstrates improved performance on the ModelNet40 dataset compared to conventional methods.
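
  For reference, a minimal sketch of the baseline cross-modal center objective that the paper improves on; the noisy-center estimation and the SimSiam projection head are not reproduced, and the class count and feature dimension are placeholders.

      import torch
      import torch.nn as nn

      class CrossModalCenterLoss(nn.Module):
          """MSE pull of every modality's feature toward a shared learnable class center."""
          def __init__(self, num_classes, feat_dim):
              super().__init__()
              self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

          def forward(self, features_per_modality, labels):
              # features_per_modality: list of (batch, feat_dim) tensors, e.g. image, mesh, point cloud.
              target = self.centers[labels]                              # (batch, feat_dim) class centers
              loss = sum(((f - target) ** 2).mean() for f in features_per_modality)
              return loss / len(features_per_modality)

  With very small batches the per-batch center updates from such an MSE term become noisy, which is the failure mode the proposed NCL and SimSiam components target.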

3D Models Retrieval Using Shape Index and Curvedness (형태 인덱스와 정규 곡률을 이용한 3차원 모델 검색)

  • Park, Ki-Tae;Hwang, Hae-Jung;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.3 / pp.33-41 / 2007
  • Owing to the development of multimedia and communication technologies, multimedia data have become a common feature of information systems and continue to increase. This has led to the need for 3D shape retrieval systems that, given a query object, retrieve similar 3D objects, and therefore for shape descriptors that describe a 3D object effectively and efficiently. In this paper, a new shape-based descriptor for 3D model retrieval is proposed. The proposed descriptor uses curvedness together with the shape index, which provides local geometry information. The existing 3D Shape Spectrum Descriptor (3D SSD), defined as the histogram of shape index values, represents the characteristics of local shapes on the 3D surface; however, it does not represent them properly, because many points with different curvedness can share the same shape index value. We therefore add a feature that represents the degree of curvedness, improving the discriminating power of the shape descriptor. We evaluate the performance of the proposed method against the previous method, and the experimental results show that retrieval performance is improved by 23.6%.
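
  For reference, the two local-surface quantities the descriptor is built from, written with the principal curvatures k_1 >= k_2 at a surface point p (shown in the [0, 1] shape-index convention commonly used with the 3D SSD; sign conventions vary between papers):

      % k_1(p) \ge k_2(p): principal curvatures at surface point p
      SI(p) = \frac{1}{2} - \frac{1}{\pi}\arctan\frac{k_1(p) + k_2(p)}{k_1(p) - k_2(p)},
      \qquad
      C(p) = \sqrt{\frac{k_1(p)^{2} + k_2(p)^{2}}{2}}

  Points with different curvedness C can share the same shape index SI, which is exactly the ambiguity the added curvedness feature is meant to resolve.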

Efficient Image Retrieval using Minimal Spatial Relationships (최소 공간관계를 이용한 효율적인 이미지 검색)

  • Lee, Soo-Cheol;Hwang, Een-Jun;Byeon, Kwang-Jun
    • Journal of KIISE:Databases / v.32 no.4 / pp.383-393 / 2005
  • Retrieval of images from image databases by spatial relationship can be performed effectively through visual interface systems. In these systems, representing an image with 2D strings, which are derived from symbolic projections, provides an efficient and natural way to build an image index and is also an ideal representation for visual queries; retrieval is then reduced to matching two symbolic strings. However, with 2D-string representations the spatial relationships between the objects in an image may not be specified exactly, and ambiguities arise when retrieving images of 3D scenes. To remove ambiguous descriptions of object spatial relationships, this paper indexes images by their spatial relationships using a spatial location algebra for 3D image scenes. We also remove repetitive spatial relationships using several reduction rules; a reduction mechanism based on these rules can be used in query processing systems that retrieve images by content. This gives better precision and flexibility in image retrieval.
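
  The classical 2D-string idea the paper builds on can be sketched as below; this is a simplified illustration in which the paper's 3D spatial location algebra and reduction rules are not reproduced, and the object names and coordinates are hypothetical.

      def two_d_string(objects):
          """objects: dict mapping an object symbol to its (x, y) centroid in the image."""
          u = "<".join(name for name, _ in sorted(objects.items(), key=lambda kv: kv[1][0]))  # left to right
          v = "<".join(name for name, _ in sorted(objects.items(), key=lambda kv: kv[1][1]))  # increasing y
          return u, v

      # two_d_string({"tree": (10, 40), "house": (25, 30), "sun": (30, 5)})
      # -> ("tree<house<sun", "sun<house<tree"); retrieval then reduces to comparing such strings.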