• Title/Summary/Keyword: Panorama View

Search Results: 67

360 RGBD Image Synthesis from a Sparse Set of Images with Narrow Field-of-View (소수의 협소화각 RGBD 영상으로부터 360 RGBD 영상 합성)

  • Kim, Soojie;Park, In Kyu
    • Journal of Broadcast Engineering
    • /
    • v.27 no.4
    • /
    • pp.487-498
    • /
    • 2022
  • A depth map is an image that encodes distance information from 3D space on a 2D plane and is used in various 3D vision tasks. Most existing depth estimation studies use narrow-FoV images, in which a significant portion of the scene is lost. In this paper, we propose a technique for generating 360° omnidirectional RGBD images from a sparse set of narrow-FoV images. The proposed generative adversarial network (GAN)-based image generation model estimates the relative FoV within the full panorama from a small number of non-overlapping images and produces a 360° RGB image and depth map simultaneously. In addition, it achieves improved performance by using a network architecture that reflects the spherical characteristics of 360° images.
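As a rough illustration of the geometry a 360° synthesis model must respect, the sketch below maps a 3D viewing direction to equirectangular (360° panorama) pixel coordinates. The coordinate conventions (longitude across the width, latitude across the height, +z forward) are illustrative assumptions, not the paper's network:

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a 3D view direction to equirectangular pixel coordinates.

    Longitude (theta) spans [-pi, pi] across the image width;
    latitude (phi) spans [-pi/2, pi/2] across the height.
    Conventions here are assumptions for illustration only.
    """
    theta = math.atan2(x, z)                              # yaw: 0 looks down +z
    phi = math.asin(y / math.sqrt(x * x + y * y + z * z)) # pitch
    u = (theta / math.pi + 1.0) * 0.5 * (width - 1)
    v = (phi / (math.pi / 2) + 1.0) * 0.5 * (height - 1)
    return u, v
```

For example, the forward direction (0, 0, 1) lands at the center of a 1024×512 panorama.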

A study of the panoramic radiographic images of the buccolingual dilaceration (협설만곡치아의 파노라마방사선영상소견에 대한 연구)

  • Kim, Young-Ho;Jeong, Hwan-Seok;Huh, Kyung-Hoe;Yi, Won-Jin;Heo, Min-Suk;Lee, Sam-Sun;Choi, Soon-Chul
    • Imaging Science in Dentistry
    • /
    • v.40 no.1
    • /
    • pp.39-44
    • /
    • 2010
  • Purpose: To identify the appearance of buccolingually dilacerated teeth in panoramic views and to specify the characteristics of these teeth. Materials and Methods: 1,006 patients were examined on the basis of both panoramic and CT image criteria. Teeth with prosthodontic restorations or pathologic lesions, as well as mesiodistally dilacerated teeth, were excluded from the samples. Buccolingually dilacerated teeth were carefully identified in the CT images, and a total of 48 samples were selected. The severity of dilaceration was standardized by two criteria: the samples were divided into three groups and further categorized into six types according to their panoramic appearance: irregular view of the root apex area, clear blunting of the root tip, stepping on the root tip, double lamina dura or double tip, arrow-target shaped root (bull's eye), and normal view. Results: The teeth among the 48 buccolingually dilacerated samples were mandibular first and second molars, premolars, canines, and lateral incisors. For most of the selected teeth, dilaceration was directed toward the buccal and lingual sides in roughly equal proportions; canines and lateral incisors, however, were dilacerated almost exclusively toward the buccal side. In the panoramic views, buccolingually dilacerated roots appeared as an irregular view of the root apex area, clear blunting of the root tip, or stepping on the root tip, whereas normal teeth almost always showed a normal view. The more severe the dilaceration, the more frequently clear blunting of the root tip and stepping on the root tip were observed in the panoramic views. Conclusion: When stepping on the root tip or a double lamina dura is observed in a panoramic view, a buccolingually dilacerated root is much more probable.

Matching Points Filtering Applied Panorama Image Processing Using SURF and RANSAC Algorithm (SURF와 RANSAC 알고리즘을 이용한 대응점 필터링 적용 파노라마 이미지 처리)

  • Kim, Jeongho;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.4
    • /
    • pp.144-159
    • /
    • 2014
  • Techniques for making a single panoramic image from multiple pictures are widely studied in areas such as computer vision and computer graphics. Panoramic images can be applied to fields like virtual reality and robot vision that require wide-angle shots, as a useful way to overcome limitations such as the picture angle, resolution, and internal information of an image taken by a single camera. Notably, a panoramic image usually provides a better sense of immersion than a plain image. Although there are many ways to build a panoramic image, most of them extract feature points and matching points from each image, then apply the RANSAC (RANdom SAmple Consensus) algorithm to the matching points to estimate the homography matrix used to transform the images. The SURF (Speeded Up Robust Features) algorithm used in this paper to extract feature points relies on an image's grayscale and local spatial information. SURF is widely used because it is robust to changes in image scale and viewpoint and is faster than the SIFT (Scale Invariant Feature Transform) algorithm. However, SURF can produce erroneous matches when extracting feature points, which slows down the RANSAC stage and increases CPU usage. Erroneous matching points are a critical cause of degraded accuracy and clarity in the resulting panoramic image. In this paper, in order to minimize matching errors, we use the RGB pixel values of the 3×3 region around each matching point's coordinates to perform an intermediate filtering step that removes wrong matching points. We also present analysis and evaluation results on the improved speed of producing a panoramic image, CPU usage, the reduction rate of extracted matching points, and accuracy.
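The 3×3-neighbourhood filtering idea can be sketched in pure Python. The mean-colour comparison and the threshold value below are illustrative assumptions, not the paper's exact filter:

```python
def patch_mean_rgb(img, x, y):
    """Average RGB over the 3x3 neighbourhood centred at (x, y).

    img is a 2D list (rows) of (r, g, b) tuples; borders are clamped.
    """
    h, w = len(img), len(img[0])
    acc = [0, 0, 0]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            px = min(max(x + dx, 0), w - 1)
            py = min(max(y + dy, 0), h - 1)
            for c in range(3):
                acc[c] += img[py][px][c]
    return [v / 9.0 for v in acc]

def filter_matches(img_a, img_b, matches, threshold=30.0):
    """Keep only matches whose 3x3 mean colours agree within threshold.

    matches is a list of ((xa, ya), (xb, yb)) coordinate pairs;
    the threshold is an illustrative assumption.
    """
    kept = []
    for (xa, ya), (xb, yb) in matches:
        ma = patch_mean_rgb(img_a, xa, ya)
        mb = patch_mean_rgb(img_b, xb, yb)
        dist = sum((a - b) ** 2 for a, b in zip(ma, mb)) ** 0.5
        if dist <= threshold:
            kept.append(((xa, ya), (xb, yb)))
    return kept
```

A match whose neighbourhoods differ strongly in colour is discarded before RANSAC, so fewer outliers reach the homography estimation stage.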

A proposed image stitching method for web-based panoramic virtual reality for Hoseo Cyber Museum (호서 사이버 박물관: 웹기반의 파노라마 비디오 가상현실에 대한 효율적인 이미지 스티칭 알고리즘)

  • Khan, Irfan;Soo, Hong Song
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.2
    • /
    • pp.893-898
    • /
    • 2013
  • It is always a dream to recreate the experience of a particular place. Panoramic virtual reality is a technology for creating virtual environments in which the viewer can change the viewing angle and choose a path through a dynamic scene. In this paper we examine an efficient method for registering and stitching captured images. Two approaches are studied. First, dynamic programming is used to locate ideal key points and match them so that adjacent images can be merged, after which image blending provides smooth color transitions. In the second approach, FAST and SURF detectors find distinctive features in the images, a nearest-neighbor algorithm matches corresponding features, and a homography is estimated from the matched key points using RANSAC. The paper also covers automatically selecting (recognizing and comparing) the images to be stitched.
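The nearest-neighbour matching step of the second approach can be sketched in plain Python. The ratio-test threshold of 0.8 is an illustrative assumption (the abstract does not specify one):

```python
def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a ratio test.

    A descriptor in A is matched to its closest descriptor in B only
    when that distance is clearly smaller than the second-closest,
    which drops ambiguous matches before homography estimation.
    """
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The surviving index pairs would then feed a RANSAC homography fit, as in the paper's pipeline.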

Wild Fire Monitoring System using the Image Matching (영상 접합을 이용한 산불 감시 시스템)

  • Lee, Seung-Hee;Shin, Bum-Joo;Song, Bok-Deuk;An, Sun-Joung;Kim, Jin-Dong;Lee, Hak-Jun
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.6
    • /
    • pp.40-47
    • /
    • 2013
  • In the case of a wildfire, early detection is the most important factor in minimizing damage. In this paper, we propose an effective system that detects wildfires using a panoramic image from a single camera on a pan/tilt head, which enables the system to detect the size and location of a fire in its early stages. After converting the RGB input image to the YCrCb color space, a differential image is used to detect the movement of smoke and determine regions that may contain a forest fire. Histogram analysis of the fire flame is then used to determine the possibility of fire in these candidate regions. In addition, image matching and SURF were used to create the panoramic image. The system has several advantages. First, it is very economical because it needs only a single camera and a monitor. Second, it shows a live wide-angle view through the panoramic image. Third, it reduces the quantity of saved data by storing panoramic images.
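The colour-space conversion and frame-differencing steps can be sketched as follows. The BT.601 coefficients are the standard full-range RGB→YCrCb formula, and the motion threshold is an illustrative assumption, not a value from the paper:

```python
def rgb_to_ycrcb(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCrCb conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    return y, cr, cb

def motion_mask(prev, curr, thresh=15):
    """Per-pixel absolute difference of two luma (Y) frames, thresholded.

    Returns a 0/1 mask marking pixels that changed, e.g. moving smoke.
    The threshold value is an illustrative assumption.
    """
    return [[1 if abs(p - c) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]
```

Candidate fire regions would then be confirmed by histogram analysis of flame colour, per the paper.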

Study on bone healing process following cyst enucleation using fractal analysis (프랙탈 분석을 이용한 낭종 적출술 후 결손부 치유 양상에 관한 연구)

  • Lim, Hun-Jun;Lee, Seung-Soo;Kim, Won-Ki;Ohn, Byung-Hun;Choi, Sang-Moon;Oh, Se-Ri;Min, Seung-Ki;Lee, Jun
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.37 no.6
    • /
    • pp.477-482
    • /
    • 2011
  • Introduction: Bone regeneration of cystic defects of the jaws after cyst treatment requires lengthy healing periods. Generally, the bony changes are observed periodically through visual reading of radiographic images (panoramic and periapical views) together with clinical findings, but it is difficult to compare bony changes objectively using radiographic density alone. In addition, minute bony changes are hard to detect by visual reading, which can lead to subjective judgment. This study examined bone density after enucleation of jaw cysts using fractal analysis. Materials and Methods: Eighteen patients with a cystic lesion of the jaw were assessed. Panoramic radiographs were taken preoperatively, immediately postoperatively, and 1, 3, 6, and 12 months after cyst enucleation, and the images were analyzed by fractal analysis. Results: The mean fractal dimension increased immediately after surgery and at 3, 6, and 12 months postoperatively. The fractal dimensions at 6 and 12 months were similar to those of the controls. Conclusion: Fractal analysis can overcome the limitations of subjective visual reading when assessing bone regeneration after cyst enucleation.
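Fractal dimension of a radiograph region is commonly estimated by box counting; a minimal sketch on a binary trabecular-pattern image follows. The box sizes and binary input are illustrative assumptions, since the abstract does not detail its exact fractal procedure:

```python
import math

def box_count_dimension(pixels, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting (fractal) dimension of a binary image.

    pixels: 2D list of 0/1 values. For each box size s, count boxes
    containing at least one set pixel, then fit the slope of
    log N(s) versus log(1/s) by least squares.
    """
    h, w = len(pixels), len(pixels[0])
    xs, ys = [], []
    for s in sizes:
        count = 0
        for by in range(0, h, s):
            for bx in range(0, w, s):
                if any(pixels[y][x]
                       for y in range(by, min(by + s, h))
                       for x in range(bx, min(bx + s, w))):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A fully filled region yields a dimension of 2, while a sparser trabecular pattern yields a lower value, which is the quantity tracked across healing time points.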

FPGA Implementation of SURF-based Feature extraction and Descriptor generation (SURF 기반 특징점 추출 및 서술자 생성의 FPGA 구현)

  • Na, Eun-Soo;Jeong, Yong-Jin
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.4
    • /
    • pp.483-492
    • /
    • 2013
  • SURF is an algorithm that extracts feature points and generates their descriptors from input images; it is used in many applications such as object recognition, tracking, and panorama construction. Although SURF is known to be robust to changes in scale, rotation, and viewpoint, it is hard to run in real time due to its complex and repetitive computations. In our experiment on a 3.3 GHz Pentium, it takes 240 ms to extract feature points and create descriptors for a VGA image containing about 1,000 feature points, which means a software implementation cannot meet real-time requirements, especially in embedded systems. In this paper, we present a hardware architecture that computes the SURF algorithm very fast while consuming minimal hardware resources. The two key concepts of our architecture are parallelism (for repetitive computations) and efficient line-memory usage (obtained by analyzing memory access patterns). Synthesized on a Xilinx Virtex5LX330 FPGA, the design occupies 101,348 LUTs and 1,367 KB of on-chip memory, achieving 30 frames per second at a 100 MHz clock.
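Much of SURF's speed comes from evaluating box filters on an integral image, which is also why streaming line-memory hardware maps onto it well. A minimal software sketch of the integral image and its constant-time box sums (not the paper's hardware design):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows 0..y, cols 0..x."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y > 0 else 0)
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0,y0)-(x1,y1) in four lookups."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

Because any box filter costs only four memory reads regardless of its size, SURF's scale-space responses become cheap and regular, which is what parallel hardware exploits.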

2D Adjacency Matrix Generation using DCT for UWV Contents (DCT를 통한 UWV 콘텐츠의 2D 인접도 행렬 생성)

  • Xiaorui, Li;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.22 no.3
    • /
    • pp.366-374
    • /
    • 2017
  • As display devices such as TVs and digital signage grow larger, media types are shifting toward wider views such as UHD, panoramic, and jigsaw-like media. Panoramic and jigsaw-like media in particular are realized by stitching video clips captured by different cameras or devices. However, the stitching process takes a long time and is difficult to apply in real time. This paper therefore proposes finding a 2D adjacency matrix, which describes the spatial relationships among the video clips, in order to reduce the stitching time. Using the Discrete Cosine Transform (DCT), each frame of the video sources is converted from the spatial domain (2D) into the frequency domain. Based on these features, the 2D adjacency matrix of the images can be found, so that a spatial map of the images is built efficiently. This paper proposes a new method of generating the 2D adjacency matrix using the DCT for producing panoramic and jigsaw-like media from various individual video clips.
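The frequency-domain conversion at the heart of the method is the type-II DCT; a minimal 1D sketch with orthonormal scaling follows. How the paper compares coefficients across clips to fill the adjacency matrix is not reproduced here:

```python
import math

def dct2_1d(signal):
    """Type-II DCT of a 1D signal with orthonormal scaling.

    A constant signal concentrates all energy in the DC coefficient;
    edges and gradients spread energy into higher frequencies, which
    is the kind of signature used to compare neighbouring clips.
    """
    n = len(signal)
    out = []
    for k in range(n):
        s = sum(v * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, v in enumerate(signal))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out
```

A 2D DCT of a frame is just this transform applied along rows and then columns, so the 1D case carries the essential idea.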

Research on the Image Projection of Platform Screen X (스크린 X 영상 투영 방식의 특징 연구)

  • Shan, Xinyi;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.15 no.12
    • /
    • pp.503-508
    • /
    • 2017
  • Screen X is one of the premium large-format platforms. Screen X projects images and video along the side walls of the theater in sync with the front main screen, offering a different way of immersing the audience; the technique requires film-makers to take the two additional "screens" into account when making movies. The most distinctive feature of Screen X is that content is viewed across a range of 270 degrees: viewers can experience imagery beyond the frame of a conventional screen, which means Screen X can immerse audiences without 3D glasses. Based on the results of this study, content specifications and how they fit Screen X are examined, with the goal of maximizing visual effects. We hope that future researchers and industry professionals will benefit from this work.

3D Object Recognition Using Appearance Model Space of Feature Point (특징점 Appearance Model Space를 이용한 3차원 물체 인식)

  • Joo, Seong Moon;Lee, Chil Woo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.2
    • /
    • pp.93-100
    • /
    • 2014
  • 3D object recognition using only 2D images is difficult because the images vary with the camera's viewing direction. Because the SIFT algorithm defines local features of the projected images, recognition is particularly limited for input images with strong perspective transformation. In this paper, we propose an object recognition method that improves on SIFT by using several sequential images captured while rotating the 3D object around a rotation axis. We use the geometric relationship between adjacent images and merge several images into a generated feature space while recognizing the object. To isolate the effectiveness of the proposed algorithm, we keep the camera position and illumination conditions constant. This method can recognize appearances of 3D objects that the previous approach cannot recognize with the standard SIFT algorithm.