• Title/Summary/Keyword: Multi-view image set


Analysis of Affine Motion Compensation for Light Field Image Compression (라이트필드 영상 압축을 위한 Affine 움직임 보상 분석)

  • Huu, Thuc Nguyen;Duong, Vinh Van;Xu, Motong;Jeon, Byeungwoo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.216-217 / 2019
  • A Light Field (LF) image can be understood as a set of images captured simultaneously by a multi-view camera array. The changes among views can be modeled by a general motion model such as the affine motion model. In this paper, we study the impact of the affine coding tool of Versatile Video Coding (VVC) on LF image compression. Our experimental results show that the affine coding tool contributes only a small gain to overall LF image compression, roughly 0.2%-0.4%.
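
For reference, the general affine motion model mentioned in this abstract maps each pixel position $(x, y)$ to a displaced position $(x', y')$; VVC's affine tool uses a constrained 4-parameter variant of it (this is the standard textbook form, not an equation quoted from the paper):

$$
x' = a\,x + b\,y + e, \qquad y' = c\,x + d\,y + f,
$$

where the 4-parameter model is obtained by constraining $c = -b$ and $d = a$, so that only rotation, zoom, and translation between views are represented.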


KMTNet Supernova Project : Pipeline and Alerting System Development

  • Lee, Jae-Joon;Moon, Dae-Sik;Kim, Sang Chul;Pak, Mina
    • The Bulletin of The Korean Astronomical Society / v.40 no.1 / pp.56.2-56.2 / 2015
  • The KMTNet Supernova Project utilizes the large $2^{\circ}{\times}2^{\circ}$ field of view of the three KMTNet telescopes to search for and monitor supernovae, especially early-phase ones, and other optical transients. A key component of the project is to build a data pipeline with decent latency and an early alerting system that can handle the large volume of data in an efficient and prompt way while minimizing false alarms, which poses a significant challenge for the software development. Here we present the current status of their development. The pipeline utilizes a difference image analysis technique to discover candidate transient sources after correcting image distortion. In the early phase of the program, the final selection of transient sources from the candidates will mainly rely on multi-filter, multi-epoch, and multi-site screening as well as human inspection, and an interactive web-based system is being developed for this purpose. Eventually, machine learning algorithms, trained on the data set collected in the early phase, will be used to select true transient sources from the candidates.
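
To make the idea of difference image analysis concrete, here is a minimal, illustrative sketch (not the KMTNet pipeline itself): a reference frame is blurred to roughly match the science frame, subtracted, and significant residual peaks are flagged as transient candidates. The kernel width and threshold values are assumptions.

```python
# Illustrative sketch of difference-image transient detection (NOT the
# project's actual pipeline).
import numpy as np
from scipy import ndimage

def find_transient_candidates(science, reference, psf_sigma=1.5, nsigma=5.0):
    """Return (y, x) pixel positions of candidate transients.

    science, reference : 2-D arrays on the same pixel grid (already aligned
    and distortion-corrected, as the abstract describes).
    psf_sigma : assumed Gaussian kernel width used to roughly match the PSFs.
    nsigma    : detection threshold in units of the residual's robust scatter.
    """
    blurred_ref = ndimage.gaussian_filter(reference, sigma=psf_sigma)
    diff = science - blurred_ref

    # Robust noise estimate from the median absolute deviation.
    mad = np.median(np.abs(diff - np.median(diff)))
    sigma = 1.4826 * mad

    # Keep local maxima that stand out above the noise.
    peaks = (diff == ndimage.maximum_filter(diff, size=5)) & (diff > nsigma * sigma)
    return np.argwhere(peaks)
```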


Performance Evaluation of Pansharpening Algorithms for WorldView-3 Satellite Imagery

  • Kim, Gu Hyeok;Park, Nyung Hee;Choi, Seok Keun;Choi, Jae Wan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.4 / pp.413-423 / 2016
  • The WorldView-3 satellite sensor provides a panchromatic image with high spatial resolution and 8-band multispectral images. Therefore, a pansharpening technique, which sharpens the spatial resolution of the multispectral images by using the high-spatial-resolution panchromatic image, is essential for various applications of WorldView-3 images based on image interpretation and processing. Existing pansharpening algorithms tend to trade off spectral distortion against spatial enhancement. In this study, we applied six pansharpening algorithms to WorldView-3 satellite imagery and assessed the quality of the pansharpened images qualitatively and quantitatively. We also analyzed the effect of the time lag of each multispectral band during the pansharpening process. Quantitative assessment of the pansharpened images was performed by comparing ERGAS (Erreur Relative Globale Adimensionnelle de Synthèse), SAM (Spectral Angle Mapper), Q-index, and sCC (spatial Correlation Coefficient) on a real data set. In the experiment, the quantitative results obtained by the MRA (Multi-Resolution Analysis)-based algorithm were better than those of the CS (Component Substitution)-based algorithm; nevertheless, the qualitative quality of the spectral information was similar for both. In addition, images obtained by the CS-based algorithm, and those obtained by dividing the bands between the two multispectral sensors, were sharper in terms of spatial quality than those obtained by the other pansharpening algorithms. Therefore, the pansharpening method for WorldView-3 images needs to be chosen according to the intended remote sensing application, such as spectral- or spatial-information-based applications.
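
Two of the quality metrics named in this abstract have compact standard definitions; the sketch below gives minimal reference implementations of SAM and ERGAS (textbook formulas, not the authors' code; the default resolution ratio of 4 is an assumption).

```python
# Minimal reference implementations of SAM and ERGAS (standard definitions,
# not the authors' exact evaluation code).
import numpy as np

def sam_degrees(reference, fused):
    """Mean Spectral Angle Mapper in degrees.

    reference, fused : arrays of shape (H, W, B) with B spectral bands.
    """
    ref = reference.reshape(-1, reference.shape[-1]).astype(float)
    fus = fused.reshape(-1, fused.shape[-1]).astype(float)
    dot = np.sum(ref * fus, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(fus, axis=1)
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

def ergas(reference, fused, ratio=4):
    """ERGAS; `ratio` is the PAN/MS spatial resolution ratio (assumed 4 here)."""
    ref = reference.reshape(-1, reference.shape[-1]).astype(float)
    fus = fused.reshape(-1, fused.shape[-1]).astype(float)
    rmse = np.sqrt(np.mean((ref - fus) ** 2, axis=0))    # per-band RMSE
    means = np.mean(ref, axis=0)                          # per-band mean
    return 100.0 / ratio * np.sqrt(np.mean((rmse / means) ** 2))
```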

A Study on Use of Advanced Digital Contents of Cultural Archetype in Architecture (건축문화원형의 디지털콘텐츠화 연구)

  • Chang, Young-Hee
    • Journal of The Korean Digital Architecture Interior Association / v.6 no.2 / pp.31-38 / 2006
  • Architects working in cultural technology have a particular obligation to develop cultural heritage into industrial resources. From an aesthetic point of view, digital content can be put to a multiplicity of practical uses the moment it is digitally converted; such content becomes a practical resource rather than a mere duplication of the original. Developing the cultural archetype into a model best suited for one-source multi-use is the core of the project. To turn an archetype of traditional Korean architecture into a creative source, one should develop both a faithful reproduction and a practical model harmonized with the image set. In addition, this study offers a development process for digital content of the cultural archetype, grounded in the underlying ideas and imagination of our architectural culture and drawing on both the cultural archetype and digital content technology.


Cooperative recognition using multi-view images

  • Kojoh, Toshiyuki;Nagata, Tadashi;Zha, Hong-Bin
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1993.10b / pp.70-75 / 1993
  • We present a method of 3-D object recognition using multi-view images. The recognition process is executed as follows. Object models are generated as prior knowledge and stored on a computer. To extract features of the object to be recognized, three CCD cameras set at the vertices of a regular triangle take images of the object. The object is recognized by comparing the extracted features with the generated models. In general, 3-D object recognition is difficult because of the following problems: how to establish correspondence between the stereo images, how to generate and store an object model suited to the recognition process, and how to effectively collate the information obtained from the input images. We resolve these problems by collating on the basis of viewpoint-independent features, generating object models by enumerating candidate models at an early recognition stage, and executing a tight cooperative process among the results gained by analyzing each image. We carried out experiments on real images of polyhedral objects, and the results reveal the usefulness of the proposed method.
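
As a rough illustration of the cooperative idea described in this abstract (candidate models proposed per view, then collated across the three cameras), the toy sketch below uses hypothetical model names and feature-distance scores; it is not the authors' algorithm.

```python
# Toy illustration of cooperative multi-view recognition: each camera view
# scores candidate object models independently, and the views then cooperate
# by intersecting their top candidates. Model names and scores are hypothetical.
from typing import Dict, List

def candidates_per_view(scores: Dict[str, float], keep: int = 2) -> List[str]:
    """Keep the `keep` best-scoring candidate models for one view (lower is better)."""
    return sorted(scores, key=scores.get)[:keep]

def cooperative_recognition(view_scores: List[Dict[str, float]]) -> str:
    """Return the model that survives in every view's candidate list,
    breaking ties by the summed score across views."""
    surviving = set.intersection(*(set(candidates_per_view(s)) for s in view_scores))
    return min(surviving, key=lambda m: sum(s[m] for s in view_scores))

# Example: three views, each comparing extracted features against three models.
views = [
    {"cube": 0.2, "prism": 0.9, "pyramid": 1.4},
    {"cube": 0.3, "prism": 0.5, "pyramid": 1.1},
    {"cube": 0.4, "prism": 1.0, "pyramid": 0.9},
]
print(cooperative_recognition(views))  # -> "cube"
```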


Bilayer Segmentation of Consistent Scene Images by Propagation of Multi-level Cues with Adaptive Confidence (다중 단계 신호의 적응적 전파를 통한 동일 장면 영상의 이원 영역화)

  • Lee, Soo-Chahn;Yun, Il-Dong;Lee, Sang-Uk
    • Journal of Broadcast Engineering / v.14 no.4 / pp.450-462 / 2009
  • So far, many methods for segmenting single images or video have been proposed, but few methods have dealt with multiple images with analogous content. These images, which we term consistent scene images, include concurrent images of a scene and gathered images of a similar foreground, and may be collectively utilized to describe a scene or as input images for multi-view stereo. In this paper, we present a method to segment these images with minimal user input, specifically the manual segmentation of one image, by iteratively propagating information via multi-level cues with adaptive confidence depending on the nature of the images. The propagated cues are used as the basis for computing multi-level potentials in an MRF framework, and segmentation is done by energy minimization. Both cues and potentials are classified as low-, mid-, or high-level depending on whether they pertain to pixels, patches, or shapes. A major aspect of our approach is utilizing mid-level cues to compute low- and mid-level potentials, and high-level cues to compute low-, mid-, and high-level potentials, thereby making use of the inherent information. Through this process, the proposed method attempts to maximize the amount of both extracted and utilized information in order to maximize the consistency of the segmentation. We demonstrate the effectiveness of the proposed method on several sets of consistent scene images and provide a comparison with results based only on mid-level cues [1].
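
For context, segmentation by energy minimization in an MRF framework, as this abstract describes, generically minimizes an energy of the following form (the split of the unary term into three levels is a schematic reading of the abstract, not the paper's exact formulation):

$$
E(\mathbf{x}) = \sum_{i} \left( U^{\mathrm{low}}_i(x_i) + U^{\mathrm{mid}}_i(x_i) + U^{\mathrm{high}}_i(x_i) \right) + \lambda \sum_{(i,j)\in\mathcal{N}} V_{ij}(x_i, x_j),
$$

where $x_i \in \{0,1\}$ is the foreground/background label of pixel $i$, the unary potentials $U$ encode the propagated pixel-, patch-, and shape-level cues, and the pairwise term $V$ enforces smoothness between neighboring pixels in $\mathcal{N}$.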

A dual path encoder-decoder network for placental vessel segmentation in fetoscopic surgery

  • Yunbo Rao;Tian Tan;Shaoning Zeng;Zhanglin Chen;Jihong Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.15-29 / 2024
  • A fetoscope is an optical endoscope that is often applied in fetoscopic laser photocoagulation to treat twin-to-twin transfusion syndrome. During an operation, the clinician needs to observe the abnormal placental vessels through the endoscope in order to guide the procedure. However, the low-quality imaging and narrow field of view of the fetoscope increase the difficulty of the operation. Accurate placental vessel segmentation of fetoscopic images can assist fetoscopic laser photocoagulation and help identify the abnormal vessels. This study proposes a method to solve the above problems. A novel encoder-decoder network with a dual-path structure is proposed to segment the placental vessels in fetoscopic images. In particular, we introduce a channel attention mechanism and a continuous convolution structure to obtain multi-scale features with their weights. Moreover, a switching connection is inserted between the corresponding blocks of the two paths to strengthen their relationship. According to the results of a set of blood vessel segmentation experiments conducted on a public fetoscopic image dataset, our method achieves higher scores than the current mainstream segmentation methods, raising the Dice similarity coefficient, intersection over union, and pixel accuracy by 5.80%, 8.39%, and 0.62%, respectively.
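
Channel attention mechanisms of the kind mentioned in this abstract are commonly implemented in the squeeze-and-excitation style sketched below; this is a generic block written under that assumption, not the authors' exact module.

```python
# Generic squeeze-and-excitation style channel attention block (an assumed
# form of the "channel attention mechanism" named in the abstract).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pool
        self.fc = nn.Sequential(                     # excitation: bottleneck MLP
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.fc(self.pool(x))              # per-channel weights in (0, 1)
        return x * weights                           # reweight the feature maps

# Usage: attn = ChannelAttention(64); y = attn(torch.randn(1, 64, 32, 32))
```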

An Approach to Measurement of Water Quality Factors and its Application Using NOAA satellite Data

  • Jang, Dong-Ho;Jo, Gi-Ho;Chi, Kwang-Hoon
    • Proceedings of the KSRS Conference / 1999.11a / pp.363-370 / 1999
  • Remotely sensed data is regarded as a potentially effective data source for measuring water quality and for monitoring environmental change in water bodies. In this study, we measured spectral reflectance using the multi-spectral bands of the low resolution camera (LRC), which will be loaded on the OSMI multi-purpose satellite (KOMPSAT) scheduled to be launched in 1999, in order to use the data for analyzing water pollution. We also investigated the possibility of extracting water quality factors of water bodies from remotely sensed low resolution data such as NOAA/AVHRR. Shiwha District and Sang-Sam Lake were set up as the study areas. We measured the spectral reflectance of the water surface to analyze the radiance of the water bodies in low resolution spectral bands and attempted to analyze the water quality factors using the radiance features from other remotely sensed data such as NOAA/AVHRR. As the method of this study, first, we measured the spectral reflectance of the water surface using SFOV (Single Field of View) measurements to obtain the reflectance for water quality analysis in every channel of the LRC spectral band (0.4-0.9 ${\mu}m$). Second, we investigated the usefulness of the ground truth data and the LRC data by measuring the spectral reflectance of every water quality factor. Third, we analyzed the water quality factors using the radiance features from other remotely sensed data such as NOAA/AVHRR. We carried out ratio processing, having selected chlorophyll-a and suspended sediments as the primary water quality factors. The results of the analysis are as follows. First, the amount of pollutants in Shiwha Lake has been increasing every year since 1987 owing to eutrophication. Second, according to the reflectance measurements, chlorophyll-a showed high spectral reflectance mainly around 0.52 ${\mu}m$ in the green spectral band, and turbidity showed high spectral reflectance at 0.57 ${\mu}m$, whereas suspended sediments showed strong absorption at 0.8 ${\mu}m$. Third, distribution maps of chlorophyll-a and suspended sediments could be produced from the water quality analysis using NOAA/AVHRR data.
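
As a rough illustration of the ratio processing mentioned in this abstract, a reflectance ratio between a green band (near the 0.52 ${\mu}m$ chlorophyll-a peak) and a longer-wavelength reference band can serve as a simple proxy index; the band choices and interpretation below are assumptions for illustration, not the study's calibration.

```python
# Illustrative band-ratio processing for a water quality proxy (assumed band
# choices; NOT the calibration used in the study).
import numpy as np

def band_ratio_index(green_band: np.ndarray, nir_band: np.ndarray) -> np.ndarray:
    """Simple reflectance ratio, taken here as a rough chlorophyll-a proxy
    following the green-band reflectance peak noted in the abstract."""
    eps = 1e-6                      # avoid division by zero over dark pixels
    return green_band / (nir_band + eps)

# Example with synthetic reflectance grids (values in [0, 1]).
green = np.random.rand(100, 100) * 0.1
nir = np.random.rand(100, 100) * 0.05 + 0.01
index = band_ratio_index(green, nir)
```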


The Correction Factor of Sensitivity in Gamma Camera - Based on Whole Body Bone Scan Image - (감마카메라의 Sensitivity 보정 Factor에 관한 연구 - 전신 뼈 영상을 중심으로 -)

  • Jung, Eun-Mi;Jung, Woo-Young;Ryu, Jae-Kwang;Kim, Dong-Seok
    • The Korean Journal of Nuclear Medicine Technology / v.12 no.3 / pp.208-213 / 2008
  • Purpose: The whole body bone scan is generally known as one of the most frequently performed exams in the nuclear medicine field. Asan Medical Center uses various gamma camera systems - manufactured by PHILIPS (PRECEDENCE, BRIGHTVIEW), SIEMENS (ECAM, ECAM signature, ECAM plus, SYMBIA T2), and GE (INFINIA) - to perform whole body scans. However, the sensitivity of each camera is not the same, which makes consistent diagnosis of patients difficult. Our purpose is therefore to exclude uncontrollable factors when performing whole body bone scans and to correct controllable factors such as the inherent sensitivity of each gamma camera. In this study, we measure each gamma camera's sensitivity and investigate reasonable correction factors for the whole body bone scan so that a patient's condition can be followed up using different gamma cameras. Materials and Methods: We used a $^{99m}Tc$ flood phantom, prepared according to the IAEA recommendation based on the typical count rate of a whole body scan, and measured count rates on the various gamma cameras - PRECEDENCE, BRIGHTVIEW, ECAM, ECAM signature, ECAM plus, INFINIA - in the nuclear medicine department of Asan Medical Center. For the sensitivity measurement, every gamma camera was equipped with an LEHR collimator (Low Energy High Resolution parallel collimator), the $^{99m}Tc$ energy window was set to about 15% around the 140 keV photopeak, and data were acquired for 60 sec and 120 sec on all gamma cameras. In order to verify whether the calculated correction factors can be applied to the whole body bone scan, we conducted whole body bone scans on 27 patients and compared and analyzed the results. Results: In the $^{99m}Tc$ flood phantom experiment, the sensitivity of the ECAM plus was the highest, followed in order by the ECAM signature, SYMBIA T2, ECAM, BRIGHTVIEW, INFINIA, and PRECEDENCE. The derived sensitivity correction factors express each gamma camera's relative sensitivity normalized to that of the ECAM (ECAM plus 1.07, ECAM signature 1.05, SYMBIA T2 1.03, ECAM 1.00, BRIGHTVIEW 0.90, INFINIA 0.83, PRECEDENCE 0.72). When comparing the correction factors yielded by the $^{99m}Tc$ experiment with those yielded by the whole body bone scans, the difference was statistically insignificant (p<0.05) for whole body bone scan diagnosis. Conclusion: In diagnosing bone metastasis in cancer patients, the whole body bone scan is conducted as a follow-up test owing to its advantages (high sensitivity, non-invasiveness, ease of performance). As a follow-up study, however, it is hard to perform every whole body bone scan on the same gamma camera, and even when the same gamma camera is used, the change in the equipment's performance over time has to be considered. We therefore expect that applying sensitivity correction factors to patients who undergo whole body bone scans regularly will add consistency to their diagnosis.
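
As a simple illustration of how such relative sensitivity correction factors can be derived and applied: the factors listed below are the ones reported in the abstract, but the example count rates and the application convention (dividing by the factor to obtain ECAM-equivalent counts) are assumptions for illustration only.

```python
# Deriving and applying relative sensitivity correction factors normalized to
# the ECAM. Factor values are from the abstract; the rest is illustrative.
REFERENCE_CAMERA = "ECAM"

correction_factors = {
    "ECAM plus": 1.07, "ECAM signature": 1.05, "SYMBIA T2": 1.03,
    "ECAM": 1.00, "BRIGHTVIEW": 0.90, "INFINIA": 0.83, "PRECEDENCE": 0.72,
}

def derive_factors(count_rates: dict, reference: str = REFERENCE_CAMERA) -> dict:
    """Derive relative factors from measured flood-phantom count rates (cps)."""
    ref_rate = count_rates[reference]
    return {camera: rate / ref_rate for camera, rate in count_rates.items()}

def to_reference_equivalent(counts: float, camera: str) -> float:
    """Convert counts measured on `camera` to ECAM-equivalent counts."""
    return counts / correction_factors[camera]

# Hypothetical flood-phantom count rates, just to show the derivation step.
example_rates = {"ECAM": 1000.0, "PRECEDENCE": 720.0}
print(derive_factors(example_rates))                # {'ECAM': 1.0, 'PRECEDENCE': 0.72}
print(round(to_reference_equivalent(1_000_000, "PRECEDENCE")))  # ECAM-equivalent counts
```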
