• Title/Abstract/Keyword: observation fusion


DEVELOPMENT OF DATA INTEGRATION AND INFORMATION FUSION INFRASTRUCTURE FOR EARTH OBSERVATION

  • Takagi Mikio;Kitsuregawa Masaru;Shibasaki Ryousuke;Ninomiya Seishi;Koike Toshio
    • 대한원격탐사학회:학술대회논문집 / Proceedings of ISRS 2005 / pp.22-25 / 2005
  • The 10-Year Implementation Plan for a Global Earth Observation System of Systems (GEOSS), endorsed at the Third Earth Observation Summit in Brussels in February 2005, emphasizes the importance of data management facilities for the diverse, large-volume Earth observation data coming from inhomogeneous information sources. A three-year research plan addressing this key target of GEOSS has just been approved as a first step by the Japanese government. The goals of this research are: (1) to develop a data management core system consisting of data integration and information fusion functions as well as interoperability and information service functions; (2) to establish data and information flows between data providers and users; and (3) to promote application studies of data integration and information fusion, especially in the fields of weather forecasting, flood forecasting, agricultural management, and climate variability and change. The research group involves leading scientists in information science and technology who have been developing giant data archive servers, storage area networks, metadata models, and ontologies for Earth observation. They cooperate closely with scientists in Earth sciences, water resources management, and agriculture, establishing an effective collaborative research framework.


Viola속 식물의 원형질체 및 융합세포의 전자현미경 관찰 (Electron Microscopic Observations of Protoplast and Fusion Cell of Viola Species)

  • 정용모;임현희;손병구;서정해;정정한;권오창
    • 생명과학회지 / Vol. 7, No. 4 / pp.282-288 / 1997
  • To obtain basic information for the development of the genus Viola, the ultrastructure and the electrofusion process between two kinds of protoplasts, from wild Viola callus cells and pansy mesophyll cells, were observed with scanning electron microscopy (SEM) and transmission electron microscopy (TEM). SEM observation of the wild Viola callus protoplasts and the pansy mesophyll protoplasts confirmed that their cell walls had been removed completely. A knob-like formation was observed on the enlarged surface of the Viola callus protoplasts, and net-like chloroplasts were observed on the surface of the pansy mesophyll protoplasts; in the SEM observation of the pansy mesophyll protoplasts, chloroplasts devoid of membrane were visible on the protoplast surface. A pearl chain was formed by applying an AC field of 200 V/cm at 1.0 MHz for 43 sec. Lysis of the plasma membranes and the fusion process occurred on applying a 1,600 V/cm DC pulse twice for 1 sec. One to two hours after the DC pulse application, the two protoplasts were observed to have fused completely into one cell. In the TEM observation of the fused cell, many small vacuoles were located in the fusion area of the two protoplasts. Two distinct regions were observed during the fusion process: in one region a nucleus was found, while in the other both a nucleus and a nucleolus were found.


Distributed Fusion Estimation for Sensor Network

  • Song, Il Young;Song, Jin Mo;Jeong, Woong Ji;Gong, Myoung Sool
    • 센서학회지 / Vol. 28, No. 5 / pp.277-283 / 2019
  • In this paper, we propose a distributed fusion estimation method for sensor networks using a receding-horizon strategy. Communication channels are modelled as Markov jump systems, and a posterior probability distribution of the channel characteristics is calculated and incorporated into the filter, allowing the distributed fusion estimator to handle observation path-loss situations automatically. To implement distributed fusion estimation, a Kalman-consensus filter is then used to obtain the average consensus based on the estimates of sensors randomly distributed across the network. The advantages of the proposed algorithms are verified using a large-scale sensor network example.
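The two-stage structure described in this abstract — each sensor runs its own local filter, then the network averages the estimates toward a common value — can be sketched as follows. This is a minimal illustration of the Kalman-consensus idea with a scalar random-walk state, a fixed network topology, and a hand-picked consensus gain `eps`; it is not the paper's receding-horizon, Markov-jump-channel implementation.

```python
import numpy as np

def local_kalman_update(x, P, z, q=0.01, r=1.0):
    """One predict/update cycle of a scalar Kalman filter (random-walk state model)."""
    x_pred, P_pred = x, P + q           # predict: state is a random walk
    K = P_pred / (P_pred + r)           # Kalman gain
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

def consensus_step(estimates, adjacency, eps=0.2):
    """One average-consensus iteration: each node nudges its estimate toward
    those of its neighbours, driving the network to a common fused value."""
    estimates = np.asarray(estimates, dtype=float)
    new = estimates.copy()
    for i in range(len(estimates)):
        nbrs = np.nonzero(adjacency[i])[0]
        new[i] += eps * np.sum(estimates[nbrs] - estimates[i])
    return new
```

Interleaving a local Kalman update at every node with a consensus step over a connected graph makes all node estimates track the common state while staying close to one another.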

검출기 융합에 기반을 둔 확률가정밀도 (PHD) 필터를 적용한 다중 객체 추적 방법 (Fusion of Local and Global Detectors for PHD Filter-Based Multi-Object Tracking)

  • 윤주홍;황영배;최병호;윤국진
    • 제어로봇시스템학회논문지 / Vol. 22, No. 9 / pp.773-777 / 2016
  • In this paper, a novel multi-object tracking method for tracking an unknown number of objects is proposed. To handle multiple object states and uncertain observations efficiently, a probability hypothesis density (PHD) filter is adopted and modified. The PHD filter is capable of reducing false positives, managing object appearances and disappearances, and estimating multiple object trajectories in a unified framework. Although the PHD filter is robust in cluttered environments, it is vulnerable to false negatives. For this reason, we propose to exploit local observations in the random finite set (RFS) of the observation model. Each local observation is generated by an online-trained object detector; its main purpose is to deal with false negatives in the PHD filtering procedure. Experimental results demonstrate that the proposed method robustly tracks multiple objects under practical conditions.

협력 인지 통신 네트워크에서 새로운 증분형 스펙트럼 검출 (Novel Incremental Spectrum Sensing in Cooperative Cognitive Radio Networks)

  • 하 뉘엔 부;공형윤
    • 한국통신학회논문지 / Vol. 35, No. 9A / pp.859-867 / 2010
  • This paper proposes a novel spectrum sensing scheme. First, the fusion center (FC) receives the primary user's signal and uses it to decide whether the primary user is present. If the FC cannot reach a final decision in this step, the local observation results of the secondary users are required. The FC then receives the local observation result of only the secondary user whose received signal energy is largest, and transmits the final decision to each secondary user. Because only a single secondary user participates in spectrum sensing, the proposed scheme reduces the number of bits spent on unnecessary sensing reports. It therefore saves secondary-user power during primary-user detection and reduces the bandwidth consumed by unnecessary local-observation transmissions. By obtaining the detection and false-alarm probabilities through Monte-Carlo simulation, we show that the proposed scheme outperforms the conventional one.
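The incremental decision rule in this abstract — the fusion center decides alone when its own energy statistic is conclusive, and otherwise polls only the strongest secondary user — can be sketched with a plain energy detector. The double-threshold values (`lo`, `hi`) and the signal model are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def energy(samples):
    """Energy-detector test statistic: average received signal power."""
    return np.mean(np.abs(samples) ** 2)

def incremental_sensing(fc_samples, su_samples_list, lo=0.8, hi=1.6):
    """Two-stage sensing: the fusion center (FC) decides alone when its energy
    statistic falls outside the ambiguous region [lo, hi]; otherwise only the
    secondary user with the largest received energy reports its local decision."""
    e_fc = energy(fc_samples)
    if e_fc >= hi:
        return 1                        # FC alone: primary user present
    if e_fc <= lo:
        return 0                        # FC alone: channel vacant
    # ambiguous region: poll only the strongest secondary user
    energies = [energy(s) for s in su_samples_list]
    best = int(np.argmax(energies))
    return int(energies[best] >= (lo + hi) / 2)
```

Only one secondary user's report crosses the channel in the ambiguous case, which is the source of the bandwidth and power savings the paper describes.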

TSDnet: 적외선과 가시광선 이미지 융합을 위한 규모-3 밀도망 (TSDnet: Three-scale Dense Network for Infrared and Visible Image Fusion)

  • 장영매;이효종
    • 한국정보처리학회:학술대회논문집 / 2022 Autumn Conference / pp.656-658 / 2022
  • The purpose of infrared and visible image fusion is to integrate images of different modalities, with different details, into a single result image with rich information that is convenient for high-level computer vision tasks. Considering that many deep networks work at only a single scale, this paper proposes a novel image fusion method based on a three-scale dense network that preserves the content and key target features of the input images in the fused image. It comprises an encoder, a three-scale block, a fusion strategy, and a decoder, and can capture remarkably rich background detail and prominent target detail. The encoder extracts three-scale dense features from the source images for the initial image fusion. An l1-norm fusion strategy is then used to fuse the features of different scales. Finally, the fused image is reconstructed by the decoder network. Compared with existing methods, the proposed method achieves state-of-the-art fusion performance under subjective observation.
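The l1-norm fusion strategy mentioned in this abstract can be illustrated on plain feature arrays: each spatial position is weighted by the channel-wise l1 norm of its feature vector, so the locally more "active" source dominates. This is a generic sketch assuming `(C, H, W)` feature maps; TSDnet's encoder and decoder are not reproduced.

```python
import numpy as np

def l1_norm_fusion(feat_a, feat_b, eps=1e-8):
    """Fuse two feature maps of shape (C, H, W) using l1-norm activity maps
    as soft per-pixel weights."""
    act_a = np.sum(np.abs(feat_a), axis=0)          # (H, W) activity of source A
    act_b = np.sum(np.abs(feat_b), axis=0)          # (H, W) activity of source B
    w_a = act_a / (act_a + act_b + eps)             # soft weight for source A
    w_b = 1.0 - w_a
    return w_a[None] * feat_a + w_b[None] * feat_b  # (C, H, W) fused features
```

In a full pipeline this fusion would be applied independently at each of the three scales before decoding.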

GPS 기반 추적레이더 실시간 바이어스 추정 및 비동기 정보융합을 통한 발사체 추적 성능 개선 (Performance enhancement of launch vehicle tracking using GPS-based multiple radar bias estimation and sensor fusion)

  • 송하룡
    • 한국산업정보학회논문지 / Vol. 20, No. 6 / pp.47-56 / 2015
  • In a multi-sensor system, the sensor registration process that removes sensor biases is essential so that all sensors share a common coordinate frame. If sensor registration is not handled properly, large tracking errors or multiple ghost tracks on the same target arise and tracking fails. In launch-vehicle tracking in particular, every tracking instrument must go through proper sensor registration; applying a multi-sensor fusion algorithm afterwards improves tracking performance and provides accurate pointing inputs to the multiple tracking systems. This paper proposes a real-time bias estimation/removal algorithm and an asynchronous multi-sensor fusion scheme. The proposed bias estimation algorithm uses pseudo-bias measurements between GPS and the multiple radars, and tracking performance is improved by applying the asynchronous sensor fusion algorithm.
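The pseudo-bias-measurement idea — differencing each radar fix against the GPS reference to observe the bias directly — can be sketched as a one-dimensional recursive estimator. The constant-bias model, the single coordinate, and the noise values below are illustrative assumptions, not the paper's multi-radar formulation.

```python
import numpy as np

class RadarBiasEstimator:
    """Recursively estimates a constant measurement bias for one radar from
    pseudo-bias measurements (radar fix minus GPS reference position)."""
    def __init__(self, r=1.0):
        self.b = 0.0      # current bias estimate
        self.P = 1e3      # estimate variance (large: bias unknown at start)
        self.r = r        # pseudo-measurement noise variance

    def update(self, radar_pos, gps_pos):
        z = radar_pos - gps_pos         # pseudo-bias measurement
        K = self.P / (self.P + self.r)  # Kalman gain for a constant-bias model
        self.b += K * (z - self.b)
        self.P *= (1 - K)
        return self.b

    def correct(self, radar_pos):
        """De-biased radar measurement, ready for multi-sensor fusion."""
        return radar_pos - self.b
```

After the bias converges, each radar's corrected measurements share the GPS frame, which is the precondition for fusing them without ghost tracks.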

Phaffia rhodozyma의 원형질체 융합 (Protoplast Fusion of Phaffia rhodozyma)

  • 배석;김문휘;박종천;김재형;전순배
    • KSBB Journal / Vol. 5, No. 3 / pp.255-261 / 1990
  • We attempted to obtain a strain that produces astaxanthin in large quantities through cell fusion between complementary mutant strains derived from the astaxanthin-producing yeast Phaffia rhodozyma. The protoplast fusion frequency was 1.3×10⁻⁵ to 6.0×10⁻⁵, and nuclear fusion was confirmed by DNA content, nuclear staining, comparison of survival under UV irradiation, and segregation analysis. Among the fusants, F1 showed an approximately three-fold increase in astaxanthin production compared with the wild-type parent strains.


An Improved Remote Sensing Image Fusion Algorithm Based on IHS Transformation

  • Deng, Chao;Wang, Zhi-heng;Li, Xing-wang;Li, Hui-na;Cavalcante, Charles Casimiro
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 3 / pp.1633-1649 / 2017
  • In remote sensing image processing, the traditional fusion algorithm is based on the Intensity-Hue-Saturation (IHS) transformation. This method does not adequately take into account the texture and spectral information, spatial resolution, and statistical properties of the images, which leads to spectral distortion. Although traditional solutions combine manifold methods, the resulting fusion procedure is rather complicated and not suitable for practical operation. In this paper, an improved IHS-transformation fusion algorithm based on a local-variance weighting scheme is proposed for remote sensing images. First, the local variance of the SPOT image (from the French "Système Probatoire d'Observation de la Terre", an Earth observation system) is calculated using sliding windows of different sizes. The optimal window size is then selected, and the images are normalized with the optimal-window local variance. Second, a power exponent is chosen as the mapping function, and the local variance is used to obtain the weight of the I component and to match the SPOT images. The I' component is then obtained from the weight, the I component, and the matched SPOT images. Finally, the fused image is obtained by the inverse Intensity-Hue-Saturation transformation of the I', H, and S components. The proposed algorithm has been tested and compared with other image fusion methods well known in the literature. Simulation results indicate that it obtains a superior fused image according to quantitative fusion evaluation indices.
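The local-variance weighting described in this abstract can be sketched in simplified form: the intensity component of the multispectral image is blended with the panchromatic band, using a per-pixel weight derived from the pan image's normalised local variance through a power-exponent mapping. This sketch assumes a single fixed window, a hand-picked exponent `gamma`, and the common additive shortcut for the inverse IHS transform; the paper's optimal-window selection and SPOT matching steps are omitted.

```python
import numpy as np

def local_variance(img, win=3):
    """Local variance over a win x win sliding window (edges handled by padding)."""
    pad = win // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + win, j:j + win].var()
    return out

def ihs_variance_fusion(rgb, pan, win=3, gamma=0.5):
    """Simplified IHS-style fusion: blend the intensity component with the
    panchromatic band, weighted by the pan image's normalised local variance."""
    I = rgb.mean(axis=2)                       # intensity component
    v = local_variance(pan, win)
    w = (v / (v.max() + 1e-12)) ** gamma       # power-exponent weight mapping
    I_new = w * pan + (1.0 - w) * I            # weighted intensity I'
    return rgb + (I_new - I)[:, :, None]       # additive inverse-IHS shortcut
```

The additive shortcut works because replacing I by I' in IHS space and inverting amounts to adding the same intensity offset to every band.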

Infrared and visible image fusion based on Laplacian pyramid and generative adversarial network

  • Wang, Juan;Ke, Cong;Wu, Minghu;Liu, Min;Zeng, Chunyan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 5 / pp.1761-1777 / 2021
  • An image with infrared features and visible details is obtained by processing infrared and visible images. In this paper, a fusion method based on the Laplacian pyramid and a generative adversarial network, termed Laplacian-GAN, is proposed to obtain high-quality fused images. First, base and detail layers are obtained by decomposing the source images. Second, the Laplacian pyramid-based method is used to fuse the base layers so as to retain more of their information. Third, the detail part is fused by a generative adversarial network, which avoids manually designing complicated fusion rules. Finally, the fused base layer and fused detail layer are combined to reconstruct the fused image. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion performance in both visual quality and objective assessment. In terms of visual observation, the fused image obtained by the Laplacian-GAN algorithm is clearer in detail. At the same time, on the six metrics MI, AG, EI, MS_SSIM, Qabf, and SCD, the algorithm improves by 0.62%, 7.10%, 14.53%, 12.18%, 34.33%, and 12.23%, respectively, over the best of the other three algorithms.
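The Laplacian-pyramid part of the pipeline can be sketched as follows: decompose each image into band-pass detail levels plus a coarse base, fuse level by level, and reconstruct. This is a generic pyramid fusion using a box-blur stand-in for the Gaussian kernel, nearest-neighbour upsampling, max-absolute detail selection, and base averaging; the GAN detail branch of Laplacian-GAN is not reproduced.

```python
import numpy as np

def downsample(img):
    """2x decimation after a simple 3x3 box blur (stand-in for a Gaussian)."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    blur = sum(p[i:i + h, j:j + w] for i in (0, 1, 2) for j in (0, 1, 2)) / 9.0
    return blur[::2, ::2]

def upsample(img, shape):
    """Nearest-neighbour expansion back to `shape`."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        small = downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))   # band-pass residual
        cur = small
    pyr.append(cur)                                    # coarsest base level
    return pyr

def fuse_pyramids(a, b):
    """Max-absolute selection on detail levels, averaging on the base level."""
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(a[:-1], b[:-1])]
    fused.append(0.5 * (a[-1] + b[-1]))
    return fused

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = lap + upsample(cur, lap.shape)
    return cur
```

Because each detail level is stored as the exact residual of its own upsampled base, the decomposition inverts exactly, so any information loss comes only from the fusion rule, not the pyramid itself.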