• Title/Summary/Keyword: Multi-sensor images


A Feasibility Study for Mapping Using The KOMPSAT-2 Stereo Imagery (아리랑위성 2호 입체영상을 이용한 지도제작 가능성 연구)

  • Lee, Kwang-Jae;Kim, Youn-Soo;Seo, Hyun-Duck
    • Journal of the Korean Association of Geographic Information Studies / v.15 no.1 / pp.197-210 / 2012
  • The KOrea Multi-Purpose SATellite (KOMPSAT)-2 can provide cross-track stereo imagery from two different orbits for generating various kinds of spatial information. However, to fully realize the potential of KOMPSAT-2 stereo imagery for mapping, various tests are necessary. The purpose of this study is to evaluate the possibility of mapping using KOMPSAT-2 stereo imagery. For this, digital plotting was conducted based on the stereoscopic images, and a Digital Elevation Model (DEM) and an ortho-image were generated from the digital plotting results. The accuracy of the digital plotting, DEM, and ortho-image was evaluated by comparison with existing data. We found that the horizontal and vertical errors of the modeling results based on the Rational Polynomial Coefficients (RPC) were less than 1.5 meters compared with Global Positioning System (GPS) survey results. The maximum vertical difference between the plotted results in this study and the existing 1/5,000-scale digital map was more than 5 meters, depending on the topographic characteristics. Although there was some irregular parallax in the images, at least seventy percent of the layers required for a 1/5,000-scale digital map could be interpreted and plotted. The accuracy of the DEM generated from the digital plotting was also compared with an existing LiDAR DEM. We found that the ortho-images generated using the extracted DEM sufficiently satisfied the geometric accuracy requirements for a 1/5,000-scale ortho-image map.
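As a minimal, illustrative sketch of the kind of accuracy check described in this abstract (comparing RPC-based model coordinates against GPS check points), the Python snippet below computes per-point horizontal and vertical errors; the coordinate arrays and values are placeholders, not the study's data.

```python
import numpy as np

def planimetric_and_vertical_errors(modeled_xyz, gps_xyz):
    """Return per-point horizontal and vertical errors (metres)."""
    diff = np.asarray(modeled_xyz, dtype=float) - np.asarray(gps_xyz, dtype=float)
    horizontal = np.hypot(diff[:, 0], diff[:, 1])  # sqrt(dE^2 + dN^2)
    vertical = np.abs(diff[:, 2])
    return horizontal, vertical

# Made-up check points (metres, local map projection) purely for illustration.
modeled = np.array([[1000.4, 2000.2, 50.3], [1100.9, 2101.1, 48.7]])
gps     = np.array([[1000.0, 2000.0, 50.0], [1101.5, 2100.0, 49.5]])
h_err, v_err = planimetric_and_vertical_errors(modeled, gps)
print("max horizontal error:", h_err.max(), "max vertical error:", v_err.max())
```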

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.4 s.304 / pp.1-12 / 2005
  • Image registration is the process of establishing the spatial correspondence between images of the same scene acquired at different viewpoints, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by sensors of different modalities, here electro-optic (EO) and infrared (IR). Two approaches are commonly used for image registration: feature-based and intensity-based. In the former, the selection of accurate common features is crucial for high performance, but features in the EO image are often not the same as those in the IR image, so this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure owing to its accuracy and robustness, and NMI-based registration methods assume that the statistical correlation between the two images is global. Unfortunately, since EO and IR images often do not satisfy this assumption, the registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on an analysis of the statistical correlation between EO/IR images. In the first stage, for robust registration, we propose two preprocessing schemes: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated with the corresponding regions in the other image, and ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered using an NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
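The similarity measure named in this abstract, normalized mutual information, can be sketched in a few lines of Python from a joint intensity histogram. This is a generic NMI computation, not the authors' ESCR/ESCF pipeline, and the bin count is an assumed parameter.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint intensity histogram."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()          # joint probability estimate
    px = pxy.sum(axis=1)                   # marginal of image A
    py = pxy.sum(axis=0)                   # marginal of image B

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

# Registration then amounts to searching for the transform (e.g. a shift) that
# maximises NMI between the EO image and the transformed IR image, e.g.:
# best_shift = max(candidate_shifts,
#                  key=lambda s: normalized_mutual_information(eo, shift_ir(ir, s)))
```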

Multiple SL-AVS(Small size & Low power Around View System) Synchronization Maintenance Method (다중 SL-AVS 동기화 유지기법)

  • Park, Hyun-Moon;Park, Soo-Huyn;Seo, Hae-Moon;Park, Woo-Chool
    • Journal of the Korea Society for Simulation / v.18 no.3 / pp.73-82 / 2009
  • Owing to its many advantages, including low price, low power consumption, and small size, the CMOS camera has been used in many applications, including mobile phones, the automotive industry, medical sensing, robotic control, and security research. In particular, 360-degree omni-directional cameras built from multiple cameras have shown software issues such as interface and communication management, delays, and complicated image display control, as well as hardware issues such as power management and miniaturization. Traditional CMOS camera systems consist of an embedded system with a high-performance MCU that enables each camera to send and receive images, in a multi-layer structure resembling an individual control system per camera. We propose SL-AVS (Small Size/Low power Around-View System), which can control the cameras while collecting image data using a high-speed synchronization technique on top of a single-layer, low-performance MCU. It is an initial model of an omni-directional camera that composes a 360-degree view from several CMOS cameras, each with a 110-degree field of view. We connected a single MCU to four low-power CMOS cameras, implemented synchronization, control, and transmit/receive functions for each camera, and compared the result with the traditional system. The synchronization of the respective cameras was controlled and recorded by handling each interrupt through the MCU. We improved the efficiency of data transmission by minimizing re-synchronization among the target, the CMOS cameras, and the MCU. Furthermore, depending on the user's choice, individual images or groups of images divided into four domains are provided to the target. Finally, we analyzed and compared the performance of the developed camera system, including synchronization, data transfer time, and image data loss.
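A rough, hypothetical sketch of the synchronization-maintenance idea (release an around-view frame set only when all four cameras report the same frame counter, otherwise drop stale frames and wait for the next round). This is not the authors' MCU firmware; all names and the bookkeeping scheme are illustrative assumptions.

```python
class FrameSynchronizer:
    """Keep four camera streams in step by tagging each frame with a counter."""

    def __init__(self, num_cameras=4):
        self.latest = {}              # camera id -> (frame_counter, frame_data)
        self.num_cameras = num_cameras

    def on_frame(self, cam_id, counter, frame):
        """Called once per camera interrupt; returns a composite set when in sync."""
        self.latest[cam_id] = (counter, frame)
        if len(self.latest) < self.num_cameras:
            return None                               # not all cameras reported yet
        counters = {c for c, _ in self.latest.values()}
        if len(counters) == 1:                        # all counters match: in sync
            return {cam: f for cam, (_, f) in self.latest.items()}
        # Counters diverged: keep only the newest frames and re-synchronize.
        newest = max(counters)
        self.latest = {cam: (c, f) for cam, (c, f) in self.latest.items() if c == newest}
        return None
```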

Orthophoto and DEM Generation Using Low Specification UAV Images from Different Altitudes (고도가 다른 저사양 UAV 영상을 이용한 정사영상 및 DEM 제작)

  • Lee, Ki Rim;Lee, Won Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.5 / pp.535-544 / 2016
  • Although existing methods for orthophoto production using expensive aircraft are effective over large areas, they have drawbacks when rapid updates are required as geographic features change. However, UAV (Unmanned Aerial Vehicle) technology has advanced rapidly, and by carrying sensors such as GPS and IMU, UAVs are considered a possible substitute for expensive traditional aerial photogrammetry. Orthophoto production using UAVs has the advantage that spatial information for small areas can be updated quickly. In existing research, however, images taken at the same altitude are used for orthophoto generation, which leads to data redundancy and difficulties in updating. In this study, we targeted a small sloped area and, using a low-end UAV, generated an orthophoto and a DEM (Digital Elevation Model) from images taken at different altitudes. The RMSE at the check points was σh = 0.023 m in the horizontal plane and σv = 0.049 m in the vertical plane. The maximum values and mean RMSE comply with the working rule agreement for aerial photogrammetry of the National Geographic Information Institute (NGII) for 1/500-scale digital maps. This paper shows that a high-accuracy orthophoto can be generated from images taken at different altitudes, reducing data redundancy and enabling spatial information to be provided quickly.
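A small sketch of the check-point accuracy metric quoted above (horizontal RMSE σh and vertical RMSE σv); the residual values are invented for illustration and are not the paper's measurements.

```python
import numpy as np

def rmse(errors):
    errors = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(errors ** 2)))

# Residuals at check points (metres); illustrative values only.
check_dx = [0.02, -0.03, 0.01, 0.02]
check_dy = [-0.01, 0.02, 0.03, -0.02]
check_dz = [0.05, -0.04, 0.06, -0.03]

sigma_h = rmse(np.hypot(check_dx, check_dy))   # horizontal RMSE
sigma_v = rmse(check_dz)                        # vertical RMSE
print(f"sigma_h = {sigma_h:.3f} m, sigma_v = {sigma_v:.3f} m")
```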

An Approach for Localization Around Indoor Corridors Based on Visual Attention Model (시각주의 모델을 적용한 실내 복도에서의 위치인식 기법)

  • Yoon, Kook-Yeol;Choi, Sun-Wook;Lee, Chong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.93-101 / 2011
  • For a mobile robot, recognizing its current location is essential for autonomous navigation. In particular, loop-closure detection, in which the robot recognizes a location it has visited before, is a key problem in localization. A considerable amount of research has been conducted on appearance-based loop-closure detection and localization, because vision sensors have an advantage in terms of cost and allow various approaches to the problem. In scenes that consist of repeated structures, as in corridors, perceptual aliasing, in which two different locations are recognized as the same place, occurs frequently. In this paper, we propose an improved method to recognize locations in scenes with similar structures. We extract salient regions from images using a visual attention model and calculate weights using distinctive features in the salient regions. This makes it possible to emphasize unique features in the scene and so distinguish similar-looking locations. In corridor recognition experiments, the proposed method showed improved recognition performance: 78.2% accuracy for single-floor corridor recognition and 71.5% for multi-floor corridor recognition.
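One hedged way to read the weighting idea in this abstract is a saliency-weighted similarity between the feature sets of two images. The sketch below assumes that descriptors and saliency-derived weights have already been extracted; it is not the authors' exact formulation.

```python
import numpy as np

def weighted_place_similarity(desc_query, desc_db, weights_query):
    """
    desc_query:     (N, D) descriptors from the query image's salient regions
    desc_db:        (M, D) descriptors from a database image
    weights_query:  (N,) saliency-derived weights emphasising distinctive features
    Returns a single score: the weighted mean of best-match cosine similarities.
    """
    q = desc_query / np.linalg.norm(desc_query, axis=1, keepdims=True)
    d = desc_db / np.linalg.norm(desc_db, axis=1, keepdims=True)
    sim = q @ d.T                       # (N, M) pairwise cosine similarities
    best = sim.max(axis=1)              # best database match per query feature
    return float(np.sum(weights_query * best) / np.sum(weights_query))
```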

Visible Image Enhancement Method Considering Thermal Information from Infrared Image (원적외선 영상의 열 정보를 고려한 가시광 영상 개선 방법)

  • Kim, Seonkeol;Kang, Hang-Bong
    • Journal of Broadcast Engineering / v.18 no.4 / pp.550-558 / 2013
  • Infrared and visible images convey different information because of the different wavelengths of light they capture: the infrared image carries thermal information, while the visible image carries texture information. Desirable results can be obtained by fusing the two. To enhance a visible image, we extract a weight map from it using saturation and brightness. The weight map is then adjusted using the thermal information in the infrared image. Finally, an enhanced image results from combining the infrared and visible images. Our experimental results show that the proposed algorithm works well for enhancing smoke regions in the original image.
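A hypothetical sketch of the fusion scheme outlined above: build a weight map from saturation and brightness, modulate it with the thermal image, and blend. The specific weighting formula is an assumption for illustration, not the paper's method.

```python
import numpy as np
import cv2  # used only for the BGR-to-HSV conversion

def enhance_visible_with_thermal(visible_bgr, infrared_gray):
    """Blend IR into the visible image where the visible image is uninformative
    and the scene is thermally active. Illustrative weighting only."""
    hsv = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    saturation, value = hsv[..., 1], hsv[..., 2]
    ir = infrared_gray.astype(np.float32) / 255.0

    # Low saturation and low brightness -> little visible information there.
    weight = 1.0 - np.clip(0.5 * saturation + 0.5 * value, 0.0, 1.0)
    weight *= ir                                   # emphasise thermally active regions

    vis = visible_bgr.astype(np.float32) / 255.0
    fused = (1.0 - weight[..., None]) * vis + weight[..., None] * ir[..., None]
    return (np.clip(fused, 0.0, 1.0) * 255).astype(np.uint8)
```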

Human Touching Behavior Recognition based on Neural Network in the Touch Detector using Force Sensors (힘 센서를 이용한 접촉감지부에서 신경망기반 인간의 접촉행동 인식)

  • Ryu, Joung-Woo;Park, Cheon-Shu;Sohn, Joo-Chan
    • Journal of KIISE:Software and Applications / v.34 no.10 / pp.910-917 / 2007
  • Of the possible interactions between humans and robots, touch is an important means of providing emotional relief to humans. However, most previous studies have focused on interactions based on voice and images. In this paper, a method of recognizing human touching behaviors is proposed for developing a robot that can naturally interact with humans through touch. In this method, the recognition process is divided into a pre-processing phase and a recognition phase. In the pre-processing phase, recognizable features are calculated from the data generated by the touch detector, which was fabricated using force sensors; the force sensor used was an FSR (force sensing resistor). The recognition phase classifies human touching behaviors using a multi-layer perceptron, a neural network model. Experimental data were generated by six men performing three types of touching behavior: 'hitting', 'stroking', and 'tickling'. When a recognizer was generated for each user and evaluated with cross-validation, the average recognition rate was 82.9%, while a single recognizer for all users showed an average recognition rate of 74.5%.
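The classifier described here, a multi-layer perceptron over features computed from FSR readings and evaluated with cross-validation, can be sketched with scikit-learn as below; the feature set, data, and network size are placeholders, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# X: feature vectors from the pre-processing phase (e.g. peak force, contact
# duration, number of contacts); y: behaviour labels. Random placeholder data.
rng = np.random.default_rng(0)
X = rng.random((120, 6))
y = rng.integers(0, 3, size=120)           # 0: hitting, 1: stroking, 2: tickling

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validation, as in the paper
print("mean recognition rate:", scores.mean())
```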

Recognition of Tactile Image Dependent on Imposed Force Using Fuzzy Fusion Algorithm (접촉력에 따라 변하는 Tactile 영상의 퍼지 융합을 통한 인식기법)

  • 고동환;한헌수
    • Journal of the Korean Institute of Intelligent Systems / v.8 no.3 / pp.95-103 / 1998
  • This paper deals with a problem occurring in the recognition of tactile images due to the effect of the force imposed at the moment of measurement. The tactile image of a contact surface, used to recognize the surface type, varies depending on the imposed force, so that a false recognition may result. This paper fuzzifies two parameters of the contour of a tactile image with membership functions formed by considering the imposed force. The two fuzzified parameters are fused using the average Minkowski distance. The proposed algorithm was implemented on a multi-sensor system composed of an optical tactile sensor and a 6-axis force/torque sensor. In experiments, the proposed algorithm showed an average recognition rate greater than 86.9% over all imposed-force ranges and object models, which is about a 14% improvement over the case where only the contour information is used. The proposed algorithm can be used for end-effectors manipulating deformable or fragile objects, or for recognizing 3D objects when implemented on a multi-fingered robot hand.
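A loose sketch of the fuzzy-fusion idea: fuzzify contour parameters with force-dependent membership functions and fuse them with an average Minkowski distance. The triangular membership shape and the force dependence used here are assumptions, not the authors' design.

```python
import numpy as np

def triangular_membership(x, centre, width):
    """Simple triangular membership; centre/width may depend on the imposed force."""
    return np.clip(1.0 - np.abs(x - centre) / width, 0.0, 1.0)

def fused_distance(params, model_params, force, p=2):
    """
    Fuzzify two contour parameters and fuse them with an average Minkowski
    distance of order p against the ideal (model) memberships. Illustrative only.
    """
    widths = 1.0 + 0.1 * force            # wider (more tolerant) at higher force
    mu_obs = triangular_membership(np.asarray(params, dtype=float),
                                   np.asarray(model_params, dtype=float), widths)
    mu_ref = np.ones_like(mu_obs)         # ideal membership for a perfect match
    return float(np.mean(np.abs(mu_obs - mu_ref) ** p) ** (1.0 / p))

# Classify by picking the object model with the smallest fused distance.
```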


Cone-beam computed tomography versus digital periapical radiography in the detection of artificially created periapical lesions: A pilot study of the diagnostic accuracy of endodontists using both techniques

  • Campello, Andrea Fagundes;Goncalves, Lucio Souza;Guedes, Fabio Ribeiro;Marques, Fabio Vidal
    • Imaging Science in Dentistry / v.47 no.1 / pp.25-31 / 2017
  • Purpose: The aim of this study was to compare the diagnostic accuracy of previously trained endodontists in the detection of artificially created periapical lesions using cone-beam computed tomography (CBCT) and digital periapical radiography (DPR). Materials and Methods: An ex vivo model using dry skulls was used, in which simulated apical lesions were created and then progressively enlarged using #1/2, #2, #4, and #6 round burs. A total of 11 teeth were included in the study, and 110 images were obtained with CBCT and with an intraoral digital periapical radiographic sensor (Instrumentarium dental, Tuusula, Finland) initially and after each bur was used. Specificity and sensitivity were calculated. All images were evaluated by 10 previously trained, certified endodontists. Agreement was calculated using the kappa coefficient. The accuracy of each method in detecting apical lesions was calculated using the chi-square test. Results: The kappa coefficient between examiners showed low agreement (range, 0.17-0.64). No statistical difference was found between CBCT and DPR in teeth without apical lesions (P=.15). The accuracy of CBCT was significantly higher than that of DPR for all corresponding simulated lesions (P<.001). The correct diagnostic rate for CBCT ranged between 56.9% and 73.6%. The greatest difference between CBCT and DPR was seen in the maxillary teeth (CBCT, 71.4%; DPR, 28.6%; P<.01) and multi-rooted teeth (CBCT, 83.3%; DPR, 33.3%; P<.01). Conclusion: CBCT allowed higher accuracy than DPR for all of the simulated lesions tested. Endodontists need to be properly trained in interpreting CBCT scans to achieve higher diagnostic accuracy.
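The two statistics used in this study, the kappa coefficient for inter-examiner agreement and the chi-square test for comparing modalities, can be reproduced in outline with scikit-learn and SciPy; the ratings and counts below are illustrative, not the study's data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import chi2_contingency

# Ratings of the same images by two examiners (1 = lesion detected, 0 = not);
# values are made up for illustration.
examiner_a = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
examiner_b = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 0])
print("kappa:", cohen_kappa_score(examiner_a, examiner_b))

# 2x2 table of correct / incorrect detections per modality (illustrative counts).
table = np.array([[81, 29],    # CBCT: correct, incorrect
                  [45, 65]])   # DPR:  correct, incorrect
chi2, p, dof, expected = chi2_contingency(table)
print("chi-square p-value:", p)
```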

Development of Mobile Volume Visualization System (모바일 볼륨 가시화 시스템 개발)

  • Park, Sang-Hun;Kim, Won-Tae;Ihm, In-Sung
    • Journal of KIISE:Computing Practices and Letters / v.12 no.5 / pp.286-299 / 2006
  • Owing to continuing technical progress in modeling, simulation, and sensor devices, huge volume data sets with very high resolution have become common. In scientific visualization, various interactive real-time techniques on high-performance parallel computers have been proposed to render such large-scale volume data sets effectively. In this paper, we present a mobile volume visualization system that consists of mobile clients, gateways, and parallel rendering servers. The mobile clients allow users to explore regions of interest adaptively at higher resolution levels and to specify rendering/viewing parameters interactively, which are sent to the parallel rendering servers. The gateways manage requests and responses between the mobile clients and the parallel rendering servers to provide stable service. The parallel rendering servers visualize the specified sub-volume with the rendering contexts received from the clients and then transfer the high-quality final images back. The proposed system lets multiple PDA users simultaneously share commonly interesting parts of a huge volume, rendering contexts, and final images through a CSCW (Computer Supported Cooperative Work) mode.
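A minimal, assumed sketch of the client-gateway-server flow described above: a gateway receives rendering contexts from mobile clients and dispatches them round-robin to rendering servers, returning the rendered image. Class and method names are illustrative, not the authors' API.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class RenderRequest:
    client_id: str
    roi: tuple          # (x0, y0, z0, x1, y1, z1) sub-volume of interest
    view_params: dict   # camera position, transfer function, image size, ...

class Gateway:
    """Relays rendering requests from mobile clients to rendering servers."""

    def __init__(self, render_servers):
        self._servers = cycle(render_servers)     # simple round-robin dispatch

    def handle(self, request: RenderRequest):
        server = next(self._servers)
        image = server.render(request.roi, request.view_params)  # assumed interface
        return image                               # sent back to the mobile client
```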