• Title/Summary/Keyword: Processing Map

INVESTIGATION OF BAIKDU-SAN VOLCANO WITH SPACE-BORNE SAR SYSTEM

  • Kim, Duk-Jin;Feng, Lanying;Moon, Wooil-M.
    • Proceedings of the KSRS Conference
    • /
    • 1999.11a
    • /
    • pp.148-153
    • /
    • 1999
  • Baikdu-san was a very active volcano during the Cenozoic era and is believed to have formed in the late Cenozoic. It was recently reported that a major eruption occurred in or around 1002 A.D., and there is evidence that it is still an active volcano and a potential volcanic hazard. Remote sensing techniques have been widely used to monitor various natural hazards, including volcanic hazards. During an active eruption, however, volcanic ash can cover the sky and block solar radiation, preventing any use of optical sensors. Synthetic aperture radar (SAR) is an ideal tool for monitoring volcanic activity and lava flows, because the wavelength of the microwave signal is considerably longer than the average volcanic ash particle size. In this study we utilized several sets of SAR data to evaluate the utility of space-borne SAR systems. The data sets include JERS-1 (L-band) SAR and RADARSAT (C-band) data, the latter comprising both standard-mode and ScanSAR-mode data sets. We also utilized several sets of auxiliary data, such as local geological maps and JERS-1 OPS data. Routine preprocessing and image processing steps were applied to these data sets before any attempt to classify and map surface geological features. Although we computed sigma nought ($\sigma^0$) values for the standard-mode RADARSAT data, the utility of the sigma nought image was minimal in this study. Application of various classification algorithms to identify and map several stages of volcanic flows was not very successful.
Although this research is still in progress, the following preliminary conclusions can be drawn: (1) sigma nought (RADARSAT standard-mode data) and DN (JERS-1 SAR and RADARSAT ScanSAR data) values have limited usefulness for distinguishing early basalt lava flows from late trachyte flows, or later trachyte flows from the old basement granitic rocks around Baikdu-san volcano; (2) surface geological structures, such as several faults and volcanic lava flow channels, can easily be identified and mapped; and (3) routine application of unsupervised classification methods cannot map any type of surface lava flow pattern.
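The sigma nought conversion mentioned above can be sketched as follows. This is a minimal illustration of the standard CEOS-style radiometric calibration for RADARSAT DN values; the `gain` and `offset` parameters are hypothetical placeholders standing in for the product's calibration lookup table, not values from the paper.

```python
import numpy as np

def sigma_nought_db(dn, gain, offset, incidence_deg):
    """Convert SAR digital numbers (DN) to sigma nought in dB.

    Sketch of the usual calibration chain: beta0 = (DN^2 + offset) / gain,
    then sigma0 = beta0 * sin(incidence angle). The gain/offset here are
    placeholders for the per-range calibration table of a real product.
    """
    dn = np.asarray(dn, dtype=float)
    beta0 = (dn ** 2 + offset) / gain          # radar brightness
    sigma0 = beta0 * np.sin(np.deg2rad(incidence_deg))
    return 10.0 * np.log10(sigma0)             # express in decibels
```

With a flat gain of 1e4, a DN of 100 at 90 degrees incidence maps to 0 dB, which is a convenient sanity check for the conversion.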

Realtime Attention System of Autonomous Virtual Character using Image Feature Map (시각적 특징 맵을 이용한 자율 가상 캐릭터의 실시간 주목 시스템)

  • Cha, Myaung-Hee;Kim, Ky-Hyub;Cho, Kyung-Eun;Um, Ky-Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.5
    • /
    • pp.745-756
    • /
    • 2009
  • An autonomous virtual character can behave like a human after recognizing and interpreting its virtual environment, and artificial vision is the main means by which such a character recognizes that environment. Existing artificial vision systems take in all information from everything in view at once. Saving this much information at once reduces the efficiency and realism of the system, and slows it down in the dynamic environment of a game. To construct a vision system similar to that of humans, a visual attention system that saves only the required information is therefore needed. This research focuses on an artificial intelligence engine that detects the most important information visually recognized by the character in the virtual world and saves it to memory in stages. In addition, the visual system is constructed in accordance with a theory of visual processing, so that it senses and recognizes its surroundings in a human-like way. In experiments in a dynamic three-dimensional virtual environment, the system found the attention areas of moving objects quickly and effectively, and processing speed improved by more than a factor of 1.6.
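The core of a feature-map attention system is picking the most salient region of the current view rather than storing the whole frame. The sketch below is a deliberately simple stand-in for that idea, not the authors' implementation: it scores each pixel of a grayscale frame by its contrast against the global mean and returns the location of the maximum.

```python
import numpy as np

def attention_point(frame):
    """Return the (row, col) of the most salient pixel of a grayscale
    frame, using a toy contrast-against-the-mean saliency map. A real
    feature-map system would combine several feature channels (motion,
    color, orientation); this sketch keeps only intensity contrast."""
    f = np.asarray(frame, dtype=float)
    saliency = np.abs(f - f.mean())            # contrast vs. global mean
    idx = np.unravel_index(np.argmax(saliency), f.shape)
    return tuple(int(v) for v in idx)
```

Feeding a frame with a single bright pixel returns that pixel's coordinates, which is the behavior an attention system needs: only the attended location is passed on to memory.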

Query Expansion Based on Word Graphs Using Pseudo Non-Relevant Documents and Term Proximity (잠정적 부적합 문서와 어휘 근접도를 반영한 어휘 그래프 기반 질의 확장)

  • Jo, Seung-Hyeon;Lee, Kyung-Soon
    • The KIPS Transactions:PartB
    • /
    • v.19B no.3
    • /
    • pp.189-194
    • /
    • 2012
  • In this paper, we propose a query expansion method based on word graphs, using pseudo-relevant and pseudo non-relevant documents, to improve information retrieval performance. An initially retrieved document is assigned to a core cluster when it includes core query terms, extracted from query term combinations and the degree of query term proximity; otherwise it is assigned to a non-core cluster. Documents in the core cluster can be regarded as pseudo-relevant documents, and documents in the non-core cluster as pseudo non-relevant documents. Each cluster is represented as a graph of nodes and edges, where each node represents a term and each edge represents the proximity between that term and a query term. A term's weight is calculated by subtracting its weight in the non-core cluster graph from its weight in the core cluster graph, so that a term with a high weight in the non-core cluster graph is not selected as an expansion term. Expansion terms are then selected according to these weights. Experimental results on the TREC WT10g test collection show that the proposed method achieves a 9.4% improvement in mean average precision over the language model.
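The term-weighting step described above (core-cluster weight minus non-core-cluster weight) can be sketched directly. For simplicity the two graphs are reduced here to `{term: weight}` dictionaries, an assumption that flattens the paper's node/edge proximity graphs into precomputed per-term scores.

```python
def expansion_terms(core_graph, noncore_graph, k=5):
    """Rank candidate expansion terms by (weight in the core-cluster
    graph) - (weight in the non-core-cluster graph) and return the
    top-k terms with a positive score. A term prominent in the
    pseudo non-relevant cluster is thereby suppressed."""
    scores = {}
    for term, w in core_graph.items():
        scores[term] = w - noncore_graph.get(term, 0.0)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [t for t, s in ranked[:k] if s > 0]
```

For example, a term weighted 2 in the core graph but 5 in the non-core graph scores -3 and is dropped, exactly the "high weight in a non-core cluster graph should not be expanded" rule.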

Immersive Visualization of Casting Solidification by Mapping Geometric Model to Reconstructed Model of Numerical Simulation Result (주물 응고 수치해석 복원모델의 설계모델 매핑을 통한 몰입형 가시화)

  • Park, Ji-Young;Suh, Ji-Hyun;Kim, Sung-Hee;Rhee, Seon-Min;Kim, Myoung-Hee
    • The KIPS Transactions:PartA
    • /
    • v.15A no.3
    • /
    • pp.141-149
    • /
    • 2008
  • In this research we present a novel method that combines and visualizes a design model together with an FDM-based solidification simulation result, employing VR displays with stereoscopic rendering to provide an effective analysis environment. First, we reconstruct the solidification simulation result into a rectangular mesh model using conventional simulation software; each point color of the reconstructed model represents the temperature at its position. Next, we map the two models by finding, for each point of the design model, the nearest point of the reconstructed model, and assigning that point's color to the design model point. Before this mapping we apply mesh subdivision, because the design model is composed of a minimal number of points, which makes its point distribution non-uniform compared with the reconstructed model. In this process the original shape is preserved by adding points only to mesh edges whose length exceeds a predefined threshold. The implemented system visualizes the solidification simulation data on the design model, allowing the user to understand the object geometry precisely. The immersive and realistic working environment provided by the VR display helps the user discover defect occurrences faster and more effectively.
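The nearest-point color mapping step above can be sketched as a brute-force nearest-neighbor assignment. This is a minimal illustration under the assumption that both models are given as point arrays; a real implementation over dense meshes would use a spatial index such as a k-d tree instead of the O(N·M) distance matrix.

```python
import numpy as np

def map_colors(design_pts, recon_pts, recon_colors):
    """Assign each design-model point the color (temperature value) of
    its nearest reconstructed-model point. Brute-force nearest neighbor
    via a broadcasted squared-distance matrix."""
    d = np.asarray(design_pts, float)[:, None, :]    # shape (N, 1, 3)
    r = np.asarray(recon_pts, float)[None, :, :]     # shape (1, M, 3)
    nearest = np.argmin(((d - r) ** 2).sum(-1), axis=1)
    return np.asarray(recon_colors)[nearest]
```

The subdivision step described in the abstract matters precisely because this mapping is per-point: without added points on long edges, large design-model triangles would receive only a few temperature samples.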

Analysis on Topographic Normalization Methods for 2019 Gangneung-East Sea Wildfire Area Using PlanetScope Imagery (2019 강릉-동해 산불 피해 지역에 대한 PlanetScope 영상을 이용한 지형 정규화 기법 분석)

  • Chung, Minkyung;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.2_1
    • /
    • pp.179-197
    • /
    • 2020
  • Topographic normalization reduces terrain effects on reflectance by adjusting the brightness values of image pixels so that pixels covering the same land cover are equal. Topographic effects are induced by the imaging conditions and tend to be large in high mountainous regions. Image analysis over mountainous terrain, such as wildfire damage assessment, therefore requires appropriate topographic normalization to yield accurate results. However, most previous studies evaluated topographic normalization on satellite images of moderate-to-low spatial resolution, so the alleviation of topographic effects in multi-temporal high-resolution images has not been addressed sufficiently. In this study, topographic normalization was evaluated band by band to select the optimal combination of techniques for rapid and accurate wildfire damage assessment using PlanetScope imagery. PlanetScope has considerable potential in the disaster management field, as it provides daily 3 m resolution imagery with global coverage, satisfying the need for rapid image acquisition. For comparison, seven widely used topographic normalization methods were applied to both pre-fire and post-fire images. The analysis of the bi-temporal images suggests an optimal combination of techniques that can be applied to images with different land-cover compositions. A vegetation index was then calculated from the images normalized with the proposed method. Wildfire damage detection results, obtained by thresholding the index, showed improved detection accuracy for both object-based and pixel-based image analysis. In addition, a burn severity map was constructed to verify the effect of topographic correction on a continuous distribution of brightness values.
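One of the topographic normalization methods commonly included in comparisons like the one above is the C-correction, sketched below. The abstract does not list the seven methods it evaluated, so this is an illustrative example of the technique class, not a reproduction of the study: reflectance is regressed on the cosine of the local illumination angle, and the regression intercept-to-slope ratio `c` moderates the cosine correction.

```python
import numpy as np

def c_correction(reflectance, cos_i, sun_zenith_deg):
    """C-correction topographic normalization:
        rho_corr = rho * (cos(theta_s) + c) / (cos_i + c),
    where c = b / a from the per-band regression rho = a * cos_i + b,
    cos_i is the cosine of the local solar incidence angle, and
    theta_s is the solar zenith angle."""
    rho = np.asarray(reflectance, float)
    ci = np.asarray(cos_i, float)
    a, b = np.polyfit(ci, rho, 1)              # fit rho ~ a*cos_i + b
    c = b / a
    cos_sz = np.cos(np.deg2rad(sun_zenith_deg))
    return rho * (cos_sz + c) / (ci + c)
```

When the input reflectance depends only on illumination geometry, the corrected values collapse to a single brightness, which is the "equal brightness for the same land cover" goal stated above.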

Awareness of Integrated School Education and Education for Sustainable Development of Science Teachers in Secondary Schools with or without Common Science Teacher Qualification (중등학교 과학교사들의 공통과학 교사자격증 유무에 따른 통합과학교육과 지속가능발전교육에 대한 인식)

  • JI, Dukyoung
    • Journal of the Korean Society of Earth Science Education
    • /
    • v.12 no.3
    • /
    • pp.224-238
    • /
    • 2019
  • The purpose of this study is to analyze secondary school science teachers' perceptions of integrated science education and education for sustainable development (ESD), on the premise that holding a common science teacher qualification certificate is related to expertise in teaching common science. A survey of secondary school science teachers was conducted over three months, from June to August 2018; multiple-choice questions were analyzed statistically and open-ended questions by topic modeling. According to the analysis, teachers with a common science teacher qualification certificate showed high awareness of integrated science education, with high average responses in all areas: the value, direction, and conditions for success of integrated science education. The average response of teachers with a common science teacher qualification certificate was also higher than that of teachers without one. There were no major differences between the two groups in their perceptions of integrating science education and ESD, but teachers with a common science teaching certificate focused on science across all topics and recognized science as a medium for each topic, compared with teachers who did not hold the certificate.

Digital Camera Identification Based on Interpolation Pattern Used Lens Distortion Correction (디지털 카메라의 렌즈 왜곡 보정에 사용된 보간 패턴 추출을 통한 카메라 식별 방법)

  • Hwang, Min-Gu;Kim, Dong-Min;Har, Dong-Hwan
    • Journal of Internet Computing and Services
    • /
    • v.13 no.3
    • /
    • pp.49-59
    • /
    • 2012
  • As digital technology develops, image reproduction improves day by day, and diverse image editing software has been developed to manage images easily. In the course of editing, these programs can delete or modify the EXIF data that carries the original image information; as a result, edited images without source information are widely spread on the web. This undermines image analysis, because the originality of the image is lost. In a court of law in particular, the source of evidence must be established clearly, so a digital image whose EXIF data has been deleted or altered cannot serve as objective evidence. In this research, we attempt to identify the source digital camera in order to establish an image's originality, focusing on the lens distortion correction algorithm used in digital image processing. Lens distortion correction uses a mapping algorithm together with an interpolation algorithm that prevents aliasing and reconstruction artifacts. The interpolation leaves a characteristic mapping pattern, and our aim is to find this interpolation evidence. We propose a minimum filter algorithm to detect the interpolation pattern: the same minimum filter coefficients are applied to two areas, one with interpolation and one without. Using the DFT, we compare the frequency characteristics of the two areas and, based on the differences between them, construct the final detection map. In other words, the area with interpolation caused by the mapping responds to the minimum filter detection algorithm, while the area without interpolation shows different frequency characteristics.
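The minimum-filter step above can be sketched as follows. This is a pure-NumPy illustration of the generic operation (in practice `scipy.ndimage.minimum_filter` would be used); the detection cue described in the abstract comes from taking the DFT of the residual `|img - min_filter(img)|` and comparing its frequency signature between interpolated and non-interpolated regions.

```python
import numpy as np

def minimum_filter_residual(img, size=3):
    """Return |img - minimum_filter(img)| for a 2-D grayscale image.
    In a resampled (interpolated) region this residual carries a
    periodic pattern that stands out in the DFT magnitude; in an
    untouched region it does not. Edge-padded, brute-force sliding
    window for clarity, not speed."""
    f = np.asarray(img, float)
    pad = size // 2
    p = np.pad(f, pad, mode='edge')
    out = np.empty_like(f)
    h, w = f.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + size, j:j + size].min()
    return np.abs(f - out)
```

A full detector would then compute `np.fft.fft2` of this residual over the two candidate areas and threshold the difference of their spectra to form the detection map.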

An Effective Extraction Algorithm of Pulmonary Regions Using Intensity-level Maps in Chest X-ray Images (흉부 X-ray 영상에서의 명암 레벨지도를 이용한 효과적인 폐 영역 추출 알고리즘)

  • Jang, Geun-Ho;Park, Ho-Hyun;Lee, Seok-Lyong;Kim, Deok-Hwan;Lim, Myung-Kwan
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.7
    • /
    • pp.1062-1075
    • /
    • 2010
  • In medical image applications, intensity differences are widely used for image segmentation and feature extraction. A well-known method is the threshold technique, which determines a threshold value and generates a binary image based on it. A frequently used threshold technique is the Otsu algorithm, which is computationally efficient and provides an effective selection criterion for choosing the threshold value. However, applying the Otsu algorithm directly to chest X-ray images does not yield good segmentation results, because the various anatomical structures around the lung regions, such as ribs and blood vessels, blur the distribution of intensity levels. To overcome this ambiguity, we propose an effective algorithm for extracting pulmonary regions that applies the Otsu algorithm after removing the background of the X-ray image, constructs intensity-level maps, and uses them to segment the image. To verify the effectiveness of our method, we compared it with the existing 1-dimensional and 2-dimensional Otsu algorithms, as well as with results obtained by an expert's visual inspection. The experimental results showed that our method extracted pulmonary regions more accurately than the Otsu methods and produced results similar to those of visual inspection.
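The 1-D Otsu criterion referenced above selects the threshold that maximizes the between-class variance of the gray-level histogram. A minimal sketch of that selection (the baseline the paper improves upon, not the paper's intensity-level-map algorithm itself):

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """1-D Otsu: pick the threshold that maximizes the between-class
    variance w0 * w1 * (mu0 - mu1)^2 of the gray-level histogram."""
    g = np.asarray(gray, float).ravel()
    hist, edges = np.histogram(g, bins=bins)
    p = hist / hist.sum()                      # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:t] * centers[:t]).sum() / w0  # class means
        mu1 = (p[t:] * centers[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[t - 1]
    return best_t
```

On a clean bimodal input this lands between the two modes; the abstract's point is that chest X-rays are not cleanly bimodal, which is why background removal and intensity-level maps are added before thresholding.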

An Algorithm for Filtering False Minutiae in Fingerprint Recognition and its Performance Evaluation (지문의 의사 특징점 제거 알고리즘 및 성능 분석)

  • Yang, Ji-Seong;An, Do-Seong;Kim, Hak-Il
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.37 no.3
    • /
    • pp.12-26
    • /
    • 2000
  • In this paper, we propose a post-processing algorithm to remove false minutiae, which degrade the overall performance of an automatic fingerprint identification system by increasing computational complexity, FAR (False Acceptance Rate), and FRR (False Rejection Rate) in the matching process. The proposed algorithm extracts candidate minutiae from a thinned fingerprint image. Considering the characteristics of the thinned image, the algorithm selects minutiae that may be false and are located in a recoverable area. If the area where a selected minutia resides was thinned incorrectly due to noise or loss of information, the algorithm recovers the area and removes the minutia from the candidate list. By examining the ridge pattern of the block where each candidate minutia is found, true minutiae are retained while false minutiae are filtered out. In experiments on fingerprint images from NIST Special Database 14, the proposed algorithm reduced the false minutiae extraction rate remarkably and increased the overall performance of an automatic fingerprint identification system.
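One widely used false-minutiae heuristic of the kind this post-processing family builds on is distance-based pruning: the two endpoints of a ridge break, or the tip of a short spur, appear as a pair of minutiae closer together than any genuine pair should be. The sketch below shows only that generic rule, as an assumption-labeled stand-in; it is not the paper's recoverable-area algorithm.

```python
import math

def remove_close_minutiae(minutiae, min_dist=8.0):
    """Drop pairs of minutiae closer than `min_dist` pixels, a common
    heuristic for ridge-break and spur artifacts. `minutiae` is a list
    of (x, y) tuples; both members of a too-close pair are removed."""
    keep, removed = [], set()
    for i, (x1, y1) in enumerate(minutiae):
        if i in removed:
            continue
        for j in range(i + 1, len(minutiae)):
            x2, y2 = minutiae[j]
            if math.hypot(x1 - x2, y1 - y2) < min_dist:
                removed.update({i, j})         # treat the pair as false
                break
        if i not in removed:
            keep.append((x1, y1))
    return keep
```

The paper's contribution goes further than this: instead of discarding on distance alone, it re-examines the local ridge pattern and repairs incorrectly thinned areas before deciding.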

A Study of Correcting Technology based POI for Pedestrian Location-information Detecting in Traffic Connective Transferring System (교통 연계 환승 시스템의 보행자 위치정보 수집을 위한 POI 기반 위치 보정 기술 연구)

  • Jung, Jong-In;Lee, Sang-Sun
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.10 no.2
    • /
    • pp.84-93
    • /
    • 2011
  • To provide proper real-time information to pedestrians using a transport connection and transfer center, we studied the design of the test-bed (Gimpo airport) communication infrastructure and a pedestrian location tracking technique, covering both data collection and processing. The communication design must ensure that reliable data can be delivered to users of the transfer center while also supporting location tracking, so that the requirements for communication efficiency and tracking accuracy are met together. To achieve efficient location tracking, we address the problems of existing commercial real-time location identification technology, propose a new approach, and apply and analyze it in the test-bed. Wireless access points are placed in realistic positions by adding the characteristics of the actual building to an electronic map, and through analysis of their locations they are designated as the key points for both the communication infrastructure design and location tracking; a method for selecting these points is proposed. This paper presents how the points are set, how the approach is applied to the test-bed, and the examination results.
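The POI-based correction idea in the title can be illustrated as snapping a noisy pedestrian position estimate to the nearest registered point of interest when one lies within a plausible radius. This is a simplified sketch of that general idea under assumed 2-D coordinates, not the paper's actual correction procedure.

```python
import math

def snap_to_poi(position, pois, radius=5.0):
    """Correct a noisy (x, y) position estimate by snapping it to the
    nearest registered POI within `radius` meters; return the estimate
    unchanged if no POI is close enough. `pois` is a list of (x, y)
    POI coordinates taken from the electronic map."""
    x, y = position
    best, best_d = None, radius
    for px, py in pois:
        d = math.hypot(px - x, py - y)
        if d < best_d:                         # nearest POI inside radius
            best, best_d = (px, py), d
    return best if best is not None else position
```

The design choice this reflects is the one the abstract argues for: instead of trusting raw wireless positioning everywhere, a small set of well-chosen map-anchored points carries most of the correction burden.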