• Title/Summary/Keyword: iterative algorithms


Receiver Function Inversion Beneath Ngauruhoe Volcano, New Zealand, using the Genetic Algorithm (유전자 알고리즘을 이용한 뉴질랜드 Ngauruhoe 화산 하부의 수신함수 역산)

  • Park, Iseul;Kim, Ki Young
    • Geophysics and Geophysical Exploration
    • /
    • v.18 no.1
    • /
    • pp.1-8
    • /
    • 2015
  • To estimate the shear-wave velocity (${\nu}_s$) beneath the OTVZ seismic station on Ngauruhoe volcano in New Zealand, we calculated receiver functions (RFs) from 127 teleseismic events ($Mw{\geq}5.5$) with high signal-to-noise ratios recorded between November 11, 2011 and September 11, 2013. A genetic inversion algorithm was applied to 21 RFs calculated by the iterative time-domain deconvolution method. In the 1-D ${\nu}_s$ model derived by the inversion, the Moho is observed at 14 km depth, marked by a ${\nu}_s$ transition from 3.7 km/s to 4.7 km/s. The average ${\nu}_s$ of the overlying crust is 3.4 km/s, and the average ${\nu}_s$ of a more than 9-km-thick low-velocity layer (LVL) in the lower crust is 3.1 km/s. The LVL becomes thinner with increasing distance from the station. Another LVL, thicker than 10 km with ${\nu}_s$ less than 4.3 km/s, is found in the upper mantle. These LVLs in the lower crust and upper mantle, together with the relatively thin crust, may be related to magma activity caused by the subducting Pacific plate.
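As a rough illustration of the genetic inversion used in this abstract, the sketch below evolves a population of candidate 1-D velocity models against an observed receiver function. The forward-modeling function, layer count, and GA settings are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(model, observed_rf, forward):
    """Normalized squared misfit between observed and synthetic RFs."""
    synthetic = forward(model)
    return np.sum((observed_rf - synthetic) ** 2) / np.sum(observed_rf ** 2)

def genetic_inversion(observed_rf, forward, n_layers=6, pop_size=40,
                      n_gen=100, vs_bounds=(2.0, 5.0), mutation=0.05):
    """Evolve 1-D shear-velocity models toward the observed receiver function."""
    lo, hi = vs_bounds
    pop = rng.uniform(lo, hi, size=(pop_size, n_layers))
    for _ in range(n_gen):
        fitness = np.array([misfit(m, observed_rf, forward) for m in pop])
        order = np.argsort(fitness)           # lower misfit = fitter
        parents = pop[order[:pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_layers)   # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(n_layers) < mutation
            child[mask] += rng.normal(0, 0.1, mask.sum())  # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    fitness = np.array([misfit(m, observed_rf, forward) for m in pop])
    return pop[np.argmin(fitness)]
```

Because the fittest parents are carried over unchanged each generation, the best misfit can only improve; a real RF inversion would replace the identity-style forward function with a synthetic receiver-function computation.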

Application of Multispectral Remotely Sensed Imagery for the Characterization of Complex Coastal Wetland Ecosystems of southern India: A Special Emphasis on Comparing Soft and Hard Classification Methods

  • Shanmugam, Palanisamy;Ahn, Yu-Hwan;Sanjeevi, Shanmugam
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.3
    • /
    • pp.189-211
    • /
    • 2005
  • This paper compares the recently developed soft classification method based on Linear Spectral Mixture Modeling (LSMM) with traditional hard classification methods based on the Iterative Self-Organizing Data Analysis (ISODATA) and Maximum Likelihood Classification (MLC) algorithms, in order to achieve appropriate results for mapping, monitoring and preserving valuable coastal wetland ecosystems of southern India, using Indian Remote Sensing Satellite (IRS) 1C/1D LISS-III and Landsat-5 Thematic Mapper image data. The ISODATA and MLC methods were applied to these satellite images to produce maps of 5, 10, 15 and 20 wetland classes for each of three contrasting coastal wetland sites: Pitchavaram, Vedaranniyam and Rameswaram. The accuracy of the derived classes was assessed with the simplest descriptive statistic, overall accuracy, and a discrete multivariate technique, KAPPA accuracy. ISODATA classification produced maps with poor accuracy compared to MLC classification, which yielded maps with improved accuracy. However, both overall accuracy and KAPPA accuracy decreased systematically as more classes were derived from the IRS-1C/1D and Landsat-5 TM imagery by ISODATA and MLC. Two principal factors accounted for the decreased classification accuracy: spectral overlap/confusion and inadequate spatial resolution of the sensors. Of the two, the limited instantaneous field of view (IFOV) of these sensors caused many mixed pixels (mixels) in the imagery, and their effect on the classification process was a major obstacle to deriving accurate wetland cover types, in spite of the increasing spatial resolution of new-generation Earth Observation Sensors (EOS).
To improve the classification accuracy, a soft classification method based on Linear Spectral Mixture Modeling (LSMM) was applied to calculate the spectral mixture and classify the IRS-1C/1D LISS-III and Landsat-5 TM imagery. This method first considers the number of reflectance endmembers that form the scene spectra, then determines their nature, and finally decomposes the spectra into their endmembers. To evaluate the LSMM areal estimates, the resulting fractional endmembers were compared with the normalized difference vegetation index (NDVI), ground truth data, and the estimates derived from the traditional hard classifier (MLC). The findings revealed that NDVI values and vegetation fractions were positively correlated ($r^2$ = 0.96, 0.95 and 0.92 for Rameswaram, Vedaranniyam and Pitchavaram, respectively) and that NDVI and soil fraction values were negatively correlated ($r^2$ = 0.53, 0.39 and 0.13), indicating the reliability of the sub-pixel classification. Compared with ground truth data, the precision of LSMM was 92% for the moisture fraction and 96% for the soil fraction. The LSMM thus seems well suited to locating small wetland habitats that occur as sub-pixel inclusions, and to representing continuous gradations between different habitat types.
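The unmixing step at the core of LSMM can be sketched as a least-squares solve per pixel. The endmember matrix and the weighted sum-to-one constraint below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate endmember fractions for one pixel by least squares.

    The sum-to-one constraint is enforced softly as a heavily weighted
    extra equation; fractions are clipped to [0, 1] afterwards rather
    than solved with full inequality constraints.
    """
    n_bands, n_end = endmembers.shape
    # Append the sum-to-one constraint as a heavily weighted extra row.
    A = np.vstack([endmembers, np.full(n_end, 100.0)])
    b = np.append(pixel, 100.0)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(fractions, 0.0, 1.0)

# Hypothetical 3-band endmember spectra (columns: vegetation, soil).
endm = np.array([[0.1, 0.6],
                 [0.2, 0.5],
                 [0.8, 0.3]])
mixed_pixel = 0.3 * endm[:, 0] + 0.7 * endm[:, 1]
fractions = unmix(mixed_pixel, endm)
```

For this synthetic mixture the recovered fractions are close to the true 30/70 split, which is the same per-pixel reasoning the paper validates against NDVI and ground truth.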

Development of Detailed Design Automation Technology for AI-based Exterior Wall Panels and its Backframes

  • Kim, HaYoung;Yi, June-Seong
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1249-1249
    • /
    • 2022
  • The facade, a building's exterior envelope, is one of the crucial factors that determine its morphological identity and its functional performance, such as energy efficiency, earthquake resistance and fire resistance. However, regardless of the type of exterior material, frequent accidents in which exterior panels detach and fall continue to cause severe property damage and human casualties. The quality of the building envelope depends on the detailed design and is closely related to the back frames that support the exterior material. Detailed design means the creation of shop drawings, the stage at which the basic design is developed to a constructible level by specifying the exact necessary details. However, due to chronic problems in the construction industry, such as reduced working hours and a lack of design personnel, detailed design is often not properly carried out. Given these circumstances, it is necessary to develop the detailed design process for exterior materials and works based on the domain expertise of the construction industry using artificial intelligence (AI). Therefore, this study aims to establish a detailed design automation algorithm for AI-based, condition-responsive exterior wall panels and their back frames. The scope of the study is limited to the "detailed design" performed from the working drawings during the exterior work process, and to "stone panels" among exterior materials. First, working-level data on stone work are collected to analyze the existing detailed design process. Design parameters are then derived by analyzing the factors that affect the design of the building's exterior wall and back frames, such as structure, floor height, wind load, lift limits, and transportation constraints. Relational expressions among the derived parameters are formulated and algorithmized to implement a rule-based AI design.
These algorithms can be applied to detailed designs based on 3D BIM to automatically calculate quantities and unit prices. The next goal is to identify the repetitive elements in the process and implement a robotic process automation (RPA)-based system that links the entire "detailed design - quantity calculation - ordering" workflow. This study is significant because it extends design automation research, which has largely been limited to the schematic and design development phases, to the detailed design stage at the start of construction execution, and because it increases productivity through AI. In addition, it can help fundamentally improve the working environment of the construction industry through the development of technologies directly applicable to practice.
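The rule-based design the abstract describes can be pictured as a table of condition/outcome pairs evaluated in priority order. The conditions, thresholds, and parameter names below are entirely hypothetical placeholders, not the relational expressions derived in the study:

```python
# Hypothetical rule table: each rule maps a design condition to derived
# back-frame parameters. First matching rule wins (most specific first).
rules = [
    (lambda p: p["wind_load_kpa"] > 2.5, {"frame_spacing_m": 0.6}),
    (lambda p: p["wind_load_kpa"] > 1.0, {"frame_spacing_m": 0.9}),
    (lambda p: True,                     {"frame_spacing_m": 1.2}),
]

def apply_rules(params, rules):
    """Return the input parameters merged with the first matching outcome."""
    for condition, outcome in rules:
        if condition(params):
            return {**params, **outcome}
    return params
```

In a full system each derived parameter (spacing, anchor type, member size) would get its own rule set extracted from working-level stone-work data, and the results would feed the 3D BIM quantity takeoff.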


A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.234-240
    • /
    • 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton-scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered-subsets principle, can be applied to the problem of image reconstruction for the Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered-subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were run for 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to standard EM with 64 iterations, was approximately 14 times faster in computation time than standard EM. In OSEM, all three subset-selection schemes yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for the Compton camera. With properly chosen subset construction methods and a moderate number of subsets, our OSEM algorithm significantly improves computational efficiency while preserving the quality of the standard EM reconstruction. The OSEM algorithm with subsets based on both scatter angle and detector position appears most suitable.
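The OSEM scheme described in this abstract, grouping projection bins into ordered subsets and applying the EM multiplicative update once per subset, can be sketched for a generic system matrix. The row-interleaved subsets below stand in for the scatter-angle and detector-position groupings; this is a minimal sketch, not the authors' code:

```python
import numpy as np

def osem(y, A, n_subsets=4, n_iter=4, eps=1e-12):
    """Ordered-subsets EM for Poisson data y ~ A @ x.

    Projection bins (rows of A) are grouped into n_subsets in a fixed
    order; each sub-iteration applies the EM multiplicative update using
    only one subset, so one full pass does n_subsets cheap updates.
    """
    n_bins, n_pix = A.shape
    x = np.ones(n_pix)                        # uniform initial image
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            sens = As.sum(axis=0) + eps       # subset sensitivity image
            ratio = ys / (As @ x + eps)       # measured / current estimate
            x *= (As.T @ ratio) / sens        # EM multiplicative update
    return x
```

With `n_subsets=1` this reduces to standard EM, which is why 4 iterations of 16-subset OSEM are roughly equivalent to 64 EM iterations in the abstract's comparison.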

Assessment of Attenuation Correction Techniques with a $^{137}Cs$ Point Source ($^{137}Cs$ 점선원을 이용한 감쇠 보정기법들의 평가)

  • Bong, Jung-Kyun;Kim, Hee-Joung;Son, Hye-Kyoung;Park, Yun-Young;Park, Hae-Joung;Yun, Mi-Jin;Lee, Jong-Doo;Jung, Hae-Jo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.1
    • /
    • pp.57-68
    • /
    • 2005
  • Purpose: The objective of this study was to assess attenuation correction algorithms using a $^{137}Cs$ point source for brain positron emission tomography (PET) imaging. Materials and Methods: Four different types of phantoms were used to test the various attenuation correction techniques. Transmission data from a $^{137}Cs$ point source were acquired after infusing the emission source into the phantoms, and the emission data were subsequently acquired in 3D acquisition mode. Scatter correction was performed with a background tail-fitting algorithm. Emission data were then reconstructed using an iterative reconstruction method with measured (MAC), elliptical (ELAC), segmented (SAC) and remapping (RAC) attenuation correction, respectively. Reconstructed images were assessed both qualitatively and quantitatively. In addition, reconstructed images of a normal subject were assessed by nuclear medicine physicians, and subtracted images were also compared. Results: ELAC, SAC, and RAC provided uniform images with less noise for a cylindrical phantom. In contrast, a decrease in intensity at the central portion of the attenuation map was noticed in the MAC result. Reconstructed images of the Jaszczak and Hoffman phantoms showed better quality with RAC and SAC. The attenuation of the skull was clearly noticeable on images of the normal subject, and attenuation correction that ignored the skull's attenuation produced artificial defects in the brain images. Conclusion: More sophisticated attenuation correction methods are needed to achieve better accuracy in quantitative brain PET images.
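All of the correction schemes compared here ultimately scale each line of response (LOR) by the exponential of the line integral of the linear attenuation coefficient through the attenuation map; they differ in how that map is obtained (measured, elliptical, segmented, remapped). The discrete sketch below uses an illustrative segment discretization; the value 0.096 /cm is the approximate attenuation coefficient of water at 511 keV:

```python
import numpy as np

def attenuation_correction_factor(mu_segments, seg_lengths_cm):
    """Correction factor for one LOR: exp of the discrete line integral
    sum_i mu_i * l_i of the attenuation coefficient along the line."""
    return np.exp(np.sum(mu_segments * seg_lengths_cm, axis=-1))

# Illustrative LOR: 20 cm of water split into ten 2-cm segments.
mu = np.full(10, 0.096)        # ~water at 511 keV, in 1/cm
lengths = np.full(10, 2.0)     # segment lengths in cm
acf = attenuation_correction_factor(mu, lengths)
corrected_counts = 1000.0 * acf  # scale the measured emission counts
```

A 20 cm water path attenuates annihilation photon pairs by a factor of roughly seven, which is why an inaccurate attenuation map (e.g. one that ignores the skull, as noted above) visibly distorts the reconstructed brain image.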