• Title/Summary/Keyword: iterative algorithms

A 32×32-b Multiplier Using a New Method to Reduce a Compression Level of Partial Products (부분곱 압축단을 줄인 32×32 비트 곱셈기)

  • 홍상민;김병민;정인호;조태원
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.40 no.6
    • /
    • pp.447-458
    • /
    • 2003
  • A high-speed multiplier is an essential building block for today's digital signal processors. Signal processing applications typically realize iterative algorithms that require a large number of multiply, add, and accumulate operations. This paper describes a macro block of a parallel-structured multiplier that adopts a 32×32-b regularly structured tree (RST). To improve the speed of the tree part, a modified partial product generation method has been devised at the architecture level. This reduces the compression stage from four levels to three and, by utilizing 4-2 compressors, also reduces the propagation delay of the Wallace tree structure. Furthermore, this enables the tree part to be combined with four modular blocks to construct a CSA (carry save adder) tree. The multiplier architecture can therefore be laid out regularly from identical modules composed of Booth selectors, compressors, and Modified Partial Product Generators (MPPGs). At the circuit level, a new Booth selector and encoder with fewer transistors are proposed. Because the Booth selector is instantiated for every partial product bit, reducing its transistor count has a large impact on the total transistor count. The designed selector uses 9 transistors in pass-transistor logic (PTL), a 50% reduction compared with the conventional design. The multiplier, designed in a 0.25 µm, 2.5 V, 1-poly, 5-metal CMOS process, was simulated with HSPICE and EPIC. The delay is 4.2 ns and the average power consumption is 1.81 mW/MHz. These results are far better than those of conventional multipliers and equal to or better than the best published designs.
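The timing claim above rests on the arithmetic identity of the 4-2 compressor: five weight-1 input bits are reduced to one weight-1 sum bit and two weight-2 carry bits, so two full-adder levels collapse into one compressor level. Below is a minimal behavioral sketch of the standard two-stage formulation in Python (for illustration only; the paper's contribution is the PTL circuit realization and the modified partial product generation, which are not modeled here):

```python
def compressor_4_2(x1, x2, x3, x4, cin):
    """Standard 4-2 compressor: five weight-1 input bits in,
    one weight-1 sum and two weight-2 carries out."""
    # First stage: full-adder reduction of x1..x3
    s1 = x1 ^ x2 ^ x3
    cout = (x1 & x2) | (x2 & x3) | (x1 & x3)  # feeds the next column's cin
    # Second stage: combine the partial sum with x4 and the incoming cin
    sum_ = s1 ^ x4 ^ cin
    carry = (s1 & x4) | (x4 & cin) | (s1 & cin)
    return sum_, carry, cout

# Sanity check: the outputs always preserve the arithmetic weight.
for bits in range(32):
    x1, x2, x3, x4, cin = [(bits >> i) & 1 for i in range(5)]
    s, c, co = compressor_4_2(x1, x2, x3, x4, cin)
    assert x1 + x2 + x3 + x4 + cin == s + 2 * (c + co)
```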

An Indirect Localization Scheme for Low-Density Sensor Nodes in Wireless Sensor Networks (무선 센서 네트워크에서 저밀도 센서 노드에 대한 간접 위치 추정 알고리즘)

  • Jung, Young-Seok;Wu, Mary;Kim, Chong-Gun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.13 no.1
    • /
    • pp.32-38
    • /
    • 2012
  • Each sensor node can determine its location in several ways if it processes information based on its geographical position in the sensor network. In GPS-based localization, some nodes may not learn their locations because the scheme requires a line of sight for radio waves; moreover, it is costly and consumes a lot of power. Localization schemes without GPS use sophisticated mathematical algorithms to estimate the locations of sensor nodes, which may be inaccurate. AHLoS (Ad Hoc Localization System) is a hybrid scheme using both GPS and a location estimation algorithm. In AHLoS, a GPS node, which can receive its location from GPS, broadcasts its location to adjacent normal nodes that are not GPS devices. Normal nodes can estimate their locations using iterative triangulation algorithms if they receive at least three beacons containing the position information of neighboring nodes. However, a normal node may receive fewer than three beacons because of geographical conditions, network density, or node movement. We propose an indirect localization scheme for low-density sensor nodes that cannot directly receive at least three beacons from GPS nodes in the wireless network.
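For context, the triangulation step performed by a normal node once it has at least three beacons can be sketched as least-squares multilateration. This is an illustrative formulation, not the paper's exact algorithm; `beacons` (known positions) and `dists` (estimated ranges) are assumed inputs:

```python
import numpy as np

def trilaterate(beacons, dists):
    """Least-squares position from >= 3 beacon positions and range
    estimates, linearizing each circle equation against the last
    beacon (a standard multilateration trick)."""
    beacons = np.asarray(beacons, float)
    dists = np.asarray(dists, float)
    xN, dN = beacons[-1], dists[-1]
    # |x - b_i|^2 - |x - b_N|^2 = d_i^2 - d_N^2  is linear in x:
    A = 2.0 * (xN - beacons[:-1])
    b = (dists[:-1] ** 2 - dN ** 2
         - np.sum(beacons[:-1] ** 2, axis=1) + np.sum(xN ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three beacons, noiseless ranges to the true position (1, 1).
print(trilaterate([(0, 0), (4, 0), (0, 3)],
                  [2 ** 0.5, 10 ** 0.5, 5 ** 0.5]))  # ~[1. 1.]
```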

A Study on the Automatic Detection of Railroad Power Lines Using LiDAR Data and RANSAC Algorithm (LiDAR 데이터와 RANSAC 알고리즘을 이용한 철도 전력선 자동탐지에 관한 연구)

  • Jeon, Wang Gyu;Choi, Byoung Gil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.4
    • /
    • pp.331-339
    • /
    • 2013
  • LiDAR is one of the most widely used and important technologies for 3D modeling of the ground surface and objects because of its ability to provide dense and accurate range measurements. The objective of this research is to develop a method for the automatic detection and modeling of railroad power lines using high-density LiDAR data and the RANSAC algorithm. For detecting railroad power lines, the multi-echo properties of the laser data and shape knowledge of railroad power lines were employed. The main processing steps are cuboid analysis for detecting seed line segments, line tracking, connecting, and labeling. For modeling the railroad power lines, iterative RANSAC and least-squares adjustment were carried out to estimate the line parameters. Validating the result is challenging because of the difficulty of determining actual references on the ground surface; standard deviations of 8 cm and 5 cm for the x-y and z coordinates, respectively, are satisfactory outcomes. Regarding completeness, visual inspection shows that all the lines are detected and modeled well compared with the original point clouds. The overall process is fully automated, and the method handles any configuration of railroad wires efficiently.
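The detection-plus-modeling idea can be pictured with a generic RANSAC line fit: hypothesize a 3D line from two random points, score it by counting points within a distance tolerance, keep the best hypothesis, then refine the inliers by least squares. This is an illustrative sketch with assumed parameter values; the paper additionally exploits multi-echo filtering, cuboid analysis, and line tracking:

```python
import numpy as np

def ransac_line_3d(points, n_iters=200, tol=0.05, rng=None):
    """Fit a 3D line to noisy points with RANSAC, then refine the
    inlier set by a least-squares (principal-direction) fit."""
    rng = rng if rng is not None else np.random.default_rng(0)
    pts = np.asarray(points, float)
    best = None
    for _ in range(n_iters):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        d = q - p
        if np.linalg.norm(d) < 1e-9:
            continue
        d /= np.linalg.norm(d)
        # Perpendicular distance of every point to the candidate line.
        v = pts - p
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        inliers = dist < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    # Refinement: line through the inlier centroid along the
    # principal direction of the inlier cloud.
    inl = pts[best]
    centroid = inl.mean(axis=0)
    direction = np.linalg.svd(inl - centroid)[2][0]
    return centroid, direction, best
```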

Receiver Function Inversion Beneath Ngauruhoe Volcano, New Zealand, using the Genetic Algorithm (유전자 알고리즘을 이용한 뉴질랜드 Ngauruhoe 화산 하부의 수신함수 역산)

  • Park, Iseul;Kim, Ki Young
    • Geophysics and Geophysical Exploration
    • /
    • v.18 no.1
    • /
    • pp.1-8
    • /
    • 2015
  • To estimate the shear-wave velocity ($v_s$) beneath the OTVZ seismic station on Ngauruhoe volcano in New Zealand, we calculated receiver functions (RFs) using 127 teleseismic records ($M_w \geq 5.5$) with high signal-to-noise ratios recorded from November 11, 2011 to September 11, 2013. A genetic inversion algorithm was applied to 21 RFs calculated by the iterative time-domain deconvolution method. In the 1-D $v_s$ model derived by the inversion, the Moho is observed at a depth of 14 km, marked by a $v_s$ transition from 3.7 km/s to 4.7 km/s. The average $v_s$ of the overlying crust is 3.4 km/s, and the average $v_s$ of a low-velocity layer (LVL) more than 9 km thick in the lower crust is 3.1 km/s. The LVL becomes thinner with increasing distance from the station. Another LVL, thicker than 10 km with $v_s$ less than 4.3 km/s, is found in the upper mantle. These LVLs in the lower crust and upper mantle, together with the relatively thin crust, might be related to magma activity caused by the subducting Pacific plate.
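As background for the inversion method, a minimal real-coded genetic algorithm has the shape sketched below: a population of candidate layered-$v_s$ models evolves through selection, crossover, and mutation against a receiver-function misfit. The operators, rates, and the placeholder `misfit` are generic assumptions, not the authors' parameterization:

```python
import numpy as np

def genetic_minimize(misfit, bounds, pop=60, gens=100, rng=None):
    """Real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation. `misfit(model)` would score an RF waveform
    fit for a candidate layered-velocity model."""
    rng = rng if rng is not None else np.random.default_rng(1)
    lo, hi = np.asarray(bounds, float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))        # initial population
    for _ in range(gens):
        f = np.array([misfit(x) for x in X])
        # Tournament selection: the fitter of two random rows survives.
        i, j = rng.integers(0, pop, (2, pop))
        parents = X[np.where(f[i] < f[j], i, j)]
        # Uniform crossover between consecutive parents.
        mask = rng.random(X.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped back into the model bounds.
        children += rng.normal(0.0, 0.02 * (hi - lo), X.shape)
        X = np.clip(children, lo, hi)
    f = np.array([misfit(x) for x in X])
    return X[np.argmin(f)]
```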

Application of Multispectral Remotely Sensed Imagery for the Characterization of Complex Coastal Wetland Ecosystems of southern India: A Special Emphasis on Comparing Soft and Hard Classification Methods

  • Shanmugam, Palanisamy;Ahn, Yu-Hwan;Sanjeevi, Shanmugam
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.3
    • /
    • pp.189-211
    • /
    • 2005
  • This paper compares the recently evolved soft classification method based on Linear Spectral Mixture Modeling (LSMM) with the traditional hard classification methods based on the Iterative Self-Organizing Data Analysis (ISODATA) and Maximum Likelihood Classification (MLC) algorithms, in order to achieve appropriate results for mapping, monitoring, and preserving valuable coastal wetland ecosystems of southern India using Indian Remote Sensing Satellite (IRS) 1C/1D LISS-III and Landsat-5 Thematic Mapper image data. The ISODATA and MLC methods were applied to these satellite image data to produce maps of 5, 10, 15, and 20 wetland classes for each of three contrasting coastal wetland sites: Pitchavaram, Vedaranniyam, and Rameswaram. The accuracy of the derived classes was assessed with the simplest descriptive statistic, overall accuracy, and a discrete multivariate technique, KAPPA accuracy. ISODATA classification resulted in maps with poor accuracy compared to MLC classification, which produced maps with improved accuracy. However, there was a systematic decrease in overall accuracy and KAPPA accuracy as more classes were derived from the IRS-1C/1D and Landsat-5 TM imagery by ISODATA and MLC. Two principal factors explained the decreased classification accuracy: spectral overlapping/confusion and inadequate spatial resolution of the sensors. Compared with the former, the limited instantaneous field of view (IFOV) of these sensors caused a number of mixed pixels (mixels) to occur in the image, and their effect on the classification process was a major obstacle to deriving accurate wetland cover types, in spite of the increasing spatial resolution of new-generation Earth Observation Sensors (EOS). To improve the classification accuracy, a soft classification method based on LSMM was applied to calculate the spectral mixture and classify the IRS-1C/1D LISS-III and Landsat-5 TM imagery. This method considers the number of reflectance end-members that form the scene spectra, determines their nature, and finally decomposes the spectra into their end-members. To evaluate the LSMM areal estimates, the resulting end-member fractions were compared with the normalized difference vegetation index (NDVI), ground truth data, and the estimates derived from the traditional hard classifier (MLC). The findings revealed that NDVI values and vegetation fractions were positively correlated ($r^2 = 0.96$, 0.95, and 0.92 for Rameswaram, Vedaranniyam, and Pitchavaram, respectively) and that NDVI and soil fraction values were negatively correlated ($r^2 = 0.53$, 0.39, and 0.13), indicating the reliability of the sub-pixel classification. Compared with ground truth data, the precision of LSMM was 92% for the moisture fraction and 96% for the soil fraction. In general, the LSMM seems well suited to locating small wetland habitats that occur as sub-pixel inclusions and to representing continuous gradations between different habitat types.
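At its core, the LSMM solves a small constrained least-squares problem per pixel: the observed spectrum is modeled as a combination of end-member spectra with nonnegative fractions that sum to one. One common way to approximate both constraints is nonnegative least squares on an augmented system, sketched below (an illustration; the paper does not state which solver it used):

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, weight=100.0):
    """Linear spectral mixture model: solve pixel ~= E @ f with
    f >= 0 and sum(f) ~= 1, the latter enforced by a heavily
    weighted extra row in the NNLS system."""
    E = np.asarray(endmembers, float)   # shape: (bands, n_endmembers)
    p = np.asarray(pixel, float)        # shape: (bands,)
    E_aug = np.vstack([E, weight * np.ones(E.shape[1])])
    p_aug = np.append(p, weight)
    fractions, _ = nnls(E_aug, p_aug)
    return fractions                    # per-end-member cover fractions
```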

Development of Detailed Design Automation Technology for AI-based Exterior Wall Panels and its Backframes

  • Kim, HaYoung;Yi, June-Seong
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1249-1249
    • /
    • 2022
  • The facade, a building's exterior material, is one of the crucial factors that determine its morphological identity and its functional performance, such as energy efficiency, earthquake resistance, and fire resistance. However, regardless of the type of exterior material, accidents in which exterior materials fall off continue to cause major property damage and human casualties. The quality of the building envelope depends on the detailed design and is closely related to the back frames that support the exterior material. Detailed design means the creation of shop drawings, the stage at which the basic design is developed, by specifying the exact necessary details, to a level where construction is possible. However, due to chronic problems in the construction industry, such as reduced working hours and a lack of design personnel, detailed design is not being implemented properly. Considering these characteristics, it is necessary to develop the detailed design process for exterior materials and works based on the domain-expert knowledge of the construction industry, using artificial intelligence (AI). Therefore, this study aims to establish a detailed design automation algorithm for AI-based, condition-responsive exterior wall panels and their back frames. The scope of the study is limited to "detailed design" performed based on the working drawings during the exterior work process, and to "stone panels" among exterior materials. First, working-level data on stone works are collected to analyze the existing detailed design process. After that, design parameters are derived by analyzing factors that affect the design of the building's exterior wall and back frames, such as structure, floor height, wind load, lift limit, and transportation elements. Relational expressions between the derived parameters are then formulated and algorithmized to implement a rule-based AI design. These algorithms can be applied to detailed designs based on 3D BIM to automatically calculate quantities and unit prices. The next goal is to identify the iterative elements that occur in the process and implement a robotic process automation (RPA)-based system to link the entire "detailed design - quantity calculation - order" process. This study is significant because it expands design automation research, which has been rather limited to basic and implementation design, to the detailed design area at the beginning of construction execution, and because it increases productivity by using AI. In addition, it can help fundamentally improve the working environment of the construction industry through the development of technologies that are directly applicable in practice.
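To make the idea of parameters algorithmized into rules concrete, a toy rule-based sizing function might look like the sketch below. Every name and threshold here is invented for illustration; the actual parameters, relational expressions, and rules are those derived in the study:

```python
from dataclasses import dataclass

@dataclass
class PanelContext:
    """Hypothetical design inputs echoing factors named in the
    abstract (floor height, wind load, panel geometry)."""
    floor_height_m: float
    wind_load_kpa: float
    panel_width_m: float

def backframe_spacing(ctx: PanelContext) -> float:
    """Toy rule: tighter back-frame spacing under higher wind load
    or taller floors. Thresholds are illustrative only."""
    spacing = min(ctx.panel_width_m, 0.6)   # default module spacing (m)
    if ctx.wind_load_kpa > 2.0:
        spacing *= 0.75                     # high wind: densify frames
    if ctx.floor_height_m > 4.0:
        spacing *= 0.9                      # tall floor: extra stiffness
    return round(spacing, 3)
```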

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.234-240
    • /
    • 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered subsets principle, can be applied to the problem of image reconstruction for the Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and both scatter angle- and detector position-based subsets. EM and OSEM with 16 subsets were performed with 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to standard EM with 64 iterations, was approximately 14 times faster in computation time than standard EM. In OSEM, all three schemes for choosing subsets yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for the Compton camera. With properly chosen subset construction methods and moderate numbers of subsets, our OSEM algorithm significantly improves computational efficiency while keeping the original quality of the standard EM reconstruction. The OSEM algorithm with both scatter angle- and detector position-based subsets appears to be the most suitable.
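The OSEM update evaluated here is the standard multiplicative EM step applied to one subset of projections at a time, which is why 4 iterations over 16 subsets roughly match 64 full EM iterations. A generic dense-matrix sketch follows (names are assumed; a realistic Compton-camera system model would be far larger and sparse):

```python
import numpy as np

def osem(y, A, subsets, n_iters=4):
    """Ordered-subsets EM: y are measured projections, A is the
    system matrix (projections x voxels), subsets is a list of
    row-index arrays. Subset construction (by scatter angle,
    detector position, or both) is what the paper compares."""
    x = np.ones(A.shape[1])                        # uniform start image
    for _ in range(n_iters):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            # Forward project, take the data/model ratio, backproject,
            # and normalize by the subset sensitivity.
            ratio = ys / np.maximum(As @ x, 1e-12)
            sens = np.maximum(As.T @ np.ones(len(rows)), 1e-12)
            x *= (As.T @ ratio) / sens
    return x
```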

Assessment of Attenuation Correction Techniques with a $^{137}Cs$ Point Source ($^{137}Cs$ 점선원을 이용한 감쇠 보정기법들의 평가)

  • Bong, Jung-Kyun;Kim, Hee-Joung;Son, Hye-Kyoung;Park, Yun-Young;Park, Hae-Joung;Yun, Mi-Jin;Lee, Jong-Doo;Jung, Hae-Jo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.1
    • /
    • pp.57-68
    • /
    • 2005
  • Purpose: The objective of this study was to assess attenuation correction algorithms with a $^{137}Cs$ point source for brain positron emission tomography (PET) imaging. Materials & Methods: Four different types of phantoms were used in this study to test various attenuation correction techniques. Transmission data of a $^{137}Cs$ point source were acquired after infusing the emission source into the phantoms, and the emission data were subsequently acquired in 3D acquisition mode. Scatter corrections were performed with a background tail-fitting algorithm. Emission data were then reconstructed using an iterative reconstruction method with measured (MAC), elliptical (ELAC), segmented (SAC), and remapping (RAC) attenuation correction, respectively. Reconstructed images were then assessed both qualitatively and quantitatively. In addition, reconstructed images of a normal subject were assessed by nuclear medicine physicians, and subtracted images were also compared. Results: ELAC, SAC, and RAC provided uniform images with less noise for a cylindrical phantom. In contrast, a decrease in intensity at the central portion of the attenuation map was noticed in the MAC result. Reconstructed images of the Jaszczak and Hoffman phantoms presented better quality with RAC and SAC. The attenuation of the skull was clearly visible in images of the normal subject, and attenuation correction that did not account for the skull resulted in artificial defects in the brain images. Conclusion: More sophisticated and improved attenuation correction methods are needed to obtain better accuracy in quantitative brain PET images.
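Measured attenuation correction of the kind compared here rests on Beer-Lambert attenuation along each line of response: the correction factor is the exponential of the line integral of the attenuation coefficient through the transmission-derived map. A generic sketch under assumed names (not the scanner's actual pipeline):

```python
import numpy as np

def attenuation_correction_factor(mu_map, path, step_cm=0.1):
    """Beer-Lambert correction for one line of response (LOR).
    `mu_map(point)` returns the linear attenuation coefficient
    (1/cm) from the transmission scan; `path` is the LOR sampled
    every `step_cm` centimeters."""
    mu_values = np.array([mu_map(p) for p in path])
    line_integral = mu_values.sum() * step_cm   # simple Riemann sum
    return np.exp(line_integral)                # ACF multiplies emission data
```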