• Title/Summary/Keyword: Sampling-Based Algorithm


Image Reconstruction Based on Deep Learning for the SPIDER Optical Interferometric System

  • Sun, Yan;Liu, Chunling;Ma, Hongliu;Zhang, Wang
    • Current Optics and Photonics
    • /
    • v.6 no.3
    • /
    • pp.260-269
    • /
    • 2022
  • Segmented planar imaging detector for electro-optical reconnaissance (SPIDER) is an emerging technology for optical imaging. However, this novel detection approach suffers from degraded imaging quality. In this study, a 6 × 6 planar waveguide is used after each lenslet to expand the field of view. The imaging principles of the field-plane waveguide structure are described in detail. A local multiple-sampling simulation mode is adopted to simulate the improved imaging system. A novel image-reconstruction algorithm based on deep learning is proposed, which effectively addresses the defects in imaging quality that arise during image reconstruction. The proposed algorithm is compared with a conventional algorithm to verify its superior reconstruction results. The comparison across different scenarios confirms the suitability of the algorithm for the system described in this paper.

The Design of Single Phase PFC using a DSP (DSP를 이용한 단상 PFC의 설계)

  • Yang, Oh
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.44 no.6
    • /
    • pp.57-65
    • /
    • 2007
  • This paper presents the design of a single-phase PFC (Power Factor Correction) converter using a DSP (TMS320F2812). To realize the proposed boost PFC converter in average-current-mode control, the DSP requires A/D samples of the line input voltage, the inductor current, and the converter output voltage. Because of FET switching noise, these samples contain high-frequency noise and switching ripple. The solution is to keep the A/D sampling away from the switching instants. Because the PWM duty cycle varies from 5% to 95%, a fixed sampling time cannot be used. In this paper, the three A/D converters of the DSP are started using a prediction algorithm for the FET ON/OFF times at every sampling cycle (40 kHz). The implemented A/D sampling algorithm uses only one DSP timer, is very simple, and auto-starts the A/D converters. The experimental results show that the power factor was about 0.99 over a wide input-voltage range and that the output ripple voltage was smaller than 5 Vpp at an 80 Vdc output. Finally, the parameters and gains of the PI controllers are tuned over a serial link to a Windows XP-based PC. These results demonstrate the feasibility and usefulness of the implemented PFC converter.
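As described above, the key idea is to trigger the A/D conversions away from the FET switching instants by predicting the PWM ON/OFF times for each 40 kHz cycle. The Python sketch below illustrates that scheduling idea only; the mid-interval placement and the keep-out margin are illustrative assumptions, not the timing logic implemented on the TMS320F2812.

```python
def schedule_adc_trigger(duty, period_us=25.0, edge_margin_us=1.0):
    """Pick an A/D trigger instant away from the FET switching edges.

    duty           : predicted PWM duty for the coming cycle (0.05 .. 0.95)
    period_us      : switching period (40 kHz -> 25 us)
    edge_margin_us : keep-out margin around each switching instant

    Returns the trigger time (us) measured from the start of the cycle.
    This is an illustrative sketch, not the paper's DSP timing logic.
    """
    t_on = duty * period_us                 # predicted FET turn-off instant
    # Sample in the middle of the longer sub-interval (ON or OFF),
    # so the conversion is as far as possible from both edges.
    if t_on >= period_us - t_on:
        trigger = t_on / 2.0                              # middle of the ON interval
    else:
        trigger = t_on + (period_us - t_on) / 2.0         # middle of the OFF interval
    # Clamp away from the edges in case the duty is extreme.
    return min(max(trigger, edge_margin_us), period_us - edge_margin_us)

# Example: at 30 % duty the trigger lands in the middle of the OFF interval.
print(schedule_adc_trigger(0.30))   # ~16.25 us into the 25 us cycle
```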

Improvement in Computation of ΔV10 Flicker Severity Index Using Intelligent Methods

  • Moallem, Payman;Zargari, Abolfazl;Kiyoumarsi, Arash
    • Journal of Power Electronics
    • /
    • v.11 no.2
    • /
    • pp.228-236
    • /
    • 2011
  • The ΔV10 or 10-Hz flicker index, as a common method of measurement of voltage flicker severity in power systems, requires a high computational cost and a large amount of memory. In this paper, for measuring the ΔV10 index, a new method based on the Adaline (adaptive linear neuron) system, the FFT (fast Fourier transform), and the PSO (particle swarm optimization) algorithm is proposed. In this method, for reducing the sampling frequency, calculations are carried out on the envelope of a power system voltage that contains a flicker component. Extracting the envelope of the voltage is implemented by the Adaline system. In addition, in order to increase the accuracy in computing the flicker components, the PSO algorithm is used for reducing the spectral leakage error in the FFT calculations. Therefore, the proposed method has a lower computational cost in FFT computation due to the use of a smaller sampling window. It also requires less memory since it uses the envelope of the power system voltage. Moreover, it shows more accuracy because the PSO algorithm is used in the determination of the flicker frequency and the corresponding amplitude. The sensitivity of the proposed method with respect to the main frequency drift is very low. The proposed algorithm is evaluated by simulations. The validity of the simulations is proven by the implementation of the algorithm with an ARM microcontroller-based digital system. Finally, its function is evaluated with real-time measurements.
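The Adaline stage in this abstract tracks the envelope of the voltage so the flicker spectrum can be computed from a low-rate signal. The following is a minimal LMS-style Adaline sketch of that envelope-tracking step, assuming a sin/cos reference at the nominal fundamental frequency and an arbitrary learning rate; it is not the exact Adaline structure, FFT windowing, or PSO correction used in the paper.

```python
import numpy as np

def adaline_envelope(v, fs, f0=60.0, mu=0.05):
    """Track the amplitude envelope of a power-system voltage with an Adaline.

    v  : sampled voltage (1-D array)
    fs : sampling frequency [Hz]
    f0 : nominal fundamental frequency [Hz]
    mu : LMS learning rate (assumed value, tune for convergence)

    The two weights follow the in-phase/quadrature components of the
    fundamental; their norm is the instantaneous envelope.  This is a
    generic LMS sketch, not the exact Adaline structure of the paper.
    """
    n = np.arange(len(v))
    x = np.stack([np.sin(2 * np.pi * f0 * n / fs),
                  np.cos(2 * np.pi * f0 * n / fs)], axis=1)
    w = np.zeros(2)
    env = np.empty(len(v))
    for k in range(len(v)):
        y = w @ x[k]                # Adaline output (estimated fundamental)
        e = v[k] - y                # estimation error
        w += 2 * mu * e * x[k]      # LMS weight update
        env[k] = np.hypot(w[0], w[1])
    return env

# Example: a 10 Hz flicker component modulating a 60 Hz carrier
fs = 1600.0
t = np.arange(0, 1.0, 1 / fs)
v = (1.0 + 0.05 * np.cos(2 * np.pi * 10 * t)) * np.sin(2 * np.pi * 60 * t)
envelope = adaline_envelope(v, fs)   # an FFT of `envelope` exposes the 10 Hz component
```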

Visual Model of Pattern Design Based on Deep Convolutional Neural Network

  • Jingjing Ye;Jun Wang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.2
    • /
    • pp.311-326
    • /
    • 2024
  • The rapid development of neural network technology has enabled big-data-driven network models to reproduce the texture effects of complex objects. Because of the limitations of complex scenes, custom template matching must be established and applied across many areas of computer vision. Such systems depend only weakly on high-quality, small labeled sample databases, and machine learning systems that rely on deep feature connections perform texture-effect inference relatively poorly. A neural-network-based style transfer algorithm collects and preserves pattern data, and extracts and modernizes pattern features; through the algorithm model, the texture and color of patterns can be presented and displayed digitally more easily. In this paper, based on the texture-effect reasoning of custom template matching, the visualization of the target is transformed into a 3D model. The similarity between the scene to be inferred and the user-defined template is computed from a user-defined template of multi-dimensional external feature labels. A convolutional neural network is adopted to optimize the external region of the object, improving the sampling quality and computational performance of the sample pyramid structure. The results indicate that the proposed algorithm can accurately capture the salient target, remove more noise, and improve the visualization results. The proposed deep convolutional neural network optimization algorithm offers good speed, accuracy, and robustness. It adapts to a wider range of task scenes, presents the vision-related information produced by image conversion, and further improves the computational efficiency and accuracy of convolutional networks, which is of high significance for research on image information conversion.

K-means Clustering using a Grid-based Sampling

  • Park, Hee-Chang;Lee, Sun-Myung
    • Proceedings of the Korean Data and Information Science Society Conference
    • /
    • 2003.10a
    • /
    • pp.249-258
    • /
    • 2003
  • K-means clustering has been widely used in many applications, such as pattern analysis and recognition, data analysis, image processing, and market research. It can identify dense and sparse regions among data attributes or object attributes. However, the k-means algorithm requires many hours to obtain the k clusters we want, because it is primitive and explorative. In this paper we propose a new method of k-means clustering using a grid-based sample. It is faster than traditional clustering methods while maintaining accuracy (a generic sketch of the idea follows this entry).

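The following is a generic sketch of the grid-based sampling idea: points are binned into a uniform grid, one representative per occupied cell is kept, and k-means runs only on the representatives before the full data set is assigned to the resulting centers. The cell resolution and the choice of the cell mean as representative are assumptions for illustration; the paper's exact grid scheme may differ.

```python
import numpy as np

def grid_sample(X, cells_per_dim=10):
    """Keep one representative point (the cell mean) per occupied grid cell."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    idx = np.floor((X - lo) / (hi - lo + 1e-12) * cells_per_dim).astype(int)
    idx = np.minimum(idx, cells_per_dim - 1)
    reps = {}
    for cell, x in zip(map(tuple, idx), X):
        reps.setdefault(cell, []).append(x)
    return np.array([np.mean(pts, axis=0) for pts in reps.values()])

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means on the (sampled) points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

# Cluster the grid sample instead of all points, then assign the full data set.
X = np.random.default_rng(1).normal(size=(100_000, 2))
centers = kmeans(grid_sample(X, cells_per_dim=20), k=3)
full_labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
```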

Single Image Super-Resolution Using CARDB Based on Iterative Up-Down Sampling Architecture (CARDB를 이용한 반복적인 업-다운 샘플링 네트워크 기반의 단일 영상 초해상도 복원)

  • Kim, Ingu;Yu, Songhyun;Jeong, Jechang
    • Journal of Broadcast Engineering
    • /
    • v.25 no.2
    • /
    • pp.242-251
    • /
    • 2020
  • Recently, many deep convolutional neural networks for image super-resolution have been studied. Existing deep learning-based super-resolution algorithms adopt an architecture that up-samples the resolution at the end of the network. This post-upsampling architecture is inefficient at large scaling factors because it must predict a large amount of information to map from low resolution to high resolution at once. In this paper, we propose a single-image super-resolution network using Channel Attention Residual Dense Blocks (CARDB) based on an iterative up-down sampling architecture. The proposed algorithm efficiently predicts the mapping relationship between low and high resolution, and shows up to 0.14 dB performance improvement and enhanced subjective image quality compared to the existing algorithm at large scaling factors.
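The iterative up-down sampling idea alternates up-projection and down-projection so that the reconstruction error is corrected at both resolutions, instead of up-sampling once at the end of the network. The PyTorch sketch below shows one minimal up/down projection stage with a simple channel-attention block; the layer sizes, kernel choices, and attention design are illustrative assumptions and do not reproduce the CARDB architecture of the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative)."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(x)

class UpDownBlock(nn.Module):
    """One iterative up-down sampling stage (set up here for a x4 scale factor)."""
    def __init__(self, ch=32, scale=4):
        super().__init__()
        k, s, p = 8, scale, 2          # kernel/stride/padding matched to x4 projection
        self.up = nn.ConvTranspose2d(ch, ch, k, s, p)    # LR -> HR features
        self.down = nn.Conv2d(ch, ch, k, s, p)           # HR -> LR features
        self.att = ChannelAttention(ch)
    def forward(self, lr_feat):
        hr_feat = torch.relu(self.up(lr_feat))                 # up-projection
        lr_back = torch.relu(self.down(hr_feat))               # down-projection
        residual = lr_feat - lr_back                           # error at low resolution
        hr_refined = hr_feat + torch.relu(self.up(residual))   # correct HR features
        return self.att(hr_refined)

# Example: 32-channel LR feature map of size 24x24 -> 96x96 HR feature map
feat = torch.randn(1, 32, 24, 24)
print(UpDownBlock()(feat).shape)   # torch.Size([1, 32, 96, 96])
```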

Performance Improvement of Fractal Dimension Estimator Based on a New Sampling Method (새로운 샘플링법에 기초한 프랙탈 차원 추정자의 정도 개선)

  • Jin, Gang-Gyoo;Choi, Dong-Sik
    • Journal of Navigation and Port Research
    • /
    • v.38 no.1
    • /
    • pp.45-52
    • /
    • 2014
  • Fractal theory has been widely used to quantify the complexity of remotely sensed digital elevation models and images. Despite successful applications of fractals to a variety of fields including computer graphics, engineering, and geosciences, the performance of fractal estimators depends highly on data sampling. In this paper, we propose an algorithm for computing the fractal dimension based on the triangular prism method and a new sampling method. The proposed sampling method combines two existing methods, the geometric step method and the divisor step method, to increase pixel utilization. In addition, while the existing estimation methods are based on an N×N window, the proposed method expands this to an N×M window. The proposed method is applied to a generated fractal DEM, the Brodatz image DB, and real images taken on campus to demonstrate its feasibility.
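The triangular prism method estimates the fractal dimension of a surface from how the measured prism surface area varies with the sampling step. The sketch below uses the common formulation in which D equals 2 minus the slope of the log(area) versus log(step) regression, with a plain geometric step sequence; the paper's combined geometric/divisor step sampling and N×M windowing are not reproduced.

```python
import numpy as np

def triangle_area(p1, p2, p3):
    """Area of a 3-D triangle from its vertices."""
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))

def prism_area(z, step):
    """Total triangular-prism surface area of a DEM at a given step size."""
    n = (z.shape[0] - 1) // step * step          # largest covered extent
    total = 0.0
    for i in range(0, n, step):
        for j in range(0, n, step):
            a = np.array([i,        j,        z[i, j]],               float)
            b = np.array([i + step, j,        z[i + step, j]],        float)
            c = np.array([i + step, j + step, z[i + step, j + step]], float)
            d = np.array([i,        j + step, z[i, j + step]],        float)
            e = (a + b + c + d) / 4.0            # prism apex at the cell centre
            total += (triangle_area(a, b, e) + triangle_area(b, c, e)
                      + triangle_area(c, d, e) + triangle_area(d, a, e))
    return total

def fractal_dimension(z, steps=(1, 2, 4, 8, 16)):
    """D = 2 - slope of log(area) vs log(step); geometric steps assumed."""
    areas = [prism_area(z, s) for s in steps]
    slope = np.polyfit(np.log(steps), np.log(areas), 1)[0]
    return 2.0 - slope

# Example on a synthetic rough surface (illustrates usage only; the true D is unknown)
z = np.cumsum(np.cumsum(np.random.default_rng(0).normal(size=(129, 129)), 0), 1)
print(fractal_dimension(z))
```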

An Algorithm of Minimum Bandpass Sampling Selection with Guard-band Between Down-converted Adjacent IF signals (하향변환된 인접 IF신호간의 보호대역을 고려한 최소 대역통과 샘플링 주파수 선택 알고리즘)

  • Bae, Jung-Hwa;Cho, Jae-Wan;Ko, Yong-Chae;Cac, Tran Nguyen;Park, Jin-Woo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.12A
    • /
    • pp.1286-1295
    • /
    • 2007
  • Based on bandpass sampling theory, this paper proposes a novel method to find the valid sampling frequency range and the minimum sampling rate with low computational complexity for down-conversion of N bandpass radio frequency (RF) signals, under all possible signal placements (full permutations) in the IF stage. Additionally, we have developed a complexity-reducing method to obtain the optimal and minimal sampling rate that supports a user-specified guard band or spacing between adjacent down-converted signal spectra. Moreover, we have verified through comparisons with other methods that the proposed methods have more advantageous properties.
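For a single bandpass signal occupying [f_L, f_H] with bandwidth B, the classical valid uniform sampling ranges are 2·f_H/n ≤ f_s ≤ 2·f_L/(n−1) for integers n up to ⌊f_H/B⌋. The sketch below enumerates those ranges and picks the minimum valid rate; it covers only this single-signal textbook case, not the paper's multi-signal algorithm with guard bands between down-converted IF spectra.

```python
import math

def valid_bandpass_ranges(f_low, f_high):
    """Valid uniform sampling-rate ranges for one band-pass signal [f_low, f_high].

    Classical condition: 2*f_high/n <= fs <= 2*f_low/(n-1), n = 1..floor(f_high/B).
    (Single-signal case; the paper extends this to N signals with guard bands.)
    """
    bw = f_high - f_low
    ranges = []
    for n in range(1, int(f_high // bw) + 1):
        lo = 2.0 * f_high / n
        hi = math.inf if n == 1 else 2.0 * f_low / (n - 1)
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

def minimum_sampling_rate(f_low, f_high):
    """Smallest fs over all valid ranges (the lower edge of the largest-n range)."""
    return min(lo for lo, _ in valid_bandpass_ranges(f_low, f_high))

# Example: a 5 MHz-wide signal occupying 20-25 MHz
print(valid_bandpass_ranges(20e6, 25e6))
print(minimum_sampling_rate(20e6, 25e6))   # 10 MHz = 2*B here, since f_high is an integer multiple of B
```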

Improvement of Online Motion Planning based on RRT* by Modification of the Sampling Method (샘플링 기법의 보완을 통한 RRT* 기반 온라인 이동 계획의 성능 개선)

  • Lee, Hee Beom;Kwak, HwyKuen;Kim, JoonWon;Lee, ChoonWoo;Kim, H.Jin
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.3
    • /
    • pp.192-198
    • /
    • 2016
  • The motion planning problem is still one of the important issues in robotic applications. In many real-time motion planning problems, it is advisable to find a feasible solution quickly and improve the found solution toward the optimal one before the previously arranged motion plan ends. For such reasons, sampling-based approaches are becoming popular for real-time applications. In particular, the rapidly exploring random tree* (RRT*) algorithm is attractive for real-time applications because it approaches an optimal solution as it iterates. This paper presents a modified version of informed RRT*, an extension of RRT* that increases the rate of convergence to the optimal solution by improving the sampling method of RRT*. In online motion planning, the robot plans a path while simultaneously moving along the planned path. Therefore, the part of the path near the robot is less likely to be sampled extensively. For a better solution in online motion planning, we modified the sampling method of informed RRT* by combining it with a sampling method that improves the path near the robot. In comparisons among the basic RRT*, informed RRT*, and the proposed RRT* in online motion planning, the proposed RRT* showed the best result, producing the solution closest to the optimum.
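Informed RRT* draws samples from the prolate hyperspheroid (an ellipse in 2-D) whose foci are the start and goal and whose transverse axis equals the current best path cost. The sketch below shows that sampling step in 2-D, with an assumed extra bias toward the region around the robot to reflect the modification described in the abstract; the blending ratio and near-robot radius are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_informed(start, goal, c_best, p_near_robot=0.2, robot=None, r_robot=1.0):
    """Sample a 2-D point for a (modified) informed RRT*.

    With probability p_near_robot a sample is drawn near the robot (an assumed
    stand-in for the paper's modification); otherwise it is drawn uniformly from
    the ellipse with foci start/goal and major axis c_best.
    """
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    if robot is not None and rng.random() < p_near_robot:
        return robot + rng.uniform(-r_robot, r_robot, size=2)   # near-robot sample
    c_min = np.linalg.norm(goal - start)                        # direct start-goal distance
    centre = (start + goal) / 2.0
    a = c_best / 2.0                                            # semi-major axis
    b = np.sqrt(max(c_best**2 - c_min**2, 0.0)) / 2.0           # semi-minor axis
    # Uniform point in the unit disc, stretched to the ellipse axes
    theta = rng.uniform(0.0, 2.0 * np.pi)
    r = np.sqrt(rng.random())
    local = np.array([a * r * np.cos(theta), b * r * np.sin(theta)])
    # Rotate the ellipse so its major axis points from start to goal
    phi = np.arctan2(goal[1] - start[1], goal[0] - start[0])
    rot = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
    return centre + rot @ local

# Example: current best path cost 12 between (0,0) and (10,0), robot at (2,1)
print(sample_informed((0, 0), (10, 0), c_best=12.0, robot=np.array([2.0, 1.0])))
```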

A Study on Efficient Signing Methods and Optimal Parameters Proposal for SeaSign Implementation (SeaSign에 대한 효율적인 서명 방법 및 최적 파라미터 제안 연구)

  • Suhri Kim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.2
    • /
    • pp.167-177
    • /
    • 2024
  • This paper proposes optimization techniques for SeaSign, an isogeny-based digital signature algorithm. SeaSign combines the class group action of CSIDH with the Fiat-Shamir with aborts paradigm. While CSIDH-based algorithms have regained attention due to the polynomial-time attacks on SIDH-based algorithms, SeaSign has not undergone significant optimization because of its inefficiency. In this paper, an efficient signing method for SeaSign is proposed. The proposed signing method is simple yet powerful, achieved by repositioning the rejection sampling within the algorithm. Additionally, this paper presents parameters that provide optimal performance for the proposed algorithm. As a result, with the original parameters of SeaSign, the proposed method is three times faster than the original SeaSign. Furthermore, combining the newly suggested parameters with the signing method proposed in this paper yields performance that is 290 times faster than the original SeaSign and 7.47 times faster than the method proposed by Decru et al.
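SeaSign masks its secret exponent vector with a larger random vector and uses Fiat-Shamir-with-aborts style rejection sampling so that published responses reveal nothing about the secret. The toy sketch below shows only this generic rejection-sampling step on integer vectors; the bounds, vector lengths, and the repositioning of the rejection step proposed in the paper are not modeled.

```python
import secrets

def sample_masked_response(secret, B, delta, challenge_bit):
    """Toy Fiat-Shamir-with-aborts rejection sampling on exponent vectors.

    secret        : secret exponent vector, entries in [-B, B]
    B, delta      : secret bound and masking slack (illustrative values)
    challenge_bit : 0 or 1 (reveal the mask, or the mask minus the secret)

    Returns the response vector, or None on abort (the caller retries).
    This is a generic illustration, not SeaSign's actual parameters.
    """
    # Ephemeral mask drawn from the larger range [-(delta+1)*B, (delta+1)*B]
    mask = [secrets.randbelow(2 * (delta + 1) * B + 1) - (delta + 1) * B
            for _ in secret]
    response = [m - challenge_bit * s for m, s in zip(mask, secret)]
    # Reject (abort) unless every entry lies in the "safe" range [-delta*B, delta*B],
    # so the distribution of accepted responses is independent of the secret.
    if all(-delta * B <= r <= delta * B for r in response):
        return response
    return None

# Example: retry signing attempts until one is not aborted
secret = [1, -2, 0, 3, -1]
attempt = None
while attempt is None:
    attempt = sample_masked_response(secret, B=5, delta=10, challenge_bit=1)
print(attempt)
```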