• Title/Summary/Keyword: input size scalable

Search Results: 10

A Scalable Parallel Labeling Algorithm on Mesh Connected SIMD Computers (메쉬 구조형 SIMD 컴퓨터 상에서 신축적인 병렬 레이블링 알고리즘)

  • 박은진;이갑섭;성효경;최흥문
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.731-734
    • /
    • 1998
  • A scalable parallel algorithm is proposed for efficient image component labeling with local operators on a mesh-connected SIMD computer. In contrast to conventional parallel labeling algorithms, where a single pixel is assigned to each PE, the algorithm presented here is scalable and can assign an m$\times$m pixel set to each PE according to the input image size. The assigned pixel set is converted to a single pixel with a representative value, so the required memory and processing time can be greatly reduced. For an N$\times$N image, if an m$\times$m pixel set is assigned to each PE of a P$\times$P mesh, where P = N/m, the communication time complexity per PE and the computation complexity are reduced to O(P log P) and O(P) bit operations, respectively, each 1/m of that of the conventional method. This method also reduces the memory in each PE to O(P), and decreases the number of PEs to O(P^2) = Θ(N^2/m^2), compared to O(N^2) for the conventional method. Because the proposed parallel labeling algorithm is scalable, it can accommodate increases in image size without hardware changes to the given mesh-connected SIMD computer. A sketch of this block-assignment idea appears after this entry.

  • PDF
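
As a rough illustration of the block-assignment idea above (not the authors' mesh-SIMD implementation), the sketch below reduces each m×m block of an N×N binary image to a single representative pixel and then labels the coarse P×P grid sequentially; the "any foreground pixel" reduction rule, the BFS labeling, and all function names are assumptions.

```python
import numpy as np
from collections import deque

def reduce_blocks(image, m):
    """Reduce an N x N binary image to a P x P grid (P = N // m) by giving
    each m x m block a single representative value. The representative is
    'any foreground pixel in the block', an assumed rule for illustration."""
    n = image.shape[0]
    p = n // m
    blocks = image[:p * m, :p * m].reshape(p, m, p, m)
    return blocks.any(axis=(1, 3)).astype(np.uint8)

def label_4connected(grid):
    """Sequential stand-in for the per-PE parallel labeling: BFS over the
    reduced P x P grid with 4-connectivity."""
    p = grid.shape[0]
    labels = np.zeros_like(grid, dtype=np.int32)
    current = 0
    for i in range(p):
        for j in range(p):
            if grid[i, j] and not labels[i, j]:
                current += 1
                labels[i, j] = current
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < p and 0 <= nx < p and grid[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels

# Example: a 512 x 512 image reduced with m = 8 gives a 64 x 64 grid, so
# per-PE memory and work shrink roughly by the factor described in the abstract.
image = (np.random.rand(512, 512) > 0.95).astype(np.uint8)
coarse = reduce_blocks(image, m=8)
labels = label_4connected(coarse)
```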

A Deep Learning-based Automatic Modulation Classification Method on SDR Platforms (SDR 플랫폼을 위한 딥러닝 기반의 무선 자동 변조 분류 기술 연구)

  • Jang, Jung-Ik;Choi, Jaehyuk;Yoon, Young-Il
    • Journal of IKEEE
    • /
    • v.26 no.4
    • /
    • pp.568-576
    • /
    • 2022
  • Automatic modulation classification (AMC) is a core technique for Software Defined Radio (SDR) platforms that enables smart and flexible spectrum sensing and access over a wide frequency band. In this study, we propose a simple yet accurate deep learning-based method that allows AMC for variable-size radio signals. To this end, we design a classification architecture consisting of two Convolutional Neural Network (CNN)-based models, namely main and small models, which are trained on radio signal datasets with two different signal sizes, respectively. Then, for a received signal input of arbitrary length, modulation classification is performed by augmenting the input samples with a self-replicating padding technique to fit the input layer size of our model. Experiments using the RadioML 2018.01A dataset demonstrate that the proposed method provides higher accuracy than existing methods across all signal-to-noise ratio (SNR) regimes, with less computational overhead.
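
A minimal sketch of the self-replicating padding idea described above, assuming it means tiling the received I/Q samples until they fill the CNN's fixed input length; the function name, the (2, L) I/Q layout, and the exact padding rule are assumptions rather than the paper's code.

```python
import numpy as np

def self_replicating_pad(iq_samples, target_len):
    """Repeat the received signal along the time axis until it reaches the
    model's fixed input length, then truncate to target_len.
    iq_samples: array of shape (2, L) holding the I and Q channels."""
    length = iq_samples.shape[1]
    if length >= target_len:
        return iq_samples[:, :target_len]
    repeats = int(np.ceil(target_len / length))
    tiled = np.tile(iq_samples, (1, repeats))
    return tiled[:, :target_len]

# Example: a 600-sample capture padded to an assumed 1024-sample input layer.
rx = np.random.randn(2, 600)
padded = self_replicating_pad(rx, target_len=1024)
assert padded.shape == (2, 1024)
```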

An Integrated Approach of CNT Front-end Amplifier towards Spikes Monitoring for Neuro-prosthetic Diagnosis

  • Kumar, Sandeep;Kim, Byeong-Soo;Song, Hanjung
    • BioChip Journal
    • /
    • v.12 no.4
    • /
    • pp.332-339
    • /
    • 2018
  • Future neuro-prosthetic devices will require spike-data monitoring through sub-nanoscale transistors, enabling neuroscientists and clinicians to build scalable, wireless, and implantable applications. This research investigates spike monitoring through an integrated CNT front-end amplifier for neuro-prosthetic diagnosis. The proposed carbon nanotube-based architecture consists of a front-end amplifier (FEA), an integrate-and-fire neuron, and a pseudo-resistor technique, and achieves high electrical performance in observing neural activity. The pseudo-resistor technique ensures a large input impedance for the integrated FEA by compensating the input leakage current, while the carbon nanotube-based FEA provides low-voltage operation, which directly reduces power consumption, and a small detector size that preserves the fidelity of the neural signals. Using the proposed architecture, the observed neural activity shows spiking amplitudes of up to $80{\mu}V$ for action potentials and up to 40 mV for local field potentials. The fully integrated architecture is implemented in Cadence Virtuoso (analog) using a CNT process design kit. The fabricated chip consumes only $2{\mu}W$ under a supply voltage of 0.7 V. The experimental and simulated results show that the integrated FEA achieves $60G{\Omega}$ of input impedance and an input-referred noise of $8.5nV/{\sqrt{Hz}}$ over a wide bandwidth. Moreover, the measured mid-band gain of the amplifier reaches 75 dB over the range of 1 kHz to 35 kHz. The proposed research provides neural recording data through a nanotube integrated circuit, which could benefit the next generation of neuroscientists.

Review: Magnetite Synthesis using NanoFermentation (Review: NanoFermentation을 이용한 자철석 합성연구)

  • Moon, Ji-Won;Roh, Yul;Phelps, Tommy J.
    • Economic and Environmental Geology
    • /
    • v.45 no.2
    • /
    • pp.195-204
    • /
    • 2012
  • Biomineralization has been explored in the context of geochemical cycles and microbial tolerance mechanisms to metal toxicity. Here, we introduce NanoFermentation, which enables economical, environmentally friendly, low-energy, and scalable manufacturing of nano-dimensioned magnetite. We also focus on the factors controlling crystallite size, which determines whether the material is superparamagnetic or ferrimagnetic. These factors include microbial species, temperature, incubation time, medium composition, substituted elements and their concentrations, precursor type, reaction volume, precursor concentration density, and combinations thereof. The crystallite size distribution of biomagnetite depends on the balance between nuclei generation and crystal growth. Biomineralization will help elucidate elemental cycles in the Earth's crust and microbial ecology, and it can be applied to materials science and devices through mass production of nanomaterials.

BRAIN: A bivariate data-driven approach to damage detection in multi-scale wireless sensor networks

  • Kijewski-Correa, T.;Su, S.
    • Smart Structures and Systems
    • /
    • v.5 no.4
    • /
    • pp.415-426
    • /
    • 2009
  • This study focuses on the concept of multi-scale wireless sensor networks for damage detection in civil infrastructure systems by first overviewing the general network philosophy and attributes in the areas of data acquisition, data reduction, assessment, and decision making. The data acquisition aspect includes a scalable wireless sensor network acquiring acceleration and strain data, triggered using a Restricted Input Network Activation Scheme (RINAS) that extends network lifetime and reduces the size of the requisite undamaged reference pool. Major emphasis is given in this study to the data reduction and assessment aspects that enable a decentralized approach operating within the hardware and power constraints of wireless sensor networks, avoiding issues associated with packet loss, synchronization, and latency. After reviewing various models for data reduction, the concept of a data-driven Bivariate Regressive Adaptive INdex (BRAIN) for damage detection is presented. Subsequent examples using experimental and simulated data verify two major hypotheses related to the BRAIN concept: (i) data-driven damage metrics are more robust and reliable than their counterparts, and (ii) the use of heterogeneous sensing enhances the overall detection capability of such data-driven damage metrics.
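
The abstract does not give the BRAIN formulation, so the sketch below only illustrates the general idea of a data-driven bivariate damage metric: one sensor channel is regressed on another over an undamaged reference pool, and the index reports how much the regression residuals grow on new data. All names, the linear model, and the ratio-style index are assumptions, not the paper's method.

```python
import numpy as np

def bivariate_damage_index(x_ref, y_ref, x_new, y_new):
    """Illustrative data-driven bivariate damage metric (not the BRAIN
    formulation): fit y = a*x + b on undamaged reference data, then report
    how much the residual spread grows on newly acquired data."""
    a, b = np.polyfit(x_ref, y_ref, 1)          # regression on the reference pool
    ref_resid = y_ref - (a * x_ref + b)
    new_resid = y_new - (a * x_new + b)
    return np.std(new_resid) / (np.std(ref_resid) + 1e-12)

# Example with synthetic heterogeneous channels (x ~ acceleration, y ~ strain);
# a change in the underlying relationship pushes the index above 1.
rng = np.random.default_rng(0)
x_ref = rng.standard_normal(1000)
y_ref = 2.0 * x_ref + 0.1 * rng.standard_normal(1000)
x_new = rng.standard_normal(1000)
y_new = 2.3 * x_new + 0.1 * rng.standard_normal(1000)  # simulated stiffness change
print(bivariate_damage_index(x_ref, y_ref, x_new, y_new))
```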

Interpolation based Single-path Sub-pixel Convolution for Super-Resolution Multi-Scale Networks

  • Alao, Honnang;Kim, Jin-Sung;Kim, Tae Sung;Oh, Juhyen;Lee, Kyujoong
    • Journal of Multimedia Information System
    • /
    • v.8 no.4
    • /
    • pp.203-210
    • /
    • 2021
  • Deep learning convolutional neural networks (CNN) have been successfully applied to image super-resolution (SR). Despite their strong performance, SR techniques tend to focus on a single upscale factor when training a particular model. Single-model multi-scale networks can easily be constructed if images are upscaled prior to input, but sub-pixel convolution upsampling works differently for each scale factor. Recent SR methods employ multi-scale and multi-path learning as a solution. However, this causes unshared parameters and an unbalanced parameter distribution across the scale factors. We present a multi-scale single-path upsample module as a solution by exploiting the advantages of sub-pixel convolution and interpolation algorithms. The proposed model employs sub-pixel convolution for the highest scale factor among the learned upscale factors, and then utilizes 1-dimensional interpolation, compressing the learned features along the channel axis to match the desired output image size. Experiments are performed on the single-path upsample module and compared to the multi-path upsample module. Based on the experimental results, the proposed algorithm reduces the upsample module's parameters by 24% and achieves comparable or slightly better performance than the previous algorithm.
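
A minimal PyTorch-style sketch of the single-path upsample idea described above: one sub-pixel convolution sized for the largest learned scale factor, with 1-dimensional interpolation along the channel axis compressing the features when a smaller scale is requested. The module name, channel counts, and the linear interpolation mode are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SinglePathUpsample(nn.Module):
    """Illustrative single-path multi-scale upsampler: one sub-pixel
    convolution for the largest scale, channel-axis interpolation otherwise."""

    def __init__(self, in_channels, out_channels=3, max_scale=4):
        super().__init__()
        self.out_channels = out_channels
        # Convolution sized for the largest supported scale factor.
        self.conv = nn.Conv2d(in_channels, out_channels * max_scale ** 2, 3, padding=1)

    def forward(self, x, scale):
        feat = self.conv(x)                       # (B, C_out * max_scale^2, H, W)
        needed = self.out_channels * scale ** 2
        if needed != feat.shape[1]:
            # 1-D interpolation along the channel axis to compress the features.
            b, c, h, w = feat.shape
            flat = feat.permute(0, 2, 3, 1).reshape(b * h * w, 1, c)
            flat = F.interpolate(flat, size=needed, mode="linear", align_corners=False)
            feat = flat.reshape(b, h, w, needed).permute(0, 3, 1, 2)
        return F.pixel_shuffle(feat, scale)       # (B, C_out, H*scale, W*scale)

# Example: the same module serves x2 and x4 super-resolution.
up = SinglePathUpsample(in_channels=64, max_scale=4)
lr_feat = torch.randn(1, 64, 32, 32)
print(up(lr_feat, scale=4).shape)  # torch.Size([1, 3, 128, 128])
print(up(lr_feat, scale=2).shape)  # torch.Size([1, 3, 64, 64])
```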

Performance Analysis of Super-Resolution based Video Coding for HEVC (HEVC 기반 초해상화를 이용한 비디오 부호화 효율 성능 분석)

  • Ki, Sehwan;Kim, Dae-Eun;Jun, Ki Nam;Baek, Seung Ho;Choi, Jeung Won;Kim, Dong Hyun;Kim, Munchurl
    • Journal of Broadcast Engineering
    • /
    • v.24 no.2
    • /
    • pp.306-314
    • /
    • 2019
  • Since video resolutions are increasing rapidly, there is a continuing need for effective video compression methods despite increases in transmission bandwidth. To satisfy this demand, a reconstructive video coding (RVC) method using super-resolution has been proposed. Since RVC reduces the resolution of the input video, when frames are compressed to the same size the number of bits per pixel increases, thereby reducing the coding artifacts caused by video coding. However, the RVC method using super-resolution is not effective at all target bitrates. Comparing the loss introduced by downsizing the resolution with the loss caused by video compression, RVC can deliver improved compression performance over direct video coding only when the compression loss is the larger of the two. In particular, since HEVC has considerably higher compression performance than previous standard video codecs, it can be experimentally confirmed that the compression distortions exceed the distortions of downsizing the resolution only under very low-bitrate conditions. In this paper, we applied HEVC-based RVC to various video types and measured the target bitrates at which the RVC method can be applied effectively.
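
A minimal sketch of the comparison underlying RVC as described above: code the frame directly, or downscale, code, and super-resolve it, then keep whichever reconstruction is closer to the original. The PSNR helper and the toy quantization/resampling stand-ins for HEVC and the super-resolution network are assumptions for illustration only.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two frames."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def choose_rvc(frame, encode_decode, downscale, upscale):
    """Decide between direct coding and reconstructive video coding (RVC) for
    one frame. encode_decode, downscale and upscale are caller-supplied
    stand-ins for the HEVC codec, the downsampler and the SR model."""
    direct = encode_decode(frame)                   # full-resolution coding
    rvc = upscale(encode_decode(downscale(frame)))  # code at low resolution, then SR
    mode = "rvc" if psnr(frame, rvc) > psnr(frame, direct) else "direct"
    return mode, psnr(frame, direct), psnr(frame, rvc)

# Toy stand-ins: coarse quantization mimics low-bitrate coding loss,
# 2x2 averaging / pixel repetition mimic downscaling and (weak) upscaling.
frame = np.random.randint(0, 256, (64, 64)).astype(np.float64)
q = 48.0
encode_decode = lambda f: np.round(f / q) * q
downscale = lambda f: f.reshape(32, 2, 32, 2).mean(axis=(1, 3))
upscale = lambda f: np.repeat(np.repeat(f, 2, axis=0), 2, axis=1)
print(choose_rvc(frame, encode_decode, downscale, upscale))
```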

Exploiting Patterns for Handling Incomplete Coevolving EEG Time Series

  • Thi, Ngoc Anh Nguyen;Yang, Hyung-Jeong;Kim, Sun-Hee
    • International Journal of Contents
    • /
    • v.9 no.4
    • /
    • pp.1-10
    • /
    • 2013
  • The electroencephalogram (EEG) time series is a measure of electrical activity received from multiple electrodes placed on the scalp of a human brain. It provides a direct measurement for characterizing the dynamic aspects of brain activity. These EEG signals are formed from a series of spatial and temporal data with multiple dimensions. Missing data can occur due to faulty electrodes; such missing data can cause distortion and repudiation and further reduce the effectiveness of analysis algorithms. Current methodologies for EEG analysis require a complete EEG data matrix as input. Therefore, an accurate and reliable imputation approach for missing values is necessary to avoid incomplete data sets and to improve the usefulness of analysis techniques. This research proposes a new method to automatically recover random consecutive missing data from real-world EEG data based on a Linear Dynamical System. The proposed method aims to capture the optimal patterns based on two main characteristics of the coevolving EEG time series: (i) dynamics, via discovering temporal evolving behaviors, and (ii) correlations, by identifying the relationships between multiple brain signals. By exploiting these, the proposed method identifies a few hidden variables and discovers their dynamics to impute missing values. The proposed method offers a robust and scalable approach with computation time linear in the length of the sequences. A comparative study was performed to assess the effectiveness of the proposed method against interpolation and Missing values via Singular Value Decomposition (MSVD). The experimental simulations demonstrate that the proposed method provides better reconstruction performance, with up to 49% and 67% improvements over the MSVD and interpolation approaches, respectively.
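
A minimal sketch of imputing consecutive missing samples with a linear dynamical system, here a fixed constant-velocity Kalman filter applied to a single channel; the paper instead learns the LDS parameters and hidden variables from the coevolving multichannel EEG, so the model and noise settings below are assumed simplifications.

```python
import numpy as np

def kalman_impute(series, q=1e-3, r=1e-2):
    """Fill NaN gaps in a 1-D signal with a constant-velocity Kalman filter.
    State = [level, slope]; when a sample is missing, the filter's prediction
    is used as the imputed value. q and r are assumed, not learned."""
    A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])               # observation matrix
    Q = q * np.eye(2)                         # process noise covariance
    R = np.array([[r]])                       # observation noise covariance
    x = np.array([[series[~np.isnan(series)][0]], [0.0]])
    P = np.eye(2)
    out = series.copy()
    for t, z in enumerate(series):
        x = A @ x                              # predict
        P = A @ P @ A.T + Q
        if np.isnan(z):
            out[t] = x[0, 0]                   # impute with the prediction
        else:
            y = np.array([[z]]) - H @ x        # update with the observation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
    return out

# Example: recover a short consecutive gap in a synthetic EEG-like signal.
t = np.arange(500)
eeg = np.sin(2 * np.pi * 10 * t / 250.0)
eeg_missing = eeg.copy()
eeg_missing[200:210] = np.nan
recovered = kalman_impute(eeg_missing)
```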

A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE) (실시간 소프트웨어의 조절적·단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3159-3174
    • /
    • 1997
  • This paper reports research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called the Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to size and complexity, it is commonly very hard to understand such software during the reengineering process. This research, however, facilitates scalable re/reverse-engineering of real-time software based on the architecture of the software, seen from three perspectives: structural, functional, and behavioral views. First, the structural view reveals the overall architecture, the specification (outline), and the algorithm (detail) views of the software, based on hierarchically organized parent-child relationships. The basic building block of the architecture is a Software Unit (SWU), generated by user-defined criteria. The architecture facilitates navigation of the software in a top-down or bottom-up manner. It captures the specification and algorithm views at different levels of abstraction, and it also shows the functional and behavioral information at these levels. Second, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc. Each feature of the view captures a different kind of functionality of the software. Third, the behavioral view includes state diagrams, interleaved event lists, etc. This view shows the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability to abstract and expand this dimensional information within the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software, and they allow engineers to extract reusable components from the software during the reengineering process. A sketch of the SWU hierarchy appears after this entry.

  • PDF
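
A minimal sketch of a Software Unit (SWU) hierarchy in the spirit of the entry above: a parent-child tree whose nodes carry structural, functional, and behavioral views and can be navigated top-down or bottom-up. Field names and the traversal helpers are assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SWU:
    """Illustrative Software Unit node for an ARSU-style architecture tree."""
    name: str
    specification: str = ""                                      # structural view: outline
    algorithm: str = ""                                          # structural view: detail
    functional: Dict[str, list] = field(default_factory=dict)    # data/control flow, def/use, ...
    behavioral: Dict[str, list] = field(default_factory=dict)    # state diagrams, event lists, ...
    parent: Optional["SWU"] = None
    children: List["SWU"] = field(default_factory=list)

    def add_child(self, child: "SWU") -> "SWU":
        child.parent = self
        self.children.append(child)
        return child

    def top_down(self):
        """Navigate the architecture from this unit downward."""
        yield self
        for c in self.children:
            yield from c.top_down()

    def bottom_up(self):
        """Navigate from this unit back up to the architecture root."""
        node = self
        while node is not None:
            yield node
            node = node.parent

# Example: a tiny two-level architecture.
root = SWU("scheduler")
task = root.add_child(SWU("dispatch_task", specification="selects the next ready task"))
print([u.name for u in task.bottom_up()])   # ['dispatch_task', 'scheduler']
```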

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.73-92
    • /
    • 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique is proposed to solve the fundamental problems of collaborative filtering, such as the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations based on the predicted preference of the user for a particular item, using a similar-item subset and a similar-user subset composed from the preferences of users for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly, and composing the similar-item and similar-user subsets becomes more difficult. In addition, as the scale of the service increases, the time needed to create these subsets increases geometrically, and the response time of the recommendation system grows accordingly. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and the users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. With this method, the run-time for creating a similar-item or similar-user subset can be reduced, the reliability of the recommendation system can be made higher than when only the user preference information is used to create those subsets, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preference between each item cluster and user cluster. In this phase, a list of items is made for each user by examining the item clusters in order of the size of the inter-cluster preference of the user cluster to which the user belongs, and selecting and ranking the items according to the predicted or recorded user preference information. With this method, the model-creation phase bears the highest load of the recommendation system, which minimizes the load at run-time; the scalability problem is therefore addressed, and a large-scale recommendation system can be operated with highly reliable collaborative filtering. Third, missing user preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based prediction or user-based prediction; in this paper, Hao Ji's idea, which uses both item-based and user-based prediction, is improved. The reliability of the recommendation service can be improved by combining the predicted values of both techniques according to the condition of the recommendation model. By predicting the user preference based on the item or user clusters, the time required to predict the user preference can be reduced, and missing user preferences can be predicted at run-time. Fourth, the item and user feature vectors are updated to learn from subsequent user feedback, applying normalized user feedback to the item and user feature vectors. This mitigates the problems caused by adopting concepts from context-based filtering, such as item and user feature vectors based on the user profile and item properties; those problems stem from the difficulty of quantifying the qualitative features of items and users. Therefore, the elements of the user and item feature vectors are made to match one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. Verification of this method was accomplished by comparing its performance with existing hybrid filtering techniques, using two measures: MAE (Mean Absolute Error) and response time. Using MAE, the technique was confirmed to improve the reliability of the recommendation system; using the response time, it was found to be suitable for a large-scale recommendation system. This paper suggested an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, but it has some limitations: because the technique focused on reducing time complexity, an improvement in reliability was not expected. The next step will be to improve this technique with rule-based filtering.
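
A minimal sketch of the clustering-based model and recommendation steps described above: users and items are clustered on their feature vectors, an inter-cluster preference matrix is estimated from known ratings, and a user's recommendations come from item clusters ranked by that preference. Cluster counts, the mean-rating aggregation, and function names are assumptions; the combined item/user prediction and the feedback update are not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_model(ratings, user_feats, item_feats, n_user_clusters=4, n_item_clusters=4):
    """Cluster users and items on their feature vectors, then estimate an
    inter-cluster preference matrix as the mean of the known ratings between
    each pair of clusters. ratings: (n_users, n_items) with np.nan for unknown."""
    u_labels = KMeans(n_clusters=n_user_clusters, n_init=10, random_state=0).fit_predict(user_feats)
    i_labels = KMeans(n_clusters=n_item_clusters, n_init=10, random_state=0).fit_predict(item_feats)
    pref = np.zeros((n_user_clusters, n_item_clusters))
    for uc in range(n_user_clusters):
        for ic in range(n_item_clusters):
            block = ratings[np.ix_(u_labels == uc, i_labels == ic)]
            known = block[~np.isnan(block)]
            pref[uc, ic] = known.mean() if known.size else 0.0
    return u_labels, i_labels, pref

def recommend(user, ratings, u_labels, i_labels, pref, top_k=5):
    """Rank a user's unrated items by the inter-cluster preference of their
    item cluster for the user's cluster."""
    scores = pref[u_labels[user], i_labels]          # one score per item
    candidates = np.where(np.isnan(ratings[user]))[0]
    return candidates[np.argsort(-scores[candidates])][:top_k]

# Example with random feature vectors and a sparse rating matrix.
rng = np.random.default_rng(1)
ratings = np.where(rng.random((50, 40)) < 0.1, rng.integers(1, 6, (50, 40)), np.nan)
u_feats, i_feats = rng.random((50, 8)), rng.random((40, 8))
model = build_model(ratings, u_feats, i_feats)
print(recommend(0, ratings, *model))
```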