• Title/Summary/Keyword: Popularity Bias

Implications for Memory Reference Analysis and System Design to Execute AI Workloads in Personal Mobile Environments (개인용 모바일 환경의 AI 워크로드 수행을 위한 메모리 참조 분석 및 시스템 설계 방안)

  • Seokmin Kwon;Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.31-36
    • /
    • 2024
  • Recently, mobile apps that utilize AI technologies have been increasing. In personal mobile environments, performance degradation may occur during the training phase of large AI workloads due to limited memory capacity. In this paper, we extract memory reference traces of AI workloads and analyze their characteristics. From this analysis, we observe that AI workloads can cause frequent storage accesses due to weak temporal locality and irregular popularity bias in memory write operations, which can degrade the performance of mobile devices. Based on this observation, we discuss how to efficiently manage the memory write operations of AI workloads using persistent-memory-based swap devices. Through simulation experiments, we show that the system architecture proposed in this paper can improve the I/O time of mobile systems by more than 80%.
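
The weak temporal locality and popularity bias the abstract describes can be quantified from a write trace using reuse distances and write-count skew. A minimal sketch, assuming a simple page-number trace as input (the trace, metric names, and 10% threshold are illustrative, not the authors' tooling):

```python
from collections import Counter

def analyze_write_trace(trace):
    """Summarize temporal locality and popularity bias of a page-write trace.
    `trace` is a list of page numbers in access order (a hypothetical input;
    the paper extracts real traces from AI workloads)."""
    last_seen, reuse_distances = {}, []
    for i, page in enumerate(trace):
        if page in last_seen:                      # re-reference: record gap
            reuse_distances.append(i - last_seen[page])
        last_seen[page] = i

    counts = Counter(trace)
    hottest = sorted(counts.values(), reverse=True)[: max(1, len(counts) // 10)]
    return {
        # Large mean reuse distance => weak temporal locality.
        "mean_reuse_distance": (sum(reuse_distances) / len(reuse_distances))
                               if reuse_distances else float("inf"),
        # Share of writes hitting the hottest 10% of pages => popularity skew.
        "top_10pct_write_share": sum(hottest) / len(trace),
    }

# Toy trace: writes scattered over many pages with little reuse.
print(analyze_write_trace([3, 7, 1, 9, 3, 12, 7, 15, 2, 3]))
```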

Development of 14"×8.5" active matrix flat-panel digital x-ray detector system and imaging performance (평판 디지털 X-ray 검출기의 개발과 성능 평가에 관한 연구)

  • Park, Ji-Koon;Choi, Jang-Yong;Kang, Sang-Sik;Lee, Dong-Gil;Seok, Dae-Woo;Nam, Sang Hee
    • Journal of radiological science and technology
    • /
    • v.26 no.4
    • /
    • pp.39-46
    • /
    • 2003
  • Digital radiographic systems based on solid-state detectors, commonly referred to as flat-panel detectors, are gaining popularity in clinical practice, and large-area flat-panel solid-state detectors are being investigated for digital radiography. The purpose of this work was to evaluate an active-matrix flat-panel digital x-ray detector in terms of its modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). In this paper, the development and evaluation of a selenium-based flat-panel digital x-ray detector are described. The prototype detector has a pixel pitch of 139 μm and a total active imaging area of 14×8.5 inch², giving a total of 3.9 million pixels. The detector includes an x-ray imaging layer of amorphous selenium as a photoconductor, vacuum-evaporated onto a TFT flat panel, to generate signals in proportion to the incident x-rays. The film thickness was about 500 μm. To evaluate the imaging performance of the digital radiography (DR) system developed in our group, the sensitivity, linearity, modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) of the detector were measured. The measured sensitivity was 4.16×10⁶ ehp/pixel·mR at a bias field of 10 V/μm with a beam energy of 41.9 keV. The measured MTF at 2.5 lp/mm was 52%, and the DQE at 1.5 lp/mm was 75%. Excellent linearity was also shown, with a coefficient of determination (r²) of 0.9693.

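For context, the MTF and DQE of such detectors are conventionally derived from a measured line spread function and noise power spectrum. A minimal sketch of that standard computation, with a toy Gaussian LSF and flat NPS standing in for measured data (not the authors' measurement pipeline; only the 139 μm pitch is taken from the abstract):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm=0.139):
    """Presampled MTF as the normalized FFT magnitude of a line spread
    function sampled at the detector pixel pitch (0.139 mm per the
    abstract's 139 um pixels). Returns (frequencies in lp/mm, MTF)."""
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                  # normalize to 1 at f = 0
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)
    return freqs, mtf

def dqe(mtf, nps, signal, fluence_q):
    """Standard relation DQE(f) = signal^2 * MTF(f)^2 / (q * NPS(f))."""
    return (signal ** 2) * (mtf ** 2) / (fluence_q * nps)

# Toy Gaussian LSF (sigma = 1 pixel) standing in for measured edge data.
x = np.arange(-32, 32)
lsf = np.exp(-(x ** 2) / 2.0)
freqs, mtf = mtf_from_lsf(lsf)
print(f"MTF at 2.5 lp/mm: {np.interp(2.5, freqs, mtf):.2f}")

# Flat toy NPS and unit signal/fluence, just to exercise the DQE relation.
print(dqe(mtf, nps=np.full_like(mtf, 0.5), signal=1.0, fluence_q=2.0)[:3])
```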

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have produced prominent applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train. This, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields, so all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state in the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers; they're propagated backward through time. If the network runs for a long time, that can make the gradient extremely unstable and hard to learn from. Fortunately, it is possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
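
A minimal numpy sketch of backpropagation with gradient descent as the abstract describes it, on a deep sigmoid network; the printed per-layer gradient magnitudes also make the vanishing-gradient point visible, typically shrinking toward the earlier layers. Layer sizes, learning rate, and initialization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One gradient-descent step of backpropagation for a deep sigmoid MLP.
sizes = [8, 8, 8, 8, 8, 1]                 # arbitrary illustrative depth
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(1, 8))
target = np.array([[1.0]])

# Forward pass, keeping activations for the backward pass.
activations = [x]
for W in weights:
    activations.append(sigmoid(activations[-1] @ W))

# Backward pass: delta is dError/d(pre-activation) at each layer,
# for squared error with a sigmoid output unit.
delta = (activations[-1] - target) * activations[-1] * (1 - activations[-1])
lr = 0.5
for i in reversed(range(len(weights))):
    grad = activations[i].T @ delta        # gradient w.r.t. weights[i]
    print(f"layer {i}: mean |dE/dW| = {np.abs(grad).mean():.2e}")
    if i > 0:
        # Propagate the error signal through W_i and the sigmoid derivative.
        delta = (delta @ weights[i].T) * activations[i] * (1 - activations[i])
    weights[i] -= lr * grad                # gradient-descent update
```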

Conditional Generative Adversarial Network based Collaborative Filtering Recommendation System (Conditional Generative Adversarial Network(CGAN) 기반 협업 필터링 추천 시스템)

  • Kang, Soyi;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.157-173
    • /
    • 2021
  • With the development of information technology, the amount of available information increases daily. However, having access to so much information makes it difficult for users to easily find what they seek. Users want a system that reduces information retrieval and learning time, saving them from personally reading and judging all available information. As a result, recommendation systems are an increasingly important technology, essential to business. Collaborative filtering is used in various fields with excellent performance because recommendations are made based on similar users' interests and preferences. However, limitations do exist. Sparsity, which occurs when user-item preference information is insufficient, is the main limitation of collaborative filtering. The rating values in the user-item matrix may be distorted depending on the popularity of a product, and there may be new users who have not yet provided any ratings. This lack of historical data for identifying consumer preferences is referred to as data sparsity, and various methods have been studied to address it. However, most attempts to solve the sparsity problem are not optimal because they can only be applied when additional data, such as users' personal information, social networks, or item characteristics, are available. Another problem is that real-world rating data are mostly biased toward high scores, resulting in severe imbalance. One cause of this imbalanced distribution is purchasing bias: users who rate products highly are the ones who purchase them, so users with low ratings are less likely to purchase products and thus do not leave negative reviews. Due to these characteristics, reviews by users who purchase products are more likely to be positive than most users' actual preferences would suggest. Therefore, because of these biased characteristics, models over-learn the high-incidence classes in actual rating data, distorting the results. Applying collaborative filtering to such imbalanced data leads to poor recommendation performance due to excessive learning of the biased classes. Traditional oversampling techniques for addressing this problem are likely to cause overfitting because they repeat the same data, which acts as noise during learning and reduces recommendation performance. In addition, most existing pre-processing methods for data imbalance are designed and used for binary classes. Binary-class imbalance techniques are difficult to apply to multi-class problems because they cannot model multi-class phenomena such as objects at cross-class boundaries or objects overlapping multiple classes. To address this, research has been conducted on converting multi-class problems into binary-class problems. However, such simplification can cause classification errors when the results of classifiers learned on different sub-problems are combined, resulting in a loss of important information about relationships beyond the selected items. Therefore, more effective methods are needed to address multi-class imbalance problems. We propose a collaborative filtering model that uses a CGAN to generate realistic virtual data to populate the empty user-item matrix. A conditional vector y identifies the distributions of minority classes so that generated data reflect their characteristics. Collaborative filtering then maximizes the performance of the recommendation system via hyperparameter tuning. This process improves the accuracy of the model by addressing the sparsity problem of collaborative filtering implementations while mitigating the data imbalance found in real data. Our model shows superior recommendation performance over existing oversampling techniques on real-world data with data sparsity. SMOTE, Borderline-SMOTE, SVM-SMOTE, ADASYN, and GAN were used as comparative models, and our model achieves the best prediction accuracy on the RMSE and MAE evaluation metrics. This study suggests that deep-learning-based oversampling can further improve the performance of recommendation systems on real data and can be used to build business recommendation systems.
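
A minimal sketch of the conditional-generation idea behind such a model: a generator produces synthetic user rating rows conditioned on a one-hot class vector y, and a discriminator judges (row, y) pairs. The PyTorch layer sizes and dimensions here are hypothetical assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Illustrative dimensions: 100 items, 5 rating classes, 16-dim noise.
N_ITEMS, N_CLASSES, Z_DIM = 100, 5, 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, N_ITEMS), nn.Sigmoid(),  # ratings scaled to [0, 1]
        )
    def forward(self, z, y_onehot):
        # Condition generation on the class label by concatenating y.
        return self.net(torch.cat([z, y_onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ITEMS + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, 1),                      # real/fake logit
        )
    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))

G, D = Generator(), Discriminator()
z = torch.randn(8, Z_DIM)
y = torch.nn.functional.one_hot(
    torch.randint(0, N_CLASSES, (8,)), N_CLASSES).float()
fake_rows = G(z, y)           # synthetic rating rows for the conditioned class
print(D(fake_rows, y).shape)  # torch.Size([8, 1])
```

Rows sampled for minority classes would then fill the sparse user-item matrix before collaborative filtering is applied.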