• Title/Summary/Keyword: Large Size Data Processing


A Development of Optimal Design Model for Initial Blank Shape Using Artificial Neural Network in Rectangular Case Forming with Large Aspect Ratio (세장비가 큰 사각케이스 성형 공정에서의 인공신경망을 적용한 초기 블랭크 형상 최적설계 모델 개발)

  • Kwak, M.J.;Park, J.W.;Park, K.T.;Kang, B.S.
    • Transactions of Materials Processing / v.29 no.5 / pp.272-281 / 2020
  • As the thickness of mobile communication devices gets thinner, the size of their internal parts is also getting smaller. Among them, the battery case requires a high-level deep drawing technique because it has a rectangular shape with a large aspect ratio. In this study, the initial blank shape was optimized to minimize earing in a multi-stage deep drawing process using an artificial neural network (ANN). No previous work has reported applying artificial neural networks to optimal initial blank design for a rectangular case with a large aspect ratio. The training data for the ANN were obtained through simulation, and the model's reliability was verified by comparing it with a regression model using a random sample test and a goodness-of-fit test. Finally, the optimal design of the initial blank shape was performed with the verified ANN model.
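
The workflow in this abstract (train a neural surrogate on forming-simulation runs, then optimize the blank shape on the surrogate) can be sketched in a few lines of Python. The four-parameter blank description, the bounds, and the synthetic earing values below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

# Hypothetical parameterization: four radial control points describe the blank
# outline; "earing" is the scalar defect measure returned by forming simulations.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.8, 1.2, size=(200, 4))   # control points from assumed DOE runs
y_train = ((X_train - 1.0) ** 2).sum(axis=1)     # stand-in for simulated earing values

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
ann.fit(X_train, y_train)

# Optimize the blank shape on the cheap surrogate instead of re-running a
# forming simulation for every candidate shape.
result = minimize(lambda r: ann.predict(r.reshape(1, -1))[0],
                  x0=np.full(4, 1.1), bounds=[(0.8, 1.2)] * 4)
print("optimal control points:", result.x)
```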

A Fast Processing Algorithm for Lidar Data Compression Using Second Generation Wavelets

  • Pradhan B.;Sandeep K.;Mansor Shattri;Ramli Abdul Rahman;Mohamed Sharif Abdul Rashid B.
    • Korean Journal of Remote Sensing / v.22 no.1 / pp.49-61 / 2006
  • The lifting scheme has been found to be a flexible method for constructing scalar wavelets with desirable properties. In this paper, it is extended to LiDAR data compression. A newly developed data compression approach that approximates the LiDAR surface with a series of non-overlapping triangles is presented. A Triangulated Irregular Network (TIN) is the most common form of digital surface model; it consists of elevation values with x, y coordinates that make up triangles. Over the years, the TIN data representation has become an important research topic for many researchers due to its large data size. Compression of TIN is needed for efficient management of large data and good surface visualization. The approach covers the following steps: First, using a Delaunay triangulation, an efficient algorithm is developed to generate the TIN, which forms the terrain from an arbitrary set of data. A new interpolation wavelet filter for the TIN is then applied in two steps, namely splitting and elevation. In the splitting step, a triangle is divided into several sub-triangles, and the elevation step is used to 'modify' the point values (point coordinates for geometry) after the splitting. Then, this data set is compressed at the desired locations by using second generation wavelets. The quality of the geographical surface representation after applying the proposed technique is compared with the original LiDAR data. The results show that this method yields a significant reduction of the data set.
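
The split/predict/update idea behind second generation wavelets is easiest to see in one dimension. The sketch below runs one level of the standard CDF(2,2) lifting step on a 1-D signal; it is an assumed stand-in for the paper's triangle-based splitting and elevation steps, not the TIN filter itself.

```python
import numpy as np

def lifting_forward(signal):
    """One level of CDF(2,2) lifting: split into even/odd samples, predict
    each odd sample from its even neighbours, then update the evens so the
    coarse band keeps the signal's average. Assumes even length, periodic ends."""
    even = signal[0::2].astype(float)
    odd = signal[1::2].astype(float)
    detail = odd - 0.5 * (even + np.roll(even, -1))       # predict step
    coarse = even + 0.25 * (detail + np.roll(detail, 1))  # update step
    return coarse, detail

x = np.sin(np.linspace(0.0, 6.0, 64))
coarse, detail = lifting_forward(x)
# Compression: small detail coefficients can be thresholded to zero.
print(np.abs(detail).max())
```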

Causality join query processing for data stream by spatio-temporal sliding window (시공간 슬라이딩윈도우기법을 이용한 데이터스트림의 인과관계 결합질의처리방법)

  • Kwon, O-Je;Li, Ki-Joune
    • Spatial Information Research / v.16 no.2 / pp.219-236 / 2008
  • Data streams collected from sensors contain a large amount of useful information, including causality relationships. The causality join query for data streams retrieves a set of (cause, effect) pairs from streams of data. However, some causality pairs may be lost from the query result due to the delay from the sensors to the data stream management system and the limited size of the sliding windows. In this paper, we first investigate spatial, temporal, and spatio-temporal aspects of the causality join query for data streams. Second, we propose several strategies for sliding window management based on these observations. The accuracy of the proposed strategies is studied through intensive experiments, and the results show that they improve the accuracy of the causality join query over a simple FIFO strategy.

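A minimal version of the query described above can be sketched as a single-window join: buffer recent causes, expire those that leave the time window, and pair each arriving effect with the buffered causes that are spatially close. The event layout, thresholds, and names below are illustrative assumptions rather than the paper's window-management strategies.

```python
import math
from collections import deque

def causality_join(causes, effects, max_delay, max_dist):
    """Sliding-window sketch: pair each cause with every effect that follows it
    within max_delay time units and max_dist distance units.
    Events are (timestamp, x, y) tuples; all names here are illustrative."""
    events = sorted([(e, "cause") for e in causes] + [(e, "effect") for e in effects],
                    key=lambda item: item[0][0])
    window, pairs = deque(), []
    for event, kind in events:
        t = event[0]
        while window and t - window[0][0] > max_delay:   # expire causes outside the window
            window.popleft()
        if kind == "cause":
            window.append(event)
        else:
            pairs.extend((c, event) for c in window
                         if math.hypot(c[1] - event[1], c[2] - event[2]) <= max_dist)
    return pairs

causes = [(0.0, 1.0, 1.0), (5.0, 4.0, 4.0)]
effects = [(2.0, 1.5, 1.2), (30.0, 4.0, 4.1)]
print(causality_join(causes, effects, max_delay=10.0, max_dist=1.0))
```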

Design of high speed data processing controller for the full color LED display board system (풀칼라 LED 전광판용 고속 데이터처리 제어장치 설계)

  • Ha, Young-Jae;Jin, Byung-Yun;Kim, Sun-Hyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.2 / pp.462-468 / 2010
  • In this paper, a new, efficient drive control technology based on the conventional full-color LED display board image processing method is proposed. This technology can be applied to high-fidelity large-size panel TVs and LCD displays, making high-fidelity display drive control possible in them. The proposed drive control technology strengthens the image processing function of the conventional technology. Also, automatic or manual adjustment of contrast, brightness, tint, color, gamma correction, etc. helps achieve high-fidelity images. This technology can be adopted at a lower price.
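
The adjustments listed above are per-pixel operations. As a rough picture only (parameter names, ranges, and defaults are assumptions, not values from the paper), they can be chained like this:

```python
import numpy as np

def adjust_frame(frame, brightness=0.0, contrast=1.0, gamma=2.2):
    """Sketch of per-pixel corrections for an 8-bit frame: contrast scaling
    around mid-gray, a brightness offset, then gamma correction."""
    x = frame.astype(np.float64) / 255.0
    x = np.clip(contrast * (x - 0.5) + 0.5 + brightness, 0.0, 1.0)  # contrast/brightness
    x = x ** (1.0 / gamma)                                          # gamma correction
    return np.rint(x * 255.0).astype(np.uint8)

frame = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
print(adjust_frame(frame, brightness=0.05, contrast=1.1).shape)
```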

Access Control Mechanism for CouchDB

  • Al-otaibi, Ashwaq A.;Alotaibi, Reem M.;Hamza, Nermin
    • International Journal of Computer Science & Network Security / v.22 no.12 / pp.107-115 / 2022
  • Recently, big data applications have needed databases different from traditional relational databases. NoSQL databases are used to save and handle massive amounts of data. NoSQL databases have many advantages over traditional databases, such as flexibility, efficient data processing, scalability, and dynamic schemas. Most current applications are web-based, and data sizes keep increasing, so NoSQL databases are expected to be used on an even larger scale in the future. However, NoSQL suffers from many security issues, and one of them is access control. Many recent applications need fine-grained access control (FGAC). Integrating NoSQL databases with FGAC will increase their usability in various fields: it will offer customized data protection levels and enhance security in NoSQL databases. There are different NoSQL database models, and the document-based database is one of them. In this research, we choose the CouchDB NoSQL document database and develop an access control mechanism that works at a fine-grained level. The proposed mechanism uses CouchDB's role-based access control and restricts read access at the document level. The experiment shows that our mechanism effectively works at the document level in CouchDB with good execution time.
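
CouchDB's built-in _security object checks membership per database, so a document-level read restriction has to compare the requesting user's roles against something attached to each document. The sketch below is a conceptual gatekeeper in that spirit; the allowed_roles field and the in-memory db mapping are assumptions for illustration, not the paper's mechanism or a CouchDB built-in.

```python
def can_read(user_roles, doc):
    """Conceptual check: the user may read the document if their roles
    intersect the document's allowed_roles list (an assumed field name)."""
    return "_admin" in user_roles or bool(set(user_roles) & set(doc.get("allowed_roles", [])))

def read_document(db, user_roles, doc_id):
    """Gatekeeper for reads; db is any mapping of doc_id -> document,
    standing in here for calls to the CouchDB HTTP API."""
    doc = db[doc_id]
    if not can_read(user_roles, doc):
        raise PermissionError(f"read denied on document {doc_id}")
    return doc

db = {"patient-17": {"allowed_roles": ["doctor", "nurse"], "diagnosis": "..."}}
print(read_document(db, ["nurse"], "patient-17")["diagnosis"])
```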

Evaluation of Sentimental Texts Automatically Generated by a Generative Adversarial Network (생성적 적대 네트워크로 자동 생성한 감성 텍스트의 성능 평가)

  • Park, Cheon-Young;Choi, Yong-Seok;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering / v.8 no.6 / pp.257-264 / 2019
  • Recently, deep neural network based approaches have shown good performance in various fields of natural language processing. A huge amount of training data is essential for building a deep neural network model. However, collecting a large amount of training data is a costly and time-consuming job. Data augmentation is one solution to this problem. Augmenting text data is more difficult than augmenting image data because texts consist of tokens with discrete values. Generative adversarial networks (GANs) are widely used for image generation. In this work, we generate sentimental texts using CS-GAN, a GAN that has a classifier as well as a discriminator. We evaluate the usefulness of the generated sentimental texts according to various measurements. The CS-GAN model can not only generate more diverse texts but also improve the performance of its classifier.
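
A common way to quantify the diversity claim above is a distinct-n score over a sample of generated sentences. The paper's exact measurements are not reproduced here, so the metric below is an assumed, widely used choice rather than the authors' evaluation code.

```python
def distinct_n(sentences, n=2):
    """Distinct-n diversity: unique n-grams divided by total n-grams across a
    sample of generated sentences (higher means more diverse output)."""
    ngrams = [tuple(tokens[i:i + n])
              for tokens in (s.split() for s in sentences)
              for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

generated = ["the movie was great", "the movie was boring", "i loved this film"]
print(distinct_n(generated, 1), distinct_n(generated, 2))
```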

Symbolizing Numbers to Improve Neural Machine Translation (숫자 기호화를 통한 신경기계번역 성능 향상)

  • Kang, Cheongwoong;Ro, Youngheon;Kim, Jisu;Choi, Heeyoul
    • Journal of Digital Contents Society / v.19 no.6 / pp.1161-1167 / 2018
  • The development of machine learning has enabled machines to perform delicate tasks that only humans could do before, and thus many companies have introduced machine learning based translators. Existing translators perform well but have problems with number translation. They often mistranslate numbers when the input sentence includes a large number. Furthermore, the output sentence structure can change completely even if only one number in the input sentence changes. In this paper, we first optimized a neural machine translation model architecture that uses a bidirectional RNN, LSTM, and the attention mechanism through data cleansing and changes to the dictionary size. Then, we implemented a number-processing algorithm specialized in number translation and applied it to the neural machine translation model to solve the problems above. The paper includes the data cleansing method, an optimal dictionary size, and the number-processing algorithm, as well as experimental results for translation performance based on the BLEU score.
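
A typical way to realize the number-processing idea is to symbolize every number before translation and restore the originals afterwards. The sketch below shows that round trip; the regex and the <numK> placeholder format are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import re

NUMBER = re.compile(r"\d+(?:[,.]\d+)*")   # assumed pattern for integers/decimals

def symbolize(sentence):
    """Replace every number with an indexed placeholder and remember the
    originals so they survive translation untouched."""
    table = []
    def repl(match):
        table.append(match.group(0))
        return f"<num{len(table) - 1}>"
    return NUMBER.sub(repl, sentence), table

def desymbolize(sentence, table):
    """Restore the original numbers after the symbolized sentence is translated."""
    for i, number in enumerate(table):
        sentence = sentence.replace(f"<num{i}>", number)
    return sentence

src, table = symbolize("The firm shipped 1,250,000 units in 2017.")
print(src)                      # The firm shipped <num0> units in <num1>.
print(desymbolize(src, table))  # original numbers restored
```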

A New BISON-like Construction Block Cipher: DBISON

  • Zhao, Haixia;Wei, Yongzhuang;Liu, Zhenghong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.5 / pp.1611-1633 / 2022
  • At EUROCRYPT 2019, a new block cipher algorithm called BISON was proposed by Canteaut et al., which uses a novel structure named the Whitened Swap-Or-Not (WSN) construction. Unlike designs based on the traditional wide trail strategy, the differential and linear properties of this algorithm can be easily determined. However, the encryption speed of the BISON algorithm is quite low due to the large number of iterative rounds needed to ensure certain security margins. Denoting the data block length by n, this design requires 3n encryption rounds. Moreover, the block size n of BISON is always odd, which is not convenient for operations performed on a byte level. In order to overcome these issues, we propose a new block cipher, named DBISON, which more efficiently employs the ideas of double layers typical of the BISON-like construction. More precisely, DBISON divides the input into two parts of size n/2 bits and performs the round computations in parallel, which leads to an increased encryption speed. In particular, the data block length n of DBISON can be even, which gives certain additional implementation benefits over BISON. Furthermore, the resistance of DBISON against differential and linear attacks is also investigated. It is shown that the maximal differential probability (MDP) is 1/2^(n-1) for n encryption rounds and that the maximal linear probability (MLP) is strictly less than 1/2^(n-1) when (n/2+3) iterative encryption rounds are used. These estimates are very close to the ideal values when n is close to 256.
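
The swap-or-not round underlying WSN constructions is easy to state: each round either XORs a round key into the state or leaves it unchanged, with the decision made by a round function evaluated on the key-independent representative of the pair {x, x XOR k}. The toy below shows that basic structure on a tiny block; it is not DBISON itself, and the hash-based round function is a non-cryptographic stand-in.

```python
import hashlib

def round_bit(round_key, representative, n_bytes):
    """Toy pseudorandom round function: one bit from the round key and the
    pair representative. Illustrative only, not a cryptographic design."""
    digest = hashlib.sha256(round_key + representative.to_bytes(n_bytes, "big")).digest()
    return digest[0] & 1

def swap_or_not(x, keys, n):
    """Basic swap-or-not rounds on an n-bit block, the structure underlying
    WSN ciphers such as BISON; DBISON runs two n/2-bit halves in parallel."""
    n_bytes = (n + 7) // 8
    for k, rk in keys:
        partner = x ^ k
        representative = max(x, partner)   # identical for x and its partner
        if round_bit(rk, representative, n_bytes):
            x = partner                    # "swap"; otherwise "not"
    return x

keys = [(0b1011, b"rk0"), (0b0110, b"rk1"), (0b1101, b"rk2")]
c = swap_or_not(0b0101, keys, n=4)
# Each round is an involution, so decryption replays the rounds in reverse.
p = swap_or_not(c, list(reversed(keys)), n=4)
print(bin(c), bin(p))   # p == 0b101
```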

Analysis of big data using Rhipe (Rhipe를 활용한 빅데이터 처리 및 분석)

  • Ko, Youngjun;Kim, Jinseog
    • Journal of the Korean Data and Information Science Society / v.24 no.5 / pp.975-987 / 2013
  • The Hadoop system was developed by the Apache foundation based on Google's GFS and MapReduce technologies. Many modern systems for managing and processing big data have been developed on top of Hadoop because it was designed for scalability and distributed computing. The R software is considered a well-suited analytic tool for Hadoop-based systems because R interoperates with other languages and has many libraries for complex analyses. We introduce Rhipe, an R package that supports MapReduce programming under the Hadoop system, and implement a MapReduce program using Rhipe, specifically for multiple regression. In addition, we compare the computing speed of our program with other packages (ff and bigmemory) for processing large data. The simulation results show that our program is faster than ff and bigmemory as the size of the data increases.
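
The MapReduce decomposition of multiple regression rests on the fact that the normal-equation statistics X'X and X'y simply add across data chunks. The sketch below mimics that split in plain Python standing in for Rhipe/Hadoop; the chunking and synthetic data are illustrative.

```python
from functools import reduce
import numpy as np

def map_chunk(chunk):
    """Mapper: reduce one chunk of (X, y) to its sufficient statistics."""
    X, y = chunk
    return X.T @ X, X.T @ y

def combine(a, b):
    """Reducer: sufficient statistics add across chunks."""
    return a[0] + b[0], a[1] + b[1]

rng = np.random.default_rng(1)
beta_true = np.array([2.0, -1.0, 0.5])
chunks = []
for _ in range(8):                     # eight chunks stand in for HDFS blocks
    X = rng.normal(size=(1000, 3))
    chunks.append((X, X @ beta_true + rng.normal(scale=0.1, size=1000)))

XtX, Xty = reduce(combine, map(map_chunk, chunks))
print(np.linalg.solve(XtX, Xty))       # close to beta_true
```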

High-Performance FFT Using Data Reorganization (데이터 재구성 기법을 이용한 고성능 FFT)

  • Park, Neungsoo;Choi, Yungho
    • The KIPS Transactions:PartA / v.12A no.3 s.93 / pp.215-222 / 2005
  • The efficient utilization of cache memories is a key factor in achieving high performance when computing large signal transforms. Non-unit-stride access in the computation of large DFTs causes cache conflict misses, resulting in poor cache performance and a severe degradation of overall performance. In this paper, we propose a dynamic data layout approach that takes the memory hierarchy into account. In our approach, data reorganization is performed between computation stages to reduce the number of cache misses. We also develop an efficient search algorithm to determine the tree with the minimum execution time among the possible factorization trees, considering the size of the DFTs and the data access stride. Our approach is applied to computing the fast Fourier transform (FFT). Experiments were performed on the Pentium 4, Athlon 64, Alpha 21264, and UltraSPARC III. The results show that our FFT achieves up to 3.37 times better performance than previous FFT packages.
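
The between-stage reorganization the abstract describes is the idea behind the classic four-step FFT: factor N as n1*n2, transform one dimension, apply twiddle factors, transpose (the data reorganization), and transform the other dimension. The sketch below uses one fixed, assumed factorization rather than the paper's searched tree, and verifies itself against numpy.

```python
import numpy as np

def four_step_fft(x, n1, n2):
    """Four-step FFT of length n1*n2 with an explicit reorganization between
    stages; choosing (n1, n2) corresponds to one node of a factorization tree."""
    a = x.reshape(n1, n2)                    # row-major: a[i, j] = x[i*n2 + j]
    b = np.fft.fft(a, axis=0)                # n2 FFTs of length n1 down the columns
    k1 = np.arange(n1).reshape(-1, 1)
    j2 = np.arange(n2).reshape(1, -1)
    b = b * np.exp(-2j * np.pi * k1 * j2 / (n1 * n2))   # twiddle factors
    c = np.fft.fft(b, axis=1)                # n1 FFTs of length n2 along the rows
    return c.T.ravel()                       # transpose = the data reorganization

x = np.random.default_rng(0).normal(size=1024) + 0j
print(np.allclose(four_step_fft(x, 32, 32), np.fft.fft(x)))   # True
```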