• Title/Summary/Keyword: Electronic and Processing Set


FE Analysis of Alumina Green Body Density for Pressure Compaction Process (압축성형공정에 대한 알루미나 성형체 밀도분포의 FE 분석)

  • Im, Jong-In;Yook, Young-Jin
    • Journal of the Korean Ceramic Society
    • /
    • v.43 no.12 s.295
    • /
    • pp.859-864
    • /
    • 2006
  • In the pressure compaction process of ceramic powder, the green density varies strongly with both the shape of the ceramic body and the processing conditions. These density differences cause non-uniform shrinkage and deformation, and produce cracks in the sintered ceramic. In this paper, the material properties of the alumina powder mixed with binder and the friction coefficient between the powder and the tool set were determined through simple compaction experiments. The powder flow characteristics were also simulated, and the green density during the powder compaction process was analyzed with the Finite Element Method (FEM). The results show that the density distribution of the green body was improved at the optimized processing condition, and that both the possibility of forming-crack generation and the deformation of the sintered alumina body were reduced.

Clustering Algorithm Using Hashing in Classification of Multispectral Satellite Images

  • Park, Sung-Hee;Kim, Hwang-Soo;Kim, Young-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.16 no.2
    • /
    • pp.145-156
    • /
    • 2000
  • Clustering is the process of partitioning a data set into meaningful clusters. As the amount of data to process increases, a faster algorithm is required than ever before. In this paper, we propose a clustering algorithm that partitions a multispectral remotely sensed image data set into several clusters using a hash search algorithm. The processing time of our algorithm is compared with that of clustering algorithms using other speed-up concepts. The experimental results are compared with respect to the number of bands, the number of clusters, and the size of the data. It is also shown that the processing time of our algorithm is shorter than that of clustering algorithms using other speed-up concepts when the size of the data is relatively large.
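A minimal sketch of the hashing idea behind such a speed-up (with a hypothetical bin size and two-band pixel values; this is not the authors' exact algorithm): quantizing each band value turns a pixel into a hash key, so pixels that collide in the same bucket are grouped in one pass instead of being compared against every cluster center.

```python
from collections import defaultdict

def hash_key(pixel, bin_size):
    """Quantize each band value to form a hash key for the pixel."""
    return tuple(v // bin_size for v in pixel)

def hash_cluster(pixels, bin_size):
    """Group multispectral pixels whose quantized band values collide
    into the same bucket; each bucket becomes one candidate cluster."""
    buckets = defaultdict(list)
    for p in pixels:
        buckets[hash_key(p, bin_size)].append(p)
    # Cluster centers are the per-band means of each bucket.
    centers = {}
    for key, members in buckets.items():
        n = len(members)
        centers[key] = tuple(sum(band) / n for band in zip(*members))
    return centers

# Four two-band pixels fall into two buckets with bin_size 16.
pixels = [(10, 200), (12, 198), (90, 40), (88, 42)]
centers = hash_cluster(pixels, bin_size=16)
```

Because the bucket lookup is O(1) per pixel, the grouping cost grows linearly with the data size, which is where the speed-up over repeated center scans comes from.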

Contact oxide etching using $CHF_3/CF_4$ ($CHF_3/CF_4$를 사용한 콘택 산화막 식각)

  • 김창일;김태형;장의구
    • Electrical & Electronic Materials
    • /
    • v.8 no.6
    • /
    • pp.774-779
    • /
    • 1995
  • Process optimization experiments based on the Taguchi method were performed in order to set up the optimal process conditions for the contact oxide etching process module, which was built to be attached to a multi-processing cluster system. For comparison with the Taguchi method, the contact oxide etching process was carried out with different process parameters ($CHF_3/CF_4$ gas flow rate, chamber pressure, RF power, and magnetic field intensity). Optimal etching characteristics were evaluated in terms of etch rate, selectivity, uniformity, and etched profile. As a final analysis of the experimental results, the optimal etching characteristics were obtained at the process conditions of $CHF_3/CF_4$ gas flow rate = 72/8 sccm, chamber pressure = 50 mTorr, RF power = 500 watts, and magnetic field intensity = 90 gauss.
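Taguchi analysis of this kind ranks factor levels by a signal-to-noise (S/N) ratio; for a response such as etch rate, the larger-the-better form is typical. A small sketch with hypothetical repeated measurements (the abstract does not report the raw etch rates), comparing two levels of one factor:

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-the-better S/N ratio in dB:
    -10 * log10( mean of 1/y_i^2 )."""
    n = len(values)
    return -10 * math.log10(sum(1 / v ** 2 for v in values) / n)

# Hypothetical etch-rate measurements (nm/min) at two levels of one
# factor (e.g. RF power); the level with the higher S/N ratio wins.
level_low = [410.0, 395.0, 402.0]
level_high = [515.0, 508.0, 521.0]
best = max(("low", sn_larger_is_better(level_low)),
           ("high", sn_larger_is_better(level_high)),
           key=lambda t: t[1])
```

In a full Taguchi study the same comparison is repeated for every factor of an orthogonal array, and the winning level of each factor is combined into the optimal condition.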

Storage Policies for Versions Management of XML Documents using a Change Set (변경 집합을 이용한 XML 문서의 버전 관리를 위한 저장 기법)

  • Yun Hong Won
    • The KIPS Transactions:PartD
    • /
    • v.11D no.7 s.96
    • /
    • pp.1349-1356
    • /
    • 2004
  • Interest in version management is increasing in electronic commerce requiring data mining and in document processing systems related to digital government applications. In this paper, we define a change set that manages historical information and maintains XML documents over a long period of time, and propose several storage policies for XML documents using a change set. A change set includes a change operation set and temporal dimensions, and a change operation set is composed of schema change operations and data change operations. We propose three storage policies using a change set: (1) storing all the change sets, (2) storing the change sets and the versions periodically, and (3) storing the aggregation of change sets and the versions at an appropriate point in time. Also, we compare the performance of the existing storage policy with the proposed storage policies. Through the performance evaluation, we show that the method that stores the aggregation of change sets and the versions at an appropriate point in time outperforms the others.
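The trade-off behind policy (2) can be sketched with a toy version store (field-level changes on a flat document, a hypothetical simplification of the paper's XML change sets): change sets are cheap to store, while periodic full versions bound how many deltas must be replayed to reconstruct any version.

```python
def apply_changes(doc, change_set):
    """Apply a change set (field -> new value; None deletes the field)."""
    doc = dict(doc)
    for field, value in change_set.items():
        if value is None:
            doc.pop(field, None)
        else:
            doc[field] = value
    return doc

class VersionStore:
    """Store change sets, but materialize a full version every
    `period` commits so reconstruction replays few deltas."""
    def __init__(self, base, period=3):
        self.snapshots = {0: dict(base)}  # version number -> full document
        self.changes = {}                 # version number -> change set
        self.period = period
        self.latest = 0

    def commit(self, change_set):
        self.latest += 1
        self.changes[self.latest] = change_set
        if self.latest % self.period == 0:
            self.snapshots[self.latest] = self.get(self.latest)

    def get(self, version):
        base = max(v for v in self.snapshots if v <= version)
        doc = self.snapshots[base]
        for v in range(base + 1, version + 1):
            doc = apply_changes(doc, self.changes[v])
        return doc

store = VersionStore({"title": "draft"}, period=2)
store.commit({"title": "v1"})
store.commit({"author": "Yun"})   # triggers a full snapshot at version 2
store.commit({"title": None})     # deletes the title field
```

Policy (3) would additionally merge consecutive change sets into one aggregate before storing, trading write-time work for even faster reconstruction.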

Developing Sentimental Analysis System Based on Various Optimizer

  • Eom, Seong Hoon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.1
    • /
    • pp.100-106
    • /
    • 2021
  • Over the past few decades, natural language processing research had not made much progress. However, the widespread use of deep learning and neural networks drew attention to their application in natural language processing. Sentiment analysis is one of the challenges of natural language processing. Emotions are what a person thinks and feels; therefore, sentiment analysis should be able to analyze a person's attitude, opinions, and inclinations from text. In sentiment analysis, the first priority is simply to classify two emotions: positive and negative. In this paper, we propose a deep learning based sentiment analysis system using various optimizers, namely SGD, Adam, and RMSProp. Experimental results show that the RMSProp optimizer achieves the best performance on the IMDB data set. Future work is to find better hyperparameters for the sentiment analysis system.
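The three update rules being compared can be written compactly. A self-contained sketch on a scalar toy objective f(x) = x² (not the paper's network; the learning rates are illustrative) shows how each optimizer transforms the raw gradient:

```python
def sgd(grad, x, lr=0.1, steps=100):
    """Plain gradient descent: step proportional to the gradient."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def rmsprop(grad, x, lr=0.1, beta=0.9, eps=1e-8, steps=100):
    """Divide the step by a running RMS of past gradients."""
    s = 0.0
    for _ in range(steps):
        g = grad(x)
        s = beta * s + (1 - beta) * g * g
        x -= lr * g / (s ** 0.5 + eps)
    return x

def adam(grad, x, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=100):
    """Momentum on the gradient plus RMS scaling, with bias correction."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (v_hat ** 0.5 + eps)
    return x

grad = lambda x: 2 * x  # gradient of f(x) = x**2
results = {name: fn(grad, 5.0)
           for name, fn in [("SGD", sgd), ("RMSProp", rmsprop), ("Adam", adam)]}
```

Which rule wins depends on the loss surface and data, which is why the paper compares them empirically on IMDB rather than choosing one a priori.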

The Development of the Real Time Target Simulator for the RF Signal of Electronic Warfare using VST and FPGA (VST 및 FPGA를 이용한 전자표적 생성 및 신호 모의장치 개발)

  • Sanghun Song
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.26 no.4
    • /
    • pp.324-334
    • /
    • 2023
  • In this paper, a target simulator for RF signals was developed using a VST (Vector Signal Transceiver) and configured with real-time signal processing software. A function to process RF signals using an FPGA (Field Programmable Gate Array) board was designed. System functions capable of data processing, raw signal monitoring, target signal (simulated range and velocity) generation, and RF environment data analysis were implemented, and the characteristics of the modulated signal were analyzed in the RF environment. All of the RF signal processing programs have options to store and manage the signal data. The validity of the signal simulation was confirmed through verification of the simulated signal results.

Fast Sampling Set Selection Algorithm for Arbitrary Graph Signals (임의의 그래프신호를 위한 고속 샘플링 집합 선택 알고리즘)

  • Kim, Yoon-Hak
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.6
    • /
    • pp.1023-1030
    • /
    • 2020
  • We address the sampling set selection problem for arbitrary graph signals such that the original graph signal is reconstructed from the signal values on the nodes in the sampling set. We introduce the variation difference as a new indirect metric that measures the error in signal variation caused by the sampling process, without resorting to the eigen-decomposition, which requires a huge computational cost. Instead of directly minimizing the reconstruction error, we propose a simple and fast greedy selection algorithm that minimizes the variation difference at each iteration, and we justify this approach by showing that the principle used in the proposed process is similar to that of a previous novel technique. Experiments show that the proposed method yields competitive reconstruction performance with substantially reduced complexity on various graphs, as compared with previous selection methods.
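The variation being tracked is the graph quadratic form xᵀLx, computable directly from the edge list without any eigen-decomposition. The sketch below pairs it with a deliberately simplified greedy loop (the selection rule here is an illustrative proxy, not the authors' exact variation-difference criterion):

```python
def variation(signal, edges):
    """Graph signal variation: sum of squared differences across edges,
    i.e. the quadratic form x^T L x for an unweighted graph."""
    return sum((signal[u] - signal[v]) ** 2 for u, v in edges)

def greedy_sample(signal, edges, k):
    """Toy greedy selection: grow the sampling set by adding, at each
    iteration, the node whose inclusion makes the variation of the
    sampled subgraph closest to the full-graph variation (a simplified
    proxy for the paper's variation difference)."""
    nodes = sorted(signal)
    chosen = []
    full = variation(signal, edges)
    for _ in range(k):
        best = None
        for cand in nodes:
            if cand in chosen:
                continue
            sub = set(chosen + [cand])
            sub_edges = [(u, v) for u, v in edges if u in sub and v in sub]
            diff = abs(full - variation(signal, sub_edges))
            if best is None or diff < best[0]:
                best = (diff, cand)
        chosen.append(best[1])
    return chosen

# A 4-node path graph with a linearly increasing signal.
signal = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
edges = [(0, 1), (1, 2), (2, 3)]
chosen = greedy_sample(signal, edges, 2)
```

The key point the code illustrates is cost: every quantity above is an O(|E|) edge scan, whereas spectral-proxy selection methods pay for an eigen-decomposition up front.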

SPIHT-based Subband Division Compression Method for High-resolution Image Compression (고해상도 영상 압축을 위한 SPIHT 기반의 부대역 분할 압축 방법)

  • Kim, Woosuk;Park, Byung-Seo;Oh, Kwan-Jung;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.27 no.2
    • /
    • pp.198-206
    • /
    • 2022
  • This paper proposes a method to solve problems that may occur when SPIHT (set partitioning in hierarchical trees) is used in a dedicated codec for compressing complex holograms with ultra-high resolution. The development of codecs for complex holograms can be largely divided into creating dedicated compression methods and using anchor codecs such as HEVC and JPEG2000 with added post-processing techniques. When creating a dedicated compression method, a separate transform tool is required to analyze the spatial characteristics of complex holograms. Zero-tree-based algorithms that operate on subband units, such as EZW and SPIHT, have the problem that, when coding high-resolution images, intact subband information is not properly transmitted during bitstream control. This paper proposes a method of dividing the wavelet subbands to solve this problem. By compressing each divided subband separately, the information across the subbands is kept uniform. The proposed method showed better restoration results in terms of PSNR than the existing method.
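The mechanics of the division step can be sketched as follows (a generic one-level quadrant split and tiling, assuming the usual LL/LH/HL/HH layout of a 2D wavelet transform; the paper's actual split geometry may differ):

```python
import numpy as np

def split_subbands(coeffs):
    """Split a one-level 2D wavelet coefficient array into its four
    subbands, assuming the standard quadrant layout."""
    h, w = coeffs.shape
    return {"LL": coeffs[:h // 2, :w // 2], "LH": coeffs[:h // 2, w // 2:],
            "HL": coeffs[h // 2:, :w // 2], "HH": coeffs[h // 2:, w // 2:]}

def tile(band, n):
    """Divide one subband into n x n tiles so each tile can be coded
    independently (e.g. by SPIHT) with its own bitstream; truncating
    the total bitstream then degrades every tile evenly instead of
    dropping whole subbands."""
    h, w = band.shape
    return [band[i * h // n:(i + 1) * h // n, j * w // n:(j + 1) * w // n]
            for i in range(n) for j in range(n)]

coeffs = np.arange(16.0).reshape(4, 4)  # stand-in for transform output
bands = split_subbands(coeffs)
tiles = tile(bands["HH"], 2)
```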

Sampling Set Selection Algorithm for Weighted Graph Signals (가중치를 갖는 그래프신호를 위한 샘플링 집합 선택 알고리즘)

  • Kim, Yoon Hak
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.1
    • /
    • pp.153-160
    • /
    • 2022
  • A greedy algorithm is proposed to select a subset of nodes of a graph for bandlimited graph signals in which each signal value is generated with its own weight. Since the graph signals are weighted, we seek to minimize the weighted reconstruction error, which is formulated using the QR factorization, and derive an analytic result for iteratively finding the node that minimizes the weighted reconstruction error, leading to a simplified iterative selection process. Experiments show that the proposed method achieves a significant performance gain for weighted graph signals on various graphs, as compared with previous novel selection techniques.
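The objective can be stated concretely: reconstruct a bandlimited signal x = Uc from its samples on S and weight each node's squared error. The sketch below uses a direct least-squares solve in place of the paper's QR-factorized update (so it is the same objective but without the paper's efficiency trick), on a hypothetical 4-node, bandwidth-2 example:

```python
import numpy as np

def weighted_recon_error(U, S, x, w):
    """Fit x = U @ c from the samples on S, then return the weighted
    squared reconstruction error sum_i w_i * (x_i - x_hat_i)^2."""
    c, *_ = np.linalg.lstsq(U[S], x[S], rcond=None)
    x_hat = U @ c
    return float(np.sum(w * (x - x_hat) ** 2))

def greedy_weighted_select(U, x, w, k):
    """Greedy sketch: at each iteration add the node that most reduces
    the weighted reconstruction error (a plain least-squares stand-in
    for the paper's QR-based analytic update)."""
    n = U.shape[0]
    S = []
    for _ in range(k):
        best = min((i for i in range(n) if i not in S),
                   key=lambda i: weighted_recon_error(U, S + [i], x, w))
        S.append(best)
    return S

# Bandlimited basis: constant and step components on 4 nodes.
U = np.array([[0.5, 0.5], [0.5, 0.5], [0.5, -0.5], [0.5, -0.5]])
x = U @ np.array([2.0, 4.0])   # a signal exactly in the band
w = np.ones(4)                 # uniform weights for the demo
S = greedy_weighted_select(U, x, w, 2)
```

The QR formulation in the paper avoids re-solving the least-squares problem from scratch for every candidate node, which is exactly the cost this naive version pays.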

One-dimensional CNN Model of Network Traffic Classification based on Transfer Learning

  • Lingyun Yang;Yuning Dong;Zaijian Wang;Feifei Gao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.2
    • /
    • pp.420-437
    • /
    • 2024
  • There are some problems in network traffic classification (NTC), such as complicated statistical features and insufficient training samples, which may cause poor classification performance. An NTC architecture based on a one-dimensional Convolutional Neural Network (CNN) and transfer learning is proposed to tackle these problems and improve fine-grained classification performance. The key points of the proposed architecture are: (1) model classification, by extracting a normalized rate feature set from the original data and adding it to the existing statistical features to optimize the CNN NTC model; and (2) applying transfer learning in the classification to improve NTC performance. We collect two typical network flow data sets from Youku and YouTube, and verify the proposed method through extensive experiments. The results show that, compared with existing methods, our method improves the classification accuracy by around 3 to 5% for Youku, and by about 7 to 27% for YouTube.
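One plausible reading of a "normalized rate feature set" is a per-interval byte rate scaled to [0, 1]; the sketch below is a hypothetical interpretation for illustration (the paper's exact feature definition is not given in the abstract):

```python
def normalized_rate_features(timestamps, sizes, interval, n_bins):
    """Hypothetical normalized rate features: bin packet bytes into
    fixed time intervals, then divide by the peak rate so the features
    lie in [0, 1] regardless of absolute throughput."""
    bins = [0.0] * n_bins
    t0 = timestamps[0]
    for t, s in zip(timestamps, sizes):
        idx = int((t - t0) // interval)
        if idx < n_bins:
            bins[idx] += s
    peak = max(bins) or 1.0  # guard against an all-zero flow
    return [b / peak for b in bins]

# Four packets over ~2 seconds, binned into three 1-second intervals.
feats = normalized_rate_features([0.0, 0.5, 1.2, 2.1],
                                 [100, 100, 50, 200],
                                 interval=1.0, n_bins=3)
```

Scale-free features of this kind are what make transfer between flows captured at different throughputs (e.g. Youku versus YouTube) plausible.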