• Title/Summary/Keyword: 메모리형 (memory type)

Etching properties of $Na_{0.5}K_{0.5}NbO_3$ thin film using inductively coupled plasma (유도결합 플라즈마를 이용한 $Na_{0.5}K_{0.5}NbO_3$ 박막의 식각 특성)

  • Kim, Gwan-Ha;Kim, Kyoung-Tae;Kim, Jong-Gyu;Woo, Jong-Chang;Kim, Chang-Il
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2007.06a / pp.116-116 / 2007
  • Entering the 21st century, the growth of Internet-based information and communication and the rapid spread of personal mobile communication devices are driving portable electronics toward miniaturization and higher performance. The embedded memory of the ICs used in these devices must likewise become more highly integrated, faster, and lower in power. Among the piezoelectric ceramic components essential to such devices, piezoelectric buzzers and other acoustic parts are being adopted in a wide range of electronics and cordless phones, so demand for and production of piezoelectric components are expected to keep growing. The applications of piezoelectric ceramics are thus extremely broad; at present virtually all piezoelectric components are made from PZT-based materials, and it is regarded as certain that they will eventually be replaced by lead-free materials. The environmental pollution caused by Pb has long been recognized as a serious problem: for example, since 1986 the state of California has enforced Proposition 65, which regulates about 800 hazardous substances and, among them, limits Pb use to below 300 ppm. In addition, under the Restriction of Hazardous Substances (RoHS) directive announced by the EU (European Union) in February 2003 as part of its electronics-industry regulations, the use of hazardous heavy-metal substances including Pb (together with cadmium, mercury, hexavalent chromium, and brominated flame retardants) in electrical and electronic products was banned from July 2006. Although Pb contained in electronic ceramic components was exempted, the directive stipulates that once substitute materials are developed, Pb will be prohibited in electronic ceramic components as well. Japan, moreover, has banned the use of Pb since 2005. Because of these environmental impacts of Pb, lead-free ferroelectric and piezoelectric ceramic materials are being actively researched worldwide. In this study, for the patterning of lead-free ferroelectrics, the etching mechanism of NKN thin films was investigated using ICP, a high-density plasma source, and the etching process was optimized with respect to the etching parameters. While varying the gas mixing ratio, an RF power of 700 W and a DC bias voltage of -150 V were applied, and the process pressure and substrate temperature were fixed at 2 Pa and $23^{\circ}C$, respectively. The etch rate was measured with a Tencor Alpha-step 500, and XPS (X-ray photoelectron spectroscopy) was used to analyze the chemical reactions between the NKN thin-film surface and the plasma radicals during etching and to elucidate the etching mechanism.

Development of Embedded Type VOD Client System (임베디드 형태의 VOD 클라이언트 시스템의 개발)

  • Hong Chul-Ho;Kim Dong-Jin;Jung Young-Chang;Kim Jeong-Do
    • Journal of the Korea Academia-Industrial cooperation Society / v.6 no.4 / pp.315-324 / 2005
  • VOD (video on demand) is a video service delivered at the user's request: instead of watching whatever is broadcast, as with conventional TV, users select and watch video content stored on a server. At present, VOD clients are PC-based. Because a PC-based client uses a software MPEG decoder, its performance depends on the main processor's specification, and people who do not know how to use a PC cannot receive the VOD service at all. This paper describes the development of a VOD client system as an embedded device with a hardware MPEG-4 decoder. The main processor is National Semiconductor's SC1200, an x86-family chip with a built-in video processor, and the memory is 128 Mbyte of SDRAM. An Ethernet controller is included so that the VOD service can be delivered over the Internet. Because the embedded VOD client uses a hardware MPEG-4 decoder, a low-performance main processor suffices, so it can be developed as a low-priced system. The embedded VOD client is easy for anyone to operate with a remote control, and its output is displayed on a TV.

Optimization of Elastic Modulus and Cure Characteristics of Composition for Die Attach Film (다이접착필름용 조성물의 탄성 계수 및 경화 특성 최적화)

  • Sung, Choonghyun
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.4 / pp.503-509 / 2019
  • The demand for smaller, faster, and multi-functional mobile devices is increasing rapidly. In response, the Stacked Chip Scale Package (SCSP) is widely used in the assembly industry, and a film-type adhesive called die attach film (DAF) is widely used for bonding chips in SCSP. The DAF requires high flowability at the die-attach temperature, both for bonding chips on organic substrates, where the DAF needs to fill the gap, and for stacking same-sized dies, where the DAF needs to embed the bonding wires. In this study, a mixture design of experiments (DOE) was performed over three raw materials to obtain a DAF recipe optimized for low elastic modulus at high temperature; a sketch of this kind of analysis is given below. The three components were an acrylic polymer (SG-P3) and two solid epoxy resins (YD011 and YDCN500-1P) with different softening points. According to the DOE results, the elastic modulus at high temperature was influenced most strongly by SG-P3: the modulus at $100^{\circ}C$ decreased from 1.0 MPa to 0.2 MPa as the amount of SG-P3 was decreased by 20%. In contrast, the elastic modulus at room temperature was dominated by YD011, the epoxy with the higher softening point. The optimized DAF recipe showed approximately 98.4% pickup performance when a UV dicing tape was used. A DAF crack that occurred during cure was effectively suppressed by optimizing the cure-accelerator amount and adopting a two-step cure schedule; the imidazole-type accelerator showed better performance than the amine-type accelerator.
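
Since the abstract does not give the fitted model, the following is a minimal sketch of how such a three-component mixture DOE can be analyzed with a Scheffé quadratic model; only the component names come from the abstract, while the simplex-lattice design points and modulus values are hypothetical.

```python
# Hypothetical sketch of a three-component mixture DOE analysis.
# Fractions of SG-P3, YD011, YDCN500-1P sum to 1; the responses are illustrative,
# not measurements from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

# Simplex-lattice design points (x1 = SG-P3, x2 = YD011, x3 = YDCN500-1P)
X = np.array([
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
])
y = np.array([0.2, 1.4, 1.1, 0.7, 0.6, 1.2, 0.8])  # illustrative modulus at 100 C (MPa)

def scheffe_quadratic(X):
    """Expand mixture fractions into Scheffe quadratic terms: x_i and x_i*x_j."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Scheffe mixture models omit the intercept; linear blending lives in the x_i terms.
model = LinearRegression(fit_intercept=False).fit(scheffe_quadratic(X), y)

# Scan candidate recipes on the simplex and pick the blend with the lowest
# predicted high-temperature modulus.
grid = np.array([[a, b, 1 - a - b]
                 for a in np.linspace(0, 1, 51)
                 for b in np.linspace(0, 1 - a, max(int((1 - a) * 50) + 1, 1))])
pred = model.predict(scheffe_quadratic(grid))
best = grid[np.argmin(pred)]
print("lowest predicted modulus at recipe:", best.round(3), "->", pred.min().round(3), "MPa")
```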

A Study on MRD Methods of A RAM-based Neural Net (RAM 기반 신경망의 MRD 기법에 관한 연구)

  • Lee, Dong-Hyung;Kim, Seong-Jin;Park, Sang-Moo;Lee, Soo-Dong;Ock, Cheol-Young
    • Journal of the Korea Society of Computer and Information / v.14 no.9 / pp.11-19 / 2009
  • A RAM-based neural network (RBNN) with multiple discriminators is more effective than an RBNN with a single discriminator. The Experience Sensitive Cumulative Neural Network and the 3-D Neuro System (3DNS), which accumulate feature points, improved the performance of the RBNN by enabling additional and repeated training of patterns and the extraction of a generalized pattern. In the recognition phase of a multi-discriminator network, the class is selected by the MRD value, computed as the accumulated response sum of each class (see the sketch below). However, such networks suffer from saturation of their memory cells as the volume of training grows, and this saturation degrades the MRD decision and lowers the recognition rate. In this paper we propose methods that improve the MRD: an optimum MRD, a matching ratio of the prototype to the generalized image, a cumulative filter ratio, and the gap of prototype-response MRD. We evaluated performance on the NIST database, without a preprocessor, and compared the model with 3DNS. The proposed MRD methods achieved a higher recognition rate than 3DNS and were more stable against distortion of the input patterns.
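
As the abstract does not detail the network, the following is a minimal sketch of the basic mechanism it builds on: each class owns a discriminator made of RAM nodes, and MRD-style classification picks the class whose discriminator accumulates the largest response. The tuple mapping and training rule here are simplified assumptions, not the paper's exact 3DNS/MRD variant.

```python
# Minimal sketch of class selection in a multi-discriminator RAM-based net.
import numpy as np

class RamDiscriminator:
    def __init__(self, n_inputs, tuple_size, rng):
        self.tuple_size = tuple_size
        # Random but fixed partition of input bits into tuples (RAM address lines).
        self.mapping = rng.permutation(n_inputs).reshape(-1, tuple_size)
        self.rams = [set() for _ in range(len(self.mapping))]  # addresses seen in training

    def _addresses(self, bits):
        # Each tuple of input bits forms one RAM address.
        weights = 1 << np.arange(self.tuple_size)
        return (bits[self.mapping] * weights).sum(axis=1)

    def train(self, bits):
        for ram, addr in zip(self.rams, self._addresses(bits)):
            ram.add(int(addr))

    def response(self, bits):
        # Number of RAM nodes that recognise their address.
        return sum(int(addr) in ram for ram, addr in zip(self.rams, self._addresses(bits)))

def classify_mrd(discriminators, bits):
    # MRD: pick the class whose discriminator accumulates the largest response.
    responses = {label: d.response(bits) for label, d in discriminators.items()}
    return max(responses, key=responses.get), responses

rng = np.random.default_rng(0)
discs = {label: RamDiscriminator(n_inputs=256, tuple_size=4, rng=rng) for label in range(10)}

# Train class 0 on one pattern, then classify the same pattern.
pattern = rng.integers(0, 2, size=256)
discs[0].train(pattern)
label, resp = classify_mrd(discs, pattern)
print(label, resp[label])  # 0, with all 64 RAM nodes responding
```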

Cortex M3 Based Lightweight Security Protocol for Authentication and Encrypt Communication between Smart Meters and Data Concentrate Unit (스마트미터와 데이터 집중 장치간 인증 및 암호화 통신을 위한 Cortex M3 기반 경량 보안 프로토콜)

  • Shin, Dong-Myung;Ko, Sang-Jun
    • Journal of Software Assessment and Valuation / v.15 no.2 / pp.111-119 / 2019
  • The existing smart grid device authentication system is concentrated on the DCU, the meter-reading FEP, and the MDMS; no authentication system has been established for smart meters themselves. Although some cryptographic chips have been developed, they stop at simple encryption, so a full PKI authentication scheme is difficult to realize. Unlike existing power grids, smart grids are based on open two-way communication, so the risk of accidents grows as information security vulnerabilities increase. PKI, however, is difficult to apply to smart meters, leaving open the possibility of accidents such as system shutdown caused by manipulated packets or false information sent to the operating system. Because issuing conventional PKI certificates to smart meters with severe hardware constraints makes authentication and certificate renewal difficult, an ultra-lightweight cryptographic authentication protocol was designed and implemented that can operate within the limited resources of smart meters (non-IP networks, weak processors, and little memory and storage); a sketch of this class of protocol follows. Experiments showed that the lightweight authentication protocol executes quickly in the Cortex-M3 environment, and it is expected to help build a more secure authentication system for the smart grid industry.
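
The abstract does not name the protocol's primitives. Purely as an illustration of how a pre-shared-key challenge-response scheme can avoid PKI on a constrained meter, here is a minimal sketch assuming HMAC-SHA256; the message layout, meter ID, and key provisioning are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: pre-shared-key challenge-response between a DCU and a smart meter.
# HMAC-SHA256 stands in for whichever lightweight primitive the paper actually uses.
import hmac, hashlib, os

PSK = os.urandom(32)  # in practice provisioned to the meter at manufacturing time

def dcu_challenge():
    return os.urandom(16)  # fresh nonce per session prevents replay

def meter_respond(psk, challenge, meter_id):
    # The meter proves knowledge of the PSK without ever transmitting it.
    return hmac.new(psk, challenge + meter_id, hashlib.sha256).digest()

def dcu_verify(psk, challenge, meter_id, response):
    expected = hmac.new(psk, challenge + meter_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

meter_id = b"METER-0001"  # hypothetical identifier
ch = dcu_challenge()
resp = meter_respond(PSK, ch, meter_id)
assert dcu_verify(PSK, ch, meter_id, resp)
# A session key for encrypted meter reads could then be derived from (PSK, nonce).
```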

The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui;Baek, Sung Ha;Lee, Soon Jo;Bae, Hae Young
    • Spatial Information Research / v.20 no.5 / pp.99-109 / 2012
  • Recently, with the growth of social media and the spread of smartphones, the amount of data has increased considerably through heavy use of SNS (Social Network Services). Accordingly, the concept of big data has emerged, and many researchers are seeking ways to make the best use of it. To maximize the value of the big data held by companies, it must be combined with existing data, yet the physical and logical storage structures of the data sources differ so much that a system is needed to integrate and manage them. MapReduce was developed for big data processing and has the advantage of fast, distributed processing, but it is impractical to build and store indexes for every keyword, and the store-then-search cycle makes real-time processing difficult; processing complex events over heterogeneous data without a common structure adds further cost. Complex event processing (CEP) systems address this problem: a CEP system takes data from different sources and combines them, making complex event processing possible and well suited to real-time processing, especially over stream data. Nevertheless, text-based unstructured data from SNS and Internet articles are managed as strings, so every query requires string comparison, which performs poorly. We therefore extend a CEP system to manage unstructured data and process queries quickly: string data are given a logical schema by filtering keywords against a keyword set and converting them to integer codes, as sketched below. In addition, by processing stream data in memory in real time within the CEP system, we reduce the query-processing time that would otherwise be spent reading data back after it has been stored on disk.
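
The core trick the abstract describes is dictionary encoding: registered keywords become integers, so downstream CEP predicates compare integers instead of strings. A minimal sketch, with a hypothetical event format and keyword set:

```python
# Minimal sketch of the keyword-set encoding described in the abstract.
# The keyword set and event text are hypothetical examples.
KEYWORD_SET = ["earthquake", "flood", "fire", "outage"]
ENCODE = {kw: i for i, kw in enumerate(KEYWORD_SET)}

def encode_event(text):
    """Tokenize a text event, keep only registered keywords, return integer codes."""
    return [ENCODE[tok] for tok in text.lower().split() if tok in ENCODE]

def match_rule(codes, rule_codes):
    # Integer membership test replaces repeated string comparison at query time.
    return any(c in rule_codes for c in codes)

event = "Fire reported near the station"
codes = encode_event(event)                                  # -> [2]
print(match_rule(codes, {ENCODE["fire"], ENCODE["flood"]}))  # True
```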

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory when storage must be extended or log data increase rapidly. Moreover, to overcome the processing limits of the existing analysis tool when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it continues to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it difficult to add nodes and distribute the stored data across them when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log-insertion and query performance is carried out against a log data processing system that uses only MySQL; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
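
As a rough sketch of the collector's classification-and-dispatch step, assuming pymongo; the connection URI, collection names, and log format are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of routing classified bank logs into per-type MongoDB collections.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["bank_logs"]

def dispatch(log):
    """Route a parsed log record to a MongoDB collection named after its type."""
    doc = {
        "type": log["type"],   # e.g. "transaction", "auth", "batch"
        "raw": log["raw"],     # the original unstructured log line
        "ts": datetime.now(timezone.utc),
    }
    # Free-schema insert: fields may differ per log type without schema migrations.
    db[log["type"]].insert_one(doc)

dispatch({"type": "transaction", "raw": "2013-06-01 ATM withdraw 50000 KRW ..."})
```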

A Transmission Electron Microscopy Study on the Crystallization Behavior of In-Sb-Te Thin Films (In-Sb-Te 박막의 결정화 거동에 관한 투과전자현미경 연구)

  • Kim, Chung-Soo;Kim, Eun-Tae;Lee, Jeong-Yong;Kim, Yong-Tae
    • Applied Microscopy / v.38 no.4 / pp.279-284 / 2008
  • Phase change materials have been used extensively as optical rewritable data storage media, exploiting their phase-change properties, and have recently drawn attention for non-volatile memory devices such as phase change random access memory. In this work, we investigated the crystallization behavior and microstructure of In-Sb-Te (IST) thin films deposited by RF magnetron sputtering. Transmission electron microscopy was carried out after annealing at $300^{\circ}C$, $350^{\circ}C$, $400^{\circ}C$, and $450^{\circ}C$ for 5 min. InSb phases were observed to change into $In_3SbTe_2$ and InTe phases as the temperature increased. Bright-field TEM (BF TEM) images and selected area electron diffraction (SAED) patterns showed that the film thickness decreased and the grain size increased. A high-resolution TEM (HRTEM) study showed that InSb phases annealed at $350^{\circ}C$ have {111} facets, because the surface energy of the close-packed {111} plane is the lowest in FCC crystals. When the film was heated to $400^{\circ}C$, the $In_3SbTe_2$ grains contained coherent micro-twins with a {111} mirror plane, and these were healed by annealing at $450^{\circ}C$; HRTEM showed that InTe phase separation occurred at this stage. It follows that $In_3SbTe_2$ forms during crystallization when the film composition is near the stoichiometric composition, while InTe phase separation can take place as the composition deviates from $In_3SbTe_2$.

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on asset volatility; it avoids investment risk structurally, offers stability in the management of large funds, and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible, it scales to billions of examples in limited-memory environments, learns far faster than traditional boosting methods, and is frequently used across many fields of data analysis. In this study, we propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of each asset and applies the predicted risk to the covariance estimation step. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors degrade the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model, narrowing the gap between theory and practice. For the empirical test of the suggested model, we used Korean stock market price data covering 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. Using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, we produced a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return, with ample sample data thanks to the long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and estimation error: the total cumulative return was 45.748%, about 5% higher than that of the risk parity model, and the estimation errors were reduced in 9 out of 10 industry sectors. Reducing the estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment thus showed improved portfolio performance from reducing the estimation errors of the optimized asset allocation model. Many financial models and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new, machine-learning-based method for reducing them in an optimized asset allocation model. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
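
An illustrative sketch of the core idea, under simplifying assumptions: one XGBoost regressor per asset predicts next-window volatility from the current volatility vector, and the predicted risks feed a naive inverse-risk weighting. The paper applies predicted risk to covariance estimation; the inverse-risk weights, feature construction, and random placeholder data below are assumptions for illustration only.

```python
# Illustrative sketch: predict next-period asset volatility with XGBoost,
# then weight assets inversely to their predicted risk.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(42)
n_assets, n_obs = 10, 1000
returns = rng.normal(0, 0.01, size=(n_obs, n_assets))  # placeholder return history

def realized_vol(r, window=20):
    # Rolling realized volatility per asset.
    return np.array([r[i - window:i].std(axis=0) for i in range(window, len(r))])

vols = realized_vol(returns)       # shape: (n_obs - window, n_assets)
X, y = vols[:-1], vols[1:]         # predict next window's vols from the current ones

pred_risk = np.empty(n_assets)
for j in range(n_assets):          # one regressor per asset
    model = xgb.XGBRegressor(n_estimators=100, max_depth=3, learning_rate=0.1)
    model.fit(X, y[:, j])
    pred_risk[j] = model.predict(X[-1:])[0]

# Naive risk parity proxy: each asset's weight is inverse to its predicted risk.
weights = (1 / pred_risk) / (1 / pred_risk).sum()
print(weights.round(4), weights.sum())
```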