• Title/Summary/Keyword: graph structure

Search Results: 506

Evaluation of Multi-Level Memory Characteristics in Ge2Sb2Te5/TiN/W-Doped Ge2Sb2Te5 Cell Structure (Ge2Sb2Te5/TiN/W-Doped Ge2Sb2Te5 셀 구조의 다중준위 메모리 특성 평가 )

  • Jun-Hyeok Jo;Jun-Young Seo;Ju-Hee Lee;Ju-Yeong Park;Hyun-Yong Lee
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.37 no.1 / pp.88-93 / 2024
  • To evaluate the Ge2Sb2Te5/TiN/W-doped Ge2Sb2Te5 cell structure as a candidate multi-level memory medium, the crystallization rate and stabilization characteristics under voltage (V) and current (I) pulse sweeping were investigated. In the cell structures, prepared by a magnetron sputtering system on a p-type Si (100) substrate, the Ge2Sb2Te5 and W-doped Ge2Sb2Te5 thin films were separated by a TiN barrier metal; the individual thicknesses were varied, but the total thickness was fixed at 200 nm. All cell structures exhibited relatively stable multi-level states of high-middle-low resistance (HR-MR-LR), which guarantee the reliability of multilevel phase-change random access memory (PRAM). The amorphous-to-multilevel crystallization rate was evaluated from a graph of resistance (R) vs. pulse duration (T) obtained by nanoscale pulse sweeping at a fixed applied voltage (12 V). For all structures, the phase-change rates of HR→MR and MR→LR were estimated to be approximately t<20 ns and t<40 ns, respectively, and the states were relatively stable. We believe that the double-stack structure of an appropriate Ge-Sb-Te film separated by a barrier metal (TiN) can be optimized for high-speed, stable multilevel PRAM.
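The HR-MR-LR readout described above can be sketched as a simple threshold classifier over a resistance reading. This is a minimal illustration, not the paper's measurement procedure; the threshold values and example readings are hypothetical.

```python
# Minimal sketch: mapping a cell's readout resistance to one of the
# HR/MR/LR multi-level states described in the abstract. The threshold
# values (mr_upper, lr_upper) and the example readings are hypothetical
# illustrations, not measured device parameters.

def classify_state(resistance_ohm, mr_upper=1e6, lr_upper=1e4):
    """Map a resistance reading (ohms) to a multi-level state label."""
    if resistance_ohm > mr_upper:
        return "HR"   # amorphous: high resistance
    if resistance_ohm > lr_upper:
        return "MR"   # partially crystallized: middle resistance
    return "LR"       # fully crystallized: low resistance

# A pulse-duration sweep at fixed voltage moves the cell HR -> MR -> LR.
readings = [5e7, 8e5, 2e3]   # hypothetical R values after successive pulses
states = [classify_state(r) for r in readings]
```

Reading the states back from an R vs. T sweep in this way is what makes the stability of the three levels directly checkable.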

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored unstructured log data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as storage extension or a rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data.
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand by distributing stored data across nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. The data models of NoSQL are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
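The log collector's routing decision described above can be sketched in a few lines: real-time log types go to the MySQL module, everything else to the MongoDB module for batch (Hadoop-based) analysis. The log type names and the `REALTIME_TYPES` set are hypothetical illustrations, not the paper's actual classification scheme.

```python
# Minimal sketch of the log collector module's routing: entries that
# need real-time analysis are sent to the MySQL module, all others to
# the MongoDB module. Log types in REALTIME_TYPES are hypothetical.

REALTIME_TYPES = {"transaction_error", "auth_failure"}

def route_log(entry):
    """Return the destination module name for one log entry (a dict)."""
    if entry.get("type") in REALTIME_TYPES:
        return "mysql"    # real-time path, read by the log graph generator
    return "mongodb"      # aggregated per unit time, analyzed by Hadoop

logs = [
    {"type": "auth_failure", "msg": "bad PIN"},
    {"type": "page_view", "msg": "balance screen"},
]
destinations = [route_log(e) for e in logs]
```

Separating the two paths at ingestion time is what lets the slow batch pipeline and the real-time graph generator operate independently.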

Developing Image Processing Program for Automated Counting of Airborne Fibers (이미지 처리를 통한 공기 중 섬유의 자동계수 알고리즘 프로그램 개발)

  • Choi, Sungwon;Lee, Heekong;Lee, Jong Il;Kim, Hyunwook
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.24 no.4 / pp.484-491 / 2014
  • Objectives: An image processing program for asbestos fibers, analyzing gradient components and partial linearity, was developed in order to accurately segment fibers. The objectives were to increase the accuracy of counting through formulation of the size and shape of fibers and to guarantee robust fiber detection against noisy backgrounds. Methods: We utilized samples mixed with sand and sepiolite, which has a structure similar to asbestos. Sample concentrations of 0.01%, 0.05%, 0.1%, 0.5%, 1%, 2%, and 3% (w/w) were prepared. The sand used was homogenized after being sieved to less than 180 μm. Airborne samples were collected on MCE filters by utilizing a personal pump at a 2 L/min flow rate for 30 minutes. We used the NIOSH 7400 method for pre-treating and counting the fibers on the filters. The results of the NIOSH 7400 method were compared with those of the image processing program. Results: The performance of the developed algorithm, when compared with the target images acquired by PCM, showed a detection rate of 88.67% on average. The main causes of non-detection were missed fibers with a low degree of contrast and overlapping of faint and thin fibers. Also, some duplicate counting occurred for fibers with breaks in the middle due to overlapping particles. Conclusions: An image detection algorithm that could increase the accuracy of fiber counting was developed by considering the direction of the edge to extract images of fibers. It showed results comparable to PCM analysis and could be used to count fibers through real-time tracking by modeling branch points as a graph. This algorithm can be utilized to measure concentrations of asbestos in real time if a suitable optical design is developed.

Automated Development of Rank-Based Concept Hierarchical Structures using Wikipedia Links (위키피디아 링크를 이용한 랭크 기반 개념 계층구조의 자동 구축)

  • Lee, Ga-hee;Kim, Han-joon
    • The Journal of Society for e-Business Studies / v.20 no.4 / pp.61-76 / 2015
  • In general, the hierarchical concept tree has been a crucial data structure for indexing huge amounts of textual data. This paper proposes a generality rank-based method that can automatically develop hierarchical concept structures from Wikipedia data. The goal of the method is to regard each Wikipedia article as a concept and to generate hierarchical relationships among concepts. In order to estimate the generality of concepts, we have devised a special ranking function that mainly uses the number of hyperlinks among Wikipedia articles. The ranking function is effectively used for computing the probabilistic subsumption among concepts, which allows relatively more stable hierarchical structures to be generated. Eventually, the set of concept pairs with hierarchical relationships is visualized as a DAG (directed acyclic graph). Through empirical analysis using the concept hierarchy of the Open Directory Project, we showed that the proposed method outperforms a representative baseline method and can automatically extract concept hierarchies with high accuracy.
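The probabilistic-subsumption idea above can be sketched with link sets: a more general concept A is made the parent of B when most pages that link to B also link to A, but not vice versa. The concept names, page IDs, and the 0.8 threshold below are hypothetical, and this is only one simple reading of link-based subsumption, not the paper's exact ranking function.

```python
# Minimal sketch of link-based probabilistic subsumption: A subsumes B
# when P(A|B) is high (pages linking to B mostly link to A too) while
# P(B|A) stays low. The threshold and the toy link sets are hypothetical.

def subsumes(links_to_a, links_to_b, threshold=0.8):
    """True if concept A (more general) subsumes concept B (more specific)."""
    overlap = len(links_to_a & links_to_b)
    p_a_given_b = overlap / len(links_to_b)
    p_b_given_a = overlap / len(links_to_a)
    return p_a_given_b >= threshold and p_b_given_a < threshold

# Sets of page IDs linking to each concept's article (hypothetical):
pages_linking = {
    "Science": {1, 2, 3, 4, 5, 6, 7, 8},   # many in-links -> more general
    "Physics": {1, 2, 3, 4},               # fewer in-links -> more specific
}
edge = subsumes(pages_linking["Science"], pages_linking["Physics"])
```

Collecting every concept pair that passes this test, and keeping the asymmetry, is what yields DAG edges rather than an arbitrary graph.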

Efficient Approximation of State Space for Reinforcement Learning Using Complex Network Models (복잡계망 모델을 사용한 강화 학습 상태 공간의 효율적인 근사)

  • Yi, Seung-Joon;Eom, Jae-Hong;Zhang, Byoung-Tak
    • Journal of KIISE:Software and Applications / v.36 no.6 / pp.479-490 / 2009
  • A number of temporal abstraction approaches have been suggested so far to handle the high computational complexity of Markov decision problems (MDPs). Although the structure of a temporal abstraction can significantly affect the efficiency of solving the MDP, to our knowledge none of the current temporal abstraction approaches explicitly considers the relationship between topology and efficiency. In this paper, we first show that a topological measurement from the complex network literature, mean geodesic distance, can reflect the efficiency of solving an MDP. Based on this, we build an incremental method that systematically builds temporal abstractions using a network model guaranteeing a small mean geodesic distance. We test our algorithm on a realistic 3D game environment, and experimental results show that our model exhibits subpolynomial growth of mean geodesic distance with problem size, which enables efficient solving of the resulting MDP.
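The topological measure named above, mean geodesic distance, is the average shortest-path length over all reachable node pairs and can be computed with a BFS from every node. The small example graph is hypothetical; this is the standard definition, not the paper's incremental construction.

```python
# Minimal sketch: mean geodesic distance of an unweighted graph,
# computed by breadth-first search from every node. The 4-cycle example
# is hypothetical.
from collections import deque

def mean_geodesic_distance(adj):
    """Average shortest-path length over all ordered reachable pairs."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:          # first visit = shortest distance
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

# A 4-cycle: from each node, two neighbors at distance 1, one at distance 2.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
mgd = mean_geodesic_distance(cycle4)   # (1 + 1 + 2) / 3 per node
```

A small value of this measure means any state is reachable from any other in few (abstract) steps, which is why it tracks how quickly an MDP over the same topology can be solved.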

An Analysis of Middle School Student's Eye Movements in the Law of Large Numbers Simulation Activity (큰 수의 법칙 시뮬레이션에서 중학생의 안구 운동 분석)

  • Choi, In Yong;Cho, Han Hyuk
    • The Mathematical Education / v.56 no.3 / pp.281-300 / 2017
  • This study analyzed, through eye movement analysis, the difficulties middle school students face in computer simulations of the law of large numbers. Some students did not attend to the simulation results and could not make meaningful inferences. It was observed that students kept their existing conceptions even when they observed simulation results inconsistent with their misconceptions. Since probabilistic intuition influences students' thinking very strongly, it is necessary to design tasks that allow students to clearly recognize the difference between their erroneous intuitions and the simulation results. In addition, we could confirm through eye movement analysis that students could not make meaningful observations and inferences if too much reasoning was needed, even though the simulation included a rich context. It is necessary to use visual representations such as graphs to provide immediate feedback to students, and to encourage students to attend to the results in an intentional way so as to discover the underlying mathematical structure, rather than simply presenting experimental data. Some students focused their attention on visually salient features of the experimental results and drew incorrect conclusions. The simulation should be designed so that the patterns the student must discover in the experimental results are not visually distorted, and it should allow students to perform a sufficient number of simulations. Based on the results of this study, we suggested using a cumulative relative frequency graph that shows multiple results at the same time, and the term 'generally tends to get closer', in learning the law of large numbers. In addition, it was confirmed that eye tracking is a useful tool for analyzing interaction in technology-based probability learning.
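The cumulative relative frequency graph recommended above can be generated with a simple seeded simulation: repeated coin flips, recording the running relative frequency of heads after each trial. This is a generic illustration of the law of large numbers, not the study's actual simulation software; the trial count and seed are arbitrary.

```python
# Minimal sketch of the data behind a cumulative relative frequency
# graph for the law of large numbers: fair coin flips with the running
# proportion of heads recorded after each trial. Seed and trial count
# are arbitrary choices for reproducibility.
import random

def cumulative_relative_frequency(n_trials, seed=0):
    rng = random.Random(seed)
    heads, series = 0, []
    for i in range(1, n_trials + 1):
        heads += rng.random() < 0.5      # one Bernoulli(0.5) trial
        series.append(heads / i)         # running relative frequency
    return series

series = cumulative_relative_frequency(10_000)
# Early values fluctuate widely; later values "generally tend to get
# closer" to 0.5, which is exactly what the cumulative graph makes visible.
```

Plotting several such series on one chart shows that each run wanders differently at first but all settle near 0.5, countering the misconception that short runs must already balance out.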

Progressive Reconstruction of 3D Objects from a Single Freehand Line Drawing (Free-Hand 선화로부터 점진적 3차원 물체 복원)

  • 오범수;김창헌
    • Journal of KIISE:Computer Systems and Theory / v.30 no.3_4 / pp.168-185 / 2003
  • This paper presents a progressive algorithm that not only narrows down the search domain in the course of face identification but also quickly reconstructs various 3D objects from a sketch drawing. The sketch drawing, an edge-vertex graph without hidden-line removal, which serves as input for the reconstruction process, is obtained from an inaccurate freehand sketch of a 3D wireframe object. The algorithm is executed in two stages. In the face identification stage, we generate and classify potential faces into implausible, basis, and minimal faces by using geometrical and topological constraints to reduce the search space. The proposed algorithm searches the space of minimal faces only, to identify the actual faces of an object quickly. In the object reconstruction stage, we progressively calculate a 3D structure by optimizing the coordinates of the object's vertices according to the sketch order of faces. The progressive method reconstructs the most plausible 3D object quickly by applying 3D constraints, derived from the relationship between the object and the sketch drawing, in the optimization process. Furthermore, it allows the designer to change viewpoint during sketching. The progressive reconstruction algorithm is discussed, and examples from a working implementation are given.

Semi-automatic 3D Building Reconstruction from Uncalibrated Images (비교정 영상에서의 반자동 3차원 건물 모델링)

  • Jang, Kyung-Ho;Jang, Jae-Seok;Lee, Seok-Jun;Jung, Soon-Ki
    • Journal of Korea Multimedia Society / v.12 no.9 / pp.1217-1232 / 2009
  • In this paper, we propose a semi-automatic 3D building reconstruction method using uncalibrated images that include the facade of the target building. First, we extract feature points in all images and find corresponding points between each pair of images. Second, we extract lines in each image and estimate the vanishing points. Extracted lines are grouped with respect to their corresponding vanishing points. An adjacency graph is used to organize the image sequence based on the number of corresponding points between image pairs, and camera calibration is performed. The initial solid model can be generated through some user interaction using the grouped lines and camera pose information. From the initial solid model, a detailed building model is reconstructed by a combination of predefined basic Euler operators on a half-edge data structure. Automatically computed geometric information is visualized to assist the user's interaction during the detail modeling process. The proposed system allows the user to obtain a 3D building model with less interaction by augmenting various automatically generated geometric information.
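The adjacency-graph step above can be sketched as follows: images are nodes, edge weights are the number of point correspondences between a pair, and a greedy walk starting from the strongest pair yields an ordering for calibration. The greedy strategy, image names, and match counts are hypothetical simplifications of the paper's method.

```python
# Minimal sketch: organizing an image sequence from a correspondence-count
# adjacency graph. Start at the best-matched pair, then repeatedly append
# the remaining image with the most matches to the current last image.
# The greedy policy and the match figures are hypothetical.

def order_images(match_counts):
    """match_counts: {(img_a, img_b): n_correspondences} -> ordered list."""
    (a, b), _ = max(match_counts.items(), key=lambda kv: kv[1])
    order = [a, b]
    remaining = {i for pair in match_counts for i in pair} - set(order)
    while remaining:
        last = order[-1]
        nxt = max(remaining,
                  key=lambda i: match_counts.get((last, i), 0)
                              + match_counts.get((i, last), 0))
        order.append(nxt)
        remaining.remove(nxt)
    return order

matches = {("img0", "img1"): 120, ("img1", "img2"): 95, ("img0", "img2"): 30}
seq = order_images(matches)
```

Ordering by correspondence strength matters because calibration propagates between adjacent images, and weakly matched pairs give unstable pose estimates.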


A Study on the Visual Perception Space Structure Analysis of Exhibition Contents Organization in Natural History Museum (자연사박물관 전시내용구성의 시지각적 공간구조분석에 관한 연구)

  • Kim, Eun-Jung;Hong, Kwan-Seon
    • Korean Institute of Interior Design Journal / v.18 no.2 / pp.80-92 / 2009
  • Natural history museums preserve and manage the creatures living in each country, so they play a unique role in biodiversity, and their roles are instrumental for the collection, preservation, research, exhibition, and education of creatures in the 21st century. Therefore, this research aims to survey the status of our country's existing natural history museums, analyze their visual perception space structural characteristics, and ultimately utilize those characteristics as basic data in planning and designing the spaces of natural history museums to be established later. As for the research scope, the research selected as subjects 7 natural history museums that currently combine open and mixed types and have accommodated comparatively active exhibitions since 2000, among the 10 or so natural history museums in our country. As for the research method, the exhibition spaces of the 7 natural history museums were analyzed using the Depthmap program, which can analyze space with a visual graph analysis function, and the visibility among unit areas was analyzed in terms of each museum's integration and exhibition content composition. With this analysis method, the research was able to quantitatively analyze the visual characteristics of exhibition space that induce and adjust the movement of the audience. Visual perception quantitative analysis, as in this research, will enhance exhibition design by considering the correlation between the audience and exhibited items when planning the spaces of natural history museums to be established later.
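The visibility-graph idea behind the Depthmap analysis above can be sketched on a toy grid: cells that can see each other are connected, and a cell's integration is approximated here as the inverse of its mean visual depth (the full space-syntax measure normalizes this further, which is omitted). The 1×4 corridor example is hypothetical.

```python
# Minimal sketch of visibility-graph analysis: mean visual depth of each
# cell via BFS over a "can see each other" adjacency. Lower mean depth
# means higher integration. This simplified proxy omits the space-syntax
# normalization that tools like Depthmap apply; the corridor is hypothetical.
from collections import deque

def mean_visual_depth(adj, node):
    """Average BFS distance from `node` to every other reachable cell."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    others = [d for n, d in dist.items() if n != node]
    return sum(others) / len(others)

# Corridor of 4 cells where each cell sees only its immediate neighbors:
corridor = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
depths = {n: mean_visual_depth(corridor, n) for n in corridor}
# Central cells (1, 2) have lower mean depth, i.e. higher integration,
# which is why central, highly visible areas attract audience movement.
```

This is the quantitative link the abstract describes: integration values over the visibility graph predict where audience movement concentrates.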

Hypertext Model Extension and Dynamic Server Allocation for Database Gateway in Web Database Systems (웹 데이타베이스에서 하이퍼텍스트 모델 확장 및 데이타베이스 게이트웨이의 동적 서버 할당)

  • Shin, Pan-Seop;Kim, Sung-Wan;Lim, Hae-Chull
    • Journal of KIISE:Databases / v.27 no.2 / pp.227-237 / 2000
  • A Web database system is a large-scale multimedia application system that has multimedia processing facilities and cooperates with relational/object-oriented DBMSs. Conventional hypertext modeling methods and DB gateways have limitations for Web databases because of their restricted presentation versatility and inefficient concurrency control caused by bottlenecks in cooperative processing. Thus, we suggest a Dynamic Navigation Model and a Virtual Graph Structure. The Dynamic Navigation Model supports implicit query processing and dynamic creation of navigation spaces, and introduces a node-link creation rule that considers navigation styles. We propose a mapping methodology between the suggested hypertext model and the relational data model, and suggest a dynamic allocation scheduling technique for query processing servers based on weighted values. We show that the proposed technique enhances the retrieval performance of Web database systems in processing complex queries concurrently.
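The weighted-value scheduling idea above can be sketched as follows: each query processing server carries a weight derived from its capacity and current load, and each new query goes to the server with the highest remaining weight. The weight formula (capacity minus load) and the server figures are hypothetical illustrations, not the paper's actual scheduling function.

```python
# Minimal sketch of weighted dynamic server allocation: route each query
# to the server with the greatest remaining weight (capacity - load).
# The weight formula and server numbers are hypothetical.

def pick_server(servers):
    """servers: {name: (capacity, current_load)} -> best server name."""
    return max(servers, key=lambda s: servers[s][0] - servers[s][1])

def allocate(servers, n_queries):
    """Assign n_queries one at a time, updating load after each assignment."""
    assignments = []
    for _ in range(n_queries):
        s = pick_server(servers)
        cap, load = servers[s]
        servers[s] = (cap, load + 1)   # one query consumes one unit of weight
        assignments.append(s)
    return assignments

pool = {"srv_a": (10, 9), "srv_b": (10, 2)}
plan = allocate(pool, 3)
```

Because loads are updated per assignment, a heavily loaded gateway server stops receiving queries before it becomes a bottleneck, which is the concurrency problem the abstract targets.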
