• Title/Summary/Keyword: Embedded Memory


A Library for Object-to-Graph Mapping with Annotations in Java

  • Ji-Woong Choi
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.10
    • /
    • pp.219-228
    • /
    • 2024
  • In this paper, we propose a method for constructing RDF knowledge graphs from objects in object-oriented programs. RML mapping has become the de facto way of generating RDF graphs from heterogeneous data; however, the input to an RML mapping is limited to data in files or databases. Our new RML implementation, designed to overcome this limit, differs from existing RML implementations in two ways. First, it provides a new way to specify mapping rules in the form of special comments, known as annotations, in the source code, because existing works do not provide a means to refer to the specific program elements to which mapping rules should be applied. Second, our work provides the mapping engine as a library, whereas the engines in existing studies run in an independent process; our mapping engine can therefore be easily embedded in other applications to access the in-memory objects to be mapped. In this system paper, we describe the proposed system in detail and present the results of executing the RML test cases to confirm the usefulness of the system.
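The annotation idea can be pictured with a small sketch. The paper's library is written in Java and uses source-code annotations; the Python analogue below (all names such as mapped_class, map_object, and the example IRIs are hypothetical) only illustrates how mapping rules attached to program elements let an in-process engine turn live objects into RDF triples.

```python
# Hypothetical sketch, not the paper's Java library: mapping rules attached to a
# class (here via a decorator standing in for a Java annotation) and an in-process
# engine that emits triples from live objects.

def mapped_class(subject_template):
    """Record a subject IRI template on the class, analogous to a class-level annotation."""
    def wrap(cls):
        cls._subject_template = subject_template
        return cls
    return wrap

def map_object(obj, property_iris):
    """Emit (subject, predicate, object) triples from an in-memory object."""
    subject = obj._subject_template.format(**vars(obj))
    return [(subject, property_iris[name], value)
            for name, value in vars(obj).items() if name in property_iris]

@mapped_class("http://example.org/person/{pid}")
class Person:
    def __init__(self, pid, name):
        self.pid, self.name = pid, name

triples = map_object(Person("p1", "Alice"),
                     {"name": "http://xmlns.com/foaf/0.1/name"})
print(triples)  # [('http://example.org/person/p1', 'http://xmlns.com/foaf/0.1/name', 'Alice')]
```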

Development of a Remote Multi-Task Debugger for Qplus-T RTOS (Qplus-T RTOS를 위한 원격 멀티 태스크 디버거의 개발)

  • 이광용;김흥남
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.4
    • /
    • pp.393-409
    • /
    • 2003
  • In this paper, we present a multi-task debugging environment for Qplus-T embedded systems such as Internet information appliances. We propose the structure and functions of a remote multi-task debugging environment that supports effective cross-development, and we enhance the communication architecture between the host and target systems to provide a more efficient cross-development environment. The remote development toolset, called Q+Esto, consists of several independent support tools: an interactive shell, a remote debugger, a resource monitor, a target manager, and a debug agent. Except for the debug agent, all of these support tools reside on the host system. Using the remote multi-task debugger on the host, the developer can spawn and debug tasks on the target run-time system; the debugger can also attach to already-running tasks spawned from the application or from the interactive shell. Application code can be viewed as C/C++ source or as assembly-level code, and the debugger incorporates a variety of display windows for source, registers, local/global variables, stack frames, memory, event traces, and so on. The target manager implements common functions shared by the Q+Esto tools, e.g., host-target communication, object file loading, and management of the target-resident host tools' memory pool and the target system's symbol table. These functions are called Open C APIs, and they greatly improve the extensibility of the Q+Esto toolset. The Q+Esto target manager is responsible for communication between the host and target systems; its counterpart on the target system, called the debug agent, is a daemon task running on the real-time operating system. The debug agent receives debugging requests from the host tools, including the debugger, via the target manager, interprets and executes them, and sends the results back to the host.
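The debug agent's role can be illustrated with a toy request loop. This is not the Q+Esto or Qplus-T protocol; the commands, port, and JSON framing below are invented solely to show the pattern of a target-side daemon that receives requests from the host-side target manager, executes them against target state, and replies.

```python
# Hypothetical sketch of a target-side debug agent loop (not the actual Q+Esto interface).
import socket, json

HANDLERS = {
    "read_mem":  lambda state, req: state["memory"].get(req["addr"], 0),
    "task_list": lambda state, req: list(state["tasks"]),
}

def debug_agent(port=5600):
    state = {"memory": {0x1000: 42}, "tasks": ["shell", "app"]}   # stand-in target state
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    conn, _ = srv.accept()                       # host-side target manager connects
    with conn:
        for line in conn.makefile():             # one JSON request per line
            req = json.loads(line)
            result = HANDLERS[req["cmd"]](state, req)
            conn.sendall((json.dumps({"ok": True, "result": result}) + "\n").encode())

if __name__ == "__main__":
    debug_agent()                                # runs as a daemon, blocking on requests
```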

Timely Sensor Fault Detection Scheme based on Deep Learning (딥 러닝 기반 실시간 센서 고장 검출 기법)

  • Yang, Jae-Wan;Lee, Young-Doo;Koo, In-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.1
    • /
    • pp.163-169
    • /
    • 2020
  • Recently, research on the automation and unmanned operation of machines in industrial settings has been conducted with the advent of AI, big data, and the IoT, the core technologies of the Fourth Industrial Revolution. The machines in these automated processes are controlled, and the processes managed, based on data collected from the sensors attached to them. Conventionally, sensor abnormalities are checked and managed periodically. However, due to the varied environments and situations in industrial fields, inspections can be missed and failures can go undetected, so damage caused by sensor failure is not prevented; moreover, even when a failure does occur, it is not detected immediately, which worsens the process loss. Therefore, to prevent damage caused by sudden sensor failures, it is necessary to identify sensor failures in an embedded system in real time and to diagnose and classify the failure type for a quick response. In this paper, a deep neural network-based fault diagnosis system is designed and implemented on a Raspberry Pi to classify typical sensor fault types such as erratic, hard-over, spike, and stuck faults. To diagnose sensor failures, the network is constructed using the inverted residual block structure proposed in Google's MobileNetV2. The proposed scheme reduces memory usage and improves on the performance of a conventional CNN in classifying sensor faults.
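For reference, a minimal sketch of the MobileNetV2-style inverted residual block the abstract builds on, written in PyTorch (the paper's framework and hyperparameters are not stated, so the channel sizes and expansion factor here are assumptions): expand with a 1x1 convolution, filter depthwise, project back down linearly, and add a skip connection when shapes match.

```python
# Sketch of an inverted residual block; channel sizes and expansion factor are assumptions.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super().__init__()
        hidden = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),                            # 1x1 expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False), # depthwise 3x3
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),                           # linear projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out

# e.g. a window of sensor readings reshaped to an image-like tensor before classification
y = InvertedResidual(16, 16)(torch.randn(1, 16, 8, 8))
```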

Finding the time sensitive frequent itemsets based on data mining technique in data streams (데이터 스트림에서 데이터 마이닝 기법 기반의 시간을 고려한 상대적인 빈발항목 탐색)

  • Park, Tae-Su;Chun, Seok-Ju;Lee, Ju-Hong;Kang, Yun-Hee;Choi, Bum-Ghi
    • Journal of The Korean Association of Information Education
    • /
    • v.9 no.3
    • /
    • pp.453-462
    • /
    • 2005
  • Recently, due to technical improvements in storage devices and networks, the amount of data is increasing rapidly, and it is necessary to find the knowledge embedded in a data stream as quickly as possible. The huge volumes of data in a stream are created continuously and change quickly. Various algorithms for finding frequent itemsets in a data stream have been actively proposed, but current research does not offer an appropriate method for finding frequent itemsets that reflects the flow of time; it provides only frequent items based on total aggregate counts. In this paper, we propose a novel algorithm for finding frequent itemsets relative to time in a data stream. We also propose a method for storing frequent items and sub-frequent items so as to respect limited memory, and a method for updating time-variant frequent items. The performance of the proposed method is analyzed through a series of experiments. The proposed method can search both frequent itemsets and relative frequent itemsets using only the activity patterns of students at each time slot, and can thus enhance the effectiveness of learning and support the best plan for individual learning.
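A rough sketch of the bookkeeping idea, not the paper's exact algorithm: per time slot, only items that remain frequent or sub-frequent are retained so memory stays bounded, and older slots are decayed so recent behaviour dominates. The thresholds and decay factor below are assumptions.

```python
# Toy per-slot counting with frequent / sub-frequent retention; thresholds are assumptions.
from collections import Counter

def update_slot(history, transactions, freq_th=0.2, sub_th=0.1, decay=0.5):
    counts = Counter()
    for itemset in transactions:                 # one time slot of transactions
        counts.update(itemset)
    n = max(len(transactions), 1)
    history = Counter({k: v * decay for k, v in history.items()})   # age old evidence
    history.update(counts)                                          # merge the new slot
    frequent     = {k for k, v in history.items() if v / n >= freq_th}
    sub_frequent = {k for k, v in history.items() if sub_th <= v / n < freq_th}
    # drop everything below the sub-frequent bound to keep memory bounded
    history = Counter({k: v for k, v in history.items() if k in frequent | sub_frequent})
    return history, frequent, sub_frequent

hist, freq, sub = update_slot(Counter(), [{"a", "b"}, {"a", "c"}, {"a"}])
```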


A Study on GPU Computing of Bi-conjugate Gradient Method for Finite Element Analysis of the Incompressible Navier-Stokes Equations (유한요소 비압축성 유동장 해석을 위한 이중공액구배법의 GPU 기반 연산에 대한 연구)

  • Yoon, Jong Seon;Jeon, Byoung Jin;Jung, Hye Dong;Choi, Hyoung Gwon
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.40 no.9
    • /
    • pp.597-604
    • /
    • 2016
  • A parallel algorithm for the bi-conjugate gradient method was developed based on CUDA for parallel computation of the incompressible Navier-Stokes equations. The governing equations were discretized using a splitting P2P1 finite element method. An asymmetric stenotic flow problem was solved to validate the proposed algorithm, and the parallel performance of the GPU was then examined by measuring elapsed times. Further, GPU performance for sparse matrix-vector multiplication was investigated with a matrix from a fluid-structure interaction problem. A kernel was written to compute the inner product of each row of the sparse matrix with a vector simultaneously, and the kernel was optimized using both parallel reduction and memory coalescing. In constructing the kernel, the effect of the warp on the parallel performance of the present CUDA implementation was also examined. The present GPU computation was more than 7 times faster than a single CPU in double precision.
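The kernel's core idea, each row of the CSR matrix forming an independent inner product with the vector, can be sketched on the CPU side (the paper's actual kernel is CUDA, where these per-row sums become parallel reductions over coalesced memory accesses).

```python
# CPU-side sketch of row-wise CSR sparse matrix-vector multiplication.
import numpy as np
from scipy.sparse import csr_matrix

def spmv_rowwise(A: csr_matrix, x: np.ndarray) -> np.ndarray:
    y = np.zeros(A.shape[0])
    for row in range(A.shape[0]):
        start, end = A.indptr[row], A.indptr[row + 1]
        # inner product of one sparse row with x; on the GPU this sum is a parallel reduction
        y[row] = np.dot(A.data[start:end], x[A.indices[start:end]])
    return y

A = csr_matrix(np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 2.0]]))
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(spmv_rowwise(A, x), A @ x)
```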

Formation of an Intestine-Cartilage Composite Graft for Tracheal Reconstruction (기관 재건을 위한 장과 연골의 복합 이식판 개발)

  • Jheon, Sang-Hoon;Lee, Sub;Jung, Jin-Yong;Kong, Jun-Hyuk;Lim, Jeong-Ok;Kim, Yu-Mi;Jin, Chun-Jin;Park, Tae-In;Lee, jae-Ik;Sung, Seok-Whan;Choh, Joong-Haeng
    • Journal of Chest Surgery
    • /
    • v.37 no.6
    • /
    • pp.474-481
    • /
    • 2004
  • Background: Tracheal transplantation is necessary in patients with extensive tracheal stenosis, congenital lesions, and other oncologic conditions, but it bears many critical problems compared to other organ transplantations. The purpose of this study was to develop intestine-cartilage composite grafts for potential application in tracheal reconstruction by free intestinal graft. Material and Method: Hyaline cartilage was harvested from the tracheas of 2-week-old New Zealand White rabbits. Chondrocytes were isolated and cultured for 8 weeks. Cultured chondrocytes were seeded in PLGA scaffolds and mixed in pluronic gel. The chondrocyte-bearing scaffold and gel mixtures were embedded in the submucosal area of the stomach and colon of New Zealand White rabbits weighing 3 kg under general anesthesia. Ten weeks after implantation, the bowels were harvested for evaluation. Result: We identified the implantation sites by gross examination and palpation. The developed cartilage formed a good frame with shape memory. Microscopic examination including special stains showed absorption of the scaffold and cartilage formation, although it was not fully matured. Conclusion: The intestine-cartilage composite graft could be applicable in the future as a tracheal substitute and should be investigated further.

OpenGL ES 1.1 Implementation Using OpenGL (OpenGL을 이용한 OpenGL ES 1.1 구현)

  • Lee, Hwan-Yong;Baek, Nak-Hoon
    • The KIPS Transactions:PartA
    • /
    • v.16A no.3
    • /
    • pp.159-168
    • /
    • 2009
  • In this paper, we present an efficient way of implementing the OpenGL ES 1.1 standard for environments with hardware-supported OpenGL APIs, such as desktop PCs. Although OpenGL ES started from existing OpenGL features, it has become a new three-dimensional graphics library customized for embedded systems through the introduction of fixed-point arithmetic operations, buffer management with fixed-point data type support, completely new texture mapping functionality, and other changes. Currently, it is the official three-dimensional graphics library for Google Android, Apple iPhone, PlayStation 3, etc. In this paper, we improve the arithmetic operations for the fixed-point number representation, which is the most characteristic data type of OpenGL ES. For the conversion of fixed-point data types to the floating-point representations used by the underlying OpenGL, we show an efficient conversion process that still satisfies the OpenGL ES standard requirements. We also introduce a simple memory management scheme to manage the converted data for buffers containing fixed-point numbers. In the case of texture processing, the requirements of the two standards are quite different, so we used a completely new software implementation. Our final OpenGL ES library implementation provides all of the more than 200 functions in the OpenGL ES 1.1 standard and completely passed its conformance test, showing its compliance with the standard. From the efficiency viewpoint, we measured execution times for several OpenGL ES-specific application programs and achieved speedups of up to 33.147 times, making ours the fastest among OpenGL ES implementations in the same category.
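The fixed-point handling can be made concrete with a small sketch: OpenGL ES GLfixed values are 32-bit 16.16 fixed-point numbers, so forwarding them to a floating-point OpenGL backend is essentially a scale by 2^-16 (and the reverse on the way back). The helper names below are illustrative, not the paper's API.

```python
# Sketch of 16.16 fixed-point <-> float conversion and a fixed-point product.
ONE_16_16 = 1 << 16

def fixed_to_float(x: int) -> float:
    return x / ONE_16_16

def float_to_fixed(f: float) -> int:
    return int(round(f * ONE_16_16))

def fixed_mul(a: int, b: int) -> int:
    # multiply, then shift the extra 16 fractional bits back out (in C this product needs 64 bits)
    return (a * b) >> 16

assert fixed_to_float(float_to_fixed(1.5)) == 1.5
assert fixed_mul(float_to_fixed(2.0), float_to_fixed(0.5)) == float_to_fixed(1.0)
```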

A 2.0-GS/s 5-b Current Mode ADC-Based Receiver with Embedded Channel Equalizer (채널 등화기를 내장한 2.0GS/s 5비트 전류 모드 ADC 기반 수신기)

  • Moon, Jong-Ho;Jung, Woo-Chul;Kim, Jin-Tae;Kwon, Kee-Won;Jun, Young-Hyun;Chun, Jung-Hoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.12
    • /
    • pp.184-193
    • /
    • 2012
  • In this paper, a 5-bit 2-GS/s 2-way time-interleaved pipeline ADC for a high-speed serial link receiver is demonstrated. Implemented as a current-mode amplifier, each ADC stage processes tracking and residue amplification simultaneously to achieve a higher sampling rate. In addition, each stage incorporates a built-in 1-tap FIR equalizer, reducing inter-symbol interference (ISI) without extra digital post-processing. The ADC is designed in a 110 nm CMOS technology and consumes 91 mW from a 1.2-V supply. The area excluding the memory block is 0.58 × 0.42 mm². Simulation results show that when the equalizer is enabled, the ADC achieves an SNDR of 25.2 dB and an ENOB of 3.9 bits at a 2.0-GS/s sample rate for a Nyquist input signal. When the equalizer is disengaged, the SNDR is 26.0 dB and the ENOB is 4.0 bits for a 20 MHz-1.0 GHz input signal.
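As a rough signal-level illustration of what a 1-tap FIR equalizer does (the tap value below is an assumption, not a figure from the paper): subtracting a scaled copy of the previous sample suppresses the first post-cursor inter-symbol interference.

```python
# Toy 1-tap FIR equalization of a sampled signal with post-cursor ISI; tap value is assumed.
import numpy as np

def one_tap_fir_equalize(samples: np.ndarray, tap: float = 0.25) -> np.ndarray:
    prev = np.concatenate(([0.0], samples[:-1]))   # x[n-1]
    return samples - tap * prev                    # y[n] = x[n] - tap * x[n-1]

# channel with post-cursor ISI: each symbol leaks 0.25 of itself into the next sample
symbols = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
received = symbols + 0.25 * np.concatenate(([0.0], symbols[:-1]))
print(one_tap_fir_equalize(received, tap=0.25))    # close to the transmitted symbols
```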

Frequently Occurred Information Extraction from a Collection of Labeled Trees (라벨 트리 데이터의 빈번하게 발생하는 정보 추출)

  • Paik, Ju-Ryon;Nam, Jung-Hyun;Ahn, Sung-Joon;Kim, Ung-Mo
    • Journal of Internet Computing and Services
    • /
    • v.10 no.5
    • /
    • pp.65-78
    • /
    • 2009
  • The most commonly adopted approach to finding valuable information in tree data is to extract frequently occurring subtree patterns. Because mining frequent tree patterns has a wide range of applications, such as XML mining, web usage mining, bioinformatics, and network multicast routing, many algorithms have recently been proposed to find such patterns. However, existing tree mining algorithms suffer from several serious pitfalls when finding frequent tree patterns in massive tree datasets. Some of the major problems are due to (1) modeling data as hierarchical tree structures, (2) the computationally high cost of candidate maintenance, (3) repeated scans of the input dataset, and (4) high memory dependency. These problems stem from the fact that most of these algorithms are based on the well-known apriori algorithm and use the anti-monotone property for candidate generation and frequency counting. To solve these problems, we base our method on a pattern-growth approach rather than the apriori approach, and we choose to extract maximal frequent subtree patterns instead of all frequent subtree patterns. The proposed method not only removes the pruning step for infrequent subtrees but also entirely eliminates the problem of generating candidate subtrees, and hence significantly improves the whole mining process.
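The pattern-growth idea can be shown on a toy itemset example (the paper mines labeled subtrees, which needs additional machinery for tree encoding and occurrence matching): patterns are grown item by item over projected databases instead of generating and testing apriori-style candidates, and only patterns with no frequent superset are kept.

```python
# Toy pattern-growth miner over itemsets with a final maximality filter.
def grow(db, prefix, min_sup, results):
    last = prefix[-1] if prefix else ""
    items = sorted({i for t in db for i in t if i > last})   # lexicographic extensions
    extended = False
    for item in items:
        projected = [t for t in db if item in t]             # projected database
        if len(projected) >= min_sup:
            extended = True
            grow(projected, prefix + [item], min_sup, results)
    if prefix and not extended:
        results.append((tuple(prefix), len(db)))             # cannot be grown further

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
results = []
grow(transactions, [], 2, results)
maximal = [p for p in results if not any(set(p[0]) < set(q[0]) for q in results)]
print(maximal)   # frequent patterns with no frequent superset, with their supports
```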


A Study on the RFID Biometrics System Based on Hippocampal Learning Algorithm Using NMF and LDA Mixture Feature Extraction (NMF와 LDA 혼합 특징추출을 이용한 해마 학습기반 RFID 생체 인증 시스템에 관한 연구)

  • Oh Sun-Moon;Kang Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.4 s.310
    • /
    • pp.46-54
    • /
    • 2006
  • Recently, the importance of personal identification has been increasing with the expansion of on-line commercial transactions and personal ID cards. Although personal ID cards with embedded RFID (Radio Frequency Identification) tags are gradually becoming more common, the means for identifying a person remain deficient, so automatic methods are needed. Because an RFID tag has a very small memory capacity, an effective feature extraction method is needed to store personal biometric information, and a new recognition method is needed to compare the extracted features. In this paper, we study a face verification system using a hippocampal neuron modeling algorithm, which models the hippocampal neurons, a principle of the human brain, in engineering terms; it can learn the feature vectors of face images very quickly and construct optimized features for each image. The system is composed of two main parts: feature extraction using a mixture of the NMF (Non-negative Matrix Factorization) and LDA (Linear Discriminant Analysis) algorithms, and hippocampal neuron modeling and recognition. Simulation experiments confirm the recognition rates under face changes, pose changes, and low-quality images. From the results of the experiments, we can compare the feature extraction and learning method proposed in this paper with other methods and confirm that the proposed method is superior to existing methods.
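A minimal sketch of an NMF plus LDA mixture feature extraction using scikit-learn (the paper's own implementation, data, and the hippocampal learning stage are not reproduced; the dimensions below are placeholders): NMF gives a parts-based, non-negative representation, and LDA then projects it onto directions that best separate the enrolled identities.

```python
# Sketch of NMF + LDA feature extraction on stand-in data; dimensions are placeholders.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
faces = rng.random((60, 32 * 32))            # stand-in for vectorized face images
labels = np.repeat(np.arange(6), 10)         # 6 identities, 10 images each

nmf = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
parts = nmf.fit_transform(faces)             # non-negative, parts-based coefficients

lda = LinearDiscriminantAnalysis(n_components=5)
features = lda.fit_transform(parts, labels)  # discriminant features for matching

print(features.shape)                        # (60, 5): compact feature vectors per image
```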