• Title/Summary/Keyword: Memory Modeling (메모리 모델링)

Search Results: 167

Development of Learning Algorithm using Brain Modeling of Hippocampus for Face Recognition (얼굴인식을 위한 해마의 뇌모델링 학습 알고리즘 개발)

  • Oh, Sun-Moon;Kang, Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.55-62 / 2005
  • In this paper, we propose a face recognition system using HNMA (Hippocampal Neuron Modeling Algorithm), which remodels the cerebral cortex and hippocampal neurons in engineering terms on the principles of the human brain; it can learn the feature vectors of face images very quickly and construct optimized features for each image. The system is composed of two parts: feature extraction, and learning and recognition. The feature-extraction part constructs well-separated features by applying PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) in order. In the learning part, the features of the input image data are labeled into response patterns in the dentate gyrus region, following the order of the hippocampal neuron structure, and noise is removed through associative memory in the CA3 region. The CA1 region, receiving the information from CA3, forms long-term memory learned by the neurons. Experiments measure the recognition rate under face changes, pose changes, and low-quality images. The experimental results compare the feature-extraction and learning method proposed in this paper with other methods and confirm that the proposed method is superior to existing methods.
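As a rough illustration of the PCA-then-LDA feature-extraction stage described in this abstract (not the authors' code; the component count, data shapes, and use of scikit-learn are assumptions for the sketch):

```python
# Sketch of a PCA -> LDA feature-extraction pipeline like the paper's
# first stage. Component counts and data shapes are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def extract_features(X_train, y_train, X_test, n_pca=100):
    # PCA first reduces raw pixel vectors to a compact subspace,
    # so that LDA's within-class scatter matrix is well-conditioned.
    pca = PCA(n_components=n_pca).fit(X_train)
    Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

    # LDA then projects onto directions that maximize class
    # separability (at most n_classes - 1 dimensions).
    lda = LinearDiscriminantAnalysis().fit(Z_train, y_train)
    return lda.transform(Z_train), lda.transform(Z_test)
```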

A Study on the RFID Biometrics System Based on Hippocampal Learning Algorithm Using NMF and LDA Mixture Feature Extraction (NMF와 LDA 혼합 특징추출을 이용한 해마 학습기반 RFID 생체 인증 시스템에 관한 연구)

  • Oh Sun-Moon;Kang Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.4 s.310 / pp.46-54 / 2006
  • Recently, the importance of personal identification has been increasing with the expansion of on-line commercial transactions and personal ID cards. Although personal ID cards with embedded RFID (Radio Frequency Identification) tags are gradually becoming more common, methods for identifying the card holder are lacking, so automatic methods are needed. Because an RFID tag has a very small memory capacity, an effective feature-extraction method is needed to store personal biometric information, and a new recognition method is needed to compare the features. In this paper, we study a face verification system using a hippocampal neuron modeling algorithm, which remodels hippocampal neurons in engineering terms on the principles of the human brain; it can learn the feature vectors of face images very quickly and construct optimized features for each image. The system is composed of two main parts: feature extraction using a mixture of NMF (Non-negative Matrix Factorization) and LDA (Linear Discriminant Analysis), and hippocampal neuron modeling and recognition. Simulation experiments measure the recognition rate under face changes, pose changes, and low-quality images. From the results of the experiments, we compare the feature-extraction and learning method proposed in this paper with other methods and confirm that the proposed method is superior to the existing methods.
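A minimal sketch of an NMF-followed-by-LDA feature extractor in the spirit of the mixture described above (the composition order, component count, and scikit-learn usage are assumptions, not the authors' implementation):

```python
# Sketch: NMF learns non-negative, parts-based face components; LDA then
# compresses the part activations into a short, class-discriminative code
# small enough to store on an RFID tag. All sizes are illustrative.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def nmf_lda_features(X_train, y_train, X_test, n_parts=49):
    # NMF requires non-negative input (e.g., raw pixel intensities).
    nmf = NMF(n_components=n_parts, init='nndsvda', max_iter=500)
    H_train = nmf.fit_transform(X_train)
    H_test = nmf.transform(X_test)

    # LDA projects the part activations onto discriminative axes.
    lda = LinearDiscriminantAnalysis().fit(H_train, y_train)
    return lda.transform(H_train), lda.transform(H_test)
```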

Data Congestion Control Using Drones in Clustered Heterogeneous Wireless Sensor Network (클러스터된 이기종 무선 센서 네트워크에서의 드론을 이용한 데이터 혼잡 제어)

  • Kim, Tae-Rim;Song, Jong-Gyu;Im, Hyun-Jae;Kim, Bum-Su
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.7 / pp.12-19 / 2020
  • A clustered heterogeneous wireless sensor network is composed of sensor nodes and cluster heads, which are hierarchically organized for different objectives. In such a network, node resources must be managed carefully to enhance network performance under memory and battery capacity constraints. For instance, if interesting events occur frequently in the vicinity of particular sensor nodes, those nodes may receive massive amounts of data. Data congestion can then occur through a memory bottleneck or link disconnection at the cluster heads, because their remaining memory space fills up with that data. In this paper, we use drones as mobile sinks to resolve data congestion, and we model the network, sensor nodes, and cluster heads. We also design a cost function and a congestion indicator to calculate the degree of congestion, and we propose a data congestion map index and a data congestion mapping scheme to deploy drones at optimal points. Using control variables, we explore the relationship between the degree of congestion and the number of drones to be deployed, as well as the number of drones needed to keep the network below a given degree of congestion while staying within communication range. Furthermore, we show that our algorithm outperforms previous work by at least 20% in terms of memory overflow.
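A toy sketch of a memory-occupancy congestion indicator and a greedy drone-placement rule in the spirit of the scheme above (the indicator formula, the threshold, and all field names are assumptions; the paper's actual cost function and mapping scheme may differ):

```python
# Sketch: rank cluster heads by a congestion indicator derived from
# memory occupancy, then greedily place drones at the most congested
# points. The indicator and threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ClusterHead:
    name: str
    used_memory: float   # bytes currently buffered
    capacity: float      # total buffer capacity in bytes

def congestion(ch: ClusterHead) -> float:
    # Degree of congestion as fractional memory occupancy in [0, 1].
    return ch.used_memory / ch.capacity

def place_drones(heads, n_drones, threshold=0.8):
    # Deploy drones at cluster heads whose congestion exceeds the
    # threshold, most congested first, to offload buffered data.
    hot = [ch for ch in heads if congestion(ch) > threshold]
    hot.sort(key=congestion, reverse=True)
    return [ch.name for ch in hot[:n_drones]]

heads = [ClusterHead("CH1", 90, 100), ClusterHead("CH2", 40, 100),
         ClusterHead("CH3", 85, 100)]
print(place_drones(heads, n_drones=2))  # ['CH1', 'CH3']
```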

Adaptive Service Mode Conversion to Minimize Buffer Space Requirement in VOD Server (주문형 비디오 서버의 버퍼 최소화를 위한 가변적 서비스 모드 변환)

  • Won, Yu-Jip
    • Journal of KIISE: Computer Systems and Theory / v.28 no.5 / pp.213-217 / 2001
  • Excessive memory buffer requirements in continuous-media playback are a serious impediment to the widespread use of on-line multimedia services. The skewed access frequency of available video files provides an opportunity to re-use data blocks that have been loaded by one session for later sessions. We present a novel algorithm that minimizes the buffer requirement across multiple concurrent multimedia playback sessions. In continuous-media playback from disk, a certain amount of memory buffer is required to synchronize asynchronous disk read operations with synchronous playback operations. As the aggregate playback bandwidth increases, a larger amount of buffer must be allocated for this synchronization. The focus of this work is to study the asymptotic behavior of the synchronization buffer requirement and to develop an algorithm that copes with this excessive buffer requirement under bandwidth congestion. We argue that in a large-scale continuous-media server it may not be necessary to read the blocks for each session directly from disk. The strength of our work lies in the fact that it dynamically adapts to the disk utilization of the server and finds the optimal way of servicing the individual sessions while minimizing the overall buffer space requirement. The optimality of the proposed algorithm is shown by proof, and the effectiveness and performance of the proposed scheme are examined via simulation.
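A toy sketch of the mode-selection idea described above: a trailing session of the same video can either re-read blocks from disk or be "bridged" by retaining the leading session's blocks in memory, and which is cheaper depends on the inter-session gap and the current disk load (the cost model, gap limit, and utilization limit below are assumptions, not the paper's analysis):

```python
# Sketch: choose the service mode for a trailing playback session.
# 'bridge' keeps the leading session's blocks in buffer until the
# trailing session consumes them (buffer cost grows with the gap);
# 'disk' re-reads them (disk-bandwidth cost). Costs are illustrative.

def choose_mode(gap_sec, playback_rate_bps, disk_utilization,
                util_limit=0.9, max_bridge_gap=10):
    bridge_buffer = gap_sec * playback_rate_bps / 8  # bytes held in memory
    if disk_utilization >= util_limit:
        # Disk is congested: bridging is the only way to admit the session.
        return "bridge", bridge_buffer
    if gap_sec <= max_bridge_gap:
        # Short gap: buffer cost is small, so save disk bandwidth.
        return "bridge", bridge_buffer
    return "disk", 0

print(choose_mode(gap_sec=5, playback_rate_bps=4_000_000,
                  disk_utilization=0.5))  # ('bridge', 2500000.0)
```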


Development of the Hippocampal Learning Algorithm Using Associate Memory and Modulator of Neural Weight (연상기억과 뉴런 연결강도 모듈레이터를 이용한 해마 학습 알고리즘 개발)

  • Oh Sun-Moon;Kang Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.4 s.310 / pp.37-45 / 2006
  • In this paper, we propose MHLA (Modulatory Hippocampus Learning Algorithm), which remodels the operating principles of the hippocampus in the brain. The hippocampus is responsible for auto-associative memory and for controlling the strengthening of long-term and short-term memory. We organize an auto-associative memory as a three-stage system (DG, CA3, CA1) and improve the learning speed by adding a modulator to the long-term memory learning. In the hippocampal system, following the three stages in order, information is given statistical deviation in the dentate gyrus (DG) region and labeled into response patterns; in the CA3 region, the pattern is reorganized by auto-associative memory; and in the CA1 region, the connection weights used for long-term memory are learned quickly by a neural network to which the modulator is applied. To measure the performance of MHLA, PCA (Principal Component Analysis) is applied to face images classified by pose, expression, and picture quality; we then compute feature vectors, train MHLA on them, and measure the recognition rate. From the results of the experiments, we compare the proposed method with other methods and confirm that the proposed method is superior to the existing method.
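A highly simplified sketch of the three-stage DG → CA3 → CA1 flow described above (the sparsification rule, the Hopfield-style cleanup standing in for CA3's associative memory, and the modulated delta-rule update are all assumptions for illustration, not the authors' MHLA):

```python
# Sketch of a DG -> CA3 -> CA1 pipeline. Each stage is a stand-in:
# DG sparsifies, CA3 cleans up noise via a Hopfield-style associative
# memory, CA1 learns output weights with a modulated (scaled) delta rule.
import numpy as np

def dg_sparsify(x, k=10):
    # Keep the k strongest responses, zero the rest (sparse coding).
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = np.sign(x[idx])
    return out

class CA3:
    def __init__(self, dim):
        self.W = np.zeros((dim, dim))
    def store(self, p):
        self.W += np.outer(p, p)          # Hebbian storage
        np.fill_diagonal(self.W, 0)
    def recall(self, p, steps=5):
        for _ in range(steps):            # iterative noise removal
            p = np.sign(self.W @ p + 1e-9)
        return p

def ca1_update(W, p, target, lr=0.1, modulator=2.0):
    # The modulator scales the learning rate, mimicking faster
    # convergence of the long-term-memory weights.
    err = target - W @ p
    return W + modulator * lr * np.outer(err, p)
```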

A Study on Light-weight Algorithm of Large scale BIM data for Visualization on Web based GIS Platform (웹기반 GIS 플랫폼 상 가시화 처리를 위한 대용량 BIM 데이터의 경량화 알고리즘 제시)

  • Kim, Ji Eun;Hong, Chang Hee
    • Spatial Information Research / v.23 no.1 / pp.41-48 / 2015
  • BIM technology captures data over the whole life cycle of a facility through 3D modeling, so a single building produces huge files containing massive amounts of data. One such format is IFC, the standard format, and processing its large-scale geometry and object property information is problematic: large data slows rendering and burdens the graphics card, making large-scale data inefficient to visualize on screen for the user. Lightweighting of large-scale BIM data is therefore essential for both the processing and the quality of the program. This paper surveys and reviews lightweighting techniques from domestic and international research. To control and visualize large-scale BIM data effectively, we propose and verify a technique that optimizes the characteristics of BIM data. For operating the large-scale data of a facility on a web-based GIS platform, the quality of screen transitions from the user's perspective and effective memory operation were secured.

Influence and Application of an External Variable Magnetic Field on the Aqueous HCl Solution Behavior: Experimental Study and Modelling Using the Taguchi Method (염산 수용액 거동에 대한 가변 외부 자기장의 적용과 영향: 실험 연구 및 Taguchi 법을 이용한 모델링)

  • Hashemizadeh, Abbas;Ameri, Mohammad Javad;Aminshahidy, Babak;Gholizadeh, Mostafa
    • Applied Chemistry for Engineering / v.29 no.2 / pp.215-224 / 2018
  • The influence of a magnetic field on the behaviour of 5, 10, and 15 wt% (1.5, 3, and 4.5 M) HCl solutions, which have widespread applications in petroleum well acidizing, was investigated under various conditions. Differences in the pH of magnetized hydrochloric acid compared to that of normal hydrochloric acid were measured. The Taguchi design-of-experiments (DoE) method was used to model the effects of the magnetic field intensity, concentration, velocity, and temperature of the acid, in addition to the elapsed time. The experimental results showed that the magnetic field decreases the [$H^+$] concentration of hydrochloric acid by up to 42% after magnetization. Increasing the magnetic field intensity (28% contribution), concentration (42% contribution), and velocity of the acid increases the effect of magnetic treatment. The results also demonstrated that the acid magnetization was not influenced by fluid velocity and heating, and that the acid preserves its magnetic memory over time. The optimum combination of factors with respect to the largest change in [$H^+$] concentration was an acid concentration of 10% and an applied magnetic field of 4,300 Gauss. Owing to the reduction of the HCl reaction rate under magnetization, magnetized HCl can be proposed as a cost-effective and reliable alternative retarder in the matrix acidizing of hydrocarbon (crude oil and natural gas) wells.
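As a generic illustration of how percent contributions like the 28% and 42% figures above are obtained in a Taguchi analysis (this is standard sum-of-squares bookkeeping over an orthogonal array, not the paper's data; all inputs below are placeholders):

```python
# Sketch: percent contribution of each factor in a Taguchi analysis,
# computed from sums of squares of factor-level means.
import numpy as np

def percent_contribution(levels, response):
    # levels:   (n_runs, n_factors) array of factor-level indices
    # response: (n_runs,) measured response for each run
    total_ss = np.sum((response - response.mean()) ** 2)
    contrib = []
    for f in range(levels.shape[1]):
        ss = 0.0
        for lv in np.unique(levels[:, f]):
            r = response[levels[:, f] == lv]
            # Between-level sum of squares for this factor.
            ss += len(r) * (r.mean() - response.mean()) ** 2
        contrib.append(100 * ss / total_ss)
    return contrib
```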

A Comprehensive Groundwater Modeling using Multicomponent Multiphase Theory: 1. Development of a Multidimensional Finite Element Model (다중 다상이론을 이용한 통합적 지하수 모델링: 1. 다차원 유한요소 모형의 개발)

  • Joon Hyun Kim
    • Journal of Korea Soil Environment Society / v.1 no.1 / pp.89-102 / 1996
  • An integrated model is presented to describe underground flow and mass transport, using a multicomponent multiphase approach. The comprehensive governing equation is derived by considering mass and force balances of chemical species over four phases (water, oil, air, and soil) in a schematic elementary volume. Compact and systematic notations for the relevant variables and equations are introduced to facilitate the inclusion of complex migration and transformation processes and of variable spatial dimensions. The resulting nonlinear system is solved by a multidimensional finite element code. The developed code, with dynamic array allocation, is sufficiently flexible to run across a wide spectrum of computers, including an IBM ES 9000/900 vector facility, an SP2 cluster machine, Unix workstations, and PCs, for one-, two-, and three-dimensional problems. To reduce computation time and storage requirements, the system equations are decoupled and solved using a banded global matrix solver, with vector and parallel processing on the IBM 9000. To avoid numerical oscillations in the nonlinear problems with convection-dominated transport, the techniques of upstream weighting, mass lumping, and element-wise parameter evaluation are applied. The instability and convergence criteria of the nonlinear problems are studied for the one-dimensional analogues of FEM and FDM. The modeling capability is demonstrated in the simulation of three-dimensional composite multiphase TCE migration. The comprehensive simulation features of the code are presented in a companion paper in this issue for specific groundwater flow and contamination problems.
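A small illustration of one of the stabilization techniques named above, mass lumping, which replaces the consistent FEM mass matrix with a diagonal one (a generic row-sum lumping sketch for 1D linear elements; the paper's actual multiphase formulation is not shown in the abstract):

```python
# Sketch: row-sum mass lumping for a 1D linear-element consistent mass
# matrix. Lumping diagonalizes M, which damps the spurious oscillations
# that appear in convection-dominated transport.
import numpy as np

def consistent_mass_1d(n_elems, h):
    # Assemble the consistent mass matrix from linear elements of size h.
    M = np.zeros((n_elems + 1, n_elems + 1))
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # element matrix
    for e in range(n_elems):
        M[e:e+2, e:e+2] += Me
    return M

def lump(M):
    # Row-sum lumping: move each row's total onto the diagonal.
    return np.diag(M.sum(axis=1))

M = consistent_mass_1d(n_elems=4, h=0.25)
print(lump(M).diagonal())  # nodal lumped masses
```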


A Cost-effective Control Flow Checking using Loop Detection and Prediction (루프 검출 및 예측 방법을 적용한 비용 효율적인 실시간 분기 흐름 검사 기법)

  • Kim Gunbae;Ahn Jin-Ho;Kang Sungho
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.12 / pp.91-102 / 2005
  • Recently, concurrent error detection for processors has become important, but adding concurrent error detection capability imposes too much overhead on the system. In this paper, a new approach to resolving the problems of concurrent error detection is proposed. A loop detection scheme is introduced to reduce repetitive loop iterations and memory accesses. To reduce the memory overhead, an offset is used to calculate the target address of a branching node. Performance evaluation shows that the new architecture has lower memory overhead and a lower frequency of memory accesses than previous works. In addition, the new architecture provides the same error coverage and requires a nearly constant memory size regardless of the size of the application program. Consequently, the proposed architecture can be used as a cost-effective method to detect control-flow errors in commercial off-the-shelf products.
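A toy sketch of signature-based control-flow checking, the general family this scheme belongs to (the edge table, the loop-collapsing check, and the example blocks are assumptions for illustration, not the paper's architecture):

```python
# Sketch: a watchdog checks that each runtime transition between basic
# blocks matches the statically known control-flow graph. Collapsing
# legal self-loop edges into one cheap check mimics the loop-detection
# idea of reducing repeated checks during loop iteration.
VALID_EDGES = {            # static CFG: block -> allowed successors
    "B0": {"B1"},
    "B1": {"B1", "B2"},    # B1 -> B1 is a loop back-edge
    "B2": set(),
}

def check_trace(trace):
    prev = None
    for block in trace:
        if prev is not None:
            if prev == block and block in VALID_EDGES[prev]:
                continue   # legal loop iteration: skip repeated work
            if block not in VALID_EDGES[prev]:
                return f"control-flow error: {prev} -> {block}"
        prev = block
    return "ok"

print(check_trace(["B0", "B1", "B1", "B1", "B2"]))  # ok
print(check_trace(["B0", "B2"]))                    # control-flow error
```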

Improving the I/O Performance of Disk-Based Graph Engine by Graph Ordering (디스크 기반 그래프 엔진의 입출력 성능 향상을 위한 그래프 오더링)

  • Lim, Keunhak;Kim, Junghyun;Lee, Eunjae;Seo, Jiwon
    • KIISE Transactions on Computing Practices / v.24 no.1 / pp.40-45 / 2018
  • With the advent of big data and social networks, large-scale graph processing has become a popular research topic. Recently, an optimization technique called Gorder was proposed to improve the performance of in-memory graph processing. The technique improves performance by optimizing the graph layout in memory for better cache locality. However, because it is designed for in-memory graph processing systems, the technique is not suitable for disk-based graph engines, and the cost of applying it is significantly high. To solve this problem, we propose a new graph ordering called I/O Order. I/O Order considers the characteristics of I/O accesses on SSDs and HDDs to improve the performance of disk-based graph engines. In addition, the algorithmic complexity of I/O Order is low compared to Gorder, so applying I/O Order is cheaper. I/O Order reduces the cost of pre-processing by up to 9.6 times compared to Gorder, while its performance on low-locality graph algorithms is still up to 2 times higher than random ordering.
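As a generic illustration of graph (re)ordering, the family to which both Gorder and I/O Order belong, here is a simple BFS-based relabeling that places neighboring vertices at nearby IDs (a textbook locality heuristic, not the paper's I/O Order algorithm):

```python
# Sketch: BFS-based vertex relabeling. Vertices visited together receive
# consecutive IDs, so edges tend to connect nearby positions, improving
# locality of both cache lines and disk pages.
from collections import deque

def bfs_order(adj):
    # adj: dict mapping vertex -> list of neighbors
    new_id, order = {}, 0
    for src in adj:                     # cover disconnected components
        if src in new_id:
            continue
        new_id[src] = order; order += 1
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in new_id:
                    new_id[v] = order; order += 1
                    q.append(v)
    return new_id

adj = {0: [3, 5], 3: [0, 5], 5: [0, 3, 7], 7: [5]}
print(bfs_order(adj))  # {0: 0, 3: 1, 5: 2, 7: 3}
```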