• Title/Summary/Keyword: memory expansion technology (메모리 확장기술)

Search results: 95 (processing time: 0.022 seconds)

Contents Conversion System for Mobile Devices using Light-Weight Web Document (웹 문서 경량화에 의한 모바일용 콘텐츠 변환 시스템)

  • Kim Jeong-Hee;Kwon Hoon;Kwak Ho-Young
    • Journal of Internet Computing and Services / v.6 no.6 / pp.13-22 / 2005
  • This paper develops a system for converting web contents into mobile contents that can be used on mobile devices. Because web contents are generally cluttered with pop-up ad windows, unnecessary images, and useless links, they are difficult to display efficiently on common mobile devices, which have lower bandwidth, less memory, and much smaller screens than desktop environments. It is also troublesome for mobile device users to access such contents directly. Thus, there has been great demand for a method of extracting useful, adequate contents from web documents and optimizing them for use on mobile phones. This paper proposes a system based on WAP 2.0 and XHTML Basic, the content creation language adopted for WAP 2.0. The system converts web contents by applying the conversion rules of the existing filtering method after reducing the size of the web documents. The adopted conversion rules operate on XHTML Basic's module units, so modification and deletion can be carried out with ease, and they are defined in an XSL document written in XSLT to maintain the extensibility of the conversion and the validity of the documents. To work efficiently with WAP 1.X legacy services, the system includes modules that analyze CC/PP profile information and mobile device headers.

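The rule-based filtering described above lends itself to a small illustration. Below is a minimal sketch, not the authors' actual conversion rules, of applying an XSLT stylesheet that copies a document through while dropping images and ad containers; the element names and the `ad` class are hypothetical, and lxml stands in for whatever XSLT processor the system uses.

```python
# Minimal sketch of rule-based web-to-mobile filtering with XSLT via lxml.
# The stylesheet below is illustrative, not the paper's actual rule set.
from lxml import etree

XSL = b"""<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- identity template: copy all nodes and attributes through unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- filtering rules: drop images and (hypothetical) ad containers -->
  <xsl:template match="img"/>
  <xsl:template match="div[@class='ad']"/>
</xsl:stylesheet>"""

def to_mobile(xhtml_bytes: bytes) -> bytes:
    """Apply the filtering stylesheet to one XHTML document."""
    transform = etree.XSLT(etree.XML(XSL))
    return etree.tostring(transform(etree.XML(xhtml_bytes)), pretty_print=True)

if __name__ == "__main__":
    page = b"<html><body><div class='ad'>buy!</div><p>News <img src='x.gif'/>text</p></body></html>"
    print(to_mobile(page).decode())   # ad div and img are stripped
```

Because each rule is a self-contained template matching one module, adding or removing a rule means adding or deleting one template, which mirrors the modifiability the abstract claims for XHTML Basic module units.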

Development of Micro Thermal Image Acquisition System (마이크로 열화상 계측 시스템의 IOT 모듈화 개발)

  • Lee, Jun-Yeob;Oh, Jong-woo;Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2017.04a / pp.169-169 / 2017
  • A factor that must be considered in analyzing the thermal environment inside a smart pig house is the change in the radiant energy of the livestock: it is the target of thermal-environment control, yet recursively it is also a driver of thermal-environment change. To analyze this radiant energy, we developed a thermal image acquisition system that can be easily deployed inside a facility. In addition to the compact micro thermal imaging system itself, modularization based on IoT (Internet of Things) technology was developed in parallel. The thermal sensor is a Lepton™ module (500-0690-00, FLIR, Goleta, CA, USA), which offers 0.05 °C resolution in the LWIR (longwave infrared) band of 8 µm to 14 µm. High-speed SPI (Serial Peripheral Interface) communication at 2 MHz with a microprocessor board (NanoPi NEO Air, FriendlyARM, CA, USA) enables acquisition at 9 Hz. To extend the communication capability of the unit measurement system, composed of the thermal sensor and the microcontroller, three information-transfer scenarios were designed: 1) a standalone mode that acquires thermal images and stores them in built-in memory; 2) a mode in which a nearby user-interface module connects to a type-1 standalone module and streams the thermal image to a screen in real time; and 3) a mode that, in parallel with the type-2 display module, shows the image on a mobile device over local Wi-Fi. To build this hierarchical, modular system, the open-source software hostapd 2.5 (http://w1.fi/hostapd) was installed on the type-1 module, so that even without external Internet access the type-1 module alone provides AP (access point) functionality and manages connections from nearby type-2 modules and type-3 mobile devices. The type-2 module can either connect in turn to multiple type-1 modules or itself act as the AP and accept connections from type-1 modules; the choice depends on the measurement matrix of the system. Both type-1 and type-2 modules were developed to run TCP/IP listener and client services in parallel. For the user interface on type-3 mobile devices, a general-purpose Android GUI application was linked with socket communication. A single thermal frame amounts to 9,600 bytes (= 80 × 60 × 2 bytes), and a Wi-Fi transfer took a variable number of transmissions, roughly two to six per frame. Across all elements of the modular system, in which sensing and data transfer run in parallel, the sensor's maximum acquisition rate of 9 Hz was generally achievable. Future work using this system will study the dynamics between the thermal radiation of individual animals and the thermal environment of the pig house.

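The frame-transfer figures in the abstract (80 × 60 pixels × 2 bytes = 9,600 bytes per frame, delivered in roughly two to six Wi-Fi transmissions) can be illustrated with a minimal sketch of the scenario-2 path: a TCP listener on the unit module and a viewer that reassembles the frame. This is an illustration, not the authors' code; the host and port values are placeholders.

```python
# Minimal sketch: one unit module serves a single 80x60, 16-bit Lepton
# frame (9,600 bytes) over TCP; the viewer loops recv() until the full
# frame arrives, since Wi-Fi splits the payload into several chunks.
import socket

FRAME_BYTES = 80 * 60 * 2   # one LWIR frame: 80 x 60 pixels x 2 bytes

def serve_frame(frame: bytes, host: str = "0.0.0.0", port: int = 5000):
    """TCP listener on the unit module; sends one frame per connection."""
    assert len(frame) == FRAME_BYTES
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(frame)

def fetch_frame(host: str, port: int = 5000) -> bytes:
    """Viewer side: accumulate chunks until the 9,600-byte frame is whole."""
    buf = bytearray()
    with socket.socket() as cli:
        cli.connect((host, port))
        while len(buf) < FRAME_BYTES:
            chunk = cli.recv(4096)   # typically 2-6 chunks per frame
            if not chunk:
                raise ConnectionError("peer closed before full frame")
            buf.extend(chunk)
    return bytes(buf)
```

Looping on `recv()` rather than assuming one read per frame is exactly what the variable 2-to-6 transmission count implies: TCP preserves byte order but not message boundaries.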

Study on Development of HDD Integrity Verification System using FirmOS (FirmOS를 이용한 HDD 무결성 검사 시스템 개발에 관한 연구)

  • Yeom, Jae-Hwan;Oh, Se-Jin;Roh, Duk-Gyoo;Jung, Dong-Kyu;Hwang, Ju-Yeon;Oh, Chungsik;Kim, Hyo-Ryoung;Shin, Jae-Sik
    • Journal of the Institute of Convergence Signal Processing / v.18 no.2 / pp.55-61 / 2017
  • In radio astronomy, high-capacity HDDs are widely used to record the huge amounts of observational data. For VLBI observations, observing speeds increase and ever larger volumes of data must be stored as the observations expand to wider bandwidths. Because the HDDs are used heavily, failures occur frequently, and recovering from a failure takes considerable time; if a failed HDD remains in use, observational data are lost, and purchasing replacement HDDs is expensive. In this study, we developed an integrity verification system for Serial ATA (SATA) HDDs using FirmOS. FirmOS is an operating system developed to run exclusively for a specific purpose on a system with a general-purpose server board and CPU. The developed system writes specific data patterns into a physical area of the SATA HDD and reads them back under FirmOS, and it verifies the disk's integrity by comparing the data returned by the HDD controller with the stored pattern. Using the developed system, it is easy to determine whether a disk pack used in VLBI observations has errors, which is very useful for improving observation efficiency. This paper describes the design, configuration, and testing of the developed SATA HDD integrity verification system.

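A minimal sketch of the write/read/compare cycle the system performs is shown below. It is not FirmOS code: the block size and pattern are illustrative, and it should be pointed at a scratch file rather than a live disk device, since writing to a real /dev node destroys data.

```python
# Minimal sketch of pattern-based disk verification: write a known
# pattern across sectors, read it back, and report mismatching sectors.
# BLOCK and PATTERN are assumptions; use a test file, not a live disk.
import os

BLOCK = 512                       # SATA logical sector size (assumed)
PATTERN = bytes(range(256)) * 2   # one 512-byte test pattern

def verify(path: str, n_blocks: int) -> list[int]:
    """Write PATTERN to n_blocks sectors, re-read, return bad sector list."""
    bad = []
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        for i in range(n_blocks):
            os.lseek(fd, i * BLOCK, os.SEEK_SET)
            os.write(fd, PATTERN)
        os.fsync(fd)              # flush so the read-back hits the medium
        for i in range(n_blocks):
            os.lseek(fd, i * BLOCK, os.SEEK_SET)
            if os.read(fd, BLOCK) != PATTERN:
                bad.append(i)
    finally:
        os.close(fd)
    return bad

if __name__ == "__main__":
    print(verify("testfile.bin", 1024))   # scratch file, not /dev/sdX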

Change Detection for High-resolution Satellite Images Using Transfer Learning and Deep Learning Network (전이학습과 딥러닝 네트워크를 활용한 고해상도 위성영상의 변화탐지)

  • Song, Ah Ram;Choi, Jae Wan;Kim, Yong Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.199-208 / 2019
  • As the number of available satellites increases and technology advances, image information outputs are becoming increasingly diverse and a large amount of data is accumulating. In this study, we propose a change detection method for high-resolution satellite images that uses transfer learning and a deep learning network to overcome the limitation of insufficient training data by reusing pre-trained information. The deep learning network used in this study comprises convolutional layers, which extract spatial and spectral information, and convolutional long short-term memory (ConvLSTM) layers, which analyze the time-series information. To reuse the learned information, the two initial convolutional layers of the change detection network are initialized with values learned from 40,000 patches of the ISPRS (International Society for Photogrammetry and Remote Sensing) dataset. In addition, 2D (two-dimensional) and 3D (three-dimensional) kernels were compared to find the structure best suited to high-resolution satellite images. The experimental results for KOMPSAT-3A (KOrean Multi-Purpose SATellite-3A) images show that this change detection method effectively extracts changed/unchanged pixels while remaining less sensitive to spurious changes caused by shadow and relief displacement. The change detection accuracy at two sites was further improved by using 3D kernels, because a 3D kernel considers the spectral information as well as the spatial information. This study indicates that changes in high-resolution satellite images can be detected effectively using the constructed image information and deep learning network. In future work, the pre-trained change detection network will be applied to newly obtained images to extend the scope of application.
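
The transfer step, initializing the first two convolutional layers from pre-trained weights, can be sketched as follows. PyTorch is an assumption (the paper does not name a framework), the four input bands, layer sizes, and checkpoint filename are illustrative, and the ConvLSTM layers the paper uses are omitted for brevity.

```python
# Minimal transfer-learning sketch: load pre-trained weights into the
# first two conv layers only, then train the rest from scratch.
import torch
import torch.nn as nn

class ChangeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(4, 32, 3, padding=1)   # 4 spectral bands (assumed)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.head = nn.Conv2d(64, 2, 1)               # changed / unchanged
    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.head(x)

net = ChangeDetector()
# hypothetical checkpoint holding conv1/conv2 weights learned on ISPRS patches
pre = torch.load("pretrained.pth")
net.load_state_dict({k: v for k, v in pre.items()
                     if k.startswith(("conv1", "conv2"))}, strict=False)
```

Filtering the state dict and passing `strict=False` loads only the two pre-trained layers and leaves the remaining layers at their fresh initialization, which is the partial-transfer behavior the abstract describes.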

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the wide range of information generated while computer systems operate, are used for many purposes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, a separate log processing system is needed to gather, store, categorize, and analyze the log data generated while processing clients' business. However, in existing computing environments it is difficult to realize the flexible storage expansion needed for massive amounts of unstructured log data and to execute the many functions required to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system can automatically restore itself and continue operating after a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides an effective way to process unstructured log data. Relational databases such as MySQL have strict, complex schemas that are inappropriate for unstructured log data and cannot easily distribute stored data across additional nodes when the amount of data increases rapidly. NoSQL forgoes the complex computations that relational databases provide but can easily scale through node dispersion as data grow rapidly; it is a non-relational database with a structure well suited to unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, a representative document-oriented database with a free schema structure. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it supports flexible node expansion when data grow rapidly, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated across each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies them according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analyses performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of aggregated log data, and presents them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log processing system that uses only MySQL, measuring log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through MongoDB insert-performance evaluations over various chunk sizes.
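
The collector's routing rule, real-time logs to MySQL and bulk unstructured logs to schema-free MongoDB, can be sketched minimally with pymongo. The database and collection names, the `type` field, and the MySQL placeholder are all hypothetical, not the paper's actual schema.

```python
# Minimal sketch of the log collector's routing: real-time records go to
# the MySQL path, everything else is inserted schema-free into MongoDB.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["logs"]          # hypothetical database/collection

def collect(record: dict):
    """Route one parsed log record by type, as the collector module does."""
    if record.get("type") == "realtime":
        insert_into_mysql(record)      # real-time analysis path (MySQL)
    else:
        logs.insert_one(record)        # flexible-schema insert into MongoDB

def insert_into_mysql(record: dict):
    ...  # MySQL path omitted in this sketch
```

The schema-free `insert_one` is what lets heterogeneous log documents land in one collection without prior schema design, and MongoDB's auto-sharding then spreads the collection across nodes as volume grows.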