• Title/Summary/Keyword: File system


PC-based Control System of Serially Connected Multi-channel Speakers (직렬연결 다채널 스피커의 PC 기반 제어 시스템)

  • Lee, Sun-Yong;Kim, Tae-Wan;Byun, Ji-Sung;Song, Moon-Vin;Chung, Yun-Mo
    • The KIPS Transactions:PartA / v.15A no.6 / pp.317-324 / 2008
  • In this paper, we propose a system that controls existing serially connected multi-channel speakers from an ordinary personal computer over a USB (Universal Serial Bus) interface. The personal computer, acting as the USB host, analyzes a sound source and streams audio data in real time using isochronous transfers, one of the four transfer types provided by USB; channel assignment is performed through bulk transfers. Data sent from the USB host pass through compression and packet-generation stages before being delivered to each speaker. Each speaker detects the digital data addressed to it and regenerates the audio signal through a DAC (Digital-to-Analog Converter). A user can easily select a sound source file and a channel through a GUI on the personal computer.
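
The abstract does not give the packet layout, so the sketch below is only a rough illustration of the compression and packet-generation step: it frames a block of audio with a hypothetical channel header before it would be handed to the USB pipe. The field names, sizes, and use of zlib are assumptions, not the authors' format.

```python
import struct
import zlib

def build_audio_packet(channel_id: int, pcm_samples: bytes, seq: int) -> bytes:
    """Frame one block of audio for a single speaker channel.

    Hypothetical layout: 1-byte channel id, 2-byte sequence number,
    2-byte payload length, then the compressed PCM payload.
    """
    payload = zlib.compress(pcm_samples)          # stand-in for the paper's compression step
    header = struct.pack(">BHH", channel_id, seq & 0xFFFF, len(payload))
    return header + payload

def parse_audio_packet(packet: bytes):
    """Speaker side: recover the channel id and the raw PCM block."""
    channel_id, seq, length = struct.unpack(">BHH", packet[:5])
    pcm = zlib.decompress(packet[5:5 + length])
    return channel_id, seq, pcm

# Example: a block of 16-bit silence addressed to channel 3
raw = b"\x00\x00" * 256
pkt = build_audio_packet(3, raw, seq=1)
assert parse_audio_packet(pkt) == (3, 1, raw)
```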

Application of GeoJSON to Geo-spatial Web Service (지공간정보 웹 서비스에서 GeoJSON 적용)

  • Park, Yong-Jae;Lee, Ki-Won
    • Korean Journal of Remote Sensing / v.24 no.6 / pp.613-620 / 2008
  • Under the Web 2.0 paradigm, the web itself is regarded as a kind of platform: users can run applications on the web for a given purpose much as they would run them on a personal computer. For the web to serve as a platform, web-based or web-recognizable file formats are needed so that applications can exchange various information contents and data. Text-based JSON is a practical format that maps directly to JavaScript on the web, so previously built XML-typed data can be re-expressed in JSON. However, GeoJSON, which handles geo-spatial data sets, is still at a fledgling stage as a standard; it is not yet at a practical level of applicability, and only a few tools and open-source libraries support it. To adopt GeoJSON for future Geo-web applications, users must implement a GeoJSON parser or apply a server-based open-source GIS for their purpose. In this study, preliminary work on applying GeoJSON in a Geo-web service was carried out using the Google Maps API and the OpenLayers library API.
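
For readers unfamiliar with the format, the snippet below builds a minimal GeoJSON FeatureCollection in Python; the coordinates and properties are made-up illustration values, not data from the study.

```python
import json

# A minimal GeoJSON FeatureCollection: one point feature with illustrative
# (longitude, latitude) coordinates and properties.
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {
                "type": "Point",
                "coordinates": [127.0, 37.5],   # hypothetical location
            },
            "properties": {"name": "sample station"},
        }
    ],
}

geojson_text = json.dumps(feature_collection, indent=2)
print(geojson_text)                      # this text can be fed to OpenLayers or Google Maps
parsed = json.loads(geojson_text)        # round-trip check
assert parsed["features"][0]["geometry"]["type"] == "Point"
```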

MapReduce-based Localized Linear Regression for Electricity Price Forecasting (전기 가격 예측을 위한 맵리듀스 기반의 로컬 단위 선형회귀 모델)

  • Han, Jinju;Lee, Ingyu;On, Byung-Won
    • The Transactions of the Korean Institute of Electrical Engineers P / v.67 no.4 / pp.183-190 / 2018
  • Predicting accurate electricity prices is an important task in the electricity trading market. Various approaches to the electricity price forecasting problem have been proposed, and linear regression-based approaches are known to perform best. However, the use of such methods is limited by low accuracy and poor performance. With traditional linear regression it is not practical to find a nonlinear regression model that explains the training data well: if the training data is complex (i.e., individual data sets are small while the feature space is large), it is difficult to find a polynomial function with n terms that fits the data, while a linear model that merely approximates the nonlinear one loses considerable accuracy because it does not reflect the characteristics of the training data. To cope with this problem, we propose a new electricity price forecasting method that divides the entire dataset into multiple splits and finds the best linear regression model for each split. To improve performance, we further reformulate the proposed localized linear regression in a map-and-reduce fashion, a framework for parallel processing of data stored in the Hadoop Distributed File System. Our experimental results show that the proposed model outperforms the existing linear regression model: its accuracy is improved by 45% and it runs 5 times faster than the existing linear regression-based model.
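
As a rough sketch of the localized idea (not the authors' Hadoop implementation), the Python code below partitions the data, fits an ordinary least-squares model per partition, and routes each prediction to the matching local model. The splitting rule and toy data are assumptions for illustration only.

```python
import numpy as np

def fit_local_models(X, y, split_ids):
    """Fit one ordinary least-squares model per split.

    X: (n, d) feature matrix, y: (n,) targets, split_ids: (n,) integer split labels.
    Returns {split_id: weight vector of length d+1 (bias included)}.
    """
    models = {}
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])      # append bias column
    for s in np.unique(split_ids):
        mask = split_ids == s
        w, *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        models[s] = w                                   # "reduce" step: one model per split
    return models

def predict(models, X, split_ids):
    """Route each sample to the local model of its split."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.array([Xb[i] @ models[s] for i, s in enumerate(split_ids)])

# Toy usage: two splits with different linear behaviour
rng = np.random.default_rng(0)
X = rng.random((100, 3))
split_ids = (X[:, 0] > 0.5).astype(int)                 # hypothetical splitting rule
y = np.where(split_ids == 1, X @ [1.0, 2.0, 3.0], X @ [-1.0, 0.5, 4.0])
models = fit_local_models(X, y, split_ids)
print(np.abs(predict(models, X, split_ids) - y).max())  # near zero on this toy data
```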

Chaincode-based File Integrity Verification Model (체인코드 기반의 파일 무결성 검증 모델)

  • Kim, Hyo-Jong;Han, Kun-Hee;Shin, Seung-Soo
    • Journal of the Korea Convergence Society / v.12 no.4 / pp.51-60 / 2021
  • Recent advances in network and hardware technologies have spurred active research, and various network technologies that fuse blockchain technology with security have emerged. We propose a system model that analyzes existing blockchain technologies and verifies the integrity of files using a private blockchain in a restricted environment. The proposed model can be written as chaincode for Hyperledger Fabric, a private blockchain platform, and file integrity can be verified through Hyperledger Explorer, an integrated management platform for private blockchains. The system performance of the proposed model was analyzed from a developer perspective and from a user perspective. The analysis showed compatibility problems among the versions of the various modules required to run the blockchain platform, and that only limited elements such as chaincode status and groups can be checked.
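
Fabric chaincode is normally written in Go or Node.js, and the paper's chaincode is not reproduced here. The Python sketch below only illustrates the client-side idea behind the model: record a file's digest, then later recompute and compare it. The in-memory LEDGER dict is a stand-in for the state that the chaincode would keep on the ledger.

```python
import hashlib

def file_digest(path: str) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for the ledger: in the real model the digest recorded at registration
# time would be stored by chaincode and queried from a Fabric peer.
LEDGER = {}

def register_file(path: str) -> None:
    LEDGER[path] = file_digest(path)

def verify_file(path: str) -> bool:
    """Integrity check: recompute the digest and compare with the recorded one."""
    recorded = LEDGER.get(path)
    return recorded is not None and recorded == file_digest(path)
```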

A Watermarking Algorithm of 3D Mesh Model Using Spherical Parameterization (구면 파라미터기법을 이용한 3차원 메쉬 모델의 워터마킹 알고리즘)

  • Cui, Ji-Zhe;Kim, Jong-Weon;Choi, Jong-Uk
    • Journal of the Korea Institute of Information Security & Cryptology / v.18 no.1 / pp.149-159 / 2008
  • In this paper, we propose a blind watermarking algorithm for 3D mesh models using spherical parameterization. Spherical parameterization is a useful method for 3D data processing: features of the vertex coordinates of a 3D mesh model that cannot be analyzed in an orthogonal (Cartesian) coordinate system can be analyzed and processed in the spherical domain. In this work, the centroid of the 3D model is set as the origin of the spherical coordinate system, the Cartesian coordinates are transformed into spherical coordinates, and spherical parameterization is then applied. The watermark is embedded by adding or modifying vertices after analyzing the geometrical and topological information. The algorithm is robust against typical geometrical attacks such as translation, scaling, and rotation, as well as mesh reordering, file format changes, mesh simplification, and smoothing. In these cases the algorithm can extract about 90-98% of the watermark information from the attacked model, which makes it applicable to games, virtual reality, and rapid prototyping.
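
The embedding rule itself is not detailed enough in the abstract to reproduce, but the coordinate transform it relies on is standard. The sketch below converts mesh vertices to spherical coordinates about the centroid and back; a watermark scheme of this kind would perturb r, theta, or phi between the two steps.

```python
import numpy as np

def to_spherical(vertices: np.ndarray) -> np.ndarray:
    """Convert mesh vertices (n, 3) to spherical coordinates (r, theta, phi)
    about the model centroid, the first step described in the abstract."""
    centered = vertices - vertices.mean(axis=0)          # centroid becomes the origin
    x, y, z = centered[:, 0], centered[:, 1], centered[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    safe_r = np.where(r == 0, 1.0, r)                    # avoid division by zero
    theta = np.arccos(np.clip(z / safe_r, -1.0, 1.0))    # polar angle
    phi = np.arctan2(y, x)                               # azimuth
    return np.stack([r, theta, phi], axis=1)

def to_cartesian(spherical: np.ndarray, centroid: np.ndarray) -> np.ndarray:
    """Inverse transform, restoring the original centroid offset."""
    r, theta, phi = spherical[:, 0], spherical[:, 1], spherical[:, 2]
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return np.stack([x, y, z], axis=1) + centroid
```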

Tracking Data through Tracking Data Server in Edge Computing (엣지 컴퓨팅 환경에서 추적 데이터 서버를 통한 데이터 추적)

  • Lim, Han-wool;Byoun, Won-jun;Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.3 / pp.443-452 / 2021
  • A key property of edge computing is that services are always provided close to the user by moving data between edge servers as the user moves, so data migration between edge servers is frequent. As IoT technology advances and its application areas expand, the amount of generated data also grows, and technology is needed to accurately track and process each piece of data in order to manage the data present in the edge computing environment. Current cloud systems have no disposal mechanism built on tracking of data movement and distribution, so when users request deletion they cannot see where their data currently resides or whether it has been properly removed rather than left behind in the cloud system. In this paper, we propose a tracking data server that records and manages the movement and distribution of data across edge servers and the central cloud in an edge computing environment.
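
The paper's protocol is not given in the abstract; the following is only a conceptual Python sketch of what such a registry might track: every copy and deletion of a data item per server, so a deletion request can be checked against every location that ever held the item. Class and field names are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime, timezone

class TrackingDataServer:
    """Conceptual registry of where each data item currently resides."""
    def __init__(self):
        self.locations = defaultdict(set)   # data_id -> servers currently holding a copy
        self.history = defaultdict(list)    # data_id -> list of (timestamp, event, server)

    def _log(self, data_id, event, server):
        self.history[data_id].append((datetime.now(timezone.utc), event, server))

    def record_copy(self, data_id, server):
        self.locations[data_id].add(server)
        self._log(data_id, "copy", server)

    def record_delete(self, data_id, server):
        self.locations[data_id].discard(server)
        self._log(data_id, "delete", server)

    def fully_removed(self, data_id) -> bool:
        """True only if no edge server or cloud node still holds the item."""
        return not self.locations[data_id]

# Usage: an item copied to two edge servers is only 'fully removed'
# after both report deletion.
t = TrackingDataServer()
t.record_copy("file-42", "edge-A"); t.record_copy("file-42", "edge-B")
t.record_delete("file-42", "edge-A")
assert not t.fully_removed("file-42")
```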

Spatial Data Update Method for Efficient Operation of the Integrated Underground Geospatial Map Management System (지하공간통합지도 관리 시스템의 효율적인 운영을 위한 공간 데이터 갱신 방법)

  • Lee, Bong-Jun;Kouh, Hoon-Joon
    • Journal of Industrial Convergence / v.20 no.7 / pp.57-64 / 2022
  • Various structures exist in underground space, and they are managed as 3D data in the Integrated Underground Geospatial Map Management System (IUGMMS). A worker transmits the integrated map containing changed underground geospatial data to IUGMMS with the completed-documentation submission program and updates it there. However, because the integrated map files are large, both the transmission time and the update time are long. In this paper, we extract and transmit only the changed integrated maps by obtaining and comparing the spatial characteristic information of each integrated map. Experiments show that, because only the integrated maps containing changed underground geospatial data are transmitted, both the transmission time and the update time of the proposed method are shorter than those of the conventional method, reducing the delay in transmitting and updating changed integrated maps.
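
The exact spatial characteristics compared in the paper are not listed in the abstract. The sketch below assumes a hypothetical summary (bounding box, feature count, geometry digest) per map and selects for transmission only the maps whose summary differs from the server's copy.

```python
import hashlib
import json

def characteristics(map_record: dict) -> str:
    """Summarize one integrated-map record as a hash of hypothetical
    spatial characteristics (bounding box, feature count, geometry digest)."""
    summary = {
        "bbox": map_record["bbox"],
        "feature_count": len(map_record["features"]),
        "geometry_digest": hashlib.sha256(
            json.dumps(map_record["features"], sort_keys=True).encode()
        ).hexdigest(),
    }
    return hashlib.sha256(json.dumps(summary, sort_keys=True).encode()).hexdigest()

def changed_maps(local: dict, remote_index: dict) -> list:
    """Return ids of maps whose characteristics differ from the server's copy,
    i.e. the only ones that need to be transmitted and updated."""
    return [mid for mid, rec in local.items()
            if remote_index.get(mid) != characteristics(rec)]
```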

Development of a distributed high-speed data acquisition and monitoring system based on a special data packet format for HUST RF negative ion source

  • Li, Dong;Yin, Ling;Wang, Sai;Zuo, Chen;Chen, Dezhi
    • Nuclear Engineering and Technology / v.54 no.10 / pp.3587-3594 / 2022
  • A distributed high-speed data acquisition and monitoring system for the RF negative ion source at Huazhong University of Science and Technology (HUST) is developed, which consists of data acquisition, data forwarding and data processing. Firstly, the data acquisition modules sample physical signals at high speed and upload the sampling data with corresponding absolute-time labels over UDP, which builds the time correlation among different signals. And a special data packet format is proposed for the data upload, which is convenient for packing or parsing a fixed-length packet, especially when the span of the time labels in a packet crosses an absolute second. The data forwarding modules then receive the UDP messages and distribute their data packets to the real-time display module and the data storage modules by PUB/SUB-pattern message queue of ZeroMQ. As for the data storage, a scheme combining the file server and MySQL database is adopted to increase the storage rate and facilitate the data query. The test results show that the loss rate of the data packets is within the range of 0-5% and the storage rate is higher than 20 Mbps, both acceptable for the HUST RF negative ion source.
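
The paper's fixed-length packet layout is not reproduced in the abstract, so the Python sketch below only illustrates the idea of tagging each sample block with an absolute-time label before UDP upload. The header fields (epoch second, nanosecond offset, channel id, sample count) are assumptions, not the HUST format.

```python
import struct
import time

# Hypothetical fixed-length header: 8-byte absolute epoch second, 4-byte
# nanosecond offset of the first sample, 2-byte channel id, 2-byte sample count,
# followed by 16-bit samples.
HEADER = struct.Struct(">QIHH")

def pack_packet(channel: int, samples: list[int], t_ns: int) -> bytes:
    """Attach an absolute-time label to one block of samples."""
    sec, nsec = divmod(t_ns, 1_000_000_000)
    body = struct.pack(f">{len(samples)}h", *samples)
    return HEADER.pack(sec, nsec, channel, len(samples)) + body

def unpack_packet(packet: bytes):
    """Parse the fixed-length header, then the sample payload."""
    sec, nsec, channel, count = HEADER.unpack_from(packet)
    samples = struct.unpack_from(f">{count}h", packet, HEADER.size)
    return sec, nsec, channel, list(samples)

pkt = pack_packet(channel=1, samples=[0, 1, -1, 32767], t_ns=time.time_ns())
print(unpack_packet(pkt))   # the bytes in pkt would be sent over UDP and fanned out via ZeroMQ PUB/SUB
```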

Analysis and Advice on Cache Algorithms of SSD FTL (SSD FTL 캐시 알고리즘 분석 및 제언)

  • Hyung Bong, Lee;Tae Yun, Chung
    • KIPS Transactions on Computer and Communication Systems / v.12 no.1 / pp.1-8 / 2023
  • It is impossible to overwrite an already allocated page in an SSD, so every write operation requires replacing the page with a clean one. To handle this, SSDs contain an internal flash translation layer (FTL) that maps the logical pages managed by the operating system's file system to the currently allocated physical pages. Pages discarded by write operations must be recycled through initialization (erasure), but since the number of erase cycles is limited, the FTL also provides a caching function that reduces the number of writes in addition to its core page-mapping function. In this study, we focus on FTL cache methodologies for reducing the number of page writes, analyze the related algorithms, and propose a write-only cache strategy. Simulator experiments with the write-only cache show an improvement of up to 29%.
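
The simulator and the precise eviction policy are not described in the abstract; the toy model below is only a sketch of a write-only cache in the general sense named there: writes are buffered and re-writes to the same logical page are absorbed, while evictions count as real flash page writes.

```python
from collections import OrderedDict

class WriteOnlyCache:
    """Toy write-only FTL cache: only written pages are cached, and repeated
    writes to the same logical page are absorbed until eviction or flush."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.dirty = OrderedDict()      # logical page number -> data, in LRU order
        self.flash_writes = 0           # pages actually written to flash

    def write(self, lpn: int, data: bytes):
        if lpn in self.dirty:
            self.dirty.move_to_end(lpn)   # overwrite absorbed: no flash write yet
        elif len(self.dirty) >= self.capacity:
            self._evict()
        self.dirty[lpn] = data

    def _evict(self):
        self.dirty.popitem(last=False)    # drop least recently written page
        self.flash_writes += 1            # this is the real page write to flash

    def flush(self):
        while self.dirty:
            self._evict()
```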

Porting gcc Based eCos OS and PROFINET Communication Stack to IAR (gcc 기반 eCos 운영체제 및 PROFINET 통신 스택의 IAR 포팅 방법)

  • Jin Ho Kim
    • KIPS Transactions on Computer and Communication Systems / v.12 no.4 / pp.127-134 / 2023
  • This paper describes how to port the eCos operating system and a PROFINET communication stack, both developed with gcc, to the IAR compiler. The eCos operating system provides the basic functions needed for PROFINET operation, such as multi-threading, TCP/IP, and device drivers, so it does not need to be changed when developing PROFINET applications. Therefore, in this study we reuse an eCos library built with gcc and link it with a PROFINET communication stack ported to the IAR compiler. Because the gcc and IAR linkers differ, the symbol definitions and constructor addresses must be adjusted using an external tool that generates them from the MAP file. To verify the proposed method, we connected to a Siemens PLC and confirmed that actual I/O operated normally over PROFINET IRT communication. The IAR compiler showed better performance in both compile time and the size of the generated binary. The method proposed in this study is expected to help port not only eCos and PROFINET communication stacks but also various other open-source projects to other compilers.
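
The external tool mentioned above is not published in the abstract. As a hedged sketch of the idea, the Python script below scans a GNU ld MAP file for chosen symbols and emits linker defines that an IAR-built image could consume; both the map-line regex and the symbol names are assumptions, not the paper's exact tool or symbol set.

```python
import re
import sys

# Matches the common GNU ld map lines of the form "    0xADDRESS   symbol".
SYMBOL_LINE = re.compile(r"^\s+0x([0-9a-fA-F]{8,16})\s+(\w+)\s*$")

def extract_symbols(map_path: str, wanted: set[str]) -> dict[str, int]:
    """Collect the addresses of the requested symbols from the MAP file."""
    symbols = {}
    with open(map_path) as f:
        for line in f:
            m = SYMBOL_LINE.match(line)
            if m and m.group(2) in wanted:
                symbols[m.group(2)] = int(m.group(1), 16)
    return symbols

if __name__ == "__main__":
    wanted = {"__CTOR_LIST__", "__CTOR_END__", "cyg_start"}    # hypothetical symbol names
    for name, addr in extract_symbols(sys.argv[1], wanted).items():
        print(f"--define_symbol {name}=0x{addr:08X}")          # options to pass to the IAR linker
```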