• Title/Summary/Keyword: Large-Size Data Processing


A study on the establishment and utilization of large-scale local spatial information using search drones (수색 드론을 활용한 대규모 지역 공간정보 구축 및 활용방안에 관한 연구)

  • Lee, Sang-Beom
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.23 no.1
    • /
    • pp.37-43
    • /
    • 2022
  • Drones, one of the 4th-industrial-revolution technologies now expanding from military to industrial use, are actively employed in National Police Agency search missions and in finding missing persons, reducing both the burden of covering a wide area and the deployment of large-scale search personnel. However, legal review of police drone operation is continuously required, and the importance of advanced systems for such operations and for analyzing captured images in connection with search techniques is increasing at the same time. In this study, to facilitate recording, preservation, and monitoring for precise search, spatial information is constructed from still photographs rather than video data, so that precise search can be performed with high efficiency and the golden time for rescue can be secured. We therefore propose a spatial information construction technique that reduces the resulting data volume by adjusting the completion rate of unnecessary spatial information according to the size of the subject. Through this, the scope of drone search missions over large areas is extended, and the results are intended to serve as basic data for building a drone operation manual for police searches.

Memory Efficient Query Processing over Dynamic XML Fragment Stream (동적 XML 조각 스트림에 대한 메모리 효율적 질의 처리)

  • Lee, Sang-Wook;Kim, Jin;Kang, Hyun-Chul
    • The KIPS Transactions:PartD
    • /
    • v.15D no.1
    • /
    • pp.1-14
    • /
    • 2008
  • This paper is on query processing in mobile devices where memory capacity is limited. When a query against a large volume of XML data is processed on such a device, techniques for fragmenting the XML data into chunks and for streaming and processing them are required. Such techniques make it possible to process queries without materializing the XML data in its entirety. The previous schemes such as XFrag[4], XFPro[5], and XFLab[6] are not scalable with respect to the size of the XML data because they lack proper memory management: once information on XML fragments needed for query processing is stored, it is not deleted even after it is of no further use. As such, when XML fragments are dynamically generated and streamed indefinitely, normal completion of query processing cannot be guaranteed. In this paper, we address the scalability of query processing over a dynamic XML fragment stream, proposing techniques that delete fragment information accumulated during query processing in order to extend the previous schemes. Performance experiments with an implementation showed that our extended schemes considerably outperform the previous ones in memory efficiency and in scalability with respect to the size of the XML data.
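
The eviction idea in the abstract above can be pictured with a small, hypothetical sketch (not the actual XFrag/XFPro/XFLab code): per-fragment state is kept only until every hole that refers to the fragment has been resolved, after which it is deleted so memory stays bounded over an unbounded stream.

```python
# Hypothetical sketch of bounded-memory processing over an XML fragment stream.
# All names are illustrative; the real schemes track richer query-plan state.

class FragmentTable:
    """Keeps per-fragment state only while some hole still refers to it."""

    def __init__(self):
        self.pending = {}   # fragment id -> [partial match state, unresolved hole count]

    def add_fragment(self, frag_id, state, hole_ids):
        self.pending[frag_id] = [state, len(hole_ids)]

    def resolve_hole(self, parent_id):
        """Called when a child fragment filling one hole of parent_id arrives."""
        entry = self.pending.get(parent_id)
        if entry is None:
            return
        entry[1] -= 1
        if entry[1] == 0:          # all holes filled: emit result, free the memory
            emit(entry[0])
            del self.pending[parent_id]

def emit(state):
    print("query result:", state)

table = FragmentTable()
table.add_fragment("f1", state={"path": "/orders/order"}, hole_ids=["h1"])
table.resolve_hole("f1")           # f1's bookkeeping is deleted right after this
```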

A Study on the Improvement of Availability of Distributed Processing Systems Using Edge Computing (엣지컴퓨팅을 활용한 분산처리 시스템의 가용성 향상에 관한 연구)

  • Lee, Kun-Woo;Kim, Young-Gon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.1
    • /
    • pp.83-88
    • /
    • 2022
  • Internet of Things (hereinafter referred to as IoT) related technologies are continuously developing in line with the recent progress of information and communication technologies. An IoT system sends and receives data over the network from various sensors. The data generated by IoT systems can be regarded as big data, in that they are produced in real time and their volume is proportional to the number of installed sensors. Until now, IoT systems have stored, processed, and computed data through centralized processing methods. However, the existing centralized processing server can come under load due to bottlenecks if the deployment grows in size and a large number of sensors are used. Therefore, in this paper, we propose a distributed processing system that applies a data importance-based algorithm aimed at high system availability, in order to efficiently handle the real-time sensor data arising in IoT environments.
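
The abstract does not spell out the importance criterion, so the following sketch assumes a simple deviation-from-running-mean score as a stand-in: readings judged important are handled at the edge node immediately, while routine readings are batched and forwarded to the central server.

```python
# Illustrative only: the importance score, threshold, and batch size are assumptions.

from collections import deque

class EdgeNode:
    def __init__(self, threshold=3.0, window=100, batch_size=50):
        self.history = deque(maxlen=window)
        self.threshold = threshold
        self.batch_size = batch_size
        self.batch = []                       # low-importance readings, sent in bulk

    def importance(self, value):
        if not self.history:
            return 0.0
        mean = sum(self.history) / len(self.history)
        return abs(value - mean)

    def handle(self, value):
        score = self.importance(value)
        self.history.append(value)
        if score > self.threshold:
            self.process_locally(value)       # urgent: handle at the edge right away
        else:
            self.batch.append(value)          # routine: defer to the central server
            if len(self.batch) >= self.batch_size:
                self.forward_batch()

    def process_locally(self, value):
        print("edge alert:", value)

    def forward_batch(self):
        print(f"forwarding {len(self.batch)} routine readings to central server")
        self.batch.clear()
```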

An Efficient Indexing Scheme Considering the Characteristics of Large Scale RDF Data (대규모 RDF 데이터의 특성을 고려한 효율적인 색인 기법)

  • Kim, Kiyeon;Yoon, Jonghyeon;Kim, Cheonjung;Lim, Jongtae;Bok, Kyoungsoo;Yoo, Jaesoo
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.1
    • /
    • pp.9-23
    • /
    • 2015
  • In this paper, we propose a new RDF index scheme that considers the characteristics of large-scale RDF data to improve query processing performance. The proposed scheme creates an S-O index for subjects and objects, since the subjects and objects of RDF triples are used redundantly. To reduce the total size of the index, it builds a separate P index for the relatively small number of predicates in RDF triples. If a query contains a predicate, we first search the P index, since it is relatively small compared to the S-O index; otherwise, we first search the S-O index. Performance evaluation shows that the proposed scheme outperforms the existing scheme in terms of query processing time.
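
A minimal in-memory sketch of the two-index routing described above (plain dictionaries stand in for the paper's index structures, whose physical layout the abstract does not specify):

```python
from collections import defaultdict

class RdfIndex:
    def __init__(self):
        self.so_index = defaultdict(list)   # term used as subject or object -> triples
        self.p_index = defaultdict(list)    # predicate -> triples (few distinct keys)

    def insert(self, s, p, o):
        triple = (s, p, o)
        self.so_index[s].append(triple)
        self.so_index[o].append(triple)
        self.p_index[p].append(triple)

    def query(self, s=None, p=None, o=None):
        # Route to the smaller P index when the predicate is bound, otherwise to S-O.
        if p is not None:
            candidates = self.p_index.get(p, [])
        else:
            key = s if s is not None else o
            candidates = self.so_index.get(key, [])
        return [(ts, tp, to) for (ts, tp, to) in candidates
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

idx = RdfIndex()
idx.insert(":alice", ":knows", ":bob")
idx.insert(":bob", ":worksAt", ":acme")
print(idx.query(p=":knows"))            # probes the P index first
print(idx.query(s=":bob"))              # probes the S-O index
```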

A Study for Quality Improvement of Three-dimensional Body Measurement Data (3차원 인체치수 조사 자료의 품질 개선을 위한 연구)

  • Park, Sun-Mi;Nam, Yun-Ja;Park, Jin-Woo
    • Journal of the Ergonomics Society of Korea
    • /
    • v.28 no.4
    • /
    • pp.117-124
    • /
    • 2009
  • To inspect the quality of data collected from a large-scale body measurement and investigation project, a proper data editing process must be established. Three-dimensional body measurement may contain measuring errors caused by the measurer's proficiency or by changes in the subject's posture. Errors may also arise in the algorithm that converts the information obtained from the three-dimensional scanner into numerical values, and in the processing of the large number of individual records. When such errors are present, the quality of the measured data deteriorates, and they consequently reduce the quality of the statistics computed from it. This study therefore suggests a way to improve the quality of three-dimensional body measurement data by proposing a working procedure that identifies and corrects data errors throughout the whole data processing procedure (collecting, processing, and analyzing) of the 2004 Size Korea Three-dimensional Body Measurement Project. The study was carried out in three stages. First, we detected erroneous data by examining logical relations among variables under each edit rule. Second, we detected suspicious data through independent examination of individual variable values by sex and age. Finally, we examined the scatter-plot matrix of many variables to consider the relationships among them; this simple graphical tool helps us find out whether suspicious data exist in the data set. As a result, we detected some erroneous records in the raw data and found that the main errors came not from the three-dimensional body measurement system itself but from the subjects' original three-dimensional shape data. By correcting these erroneous data, we enhanced the data quality.
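
The three screening stages can be sketched roughly as follows; the column names and the particular edit rule are illustrative assumptions, not the Size Korea definitions.

```python
# Hedged sketch of the three-stage editing procedure described in the abstract.

import pandas as pd
from pandas.plotting import scatter_matrix

df = pd.read_csv("body_measurements.csv")   # hypothetical file and columns

# Stage 1: edit rules -- logical relations among variables
# (example rule: stature must exceed cervical height).
rule_violations = df[df["stature"] <= df["cervical_height"]]

# Stage 2: per-variable screening by sex and age group (flag values beyond 3 sigma).
def flag_outliers(group, col):
    z = (group[col] - group[col].mean()) / group[col].std()
    return group[z.abs() > 3]

suspicious = (df.groupby(["sex", "age_group"], group_keys=False)
                .apply(flag_outliers, col="arm_length"))

# Stage 3: scatter-plot matrix of related variables to spot joint anomalies visually.
scatter_matrix(df[["stature", "arm_length", "cervical_height"]], figsize=(8, 8))

print(len(rule_violations), "rule violations;", len(suspicious), "suspicious records")
```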

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1243-1244
    • /
    • 2022
  • In recent years, the growing interest in off-site construction has led factories to scale up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated component production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges to deploying this technology in large-scale field applications remain. One issue is collecting and transmitting vast amounts of video data. Continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate information about objects with different sizes and scales into a single scene. Various sizes and types of objects (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, with existing object detection algorithms, it is difficult to detect objects with significant differences in size simultaneously, because collecting and training on massive amounts of object image data at various scales is necessary. This study therefore developed a large-scale site monitoring system using edge computing and a small-object detection system to solve these problems. Edge computing is a distributed information technology architecture wherein image or video data is processed near the originating source, not on a centralized server or cloud. By inferring information on the AI computing module attached to the CCTVs and communicating only the processed information to the server, excessive network traffic can be reduced. Small-object detection here refers to detecting objects of different sizes by cropping the raw image and setting an appropriate number of rows and columns for image splitting based on the target object size. This enables small objects to be detected in cropped and magnified images, and the detected objects can then be expressed in the original image. For inference, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of construction sites, which were inaccurately detected by existing algorithms. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, and to optimize safety routes by accumulating workers' paths and inferring risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management system more effectively.
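
The tiling step of the small-object detection approach might look roughly like the sketch below; the 2 x 3 tile grid, the yolov5s variant, and the file names are assumptions rather than the authors' configuration.

```python
# Sketch of tiled ("small-object") detection with YOLOv5: split the frame into tiles,
# run detection per tile, then map boxes back into original image coordinates.

import torch
import cv2

model = torch.hub.load("ultralytics/yolov5", "yolov5s")   # pretrained YOLOv5 model

def detect_tiled(image, rows=2, cols=3):
    h, w = image.shape[:2]
    th, tw = h // rows, w // cols
    detections = []
    for r in range(rows):
        for c in range(cols):
            tile = image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            results = model(tile)
            # Map tile-local boxes back into original image coordinates.
            for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
                detections.append((x1 + c * tw, y1 + r * th,
                                   x2 + c * tw, y2 + r * th, conf, int(cls)))
    return detections

frame = cv2.imread("plant_cctv_frame.jpg")                # hypothetical CCTV frame
for box in detect_tiled(frame):
    print(box)
```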


Important Facility Guard System Using Edge Computing for LiDAR (LiDAR용 엣지 컴퓨팅을 활용한 중요시설 경계 시스템)

  • Jo, Eun-Kyung;Lee, Eun-Seok;Shin, Byeong-Seok
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.10
    • /
    • pp.345-352
    • /
    • 2022
  • Recent LiDAR (Light Detection And Ranging) sensors scan surrounding objects in real time; they can detect the movement of an object and how it has changed. As the production cost of the sensors has decreased, LiDAR is beginning to be used in various industries such as facility guarding, smart cities, and self-driving cars. However, LiDAR produces a large volume of input data because of its real-time scanning, so another way of processing this large amount of data is needed, since it can otherwise cause a bottleneck. This paper proposes edge computing to compress massive point clouds for fast processing. Since the laser reflection range of a LiDAR sensor is limited, multiple LiDAR sensors must be used to scan a large area, and for this reason the data from multiple sensors must be processed at once to detect or recognize objects in real time. The edge computer compresses the point clouds efficiently to accelerate data processing, and all data are decompressed in the main cloud in real time. In this way the user can control the LiDAR sensors from the main system without any bottleneck. The proposed system resolves the bottleneck that was a problem in the cloud-based method by applying an edge computing service.
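
The abstract does not describe the compression scheme itself, so the sketch below substitutes a simple voxel-grid quantization to illustrate edge-side point-cloud compression before transmission to the main system.

```python
# Stand-in codec only, not the paper's method: quantize points to a coarse voxel grid at
# the edge, send compact int16 coordinates, and reconstruct approximate points centrally.

import numpy as np

VOXEL = 0.05   # assumed 5 cm grid resolution

def compress(points, voxel=VOXEL):
    """points: (N, 3) float32 array from one LiDAR sweep -> compact int16 payload."""
    origin = points.min(axis=0)
    quantized = np.round((points - origin) / voxel).astype(np.int16)
    quantized = np.unique(quantized, axis=0)        # drop duplicate voxels
    return origin, quantized

def decompress(origin, quantized, voxel=VOXEL):
    """Run on the central server: recover approximate coordinates."""
    return origin + quantized.astype(np.float32) * voxel

sweep = np.random.rand(100_000, 3).astype(np.float32) * 50.0   # synthetic sweep
origin, payload = compress(sweep)
restored = decompress(origin, payload)
print(payload.nbytes, "bytes sent instead of", sweep.nbytes)
```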

Memory Reduction Method of Radix-2^2 MDF IFFT for OFDM Communication Systems (OFDM 통신시스템을 위한 radix-2^2 MDF IFFT의 메모리 감소 기법)

  • Cho, Kyung-Ju
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.1
    • /
    • pp.42-47
    • /
    • 2020
  • In OFDM-based very high-speed communication systems, the FFT/IFFT processor should have low area and low power consumption as well as high throughput and low processing latency. Thus, radix-2^k MDF (multipath delay feedback) architectures, which adopt pipelining and parallel processing, are suitable. In an MDF architecture, the feedback memory, which grows in proportion to the input signal word length, accounts for a large share of area and power consumption. This paper presents a feedback memory size reduction method for a radix-2^2 MDF IFFT processor for OFDM applications. The proposed method focuses on reducing the feedback memory size in the first two stages of the MDF architecture, since these stages occupy about 75% of the total feedback memory. In OFDM transmission, the IFFT input signals consist of modulated data and pilot and null signals. To reduce the IFFT input word length, an integer mapping is proposed that represents each input as two small signed integers corresponding to the modulated data and the pilot/null signals. Simulation shows that the proposed method achieves a feedback memory reduction of up to 39% compared to the conventional approach.
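
The following toy sketch illustrates only the general word-length idea behind the integer mapping; the actual encoding used in the paper is not given in the abstract and is assumed here.

```python
# Toy illustration: 16-QAM data symbols, BPSK pilots, and null carriers all take I/Q
# values from a small integer alphabet {-3, -1, 0, 1, 3}, so each IFFT input can be held
# in the feedback memory as two small signed integers instead of full-precision words.

def encode(symbol_type, value=0 + 0j):
    """Return (re, im) as signed integers representable in a few bits each."""
    if symbol_type == "null":
        return (0, 0)
    if symbol_type == "pilot":                  # BPSK pilot: +1 or -1 on the real axis
        return (int(value.real), 0)
    return (int(value.real), int(value.imag))   # 16-QAM data symbol, levels +/-1, +/-3

def decode(re, im, scale=1 / 3):
    """Restore a full-precision complex sample when it is read back from memory."""
    return complex(re * scale, im * scale)

packed = encode("data", 3 - 1j)
print(packed, "->", decode(*packed))
```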

Side Looking Vehicle Detection Radar Using A Novel Signal Processing Algorithm (새로운 신호처리 알고리즘을 이용한 측방설치 차량감지용 레이다)

  • Kang Sung Min;Kim Tae Young;Choi Jae Hong;Koo Kyung Heon
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.41 no.12
    • /
    • pp.1-7
    • /
    • 2004
  • We have developed a 24 GHz side-looking vehicle detection radar. A 24 GHz front-end module and a novel signal processing algorithm have been developed for speed measurement and size classification of vehicles in multiple lanes. The system has a fixed antenna and an FMCW processing module. This paper presents the underlying theory of operation and shows measured data obtained with the algorithm. The data show that the measured velocity of a passing vehicle is within 95% accuracy in a single lane, and the velocity of vehicles in two lanes is within 90% accuracy when variable threshold estimation is used. Classification of vehicle size as small, medium, or large was measured with 89% accuracy.

Developing Program for Processing a Mass DEM Data using Streaming Method (스트리밍 방식을 이용한 대용량 DEM 프로세싱 프로그램의 개발)

  • Lee, Dong-Ha;Lee, Yong-Gyun;Suh, Yong-Cheol
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.17 no.4
    • /
    • pp.61-66
    • /
    • 2009
  • This paper describes a new program called DEM Generator, developed to produce DEMs from LiDAR data or digital map data. It is difficult to generate a raster DEM from LiDAR mass point data sets and digital maps that are too large to fit into memory. The DEM Generator was designed to produce DEMs and shaded relief images in GeoTIFF format using streaming meshes: I/O-minimizing tags, Delaunay triangulation, natural-neighbor or TIN interpolation, temporary files, and gridding. It is expected that the precision of the DEM can be improved and that the time-consuming problem of generating DEMs over wide areas can be solved.
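
A streaming-style sketch of the bounded-memory idea is shown below; it uses simple per-cell averaging in place of DEM Generator's streaming Delaunay / natural-neighbor pipeline, and all names are illustrative.

```python
# Points are read in chunks and averaged into raster cells, so memory use stays bounded
# regardless of input size (cell averaging stands in for the actual interpolation).

import numpy as np

def stream_dem(point_chunks, xmin, ymin, cell, ncols, nrows):
    """point_chunks: iterable of (N, 3) arrays of x, y, z read chunk by chunk."""
    zsum = np.zeros((nrows, ncols))
    count = np.zeros((nrows, ncols), dtype=np.int64)
    for chunk in point_chunks:
        col = ((chunk[:, 0] - xmin) / cell).astype(int)
        row = ((chunk[:, 1] - ymin) / cell).astype(int)
        ok = (col >= 0) & (col < ncols) & (row >= 0) & (row < nrows)
        np.add.at(zsum, (row[ok], col[ok]), chunk[ok, 2])
        np.add.at(count, (row[ok], col[ok]), 1)
    with np.errstate(invalid="ignore"):
        return np.where(count > 0, zsum / count, np.nan)   # NaN where no points fell

# Example with synthetic chunks standing in for a LiDAR file read in pieces.
chunks = (np.random.rand(10_000, 3) * [100, 100, 50] for _ in range(5))
dem = stream_dem(chunks, xmin=0, ymin=0, cell=1.0, ncols=100, nrows=100)
print(dem.shape)
```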
