• Title/Summary/Keyword: real-time processing

National Disaster Management, Investigation, and Analysis Using RS/GIS Data Fusion (RS/GIS 자료융합을 통한 국가 재난관리 및 조사·분석)

  • Seongsam Kim;Jaewook Suk;Dalgeun Lee;Junwoo Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_2
    • /
    • pp.743-754
    • /
    • 2023
  • Natural disasters and incidents driven by climate change and extreme meteorological conditions have caused substantial human and material losses worldwide. International organizations such as the International Charter have established an enduring collaborative framework for real-time coordination to provide high-resolution satellite imagery and geospatial information, resources that are instrumental in managing large-scale disaster scenarios and expediting recovery operations. At the national level, the operation of advanced National Earth Observation Satellites by the National Geographic Information Institute has not only advanced geospatial data but also contributed damage analysis data for significant domestic and international disaster events. This special edition of the National Disaster Management Research Institute outlines the major disaster incidents of 2023 and the government's blueprint for reforming the national disaster safety system. It also summarizes the institute's most recent research accomplishments in artificial satellite systems, information and communication technology, and spatial information utilization, which are central to its disaster situation management and analysis efforts. Furthermore, the publication presents the latest research findings on data collection, processing, and analysis of disaster causes and damage extent, which are particularly relevant to the institute's on-site investigation initiatives and draw on cutting-edge technologies such as drone-based mapping and LiDAR observation, as demonstrated by a case study of the 2023 landslide damage caused by concentrated heavy rainfall.

Methodology for Estimating Highway Traffic Performance Based on Origin/Destination Traffic Volume (기종점통행량(O/D) 기반의 고속도로 통행실적 산정 방법론 연구)

  • Howon Lee;Jungyeol Hong;Yoonhyuk Choi
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.2
    • /
    • pp.119-131
    • /
    • 2024
  • Understanding accurate traffic performance is crucial for ensuring efficient highway operation and providing a sustainable mobility environment. However, immediate and precise estimation of highway traffic performance is challenging because of infrastructure and technological constraints, data processing complexities, and limitations in using integrated big data. This paper introduces a framework for estimating traffic performance by analyzing real-time data sourced from toll collection systems and dedicated short-range communications used on highways. In particular, this study addresses data errors arising from segmented trip information, which affect the reconstruction of individual vehicle travel trajectories, and establishes a more reliable Origin-Destination (OD) framework. The study revealed that trip linkage is necessary for accurate estimation when consecutive segments of an individual vehicle's travel within the OD occur within a 20-minute window. By linking these trip ODs, the daily average highway traffic performance for South Korea was estimated to be 248,624 thousand vehicle-kilometers per day, an increase of approximately 458 thousand vehicle-kilometers per day over the 248,166 thousand vehicle-kilometers per day reported in the highway operations manual. This outcome highlights the potential for supplementing previously omitted traffic performance data through the methodology proposed in this study.
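
The 20-minute trip-linkage rule described above lends itself to a short illustration. The sketch below links consecutive toll-collection segments of one vehicle into a single trip when the gap between exit and re-entry is within 20 minutes, then sums vehicle-kilometers over the linked trips. The record fields and the `path_km` distance lookup are assumptions for illustration, not the paper's data schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, List

# Hypothetical record layout for one tolled OD segment; field names are assumptions.
@dataclass
class TripSegment:
    vehicle_id: str
    origin: str               # entry toll gate
    destination: str          # exit toll gate
    entry_time: datetime
    exit_time: datetime

LINK_WINDOW = timedelta(minutes=20)   # linkage threshold reported in the abstract

def link_trips(segments: List[TripSegment]) -> List[List[TripSegment]]:
    """Group consecutive segments of one vehicle into one linked trip when the gap
    between one exit and the next entry is within the 20-minute window."""
    ordered = sorted(segments, key=lambda s: (s.vehicle_id, s.entry_time))
    linked, current = [], []
    for seg in ordered:
        if (current
                and seg.vehicle_id == current[-1].vehicle_id
                and seg.entry_time - current[-1].exit_time <= LINK_WINDOW):
            current.append(seg)              # same journey: keep linking
        else:
            if current:
                linked.append(current)
            current = [seg]                  # start a new linked trip
    if current:
        linked.append(current)
    return linked

def total_vehicle_km(segments: List[TripSegment],
                     path_km: Callable[[str, str], float]) -> float:
    """Traffic performance: sum the highway path length from each linked trip's
    first origin to its last destination (path_km is a user-supplied lookup)."""
    return sum(path_km(trip[0].origin, trip[-1].destination)
               for trip in link_trips(segments))
```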

Immersive Smart Balance Board with Multiple Feedback (다중 피드백을 지원하는 몰입형 스마트 밸런스 보드)

  • Seung-Yong Lee;Seonho Lee;Junesung Park;Min-Chul Shin;Seung-Hyun Yoon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.171-178
    • /
    • 2024
  • Exercises using a Balance Board (BB) are effective in developing balance, strengthening core muscles, and improving physical fitness and concentration. In particular, the Smart Balance Board (SBB), which integrates with various digital content, provides more appropriate feedback than traditional balance boards, maximizing the effectiveness of the exercise. However, most systems offer only visual and auditory feedback and have not been evaluated for their impact on user engagement, interest, and the accuracy of exercise postures. This study proposes an Immersive Smart Balance Board (I-SBB) that utilizes multiple sensors to enable training with various feedback mechanisms and precise postures. The proposed system, based on Arduino, consists of a gyro sensor for measuring the board's posture, a communication module for wired/wireless communication, an infrared sensor to guide the user's foot placement, and a vibration motor for tactile feedback. The board's posture measurements are smoothed with a Kalman filter, and the multi-sensor data is processed in real time using FreeRTOS. The proposed I-SBB is shown to be effective in enhancing user concentration, engagement, and interest when integrated with diverse content.
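
As a rough illustration of the board-angle smoothing mentioned above, the sketch below applies a one-dimensional Kalman filter to a single tilt axis, predicting from the gyro rate and correcting with a noisy angle measurement. The noise variances and loop period are illustrative assumptions rather than the authors' tuning, and the real system runs on Arduino under FreeRTOS rather than in Python.

```python
# One-dimensional Kalman filter for a single tilt axis of the board (sketch only).
class Kalman1D:
    def __init__(self, q=0.01, r=0.5):
        self.q = q        # process noise variance (assumed)
        self.r = r        # measurement noise variance (assumed)
        self.angle = 0.0  # filtered tilt angle estimate (degrees)
        self.p = 1.0      # estimate error covariance

    def update(self, measured_angle: float, rate: float, dt: float) -> float:
        # Predict: integrate the gyro rate and inflate the uncertainty.
        self.angle += rate * dt
        self.p += self.q
        # Correct: blend in the noisy measured angle using the Kalman gain.
        k = self.p / (self.p + self.r)
        self.angle += k * (measured_angle - self.angle)
        self.p *= (1.0 - k)
        return self.angle

# Usage: feed each new sample at the sensor loop period dt (here 20 ms, assumed).
kf = Kalman1D()
smoothed = kf.update(measured_angle=3.2, rate=0.8, dt=0.02)
```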

Enhancing A Neural-Network-based ISP Model through Positional Encoding (위치 정보 인코딩 기반 ISP 신경망 성능 개선)

  • DaeYeon Kim;Woohyeok Kim;Sunghyun Cho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.81-86
    • /
    • 2024
  • The Image Signal Processor (ISP) converts RAW images captured by the camera sensor into user-preferred sRGB images. While RAW images contain more information useful for image processing than sRGB images, they are rarely shared because of their large size. Moreover, the actual ISP process of a camera is not disclosed, making it difficult to model its inverse. Consequently, research on learning the conversion between sRGB and RAW has been conducted. Recently, the ParamISP[1] model was proposed, which advances earlier simple network structures by directly incorporating camera parameters (exposure time, sensitivity, aperture size, and focal length) to mimic the operations of a real camera ISP. However, existing studies, including ParamISP[1], do not consider the degradation caused by lens shading, optical aberration, and lens distortion, which limits their restoration performance when modeling the camera ISP. This study introduces positional encoding to enable the camera ISP neural network to better handle lens-induced degradations. The proposed positional encoding method is suitable for camera ISP neural networks that learn from images divided into patches. By reflecting the spatial context of the image, it allows for more precise image restoration than existing models.
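
A minimal sketch of patch-wise positional encoding of the kind described above: each pixel's absolute image coordinates, normalized over the full frame, are expanded into sinusoidal features and concatenated to the patch as extra input channels so the network can account for position-dependent lens effects. The frequency count and the concatenation scheme are assumptions for illustration, not ParamISP's exact design.

```python
import numpy as np

def positional_encoding(patch_y, patch_x, patch_h, patch_w, img_h, img_w, n_freq=6):
    """Return a (4*n_freq, patch_h, patch_w) array of sinusoidal encodings of the
    patch's absolute pixel coordinates, normalized to [-1, 1] over the full image."""
    ys = (np.arange(patch_y, patch_y + patch_h) / img_h) * 2.0 - 1.0
    xs = (np.arange(patch_x, patch_x + patch_w) / img_w) * 2.0 - 1.0
    yy, xx = np.meshgrid(ys, xs, indexing="ij")        # per-pixel coordinates
    feats = []
    for k in range(n_freq):
        freq = (2.0 ** k) * np.pi                      # geometric frequency ladder
        for coord in (yy, xx):
            feats.append(np.sin(freq * coord))
            feats.append(np.cos(freq * coord))
    return np.stack(feats, axis=0)

# Usage: concatenate the encoding to the RAW patch along the channel axis so the
# network sees where the patch lies relative to the optical center (sizes assumed).
pe = positional_encoding(patch_y=512, patch_x=768, patch_h=128, patch_w=128,
                         img_h=3000, img_w=4000)
```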

Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean;Rho, Kyung-Taeg;Park, Myong-Soon
    • The KIPS Transactions:PartC
    • /
    • v.10C no.5
    • /
    • pp.625-634
    • /
    • 2003
  • More than a decade after its initial proposal, deployment of IP multicast has been limited by problems such as traffic control in multicast routing, multicast address allocation on the global Internet, and reliable multicast transport techniques. Recently, with the growth of multicast application services such as Internet broadcasting and real-time security information services, overlay multicast has been developed as a new Internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that considers switching of the tree. To find a potential parent, the existing search algorithm descends the tree from the root one level at a time, which causes long joining latency. It also tries to select the nearest node as a potential parent, but it may fail to do so because of the node's degree limit; as a result, the generated tree has low efficiency. To reduce joining latency and improve tree efficiency, we propose searching two levels of the tree at a time. This method forwards the join request message to a node's own children, so in the steady state there is no extra overhead to maintain the tree; when a join request arrives, the increased number of search messages reduces the joining latency, and searching more nodes helps construct more efficient trees. To evaluate the performance of our fast join mechanism, we measure metrics such as search latency, the number of searched nodes, and the number of switchings as functions of the number of members and the degree limit. The simulation results show that the performance of our mechanism is superior to that of the existing mechanism.
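
A minimal sketch of the two-level parent search, assuming a simple tree of nodes with a per-node degree limit and a caller-supplied distance function to the joining host; the real protocol exchanges join-request messages between nodes rather than walking an in-memory tree.

```python
# Sketch of descending the overlay tree two levels at a time to find a parent
# with free degree; node structure and distance metric are illustrative assumptions.
class Node:
    def __init__(self, name, degree_limit=4):
        self.name = name
        self.children = []
        self.degree_limit = degree_limit

    def has_free_degree(self):
        return len(self.children) < self.degree_limit

def find_parent(root, distance_to_joiner):
    """At each step the current node evaluates itself, its children, and its
    grandchildren, then recurses into the subtree of the nearest candidate."""
    current = root
    while True:
        candidates = [current] + current.children + \
                     [g for c in current.children for g in c.children]
        eligible = [n for n in candidates if n.has_free_degree()]
        best = min(eligible, key=distance_to_joiner, default=None)
        if best is None or best is current:
            # No closer eligible node below: join here if possible.
            return current if current.has_free_degree() else best
        # Descend toward the best candidate (its parent if it is a grandchild).
        current = best if best in current.children else \
                  next(c for c in current.children if best in c.children)
```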

Liver Splitting Using 2 Points for Liver Graft Volumetry (간 이식편의 체적 예측을 위한 2점 이용 간 분리)

  • Seo, Jeong-Joo;Park, Jong-Won
    • The KIPS Transactions:PartB
    • /
    • v.19B no.2
    • /
    • pp.123-126
    • /
    • 2012
  • This paper proposes a method to separate the liver into left and right lobes for simple and exact volumetry of the liver graft on abdominal MDCT (Multi-Detector Computed Tomography) images before living donor liver transplantation. Using this algorithm, a medical team can accurately evaluate the liver graft with minimal interaction with the system, helping to ensure the donor's and recipient's safety. On the segmented liver image, 2 points (PMHV: a point in the Middle Hepatic Vein, and PPV: a point at the beginning of the right branch of the Portal Vein) are selected to separate the liver into left and right lobes. The middle hepatic vein is automatically segmented from PMHV, and the cutting line is decided on the basis of the segmented Middle Hepatic Vein. The liver is separated by connecting the cutting line and PPV, and the volume and ratio of the liver graft are estimated. To verify the accuracy of the estimated volumes, the volumes obtained with the 2 points were compared with the volumes from manual segmentation by a diagnostic radiologist and with the graft weight measured during surgery. The mean ± standard deviation of the difference between the actual weight and the estimated volume was 162.38 cm³ ± 124.39 for manual segmentation and 107.69 cm³ ± 97.24 for the 2-point method. The correlation coefficient between the actual weight and the manually estimated volume is 0.79, and the correlation coefficient between the actual weight and the volume estimated using the 2 points is 0.87. After selecting the 2 points, the time required to separate the liver into left and right lobes and compute their volumes was measured to confirm that the algorithm can be used in real time during surgery; the mean ± standard deviation of the processing time is 57.28 sec ± 32.81 per data set (149.17 pages ± 55.92).
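
The final splitting step can be illustrated roughly as follows: once a cutting plane through the two selected points has been decided, liver voxels are classified to either side of it and each lobe volume is its voxel count times the voxel volume. This sketch assumes a vertical cutting plane through the in-plane projections of PMHV and PPV and an example voxel spacing; the paper additionally segments the middle hepatic vein to shape the cutting surface, which is not reproduced here.

```python
import numpy as np

def split_liver(liver_mask: np.ndarray, p_mhv, p_pv, voxel_volume_cm3: float):
    """liver_mask: 3D boolean array; p_mhv, p_pv: (z, y, x) voxel coordinates.
    Returns the volumes (cm^3) of the two parts separated by a vertical plane
    through the two points; which part is the right lobe depends on orientation."""
    _, y1, x1 = p_mhv
    _, y2, x2 = p_pv
    # In-plane normal of the vertical cutting plane through the two points.
    ny, nx = (x2 - x1), -(y2 - y1)
    _, yy, xx = np.nonzero(liver_mask)
    side = (yy - y1) * ny + (xx - x1) * nx          # signed side of the plane
    part_a = np.count_nonzero(side > 0) * voxel_volume_cm3
    part_b = np.count_nonzero(side <= 0) * voxel_volume_cm3
    return part_a, part_b

# Usage with an assumed 0.7 x 0.7 x 3.0 mm voxel spacing (0.00147 cm^3 per voxel):
# right_cm3, left_cm3 = split_liver(mask, (40, 210, 260), (45, 230, 300),
#                                   voxel_volume_cm3=0.07 * 0.07 * 0.3)
```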

A Framework on 3D Object-Based Construction Information Management System for Work Productivity Analysis for Reinforced Concrete Work (철근콘크리트 공사의 작업 생산성 분석을 위한 3차원 객체 활용 정보관리 시스템 구축방안)

  • Kim, Jun;Cha, Heesung
    • Korean Journal of Construction Engineering and Management
    • /
    • v.19 no.2
    • /
    • pp.15-24
    • /
    • 2018
  • Despite recognition of the need for productivity information and its importance, feedback of productivity information is not well established in the construction industry. Effective use of productivity information is required to improve the reliability of construction planning, yet in many cases on-site productivity information is rarely managed effectively and instead relies on the experience and/or intuition of project participants. Based on a literature review and expert interviews, the authors recognized that one possible solution is a systematic approach to dealing with productivity information on construction job sites. The new system should impose little burden on users and should provide purpose-oriented information management, an easy-to-follow information structure, real-time information feedback, and recognition of productivity-related factors. Based on these preliminary investigations, this study proposes a framework for a system that facilitates effective management of construction productivity information. The system utilizes SketchUp software, which offers good user accessibility, and minimizes additional data input and the related workload. It is designed to handle the pertinent information through a four-stage process: preparation, input, processing, and output. The inputted construction information is classified into a Task Breakdown Structure (TBS) and a Material Breakdown Structure (MBS), constructed by referring to the standard specification of building construction, and converted into productivity information. The converted information is also visualized graphically on screen, allowing users to use the productivity information from the job site. The proposed productivity information management system was pilot-tested for practical applicability and information availability in a real construction project. Very positive results were obtained for the usability and applicability of the system, and further benefits are expected from validity testing. If the proposed system is used from the planning stage of construction and productivity information is accumulated continuously, its expected effectiveness would be further enhanced.
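
As a rough illustration of converting daily work records into productivity figures, the sketch below aggregates records by TBS code and reports quantity per man-hour. The code strings, field names, and the productivity definition are assumptions for illustration, not the paper's breakdown structures or its SketchUp integration.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical daily work record; TBS/MBS codes and fields are assumptions.
@dataclass
class WorkRecord:
    tbs_code: str      # Task Breakdown Structure code, e.g. "RC-FORM-WALL"
    mbs_code: str      # Material Breakdown Structure code, e.g. "FORM-EURO"
    quantity: float    # installed quantity (e.g. m2 of formwork)
    man_hours: float   # labour input for the record

def productivity_by_task(records):
    """Aggregate records per TBS code and return quantity per man-hour."""
    qty, hrs = defaultdict(float), defaultdict(float)
    for r in records:
        qty[r.tbs_code] += r.quantity
        hrs[r.tbs_code] += r.man_hours
    return {code: qty[code] / hrs[code] for code in qty if hrs[code] > 0}

# Usage with two illustrative records:
daily = [WorkRecord("RC-FORM-WALL", "FORM-EURO", 120.0, 48.0),
         WorkRecord("RC-REBAR-SLAB", "REBAR-HD10", 3.2, 24.0)]
print(productivity_by_task(daily))   # {'RC-FORM-WALL': 2.5, 'RC-REBAR-SLAB': 0.133...}
```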

Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.12
    • /
    • pp.126-134
    • /
    • 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most detection methods are complicated to implement in a real-time portable electrocardiograph and require a large amount of computation. R-peak detection requires pre-processing and post-processing to handle baseline drift and to remove power-line noise from the ECG data. An adaptive filter technique is widely used for R-peak detection, but the R-peak cannot be detected when the input is lower than a threshold value; moreover, an erroneous threshold derived under noisy conditions causes P-peaks and T-peaks to be detected by mistake. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes the baseline drift in ECG signals using an adaptive filter, which resolves the problems involved in threshold extraction. We also propose a technique to extract the appropriate threshold value automatically using the minimum and maximum values of the filtered ECG signal, and a threshold neighborhood search technique to detect the R-peak from the ECG signal. Experiments confirmed the improved R-peak detection accuracy of the proposed method and, by reducing the amount of calculation, achieved a detection speed suitable for a mobile system. The experimental results show that the heart rate detection accuracy and sensitivity were very high (about 100%).
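
A minimal sketch of the pipeline described above: baseline removal, an automatic threshold from the filtered signal's minimum and maximum, and a neighborhood search around threshold crossings. A moving-average baseline estimate stands in for the paper's adaptive filter, and the scaling factor and refractory period are assumptions.

```python
import numpy as np

def detect_r_peaks(ecg: np.ndarray, fs: int, win_s: float = 0.6, alpha: float = 0.6):
    """Return sample indices of detected R-peaks in a 1-D ECG signal sampled at fs Hz."""
    # 1) Baseline removal: subtract a sliding-window mean (crude drift estimate;
    #    the paper uses an adaptive filter here).
    win = int(win_s * fs) | 1                        # odd window length
    baseline = np.convolve(ecg, np.ones(win) / win, mode="same")
    filtered = ecg - baseline
    # 2) Variable threshold from the filtered signal's minimum and maximum.
    threshold = filtered.min() + alpha * (filtered.max() - filtered.min())
    # 3) Neighborhood search: keep the local maximum of each run above the threshold.
    above = filtered > threshold
    peaks, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            peaks.append(start + int(np.argmax(filtered[start:i])))
            start = None
    if start is not None:                            # run reaching the end of the signal
        peaks.append(start + int(np.argmax(filtered[start:])))
    # 4) Enforce a 200 ms refractory period between consecutive peaks.
    refractory, kept = int(0.2 * fs), []
    for p in peaks:
        if not kept or p - kept[-1] > refractory:
            kept.append(p)
    return kept
```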

The Construction of QoS Integration Platform for Real-time Negotiation and Adaptation Stream Service in Distributed Object Computing Environments (분산 객체 컴퓨팅 환경에서 실시간 협약 및 적응 스트림 서비스를 위한 QoS 통합 플랫폼의 구축)

  • Jun, Byung-Taek;Kim, Myung-Hee;Joo, Su-Chong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.11S
    • /
    • pp.3651-3667
    • /
    • 2000
  • Recently, in Internet-based distributed multimedia environments, most researchers have focused on streaming technology and distributed object technology as rapidly growing technologies. In particular, studies that integrate streaming services with distributed object technology have been progressing, and these technologies are applied to various stream service managements and protocols. However, the stream service management models proposed by existing research are insufficient for supporting the QoS of stream services. In addition, the existing models cannot support extensibility and reusability when QoS-related functions are developed as sub-modules suited to specific-purpose application services. To solve these problems, this paper suggests a QoS integration platform that can be extended and reused using distributed object technologies and that guarantees the QoS of stream services. The suggested platform consists of three components: a User Control Module (UCM), a QoS Management Module (QoSM), and Stream Objects. A Stream Object has send/receive operations for transmitting RTP packets over TCP/IP. The User Control Module (UCM) controls Stream Objects via CORBA service objects. The QoS Management Module (QoSM) maintains the QoS of the stream service between the UCMs on the client and the server. As QoS control methodologies, procedures for resource monitoring, negotiation, and resource adaptation are executed through interactions among the components mentioned above. To construct this QoS integration platform, we first implemented the modules independently and then defined the interfaces among them in IDL so that the platform supports platform independence, interoperability, and portability based on CORBA. The platform was built with OrbixWeb 3.1c following the CORBA specification on Solaris 2.5/2.7, using the Java language, Java Media Framework API 2.0, Mini-SQL 1.0.16, and multimedia equipment. To verify the platform functionally, we show the execution results of each module and numerical data obtained from the QoS control procedures on the client's and server's GUIs while a stream service is running on the platform.
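
The monitor/negotiate/adapt cycle can be sketched in a few lines, with the CORBA (OrbixWeb) plumbing, RTP transport, and module interfaces omitted; the class and method names below are illustrative assumptions, not the paper's IDL.

```python
# Sketch of QoS negotiation followed by monitoring-driven adaptation of a stream.
class StreamObject:
    def __init__(self, frame_rate):
        self.frame_rate = frame_rate          # currently delivered frames/sec

class QoSManager:
    def __init__(self, stream, target_fps, min_fps):
        self.stream = stream
        self.target_fps = target_fps          # level agreed during negotiation
        self.min_fps = min_fps                # lowest level the client will accept

    def negotiate(self, offered_fps):
        """Server offer vs. client range: accept the best mutually feasible rate."""
        agreed = min(self.target_fps, offered_fps)
        return agreed if agreed >= self.min_fps else None

    def monitor_and_adapt(self, measured_fps):
        """If the monitored rate drops below the agreed level, adapt the stream
        (here: lower the delivered frame rate) instead of aborting the service."""
        if measured_fps < self.target_fps:
            self.stream.frame_rate = max(self.min_fps, measured_fps)
            return "adapted"
        return "ok"

# Usage: negotiate 24 fps, then adapt when monitoring reports congestion.
qos = QoSManager(StreamObject(frame_rate=30), target_fps=24, min_fps=10)
agreed = qos.negotiate(offered_fps=30)            # -> 24
status = qos.monitor_and_adapt(measured_fps=15)   # -> "adapted"
```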

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.9
    • /
    • pp.351-360
    • /
    • 2018
  • The LWR (Locally Weighted Regression) model, traditionally a lazy learning model, obtains a prediction for a given input variable, the query point, as a regression equation fitted over a short interval in which data closer to the query point receive higher weights. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed incremental ensemble learning method sequentially generates and integrates LWR models over time using a genetic algorithm to obtain a solution at a specific query point. A weakness of existing LWR approaches is that multiple LWR models can be generated depending on the indicator function and the selected data samples, and prediction quality varies across these models; however, no research has addressed the selection or combination of multiple LWR models. In this study, after generating the initial LWR model according to the indicator function and the sample data set, we iterate an evolutionary learning process to obtain a proper indicator function and assess the LWR models on other sample data sets to overcome data set bias. We adopt an eager learning approach that gradually generates and stores LWR models as data are generated for all sections. To obtain a prediction at a specific point in time, an LWR model is generated from newly generated data within a predetermined interval and then combined with the existing LWR models of that section using a genetic algorithm. The proposed method shows better results than selecting among multiple LWR models with a simple average. The results are compared with predictions from multiple regression analysis using real data such as hourly traffic volume in a specific area and hourly sales at a highway rest area.
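
A minimal sketch of a single LWR prediction and a weighted combination of several LWR models: a Gaussian kernel weights samples by distance to the query point, weighted least squares fits a local line, and a fixed-weight average stands in for the paper's indicator functions and genetic-algorithm search, which are not reproduced here.

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=1.0):
    """Fit a weighted least-squares line around x_query and evaluate it there."""
    Xb = np.column_stack([np.ones(len(X)), X])               # add intercept column
    xq = np.concatenate([[1.0], np.atleast_1d(x_query)])
    d = np.linalg.norm(X - x_query, axis=-1) if X.ndim > 1 else np.abs(X - x_query)
    w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))             # closer points weigh more
    W = np.diag(w)
    beta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ y      # weighted least squares
    return float(xq @ beta)

def ensemble_predict(models, x_query, weights):
    """Combine several LWR models; the paper tunes such weights with a GA."""
    preds = [lwr_predict(X, y, x_query, h) for (X, y, h) in models]
    return float(np.average(preds, weights=weights))

# Usage on toy 1-D data (two overlapping sample windows as separate models):
X1, y1 = np.arange(0, 10.0), 2 * np.arange(0, 10.0) + 1
X2, y2 = np.arange(5, 15.0), 2 * np.arange(5, 15.0) + 1
print(ensemble_predict([(X1, y1, 1.0), (X2, y2, 1.0)], x_query=7.0, weights=[0.5, 0.5]))
```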