
An Approach Using LSTM Model to Forecasting Customer Congestion Based on Indoor Human Tracking (실내 사람 위치 추적 기반 LSTM 모델을 이용한 고객 혼잡 예측 연구)

  • Hee-ju Chae;Kyeong-heon Kwak;Da-yeon Lee;Eunkyung Kim
    • Journal of the Korea Society for Simulation
    • /
    • v.32 no.3
    • /
    • pp.43-53
    • /
    • 2023
  • This study focuses on accurately gauging the number of visitors and their real-time locations in commercial spaces. In a real cafe, using security cameras, we developed a system that offers live updates on available seating and predicts future congestion levels. YOLO, a real-time object detection and tracking algorithm, monitors the number of visitors and their locations in real time. This information is used to update the cafe's indoor map, enabling users to easily identify available seating. We also developed a model that predicts the cafe's congestion in real time. The model, which learns visitor counts and movement patterns over diverse time intervals, is based on Long Short-Term Memory (LSTM) to address the vanishing gradient problem and Sequence-to-Sequence (Seq2Seq) learning to handle data with temporal relationships. The system has the potential to significantly improve cafe management efficiency and customer satisfaction by delivering reliable congestion predictions to all users. Our results demonstrate the effectiveness and utility of indoor location tracking implemented with security cameras and suggest applications in other commercial spaces.
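
The data-preparation step implied by such a Seq2Seq forecaster can be sketched as a sliding-window pairing of past visitor counts with the counts to be predicted. The window lengths and the toy series below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: turning a per-interval visitor-count series into
# (encoder input, decoder target) pairs for a Seq2Seq LSTM forecaster.

def make_windows(series, in_len, out_len):
    """Slide over the series and pair each input window with the
    target window that immediately follows it."""
    pairs = []
    for start in range(len(series) - in_len - out_len + 1):
        enc_in = series[start:start + in_len]                       # encoder input
        dec_out = series[start + in_len:start + in_len + out_len]   # decoder target
        pairs.append((enc_in, dec_out))
    return pairs

visitor_counts = [3, 5, 8, 12, 11, 9, 7, 10, 14, 13]  # toy data
pairs = make_windows(visitor_counts, in_len=4, out_len=2)
# e.g. pairs[0] pairs the first 4 counts with the next 2
```

Each pair would then be fed to an LSTM encoder and decoder; the windowing itself is model-agnostic.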

Shear-wave elasticity imaging with axial sub-Nyquist sampling (축방향 서브 나이퀴스트 샘플링 기반의 횡탄성 영상 기법)

  • Woojin Oh;Heechul Yoon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.5
    • /
    • pp.403-411
    • /
    • 2023
  • Functional ultrasound imaging, such as elasticity imaging and micro-blood-flow Doppler imaging, enhances diagnostic capability by providing useful mechanical and functional information about tissues. However, implementing functional ultrasound imaging is limited by the vast amounts of Radio Frequency (RF) data that must be acquired, stored, and processed. In this paper, we propose a sub-Nyquist approach that reduces the number of acquired axial samples for efficient shear-wave elasticity imaging. The proposed method acquires data at one third of the conventional Nyquist sampling rate and tracks shear-wave signals through RF signals reconstructed using band-pass-filtering-based interpolation. In this approach, the RF signal is assumed to have a fractional bandwidth of 67 %. To validate the approach, we reconstruct shear-wave velocity images using shear-wave tracking data obtained by the conventional and proposed approaches, and compare the group velocity, contrast-to-noise ratio, and structural similarity index measurement. We qualitatively and quantitatively demonstrate the potential of sub-Nyquist-sampling-based shear-wave elasticity imaging, indicating that our approach could be practically useful in three-dimensional shear-wave elasticity imaging, where a massive amount of ultrasound data is required.
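
The resampling idea can be made concrete with a toy sketch: keep every third axial sample and reconstruct the missing samples by interpolation. The paper's reconstruction uses band-pass filtering on RF data; plain linear interpolation is substituted here purely to illustrate the downsample/reconstruct round trip, and the signal values are invented.

```python
# Simplified sketch of 1/3-rate axial acquisition and reconstruction.

def downsample(signal, factor):
    """Retain every `factor`-th sample (here, one third of the data)."""
    return signal[::factor]

def linear_upsample(samples, factor):
    """Reinsert (factor - 1) linearly interpolated points between each
    pair of retained samples (stand-in for band-pass interpolation)."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])
    return out

rf = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]  # toy axial samples
kept = downsample(rf, 3)                  # one third of the samples
recon = linear_upsample(kept, 3)          # reconstructed axial line
```

On this linear toy signal the reconstruction is exact; on real RF data the quality of the interpolation filter is what determines shear-wave tracking accuracy.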

Metamodeling Construction for Generating Test Case via Decision Table Based on Korean Requirement Specifications (한글 요구사항 기반 결정 테이블로부터 테스트 케이스 생성을 위한 메타모델링 구축화)

  • Woo Sung Jang;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.9
    • /
    • pp.381-386
    • /
    • 2023
  • Most existing research on test case generation extracts test cases from models. In practice, however, generating test cases from natural language requirements is also needed, which requires combining natural language analysis with requirements engineering. Analyzing requirements written in Korean is difficult because sentence expressions can carry diverse meanings. As one method of generating test cases from Korean natural language requirements, we proceed through natural language requirement definition analysis, the C3Tree model, cause-effect graphs, and decision tables. As an intermediate step, this paper generates test cases from C3Tree-model-based decision tables using metamodeling. This approach makes the model-to-model and model-to-text transformation processes easy to maintain, since only the transformation rules need to be modified: if an existing model is modified or a new model is added, only the model transformation rules are maintained, without changing the program's algorithm. In our evaluation, all combinations in the decision table were automatically generated as test cases.
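
The final step, enumerating every condition combination of a decision table as a test case, can be sketched directly. The condition names and values below are invented examples, not from the paper.

```python
# Illustrative sketch: expand a decision table's condition stubs into
# the full set of condition combinations, one test case per combination.
from itertools import product

conditions = {
    "logged_in": [True, False],   # hypothetical condition stubs
    "cart_empty": [True, False],
}

def generate_test_cases(conditions):
    """Cartesian product over condition values: every combination
    becomes one test case, matching the 'all combinations' result."""
    names = list(conditions)
    return [dict(zip(names, values))
            for values in product(*(conditions[n] for n in names))]

cases = generate_test_cases(conditions)  # 2 x 2 = 4 test cases
```

The paper drives this expansion through transformation rules on a metamodel rather than hand-written code, but the generated set is the same exhaustive combination space.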

Predicting the Number of Confirmed COVID-19 Cases Using Deep Learning Models with Search Term Frequency Data (검색어 빈도 데이터를 반영한 코로나 19 확진자수 예측 딥러닝 모델)

  • Sungwook Jung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.9
    • /
    • pp.387-398
    • /
    • 2023
  • The COVID-19 outbreak significantly impacted human lifestyles and behavior patterns. Because COVID-19 spreads through the air as well as through droplets or aerosols, people were advised to avoid face-to-face contact and crowded indoor places as much as possible. It is therefore reasonable to expect that a person who has been in contact with a COVID-19 patient, or has visited a place where a case occurred, and is worried about infection will search for COVID-19 symptoms on Google. In this study, an exploratory data analysis using deep learning models (DNN and LSTM) was conducted to see whether the number of confirmed COVID-19 cases can be predicted by once again drawing on Google Trends, which previously played a major role in influenza surveillance and management, and combining it with confirmed-case counts. Notably, the search term frequency data used in this study are publicly available and do not invade privacy. When the deep neural network model was applied, Seoul (9.6 million), the most populous city in South Korea, and Busan (3.4 million), the second most populous, recorded lower error rates when the forecast included search term frequency data. These results suggest that search term frequency data plays an important role for cities above a certain population size. We also hope these predictions can serve as evidence for policy decisions, such as relaxing regulations or implementing stronger preventive measures.
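
The kind of error-rate comparison reported, with and without the search-term-frequency feature, can be sketched with a mean absolute percentage error (MAPE). All numbers below are invented for illustration; the paper's metric and values may differ.

```python
# Sketch of comparing forecast error with vs. without Google Trends data.

def mape(actual, predicted):
    """Mean absolute percentage error over paired observations."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual = [100.0, 120.0, 150.0]             # toy confirmed-case counts
pred_without_trends = [90.0, 140.0, 130.0]  # hypothetical model outputs
pred_with_trends = [98.0, 123.0, 146.0]

err_without = mape(actual, pred_without_trends)
err_with = mape(actual, pred_with_trends)   # lower error in this toy setup
```

In the paper's terms, a lower error for the trends-augmented model in a given city is the evidence that search frequency carries predictive signal there.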

Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi;In Cheol Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.9
    • /
    • pp.407-418
    • /
    • 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task cannot exploit an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, this paper proposes a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. It effectively fuses these three heterogeneous feature types into a global multimodal context map using a point-wise convolutional neural network module. Finally, the model adopts an auxiliary task learning module that predicts the observation status, goal direction, and goal distance, which guides efficient learning of the navigation policy. Through various quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model.
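
The point-wise (1x1) convolutional fusion step can be sketched minimally: at every map cell, the three per-modality feature vectors are concatenated along the channel axis and linearly projected into one fused vector by a shared weight matrix. Feature values, channel counts, and weights below are toy assumptions.

```python
# Minimal sketch of point-wise (1x1) convolutional fusion of three
# heterogeneous feature maps into one multimodal context map.

def pointwise_fuse(appearance, semantic, goal, weights):
    """Apply the same linear projection (a 1x1 convolution) to the
    concatenated channel vector at each spatial cell."""
    fused = []
    for a, s, g in zip(appearance, semantic, goal):  # iterate map cells
        channels = a + s + g                          # concat channels
        fused.append([sum(w * c for w, c in zip(row, channels))
                      for row in weights])
    return fused

# 2 map cells; each modality contributes 2 channels -> 6 input channels
appearance = [[1.0, 0.0], [0.0, 1.0]]
semantic   = [[0.5, 0.5], [1.0, 0.0]]
goal       = [[0.0, 1.0], [0.0, 0.0]]
weights = [[1, 0, 0, 0, 0, 0],   # 2 output channels x 6 input channels
           [0, 0, 0, 0, 0, 1]]
fused = pointwise_fuse(appearance, semantic, goal, weights)
```

Because the same weights are applied at every cell, the operation mixes information across modalities without mixing it across spatial locations, which is exactly what a 1x1 convolution provides.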

Real Estate Asset NFT Tokenization and FT Asset Portfolio Management (부동산 유동화 NFT와 FT 분할 거래 시스템 설계 및 구현)

  • Young-Gun Kim;Seong-Whan Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.9
    • /
    • pp.419-430
    • /
    • 2023
  • Currently, NFTs have no dominant application other than proving ownership of digital content, and they also suffer from low liquidity, which makes their prices difficult to predict. Real estate usually carries very high barriers to investment because of its high prices. Real estate can be converted into NFTs and divided into small-value fungible tokens (FTs), which can enlarge the investor community through greater liquidity and better accessibility. In this paper, we design and implement a system that allows ordinary users to invest in high-priced real estate through a Black-Litterman (BL) model-based portfolio investment interface. To this end, we target a set of properties pledged as collateral and issue NFTs for the collateral on a blockchain. We use an oracle to obtain current real estate information and to monitor changing prices. After tokenizing the real estate into NFTs, we divide the NFTs into FTs at an easily accessible price, thereby lowering the entry price and providing high liquidity while limiting price volatility. We also implemented a BL-based asset portfolio interface for composing effective portfolios that invest small amounts across multiple properties; using the BL model, investors can fix their asset portfolios. The whole system is implemented with Solidity smart contracts on the Flask web framework, using public data portals as oracle interfaces.
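
The fractionalization arithmetic behind the NFT-to-FT split can be sketched simply: an NFT pegged to an appraised property is divided into fungible tokens of a small fixed face value so retail investors can buy in. The appraisal and face value below are invented numbers, and the real system performs this minting in Solidity rather than off-chain.

```python
# Toy sketch of splitting one real-estate NFT into fungible tokens.

def fractionalize(appraised_value, ft_face_value):
    """Return the number of FTs minted against one real-estate NFT and
    the value actually covered by whole tokens (integer division, as a
    token balance on chain would be)."""
    supply = appraised_value // ft_face_value
    covered = supply * ft_face_value
    return supply, covered

supply, covered = fractionalize(appraised_value=350_000_000,
                                ft_face_value=10_000)
# 35,000 tokens at a face value of 10,000 currency units each
```

The small fixed face value is what lowers the entry price; the oracle-fed appraisal determines the total supply.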

The Effect of Soil Amended with β-glucan under Drought Stress in Ipomoea batatas L. (𝛽-glucan 토양혼합에 따른 고구마의 가뭄피해 저감 효과 )

  • Jung-Ho Shin;Hyun-Sung Kim;Gwan-Ju Seong;Won Park;Sung-Ju Ahn
    • Ecology and Resilient Infrastructure
    • /
    • v.10 no.3
    • /
    • pp.64-72
    • /
    • 2023
  • Biopolymers are versatile materials used in food processing, medicine, construction, and soil reinforcement. 𝛽-glucan is a biopolymer that improves soil water content and ion adsorption in drought-stricken or toxic-metal-contaminated land, aiding plant survival. We analyzed the reduction of drought stress damage in sweet potatoes (Ipomoea batatas L. cv. Sodammi) by measuring growth and major protein expression and activity under 𝛽-glucan soil amendment. Sweet potato leaf length and width were not affected by 14 days of drought stress, but plants grown in 𝛽-glucan-amended soil showed less drought-induced wilting in their phenotypic changes. Under drought stress, sweet potato leaves showed no change in electrolyte leakage, but relative water content was higher in plants grown in 𝛽-glucan-amended soil than in normal soil. 𝛽-glucan soil amendment increased the expression of plasma membrane (PM) H+-ATPase but decreased that of the aquaporin PIP2 (plasma membrane intrinsic protein 2) under drought stress. Moreover, water maintenance affected PM H+-ATPase activity, which contributed to drought tolerance. These results indicate that 𝛽-glucan soil amendment improves soil water content during drought and affects the water supply to sweet potatoes. Consequently, 𝛽-glucan is a promising material for maintaining soil water content, and analysis of the major PM proteins is one indicator for evaluating a biopolymer's effect on plant survival under drought stress.

Intrusion Detection Method Using Unsupervised Learning-Based Embedding and Autoencoder (비지도 학습 기반의 임베딩과 오토인코더를 사용한 침입 탐지 방법)

  • Junwoo Lee;Kangseok Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.355-364
    • /
    • 2023
  • As advanced cyber threats continue to increase, it is difficult to detect new types of cyber attacks with existing pattern- or signature-based intrusion detection methods. Research on anomaly detection methods using data-driven artificial intelligence has therefore been increasing. Supervised anomaly detection methods are hard to use in real environments because they require sufficient labeled data for training, so unsupervised methods that learn from normal data and detect anomalies from patterns in the data itself have been actively studied. This study aims to extract latent vectors that preserve useful sequence information from sequence log data and to develop an anomaly detection model using those latent vectors. Word2Vec was used to create dense vector representations reflecting the characteristics of each sequence, and unsupervised autoencoders were developed to extract latent vectors from the sequence data expressed as dense vectors. Three autoencoder architectures were compared: a denoising autoencoder based on the recurrent neural network GRU (Gated Recurrent Unit), which suits sequence data; a one-dimensional convolutional autoencoder, which mitigates the limited short-term memory a GRU can exhibit; and an autoencoder combining GRU and one-dimensional convolution. The experiments used the time-series-based NGIDS (Next Generation IDS Dataset) data. The autoencoder combining GRU and one-dimensional convolution was more efficient than the GRU-only or convolution-only autoencoders in terms of training time for extracting useful latent patterns, and showed stable anomaly detection performance with smaller fluctuations.
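
The detection rule implied by this setup can be sketched independently of the autoencoder architecture: a model trained only on normal sequences yields low reconstruction error on normal data, so a threshold derived from normal-data errors flags anomalies. The error values and the 95th-percentile rule below are illustrative assumptions.

```python
# Sketch of reconstruction-error thresholding for anomaly detection.

def percentile_threshold(errors, q):
    """Pick the error value at the q-th percentile (nearest-rank) of
    reconstruction errors measured on normal data."""
    ranked = sorted(errors)
    idx = min(len(ranked) - 1, int(q / 100.0 * len(ranked)))
    return ranked[idx]

def detect(errors, threshold):
    """Flag any sequence whose reconstruction error exceeds the threshold."""
    return [e > threshold for e in errors]

normal_errors = [0.10, 0.12, 0.09, 0.11, 0.13, 0.10, 0.12, 0.11, 0.10, 0.12]
threshold = percentile_threshold(normal_errors, 95)
flags = detect([0.11, 0.45, 0.10], threshold)  # only 0.45 exceeds it
```

The choice of percentile trades false alarms against missed attacks; the paper's point is that the GRU+convolution encoder makes the normal/anomalous error gap more stable.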

General Relation Extraction Using Probabilistic Crossover (확률적 교차 연산을 이용한 보편적 관계 추출)

  • Je-Seung Lee;Jae-Hoon Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.371-380
    • /
    • 2023
  • Relation extraction extracts relationships between named entities from text. Traditional relation extraction methods only extract relations between predetermined subject and object entities. In end-to-end relation extraction, however, all possible relations must be extracted by considering the positions of the subject and object for each entity pair, so this approach uses time and resources inefficiently. To alleviate this problem, this paper proposes a method that sets directions based on the positions of the subject and object and extracts relations according to those directions. The proposed method uses existing relation extraction data to generate direction labels indicating the direction in which the subject points to the object in the sentence, adds entity position tokens and entity types to sentences so that a pre-trained language model (KLUE-RoBERTa-base, RoBERTa-base) can predict the directions, and generates representations of the subject and object entities through a probabilistic crossover operation. These representations are then used to extract relations. Experimental results show that the proposed model performs about 3~4%p better than a method that predicts integrated labels. When training the proposed model on Korean and English data, performance was 1.7%p higher on English than on Korean, owing to the amount of data and differences in word order, and the best-performing parameter values differed between the two languages. By excluding the number of directional cases, the proposed model can reduce wasted resources in end-to-end relation extraction.
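
The direction-label generation step can be sketched from the entity positions alone: given the character spans of the subject and object, the label records whether the subject points forward or backward to the object. The label names, sentence, and spans below are assumptions for illustration; the paper's label scheme may differ.

```python
# Sketch of deriving a direction label from subject/object positions.

def direction_label(subject_span, object_span):
    """'forward' if the subject appears before the object in the
    sentence, 'backward' otherwise (spans are (start, end) offsets)."""
    return "forward" if subject_span[0] < object_span[0] else "backward"

sentence = "Seoul is the capital of South Korea."
subject = (0, 5)    # "Seoul"
obj = (24, 35)      # "South Korea"
label = direction_label(subject, obj)
```

Predicting this direction first lets the model skip relation classification for the orderings that cannot occur, which is where the resource saving comes from.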

An Evaluation of Development Plans for Rolling Stock Maintenance Shop Using Computer Simulation - Emphasizing CDC and Generator Car - (시뮬레이션 기법을 이용한 철도차량 중정비 공장 설계검증 - 디젤동차 및 발전차 중정비 공장을 중심으로 -)

  • Jeon, Byoung-Hack;Jang, Seong-Yong;Lee, Won-Young;Oh, Jeong-Heon
    • Journal of the Korea Society for Simulation
    • /
    • v.18 no.3
    • /
    • pp.23-34
    • /
    • 2009
  • In a railroad rolling stock depot, heavy maintenance is performed regularly, on a two- or four-year basis, to maintain the functionality of the equipment and rolling stock body, or to repair rolling stock heavily damaged in serious accidents. This paper addresses the building of a computer simulation model of the maintenance shop for the CDC (Commuter Diesel Car) and generator cars planned for construction at Daejeon Rolling Stock Depot, which is to be relocated from Yongsan Rolling Stock Depot. Using the developed simulation models, we evaluated the processing capacity of two layout design alternatives based on the maintenance process chart. The performance measures are the number of cars processed per year, cycle time, shop utilization, work in process, and the average number of cars waiting for input. The simulation results show that one design alternative outperforms the other in every aspect; the superior alternative can process 340 trains per year, 15% more than the proposed target, within the current average cycle time.
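
The performance measures used here can be illustrated with a deterministic toy version of such a model: cars arrive at a fixed interval, one maintenance bay serves them for a fixed duration, and throughput, utilization, and cycle time are read off the event log. The arrival interval, service time, and horizon are invented numbers; the actual study uses a far richer multi-bay model driven by the maintenance process chart.

```python
# Toy deterministic single-bay maintenance-shop simulation.

def simulate(arrival_interval, service_time, horizon):
    """Single-server queue with deterministic times; returns the number
    of cars completed, bay utilization, and mean cycle time."""
    bay_free_at = 0.0
    completed, busy, cycle_times = 0, 0.0, []
    t = 0.0                              # next arrival time
    while t < horizon:
        start = max(t, bay_free_at)      # wait if the bay is occupied
        finish = start + service_time
        if finish > horizon:
            break                        # does not complete within horizon
        bay_free_at = finish
        busy += service_time
        cycle_times.append(finish - t)   # waiting + service
        completed += 1
        t += arrival_interval
    return completed, busy / horizon, sum(cycle_times) / len(cycle_times)

completed, utilization, mean_cycle = simulate(
    arrival_interval=10.0, service_time=8.0, horizon=100.0)
```

Comparing two layout alternatives amounts to running such a model with each layout's routing and bay counts and comparing these same measures.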