• Title/Summary/Keyword: 연산 지도 (operation map)

Low-Power Streamable AI Software Runtime Execution based on Collaborative Edge-Cloud Image Processing in Metaverse Applications (에지 클라우드 협동 이미지 처리기반 메타버스에서 스트리밍 가능한 저전력 AI 소프트웨어의 런타임 실행)

  • Kang, Myeongjin;Kim, Ho;Park, Jungwon;Yang, Seongbeom;Yun, Junseo;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.11, pp.1577-1585, 2022
  • As interest in the 4th industrial revolution and the metaverse increases, a metaverse with a multi-edge structure has been proposed and is drawing attention. The metaverse is a structure that can create a digital doctor-like system through a large amount of image processing and data transmission in a multi-edge system. Since metaverse applications require computing performance that can reconstruct 3-D space, the insufficient computing performance of edge hardware has been a problem. To provide streamable AI software at runtime, the image processing and data transmission that load the edge need to be made lightweight. Lightening the edge also reduces the power consumption of the entire metaverse application system. In this paper, we propose collaborative edge-cloud image processing, combining a remote image processing method with a Region of Interest (ROI), to overcome the edge's power and performance limits and to build streamable, runtime-executable AI software. The proposed structure was implemented using a PC and an embedded board, and reductions in time, power, and network communication were verified.
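The paper's core idea is that the edge extracts a small Region of Interest and ships only that to the cloud for the heavy processing. Below is a minimal sketch of that division of labor in Python; the `detect_roi` heuristic, the endpoint URL, and the raw-array transport are all illustrative assumptions, not details from the paper.

```python
import io

import numpy as np
import requests  # assumed transport; the paper does not specify one

CLOUD_URL = "http://cloud.example/process"  # hypothetical cloud endpoint

def detect_roi(frame: np.ndarray) -> tuple[int, int, int, int]:
    """Hypothetical lightweight ROI detector run on the edge.

    Takes the bright region as a stand-in; the paper's actual ROI
    selection is not described in the abstract.
    """
    ys, xs = np.nonzero(frame.mean(axis=-1) > 128)
    if len(xs) == 0:
        return 0, 0, frame.shape[1], frame.shape[0]
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1

def offload_frame(frame: np.ndarray) -> bytes:
    """Crop the ROI on the edge and send only the crop to the cloud."""
    x0, y0, x1, y1 = detect_roi(frame)
    buf = io.BytesIO()
    np.save(buf, frame[y0:y1, x0:x1])  # a real system would compress
    # sending the crop instead of the full frame is what cuts the edge's
    # processing load, network traffic, and therefore power draw
    return requests.post(CLOUD_URL, data=buf.getvalue()).content
```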

Quality Visualization of Quality Metric Indicators based on Table Normalization of Static Code Building Information (정적 코드 내부 정보의 테이블 정규화를 통한 품질 메트릭 지표들의 가시화를 위한 추출 메커니즘)

  • Chansol Park;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering, v.12 no.5, pp.199-206, 2023
  • Software today has grown to a huge volume of source code, which increases the importance and necessity of static analysis for high-quality products. Static analysis must identify the defects and complexity in the code, and visualizing these problems makes it easier for developers and stakeholders to understand them in the source code. Our previous visualization research focused only on storing the results of static analysis in database tables, querying the calculations for quality indicators (CK metrics, coupling, number of function calls, bad smells), and finally visualizing the extracted information. This approach has the limitation that analyzing code with the information extracted through static analysis takes a lot of time and space: because the tables are not normalized, joining the tables (classes, functions, attributes, etc.) to extract information from inside the code wastes space and time. To solve these problems, we propose a normalized design of the database tables, an extraction mechanism for the quality metric indicators inside the code, and a visualization of the extracted quality indicators on the code. Through this mechanism, we expect the code visualization process to be optimized and developers to be guided toward the modules that need refactoring. In the future, we will apply learning to some parts of this process.
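To make the normalization argument concrete, here is a small sketch (not the paper's actual schema) of normalized static-analysis tables in SQLite, with one join query deriving a number-of-function-calls indicator per class:

```python
import sqlite3

# hypothetical normalized schema: one fact per table, keyed by ids,
# so no class or function name is ever stored twice
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE class (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE func  (id INTEGER PRIMARY KEY,
                        class_id INTEGER REFERENCES class(id), name TEXT);
    CREATE TABLE call  (caller_id INTEGER REFERENCES func(id),
                        callee_id INTEGER REFERENCES func(id));
""")
conn.executemany("INSERT INTO class VALUES (?, ?)",
                 [(1, "Parser"), (2, "Lexer")])
conn.executemany("INSERT INTO func VALUES (?, ?, ?)",
                 [(1, 1, "parse"), (2, 2, "next_token"), (3, 2, "peek")])
conn.executemany("INSERT INTO call VALUES (?, ?)", [(1, 2), (1, 3), (2, 3)])

# number-of-function-calls indicator per class: a single join, with no
# redundant rows to scan because the tables are normalized
for row in conn.execute("""
        SELECT c.name, COUNT(*) AS outgoing_calls
        FROM call
        JOIN func  f ON f.id = call.caller_id
        JOIN class c ON c.id = f.class_id
        GROUP BY c.id
"""):
    print(row)  # ('Parser', 2), ('Lexer', 1)
```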

A Study on Improving Performance of Software Requirements Classification Models by Handling Imbalanced Data (불균형 데이터 처리를 통한 소프트웨어 요구사항 분류 모델의 성능 개선에 관한 연구)

  • Jong-Woo Choi;Young-Jun Lee;Chae-Gyun Lim;Ho-Jin Choi
    • KIPS Transactions on Software and Data Engineering, v.12 no.7, pp.295-302, 2023
  • Software requirements written in natural language may be interpreted differently depending on the stakeholder's viewpoint. When designing an architecture based on quality attributes, it is necessary to classify quality attribute requirements accurately, because efficient design is possible only when appropriate architectural tactics are selected for each quality attribute. As a result, although many natural language processing models have been studied for requirements classification, which is otherwise a high-cost task, few have addressed improving classification performance on imbalanced quality attribute datasets. In this study, we first show through experiments that a classification model can automatically classify a Korean requirements dataset. Based on these results, we explain that data augmentation with EDA (Easy Data Augmentation) techniques and undersampling strategies can mitigate the imbalance of quality attribute datasets, and show that they are effective in classifying requirements. The F1-score improved by 5.24%p, indicating that handling imbalanced data helps classification models classify Korean requirements. Furthermore, detailed experiments on EDA illustrate which operations help improve classification performance.
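A minimal sketch of the two rebalancing steps the paper combines: EDA-style augmentation (only random swap and random deletion are shown, two of EDA's four operations) plus undersampling of majority classes. The data layout is a placeholder, not the paper's setup.

```python
import random

def random_swap(tokens: list[str], n: int = 1) -> list[str]:
    """EDA random swap: exchange two random token positions n times."""
    out = tokens[:]
    for _ in range(n):
        i, j = random.randrange(len(out)), random.randrange(len(out))
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(tokens: list[str], p: float = 0.1) -> list[str]:
    """EDA random deletion: drop each token with probability p."""
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]  # never return an empty sample

def rebalance(data: dict[str, list[list[str]]], target: int):
    """Augment minority classes up to `target`; undersample majority ones."""
    out = {}
    for label, samples in data.items():
        if len(samples) >= target:
            out[label] = random.sample(samples, target)   # undersampling
        else:
            grown = samples[:]
            while len(grown) < target:                    # EDA augmentation
                src = random.choice(samples)
                grown.append(random.choice(
                    [random_swap, random_deletion])(src))
            out[label] = grown
    return out
```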

Real-Time Terrain Visualization with Hierarchical Structure (실시간 시각화를 위한 계층 구조 구축 기법 개발)

  • Park, Chan Su;Suh, Yong Cheol
    • KSCE Journal of Civil and Environmental Engineering Research, v.29 no.2D, pp.311-318, 2009
  • Interactive terrain visualization is an important research area with applications in GIS, games, virtual reality, scientific visualization, and flight simulators, besides having military uses. It is a complex and challenging problem, considering that some applications require precise visualization of huge data sets at real-time rates. In general, the size of the data sets makes real-time rendering difficult, since the terrain data cannot fit entirely in memory. In this paper, we suggest an effective real-time LOD (level-of-detail) algorithm for displaying huge terrain data and processing mass geometry. We used a hierarchical structure of 4×4 and 2×2 tiles for real-time rendering of large-volume DEMs acquired from digital maps, LiDAR, DTM, and DSM. Moreover, texture mapping is performed for realistic visualization while gigabyte-scale normalized height data is displayed with user-oriented terrain information, and a hillshade map is created from the height data in a file-based hierarchical tile structure. The large volume of terrain data was transformed into LOD data for real-time visualization. This paper presents a new LOD algorithm that provides seamless visualization and high quality while minimizing data loss and maximizing frame rate.
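The hierarchical-tile idea can be sketched as a quadtree whose tiles are refined only near the viewer. The distance-based split test and all parameters below are illustrative; real terrain LOD systems typically use screen-space error instead.

```python
import math
from dataclasses import dataclass

@dataclass
class Tile:
    x: float      # tile center, world units
    y: float
    size: float   # tile edge length
    level: int    # depth in the tile hierarchy

def select_lod(tile: Tile, viewer: tuple[float, float],
               max_level: int, k: float = 2.0) -> list[Tile]:
    """Collect tiles to render: split into 2x2 children near the viewer,
    keep distant terrain coarse so huge DEMs never fully reside in memory."""
    dist = math.hypot(tile.x - viewer[0], tile.y - viewer[1])
    if tile.level >= max_level or dist > k * tile.size:
        return [tile]                       # coarse enough at this distance
    half, q = tile.size / 2, tile.size / 4
    children = [Tile(tile.x + dx, tile.y + dy, half, tile.level + 1)
                for dx in (-q, q) for dy in (-q, q)]
    return [t for c in children
            for t in select_lod(c, viewer, max_level, k)]

root = Tile(0.0, 0.0, 1024.0, 0)            # whole terrain as one tile
tiles = select_lod(root, (10.0, 10.0), max_level=5)
print(len(tiles), "tiles selected")         # few tiles, dense near viewer
```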

Comparative Study of Automatic Trading and Buy-and-Hold in the S&P 500 Index Using a Volatility Breakout Strategy (변동성 돌파 전략을 사용한 S&P 500 지수의 자동 거래와 매수 및 보유 비교 연구)

  • Sunghyuck Hong
    • Journal of Internet of Things and Convergence, v.9 no.6, pp.57-62, 2023
  • This research is a comparative analysis of the U.S. S&P 500 index using the volatility breakout strategy against the Buy and Hold approach. The volatility breakout strategy is a trading method that exploits price movements after periods of relative market stability or concentration. Specifically, it is observed that large price movements tend to occur more frequently after periods of low volatility. When a stock moves within a narrow price range for a while and then suddenly rises or falls, it is expected to continue moving in that direction. To capitalize on these movements, traders adopt the volatility breakout strategy. The 'k' value is used as a multiplier applied to a measure of recent market volatility. One method of measuring volatility is the Average True Range (ATR), which represents the difference between the highest and lowest prices of recent trading days. The 'k' value plays a crucial role for traders in setting their trade threshold. This study calculated the 'k' value at a general level and compared its returns with the Buy and Hold strategy, finding that algorithmic trading using the volatility breakout strategy achieved slightly higher returns. In the future, we plan to present simulation results for maximizing returns by determining the optimal 'k' value for automated trading of the S&P 500 index using artificial intelligence deep learning techniques.
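A compact sketch of the strategy as described: buy when the day's high breaks above open + k × the previous day's range, exit at the close, and compare with buy-and-hold. The column names and k = 0.5 are illustrative assumptions, not the study's parameters.

```python
import pandas as pd

def volatility_breakout(df: pd.DataFrame, k: float = 0.5) -> float:
    """Cumulative return of a daily volatility-breakout strategy.

    df needs columns open/high/low/close, one row per trading day.
    """
    prev_range = (df["high"] - df["low"]).shift(1)  # yesterday's range
    target = df["open"] + k * prev_range            # breakout threshold
    hit = df["high"] >= target                      # breakout occurred?
    daily = pd.Series(1.0, index=df.index)          # flat days earn nothing
    # enter at the target price on breakout days, exit at that day's close
    daily[hit] = df.loc[hit, "close"] / target[hit]
    return daily.prod() - 1.0

def buy_and_hold(df: pd.DataFrame) -> float:
    return df["close"].iloc[-1] / df["close"].iloc[0] - 1.0
```

A larger k demands a stronger breakout before entering, so the strategy trades less often; the study's point is that tuning k (eventually via deep learning) controls this trade-off.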

A Study on the Development of Ultrasonography Guide using Motion Tracking System (이미지 가이드 시스템 기반 초음파 검사 교육 기법 개발: 예비 연구)

  • Jung Young-Jin;Kim Eun-Hye;Choi Hye-Rin;Lee Chae-Jeong;Kim Seo-Hyeon;Choi Yu-Jin;Hong Dong-Hee
    • Journal of the Korean Society of Radiology, v.17 no.7, pp.1067-1073, 2023
  • Breast cancer is one of the three most common cancers in modern women, and its incidence is increasing rapidly. It often runs in families and has a mortality rate of about 15%, making affected women a high-risk group, so breast cancer needs constant management after an early examination. Among the various kinds of equipment that can diagnose cancer, ultrasound has the advantages of low risk and real-time diagnosis. Breast ultrasound is all the more useful because Asian women's breast tissue tends to be denser, which reduces the sensitivity of other imaging. However, the results of ultrasound examinations vary greatly depending on the skill of the examiner. To compensate for this, we incorporate motion tracking technology. Motion tracking specifies and analyzes a location according to the movement of an object in three-dimensional space, so real-time control is possible and complex, fast movements can be recorded in real time. Using these advantages, we present the production of an ultrasound examination guide.
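As a toy illustration of the recording side of such a guide (entirely hypothetical; the abstract does not describe an implementation), tracked probe poses could be sampled into a time series and later replayed for trainees:

```python
import time
from dataclasses import dataclass, field

@dataclass
class PoseSample:
    t: float                                         # seconds since start
    position: tuple[float, float, float]             # probe xyz, tracker space
    orientation: tuple[float, float, float, float]   # quaternion wxyz

@dataclass
class ProbeRecorder:
    """Buffers tracked poses; a real tracker would feed this at 60+ Hz."""
    samples: list[PoseSample] = field(default_factory=list)
    t0: float = field(default_factory=time.monotonic)

    def record(self, position, orientation) -> None:
        self.samples.append(
            PoseSample(time.monotonic() - self.t0, position, orientation))

rec = ProbeRecorder()
rec.record((0.10, 0.02, 0.05), (1.0, 0.0, 0.0, 0.0))  # fake tracker reading
print(len(rec.samples), "poses recorded")
```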

Optimized Implementation of PIPO Lightweight Block Cipher on 32-bit RISC-V Processor (32-bit RISC-V상에서의 PIPO 경량 블록암호 최적화 구현)

  • Eum, Si Woo;Jang, Kyung Bae;Song, Gyeong Ju;Lee, Min Woo;Seo, Hwa Jeong
    • KIPS Transactions on Computer and Communication Systems, v.11 no.6, pp.167-174, 2022
  • The PIPO lightweight block cipher was announced at ICISC'20. In this paper, single-block and parallel optimized implementations of the PIPO cipher's ECB, CBC, and CTR operation modes are presented for a 32-bit RISC-V processor. The single-block implementation proposes an efficient 8-bit-unit implementation of the Rlayer function on 32-bit registers. For the parallel implementation, the registers are internally aligned, and a method is described by which four different blocks perform Rlayer function operations in one register. In addition, since the parallel technique is difficult to apply to the encryption process of the CBC operation mode, we propose applying it to the decryption process. For the parallel implementation of the CTR operation mode, an extended initialization vector is used, and a technique that omits the internal register alignment is proposed. This paper shows that the parallel implementation technique is applicable to several block cipher operation modes. As a result, compared to existing implementations that include the key schedule process in ECB mode, performance improvements of 1.7x for the single-block implementation and 1.89x for the parallel implementation are confirmed.
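The four-blocks-per-register idea can be illustrated with a lane-wise byte rotation: pack the corresponding byte of four blocks into one 32-bit word, and one shift-and-mask sequence then rotates all four 8-bit lanes at once. This is a generic sketch of the packing trick using Python integers, not PIPO's actual Rlayer code:

```python
LANES = 4  # four 8-bit lanes in a 32-bit word

def rotl8_lanes(x: int, r: int) -> int:
    """Rotate each 8-bit lane of a packed 32-bit word left by r bits."""
    hi = int.from_bytes(bytes([(0xFF << r) & 0xFF] * LANES), "big")
    lo = int.from_bytes(bytes([0xFF >> (8 - r)] * LANES), "big")
    # the masks stop bits shifted out of one lane leaking into its neighbor
    return ((x << r) & hi) | ((x >> (8 - r)) & lo)

# pack byte i of four different blocks into a single 32-bit register
blocks = [0b10010110, 0b00000001, 0b11110000, 0b10101010]
packed = int.from_bytes(bytes(blocks), "big")

rotated = list(rotl8_lanes(packed, 3).to_bytes(LANES, "big"))
# one packed operation matches four independent 8-bit rotations
assert all(out == ((b << 3) | (b >> 5)) & 0xFF
           for out, b in zip(rotated, blocks))
print([f"{b:08b}" for b in rotated])
```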

Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images (멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합)

  • Hye-Lim Bae;Incheol Kim
    • KIPS Transactions on Software and Data Engineering, v.12 no.12, pp.505-518, 2023
  • 3D point cloud semantic segmentation is a computer vision task that divides a point cloud into different objects and regions by predicting the class label of each point. Existing 3D semantic segmentation models have limitations in sufficiently fusing multi-modal features while preserving the characteristics of both the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. Therefore, in this paper, we propose MMCA-Net, a novel 3D semantic segmentation model using 2D-3D multi-modal features. The proposed model effectively fuses the two heterogeneous feature types, 2D visual and 3D geometric, by using an intermediate fusion strategy and a multi-modal cross-attention-based fusion operation. The proposed model also extracts context-rich 3D geometric features from input point clouds of irregularly distributed points by adopting PTv2 as its 3D geometric encoder. We conducted quantitative and qualitative experiments on the benchmark dataset ScanNetv2 to analyze the performance of the proposed model. In terms of mIoU, the proposed model showed a 9.2% performance improvement over the PTv2 model using only 3D geometric features, and a 12.12% improvement over the MVPNet model using 2D-3D multi-modal features. These results demonstrate the effectiveness and usefulness of the proposed model.
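Cross-attention fusion of this kind can be sketched with the point features as queries attending over image features as keys and values. The dimensions and residual structure below are illustrative assumptions, not MMCA-Net's actual architecture:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Fuse 3D point features with 2D image features via cross-attention."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pts: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
        # pts: (B, N_points, dim) 3D geometric features -> queries
        # img: (B, N_pixels, dim) 2D visual features    -> keys and values
        fused, _ = self.attn(query=pts, key=img, value=img)
        return self.norm(pts + fused)  # residual keeps the 3D cues intact

fusion = CrossModalFusion()
pts = torch.randn(2, 1024, 256)  # toy point features
img = torch.randn(2, 4096, 256)  # toy multi-view image features
print(fusion(pts, img).shape)    # torch.Size([2, 1024, 256])
```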

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services, v.16 no.1, pp.67-74, 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategy used to exploit item importance, itemset mining approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transaction weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transaction weights. The mining algorithms compute transaction weights from the weight of each item in large databases, and then discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, database analysis reveals the importance of a given transaction, because a transaction's weight is higher if it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the best-known algorithms in the field of frequent itemset mining based on transaction weights. As a representative of frequent itemset mining using transaction weights, WIS introduced the concept and strategies of transaction weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To conduct the weighted frequent itemset mining processes efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms need no additional database scan after the construction of the WIT-tree is finished, since each node of the WIT-tree holds item information such as item and transaction IDs. In particular, traditional algorithms perform many database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain all of them. WIT-FWIs-MODIFY has a unique feature that decreases the operations needed to calculate the frequency of a new itemset, and WIT-FWIs-DIFF uses a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage; a scalability test is also conducted to evaluate the stability of each algorithm as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF mines more efficiently than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, based on the Apriori technique, has the worst efficiency because on average it requires far more computations than the others.
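A small sketch of the underlying measure: a transaction's weight is the mean of its items' weights, and an itemset's weighted support is the sum of the weights of the transactions containing it. The brute-force enumeration below only serves to define the measure; the WIT-tree algorithms the paper compares avoid exactly this kind of repeated scanning.

```python
from itertools import combinations

def transaction_weight(tx: frozenset, w: dict) -> float:
    """Transaction weight: mean of the weights of the items it contains."""
    return sum(w[i] for i in tx) / len(tx)

def weighted_frequent_itemsets(db: list, w: dict,
                               min_wsup: float, max_len: int = 3) -> dict:
    tw = [transaction_weight(tx, w) for tx in db]
    items = sorted({i for tx in db for i in tx})
    result = {}
    for n in range(1, max_len + 1):
        for cand in combinations(items, n):
            s = frozenset(cand)
            # weighted support: total weight of transactions containing s
            wsup = sum(tw[j] for j, tx in enumerate(db) if s <= tx)
            if wsup >= min_wsup:
                result[s] = wsup
    return result

db = [frozenset("ab"), frozenset("abc"), frozenset("bc")]
weights = {"a": 0.9, "b": 0.4, "c": 0.7}
print(weighted_frequent_itemsets(db, weights, min_wsup=1.0))
```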

A 2^kβ Algorithm for Euler Function φ(n) Decryption of RSA (RSA의 오일러 함수 φ(n) 해독 2^kβ 알고리즘)

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information, v.19 no.7, pp.71-76, 2014
  • It is virtually impossible in a typical public-key cryptosystem such as RSA to recover the very large primes p and q from the composite number n = pq by integer factorization. When the public key e and the composite number n are known but the private key d remains unknown in asymmetric-key RSA, message decryption is carried out by first obtaining $\phi(n) = (p-1)(q-1) = n+1-(p+q)$ and then computing the inverse $d = e^{-1} \pmod{\phi(n)}$. Integer factorization of n into p, q is the most widely used way to produce $\phi(n)$, and it has been regarded as mathematically hard. Among the various integer factorization methods, the most popular is the congruence of squares $a^2 \equiv b^2 \pmod{n}$ with $a = (p+q)/2$, $b = (q-p)/2$, which is more commonly used than $n/p = q$ trial division. Despite the availability of a number of congruence-of-squares methods, however, many of the RSA numbers remain unfactored. This paper therefore proposes an algorithm that obtains $\phi(n)$ directly and immediately. The proposed algorithm computes $2^k \beta_j \equiv 2^i \pmod{n}$, $0 \le i \le \gamma-1$, $k = 1, 2, \ldots$, or $2^k \beta_j = 2\beta_j$, for $2^j \equiv \beta_j \pmod{n}$, $2^{\gamma-1} < n < 2^{\gamma}$, $j = \gamma-1, \gamma, \gamma+1$, to obtain the solution. It was found to locate an arbitrarily placed $\phi(n)$ in the range $n - 10\lfloor\sqrt{n}\rfloor < \phi(n) \le n - 2\lfloor\sqrt{n}\rfloor$ much more efficiently than conventional algorithms.
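Why recovering $\phi(n)$ immediately breaks RSA follows from the abstract's own identity: $p+q = n+1-\phi(n)$ and $pq = n$, so p and q are the roots of $x^2 - (p+q)x + n = 0$. A tiny sketch of that final step (this is standard number theory, not the paper's $2^k\beta$ search itself):

```python
from math import isqrt

def factor_from_phi(n: int, phi: int) -> tuple[int, int]:
    """Recover p and q from n = p*q and phi = (p-1)*(q-1)."""
    s = n + 1 - phi            # p + q
    d = s * s - 4 * n          # discriminant, equals (p - q)^2
    r = isqrt(d)
    assert r * r == d, "phi is inconsistent with n"
    return (s - r) // 2, (s + r) // 2

# toy example; real RSA moduli are hundreds of digits long
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
print(factor_from_phi(n, phi))  # (53, 61)
```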