• Title/Summary/Keyword: object pooling

Efficient UML Modeling Method for Remote University Application EJB Component Extraction (원격대학 애플리케이션용 EJB 컴포넌트 추출을 위한 UML 설계에 관한 연구)

  • 반길우;최유순;박종구
    • KSCI Review
    • /
    • v.8 no.1
    • /
    • pp.29-36
    • /
    • 2001
  • EJB is a component architecture for building and deploying object-oriented distributed applications. Applications developed with EJB are composed of loosely coupled components, which makes business programs easier to develop. The EJB container automatically handles security, resource pooling, persistence, concurrency, and transaction transparency. This paper illustrates a method for extracting EJB components with sufficient flexibility for their development environment, and applies it to a remote-university application domain.

Memory Optimization in Bullet Hell Game using 'Object Pooling' ('오브젝트 풀링'을 이용한 탄막 게임 메모리 최적화)

  • Jung Lee;Moongi Choe;Jinhyeop Sung;Youngjong Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.37-38
    • /
    • 2023
  • As computer hardware grows more powerful, the specifications that games demand keep rising, so optimization is essential in game development. This paper introduces a memory-optimization technique called 'object pooling', and studies it by building a bullet hell game that applies object pooling to implement the memory optimization directly.
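
This listing contains no code from the paper; the following is only a minimal Python sketch of the object-pooling pattern it describes (pre-allocate once, recycle instead of create/destroy), with `Bullet` as a hypothetical pooled game object:

```python
class Bullet:
    """Minimal pooled game object; a real bullet would also carry sprite, velocity, etc."""
    def __init__(self):
        self.x = self.y = 0.0
        self.alive = False

    def reset(self):
        self.alive = False


class ObjectPool:
    """Pre-allocates objects once and recycles them, so spawning and despawning
    hundreds of bullets per frame causes no allocation or garbage-collection spikes."""

    def __init__(self, size, factory):
        self._factory = factory
        self._free = [factory() for _ in range(size)]  # allocated up front
        self._active = set()

    def acquire(self):
        # Reuse a pooled object if one is free; grow only as a fallback.
        obj = self._free.pop() if self._free else self._factory()
        self._active.add(obj)
        return obj

    def release(self, obj):
        # Deactivate and return the object to the pool instead of destroying it.
        self._active.discard(obj)
        obj.reset()
        self._free.append(obj)
```

Pre-allocating shifts the cost to scene-load time and keeps the per-frame hot path free of allocation and garbage collection, which is the memory effect the paper measures.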

Performance Improvement of Object Recognition System in Broadcast Media Using Hierarchical CNN (계층적 CNN을 이용한 방송 매체 내의 객체 인식 시스템 성능향상 방안)

  • Kwon, Myung-Kyu;Yang, Hyo-Sik
    • Journal of Digital Convergence
    • /
    • v.15 no.3
    • /
    • pp.201-209
    • /
    • 2017
  • This paper presents a smartphone object recognition system using a hierarchical convolutional neural network. In the overall configuration, the smartphone connects to a server, the server recognizes objects in the collected data with a convolutional neural network, and the matched object information is returned to the smartphone. A hierarchical convolutional neural network is also compared with a fractional convolutional neural network: the hierarchical network achieves 88% accuracy versus 73% for the fractional network, a 15-percentage-point improvement. Based on this, the system shows the potential for expanding the T-Commerce market that connects smartphones and broadcast media.
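
The listing does not give the network's layer structure; the sketch below shows only what two-level (hierarchical) classification means here, with `coarse_net` and the per-category `fine_nets` as hypothetical, already-trained models:

```python
import torch

def hierarchical_predict(image, coarse_net, fine_nets):
    """Route a single image (batch of 1) through a coarse classifier first,
    then through the fine-grained classifier of the predicted super-category."""
    with torch.no_grad():
        category = coarse_net(image).argmax(dim=1).item()   # e.g. super-category index
        label = fine_nets[category](image).argmax(dim=1).item()
    return category, label
```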

Classification Algorithms for Human and Dog Movement Based on Micro-Doppler Signals

  • Lee, Jeehyun;Kwon, Jihoon;Bae, Jin-Ho;Lee, Chong Hyun
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.1
    • /
    • pp.10-17
    • /
    • 2017
  • We propose classification algorithms for human and dog movement. The proposed algorithms use micro-Doppler signals obtained from humans and dogs moving in four different directions. A two-stage classifier based on a support vector machine (SVM) is proposed, which uses a radial basis function (RBF) kernel and 16th-order linear predictive coding (LPC) coefficients as feature vectors. With the proposed algorithms, we obtain the best classification results when a first-level SVM classifies the type of movement and a second-level SVM then classifies the moving object, giving a correct classification probability of 95.54% on average. Next, to deal with the difficult problem of distinguishing human and dog running, we propose a two-layer convolutional neural network (CNN). The proposed CNN is composed of six 6×6 convolution filters at the first and second layers, with 5×5 max pooling for the first layer and 2×2 max pooling for the second layer. The proposed CNN-based classifier adopts an autoregressive spectrogram, obtained from the 16th-order LPC vectors over a specific time duration, as the feature image. The proposed CNN exhibits 100% classification accuracy and outperforms the SVM-based classifier. These results show that the proposed classifiers can be used for human and dog classification systems and also for classification problems using data obtained from an ultra-wideband (UWB) sensor.
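
The abstract specifies the CNN's shape precisely enough to sketch; the input spectrogram size and the classifier head below are assumptions not stated in the text:

```python
import torch
import torch.nn as nn

class MicroDopplerCNN(nn.Module):
    """Two-layer CNN following the shape given in the abstract:
    six 6x6 filters per conv layer, 5x5 then 2x2 max pooling."""

    def __init__(self, num_classes=2):          # e.g. human vs. dog
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=6),      # six 6x6 filters, layer 1
            nn.ReLU(),
            nn.MaxPool2d(5),                     # 5x5 max pooling
            nn.Conv2d(6, 6, kernel_size=6),      # six 6x6 filters, layer 2
            nn.ReLU(),
            nn.MaxPool2d(2),                     # 2x2 max pooling
        )
        self.classifier = nn.LazyLinear(num_classes)  # head is an assumption

    def forward(self, x):                        # x: (N, 1, H, W) AR spectrogram
        return self.classifier(self.features(x).flatten(1))

# Example with an assumed 64x64 spectrogram image:
logits = MicroDopplerCNN()(torch.rand(1, 1, 64, 64))
```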

Development of 3D Defense Space Game using Oculus

  • Iim, Won-Gyu;Lee, Byeong Cheol;Kim, Soo Kyun;An, Syoungog
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.8
    • /
    • pp.45-50
    • /
    • 2019
  • Oculus Rift is the most widely used VR (virtual reality) headset among gamers, and the FPS (first-person shooter) is the genre best suited to VR play. VR increases the player's sense of reality and makes them feel as though they are in direct contact with the enemy while battling. The suggested VR game is a first-person game in which the player must defend a specific target against surging enemies within a time limit. Because this requires a large number of objects, object pooling is used to manage them: repeatedly creating and deleting objects puts heavy pressure on memory, so the game instantiates the objects once at the beginning of the scene and afterwards only activates them when needed, lessening the memory burden. A ranking system keeps game records to stimulate competition between players, and the game received positive responses during test play among college students in their 20s.
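
The pattern described here is the same one sketched under the bullet-hell entry above; a hypothetical usage, reusing that `ObjectPool`/`Bullet` sketch, might look like:

```python
# Filled once when the scene loads, as the abstract describes.
pool = ObjectPool(size=128, factory=Bullet)

def on_spawn():
    obj = pool.acquire()   # reuse a pooled object; no allocation mid-game
    obj.alive = True
    return obj

def on_destroyed(obj):
    pool.release(obj)      # deactivate and return to the pool
```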

Pointwise CNN for 3D Object Classification on Point Cloud

  • Song, Wei;Liu, Zishu;Tian, Yifei;Fong, Simon
    • Journal of Information Processing Systems
    • /
    • v.17 no.4
    • /
    • pp.787-800
    • /
    • 2021
  • Three-dimensional (3D) object classification tasks using point clouds are widely used in 3D modeling, face recognition, and robotic missions. However, processing raw point clouds directly is problematic for a traditional convolutional network due to their irregular data format. This paper proposes a pointwise convolutional neural network (CNN) structure that can process point cloud data directly without preprocessing. First, a 2D convolutional layer is introduced to perceive the coordinate information of each point. Then, multiple 2D convolutional layers and a global max pooling layer are applied to extract global features. Finally, based on the extracted features, fully connected layers predict the class labels of objects. We evaluated the proposed pointwise CNN structure on the ModelNet10 dataset, where it obtained higher accuracy than existing methods. Experiments on ModelNet10 also show that differences in the number of points per cloud do not significantly influence the proposed pointwise CNN structure.
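
The channel widths below are assumptions (the listing gives only the layer types); a rough sketch of the described pipeline, per-point 2D convolutions followed by global max pooling and fully connected layers:

```python
import torch
import torch.nn as nn

class PointwiseCNN(nn.Module):
    """Sketch of a pointwise CNN: 2D convolutions applied per point,
    then a global max pool over all points, then fully connected layers."""

    def __init__(self, num_classes=10):             # ModelNet10 has 10 classes
        super().__init__()
        self.pointwise = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(1, 3)),   # consumes (x, y, z) of each point
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=1),      # per-point feature lifting
            nn.ReLU(),
            nn.Conv2d(128, 1024, kernel_size=1),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, pts):                  # pts: (N, 1, num_points, 3)
        f = self.pointwise(pts)              # (N, 1024, num_points, 1)
        g = f.max(dim=2).values.flatten(1)   # global max pool over points
        return self.head(g)
```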

One-step deep learning-based method for pixel-level detection of fine cracks in steel girder images

  • Li, Zhihang;Huang, Mengqi;Ji, Pengxuan;Zhu, Huamei;Zhang, Qianbing
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.153-166
    • /
    • 2022
  • Identifying fine cracks in steel bridge facilities is a challenging task in structural health monitoring (SHM). This study proposes an end-to-end crack image segmentation framework based on a one-step Convolutional Neural Network (CNN) for pixel-level object recognition with high accuracy. To address the particular challenges of small-object detection against complex backgrounds, effort went into loss-function selection for the imbalanced samples and into module modifications to improve generalization on complicated images. Specifically, loss functions were compared among the Binary Cross Entropy (BCE), Focal, Tversky, and Dice losses, with the last three specialized for biased sample distributions. Structural modifications with dilated convolution, Spatial Pyramid Pooling (SPP), and a Feature Pyramid Network (FPN) were also performed to form a new backbone termed CrackDet. Models with the various loss functions and feature extraction modules were trained on crack images and tested on full-scale images collected on steel box girders. The model incorporating the classic U-Net as its backbone with Dice loss achieved the highest mean Intersection-over-Union (mIoU) of 0.7571 on full-scale pictures, while the best performance on cropped crack images, an mIoU of 0.7670, was achieved by integrating CrackDet with Dice loss.
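
Since Dice loss is the deciding factor in both results, a common soft-Dice formulation is sketched below for reference; the paper's exact variant and smoothing constant are not given in this listing:

```python
import torch

def dice_loss(pred, target, eps=1.0):
    """Soft Dice loss for binary segmentation: measures region overlap,
    so it is far less sensitive to the heavy background/crack pixel
    imbalance than plain BCE. `pred` holds probabilities in [0, 1]."""
    pred, target = pred.flatten(1), target.flatten(1)
    inter = (pred * target).sum(dim=1)
    union = pred.sum(dim=1) + target.sum(dim=1)
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
```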

A Method of Eye and Lip Region Detection using Faster R-CNN in Face Image (초고속 R-CNN을 이용한 얼굴영상에서 눈 및 입술영역 검출방법)

  • Lee, Jeong-Hwan
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.8
    • /
    • pp.1-8
    • /
    • 2018
  • In biometric security fields such as face and iris recognition, it is essential to extract facial features such as the eyes and lips. In this paper, we study a method of detecting the eye and lip regions in face images using Faster R-CNN. Faster R-CNN is a deep-learning object detection method well known for its superior performance compared to conventional feature-based methods. In our method, feature maps are extracted by applying convolution, rectified linear activation, and max pooling to the facial images in order. An RPN (region proposal network) is trained on the feature maps to generate region proposals, and eye and lip detectors are then trained using the region proposals and feature maps. To examine the performance of the proposed method, we experimented with 800 face images of Korean men and women, using 480 images for training and 320 for testing. Computer simulation showed that the average precision of eye and lip region detection after 50 epochs is 97.7% and 91.0%, respectively.
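
The paper trains its own convolution/ReLU/max-pooling backbone and RPN; as a stand-in, the sketch below shows the same eye-and-lip detection task with torchvision's Faster R-CNN implementation rather than the paper's network:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Faster R-CNN with a ResNet-50 FPN backbone, its box head replaced
# for 3 classes: background, eye, lip (class layout is an assumption).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes=3)

model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 480, 640)])   # one dummy face image
# preds[0]["boxes"], ["labels"], ["scores"] hold the detected regions.
```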

Depth Map Estimation Model Using 3D Feature Volume (3차원 특징볼륨을 이용한 깊이영상 생성 모델)

  • Shin, Soo-Yeon;Kim, Dong-Myung;Suh, Jae-Won
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.11
    • /
    • pp.447-454
    • /
    • 2018
  • This paper proposes a depth image generation algorithm for stereo images using a deep learning model composed of a CNN (convolutional neural network). The proposed algorithm consists of a feature extraction unit, which extracts the main features of each parallax image, and a depth learning unit, which learns the disparity information from the extracted features. First, the feature extraction unit extracts a feature map for each parallax image through the Xception module and the ASPP (atrous spatial pyramid pooling) module, which are composed of 2D CNN layers. Then, the feature maps of the two views are stacked into a 3D volume according to the disparity, and the depth image is estimated after passing through the depth learning unit, which learns the depth estimation weights through a 3D CNN. The proposed algorithm estimates the depth of object regions more accurately than other algorithms.
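
A common way to build such a 3D feature volume is to stack the two views' feature maps at each candidate disparity; the sketch below is a generic stereo construction, not the paper's exact layers:

```python
import torch

def build_feature_volume(left_f, right_f, max_disp):
    """Concatenate left features with right features shifted by each
    candidate disparity, giving a (N, 2C, max_disp, H, W) volume that
    a 3D CNN can then regularize into a depth/disparity estimate."""
    n, c, h, w = left_f.shape
    vol = left_f.new_zeros(n, 2 * c, max_disp, h, w)
    for d in range(max_disp):
        vol[:, :c, d, :, :] = left_f
        if d > 0:
            vol[:, c:, d, :, d:] = right_f[:, :, :, :-d]  # shift right view by d
        else:
            vol[:, c:, d, :, :] = right_f
    return vol
```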

Keypoint-based Deep Learning Approach for Building Footprint Extraction Using Aerial Images

  • Jeong, Doyoung;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.1
    • /
    • pp.111-122
    • /
    • 2021
  • Building footprint extraction is an active topic in the domain of remote sensing, since buildings are a fundamental unit of urban areas. Deep convolutional neural networks successfully perform footprint extraction from optical satellite images. However, semantic segmentation produces coarse results in the output, such as blurred and rounded boundaries, which are caused by the use of convolutional layers with large receptive fields and pooling layers. The objective of this study is to generate visually enhanced building objects by directly extracting the vertices of individual buildings, combining instance segmentation and keypoint detection. The target keypoints in building extraction are defined as points of interest based on the local image gradient direction, that is, the vertices of a building polygon. The proposed framework follows a two-stage, top-down approach that is divided into object detection and keypoint estimation. Keypoints between instances are distinguished by merging the rough segmentation masks and the local features of regions of interest. A building polygon is created by grouping the predicted keypoints through a simple geometric method. Our model achieved an F1-score of 0.650 with an mIoU of 62.6 for building footprint extraction on the Open Cities AI dataset. The results demonstrate that the proposed framework using keypoint estimation exhibits better segmentation performance than Mask R-CNN in both qualitative and quantitative terms.
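
The abstract leaves the "simple geometric method" unspecified; one plausible sketch is ordering the predicted corner keypoints by polar angle around their centroid:

```python
import math

def keypoints_to_polygon(points):
    """Turn predicted building-corner keypoints [(x, y), ...] into a polygon
    by ordering the vertices counterclockwise around their centroid.
    This is one simple geometric grouping, not the paper's stated rule."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
```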