• Title/Summary/Keyword: GPU 병렬처리 (GPU parallel processing)

Performance Enhancement of Scaling Filter and Transcoder using CUDA (CUDA를 활용한 스케일링 필터 및 트랜스코더의 성능향상)

  • Han, Jae-Geun;Ko, Young-Sub;Suh, Sung-Han;Ha, Soon-Hoi
    • Journal of KIISE: Computing Practices and Letters / v.16 no.4 / pp.507-511 / 2010
  • In this paper, we propose to enhance the performance of a software transcoder by using a GPGPU for its scaling filters. Video transcoding translates a video file into another video file with a different coding algorithm and/or a different frame size. Demand for it increases as more multimedia devices with different specifications coexist in our daily life. Since transcoding is computationally intensive, a software transcoder that runs on a CPU takes a long time. In this paper, we achieve significant speed-up by parallelizing the scaling filter on a GPGPU, which provides much greater computational power. Extensive experiments with video clips of various sizes and with various scaling filter options verify that the enhanced transcoder achieves a 36% performance improvement with the default option, and up to 101% with a certain option.
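
The paper's CUDA source is not shown here; below is a minimal sketch of the kind of scaling kernel such a transcoder parallelizes, written with Numba's CUDA backend in place of raw CUDA C. The bilinear interpolation, single luma plane, and frame sizes are illustrative assumptions (a CUDA-capable GPU is required):

```python
import math
import numpy as np
from numba import cuda

@cuda.jit
def bilinear_scale(src, dst, sx, sy):
    # One GPU thread per destination pixel, as in a parallel scaling filter.
    x, y = cuda.grid(2)
    if x < dst.shape[1] and y < dst.shape[0]:
        fx, fy = x * sx, y * sy
        x0, y0 = int(fx), int(fy)
        x1 = min(x0 + 1, src.shape[1] - 1)
        y1 = min(y0 + 1, src.shape[0] - 1)
        wx, wy = fx - x0, fy - y0
        dst[y, x] = ((1 - wx) * (1 - wy) * src[y0, x0] +
                     wx * (1 - wy) * src[y0, x1] +
                     (1 - wx) * wy * src[y1, x0] +
                     wx * wy * src[y1, x1])

src = np.random.rand(1080, 1920).astype(np.float32)   # one luma plane
dst = np.zeros((480, 854), dtype=np.float32)
threads = (16, 16)
blocks = (math.ceil(dst.shape[1] / 16), math.ceil(dst.shape[0] / 16))
bilinear_scale[blocks, threads](src, dst,
                                src.shape[1] / dst.shape[1],
                                src.shape[0] / dst.shape[0])
```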

Design and Implementation of an Approximate Surface Lens Array System based on OpenCL (OpenCL 기반 근사곡면 렌즈어레이 시스템의 설계 및 구현)

  • Kim, Do-Hyeong;Song, Min-Ho;Jung, Ji-Sung;Kwon, Ki-Chul;Kim, Nam;Kim, Kyung-Ah;Yoo, Kwan-Hee
    • The Journal of the Korea Contents Association / v.14 no.10 / pp.1-9 / 2014
  • Generally, the integral image used for an autostereoscopic 3D display is generated for a flat lens array, but a flat lens array offers only a narrow viewing angle. To make up for this weakness, curved lens arrays have been proposed; because of technical and cost problems, an approximate surface lens array composed of several flat lens arrays is used instead of an ideal curved lens array. In this paper, we constructed an approximate surface lens array of 20×8 square flat lenses arranged on a sphere of 100 mm radius, and obtained about twice the viewing angle of a flat lens array. In particular, unlike existing research that generates the integral image manually, we propose an OpenCL GPU parallel algorithm for generating the integral image in real time. As a result, we obtained 12-20 frames/sec for various 3D volume data with a 15×15 approximate surface lens array.
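
The paper's OpenCL kernel is not reproduced here; the PyOpenCL sketch below only illustrates the parallelization pattern, one work-item per integral-image pixel, with a hypothetical per-lens mirror mapping standing in for the approximate-surface geometry. Lens pitch, image size, and the mapping itself are assumptions:

```python
import numpy as np
import pyopencl as cl

KERNEL = r"""
__kernel void elemental(__global const float *scene,
                        __global float *out,
                        const int lens_px,
                        const int width,
                        const int height)
{
    int gx = get_global_id(0), gy = get_global_id(1);
    if (gx >= width || gy >= height) return;
    int ux = gx % lens_px, uy = gy % lens_px;   /* pixel under the lens */
    int lx = gx - ux,      ly = gy - uy;        /* lens origin          */
    /* Hypothetical mapping: mirror each sub-pixel through the lens centre;
       the real system would ray-trace the approximate-surface geometry. */
    int sx = lx + (lens_px - 1 - ux);
    int sy = ly + (lens_px - 1 - uy);
    out[gy * width + gx] = scene[sy * width + sx];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, KERNEL).build()

w = h = 600                                  # 15 x 15 lenses at 40 px pitch
lens_px = 40
scene = np.random.rand(h, w).astype(np.float32)
out = np.empty_like(scene)
mf = cl.mem_flags
d_scene = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=scene)
d_out = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)
prg.elemental(queue, (w, h), None, d_scene, d_out,
              np.int32(lens_px), np.int32(w), np.int32(h))
cl.enqueue_copy(queue, out, d_out)
```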

Real-time Eye Contact System Using a Kinect Depth Camera for Realistic Telepresence (Kinect 깊이 카메라를 이용한 실감 원격 영상회의의 시선 맞춤 시스템)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4C / pp.277-282 / 2012
  • In this paper, we present a real-time eye contact system for realistic telepresence using a Kinect depth camera. To generate the eye contact image, we capture a pair of color and depth videos, then separate the single foreground user from the background. Since the raw depth data contain several types of noise, we apply a joint bilateral filter. We then apply a discontinuity-adaptive depth filter to the filtered depth map to reduce the disocclusion area. From the color image and the preprocessed depth map, we construct a user mesh model at the virtual viewpoint. The entire system is implemented with GPU-based parallel programming for real-time processing. Experimental results show that the proposed system realizes eye contact efficiently, providing realistic telepresence.
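
A small NumPy sketch of the joint bilateral filtering step (the paper runs it in parallel on the GPU): each depth value is replaced by a weighted average whose weights combine spatial distance with intensity differences in the color image, so depth edges that coincide with color edges survive. The grayscale guide and parameter values are assumptions:

```python
import numpy as np

def joint_bilateral_depth(depth, gray, radius=4, sigma_s=3.0, sigma_r=12.0):
    """Smooth a noisy depth map while respecting edges in the guide image."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth.astype(np.float64), radius, mode="edge")
    pad_g = np.pad(gray.astype(np.float64), radius, mode="edge")
    for y in range(h):
        for x in range(w):
            dwin = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights come from the color (guide) image, not the depth.
            range_w = np.exp(-((gwin - gray[y, x]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * range_w
            out[y, x] = (wgt * dwin).sum() / wgt.sum()
    return out

depth = np.random.rand(120, 160) * 4.0     # noisy depth, metres
gray = np.random.rand(120, 160) * 255.0    # grayscale guide image
smooth = joint_bilateral_depth(depth, gray)
```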

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them. But some applications need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images to bill users. Character strings that are not of interest, such as device type, manufacturer, manufacturing date, and specification, are not valuable to the application. Thus, the application has to analyze only the region of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest. We built three neural networks for the application system. The first is a convolutional neural network which detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network which transforms each region of interest into spatial sequential feature vectors; and the third is a bidirectional long short-term memory network which converts the sequential features into character strings by time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID and the gas usage amount. The device ID consists of 12 Arabic numerals and the gas usage amount consists of 4-5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes the reading request to an input queue with FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks which conduct the character recognition and runs on the NVIDIA GPU module. The slave process continually polls the input queue for recognition requests. If a request from the master process is in the input queue, the slave process converts the image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to an output queue, and goes back to polling the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks.
22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant): normal data are clean images, noise means images with noise, reflex means images with light reflection in the gasometer region, scale means images with a small object size due to long-distance capture, and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
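
A minimal sketch of the master-slave queue pattern described above, with Python's standard library standing in for the AWS deployment; recognize() is a placeholder for the three-network OCR pipeline:

```python
import queue
import threading

input_q = queue.Queue()    # FIFO: master pushes reading requests here
output_q = queue.Queue()   # slaves return recognized strings here

def recognize(image):
    # Placeholder for region detection (CNN) + feature extraction (CNN)
    # + sequence decoding (bidirectional LSTM).
    return {"device_id": "000000000000", "usage": "0000"}

def slave_worker():
    while True:
        req_id, image = input_q.get()   # blocking poll of the input queue
        output_q.put((req_id, recognize(image)))
        input_q.task_done()             # back to idle, poll again

for _ in range(4):                      # e.g. one worker per GPU slot
    threading.Thread(target=slave_worker, daemon=True).start()

# Master side: enqueue a request from a mobile device, then wait.
input_q.put((1, b"...jpeg bytes..."))
print(output_q.get())
```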

Spatial Computation on Spark Using GPGPU (GPGPU를 활용한 스파크 기반 공간 연산)

  • Son, Chanseung;Kim, Daehee;Park, Neungsoo
    • KIPS Transactions on Computer and Communication Systems / v.5 no.8 / pp.181-188 / 2016
  • Recently, as the amount of spatial information increases, interest in spatial information processing has grown. Spatial database systems extended from traditional relational database systems have difficulty handling large data sets because of limited scalability. SpatialHadoop, extended from the Hadoop system, has low performance because its spatial computations require many writes of intermediate results to disk, degrading performance. In this paper, Spatial Computation Spark (SC-Spark), an in-memory distributed processing framework, is proposed. SC-Spark extends Spark in order to perform spatial operations on large-scale data efficiently. In addition, a GPGPU-based SC-Spark is developed to improve SC-Spark's performance. SC-Spark exploits Spark's ability to hold intermediate results in memory, and GPGPU-based SC-Spark can perform spatial operations in parallel using the many processing elements of a GPU. To verify the proposed work, experiments on a single AMD system were performed using SC-Spark and GPGPU-based SC-Spark for point-in-polygon and spatial join operations. The experimental results showed that SC-Spark and GPGPU-based SC-Spark were up to 8 times faster than SpatialHadoop.
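
A hedged sketch of the point-in-polygon primitive that SC-Spark parallelizes, here as plain NumPy even-odd ray casting; the GPGPU version would instead assign batches of points to GPU threads:

```python
import numpy as np

def points_in_polygon(pts, poly):
    """Even-odd ray casting; pts is (N, 2), poly is (M, 2), closed implicitly."""
    x, y = pts[:, 0], pts[:, 1]
    inside = np.zeros(len(pts), dtype=bool)
    x1, y1 = poly[-1]
    for x2, y2 in poly:
        # Toggle for every edge a horizontal ray from the point crosses.
        with np.errstate(divide="ignore", invalid="ignore"):
            crosses = ((y1 > y) != (y2 > y)) & \
                      (x < (x2 - x1) * (y - y1) / (y2 - y1) + x1)
        inside ^= crosses
        x1, y1 = x2, y2
    return inside

poly = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], dtype=float)
pts = np.array([[2.0, 2.0], [5.0, 5.0]])
print(points_in_polygon(pts, poly))   # [ True False]
```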

Development of long-term daily high-resolution gridded meteorological data based on deep learning (딥러닝에 기반한 우리나라 장기간 일 단위 고해상도 격자형 기상자료 생산)

  • Yookyung Jeong;Kyuhyun Byu
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.198-198 / 2023
  • To establish water resource plans for a basin efficiently, long-term hydrological modeling as well as analysis of hydrological climate-change impacts under future climate scenarios is important. This requires producing high-quality, high-resolution gridded meteorological data based on observations. In Korea, however, the high-density observation network consisting of the Automated Synoptic Observing System (ASOS) and the Automatic Weather Station (AWS) network has been available only since 2000, so long-term gridded meteorological data are scarce. To compensate, this study aims to produce the long-term daily high-resolution gridded meteorological data that could have been generated had the current high-density observation network existed before 2000. Specifically, gridded meteorological data for the recent and past periods, divided at the year 2000, are modeled with a deep learning algorithm to reconstruct the spatial variability and characteristics of the meteorological variables (daily temperature and precipitation) for the past period. To produce the gridded data, K-PRISM, an interpolation method that quantifies the influence of meteorological factors based on Korea's elevation, is applied to generate two gridded datasets, one from the high-density and one from the low-density observation network. The low-density data serve as input and the high-density data as output for a Long Short-Term Memory (LSTM) model developed for each grid point, with multi-GPU parallel processing enabling cost-efficient computation. Finally, the low-density gridded data for 1973-1999 are fed to the models to produce high-density gridded data for that period. Most of the developed prediction models show NSE values above 0.9. The model developed in this study therefore produces high-quality long-term meteorological data efficiently and accurately, and these data can serve as important input for future analyses of long-term climate trends and variability.
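
A minimal sketch of the per-grid-point LSTM described above (not the authors' code), assuming a 30-day input window and the two variables named in the abstract; the multi-GPU parallelism could be added with tf.distribute:

```python
import tensorflow as tf

WINDOW, N_FEATURES = 30, 2    # 30-day window; daily temperature, precipitation

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_FEATURES)),   # low-density-network series
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_FEATURES),            # high-density-network values
])
model.compile(optimizer="adam", loss="mse")
# One such model is trained per grid point; batches of grid points can be
# spread across GPUs with tf.distribute.MirroredStrategy.
```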

CINEMAPIC : Generative AI-based movie concept photo booth system (시네마픽 : 생성형 AI기반 영화 컨셉 포토부스 시스템)

  • Seokhyun Jeong;Seungkyu Leem;Jungjin Lee
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.149-158
    • /
    • 2024
  • Photo booths have traditionally provided a fun and easy way to capture and print photos to cherish memories, allowing individuals to capture their desired poses and props and to share memories with friends and family. To enable more diverse expression, generative AI-powered photo booths have emerged. However, existing AI photo booths face challenges such as difficulty in taking group photos, inability to accurately reflect users' poses, and the challenge of applying different concepts to individual subjects. To tackle these issues, we present CINEMAPIC, a photo booth system that allows users to freely choose poses, positions, and concepts for their photos. The system workflow includes three main steps: pre-processing, generation, and post-processing to apply individualized concepts. To produce high-quality group photos, the system generates a transparent image for each character and enhances the backdrop-composited image through a small number of denoising steps. The workflow is accelerated by applying an optimized diffusion model and GPU parallelization. The system was implemented as a prototype, and its effectiveness was validated through a user study and a large-scale pilot operation involving approximately 400 users. The results showed a significant preference for the proposed system over existing methods, confirming its potential for real-world photo booth applications. The proposed CINEMAPIC photo booth is expected to lead the way in a more creative and differentiated market, with potential for widespread application in various fields.
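
The generation step might look roughly like the following sketch (not the CINEMAPIC code): an image-to-image diffusion pipeline run with a small number of denoising steps per subject, so the user's pose is kept while the chosen movie concept is applied. The model checkpoint, strength, and step count are illustrative assumptions:

```python
import torch
from diffusers import AutoPipelineForImage2Image

# Assumed checkpoint; a distilled model keeps the denoising step count low.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16).to("cuda")

def stylize_subject(subject_rgba, concept_prompt):
    # Few denoising steps at moderate strength preserve the captured pose
    # while restyling the subject to the movie concept.
    return pipe(prompt=concept_prompt, image=subject_rgba.convert("RGB"),
                strength=0.4, num_inference_steps=4).images[0]

# Each subject is stylized independently (one GPU stream per subject in the
# parallelized workflow) and then alpha-composited onto the backdrop.
```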

A Benchmark of AI Application based on Open Source for Data Mining Environmental Variables in Smart Farm (스마트 시설환경 환경변수 분석을 위한 Open source 기반 인공지능 활용법 분석)

  • Min, Jae-Ki;Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2017.04a / pp.159-159 / 2017
  • A smart farm facility environment is a facility-based production environment, typified by horticulture and livestock, into which information and communication technology and data analysis techniques are being introduced. To use the vast amount of growth and environment data produced by smart facility environments, which have recently proliferated in hardware terms, correctly and appropriately, analysis techniques differentiated from those of general industrial settings are required. Mechanically applying big data processing techniques studied in software engineering to agricultural big data has limits: the various environmental variables inside and outside a facility make predictive modeling extremely difficult because the time-series data are intractable, irreversible, unspecific, and unstructured in pattern. In this study, TensorFlow (www.tensorflow.org), a neural network research package that has recently attracted surging interest, and OpenNN (www.opennn.net), a representative open-source package, were applied to the analysis of correlations among environmental variables in smart facility environments. Regarding operating environments, TensorFlow runs on Linux (Ubuntu 16.04.4), Mac OS X (El Capitan 10.11), and Windows (x86-compatible); OpenNN does not provide platform binaries but provides the full source code, so it can be used after compiling on the target platform. As for development languages, TensorFlow uses Python as its primary language, with development carried out inside a Python (v2.7 or v3.N) virtual environment. Notably, because of these environmental constraints, the high-speed computation that is one of TensorFlow's main strengths is available only in some operating environments: GPU (Graphics Processing Unit) hardware acceleration is available on Linux. Since it operates in a virtual development environment, real-time information processing is limited, which must be taken into account. Meanwhile, the recently released (2017.03) TensorFlow API r1.0 newly supports the Go language in addition to Python, C++, and Java, greatly broadening developers' options. OpenNN is based on C++ and can be used in any development environment with a C++ compiler; its distinctive feature is that it partially overcomes the lack of hardware acceleration through integration with clustering platforms. Using these two packages, a large-scale linear model was experimentally applied (with hourly, daily, and weekly partitions) to the temperature, humidity, illuminance, and CO2 data acquired from February to May 2016 inside a strawberry greenhouse in Eumseong-gun, Chungbuk, and predictive modeling of the environmental variables of adjacent segments was performed. Under identical training conditions, TensorFlow was far superior in both development time and training speed; for OpenNN to show comparable performance, parallel clustering techniques would have to be employed. Research is needed on alternatives both to neural network modeling techniques limited to offline batch processing and to high-performance computing hardware that cannot be deployed in the field.
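
For reference, a linear model of the kind the study applies could be expressed as below; this uses today's tf.keras API with synthetic data as a stand-in for the 2017-era large-scale linear estimator and the actual greenhouse measurements:

```python
import numpy as np
import tensorflow as tf

# X: (samples, 4) temperature/humidity/illuminance/CO2 of one segment
# y: (samples, 4) the same variables for the adjacent segment
X = np.random.rand(1000, 4).astype(np.float32)
y = np.random.rand(1000, 4).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(4),   # linear model: no activation function
])
model.compile(optimizer="sgd", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
```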

A Study on Improvement of the Human Posture Estimation Method for Performing Robots (공연로봇을 위한 인간자세 추정방법 개선에 관한 연구)

  • Park, Cheonyu;Park, Jaehun;Han, Jeakweon
    • Journal of Broadcast Engineering / v.25 no.5 / pp.750-757 / 2020
  • One of the basic tasks for robots that interact with humans is to grasp human behavior quickly and accurately. Therefore, when a robot estimates a human pose, it is necessary to increase the recognition accuracy and to recognize the pose as quickly as possible. However, when the human pose is estimated using deep learning, the representative artificial intelligence technique, accuracy and speed are not satisfied at the same time, so it is common to choose either a top-down method, which has high inference accuracy, or a bottom-up method, which has high processing speed. In this paper, we propose two methods that retain the advantages of both approaches while compensating for their disadvantages. The first is to perform parallel inference on the server using multiple GPUs, and the second is to combine the bottom-up method with one-class classification. Experiments showed that both methods improve speed. If these two methods are applied to an entertainment robot, highly reliable interaction with the audience can be expected.
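
A hedged sketch of the first proposal (server-side parallel inference on multiple GPUs), not the authors' code: one replica of the pose network per GPU with frames dispatched round-robin; pose_model and the frame format are placeholders:

```python
import copy
from itertools import cycle

import torch

@torch.no_grad()
def parallel_pose_inference(pose_model, frames):
    # One replica per GPU; CUDA kernel launches are asynchronous, so
    # forward passes on different devices overlap in time.
    devices = [f"cuda:{i}" for i in range(torch.cuda.device_count())] or ["cpu"]
    replicas = [copy.deepcopy(pose_model).to(d).eval() for d in devices]
    results = []
    for dev_idx, frame in zip(cycle(range(len(devices))), frames):
        results.append(replicas[dev_idx](frame.to(devices[dev_idx])))
    return results
```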

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a convolutional neural network, reviving interest in neural networks. The success of convolutional neural networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. In most domains, it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even given such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, or fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, since it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple-layer representation. Moreover, our approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on Caltech-256, 73.1% compared to 69.2% for the FC8 layer on VOC07, and 52.2% compared to 48.7% for the FC7 layer on SUN397. We also showed that our approach achieves superior performance, with 2.8%, 2.1%, and 3.1% accuracy improvements on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
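
A hedged sketch of the pipeline with torchvision's pre-trained AlexNet (weights are downloaded on first use): forward hooks capture the three fully connected layers, the activations are concatenated into the 9192-dimensional representation, and PCA selects salient components before classifier training. The hook points (pre-ReLU) and the component count are assumptions; dataset loading is omitted:

```python
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

# classifier[1], [4], [6] are the FC6/FC7/FC8 Linear layers in torchvision.
taps = {}
for name, idx in [("fc6", 1), ("fc7", 4), ("fc8", 6)]:
    alexnet.classifier[idx].register_forward_hook(
        lambda m, inp, out, key=name: taps.__setitem__(key, out.detach()))

@torch.no_grad()
def multi_layer_features(batch):          # batch: (N, 3, 224, 224)
    alexnet(batch)
    # 4096 + 4096 + 1000 = 9192-dimensional multiple-layer representation.
    return torch.cat([taps["fc6"], taps["fc7"], taps["fc8"]], dim=1)

feats = multi_layer_features(torch.rand(8, 3, 224, 224)).numpy()
pca = PCA(n_components=min(len(feats), 512))   # keep salient components
reduced = pca.fit_transform(feats)             # input to the final classifier
```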