• Title/Summary/Keyword: Learning Processing


Image-to-Image Translation Based on U-Net with R2 and Attention (R2와 어텐션을 적용한 유넷 기반의 영상 간 변환에 관한 연구)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.9-16
    • /
    • 2020
  • In image processing and computer vision, the problems of translating one image into another and of generating new images have drawn steady attention as hardware advances. However, computer-generated images often still look unnatural to the human eye. With the recent surge of deep learning research, image generation and enhancement are being actively studied, and the Generative Adversarial Network (GAN) has proven particularly effective for image generation. Since the original GAN was proposed, many GAN variants have been introduced that produce more natural images than earlier generative approaches. Among them, pix2pix is a conditional GAN and a general-purpose network that performs well on a variety of datasets. pix2pix uses a U-Net generator, but several U-Net variants have been shown to outperform the plain U-Net. In this study, therefore, images are generated by replacing the U-Net generator of pix2pix with such variants, and the results are compared and evaluated. The generated images confirm that pix2pix with Attention, R2, and Attention-R2 networks performs better than the original U-Net based pix2pix; analyzing the limitations of the strongest network is suggested as future work.
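
As an illustration of the attention mechanism mentioned in this abstract, the following is a minimal PyTorch sketch of an additive attention gate of the kind used in Attention U-Net. It is not the authors' implementation; the class name, channel sizes, and tensor shapes are assumptions chosen for the example.

```python
# Minimal sketch of an additive attention gate (Attention U-Net style), applied to
# a skip connection. Illustration only; sizes and names are placeholders.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Sequential(nn.Conv2d(gate_ch, inter_ch, kernel_size=1), nn.BatchNorm2d(inter_ch))
        self.w_x = nn.Sequential(nn.Conv2d(skip_ch, inter_ch, kernel_size=1), nn.BatchNorm2d(inter_ch))
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        # g: gating signal from the decoder, x: skip-connection features from the encoder
        a = self.relu(self.w_g(g) + self.w_x(x))   # additive attention
        return x * self.psi(a)                     # re-weight the skip features

# Example: attend over a 64-channel skip connection with a 64-channel gating signal
gate = AttentionGate(gate_ch=64, skip_ch=64, inter_ch=32)
g = torch.randn(1, 64, 32, 32)
x = torch.randn(1, 64, 32, 32)
print(gate(g, x).shape)  # torch.Size([1, 64, 32, 32])
```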

RBM-based distributed representation of language (RBM을 이용한 언어의 분산 표상화)

  • You, Heejo;Nam, Kichun;Nam, Hosung
    • Korean Journal of Cognitive Science
    • /
    • v.28 no.2
    • /
    • pp.111-131
    • /
    • 2017
  • The connectionist model is one approach to studying language processing from a computational perspective, and in connectionist modeling, constructing the representation is as important as designing the model's structure, because it determines the level of learning and performance the model can reach. Connectionist models have used two kinds of representation: localist and distributed. However, the localist representations used in previous studies suffer from output-layer units with rare target activations becoming inactivated, while earlier distributed representations are difficult to interpret because the information they carry is opaque; both limitations have constrained connectionist modeling as a whole. This paper presents a new method that induces a distributed representation from a localist one by exploiting the information-abstraction property of the restricted Boltzmann machine (RBM). The proposed method effectively resolves the problems of conventional representations by compressing information into a distributed representation and inversely transforming it back into a localist one.
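
The following NumPy sketch illustrates the general idea of using a restricted Boltzmann machine to turn localist (one-hot) inputs into distributed hidden codes via one-step contrastive divergence (CD-1). It is a toy illustration under assumed dimensions and hyperparameters, not the representation scheme or data of the paper.

```python
# Toy RBM trained with CD-1: localist (one-hot) visible vectors are compressed
# into distributed hidden activation patterns. Sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 20, 8, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)          # visible bias
b_h = np.zeros(n_hidden)           # hidden bias

# Localist training data: each row is a one-hot vector (one item = one unit)
data = np.eye(n_visible)

for epoch in range(500):
    v0 = data
    h0_prob = sigmoid(v0 @ W + b_h)                          # up-pass
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    v1_prob = sigmoid(h0 @ W.T + b_v)                        # reconstruction (down-pass)
    h1_prob = sigmoid(v1_prob @ W + b_h)                     # second up-pass
    # CD-1 gradient approximation
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(data)
    b_v += lr * (v0 - v1_prob).mean(axis=0)
    b_h += lr * (h0_prob - h1_prob).mean(axis=0)

# Distributed representation of each localist input: its hidden activation pattern
codes = sigmoid(data @ W + b_h)
print(codes.shape)  # (20, 8): 20 localist items mapped to 8-dimensional distributed codes
```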

Implementation and Performance Measuring of Erasure Coding of Distributed File System (분산 파일시스템의 소거 코딩 구현 및 성능 비교)

  • Kim, Cheiyol;Kim, Youngchul;Kim, Dongoh;Kim, Hongyeon;Kim, Youngkyun;Seo, Daewha
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.11
    • /
    • pp.1515-1527
    • /
    • 2016
  • With the growth of big data, machine learning, and cloud computing, storage that can hold large amounts of unstructured data has become increasingly important. Commodity-hardware based distributed file systems such as MAHA-FS, GlusterFS, and the Ceph file system have therefore received much attention for their scale-out capability and low cost. For data fault tolerance, most of these file systems initially used replication, but as storage grows to tens or hundreds of petabytes, the low space efficiency of replication has become a problem. This paper applies an erasure-coding fault-tolerance policy to MAHA-FS for higher space efficiency and introduces the VDelta technique to solve the resulting data-consistency problem. We compare the performance of two file systems with different I/O processing architectures, the server-centric MAHA-FS and the client-centric GlusterFS, and find that the erasure-coding performance of MAHA-FS is better than that of GlusterFS.
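
To make the space-efficiency argument concrete, the toy Python sketch below contrasts replication overhead with the overhead of the simplest erasure code, a single XOR parity block. Production systems such as MAHA-FS and GlusterFS use more general codes (for example Reed-Solomon), so this is only an assumed illustration of the principle, not either system's implementation.

```python
# Single-parity erasure coding vs. 3-way replication: recover a lost block and
# compare storage overhead. Toy example with tiny fixed-size blocks.
from functools import reduce

def xor_blocks(blocks):
    # XOR equal-length byte blocks column by column
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Split an object into k data blocks and add one parity block (k+1 stored blocks).
data_blocks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]          # k = 4
parity = xor_blocks(data_blocks)                             # m = 1 parity block

# Recover a lost data block from the surviving blocks plus parity.
lost_index = 2
survivors = [blk for i, blk in enumerate(data_blocks) if i != lost_index]
recovered = xor_blocks(survivors + [parity])
assert recovered == data_blocks[lost_index]

k, m, r = 4, 1, 3
print("erasure coding overhead:", (k + m) / k)   # 1.25x for k=4, m=1
print("3-way replication overhead:", r)          # 3.0x
```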

Infrared Image Sharpness Enhancement Method Using Super-resolution Based on Adaptive Dynamic Range Coding and Fusion with Visible Image (적외선 영상 선명도 개선을 위한 ADRC 기반 초고해상도 기법 및 가시광 영상과의 융합 기법)

  • Kim, Yong Jun;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.11
    • /
    • pp.73-81
    • /
    • 2016
  • In general, infrared (IR) images have less sharpness and fewer image details than visible images, so prior image up-scaling methods are not effective on them. To solve this problem, this paper proposes an algorithm that first up-scales an input IR image using an adaptive dynamic range coding (ADRC) based super-resolution (SR) method and then fuses the result with the corresponding visible image. The proposed algorithm consists of an up-scaling phase and a fusion phase. First, the input IR image is up-scaled by the proposed ADRC-based SR algorithm; in the dictionary-learning stage of this phase, a so-called 'pre-emphasis' process is applied to the training high-resolution images, which yields better sharpness. In the following fusion phase, high-frequency information is extracted from the visible image corresponding to the IR image and adaptively weighted according to the complexity of the IR image. Finally, the output is obtained by adding the processed high-frequency information to the up-scaled IR image. Experimental results show that the proposed algorithm outperforms the state-of-the-art SR method, anchored neighborhood regression (A+); for example, in terms of just noticeable blur (JNB), the proposed algorithm scores 0.2184 higher than A+. It also outperforms previous works in subjective visual quality.
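
The following NumPy sketch shows a generic 1-bit ADRC encoding of an image patch, the kind of classification step on which ADRC-based super-resolution dictionaries are typically built. The thresholding rule and patch values are assumptions for illustration, not the paper's implementation.

```python
# 1-bit adaptive dynamic range coding (ADRC) of a patch: each pixel is coded as
# 0 or 1 depending on whether it lies below or above the midpoint of the patch's
# dynamic range, and the bit pattern is packed into a class index for dictionary lookup.
import numpy as np

def adrc_code(patch):
    """Return the 1-bit ADRC bit pattern of a patch as an integer class index."""
    flat = patch.astype(float).ravel()
    threshold = (flat.min() + flat.max()) / 2.0       # midpoint of the dynamic range
    bits = (flat >= threshold).astype(int)            # 1 bit per pixel
    return int("".join(map(str, bits)), 2)            # pack bits into a class index

patch = np.array([[ 10,  12, 200],
                  [ 11, 180, 210],
                  [  9,  13, 190]], dtype=np.uint8)

cls = adrc_code(patch)
print(f"ADRC class index for this 3x3 patch: {cls} ({cls:09b})")
# Patches with the same ADRC class share the same coarse local structure and can
# share a learned high-resolution dictionary entry.
```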

Modeling and Digital Predistortion Design of RF Power Amplifier Using Extended Memory Polynomial (확장된 메모리 다항식 모델을 이용한 전력 증폭기 모델링 및 디지털 사전 왜곡기 설계)

  • Lee, Young-Sup;Ku, Hyun-Chul;Kim, Jeong-Hwi;Ryoo, Kyoo-Tae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.19 no.11
    • /
    • pp.1254-1264
    • /
    • 2008
  • This paper proposes an extended memory polynomial model that improves the accuracy of modeling the memory effects of RF power amplifiers (PAs) and verifies the effectiveness of the proposed method. The extended model adds cross terms, products of input terms with different delays, to the basic memory polynomial model, which includes only the diagonal terms of the Volterra kernels and therefore has limited accuracy. The complexities of the memoryless model, the memory polynomial model, and the proposed model are compared. The extended memory polynomial model is expressed as a matrix equation, and the Volterra kernels are extracted by the least-squares method. In addition, a digital predistorter structure and a digital signal processing (DSP) algorithm based on the proposed model and indirect learning are presented to implement digital predistortion linearization. To verify the model, its predicted output is compared with the measured output of a 10 W GaN HEMT RF PA and a 30 W LDMOS RF PA driven by a 2.3 GHz WiBro input signal, and the adjacent channel power ratio (ACPR) achieved with the proposed digital predistortion is measured. The proposed model increases modeling accuracy for the PAs and improves linearization performance by reducing ACPR.
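
A minimal NumPy sketch of memory-polynomial coefficient extraction by least squares is given below. It builds only the diagonal (basic) regressors and fits them to a synthetic amplifier; the model orders, the synthetic PA, and the note on where cross terms would be appended are assumptions, not the measured setup of the paper.

```python
# Memory polynomial PA model: regressors x[n-q]*|x[n-q]|^(k-1), coefficients
# extracted by least squares. Orders and the synthetic PA are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, K, Q = 2000, 3, 2                 # samples, nonlinearity order, memory depth

x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def basis_matrix(x, K, Q):
    """Columns are memory-polynomial regressors x[n-q]*|x[n-q]|^(k-1)."""
    cols = []
    for q in range(Q + 1):
        xq = np.roll(x, q)
        for k in range(1, K + 1):
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

# Synthetic "measured" PA output: a mildly nonlinear system with memory
y = x + 0.05 * x * np.abs(x) ** 2 + 0.02 * np.roll(x, 1)

Phi = basis_matrix(x, K, Q)
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # diagonal Volterra-kernel estimate

nmse = 10 * np.log10(np.sum(np.abs(y - Phi @ coeffs) ** 2) / np.sum(np.abs(y) ** 2))
print(f"model NMSE: {nmse:.1f} dB")
# The extended model of the paper would append extra columns such as
# x[n-q1]*|x[n-q2]|^(k-1) with q1 != q2 before solving the same least-squares problem.
```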

A Study on the Management Performance of Self-Employed Business Consulting for Small Business Founders Using the BSC (BSC를 이용한 소상공인 창업자를 위한 자영업컨설팅 경영성과에 관한 연구)

  • An, Seong Hui;Jo, Yoon Ah;Jo, In Seog
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.10 no.3
    • /
    • pp.39-49
    • /
    • 2015
  • This study reviewed the existing literature on the topic and, based on that review, set up hypotheses to address the research problems. According to the literature, self-employed business consulting comprises four areas: awareness, reliability, satisfaction, and utilization, while, following the BSC, performance can be viewed from four perspectives: customer, financial, internal process, and learning and growth. An empirical study was conducted to verify the causal relationships between these factors, and the findings on management performance under self-employed business consulting are as follows. This study examines overall management-performance measurement by adopting the four variables of self-employed business consulting and improves the chance of success through systematic preparation for business establishment. In conclusion, to increase the success rate of small business start-ups, it is important to choose items that fit the founder's experience and the characteristics of the business zone, and a successful founding is achieved only when sufficient funding is combined with sound management. Most importantly, balancing these factors should start with the founder at the center, supported by professional business knowledge and technical assistance from start-up support agencies.


Band Selection Using L2,1-norm Regression for Hyperspectral Target Detection (초분광 표적 탐지를 위한 L2,1-norm Regression 기반 밴드 선택 기법)

  • Kim, Joochang;Yang, Yukyung;Kim, Jun-Hyung;Kim, Junmo
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_1
    • /
    • pp.455-467
    • /
    • 2017
  • When performing target detection with hyperspectral imagery, a feature-extraction step is needed to deal with the redundancy of adjacent spectral bands and the heavy computation caused by high-dimensional data. This study proposes a new band selection method based on an $L_{2,1}$-norm regression model, bringing a feature-selection technique from machine learning to hyperspectral band selection. To analyze the performance of the proposed method, we collected hyperspectral imagery and used it to evaluate target-detection performance with band selection. Adaptive Cosine Estimator (ACE) detection performance is maintained or improved when the number of bands is reduced from 164 to about 30 to 40 in the 350 nm to 2500 nm wavelength range. The experimental results show that the proposed method selects bands that are effective for detection in hyperspectral images and can reduce the data size without degrading performance, which can help improve the processing speed of real-time target detection systems in the future.
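
The sketch below shows one standard way to solve the $L_{2,1}$-norm regularized regression used for band selection, via iteratively reweighted least squares, and to rank bands by the row norms of the coefficient matrix. The matrix sizes, regularization weight, and random data are assumptions; the paper's actual optimization details may differ.

```python
# Band selection via L2,1-norm regression: min_W ||X W - Y||_F^2 + lam * ||W||_{2,1},
# solved by iteratively reweighted least squares; bands ranked by row norms of W.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands, n_targets, lam = 500, 164, 1, 0.5

X = rng.standard_normal((n_pixels, n_bands))     # spectra (pixels x bands)
Y = rng.standard_normal((n_pixels, n_targets))   # regression target (placeholder)

D = np.eye(n_bands)                              # reweighting matrix, initialized to identity
for _ in range(30):
    # Closed-form update: (X^T X + lam * D) W = X^T Y
    W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
    row_norms = np.sqrt((W ** 2).sum(axis=1)) + 1e-8
    D = np.diag(1.0 / (2.0 * row_norms))

scores = np.sqrt((W ** 2).sum(axis=1))           # row L2 norms = band importance
selected = np.argsort(scores)[::-1][:40]         # keep the ~40 highest-scoring bands
print(sorted(selected.tolist())[:10])
```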

A Study of Statistical Learning as a CRM's Classifier Functions (CRM의 기능 분류를 위한 통계적 학습에 관한 연구)

  • Jang, Geun;Lee, Jung-Bae;Lee, Byung-Soo
    • The KIPS Transactions:PartB
    • /
    • v.11B no.1
    • /
    • pp.71-76
    • /
    • 2004
  • Recent ERP and CRM systems have focused mostly on conventional functions. However, the rapid progress of the internet and e-commerce has changed the market: business is becoming e-business and is expanding through closer relationships with partner companies, faster relationships with customers, and stronger competitiveness gained from improved business processes within the organization. CRM (customer relationship management) is a marketing process that forms, manages, and strengthens the relationship between customers and the company in order to retain acquired customers and increase their value to the company. It requires a system that analyzes customer information, since it operates on diverse customer data and is linked to business activities such as production, marketing, and decision making. As ERP extends its functions to SCM, CRM, and SEM (Strategic Enterprise Management), the ERP of the 21st century is developing into a strategic tool for e-business, and as a step toward this, the functions of CRM can be effectively subdivided through analysis of data. In addition, an agent based on machine learning automates the file-classification work that users previously had to perform manually, which is a feature of the system that allows this work to be done more efficiently.

Traffic Flooding Attack Detection on SNMP MIB Using SVM (SVM을 이용한 SNMP MIB에서의 트래픽 폭주 공격 탐지)

  • Yu, Jae-Hak;Park, Jun-Sang;Lee, Han-Sung;Kim, Myung-Sup;Park, Dai-Hee
    • The KIPS Transactions:PartC
    • /
    • v.15C no.5
    • /
    • pp.351-358
    • /
    • 2008
  • Recently, as network flooding attacks such as DoS/DDoS and Internet worms have posed devastating threats to network services, rapid detection and proper response mechanisms have become major concerns for secure and reliable network services. However, most current Intrusion Detection Systems (IDSs) focus on detailed analysis of packet data, which results in late detection and a high system burden in high-speed network environments. In this paper we propose a lightweight and fast detection mechanism for traffic flooding attacks. First, we use SNMP MIB statistical data gathered from SNMP agents instead of raw packet data from network links. Second, we use a machine learning approach based on a Support Vector Machine (SVM) for attack classification. Using MIB and SVM, we achieve fast detection with high accuracy, a minimal system burden, and extensibility for system deployment. The proposed mechanism has a hierarchical structure: it first distinguishes attack traffic from normal traffic and then determines the type of attack in detail. Using MIB data sets collected from real experiments involving a DDoS attack, we validate the feasibility of our approach; network attacks are detected with high efficiency and classified with few false alarms.
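
The following scikit-learn sketch illustrates the hierarchical two-stage SVM idea: a first SVM separates attack traffic from normal traffic, and a second classifies the attack type. The random feature vectors standing in for SNMP MIB statistics, the label sets, and the kernel settings are assumptions, not the authors' data or configuration.

```python
# Two-stage (hierarchical) SVM classification: stage 1 = attack vs. normal,
# stage 2 = attack type. Features are random placeholders for SNMP MIB statistics.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder "MIB" feature vectors (e.g. rates derived from interface/IP counters)
X = rng.standard_normal((300, 5))
is_attack = rng.integers(0, 2, size=300)              # stage-1 labels: 0 = normal, 1 = attack
attack_type = rng.integers(0, 3, size=300)            # stage-2 labels: e.g. SYN / UDP / ICMP flood

stage1 = SVC(kernel="rbf", C=1.0).fit(X, is_attack)
stage2 = SVC(kernel="rbf", C=1.0).fit(X[is_attack == 1], attack_type[is_attack == 1])

def classify(sample):
    sample = sample.reshape(1, -1)
    if stage1.predict(sample)[0] == 0:
        return "normal"
    return f"attack type {stage2.predict(sample)[0]}"

print(classify(X[0]))
```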

Application of Advertisement Filtering Model and Method for its Performance Improvement (광고 글 필터링 모델 적용 및 성능 향상 방안)

  • Park, Raegeun;Yun, Hyeok-Jin;Shin, Ui-Cheol;Ahn, Young-Jin;Jeong, Seungdo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.11
    • /
    • pp.1-8
    • /
    • 2020
  • In recent years, the exponential increase in internet data has driven progress in fields such as deep learning, but side effects such as commercial advertising disguised as information, for example viral marketing, have also appeared. This not only undermines the internet's purpose of sharing high-quality information but also increases the time users spend searching for it. In this study, we define an advertisement as "a text that obscures the essence of information transmission" and propose a model for filtering information according to that definition. The proposed model consists of an advertisement-filtering stage and a performance-improvement stage and is designed to improve continuously. We collected data for advertisement filtering and trained a document classifier using KorBERT. Experiments were conducted to verify the model's performance: on data spanning five topics, accuracy and precision were 89.2% and 84.3%, respectively. High performance was confirmed even considering the atypical characteristics of advertisement text. This approach is expected to reduce the time and fatigue wasted in searching for information, because the model effectively delivers high-quality information to users by detecting and filtering advertisement paragraphs.
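
The sketch below shows, with the Hugging Face Transformers API, how a BERT-style encoder can be fine-tuned as a binary advertisement classifier. KorBERT is distributed by ETRI and is not loaded here; the `klue/bert-base` checkpoint, the example paragraphs, and the training hyperparameters are stand-in assumptions, not the paper's setup.

```python
# Fine-tuning a BERT-style encoder for advertisement / non-advertisement classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "klue/bert-base"                      # assumption: a Korean BERT stand-in for KorBERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Invented placeholder paragraphs: a sponsored-ad text and an informative text
texts = ["제품 리뷰를 가장한 협찬 광고 문단입니다.", "모델 구조와 실험 결과를 설명하는 문단입니다."]
labels = torch.tensor([1, 0])                      # 1 = advertisement, 0 = informative

inputs = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                                 # a few toy gradient steps
    outputs = model(**inputs, labels=labels)       # cross-entropy loss computed internally
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1)
print(pred.tolist())                               # predicted class per paragraph
```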