• Title/Abstract/Keyword: Deep Learning AI

633 search results

Prediction of Barge Ship Roll Response Amplitude Operator Using Machine Learning Techniques

  • Lim, Jae Hwan;Jo, Hyo Jae
    • 한국해양공학회지
    • /
    • Vol. 34, No. 3
    • /
    • pp.167-179
    • /
    • 2020
  • Recently, the growing importance of artificial intelligence (AI) technology has led to its increased use in various fields of the shipbuilding and marine industries, with typical applications including production management, analysis of ships on a voyage, and motion prediction. This study was therefore conducted to predict a response amplitude operator (RAO) using AI technology, specifically a neural network. The data used by the neural network consisted of vessel properties and RAO values generated by simulations with an in-house code. The learning model consisted of an input layer, a hidden layer, and an output layer. The input layer comprised eight neurons, the number of neurons in the hidden layer was treated as a variable, and the output layer comprised 20 neurons. The RAO predicted by the neural network was compared with the RAO produced by the in-house code, and the accuracy was assessed and reviewed based on the root mean square error (RMSE), standard deviation (SD), random-number variation, correlation coefficient, and scatter plots. Finally, the optimal model was selected and conclusions were drawn. The ultimate goals of this study were to reduce the difficulty of the modeling work required to obtain the RAO, to reduce the difficulty of using commercial tools, and to enable an assessment of the stability of small and medium-sized vessels in waves.
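
As a rough illustration of the network described in the abstract, the following PyTorch sketch uses the stated 8-neuron input and 20-neuron output layers; the hidden width, activation, and the dummy data are assumptions, since the paper treats the hidden-layer size as a tunable variable.

```python
import torch
import torch.nn as nn

class RAOPredictor(nn.Module):
    """MLP mapping 8 vessel properties to 20 RAO values (per the abstract);
    the hidden width is the variable the study sweeps over."""
    def __init__(self, hidden: int = 64):  # hidden size is a hypothetical default
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, hidden),   # input layer: 8 vessel properties
            nn.ReLU(),
            nn.Linear(hidden, 20),  # output layer: 20 RAO points
        )

    def forward(self, x):
        return self.net(x)

model = RAOPredictor(hidden=64)
criterion = nn.MSELoss()   # RMSE = sqrt(MSE), one of the paper's accuracy metrics
x = torch.randn(16, 8)     # dummy batch of vessel-property vectors
pred = model(x)
rmse = torch.sqrt(criterion(pred, torch.randn(16, 20)))
```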

Dropout Genetic Algorithm Analysis for Deep Learning Generalization Error Minimization

  • Park, Jae-Gyun;Choi, Eun-Soo;Kang, Min-Soo;Jung, Yong-Gyu
    • International Journal of Advanced Culture Technology
    • /
    • Vol. 5, No. 2
    • /
    • pp.74-81
    • /
    • 2017
  • Recently, many companies use systems based on artificial intelligence. The accuracy of artificial intelligence depends on the amount of training data and an appropriate algorithm. However, it is not easy to obtain training data with a large number of entities, and small datasets suffer from large generalization errors due to overfitting. To minimize this generalization error, this study proposes DGA (Dropout Genetic Algorithm), which applies a machine-learning-based genetic algorithm to deep-learning-based dropout and can achieve relatively high accuracy even on small datasets. The key idea of this paper is to determine the active state of the nodes: a new fitness function is defined using the gradient of the loss function. The proposed DGA compensates for the stochastic inconsistency of dropout, and it also addresses the genetic algorithm's problems of fitness-function complexity and the limited expressive range of the model. In experiments using MNIST data, the proposed algorithm achieved an accuracy of 75.3%, compared with 41.4% when using dropout alone, showing that DGA outperforms dropout by itself.
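
The abstract does not give the exact fitness definition, so the following numpy sketch is only a minimal illustration of the general scheme: a genetic algorithm evolving binary masks over node active states, with a stand-in fitness (negative loss) where the paper derives one from the loss gradient. All sizes and operators are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, loss_fn):
    """Hypothetical stand-in: the paper defines fitness from the loss
    gradient; here we simply use the negative loss under the mask."""
    return -loss_fn(mask)

def evolve_masks(n_nodes, loss_fn, pop=20, gens=50, p_mut=0.05):
    """GA over binary masks encoding each node's active (1) / dropped (0) state."""
    population = rng.integers(0, 2, size=(pop, n_nodes))
    for _ in range(gens):
        scores = np.array([fitness(m, loss_fn) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep the better half
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_nodes)          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_nodes) < p_mut      # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        population = np.vstack([parents, children])
    return population[np.argmax([fitness(m, loss_fn) for m in population])]

# Toy usage: evolve masks on 32 nodes toward keeping ~16 of them active.
best = evolve_masks(32, lambda m: (m.sum() - 16) ** 2)
```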

A Study on the Construction Method of HS Item Classification Decision System Based on Artificial Intelligence

  • Choi, Keong Ju
    • International Journal of Advanced Culture Technology
    • /
    • Vol. 8, No. 1
    • /
    • pp.165-172
    • /
    • 2020
  • The Industrial Revolution refers to the improvement of productivity through technological innovation, and it has been a driving force behind wholesale changes in economic systems and social structures as the character of technology, the instrument of that productivity, has changed. Since the first industrial revolution in the 18th century, productive efficiency has advanced through three industrial revolutions, and the fourth industrial revolution is expected to bring about yet another revolution in production. With the rapid development of ICT, demand for the introduction of artificial intelligence (AI) technology has been increasing in various business fields. This study applies AI technology, the core of the fourth industrial revolution, to the classification of HS (Harmonized Commodity Description and Coding System) items, and shows that an HS classification decision system can be constructed on AI technology using inference and deep learning. Performing HS item classification is not an easy task; an item-classification system built on AI technology can analyze the HS classification information currently handled manually more accurately and without mistakes. Such a system is expected to be highly useful to customs administrations, customs offices, and customs agencies in innovating trade practice and customs administration, including FTA origin work.
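
The abstract does not describe the model in detail; as a purely hypothetical baseline of what an HS item classifier could look like, the sketch below maps goods descriptions to HS headings with TF-IDF features and a linear classifier. The data, labels, and model choice are all illustrative assumptions, not the paper's system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: goods descriptions mapped to HS headings.
descriptions = [
    "frozen boneless beef cuts",
    "laptop computer with 16GB RAM",
    "cotton t-shirt, knitted",
    "lithium-ion battery pack",
]
hs_codes = ["0202", "8471", "6109", "8507"]

# Simple baseline: TF-IDF features + linear classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(descriptions, hs_codes)
print(clf.predict(["knitted cotton shirt"]))  # likely '6109' given the shared terms
```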

인공지능 프로세서 기술 동향 (AI Processor Technology Trends)

  • 권영수
    • 전자통신동향분석
    • /
    • Vol. 33, No. 5
    • /
    • pp.121-134
    • /
    • 2018
  • The Von Neumann based architecture of the modern computer has dominated the computing industry for the past 50 years, sparking the digital revolution and propelling us into today's information age. Recent research focus and market trends have shown significant effort toward the advancement and application of artificial intelligence technologies. Although artificial intelligence has been studied for decades since the Turing machine was first introduced, the field has recently emerged into the spotlight thanks to remarkable milestones such as AlexNet-CNN and AlphaGo, whose neural-network-based deep learning methods have achieved ground-breaking performance superior to existing recognition, classification, and decision algorithms. Unprecedented results in a wide variety of applications (drones, autonomous driving, robots, stock markets, computer vision, voice, and so on) have signaled the beginning of a golden age for artificial intelligence after 40 years of relative dormancy. Algorithmic research continues to progress at a breathtaking pace, as evidenced by the rate at which new neural networks are being announced. However, traditional Von Neumann architectures have proven inadequate in terms of computational power and inherently inefficient at the massively parallel computations characteristic of deep neural networks. Consequently, global conglomerates such as Intel, Huawei, and Google, as well as large domestic corporations and fabless companies, are developing dedicated semiconductor chips customized for artificial intelligence computations. The AI Processor Research Laboratory at ETRI is focusing on the research and development of super-low-power AI processor chips. In this article, we present the current trends in computation platforms, parallel processing, AI processors, and the super-threaded AI processor research being conducted at ETRI.

Autonomous Vehicles as Safety and Security Agents in Real-Life Environments

  • Al-Absi, Ahmed Abdulhakim
    • International journal of advanced smart convergence
    • /
    • Vol. 11, No. 2
    • /
    • pp.7-12
    • /
    • 2022
  • Safety and security are the topmost priorities in every environment. With the aid of Artificial Intelligence (AI), many objects are becoming more intelligent and more conscious of and curious about their surroundings. Recent scientific breakthroughs in autonomous vehicle design and development, powered by AI, networks of sensors, and the rapid growth of the Internet of Things (IoT), could be utilized to maintain safety and security in our environments. AI based on deep learning architectures and models, such as Deep Neural Networks (DNNs), is being applied worldwide in automotive fields like computer vision, natural language processing, sensor fusion, object recognition, and autonomous driving, and these techniques are well known for their identification, detection, and tracking abilities. With sensors, cameras, GPS, RADAR, LIDAR, and on-board computers embedded in many of the autonomous vehicles being developed, these vehicles can accurately map their positions and their proximity to everything around them. In this paper, we explore in detail several ways in which the capabilities embedded in these autonomous vehicles, such as sensor fusion, computer vision and image processing, natural language processing, and activity awareness, could be tapped and utilized to safeguard our lives and environment.

The Influence of Creator Information on Preference for Artificial Intelligence- and Human-generated Artworks

  • Nam, Seungmin;Song, Jiwon;Kim, Chai-Youn
    • 감성과학
    • /
    • Vol. 25, No. 3
    • /
    • pp.107-116
    • /
    • 2022
  • Purpose: Researchers have shown that aesthetic judgments of artworks depend on context, such as the authenticity of an artwork (Newman & Bloom, 2011) and the artwork's location of display (Kirk et al., 2009; Silveira et al., 2015). The present study examines whether contextual information about the creator, namely whether an artwork was created by a human or by artificial intelligence (AI), influences viewers' preference judgments of an artwork. Methods: Images of Impressionist landscape paintings were selected as human-made artworks. AI-made artwork stimuli were created using Google's Deep Dream Generator, which mimics the Impressionist style via deep learning algorithms. Participants rated their preference for each of the 108 artwork stimuli, each accompanied by one of the two creator labels. After this task, an art experience questionnaire (AEQ) was administered to examine whether individual differences in art experience influence preference judgments. Results: In a two-way ANCOVA with AEQ scores as a covariate, stimuli labeled as human-made were preferred over stimuli labeled as AI-made. Regarding stimulus type, viewers preferred the AI-made stimuli to the human-made stimuli. There was no interaction between the two factors. Conclusion: These results suggest that preferences for visual artworks are influenced by contextual information about the creator when individual differences in art experience are controlled.
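
A sketch of the reported analysis using Python's statsmodels: a two-way ANCOVA with creator label and stimulus type as factors and AEQ score as the covariate. The column names and toy data are assumptions; only the model structure follows the abstract.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one preference rating per trial.
df = pd.DataFrame({
    "preference": [5.1, 4.2, 6.0, 3.8, 5.5, 4.9],
    "label":      ["human", "AI", "human", "AI", "human", "AI"],  # creator label shown
    "stimulus":   ["human", "human", "AI", "AI", "AI", "human"],  # actual stimulus type
    "aeq":        [30, 42, 35, 28, 50, 33],                       # art-experience score
})

# Two-way ANCOVA: both main effects, their interaction, and AEQ as covariate.
model = ols("preference ~ C(label) * C(stimulus) + aeq", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```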

부가 정보를 활용한 비전 트랜스포머 기반의 추천시스템 (A Vision Transformer Based Recommender System Using Side Information)

  • 권유진;최민석;조윤호
    • 지능정보연구
    • /
    • Vol. 28, No. 3
    • /
    • pp.119-137
    • /
    • 2022
  • In recent recommender-system research, various deep learning models have been applied to better represent the interactions between users and items. ONCF (Outer product-based Neural Collaborative Filtering) is a representative deep-learning-based recommender system that takes the outer product of the user and item vectors and passes the resulting two-dimensional interaction map through a convolutional neural network to better capture user-item interactions. However, because ONCF relies on a convolutional neural network, it carries an inductive bias that degrades prediction performance on data whose distribution does not appear in the training data. This study first proposes a method that introduces the Transformer-based ViT (Vision Transformer) into the NCF structure. ViT applies the Transformer, previously used mainly in NLP, to image classification with good results; its inductive bias is weaker than that of convolutional networks, making it robust to previously unseen distributions. Second, whereas ONCF uses a single latent vector per user and item, this study uses multiple latent vectors to form channels, so that the model learns richer representations and gains an ensemble effect. Finally, unlike ONCF, we present an architecture that can reflect side information in the recommendation. Unlike previous studies that feed side information into the network through simple input concatenation, this study introduces an independent auxiliary classifier so that side information is reflected in the recommender system more efficiently. In conclusion, this paper proposes a new deep learning model that applies ViT, channelizes the embedding vectors, and introduces a side-information classifier; experiments show higher performance than ONCF.
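
A loose PyTorch sketch of the architecture outlined above: several latent vectors per user and item form channels, their outer products yield stacked 2D interaction maps that are fed to a Transformer encoder as tokens, and an auxiliary head handles side information. All dimensions, the tokenization, and the side-information task are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ViTRecommender(nn.Module):
    """Illustrative sketch: multi-channel outer-product interaction maps
    fed to a Transformer encoder, with an auxiliary side-information head."""
    def __init__(self, n_users, n_items, d=16, channels=4, n_side_classes=10):
        super().__init__()
        # Multiple latent vectors per user/item form the channels.
        self.user_emb = nn.Embedding(n_users, d * channels)
        self.item_emb = nn.Embedding(n_items, d * channels)
        self.d, self.c = d, channels
        self.proj = nn.Linear(d, 64)  # each row of a map becomes a "patch" token
        enc = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.score_head = nn.Linear(64, 1)             # preference score
        self.aux_head = nn.Linear(64, n_side_classes)  # auxiliary side-info classifier

    def forward(self, u, i):
        b = u.shape[0]
        pu = self.user_emb(u).view(b, self.c, self.d)
        qi = self.item_emb(i).view(b, self.c, self.d)
        maps = torch.einsum("bcd,bce->bcde", pu, qi)   # c outer-product maps, d x d
        tokens = self.proj(maps.reshape(b, self.c * self.d, self.d))
        h = self.encoder(tokens).mean(dim=1)           # pooled representation
        return self.score_head(h), self.aux_head(h)

model = ViTRecommender(n_users=100, n_items=200)
score, side_logits = model(torch.tensor([1, 2]), torch.tensor([3, 4]))
```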

태양 에너지 수집형 IoT 엣지 컴퓨팅 환경에서 효율적인 오디오 딥러닝을 위한 에너지 적응형 데이터 전처리 기법 (Energy-Aware Data-Preprocessing Scheme for Efficient Audio Deep Learning in Solar-Powered IoT Edge Computing Environments)

  • 유연태;노동건
    • 대한임베디드공학회논문지
    • /
    • Vol. 18, No. 4
    • /
    • pp.159-164
    • /
    • 2023
  • Solar-energy-harvesting IoT devices prioritize maximizing the utilization of the collected energy, rather than minimizing energy consumption, because solar energy recharges periodically. Meanwhile, research on edge AI, which performs machine learning near the data source instead of in the cloud, is being actively conducted for reasons such as data confidentiality and privacy, response time, and cost. One such research area performs various audio AI applications using audio data collected from multiple IoT devices in an IoT edge computing environment. In most studies, however, the IoT devices only sense and transmit data to the edge server, and all processing, including data preprocessing, is performed on the server. This not only overloads the edge server but also causes network congestion by transmitting data that is unnecessary for learning. On the other hand, if data preprocessing is delegated to each IoT device to address this issue, another problem arises: blackout time increases due to energy shortages on the devices. In this paper, we aim to alleviate the increased device blackout time while mitigating the problems of server-centric edge AI by determining where the data is preprocessed based on the energy state of each IoT device. In the proposed scheme, an IoT device performs the preprocessing, which includes sound discrimination and noise removal, and transmits the result to the server only when more energy is available than the threshold required for the device's basic operation.
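
A minimal sketch of the device-side decision described above, assuming a hypothetical energy threshold and toy preprocessing stubs; the paper's actual sound-discrimination and noise-removal methods are not specified in the abstract.

```python
import random

ENERGY_THRESHOLD = 0.3  # hypothetical energy fraction needed for basic operation

def contains_sound_of_interest(audio):
    return sum(abs(x) for x in audio) / len(audio) > 0.1  # toy energy-based detector

def remove_noise(audio):
    return [x if abs(x) > 0.05 else 0.0 for x in audio]   # toy noise gate

def handle_audio(energy_level, audio, send):
    """Device-side decision: preprocess locally only when energy permits."""
    if energy_level > ENERGY_THRESHOLD:
        # Sound discrimination + noise removal on the device.
        if not contains_sound_of_interest(audio):
            return                      # discard: no useful training data
        send(remove_noise(audio))       # smaller payload, less network congestion
    else:
        # Low energy: send raw audio and let the edge server preprocess,
        # so the device avoids extra work that would lengthen blackouts.
        send(audio)

audio = [random.uniform(-1, 1) for _ in range(1000)]  # dummy audio frame
handle_audio(energy_level=0.5, audio=audio, send=lambda a: print(len(a), "samples sent"))
```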

인간의 습관적 특성을 고려한 악성 도메인 탐지 모델 구축 사례: LSTM 기반 Deep Learning 모델 중심 (Case Study of Building a Malicious Domain Detection Model Considering Human Habitual Characteristics: Focusing on LSTM-based Deep Learning Model)

  • 정주원
    • 융합보안논문지
    • /
    • Vol. 23, No. 5
    • /
    • pp.65-72
    • /
    • 2023
  • This paper presents a malicious-domain detection method that takes habitual human behavior into account by building a deep learning model based on LSTM (Long Short-Term Memory). DGA (Domain Generation Algorithm) malicious domains exploit habitual human mistakes and pose serious security threats. The goal is to respond quickly to the mutation and concealment techniques of malicious domains created through typosquatting, and to detect them accurately so as to minimize security threats. The LSTM-based deep learning model analyzes and learns the characteristics of each malware family and automatically classifies generated domains as malicious or benign. Evaluated on the ROC curve and AUC, the model showed an excellent detection accuracy of over 99.21%. The model can be used not only to detect malicious domains in real time but also in various other areas of cybersecurity. This paper proposes and explores a new approach for protecting users and creating a cyber environment safe from cyberattacks.
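
A minimal sketch of a character-level LSTM domain classifier of the kind described above, in PyTorch; the vocabulary, layer sizes, and example domains are assumptions, and the model is untrained here.

```python
import torch
import torch.nn as nn

class DomainLSTM(nn.Module):
    """Character-level LSTM scoring a domain as malicious or benign."""
    def __init__(self, vocab_size=40, embed=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed, padding_idx=0)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, max_len) char indices
        _, (h, _) = self.lstm(self.embed(x))
        return torch.sigmoid(self.fc(h[-1])).squeeze(-1)  # P(malicious)

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-._"
def encode(domain, max_len=64):
    idx = [CHARS.find(c) + 1 for c in domain.lower()][:max_len]  # 0 = padding
    return torch.tensor(idx + [0] * (max_len - len(idx)))

model = DomainLSTM()
batch = torch.stack([encode("google.com"), encode("xj3k9qzt.biz")])
print(model(batch))  # untrained scores; train with BCE loss on labeled domains
```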

딥러닝의 파일 입출력을 위한 버퍼캐시 성능 개선 연구 (A Study on Improvement of Buffer Cache Performance for File I/O in Deep Learning)

  • 이정하;반효경
    • 한국인터넷방송통신학회논문지
    • /
    • Vol. 24, No. 2
    • /
    • pp.93-98
    • /
    • 2024
  • With the rapid advance of artificial intelligence and high-performance computing, deep learning technology is being used in a wide range of fields. During training, deep learning reads large volumes of data in random order and repeats this process over many epochs. This file I/O pattern, in which a large number of files are referenced randomly and repeatedly, differs from that of typical applications, which exhibit temporal locality. To overcome the resulting difficulty in caching, this study proposes a new data-reading scheme that reduces the randomness of deep learning dataset reads and works adaptively with existing buffer-cache algorithms. Experiments in this paper show that the proposed scheme reduces the buffer-cache miss rate by 16% on average and by up to 33% compared with the existing approach, and improves execution time by up to 24%.
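
The abstract does not detail the proposed reading scheme; one plausible way to reduce read randomness while retaining shuffling, sketched below purely as an assumption, is blockwise partial shuffling: randomize the order of contiguous blocks of files and the order within each block, so that consecutive reads stay clustered for the buffer cache.

```python
import random

def block_shuffled_order(file_ids, block_size=256, seed=None):
    """Blockwise partial shuffle: randomize block order and the order
    within each block, keeping reads clustered for cache friendliness."""
    rng = random.Random(seed)
    blocks = [file_ids[i:i + block_size] for i in range(0, len(file_ids), block_size)]
    rng.shuffle(blocks)            # coarse-grained randomness across blocks
    for b in blocks:
        rng.shuffle(b)             # fine-grained randomness within a block
    return [f for b in blocks for f in b]

# One training epoch's read order over 10,000 files, in 512-file blocks.
epoch_order = block_shuffled_order(list(range(10_000)), block_size=512, seed=42)
```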