• Title/Abstract/Keyword: learning speed (학습속도)

Search results: 1,099

Analysis on Lightweight Methods of On-Device AI Vision Model for Intelligent Edge Computing Devices (지능형 엣지 컴퓨팅 기기를 위한 온디바이스 AI 비전 모델의 경량화 방식 분석)

  • Hye-Hyeon Ju;Namhi Kang
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.1-8 / 2024
  • On-device AI technology, which runs AI models on edge devices to support real-time processing and enhance privacy, is attracting attention. As intelligent IoT is applied to various industries, services that utilize on-device AI technology are increasing significantly. However, general deep learning models require substantial computational resources for inference and training. Therefore, various lightweighting methods such as quantization and pruning have been proposed to run deep learning models on embedded edge devices. Among these methods, this paper analyzes how to lighten deep learning models and apply them to edge computing devices, focusing on pruning. In particular, we use dynamic and static pruning techniques to evaluate the inference speed, accuracy, and memory usage of a lightweight AI vision model. The analysis in this paper can be applied to intelligent video surveillance systems and video security systems for autonomous vehicles, where real-time processing is highly required, and is expected to be useful across various other IoT services and industries.
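
As a concrete illustration of the static (one-shot) pruning approach discussed above, the sketch below applies L1 magnitude pruning to a small vision model with PyTorch's torch.nn.utils.prune and roughly times inference; the model choice, the 50% sparsity level, and the timing loop are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch: one-shot (static) L1 magnitude pruning of a small vision model
# with torch.nn.utils.prune. The model, sparsity level, and timing loop are
# illustrative assumptions, not the configuration evaluated in the paper.
import time
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import mobilenet_v2

model = mobilenet_v2(num_classes=10).eval()

# Prune 50% of the weights (by L1 magnitude) in every Conv2d/Linear layer.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# Rough CPU inference-speed check on a dummy input.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    start = time.perf_counter()
    for _ in range(20):
        model(x)
    print(f"mean latency: {(time.perf_counter() - start) / 20 * 1e3:.1f} ms")
```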

Model Interpretation through LIME and SHAP Model Sharing (LIME과 SHAP 모델 공유에 의한 모델 해석)

  • Yong-Gil Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.2 / pp.177-184 / 2024
  • As data grows at a rapid pace, all kinds of complex ensemble and deep learning algorithms are used to achieve the highest accuracy, yet it is often unclear how these models predict, classify, recognize, and track unknown data. Understanding this has been, and will remain, a goal of intensive research and development in the data science community. Various factors, such as a lack of data, imbalanced data, or biased data, can affect the decisions rendered by learning models, and many interpretation methods are gaining traction. LIME and SHAP, two state-of-the-art open-source explainability techniques, are now commonly used, but their outputs can differ. In this context, this study introduces a technique that couples LIME and SHAP, and demonstrates how it can be used to analyze the decisions made by LightGBM and Keras models when classifying transactions as fraudulent on the IEEE-CIS dataset.
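
The sketch below illustrates how LIME and SHAP can both be applied to the same LightGBM prediction so their attributions can be compared; the synthetic data stands in for the IEEE-CIS features, and the side-by-side printout is only a stand-in for the paper's coupling technique.

```python
# Minimal sketch: explaining one prediction of a LightGBM classifier with both
# LIME and SHAP and printing the two attributions side by side. The synthetic
# data and feature names are placeholders for the IEEE-CIS fraud features.
import numpy as np
import lightgbm as lgb
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]
model = lgb.LGBMClassifier(n_estimators=200).fit(X, y)

row = X[0]

# SHAP: exact tree attributions (positive class if a list is returned).
shap_values = shap.TreeExplainer(model).shap_values(row.reshape(1, -1))
shap_attr = shap_values[1][0] if isinstance(shap_values, list) else shap_values[0]

# LIME: local surrogate model fitted around the same instance.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="classification")
lime_attr = lime_explainer.explain_instance(row, model.predict_proba, num_features=10).as_list()

print("SHAP:", dict(zip(feature_names, np.round(shap_attr, 3))))
print("LIME:", lime_attr)
```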

Development for Analysis Service of Crowd Density in CCTV Video using YOLOv4 (YOLOv4를 이용한 CCTV 영상 내 군중 밀집도 분석 서비스 개발)

  • Seung-Yeon Hwang;Jeong-Joon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.3 / pp.177-182 / 2024
  • Motivated by the Itaewon crowd crush in Korea on October 29, 2022, the purpose of this paper is to predict and prevent the risk of crowd concentration in advance of possible future crowd accidents. With a single CCTV, an administrator can assess the current situation in real time, but the screen cannot be monitored around the clock. Therefore, objects are detected using YOLOv4 trained on images captured from CCTV angles, and safety accidents due to crowd concentration are prevented by issuing a notification when the number of detected people exceeds a threshold. YOLOv4 was chosen because it offers higher accuracy and faster speed than previous YOLO models, making object detection easier. The service is tested with CCTV image data registered on the AI-Hub site. The number of CCTVs in Korea has increased exponentially, and if the service is applied to actual CCTVs, it is expected that various accidents, including future accidents caused by crowd concentration, can be prevented.
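
A minimal sketch of the alerting idea follows: a YOLOv4 model loaded through OpenCV's DNN module counts people per frame and prints a notification when the count exceeds a threshold. The weight/config paths, the test clip name, and the threshold are assumptions, not values from the paper.

```python
# Minimal sketch: counting people per CCTV frame with a YOLOv4 model loaded via
# OpenCV's DNN module and raising an alert when the count exceeds a threshold.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")  # assumed paths
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

CROWD_THRESHOLD = 30  # illustrative limit for "crowd concentration"
PERSON_CLASS_ID = 0   # "person" in the COCO class list

cap = cv2.VideoCapture("cctv_sample.mp4")  # assumed test clip (e.g., AI-Hub data)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.4, nmsThreshold=0.4)
    person_count = sum(1 for cid in class_ids if int(cid) == PERSON_CLASS_ID)
    if person_count > CROWD_THRESHOLD:
        print(f"ALERT: {person_count} people detected in frame")  # notification hook
cap.release()
```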

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin;Hwang, Ji Won;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.1-22 / 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment based on the aspects that appear in a text, is drawing attention because it can be applied across many business domains. ABSA analyzes sentiment for each of the multiple aspects a text contains, and it is studied in various forms depending on the purpose, such as analyzing all targets or only aspects and sentiments. Here, an aspect refers to a property of a target, and a target refers to the expression in the text that causes the sentiment. For restaurant reviews, for example, the aspects could be food taste, food price, quality of service, the mood of the restaurant, and so on. If a review says, "The pasta was delicious, but the salad was not," the words "pasta" and "salad," which are mentioned directly in the sentence, are the targets. So far, most ABSA studies have analyzed sentiment based only on aspects or only on targets. However, even with the same aspects or targets, sentiment analysis can be inaccurate, for instance when aspects or sentiments are divided or when a sentiment exists without a target. Consider the sentence "Pizza and the salad were good, but the steak was disappointing": although its aspect is limited to "food," conflicting sentiments coexist. Likewise, in a sentence such as "Shrimp was delicious, but the price was extravagant," the target is "shrimp," yet opposite sentiments coexist depending on the aspect. Finally, in a sentence like "The food arrived too late and is cold now," there is no target (NULL), but it conveys a negative sentiment toward the aspect "service." Failing to consider both aspects and targets in such cases (when sentiment or aspect is divided, or when sentiment exists without a target) creates a dual dependency problem. To address this problem, this research analyzes sentiment by considering both aspects and targets (Target-Aspect-Sentiment Detection, hereafter TASD). This study identified two limitations of existing TASD research: local contexts are not fully captured, and the F1-score drops sharply when the number of epochs and the batch size are small. The existing model excels at capturing the overall context and the relations between words, but it struggles with phrases in the local context and is relatively slow to train. Therefore, this study tries to improve the model's performance. To this end, we add an auxiliary loss for aspect-sentiment classification by constructing CNN (Convolutional Neural Network) layers in parallel with the existing model. Whereas existing models analyze aspect-sentiment through BERT encoding, a pooler, and linear layers, this research adds CNN layers with adaptive average pooling, and training proceeds by adding an additional aspect-sentiment loss to the existing loss. In other words, during training the auxiliary loss computed through the CNN layers allows the local context to be captured more closely, and after training the model performs aspect-sentiment analysis through the existing path. To evaluate the model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared to the existing models. The difference between the F1-scores of the existing models and this study was largest when the batch size was 8 and the number of epochs was 5, at 29 and 45, respectively. Even when the batch size and number of epochs were adjusted, the F1-scores remained higher than those of the existing models. In other words, the model can be trained effectively even with small batch sizes and few epochs, so it can be useful in situations where resources are limited. Through this study, aspect-based sentiment can be analyzed more accurately, and both consumers and sellers will be able to make efficient decisions through various business uses such as product development or marketing strategy. In addition, since the model uses a pre-trained model and recorded a relatively high F1-score even with limited resources, it is believed that it can be fully trained and utilized by small businesses that do not have much data.
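
A minimal sketch of the described architecture, assuming PyTorch and Hugging Face Transformers: a BERT encoder with the usual pooler-plus-linear head, a parallel Conv1d branch with adaptive average pooling, and a total loss that adds the CNN branch's auxiliary aspect-sentiment loss. The class count, kernel size, and loss weight are illustrative, not the paper's hyperparameters.

```python
# Minimal sketch: BERT-based aspect-sentiment head plus a parallel CNN branch
# (Conv1d + adaptive average pooling) whose loss is added as an auxiliary term.
import torch
import torch.nn as nn
from transformers import BertModel

class TASDWithCNNAuxLoss(nn.Module):
    def __init__(self, num_aspect_sentiment_classes: int, aux_weight: float = 0.3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        # Main head: BERT pooler output -> linear classifier (as in existing models).
        self.main_head = nn.Linear(hidden, num_aspect_sentiment_classes)
        # Auxiliary branch: CNN over token embeddings to capture local context.
        self.cnn = nn.Conv1d(hidden, 256, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.aux_head = nn.Linear(256, num_aspect_sentiment_classes)
        self.aux_weight = aux_weight
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, input_ids, attention_mask, labels=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        main_logits = self.main_head(out.pooler_output)
        # (batch, seq, hidden) -> (batch, hidden, seq) for Conv1d.
        feats = self.cnn(out.last_hidden_state.transpose(1, 2))
        aux_logits = self.aux_head(self.pool(feats).squeeze(-1))
        if labels is None:
            return main_logits  # inference uses only the existing (main) path
        loss = self.criterion(main_logits, labels) + \
               self.aux_weight * self.criterion(aux_logits, labels)
        return loss, main_logits
```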

Utilizing Visual Information for Non-contact Predicting Method of Friction Coefficient (마찰계수의 비접촉 추정을 위한 영상정보 활용방법)

  • Kim, Doo-Gyu;Kim, Ja-Young;Lee, Ji-Hong;Choi, Dong-Geol;Kweon, In-So
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.4 / pp.28-34 / 2010
  • In this paper, we propose an algorithm that uses visual information to predict the friction coefficient without contact. The friction coefficient is very important for driving on roads and traversing obstacles. Our algorithm is based on terrain classification from visual images. As a non-contact approach, the proposed method has an advantage over methods that extract the material characteristics of the road with sensors that must touch the road surface. The method consists of a learning stage (experiments and grouping of materials) and a friction-coefficient prediction stage (a Bayesian classification prediction function), each of which includes a preceding vision-processing step. Because the algorithm can predict the friction coefficient before the vehicle enters a given terrain, it is very useful for avoiding slippery areas. We conducted experiments measuring the friction coefficient of different terrains and used the measured values as the ground truth for the prediction method. To evaluate the performance of the algorithm, we report the error between the real and predicted friction coefficients.
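
A minimal sketch of the two-stage idea, assuming a Gaussian naive Bayes classifier as the Bayesian prediction function and simple HSV color statistics as image features; the terrain classes and friction values in the lookup table are placeholders, not the paper's measurements.

```python
# Minimal sketch: classify the terrain in a road image patch with a Gaussian
# naive Bayes classifier, then map the predicted class to a friction coefficient
# measured offline. Classes, features, and coefficients are assumptions.
import numpy as np
import cv2
from sklearn.naive_bayes import GaussianNB

TERRAIN_CLASSES = ["asphalt", "gravel", "grass", "snow"]                      # assumed classes
FRICTION_TABLE = {"asphalt": 0.8, "gravel": 0.6, "grass": 0.4, "snow": 0.2}   # assumed values

def patch_features(bgr_patch: np.ndarray) -> np.ndarray:
    """Simple color/texture features: mean and std of each HSV channel."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV).astype(np.float32)
    return np.concatenate([hsv.mean(axis=(0, 1)), hsv.std(axis=(0, 1))])

def train(patches, labels):
    """labels: integer indices into TERRAIN_CLASSES for each training patch."""
    X = np.stack([patch_features(p) for p in patches])
    return GaussianNB().fit(X, labels)

def predict_friction(classifier, patch):
    terrain = TERRAIN_CLASSES[int(classifier.predict(patch_features(patch)[None])[0])]
    return terrain, FRICTION_TABLE[terrain]
```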

The Relationships among Students' Mapping Understanding, Mapping Errors and Cognitive/Affective Variables in Learning with Analogy (비유를 사용한 수업에서 학생들의 인지적.정의적 특성과 대응 이해 및 대응 오류 유형과의 관계)

  • Kim, Kyung-Sun;Hwang, Sun-Young;Noh, Tae-Hee
    • Journal of the Korean Chemical Society / v.54 no.1 / pp.150-157 / 2010
  • In this study, we investigated how mapping understanding and the types of mapping errors differ by the levels of students' cognitive/affective variables, and the relationships between mapping understanding and these variables, in learning 'concentration and reaction rate' with analogy. After administering pretests on logical thinking ability, visual imagery ability, analogical reasoning ability, self-efficacy, and need for cognition, students learned with analogy. Then, students' familiarity and mapping understanding were examined. Analyses of the results revealed that the mapping understanding scores of students with higher levels of all cognitive/affective variables except visual imagery ability and familiarity were significantly higher than those of students with lower levels. Differences in the types of mapping errors, such as overmapping, failure to map, impossible mapping, artificial mapping, mismapping, rash mapping, and retention of a base feature, were also found by the levels of students' cognitive and affective variables. Students' mapping understanding scores were positively correlated with the scores of all cognitive and affective variables. Multiple regression analysis indicated that students' science achievement, logical thinking ability, and familiarity were significant predictors of mapping understanding. Educational implications of these findings are discussed.

Estimating Gastrointestinal Transition Location Using CNN-based Gastrointestinal Landmark Classifier (CNN 기반 위장관 랜드마크 분류기를 이용한 위장관 교차점 추정)

  • Jang, Hyeon Woong;Lim, Chang Nam;Park, Ye-Suel;Lee, Gwang Jae;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering / v.9 no.3 / pp.101-108 / 2020
  • Since the performance of deep learning techniques has recently been proven in the field of image processing, there are many attempts to classify, analyze, and detect images with such techniques in various fields. Among them, expectations for medical image analysis software that can serve as a diagnostic assistant are increasing. In this study, we focus on capsule endoscopy images, which form a large data set and take a long time to read. The purpose of this paper is to distinguish the gastrointestinal landmarks common to all patients and to estimate the gastrointestinal transition locations, which account for much of the reading time in capsule endoscopy. To do this, we designed a CNN-based classifier that can identify gastrointestinal landmarks and estimated the gastrointestinal transition locations by filtering its results. In the experiment, the estimated transition locations for seven of eight patients fell within the suspected gastrointestinal transition area, and for the transition from the stomach to the small intestine (pylorus) and the transition from the small intestine to the large intestine (ileocecal valve), the estimates for all eight patients were confirmed to be within the suspected transition area. The suspected gastrointestinal transition area could be narrowed to a range of about 100 frames, so if the reader plays the images at 10 frames per second, the gastrointestinal transition can be found within 10 seconds.
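
A minimal sketch of the filtering step, assuming per-frame class predictions are already available from the CNN landmark classifier: a sliding majority vote smooths the labels, and the first frame where the smoothed label switches (e.g., stomach to small intestine) is reported as the estimated transition. The label ids and window size are assumptions.

```python
# Minimal sketch: smooth per-frame CNN landmark predictions with a sliding
# majority vote and report the first frame where the smoothed label changes.
import numpy as np

STOMACH, SMALL_INTESTINE = 1, 2  # assumed class ids from the landmark classifier

def smooth_predictions(frame_labels: np.ndarray, window: int = 51) -> np.ndarray:
    """Sliding-window majority vote over the per-frame class predictions."""
    half = window // 2
    padded = np.pad(frame_labels, half, mode="edge")
    return np.array([np.bincount(padded[i:i + window]).argmax()
                     for i in range(len(frame_labels))])

def estimate_transition(frame_labels: np.ndarray, before=STOMACH, after=SMALL_INTESTINE):
    """Return the first frame index where the smoothed label changes before -> after."""
    smoothed = smooth_predictions(frame_labels)
    for i in range(1, len(smoothed)):
        if smoothed[i - 1] == before and smoothed[i] == after:
            return i
    return None  # transition not found in this sequence

# Example: frame_labels = cnn_classifier.predict(frames).argmax(axis=1)
```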

Real-Time Traffic Information and Road Sign Recognitions of Circumstance on Expressway for Vehicles in C-ITS Environments (C-ITS 환경에서 차량의 고속도로 주행 시 주변 환경 인지를 위한 실시간 교통정보 및 안내 표지판 인식)

  • Im, Changjae;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.55-69 / 2017
  • Recently, the IoT (Internet of Things) environment, in which intelligent objects are linked through networks, has been developing rapidly. Through the IoT, humans can communicate with objects and objects with each other, and the IoT provides artificial-intelligence services combined with situational awareness. One industry built on the IoT is the automotive industry. Self-driving vehicles, which are not only fuel-efficient and smooth in traffic but also put top priority on human safety, have become a major topic. For several years, research on recognizing the surrounding environment for self-driving vehicles using sensors, lidar, cameras, and radar has progressed actively, and it is now being accelerated by vehicle-to-vehicle and vehicle-to-infrastructure networking based on WAVE (Wireless Access in Vehicular Environments). In this paper, as part of recognizing the surrounding environment for self-driving vehicles, we study the recognition of traffic signs on expressways. Exploiting the fact that traffic signs have standardized formats and installation locations, we present a learning approach and the corresponding experimental results for how a vehicle can recognize traffic signs and the additional information on them.
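
The abstract does not detail the recognition pipeline, so the sketch below shows only a common baseline for the first step: extracting candidate regions of green expressway guide signs by HSV color thresholding before handing them to a trained classifier. The HSV range and minimum area are assumptions, not values from the paper.

```python
# Minimal sketch: extract candidate regions of green expressway guide signs by
# HSV color thresholding; each candidate box would then be cropped and passed
# to a trained recognizer. Thresholds and the size filter are assumptions.
import cv2
import numpy as np

def guide_sign_candidates(bgr_frame: np.ndarray, min_area: int = 2000):
    """Return bounding boxes of green regions that may be expressway guide signs."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array((40, 80, 60)), np.array((90, 255, 255)))  # assumed green range
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```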

A Benchmark of Open Source Data Mining Package for Thermal Environment Modeling in Smart Farm(R, OpenCV, OpenNN and Orange) (스마트팜 열환경 모델링을 위한 Open source 기반 Data mining 기법 분석)

  • Lee, Jun-Yeob;Oh, Jong-wo;Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2017.04a / pp.168-168 / 2017
  • Despite the growing number of environmental sensors, imaging systems, and feeding-management systems in ICT-converged smart farms, techniques for making effective use of the data collected by this equipment are still insufficient. For pig houses, data analysis and modeling technology is needed that can monitor and predict animal welfare levels and growth changes in real time. This requires techniques for detecting physiological and behavioral changes in livestock early and for monitoring, analyzing, and predicting welfare levels in real time, and one representative information-and-communication engineering approach is data mining. Among the many software tools available for data mining research, we compared and analyzed four tools provided as open source. For data analysis aimed at thermal environment modeling in a smart pig house, we focused on the time required to derive data analysis algorithms, visualization capabilities, and the ability to interface with other libraries. The four selected tools are 1) R (https://cran.r-project.org), 2) OpenCV (http://opencv.org), 3) OpenNN (http://www.opennn.net), and 4) Orange (http://orange.biolab.si). The comparison was performed on Linux-Ubuntu 16.04.4 LTS (x64) with a 3.6 GHz CPU and 64 GB of memory. In terms of development languages, the tools support 1) R scripts, 2) C/C++, Python, and Java, 3) C++, and 4) C/C++, Python, and Cython, respectively, so C/C++ and Python are relatively advantageous. For data analysis algorithms, libraries provided at the source-code level allow cross-platform development, so results developed on one operating system can be used on another without a separate porting step. Among the built-in libraries, R provides the largest number of data mining algorithms, which appears to be because the R environment itself is open and newly added libraries are shared online through the cloud. OpenCV is strong in image processing, and OpenNN's strength is that its neural-network learning libraries are released at the source-code level. Orange, unlike the other packages that focus on providing library collections, integrates visualization functions and network construction into a unified user interface. We plan to study additional information-processing techniques to handle the time complexity required for thermal environment modeling, so that smart farm thermal environment modeling can be implemented in real time.


A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung;Kang, Byung-Jun;Park, Kang-Ryoung
    • The KIPS Transactions:PartB / v.16B no.4 / pp.299-308 / 2009
  • AAM (Active Appearance Model) is an algorithm that extracts facial feature points using statistical models of shape and texture information based on PCA (Principal Component Analysis). It is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM algorithm is sensitive to the initial values, and its detection error increases when an input image differs considerably from the training data. In particular, the algorithm is accurate for closed lips, but the detection error increases for lips that are open or deformed by the user's facial expression. To solve these problems, we propose an improved AAM algorithm that uses lip feature points extracted by a new lip detection algorithm. In this paper, a search region is selected based on the facial feature points detected by the AAM algorithm, and the lip corner points are extracted using Canny edge detection and histogram projection within the selected region. The lip region is then accurately detected by combining the color and edge information of the lips in a search region adjusted according to the positions of the detected lip corners. As a result, both the accuracy and the processing speed of lip detection are improved. Experimental results show that the RMS (Root Mean Square) error of the proposed method was reduced by as much as 4.21 pixels compared to using the AAM algorithm alone.
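
A minimal sketch of the corner-detection step described above, assuming the grayscale search region selected from the AAM face points is already available: Canny edges are projected column-wise, and the leftmost and rightmost edge columns are taken as the lip corners. The Canny thresholds and the projection cutoff are assumptions.

```python
# Minimal sketch: locate lip corners in a grayscale search region by Canny edge
# detection followed by a column-wise (horizontal) projection of the edge map.
import cv2
import numpy as np

def lip_corners(gray_search_region: np.ndarray, min_edges_per_column: int = 2):
    """Return (left_x, right_x) column indices of the lip corners, or None."""
    edges = cv2.Canny(gray_search_region, 50, 150)        # assumed thresholds
    column_hist = (edges > 0).sum(axis=0)                 # projection onto the x-axis
    columns = np.where(column_hist >= min_edges_per_column)[0]
    if columns.size == 0:
        return None
    return int(columns[0]), int(columns[-1])              # leftmost / rightmost edge columns
```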