• Title/Abstract/Keywords: computer models

Search results: 3,894 items (processing time: 0.036 s)

외국환 거래의 자금세탁 혐의도 점수모형 개발에 관한 연구 (Scoring models to detect foreign exchange money laundering)

  • 홍성익;문태희;손소영
    • 산업공학 / Vol. 18, No. 3 / pp. 268-276 / 2005
  • In recent years, money laundering crimes carried out through foreign exchange transactions have been increasing. Our study proposes four scoring models to provide early warning of laundering in foreign exchange transactions for both inward and outward remittances: a logistic regression model, a decision tree, a neural network, and an ensemble model that combines the three. In terms of accuracy on test data, the decision tree model is selected for inward remittances and the ensemble model for outward remittances. Our results show that the accumulated number of transactions is the most important predictor variable. The proposed scoring models work at the transaction level and are expected to help bank tellers detect laundering-related transactions at an early stage.
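The following is a minimal sketch of the four scoring models named in the abstract (logistic regression, decision tree, neural network, and a combining ensemble), using scikit-learn on synthetic stand-in data; the paper's actual transaction features and model settings are not reproduced here.

```python
# Sketch: three base scoring models plus a soft-voting ensemble,
# trained on synthetic stand-in data for the remittance transactions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder features (e.g., amount, accumulated number of transactions);
# laundering cases are assumed to be rare, hence the class imbalance.
X, y = make_classification(n_samples=2000, n_features=8,
                           weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
}
models["ensemble"] = VotingClassifier(list(models.items()), voting="soft")

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```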

A hybrid approach for character modeling using geometric primitives and shape-from-shading algorithm

  • Kazmin, Ismail Khalid;You, Lihua;Zhang, Jian Jun
    • Journal of Computational Design and Engineering / Vol. 3, No. 2 / pp. 121-131 / 2016
  • Organic modeling of 3D characters is a challenging task when it comes to correctly modeling the anatomy of the human body. Most sketch-based tools available today for modeling organic models (humans, animals, creatures, etc.) focus on creating base meshes only and provide little or no support for adding detail to the base mesh. We propose a hybrid approach which combines geometric primitives such as generalized cylinders and cubes with Shape-from-Shading (SFS) algorithms to create plausible human character models from sketches. The results show that an artist can quickly create detailed character models from sketches using this hybrid approach.
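As a rough illustration of the "generalized cylinder" primitive mentioned above, the sketch below sweeps circular cross-sections of varying radius along a straight spine to build a limb-like tube mesh; the spine and radius values are made up for illustration and are not the authors' implementation.

```python
# Build a simple generalized cylinder: circular cross-sections of varying
# radius swept along the z-axis (a limb-like modeling primitive).
import numpy as np

def generalized_cylinder(spine_z, radii, segments=16):
    """Return (vertices, quad faces) for a tube along the z-axis."""
    angles = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    rings = []
    for z, r in zip(spine_z, radii):
        ring = np.stack([r * np.cos(angles), r * np.sin(angles),
                         np.full(segments, z)], axis=1)
        rings.append(ring)
    vertices = np.concatenate(rings, axis=0)

    faces = []
    for i in range(len(spine_z) - 1):          # connect consecutive rings
        for j in range(segments):
            a = i * segments + j
            b = i * segments + (j + 1) % segments
            faces.append([a, b, b + segments, a + segments])
    return vertices, np.array(faces)

# Example: a tapering "arm" primitive with five cross-sections.
verts, faces = generalized_cylinder(spine_z=[0.0, 0.5, 1.0, 1.5, 2.0],
                                    radii=[0.30, 0.28, 0.22, 0.20, 0.15])
print(verts.shape, faces.shape)  # (80, 3) (64, 4)
```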

CNN 모델의 최적 양자화를 위한 웹 서비스 플랫폼 (Web Service Platform for Optimal Quantization of CNN Models)

  • 노재원;임채민;조상영
    • 반도체디스플레이기술학회지 / Vol. 20, No. 4 / pp. 151-156 / 2021
  • Low-end IoT devices do not have enough computation and memory resources for DNN training and inference. Integer quantization of real-valued (floating-point) neural network models can reduce model size, hardware computational burden, and power consumption. This paper describes the design and implementation of a web-based quantization platform for CNN deep learning accelerator chips. In the web service platform, we implemented visualization of the model through a convenient UI, analysis of each inference step, and detailed editing of the model. In addition, a data augmentation function and a function for managing the files that store models and intermediate inference results are provided. The implemented functions were verified using three YOLO models.
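As a generic illustration of the integer quantization the abstract refers to (not the platform's own algorithm), the sketch below applies simple symmetric int8 quantization to a floating-point weight tensor and measures the round-trip error.

```python
# Generic symmetric int8 quantization of a floating-point weight tensor:
# map real values to integers in [-127, 127] with a single per-tensor scale.
import numpy as np

def quantize_int8(weights):
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 3, 3, 3).astype(np.float32)  # e.g., a conv kernel
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("max abs reconstruction error:", np.max(np.abs(w - w_hat)))
print("storage per weight: 4 bytes (float32) -> 1 byte (int8)")
```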

객체 탐지 과업에서의 트랜스포머 기반 모델의 특장점 분석 연구 (A Survey on Vision Transformers for Object Detection Task)

  • 하정민;이현종;엄정민;이재구
    • 대한임베디드공학회논문지 / Vol. 17, No. 6 / pp. 319-327 / 2022
  • Transformers are widely known deep learning models that have achieved great success in natural language processing and have also shown good performance in computer vision. In this survey, we categorized transformer-based models for computer vision, particularly for the object detection task, and performed comprehensive comparative experiments to understand the characteristics of each model. Next, we evaluated the models, subdivided into standard transformers, transformers with key-point attention, and transformers that add coordinate-based attention, by comparing their object detection accuracy and real-time performance. For the performance comparison, we used two metrics: frames per second (FPS) and mean average precision (mAP). Finally, through various experiments, we confirmed the trends and relationships between detection accuracy and real-time performance across several transformer models.
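A minimal sketch of how the FPS metric mentioned above can be measured for any detection model; the placeholder torchvision detector below stands in for the surveyed transformer detectors, which are not reproduced here.

```python
# Measure frames per second (FPS) of a detector's forward pass with a
# simple timing loop; the model here is only a placeholder.
import time
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None).eval()
frame = [torch.rand(3, 640, 640)]  # one dummy input frame

with torch.no_grad():
    for _ in range(3):                     # warm-up iterations
        model(frame)
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model(frame)
    elapsed = time.perf_counter() - start

print(f"FPS: {runs / elapsed:.2f}")
```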

Concepts and Design Aspects of Granular Models of Type-1 and Type-2

  • Pedrycz, Witold
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 15, No. 2 / pp. 87-95 / 2015
  • In this study, we pursue a new direction for system modeling by introducing the concept of granular models, which produce results in the form of information granules (such as intervals, fuzzy sets, and rough sets). We present a rationale and several key motivating arguments behind the use of granular models and discuss their underlying design processes. The development of the granular model includes optimal allocation of information granularity through optimizing the criteria of coverage and specificity. The emergence and construction of granular models of type-2 and type-n (in general) is discussed. It is shown that achieving a suitable coverage-specificity tradeoff (compromise) is essential for developing granular models.
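To make the coverage and specificity criteria mentioned above concrete, here is a small sketch for interval-valued information granules; the formulas are an illustrative formulation of the two criteria, not necessarily the exact definitions used in the paper.

```python
# Coverage and specificity of an interval granule [a, b] over numeric data:
# coverage = fraction of data inside the interval; specificity shrinks as
# the interval widens relative to the data range.
import numpy as np

def coverage(data, a, b):
    return float(np.mean((data >= a) & (data <= b)))

def specificity(a, b, data_range):
    return max(0.0, 1.0 - (b - a) / data_range)

data = np.random.normal(loc=5.0, scale=1.0, size=1000)
data_range = data.max() - data.min()

for half_width in (0.5, 1.0, 2.0):
    a, b = 5.0 - half_width, 5.0 + half_width
    print(f"[{a:.1f}, {b:.1f}]  coverage={coverage(data, a, b):.2f}  "
          f"specificity={specificity(a, b, data_range):.2f}")
# Widening the interval raises coverage but lowers specificity,
# illustrating the tradeoff the granular model design optimizes.
```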

Steel Surface Defect Detection using the RetinaNet Detection Model

  • Sharma, Mansi;Lim, Jong-Tae;Chae, Yi-Geun
    • International Journal of Internet, Broadcasting and Communication / Vol. 14, No. 2 / pp. 136-146 / 2022
  • Surface defects weaken the quality of steel materials. To limit these defects, we advocate the one-stage detector RetinaNet from among the diverse deep learning detection algorithms. The RetinaNet model supports several backbones; we considered two of them, ResNet50 and VGG19. To validate our model, we compared and analyzed it against several traditional models, one-stage models such as YOLO and SSD, and two-stage models such as Faster R-CNN, EDDN, and Xception, in simulations on the individual steel defect classes. We also compared the time factor between one-stage and two-stage models. Comparative analysis shows that the proposed model achieves excellent results on the Northeastern University surface defect detection dataset. In future work, we would like to try different backbones to check the model's efficiency in real-world settings, enlarge the dataset through augmentation, and focus on addressing the current limitations.
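A minimal sketch of loading a RetinaNet detector with a ResNet50-FPN backbone via torchvision and running it on a dummy image; the steel-defect classes, training procedure, and the VGG19-backbone variant from the paper are not reproduced here (torchvision does not ship a VGG19 RetinaNet out of the box).

```python
# Load RetinaNet with a ResNet50-FPN backbone (torchvision) and run inference
# on a dummy image; training on the steel-defect data itself is not shown.
import torch
import torchvision

# In practice, the classification head would be adapted to the number of
# steel defect classes in the dataset.
model = torchvision.models.detection.retinanet_resnet50_fpn(weights=None)
model.eval()

image = torch.rand(3, 224, 224)        # placeholder for a steel surface image
with torch.no_grad():
    predictions = model([image])[0]     # dict with "boxes", "labels", "scores"

print(predictions["boxes"].shape, predictions["scores"].shape)
```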

A Method and Tool for Identifying Domain Components Using Object Usage Information

  • Lee, Woo-Jin;Kwon, Oh-Cheon;Kim, Min-Jung;Shin, Gyu-Sang
    • ETRI Journal / Vol. 25, No. 2 / pp. 121-132 / 2003
  • To enhance the productivity of software development and accelerate time to market, software developers have recently paid more attention to a component-based development (CBD) approach due to the benefits of component reuse. Among CBD processes, the identification of reusable components is a key but difficult process. Currently, component identification depends mainly on the intuition and experience of domain experts. In addition, there are few systematic methods or tools for component identification that enable domain experts to identify reusable components. This paper presents a systematic method and its tool called a component identifier that identifies software components by using object-oriented domain information, namely, use case models, domain object models, and sequence diagrams. To illustrate our method, we use the component identifier to identify candidates of reusable components from the object-oriented domain models of a banking system. The component identifier enables domain experts to easily identify reusable components by assisting and automating identification processes in an earlier development phase.


조명 시뮬레이션을 위한 측광데이터의 생성과 적용 (A Study on the Generation and Application of Photometric Data for Lighting Simulation)

  • 홍승대
    • 한국디지털건축인테리어학회논문집 / Vol. 6, No. 2 / pp. 25-30 / 2006
  • The purpose of this study was to investigate how students perceived the strengths and weaknesses of presentation methods for the formation of interior spaces. For this study, the process of the interior architecture design class was divided into three stages: programming, design development, and design completion. In the design development stage, students used four presentation methods: hand sketches, scale models, computer modeling, and virtual reality. The strength of hand sketching was quick expression. Scale models provided a three-dimensional feel. Computer modeling provided realistic color and texture. Virtual reality provided three-dimensional immersion and real scale. It is effective for students to collect brainstormed images with quick hand sketches at the beginning of the design development stage. After that, they compose interior spaces in small-scale study models. Looking at the models, they design the details of the spaces using hand sketches and computer modeling. Using virtual reality, they can check scale and circulation. Finally, they complete the computer model with texture mapping and check the final design in virtual reality.


Comparison of Machine Learning Techniques for Cyberbullying Detection on YouTube Arabic Comments

  • Alsubait, Tahani;Alfageh, Danyah
    • International Journal of Computer Science & Network Security / Vol. 21, No. 1 / pp. 1-5 / 2021
  • Cyberbullying is a problem faced in many cultures. Due to their popularity and interactive nature, social media platforms have also been affected by cyberbullying. Social media users from Arab countries have likewise reported being targets of cyberbullying. Machine learning techniques have been a prominent approach used by scientists to detect and battle this phenomenon. In this paper, we compare different machine learning algorithms on their performance in cyberbullying detection based on a labeled dataset of Arabic YouTube comments. Three machine learning models are considered, namely: Multinomial Naïve Bayes (MNB), Complement Naïve Bayes (CNB), and Logistic Regression (LR). In addition, we experiment with two feature extraction methods, namely: Count Vectorizer and Tfidf Vectorizer. Our results show that, using Count Vectorizer feature extraction, the Logistic Regression model can outperform both the Multinomial and Complement Naïve Bayes models. However, when using Tfidf Vectorizer feature extraction, the Complement Naïve Bayes model can outperform the other two models.
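The comparison described above maps directly onto scikit-learn pipelines; the sketch below pairs the two feature extractors with the three classifiers on a tiny toy corpus that merely stands in for the labeled Arabic YouTube comments, which are not reproduced here.

```python
# Compare CountVectorizer/TfidfVectorizer with MNB, CNB, and Logistic
# Regression using scikit-learn pipelines on a toy stand-in corpus.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import ComplementNB, MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["you are awful", "great video, thanks", "nobody likes you", "nice work"]
labels = [1, 0, 1, 0]  # 1 = bullying, 0 = not bullying (toy labels)

vectorizers = {"count": CountVectorizer(), "tfidf": TfidfVectorizer()}
classifiers = {"MNB": MultinomialNB(), "CNB": ComplementNB(),
               "LR": LogisticRegression(max_iter=1000)}

for vec_name, vec in vectorizers.items():
    for clf_name, clf in classifiers.items():
        pipeline = make_pipeline(vec, clf)
        pipeline.fit(texts, labels)
        print(vec_name, clf_name, pipeline.predict(["you are the worst"]))
```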

Fine-tuning BERT Models for Keyphrase Extraction in Scientific Articles

  • Lim, Yeonsoo;Seo, Deokjin;Jung, Yuchul
    • 한국정보기술학회 영문논문지 / Vol. 10, No. 1 / pp. 45-56 / 2020
  • Despite extensive research, performance enhancement of keyphrase (KP) extraction remains a challenging problem in modern informatics. Recently, deep learning-based supervised approaches have exhibited state-of-the-art accuracies with respect to this problem, and several of the previously proposed methods utilize Bidirectional Encoder Representations from Transformers (BERT)-based language models. However, few studies have investigated the effective application of BERT-based fine-tuning techniques to the problem of KP extraction. In this paper, we consider the aforementioned problem in the context of scientific articles by investigating the fine-tuning characteristics of two distinct BERT models - BERT (i.e., base BERT model by Google) and SciBERT (i.e., a BERT model trained on scientific text). Three different datasets (WWW, KDD, and Inspec) comprising data obtained from the computer science domain are used to compare the results obtained by fine-tuning BERT and SciBERT in terms of KP extraction.
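For illustration, here is a minimal sketch of how fine-tuning a BERT model for keyphrase extraction might be set up when the task is framed as token classification with BIO tags; the model names, label scheme, and framing are assumptions for this sketch, not the paper's exact configuration.

```python
# Sketch: keyphrase extraction framed as BIO token classification with
# Hugging Face Transformers; swap the model name for SciBERT as noted.
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "bert-base-uncased"  # e.g., "allenai/scibert_scivocab_uncased" for SciBERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=3)     # labels: O, B-KP, I-KP

sentence = "We study keyphrase extraction from scientific articles."
inputs = tokenizer(sentence, return_tensors="pt")
logits = model(**inputs).logits   # shape: (1, num_tokens, 3)
predicted = logits.argmax(dim=-1) # per-token BIO label ids (untrained here)
print(predicted)

# Fine-tuning would minimize cross-entropy between these per-token logits
# and gold BIO labels over datasets such as WWW, KDD, and Inspec.
```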