• Title/Summary/Keyword: learning through the image

The Application of Music to Learning Regional Geography (지역지리 학습에 있어서 음악작품의 활용)

  • Hwang, Hong-Seop
    • Journal of the Korean Association of Regional Geographers / v.1 no.1 / pp.103-116 / 1995
  • The purpose of this paper is to briefly review trends in existing geographical research on music, to analyze music through the five themes of geography, and to explore classroom techniques that examine song lyrics for their geographic content. The results are summarized as follows. First, geographical research on music can be classified into five areas: the first concerns spatial diffusion in music, the second spatial diffusion in music, the third regional division in music, the fourth regional characteristics in music, and the fifth music as a pedagogical tool in the teaching of geography. Second, music holds numerous possibilities for regional geographical study: song lyrics are filled with geographical terms through which songwriters impart images of culture, the distinctly geographical nature of lyrics raises many geographical questions, and lyrics give places their special character. Analysis by the five themes of geography indicates that music is useful for learning regional geography. The application of music to learning regional geography therefore deserves attention, both because of the importance of learning the new regional geography and because of the need to adapt to globalization.

A Study on Vehicle License Plate Recognition System through Fake License Plate Generator in YOLOv5 (YOLOv5에서 가상 번호판 생성을 통한 차량 번호판 인식 시스템에 관한 연구)

  • Ha, Sang-Hyun;Jeong, Seok Chan;Jeon, Young-Joon;Jang, Mun-Seok
    • Journal of the Korean Society of Industry Convergence / v.24 no.6_2 / pp.699-706 / 2021
  • Existing license plate recognition systems rely on optical character recognition, but recent studies have proposed deep learning methods because OCR suffers from image-quality problems and misrecognition of Korean characters. Deep learning, however, requires a large amount of data: license plate images are difficult to collect because of the Personal Information Protection Act, and labeling the location of every individual plate also takes a lot of time. To solve this problem, this paper generates five types of license plates with a virtual Korean license plate generation program that follows the notice of the Ministry of Land, Infrastructure and Transport. The generated plates are then composited onto the license plate regions of collectable vehicle images to construct 10,147 training images. The training data assign license plates, Korean characters, and digits to individual classes and are trained with YOLOv5 (a sketch of the compositing and labeling step follows this entry). Because the proposed method recognizes characters and digits individually, plates can still be recognized even if the plate standard changes or the number of characters increases, as long as the font does not change. Experiments yielded an accuracy of 96.82%, and the method can be applied not only to the learned plates but also to new plate types such as newly issued and eco-friendly license plates.
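
The data construction step described above (pasting a generated plate onto the plate region of a real vehicle image and writing YOLO-format labels) can be illustrated with a minimal sketch. The plate generator output, file paths, plate coordinates, and class index below are hypothetical placeholders, not the authors' actual tooling.

```python
# Minimal sketch: composite a generated plate image onto a vehicle image and
# write a YOLO-format label file. Paths and the plate box are hypothetical.
from PIL import Image

def paste_plate(vehicle_path, plate_path, plate_box, out_image, out_label, class_id=0):
    vehicle = Image.open(vehicle_path).convert("RGB")
    plate = Image.open(plate_path).convert("RGB")

    x, y, w, h = plate_box                      # plate region in the vehicle image (pixels)
    plate = plate.resize((w, h))
    vehicle.paste(plate, (x, y))                # composite the generated plate
    vehicle.save(out_image)

    # YOLO label line: class cx cy w h, all normalized to the image size
    W, H = vehicle.size
    cx, cy = (x + w / 2) / W, (y + h / 2) / H
    with open(out_label, "w") as f:
        f.write(f"{class_id} {cx:.6f} {cy:.6f} {w / W:.6f} {h / H:.6f}\n")

paste_plate("car.jpg", "generated_plate.png", (420, 610, 160, 48),
            "train/images/car_0001.jpg", "train/labels/car_0001.txt")
```

In the paper, individual Korean characters and digits are additional classes, so one label line per character would be written in the same format before training YOLOv5 on the resulting dataset.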

Novel Algorithms for Early Cancer Diagnosis Using Transfer Learning with MobileNetV2 in Thermal Images

  • Swapna Davies;Jaison Jacob
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.3 / pp.570-590 / 2024
  • Breast cancer ranks among the most prevalent forms of malignancy and is a leading cause of cancer death worldwide. It is not preventable, so early and precise detection is the only way to lower mortality and improve the probability of survival. In contrast to conventional procedures, thermography aids in early diagnosis and can thereby save lives, but its accuracy is hurt by low sensitivity for small and deep tumours and by the subjectivity of physicians interpreting the images. Deep learning approaches can enhance the efficacy of such detection. This study explored the use of thermography for early identification of breast cancer with the publicly released DMR-IR dataset. We employed a pre-trained MobileNetV2 model and fine-tuned it through transfer learning, creating three models: a baseline transfer learning model with weights trained on ImageNet, a fine-tuned model with an adaptive learning rate, and a model fine-tuned with early stopping via callbacks (the general pattern is sketched after this entry). The proposed methods achieved average accuracy rates of 85.15%, 95.19%, and 98.69%, respectively, and performance indicators such as precision, sensitivity, and specificity were also investigated.
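
As a rough illustration of the three variants named above (frozen-backbone baseline, fine-tuning at a lower learning rate, and fine-tuning with early stopping), the Keras sketch below shows the general pattern. The input size, classifier head, learning rates, and patience are assumptions, not the values used in the paper.

```python
import tensorflow as tf

def build_model(num_classes=2, input_shape=(224, 224, 3)):
    base = tf.keras.applications.MobileNetV2(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False                       # variant 1: frozen ImageNet backbone
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(base.input, out), base

model, base = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # train_ds/val_ds are hypothetical

# Variant 2: unfreeze the backbone and fine-tune with a smaller learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Variant 3: same fine-tuning, but stop when the validation loss stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
```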

Reliable Image-Text Fusion CAPTCHA to Improve User-Friendliness and Efficiency (사용자 편의성과 효율성을 증진하기 위한 신뢰도 높은 이미지-텍스트 융합 CAPTCHA)

  • Moon, Kwang-Ho;Kim, Yoo-Sung
    • The KIPS Transactions: Part C / v.17C no.1 / pp.27-36 / 2010
  • In Web registration pages and online polling applications, CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is used to distinguish human users from automated programs. Text-based CAPTCHAs that present distorted text have been widely used on popular Web sites, but their reliability has declined because advanced optical character recognition techniques can now read the distorted text. Image-based CAPTCHAs have been proposed to improve reliability, but they also have drawbacks. First, systems with only a small number of image files in their image dictionary are not reliable, since an attacker can learn to recognize the images by repeatedly running machine learning programs. Second, users may feel inconvenienced because they must retry the CAPTCHA whenever they fail to enter the correct keyword. Third, some image-based CAPTCHAs incur high communication cost because several image files must be sent for a single CAPTCHA. To address these problems, this paper proposes a new CAPTCHA based on both image and text: an image and keywords are fused into one CAPTCHA image that gives the user a hint for the answer keyword (a rough sketch of this fusion follows this entry). The hint in the fused image helps users enter the answer keyword easily, and communication cost is reduced because only one fused image file is sent per CAPTCHA. To improve the reliability of the image-text fusion CAPTCHA, we also propose a method for dynamically building a large image dictionary by gathering huge numbers of images from the Internet, with a filtering phase to preserve the correctness of the CAPTCHA images. Experiments show that the proposed image-text fusion CAPTCHA provides users more convenience and higher reliability than image-based CAPTCHAs.
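
The core idea, fusing hint keywords into a single CAPTCHA image, can be sketched roughly as follows. The font, layout, colors, and file names are placeholders rather than the authors' actual scheme, and the real system would also distort the rendered text.

```python
# Minimal sketch: overlay candidate keywords on a dictionary picture so a human
# can pick the keyword that matches the image; only one fused file is sent.
import random
from PIL import Image, ImageDraw, ImageFont

def fuse_captcha(image_path, hint_words, out_path):
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    w, h = img.size
    for word in hint_words:                      # scatter the keywords over the image
        x, y = random.randint(0, max(1, w - 60)), random.randint(0, max(1, h - 20))
        draw.text((x, y), word, fill=(255, 255, 0), font=font)
    img.save(out_path)

fuse_captcha("dictionary/cat_0042.jpg", ["cat", "car", "cap", "can"], "captcha.png")
```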

Transfer learning in a deep convolutional neural network for implant fixture classification: A pilot study

  • Kim, Hak-Sun;Ha, Eun-Gyu;Kim, Young Hyun;Jeon, Kug Jin;Lee, Chena;Han, Sang-Sun
    • Imaging Science in Dentistry / v.52 no.2 / pp.219-224 / 2022
  • Purpose: This study aimed to evaluate the performance of transfer learning in a deep convolutional neural network for classifying implant fixtures. Materials and Methods: Periapical radiographs of implant fixtures obtained with the Superline (Dentium Co. Ltd., Seoul, Korea), TS III (Osstem Implant Co. Ltd., Seoul, Korea), and Bone Level Implant (Institut Straumann AG, Basel, Switzerland) systems were selected from patients who underwent dental implant treatment. All 355 implant fixtures made up the total dataset and were annotated with the name of the system. The dataset was split into a training set and a test set at a ratio of 8 to 2. YOLOv3 (You Only Look Once version 3, available at https://pjreddie.com/darknet/yolo/), a deep convolutional neural network pretrained on a large image dataset of objects, was trained to classify fixtures in periapical images through transfer learning. The network was trained on the training dataset for 100, 200, and 300 epochs, and its performance on the test dataset was evaluated in terms of sensitivity, specificity, and accuracy (computed per system, as sketched after this entry). Results: When YOLOv3 was trained for 200 epochs, the sensitivity, specificity, accuracy, and confidence score were the highest for all systems, with overall results of 94.4%, 97.9%, 96.7%, and 0.75, respectively. The network performed best in classifying Bone Level Implant fixtures, with 100.0% sensitivity, specificity, and accuracy. Conclusion: Through transfer learning, high performance could be achieved with YOLOv3 even with a small amount of data.
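
The per-system sensitivity, specificity, and accuracy reported above can be computed from a confusion matrix in the usual one-vs-rest way; a small sketch with made-up labels and predictions for three fixture classes is shown below.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical test-set labels/predictions for three implant systems (classes 0, 1, 2).
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 0, 1, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 0, 1, 2])
cm = confusion_matrix(y_true, y_pred)

for c in range(cm.shape[0]):                     # one-vs-rest metrics per implant system
    tp = cm[c, c]
    fn = cm[c].sum() - tp
    fp = cm[:, c].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / cm.sum()
    print(f"class {c}: sens={sensitivity:.3f} spec={specificity:.3f} acc={accuracy:.3f}")
```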

Automatic Classification of Bridge Component based on Deep Learning (딥러닝 기반 교량 구성요소 자동 분류)

  • Lee, Jae Hyuk;Park, Jeong Jun;Yoon, Hyungchul
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.2 / pp.239-245 / 2020
  • Recently, BIM (Building Information Modeling) has been widely adopted in the construction industry, but most structures built in the past have no BIM. For such structures, applying SfM (Structure from Motion) techniques to 2D images obtained from a camera makes it possible to generate 3D point cloud data and establish BIM. However, because the generated point cloud data contain no semantic information, it is necessary to classify manually which elements of the structure they represent. In this study, deep learning was therefore applied to automate the classification of structural components. The deep learning network used the Inception-ResNet-v2 CNN (Convolutional Neural Network) architecture, and the components of the bridge structure were learned through transfer learning. When components were classified using data collected to verify the developed system, the bridge components were classified with an accuracy of 96.13%.

Analysis of Building Object Detection Based on the YOLO Neural Network Using UAV Images (YOLO 신경망 기반의 UAV 영상을 이용한 건물 객체 탐지 분석)

  • Kim, June Seok;Hong, Il Young
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.381-392 / 2021
  • In this study, we perform deep learning-based object detection on eight types of buildings defined by the digital map topography standard code, using images taken with a UAV (Unmanned Aerial Vehicle). Labeling was carried out for 509 UAV images, and the YOLO (You Only Look Once) v5 model was applied for training and inference. For the experiments, the data were analyzed with an open source-based analysis platform and algorithm, and building objects were detected with prediction probabilities of 88% to 98%. In addition, we analyzed the training methods and model construction needed to achieve high accuracy in building object detection while constructing and repeatedly training on the dataset, and examined how to apply the trained model to other images (a rough inference sketch follows this entry). Through this study, we propose a model that fuses highly efficient deep neural networks with spatial information data; such fusion of spatial information data and deep learning technology is expected to improve the efficiency of spatial information data construction, analysis, and prediction in the future.
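
As a rough sketch of applying a trained YOLOv5 model to new UAV images: the weights file name, image name, and confidence threshold below are assumptions, and the call requires the ultralytics/yolov5 repository to be reachable via torch.hub.

```python
import torch

# Load custom YOLOv5 weights trained on the eight building classes (hypothetical path).
model = torch.hub.load("ultralytics/yolov5", "custom", path="building_yolov5.pt")
model.conf = 0.5                                 # confidence threshold for detections

results = model("uav_scene_001.jpg")             # run inference on one UAV image
detections = results.pandas().xyxy[0]            # boxes, confidences, class names as a DataFrame
print(detections[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```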

Development of Real-Time Objects Segmentation for Dual-Camera Synthesis in iOS (iOS 기반 실시간 객체 분리 및 듀얼 카메라 합성 개발)

  • Jang, Yoo-jin;Kim, Ji-yeong;Lee, Ju-hyun;Hwang, Jun
    • Journal of Internet Computing and Services / v.22 no.3 / pp.37-43 / 2021
  • In this paper, we study how objects captured by the front and back cameras can be recognized in real time in a mobile environment, so that object pixel regions are segmented and then composited through image processing. To this end, we applied the DeepLabV3 machine learning model to the dual cameras provided by Apple's iOS. We also propose methods that use Apple's Core Image and Core Graphics libraries for image synthesis and post-processing. Furthermore, we reduced CPU usage compared with previous work and compared the throughput and results of the Depth-based and DeepLabV3-based approaches (a rough Python analogue of the segmentation-and-compositing step follows this entry). Finally, we developed a camera application using these two methods.
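
The paper itself uses Apple's DeepLabV3 Core ML model with Core Image on iOS; as a rough desktop analogue only, the sketch below segments the person from a front-camera frame with torchvision's DeepLabV3 and composites it over a back-camera frame. The file names and the choice of torchvision are assumptions, not the authors' pipeline.

```python
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained DeepLabV3; in its label set, class index 15 is "person".
model = models.segmentation.deeplabv3_resnet101(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("front_camera.jpg").convert("RGB")
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]   # (num_classes, H, W) logits
mask = (out.argmax(0) == 15).numpy()                      # boolean person mask

# Composite: keep person pixels from the front image over the back-camera frame.
back = Image.open("back_camera.jpg").convert("RGB").resize(img.size)
result = np.where(mask[..., None], np.array(img), np.array(back))
Image.fromarray(result.astype("uint8")).save("composited.png")
```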

MSaGAN: Improved SaGAN using Guide Mask and Multitask Learning Approach for Facial Attribute Editing

  • Yang, Hyeon Seok;Han, Jeong Hoon;Moon, Young Shik
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.37-46 / 2020
  • Recently, studies of facial attribute editing have obtained realistic results using generative adversarial networks (GAN) and encoder-decoder structures. Spatial attention GAN (SaGAN), one of the most recent approaches, can change only the desired attribute in a face image through a spatial attention mechanism, but it sometimes produces unnatural results because of insufficient information about face areas. In this paper, we propose an improved SaGAN (MSaGAN) that uses a guide mask for training and applies a multitask learning approach to overcome the limitations of the existing method. Through extensive experiments, we evaluated the facial attribute editing results in terms of the mask loss function and the neural network structure (a schematic of such a guide-mask loss term follows this entry). The results show that the proposed method produces more natural results more efficiently than previous methods.
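
The role of the guide mask can be illustrated with a toy loss term that supervises the generator's spatial attention mask with a facial-region guide mask, alongside the usual GAN and attribute objectives. This is a schematic reconstruction under stated assumptions, not the paper's exact loss; the tensor shapes and weighting are placeholders.

```python
import torch
import torch.nn.functional as F

def guide_mask_loss(pred_mask, guide_mask):
    """Encourage the predicted spatial attention mask to match the facial-region
    guide mask (schematic term, not the paper's exact formulation)."""
    return F.binary_cross_entropy(pred_mask, guide_mask)

# Toy example: a 1x1x64x64 attention mask vs. a binary guide mask of the face area.
pred = torch.rand(1, 1, 64, 64)                  # stand-in for the generator's attention mask
guide = (torch.rand(1, 1, 64, 64) > 0.5).float() # stand-in for the guide mask
loss = guide_mask_loss(pred, guide)              # would be added, weighted, to the GAN losses
print(loss.item())
```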

Implementation of A Web-based Virtual Laboratory For Digital Logic Circuits Using Multimedia (멀티미디어를 이용한 웹기반 디지털 논리회로 가상실험실의 구현)

  • Kim Dong-Sik;Choi Kwan-Sun;Lee Sun-Heum
    • Journal of Engineering Education Research / v.5 no.1 / pp.27-33 / 2002
  • Recently, with the appearance of various virtual web sites based on multimedia technologies, Internet applications in engineering education have drawn much interest. However, unidirectional communication, simple text/image-based web pages, and a tedious learning process without motivation have lowered educational efficiency in cyberspace. This paper presents a virtual laboratory system that can make the learning process more efficient. The proposed virtual laboratory system for digital logic circuits provides an interactive learning environment that enhances the multimedia capabilities of the World Wide Web. The virtual laboratory is implemented to mirror the on-campus laboratory, so learners can obtain similar experimental data through it. The system is composed of four components: a principle classroom, a simulation classroom, a virtual experiment classroom, and a management system. Learning efficiency as well as faculty productivity are increased in this innovative teaching and learning environment.