• Title/Summary/Keyword: recognized images

Search results: 512

Six new Records of Exetastes (Ichneumonidae: Banchinae) from South Korea (한국산 어리뭉툭맵시벌속 (맵시벌과, 가시뭉툭맵시벌아과)의 6미기록종에 관한 보고)

  • Kang, Gyu-Won; Lee, Jong-Wook
    • Korean journal of applied entomology / v.61 no.2 / pp.387-397 / 2022
  • A taxonomic study was carried out to discover unrecorded species of South Korean Exetastes, of which six taxa were previously known. In the present study, another six taxa were newly recognized from the country: E. adpressorius, E. allopus, E. fukuchiyamanus, E. illyricus, E. sapponensis and E. tomentosus. As a result of this study, 11 species and one subspecies of Exetastes are now known from South Korea. In addition, a key to the South Korean species and the diagnoses and digital images of the six newly recorded species are provided.

On the Study of Rotation Invariant Object Recognition (회전불변 객체 인식에 관한 연구)

  • Alom, Md. Zahangir; Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2010.04a / pp.405-408 / 2010
  • This paper presents a new feature extraction technique based on the correlation coefficient and the Manhattan distance (MD) for recognizing rotated objects in images, together with a notion of intensity invariance. Global features of an image are extracted and a large image is converted into a one-dimensional vector called a circular feature vector (CFV). A particular advantage of the proposed technique is that the extracted features remain the same even when the original image is rotated by any angle from 1 to 360 degrees. The technique is based on fuzzy sets, and objects are finally recognized using histogram matching, the correlation coefficient, and the Manhattan distance. The approach is easy to implement and was implemented in MATLAB 7 on Windows XP. Experimental results demonstrate that it performs successfully on a variety of small- and large-scale rotated images.
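The abstract does not spell out how the circular feature vector is built, but ring-wise statistics about the image centre are a standard way to obtain rotation-invariant global features. Below is a minimal illustrative sketch in Python of that general idea together with the two similarity measures named in the abstract; the ring-mean construction and function names are assumptions, not the authors' exact method.

```python
import numpy as np

def circular_feature_vector(img, n_rings=32):
    """Mean intensity per concentric ring about the image centre.
    Rotating the image about its centre keeps each pixel at (roughly)
    the same radius, so the ring means are approximately rotation invariant."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w))
    r = np.hypot(ys - cy, xs - cx)
    bins = np.minimum((r / r.max() * n_rings).astype(int), n_rings - 1)
    return np.array([img[bins == k].mean() for k in range(n_rings)])

def match(cfv_a, cfv_b):
    """The two similarity measures named in the abstract: correlation
    coefficient (higher = more similar) and Manhattan distance (lower = more similar)."""
    corr = np.corrcoef(cfv_a, cfv_b)[0, 1]
    manhattan = np.abs(cfv_a - cfv_b).sum()
    return corr, manhattan
```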

Implementation of Handwriting Number Recognition using Convolutional Neural Network (콘볼류션 신경망을 이용한 손글씨 숫자 인식 구현)

  • Park, Tae-Ju; Song, Teuk-Seob
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.561-562 / 2021
  • CNN (Convolutional Neural Network) is widely used to recognize various kinds of images. In this presentation, single digits handwritten by humans were recognized by applying the CNN technique of deep learning. The deep learning network consists of a convolutional layer, a pooling layer, and a flatten layer; finally, we set the optimization method, learning rate, and loss function.

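The abstract names the layer types (convolution, pooling, flatten) and the training settings (optimizer, learning rate, loss function) but not the exact architecture. The following is a minimal sketch, assuming TensorFlow/Keras and the MNIST digit set, of a network with exactly those ingredients; it is an illustration, not the authors' implementation.

```python
import tensorflow as tf

# Handwritten digits 0-9, scaled to [0, 1] with an explicit channel axis.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # convolutional layer
    tf.keras.layers.MaxPooling2D(),                    # pooling layer
    tf.keras.layers.Flatten(),                         # flatten layer
    tf.keras.layers.Dense(10, activation="softmax"),   # one output per digit class
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # optimizer + learning rate
    loss="sparse_categorical_crossentropy",                  # loss function
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```

Sparse categorical cross-entropy is used here so the integer digit labels 0-9 can be fed directly without one-hot encoding.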

Research on Digital Construction Site Management Using Drone and Vision Processing Technology (드론 및 비전 프로세싱 기술을 활용한 디지털 건설현장 관리에 대한 연구)

  • Seo, Min Jo; Park, Kyung Kyu; Lee, Seung Been; Kim, Si Uk; Choi, Won Jun; Kim, Chee Kyeung
    • Proceedings of the Korean Institute of Building Construction Conference / 2023.11a / pp.239-240 / 2023
  • Construction site management involves overseeing tasks from the construction phase to the maintenance stage, and digitalization of construction sites is necessary for digital construction site management. In this study, we aim to conduct research on object recognition at construction sites using drones. Images of construction sites captured by drones are reconstructed into BIM (Building Information Modeling) models, and objects are recognized after partially rendering the models using artificial intelligence. For the photorealistic rendering of the BIM models, both traditional filtering techniques and the generative adversarial network (GAN) model were used, while the YOLO (You Only Look Once) model was employed for object recognition. This study is expected to provide insights into the research direction of digital construction site management and help assess the potential and future value of introducing artificial intelligence in the construction industry.

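The abstract states that YOLO is used for object recognition on the rendered BIM views but does not give the version, weights, or class list. As a minimal sketch only, here is how a detection pass might look with the Ultralytics YOLO package and a generic pretrained checkpoint; the checkpoint name and image path are placeholders, not the authors' setup.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                 # placeholder pretrained checkpoint
results = model("rendered_site_view.jpg")  # placeholder path to a rendered site image

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        conf = float(box.conf)
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{cls_name} {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```

In practice a construction-site model would be fine-tuned on site-specific classes (equipment, materials, workers) rather than using generic weights, but the abstract does not specify those classes.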

Characteristics of Images in Image-based SNS and User Satisfaction - Focusing on Instagram and Pinterest - (이미지 기반 SNS에 나타난 이미지의 속성과 사용자 만족 인스타그램과 핀터레스트를 중심으로)

  • Yoon, Jisun; Ryoo, Han Young
    • Journal of the HCI Society of Korea / v.14 no.1 / pp.5-13 / 2019
  • SNS has advanced from the first to the third generation by changing its service format in various ways. Image-based SNS such as Instagram and Pinterest, where users communicate via images, have become popular as third-generation services. Because users communicate primarily through images, image-based SNS utilize images differently from other SNS. This study derived various characteristics of images in image-based SNS and observed how users perceive each of them. The relationship between these characteristics and user satisfaction with image-based SNS was also analyzed. The characteristics comprise six items: 'implicity', 'recordability', 'expressing identity', 'indirect experience', 'temporary amusement', and 'stimulating desire'. When user perception of the six characteristics was compared, recordability and indirect experience were perceived more strongly than the others. Users also perceived each characteristic differently depending on their age, their motivation for using image-based SNS, and the number of followers they have. Finally, the relationship between the characteristics and user satisfaction was analyzed, and the results showed that indirect experience had a positive influence on user satisfaction. Indirect experience was both highly perceived by users and positively related to their satisfaction, which suggests it is the most representative characteristic of image-based SNS.

A modified U-net for crack segmentation by Self-Attention-Self-Adaption neuron and random elastic deformation

  • Zhao, Jin; Hu, Fangqiao; Qiao, Weidong; Zhai, Weida; Xu, Yang; Bao, Yuequan; Li, Hui
    • Smart Structures and Systems / v.29 no.1 / pp.1-16 / 2022
  • Despite recent breakthroughs in deep learning and computer vision, pixel-wise identification of tiny objects in high-resolution images with complex disturbances remains challenging. This study proposes a modified U-net for tiny crack segmentation in real-world steel-box-girder bridges. The modified U-net adopts the common U-net framework and a novel Self-Attention-Self-Adaption (SASA) neuron as the fundamental computing element. The Self-Attention module applies softmax and gate operations to obtain the attention vector, enabling the neuron to focus on the most significant receptive fields when processing large-scale feature maps. The Self-Adaption module consists of a multilayer perceptron subnet and achieves deeper feature extraction inside a single neuron. For data augmentation, a grid-based crack random elastic deformation (CRED) algorithm is designed to enrich the diversity and irregular shapes of distributed cracks. Uniform grid control nodes are first set on both the input images and the binary labels, random offsets are then applied to these control nodes, and bilinear interpolation is performed for the remaining pixels. The proposed SASA neuron and CRED algorithm are deployed together to train the modified U-net. 200 raw images with a high resolution of 4928 × 3264 are collected, 160 for training and the remaining 40 for testing. 512 × 512 patches are generated from the original images by a sliding window with an overlap of 256 pixels as inputs. Results show that the average IoU between the recognized and ground-truth cracks reaches 0.409, which is 29.8% higher than that of the regular U-net. A five-fold cross-validation study verifies that the proposed method is robust to different training and test images. Ablation experiments further demonstrate the effectiveness of the proposed SASA neuron and CRED algorithm: the individual IoU gains from the SASA and CRED modules add up to the gain of the full model, indicating that they contribute at different stages of the model and data in the training process.
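The CRED augmentation is described only at the level of the abstract: random offsets at uniform grid control nodes, bilinear interpolation for the remaining pixels, and the same warp applied to image and label. Below is a minimal Python sketch of that recipe, assuming grayscale images and using SciPy for the interpolation; the grid size, offset range, and function names are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def grid_elastic_deform(image, label, grid=8, max_offset=10.0, seed=None):
    """Grid-based random elastic deformation in the spirit of CRED:
    random offsets at coarse control nodes, bilinear interpolation of the
    displacement field, identical warp applied to a 2-D image and its binary label."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Random displacements (in pixels) at (grid+1) x (grid+1) control nodes.
    dx_coarse = rng.uniform(-max_offset, max_offset, (grid + 1, grid + 1))
    dy_coarse = rng.uniform(-max_offset, max_offset, (grid + 1, grid + 1))
    # Bilinear upsampling of the coarse field to a dense per-pixel field.
    dx = zoom(dx_coarse, (h / (grid + 1), w / (grid + 1)), order=1)
    dy = zoom(dy_coarse, (h / (grid + 1), w / (grid + 1)), order=1)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([ys + dy, xs + dx])
    warped_img = map_coordinates(image, coords, order=1, mode="reflect")
    warped_lbl = map_coordinates(label, coords, order=0, mode="reflect")  # keep labels binary
    return warped_img, warped_lbl
```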

Recognizing the Direction of Action using Generalized 4D Features (일반화된 4차원 특징을 이용한 행동 방향 인식)

  • Kim, Sun-Jung; Kim, Soo-Wan; Choi, Jin-Young
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.5 / pp.518-528 / 2014
  • In this paper, we propose a method to recognize the action direction of a human by developing 4D space-time (4D-ST, [x,y,z,t]) features. For this, we propose 4D space-time interest points (4D-STIPs, [x,y,z,t]), which are extracted using 3D space (3D-S, [x,y,z]) volumes reconstructed from images of a finite number of different views. Since the proposed features are constructed using volumetric information, the features for an arbitrary 2D space (2D-S, [x,y]) viewpoint can be generated by projecting the 3D-S volumes and 4D-STIPs onto the corresponding image planes in the training step. We can recognize the directions of actors in a test video since our training sets, which are projections of 3D-S volumes and 4D-STIPs onto various image planes, contain the direction information. The process of recognizing the action direction is divided into two steps: first we recognize the action class, and then we recognize the action direction using the direction information. For both action and direction recognition, we construct motion history images (MHIs) and non-motion history images (NMHIs) from the projected 3D-S volumes and 4D-STIPs, which encode the moving and non-moving parts of an action, respectively. For action recognition, features are trained by support vector data description (SVDD) according to the action class and recognized by support vector domain density description (SVDDD). For action direction recognition, after the action is recognized, each action is trained using SVDD according to the direction class and then recognized by SVDDD. In experiments, we train the models using 3D-S volumes from the INRIA Xmas Motion Acquisition Sequences (IXMAS) dataset and evaluate action direction recognition on a new SNU dataset constructed for this purpose.
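In this work the MHIs and NMHIs are built from projected 3D-S volumes and 4D-STIPs, which the abstract does not detail. For intuition only, the sketch below shows the standard image-domain motion history recursion (recent motion is set to a maximum value and decays over time); the frame-difference mask, decay step, threshold, and the simple NMHI complement are assumed parameters, not the paper's construction.

```python
import numpy as np

def motion_history_image(frames, tau=255.0, delta=32.0, thresh=30.0):
    """Standard motion history image: pixels with motion in the current frame
    are set to tau, all others decay by delta per frame (floored at 0).
    `frames` is a sequence of same-sized grayscale images."""
    frames = np.asarray(frames, dtype=np.float32)
    mhi = np.zeros_like(frames[0])
    for prev, curr in zip(frames[:-1], frames[1:]):
        motion = np.abs(curr - prev) > thresh          # crude frame-difference motion mask
        mhi = np.where(motion, tau, np.maximum(mhi - delta, 0.0))
    # A simple complement as a stand-in for a non-motion history image (NMHI);
    # the paper's NMHI built from projected volumes may differ.
    nmhi = tau - mhi
    return mhi, nmhi
```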

Acquisition of Subcentimeter GSD Images Using UAV and Analysis of Visual Resolution (UAV를 이용한 Subcentimeter GSD 영상의 취득 및 시각적 해상도 분석)

  • Han, Soohee; Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.6 / pp.563-572 / 2017
  • The purpose of this study is to investigate the effects of flight height, flight speed, camera exposure time, and autofocusing on the visual resolution of images, in order to obtain ultra-high-resolution images with a GSD of less than 1 cm. It also aims to evaluate how easily various types of aerial targets can be recognized. For this purpose, we measured the visual resolution using a 7952×5304 pixel 35 mm CMOS sensor and a 55 mm prime lens at 20 m intervals from 20 m to 120 m above ground. With automatic focusing, the visual resolution was measured at 1.1~1.6 times the theoretical GSD; without automatic focusing, at 1.5~3.5 times. Next, images were taken at 80 m above ground at a constant flight speed of 5 m/s while halving the exposure time from 1/60 s to 1/2000 s. Assuming that blur is allowed within 1 pixel, the visual resolution was 1.3~1.5 times the theoretical GSD when the exposure time was kept within the longest allowable exposure time, and 1.4~3.0 times when it was not. When aerial targets printed on A4 paper are photographed from within 80 m above ground, coded targets can be recognized automatically by commercial software, and both general and coded targets can be recognized manually with ease.
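As a quick check of the numbers in the abstract, the theoretical GSD and the longest exposure time that keeps forward-motion blur within one pixel follow directly from the stated sensor, lens, height, and speed. The sketch below assumes a full-frame sensor width of about 35.9 mm (the abstract only says "35 mm CMOS"); everything else is taken from the abstract.

```python
# Back-of-the-envelope check of the abstract's setup (not the authors' code).
sensor_width_m = 35.9e-3   # assumed full-frame sensor width
pixels_across = 7952       # from the abstract
focal_length_m = 55e-3     # 55 mm prime lens
height_m = 80.0            # flight height above ground
speed_mps = 5.0            # flight speed

pixel_pitch = sensor_width_m / pixels_across       # ~4.5 µm
gsd = pixel_pitch * height_m / focal_length_m      # theoretical GSD
max_exposure = gsd / speed_mps                     # longest exposure for <= 1 px of blur

print(f"pixel pitch: {pixel_pitch * 1e6:.2f} µm")
print(f"theoretical GSD at 80 m: {gsd * 100:.2f} cm")
print(f"longest exposure for <=1 px blur at 5 m/s: 1/{1 / max_exposure:.0f} s")
```

Under that sensor-width assumption, the theoretical GSD at 80 m is roughly 0.66 cm and the one-pixel-blur limit at 5 m/s is about 1/760 s, so only the shortest of the tested exposure times (around 1/1000 s and 1/2000 s) keep blur within one pixel.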

The establishment of the secondary copyright according to the production method of the 3D stereoscopic video content and the attribution (3D입체영상 콘텐츠의 제작방법에 따른 2차적 저작권 성립 여부와 귀속에 관한 연구)

  • Lee, Sung-Gil; Kim, Gwang-Ho; Kim, Joon-Gi
    • Journal of Digital Contents Society / v.15 no.2 / pp.237-250 / 2014
  • This paper examines two research problems: (1) whether a 3D stereoscopic video created by converting 2D images can, depending on the production method used, be recognized as a derivative work with rights independent of the 2D source; and (2) if the 3D video is recognized as a derivative work, to whom the copyright in that derivative work belongs and how the associated rights are attributed.

Analysis of Software Image using Semantic Differential Scale in Elementary School Students (의미분별법에 의한 초등학생의 소프트웨어 이미지 분석)

  • Ryu, MiYoung; Han, SeonKwan
    • Journal of The Korean Association of Information Education / v.20 no.5 / pp.527-534 / 2016
  • This study analyzes elementary school students' images of software using the semantic differential scale. We selected a total of 35 pairs of software-related image adjectives, categorized them into 7 main factors, and then analyzed the students' overall image of software. In the analysis of differences by sex, female students perceived software as more complex, slow, and difficult than male students did, and were less inclined to want it. In the analysis of self-awareness of software, students who considered themselves knowledgeable about software chose more positive terms for it. In the analysis by grade, older students more often answered in terms of objective features of software, such as being difficult and complex.