• Title/Summary/Keyword: 객체탐지모형 (object detection model)

Search Results: 6

Integrated Object Representations in Visual Working Memory Examined by Change Detection and Recall Task Performance (변화탐지와 회상 과제에 기초한 시각작업기억의 통합적 객체 표상 검증)

  • Inae Lee;Joo-Seok Hyun
    • Korean Journal of Cognitive Science
    • /
    • v.35 no.1
    • /
    • pp.1-21
    • /
    • 2024
  • This study investigates the characteristics of visual working memory (VWM) representations by examining two theoretical models: the integrated-object model and the parallel-independent feature storage model. Experiment I involved a change detection task in which participants memorized arrays of orientation bars, colored squares, or both. In the one-feature condition, the memory array consisted of one feature (either orientations or colors), whereas the two-feature condition included both. We found no differences in change detection performance between the conditions, favoring the integrated-object model over the parallel-independent feature storage model. Experiment II employed a recall task with memory arrays of isosceles triangles' orientations, colored squares, or both, and the one-feature and two-feature conditions were compared for recall performance. We again found no clear difference in recall accuracy between the conditions, but analyses of memory precision and guessing responses supported the weak object model over the strong object model. With respect to ongoing debates surrounding VWM's representational characteristics, these findings support the integrated-object model over the parallel-independent feature storage model.
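The abstract does not state which capacity estimate was used, but change-detection studies of this kind commonly summarize performance with Cowan's K, where capacity = set size × (hit rate − false-alarm rate). A minimal sketch, with hypothetical hit and false-alarm values:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for a single-probe change detection task."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical performance in the one-feature vs. two-feature conditions:
# equal K values are the pattern predicted by the integrated-object model.
k_one = cowans_k(set_size=4, hit_rate=0.80, false_alarm_rate=0.10)  # 2.8
k_two = cowans_k(set_size=4, hit_rate=0.80, false_alarm_rate=0.10)  # 2.8
print(k_one, k_two)
```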

Simulation Techniques for C4I Functional Integration and Interoperability in Wargame Models (워게임 모형의 C4I 기능통합 및 연동화 시뮬레이션 기법)

  • 문형곤;박찬우
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 2000.04a
    • /
    • pp.153-153
    • /
    • 2000
  • Recently, advanced countries have sought to apply C4ISR and object-oriented techniques, key elements of future warfare, when developing new wargame models in order to reflect future-battle concepts. These wargame models can simulate joint operations in realistic virtual environments, cover the strategic, operational, and tactical levels together, and are evolving into very large simulation systems that reflect modern combat concepts such as ground, air, naval, missile, and information warfare. This paper analyzes, among the C4I functional-integration and interoperability simulation logics, the techniques for simulating strategic maneuver, tactical maneuver, engagement assessment, strategic transport, target search, and missile adjudication, as well as the data/command transfer structure, hardware/software specifications, and component modules of very large simulation systems. In particular, it examines the main objects of the JWARS model currently under development by the U.S. Joint Staff, such as Battle Space Entities (BSE), arc-node networks, and Fire Concentration Points (FCPs), and analyzes target detection, communication, and intelligence simulation techniques in the C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, Reconnaissance) domain, the most prominent feature of modern warfare, in order to suggest a direction for developing analysis models suited to Korean conditions.
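The abstract names arc-node networks as one of JWARS's core maneuver representations but gives no implementation detail; the general idea (units moving along least-cost paths over a node/arc graph) can be sketched as follows, with all node names and movement costs hypothetical:

```python
import heapq

def shortest_path(arcs, start, goal):
    """Dijkstra over an arc-node network: arcs maps node -> [(neighbor, cost), ...]."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in arcs.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))
    # Reconstruct the node sequence the unit would traverse.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical terrain graph: arc costs might reflect road vs. cross-country movement.
arcs = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.5), ("D", 4.0)],
    "C": [("D", 1.0)],
}
path, cost = shortest_path(arcs, "A", "D")
print(path, cost)  # ['A', 'B', 'C', 'D'] 4.5
```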


KUeyes: A biologically motivated color stereo headeye system (KUeyes: 생물학적 시각 모형에 기반한 컬러 스테레오 헤드아이 시스템)

  • 이상웅;최형철;강성훈;이성환
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.04b
    • /
    • pp.586-588
    • /
    • 2000
  • KUeyes is a color stereo head-eye system developed at the Center for Artificial Vision Research, Korea University, for processing images of the three-dimensional real world. Modeled on the human visual system, KUeyes detects and tracks objects quickly and intelligently using multi-resolution transformed images together with color, depth, and motion information. It also recognizes detected human faces through recognizers running in parallel. Various experiments and analyses confirmed that KUeyes stably tracks and recognizes moving objects in complex real images in real time.
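The abstract does not detail how the motion cue is computed; a common minimal approach in head-eye systems of this era is frame differencing with a threshold, sketched here over toy grayscale frames (all pixel values hypothetical):

```python
def motion_mask(prev_frame, curr_frame, threshold=20):
    """Flag pixels whose intensity changed by more than `threshold` between frames."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

# Toy 3x3 grayscale frames: one pixel brightens sharply (a moving object's edge).
prev = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
curr = [[10, 10, 10], [10, 90, 10], [10, 10, 10]]
mask = motion_mask(prev, curr)
print(mask)  # [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

A real system would clean the mask with morphological filtering and fuse it with the color and depth cues before tracking.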


Analysis of performance changes based on the characteristics of input image data in the deep learning-based algal detection model (딥러닝 기반 조류 탐지 모형의 입력 이미지 자료 특성에 따른 성능 변화 분석)

  • Juneoh Kim;Jiwon Baek;Jongrack Kim;Jungsu Park
    • Journal of Wetlands Research
    • /
    • v.25 no.4
    • /
    • pp.267-273
    • /
    • 2023
  • Algae are an important component of the ecosystem. However, the excessive growth of cyanobacteria has various harmful effects on river environments, and diatoms affect the management of water supply processes. Algal monitoring is essential for sustainable and efficient algae management. In this study, an object detection model was developed that detects and classifies images of four types of harmful cyanobacteria used as criteria for the algae alert system, plus one diatom, Synedra sp. You Only Look Once (YOLO) v8, the latest version of the YOLO model, was used to develop the model. The mean average precision (mAP) of the base model was 64.4. Five models were created to increase the diversity of the input images used for model training by rotating, magnifying, and reducing the original images, and changes in model performance were compared according to the composition of the input images. The model that applied rotation, magnification, and reduction together showed the best performance, with an mAP of 86.5. The mAPs of the models that used rotation only, rotation plus magnification, and rotation plus reduction were 85.3, 82.3, and 83.8, respectively.
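The paper's exact augmentation parameters are not given; the general recipe (expanding a training set with rotated and rescaled copies of each image) might look like the following toy sketch, where the 90-degree rotation and integer scale factor are assumptions and reduction is omitted for brevity:

```python
def rotate90(img):
    """Rotate a 2D image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def scale(img, factor):
    """Nearest-neighbor magnification by an integer factor."""
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

def augment(img):
    """Return the original plus rotated and magnified variants, echoing the
    paper's rotation/magnification/reduction scheme (reduction not shown)."""
    return [img, rotate90(img), scale(img, 2)]

# Toy 2x2 "image": each augmented copy becomes an extra training sample.
img = [[1, 2],
       [3, 4]]
variants = augment(img)
print(len(variants))  # 3
print(variants[1])    # [[3, 1], [4, 2]]
```

In practice a framework such as YOLOv8 applies these transforms (and many others) on the fly during training rather than materializing copies on disk.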

Automatic Collection of Production Performance Data Based on Multi-Object Tracking Algorithms (다중 객체 추적 알고리즘을 이용한 가공품 흐름 정보 기반 생산 실적 데이터 자동 수집)

  • Lim, Hyuna;Oh, Seojeong;Son, Hyeongjun;Oh, Yosep
    • The Journal of Society for e-Business Studies
    • /
    • v.27 no.2
    • /
    • pp.205-218
    • /
    • 2022
  • Recently, digital transformation in manufacturing has been accelerating, making technologies for collecting data from the shop floor increasingly important. Existing approaches focus primarily on obtaining specific manufacturing data using various sensors and communication technologies. To expand the channels for field data collection, this study proposes a method to collect manufacturing data automatically using vision-based artificial intelligence: real-time image information is analyzed with object detection and tracking technologies to obtain manufacturing data. The research team collects object motion information for each frame by applying YOLO (You Only Look Once) and DeepSORT as the object detection and tracking algorithms. The motion information is then converted into two kinds of manufacturing data (production performance and production time) through post-processing. A dynamically moving factory model is created to obtain training data for deep learning, and operating scenarios are proposed to reproduce real-world shop-floor situations. The operating scenario assumes a flow shop consisting of six facilities. When manufacturing data were collected according to the operating scenarios, the accuracy was 96.3%.
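The post-processing step that turns per-frame track positions into production counts is not specified in detail; one common pattern is to count each track ID the first time it crosses a virtual line at a facility's exit, sketched here with hypothetical track data:

```python
def count_completions(tracks, line_x):
    """Count each track ID once when its x-position first crosses `line_x`.

    `tracks` maps frame index -> list of (track_id, x) detections, as a
    YOLO+DeepSORT pipeline might emit per frame."""
    last_x = {}
    counted = set()
    for frame in sorted(tracks):
        for track_id, x in tracks[frame]:
            prev = last_x.get(track_id)
            if prev is not None and prev < line_x <= x and track_id not in counted:
                counted.add(track_id)
            last_x[track_id] = x
    return len(counted)

# Hypothetical detections: parts 1 and 2 cross x=100; part 3 never does.
tracks = {
    0: [(1, 80), (2, 60), (3, 40)],
    1: [(1, 95), (2, 90), (3, 50)],
    2: [(1, 110), (2, 105), (3, 70)],
}
print(count_completions(tracks, line_x=100))  # 2
```

Production time per part could be derived the same way, by recording the frame indices at which a track crosses a facility's entry and exit lines.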

A Research on Adversarial Example-based Passive Air Defense Method against Object Detectable AI Drone (객체인식 AI적용 드론에 대응할 수 있는 적대적 예제 기반 소극방공 기법 연구)

  • Simun Yuk;Hweerang Park;Taisuk Suh;Youngho Cho
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.119-125
    • /
    • 2023
  • Through the Ukraine-Russia war, the military importance of drones is being reassessed, and North Korea demonstrated this in practice with a drone provocation toward South Korea in 2022. Furthermore, North Korea is actively integrating artificial intelligence (AI) technology into drones, heightening the threat they pose. In response, the Republic of Korea military has established the Drone Operations Command (DOC) and implemented various drone defense systems. However, capability-enhancement efforts have been disproportionately focused on strike systems, making it difficult to counter swarm drone attacks effectively. In particular, at air force bases adjacent to urban areas, the use of traditional air defense weapons is severely limited by concerns about civilian casualties. Therefore, this study proposes a new passive air defense method that disrupts the object detection capability of AI models to enhance the survivability of friendly aircraft against AI-based swarm drones. Using laser-based adversarial examples, the study seeks to degrade the recognition accuracy of the object recognition AI installed on enemy drones. Experiments using synthetic images and precision-reduced models confirmed that the proposed method reduced the recognition accuracy of the object recognition AI from approximately 95% to around 0-15%, validating its effectiveness.
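The abstract reports accuracy before and after perturbation but not the evaluation mechanics; a minimal sketch of such an evaluation harness follows, where the stripe overlay is a crude stand-in for the paper's laser-based adversarial example (real attacks optimize the pattern, this one does not) and all labels and predictions are hypothetical:

```python
def recognition_accuracy(labels, predictions):
    """Fraction of images whose predicted class matches the ground-truth label."""
    return sum(l == p for l, p in zip(labels, predictions)) / len(labels)

def add_laser_stripe(img, row, intensity=255):
    """Overlay a saturated horizontal stripe on a grayscale image (list of rows),
    a toy placeholder for a laser-based adversarial perturbation."""
    out = [list(r) for r in img]
    out[row] = [intensity] * len(out[row])
    return out

# Hypothetical evaluation: accuracy on clean inputs vs. perturbed inputs.
labels      = ["drone", "drone", "bird", "drone"]
clean_preds = ["drone", "drone", "bird", "bird"]  # detector's clean-image outputs
adv_preds   = ["bird", "bird", "bird", "bird"]    # outputs after perturbation
print(recognition_accuracy(labels, clean_preds))  # 0.75
print(recognition_accuracy(labels, adv_preds))    # 0.25
```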