• Title/Summary/Keyword: Spatial detection system

Search Results: 442

Maximizing the Probability of Detecting Interstellar Objects by using Space Weather Data (우주기상 데이터를 활용한 성간물체 관측 가능성의 제고)

  • Kwon, Ryun Young;Kim, Minsun;Hoang, Thiem
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.2
    • /
    • pp.62.1-62.1
    • /
    • 2021
  • Interstellar objects originate from other stellar systems. Thus, they carry information about those systems that cannot be obtained by direct exploration, including their formation and evolution and the possibility of life. The examples observed so far are 1I/'Oumuamua in 2017 and 2I/Borisov in 2019. In this talk, we present the possibility of detecting interstellar objects using the Heliospheric Imagers designed for space weather research and forecasting, which observe the solar wind in interplanetary space between the Sun and Earth. Because interstellar objects are unpredictable, their detection requires observations with wide spatial coverage and long temporal duration. Near-real-time data availability is essential for follow-up observations of their detailed properties and for future rendezvous missions. Heliospheric Imagers provide day-side observations inaccessible to traditional astronomical observations. Together with traditional astronomical observations, this dramatically increases the temporal and spatial coverage, and hence the probability of detecting interstellar objects visiting our solar system. We demonstrate that this is the case using data taken by the Solar TErrestrial RElations Observatory (STEREO)/Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) HI-1 instrument. HI-1 is off-pointed from the Sun direction by 14 degrees and has a 20-degree field of view. Using images observed from 2007 to 2019, we found a total of 223 small objects other than stars, galaxies, or planets, indicating the potential capability to detect interstellar objects. The same method can be applied to currently operating missions such as Parker Solar Probe and Solar Orbiter, and to future L5 and L4 missions. Since the data can be analyzed in near-real time for space weather purposes, more detailed properties can be studied by follow-up observations from the ground and space, and by future rendezvous missions, which we discuss at the end of this talk.


Automatic Traffic Data Collection Using Simulated Satellite Imagery (인공위성영상을 이용한 교통량측량 자동화)

  • 조우석
    • Korean Journal of Remote Sensing
    • /
    • v.11 no.3
    • /
    • pp.101-116
    • /
    • 1995
  • The fact that demands on traffic data collection are driven by economic and safety considerations raises the question of whether existing traffic data collection programs could be complemented with satellite data. Evaluating and monitoring traffic characteristics is becoming increasingly important as worsening congestion, declining economic conditions, and increasing environmental sensitivities force governments and municipalities to make better use of existing roadway capacity. The present system of automatic counters at selected points on highways works well from a temporal point of view (i.e., during a specific period of time at one location). However, it does not cover the spatial aspects of the entire road system (i.e., every location during specific periods of time); the counters are deployed only at points and only on selected highways. This lack of spatial coverage is due, in part, to the cost of the automatic counter systems (fixed procurement and maintenance costs) and of the personnel required to deploy them. The current procedure is believed to work fairly well in the aggregate, at the macro level; at the micro level, however, the numbers are more suspect. In addition, the statistics only hold under an assumed homogeneity among highways of the same class, an assumption that is impossible to test when little or no data is gathered for many highways in a given class. In this paper, a remote sensing system is considered and implemented as a complement to the existing system. Since high-resolution satellite imagery is not available, digitized panchromatic imagery acquired from an aircraft platform is used for an initial test of the feasibility and performance of remote sensing data.
Different levels of image resolution are evaluated to determine which vehicle types could be classified and counted against a background of pavement types, as might be expected in panchromatic satellite imagery. The results of a systematic study at three resolutions (1 m, 2 m and 4 m) show that the panchromatic reflectances of vehicles and pavements are distributed so similarly that it would be difficult to classify vehicles on pavement systematically and analytically within the panchromatic range. Analysis of the aerial photographs shows that vehicle shadows could serve as a cue for vehicle detection.

A study for improvement of far-distance performance of a tunnel accident detection system by using an inverse perspective transformation (역 원근변환 기법을 이용한 터널 영상유고시스템의 원거리 감지 성능 향상에 관한 연구)

  • Lee, Kyu Beom;Shin, Hyu-Soung
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.3
    • /
    • pp.247-262
    • /
    • 2022
  • In domestic tunnels, installation of CCTVs is mandatory in tunnels longer than 200 m, for which a CCTV-based automatic accident detection system is also recommended. Because of the spatial limitations of tunnel structures, tunnel CCTVs are generally installed at a low height and close to the moving vehicles, so a severe perspective effect arises between the installed CCTV and vehicles at a distance. Because of this effect, conventional CCTV-based accident detection systems in tunnels are generally known to struggle to detect unexpected events such as stopped or wrong-way vehicles, people on the road, and fires, especially beyond 100 m. Therefore, in this study, a region of interest is set up and a new concept of inverse perspective transformation is introduced. Since moving vehicles in the transformed image are enlarged in proportion to their distance from the CCTV, it is possible to achieve consistency in object detection and to identify the actual speed of distant moving vehicles. To show this, two datasets under the same conditions are composed, of the original and the transformed tunnel CCTV images, respectively. The variations with distance of the apparent speed and size of moving vehicles are compared, and the object detection performance at distance is compared for the two trained deep-learning models. As a result, the model trained on the transformed images achieves consistent object and accident detection performance at distances up to 200 m.
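The enlargement effect described above can be sketched with a plain homography (the core of an inverse perspective transform). The matrix H and the image coordinates below are illustrative assumptions, not the paper's actual CCTV calibration; the point is only that, after the perspective divide, a vehicle far down the tunnel (small y) is scaled up more than one near the camera.

```python
def apply_homography(H, x, y):
    """Map image point (x, y) to the bird's-eye plane via 3x3 homography H."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w   # perspective divide

# Hypothetical H whose bottom row makes w small for small y, so points high
# in the image (far down the tunnel) are stretched after the transform.
H = [[1.0, 0.0,   0.0],
     [0.0, 1.0,   0.0],
     [0.0, 0.002, 0.2]]

near = apply_homography(H, 100, 400)  # vehicle near the camera: scale ~1x
far  = apply_homography(H, 100, 100)  # vehicle far away: scale ~2.5x
```

With this H a distant vehicle's apparent size grows by roughly the inverse of w, which is what lets a detector see a consistent object size at all ranges.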

Automatic Person Identification using Multiple Cues

  • Swangpol, Danuwat;Chalidabhongse, Thanarat
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1202-1205
    • /
    • 2005
  • This paper describes a method for vision-based person identification that can detect, track, and recognize a person from video using multiple cues: height and clothing colors. The method requires neither a constrained target pose nor a fully frontal face image to identify the person. First, the system, which is connected to a pan-tilt-zoom camera, detects the target using motion detection and a human cardboard model. The system keeps tracking the moving target while trying to determine whether it is a human and which of the registered persons in the database it is. To segment the moving target from the background scene, we employ a version of background subtraction together with spatial filtering. Once the target is segmented, we align it with the generic human cardboard model to verify that the detected target is a human. If so, the cardboard model is also used to segment the body parts and obtain salient features such as head, torso, and legs. The whole-body silhouette is also analyzed to obtain the target's shape information such as height and slimness. We then use these multiple cues (at present, shirt color, trousers color, and body height) to recognize the target using a supervised self-organization process. We preliminarily tested the system on a set of 5 subjects with multiple sets of clothes. The recognition rate is 100% when a person wears clothes that were learned beforehand; with new clothes, the system fails to identify the person, which means height alone is not enough to classify persons. We plan to extend the work by adding more cues such as skin color and face recognition, utilizing the camera's zoom capability to obtain a high-resolution view of the face, and then to evaluate the system with more subjects.
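The segmentation step above can be sketched with the simplest form of background subtraction: thresholded absolute difference against a background frame. The paper does not specify which variant it uses, so this is a generic stand-in; frames are plain 2D lists of grayscale values and the threshold of 25 is an assumption.

```python
def subtract_background(background, frame, threshold=25):
    """Return a binary mask: 1 where the frame differs from the background."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[10, 10, 10],
              [10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 10, 10],
              [10, 90, 90],   # a bright moving target enters the scene
              [10, 10, 10]]

mask = subtract_background(background, frame)
# mask marks only the two changed pixels; spatial filtering would then
# clean up isolated false positives before the cardboard-model alignment.
```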


A Study on Traffic Situation Recognition System Based on Group Type Zigbee Mesh Network (그룹형 Zigbee Mesh 네트워크 기반 교통상황인지 시스템에 관한 연구)

  • Lim, Ji-Yong;Oh, Am-Suk
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.12
    • /
    • pp.1723-1728
    • /
    • 2021
  • C-ITS is an intelligent transportation system that can improve transportation convenience and traffic safety by collecting, managing, and providing traffic information between components such as vehicles, road infrastructure, drivers, and pedestrians. In Korea, road infrastructure is being built across the country through the C-ITS project, and various services such as real-time traffic information provision and bus operation management are provided. However, the current road infrastructure and information-linkage systems are insufficient to build C-ITS. In this paper, considering temporal continuity across various spatial contexts, we propose a group-type network-based traffic situation recognition system that can recognize traffic flows and unexpected accidents through information linkage between traffic infrastructures. The proposed system is expected to provide a first response to accident detection and warning in the field, and to be usable for more diverse traffic information services through information linkage with other systems.

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1307-1312
    • /
    • 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and a LiDAR (Light Detection and Ranging) sensor to address the core components of autonomous driving perception: object recognition and distance measurement. We extract objects within the scene and generate precise location and distance information for them using the proposed hybrid camera system. First, we employ the YOLOv7 algorithm, widely used in autonomous driving for its fast computation, high accuracy, and real-time processing, for object recognition within the scene. We then use the multi-focal cameras to create depth maps from which object positions and distance information are generated. To enhance distance accuracy, we integrate the 3D distance information obtained from the LiDAR sensor with the generated depth maps. We introduce an autonomous vehicle platform that, based on the proposed hybrid camera system, can perceive its surroundings more accurately during operation and provide precise 3D spatial location and distance information. We anticipate that this will improve the safety and efficiency of autonomous vehicles.
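The distance pipeline above can be sketched in two steps: depth from stereo disparity, then fusion with a LiDAR return. The focal length, baseline, and the "LiDAR overrides camera" fusion rule below are illustrative assumptions, not the paper's exact method.

```python
FOCAL_PX = 700.0    # assumed camera focal length, in pixels
BASELINE_M = 0.12   # assumed baseline between the cameras, in metres

def depth_from_disparity(disparity_px):
    """Pinhole stereo model: depth = f * B / disparity."""
    return FOCAL_PX * BASELINE_M / disparity_px

def fuse(depth_cam_m, depth_lidar_m):
    """Prefer the LiDAR range when a return exists (None = no return),
    since LiDAR distances are typically more accurate than stereo."""
    return depth_lidar_m if depth_lidar_m is not None else depth_cam_m

cam = depth_from_disparity(42.0)   # camera-only estimate, about 2 m
fused = fuse(cam, 2.05)            # a LiDAR return refines the estimate
```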

Head Mouse System Based on A Gyro and Opto Sensors (각속도 및 광센서를 이용한 헤드 마우스)

  • Park, Min-Je;Yoo, Jae-Ha;Kim, Soo-Chan
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.4
    • /
    • pp.70-76
    • /
    • 2009
  • We propose a device that controls a computer mouse with only head movements and eye blinks, so that people disabled by car or other accidents can use a computer. The mouse position was estimated from a gyro sensor measuring head movements, and mouse events such as click/double-click were generated from opto sensors detecting eye blinks. The sensors were mounted on goggles so as not to obstruct the visual field. There was no difference in movement speed between our device and a general mouse, but in the experiment evaluating spatial movement and event detection, the proposed mouse took 3~4 times longer because of its lower accuracy. Using a non-linear relative pointing method with dead zones, we eliminated the cumbersome periodic removal of accumulated error and enabled intuitive mouse control. The event detection circuitry using the optical sensors was designed to remove the influence of ambient light changes, so it was not affected by changes in the external light source.
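The non-linear relative pointing with dead zones mentioned above can be sketched as follows: gyro readings inside the dead zone are ignored (so drift no longer accumulates), and larger readings map non-linearly to cursor displacement. The threshold, gain, and the choice of a squared mapping are illustrative assumptions.

```python
DEAD_ZONE = 2.0   # deg/s; head motion below this is treated as drift
GAIN = 0.5        # pixels per (deg/s)^2

def cursor_delta(angular_velocity):
    """Map a gyro reading (deg/s) to a relative cursor move (pixels)."""
    if abs(angular_velocity) < DEAD_ZONE:
        return 0.0                                  # dead zone: no movement
    sign = 1.0 if angular_velocity > 0 else -1.0
    return sign * GAIN * angular_velocity ** 2      # non-linear mapping
```

Because the output is a relative delta and small drifts return exactly zero, the cursor stays put during unintentional micro-movements, which is what removes the need to periodically re-zero the sensor.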

Scene Change Detection and Key Frame Selection Using Fast Feature Extraction in the MPEG-Compressed Domain (MPEG 압축 영상에서의 고속 특징 요소 추출을 이용한 장면 전환 검출과 키 프레임 선택)

  • 송병철;김명준;나종범
    • Journal of Broadcast Engineering
    • /
    • v.4 no.2
    • /
    • pp.155-163
    • /
    • 1999
  • In this paper, we propose novel scene change detection and key frame selection techniques that use two feature images, i.e., DC and edge images, extracted directly from MPEG-compressed video. For fast edge image extraction, we suggest utilizing the five lowest AC coefficients of each DCT block. Based on this scheme, we present another edge image extraction technique using AC prediction. Although the former is superior to the latter in visual quality, both methods can extract the important edge features well. Simulation results indicate that scene changes such as cuts, fades, and dissolves can be correctly detected using the edge energy diagram obtained from the edge images and histograms from the DC images. In addition, we find that our edge images are comparable to those obtained in the spatial domain, at much lower computational cost. Based on the human visual system (HVS), a key frame of each scene can also be selected. In comparison with an existing method using optical flow, our scheme can select semantic key frames because we use only the above edge and DC images.
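The low-order-AC idea above can be sketched as a per-block edge-strength measure computed without any inverse DCT. Which five coefficients the paper uses is not specified here, so taking the first five in zig-zag order is an assumption.

```python
# First five AC positions of an 8x8 DCT block in zig-zag order (DC excluded).
ZIGZAG_FIRST5 = [(0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]

def edge_energy(dct_block):
    """Approximate edge strength as the summed magnitude of low-order ACs."""
    return sum(abs(dct_block[r][c]) for r, c in ZIGZAG_FIRST5)

flat_block = [[0] * 8 for _ in range(8)]
flat_block[0][0] = 800            # DC only: a uniform block, no edges

edge_block = [row[:] for row in flat_block]
edge_block[0][1] = -120           # strong low-frequency horizontal variation
edge_block[1][0] = 60             # and some vertical variation
```

Summing these per-block energies over a frame gives the kind of edge energy diagram the abstract uses for cut/fade/dissolve detection.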


A Study on Automatic Detection of Speed Bump by using Mathematical Morphology Image Filters while Driving (수학적 형태학 처리를 통한 주행 중 과속 방지턱 자동 탐지 방안)

  • Joo, Yong Jin;Hahm, Chang Hahk
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.21 no.3
    • /
    • pp.55-62
    • /
    • 2013
  • This paper aims to detect speed bumps using an omni-directional camera and to suggest a real-time update scheme for speed bumps through a vision-based approach. To detect a speed bump from a sequence of camera images, noise must first be removed and spots matching the shape and pattern of a speed bump detected. Since speed bumps have a regular pattern of white and yellow areas, we extract them from the road by applying erosion and dilation morphological operations and by using the HSV color model. By collecting large numbers of panoramic images from the camera, we are able to detect the target object and to calculate its distance through GPS log data. Finally, we evaluated the accuracy of the obtained results and the detection algorithm by implementing SLAMS (Simultaneous Localization and Mapping System).
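The morphological step above can be sketched as a binary opening: erosion followed by dilation with a 3x3 structuring element, which removes isolated noise pixels while preserving larger blobs such as a speed-bump stripe. The mask below is a toy example, and the 3x3 element is an assumption; the paper does not state its kernel size.

```python
def erode(mask):
    """3x3 binary erosion: keep a pixel only if its whole 3x3 patch is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = int(all(mask[r + dr][c + dc]
                                for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
    return out

def dilate(mask):
    """3x3 binary dilation: set a pixel if any 3x3 neighbour is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            out[r][c] = int(any(0 <= r + dr < h and 0 <= c + dc < w
                                and mask[r + dr][c + dc]
                                for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
    return out

def opening(mask):
    """Erosion then dilation: drops speckle noise, keeps solid regions."""
    return dilate(erode(mask))
```

In the paper's pipeline the input mask would come from HSV thresholding for the white/yellow stripe colors; here any binary mask demonstrates the noise removal.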

Proposition for Retina Model Based on Electrophysiological Mechanism and Analysis for Spatiotemporal Response (전기생리학적 기전에 근거한 망막 모델의 제안과 시공간적 응답의 분석)

  • Lee, Jeong-Woo;Chae, Seung-Pyo;Cho, Jin-Ho;Kim, Myoung-Nam
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.39 no.6
    • /
    • pp.49-58
    • /
    • 2002
  • Based on the electrophysiological mechanism of the retina, a retina model is proposed that has response characteristics similar to those of the real primate retina. Photoreceptors, horizontal cells, and bipolar cells are modeled based on previously studied retina models, while amacrine cells, known to be involved in movement detection, and bipolar cell terminals are newly modeled using the 3 NDP mechanism. The proposed model is verified by analyzing its spatial response characteristics to stationary and moving stimuli, and its characteristics at different speeds. Through this retina model, the human visual system could be applied to computer vision systems for movement detection, and it could serve as basic research for an implantable artificial retina.
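The spatial response of the photoreceptor/horizontal-cell stage modeled above is commonly approximated by a center-surround difference of Gaussians; a minimal sketch follows, with widths and the surround weight as illustrative assumptions rather than the paper's fitted parameters.

```python
import math

def dog_response(distance, sigma_c=1.0, sigma_s=3.0, w_s=0.5):
    """Center-surround weight at a given distance from the field centre:
    a narrow excitatory Gaussian minus a wider, weaker inhibitory one."""
    center = math.exp(-distance ** 2 / (2 * sigma_c ** 2))
    surround = w_s * math.exp(-distance ** 2 / (2 * sigma_s ** 2))
    return center - surround
```

The response is excitatory at the centre and turns inhibitory away from it, reproducing the edge-enhancing behaviour such retina models are built to capture.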