• Title/Summary/Keyword: location of camera


A Study on Estimation of Submarine Groundwater Discharge Distribution Area Using IR Camera and Field Survey around Jeju Island (열화상카메라와 현장조사를 이용한 제주 주변 해역의 해저 용천수 분포 지역 추정 연구)

  • Park, Jae-Moon;Kim, Dae-Hyun;Yang, Sung-Kee;Yoon, Hong-Joo
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.10 no.8, pp.861-866, 2015
  • This study aimed to detect areas of submarine groundwater discharge (SGD) around Jeju Island using remote sensing. Sea surface temperature (SST) was measured with an IR camera mounted on an unmanned aerial vehicle (UAV) over Gimnyeong port in the study area, and SGD locations were then detected by comparing the imagery against the typical SGD temperature range. Like groundwater, SGD is generally distributed between 15 and 17 ℃. As a result, an SGD location was detected in the southwest of the study area from the SST distribution of Gimnyeong port recorded by the IR camera.
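The detection step described above can be sketched as a simple temperature-band mask: flag pixels whose SST falls in the 15-17 ℃ groundwater range as SGD candidates. The small "SST image" below is illustrative data, not the paper's imagery.

```python
SGD_MIN, SGD_MAX = 15.0, 17.0  # typical SGD temperature band from the abstract

# Hypothetical 4x4 SST grid (degrees C); the cold interior patch mimics an SGD plume.
sst = [
    [19.2, 18.8, 18.5, 18.9],
    [18.7, 16.4, 15.8, 18.6],
    [18.9, 16.1, 15.2, 18.4],
    [19.0, 18.8, 18.7, 18.9],
]

def sgd_mask(grid, lo=SGD_MIN, hi=SGD_MAX):
    """Return a boolean mask marking pixels inside the SGD temperature band."""
    return [[lo <= t <= hi for t in row] for row in grid]

mask = sgd_mask(sst)
candidates = [(r, c) for r, row in enumerate(mask)
              for c, hit in enumerate(row) if hit]
print(candidates)  # the cold patch in the interior of the grid
```

A real pipeline would also need radiometric calibration of the IR imagery and georeferencing of the UAV frames, which the abstract does not detail.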

Design and Implementation of a Ubiquitous-based Facility Management System (u-기반 시설물 관리 시스템 설계 및 구현)

  • Kim, Jung Jae;Park, Chan Kil
    • Journal of Korea Society of Digital Industry and Information Management, v.4 no.4, pp.1-8, 2008
  • In ubiquitous sensor networks (USN), important techniques include unmanned observation using wireless network cameras and intrusion detection using sensors. However, encrypted data transmission and processing through the sensor network, together with methods for recognizing and positioning staff, are still not offered as an integrated system in the facility security industry. This paper proposes a system that improves facility management, staff presence recognition, and system efficiency using RFID, USN, and wireless cameras.

A Study on the Camera Calibration Algorithm of Robot Vision Using Cartesian Coordinates

  • Lee, Yong-Joong
    • Transactions of the Korean Society of Machine Tool Engineers, v.11 no.6, pp.98-104, 2002
  • In this study, we developed an algorithm that determines the position and orientation of a camera system in Cartesian coordinates by attaching the camera to the end-effector of an industrial six-axis robot. Evaluating the suggested algorithm from a Cartesian starting point, the orientation vector along the line connecting two points in the coordinate space is refined by a recursive least-squares method, which combines previous results with new data as image points are added. Therefore, when the camera attached to the end-effector is used at a production site with a calibration mask containing eight or more arranged points, simulation confirmed that the position and orientation of the camera system in Cartesian coordinates can be determined even without special measuring equipment.
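The recursive least-squares update at the heart of this abstract can be shown in its simplest scalar form: each new measurement refines the previous estimate without re-solving the whole system. This is a generic RLS sketch with made-up data, not the paper's multi-parameter camera model.

```python
def rls_update(a, P, x, y, lam=1.0):
    """One scalar recursive least-squares step for the model y = a * x.

    a   - current parameter estimate
    P   - current (inverse-information) variance term
    x,y - new measurement pair
    lam - forgetting factor (1.0 = plain RLS)
    """
    k = P * x / (lam + x * P * x)   # gain for the new sample
    a_new = a + k * (y - a * x)     # correct estimate by the residual
    P_new = (P - k * x * P) / lam   # shrink uncertainty
    return a_new, P_new

# Noisy samples of y = 2x; the estimate converges toward a = 2.
a, P = 0.0, 1000.0
for x, y in [(1, 2.0), (2, 4.1), (3, 5.9), (4, 8.05)]:
    a, P = rls_update(a, P, x, y)
```

In the paper the same idea applies vector-wise to the orientation parameters, with each new calibration-mask image point extending the least-squares system recursively.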

Camera Calibration and Pose Estimation for Tasks of a Mobile Manipulator (모바일 머니퓰레이터의 작업을 위한 카메라 보정 및 포즈 추정)

  • Choi, Ji-Hoon;Kim, Hae-Chang;Song, Jae-Bok
    • The Journal of Korea Robotics Society, v.15 no.4, pp.350-356, 2020
  • Workers have been replaced by mobile manipulators for factory automation in recent years. One typical automation task is for a mobile manipulator to move to a target location and pick and place an object on a worktable. However, due to the pose estimation error of the mobile platform, the robot cannot reach the exact target position, which prevents the manipulator from accurately picking and placing the object on the worktable. In this study, we developed an automatic alignment system using a low-cost camera mounted on the end-effector of a collaborative robot. Camera calibration and pose estimation methods are also proposed for the automatic alignment system. The algorithm uses a markerboard composed of multiple markers to calibrate the camera and then precisely estimate the camera pose. Experimental results demonstrate that the mobile manipulator can perform successful pick-and-place tasks under various conditions.
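The alignment idea can be illustrated with a minimal 2D sketch: once the camera pose relative to the markerboard is estimated, the platform's positioning error (a small yaw and translation here) is compensated before the pick pose is commanded. The function, its parameters, and all numbers are illustrative assumptions, not the paper's method.

```python
import math

def correct_target(target_xy, yaw_err_rad, trans_err_xy):
    """Re-express a nominal worktable target after compensating a 2D platform
    pose error (yaw + translation), as estimated from the markerboard."""
    c, s = math.cos(-yaw_err_rad), math.sin(-yaw_err_rad)
    # remove the translation error, then undo the yaw error
    x = target_xy[0] - trans_err_xy[0]
    y = target_xy[1] - trans_err_xy[1]
    return (c * x - s * y, s * x + c * y)

# Platform stopped 2 cm / -1 cm off and 3 degrees rotated from its nominal pose.
corrected = correct_target((0.50, 0.20), math.radians(3.0), (0.02, -0.01))
```

In practice the camera pose itself would come from a PnP solve against the detected marker corners; the abstract does not give the exact formulation, so only the compensation step is sketched.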

Real-time Tracking and Identification for Multi-Camera Surveillance System

  • Hong, Yo-Hoon;Song, Seung June;Rho, Jungkyu
    • International Journal of Internet, Broadcasting and Communication, v.10 no.1, pp.16-22, 2018
  • This paper presents a solution for a personal profiling system based on user-oriented tracking. We introduce a new way to identify and track people by using two types of cameras: a dome camera and a face camera. The dome camera has a wide viewing angle, making it suitable for tracking human movement over a large area. However, it is difficult to identify a person using the dome camera alone because it only sees the target from above. Thus, a face camera is employed to obtain facial information for identification. In addition, we propose a new mechanism to locate a person at a target location using a grid-cell system. The result is a system that can maintain human identity and track human activity (movement) effectively.
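The grid-cell localization can be reduced to a one-line mapping from a floor-plane coordinate to a discrete cell index. Cell size and coordinates below are illustrative assumptions; the paper does not specify its grid geometry.

```python
def grid_cell(x, y, cell_w, cell_h):
    """Map a floor-plane position (in meters) to its (row, col) grid cell."""
    return (int(y // cell_h), int(x // cell_w))

# A person at (7.5 m, 2.3 m) on a grid of 2 m x 2 m cells.
print(grid_cell(7.5, 2.3, 2.0, 2.0))  # (1, 3)
```

The dome camera would feed (x, y) from overhead tracking; identity from the face camera is then attached to whatever track currently occupies that cell.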

Assessment of a smartphone-based monitoring system and its application

  • Ahn, Hoyong;Choi, Chuluong;Yu, Yeon
    • Korean Journal of Remote Sensing, v.30 no.3, pp.383-397, 2014
  • Advances in information technology are allowing conventional surveillance systems to be combined with mobile communication technologies, creating ubiquitous monitoring systems. This paper proposes a monitoring system that uses smart camera technology. We discuss the dependence of interior orientation parameters on calibration target sheets and assess the accuracy of a three-dimensional monitoring system whose camera location is calculated by space resection, using a Digital Surface Model (DSM) generated from stereo images. A monitoring housing is designed to protect the camera from various weather conditions and to supply it with power generated from a solar panel. A smart camera installed in the housing is operated and controlled through an Android application. Finally, the accuracy of the three-dimensional monitoring system is evaluated using a DSM: the proposed system was tested against a DSM created from ground control points determined by the Global Positioning System (GPS) and light detection and ranging data. The standard deviation of the differences between the DSMs is less than 0.12 m. The monitoring system is therefore appropriate for extracting information on objects' position and deformation as well as for monitoring them. Through the incorporation of components such as the camera housing, a solar power supply, and the smart camera, the system can be used as a ubiquitous monitoring system.
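The reported accuracy check amounts to differencing two elevation models cell-by-cell and computing the standard deviation of the residuals (the paper reports < 0.12 m). The tiny grids below are made-up elevations, just to make the computation concrete.

```python
import math

# Hypothetical 2x2 elevation grids (meters): camera-derived DSM vs. GPS/LiDAR DSM.
dsm_cam = [[10.02, 10.11], [9.95, 10.08]]
dsm_ref = [[10.00, 10.05], [10.00, 10.00]]

# Cell-by-cell differences, then their (population) standard deviation.
diffs = [a - b for ra, rb in zip(dsm_cam, dsm_ref) for a, b in zip(ra, rb)]
mean = sum(diffs) / len(diffs)
std = math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))
print(round(std, 3), "m")
```

With real DSMs the grids would first be co-registered to the same extent and resolution before differencing.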

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.19-28, 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables audiences to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location of audience members, or to improve and deploy interaction between exhibits and audiences by analyzing usage patterns. In order to provide multimodal interaction services to the audience at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used: it provides the real-time location of fast-moving subjects, making it one of the key technologies in fields requiring location tracking services. However, because GPS locates by satellite, it cannot be used indoors, where the satellite signal cannot be received. For this reason, indoor location tracking has been studied using very short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have shortcomings, though: the audience must carry an additional sensor device, and the system becomes difficult and expensive as the density of the target area grows. In addition, the typical exhibition environment contains many obstacles for the network, which degrades system performance. Above all, interaction methods based on these older device-dependent technologies cannot provide natural service to users, and because they rely on sensor recognition, every user must carry a device. The number of users who can use the system simultaneously is therefore limited.
To make up for these shortcomings, this study suggests a technology that obtains the exact location of users through location mapping using Wi-Fi and 3D cameras together with smartphones. We applied the signal amplitude of wireless LAN access points to develop a lower-cost indoor location tracking system: an AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the device itself can serve as part of the tracking system. For the 3D camera we used the Microsoft Kinect sensor, which is equipped with functions that discriminate depth and human information within the shooting area and is therefore appropriate for extracting a user's body, vector, and acceleration information at low cost. We confirm the location of audience members using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. 3D cameras located in each cell area obtain the exact location and status of the users: they are connected to the Camera Client, compute the mapping information aligned to each cell, and extract the users' exact locations together with the status and pattern information of the audience. The location mapping technique of the Camera Client decreases the error rate of indoor location service, increases the accuracy of individual discrimination within the area through body-information-based discrimination, and establishes a foundation for multimodal interaction technology at exhibitions. The computed data and information enable users to receive the appropriate interaction service through the main server.
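The coarse Wi-Fi stage of this two-stage scheme can be sketched very simply: the audience member's cell is taken to be the cell of the access point with the strongest received signal, after which the 3D camera in that cell refines the position. AP names, cell labels, and RSSI values below are illustrative assumptions.

```python
# Hypothetical mapping from access point to exhibition cell.
ap_cell = {"AP-01": "cell-A", "AP-02": "cell-B", "AP-03": "cell-C"}

def locate_cell(rssi_by_ap):
    """Return the cell of the AP with the strongest (least negative) RSSI."""
    strongest = max(rssi_by_ap, key=rssi_by_ap.get)
    return ap_cell[strongest]

# A smartphone scan reporting RSSI in dBm for each visible AP.
print(locate_cell({"AP-01": -71, "AP-02": -48, "AP-03": -83}))  # cell-B
```

Real deployments usually smooth RSSI over several scans before deciding, since single readings fluctuate with obstacles and crowd density, exactly the indoor problems the abstract mentions.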

Estimation of Person Height and 3D Location using Stereo Tracking System (스테레오 추적 시스템을 이용한 보행자 높이 및 3차원 위치 추정 기법)

  • Ko, Jung Hwan;Ahn, Sung Soo
    • Journal of Korea Society of Digital Industry and Information Management, v.8 no.2, pp.95-104, 2012
  • In this paper, estimation of the height and 3D location of a moving person using a pan/tilt-embedded stereo tracking system is suggested and implemented. In the proposed system, the face coordinates of a target person are detected from sequential input stereo image pairs using the YCbCr color model and phase-type correlation methods; then, using these data together with the geometric information of the stereo tracking system, the distance from the stereo camera to the target and the 3-dimensional location of the target person are extracted. Based on these extracted data, the pan/tilt system embedded in the stereo camera is controlled to adaptively track the moving person, and as a result the moving trajectory of the target can be obtained. Experiments using 780 frames of sequential stereo image pairs show that the standard deviation of the target's position displacement in the horizontal and vertical directions after tracking is kept to very low values of 1.5 and 0.42 on average over the 780 frames, and the error ratio between the measured and computed 3D coordinate values of the target is also kept to a very low value of 0.5% on average. These good experimental results suggest the possibility of implementing a new stereo target tracking system with high accuracy and a very fast response time using the proposed algorithm.
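The underlying stereo geometry can be shown with the standard pinhole relations: range from disparity (Z = f·B/d), and height from the person's pixel height at that range. These formulas are textbook stereo, and the numbers are illustrative, not the paper's calibration.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole stereo range: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def person_height(height_px, focal_px, depth_m):
    """Back-project a pixel height to meters at range Z: H = h_px * Z / f."""
    return height_px * depth_m / focal_px

# Illustrative rig: 800 px focal length, 12 cm baseline, 32 px face disparity.
z = stereo_depth(800.0, 0.12, 32.0)     # range to the person, in meters
h = person_height(400.0, 800.0, z)      # person spans 400 px at that range
```

The paper's contribution lies in tracking (face detection plus pan/tilt control), but every 3D estimate it reports ultimately rests on relations of this form.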

A Study on a Portable Green-algae Remover Device based on Arduino and OpenCV using a DO Sensor and Raspberry Pi Camera (DO 센서와 라즈베리파이 카메라를 활용한 아두이노와 OpenCV기반의 이동식 녹조제거장치에 관한 연구)

  • Kim, Min-Seop;Kim, Ye-Ji;Im, Ye-Eun;Hwang, You-Seong;Baek, Soo-Whang
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.17 no.4, pp.679-686, 2022
  • In this paper, we implemented an algae removal device that recognizes and removes green algae in water using a Raspberry Pi camera and a DO (dissolved oxygen) sensor. The Raspberry Pi board recognizes the color of green algae by converting the RGB values obtained from the camera into HSV. Through this, the location of the algae is identified, and when the decrease in dissolved oxygen at that location, measured by the DO sensor, exceeds a reference value, the removal device is driven to spray an algae removal solution. The Raspberry Pi camera pipeline uses OpenCV, and the motor movement is controlled according to the output value of the DO sensor and the result of the camera's green algae recognition. Algae recognition and spraying of the removal solution were implemented with Arduino and Raspberry Pi, and the feasibility of the proposed portable algae removal device was verified through experiments.
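The RGB-to-HSV recognition step can be sketched per pixel with the standard library: convert the pixel and test whether its hue falls in a green band. The thresholds here are illustrative assumptions, not the paper's calibrated values (the actual system does this over whole frames with OpenCV).

```python
import colorsys

def is_green(r, g, b, hue_lo=0.22, hue_hi=0.45, sat_min=0.3, val_min=0.2):
    """True if an 8-bit RGB pixel has a green hue with enough saturation
    and brightness to count as algae (thresholds are illustrative)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue_lo <= h <= hue_hi and s >= sat_min and v >= val_min

print(is_green(40, 180, 60))   # algae-like green pixel
print(is_green(30, 60, 200))   # blue water pixel
```

Hue in `colorsys` is normalized to [0, 1]; with OpenCV's `cv2.cvtColor` the hue channel runs 0-179 instead, so the band limits would be rescaled accordingly.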

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.18 no.6, pp.1307-1312, 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and LiDAR (Light Detection and Ranging) sensors to address the core components of autonomous driving perception: object recognition and distance measurement. Using the proposed hybrid camera system, we extract objects within the scene and generate precise location and distance information for them. First, we employ the YOLOv7 algorithm, widely used in autonomous driving for its fast computation, high accuracy, and real-time processing, to recognize objects within the scene. We then use the multi-focal cameras to create depth maps and generate object positions and distance information. To enhance distance accuracy, we integrate the 3D distance information obtained from the LiDAR sensors with the generated depth maps. This paper introduces an autonomous vehicle platform that, based on the proposed hybrid camera system, can perceive its surroundings more accurately during operation, and it provides precise 3D spatial location and distance information. We anticipate that this will improve the safety and efficiency of autonomous vehicles.
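One plausible reading of the fusion step can be sketched as follows: for each detected bounding box, take the median of the LiDAR returns projected inside it, falling back to the camera depth map when no return lands in the box. The abstract does not specify the fusion rule, so the function, boxes, points, and depth map below are all illustrative assumptions (not YOLOv7 output).

```python
import statistics

def fuse_distance(box, lidar_pts, depth_map_m):
    """box = (x1, y1, x2, y2) in pixels; lidar_pts = [(u, v, range_m), ...]
    already projected into the image; depth_map_m = camera depth in meters."""
    x1, y1, x2, y2 = box
    inside = [r for (u, v, r) in lidar_pts if x1 <= u <= x2 and y1 <= v <= y2]
    if inside:                           # prefer LiDAR: metrically accurate
        return statistics.median(inside)
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    return depth_map_m[cy][cx]           # fall back to the stereo depth map

depth = [[0.0] * 100 for _ in range(100)]          # placeholder depth map
pts = [(12, 14, 8.4), (15, 16, 8.6), (90, 40, 20.1)]
d = fuse_distance((10, 10, 20, 20), pts, depth)    # median of the in-box returns
```

The median makes the per-object distance robust to stray returns from the background bleeding into the box, which is why it is a common choice for this kind of late fusion.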