• Title/Summary/Keyword: 위치참조방법 (location referencing method)


A Tag Proximity Information Acquisition Scheme for RFID Yoking Proof (RFID 요킹증명을 위한 인접태그 정보 획득 기법)

  • Ham, Hyoungmin
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.9
    • /
    • pp.476-484
    • /
    • 2019
  • RFID yoking proof proves that a pair of tags was scanned at the same time. Since tags scanned simultaneously by a single reader are adjacent to each other, the yoking proof is used in applications that need to check the physical proximity of tagged objects. Most yoking proof schemes require pre-knowledge of adjacent tags, and if an error occurs while collecting information about adjacent tags, all subsequent proofs fail verification. However, no prior research suggests a concrete method for obtaining information about adjacent tags. In this study, I propose a tag proximity information acquisition scheme for a yoking proof. The proposed method consists of two steps: scanning area determination and scanning area verification. In the first step, the size and position of the area in which to scan tags are determined in consideration of the positions and transmission ranges of the tags. In the next step, whether tag scanning was performed within the scanning area is verified through reference tags at fixed positions. In analysis, I show that the determined scanning area assures acquisition of adjacent tag information and that the scanning area verification detects deformation and deviation of the scanning area.
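The two steps above can be sketched geometrically: the area from which a reader can scan all target tags is the intersection of their transmission-range disks, and a scan is accepted only if exactly the expected fixed-position reference tags are observed. A minimal sketch assuming circular transmission ranges and 2-D coordinates; the function names are illustrative, not from the paper:

```python
from math import dist

def in_scanning_area(reader_pos, tag_positions, tx_range):
    """Step 1: a reader at reader_pos can scan every tag only if each tag
    lies within the transmission range, so the scanning area is the
    intersection of the disks of radius tx_range around the tags."""
    return all(dist(reader_pos, t) <= tx_range for t in tag_positions)

def verify_scan(seen_reference_tags, expected_reference_tags):
    """Step 2: accept the scan only if exactly the reference tags expected
    inside the determined area were observed; a deformed or shifted area
    misses some of them or picks up extra ones."""
    return set(seen_reference_tags) == set(expected_reference_tags)

# two tags 4 m apart, 3 m transmission range
tags = [(0.0, 0.0), (4.0, 0.0)]
print(in_scanning_area((2.0, 0.0), tags, 3.0))  # True: midpoint reaches both
print(in_scanning_area((6.0, 0.0), tags, 3.0))  # False: too far from the first tag
```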

New Illumination compensation algorithm improving a multi-view video coding performance by advancing its temporal and inter-view correlation (다시점 비디오의 시공간적 중복도를 높여 부호화 성능을 향상시키는 새로운 조명 불일치 보상 기법)

  • Lee, Dong-Seok;Yoo, Ji-Sang
    • Journal of Broadcast Engineering
    • /
    • v.15 no.6
    • /
    • pp.768-782
    • /
    • 2010
  • Because of the different shooting positions of multi-view cameras and imperfect camera calibration, illumination mismatches can occur in multi-view video. This variation can degrade the performance of multi-view video coding (MVC). A histogram matching algorithm can be applied in a prefiltering step to compensate for these inconsistencies: once all camera frames of a multi-view sequence are adjusted to a predefined reference through histogram matching, the coding efficiency of MVC improves. However, the histogram distribution can differ not only between neighboring views but also between successive frames within a view, on account of movements of the camera and of some objects, especially people. Therefore, a histogram matching algorithm that references all frames in the chosen view is not appropriate for compensating the illumination differences of such sequences. Thus we propose two new algorithms: an image classification algorithm that applies two criteria to improve the correlation between inter-view frames, and a histogram matching algorithm that references and matches a group of pictures (GOP) as a unit to improve the correlation between successive frames. Experimental results show that the compression ratio of the proposed algorithm is improved compared with conventional algorithms.
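The prefiltering step the abstract builds on is classical histogram matching: remap a frame's intensities so that its cumulative distribution follows that of a reference frame. A minimal grayscale sketch (the paper additionally classifies images and matches per GOP, which is not shown here):

```python
import numpy as np

def histogram_match(source, reference, levels=256):
    """Map source intensities so their CDF follows the reference CDF.
    levels=256 assumes 8-bit frames."""
    src_hist = np.bincount(source.ravel(), minlength=levels)
    ref_hist = np.bincount(reference.ravel(), minlength=levels)
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # for each source level, find the reference level with the nearest CDF value
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1)
    return lut[source].astype(source.dtype)
```

Applied as a prefilter, a frame shot under darker illumination is pulled toward the brightness distribution of the reference view before encoding.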

A Fast Sub-pixel Motion Estimation Algorithm Using Motion Characteristics of Variable Block Sizes (가변블록에서의 움직임 특성을 이용한 부화소 단위 고속 움직임 예측 방법)

  • Kim, Dae-Gon;Kim, Song-Ju;Yoo, Cheol-Jung;Chang, Ok-Bae
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2007.06d
    • /
    • pp.560-565
    • /
    • 2007
  • This paper proposes a fast motion estimation method for the variable motion blocks of the H.264 video standard. Motion estimation is the most computationally expensive stage of H.264 video coding. Compared with integer-pixel search, motion estimation down to sub-pixel precision can find the true motion vector, but at the cost of increased computation. This paper proposes an algorithm that increases processing speed and reduces computational complexity by pruning search points during motion estimation, exploiting the property that the second-smallest error value lies within $\pm1$ pixel of the reference point, together with the interpolation characteristics of sub-pixel positions. In the proposed method, the reference point is chosen by comparing the integer-pixel position with the smallest SATD against the PMV extracted from the reference frame. Half-pixel motion estimation is then performed only in the direction of the position with the second-smallest SATD among the eight pixel positions around the reference point, and quarter-pixel estimation likewise proceeds in the direction of the second-smallest SATD at the half-pixel level. As a result, compared with the fast motion estimation algorithm used in the existing JM reference software, the proposed method shows almost no change in PSNR while reducing motion vector estimation time by about 18%.
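The direction-restricted search described above can be sketched as follows. Plain SAD is used as a stand-in for the SATD cost (the paper uses the Hadamard-transformed SATD), and the function names are illustrative: rank the eight integer neighbours of the reference point by cost, then restrict the half-pel search to the direction of the second-smallest one instead of testing all half-pel points.

```python
import numpy as np

def cost(block, ref, x, y):
    """SAD between a block and the reference frame at integer position
    (x, y); a stand-in for the SATD cost used in the paper."""
    h, w = block.shape
    cand = ref[y:y + h, x:x + w]
    return int(np.abs(block.astype(int) - cand.astype(int)).sum())

def second_best_direction(block, ref, cx, cy):
    """Among the 8 integer neighbours of the reference point (cx, cy),
    return the one with the second-smallest cost; the half-pel search is
    then performed only in that direction."""
    neigh = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    ranked = sorted(neigh, key=lambda d: cost(block, ref, cx + d[0], cy + d[1]))
    return ranked[1]  # ranked[0] is the smallest, ranked[1] the second smallest
```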


Physical Offset of UAVs Calibration Method for Multi-sensor Fusion (다중 센서 융합을 위한 무인항공기 물리 오프셋 검보정 방법)

  • Kim, Cheolwook;Lim, Pyeong-chae;Chi, Junhwa;Kim, Taejung;Rhee, Sooahm
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1125-1139
    • /
    • 2022
  • In an unmanned aerial vehicle (UAV) system, a physical offset can exist between the global positioning system/inertial measurement unit (GPS/IMU) sensor and an observation sensor such as a hyperspectral sensor or a lidar sensor. As a result of this physical offset, a misalignment between images can occur along the flight direction. In particular, in a multi-sensor system the observation sensor has to be replaced regularly to mount another observation sensor, and a high cost must then be paid to acquire new calibration parameters. In this study, we establish a precise sensor model equation applicable to multiple sensors in common and propose an independent physical offset estimation method. The proposed method consists of three steps. First, we define an appropriate rotation matrix for our system and an initial sensor model equation for direct georeferencing. Next, an observation equation for physical offset estimation is established by extracting correspondences between ground control points and the data observed by a sensor. Finally, the physical offset is estimated from the observed data, and the precise sensor model equation is established by applying the estimated parameters to the initial sensor model equation. Datasets from four regions with different latitudes and longitudes (Jeon-ju, Incheon, Alaska, Norway) were compared to analyze the effect of the calibration parameters. We confirmed that the misalignment between images was corrected after applying the physical offset in the sensor model equation. Absolute position accuracy was analyzed for the Incheon dataset against ground control points: for the hyperspectral image, the root mean square error (RMSE) in the X and Y directions was 0.12 m, and for the point cloud the RMSE was 0.03 m. Furthermore, the relative position accuracy at a specific point between the adjusted point cloud and the hyperspectral images was 0.07 m. We therefore confirmed that precise data mapping is possible for observations without ground control points through the proposed estimation method, as well as the possibility of multi-sensor fusion. From this study, we expect that a flexible multi-sensor platform system can be operated at reduced cost through the independent parameter estimation method.
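The core of the sensor model is direct georeferencing with a lever-arm term: the observation sensor's position is the GPS/IMU position plus the physical offset rotated into the world frame. A minimal sketch assuming a Z-Y-X Euler convention (the paper defines its own rotation matrix, and in practice a boresight angular offset is estimated alongside the lever arm):

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-world rotation from the GPS/IMU attitude, composed in
    Z-Y-X order (an assumption; conventions vary by system)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def sensor_position(gps_pos, attitude, lever_arm):
    """Direct georeferencing: the observation sensor sits at the GPS/IMU
    position plus the physical offset (lever arm) rotated into the world
    frame. Estimating lever_arm from ground control points is the core
    of the proposed calibration."""
    R = rotation_matrix(*attitude)
    return np.asarray(gps_pos) + R @ np.asarray(lever_arm)
```

With zero attitude the sensor position is simply the GPS position plus the offset; a 180° yaw flips the horizontal component of the offset, which is why an uncalibrated offset shows up as a flight-direction-dependent misalignment.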

Performance of the Finite Difference Method Using Cache and Shared Memory for Massively Parallel Systems (대규모 병렬 시스템에서 캐시와 공유메모리를 이용한 유한 차분법 성능)

  • Kim, Hyun Kyu;Lee, Hyo Jong
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.4
    • /
    • pp.108-116
    • /
    • 2013
  • Many algorithms have been introduced to improve performance on massively parallel systems consisting of several hundred processors; a typical example is a GPU, whose many processors use shared memory. For image filtering algorithms that reference neighboring points, shared memory improves performance because adjacent pixels are accessed frequently. However, using shared memory requires rewriting existing code and consequently increases code complexity. Recent GPUs support both L1 and L2 caches along with shared memory, and since the L1 cache is located in the same area as the shared memory, an improvement in performance from using the cache is predictable. In this paper, the performance of cache and shared memory is compared. In conclusion, the performance of the cache-based algorithm is very similar to that of the shared-memory version, while the code complexity that appears in a shared-memory implementation is resolved by the cache-based approach.
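The neighbour-referencing access pattern that makes shared memory (or the L1 cache) pay off is visible in the finite-difference update itself. A NumPy sketch of one Jacobi step for the 2-D Laplace equation, the kind of stencil such benchmarks use (the actual experiments run this as a GPU kernel; this sketch only shows the data-access pattern):

```python
import numpy as np

def jacobi_step(u):
    """One finite-difference (Jacobi) update of the 2-D Laplace equation.
    Each interior point averages its four neighbours; it is exactly this
    repeated access to adjacent points that shared memory or the L1
    cache accelerates on a GPU."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return new
```

Every interior output element reads four inputs, and adjacent outputs share three of them, so each input is reused several times per sweep, which is what on-chip memory exploits.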

Hand-Gesture Recognition Using Concentric-Circle Expanding and Tracing Algorithm (동심원 확장 및 추적 알고리즘을 이용한 손동작 인식)

  • Hwang, Dong-Hyun;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.3
    • /
    • pp.636-642
    • /
    • 2017
  • In this paper, we propose a novel hand-gesture recognition algorithm using concentric-circle expanding and tracing. The proposed algorithm determines the region of interest of the hand image by preprocessing the original image acquired from a web camera, and extracts hand-gesture features such as the number of stretched fingers, the fingertips and finger bases, and the angles between the fingers, which can serve as an intuitive method of human-computer interaction. The proposed algorithm also reduces computational complexity compared with the raster-scan method by referencing only the pixels of the concentric circles. The experimental results show that 9 hand gestures can be recognized with an average accuracy of 90.7% and an average execution time of 78 ms. The algorithm is confirmed as a feasible input method for virtual reality, augmented reality, mixed reality, and perceptual interfaces of human-computer interaction.
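The circle-tracing idea can be sketched as follows: sample points along one concentric circle over a binary hand mask and count the contiguous runs of hand pixels it crosses; at a radius beyond the palm, each run corresponds to one stretched finger. The function name and the 360-sample resolution are illustrative:

```python
import numpy as np

def runs_on_circle(mask, center, radius, samples=360):
    """Trace one concentric circle over a binary hand mask and count the
    contiguous runs of hand pixels it crosses. Only the sampled circle
    pixels are referenced, not the whole image."""
    cy, cx = center
    angles = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    ys = np.clip((cy + radius * np.sin(angles)).astype(int), 0, mask.shape[0] - 1)
    xs = np.clip((cx + radius * np.cos(angles)).astype(int), 0, mask.shape[1] - 1)
    on = mask[ys, xs].astype(int)
    # a run starts wherever the trace enters the hand region (0 -> 1);
    # np.roll makes the comparison wrap around the circle
    return int(np.sum((on - np.roll(on, 1)) == 1))
```

Expanding the circle from the palm outward and tracking how the run count changes with radius is what yields fingertip and finger-base positions in the full algorithm.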

Pointer Networks based on Skip Pointing Model (스킵 포인팅 모델 기반 포인터 네트워크)

  • Park, Cheoneum;Lee, Changki
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.12
    • /
    • pp.625-631
    • /
    • 2016
  • Pointer networks are a model that generates an output sequence whose elements correspond to positions in an input sequence, based on the attention mechanism. The time complexity of pointer networks is $O(N^2)$ because, for an input sequence of size N, the model computes attention over every input at each decoding step, resulting in long decoding times. In this paper, we propose pointer networks based on a skip pointing model, which examines only the necessary input vectors during decoding in order to reduce the decoding time. Experiments were conducted on pronoun coreference resolution using the proposed method. Our results show that the processing time per sentence was approximately 1.15 times faster and the MUC F1 was 83.60%, approximately a 2.17% improvement over, and a better performance than, the original pointer networks.
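A toy dot-product pointer decoder shows where the $O(N^2)$ cost comes from, along with one simplified reading of the skip idea: masking out positions that no longer need to be examined at later steps. This is an illustration only; the abstract does not specify the paper's exact skip criterion, and all names here are hypothetical:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_decode(enc, dec, skip_visited=True):
    """Toy pointer decoder: each decoder state attends over all encoder
    states and points at the argmax, giving O(N) work per step and
    O(N^2) overall. With skip_visited=True, already-pointed positions
    are excluded at later steps (a simplified skip-pointing reading)."""
    visited, out = set(), []
    for d in dec:
        scores = enc @ d                    # dot-product attention scores
        if skip_visited:
            for i in visited:
                scores[i] = -np.inf         # skip this input position
        idx = int(np.argmax(softmax(scores)))
        visited.add(idx)
        out.append(idx)
    return out
```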

A Data Transformation Method for Visualizing the Statistical Information based on the Grid (격자 기반의 통계정보 표현을 위한 데이터 변환 방법)

  • Kim, Munsu;Lee, Jiyeong
    • Spatial Information Research
    • /
    • v.23 no.5
    • /
    • pp.31-40
    • /
    • 2015
  • The purpose of this paper is to propose a data transformation method for visualizing statistical information on a grid system of regular shape and size. A grid is a better unit than administrative boundaries or census blocks for examining the distribution of statistical information, and it can be used flexibly as a spatial unit on a map. On the other hand, the current method, areal interpolation, requires additional processing to convert various kinds of statistical information to a grid. Therefore, this paper proposes three steps for converting statistical information to a grid: 1) geocoding the statistical information, 2) converting the spatial information by defining the spatial relationship, and 3) transforming the attributes in consideration of the data's scale of measurement. The method is applied to the population density of Seoul. In particular, spatial autocorrelation is computed to check the consistency of the grid display when different reference data are used for the same statistic. As a result, the two grid distributions are similar to each other when the population density data represented by census blocks and by buildings are converted to the grid. The results demonstrate that consistent data conversion can be performed with the proposed method.
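The three steps can be sketched in miniature: assume each statistical record has already been geocoded to an (x, y) point carrying a value, assign it to the grid cell that contains it (the spatial relationship), and sum the values per cell (a suitable attribute transformation for count-scale data such as population). The cell size and numbers below are hypothetical:

```python
from collections import Counter

def to_grid(points, cell_size):
    """Aggregate geocoded (x, y, value) records into square grid cells:
    each point falls into the cell indexed by floor division of its
    coordinates, and values are summed per cell."""
    cells = Counter()
    for x, y, value in points:
        cell = (int(x // cell_size), int(y // cell_size))
        cells[cell] += value
    return dict(cells)

# three geocoded population counts, 100 m cells (hypothetical numbers)
pts = [(30.0, 40.0, 120), (90.0, 10.0, 80), (150.0, 40.0, 60)]
print(to_grid(pts, 100.0))  # the first two points share cell (0, 0)
```

For ratio-scale attributes such as density, the per-cell aggregation would be an area-weighted mean rather than a sum, which is what step 3's attention to the scale of measurement is about.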

Detection Method of Human Face, Facial Components and Rotation Angle Using Color Value and Partial Template (컬러정보와 부분 템플릿을 이용한 얼굴영역, 요소 및 회전각 검출)

  • Lee, Mi-Ae;Park, Ki-Soo
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.465-472
    • /
    • 2003
  • For effective preprocessing of a face input image, it is necessary to detect each of the face components, calculate the face area, and estimate the rotation angle of the face. The method proposed in this study can produce robust results under conditions such as different levels of illumination, variable face sizes, face rotation angles, and background colors similar to the skin color of the face. The first step of the proposed method detects the estimated face area using adapted skin color information in the band-wide HSV color space converted from RGB, together with skin color information from a histogram. Using these results, a lip area is detected within the estimated face area. After estimating the rotation slope of the lip area along the X axis, the method determines the face shape based on face information. After detecting the eyes in the face area by matching a partial template made from both eyes, the Y-axis rotation angle can be estimated by calculating the eyes' locations in three-dimensional space with reference to the face area. Experiments on various face images verified the effectiveness of the proposed algorithm.
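The two rotation estimates can be sketched with elementary trigonometry. The X-axis (in-plane) angle follows from the slope of the line through two feature points such as the lip corners; the Y-axis cue below is a rough linear heuristic based on how far the eye pair sits off the center of the face box, not the paper's three-dimensional computation, and both function names are illustrative:

```python
from math import atan2, degrees

def roll_from_points(left, right):
    """In-plane rotation (degrees) from the slope of the line joining two
    facial feature points, e.g. the lip corners (the X-axis step)."""
    return degrees(atan2(right[1] - left[1], right[0] - left[0]))

def yaw_from_eyes(left_eye_x, right_eye_x, face_left, face_right):
    """Rough Y-axis rotation cue: when the face turns, the eye pair shifts
    off the horizontal center of the face box. The sign convention and
    the linear 90-degree scaling are illustrative assumptions."""
    face_mid = (face_left + face_right) / 2
    eyes_mid = (left_eye_x + right_eye_x) / 2
    half_w = (face_right - face_left) / 2
    return 90.0 * (eyes_mid - face_mid) / half_w
```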

Deep learning based face mask recognition for access control (출입 통제에 활용 가능한 딥러닝 기반 마스크 착용 판별)

  • Lee, Seung Ho
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.8
    • /
    • pp.395-400
    • /
    • 2020
  • Coronavirus disease 2019 (COVID-19) was identified in December 2019 in China and has spread globally, resulting in an ongoing pandemic. Because COVID-19 is spread mainly from person to person, every person is required to wear a facemask in public. Nevertheless, many people are still not wearing facemasks despite official advice. This paper proposes a method to predict whether a human subject is wearing a facemask or not. In the proposed method, two eye regions are detected, and the mask region (i.e., the face region below the two eyes) is predicted and extracted based on the two eye locations. For more accurate extraction of the mask region, the facial region is aligned by rotating it so that the line connecting the two eye centers is horizontal. The mask region extracted from the aligned face is fed into a convolutional neural network (CNN), producing the classification result (with or without a mask). The experimental result on 186 test images showed that the proposed method achieves a very high accuracy of 98.4%.
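The alignment and cropping steps can be sketched as follows: the rotation angle comes from the line through the two eye centers, and the mask region is the part of the aligned face below the eye line. The cropping margin (everything below the mean eye row) is an illustrative assumption:

```python
import numpy as np
from math import atan2, degrees

def alignment_angle(left_eye, right_eye):
    """Angle (degrees) by which to rotate the face so the line through
    the two eye centers becomes horizontal (the alignment step)."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    return degrees(atan2(ry - ly, rx - lx))

def mask_region(face, left_eye, right_eye):
    """Crop the candidate mask region from an already-aligned face image:
    the rows below the mean eye row. The exact margin is an assumption;
    this crop is what would be fed to the CNN classifier."""
    eye_y = (left_eye[1] + right_eye[1]) // 2
    return face[eye_y:, :]

# tilted eyes: the face must be rotated by -alignment_angle to level them
print(alignment_angle((20, 40), (60, 48)))
```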