• Title/Summary/Keyword: operation execution (연산 수행)


Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.1
    • /
    • pp.96-110
    • /
    • 2017
  • Most vehicle detection studies that use a conventional or wide-angle lens suffer from blind spots in rear-view detection, and their images are vulnerable to noise and a variety of external conditions. In this paper, we propose a method that detects vehicles in harsh external environments involving noise, blind spots, and similar conditions. First, a fish-eye lens is used to minimize blind spots compared to a wide-angle lens. Because nonlinear radial distortion increases as the lens angle grows, calibration was applied after initializing and optimizing the distortion constant to ensure accuracy. In addition, the original image was analyzed alongside this calibration to remove fog and correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal generally requires considerable computation, so the widely used Dark Channel Prior algorithm was adopted to keep the calculation time low. Gamma correction was used to correct brightness, and a brightness and contrast evaluation was conducted on the image to determine the gamma value needed for correction. The evaluation used only part of the image, rather than the whole, to reduce calculation time. Once the brightness and contrast values were calculated, they were used to decide the gamma value and correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were registered as a single image to minimize the total calculation time. The HOG feature extraction method was then used to detect the vehicle in the corrected image. As a result, vehicle detection using the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate compared to the existing vehicle detection method.
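
The brightness step described above can be pictured with a short sketch: estimate a gamma value from the mean brightness of only a central region of the frame, apply it with a lookup table, and compute a minimal dark-channel map for the fog-removal branch. This is a hedged illustration in Python with OpenCV under assumed parameters (ROI fraction, gamma mapping, patch size); it is not the paper's exact pipeline or thresholds.

```python
import cv2
import numpy as np

def dark_channel(img_bgr, patch=15):
    """Per-pixel channel minimum followed by a min filter (the Dark Channel Prior map).
    A full dehazing step would go on to estimate atmospheric light and transmission."""
    min_rgb = img_bgr.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_gamma(img_gray, roi_frac=0.5):
    """Pick a gamma value from the mean brightness of a central ROI only,
    so the evaluation stays cheap (ROI fraction and mapping are illustrative)."""
    h, w = img_gray.shape
    rh, rw = int(h * roi_frac), int(w * roi_frac)
    y0, x0 = (h - rh) // 2, (w - rw) // 2
    mean = max(img_gray[y0:y0 + rh, x0:x0 + rw].mean() / 255.0, 1e-3)
    # gamma < 1 brightens a dark ROI, gamma > 1 darkens a bright one
    return float(np.clip(np.log(0.5) / np.log(mean), 0.4, 2.5))

def gamma_correct(img_bgr, gamma):
    lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    return cv2.LUT(img_bgr, lut)

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in rear-view frame
g = estimate_gamma(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
bright = gamma_correct(frame, g)   # brightness-correction branch
dark = dark_channel(frame)         # fog-removal branch (DCP map only)
```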

Implementation of Markerless Augmented Reality with Deformable Object Simulation (변형물체 시뮬레이션을 활용한 비 마커기반 증강현실 시스템 구현)

  • Sung, Nak-Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.17 no.4
    • /
    • pp.35-42
    • /
    • 2016
  • Recently, many studies have focused on markerless augmented reality systems that use the user's face, foot, or hand to alleviate the disadvantages of marker-based augmented reality. In addition, most existing augmented reality systems have used rigid objects, since they only need to insert virtual objects and support basic interaction with them. In this paper, unlike display-bound, marker-based augmented reality systems restricted to rigid objects, we designed and implemented a markerless augmented reality system that uses deformable objects, so that it can be applied to various fields requiring interactive situations with a user. Deformable objects are generally implemented with mass-spring modeling or finite element modeling: a mass-spring model can provide real-time simulation, while a finite element model achieves more accurate results from a physical and mathematical point of view. The proposed markerless augmented reality system adopts a mass-spring model with a tetrahedron structure to provide real-time simulation results. To produce plausible simulated interaction with deformable objects, the proposed method detects and tracks the user's hand with the Kinect SDK and calculates the external force applied to the object from the change in hand position. Based on this force, fourth-order Runge-Kutta integration is applied to compute the next position of the deformable object. In addition, to prevent the excessive external forces caused by fast hand movement from disturbing the natural behavior of the deformable object, a threshold value was set and applied whenever the hand movement exceeded it. Each experimental test was repeated 5 times, and the results were analyzed in terms of the computational cost of the simulation. We believe the proposed markerless augmented reality system with deformable objects can overcome the weakness of traditional marker-based augmented reality systems with rigid objects, which are not suitable for various fields including healthcare and education.
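
As a rough illustration of the simulation loop described above, the sketch below integrates a single mass-spring node with fourth-order Runge-Kutta and clamps the external force at a threshold, as the abstract describes for hand-driven forces. All constants (mass, stiffness, damping, time step, force limit) are illustrative assumptions, and a real deformable body would couple many such nodes through a tetrahedral mesh.

```python
import numpy as np

# Illustrative constants; not the paper's values.
MASS, K, DAMPING, DT = 1.0, 50.0, 0.5, 1.0 / 60.0
FORCE_LIMIT = 20.0                  # threshold that caps excessive hand-driven force

def acceleration(pos, vel, rest_pos, external_force):
    spring = -K * (pos - rest_pos)          # Hooke's law toward the rest position
    damping = -DAMPING * vel
    return (spring + damping + external_force) / MASS

def rk4_step(pos, vel, rest_pos, f_ext):
    f_ext = np.clip(f_ext, -FORCE_LIMIT, FORCE_LIMIT)   # clamp external force
    k1p, k1v = vel, acceleration(pos, vel, rest_pos, f_ext)
    k2p = vel + 0.5 * DT * k1v
    k2v = acceleration(pos + 0.5 * DT * k1p, k2p, rest_pos, f_ext)
    k3p = vel + 0.5 * DT * k2v
    k3v = acceleration(pos + 0.5 * DT * k2p, k3p, rest_pos, f_ext)
    k4p = vel + DT * k3v
    k4v = acceleration(pos + DT * k3p, k4p, rest_pos, f_ext)
    new_pos = pos + DT / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
    new_vel = vel + DT / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return new_pos, new_vel

pos, vel, rest = np.zeros(3), np.zeros(3), np.zeros(3)
hand_force = np.array([5.0, 0.0, 0.0])      # e.g. derived from the hand's position change
for _ in range(60):                         # simulate one second at 60 steps per second
    pos, vel = rk4_step(pos, vel, rest, hand_force)
print(pos)
```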

Relationship Analysis between Lineaments and Epicenters using Hotspot Analysis: The Case of Geochang Region, South Korea (핫스팟 분석을 통한 거창지역의 선구조선과 진앙의 상관관계 분석)

  • Jo, Hyun-Woo;Chi, Kwang-Hoon;Cha, Sungeun;Kim, Eunji;Lee, Woo-Kyun
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_1
    • /
    • pp.469-480
    • /
    • 2017
  • This study aims to understand the relationship between lineaments and epicenters in the Geochang region, Gyeongsangnam-do, South Korea. Instrumental observation of earthquakes by the Korea Meteorological Administration (KMA) began in 1978, and there were 6 earthquakes with magnitudes ranging from 2 to 2.5 in the Geochang region from 1978 to 2016. Lineaments were extracted from a LANDSAT 8 satellite image and a shaded relief map displayed in three dimensions using a Digital Elevation Model (DEM). Lineament density was then examined statistically by hotspot analysis. Hexagonal grids were generated for the analysis because a hexagonal pattern expresses lineaments with less discontinuity than square grids, and the grid size was selected to minimize the variance of lineament density. Since hotspot analysis measures the extent of clustering with a Z score, Z scores computed from lineament frequency ($L_f$), length ($L_d$), and intersection ($L_t$) were used to find lineament clusters in the density map. Furthermore, the Z scores were extracted at the epicenters and examined to assess the relevance of each density element to the epicenters. As a result, 15 of the 18 densities, recorded as 3 elements at 6 epicenters, were higher than 1.65, the 95% threshold of the standard normal distribution, indicating that the epicenters coincide with high-density areas. In particular, $L_f$ and $L_t$ showed a significant relationship with the epicenters, lying in the upper 95% of the standard normal distribution except for one epicenter in $L_t$. This study can be used to identify potential seismic zones by improving the accuracy of expressing the spatial distribution of lineaments and by analyzing the relationship between lineament density and epicenters. However, additional studies over a wider area with more epicenters are recommended to corroborate the results.
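
The hotspot statistic referred to above is commonly computed as a Getis-Ord Gi* Z score over a spatial weight matrix. The following is a minimal, generic sketch, not the authors' GIS workflow: it computes Gi* for one density variable on a toy set of cells and flags those above the 1.65 threshold. The adjacency matrix and density values are hypothetical.

```python
import numpy as np

def getis_ord_gi_star(values, weights):
    """Getis-Ord Gi* Z scores for one density variable (e.g. L_f, L_d, or L_t).
    values:  (n,) lineament density per hexagonal cell.
    weights: (n, n) spatial weight matrix, here binary adjacency including self."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    x_bar = x.mean()
    s = np.sqrt((x ** 2).mean() - x_bar ** 2)
    w_sum = w.sum(axis=1)
    w_sq_sum = (w ** 2).sum(axis=1)
    num = w @ x - x_bar * w_sum
    den = s * np.sqrt((n * w_sq_sum - w_sum ** 2) / (n - 1))
    return num / den

# Hypothetical toy data: 6 cells in a chain, the first two with high lineament density
density = np.array([12.0, 10.0, 3.0, 2.0, 1.0, 2.0])
adj = np.eye(6)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    adj[i, j] = adj[j, i] = 1.0
z = getis_ord_gi_star(density, adj)
hotspots = np.where(z > 1.65)[0]   # cells above the one-tailed 95% threshold used in the study
print(z, hotspots)
```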

A 2kβ Algorithm for Euler function 𝜙(n) Decryption of RSA (RSA의 오일러 함수 𝜙(n) 해독 2kβ 알고리즘)

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.7
    • /
    • pp.71-76
    • /
    • 2014
  • It is virtually impossible in the typical public-key cryptosystem RSA to recover the very large prime numbers p and q from the composite number n = pq by integer factorization. When the public key e and the composite number n are known but the private key d remains unknown in asymmetric-key RSA, message decryption is carried out by first obtaining $\phi(n) = (p-1)(q-1) = n + 1 - (p+q)$ and then computing the inverse $d = e^{-1} \pmod{\phi(n)}$. Integer factorization of n into p and q is the most widely used way to produce $\phi(n)$, and it is regarded as mathematically hard. Among the various integer factorization methods, the most popular is the congruence of squares $a^2 \equiv b^2 \pmod{n}$ with $a = (p+q)/2$, $b = (q-p)/2$, which is more commonly used than trial division $n/p = q$. Despite the availability of a number of congruence-of-squares methods, however, many of the RSA numbers remain unfactored. This paper therefore proposes an algorithm that obtains $\phi(n)$ directly and immediately. The proposed algorithm computes $2^k\beta_j \equiv 2^i \pmod{n}$, $0 \leq i \leq \gamma-1$, $k = 1, 2, \ldots$, or $2^k\beta_j = 2\beta_j$ for $2^j \equiv \beta_j \pmod{n}$, $2^{\gamma-1} < n < 2^\gamma$, $j = \gamma-1, \gamma, \gamma+1$, to obtain the solution. It was found to locate an arbitrarily positioned $\phi(n)$ in the range $n - 10\lfloor\sqrt{n}\rfloor < \phi(n) \leq n - 2\lfloor\sqrt{n}\rfloor$ much more efficiently than conventional algorithms.
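
A toy numeric example makes the relations the paper relies on concrete: $\phi(n) = (p-1)(q-1) = n + 1 - (p+q)$, the private key $d = e^{-1} \pmod{\phi(n)}$, and the search range $n - 10\lfloor\sqrt{n}\rfloor < \phi(n) \leq n - 2\lfloor\sqrt{n}\rfloor$. The sketch below uses textbook-sized primes and does not implement the proposed $2^k\beta$ search itself.

```python
import math

# Toy RSA numbers (illustrative; real RSA moduli are hundreds of digits long)
p, q, e = 61, 53, 17
n = p * q                           # 3233
phi = (p - 1) * (q - 1)             # 3120
assert phi == n + 1 - (p + q)       # phi(n) = (p-1)(q-1) = n + 1 - (p+q)

d = pow(e, -1, phi)                 # private key d = e^(-1) mod phi(n)  (Python 3.8+)
m = 65                              # sample message
c = pow(m, e, n)                    # encryption: c = m^e mod n
assert pow(c, d, n) == m            # decryption recovers m once phi(n) is known

# The range in which the paper's 2^k*beta algorithm searches for phi(n)
low = n - 10 * math.isqrt(n)        # n - 10*floor(sqrt(n))
high = n - 2 * math.isqrt(n)        # n - 2*floor(sqrt(n))
assert low < phi <= high
print(n, phi, d, (low, high))
```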

Physicochemical Properties of Loin and Rump in the Native Horse Meat from Jeju (제주산 재래 마육의 등심부위와 볼기부위의 물리화학적 특성)

  • Kim Young-Boong;Jeon Ki-Hong;Rho Jung-Hae;Kang Suk-Nam
    • Food Science of Animal Resources
    • /
    • v.25 no.4
    • /
    • pp.365-372
    • /
    • 2005
  • This study was carried out to investigate the physicochemical properties of loin and rump in native horse meat from Jeju. Analysis of the chemical composition of loin and rump showed 72.2% and 73.8% moisture, 20.1% and 21.2% crude protein, 2.42% and 3.08% crude fat, and 0.13% and 0.14% crude ash, respectively. Glutamic acid, at 3,275 mg/100 g in loin and 3,577 mg/100 g in rump, was the most abundant amino acid. K was the most abundant mineral at 388.0 mg/100 g, followed by P > Na > Mg > Ca. Oleic acid was the most abundant fatty acid, at 62.64% in loin and 63.77% in rump. The cholesterol contents of loin and rump were 43.25 and 43.57 mg/100 g, with no significant difference between the cuts. The pH of loin and rump was 5.60 and 5.75, also without significant difference. Loin showed higher values than rump for water-holding capacity (WHC) and for springiness in the texture analysis, though the differences were not significant. The redness of rump was higher than that of loin. In the sensory evaluation, there were significant differences in color and odor. Loin scored higher than rump in overall palatability, but the difference was not significant. These results suggest that native horse meat from Jeju can be regarded as a good meat resource.

4-way Search Window for Improving The Memory Bandwidth of High-performance 2D PE Architecture in H.264 Motion Estimation (H.264 움직임추정에서 고속 2D PE 아키텍처의 메모리대역폭 개선을 위한 4-방향 검색윈도우)

  • Ko, Byung-Soo;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.6
    • /
    • pp.6-15
    • /
    • 2009
  • In this paper, a new 4-way search window is designed for a high-performance 2D PE architecture for H.264 Motion Estimation (ME) to improve memory bandwidth. While existing 2D PE architectures reuse only the overlapped data of adjacent search windows scanned in 1-way or 3-way order, the new window exploits the overlapped data of adjacent search windows as well as of adjacent multiple scanning (window) paths, enhancing the reuse of retrieved search-window data. Scanning adjacent windows and multiple paths, instead of the single raster or zigzag scanning of adjacent windows, requires bidirectional row and column window scanning, which results in the 4-way (up, down, left, right) search window. The proposed 4-way search window improves the reuse of overlapped window data and reduces the redundancy access factor to 3.1, whereas the 1/3-way search window redundantly requires 7.7 to 11 times as much data retrieval. The new 4-way search window scheme thus improves memory bandwidth by 70% to 58% compared with the 1/3-way search window. The 2D PE architecture for the 4-way search window in H.264 ME consists of a 16×16 PE array, which computes the absolute differences between the current and reference frames, and a 5×16 reuse array, which stores the overlapped data of adjacent search windows and multiple scanning paths. Reference data can be loaded into the new 2D PE upward or downward depending on the scanning direction, and the reuse array is combined with the PE array rotating left as well as right to utilize the overlapped data of adjacent multiple scan paths. In experiments, the new implementation of the 4-way search window in MagnaChip 0.18 um technology could process HD (1280×720) video with 1 reference frame, a 48×48 search area, and 16×16 macroblocks at 30 fps at 149.25 MHz.
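
The SAD computation carried out by the PE array can be mirrored in software by a straightforward full-search block matcher. The sketch below is a reference model only; the paper's contribution is the hardware search-window reuse, not this loop, and the frame contents, search range, and synthetic shift are illustrative.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def full_search(cur, ref, bx, by, block=16, search=16):
    """Exhaustive block matching: find the motion vector of the 16x16 macroblock
    at (bx, by) in `cur` within a +/-`search` window of `ref`."""
    target = cur[by:by + block, bx:bx + block]
    best_mv, best_cost = (0, 0), float("inf")
    h, w = ref.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue                            # candidate falls outside the frame
            cost = sad(target, ref[y:y + block, x:x + block])
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost

cur = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)   # synthetic HD current frame
ref = np.roll(cur, (2, 3), axis=(0, 1))                        # reference: contents shifted
mv, cost = full_search(cur, ref, bx=640, by=360)               # expected mv: (dx, dy) = (3, 2)
print(mv, cost)
```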

Research on the Prototype Landscape of Former Donam SeoWon Located in YeonSan (연산 돈암서원(豚巖書院) 구지(舊址)의 원형경관 탐색)

  • Rho, Jae-Hyun;Choi, Jong-Hee;Shin, Sang-Sup;Lee, Won-Ho
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.30 no.4
    • /
    • pp.14-22
    • /
    • 2012
  • The position, size, and landscape of the former Donam SeoWon, as well as the physical organization of the old site, are studied to extract data for enhancing the authenticity of Donam SeoWon following its registration as a world heritage site. The results are as follows. The 'Donam(豚巖)' engraved rock, the tombstone of the teacher Sagye(沙溪), Kimjipsadang(金集祠堂) of the head family of the Gwangsan Kim clan, the Sagye stream in front of them, and the Gyeryong and Daedun mountains in the distance are united in the former Donam SeoWon as landscape elements that clearly show the character of the former site, which was called 'Donam-Wollim(豚巖園林).' Moreover, Yangseongdangsipyoung(養性堂十詠) adds the garden elements of a medicinal herb field, twin ponds, a bamboo forest, a school, and a peach orchard. On this site, one could also engage in activities closely tied to the land and to Neo-Confucianism, such as fish watching, conferencing, visiting in seclusion(訪隱), looking for monks, and overseeing farming. The former site, facing east, is assumed to have comprised Sau(祠宇) - Eungdodang(凝道堂) - Ipdeokmum(入德門) - Sanangru(山仰樓: estimated). Jeonsacheong seems to have been located to the left of the Sau area, and Yangseongdang, which contained upper and lower twin lotus ponds, on the right, surrounded by various plants. As it was used as a lecture hall for the past 250 years, the former Donam SeoWon, located 1.8 km away from the current area, must be preserved, and its landscape should be restored to establish the authenticity of Donam SeoWon.

The Performance Bottleneck of Subsequence Matching in Time-Series Databases: Observation, Solution, and Performance Evaluation (시계열 데이타베이스에서 서브시퀀스 매칭의 성능 병목 : 관찰, 해결 방안, 성능 평가)

  • Kim, Sang-Wook
    • Journal of KIISE:Databases
    • /
    • v.30 no.4
    • /
    • pp.381-396
    • /
    • 2003
  • Subsequence matching is an operation that finds, from time-series databases, subsequences whose changing patterns are similar to a given query sequence. This paper identifies the performance bottleneck in subsequence matching and proposes an effective method that significantly improves the performance of the entire subsequence matching process by resolving it. First, we analyze the disk access and CPU processing times required during the index searching and post-processing steps through preliminary experiments. Based on these results, we show that the post-processing step is the main performance bottleneck in subsequence matching and claim that its optimization is a crucial issue overlooked in previous approaches. To resolve the bottleneck, we propose a simple but quite effective method that performs the post-processing step in an optimal way: by rearranging the order in which candidate subsequences are compared with the query sequence, our method completely eliminates the redundant disk accesses and CPU processing that occur in the post-processing step. We formally prove that our method is optimal and does not incur any false dismissal. We demonstrate its effectiveness through extensive experiments. The results show that our method speeds up the post-processing step by 3.91 to 9.42 times on a data set of real-world stock sequences and by 4.97 to 5.61 times on data sets of a large volume of synthetic sequences. They also show that our method reduces the weight of the post-processing step in the entire subsequence matching process from about 90% to less than 70%, which implies that the performance bottleneck is successfully resolved. As a result, our method provides excellent performance for the entire subsequence matching process: it is 3.05 to 5.60 times faster on the real-world stock data set and 3.68 to 4.21 times faster on the large synthetic data sets compared with the previous method.
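
The key idea, rearranging candidate subsequences so that disk accesses are not repeated, can be sketched as grouping candidates by the disk page they reside on and visiting pages in order. The snippet below is a schematic illustration, not the paper's algorithm in full; `fetch_subsequence`, the page size, and the distance function are hypothetical placeholders.

```python
from collections import defaultdict
import numpy as np

PAGE_SIZE = 256          # subsequence entries per disk page (illustrative)

def post_process(candidates, fetch_subsequence, query, dist_fn, eps):
    """Post-processing with candidates rearranged by disk page.
    candidates: offsets returned by the index search, in arbitrary order.
    fetch_subsequence(off): loads the candidate subsequence starting at off.
    Grouping by page and visiting pages in order means each data page is
    touched once, removing the redundant disk accesses of the naive order."""
    by_page = defaultdict(list)
    for off in candidates:
        by_page[off // PAGE_SIZE].append(off)
    matches = []
    for page_no in sorted(by_page):              # sequential, non-repeating page order
        for off in by_page[page_no]:             # all candidates on this page, back to back
            if dist_fn(fetch_subsequence(off), query) <= eps:
                matches.append(off)
    return matches

# Toy usage with Euclidean distance on synthetic data (hypothetical values)
data = np.random.rand(10000)
cands = [8192, 37, 5120, 44, 8201]               # unordered offsets from the index search
query = data[5120:5120 + 64]
hits = post_process(cands, lambda off: data[off:off + 64], query,
                    lambda a, b: float(np.linalg.norm(a - b)), eps=1e-9)
print(hits)                                       # contains 5120
```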

Two-dimensional Velocity Measurements of Campbell Glacier in East Antarctica Using Coarse-to-fine SAR Offset Tracking Approach of KOMPSAT-5 Satellite Image (KOMPSAT-5 위성영상의 Coarse-to-fine SAR 오프셋트래킹 기법을 활용한 동남극 Campbell Glacier의 2차원 이동속도 관측)

  • Chae, Sung-Ho;Lee, Kwang-Jae;Lee, Sungu
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_3
    • /
    • pp.2035-2046
    • /
    • 2021
  • Glacier movement speed is the most basic measurement for glacial dynamics research and a very important indicator in predicting sea-level rise due to climate change. In this study, two-dimensional velocity measurements of Campbell Glacier, located in Terra Nova Bay, East Antarctica, were obtained with the SAR offset tracking technique. For this purpose, domestic KOMPSAT-5 SAR satellite images acquired on July 9, 2021 and August 6, 2021 were used. The multi-kernel SAR offset tracking proposed in previous studies obtains an optimal result that satisfies both resolution and precision, but because offset tracking is performed repeatedly for each kernel size, it demands intensive computational power and time. Therefore, in this study, we propose a coarse-to-fine offset tracking strategy. Through coarse-to-fine SAR offset tracking, results with improved observation precision (about 4 times better in the azimuth direction) can be obtained while maintaining resolution, compared with general offset tracking. Using the proposed technique, two-dimensional velocity measurements of Campbell Glacier were generated. Analysis of the two-dimensional velocity field shows that the grounding line of Campbell Glacier lies at approximately latitude 74.56°S. The flow velocity of the Campbell Glacier Tongue estimated in this study (185-237 m/yr) is higher than that of 1988-1989 (140-240 m/yr). Compared with the flow velocity in 2010-2012 (181-268 m/yr), the speed near the grounding line was similar, but the speed at the end of the Campbell Glacier Tongue decreased. However, this may partly be an artifact, because the result of this study is an annual rate extrapolated from glacier movement over only 28 days; for an accurate comparison, the data should be extended into a time series and the annual rate computed precisely. Through this study, the two-dimensional velocity of a glacier was observed for the first time using KOMPSAT-5, a domestic X-band SAR satellite, and the coarse-to-fine SAR offset tracking approach applied to KOMPSAT-5 imagery proved very useful for observing the two-dimensional velocity of glacier movement.
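
A simple way to picture coarse-to-fine offset tracking is a two-stage template match per kernel: a coarse scan of the search window on a sparse grid, then a single-pixel refinement around the coarse peak. The sketch below uses plain normalized cross-correlation on synthetic data; the kernel size, search range, step, and the absence of subpixel interpolation are simplifying assumptions and do not reproduce the paper's multi-kernel KOMPSAT-5 processing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ncc(a, b):
    """Zero-normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_offset(master, slave, cy, cx, kernel=64, search=16, coarse_step=4):
    """Coarse-to-fine offset tracking for one kernel centered at (cy, cx):
    stage 1 scans the +/-search window on a sparse grid, stage 2 refines with
    single-pixel steps around the coarse peak. Real processing chains add
    subpixel peak interpolation and repeat this over a grid of kernels."""
    half = kernel // 2
    ref_patch = master[cy - half:cy + half, cx - half:cx + half]

    def best(dys, dxs):
        best_off, best_score = (0, 0), -2.0
        for dy in dys:
            for dx in dxs:
                y, x = cy + dy - half, cx + dx - half
                cand = slave[y:y + kernel, x:x + kernel]
                if cand.shape != ref_patch.shape:
                    continue                     # skip windows falling off the image
                score = ncc(ref_patch, cand)
                if score > best_score:
                    best_score, best_off = score, (dy, dx)
        return best_off

    cdy, cdx = best(range(-search, search + 1, coarse_step),
                    range(-search, search + 1, coarse_step))
    return best(range(cdy - coarse_step, cdy + coarse_step + 1),
                range(cdx - coarse_step, cdx + coarse_step + 1))

# Synthetic check: the "slave" image is the master shifted 3 rows down, 5 columns right
master = gaussian_filter(np.random.rand(256, 256), sigma=2)
slave = np.roll(master, (3, 5), axis=(0, 1))
print(track_offset(master, slave, cy=128, cx=128))   # should recover (3, 5)
```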

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created while operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, a separate log processing system is needed to gather, store, categorize, and analyze the log data generated while processing the client's business. However, in existing computing environments it is difficult to realize the flexible storage expansion required for a massive amount of unstructured log data and to execute the many functions needed to categorize and analyze the stored unstructured log data. Thus, in this study, we use cloud computing technology to realize a cloud-based log processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage periods or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic restore functions so that it can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have rigid schemas that are inappropriate for processing unstructured log data, and such strict schemas make it hard to expand across nodes when the stored data must be distributed as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system: its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel, distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log processing system that uses only MySQL, covering log insertion and query performance, demonstrates the superiority of the proposed system. Moreover, an optimal chunk size is identified through a MongoDB log-insert performance evaluation for various chunk sizes.
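
The role MongoDB plays here, accepting differently shaped log documents without a fixed schema and aggregating them for the graph module, can be sketched with pymongo. The connection string, database, collection, and field names below are hypothetical, and a running MongoDB instance is assumed; this is an illustration of the idea, not the paper's implementation.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Hypothetical connection and collection names; MongoDB's schema-free documents let
# differently shaped log records coexist in one collection, and sharding spreads
# them across nodes as volume grows.
client = MongoClient("mongodb://cloud-log-server:27017")
logs = client["bank_logs"]["transaction_logs"]

logs.insert_many([
    {   # a web-channel log with client fields
        "ts": datetime.now(timezone.utc),
        "type": "web",
        "client_id": "C-1021",
        "action": "balance_inquiry",
        "latency_ms": 182,
    },
    {   # an ATM log with a different shape; no schema migration is needed
        "ts": datetime.now(timezone.utc),
        "type": "atm",
        "terminal": "ATM-077",
        "event": "cash_withdrawal",
        "amount": 200000,
    },
])

# A per-hour count by log type, the kind of summary a graph module could plot
pipeline = [
    {"$group": {"_id": {"type": "$type", "hour": {"$hour": "$ts"}},
                "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in logs.aggregate(pipeline):
    print(row)
```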