• Title/Summary/Keyword: 연산시간 감소 (Computation Time Reduction)


A Design of the OOPP(Optimized Online Portfolio Platform) using Enterprise Competency Information (기업 직무 정보를 활용한 OOPP(Optimized Online Portfolio Platform)설계)

  • Jung, Bogeun;Park, Jinuk;Lee, ByungKwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.5 / pp.493-506 / 2018
  • This paper proposes the OOPP (Optimized Online Portfolio Platform), a design that lets job seekers search for the competencies required for employment and write and manage portfolios online efficiently. The OOPP consists of three modules. First, the JDCM (Job Data Collection Module) stores the help-wanted advertisements of job information sites in a spreadsheet. Second, the CSM (Competency Statistical Model) classifies the core competencies for each job by text-mining the collected ads. Third, the OBBM (Optimize Browser Behavior Module) lets users look up data rapidly by improving the processing speed of the browser. The OBBM in turn consists of the PSES (Parallel Search Engine Sub-Module), which optimizes search-engine computation, and the OILS (Optimized Image Loading Sub-Module), which optimizes the loading of images, text, etc. Performance analysis of the CSM shows little difference in accuracy between the CSM and the actual advertisements, with data accuracy of 99.4~100%. When the browser is optimized with the OBBM, working time is reduced by about 68.37%. Therefore, the OOPP lets users look up the analyzed results in a web page rapidly by accurately analyzing the help-wanted ads of job information sites.
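The CSM's text-mining step, classifying core competencies from collected ads, can be sketched as a simple per-ad keyword count. The function name, tokenizer, and toy ads below are illustrative assumptions, not taken from the paper:

```python
from collections import Counter
import re

def core_competencies(ads, top_n=3):
    """Count how many ads mention each keyword; return the most frequent."""
    counts = Counter()
    for ad in ads:
        # crude tokenizer: letters plus '+'/'#' so terms like "C#" survive
        tokens = re.findall(r"[a-zA-Z+#]+", ad.lower())
        counts.update(set(tokens))  # count each keyword once per ad
    return [word for word, _ in counts.most_common(top_n)]

ads = [
    "Backend developer: Java, Spring, SQL required",
    "Server engineer: Java and SQL experience",
    "Web developer: JavaScript, SQL",
]
print(core_competencies(ads))  # "sql" appears in all three ads
```

A real CSM would also need stop-word removal and Korean morphological analysis, but the counting core is the same.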

The Prediction of Durability Performance for Chloride Ingress in Fly Ash Concrete by Artificial Neural Network Algorithm (인공 신경망 알고리즘을 활용한 플라이애시 콘크리트의 염해 내구성능 예측)

  • Kwon, Seung-Jun;Yoon, Yong-Sik
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.26 no.5 / pp.127-134 / 2022
  • In this study, RCPTs (Rapid Chloride Penetration Tests) were performed on fly ash concrete with curing ages of 4~6 years. The concrete mixtures were prepared with three levels of water-to-binder (W/B) ratio (0.37, 0.42, and 0.47) and two levels of fly ash substitution ratio (0% and 30%), and the improvement in passed charge of chloride-ion behavior was quantitatively analyzed. Additionally, the results were trained with univariate time-series models consisting of the GRU (Gated Recurrent Unit) algorithm, and the models' predictions were evaluated. The RCPT results showed that fly ash concrete had reduced passed charge over time and better resistance to chloride penetration than OPC concrete. At the final evaluation period (6 years), fly ash concrete was rated 'Very low' at all W/B ratios, whereas OPC concrete was rated 'Moderate' at the highest W/B ratio (0.47). The GRU algorithm adopted for this study can analyze time-series data and has advantages such as operational efficiency. A deep learning model with 4 hidden layers was designed, and it provided reasonable predictions of passed charge. The model is limited to a single univariate time-series characteristic, but additional studies are in progress to incorporate other concrete characteristics such as strength and diffusion coefficient.
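The GRU update at the core of such a univariate model follows the standard gate equations (update gate, reset gate, candidate state). The toy dimensions and random weights below are illustrative assumptions, not the paper's 4-hidden-layer design:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step on input x with previous hidden state h."""
    z = 1 / (1 + np.exp(-(Wz @ x + Uz @ h)))   # update gate
    r = 1 / (1 + np.exp(-(Wr @ x + Ur @ h)))   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate hidden state
    return (1 - z) * h + z * h_tilde           # blend old and new state

# toy dimensions: 1 input feature (e.g., passed charge), 2 hidden units
rng = np.random.default_rng(0)
shapes = [(2, 1), (2, 2), (2, 1), (2, 2), (2, 1), (2, 2)]
Ws = [rng.normal(size=s) for s in shapes]

h = np.zeros(2)
for x_t in [0.5, 0.8, 0.3]:        # a short synthetic univariate series
    h = gru_step(np.array([x_t]), h, *Ws)
print(h)
```

Stacking such cells over time (and adding a dense output layer) gives the kind of sequence predictor the study trains on RCPT measurements.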

Radio location algorithm in microcellular wide-band CDMA environment (마이크로 셀룰라 Wide-band CDMA 환경에서의 위치 추정 알고리즘)

  • Chang, Jin-Weon;Han, Il;Sung, Dan-Keun;Shin, Bung-Chul;Hong, Een-Kee
    • The Journal of Korean Institute of Communications and Information Sciences / v.23 no.8 / pp.2052-2063 / 1998
  • Various full-scale radio location systems have been developed since ground-based radio navigation systems appeared during World War II, and more recently global positioning systems (GPS) have been widely used as a representative location system. In addition, radio location systems based on cellular networks are being intensively studied as cellular services become more and more popular. However, these studies have focused mainly on macrocellular systems whose base stations are mutually synchronized; there has been no study of systems whose base stations are asynchronous. In this paper, we propose two radio location algorithms for microcellular CDMA systems with asynchronous base stations. The first estimates the position of a personal station as the center of a rectangular area that approximates the realistic common area. The second, a road-map-based method, first finds candidate positions, the centers of roads at pseudo-range distance from the base station to which the personal station belongs, and then estimates the position by monitoring the pilot signal strengths of neighboring base stations. We compare these two algorithms with three widespread algorithms through computer simulations and investigate the effect of interference on pseudo-range measurements. The proposed algorithms require no recursive calculations and yield smaller position errors than the existing algorithms because they are less affected by non-line-of-sight propagation in microcellular environments.
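The first algorithm's idea, taking the center of a rectangle that approximates the common area of the pseudo-range circles, can be sketched by intersecting each circle's axis-aligned bounding box. This box intersection is a simplifying assumption for illustration; the paper's exact rectangle construction may differ:

```python
def estimate_position(measurements):
    """Center of the intersection of the pseudo-range circles' bounding boxes.
    measurements: list of ((x, y), pseudo_range) per base station."""
    left   = max(x - r for (x, y), r in measurements)
    right  = min(x + r for (x, y), r in measurements)
    bottom = max(y - r for (x, y), r in measurements)
    top    = min(y + r for (x, y), r in measurements)
    if left > right or bottom > top:
        raise ValueError("pseudo-range boxes have no common area")
    return ((left + right) / 2, (bottom + top) / 2)

# three base stations; the personal station is actually near (1, 1)
bs = [((0.0, 0.0), 1.5), ((2.0, 0.0), 1.5), ((1.0, 2.0), 1.2)]
print(estimate_position(bs))
```

Note the closed-form, non-recursive character matches the paper's claim: one pass over the measurements, no iterative least-squares.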


A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.1-21 / 2012
  • In recent years, the mobile phone has undergone an extremely fast evolution. It is equipped with high-quality color displays, high-resolution cameras, and real-time accelerated 3D graphics, along with other features such as a GPS sensor and a digital compass. This evolution significantly helps application developers use the power of smartphones to create rich environments offering a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude, but these systems have the big limitation that AR content is rarely overlaid precisely on the real target. Another line of research is context-based AR services using image recognition and tracking, in which AR content is precisely overlaid on the real target; however, real-time performance is restricted by the retrieval time, and such systems are hard to implement over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first easily finds surrounding landmarks and then performs recognition and tracking on them. The proposed system mainly consists of two major parts: a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media) such as text, pictures, and video in their smartphone viewfinder when they point the smartphone at a certain building or landmark. For this, a landmark recognition technique is applied. SURF point-based features are used in the matching process because of their robustness. To ensure that image retrieval and matching are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks in the pointed direction; the queried image is matched only against this selected data, so matching speed increases significantly. The second part is the annotation module. Instead of viewing only the augmented information media, users can create virtual annotations based on linked data. Full knowledge of the landmark is not required: users simply look for an appropriate topic by searching with a keyword in the linked data, which helps the system find the target URI and generate correct AR content. To recognize target landmarks, images of each selected building or landmark are captured from different angles and distances, a procedure similar to building a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and the user's location information are used to restrict the retrieval range. In existing research using clusters and GPS information, the retrieval time is around 70~80 ms; our approach reduces it to around 18~20 ms on average, so the total processing time drops from 490~540 ms to 438~480 ms. The performance improvement becomes more obvious as the database grows, demonstrating that the proposed system is efficient and robust in many cases.
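The grid-based clustering used to restrict the retrieval range can be sketched as bucketing landmark coordinates into cells and querying only the user's cell plus its neighbors. Cell size, coordinates, and function names are illustrative assumptions:

```python
def grid_cluster(points, cell_size):
    """Bucket landmark coordinates into grid cells keyed by cell index."""
    grid = {}
    for p in points:
        key = (int(p[0] // cell_size), int(p[1] // cell_size))
        grid.setdefault(key, []).append(p)
    return grid

def candidates(grid, user, cell_size):
    """Landmarks in the user's cell and the 8 surrounding cells only."""
    cx, cy = int(user[0] // cell_size), int(user[1] // cell_size)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(grid.get((cx + dx, cy + dy), []))
    return out

landmarks = [(0.1, 0.1), (0.9, 0.2), (5.0, 5.0)]   # toy coordinates
g = grid_cluster(landmarks, cell_size=1.0)
print(candidates(g, user=(0.5, 0.5), cell_size=1.0))
```

Only the candidate subset is then passed to the SURF matching stage, which is where the retrieval-time reduction comes from.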

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as an important issue in the data mining field. According to the strategy used to exploit item importance, such approaches are classified as weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, the importance of a particular transaction becomes visible in the database analysis, because a transaction's weight is higher if it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the best-known algorithms for frequent itemset mining based on transactional weights. As a representative of this family, WIS introduced the concept of and strategies for transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. They need no additional database scan after the WIT-tree is constructed, since each node of the WIT-tree holds item information such as the item and its transaction IDs. In particular, traditional algorithms perform many database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain them. WIT-FWIs-MODIFY has a unique feature that decreases the operations needed to calculate the frequency of a new itemset, and WIT-FWIs-DIFF uses a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage; a scalability test is also conducted to evaluate each algorithm's stability as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF mines more efficiently than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, is the least efficient because it requires many more computations than the others on average.
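As a sketch of the underlying definitions (not of the WIT-tree itself), the snippet below takes a transaction's weight to be the average weight of its items and defines an itemset's weighted support as the sum of the weights of transactions containing it, mined here by naive enumeration. The exact definitions in WIS may differ; weights and thresholds are toy assumptions:

```python
from itertools import combinations

def transaction_weight(tx, item_weights):
    """Average item weight of a transaction (one common definition)."""
    return sum(item_weights[i] for i in tx) / len(tx)

def weighted_frequent_itemsets(db, item_weights, min_wsup):
    """Naive enumeration: weighted support of an itemset is the sum of
    the weights of the transactions that contain it."""
    tw = [transaction_weight(tx, item_weights) for tx in db]
    items = sorted({i for tx in db for i in tx})
    result = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            wsup = sum(w for tx, w in zip(db, tw) if set(cand) <= tx)
            if wsup >= min_wsup:
                result[cand] = round(wsup, 3)
    return result

db = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
weights = {"a": 0.6, "b": 0.4, "c": 0.2}
print(weighted_frequent_itemsets(db, weights, min_wsup=0.75))
```

The WIT-tree algorithms reach the same answer without re-enumerating the database per candidate, which is exactly the single-scan advantage the paper measures.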

Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services / v.14 no.5 / pp.1-10 / 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags: active tags have a power source and can execute their own operations, while passive tags are small and low-cost, making them more suitable for the distribution industry. A reader processes the information it receives from tags, and an RFID system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied in a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the adoption of RFID systems, several problems (price, size, power consumption, security) must be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by simultaneous responses from multiple tags. Anti-collision schemes in RFID systems fall into three categories: probabilistic, deterministic, and hybrid. We introduce ALOHA-based protocols as probabilistic methods and tree-based protocols as deterministic ones. In ALOHA-based protocols, time is divided into multiple slots and tags transmit their IDs in randomly chosen slots; being probabilistic, however, they cannot guarantee that all tags are identified. In contrast, tree-based protocols guarantee that a reader identifies all tags within its transmission range. In a tree-based protocol, the reader sends a query and tags respond with their own IDs; when two or more tags respond to the same query, a collision occurs and the reader sends a new query. Frequent collisions degrade identification performance, so identifying tags quickly requires reducing collisions efficiently. Each RFID tag has a 96-bit EPC (Electronic Product Code) ID. Tags from the same company or manufacturer have similar IDs with the same prefix, so unnecessary collisions occur when identifying multiple tags with the Query Tree protocol, increasing the number of query-responses and the idle time, which significantly increases the identification time. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, the Collision Tree and Query Tree protocols identify only one bit per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves identification performance using m-bit recognition, the collision information of tag IDs, and a prediction technique. We compare the proposed scheme with other tree-based protocols under the same conditions and show that it outperforms them in both identification time and identification efficiency.
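The baseline binary Query Tree behavior described above (one query per prefix, splitting the prefix on collision) can be simulated in a few lines. The 4-bit toy IDs are assumptions for illustration, not 96-bit EPCs:

```python
def query_tree(tags):
    """Binary Query Tree identification: the reader queries a prefix and
    all tags whose ID starts with it respond. One responder is identified,
    zero is an idle slot, and two or more is a collision, which splits
    the prefix into prefix+'0' and prefix+'1'."""
    identified, queries, collisions = [], 0, 0
    stack = [""]                          # start from the empty prefix
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tags if t.startswith(prefix)]
        if len(responders) == 1:
            identified.append(responders[0])
        elif len(responders) > 1:
            collisions += 1
            stack.extend([prefix + "1", prefix + "0"])
    return sorted(identified), queries, collisions

tags = ["0010", "0011", "1100"]           # shared prefix "001" forces extra rounds
ids, q, c = query_tree(tags)
print(ids, q, c)
```

The shared "001" prefix illustrates the problem the paper targets: every shared prefix bit costs an extra collision round, which m-bit recognition and collision information aim to skip.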

A Study on the Method of Minimizing the Bit-Rate Overhead of H.264 Video when Encrypting the Region of Interest (관심영역 암호화 시 발생하는 H.264 영상의 비트레이트 오버헤드 최소화 방법 연구)

  • Son, Dongyeol;Kim, Jimin;Ji, Cheongmin;Kim, Kangseok;Kim, Kihyung;Hong, Manpyo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.2
    • /
    • pp.311-326
    • /
    • 2018
  • This paper experiments with the News sample video at QCIF (176×144) resolution in the JM v10.2 reference code of H.264/AVC-MPEG. Because of the motion prediction and compensation characteristics of the H.264 standard, an encrypted region of interest (ROI) causes drift by being unnecessarily referenced frame after frame. To mitigate the drift, the latest related work re-inserts an encrypted I-picture at a certain period, which increases the amount of additional computation and thus the bit-rate overhead of the entire video. In the proposed method, therefore, the reference search range of blocks and frames in the ROI to be encrypted is restricted during motion prediction and compensation for each frame, while the reference search range in the non-ROI is not restricted, so normal encoding efficiency is maintained. After encoding the video with this restricted reference search range, the ROI, such as an identifiable face, is encrypted with an RC4 bit-stream cipher to protect personal information in the video. The unencrypted original video, the latest related method, and the proposed method are then implemented and compared under the same conditions. The bit-rate overhead of the proposed method is 2.35% above that of the original video and 14.93% below that of the latest related method, while the temporal drift is mitigated. These improvements are verified by the experiments in this study.
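RC4 itself is a standard stream cipher, so the ROI encryption step can be sketched directly: generate a keystream (key-scheduling then pseudo-random generation) and XOR it over only the ROI byte range, leaving the rest of the bit-stream untouched. The frame bytes, ROI offsets, and key below are toy assumptions; a real encoder would operate on the actual H.264 NAL payload:

```python
def rc4_keystream(key, n):
    """RC4: key-scheduling (KSA) then n bytes of keystream (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                        # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def encrypt_roi(frame, start, end, key):
    """XOR only the ROI byte range with the RC4 keystream."""
    ks = rc4_keystream(key, end - start)
    roi = bytes(b ^ k for b, k in zip(frame[start:end], ks))
    return frame[:start] + roi + frame[end:]

frame = bytes(range(16))                      # toy "bit-stream"
enc = encrypt_roi(frame, 4, 12, b"secret")
dec = encrypt_roi(enc, 4, 12, b"secret")      # XOR cipher: same call decrypts
```

Because only the ROI bytes change, the non-ROI part of the stream decodes normally, which is the property the restricted reference search range is designed to preserve.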

Two-dimensional Velocity Measurements of Campbell Glacier in East Antarctica Using Coarse-to-fine SAR Offset Tracking Approach of KOMPSAT-5 Satellite Image (KOMPSAT-5 위성영상의 Coarse-to-fine SAR 오프셋트래킹 기법을 활용한 동남극 Campbell Glacier의 2차원 이동속도 관측)

  • Chae, Sung-Ho;Lee, Kwang-Jae;Lee, Sungu
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_3
    • /
    • pp.2035-2046
    • /
    • 2021
  • Glacier flow velocity is the most basic measurement in glacial dynamics research and a very important indicator for predicting sea-level rise due to climate change. In this study, two-dimensional velocity measurements of Campbell Glacier, located in Terra Nova Bay in East Antarctica, were obtained with the SAR offset tracking technique, using domestic KOMPSAT-5 SAR satellite images acquired on July 9, 2021 and August 6, 2021. Multi-kernel SAR offset tracking, proposed in previous studies, obtains an optimal result that satisfies both resolution and precision, but because offset tracking is performed repeatedly for each kernel size, it demands intensive computational power and time. Therefore, in this study we strategically propose a coarse-to-fine offset tracking approach. Coarse-to-fine SAR offset tracking yields results with improved observation precision (about 4 times, particularly in the azimuth direction) while maintaining the resolution of general offset tracking. Using the proposed technique, two-dimensional velocity measurements of Campbell Glacier were generated. Analysis of the two-dimensional velocity image shows that the grounding line of Campbell Glacier lies at approximately 74.56°S latitude. The flow velocity of the Campbell Glacier Tongue analyzed in this study (185-237 m/yr) has increased compared to that of 1988-1989 (140-240 m/yr). Compared to the flow velocity in 2010-2012 (181-268 m/yr), the speed near the grounding line was similar, but the speed at the end of the Campbell Glacier Tongue has decreased. However, this may be an error arising because the result of this study is an annual rate derived from glacier motion over only 28 days; for an accurate comparison, the data will need to be extended into a time series and the annual rate calculated accurately. Through this study, two-dimensional glacier velocity measurements were obtained for the first time with KOMPSAT-5, a domestic X-band SAR satellite, and the coarse-to-fine SAR offset tracking approach with KOMPSAT-5 imagery was confirmed to be very useful for observing two-dimensional glacier motion.
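The coarse-to-fine idea, a sparse grid search followed by a 1-pixel search around the coarse hit, can be sketched with a sum-of-squared-differences matcher on synthetic patches. The paper performs offset tracking on SAR amplitude imagery; the Gaussian test scene, SSD metric, and search parameters here are simplifying assumptions for illustration only:

```python
import numpy as np

def gaussian(shape, cy, cx, sigma=4.0):
    """Smooth synthetic scene: a Gaussian blob centered at (cy, cx)."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

def search(master, slave, dys, dxs):
    """Return the (dy, dx) offset minimizing sum-of-squared-differences."""
    h, w = master.shape
    scores = {(dy, dx): np.sum((master - slave[dy:dy + h, dx:dx + w]) ** 2)
              for dy in dys for dx in dxs}
    return min(scores, key=scores.get)

def coarse_to_fine(master, slave, max_off=12, step=4):
    """Coarse grid search, then a fine 1-px search around the coarse hit.
    A fine radius of step//2 suffices when the match surface is smooth,
    so the coarse minimum lands on the grid point nearest the true offset."""
    grid = range(0, max_off + 1, step)
    cy, cx = search(master, slave, grid, grid)
    r = step // 2
    return search(master, slave,
                  range(max(0, cy - r), cy + r + 1),
                  range(max(0, cx - r), cx + r + 1))

master = gaussian((32, 32), 16, 16)
slave = gaussian((48, 48), 22, 19)      # same scene, shifted by (6, 3)
print(coarse_to_fine(master, slave))
```

With max_off=12 this evaluates 16 coarse plus 25 fine offsets instead of all 169, and the saving grows quickly with the search range, which is the computational motivation the paper gives for replacing repeated multi-kernel passes.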