• Title/Summary/Keyword: ART2 algorithm


Developments of Local Festival Mobile Application and Data Analysis System Applying Beacon (비콘을 활용한 위치기반 지역축제 모바일 애플리케이션과 데이터 분석 시스템 개발)

  • Kim, Song I;Kim, Won Pyo;Jeong, Chul
    • Korea Science and Art Forum
    • /
    • v.31
    • /
    • pp.21-32
    • /
    • 2017
  • Local festivals shape regional culture and an atmosphere of communication; they increase demand for domestic tourism and thus play an important role in ripple effects (e.g. regional image improvement, tourist influx, job creation, regional content development, and local product sales) and in economic revitalization. IoT (Internet of Things) technologies have advanced, and beacons, one of the IoT services, have been applied in many types and forms both domestically and internationally. However, despite the spread of digital mobile technologies, it remains difficult for individuals to track information about all local festivals and to satisfy tourists' needs, given weak strategic approaches and promotion. Furthermore, current festival-related mobile applications deliver information poorly and suffer from numerous content issues (e.g. how information is delivered within festival venues, a separate application for each festival, and one-time use because each festival is a one-time event). Against this background, this research aims to develop a local festival mobile application and data analysis system applying beacon technology. First, three algorithms were developed, namely a 'festival crowding algorithm', a 'visitor stats algorithm', and a 'customized information algorithm'; a beta test of the developed application and data analysis system followed. As a result, the system built a database of visitor types and behaviors and provided functions and services such as personalized information, waiting times for festival contents, and a 'hot place' function. In addition, the application recorded more than 13,000 downloads on Google Play within its first three months and became the most exposed festival-related application there, demonstrating its marketability and excellence.
This research is organized as follows: Chapter 2 reviews the literature on local festivals in relation to technology development, beacon services, and festival applications. Chapter 3 describes the design plans and conditions for developing the local festival mobile application and data analysis system with beacons. Chapter 4 evaluates the results of the beta performance test to verify the applicability of the developed application and data analysis system, and Chapter 5 presents the conclusion and suggests future research.
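The abstract does not specify how the 'festival crowding algorithm' works; a minimal sketch of how per-zone crowding might be estimated from beacon sightings follows. The zone names, thresholds, and the `sightings` tuple format are assumptions for illustration, not the authors' design:

```python
from collections import Counter

def crowding_by_zone(sightings, thresholds=(10, 30)):
    """Estimate per-zone crowding from beacon sightings.

    sightings: list of (device_id, beacon_zone) tuples collected over a
    short time window; each unique device counts once per zone.
    Returns a dict mapping zone -> 'low' / 'medium' / 'high'.
    """
    unique = {(dev, zone) for dev, zone in sightings}
    counts = Counter(zone for _, zone in unique)
    low, high = thresholds
    return {zone: "low" if n < low else "medium" if n < high else "high"
            for zone, n in counts.items()}

# Example: the same phone seen twice in one zone counts only once.
demo = [("p1", "stage"), ("p1", "stage"), ("p2", "stage"), ("p3", "food")]
print(crowding_by_zone(demo))  # {'stage': 'low', 'food': 'low'}
```

Deduplicating on (device, zone) pairs keeps a phone that repeatedly pings the same beacon from inflating the crowd estimate.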

3D-Distortion Based Rate Distortion Optimization for Video-Based Point Cloud Compression

  • Yihao Fu;Liquan Shen;Tianyi Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.435-449
    • /
    • 2023
  • The state-of-the-art video-based point cloud compression (V-PCC) compresses 3D point clouds efficiently by projecting points onto 2D images. These images are then padded and compressed with High Efficiency Video Coding (HEVC). Pixels in the padded 2D images fall into three groups: origin pixels, padded pixels, and unoccupied pixels. Origin pixels are generated by projection of the 3D point cloud; padded pixels and unoccupied pixels are generated by copying values from origin pixels during image padding. Padded pixels are reconstructed to 3D space during geometry reconstruction, just as origin pixels are; unoccupied pixels are not reconstructed. The rate-distortion optimization (RDO) used in HEVC mainly balances video distortion against video bitrate. However, traditional RDO is unreliable for padded and unoccupied pixels, which leads to a significant waste of bits in geometry reconstruction. In this paper, we propose a new RDO scheme that, for padded and unoccupied pixels, takes 3D distortion into account instead of traditional video distortion. First, these pixels are classified based on the occupancy map. Second, different strategies are applied to these pixels to calculate their 3D distortions. Finally, the obtained 3D distortions replace the sum of squared errors (SSE) during the full RDO process in intra prediction and inter prediction. The proposed method is applied to geometry frames. Experimental results show that the proposed algorithm achieves average bitrate savings of 31.41% and 6.14% for the D1 metric in the Random Access and All Intra settings, respectively, on geometry videos compared with the V-PCC anchor.
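The core idea is that the Lagrangian cost J = D + λR is kept, but the distortion term D is computed per pixel class rather than always as SSE. A hedged sketch follows; the per-class distortion weights and the candidate modes are illustrative assumptions, not the paper's actual 3D-distortion model:

```python
def rd_cost(distortion, rate, lam):
    """Standard Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate

def distortion_3d(pixel_class, sse, padded_weight=0.25):
    """Illustrative 3D-distortion surrogate: origin pixels keep their
    SSE, padded pixels are down-weighted, and unoccupied pixels are
    never reconstructed, so their distortion contributes nothing.
    (These weights are assumptions, not the paper's model.)"""
    if pixel_class == "origin":
        return sse
    if pixel_class == "padded":
        return padded_weight * sse
    return 0.0  # unoccupied

def pick_mode(candidates, lam=0.5):
    """Choose the candidate coding mode with minimal RD cost.
    candidates: list of (name, pixel_class, sse, rate)."""
    best = min(candidates,
               key=lambda c: rd_cost(distortion_3d(c[1], c[2]), c[3], lam))
    return best[0]

modes = [("intra_dc", "unoccupied", 900.0, 10.0),
         ("intra_planar", "unoccupied", 400.0, 40.0)]
# For unoccupied pixels the 3D distortion is zero, so the cheaper-rate
# mode wins even though its SSE is higher.
print(pick_mode(modes))  # intra_dc
```

This captures why plain SSE wastes bits: it would favor `intra_planar` for pixels that are never reconstructed to 3D anyway.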

Monitoring on Crop Condition using Remote Sensing and Model (원격탐사와 모델을 이용한 작황 모니터링)

  • Lee, Kyung-do;Park, Chan-won;Na, Sang-il;Jung, Myung-Pyo;Kim, Junhwan
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_2
    • /
    • pp.617-620
    • /
    • 2017
  • The periodic monitoring of crop conditions and timely estimation of crop yield are of great importance for supporting agricultural decision-making, as well as for effectively coping with food security issues. Remote sensing has been regarded as one of the effective tools for crop condition monitoring and crop type classification. Since 2010, the RDA (Rural Development Administration) has been developing technology for crop condition monitoring using remote sensing and models. These special papers address the recent state of the art in remote sensing and geospatial technologies for providing operational agricultural information: crop yield estimation methods using remote sensing data and a process-oriented model, a crop classification algorithm, monitoring and prediction of weather and climate based on remote sensing data, the system design and architecture of a crop monitoring system, and the history of rice yield forecasting methods.

SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng;Jiang, Yifeng;Huang, Zhuandi;Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.10
    • /
    • pp.4968-4986
    • /
    • 2017
  • In this paper, we primarily address the difficulty of automatically generating a plausible depth map from a single image of an unstructured environment. The aim is to extrapolate a depth map with a more correct, rich, and distinct depth order that is both quantitatively accurate and visually pleasing. Our technique, fundamentally based on the preexisting DepthTransfer algorithm, transfers depth information at the level of superpixels, within a framework that replaces the pixel basis with instance-based learning. A vital superpixel feature that enhances matching precision is the posterior incorporation of predicted semantic labels into the depth extraction procedure. Finally, a modified Cross Bilateral Filter is leveraged to refine the final depth field. For training and evaluation, experiments were conducted on the Make3D Range Image Dataset and demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both efficacy and computational efficiency. This approach can be used to automatically convert 2D images into stereo for 3D visualization, producing anaglyph images that are more realistic and immersive.
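The instance-based, superpixel-level transfer can be pictured as a nearest-neighbor lookup in feature space. A simplified sketch follows; the real SuperDepthTransfer pipeline also incorporates semantic labels and a Cross Bilateral Filter refinement, and the feature vectors and median aggregation here are assumptions:

```python
import numpy as np

def transfer_depth(query_feats, db_feats, db_depths, k=3):
    """Assign each query superpixel the median depth of its k nearest
    database superpixels in feature space (a simplified instance-based
    transfer, not the full SuperDepthTransfer method).

    query_feats: (Q, F) array of query superpixel features
    db_feats:    (N, F) array of database superpixel features
    db_depths:   (N,)   array of known depths for the database
    Returns an (Q,) array of transferred depths.
    """
    out = np.empty(len(query_feats))
    for i, f in enumerate(query_feats):
        dists = np.linalg.norm(db_feats - f, axis=1)  # distance to all exemplars
        nn = np.argsort(dists)[:k]                    # k nearest instances
        out[i] = np.median(db_depths[nn])             # robust aggregate
    return out

db_feats = np.array([[0.0], [1.0], [2.0], [10.0]])
db_depths = np.array([1.0, 1.2, 1.4, 9.0])
q = np.array([[0.5]])
# The three near exemplars dominate; the distant outlier is ignored.
print(transfer_depth(q, db_feats, db_depths))  # [1.2]
```

The median over neighbors is one way to keep a single mismatched exemplar from corrupting a superpixel's depth.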

Development and Performance Evaluation of the First Model of 4D CT-Scanner

  • Endo, Masahiro;Mori, Shinichiro;Tsunoo, Takanori;Kandatsu, Susumu;Tanada, Shuji;Aradate, Hiroshi;Saito, Yasuo;Miyazaki, Hiroaki;Satoh, Kazumasa;Matsusita, Satoshi;Kusakabe, Masahiro
    • Proceedings of the Korean Society of Medical Physics Conference
    • /
    • 2002.09a
    • /
    • pp.373-375
    • /
    • 2002
  • 4D CT is a dynamic volume imaging system for moving organs with image quality comparable to conventional CT, realized with continuous, high-speed cone-beam CT. To realize 4D CT, we developed a novel 2D detector on the basis of present CT technology and mounted it on the gantry frame of a state-of-the-art CT scanner. In this report we describe the design of the first model of the 4D CT scanner as well as early performance test results. The X-ray detector of the 4D CT scanner is a discrete pixel detector in which each pixel datum is measured by an independent detector element. There are 912 (channels) × 256 (segments) elements, each approximately 1 mm × 1 mm in size. The data sampling rate is 900 views (frames)/sec, and the dynamic range of the A/D converter is 16 bits. The gantry rotation speed is 1.0 sec/rotation. The data transfer system between the rotating and stationary parts of the gantry consists of laser diode and photodiode pairs and achieves a net transfer speed of 5 Gbps. Volume data of 512 × 512 × 256 voxels are reconstructed with the FDK algorithm using 128 microprocessors in parallel. Normal volunteers and several phantoms were scanned with the scanner to demonstrate its high image quality.
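The quoted detector figures let us check that the 5 Gbps optical slip-ring link is fast enough for the raw data stream, assuming 16 bits per detector element per view (overheads and framing are ignored in this back-of-the-envelope calculation):

```python
channels, segments = 912, 256      # detector elements
views_per_sec = 900                # sampling rate (views/frames per second)
bits_per_sample = 16               # A/D converter dynamic range

# Raw detector data rate in bits per second.
raw_rate_bps = channels * segments * views_per_sec * bits_per_sample
print(f"{raw_rate_bps / 1e9:.2f} Gbps")  # 3.36 Gbps
```

At roughly 3.36 Gbps, the raw stream fits comfortably within the stated 5 Gbps net transfer speed.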


AVS Video Decoder Implementation for Multimedia DSP (멀티미디어 DSP를 위한 AVS 비디오 복호화기 구현)

  • Kang, Dae-Beom;Sim, Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.5
    • /
    • pp.151-161
    • /
    • 2009
  • Audio Video Standard (AVS) is an audio and video compression standard developed for domestic video applications in China. AVS employs low-complexity tools to minimize the degradation of RD performance relative to the state-of-the-art video codec, H.264/AVC. The AVS video codec uses 8×8 block prediction and a same-size transform to improve compression efficiency for VGA and higher-resolution sequences. AVS has increasingly been adopted for IPTV services and mobile applications in China, so many consumer electronics companies and multimedia laboratories have been developing applications and chips for it. In this paper, we implemented the AVS video decoder and optimized it on TI's DaVinci EVM DSP board. To improve decoding speed and clock usage, we removed unnecessary memory operations and used a high-speed VLD algorithm, linear assembly, intrinsic functions, and so forth. Test results show that the optimized decoder decodes 5-7 times faster than the reference software (RM 5.2J).

Feature Extraction Using Trace Transform for Insect Footprint Recognition (곤충 발자국 패턴 인식을 위한 Trace Transform 기반의 특징값 추출)

  • Shin, Bok-Suk;Cho, Kyoung-Won;Cha, Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.6
    • /
    • pp.1095-1100
    • /
    • 2008
  • In insect footprint recognition, footprint segments, the basic areas for recognition, must first be extracted from scanned insect footprints, and appropriate features must then be found in those segments to discriminate insect species, because the characteristics of these features determine how well insects can be classified. In this paper, we propose methods for automatic footprint segmentation and feature extraction. We use the Trace transform to find appropriate features in the extracted segments. The Trace transform builds a new data structure from the segmented images by applying functionals along parallel trace lines; this structure is invariant to translation, rotation, and reflection of the images. It is then converted into Triple features by diametric and circus functionals, and the Triple features are used to discriminate insect footprint patterns. We show that the Triple features obtained by the proposed methods are sufficiently distinguishable and appropriate for classifying insect species.
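The trace/diametric/circus cascade can be sketched in miniature. A full Trace transform sweeps many line angles; to keep this sketch dependency-free it uses only 0° and 90° lines (image rows and columns), and the three functionals chosen here are illustrative examples, not the paper's:

```python
import numpy as np

def triple_feature(img, trace=np.sum, diametric=np.max, circus=np.mean):
    """Toy Triple feature in the spirit of the Trace transform:
    1) apply a trace functional along parallel lines at each angle
       (here only rows and columns, i.e. 0 and 90 degrees),
    2) reduce each angle's line values with a diametric functional,
    3) reduce across angles with a circus functional.
    """
    per_angle = []
    for lines in (img, img.T):                         # 0-degree rows, 90-degree columns
        t = np.array([trace(line) for line in lines])  # trace per parallel line
        per_angle.append(diametric(t))                 # diametric over line offsets
    return circus(np.array(per_angle))                 # circus over angles

img = np.array([[0, 1, 0],
                [1, 1, 1],
                [0, 1, 0]], dtype=float)
# A symmetric cross gives identical row and column profiles.
print(triple_feature(img))  # 3.0
```

Because each stage is a reduction over a whole family of lines, the resulting scalar changes little when the footprint image is shifted or rotated, which is the property the paper exploits.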

Complexity Estimation Based Work Load Balancing for a Parallel Lidar Waveform Decomposition Algorithm

  • Jung, Jin-Ha;Crawford, Melba M.;Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.25 no.6
    • /
    • pp.547-557
    • /
    • 2009
  • LIDAR (LIght Detection And Ranging) is an active remote sensing technology that provides 3D coordinates of the Earth's surface by performing range measurements from the sensor. Early small-footprint LIDAR systems recorded multiple discrete returns from the back-scattered energy; recent advances in LIDAR hardware make it possible to record full digital waveforms of the returned energy. LIDAR waveform decomposition separates the return waveform into a mixture of components, which are then used to characterize the original data; the most common statistical mixture model used for this process is the Gaussian mixture. Waveform decomposition plays an important role in LIDAR waveform processing, since the resulting components are expected to represent reflecting surfaces within waveform footprints, so the decomposition results ultimately affect the interpretation of the LIDAR waveform data. The computational cost of waveform decomposition stems from two factors: (1) estimating the number of components in a mixture and the resulting parameter estimates, which are inter-related and cannot be solved separately, and (2) parameter optimization, which has no closed-form solution and must be solved iteratively. A current state-of-the-art airborne LIDAR system acquires more than 50,000 waveforms per second, so decomposing this enormous number of waveforms is challenging on a traditional single-processor architecture. To tackle this issue, four parallel LIDAR waveform decomposition algorithms with different work load balancing schemes - (1) no weighting, (2) decomposition-results-based linear weighting, (3) decomposition-results-based squared weighting, and (4) decomposition-time-based linear weighting - were developed and tested with a varying number of processors (8-256), and the results were compared in terms of efficiency.
Overall, the decomposition-time-based linear weighting approach yielded the best performance of the four.
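Weighted work load balancing of this kind can be sketched as greedy longest-processing-time scheduling: sort the waveforms by an estimated complexity weight and always hand the next one to the currently least-loaded processor. The `weight` function stands in for whichever of the paper's four weighting schemes is used; this generic scheduler is an illustration, not the authors' implementation:

```python
import heapq

def balance(waveforms, n_procs, weight=lambda w: w):
    """Greedy longest-processing-time assignment.

    waveforms: list of per-waveform complexity estimates.
    weight:    maps an estimate to a load weight (linear by default;
               e.g. lambda w: w**2 for a squared weighting scheme).
    Returns the per-processor total loads, sorted ascending.
    """
    heap = [(0.0, p) for p in range(n_procs)]  # (current load, processor id)
    heapq.heapify(heap)
    for w in sorted(waveforms, key=weight, reverse=True):
        load, p = heapq.heappop(heap)          # least-loaded processor
        heapq.heappush(heap, (load + weight(w), p))
    return sorted(load for load, _ in heap)

loads = balance([5, 3, 3, 2, 2, 1], n_procs=2)
print(loads)  # [8.0, 8.0] -- the 16 units of work split evenly
```

With no weighting (every waveform counted as 1), hard waveforms can pile up on one processor; a good complexity estimate is what makes the final loads, and hence the finishing times, even.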

The Development of the Recovery System of the Destroyed Epigraph - Focused on the Chinese standard script - (훼손된 금석문 판독시스템 개발 - 해서체를 중심으로 -)

  • Jang, Seon-Phil
    • Korean Journal of Heritage: History & Science
    • /
    • v.50 no.2
    • /
    • pp.80-93
    • /
    • 2017
  • This study proposes a new scientific method for deciphering damaged epigraphs. In this method, Chinese characters are converted into a coordinate representation: each character is divided into nine square parts according to the position of its radicals. A partially damaged character is then deciphered by comparing the shapes in the intact parts of its nine-square grid against those of candidate characters. This method is more scientific and accurate, and makes it easier to find related characters, than deciphering through context, which is the current practice. Software based on this algorithm would be especially useful for deciphering old manuscripts or epigraphs written in ancient Chinese characters that are no longer in use. This study should also help in deciphering semi-cursive or cursive epigraphs, as well as semi-cursive or cursive damaged characters, in follow-up research.
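The nine-square comparison can be sketched as follows. The cell-level damage mask and the pixel-overlap score are assumptions made for illustration; the paper's actual shape comparison within each square is not specified in the abstract:

```python
import numpy as np

def grid_cells(glyph, n=3):
    """Split a binary glyph image into an n x n grid of cells
    (the paper's nine-square decomposition by radical position)."""
    h, w = glyph.shape
    return [glyph[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
            for i in range(n) for j in range(n)]

def match_score(damaged, candidate, damage_mask):
    """Compare only the intact cells: damage_mask[k] is True where
    cell k of the inscription is destroyed and must be ignored.
    Returns the fraction of agreeing pixels over the intact cells.
    (An illustrative overlap score, not the paper's exact measure.)"""
    agree = total = 0
    for k, (a, b) in enumerate(zip(grid_cells(damaged),
                                   grid_cells(candidate))):
        if damage_mask[k]:
            continue  # skip destroyed parts of the inscription
        agree += np.sum(a == b)
        total += a.size
    return agree / total

glyph = np.eye(6, dtype=int)        # stand-in 6x6 "character" bitmap
damaged = glyph.copy()
damaged[0:2, 0:2] = 0               # top-left cell destroyed
mask = [True] + [False] * 8         # ignore the destroyed cell
print(match_score(damaged, glyph, mask))  # 1.0 -- intact cells agree
```

Scoring only the undamaged squares is what lets a partially destroyed character still be ranked against a dictionary of candidate characters.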

A Study on the Automated Payment System for Artificial Intelligence-Based Product Recognition in the Age of Contactless Services

  • Kim, Heeyoung;Hong, Hotak;Ryu, Gihwan;Kim, Dongmin
    • International Journal of Advanced Culture Technology
    • /
    • v.9 no.2
    • /
    • pp.100-105
    • /
    • 2021
  • Contactless service is rapidly emerging as a new growth strategy, driven by consumers reluctant to engage in face-to-face interaction during the global pandemic of coronavirus disease 2019 (COVID-19), and various technologies are being developed to support the fast-growing contactless service market. The restaurant industry is one of the fields that most urgently needs contactless-service technologies, and a representative example is the kiosk, which reduces labor costs for restaurant owners and provides psychological comfort and satisfaction to customers. In this paper, we propose a solution for restaurant store operation through an unmanned kiosk using state-of-the-art artificial intelligence (AI) image recognition. The proposed system should be especially useful for products without barcodes in bakeries, for fresh foods (fruits, vegetables, etc.), and in self-service highway restaurants, where such products increase labor costs and cause many hassles. The system recognizes barcode-less products using image-based AI algorithms and makes payments automatically. To test the feasibility of the proposed system, we built an AI vision system using a commercial camera and conducted an image recognition test by training object detection AI models on donut images. The system also learns from mismatched information encountered in operation; this self-learning AI technology allows the recognition performance to be upgraded continuously. We proposed a fully automated payment system with AI vision technology and demonstrated its feasibility through the performance test. The system realizes contactless self-checkout service in the restaurant business area and improves cost savings in managing human resources.