• Title/Summary/Keyword: ART2 algorithm


Smoothed Group-Sparsity Iterative Hard Thresholding Recovery for Compressive Sensing of Color Image (컬러 영상의 압축센싱을 위한 평활 그룹-희소성 기반 반복적 경성 임계 복원)

  • Nguyen, Viet Anh;Dinh, Khanh Quoc;Van Trinh, Chien;Park, Younghyeon;Jeon, Byeungwoo
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.4 / pp.173-180 / 2014
  • Compressive sensing is a new signal acquisition paradigm that enables sparse/compressible signals to be sampled below the Nyquist rate. To fully benefit from its much-simplified acquisition process, considerable effort has been devoted to improving the performance of compressive sensing recovery. Concerning color images, however, existing recovery methods fail to address image characteristics such as energy distribution or the human visual system. To overcome this problem, this paper proposes a new group-sparsity hard thresholding process that preserves RGB-grouped coefficients important in terms of both energy and perceptual sensitivity. Moreover, a smoothed group-sparsity iterative hard thresholding algorithm for compressive sensing of color images is proposed by incorporating a frame-based filter into the group-sparsity hard thresholding process. In this way, the proposed method pursues not only sparsity of the image in the transform domain but also smoothness of the image in the spatial domain. Experimental results show average PSNR gains of up to 2.7 dB over the state-of-the-art group-sparsity smoothed recovery method.
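
The core loop the abstract describes, a gradient step followed by group-wise hard thresholding over co-located RGB coefficients, can be sketched in a few lines of numpy. This is a minimal illustration under assumed parameters; it omits the paper's frame-based smoothing filter and transform-domain details:

```python
import numpy as np

def group_iht(y, A, group_size=3, k_groups=50, n_iters=100, step=1.0):
    """Minimal iterative hard thresholding with group sparsity.

    y: measurements, A: sensing matrix (m x n). Groups are consecutive
    blocks of `group_size` coefficients (e.g., co-located R,G,B values);
    n is assumed to be a multiple of group_size.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iters):
        # Gradient step toward the measurements.
        x = x + step * A.T @ (y - A @ x)
        # Group hard thresholding: keep the k groups with the largest energy.
        g = x.reshape(-1, group_size)
        energy = np.sum(g**2, axis=1)
        keep = np.argsort(energy)[-k_groups:]
        mask = np.zeros(g.shape[0], dtype=bool)
        mask[keep] = True
        g[~mask] = 0.0
        x = g.reshape(-1)
    return x
```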

A Study on AI Algorithm that can be used to Arts Exhibition : Focusing on the Development and Evaluation of the Chatbot Model (예술 전시에 활용 가능한 AI 알고리즘 연구 : 챗봇 모델 개발 및 평가를 중심으로)

  • Choi, Hak-Hyeon;Yoon, Mi-Ra
    • Journal of Korea Entertainment Industry Association / v.15 no.4 / pp.369-381 / 2021
  • Artificial Intelligence (AI) technology can be used in arts exhibitions at every stage, from planning through on-site operation to evaluation. AI has expanded its scope from exhibition planning and guidance services to tools for creating art. This paper focuses on chatbots that combine exhibition services with AI technology to provide information and services. For a concrete study, a chatbot for exhibition services was developed using the Naver Clova chatbot tool and information from the National Museum of Modern and Contemporary Art (MMCA), Korea. The information was limited to viewing and exhibitions rather than all information about the MMCA, and the chatbot was developed to provide both a scenario type, in which the user reaches the desired answer through buttons, and a text question-and-answer (Q&A) type, in which the user directly inputs a question. When the completed chatbot was evaluated on six items according to ELIZA's chatbot evaluation scale, it scored 4.2 out of 5 as a chatbot for delivering viewing and exhibition information. Future research tasks are to connect the developed chatbot with continuous scenario answers, to resolve failures and errors in text Q&A-type answers, and to expand additional services, so as to create a complete chatbot model usable in an actual arts exhibition space.
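
As a rough illustration of the two answer types described above (scenario buttons versus free-text Q&A), a minimal sketch follows; the data, keywords, and structure are hypothetical and unrelated to the actual Naver Clova configuration:

```python
# Hypothetical scenario (button) tree and free-text FAQ lookup.
SCENARIO = {
    "start": {"text": "What would you like to know?",
              "buttons": {"Hours": "hours", "Exhibitions": "exhibits"}},
    "hours": {"text": "The museum is open 10:00-18:00.", "buttons": {}},
    "exhibits": {"text": "Current exhibitions are listed at the info desk.",
                 "buttons": {}},
}
FAQ = {"parking": "Parking is available at the main entrance.",
       "ticket": "Admission information is posted at the ticket office."}

def answer(state, user_text=None):
    if user_text:                      # text Q&A type: simple keyword match
        for kw, ans in FAQ.items():
            if kw in user_text.lower():
                return ans
        return "Sorry, I could not find an answer."
    node = SCENARIO[state]             # scenario type: follow the button tree
    return node["text"], list(node["buttons"])
```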

Obstacle Avoidance of Unmanned Surface Vehicle based on 3D Lidar for VFH Algorithm (무인수상정의 장애물 회피를 위한 3차원 라이다 기반 VFH 알고리즘 연구)

  • Weon, Ihn-Sik;Lee, Soon-Geul;Ryu, Jae-Kwan
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.3 / pp.945-953 / 2018
  • In this paper, a 3D lidar is used for obstacle detection and avoidance maneuvers in autonomous unmanned operation. The aim is obstacle avoidance for an unmanned surface vehicle under marine conditions using only a single sensor. The 3D lidar, Quanergy's M8 sensor, collects surrounding obstacle data, including layer and intensity information. The collected data are converted into a three-dimensional Cartesian coordinate system and then mapped onto a two-dimensional coordinate system. The data converted into the two-dimensional coordinate system include noise from the water surface, so the regularly occurring noise is handled by defining a virtual region of interest based on assumptions about the water around the vehicle. The remaining noise is removed, in proportion to the amount of noise, by setting a threshold on the histogram computed by the Vector Field Histogram (VFH) method. Using the cleaned data, nearby objects are tracked relative to the vehicle's motion, and a density map of the data is built cell by cell on a virtual grid map. A polar histogram is generated from the resulting obstacle map, and the avoidance direction is selected using the boundary values.
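
A toy version of the VFH step described above, binning 2D obstacle points into a polar histogram, thresholding out dense sectors, and steering toward a free direction, might look like this; the sector count and threshold are assumptions, not the paper's values:

```python
import numpy as np

def vfh_direction(points_xy, goal_angle, n_sectors=72, threshold=0.3):
    """Toy Vector Field Histogram step (assumed parameters).

    points_xy: obstacle points in the vehicle frame, after projecting the
    3D lidar scan to 2D and removing water-surface noise.
    """
    angles = np.arctan2(points_xy[:, 1], points_xy[:, 0])
    dists = np.hypot(points_xy[:, 0], points_xy[:, 1])
    # Build the polar histogram: closer obstacles contribute more weight.
    sector = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    hist = np.zeros(n_sectors)
    np.add.at(hist, sector, 1.0 / np.maximum(dists, 1e-3))
    hist /= hist.max() if hist.max() > 0 else 1.0
    # Candidate headings are sector centers whose density is below threshold.
    centers = -np.pi + (np.arange(n_sectors) + 0.5) * 2 * np.pi / n_sectors
    free = centers[hist < threshold]
    if free.size == 0:
        return None  # no safe heading
    # Steer toward the free sector closest to the goal direction
    # (angle wrap-around ignored for brevity).
    return free[np.argmin(np.abs(free - goal_angle))]
```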

Parameter search methodology of support vector machines for improving performance (속도 향상을 위한 서포트 벡터 머신의 파라미터 탐색 방법론)

  • Lee, Sung-Bo;Kim, Jae-young;Kim, Cheol-Hong;Kim, Jong-Myon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.3 / pp.329-337 / 2017
  • This paper proposes a search method that explores the parameter values C and σ of support vector machines (SVMs) to improve search speed while maintaining accuracy. A traditional grid search requires tremendous computation time because it evaluates all available combinations of C and σ to find the combination that provides the best SVM performance. To address this issue, this paper proposes a deep search method that reduces computation time. In the first stage, it divides the C-σ accuracy map into four regions, evaluates the median point of each region, and selects the point with the highest accuracy as a start point. In the second stage, the region around the selected start point is re-divided into four regions, and the most accurate point is assigned as the new search point. In the third stage, the eight points surrounding the search point are explored, the most accurate one is assigned as the new search point, and the corresponding region is again divided into four parts and evaluated. In the last stage, this process continues until the accuracy at the current point is higher than at all of its neighbors; if this is not satisfied, the procedure is repeated from the second stage with the current level as input. Experimental results using normal and defective bearings show that the proposed deep search algorithm outperforms conventional algorithms in terms of both performance and search time.
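
A loose Python rendering of this coarse-to-fine idea, using scikit-learn for the SVM evaluation, is sketched below; the region-splitting details only approximate the paper's four stages:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def deep_search(X, y, c_range=(-5, 15), g_range=(-15, 3), n_stages=4):
    """Coarse-to-fine search over log2(C) and log2(gamma).

    A simplified reading of the paper's idea: evaluate a few points per
    region, recurse into the best one, and stop when nothing improves.
    """
    best = None
    for _ in range(n_stages):
        # Evaluate a coarse 3x3 grid over the current region.
        scores = {}
        for c in np.linspace(*c_range, 3):
            for g in np.linspace(*g_range, 3):
                clf = SVC(C=2.0**c, gamma=2.0**g)
                scores[(c, g)] = cross_val_score(clf, X, y, cv=3).mean()
        (c0, g0), s0 = max(scores.items(), key=lambda kv: kv[1])
        if best is not None and s0 <= best[1]:
            break  # no improvement over the previous stage
        best = ((c0, g0), s0)
        # Shrink the search window around the best point (halve each side).
        cw = (c_range[1] - c_range[0]) / 4
        gw = (g_range[1] - g_range[0]) / 4
        c_range, g_range = (c0 - cw, c0 + cw), (g0 - gw, g0 + gw)
    (c0, g0), s0 = best
    return 2.0**c0, 2.0**g0, s0
```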

Vessel Tracking Algorithm using Multiple Local Smooth Paths (지역적 다수의 경로를 이용한 혈관 추적 알고리즘)

  • Jeon, Byunghwan;Jang, Yeonggul;Han, Dongjin;Shim, Hackjoon;Park, Hyungbok;Chang, Hyuk-Jae
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.6 / pp.137-145 / 2016
  • A novel tracking method is proposed to find the coronary artery using a high-order curve model in coronary CTA (Computed Tomography Angiography). The proposed method quickly generates numerous artificial trajectories represented by high-order curves, each with its own cost. Only the high-ranked trajectories, located in the target structure, are selected according to their costs, and an optimal curve is then found as the centerline. After tracking, the optimal curve segments are connected into a single curve at the points they share, yielding a piecewise smooth curve. We demonstrate that the high-order curve is a proper model for classification of the coronary artery. Experimental results on a public data set show that the proposed method is comparable in both accuracy and running time to the state-of-the-art methods.
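
The generate-and-rank step the abstract describes can be caricatured as follows: sample many random high-order curve segments from a seed point and keep the one with the best image-based cost. The cubic model, cost function, and parameters here are illustrative assumptions, not the paper's:

```python
import numpy as np

def sample_trajectories(seed, direction, n_curves=200, order=3, length=20.0):
    """Generate candidate high-order (cubic, here) curve segments from a
    seed point, biased along the current tracking direction."""
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 16)[:, None]   # curve parameter samples
    curves = []
    for _ in range(n_curves):
        # Random polynomial coefficients per axis, anchored at the seed.
        coefs = rng.normal(scale=3.0, size=(order, 3))
        pts = seed + t * length * direction + (t**np.arange(1, order + 1)) @ coefs
        curves.append(pts)
    return curves

def best_curve(curves, vesselness):
    """Rank candidates by accumulated vesselness (higher = more vessel-like)
    sampled at nearest-voxel positions; keep the best-scoring curve."""
    def cost(pts):
        idx = np.clip(np.round(pts).astype(int), 0,
                      np.array(vesselness.shape) - 1)
        return vesselness[idx[:, 0], idx[:, 1], idx[:, 2]].sum()
    return max(curves, key=cost)
```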

File System Support for Multimedia Streaming in Internet Home Appliances (인터넷 홈서버를 위한 스트리밍 전용 파일 시스템)

  • 박진연;송승호;진종현;원유집;박승민;김정기
    • Journal of Broadcast Engineering / v.6 no.3 / pp.246-259 / 2001
  • Due to the recent rapid deployment of Internet streaming services and digital broadcasting services, the issue of how to efficiently support streaming workloads in so-called "Internet Home Appliances" has received prime interest from industry as well as academia. The underlying dilemma is that it may not be feasible to put cutting-edge CPUs, boards, disks, and other peripherals into this type of device, primarily because of cost. An Internet Home Appliance usually has a dedicated usage, e.g., Internet radio, and thus requires neither a high-end CPU nor a high-end video subsystem. The same reasoning applies to the I/O subsystem: an Internet Home Appliance dedicated to handling compressed moving pictures is not equipped with a high-end SCSI disk with fast rotational speed. Thus, elaborate software algorithms are mandatory to exploit the available hardware resources and maximize the efficiency of the system. This paper presents our experience in the design and implementation of a new multimedia file system which can efficiently deliver the required disk bandwidth for a periodic I/O workload. We implemented the file system on the Linux operating system and examined its performance under a streaming I/O workload. The results of the study show that the proposed file system exhibits performance superior to the Linux Ext2 file system under a streaming I/O workload. This work not only advances the state of the art in file system technology for multimedia streaming but also puts forth software which is readily available and can be deployed.
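
For intuition about what "periodic I/O workload" means here, a user-space caricature of pacing reads to a media bitrate is sketched below; the paper's contribution is a kernel-level file system, so this is only a conceptual analogy, and `consume` is a hypothetical decoder hand-off:

```python
import time

def consume(data):
    pass  # hypothetical decoder hand-off; stands in for real playback

def stream_file(path, bitrate_bps=1_500_000, chunk=64 * 1024):
    """Toy pacing loop: issue one chunk-sized read per media period so
    the disk sees a periodic, rate-matched request stream."""
    period = chunk * 8 / bitrate_bps   # seconds of media per chunk
    with open(path, "rb") as f:
        while True:
            t0 = time.monotonic()
            data = f.read(chunk)
            if not data:
                break
            consume(data)
            # Sleep out the rest of the period so reads arrive at the media rate.
            time.sleep(max(0.0, period - (time.monotonic() - t0)))
```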


Road Extraction from Images Using Semantic Segmentation Algorithm (영상 기반 Semantic Segmentation 알고리즘을 이용한 도로 추출)

  • Oh, Haeng Yeol;Jeon, Seung Bae;Kim, Geon;Jeong, Myeong-Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.3 / pp.239-247 / 2022
  • Cities are becoming more complex due to rapid industrialization and population growth in modern times. In particular, urban areas are changing rapidly due to housing site development, reconstruction, and demolition, so accurate road information is necessary for various purposes, such as High Definition Maps for autonomous driving. In the case of the Republic of Korea, accurate spatial information can be generated through the existing map production process, but covering a large area this way is limited in terms of time and cost. Roads, one of the basic map elements, are a hub of transportation and an essential means of mobility for human civilization. It is therefore essential to update road information accurately and quickly. This study uses semantic segmentation algorithms such as LinkNet, D-LinkNet, and NL-LinkNet to extract roads from drone images and then applies hyperparameter optimization to the model with the highest performance. As a result, the LinkNet model using a pre-trained ResNet-34 as the encoder achieved 85.125 mIoU. Subsequent studies should compare these results with those of studies using state-of-the-art object detection algorithms or semi-supervised learning-based semantic segmentation techniques. The results of this study can be applied to improve the speed of the existing map update process.
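
The mIoU figure reported above can be computed for a two-class road/background mask pair with a short function like this (a standard definition, not code from the paper):

```python
import numpy as np

def miou(pred, label, n_classes=2):
    """Mean intersection-over-union between predicted and ground-truth
    segmentation masks; classes with an empty union are skipped."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```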

Efficient Deep Learning Approaches for Active Fire Detection Using Himawari-8 Geostationary Satellite Images (Himawari-8 정지궤도 위성 영상을 활용한 딥러닝 기반 산불 탐지의 효율적 방안 제시)

  • Sihyun Lee;Yoojin Kang;Taejun Sung;Jungho Im
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.979-995 / 2023
  • As wildfires are difficult to predict, real-time monitoring is crucial for a timely response. Geostationary satellite images are very useful for active fire detection because they can monitor a vast area with high temporal resolution (e.g., 2 min). Existing satellite-based active fire detection algorithms detect thermal outliers using threshold values based on statistical analysis of brightness temperature. However, the difficulty of establishing suitable thresholds hinders such threshold-based methods from detecting low-intensity fires and achieving generalized performance. In light of these challenges, machine learning has emerged as a potential solution. Until now, relatively simple techniques such as random forests, vanilla convolutional neural networks (CNNs), and U-Net have been applied to active fire detection. This study therefore proposes an active fire detection algorithm using state-of-the-art (SOTA) deep learning techniques on data from the Advanced Himawari Imager and evaluates it over East Asia and Australia. The SOTA model was developed by applying EfficientNet and the Lion optimizer, and the results were compared with a model using a vanilla CNN structure. EfficientNet outperformed the CNN with F1-scores of 0.88 and 0.83 in East Asia and Australia, respectively. Performance improved further after weighted loss, equal sampling, and image augmentation were used to address data imbalance, yielding F1-scores of 0.92 in East Asia and 0.84 in Australia. It is anticipated that the timely responses facilitated by this SOTA deep learning-based approach to active fire detection will effectively mitigate the damage caused by wildfires.
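
Of the imbalance remedies mentioned (weighted loss, equal sampling, augmentation), the weighted loss is the easiest to sketch in PyTorch; the weight value and tensor shapes below are assumptions for illustration, not the study's settings:

```python
import torch
import torch.nn as nn

# Weight the rare "fire" class more heavily in a binary loss.
pos_weight = torch.tensor([50.0])              # assumed; fire pixels are scarce
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(4, 1, 64, 64)             # stand-in for network output
fire_mask = (torch.rand(4, 1, 64, 64) > 0.98).float()  # ~2% positive pixels
loss = criterion(logits, fire_mask)            # misses on fire pixels cost more
```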

Evaluation of Dose Change by Using the Deformable Image Registration (DIR) on the Intensity Modulated Radiation Therapy (IMRT) with Glottis Cancer (성문암 세기조절 방사선치료에서 변형영상정합을 이용한 선량변화 평가)

  • Kim, Woo Chul;Min, Chul Kee;Lee, Suk;Choi, Sang Hyoun;Cho, Kwang Hwan;Jung, Jae Hong;Kim, Eun Seog;Yeo, Seung-Gu;Kwon, Soo-Il;Lee, Kil-Dong
    • Progress in Medical Physics / v.25 no.3 / pp.167-175 / 2014
  • The purpose of this study is to evaluate the variation of the dose delivered to patients with glottis cancer under IMRT (intensity-modulated radiation therapy) by using 3D registration of CBCT (cone-beam CT) images together with DIR (deformable image registration) techniques. The CBCT images, obtained at one-week intervals, were reconstructed using a B-spline algorithm in the DIR system, and doses were recalculated based on the newly obtained CBCT images. The dose distributions to the tumor and the critical organs were compared with the reference plan. Patient weight increased by 1.38~2.04 kg on average between weeks 3 and 5, while the body surface contour decreased by 2.1 mm. From the third week, the dose delivered to the carotid increased by more than 8.76% relative to the plan, and the dose to the thyroid gland decreased by 26.4%. For the physical evaluation factors of the tumor, PITV, TCI, rDHI, mDHI, and CN decreased by 4.32%, 5.78%, 44.54%, 12.32%, and 7.11%, respectively. Moreover, $D_{max}$, $D_{mean}$, $V_{67.50}$, and $D_{95}$ for the PTV changed by 2.99%, 1.52%, 5.78%, and 11.94%, respectively. Although the tumor volume did not change with weight, the body shape did change, and IMRT with its narrow margins responded sensitively to such changes. For glottis IMRT, the patient's weight changes should be observed and recorded, the actual dose distribution should be evaluated using DIR techniques, and adaptive treatment planning during the treatment course is needed to deliver the accurate dose to the patients.
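
A generic B-spline deformable registration of two CBCT volumes can be sketched with SimpleITK as below. This is a stand-in for whatever DIR system the study used, with assumed metric, optimizer, and mesh settings:

```python
import SimpleITK as sitk

def bspline_register(fixed, moving, mesh_size=(8, 8, 8)):
    """Sketch: B-spline DIR of a weekly CBCT (moving) to the reference
    image (fixed), then resampling onto the reference grid so dose can
    be recalculated on a common geometry."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg.SetInitialTransform(tx, inPlace=True)
    reg.SetInterpolator(sitk.sitkLinear)
    out_tx = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0)
```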

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. In encyclopedia texts, when an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, the omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. This problem is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While in zero anaphora resolution the candidates for the antecedent are only noun phrases in the same text, in our problem the title is also a candidate. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made in the third stage to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification over all noun phrases in the search space and select as antecedent the noun phrase classified with the highest confidence. In this paper, however, we propose to view antecedent search as the problem of assigning antecedent-indicator labels to a sequence of noun phrases; in other words, sequence labeling is employed for antecedent search in the text. We are the first to suggest this idea. To perform sequence labeling, we use a structural SVM which receives a sequence of noun phrases as input and returns a sequence of labels as output, where each output label indicates whether or not the corresponding noun phrase is the antecedent. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; the performance of our system is thus dependent on that of the syntactic analyzer, which is a limitation. When an antecedent is not found in the text, the system tries to use the title to restore the zero anaphor, based on binary classification using a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which means a state-of-the-art system can be developed with our technique. Future work enabling the system to utilize semantic information is expected to lead to a significant performance improvement.
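
The plain binary Pegasos update that the paper's modified structural algorithm builds on is compact enough to show directly; this sketch covers only the basic subgradient step, not the structural or sequence-labeling machinery:

```python
import numpy as np

def pegasos(X, y, lam=0.01, n_iters=10_000, seed=0):
    """Binary Pegasos: stochastic subgradient descent on the SVM objective.

    X: (n, d) feature matrix, y: labels in {-1, +1}, lam: regularization.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)                 # decreasing step-size schedule
        if y[i] * (X[i] @ w) < 1.0:           # margin violated: hinge active
            w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
        else:                                 # only the regularizer pulls w
            w = (1.0 - eta * lam) * w
    return w
```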