• Title/Summary/Keyword: MAP algorithm

Search Results: 1,986

A Feasibility Study on Using Neural Network for Dose Calculation in Radiation Treatment (방사선 치료 선량 계산을 위한 신경회로망의 적용 타당성)

  • Lee, Sang Kyung;Kim, Yong Nam;Kim, Soo Kon
    • Journal of Radiation Protection and Research / v.40 no.1 / pp.55-64 / 2015
  • Dose calculations, a crucial requirement for radiotherapy treatment planning systems, must be both accurate and fast. Conventional treatment planning dose algorithms are fast but lack precision, whereas Monte Carlo methods are the most accurate but time consuming. A combined system, in which Monte Carlo methods calculate part of the domain of interest and a neural network calculates the rest, can compute the dose distribution both rapidly and accurately. A preliminary study showed that neural networks can map functions containing discontinuities and inflection points, which dose distributions in inhomogeneous media also have. The scaled conjugate gradient algorithm and the Levenberg-Marquardt algorithm, used to train the neural network with different numbers of neurons, were compared in terms of performance. Finally, dose distributions of a homogeneous phantom calculated by a commercial treatment planning system were used as training data for the neural network. For the homogeneous phantom, the mean squared error of the percent depth dose was 0.00214. Further work is planned to develop the neural network model for three-dimensional dose calculations in homogeneous and inhomogeneous phantoms.
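
Not from the paper itself, but a minimal sketch of the idea it describes: a small feed-forward network fitted to a one-dimensional percent-depth-dose curve, with the mean squared error reported as in the abstract. The depth grid, the toy dose curve, and the network size are placeholders, and scikit-learn's L-BFGS solver only stands in for the scaled conjugate gradient and Levenberg-Marquardt training used in the study.

```python
# Hypothetical sketch: fit a percent-depth-dose curve with a small MLP.
# Depths/doses are synthetic placeholders, not the paper's data; the "lbfgs"
# solver stands in for the SCG / Levenberg-Marquardt training in the study.
import numpy as np
from sklearn.neural_network import MLPRegressor

depth_cm = np.linspace(0.0, 20.0, 200).reshape(-1, 1)      # depth along the beam axis
pdd = 100.0 * np.exp(-0.05 * depth_cm).ravel()             # toy depth-dose curve (%)

net = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(depth_cm, pdd / 100.0)                              # train on normalized dose

mse = np.mean((net.predict(depth_cm) - pdd / 100.0) ** 2)   # cf. 0.00214 in the abstract
print(f"MSE of percent depth dose fit: {mse:.5f}")
```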

Implementation of Monitoring System of the Living Waste based on Artificial Intelligence and IoT (AI 및 IoT 기반의 생활 폐기물 모니터링 시스템 구현)

  • Kim, Sang-Hyun;Kang, Young-Hoon;Yoon, Dal-Hwan
    • Journal of IKEEE / v.24 no.1 / pp.302-310 / 2020
  • In this paper, we implement a living-waste analysis system based on IoT and AI (artificial intelligence) and propose an effective waste processing and management method. Jeju is better placed than other regions to devise strategies and estimate waste quantities. In particular, the variation in waste volume between residents and tourists can be recognized, one good example being a specific waste levy. We therefore developed an IoT device that interconnects with existing CCTV cameras and used an AI algorithm to analyze the waste images. Based on the results of this image analysis, the handling commands and resulting decisions can be delivered to the map displays of the waste-collection vehicles. To evaluate the IoT device, electromagnetic compatibility was tested under the national certification standards KN-32 and KN61000-4-2~6, and stable results were obtained. In further experiments, the simulated waste images and the artificial intelligence algorithm can be applied to a data structure for precise handling commands.

Finding Stop Position of Taxis using IoV data and road segment algorithm (IoV 데이터와 도로 분할 알고리즘을 이용한 택시 정차위치 파악)

  • Lim, Dong-jin;Onueam, Athita;Jung, Han-min
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.590-592 / 2018
  • Taxis that are illegally parked on the road to pick up customers can cause traffic congestion and sometimes traffic accidents. Where taxis stop is determined by the long-term experience of taxi drivers. In this study, we provide first-time visiting drivers and customers with information by identifying taxi stop positions by time of day. To do this, we used Internet of Vehicles (IoV) data collected from sensors installed in 40 taxis. Previous studies attempted this by forming clusters around taxis; because such methods are taxi-centered, the cluster positions change with the taxis' locations. In this study, we use a road segmentation algorithm to solve this problem. Unlike previous studies, the clusters are formed around the road, so their positions are fixed and unaffected by the number of taxis, which makes it possible to identify stop positions in real time. The road is segmented into 30 m units, and the taxi location data, divided into hourly, weekday, and weekend sets, are mapped to the nearest segment point. As a result of the mapping, it was difficult to see large differences by time of day on weekends because few taxis operated then, but on weekdays a clear difference in stop positions between commuting hours and night-time hours was confirmed. The results suggest that this approach could be used to prevent illegal taxi stopping and to propose locations for taxi stands.
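
A rough sketch of the road-segment idea described above, under stated assumptions: cut a road polyline into roughly 30 m segments, snap each taxi GPS fix to the nearest segment midpoint, and count stops per segment and hour. The coordinates, the flat degrees-to-metres conversion, and the sample data are placeholders, not the paper's implementation.

```python
# Hypothetical sketch: 30 m road segments, nearest-segment snapping, hourly counts.
from collections import Counter
import math

SEG_LEN_M = 30.0
M_PER_DEG = 111_000.0  # rough metres per degree; adequate for a sketch

def segment_midpoints(polyline):
    """Split a road polyline [(lat, lon), ...] into ~30 m segment midpoints."""
    mids = []
    for (lat1, lon1), (lat2, lon2) in zip(polyline, polyline[1:]):
        dist = math.hypot(lat2 - lat1, lon2 - lon1) * M_PER_DEG
        n = max(1, int(dist // SEG_LEN_M))
        for i in range(n):
            t = (i + 0.5) / n
            mids.append((lat1 + t * (lat2 - lat1), lon1 + t * (lon2 - lon1)))
    return mids

def nearest_segment(point, mids):
    lat, lon = point
    return min(range(len(mids)),
               key=lambda i: (mids[i][0] - lat) ** 2 + (mids[i][1] - lon) ** 2)

road = [(33.500, 126.530), (33.500, 126.540)]                   # toy road polyline
mids = segment_midpoints(road)
stops = [((33.5001, 126.5312), 8), ((33.5000, 126.5391), 23)]   # (GPS fix, hour)

counts = Counter((nearest_segment(p, mids), hour) for p, hour in stops)
print(counts)  # {(segment index, hour): number of stops}
```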


Automatic Extraction of Training Dataset Using Expectation Maximization Algorithm - for Automatic Supervised Classification of Road Networks (기대최대화 알고리즘을 활용한 도로노면 training 자료 자동추출에 관한 연구 - 감독분류를 통한 도로 네트워크의 자동추출을 위하여)

  • Han, You-Kyung;Choi, Jae-Wan;Lee, Jae-Bin;Yu, Ki-Yun;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.2 / pp.289-297 / 2009
  • In this paper, we propose a methodology for automatically extracting a training dataset for the supervised classification of road networks. As preprocessing, we co-register airborne photos, LIDAR data, and large-scale digital maps, and then create orthophotos and intensity images. By overlaying the large-scale digital maps onto the generated images, we can extract an initial training dataset for the supervised classification of road networks. However, this initial training information is distorted by errors propagated from the registration process, and road networks generally contain various objects such as asphalt, road markings, vegetation, and cars. To generate training information for the road surface only, we therefore apply the Expectation Maximization technique and finally extract a training dataset of the road surface. For the accuracy test, we compare the training dataset with a manually extracted one. Statistical tests show that the developed method is valid.
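
As a minimal sketch of the EM refinement step, assuming a two-component mixture: fit a Gaussian mixture (EM) to the intensity values inside the road mask taken from the digital map, and keep only the pixels of the dominant component as road-surface training samples. The component count and the synthetic intensities are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch: EM (Gaussian mixture) used to keep road-surface pixels
# and reject road marks, cars, vegetation, etc. inside the initial road mask.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
road_mask_intensity = np.concatenate([
    rng.normal(60, 5, 900),    # asphalt-like intensities (dominant)
    rng.normal(200, 10, 100),  # brighter non-road objects within the mask
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(road_mask_intensity)
labels = gmm.predict(road_mask_intensity)
dominant = np.bincount(labels).argmax()                 # assume road surface dominates
training_pixels = road_mask_intensity[labels == dominant]
print(len(training_pixels), "road-surface training samples kept")
```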

Traffic Information Extraction and Application When Utilizing Vehicle GPS Information (차량의 GPS 정보를 활용한 도로정보 추출 및 적용 방법)

  • Lee, Jong-Sung;Jeon, Min-Ho;Cho, Kyoung-Woo;Oh, Chang-Heon
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.12 / pp.2960-2965 / 2013
  • Customized services for individuals based on the analysis of recently collected GPS information have been investigated from various aspects, and as the volume of collected GPS data grows, a variety of services is being released accordingly. Existing studies, however, are limited to presenting service models for users, and there has been little work on intelligent computing technologies that introduce GPS information directly into the system. This study proposes an algorithm that analyzes traffic information by feeding GPS information into the system. The proposed algorithm analyzes the map using the collected vehicle GPS information and a sectional traffic-information interpretation method, so that the computer judges traffic information that was previously assessed by humans. The experimental results show that the traffic information was properly analyzed when the given data were used. Results based on small amounts of analyzed data were less reliable, but the system maintained high reliability when the data were sufficient.
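
A very small sketch of what sectional interpretation could look like, under assumptions not taken from the paper: average the speeds of GPS samples falling in each road section and label the section's traffic state. The section IDs, sample speeds, and the 40 km/h threshold are placeholders.

```python
# Hypothetical sketch: per-section average speed from vehicle GPS samples.
from statistics import mean

# (section_id, speed_kmh) pairs derived from consecutive vehicle GPS fixes
samples = [("A", 62.0), ("A", 55.5), ("B", 18.2), ("B", 22.7), ("B", 15.0)]

by_section = {}
for section, speed in samples:
    by_section.setdefault(section, []).append(speed)

for section, speeds in by_section.items():
    avg = mean(speeds)
    state = "free flow" if avg >= 40 else "congested"   # assumed threshold
    print(f"section {section}: {avg:.1f} km/h -> {state}")
```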

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE: Software and Applications / v.34 no.7 / pp.646-654 / 2007
  • In recent years, the use of text inserted into TV content has grown to provide viewers with better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video. Video text extraction is the first step for video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction suffers from the low resolution of video and complex backgrounds. To address these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates based on corner density, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to text of various colors. Text region updating between frames is also exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
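
A compact sketch of the four-step pipeline named in the abstract, using OpenCV: Harris corner map, local corner density, labeling of dense regions, and a simple size-based post-processing step. All thresholds, kernel sizes, and the synthetic frame are assumptions for illustration, not the paper's parameters.

```python
# Hypothetical sketch: corner map -> corner density -> labeling -> post-processing.
import cv2
import numpy as np

# Synthetic frame with a bright "subtitle" near the bottom (placeholder input).
frame = np.zeros((240, 320), np.uint8)
cv2.putText(frame, "SUBTITLE", (40, 200), cv2.FONT_HERSHEY_SIMPLEX, 1, 255, 2)

# 1) corner map with the Harris detector
corners = cv2.cornerHarris(np.float32(frame), blockSize=2, ksize=3, k=0.04)
corner_map = (corners > 0.01 * corners.max()).astype(np.uint8)

# 2) text candidates from local corner density
density = cv2.boxFilter(corner_map.astype(np.float32), -1, (15, 15))
candidates = (density > 0.05).astype(np.uint8)          # assumed density threshold

# 3) text region determination by labeling connected candidate regions
n, labels, stats, _ = cv2.connectedComponentsWithStats(candidates)

# 4) post-processing: discard regions too small to be subtitle text
text_boxes = [stats[i][:4] for i in range(1, n)
              if stats[i][cv2.CC_STAT_AREA] > 300]
print(text_boxes)  # (x, y, w, h) of detected text regions
```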

A study on image region analysis and image enhancement using detail descriptor (디테일 디스크립터를 이용한 이미지 영역 분석과 개선에 관한 연구)

  • Lim, Jae Sung;Jeong, Young-Tak;Lee, Ji-Hyeok
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.6 / pp.728-735 / 2017
  • With the proliferation of digital devices, considerable additive white Gaussian noise is generated while acquiring digital images. The best-known denoising methods focus on eliminating the noise, so the detailed components that carry image information are removed in proportion as the noise is eliminated. The proposed algorithm preserves details while effectively removing noise. Its goal is to separate meaningful detail information in noisy images using edge strength and edge connectivity. Consequently, even as the noise level increases, it yields better denoising results than the benchmark methods because it extracts connected detail-component information. In addition, the proposed method effectively eliminated noise across various noise levels; compared with the benchmark algorithms, it shows higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) values. The high SSIM values confirm that the denoising results agree well with the human visual system (HVS).
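
A rough sketch of the detail-preserving idea, assuming a simple edge-strength-plus-connectivity detail mask rather than the paper's actual descriptor: pixels on sufficiently strong and sufficiently connected edges are kept, everything else is smoothed, and the result is scored with PSNR/SSIM as in the abstract. Thresholds and filter sizes are assumptions.

```python
# Hypothetical sketch: detail mask from edge strength and connectivity, then
# selective smoothing, scored with PSNR/SSIM (images assumed float in [0, 1]).
import numpy as np
from scipy import ndimage
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def denoise_keep_details(noisy, clean=None):
    strength = ndimage.sobel(noisy, axis=0) ** 2 + ndimage.sobel(noisy, axis=1) ** 2
    edges = strength > strength.mean() * 4                    # assumed strength threshold
    labels, n = ndimage.label(edges)
    sizes = ndimage.sum(edges, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= 20))   # connected details only
    smooth = ndimage.gaussian_filter(noisy, sigma=1.5)
    result = np.where(keep, noisy, smooth)
    if clean is not None:
        print("PSNR:", peak_signal_noise_ratio(clean, result, data_range=1.0))
        print("SSIM:", structural_similarity(clean, result, data_range=1.0))
    return result
```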

GIS Information Generation for Electric Mobility Aids Based on Object Recognition Model (객체 인식 모델 기반 전동 이동 보조기용 GIS 정보 생성)

  • Je-Seung Woo;Sun-Gi Hong;Dong-Seok Park;Jun-Mo Park
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.200-208 / 2022
  • In this study, an automatic information collection system and a geographic information construction algorithm for the transportation disadvantaged who use electric mobility aids are implemented with an object recognition model. The system recognizes objects that the user encounters while moving and acquires their coordinate information, providing a route selection map improved over the existing geographic information for the disabled. Data collection consists of a total of four layers, including the HW layer: image and location information is collected, transmitted to the server, recognized, and classified to extract the data needed to generate geographic information. A driving experiment was conducted in an actual barrier-free zone, and during this process it was confirmed how efficiently the algorithm collects actual data and generates geographic information. The geographic information processing performance was 70.92 EA/s in the first run, 70.69 EA/s in the second, and 70.98 EA/s in the third, an average of 70.86 EA/s over the three experiments, and it took about 4 seconds for results to be reflected in the actual geographic information. The experimental results confirm that mobility-impaired users of electric mobility aids can drive safely using the new geographic information, which is provided faster than at present.
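
As an illustration only, a minimal sketch of the collection step described above: pair each recognized object with the GPS fix at detection time and emit a geographic-information record for the server. The detector output, the GPS reader, and the record format are all hypothetical stand-ins, not the paper's interfaces.

```python
# Hypothetical sketch: attach coordinates and a timestamp to each detected object.
import json
import time

def detect_objects(frame):
    """Placeholder for the object recognition model (e.g. an obstacle detector)."""
    return [{"label": "bollard", "confidence": 0.91}]

def read_gps():
    """Placeholder for the mobility aid's GPS module."""
    return {"lat": 35.8562, "lon": 129.2247}

def build_records(frame):
    fix = read_gps()
    ts = time.time()
    return [{"object": det["label"], "confidence": det["confidence"],
             "lat": fix["lat"], "lon": fix["lon"], "timestamp": ts}
            for det in detect_objects(frame)]

print(json.dumps(build_records(frame=None), indent=2))   # records sent to the server
```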

Development of a Compound Classification Process for Improving the Correctness of Land Information Analysis in Satellite Imagery - Using Principal Component Analysis, Canonical Correlation Classification Algorithm and Multitemporal Imagery - (위성영상의 토지정보 분석정확도 향상을 위한 응용체계의 개발 - 다중시기 영상과 주성분분석 및 정준상관분류 알고리즘을 이용하여 -)

  • Park, Min-Ho
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.4D / pp.569-577 / 2008
  • The purpose of this study is to develop a compound classification process that combines multitemporal data and couples a specific image enhancement technique with a specific image classification algorithm, in order to obtain more accurate land information from satellite imagery. That is, this study proposes a classification process that applies canonical correlation classification after principal component analysis of the combined multitemporal data. The result of the proposed process is compared with canonical correlation classification of single-date images, of the multitemporal imagery, and of an image produced by principal component analysis of single-date images. The satellite images used are Landsat 5 TM images acquired on July 26, 1994 and September 1, 1996. Ground truth data for accuracy assessment were obtained from topographic maps and aerial photographs, and the whole study area was used for the assessment. The proposed compound classification process improved classification accuracy by 8.2% over applying canonical correlation classification to a single-date image alone, and it was especially effective in correctly classifying mixed urban areas. In conclusion, when extracting land cover information from Landsat TM imagery, applying canonical correlation classification after principal component analysis of multitemporal imagery is very useful.
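
To make the processing order concrete, here is a hedged sketch that stacks the two-date band vectors, applies PCA, and then runs a supervised classifier on the components. A generic linear discriminant classifier stands in for canonical correlation classification, and the pixel arrays and labels are synthetic placeholders rather than the Landsat data.

```python
# Hypothetical sketch: multitemporal stacking -> PCA -> supervised classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
date1 = rng.random((5000, 6))              # reflective TM bands, date 1 (toy values)
date2 = rng.random((5000, 6))              # same pixels, date 2
labels = rng.integers(0, 4, 5000)          # ground-truth land cover classes (toy)

stacked = np.hstack([date1, date2])        # mix the multitemporal data
components = PCA(n_components=6).fit_transform(stacked)

clf = LinearDiscriminantAnalysis().fit(components, labels)   # stand-in classifier
print("training accuracy:", clf.score(components, labels))
```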

GIS Optimization for Bigdata Analysis and AI Applying (Bigdata 분석과 인공지능 적용한 GIS 최적화 연구)

  • Kwak, Eun-young;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.171-173 / 2022
  • Fourth-industrial-revolution technologies are making people's lives more efficient. GIS-based Internet services such as traffic and travel-time information get people to their destinations more quickly. The National Geographic Information Service (NGIS) and each local government are building basic data to investigate SOC accessibility and analyze optimal locations. To construct the shortest distance, the accessibility from a starting point to a destination is analyzed: applying a road network map, the shortest path and optimal accessibility between the starting and ending points are calculated with Dijkstra's algorithm. Analysis from multiple starting points to multiple destinations required more than three manual analysis steps to determine the optimal location, with an error within about 0.1%, and the many-to-many (M×N) calculation took longer and required a computer with at least 32 GB of memory. If a more versatile optimal proximity analysis service is provided at a desired location, it becomes possible to efficiently analyze locations with poor access to start-up and living facilities and to select sites for public facilities.
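
A minimal sketch of the accessibility calculation named above: Dijkstra's algorithm over a tiny weighted road-network graph. The node names and edge weights are made-up placeholders; a real analysis would run this over the road network map from origin to destination points.

```python
# Hypothetical sketch: Dijkstra's shortest-path distances on a toy road graph.
import heapq

def dijkstra(graph, start):
    """Shortest distance from start to every reachable node."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist

road_network = {                      # toy weighted road graph (travel minutes)
    "start": {"A": 4, "B": 7},
    "A": {"B": 2, "dest": 8},
    "B": {"dest": 3},
}
print(dijkstra(road_network, "start"))   # accessibility from "start" to every node
```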
