• Title/Summary/Keyword: Visualization of information


Operational Ship Monitoring Based on Multi-platforms (Satellite, UAV, HF Radar, AIS)

  • Kim, Sang-Wan;Kim, Donghan;Lee, Yoon-Kyung;Lee, Impyeong;Lee, Sangho;Kim, Junghoon;Kim, Keunyong;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing / v.36 no.2_2 / pp.379-399 / 2020
  • The detection of illegal ships is one of the key factors in building a marine surveillance system. Effective marine surveillance requires the means for continuous monitoring over a wide area. In this study, the feasibility of ship detection monitoring based on the integration of satellite SAR, HF radar, UAV, and AIS was investigated. Considering the temporal and spatial resolution of each platform, the ship monitoring scenario consists of a regular surveillance system using HF radar and AIS data, and an event monitoring system using satellites and UAVs. The regular surveillance system still has limitations in accuracy and in detecting small ships due to the low spatial resolution of HF radar data. However, the event monitoring system using satellite SAR data effectively detects illegal ships by cross-checking against AIS data, and the ship speed and heading estimated from SAR images, or ship tracking information from HF radar data, can serve as the main input for the transition to UAV monitoring. To validate the monitoring scenario, a comprehensive field experiment was conducted on June 25-26, 2019, on the west side of Hongwon Port in Seocheon. KOMPSAT-5 SAR images, UAV data, HF radar data, and AIS data were successfully collected and analyzed by applying the algorithms developed for each platform. The developed system will serve as the basis for the regular and event ship monitoring scenarios, as well as for the visualization of the data and analysis results collected from the multiple platforms.
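The AIS cross-check at the core of the event monitoring scenario can be sketched in a few lines: a radar or SAR detection with no AIS report nearby becomes a candidate illegal ("dark") ship. This is a minimal illustration, not the authors' algorithm; the positions and the 1 km matching radius are hypothetical.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_dark_ships(detections, ais_reports, max_km=1.0):
    """Return detections with no AIS report within max_km: candidate illegal ships."""
    return [det for det in detections
            if not any(haversine_km(det[0], det[1], a[0], a[1]) <= max_km
                       for a in ais_reports)]

# Hypothetical positions near Hongwon Port; only the first ship broadcasts AIS.
sar_detections = [(36.10, 126.50), (36.20, 126.60)]
ais_positions = [(36.101, 126.501)]
print(flag_dark_ships(sar_detections, ais_positions))  # the second detection is "dark"
```

In an operational system the same matching would also use timestamps and the SAR-estimated speed and heading, but the spatial cross-check above is the essential step.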

Implementation of Reporting Tool Supporting OLAP and Data Mining Analysis Using XMLA

  • Choe, Jee-Woong;Kim, Myung-Ho
    • Journal of KIISE:Computing Practices and Letters / v.15 no.3 / pp.154-166 / 2009
  • Database query and reporting tools, OLAP tools, and data mining tools are typical front-end tools in a Business Intelligence (BI) environment, which supports gathering, consolidating, and analyzing data produced from business operation activities and provides enterprise users with access to the results. Traditional reporting tools have the advantage of creating sophisticated dynamic reports, including SQL query result sets, which look like documents produced by word processors, and of publishing those reports to the Web; however, their data sources are limited to RDBMSs. On the other hand, OLAP tools and data mining tools each provide powerful information analysis functions in their own way, but their built-in visualization components for analysis results are limited to tables and some charts. Thus, this paper presents a system that integrates the three typical front-end tools so that they complement one another in a BI environment. Traditional reporting tools only have a query editor for generating SQL statements to bring data from an RDBMS; the reporting tool presented in this paper can also extract data from OLAP and data mining servers, because editors for OLAP and data mining query requests have been added to the tool. Traditional systems produce all documents on the server side, a structure that lets reporting tools avoid repeatedly regenerating a document when many clients access the same dynamic document. Because this system instead targets a few users generating documents for data analysis, it generates documents on the client side, and it therefore includes a processing mechanism to handle large volumes of data despite the limited memory capacity of the client-side report viewer. The tool also has a data structure for integrating data from the three kinds of data sources into one document. Finally, most traditional front-end tools for BI depend on the data source architecture of a specific vendor. To overcome this problem, the system uses XMLA, a protocol based on web services, to access OLAP and data mining services from various vendors.
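XMLA reaches vendor-independent OLAP servers by POSTing a SOAP envelope whose Execute method carries an MDX statement. A minimal request builder, as a sketch (the MDX statement and catalog name are hypothetical; a real client would send this over HTTP and parse the multidimensional result set):

```python
# Namespaces defined by the SOAP 1.1 and XML for Analysis specifications.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
XMLA_NS = "urn:schemas-microsoft-com:xml-analysis"

def build_execute_request(mdx, catalog):
    """Wrap an MDX statement in the SOAP envelope expected by XMLA's Execute method."""
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        "<soap:Body>"
        f'<Execute xmlns="{XMLA_NS}">'
        f"<Command><Statement>{mdx}</Statement></Command>"
        "<Properties><PropertyList>"
        f"<Catalog>{catalog}</Catalog>"
        "<Format>Multidimensional</Format>"
        "</PropertyList></Properties>"
        "</Execute>"
        "</soap:Body></soap:Envelope>"
    )

request = build_execute_request(
    "SELECT [Measures].[Sales] ON COLUMNS FROM [Cube]", "SalesDB")
```

Because the request is plain XML over a web service, the same reporting tool can address any OLAP or data mining server that implements the protocol, which is the vendor-independence argument of the paper.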

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal or superior to those of people in many fields, including image and speech recognition. Many efforts have been made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, service, and education. Major platforms on which complex AI algorithms for learning, reasoning, and recognition can be developed have been opened to the public as open source projects, and technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. The spread of the technology is also greatly indebted to open source software developed by major global companies supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which have been developed through the online collaboration of many parties. A list of major AI-related projects created from 2000 to July 2018 on Github was searched and collected, and the development trends of major technologies were examined in detail by applying text mining techniques to the topic information that characterizes the collected projects and their technical fields. The analysis showed that fewer than 100 software development projects were started per year until 2013, rising to 229 projects in 2014 and 597 in 2015. The number of AI-related open source projects then increased rapidly, to 2,559 in 2016. The number of projects initiated in 2017 was 14,213, almost four times the 3,555 projects generated in total from 2009 to 2016, and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the top ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten and were replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, also appeared frequently. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that the visualization and medical imaging topics rose to the top of the list, although they were not at the top from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Computer vision, although in the top 10 by appearance frequency from 2013 to 2015, was not in the top 10 by degree centrality; otherwise, the topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing only slightly. Examining the trend of technology development through appearance frequency and degree centrality, machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both technologies have shown high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the topics mentioned above. Based on these results, it is possible to identify the fields in which AI technologies are actively being developed, and the results of this study can serve as a baseline dataset for more empirical analysis of future technology trends.
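The degree centrality measure used above is computed over a topic co-occurrence network: two topics are linked when they appear on the same project, and a topic's centrality is its degree normalized by the number of other topics. A minimal sketch with hypothetical Github topic lists (not the paper's data):

```python
from collections import defaultdict
from itertools import combinations

def degree_centrality(projects):
    """Build a topic co-occurrence network from per-project topic lists and
    return each topic's normalized degree centrality, degree / (n - 1)."""
    neighbors = defaultdict(set)
    for topics in projects:
        # Every pair of topics on the same project becomes an edge.
        for a, b in combinations(sorted(set(topics)), 2):
            neighbors[a].add(b)
            neighbors[b].add(a)
    n = len(neighbors)
    return {t: len(nb) / (n - 1) for t, nb in neighbors.items()}

# Hypothetical project topic lists
projects = [
    ["machine-learning", "tensorflow", "python"],
    ["machine-learning", "deep-learning", "python"],
    ["machine-learning", "computer-vision"],
]
dc = degree_centrality(projects)
# "machine-learning" co-occurs with all four other topics, so its centrality is 1.0
```

Ranking topics by this value per phase reproduces the kind of comparison the study makes between appearance frequency and degree centrality.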

Design of Cloud-Based Data Analysis System for Culture Medium Management in Smart Greenhouses

  • Heo, Jeong-Wook;Park, Kyeong-Hun;Lee, Jae-Su;Hong, Seung-Gil;Lee, Gong-In;Baek, Jeong-Hyun
    • Korean Journal of Environmental Agriculture / v.37 no.4 / pp.251-259 / 2018
  • BACKGROUND: Various culture media have been used for hydroponic cultures of horticultural plants in smart greenhouses with natural and artificial light. In a smart farm system, management of the culture medium, that is, control of the medium volume and of the components absorbed by plants during the cultivation period, is performed with ICT (Information and Communication Technology) and/or IoT (Internet of Things). This study was conducted to develop a cloud-based data analysis system for the effective management of culture media applied to hydroponic culture and plant growth in smart greenhouses. METHODS AND RESULTS: A conventional inorganic Yamazaki medium and organic media derived from agricultural byproducts such as immature fruits, leaves, or stems were used as hydroponic culture media. Component changes of the solutions according to growth stage were monitored, and plant growth was observed. Red and green lettuce seedlings (Lactuca sativa L.) that had developed 2~3 true leaves were used as plant materials. The seedlings were grown hydroponically in a smart greenhouse under fluorescent and light-emitting diode (LED) lights at a light intensity of 150 μmol/m²/s for 35 days. Growth data of the seedlings were classified and stored, on the basis of growth parameters, in a relational database in a virtual machine generated from an OpenStack cloud system. The relation between plant growth and the absorption pattern of 9 inorganic components in the media during the cultivation period was investigated. The stored data on component changes and growth parameters were visualized on the web through a web framework and Node JS. CONCLUSION: Time-series changes of inorganic components in the culture media were observed. The increases in unfolded leaves and in fresh weight of the seedlings depended mainly on macroelements such as NO₃-N and were affected by the different inorganic and organic media. With the developed data analysis system, actual measurement data can be accessed from the user's smart device, and analyses and comparisons of the data are visualized graphically in time series based on the cloud database. Agricultural management involving data visualization and/or plant growth can be implemented by the data analysis system across agricultural sites regardless of changes in the culture environment.

Usefulness of Region Cut Subtraction in Fusion & MIP 3D Reconstruction Image

  • Moon, A-Reum;Chi, Yong-Gi;Choi, Sung-Wook;Lee, Hyuk;Lee, Kyoo-Bok;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.18-23 / 2010
  • Purpose: PET/CT combines functional and morphologic data and increases diagnostic accuracy in a variety of malignancies. In particular, Fusion PET/CT images and MIP (Maximum Intensity Projection) images reconstructed from 2-dimensional into 3-dimensional images are useful for visualizing lesions. In Fusion & MIP 3D reconstruction images, however, hot uptake from urine or a urostomy bag can overlap a lesion, making it difficult to distinguish the lesion with the naked eye. This study attempts to improve lesion distinction by removing such regions of hot uptake. Materials and Methods: The study included patients who visited our hospital from September 2008 to March 2009 and showed a large residual urine volume on PET/CT examination due to disease of the uterus, bladder, or rectum. We used the Volume Viewer program of GE's Advantage Workstation AW4.3 05. As the analysis method, an ROI was set over the region to be removed in the axial volume image, Cut Outside was selected, and the same procedure was applied in the coronal volume image. Next, the minimum value was adjusted in Threshold of 3D Tools, and subtraction was selected in Advanced Processing. Fusion & MIP images produced this way were compared with images made without Region Cut Definition. Results: In the Fusion & MIP 3D reconstruction images, regions of hot uptake caused by the patient's urine could be removed using the Region Cut Subtraction of Advantage Workstation AW4.3 05, and lesions were clearly distinguishable in the images reconstructed with Region Cut Definition. Conclusion: For patients showing hot uptake due to the urine volume in the bladder, removing the regions of hot uptake during image reconstruction can offer better diagnostic information than conventional image subtraction. Especially for diseases of the uterus, bladder, and rectum, it will contribute to the qualitative improvement of the image.
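The workstation steps above are vendor-specific, but the underlying idea is simple: suppress high-intensity voxels inside a chosen ROI before projecting, so the hot uptake no longer dominates the MIP. A toy sketch of that idea on a tiny nested-list volume (the volume, ROI box, and threshold are hypothetical, not the AW4.3 workflow itself):

```python
def region_cut(volume, roi, threshold):
    """Zero voxels inside the ROI box (z0, z1, y0, y1, x0, x1) whose intensity
    exceeds threshold - a stand-in for the workstation's region-cut subtraction."""
    z0, z1, y0, y1, x0, x1 = roi
    for z in range(z0, z1):
        for y in range(y0, y1):
            for x in range(x0, x1):
                if volume[z][y][x] > threshold:
                    volume[z][y][x] = 0
    return volume

def mip(volume):
    """Maximum intensity projection along the axial (z) axis."""
    depth, rows, cols = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(depth)) for x in range(cols)]
            for y in range(rows)]

# 2x2x2 toy volume: a "bladder" voxel (100) masks everything at (y=0, x=0);
# the "lesion" (40) at (y=1, x=1) is unaffected by the cut.
vol = [[[100, 5], [5, 5]],
       [[10, 5], [5, 40]]]
region_cut(vol, (0, 1, 0, 1, 0, 1), threshold=50)
print(mip(vol))  # [[10, 5], [5, 40]]
```

After the cut, the projection at the bladder position falls from 100 to the next-brightest voxel along that ray, while the lesion's projected value is preserved, which is the effect the study reports.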


Analysis of Traffic Accidents Injury Severity in Seoul using Decision Trees and Spatiotemporal Data Visualization

  • Kang, Youngok;Son, Serin;Cho, Nahye
    • Journal of Cadastre & Land InformatiX / v.47 no.2 / pp.233-254 / 2017
  • The purpose of this study is to analyze the main factors influencing the severity of traffic accidents and to visualize the spatiotemporal characteristics of traffic accidents in Seoul. To do this, we collected data on traffic accidents that occurred in Seoul over the four years from 2012 to 2015 and classified them as slight, serious, or fatal according to severity. The spatiotemporal characteristics of the accidents were analyzed by kernel density analysis, hotspot analysis, space-time cube analysis, and emerging hotspot analysis, and the factors affecting severity were analyzed using a decision tree model. The results show that traffic accidents in Seoul are more frequent in the suburbs than in central areas. In particular, accidents were concentrated in some commercial and entertainment areas in Seocho and Gangnam, and they intensified over time. For fatal accidents, there were statistically significant hotspot areas in Yeongdeungpo-gu, Guro-gu, Jongno-gu, Jung-gu, and Seongbuk, although hotspots of fatal accidents by time zone showed different patterns. Regarding severity, the type of accident is the most important factor, followed in order of importance by the type of road, the type of vehicle, the time of the accident, and the type of regulation violated. As for decision rules leading to serious accidents, for vans and trucks, a serious accident is highly probable where the road is wide and vehicle speeds are high; for bicycles, cars, motorcycles, and other vehicles, a serious accident is highly probable under the same circumstances during the dawn hours.
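A decision tree ranks factors like those above by repeatedly choosing the split that best separates severity classes, typically by minimizing Gini impurity. A minimal single-split sketch of that criterion, with hypothetical accident records (this illustrates the splitting rule, not the study's fitted model):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(rows, labels):
    """Pick the (feature index, value) equality split minimizing weighted Gini
    impurity - the criterion a decision tree applies recursively at each node."""
    best, best_score = None, float("inf")
    for f in range(len(rows[0])):
        for v in {r[f] for r in rows}:
            left = [l for r, l in zip(rows, labels) if r[f] == v]
            right = [l for r, l in zip(rows, labels) if r[f] != v]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if score < best_score:
                best_score, best = score, (f, v)
    return best

# Hypothetical records: (vehicle type, road width) -> severity
rows = [("truck", "wide"), ("truck", "wide"), ("car", "wide"), ("car", "narrow")]
labels = ["serious", "serious", "slight", "slight"]
feature, value = best_split(rows, labels)
# With this toy data, vehicle type (feature 0) separates the classes perfectly.
```

Applying the split recursively to each branch, and reading off how early each feature is used, yields the importance ordering reported in the abstract.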

Differences in Eye Movement during the Observing of Spiders by University Students' Cognitive Style - Heat map and Gaze plot analysis -

  • Yang, Il-Ho;Choi, Hyun-Dong;Jeong, Mi-Yeon;Lim, Sung-Man
    • Journal of Science Education / v.37 no.1 / pp.142-156 / 2013
  • The purpose of this study was to analyze observation characteristics through eye movement according to cognitive style. To this end, an observation task that could reveal differences between the wholistic and analytic cognitive style groups was developed, and the eye movements of university students with different cognitive styles were measured while they performed the task. Differences between the two groups were confirmed by analyzing the collected statistics and visualization data. The findings were as follows. First, comparing observation sequence and pattern by cognitive style, the analytic group attended to the spider first and then moved to the surrounding environment, whereas the wholistic group had no fixed pattern, alternately observing the spider itself and its surroundings or looking closely at a particular part first. When observing entire and partial features, the wholistic group moved its fixation between salient elements without a fixed pattern, while the analytic group showed a certain directionality and repeated investigation. Second, comparing the proportion of observation, the analytic group devoted a large part to the spider itself, while the wholistic group gave weight to the area surrounding the spider; the analytic group showed higher concentration when observing partial features, and the wholistic group when observing wholistic features. The wholistic group gave more importance than the analytic group to partial features of the surrounding area and to the wholistic features of the spider, whereas the analytic group focused more on partial features of the spider. These results show that observation time, frequency, object, area, sequence, pattern, and proportion differ by cognitive style. They suggest that students' varied outcomes stem from differences in the information they gather according to their cognitive style, and they can help determine what kind of observation practice is suitable for each student.


Numerical Calculations of IASCC Test Worker Exposure using Process Simulations

  • Chang, Kyu-Ho;Kim, Hae-Woong;Kim, Chang-Kyu;Park, Kwang-Soo;Kwak, Dae-In
    • Journal of the Korean Society of Radiology / v.15 no.6 / pp.803-811 / 2021
  • In this study, the radiation exposure of IASCC test workers was evaluated by applying process simulation technology. Using DELMIA Version 5, a commercial process simulation code, the IASCC test facility, hot cells, and workers were modeled, IASCC test activities were implemented, and the cumulative exposure of workers passing through the dose-distributed space could be evaluated through user coding. To simulate worker behavior, human manikins with 200 or more degrees of freedom imitating the human musculoskeletal system were used. To calculate a worker's exposure, the coordinates, start time, and retention period of each posture were extracted from the sub-information of the human manikin task, and the cumulative exposure was calculated by multiplying the spatial dose value by the posture retention time. The spatial dose for the exposure evaluation was calculated using MCNP6 Version 1.0, and the calculated spatial dose was embedded into the process simulation domain. Comparing the exposure evaluation by process simulation with a typical exposure evaluation, the annual exposure for daily test work at the regular entrance was predicted at 0.388 mSv/year and 1.334 mSv/year, respectively. Exposure assessment was also performed for special tasks carried out in areas with high spatial doses; tasks with high exposure could be easily identified, and work improvement plans could be derived intuitively through visualization of the manikin postures and the spatial doses of the tasks.
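The accumulation rule described above, spatial dose rate at each posture's position multiplied by the posture's retention time and summed over the task, can be sketched directly. The grid positions, dose rates, and retention times below are hypothetical stand-ins for the MCNP6-computed dose field and the manikin task data:

```python
def cumulative_dose(postures, dose_map):
    """Sum dose rate (mSv/h) at each posture's position times its retention
    time (h) - the per-task accumulation rule used in the simulation."""
    return sum(dose_map[position] * hours for position, hours in postures)

# Hypothetical spatial dose rates (mSv/h) at grid positions from a transport code
dose_map = {(0, 0): 0.002, (1, 0): 0.010, (1, 1): 0.050}
# (position, retention time in hours) for each posture of one task
postures = [((0, 0), 0.5), ((1, 0), 0.25), ((1, 1), 0.1)]
print(cumulative_dose(postures, dose_map))
```

Multiplying the per-task total by the number of task repetitions per year gives the annual figures the study compares against a typical exposure evaluation.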

Development and Feasibility Study for Phase Contrast MR Angiography at Low Tesla Open-MRI System

  • Lee, Dong-Hoon;Hong, Cheol-Pyo;Lee, Man-Woo;Han, Bong-Soo
    • Progress in Medical Physics / v.23 no.3 / pp.177-187 / 2012
  • Magnetic resonance angiography (MRA) techniques are widely used in the diagnosis of vascular disorders such as stenosis and aneurysm. In particular, phase contrast (PC) MRA, a typical non-contrast-enhanced MRA technique, provides not only the anatomy of blood vessels but also the flow velocity. In this study, we developed 2- and 3-dimensional PC MRA pulse sequences for a low-field MRI system. Vessel images were acquired using 2D and 3D PC MRA, and blood flow velocities were measured in the superior sagittal sinus, the straight sinus, and the confluence of the two. 2D PC MRA provided good vascular image quality for large vessels but poor quality for small ones. Although 3D PC MRA visualized small vessels better than 2D PC MRA, the image quality was not sufficient for diagnosis of small vessels due to the low SNR and field homogeneity of the low-field MRI system. The measured blood velocities were 25.46±0.73 cm/sec, 24.02±0.34 cm/sec, and 26.15±1.50 cm/sec in the superior sagittal sinus, the straight sinus, and their confluence, respectively, in good agreement with previously reported experimental values. Thus, the developed PC MRA technique for low-field MRI systems is expected to provide useful velocity information for the diagnosis of large brain vessels.
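PC MRA recovers velocity from the phase difference between two flow-encoded acquisitions via the standard linear relation v = VENC · Δφ / π, where VENC is the velocity-encoding limit of the sequence. A minimal sketch (the VENC value below is an assumed setting, not the one used in the paper):

```python
import math

def pc_velocity(phase_diff_rad, venc_cm_s):
    """Convert a phase-contrast phase difference (radians) to flow velocity
    (cm/s) using the standard PC-MRA relation v = VENC * dphi / pi."""
    return venc_cm_s * phase_diff_rad / math.pi

# With an assumed VENC of 30 cm/s, a pi/2 phase shift maps to 15 cm/s,
# and a full +/-pi shift saturates at +/-VENC (aliasing beyond that).
print(pc_velocity(math.pi / 2, 30.0))
```

Applying this voxel by voxel to the phase-difference images yields the sinus velocity maps from which the reported values were measured.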

Construction of Gene Network System Associated with Economic Traits in Cattle

  • Lim, Dajeong;Kim, Hyung-Yong;Cho, Yong-Min;Chai, Han-Ha;Park, Jong-Eun;Lim, Kyu-Sang;Lee, Seung-Su
    • Journal of Life Science / v.26 no.8 / pp.904-910 / 2016
  • Complex traits are determined by the combined effects of many loci and are affected by gene networks or biological pathways. Systems biology approaches play an important role in identifying candidate genes related to complex diseases or traits at the system level. Gene network analysis has been performed with diverse methods, such as gene co-expression, gene regulatory relationships, protein-protein interaction (PPI), and genetic networks, and network-based methods for predicting gene function have been described, such as graph-theoretic methods, neighborhood-counting-based methods, and weighted functions. However, only a limited number of such studies exist in livestock. The present study systematically analyzed genes associated with 102 types of economic traits based on the Animal Trait Ontology (ATO) and identified their relationships based on gene co-expression and PPI networks in cattle. We then constructed two types of gene network databases and a network visualization system (http://www.nabc.go.kr/cg). A gene co-expression network was generated from the expression values of bovine genes, and the PPI network was constructed from the Human Protein Reference Database based on the orthologous relationship between human and cattle. Finally, candidate genes and their network relationships were identified for each trait; topologically, they were central in the gene network, with large degree and betweenness centrality (BC) values. The Ontle program was applied to generate the database and to visualize the gene network results. This information will serve as a valuable resource for exploiting genomic functions that influence economically and agriculturally important traits in cattle.
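The betweenness centrality used above to rank candidate genes counts, for every pair of nodes, the fraction of shortest paths that pass through a given node. A brute-force sketch on a hypothetical hub-gene network (fine for the small per-trait subnetworks in question; large networks would need Brandes' algorithm):

```python
from collections import deque

def shortest_paths(graph, s, t):
    """All shortest paths from s to t by breadth-first path expansion."""
    result, best = [], None
    queue = deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # all remaining paths are longer than the shortest
        if path[-1] == t:
            best = len(path)
            result.append(path)
            continue
        for nb in graph[path[-1]]:
            if nb not in path:
                queue.append(path + [nb])
    return result

def betweenness(graph):
    """For every ordered pair (s, t), add to each intermediate node the
    fraction of shortest s-t paths passing through it."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        for t in graph:
            if s == t:
                continue
            paths = shortest_paths(graph, s, t)
            if not paths:
                continue
            for v in graph:
                if v not in (s, t):
                    bc[v] += sum(1 for p in paths if v in p) / len(paths)
    return bc

# Hypothetical network: gene "H" bridges three otherwise unconnected genes.
g = {"A": ["H"], "B": ["H"], "C": ["H"], "H": ["A", "B", "C"]}
print(betweenness(g))  # "H" lies on every shortest path between A, B, and C
```

Genes with large values of this measure (and of degree) are exactly the topologically central candidates the study highlights per trait.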