• Title/Summary/Keyword: Information visualization


Sustaining Dramatic Communication Between the Audience and Characters through Realization: Focusing on the Film 'Poetry' (관객과 인물의 극적소통을 위한 사실화연구 : 영화 '시'를 중심으로)

  • Kim, Dong-Hyun
    • Cartoon and Animation Studies
    • /
    • s.24
    • /
    • pp.173-197
    • /
    • 2011
  • Through a story, the audience moves between fiction and reality. A story is an emotional experience that appeals to human feeling. The rational function of a story is to convey knowledge and information, and its emotional function is to move the audience. Moreover, these aspects of a story are linked to its language, text, and imagery. This paper focuses on the emotional function of a story. In an experiential story, the audience's emotional response is the result of maximal dramatic communication between the audience and the characters. Through psychological and mental communion with the characters, the audience becomes immersed in the story when they emotionally identify with the characters, and dramatic communication is achieved. However, dramatic communication is usually achieved only instantaneously. The elements of a film need to be realized so as to sustain dramatic communication and keep the audience immersed in the story. The audience can identify with characters placed in real-life situations by considering the characters' external and internal aspects. External search pertains to the tangible aspects of a character, such as their background, life, and conversation; through the audience's external search, the characters communicate with the audience. Internal search deals with aspects of the characters' personality, such as their self-concept, desires, and internal conflicts; through internal search, the audience understands the inner side of the characters. In this process, the film director should ensure that the acting depicts the characters' inner side. In other words, the director should perfectly depict the internal and external elements of a human being on screen. Appropriate visualization can lead to dramatic communication with the characters and thereby create the audience's emotional response. With these techniques in mind, this paper focuses on the scenes of the film "Poetry" in which dramatic communication with the characters creates the audience's emotional response. Accordingly, the audience plays a part in sustaining dramatic communication throughout the physical screen time of a film.

Usefulness of Region Cut Subtraction in Fusion & MIP 3D Reconstruction Image (Fusion & Maximum Intensity Projection 3D 재구성 영상에서 Region Cut Subtraction의 유용성)

  • Moon, A-Reum;Chi, Yong-Gi;Choi, Sung-Wook;Lee, Hyuk;Lee, Kyoo-Bok;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.18-23
    • /
    • 2010
  • Purpose: PET/CT combines functional and morphologic data and increases diagnostic accuracy in a variety of malignancies. In particular, Fusion PET/CT images and MIP (Maximum Intensity Projection) images reconstructed from 2-dimensional images into 3-dimensional ones are useful for visualizing lesions. In Fusion & MIP 3D reconstruction images, however, hot uptake from urine or a urostomy bag can overlap the lesion, making it difficult to distinguish the lesion with the naked eye. This study attempts to improve lesion distinction by removing such regions of hot uptake. Materials and Methods: The study included patients examined at our hospital from September 2008 to March 2009 who showed a large residual urine volume due to disease of the uterus, bladder, or rectum on PET/CT examination. We used the Volume Viewer program of GE's Advantage Workstation AW4.3 05 version. In the analysis procedure, an ROI was set over the region to be removed in the axial volume image and Cut Outside was selected, and the same method was applied to the coronal volume image. Next, the minimum value was adjusted in the Threshold of 3D Tools, and Subtraction was selected in Advanced Processing. The resulting Fusion & MIP images were compared with images created without Region Cut Definition. Results: In the Fusion & MIP 3D reconstruction images created with the Advantage Workstation AW4.3 05 Region Cut Subtraction, the regions of hot uptake caused by the patient's urine could be removed, and lesion distinction was clearly improved in the images using Region Cut Definition. Conclusion: For patients showing hot uptake due to the volume of urine in the bladder, removing these hot-uptake regions during image reconstruction can offer better diagnostic information than the conventional image subtraction method. Especially for disease of the uterus, bladder, and rectum, it will help improve image quality. (A conceptual sketch of the region-cut-then-MIP idea is given below.)

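
The workstation steps above are specific to GE's Advantage Workstation, but the underlying idea of suppressing a user-drawn hot region before computing a maximum intensity projection can be illustrated generically. The following is a minimal NumPy sketch under that assumption; the array shapes, values, and the `region_cut_mip` helper are illustrative and not part of the paper.

```python
# Hypothetical sketch of the region-cut idea: zero out a user-drawn hot region
# (e.g., bladder uptake) in a PET volume before computing a maximum intensity
# projection (MIP), so the suppressed activity no longer obscures nearby lesions.
import numpy as np

def region_cut_mip(volume: np.ndarray, cut_mask: np.ndarray, axis: int = 1) -> np.ndarray:
    """volume: 3D activity array; cut_mask: True inside the region to remove;
    axis: projection axis for the MIP."""
    cut_volume = np.where(cut_mask, volume.min(), volume)  # suppress the hot region
    return cut_volume.max(axis=axis)                       # MIP along the chosen axis

# Synthetic example: a small "lesion" next to a much hotter "bladder" region.
vol = np.zeros((64, 64, 64))
vol[30:34, 30:34, 30:34] = 5.0       # lesion
vol[40:50, 40:50, 40:50] = 50.0      # hot uptake from urine
mask = np.zeros_like(vol, dtype=bool)
mask[38:52, 38:52, 38:52] = True     # ROI enclosing the hot uptake

print(vol.max(axis=1).max())                    # 50.0: MIP dominated by the hot region
print(region_cut_mip(vol, mask, axis=1).max())  # 5.0: lesion now stands out
```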

Analysis of Traffic Accidents Injury Severity in Seoul using Decision Trees and Spatiotemporal Data Visualization (의사결정나무와 시공간 시각화를 통한 서울시 교통사고 심각도 요인 분석)

  • Kang, Youngok;Son, Serin;Cho, Nahye
    • Journal of Cadastre & Land InformatiX
    • /
    • v.47 no.2
    • /
    • pp.233-254
    • /
    • 2017
  • The purpose of this study is to analyze the main factors influencing the severity of traffic accidents and to visualize the spatiotemporal characteristics of traffic accidents in Seoul. To do this, we collected data on traffic accidents that occurred in Seoul over the four years from 2012 to 2015 and classified them into slight, serious, and fatal accidents according to severity. The spatiotemporal characteristics of the accidents were analyzed with kernel density analysis, hotspot analysis, space-time cube analysis, and Emerging Hot Spot Analysis, and the factors affecting accident severity were analyzed using a decision tree model. The results show that traffic accidents in Seoul are more frequent in the suburbs than in central areas. In particular, accidents were concentrated in some commercial and entertainment areas of Seocho and Gangnam, and this concentration intensified over time. For fatal accidents, there were statistically significant hotspot areas in Yeongdeungpo-gu, Guro-gu, Jongno-gu, Jung-gu, and Seongbuk, although the hotspots of fatal accidents showed different patterns by time period. Regarding accident severity, the type of accident was the most important factor, followed in order of importance by the type of road, the type of vehicle, the time of the accident, and the type of regulation violated. As for the decision rules leading to serious accidents, for vans or trucks, a serious accident is highly probable where the road is wide and vehicle speeds are high; for bicycles, cars, motorcycles, and other vehicles, a serious accident is highly probable under the same circumstances during the dawn hours.
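
The severity analysis above relies on a decision tree over categorical accident attributes. As a rough illustration only, the sketch below fits a scikit-learn decision tree to a few fabricated records; the column names and values are hypothetical stand-ins for the features named in the abstract (accident type, road type, vehicle type, time, violation type), not the study's actual data.

```python
# Minimal sketch: one-hot encode categorical accident attributes and fit a
# decision tree whose printed rules resemble the "decision rules" discussed above.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.DataFrame({                       # fabricated example records
    "accident_type": ["car_to_person", "car_to_car", "car_to_person", "single_car"],
    "road_type":     ["wide_road", "intersection", "single_road", "intersection"],
    "vehicle_type":  ["truck", "car", "bicycle", "motorcycle"],
    "hour":          [3, 14, 5, 22],
    "violation":     ["speeding", "signal", "speeding", "none"],
    "severity":      ["fatal", "slight", "serious", "slight"],
})

X = pd.get_dummies(df.drop(columns="severity"))          # encode categoricals
y = df["severity"]
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # inspect the learned rules
```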

Construction of Gene Network System Associated with Economic Traits in Cattle (소의 경제형질 관련 유전자 네트워크 분석 시스템 구축)

  • Lim, Dajeong;Kim, Hyung-Yong;Cho, Yong-Min;Chai, Han-Ha;Park, Jong-Eun;Lim, Kyu-Sang;Lee, Seung-Su
    • Journal of Life Science
    • /
    • v.26 no.8
    • /
    • pp.904-910
    • /
    • 2016
  • Complex traits are determined by the combined effects of many loci and are affected by gene networks or biological pathways. Systems biology approaches play an important role in identifying candidate genes related to complex diseases or traits at the system level. Gene network analysis has been performed with diverse methods, such as gene co-expression, gene regulatory relationships, protein-protein interaction (PPI), and genetic networks, and network-based methods for predicting gene function, such as graph-theoretic methods, neighborhood-counting methods, and weighted functions, have been described. However, such research in livestock remains limited. The present study systematically analyzed genes associated with 102 types of economic traits based on the Animal Trait Ontology (ATO) and identified their relationships based on the gene co-expression network and PPI network in cattle. We then constructed the two types of gene network databases and a network visualization system (http://www.nabc.go.kr/cg). A gene co-expression network was generated from bovine gene expression values, and the PPI network was constructed from the Human Protein Reference Database based on the orthologous relationships between human and cattle. Finally, candidate genes and their network relationships were identified for each trait; these genes were topologically central, with large degree and betweenness centrality (BC) values in the gene network. The ontle program was applied to generate the database and to visualize the gene network results. This information will serve as a valuable resource for exploring genomic functions that influence economically and agriculturally important traits in cattle.
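
The candidate genes in this system are prioritized by their position in the network, i.e., by degree and betweenness centrality. A small sketch of that ranking step is given below using the networkx package; the edge list and gene names are placeholders, not data from the study.

```python
# Rank genes in a co-expression/PPI network by degree and betweenness centrality.
import networkx as nx

edges = [  # placeholder (gene_a, gene_b) pairs from co-expression or orthology-mapped PPI
    ("GENE1", "GENE2"), ("GENE1", "GENE3"), ("GENE2", "GENE3"),
    ("GENE3", "GENE4"), ("GENE4", "GENE5"),
]
g = nx.Graph(edges)

degree = dict(g.degree())                   # number of direct interaction partners
betweenness = nx.betweenness_centrality(g)  # fraction of shortest paths passing through a gene

# Genes with large degree and BC values are the topologically central candidates.
for gene in sorted(g.nodes, key=betweenness.get, reverse=True):
    print(gene, degree[gene], round(betweenness[gene], 3))
```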

Numerical Calculations of IASCC Test Worker Exposure using Process Simulations (공정 시뮬레이션을 이용한 조사유기응력부식균열 시험 작업자 피폭량의 전산 해석에 관한 연구)

  • Chang, Kyu-Ho;Kim, Hae-Woong;Kim, Chang-Kyu;Park, Kwang-Soo;Kwak, Dae-In
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.6
    • /
    • pp.803-811
    • /
    • 2021
  • In this study, the radiation exposure of IASCC test workers was evaluated by applying process simulation technology. Using DELMIA Version 5, a commercial process simulation code, the IASCC test facility, hot cells, and workers were modeled, the IASCC test activities were implemented, and the cumulative exposure of workers passing through the dose-distributed space could be evaluated through user coding. To simulate the behavior of workers, human manikins with 200 or more degrees of freedom, imitating the human musculoskeletal system, were applied. To calculate a worker's exposure, the coordinates, start time, and retention period of each posture were extracted by accessing the sub-information of the human manikin task, and the cumulative exposure was calculated by multiplying the spatial dose value by the posture retention time. The spatial dose for the exposure evaluation was calculated using MCNP6 Version 1.0 and embedded into the process simulation domain. Comparing the exposure evaluated by process simulation with a conventional exposure evaluation, the annual exposure for daily test work in the regular entrance area was predicted at similar levels, 0.388 mSv/year and 1.334 mSv/year, respectively. Exposure assessment was also performed for special tasks carried out in areas with high spatial dose; tasks with high exposure could be easily identified, and work improvement plans could be derived intuitively by visualizing the manikin postures and the spatial dose of those tasks.
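
The core of the exposure calculation is a sum over postures of the spatial dose rate at the manikin's position multiplied by the retention time in that posture. The sketch below illustrates that accumulation step only; the posture records, dose-map lookup, and units are assumptions for illustration, not the study's DELMIA/MCNP6 implementation.

```python
# Cumulative dose = sum over postures of (dose rate at posture position) x (dwell time).
from dataclasses import dataclass

@dataclass
class Posture:
    x: float          # manikin position in the facility mock-up (m)
    y: float
    z: float
    dwell_s: float    # retention time in this posture (s)

def dose_rate_at(x: float, y: float, z: float) -> float:
    """Hypothetical lookup into a precomputed spatial dose map (mSv/h)."""
    return 0.01 + 0.001 * (x**2 + y**2 + z**2) ** 0.5   # placeholder dose field

def cumulative_dose(postures: list[Posture]) -> float:
    """Accumulate dose (mSv) over all postures extracted from a simulated task."""
    return sum(dose_rate_at(p.x, p.y, p.z) * p.dwell_s / 3600.0 for p in postures)

task = [Posture(0.5, 0.2, 1.0, 120.0), Posture(1.5, 0.0, 1.0, 300.0)]
print(f"{cumulative_dose(task):.4f} mSv")
```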

Design of Cloud-Based Data Analysis System for Culture Medium Management in Smart Greenhouses (스마트온실 배양액 관리를 위한 클라우드 기반 데이터 분석시스템 설계)

  • Heo, Jeong-Wook;Park, Kyeong-Hun;Lee, Jae-Su;Hong, Seung-Gil;Lee, Gong-In;Baek, Jeong-Hyun
    • Korean Journal of Environmental Agriculture
    • /
    • v.37 no.4
    • /
    • pp.251-259
    • /
    • 2018
  • BACKGROUND: Various culture media have been used for hydroponic culture of horticultural plants in smart greenhouses with natural and artificial light. Management of the culture medium, that is, control of the medium amounts and/or the components absorbed by plants during the cultivation period, is performed with ICT (Information and Communication Technology) and/or IoT (Internet of Things) in a smart farm system. This study was conducted to develop a cloud-based data analysis system for effective management of culture media applied to hydroponic culture and plant growth in smart greenhouses. METHODS AND RESULTS: A conventional inorganic Yamazaki medium and organic media derived from agricultural byproducts such as immature fruits, leaves, or stems were used as hydroponic culture media. Changes in the components of the solutions according to growth stage were monitored, and plant growth was observed. Red and green lettuce seedlings (Lactuca sativa L.) with 2~3 true leaves were used as plant materials. The seedlings were grown hydroponically for 35 days in the smart greenhouse under fluorescent and light-emitting diode (LED) lights at a light intensity of 150 μmol/m²/s. Growth data of the seedlings were classified by growth parameter and stored to build a relational database in a virtual machine generated from an OpenStack cloud system. The relationship between plant growth and the absorption patterns of 9 inorganic components in the media during the cultivation period was investigated. The stored data on component changes and growth parameters were visualized on the web through a web framework and Node.js. CONCLUSION: Time-series changes of the inorganic components in the culture media were observed. Increases in the number of unfolded leaves and in the fresh weight of the seedlings depended mainly on macroelements such as NO₃-N and were affected by the different inorganic and organic media. Through the developed data analysis system, actual measurement data can be supplied from the user's smart device, and analysis and comparison of the data are visualized graphically in time series based on the cloud database. Agricultural management based on data visualization and/or plant growth can thus be implemented by the data analysis system across whole agricultural sites, regardless of changes in the culture environment.
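
The time-series analysis described above comes down to tabulating medium component measurements by sampling date and tracking how each component is depleted. The snippet below is a hedged pandas sketch of that step; the table layout, column names, and values stand in for rows fetched from the cloud database and are not from the study.

```python
# Reshape medium-component measurements into a per-component time series and
# compute week-to-week changes as a rough proxy for nutrient absorption.
import pandas as pd

records = pd.DataFrame({   # stand-in for rows fetched from the cloud relational database
    "measured_at": pd.to_datetime(["2018-05-01", "2018-05-08", "2018-05-15"] * 2),
    "component":   ["NO3-N"] * 3 + ["K"] * 3,
    "mg_per_l":    [210.0, 180.0, 150.0, 310.0, 280.0, 260.0],
})

# One column per inorganic component, indexed by sampling date: the layout a
# charting layer would consume for time-series visualization.
series = records.pivot(index="measured_at", columns="component", values="mg_per_l")
print(series)
print(series.diff())   # weekly depletion of each component
```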

Strategy for Store Management Using SOM Based on RFM (RFM 기반 SOM을 이용한 매장관리 전략 도출)

  • Jeong, Yoon Jeong;Choi, Il Young;Kim, Jae Kyeong;Choi, Ju Choel
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.93-112
    • /
    • 2015
  • With changes in consumers' consumption patterns, the traditional retail shop has evolved into the hypermarket or convenience store, offering mostly grocery and daily products. It is therefore important to maintain proper inventory levels and product configuration in order to use the limited space of the retail store effectively and increase sales. Accordingly, this study proposes a product configuration and inventory level strategy based on the RFM (Recency, Frequency, Monetary) model and SOM (self-organizing map) for managing a retail shop effectively. The RFM model is an analytic model of customer behavior based on past buying activities, and it differentiates important customers within large data sets using three variables. R represents recency, the time since the last purchase of commodities; the most recently purchasing customer has the largest R. F represents frequency, the number of transactions in a particular period, and M represents monetary value, the amount spent in a particular period. The RFM method is thus known to be a very effective model for customer segmentation. In this study, SOM cluster analysis was performed using normalized values of the RFM variables. The SOM is regarded as one of the most distinguished artificial neural network models among unsupervised learning tools. It is a popular tool for clustering and visualizing high-dimensional data in such a way that similar items are grouped spatially close to one another, and it has been successfully applied in various technical fields for finding patterns. In our research, the procedure finds sales patterns by analyzing product sales records with their Recency, Frequency, and Monetary values, and, to suggest a business strategy, a decision tree analysis is conducted on the SOM results. To validate the proposed procedure, we used M-mart data collected between 2014.01.01 and 2014.12.31. Each product was assigned R, F, and M values, and the products were clustered into 9 groups using the SOM. We also performed three tests using weekday data, weekend data, and the whole data set in order to analyze changes in the sales pattern. To propose a strategy for each cluster, we examined the criteria of the product clustering; the clusters obtained from the SOM can be explained by their characteristics as captured by the decision tree. As a result, an inventory management strategy for each of the 9 clusters can be suggested through the proposed procedure. Products in the cluster with the highest values of all three variables (R, F, M) should be kept at a high inventory level and placed where they can increase customer traffic. In contrast, products in the cluster with the lowest values of all three variables should be kept at a low inventory level and placed where visibility is low. Products in the cluster with the highest R value are usually newly released products and should be placed at the front of the store. For products in the cluster with the highest F value, which were purchased frequently in the past, the manager should decrease inventory levels gradually, because that cluster has lower R and M values than the average, from which it can be deduced that these products have sold poorly recently and that total sales will be lower than the purchase frequency suggests. The procedure presented in this study is expected to contribute to raising the profitability of the retail store. The paper is organized as follows: the second chapter briefly reviews the related literature, the third chapter presents the proposed procedure, the fourth chapter applies the procedure to actual product sales data, and the fifth chapter presents the conclusions of the study and directions for further research.
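
The clustering step above can be reproduced in outline with any SOM implementation. The sketch below assumes the third-party `minisom` package (not named in the paper): per-product R, F, M values are min-max normalized and mapped onto a 3x3 grid, giving the 9 clusters used in the study.

```python
# Normalize R, F, M per product and cluster the products on a 3x3 SOM grid.
import numpy as np
from minisom import MiniSom

rfm = np.array([            # one row per product: [recency, frequency, monetary]
    [5.0, 120.0, 3400.0],
    [40.0, 15.0, 900.0],
    [2.0, 300.0, 12000.0],
    [60.0, 4.0, 150.0],
])
norm = (rfm - rfm.min(axis=0)) / (rfm.max(axis=0) - rfm.min(axis=0))  # min-max to [0, 1]

som = MiniSom(3, 3, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(norm, num_iteration=500)

# Each product is assigned to the grid cell of its best-matching unit; the nine
# cells correspond to the nine clusters whose strategies the decision tree explains.
for row in norm:
    print(som.winner(row))
```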

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and may cause enormous damage. In particular, failures of IT facilities are irregular because of their interdependence, and it is difficult to identify the cause. Previous studies on failure prediction in data centers predicted failures by treating each server as a single, independent state, without assuming that the devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the cause of failures occurring inside servers is difficult to determine, and adequate prevention has not yet been achieved, particularly because server failures do not occur in isolation: a failure may cause failures in other servers or be triggered by something received from other servers. In other words, while existing studies analyzed failures under the assumption that a single server does not affect other servers, this study assumes that failures have effects between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures of each device were sorted in chronological order, and when a failure occurred in one device, any failure occurring in another device within 5 minutes of that time was defined as a simultaneous failure. After constructing sequences of the devices that failed at the same time, 5 devices that frequently failed simultaneously within the sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used to account for the fact that each server contributes differently to multiple failures; this algorithm increases prediction accuracy by giving more weight to a server as its impact on the failure increases. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were analyzed and compared under a single-server assumption and a multiple-server assumption. The second experiment improved the prediction accuracy for complex failures by optimizing the threshold of each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multiple-server assumption, all five servers were predicted to have failed. The experimental results thus support the hypothesis that servers affect one another. This study confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that the effect of each server is different, helped improve the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model for predicting failures of servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
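
The model described above combines per-server LSTM encoders with an attention layer that weights servers by their estimated impact on the failure. The following PyTorch sketch shows that structure in miniature; the layer sizes, tensor shapes, and class name are illustrative assumptions, not the authors' implementation.

```python
# Each server's resource time series is encoded by a shared LSTM; attention over
# the server encodings produces an impact-weighted summary used for failure prediction.
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)  # per-server encoder
        self.attn = nn.Linear(hidden, 1)   # scores each server's encoding
        self.head = nn.Linear(hidden, 1)   # failure / no-failure logit

    def forward(self, x: torch.Tensor):
        # x: (batch, n_servers, seq_len, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))  # encode every server sequence
        h = h[-1].reshape(b, s, -1)                       # (batch, n_servers, hidden)
        weights = torch.softmax(self.attn(h), dim=1)      # attention over servers
        context = (weights * h).sum(dim=1)                # impact-weighted summary
        return self.head(context), weights.squeeze(-1)    # logit and per-server weights

model = ServerAttentionNet(n_features=4)
batch = torch.randn(2, 5, 60, 4)          # 2 samples, 5 servers, 60 time steps, 4 metrics
logit, server_weights = model(batch)
print(logit.shape, server_weights.shape)  # torch.Size([2, 1]) torch.Size([2, 5])
```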