• Title/Summary/Keyword: Small computer

Estimation of Economic Losses on the Agricultural Sector in Gangwon Province, Korea, Based on the Baekdusan Volcanic Ash Damage Scenario (백두산 화산재 피해 시나리오에 따른 강원도 지역 농작물의 경제적 피해 추정)

  • Lee, Yun-Jung;Kim, Su-Do;Chun, Joonseok;Woo, Gyun
    • Journal of the Korean Earth Science Society
    • /
    • v.34 no.6
    • /
    • pp.515-523
    • /
    • 2013
  • The eastern coast of South Korea is expected to be damaged by volcanic ash if the Mt. Baekdusan volcano erupts. Even a small amount of volcanic ash can be fatal to the agricultural sector, withering many plants and causing soil acidification. Thus, in this paper, we aim to estimate the agricultural losses caused by volcanic ash and to visualize them with Google Maps. Estimating volcanic ash losses requires a damage assessment model. Since the volcanic ash hazard depends on the kind of crop and the ash thickness, the fragility function of the damage assessment model should represent the relation between ash thickness and the damage rate of each crop. We therefore model the fragility function using the per-crop damage rates of RiskScape. The volcanic ash losses can then be calculated from the agricultural output and the price of each crop using the fragility function. This paper also presents the estimated losses for Gangwon province, the region in Korea most likely to be damaged by volcanic ash. Based on the gross agricultural output of Gangwon province in 2010, the losses amount to nearly 635,124 million won if volcanic ash accumulates to more than four millimeters, which represents about 50% of the gross agricultural output of Gangwon province. We consider damage only to crops in this paper. However, volcanic ash fall can also damage other farm assets, including soil fertility and installations. To estimate the total volcanic ash damage to the whole agricultural sector, these collateral damages should also be considered.
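
A fragility-function loss estimate of this kind can be sketched in a few lines. The sketch below is illustrative only: the ash-thickness breakpoints, damage rates, and crop figures are hypothetical placeholders, not the RiskScape values or the Gangwon statistics used in the paper.

```python
import numpy as np

# Hypothetical fragility curve: damage rate of one crop vs. ash thickness (mm).
# Breakpoints and rates are placeholders, not RiskScape values.
ash_thickness_mm = np.array([0.0, 1.0, 4.0, 10.0, 50.0])
damage_rate = np.array([0.0, 0.1, 0.5, 0.8, 1.0])

def crop_damage_rate(thickness_mm: float) -> float:
    """Interpolate the fragility curve at a given ash thickness."""
    return float(np.interp(thickness_mm, ash_thickness_mm, damage_rate))

def crop_loss(output_tonnes: float, price_per_tonne: float, thickness_mm: float) -> float:
    """Loss = agricultural output x unit price x damage rate at this thickness."""
    return output_tonnes * price_per_tonne * crop_damage_rate(thickness_mm)

# Hypothetical example: 1,000 tonnes of a crop at 500,000 won/tonne under 4 mm of ash.
print(crop_loss(1000, 500_000, 4.0))  # -> 250,000,000 (won)
```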

Development of Independent Target Approximation by Auto-computation of 3-D Distribution Units for Stereotactic Radiosurgery (정위적 방사선 수술시 3차원적 공간상 단위분포들의 자동계산법에 의한 간접적 병소 근사화 방법의 개발)

  • Choi Kyoung Sik;Oh Seung Jong;Lee Jeong Woo;Kim Jeung Kee;Suh Tae Suk;Choe Bo Young;Kim Moon Chan;Chung Hyun-Tai
    • Progress in Medical Physics
    • /
    • v.16 no.1
    • /
    • pp.24-31
    • /
    • 2005
  • Stereotactic radiosurgery (SRS) is a method of delivering a high dose of radiation to a small target volume in the brain, generally in a single fraction, while minimizing the dose delivered to the surrounding normal tissue. To automate SRS planning, a new treatment-planning method for multi-isocenter/shot linear accelerator (linac) and gamma knife (GK) radiosurgery was developed, based on a physical lattice structure in the target. An optimal radiosurgical plan conventionally has to be constructed from many beam parameters in linac- or gamma-knife-based radiation therapy. In this work, an isocenter/shot was modeled as a sphere whose diameter equals the circular collimator/helmet hole size, because the dimension of the 50% isodose level in the dose profile is similar to that size. The computer-aided system first performs an automatic arrangement of the multi-isocenter/shot configuration, considering two parameters for each isocenter/shot: its position and its collimator/helmet size. Simultaneously, the irregularly shaped target is approximated by cubic structures through computation of voxel units. The treatment plans produced by this technique were evaluated in terms of dose distribution using dose-volume histograms, dose conformity, and dose homogeneity with respect to the targets. For irregularly shaped targets, the new method performed optimal multi-isocenter packing in only a few seconds on the computer-aided system. The targets were enclosed by more than the 50% isodose curve. The dose conformity was at ordinarily acceptable levels, and the dose homogeneity was always less than 2.0, satisfying the Radiation Therapy Oncology Group (RTOG) SRS criteria for various targets. In conclusion, this approach based on a physical lattice structure could provide a useful radiosurgical plan without restrictions on tumor shape, across modalities such as linac and GK SRS.
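
The RTOG-style plan metrics mentioned above are straightforward to compute once a dose grid and target mask are known. Below is a minimal sketch under assumed definitions: conformity as the ratio of the prescription-isodose volume to the target volume, and homogeneity as the ratio of the maximum dose to the prescription dose (the quantity that should stay below 2.0); the dose grid and target here are synthetic toys, not the paper's planning system.

```python
import numpy as np

def rtog_indices(dose: np.ndarray, target_mask: np.ndarray, prescription: float):
    """Compute RTOG-style conformity and homogeneity indices on a voxel grid.

    dose        : 3-D array of dose values (arbitrary units)
    target_mask : boolean array of the same shape marking target voxels
    prescription: prescription dose (here, the 50% isodose level)
    """
    prescription_volume = np.count_nonzero(dose >= prescription)
    target_volume = np.count_nonzero(target_mask)
    conformity = prescription_volume / target_volume   # prescription volume / target volume
    homogeneity = dose.max() / prescription            # max dose / prescription dose
    coverage = np.count_nonzero((dose >= prescription) & target_mask) / target_volume
    return conformity, homogeneity, coverage

# Synthetic example: a single spherical 'shot' around a spherical target.
grid = np.indices((40, 40, 40))
r = np.sqrt(((grid - 20) ** 2).sum(axis=0))
dose = 100 * np.exp(-(r / 10) ** 2)   # toy Gaussian dose profile
target = r < 8                        # toy spherical target
print(rtog_indices(dose, target, prescription=50.0))
```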

Improving Performance of Recommendation Systems Using Topic Modeling (사용자 관심 이슈 분석을 통한 추천시스템 성능 향상 방안)

  • Choi, Seongi;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.101-116
    • /
    • 2015
  • Recently, with the development of smart devices and social media, vast amounts of information in various forms have accumulated. In particular, considerable research effort is being directed toward analyzing unstructured big data to resolve various social problems, and the focus of data-driven decision-making is accordingly moving from structured to unstructured data analysis. In the field of recommendation systems, a typical area of data-driven decision-making, the need to use unstructured data to improve system performance has steadily increased. Approaches to improving the performance of recommendation systems fall into two categories: improving algorithms and acquiring useful, high-quality data. Traditionally, most efforts took the former approach, while the latter has attracted relatively little attention. In this sense, efforts to utilize unstructured data from various sources are timely and necessary. In particular, because the interests of users are directly connected to their needs, identifying user interests through unstructured big data analysis can be a key to improving the performance of recommendation systems. This study therefore proposes a methodology for improving recommendation systems by measuring user interests. Specifically, it proposes a method to quantify user interests by analyzing internet usage patterns and to predict repurchases based on the discovered preferences. There are two main modules in this study. The first module predicts the repurchase probability of each category by analyzing users' purchase histories; we include it in our scope in order to compare the accuracy of the traditional purchase-based prediction model with the new model presented in the second module. The second module, the core of our methodology, extracts user interests by analyzing the news articles the users have read. It constructs a correspondence matrix between topics and news articles by performing topic modeling on real-world news articles, then analyzes users' news access patterns to construct a correspondence matrix between articles and users. By merging these results, we obtain a correspondence matrix between users and topics, which describes user interests in a structured manner. Finally, using this matrix, the second module builds a model for predicting the repurchase probability of each category. This paper also provides the results of our performance evaluation. The data used in our experiments are as follows. We acquired web transaction data for 5,000 panels from a company specializing in ranking internet sites. We first extracted 15,000 URLs of news articles published from July 2012 to June 2013 from the original data and crawled the main contents of those articles. We then selected the 2,615 users who had read at least one of the extracted news articles; among them, 359 target users had purchased at least one item from our target shopping mall 'G'. In the experiments, we analyzed the purchase histories and news access records of these 359 internet users.
From the performance evaluation, we found that our prediction model, which uses both user interests and purchase history, outperforms a prediction model using only purchase history in terms of misclassification ratio. In detail, our model outperformed the traditional one in the appliance, beauty, computer, culture, digital, fashion, and sports categories when artificial neural network based models were used. Similarly, our model outperformed the traditional one in the beauty, computer, digital, fashion, food, and furniture categories when decision tree based models were used, although the improvement was very small.
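
The user-topic matrix at the heart of the second module can be seen as a product of two correspondence matrices: topic modeling yields an article-topic matrix, the access log yields a user-article matrix, and their product is the user-topic interest matrix. A minimal illustration with scikit-learn's LDA follows; the corpus, access log, and parameter choices are hypothetical, not the paper's data.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical news corpus and user access log (user -> indices of articles read).
articles = [
    "new smartphone chipset released with faster neural engine",
    "football league final ends in dramatic penalty shootout",
    "summer fashion trends favor light fabrics and bold colors",
    "laptop makers cut prices as computer demand slows",
]
access_log = {"user_a": [0, 3], "user_b": [1], "user_c": [2, 2, 3]}

# Article-topic correspondence matrix from topic modeling (LDA).
counts = CountVectorizer().fit_transform(articles)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
article_topic = lda.fit_transform(counts)          # shape: (n_articles, n_topics)

# User-article correspondence matrix from access patterns (read counts).
user_article = np.zeros((len(access_log), len(articles)))
for row, reads in enumerate(access_log.values()):
    for idx in reads:
        user_article[row, idx] += 1

# Merge the two matrices into the user-topic interest matrix.
user_topic = user_article @ article_topic
print(user_topic)  # each row describes one user's interests in a structured manner
```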

Determination of the Distribution Types of Point Rainfall Data and the Rainfall Probability Depth According to the Safety Project Life - Focused on Seoul, Pusan, and Taegu in Korea - (지점우량 자료의 분포형 설정과 내용안전년수에 따르는 확률강우량에 관한 고찰 - 국내 3개지점 서울, 부산 및 대구를 중심으로 -)

  • Lee, Won-Hwan;Lee, Gil-Chun;Jeong, Yeon-Gyu
    • Water for Future
    • /
    • v.5 no.1
    • /
    • pp.27-36
    • /
    • 1972
  • This thesis studies the rainfall probability depth in three major areas of Korea: Seoul, Pusan, and Taegu. The purpose of the paper is to analyze rainfall in connection with the safe planning of hydraulic structures and their project life. The methodology is a statistical treatment of the rainfall data in the three areas, organized as follows. 1. Complementation of the rainfall data. We selected the maximum values among those obtained by three methods: the Fourier series method, the trend diagram method, and the mean value method. By selecting the maximum values, we complemented the missing rainfall data so as to guard against calamities. 2. Statistical treatment of the data. The data were sorted in ascending order, transformed by $\log$, $\sqrt{\,}$, $\sqrt[3]{\,}$, $\sqrt[4]{\,}$, and $\sqrt[5]{\,}$, and their statistical values were calculated on an electronic computer. 3. Examination of the distribution types and determination of the optimum distribution types. The distribution types of the rainfall data were examined by the $\chi^2$-test, and part of the data was rejected in order to obtain normal rainfall distribution types. In this way, the optimum distribution types were determined. 4. Computation of the rainfall probability depth over the safety project life. We studied the interrelation between the return period and the safety project life, and present the rainfall probability depth for the safety project life. In conclusion, we set up the optimum distribution types of the rainfall depths, formulated the optimum distributions, and present a chart of the rainfall probability depth as a function of the factor of safety and the project life.
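
The interrelation between return period and project life referred to in step 4 is commonly expressed through the standard hydrologic risk formula $R = 1 - (1 - 1/T)^n$: the probability that a $T$-year event is equalled or exceeded at least once during an $n$-year project life. A small sketch of that relation follows; the formula is standard, but the numbers below are illustrative, not the paper's.

```python
def hydrologic_risk(return_period_years: float, project_life_years: int) -> float:
    """Probability that a T-year event occurs at least once in n years:
    R = 1 - (1 - 1/T)**n."""
    return 1.0 - (1.0 - 1.0 / return_period_years) ** project_life_years

def required_return_period(risk: float, project_life_years: int) -> float:
    """Invert the risk formula: the return period T needed so that the chance
    of exceedance within the project life stays at the given risk level."""
    return 1.0 / (1.0 - (1.0 - risk) ** (1.0 / project_life_years))

# Illustrative numbers: a 50-year design storm over a 30-year project life.
print(f"risk = {hydrologic_risk(50, 30):.2f}")                            # ~0.45
print(f"T for 10% risk = {required_return_period(0.10, 30):.0f} years")   # ~285
```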

The current state and prospects of travel business development under the COVID-19 pandemic

  • Tkachenko, Tetiana;Pryhara, Olha;Zatsepina, Nataly;Bryk, Stepan;Holubets, Iryna;Havryliuk, Alla
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.12spc
    • /
    • pp.664-674
    • /
    • 2021
  • The relevance of this research is determined by the negative impact of the COVID-19 pandemic on current trends and the dynamics of world tourism development. This article aims to identify patterns in the development of the modern tourist market and to analyze the problems and prospects of its development in the context of the COVID-19 pandemic. Materials and methods. General scientific methods were used in this work: analysis, synthesis, comparison, and the analysis of statistical data. A review of foreign and domestic authors' viewpoints on the international tourist market allowed us to substantiate the actual directions of tourism development under the negative factors connected with the spread of the new coronavirus infection COVID-19. Economic-statistical, abstract-logical, and economic-mathematical research methods were used during the study and data processing. Results. The current state of the tourist market was analyzed by world region. Tourism proved to be one of the sectors most affected by COVID-19: by the end of 2020, the total number of tourist arrivals in the world had decreased by 74% compared to the same period in 2019. The consequence of this decline was a loss of total global tourism revenues of $1.3 trillion by the end of 2020. 27% of all destinations were completely closed to international tourism, and at the end of 2020 the economy of international tourism had shrunk by about 80%. In 2020, 98 million fewer people traveled internationally (-83%) relative to the prior year. Tourism was hit hardest by the pandemic in the Asia-Pacific region, where travel restrictions were the strictest; international arrivals in this region fell by 84% (300 million). The Middle East and Africa recorded declines of 75% and 70%, respectively. Despite a small and short-lived recovery in the summer of 2020, Europe lost 71% of its tourist flow, with the European continent recording the largest drop in absolute terms compared with 2019: 500 million. Foreign arrivals also declined in North and South America. A significant decrease in tourist flows leads to massive job losses and a sharp decline in foreign exchange earnings and taxes, which limits the ability of states to support the tourism industry. Three possible scenarios for the tourist industry's exit from the crisis, reflecting the most probable changes in monthly tourist flows, are considered. The characteristics of respondents from Ukraine, Germany, and the USA and their attitudes toward travel are presented by gender, age, education level, professional status, and monthly income. About 57% of respondents from Ukraine, Poland, and the United States were planning a tourist trip in 2021; people with higher or secondary education were more willing to plan such a trip. The results of the empirical study confirm that interest in domestic tourism increased significantly in 2021. A regression model of the number of domestic tourist trips, using Ukraine as an example, with a time trend $t$ and seasonal variations, $\widehat{Tur}_t = 7288.498 - 20.58t - 410.88S_t$ (where $S_t$ denotes the seasonal component), was fitted to produce a forecast for 2020; the same model can be used for any country to help stabilize the process of tourist trips after the pandemic. Discussion. We emphasize the seriousness of the COVID-19 pandemic and the fact that many experts and scientists expect only a long-term recovery of the tourism industry.
In our opinion, governments need to refocus on domestic tourism: developing infrastructure, searching for new niches and formats, and forming new package deals in the new domestic segment (developing new products such as tourist routes, exhibitions, sightseeing programs, and special post-COVID-19 rehabilitation programs in sanatoriums; creating individual offers for different target audiences). Conclusions. The identified trends are associated with a decrease in tourist flows and the negative impact of the pandemic on employment and income from tourism activities. International tourism will need two to four years to return to the level of 2019.
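
A trend-plus-seasonality regression of the kind quoted above can be fitted with ordinary least squares. The sketch below is a generic illustration with synthetic monthly data, not a reproduction of the paper's Ukrainian series or its coefficients.

```python
import numpy as np

# Synthetic monthly series of domestic trips: trend + seasonal swing + noise.
rng = np.random.default_rng(0)
n_months = 48
t = np.arange(n_months)
season = np.sin(2 * np.pi * t / 12)   # stand-in seasonal component S_t
trips = 7000 - 20 * t - 400 * season + rng.normal(0, 50, n_months)

# Design matrix [1, t, S_t] and OLS fit: trips ~ b0 + b1*t + b2*S_t.
X = np.column_stack([np.ones(n_months), t, season])
coef, *_ = np.linalg.lstsq(X, trips, rcond=None)
print("intercept, trend, seasonal:", coef.round(2))

# Forecast the next 12 months with the fitted model.
t_new = np.arange(n_months, n_months + 12)
X_new = np.column_stack([np.ones(12), t_new, np.sin(2 * np.pi * t_new / 12)])
print(X_new @ coef)
```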

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources: personal home pages, online digital libraries, virtual museums, and more. Some estimates suggest that the web, including the deep web, currently comprises over 500 billion pages. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. Retrieval effectiveness, however, is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query, and the most relevant documents do not necessarily appear at the top of the query output order. Current search tools also cannot retrieve documents related to a retrieved document from the gigantic mass of documents. The most important problem for many current search systems is thus to increase the quality of search: to provide related documents and to keep the number of unrelated documents in the results as low as possible. For this problem, CiteSeer proposed Autonomous Citation Indexing (ACI) of the articles on the World Wide Web. A citation index indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the articles with the cited works. Citation indexes were originally designed mainly for information retrieval, and citation links allow navigating the literature in unique ways: papers can be located independently of language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). However, CiteSeer cannot index links between articles that researchers did not make, because it indexes only the links researchers make when citing other articles; for the same reason, CiteSeer does not scale easily. All these problems orient us toward designing a more effective search system. This paper presents a method that extracts the subject and predicate of each sentence in a document. Each document is converted into a tabular form in which the extracted predicates are checked against possible subjects and objects. From this table we build a hierarchical graph of the document and then integrate the graphs of multiple documents. Over the graph of the entire document collection, we calculate the area of each document relative to the integrated documents and mark relations among documents by comparing these areas. We also propose a method for the structural integration of documents that retrieves documents from the graph, making it easier for the user to find information. We compared the performance of the proposed approaches with the Lucene search engine using standard ranking formulas.
As a result, the F-measure is about 60%, which is about 15% better than the baseline.
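
The hierarchical structure used here rests on Formal Concept Analysis, which derives a lattice of (extent, intent) pairs from a binary object-attribute table. A minimal brute-force sketch follows; the toy context of documents and terms is hypothetical, and a real system would use the subject-predicate tables described above as the attributes.

```python
from itertools import combinations

# Toy formal context: documents (objects) x extracted terms (attributes).
context = {
    "doc1": {"search", "index"},
    "doc2": {"search", "citation"},
    "doc3": {"search", "index", "citation"},
}
objects = sorted(context)
all_attrs = set().union(*context.values())

def intent(objs):
    """Attributes shared by every object in objs (all attributes if objs is empty)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(all_attrs)

def extent(attrs):
    """Objects that possess every attribute in attrs."""
    return {o for o in objects if attrs <= context[o]}

# Enumerate formal concepts: (extent, intent) pairs closed under both maps.
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        i = intent(set(objs))
        concepts.add((frozenset(extent(i)), frozenset(i)))

# Ordering concepts by extent inclusion yields the hierarchy (concept lattice).
for ext, itt in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(ext), "->", sorted(itt))
```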

Estimation of SCS Runoff Curve Number and Hydrograph by Using Highly Detailed Soil Map(1:5,000) in a Small Watershed, Sosu-myeon, Goesan-gun (SCS-CN 산정을 위한 수치세부정밀토양도 활용과 괴산군 소수면 소유역의 물 유출량 평가)

  • Hong, Suk-Young;Jung, Kang-Ho;Choi, Chol-Uong;Jang, Min-Won;Kim, Yi-Hyun;Sonn, Yeon-Kyu;Ha, Sang-Keun
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.43 no.3
    • /
    • pp.363-373
    • /
    • 2010
  • "Curve number" (CN) indicates the runoff potential of an area. The US Soil Conservation Service (SCS)'s CN method is a simple, widely used, and efficient method for estimating the runoff from a rainfall event in a particular area, especially in ungauged basins. The use of soil maps requested from end-users was dominant up to about 80% of total use for estimating CN based rainfall-runoff. This study introduce the use of soil maps with respect to hydrologic and watershed management focused on hydrologic soil group and a case study resulted in assessing effective rainfall and runoff hydrograph based on SCS-CN method in a small watershed. The ratio of distribution areas for hydrologic soil group based on detailed soil map (1:25,000) of Korea were 42.2% (A), 29.4% (B), 18.5% (C), and 9.9% (D) for HSG 1995, and 35.1% (A), 15.7% (B), 5.5% (C), and 43.7% (D) for HSG 2006, respectively. The ratio of D group in HSG 2006 accounted for 43.7% of the total and 34.1% reclassified from A, B, and C groups of HSG 1995. Similarity between HSG 1995 and 2006 was about 55%. Our study area was located in Sosu-myeon, Goesan-gun including an approx. 44 $km^2$-catchment, Chungchungbuk-do. We used a digital elevation model (DEM) to delineate the catchments. The soils were classified into 4 hydrologic soil groups on the basis of measured infiltration rate and a model of the representative soils of the study area reported by Jung et al. 2006. Digital soil maps (1:5,000) were used for classifying hydrologic soil groups on the basis of soil series unit. Using high resolution satellite images, we delineated the boundary of each field or other parcel on computer screen, then surveyed the land use and cover in each. We calculated CN for each and used those data and a land use and cover map and a hydrologic soil map to estimate runoff. CN values, which are ranged from 0 (no runoff) to 100 (all precipitation runs off), of the catchment were 73 by HSG 1995 and 79 by HSG 2006, respectively. Each runoff response, peak runoff and time-to-peak, was examined using the SCS triangular synthetic unit hydrograph, and the results of HSG 2006 showed better agreement with the field observed data than those with use of HSG 1995.

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.107-118
    • /
    • 2016
  • The advent of 5G mobile communications, expected in 2020, will provide many services, such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity together with an increased data rate are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario covers broadcast services that use a high data rate, for cases such as sporting events or emergencies; the second covers support for e-Health, car reliability, and the like; and the third relates to VR games with delay sensitivity and real-time techniques. These groups have recently been reaching agreement on the requirements for such scenarios and their target levels. Various techniques are being studied to satisfy such requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates the signals of the control plane from the packets of the data plane. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in an emergency must be transported in a very short time; this is a typical example requiring high delay sensitivity. 5G has to support high reliability and delay sensitivity requirements for V2X in the field of traffic control, and for these reasons V2X is a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) encompasses all communication methods applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: first, communication between a vehicle and infrastructure (vehicle-to-infrastructure, V2I); second, communication between vehicles (vehicle-to-vehicle, V2V); and third, communication between a vehicle and mobile equipment (vehicle-to-nomadic devices, V2N). Further kinds will be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, the choice of SDN architecture is significant. However, the centralized architecture of SDN can be unfavorable for delay-sensitive services, because a central controller must communicate with many nodes and provide the processing power for them. Therefore, in the case of emergency V2X communications, delay-related control functions require a tree-like supporting structure. In such a scenario, the architecture of the network processing the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN domain for processing the information is needed. This study examined the SDN architecture in light of the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation over the speed of the car, the cell radius, and the cell tier to derive the range of cells for information transfer in an SDN network.
In the simulation, because 5G provides a sufficiently high data rate, the information supporting neighboring vehicles was assumed to reach the car without errors. Furthermore, the 5G small cell was assumed to have a radius of 50-100 m, and the maximum speed of the vehicle was set to 30-200 km/h in order to examine the network architecture that minimizes the delay.
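
The scale of the delay problem can be illustrated with a back-of-the-envelope dwell-time computation: how long a vehicle stays inside one small cell bounds the time available for any per-cell message exchange. The sketch below uses only the cell radii and speeds stated in the abstract; interpreting the dwell time as a delay budget is our illustrative framing, not the paper's requirement.

```python
def cell_dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Worst-case time a vehicle spends crossing a cell along its diameter."""
    speed_ms = speed_kmh / 3.6
    return 2 * cell_radius_m / speed_ms

# Radii (50-100 m) and speeds (30-200 km/h) from the simulation setup above.
for radius in (50, 100):
    for speed in (30, 200):
        dwell = cell_dwell_time_s(radius, speed)
        print(f"radius={radius} m, speed={speed} km/h -> dwell = {dwell:.2f} s")

# An emergency message must fit its whole delivery chain into a fraction of
# the shortest dwell time; 1.80 s is the tightest case above (50 m, 200 km/h).
```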

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of the different nature of their capital structures and debt-to-equity ratios, they are also harder to forecast. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, places a greater burden on the banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy forecasting models based on corporate financial statements have been studied for many years in various ways, but their subjects were firms in general, and such models may not be appropriate for forecasting the bankruptcies of companies with disproportionately large liquidity risks, such as construction companies. The construction industry is capital-intensive and operates on long timelines, with large-scale investment projects and comparatively longer payback periods than other industries. Given this unique capital structure, the criteria used to judge the financial risk of companies in general are difficult to apply to the construction industry. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the result into three categories that evaluate the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have applied this technology: pattern recognition, a representative application area of machine learning, is applied by analyzing patterns in a company's financial information and judging whether the pattern belongs to the bankruptcy-risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are artificial neural networks, adaptive boosting (AdaBoost), and the support vector machine (SVM), along with many hybrid studies combining these models. Existing studies, whether using the traditional Z-score technique or machine learning, focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies by analyzing its performance by company size. We classified construction companies into three groups, large, medium, and small, based on each company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has more predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
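
As a point of reference, an AdaBoost classifier over financial-ratio features takes only a few lines with scikit-learn. The sketch below uses synthetic features standing in for financial ratios; the data-generating rule and parameters are hypothetical, not the paper's dataset or settings.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for financial ratios (e.g., debt-to-equity, liquidity).
rng = np.random.default_rng(42)
n = 1000
X = rng.normal(size=(n, 4))
# Toy rule: high leverage plus low liquidity raises bankruptcy probability.
y = (X[:, 0] - X[:, 1] + rng.normal(0, 0.5, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# AdaBoost over decision-stump weak learners (the scikit-learn default).
model = AdaBoostClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```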

Natural Language Processing Model for Data Visualization Interaction in Chatbot Environment (챗봇 환경에서 데이터 시각화 인터랙션을 위한 자연어처리 모델)

  • Oh, Sang Heon;Hur, Su Jin;Kim, Sung-Hee
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.11
    • /
    • pp.281-290
    • /
    • 2020
  • With the spread of smartphones, services that use personalized data are increasing. Healthcare-related services in particular deal with a variety of data, and data visualization techniques are used to show it effectively. As data visualization techniques are used, interactions within the visualization naturally gain importance as well. In a PC environment, interaction with a data visualization is performed with a mouse, so diverse filtering of the data can be provided. In a mobile environment, by contrast, the screen is small and it is hard to tell whether interaction is even possible, so only the limited visualizations built into an app can be offered through button touches. To overcome this limitation of mobile interaction, we enable data visualization interactions through conversations with a chatbot, so that users can examine their data through various visualizations. To do this, the user's natural-language request must be converted into a database query, and the result data retrieved through that query from a database that stores the data periodically. Many studies have worked on converting natural language into queries, but converting user requests into queries driven by visualization has not yet been studied. In this paper, we therefore focus on query generation in a situation where the data visualization technique has been determined in advance. The supported interactions are filtering on x-axis values and comparison between two groups. The test scenario used step-count data: filtering on an x-axis period was shown as a bar graph, and a comparison between two groups was shown as a line graph. To develop a natural language processing model that can receive the requested information through visualization, about 15,800 training examples were collected through a survey of 1,000 people. Algorithm development and performance evaluation yielded an accuracy of about 89% for the classification model and about 99% for the query generation model.
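
The described pipeline, classify the visualization interaction and then generate a query, can be illustrated with a toy rule-based version. The sketch below is purely illustrative: the intent patterns, the hypothetical steps(date, user_group, count) table schema, and the SQL templates are invented for the example, not the paper's trained models or data.

```python
import re

# Toy intent classifier: decide which visualization interaction is requested.
def classify(utterance: str) -> str:
    if re.search(r"\bcompare\b|\bversus\b|\bvs\b", utterance, re.I):
        return "compare_groups"   # shown as a line graph in the test scenario
    return "filter_period"        # shown as a bar graph in the test scenario

# Toy query generator over a hypothetical steps(date, user_group, count) table.
def to_query(utterance: str) -> str:
    if classify(utterance) == "compare_groups":
        return ("SELECT user_group, date, SUM(count) FROM steps "
                "GROUP BY user_group, date")
    match = re.search(r"last (\d+) days", utterance, re.I)
    days = int(match.group(1)) if match else 7   # default period: one week
    return (f"SELECT date, SUM(count) FROM steps "
            f"WHERE date >= DATE('now', '-{days} days') GROUP BY date")

print(to_query("show my steps for the last 14 days"))
print(to_query("compare group A versus group B"))
```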