• Title/Summary/Keyword: Graphical User Interface (GUI)


Development of a Test Framework for Functional and Non-functional Verification of Distributed Systems (분산 시스템의 기능 및 비기능 검증을 위한 테스트 프레임워크 개발)

  • Yun, Sangpil;Seo, Yongjin;Min, Bup-Ki;Kim, Hyeon Soo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.5
    • /
    • pp.107-121
    • /
    • 2014
  • Distributed systems are collections of physically distributed computers linked by a network. The general use of the wired/wireless Internet enables users to make use of distributed services anytime and anywhere. The explosive growth of distributed services strongly requires functional verification of the services as well as verification of non-functional elements such as service quality. In order to verify distributed services it is necessary to build a test environment for distributed systems. However, because distributed systems are composed of physically distributed nodes, constructing a test environment requires more effort than for a monolithic system. In this paper we propose a test framework to verify the functional and non-functional features of distributed systems. The suggested framework automatically generates test cases from message sequence charts, and includes a test driver composed of virtual nodes that can simulate the physically distributed nodes. Test results can be checked easily through various graphs and a graphical user interface (GUI). The test framework can reduce the testing effort for a distributed system and can enhance the reliability of the system.

Design and Implementation of a Data-Driven Defect and Linearity Assessment Monitoring System for Electric Power Steering (전동식 파워 스티어링을 위한 데이터 기반 결함 및 선형성 평가 모니터링 시스템의 설계 구현)

  • Lawal Alabe Wale;Kimleang Kea;Youngsun Han;Tea-Kyung Kim
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.2
    • /
    • pp.61-69
    • /
    • 2023
  • In recent years, due to heightened environmental awareness, Electric Power Steering (EPS) has been increasingly adopted as the steering control unit in manufactured vehicles. This has numerous benefits, such as improved steering power, elimination of hydraulic hose leaks, and reduced fuel consumption. However, for EPS systems to respond to actions, sensors must be employed; this means that the consistency of the sensors' linear variation is integral to the stability of the steering response. To ensure quality control, a reliable method for detecting defects and assessing linearity is required to assess the sensitivity of the EPS sensor to changes in the internal design characteristics. This paper proposes a data-driven defect and linearity assessment monitoring system, which can be used to analyze EPS component defects and linearity based on vehicle speed interval division. The approach is validated experimentally using data collected from an EPS test jig and is further enhanced by the inclusion of a Graphical User Interface (GUI). Based on this design, the developed system effectively performs defect detection with an accuracy of 0.99 and obtains a linearity assessment score at varying vehicle speeds.
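A hedged sketch of what a per-interval linearity score could look like: fit a least-squares line to the sensor output over one vehicle-speed interval and use the coefficient of determination (R²) as the score. The function name, the toy signals, and the choice of R² are assumptions for illustration, not the paper's actual metric:

```python
import math

def linearity_score(xs, ys):
    """R^2 of a least-squares line fit; 1.0 means perfectly linear."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

speed = [2.0 * i for i in range(51)]               # 0..100 km/h interval
healthy = [0.5 * v + 2.0 for v in speed]           # linear sensor response
defective = [y + 5.0 * math.sin(v / 5.0)           # nonlinear distortion
             for v, y in zip(speed, healthy)]

score_ok = linearity_score(speed, healthy)
score_bad = linearity_score(speed, defective)
```

A defect detector could then flag any interval whose score falls below a calibrated threshold.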

Study on the Quantitative Analysis of the Major Environmental Effecting Factors for Selecting the Railway Route (철도노선선정에 영향을 미치는 주요환경항목 정량화에 관한 연구)

  • Kim, Dong-ki;Park, Yong-Gul;Jung, Woo-Sung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.6D
    • /
    • pp.761-770
    • /
    • 2009
  • The energy efficiency and environment-friendly aspects of the railway system would be superior to those of other on-land transportation systems. In the preliminary feasibility study stage and in the selection of an optimal railway route, energy efficiency and environment-related problems are usually considered. For the selection of an optimal railway route, geographical features and ease of management are generally considered. This paper focuses on the environmental effect factors for the selection of an environment-friendly railway route. In this study, various analyses of the opinions of specialists (railway, environment, transport, urban planning, survey) and of the guideline for the construction of environment-friendly railways were carried out. From the results of these analyses, 7 major categories (topography/geology, flora and fauna, nature property, air quality, water quality, noise/vibration, visual impact/cultural assets) were extracted. To select an environment-friendly railway route, many alternatives should be compared, and the optimal route must be selected by a comprehensive assessment considering these 7 categories. To solve this problem, the selected method was AHP, which simplifies complex problems using a hierarchy, quantifies qualitative problems through 1:1 comparisons, and extracts objective conclusions by maintaining consistency. As a result, a GUI-based program was developed which provides basic values of the weighting parameters of each category defined by specialists, and a quantification of detailed assessment guidelines to ensure consistency.
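The AHP step described above can be sketched as follows: a pairwise 1:1 comparison matrix on Saaty's 1-9 scale is reduced to category weights via its principal eigenvector, and a consistency ratio checks that the specialists' judgments are coherent. The comparison values below are invented for a 3-category example; only the method (standard AHP) comes from the abstract:

```python
def ahp_weights(matrix, iterations=100):
    """Principal-eigenvector weights via power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]
    return w

def consistency_ratio(matrix, weights):
    """CR = CI / RI; judgments are conventionally acceptable if CR < 0.1."""
    n = len(matrix)
    # lambda_max estimated from A·w = lambda·w
    lam = sum(sum(matrix[i][j] * weights[j] for j in range(n)) / weights[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty's random index
    return ci / ri

# 3 categories compared pairwise (illustrative values, Saaty 1-9 scale);
# m[i][j] = how much more important category i is than category j
m = [[1.0,     3.0, 5.0],
     [1 / 3.0, 1.0, 2.0],
     [1 / 5.0, 0.5, 1.0]]
w = ahp_weights(m)
cr = consistency_ratio(m, w)
```

The program described in the abstract would run this with a 7×7 matrix over the extracted categories and specialist-supplied comparison values.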

A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun;Jeong, Kyo Sung;Kim, Soo Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.39-58
    • /
    • 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of big data processing and analysis has become highlighted. Visualization of data gives people effectiveness and clarity in understanding analysis results. In addition, visualization serves as the GUI (Graphical User Interface) that supports communication between people and analysis systems. To make development and maintenance easier, these GUI parts should usually be loosely coupled from the parts that process and analyze the data. To implement such a loosely coupled architecture, it is necessary to adopt design patterns such as MVC (Model-View-Controller), which is designed to minimize coupling between the UI part and the data processing part. On the other hand, big data can be classified into structured data and unstructured data. The visualization of structured data is relatively easy compared to that of unstructured data. Nevertheless, as the use and analysis of unstructured data have spread, visualization systems are usually developed separately for each project to overcome the limitations of traditional visualization systems for structured data. Furthermore, for text data, which covers a huge part of unstructured data, visualization is even more difficult. This results from the complexity of the technologies for analyzing text data, such as linguistic analysis, text mining, and social network analysis; these technologies are also not standardized. This situation makes it difficult to reuse the visualization system of one project in other projects. We assume that the reason is a lack of commonality in the design of visualization systems with a view to extending them to other systems. In our research, we suggest a common information model for visualizing text data and propose a comprehensive and reusable framework, TexVizu, for visualizing text data. At first, we survey representative research in the text visualization area. We also identify common elements for text visualization and common patterns among its various cases. We then review and analyze these elements and patterns from three different viewpoints: a structural viewpoint, an interactive viewpoint, and a semantic viewpoint. We then design an integrated model of text data which represents the elements for visualization. The structural viewpoint is for identifying structural elements of various text documents, such as title, author, and body. The interactive viewpoint is for identifying the types of relations and interactions between text documents, such as post, comment, and reply. The semantic viewpoint is for identifying semantic elements which are extracted by analyzing text data linguistically and are represented as tags classifying types of entities such as people, place or location, time, and event. We then extract and choose common requirements for visualizing text data. The requirements are categorized into four types: structure information, content information, relation information, and trend information. Each type of requirement comprises the required visualization techniques, data, and goal (what to know). These are the common and key requirements for designing a framework that keeps a visualization system loosely coupled from the data processing or analysis system. Finally, we designed a common text visualization framework, TexVizu, which is reusable and extensible for various visualization projects by collaborating with various Text Data Loaders and Analytical Text Data Visualizers via common interfaces such as ITextDataLoader and IATDProvider. TexVizu also comprises an Analytical Text Data Model, Analytical Text Data Storage, and an Analytical Text Data Controller. In this framework, external components are the specifications of the interfaces required for collaborating with the framework. As an experiment, we also adopt this framework in two text visualization systems: a social opinion mining system and an online news analysis system.
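The loose coupling via common interfaces can be sketched in a few lines. The interface names `ITextDataLoader` and `IATDProvider` come from the abstract, but the method signatures and the toy implementations here are assumptions:

```python
from abc import ABC, abstractmethod

class ITextDataLoader(ABC):
    """External component: supplies raw text documents to the framework."""
    @abstractmethod
    def load(self, source):
        ...

class IATDProvider(ABC):
    """External component: turns documents into an analytical text data model."""
    @abstractmethod
    def analyze(self, documents):
        ...

class ListLoader(ITextDataLoader):
    def load(self, source):
        return list(source)

class KeywordProvider(IATDProvider):
    def analyze(self, documents):
        # toy semantic analysis: word frequencies stand in for entity tags
        counts = {}
        for doc in documents:
            for word in doc.lower().split():
                counts[word] = counts.get(word, 0) + 1
        return counts

class TexVizuController:
    """MVC glue: the controller sees only the interfaces, so loaders and
    analyzers can be swapped per project without touching the view."""
    def __init__(self, loader, provider):
        self.loader, self.provider = loader, provider

    def build_model(self, source):
        return self.provider.analyze(self.loader.load(source))

model = TexVizuController(ListLoader(), KeywordProvider()).build_model(
    ["big data visualization", "text data visualization"])
```

The design point is that each project only implements the two interfaces; the controller, model, and view code is reused unchanged.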

Preliminary Study on Performance Evaluation of a Stacking-structure Compton Camera by Using Compton Imaging Simulator (Compton Imaging Simulator를 이용한 다층 구조 컴프턴 카메라 성능평가 예비 연구)

  • Lee, Se-Hyung;Park, Sung-Ho;Seo, Hee;Park, Jin-Hyung;Kim, Chan-Hyeong;Lee, Ju-Hahn;Lee, Chun-Sik;Lee, Jae-Sung
    • Progress in Medical Physics
    • /
    • v.20 no.2
    • /
    • pp.51-61
    • /
    • 2009
  • A Compton camera, which is based on the geometrical interpretation of Compton scattering, is a very promising gamma-ray imaging device considering its several advantages over conventional gamma-ray imaging devices: high imaging sensitivity, 3-D imaging capability from a fixed position, multi-tracing functionality, and almost no limitation on photon energy. In the present study, a Monte Carlo-based, user-friendly Compton imaging simulator was developed in the form of a graphical user interface (GUI) based on Geant4 and MATLAB™. The simulator was tested against the experimental results of the double-scattering Compton camera under development at Hanyang University in Korea. The imaging resolution of the simulated Compton image agreed well with that of the measured image. The imaging sensitivity of the measured data was 2~3 times higher than that of the simulated data, because the measured data contain random coincidence events. The performance of a stacking-structure type Compton camera was then evaluated using the simulator. The results show that the Compton camera performs best when it uses 4 layers of scatterer detectors.
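The geometrical interpretation the abstract refers to rests on the standard Compton formula: from the energy deposited in the scatterer and the energy absorbed afterwards, one recovers the cone angle on which the source must lie. This is a generic textbook calculation, not the simulator's actual code:

```python
import math

MEC2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e1_keV, e2_keV):
    """Scattering angle (radians) for energy e1 deposited in the scatterer
    and e2 absorbed in the second detector:
        cos(theta) = 1 - m_e c^2 * (1/E2 - 1/E0),  E0 = E1 + E2
    """
    e0 = e1_keV + e2_keV                     # incident photon energy
    cos_theta = 1.0 - MEC2 * (1.0 / e2_keV - 1.0 / e0)
    return math.acos(cos_theta)

# 662 keV Cs-137 photon depositing 200 keV in the scatterer
theta_deg = math.degrees(compton_cone_angle(200.0, 462.0))
```

Each detected coincidence constrains the source to one such cone; intersecting many cones yields the 3-D image, which is why random coincidences (wrong E1/E2 pairings) inflate measured sensitivity without adding imaging information.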

Implementation of an Intelligent Audio Graphic Equalizer System (지능형 오디오 그래픽 이퀄라이저 시스템 구현)

  • Lee Kang-Kyu;Cho Youn-Ho;Park Kyu-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.3 s.309
    • /
    • pp.76-83
    • /
    • 2006
  • The main objective of an audio equalizer is to let the user tailor the acoustic frequency response to increase sound comfort; example applications of audio equalizers range from large-scale audio systems to portable audio such as mobile MP3 players. Up to now, all audio equalizers have required manual settings to equalize the frequency bands and create suitable sound quality for each genre of music. In this paper, we propose an intelligent audio graphic equalizer system that automatically classifies the music genre using music content analysis; the music sound is then boosted with the given frequency gains according to the classified genre during playback. In order to reproduce comfortable sound, the musical genre is determined by a two-step hierarchical algorithm with coarse-level and fine-level classification. This prevents annoying sound reproduction due to sudden changes of the equalizer gains at the beginning of playback. Each stage of the music classification experiments shows at least 80% success, with complete genre classification and equalizer operation within 2 seconds. A simple software graphical user interface for the 3-band automatic equalizer is implemented using Visual C on a personal computer.
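The playback step (applying genre-specific band gains once the genre is classified) can be sketched as follows; the three genre presets are invented placeholders, not the paper's tuned values:

```python
# Toy 3-band equalizer: convert per-genre dB gains to linear factors and
# scale each band's samples. Presets are illustrative assumptions.

GENRE_PRESETS_DB = {          # (low, mid, high) band gains in dB
    "classical": (0.0, 0.0, 2.0),
    "rock":      (4.0, 0.0, 3.0),
    "speech":    (-2.0, 3.0, 0.0),
}

def db_to_linear(db):
    return 10.0 ** (db / 20.0)

def equalize(band_samples, genre):
    """Scale each band's samples by the preset gain for the genre."""
    gains = [db_to_linear(g) for g in GENRE_PRESETS_DB[genre]]
    return [[s * g for s in band] for band, g in zip(band_samples, gains)]

bands = [[1.0, -1.0], [0.5, 0.5], [0.25, -0.25]]   # toy low/mid/high signals
out = equalize(bands, "rock")
```

In the described system, the preset would be selected automatically from the two-step genre classifier rather than chosen by hand.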

Development of Structural Reliability Analysis Platform of FERUM-MIDAS for Reliability-Based Safety Evaluation of Bridges (신뢰도 기반 교량 안전성 평가를 위한 구조신뢰성 해석 플랫폼 FERUM-MIDAS의 개발)

  • Lee, Seungjun;Lee, Young-Joo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.11
    • /
    • pp.884-891
    • /
    • 2020
  • The collapse of bridges can cause massive casualties and economic losses, so it is essential to evaluate the structural safety of bridges. For this task, structural reliability analysis, which considers various bridge-related uncertainty factors, is often used. This paper proposes a new computational platform to perform structural reliability analysis for bridges and evaluate their structural safety under various loading conditions. For this purpose, a reliability analysis software package, Finite Element Reliability Using MATLAB (FERUM), was integrated with MIDAS/CIVIL, a widely used commercial software package specialized for bridges. Furthermore, a graphical user interface (GUI) control module was added to FERUM to overcome the limitations of software operation. In this study, the proposed platform was applied to a simple frame structure, and the analysis results of FORM (First-Order Reliability Method) and MCS (Monte Carlo simulation), two representative reliability analysis methods, were compared. The proposed platform was verified by confirming that the difference in the calculated failure probabilities was less than 5%. In addition, the structural safety of a pre-stressed concrete (PSC) bridge was evaluated considering the KL-510 vehicle model. The proposed structural reliability analysis platform is expected to enable effective reliability-based safety evaluation of bridges.
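The FORM-versus-MCS comparison can be illustrated with a toy limit state g = R − S (resistance minus load, both normal), whose failure probability is also known analytically via the reliability index β; FORM would return Φ(−β) directly. The distribution parameters below are invented, and this is a generic reliability-textbook sketch, not the platform's code:

```python
import math
import random

def mcs_failure_probability(n, mu_r, sd_r, mu_s, sd_s, seed=1):
    """Monte Carlo estimate of P[g < 0] for g = R - S."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0)
    return failures / n

# Analytic reference: g ~ N(mu_r - mu_s, sqrt(sd_r^2 + sd_s^2)),
# so Pf = Phi(-beta) with beta the reliability index.
mu_r, sd_r, mu_s, sd_s = 10.0, 2.0, 5.0, 1.0
beta = (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
pf_mcs = mcs_failure_probability(200_000, mu_r, sd_r, mu_s, sd_s)
```

For a real bridge, the closed form disappears because g is evaluated through a finite element model (here, MIDAS/CIVIL), which is exactly why FERUM's sampling and FORM machinery must drive the structural solver.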

A rock physics simulator and its application for CO₂ sequestration process (CO₂ 격리 처리를 위한 암석물리학 모의실험장치와 그 응용)

  • Li, Ruiping;Dodds, Kevin;Siggins, A.F.;Urosevic, Milovan
    • Geophysics and Geophysical Exploration
    • /
    • v.9 no.1
    • /
    • pp.67-72
    • /
    • 2006
  • Injection of CO₂ into underground saline formations, due to their large storage capacity, is probably the most promising approach for the reduction of CO₂ emissions into the atmosphere. CO₂ storage must be carefully planned and monitored to ensure that the CO₂ is safely retained in the formation for periods of at least thousands of years. Seismic methods, particularly for offshore reservoirs, are the primary tool for monitoring the injection process and the distribution of CO₂ in the reservoir over time, provided that reservoir properties are favourable. Seismic methods are equally essential for the characterisation of a potential trap, determining the reservoir properties, and estimating its capacity. Hence, an assessment of the change in seismic response to CO₂ storage needs to be carried out at a very early stage. This must be revisited at later stages, to assess potential changes in seismic response arising from changes in fluid properties or mineral composition that may result from chemical interactions between the host rock and the CO₂. Thus, carefully structured modelling of the seismic response changes caused by injection of CO₂ into a reservoir over time helps in the design of a long-term monitoring program. For that purpose we have developed a Graphical User Interface (GUI) driven rock physics simulator, designed to model both short- and long-term 4D seismic responses to injected CO₂. The application incorporates CO₂ phase changes, local pressure and temperature changes, chemical reactions, and mineral precipitation. By incorporating anisotropic Gassmann equations into the simulator, the seismic response of faults and fractures reactivated by CO₂ can also be predicted. We show field examples (potential CO₂ sequestration sites offshore and onshore) where we have tested our rock physics simulator. 4D seismic responses are modelled to help design the monitoring program.
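At the core of any such rock physics simulator is Gassmann fluid substitution; the isotropic form can be sketched as below, showing how replacing brine with CO₂ lowers the fluid bulk modulus and hence the saturated rock modulus (and so the seismic velocity). All moduli and porosity values are illustrative, not site data:

```python
def gassmann_k_sat(k_dry, k_mineral, k_fluid, porosity):
    """Saturated bulk modulus via the isotropic Gassmann relation:
        K_sat = K_dry + (1 - K_dry/K_min)^2 /
                (phi/K_fl + (1 - phi)/K_min - K_dry/K_min^2)
    """
    b = (1.0 - k_dry / k_mineral) ** 2
    c = (porosity / k_fluid
         + (1.0 - porosity) / k_mineral
         - k_dry / k_mineral ** 2)
    return k_dry + b / c

# Illustrative quartz-rich sandstone frame (moduli in GPa)
k_dry, k_min, phi = 12.0, 37.0, 0.25
k_brine, k_co2 = 2.5, 0.05          # brine vs. supercritical CO2 bulk moduli

k_sat_brine = gassmann_k_sat(k_dry, k_min, k_brine, phi)
k_sat_co2 = gassmann_k_sat(k_dry, k_min, k_co2, phi)
```

The simulator described above extends this idea with anisotropic Gassmann equations and time-varying fluid properties, but the brine-to-CO₂ softening computed here is the basic 4D seismic signal being monitored.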

Misconception on the Yellow Sea Warm Current in Secondary-School Textbooks and Development of Teaching Materials for Ocean Current Data Visualization (중등학교 교과서 황해난류 오개념 분석 및 해류 데이터 시각화 수업자료 개발)

  • Su-Ran Kim;Kyung-Ae Park;Do-Seong Byun;Kwang-Young Jeong;Byoung-Ju Choi
    • Journal of the Korean earth science society
    • /
    • v.44 no.1
    • /
    • pp.13-35
    • /
    • 2023
  • Ocean currents play the most important role in causing and controlling global climate change. The water depth of the Yellow Sea is very shallow compared to the East Sea, and the circulation and currents of its seawater are quite complicated owing to the influence of various wind fields, ocean currents, and river discharge of low-salinity seawater. The Yellow Sea Warm Current (YSWC) is one of the most representative currents of the Yellow Sea in winter and is closely related to the weather of the southwest coast of the Korean Peninsula, so it needs to be treated as important in secondary-school textbooks. Based on the 2015 revised national educational curriculum, secondary-school science and earth science textbooks were analyzed for content related to the YSWC. In addition, a questionnaire survey of secondary-school science teachers was conducted to investigate their perceptions of the temporal variability of ocean currents. Most teachers appeared to hold the incorrect belief that the YSWC moves north toward the west coast of the Korean Peninsula all year round and is strong in summer like a general warm current. In fact, unlike the North Korean Cold Current (NKCC), the YSWC does not have strong seasonal variability in current strength; rather, it does not exist all year round and appears only in winter. These errors in teachers' subject knowledge have a background similar to their misconception that the NKCC is strong in winter. Therefore, errors in textbook content on the YSWC were analyzed and presented. In addition, to develop students' and teachers' data literacy, class materials on the YSWC that can be used in inquiry activities were developed. A graphical user interface (GUI) program that can visualize the sea surface temperature of the Yellow Sea was introduced, and a program displaying the spatial distribution of water temperature and salinity was developed using World Ocean Atlas (WOA) 2018 in-situ measurements of water temperature and salinity and ocean numerical model reanalysis field data. These data visualization materials using oceanic data are expected to correct teachers' misunderstandings and serve as an opportunity to cultivate both students' and teachers' ocean and data literacy.

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the conditions of Big Data: the amount of data (volume), the data input and output speeds (velocity), and the variety of data types (variety). If someone intends to discover the trend of an issue in SNS Big Data, this information can be used as an important new source for the creation of new value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and established to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic for the duration of a month; (3) provide the importance of a topic through a treemap based on the score system and frequency; (4) visualize the daily time-series graph of keywords found by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale up from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; the interaction with data is easy, and it is useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, made of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS); based on this, we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
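Function (1), the daily topic-keyword ranking, can be sketched minimally. TITS applies topic modeling over a Hadoop/MongoDB pipeline; here a plain term-frequency ranking after stop-word removal stands in for that pipeline, so everything below is a simplified assumption, not the system's actual algorithm:

```python
from collections import Counter

# Illustrative stop-word list; the real system would use full Korean/English
# NLP preprocessing (stop-word removal, noun extraction) as the abstract notes.
STOP_WORDS = {"the", "a", "is", "to", "and", "of", "rt"}

def daily_keyword_ranking(tweets, top_n=3):
    """Rank one day's tweets by term frequency after stop-word removal."""
    counts = Counter(
        word
        for tweet in tweets
        for word in tweet.lower().split()
        if word not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

tweets = [
    "RT election results are trending",
    "the election debate is tonight",
    "election coverage and debate highlights",
]
ranking = daily_keyword_ranking(tweets)
```

A topic model (e.g. LDA) would instead group co-occurring keywords into topics before ranking, which is what lets TITS track an issue as a coherent keyword set over time.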