• Title/Summary/Keyword: Graphical user interface


Design and Implementation of a Data-Driven Defect and Linearity Assessment Monitoring System for Electric Power Steering (전동식 파워 스티어링을 위한 데이터 기반 결함 및 선형성 평가 모니터링 시스템의 설계 구현)

  • Lawal Alabe Wale;Kimleang Kea;Youngsun Han;Tea-Kyung Kim
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.2
    • /
    • pp.61-69
    • /
    • 2023
  • In recent years, due to heightened environmental awareness, Electric Power Steering (EPS) has been increasingly adopted as the steering control unit in manufactured vehicles. This has brought numerous benefits, such as improved steering power, elimination of hydraulic hose leaks, and reduced fuel consumption. However, for EPS systems to respond to driver actions, sensors must be employed; this means that the consistency of the sensor's linear variation is integral to the stability of the steering response. To ensure quality control, a reliable method for detecting defects and assessing linearity is required to assess the sensitivity of the EPS sensor to changes in its internal design characteristics. This paper proposes a data-driven defect and linearity assessment monitoring system, which can be used to analyze EPS component defects and linearity based on vehicle-speed interval division. The approach is validated experimentally using data collected from an EPS test jig and is further enhanced by the inclusion of a Graphical User Interface (GUI). Based on this design, the developed system effectively performs defect detection with an accuracy of 0.99 and obtains a linearity assessment score at varying vehicle speeds.
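
The abstract does not detail how the linearity score is computed. As an illustrative sketch only, one reading of "linearity assessment based on vehicle-speed interval division" is to bin sensor samples by speed and score each bin by the squared Pearson correlation between sensor input and assist output; every name below (the function, the bin width, the sample layout) is hypothetical, not the paper's method:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def linearity_by_speed_interval(samples, bin_width=20):
    """Group (speed, torque_in, assist_out) samples into speed bins and
    score each bin by the squared correlation (r^2) of input vs. output.
    A score near 1.0 indicates near-perfect sensor linearity in that bin."""
    bins = {}
    for speed, t_in, a_out in samples:
        bins.setdefault(int(speed // bin_width), []).append((t_in, a_out))
    scores = {}
    for b, pts in bins.items():
        if len(pts) >= 3:  # need a few points for a meaningful correlation
            xs, ys = zip(*pts)
            scores[b] = pearson_r(xs, ys) ** 2
    return scores
```

A perfectly linear sensor yields a score of 1.0 in every speed bin; defective or saturating behavior pulls the score down in the affected bins.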

Study on the Quantitative Analysis of the Major Environmental Effecting Factors for Selecting the Railway Route (철도노선선정에 영향을 미치는 주요환경항목 정량화에 관한 연구)

  • Kim, Dong-ki;Park, Yong-Gul;Jung, Woo-Sung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.6D
    • /
    • pp.761-770
    • /
    • 2009
  • The energy efficiency and environmental friendliness of the railway system are superior to those of other land transportation systems. In the preliminary feasibility study stage and in the selection of an optimal railway route, energy efficiency and environment-related problems are usually considered. For the selection of an optimal railway route, geographical features and ease of management are generally considered. This paper focuses on the environmental-effect factors for the selection of an environment-friendly railway route. In this study, analyses of the opinions of specialists (railway, environment, transport, urban planning, survey) and of the guideline for the construction of environment-friendly railways were carried out. From these analyses, seven major categories (topography/geology, flora and fauna, natural property, air quality, water quality, noise/vibration, visual impact/cultural assets) were extracted. To select an environment-friendly railway route, many alternatives should be compared, and the optimal route must be selected by a comprehensive assessment considering these seven categories. To solve this problem, the selected method was the Analytic Hierarchy Process (AHP), which simplifies complex problems by utilizing a hierarchy, quantifies qualitative problems through pairwise (1:1) comparison, and extracts objective conclusions by maintaining consistency. As a result, a GUI-based program was developed that provides baseline values of the weighted parameters of each category, defined by specialists, and a quantification of detailed assessment guidelines to ensure consistency.
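
The AHP procedure the abstract names (hierarchy, pairwise 1:1 comparison, consistency check) can be sketched as follows. This is the generic textbook AHP computation in plain Python, not the authors' program: category weights come from the principal eigenvector of the pairwise-comparison matrix, and Saaty's consistency ratio checks that the expert judgments do not contradict each other. The function name and iteration count are illustrative:

```python
def ahp_weights(matrix, iters=100):
    """Approximate the principal eigenvector of a pairwise-comparison
    matrix by power iteration; returns (weights, consistency_ratio).
    A consistency ratio below ~0.1 is conventionally acceptable."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]
    # lambda_max estimated by averaging (A w)_i / w_i over the rows
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}.get(n, 1.41)  # Saaty's random index
    return w, ci / ri
```

For a perfectly consistent matrix (every entry equal to the ratio of the underlying weights), the recovered weights are exact and the consistency ratio is zero; real expert judgments land somewhere above that.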

Predicting the splitting tensile strength of manufactured-sand concrete containing stone nano-powder through advanced machine learning techniques

  • Manish Kewalramani;Hanan Samadi;Adil Hussein Mohammed;Arsalan Mahmoodzadeh;Ibrahim Albaijan;Hawkar Hashim Ibrahim;Saleh Alsulamy
    • Advances in nano research
    • /
    • v.16 no.4
    • /
    • pp.375-394
    • /
    • 2024
  • The extensive utilization of concrete has given rise to environmental concerns, specifically concerning the depletion of river sand. To address this issue, waste deposits can provide manufactured sand (MS) as a substitute for river sand. The objective of this study is to explore the application of machine learning techniques to facilitate the production of manufactured-sand concrete (MSC) containing stone nano-powder by estimating the splitting tensile strength (STS) from the compressive strength of cement (CSC), tensile strength of cement (TSC), curing age (CA), maximum size of the crushed stone (Dmax), stone nano-powder content (SNC), fineness modulus of sand (FMS), water-to-cement ratio (W/C), sand ratio (SR), and slump (S). To achieve this goal, a total of 310 data points, encompassing these nine influential factors affecting the mechanical properties of MSC, were collected through laboratory tests. Subsequently, the gathered dataset was divided into two subsets, one for training and the other for testing, comprising 90% (280 samples) and 10% (30 samples) of the total data, respectively. By employing the generated dataset, novel models were developed for evaluating the STS of MSC in relation to the nine input features. The analysis results revealed significant correlations of the CSC and the curing age (CA) with the STS. Moreover, a sensitivity analysis using an empirical model showed that parameters such as the FMS and the W/C exert minimal influence on the STS. Various loss functions were employed to gauge the effectiveness and precision of the methodologies. The outcomes of the devised models exhibited commendable accuracy and reliability, with all models displaying an R-squared value surpassing 0.75 and loss-function values approaching insignificance. To further facilitate the estimation of STS in engineering practice, a user-friendly graphical interface was also developed for the machine learning models. These proposed models present a practical alternative to laborious, expensive, and complex laboratory techniques, thereby simplifying the production of mortar specimens.
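
The train/test split and R-squared evaluation described above can be illustrated with a deliberately simplified, single-feature least-squares stand-in. The paper's actual models use nine features and more sophisticated learners; every name, the seed, and the toy data below are hypothetical:

```python
import random

def train_test_split(data, test_frac=0.10, seed=42):
    """Shuffle and split (x, y) pairs into train/test subsets."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def fit_line(pairs):
    """Ordinary least squares for y = a*x + b on (x, y) pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def r_squared(pairs, a, b):
    """Coefficient of determination of the fitted line on held-out pairs."""
    my = sum(y for _, y in pairs) / len(pairs)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in pairs)
    ss_tot = sum((y - my) ** 2 for _, y in pairs)
    return 1 - ss_res / ss_tot
```

The study's R-squared > 0.75 criterion corresponds to `r_squared` evaluated on the held-out 10% split, exactly as in this skeleton, just with multivariate models in place of `fit_line`.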

Development of Multimedia Annotation and Retrieval System using MPEG-7 based Semantic Metadata Model (MPEG-7 기반 의미적 메타데이터 모델을 이용한 멀티미디어 주석 및 검색 시스템의 개발)

  • An, Hyoung-Geun;Koh, Jae-Jin
    • The KIPS Transactions:PartD
    • /
    • v.14D no.6
    • /
    • pp.573-584
    • /
    • 2007
  • As multimedia information has recently increased rapidly, various types of retrieval of multimedia data are becoming issues of great importance. For efficient multimedia data processing, semantics-based retrieval techniques are required that can extract the meaningful contents of multimedia data. Existing retrieval methods for multimedia data are annotation-based retrieval, feature-based retrieval, and retrieval based on the integration of annotations and features. These systems demand considerable effort and time from the annotator, and complicated calculations must be performed for feature extraction. In addition, the created data have the shortcoming of supporting only static search that does not change. User-friendly and semantic search techniques are also not supported. This paper proposes the development of S-MARS (Semantic Metadata-based Multimedia Annotation and Retrieval System), which can represent and extract multimedia data efficiently using MPEG-7. The system provides a graphical user interface for annotating, searching, and browsing multimedia data. It is implemented on the basis of a semantic metadata model for representing multimedia information. The semantic metadata about multimedia data are organized on the basis of a multimedia description schema, using an XML schema that complies with the MPEG-7 standard. In conclusion, the proposed scheme can be easily implemented on any multimedia platform supporting XML technology. It can be utilized to enable efficient semantic metadata sharing between systems, and it will contribute to improving retrieval correctness and user satisfaction with the embedding-based multimedia retrieval algorithm.

Development of a Window Program for Searching CpG Island (CpG Island 검색용 윈도우 프로그램 개발)

  • Kim, Ki-Bong
    • Journal of Life Science
    • /
    • v.18 no.8
    • /
    • pp.1132-1139
    • /
    • 2008
  • A CpG island is a short stretch of DNA in which the frequency of the CG dinucleotide is higher than in other regions. CpG islands are present in the promoters and exonic regions of approximately 30-60% of mammalian genes, so they are useful markers for genes in organisms whose genomes contain 5-methylcytosine. Recent evidence supports the notion that hypermethylation of CpG islands, by silencing tumor suppressor genes, plays a major causal role in cancer, and it has been described in almost every tumor type. In this respect, CpG island search by computational methods is very helpful for cancer research and for computational promoter and gene prediction. I therefore developed a window program (called CpGi) on the basis of the CpG island criteria defined by D. Takai and P. A. Jones. The program CpGi was implemented in Visual C++ 6.0 and can determine the locations of CpG islands using diverse parameters (%GC, Obs(CpG)/Exp(CpG), window size, step size, gap value, number of CpGs, length) specified by the user. The analysis result of CpGi provides a graphical map of CpG islands and a G+C% plot, and more detailed information on each CpG island can be obtained through a pop-up window. Two human contigs, AP00524 (from chromosome 22) and NT_029490.3 (from chromosome 21), were used to compare the performance of CpGi with that of two other public programs in terms of the accuracy of search results. The two other programs used in the performance comparison are Emboss-CpGPlot and CpG Island Searcher, which are web-based public CpG island search programs. The comparison showed that CpGi is on a level with or outperforms Emboss-CpGPlot and CpG Island Searcher. Having a simple and easy-to-use user interface, CpGi should be a very useful tool for genome analysis and CpG island research. To obtain a copy of CpGi for academic use only, contact the corresponding author.
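
The Takai-Jones criteria the program is built on (GC content >= 55% and an observed/expected CpG ratio >= 0.65 within a sliding window) can be sketched in a few lines. This is a simplified illustration, not the CpGi implementation: among other things, it omits the 500-bp minimum-length condition and the gap handling of the full criteria, and its parameter defaults are illustrative:

```python
def gc_content(seq):
    """Fraction of G and C bases in the sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def obs_exp_cpg(seq):
    """Observed/expected CpG ratio: count(CpG) * N / (count(C) * count(G))."""
    c, g = seq.count("C"), seq.count("G")
    if c == 0 or g == 0:
        return 0.0
    return seq.count("CG") * len(seq) / (c * g)

def find_cpg_islands(seq, window=200, step=1, min_gc=0.55, min_oe=0.65):
    """Slide a window over the sequence, keep windows meeting the
    Takai-Jones thresholds, and merge overlapping windows into islands."""
    hits = []
    for i in range(0, len(seq) - window + 1, step):
        w = seq[i:i + window]
        if gc_content(w) >= min_gc and obs_exp_cpg(w) >= min_oe:
            hits.append((i, i + window))
    islands = []
    for s, e in hits:
        if islands and s <= islands[-1][1]:
            islands[-1] = (islands[-1][0], e)  # extend the current island
        else:
            islands.append((s, e))
    return islands
```

The `window`, `step`, `min_gc`, and `min_oe` parameters correspond directly to the user-specified parameters the abstract lists (window size, step size, %GC, Obs(CpG)/Exp(CpG)).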

Modeling and Intelligent Control for Activated Sludge Process (활성슬러지 공정을 위한 모델링과 지능제어의 적용)

  • Cheon, Seong-pyo;Kim, Bongchul;Kim, Sungshin;Kim, Chang-Won;Kim, Sanghyun;Woo, Hae-Jin
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.22 no.10
    • /
    • pp.1905-1919
    • /
    • 2000
  • The main motivation of this research is to develop an intelligent control strategy for the Activated Sludge Process (ASP). The ASP is a complex and nonlinear dynamic system because of the characteristics of the wastewater, changes in influent flow rate, weather conditions, etc. The mathematical model of the ASP also includes uncertainties that are ignored or not considered by the process engineer or controller designer. The ASP is generally controlled by a PID controller that consists of fixed proportional, integral, and derivative gain values. The PID gains are adjusted by an expert who has much experience with the ASP. An ASP model based on Matlab 5.3/Simulink 3.0 is developed in this paper. The performance of the model is tested against IWA (International Water Association) and COST (European Cooperation in the field of Scientific and Technical Research) data that include steady-state results over 14 days. The advantage of the developed model is that the user can easily modify or change the controller with the help of the graphical user interface. The ASP model, as a typical nonlinear system, can be used to simulate and test the proposed controller for educational purposes. Various control methods are applied to the ASP model and the control results are compared, in order to apply the proposed intelligent control strategy to a real ASP. Three control methods are designed and tested: a conventional PID controller, a fuzzy logic approach that modifies setpoints, and a fuzzy-PID control method. The proposed fuzzy-logic-based setpoint changer shows better performance and robustness under disturbances. An objective function can be defined and included in the proposed control strategy to improve the effluent water quality and to reduce the operating cost in a real ASP.
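
The conventional PID baseline the paper compares against can be sketched as a minimal discrete loop. The first-order plant below is a toy stand-in for the ASP dynamics, and all gains, names, and the time step are illustrative, not the paper's tuning:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate(controller, setpoint, steps=2000, dt=0.01, tau=1.0):
    """Drive a first-order plant x' = (-x + u)/tau with the controller
    and return the final state (should settle at the setpoint)."""
    x = 0.0
    for _ in range(steps):
        u = controller.step(setpoint, x)
        x += dt * (-x + u) / tau
    return x
```

A fuzzy setpoint changer of the kind the paper proposes would sit outside this loop, adjusting `setpoint` before each `step` call according to fuzzy rules over the measured disturbances.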


A rock physics simulator and its application for CO2 sequestration process (CO2 격리 처리를 위한 암석물리학 모의실험장치와 그 응용)

  • Li, Ruiping;Dodds, Kevin;Siggins, A.F.;Urosevic, Milovan
    • Geophysics and Geophysical Exploration
    • /
    • v.9 no.1
    • /
    • pp.67-72
    • /
    • 2006
  • Injection of CO2 into underground saline formations is, owing to their large storage capacity, probably the most promising approach for the reduction of CO2 emissions into the atmosphere. CO2 storage must be carefully planned and monitored to ensure that the CO2 is safely retained in the formation for periods of at least thousands of years. Seismic methods, particularly for offshore reservoirs, are the primary tool for monitoring the injection process and the distribution of CO2 in the reservoir over time, provided that the reservoir properties are favourable. Seismic methods are equally essential for characterising a potential trap, determining the reservoir properties, and estimating its capacity. Hence, an assessment of the change in seismic response due to CO2 storage needs to be carried out at a very early stage. This must be revisited at later stages, to assess potential changes in seismic response arising from changes in fluid properties or mineral composition caused by chemical interactions between the host rock and the CO2. Thus, carefully structured modelling of the seismic response changes caused by injection of CO2 into a reservoir over time helps in the design of a long-term monitoring program. For that purpose we have developed a Graphical User Interface (GUI) driven rock physics simulator, designed to model both short- and long-term 4D seismic responses to injected CO2. The application incorporates CO2 phase changes, local pressure and temperature changes, chemical reactions, and mineral precipitation. By incorporating anisotropic Gassmann equations into the simulator, the seismic response of faults and fractures reactivated by CO2 can also be predicted. We show field examples (potential CO2 sequestration sites offshore and onshore) where we have tested our rock physics simulator. 4D seismic responses are modelled to help design the monitoring program.
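
The fluid-substitution step at the heart of such a simulator can be sketched with the standard isotropic Gassmann equation (the paper's anisotropic extension is not reproduced here). The equation is standard rock physics; the moduli, porosity, and density values in the test are illustrative round numbers, not values from the paper:

```python
import math

def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Isotropic Gassmann equation: bulk modulus of the fluid-saturated
    rock from the dry-frame (k_dry), mineral (k_min), and fluid (k_fl)
    bulk moduli (all in GPa) and porosity phi (fraction)."""
    b = 1.0 - k_dry / k_min
    return k_dry + b * b / (phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2)

def p_velocity(k_sat, mu, rho):
    """P-wave velocity in m/s from bulk and shear moduli (GPa)
    and bulk density (kg/m^3); shear modulus is fluid-independent."""
    return math.sqrt((k_sat + 4.0 * mu / 3.0) * 1e9 / rho)
```

Replacing brine (bulk modulus roughly 2-3 GPa) with CO2 (orders of magnitude softer, depending on phase) lowers the saturated bulk modulus and hence Vp; this velocity drop over time is what the modelled 4D seismic response tracks.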

A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun;Jeong, Kyo Sung;Kim, Soo Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.39-58
    • /
    • 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of big data processing and analysis has become a highlighted topic. Visualization gives people an effective and clear understanding of analysis results. Visualization also acts as a GUI (Graphical User Interface) that supports communication between people and analysis systems. To make development and maintenance easier, these GUI parts should be loosely coupled from the parts that process and analyze the data. To implement a loosely coupled architecture, it is necessary to adopt design patterns such as MVC (Model-View-Controller), which is designed to minimize coupling between the UI part and the data processing part. Big data can be classified as structured data and unstructured data, and the visualization of structured data is relatively easy compared to that of unstructured data. Even so, as the use and analysis of unstructured data has spread, developers usually build a visualization system separately for each project to overcome the limitations of traditional visualization systems for structured data. Furthermore, for text data, which covers a huge part of unstructured data, visualization is even more difficult. This results from the complexity of the technologies for analyzing text data, such as linguistic analysis, text mining, and social network analysis, and from the fact that these technologies are not standardized. This situation makes it difficult to reuse the visualization system of one project in other projects. We assume that the reason is a lack of commonality in the design of visualization systems with a view to extending them to other systems. In our research, we suggest a common information model for visualizing text data and propose a comprehensive and reusable framework, TexVizu, for visualizing text data. First, we survey representative research in the text visualization area. We also identify common elements for text visualization and common patterns among its various cases, and then review and analyze these elements and patterns from three different viewpoints: structural, interactive, and semantic. We then design an integrated model of text data that represents the elements for visualization. The structural viewpoint identifies structural elements of text documents, such as title, author, and body. The interactive viewpoint identifies the types of relations and interactions between text documents, such as post, comment, and reply. The semantic viewpoint identifies semantic elements that are extracted by analyzing text data linguistically and are represented as tags classifying entity types, such as people, place or location, time, and event. We then extract and choose common requirements for visualizing text data. The requirements are categorized into four types: structure information, content information, relation information, and trend information. Each type of requirement comprises the required visualization techniques, data, and goal (what to know). These are the common and key requirements for designing a framework in which the visualization system is loosely coupled from the data processing or analysis system. Finally, we designed a common text visualization framework, TexVizu, which is reusable and extensible for various visualization projects by collaborating with various Text Data Loader and Analytical Text Data Visualizer components via common interfaces such as ITextDataLoader and IATDProvider. TexVizu also comprises an Analytical Text Data Model, Analytical Text Data Storage, and Analytical Text Data Controller. In this framework, the external components are the specifications of the interfaces required for collaborating with the framework. As an experiment, we adopted this framework in two text visualization systems: a social opinion mining system and an online news analysis system.
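
The loose coupling via common interfaces that the abstract describes can be sketched with abstract base classes. The interface names ITextDataLoader and IATDProvider come from the abstract, but their methods and the concrete classes below are hypothetical illustrations of the pattern, not the framework's actual API:

```python
from abc import ABC, abstractmethod

class ITextDataLoader(ABC):
    """Interface the framework expects data-source components to implement."""
    @abstractmethod
    def load(self, source):
        """Return a list of document dicts loaded from the given source."""

class IATDProvider(ABC):
    """Interface for components that provide analytical text data to views."""
    @abstractmethod
    def get_documents(self):
        """Return the documents available for visualization."""

class NewsLoader(ITextDataLoader):
    def load(self, source):
        # stand-in: a real loader would fetch and parse actual documents
        return [{"title": "sample", "body": "text", "source": source}]

class SimpleProvider(IATDProvider):
    def __init__(self, loader, source):
        # depends only on the ITextDataLoader interface, not on NewsLoader:
        # any loader can be swapped in without changing this class
        self.docs = loader.load(source)

    def get_documents(self):
        return self.docs
```

Because the provider is written against the interface rather than a concrete loader, a social-opinion loader or a news loader (as in the paper's two experiments) can be plugged in interchangeably.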

Misconception on the Yellow Sea Warm Current in Secondary-School Textbooks and Development of Teaching Materials for Ocean Current Data Visualization (중등학교 교과서 황해난류 오개념 분석 및 해류 데이터 시각화 수업자료 개발)

  • Su-Ran Kim;Kyung-Ae Park;Do-Seong Byun;Kwang-Young Jeong;Byoung-Ju Choi
    • Journal of the Korean earth science society
    • /
    • v.44 no.1
    • /
    • pp.13-35
    • /
    • 2023
  • Ocean currents play the most important role in causing and controlling global climate change. The water depth of the Yellow Sea is very shallow compared to the East Sea, and its circulation and currents are quite complicated owing to the influence of various wind fields, ocean currents, and river discharge of low-salinity seawater. The Yellow Sea Warm Current (YSWC) is one of the most representative currents of the Yellow Sea in winter and is closely related to the weather of the southwest coast of the Korean Peninsula, so it needs to be treated as important in secondary-school textbooks. Based on the 2015 revised national educational curriculum, secondary-school science and earth science textbooks were analyzed for content related to the YSWC. In addition, a questionnaire survey of secondary-school science teachers was conducted to investigate their perceptions of the temporal variability of ocean currents. Most teachers appeared to hold the incorrect belief that the YSWC moves north toward the west coast of the Korean Peninsula all year round and is strong in summer like a typical warm current. Unlike the North Korean Cold Current (NKCC), the YSWC does not have strong seasonal variability in current strength; rather, it does not exist all year round and appears only in winter. These errors in teachers' subject knowledge have a background similar to that of their misconception that the NKCC is strong in winter. Therefore, errors in textbook content on the YSWC were analyzed and presented. In addition, to develop students' and teachers' data literacy, class materials on the YSWC that can be used in inquiry activities were developed. A graphical user interface (GUI) program that can visualize the sea surface temperature of the Yellow Sea was introduced, and a program displaying the spatial distribution of water temperature and salinity was developed using World Ocean Atlas (WOA) 2018 in-situ measurements of water temperature and salinity together with ocean numerical-model reanalysis data. These data visualization materials using oceanic data are expected to correct teachers' misunderstandings and to serve as an opportunity to cultivate both students' and teachers' ocean and data literacy.

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data meet the definition of Big Data in that the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety) are all satisfied. If someone intends to discover the trend of an issue in SNS Big Data, this information can be used as an important new source for the creation of new values, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to a daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) show the importance of a topic through a treemap based on a score system and frequency; (4) visualize the daily time-series graph of keywords matching a searched keyword. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, which is an alternative to the relational database. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale up from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schema or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; the interaction between data is easy, and the library is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, which is made of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS); based on this, we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
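
Function (1) above, a topic keyword set per daily ranking, reduces in its simplest form to per-day token counting after stop-word removal. The following is a hedged in-memory sketch of that pipeline only; TITS itself uses topic modeling over a Hadoop/MongoDB stack, and the tokenizer, stop-word list, and function name here are illustrative:

```python
import re
from collections import Counter

# illustrative stop-word list; a real system would use a full one
STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "rt"}

def daily_keyword_ranking(tweets, top_n=10):
    """Group (day, text) tweets by day, tokenize, drop stop words, and
    return the top-N keyword counts per day -- a simplified stand-in
    for the daily topic-keyword ranking TITS provides."""
    per_day = {}
    for day, text in tweets:
        tokens = re.findall(r"[a-z]+", text.lower())
        per_day.setdefault(day, Counter()).update(
            t for t in tokens if t not in STOPWORDS)
    return {day: counts.most_common(top_n) for day, counts in per_day.items()}
```

The per-day `Counter` values are exactly the frequencies that a treemap view (function 3) or a keyword time-series view (function 4) would consume.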