• Title/Summary/Keyword: Automated software


Automation of Dobson Spectrophotometer(No.124) for Ozone Measurements (돕슨 분광광도계(No.124)의 오존 자동관측시스템화)

  • Kim, Jhoon;Park, Sang-Seo;Moon, Kyung-Jung;Koo, Ja-Ho;Lee, Yun-Gon;Miyagawa, Koji;Cho, Hi-Ku
    • Atmosphere
    • /
    • v.17 no.4
    • /
    • pp.339-348
    • /
    • 2007
  • The Global Environment Laboratory at Yonsei University in Seoul ($37.57^{\circ}N$, $126.95^{\circ}E$) has carried out an ozone layer monitoring program within the framework of the Global Ozone Observing System of the World Meteorological Organization (WMO/GAW/GO3OS Station No. 252) since May 1984. Daily measurements of total ozone and of the vertical distribution of ozone have been made with the Dobson Spectrophotometer (No. 124) on the roof of the Science Building on the Yonsei campus. From 2004 through 2006, major parts of the manual operations were automated, covering the measurement of total ozone and of the vertical ozone profile via the Umkehr method, as well as instrument calibration by standard lamp tests, using new hardware and software including a step motor, rotary encoder, controller, and visual display. The system takes full advantage of the Windows interface and information technology to remain adaptable to the latest Windows PCs and to allow flexible data processing. It also uses the card slots of a desktop personal computer to control the various boards in the driving unit that operate the Dobson spectrophotometer and its test devices. By automating most of the manual work in both instrument operation and data processing, subjective human errors and individual differences are eliminated, and the ozone data quality has been distinctly improved since automation of the Dobson instrument.
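
The abstract above describes a driving unit built around a step motor, rotary encoder, and PC-card controller boards. As a rough illustration of that kind of closed-loop positioning (not the authors' code; the MotorBoard class below is a simulated stand-in for the actual driver boards), a Python sketch:

```python
# Illustrative sketch only: closed-loop positioning of a wavelength dial using
# a step motor with rotary-encoder feedback. MotorBoard is a hypothetical
# stand-in for the PC-card driver boards described in the abstract.

class MotorBoard:
    """Simulated driver board: issues steps and reads back an encoder count."""
    def __init__(self, steps_per_count: int = 4):
        self._position = 0.0                 # encoder counts
        self._steps_per_count = steps_per_count

    def step(self, direction: int) -> None:
        # one motor step moves the encoder by 1/steps_per_count counts
        self._position += direction / self._steps_per_count

    def read_encoder(self) -> float:
        return self._position


def move_to(board: MotorBoard, target_counts: float, tolerance: float = 0.5) -> float:
    """Drive the motor until the encoder reading is within tolerance of the target."""
    while abs(board.read_encoder() - target_counts) > tolerance:
        direction = 1 if board.read_encoder() < target_counts else -1
        board.step(direction)
    return board.read_encoder()


if __name__ == "__main__":
    board = MotorBoard()
    # 120.0 is an arbitrary example dial setting, not a real instrument value
    print(move_to(board, target_counts=120.0))
```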

Digital Photogrammetry and Its Role in GIS

  • 조규전;조우석
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.14 no.1
    • /
    • pp.1-7
    • /
    • 1996
  • The idea of digital photogrammetry was first introduced to the photogrammetric community in the early 1960s. At that time it was impossible to implement because computer and digital image processing technology were inadequate. With recent advances in computer hardware/software and image processing techniques, digital photogrammetry has made its entry into the field of photogrammetry. Its advent also resulted from the increasing amount of digital data acquired through satellites, CCD cameras, and digital scanning of photographs. The major distinction between conventional and digital photogrammetry lies in the nature of the primary input data (analogue versus digital), which could lead to a fully automated digital photogrammetric workstation. However, since digital photogrammetry is still in its infancy, virtually every task remains an unsolved problem owing to an incomplete understanding of the theories and techniques involved. Considering the increasing demand for efficient digital mapping methods and economical GIS database generation, the union of GIS and digital photogrammetry becomes ever clearer. In this paper, the author addresses the current status of digital photogrammetry, including digital imagery and the digital photogrammetric workstation, as well as the role of digital photogrammetry in GIS.


An Automated Code Generation for Both Improving Performance and Detecting Error in Self-Adaptive Modules (자가 적응 모듈의 성능 개선과 오류 탐지를 위한 코드 자동 생성 기법)

  • Lee, Joon-Hoon;Park, Jeong-Min;Lee, Eun-Seok
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.9
    • /
    • pp.538-546
    • /
    • 2008
  • Because computing environments are increasingly complex, there are limits to how many of the problems occurring in systems a human administrator can handle. A widely discussed remedy is to give systems the ability to recognize their own situation and adapt to it by themselves. Building such a self-adaptive system, however, requires considerable experience and knowledge, and this difficulty has itself become a problem. This paper proposes a technique that automatically generates the code of a self-adaptive system so that such systems can be built more easily. The generated self-adaptive system partially resolves the problems identified in previous research: ineffectiveness caused by excessive use of system resources and incorrect operation caused by external factors such as viruses. To evaluate the proposed approach, we applied it to the file transfer module of a video conferencing system and compared the length of the code, the number of classes written by the developers, and the development time. The results confirm the effectiveness of the approach.
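
As a loose illustration of automatic code generation for a self-adaptive module (a minimal sketch, not the paper's technique; the module names, the CPU threshold, and the psutil dependency are assumptions), template-based generation of a monitoring wrapper might look like this:

```python
# Minimal sketch: template-based generation of a self-adaptive wrapper that
# monitors a resource and switches behaviour when a limit is hit. All names
# and thresholds are illustrative, not taken from the paper.

TEMPLATE = '''\
import psutil  # assumed dependency for resource monitoring

def adaptive_{name}(*args, **kwargs):
    """Auto-generated wrapper: falls back when CPU usage exceeds {limit}%."""
    if psutil.cpu_percent(interval=0.1) > {limit}:
        return {fallback}(*args, **kwargs)
    return {name}(*args, **kwargs)
'''

def generate_adapter(name: str, fallback: str, limit: int) -> str:
    """Return Python source code for a self-adaptive wrapper around `name`."""
    return TEMPLATE.format(name=name, fallback=fallback, limit=limit)

if __name__ == "__main__":
    # e.g. wrap a hypothetical file-transfer routine with a low-bandwidth fallback
    print(generate_adapter("send_file", "send_file_compressed", limit=80))
```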

An Automated Test Data Generator for Debugging Esterel Programs (에스테렐 프로그램 디버깅을 위한 테스트 데이터 자동 생성)

  • Yun, Jeong-Han;Cho, Min-Kyung;Seo, Sun-Ae;Han, Tai-Sook
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.10
    • /
    • pp.793-799
    • /
    • 2009
  • Esterel is an imperative synchronous language that is well suited to specifying reactive systems. Programmers sometimes want simple validations that can be applied while the system is still under development. Since a reactive system reacts to changes in its environment, a test datum is a sequence of input events, and generating proper test data by hand is complex and error-prone. Although several test data generators exist, they are hard to learn and use. Mostly, system designers need test data that reach a specific state of the target program. In this paper, we develop a test data generator that produces test input sequences for debugging Esterel programs. The tool is focused on ease of use: users can describe test data properties with simple specifications. We present a case study in which the test data generator is used in a practical development process.
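
Since test data for a reactive program are sequences of input events, a generator can search over such sequences until a target state is reached. The sketch below is illustrative only: the input alphabet and the toy transition function stand in for a real Esterel program, and random search stands in for the tool's specification-driven generation.

```python
# Illustrative sketch: generating test input sequences (one input-signal set
# per reaction) that drive a toy reactive program into a target state.
import random

SIGNALS = ["A", "B", "RESET"]          # assumed input alphabet

def step(state: int, inputs: frozenset) -> int:
    """Toy reactive program: counts reactions with A present, RESET clears."""
    if "RESET" in inputs:
        return 0
    return state + 1 if "A" in inputs else state

def random_sequence(length: int) -> list:
    return [frozenset(s for s in SIGNALS if random.random() < 0.5)
            for _ in range(length)]

def search_sequence(target_state: int, length: int = 10, tries: int = 10000):
    """Search for an input sequence that ends with the program in target_state."""
    for _ in range(tries):
        seq = random_sequence(length)
        state = 0
        for inputs in seq:
            state = step(state, inputs)
        if state == target_state:
            return seq
    return None

if __name__ == "__main__":
    print(search_sequence(target_state=3))
```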

Combining Multiple Classifiers for Automatic Classification of Email Documents (전자우편 문서의 자동분류를 위한 다중 분류기 결합)

  • Lee, Jae-Haeng;Cho, Sung-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.3
    • /
    • pp.192-201
    • /
    • 2002
  • Automated text classification is considered an important method for managing and processing the huge number of documents in digital form that are widespread and continuously increasing. Recently, text classification has been addressed with machine learning technologies such as k-nearest neighbor, decision trees, support vector machines, and neural networks. However, few investigations of text classification have been carried out on real problems rather than on well-organized text corpora, and so they do not demonstrate their practical usefulness. This paper proposes and analyzes text classification methods for a real application, the task of classifying email documents. First, we propose a method of combining multiple neural networks that improves performance through maximum-based and neural-network-based combination. Second, we present another strategy for combining multiple machine learning classifiers, in which voting, Borda count, and neural networks improve the overall classification performance. Experimental results show the usefulness of the proposed methods for a real application domain, yielding precision rates of more than 90%.
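
The two classical combination schemes named in the abstract, majority voting and Borda count, can be sketched directly; the class labels and ranked outputs below are invented placeholders, not results from the paper.

```python
# Sketch of two classifier-combination strategies: majority voting over
# predicted labels and Borda count over per-classifier class rankings.
from collections import Counter, defaultdict

def majority_vote(predictions):
    """predictions: list of class labels, one per classifier."""
    return Counter(predictions).most_common(1)[0][0]

def borda_count(rankings):
    """rankings: list of per-classifier class rankings, best class first."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, label in enumerate(ranking):
            scores[label] += n - 1 - position   # best rank gets most points
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print(majority_vote(["work", "spam", "work"]))                  # -> "work"
    print(borda_count([["work", "personal", "spam"],
                       ["personal", "work", "spam"],
                       ["work", "spam", "personal"]]))              # -> "work"
```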

Index Ontology Repository for Video Contents (비디오 콘텐츠를 위한 색인 온톨로지 저장소)

  • Hwang, Woo-Yeon;Yang, Jung-Jin
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.10
    • /
    • pp.1499-1507
    • /
    • 2009
  • With the abundance of digital content, the need for precise indexing technology keeps growing. To meet this requirement, intelligent software entities need to become subjects of information retrieval, and interoperability among intelligent entities, including humans, must be supported. In this paper, we analyze the unifying framework for multimodal indexing proposed by Snoek and Worring. Our work investigates how to improve the authenticity of indexing information in content-based automated indexing techniques. It supports the creation and control of abstracted high-level indexing information through ontological concepts drawn from Semantic Web techniques. Moreover, it attempts to present a fundamental model that allows interoperability between human and machine and between machine and machine. A memory-resident model of ontology processing is inadequate for absorbing an enormous amount of indexing information, so an ontology repository and an inference engine are required for consistent retrieval and reasoning over logically expressed knowledge. We present an experiment in which the designed knowledge is stored and retrieved using the Minerva ontology repository, demonstrating that the approach meets the technical and efficiency requirements. Finally, the possibility of efficient indexing in connection with related research is also considered.
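
A minimal sketch of storing and querying video-index triples follows. The paper uses the Minerva ontology repository; here rdflib's in-memory graph is used purely as a stand-in, and the namespace and properties are invented for illustration.

```python
# Minimal sketch: video-index annotations stored as RDF triples and retrieved
# with SPARQL. rdflib stands in for the Minerva repository used in the paper;
# the namespace and property names are invented.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/video-index#")   # hypothetical namespace

g = Graph()
shot = EX["shot_0042"]
g.add((shot, EX.partOf, EX["video_news_2009_10_01"]))
g.add((shot, EX.depicts, Literal("press conference")))
g.add((shot, EX.startTime, Literal(125.4)))

# Retrieve every shot annotated with a given concept, plus its start time
results = g.query(
    """
    SELECT ?shot ?t WHERE {
        ?shot <http://example.org/video-index#depicts> "press conference" .
        ?shot <http://example.org/video-index#startTime> ?t .
    }
    """
)
for row in results:
    print(row.shot, row.t)
```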


Development of A Software Tool for Automatic Trim Steel Design of Press Die Using CATIA API (CATIA API를 활용한 프레스금형 트림스틸 설계 자동화 S/W 모듈 개발)

  • Kim, Gang-Yeon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.3
    • /
    • pp.72-77
    • /
    • 2017
  • This paper focuses on the development of a supporting S/W tool for the automated design of automotive press trim dies. To define an automation-oriented die design process, we analyze the press die design process currently used in industry and group the repetitive operations in the 3D modeling process. The proposed system consists of two modules: template models of the trim steel parts and UI functions for their auto-positioning. Four kinds of template models are developed to adapt to various situations, and rules based on interaction formulas are implemented for checking and correcting the directions of the datum point, datum curve, and datum plane in order to eliminate errors. The system was developed using CATIA Knowledgeware, CAA (the CATIA SDK), and Visual C++ so that it functions as a plug-in module of CATIA V5, one of the major 3D CAD systems in the manufacturing industry. The developed system was tested on various panels of current automobiles, and the results show that it reduces the time cost by 74% compared with the traditional method.
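
One of the auto-positioning rules described above checks and corrects the direction of datum elements. A plain Python/NumPy sketch of such a rule (illustrative only, not the CAA/Knowledgeware code used in the paper) could flip a datum-plane normal that points against an assumed press direction:

```python
# Illustrative sketch (not CATIA/CAA code): correct the orientation of a datum
# plane by flipping its normal when it points against the press direction.
# The press direction (0, 0, 1) is an assumption for this example.
import numpy as np

def corrected_datum_normal(datum_normal, press_direction=(0.0, 0.0, 1.0)):
    """Return a unit normal, flipped if it opposes the press direction."""
    n = np.asarray(datum_normal, dtype=float)
    ref = np.asarray(press_direction, dtype=float)
    if np.dot(n, ref) < 0:          # wrong orientation -> correct it
        n = -n
    return n / np.linalg.norm(n)

if __name__ == "__main__":
    print(corrected_datum_normal((0.1, 0.0, -0.99)))   # flipped to point upward
```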

Accuracy of virtual models in the assessment of maxillary defects

  • Kamburoglu, Kivanc;Kursun, Sebnem;Kilic, Cenk;Ozen, Tuncer
    • Imaging Science in Dentistry
    • /
    • v.45 no.1
    • /
    • pp.23-29
    • /
    • 2015
  • Purpose: This study aimed to assess the reliability of measurements performed on three-dimensional (3D) virtual models of maxillary defects obtained using cone-beam computed tomography (CBCT) and 3D optical scanning. Materials and Methods: Mechanical cavities simulating maxillary defects were prepared on the hard palate of nine cadavers. Images were obtained using a CBCT unit at three different fields of view (FOVs) and voxel sizes: 1) $60{\times}60mm$ FOV, $0.125mm^3$ ($FOV_{60}$); 2) $80{\times}80mm$ FOV, $0.160mm^3$ ($FOV_{80}$); and 3) $100{\times}100mm$ FOV, $0.250mm^3$ ($FOV_{100}$). Superimposition of the images was performed using the VRMesh Design software. Automated volume measurements were conducted, and differences between surfaces were demonstrated. Silicone impressions obtained from the defects were also scanned with a 3D optical scanner. Virtual models obtained using VRMesh Design were compared with the scanned silicone impressions. Gold-standard volumes of the impression models were then compared with the CBCT and 3D scanner measurements. The general linear model was used, and the significance level was set at p=0.05. Results: A comparison of the results obtained by the observers and methods revealed p values smaller than 0.05, suggesting that the measurement variations were caused by the methods and the observers as well as by the different cadaver specimens used. Further, the 3D scanner measurements were closer to the gold standard than the CBCT measurements. Conclusion: In the assessment of artificially created maxillary defects, the 3D scanner measurements were more accurate than the CBCT measurements.

A Study on an Extraction of the Geometric Characteristics of the Pyongchang River basin by Using Geographic Information System (GIS를 활용한 유역의 하천 형태학적 특성 추출에 관한 연구)

  • Hahm, Chang-Hahk
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.4 no.1 s.6
    • /
    • pp.115-119
    • /
    • 1996
  • The main objective of this study is to extract the geometric characteristics of the Pyongchang River basin, headwaters of the South Han River. A GIS is capable of extracting various hydrological factors from a DEM (digital elevation model). One of the important tasks in hydrological analysis is the delineation of the watershed, which is an essential element among the various geometric characteristics of a watershed. In this study, the watershed itself and other geometric factors of the watershed are extracted from a DEM using GIS techniques, and the manual process of obtaining the geometric characteristics of the watershed is automated using the functions of the ARC/INFO software as the GIS package. Scanned data are used for this study and converted into DEM data, and the various forms of spatial data representation are handled in the main modules and the GRID module of ARC/INFO. The GRID module is applied to the stream network in order to define the watershed boundary, making it possible to obtain the watersheds; flow direction, stream networks, and other factors are also generated. The results show that GIS can aid watershed management, research, and surveillance, and that the geometric characteristics used as watershed parameters can be quantified using GIS techniques. Reasonable results are obtained compared with conventional graphic methods.
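
One of the DEM-processing steps that the abstract automates with the ARC/INFO GRID module is deriving a flow-direction grid. A self-contained sketch of the standard D8 method (illustrative only, with an invented toy DEM) follows:

```python
# Sketch of D8 flow direction from a DEM, the kind of grid operation the
# abstract performs with ARC/INFO GRID. Boundary cells are left as 0.
import numpy as np

# D8 codes in ESRI convention: E=1, SE=2, S=4, SW=8, W=16, NW=32, N=64, NE=128
OFFSETS = [(0, 1, 1), (1, 1, 2), (1, 0, 4), (1, -1, 8),
           (0, -1, 16), (-1, -1, 32), (-1, 0, 64), (-1, 1, 128)]

def d8_flow_direction(dem: np.ndarray) -> np.ndarray:
    rows, cols = dem.shape
    fdir = np.zeros((rows, cols), dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best_code, best_drop = 0, 0.0
            for dr, dc, code in OFFSETS:
                dist = 1.41421356 if dr and dc else 1.0   # diagonal distance
                drop = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if drop > best_drop:
                    best_code, best_drop = code, drop
            fdir[r, c] = best_code
    return fdir

if __name__ == "__main__":
    toy_dem = np.array([[9, 9, 9, 9],
                        [9, 7, 6, 9],
                        [9, 5, 3, 9],
                        [9, 9, 1, 9]], dtype=float)
    print(d8_flow_direction(toy_dem))
```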


Study on development of vessel shore report management system for IMO MSP 8

  • Rind, Sobia;Mo, Soo-Jong;Yu, Yung-Ho
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.40 no.5
    • /
    • pp.418-428
    • /
    • 2016
  • In this study, a Vessel Shore Report Management System (VSRMS) is developed for International Maritime Organization (IMO) Maritime Service Portfolio (MSP) Number 8, which covers vessel shore reporting. Several documents have to be completed before the arrival or departure of a vessel at a port, and each national port has its own reporting format and data requirements. The present vessel reporting system is inefficient and time-consuming and involves excessive paperwork, which results in duplication and errors. Moreover, the reporting documents defined by the current IMO standard omit some of the information that national ports request. To address this, the vessel reporting formats and data contents of various national ports are first collected and analyzed, and a database structure for managing vessel reporting data for ports worldwide is then devised. To make the transfer of data and the exchange of vessel report information more reliable, efficient, and paper-free, the VSRMS, a software application for simplifying and facilitating vessel report formalities, is developed. The application is implemented in C# (.NET Framework 4.5) using Microsoft Visual Studio; it provides a user interface and a backend MySQL server for database management, and SAP Crystal Reports 2013 is used for designing and generating vessel reports in their original formats. The VSRMS can facilitate vessel reporting and improve data accuracy by reducing input data, exchanging data efficiently, and lowering communication costs. Adopting the VSRMS would allow the vessel shore reporting process to be automated, enhancing work efficiency for shipping companies. Based on this information system and architecture, consensus among international organizations such as the IMO, the International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA), the Federation of National Associations of Ship Brokers and Agents (FONASBA), and the Baltic and International Maritime Council (BIMCO) is required so that vessel reporting can be standardized internationally.
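
As a rough illustration of the kind of database structure the abstract describes, the sketch below stores arrival reports in a relational table. It uses sqlite3 only as a stand-in for the system's MySQL backend, and the column set is an invented simplification of a port arrival report.

```python
# Minimal sketch of a vessel-report record store. sqlite3 stands in for the
# MySQL backend used in the paper; the schema and sample values are invented.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS arrival_report (
    report_id   INTEGER PRIMARY KEY AUTOINCREMENT,
    imo_number  TEXT NOT NULL,
    vessel_name TEXT NOT NULL,
    port_code   TEXT NOT NULL,      -- UN/LOCODE of the port of call
    eta_utc     TEXT NOT NULL,
    last_port   TEXT,
    crew_count  INTEGER
);
"""

def add_report(conn, report: dict) -> int:
    """Insert one arrival report and return its generated report_id."""
    cur = conn.execute(
        "INSERT INTO arrival_report "
        "(imo_number, vessel_name, port_code, eta_utc, last_port, crew_count) "
        "VALUES (:imo_number, :vessel_name, :port_code, :eta_utc, :last_port, :crew_count)",
        report,
    )
    return cur.lastrowid

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    rid = add_report(conn, {"imo_number": "9999999", "vessel_name": "EXAMPLE STAR",
                            "port_code": "KRPUS", "eta_utc": "2016-09-01T08:00Z",
                            "last_port": "JPYOK", "crew_count": 22})
    print(conn.execute("SELECT vessel_name, port_code FROM arrival_report "
                       "WHERE report_id = ?", (rid,)).fetchone())
```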