• Title/Summary/Keyword: Software Package


A Benchmark of Open Source Data Mining Package for Thermal Environment Modeling in Smart Farm(R, OpenCV, OpenNN and Orange) (스마트팜 열환경 모델링을 위한 Open source 기반 Data mining 기법 분석)

  • Lee, Jun-Yeob;Oh, Jong-wo;Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2017.04a / pp.168-168 / 2017
  • Despite the increasing number of environmental sensors, imaging systems, and feeding-management systems in ICT-converged smart farms, techniques for putting the data these devices produce to effective use remain insufficient. Pig houses in particular need data analysis and modeling technology that can monitor and predict animal welfare and growth changes in real time. This requires detecting physiological and behavioral changes early and monitoring, analyzing, and predicting welfare levels in real time, and one representative information-and-communications engineering approach to this is data mining. Among the various software tools available for data mining research, we compared and analyzed four that are provided as open source. For data analysis aimed at thermal environment modeling in a smart pig house, the comparison focused on the time needed to run the data analysis algorithms, visualization functions, and interoperability with other libraries. The four selected tools were 1) R (https://cran.r-project.org), 2) OpenCV (http://opencv.org), 3) OpenNN (http://www.opennn.net), and 4) Orange (http://orange.biolab.si). The comparison was performed on Linux Ubuntu 16.04.4 LTS (x64) with a 3.6 GHz CPU and 64 GB of memory. In terms of development languages, the tools respectively support 1) R scripts, 2) C/C++, Python, and Java, 3) C++, and 4) C/C++, Python, and Cython, so C/C++ and Python were relatively advantageous. Where the analysis algorithms are provided as source-level libraries, cross-platform development is possible, and results developed on one operating system can be used on others without a separate porting step. Among the built-in libraries, R provides the largest number of data mining algorithms, which we attribute to R's open ecosystem, in which newly added libraries are shared online through the cloud. OpenCV's strength is image processing, and OpenNN's is that it releases its neural-network training libraries at the source-code level. Orange, unlike the other packages, which focus on providing collections of libraries, integrates visualization and network-composition functions into a single user interface. As future work, we will study supplementary data processing techniques to cope with the time complexity required by thermal environment modeling, so that smart-farm thermal environment modeling can be realized in real time.
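The abstract does not publish its benchmark code, but the runtime criterion it describes can be illustrated with a small timing harness. The sketch below (Python with synthetic data; the data sizes and the least-squares fit standing in for a mining algorithm are assumptions, not the paper's setup) measures best-of-N wall time for one model fit, the kind of figure on which the four packages were compared.

```python
# Hypothetical timing harness in the spirit of the benchmark above; the
# least-squares fit is a stand-in for a data mining algorithm, and the
# synthetic "sensor" data sizes are illustrative assumptions.
import time
import numpy as np

def time_fit(n_samples=10_000, n_features=8, repeats=5):
    """Best-of-N wall time for a least-squares fit on synthetic data."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(n_samples, n_features))   # e.g. sensor readings
    y = X @ rng.normal(size=n_features) + rng.normal(scale=0.1, size=n_samples)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.linalg.lstsq(X, y, rcond=None)          # stand-in mining algorithm
        best = min(best, time.perf_counter() - t0)
    return best

print(f"best fit time: {time_fit():.4f} s")
```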


Performance Evaluation of Output Queueing ATM Switch with Finite Buffer Using Stochastic Activity Networks (SAN을 이용한 제한된 버퍼 크기를 갖는 출력큐잉 ATM 스위치 성능평가)

  • Jang, Kyung-Soo;Shin, Ho-Jin;Shin, Dong-Ryeol
    • The Transactions of the Korea Information Processing Society / v.7 no.8 / pp.2484-2496 / 2000
  • High-speed switches have been developed to interconnect large numbers of nodes, and it is important to analyze switch performance under various conditions to verify that requirements are satisfied. Queueing analysis, in general, suffers from the intrinsic problems of large state-space dimension and complex computation. The Petri net is a graphical and mathematical model suitable for various applications, in particular manufacturing systems; it can deal with parallelism, concurrency, deadlock avoidance, and asynchronism, and it has recently been applied to computer-network performance evaluation and protocol verification. This paper presents a framework for modeling and analyzing an ATM switch using stochastic activity networks (SANs). We provide a SAN model of the ATM switch that is easy to extend, together with an approximate analysis method applicable to ATM switch models, which significantly reduces the complexity of the model solution. The cell arrival process at the output-buffered ATM switch with finite buffer is modeled as a Markov Modulated Poisson Process (MMPP), which can accurately represent real traffic and capture the characteristics of bursty traffic. We analyze the performance of the switch in terms of cell-loss ratio (CLR), mean queue length, and mean delay time, and we show that the SAN model is very useful for ATM switch modeling because its gates can implement scheduling algorithms.
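For a feel of the quantities involved, here is a minimal discrete-time simulation sketch of the traffic model the paper uses: a two-state MMPP feeding a finite output buffer that serves one cell per slot. It is not the paper's SAN model, and all rates and switching probabilities below are illustrative assumptions.

```python
# Minimal sketch: two-state Markov-Modulated Poisson arrivals into one
# output queue of capacity K, one cell served per slot; estimates the
# cell-loss ratio (CLR). Rates/probabilities are illustrative only.
import numpy as np

def simulate_clr(rates=(0.3, 1.2), p_switch=(0.02, 0.05),
                 K=20, slots=500_000, seed=7):
    rng = np.random.default_rng(seed)
    state, queue = 0, 0
    arrived = lost = 0
    for _ in range(slots):
        n = rng.poisson(rates[state])       # bursty arrivals this slot
        arrived += n
        lost += max(0, queue + n - K)       # cells that overflow the buffer
        queue = min(queue + n, K)
        queue = max(queue - 1, 0)           # serve at most one cell per slot
        if rng.random() < p_switch[state]:  # modulating Markov chain
            state = 1 - state
    return lost / arrived

print(f"estimated CLR: {simulate_clr():.4e}")
```

With these example parameters the mean load is below one cell per slot, so losses occur only during bursts in the high-rate state, which is the bursty-traffic effect MMPP is meant to capture.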


The Study on Stability Channel Technology by Using Groyne in Alluvial Stream - Riverside Protection Techniques by Using Groyne - (충적하천에서 수제에 의한 안정하도 확보기술에 관한 연구 - 수제에 의한 하안보호 기법 -)

  • Park, Hyo-Gil;Jung, Sung-Soon;Kim, Chul-Moon;Ahn, Won-Sik;Jee, Hong-Kee
    • Journal of Wetlands Research / v.13 no.1 / pp.79-94 / 2011
  • As demonstrated in studies of non-submerged groynes, the flow field is predominantly two-dimensional, with mainly horizontal eddies. The eddies shed from the tips of the groynes and migrate in the flow direction; they have horizontal dimensions on the order of tens of meters and time scales on the order of minutes. In standard flow simulations these motions are usually not resolved, owing to too coarse a grid, too large time steps and, more importantly, the use of inadequate turbulence modelling: when using, for example, a k-ε model, substantial modifications must be introduced. The simulations in this study were therefore carried out with the DELFT3D-MOR programme, part of the DELFT3D software package of WL/Delft Hydraulics, applying a two-dimensional depth-averaged model with horizontal large eddy simulation (HLES). The bed morphology computed using HLES, as well as the associated time scale, is similar to what has been observed in a field case. When using a mean-flow model without HLES, the bed morphology is less realistic and the morphological time scale is much larger; this slow development is the result of neglecting (or averaging) the strong velocity fluctuations associated with the time-varying eddy formation.

Development of Pre-Service and In-Service Information Management System (iSIMS) (원전 가동전/중 검사정보관리 시스템 개발)

  • Yoo, H.J.;Choi, S.N.;Kim, H.N.;Kim, Y.H.;Yang, S.H.
    • Journal of the Korean Society for Nondestructive Testing / v.24 no.4 / pp.390-395 / 2004
  • The iSIMS is a web-based integrated information system supporting the Pre-Service and In-Service Inspection (PSI/ISI) processes for the nuclear power plants of KHNP (Korea Hydro & Nuclear Power Co., Ltd.). The system covers the full spectrum of the inspection processes, from the planning stage to the final examination report, in accordance with applicable codes, standards, and regulatory requirements. Its major functions include inspection planning, examination, reporting, project control and status reporting, and resource management, as well as object search and navigation. The system also provides two- and three-dimensional visualization interfaces, working with database applications, to identify the location and geometry of the components and weld areas subject to examination. The iSIMS is implemented with commercial software packages such as a database management system and 2-D and 3-D visualization tools, which provide an open, up-to-date, and verified foundation. This paper describes the key functions of the iSIMS and the technologies used to implement them.

Development of Flood Runoff Characteristics Nomograph for Small Catchment Using R-Programming (R-프로그래밍을 이용한 소유역 홍수유출특성 노모그래프 개발)

  • Jang, Cheol Hee;Kim, Hyeon Jun
    • Proceedings of the Korea Water Resources Association Conference / 2015.05a / pp.590-590 / 2015
  • This study developed a nomograph that expresses the runoff sensitivity of a catchment by analyzing flood runoff characteristics as functions of rainfall intensity, duration, and soil saturation, with the aim of predicting floods caused by localized heavy rainfall and clarifying the hydrological susceptibility of the runoff behavior of small catchments. To analyze the runoff behavior of individual flood events, 17 years (1996-2012) of 10-minute rainfall and discharge records were collected from the Seolmacheon catchment, a representative experimental catchment of the Korea Institute of Civil Engineering and Building Technology, and flood runoff analyses were performed. Fifty flood events with daily cumulative rainfall of 100 mm or more were simulated with CAT (Catchment hydrological cycle Assessment Tool), a catchment water-cycle model, and for each event the flood runoff characteristics were analyzed in detail with respect to lag time, rainfall intensity, duration, and soil saturation. Among these, lag time, a time variable representing catchment response, is a key element in hydrological modeling and flood forecasting; it matters especially in small catchments, where discharge responds quickly to rainfall. The behavior of lag time with respect to changes in rainfall intensity, duration, and soil saturation was therefore analyzed using the R programming language and 3D Surfer, and a three-dimensional nomograph expressing the flood runoff characteristics of the small catchment was developed. R, the language used for the analysis, is a programming language and software environment for statistical computing and graphics whose packages support data manipulation, numerical computation, and visualization. In this study, R was used to build a flood runoff analysis system that aggregates the 10-minute rainfall and discharge records into hourly and daily series and separates and extracts the flood events from the 17-year record (a hedged sketch of this step appears below); visualizing the extracted events together with observed discharge and observed soil moisture confirmed the sensitivity of the small catchment's runoff behavior to changes in rainfall and soil moisture. The analysis showed that lag time is sensitive to rainfall duration and soil saturation, and that soil saturation strongly influences changes in peak discharge. The 3D flood runoff characteristics nomograph is expected to vary with catchment scale and geomorphological characteristics, so it should be applied to other gauged catchments to quantify catchment-specific flood response; by characterizing a catchment's flood runoff response to changes in rainfall intensity, duration, lag time, and saturation, the approach can support practical flood forecasting in ungauged catchments.
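The event-extraction step mentioned above was carried out in R; the sketch below re-expresses it in Python/pandas under assumed column names ('rain_mm', 'q_cms') and synthetic data, resampling 10-minute records to hourly series and computing the lag time between peak rainfall intensity and peak discharge.

```python
# Hedged sketch of the data-preparation step (the study used R and CAT;
# column names and the synthetic storm here are assumptions).
import pandas as pd

def event_lag_time(df):
    """df: 10-minute records with a DatetimeIndex and columns
    'rain_mm' (rainfall) and 'q_cms' (discharge)."""
    hourly = df.resample("1h").agg({"rain_mm": "sum", "q_cms": "mean"})
    t_rain_peak = hourly["rain_mm"].idxmax()   # hour of peak rainfall
    t_q_peak = hourly["q_cms"].idxmax()        # hour of peak discharge
    return t_q_peak - t_rain_peak              # lag time as a Timedelta

# Example with synthetic data for one storm event
idx = pd.date_range("2012-07-05 00:00", periods=288, freq="10min")
rain = pd.Series(0.0, index=idx); rain.iloc[30:60] = 2.0   # 5-hour storm
q = pd.Series(1.0, index=idx);    q.iloc[60:90] = 15.0     # delayed peak
print(event_lag_time(pd.DataFrame({"rain_mm": rain, "q_cms": q})))
```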


Usefulness of applying Macro for Brain SPECT Processing (Brain SPECT Processing에 있어서 Macro Program 사용의 유용성)

  • Kim, Gye-Hwan;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyeon-Joo
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.35-39 / 2009
  • Purpose: Diagnostic and functional imaging software in nuclear medicine has developed significantly, but some tasks remain limited by the large amount of time they take. This article introduces the basic concept of a macro and its application to brain SPECT processing; we applied macro software to the SPM processing and PACS verification steps of brain SPECT processing. Materials and Methods: From brain SPECT work we chose SPM processing and two PACS tasks that occupy a large portion of the workload. SPM is a software package for analyzing neuroimaging data, aimed at quantitative analysis between groups; its results are produced by a complicated sequence of steps such as realignment, normalization, smoothing, and mapping, which we simplified with a macro program. After sending images to PACS, a simple macro program directly supplies the mouse coordinates for the color mapping, gray-scale adjustment, copy, cut, and match steps. We then compared the time needed to produce results by hand with the time needed using the macro, and applied these times to the number of studies performed in 2007. Results: In 2007 there were 115 SPM studies and 834 PACS studies, based on the Diamox studies. SPM work took 10 to 15 minutes by hand, depending on expertise, while 5.5 minutes were uniformly needed using the macro. Applied to the yearly caseload, manual SPM work required 1150 to 1725 minutes (19 to 29 hours), versus 632 minutes (about 11 hours) with the macro. PACS work took 2 to 3 minutes by hand versus 45 seconds with the macro; over the year this amounts to 1668 to 2502 minutes (28 to 42 hours) by hand versus 625 minutes (about 10 hours) with the macro, so 1043 to 1877 minutes (17 to 31 hours) were saved. Overall, the macro saved 45 to 63% of the time for SPM work, 62 to 75% for PACS work, and 55 to 70% for total brain SPECT processing in 2007. Conclusions: Given the caseload, applying macros to brain SPECT processing saved significant time, and even a task that takes little time per study can yield large savings as the number of studies grows. The time saved lets radiological technologists concentrate more on patients and reduces the probability of mistakes; applying macros to brain SPECT processing helps both technologists and patients and contributes to improving the quality of hospital service.
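The reported figures can be checked with a few lines of arithmetic. The sketch below reproduces the time-saving calculation from the numbers in the abstract (115 SPM and 834 PACS studies in 2007); it is a back-of-the-envelope check, not the authors' macro code.

```python
# Back-of-the-envelope check of the time savings reported in the abstract.
def yearly_minutes(n_studies, per_study_min):
    return n_studies * per_study_min

spm_hand  = [yearly_minutes(115, m) for m in (10, 15)]  # 1150-1725 min
spm_macro = yearly_minutes(115, 5.5)                    # ~632 min (~11 h)
pacs_hand  = [yearly_minutes(834, m) for m in (2, 3)]   # 1668-2502 min
pacs_macro = yearly_minutes(834, 0.75)                  # 45 s/study, ~625 min

for name, hand, macro in (("SPM", spm_hand, spm_macro),
                          ("PACS", pacs_hand, pacs_macro)):
    saved = [100 * (h - macro) / h for h in hand]
    print(f"{name}: {macro:.0f} min with macro, "
          f"{saved[0]:.0f}-{saved[1]:.0f}% time saved")
```

Running this reproduces the abstract's 45-63% (SPM) and 62-75% (PACS) savings, which also confirms that the macro SPM total is 632 minutes, not seconds.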


Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions:PartB / v.11B no.6 / pp.749-758 / 2004
  • In this paper, we propose a new method that uses only a raw corpus, without additional human effort, to disambiguate target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). Both can represent complex semantic structures of given contexts, such as text passages. We construct linguistic semantic knowledge with these techniques and use it for target word selection in English-Korean machine translation, exploiting grammatical relationships stored in a dictionary. A k-nearest-neighbor learning algorithm resolves the data sparseness problem in target word selection, with the distance between instances estimated from these models. In experiments, we use the TREC AP news data to construct the latent semantic space and a Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, we show, via correlation analysis, how the accuracy relates to two important factors: the dimensionality of the latent space and the k value of the k-NN learning.
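As a concrete illustration of the LSA-plus-k-NN step, the sketch below builds a tiny latent space and votes among candidate Korean translations of an ambiguous English word. The toy corpus, the candidate targets, and the use of scikit-learn's TruncatedSVD are all assumptions for illustration; the paper's experiments used TREC AP news data and also PLSA, which is not shown here.

```python
# Toy LSA + k-NN target word selection sketch (illustrative data only).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Training contexts for the ambiguous English word "bank" and the Korean
# target word used in each context (hypothetical examples).
contexts = [
    "the bank approved the loan and set the interest rate",
    "the bank raised deposit interest for its customers",
    "we walked along the river bank after the flood",
    "the river bank eroded during the heavy rain",
]
targets = ["은행", "은행", "둑", "둑"]

vec = CountVectorizer()
X = vec.fit_transform(contexts)
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vecs = lsa.fit_transform(X)            # training contexts in latent space

def select_target(context, k=3):
    """k-NN vote in LSA space over the candidate Korean translations."""
    q = lsa.transform(vec.transform([context])).ravel()
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1)
                           * np.linalg.norm(q) + 1e-12)
    top = np.argsort(sims)[::-1][:k]       # k nearest training contexts
    votes = [targets[i] for i in top]
    return max(set(votes), key=votes.count)

print(select_target("interest on a loan from the bank"))   # expected: 은행
```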

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.27 no.3 / pp.127-143 / 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies for High Performance Computing (HPC) such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access. These features facilitate ocean modeling experimentation on commercial cloud computing systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting appropriate systems, as it can help to minimize execution time and the amount of resources used. The effect of cache memory is large in the processing structure of an ocean numerical model, which reads and writes data in multidimensional array structures, and network speed is important because of the communication patterns through which large amounts of data move. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking software package, and STREAM, a memory benchmark, were evaluated and compared on commercial cloud systems to provide information for migrating other ocean models to cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory, and that memory latency is also important. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small ones. Our analysis results will be a helpful reference for constructing the best cloud computing system to minimize time and cost for numerical ocean modeling.
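The STREAM benchmark mentioned above measures sustainable memory bandwidth with simple vector kernels. The sketch below is a loose NumPy rendition of the 'triad' kernel (a = b + s*c), useful for a quick sanity check on a cloud instance; it is not the reference STREAM package, and because NumPy evaluates in two steps it moves five arrays' worth of data per element rather than STREAM's three.

```python
# Loose STREAM-triad-style bandwidth probe (not the reference STREAM code).
import time
import numpy as np

def triad_bandwidth(n=20_000_000, repeats=5, s=3.0):
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty_like(b)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, s, out=a)   # a = s * c   (read c, write a)
        a += b                     # a = a + b   (read a and b, write a)
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 5 * n * 8        # five float64 array accesses per element
    return bytes_moved / best / 1e9

print(f"triad bandwidth ~ {triad_bandwidth():.1f} GB/s")
```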

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration / v.2 no.1 / pp.26-32 / 1999
  • Among the various seismic data processing steps, velocity analysis is the most time-consuming and labor-intensive. Production seismic data processing requires a good velocity analysis tool as well as a high-performance computer; the tool must give fast and accurate velocity analysis. There are two approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point, generally consisting of a semblance contour, a super gather, and a stack panel; the interpreter chooses the velocity function by analyzing the plot. The technique is highly dependent on the interpreter's skill and requires substantial human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed; the analysis must also be carried out with a carefully chosen analysis point location and an accurate computation of the spectrum, and the resulting velocity function must be verified by mute and stack, with the sequence usually repeated several times. An iterative, interactive, and unified velocity analysis tool is therefore highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes yield the final stack via a few mouse clicks, enabling iterative and interactive processing. A simple trace indexing scheme is introduced, together with a program to make the index of the Geobit seismic disk file; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and the refracted wave, but with two improvements: no interpolation error and very fast computation. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. It references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment; it runs under X-Window/Motif, with menus designed according to the Motif style guide. A brief usage of the program is discussed. The program enables fast and accurate seismic velocity analysis, which is necessary for computing AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for producing high-quality seismic sections.
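The two numerical kernels at the heart of any such tool, NMO correction and the semblance velocity spectrum, can be sketched compactly. The version below is the textbook formulation in NumPy with nearest-sample interpolation, not the xva/Geobit implementation; array shapes and parameter names are assumptions.

```python
# Textbook NMO correction and semblance spectrum (not the xva code).
import numpy as np

def nmo_correct(gather, offsets, dt, velocity):
    """gather: (n_samples, n_traces); returns the NMO-corrected gather."""
    n_samples, n_traces = gather.shape
    t0 = np.arange(n_samples) * dt                  # zero-offset times
    out = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        t = np.sqrt(t0**2 + (x / velocity) ** 2)    # hyperbolic moveout
        idx = np.rint(t / dt).astype(int)           # nearest-sample lookup
        ok = idx < n_samples
        out[ok, j] = gather[idx[ok], j]
    return out

def semblance(gather, offsets, dt, velocities, win=5):
    """Semblance of the stacked energy for each trial velocity."""
    spec = np.zeros((gather.shape[0], len(velocities)))
    kernel = np.ones(win) / win                     # short time smoother
    for k, v in enumerate(velocities):
        g = nmo_correct(gather, offsets, dt, v)
        num = g.sum(axis=1) ** 2                    # energy of the stack
        den = (g**2).sum(axis=1) * gather.shape[1] + 1e-12
        spec[:, k] = np.convolve(num / den, kernel, mode="same")
    return spec   # pick velocity nodes at semblance maxima
```

Picking velocity nodes at the semblance maxima and re-stacking after each pick is exactly the iterative loop the program automates.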
