• Title/Summary/Keyword: IT Architecture (IT 아키텍처)

Estimation of Rice Heading Date of Paddy Rice from Slanted and Top-view Images Using Deep Learning Classification Model (딥 러닝 분류 모델을 이용한 직하방과 경사각 영상 기반의 벼 출수기 판별)

  • Hyeok-jin Bak;Wan-Gyu Sang;Sungyul Chang;Dongwon Kwon;Woo-jin Im;Ji-hyeon Lee;Nam-jin Chung;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.337-345
    • /
    • 2023
  • Estimating the rice heading date is one of the most important agricultural tasks related to productivity. However, due to abnormal climates around the world, estimating the rice heading date is becoming increasingly challenging, so a more objective classification method is needed than the existing ones. In this study, we aimed to classify the rice heading stage from various images using a CNN classification model. We collected top-view images taken from a drone and a phenotyping tower, as well as slanted-view images captured with an RGB camera. The collected images were preprocessed to serve as input data for the CNN model. The CNN architectures employed were ResNet50, InceptionV3, and VGG19, which are commonly used in image classification. All models achieved an accuracy of 0.98 or higher regardless of the architecture and the type of image. We also used Grad-CAM to visually verify which image features the model attended to when classifying. We then verified that our model accurately estimates the rice heading date in paddy fields: across the four paddy fields, the estimated heading dates differed from the observed ones by approximately one day on average. These results suggest that the rice heading date can be estimated automatically and quantitatively from various paddy field monitoring images.
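
As a rough illustration of the classification setup described in this abstract, here is a minimal transfer-learning sketch in Keras, assuming a binary heading/pre-heading labeling; the directory layout, image size, and hyperparameters are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch: transfer-learning classifier for heading / pre-heading images.
# Directory layout, image size, and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

IMG_SIZE = (224, 224)

# Hypothetical directory of labeled field images: data/{heading,pre_heading}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

base = ResNet50(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the pretrained backbone first

model = models.Sequential([
    layers.Rescaling(1.0 / 255),             # simple preprocessing stand-in
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),   # heading vs. pre-heading
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```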

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous-City (U-City) is a smart, intelligent city that satisfies people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT), and it includes many networked video cameras. These cameras, together with sensors, supply one of the main input data streams for many U-City services, and they constantly generate a huge amount of video information: real big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. In addition, the accumulated video data often must be analyzed to detect an event or find a person, which requires a lot of computational power and usually takes a long time. Existing research tries to reduce the processing time for big video data, and cloud computing is a good solution to this problem. Among the many applicable cloud computing methodologies, MapReduce is an interesting and attractive one; it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, leading to exponential growth of the data produced by the networked cameras, so we face real big data when dealing with the images produced by high-quality video cameras. Cloud-based video surveillance systems are now spreading widely in U-Cities. Because video data are unstructured, however, few good research results exist on analyzing them with MapReduce. This paper presents an analysis system for video surveillance: a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the "streaming IN" component. The "video monitor" consists of a "video translator" and a "protocol manager", and the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives video data from the networked cameras, delivers them to the "storage client", and manages network bottlenecks to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component, stores them in the storage, and helps other components access the storage. The "video monitor" component streams the video data smoothly and manages the protocols: the "video translator" sub-component lets users manage the resolution, codec, and frame rate of the video, while the "protocol" sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage; Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We suggest our own methodology for analyzing video images using MapReduce: the workflow of the video analysis is presented and explained in detail, as sketched below. We experimentally evaluated the performance and found that our proposed system worked well; the evaluation results are presented in this paper with analysis. On our cluster system, we used compressed 1920×1080 (FHD) video data with the H.264 codec and HDFS as the video storage, and measured the processing time according to the number of frames per mapper. By tracing the optimal input split size and the processing time as a function of the number of nodes, we found that the system performance scales linearly.
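
To make the MapReduce workflow concrete, here is a minimal Hadoop Streaming sketch in Python of a per-frame analysis job; detect_event() and the one-frame-reference-per-line input format are hypothetical placeholders, not the paper's implementation.

```python
# Minimal Hadoop Streaming sketch of a frame-level analysis job.
# detect_event() and the input format (one frame reference per line) are
# hypothetical placeholders, not the paper's actual implementation.
import sys

def detect_event(frame_ref: str) -> bool:
    """Stand-in for a real detector run on the decoded frame."""
    return frame_ref.endswith("event")  # placeholder logic

def mapper(stream=sys.stdin):
    # Each input line names one stored frame, e.g. "cam01/000123".
    for line in stream:
        frame_ref = line.strip()
        if frame_ref and detect_event(frame_ref):
            camera = frame_ref.split("/")[0]
            print(f"{camera}\t1")  # key<TAB>value for the reduce step

def reducer(stream=sys.stdin):
    # Sums event counts per camera; Hadoop sorts map output by key first.
    counts = {}
    for line in stream:
        camera, value = line.rstrip("\n").split("\t")
        counts[camera] = counts.get(camera, 0) + int(value)
    for camera, total in sorted(counts.items()):
        print(f"{camera}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```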

Cache Performance Analysis of Multiprocessor Systems for OLTP Applications based on a Memory-Resident DBMS (메모리 상주 DBMS 기반의 OLTP 응용을 위한 다중프로세서 시스템 캐쉬 성능 분석)

  • Chung, Yong-Wha;Hahn, Woo-Jong;Yoon, Suk-Han;Park, Jin-Won;Lee, Kang-Woo;Kim, Yang-Woo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.6 no.4
    • /
    • pp.383-392
    • /
    • 2000
  • Currently, multiprocessors are evaluated almost exclusively with scientific applications. Commercial applications are rarely explored because it is difficult to obtain the source code of commercial DBMSs. Even when the source code is available, as for POSTGRES, understanding it well enough to perform detailed, meaningful performance evaluations is a daunting task for computer architects. To evaluate multiprocessors with commercial applications, we have developed our own DBMS, called EZDB. EZDB is a parallelized DBMS, loosely inspired by POSTGRES, that runs on top of a software architecture simulator and is capable of executing parallel programs written in SQL. Contrary to POSTGRES, EZDB is not intended as a prototype for a production-quality DBMS; its purpose is to easily run commercial applications on multiprocessor architectures and evaluate their performance. To illustrate the usefulness of EZDB, we show the cache performance data collected for the TPC-B benchmark on a shared-memory multiprocessor. The simulation results show that the data structures exhibit unique sharing characteristics and that their locality properties and working sets are very different from those in scientific applications.

Implementing an Integrated System for R&D Results Management (연구성과물 통합 관리 시스템 구현)

  • Shin, Sung-Ho;Um, Jung-Ho;Seo, Dong-Min;Lee, Seung-Woo;Choi, Sung-Pil;Jung, Han-Min
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.8
    • /
    • pp.411-419
    • /
    • 2012
  • When R&D results from R&D projects are well managed and archived, research institutes can transfer the valuable technologies behind them to corporations for a fee. However, it is still difficult to maintain and reuse R&D results because they are managed separately by individuals or departments and are not integrated with one another. The government should therefore manage R&D results comprehensively by collecting their metadata and distributing information analyzed from that metadata, and each research institute should likewise manage its R&D results with reuse in mind. For this purpose, in this paper we present a process for managing R&D results: inserting the metadata of R&D results into the system, uploading the result files to the system's database, and querying and using the metadata. Based on this process, we design a system architecture for managing R&D results. A key consideration in this design is the global schema that integrates R&D results into one database, as sketched below. The system shows detailed information on R&D results and provides them conveniently to users. We expect this efficiently designed process and global schema to reduce the cost of reusing R&D results and to improve their quality.
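
As a rough illustration of the global schema idea, here is a minimal sketch using SQLite; the table and column names are illustrative assumptions, not the paper's schema.

```python
# Minimal sketch of a global schema unifying heterogeneous R&D result metadata.
# Table and column names are illustrative assumptions, not the paper's schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project (
    project_id  INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    institute   TEXT
);
CREATE TABLE rnd_result (
    result_id   INTEGER PRIMARY KEY,
    project_id  INTEGER REFERENCES project(project_id),
    result_type TEXT CHECK (result_type IN ('paper','patent','report','software')),
    title       TEXT NOT NULL,
    created_at  TEXT,
    file_path   TEXT            -- uploaded result file in the system's storage
);
""")

# Insert metadata, then query it across result types via the one global schema.
conn.execute("INSERT INTO project VALUES (1, 'Sample project', 'Sample institute')")
conn.execute("INSERT INTO rnd_result VALUES "
             "(1, 1, 'paper', 'Sample paper', '2012-08-01', '/files/1.pdf')")
for row in conn.execute("SELECT result_type, title FROM rnd_result WHERE project_id = 1"):
    print(row)
```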

LASPI: Hardware friendly LArge-scale stereo matching using Support Point Interpolation (LASPI: 지원점 보간법을 이용한 H/W 구현에 용이한 스테레오 매칭 방법)

  • Park, Sanghyun;Ghimire, Deepak;Kim, Jung-guk;Han, Youngki
    • Journal of KIISE
    • /
    • v.44 no.9
    • /
    • pp.932-945
    • /
    • 2017
  • In this paper, a new hardware and software architecture for a stereo vision processing system covering rectification, disparity estimation, and visualization was developed. The developed method, named LArge-scale Stereo matching using Support Point Interpolation (LASPI), excels at real-time computation of dense disparity maps in image regions that contain a high density of support points. Even in real-time processing of high-definition (HD) images, LASPI does not degrade the quality of the disparity maps compared to existing stereo matching methods such as Efficient LArge-scale Stereo matching (ELAS). LASPI was designed for a high frame rate, accurate distance resolution, and low resource usage even in resource-limited environments. These characteristics allow LASPI to be deployed in safety-critical applications such as obstacle recognition and distance detection systems for autonomous vehicles. A Field Programmable Gate Array (FPGA) implementation of the LASPI algorithm supports parallel processing and a 4-stage pipeline. Various experiments verified that the developed FPGA system (Xilinx Virtex-7 FPGA, 148.5 MHz clock) can process 30 HD (1280×720 pixels) frames per second in real time while generating disparity maps that are applicable to real vehicles.
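
To illustrate the general idea behind support-point interpolation (not the paper's FPGA pipeline), here is a minimal Python sketch that interpolates a dense disparity prior from a handful of sparse support points, using scipy for brevity; the support-point positions and values are made up.

```python
# Minimal sketch of the general idea behind support-point interpolation:
# a dense disparity prior is interpolated from sparse, confidently matched
# support points. Uses scipy for brevity; not the paper's FPGA pipeline.
import numpy as np
from scipy.interpolate import griddata

H, W = 720, 1280

# Hypothetical support points: (row, col) positions with known disparities.
points = np.array([[100, 200], [100, 1000], [600, 200], [600, 1000], [360, 640]])
disps = np.array([42.0, 40.0, 55.0, 53.0, 47.0])

# Interpolate a dense disparity prior over the whole image grid.
rows, cols = np.mgrid[0:H, 0:W]
prior = griddata(points, disps, (rows, cols), method="linear")

# Outside the support points' convex hull, fall back to nearest neighbour.
nearest = griddata(points, disps, (rows, cols), method="nearest")
prior = np.where(np.isnan(prior), nearest, prior)
print(prior.shape, prior[360, 640])  # (720, 1280) 47.0
```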

Domain Design Method to Support Effective Reuse in Component-Based Software Development (컴포넌트 기반 소프트웨어 개발의 효율적인 재사용성을 지원하기 위한 도메인 설계 방법)

  • 문미경;박준석;염근혁
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.5_6
    • /
    • pp.398-413
    • /
    • 2003
  • Component-Based Software Development (CBSD), built on components and reusability, can reduce development time and cost and achieve high productivity. To support component reusability systematically, domain analysis and design are needed in parallel with the CBSD process. Current domain analysis and design methods lack an objective process for identifying commonality and variability in a domain; also needed is a way to abstract domain components from the information reflected in the domain model and to express them in a domain architecture. In this paper, we suggest a method to define, analyze, and design a domain systematically so as to effectively enhance reusability in CBSD. We abstract components that are reusable in the domain, in other words those that have commonality, starting at the requirement analysis level; we sustain and refine them and reflect them in the products of each level. Through this process we produce domain components that share commonality, and on this basis we design the domain architecture, as sketched below. We thus investigate a new systematic approach to domain analysis and design from the viewpoint of software reusability, with the aim of producing reusable software.
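
As a rough illustration of how commonality and variability can be separated in a domain component, here is a minimal Strategy-style Python sketch; the class names and the payment domain are illustrative, not the paper's notation or example.

```python
# Minimal sketch of commonality/variability in a domain component: the common
# part is fixed in a base class, and variability is isolated behind a variation
# point (Strategy-style). Names are illustrative, not the paper's notation.
from abc import ABC, abstractmethod

class PaymentPolicy(ABC):
    """Variation point: differs per product in the domain."""
    @abstractmethod
    def charge(self, amount: float) -> str: ...

class CardPayment(PaymentPolicy):
    def charge(self, amount: float) -> str:
        return f"charged {amount} to card"

class BankTransfer(PaymentPolicy):
    def charge(self, amount: float) -> str:
        return f"transferred {amount} from bank account"

class OrderComponent:
    """Commonality: the ordering workflow shared by all products."""
    def __init__(self, policy: PaymentPolicy):
        self.policy = policy  # the variable part is plugged in here

    def place_order(self, amount: float) -> str:
        # ...common validation and bookkeeping would go here...
        return self.policy.charge(amount)

print(OrderComponent(CardPayment()).place_order(99.0))
```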

An Extension of the DBMax for Data Warehouse Performance Administration (데이터 웨어하우스 성능 관리를 위한 DBMax의 확장)

  • Kim, Eun-Ju;Young, Hwan-Seung;Lee, Sang-Won
    • The KIPS Transactions:PartD
    • /
    • v.10D no.3
    • /
    • pp.407-416
    • /
    • 2003
  • As the usage of database systems increases dramatically and massive amounts of data pour into them, performance administration techniques for using database systems effectively are becoming more important. The need is greatest in data warehouses, mainly because of their large data volumes and complex queries. Since the objectives and characteristics of data warehouses differ from those of operational systems, adequate techniques for performance monitoring and tuning are needed. In this paper, we extend the functionality of DBMax, a performance administration tool for Oracle database systems, so that it can be applied to data warehouse systems. First, we analyze requirements based on the summary management and ETL functions that Oracle 9i supports for data warehouse performance improvement. We then design an architecture for the extended DBMax functionality and implement it. Specifically, we support SQL tuning by providing details of the schema objects and statistics involved in summary management and ETL processes. We also provide a new function that recommends useful materialized views based on the workload extracted from DBMax log files and analyzes the usage of existing materialized views (a simplified sketch of this idea follows).
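
As a simplified illustration of the materialized-view advisor idea, here is a minimal Python sketch that scans a workload for repeated aggregation patterns and recommends frequent ones as view candidates; the regex, sample queries, and threshold are illustrative assumptions, not DBMax's algorithm.

```python
# Minimal sketch of a materialized-view advisor: scan a workload of queries,
# count repeated aggregation patterns, and recommend the most frequent ones
# as view candidates. The regex and threshold are illustrative assumptions.
import re
from collections import Counter

workload = [
    "SELECT region, SUM(sales) FROM fact_sales GROUP BY region",
    "SELECT region, SUM(sales) FROM fact_sales GROUP BY region",
    "SELECT product, AVG(price) FROM fact_sales GROUP BY product",
]

pattern = re.compile(r"FROM\s+(\w+)\s+GROUP\s+BY\s+([\w,\s]+)", re.IGNORECASE)

counts = Counter()
for sql in workload:
    m = pattern.search(sql)
    if m:
        table, group_cols = m.group(1), m.group(2).strip()
        counts[(table, group_cols)] += 1

# Recommend patterns that occur often enough to amortize view maintenance.
for (table, group_cols), n in counts.most_common():
    if n >= 2:
        print(f"candidate view: {table} grouped by {group_cols} (seen {n} times)")
```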

Multi-Dimensional Keyword Search and Analysis of Hotel Review Data Using Multi-Dimensional Text Cubes (다차원 텍스트 큐브를 이용한 호텔 리뷰 데이터의 다차원 키워드 검색 및 분석)

  • Kim, Namsoo;Lee, Suan;Jo, Sunhwa;Kim, Jinho
    • Journal of Information Technology and Architecture
    • /
    • v.11 no.1
    • /
    • pp.63-73
    • /
    • 2014
  • With the advance of the WWW, unstructured data, including text, are attracting more and more user interest. Because such data are created by WWW users and represent their subjective opinions, we can extract very useful information, such as users' personal tastes or perspectives, if we analyze them appropriately. In this paper, we provide various analyses of unstructured text documents efficiently by taking advantage of OLAP (On-Line Analytical Processing) multidimensional cube technology. OLAP cubes have been widely used for multidimensional analysis of structured data, such as alphanumeric data, but not for unstructured data consisting of long texts. To provide multidimensional analysis of unstructured text data, the Text Cube model was recently proposed; it incorporates term frequency and an inverted index, which play key roles in information retrieval, as measures for searching and analyzing text databases. The primary goal of this paper is to apply this text cube model to a real data set from an Internet site sharing hotel information, and to provide multidimensional analysis of users' text reviews of hotels. To achieve this goal, we first build text cubes for the hotel review data. Using these text cubes, we design and implement a system that provides multidimensional keyword search features for searching and analyzing review texts along various dimensions (a minimal sketch of a text cube cell follows). This system can help users easily obtain valuable guest-subjective summary information. Furthermore, this paper evaluates the proposed system through various experiments, which reveal its effectiveness.
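
Here is a minimal Python sketch of what a text cube cell can look like: reviews grouped by dimension values, with term frequencies and an inverted index as the retrieval measures; the dimensions and sample data are illustrative assumptions, not the paper's data set.

```python
# Minimal sketch of a text cube cell: reviews are grouped by dimension values
# (here hotel and month), and each cell stores term frequencies; an inverted
# index maps terms back to cells. Dimensions and data are illustrative.
from collections import Counter, defaultdict

reviews = [
    ("HotelA", "2014-01", "clean room friendly staff"),
    ("HotelA", "2014-01", "clean lobby"),
    ("HotelB", "2014-02", "noisy room"),
]

cells = defaultdict(Counter)   # (hotel, month) -> term frequencies (measure)
inverted = defaultdict(set)    # term -> cells containing it (inverted index)

for hotel, month, text in reviews:
    for term in text.split():
        cells[(hotel, month)][term] += 1
        inverted[term].add((hotel, month))

# Multidimensional keyword search: which (hotel, month) cells mention "clean",
# and how often?
for cell in inverted["clean"]:
    print(cell, cells[cell]["clean"])   # ('HotelA', '2014-01') 2
```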

An Empirical Study on Linux I/O stack for the Lifetime of SSD Perspective (SSD 수명 관점에서 리눅스 I/O 스택에 대한 실험적 분석)

  • Jeong, Nam Ki;Han, Tae Hee
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.9
    • /
    • pp.54-62
    • /
    • 2015
  • Although NAND flash-based SSDs (Solid-State Drives) provide superior performance compared to HDDs (Hard Disk Drives), they have a major drawback in write endurance. As a result, the lifetime of an SSD is determined by its workload, and this becomes a major challenge given the current technology trend of shifting from SLC (Single-Level Cell) to MLC (Multi-Level Cell) and even TLC (Triple-Level Cell). Most previous studies have dealt with wear-leveling or with improving SSD lifetime through the hardware architecture. In this paper, we propose the optimal configuration of the host I/O stack, focusing on the file system, the I/O scheduler, and link power management, using JEDEC enterprise workloads and evaluating in terms of WAF (Write Amplification Factor), which captures how efficiently host writes are processed into flash memory and thus reflects the SSD lifetime perspective (see the sketch below). Experimental analysis shows that the optimal I/O stack configuration from the SSD lifetime perspective is MinPower-Dead-XFS, which prolongs the lifetime of the SSD approximately 2.6 times compared with MaxPower-Cfq-Ext4, the best-performing combination. Although performance drops by 13%, this contribution demonstrates the considerable impact of I/O stack optimization on SSD lifetime.
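
For concreteness, here is a minimal sketch of the WAF metric used to compare configurations; the counter values below are illustrative (chosen only to mirror the reported 2.6x gap), not the paper's measurements.

```python
# Minimal sketch of the WAF (Write Amplification Factor) metric used to compare
# I/O stack configurations: physical (NAND) bytes written divided by logical
# (host) bytes written. The counter values below are illustrative only.
def write_amplification_factor(nand_bytes: int, host_bytes: int) -> float:
    """WAF = NAND writes / host writes; lower means less flash wear."""
    if host_bytes == 0:
        raise ValueError("no host writes recorded")
    return nand_bytes / host_bytes

# Hypothetical SMART-style counters for two I/O stack configurations.
configs = {
    "MaxPower-Cfq-Ext4": (7_800_000_000, 2_000_000_000),  # best performance
    "MinPower-Dead-XFS": (3_000_000_000, 2_000_000_000),  # best lifetime
}
for name, (nand, host) in configs.items():
    print(f"{name}: WAF = {write_amplification_factor(nand, host):.2f}")
```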

Scenario-Based Implementation Synthesis for Real-Time Object-Oriented Models (실시간 객체 지향 모델을 위한 시나리오 기반 구현 합성)

  • Kim, Sae-Hwa;Park, Ji-Yong;Hong, Seong-Soo
    • The KIPS Transactions:PartD
    • /
    • v.12D no.7 s.103
    • /
    • pp.1049-1064
    • /
    • 2005
  • The demands of increasingly complicated software have led to the proliferation of object-oriented design methodologies in embedded systems. To execute a system designed with objects on target hardware, a task set must be derived from the objects, specifying how many tasks reside in the system and which task processes which event arriving at an object. The derived task set greatly influences the responsiveness of the system. Nevertheless, it is very difficult to derive an optimal task set because of the discrepancy between objects and tasks, so the common method currently used by developers is to repetitively try various task sets. This paper proposes the Scenario-based Implementation Synthesis Architecture (SISA) to solve this problem. SISA encompasses a method for deriving a task set from a system designed with objects, along with supporting development tools and a run-time system architecture (the general idea is sketched below). A system designed with SISA not only consists of the smallest possible number of tasks, but also guarantees that the response time for each event in the system is minimized. We have fully implemented SISA by extending the ResoRT development tool and applied it to an existing industrial PBX system. The experimental results show that maximum response times were reduced by 30.3% on average compared to task sets derived by the best known existing methods.
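
To illustrate the general idea of scenario-based task derivation (not SISA's actual algorithm), here is a minimal Python sketch that assigns each end-to-end event scenario to a task and merges scenarios that share objects and priority, so the task count stays small; the merge rule and the PBX-flavored data are illustrative assumptions.

```python
# Minimal sketch of scenario-based task derivation: each end-to-end event
# scenario (the chain of objects that process an event) is assigned to a task,
# and scenarios that share objects and priority are merged into one task to
# minimize the task count. Merge rule and data are illustrative assumptions.

# event -> (priority, chain of objects that handle it)
scenarios = {
    "call_setup":   (1, ("Dialer", "Switch", "Trunk")),
    "call_release": (1, ("Switch", "Trunk")),
    "keypad_input": (2, ("Dialer",)),
}

# Naive merge rule: scenarios with equal priority and overlapping object
# chains share state, so fold them into one task.
tasks = []
for event, (prio, chain) in scenarios.items():
    for task in tasks:
        if task["prio"] == prio and set(chain) & task["objects"]:
            task["events"].append(event)
            task["objects"] |= set(chain)
            break
    else:
        tasks.append({"prio": prio, "events": [event], "objects": set(chain)})

for i, task in enumerate(tasks):
    print(f"task{i}: priority={task['prio']} events={task['events']}")
```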