• Title/Summary/Keyword: Computer software


A Smart Farm Environment Optimization and Yield Prediction Platform based on IoT and Deep Learning (IoT 및 딥 러닝 기반 스마트 팜 환경 최적화 및 수확량 예측 플랫폼)

  • Choi, Hokil;Ahn, Heuihak;Jeong, Yina;Lee, Byungkwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.672-680 / 2019
  • This paper proposes "A Smart Farm Environment Optimization and Yield Prediction Platform based on IoT and Deep Learning," which gathers bio-sensor data from farms, diagnoses the diseases of growing crops, and predicts the year's harvest. The platform collects all the information currently available, such as weather and soil microbes, optimizes the farm environment so that crops can grow well, diagnoses crop diseases from the leaves of the crops being grown on the farm, and predicts the year's harvest from all the information on the farm. The results show that the average accuracy of the AEOM is about 15% higher than that of the RF and about 8% higher than that of the GBD. As the amount of data increases, its accuracy also degrades less than that of the RF or GBD. Linear regression shows that the slope of accuracy is -3.641E-4 for the ReLU, -4.0710E-4 for the Sigmoid, and -7.4534E-4 for the step function; therefore, as the amount of test data increases, the ReLU remains more accurate than the other two activation functions. The proposed platform manages the entire farm and, if introduced to actual farms, should contribute greatly to the development of smart farms in Korea.
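The activation-function comparison above boils down to fitting a line to accuracy measured at increasing test-set sizes and comparing the slopes. A minimal sketch of that regression step, with hypothetical accuracy values (only the slopes quoted in the abstract come from the paper):

```python
# Minimal sketch (not the paper's code) of the regression step: fit a
# line to accuracy measured at increasing test-set sizes and compare
# slopes across activation functions. The accuracy values below are
# hypothetical; only the slopes quoted in the abstract are the paper's.
import numpy as np

def accuracy_slope(test_sizes, accuracies):
    """Slope of the least-squares line through (test size, accuracy)."""
    slope, _intercept = np.polyfit(test_sizes, accuracies, deg=1)
    return slope

sizes = np.array([1000, 2000, 3000, 4000, 5000])
acc_relu = np.array([0.950, 0.947, 0.943, 0.940, 0.936])   # hypothetical
print(f"ReLU accuracy slope: {accuracy_slope(sizes, acc_relu):.3e}")
```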

User Access Patterns Discovery based on Apriori Algorithm under Web Logs (웹 로그에서의 Apriori 알고리즘 기반 사용자 액세스 패턴 발견)

  • Ran, Cong-Lin;Joung, Suck-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.681-689 / 2019
  • Web usage pattern discovery is an advanced means of using web log data and a specific application of data mining technology to Web logs. Educational Data Mining (EDM) is the application of data mining techniques to educational data (such as university Web logs, e-learning, adaptive hypermedia, and intelligent tutoring systems); its objective is to analyze these types of data in order to resolve educational research issues. In this paper, the Web log data of a university are used as the research object of data mining. Using database OLAP technology, the Web log data are preprocessed into a format suitable for data mining, and the results are stored in MSSQL. Basic statistics and analyses are then computed from the processed Web log records. In addition, we introduce the Apriori algorithm for Web usage pattern mining and its implementation process, develop an Apriori program in a Python development environment, evaluate its performance, and realize the mining of Web user access patterns. The results have important theoretical significance for applying the discovered patterns in the development of teaching systems. Future research will explore improving the Apriori algorithm in a distributed computing environment.
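As a point of reference for the mining step described above, here is a generic Apriori sketch over web-log "sessions" (sets of visited pages). It is illustrative only, not the paper's program, and the log data and support threshold are invented:

```python
# Generic Apriori: repeatedly extend frequent k-itemsets to (k+1)-item
# candidates, prune by the subset property, and keep those meeting the
# minimum support. Sessions are sets of visited pages.
from itertools import combinations

def apriori(sessions, min_support):
    n = len(sessions)
    support = lambda c: sum(c <= s for s in sessions) / n
    # frequent 1-itemsets
    k_sets = {c for c in {frozenset([i]) for s in sessions for i in s}
              if support(c) >= min_support}
    frequent = {}
    while k_sets:
        frequent.update({c: support(c) for c in k_sets})
        k = len(next(iter(k_sets))) + 1
        # join step: merge k-itemsets, then prune by the subset property
        candidates = {a | b for a in k_sets for b in k_sets if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(sub) in k_sets
                             for sub in combinations(c, k - 1))}
        k_sets = {c for c in candidates if support(c) >= min_support}
    return frequent

logs = [{"index", "login"}, {"index", "grades"}, {"index", "login", "grades"}]
print(apriori(logs, min_support=0.6))
```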

An Efficient Parallelization Implementation of PU-level ME for Fast HEVC Encoding (고속 HEVC 부호화를 위한 효율적인 PU레벨 움직임예측 병렬화 구현)

  • Park, Soobin;Choi, Kiho;Park, Sang-Hyo;Jang, Euee Seon
    • Journal of Broadcast Engineering / v.18 no.2 / pp.178-184 / 2013
  • In this paper, we propose an efficient parallelization technique for PU-level motion estimation (ME) in the next-generation video coding standard, High Efficiency Video Coding (HEVC), to reduce the time complexity of video encoding. Real-time encoding is difficult because ME accounts for a significant share of the complexity (i.e., about 80 percent of the encoder). To address this problem, various techniques have been studied, among them parallelization, which must be considered carefully in algorithm-level ME design. In this regard, a merge estimation method using the merge estimation region (MER), which enables ME to be designed in parallel, has been proposed; however, MER-based parallel ME still has unresolved problems that prevent an ideal implementation in the HEVC test model (HM). We therefore propose two strategies to implement stable MER-based parallel ME in HM. Experimental results demonstrate the effectiveness of the proposed methods: encoding time is reduced by 25.64 percent on average compared with HM using sequential ME.
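The key property the MER provides is that PUs inside one region carry no dependencies on each other's motion data, so their searches can be dispatched concurrently. A toy sketch of that idea follows; the full-search SAD routine and the PU layout are invented for the example, and this is not HM code:

```python
# PUs inside one merge estimation region (MER) are independent, so each
# PU's motion search can run on its own thread. The search here is a
# toy exhaustive SAD search over a small window.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def full_search(cur, ref, pu, search_range=8):
    """Exhaustive ME for one PU given as (y, x, h, w); returns (dy, dx)."""
    y, x, h, w = pu
    block = cur[y:y + h, x:x + w]
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= ref.shape[0] - h and 0 <= xx <= ref.shape[1] - w:
                sad = np.abs(block - ref[yy:yy + h, xx:xx + w]).sum()
                if sad < best_sad:
                    best_mv, best_sad = (dy, dx), sad
    return pu, best_mv

cur = np.random.randint(0, 255, (64, 64)).astype(np.int32)
ref = np.random.randint(0, 255, (64, 64)).astype(np.int32)
pus_in_one_mer = [(0, 0, 16, 16), (0, 16, 16, 16),
                  (16, 0, 16, 16), (16, 16, 16, 16)]

with ThreadPoolExecutor() as pool:
    for pu, mv in pool.map(lambda p: full_search(cur, ref, p), pus_in_one_mer):
        print(pu, "->", mv)
```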

Development of PC Based Signal Postprocessing System in MR Spectroscopy: Normal Brain Spectrum in 1.5T MR Spectroscopy (PC를 이용한 자기공명분광 신호처리분석 시스템 개발: 1.5T MR Spectroscopy에서의 정상인 뇌 분광 신호)

  • 백문영;강원석;이현용;신운재;은충기
    • Investigative Magnetic Resonance Imaging / v.4 no.2 / pp.128-135 / 2000
  • Purpose: The aim of this study is to develop Magnetic Resonance Spectroscopy (MRS) data-processing software, which plays an important role as a diagnostic tool in the clinical field. Materials and methods: Post-processing software for MRS, based on a graphical user interface (GUI) under the Windows operating system on a personal computer (PC), was developed using MATLAB (MathWorks, U.S.A.). The tool contains many functions to increase the quality of spectrum data, such as DC correction, zero filling, line broadening, Gauss-Lorentzian filtering, and phase correction. We obtained normal human brain $^1H$ MRS data from the parietal white matter, basal ganglia, and occipital grey matter regions using a 1.5T Gyroscan ACS-NT R6 (Philips, Amsterdam, Netherlands) MRS package. The MRS peaks were analyzed by obtaining the ratios of peak areas. Results: The peak ratios of NAA/Cr, Cho/Cr, and MI/Cr differed slightly across MRS machines, but did not differ significantly between echo times on the same machine (p<0.05). Conclusion: GUI-based MRS post-processing software for the PC was developed and applied to the analysis of normal human brain $^1H$ MRS. Running this processing job independently increases the performance and patient-scan throughput of the main console. Finally, we suggest that a database of normal in-vivo human MRS data should be established before clinical application.
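The paper's tool was implemented in MATLAB; purely as an illustration of three of the listed steps (DC correction, exponential line broadening, zero filling), a NumPy sketch on a synthetic FID might look like this, with all parameter values invented:

```python
# Illustrative MRS post-processing on a synthetic free induction decay
# (FID): DC correction from the signal tail, Lorentzian line broadening,
# zero filling, then FFT to obtain the spectrum.
import numpy as np

def postprocess(fid, dwell, zero_fill_to=4096, lb_hz=2.0):
    fid = fid - fid[-64:].mean()                      # DC offset from the FID tail
    t = np.arange(len(fid)) * dwell
    fid = fid * np.exp(-np.pi * lb_hz * t)            # Lorentzian line broadening
    fid = np.pad(fid, (0, zero_fill_to - len(fid)))   # zero filling
    return np.fft.fftshift(np.fft.fft(fid))           # spectrum

dwell = 1e-3                                          # 1 kHz sampling
t = np.arange(1024) * dwell
fid = np.exp(2j * np.pi * 50.0 * t) * np.exp(-t / 0.1)  # one synthetic peak
spectrum = postprocess(fid, dwell)
print(spectrum.shape)
```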


Implementation of a TCP/IP Offload Engine Using Lightweight TCP/IP on an Embedded System (임베디드 시스템상에서 Lightweight TCP/IP를 이용한 TCP/IP Offload Engine의 구현)

  • Yoon In-Su;Chung Sang-Hwa;Choi Bong-Sik;Jun Yong-Tae
    • Journal of KIISE:Computer Systems and Theory / v.33 no.7 / pp.413-420 / 2006
  • The speed of present-day network technology exceeds a gigabit and is developing rapidly. When TCP/IP is used in these high-speed networks, processing the protocol imposes a high load on the host CPU. To solve this problem, research has been carried out on the TCP/IP Offload Engine (TOE), which processes TCP/IP on a network adapter instead of the host CPU, reducing the host's processing burden. In this paper, we developed two software-based TOEs: one implemented with embedded Linux and the other with Lightweight TCP/IP (lwIP). The TOE using embedded Linux achieved a bandwidth of no more than 62 Mbps. To overcome this poor performance, we ported lwIP to the embedded system and enhanced it for higher performance: we eliminated lwIP's memory-copy overhead, added delayed ACK and TCP Segmentation Offload (TSO) features, and modified lwIP's default parameters for large data transfers. With these modifications, the TOE using the modified lwIP shows a bandwidth of 194 Mbps.
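The TOEs themselves are embedded firmware, but bandwidth figures such as 62 Mbps vs. 194 Mbps come from host-side throughput tests. A minimal sketch of such a measurement, where the host, port, and transfer size are placeholders:

```python
# Host-side TCP throughput test: stream a fixed number of bytes to a
# sink server and report megabits per second. Placeholder endpoint.
import socket
import time

def measure_throughput_mbps(host, port, total_bytes=64 * 1024 * 1024):
    payload = b"\x00" * 65536
    sent = 0
    with socket.create_connection((host, port)) as sock:
        start = time.perf_counter()
        while sent < total_bytes:
            sock.sendall(payload)
            sent += len(payload)
        elapsed = time.perf_counter() - start
    return sent * 8 / elapsed / 1e6

# Requires a sink server listening on the target board, e.g.:
# print(measure_throughput_mbps("192.168.0.10", 5001))
```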

Customizable Global Job Scheduler for Computational Grid (계산 그리드를 위한 커스터마이즈 가능한 글로벌 작업 스케줄러)

  • Hwang Sun-Tae;Heo Dae-Young
    • Journal of KIISE:Computer Systems and Theory / v.33 no.7 / pp.370-379 / 2006
  • A computational grid provides an environment that integrates various computing resources. A grid environment is more complex and varied than a traditional computing environment, consisting of diverse resources on which different software packages are installed on different platforms. For more efficient use of a computational grid, therefore, some form of integration is required to manage grid resources effectively. In this paper, a global scheduler is proposed that integrates grid resources at the meta level while applying various scheduling policies. The global scheduler consists of a mechanical part and three policies. The mechanical part mainly searches user queues and resource queues to select an appropriate job and computing resource; an algorithm for it is defined and optimized. The three policies are the user-selecting policy, the resource-selecting policy, and the executing policy. Each can be redefined and replaced freely while the operation of the computational grid is temporarily suspended. The user-selecting policy, for example, can be defined to give a certain user higher priority than others; the resource-selecting policy selects the computing resource that best matches the user's requirements; and the executing policy serves to overcome communication overheads in the grid middleware. Finally, various algorithms for the user-selecting policy are defined in terms of user fairness, and their performances are compared.
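The described split between a fixed mechanical part and three replaceable policies can be pictured as a scheduler loop taking policy functions as parameters. A minimal sketch, with all names, jobs, and policies invented for illustration:

```python
# Fixed "mechanical part": scan user queues and resources, delegating
# every decision to the three pluggable policy functions.
def schedule(user_queues, resources, select_user, select_resource, execute):
    while any(user_queues.values()):
        user = select_user([u for u, q in user_queues.items() if q])  # policy 1
        job = user_queues[user].pop(0)
        resource = select_resource(job, resources)                    # policy 2
        execute(job, resource)                                        # policy 3

# Example policies: simple priority order and first-fit resource matching.
select_user = lambda users: sorted(users)[0]
select_resource = lambda job, rs: next(r for r in rs if r["cpus"] >= job["cpus"])
execute = lambda job, r: print(f"run {job['id']} on {r['name']}")

queues = {"alice": [{"id": "j1", "cpus": 2}], "bob": [{"id": "j2", "cpus": 4}]}
nodes = [{"name": "n1", "cpus": 4}, {"name": "n2", "cpus": 8}]
schedule(queues, nodes, select_user, select_resource, execute)
```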

Parallel SystemC Cosimulation using Virtual Synchronization (가상 동기화 기법을 이용한 SystemC 통합시뮬레이션의 병렬 수행)

  • Yi, Young-Min;Kwon, Seong-Nam;Ha, Soon-Hoi
    • Journal of KIISE:Computer Systems and Theory / v.33 no.12 / pp.867-879 / 2006
  • This paper concerns fast and time-accurate HW/SW cosimulation for MPSoC (Multi-Processor System-on-Chip) architectures in which multiple software and/or hardware components exist. It is becoming more and more common to use MPSoC architectures to design complex embedded systems. In cosimulating such architectures, as the number of component simulators participating in the cosimulation increases, the time-synchronization overhead among simulators grows, lowering overall cosimulation performance. Although SystemC cosimulation frameworks show high cosimulation performance, that performance is inversely proportional to the number of simulators. In this paper, we extend a novel technique called virtual synchronization, which boosts cosimulation speed by reducing time-synchronization overhead: (1) SystemC simulation is supported seamlessly in the virtual synchronization framework without requiring modification of the SystemC kernel, and (2) parallel execution of component simulators with virtual synchronization is supported. We compared the performance and accuracy of the proposed parallel SystemC cosimulation framework with MaxSim, a well-known commercial SystemC cosimulation framework; the proposed framework ran 11 times faster on an H.263 decoder example while the timing error remained below 5%.
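The core of virtual synchronization is that simulators need not synchronize every cycle, only when data is actually exchanged. A deliberately simplified sketch of that idea (a toy model, not the paper's framework):

```python
# Each simulator advances its own local clock freely; clocks are
# aligned only at actual data exchanges rather than every cycle.
class Sim:
    def __init__(self, name):
        self.name, self.local_time = name, 0

    def run_until(self, t):
        # advance locally with no global synchronization
        self.local_time = max(self.local_time, t)

def cosimulate(messages, sims):
    # (time, src, dst) message events are the only synchronization points
    for t, src, dst in sorted(messages):
        sims[src].run_until(t)                               # sender runs ahead
        sims[dst].local_time = max(sims[dst].local_time, t)  # align at exchange
        print(f"t={t}: {src} -> {dst}")

sims = {"cpu": Sim("cpu"), "hw": Sim("hw")}
cosimulate([(10, "cpu", "hw"), (25, "hw", "cpu")], sims)
```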

Optimal Sequence Alignment Algorithm Using Space Division Technique (공간 분할 방법을 이용한 최적 서열정렬 알고리즘)

  • Ahn, Heui-Kook;Roh, Hi-Young
    • Journal of KIISE:Software and Applications / v.34 no.5 / pp.397-406 / 2007
  • The problem of finding an optimal alignment between sequences A and B can be solved efficiently by a dynamic programming algorithm (DPA). However, for long sequences the problem may become unsolvable in practice because it requires $O(m \cdot n)$ time and space (where $m = |A|$, $n = |B|$). For space, Hirschberg developed a linear-space, quadratic-time algorithm, so computer memory was no longer a limiting factor for long sequences. As processors and memories become faster and larger, a method is needed that speeds processing up, even at the cost of more space. For this purpose, we present an algorithm that solves the problem in quadratic time and linear space. Using a space division method, it computes the optimal alignment faster than the linear space algorithm (LSA), although it requires more memory. We generalized the algorithm to divisions that do not fall on integer boundaries and pruned additional space using an entry/exit node concept. Through proofs and experiments, we show that our algorithm uses $d \cdot (m+n)$ space, somewhat more than the LSA, and runs faster than the LSA.
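For context, the linear-space baseline the paper improves on is the Hirschberg-style score pass: quadratic time, but only one DP row of space. A minimal sketch with illustrative scoring parameters:

```python
# Linear-space alignment score: O(m*n) time, O(n) space, keeping only
# the previous DP row. Match/mismatch/gap scores are illustrative.
def align_score(a, b, match=1, mismatch=-1, gap=-1):
    prev = [j * gap for j in range(len(b) + 1)]   # row 0: all-gap prefixes
    for i in range(1, len(a) + 1):
        cur = [i * gap]
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur.append(max(diag, prev[j] + gap, cur[j - 1] + gap))
        prev = cur
    return prev[-1]

print(align_score("ACACACTA", "AGCACACA"))
```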

An Evaluation of Human Sensibility on Perceived Texture for Real Haptic Representation (사실적인 햅틱 표현을 위한 질감지각 감성 평가)

  • Kim, Seung-Chan;Kyung, Ki-Uk;Sohn, Jin-Hun;Kwon, Dong-Soo
    • Journal of KIISE:Software and Applications / v.34 no.10 / pp.900-909 / 2007
  • This paper describes an experiment on the evaluation of human sensibility, monitoring responses to changes in the frequency and amplitude of a tactile display system. Preliminary tasks were performed to obtain effective adjectives concerning texture perception. Thirty-three adjectives were collected initially; this number was reduced to 14 through a suitability survey asking whether each adjective was suitable for expressing a texture feeling. After a semantic-similarity evaluation, the set was further reduced to the ten adjectives used in the main experiment. In the main experiment, selected sandpaper types and 15 selected combinations of frequency and amplitude of a tactile display were used to rate the ten adjectives quantitatively on a bipolar seven-point scale. The data show that a relationship exists between the independent variables (frequency, amplitude, and grit size) and the dependent variable (perceived texture); that is, changes in frequency and amplitude are directly related to perceived roughness and other essential elements of human tactile sensitivity found in the preliminary experiment.

Design and Implementation of a Large-Scale Spatial Reasoner Using MapReduce Framework (맵리듀스 프레임워크를 이용한 대용량 공간 추론기의 설계 및 구현)

  • Nam, Sang Ha;Kim, In Cheol
    • KIPS Transactions on Software and Data Engineering / v.3 no.10 / pp.397-406 / 2014
  • In order to answer questions successfully on behalf of a human in DeepQA environments such as the American quiz show Jeopardy!, a computer must be capable of fast temporal and spatial reasoning over a large-scale commonsense knowledge base. In this paper, we present a scalable spatial reasoning algorithm for efficiently deriving new directional and topological relations using the MapReduce framework, a well-known parallel distributed computing environment. The proposed algorithm takes as input a large-scale spatial knowledge base including CSD-9 directional relations and RCC-8 topological relations. To infer new directional and topological relations from the given knowledge base, it performs cross-consistency checks as well as path-consistency checks. To maximize the parallelism of the reasoning computations according to the principles of the MapReduce framework, the algorithm partitions the large knowledge base into smaller ones and distributes them over multiple computing nodes in the map phase; in the reduce phase, it infers new knowledge from the distributed spatial knowledge bases. Through experiments on a sample knowledge base with a MapReduce-based implementation of our algorithm, we demonstrated the high performance of our large-scale spatial reasoner.
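The path-consistency step maps naturally onto map/reduce: the map phase groups relations by their shared middle entity, and the reduce phase composes them. A toy sketch with a stand-in composition table (the real CSD-9/RCC-8 tables are far larger):

```python
# Toy map/reduce view of path consistency: from (a, r1, b) and
# (b, r2, c), infer (a, compose(r1, r2), c). The composition table is a
# tiny stand-in for the CSD-9/RCC-8 tables; facts are invented.
from collections import defaultdict

COMPOSE = {("north", "north"): "north", ("inside", "inside"): "inside"}

def map_phase(facts):
    buckets = defaultdict(lambda: {"in": [], "out": []})
    for a, rel, b in facts:
        buckets[b]["in"].append((a, rel))    # fact arriving at b
        buckets[a]["out"].append((rel, b))   # fact leaving a
    return buckets

def reduce_phase(buckets):
    inferred = set()
    for joint in buckets.values():
        for a, r1 in joint["in"]:
            for r2, c in joint["out"]:
                if a != c and (r1, r2) in COMPOSE:
                    inferred.add((a, COMPOSE[(r1, r2)], c))
    return inferred

facts = [("seoul", "north", "daejeon"), ("daejeon", "north", "busan")]
print(reduce_phase(map_phase(facts)))
```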