• Title/Summary/Keyword: file distribution


A Scalable Resource-Lookup Protocol for Internet File System Considering the Computing Power of a Peer (피어의 컴퓨팅 능력을 고려한 인터넷 파일 시스템을 위한 확장성 있는 자원 탐색 프로토콜 설계)

  • Jung Il-dong;You Young-ho;Lee Jong-hwan;Kim Kyongsok
    • Journal of KIISE: Information Networking / v.32 no.1 / pp.89-99 / 2005
  • Advances in the Internet and the PC have accelerated the distribution and sharing of information, giving rise to the P2P (Peer-to-Peer) computing paradigm, in which users share computing resources and services with one another directly. A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired item. P2P systems treat the majority of their components as equivalent. This purist philosophy is useful from an academic standpoint, since it simplifies algorithmic analysis; in reality, however, some peers are more equal than others. We propose a P2P protocol that takes into account the differences in the capabilities of computers, which previous research has ignored, and we examine the possibility and applications of the protocol. We call our P2P protocol the Magic Square. Simulating the Magic Square, we estimate the performance of the protocol in terms of hop count and network round-trip time, and we also analyze its performance with a numerical formula. Although the numbers a magic square contains have no meaning in themselves, the sums of the numbers in each row, column, and main diagonal are the same. Analogously, the design goals of our protocol are similar query response times and query path lengths between the requesting peer and the responding peer, even though the network information stored in each peer differs.
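The magic-square analogy the abstract draws on, equal sums along every row, column, and main diagonal, can be illustrated with a short check. The 3x3 square below is the classic Lo Shu example, not data from the paper:

```python
def is_magic(square):
    """Check that every row, column, and main diagonal sums to the same value."""
    n = len(square)
    target = sum(square[0])
    rows_ok = all(sum(row) == target for row in square)
    cols_ok = all(sum(square[r][c] for r in range(n)) == target for c in range(n))
    diag_ok = sum(square[i][i] for i in range(n)) == target
    anti_ok = sum(square[i][n - 1 - i] for i in range(n)) == target
    return rows_ok and cols_ok and diag_ok and anti_ok

# The classic 3x3 magic square: every row, column, and diagonal sums to 15.
lo_shu = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]
print(is_magic(lo_shu))  # True
```

The protocol's design goal mirrors this invariant: whichever peer a query starts from, the path length and response time should come out roughly the same.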

Data Processing Architecture for Cloud and Big Data Services in Terms of Cost Saving (비용절감 측면에서 클라우드, 빅데이터 서비스를 위한 대용량 데이터 처리 아키텍쳐)

  • Lee, Byoung-Yup;Park, Jae-Yeol;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.15 no.5 / pp.570-581 / 2015
  • In recent years, many institutions have predicted that cloud services and big data will be major IT trends in the near future, and a number of leading IT vendors are focusing on practical solutions and services for both. The cloud has the advantage of unrestricted selection of resources for a business model, building on a variety of Internet-based technologies; for this reason, provisioning and virtualization technologies for active resource expansion have been attracting attention above all others. Big data took data prediction models to another level by providing a basis for analyzing unstructured data that could not be analyzed in the past. Since cloud services and big data share a dependence on services and analysis over massive amounts of data, the efficient operation and design of mass data has become a critical issue from the early stages of development. Thus, in this paper, we establish a data processing architecture based on the technological requirements of mass data for cloud and big data services. In particular, we introduce the requirements a distributed file system must meet to serve in cloud computing, the compression technology requirements of mass data for big data and cloud computing in terms of cost saving, and the requirements of open-source systems, such as the Hadoop ecosystem's distributed file system and in-memory databases, that are available in cloud computing.

A proposal for managing electronic document of the government (정부기관의 전자문서관리 방향)

  • Lee, Jae-Ha;Yoon, Dai-Hyun
    • Journal of Korean Society of Archives and Records Management / v.1 no.1 / pp.245-257 / 2001
  • Recently, the government has introduced electronic document systems, and job processing with electronic approval and document distribution has been increasing. However, because of the variety of storage formats for electronic documents, many difficulties are expected later in the public use and long-term preservation of the information. There are also several problems in managing electronic document systems, such as the absence of important records-management functions. In this paper, we therefore propose a methodology to solve these problems of electronic document management. The kinds of document file produced by the government are various; introducing a standard document file format will help standardize electronic document systems, and it will make people aware of the need to add the important preservation and access functions to those systems. The key task is to give the document system the record-keeping and related functions required by the records management law. When the standard electronic document system is applied to records management, records move from the processing section to the data center, and then from the data center to the government records and archives center, where the transferred records can be preserved and made available. Records that are not easy to digitize, such as visual and auditory records, can also be digitally preserved and made available in the government records and archives center.

Development of a Gridded Simulation Support System for Rice Growth Based on the ORYZA2000 Model (ORYZA2000 모델에 기반한 격자형 벼 생육 모의 지원 시스템 개발)

  • Hyun, Shinwoo;Yoo, Byoung Hyun;Park, Jinyu;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology / v.19 no.4 / pp.270-279 / 2017
  • Regional assessment of crop productivity using a gridded simulation approach could aid policy making and crop management. Still, little effort has been made to develop systems that allow gridded simulation of crop growth using the ORYZA2000 model, which has been used for predicting rice yield in Korea. The objectives of this study were to develop a series of data processing modules for creating input data files, running the crop model, and aggregating output files over a region of interest using gridded data files. These modules were implemented in C++ and R to make the best use of the features provided by those programming languages. In a case study, 13,000 input files in a plain-text format were prepared using daily gridded weather data with spatial resolutions of 1 km and 12.5 km for the period 2001-2010. Using the text files as inputs to the ORYZA2000 model, crop yield simulations were performed for each grid cell under a scenario of crop management practices. After output files were created for the grid cells that represent paddy rice fields in South Korea, the outputs were aggregated into a single file in netCDF format. The spatial pattern of simulated crop yield was relatively similar to the actual distribution of yields in Korea, although yields were biased in some regions. Those differences seemed to result from uncertainties in the input data, e.g., transplanting date and cultivar in an area, as well as in the weather data. Our results indicate that the set of tools developed in this study would be useful for gridded simulation with other crop models. In a further study, it would be worthwhile to take into account compatibility with a modeling interface library for integrated simulation of an agricultural ecosystem.
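The aggregation step, collecting per-grid-cell model outputs into one gridded array, can be sketched as follows. This is a minimal illustration, not the authors' C++/R modules; writing true netCDF output would use a library such as netCDF4-python, and the cell indices, yields, and missing value below are made up:

```python
# Per-grid-cell model outputs, keyed by (row, col), are collected into a
# 2-D array that could then be written out with a netCDF library.
MISSING = -999.0

def aggregate_outputs(cell_yields, nrows, ncols):
    """Place per-cell simulated yields into a full grid, filling gaps with MISSING."""
    grid = [[MISSING] * ncols for _ in range(nrows)]
    for (r, c), y in cell_yields.items():
        grid[r][c] = y
    return grid

# Hypothetical yields (t/ha) for three simulated paddy cells on a 3x4 grid.
outputs = {(0, 1): 6.2, (1, 2): 5.8, (2, 0): 6.5}
grid = aggregate_outputs(outputs, nrows=3, ncols=4)
print(grid[0][1])  # 6.2
```

A missing-value sentinel like this is the usual convention for grid cells (e.g., non-paddy land) that were never simulated.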

Practical Virtual Compensator Design with Dynamic Multi-Leaf Collimator(dMLC) from Iso-Dose Distribution

  • Song, Ju-Young;Suh, Tae-Suk;Lee, Hyung-Koo;Choe, Bo-Young;Ahn, Seung-Do;Park, Eun-Kyung;Kim, Jong-Hoon;Lee, Sang-Wook;Yi, Byong-Yong
    • Proceedings of the Korean Society of Medical Physics Conference / 2002.09a / pp.129-132 / 2002
  • A practical virtual compensator, which uses a dynamic multi-leaf collimator (dMLC) and a three-dimensional radiation therapy planning (3D RTP) system, was designed, and a feasibility study was done to verify that the virtual compensator can act as a replacement for a physical compensator. The design procedure consists of three steps. The first step is to generate isodose distributions with the 3D RTP system (Render Plan, Elekta); the isodose line pattern is then used as the compensator pattern, and a pre-determined compensating ratio is applied to generate the fluence map for the compensator design. The second step is to generate the leaf sequence file with Ma's algorithm for optimum MU efficiency. The whole procedure was carried out with home-made software. The last step is a QA procedure that compares the dose distributions produced by irradiation with the virtual compensator and by calculation in the 3D RTP system. In this study, a phantom was fabricated to verify the suitability of the designed compensator. It consists of a styrofoam part, which mimics an irregularly shaped contour or missing tissue, and a mini water phantom. The inhomogeneous dose distribution due to the styrofoam missing tissue could be calculated with the RTP system. Film dosimetry in the phantom with and without the compensator showed significant improvement of the dose distributions. From a practical point of view, the virtual compensator designed in this study proved to be a replacement for the physical compensator.
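The leaf-sequencing step, turning a fluence map into a set of MLC openings, can be illustrated with a much-simplified 1-D decomposition. This is not Ma's published algorithm, only a sketch of the general idea: at each intensity level, every contiguous run of bixels still needing dose becomes one (left, right) leaf opening:

```python
def leaf_sequence(profile):
    """Decompose an integer fluence profile into unit-intensity apertures."""
    apertures = []
    for level in range(max(profile)):
        left = None
        for i, f in enumerate(profile + [0]):  # sentinel closes a trailing run
            if f > level and left is None:
                left = i
            elif f <= level and left is not None:
                apertures.append((left, i - 1))  # inclusive bixel range
                left = None
    return apertures

# Hypothetical 7-bixel fluence profile; rebuilding the delivered fluence
# from the apertures verifies the decomposition.
profile = [1, 2, 3, 2, 1, 0, 2]
segs = leaf_sequence(profile)
delivered = [sum(1 for l, r in segs if l <= i <= r) for i in range(len(profile))]
print(delivered == profile)  # True
```

Real leaf sequencing additionally optimizes the number of segments and total MU and respects hardware constraints such as leaf interdigitation, which this sketch ignores.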


Algorithm for the design of a Virtual Compensator Using the Multileaf Collimator and 3D RTP System (다엽콜리메터와 삼차원 방사선치료계획장치를 이용한 가상 선량보상체 설계 알고리듬)

  • 송주영;이병용;최태진
    • Progress in Medical Physics / v.12 no.2 / pp.185-191 / 2001
  • A virtual compensator, realized using a multileaf collimator (MLC) and a three-dimensional radiation therapy planning (3D RTP) system, was designed, and a feasibility study was done to verify that it can properly perform the function of a conventional compensator. As a model for the design of the compensator, a styrofoam phantom and a mini water phantom were prepared to simulate a missing-tissue area, and the calculated dose distribution was produced with the 3D RTP system. The fluence maps, which are the basic materials for the design of the virtual compensator, were produced based on the dose distribution, and the MLC leaf sequence file was made to realize the produced fluence map. Ma's algorithm was applied to design the MLC leaf sequence, and all the design tools were programmed in IDL 5.4. To verify the feasibility of the designed virtual compensator, the results of irradiation with and without the virtual compensator were analyzed by comparing irradiated films inserted into the mini water phantom. The higher-dose area produced by the missing tissue was removed, and the intended uniform dose distribution was achieved when the virtual compensator was applied.


Realization of a Web-based Distribution System for the Monitoring of Business Press Releases and News Gathering Robots (기업 보도자료 모니터링을 위한 웹기반 배포시스템 및 기사 수집로봇 구현)

  • Shin, Myeong-Sook;Oh, Jung-Jin;Lee, Joon
    • Journal of the Korea Society of Computer and Information / v.18 no.12 / pp.103-111 / 2013
  • At present, a variety of Korean news stories have become important online content, and their importance in the press is growing. Diverse news from businesses is provided to the public as press releases through newspapers or broadcast media. To turn such news into a press release, enterprises visit reporters or use e-mail, fax, or couriers to deliver the information. However, these methods have problems with time, human resources, expenses, and file damage; they also make it bothersome for enterprises to check what has been released, and for the press to make frequent contact with enterprises about interviews and content to be released. Therefore, this study realized a distribution system that enterprises can use to distribute press materials and easily check what has been released, and that the press can use to submit interview requests in a simple way, together with a news-gathering robot that collects news on the enterprises involved from online articles and portal sites.
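The headline-collecting step of such a news-gathering robot can be sketched as below, assuming headlines are marked with a hypothetical `news_tit` anchor class; a real robot would fetch pages over HTTP and adapt to each portal's actual markup:

```python
from html.parser import HTMLParser

class TitleCollector(HTMLParser):
    """Collect the text of anchors whose class marks a news headline."""
    def __init__(self, keyword):
        super().__init__()
        self.keyword = keyword
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "a" and ("class", "news_tit") in attrs:
            self.in_title = True

    def handle_data(self, data):
        # Keep only headlines mentioning the monitored enterprise.
        if self.in_title and self.keyword in data:
            self.titles.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_title = False

# Hardcoded sample page instead of a live HTTP fetch.
sample = '<a class="news_tit">Acme releases new product</a>' \
         '<a class="news_tit">Weather update</a>'
collector = TitleCollector("Acme")
collector.feed(sample)
print(collector.titles)  # ['Acme releases new product']
```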

A Study On Recommend System Using Co-occurrence Matrix and Hadoop Distribution Processing (동시발생 행렬과 하둡 분산처리를 이용한 추천시스템에 관한 연구)

  • Kim, Chang-Bok;Chung, Jae-Pil
    • Journal of Advanced Navigation Technology / v.18 no.5 / pp.468-475 / 2014
  • Real-time recommendation is becoming more difficult for recommender systems because of larger preference data sets, the computing power they require, and the recommendation algorithms themselves. For this reason, research on distributed processing of large preference data sets is proceeding actively. This paper studied a distributed processing method for large preference data sets using the Hadoop distributed processing platform and the Mahout machine learning library. The recommendation algorithm uses a co-occurrence matrix, similar to item-based collaborative filtering. The co-occurrence matrix can be processed in a distributed fashion across the many nodes of a Hadoop cluster; it requires a large amount of computation, but distributed processing reduces the computation per node. This paper simplified the distributed processing of the co-occurrence matrix from four stages to three. As a result, one MapReduce job could be removed while still generating the recommendation file, giving faster processing and less map output data.
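The co-occurrence approach can be sketched on a single machine; the paper distributes the same computation as Hadoop MapReduce jobs via Mahout, and the user histories and ratings below are illustrative:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence(user_items):
    """Count how often each pair of items appears in the same user's history."""
    co = defaultdict(lambda: defaultdict(int))
    for items in user_items.values():
        for a, b in combinations(sorted(items), 2):
            co[a][b] += 1
            co[b][a] += 1
    return co

def recommend(co, prefs):
    """Score unseen items by co-occurrence counts weighted by the user's ratings."""
    scores = defaultdict(float)
    for item, rating in prefs.items():
        for other, count in co[item].items():
            if other not in prefs:
                scores[other] += count * rating
    return sorted(scores, key=scores.get, reverse=True)

histories = {"u1": {"A", "B", "C"}, "u2": {"A", "B"}, "u3": {"B", "C"}}
co = cooccurrence(histories)
print(recommend(co, {"A": 5.0}))  # ['B', 'C']
```

The scoring step is a matrix-vector product (co-occurrence matrix times preference vector), which is exactly the part that parallelizes naturally across Hadoop nodes.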

Instagram Users Behavior Analysis in a Digital Forensic Perspective (디지털 포렌식 관점에서의 인스타그램 사용자 행위 분석)

  • Seo, Seunghee;Kim, Yeog;Lee, Changhoon
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.2 / pp.407-416 / 2018
  • Instagram is a Social Network Service (SNS) that has recently become popular among people of all ages; it lets people build social relations and share hobbies, daily routines, and useful information. However, since the uploaded information can be accessed by arbitrary users and is easily shared with others, fraud, stalking, misrepresentation, impersonation, infringement of copyright, and malware distribution have been reported. For this reason, it is necessary to analyze Instagram from the viewpoint of digital forensics, but research on this is very scarce. In this paper, we performed reverse engineering and dynamic analysis of Instagram from a digital forensics viewpoint in the Android environment. As a result, we identified three database files that contain user-behavior data such as chat content, chat counterparts, posted photos, and cookie information, and we found the save paths of four files and an XML file storing various data. We also propose ways to use these results in digital forensics.

Spatial Analysis of Nonpoint Source Pollutant Loading from the Imha dam Watershed using L-THIA (L-THIA를 이용한 낙동강수계 임하댐유역 비점오염원의 공간적 분포해석)

  • Jeon, Ji-Hong;Cha, Daniel K.;Choi, Donghyuk;Kim, Tae-Dong
    • Journal of The Korean Society of Agricultural Engineers / v.55 no.1 / pp.17-29 / 2013
  • The Long-Term Hydrologic Impact Assessment (L-THIA) model, a distributed watershed model, was applied to analyze the spatial distribution of surface runoff and nonpoint source pollutant loading from the Imha watershed during 2001-2010. An L-THIA CN Calibration Tool linked with SCE-UA was developed to calibrate surface runoff automatically. Calibration (2001-2005) and validation (2006-2010) of monthly surface runoff showed 'very good' model performance, with Nash-Sutcliffe (NS) values of 0.91 for calibration and 0.89 for validation. Average annual surface runoff from the Imha watershed was 218.4 mm, and the Banbyun subwatershed produced much more than the others due to its poor hydrologic condition. Average annual nonpoint source pollutant loadings from the Imha watershed were 2,295 ton/year for BOD5, 14,752 ton/year for SS, 358 ton/year for T-N, and 79 ton/year for T-P. The pollutant loadings and loading rates from the Banbyun subwatershed were much higher than those of the other subwatersheds. Analysis of loading rates at the grid scale (30 m × 30 m) showed that most of the top 10% of loading rates were generated from upland. Therefore, the major hot-spot area for managing nonpoint source pollution in the Imha watershed is the combination of upland and the Banbyun subwatershed. The L-THIA model is easy to use, its input files are easy to prepare, and it is a useful screening-level tool for managing nonpoint source pollution.
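The Nash-Sutcliffe values quoted above (0.91 calibration, 0.89 validation) come from a standard efficiency formula, sketched here with made-up monthly runoff values rather than the study's data; values near 1 mean the simulation tracks observations well:

```python
def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    variance = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / variance

observed = [10.0, 30.0, 55.0, 20.0, 5.0]   # hypothetical monthly runoff (mm)
simulated = [12.0, 28.0, 50.0, 22.0, 6.0]
print(round(nash_sutcliffe(observed, simulated), 3))  # 0.976
```

An NS of 0 means the model is no better than predicting the mean of the observations, which is why values above roughly 0.75 are commonly rated 'very good'.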