• Title/Summary/Keyword: Internet-Distributed computing

Search Result 339

Video Retrieval System supporting Adaptive Streaming Service (적응형 스트리밍 서비스를 지원하는 비디오 검색 시스템)

  • 이윤채;전형수;장옥배
    • Journal of KIISE: Computing Practices and Letters / v.9 no.1 / pp.1-12 / 2003
  • Recently, much research has been conducted on distributed processing over the Internet and on multimedia data processing. Rapid, convenient multimedia services of high quality and high speed are needed. In this paper, we design and implement a clip-based video retrieval system that operates in real time in the Web environment. Our system consists of a content-based indexing system that supports convenient services for video content providers, and a Web-based retrieval system that makes diverse information retrieval easy for users on the Web. Three important methods are used in the content-based indexing system: key frame extraction by segmenting video data, clip file creation by clustering related information, and video database construction in clip units. In the Web-based retrieval system, keyword-based retrieval, two-dimensional key-frame browsing, and real-time clip display are used. In this paper, we design and implement a system that supports real-time retrieval of video clips in the Web environment and provides a stable multimedia service. The proposed methods show their usefulness for video content providers and offer an easy way to search for the intended video content.
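
The key-frame extraction step described above (segmenting the video and keeping representative frames) can be illustrated with a small, hedged sketch. The abstract does not specify the actual segmentation algorithm, so the histogram-difference heuristic, the OpenCV usage, and the threshold below are assumptions for illustration only.

```python
# A minimal key-frame extraction sketch: keep a frame whenever its colour
# histogram differs strongly from the last kept key frame.
# (Illustrative only; the paper's actual video segmentation method is not shown here.)
import cv2

def extract_key_frames(video_path: str, threshold: float = 0.4) -> list[int]:
    """Return indices of frames chosen as key frames."""
    cap = cv2.VideoCapture(video_path)
    key_frames, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        # A large Bhattacharyya distance is treated as a shot boundary.
        if prev_hist is None or cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            key_frames.append(idx)
            prev_hist = hist
        idx += 1
    cap.release()
    return key_frames
```

Each kept frame index would then serve as the representative image of a clip in the indexing step.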

Construction of an Audio Steganography Botnet Based on Telegram Messenger (텔레그램 메신저 기반의 오디오 스테가노그래피 봇넷 구축)

  • Jeon, Jin;Cho, Youngho
    • Journal of Internet Computing and Services / v.23 no.5 / pp.127-134 / 2022
  • Steganography is a covert technique in which secret messages are hidden in various multimedia files; it is widely exploited for cyber crime and attacks because it is very difficult for third parties other than the sender and receiver to identify the presence of hidden information in communication messages. A botnet typically consists of botmasters, bots, and C&C (Command & Control) servers, and is a botmaster-controlled network with various structures such as centralized, distributed (P2P), and hybrid. Recently, to enhance the concealment of botnets, research on stego botnets, which use SNS platforms instead of C&C servers and perform C&C communication by applying steganography techniques, has been actively conducted, but most of it has focused on image- or video-based stego botnet techniques. On the other hand, audio files such as music and recordings are also actively shared on SNS, so research on stego botnets based on audio steganography is needed. Therefore, in this study, we present the results of a comparative analysis of hiding capacity by file type and tool, obtained through experiments with a stego botnet that performs hidden C&C communication over Telegram Messenger using audio files as the cover medium.
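
As a rough illustration of the audio-steganography side of such a stego botnet, the sketch below hides bytes in the least-significant bits of 16-bit PCM WAV samples. This is a generic LSB scheme, not one of the embedding tools compared in the paper, and the Telegram C&C bot layer is omitted entirely.

```python
# Minimal LSB audio-steganography sketch (illustrative, not the botnet's actual encoder):
# hide a byte string in the least-significant bits of 16-bit PCM samples.
import wave
import numpy as np

def embed_lsb(cover_wav: str, stego_wav: str, secret: bytes) -> None:
    with wave.open(cover_wav, "rb") as w:
        params = w.getparams()
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16).copy()
    bits = np.unpackbits(np.frombuffer(secret, dtype=np.uint8))
    if len(bits) > len(samples):
        raise ValueError("secret too large for this cover file")
    samples[:len(bits)] = (samples[:len(bits)] & ~1) | bits   # overwrite LSBs
    with wave.open(stego_wav, "wb") as w:
        w.setparams(params)
        w.writeframes(samples.tobytes())

def extract_lsb(stego_wav: str, n_bytes: int) -> bytes:
    with wave.open(stego_wav, "rb") as w:
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    bits = (samples[:n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()
```

A receiving bot would call `extract_lsb` with the expected message length to recover the hidden command from a downloaded audio file.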

A Study on Efficient AI Model Drift Detection Methods for MLOps (MLOps를 위한 효율적인 AI 모델 드리프트 탐지방안 연구)

  • Ye-eun Lee;Tae-jin Lee
    • Journal of Internet Computing and Services / v.24 no.5 / pp.17-27 / 2023
  • Today, as AI (Artificial Intelligence) technology develops and becomes more practical, it is widely used in various application fields in real life. An AI model is basically trained on the statistical properties of its training data and then deployed to a system, but unexpected changes in rapidly changing data degrade the model's performance. In particular, as it becomes important to find drift signals in deployed models in order to respond to the new and unknown attacks that are constantly created in the security field, the need for lifecycle management of the entire model is gradually emerging. In general, drift can be detected through changes in the model's accuracy and error rate (loss), but this has limitations in practice: actual labels for the model's predictions are required, and the point at which drift actually occurs remains uncertain, because the error rate is strongly influenced by external environmental factors, model selection, parameter settings, and new input data, so it is difficult to determine precisely from that value alone when drift in the data actually occurs. Therefore, this paper proposes a method to detect when actual drift occurs through an anomaly analysis technique based on XAI (eXplainable Artificial Intelligence). In a test with a classification model that detects DGA (Domain Generation Algorithm) domains, anomaly scores were extracted from the SHAP (SHapley Additive exPlanations) values of post-deployment data, and the results confirmed that efficient detection of the drift point is possible.
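
A hedged sketch of the general idea, detecting drift from anomaly scores computed over SHAP values of post-deployment data, is shown below. The model type, the IsolationForest-based anomaly scorer, and all thresholds are assumptions; the paper's exact pipeline may differ.

```python
# Illustrative drift-signal sketch: score incoming batches by how anomalous their
# SHAP explanations look compared with the training-time explanations.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier, IsolationForest

def _positive_class_shap(explainer, X):
    """Normalize shap output shapes across shap versions."""
    sv = explainer.shap_values(X)
    if isinstance(sv, list):        # older shap: one array per class
        return sv[1]
    if sv.ndim == 3:                # newer shap: (samples, features, classes)
        return sv[:, :, 1]
    return sv

def fit_shap_drift_detector(model: RandomForestClassifier, X_ref: np.ndarray):
    """Fit an anomaly detector on SHAP values of the reference (training) data."""
    explainer = shap.TreeExplainer(model)
    detector = IsolationForest(random_state=0).fit(_positive_class_shap(explainer, X_ref))
    return explainer, detector

def drift_score(explainer, detector, X_batch: np.ndarray) -> float:
    """Fraction of the batch whose SHAP explanation is flagged as anomalous."""
    sv = _positive_class_shap(explainer, X_batch)
    return float(np.mean(detector.predict(sv) == -1))   # higher => likely drift
```

A sustained rise in `drift_score` over successive post-deployment batches would be treated as the drift signal.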

The Construction of QoS Integration Platform for Real-time Negotiation and Adaptation Stream Service in Distributed Object Computing Environments (분산 객체 컴퓨팅 환경에서 실시간 협약 및 적응 스트림 서비스를 위한 QoS 통합 플랫폼의 구축)

  • Jun, Byung-Taek;Kim, Myung-Hee;Joo, Su-Chong
    • The Transactions of the Korea Information Processing Society / v.7 no.11S / pp.3651-3667 / 2000
  • Recently, in Internet-based distributed multimedia environments, most researchers have focused on two rapidly growing technologies: streaming technology and distributed object technology. In particular, studies that try to integrate streaming services with distributed object technology have been progressing, and these technologies are applied to various stream service managements and protocols. However, the stream service management models proposed by existing research are insufficient for supporting the QoS of stream services. Moreover, the existing models cannot support extensibility and reusability when QoS-related functions are developed as sub-modules tailored to specific-purpose application services. To solve these problems, in this paper we suggest a QoS integration platform that can be extended and reused using distributed object technologies and that guarantees the QoS of stream services. The structure of the suggested platform consists of three components: the User Control Module (UCM), the QoS Management Module (QoSM), and Stream Objects. A Stream Object has send/receive operations for transmitting RTP packets over TCP/IP. The User Control Module (UCM) controls Stream Objects via CORBA service objects. The QoS Management Module (QoSM) maintains the QoS of the stream service between the UCMs on client and server. As QoS control methodologies, procedures for resource monitoring, negotiation, and resource adaptation are executed through interactions among the components mentioned above. To construct this QoS integration platform, we first implemented the modules mentioned above independently, and then used IDL to define the interfaces among them so as to support platform independence, interoperability, and portability based on CORBA. The platform is built using OrbixWeb 3.1c following the CORBA specification on Solaris 2.5/2.7, the Java language, the Java Media Framework API 2.0, Mini-SQL 1.0.16, and multimedia equipment. To verify the platform functionally, we show the execution results of each module mentioned above and numerical data obtained from the QoS control procedures in the client's and server's GUIs while a stream service is running on our platform.

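As a loose illustration of the monitoring/negotiation/adaptation cycle described above, the sketch below models a tiny QoS control loop in plain Python. The real platform uses CORBA objects and RTP streams; the class names, contract fields, and thresholds here are assumptions for illustration only.

```python
# A highly simplified QoS monitor/negotiate/adapt loop in the spirit of the
# platform described above (illustrative assumptions, not the actual UCM/QoSM design).
from dataclasses import dataclass

@dataclass
class QoSContract:
    target_fps: float   # negotiated frame rate
    min_fps: float      # lowest acceptable frame rate before renegotiation

class QoSManager:
    def __init__(self, contract: QoSContract):
        self.contract = contract

    def monitor_and_adapt(self, measured_fps: float) -> str:
        """Return the adaptation action for the current measurement."""
        if measured_fps >= self.contract.target_fps:
            return "keep"            # resources are sufficient
        if measured_fps >= self.contract.min_fps:
            return "degrade"         # e.g. lower resolution or frame rate
        # Below the agreed minimum: renegotiate a cheaper contract with the sender.
        self.contract.target_fps = max(self.contract.min_fps, measured_fps)
        return "renegotiate"

# Example: a stream measured at 12 fps against a 24 fps contract triggers degradation.
manager = QoSManager(QoSContract(target_fps=24.0, min_fps=10.0))
print(manager.monitor_and_adapt(12.0))   # -> "degrade"
```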

An exploratory study on Social Network Services in the context of Web 2.0 period (웹 2.0 시대의 SNS(Social Network Service)에 관한 고찰)

  • Lee, Seok-Yong;Jung, Lee-Sang
    • Management & Information Systems Review / v.29 no.4 / pp.143-167 / 2010
  • Diverse research topics relating to Social Network Services (SNS) need to be examined, such as social affective factors in relationships among Internet users, the social capital value of SNS, comparisons of the attributes that lead users to participate in SNS, users' lifestyles and preferences, and exploration of the potential of SNS as social capital. However, the research undertaken so far only considers facts at a particular period of the changing computing environment. Accordingly, an integrated view of which technical, social, and business characteristics and attributes need to be acknowledged is required. The purpose of this study is to analyze the evolving attributes and characteristics of SNS from Web 1.0 to Mobile Web 2.0, through the Web 2.0 and Mobile 1.0 periods. Based on the relevant literature, the attributes that drive the changing technological, social, and business aspects of SNS have been developed and analyzed. This exploratory study analyzed the major attributes and the relationships between SNS and users under the changing paradigms that represent each period. It classified and chronicled each period by its representative paradigm and deduced the attributes by considering three aspects: technology, society, and business administration. The major findings of this study are as follows. First, in the technological aspect, the Web-based computing environment has changed into a platform for users. In the early stages users could only read, listen to, and view information through web sites, but now users can create, modify, and distribute all kinds of information. Second, in the social aspect, the few knowledge producers of web services have been replaced by the collective intelligence of groups of people. Information authority has been distributed and there is no limit to its spread. Many businesses have recognized the potential of SNS and are considering how to utilize these advantages as channels of promotion and marketing. Third, the conventional marketing channel has been supplemented by word of mouth through SNS. The market for innovative mobile technology such as smartphones, which provide convenience and accessibility to customers, has grown, and new opportunities to build friendly relationships between businesses and customers have been created as new marketing chances. Finally, the role of the consumer has changed into the leading role of a prosumer. Users can create, modify, and distribute information, performing the dual roles of customer and producer.


An Algorithm to Detect P2P Heavy Traffic based on Flow Transport Characteristics (플로우 전달 특성 기반의 P2P 헤비 트래픽 검출 알고리즘)

  • Choi, Byeong-Geol;Lee, Si-Young;Seo, Yeong-Il;Yu, Zhibin;Jun, Jae-Hyun;Kim, Sung-Ho
    • Journal of KIISE: Information Networking / v.37 no.5 / pp.317-326 / 2010
  • Nowadays, the transmission bandwidth of network traffic is increasing and its types are becoming more varied, such as peer-to-peer (P2P) and real-time video, because the distributed computing environment is spreading and various network-based applications are being developed. However, as P2P traffic occupies a large share of Internet backbone traffic, the transmission bandwidth and quality of service (QoS) of other network applications such as web, FTP, and real-time video cannot be guaranteed. Previous research suggested the port-based technique, which checks well-known port numbers, and the Deep Packet Inspection (DPI) technique, which checks packet payloads, for dealing with the problem of P2P traffic; however, these methods are difficult to apply to the detection of P2P traffic because P2P applications do not use well-known port numbers and packet payloads may be encrypted. The proposed algorithm for identifying P2P heavy traffic, based on flow transport parameters and behavioral characteristics, can overcome the problems of the port-based and DPI techniques. The focus of this paper is to identify P2P heavy-traffic flows rather than all P2P traffic. P2P traffic consists of two steps: i) searching for peers that hold the desired content, and ii) downloading the content from one or more peers. We define P2P flow patterns based on these features of P2P applications and then implement a system to classify P2P heavy traffic.
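
A toy sketch of flow-based classification in the spirit of this approach follows; it flags heavy P2P-like flows from transport-level features only (no port numbers, no payload inspection). The feature set and thresholds are illustrative assumptions, not the paper's actual rules.

```python
# Toy heuristic for spotting heavy P2P-like behaviour from flow transport features.
from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str
    bytes_sent: int
    bytes_received: int
    distinct_peers: int       # number of remote endpoints contacted by src_ip
    duration_sec: float

def is_p2p_heavy(flow: Flow,
                 min_rate_bps: float = 1_000_000,
                 min_peers: int = 20,
                 max_asymmetry: float = 5.0) -> bool:
    """Heuristic: high sustained rate, many peers, roughly symmetric up/down traffic."""
    total = flow.bytes_sent + flow.bytes_received
    rate = 8 * total / max(flow.duration_sec, 1e-6)
    symmetric = max(flow.bytes_sent, 1) / max(flow.bytes_received, 1) < max_asymmetry
    return rate >= min_rate_bps and flow.distinct_peers >= min_peers and symmetric

# Example: a host exchanging data with many peers at a high, symmetric rate.
f = Flow("10.0.0.5", bytes_sent=300_000_000, bytes_received=280_000_000,
         distinct_peers=45, duration_sec=600)
print(is_p2p_heavy(f))   # -> True
```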

GWB: An integrated software system for Managing and Analyzing Genomic Sequences (GWB: 유전자 서열 데이터의 관리와 분석을 위한 통합 소프트웨어 시스템)

  • Kim In-Cheol;Jin Hoon
    • Journal of Internet Computing and Services / v.5 no.5 / pp.1-15 / 2004
  • In this paper, we explain the design and implementation of GWB (Gene WorkBench), a web-based, integrated system for efficiently managing and analyzing genomic sequences. Most existing software systems that handle genomic sequences rarely provide both management and analysis facilities. The analysis programs also tend to be stand-alone tools that cover only a single function or some part of the required functions. Moreover, these programs are widely distributed over the Internet and require different execution environments. Because a great deal of manual and conversion work is required to use these programs together, many life science researchers suffer considerable inconvenience. In order to overcome the problems of existing systems and provide a more convenient one that supports genomic research effectively, this paper integrates both management and analysis facilities into a single system called GWB. The most important issues in the design of GWB are how to integrate many different analysis programs into a single software system, and how to provide the data or databases of different formats required to run these programs. To address these issues, GWB integrates different analysis programs through common input/output interfaces called wrappers, suggests a common format for genomic sequence data, organizes local databases consisting of a relational database and an indexed sequential file, and provides facilities for converting data among several well-known formats and exporting local databases into XML files.

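The wrapper idea described above, one common input/output interface in front of heterogeneous analysis programs, might look roughly like the sketch below; the command line and the `HypotheticalAligner` class are placeholders, not GWB's actual configuration.

```python
# Minimal sketch of a common wrapper interface: every external analysis program is
# driven through one run() method that takes and returns sequences in one agreed
# format (plain FASTA text here).
import subprocess
import tempfile
from abc import ABC, abstractmethod

class AnalysisWrapper(ABC):
    """Common input/output interface for heterogeneous analysis programs."""

    @abstractmethod
    def command(self, input_path: str, output_path: str) -> list[str]:
        """Build the command line for the wrapped program."""

    def run(self, fasta_text: str) -> str:
        with tempfile.NamedTemporaryFile("w", suffix=".fasta", delete=False) as fin:
            fin.write(fasta_text)
            in_path = fin.name
        out_path = in_path + ".out"
        subprocess.run(self.command(in_path, out_path), check=True)
        with open(out_path) as result:
            return result.read()

class HypotheticalAligner(AnalysisWrapper):
    def command(self, input_path: str, output_path: str) -> list[str]:
        # Placeholder command; a real wrapper would supply the program's own flags.
        return ["my_aligner", "--in", input_path, "--out", output_path]
```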

Location Service Modeling of Distributed GIS for Replication Geospatial Information Object Management (중복 지리정보 객체 관리를 위한 분산 지리정보 시스템의 위치 서비스 모델링)

  • Jeong, Chang-Won;Lee, Won-Jung;Lee, Jae-Wan;Joo, Su-Chong
    • The KIPS Transactions: Part D / v.13D no.7 s.110 / pp.985-996 / 2006
  • As Internet technologies develop, the geographic information system environment is changing to Web-based services. Since the geospatial information of existing Web-GIS services was developed independently, there is no interoperability to support diverse map formats, and the same geospatial information object may be duplicated across separate GISs for various purposes. Intelligent strategies are therefore needed for optimal replica selection, which requires identifying replicated geospatial information objects. For the management of replicated objects, OMG, GLOBE, and Grid computing have suggested related frameworks, but these studies do not go far enough for the case of geospatial information objects. This paper presents a location service model that supports optimal selection among replicas and the management of replicated objects. It consists of three main services. The first is the binding service, which stores the names and properties of objects defined by users according to service offers and enables clients to search over those offers. The second is the location service, which manages location information through contact records and independently obtains performance information for each contact address via the system's Load Sharing Facility. The third is the intelligent selection service, which obtains basic and performance information from the binding and location services and provides both faster access and better performance characteristics by applying rules from an intelligent model based on rough sets. To show the validity of the location service model, this paper presents the location service execution process through a graphical user interface.
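
As a simplified illustration of the intelligent selection service, the sketch below picks a replica from contact records using basic load and latency rules. The paper derives its rules with rough sets, so the specific rules, field names, and thresholds here are assumptions for illustration.

```python
# Simplified replica selection over contact records (illustrative rules only).
from dataclasses import dataclass

@dataclass
class ReplicaRecord:
    contact_address: str   # where the replicated geospatial object can be reached
    cpu_load: float        # 0.0 - 1.0, reported by the host's monitoring facility
    latency_ms: float      # measured round-trip time from the client

def select_replica(records: list[ReplicaRecord]) -> ReplicaRecord:
    """Prefer lightly loaded replicas; among those, pick the lowest latency."""
    candidates = [r for r in records if r.cpu_load < 0.7] or records
    return min(candidates, key=lambda r: (r.latency_ms, r.cpu_load))

replicas = [
    ReplicaRecord("gis-a.example.org:9000", cpu_load=0.85, latency_ms=12.0),
    ReplicaRecord("gis-b.example.org:9000", cpu_load=0.30, latency_ms=25.0),
]
print(select_replica(replicas).contact_address)   # -> gis-b (gis-a is overloaded)
```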

Sea Fog Level Estimation based on Maritime Digital Image for Protection of Aids to Navigation (항로표지 보호를 위한 디지털 영상기반 해무 강도 측정 알고리즘)

  • Ryu, Eun-Ji;Lee, Hyo-Chan;Cho, Sung-Yoon;Kwon, Ki-Won;Im, Tae-Ho
    • Journal of Internet Computing and Services / v.22 no.6 / pp.25-32 / 2021
  • In line with future changes in the marine environment, Aids to Navigation have been used in various fields and their use is increasing. The term "Aids to Navigation" means an aid to navigation prescribed by Ordinance of the Ministry of Oceans and Fisheries which shows navigating ships the position and direction of the ships, the position of obstacles, etc. through lights, shapes, colors, sound, radio waves, etc. The use of Aids to Navigation is now also being transformed into a means of identifying and recording the marine weather environment by mounting various sensors and cameras. However, Aids to Navigation are mainly lost through collisions with ships, and in particular, safety accidents occur because of poor observation visibility due to sea fog. The inflow of sea fog poses risks to ports and sea transportation, and sea fog is not easy to predict because its likelihood of occurrence varies greatly by time and region. In addition, Aids to Navigation are difficult to manage individually because they are distributed throughout the sea. To solve this problem, this paper aims to identify the marine weather environment by approximately estimating the sea fog level from images taken by cameras mounted on Aids to Navigation, and thereby to reduce safety accidents caused by weather. Instead of optical and temperature sensors, which are difficult to install and expensive, the sea fog level is measured using ordinary images from the cameras mounted on Aids to Navigation. Furthermore, as a preliminary study toward real-time sea fog level estimation in various seas, sea fog level criteria are presented using the haze model and the Dark Channel Prior. A specific threshold value is set in the image through the Dark Channel Prior (DCP), and based on this, the number of fog-free pixels in the entire image is counted to estimate the sea fog level. Experimental results demonstrate the possibility of estimating the sea fog level using a synthetic haze image dataset and a real haze image dataset.
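
The Dark Channel Prior step described above can be sketched as follows: compute the dark channel and use the fraction of "fog-free" pixels (dark-channel values below a threshold) as a coarse sea-fog level. The patch size, threshold, and level boundaries are assumptions, not the paper's calibrated values.

```python
# Minimal Dark Channel Prior sketch for coarse sea-fog level estimation.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image: np.ndarray, patch: int = 15) -> np.ndarray:
    """image: HxWx3 array in [0, 1]. Returns the per-pixel dark channel."""
    per_pixel_min = image.min(axis=2)                 # darkest colour channel per pixel
    return minimum_filter(per_pixel_min, size=patch)  # local minimum over a patch

def sea_fog_level(image: np.ndarray, clear_threshold: float = 0.25) -> int:
    """Map the fraction of fog-free pixels to a coarse level 0 (clear) .. 3 (dense fog)."""
    dc = dark_channel(image)
    clear_ratio = float(np.mean(dc < clear_threshold))
    if clear_ratio > 0.8:
        return 0
    if clear_ratio > 0.5:
        return 1
    if clear_ratio > 0.2:
        return 2
    return 3
```

In a fog-free scene most pixels have a dark channel near zero, so `clear_ratio` is high; as haze thickens, the dark channel rises everywhere and the estimated level increases.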