• Title/Summary/Keyword: computing model

Search Results: 3,349

Rethinking OTT regulation based on the global OTT market trends and regulation cases (OTT 서비스의 유형과 주요국의 규제 정책에 대한 고찰)

  • Kim, Suwon;Kim, Daewon
    • Journal of Internet Computing and Services / v.20 no.6 / pp.143-156 / 2019
  • Discussion on OTT regulation has become fiercer as OTT services' impact on the global and domestic media markets has grown rapidly. In South Korea, it is argued that, on the basis of the similarity between television programs and OTT video content, OTT needs to be regulated in order to protect fair competition and to control sociocultural effects. In many of these discussions, developed countries' cases have been cited in support of OTT regulation. In this paper, we first analyzed global OTT market trends based on our own categorization of OTT services, and then assessed the validity of applying the foreign cases to the current OTT regulation debate in Korea. We proposed six OTT types (aggregation, mediation, mediation-aggregation, multi-screen, outlet, and outlet-linear) that simultaneously consider the service operator's origin, business model, content format, and content delivery. These services have been evolving continuously, and the OTT market has become increasingly competitive, especially around content differentiation. Regulators must be wary of hastily introducing competition regulation into a dynamically innovating OTT market. The foreign cases, including the US, the EU, the UK, and Japan, hardly seem to be appropriate bases for strengthening OTT regulation; rather, they focused more on promoting competition in the domestic media market and enriching the content ecosystem. Therefore, we need to consider revising the outdated media regulation frameworks instead of fitting OTT under them, and to recognize the priority of securing practical jurisdiction over global service providers before capturing local players in the conventional regulation systems.

An Efficient BotNet Detection Scheme Exploiting Word2Vec and Accelerated Hierarchical Density-based Clustering (Word2Vec과 가속화 계층적 밀집도 기반 클러스터링을 활용한 효율적 봇넷 탐지 기법)

  • Lee, Taeil;Kim, Kwanhyun;Lee, Jihyun;Lee, Suchul
    • Journal of Internet Computing and Services / v.20 no.6 / pp.11-20 / 2019
  • Numerous enterprises, organizations, and individual users are exposed to large DDoS (Distributed Denial of Service) attacks. DDoS attacks are performed through a botnet, which is composed of a number of malware-infected computers (zombie PCs) and a special computer that controls the zombie PCs within a hierarchical chain-of-command system. To detect malware, detection software or a vaccine program must identify the malware signature through in-depth analysis, and these signatures need to be updated in advance, which is time-consuming and costly. In this paper, we propose a botnet detection scheme that does not require periodic signature updates, using an artificial neural network model. The proposed scheme exploits Word2Vec and accelerated hierarchical density-based clustering. The botnet detection performance of the proposed method was evaluated using the CTU-13 dataset. The experimental results show a detection rate of 99.9%, which outperforms the conventional method.
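
A minimal sketch of the pipeline described above, assuming network flows are encoded as token sequences (the paper's exact feature extraction for CTU-13 is not given here) and using the gensim Word2Vec and hdbscan packages as stand-ins for the authors' accelerated hierarchical density-based clustering:

```python
# Illustrative sketch, not the authors' implementation. Flow tokenization,
# vector size, and cluster parameters are assumptions.
import numpy as np
from gensim.models import Word2Vec
import hdbscan

# Each network flow becomes a token sequence (assumed feature encoding).
flows = [
    ["tcp", "port_80", "dur_short", "bytes_small"],
    ["tcp", "port_6667", "dur_long", "bytes_small"],   # IRC-like C&C traffic
    ["udp", "port_53", "dur_short", "bytes_small"],
] * 50

# Learn token embeddings from the flow corpus.
w2v = Word2Vec(sentences=flows, vector_size=32, window=3, min_count=1, epochs=20)

# Represent each flow as the mean of its token vectors.
X = np.array([np.mean([w2v.wv[t] for t in f], axis=0) for f in flows])

# Hierarchical density-based clustering; botnet flows are expected to form
# dense clusters, while benign traffic is more dispersed.
clusterer = hdbscan.HDBSCAN(min_cluster_size=10)
labels = clusterer.fit_predict(X)
print("clusters found:", set(labels))
```

The intuition such a scheme exploits is that command-and-control traffic tends to form tight clusters in the embedding space without any signature database being consulted.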

A Method for Body Keypoint Localization based on Object Detection using the RGB-D information (RGB-D 정보를 이용한 객체 탐지 기반의 신체 키포인트 검출 방법)

  • Park, Seohee;Chun, Junchul
    • Journal of Internet Computing and Services / v.18 no.6 / pp.85-92 / 2017
  • Recently, in the field of video surveillance, deep-learning-based methods have been applied to detecting a moving person in video and analyzing the detected person's behavior. Human activity recognition, one field of this intelligent image analysis technology, detects the object and then detects body keypoints to recognize the behavior of the detected object. In this paper, we propose a method for body keypoint localization based on object detection using RGB-D information. First, the moving object is segmented and detected from the background using the color and depth information generated by the two cameras. The input image, generated by rescaling the detected object region using the RGB-D information, is fed to Convolutional Pose Machines (CPM) for single-person pose estimation. CPM is used to generate belief maps for 14 body parts per person and to detect body keypoints based on those belief maps. This method provides an accurate object region for keypoint detection and can be extended from single-person to multi-person body keypoint localization by integrating the individual results. In the future, it will be possible to build a human pose estimation model from the detected keypoints and contribute to the field of human activity recognition.
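
As an illustration of the final localization step, the sketch below assumes the belief maps from a CPM-style network are available as a (14, H, W) array for one detected person and reads each keypoint off as the peak of its map, rescaled back to the detection window; the map resolution and confidence threshold are assumptions, not the paper's values.

```python
# Hedged sketch: peak-picking on belief maps to obtain body keypoints.
import numpy as np

def keypoints_from_belief_maps(belief_maps, crop_box, conf_threshold=0.1):
    """belief_maps: (14, H, W) array; crop_box: (x, y, w, h) of the detected person."""
    x0, y0, w, h = crop_box
    n_parts, H, W = belief_maps.shape
    keypoints = []
    for p in range(n_parts):
        bm = belief_maps[p]
        iy, ix = np.unravel_index(np.argmax(bm), bm.shape)  # peak of this part's map
        if bm[iy, ix] < conf_threshold:
            keypoints.append(None)          # part not confidently detected
            continue
        # Map from belief-map coordinates back to image coordinates.
        keypoints.append((x0 + ix * w / W, y0 + iy * h / H))
    return keypoints

# Example with a dummy belief-map stack for one detected person.
maps = np.random.rand(14, 46, 46)
print(keypoints_from_belief_maps(maps, crop_box=(120, 60, 184, 368))[:3])
```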

Implementation of Policy based In-depth Searching for Identical Entities and Cleansing System in LOD Cloud (LOD 클라우드에서의 연결정책 기반 동일개체 심층검색 및 정제 시스템 구현)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Internet Computing and Services / v.19 no.3 / pp.67-77 / 2018
  • This paper suggests that each LOD establish its own link policy and publish it to the LOD cloud to provide identity among entities in different LODs. For specifying the link policy, we also propose a vocabulary set founded on the RDF model. We implemented a Policy-based In-depth Searching and Cleansing (PISC) system that performs in-depth searching across LODs by referencing the link policies. PISC has been published on GitHub. Because LODs participate in the LOD cloud voluntarily, the degree of entity identity needs to be evaluated. PISC therefore evaluates the identities and cleanses the searched entities, keeping only those that exceed the user's criterion for the entity identity level. As search results, PISC provides an entity's detailed contents collected from diverse LODs, together with an ontology customized to the content. A simulation of PISC was performed on five DBpedia LODs. We found that a similarity of 0.9 between the objects of source and target RDF triples provided an appropriate expansion ratio and inclusion ratio of the search results. For sufficient identity of the searched entities, three or more target LODs need to be specified in the link policy.
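
The abstract reports that an object similarity of 0.9 between source and target RDF triples gave good expansion and inclusion ratios; the exact similarity measure PISC uses is not stated here, so the sketch below substitutes a plain string-similarity ratio purely to illustrate how such a threshold-based identity check might look.

```python
# Hedged sketch only: the similarity measure and triple layout are assumptions.
from difflib import SequenceMatcher

def object_similarity(obj_a: str, obj_b: str) -> float:
    return SequenceMatcher(None, obj_a.lower(), obj_b.lower()).ratio()

def same_entity(source_triples, target_triples, threshold=0.9):
    """Compare objects of triples that share a predicate across two LODs."""
    scores = []
    for pred, src_obj in source_triples.items():
        if pred in target_triples:
            scores.append(object_similarity(src_obj, target_triples[pred]))
    return bool(scores) and sum(scores) / len(scores) >= threshold

src = {"foaf:name": "Seoul", "dbo:country": "South Korea"}
tgt = {"foaf:name": "Seoul", "dbo:country": "Republic of Korea"}
# The differing country labels pull the average similarity below the strict 0.9 criterion.
print(same_entity(src, tgt))
```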

The Effect on the Characteristics of Urban Storm Runoff due to the Space Allocation of Design Rainfall and the Partition of the Subbasin (도시유역에서의 강우 공간분포 및 소유역분할이 유출특성에 미치는 영향)

  • Lee, Jong-Tae;Lee, Sang-Tae
    • Journal of Korea Water Resources Association / v.30 no.2 / pp.177-191 / 1997
  • The influences of the spatial allocation of design rainfall and of subbasin partitioning on the characteristics of urban storm runoff were investigated for six drainage basins by applying the SWMM model. The peak discharge showed a deviation of -54.68∼18.77% when the composed Huff quantiles were applied to two zones, divided into the upper and lower regions of the basin, compared with the case of using a uniform rainfall distribution over the whole drainage area. Therefore, adopting the spatial distribution of the design rainfall would help decrease the flood risk. The effect of partitioning the drainage area on the computed result varies with the surface characteristics of each basin, such as slope and imperviousness ratio, but results closer to the measured values are obtained as the subbasins are made more detailed. If the concepts of skewness and area ratio are used when determining the width of a subbasin, the computed result can be improved even with fewer subbasins. Reasonable results, within a relative error of 25% of the measured values, can be expected when the basin is divided into three or more subbasins and the total urban drainage area is less than 10 $\textrm{km}^2$.
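
For reference, the acceptance criterion quoted above (a relative error within 25% of the measured peak discharge) reduces to simple arithmetic; the sketch below uses hypothetical discharge values, not figures from the paper.

```python
# Illustrative only: relative error of simulated peak discharge against a
# measured value, the criterion used to judge a subbasin partition.
def relative_error(simulated_peak, measured_peak):
    return (simulated_peak - measured_peak) / measured_peak * 100.0

measured = 42.0        # measured peak discharge, m^3/s (hypothetical value)
for n_subbasins, q_sim in [(1, 23.5), (3, 35.8), (6, 39.9)]:
    err = relative_error(q_sim, measured)
    print(f"{n_subbasins} subbasins: peak {q_sim} m^3/s, error {err:+.1f} %")
```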

The development of parallel computation method for the fire-driven-flow in the subway station (도시철도역사에서 화재유동에 대한 병렬계산방법연구)

  • Jang, Yong-Jun;Lee, Chang-Hyun;Kim, Hag-Beom;Park, Won-Hee
    • Proceedings of the KSR Conference / 2008.06a / pp.1809-1815 / 2008
  • This study simulated the fire-driven flow in an underground station using a parallel processing method. The fire analysis program FDS (Fire Dynamics Simulator), which is based on LES (Large Eddy Simulation), was used, and a 6-node parallel cluster, with two 3.0 GHz processors installed per node, was used for the parallel computation. The simulation model was based on the Kwangju-geumnan underground subway station, and the total simulation time was set to 600 s. First, the whole underground passage was divided into 1-mesh and 8-mesh configurations in order to compare the parallel computation of a single CPU and multiple CPUs. With a number of grid points ($15{\times}10^6$) larger than a single CPU can handle, the fire-driven flow from the center of the platform and from the train itself was analyzed. As a result, there was almost no difference between the single-CPU results and the multi-CPU results. A $3{\times}10^6$ grid-point case was employed to test the computing time: the 2-CPU and 7-CPU computations were about two times and five times faster than a single CPU, respectively. This study confirmed that the limitation of a single CPU can be overcome by using parallel computation.
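
The reported scaling (roughly two times faster with 2 CPUs and five times faster with 7 CPUs) can be summarized as speedup and parallel efficiency; the wall-clock time below is a placeholder, since the paper's timing values are not quoted here.

```python
# Illustrative arithmetic only; t1 is a hypothetical single-CPU wall time.
def speedup_and_efficiency(t_serial, t_parallel, n_cpu):
    s = t_serial / t_parallel        # speedup relative to one CPU
    return s, s / n_cpu              # efficiency = speedup per CPU

t1 = 600.0
for n_cpu, t_par in [(2, t1 / 2.0), (7, t1 / 5.0)]:
    s, e = speedup_and_efficiency(t1, t_par, n_cpu)
    print(f"{n_cpu} CPUs: speedup {s:.1f}x, parallel efficiency {e:.2f}")
```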

Design and Implementation of Initial OpenSHMEM Based on PCI Express (PCI Express 기반 OpenSHMEM 초기 설계 및 구현)

  • Joo, Young-Woong;Choi, Min
    • KIPS Transactions on Computer and Communication Systems / v.6 no.3 / pp.105-112 / 2017
  • PCI Express is a bus technology that connects the processor and peripheral I/O devices; it is widely used as an industry standard because of its high speed and low power consumption. In addition, PCI Express is a system interconnect technology, like Ethernet and InfiniBand, used in high-performance computing and computer clusters. The PGAS (partitioned global address space) programming model is often used to implement one-sided RDMA (remote direct memory access) across multi-host systems such as computer clusters. In this paper, we design and implement an OpenSHMEM API based on PCI Express, maintaining the existing features of OpenSHMEM, in order to implement RDMA over PCI Express. We evaluate the implemented OpenSHMEM API through a matrix multiplication example on a system in which PCs are connected via the NTB (non-transparent bridge) technology of PCI Express. PCI Express interconnection networks are currently very expensive and not yet widely available to the general public. Nevertheless, we actually implemented and evaluated a PCI Express-based interconnection network on an RDK evaluation board. In addition, we implemented the OpenSHMEM software stack, which has drawn great interest recently.
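
For readers unfamiliar with the PGAS model, the sketch below mimics OpenSHMEM's one-sided put/get semantics with Python's standard shared_memory module on a single host; it is a conceptual analogue only, not the OpenSHMEM C API or the PCI Express NTB transport described in the paper.

```python
# Conceptual analogue of one-sided access to a "symmetric heap"; names such
# as shmem_put/shmem_get are illustrative wrappers, not a real binding.
import numpy as np
from multiprocessing import shared_memory

# A region every processing element can address directly.
shm = shared_memory.SharedMemory(create=True, size=8 * 16)
remote = np.ndarray((16,), dtype=np.float64, buffer=shm.buf)

def shmem_put(dest, src, offset=0):
    """One-sided write into the remote region (no receive call on the target)."""
    dest[offset:offset + len(src)] = src

def shmem_get(src, n, offset=0):
    """One-sided read from the remote region."""
    return np.array(src[offset:offset + n])

shmem_put(remote, np.arange(4, dtype=np.float64))
print(shmem_get(remote, 4))      # [0. 1. 2. 3.]

shm.close()
shm.unlink()
```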

Performance Optimization Strategies for Fully Utilizing Apache Spark (아파치 스파크 활용 극대화를 위한 성능 최적화 기법)

  • Myung, Rohyoung;Yu, Heonchang;Choi, Sukyong
    • KIPS Transactions on Computer and Communication Systems / v.7 no.1 / pp.9-18 / 2018
  • Enhancing the performance of big data analytics in distributed environments has become an important issue because most big-data-related applications, such as machine learning techniques and streaming services, generally utilize distributed computing frameworks. Thus, optimizing the performance of such applications on Spark has been actively researched. Optimizing application performance in a distributed environment is challenging because it requires not only optimizing the applications themselves but also tuning the distributed system's configuration parameters. Although prior research has made a great effort to improve execution performance, most of it focused on only one of three performance optimization aspects (application design, system tuning, or hardware utilization) and therefore could not orchestrate these aspects together. In this paper, we analyze and model the application processing procedure of Spark in depth. Based on the analysis, we propose performance optimization schemes for each step of the procedure: the inner stage and the outer stage. We also propose an appropriate partitioning mechanism by analyzing the relationship between partitioning parallelism and application performance. We applied these three performance optimization schemes to WordCount, PageRank, and K-means, which are basic big data analytics workloads, and found nearly 50% performance improvement when all of the schemes were applied.
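
As a concrete illustration of the partitioning point above, the following PySpark sketch pins the partition count for a WordCount job; the input path, application name, and the value of 64 are illustrative assumptions, not the tuned values from the paper.

```python
# Minimal sketch, assuming a PySpark environment is available.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("wordcount-partition-demo")
         .config("spark.sql.shuffle.partitions", "64")   # system-tuning knob
         .getOrCreate())

# Read with an explicit minimum partition count to control input parallelism.
lines = spark.sparkContext.textFile("hdfs:///data/corpus.txt", minPartitions=64)

counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b, numPartitions=64))  # shuffle width

counts.saveAsTextFile("hdfs:///out/wordcount")
spark.stop()
```

The design point is that the partition count should track both the cluster's core count and the data volume; too few partitions underuse the executors, while too many inflate scheduling and shuffle overhead.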

An Efficient Walkthrough from Two Images using Spidery Mesh Interface and View Morphing (Spidery 매쉬 인터페이스와 뷰 모핑을 이용한 두 이미지로부터의 효율적인 3차원 애니메이션)

  • Cho, Hang-Shin;Kim, Chang-Hun
    • Journal of KIISE: Computing Practices and Letters / v.7 no.2 / pp.132-140 / 2001
  • This paper proposes an efficient walkthrough animation from two images of the same scene. To make animation easy and fast, Tour Into the Picture (TIP) enables a walkthrough animation from a single image but lacks realism for the foreground objects when the viewpoint moves from side to side, while view morphing uses only 2D transitions between two images but restricts the camera path to the line between the two views. By combining the advantages of these two image-based techniques, this paper suggests a new virtual navigation technique that enables natural scene transformation when the viewpoint changes in the side-to-side direction as well as in the depth direction. In our method, view morphing is employed only for the foreground objects, and the background scene, which is perceived less attentively, is mapped onto a cube-like 3D model as in TIP, so as to save laborious 3D reconstruction costs and improve visual realism at the same time. To do this, we newly define a camera transformation between the two images from the relationship between the spidery mesh transformation and its corresponding 3D view change. The resulting animation shows that our method creates a realistic 3D virtual navigation using a simple interface.
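
The core of the morphing step is linear interpolation of corresponding foreground points (and colors) as the virtual viewpoint slides between the two views; the toy sketch below shows only that interpolation, omitting the prewarp/postwarp with the camera transformation the paper derives from the spidery mesh.

```python
# Toy sketch of the in-between step of view morphing; point values are made up.
import numpy as np

def morph_points(pts0, pts1, s):
    """Linear in-between positions for interpolation parameter s in [0, 1]."""
    return (1.0 - s) * pts0 + s * pts1

pts0 = np.array([[120.0, 80.0], [200.0, 95.0]])   # matched points in view 0
pts1 = np.array([[140.0, 78.0], [215.0, 90.0]])   # the same points in view 1

for s in (0.0, 0.5, 1.0):
    print(s, morph_points(pts0, pts1, s).tolist())
```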

Analysis of Natural Ventilation Rates of Venlo-type Greenhouse Built on Reclaimed Lands using CFD (전산유체역학을 통한 간척지 내 벤로형 온실의 자연환기량 분석)

  • Lee, Sang-Yeon;Lee, In-Bok;Kwon, Kyeong-Seok;Ha, Tae-Hwan;Yeo, Uk-Hyeon;Park, Se-Jun;Kim, Rack-Woo;Jo, Ye-Seul;Lee, Seung-No
    • Journal of The Korean Society of Agricultural Engineers / v.57 no.6 / pp.21-33 / 2015
  • Recently, the Korean government announced a new development plan for a large-scale greenhouse complex on reclaimed lands. The wind environment of reclaimed land is entirely different from that of inland areas, and many standard references for ventilation design do not include a qualitative standard for natural ventilation. In this study, natural ventilation rates were analyzed to suggest a standard for the ventilation design of Venlo-type greenhouses built on reclaimed land. CFD (Computational Fluid Dynamics) simulation models were designed according to the number of spans, wind conditions, and vent openings. The wind profile at a reclaimed land site was designed using the ESDU (Engineering Sciences Data Unit) code. Using the designed CFD simulation model, ventilation rates were computed with the mass flow rate and the tracer gas decay method. The computed natural ventilation rates were then evaluated by comparison with the ventilation requirements. As a result, ventilation rates decreased as the number of spans increased and increased linearly with wind speed. When the wind speed was $1.0\;m{\cdot}s^{-1}$, only the side vents were open, and the wind direction was $45^{\circ}$, the homogeneity of the ventilation rate at 0~1 m height was the worst. Finally, a chart for computing the natural ventilation rate was suggested; it is expected to be used for establishing a standard for ventilation design.
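
The tracer gas decay method mentioned above has a simple closed form under a well-mixed assumption: concentration decays as $C(t) = C_0 e^{-Nt}$, so the air change rate $N$ follows from two concentration readings. The sketch below uses hypothetical readings for illustration, not data from the paper.

```python
# Hedged sketch of the tracer gas decay calculation (well-mixed assumption).
import math

def air_changes_per_hour(c_start, c_end, elapsed_hours):
    """N = ln(C0 / Ct) / t, in air changes per hour."""
    return math.log(c_start / c_end) / elapsed_hours

# Hypothetical tracer concentrations above background (ppm) over half an hour.
c0, ct, t = 1000.0, 368.0, 0.5
print(f"N = {air_changes_per_hour(c0, ct, t):.2f} ACH")   # about 2 ACH
```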