• Title/Summary/Keyword: processing architecture

Search Result 2,746, Processing Time 0.035 seconds

Efficient Privacy-Preserving Duplicate Elimination in Edge Computing Environment Based on Trusted Execution Environment (신뢰실행환경기반 엣지컴퓨팅 환경에서의 암호문에 대한 효율적 프라이버시 보존 데이터 중복제거)

  • Koo, Dongyoung
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.9
    • /
    • pp.305-316
    • /
    • 2022
  • With the flood of digital data owing to the Internet of Things and big data, cloud service providers that process and store vast amounts of data from multiple users can apply duplicate data elimination techniques for efficient data management. The edge computing paradigm, introduced as an extension of cloud computing, improves the user experience by alleviating problems such as network congestion at a central cloud server and reduced computational efficiency. However, adding a new edge device that is not entirely trustworthy may increase computational complexity, because additional cryptographic operations are required to preserve data privacy during duplicate identification and elimination. In this paper, we propose an efficiency-improved, privacy-preserving duplicate data elimination protocol with an optimized user-edge-cloud communication framework that utilizes a trusted execution environment. Direct sharing of secret information between the user and the central cloud server minimizes the computational load on edge devices and enables the use of efficient encryption algorithms on the cloud service provider's side. Users also benefit by offloading data to edge devices, which enables duplicate elimination and independent activity. Experiments show the efficiency of the proposed scheme, including up to 78x less computation during the data outsourcing process compared to a previous study that does not exploit a trusted execution environment in an edge computing architecture.
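The deduplication idea above can be sketched minimally: identical plaintexts must map to identical tags and ciphertexts so the cloud can detect duplicates without seeing the data. The sketch below is illustrative only, with hypothetical names; it uses a hash-derived keystream as a stand-in for the convergent encryption a real TEE-based scheme would perform, and it omits attestation, secret sharing, and the edge tier entirely.

```python
import hashlib


class CloudServer:
    """Toy deduplicating store, keyed by a deterministic content tag."""

    def __init__(self):
        self.store = {}  # tag -> ciphertext

    def upload(self, tag, ciphertext):
        # Returns True if the blob was new, False if it was deduplicated.
        if tag in self.store:
            return False
        self.store[tag] = ciphertext
        return True


def content_tag(data: bytes) -> bytes:
    # Deterministic tag: identical plaintexts collide, enabling dedup.
    return hashlib.sha256(data).digest()


def convergent_encrypt(data: bytes) -> bytes:
    # Placeholder cipher: XOR with a keystream derived from the data
    # itself, so identical plaintexts yield identical ciphertexts.
    # A real scheme would run AES with a hash-derived key inside the TEE.
    key = hashlib.sha256(b"key|" + data).digest()
    return bytes(b ^ key[i % 32] for i, b in enumerate(data))
```

Because the tag and ciphertext are both deterministic functions of the plaintext, a second user uploading the same file is detected as a duplicate and stores nothing new.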

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1243-1244
    • /
    • 2022
  • In recent years, the growing interest in off-site construction has led to factories scaling up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated components production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges for deploying this technology for large-scale field applications still remain. One of the issues is collecting and transmitting vast amounts of video data. Continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate various object information with different sizes and scales into a single scene. Various sizes and types of objects (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, with the existing object detection algorithms, it is difficult to simultaneously detect objects with significant differences in size because collecting and training massive amounts of object image data with various scales is necessary. This study thus developed a large-scale site monitoring system using edge computing and a small-object detection system to solve these problems. Edge computing is a distributed information technology architecture wherein the image or video data is processed near the originating source, not on a centralized server or cloud. By inferring information from the AI computing module equipped with CCTVs and communicating only the processed information with the server, it is possible to reduce excessive network traffic. 
Small-object detection is an innovative method to detect different-sized objects by cropping the raw image and setting the appropriate number of rows and columns for image splitting based on the target object size. This enables the detection of small objects from cropped and magnified images. The detected small objects can then be expressed in the original image. In the inference process, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with the existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of construction sites, which were inaccurately detected by the existing algorithms. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation, based on the time-to-collision concept, enabling the optimization of safety routes by accumulating workers' paths and inferring the risky areas based on workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management system more effectively.
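The crop-and-detect idea can be sketched as a grid-tiling routine: split the frame into a chosen number of rows and columns, run the detector on each crop, and translate the resulting boxes back into full-image coordinates. This is a minimal illustration with hypothetical names, not the authors' implementation; a real pipeline (e.g. around YOLO-v5) would add overlap between tiles and non-maximum suppression across tile borders.

```python
def split_grid(width, height, rows, cols):
    """Yield (x0, y0, x1, y1) crop windows covering the image in a grid."""
    tile_w, tile_h = width // cols, height // rows
    for r in range(rows):
        for c in range(cols):
            x0, y0 = c * tile_w, r * tile_h
            x1 = width if c == cols - 1 else x0 + tile_w   # absorb remainder
            y1 = height if r == rows - 1 else y0 + tile_h
            yield (x0, y0, x1, y1)


def detect_tiled(image_size, rows, cols, detector):
    """Run `detector` on each crop and map boxes back to full-image
    coordinates. `detector(window)` returns (x0, y0, x1, y1, label)
    boxes relative to the crop's origin."""
    width, height = image_size
    merged = []
    for (x0, y0, x1, y1) in split_grid(width, height, rows, cols):
        for (bx0, by0, bx1, by1, label) in detector((x0, y0, x1, y1)):
            merged.append((bx0 + x0, by0 + y0, bx1 + x0, by1 + y0, label))
    return merged
```

Detecting on magnified crops lets a small worker occupy enough pixels to be found, while the coordinate offset restores the detection to the original large-scale view.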


A Hydrogeological Study on Determining the Optimal Sustainable Yield of Groundwater Resources of Cheju Island (제주도 지하수자원의 최적 개발가능량 선정에 관한 수리지질학적 연구)

  • 한정상;김창길;김남종;한규상
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference
    • /
    • 1994.07a
    • /
    • pp.184-215
    • /
    • 1994
  • The hydrogeologic data of 455 water wells, comprising geologic logs and aquifer tests, were analyzed to determine the hydrogeologic characteristics of Cheju island. The groundwater of Cheju island occurs in unconsolidated pyroclastic deposits interbedded in highly jointed basaltic and andesitic rocks, as high-level, basal, and parabasal types under unconfined conditions. The average transmissivity and specific yield of the aquifer are about 29,300 m$^2$/day and 0.12, respectively. The total storage of groundwater is estimated at about 44 billion cubic meters (m$^3$). Average annual precipitation is about 3,390 million m$^3$, of which the average recharge is estimated at 1,494 million m$^3$, equivalent to 44.1% of annual precipitation, with 638 million m$^3$ of runoff and 1,256 million m$^3$ of evapotranspiration. Based on a groundwater budget analysis, the sustainable yield is about 620 million m$^3$ (41% of annual recharge), and the rest discharges into the sea. The geologic logs of recently drilled thermal water wells indicate that a very low-permeability marine sediment (Sehwa-ri formation), composed of loosely cemented sandy silt derived mainly from volcanic ashes of the area's first-stage volcanic activity, is situated at 120$\pm$68 m below sea level. Another low-permeability sedimentary rock, the Segipo formation, deemed younger than the former marine sediment, occurs in the area covering the northwestern and western parts of Cheju at about 70 m below sea level. If these impermeable beds are distributed as a basal formation of the fresh-water zone of Cheju, most of the groundwater in Cheju will be of the parabasal type, and these formations will be one of the most important hydrogeologic boundaries controlling groundwater occurrence in the area.
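The groundwater budget quoted above can be checked arithmetically: recharge, runoff, and evapotranspiration should sum to roughly the annual precipitation, and the stated percentages follow from the same figures. A quick consistency check, using the abstract's numbers:

```python
# Figures from the abstract, in million cubic meters per year.
precipitation = 3390
recharge = 1494
runoff = 638
evapotranspiration = 1256
sustainable_yield = 620

# The budget components should sum to roughly the precipitation
# (small residual is rounding in the reported figures).
balance = recharge + runoff + evapotranspiration

recharge_ratio = recharge / precipitation        # ~0.441 -> "44.1%"
yield_ratio = sustainable_yield / recharge       # ~0.41  -> "41% of recharge"
```

The components sum to 3,388 million m$^3$ against 3,390 reported, so the published percentages are internally consistent.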


An Improved CBRP using Secondary Header in Ad-Hoc network (Ad-Hoc 네트워크에서 보조헤더를 이용한 개선된 클러스터 기반의 라우팅 프로토콜)

  • Hur, Tai-Sung
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.9 no.1
    • /
    • pp.31-38
    • /
    • 2008
  • An Ad-Hoc network is a network architecture that has no backbone network and is deployed temporarily and rapidly in emergencies or wartime, without fixed mobile infrastructure. All communications between network entities in ad-hoc networks are carried over the wireless medium. Because radio communications are extremely vulnerable to propagation impairments, connectivity between network nodes is not guaranteed; therefore, many new algorithms have been studied recently. This study proposes a secondary-header approach to the cluster-based routing protocol (CBRP). When the primary header enters an abnormal status and can no longer participate in communications between network entities, the secondary header immediately replaces it, without the selection process for a new primary header. This mitigates the routing interruption that occurs when a header moves out of a cluster or is in an abnormal status. The performance of the proposed algorithm, ACBRP (Advanced Cluster Based Routing Protocol), is compared with CBRP, the cost of primary-header reelection in ACBRP is simulated, and results are presented to show the effectiveness of the algorithm.
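The failover rule can be sketched as follows: each cluster keeps a pre-designated secondary header, so losing the primary costs only a pointer swap rather than a reelection. This is a toy model with hypothetical names, not the paper's simulation code; real CBRP/ACBRP would also handle hello messages, gateway nodes, and member mobility.

```python
class Cluster:
    """Toy model of ACBRP-style header failover: a pre-designated
    secondary header takes over immediately, with no election."""

    def __init__(self, members):
        self.members = list(members)
        self.primary = self.members[0]
        self.secondary = self.members[1] if len(self.members) > 1 else None

    def primary_failed(self):
        # Immediate takeover: the secondary becomes primary, and a new
        # secondary is designated from the remaining members.
        self.members.remove(self.primary)
        self.primary = self.secondary
        self.secondary = next(
            (m for m in self.members if m != self.primary), None
        )
        return self.primary
```

In CBRP the cluster stalls while a new header is elected; here routing can resume as soon as `primary_failed` returns, which is the interruption the paper's secondary header is meant to avoid.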


A study of SCM strategic plan: Focusing on the case of LG electronics (공급사슬 관리 구축전략에 관한 연구: LG전자 사례 중심으로)

  • Lee, Gi-Wan;Lee, Sang-Youn
    • Journal of Distribution Science
    • /
    • v.9 no.3
    • /
    • pp.83-94
    • /
    • 2011
  • Most domestic companies, with the exclusion of major firms, are reluctant to implement a supply chain management (SCM) network into their operations. Most small- and medium-sized enterprises are not even aware of SCM. Due to the inherent total-systems efficiency of SCM, it coordinates domestic manufacturers, subcontractors, distributors, and physical distributors and cuts down on cost of inventory control, as well as demand management. Furthermore, a lack of SCM causes a decrease in competitiveness for domestic companies. The reason lies in the fundamentality of SCM, which is the characteristic of information sharing, process innovation throughout SCM, and the vast range of problems the SCM management tool is able to address. This study suggests the contemplation and reformation of the current SCM situation by analyzing the SCM strategic plan, discourses and logical discussions on the topic, and a successful case for adapting SCM; hence, the study plans to productively "process" SCM. First, it is necessary to contemplate the theoretical background of SCM before discussing how to successfully process SCM. I will describe the concept and background of SCM in Chapter 2, with a definition of SCM, types of SCM promotional activities, fields of SCM, necessity of applying SCM, and the effects of SCM. All of the defects in currently processing SCM will be introduced in Chapter 3. Discussion items include the following: the Bullwhip Effect; the breakdown in supply chain and sales networks due to e-business; the issue that even though the key to a successful SCM is cooperation between the production and distribution company, during the process of SCM, the companies, many times, put their profits first, resulting in a possible defect in demands estimation. 
Furthermore, the problems of implementing SCM in a domestic distribution-production company concern information technology; for example, a new system introduced to the company may not be compatible with the pre-existing document architecture. Second, for effective management, distribution and production companies should cooperate and enhance their partnership at the corporate level; in reality, however, this seldom occurs. Third, in terms of the work process, introducing SCM can create friction within corporations during the integration of the distribution-production process. Fourth, to increase the achievement of the SCM strategy process, companies need to set up a cross-functional team; however, business partners often lack the cooperation and business-information sharing tools necessary to effect the transition to SCM. Chapter 4 addresses an SCM strategic plan and a case study of LG Electronics: the purpose of the strategic plan, strategic plans by type of business, adopting SCM in a distribution company, and the global supply chain process of LG Electronics. The conclusion of the study, in Chapter 5, addresses the fierce competition that companies currently face in the global market environment and their increased investment in SCM in order to better cope with short product life cycles and high customer expectations. The SCM management system has evolved through the adoption of improved information, communication, and transportation technologies; now, it demands the utilization of various strategic resources. The introduction of SCM benefits the management of a network of interconnected businesses by securing customer loyalty through the cost and time savings derived from consolidating many distribution systems; additionally, SCM helps enterprises form a wide range of marketing strategies.
Thus, we could conclude that not only the distributors but all types of businesses should adopt the systems approach to supply chain strategies. SCM deals with the basic stream of distribution and increases the value of a company by replacing physical distribution with information. By the company obtaining and sharing ready information, it is able to create customer satisfaction at the end point of delivery to the consumer.


A 13b 100MS/s 0.70㎟ 45nm CMOS ADC for IF-Domain Signal Processing Systems (IF 대역 신호처리 시스템 응용을 위한 13비트 100MS/s 0.70㎟ 45nm CMOS ADC)

  • Park, Jun-Sang;An, Tai-Ji;Ahn, Gil-Cho;Lee, Mun-Kyo;Go, Min-Ho;Lee, Seung-Hoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.46-55
    • /
    • 2016
  • This work proposes a 13b 100MS/s 45nm CMOS ADC with high dynamic performance for IF-domain high-speed signal processing systems, based on a four-step pipeline architecture to optimize the operating specifications. The SHA employs a wideband high-speed sampling network to properly process high-frequency input signals exceeding the sampling frequency. The SHA and MDACs adopt a two-stage amplifier with a gain-boosting technique to obtain the required high DC gain and wide signal-swing range, while the amplifier and bias circuits repeatedly use the same unit-size devices to minimize device mismatch. Furthermore, a separate analog power supply voltage for the on-chip current and voltage references minimizes performance degradation caused by undesired noise and interference from adjacent functional blocks during high-speed operation. The proposed ADC occupies an active die area of $0.70mm^2$, based on various process-insensitive layout techniques that minimize the effects of physical process imperfections. The prototype ADC in a 45nm CMOS demonstrates a measured DNL and INL within 0.77LSB and 1.57LSB, with a maximum SNDR and SFDR of 64.2dB and 78.4dB at 100MS/s, respectively. The ADC is implemented with long-channel devices rather than the minimum channel-length devices available in this CMOS technology, in order to process the wide input range of $2.0V_{PP}$ required by the system and to obtain high dynamic performance in IF-domain input signal bands. The ADC consumes 425.0mW from a single analog voltage of 2.5V and two digital voltages of 2.5V and 1.1V.
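The headline figures imply a standard effective-number-of-bits calculation; the Walden figure of merit below is a common derived metric, not a number quoted in the abstract.

```python
sndr_db = 64.2   # measured peak SNDR from the abstract
power_w = 0.425  # total power consumption (425.0 mW)
fs_hz = 100e6    # sampling rate (100 MS/s)

# Standard relation between SNDR and effective number of bits:
# ENOB = (SNDR - 1.76) / 6.02
enob = (sndr_db - 1.76) / 6.02

# Walden figure of merit: energy per conversion step.
fom_j_per_step = power_w / (2 ** enob * fs_hz)
```

With a peak SNDR of 64.2 dB, the 13b converter achieves roughly 10.4 effective bits, and the derived figure of merit is on the order of a few picojoules per conversion step, plausible for an IF-sampling ADC of this generation.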

Toxicity of Organic Waste-Contaminated Soil on Earthworm (Eisenia fetida) (유기성 폐기물에 의해 오염된 토양이 지렁이에게 미치는 독성)

  • Na, Young-Eun;Bang, Hae-Son;Kim, Myung-Hyun;Lee, Jeong-Taek;Ahn, Young-Joon;Yoon, Seong-Tak
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.40 no.1
    • /
    • pp.51-56
    • /
    • 2007
  • The toxicities to earthworms (Eisenia fetida) of soils contaminated by 8 consecutive annual applications of three levels (12.5, 25.0, and $50.0t\;dry\;matter\;ha^{-1}yr^{-1}$) of four organic sludges [municipal sewage sludge (MSS), industrial sewage sludge (ISS), alcohol fermentation processing sludge (AFPS), and leather processing sludge (LPS)] were examined using microcosm containers in the laboratory. Results were compared with those of pig manure compost (PMC) treated soil. In tests with the three treatment levels (12.5, 25.0, and 50.0 t per plot), ISS-treated soil showed higher contents of Cu (18.9~26.2 fold), Cr (7.7~34.7 fold), and Ni (14.8~18.8 fold) at 8 years post-treatment than PMC-treated soil. LPS-treated soil showed higher contents of Cr (35.7~268.0 fold) and Ni (4.5~7.6 fold) than PMC-treated soil. There were no great differences in heavy metal contents among the MSS-, AFPS-, and PMC-treated soils. In these contaminated soils, earthworm mortalities in MSS- and AFPS-treated soils at 8 weeks post-exposure were similar to those in PMC-treated soil regardless of treatment level. The toxic effect (26.7~96.7% mortality) of the ISS- and LPS-treated soils was significantly higher than that of PMC-treated soil, with the exception of LPS soil treated at 25.0 t per plot. At 16 weeks post-exposure, earthworm mortalities in the 12.5 and 25.0 t AFPS-treated soils were similar to those in PMC-treated soil. The toxic effect (53.3~100% mortality) of the 12.5, 25.0, and 50.0 t treated soils of MSS, ISS, and LPS, and of the 50.0 t AFPS-treated soil, was significantly higher than that of PMC-treated soil. The data suggest that the 12.5, 25.0, and 50.0 t treated soils of MSS, ISS, and LPS, and the 50.0 t AFPS-treated soil, are toxic to earthworms.

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has been getting much attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). CNN is characterized by dividing the input image into small sections to recognize partial features and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing. The use of deep learning techniques for business problems is still at an early research stage. If their performance is proved, they can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, bankruptcy prediction, and so on. It is therefore a very meaningful experiment to diagnose the possibility of solving business problems with deep learning, based on the case of online shopping companies, which have big data, can identify customer behavior relatively easily, and offer high utilization value. In online shopping companies especially, the competitive environment is changing rapidly and becoming more intense, so the analysis of customer behavior for maximizing profit is becoming more and more important. In this study, we propose a 'CNN model of heterogeneous information integration' using CNN as a way to improve the prediction of customer behavior in online shopping enterprises.
The proposed model learns from both structured and unstructured information through a convolutional neural network combined with a multi-layer perceptron structure. To optimize its performance, we design and evaluate three architectural components, 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churn, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC (voice of customer) data of a specific online shopping company in Korea. The data extraction criteria cover 47,947 customers who registered at least one VOC in January 2011 (one month). The customer profiles of these customers, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month are used. The experiment is divided into two stages. In the first stage, we evaluate the three architectural components that affect the performance of the proposed model and select optimal parameters; we then evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). Therefore, it is significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to business problems as well as to image recognition and natural language processing.
The experiments also confirm that CNN is effective in understanding and interpreting the meaning of context in textual VOC data. It is significant that this empirical research, based on actual data from an e-commerce company, can extract very meaningful information for customer behavior prediction from VOC data written in text form directly by customers. Finally, through various experiments, the proposed model provides useful information for future research related to parameter selection and performance.
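The heterogeneous-information idea, combining structured customer features with a vector derived from unstructured text, can be illustrated with a minimal stand-in. The paper learns the text representation with a CNN; the hashing trick below merely shows the integration step, and all names are hypothetical.

```python
import hashlib


def text_to_vector(text, dim=16):
    """Hash word unigrams into a fixed-size bag-of-words vector -- a
    stand-in for the paper's 'unstructured information vector
    conversion' step (the paper learns this representation with a CNN)."""
    vec = [0.0] * dim
    for word in text.split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    return vec


def combine_features(structured, voc_text, dim=16):
    """Concatenate structured features (e.g. recency, frequency,
    monetary value) with the text vector, mirroring the
    heterogeneous-information integration idea."""
    return list(structured) + text_to_vector(voc_text, dim)
```

The concatenated vector could then feed any binary classifier for targets like re-purchase or churn; the paper's contribution is that the learned CNN text features outperform classifiers using structured information alone.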

Implementation of Reporting Tool Supporting OLAP and Data Mining Analysis Using XMLA (XMLA를 사용한 OLAP과 데이타 마이닝 분석이 가능한 리포팅 툴의 구현)

  • Choe, Jee-Woong;Kim, Myung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.3
    • /
    • pp.154-166
    • /
    • 2009
  • Database query and reporting tools, OLAP tools, and data mining tools are typical front-end tools in a Business Intelligence (BI) environment, which supports gathering, consolidating, and analyzing data produced from business operation activities and provides enterprise users with access to the results. Traditional reporting tools have the advantage of creating sophisticated dynamic reports including SQL query result sets, which look like documents produced by word processors, and publishing those reports to the Web, but their data source is limited to RDBMSs. On the other hand, OLAP tools and data mining tools provide powerful information analysis functions, each in its own way, but their built-in visualization components for analysis results are limited to tables and some charts. Thus, this paper presents a system that integrates the three typical front-end tools so that they complement one another in a BI environment. Traditional reporting tools have only a query editor for generating SQL statements to bring data from an RDBMS; the reporting tool presented in this paper can also extract data from OLAP and data mining servers, because editors for OLAP and data mining query requests have been added to the tool. Traditional systems produce all documents on the server side; that structure lets reporting tools avoid repeatedly generating documents when many clients access the same dynamic document. However, because this system targets a small number of users generating documents for data analysis, the tool generates documents on the client side; it therefore has a processing mechanism to handle large amounts of data despite the limited memory capacity of the report viewer on the client side. The reporting tool also has a data structure for integrating data from the three kinds of data sources into one document. Finally, most traditional front-end tools for BI depend on the data source architecture of a specific vendor.
To overcome this problem, the system uses XMLA (XML for Analysis), a web-service-based protocol, to access OLAP and data mining data sources from various vendors.
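An XMLA request is a SOAP envelope carrying an Execute (or Discover) call; the sketch below builds a minimal Execute request around an MDX statement. The catalog and cube names are hypothetical, and a real client would also POST this to the provider's XMLA endpoint, handle authentication, and parse the multidimensional result set.

```python
XMLA_NS = "urn:schemas-microsoft-com:xml-analysis"
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"


def build_execute_request(mdx, catalog):
    """Build a minimal XMLA Execute request carrying an MDX statement.
    The Catalog property selects the target database; Format controls
    the shape of the returned result set."""
    return f"""<SOAP-ENV:Envelope xmlns:SOAP-ENV="{SOAP_NS}">
  <SOAP-ENV:Body>
    <Execute xmlns="{XMLA_NS}">
      <Command>
        <Statement>{mdx}</Statement>
      </Command>
      <Properties>
        <PropertyList>
          <Catalog>{catalog}</Catalog>
          <Format>Multidimensional</Format>
        </PropertyList>
      </Properties>
    </Execute>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""
```

Because the request is plain XML over HTTP, the same reporting client can talk to OLAP and data mining servers from any vendor that implements the XMLA specification, which is precisely the vendor-independence the paper relies on.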

A New Exploration of Animated Content Using Oculus VR - Focusing on the VR Platform and Killer Content - (오큘러스 VR (Oculus VR)를 이용한 애니메이션 콘텐츠의 새로운 모색 - VR 플랫폼과 킬러콘텐츠를 중심으로 -)

  • Lee, Jong-Han
    • Cartoon and Animation Studies
    • /
    • s.45
    • /
    • pp.197-214
    • /
    • 2016
  • Augmented reality and virtual reality have recently attracted attention throughout the world, and together with mixed reality they have had a significant impact on popular culture as a whole, beyond the scope of science and technology. The world's leading IT companies, including Google, Apple, Samsung, Microsoft, Sony, and LG, are focusing on developing AR and VR technology for the public, and many companies large and small have developed VR hardware, software, and content. VR encompasses techniques that realize a virtual space through specific platforms or programs, giving people the cognitive experience of places or situations that are invisible or otherwise inaccessible. In particular, moving beyond the limitations of the conventional two-dimensional image, 180- and 360-degree video offers participants subjective and objective viewpoints and a sense of space and time, and lets them choose what to watch. Because VR can strongly induce immersion and participation, it is drawing unprecedented attention from industry as well as from the general public. More than ten VR-related works were introduced in the New Frontier program of the 2015 Sundance Film Festival, and VR content has appeared in fields such as medicine, architecture, shopping, film, and animation. Moreover, with 360-degree cameras, individuals can now produce and share VR video, making VR an interactive channel between users. Nevertheless, problems inherent in the realization of a virtual space have been pointed out, such as the confusion of values and moral degeneration. With 4K head-mounted displays, location tracking, motion sensors, greater processing power, superior 3D graphics, touch, smell, 4D technology, and 3D audio, VR now approaches reality more closely than ever, which in turn raises concerns about moral degeneration, identity, generational conflict, and escapism. Animation, too, is seeking its place in this category of reality.
Indeed, the very similarity of their imagery may be the reason animation has been pushed back in VR content creation. However, while VR technology and platforms currently focus on games and entertainment, animation, which has stayed within the flat image, should also seek new possibilities in VR, given that VR ultimately consists of visual images. Finally, how could the reality created in virtual space using VR technology be applied to animation? The common interest, then, is research on the methods and means by which it can be applied.