• Title/Summary/Keyword: Combined processing process

Search results: 241

Compensation Analysis of Cell Delay Variation for ATM Transmission in the TDMA Method (TDMA 방식에서 ATM 전송을 위한 셀 지연 변이의 보상 해석)

  • Kim, Jeong-Ho;Choe, Gyeong-Su
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.2
    • /
    • pp.295-304
    • /
    • 1996
  • To provide economical B-ISDN service, in which many types of media can be integrated, it is necessary to construct a system that combines a terrestrial network with a satellite network. The transmission method suited to such satellite use is TDMA, which can serve many users over a wide area. However, the most difficult problem in connecting TDMA, a synchronous method, to ATM, which uses asynchronous transfer mode, is the deterioration of ATM transmission quality caused by cell delay variation. It is therefore necessary to develop a delay variation compensation method suitable for ATM. Efficient ways to use satellite links for conversion between ATM and TDMA are being researched under the conditions that the maximum delay variation is kept below the required value and that the burstiness of the transmitted cells does not increase. This paper points out the problems that arise when the time stamp method, previously studied for terrestrial networks, is applied to satellite links to compensate for the delay variation. To solve these problems, a discrete cell count method is introduced together with calculations of transmission capacity and error rate. Observations of the stability of the system and verification of its reliability even when signal errors occur in the cell transmission timing information show that the proposed compensation method performs well. (A minimal illustrative sketch of the cell-count idea appears after this entry.)

  • PDF
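The abstract above contrasts a time stamp method with a discrete cell count method for absorbing cell delay variation over a satellite TDMA link. The following is a minimal, hypothetical sketch of the two ideas only, not the paper's algorithm; the frame period, the fixed playout offset, and the evenly spaced playout rule are assumptions made for illustration.

```python
# Illustrative sketch only: timing recovery from per-cell time stamps versus
# from a per-frame cell count. All constants and rules are assumed values.

FRAME_PERIOD = 0.002  # assumed TDMA frame period in seconds


def timestamp_playout(cells):
    """Time stamp method: each cell carries its emission time, and the
    receiver replays it at emission time plus a fixed offset large enough
    to cover the worst observed delay."""
    offset = max(arrival - stamp for stamp, arrival in cells)
    return [stamp + offset for stamp, arrival in cells]


def cell_count_playout(bursts):
    """Cell count method: each TDMA burst carries only the number of cells
    generated during the frame; the receiver spaces them evenly across the
    frame, smoothing the delay variation introduced by burst assembly."""
    playout, frame_start = [], 0.0
    for count in bursts:                     # one count per TDMA frame
        gap = FRAME_PERIOD / max(count, 1)
        for i in range(count):
            playout.append(frame_start + i * gap)
        frame_start += FRAME_PERIOD
    return playout


# Example: three frames carrying 3, 1, and 4 cells respectively.
print(cell_count_playout([3, 1, 4]))
```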

An Integrated Processing Method for Image and Sensing Data Based on Location in Mobile Sensor Networks (이동 센서 네트워크에서 위치 기반의 동영상 및 센싱 데이터 통합 처리 방안)

  • Ko, Minjung;Jung, Juyoung;Boo, Junpil;Kim, Dohyun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.8 no.5
    • /
    • pp.65-71
    • /
    • 2008
  • Recently, research has been progressing on the SWE (Sensor Web Enablement) platform of the OGC (Open Geospatial Consortium) to provide sensing data and moving pictures collected in a sensor network through the Web. However, existing research does not deal with moving objects such as cars, trains, ships, and people. Therefore, we present a method to handle integrated sensing data collected by GPS devices, sensor networks, and imaging devices. This paper also proposes an integrated processing method for image and sensing data based on location in mobile sensor networks. According to the proposed method, we design and implement a combine adapter. This combine adapter receives context data and provides a common interface that includes parsing, queueing, and unified-message creation functions. We verify that the proposed method handles the integrated sensing data efficiently through the combine adapter. This research is therefore expected to support the development of various location-based context-information services in the future. (A simplified sketch of such an adapter appears after this entry.)

  • PDF
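The combine adapter described above receives context data from heterogeneous sources, parses it, queues it, and produces a unified message. The sketch below is a hypothetical illustration of that interface; the field names, message layout, and source labels are assumptions, not the paper's schema.

```python
# Hypothetical sketch of a "combine adapter": parse, queue, and unify
# context records from GPS, sensor, and image sources by location.
import json
import queue


class CombineAdapter:
    def __init__(self):
        self.buffer = queue.Queue()

    def parse(self, source, raw):
        # Normalize heterogeneous inputs into a common dictionary.
        record = json.loads(raw)
        return {"source": source,
                "lat": record.get("lat"),
                "lon": record.get("lon"),
                "payload": record.get("data")}

    def enqueue(self, source, raw):
        self.buffer.put(self.parse(source, raw))

    def unified_message(self):
        # Drain the queue into one location-tagged unified message.
        items = []
        while not self.buffer.empty():
            items.append(self.buffer.get())
        return {"type": "unified", "items": items}


adapter = CombineAdapter()
adapter.enqueue("gps", '{"lat": 33.5, "lon": 126.5, "data": "fix"}')
adapter.enqueue("sensor", '{"lat": 33.5, "lon": 126.5, "data": {"temp": 21.4}}')
print(adapter.unified_message())
```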

Efficient Coverage Guided IoT Firmware Fuzzing Technique Using Combined Emulation (복합 에뮬레이션을 이용한 효율적인 커버리지 가이드 IoT 펌웨어 퍼징 기법)

  • Kim, Hyun-Wook;Kim, Ju-Hwan;Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.5
    • /
    • pp.847-857
    • /
    • 2020
  • As IoT equipment is commercialized, Bluetooth or wireless networks are being built into everyday devices such as IP cameras, door locks, cars, and TVs. Security for IoT equipment is becoming more important because such equipment shares a large amount of information over the network, collects personal information, and operates physical systems. In addition, web-based attacks and application attacks currently account for a significant portion of cyber threats, and security experts analyze vulnerabilities through manual analysis to secure against them. However, since it is practically impossible to analyze vulnerabilities with manual analysis alone, researchers studying system security are working on automated vulnerability detection systems; Firm-AFL, recently published at USENIX, proposed a system focused on fuzzing processing speed and efficiency using a coverage-based fuzzer. However, because existing tools focus on the fuzzing throughput of the firmware, they fail to find vulnerabilities along diverse paths. In this paper, we propose IoTFirmFuzz, which finds more paths, resolves constraints, and discovers more crashes by strengthening the mutation process so as to find vulnerabilities along paths that existing tools miss.
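To make the coverage-guided idea concrete, here is a minimal, generic mutation loop: inputs that reach new coverage are kept as seeds, and crashing inputs are collected. This is not IoTFirmFuzz or Firm-AFL; `run_target()` is a placeholder standing in for an emulated firmware execution harness.

```python
# Generic coverage-guided fuzzing loop (illustrative sketch, not the paper's tool).
import random


def mutate(data: bytes) -> bytes:
    # Simple bit-flip / byte-insertion mutations; real fuzzers use many strategies.
    buf = bytearray(data)
    if buf and random.random() < 0.5:
        buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    else:
        buf.insert(random.randrange(len(buf) + 1), random.randrange(256))
    return bytes(buf)


def fuzz(seed: bytes, run_target, iterations=10000):
    corpus, seen_coverage, crashes = [seed], set(), []
    for _ in range(iterations):
        parent = random.choice(corpus)
        child = mutate(parent)
        coverage, crashed = run_target(child)      # hypothetical emulation harness
        if crashed:
            crashes.append(child)                  # keep crashing inputs for triage
        if not coverage.issubset(seen_coverage):   # new edges reached: keep as seed
            corpus.append(child)
            seen_coverage |= coverage
    return corpus, crashes
```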

Recycling Industry of Urban Mines by Applying Non-Ferrous Metallurgical Processes in Japan (비철제련(非鐵製鍊) 프로세스를 이용한 일본(日本)의 도시광산(都市鑛山) 재자원화산업(再資源化産業))

  • Oh, Jae-Hyun;Kim, Joon-Soo;Moon, Suk-Min;Min, Ji-Won
    • Resources Recycling
    • /
    • v.20 no.3
    • /
    • pp.12-27
    • /
    • 2011
  • The DOWA group has been working on metal recycling based on the smelting and refining processes of the Kosaka smelter. DOWA developed its metal recycling technologies through the treatment of black ore (complex sulfide ores) that contains many kinds of non-ferrous metals. In addition to these specialized technologies, DOWA has strengthened its hydrometallurgical processing of precious metals and its ability to handle low-grade materials such as used electrical appliances and vehicles. JX Nippon Mining & Metals Corporation (JX-NMMC), on the other hand, carries out its metal recycling and industrial waste treatment businesses using advanced separation, extraction, and refining technologies developed through its extensive experience in the smelting of non-ferrous metals. JX-NMMC collects approximately 100,000 t/y of copper and precious metal scraps from waste sources such as electronic parts, mobile phones, catalytic converters, printed circuit boards, and gold-plated parts. These items are recycled through the smelting and refining operations of the Saganoseki smelter and the Hitachi Metal-recycling Complex (HMC). In this way, metal recycling industries combined with environmental business services in Japan have developed through excellent technologies for mineral processing and non-ferrous smelting. Both groups, DOWA and JX-NMMC, have contributed to establishing Japan's recycling-oriented society as leading non-ferrous smelting companies. Establishing a collection system for e-waste is now an important remaining issue.

Microstructural Change by Hot Forging Process of Korean Traditional Forged High Tin Bronze (전통기술로 제작된 방짜유기의 열간 단조 과정별 미세조직 변화)

  • Lee, Jae-sung;Jeon, Ik-hwan;Park, Jang-sik
    • Journal of Conservation Science
    • /
    • v.34 no.6
    • /
    • pp.493-502
    • /
    • 2018
  • Currently, the fabrication of a high-tin bronze spoon by traditional manufacturing techniques involves 10 steps in the bronze ware workshop. Hot forging, which accounts for two to three of these steps, has a major influence on manufacturing. The dendritic α-phase in the microstructure of the high-tin bronze spoon is refined and finely dispersed through hot forging. In addition, twinning is observed in the α-phase of the hammered part, and the α-phase microstructure gradually transforms from a polygonal to a circular shape due to hammering. In this process, adjacent α-phases overlap with each other and remain combined after quenching. This overlapping microstructure is also observed in bronze artifacts, which suggests a correlation with the technical system. The results of experimental hot forging of Cu-22%Sn alloys show that the decrease in the amount of the dendritic microstructure, which forms during casting, is proportional to the number of processing steps, and that the refined grains obtained by hammering contribute to the improvement in the strength of the material. From the hammering marks observed both on bronze artifacts excavated from archaeological sites and on the high-tin bronze spoon produced in the traditional workshop, it is presumed that knowledge of the unrecorded ancient manufacturing system of bronze ware has been passed down in a traditional way to the system used today.

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reduce the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are demonstrated in computer simulation and experiment. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation is time intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, an artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. For that reason, we attempt to find a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, whose aim is to select better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier transforming the parameters encoded on the hologram into the fitness value; depending on the speed of the computer, it can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer trial holograms, which is equivalent to a reduction of the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed; consequently, the initial population contains fewer random holograms and is supplemented by approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network yields approximately desired holograms that are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step. Hence, the verified results in Ref. [2] for parameters such as the probabilities of crossover and mutation, the tournament size, and the crossover block size remain unchanged, apart from the reduced population size.
    A reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured; the simulation and experimental results are in fairly good agreement with each other. In this paper, the genetic algorithm and a neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation. (A simplified sketch of the ANN-seeded GA idea appears after this entry.)

  • PDF
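The core of the hybrid algorithm described above is to seed part of the GA's initial population with holograms predicted by a trained network instead of purely random ones. The sketch below illustrates only that seeding idea plus one generic GA generation; `predict_hologram()` and `fitness()` are placeholders, not the authors' implementations, and the operator details are assumptions.

```python
# Illustrative sketch of an ANN-seeded GA for binary phase holograms.
import random


def hybrid_initial_population(targets, predict_hologram, pop_size, hologram_size):
    # ANN-predicted members first, then fill the rest with random binary holograms.
    population = [predict_hologram(t) for t in targets]
    while len(population) < pop_size:
        population.append([random.randint(0, 1) for _ in range(hologram_size)])
    return population


def genetic_step(population, fitness, p_mut=0.001):
    # One GA generation: keep the better half, one-point crossover, bit-flip mutation.
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: len(scored) // 2]
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]
        children.append([g ^ (random.random() < p_mut) for g in child])
    return children
```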

Effects of Virtual Reality Images on Body Stability : Focused on Hand Stability (VR 영상이 신체 안정성에 미치는 영향 : 손 안정성을 중심으로)

  • Han, Seung Jo;Kim, Sun-Uk;Koo, Kyo-Chan;Lee, Kyun-Joo;Cho, Min-Su
    • Journal of Digital Convergence
    • /
    • v.15 no.8
    • /
    • pp.391-400
    • /
    • 2017
  • The purpose of this paper is to present the effect of image stimulation on body stability as a conceptual model and to investigate the effect of image stimuli (2D, VR) on body stability (hand stability) through experiments. Recently, stereoscopic images such as virtual and augmented reality have been combined with smart phones and exercise equipment, and their diffusion is becoming active. The possibility of a safety accident or human error is also increasing, since an image stimulus temporarily affects the balance of the body and hand stability even after it is removed. A conceptual model is presented based on the results of previous studies and, based on the experimental results, is explained in combination with models of human information processing and cognitive resources in the brain. Twenty subjects were exposed to 2D and VR stimuli; display fatigue was measured with a cybersickness questionnaire and hand stability with a hand steadiness tester. The experimental results show that VR images induce higher display fatigue and lower hand stability than 2D images. This study is meaningful in that the relationship between image type, display fatigue, and hand stability, which had not been examined before, is revealed through a conceptual model and experiment.

Human Visual Perception-Based Quantization For Efficiency HEVC Encoder (HEVC 부호화기 고효율 압축을 위한 인지시각 특징기반 양자화 방법)

  • Kim, Young-Woong;Ahn, Yong-Jo;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.22 no.1
    • /
    • pp.28-41
    • /
    • 2017
  • In this paper, a fast encoding algorithm for the High Efficiency Video Coding (HEVC) encoder is studied. For encoding efficiency, the current HEVC reference software divides the input image into Coding Tree Units (CTUs); each CTU is then recursively divided into CUs up to the maximum depth in quad-tree form for RDO (Rate-Distortion Optimization). This is one of the reasons the complexity of the encoding process is high. To reduce this complexity, we propose a method that determines the maximum depth of the CU using hierarchical clustering in a pre-processing stage. The hierarchical clustering result represents an averaged combination of the motion vectors (MVs) of neighboring blocks. Experimental results show that the proposed method achieves an average 16% time saving with minimal BD-rate loss at 1080p video resolution. When combined with a previous fast algorithm, the proposed method achieves an average 45.13% time saving with 1.84% BD-rate loss.
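To illustrate the pre-processing idea, the sketch below clusters the motion vectors of a CTU's neighbouring blocks and uses the number of distinct motion clusters to cap the CU split depth before RDO. The distance threshold and the cluster-count-to-depth mapping are assumptions made for illustration, not the paper's parameters.

```python
# Rough sketch: limit CU quad-tree depth from hierarchical clustering of
# neighbouring motion vectors (homogeneous motion -> shallow splitting).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage


def max_cu_depth(motion_vectors, dist_threshold=4.0):
    """motion_vectors: N x 2 list/array of (mvx, mvy) from neighbouring blocks."""
    mv = np.asarray(motion_vectors, dtype=float)
    if len(mv) < 2:
        return 0                           # homogeneous region: no further split
    labels = fcluster(linkage(mv, method="average"),
                      t=dist_threshold, criterion="distance")
    n_clusters = int(labels.max())
    # More distinct motion clusters -> allow deeper quad-tree splitting (cap at 3).
    return min(3, n_clusters - 1)


# Two similar MVs plus two very different ones -> 2 clusters -> depth limit 1.
print(max_cu_depth([[0, 0], [0, 1], [8, -7], [9, -6]]))
```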

MPEG Video Segmentation using Two-stage Neural Networks and Hierarchical Frame Search (2단계 신경망과 계층적 프레임 탐색 방법을 이용한 MPEG 비디오 분할)

  • Kim, Joo-Min;Choi, Yeong-Woo;Chung, Ku-Sik
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.1_2
    • /
    • pp.114-125
    • /
    • 2002
  • In this paper, we propose a hierarchical segmentation method that first segments video data into shot units by detecting cuts and dissolves, and then decides the type of camera operation or object movement in each shot. In our previous work [1], each picture group is classified into one of three detailed categories, Shot (scene change), Move (camera operation or object movement), and Static (almost no change between images), by analysing the DC (Direct Current) components of I (Intra) frames. For this, we designed a two-stage hierarchical neural network whose inputs combine various multiple features. The system then detects the exact shot position and the type of camera operation or object movement by searching the P (Predicted) and B (Bi-directional) frames of the current picture group selectively and hierarchically. The statistical distribution of macro block types in P or B frames is used for accurate detection of the cut position, and another neural network, with macro block types and motion vectors as inputs, is used to detect dissolves, types of camera operations, and object movements. The proposed method reduces processing time by using only the DC coefficients of I frames without full decoding and by searching P and B frames selectively and hierarchically. It classified the picture groups with an accuracy of 93.9-100.0% and the cuts with an accuracy of 96.1-100.0% on three different types of video data, and classified the types of camera movements or object movements with accuracies of 90.13% and 89.28% on two different types of video data.
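As a minimal illustration of the first stage only, the sketch below compares DC-image histograms of consecutive I frames to flag candidate Shot / Move / Static picture groups. The thresholds and the use of a plain histogram difference (instead of the paper's two-stage neural network) are placeholder assumptions.

```python
# Coarse picture-group classification from I-frame DC images (illustrative only).
import numpy as np


def dc_histogram(dc_image, bins=32):
    hist, _ = np.histogram(dc_image, bins=bins, range=(0, 255), density=True)
    return hist


def classify_group(prev_dc, curr_dc, shot_th=0.5, move_th=0.1):
    diff = np.abs(dc_histogram(prev_dc) - dc_histogram(curr_dc)).sum()
    if diff > shot_th:
        return "Shot"    # likely scene change: search P/B frames for the exact cut
    if diff > move_th:
        return "Move"    # camera operation or object movement
    return "Static"      # almost no change between images
```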

A Performance Improvement of Linux TCP Networking by Data Structure Reuse (자료 구조 재사용을 이용한 리눅스 TCP 네트워킹 성능 개선)

  • Kim, Seokkoo;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.3 no.8
    • /
    • pp.261-270
    • /
    • 2014
  • As Internet traffic increases recently, much effort has been put on improving the performance of a web server. In addition to hardware side solutions such as replacement by high-end hardware or expansion of the number of servers, there are software side solutions to improve performance. Recent studies on these software side solutions have been actively performed. In this paper, we identify performance degradation problems occurring in a conventional TCP networking reception process and propose a way to solve them. We improve performance by combining three kinds of existing methods for Linux Networking Performance Improvement and two kinds of newly proposed methods in this paper. The three existing methods include 1) an allocation method of a packet flow to a core in a multi-core environment, 2) ITR(Interrupt Throttle Rate) method to control excessive interrupt requests, and 3) sk_buff data structure recycling. The two newly proposed methods are fd data structure recycling and epoll_event data structure recycling. Through experiments in a web server environment, we verify the effect of our two proposed methods and its combination with the three existing methods for performance improvement, respectively. We use three kinds of web servers: a simple web server, Lighttpd generally used in Linux, and Apache. In a simple web server environment, fd data structure recycling and epoll_event data structure recycling bring out performance improvement by about 7 % and 6%, respectively. If they are combined with the three existing methods, performance is improved by up to 40% in total. In a Lighttpd and an Apache web server environment, the combination of five methods brings out performance improvement by up to 36% and 20% in total, respectively.