• Title/Summary/Keyword: Information-processing theory


Design of Software and Hardware Modules for a TCP/IP Offload Engine with Separated Transmission and Reception Paths (송수신 분리형 TCP/IP Offload Engine을 위한 소프트웨어 및 하드웨어 모듈의 설계)

  • Jang Hank-Kok;Chung Sang-Hwa;Choi Young-In
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.9
    • /
    • pp.691-698
    • /
    • 2006
  • TCP/IP Offload Engine (TOE) is a technology that processes TCP/IP on a network adapter instead of the host CPU to reduce protocol processing overhead on the host. There have been several approaches to implementing TOE: software TOE based on an embedded processor, hardware TOE based on an ASIC implementation, and hybrid TOE in which software and hardware functions are combined. In this paper, we designed software and hardware modules for a hybrid TOE on an FPGA that has two processor cores. The software modules are based on embedded Linux. The hardware modules handle data transmission (TX) and reception (RX). One core controls the TX path and the other controls the RX path of the Linux stack. This TX/RX path separation mechanism reduces task-switching overhead between processes and overcomes the poor performance of a single embedded processor. The hardware modules create headers for outgoing packets, process headers of incoming packets, and fetch or store data from or to host memory by DMA, which improves the performance of data transmission and reception. We verified the performance of the TOE with separated transmission and reception paths through experiments with a TOE network adapter equipped with the FPGA containing the processor cores.
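
A minimal conceptual sketch of the TX/RX path separation idea follows. It is written in Python purely for illustration (the paper's modules are embedded Linux software and FPGA hardware): one worker stands in for the core dedicated to the transmission path and another for the core dedicated to the reception path, so neither path incurs task switching with the other. The queue names and header handling are assumptions, not the paper's design.

```python
import queue
import threading

# Illustrative only: model the two dedicated "cores" as two workers,
# each serving its own path so TX work never preempts RX work.
tx_queue = queue.Queue()
rx_queue = queue.Queue()

def tx_worker():
    # "TX core": attach a header to each outgoing payload and send it.
    while (payload := tx_queue.get()) is not None:
        frame = b"HDR" + payload        # stands in for TCP/IP header creation
        print(f"TX path sent a {len(frame)}-byte frame")

def rx_worker():
    # "RX core": strip the header from each incoming frame and deliver it.
    while (frame := rx_queue.get()) is not None:
        payload = frame[len(b"HDR"):]   # stands in for header processing
        print(f"RX path delivered a {len(payload)}-byte payload")

workers = [threading.Thread(target=tx_worker), threading.Thread(target=rx_worker)]
for w in workers:
    w.start()

tx_queue.put(b"hello")                  # outgoing traffic
rx_queue.put(b"HDRworld")               # incoming traffic
tx_queue.put(None)                      # shut both paths down
rx_queue.put(None)
for w in workers:
    w.join()
```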

Design and Evaluation of a NIC-Driven Host-Independent Network System (네트워크 인터페이스 카드에 기반한 호스트 독립적인 네트워크 시스템의 설계 및 성능평가)

  • Yim Keun Soo;Cha Hojung;Koh Kern
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.11
    • /
    • pp.626-634
    • /
    • 2004
  • In a client-server model, network server systems suffer from both heavy communication and computational loads. While communication channels become increasingly fast, existing protocol stack architectures still contain three main performance bottlenecks: protocol stack processing, system call overhead, and network interrupt overhead. To address these obstacles, in this paper we present a host-independent network system in which a network interface card (NIC) is utilized efficiently. First, by offloading the network-related portion to the NIC, the host can fully utilize its processing power for other useful purposes. Second, the system eliminates system call overhead, such as context switching and memory copy operations, since the host communicates with the NIC through user-level libraries. Third, it also reduces the network interrupt count, as the host handles interrupts per segment instead of per packet. The experimental results show that the proposed network system reduces the host CPU overhead for the communication system by 68-71%. They also show that the proposed system improves communication speed by 11-83% under heavy computational and communication loads.
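
As a back-of-the-envelope illustration of the interrupt-reduction argument, the sketch below compares per-packet and per-segment notification counts. The 1460-byte payload and 64 KB segment size are assumptions chosen for illustration, not figures from the paper.

```python
# Rough illustration: interrupts per 100 MB transfer when the host is
# notified once per packet versus once per reassembled segment.
MTU_PAYLOAD = 1460          # bytes carried per Ethernet packet (typical MSS)
SEGMENT_SIZE = 64 * 1024    # bytes handed to the host per NIC notification

transfer_bytes = 100 * 1024 * 1024
per_packet_irqs = transfer_bytes // MTU_PAYLOAD
per_segment_irqs = transfer_bytes // SEGMENT_SIZE
print(per_packet_irqs, per_segment_irqs, per_packet_irqs / per_segment_irqs)
```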

A New Dual Connective Network Resource Allocation Scheme Using Two Bargaining Solution (이중 협상 해법을 이용한 새로운 다중 접속 네트워크에서 자원 할당 기법)

  • Chon, Woo Sun;Kim, Sung Wook
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.8
    • /
    • pp.215-222
    • /
    • 2021
  • In order to alleviate the limited-resource and interference problems in cellular networks, dual connectivity technology has been introduced with the cooperation of small cell base stations. In this paper, we design a new efficient and fair resource allocation scheme for dual connectivity. Based on two different bargaining solutions, the Generalized Tempered Aspiration bargaining solution and the Gupta and Livne bargaining solution, we develop a two-stage radio resource allocation method. At the first stage, radio resources are divided into two groups, real-time and non-real-time data services, using the Generalized Tempered Aspiration bargaining solution. At the second stage, the minimum requested processing speeds for users in both groups are guaranteed using the Gupta and Livne bargaining solution. This two-stage approach allocates 5G radio resources sequentially while maximizing network system performance. Finally, the performance evaluation confirms that the proposed scheme achieves better performance than existing protocols in terms of overall system throughput, fairness, and communication failure rate as service requests increase.
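
The sketch below shows only the two-stage structure of such an allocation. The actual bargaining solutions (Generalized Tempered Aspiration, Gupta and Livne) are replaced here by a plain proportional rule, and all numbers are hypothetical; this is not the paper's scheme.

```python
def stage1_split(total_rb, rt_demand, nrt_demand):
    """Stage 1: divide resource blocks between the real-time and
    non-real-time service groups (proportional placeholder rule)."""
    rt_rb = round(total_rb * rt_demand / (rt_demand + nrt_demand))
    return rt_rb, total_rb - rt_rb

def stage2_allocate(group_rb, min_requests):
    """Stage 2: guarantee each user's minimum request, then spread any
    spare capacity in proportion to those requests."""
    guaranteed = sum(min_requests)
    if guaranteed > group_rb:
        # Not enough capacity: scale every guarantee down proportionally.
        return [group_rb * m / guaranteed for m in min_requests]
    spare = group_rb - guaranteed
    return [m + spare * m / guaranteed for m in min_requests]

rt_rb, nrt_rb = stage1_split(100, rt_demand=60, nrt_demand=40)
print(stage2_allocate(rt_rb, [10, 20, 15]))   # hypothetical real-time users
print(stage2_allocate(nrt_rb, [5, 10]))       # hypothetical non-real-time users
```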

Generalization of error decision rules in a grammar checker using Korean WordNet, KorLex (명사 어휘의미망을 활용한 문법 검사기의 문맥 오류 결정 규칙 일반화)

  • So, Gil-Ja;Lee, Seung-Hee;Kwon, Hyuk-Chul
    • The KIPS Transactions:PartB
    • /
    • v.18B no.6
    • /
    • pp.405-414
    • /
    • 2011
  • Korean grammar checkers typically detect context-dependent errors by employing heuristic rules that are manually formulated by a language expert. These rules are added each time a new error pattern is detected, so such grammar checkers are not consistent. In order to resolve this shortcoming, we propose a new method for generalizing the error decision rules used to detect such errors. For this purpose, we use an existing thesaurus, KorLex, the Korean version of Princeton WordNet. KorLex provides hierarchical word senses for nouns but does not contain any information about the relationships between cases in a sentence. Using the Tree Cut Model and the MDL (minimum description length) model based on information theory, we extract noun classes from KorLex and generalize error decision rules from these noun classes. To verify the accuracy of the new method, we extracted, from a large corpus, nouns used as objects of four predicates that are commonly confused, and subsequently extracted noun classes from these nouns. We found that the number of error decision rules generalized from these noun classes decreased to about 64.8% of the original. In conclusion, the precision of our grammar checker exceeds that of conventional ones by 6.2%.
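
To illustrate how an MDL criterion drives this kind of generalization, the toy sketch below compares the description length of keeping each noun as its own rule against collapsing the nouns into one parent class, broadly in the spirit of the tree cut model. The hierarchy, frequencies, and cost formulas are simplified assumptions, not the paper's KorLex procedure.

```python
import math

# Hypothetical data: nouns observed as objects of one predicate, with counts,
# all covered by a single parent class in a toy hierarchy.
freq = {"dog": 6, "cat": 3, "horse": 1}
N = sum(freq.values())

def description_length(cut):
    """cut: list of (class_members, class_frequency) pairs covering all nouns."""
    k = len(cut) - 1                              # free parameters of the cut
    model_cost = (k / 2) * math.log2(N)           # cost of the model itself
    data_cost = 0.0
    for members, class_freq in cut:
        p_class = class_freq / N
        for noun in members:
            # Class probability spread uniformly over the class members.
            data_cost -= freq[noun] * math.log2(p_class / len(members))
    return model_cost + data_cost

leaf_cut = [([n], f) for n, f in freq.items()]    # one rule per noun
class_cut = [(list(freq), N)]                     # one generalized class rule
print(description_length(leaf_cut))               # higher: many specific rules
print(description_length(class_cut))              # lower: one generalized rule
```

With these toy counts the single-class cut has the smaller description length, which is the sense in which MDL favors replacing many noun-specific rules with one class-level rule.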

PVC Classification based on QRS Pattern using QS Interval and R Wave Amplitude (QRS 패턴에 의한 QS 간격과 R파의 진폭을 이용한 조기심실수축 분류)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.4
    • /
    • pp.825-832
    • /
    • 2014
  • Previous works on arrhythmia detection have mostly used nonlinear methods such as artificial neural networks, fuzzy theory, and support vector machines to increase classification accuracy. Most methods require accurate detection of the P-QRS-T points and incur high computational cost and long processing times. Some methods have the advantage of low complexity but generally suffer from low sensitivity. It is also difficult to detect PVC accurately because QRS patterns vary with individual differences. Therefore, it is necessary to design an efficient algorithm that classifies PVC based on the QRS pattern in real time and reduces computational cost by extracting minimal features. In this paper, we propose PVC classification based on the QRS pattern using the QS interval and R-wave amplitude. For this purpose, we detected the R wave, RR interval, and QRS pattern from the denoised ECG signal through preprocessing. We then classified PVC in real time using the QS interval and R-wave amplitude. The performance of R-wave detection and PVC classification was evaluated using nine records of the MIT-BIH arrhythmia database that include more than 30 PVCs. The results show an average of 99.02% for R-wave detection and 93.72% for PVC classification.
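
A minimal rule-based check in this spirit is sketched below. The thresholds (a 0.12 s QS/QRS width limit and a 40% amplitude deviation) and the beat values are illustrative assumptions, not the values derived in the paper.

```python
def is_pvc(qs_interval_s, r_amplitude_mv, avg_r_amplitude_mv,
           qs_threshold_s=0.12, amp_deviation=0.4):
    """Flag a beat as PVC when its QRS complex is wide and its R-wave
    amplitude deviates strongly from the running average (illustrative rule)."""
    wide_qrs = qs_interval_s > qs_threshold_s
    abnormal_amp = abs(r_amplitude_mv - avg_r_amplitude_mv) > amp_deviation * avg_r_amplitude_mv
    return wide_qrs and abnormal_amp

# Hypothetical beats: (QS interval in seconds, R-wave amplitude in mV).
beats = [(0.08, 1.0), (0.14, 2.0), (0.09, 1.1)]
avg_r = sum(r for _, r in beats) / len(beats)
print([is_pvc(qs, r, avg_r) for qs, r in beats])   # expect [False, True, False]
```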

Process Networks of Ecohydrological Systems in a Temperate Deciduous Forest: A Complex Systems Perspective (온대활엽수림 생태수문계의 과정망: 복잡계 관점)

  • Yun, Juyeol;Kim, Sehee;Kang, Minseok;Cho, Chun-Ho;Chun, Jung-Hwa;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.16 no.3
    • /
    • pp.157-168
    • /
    • 2014
  • From a complex systems perspective, ecohydrological systems in forests may be characterized by (1) large networks of components that give rise to complex collective behaviors, (2) sophisticated information processing, and (3) adaptation through self-organization and learning. To demonstrate these characteristics, we applied the recently proposed 'process networks' approach to a temperate deciduous forest in the Gwangneung National Arboretum in Korea. The process network analysis clearly delineated the forest ecohydrological system as hierarchical networks of information flows and feedback loops operating on various time scales among different variables. Several subsystems were identified, such as the synoptic subsystem (SS), atmospheric boundary layer subsystem (ABLS), biophysical subsystem (BPS), and biophysicochemical subsystem (BPCS). These subsystems were assembled and disassembled through the coupling and decoupling of feedback loops to form and deform newly aggregated subsystems (e.g., a regional subsystem), which is evidence of the self-organizing processes of a complex system. Our results imply that, despite natural and human disturbances, ecosystems grow and develop through self-organization while maintaining dynamic equilibrium, thereby continuously adapting to environmental changes. Ecosystem integrity is preserved when the system's self-organizing processes are preserved, something that happens naturally if we maintain the context for self-organization. From this perspective, the process networks approach makes sense.
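
Process network construction rests on quantifying directional information flow between measured variables, typically with transfer entropy. Below is a minimal histogram-based sketch of lag-1 transfer entropy on synthetic series; the actual studies use more careful binning, multiple time lags, and statistical significance testing, so this is an illustration of the quantity, not their full procedure.

```python
import numpy as np

def transfer_entropy(x, y, bins=4):
    """Histogram estimate of lag-1 transfer entropy TE(X -> Y) in bits."""
    # Discretize both series with quantile-based bin edges.
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]
    n = len(y_next)

    te = 0.0
    for yn in np.unique(y_next):
        for yc in np.unique(y_now):
            for xc in np.unique(x_now):
                p_xyz = ((y_next == yn) & (y_now == yc) & (x_now == xc)).sum() / n
                if p_xyz == 0:
                    continue
                p_yz = ((y_now == yc) & (x_now == xc)).sum() / n
                p_yn_y = ((y_next == yn) & (y_now == yc)).sum() / n
                p_y = (y_now == yc).sum() / n
                te += p_xyz * np.log2((p_xyz / p_yz) / (p_yn_y / p_y))
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)        # y driven by x at lag 1
print(transfer_entropy(x, y), transfer_entropy(y, x))  # first value is larger
```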

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times thanks to a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and is extended to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to capture the inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio becomes too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured article level, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those that refer to at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks.
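
Both focal variables can be computed directly from per-editor contribution counts. The sketch below shows one straightforward way to do so, with a hypothetical list of edit counts standing in for one article's editors.

```python
import numpy as np

def pareto_ratio(contributions):
    """Share of all contributions made by the top 20% of contributors."""
    c = np.sort(np.asarray(contributions, dtype=float))[::-1]
    top_n = max(1, int(np.ceil(0.2 * len(c))))
    return c[:top_n].sum() / c.sum()

def gini(contributions):
    """Gini coefficient of per-editor contribution counts (0 = perfect equality)."""
    c = np.sort(np.asarray(contributions, dtype=float))
    n = len(c)
    cum = np.cumsum(c)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

edits = [120, 45, 30, 8, 5, 4, 3, 2, 2, 1]   # hypothetical edit counts per editor
print(pareto_ratio(edits), gini(edits))
```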

Study on the Applicability of High Frequency Seismic Reflection Method to the Inspection of Tunnel Lining Structures - Physical Modeling Approach - (터널 지보구조 진단을 위한 고주파수 탄성파 반사법의 응용성 연구 - 모형 실험을 중심으로 -)

  • Kim, Jung-Yul;Kim, Yoo-Sung;Shin, Yong-Suk;Hyun, Hye-Ja;Jung, Hyun-Key
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.2 no.3
    • /
    • pp.37-45
    • /
    • 2000
  • In recent years, two reflection methods, GPR and seismic Impact-Echo, have usually been applied to obtain information about tunnel lining structures composed of concrete lining, shotcrete, a water barrier, and voids at the back of the lining. However, they do not provide resolution sufficient for inspecting tunnel safety, primarily because of (1) the thinness of the inner layers of the lining structure relative to the wavelength of the source wavelets, (2) dominant unwanted surface-wave arrivals, and (3) inadequate measuring strategies. In this sense, seismic physical modeling, which provides full information about a known physical model, is a useful tool for handling such problems, especially for studying wave propagation in fine structures that are not amenable to theory or field work. Thus, this paper presents results of seismic physical modeling that show the possibility of detecting the inner layer boundaries of tunnel lining structures. To this end, a physical model analogous to a lining structure was built, and data were measured and processed in the same way as in regular reflection surveys. The resulting seismic section gives a clear picture of the lining structure and opens a more consistent direction of research into the development of efficient measuring and processing technology.
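
Two textbook relations underlie the resolution argument: the normal-incidence reflection coefficient between layers of different acoustic impedance, and the quarter-wavelength (tuning) thickness below which a thin layer is not separately resolved. The sketch below evaluates both; the densities, velocities, and source frequencies are hypothetical values chosen for illustration, not the model parameters used in the paper.

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence reflection coefficient from acoustic impedances Z = rho * v."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

def tuning_thickness(v, f):
    """Quarter-wavelength limit: layers thinner than this are hard to resolve."""
    return v / (4.0 * f)

# Hypothetical concrete lining over shotcrete (density kg/m^3, velocity m/s).
print(reflection_coefficient(2400, 4000, 2200, 3200))
# Resolution limit of a 4000 m/s concrete layer at 5 kHz vs. 50 kHz sources.
print(tuning_thickness(4000, 5e3), tuning_thickness(4000, 50e3))
```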


An Algorithm for Spot Addressing in Microarray using Regular Grid Structure Searching (균일 격자 구조 탐색을 이용한 마이크로어레이 반점 주소 결정 알고리즘)

  • 진희정;조환규
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.9
    • /
    • pp.514-526
    • /
    • 2004
  • Microarray is a new technique for gene expression experiments that has gained biologists' attention in recent years. This technology enables us to obtain the expression of hundreds to thousands of genes or genotypes at once. Since analyzing gene expression patterns requires manual work, an effective, automated tool for analyzing microarray images is needed. However, it is difficult to analyze DNA chip images automatically due to several problems, such as variation in spot position, irregularity of spot shape and size, and sample contamination. In particular, one of the most difficult problems in microarray analysis is block and spot addressing, which is performed manually or semi-automatically in all commercial tools. In this paper, we propose a new algorithm that addresses the positions of spots and blocks using a new concept of regular grid structure searching. In our algorithm, we first construct maximal I-regular sequences from the set of input points. Second, we calculate the rotational angle and unit distance. Finally, we construct an I-regularity graph by allowing pseudo points and then compute the spot/block addresses using this graph. Experimental results showed that our algorithm is highly robust and reliable. Supplementary information is available at http://jade.cs.pusan.ac.kr/~autogrid.
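
The grid-recovery step (estimating the rotational angle and unit distance, then snapping spots to integer addresses) can be illustrated with the short sketch below. It is a simplified stand-in using nearest-neighbour statistics, not the paper's I-regular sequence and I-regularity graph construction, and the 3x3 test grid is hypothetical.

```python
import numpy as np

def grid_parameters(points):
    """Estimate grid tilt (degrees) and unit spacing from spot centres."""
    pts = np.asarray(points, dtype=float)
    d = pts[:, None, :] - pts[None, :, :]          # pairwise displacements
    dist = np.linalg.norm(d, axis=2)
    np.fill_diagonal(dist, np.inf)
    vecs = pts[dist.argmin(axis=1)] - pts          # nearest-neighbour vectors
    spacing = np.median(np.linalg.norm(vecs, axis=1))
    # Fold neighbour directions into [0, 90) degrees; the median is the tilt.
    angles = np.degrees(np.arctan2(vecs[:, 1], vecs[:, 0])) % 90.0
    return np.median(angles), spacing

def spot_addresses(points, angle, spacing):
    """Rotate spots back by the estimated tilt and snap them to grid indices."""
    pts = np.asarray(points, dtype=float)
    t = np.radians(-angle)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    aligned = pts @ rot.T
    return np.round((aligned - aligned.min(axis=0)) / spacing).astype(int)

# Hypothetical 3x3 grid of spots, 20 units apart, rotated 5 degrees, with jitter.
rng = np.random.default_rng(1)
base = 20.0 * np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
t = np.radians(5)
spots = base @ np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]).T
spots += rng.normal(scale=0.5, size=spots.shape)
angle, spacing = grid_parameters(spots)
print(angle, spacing)            # roughly 5 degrees and 20 units
print(spot_addresses(spots, angle, spacing))
```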

A Case Study on Big Data Analysis of Performing Arts Consumer for Audience Development (관객개발을 위한 공연예술 소비자 빅데이터 분석 사례 고찰)

  • Kim, Sun-Young;Yi, Eui-Shin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.12
    • /
    • pp.286-299
    • /
    • 2017
  • The Korean performing arts have been facing stagnation due to oversupply, the lack of an effective distribution system, and insufficient business models. In order to overcome these difficulties, it is necessary to improve the efficiency and accuracy of marketing by using more objective market data, and to secure audience development and loyalty. This study takes the view that Big Data could provide more general and accurate statistics and could ultimately promote tailored services for performances. We examine the first case of Big Data analysis conducted by a credit card company, as well as Big Data's characteristics, analytical techniques, and the theoretical background of performing arts consumer analysis. The purpose of this study is to identify the significance and limitations of this Big Data analysis of the performing arts and to explore ways to overcome those limitations. The case study revealed several limitations: the incompleteness of credit card data on performance buyers, limited verification of existing theory, low utilization, and limited analysis of consumer propensity and purchase drivers. As ways to overcome these problems, it is possible to identify genres and performances and to collect qualitative information, such as prospect information that can reveal trends and purchase factors in combination with surveys, and purchase motives through mashups with social data. This research is ultimately a starting point for how performing arts consumers should be studied in the Big Data era and what changes should be sought. Based on our results, we expect more concrete qualitative analysis cases for audience development and continued development of Big Data analysis and processing solutions that accurately represent the performing arts market.