Title/Summary/Keyword: computer models


A Register-Based Caching Technique for the Advanced Performance of Multithreaded Models (다중스레드 모델의 성능 향상을 위한 가용 레지스터 기반 캐슁 기법)

  • Go, Hun-Jun; Gwon, Yeong-Pil; Yu, Won-Hui
    • The KIPS Transactions: Part A, v.8A no.2, pp.107-116, 2001
  • A multithreaded model is a hybrid that combines the execution locality of the von Neumann model with the asynchronous data availability and implicit parallelism of the dataflow model. Much of the research toward improving the performance of multithreaded models has focused on cache memory, which has proven effective in the von Neumann model. To use an instruction cache or operand cache, a multithreaded model must include cache memories, but adding them carries the disadvantage of high implementation cost. To avoid this problem, we do not add cache memory; instead, we perform caching using the available registers of the multithreaded model. The available-register-based caching method uses registers that are not otherwise used during thread execution and can achieve the same effect as a cache memory. Because a multithreaded model can compute the number of available registers during register optimization, the method is easy to apply to such models. It also removes access conflicts and the bottleneck of frame memory. Applying the proposed available-register-based caching method improved the performance of the multithreaded model, and its execution overhead was almost the same as that of cache-based caching.
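
A minimal sketch of the register-caching idea under assumed costs: registers left spare after register allocation act as a software-managed cache for frame-memory slots, so repeated operand reads skip frame memory. The class, names, and cycle costs below are illustrative stand-ins, not the paper's mechanism.

```python
# Hypothetical cost model: frame memory is slow, registers are fast.
FRAME_MEMORY_COST = 10   # assumed cycles per frame-memory access
REGISTER_COST = 1        # assumed cycles per register access

class ThreadFrame:
    def __init__(self, slots, spare_registers):
        self.slots = slots                    # frame-memory contents
        self.spare = spare_registers          # number of unused registers
        self.reg_cache = {}                   # slot index -> cached value
        self.cycles = 0

    def read(self, slot):
        if slot in self.reg_cache:            # hit: value already in a register
            self.cycles += REGISTER_COST
            return self.reg_cache[slot]
        self.cycles += FRAME_MEMORY_COST      # miss: go to frame memory
        value = self.slots[slot]
        if len(self.reg_cache) < self.spare:  # cache it if a register is free
            self.reg_cache[slot] = value
        return value

frame = ThreadFrame(slots=[7, 3, 9, 1], spare_registers=2)
for s in [0, 1, 0, 1, 0, 1]:                 # repeated operand reads
    frame.read(s)
print("cycles with register caching:", frame.cycles)  # 2 misses + 4 hits = 24
```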


Interactive Collision Detection for Deformable Models using Streaming AABBs

  • Zhang, Xinyu; Kim, Young-J.
    • Proceedings of the HCI Society of Korea Conference, 2007.02c, pp.306-317, 2007
  • We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel, pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At run time, as the underlying models deform, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to obtain only the computed results (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining the overlapping AABBs, we perform primitive-level (e.g., triangle) intersection checking on a serial computational model such as a CPU. We implemented the entire pipeline using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for the streaming computations, and Intel Dual Core 3.4 GHz processors for the serial computations. We benchmarked our algorithm with models of varying complexity, ranging from 15K up to 50K triangles, under various deformation motions, and obtained timings of 30~100 FPS depending on the complexity of the models and their relative configurations. Finally, we compared our approach with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about a three-fold performance improvement; against a SW-based AABB culling algorithm [2], we observed about a two-fold improvement.
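
The core overlap test is easy to state in stream form: two AABBs intersect iff they overlap on every axis. The sketch below uses NumPy broadcasting as a CPU stand-in for the GPU streaming computation described above; array names are illustrative.

```python
import numpy as np

def aabb_overlap_pairs(mins_a, maxs_a, mins_b, maxs_b):
    """Return a boolean (N, M) matrix: True where box i of A overlaps box j of B.

    mins_a, maxs_a: (N, 3) arrays of AABB corners for object A.
    mins_b, maxs_b: (M, 3) arrays of AABB corners for object B.
    Two AABBs overlap iff they overlap on every axis.
    """
    # Broadcast to (N, M, 3): boxes are separated on an axis when one box's
    # minimum lies beyond the other box's maximum.
    sep = (mins_a[:, None, :] > maxs_b[None, :, :]) | \
          (mins_b[None, :, :] > maxs_a[:, None, :])
    return ~sep.any(axis=2)

# Toy usage: one box of A against two boxes of B (one overlapping, one not).
mins_a = np.array([[0.0, 0.0, 0.0]]); maxs_a = np.array([[1.0, 1.0, 1.0]])
mins_b = np.array([[0.5, 0.5, 0.5], [2.0, 2.0, 2.0]])
maxs_b = np.array([[1.5, 1.5, 1.5], [3.0, 3.0, 3.0]])
print(aabb_overlap_pairs(mins_a, maxs_a, mins_b, maxs_b))  # [[ True False]]
```

Only pairs flagged True would go on to the exact triangle-level test on the CPU.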


Modeling and Simulation of LEACH Protocol to Analyze DEVS Kernel-models in Sensor Networks

  • Nam, Su Man; Kim, Hwa Soo
    • Journal of the Korea Society of Computer and Information, v.25 no.4, pp.97-103, 2020
  • Wireless sensor networks collect and analyze sensing data in a variety of environments without human intervention. A sensor network's lifetime depends on the routing protocol initially installed, and it is difficult to modify the routing path while the network is operating because doing so forces the sensors to consume a great deal of energy. It is therefore important to measure network performance through simulation before deploying the sensor network in the field. This paper proposes a WSN model for the low-energy adaptive clustering hierarchy (LEACH) protocol using DEVS kernel models. The proposed model is implemented with the sub-models of the kernel model (i.e., the broadcast model and the controlled model). Experimental results indicate that the broadcast-model-based WSN model showed lower CPU resource usage and higher message delivery than the controlled-model-based one.
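
The DEVS kernel models themselves are not reproduced here, but the protocol being modeled has a compact core: LEACH's randomized cluster-head election from Heinzelman et al. (2000), where a node elects itself head with threshold T(n) = P / (1 - P * (r mod 1/P)) if it has not served as head in the current cycle, else 0. A hedged sketch:

```python
import random

def leach_threshold(p, round_no, eligible):
    """Election threshold for one node: p is the desired fraction of heads,
    round_no the current round; nodes that already served are ineligible."""
    if not eligible:
        return 0.0
    return p / (1 - p * (round_no % int(1 / p)))

def elect_cluster_heads(node_ids, eligible, p, round_no):
    """Each eligible node draws a uniform random number; below threshold -> head."""
    return [n for n in node_ids
            if random.random() < leach_threshold(p, round_no, eligible[n])]

nodes = list(range(20))                    # toy network of 20 sensor nodes
eligible = {n: True for n in nodes}        # nobody has served as head yet
heads = elect_cluster_heads(nodes, eligible, p=0.1, round_no=0)
print("round 0 cluster heads:", heads)     # on average ~2 of 20 nodes
```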

Data modeling and algorithms design for implementing Competency-based Learning Outcomes Assessment System (역량기반 학습성과 평가 시스템 구현을 위한 데이터 모델링 및 알고리즘 설계)

  • Chung, Hyun-Sook; Kim, Jung-Min
    • Journal of Convergence for Information Technology, v.11 no.11, pp.335-344, 2021
  • The purpose of this paper is the development of course data models and learning-achievement computation algorithms to enable course-embedded assessment (CEA), which is essential to competency-based education in higher education. Previous work on CEA is weak in providing a systematic solution for CEA computation. In this paper, we propose data models and algorithms for implementing a competency-based assessment system. Our data models consist of a layered architecture of learning outcomes, learning modules, and activities, together with an associative matrix of learning outcomes and activities. The proposed methods can serve as core modules in the development of a course-embedded assessment system. We evaluated the effectiveness of the proposed models by applying them to a practical course, Java Programming, and found that the models can be used as a core module of the assessment system.
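
As an illustration of how such an associative matrix might drive the computation (the paper's exact algorithm is not shown here), the sketch below maps activity scores to outcome achievements by weighted average; all weights, names, and scores are invented for the example.

```python
import numpy as np

outcomes = ["problem solving", "programming", "communication"]
activities = ["quiz", "project", "presentation"]

# Associative matrix W[i, j]: weight of activity j toward outcome i.
W = np.array([
    [0.5, 0.5, 0.0],   # problem solving <- quiz, project
    [0.3, 0.7, 0.0],   # programming    <- quiz, project
    [0.0, 0.2, 0.8],   # communication  <- project, presentation
])

scores = np.array([80.0, 90.0, 70.0])   # one student's activity scores (0-100)

# Outcome achievement: weighted average of the activity scores per outcome.
achievement = (W @ scores) / W.sum(axis=1)
for name, a in zip(outcomes, achievement):
    print(f"{name}: {a:.1f}")            # 85.0, 87.0, 74.0
```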

An Ensemble Approach to Detect Fake News Spreaders on Twitter

  • Sarwar, Muhammad Nabeel; UlAmin, Riaz; Jabeen, Sidra
    • International Journal of Computer Science & Network Security, v.22 no.5, pp.294-302, 2022
  • Detection of fake news is a complex and challenging task. The generation of fake news is very hard to stop; only steps to control its circulation can help minimize its impact. Humans tend to believe misleading false information, which can deceive individuals and organizations and cause major failures and financial losses. Automatic detection of false information circulating on social media is an emerging area of research, and it has been gaining the attention of both industry and academia since the 2016 US presidential elections, because fake news has severe negative effects on individuals, organizations, and society at large; predicting it in a timely manner is important. This research focuses on detecting fake news spreaders. In total, six models were developed, trained, and tested on the PAN 2020 dataset: four N-gram-based models and a user-statistics-based model, trained with different hyperparameter values and tuned by an extensive grid search with cross-validation for each machine learning model. For the N-gram-based models, we focused on the algorithms that yield the best results according to a close reading of the state-of-the-art related work: Random Forest, Logistic Regression, SVM, and XGBoost, all trained with cross-validated grid-search hyperparameters. The advantages of this research over previous work are the user-statistics-based model and the ensemble learning model, which were designed to classify Twitter users as fake news spreaders or not with the highest reliability. The user statistical model uses 17 features to categorize a Twitter user as malicious. A new dataset was constructed from the predictions of the machine learning models, and three combination techniques (simple mean, logistic regression, and random forest) were applied in the ensemble model. Logistic regression in the ensemble gave the best training and testing results, achieving an accuracy of 72%.
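
A hedged sketch of the general recipe described above, using scikit-learn: an n-gram base model tuned by cross-validated grid search, then combined with other base models under a logistic-regression ensemble. The parameter grids, features, and toy data are placeholders, not the paper's actual settings.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import GridSearchCV

# Toy stand-in for per-user tweet text; 1 = fake news spreader.
tweets = ["free money click here"] * 10 + ["lovely weather today"] * 10
labels = [1] * 10 + [0] * 10

# N-gram base model: TF-IDF over word uni/bigrams + logistic regression.
ngram_lr = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Cross-validated grid search over a small illustrative grid.
search = GridSearchCV(ngram_lr, {"clf__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(tweets, labels)

# Ensemble: base-model predictions become features for a logistic-regression
# meta-model (the combination the abstract reports worked best).
stack = StackingClassifier(
    estimators=[
        ("lr", search.best_estimator_),
        ("rf", Pipeline([("tfidf", TfidfVectorizer()),
                         ("clf", RandomForestClassifier(n_estimators=100))])),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(tweets, labels)
print("ensemble accuracy on training data:", stack.score(tweets, labels))
```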

Enhancing LoRA Fine-tuning Performance Using Curriculum Learning

  • Daegeon Kim; Namgyu Kim
    • Journal of the Korea Society of Computer and Information, v.29 no.3, pp.43-54, 2024
  • Recently, there has been a great deal of research on utilizing language models, and Large Language Models have achieved innovative results on various tasks. However, their practical application faces limitations due to the constrained resources and costs required to utilize them, so there has been recent attention to methods for using models effectively within given resources. Curriculum Learning, a methodology that categorizes training data by difficulty and learns it sequentially, has been attracting attention, but it has the limitation that measuring difficulty is complex or not universal. Therefore, in this study we propose a data-heterogeneity-based Curriculum Learning methodology that measures the difficulty of data using reliable prior information and is easy to apply across various tasks. To evaluate the proposed methodology, experiments were conducted using 5,000 specialized documents in the field of information and communication technology and 4,917 documents in the field of healthcare. The results confirm that the proposed methodology outperforms traditional fine-tuning in classification accuracy under both LoRA fine-tuning and full fine-tuning.
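
A framework-agnostic sketch of the curriculum idea: score each example's difficulty from prior information, sort easy-to-hard, and feed the model in that order. The difficulty proxy and `train_step` below are invented placeholders; the paper's actual heterogeneity measure is not reproduced here.

```python
def difficulty(example, prior_vocab):
    """Toy difficulty proxy: fraction of tokens absent from a 'known
    vocabulary' built from prior (e.g., general-domain) data."""
    tokens = example["text"].split()
    unknown = sum(1 for t in tokens if t not in prior_vocab)
    return unknown / max(len(tokens), 1)

def curriculum_order(dataset, prior_vocab):
    """Sort the dataset easy-to-hard by the difficulty score."""
    return sorted(dataset, key=lambda ex: difficulty(ex, prior_vocab))

def train_step(model, example):
    pass  # stand-in for one LoRA fine-tuning step on `example`

prior_vocab = {"the", "a", "network", "model", "data"}
dataset = [
    {"text": "the model uses data"},            # easy: all tokens known
    {"text": "quantized adapters regularize"},  # hard: all tokens unknown
]
model = object()                                # placeholder model handle
for example in curriculum_order(dataset, prior_vocab):
    train_step(model, example)                  # easy examples come first
```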

A Study on Improvement of Dynamic Object Detection using Dense Grid Model and Anchor Model (고밀도 그리드 모델과 앵커모델을 이용한 동적 객체검지 향상에 관한 연구)

  • Yun, Borin; Lee, Sun Woo; Choi, Ho Kyung; Lee, Sangmin; Kwon, Jang Woo
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.17 no.3, pp.98-110, 2018
  • In this paper, we propose a Dense grid model and an Anchor model to improve the recognition rate of dynamic objects. Two experiments were conducted to study the performance of the two proposed CNN models for detecting dynamic objects. In the first experiment, the YOLO-v2 network was adjusted and then fine-tuned on the KITTI dataset, and the Dense grid model and Anchor model were compared with YOLO-v2. In the evaluation, the two models outperform YOLO-v2 by 6.26% to 10.99% on car detection at different difficulty levels. In the second experiment, the models were further trained on a new dataset; there, the two models outperform YOLO-v2 by up to 22.40% on car detection at different difficulty levels.
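
For readers unfamiliar with anchors: detectors in the YOLO-v2 family match each ground-truth box to the anchor shape with the highest intersection-over-union (IoU). A minimal sketch of that matching step follows; the anchor sizes are invented examples, not the paper's learned priors.

```python
import numpy as np

def iou_wh(wh_a, wh_b):
    """IoU of two boxes given only (width, height), as if co-centered."""
    inter = min(wh_a[0], wh_b[0]) * min(wh_a[1], wh_b[1])
    union = wh_a[0] * wh_a[1] + wh_b[0] * wh_b[1] - inter
    return inter / union

anchors = [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0), (3.0, 3.0)]  # assumed shapes

def best_anchor(gt_wh):
    """Index and score of the anchor with the highest IoU for one box."""
    ious = [iou_wh(gt_wh, a) for a in anchors]
    return int(np.argmax(ious)), max(ious)

idx, score = best_anchor((2.2, 1.1))   # a wide, car-like box
print(f"ground truth (2.2, 1.1) -> anchor {anchors[idx]} (IoU {score:.2f})")
```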

ROLE OF COMPUTER SIMULATION MODELING IN PESTICIDE ENVIRONMENTAL RISK ASSESSMENT

  • Wauchope, R. Don; Linders, Jan B.H.J.
    • Proceedings of the Korea Society of Environmental Toxicology Conference, 2003.10a, pp.91-93, 2003
  • It has been estimated that the equivalent of approximately $US 50 billion has been spent on research on the behavior and fate of pesticides in the environment since Rachel Carson published “Silent Spring” in 1962. Much of the resulting knowledge has been summarized explicitly in computer algorithms in a variety of empirical, deterministic, and probabilistic simulation models. These models describe and predict the transport, degradation, and resultant concentrations of pesticides in various compartments of the environment during and after application. In many cases the known errors of model predictions are large. For this reason they are typically designed to be “conservative”, i.e., to err on the side of over-prediction of concentrations in order to err on the side of safety. These predictions are then compared with toxicity data from tests of the pesticide on a series of standard representative biota, including terrestrial and aquatic indicator species and higher animals (e.g., wildlife and humans). The models' predictions are good enough in some cases to screen out those compounds which are very unlikely to do harm and to indicate those compounds which must be investigated further. If further investigation is indicated, a more detailed (and therefore more complicated) model may be employed to give a better estimate, or field experiments may be required. A model may also be used to explore “what if” questions, leading to possible alternative pesticide usage patterns that give lower potential environmental concentrations and allowable exposures. We are currently at a maturing stage in this research, where the knowledge base of pesticide behavior in the environment is growing more slowly than in the past. However, innovative use is being made of the explosion in available computer technology so that models take ever more advantage of the knowledge we have. In this presentation, current developments in the state of the art as practiced in North America and Europe will be presented. Specifically, we will look at the efforts of the ‘Focus’ consortium in the European Union and the ‘EMWG’ consortium in North America. These groups have been innovative in developing a process and mechanisms for discussion amongst academic, agriculture, industry, and regulatory scientists, for consensus adoption of research advances into risk management methodology.
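
As a hedged illustration of the simplest deterministic ingredient in such fate models (not any specific model discussed in the talk): first-order degradation, where a pesticide's half-life DT50 sets the decay rate k = ln(2)/DT50 and the concentration decays as C(t) = C0·exp(-k·t). The numbers below are invented for the example.

```python
import math

def concentration(c0, dt50_days, t_days):
    """Concentration after t_days, given initial c0 and half-life dt50_days."""
    k = math.log(2) / dt50_days          # first-order decay rate
    return c0 * math.exp(-k * t_days)

c0 = 1.0       # assumed initial concentration, mg/kg soil
dt50 = 30.0    # assumed half-life of 30 days
for t in (0, 30, 60, 90):
    print(f"day {t:3d}: {concentration(c0, dt50, t):.3f} mg/kg")
    # halves every 30 days: 1.000, 0.500, 0.250, 0.125
```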


Model Multiplicity (UML) Versus Model Singularity in System Requirements and Design

  • Al-Fedaghi, Sabah
    • International Journal of Computer Science & Network Security, v.21 no.4, pp.103-114, 2021
  • A conceptual model can be used to manage complexity in both the design and implementation phases of the system development life cycle. Such a model requires a firm grasp of the abstract principles on which a system is based, as well as an understanding of the high-level nature of the representation of entities and processes. In this context, models can have distinct architectural characteristics. This paper discusses model multiplicity (e.g., unified modeling language [UML]), model singularity (e.g., object-process methodology [OPM], thinging machine [TM]), and a heterogeneous model that involves multiplicity and singularity. The basic idea of model multiplicity is that it is not possible to present all views in a single representation, so a number of models are used, with each model representing a different view. The model singularity approach uses only a single unified model that assimilates its subsystems into one system. This paper is concerned with current approaches, especially in software engineering texts, where multimodal UML is introduced as the general-purpose modeling language (i.e., UML is modeling). In such a situation, we suggest raising the issue of multiplicity versus singularity in modeling. This would foster a basic appreciation of the UML advantages and difficulties that may be faced during modeling, especially in the educational setting. Furthermore, we advocate the claim that a multiplicity of views does not necessitate a multiplicity of models. The model singularity approach can represent multiple views (static, behavior) without resorting to a collection of multiple models with various notations. We present an example of such a model where the static representation is developed first. Then, the dynamic view and behavioral representations are built by incorporating a decomposition strategy interleaved with the notion of time.

A Realistic Path Loss Model for Real-time Communication in the Urban Grid Environment for Vehicular Ad hoc Networks

  • Mostajeran, Ehsan; Noor, Rafidah Md; Anisi, Mohammad Hossein; Ahmedy, Ismail; Khan, Fawad Ali
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.10, pp.4698-4716, 2017
  • Wireless signal transmission is influenced by environmental effects, which are challenging for real-time communication in a Vehicular Ad hoc Network (VANET). More specifically, in an urban environment with high vehicle mobility, a vehicle's link to the transmitter can switch instantly from line of sight to non-line of sight, which may break real-time communication. To overcome this, a deterministic signal propagation model with low complexity and feasible implementation is required. Hence, we propose a realistic path loss model that adopts a ray-tracing technique for VANETs in a grid urban environment with low computational complexity. To evaluate the model, it is applied to a vehicular simulation scenario, and the results are compared with those of different path loss models in the same scenario in terms of path loss value and application-layer performance. The proposed path loss model yields a higher loss value in dB than the other models; nevertheless, the packet delivery ratio at different transmitter densities verifies an improvement in real-time vehicle-to-vehicle communication. In conclusion, we present a realistic path loss model that improves real-time vehicle-to-vehicle wireless communication in the grid urban environment.
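
For context on the quantity such models predict, here is the textbook log-distance path loss with separate line-of-sight and non-line-of-sight exponents: PL(d) = PL(d0) + 10·n·log10(d/d0). This is a generic illustration, not the paper's ray-tracing model; all parameter values are invented for the example.

```python
import math

def path_loss_db(d, d0=1.0, pl0=40.0, exponent=2.0):
    """Path loss in dB at distance d (m): PL = PL(d0) + 10*n*log10(d/d0).

    d0: reference distance (m); pl0: assumed loss at d0 (dB);
    exponent: path loss exponent n (larger = lossier environment).
    """
    return pl0 + 10.0 * exponent * math.log10(d / d0)

for d in (10, 50, 100):
    los = path_loss_db(d, exponent=2.0)    # assumed LOS exponent
    nlos = path_loss_db(d, exponent=3.5)   # assumed NLOS exponent
    print(f"d={d:4d} m  LOS {los:5.1f} dB   NLOS {nlos:5.1f} dB")
```

The abrupt LOS-to-NLOS switch the abstract describes corresponds to jumping between the two exponents mid-drive, which is why a grid-aware deterministic model matters for real-time evaluation.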