• Title/Summary/Keyword: Poisson process.

Search results: 484 (processing time: 0.023 seconds)

A Probabilistic Handover Scheme for Enhancing Spectral Efficiency in Drone-based Wireless Communication Systems (드론 기반의 무선 통신 시스템에서 주파수 효율 향상을 위한 확률적 핸드오버 기법)

  • Jang, Hwan Won;Woo, Dong Hyuck;Hwang, Ho Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.9 / pp.1220-1226 / 2021
  • In this paper, we propose a probabilistic handover scheme for enhancing spectral efficiency in drone-based wireless communication systems. When a moving drone base station (DBS) provides drone-based wireless communication service to a user equipment (UE) on the ground, the proposed handover scheme considers both the distance between the DBS and the UE and small-scale fading. In addition, it applies a handover probability to mitigate the signaling overhead that frequent handovers may incur. Through simulations of drone-based wireless communication systems, we evaluate the spectral efficiency and handover probability of the proposed scheme and of the conventional scheme. The simulation results show that the proposed scheme achieves higher average spectral efficiency than the conventional scheme, which considers only the distance between the DBS and the UE.
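The handover rule described above can be sketched in a few lines: compare instantaneous received powers (distance-based path loss plus Rayleigh small-scale fading) and gate the switch with a Bernoulli trial. The power model, exponent, and probability below are illustrative assumptions, not the paper's simulation settings.

```python
import math
import random

def received_power_dbm(dist_m, tx_power_dbm=30.0, path_loss_exp=2.5):
    """Received power with log-distance path loss and Rayleigh small-scale
    fading (|h|^2 ~ Exp(1)); all parameter values are illustrative."""
    path_loss_db = 10.0 * path_loss_exp * math.log10(max(dist_m, 1.0))
    fading_db = 10.0 * math.log10(random.expovariate(1.0))
    return tx_power_dbm - path_loss_db + fading_db

def decide_handover(serving_dist_m, candidate_dist_m, p_handover=0.5):
    """Hand the UE over only if the candidate DBS is instantaneously stronger
    AND an independent Bernoulli(p_handover) trial succeeds; the Bernoulli
    thinning is what suppresses frequent (ping-pong) handovers."""
    stronger = received_power_dbm(candidate_dist_m) > received_power_dbm(serving_dist_m)
    return stronger and random.random() < p_handover
```

With `p_handover = 1.0` this reduces to the conventional strongest-signal rule; lowering it trades a little instantaneous signal quality for fewer handovers.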

Resource Allocation for D2D Communication in Cellular Networks Based on Stochastic Geometry and Graph-coloring Theory

  • Xu, Fangmin;Zou, Pengkai;Wang, Haiquan;Cao, Haiyan;Fang, Xin;Hu, Zhirui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4946-4960 / 2020
  • In a device-to-device (D2D) underlaid cellular network, there exist two types of co-channel interference: inter-layer interference, caused by spectrum reuse between D2D transmitters and cellular users (CUEs), and intra-layer interference, caused by spectrum sharing among D2D pairs. To mitigate the inter-layer interference, we first derive the interference limited area (ILA) to protect the coverage probability of cellular users by modeling the D2D users' locations as a Poisson point process; a D2D transmitter is allowed to reuse the spectrum of a CUE only if it is outside that CUE's ILA. To coordinate the intra-layer interference, the spectrum sharing criterion of D2D pairs is derived from the signal-to-interference ratio (SIR) requirement of D2D communication: D2D pairs are allowed to share the spectrum only when one pair is sufficiently far from another. Furthermore, to maximize the energy efficiency of the system, a resource allocation scheme is proposed based on weighted graph-coloring theory and the proposed ILA restriction. Simulation results show that our proposed scheme provides significant performance gains over the conventional scheme and the random allocation scheme.
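The spatial model in the abstract can be sketched directly: scatter D2D transmitters as a homogeneous Poisson point process and apply the ILA rule as a keep-out region around the CUE. The plain disk-shaped ILA below is a stand-in assumption; the paper derives the ILA from a coverage-probability constraint.

```python
import math
import random

def poisson_point_process(intensity, side, rng):
    """Homogeneous PPP on a side x side square: Poisson-distributed count
    (drawn by CDF inversion, fine for the small means used here), then
    uniformly scattered points."""
    mean = intensity * side * side
    n, term, cdf, u = 0, math.exp(-mean), math.exp(-mean), rng.random()
    while u > cdf:
        n += 1
        term *= mean / n
        cdf += term
    return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

def may_reuse_spectrum(d2d_tx, cue, ila_radius):
    """ILA rule: a D2D transmitter may reuse the CUE's spectrum only if it
    lies outside the interference limited area, modeled here as a disk of
    ila_radius around the CUE (an illustrative stand-in)."""
    return math.hypot(d2d_tx[0] - cue[0], d2d_tx[1] - cue[1]) > ila_radius
```

Filtering the PPP realization through `may_reuse_spectrum` yields the set of admissible D2D reusers for a given CUE, the input to the graph-coloring stage.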

Closed-form Expressions of Vector Magnetic and Magnetic Gradient Tensor due to a Line Segment (선형 이상체에 의한 벡터 자력 및 자력 변화율 텐서 반응식)

  • Rim, Hyoungrea
    • Geophysics and Geophysical Exploration / v.25 no.2 / pp.85-92 / 2022
  • An object elongated in one direction can be approximated as a line segment, so closed-form expressions for the vector magnetic field and magnetic gradient tensor of a line segment are required to interpret its responses. Therefore, analytical expressions for the vector magnetic field and magnetic gradient tensor are derived. The vector magnetic field is obtained from the existing gravity gradient tensor using Poisson's relation, by which the gravity gradient tensor due to a line segment is transformed into a vector magnetic field. The magnetic gradient tensor is then derived by differentiating the vector magnetic field with respect to each axis of the Cartesian coordinate system. Synthetic total magnetic data, simulated for an iron pile in boreholes, are inverted through a nonlinear inversion process, and the physical parameters of the iron pile, including the starting point, length, orientation, and magnetization vector, are successfully estimated.
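The transformation the abstract invokes can be written, in standard potential-theory notation, as a sketch: for a body with uniform magnetization $\mathbf{M} = M\hat{\mathbf{m}}$ and density $\rho$, with $U$ the gravitational potential, $G$ the gravitational constant, and $C_m$ the magnetic constant,

```latex
% Poisson's relation: magnetic scalar potential from the gravity field
V = -\frac{C_m M}{G\rho}\, g_{\hat{m}}, \qquad
g_{\hat{m}} = \hat{\mathbf{m}} \cdot \nabla U,
% so the vector magnetic field follows from the gravity gradient tensor
% \Gamma_{ij} = \partial^2 U / \partial x_i \partial x_j :
B_i = -\frac{\partial V}{\partial x_i}
    = \frac{C_m M}{G\rho} \sum_{j} \hat{m}_j\, \Gamma_{ij}.
```

This is why a known closed-form gravity gradient tensor for the line segment immediately yields its vector magnetic field, as the abstract describes.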

Analysis of a Queueing Model with a Two-stage Group-testing Policy (이단계 그룹검사를 갖는 대기행렬모형의 분석)

  • Won Seok Yang
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.4 / pp.53-60 / 2022
  • In a group-testing method, instead of testing each sample (for example, blood) individually, a batch of samples is pooled and tested simultaneously. If the pooled test is positive (defective), each sample is tested individually; if it is negative (good), testing ends with the single pooled test because all samples in the batch are negative. This paper considers a queueing system with a two-stage group-testing policy. Samples arrive at the system according to a Poisson process. The system has a single server that starts a two-stage group test on a batch whenever the number of samples in the system reaches a predetermined size. In the first stage, the samples are pooled and tested simultaneously. If the pooled test is negative, testing ends; if positive, the samples are divided into two equally sized subgroups, and each subgroup undergoes a group test in the second stage. The server performs pooled tests and individual tests sequentially. The testing times of a sample and of a batch follow general distributions. In this paper, we derive the steady-state probability generating function of the system size at an arbitrary time by applying a bulk queueing model. In addition, we present queueing performance metrics such as the offered load, output rate, allowable input rate, and mean waiting time. In numerical examples with various prevalence rates, we show that the two-stage group-testing system can be more efficient than a one-stage group-testing system or an individual-testing system in terms of allowable input rates and waiting time. The two-stage group-testing system considered in this paper is very simple, so it is expected to be applicable to fields such as COVID-19 testing.
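The per-batch test counts behind the policy above are simple expectations. The sketch below computes them for a batch of n samples with independent prevalence p, assuming (as one reading of the abstract) that positive subgroups are then tested individually; it covers only the testing arithmetic, not the queueing analysis the paper actually performs.

```python
def one_stage_tests(n, p):
    """Expected tests for classic (Dorfman) one-stage group testing of a
    batch of n samples with prevalence p: one pooled test, plus n
    individual tests whenever the pool is positive."""
    return 1 + n * (1 - (1 - p) ** n)

def two_stage_tests(n, p):
    """Expected tests for a two-stage policy (n even): one pooled test; if
    positive, two subgroup pooled tests; each positive subgroup is then
    tested individually. An illustrative model of the testing count only."""
    half = n // 2
    full_pos = 1 - (1 - p) ** n        # P(at least one positive in the batch)
    half_pos = 1 - (1 - p) ** half     # P(a given subgroup pool is positive)
    return 1 + 2 * full_pos + 2 * half * half_pos
```

At low prevalence (say n = 16, p = 0.01) the two-stage count is smaller than the one-stage count, which is smaller than testing all 16 individually, consistent with the abstract's efficiency findings.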

Therapeutic Duplication as a Medication Error Risk in Fixed-Dose Combination Drugs for Dyslipidemia: A Nationwide Study

  • Wonbin Choi;Hyunji Koo;Kyeong Hye Jeong;Eunyoung Kim;Seung-Hun You;Min-Taek Lee;Sun-Young Jung
    • Korean Journal of Clinical Pharmacy / v.33 no.3 / pp.168-177 / 2023
  • Background & Objectives: Fixed-dose combinations (FDCs) offer advantages in adherence and cost-effectiveness compared to free combinations (FCs), but they can also complicate the prescribing process, potentially leading to therapeutic duplication (TD). This study aimed to identify the prescribing patterns of FDCs for dyslipidemia and investigate their associated risk of TD. Methods: This was a retrospective cohort study involving drugs that included statins, using Health Insurance Review & Assessment Service-National Patient Sample (HIRA-NPS) data from 2018. The unit of analysis was a prescription claim. The primary outcome was TD. The risk ratio of TD was calculated and adjusted for patient, prescriber, and the number of cardiovascular drugs prescribed using a multivariable Poisson model. Results: Our study included 252,797 FDC prescriptions and 515,666 FC prescriptions. Of the FDC group, 46.52% were male patients and 56.21% were aged 41 to 65. Ezetimibe was included in 71.61% of the FDC group, but only 0.25% of the FC group. TD occurred in 0.18% of the FDC group, and the adjusted risk ratio of TD in FDC prescriptions compared to FC was 6.44 (95% CI 5.30-7.82). Conclusions: Prescribing FDCs for dyslipidemia was associated with a higher risk of TD compared to free combinations. Despite the relatively low absolute prevalence of TD, the findings underline the necessity for strategies to mitigate this risk when prescribing FDCs for dyslipidemia. Our study suggests the potential utility of Clinical Decision Support Systems and standardizing nomenclature in reducing medication errors, providing valuable insights for clinical practice and future research.
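To make the reported estimand concrete: a risk ratio with a 95% CI on the log scale can be computed from aggregate counts with the standard large-sample formula below. The paper's 6.44 (5.30-7.82) came from a multivariable Poisson model adjusting for patient, prescriber, and drug-count covariates, so this crude, unadjusted version is only an illustration of what the quantity means.

```python
import math

def crude_risk_ratio(events_a, n_a, events_b, n_b):
    """Unadjusted risk ratio of group A vs. group B with a large-sample
    95% CI on the log scale; toy counts only, not the study's data."""
    r_a, r_b = events_a / n_a, events_b / n_b
    rr = r_a / r_b
    # standard error of log(RR) for binomial-type counts
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi
```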

The Comparative Study of Software Optimal Release Time for the Distribution Based on Shape parameter (형상모수에 근거한 소프트웨어 최적방출시기에 관한 비교 연구)

  • Shin, Hyun-Cheul;Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information / v.14 no.8 / pp.1-9 / 2009
  • In this paper, we study the decision problem of optimal release policies: after testing a software system in the development phase, when should it be transferred to the user? Because of the possibility of introducing new faults when correcting or modifying the software, infinite-failure non-homogeneous Poisson process models are presented, and we propose optimal release policies for life distributions with a fixed shape parameter, which can capture the increasing or decreasing nature of the failure occurrence rate per fault. We discuss optimal software release policies that minimize the total average software cost of development and maintenance under the constraint of satisfying a software reliability requirement. In a numerical example, after applying a trend test and estimating the parameters by maximum likelihood estimation from inter-failure time data, the optimal software release time is estimated.
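The cost trade-off driving the release decision can be sketched with a concrete NHPP. The Goel-Okumoto mean value function and the cost weights below are illustrative stand-ins, not the paper's shape-parameter models: fixing a fault during testing is cheap, fixing one in the field is expensive, and testing time itself costs money, so the expected total cost has an interior minimum.

```python
import math

def mean_value(t, a=100.0, b=0.1):
    """Goel-Okumoto NHPP mean value function m(t) = a(1 - exp(-bt)); one
    standard NHPP form, with arbitrary illustrative parameters."""
    return a * (1.0 - math.exp(-b * t))

def total_cost(t, c_test=1.0, c_field=5.0, c_time=0.5, horizon=200.0):
    """Expected cost of releasing at time t: faults fixed during testing
    cost c_test each, faults escaping to the field cost c_field each, and
    testing costs c_time per unit time (hypothetical weights)."""
    return (c_test * mean_value(t)
            + c_field * (mean_value(horizon) - mean_value(t))
            + c_time * t)

# crude grid search for the cost-minimizing release time on [0, 200]
best_t = min((0.5 * k for k in range(401)), key=total_cost)
```

Releasing immediately leaves all faults for expensive field fixes; testing forever pays for time long after the fault-detection rate has decayed, so the optimum sits in between.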

Drivers for Technology Transfer of Government-funded Research Institute: Focusing on Food Research and Development Projects (정부출연연구기관 식품연구개발사업의 기술이전 성과동인 분석)

  • Mirim Jeong;Seungwoon Kim
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.4 / pp.39-52 / 2023
  • In this study, project information from a government-funded research institute in the food field was collected and analyzed to systematically identify the factors affecting the transfer of the technological achievements of a public research institute to the private sector. This study hypothesized that human resources, financial resources, and technological characteristics, as input factors of R&D projects, affect output factors such as the research papers and patents those projects produce, and that these outputs in turn drive technology transfer as an R&D outcome. Linear regression and Poisson regression analyses were conducted to empirically and sequentially investigate the relationships among the inputs, outputs, and outcomes of R&D projects. The results are as follows. First, the principal investigator's career and the number of participating researchers, as human resource factors, influence both the number of SCI (Science Citation Index) papers and the number of patent registrations. Second, the research duration and the research expenses for the current year influence the number of SCI papers and patent registrations, which are the main outputs of R&D projects. Third, the technology life cycle affects the number of SCI papers and patent registrations. Lastly, the higher the number of SCI papers and patent registrations, the greater the number of technology transfers and the larger the technology transfer contract amounts.
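Poisson regression is the natural model here because the outputs (paper, patent, and transfer counts) are non-negative integers. A minimal sketch of its objective, with toy data rather than the study's: the log of the expected count is linear in the input factor, and the fit maximizes the log-likelihood below.

```python
import math

def poisson_loglik(b0, b1, xs, ys):
    """Log-likelihood (up to the constant -log(y!)) of a Poisson regression
    log E[y] = b0 + b1*x; xs could stand in for an input such as team size
    or budget, ys for a count output -- hypothetical names and data."""
    ll = 0.0
    for x, y in zip(xs, ys):
        eta = b0 + b1 * x
        ll += y * eta - math.exp(eta)
    return ll

# toy data with a clearly positive input-output relationship
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 2.0, 4.0, 7.0, 12.0]
```

On data like this, a positive slope attains a higher likelihood than a zero or negative one, which is the sense in which the study's inputs "influence" the count outputs.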

A Comparative Study on Reliability Attributes for Software Reliability Model Dependent on Lindley and Erlang Life Distribution (랜들리 및 어랑 수명분포에 의존한 소프트웨어 신뢰성 모형에 대한 신뢰도 속성 비교 연구)

  • Yang, Tae-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.5 / pp.469-475 / 2017
  • Software reliability is one of the most basic and essential problems in software development. To describe the software failure phenomenon, the intensity function, the instantaneous failure rate of a non-homogeneous Poisson process, may be constant, non-increasing, or non-decreasing in the failure time. In this study, we compared the reliability performance, in the software product testing process, of a software reliability model using the Lindley lifetime distribution, whose intensity function follows a decreasing pattern, with one using the Erlang lifetime distribution, whose intensity increases and then decreases. To identify the software failure phenomenon, maximum likelihood estimation was applied for parameter estimation. Software reliability was then compared and evaluated using software failure interval time data. As a result, the reliability of the Lindley model is higher than that of the Erlang distribution model; within the Erlang model, the higher the shape parameter, the higher the reliability. This study can help software design departments by applying various life distributions and shape parameters, and by providing software reliability attribute data and basic knowledge for software reliability models based on software failure analysis.
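The two intensity shapes the abstract contrasts are easy to see from the densities themselves. A common construction (one assumption here; the paper's exact parameterization may differ) takes the NHPP mean value as m(t) = a·F(t), so the intensity is a·f(t): the Lindley density with θ ≥ 1 decreases in t, while the Erlang density with shape k ≥ 2 rises and then falls.

```python
import math

def lindley_pdf(t, theta):
    """Lindley(theta) density f(t) = theta^2/(1+theta) * (1+t) * exp(-theta*t);
    for theta >= 1 it is decreasing in t, matching the decreasing intensity
    pattern the abstract attributes to the Lindley model."""
    return theta ** 2 / (1.0 + theta) * (1.0 + t) * math.exp(-theta * t)

def erlang_pdf(t, k, lam):
    """Erlang(k, lam) density; for k >= 2 it increases and then decreases."""
    return lam ** k * t ** (k - 1) * math.exp(-lam * t) / math.factorial(k - 1)

def nhpp_intensity(t, a, pdf, *args):
    """Intensity lambda(t) = a * f(t) of a finite-failure NHPP with mean
    value m(t) = a * F(t) -- one common construction, used illustratively."""
    return a * pdf(t, *args)
```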

Optimal Release Problems based on a Stochastic Differential Equation Model Under the Distributed Software Development Environments (분산 소프트웨어 개발환경에 대한 확률 미분 방정식 모델을 이용한 최적 배포 문제)

  • Lee Jae-Ki;Nam Sang-Sik
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.7A / pp.649-658 / 2006
  • Recently, software development has adopted new approaches in various forms: client-server systems and web programming, object-oriented concepts, and distributed development over network environments. Distributed development technology and object-oriented methodology have accordingly drawn growing attention; these technologies improve software quality and productivity and reduce development effort. We further consider distributed software development across many workstations. In this paper, we discuss the optimal release problem based on a stochastic differential equation (SDE) model for distributed software development environments. In the past, software reliability was estimated only roughly over the development process and approached through reliability estimation as testing progressed. Here, we determine optimal release times by two methods: first, a software reliability growth model (SRGM) with an error-counting model in the fault detection phase, based on an NHPP; second, modeling fault detection as a continuous random variable by an SDE. We determine the optimal release time as the minimum-cost point given the detected failure data and debugging fault data during the system test and operational phases. In particular, we discuss the limitations of the reliability estimates, considering the probability distribution of the total software cost.
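The SDE view of fault detection can be simulated with the standard Euler-Maruyama scheme. The drift and diffusion forms passed in below are placeholders (a saturating detection curve with hypothetical total-fault and rate parameters), not the paper's model; with zero diffusion the scheme reduces to the deterministic growth curve.

```python
import math
import random

def euler_maruyama(drift, diffusion, x0, dt, steps, seed=0):
    """Simulate dX = drift(X) dt + diffusion(X) dW with the Euler-Maruyama
    scheme, returning the sampled path."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
    return path

# deterministic check: zero diffusion reduces to the ODE dx/dt = b*(a - x),
# a saturating fault-detection curve (a = total faults, b = detection rate;
# both values are illustrative assumptions)
path = euler_maruyama(lambda x: 0.2 * (100.0 - x), lambda x: 0.0, 0.0, 0.1, 300)
```

A positive diffusion term turns the smooth detection curve into a noisy one, which is exactly the refinement over the pure NHPP error-counting model that the abstract describes.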

Development of a Traffic Accident Prediction Model for Urban Signalized Intersections (도시부 신호교차로 안전성 향상을 위한 사고예측모형 개발)

  • Park, Jun-Tae;Lee, Soo-Beom;Kim, Jang-Wook;Lee, Dong-Min
    • Journal of Korean Society of Transportation / v.26 no.4 / pp.99-110 / 2008
  • It is commonly estimated that there is a much higher potential for accidents at a crossroads than along a single road because of its many conflict points. According to 2006 figures from the National Police Agency, the number of traffic accidents at crossroads is increasing sharply compared to that along single roads. Crossroads with traffic signals, in particular, have more varied factors influencing traffic accidents and leave much more room for improvement than those without; a noticeable safety effect could therefore be achieved if proper countermeasures against the hazards at a crossroads were taken, together with an estimate of the causes of accidents. This research developed models for accident forecasting and accident intensity using accident history data and site inspections, targeting four selected downtown crossroads with traffic signals. The research proceeded in four stages: first, analyze previously examined accident models; second, select variables affecting traffic accidents; third, develop a traffic accident forecasting model using a statistics-based methodology; and fourth, verify the models.