• Title/Summary/Keyword: Time-Weighting

Quantification Methods for Software Entity Complexity with Hybrid Metrics (혼성 메트릭을 이용한 소프트웨어 개체 복잡도 정량화 기법)

  • Hong, Euii-Seok;Kim, Tae-Guun
    • The KIPS Transactions:PartD
    • /
    • v.8D no.3
    • /
    • pp.233-240
    • /
    • 2001
  • As software technology progresses and software quantification becomes more important, many metrics have been proposed to quantify a variety of system entities. These metrics can be classified into two forms: scalar metrics and metric vectors. Though some recent studies have pointed out the composition problem of the scalar metric form, many scalar metrics are used successfully in software development organizations because of their practical applicability. In this paper, it is concluded that a hybrid metric form weighting external complexity is the most suitable scalar metric form. With this concept, a general framework for constructing hybrid metrics, independent of the development methodology and target system type, is proposed. This framework was successfully applied in two projects, which quantified the analysis phase of the structured methodology and the design phase of an object-oriented real-time system, respectively. Using this framework, any organization can quantify system entities in a short time.
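The core idea of a hybrid scalar metric, combining complexity components with a heavier weight on external complexity, can be sketched as follows. The component names, weight values, and inputs are hypothetical illustrations, not the paper's actual framework:

```python
# Sketch of a hybrid scalar metric that weights external complexity more
# heavily than internal complexity. The 0.3/0.7 weights and the sample
# entity values are hypothetical; the framework derives weights per
# methodology and target system type.

def hybrid_complexity(internal: float, external: float,
                      w_internal: float = 0.3, w_external: float = 0.7) -> float:
    """Combine two complexity components into a single scalar value."""
    return w_internal * internal + w_external * external

# Example: an entity with internal complexity 4.0 and external complexity 9.0.
score = hybrid_complexity(4.0, 9.0)   # 0.3*4.0 + 0.7*9.0 = 7.5
```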

Development of Monte Carlo Simulation Code for the Dose Calculation of the Stereotactic Radiosurgery (뇌 정위 방사선수술의 선량 계산을 위한 몬테카를로 시뮬레이션 코드 개발)

  • Kang, Jeongku;Lee, Dong Joon
    • Progress in Medical Physics
    • /
    • v.23 no.4
    • /
    • pp.303-308
    • /
    • 2012
  • A Geant4-based Monte Carlo code for stereotactic radiosurgery applications was developed. The probability density function and cumulative density function used to determine the incident photon energy were calculated from a pre-calculated energy spectrum of the linac by multiplying the weighting factors corresponding to the energy bins. A messenger class was used to transfer the various MLC fields generated by the planning system. The rotation matrices rotateX and rotateY were used to simulate gantry and table rotation, respectively. We constructed an accelerator world and a phantom world inside the main world coordinate system so that the accelerator and phantom can be rotated independently. We used a dicomHandler class object to convert the DICOM binary file into a text file containing the matrix dimensions, pixel size, pixel HU values, bit size, padding value, and high-bit order, and reworked this class to function correctly. We also reworked the PrimaryGeneratorAction class to reduce the calculation time. Because of the huge calculation time, we discarded the search process of the THitsMap and instead accessed its elements directly, from the first to the last, to produce the result files.
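The energy-sampling step described above (weight the spectrum per bin, form the CDF, invert it with a random number) can be sketched in a few lines. The spectrum, bin energies, and weighting factors below are made-up placeholders, not values from the paper:

```python
import bisect
import random

# Illustrative pre-calculated linac spectrum (bin center energies in MeV,
# relative fluence per bin) and per-bin weighting factors -- all assumed.
energy_bins = [0.5, 1.0, 2.0, 4.0, 6.0]
spectrum    = [0.10, 0.30, 0.35, 0.20, 0.05]
weights     = [1.0, 1.0, 1.2, 1.1, 0.9]

# PDF = spectrum * weighting factors, normalized; CDF = running sum.
pdf = [s * w for s, w in zip(spectrum, weights)]
total = sum(pdf)
pdf = [p / total for p in pdf]
cdf = []
running = 0.0
for p in pdf:
    running += p
    cdf.append(running)

def sample_energy(rng=random):
    """Draw one incident photon energy by inverse transform sampling."""
    u = rng.random()                    # uniform in [0, 1)
    i = bisect.bisect_left(cdf, u)      # first bin whose CDF >= u
    return energy_bins[i]
```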

An efficient channel searching method based on channel list for independent type cognitive radio systems (독립형 무선 인지 시스템에서 채널 목록 기반의 효과적 채널 검색)

  • Lee, Young-Doo;Koo, In-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.7
    • /
    • pp.1422-1428
    • /
    • 2009
  • In this paper, we consider an independent type cognitive radio system in which secondary users can utilize empty channels that are not currently used by the primary users licensed to those channels. In previous works, secondary users search channels sequentially or randomly to detect primary-user activity on each channel. These channel searching methods, however, are not well suited to the characteristics of the wireless environment. Therefore, we propose a channel searching method based on a channel list, with the aim of reducing the channel searching time and improving the throughput of secondary users. In the proposed method, we first determine a weighting value for each channel based on the history of primary-user activity on that channel, and add the weighting value to the current channel state buffer. We then search for an empty channel, from the channel with the smallest value to the one with the biggest value. Finally, we compare the performance of the proposed method with that of the sequential and random channel searching methods in terms of average channel searching time and average number of transmissions of the secondary user.
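The list-based search order can be sketched directly from the description: sense channels in ascending order of their history-derived weight. The history values and the busy-sensing stub are illustrative assumptions:

```python
# Sketch of the channel-list search: each channel carries a buffer value
# built from observed primary-user activity (higher = historically busier),
# and the secondary user senses channels from the smallest value upward.
# The history values and the is_busy predicate are illustrative assumptions.

channel_history = {1: 0.8, 2: 0.1, 3: 0.5, 4: 0.3}

def search_order(history):
    """Channels sorted from smallest weight (likely idle) to biggest."""
    return sorted(history, key=history.get)

def find_empty_channel(history, is_busy):
    """Sense channels in weighted order; return the first idle one."""
    for ch in search_order(history):
        if not is_busy(ch):
            return ch
    return None

order = search_order(channel_history)   # channel 2 is sensed first
```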

Non-hierarchical Clustering based Hybrid Recommendation using Context Knowledge (상황 지식을 이용한 비계층적 군집 기반 하이브리드 추천)

  • Baek, Ji-Won;Kim, Min-Jeong;Park, Roy C.;Jung, Hoill;Chung, Kyungyong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.20 no.3
    • /
    • pp.138-144
    • /
    • 2019
  • In modern society, people weigh their travel destinations seriously against constraints such as time and cost. In this paper, we propose a non-hierarchical clustering based hybrid recommendation using context knowledge. The proposed method recommends knowledge about preferred travel places in a personalized way, according to the user's location, place, and weather. Based on 14 attributes from data collected through a survey, users with similar characteristics are grouped using the non-hierarchical clustering based hybrid recommendation. This makes the recommendation more accurate by weighting implicit and explicit data. Users can thus be recommended a preferred travel destination without spending unnecessary time. The performance evaluation uses accuracy, recall, and F-measure. The evaluation showed an accuracy of 0.636, a recall of 0.723, and an F-measure of 0.676.
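The weighting of implicit and explicit data within a cluster of similar users can be sketched as a simple weighted score. The weights, place names, and scores below are illustrative assumptions, and the clustering itself (non-hierarchical, e.g. k-means over the 14 attributes) is assumed to have been done already:

```python
# Sketch of the hybrid scoring step: within one cluster of similar users,
# rank candidate places by a weighted combination of explicit data (survey
# ratings) and implicit data (e.g. normalized visit frequency). All values
# and the 0.6/0.4 weights are hypothetical.

def hybrid_score(explicit, implicit, w_explicit=0.6, w_implicit=0.4):
    return w_explicit * explicit + w_implicit * implicit

cluster_places = {
    "beach":    {"explicit": 4.5, "implicit": 0.7},
    "mountain": {"explicit": 3.8, "implicit": 0.9},
}

ranked = sorted(cluster_places,
                key=lambda p: hybrid_score(cluster_places[p]["explicit"],
                                           cluster_places[p]["implicit"]),
                reverse=True)
```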

Particle Simulation Modelling of a Beam Forming Structure in Negative-Ion-Based Neutral Beam Injector (중성빔 입사장치에서 빔형성 구조의 입자모사 모형)

  • Park, Byoung-Lyong;Hong, Sang-Hee
    • Nuclear Engineering and Technology
    • /
    • v.21 no.1
    • /
    • pp.40-47
    • /
    • 1989
  • For the effective design of a beam forming structure of the negative-ion-based neutral beam injector, a computer program based on a particle simulation model is developed to calculate charged particle motions in electrostatic fields. The motions of negative ions inside the acceleration tube of a multiple-aperture triode are computed at finite time steps. The electrostatic potentials are obtained from Poisson's equation by the finite difference method, and the successive over-relaxation method is used to solve the resulting matrix equation. Particle and force weighting methods are used on a cloud-in-cell model. The optimum design of the beam forming structure has been studied by running this computer code for various electrode conditions. The effects of the acceleration-deceleration gap distance, the thickness of the deceleration electrode, and the shape of the acceleration electrode on beam trajectories are examined to find the minimum beam divergence. Some numerical illustrations are presented of the particle movements at finite time steps in the beam forming tubes. This particle simulation modelling shows that the shape of the acceleration electrode is the most significant factor in beam divergence.
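The field-solve step (finite-difference Poisson equation iterated with successive over-relaxation) can be sketched on a small 2-D grid. The grid size, boundary values, and relaxation factor below are illustrative; the paper applies this inside an accelerator-tube geometry:

```python
# Sketch of a successive over-relaxation (SOR) solve of the finite-difference
# Poisson equation  del^2 phi = -rho  on a square grid with fixed boundaries.
# Grid size, boundary potentials, and omega = 1.8 are illustrative choices.

def solve_poisson_sor(phi, rho, h=1.0, omega=1.8, iters=500):
    """In-place SOR iteration; boundary cells are held fixed."""
    n = len(phi)
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (phi[i + 1][j] + phi[i - 1][j] +
                             phi[i][j + 1] + phi[i][j - 1] +
                             h * h * rho[i][j])          # Gauss-Seidel value
                phi[i][j] += omega * (gs - phi[i][j])    # over-relaxed update
    return phi

# Example: 5x5 grid, no space charge, top edge held at 100 V.
n = 5
phi = [[100.0] * n] + [[0.0] * n for _ in range(n - 1)]
rho = [[0.0] * n for _ in range(n)]
solve_poisson_sor(phi, rho)
```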

Design and Implementation of Image Detection System Using Vertical Histogram-Based Shadow Removal Algorithm (수직 히스토그램 기반 그림자 제거 알고리즘을 이용한 영상 감지 시스템 설계 및 구현)

  • Jang, Young-Hwan;Lee, Jae-Chul;Park, Seok-Cheon;Lee, Bong-Gyou;Lee, Sang-Soon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.1
    • /
    • pp.91-99
    • /
    • 2020
  • For shadow removal, the base technology of image detection systems, real-time image processing suffers reduced processing speed due to calculation complexity, and it is also sensitive to illumination or light because shadows are removed using only the difference in brightness. Therefore, in this paper, we improved real-time performance by reducing the calculation complexity through the removal of the weighting part, in order to solve this problem of the conventional system. In addition, we designed and evaluated an image detection system based on a shadow removal algorithm that improves the shadow recognition rate using a vertical histogram. The evaluation confirmed that the average processing speed improved by approximately 5.6 ms and the detection rate improved by approximately 5.5 %p compared with the conventional image detection system.
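A vertical histogram in this context is a per-column count of foreground pixels; columns dominated by a low shadow tail show markedly smaller counts than the object body and can be trimmed by thresholding. The mask and the threshold ratio below are illustrative assumptions, not the paper's algorithm verbatim:

```python
# Sketch of vertical-histogram shadow trimming on a binary foreground mask:
# count foreground pixels per column, then zero out columns whose count
# falls below a fraction of the maximum. Mask and ratio are assumptions.

def vertical_histogram(mask):
    """Count foreground pixels in each column of a binary mask."""
    cols = len(mask[0])
    return [sum(row[c] for row in mask) for c in range(cols)]

def remove_shadow_columns(mask, ratio=0.5):
    """Zero out columns whose count falls below ratio * max count."""
    hist = vertical_histogram(mask)
    thresh = ratio * max(hist)
    return [[v if hist[c] >= thresh else 0 for c, v in enumerate(row)]
            for row in mask]

mask = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 1],   # the rightmost column mimics a low shadow tail
]
cleaned = remove_shadow_columns(mask)
```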

Performance Improvement of CPSP Based TDOA Estimation Using the Preemphasis (프리엠퍼시스를 이용한 CPSP 기반의 도달시간차이 추정 성능 개선)

  • Kwon, Hong-Seok;Bae, Keun-Sung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.5
    • /
    • pp.461-470
    • /
    • 2009
  • We investigate and analyze the problems encountered in frame-based estimation of TDOA (time difference of arrival) using the CPSP function. Spectral leakage occurring when a speech signal is framed with a rectangular window can make estimation of the CPSP spectrum inaccurate, while framing with other windows to reduce the spectral leakage distorts the signal due to the asynchronous weighting across the frame, specifically at both ends of the frame. These problems degrade the performance of CPSP-based TDOA estimation. In this paper, we propose a method to alleviate these problems by pre-emphasis of the speech signal: pre-emphasis reduces the dynamic range of the speech spectrum and thereby the influence of the spectral leakage. To validate the proposed method, we carry out TDOA estimation experiments under various noise and reverberation conditions. Experimental results show that framing the pre-emphasized microphone output with a rectangular window achieves a higher success rate of TDOA estimation than any other framing method.
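The pre-emphasis step itself is a first-order high-pass filter, y[n] = x[n] - a·x[n-1], which flattens the spectral tilt of speech and so reduces its spectral dynamic range before the CPSP computation. A minimal sketch, assuming the common coefficient a = 0.97 (the paper's exact value may differ):

```python
# Sketch of pre-emphasis: a first-order high-pass filter that attenuates
# low-frequency (DC-like) content, reducing the spectral dynamic range of
# speech before framing and CPSP computation. a = 0.97 is an assumption.

def preemphasis(x, a=0.97):
    return [x[0]] + [x[n] - a * x[n - 1] for n in range(1, len(x))]

signal = [1.0, 1.0, 1.0, 1.0]       # a DC-like (low-frequency) segment
emphasized = preemphasis(signal)    # DC content is strongly attenuated
```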

Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.55-75
    • /
    • 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in various application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of data elements in a sequence is considered, so it can easily find simple sequential patterns but has limited ability to find the more interesting sequential patterns widely used in real-world applications. One of the essential research topics addressing this limit is weighted sequential pattern mining, in which not only the generation order of each data element but also its weight is considered in order to get more interesting sequential patterns. In recent years, data has increasingly taken the form of continuous data streams rather than finite stored data sets in various application fields, and the database research community has begun focusing its attention on processing over data streams. A data stream is a massive unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once to analyze the data stream, and the memory usage for data stream analysis should be finitely restricted although new data elements are continuously generated. Moreover, newly generated data elements should be processed as fast as possible to produce an up-to-date analysis result that can be instantly utilized upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering the changes in the form of data generated in real-world application fields, many researches have been actively performed to find various kinds of knowledge embedded in data streams. They mainly focus on efficient mining of frequent itemsets and sequential patterns over data streams, which have been proven to be useful in conventional data mining for a finite data set. In addition, mining algorithms have been proposed to efficiently reflect the changes of data streams over time in their mining results. However, they have targeted naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, taking no interest in mining novel interesting patterns that better express the characteristics of target data streams. Therefore, defining novel interesting patterns and developing a mining method that finds them can be a valuable research topic in the field of mining data streams, effectively used to analyze recent data streams. This paper proposes a gap-based weighting approach for sequential patterns and a mining method for weighted sequential patterns over sequence data streams via the weighting approach. A gap-based weight of a sequential pattern can be computed from the gaps between data elements in the sequential pattern, without any pre-defined weight information. That is, in the approach, the gaps between data elements in each sequential pattern as well as their generation orders are used to compute the weight of the sequential pattern, which helps to find more interesting and useful sequential patterns. Since most computer application fields now generate data as data streams rather than finite data sets, the proposed method mainly focuses on sequence data streams.
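The gap-based idea, that a pattern whose elements occur close together should weigh more than one whose elements are spread out, can be sketched with a simple reciprocal-of-average-gap weight. The exact weighting function in the paper may differ; this form is an illustrative assumption:

```python
# Sketch of a gap-based weight: occurrences with small gaps between the
# pattern's elements yield a larger weight, with no pre-defined per-item
# weights needed. The reciprocal-of-average-gap form is an assumption.

def gap_based_weight(positions):
    """positions: occurrence indices of the pattern's elements, in order."""
    gaps = [positions[i + 1] - positions[i] for i in range(len(positions) - 1)]
    avg_gap = sum(gaps) / len(gaps)
    return 1.0 / avg_gap

# Pattern <a, b, c> found at indices 2, 3, 4 (tight) vs. 2, 7, 14 (loose).
tight = gap_based_weight([2, 3, 4])     # average gap 1  -> weight 1.0
loose = gap_based_weight([2, 7, 14])    # average gap 6  -> weight 1/6
```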

A Study on Mass Rescue Operation Utilizing an Oil Boom (오일펜스를 활용한 다수 인명의 구조에 관한 연구)

  • Jeong, Bong Hun;Choi, Hyun Kue;Park, Gap Jun;Ha, Seung Young
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.24 no.6
    • /
    • pp.686-693
    • /
    • 2018
  • After the Sewol ferry sinking in 2014, public interest in safety at sea increased. In order to save and secure the initial response time required for sea rescues, not only the rescue organization but also the victims need to hold on through the golden time until rescue personnel arrive. The purpose of this study was to investigate ways to maintain the psychological stability of victims during a mass rescue operation by using the oil boom installed on board oil spill response vessels. Through buoyancy tests and deployments of oil booms at sea, it was confirmed that two adults weighing 70 kg each could be kept afloat per meter of oil boom when a lifeline was installed on the side of the boom, and that four persons weighing 70 kg each could be kept afloat using both sides of the boom. It was also confirmed that three adults weighing 70 kg each could be kept afloat per eight meters of boom when riding on top of it. As a method of rescue, the fastest and most accurate way to reach victims was found to be a rescue boat holding the rear end of the oil boom and leading it to the victims. In conclusion, a rescue team can utilize the oil boom installed on board an oil spill response vessel located near the marine accident site to save and secure the initial response time required for the rescue team to arrive. Victims in distress holding onto the lifeline or riding on top of the oil boom can stay afloat at sea and maintain their psychological stability until the mass rescue operation begins.

Low Resolution Depth Interpolation using High Resolution Color Image (고해상도 색상 영상을 이용한 저해상도 깊이 영상 보간법)

  • Lee, Gyo-Yoon;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.2 no.4
    • /
    • pp.60-65
    • /
    • 2013
  • In this paper, we propose a high-resolution disparity map generation method using a low-resolution time-of-flight (TOF) depth camera and a color camera. The TOF depth camera is efficient since it measures the range information of objects in real time using an infrared (IR) signal; it quantizes the range information and provides a depth image. However, the TOF depth camera has problems such as noise and lens distortion, and its output resolution is too small for 3D applications. Therefore, it is essential not only to reduce the noise and distortion but also to enlarge the resolution of the TOF depth image. Our proposed method generates a depth map for a color image using the TOF camera and the color camera simultaneously. We warp the depth value at each pixel to the color image position. The color image is segmented using the mean-shift segmentation method. We define a cost function that consists of color values and segmented color values, and apply a weighted average filter whose weighting factor is defined by the random walk probability using the defined cost function of the block. Experimental results show that the proposed method generates the depth map efficiently and that good virtual view images can be reconstructed from it.
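The color-guided weighted average can be sketched as follows: each missing depth pixel is filled from nearby known depth samples, weighted by how similar the guide color is at those samples. A Gaussian weight on color difference stands in here for the paper's random-walk probability, which is an assumption of this sketch:

```python
import math

# Sketch of color-guided depth filling: missing depth values (None) are
# filled by a weighted average of 8-neighborhood depth samples, where the
# weight decays with the color difference between target and neighbor.
# The Gaussian weight stands in for the random-walk probability (assumed).

def fill_depth(color, depth, sigma=10.0):
    """color: 2-D grayscale guide; depth: same-size grid, None = missing."""
    h, w = len(color), len(color[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if depth[y][x] is not None:
                continue
            num = den = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny][nx] is not None:
                        wgt = math.exp(-((color[y][x] - color[ny][nx]) ** 2)
                                       / (2 * sigma ** 2))
                        num += wgt * depth[ny][nx]
                        den += wgt
            if den > 0:
                out[y][x] = num / den
    return out

# Example: the missing center column borrows depth from the color-similar
# left side (color 100, depth 5.0), not the dissimilar right (200, 9.0).
color = [[100, 100, 200], [100, 100, 200]]
depth = [[5.0, None, 9.0], [5.0, None, 9.0]]
filled = fill_depth(color, depth)
```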
