• Title/Summary/Keyword: non-real time process


Robust Dynamic Projection Mapping onto Deforming Flexible Moving Surface-like Objects (유연한 동적 변형물체에 대한 견고한 다이내믹 프로젝션맵핑)

  • Kim, Hyo-Jung;Park, Jinho
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.6
    • /
    • pp.897-906
    • /
    • 2017
  • Projection mapping, also known as Spatial Augmented Reality (SAR), has attracted much attention recently and is used in many fields, since it can augment physical objects with various projected virtual replications. However, conventional approaches to projection mapping face several limitations. The geometric deformation of the target object is not considered, and the movements of flexible, paper-like objects, such as folding and bending during natural interaction, are hard to handle. Precise registration and tracking have also been a cumbersome process in the past. While there has been much research on projection mapping onto static objects, dynamic projection mapping that can keep tracking a moving flexible target and align the projection at an interactive rate is still a challenge. Therefore, this paper proposes a new method using Unity3D and ARToolKit for high-speed, robust tracking and dynamic projection mapping onto non-rigid deforming objects, rapidly and interactively. The method consists of four stages: forming a cubic Bezier surface, rendering the transformation values, recognizing and tracking multiple markers, and real-time webcam imaging. Users can fold, curve, bend, and twist the object to interact with it. The method achieves three high-quality results. First, the system can detect strong deformation of objects. Second, it reduces the occlusion error, which in turn reduces misalignment between the target object and the projected video. Lastly, the accuracy and robustness of the method allow the result to be projected exactly onto the target object in real time, with high-speed and precise transformation tracking.
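The first stage, forming a cubic Bezier surface, can be sketched as follows; this is a minimal stand-alone illustration in pure Python with hypothetical control points, not the authors' Unity3D implementation:

```python
from math import comb

def bernstein(i, n, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_surface_point(ctrl, u, v):
    """Evaluate a bicubic Bezier surface at parameters (u, v).
    ctrl is a 4x4 grid of (x, y, z) control points."""
    x = y = z = 0.0
    for i in range(4):
        for j in range(4):
            w = bernstein(i, 3, u) * bernstein(j, 3, v)
            px, py, pz = ctrl[i][j]
            x += w * px; y += w * py; z += w * pz
    return (x, y, z)

# A flat 4x4 control grid on the z = 0 plane (hypothetical example data);
# bending the sheet would amount to moving these control points.
flat = [[(i, j, 0.0) for j in range(4)] for i in range(4)]
print(bezier_surface_point(flat, 0.5, 0.5))  # centre of the patch
```

Deforming the tracked sheet then corresponds to updating the 16 control points per frame and re-evaluating the surface for projection.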

Lagrangian Particle Dispersion Modeling Intercomparison : Internal Versus Foreign Modeling Results on the Nuclear Spill Event (방사능 누출 사례일의 국내.외 라그랑지안 입자확산 모델링 결과 비교)

  • 김철희;송창근
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.19 no.3
    • /
    • pp.249-261
    • /
    • 2003
  • A three-dimensional mesoscale atmospheric dispersion modeling system, consisting of the Lagrangian particle dispersion model (LPDM) and the meteorological mesoscale model (MM5), was employed to simulate the transport and dispersion of a non-reactive pollutant during the nuclear spill event that occurred from Sep. 30 to Oct. 3, 1999 in Tokaimura, Japan. For comparative analysis, two additional foreign mesoscale modeling systems, from NCEP (National Centers for Environmental Prediction) and DWD (Deutscher Wetterdienst), were also applied to assess the applicability of air pollution dispersion predictions. We noticed that the simulated horizontal wind directions and wind speeds from the three meteorological models showed remarkably different spatial variations, mainly due to their different horizontal resolutions. However, the dispersion process in LPDM was well characterized by the meteorological wind fields, and the time-dependent dilution factors ($\chi$/Q) were qualitatively simulated in accordance with each mesoscale meteorological wind field, suggesting that LPDM has potential for real-time control and optimization of urban air pollution provided that detailed meteorological wind fields are available. This paper mainly pertains to mesoscale modeling approaches, but the results imply that the resolution of the meteorological model and the use of an air quality model at the relevant scale lead to better prediction capability in local- or urban-scale air pollution modeling.
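The Lagrangian particle approach can be illustrated with a toy 2-D sketch: each particle is advected by the resolved wind and perturbed by a Gaussian random displacement standing in for turbulent diffusion. All numbers here (wind, time step, spread) are illustrative, not values from the MM5/LPDM runs:

```python
import random

def step_particles(particles, wind, dt, sigma):
    """Advance Lagrangian particles one time step: deterministic
    advection by the wind plus a Gaussian random displacement
    representing turbulent diffusion."""
    u, v = wind
    return [(x + u * dt + random.gauss(0.0, sigma),
             y + v * dt + random.gauss(0.0, sigma))
            for x, y in particles]

random.seed(0)
cloud = [(0.0, 0.0)] * 1000            # all particles released at the origin
for _ in range(10):                    # 10 steps of 60 s each
    cloud = step_particles(cloud, wind=(5.0, 0.0), dt=60.0, sigma=20.0)
mean_x = sum(p[0] for p in cloud) / len(cloud)
print(round(mean_x))  # cloud centre has advected ~3000 m downwind
```

Concentration (and hence a $\chi$/Q-style dilution factor) is then estimated by counting particles per grid cell.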

INNOVATION ALGORITHM IN ARMA PROCESS

  • Sreenivasan, M.;Sumathi, K.
    • Journal of applied mathematics & informatics
    • /
    • v.5 no.2
    • /
    • pp.373-382
    • /
    • 1998
  • Most of the work in Time Series Analysis is based on the Auto Regressive Integrated Moving Average (ARIMA) models presented by Box and Jenkins (1976). If the data exhibit no apparent deviation from stationarity and have a rapidly decreasing autocorrelation function, then a suitable ARMA(p,q) model is fit to the given data. Selection of the orders p and q is one of the crucial steps in Time Series Analysis. Most of the methods to determine p and q are based on the autocorrelation function and partial autocorrelation function, as suggested by Box and Jenkins (1976). Many new techniques have emerged in the literature, and most of them turn out to be of very little use in determining the orders p and q when both are non-zero. The Durbin-Levinson algorithm and the Innovation algorithm (Brockwell and Davis, 1987) are used as recursive methods for computing best linear predictors in an ARMA(p,q) model. These algorithms are modified to yield an effective method for ARMA model identification, so that the values of the orders p and q can be determined from them. The new method is developed, and its validity and usefulness are illustrated by many theoretical examples. This method can also be applied to real-world data.
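A minimal sketch of the Innovation algorithm for a zero-mean stationary series, taking kappa(i, j) = gamma(|i - j|); the recursion follows the standard Brockwell and Davis form, and the MA(1) autocovariance used below is only an illustrative check case, not the paper's identification procedure:

```python
def innovations(gamma, N):
    """Innovation algorithm (recursive one-step prediction) for a
    zero-mean stationary series with autocovariance gamma(h).
    Returns theta[n][j] (predictor coefficients) and v[n] (MSEs)."""
    kappa = lambda i, j: gamma(abs(i - j))
    v = [kappa(1, 1)]
    theta = {0: {}}
    for n in range(1, N + 1):
        theta[n] = {}
        for k in range(n):
            acc = kappa(n + 1, k + 1)
            for j in range(k):
                acc -= theta[k][k - j] * theta[n][n - j] * v[j]
            theta[n][n - k] = acc / v[k]
        v.append(kappa(n + 1, n + 1)
                 - sum(theta[n][n - j] ** 2 * v[j] for j in range(n)))
    return theta, v

# Check case: MA(1) with coefficient 0.5 and unit noise variance, so
# gamma(0) = 1.25, gamma(1) = 0.5; theta[n][1] should approach 0.5
# and v[n] should approach 1 as n grows.
g = lambda h: 1.25 if h == 0 else (0.5 if h == 1 else 0.0)
theta, v = innovations(g, 20)
print(round(theta[20][1], 4), round(v[20], 4))  # -> 0.5 1.0
```

In an identification setting, the pattern of which theta coefficients remain non-negligible hints at the MA order q.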

Adaptive Priority Queue-driven Task Scheduling for Sensor Data Processing in IoT Environments (사물인터넷 환경에서 센서데이터의 처리를 위한 적응형 우선순위 큐 기반의 작업 스케줄링)

  • Lee, Mijin;Lee, Jong Sik;Han, Young Shin
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.9
    • /
    • pp.1559-1566
    • /
    • 2017
  • Recently, in the IoT (Internet of Things) environment, real-time data collection through device sensors has increased with the emergence of various devices. Data collected in IoT environments are large in scale, have non-uniform generation cycles, and are unstructured. For this reason, distributed processing techniques are required to analyze IoT sensor data. However, if optimal scheduling of the data and processors is not considered in a distributed processing environment, the complexity of task assignment increases, and it is difficult to guarantee QoS (Quality of Service) for the sensor data. In this paper, we propose APQTA (Adaptive Priority Queue-driven Task Allocation), a method for efficiently processing the sensor data generated in IoT environments. APQTA separates the data into jobs and applies deadline-based priority allocation scheduling, guaranteeing QoS while increasing the efficiency of data processing.
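Deadline-based priority allocation can be illustrated with a min-heap keyed on each job's deadline, i.e. a plain earliest-deadline-first sketch; the sensor job names and deadlines are hypothetical, and this is not APQTA itself:

```python
import heapq

def edf_schedule(jobs):
    """Earliest-deadline-first ordering: jobs are (deadline, name)
    pairs; a min-heap keyed on the deadline yields the order in
    which jobs should be dispatched."""
    q = []
    for deadline, name in jobs:
        heapq.heappush(q, (deadline, name))
    order = []
    while q:
        order.append(heapq.heappop(q)[1])
    return order

sensor_jobs = [(30, "humidity"), (5, "smoke-alarm"), (15, "temperature")]
print(edf_schedule(sensor_jobs))  # ['smoke-alarm', 'temperature', 'humidity']
```

An adaptive variant would recompute the priority key as deadlines approach rather than fixing it at enqueue time.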

A restoration of the transfer error that used edge direction of an image (영상의 모서리 방향을 이용한 전송 오차의 복원)

  • Lee, Chang-Hee;Ryou, Hee-Sahm;Ra, Keuk-Hwan
    • 전자공학회논문지 IE
    • /
    • v.44 no.1
    • /
    • pp.15-19
    • /
    • 2007
  • This study aims to improve an error-concealment technique for transmission errors in still images, based on edge-directed interpolation within intra-frame correction. The proposed method is based on detecting the edge direction of a block and using it to match the damaged part to the surrounding undamaged parts. The remaining error pixels are interpolated in a final stage with a non-linear median filter applied to the neighboring data. The experimental results show good recovery quality and a low computation time, demonstrating the feasibility of real-time image processing with the proposed method.
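The final median-filter interpolation step can be illustrated with a simplified sketch that replaces each damaged pixel by the median of its valid 3x3 neighbours; the edge-direction matching itself is omitted, and the image data are hypothetical:

```python
def conceal(pixels, damaged):
    """Conceal damaged pixels with the median of their valid 3x3
    neighbours (a simplified stand-in for the paper's edge-directed
    interpolation followed by median filtering)."""
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for (y, x) in damaged:
        nb = [pixels[j][i]
              for j in range(max(0, y - 1), min(h, y + 2))
              for i in range(max(0, x - 1), min(w, x + 2))
              if (j, i) != (y, x) and (j, i) not in damaged]
        nb.sort()
        out[y][x] = nb[len(nb) // 2]   # median of the valid neighbours
    return out

img = [[10, 10, 10],
       [10, 99, 10],   # 99 is a corrupted sample
       [10, 10, 10]]
print(conceal(img, {(1, 1)})[1][1])  # -> 10
```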

Relationship among Degree of Time-delay, Input Variables, and Model Predictability in the Development Process of Non-linear Ecological Model in a River Ecosystem (비선형 시계열 하천생태모형 개발과정 중 시간지연단계와 입력변수, 모형 예측성 간 관계평가)

  • Jeong, Kwang-Seuk;Kim, Dong-Kyun;Yoon, Ju-Duk;La, Geung-Hwan;Kim, Hyun-Woo;Joo, Gea-Jae
    • Korean Journal of Ecology and Environment
    • /
    • v.43 no.1
    • /
    • pp.161-167
    • /
    • 2010
  • In this study, we implemented an experimental approach to ecological model development in order to emphasize the importance of input variable selection with respect to the time-delayed arrangement between input and output variables. Time-series modeling requires selecting relevant input variables for the prediction of a specific output variable (e.g. the density of a species). Inadequate input variable selection often increases model construction time and lowers the efficiency of the developed model when it is applied to real-world data. Therefore, for future prediction, researchers have to decide the number of time-delay steps (e.g. months, weeks, or days; t-n) used to predict a phenomenon at the current time t. We prepared a total of 3,900 equation models produced by the Time-Series Optimized Genetic Programming (TSOGP) algorithm for the prediction of the monthly averaged density of a potamic phytoplankton species, Stephanodiscus hantzschii, considering future prediction from 0 (no future prediction) to 12 months ahead (at intervals of 1 month; 300 equations per month of delay). From the investigation of model structure, input variable selectivity was clearly affected by the time-delay arrangement, and model predictability was related to the type of input variables. From the results, we conclude that, although the Machine Learning (ML) algorithms popularly used in Ecological Informatics (EI) provide high performance in the future prediction of ecological entities, the efficiency of the models would be lowered unless relevant input variables are selectively used.
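The time-delayed arrangement of input and output variables amounts to pairing each observation at time t with the input observed t-n steps earlier, as in this small sketch (the monthly densities are hypothetical):

```python
def make_lagged_pairs(series, delay):
    """Pair each observation with the input observed `delay` steps
    earlier, i.e. predict x(t) from x(t - delay)."""
    return [(series[t - delay], series[t])
            for t in range(delay, len(series))]

monthly_density = [3, 5, 8, 13, 21, 34]   # hypothetical monthly values
print(make_lagged_pairs(monthly_density, 2))
# [(3, 8), (5, 13), (8, 21), (13, 34)]
```

Each choice of delay yields a different training set, which is why the delay step changes which input variables a fitted model ends up selecting.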

A Construction and Operation Analysis of Group Management Network about Control Devices based on CIM Level 3 (CIM 계층 3에서 제어 기기들의 그룹 관리 네트워크 구축과 운영 해석)

  • 김정호
    • The Journal of Society for e-Business Studies
    • /
    • v.4 no.1
    • /
    • pp.87-101
    • /
    • 1999
  • To operate the automated devices of a manufacturing process more effectively and to meet the need for resource sharing, network technology is applied to the control devices located in a common manufacturing zone, connecting and operating them together. In this paper, the functional standard of the network layers is set as the physical and data-link layers of IEEE 802.2 and 802.4, the VMD application layer, and the ISO-CIM reference model. These are organized into a minimized architecture and designed as group objects, which perform group management, and service objects, which organize and operate the group. To assess the stability of this network, this paper measures the variation of the data-packet length and the number of nodes, and analyzes the resulting variation in waiting time for network operation. For the analysis, a non-exhaustive service discipline is selected, and the arrival of data packets at each node is assumed to follow a Poisson process. The queue is then modeled as M/G/1, and an analytic expression for the waiting time is derived. In the performance evaluation, as the data-packet length varies from 10 bytes to 100 bytes during operation of the group management network, the variation of the waiting time stays below 10 msec, which is fast enough for the required response time. Furthermore, to evaluate the real-time processing of the group management network, it is shown that if the number of nodes is less than 40 and the average arrival rate is less than 40 packets/sec, stable operation can be maintained even allowing for overheads such as software delay time, indicated packet service, and a transmission safety margin.
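For a plain M/G/1 queue, the mean waiting time follows the Pollaczek-Khinchine formula, W = lambda * E[S^2] / (2 * (1 - rho)) with rho = lambda * E[S]. The sketch below evaluates it for an arrival rate in the paper's range and a hypothetical fixed service time; it is a textbook baseline, not the paper's derivation for non-exhaustive service:

```python
def mg1_mean_wait(lam, es, es2):
    """Pollaczek-Khinchine mean waiting time for an M/G/1 queue:
    W = lam * E[S^2] / (2 * (1 - rho)), where rho = lam * E[S]."""
    rho = lam * es
    assert rho < 1, "queue is unstable"
    return lam * es2 / (2.0 * (1.0 - rho))

# Hypothetical numbers: 40 packets/s arrivals (the paper's stability
# boundary) and a fixed 5 ms service time per packet.
lam, s = 40.0, 0.005
w = mg1_mean_wait(lam, s, s ** 2)      # E[S^2] = s^2 for a fixed s
print(round(w * 1000, 3), "ms")
```

For a fixed service time the second moment is just s squared; a variable packet length would raise E[S^2] and with it the waiting time.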


Relationship Between Expression of Gastrokine 1 and Clinicopathological Characteristics in Gastric Cancer Patients

  • Xiao, Jiang-Wei;Chen, Jia-Hui;Ren, Ming-Yang;Tian, Xiao-Bing;Wang, Chong-Shu
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.13 no.11
    • /
    • pp.5897-5901
    • /
    • 2012
  • The aim of this study was to clarify the role of gastrokine 1 in the formation and development of gastric cancer. The expression of gastrokine 1 in gastric cancer tissues and the corresponding non-cancerous gastric tissues of 52 gastric cancer patients was assessed with real-time fluorescence quantitative polymerase chain reaction (RT-PCR) and immunohistochemistry. We also analyzed the relationship between the expression level and clinicopathological characteristics. Gastrokine 1 gene and protein expression in gastric cancer tissues was in both cases significantly lower than in the corresponding non-cancerous gastric tissues (both P<0.01), but no significant relationship was found with clinicopathological parameters, including tumor location, depth of invasion, differentiation, lymph node metastasis, stage, gender, age, and the preoperative peripheral-blood levels of carcinoembryonic antigen (CEA) and carbohydrate antigen 19-9 (CA19-9) (P>0.05 for each). Furthermore, gastrokine 1 gene expression was markedly lower in the gastric cancer tissues of Helicobacter pylori (HP)-positive patients than in those of HP-negative ones (P<0.05). The results show that gastrokine 1 might play a significant role in the formation and development of gastric cancer as an anti-oncogene, and that its effect might be weakened by HP infection.

A Study of Line-shaped Echo Detection Method using Naive Bayesian Classifier (나이브 베이지안 분류기를 이용한 선에코 탐지 방법에 대한 연구)

  • Lee, Hansoo;Kim, Sungshin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.4
    • /
    • pp.360-365
    • /
    • 2014
  • There are many types of advanced devices for the weather prediction process, such as weather radar, satellites, radiosondes, and other weather observation devices. Among them, weather radar is essential for weather forecasting because of its many advantages, such as a wide observation area and high spatial and temporal resolution. In order to analyze weather radar observations, the internal structure of the data must be understood. Some non-precipitation echoes exist within the observed radar data, and these echoes decrease the accuracy of weather forecasting. Therefore, this paper suggests a method to remove line-shaped non-precipitation echoes from raw radar data. Line-shaped echoes are identified in the raw radar data and their features are extracted; the extracted feature-label pairs are then used as training data for a naive Bayesian classifier. After the learning process, the constructed naive Bayesian classifier is applied to real cases that include not only line-shaped echoes but also precipitation echoes. The experiments confirm that the suggested naive Bayesian classifier can distinguish line-shaped echoes effectively.
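A minimal Gaussian naive-Bayes sketch of the classification step, using a single hypothetical feature (an echo's aspect ratio, on the assumption that line echoes are long and thin); the paper's actual feature set is not reproduced here:

```python
import math

def fit(samples):
    """Per-class mean, variance, and prior of a 1-D feature
    (Gaussian naive Bayes with a single feature)."""
    by_class = {}
    for x, label in samples:
        by_class.setdefault(label, []).append(x)
    model = {}
    for label, xs in by_class.items():
        m = sum(xs) / len(xs)
        var = sum((x - m) ** 2 for x in xs) / len(xs) or 1e-9
        model[label] = (m, var, len(xs) / len(samples))
    return model

def predict(model, x):
    """Return the class with the highest log-posterior for x."""
    def log_post(m, var, prior):
        return (math.log(prior) - 0.5 * math.log(2 * math.pi * var)
                - (x - m) ** 2 / (2 * var))
    return max(model, key=lambda c: log_post(*model[c]))

# Hypothetical training pairs: (aspect ratio, class label).
train = [(9.0, "line"), (11.0, "line"), (1.5, "rain"), (2.5, "rain")]
model = fit(train)
print(predict(model, 8.0))   # -> line
```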

Applying a Forced Censoring Technique with Accelerated Modeling for Improving Estimation of Extremely Small Percentiles of Strengths

  • Chen Weiwei;Leon Ramon V.;Young Timothy M.;Guess Frank M.
    • International Journal of Reliability and Applications
    • /
    • v.7 no.1
    • /
    • pp.27-39
    • /
    • 2006
  • Many real-world cases in material failure analysis do not follow the normal distribution perfectly. Forcing the normality assumption may lead to inaccurate predictions and poor product quality. We examine the failure process of the internal bond (IB, or tensile strength) of medium density fiberboard (MDF). We propose a forced censoring technique that more closely fits the lower tails of strength distributions and better estimates extremely small percentiles, which may be valuable to continuous quality improvement initiatives. Further analyses are performed to build an accelerated common-shape Weibull model for different product types using the $JMP^{(R)}$ Survival and Reliability platform. In this paper, a forced censoring technique is implemented for the first time as a software module, using the $JMP^{(R)}$ Scripting Language (JSL) to expedite data processing, which is crucial for real-time manufacturing settings. We also use JSL to automate the task of fitting an accelerated Weibull model and testing model homogeneity in the shape parameter. Finally, a package script is written to readily provide field engineers with customized reporting for model visualization, parameter estimation, and percentile forecasting. Our approach may be more accurate for product conformance evaluation, and may help reduce the cost of destructive testing and data management through a reduced frequency of testing. It may also be valuable for preventing field failures and improving product safety, even when destructive testing is not reduced, by yielding higher-precision intervals at the same confidence level.
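The percentile estimation rests on the two-parameter Weibull quantile function, x_p = scale * (-ln(1 - p)) ** (1 / shape). The sketch below evaluates a 1st percentile and illustrates forced censoring in spirit, marking values above a threshold as right-censored; parameters and data are hypothetical, and this is plain Python rather than the paper's JSL module:

```python
import math

def weibull_percentile(p, shape, scale):
    """p-th quantile of a two-parameter Weibull distribution:
    x_p = scale * (-ln(1 - p)) ** (1 / shape)."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

def force_censor(samples, threshold):
    """Forced censoring sketch: observations above the threshold are
    treated as right-censored, so a subsequent fit is driven by the
    lower tail. Returns (value, observed?) pairs."""
    return [(x, x <= threshold) for x in samples]

# Hypothetical IB strengths (psi) and Weibull parameters.
print(force_censor([85.0, 120.0, 95.0, 150.0], threshold=100.0))
print(round(weibull_percentile(0.01, 2.0, 100.0), 2))
```

Fitting a Weibull model to such (value, observed?) pairs by censored maximum likelihood, then reading off small quantiles, mirrors the workflow the paper automates in JSL.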
