• Title/Summary/Keyword: Joint Detection Algorithm

Search Results: 82

Asymptotic Performance of ML Sequence Estimator Using an Array of Antennas for Coded Synchronous Multiuser DS-CDMA Systems

  • Kim, Sang G.;Byung K. Yi;Raymond Pickholtz
    • Journal of Communications and Networks
    • /
    • v.1 no.3
    • /
    • pp.182-188
    • /
    • 1999
  • The optimal joint maximum-likelihood sequence estimator using an array of antennas is derived for a synchronous direct sequence-code division multiple access (DS-CDMA) system. Each user employs a rate 1/n convolutional code for channel coding over the additive white Gaussian noise (AWGN) channel. The array receiver structure is composed of beamformers in the users' directions followed by a bank of matched filters. The decoder is implemented using a Viterbi algorithm whose states depend on the number of users and the constraint length of the convolutional code. The asymptotic array multiuser coding gain (AAMCG) is defined to encompass the asymptotic multiuser coding gain and the spatial information on users' locations in the system. We derive upper and lower bounds on the AAMCG. As an example, the upper and lower bounds of the AAMCG are obtained for the two-user case where each user employs the maximum free distance convolutional code with rate 1/2. The near-far resistance property is also investigated considering the number of antenna elements and user separations in space.
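The AAMCG builds on the standard soft-decision asymptotic coding gain of a convolutional code over uncoded BPSK on AWGN, 10·log10(R·d_free), and additionally folds in the spatial gain from the array. A minimal sketch of that baseline quantity; the constraint-length-7, d_free = 10 example is an assumption for illustration, not taken from the paper:

```python
import math

def asymptotic_coding_gain_db(code_rate, d_free):
    """Soft-decision asymptotic coding gain over uncoded BPSK on AWGN,
    in dB: 10 * log10(R * d_free)."""
    return 10 * math.log10(code_rate * d_free)

# e.g., a maximum free distance rate-1/2 code with d_free = 10
# (constraint length 7) gives roughly 7 dB:
gain_db = asymptotic_coding_gain_db(0.5, 10)
```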


Optimizations for Mobile MIMO Relay Molecular Communication via Diffusion with Network Coding

  • Cheng, Zhen;Sun, Jie;Yan, Jun;Tu, Yuchun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.4
    • /
    • pp.1373-1391
    • /
    • 2022
  • We investigate a mobile multiple-input multiple-output (MIMO) molecular communication via diffusion (MCvD) system, which consists of two source nodes, two destination nodes, and one relay node in a mobile three-dimensional channel. First, a combination of the decode-and-forward (DF) relaying protocol and a network coding (NC) scheme is implemented at the relay node. The adaptive thresholds at the relay node and destination nodes are obtained by the maximum a posteriori (MAP) detection method. Then the mathematical expressions for the average bit error probability (BEP) of this mobile MIMO MCvD system based on the DF and NC schemes are derived. Furthermore, in order to minimize the average BEP, we establish an optimization problem whose variables include the ratio of the numbers of emitted molecules at the two source nodes and the initial position of the relay node. We put forward an iterative scheme based on the block coordinate descent algorithm that solves the optimization problem and obtains the optimal values of all optimization variables simultaneously. Finally, the numerical results reveal that the proposed iterative method has good convergence behavior, and that the average BEP performance of the system can be improved by performing the joint optimization.
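The iterative scheme described above alternately minimizes over one block of variables while holding the other fixed. A generic block coordinate descent sketch on a toy convex objective; the objective and the closed-form block minimizers here are illustrative assumptions, not the paper's BEP expression:

```python
def block_coordinate_descent(f, x0, y0, min_x, min_y, iters=50, tol=1e-12):
    """Alternately minimize f over each block with the other block fixed,
    stopping when the objective stops decreasing."""
    x, y = x0, y0
    prev = f(x, y)
    for _ in range(iters):
        x = min_x(y)          # argmin over x with y fixed
        y = min_y(x)          # argmin over y with x fixed
        cur = f(x, y)
        if abs(prev - cur) < tol:
            break
        prev = cur
    return x, y

# Toy convex objective with closed-form block minimizers:
fx = lambda x, y: (x - 1) ** 2 + (y + 2) ** 2 + x * y
x, y = block_coordinate_descent(fx, 0.0, 0.0,
                                min_x=lambda y: 1 - y / 2,
                                min_y=lambda x: -2 - x / 2)
# converges to the joint minimizer (8/3, -10/3)
```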

Three-dimensional human activity recognition by forming a movement polygon using posture skeletal data from depth sensor

  • Vishwakarma, Dinesh Kumar;Jain, Konark
    • ETRI Journal
    • /
    • v.44 no.2
    • /
    • pp.286-299
    • /
    • 2022
  • Human activity recognition in real time is a challenging task. Recently, a plethora of studies using deep learning architectures has been proposed. Implementing these architectures requires high computing power and a massive database. In contrast, handcrafted feature-based machine learning models need less computing power and are very accurate when features are effectively extracted. In this study, we propose a handcrafted model based on three-dimensional sequential skeleton data. The movement of the human body skeleton is computed from the joint positions in each frame. The joints of these skeletal frames are projected into two-dimensional space, forming a "movement polygon." These polygons are further transformed into a one-dimensional space by computing amplitudes at different angles from the centroid of each polygon. The feature vector is formed by sampling these amplitudes at different angles. The performance of the algorithm is evaluated using a support vector machine on four public datasets: MSR Action3D, Berkeley MHAD, TST Fall Detection, and NTU-RGB+D; the highest accuracies achieved on these datasets are 94.13%, 93.34%, 95.7%, and 86.8%, respectively. These accuracies compare favorably with similar state-of-the-art methods, showing superior performance.
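The centroid-to-vertex amplitude sampling described above can be sketched as follows; the bin count and the nearest-angle, max-per-bin pooling are assumptions for illustration, and the paper's exact sampling scheme may differ:

```python
import numpy as np

def polygon_amplitudes(joints_2d, n_angles=36):
    """Turn a 2-D 'movement polygon' into a fixed-length descriptor:
    the distance from the centroid to each vertex, assigned to the
    nearest of n_angles angular bins (max pooled per bin)."""
    pts = np.asarray(joints_2d, dtype=float)
    centroid = pts.mean(axis=0)
    rel = pts - centroid
    angles = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    radii = np.linalg.norm(rel, axis=1)
    bins = np.round(angles / (2 * np.pi) * n_angles).astype(int) % n_angles
    feat = np.zeros(n_angles)
    for b, r in zip(bins, radii):
        feat[b] = max(feat[b], r)   # keep the largest amplitude per bin
    return feat
```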

An Integrated and Complementary Evaluation System for Judging the Severity of Knee Osteoarthritis Using CNN (CNN 기반 슬관절 골관절염 중증도 판단을 위한 통합 보완된 등급 판정 시스템)

  • YeChan Yoon
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.4
    • /
    • pp.77-89
    • /
    • 2024
  • Knee osteoarthritis (OA) is a very common musculoskeletal disorder worldwide. Its assessment, which requires a rapid and accurate initial diagnosis, varies depending on which of the currently dispersed classification systems is used, and each system has different criteria. Moreover, because medical staff read the X-ray images directly, the grade depends on their subjective opinion, and establishing an accurate diagnosis and a clear treatment plan takes time. Therefore, in this study, we separately designed a stenosis length measurement algorithm and an osteophyte detection and length measurement algorithm, the criteria for determining the knee osteoarthritis grade, using CNNs, a deep learning technique. In addition, we create a grading system that integrates and complements the existing classification systems and show results that match the judgments of actual medical staff. Based on publicly available OAI (Osteoarthritis Initiative) data, a total of 9,786 knee osteoarthritis samples were used in this study, eventually achieving an accuracy of 69.8% and an F1 score of 76.65%.
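For reference, the two reported metrics reduce to the following confusion-matrix arithmetic in the binary case; the study's task is multi-grade, so its F1 is presumably averaged across grades, and the counts below are hypothetical:

```python
def accuracy_and_f1(tp, fp, fn, tn):
    """Accuracy and F1 from binary confusion counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return acc, f1

# Hypothetical counts: 8 true positives, 2 false positives,
# 2 false negatives, 8 true negatives.
acc, f1 = accuracy_and_f1(8, 2, 2, 8)
```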

Process Fault Probability Generation via ARIMA Time Series Modeling of Etch Tool Data

  • Arshad, Muhammad Zeeshan;Nawaz, Javeria;Park, Jin-Su;Shin, Sung-Won;Hong, Sang-Jeen
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.02a
    • /
    • pp.241-241
    • /
    • 2012
  • The semiconductor industry has been taking advantage of improvements in process technology in order to maintain reduced device geometries and stringent performance specifications. As a result, semiconductor manufacturing has grown to hundreds of processes in sequence, and this number is expected to keep increasing, which may in turn reduce the yield. With a large amount of investment at stake, this motivates tighter process control and fault diagnosis. The continuous improvement in the semiconductor industry demands advancements in process control and monitoring to the same degree. Any fault in the process must be detected and classified with a high degree of precision, and diagnosed if possible. A detected abnormality in the system is then classified to locate the source of the variation. The performance of a fault detection system is directly reflected in the yield, so a highly capable fault detection system is always desirable. In this research, time series modeling of data from an etch equipment has been investigated for the ultimate purpose of fault diagnosis. The tool data consisted of a number of different parameters, each recorded at fixed time points. As the data had been collected over a number of runs, it was not synchronized, due to variable delays and offsets in the data acquisition system and networks. The data was therefore synchronized using a variant of the Dynamic Time Warping (DTW) algorithm. An AutoRegressive Integrated Moving Average (ARIMA) model was then applied to the synchronized data. The ARIMA model combines the autoregressive and moving average models to relate the present value of the time series to its past values. As new parameter values are received from the equipment, the model uses them together with the previous ones to provide one-step-ahead predictions for each parameter. The statistical comparison of these predictions with the actual values gives each parameter's probability of fault at each time point and (once a run finishes) for each run. This work will be extended by applying a suitable probability generating function and combining the probabilities of different parameters using Dempster-Shafer Theory (DST). DST provides a way to combine evidence available from different sources and gives a joint degree of belief in a hypothesis. This will give us a combined belief of fault in the process with high precision.
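The one-step-ahead prediction and residual-based fault scoring can be illustrated with a bare-bones AR(1) fit, a simplification of the ARIMA model in the abstract; the normal-tail scoring rule is an assumption for illustration, not the authors' probability generating function:

```python
import numpy as np
from math import erfc, sqrt

def ar1_one_step(series):
    """Least-squares AR(1) fit; returns the one-step-ahead prediction."""
    x = np.asarray(series, dtype=float)
    prev, cur = x[:-1], x[1:]
    pc, cc = prev - prev.mean(), cur - cur.mean()
    phi = np.dot(pc, cc) / np.dot(pc, pc)
    intercept = cur.mean() - phi * prev.mean()
    return intercept + phi * x[-1]

def fault_score(series, actual):
    """Score the newest reading against its prediction: two-sided normal
    tail of the residual relative to the spread of historical one-step
    residuals. Values near 1 flag a likely fault."""
    x = np.asarray(series, dtype=float)
    preds = np.array([ar1_one_step(x[:i]) for i in range(3, len(x))])
    sigma = (x[3:] - preds).std() or 1.0   # guard against zero spread
    z = abs(actual - ar1_one_step(x)) / sigma
    return 1.0 - erfc(z / sqrt(2))
```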


A study on measurement and compensation of automobile door gap using optical triangulation algorithm (광 삼각법 측정 알고리즘을 이용한 자동차 도어 간격 측정 및 보정에 관한 연구)

  • Kang, Dong-Sung;Lee, Jeong-woo;Ko, Kang-Ho;Kim, Tae-Min;Park, Kyu-Bag;Park, Jung Rae;Kim, Ji-Hun;Choi, Doo-Sun;Lim, Dong-Wook
    • Design & Manufacturing
    • /
    • v.14 no.1
    • /
    • pp.8-14
    • /
    • 2020
  • In general, automotive parts are assembled on production lines by automated mounting robots. At such production sites, quality problems often arise, such as misalignment of the parts to be assembled with the vehicle body (doors, trunks, roofs, etc.) or collisions between assembly robots and components. To solve these problems, part quality is inspected manually using mechanical jig devices outside the automated production line. Machine vision is the most commonly used automotive inspection technology; it covers surface inspection, such as mounting-hole spacing, defect detection, and body-panel dents and bends, and it is also used for guiding, providing location information to the robot controller so the robot's path can be adjusted to improve process productivity and manufacturing flexibility. The most difficult measurement task is to analyze the surface and calibrate the position and characteristics between parts from stored images of the measured part as it enters the field of view of a camera mounted on the side or top of the part. A problem for machine vision devices on automobile production lines is that the lighting conditions inside the factory change severely with weather and time of day (morning versus evening, rainy versus sunny days) through the exterior windows of the assembly plant. In addition, since the vehicle body parts are steel sheet, light reflection is very severe, so even a small lighting change greatly alters the quality of the captured image. In this study, the gap and step between the car body and the door are acquired by a measuring device that combines a laser slit light source and an LED pattern light source. The result is transferred to the articulated robot for assembling the parts, and assembly is done at the optimal position between parts by adjusting the angle and step.
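The range measurement underlying a laser slit sensor can be sketched with the classic optical triangulation relation for a beam parallel to the camera's optical axis; the focal length, baseline, and image offsets below are hypothetical values, not the paper's hardware parameters:

```python
def triangulation_range(focal_mm, baseline_mm, image_offset_mm):
    """Optical triangulation, pinhole model: a laser beam parallel to the
    optical axis at lateral offset `baseline_mm` images its spot at
    `image_offset_mm` from the principal point, so range = f * b / offset."""
    return focal_mm * baseline_mm / image_offset_mm

# Door step (flushness) from two range readings across the gap
# (all numbers hypothetical):
z_body = triangulation_range(8.0, 100.0, 1.6)   # about 500 mm
z_door = triangulation_range(8.0, 100.0, 1.7)
step_mm = z_body - z_door
```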

Statistical Analysis of Clustered Interval-Censored Data with Informative Cluster Size (정보적군집 크기를 가진 군집화된 구간 중도절단자료 분석을 위한결합모형의 적용)

  • Kim, Yang-Jin;Yoo, Han-Na
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.5
    • /
    • pp.689-696
    • /
    • 2010
  • Interval-censored data are commonly found in studies of diseases that progress without symptoms and therefore require clinical evaluation for detection. Several techniques have been suggested under an independence assumption. However, this assumption is not valid if observations come from clusters. Furthermore, when the cluster size is related to the response variable, commonly used methods can yield biased results. For example, in a study on lymphatic filariasis, a parasitic disease in which worms make several nests in the infected person's lymphatic vessels and reside there until adulthood, the response variable of interest is the nest-extinction time. Since the extinction times of nests are checked by repeated ultrasound examinations, exact extinction times are not observed. Instead, the data are composed of two examination points: the last examination time with living worms and the first examination time with dead worms. Furthermore, as Williamson et al. (2008) pointed out, larger nests show a tendency toward low clearance rates. This association is referred to as an informative cluster size. To analyze the relationship between the number of nests and the interval-censored nest-extinction times, this study proposes a joint model for cluster size and clustered interval-censored failure data.
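The interval-censoring building block of such a model is the likelihood contribution P(L &lt; T ≤ R) = S(L) − S(R). A sketch for an exponential event-time model; the paper's joint model additionally links cluster size and failure times through shared random effects, which this omits:

```python
import math

def interval_censored_loglik(rate, intervals):
    """Log-likelihood of an exponential event-time model under interval
    censoring: each observation (L, R] contributes
    log(P(L < T <= R)) = log(S(L) - S(R)) with S(t) = exp(-rate * t).
    Right-censored observations use R = infinity."""
    ll = 0.0
    for L, R in intervals:
        sL = math.exp(-rate * L)
        sR = 0.0 if R == float("inf") else math.exp(-rate * R)
        ll += math.log(sL - sR)
    return ll
```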

Calibration and Validation Activities for Earth Observation Mission Future Evolution for GMES

  • LECOMTE Pascal
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.237-240
    • /
    • 2005
  • Calibration and validation are major elements of any spaceborne Earth observation mission. These activities are the main objective of the commissioning phase, but routine activities must be maintained throughout the mission in order to preserve the quality of the products delivered to users, or at least to fully characterise the evolution of product quality over time. With the launch of ERS-1 in 1991, the European Space Agency decided to put in place a group dedicated to these activities, along with the daily monitoring of product quality for anomaly detection and algorithm evolution; these four elements are all strongly linked. Today this group is fully responsible for the monitoring of two ESA missions, ERS-2 and Envisat, for a total of 12 instruments of various types, while preparing for the Earth Explorer series of five other satellites (CryoSat, GOCE, SMOS, ADM-Aeolus, Swarm) and participating at various levels in past and future Third Party Missions such as Landsat, J-ERS, ALOS, and KOMPSAT. The joint proposal by the European Union and the European Space Agency for a 'Global Monitoring for Environment and Security' (GMES) project triggers a review of the scope of these activities in a much wider framework than the handling of single missions with specific tools, methods, and activities. Because of the global objective of this proposal, it is necessary to put in place multi-mission calibration and validation systems and procedures. GMES calibration and validation activities will rely on multi-source data access, interoperability, long-term data preservation, and the definition of standards to facilitate the above objectives. The scope of this presentation is to give an overview of the current calibration and validation activities at ESA and the planned evolution in the context of GMES.


Heavy Snowfall Disaster Response using Multiple Satellite Imagery Information (다중 위성정보를 활용한 폭설재난 대응)

  • Kim, Seong Sam;Choi, Jae Won;Goo, Sin Hoi;Park, Young Jin
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.20 no.4
    • /
    • pp.135-143
    • /
    • 2012
  • Remote sensing, which repeatedly observes the whole Earth, and GIS-based decision-making technology have been widely utilized in disaster management, such as early-warning monitoring, damage investigation, emergency rescue and response, and rapid recovery. In addition, various national-level countermeasures for collecting timely satellite imagery in emergencies have been considered, through the operation of satellites with multiple onboard sensors as well as the practical joint use of satellite imagery in collaboration with space agencies worldwide. In order to respond to the heavy snowfall disaster that occurred on the east coast of the Korean Peninsula in February 2011, this study detected and analyzed snow-covered regions using the NDSI (Normalized Difference Snow Index), which exploits the wavelength-dependent reflectance of the MODIS sensor, together with a change detection algorithm applied to satellite imagery collected through the International Charter. We present an application case in which the National Disaster Management Institute (NDMI) supported timely decision-making through GIS spatial analysis combining various spatial data with the snow cover map.
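NDSI-based snow detection contrasts green and shortwave-infrared reflectance (for MODIS, bands 4 and 6). A minimal sketch; the commonly used ~0.4 snow threshold is an assumption about the study's exact cutoff:

```python
def ndsi(green, swir):
    """Normalized Difference Snow Index: snow reflects strongly in the
    green band but absorbs in the shortwave infrared."""
    return (green - swir) / (green + swir)

def snow_mask(green, swir, threshold=0.4):
    """Flag a pixel as snow-covered when NDSI exceeds the threshold
    (~0.4 is a widely used default)."""
    return ndsi(green, swir) > threshold
```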

Joint Precoding Technique for Interference Cancellation in Multiuser MIMO Relay Networks for LTE-Advanced System (LTE-Advanced 시스템의 다중 사용자 MIMO Relay 네트워크에서 간섭 제거를 위한 Joint Precoding 기술)

  • Malik, Saransh;Moon, Sang-Mi;Kim, Bo-Ra;Kim, Cheol-Sung;Hwang, In-Tae
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.49 no.6
    • /
    • pp.15-26
    • /
    • 2012
  • In this paper, we perform interference cancellation in a multiuser MIMO (Multiple Input Multiple Output) relay network with improved Amplify-and-Forward (AF) and Decode-and-Forward (DF) relay protocols. Interference cancellation is performed at the evolved NodeB (eNB), Relay Node (RN), and User Equipment (UE) to improve the error performance of the whole transmission system with the explicit use of a relay node. To perform interference cancellation, we use Dirty Paper Coding (DPC) and Tomlinson-Harashima Precoding (THP) combined with the detection techniques Zero Forcing (ZF), Minimum Mean Square Error (MMSE), Successive Interference Cancellation (SIC), and Ordered Successive Interference Cancellation (OSIC). These basic techniques are studied and improved in the proposal by using the functions of the relay node. Performance is improved by Decode-and-Forward, which enhances interference cancellation in two layers at the cooperative relay node. Interference cancellation using weighted vectors is performed between the eNB and the RN. From the final results of the research, we conclude that, in contrast with the conventional algorithms, the proposed algorithm shows better performance in the low-SNR regime. The simulation results show a considerable improvement in bit error performance by the proposed scheme in the LTE-Advanced system.
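The linear detection stages named above (ZF and MMSE) can be sketched as follows for a flat-fading MIMO channel y = Hx + n; this is the textbook formulation, not the paper's full relay-assisted receiver:

```python
import numpy as np

def zf_detect(H, y):
    """Zero forcing: apply the channel pseudo-inverse. Removes
    inter-stream interference but can amplify noise."""
    return np.linalg.pinv(H) @ y

def mmse_detect(H, y, noise_var):
    """Linear MMSE: W = (H^H H + sigma^2 I)^-1 H^H balances interference
    suppression against noise enhancement."""
    n_tx = H.shape[1]
    W = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_tx)) @ H.conj().T
    return W @ y
```

In the noiseless limit the MMSE filter reduces to the ZF solution; at low SNR it trades residual interference for less noise amplification, which is where the abstract reports the proposed scheme's gains.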