• Title/Summary/Keyword: Robust engineering


Automatic Detection of Stage 1 Sleep (자동 분석을 이용한 1단계 수면탐지)

  • 신홍범;한종희;정도언;박광석
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.1
    • /
    • pp.11-19
    • /
    • 2004
  • Stage 1 sleep provides important information for the interpretation of nocturnal polysomnography, particularly sleep onset. It is a short transition period from wakeful consciousness to sleep. The lack of prominent sleep events characterizing stage 1 sleep is a major obstacle to automatic sleep stage scoring. In this study, we attempted to detect stage 1 sleep automatically through simultaneous EEG and EOG processing and analysis. Relative powers of the alpha and theta waves were calculated from spectral estimation. An epoch with relative alpha power below 50% or relative theta power above 23% was regarded as stage 1 sleep. SEM (slow eye movement) was defined as movement of both eyes lasting 1.5 to 4 seconds and was also regarded as stage 1 sleep. If any one of these three criteria was met, the epoch was scored as stage 1 sleep. Results were compared to manual scoring by two polysomnography experts. A total of 169 epochs was analyzed. The agreement rate for stage 1 sleep between automatic detection and manual scoring was 79.3%, and Cohen's Kappa was 0.586 (p<0.01). A significant portion (32%) of automatically detected stage 1 sleep included SEM. Digitally scored sleep staging generally shows accuracy of up to 70%. Considering the potential difficulties of stage 1 sleep scoring, the 79.3% accuracy in this study seems robust. The simultaneous analysis of EOG differentiates the present study from previous ones, which mainly depended on EEG analysis. The close relationship between SEM and stage 1 sleep raised by Kinnari et al. remains valid in this study.
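The three detection criteria described in the abstract (low relative alpha power, high relative theta power, or a SEM event) reduce to a simple OR rule. A minimal sketch, assuming relative powers are given as fractions and SEM duration in seconds (the function name and argument conventions are illustrative, not from the paper's implementation):

```python
def is_stage1(rel_alpha, rel_theta, sem_duration_s=0.0):
    """Flag an epoch as stage 1 sleep if any one of three criteria holds:
    relative alpha power < 50%, relative theta power > 23%,
    or a slow eye movement (SEM) lasting 1.5-4 seconds."""
    alpha_criterion = rel_alpha < 0.50
    theta_criterion = rel_theta > 0.23
    sem_criterion = 1.5 <= sem_duration_s <= 4.0
    return alpha_criterion or theta_criterion or sem_criterion
```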

Capacity Comparison of Two Uplink OFDMA Systems Considering Synchronization Error among Multiple Users and Nonlinear Distortion of Amplifiers (사용자간 동기오차와 증폭기의 비선형 왜곡을 동시에 고려한 두 상향링크 OFDMA 기법의 채널용량 비교 분석)

  • Lee, Jin-Hui;Kim, Bong-Seok;Choi, Kwonhue
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.5
    • /
    • pp.258-270
    • /
    • 2014
  • In this paper, we investigate the channel capacity of two uplink OFDMA (Orthogonal Frequency Division Multiple Access) schemes robust to access timing offset (TO) among multiple users: ZCZ (Zero Correlation Zone) code time-spread OFDMA and sparse block SC-FDMA (Single Carrier Frequency Division Multiple Access). To reflect practical conditions, we consider not only access TO among multiple users but also the peak-to-average power ratio (PAPR), one of the key issues in uplink OFDMA. With access TO among multiple users, a user's power-controlled, amplified signal may impose severe interference on the signals of other users. Meanwhile, a signal amplified according to the distance between user and base station may be distorted by the amplifier's limits, degrading performance. To achieve the maximum channel capacity, we investigate combinations of transmit power, the so-called adaptive scaling factor (ASF), by numerical simulation. We confirm that the channel capacity with ASF increases compared to the case considering only distance, i.e., ASF=1. The simulation results show that at high signal-to-noise ratio (SNR), ZCZ code time-spread OFDMA achieves higher channel capacity than sparse block SC-FDMA, whereas at low SNR the sparse block SC-FDMA performs better.
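The PAPR constraint weighed in this abstract is the ratio of peak to mean instantaneous power of the time-domain OFDM symbol. A minimal numpy sketch (the 64-subcarrier count and QPSK mapping are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def papr_db(freq_symbols):
    """Peak-to-average power ratio of one OFDM symbol, in dB."""
    x = np.fft.ifft(freq_symbols)           # time-domain OFDM symbol
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# 64 subcarriers carrying random QPSK symbols
rng = np.random.default_rng(0)
qpsk = (2 * rng.integers(0, 2, 64) - 1
        + 1j * (2 * rng.integers(0, 2, 64) - 1)) / np.sqrt(2)
papr = papr_db(qpsk)   # multicarrier signals exhibit PAPR well above 0 dB
```

A single active subcarrier yields a constant-envelope signal (0 dB PAPR), which is why single-carrier-like schemes such as SC-FDMA ease the amplifier distortion discussed above.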

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology to extract answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology comprises the following steps. 1) Collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the proper documents. 2) Determine whether each sentence is suitable for information extraction and derive its confidence. 3) Based on the predicate feature, extract the information from the proper sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from the SK Telecom artificial intelligence speaker. The proposed system shows a higher performance index than the baseline model. The contribution of this study is a sequence tagging model based on bi-directional LSTM-CRF using the predicate feature of the query; with this we developed a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types. The proposed methodology proved to extract information effectively from various types of unstructured documents compared to the baseline model, whereas previous research suffered poor performance when extracting information from document types different from the training data.
In addition, this study prevents unnecessary extraction attempts on documents that do not contain the answer, through a step that predicts the suitability of documents and sentences for extraction before the extraction itself. It is meaningful that we provide a method by which precision can be maintained even in a real web environment. Information extraction for knowledge base expansion cannot guarantee that a document contains the correct answer, because it targets unstructured documents on the real web. When question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents without one. The policy of predicting the suitability of document and sentence extraction contributes to maintaining extraction performance in this setting. The limitations of this study and future research directions are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can go wrong when the morphological analysis is not performed properly. To improve the extraction results, a more advanced morphological analyzer is needed. Second, entity ambiguity: the information extraction system of this study cannot distinguish identical names with different referents. If several people with the same name appear in the news, the system may not extract information about the intended query. 
In future research, measures are needed to identify people sharing the same name. Third, evaluation query data: in this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system. We developed an evaluation data set using 800 documents (400 questions × 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether a correct answer was included. To ensure the external validity of the study, it is desirable to use more queries to measure system performance; this is a costly activity that must be done manually, so future research should evaluate the system on more queries. It is also necessary to develop a Korean benchmark data set for information extraction over queries on multi-source web documents, to build an environment in which results can be evaluated more objectively.
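The suitability-prediction policy described above, i.e. gating extraction on whether a sentence is likely to contain the answer, can be sketched as a confidence-gated pipeline. The scoring and extraction functions and the threshold below are hypothetical placeholders, not the paper's trained models:

```python
def gated_extract(sentences, score_fn, extract_fn, threshold=0.5):
    """Attempt extraction only on sentences whose predicted suitability
    clears the threshold, so documents without the answer yield no
    spurious extractions. Returns (answer, combined confidence) pairs."""
    results = []
    for sent in sentences:
        suitability = score_fn(sent)      # hypothetical suitability model
        if suitability < threshold:
            continue                      # skip: unlikely to hold the answer
        answer, conf = extract_fn(sent)   # hypothetical tagging model
        results.append((answer, suitability * conf))
    return results
```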

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less privacy-sensitive and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status was defined from a subset of user interaction behavior: whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors.
Normalization was performed for each x, y, z axis value of the sensor data, and the sequence data was generated with a sliding-window method. The sequence data then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of 3 convolutional layers and had no pooling layer, to preserve the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM networks consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function was cross-entropy, and the model weights were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (ADAM) optimization algorithm and a mini-batch size of 128. We applied dropout to the inputs of the LSTM networks to prevent overfitting. The initial learning rate was 0.001, decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using the data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. We will also study transfer learning methods that enable models tailored to the training data to transfer to evaluation data following a different distribution.
Such a model is expected to exhibit robust recognition performance against changes in the data that were not considered at the training stage.
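The preprocessing described above, per-axis normalization followed by sliding-window sequence generation, can be sketched in numpy. The window length and stride below are illustrative choices, not the paper's settings:

```python
import numpy as np

def make_sequences(signal, window, stride):
    """Z-score each sensor axis, then cut sliding windows for model input.

    signal: (T, C) array with one column per axis (e.g. accel/mag/gyro x, y, z).
    Returns an array of shape (num_windows, window, C)."""
    mean = signal.mean(axis=0)
    std = signal.std(axis=0) + 1e-8          # guard against a constant axis
    normed = (signal - mean) / std
    starts = range(0, len(normed) - window + 1, stride)
    return np.stack([normed[s:s + window] for s in starts])

# 1,000 samples of 9-channel data (3 sensors x 3 axes, synthetic)
data = np.random.default_rng(1).normal(size=(1000, 9))
seqs = make_sequences(data, window=128, stride=64)
```

Each window in `seqs` would then feed the convolutional layers, with the 50% window overlap (stride = window/2) a common way to augment limited sensor data.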

Packaging Technology for the Optical Fiber Bragg Grating Multiplexed Sensors (광섬유 브래그 격자 다중화 센서 패키징 기술에 관한 연구)

  • Lee, Sang Mae
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.24 no.4
    • /
    • pp.23-29
    • /
    • 2017
  • Packaged optical fiber Bragg grating sensors, networked by multiplexing the Bragg grating sensors with WDM technology, were investigated for structural health monitoring of the marine trestle structure that transports ships. Each optical fiber Bragg grating sensor was packaged in a cylindrical aluminum tube. The packaged sensor was then inserted into a polymeric tube, which was filled with epoxy so that the sensor resists and endures sea water. The packaged sensor component was tested under 0.2 MPa of hydraulic pressure and found to be robust. The number and locations of the Bragg gratings attached to the trestle were determined at the points of high displacement obtained from finite element simulation. The strain of the part of the trestle subjected to the maximum load was analyzed to be ~1,000 με, and thus the shift in Bragg wavelength of the sensor caused by the maximum load was found to be ~1,200 pm. According to the finite element analysis, the Bragg wavelength spacings of the sensors were set at 3~5 nm so that grating wavelengths do not overlap between sensors under load; thus 50 grating sensors, in modules of 5 sensors each, could be networked within the 150 nm optical window at the 1550 nm wavelength of the Bragg wavelength interrogator. Shifts in Bragg wavelength of the 5 packaged optical fiber sensors attached to the mock trestle unit were well resolved by the grating interrogator, which used an optical fiber loop mirror, and the maximum strain was measured to be about 235.650 με. The modelling results for the sensor packaging and networking were in good agreement with the experimental results.
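The quoted ~1,200 pm shift under ~1,000 με is consistent with the standard FBG strain response Δλ = λ_B (1 − p_e) ε. The effective photo-elastic coefficient p_e ≈ 0.22 used below is a typical value for silica fiber, assumed rather than taken from the paper:

```python
def bragg_shift_pm(bragg_nm, strain_ue, pe=0.22):
    """Bragg wavelength shift in picometres for a given strain in microstrain.

    bragg_nm: Bragg wavelength in nm; strain_ue: strain in microstrain;
    pe: effective photo-elastic coefficient (~0.22 for silica fiber)."""
    return bragg_nm * 1e3 * (1 - pe) * strain_ue * 1e-6

shift = bragg_shift_pm(1550, 1000)   # ~1,209 pm, matching the ~1,200 pm above
```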

Comparison of Forest Carbon Stocks Estimation Methods Using Forest Type Map and Landsat TM Satellite Imagery (임상도와 Landsat TM 위성영상을 이용한 산림탄소저장량 추정 방법 비교 연구)

  • Kim, Kyoung-Min;Lee, Jung-Bin;Jung, Jaehoon
    • Korean Journal of Remote Sensing
    • /
    • v.31 no.5
    • /
    • pp.449-459
    • /
    • 2015
  • The conventional National Forest Inventory (NFI)-based forest carbon stock estimation method is suitable for national-scale estimation, but not for regional-scale estimation, due to the lack of NFI plots. In this study, for regional-scale carbon stock estimation, we created grid-based forest carbon stock maps using spatial ancillary data and two up-scaling methods. Chungnam province was chosen as the study area, for which the 5th NFI (2006~2009) data were collected. The first method (method 1) uses the forest type map as ancillary data and a regression model for carbon stock estimation, whereas the second method (method 2) uses satellite imagery and the k-Nearest Neighbor (k-NN) algorithm. Additionally, to account for uncertainty, the final AGB carbon stock maps were generated from 200 iterations of Monte Carlo simulation. Compared to the NFI-based estimate (21,136,911 tonC), the total carbon stock was over-estimated by method 1 (22,948,151 tonC) but under-estimated by method 2 (19,750,315 tonC). In a paired T-test with 186 independent data, the average carbon stock estimate of the NFI-based method differed statistically from method 2 (p<0.01) but not from method 1 (p>0.01). In particular, the Monte Carlo simulation showed that the smoothing effect of the k-NN algorithm and the mis-registration error between NFI plots and satellite imagery can lead to large uncertainty in carbon stock estimation. Although method 1 was found suitable for carbon stock estimation of Korea's heterogeneous forest stands, a satellite-based method is still needed to provide periodic estimates over un-investigated, large forest areas. Future work will therefore extend the spatial and temporal scope of the study and pursue robust carbon stock estimation with various satellite images and estimation methods.
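The k-NN up-scaling in method 2 imputes each pixel's carbon stock from the plots nearest to it in spectral feature space. A minimal numpy sketch using an unweighted mean over Euclidean distances (the distance metric, weighting, and k are illustrative simplifications of the paper's configuration):

```python
import numpy as np

def knn_impute(pixel_features, plot_features, plot_carbon, k=3):
    """Impute per-pixel carbon stock as the mean of the k spectrally
    nearest NFI plots.

    pixel_features: (P, F) spectral features for P pixels.
    plot_features:  (N, F) spectral features at N NFI plot locations.
    plot_carbon:    (N,) measured carbon stock at those plots."""
    estimates = []
    for px in pixel_features:
        dists = np.linalg.norm(plot_features - px, axis=1)
        nearest = np.argsort(dists)[:k]
        estimates.append(plot_carbon[nearest].mean())
    return np.array(estimates)
```

Averaging over k plots is also the source of the smoothing effect noted above: extreme plot values are pulled toward the local mean, which widens the uncertainty of the mapped totals.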

The Research to Correct Overestimation in TOF-MRA for Severity of Cerebrovascular Stenosis (3D-SPACE T2 기법에 의한 TOF-MRA검사 시 발생하는 혈관 내 협착 정도의 측정 오류 개선에 관한 연구)

  • Han, Yong Su;Kim, Ho Chul;Lee, Dong Young;Lee, Su Cheol;Ha, Seung Han;Kim, Min Gi
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.12
    • /
    • pp.180-188
    • /
    • 2014
  • Accurate diagnosis and prompt treatment are very important in cerebrovascular disease, i.e. stenosis or occlusion, which can be caused by risk factors such as poor dietary habits, insufficient exercise, and obesity. Time-of-flight magnetic resonance angiography (TOF-MRA), well known as a diagnostic method for cerebrovascular disease that requires no contrast agent, is the most representative and reliable technique. Nevertheless, it still shows measurement errors (known as overestimation) for the length of stenosis and the area of occlusion in cerebral infarction, which is caused by the accumulation and rupture of plaques generated by hemodynamic turbulence. The purpose of this study is to show the clinical feasibility of 3D-SPACE T2, which exploits the signal attenuation effects of fluid velocity, in the diagnosis of cerebrovascular disease. To model angiostenosis, strictures of different proportions (40%, 50%, 60%, and 70%) and virtual blood streams (normal saline) of different velocities (0.19 ml/sec, 1.5 ml/sec, 2.1 ml/sec, and 2.6 ml/sec) were made using dialysis. Cross-examinations were performed for 3D-SPACE T2 and TOF-MRA (16 times each), and the accuracy of the measured stenosis length was compared under all experimental conditions. 3D-SPACE T2 was superior to TOF-MRA in the accuracy of stenosis length measurement, and it was more robust to fast blood flow and severe stenosis. 3D-SPACE T2 promises to increase diagnostic accuracy in narrow, complex lesions, such as two small cerebral vessels with stenosis, created by hemodynamic turbulence.
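The stenosis proportions modelled above follow the usual diameter-ratio definition, which also makes the overestimation error concrete: flow-related signal loss that shrinks the apparent lumen inflates the measured severity. A hypothetical illustration, not the paper's measurement pipeline:

```python
def stenosis_percent(d_stenosis_mm, d_normal_mm):
    """Degree of stenosis from lumen diameters: (1 - d_s / d_n) * 100."""
    return (1 - d_stenosis_mm / d_normal_mm) * 100

# A 3 mm lumen in a 5 mm vessel is 40% stenosis; if flow-related signal
# loss makes the lumen appear 2.5 mm, TOF-MRA would report 50% instead.
true_pct = stenosis_percent(3.0, 5.0)
overest_pct = stenosis_percent(2.5, 5.0)
```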

Development of Pharmaceutical Dosage Forms with Biphasic Drug Release using Double-Melt Extrusion Technology (이중 고온용융 압출 성형된 이중 방출능을 가지는 제형의 개발)

  • Kim, Dong-Wook;Kang, Chin-Yang;Kang, Changmin;Park, Jun-Bom
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.9
    • /
    • pp.228-234
    • /
    • 2016
  • The aim of this study was to develop pharmaceutical dosage forms with biphasic drug release using a double hot-melt extrusion approach. Hot-melt extrusion was performed with a co-rotating twin-screw extruder. The 1st melt extrusion was performed using a polymer with a relatively high Tg, such as HPMC, and the 2nd melt extrudate was obtained from the 1st extrudate and polymers with a lower Tg, such as HPMC-AS and PEO. In addition, a formulation with all components in the same proportions as the double extrudate was produced by single extrusion for comparison. Physical characterization of the formulations was performed with differential scanning calorimetry (DSC). In vitro release tests were performed using a USP Type-I apparatus at 37 ± 0.5°C and 100 rpm. The similarity factor (f2) was used to check the difference statistically. The DSC results indicated that the crystalline ibuprofen changed to an amorphous state after both double and single melt extrusion. The double melt extrudate with ibuprofen showed the desired release in acidic media (pH 1.2) during the first two hours and in basic media (pH 6.8) during six hours. The double melt extrudate with glimepiride released over 80% within 60 min, whereas the single extrudate with glimepiride showed retarded release due to interaction with HPMC. The similarity factor (f2) value was 28.5, demonstrating different drug release behavior between the double and single extrusions. Consequently, the double melt extruded formulation was robust and gave the desired drug release pattern.
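The similarity factor used above is the standard FDA f2 metric for comparing dissolution profiles; profiles are conventionally considered similar when f2 ≥ 50, so the reported 28.5 indicates dissimilar release. A small sketch (the example profiles below are made up for illustration):

```python
import math

def f2(reference, test):
    """Similarity factor for two dissolution profiles (% released at
    matched time points): f2 = 50 * log10(100 / sqrt(1 + mean sq. diff))."""
    n = len(reference)
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50 * math.log10(100 / math.sqrt(1 + msd))

ref = [20, 45, 70, 85, 92]
identical = f2(ref, ref)                    # 100.0 for identical profiles
divergent = f2(ref, [5, 15, 30, 50, 65])    # well below 50: dissimilar
```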

A Comparative Study of Vegetation Phenology Using High-resolution Sentinel-2 Imagery and Topographically Corrected Vegetation Index (고해상도 Sentinel-2 위성 자료와 지형효과를 고려한 식생지수 기반의 산림 식생 생장패턴 비교)

  • Seungheon Yoo;Sungchan Jeong
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.26 no.2
    • /
    • pp.89-102
    • /
    • 2024
  • Land surface phenology (LSP) plays a crucial role in understanding vegetation dynamics. The near-infrared reflectance of vegetation (NIRv) has been increasingly adopted in LSP studies, being recognized as a robust proxy for gross primary production (GPP). However, NIRv is sensitive to terrain effects in mountainous areas because artifacts in NIR reflectance cannot be canceled out. Consequently, estimating phenological metrics in mountainous regions carries substantial uncertainty, especially for the end of season (EOS). The topographically corrected NIRv (TCNIRv) employs the path length correction (PLC) method, derived from a simplification of the radiative transfer equation, to alleviate these terrain effects. TCNIRv has been demonstrated to estimate phenological metrics more accurately than NIRv, with particularly improved EOS estimation. As the topographic effect is significantly influenced by terrain properties such as slope and aspect, our study compared phenology metric estimates between south-facing slopes (SFS) and north-facing slopes (NFS) using NIRv and TCNIRv in two distinct mountainous regions: Gwangneung Forest (GF) and Odaesan National Park (ONP), representing relatively flat and rugged areas, respectively. The results indicated that TCNIRv-derived EOS at NFS occurred later than at SFS for both study sites (GF: DOY 266.8/268.3 at SFS/NFS; ONP: DOY 262.0/264.8 at SFS/NFS), in contrast to the results obtained with NIRv (GF: DOY 270.3/265.5 at SFS/NFS; ONP: DOY 265.0/261.8 at SFS/NFS). Additionally, the gap between SFS and NFS diminished after topographic correction. We conclude that TCNIRv differs from NIRv in EOS detection when slope orientation is considered. 
Our findings underscore the necessity of topographic correction when estimating photosynthetic phenology with respect to slope orientation, especially in diverse terrain conditions.
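NIRv itself is simply NDVI scaled by NIR reflectance, which is why NIR-band terrain artifacts propagate into it directly. A minimal computation from red and NIR band reflectances (the band values below are made-up examples, not Sentinel-2 measurements):

```python
def nirv(nir, red):
    """NIRv = NDVI * NIR = ((NIR - Red) / (NIR + Red)) * NIR."""
    ndvi = (nir - red) / (nir + red)
    return ndvi * nir

value = nirv(nir=0.40, red=0.05)   # dense canopy: NDVI ~0.78, NIRv ~0.31
```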