• Title/Summary/Keyword: mapping method


PCA-based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환;조현숙;이태수;구용숙
    • Progress in Medical Physics
    • /
    • v.14 no.4
    • /
    • pp.211-217
    • /
    • 2003
  • Principal component analysis (PCA) is a well-known data analysis method that is useful for linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance; it is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g. neurons). PCA provides, in the mean-squared-error sense, an optimal linear mapping of the signals that are spread across a group of variables: these signals are concentrated into the first few components, while the noise, i.e. variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings. Because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached, ganglion cell side down, to the surface of a microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated and insulated gold connection lanes terminating in an 8×8 array (spacing 200 μm, electrode diameter 30 μm) at the center of the plate. The MEA 60 system was used to record retinal ganglion cell activity. The action potentials of each channel were sorted by an off-line analysis tool: spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in each waveform, whereby several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.

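
The pipeline the abstract describes, threshold detection followed by projection onto PC1/PC2, can be sketched in a few lines. This is a minimal illustration, not the authors' off-line analysis tool; the function names and synthetic data are invented for the example.

```python
import numpy as np

def detect_spikes(trace, thresh, width=30):
    """Threshold-crossing detection: return fixed-width waveform
    snippets starting at each upward crossing of thresh."""
    crossings = np.flatnonzero((trace[1:] >= thresh) & (trace[:-1] < thresh))
    return np.array([trace[i:i + width] for i in crossings
                     if i + width <= len(trace)])

def pc_scores(waveforms, n_components=2):
    """Project spike waveforms onto their leading principal components
    (PC1, PC2, ...) so that clusters separate in low dimensions."""
    X = waveforms - waveforms.mean(axis=0)     # center each time point
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T             # shape: (n_spikes, n_components)
```

With two distinct spike templates, the two groups separate along PC1 and an ordinary clustering step can then label them.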

Design and Performance Evaluation of Selective DFT Spreading Method for PAPR Reduction in Uplink OFDMA System (OFDMA 상향 링크 시스템에서 PAPR 저감을 위한 선택적 DFT Spreading 기법의 설계와 성능 평가)

  • Kim, Sang-Woo;Ryu, Heung-Gyoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.18 no.3 s.118
    • /
    • pp.248-256
    • /
    • 2007
  • In this paper, we propose a selective DFT spreading method to solve the high PAPR problem in uplink OFDMA systems. A selective characteristic is added to DFT spreading, so the DFT spreading method is combined with the SLM method. However, to minimize the increase in computational complexity, unlike the common SLM method, our proposed method uses only one DFT spreading block. After the DFT, several copy branches are generated by multiplying with different matrices. Each matrix is obtained by linearly transforming the corresponding phase rotation in front of the DFT block, and applying it has much lower computational complexity than one DFT process. For the simulations, we assume a 512-point IFFT, 300 effective sub-carriers, 1/4 or 1/3 of the sub-carriers allocated to each user, and QPSK modulation. The simulation results show that with 4 copy branches, the proposed method achieves about 5.2 dB of PAPR reduction, which is about 1.8 dB better than the common DFT spreading method and 0.95 dB better than common SLM using 32 copy branches. Even with 2 copy branches, it outperforms SLM using 32 copy branches. From this comparison, the proposed method has 91.79 % lower complexity than SLM with 32 copy branches at similar PAPR reduction performance, so the proposed method performs very well. We can also expect similar performance when all sub-carriers are allocated to one user, as in OFDM.
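
The PAPR metric being optimized, and the basic effect of DFT spreading on a localized sub-carrier mapping, can be illustrated with a toy simulation. This is a simplified sketch (single user, no copy-branch selection), not the authors' selective scheme; `ofdma_symbol` and its parameters are assumptions for the example.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain symbol, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def ofdma_symbol(qpsk, n_fft=512, spread=True):
    """Map one user's QPSK block onto localized sub-carriers,
    DFT-spreading it first when spread=True (SC-FDMA style)."""
    data = np.fft.fft(qpsk) / np.sqrt(len(qpsk)) if spread else qpsk
    grid = np.zeros(n_fft, dtype=complex)
    grid[:len(data)] = data              # localized sub-carrier mapping
    return np.fft.ifft(grid) * np.sqrt(n_fft)
```

Averaged over many random QPSK blocks, the DFT-spread symbols show a noticeably lower PAPR than plain OFDMA mapping, which is the effect the proposed selective scheme pushes further.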

A Study on Object-Based Image Analysis Methods for Land Cover Classification in Agricultural Areas (농촌지역 토지피복분류를 위한 객체기반 영상분석기법 연구)

  • Kim, Hyun-Ok;Yeom, Jong-Min
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.15 no.4
    • /
    • pp.26-41
    • /
    • 2012
  • It is necessary to manage, forecast and prepare agricultural production based on accurate and up-to-date information in order to cope with climate change and its impacts, such as global warming, floods and droughts. This study examined the applicability, as well as the challenges, of the object-based image analysis method for developing a land cover classification algorithm that can support fast thematic mapping of wide agricultural areas on a regional scale. In order to test the applicability of RapidEye's multi-temporal spectral information for differentiating agricultural land cover types, the integration of other GIS data was minimized. Under these conditions, the land cover classification accuracy for the study area of Kimje (1,300 km²) was 80.3 %. The 6.5 m geometric resolution of RapidEye proved able to capture the spatial features of agricultural land use, which in Korea is generally cultivated on a small scale. The object-based image analysis method can incorporate expert knowledge in various ways during the classification process, so that the use of spectral image information can be optimized. An additional advantage is that an already developed classification algorithm can be stored, its variables edited in detail for the analytical purpose at hand, and applied to other images as well as other regions. However, the segmentation process, which is fundamental to object-based image classification, often cannot be explained quantitatively; therefore, it is necessary to derive the best results from the expert's empirical and scientific knowledge.
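
The core object-based loop, segment the image, compute a per-object spectral signature, then classify each object, can be sketched with a toy example. This is a generic illustration with invented data and a nearest-centroid rule, not the study's RapidEye workflow, and it sidesteps the expert-driven segmentation parameters the abstract highlights.

```python
import numpy as np

def label_segments(mask):
    """4-connected component labeling of a boolean mask via flood fill:
    each connected region of True pixels becomes one image object."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    n = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                n += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and labels[y, x] == 0:
                        labels[y, x] = n
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, n

def classify_objects(image, labels, n, centroids):
    """Assign each object the class of the nearest spectral centroid,
    using the object's mean per-band signature as its feature."""
    classes = {}
    for k in range(1, n + 1):
        signature = image[labels == k].mean(axis=0)
        classes[k] = int(np.argmin(np.linalg.norm(centroids - signature, axis=1)))
    return classes
```

The object (rather than the pixel) is the classification unit, which is what lets expert rules about shape, context and mean spectra enter the process.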

A facile synthesis of transfer-free graphene by Ni-C co-deposition

  • An, Sehoon;Lee, Geun-Hyuk;Jang, Seong Woo;Hwang, Sehoon;Yoon, Jung Hyeon;Lim, Sang-Ho;Han, Seunghee
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2016.02a
    • /
    • pp.129-129
    • /
    • 2016
  • Graphene, a single layer of sp²-bonded carbon atoms packed into a 2D honeycomb crystal lattice, has attracted much attention due to its outstanding properties. In order to synthesize high-quality graphene, transition metals such as nickel and copper have been widely employed as catalysts, which requires transferring the graphene to the desired substrates for various applications. However, the transfer steps are not only complicated but also inevitably induce defects, impurities, wrinkles, and cracks in the graphene. Furthermore, the direct synthesis of graphene on dielectric surfaces remains immature for practical applications. Therefore, cost-effective and concise methods for transfer-free graphene are essential for commercialization. Here, we report a facile transfer-free graphene synthesis method based on a co-deposited nickel-carbon layer. To fabricate a 100 nm thick NiC layer on top of SiO₂/Si substrates, DC reactive magnetron sputtering was performed at a gas pressure of 2 mTorr with various Ar : CH₄ gas flow ratios, applying 200 W DC input power to a Ni target at room temperature. The sample was then annealed under 200 sccm Ar flow at a pressure of 1 Torr and 1000 °C for 4 min using rapid thermal annealing (RTA) equipment. During the RTA process, carbon atoms diffused through the NiC layer and deposited on both of its sides to form graphene upon cooling. The remaining NiC layer was removed using a 0.5 M FeCl₃ aqueous solution, and graphene was thus obtained directly on SiO₂/Si without any transfer process. To confirm the quality of the resulting graphene layer, Raman spectroscopy was performed; Raman mapping revealed that the resulting graphene was of high quality with a low degree of sp³-type structural defects. Additionally, the sheet resistance and transmittance of the produced graphene were analyzed by the four-point probe method and UV-vis spectroscopy, respectively. This facile transfer-free process should facilitate future graphene research and industrial applications.


A Method of Generating Changeable Face Template for Statistical Appearance-Based Face Recognition (통계적 형상 기반의 얼굴인식을 위한 가변얼굴템플릿 생성방법)

  • Lee, Chul-Han;Jung, Min-Yi;Kim, Jong-Sun;Choi, Jeung-Yoon;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.2 s.314
    • /
    • pp.27-36
    • /
    • 2007
  • Changeable biometrics identify a person using transformed biometric data instead of the original biometric data, in order to enhance privacy and security when biometric data are compromised. In this paper, a novel scheme that generates changeable face templates for statistical appearance-based face recognition is proposed. Two different original face feature vectors are extracted by two different appearance-based approaches; each original feature vector is normalized and its elements are re-ordered. Finally, a changeable face template is generated by weighted addition of the two normalized and scrambled feature vectors. Since the two feature vectors are combined into one by a two-to-one mapping, the original feature vectors cannot easily be recovered from the changeable face template even if the combining rule is known. Also, when a new changeable face template is needed for a person, we change that person's re-ordering rule and generate a new feature vector. Therefore, security and privacy in a biometric system can be enhanced by using the proposed changeable face templates. In our experiments, we analyze the proposed method with respect to performance and security using the AR face database.
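
The template generation described here, normalize, re-order, then combine by weighted addition into a two-to-one mapping, can be sketched as follows. This is a minimal illustration under assumed details; the equal weighting and the permutation handling are placeholders, not the authors' exact scheme.

```python
import numpy as np

def changeable_template(f1, f2, perm1, perm2, w=0.5):
    """Combine two feature vectors into one changeable template:
    normalize each, scramble with a per-user permutation, then add
    with weights (a two-to-one mapping, so neither original vector
    is recoverable from the result alone)."""
    a = f1 / np.linalg.norm(f1)
    b = f2 / np.linalg.norm(f2)
    return w * a[perm1] + (1 - w) * b[perm2]
```

Revocation then amounts to issuing new permutations for the person: the same face yields a different, unlinkable template.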

Timing Verification of AUTOSAR-compliant Diesel Engine Management System Using Measurement-based Worst-case Execution Time Analysis (측정기반 최악실행시간 분석 기법을 이용한 AUTOSAR 호환 승용디젤엔진제어기의 실시간 성능 검증에 관한 연구)

  • Park, Inseok;Kang, Eunhwan;Chung, Jaesung;Sohn, Jeongwon;Sunwoo, Myoungho;Lee, Kangseok;Lee, Wootaik;Youn, Jeamyoung;Won, Donghoon
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.22 no.5
    • /
    • pp.91-101
    • /
    • 2014
  • In this study, we present a timing verification method for a passenger-car diesel engine management system (EMS) using measurement-based worst-case execution time (WCET) analysis. To cope with the AUTOSAR-compliant software architecture, a development process model is proposed. In this process model, a runnable is regarded as a test unit, and its temporal behavior (i.e. maximum observed execution time, MOET) is obtained along with on-target functionality evaluation results during online unit testing. Furthermore, a cost-effective framework for online unit testing is proposed: because the runtime environment layer and the standard calibration environment are utilized to implement the test interface, additional resource consumption on the target processor is minimized. Using the proposed development process model and unit test framework, the MOETs of 86 runnables of the diesel EMS are obtained from 213 unit test cases. From the obtained MOETs, the WCETs of the tasks are estimated and schedulability is evaluated. The schedulability analysis reveals problems in the initially designed schedule table, which are fixed by redesigning the runnable mapping and task offsets. The proposed method is validated through various test scenarios.
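
The step from runnable MOETs to task WCETs and a schedulability verdict can be illustrated with a simplified sketch. The 1.2 safety margin and the Liu-Layland utilization bound are assumptions for the example; the paper's AUTOSAR schedule-table analysis with runnable mapping and task offsets is more detailed.

```python
def task_wcet(moets_us, margin=1.2):
    """Estimate a task's WCET as the sum of its runnables' maximum
    observed execution times (MOETs, microseconds), inflated by a
    safety margin to account for unobserved paths."""
    return margin * sum(moets_us)

def rm_schedulable(tasks):
    """Liu-Layland sufficient test for rate-monotonic scheduling:
    tasks is a list of (wcet, period) pairs; the set is schedulable
    if total utilization stays below n * (2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)
```

The same shape of computation, sum per-runnable MOETs into task WCETs, then check the schedule, is what exposes an infeasible schedule table before it fails on target.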

Performance Analysis on Terrain-Adaptive Clutter Map Algorithm for Ground Clutter Rejection of Weather Radar (기상 레이다의 지형 클러터 제거를 위한 지형적응 클러터 맵 알고리듬 성능분석)

  • Kim, Hye-Ri;Jung, Jung-Soo;Kwag, Young-Kil;Kim, Ji-Won;Kim, Ji-Hyeon;Ko, Jeong-Seok
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.25 no.12
    • /
    • pp.1292-1299
    • /
    • 2014
  • Weather radar systems can provide weather information over the ground, sea, and air with extensive spatial coverage in near real time. However, ground clutter around precipitation is problematic because strong ground returns can cause false precipitation reports. A large percentage of Korea's land coverage consists of mountainous regions, where ground clutter needs to be mitigated for more accurate prediction, so a new ground clutter removal technique specifically suited to Korea is needed. In this paper, the C-Map (Clutter Map) method, which uses raw radar signals, is proposed for removing ground clutter with a terrain-adaptive clutter map. A clutter map is generated from the raw radar signals (I/Q) of clear days and is then subtracted from the received radar signals in the frequency domain. The proposed method is applied to radar data acquired from the Sobaeksan rain radar, and the results show a clutter rejection ratio of about 91.17 %.
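
The frequency-domain subtraction the abstract describes can be sketched for a single range gate. This is a minimal illustration with invented signals, not the Sobaeksan processing chain: build the map by averaging clear-day spectra, then subtract it from the received spectrum.

```python
import numpy as np

def build_clutter_map(clear_iq_pulses):
    """Average the spectra of clear-day I/Q pulse trains to form a
    clutter map: a frequency-domain template of the ground returns."""
    return np.mean([np.fft.fft(p) for p in clear_iq_pulses], axis=0)

def remove_clutter(iq, clutter_map):
    """Subtract the clutter map from the received spectrum and
    return the filtered time-domain signal."""
    return np.fft.ifft(np.fft.fft(iq) - clutter_map)
```

Because the FFT is linear, a stationary (zero-Doppler) ground return captured in the map cancels out of the received signal, while a Doppler-shifted weather echo passes through.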

Web and Building Information Model-based Visualization of Indoor Environment -Focusing on the Data of Temperature, Humidity and Dust Density- (웹 및 건물정보모델기반 실내 환경 디지털 시각화 -온습도와 미세먼지 농도 데이터를 중심으로-)

  • Huang, Jin-hua;Lee, Jin-Kook;Jeon, Gyu-yeob
    • The Journal of the Korea Contents Association
    • /
    • v.17 no.2
    • /
    • pp.327-336
    • /
    • 2017
  • People spend most of their time in indoor environments. Among the various indoor environmental factors, the air and thermal environment directly affect human health and work efficiency, so efficient monitoring of the indoor environment is highly important. To help residents understand the state of the indoor environment more easily and intuitively, this paper analyzes conventional indoor-environment visualization cases and then explores directions for improvement in order to propose a more effective visualization method. The web and BIM (Building Information Model)-based approach to indoor environment visualization proposed in this study is composed of four major parts: 1) generation of the building model data; 2) generation of the indoor environmental data; 3) creation of the visualization elements; 4) data mapping. The approach is then realized through the process of generating the visualization results.
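
The fourth part, data mapping, amounts to attaching a visual attribute to each building element according to its sensor reading. A minimal sketch, with a hypothetical blue-to-red color ramp and invented room IDs (the paper's actual mapping between BIM elements and visualization elements is richer):

```python
def value_to_color(value, vmin, vmax):
    """Map a sensor reading (e.g. temperature, humidity, dust density)
    onto a blue-to-red hex RGB ramp for coloring a room in a web view."""
    t = min(max((value - vmin) / (vmax - vmin), 0.0), 1.0)
    r, b = int(255 * t), int(255 * (1 - t))
    return f"#{r:02x}00{b:02x}"

def map_readings(room_readings, vmin, vmax):
    """Attach a display color to each room's latest reading."""
    return {room: value_to_color(v, vmin, vmax)
            for room, v in room_readings.items()}
```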

Cascade Composition of Translation Rules for the Ontology Interoperability of Simple RDF Message (단순 RDF 메시지의 온톨로지 상호 운용성을 위한 변환 규칙들의 연쇄 조합)

  • Kim, Jae-Hoon;Park, Seog
    • Journal of KIISE:Databases
    • /
    • v.34 no.6
    • /
    • pp.528-545
    • /
    • 2007
  • Recently, ontology has become an attractive technology, in line with the business strategy of providing more intelligent services. The essential problem in application domains using ontology is that all members, agents, and application programs in a domain must share the same ontology concepts. However, the variety of mobile devices, sensing devices, and network components manufactured by various companies, together with the variety of common carriers and contents providers, makes the coexistence of multiple heterogeneous ontologies more likely. Much past research has addressed this semantic interoperability; the methods can be broadly classified into mapping, merging, and translation. In this research, we focus on translation, which uses a translation rule made directly between two heterogeneous ontology data sets, as in OntoMorph. However, manual composition of direct translation rules is inconvenient in itself, and with N ontologies the direct method has a rule composition complexity of O(N²) in the worst case. Therefore, in this paper we introduce the cascade composition of translation rules, based on web openness, in order to improve this complexity. This research highlighted two important factors in an ontology translation system, namely the speed of translation and the convenience of translation rule composition; experiments and comparative analysis with existing methods showed that our cascade method is more convenient while ensuring speed and correctness.
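
The cascade idea can be illustrated with translation rules represented as plain concept dictionaries: composing an A→B rule with a B→C rule yields an A→C rule, so a chain of N ontologies needs only N-1 directly authored rules instead of O(N²) pairwise ones. The dictionary representation is a deliberate simplification of real translation rules:

```python
def compose(rule_ab, rule_bc):
    """Cascade two translation rules: map ontology A's concepts to
    ontology C by chaining A->B with B->C, dropping any concept that
    B cannot forward to C."""
    return {a: rule_bc[b] for a, b in rule_ab.items() if b in rule_bc}
```

Note the composed rule is lossy wherever the intermediate ontology lacks a concept, which is one reason the paper must argue that the cascade preserves correctness for its message class.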

Component Analysis for Constructing an Emotion Ontology (감정 온톨로지의 구축을 위한 구성요소 분석)

  • Yoon, Ae-Sun;Kwon, Hyuk-Chul
    • Korean Journal of Cognitive Science
    • /
    • v.21 no.1
    • /
    • pp.157-175
    • /
    • 2010
  • Understanding a dialogue participant's emotion is as important as decoding the explicit message in human communication. It is well known that non-verbal elements are more suitable than verbal elements for conveying a speaker's emotions; written texts, however, contain a variety of linguistic units that express emotions. This study aims at analyzing the components for constructing an emotion ontology, which has numerous applications in Human Language Technology. A majority of the previous work in text-based emotion processing focused on the classification of emotions, the construction of dictionaries describing emotion, and the retrieval of those lexica in texts through keyword spotting and/or syntactic parsing techniques. The emotions retrieved or computed by that process did not show good accuracy; thus, a more sophisticated component analysis is proposed and linguistic factors are introduced in this study. (1) Five linguistic types of emotion expression are differentiated in terms of target (verbal/non-verbal) and method (expressive/descriptive/iconic). The correlations among them, as well as their correlation with the non-verbal expressive type, are also determined; this characteristic is expected to guarantee greater adaptability of our ontology in multi-modal environments. (2) As emotion-related components, this study proposes 24 emotion types, a 5-scale intensity (-2 to +2), and a 3-scale polarity (positive/negative/neutral), which can describe a variety of emotions in more detail and in a standardized way. (3) We introduce verbal expression-related components, such as 'experiencer', 'description target', 'description method' and 'linguistic features', which can classify and appropriately tag verbal expressions of emotions. (4) By adopting the linguistic tag sets proposed by ISO and TEI and providing a mapping table between our classification of emotions and Plutchik's, our ontology can easily be employed for multilingual processing.

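
The emotion-related components in (2) map naturally onto a small data structure. A sketch with simple validation of the intensity and polarity scales; the 24 emotion types themselves are not enumerated in the abstract, so the example uses a placeholder type name:

```python
from dataclasses import dataclass

POLARITIES = ("positive", "negative", "neutral")

@dataclass
class EmotionEntry:
    """One emotion annotation: an emotion type (one of the proposed 24),
    an intensity on the 5-point -2..+2 scale, and a 3-way polarity."""
    emotion_type: str
    intensity: int
    polarity: str

    def __post_init__(self):
        if not -2 <= self.intensity <= 2:
            raise ValueError("intensity must be in -2..+2")
        if self.polarity not in POLARITIES:
            raise ValueError(f"polarity must be one of {POLARITIES}")
```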