• Title/Summary/Keyword: Data Paper


IMAGING SIMULATIONS FOR THE KOREAN VLBI NETWORK(KVN) (한국우주전파관측망(KVN)의 영상모의실험)

  • Jung, Tae-Hyun;Rhee, Myung-Hyun;Roh, Duk-Gyoo;Kim, Hyun-Goo;Sohn, Bong-Won
    • Journal of Astronomy and Space Sciences / v.22 no.1 / pp.1-12 / 2005
  • The Korean VLBI Network (KVN) will open a new field of research in astronomy, geodesy and earth science using three new 21 m radio telescopes, expanding our ability to look at the Universe in the millimeter regime. The imaging capability of radio interferometry depends strongly on the antenna configuration, source size, declination and the shape of the target. In this paper, imaging simulations are carried out with the KVN system configuration. Five test images were used: a point source, multiple point sources, uniform spheres of two different sizes relative to the synthesized beam of the KVN, and a Very Large Array (VLA) image of Cygnus A. The declination for the full-time simulation was set to +60 degrees and the observation time range was -6 to +6 hours around transit. Simulations were done at 22 GHz, one of the KVN observing frequencies. All simulations and data reductions were run with the Astronomical Image Processing System (AIPS) software package. As the KVN array has a resolution of about 6 mas (milliarcseconds) at 22 GHz, when the model source is approximately the beam size or smaller, the ratio of peak intensity to RMS is about 10000:1 and 5000:1. When the model source is larger than the beam size, this ratio falls to a much lower range of about 115:1 and 34:1, due to the lack of short baselines and the small number of antennas. We compared the coordinates of the model images with those of the cleaned images; they correspond almost perfectly except in the case of the 12 mas uniform sphere. Therefore, the main astronomical targets for the KVN will be compact sources, and the KVN will perform excellently in astrometry for these sources.
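
A minimal sketch of how a peak-to-RMS dynamic range of the kind quoted above can be computed from a cleaned image array. This is not from the paper (which used AIPS); the image and the off-source noise box are hypothetical.

```python
import numpy as np

def dynamic_range(image, noise_box):
    """Peak intensity over RMS, the figure of merit quoted in the abstract.

    image     : 2-D numpy array holding the cleaned map
    noise_box : (y0, y1, x0, x1) bounds of an off-source region used for the RMS
    """
    y0, y1, x0, x1 = noise_box
    rms = np.sqrt(np.mean(image[y0:y1, x0:x1] ** 2))
    return image.max() / rms

# Hypothetical usage: a point-source-like map with Gaussian noise
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1e-4, size=(512, 512))
img[256, 256] = 1.0                      # unit point source at the phase centre
print(f"dynamic range ~ {dynamic_range(img, (0, 64, 0, 64)):.0f}:1")
```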

A Study on the Serialized Event Sharing System for Multiple Telecomputing User Environments (원격.다원 사용자 환경에서의 순차적 이벤트 공유기에 관한 연구)

  • 유영진;오용선
    • Proceedings of the Korea Contents Association Conference / 2003.05a / pp.344-350 / 2003
  • In this paper, we propose a novel sharing method that orders the events occurring between users collaborating in a common telecomputing environment. We realize the sharing method with multimedia data to improve collaborative work over a teleprocessing network. This sharing method improves the efficiency of communication-intensive projects such as remote education, teleconferencing, and co-authoring of multimedia contents by offering convenient presentation, group authoring, common management, and transient event production for the users. In conventional shared whiteboard systems, all multimedia content segments must be authored with a dedicated program, and existing content or programs cannot be reused. Moreover, ordering errors occur during teleprocessing because there is no technique for lining up the input ordering of commands. Therefore, we develop a method of retrieving input and output events from the Windows system, along with a message hooking technique that transmits them between programs in the operating system. In addition, we realize a distribution technique that delivers the processing results to all sharing users in the distributed computing environment without error. Our sharing technology should help improve face-to-face coworking efficiency for multimedia content authoring, shared blackboard systems in remote education, and presentation display in visual conferences.
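
A small sketch of the serialized event-sharing idea: each hooked input event is stamped with a global sequence number before broadcast, and every client buffers out-of-order arrivals so all users replay events in the same order. The class and field names are illustrative only, not taken from the paper.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class SharedEvent:
    seq: int                              # global order assigned by the sequencer
    payload: dict = field(compare=False)  # hooked window message, stroke, etc.

class EventSequencer:
    """Assigns sequence numbers so all clients apply events in one order."""
    def __init__(self):
        self._counter = itertools.count()

    def stamp(self, payload):
        return SharedEvent(next(self._counter), payload)

class ClientReplayQueue:
    """Buffers out-of-order arrivals and releases them strictly in sequence."""
    def __init__(self):
        self._heap, self._next = [], 0

    def receive(self, event):
        heapq.heappush(self._heap, event)
        ready = []
        while self._heap and self._heap[0].seq == self._next:
            ready.append(heapq.heappop(self._heap))
            self._next += 1
        return ready   # events safe to render now

# Hypothetical usage: events arriving out of order are replayed in order
seq = EventSequencer()
e0, e1 = seq.stamp({"msg": "WM_LBUTTONDOWN"}), seq.stamp({"msg": "WM_MOUSEMOVE"})
q = ClientReplayQueue()
print(q.receive(e1))   # [] - held back until e0 arrives
print(q.receive(e0))   # both events released in sequence
```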


The Effect of Consumer's Perceptual Characteristics for PB Products on Relational Continuance Intention: Mediated by Brand Trust and Brand Equity (PB상품에 대한 소비자의 지각특성이 관계지속의도에 미치는 영향: 브랜드신뢰 및 브랜드자산을 매개로 한 정책적 접근)

  • Lim, Chaekwan
    • Journal of Distribution Research / v.17 no.5 / pp.85-111 / 2012
  • Introduction: The purpose of this study was to examine the relationship between consumers' perceptual characteristics and intent of relational continuance for PB (Private Brand) products in discount stores. This study was conducted as an empirical study based on a survey. For the empirical study, factors of PB products perceived by consumers, such as perceived quality, store image, brand image and perceived value, were deduced from preceding studies. The effect of these factors on intent of relational continuance, mediated by brand trust and brand equity of PB products, was structurally examined. Research Model: Based on the theoretical analysis and hypotheses, a Structural Equation Model (SEM) was constructed; the research model is shown in Figure 1. Research Method: This paper is based on a qualitative study of selected literature and empirical data. The survey for the empirical study was carried out on consumers in Gyeonggi and Busan between January 2012 and May 2012. 300 surveys were distributed and 253 (84.3%) were returned. After excluding omissions and insincere responses, 245 surveys (81.6%) were used for the final analysis as effective samples. Result: First, reliability analysis was carried out for the instrument, using the lower limit of 0.7 for Cronbach's Alpha suggested by Hair et al. (1998). Construct validity was established by exploratory factor analysis with Varimax rotation, which yielded four factors for the consumers' perceptual characteristics of PB products, two mediating factors and one dependent factor. All constructs included in the research framework have acceptable validity and reliability. Table 1 shows the factor loading, eigenvalue, explained variance and Cronbach's alpha for each factor. To further assure the validity of constructs, Confirmatory Factor Analysis (CFA) was implemented using AMOS 20.0. In confirmatory factor analysis, the researcher controls the specification of indicators for each factor by hypothesizing that a specific factor is loaded with the relevant indicators; CFA is particularly useful for validating scales that measure specific constructs. The CFA results summarized in Table 2 show that the fit measures of all constructs fulfill the recommended levels and the loadings are significant. To test the causal relationships between constructs in the research model, AMOS 20.0, which provides a graphical module for analyzing structural equation models, was used. The results of the hypothesis tests are shown in Table 3. As a result of the empirical study, perceived quality, brand image and perceived value, as selected attributes of PB products, showed significantly positive (+) effects on brand trust and brand equity. Furthermore, brand trust and brand equity showed significantly positive (+) effects on intent of relational continuance. However, the store image of discount stores selling the PB products was found to have a positive (+) effect on brand trust and no significant effect on brand equity. Discussion: Based on the results of this study, the relationship between the overall quality, store image, brand image and value perceived by consumers about PB products and their intent of relational continuance was structurally verified as being mediated by brand trust and brand equity. Given these results, a strategic approach that maximizes brand trust and brand equity for PB products is required of large discount stores, on top of basic efforts to improve the quality, brand image and value of PB products, in order to maximize consumers' intent of relational continuance and to continuously attract repeated purchases.
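
A minimal sketch of the Cronbach's Alpha reliability check mentioned above (the 0.7 lower limit of Hair et al., 1998). The item responses here are hypothetical; the actual analysis in the paper was run with statistical packages (AMOS 20.0 for the SEM/CFA steps).

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert-type scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical 5-point responses for a 4-item "perceived quality" scale
rng = np.random.default_rng(1)
latent = rng.normal(3.5, 0.8, size=200)
scale = np.clip(np.round(latent[:, None] + rng.normal(0, 0.5, size=(200, 4))), 1, 5)
alpha = cronbach_alpha(scale)
print(f"Cronbach's alpha = {alpha:.2f} (acceptable if >= 0.70)")
```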


An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.157-173 / 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against systems connected to networks occur frequently. Such intrusions can cause fatal damage to government agencies, public offices, and companies operating various systems. For such reasons, there is growing interest in and demand for intrusion detection systems (IDS), security systems for detecting, identifying and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling the experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well under normal situations, but show poor performance when they meet a new or unknown pattern of network attack. For this reason, several recent studies try to adopt various artificial intelligence techniques, which can proactively respond to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have some intrinsic limitations such as the risk of overfitting, the requirement of a large sample size, and the lack of insight into the prediction process (the black-box problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to consider the asymmetric error cost by optimizing the classification threshold. Generally, there are two common forms of errors in intrusion detection. The first is the False-Positive Error (FPE), in which normal activity is wrongly judged to be an intrusion and may trigger unnecessary remediation. The second is the False-Negative Error (FNE), in which malicious activity is misjudged as normal. Compared to FPE, FNE is more fatal. Thus, when considering the total cost of misclassification in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. Therefore, we designed our proposed intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. In this case, a conventional SVM cannot be applied because it is designed to generate discrete output (i.e. a class). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them by random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, ANN using NeuroShell 4.0, and SVM using LIBSVM v2.90, a freeware package for training SVM classifiers. Empirical results showed that our proposed SVM-based model outperformed all the comparative models in detecting network intrusions in terms of accuracy. They also showed that our model reduced the total misclassification cost compared to the ANN-based intrusion detection model. As a result, the intrusion detection model proposed in this paper is expected not only to enhance the performance of IDS but also to lead to better management of FNE.
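
A sketch of the threshold-optimization idea described above: train an SVM with probability outputs (Platt-style scaling) and pick the cut-off that minimizes a total misclassification cost in which a false negative is weighted more heavily than a false positive. The synthetic data, the 10:1 cost ratio, and all parameters are illustrative assumptions; the paper itself used LIBSVM on real IDS logs.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Hypothetical stand-in for the intrusion log data (1 = intrusion, 0 = normal)
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SVM with probability estimates, in the spirit of the revised SVM the paper cites
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
p_intrusion = clf.predict_proba(X_te)[:, 1]

# Asymmetric costs: a missed intrusion (FNE) assumed 10x worse than a false alarm (FPE)
COST_FN, COST_FP = 10.0, 1.0

def total_cost(threshold):
    pred = (p_intrusion >= threshold).astype(int)
    fn = np.sum((pred == 0) & (y_te == 1))
    fp = np.sum((pred == 1) & (y_te == 0))
    return COST_FN * fn + COST_FP * fp

thresholds = np.linspace(0.05, 0.95, 91)
best = min(thresholds, key=total_cost)
print(f"cost-minimizing threshold = {best:.2f}, total cost = {total_cost(best):.0f}")
print(f"default 0.50 threshold cost = {total_cost(0.5):.0f}")
```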

Evaluation of Web Service Similarity Assessment Methods (웹서비스 유사성 평가 방법들의 실험적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems / v.15 no.4 / pp.1-22 / 2009
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component based software development to promote application interaction and integration both within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web service repositories not only be well-structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is being widely recognized, the demand for effective Web service discovery mechanisms is concomitantly growing. A number of techniques for Web service discovery have been proposed, but the discovery challenge has not been satisfactorily addressed. Unfortunately, most existing solutions are either too rudimentary to be useful or too domain dependent to be generalizable. In this paper, we propose a Web service organizing framework that combines clustering techniques with string matching and leverages the semantics of the XML-based service specification in WSDL documents. We believe that this is one of the first attempts at applying data mining techniques in the Web service discovery domain. Our proposed approach has several appealing features : (1) It minimizes the requirement of prior knowledge from both service consumers and publishers; (2) It avoids exploiting domain dependent ontologies; and (3) It is able to visualize the semantic relationships among Web services. We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web service registries. We report on some preliminary results demonstrating the efficacy of the proposed approach.
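
A rough sketch of the organizing idea above: represent each Web service by the terms tokenized from its WSDL operation names, vectorize them, and cluster similar services together. The toy service descriptions are hypothetical, and k-means stands in here for the unsupervised artificial neural network the paper actually used.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def wsdl_terms(names):
    """Split camelCase/underscore identifiers from a WSDL into lowercase terms."""
    tokens = []
    for name in names:
        parts = re.sub(r"([a-z])([A-Z])", r"\1 \2", name).replace("_", " ").split()
        tokens.extend(p.lower() for p in parts)
    return " ".join(tokens)

# Hypothetical operation names extracted from four WSDL documents
services = {
    "WeatherService": ["getWeatherByZip", "getForecast"],
    "ClimateService": ["get_forecast", "getTemperature"],
    "StockService":   ["getStockQuote", "getPriceHistory"],
    "EquityService":  ["quoteEquityPrice", "price_history"],
}

docs = [wsdl_terms(ops) for ops in services.values()]
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for (name, _), label in zip(services.items(), labels):
    print(f"{name}: cluster {label}")
```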


Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee;Jang, Heewon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.253-266 / 2018
  • This paper proposes a sequence tagging methodology to improve the performance of NER (Named Entity Recognition) in a QA system. In order to retrieve the correct answers stored in a database, the user's query must be translated into a database language such as SQL (Structured Query Language) so that the computer can process it; this requires identifying the classes or data names contained in the database. The existing approach, which looks up the words of the query in the database and recognizes entities, cannot disambiguate homophones or multi-word phrases because it does not consider the context of the user's query. If there are multiple search results, all of them are returned, so the query admits many interpretations and the time complexity of the computation becomes large. To overcome this, this study reflects the contextual meaning of the query using a Bidirectional LSTM-CRF. We also address the weakness of neural network models, which cannot identify untrained words, by using ontology-knowledge-based features. Experiments were conducted on an ontology knowledge base of the music domain and the performance was evaluated. To evaluate the proposed Bidirectional LSTM-CRF accurately, we converted words included in the training queries into untrained words, testing whether words that exist in the database but were not seen during training could still be correctly identified. As a result, the model was able to recognize entities in context and to recognize untrained words without re-training the Bidirectional LSTM-CRF model, and the overall entity recognition performance was confirmed to improve.
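
A small sketch of the ontology-knowledge-based feature idea: alongside its word embedding, each token receives a flag indicating whether it belongs to a phrase found in the domain ontology, so an untrained word that exists in the knowledge base can still be tagged. The ontology entries and feature layout below are hypothetical, and the feature vector would be concatenated to the inputs of a Bidirectional LSTM-CRF rather than used on its own.

```python
# Hypothetical music-domain ontology entries (entity surface forms -> class)
ONTOLOGY = {
    "bohemian rhapsody": "Song",
    "queen": "Artist",
    "a night at the opera": "Album",
}
CLASSES = ["O"] + sorted(set(ONTOLOGY.values()))

def ontology_features(tokens, max_len=5):
    """One-hot ontology-class feature per token, by longest phrase match.

    Tokens covered by a known ontology phrase get that phrase's class;
    everything else gets 'O'. The vectors can be concatenated to the word
    embeddings before the BiLSTM layer.
    """
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n]).lower()
            if phrase in ONTOLOGY:
                tags[i:i + n] = [ONTOLOGY[phrase]] * n
                i += n
                break
        else:
            i += 1
    return [[1.0 if c == t else 0.0 for c in CLASSES] for t in tags]

query = "play bohemian rhapsody by queen".split()
for tok, feat in zip(query, ontology_features(query)):
    print(tok, feat)
```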

Modeling and Intelligent Control for Activated Sludge Process (활성슬러지 공정을 위한 모델링과 지능제어의 적용)

  • Cheon, Seong-pyo;Kim, Bongchul;Kim, Sungshin;Kim, Chang-Won;Kim, Sanghyun;Woo, Hae-Jin
    • Journal of Korean Society of Environmental Engineers / v.22 no.10 / pp.1905-1919 / 2000
  • The main motivation of this research is to develop an intelligent control strategy for the Activated Sludge Process (ASP). The ASP is a complex, nonlinear dynamic system because of the characteristics of wastewater, changes in influent flow rate, weather conditions, and so on. The mathematical model of the ASP also includes uncertainties that are ignored or not considered by the process engineer or controller designer. The ASP is generally controlled by a PID controller with fixed proportional, integral, and derivative gains, which are adjusted by an expert with extensive experience of the ASP. In this paper, an ASP model based on $Matlab^{(R)}5.3/Simulink^{(R)}3.0$ is developed. The performance of the model is tested with IWA (International Water Association) and COST (European Cooperation in the field of Scientific and Technical Research) data that include steady-state results over 14 days. An advantage of the developed model is that the user can easily modify or change the controller through a graphical user interface. The ASP model, as a typical nonlinear system, can be used to simulate and test the proposed controller for educational purposes. Various control methods are applied to the ASP model and the control results are compared, with a view to applying the proposed intelligent control strategy to a real ASP. Three control methods are designed and tested: a conventional PID controller, a fuzzy logic approach that modifies setpoints, and a fuzzy-PID control method. The proposed fuzzy-logic-based setpoint changer shows better performance and robustness under disturbances. An objective function can be defined and included in the proposed control strategy to improve the effluent water quality and to reduce the operating cost in a real ASP.
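
A minimal sketch of the fuzzy-PID idea described above: an ordinary discrete PID loop whose setpoint (e.g. dissolved oxygen) is nudged by simple fuzzy-style rules driven by the measured disturbance. The gains, rules, and the one-line plant model are all hypothetical stand-ins, not the Matlab/Simulink ASP model from the paper.

```python
class PID:
    """Discrete PID controller with fixed gains."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def fuzzy_setpoint(base_do, influent_load):
    """Crude stand-in for the fuzzy setpoint changer: raise the DO setpoint
    when the influent load is 'high', lower it when 'low'."""
    if influent_load > 1.2:      # high load
        return base_do + 0.5
    if influent_load < 0.8:      # low load
        return base_do - 0.3
    return base_do               # normal load

# Toy first-order "plant": dissolved oxygen responds to the aeration command with lag
do, dt = 2.0, 0.1
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=dt)
for k in range(100):
    load = 1.0 + 0.4 * (k > 50)            # step disturbance halfway through
    sp = fuzzy_setpoint(2.0, load)
    u = pid.step(sp, do)
    do += dt * (-0.5 * do + 0.4 * u)       # hypothetical plant dynamics
print(f"final DO = {do:.2f} mg/L, setpoint = {sp:.2f} mg/L")
```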


A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding(ARSU) and the Software Re/reverse-engineering Environment(SRE) (실시간 소프트웨어의 조절적${\cdot}$단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society / v.4 no.12 / pp.3159-3174 / 1997
  • This paper reports research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to its size and complexity, such software is commonly very hard to understand during the reengineering process. The research facilitates scalable re/reverse-engineering of real-time software based on its architecture, from three perspectives: structural, functional, and behavioral views. First, the structural view reveals the overall architecture, the specification (outline) view, and the algorithm (detail) view of the software, based on a hierarchically organized parent-child relationship. The basic building block of the architecture is the Software Unit (SWU), generated by user-defined criteria. The architecture facilitates navigation of the software in a top-down or bottom-up way; it captures the specification and algorithm views at different levels of abstraction and also shows functional and behavioral information at these levels. Second, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc., each feature capturing a different kind of functional information about the software. Third, the behavioral view includes state diagrams, interleaved event lists, etc., showing the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability of abstracting and exploding this dimensional information within the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software and allow engineers to extract reusable components from the software during the reengineering process.
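
A tiny sketch of the hierarchical Software Unit (SWU) organization described above, with parent-child links, per-node specification/algorithm slots, and top-down navigation. The class and field names are illustrative only, not the paper's actual data model.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SWU:
    """Software Unit: one node of the architecture hierarchy."""
    name: str
    specification: str = ""                 # outline-level description
    algorithm: str = ""                     # detail-level description
    children: list = field(default_factory=list)
    parent: Optional["SWU"] = None

    def add(self, child: "SWU") -> "SWU":
        child.parent = self
        self.children.append(child)
        return child

    def walk(self, depth=0):
        """Top-down navigation of the architecture."""
        yield depth, self
        for c in self.children:
            yield from c.walk(depth + 1)

root = SWU("flight_control", specification="top-level real-time task set")
nav = root.add(SWU("navigation", specification="position estimation"))
nav.add(SWU("kalman_update", algorithm="predict/update cycle"))
for depth, unit in root.walk():
    print("  " * depth + unit.name)
```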


A Study on the Accelerated Life Test for the Estimation of Licorice Durability in Cosmetics (화장품 중 유용성감초추출물의 유통기한 예측을 위한 가속수명 시험연구)

  • Lee, So-Mi;Joo, Kyeong-Mi;Park, Jong-Eun;Jeong, Hye-Jin;Chang, Ih-Seop
    • Journal of the Society of Cosmetic Scientists of Korea / v.33 no.3 / pp.197-201 / 2007
  • Oil-soluble licorice extract (licorice extract) is an officially approved whitening ingredient for cosmetics in Korea. The durability of licorice, the period during which the whitening effect can be maintained in optimum condition, must be accurately defined. Since a cosmetic's durability under real conditions is considerably longer than its development time, the real durability interval needs to be predicted from experimental measurements under simulated operating conditions. We analyzed the relationship between the licorice lifetime and high-temperature conditions using the Arrhenius equation. We established constant-stress tests at $50^{\circ}C$, $55^{\circ}C$, and $60^{\circ}C$, temperatures at which no formulation change of the licorice products is expected during the accelerated stress test. In this paper, the lifetime of licorice in cosmetics was defined as the time for a 10% reduction of its content. We observed that the lifetime of licorice is 580 h at $50^{\circ}C$, 319 h at $55^{\circ}C$ and 166 h at $60^{\circ}C$. Using these experimental data, we obtained the following relationship between licorice lifetime and temperature: log(lifetime) = -35.0243 + 1.15322$\times$(11604.83/temperature). From this equation, the lifetime of licorice at $25^{\circ}C$ is estimated as 26 months. The estimate was verified by measuring the full lifetime of licorice; there was no significant difference between the estimated lifetime and the real measurement at the 95% confidence level. This approach can be applied to other useful cosmetic components for fast estimation of their exact durability.
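
The fitted relationship can be checked numerically. Assuming "log" in the fitted equation denotes the natural logarithm and the temperature is in kelvin (the combination that reproduces the reported 580/319/166 h lifetimes), the 25 °C extrapolation indeed comes out near 26 months, as sketched below.

```python
import numpy as np

def lifetime_hours(temp_celsius):
    """Fitted Arrhenius-type model from the abstract:
    ln(lifetime) = -35.0243 + 1.15322 * (11604.83 / T[K]).
    'log' is taken as the natural log, which matches the reported data."""
    T = temp_celsius + 273.15
    return np.exp(-35.0243 + 1.15322 * (11604.83 / T))

for c in (50, 55, 60):
    print(f"{c} C: {lifetime_hours(c):6.0f} h")   # ~580, ~319, ~166 h reported
months = lifetime_hours(25) / (24 * 30)
print(f"25 C: {lifetime_hours(25):.0f} h ~ {months:.0f} months")
```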

A Quick-and-dirty Method for Detection of Ground Moving Targets in Single-Channel SAR Single-Look Complex (SLC) Images by Differentiation (미분을 이용한 단일채널 SAR SLC 영상 내 지상 이동물체의 탐지방법)

  • Won, Joong-Sun
    • Korean Journal of Remote Sensing / v.30 no.2 / pp.185-205 / 2014
  • SAR ground moving target indication (GMTI) has long been an important issue for advanced SAR applications. As the spatial resolution of space-borne SAR systems has improved significantly in recent years, GMTI has become a very useful tool. Various GMTI techniques have been developed, particularly using multi-channel SAR systems. It is, however, still problematic to detect ground moving targets within single-channel SAR images, while access to high-resolution multi-channel space-borne SAR systems is not practical. Once a ground moving target is detected, it is possible to retrieve the two-dimensional velocity of the target from single-channel space-borne SAR with an accuracy of about 5% if it is moving faster than 3 m/s. This paper presents a quick-and-dirty method for detecting ground moving targets in single-channel SAR single-look complex (SLC) images by differentiation. Since the signal powers of the derivatives reflect the Doppler centroid and Doppler rate, the method is efficient and effective for detecting non-stationary targets. The derivatives correlate well with velocities retrieved by a precise method, with a correlation coefficient $R^2$ of 0.62, which is good enough to detect ground moving targets. While the approach is theoretically straightforward, it is necessary to remove the effects of the residual Doppler rate before finalizing the ground moving target candidates. The confidence level of the results largely depends on the efficiency and effectiveness of the residual Doppler rate removal method. Application results using TerraSAR-X and truck-mounted corner reflectors validated the efficiency of the method. While the derivatives of moving targets remain easily detectable, the signal energy of stationary corner reflectors was suppressed by about 18.5 dB, which enables easy detection of ground targets moving faster than 8.8 km/h. The proposed method is applicable to any high-resolution single-channel SAR system, including KOMPSAT-5.
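
A minimal numpy sketch of the differentiation idea: in a focused SLC image, the complex response of a moving target carries a residual linear phase (a Doppler centroid offset), so the azimuth derivative of the complex signal retains much more power for a moving target than for a stationary one. The target envelope and chirp parameters below are hypothetical, and the real method also removes the residual Doppler rate before declaring candidates.

```python
import numpy as np

# Hypothetical azimuth line through a target in an SLC image (complex samples)
n = 512
az = np.arange(n) - n // 2
envelope = np.exp(-0.5 * (az / 40.0) ** 2)            # focused target response

stationary = envelope.astype(complex)                  # no residual phase
f_dc = 0.08                                            # residual Doppler centroid (cycles/sample)
moving = envelope * np.exp(2j * np.pi * f_dc * az)     # moving target: extra linear phase

def derivative_power(slc_line):
    """Power of the azimuth derivative of the complex SLC signal."""
    d = np.diff(slc_line)
    return np.sum(np.abs(d) ** 2)

p_stat = derivative_power(stationary)
p_mov = derivative_power(moving)
print(f"moving / stationary derivative power = {10 * np.log10(p_mov / p_stat):.1f} dB")
```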