• Title/Summary/Keyword: User Test


Study of Feature Based Algorithm Performance Comparison for Image Matching between Virtual Texture Image and Real Image (가상 텍스쳐 영상과 실촬영 영상간 매칭을 위한 특징점 기반 알고리즘 성능 비교 연구)

  • Lee, Yoo Jin;Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1057-1068 / 2022
  • This paper compares the performance of feature point-based matching algorithm combinations in order to confirm the feasibility of matching images taken by a user against virtual texture images, with the goal of developing mobile-based real-time image positioning technology. Feature-based matching comprises extracting features, calculating descriptors, matching the features of both images, and finally eliminating mismatched features. For the algorithm combinations, the feature extraction step and the descriptor calculation step were taken from either the same matching algorithm or different ones. V-World 3D desktop was used as the source of the virtual indoor texture images; it is currently being reinforced with details such as vertical and horizontal protrusions and dents, and some levels are provided with real-image textures. Using this, we constructed a dataset with virtual indoor texture data as reference images and real images shot at the same locations as target images. After constructing the dataset, matching success rate and matching processing time were measured, and based on these, a matching algorithm combination suitable for matching real images with virtual images was determined. Based on the characteristics of each matching technique, the combined algorithms were applied to the constructed dataset to confirm their applicability, and performance was also compared when rotation was additionally considered. As a result of the study, the combination of Scale Invariant Feature Transform (SIFT) feature detection and descriptor calculation had the highest matching success rate, but also the longest matching processing time. The combination of the Features from Accelerated Segment Test (FAST) feature detector with Oriented FAST and Rotated BRIEF (ORB) descriptor calculation achieved a matching success rate similar to that of the SIFT-SIFT combination, with a short matching processing time. Furthermore, FAST-ORB retained superior matching performance even when a 10° rotation was applied to the dataset. Therefore, the FAST-ORB combination was confirmed to be suitable for matching between virtual texture images and real images.
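For readers who want to reproduce the general idea, the sketch below shows the FAST-ORB combination the paper found most practical, using standard OpenCV calls (FAST for detection, ORB for description, Hamming-distance matching with a ratio test to eliminate mismatches). The file names are placeholders, and the 0.75 ratio threshold is a conventional choice, not a value from the paper.

```python
import cv2

# Load the reference (virtual texture) and target (real) images in grayscale.
# File names are placeholders.
ref_img = cv2.imread("virtual_texture.png", cv2.IMREAD_GRAYSCALE)
tgt_img = cv2.imread("real_photo.png", cv2.IMREAD_GRAYSCALE)

# FAST for feature detection, ORB for descriptor calculation (the FAST-ORB combination).
fast = cv2.FastFeatureDetector_create()
orb = cv2.ORB_create()

kp_ref = fast.detect(ref_img, None)
kp_tgt = fast.detect(tgt_img, None)
kp_ref, des_ref = orb.compute(ref_img, kp_ref)
kp_tgt, des_tgt = orb.compute(tgt_img, kp_tgt)

# Hamming distance suits binary ORB descriptors; Lowe's ratio test removes mismatches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn = matcher.knnMatch(des_ref, des_tgt, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]
print(f"{len(good)} matches survived the ratio test")
```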

An Empirical Study on the Determinants of Supply Chain Management Systems Success from Vendor's Perspective (참여자관점에서 공급사슬관리 시스템의 성공에 영향을 미치는 요인에 관한 실증연구)

  • Kang, Sung-Bae;Moon, Tae-Soo;Chung, Yoon
    • Asia Pacific Journal of Information Systems / v.20 no.3 / pp.139-166 / 2010
  • Supply chain management (SCM) systems have emerged as strong managerial tools for manufacturing firms seeking to enhance competitive strength. Despite large investments in SCM systems, many companies are not fully realizing the promised benefits. A review of the literature on the adoption, implementation, and success factors of IOS (inter-organization systems) and EDI (electronic data interchange) systems shows that this issue has been examined from multiple theoretical perspectives, and many researchers have attempted to identify the factors that influence the success of system implementation. However, the existing studies have two drawbacks in revealing the determinants of implementation success. First, previous research raises questions as to the appropriateness of the research subjects selected. Most SCM systems operate in the form of private industrial networks, where the participants consist of two distinct groups: focal companies and vendors. The focal companies are the primary actors in developing and operating the systems, while vendors are passive participants connected to the system in order to supply raw materials and parts to the focal companies. Under this circumstance, there are three ways of selecting research subjects: focal companies only, vendors only, or the two parties grouped together. It is hard to find research that uses focal companies exclusively as subjects, probably due to insufficient sample sizes for statistical analysis, and most research has been conducted using data collected from both groups. We argue that SCM success factors cannot be correctly identified in that case. The focal companies and the vendors are in different positions in many areas relevant to system implementation: firm size, managerial resources, bargaining power, organizational maturity, and so on. There is no obvious reason to believe that the success factors of the two groups are identical. Grouping the two also raises questions about measuring system success. The benefits from utilizing the systems may not be distributed evenly between the two groups; one group's benefits might be realized at the expense of the other, considering that vendors participating in SCM systems are under continuous pressure from the focal companies with respect to price, quality, and delivery time. Therefore, by combining the system outcomes of both groups, we cannot correctly measure the benefits obtained by each group. Second, the measures of system success adopted in previous research fall short in measuring SCM success. User satisfaction, system utilization, and user attitudes toward the systems are the most commonly used success measures in existing studies. These measures were developed as proxy variables in studies of decision support systems (DSS), where the contribution of the systems to organizational performance is very difficult to measure. Unlike DSS, SCM systems have more specific goals, such as cost saving, inventory reduction, quality improvement, shorter lead times, and higher customer service. We maintain that more specific measures can be developed instead of proxy variables in order to measure system benefits correctly. The purpose of this study is to find the determinants of SCM systems success from the perspective of vendor companies.
In developing the research model, we focused on selecting success factors appropriate for vendors through a review of past research, and on developing more accurate success measures. The variables are classified into technological, organizational, and environmental factors on the basis of the TOE (Technology-Organization-Environment) framework. The model consists of three independent variables (competition intensity, top management support, and information system maturity), one mediating variable (collaboration), one moderating variable (government support), and a dependent variable (system success). The system success measures were developed to reflect the operational benefits of SCM systems: improvement in planning and analysis capabilities, faster throughput, cost reduction, task integration, and improved product and customer service. The model was validated using survey data collected from 122 vendors participating in SCM systems in Korea. To test for mediation, hierarchical regression analysis was estimated on collaboration; to test for moderation, moderated multiple regression was used to examine the effect of government support. The results show that information system maturity and top management support are the most important determinants of SCM system success. Supply chain technologies that standardize data formats and enhance information sharing may be adopted by the supply chain leader organization, because of the influence of the focal company in private industrial networks, in order to streamline transactions and improve inter-organizational communication. In particular, the need to develop and sustain information system maturity provides the focus and purpose needed to overcome information system obstacles and resistance to innovation diffusion within the supply chain network organization. The support of top management helps focus efforts toward the realization of inter-organizational benefits and lends credibility to the functional managers responsible for implementation. The active involvement, vision, and direction of high-level executives provide the impetus needed to sustain SCM implementation. The quality of collaboration relationships is also positively related to the outcome variable, and collaboration is found to mediate between the influencing factors and implementation success. Higher levels of inter-organizational collaboration behaviors, such as shared planning and flexibility in coordinating activities, were strongly linked to the vendors' trust in the supply chain network. Government support moderates the effect of IS maturity, competition intensity, and top management support on collaboration and on SCM implementation success. In general, vendor companies face substantially greater risks in SCM implementation than larger companies do, because of severe constraints on financial and human resources and limited education on SCM systems. Besides resources, vendors generally lack computer experience and sufficient internal SCM expertise. For these reasons, government support may establish requirements for firms doing business with the government or provide incentives to adopt and implement SCM systems or practices. Government support yields significant improvements in SCM implementation success when IS maturity, competition intensity, top management support, and collaboration are low.
The environmental characteristic of competition intensity has no direct effect on SCM system success from the vendor's perspective, but vendors facing above-average competition intensity will have a greater need for changing technology. This suggests that companies trying to implement SCM systems should set up compatible supply chain networks and high-quality collaboration relationships for implementation and performance.
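The mediation and moderation tests described above follow a standard regression recipe, which the sketch below illustrates with statsmodels. The column names (is_maturity, top_mgmt_support, competition, collaboration, gov_support, success) and the file name are hypothetical stand-ins for the survey constructs, not the paper's actual variable coding.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data; column names are illustrative stand-ins.
df = pd.read_csv("vendor_survey.csv")

# Mediation, step 1: do the antecedents predict the mediator (collaboration)?
med_model = smf.ols(
    "collaboration ~ is_maturity + top_mgmt_support + competition", data=df
).fit()

# Mediation, step 2: do they predict system success with the mediator included?
out_model = smf.ols(
    "success ~ is_maturity + top_mgmt_support + competition + collaboration",
    data=df,
).fit()

# Moderated multiple regression: '*' expands to main effects plus interaction
# terms, testing whether government support moderates each antecedent's effect.
mod_model = smf.ols(
    "success ~ (is_maturity + top_mgmt_support + competition) * gov_support",
    data=df,
).fit()
print(mod_model.summary())
```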

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important as information continues to be generated. In this flood of information, attempts are being made to better reflect the user's intention in search results rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields where text data analysis is expected to be useful and promising, because new information is constantly generated and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing labeled text data by hand becomes more difficult as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and to improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate the result. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three significances. First, a practical and simple automatic knowledge extraction method is presented. Second, the possibility of performance evaluation is demonstrated through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, analysts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained.
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
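The core scoring step can be written down compactly. The sketch below implements the standard Neural Tensor Network score (a bilinear tensor term plus a linear term passed through tanh); how the paper pairs a stock with an entity inside this function is not detailed in the abstract, so the "representative stock vector" below is an assumption for illustration.

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Neural Tensor Network score, u^T tanh(e1^T W[1:k] e2 + V[e1;e2] + b).

    e1, e2 : (d,) entity vectors
    W      : (k, d, d) tensor of k bilinear slices
    V      : (k, 2d) standard-layer weights
    b      : (k,) bias
    u      : (k,) output weights
    """
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(W.shape[0])])  # (k,)
    standard = V @ np.concatenate([e1, e2])                           # (k,)
    return float(u @ np.tanh(bilinear + standard + b))

# Toy usage with one-hot entity vectors, mirroring the paper's encoding.
d, k = 100, 4
rng = np.random.default_rng(0)
W, V = rng.normal(size=(k, d, d)), rng.normal(size=(k, 2 * d))
b, u = rng.normal(size=k), rng.normal(size=k)
entity = np.eye(d)[7]   # one-hot vector for a new entity
stock = np.eye(d)[3]    # representative vector for one stock (assumption)
print(ntn_score(stock, entity, W, V, b, u))
```

At prediction time, this score would be computed under each stock's trained score function, and the stock with the highest score is taken as the related item.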

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose the accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body while their performance is not limited by environmental context such as lighting conditions or a camera's field of view. Moreover, accelerometers are now widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with sets of 8~10 simple, easily distinguishable gestures useful for controlling home appliances, computer applications, robots, and so on. Good features are essential for the success of pattern recognition. To promote discriminative power over the complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that classifiers based on trajectories performed 3%~5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly among performers. To tackle this problem, online incremental learning is applied to make our system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that, as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; the algorithm is run periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30 to 50. Each letter was performed 5 times per participant using a Nintendo® Wii™ remote, and the acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate; major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. Comparing with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the iPhone™. The participating children exhibited improved concentration and reacted actively to the services with our gesture interface. To verify the effectiveness of the gesture interface, the children took a test after experiencing an English teaching service; those who played with the gesture interface-based robot content scored 10% better than those given conventional teaching. We conclude that the accelerometer-based gesture interface is a promising technology for fostering real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
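The periodic pruning of the reference pattern set can be sketched as simple bookkeeping over per-pattern contribution counters. The data layout, counter names, and thresholds below are illustrative assumptions; the paper's abstract states only the principle (drop patterns with very low positive or high negative contribution).

```python
def prune_reference_set(patterns, min_positive=2, max_negative=5):
    """Periodically drop reference patterns whose contribution is poor.

    Each pattern is assumed to carry counters updated during classification:
      pos - times it supported a correct (true positive) classification
      neg - times it supported a false positive classification
    The thresholds are illustrative, not values from the paper.
    """
    return [
        p for p in patterns
        if p["pos"] >= min_positive and p["neg"] <= max_negative
    ]

# Example bookkeeping for three stored gesture templates.
refs = [
    {"label": "N", "trajectory": [], "pos": 14, "neg": 1},
    {"label": "W", "trajectory": [], "pos": 9, "neg": 8},   # often confused with N; pruned
    {"label": "D", "trajectory": [], "pos": 0, "neg": 0},   # never useful; pruned
]
refs = prune_reference_set(refs)
print([r["label"] for r in refs])  # ['N']
```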

A Study on Analyzing Sentiments on Movie Reviews by Multi-Level Sentiment Classifier (영화 리뷰 감성분석을 위한 텍스트 마이닝 기반 감성 분류기 구축)

  • Kim, Yuyoung;Song, Min
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.71-89 / 2016
  • Sentiment analysis is used to identify emotions or sentiments embedded in user-generated data such as customer reviews from blogs, social network services, and so on. Various research fields, from computer science to business management, can take advantage of it to analyze customer-generated opinions. In previous studies, the star rating of a review has been regarded as equivalent to the sentiment embedded in the text; however, it does not always correspond to the sentiment polarity, so studies built on this assumption have limited accuracy. To solve this issue, the present study uses a supervised sentiment classification model to measure sentiment polarity more accurately. This study aims to propose an advanced sentiment classifier and to discover the correlation between movie reviews and box-office success. The classifier is based on two supervised machine learning techniques, Support Vector Machines (SVM) and a Feedforward Neural Network (FNN). The sentiment scores of the movie reviews are measured by the classifier and analyzed through statistical correlations between movie reviews and box-office success. Movie reviews were collected along with their star ratings; the dataset used in this study consists of 1,258,538 reviews of 175 films gathered from the Naver Movie website (movie.naver.com). The results show that the proposed sentiment classifier outperforms a Naive Bayes (NB) classifier, with accuracy about 6% higher than NB. Furthermore, there are positive correlations between the star rating and audience numbers, which can be regarded as the box-office success of a movie, and a mild positive correlation between the sentiment scores estimated by the classifier and audience numbers. To verify the applicability of the sentiment scores, an independent-sample t-test was conducted: the movies were divided into two groups by the average of their sentiment scores, and the two groups differ significantly in their star-rating scores.
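A minimal sketch of the two pieces of this pipeline follows: a supervised text classifier (here the SVM half of the paper's SVM/FNN pair, via scikit-learn) and the independent-sample t-test used to compare the two movie groups. The training reviews and group values are toy data, not the Naver dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from scipy.stats import ttest_ind

# Toy labeled reviews; the paper trains on review text rather than
# trusting star ratings as sentiment labels.
train_texts = ["great movie, loved it", "boring and too long",
               "brilliant acting", "terrible plot"]
train_labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)
print(clf.predict(["a wonderful, moving film"]))

# Independent-sample t-test: movies split by average sentiment score,
# comparing the star ratings of the two groups (illustrative values).
high_sentiment_group = [8.1, 7.9, 8.4, 7.6]
low_sentiment_group = [6.2, 5.9, 6.8, 6.0]
print(ttest_ind(high_sentiment_group, low_sentiment_group))
```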

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models; among its important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities; since the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense the predecessor of the other two. The most common and important function of a deep learning framework is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs consisting of nodes and edges. The partial derivative on each edge of a computational graph can be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, convenience of coding, from easiest to hardest, ranks CNTK, then Tensorflow, then Theano. The criterion is simply the length of the code; the learning curve and ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, while CNTK and Tensorflow were somewhat easier; with Tensorflow, weight variables and biases must be defined explicitly. The reason CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility, and with low-level coding as in Theano, one can implement and test any new deep learning model or search method one can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, DBNs, and so on.
If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers is also important; and for those learning deep learning, the availability of sufficient examples and references matters as well.
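The automatic differentiation the abstract describes, partial derivatives on graph edges combined by the chain rule, can be demonstrated in a few lines without any framework. The sketch below is a toy reverse-mode implementation over a computational graph, not code from any of the three compared frameworks.

```python
class Node:
    """Minimal reverse-mode automatic differentiation over a computational graph."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_node, local_partial_derivative) pairs
        self.grad = 0.0

    def __mul__(self, other):
        # d(x*y)/dx = y, d(x*y)/dy = x: the local partials on the two edges.
        return Node(self.value * other.value,
                    ((self, other.value), (other, self.value)))

    def __add__(self, other):
        # d(x+y)/dx = d(x+y)/dy = 1.
        return Node(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def backward(self, seed=1.0):
        # Chain rule: multiply the incoming derivative by each edge's local
        # partial and accumulate it into the parent.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# f(x, y) = x*y + x  =>  df/dx = y + 1, df/dy = x
x, y = Node(3.0), Node(4.0)
f = x * y + x
f.backward()
print(x.grad, y.grad)   # 5.0 3.0
```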

A Study on Strategy for developing LBS Entertainment content based on local tourist information (지역 관광 정보를 활용한 LBS 엔터테인먼트 컨텐츠 개발 방안에 관한 연구)

  • Kim, Hyun-Jeong
    • Archives of Design Research / v.20 no.3 s.71 / pp.151-162 / 2007
  • How can new media devices and networks respond effectively to the world's growing population of culturally and historically minded travelers? This study emerged from the question of how mobile handsets can change the nature of cultural and historical tourism in ubiquitous city environments. As wireless networks and mobile IT have developed rapidly, it has become possible to deliver cultural and historical information on site through a mobile handset acting as a tour guidance system. The paper describes the development of a new type of mobile tourism platform for site-specific cultural and historical information. The central objective of the project was to organize a cultural and historical walking tour around the mobile handset and its unique advantages (i.e., portability, multimedia capacity, access to the wireless internet, and location-awareness potential), and then to integrate the tour with a historical story and role-playing game that would deepen the mobile user's interest in the sites being visited and enhance his or her overall experience of the area. The project was based on twelve locations in Busan that were culturally and historically significant to the Korean War era. After a prototype of the mobile tour game was developed for this route, it was evaluated at the 10th PIFF (Pusan International Film Festival). After the user test, new strategies for developing mobile "edutainment content" that delivers the cultural and historical context of a location were discussed. Combining edutainment with a cultural and historical mobile walking tour brings a new dimension to existing approaches in the tourism and mobile content industries.


The Brassica rapa Tissue-specific EST Database (배추의 조직 특이적 발현유전자 데이터베이스)

  • Yu, Hee-Ju;Park, Sin-Gi;Oh, Mi-Jin;Hwang, Hyun-Ju;Kim, Nam-Shin;Chung, Hee;Sohn, Seong-Han;Park, Beom-Seok;Mun, Jeong-Hwan
    • Horticultural Science & Technology / v.29 no.6 / pp.633-640 / 2011
  • Brassica rapa is the A-genome model species for Brassica crop genetics, genomics, and breeding. With the sequencing of the B. rapa genome complete, functional analysis of the genome is the forthcoming issue. Expressed sequence tags (ESTs) are fundamental resources supporting annotation and functional analysis of the genome, including the identification of tissue-specific genes and promoters. As of July 2011, 147,217 ESTs from 39 cDNA libraries of B. rapa had been reported in the public database; however, little information can be retrieved from these sequences due to the lack of an organized database. To leverage the sequence information and maximize the use of publicly available EST collections, the Brassica rapa tissue-specific EST database (BrTED) was developed. BrTED includes sequence information for 23,962 unigenes assembled by the StackPack program. The unigene set is used as the query unit for various analyses, such as BLAST against the TAIR gene models, functional annotation using MIPS and UniProt, gene ontology analysis, and prediction of tissue-specific unigene sets based on statistical tests. The database is composed of two main units, an EST sequence processing and information retrieval unit and a tissue-specific expression profile analysis unit; information and data in both units are tightly interconnected through a web-based browsing system. RT-PCR evaluation of 29 selected unigene sets successfully amplified amplicons from the target tissues of B. rapa. BrTED allows the user to identify and analyze the expression of genes of interest and aids efforts to interpret the B. rapa genome through functional genomics. In addition, it can be used as a public resource providing reference information for the study of the genus Brassica and other closely related crucifer crops.
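The abstract does not name the statistical test BrTED uses to call a unigene tissue-specific, so the sketch below shows one common choice for EST count data: a one-sided Fisher's exact test comparing a unigene's EST count in one tissue library against all other libraries. The function, thresholds, and counts are all illustrative assumptions.

```python
from scipy.stats import fisher_exact

def tissue_specific(est_in_tissue, est_elsewhere,
                    lib_size_tissue, lib_size_elsewhere, alpha=0.01):
    """Flag a unigene as tissue-specific if its EST count is significantly
    enriched in one tissue library versus all others. The exact statistic
    used by BrTED is not stated in the abstract; Fisher's exact test is one
    common choice for EST count data."""
    table = [
        [est_in_tissue, lib_size_tissue - est_in_tissue],
        [est_elsewhere, lib_size_elsewhere - est_elsewhere],
    ]
    _, p_value = fisher_exact(table, alternative="greater")
    return p_value < alpha

# Toy counts: 25 ESTs of a unigene in a 10,000-EST flower library versus
# 2 ESTs among 100,000 ESTs from all other tissues.
print(tissue_specific(25, 2, 10_000, 100_000))  # True
```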

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae;Moon Chang-Joo;Baik Doo-Kwon
    • Journal of KIISE: Computing Practices and Letters / v.12 no.3 / pp.192-207 / 2006
  • Today, the SSL protocol is used as a core part of various computing environments and security systems, but it has several problems stemming from its operational rigidity. First, the SSL protocol places a considerable burden on CPU utilization, lowering the performance of the security service in encrypted transactions, because it encrypts all data transferred between a server and a client. Second, the SSL protocol can be vulnerable to cryptanalysis because a fixed algorithm and key are used. Third, it is difficult to add and use new cryptographic algorithms. Finally, it is difficult for developers to learn and use the cryptography APIs (Application Program Interfaces) of the SSL protocol. Hence, we need to address these problems while providing a secure and convenient way to operate the SSL protocol and handle data efficiently. In this paper, we propose an SSL component, designed and implemented with the CBD (Component Based Development) concept, to satisfy these requirements. The SSL component provides not only data encryption services like the SSL protocol but also convenient APIs for developers unfamiliar with security. Further, the SSL component can improve productivity and reduce development cost because it can be reused, and when new algorithms are added or existing algorithms are changed, it remains compatible and easy to integrate. The SSL component provides the SSL protocol service at the application layer. We first derive the requirements, and then design and implement the SSL component together with the confidentiality and integrity components that support it. All of these components are implemented as EJBs, which enables efficient data handling by encrypting/decrypting only selected data and improves usability by letting the user choose the data and the mechanism. In conclusion, our tests and evaluation show that the SSL component is more usable and efficient than the existing SSL protocol, because the growth rate of processing time for the SSL component is lower than that of the SSL protocol.
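The design idea, wrapping SSL behind a simple component API so that developers never touch the cryptography calls directly, can be sketched as a small facade. The paper's component is implemented as Java EJBs; the sketch below only illustrates the facade pattern in Python using its standard ssl module, and the class and method names are invented for illustration.

```python
import socket
import ssl

class SecureChannel:
    """A small facade in the spirit of the paper's SSL component: the TLS
    handshake and cipher configuration are hidden behind a simple
    send/receive API, so callers unfamiliar with security APIs never use
    them directly. (Python sketch of the idea, not the paper's EJB code.)"""

    def __init__(self, host, port):
        context = ssl.create_default_context()
        raw = socket.create_connection((host, port))
        self._sock = context.wrap_socket(raw, server_hostname=host)

    def send(self, data: bytes) -> None:
        self._sock.sendall(data)      # encrypted transparently by TLS

    def receive(self, n: int = 4096) -> bytes:
        return self._sock.recv(n)     # decrypted transparently

    def close(self) -> None:
        self._sock.close()

# Usage: the caller sees only send/receive, not certificates or cipher suites.
channel = SecureChannel("example.com", 443)
channel.send(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(channel.receive().split(b"\r\n")[0])
channel.close()
```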

A Framework on 3D Object-Based Construction Information Management System for Work Productivity Analysis for Reinforced Concrete Work (철근콘크리트 공사의 작업 생산성 분석을 위한 3차원 객체 활용 정보관리 시스템 구축방안)

  • Kim, Jun;Cha, Heesung
    • Korean Journal of Construction Engineering and Management / v.19 no.2 / pp.15-24 / 2018
  • Despite recognition of the need for productivity information and its importance, feedback of productivity information is not well established in the construction industry. Effective use of productivity information is required to improve the reliability of construction planning; however, in many cases on-site productivity information is hardly managed effectively, and projects instead rely on the experience and/or intuition of participants. Based on a literature review and expert interviews, the authors recognized that one possible solution is a systematic approach to dealing with productivity information on construction job sites. The new system should not be burdensome to users, and should provide purpose-oriented information management, an easy-to-follow information structure, real-time information feedback, and recognition of productivity-related factors. Based on these preliminary investigations, this study proposes a framework for a novel system that facilitates effective management of construction productivity information. The system utilizes the Sketchup software, which offers good user accessibility while minimizing additional data input and related workload. The proposed system handles the pertinent information through a four-stage process: preparation, input, processing, and output. The inputted construction information is classified into a Task Breakdown Structure (TBS) and a Material Breakdown Structure (MBS), constructed by referring to the standard specification of building construction, and is converted into productivity information. The converted information is also visualized graphically on screen, allowing users to use the productivity information from the job site. The productivity information management system proposed in this study was pilot-tested for practical applicability and information availability on a real construction project. Very positive results were obtained for the usability and applicability of the system, and further benefits are expected from its validity testing. If the proposed system is used in the planning stage of construction and productivity information is continuously accumulated, the expected effectiveness of this study would conceivably be further enhanced.
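To make the TBS/MBS classification and the conversion to productivity information concrete, the sketch below models one possible record layout and the quantity-per-man-hour calculation. The field names, codes, and numbers are invented for illustration; the paper does not publish its schema in the abstract.

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """One job-site entry classified by Task Breakdown Structure (TBS) and
    Material Breakdown Structure (MBS). Field names are illustrative, not
    the paper's actual schema."""
    tbs_code: str      # e.g. "RC-FORM-WALL": task classification
    mbs_code: str      # e.g. "FORM-EURO": material classification
    quantity: float    # installed quantity (m2, m3, ton, ...)
    man_hours: float   # labor input for the entry

def productivity(records, tbs_code):
    """Convert raw job-site inputs into productivity (quantity per man-hour)
    for a given task classification."""
    hits = [r for r in records if r.tbs_code == tbs_code]
    total_qty = sum(r.quantity for r in hits)
    total_hrs = sum(r.man_hours for r in hits)
    return total_qty / total_hrs if total_hrs else 0.0

records = [
    TaskRecord("RC-FORM-WALL", "FORM-EURO", 120.0, 96.0),
    TaskRecord("RC-FORM-WALL", "FORM-EURO", 80.0, 72.0),
]
print(productivity(records, "RC-FORM-WALL"))  # ~1.19 m2 per man-hour
```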