• Title/Summary/Keyword: 사용자 환경 (user environment)


Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.77-97 / 2019
  • The large volume of data generated in the hyper-connected environment of the Fourth Industrial Revolution is a major factor that distinguishes it from existing production environments. This environment is two-sided in that it produces data while using it, and the data so produced creates further value. Because of this massive scale, future information systems must process far more data than existing systems, and in terms of quality they must also process it more accurately. In a small information system a person can understand the system precisely and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly hard. In other words, accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient system performance can be addressed by building a semantic web, which enables varied information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. The military, like most other organizations, has adopted IT, and most of its work is now done through information systems. As existing systems accumulate increasingly large amounts of data, effort is needed to make them easier to use. An ontology-based system forms a large semantic data network through connections with other systems, offers a wide range of usable databases, and can search more precisely and quickly through the relationships between predefined concepts. In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, as the large, complicated logistics information system had become difficult to use. It extracts pre-specified information from the existing logistics system and displays it as web pages, but items other than the few specified in advance are difficult to check, extending it with additional functions is time-consuming, and it is organized by category with no search function; like the existing system, it can be used easily only by those who already know it well. The ontology-based logistics situation management system is designed to visualize the complex information of the existing logistics information system intuitively through the ontology. To construct it, useful functions such as performance-based logistics (PBL) contract management and a component dictionary were additionally identified and included in the ontology.
To confirm that the constructed ontology can be used for decision support, meaningful analysis functions were implemented, such as calculating aircraft utilization rates and querying performance-based logistics contracts. In particular, unlike past ontology studies that built static ontology databases, this study represents time-series data whose values change over time, such as the daily status of each aircraft, in the ontology, and confirms that utilization rates can be calculated through the constructed ontology on various criteria. In addition, the data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts are easy to calculate through reasoning and ontology functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, confirming the usability of the constructed ontology. Finally, the failure rate and reliability of each component can be calculated, including MTBF data for the selected items based on actual part consumption, from which mission reliability and system reliability are derived. To confirm the usability of the constructed ontology-based logistics situation management system, we evaluated the proposed system with the Technology Acceptance Model (TAM), a representative model for measuring the acceptability of a technology, and found it more useful and convenient than the existing system.
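A minimal sketch of the idea above, under assumed names: daily aircraft-status records stored as RDF triples and a utilization-rate query over them with Python's rdflib. The namespace and properties (def:aircraft, def:status) are hypothetical, not the paper's actual defense ontology schema.

```python
# A minimal sketch, not the paper's schema: daily aircraft-status records as
# RDF triples, and a utilization-rate calculation over them.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

DEF = Namespace("http://example.org/defense#")  # hypothetical namespace
g = Graph()

# Toy time-series facts: one status record per aircraft per date.
for day, status in [("2019-01-01", "available"),
                    ("2019-01-02", "maintenance"),
                    ("2019-01-03", "available")]:
    rec = URIRef(DEF + f"rec_{day}")
    g.add((rec, DEF.aircraft, DEF.AC001))
    g.add((rec, DEF.date, Literal(day, datatype=XSD.date)))
    g.add((rec, DEF.status, Literal(status)))

# Utilization rate = days available / total recorded days for one aircraft.
q = """PREFIX def: <http://example.org/defense#>
SELECT ?s WHERE { ?rec def:aircraft def:AC001 ; def:status ?s . }"""
statuses = [str(row.s) for row in g.query(q)]
print(f"utilization rate: {statuses.count('available') / len(statuses):.2f}")  # 0.67
```

The same graph, extended with contract and part-consumption triples, would support the PBL index and reliability calculations the abstract describes.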

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have released their internally developed AI technologies to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company strengthens its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers use and reproduce various learning algorithms in each field. Although open source software has been analyzed from various angles, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework with technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We analyzed three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for adopting a deep learning open source framework. From the case study results, we derived five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, research developers' use of deep learning frameworks must be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the usage stage, companies increase the number of deep learning research developers, the ability to use the framework, and the supply of GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves developers' research efficiency and effectiveness, for example by optimizing the hardware (GPU) environment automatically. Fifth, a deep learning framework tool service team complements developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by organizing developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework is proposed: defining the project problem, confirming that the deep learning methodology is the right method, confirming that the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are clear, the last two steps can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five success factors are realized for a successful adoption. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Context Sharing Framework Based on Time Dependent Metadata for Social News Service (소셜 뉴스를 위한 시간 종속적인 메타데이터 기반의 컨텍스트 공유 프레임워크)

  • Ga, Myung-Hyun;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.39-53 / 2013
  • The emergence of Internet technology and SNS has increased the flow of information and changed the way people communicate from one-way to two-way. Users not only consume and share information; they also create it and share it with friends across social network services. Social media has thus become one of the most important communication tools, including Social TV, in which people watch a TV program and at the same time share information about its content with friends through social media. Social news, also known as participatory social media, is growing in popularity: it influences user interest by representing societal issues over the Internet and builds news credibility on users' reputations. However, conventional news-service platforms focus only on news recommendation. Recent developments in SNS allow users to share and disseminate news, but conventional platforms provide no special mechanism for sharing it: current social news services give users access only to a whole news item, not to the part of its content related to their interests. For example, if a user is interested in only part of a news item and wants to share that part, it remains hard to do so, and in the worst case readers may understand the news in a different context. To solve this, a social news service must provide additional information. For example, Yovisto, an academic video search service, provides time-dependent metadata for videos: users can search for and watch part of a video according to the metadata and share content with friends in social media. Yovisto divides, or synchronizes, a video whenever the slide presentation changes to another page; this method cannot be applied to news video, however, since news video involves no slide presentation. A segmentation method is therefore required to divide news video and create time-dependent metadata. In this paper, a time-dependent metadata-based framework is proposed to segment news content and provide time-dependent metadata so that users can use context information to communicate with friends. The news transcript is divided by the proposed story segmentation method. A tag represents the entire content of a news item, and sub tags indicate each segmented part, including its starting time. The time-dependent metadata helps users track news information and allows them to comment on each segment; users may also share the news, based on the time metadata, either as a segment or as a whole, which helps recipients understand what is shared. To demonstrate performance, we evaluated story segmentation accuracy and tag generation: segmentation accuracy was measured through semantic similarity and compared with a benchmark algorithm. Experimental results show that the proposed method outperforms the benchmark in segmentation accuracy. Note that sub tag accuracy is the most important element of the proposed framework for sharing a specific news context with others.
To extract more accurate sub tags, we created a stop-word list of terms unrelated to the content of the news, such as the names of anchors and reporters, and applied it in the framework. We then analyzed the accuracy of the tags and sub tags that represent the context of the news. The analysis indicates that the proposed framework helps users share their opinions with context information in social media and social news.
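The story segmentation step lends itself to a short illustration. This is a hedged sketch, not the paper's algorithm: it splits a transcript wherever the similarity between adjacent sentences drops below a threshold, with plain TF-IDF cosine similarity standing in for the paper's semantic similarity measure.

```python
# Sketch: segment a news transcript by similarity dips between sentences.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def segment_transcript(sentences, threshold=0.1):
    """Return lists of sentences; a new segment starts at each similarity dip."""
    vec = TfidfVectorizer().fit_transform(sentences)
    segments, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        sim = cosine_similarity(vec[i - 1], vec[i])[0, 0]
        if sim < threshold:            # topic shift -> close the segment
            segments.append(current)
            current = []
        current.append(sentences[i])
    segments.append(current)
    return segments

transcript = [
    "The president announced a new budget plan today.",
    "The budget plan increases spending on education.",
    "In sports, the home team won the championship game.",
    "Fans celebrated the championship across the city.",
]
for i, seg in enumerate(segment_transcript(transcript)):
    print(f"segment {i} (sub tag start): {seg[0][:40]}...")
```

The first sentence of each resulting segment would carry the sub tag, together with the segment's starting time in the video.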

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.109-122 / 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and for solving social problems by analyzing data that is already open or collected directly. In Korea, various companies and individuals are attempting big data analysis, but it is difficult even at the initial stage because of limited big data disclosure and collection difficulties. System improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly as services for opening public data such as the Korean Government 3.0 portal (data.go.kr). Beyond these government efforts, services that share data held by corporations or individuals are running, but it is hard to find useful data because so little data is shared. In addition, big traffic problems can occur, because the entire dataset must be downloaded and examined to grasp even the attributes and basic information of shared data. A new system for big data processing and utilization is therefore needed. First, big data pre-analysis technology is needed to solve the sharing problem. Pre-analysis is a concept proposed in this paper: the provider analyzes the data in advance and supplies users with the generated results. Through pre-analysis, the usability of big data improves, because users searching for big data receive information that conveys its properties and characteristics. Moreover, by sharing the summary data or sample data generated through pre-analysis, the security problems that can arise when original data is disclosed are avoided, enabling big data sharing between the data provider and the data user. Second, appropriate preprocessing results must be generated quickly, according to the disclosure level of the raw data and the network status, and provided to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing the data a user requests, it preprocesses the data to a size the current network can carry before transmitting it, so that no big traffic occurs. This paper presents various data sizes according to the level of disclosure through pre-analysis; this method is expected to generate low traffic compared with the conventional approach of sharing only raw data across many systems. We describe how to solve the problems that occur when big data is released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, and consists of a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent, required by the data provider, performs the pre-analysis of big data to generate a Data Descriptor containing Sample Data, Summary Data, and Raw Data information; it also performs fast and efficient preprocessing through distributed big data processing and continuously monitors network traffic. The Client Agent is the agent placed on the data user's side.
It searches big data through the Data Descriptor, the result of the pre-analysis, and finds data quickly; the desired data can then be requested from the server and downloaded according to its disclosure level. The model separates the Server Agent from the Client Agent so that data published by a provider can be used by users. In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, construct the detailed modules of the client-server model, and present the design of each module. In a system based on the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses it into new data; by publishing the newly processed data through a Server Agent, the data user takes on the role of data provider. Likewise, a data provider can obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis with the sample data. In this way raw data is processed and the processed big data is used, forming a natural sharing environment: the roles of data provider and data user are not fixed, and everyone can be both a provider and a user. The client-server model solves the big data sharing problem, provides a free sharing environment for secure big data disclosure, and offers an ideal shared service that makes big data easy to find.
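A minimal sketch of the pre-analysis step, under assumed file names and schema: a Server Agent builds a Data Descriptor (summary data plus sample data) with PySpark so that clients can judge a dataset without downloading the raw data.

```python
# Sketch of a Server Agent's pre-analysis producing a Data Descriptor.
import json
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pre-analysis").getOrCreate()
df = spark.read.csv("raw_data.csv", header=True, inferSchema=True)  # placeholder path

descriptor = {
    "schema": [(f.name, f.dataType.simpleString()) for f in df.schema.fields],
    "row_count": df.count(),
    # Summary data: per-column statistics (count, mean, stddev, min, max, ...).
    "summary": [row.asDict() for row in df.summary().collect()],
    # Sample data: a small random sample safe to share in place of raw data.
    "sample": [row.asDict() for row in df.sample(fraction=0.01).limit(20).collect()],
}
with open("data_descriptor.json", "w") as out:
    json.dump(descriptor, out, default=str)
```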

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.131-145 / 2020
  • In line with the trend of industrial innovation, IoT technology is emerging as a key element in creating new business models and providing user-friendly services through its combination with big data. Data accumulated from Internet-of-Things (IoT) devices is widely used to build convenience-oriented smart systems, since it enables customized intelligent systems through analysis of the user environment and user patterns. Recently it has been applied to innovation in the public domain, for smart cities and smart transportation, for example in solving traffic and crime problems with CCTV. In particular, when planning underground services or building a movement-control information system to improve the convenience of citizens and commuters amid congested public transportation such as subways and urban railways, both the ease of securing real-time service data and the stability of security must be considered together. Previous studies that use image data, however, are limited by privacy issues and by degraded object-detection performance under abnormal conditions. The IoT device-based sensor data used in this study is free from privacy issues because it does not identify individuals, so it can be used effectively to build intelligent public services for unspecified people. We use IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily; the temperature data measured by the sensors is transmitted in real time. The experimental environment for collecting real-time sensor data was installed at the equally spaced midpoints of a 4×4 grid in the ceiling of subway entrances with high passenger movement, and it measured the temperature change of objects entering and leaving the detection spots. The measured data were preprocessed by setting reference values for the 16 areas and computing, per unit of time, the difference between each area's temperature and its reference value; this maximizes the signal of movement within the detection area. In addition, the values were scaled up by a factor of 10 to reflect per-area temperature differences more sensitively: for example, a sensor reading of 28.5°C was analyzed as the value 285. The data collected from the sensors thus have the characteristics of both time series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, with its superior performance in image classification, and an LSTM, which is especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas.
We validated the proposed model by comparing its performance with other artificial intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). In the experiments, the proposed CNN-LSTM hybrid model showed the best predictive performance. With the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response, are expected to be provided without personal-information concerns. However, the data were collected from only one side of the entrances, and only data collected over a short period were used for prediction, so validation in other environments remains to be carried out. In the future, the proposed model should gain further reliability if experimental data is collected in a variety of environments, or if the training data is extended with measurements from additional sensors.
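A minimal sketch of such a hybrid model, with assumed hyperparameters (window length, filter counts, units) rather than the paper's: a CNN is applied to each 4×4 thermal frame via TimeDistributed, and an LSTM models the sequence of frame features.

```python
# Sketch: CNN-LSTM over sequences of 4x4 thermal frames -> passenger count.
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS = 10  # assumed window length

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, 4, 4, 1)),
    layers.TimeDistributed(layers.Conv2D(16, (2, 2), activation="relu", padding="same")),
    layers.TimeDistributed(layers.Flatten()),
    layers.LSTM(32),
    layers.Dense(1),  # predicted number of passing persons in a detection area
])
model.compile(optimizer="adam", loss="mse")

# Toy data standing in for per-cell temperature differences (scaled x10,
# e.g. 28.5 degrees C recorded as 285, as described in the abstract).
x = np.random.randn(64, TIMESTEPS, 4, 4, 1).astype("float32")
y = np.random.randint(0, 5, size=(64, 1)).astype("float32")
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
```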

Application and Analysis of Ocean Remote-Sensing Reflectance Quality Assurance Algorithm for GOCI-II (천리안해양위성 2호(GOCI-II) 원격반사도 품질 검증 시스템 적용 및 결과)

  • Sujung Bae;Eunkyung Lee;Jianwei Wei;Kyeong-sang Lee;Minsang Kim;Jong-kuk Choi;Jae Hyun Ahn
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1565-1576 / 2023
  • An atmospheric correction algorithm based on a radiative transfer model is required to obtain remote-sensing reflectance (Rrs) from top-of-atmosphere observations of the Geostationary Ocean Color Imager-II (GOCI-II). The Rrs derived from atmospheric correction is used to estimate various marine environmental parameters such as chlorophyll-a concentration, total suspended materials concentration, and absorption by dissolved organic matter. Atmospheric correction is therefore a fundamental algorithm, since it strongly affects the reliability of all other ocean color products. In clear waters, however, the atmospheric path radiance in the blue wavelengths can be more than ten times higher than the water-leaving radiance, which makes atmospheric correction a highly error-sensitive process: a 1% error in estimating the atmospheric radiance can cause more than a 10% error in Rrs. Quality assessment of Rrs after atmospheric correction is thus essential for reliable ocean environment analysis using ocean color satellite data. In this study, a Quality Assurance (QA) algorithm based on in-situ Rrs data archived in the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Bio-optical Archive and Storage System (SeaBASS) was applied and modified to account for the different spectral characteristics of GOCI-II. This method is officially employed in the ocean color satellite data processing system of the National Oceanic and Atmospheric Administration (NOAA). It provides quality scores for Rrs ranging from 0 to 1 and classifies the water into 23 types. When the QA algorithm is applied to early-phase GOCI-II data with less calibration, the most frequent score is a relatively low 0.625; when it is applied to the improved GOCI-II atmospheric correction results with updated calibrations, the most frequent score rises to 0.875. The water-type analysis indicates that parts of the East Sea, the South Sea, and the Northwest Pacific Ocean are primarily relatively clear case-I waters, while the coastal areas of the Yellow Sea and the East China Sea are mainly highly turbid case-II waters. We expect the QA algorithm to help GOCI-II users not only statistically identify Rrs with significant errors but also perform more reliable calibration with quality-assured data. The algorithm will be included in the level-2 flag data provided with the GOCI-II atmospheric correction.
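A simplified sketch of the general idea behind this kind of spectrum-based QA scoring, with made-up reference values (the real system uses in-situ SeaBASS spectra and 23 water types adapted to GOCI-II bands): normalize the Rrs spectrum, assign it to the nearest reference water type, and score the fraction of bands falling inside that type's tolerance envelope.

```python
# Sketch of spectrum-based Rrs quality assurance; all numbers are illustrative.
import numpy as np

# Hypothetical normalized reference spectra, 3 water types x 5 bands,
# with a +/-20% tolerance envelope around each.
refs = np.array([[0.55, 0.50, 0.45, 0.35, 0.25],   # clear (case-I-like)
                 [0.40, 0.45, 0.50, 0.45, 0.35],   # moderate
                 [0.25, 0.35, 0.50, 0.55, 0.45]])  # turbid (case-II-like)
lower, upper = refs * 0.8, refs * 1.2

def qa_score(rrs):
    nrrs = rrs / np.linalg.norm(rrs)               # spectral-shape normalization
    cos = refs @ nrrs / (np.linalg.norm(refs, axis=1) * np.linalg.norm(nrrs))
    wt = int(np.argmax(cos))                        # nearest water type
    in_env = (nrrs >= lower[wt]) & (nrrs <= upper[wt])
    return wt, in_env.mean()                        # score in [0, 1]

wt, score = qa_score(np.array([0.010, 0.009, 0.008, 0.006, 0.004]))
print(f"water type {wt}, QA score {score:.3f}")
```

If eight bands were used in scoring, such fractional scores would take values like 0.625 (5/8) and 0.875 (7/8), consistent with the histogram peaks reported above.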

Festival Space Design to Change the Value of Sudokwon Landfill Site - 2014 Dreampark Chrysanthemum Festival Basic Plan and Design - (수도권 매립지 가치변화를 위한 지속 가능한 축제 공간 계획 - 2014 드림파크 국화축제 기본계획 및 기본설계 -)

  • Kim, Ok-Kyung;Lee, Hak-Youn;Kim, Joo-Am;Lee, Bo-Ram;Kim, Ha-Yan
    • Journal of the Korean Institute of Landscape Architecture / v.42 no.4 / pp.89-99 / 2014
  • This paper offers a landscape design proposal for the 2014 Dreampark Chrysanthemum Festival within the Sudokwon Landfill Site. The site is located at 58 Baeksuk-dong, Seo-gu, Incheon, and covers approximately 560,000 m². Over 1.53 million visitors came to the festival in the previous year. The design includes an overall masterplan and a series of planting plans, along with a core selection of iconic topiaries. Its goal is to create a landscape that improves the value of the place image and the local economy and celebrates the 2014 Incheon Asian Games. To achieve this goal, three design subjects were considered: promoting local pride as a part of Incheon, increasing the aesthetic value of the site based on its brand image, and sustainable placemaking. To promote local pride, the 2 km long "Little Incheon" is designed over a wildflower field inspired by Incheon Bridge to give a strong image of the locality, and a variety of local gardening participation programs were introduced in the east part of the site. The design also outlines a vision for the development of Dreampark: a people-oriented gathering place for the entire community, with spaces that offer a variety of unstructured recreational and cultural experiences. To increase the aesthetic brand value of the festival, the design introduces a variety of wildflower beds scattered throughout the site; these create a strong brand image for the festival and memories that will encourage visitors to return. Folktales of various Asian countries are displayed in autumn flowers and groundcover plants at the centre of the site, the highlight of the festival. For sustainable placemaking, the design preserves the existing trees and reed beds for wildlife to create natural layers of landscape; facilities and service centers are designed to be flexible and centred on the needs of the people using them. A festival management scheme was also planned so that the site can be operated efficiently and economically.

The Role of Digital Knowledge Richness in Green Technology Adoption: A Digital Option Theory Perspective (그린기술 채택에의 디지털 지식풍부성의 역할: 디지털 옵션 이론 관점에서)

  • Yoo, Hosun;Lee, Namyeon;Kwon, Ohbyung
    • The Journal of Information Systems / v.24 no.2 / pp.23-52 / 2015
  • Purpose This study aims to understand the role of digital knowledge in the acceptance of green technology, combining digital option theory with the second version of the Unified Theory of Acceptance and Use of Technology (UTAUT2). In contrast to other studies in which the UTAUT2 is used to explain IT adoption behavior, we look at the relationship between IT and the UTAUT2 from a new angle, incorporating an important aspect of IT, digitized knowledge richness, as a determinant of the UTAUT2. Design/methodology/approach Grounded in the UTAUT2, a content analysis was conducted to investigate novel constructs dedicated to explaining green technology adoption. An amended version of the UTAUT2 specific to green technology is offered that better explains consumers' green technology adoption behavior. Using the items identified by the content analysis, we developed a questionnaire with 36 survey items, all measured on a seven-point Likert-type scale, and randomly selected 402 survey respondents from a set of panel data. After a pilot study, we analyzed the main survey data with PLS 2.0M3 and SPSS 20.0, employing structural equation modeling to test the hypotheses. Findings The results suggest that the UTAUT2 extends to technologies other than conventional IT. Social influence is more significant than the conventional utilitarian and hedonic constructs of the UTAUT and UTAUT2 in explaining adoption behavior in the context of green technologies. The hypothesized connection between digitized knowledge richness and adoption intention is supported, consistent with studies on the role of IT in forming attitudes toward eco-friendly production. The results also indicate that digital knowledge can encourage people to try green technology when they learn that their peers are already using it successfully.
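As a hedged illustration only: the structural part of such a model could be tested in Python with semopy, treating each construct as an observed composite score. Covariance-based SEM here stands in for the paper's PLS 2.0M3 analysis, and the variable names and synthetic data are assumptions.

```python
# Sketch: structural model for adoption intention on synthetic Likert composites.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 402  # same sample size as the survey
social = rng.integers(1, 8, n).astype(float)       # 7-point Likert composites
knowledge = rng.integers(1, 8, n).astype(float)
intention = 0.4 * social + 0.3 * knowledge + rng.normal(0, 1, n)
df = pd.DataFrame({"social_influence": social,
                   "digital_knowledge": knowledge,
                   "adoption_intention": intention})

desc = "adoption_intention ~ social_influence + digital_knowledge"
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```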

A Study on Fast Iris Detection for Iris Recognition in Mobile Phone (휴대폰에서의 홍채인식을 위한 고속 홍채검출에 관한 연구)

  • Park Hyun-Ae;Park Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.2 s.308 / pp.19-29 / 2006
  • As the security of personal information becomes more important in mobile phones, iris recognition technology is starting to be applied to these devices. Conventional iris recognition requires magnified iris images, which have typically been captured with large zoom-and-focus lens cameras; however, the size and cost requirements of mobile phones make such lenses difficult to use. With the rapid development and multimedia convergence of mobile phones, more and more companies have built mega-pixel cameras into their phones, which make it possible to capture a magnified iris image without a zoom-and-focus lens. Although facial images are captured far from the user with a mega-pixel camera, the captured iris region contains sufficient pixel information for iris recognition. In this case, however, the eye region must be detected in the facial image for accurate iris recognition. We therefore propose a new fast iris detection method appropriate for mobile phones, based on corneal specular reflection. To detect the specular reflection robustly, we present a theoretical background for estimating its size and brightness based on eye, camera, and illuminator models. In addition, we use a successive on/off scheme for the illuminator to detect optical/motion blurring and sunlight effects in the input image. Experimental results show that the total processing time for detecting the iris region averages 65 ms on a Samsung SCH-S2300 mobile phone (150 MHz ARM9 CPU). The rate of correct iris detection is 99% for indoor images and 98.5% for outdoor images.
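A minimal sketch of the specular-reflection cue, with assumed thresholds rather than the paper's model-derived values: reflections from the illuminator appear as small, near-saturated blobs, so bright connected components of plausible size are kept as eye-position candidates.

```python
# Sketch: find corneal specular-reflection candidates in a grayscale image.
import cv2
import numpy as np

# Synthetic stand-in for a captured face image: dark background with two
# small near-saturated spots playing the role of corneal reflections.
img = np.full((120, 160), 60, np.uint8)
cv2.circle(img, (50, 60), 2, 255, -1)
cv2.circle(img, (110, 60), 2, 255, -1)

# Specular reflections from the illuminator are near-saturated pixels.
_, bright = cv2.threshold(img, 240, 255, cv2.THRESH_BINARY)

# Keep only blobs whose area matches the expected reflection size, which the
# paper estimates from eye/camera/illuminator geometry (a guessed range here).
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bright)
candidates = [tuple(map(int, centroids[i])) for i in range(1, n)
              if 3 <= stats[i, cv2.CC_STAT_AREA] <= 100]
print("specular reflection candidates:", candidates)  # ~[(50, 60), (110, 60)]
```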

Development of Drawing & Specification Management System Using 3D Object-based Product Model (3차원 객체기반 모델을 이용한 설계도면 및 시방서관리 시스템 구축)

  • Kim Hyun-nam;Wang Il-kook;Chin Sang-yoon
    • Korean Journal of Construction Engineering and Management / v.1 no.3 s.3 / pp.124-134 / 2000
  • In construction projects, design information, which should contain accurate product information in a systematic way, needs to be usable throughout the project life cycle. However, paper-based 2D drawings and related documents have difficulty communicating and sharing the owner's and architect's intentions and requirements effectively, and in building a corporate knowledge base through ongoing projects, due to the lack of interoperability between task- or function-specific software and the burden of handling massive information. Meanwhile, computer and information technologies are developing so rapidly that practitioners find it hard to adapt them to the industry efficiently. The 3D modeling capabilities of CAD systems have advanced enormously and enable users to associate 3D models with other relevant information; however, representing all design information in a CAD system still requires a great deal of effort and cost, and the resulting sophisticated system is difficult to manage. This research focuses on the transition period from 2D-based to 3D-based design information management, in which 2D- and 3D-based management coexist. It proposes a compound 2D/3D CAD model in which the 3D model presents the general design information and is integrated with 2D CAD drawings for the detailed design information. On this basis we developed an integrated information management system for drawings and specifications that associates 2D drawings with 3D models, where the 2D drawings represent detailed design and the parts that are hard to express as 3D objects. To do this, the related management processes were analyzed to build an information model, which in turn became the basis of the integrated information management system.
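A hedged sketch of the kind of information model such a compound 2D/3D system implies (all structures and names are hypothetical, not the paper's): a 3D object carries general design information and links to the 2D drawings and specification sections holding the detail.

```python
# Sketch: linking a 3D building element to its 2D drawings and spec sections.
from dataclasses import dataclass, field

@dataclass
class Drawing2D:
    drawing_no: str          # e.g. "S-201"
    sheet_title: str
    file_path: str

@dataclass
class SpecSection:
    section_no: str          # e.g. "03300"
    title: str

@dataclass
class Object3D:
    object_id: str           # building element in the 3D model
    element_type: str        # e.g. "column", "slab"
    drawings: list[Drawing2D] = field(default_factory=list)
    specs: list[SpecSection] = field(default_factory=list)

# A column whose detailed design lives in 2D sheets and a spec section.
col = Object3D("C-12", "column",
               drawings=[Drawing2D("S-201", "Column rebar details", "s201.dwg")],
               specs=[SpecSection("03300", "Cast-in-place concrete")])
print(col.object_id, "->", [d.drawing_no for d in col.drawings])
```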
