• Title/Summary/Keyword: Information-based smart construction


A Research for Methodology of Culture Semiotics for Smart Healing Contents (스마트 힐링콘텐츠의 문화기호학적 방법론 연구)

  • Baik, Seung-Kuk;Yoon, En-Ho
    • Journal of Information Technology and Architecture / v.11 no.3 / pp.347-357 / 2014
  • This research aims to suggest the possibility of functional culture contents based on interdisciplinary methodologies, especially for people who have Autism Spectrum Conditions or who have difficulty expressing and receiving gamsung (emotions). Recently, the development of application technologies for smartphones and tablet computers has created a need for functional culture contents connected with the gamsung system. Moreover, the market potential of functional culture contents is emerging, as can be seen from the success of Augmentative and Alternative Communication (AAC) applications. Therefore, developing more applications that proactively prevent and resolve gamsung disabilities can produce a positive economic effect, reducing intervention expenses and helping to build a new contents ecosystem. In this research, we therefore apply a cultural semiotics methodology to identify the attributes and features of applications that help people with gamsung disabilities maintain mental stability and balance. In particular, this research suggests an interdisciplinary theory of healing-contents production methodology, based on content analysis, user interface (UI) analysis, and user experience (UX) analysis of existing smart healing applications.

Road Sign Function Diversification Strategy to Respond to Changes in the Future Traffic Environment : Focusing on Citizens' Usability of Road Signs (미래 교통환경 변화 대응을 위한 도로표지 기능 다변화 전략: 시민의 도로표지 활용성을 중심으로)

  • Choi, Woo-Chul;Cheong, Kyu-Soo;Na, Joon-Yeop
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.3 / pp.30-41 / 2022
  • With the advent of autonomous driving, personal mobility, drones, and smart roads, the road guidance system must respond to changes in the road traffic environment. However, the use of road signs for route guidance is decreasing compared to the past because of devices such as navigation systems and smartphones. Therefore, in this study, a large-scale survey was conducted to identify road sign issues and usage plans for responding to future changes. Based on this, the study presents a strategy to diversify road sign functions by analyzing the factors that affect citizens' use of road signs. As a result, first, it is necessary to provide real-time, variable road guidance information that reflects user needs such as traffic, weather, and local events. Second, it is necessary to informatize digital road signs, for example by reflecting them in precision maps. Third, it is necessary to demonstrate road guidance in a virtual environment that reflects various future mobility types and road environments.
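The abstract does not state which statistical model is used to analyze the factors affecting citizens' use of road signs; as a purely hypothetical illustration of that kind of analysis, the sketch below fits a logistic regression to made-up survey-style variables (the names age, drives_daily, uses_navigation, and trust_in_signs are invented) and reads the standardized coefficients as rough factor importances.

```python
# Hypothetical illustration only: the paper does not disclose its exact model or variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500  # pretend survey respondents

survey = pd.DataFrame({
    "age": rng.integers(20, 70, n),
    "drives_daily": rng.integers(0, 2, n),
    "uses_navigation": rng.integers(0, 2, n),
    "trust_in_signs": rng.integers(1, 6, n),   # 5-point Likert scale
})

# Synthetic target: whether the respondent reports relying on road signs.
logit = 0.03 * survey["age"] - 1.2 * survey["uses_navigation"] + 0.6 * survey["trust_in_signs"] - 2.0
relies_on_signs = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = StandardScaler().fit_transform(survey)
model = LogisticRegression().fit(X, relies_on_signs)

# Standardized coefficients serve as a rough measure of each factor's influence.
for name, coef in zip(survey.columns, model.coef_[0]):
    print(f"{name:>16s}: {coef:+.3f}")
```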

A Study on the Application of Blockchain Technology to the Record Management Model (블록체인기술을 적용한 기록관리 모델 구축 방법 연구)

  • Hong, Deok-Yong
    • Journal of Korean Society of Archives and Records Management / v.19 no.3 / pp.223-245 / 2019
  • As the foundation of the Fourth Industrial Revolution, blockchain is becoming an essential core infrastructure and technology that creates new growth engines in various industries, and it is rapidly spreading to the environments of businesses and institutions worldwide. In this study, the characteristics and trends of blockchain technology were investigated and organized, the need for its application to records management in public institutions was established, and the procedures and methods for building it in the records management field of public institutions were studied through the literature. Finally, blockchain technology was applied to records management to propose an Archivechain model and describe its expected effects. When the transactions that record the records management process of electronic documents are loaded onto the blockchain, all step information can be checked at once for records management standard tasks that were previously fragmented and unlinked. If a blockchain function is built into the electronic records management system, the person who produces a document enters its metadata and information when acquiring and registering it, and all contents are stored and classified. This would simplify the process of reporting production status and provide real-time information through the original text information disclosure service. Archivechain is a model that applies cloud infrastructure as a Backend as a Service (BaaS) on a Hyperledger platform, assuming that the electronic document production system and the records management system are integrated. Creating a smart electronic records management system that places the entire life cycle of public records on a blockchain is the solution to bringing scattered information together.
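To make the idea of chaining records-management transactions more concrete, here is a minimal, stand-alone Python sketch. It is not the paper's Hyperledger-based Archivechain; the document ID and life-cycle steps below are invented, and a simple hash chain stands in for the blockchain so that every recorded step of a record's life cycle can be verified at once.

```python
# Conceptual sketch only: a hash-chained log of records-management transactions.
import hashlib, json, time

def make_block(prev_hash: str, payload: dict) -> dict:
    """Create one block recording a single records-management transaction."""
    block = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain: list) -> bool:
    """Check that no block's payload or linkage has been tampered with."""
    for prev, cur in zip(chain, chain[1:]):
        body = {k: cur[k] for k in ("timestamp", "payload", "prev_hash")}
        if cur["prev_hash"] != prev["hash"]:
            return False
        if cur["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
    return True

# Hypothetical life-cycle events for one electronic document.
chain = [make_block("0" * 64, {"doc_id": "DOC-001", "step": "registered", "creator": "producer"})]
for step in ("classified", "reported", "transferred"):
    chain.append(make_block(chain[-1]["hash"], {"doc_id": "DOC-001", "step": step}))

print("chain valid:", verify(chain))
```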

A Study on the Deriving of Areas of Concern for Crime using the Mental Map (멘탈 맵을 이용한 범죄발생 우려 지역 도출에 관한 연구)

  • Park, Su Jeong;Shin, Dong Bin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.177-188 / 2019
  • Recently, citizens have felt increasingly anxious as 'motiveless crimes' increase; the quality of citizens' lives is degraded and fear of crime is growing. In this study, crime-concern locations perceived by citizens (points, lines, and polygons) are created using the mental map methodology, based on various crime-related variables other than the actual crime occurrence status. The purpose of this study is to derive areas of concern for crime through spatial overlay analysis using kernel density estimation. As a result, the points requested by local residents and the derived areas of concern for crime overlapped. In addition, the mental map indicating fear of crime was constructed mainly by mapping areas between facilities, unbuilt areas such as narrow alleys, security CCTV, and streetlights. This study is meaningful in that, unlike previous crime-related studies, it derived areas of concern for crime using the mental map method. The results of this study, such as the mental map, could be used in various fields such as constructing crime-vulnerability maps and developing guidelines for crime prevention through environmental design.
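As a rough illustration of the kernel density estimation step (the paper's GIS workflow and data are not reproduced here), the following sketch estimates a density surface over hypothetical citizen-reported points and flags the highest-density grid cells as candidate areas of concern; the coordinates and the 95th-percentile threshold are arbitrary choices.

```python
# Minimal sketch: KDE over made-up mental-map points, with a density threshold
# standing in for the spatial-overlay step that derives areas of concern.
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical crime-concern points (x, y) reported by residents.
points = np.array([
    [127.0010, 37.5010], [127.0020, 37.5020], [127.0015, 37.5012],
    [127.0100, 37.5100], [127.0102, 37.5101], [126.9950, 37.4950],
]).T  # gaussian_kde expects shape (n_dims, n_points)

kde = gaussian_kde(points)

# Evaluate the density on a coarse grid and flag high-density cells.
xs = np.linspace(126.99, 127.02, 60)
ys = np.linspace(37.49, 37.52, 60)
gx, gy = np.meshgrid(xs, ys)
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

threshold = np.percentile(density, 95)        # keep the top 5% of cells
concern_cells = np.argwhere(density >= threshold)
print(f"{len(concern_cells)} grid cells flagged as potential areas of concern")
```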

CNN-based Shadow Detection Method using Height map in 3D Virtual City Model (3차원 가상도시 모델에서 높이맵을 이용한 CNN 기반의 그림자 탐지방법)

  • Yoon, Hee Jin;Kim, Ju Wan;Jang, In Sung;Lee, Byung-Dai;Kim, Nam-Gi
    • Journal of Internet Computing and Services / v.20 no.6 / pp.55-63 / 2019
  • Recently, the use of real-world image data has been increasing to express realistic virtual environments in various application fields such as education, manufacturing, and construction. In particular, with growing interest in digital twins such as smart cities, realistic 3D urban models are being built from real-world images such as aerial images. However, captured aerial images include shadows cast by the sun, and a 3D city model that includes these shadows presents distorted information to the user. Many studies have been conducted to remove shadows, but this is still recognized as a challenging problem. In this paper, we construct a virtual environment dataset that includes building height maps using 3D spatial information provided by VWorld, and we propose a new shadow detection method using the height map and deep learning. According to the experimental results, the shadow detection error rate is reduced when the height map is used.
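The abstract does not describe the network architecture in detail, so the following PyTorch sketch only illustrates the general idea of feeding the building height map alongside the RGB aerial image as an extra input channel to a CNN that predicts a per-pixel shadow mask; the layer sizes are arbitrary and not the paper's.

```python
# Illustrative sketch: stacking a height map with RGB for per-pixel shadow detection.
import torch
import torch.nn as nn

class ShadowNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1),  # 4 channels: R, G, B, height
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),             # per-pixel shadow logit
        )

    def forward(self, rgb: torch.Tensor, height: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, height], dim=1)              # (N, 4, H, W)
        return self.net(x)

# Dummy forward pass with random tensors standing in for an aerial tile and its height map.
rgb = torch.rand(1, 3, 128, 128)
height = torch.rand(1, 1, 128, 128)
mask_logits = ShadowNet()(rgb, height)
shadow_mask = torch.sigmoid(mask_logits) > 0.5
print(shadow_mask.shape)  # torch.Size([1, 1, 128, 128])
```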

Estimation of PM concentrations at night time using CCTV images in the area around the road (도로 주변 지역의 CCTV영상을 이용한 야간시간대 미세먼지 농도 추정)

  • Won, Taeyeon;Eo, Yang Dam;Jo, Su Min;Song, Junyoung;Youn, Junhee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.393-399 / 2021
  • In this study, experiments were conducted to estimate PM (particulate matter) concentrations by training on nighttime CCTV images captured under various PM concentration environments. For daytime images, many related studies exist, and the texture and brightness information of the images is well expressed, so the information that affects learning is clear. Nighttime images, however, contain less information than daytime images, and studies using only nighttime images are rare. Therefore, we conducted an experiment that combined whole nighttime images, whose characteristics are non-uniform because of light sources such as vehicles and streetlights, with ROIs (Regions of Interest) that have relatively constant light sources, such as building roofs, building walls, and streetlights. The correlation was then analyzed and compared with the daytime experiment to determine whether deep learning-based PM concentration estimation is possible with nighttime images. In the experiments, learning on the roof ROI produced the best results, and the model that combined the ROI with the entire image showed further improvement. Overall, R² exceeded 0.9, indicating that PM estimation from nighttime CCTV images is possible, and additionally combining weather data in training did not significantly affect the results.
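The study itself trains a deep model on real nighttime CCTV imagery; the simplified sketch below (with entirely synthetic data and a plain regressor instead of a deep network) only illustrates the evaluation idea of deriving features from a fixed ROI such as a building roof, regressing against measured PM values, and reporting R².

```python
# Simplified stand-in with synthetic data: ROI statistics loosely coupled to PM
# values so the R^2 evaluation step can be shown end to end.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_frames = 300
pm_true = rng.uniform(5, 120, n_frames)            # fake measured PM concentrations

# Pretend ROI (e.g. building roof) statistics: haze raises brightness, lowers contrast.
brightness = 0.40 + 0.0030 * pm_true + rng.normal(0, 0.02, n_frames)
contrast = 0.30 - 0.0010 * pm_true + rng.normal(0, 0.02, n_frames)
features = np.column_stack([brightness, contrast])

X_tr, X_te, y_tr, y_te = train_test_split(features, pm_true, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("R2 on held-out frames:", round(r2_score(y_te, model.predict(X_te)), 3))
```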

Development Process for User Needs-based Chatbot: Focusing on Design Thinking Methodology (사용자 니즈 기반의 챗봇 개발 프로세스: 디자인 사고방법론을 중심으로)

  • Kim, Museong;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.221-238 / 2019
  • Recently, companies and public institutions have been actively introducing chatbot services in the field of customer counseling and response. The introduction of chatbot services not only brings labor cost savings to companies and organizations but also enables rapid communication with customers. Advances in data analytics and artificial intelligence are driving the growth of these chatbot services. Current chatbots can understand users' questions and offer the most appropriate answers through machine learning and deep learning. The advancement of core chatbot technologies such as NLP, NLU, and NLG has made it possible to understand words, paragraphs, meanings, and emotions. For this reason, the value of chatbots continues to rise. However, technology-oriented chatbots can be inconsistent with what users inherently want, so chatbots need to be addressed in the area of user experience, not just technology. The Fourth Industrial Revolution highlights the importance of user experience as well as the advancement of artificial intelligence, big data, cloud, and IoT technologies. The development of IT technology and the importance of user experience have provided people with a variety of environments and changed lifestyles. This means that experiences in interactions with people, services (products), and the environment become very important. Therefore, it is time to develop user needs-based services (products) that can provide new experiences and value to people. This study proposes a chatbot development process based on user needs by applying the design thinking approach, a representative methodology in the field of user experience, to chatbot development. The proposed process consists of four steps. The first step, 'Setting up the knowledge domain', establishes the chatbot's expertise. Accumulating the information corresponding to the configured domain and deriving insights is the second step, 'Knowledge accumulation and insight identification'. The third step is 'Opportunity development and prototyping', where full-scale development begins. Finally, the 'User feedback' step collects feedback from users on the developed prototype. This creates a 'user needs-based service (product)' that meets the objectives of the process. Beginning with fact gathering through user observation, abstraction is performed to derive insights and explore opportunities. Next, a chatbot that meets the user's needs is developed through concretization, structuring the desired information and providing functions that fit the user's mental model. In this study, we present an actual construction example for the domestic cosmetics market to confirm the effectiveness of the proposed process. The domestic cosmetics market was chosen as the case because it strongly reflects users' experiences, so responses from users can be understood quickly. This study has a theoretical implication in that it proposes a new chatbot development process by incorporating the design thinking methodology into chatbot development. It differs from existing chatbot development research in that it focuses on user experience rather than technology. It also has practical implications in that it proposes realistic methods that companies or institutions can apply immediately. In particular, the process proposed in this study can be accessed and utilized by anyone, since 'user needs-based chatbots' can be developed even by non-experts. This study suggests that further research is needed because only one field was examined. In addition to the cosmetics market, research should be conducted in various fields where user experience matters, such as the smartphone and automotive markets. Through this, the approach can develop into a general process for building chatbots centered on user experience rather than technology.
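The paper describes a design process rather than an implementation, but a throwaway prototype is what the 'Opportunity development and prototyping' step produces. As a purely hypothetical example, the sketch below matches a user utterance against a tiny hand-built cosmetics Q&A domain by string similarity; the questions, answers, and domain contents are invented.

```python
# Hypothetical prototype only, not the paper's system: a minimal cosmetics-domain
# chatbot that answers by matching the utterance to the closest known question.
from difflib import SequenceMatcher

# Steps 1-2 artifacts: a tiny knowledge domain distilled from (imaginary) user research.
knowledge_domain = {
    "How do I remove waterproof mascara?": "Use an oil-based cleanser, then rinse with lukewarm water.",
    "What SPF should I use daily?": "SPF 30 or higher is generally recommended for daily use.",
    "Is toner necessary after cleansing?": "Toner is optional; it can help rebalance skin pH.",
}

def answer(utterance: str) -> str:
    """Step 3 prototype: return the answer whose question best matches the utterance."""
    best_q = max(knowledge_domain,
                 key=lambda q: SequenceMatcher(None, utterance.lower(), q.lower()).ratio())
    return knowledge_domain[best_q]

# Step 4 would collect user feedback on answers like this one.
print(answer("what spf is good for every day?"))
```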

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI-related research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, thanks to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. In recent years, much research on knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is created through mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy because knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model built according to the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of appropriate sentences for triple extraction, and value selection and transformation into an RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify an input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from the Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we carried out comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
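As an illustration of the final step of the pipeline described above, the sketch below converts a BIO-tagged sentence into RDF-style triples. The tag names, example sentence, and DBpedia-style prefixes are illustrative and are not the paper's actual label set or output format.

```python
# Minimal sketch: turning a BIO-tagged sentence into RDF-style triples.
def bio_to_spans(tokens, tags):
    """Collect (property, value) pairs from BIO tags like B-birthPlace / I-birthPlace."""
    spans, current_prop, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_prop:
                spans.append((current_prop, " ".join(current_tokens)))
            current_prop, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_prop == tag[2:]:
            current_tokens.append(token)
        else:
            if current_prop:
                spans.append((current_prop, " ".join(current_tokens)))
            current_prop, current_tokens = None, []
    if current_prop:
        spans.append((current_prop, " ".join(current_tokens)))
    return spans

tokens = ["Yi", "Sun-sin", "was", "born", "in", "Hanseong", "in", "1545", "."]
tags   = ["O", "O", "O", "O", "O", "B-birthPlace", "O", "B-birthYear", "O"]

subject = "dbr:Yi_Sun-sin"   # document already classified into an ontology class, e.g. dbo:Person
for prop, value in bio_to_spans(tokens, tags):
    print(f'{subject} dbo:{prop} "{value}" .')
```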

A Study on the SCM Capability Modeling and Process Improvement in Small Venture Firms (중소·벤처기업의 SCM역량 모델링과 프로세스 개선 방안에 관한 연구)

  • Lee, Seolbin;Park, Jugyeong
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.13 no.2 / pp.115-123 / 2018
  • This study empirically puts forward modeling and process improvement measures for SCM capability in small venture firms. The findings are summarized as follows. The modeling of strategic planning for the supply chain comprised strategic alliance, technological development, and centralization, with strategic alliance the most significant, followed by centralization and technological development. Decision making comprised routing scheduling, network integration, and third-party logistics outsourcing, with network integration the most significant. Management control comprised customer service management, productivity management, and quality management, with quality management the most significant. The transaction support system comprised order management choice, pricing demand, shipment delivery, and customer management, with order management choice the most significant. Given these findings, to maximize SCM capability and operate an optimized process in small venture firms, existing strategic alliances can optimize quality management and stabilize the transaction support system through network sharing and integration, from the perspective of organizational members' capabilities and process improvement. Strategic linkage between firms can maximize the integrated capability of the information system beyond the simple exchange of electronic data, achieving a differentiated competitive advantage. Consequently, systematization and centralization to maximize SCM capability, including infrastructure construction based on system compatibility and reliability for information integration, should precede the modeling of integrated capability for an optimal supply chain and best process management in the smart era.

Improvement of Face Recognition Algorithm for Residential Area Surveillance System Based on Graph Convolution Network (그래프 컨벌루션 네트워크 기반 주거지역 감시시스템의 얼굴인식 알고리즘 개선)

  • Tan Heyi;Byung-Won Min
    • Journal of Internet of Things and Convergence / v.10 no.2 / pp.1-15 / 2024
  • The construction of smart communities is a new approach and an important measure for ensuring the security of residential areas. To solve the problem of low face recognition accuracy caused by facial features being distorted by surveillance camera angles and other external factors, this paper proposes the following optimization strategies in designing a face recognition network: first, a global graph convolution module is designed to encode facial features as graph nodes, and a multi-scale feature enhancement residual module is designed to extract facial keypoint features in conjunction with the global graph convolution module. Second, after the facial keypoints are obtained, they are organized into a directed graph structure, and graph attention mechanisms are used to enhance the representational power of the graph features. Finally, tensor computations are performed on the graph features of two faces, and the aggregated features are extracted and discriminated by a fully connected layer to determine whether the individuals' identities are the same. In various experimental tests, the network designed in this paper achieves an AUC of 85.65% for facial keypoint localization on the 300W public dataset and 88.92% on a self-built dataset. In terms of face recognition accuracy, the proposed network achieves 83.41% on the IBUG public dataset and 96.74% on a self-built dataset. The experimental results demonstrate that the network exhibits high detection and recognition accuracy for faces in surveillance videos.
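As a minimal illustration of the pattern the abstract describes (keypoint features as graph nodes, attention-weighted aggregation over the graph, and a fully connected layer that decides whether two faces belong to the same person), here is a hedged PyTorch sketch; it does not reproduce the paper's modules, and the feature dimensions and fully connected adjacency are arbitrary.

```python
# Illustrative sketch only; not the paper's architecture.
import torch
import torch.nn as nn

class SimpleGraphAttention(nn.Module):
    """Attention-weighted aggregation over a keypoint graph."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (n_nodes, dim) keypoint features; adj: (n_nodes, n_nodes) 0/1 adjacency
        scores = self.q(x) @ self.k(x).T / x.shape[-1] ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))
        return torch.softmax(scores, dim=-1) @ self.v(x)

class FaceVerifier(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.gat = SimpleGraphAttention(dim)
        self.head = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def embed(self, keypoint_feats, adj):
        # Refine node features with graph attention, then pool them into one face embedding.
        return self.gat(keypoint_feats, adj).mean(dim=0)

    def forward(self, feats_a, feats_b, adj):
        pair = torch.cat([self.embed(feats_a, adj), self.embed(feats_b, adj)])
        return torch.sigmoid(self.head(pair))   # probability that both faces share one identity

# Dummy run: 68 facial keypoints, 64-dim features each, fully connected keypoint graph.
n_points, dim = 68, 64
adj = torch.ones(n_points, n_points)
face_a, face_b = torch.rand(n_points, dim), torch.rand(n_points, dim)
print(FaceVerifier(dim)(face_a, face_b, adj))
```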