• Title/Summary/Keyword: Artificial Intelligence Services

Modified analytical AI evolution of composite structures with algorithmic optimization of performance thresholds

  • ZY Chen;Yahui Meng;Huakun Wu;ZY Gu;Timothy Chen
    • Steel and Composite Structures / v.53 no.1 / pp.103-114 / 2024
  • This study proposes a new hybrid approach that uses post-earthquake survey data together with numerical results from an evolving finite element model to capture vulnerability processes. To achieve cost-effective evaluation and optimization, the study introduces an online data-evolution platform. The proposed method consists of four stages: 1) development of an analytical fragility (sensitivity) curve; 2) determination of the probability distribution parameters of the capacity threshold through optimization; 3) updating of the distribution parameters using a smart evolution method; and 4) derivation of the updated distribution parameters to produce a blended curve. The analytical curves are initially obtained from a finite element model representing a similar RC building, with an estimated (prior) capacity in the damaged area. The prior is then updated with empirical failure probabilities estimated from the post-earthquake survey data, and the blended sensitivity curve is constructed from the update (posterior) that best describes those empirical failure probabilities. The results show that the seismic failure estimate is close to the empirical failure probability and agrees closely with practical online engineering analysis. The objectives of this paper are to support adequate, safe and affordable housing and basic services, promote inclusive and sustainable urbanization and participation, and implement sustainable, disaster-resilient buildings and sustainable human settlement planning and management. With the continuing development of artificial intelligence and management strategy, this goal is expected to be achieved in the near future.
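The four-stage prior-to-posterior update described above can be illustrated with a small sketch. The following Python example is a hedged illustration only: it assumes a lognormal fragility curve, a binomial likelihood over survey bins, and a grid posterior, none of which are specified in the abstract; all numbers are invented.

```python
# A minimal sketch of a Bayesian fragility-curve update of the kind the
# abstract describes; the exact scheme, priors, and values are assumptions.
import numpy as np
from scipy.stats import norm

def fragility(im, theta, beta):
    """Lognormal fragility: P(failure | intensity measure im)."""
    return norm.cdf(np.log(im / theta) / beta)

# Stages 1-2: analytical (prior) parameters, e.g. from a finite element model.
theta0, beta0 = 0.45, 0.5          # hypothetical median capacity and dispersion

# Post-earthquake survey data: intensity bins, buildings surveyed, failures.
im_bins  = np.array([0.2, 0.4, 0.6, 0.8])
n_survey = np.array([40, 35, 25, 10])
n_failed = np.array([2, 8, 14, 8])

# Stage 3: grid-based posterior over (theta, beta) with a binomial likelihood
# and a weak lognormal prior centred on the analytical values.
thetas = np.linspace(0.2, 0.8, 121)
betas  = np.linspace(0.2, 1.0, 81)
T, B = np.meshgrid(thetas, betas, indexing="ij")
log_post = (norm.logpdf(np.log(T), np.log(theta0), 0.3)
            + norm.logpdf(np.log(B), np.log(beta0), 0.3))
for im, n, k in zip(im_bins, n_survey, n_failed):
    p = norm.cdf(np.log(im / T) / B).clip(1e-9, 1 - 1e-9)
    log_post += k * np.log(p) + (n - k) * np.log(1 - p)

# Stage 4: posterior-mean parameters define the blended (updated) curve.
w = np.exp(log_post - log_post.max()); w /= w.sum()
theta_post, beta_post = (w * T).sum(), (w * B).sum()
print(f"prior:     theta={theta0}, beta={beta0}")
print(f"posterior: theta={theta_post:.3f}, beta={beta_post:.3f}")
```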

Enhancing mechanical performance of steel-tube-encased HSC composite walls: Experimental investigation and analytical modeling

  • ZY Chen;Ruei-Yuan Wang;Yahui Meng;Huakun Wu;Lai B;Timothy Chen
    • Steel and Composite Structures / v.52 no.6 / pp.647-656 / 2024
  • This paper presents an algorithmic modeling study of concrete composite walls in which steel tubes are embedded. The load-bearing capacity of steel-tube-encased high-strength concrete (STHC) composite walls increases with the axial load coefficient, but their ductility decreases. The load-bearing capacity can be improved by increasing the strength of the steel tubes; however, the elasticity of STHC composite walls was found to be slightly reduced. As the shear stress coefficient increases, the load-bearing capacity of STHC composite walls decreases significantly, while their deformation resistance increases. By analyzing actual cases, we demonstrate the effectiveness of the research results in real situations and strengthen the conclusions. The results can serve as a basis for future research, inspire further exploration of seismic design and construction, and advance the field. They also encourage interdisciplinary cooperation among structural engineering, earthquake engineering, and materials science to improve overall seismic resistance, which highlights the practical impact of the research and promotes progress in the design and construction of earthquake-resistant structures. The goals of this work are access to adequate, safe and affordable housing and basic services, promotion of inclusive and sustainable urbanization and participation, implementation of sustainable and disaster-resilient architecture, and sustainable planning and management of human settlements. Simulation results for linear and nonlinear structures show that the method can detect structural parameters and their changes due to damage and unknown disturbances. It is therefore believed that, with the further development of fuzzy neural network artificial intelligence theory, this goal will be achieved in the near future.
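As a rough illustration of what "detecting structural parameters and their changes" can mean, the following sketch identifies the stiffness and damping of a simulated single-degree-of-freedom system by least squares. It is a stand-in, not the paper's fuzzy-neural-network method; all values are invented.

```python
# Minimal structural parameter identification: recover stiffness k and
# damping c of a SDOF system (m*a + c*v + k*x = 0) from noisy response data.
import numpy as np

m = 1.0                      # known mass (assumption)
k_true, c_true = 250.0, 1.2  # "unknown" stiffness and damping to recover

# Simulate free vibration with small process noise (semi-implicit Euler).
dt, n = 1e-3, 20000
x, v = np.empty(n), np.empty(n)
x[0], v[0] = 0.01, 0.0
rng = np.random.default_rng(0)
for i in range(n - 1):
    a = -(c_true * v[i] + k_true * x[i]) / m + 1e-3 * rng.standard_normal()
    v[i + 1] = v[i] + a * dt
    x[i + 1] = x[i] + v[i + 1] * dt

# Identification: solve [v x] @ [c k]^T = -m*a in the least-squares sense.
a_meas = np.gradient(v, dt)
A = np.column_stack([v, x])
(c_est, k_est), *_ = np.linalg.lstsq(A, -m * a_meas, rcond=None)
print(f"identified c={c_est:.3f} (true {c_true}), k={k_est:.1f} (true {k_true})")
```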

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than people's in many fields, including image and speech recognition. In particular, many efforts have been made to identify current technology trends and analyze development directions, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, services, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which have been developed through the online collaboration of many parties. The study searched and collected a list of major AI-related projects created from 2000 to July 2018 on Github, and confirmed the development trends of major technologies in detail by applying a text mining technique to topic information, which indicates the characteristics of the collected projects and their technical fields. The results showed that the number of software development projects per year was fewer than 100 until 2013, but increased to 229 projects in 2014 and 597 projects in 2015. The number of AI-related open source projects grew especially rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, using the appearance frequency of topics to indicate the technology trends of AI-related OSS projects. Natural language processing remained at the top across all years, implying that such OSS was developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten, replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were also frequent topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics reached the top of the list, although they were not at the top from 2009 to 2012, indicating that OSS was developed in the medical field in order to utilize AI technology.
Moreover, although computer vision was in the top ten of the appearance frequency list from 2013 to 2015, it was not in the top ten by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changed slightly. The trend of technology development was examined using both appearance frequency and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.
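The topic-frequency and degree-centrality analysis described in the abstract can be sketched in a few lines. The example below uses invented placeholder topic lists, not the study's GitHub data, and assumes a co-occurrence graph as the network definition.

```python
# Sketch of the paper's topic-network analysis: build a co-occurrence graph
# of project topics and rank them by degree centrality.
from itertools import combinations
from collections import Counter
import networkx as nx

projects = [  # hypothetical per-project topic labels
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "computer-vision", "python"],
    ["deep-learning", "reinforcement-learning", "tensorflow"],
    ["natural-language-processing", "machine-learning", "python"],
]

# Appearance frequency: how often each topic labels a project.
freq = Counter(t for topics in projects for t in topics)

# Co-occurrence network: topics sharing a project are connected.
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(G)
for topic, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{topic:30s} frequency={freq[topic]} degree_centrality={c:.2f}")
```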

A Proposal for Mobile Gallery Auction Method Using NFC-based FIDO and 2 Factor Technology and Permissioned Distributed Ledger Blockchain (NFC 기반 FIDO(Fast IDentity Online) 및 2 Factor 기술과 허가형 분산원장 블록체인을 이용한 모바일 갤러리 경매 방안 제안)

  • Noh, Sun-Kuk
    • Journal of Internet Computing and Services / v.20 no.6 / pp.129-135 / 2019
  • Recently, with the increase in the number of NFC-equipped smartphones, studies have been conducted to improve the m-commerce process in the NFC-based mobile environment. Since authentication is important in mobile electronic payment, FIDO (Fast IDentity Online) and a 2-factor electronic payment system are applied. In addition, blockchains based on distributed ledgers have emerged as a representative technology of the fourth industrial revolution. In this study, for mobile gallery auctions by traders using NFC-equipped terminals (smartphones) in a small gallery auction in which an unspecified minority participates, password-based authentication and biometric (fingerprint) authentication were applied to record the transaction details and ownership transfers of auction participants in electronic payment. For cost reduction and data integrity in the gallery auction, a private permissioned distributed-ledger blockchain was constructed and used. In addition, domestic and foreign cases applying blockchain in the auction field were investigated and compared. Future work will study the implementation of blockchain networks and smart contracts, and the integration of blockchain and artificial intelligence, to apply the proposed method.
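A minimal sketch of the permissioned distributed-ledger idea follows. It is a toy illustration under stated assumptions (a fixed allow-list of writers, SHA-256 hash linking, JSON-serialized blocks); the paper's actual network design, consensus, and data model are not described in the abstract.

```python
# Toy permissioned ledger: hash-linked auction records that only
# pre-authorized participants may append. Names/fields are illustrative.
import hashlib, json, time

AUTHORIZED = {"gallery", "auctioneer"}   # permissioned: known writers only

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class AuctionLedger:
    def __init__(self):
        genesis = {"index": 0, "prev": "0" * 64, "ts": time.time(), "tx": None}
        self.chain = [genesis]

    def append(self, writer: str, tx: dict) -> dict:
        if writer not in AUTHORIZED:
            raise PermissionError(f"{writer} is not an authorized node")
        block = {"index": len(self.chain),
                 "prev": block_hash(self.chain[-1]),
                 "ts": time.time(), "writer": writer, "tx": tx}
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        # Each block must reference the hash of its predecessor.
        return all(b["prev"] == block_hash(p)
                   for p, b in zip(self.chain, self.chain[1:]))

ledger = AuctionLedger()
ledger.append("gallery", {"item": "artwork-17", "bid": 1_200_000, "bidder": "A"})
ledger.append("auctioneer", {"item": "artwork-17", "ownership": "A"})
print("chain valid:", ledger.verify())
```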

Convergence of Artificial Intelligence Techniques and Domain Specific Knowledge for Generating Super-Resolution Meteorological Data (기상 자료 초해상화를 위한 인공지능 기술과 기상 전문 지식의 융합)

  • Ha, Ji-Hun;Park, Kun-Woo;Im, Hyo-Hyuk;Cho, Dong-Hee;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society / v.12 no.10 / pp.63-70 / 2021
  • Generating super-resolution meteorological data using a deep neural network can support precise research and useful real-life services. We propose a new technique for generating improved training data for super-resolution deep neural networks. To generate high-resolution meteorological data with domain-specific knowledge, a Lambert conformal conic projection and objective analysis were applied based on observation data and ERA5 reanalysis field data from specialized institutions. As a result, temperature and humidity analysis data based on domain-specific knowledge showed RMSE improvements of up to 42% and 46%, respectively. Next, a super-resolution generative adversarial network (SRGAN), one of the artificial intelligence techniques, was used to automate the manual, domain-specific data generation technique described above. Experiments were conducted to generate high-resolution data at 1 km resolution from global model data at 10 km resolution. The results generated with SRGAN have a higher resolution than the global model input data and showed an analysis pattern similar to the manually generated high-resolution analysis data, though with smoother boundaries.
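The abstract does not name the objective-analysis scheme used, so the sketch below illustrates one common choice, a single Cressman successive-correction pass, on an invented toy grid; the projection step and real ERA5 handling are omitted.

```python
# One Cressman pass: correct a coarse background field toward observations.
# All grids, observations, and the radius of influence are invented.
import numpy as np

def cressman(grid_x, grid_y, bg, obs_x, obs_y, obs_val, radius):
    analysis = bg.copy()
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    num = np.zeros_like(bg)
    den = np.zeros_like(bg)
    for x0, y0, v in zip(obs_x, obs_y, obs_val):
        # innovation = observation minus background at the nearest grid point
        i = np.abs(grid_x - x0).argmin(); j = np.abs(grid_y - y0).argmin()
        d2 = (gx - x0) ** 2 + (gy - y0) ** 2
        w = np.clip((radius**2 - d2) / (radius**2 + d2), 0, None)
        num += w * (v - bg[i, j])
        den += w
    mask = den > 0
    analysis[mask] += num[mask] / den[mask]
    return analysis

# Toy setup: coarse ERA5-like background plus two station observations.
gx = gy = np.linspace(0, 100, 101)          # 1 km grid spacing
background = np.full((101, 101), 15.0)      # uniform 15 degC first guess
obs = ([20.0, 70.0], [30.0, 60.0], [17.2, 13.4])
analysis = cressman(gx, gy, background, *obs, radius=25.0)
print("background 15.0 ->", analysis[20, 30].round(2), "near first station")
```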

Detection of video editing points using facial keypoints (얼굴 특징점을 활용한 영상 편집점 탐지)

  • Joshep Na;Jinho Kim;Jonghyuk Park
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.15-30 / 2023
  • Recently, various services using artificial intelligence (AI) are emerging in the media field as well. However, most video editing, which involves finding an editing point and splicing the video, is carried out manually, requiring a lot of time and human resources. Therefore, this study proposes a methodology that detects the editing points of a video according to whether the person in the video is speaking, using a Video Swin Transformer. The proposed structure first detects facial keypoints through face alignment; through this process, the temporal and spatial changes of the face are captured from the input video data. The behavior of the person in the video is then classified with the Video Swin Transformer-based model proposed in this study. Specifically, after combining the feature map generated by the Video Swin Transformer from the video data with the facial keypoints detected through face alignment, utterance is classified through convolution layers. In conclusion, the performance of the editing-point detection model using facial keypoints proposed in this paper improved from 87.46% to 89.17% compared with the model without facial keypoints.
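The fusion step, combining backbone features with facial keypoints and classifying utterance through convolution layers, might look roughly like the following PyTorch sketch. Layer sizes, the 68-point keypoint format, and the pooled-feature shape are assumptions, not the paper's reported architecture.

```python
# Hedged sketch: fuse (stand-in) video backbone features with facial
# keypoints, then classify utterance with a temporal convolution stack.
import torch
import torch.nn as nn

class UtteranceHead(nn.Module):
    def __init__(self, feat_dim=768, kp_points=68, n_classes=2):
        super().__init__()
        self.kp_proj = nn.Linear(kp_points * 2, feat_dim)  # (x, y) per keypoint
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim * 2, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(128, n_classes)

    def forward(self, video_feats, keypoints):
        # video_feats: (B, T, feat_dim), e.g. pooled Video Swin features
        # keypoints:   (B, T, kp_points, 2) from face alignment
        B, T = keypoints.shape[:2]
        kp = self.kp_proj(keypoints.reshape(B, T, -1))      # (B, T, feat_dim)
        x = torch.cat([video_feats, kp], dim=-1)            # fuse per frame
        x = self.conv(x.transpose(1, 2)).squeeze(-1)        # (B, 128)
        return self.fc(x)                                   # utterance logits

head = UtteranceHead()
logits = head(torch.randn(2, 16, 768), torch.randn(2, 16, 68, 2))
print(logits.shape)  # torch.Size([2, 2])
```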

A Proposal for Establishing an 『Army TIGER Cyber Defense System』 Capable of Responding to Future Enemy Cyber Attacks (미래 사이버위협에 대응 가능한 『Army TIGER 사이버방호체계』 구축을 위한 제언)

  • Byeong-jun Park;Cheol-jung Kim
    • Journal of Internet Computing and Services / v.25 no.1 / pp.157-166 / 2024
  • The Army TIGER system, which is being deployed to implement a future combat system, is expected to bring innovative changes to the army's combat methods and combat execution capabilities, such as mobility, networking, and intelligence. To this end, the army will introduce various systems using drones, robots, unmanned vehicles, and artificial intelligence (AI) and utilize them in combat. The use of various unmanned vehicles and AI is expected to introduce equipment with new technologies into the army and to increase the variety of transmitted information, i.e., data. Currently, however, the military is accelerating research and combat experimentation on warfighting options using the Army TIGER force system for specific functions, while research on countermeasures against cyber threats targeting the information systems associated with the growing number of unmanned systems, the data they produce and transmit, and the cloud centers and AI command and control centers driven by the new force systems is not being pursued. Accordingly, this paper analyzes the structure and characteristics of the Army TIGER force integration system and offers suggestions on the necessity of building such protection, the available cyber defense solutions, and an Army TIGER integrated cyber protection system that can respond to future cyber threats.

Contactless Data Society and Reterritorialization of the Archive (비접촉 데이터 사회와 아카이브 재영토화)

  • Jo, Min-ji
    • The Korean Journal of Archival Studies / no.79 / pp.5-32 / 2024
  • The Korean government ranked 3rd among 193 UN member countries in the UN's 2022 e-Government Development Index. Korea, which has consistently been evaluated as a top country, can clearly be called a world leader in e-government. The lubricant of e-government is data. Data itself is neither information nor a record, but it is a source of information and records and a resource of knowledge. Since administrative actions through electronic systems have become widespread, the production and technology of data-based records have naturally expanded and evolved. Technology may seem value-neutral, but in fact technology itself reflects a specific worldview. The digital order of new technologies, armed with hyper-connectivity and super-intelligence, has a profound influence not only on traditional power structures but also on existing media for transmitting information and knowledge. Moreover, new technologies and media, including data-based generative artificial intelligence, are by far the hottest topic. The all-round growth and spread of digital technology has led to the augmentation of human capabilities and the outsourcing of thinking. This also involves a variety of problems, ranging from deepfakes and other fake images, auto-profiling, and AI hallucinations that present fabrications as if they were real, to copyright infringement of machine-learning data. Moreover, radical connectivity enables the instantaneous sharing of vast amounts of data and relies on the technological unconscious to generate actions without awareness. Another irony of the digital world and online networks, which are based on immaterial distribution and logical existence, is that access and contact can only be made through physical tools: digital information is a logical object, but digital resources cannot be read or utilized without some type of device to relay them. In that respect, machines in today's technological society have gone beyond the level of simple assistance, and there are points at which it is difficult to say that the entry of machines into human society is a natural pattern of change due to advanced technological development, because perspectives on machines will change over time. What matters are the social and cultural implications of changes in the way records are produced as a result of communication and action through machines. In the archive field as well, it is time to research what problems a data-based archive society will face amid technological change toward a hyper-intelligent, hyper-connected society, who will prove the continuous activity of records and data, and what the main drivers of media change will be. This study began with the need to recognize that archives are not only records that result from actions but also data as strategic assets. On this basis, the author considered how to expand traditional boundaries and achieve reterritorialization in a data-driven society.

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups from 2G to 5G to meet growing data demands. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, on top of high-speed data, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery through a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) applications, such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data speeds. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves achieve high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable in delay. Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing are needed. SDNs therefore need to be partitioned at a certain scale to construct a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, the RTD is not a significant factor because it is fast, with less than 1 ms of delay, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in which delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the 5G data rate is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50 to 250 m and maximum vehicle speeds of 30 to 200 km/h in order to examine the network architecture that minimizes delay.
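The dwell-time arithmetic underlying such a simulation is simple enough to sketch. The example below uses the abstract's stated ranges for cell radius and vehicle speed; the SDN processing time and information change cycle are invented placeholders.

```python
# How long a vehicle stays inside a 5G small cell, and how many delay cycles
# (information change cycle + RTD + SDN processing) fit within that window.
import numpy as np

cell_radius_m = np.array([50, 100, 250])   # 5G small-cell radii (abstract)
speed_kmh     = np.array([30, 100, 200])   # vehicle speeds (abstract)

rtd_ms      = 1.0    # round-trip delay, < 1 ms per the abstract
sdn_proc_ms = 5.0    # assumed SDN data-processing time
update_ms   = 20.0   # assumed information change cycle

for r in cell_radius_m:
    for v in speed_kmh:
        dwell_s = (2 * r) / (v / 3.6)      # worst case: chord = diameter
        latency_ms = rtd_ms + sdn_proc_ms + update_ms
        updates_per_cell = dwell_s * 1000 / latency_ms
        print(f"radius={r:>3} m speed={v:>3} km/h "
              f"dwell={dwell_s:6.2f} s updates/cell={updates_per_cell:7.1f}")
```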

Satellite Imagery and AI-based Disaster Monitoring and Establishing a Feasible Integrated Near Real-Time Disaster Monitoring System (위성영상-AI 기반 재난모니터링과 실현 가능한 준실시간 통합 재난모니터링 시스템)

  • KIM, Junwoo;KIM, Duk-jin
    • Journal of the Korean Association of Geographic Information Studies / v.23 no.3 / pp.236-251 / 2020
  • As remote sensing technologies evolve and more satellites are placed in orbit, the demand for using satellite data for disaster monitoring is rapidly increasing. Although natural and social disasters have been monitored using satellite data, the constraints on establishing an integrated satellite-based near real-time disaster monitoring system have not yet been identified, and thus a framework for establishing such a system remains to be presented. This research identifies those constraints by devising and testing a new conceptual framework of disaster monitoring, and then presents a feasible disaster monitoring system that relies mainly on acquirable satellite data. Implementing near real-time disaster monitoring by satellite remote sensing is constrained by technological and economic factors and, more significantly, by interactions between organisations and policy that hamper the timely acquisition of appropriate satellite data, as well as by institutional factors related to satellite data analyses. Such constraints could be eased by employing an integrated computing platform, such as Amazon Web Services (AWS), which enables obtaining, storing and analysing satellite data, and by developing a toolkit with which the satellite sensors required for monitoring specific types of disaster, and their orbits, can be analysed. It is anticipated that the findings of this research can be used as a meaningful reference when establishing a satellite-based near real-time disaster monitoring system in any country.
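The proposed sensor-and-orbit toolkit could start from something as simple as the following sketch, which filters a hypothetical satellite catalogue by disaster type. All catalogue entries, revisit times, and the sensor-to-disaster mapping are illustrative assumptions, not real mission data.

```python
# Toy toolkit: match disaster types to suitable (hypothetical) satellite
# sensors and rank candidates by assumed revisit interval.
from dataclasses import dataclass

@dataclass
class Satellite:
    name: str
    sensor: str            # "SAR" or "optical"
    revisit_hours: float   # average revisit interval (assumed)

CATALOGUE = [
    Satellite("SAT-A", "SAR", 12.0),
    Satellite("SAT-B", "optical", 24.0),
    Satellite("SAT-C", "SAR", 36.0),
]

# SAR sees through cloud and at night (floods, oil spills); optical suits
# clear daytime conditions (wildfire scars). A simplification for illustration.
SENSOR_FOR = {"flood": "SAR", "oil_spill": "SAR", "wildfire": "optical"}

def candidates(disaster: str):
    wanted = SENSOR_FOR[disaster]
    sats = [s for s in CATALOGUE if s.sensor == wanted]
    return sorted(sats, key=lambda s: s.revisit_hours)

for s in candidates("flood"):
    print(f"{s.name}: {s.sensor}, next data within ~{s.revisit_hours} h")
```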