Title/Summary/Keyword: Internet search

Search Result 1,637

Lesion Detection in Chest X-ray Images based on Coreset of Patch Feature (패치 특징 코어세트 기반의 흉부 X-Ray 영상에서의 병변 유무 감지)

  • Kim, Hyun-bin;Chun, Jun-Chul
    • Journal of Internet Computing and Services, v.23 no.3, pp.35-45, 2022
  • Even in recent years, treatment of emergency patients is often delayed due to a shortage of medical resources in marginalized areas. Research on automating the analysis of medical data is ongoing to address the inaccessibility of medical services and the shortage of medical personnel. Computer vision-based automation of medical inspection requires substantial cost for collecting and labeling training data. These problems stand out in tasks that classify rare lesions, or pathological features and pathogeneses that are hard to define clearly by visual inspection. Anomaly detection is attracting attention as a method that can significantly reduce data collection costs by adopting an unsupervised learning strategy. In this paper, building on existing anomaly detection techniques, we propose a method for detecting abnormal chest X-ray images as follows: (1) normalize the brightness range of medical images resampled to an optimal resolution; (2) select feature vectors with high representative power from the set of patch features extracted at an intermediate level from lesion-free images; (3) measure the difference from the selected lesion-free feature vectors using a nearest-neighbor search algorithm. The proposed system performs anomaly classification and localization simultaneously for each image. We measure and report its anomaly detection performance on PA-projection chest X-ray images under detailed conditions, demonstrating the effectiveness of anomaly detection for medical images with a classification AUROC of 0.705 on a random subset of the PadChest dataset. The proposed system can be used to improve the clinical diagnosis workflow of medical institutions and to support early diagnosis in medically underserved areas. (A code sketch of the coreset and nearest-neighbor steps follows below.)
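
Steps (2) and (3) resemble coreset-based memory banks for patch-feature anomaly detection. Below is a minimal sketch of that idea, assuming random vectors stand in for real CNN patch features and that a greedy k-center rule performs the selection; the feature dimension and coreset size are illustrative, not the authors' exact configuration.

    import numpy as np

    def greedy_coreset(features, m):
        """Select m representative patch features (greedy k-center rule)."""
        idx = [np.random.randint(len(features))]
        dist = np.linalg.norm(features - features[idx[0]], axis=1)
        for _ in range(m - 1):
            nxt = int(dist.argmax())          # farthest point from the coreset so far
            idx.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
        return features[idx]

    def anomaly_scores(test_patches, coreset):
        """Score each patch by distance to its nearest lesion-free feature."""
        d = np.linalg.norm(test_patches[:, None, :] - coreset[None, :, :], axis=2)
        return d.min(axis=1)                   # per-patch scores -> localization map

    # Stand-in for intermediate-level patch features of lesion-free images.
    normal = np.random.randn(2000, 128)
    coreset = greedy_coreset(normal, m=200)

    patches = np.random.randn(14 * 14, 128)    # patch grid of one test image
    scores = anomaly_scores(patches, coreset)
    print("image-level score:", scores.max())  # max patch score -> classification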

Implementation of Specific Target Detection and Tracking Technique using Re-identification Technology based on public Multi-CCTV (공공 다중CCTV 기반에서 재식별 기술을 활용한 특정대상 탐지 및 추적기법 구현)

  • Hwang, Joo-Sung;Nguyen, Thanh Hai;Kang, Soo-Kyung;Kim, Young-Kyu;Kim, Joo-Yong;Chung, Myoung-Sug;Lee, Jooyeoun
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.22 no.4, pp.49-57, 2022
  • The government is making great efforts to prevent incidents such as missing children by using public CCTVs. However, CCTV monitoring suffers from a shortage of operating manpower, weakened concentration during long monitoring sessions, and difficulty in tracking targets across cameras. In addition, applying real-time object search, re-identification, and tracking with deep learning algorithms leads to larger parameter counts and insufficient memory, reducing speed due to complex network computation. In this paper, we design the network to improve speed and save memory by applying YOLOv4, which can recognize objects in real time, together with batching and TensorRT optimization. Building on these algorithms, we develop and use OSNet re-ranking with k-reciprocal nearest neighbors for re-identification, and a Jaccard-distance dissimilarity measure for correlation, in a national CCTV safety identification and tracking solution. As a result, we propose a solution that can recognize, re-identify, and track objects in real time in a Korean public multi-CCTV environment through this combination of algorithms. (A sketch of the k-reciprocal re-ranking step follows below.)
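
Below is a minimal sketch of the k-reciprocal nearest-neighbor and Jaccard-distance idea mentioned above, assuming random vectors stand in for OSNet appearance embeddings; the gallery size and k are illustrative, not the paper's configuration.

    import numpy as np

    def k_reciprocal_sets(dist, k):
        """k-reciprocal neighbors: i and j must appear in each other's top-k."""
        topk = np.argsort(dist, axis=1)[:, :k]
        sets = []
        for i in range(len(dist)):
            sets.append({int(j) for j in topk[i] if i in topk[j]})
        return sets

    def jaccard_dissimilarity(sets):
        """Jaccard distance between neighbor sets, as used for re-ranking."""
        n = len(sets)
        d = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                union = len(sets[i] | sets[j])
                d[i, j] = 1.0 - len(sets[i] & sets[j]) / union if union else 1.0
        return d

    # Stand-in for OSNet appearance embeddings of detected pedestrians.
    emb = np.random.randn(6, 64)
    dist = np.linalg.norm(emb[:, None] - emb[None, :], axis=2)
    refined = jaccard_dissimilarity(k_reciprocal_sets(dist, k=3))
    print(refined.round(2))   # lower value -> more likely the same person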

A Study on Effective Adversarial Attack Creation for Robustness Improvement of AI Models (AI 모델의 Robustness 향상을 위한 효율적인 Adversarial Attack 생성 방안 연구)

  • Si-on Jeong;Tae-hyun Han;Seung-bum Lim;Tae-jin Lee
    • Journal of Internet Computing and Services, v.24 no.4, pp.25-36, 2023
  • Today, as AI (Artificial Intelligence) technology is introduced in various fields, including security, its development is accelerating. However, attack techniques that cleverly bypass malicious-behavior detection are developing as well. Adversarial attacks have emerged that induce misclassification and reduce reliability through fine perturbations of input values during the classification process of AI models. Future attacks will often not be new attacks created from scratch, but methods that evade detection systems by slightly modifying existing attacks, as adversarial attacks do, so developing a robust model that can respond to such malware variants is necessary. In this paper, we propose two efficient techniques for generating adversarial attacks to improve the robustness of AI models: an XAI-based attack using explainable-AI techniques, and a reference-based attack that searches the model's decision boundary. We then build a classification model on a malicious-code dataset and compare performance with PGD, one of the existing adversarial attacks. In terms of generation speed, the XAI-based and reference-based attacks take 0.35 and 0.47 seconds, respectively, versus about 20 minutes for PGD. The reference-based attack also achieves a generation success rate of 97.7%, higher than PGD's 75.5%. The proposed techniques therefore enable more efficient adversarial attacks and are expected to contribute to future research on building robust AI models. (A sketch of the PGD baseline follows below.)
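
For context on the baseline, here is a minimal sketch of a PGD attack against a toy logistic model with an analytic gradient; the model, epsilon, and step size are illustrative assumptions, and the paper's proposed XAI-based and reference-based attacks are not reproduced here.

    import numpy as np

    def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=40):
        """Projected Gradient Descent on a logistic model (the baseline attack)."""
        x_adv = x.copy()
        for _ in range(steps):
            z = x_adv @ w + b
            p = 1.0 / (1.0 + np.exp(-z))              # sigmoid probability
            grad = (p - y) * w                        # d(BCE loss)/d(x)
            x_adv = x_adv + alpha * np.sign(grad)     # ascend the loss
            x_adv = np.clip(x_adv, x - eps, x + eps)  # project into L-inf ball
        return x_adv

    # Toy linear "detector": weights, bias, and one sample labelled 1.
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=20), 0.0
    x, y = rng.normal(size=20), 1.0

    x_adv = pgd_attack(x, y, w, b)
    flipped = (x @ w + b > 0) != (x_adv @ w + b > 0)
    print("prediction flipped:", bool(flipped))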

Path Algorithm for Maximum Tax-Relief in Maximum Profit Tax Problem of Multinational Corporation (다국적기업 최대이익 세금트리 문제의 최대 세금경감 경로 알고리즘)

  • Sang-Un Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.23 no.4, pp.157-164, 2023
  • This paper suggests an O(n²) polynomial-time heuristic algorithm for the corporate tax structure optimization problem, which has been classified as NP-complete. The proposed algorithm constructs tax-tree levels in which the target holding company is located at the root node on level 1, and the tax code categories (Te) 1, 4, 3, 2 are located on levels 2, 3, 4, 5, respectively. To find the maximum tax-relief path from source (S) to target (T), we first connect, from the viewpoint of each node u, the arc with the minimum withholding tax rate min r_w(u, v) for transferring profit from node u to node v. This constructs a spanning tree from all source nodes to the target node and yields an initial feasible solution. Next, we find an alternate path with the minimum foreign tax rate min r_fi(u, v) from the viewpoint of v. Finally, we choose the path with the greater tax relief of the two. The proposed heuristic algorithm obtains better results than linear programming and Tabu search, a metaheuristic method. (A simplified sketch of the two-path comparison follows below.)
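
A simplified sketch of the two-path comparison described above, with invented nodes and rates; the real algorithm builds a full spanning tree over the tax-tree levels and a proper tax-relief measure, which this toy example does not attempt.

    # arcs from a source subsidiary S: (child u, parent v, withholding r_w, foreign r_fi)
    arcs_from_S = [("S", "A", 0.05, 0.15),
                   ("S", "B", 0.10, 0.08)]

    # Path (a): arc with the minimum withholding tax rate r_w, from u's viewpoint.
    by_withholding = min(arcs_from_S, key=lambda a: a[2])
    # Path (b): alternate arc with the minimum foreign tax rate r_fi, from v's viewpoint.
    by_foreign = min(arcs_from_S, key=lambda a: a[3])

    def total_rate(arc):
        return arc[2] + arc[3]   # simplified combined tax burden on the arc

    # Keep whichever of the two candidate paths relieves more tax.
    chosen = min([by_withholding, by_foreign], key=total_rate)
    print("route profit via:", chosen[:2], "combined rate:", total_rate(chosen))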

A Comparison of Image Classification System for Building Waste Data based on Deep Learning (딥러닝기반 건축폐기물 이미지 분류 시스템 비교)

  • Jae-Kyung Sung;Mincheol Yang;Kyungnam Moon;Yong-Guk Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.23 no.3, pp.199-206, 2023
  • This study utilizes deep learning algorithms to automatically classify construction waste into three categories: wood waste, plastic waste, and concrete waste. Two models were compared for their classification performance: VGG-16, a convolutional neural network image classification algorithm, and ViT (Vision Transformer), an NLP-derived model that treats an image as a sequence of patches. Image data for construction waste was collected by crawling images from search engines worldwide; after excluding images that were difficult to distinguish with the naked eye or duplicated, 3,000 images were obtained, 1,000 per category. In addition, to improve the accuracy of the models, data augmentation during training expanded the set to a total of 30,000 images. Despite the unstructured nature of the collected image data, VGG-16 achieved an accuracy of 91.5% and ViT an accuracy of 92.7%, suggesting practical applicability to real construction-waste data management. If object detection or semantic segmentation techniques are applied on top of this study, more precise classification will be possible even within a single image, resulting in more accurate waste classification. (A minimal model-setup sketch follows below.)
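
A minimal model-setup sketch, assuming torchvision's stock VGG-16 and ViT-B/16 as stand-ins for the two compared models; the augmentation choices mirror the idea of the abstract but are illustrative, not the authors' exact training recipe.

    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    NUM_CLASSES = 3   # wood, plastic, concrete waste

    augment = transforms.Compose([          # augmentation applied during training
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(15),
        transforms.ColorJitter(0.2, 0.2),
    ])

    vgg = models.vgg16(weights=None)
    vgg.classifier[6] = nn.Linear(4096, NUM_CLASSES)   # replace the 1000-way head

    vit = models.vit_b_16(weights=None)
    vit.heads.head = nn.Linear(vit.heads.head.in_features, NUM_CLASSES)

    x = augment(torch.rand(2, 3, 224, 224))  # stand-in for waste images
    print(vgg(x).shape, vit(x).shape)        # both: torch.Size([2, 3])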

The Role of Content Services Within a Firm's Internet Service Portfolio: Case Studies of Naver Webtoon and Google YouTube (기업의 인터넷 서비스 포트폴리오 내 콘텐츠 서비스의 역할: 네이버 웹툰과 구글 유튜브의 사례 연구)

  • Choi, Jiwon;Cho, Wooje;Jung, Yoonhyuk;Kwon, YoungOk
    • Journal of Intelligence and Information Systems, v.28 no.1, pp.1-28, 2022
  • In recent years, many Internet giants have begun providing their own content services, which attract online users by offering personalized services based on artificial intelligence technologies. This study investigates the role of two firms' content services within their online service networks. We examine the role of Naver Webtoon, which can be characterized as professional-generated content, within Naver's service portfolio, and that of Google YouTube, which can be characterized as user-generated content, within Google's service portfolio. Using survey data on viewers' use of the two services, we analyze a valued directed service network in which a node denotes an online service and a tie between two nodes denotes sequential use of the two services. We found that both Webtoon and YouTube show higher out-degree than in-degree centrality, implying that these content services are more likely to be starting services than arriving services within each firm's interactive network. The gap between out-degree and in-degree centrality is much smaller for YouTube than for Webtoon. The high centrality of YouTube, a user-generated content service, within the Google service network shows that its initial role of providing specific-content videos (e.g., entertainment) has expanded into a general search service for users. (A minimal network-centrality sketch follows below.)
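
A minimal sketch of the centrality comparison, assuming networkx and an invented valued directed network in which an edge u -> v counts sequential use of service u followed by service v; the services and weights are placeholders, not the survey data.

    import networkx as nx

    G = nx.DiGraph()
    G.add_weighted_edges_from([
        ("Webtoon", "Search", 30), ("Webtoon", "Mail", 12),
        ("Search", "Webtoon", 8),  ("YouTube", "Search", 20),
        ("Search", "YouTube", 17), ("Mail", "YouTube", 5),
    ])

    for s in ("Webtoon", "YouTube"):
        out_c = G.out_degree(s, weight="weight")  # "starting service" tendency
        in_c = G.in_degree(s, weight="weight")    # "arriving service" tendency
        print(f"{s}: out={out_c}, in={in_c}, gap={out_c - in_c}")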

A Study on Automatic Discovery and Summarization Method of Battlefield Situation Related Documents using Natural Language Processing and Collaborative Filtering (자연어 처리 및 협업 필터링 기반의 전장상황 관련 문서 자동탐색 및 요약 기법연구)

  • Kunyoung Kim;Jeongbin Lee;Mye Sohn
    • Journal of Internet Computing and Services, v.24 no.6, pp.127-135, 2023
  • With the development of information and communication technology, the amount of information produced and shared on the battlefield, and stored and managed in systems, has dramatically increased. This means the amount of information that can support commanders' situational awareness and decision making has increased, but it also hinders rapid decision making by increasing commanders' information overload. To overcome this limitation, this study proposes a method to automatically search, select, and summarize documents that can help commanders understand the battlefield situation reports they receive. First, named entities are discovered from a battlefield situation report using a named entity recognition method. Second, the documents related to each named entity are retrieved. Third, a language model and collaborative filtering are used to select documents: the language model computes the similarity between the received report and the retrieved documents, and collaborative filtering reflects the commander's document reading history. Finally, sentences containing each named entity are selected from the documents and sorted. The experiment was carried out using academic papers, whose characteristics are similar to military documents, and the validity of the proposed method was verified. (A minimal scoring sketch follows below.)
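
A minimal sketch of the selection step, assuming TF-IDF cosine similarity stands in for the paper's language model and a precomputed history vector stands in for the collaborative-filtering signal; the documents, history values, and weight alpha are invented.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    report = "enemy armor movement observed near the river crossing"
    docs = [
        "analysis of armor movement patterns near river crossings",
        "logistics schedule for supply convoys",
        "river crossing engineering assessment",
    ]
    history = np.array([1.0, 0.0, 0.5])   # reading-history signal per document

    tfidf = TfidfVectorizer().fit(docs + [report])
    sim = cosine_similarity(tfidf.transform([report]), tfidf.transform(docs))[0]

    alpha = 0.7                           # weight between similarity and history
    score = alpha * sim + (1 - alpha) * history
    for rank, i in enumerate(np.argsort(-score), 1):
        print(rank, round(float(score[i]), 3), docs[i])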

Federated learning-based client training acceleration method for personalized digital twins (개인화 디지털 트윈을 위한 연합학습 기반 클라이언트 훈련 가속 방식)

  • YoungHwan Jeong;Won-gi Choi;Hyoseon Kye;JeeHyeong Kim;Min-hwan Song;Sang-shin Lee
    • Journal of Internet Computing and Services, v.25 no.4, pp.23-37, 2024
  • A digital twin is an M&S (Modeling and Simulation) technology designed to solve or optimize real-world problems by replicating physical objects as virtual objects in the digital world and predicting future phenomena through simulation. Digital twins have been elaborately designed and utilized, based on collected data, to achieve specific purposes in large-scale environments such as cities and industrial facilities. To apply digital twin technology to everyday life and expand it into user-customized services, practical but sensitive issues such as personal information protection and personalization of simulations must be resolved. To solve this problem, this paper proposes FACTS, a federated learning-based accelerated client training method for personalized digital twins. The basic approach uses a cluster-driven federated learning training procedure to protect personal information while selecting a training model similar to the user and training it adaptively. In experiments under various statistically heterogeneous conditions, FACTS was superior to existing FL methods in training speed and resource efficiency. (A minimal cluster-selection sketch follows below.)
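
A minimal sketch of the cluster-driven selection idea, assuming k-means over client model updates and FedAvg restricted to the cluster nearest the target user; the update vectors, dimensions, and cluster count are invented placeholders, not the FACTS protocol itself.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    client_updates = rng.normal(size=(30, 16))   # one model delta per client
    target_user = rng.normal(size=16)            # the user to personalize for

    # Group clients by the similarity of their updates.
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(client_updates)
    closest = int(np.argmin(
        np.linalg.norm(km.cluster_centers_ - target_user, axis=1)))

    # FedAvg over the similar clients only -> personalized update.
    members = client_updates[km.labels_ == closest]
    personalized_delta = members.mean(axis=0)
    print(f"cluster {closest}: {len(members)} clients aggregated")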

Development of checklist questions to measure AI capabilities of elementary school students (초등학생의 AI 역량 측정을 위한 체크리스트 문항 개발)

  • Eun Chul Lee;YoungShin Pyun
    • Journal of Internet of Things and Convergence, v.10 no.3, pp.7-12, 2024
  • The development of artificial intelligence technology is changing the social structure and the educational environment, and the importance of AI competencies continues to increase. This study developed a checklist of questions to measure elementary school students' AI competencies. To achieve this, literature analysis and a Delphi survey for question development were used. For the literature analysis, two domestic studies, five international studies, and the Ministry of Education's curriculum report were collected through a search, and the collected material was analyzed to construct core competency measurement elements: understanding of artificial intelligence (6 elements), artificial intelligence thinking (4 elements), artificial intelligence ethics (4 elements), and artificial intelligence social-emotion (3 elements). Considering the knowledge, skills, and attitudes of these measurement elements, 19 questions were developed. The questions were verified through a first Delphi survey, and 7 were revised according to the reviewers' opinions; the validity of all 19 questions was then verified through a second Delphi survey. The checklist items developed in this study are scored by teacher evaluation based on performance and behavioral observation rather than by a self-report questionnaire, which raises the competency measurement results to a reliable level.

Development of checklist questions to measure AI core competencies of middle school students (중학생의 AI 핵심역량 측정을 위한 체크리스트 문항 개발)

  • Eun Chul Lee;JungSoo Han
    • Journal of Internet of Things and Convergence, v.10 no.3, pp.49-55, 2024
  • This study developed a checklist of questions to measure middle school students' AI core competencies. To achieve this, literature analysis and a Delphi survey for question development were used. For the literature analysis, two domestic studies, five international studies, and the Ministry of Education's curriculum report were collected through a search, and the collected material was analyzed to construct core competency measurement elements: understanding of artificial intelligence (5 elements), artificial intelligence thinking (5 elements), utilization of artificial intelligence (4 elements), artificial intelligence ethics (6 elements), and artificial intelligence social-emotion (6 elements). Considering the knowledge, skills, and attitudes of these measurement elements, 31 questions were developed. The questions were verified through a first Delphi survey, and 10 were revised according to the reviewers' opinions; the validity of all 31 questions was then verified through a second Delphi survey. The checklist items developed in this study are scored by teacher evaluation based on performance and behavioral observation rather than by a self-report questionnaire, which increases the reliability of the measurement results.