• Title/Summary/Keyword: automatic classification


Current Trend of EV (Electric Vehicle) Waste Battery Diagnosis and Dismantling Technologies and a Suggestion for Future R&D Strategy with Environmental Friendliness (전기차 폐배터리 진단/해체 기술 동향 및 향후 친환경적 개발 전략)

  • Byun, Chaeeun;Seo, Jihyun;Lee, Min kyoung;Keiko, Yamada;Lee, Sang-hun
    • Resources Recycling / v.31 no.4 / pp.3-11 / 2022
  • Owing to the increasing demand for electric vehicles (EVs), appropriate management of their waste batteries is urgently required, whether for scrapped vehicles or for addressing battery aging. With respect to technological developments, data-driven diagnosis of waste EV batteries and related management technologies have drawn increasing attention. Moreover, robot-based automatic dismantling technologies, while promising, still require industrial verification and linkage with future battery-related database systems. Among these, it is critical to develop and disseminate advanced battery diagnosis and assessment techniques to improve the efficiency, safety, and environmental performance of waste battery recirculation. Necessary steps include incorporating lithium-related chemical substances into the public pollutant release and transfer register (PRTR) database, as well as in-depth risk assessment of gas emissions from waste EV battery combustion and the associated fire safety. Further research and development are thus needed to optimize the lifecycle management of waste batteries across data-based diagnosis, classification, and disassembly processes as well as reuse, recycling, and final disposal. The underlying idea is that such data should contribute to clean design and manufacturing, reducing the environmental burden and facilitating reuse and recycling in future EV battery production. Such optimization should also consider future technological and market trends.

Classification of Diabetic Retinopathy using Mask R-CNN and Random Forest Method

  • Jung, Younghoon;Kim, Daewon
    • Journal of the Korea Society of Computer and Information / v.27 no.12 / pp.29-40 / 2022
  • In this paper, we studied a system that detects and analyzes the pathological features of diabetic retinopathy using Mask R-CNN, a deep learning technique, and a Random Forest classifier, and automatically diagnoses the disease. Diabetic retinopathy can be diagnosed from fundus images taken with special equipment, whose brightness, color tone, and contrast may vary depending on the device. An automatic diagnosis system using artificial intelligence was researched and developed to support ophthalmologists' medical judgments. The system detects pathological features such as microvascular perfusion and retinal hemorrhage using the Mask R-CNN technique, and diagnoses normal and abnormal conditions of the eye with a Random Forest classifier after pre-processing. To improve the detection performance of the Mask R-CNN algorithm, image augmentation was applied during training. Dice similarity coefficients and mean accuracy were used as evaluation indicators of detection accuracy. With the Faster R-CNN method as a control group, the Mask R-CNN method in this study achieved an average accuracy of 90% in terms of the Dice coefficient and 91% in terms of mean accuracy. When diabetic retinopathy was diagnosed by training a Random Forest classifier on the detected pathological symptoms, the accuracy was 99%.
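The Dice similarity coefficient used as the detection metric above compares a predicted segmentation mask against ground truth. A minimal sketch in Python (the function and example masks are illustrative, not the paper's code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

pred = np.array([[0, 1, 1], [0, 1, 0]])   # predicted lesion mask
truth = np.array([[0, 1, 0], [0, 1, 1]])  # ground-truth mask
print(round(dice_coefficient(pred, truth), 3))  # 0.667
```

A Dice score of 1.0 means perfect overlap; the reported ~90% average indicates the predicted lesion masks overlapped the annotations closely.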

Escape Route Prediction and Tracking System using Artificial Intelligence (인공지능을 활용한 도주경로 예측 및 추적 시스템)

  • Yang, Bum-suk;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.225-227 / 2022
  • In Seoul, about 75,000 CCTV cameras are installed across 25 district offices. Each district office has built a control center for CCTV monitoring and is compiling information such as people, vehicle types, license plate recognition, and color classification into big data through 24-hour intelligent AI image analysis. The Seoul Metropolitan Government has signed MOUs with the Ministry of Land, Infrastructure and Transport, the National Police Agency, the Fire Service, the Ministry of Justice, and military bases to enable rapid response to emergency situations. In other words, a safe, disaster-resilient smart city is being built by sharing the CCTV images of each district office. In this paper, a system is designed that uses artificial intelligence to extract the characteristics of vehicles and persons from CCTV images when an incident occurs, and on that basis predicts the escape route and enables continuous tracking. The AI automatically selects and displays the CCTV images along the predicted route. When the escape route of a person or vehicle related to an incident is expected to extend beyond the relevant jurisdiction, the system provides the image information and extracted information to the adjacent district office, expanding the smart city integration platform. This paper will contribute basic data to research on smart city integrated platforms.


GIS Information Generation for Electric Mobility Aids Based on Object Recognition Model (객체 인식 모델 기반 전동 이동 보조기용 GIS 정보 생성)

  • Je-Seung Woo;Sun-Gi Hong;Dong-Seok Park;Jun-Mo Park
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.200-208 / 2022
  • In this study, an automatic information collection system and a geographic information construction algorithm for the transportation-disadvantaged using electric mobility aids are implemented with an object recognition model. The system recognizes objects that a disabled person encounters while moving and acquires their coordinate information, providing an improved route selection map compared to existing geographic information for the disabled. Data collection consists of a total of four layers including the HW layer: image and location information are collected and transmitted to the server, where the data necessary for geographic information generation are recognized and extracted through a classification process. A driving experiment was conducted in an actual barrier-free zone to confirm how efficiently the algorithm collects real data and generates geographic information. The geographic information processing performance was 70.92 EA/s in the first round, 70.69 EA/s in the second, and 70.98 EA/s in the third, averaging 70.86 EA/s over the three experiments, and it took about 4 seconds for results to be reflected in the actual geographic information. The experimental results confirmed that pedestrians with limited mobility using electric mobility aids can travel safely with new geographic information provided faster than existing services.

Real-time Road Surface Recognition and Black Ice Prevention System for Asphalt Concrete Pavements using Image Analysis (실시간 영상이미지 분석을 통한 아스팔트 콘크리트 포장의 노면 상태 인식 및 블랙아이스 예방시스템)

  • Hoe-Pyeong Jeong;Homin Song;Young-Cheol Choi
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.28 no.1 / pp.82-89 / 2024
  • Black ice is very difficult to recognize and reduces road surface friction, causing automobile accidents. Because black ice is hard to detect, a system is needed that identifies it in real time and warns the driver. Various studies have been conducted to prevent black ice on road surfaces, but research on systems that identify black ice in real time and warn drivers is lacking. In this paper, a real-time image-based analysis system was developed to identify the condition of asphalt road surfaces, which are widely used in Korea. For this purpose, a dataset of asphalt road surface images was built, and deep learning was used to classify the road surface condition as dry, wet, black ice, or snow. In addition, temperature and humidity data measured on the actual road surface were used to finalize the road surface condition. When the road surface was determined to be black ice, salt spray equipment installed on the road was automatically activated. The asphalt concrete pavement surface condition recognition system and automatic black ice prevention system developed in this study are expected to ensure safe driving and reduce the incidence of traffic accidents.
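The abstract describes fusing the image classifier's output with road-surface temperature and humidity to finalize the state and trigger the salt sprayer. The paper does not specify the fusion rule, so the thresholds and function names below are purely hypothetical, shown only to illustrate the kind of sensor veto such a system could use:

```python
def final_surface_state(predicted: str, temperature_c: float, humidity_pct: float) -> str:
    """Hypothetical fusion rule: accept the image classifier's label, but veto
    an ice/snow call when the measured road temperature makes ice implausible."""
    if predicted in ("black_ice", "snow") and temperature_c > 3.0:
        # assumed thresholds for illustration only
        return "wet" if humidity_pct >= 60.0 else "dry"
    return predicted

def should_spray_salt(state: str) -> bool:
    """Trigger the roadside salt spray equipment only for a black-ice verdict."""
    return state == "black_ice"

print(final_surface_state("black_ice", -2.0, 85.0))  # black_ice
```

The design point is that image evidence alone can be ambiguous (wet asphalt and black ice look similar), so independent physical measurements gate the costly action.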

Salient Region Detection Algorithm for Music Video Browsing (뮤직비디오 브라우징을 위한 중요 구간 검출 알고리즘)

  • Kim, Hyoung-Gook;Shin, Dong
    • The Journal of the Acoustical Society of Korea / v.28 no.2 / pp.112-118 / 2009
  • This paper proposes a rapid salient-region detection algorithm for a music video browsing system that can be applied to mobile devices and digital video recorders (DVRs). The input music video is decomposed into music and video tracks. For the music track, the music highlight, including the musical chorus, is detected by structure analysis using energy-based peak position detection. Using emotional models generated by an SVM-AdaBoost learning algorithm, the music signal of each music video is automatically classified into one of the predefined emotional classes. For the video track, face scenes including the singer or actor/actress are detected using a boosted cascade of simple features. Finally, the salient region is generated by aligning the boundaries of the music highlight and the visual face scene. Users first select their favorite music videos on the mobile device or DVR using the music videos' emotion information, and can then quickly browse the 30-second salient region produced by the proposed algorithm. A mean opinion score (MOS) test on a database of 200 music videos was conducted to compare the detected salient region with a predefined manual part. The MOS test results show that the salient region detected by the proposed method performed much better than the predefined manual part without audiovisual processing.
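Energy-based peak detection for the music track can be sketched as finding the highest-energy window of a frame-energy envelope. The frame length, window size, and function names below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def energy_envelope(signal: np.ndarray, frame_len: int) -> np.ndarray:
    """Mean squared amplitude per non-overlapping frame."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return (frames ** 2).mean(axis=1)

def pick_highlight(envelope: np.ndarray, window: int) -> tuple[int, int]:
    """Start/end frame indices of the highest-energy run of `window` frames."""
    if len(envelope) <= window:
        return 0, len(envelope)
    sums = np.convolve(envelope, np.ones(window), mode="valid")
    start = int(np.argmax(sums))
    return start, start + window

# Quiet-loud-quiet test signal: the highlight should cover the loud middle part.
signal = np.concatenate([0.1 * np.ones(400),
                         np.sin(np.linspace(0, 50, 400)),
                         0.1 * np.ones(400)])
print(pick_highlight(energy_envelope(signal, 100), 4))
```

A chorus is typically the loudest, most repeated section, which is why a simple energy maximum is a workable first-pass highlight detector before the structure analysis refines the boundaries.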

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.77-92 / 2014
  • Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually provided with a specific category for the convenience of the users. In the past, the categorization was performed manually. However, in the case of manual categorization, not only can the accuracy of the categorization be not guaranteed but the categorization also requires a large amount of time and huge costs. Many studies have been conducted towards the automatic creation of categories to solve the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because the methods work by assuming that one document can be categorized into one category only. In order to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process involves training using a multi-categorized document set. These methods therefore cannot be applied to multi-categorization of most documents unless multi-categorized training sets are provided. To overcome the limitation of the requirement of a multi-categorized training set by traditional multi-categorization algorithms, we propose a new methodology that can extend a category of a single-categorized document to multiple categorizes by analyzing relationships among categories, topics, and documents. First, we attempt to find the relationship between documents and topics by using the result of topic analysis for single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate the matching scores for each document to multiple categories. 
The results imply that a document can be classified into a certain category if and only if the matching score is higher than the predefined threshold. For example, we can classify a certain document into three categories that have larger matching scores than the predefined threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized based on the theme, whereas the use of vulgar language and slang is smaller than other usual text document. We collected news articles from July 2012 to June 2013. The articles exhibit large variations in terms of the number of types of categories. This is because readers have different levels of interest in each category. Additionally, the result is also attributed to the differences in the frequency of the events in each category. In order to minimize the distortion of the result from the number of articles in different categories, we extracted 3,000 articles equally from each of the eight categories. Therefore, the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." By using the news articles that we collected, we calculated the document/category correspondence scores by utilizing topic/category and document/topics correspondence scores. The document/category correspondence score can be said to indicate the degree of correspondence of each document to a certain category. As a result, we could present two additional categories for each of the 23,089 documents. 
Precision, recall, and F-score were revealed to be 0.605, 0.629, and 0.617 respectively when only the top 1 predicted category was evaluated, whereas they were revealed to be 0.838, 0.290, and 0.431 when the top 1 - 3 predicted categories were considered. It was very interesting to find a large variation between the scores of the eight categories on precision, recall, and F-score.
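The thresholding step described above, which assigns additional categories whose matching scores exceed a predefined threshold, can be sketched as follows (the scores, threshold, and the two-extra-category cap are illustrative values, not the paper's):

```python
def expand_categories(scores: dict, original: str, threshold: float, max_extra: int = 2) -> list:
    """Keep the original category and add up to `max_extra` categories whose
    document/category matching score exceeds the threshold."""
    extras = [(cat, s) for cat, s in scores.items()
              if cat != original and s > threshold]
    extras.sort(key=lambda kv: kv[1], reverse=True)  # strongest matches first
    return [original] + [cat for cat, _ in extras[:max_extra]]

scores = {"Politics": 0.90, "Economy": 0.75, "Sports": 0.20, "Society": 0.60}
print(expand_categories(scores, "Politics", threshold=0.5))
# ['Politics', 'Economy', 'Society']
```

This mirrors the paper's output shape: each single-categorized document keeps its original label and gains up to two additional categories.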

Determination of Tumor Boundaries on CT Images Using Unsupervised Clustering Algorithm (비교사적 군집화 알고리즘을 이용한 전산화 단층영상의 병소부위 결정에 관한 연구)

  • Lee, Kyung-Hoo;Ji, Young-Hoon;Lee, Dong-Han;Yoo, Seoung-Yul;Cho, Chul-Koo;Kim, Mi-Sook;Yoo, Hyung-Jun;Kwon, Soo-Il;Chun, Jun-Chul
    • Journal of Radiation Protection and Research / v.26 no.2 / pp.59-66 / 2001
  • Determining the spatial location and shape of the tumor boundary is a key issue in fractionated stereotactic radiotherapy (FSRT). Consecutive transaxial plane images were obtained from a paraffin phantom and 4 patients with brain tumors using helical computed tomography (HCT). A K-means classification algorithm was applied to convert the raw pixel values in the CT images into classified average pixel values. The classified images consist of 5 regions: tumor region (TR), normal region (NR), combination region (CR), uncommitted region (UR), and artifact region (AR). The major concern was how to separate the normal region from the tumor region in the combination area. Relative average deviation analysis was applied to reduce the average pixel values of the 5 regions to the 2 regions of normal and tumor by defining the maximum point among the average deviation pixel values. We then drew the gross tumor volume (GTV) boundary by connecting the maximum points in the images with a semi-automatic contour method implemented in an IDL (Interactive Data Language) program. The error limit of the ROI boundary in the homogeneous phantom is estimated within ±1%. In the 4 patients, we confirmed that the tumor lesions delineated by the physician and those delineated automatically by the K-means classification algorithm and relative average deviation analysis were similar. These methods can turn an uncertain boundary between normal and tumor regions into a clear one, and will therefore be useful in CT image-based treatment planning, especially when CT images intermittently fail to visualize the tumor volume compared with MRI images.
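The K-means classification of CT pixel values can be sketched in one dimension as a generic K-means on intensities (this is a minimal NumPy sketch under assumed parameters, not the authors' IDL implementation, and it omits the 5-region labeling and relative average deviation steps):

```python
import numpy as np

def kmeans_1d(values: np.ndarray, k: int, iters: int = 50, seed: int = 0) -> np.ndarray:
    """Simple 1-D K-means on pixel intensities; returns a cluster label per pixel."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # assign each pixel to its nearest cluster center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # move each center to the mean intensity of its pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

# Two well-separated intensity populations should land in two clusters.
pixels = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
labels = kmeans_1d(pixels, k=2)
```

Replacing each pixel's raw value with its cluster's mean is what produces the "classified average pixel value" images the abstract describes.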


Methylation of P16 and hMLH1 in Gastric Carcinoma (위암에서 P16 및 hMLH1 유전자의 메틸화)

  • Sung, Gi-Young;Chun, Kyung-Hwa;Song, Gyo-Yeong;Kim, Jin-Jo;Chin, Hyung-Min;Kim, Wook;Park, Cho-Hyun;Park, Seung-Man;Lim, Keun-Woo;Park, Woo-Bae;Kim, Seung-Nam;Jeon, Hae-Myung
    • Journal of Gastric Cancer / v.5 no.4 s.20 / pp.228-237 / 2005
  • Purpose: We investigated the impact of the methylation states of the P16 and hMLH1 genes on the pathogenesis and genetic expression of stomach cancer, and their relationships with Helicobacter pylori infection and with other clinico-pathologic factors. Materials and Methods: To detect the protein expression and methylation status of the P16 and hMLH1 genes in 100 advanced gastric adenocarcinomas, we used immunohistochemical staining, methylation-specific PCR (MSP), and direct automatic genetic sequencing analysis. Results: Methylation of the P16 gene was observed in 19 of 100 cases (19%), and in 18 of those cases (94.7%) loss of protein expression was seen; thus loss of P16 expression was related to methylation of the P16 gene (kappa coefficient=0.317, P=0.0011). Methylation of the hMLH1 gene was observed in 27 cases (27%), and in 24 of those 27 cases (88.8%) loss of protein expression was seen, suggesting that loss of hMLH1 protein expression is related to methylation of the hMLH1 gene (kappa coefficient=0.675, P<0.0001). Methylation of the hMLH1 gene was also related to age, size of the mass, and Lauren's classification. Conclusion: We found that DNA methylation plays an important role in the inactivation of the P16 and hMLH1 genes, and that methylation of the hMLH1 gene is significantly related to age, size of the mass, and Lauren's classification.
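The kappa coefficients reported above measure agreement between methylation status and loss of protein expression beyond what chance would produce. A minimal sketch for two binary ratings (the four-sample data are illustrative, not the study's):

```python
def cohens_kappa(a: list, b: list) -> float:
    """Cohen's kappa for two binary ratings: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n               # marginal proportions of 1s
    p_e = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Illustrative data: 4 tumors scored for methylation (a) and expression loss (b).
print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.5
```

Kappa near 0 means chance-level agreement and 1 means perfect agreement, so the study's 0.675 for hMLH1 indicates substantially stronger coupling than the 0.317 for P16.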


A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.201-220 / 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection entails a form of document classification, so document classification techniques have been widely used in this field, while document summarization techniques have remained inconspicuous. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. The need to study the integration of document summarization technology in the domestic news data environment has therefore become evident. To examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization, then built a detection model based on the summarized news, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large performance difference, whereas for DT (Decision Tree) the full-text-based model performed somewhat better. In the case of LR (Logistic Regression), our model exhibited superior performance, although the difference from the full-text-based model was not statistically significant. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model confirms the possibility of performance improvement.
This study features an experimental application of extractive summarization in fake news detection research employing various machine-learning algorithms. Its limitations are the relatively small amount of data and the lack of comparison between different summarization technologies; an in-depth analysis applying various analytical techniques to a larger data volume would be helpful in the future.
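Extractive summarization, as used above, selects existing sentences rather than generating new text. A minimal frequency-based sketch of the idea (the scoring scheme and example text are illustrative, not the paper's method):

```python
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    """Score each sentence by summed word frequency across the document,
    then keep the top-n sentences in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )
    keep = sorted(ranked[:n_sentences])  # restore document order
    return " ".join(sentences[i] for i in keep)

text = "Cats sleep. Cats eat fish. Dogs bark loudly sometimes."
print(extractive_summary(text, n_sentences=2))
```

Because the output sentences come verbatim from the source article, wording that signals fakeness survives summarization, which is consistent with the study's finding that core information is preserved.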