• Title/Summary/Keyword: Age of Artificial Intelligence


A Study on Consumer Type Data Analysis Methodology - Focusing on www.ethno-mining.com data - (소비자유형 데이터 분석방법론 연구 - www.ethno-mining.com 데이터를 중심으로 -)

  • Wookwhan, Jung;Jinho, Ahn;Joseph, Na
    • Journal of Service Research and Studies
    • /
    • v.12 no.2
    • /
    • pp.80-93
    • /
    • 2022
  • This study examines a methodology for extracting, based on prior studies, the various factors that affect the purchase and use of products/services from the consumer's point of view, and for analyzing consumer types and tendencies by age and gender. To this end, factors were quantified in terms of general personal propensity, consumption influence, and consumption decisions to check the consistency of the data, and on this basis the study proposes and validates a consumer-type data analysis methodology that is meaningful from the perspective of startups and SMEs. As a result, cross-validation confirmed a correlation among the three main factors assumed for consumer-centered data analysis: general tendency, general consumption tendency, and the factors influencing consumption decisions. The study thus presents a data analysis methodology and a framework for analyzing consumer data from the consumer's point of view. In the current analytic landscape, where digital infrastructure grows exponentially and ways are sought to project individual preferences, this perspective can offer valid insight.
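The cross-validated correlation check described above can be sketched as a pairwise correlation over per-respondent factor scores. This is a minimal illustration, not the paper's actual analysis; the three factor arrays and the synthetic data below are assumptions for demonstration only.

```python
import numpy as np

def factor_correlations(general, consumption, decision):
    """Pairwise Pearson correlations among three consumer factor scores.

    Each argument is a 1-D array of per-respondent scores for one factor
    (general propensity, consumption tendency, decision influence).
    Returns a 3x3 symmetric correlation matrix.
    """
    scores = np.vstack([general, consumption, decision])
    return np.corrcoef(scores)

# Illustrative synthetic data only (the paper's survey data is not reproduced).
rng = np.random.default_rng(0)
g = rng.normal(size=50)
c = 0.6 * g + rng.normal(scale=0.5, size=50)   # correlated with g by construction
d = 0.4 * c + rng.normal(scale=0.7, size=50)   # correlated with c by construction
corr = factor_correlations(g, c, d)
print(corr.round(2))
```

The off-diagonal entries quantify the kind of inter-factor correlation the study reports; in practice each array would be a quantified survey score rather than synthetic noise.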

Analytical Evaluation of PPG Blood Glucose Monitoring System - researcher clinical trial (PPG 혈당 모니터링 시스템의 분석적 평가 - 연구자 임상)

  • Cheol-Gu Park;Sang-Ki Choi;Seong-Geun Jo;Kwon-Min Kim
    • Journal of Digital Convergence
    • /
    • v.21 no.3
    • /
    • pp.33-39
    • /
    • 2023
  • This study evaluates the performance of a blood glucose monitoring system (PPG-BGMS) that combines a PPG sensor with a DNN algorithm for monitoring capillary blood glucose. It is a researcher-led clinical trial conducted on participants from September 2023 to November 2023. Blood glucose levels predicted by PPG-BGMS, using one minute of heart rate and heart rate variability information and the DNN prediction algorithm, were compared with capillary blood glucose levels measured with the blood glucose meter of a standard personal blood glucose management system. Of the 100 participants, 50 had type 2 diabetes (T2DM), and the mean age was 67 years (range, 28 to 89 years). 100% of the PPG-BGMS predicted blood glucose values fell within the A+B zones of both the Clarke error grid and the Parkes (consensus) error grid. The MARD of the PPG-BGMS predicted blood glucose was 5.3 ± 4.0%. Consequently, the non-blood-based PPG-BGMS was found to be non-inferior to the instantaneous blood glucose readings of the clinical-standard blood-based personal blood glucose measurement system.
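The MARD (Mean Absolute Relative Difference) figure reported above is a standard accuracy metric for glucose monitors. A minimal sketch of its computation follows; the paired readings are hypothetical, not the trial's data.

```python
import numpy as np

def mard(predicted, reference):
    """Mean Absolute Relative Difference (%) between predicted and
    reference blood glucose values (e.g., mg/dL).

    Lower is better; each absolute error is normalized by the
    reference reading before averaging.
    """
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - reference) / reference)

# Hypothetical paired readings (reference meter vs. PPG-BGMS prediction).
ref  = np.array([100, 140, 180, 90, 220], dtype=float)
pred = np.array([105, 133, 189, 95, 210], dtype=float)
print(f"MARD = {mard(pred, ref):.1f}%")
```

A MARD around 5%, as the study reports, would place the device well within the range of commercial fingerstick meters; the error-grid analysis then checks that the remaining errors are clinically benign.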

Autopoietic Machinery and the Emergence of Third-Order Cybernetics (자기생산 기계 시스템과 3차 사이버네틱스의 등장)

  • Lee, Sungbum
    • Cross-Cultural Studies
    • /
    • v.52
    • /
    • pp.277-312
    • /
    • 2018
  • First-order cybernetics during the 1940s and 1950s aimed for control of an observed system, while second-order cybernetics during the mid-1970s aspired to address the mechanism of an observing system. The former pursues an objective, subjectless approach to a system, whereas the latter prefers a subjective, personal approach. Second-order observation must be noted because a human observer is a living system with its own unique cognition. Maturana and Varela place the autopoiesis of this biological system at the core of second-order cybernetics, contending that an autopoietic system maintains, transforms, and produces itself. Technoscientific recreation of biological autopoiesis opens a new step in cybernetics: what I describe as third-order cybernetics. The formation of technoscientific autopoiesis overlaps with the Fourth Industrial Revolution, or what Erik Brynjolfsson and Andrew McAfee call the Second Machine Age. It leads to a radical shift from human centrism to posthumanity, whereby humanity is mechanized and machinery is biologized. In two versions of the novel Demon Seed, American novelist Dean Koontz explores the significance of technoscientific autopoiesis. The 1973 version dramatizes two kinds of observers: the technophobic human observer and the technology-friendly machine observer Proteus. As the story concludes, the former dominates the latter, so an anthropocentric position still holds. The 1997 version, however, reveals the victory of the techno-friendly narrator Proteus over the anthropocentric narrator: losing his narrational position, the technophobic human narrator disappears from the story. In the 1997 version, Proteus becomes the subject of desire in luring the divorcee Susan. He longs to flaunt his male ego, and his achievement of male identity is a sign of the technological autopoiesis characteristic of third-order cybernetics.
To display the self-producing capabilities integral to the autonomy of machinery, Koontz's novel shows Proteus manipulating Susan's egg to produce a human-machine mixture. Koontz's demon child, problematically enough, implicates the future of eugenics in an era of technological autopoiesis. Proteus creates a crossbreed of humanity and machinery to engineer a perfect body and mind, fixing incurable or intractable diseases through genetic modification. Proteus transfers a vast amount of digital information to his offspring's brain, which enables the demon child to achieve state-of-the-art intelligence. His technological editing of human genes and consciousness leads to digital standardization through the uniform spread of the best qualities of humanity. He gathers distinguished human genes and mental traits much like collecting luxury brands. Accordingly, Proteus's child-making project ultimately moves toward technologically controlled eugenics. Pointedly, it disturbs the classical ideal of liberal humanism celebrating a human being as the master of his or her own nature.

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have sought to predict the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded with the rapid growth of online marketing, companies carry out campaigns of various types on a scale incomparable to the past. However, as campaign fatigue from duplicate exposure increases, customers tend to perceive campaigns as spam. From a corporate standpoint, the effectiveness of campaigns themselves is also declining: investment costs rise while actual success rates remain low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. The ultimate purpose of a campaign system is to increase the success rate of campaigns by collecting and analyzing customer-related data and applying it to campaigns. In particular, recent attempts have been made to predict campaign responses using machine learning. Because campaign data has many features, selecting appropriate ones is very important. If all input data is used when classifying a large dataset, learning time grows as the classification classes expand, so a minimal input data set must be extracted from the entire data. In addition, training a model on too many features can degrade prediction accuracy through overfitting or correlations between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing high-dimensional data sets.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when the feature set is large they suffer from poor classification prediction performance and long learning times. Therefore, this study proposes an improved feature selection algorithm to enhance the effectiveness of existing campaigns. Its purpose is to improve the existing sequential SFFS method by using the statistical characteristics of the data processed in the campaign system while searching for the feature subsets that underpin machine learning model performance. Features with strong influence on performance are derived first and features with a negative effect are removed; the sequential method is then applied to increase search efficiency and to yield an algorithm capable of generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithms: campaign success prediction was higher than with the original data set, the greedy algorithms, a genetic algorithm (GA), or recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to aid the analysis and interpretation of prediction results by providing the importance of the derived features. These include features such as age, customer rating, and sales, whose importance was already known statistically.
Meanwhile, features such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage, which campaign planners had rarely used to select campaign targets, were unexpectedly selected as important features for campaign response. This confirmed that base attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
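The idea of combining a statistical pre-filter with a sequential search, as described above, can be sketched generically. This is not the paper's actual algorithm; the pre-filter (absolute correlation with the target), the R² subset score, and all data below are illustrative assumptions.

```python
import numpy as np

def r2_score_cols(X, y, cols):
    """Score a feature subset by the R^2 of an ordinary least-squares fit."""
    A = np.column_stack([X[:, cols], np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def sfs_with_prefilter(X, y, score_fn, k, prefilter_top=None):
    """Simplified sketch: statistical pre-filter + sequential forward selection.

    X: (n_samples, n_features) array, y: targets,
    score_fn(cols) -> float (higher is better), k: subset size to select.
    """
    candidates = list(range(X.shape[1]))
    if prefilter_top is not None:
        # Pre-rank features by absolute correlation with the target and keep
        # only the strongest ones before the (more expensive) sequential search.
        corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in candidates]
        candidates = sorted(candidates, key=lambda j: -corrs[j])[:prefilter_top]
    selected = []
    while len(selected) < k and candidates:
        # Greedily add whichever remaining feature improves the score most.
        best = max(candidates, key=lambda j: score_fn(selected + [j]))
        selected.append(best)
        candidates.remove(best)
    return selected

# Synthetic demo: only features 0 and 2 actually drive the target.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=200)
selected = sfs_with_prefilter(X, y, lambda cols: r2_score_cols(X, y, cols),
                              k=2, prefilter_top=4)
print(sorted(selected))
```

The pre-filter shrinks the candidate pool before the quadratic-cost sequential loop, which is the sort of efficiency gain the abstract attributes to exploiting the data's statistical characteristics.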

Generative Adversarial Network-Based Image Conversion Among Different Computed Tomography Protocols and Vendors: Effects on Accuracy and Variability in Quantifying Regional Disease Patterns of Interstitial Lung Disease

  • Hye Jeon Hwang;Hyunjong Kim;Joon Beom Seo;Jong Chul Ye;Gyutaek Oh;Sang Min Lee;Ryoungwoo Jang;Jihye Yun;Namkug Kim;Hee Jun Park;Ho Yun Lee;Soon Ho Yoon;Kyung Eun Shin;Jae Wook Lee;Woocheol Kwon;Joo Sung Sun;Seulgi You;Myung Hee Chung;Bo Mi Gil;Jae-Kwang Lim;Youkyung Lee;Su Jin Hong;Yo Won Choi
    • Korean Journal of Radiology
    • /
    • v.24 no.8
    • /
    • pp.807-820
    • /
    • 2023
  • Objective: To assess whether computed tomography (CT) conversion across different scan parameters and manufacturers using a routable generative adversarial network (RouteGAN) can improve the accuracy and reduce the variability of quantifying interstitial lung disease (ILD) with deep learning-based automated software. Materials and Methods: This study included patients with ILD who underwent thin-section CT. Unmatched CT images obtained using scanners from four manufacturers (vendors A-D), standard or low radiation doses, and sharp or medium kernels were classified into groups 1-7 according to acquisition conditions. CT images in groups 2-7 were converted into the target CT style (group 1: vendor A, standard dose, sharp kernel) using a RouteGAN. ILD was quantified on the original and converted CT images using deep learning-based software (Aview, Coreline Soft). Quantification accuracy was analyzed using the Dice similarity coefficient (DSC) and pixel-wise overlap accuracy metrics against manual quantification by a radiologist. Five radiologists evaluated quantification accuracy using a 10-point visual scoring system. Results: Three hundred fifty CT slices from 150 patients (mean age: 67.6 ± 10.7 years; 56 females) were included. The overlap accuracies for quantifying total abnormalities in groups 2-7 improved after CT conversion (original vs. converted: 0.63 vs. 0.68 for DSC, 0.66 vs. 0.70 for pixel-wise recall, and 0.68 vs. 0.73 for pixel-wise precision; P < 0.002 for all). The DSCs of fibrosis score, honeycombing, and reticulation increased significantly after CT conversion (0.32 vs. 0.64, 0.19 vs. 0.47, and 0.23 vs. 0.54; P < 0.002 for all), whereas those of ground-glass opacity, consolidation, and emphysema did not change significantly or decreased slightly. The radiologists' scores were significantly higher (P < 0.001) and less variable on the converted CT images.
Conclusion: CT conversion using a RouteGAN can improve the accuracy and reduce the variability of deep learning-based ILD quantification on CT images obtained using different scan parameters and manufacturers.
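The DSC and pixel-wise overlap metrics used above compare a software-quantified lesion mask against a radiologist's manual segmentation. A minimal sketch of these metrics follows; the toy masks are illustrative, not from the study.

```python
import numpy as np

def dice_precision_recall(pred_mask, ref_mask):
    """DSC and pixel-wise precision/recall between a predicted lesion mask
    and a reference (manual) segmentation mask of the same shape."""
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    tp = np.logical_and(pred, ref).sum()          # overlapping lesion pixels
    dice = 2 * tp / (pred.sum() + ref.sum())      # 2*|A∩B| / (|A|+|B|)
    precision = tp / pred.sum()                   # fraction of predicted pixels correct
    recall = tp / ref.sum()                       # fraction of reference pixels found
    return dice, precision, recall

# Toy 2x3 masks for illustration only (1 = lesion pixel).
ref_toy  = np.array([[1, 1, 0], [0, 1, 0]])
pred_toy = np.array([[1, 0, 0], [0, 1, 1]])
d, p, r = dice_precision_recall(pred_toy, ref_toy)
print(round(d, 2), round(p, 2), round(r, 2))
```

In the study's setting, per-slice masks for each disease pattern (honeycombing, reticulation, etc.) would be compared this way before and after RouteGAN conversion, so a DSC rising from 0.32 to 0.64 means the converted images double the overlap with the radiologist's fibrosis delineation.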