• Title/Summary/Keyword: 지능 시스템 (Intelligent Systems)


Individual Thinking Style leads its Emotional Perception: Development of Web-style Design Evaluation Model and Recommendation Algorithm Depending on Consumer Regulatory Focus (사고가 시각을 바꾼다: 조절 초점에 따른 소비자 감성 기반 웹 스타일 평가 모형 및 추천 알고리즘 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.171-196 / 2018
  • With the development of the web, two-way communication and evaluation became possible and marketing paradigms shifted. To meet the needs of consumers, web design trends continuously respond to consumer feedback. As the web becomes more important, both academics and businesses are studying consumer emotions and satisfaction on the web. However, some consumer characteristics are not well considered: demographic characteristics such as age and sex have been studied extensively, but few studies consider psychological characteristics such as regulatory focus. In this study, we analyze the effect of web style on consumer emotion. Many studies analyze the relationship between the web and regulatory focus, but most concentrate on the purpose of web use, particularly motivation and information search, rather than on web style and design. The web communicates with users through visual elements, and because the human brain is influenced by all five senses, both design factors and emotional responses matter in the web environment. Therefore, in this study, we examine the relationship between web style and design on one side and consumer emotion and satisfaction on the other. Previous studies have considered the effects of web layout, structure, and color on emotions. In contrast, this study excludes those web components and analyzes the relationship between consumer satisfaction and the emotional indexes of web style only. To perform this analysis, we surveyed 204 consumers using 40 web-style themes, with each consumer evaluating four themes. The emotional adjectives rated by consumers comprised 18 contrast pairs, and the upper emotional indexes were extracted through factor analysis. The resulting indexes were 'softness,' 'modernity,' 'clearness,' and 'jam.' Hypotheses were established on the assumption that the emotional indexes have different effects on consumer satisfaction. Hypotheses 1, 2, and 3 were accepted and hypothesis 4 was rejected; although rejected, the corresponding effect on consumer satisfaction was negative rather than positive. This means that 'softness,' 'modernity,' and 'clearness' have a positive effect on consumer satisfaction: consumers prefer emotions that are soft, emotional, natural, rounded, dynamic, modern, elaborate, unique, bright, pure, and clear. 'Jam' has a negative effect on consumer satisfaction: consumers prefer designs that feel empty, plain, and simple. Regulatory focus produces differences in motivation and propensity across many domains. It matters for organizational behavior and decision making, and it affects not only political, cultural, and ethical judgments and behavior but also broader psychological problems. Emotional responses also differ by regulatory focus: a promotion focus responds more strongly to positive emotions, whereas a prevention focus responds more strongly to negative emotions. Web style is a type of service, and consumer satisfaction is affected not only by cognitive evaluation but also by emotion; this emotional response depends on whether the consumer expects benefit or harm. Therefore, it is necessary to confirm how consumers' emotional responses to web style differ according to regulatory focus, one of their psychological characteristics.
After moderated multiple regression (MMR) analysis, hypothesis 5.3 was accepted and hypothesis 5.4 was rejected, although hypothesis 5.4 was supported in the direction opposite to the one hypothesized. After validation, we confirmed the mechanism of emotional response according to regulatory focus tendency. Using these results, we developed the structure of a web-style recommendation system and recommendation methods based on regulatory focus. We classified consumers into three regulatory focus groups (promotion, grey, and prevention) and suggest a web-style recommendation method for each group. If this study is developed further, we expect that existing regulatory focus theory can be extended beyond motivation to cover emotional and behavioral responses according to regulatory focus tendency. Moreover, we believe it becomes possible to recommend the web styles that consumers most prefer according to their regulatory focus and emotional desires.
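The following sketch illustrates the kind of pipeline the abstract describes: factor analysis compressing the 18 adjective contrast pairs into four emotional indexes, followed by a regression of satisfaction on those indexes. It is a minimal illustration with placeholder data, not the authors' code; the variable names and parameter choices are assumptions.

```python
# Minimal sketch with placeholder data (not the authors' code): extract
# upper emotional indexes from adjective-pair ratings via factor analysis,
# then regress satisfaction on the resulting factor scores.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_obs = 816                               # 204 consumers x 4 themes each, per the abstract
ratings = rng.normal(size=(n_obs, 18))    # 18 emotional contrast pairs (placeholder)
satisfaction = rng.normal(size=n_obs)     # placeholder satisfaction scores

# Reduce the 18 pairs to 4 upper indexes; the paper names them
# 'softness', 'modernity', 'clearness', and 'jam'.
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
indexes = fa.fit_transform(ratings)

# The coefficient signs correspond to the paper's hypothesis tests
# (positive for softness/modernity/clearness, negative for jam).
reg = LinearRegression().fit(indexes, satisfaction)
print(dict(zip(["softness", "modernity", "clearness", "jam"], reg.coef_)))
```

The moderation (MMR) step would extend this regression with interaction terms between the emotional indexes and regulatory-focus group dummies.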

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults affect not only stakeholders such as managers, employees, creditors, and investors of the bankrupt companies, but also ripple through the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises,' went bankrupt. Even afterwards, the analysis of past corporate defaults focused on specific variables, and when the government restructured firms immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to reflect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, where everything collapses in a single moment. The key variables behind corporate defaults vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed. Grice's (2001) study likewise found shifts in the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that stays consistent over time, we first train a deep learning time series model using the pre-crisis data (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data covering the financial crisis period (2007-2008). As a result, we construct a model that shows a pattern similar to the training results and excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model trained over the nine years is evaluated and compared using the test data (2009), and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction on all three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and Lasso regression are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data suffer from nonlinear variables, multicollinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm builds corporate default prediction models much faster than regression analysis and is more effective in predictive power. Through the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, so we hope it will serve as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
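As a rough illustration of the modeling pipeline described above, the sketch below combines Lasso-based variable selection with an LSTM over yearly financial-ratio sequences. It is a hedged example with synthetic data and assumed dimensions, not the authors' implementation.

```python
# Minimal sketch (not the authors' model): Lasso picks a bundle of
# financial ratios, then an LSTM classifies yearly ratio sequences.
import numpy as np
from sklearn.linear_model import LassoCV
import tensorflow as tf

rng = np.random.default_rng(0)
n_firms, n_years, n_ratios = 1000, 7, 30           # placeholder dimensions
X = rng.normal(size=(n_firms, n_years, n_ratios))  # yearly financial ratios
# Placeholder labels correlated with ratio 0 so Lasso has a signal to find.
y = (X[:, -1, 0] + 0.5 * rng.normal(size=n_firms) > 0).astype(int)

# Variable selection: fit Lasso on the last-year ratios and keep the
# variables with nonzero coefficients (one of the three variable bundles).
lasso = LassoCV(cv=5).fit(X[:, -1, :], y)
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:            # fall back to all ratios if Lasso drops everything
    keep = np.arange(n_ratios)
X_sel = X[:, :, keep]

# LSTM over the selected time series of ratios; 1 = predicted default.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_years, keep.size)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.fit(X_sel, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```

In the study's setup, the fit/validation/test split would follow the time axis (2000-2006 / 2007-2008 / 2009) rather than a random split.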

Analysis of shopping website visit types and shopping pattern (쇼핑 웹사이트 탐색 유형과 방문 패턴 분석)

  • Choi, Kyungbin;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.85-107 / 2019
  • Online consumers browse products belonging to a particular product line or brand with intent to purchase, or simply navigate widely without purchasing. Research on the behavior and purchases of online consumers has progressed steadily, and services and applications based on consumer behavior data have been developed in practice. In recent years, consumer customization strategies and recommendation systems have been deployed thanks to big data technology, and attempts are being made to optimize users' shopping experience. Even so, only a small proportion of online consumers who visit a website actually move on to the purchase stage. This is because online consumers do not visit a website only to purchase products; they use and browse websites differently according to their shopping motives and purposes. Therefore, analyzing the various types of visits, not just purchase visits, is important for understanding the behavior of online consumers. In this study, we performed a session-level clustering analysis of an e-commerce company's clickstream data to explain the diversity and complexity of online consumers' search behavior and to typify it. For the analysis, we converted more than 8 million page-level data points into visit-level sessions, yielding over 500,000 website visit sessions. For each visit session, 12 characteristics such as page views, duration, search diversity, and page-type concentration were extracted for clustering. Considering the size of the data set, we used the Mini-Batch K-means algorithm, which offers advantages in learning speed and efficiency while maintaining clustering performance similar to that of K-means. The optimal number of clusters was found to be four, and the differences in session characteristics and purchase rates were identified for each cluster. Online consumers typically visit a website several times, learn about the products, and then decide to purchase. To analyze this purchasing process over multiple visits, we constructed consumers' visit sequence data based on the navigation patterns derived from the clustering analysis. Each visit sequence covers the series of visits leading up to one purchase, and the items constituting a sequence are the cluster labels derived above. We separately built sequence data for consumers who made purchases and for consumers who only explored products without purchasing during the same period. Sequential pattern mining was then applied to extract frequent patterns from each data set, with minimum support set to 10%; each frequent pattern is a sequence of cluster labels. While some patterns are common to both data sets, other frequent patterns appear in only one of them. Comparative analysis of the extracted frequent patterns showed that consumers who made purchases repeatedly searched for a specific product before deciding to buy it.
The implication of this study is that we analyzed the search types of online consumers using large-scale clickstream data and examined their patterns to explain purchasing behavior from a data-driven point of view. Most studies on online consumer typologies have focused on the characteristics of each type and on the key factors distinguishing the types. In this study, we typified the behavior of online consumers and further analyzed the order in which the types follow one another to form sequential search patterns. In addition, online retailers can try to improve their purchase conversion through marketing strategies and recommendations tailored to the various visit types, and can evaluate the effect of a strategy through changes in consumers' visit patterns.
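A minimal sketch of the session-clustering step, assuming a precomputed 12-feature session matrix (placeholder data here, not the company's clickstream): Mini-Batch K-means with four clusters, whose labels then form the visit sequences mined for frequent patterns.

```python
# Minimal sketch (not the authors' code): cluster visit sessions on 12
# session-level features with Mini-Batch K-means, then use the cluster
# labels as items in visit sequences for sequential pattern mining.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_sessions, n_features = 500_000, 12     # e.g., page views, duration, search diversity...
sessions = rng.normal(size=(n_sessions, n_features))  # placeholder feature matrix

X = StandardScaler().fit_transform(sessions)
km = MiniBatchKMeans(n_clusters=4, batch_size=4096, random_state=0)  # 4 clusters per the abstract
labels = km.fit_predict(X)

# Each consumer's visit history then becomes a sequence of cluster labels,
# e.g. [2, 2, 0, 3], which feeds a sequential pattern miner (min support 10%).
```

Mini-Batch K-means updates centroids from small random batches instead of the full data set, which is what makes clustering half a million sessions tractable.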

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.23-46 / 2021
  • Collaborative filtering, often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarities for new customers or products, because similarities are calculated from direct connections and common features among customers. For this reason, hybrid techniques were designed that also use content-based filtering. Separately, efforts have been made to solve these problems by applying the structural characteristics of social networks, calculating similarities indirectly through similar customers placed between two customers. That is, a customer network is built from purchase data, and the similarity between two customers is calculated from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether a target customer will accept a recommendation. Network centrality metrics can be used to calculate these similarities, and the different centrality metrics are important in that they may affect recommendation performance differently. Furthermore, in this study, the effect of these centrality metrics on recommendation performance may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but also across the entire customer or product base. By treating a customer's purchase of an item as a link created between the customer and the item on the network, predicting user acceptance of a recommendation becomes a prediction of whether a new link will be created. As classification models fit this binary problem of whether a link forms or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models are selected for the research. The performance evaluation uses order data collected from an online shopping mall over four years and two months. The first three years and eight months of data were organized into the social network used in the experiment, and the records of the following four months were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the metrics differ for each algorithm at a meaningful level. This work analyzes four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks in the middle across models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality is characterized by distinct differences in performance across models.
It ranks first, with numerically high performance, in logistic regression, artificial neural network, and decision tree, but records very low rankings with low performance in the support vector machine and k-nearest neighbors. As the experimental results reveal, network centrality metrics over the subnetwork connecting two nodes can effectively predict the connectivity between those nodes in a social network, and each metric performs differently depending on the classification model. This implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance in certain models.
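The sketch below illustrates the general approach under stated assumptions: the four centralities are computed with networkx on a stand-in graph, summed per candidate link as features, and fed to a logistic regression (one of the five classifiers tested) that predicts link formation. It is not the paper's code, and the feature construction is a simplification.

```python
# Minimal sketch (not the paper's code): centrality-based features for
# link prediction, framed as binary classification of link existence.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()   # stand-in for the customer-item purchase network
cent = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G),
}

def link_features(u, v):
    # Feature vector for a candidate link (u, v): the two nodes' centralities
    # combined per metric (a simplifying assumption for this sketch).
    return [cent[m][u] + cent[m][v] for m in cent]

pos = list(G.edges())                          # existing links (positive class)
neg = list(nx.non_edges(G))[: len(pos)]        # sampled non-links (negative class)
X = np.array([link_features(u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))
clf = LogisticRegression().fit(X, y)           # predicts whether a new link forms
```

Swapping LogisticRegression for a decision tree, KNN, neural network, or SVM reproduces the study's comparison axis: the same centrality features ranked differently across classifiers.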

Exploring the 4th Industrial Revolution Technology from the Landscape Industry Perspective (조경산업 관점에서 4차 산업혁명 기술의 탐색)

  • Choi, Ja-Ho;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture / v.47 no.2 / pp.59-75 / 2019
  • This study explores 4th Industrial Revolution technology from the perspective of the landscape industry, to provide the basic data needed to increase its virtuous-circle value. The characteristics of the 4th Industrial Revolution, the landscape industry, and urban regeneration were reviewed, and a methodology, including a technical classification system suitable for systematic research, was established as the framework. First, 4th Industrial Revolution technologies based on digital data that could increase the virtuous-circle value of the landscape industry were selected. At the 'Element Technology Level', 'Core Technologies' such as the Internet of Things, Cloud Computing, Big Data, Artificial Intelligence, and Robots, together with 'Peripheral Technologies' such as Virtual and Augmented Reality, Drones, 3D/4D Printing, and 3D Scanning, were highlighted as 4th Industrial Revolution technologies. It was shown that applying them at the 'Trend Level', particularly in the landscape industry, can increase the virtuous-circle value. The 'System Level' was analyzed as general-purpose technology: on a platform basis, the element technologies (computers and smart devices) are systematically interconnected and illuminated by 4th Industrial Revolution technology based on digital data. Applying the 'Trend Level' specifically to the landscape industry was shown to be effective for increasing virtuous-circle values, and the synergies and implementations proposed at the trend level can be realized by applying the element technology level. Smart gardens, smart parks, and the like were analyzed as the level to pursue. It was judged that Smart City, Smart Home, Smart Farm and Precision Agriculture, Smart Tourism, and Smart Health Care could be highly linked through collaboration among technologies in adjacent areas at the Trend Level. Additionally, various ways of using the technologies applied at the Trend Level were highlighted across urban regeneration, the creation of public service spaces, maintenance, and public services. In other words, with the realization of ubiquitous computing, the Hyper-Connectivity, Hyper-Reality, Hyper-Intelligence, and Hyper-Convergence that reflect the basic characteristics of digital technology can be achieved in the landscape industry. The analysis showed that the landscaping industry can effectively accommodate and coordinate the needs of new roles, education, and consulting, as well as existing tasks, when participating in urban regeneration projects. In particular, the overall landscaping field proved effective in increasing the virtuous-circle value when it systematizes the related technologies at the trend level, using maintenance as a strategic bridgehead, because the industrial structure is effective at distributing the data and information produced through various channels. Follow-up research, such as demonstrating the fusion of 4th Industrial Revolution technologies based on digital data in the creation, maintenance, and servicing of actual landscape spaces, is necessary.

Improvement of Mid-Wave Infrared Image Visibility Using Edge Information of KOMPSAT-3A Panchromatic Image (KOMPSAT-3A 전정색 영상의 윤곽 정보를 이용한 중적외선 영상 시인성 개선)

  • Jinmin Lee;Taeheon Kim;Hanul Kim;Hongtak Lee;Youkyung Han
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1283-1297 / 2023
  • Mid-wave infrared (MWIR) imagery, with its ability to capture the temperature of land cover and objects, serves as a crucial data source in various fields including environmental monitoring and defense. The KOMPSAT-3A satellite acquires MWIR imagery at high spatial resolution compared to other satellites. However, the limited spatial resolution of MWIR imagery relative to electro-optical (EO) imagery constrains the optimal utilization of KOMPSAT-3A data. This study aims to create a highly visible MWIR fusion image by leveraging the edge information of the KOMPSAT-3A panchromatic (PAN) image. Preprocessing is implemented to mitigate the relative geometric errors between the PAN and MWIR images. Subsequently, we employ a pre-trained pixel difference network (PiDiNet), a deep learning-based edge extraction technique, to extract the boundaries of objects from the preprocessed PAN images. The MWIR fusion imagery is then generated by emphasizing the brightness values corresponding to the edge information of the PAN image. To evaluate the proposed method, MWIR fusion images were generated for three different sites. As a result, the boundaries of terrain and objects in the MWIR fusion images were emphasized, providing detailed thermal information for the areas of interest. In particular, the MWIR fusion images provided thermal information on objects such as airplanes and ships that are hard to detect in the original MWIR images. This study demonstrates that the proposed method can generate a single image combining the visible detail of an EO image with the thermal information of an MWIR image, which contributes to expanding the use of MWIR imagery.
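A minimal sketch of the fusion idea, with two important substitutions: random arrays stand in for the co-registered PAN/MWIR images, and OpenCV's Canny detector stands in for the pre-trained PiDiNet edge extractor used in the study. The emphasis weight is a hypothetical parameter.

```python
# Minimal sketch of edge-emphasized fusion (placeholder arrays, and Canny
# as a stand-in for PiDiNet, only to keep the sketch self-contained).
import cv2
import numpy as np

pan = (np.random.rand(512, 512) * 255).astype(np.uint8)   # preprocessed PAN image
mwir = (np.random.rand(512, 512) * 255).astype(np.uint8)  # co-registered MWIR image

# Extract object boundaries from the PAN image (the paper uses PiDiNet here).
edges = cv2.Canny(pan, 100, 200)

# Emphasize MWIR brightness values along the PAN edge pixels.
alpha = 0.5                                  # hypothetical emphasis weight
fused = mwir.astype(np.float32)
fused[edges > 0] = np.clip(fused[edges > 0] + alpha * 255, 0, 255)
fused = fused.astype(np.uint8)
```

The key design point is that the geometric detail comes entirely from the higher-resolution PAN band, while the pixel values being emphasized remain the MWIR thermal measurements.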

Multi-classification of Osteoporosis Grading Stages Using Abdominal Computed Tomography with Clinical Variables : Application of Deep Learning with a Convolutional Neural Network (멀티 모달리티 데이터 활용을 통한 골다공증 단계 다중 분류 시스템 개발: 합성곱 신경망 기반의 딥러닝 적용)

  • Tae Jun Ha;Hee Sang Kim;Seong Uk Kang;DooHee Lee;Woo Jin Kim;Ki Won Moon;Hyun-Soo Choi;Jeong Hyun Kim;Yoon Kim;So Hyeon Bak;Sang Won Park
    • Journal of the Korean Society of Radiology / v.18 no.3 / pp.187-201 / 2024
  • Osteoporosis is a major global health issue that often remains undetected until a fracture occurs. To facilitate early detection, deep learning (DL) models were developed to classify osteoporosis using abdominal computed tomography (CT) scans. This study used retrospectively collected data from 3,012 contrast-enhanced abdominal CT scans. The DL models were constructed using image data, demographic/clinical information, and multi-modality data, respectively. Patients were categorized into normal, osteopenia, and osteoporosis groups based on T-scores obtained from dual-energy X-ray absorptiometry. The models showed high accuracy and effectiveness, with the combined-data model performing best, achieving an area under the receiver operating characteristic curve of 0.94 and an accuracy of 0.80. The image-based model also performed well, while the demographic-data model had lower accuracy and effectiveness. In addition, the DL model was interpreted with gradient-weighted class activation mapping (Grad-CAM) to highlight clinically relevant features in the images, revealing the femoral neck as a common site of fractures. The study shows that DL can accurately identify osteoporosis stages from clinical data, indicating the potential of abdominal CT scans for early osteoporosis detection and for reducing fracture risk through prompt treatment.
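As a hedged illustration of the multi-modality design described above (not the study's architecture; input shapes, layer sizes, and the number of clinical variables are assumptions), a CNN branch for CT images and a dense branch for clinical variables can be concatenated for the three-class output:

```python
# Minimal sketch (assumed shapes, not the study's model): image and
# clinical branches fused for 3-class osteoporosis grading.
import tensorflow as tf

image_in = tf.keras.Input(shape=(256, 256, 1))           # abdominal CT slice
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(image_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

clin_in = tf.keras.Input(shape=(8,))                     # demographic/clinical variables
c = tf.keras.layers.Dense(16, activation="relu")(clin_in)

merged = tf.keras.layers.Concatenate()([x, c])           # multi-modality fusion
out = tf.keras.layers.Dense(3, activation="softmax")(merged)  # normal/osteopenia/osteoporosis

model = tf.keras.Model([image_in, clin_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Grad-CAM interpretation, as used in the study, would then be computed on the convolutional branch to localize the image regions driving each class prediction.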

The Churchlands' Theory of Representation and the Semantics (처칠랜드의 표상이론과 의미론적 유사성)

  • Park, Je-Youn
    • Korean Journal of Cognitive Science / v.23 no.2 / pp.133-164 / 2012
  • Paul Churchland (1989) proposes a theory of representation drawing on results from cognitive biology and connectionist AI research. According to the theory, our representations of the diverse phenomena in the world can be represented as positions in state spaces realized by the activations of neurons or assemblies of neurons. He argues that connectionist neural networks can possess the semantic category systems needed to recognize the world. Fodor and Lepore (1996), however, do not view this perspective favorably. In their view, the Churchlands' theory of representation rests on Quine's holism, and network semantics cannot explain how criteria of semantic content similarity are possible; consequently, neither can the theory. This paper aims to determine which perspective is better founded: that of the theory, or that of Fodor and Lepore. On my understanding of the state-space theory of representation, artificial networks can coordinate criteria of content similarity through their learning algorithms. On this basis, I argue that Fodor and Lepore's objections do not penetrate the Churchlands' theory. From the theory's viewpoint, we can see how future artificial systems could possess conceptual systems that recognize the world, and thus what cognitive scientists should focus on.


Research Trends on Estimation of Soil Moisture and Hydrological Components Using Synthetic Aperture Radar (SAR를 이용한 토양수분 및 수문인자 산출 연구동향)

  • CHUNG, Jee-Hun;LEE, Yong-Gwan;KIM, Seong-Joon
    • Journal of the Korean Association of Geographic Information Studies / v.23 no.3 / pp.26-67 / 2020
  • Synthetic Aperture Radar (SAR) can image the Earth's surface regardless of weather conditions, day or night. Because it can be used to retrieve hydrological factors such as soil moisture and groundwater, its importance is gradually increasing in the field of water resources. SAR began to be mounted on satellites in the 1970s; 15 or more SAR satellites had been launched as of 2020, and around 10 more are due to launch within the next 5 years. Recently, various SAR technologies, such as wider observation swaths, higher resolution, multiple polarizations and frequencies, and diversified observation angles, have been developed and utilized. This paper briefly reviews the history of SAR systems as well as studies estimating soil moisture and other hydrological components. Hydrological components that can currently be estimated using SAR satellites include soil moisture, subsurface groundwater discharge, precipitation, snow cover area, leaf area index (LAI), and normalized difference vegetation index (NDVI). Among them, soil moisture is being studied in 17 countries across South Korea, North America, Europe, and India, using the physically based Integral Equation Model (IEM) and artificial-intelligence-based Artificial Neural Networks (ANN). RADARSAT-1, ENVISAT ASAR, and ERS-1/2 were the most widely used satellites, but their operations have ended, and use of RADARSAT-2, Sentinel-1, and SMAP, which are currently in operation, is gradually increasing. Since Korea is developing a medium-sized satellite for water resources and water disasters, equipped with a C-band SAR and targeted for launch in 2025, research on estimating various hydrological components with SAR is expected to be active.

A 14b 200KS/s 0.87mm² 1.2mW 0.18um CMOS Algorithmic A/D Converter (14b 200KS/s 0.87mm² 1.2mW 0.18um CMOS 알고리즈믹 A/D 변환기)

  • Park, Yong-Hyun;Lee, Kyung-Hoon;Choi, Hee-Cheol;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD / v.43 no.12 s.354 / pp.65-73 / 2006
  • This work presents a 14b 200KS/s 0.87mm² 1.2mW 0.18um CMOS algorithmic A/D converter (ADC) for intelligent sensor control systems and battery-powered applications that simultaneously require high resolution, low power, and small area. The proposed algorithmic ADC, which does not use a conventional sample-and-hold amplifier, employs efficient switched-bias power-reduction techniques in the analog circuits, clock-selective sampling-capacitor switching in the multiplying D/A converter, and ultra-low-power on-chip current and voltage references to optimize sampling rate, resolution, power consumption, and chip area. The prototype ADC, implemented in a 0.18um 1P6M CMOS process, shows a measured DNL and INL of at most 0.98 LSB and 15.72 LSB, respectively. The ADC demonstrates a maximum SNDR and SFDR of 54dB and 69dB, respectively, and consumes 1.2mW at 200KS/s and 1.8V. The occupied active die area is 0.87mm².