• Title/Summary/Keyword: Deep drawing

Authigenic Neodymium Isotope Record of Past Ocean Circulation (과거 해수 순환을 지시하는 해수기원 네오디뮴 동위원소 비 기록)

  • Huh, Youngsook;Jang, Kwangchul
    • The Journal of the Petrological Society of Korea / v.23 no.3 / pp.249-259 / 2014
  • Proxies for paleo-circulation are drawing much interest with the recognition that ocean circulation plays an important part in the redistribution of heat, and in climate change, on orbital and millennial timescales. In this review, we introduce how neodymium isotope ratios of the authigenic fraction of marine sediments can be used as a proxy for ocean circulation, together with the analytical methods and two case studies. The first case study shows how the North Atlantic Deep Water (NADW) has varied over glacial-interglacial and stadial-interstadial periods. The second case study shows how the freshwater budget and water circulation within the Arctic Ocean can be reconstructed for the last glacial period.
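
For background, authigenic Nd isotope compositions are conventionally reported in the standard epsilon notation relative to CHUR (the chondritic uniform reservoir); this is the general convention, not a formula specific to the paper above:

```latex
\varepsilon_{\mathrm{Nd}} = \left( \frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}}}{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}} - 1 \right) \times 10^{4}
```

The CHUR ratio is commonly taken as 0.512638, so one epsilon unit corresponds to a fractional deviation of 10⁻⁴ from that reference.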

Framework for Reconstructing 2D Data Imported from Mobile Devices into 3D Models

  • Shin, WooSung;Min, JaeEun;Han, WooRi;Kim, YoungSeop
    • Journal of the Semiconductor & Display Technology / v.20 no.4 / pp.6-9 / 2021
  • The 3D industry is drawing attention for its applications in various markets, including architecture, media, VR/AR, the metaverse, and immersive broadcasting. The feature of the framework we introduce here is that it makes 3D models easier to create and modify than conventional approaches. Existing methods for generating 3D models mainly obtain measurements using specialized equipment such as RGB-D cameras and Lidar cameras, from which 3D models are constructed and used. This requires purchasing equipment, and the generated 3D model can only be checked on a computer. Our framework, by contrast, allows users to collect data more easily and cheaply with cell phone cameras instead of specialized equipment; the 2D data is used for 3D modeling on the server, and the result is output to the cell phone application screen. This gives users a more accessible environment. In addition, during the 3D modeling process, objects are classified through deep learning without user intervention, and a mesh and texture suited to the object are applied to obtain a lifelike 3D model. Users can also modify the mesh and texture through requests, allowing them to obtain more refined 3D models.
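
The paper gives no implementation details of its client-server pipeline; as a rough sketch of the flow it describes (the phone uploads 2D photos, the server classifies the object and reconstructs a model), here is a hypothetical server endpoint. All names (`/reconstruct`, `classify_object`, `build_mesh`) are made up for illustration, and the classification and reconstruction steps are stubs:

```python
# pip install fastapi uvicorn  -- hypothetical server for the described flow
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def classify_object(images: list[bytes]) -> str:
    # Stub: the paper applies a deep learning classifier here.
    return "chair"

def build_mesh(images: list[bytes], label: str) -> str:
    # Stub: the paper reconstructs a 3D mesh and applies a texture suited
    # to the classified object; return an ID the phone app can fetch later.
    return "model-0001"

@app.post("/reconstruct")
async def reconstruct(files: list[UploadFile] = File(...)):
    images = [await f.read() for f in files]   # 2D photos from the phone camera
    label = classify_object(images)
    model_id = build_mesh(images, label)
    return {"object": label, "model_id": model_id}
```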

Recent Progress of Smart Sensor Technology Relying on Artificial Intelligence (인공지능 기반의 스마트 센서 기술 개발 동향)

  • Shin, Hyun Sik;Kim, Jong-Woong
    • Journal of the Microelectronics and Packaging Society / v.29 no.3 / pp.1-12 / 2022
  • Artificial intelligence technology that endows existing sensors with functions resembling human intelligence is developing rapidly and drawing attention. Previously, research focused mainly on improving the fundamental performance indicators of sensors. Recently, however, attempts to combine sensors with artificial intelligence capabilities such as classification and prediction have been explored. Building on this, intelligent-sensor research has been actively reported across almost all sensing fields, such as disease detection, motion detection, and gas sensing. In this paper, we introduce the basic concepts, types, and driving mechanisms of artificial intelligence and review some examples of its use.

Related-key Neural Distinguisher on Block Ciphers SPECK-32/64, HIGHT and GOST

  • Erzhena Tcydenova;Byoungjin Seok;Changhoon Lee
    • Journal of Platform Technology / v.11 no.1 / pp.72-84 / 2023
  • With the rise of the Internet of Things, the security of lightweight computing environments has become a hot topic. Lightweight block ciphers, which provide efficient performance and security through relatively simple structures and smaller key and block sizes, are drawing attention. Because of these characteristics, they can also become targets for new attack techniques. One new cryptanalytic attack attracting interest is neural cryptanalysis, a cryptanalytic technique based on neural networks. It has produced results better than conventional cryptanalysis methods without requiring a great amount of time or cryptographic knowledge. The first work to show good results was carried out by Aron Gohr at CRYPTO'19; the attack was conducted on the lightweight block cipher SPECK-32/64 and performed better than conventional differential cryptanalysis. In this paper, we first apply the Differential Neural Distinguisher proposed by Aron Gohr to the block ciphers HIGHT and GOST to test the applicability of the attack to ciphers with different structures. The performance of the Differential Neural Distinguisher is then analyzed by replacing the neural network attack model with five different models (Multi-Layer Perceptron, AlexNet, ResNeXt, SE-ResNet, SE-ResNeXt). We then propose a Related-key Neural Distinguisher and apply it to the SPECK-32/64, HIGHT, and GOST block ciphers. The proposed Related-key Neural Distinguisher is constructed using the relationship between keys, which makes it possible to distinguish more rounds than the differential distinguisher.
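
To illustrate the general idea at a small scale (a sketch only; not the authors' five models or Gohr's residual network), the following trains a basic MLP to distinguish reduced-round SPECK-32/64 ciphertext pairs whose plaintexts differ by Gohr's input difference (0x0040, 0x0000) from pairs of unrelated encryptions. The round count, sample sizes, and network shape are illustrative; held-out accuracy meaningfully above 0.5 means the distinguisher works for that round count.

```python
# pip install numpy scikit-learn
import numpy as np
from sklearn.neural_network import MLPClassifier

MASK = 0xFFFF  # SPECK-32/64 works on 16-bit words

def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK
def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK

def expand_key(key, rounds):
    """SPECK-32/64 key schedule; key = (l2, l1, l0, k0), four 16-bit words."""
    l, ks = [key[2], key[1], key[0]], [key[3]]
    for i in range(rounds - 1):
        l.append(((ks[i] + ror(l[i], 7)) & MASK) ^ i)
        ks.append(rol(ks[i], 2) ^ l[i + 3])
    return ks

def encrypt(x, y, ks):
    for k in ks:                                  # one SPECK round per round key
        x = ((ror(x, 7) + y) & MASK) ^ k
        y = rol(y, 2) ^ x
    return x, y

def make_data(n, rounds, diff=(0x0040, 0x0000)):
    """Label 1: ciphertexts of a plaintext pair with the fixed input difference.
    Label 0: ciphertexts of two unrelated random plaintexts."""
    rng = np.random.default_rng(1)
    X, y = [], []
    for _ in range(n):
        ks = expand_key([int(w) for w in rng.integers(0, 1 << 16, 4)], rounds)
        label = int(rng.integers(0, 2))
        p0 = [int(w) for w in rng.integers(0, 1 << 16, 2)]
        if label:
            p1 = [p0[0] ^ diff[0], p0[1] ^ diff[1]]
        else:
            p1 = [int(w) for w in rng.integers(0, 1 << 16, 2)]
        c = (*encrypt(*p0, ks), *encrypt(*p1, ks))
        X.append([(w >> b) & 1 for w in c for b in range(16)])  # 64 ciphertext bits
        y.append(label)
    return np.array(X), np.array(y)

X, y = make_data(20000, rounds=5)
clf = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=50)
clf.fit(X[:16000], y[:16000])
print("distinguisher accuracy:", clf.score(X[16000:], y[16000:]))
```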

Step-wise Combined Implicit/Explicit Finite Element Simulation of Autobody Stamping Processes (차체 스템핑공정을 위한 스텝형식의 내연적/외연적 결함 유한요소해석)

  • Jung, D.W.;Yang, D.Y.
    • Journal of the Korean Society for Precision Engineering / v.13 no.12 / pp.86-98 / 1996
  • A combined implicit/explicit scheme for the analysis of sheet forming problems is proposed in this work. In finite element simulation of sheet metal forming processes, the robustness and stability of computation are important requirements, since computation time and convergence become major considerations besides solution accuracy due to the complexity of geometry and boundary conditions. The implicit scheme employs a more reliable and rigorous treatment of equilibrium at each step of deformation, while in the explicit scheme the problem of convergence is eliminated at the cost of solution accuracy. The explicit and implicit approaches thus have complementary merits and demerits. In order to combine the merits of the two methods, a step-wise combined implicit/explicit scheme has been developed. In the present work, the rigid-plastic finite element method using bending-energy-augmented membrane elements (BEAM) [1] is employed for computation. Computations are carried out for some typical sheet forming examples by implicit and combined implicit/explicit schemes, including deep drawing of an oil pan, a front fender, and a fuel tank. From the comparison between the methods, their advantages and disadvantages are discussed.
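
For background on the implicit/explicit trade-off described above (a generic numerical illustration, not the authors' rigid-plastic formulation): an implicit step stays stable at large step sizes because it enforces equilibrium at the new state, at the cost of a solve per step, while an explicit step avoids the solve but loses stability as the step grows. A minimal sketch on a stiff test equation:

```python
import numpy as np

# Stiff test equation y' = -lam * y; exact solution y(t) = exp(-lam * t).
lam, y0, h, steps = 50.0, 1.0, 0.05, 40    # h*lam = 2.5 > 2: explicit Euler unstable

y_exp, y_imp = y0, y0
for n in range(steps):
    y_exp = y_exp + h * (-lam * y_exp)     # explicit (forward) Euler: no solve
    y_imp = y_imp / (1.0 + h * lam)        # implicit (backward) Euler: solve (1 + h*lam) * y_new = y_old

print(f"exact    : {np.exp(-lam * h * steps):.3e}")
print(f"explicit : {y_exp:.3e}   (blows up: growth factor |1 - h*lam| > 1)")
print(f"implicit : {y_imp:.3e}   (decays, stable for any h > 0)")
```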

Computational Analysis on Twitter Users' Attitudes towards COVID-19 Policy Intervention

  • Joohee Kim;Yoomi Kim
    • International Journal of Advanced Culture Technology / v.11 no.4 / pp.358-377 / 2023
  • During the initial period of the COVID-19 pandemic, governments around the world implemented non-pharmaceutical interventions. For these policy interventions to be effective, authorities engaged in political discourse legitimising their activity to generate positive public attitudes. To understand effective COVID-19 policy, this study investigates public attitudes in South Korea, the United Kingdom, and the United States and how they reflect different legitimisations of policy intervention. We adopt a big data approach to analysing public attitudes, drawing on public comments posted on Twitter during selected periods. We collect tweets related to COVID-19 policy intervention and conduct a sentiment analysis using a deep learning method. Public attitudes and sentiments in the three countries show different patterns according to how policy interventions were implemented. Overall concern about policy intervention is higher in South Korea than in the other two countries. However, public sentiment in all three countries tends to improve following the implementation of policy intervention. The findings suggest that governments can achieve policy effectiveness when consistent and transparent communication takes place during the initial period of a pandemic. This study contributes to the existing literature by applying big data analysis to explain which policies engender positive public attitudes.
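
The abstract does not name its sentiment model; as a generic illustration of deep-learning sentiment scoring of tweets, here is a minimal sketch using the Hugging Face transformers pipeline. The pretrained model chosen below is illustrative, not the study's, and the example tweets are invented:

```python
# pip install transformers torch
from transformers import pipeline

# A generic pretrained English sentiment model (stand-in for the study's model)
clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")

tweets = [
    "The new mask mandate is a sensible, well-communicated policy.",
    "Another lockdown with no explanation. This is unbearable.",
]
for tweet, result in zip(tweets, clf(tweets)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {tweet}")
```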

A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 추출 및 식별 방법)

  • Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE / v.24 no.2 / pp.529-535 / 2020
  • As a main technology of the 4th industrial revolution, immersive 360-degree video contents are drawing attention. The worldwide market size of immersive 360-degree video contents is projected to increase from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video contents are distributed through illegal distribution networks such as Webhard and Torrent, and the damage caused by illegal reproduction is increasing. The existing 2D video industry uses copyright filtering technology to prevent such illegal distribution. The technical difficulty in dealing with immersive 360-degree videos is that they require ultra-high-quality pictures and contain images captured by two or more cameras merged into one image, which creates distortion regions. There are also technical limitations such as an increase in the amount of feature point data due to the ultra-high definition and the resulting processing speed requirements. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper proposes a feature point extraction and identification technique that selects object identification areas excluding regions with severe distortion, recognizes objects in the identification areas using deep learning technology, and extracts feature points using the identified object information. Compared with the previously proposed method of extracting feature points using the stitching area of immersive contents, the proposed technique shows an excellent performance gain.
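
The following is a rough sketch of the general pattern the abstract describes, not the authors' algorithm: restrict feature point extraction to an object region and skip the rest of the frame. Here the object box is supplied by the caller (in the paper it would come from a deep-learning object detector, with distorted stitching regions excluded), and ORB stands in for whatever feature extractor is actually used; the input file name is hypothetical:

```python
# pip install opencv-python numpy
import cv2
import numpy as np

def keypoints_in_region(img, box):
    """Extract ORB feature points only inside an object region.

    box: (x, y, w, h). In the paper this region would come from a deep
    learning object detector and exclude severely distorted areas; here
    it is supplied by the caller for illustration.
    """
    x, y, w, h = box
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255                 # restrict detection to the ROI
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(img, mask)
    return keypoints, descriptors

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame
assert img is not None, "provide an input frame"
kps, descs = keypoints_in_region(img, box=(100, 80, 320, 240))
print(f"{len(kps)} feature points extracted inside the object region")
```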

Automated Measurement of Native T1 and Extracellular Volume Fraction in Cardiac Magnetic Resonance Imaging Using a Commercially Available Deep Learning Algorithm

  • Suyon Chang;Kyunghwa Han;Suji Lee;Young Joong Yang;Pan Ki Kim;Byoung Wook Choi;Young Joo Suh
    • Korean Journal of Radiology / v.23 no.12 / pp.1251-1259 / 2022
  • Objective: T1 mapping provides valuable information regarding cardiomyopathies. Manual drawing is time-consuming and prone to subjective errors. Therefore, this study aimed to test a deep learning (DL) algorithm for the automated measurement of native T1 and extracellular volume (ECV) fractions in cardiac magnetic resonance (CMR) imaging with a temporally separated dataset. Materials and Methods: CMR images obtained for 95 participants (mean age ± standard deviation, 54.5 ± 15.2 years) were included: 36 with left ventricular hypertrophy (12 hypertrophic cardiomyopathy, 12 Fabry disease, and 12 amyloidosis), 32 with dilated cardiomyopathy, and 27 healthy volunteers. A commercial DL algorithm based on 2D U-net (Myomics-T1 software, version 1.0.0) was used for the automated analysis of T1 maps. Four radiologists, as study readers, performed manual analysis. The reference standard was the consensus result of the manual analysis by two additional expert readers. The segmentation performance of the DL algorithm and the correlation and agreement between the automated measurement and the reference standard were assessed. Interobserver agreement among the four radiologists was analyzed. Results: DL successfully segmented the myocardium in 99.3% of slices in the native T1 map and 89.8% of slices in the post-T1 map, with Dice similarity coefficients of 0.86 ± 0.05 and 0.74 ± 0.17, respectively. Native T1 and ECV showed strong correlation and agreement between DL and the reference: for T1, r = 0.967 (95% confidence interval [CI], 0.951-0.978) and a bias of 9.5 msec (95% limits of agreement [LOA], -23.6 to 42.6 msec); for ECV, r = 0.987 (95% CI, 0.980-0.991) and a bias of 0.7% (95% LOA, -2.8% to 4.2%) on a per-subject basis. Agreement between DL and each of the four radiologists was excellent (intraclass correlation coefficient [ICC] of 0.98-0.99 for both native T1 and ECV), comparable to the pairwise agreement between the radiologists (ICC of 0.97-1.00 and 0.99-1.00 for native T1 and ECV, respectively). Conclusion: The DL algorithm allowed automated T1 and ECV measurements comparable to those of radiologists.
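
For reference, the two quantities evaluated above have standard definitions, sketched below: the Dice similarity coefficient for segmentation overlap, and ECV computed from native and post-contrast T1 plus hematocrit. These are the general formulas, not the vendor's implementation, and the input values are illustrative:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

def ecv(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hematocrit):
    """Extracellular volume fraction from native/post-contrast T1 (msec).

    Standard formula: ECV = (1 - Hct) * (dR1_myocardium / dR1_blood),
    where R1 = 1/T1 and dR1 is the pre-to-post contrast change.
    """
    dr1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    dr1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return (1.0 - hematocrit) * dr1_myo / dr1_blood

# Illustrative values only (msec); not taken from the study
print(f"ECV = {ecv(1200, 450, 1900, 320, 0.42):.1%}")
```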

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the models. Here, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Given the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise here. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors to improve the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input to a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these research questions, we generate various types of morpheme vectors reflecting them and then compare the classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. For the training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive the morpheme vectors, we use data both from the same domain as the target and from another domain: about 2 million cosmetics product reviews from Naver Shopping, and 520,000 Naver News articles, roughly corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ along three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of data preprocessing, namely, sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed to the word vector model: the morphemes alone, or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency for a morpheme to be included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model using a context window of 5 and a vector dimension of 300. The results suggest that using text from the same domain even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included appear to have no definite influence on classification accuracy.
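
A minimal sketch of the pipeline described above, under stated assumptions: morphemes come from KoNLPy's Okt analyzer (an illustrative stand-in; the paper does not specify this toolchain), CBOW vectors use the stated window 5 and dimension 300 via gensim, and a small non-static CNN (trainable embeddings initialized from the CBOW vectors) does the classification. The two-sentence corpus, sequence length, network shape, and epochs are placeholders, not the study's setup:

```python
# pip install konlpy gensim tensorflow numpy  (konlpy also needs a Java runtime)
from gensim.models import Word2Vec
from konlpy.tag import Okt
import numpy as np
import tensorflow as tf

okt = Okt()
# Tiny stand-in corpus of (review, label) pairs; the study used 17,260
# Naver Shopping cosmetics reviews, which are not reproduced here.
corpus = [("배송도 빠르고 제품도 정말 좋아요", 1),
          ("향이 너무 독하고 피부가 뒤집어졌어요", 0)] * 200

tokenized = [okt.morphs(text) for text, _ in corpus]  # reviews -> morpheme sequences

# CBOW (sg=0) morpheme vectors with window 5 and dimension 300, as in the paper
w2v = Word2Vec(tokenized, vector_size=300, window=5, sg=0, min_count=1)

vocab = {m: i + 1 for i, m in enumerate(w2v.wv.index_to_key)}  # index 0 = padding
maxlen = 30
X = tf.keras.utils.pad_sequences(
    [[vocab[m] for m in toks] for toks in tokenized], maxlen=maxlen)
y = np.array([label for _, label in corpus])

emb = np.zeros((len(vocab) + 1, 300), dtype="float32")
for morpheme, idx in vocab.items():
    emb[idx] = w2v.wv[morpheme]

# "Non-static" CNN: the embedding layer is initialized with the CBOW morpheme
# vectors but remains trainable, so the vectors are fine-tuned during training.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        len(vocab) + 1, 300,
        embeddings_initializer=tf.keras.initializers.Constant(emb),
        trainable=True),
    tf.keras.layers.Conv1D(100, 3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=2)
```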

Development of Algorithms for the Construction of Hydrogeologic Thematic Maps using Avenue™ Language in ArcView GIS (ArcView GIS의 Avenue™ Language를 활용한 수문지질도 작성 알고리즘 개발 및 적용 사례 연구)

  • Kim, Gyoo-Bum;Son, Young-Chul;Kim, Jong-Wook;Lee, Jang-Yong
    • Journal of the Korean Association of Geographic Information Studies / v.8 no.3 / pp.107-120 / 2005
  • In Korea, MOCT and KOWACO published a standard for hydrogeologic map drawing, "The Handbook for the Drawing and Management of Hydrogeologic Map", in 2003. According to this guideline, hydrogeologic and related thematic maps should include characteristics of groundwater quality and quantity. These maps are generally drawn with ArcView GIS 3.x software. Annotating wells on a groundwater level map and drawing Stiff diagrams on a groundwater quality map require a great deal of effort, because hundreds or thousands of well, water level, and hydrogeochemical data points are produced through many kinds of investigations. In addition, a lineament density map is very important for surveying and exploring groundwater in deep aquifers. In this study we developed modules for well annotation, Stiff diagram drawing, and lineament density calculation with Avenue™ scripts, and they proved very useful and easy to use for drawing groundwater thematic maps.
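
Avenue scripts are specific to ArcView 3.x and are not reproduced in the abstract; as a generic illustration of the lineament density calculation described above (total lineament length per unit cell area on a grid), here is a minimal sketch using shapely. All coordinates and the grid extent are hypothetical:

```python
# pip install shapely numpy
import numpy as np
from shapely.geometry import LineString, box

# Hypothetical lineaments digitized from imagery (map units, e.g. km)
lineaments = [LineString([(0.2, 0.1), (3.8, 2.9)]),
              LineString([(1.0, 3.5), (3.5, 0.5)])]

cell = 1.0          # grid cell size
nx, ny = 4, 4       # grid extent: 4 x 4 cells

density = np.zeros((ny, nx))
for j in range(ny):
    for i in range(nx):
        cell_geom = box(i * cell, j * cell, (i + 1) * cell, (j + 1) * cell)
        # lineament density = total lineament length within the cell / cell area
        total = sum(line.intersection(cell_geom).length for line in lineaments)
        density[j, i] = total / (cell * cell)

print(np.flipud(density).round(2))   # print with north (larger y) at the top
```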
