• Title/Summary/Keyword: Global feature


LOW REGULARITY SOLUTIONS TO HIGHER-ORDER HARTREE-FOCK EQUATIONS WITH UNIFORM BOUNDS

  • Changhun Yang
    • Journal of the Chungcheong Mathematical Society
    • /
    • v.37 no.1
    • /
    • pp.27-40
    • /
    • 2024
  • In this paper, we consider the higher-order Hartree-Fock equations. The higher-order linear Schrödinger equation was introduced in [5] as the formal finite Taylor expansion of the pseudo-relativistic linear Schrödinger equation. In [13], the authors established global-in-time Strichartz estimates for the linear higher-order equations that hold uniformly in the speed of light c ≥ 1, and as an application they proved the convergence of the higher-order Hartree-Fock equations to the corresponding pseudo-relativistic equation on an arbitrary time interval as c goes to infinity when the order of the Taylor expansion is odd. To achieve this, they not only showed the existence of solutions in L2 space but also proved that the solutions stay bounded uniformly in c. We address the remaining question of the convergence of the higher-order Hartree-Fock equations when the order of the Taylor expansion is even. The feature distinguishing this case from the odd case is that the group velocity of the phase function vanishes when the frequency is comparable to c. Owing to this property, the kinetic energy of solutions is not coercive, and only Strichartz estimates weaker than in the odd case were obtained in [13]. Thus, we only manage to establish the existence of local solutions in Hs space for s > $\frac{1}{3}$ on a finite time interval [-T, T]; however, the time interval does not depend on c and the solutions are bounded uniformly in c. In addition, we provide a convergence result of the higher-order Hartree-Fock equations to the pseudo-relativistic equation, with the same convergence rate as in the odd case, which holds on [-T, T].
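
For the entry above, the object being expanded can be written out explicitly. As an illustration only (unit mass and subtraction of the rest energy c² are normalizations assumed here, not taken from the paper), the pseudo-relativistic symbol and its formal Taylor expansion in powers of 1/c² read

    \sqrt{c^{4} + c^{2}|\xi|^{2}} - c^{2}
      = c^{2}\Big(\sqrt{1 + |\xi|^{2}/c^{2}} - 1\Big)
      = \sum_{j \ge 1} \binom{1/2}{j}\,\frac{|\xi|^{2j}}{c^{2j-2}}
      = \frac{|\xi|^{2}}{2} - \frac{|\xi|^{4}}{8c^{2}} + \cdots

Truncating this series after J terms gives a higher-order linear Schrödinger operator; whether J is odd or even is exactly what separates the two cases contrasted in the abstract.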

MLCNN-COV: A multilabel convolutional neural network-based framework to identify negative COVID medicine responses from the chemical three-dimensional conformer

  • Pranab Das;Dilwar Hussain Mazumder
    • ETRI Journal
    • /
    • v.46 no.2
    • /
    • pp.290-306
    • /
    • 2024
  • To treat the novel COronaVIrus Disease (COVID), comparatively few medicines have been approved. Due to the global pandemic status of COVID, several medicines are being developed to treat patients. The modern COVID medicine development process faces various challenges, including predicting and detecting hazardous COVID medicine responses, and correctly predicting harmful COVID medicine reactions is essential for health safety. Significant developments in computational models for medicine development make it possible to identify adverse COVID medicine reactions. Since the beginning of the COVID pandemic, there has been significant demand for developing COVID medicines. Therefore, this paper presents a transfer-learning methodology and a multilabel convolutional neural network for COVID (MLCNN-COV) medicine development model to identify negative responses of COVID medicines. A framework is proposed with five multilabel transfer-learning models, namely MobileNetv2, ResNet50, VGG19, DenseNet201, and Inceptionv3, and an MLCNN-COV model is designed with an image augmentation (IA) technique and validated through experiments on images of the three-dimensional chemical conformers of 17 COVID medicines. The RGB color channels are used to represent the image features, which are extracted with Convolution2D and MaxPooling2D layers. The findings for the proposed MLCNN-COV are promising: it can identify individual adverse reactions of medicines with accuracy ranging from 88.24% to 100%, outperforming the transfer-learning models. This shows that three-dimensional conformers adequately identify negative COVID medicine responses.
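
To make the multilabel setup in the entry above concrete, here is a minimal Keras-style sketch. It is not the published MLCNN-COV architecture: the input size, the number of adverse-reaction labels, and the layer widths are assumptions for illustration; only the Convolution2D/MaxPooling2D feature extraction and the multilabel (sigmoid) output follow the abstract.

    # Minimal multilabel CNN sketch; shapes, label count, and layer widths are assumed.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_LABELS = 6               # assumed number of adverse-reaction labels
    INPUT_SHAPE = (128, 128, 3)  # assumed size of the RGB conformer image

    model = models.Sequential([
        tf.keras.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),   # Convolution2D feature extraction
        layers.MaxPooling2D(),                     # MaxPooling2D downsampling
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        # Sigmoid outputs let each adverse-reaction label be predicted independently.
        layers.Dense(NUM_LABELS, activation="sigmoid"),
    ])
    # Binary cross-entropy is the standard loss for independent multilabel outputs.
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])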

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults affect not only stakeholders such as managers, employees, creditors, and investors of the bankrupt companies, but also have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults remained focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it concentrated only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden, total collapse such as the 'Lehman Brothers case' of the global financial crisis. The key variables associated with corporate default also vary over time: comparing Deakin's (1972) study with the analyses of Beaver (1967, 1968) and Altman (1968) shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise confirmed the shifting importance of predictive variables using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for this time-dependent bias with a time-series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train the deep-learning time-series models using data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep-learning time-series algorithms is conducted on validation data that include the financial crisis period (2007~2008). As a result, we obtain models whose behavior on the validation data resembles that on the training data and which show excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008) using the optimal parameters found during validation. Finally, the models trained on these nine years are evaluated and compared on the test data (2009), which demonstrates the usefulness of corporate default prediction based on deep-learning time-series algorithms. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep-learning time-series models built on the three resulting variable bundles are useful for robust corporate default prediction. The definition of bankruptcy is the same as that of Lee (2015), and the independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep-learning time-series algorithms are then compared. Corporate data suffer from three limitations: nonlinear variables, multi-collinearity among variables, and lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep-learning time-series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although the study of corporate default prediction using time-series algorithms is still in its early stages, deep-learning algorithms are much faster than regression analysis at corporate default prediction modeling and are more effective in predictive power. With the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep-learning time-series research for the financial industry remains insufficient. This is an initial study on deep-learning time-series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists who begin studies combining financial data with deep-learning time-series algorithms.
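
To make the deep-learning time-series setup above concrete, here is a minimal LSTM classifier sketch. The sequence length, number of financial ratios, and layer sizes are illustrative assumptions; only the idea of feeding yearly financial-ratio sequences into a recurrent model to estimate default probability follows the abstract.

    # Minimal LSTM default-prediction sketch; shapes and sizes are assumed.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    N_YEARS, N_RATIOS = 7, 20    # assumed: 7 yearly observations of 20 financial ratios

    model = models.Sequential([
        tf.keras.Input(shape=(N_YEARS, N_RATIOS)),
        layers.LSTM(32),                        # summarizes each firm's yearly trajectory
        layers.Dense(1, activation="sigmoid"),  # estimated probability of default
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])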

A Dynamic Queue Manager for Optimizing the Resource and Performance of Mass-call based IN Services in Joint Wired and Wireless Networks (유무선 통합 망에서 대량호 지능망 서비스의 성능 및 자원 최적화를 위한 동적 큐 관리자)

  • 최한옥;안순신
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.5B
    • /
    • pp.942-955
    • /
    • 2000
  • This paper proposes enhanced designs of the global service logic and information flow for mass-call based IN services, which increase call completion rates and optimize resources in joint wired and wireless networks. To implement this logic, we design a Dynamic Queue Manager (DQM) applied to the call queuing service feature in the Service Control Point (SCP). So that the logic applies to wireless service subscribers as well as wired ones, service registration flags between the Home Location Register (HLR) and the SCP are managed to notify the DQM of the corresponding subscribers' mobility. We then present a dynamic queue management mechanism that dynamically manages the service group and the queue size based on an M/M/c/K queueing model as wireless subscribers roam between service groups due to their mobility. To determine the queue size allocated by the DQM, we simulate and analyze the relationship between the number of subscriber terminals and the drop rate, taking the service increment rate into account, and the required waiting time in the queue is simulated from this relationship. Moreover, we design and implement the DQM, including its internal service logic interacting with SIBs (Service Independent Building Blocks) and its data structure.
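
For the entry above, the drop-rate analysis rests on the M/M/c/K model. The sketch below computes its steady-state drop (blocking) probability; the arrival rate, service rate, number of agents, and capacity are placeholder values, not figures from the paper.

    # M/M/c/K steady-state probabilities and the drop (blocking) probability p_K.
    from math import factorial

    def mmck_drop_probability(lam, mu, c, K):
        """lam: call arrival rate, mu: per-server service rate, c: servers, K: capacity."""
        a = lam / mu  # offered load in Erlangs
        # Unnormalized steady-state probabilities pi_n for n = 0..K.
        pi = [a ** n / factorial(n) if n <= c
              else a ** n / (factorial(c) * c ** (n - c))
              for n in range(K + 1)]
        # A call arriving when all K positions are occupied is dropped.
        return pi[K] / sum(pi)

    # Placeholder values: 10 calls/min, 1 call/min per agent, 8 agents, room for 20 calls.
    print(round(mmck_drop_probability(lam=10.0, mu=1.0, c=8, K=20), 4))

A computation of this form is what links the queue size chosen by the DQM for each service group to the observed drop rate as subscribers roam.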


A Bayesian Estimation of Price for Commercial Property: Using subjective priors and a kriging technique (상업용 토지 가격의 베이지안 추정: 주관적 사전지식과 크리깅 기법의 활용을 중심으로)

  • Lee, Chang Ro;Eum, Young Seob;Park, Key Ho
    • Journal of the Korean Geographical Society
    • /
    • v.49 no.5
    • /
    • pp.761-778
    • /
    • 2014
  • There have been relatively few studies modeling prices for commercial property because of its low transaction volume in the market. Despite this thin-market character, this paper tries to estimate prices for commercial lots as accurately as possible. We constructed a model whose components consist of a mean structure (global trend), an exponential covariance function, and a pure error term, and applied it to actual sales price data for Seoul. We explicitly accounted for the spatial autocorrelation of land prices by using kriging, a representative spatial interpolation method, because the prices of commercial lots form differently depending on the submarkets to which they belong. In addition, we applied Bayesian kriging to overcome data scarcity by incorporating experts' knowledge into the prior probability distribution. The chosen model's excellent performance was verified on validation data, and we confirmed that this performance is attributable to incorporating both experts' knowledge and spatial autocorrelation in the model construction. This paper is differentiated from previous studies in that it applies the Bayesian kriging technique to estimate prices for commercial lots and explicitly combines experts' knowledge with data. The results are expected to provide a useful guide for circumstances in which property prices must be estimated reliably from sparse transaction data.
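
To illustrate the model components named above (a global trend, an exponential covariance function, and a pure error term), here is a minimal kriging-prediction sketch. The coordinates, prices, sill, range, and nugget are made-up values, and the elicitation of experts' priors that makes the method Bayesian is not shown.

    # Simple-kriging sketch with an exponential covariance (illustrative values only).
    import numpy as np

    def exp_cov(d, sill=1.0, rng=500.0):
        """Exponential covariance: sill * exp(-d / rng), d in the same units as rng."""
        return sill * np.exp(-d / rng)

    def simple_krige(coords, values, target, mean, sill=1.0, rng=500.0, nugget=0.1):
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        C = exp_cov(d, sill, rng) + nugget * np.eye(len(coords))          # data covariance
        c0 = exp_cov(np.linalg.norm(coords - target, axis=1), sill, rng)  # data-target covariance
        w = np.linalg.solve(C, c0)          # kriging weights
        return mean + w @ (values - mean)   # predicted (log-)price at the target location

    coords = np.array([[0.0, 0.0], [300.0, 100.0], [150.0, 400.0]])  # made-up locations (m)
    values = np.array([10.2, 10.8, 9.9])                             # made-up log sale prices
    print(simple_krige(coords, values, target=np.array([200.0, 200.0]), mean=10.3))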


Distributed Hashing-based Fast Discovery Scheme for a Publish/Subscribe System with Densely Distributed Participants (참가자가 밀집된 환경에서의 게재/구독을 위한 분산 해쉬 기반의 고속 서비스 탐색 기법)

  • Ahn, Si-Nae;Kang, Kyungran;Cho, Young-Jong;Kim, Nowon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.12
    • /
    • pp.1134-1149
    • /
    • 2013
  • A pub/sub system enables data users to access any necessary data without knowledge of, or synchronization with, the data producer, and is widely used as middleware for data-centric services. DDS (Data Distribution Service) is a standard middleware supported by the OMG (Object Management Group), one of the global standardization organizations, and is considered quite useful as a standard middleware for US military services. However, it is well known that discovering the Participants and Endpoints in the system takes a considerably long time, especially when the system is booting up. In this paper, we propose a discovery scheme that reduces this latency when the Participants and Endpoints are densely distributed in a small area. We modify the standard DDS discovery process in three ways. First, we integrate the Endpoint discovery process with the Participant discovery process. Second, we reduce the number of connections per participant during discovery by adopting the concept of successors from distributed hashing. Third, the participants are connected through TCP instead of UDP to exploit TCP's reliable delivery. We evaluated the performance of our scheme against the standard DDS discovery process, and the results show that it achieves considerably lower discovery latency when the Participants and Endpoints are densely distributed in a local network.
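
As a rough illustration of the successor concept borrowed from distributed hashing in the second modification above, here is a minimal hash-ring sketch. The hash-space size, participant names, and successor count are assumptions, and the DDS-specific discovery messages are not modeled.

    # Successor lookup on a hash ring (Chord-style): each participant only needs to
    # contact the few peers that follow it on the ring, limiting connections.
    import hashlib
    from bisect import bisect_right

    RING_BITS = 16  # assumed hash-space size

    def ring_id(name: str) -> int:
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** RING_BITS)

    def successors(participants, name, k=2):
        """Return the k participants that follow `name` clockwise on the ring."""
        ring = sorted(participants, key=ring_id)
        ids = [ring_id(p) for p in ring]
        start = bisect_right(ids, ring_id(name))
        return [ring[(start + i) % len(ring)] for i in range(k)]

    nodes = [f"participant-{i}" for i in range(8)]
    print(successors(nodes, "participant-3", k=2))  # the two peers it would connect to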

Design of a Bit-Level Super-Systolic Array (비트 수준 슈퍼 시스톨릭 어레이의 설계)

  • Lee Jae-Jin;Song Gi-Yong
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.42 no.12
    • /
    • pp.45-52
    • /
    • 2005
  • A systolic array, formed by interconnecting a set of identical data-processing cells in a uniform manner, is a combination of an algorithm and a circuit that implements it, and is closely related conceptually to the arithmetic pipeline. High-performance computation on a large array of cells has been an important feature of systolic arrays. To achieve an even higher degree of concurrency, it is desirable to make the cells of a systolic array systolic arrays themselves. A systolic array whose cells consist of another systolic array is called a super-systolic array. This paper proposes a scalable bit-level super-systolic array that can be adopted in VLSI design, with the regular interconnection and functional primitives typical of a systolic architecture. The architecture focuses on highly regular computational structures that avoid the large number of global interconnections required in general VLSI implementations. A bit-level super-systolic FIR filter is selected as an example of a bit-level super-systolic array. The derived filter has been modeled and simulated at the RT level in VHDL and then synthesized with Synopsys Design Compiler based on the Hynix 0.35 μm cell library. Compared with a conventional word-level systolic array, the newly proposed bit-level super-systolic array is efficient in terms of area and throughput.
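
As a behavioral illustration of the systolic FIR structure underlying the example above, here is a small Python simulation of a transposed tap chain in which the input sample is broadcast to every cell and partial sums advance one register per cycle. The tap values are placeholders, and the bit-level/super-systolic decomposition itself is not modeled.

    # Behavioral model of a systolic (transposed-form) FIR filter:
    # y[n] = sum_k taps[k] * x[n-k], with one multiply-accumulate cell per tap.
    def systolic_fir(samples, taps):
        regs = [0] * len(taps)   # regs[k] = partial sum entering cell k; regs[-1] stays 0
        out = []
        for x in samples:
            out.append(taps[0] * x + regs[0])
            # Each cycle, every cell multiplies the broadcast sample by its tap and
            # adds the partial sum registered by the cell to its right.
            regs = [taps[k + 1] * x + regs[k + 1] for k in range(len(taps) - 1)] + [0]
        return out

    # Quick check against the direct convolution definition.
    print(systolic_fir([1, 2, 3, 4], [1, 1, 1]))  # -> [1, 3, 6, 9]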

A Study on the Construction of Near-Real Time Drone Image Preprocessing System to use Drone Data in Disaster Monitoring (재난재해 분야 드론 자료 활용을 위한 준 실시간 드론 영상 전처리 시스템 구축에 관한 연구)

  • Joo, Young-Do
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.3
    • /
    • pp.143-149
    • /
    • 2018
  • Recently, as large-scale damage from natural disasters caused by global climate change has increased, monitoring systems applying remote sensing technology are being constructed in disaster areas. Among remote sensing platforms, the drone has been actively used in the private sector thanks to recent technological developments and has been applied in disaster areas owing to advantages such as timeliness and economic efficiency. This paper deals with the development of a preprocessing system that can map drone image data in near-real time as a basis for constructing a drone-based disaster monitoring system. The system is based on the SURF algorithm, a computer vision feature-detection technique, and performs the desired correction by matching feature points between reference images and newly captured images. The study areas are the lower part of the Gahwa River and the Daecheong Dam basin; the former has many feature points for matching, whereas the latter has relatively few, which makes it possible to test whether the system can be applied effectively in various environments. The results show that the accuracy of the geometric correction is 0.6 m and 1.7 m in the two areas, respectively, and that the processing time is about 30 seconds per scene, indicating high applicability in disaster situations requiring timeliness. However, when no reference image is available or its accuracy is low, the correction accuracy is limited.
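
As a rough sketch of the reference-to-shot feature matching and geometric correction described above, using OpenCV: ORB is used here as a freely available stand-in for SURF (which requires the non-free contrib build), and the file names are placeholders.

    # Match a drone shot to a reference image and warp it into the reference frame.
    # Illustrative only: ORB stands in for SURF, and the file names are placeholders.
    import cv2
    import numpy as np

    ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)
    shot = cv2.imread("drone_shot.tif", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=5000)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    kp_shot, des_shot = orb.detectAndCompute(shot, None)

    # Brute-force Hamming matching for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_shot, des_ref), key=lambda m: m.distance)[:500]

    src = np.float32([kp_shot[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC homography maps the shot onto the reference image (the geometric correction).
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    corrected = cv2.warpPerspective(shot, H, (ref.shape[1], ref.shape[0]))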

Temporal and Spatial Variability of Heating and Cooling Degree-days in South Korea, 1973-2002 (한반도 난·냉방도일의 시공간 분포 특성 변화에 관한 연구)

  • Choi, Youn-Geun
    • Journal of the Korean Geographical Society
    • /
    • v.40 no.5 s.110
    • /
    • pp.584-593
    • /
    • 2005
  • The spatial and temporal variations of heating degree-days (HDDs) and cooling degree-days (CDDs) are closely related to the temperature field. The spatial distribution of 30-year mean HDDs shows that the higher values are located in the northern part of South Korea while the lower values are located in the southern part, and the 30-year mean CDDs show a more randomized distribution than the HDDs. The trends of HDDs and CDDs differ: HDDs have a distinct decreasing trend while CDDs show no significant change. The decreasing trends of HDDs are consistent over South Korea, and most stations have experienced statistically significant change. As the areas with significant changes in HDDs are much broader than those for annual mean temperature, HDDs can be more useful than annual mean temperature for detecting climate change impacts at the regional level; in other words, an insignificant change in the mean temperature field can still induce a significant change in a region's thermal climatology. The temporal pattern of the climatic departure index (CDI) for the South Korea HDD series shows a general decrease but a sharp increase in recent years; the drastic decrease of HDDs induces a higher CDI, indicating larger variability among stations. However, the decrease in the South Korea HDD series cannot be attributed entirely to global warming because of urban effects. By the early 1980s there were no large differences in HDDs between the urban and rural series, but the differences have since grown larger, presumably in association with the intensification of urbanization in South Korea. Nevertheless, a decreasing trend of HDDs remains for rural stations.
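
For reference, the degree-day quantities analyzed above reduce to a simple accumulation over daily mean temperatures. The 18 °C base used in this sketch is a common convention assumed here, not necessarily the threshold used in the study.

    # Heating and cooling degree-days from daily mean temperatures (base 18 °C assumed).
    def degree_days(daily_mean_temps, base=18.0):
        hdd = sum(max(0.0, base - t) for t in daily_mean_temps)  # heating demand proxy
        cdd = sum(max(0.0, t - base) for t in daily_mean_temps)  # cooling demand proxy
        return hdd, cdd

    print(degree_days([-2.0, 5.5, 17.0, 24.0, 29.5]))  # -> (33.5, 17.5)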

The Strategic Research Approach for the Grand Plan of the Korean Peninsula Infrastructure (통일한반도 국토인프라 Grand Plan 연구 구상)

  • Lee, Bok-Nam
    • Land and Housing Review
    • /
    • v.6 no.2
    • /
    • pp.43-48
    • /
    • 2015
  • Right after President Park Geun-hye's announcement in Dresden, Germany, in March 2014, both expectation and skepticism have been raised about Korean unification. On the positive side, unification would offer a great chance for economic prosperity; on the negative side, it could place a great burden on the Republic's finances. Compared with the expectations for unification, preparation lacks structure, systematic approaches are duplicated or overlapping, and even the national strategies are diffuse. There are individual research papers, analytical data and information, and studies on industry and technology, but most of the previous research and findings are unstructured and incomplete, and it is hard to discern the overall shape of a unification strategy. West Germany confessed that it knew very little about the actual condition of East Germany, and the Korean Government may know much less about North Korea's condition than West Germany did. Before actual unification on the Korean peninsula, a Grand Plan for the national infrastructure and land utilization of the Korean peninsula is needed, and during its development the Asian global transportation network could be developed at the same time. Germany's unification experience can provide a great opportunity for developing the Grand Plan. The data, information, and previous research should be classified and structured in a systematic arrangement. Since most of the investment and budget for unification would come from Korea, this would be highly beneficial for the Korean people. Openness and early disclosure of the Grand Plan for the national infrastructure are considered mandatory.