• Title/Abstract/Keyword: Network Features


A Study on the Non-Innovative Formation of Urban Industrial Agglomeration in an Old Industrial Complex: A Case of Seoul Onsu Industrial Complex (노후산업단지의 비혁신형 도시산업 집적지 형성에 관한 연구: 서울온수산업단지를 사례로)

  • Hyeyoon Jung
    • Journal of the Economic Geographical Society of Korea
    • /
    • v.26 no.3
    • /
    • pp.223-237
    • /
    • 2023
  • The Seoul Onsu Industrial Complex, completed more than 50 years ago, is an old industrial complex with deteriorating infrastructure and factory buildings. Despite this, an urban industrial agglomeration centered on the machinery industry currently exists there. This study holistically analyzes the physical deterioration of facilities in the aging complex and the characteristics of its industrial agglomeration to derive the identity of the Seoul Onsu Industrial Complex. The findings show that the complex's urban industrial agglomeration has been reinforced by an influx of small-scale businesses, driven by concentrated trade networks in the metropolitan area, plot subdivision, permission for noise-producing processes, and the ease of securing highly skilled technicians. However, this agglomeration coexists with a weakening of the complex's production function, the limited innovativeness of resident companies, and non-innovative features resulting from the weakened competitiveness of the metropolitan machinery industry. In summary, the identity of the Seoul Onsu Industrial Complex is a 'Non-Innovative Urban Industry Agglomeration': an old industrial complex undergoing non-innovative agglomeration based on a machinery industry network centered in the metropolitan area.

Induction of Unique STAT Heterodimers by IL-21 Provokes IL-1RI Expression on CD8+ T Cells, Resulting in Enhanced IL-1β Dependent Effector Function

  • Dong Hyun Kim;Hee Young Kim;Won-Woo Lee
    • IMMUNE NETWORK
    • /
    • v.21 no.5
    • /
    • pp.33.1-33.19
    • /
    • 2021
  • IL-1β plays critical roles in the priming and effector phases of immune responses such as the differentiation, commitment, and memory formation of T cells. In this context, several reports have suggested that the IL-1β signal is crucial for CTL-mediated immune responses to viral infections and tumors. However, little is known about whether IL-1β acts directly on CD8+ T cells, about the molecular mechanisms underlying the expression of IL-1 receptors (IL-1Rs) on CD8+ T cells, or about the features of IL-1R+ CD8+ T cells. Here, we provide evidence that the expression of IL-1R type I (IL-1RI), the functional receptor of IL-1β, is preferentially induced by IL-21 on TCR-stimulated CD8+ T cells. Further, IL-1β enhances the effector function of CD8+ T cells expressing IL-21-induced IL-1RI by increasing cytokine production and the release of cytotoxic granules containing granzyme B. The IL-21-IL-1RI-IL-1β axis is involved in an augmented effector function through regulation of the transcription factors BATF, Blimp-1, and IRF4. Moreover, this axis confers a unique effector function on CD8+ T cells compared to conventional type 1 cytotoxic T cells differentiated with IL-12. Chemical inhibitor and immunoprecipitation assays demonstrated that IL-21 induces a unique pattern of STAT activation with the formation of both STAT1:STAT3 and STAT3:STAT5 heterodimers, which are critical for the induction of IL-1RI on TCR-stimulated CD8+ T cells. Taken together, we propose that the induction of a novel subset of IL-1RI-expressing CD8+ T cells by IL-21 may be beneficial to the protective immune response against viral infections and is therefore important to consider in vaccine design.

Development of deep learning algorithm for classification of disc cutter wear condition based on real-time measurement data (실시간 측정데이터 기반의 디스크커터 마모상태 판별 딥러닝 알고리즘 개발)

  • Ji Yun Lee;Byung Chul Yeo;Ho Young Jeong;Jung Joo Kim
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.26 no.3
    • /
    • pp.281-301
    • /
    • 2024
  • Power cable tunnels, which form part of underground transmission line projects, are constructed using the shield TBM method. Among the shield TBM components, the disc cutter plays an important role in breaking the rock mass. Efficient tunnel construction is possible only when the cutter is replaced appropriately as it reaches its wear limit or suffers damage such as uneven wear. This study determines the wear condition of disc cutters using a deep learning algorithm based on real-time measurements of wear and rotation speed. Full-scale tunnelling tests confirmed that the measurement data differ depending on the wear condition of the disc cutter. Using the real-time measurement data, an algorithm based on a convolutional neural network (CNN) model was developed to determine disc cutter wear characteristics: the CNN filters learn the distributional patterns of the data, and these pattern features allow the model to classify uniform wear and uneven wear.
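As a loose illustration of why convolutional filters suit this task, the sketch below applies a single hand-crafted 1-D gradient filter (a stand-in for learned CNN filters, not the paper's model) to two invented rotation-speed signals; the filter responds far more strongly to the periodically fluctuating signal, a plausible signature of uneven wear:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in CNN layers)."""
    n, k = len(signal), len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(n - k + 1)])

rng = np.random.default_rng(0)
t = np.arange(200)
# Invented signals: steady rotation speed vs. periodic dips (uneven wear)
uniform = 100 + rng.normal(0.0, 0.5, 200)
uneven = 100 + 5 * np.sin(2 * np.pi * t / 20) + rng.normal(0.0, 0.5, 200)

edge = np.array([-1.0, 0.0, 1.0])  # gradient-like filter

def pattern_energy(x):
    """Mean squared filter response -- large for fluctuating signals."""
    return float(np.mean(conv1d(x, edge) ** 2))

assert pattern_energy(uneven) > pattern_energy(uniform)
```

A trained CNN learns many such filters from data rather than using a fixed one, but the mechanism, responding to local distributional patterns, is the same.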

A study on ESG Management Guidelines for Small and Medium-sized Logistics Enterprises (중소·중견 물류기업 ESG 경영 이행 가이드라인에 관한 연구)

  • Maowei Chen;Hyangsook Lee;Kyongjun Yun
    • Journal of Korea Port Economic Association
    • /
    • v.39 no.4
    • /
    • pp.147-161
    • /
    • 2023
  • As global challenges, particularly climate change, become more pressing, there is growing global awareness of Environmental, Social, and Governance (ESG) management. Given the crucial role played by the logistics industry in the complex network of the global supply chain, various societal stakeholders are emphasizing the necessity for logistics entities to practice ESG management. Although Korea has established comprehensive ESG guidelines for all enterprises, a notable limitation is their inadequate consideration of the distinctive features of logistics enterprises, especially small and medium-sized ones. Accordingly, this study thoroughly examines existing ESG guidelines, sustainable management approaches in large logistics enterprises, and prior research to identify potential ESG management diagnostic criteria relevant to small and medium-sized logistics enterprises, covering Public (P), Environmental (E), Social (S), and Governance (G) aspects. To streamline the diagnostic criteria while accounting for the unique characteristics of small and medium-sized logistics enterprises, the study surveys 60 logistics company personnel and experts from academic and research domains. The collected data undergo Principal Component Analysis (PCA), revealing that the four dimensions of information disclosure can be consolidated into a single dimension; environmental criteria are reduced from 16 to 3 items, societal considerations from 22 to 7 items, and governance structures from 20 to 5 items. This empirical endeavor is significant in presenting ESG management diagnostic criteria tailored to the specificities of small and medium-sized logistics enterprises.
The findings of this study are expected to serve as a foundational resource for the development of guidelines by relevant entities, promoting the wider adoption of ESG management practices among small and medium-sized logistics enterprises in the near future.
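The PCA-based consolidation described above can be sketched in a few lines. The data here are entirely hypothetical (60 synthetic survey responses to 4 correlated items); the point is only to show how a dominant first principal component justifies collapsing several items into one dimension:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical survey: one latent attitude drives all 4 disclosure items
latent = rng.normal(0.0, 1.0, (60, 1))
X = latent @ np.ones((1, 4)) + rng.normal(0.0, 0.2, (60, 4))

Xc = X - X.mean(axis=0)                      # center the items
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)          # variance ratio per component

# A dominant first component supports consolidation into one dimension
assert explained[0] > 0.9
```

In the study's setting the same reasoning applies item-group by item-group: when one component explains most of a group's variance, the group can be represented by a single diagnostic dimension.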

Increasing Accuracy of Stock Price Pattern Prediction through Data Augmentation for Deep Learning (데이터 증강을 통한 딥러닝 기반 주가 패턴 예측 정확도 향상 방안)

  • Kim, Youngjun;Kim, Yeojeong;Lee, Insun;Lee, Hong Joo
    • The Journal of Bigdata
    • /
    • v.4 no.2
    • /
    • pp.1-12
    • /
    • 2019
  • As Artificial Intelligence (AI) technology develops, it is being applied to various fields such as image, voice, and text, and has shown fine results in certain areas. Researchers have also tried to predict the stock market using AI. Predicting the stock market is known to be a difficult problem, since the market is affected by many factors such as the economy and politics. Within AI, there have been attempts to predict the ups and downs of stock prices by learning stock price patterns with various machine learning techniques. This study suggests a way of predicting stock price patterns based on the Convolutional Neural Network (CNN). A CNN classifies images by extracting features from them through convolutional layers; accordingly, this study classifies candlestick images generated from stock data in order to predict patterns. The study has two objectives. The first, referred to as Case 1, is to predict patterns from images made from same-day stock price data. The second, referred to as Case 2, is to predict next-day stock price patterns from images produced from daily stock price data. In Case 1, two data augmentation methods, random modification and Gaussian noise, are applied to generate more training data, and the generated images are used to fit the model. Given that deep learning requires a large amount of data, this study suggests a data augmentation method for candlestick images, and compares the accuracies for images with Gaussian noise and for different classification problems. All data in this study were collected through the OpenAPI provided by DaiShin Securities. Case 1 has five labels depending on the pattern: up with up closing, up with down closing, down with up closing, down with down closing, and staying.
The images in Case 1 are created by removing the last candle (-1 candle), the last two candles (-2 candles), or the last three candles (-3 candles) from 60-minute, 30-minute, 10-minute, and 5-minute candle charts. In a 60-minute candle chart, each candle in the image carries 60 minutes of information: an open price, high price, low price, and close price. Case 2 has two labels, up and down; for Case 2, images were generated from 60-minute, 30-minute, 10-minute, and 5-minute candle charts without removing any candle. Considering the nature of stock data, moving the candles in the images is suggested instead of existing data augmentation techniques. How much the candles are moved is defined as the modified value. The average difference in closing prices between candles was 0.0029; therefore, 0.003, 0.002, 0.001, and 0.00025 were used as modified values. The number of images was doubled after data augmentation. For the Gaussian noise, the mean was 0 and the variance was 0.01. For both Case 1 and Case 2, the model is based on VGG-Net16, which has 16 layers. As a result, 10-minute -1 candle showed the best accuracy among the 60-minute, 30-minute, 10-minute, and 5-minute candle charts, so 10-minute images were used for the rest of the Case 1 experiments. The images with three candles removed were selected for data augmentation and application of Gaussian noise. The 10-minute -3 candle setting resulted in 79.72% accuracy; images with a 0.00025 modified value and 100% of candles changed reached 79.92%, and applying Gaussian noise raised the accuracy to 80.98%. According to the outcomes of Case 2, 60-minute candle charts could predict the next day's patterns with 82.60% accuracy. In sum, this study is expected to contribute to further studies on the prediction of stock price patterns using images, and it provides a possible method for data augmentation of stock data.
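The two augmentation ideas, shifting candles by a "modified value" and adding Gaussian noise with mean 0 and variance 0.01, can be sketched on a toy OHLC array (all names and data here are illustrative, not the study's code):

```python
import numpy as np

rng = np.random.default_rng(42)

def shift_candles(ohlc, modified_value=0.003, frac=1.0):
    """Shift a random subset of candles up or down by the modified value."""
    out = ohlc.copy()
    idx = rng.random(len(out)) < frac
    out[idx] += rng.choice([-1.0, 1.0], size=(int(idx.sum()), 1)) * modified_value
    return out

def add_gaussian_noise(ohlc, var=0.01):
    """Add Gaussian noise with mean 0 and the study's variance of 0.01."""
    return ohlc + rng.normal(0.0, np.sqrt(var), ohlc.shape)

# Toy chart: 10 identical candles (open, high, low, close)
ohlc = np.tile([[1.00, 1.01, 0.99, 1.005]], (10, 1))
augmented = shift_candles(ohlc)
noisy = add_gaussian_noise(ohlc)
assert augmented.shape == ohlc.shape and noisy.shape == ohlc.shape
assert not np.allclose(augmented, ohlc)
```

In the study these transformed series would then be rendered as candlestick images before being fed to the VGG-style network; that rendering step is omitted here.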


Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.47-60
    • /
    • 2010
  • Most classification research has used kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), which are learning-based models, or Bayesian classifiers and NNA (Neural Network Algorithm), which are statistics-based methods. However, there are limitations of space and time when classifying the large number of web pages on today's internet. Moreover, most classification studies use a uni-gram feature representation, which does not capture the real meaning of words well. Korean web page classification faces additional problems because Korean words are often polysemous, that is, they carry multiple meanings. For these reasons, LSA (Latent Semantic Analysis) is proposed for classification in this environment of large data sets and polysemy. LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimension. Through SVD, it is possible to create a new low-dimensional semantic space for representing vectors, which makes classification efficient and allows analysis of the latent meaning of words and documents (or web pages). Although LSA works well for classification, it has drawbacks: as SVD reduces the dimensions of the matrix and creates the new semantic space, it considers which dimensions represent the vectors well, not which dimensions discriminate between them. This is why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects the optimal dimensions to both discriminate and represent vectors well, minimizing these drawbacks and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we achieve further improvement in classification by creating and selecting features, reducing stopwords, and weighting specific values statistically.
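A minimal LSA sketch, on a toy term-document matrix: the truncated SVD yields the low-dimensional semantic space the abstract describes. Note this reproduces only plain LSA; the paper's contribution, selecting dimensions that also discriminate classes, is not shown:

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents)
A = np.array([[2., 0., 1., 0.],
              [1., 0., 2., 0.],
              [0., 3., 0., 1.],
              [0., 1., 0., 2.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                      # dimension of the semantic space
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]       # rank-k approximation of A
doc_vecs = (np.diag(s[:k]) @ Vt[:k, :]).T  # documents in the k-dim space

assert A_k.shape == A.shape and doc_vecs.shape == (4, k)
# Truncation keeps most of the structure of A
assert np.linalg.norm(A - A_k) < 0.5 * np.linalg.norm(A)
```

Classification then operates on `doc_vecs` instead of the raw term counts; the paper's method would additionally score each of the k dimensions by how well it separates the classes before keeping it.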


A Study on the Proposal for Training of the Trade Experts to Promote Export of Domestic Companies (내수기업 수출활성화를 위한 무역전문인력 양성 방안에 대한 연구)

  • KANG, Ho-Yeon;JEONG, Yoon Say
    • THE INTERNATIONAL COMMERCE & LAW REVIEW
    • /
    • v.78
    • /
    • pp.93-117
    • /
    • 2018
  • In all countries of the world, the development of trade is an important factor for the survival of the national economy. Increased exports lead to national economic growth; exports are directly linked to employment, and the industrial structure develops in the direction of producing products with comparative advantages. Therefore, every country tries to promote exports regardless of the size of its economy. Accordingly, this paper focuses on promoting the exports of domestic companies and proposes cultivating trade experts for that purpose. Five methods are proposed to materialize the proposal. First, it is important to foster trade experts to expand and support one-person creative companies, and in particular to develop a professional education curriculum. A systematic curriculum should be designed and conducted across the whole process, including follow-up after education: teaching the detailed procedures for establishing a trade business, identifying relevant regulations and related organizations, understanding the special features of each exporting country, and covering the details of exporting procedures through specialist training for individual industries, while helping trainees maintain their networks so that they can easily get help from consultants. Second, it is necessary to educate traders working in the field to make them trade experts and to utilize them in on-the-job training and consulting. To do this, a systematic consultant selection process should be introduced, along with a system to educate and manage the consultants, because we must select the most appropriate candidates, educate them to be lecturers and consultants, and dispatch them to the field in order to achieve the best export results.
Nurturing trade professionals from the current trade workforce to activate the exports of domestic companies can be made more efficient through cooperation between trade education agencies and related agencies in various industries. Third, it is proposed to cultivate female trade experts by educating female trade workers whose careers have been disrupted. This gives career-disrupted women opportunities to work after training as trade professionals and provides a manpower pool to domestic companies preparing for export. Fourth, it is proposed to educate foreign students living in Korea to be trade experts and to utilize them as trade infrastructure. They can become trade professionals who contribute to export promotion: in the short term they gain opportunities for employment and start-ups in the field of trade, and in the mid to long term they may develop business networks between Korea and their home countries. To this end, we need to improve the visa system, expand free trade education opportunities, and support them in establishing small but strong enterprises. Fifth, it is proposed to proactively expand trade education to specialized high school students. Considering that most domestic companies pursuing export activation are small but strong companies or small and medium-sized companies, they may prefer high school graduates to university graduates because of financial limitations; moreover, specialized high school students may occupy a better position in the job market if they are equipped with trade expertise. This study is meaningful in that it is the first research focusing on cultivating trade experts to contribute to the export activation of domestic companies. However, it also has the limitation that it fails to reflect more specific voices from the field.
It is hoped that future research will derive detailed plans from the opinions of employees of domestic companies striving to become exporters.


Personalized Recommendation System for IPTV using Ontology and K-medoids (IPTV환경에서 온톨로지와 k-medoids기법을 이용한 개인화 시스템)

  • Yun, Byeong-Dae;Kim, Jong-Woo;Cho, Yong-Seok;Kang, Sang-Gil
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.147-161
    • /
    • 2010
  • As broadcasting and communication have recently converged, communication has been joined to TV, bringing many changes to TV viewing. IPTV (Internet Protocol Television) provides information services, movie content, and broadcasts over the internet, combining live programs with VOD (Video on Demand). Built on communication networks, it has become a new business area, and new technical issues have arisen: imaging technology for the service, networking technology without video cuts, security technologies to protect copyright, and so on. Through the IPTV network, users can watch their desired programs whenever they want. However, IPTV makes it difficult to find programs through search or menu navigation: the menu approach takes a long time to reach a desired program, the search approach fails when the title, genre, or actors' names are unknown, and entering letters with a remote control is cumbersome. The bigger problem is that users are often unaware of the services available to them. Thus, to resolve the difficulty of selecting VOD services in IPTV, a personalized recommendation service is proposed, which enhances users' satisfaction and uses their time efficiently. This paper provides programs suited to individual users, saving them time, through a filtering and recommendation system. The proposed system collects TV program information together with each user's preferred genres and detailed genres, channels, watched programs, and viewing times, based on individual IPTV watching records. Similarities between programs are compared using an ontology of TV programs, because program distance can be measured by similarity comparison. The TV program ontology used here is extracted from TV-Anytime metadata, which represents semantics.
The ontology also expresses contents and features numerically. Vocabulary similarity is determined through WordNet: all words describing the programs are expanded into upper and lower classes for word-similarity decisions, and the average over the described keywords is measured. Based on the calculated distances, similar programs are grouped by the K-medoids partitioning method, which divides objects into groups with similar characteristics. K-medoids sets K representative objects (medoids); distances from the representative objects define temporary clusters, and when the initial n objects are divided into K groups, the optimal representative object of each group is found through repeated trials after representatives are selected temporarily. Through this process, similar programs are clustered. When selecting programs through cluster analysis, weights are given to the recommendations as follows. When each cluster recommends programs, the programs near its representative object are recommended to users; the distance formula is the same as the similarity distance and serves as the basic figure determining the ranking of recommended programs. Weights are computed from the watching lists: the more programs a cluster contains, the higher its weight, which is defined as the cluster weight. Through this, representative sub-programs of the clusters are selected and the final program ranks are determined. However, because the cluster-representative programs include errors, weights reflecting each user's viewing preference are added to determine the final ranks, on which the recommended contents are based. Based on the proposed method, an experiment was carried out in a controlled environment.
The experiment shows the superiority of the proposed method compared with existing approaches.
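A toy K-medoids sketch (simple one-dimensional distances standing in for the ontology-based program similarities used in the paper): each cluster is represented by the member minimizing total distance to the others, mirroring the representative-object idea above:

```python
import numpy as np

def k_medoids(X, k, n_iter=10, seed=0):
    """Tiny PAM-style K-medoids on 1-D points (illustrative only)."""
    rng = np.random.default_rng(seed)
    D = np.abs(X[:, None] - X[None, :])            # pairwise distance matrix
    medoids = list(rng.choice(len(X), k, replace=False))
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)  # assign to nearest medoid
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                # representative object: member with least total in-cluster distance
                costs = D[np.ix_(members, members)].sum(axis=0)
                medoids[j] = int(members[np.argmin(costs)])
    return np.argmin(D[:, medoids], axis=1), medoids

# Two obvious "program" groups on a 1-D similarity axis
X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
labels, medoids = k_medoids(X, 2)
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```

Unlike K-means, the medoid is always an actual data point, which is what lets the system surface a concrete representative TV program for each cluster.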

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.201-220
    • /
    • 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the need to study the integration of document summarization technology in the domestic news data environment has become evident. To examine the effect of extractive summarization on fake news detection models, we first summarized news articles through extractive summarization, then built a summarized-news-based detection model, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance, whereas for DT (Decision Tree) the full-text-based model performed somewhat better. In the case of LR (Logistic Regression), our model exhibited superior performance, although the results did not show a statistically significant difference between the two models. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model confirms the possibility of performance improvement.
This study features an experimental application of extractive summarization in fake news detection research by employing various machine-learning algorithms. The study's limitations are, essentially, the relatively small amount of data and the lack of comparison between various summarization technologies. Therefore, an in-depth analysis that applies various analytical techniques to a larger data volume would be helpful in the future.
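For readers unfamiliar with extractive summarization, a minimal frequency-based sentence scorer (a generic stand-in, not the summarizer used in the study) conveys the idea: keep the top-n sentences by average word frequency, in their original order:

```python
from collections import Counter

def extractive_summary(sentences, n=1):
    """Keep the top-n sentences by average word frequency, in order."""
    freq = Counter(w for s in sentences for w in s.lower().split())
    def score(s):
        words = s.lower().split()
        return sum(freq[w] for w in words) / len(words)
    top = set(sorted(sentences, key=score, reverse=True)[:n])
    return [s for s in sentences if s in top]

doc = [
    "The claim spread quickly on social media.",
    "Fact checkers found the claim false.",
    "The weather was mild that day.",
]
summary = extractive_summary(doc, n=2)
assert len(summary) == 2
assert "The weather was mild that day." not in summary
```

In the detection pipeline, the classifier (LR, SVM, and so on) is then trained on these extracted sentences instead of the full article text.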

Clustering and classification of residential noise sources in apartment buildings based on machine learning using spectral and temporal characteristics (주파수 및 시간 특성을 활용한 머신러닝 기반 공동주택 주거소음의 군집화 및 분류)

  • Jeong-hun Kim;Song-mi Lee;Su-hong Kim;Eun-sung Song;Jong-kwan Ryu
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.6
    • /
    • pp.603-616
    • /
    • 2023
  • In this study, machine learning-based clustering and classification of residential noise in apartment buildings were conducted using frequency and temporal characteristics. First, a residential noise source dataset was constructed, consisting of floor impact, airborne, plumbing and equipment, environmental, and construction noise. Clustering was performed with the K-means method. For the frequency characteristics, Leq and Lmax values were derived for the 1/1 and 1/3 octave bands of each sound source; for the temporal characteristics, Leq values were derived every 6 ms through sound pressure level analysis over 5 s. The number of clusters k was determined through the silhouette coefficient and the elbow method. Clustering by frequency characteristics produced three clusters for both the Leq and Lmax analyses, while clustering by temporal characteristics produced 9 clusters for Leq and 11 for Lmax; the frequency-based clusters were organized according to the proportion of the low-frequency band. To utilize the clustering results, the residential noise sources were then classified using three kinds of machine learning models. Classification performed best for data labeled with Leq values in 1/3 octave bands, and the highest accuracy and F1-score, 93 % and 92 % respectively, were achieved by an Artificial Neural Network (ANN) model using both frequency and temporal features.
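The clustering step can be sketched with a plain k-means implementation on toy 2-D "spectral feature" vectors (all data hypothetical); the within-cluster SSE, or inertia, is the quantity an elbow plot inspects when choosing k, while the silhouette analysis is omitted for brevity:

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic spread-out initialization (avoids degenerate starts)."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - np.array(centers)[None, :],
                                  axis=2), axis=1)
        centers.append(X[int(np.argmax(d))])
    return np.array(centers)

def k_means(X, k, n_iter=20):
    centers = farthest_point_init(X, k)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    inertia = float(((X - centers[labels]) ** 2).sum())
    return labels, inertia

rng = np.random.default_rng(3)
# Toy "spectral feature" vectors: three well-separated source groups
X = np.vstack([rng.normal(c, 0.1, (20, 2)) for c in ([0, 0], [5, 0], [0, 5])])
inertias = {k: k_means(X, k)[1] for k in (1, 2, 3)}

# The elbow: inertia drops sharply until k reaches the true 3 groups
assert inertias[1] > inertias[2] > inertias[3]
assert inertias[3] < 0.1 * inertias[2]
```

In the study the feature vectors would be the octave-band Leq/Lmax values or the 6 ms Leq time series rather than synthetic points, and the silhouette coefficient would complement this elbow check.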