• Title/Summary/Keyword: 신뢰성 평가모델 (reliability evaluation model)


Evaluation of Correlation between Chlorophyll-a and Multiple Parameters by Multiple Linear Regression Analysis (다중회귀분석을 이용한 낙동강 하류의 Chlorophyll-a 농도와 복합 영향인자들의 상관관계 분석)

  • Lim, Ji-Sung;Kim, Young-Woo;Lee, Jae-Ho;Park, Tae-Joo;Byun, Im-Gyu
    • Journal of Korean Society of Environmental Engineers / v.37 no.5 / pp.253-261 / 2015
  • In this study, a Chlorophyll-a (chl-a) prediction model and the multiple parameters affecting algae occurrence at the Mulgeum site were evaluated by statistical analysis of water quality, hydraulic, and climate data collected at the site (1998~2008). Before the analysis, the control chart method and the typhoon effect period were applied to improve data reliability. After this preprocessing step, two methods were used. In Method 1, a chl-a prediction model was developed from the preprocessed data; in Method 2, a model was developed using only the significant parameters affecting chl-a identified after preprocessing. Correlation analysis revealed water temperature, pH, DO, BOD, COD, T-N, $NO_3-N$, $PO_4-P$, flow rate, flow velocity, and water depth as significant parameters affecting chl-a concentration. The prediction models from Methods 1 and 2 showed high $R^2$ values of 0.799 and 0.790, respectively. Each model was validated with data from 2009 to 2010: Method 1 yielded 20.912 for the training period and 24.423 for the validation period, while Method 2 yielded 21.422 and 26.277, respectively. BOD, DO, and $PO_4-P$ in particular played an important role in both models, so analysis of algae occurrence at the Mulgeum site should focus on these three parameters.
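
The multiple-regression modelling described above can be sketched as follows; this is an illustrative reconstruction with synthetic data, not the authors' dataset or code (all variable names and figures below are hypothetical):

```python
import numpy as np

# Hypothetical sketch: fit a multiple linear regression of chl-a on a few
# water-quality parameters and report R^2, as the paper does for its models.
rng = np.random.default_rng(0)
n = 120
X = rng.normal(size=(n, 4))          # e.g. water temp, BOD, DO, PO4-P (standardized)
true_beta = np.array([2.0, 1.5, -1.0, 3.0])
y = X @ true_beta + rng.normal(scale=0.5, size=n)   # synthetic chl-a response

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination R^2 (the paper reports 0.799 and 0.790).
resid = y - A @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 3))
```

A validation step like the paper's would simply evaluate the same `A @ beta` prediction on held-out years and compare the errors between training and validation periods.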

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.107-118 / 2016
  • The advent of 5G mobile communications, expected in 2020, will enable services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time operation. In particular, a high level of reliability and delay sensitivity together with an increased data rate are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive technical requirements and service scenarios. The first scenario covers broadcast services that use a high data rate, for example at sporting events or in emergencies. The second scenario supports e-Health, car reliability, and similar services; the third is related to VR games, which demand delay sensitivity and real-time techniques. These groups have recently been reaching agreement on the requirements for such scenarios and their target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates the signals of the control plane from the packets of the data plane. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very quickly, messages to be delivered in an emergency must be transported in a very short time. This is a typical example requiring high delay sensitivity: 5G has to meet the high-reliability and delay-sensitivity requirements of V2X in the field of traffic control. For these reasons, V2X is a major delay-critical application.
V2X (vehicle-to-infra/vehicle/nomadic) covers all types of communication applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: between a vehicle and infrastructure (vehicle-to-infrastructure; V2I), between a vehicle and another vehicle (vehicle-to-vehicle; V2V), and between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N), with further variants expected in various fields. Because the SDN structure is under consideration as the next-generation network architecture, the SDN architecture is significant here. However, the centralized architecture of SDN can be unfavorable for delay-sensitive services, because a centralized controller must communicate with many nodes and provide the processing power for them. Therefore, in the case of emergency V2X communications, delay-related control functions require a tree-like supporting structure. In such a scenario, the architecture of the network that processes the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN for processing the information is needed. This study examined the SDN architecture considering the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation on the speed of the car, the cell radius, and the cell tier to derive the range of cells for information transfer in the SDN network. In the simulation, because 5G provides a sufficiently high data rate, the information delivered to the car for neighboring-vehicle support was assumed to be error-free. Furthermore, the 5G small cell was assumed to have a cell radius of 50-100 m, and the maximum speed of the vehicle was taken as 30-200 km/h in order to examine the network architecture that minimizes the delay.
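
The time budget implied by the abstract's assumptions can be checked with simple arithmetic; the function below is a hedged sketch using the stated cell radii (50-100 m) and speeds (30-200 km/h), not the authors' system-level simulator:

```python
# Back-of-the-envelope sketch: how long a vehicle dwells in a 5G small cell,
# which bounds the time available to deliver an emergency V2X message before
# the next handover. Assumes a straight path through the cell diameter.
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    speed_ms = speed_kmh / 3.6            # km/h -> m/s
    return 2 * cell_radius_m / speed_ms   # chord of length = cell diameter

worst = dwell_time_s(50, 200)    # smallest cell, fastest car
best = dwell_time_s(100, 30)     # largest cell, slowest car
print(f"{worst:.2f} s to {best:.2f} s")   # → 1.80 s to 24.00 s
```

Even the worst case leaves well over a second per cell, which is why the architecture-induced control-plane delay, rather than the radio dwell time, dominates the emergency-message budget.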

Methods of Incorporating Design for Production Considerations into Concept Design Investigations (개념설계 단계에서 총 건조비를 최소로 하는 생산지향적 설계 적용 방법)

  • H.S.,Bong
    • Bulletin of the Society of Naval Architects of Korea / v.27 no.3 / pp.131-136 / 1990
  • One important outcome of many years of steadily recording and refining shipbuilding production records and productivity data is that production information is now available in a condensed, reliable form that can be fully exploited during the various stages of ship design review. Such data can include the work content at each production stage and estimates of material and labour costs; improving the overall design methodology for ships and offshore structures requires combining broad knowledge of the build strategy, purchasing policy, and production technology that underpin Design for Production. Recently, the introduction of management, design, and production support systems, as seen in parts of CIMS, has made such a design process feasible. In parallel, advances in computing for design support, especially interactive graphics, have greatly amplified the designer's ability to vary a ship's geometry and structural arrangement and inspect the results immediately. The ability to generate and evaluate alternative design arrangements rapidly at the early design stage is clearly a major advantage, and accounting for production factors at that stage is an even more notable advance. By examining, in a short time, design alternatives that accurately reflect production methods and their associated costs, and by computing and comparing the required production costs, the designer can select at the contract stage the optimal design that minimizes total production cost, on the basis of actual production methods and reliable production records. With such a new design tool, production information, knowledge, and performance data can now be reflected in early design. This paper presents the results of research at the University of Newcastle upon Tyne (UK) that developed a new ship structural design method incorporating these features. The design study comprises five steps: (1) defining the structural geometry using computer graphics linked to a production information database, and calculating/deciding the scantlings; (2) providing information on production technology and build methods to fix the block division and panel arrangement; (3) using (1) and (2), performing work content assessment at each production stage: a) preparation, b) fabrication/assembly, c) erection; (4) supplying shipyard facility and cost-norm information to compute the material cost, labour cost, and overhead cost of each design alternative; (5) computing the total production cost and comparing the design alternatives. The method was applied to a bulk carrier design, and sensitivity studies were performed on changes in structural geometry, levels of standardisation, and structural topology. The computing environment used VAX graphics hardware to facilitate interactive access, presenting the structural geometry, work content analysis, and production cost status of each design alternative.
In conclusion, this study is believed to be the first attempt to couple a detailed production cost model with interactive graphics at the early design stage, enabling rapid generation and comparison of design alternatives and the incorporation of production performance data into early design; it is hoped that it will contribute to the advancement of optimal Design for Production. A summary of the system's design applications is given in the appendix; for details, see references [4] and [7].
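
The cost-comparison step (stages (3)-(5) above) can be sketched as follows; the cost figures, rates, and data layout are purely illustrative assumptions, not figures from the paper:

```python
# Illustrative sketch of total-production-cost comparison across design
# alternatives: total cost = material + labour + overhead, where labour
# follows the per-stage work content assessment (preparation,
# fabrication/assembly, erection). All numbers below are made up.
STAGES = ("preparation", "fabrication_assembly", "erection")

def total_production_cost(design: dict, labour_rate: float, overhead_rate: float) -> float:
    labour_hours = sum(design["work_content"][s] for s in STAGES)   # stage (3)
    labour = labour_hours * labour_rate                             # stage (4)
    return design["material"] + labour + labour * overhead_rate     # stage (5)

designs = {
    "A": {"material": 1_000_000,
          "work_content": {"preparation": 500, "fabrication_assembly": 4_000, "erection": 1_500}},
    "B": {"material": 1_100_000,
          "work_content": {"preparation": 400, "fabrication_assembly": 3_200, "erection": 1_400}},
}
best = min(designs, key=lambda k: total_production_cost(designs[k],
                                                        labour_rate=40.0,
                                                        overhead_rate=0.5))
print(best)
```

With these hypothetical figures, alternative "A" wins on total cost despite its higher work content, showing why material and labour must be compared jointly rather than alternative by alternative.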


A Study on the Essence and Tendency of Modern Manager (현대 경영자로서의 본질과 성향 연구)

  • Yeom, Bae-Hoon;Kim, Hyunsoo
    • Journal of Service Research and Studies / v.10 no.3 / pp.23-42 / 2020
  • This study conceptualized the essence and propensity of the modern manager in the service age on a philosophical basis and developed items to evaluate the conceptualized content. It was carried out as a new study to deepen research on management philosophy and management theory within a new management framework. To establish the philosophical foundation, the essence of modern management was conceptualized based on the fundamental ideas of East and West, and evaluation items were then developed to put that essence and propensity into practical use through analytical and empirical methods. Analysis of the representative ideas of mankind showed that the Book of Changes qualifies as a philosophical model from which the essence of modern management can be derived: it explains the workings of the world through the structure of two opposing parties, such as Taiji or Yin and Yang, and its central idea is the process of acknowledging the contradictions within each opposing party and overcoming them through change. Following the conceptual study, empirical research on the essence and propensity of the modern manager was conducted in two stages. First, a qualitative study using the constant comparative method (CCM), focus group interviews (FGI), and text mining was conducted to derive conceptualization items for the essence and propensity that modern managers should possess. Second, a quantitative study using factor analysis, with sample and measurement items developed through literature review and FGI, was conducted to derive the essential concept of modern management. Finally, the essence of modern management was derived as learning, preparation, challenge, inclusion, trust, morality, and sacrifice. In the future, empirical research on the effectiveness of this essence of modern management should be conducted for global and representative Korean companies.

Development of Health Promotion Program through IUHPE - Possibilities of collaboration in East Asia - (IUHPE를 통한 건강 증진 프로그램의 발달-동아시아권의 공동연구의 가능성-)

  • Moriyama, Masaki
    • Proceedings of The Korean Society of Health Promotion Conference / 2004.10a / pp.1-16 / 2004
  • This paper considers the possibilities of health promotion from the following perspectives: (1) the IUHPE, (2) socio-cultural similarities, (3) action research, and (4) learning from our past. 1. The IUHPE values decentralized activities through regions, and countries such as Japan, Korea, Hong Kong, Taiwan and China belong to the NPWP region. Since the IUHPE World Conference was held in Japan in 1995, Japan used to account for more than 60% of NPWP membership; after 2001, membership has been increasing rapidly in the Chinese-speaking sub-region. Transnational collaboration is still in its beginning phase. 2. Confucianism is a key point. The Confucian tradition should be seen not only as an obstacle but also as an advantage in seeking a form of health promotion more acceptable in East Asia. 3. Within the new public health framework, people are expected to create and live their own health. However, especially in Japan, a lack of face-to-face explicit interaction is still common in health-promotion settings as well as academic settings. Therefore, the author tried participatory approaches such as asking WlFY (interactive questions designed for subjects to review their daily life and environment) and introducing round-table interactions. So far, the majority of participants have welcomed the new trials. 4. The following social phenomena after the end of the Japanese invasion and occupation of Korea in 1945 are discussed comparatively: the status of oriental medicine, the separation of dispensary services, and the health promotion specialist as a national license. In contrast to the Japanese tendency to maintain the status quo and postpone substantial social change, a trend toward rapid and dynamic social change is more commonly observed in Korea. Although all of the above possibilities are still in their beginning stages, they offer interesting directions that await further challenges and accompanying research.


Comparison of Forest Carbon Stocks Estimation Methods Using Forest Type Map and Landsat TM Satellite Imagery (임상도와 Landsat TM 위성영상을 이용한 산림탄소저장량 추정 방법 비교 연구)

  • Kim, Kyoung-Min;Lee, Jung-Bin;Jung, Jaehoon
    • Korean Journal of Remote Sensing / v.31 no.5 / pp.449-459 / 2015
  • The conventional National Forest Inventory (NFI)-based method of forest carbon stock estimation is suitable for national-scale estimation but not for regional-scale estimation, owing to the lack of NFI plots. In this study, for regional-scale carbon stock estimation, we created grid-based forest carbon stock maps using spatial ancillary data and two up-scaling methods. Chungnam province was chosen as the study area, for which the $5^{th}$ NFI (2006~2009) data were collected. The first method (Method 1) uses the forest type map as ancillary data and a regression model for forest carbon stock estimation, whereas the second method (Method 2) uses satellite imagery and the k-Nearest Neighbor (k-NN) algorithm. Additionally, to account for uncertainty, the final AGB carbon stock maps were generated from 200 iterations of a Monte Carlo simulation. Compared to the NFI-based estimate (21,136,911 tonC), the total carbon stock was over-estimated by Method 1 (22,948,151 tonC) but under-estimated by Method 2 (19,750,315 tonC). In a paired T-test with 186 independent data points, the average carbon stock estimate of the NFI-based method was statistically different from Method 2 (p<0.01) but not from Method 1 (p>0.01). In particular, the Monte Carlo simulation showed that the smoothing effect of the k-NN algorithm and the mis-registration error between NFI plots and the satellite image can lead to large uncertainty in carbon stock estimation. Although Method 1 was found suitable for carbon stock estimation of the heterogeneous forest stands typical of Korea, a satellite-based method is still needed to provide periodic estimates over large, uninvestigated forest areas. Future work will therefore extend the spatial and temporal scope of the study and pursue robust carbon stock estimation with various satellite images and estimation methods.
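
Method 2's k-NN up-scaling with Monte Carlo uncertainty can be sketched roughly as below; the plot data, band count, and perturbation scale are synthetic assumptions for illustration, not the study's NFI or Landsat TM data:

```python
import numpy as np

# Illustrative sketch: estimate a pixel's carbon stock by averaging the k
# nearest NFI plots in spectral-feature space, then repeat under random
# feature perturbations (Monte Carlo) to gauge estimation uncertainty.
rng = np.random.default_rng(1)
plots_x = rng.uniform(size=(200, 3))                         # plot spectral bands (synthetic)
plots_y = 50 + 100 * plots_x[:, 0] + rng.normal(scale=5, size=200)  # plot carbon, tonC/ha

def knn_predict(x, k=5):
    d = np.linalg.norm(plots_x - x, axis=1)
    idx = np.argsort(d)[:k]
    return plots_y[idx].mean()    # averaging is the source of the smoothing effect

pixel = np.array([0.4, 0.5, 0.6])
runs = [knn_predict(pixel + rng.normal(scale=0.02, size=3)) for _ in range(200)]
est, unc = float(np.mean(runs)), float(np.std(runs))
```

The per-neighbor averaging illustrates why k-NN smooths extreme values, and the spread of `runs` plays the role of the study's 200-iteration uncertainty estimate.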

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.73-92 / 2014
  • An adaptive clustering-based collaborative filtering technique was proposed to solve the fundamental problems of collaborative filtering, such as the cold-start, scalability, and data-sparsity problems. Previous collaborative filtering techniques make recommendations from the predicted preference of a user for a particular item, using a similar-item subset and a similar-user subset composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly, and creating the similar-item and similar-user subsets becomes harder. In addition, as the scale of the service increases, the time needed to create these subsets grows geometrically, and so does the response time of the recommendation system. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. With this method, the run time for creating a similar-item or similar-user subset is reduced, the reliability of the system is higher than when only user preference information is used to create those subsets, and the cold-start problem is partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences between them.
In this phase, a list of items is built for a user by examining the item clusters in decreasing order of the inter-cluster preference of the user's cluster, then selecting and ranking items according to predicted or recorded user preference information. With this method, the model-creation phase bears the highest load of the recommendation system, minimizing the load at run time; the scalability problem is thus addressed, and large-scale recommendation can be performed with highly reliable collaborative filtering. Third, missing user preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based or user-based prediction; here, Hao Ji's idea of using both was improved. The reliability of the recommendation service is raised by combining the predicted values of both techniques under the conditions of the recommendation model. By predicting user preferences from the item or user clusters, prediction time is reduced, and missing preferences can be predicted at run time. Fourth, the item and user feature vectors are made to learn from subsequent user feedback: normalized user feedback is applied to both vectors. This mitigates a problem inherited from context-based filtering, whose item and user feature vectors are based on user profiles and item properties, namely the difficulty of quantifying the qualitative features of items and users.
The elements of the user and item feature vectors are therefore matched one to one, and when user feedback on a particular item is obtained, it is applied to the corresponding element of the opposite feature vector. The method was verified by comparing its performance with existing hybrid filtering techniques on two measures: MAE (Mean Absolute Error) and response time. By MAE, the technique was confirmed to improve the reliability of the recommendation system; by response time, it was found suitable for a large-scale recommendation system. This paper thus suggests an adaptive clustering-based collaborative filtering technique with high reliability and low time complexity, but it has some limitations: because the technique focuses on reducing time complexity, a large improvement in reliability was not expected. A next step will be to improve the technique with rule-based filtering.
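
The inter-cluster preference idea in the first two methodologies can be illustrated with a toy sketch; the rating matrix, cluster assignments, and fill-in rule here are hypothetical simplifications, not the authors' implementation:

```python
import numpy as np

# Toy sketch: given fixed user and item clusters, estimate an inter-cluster
# preference as the mean of the known ratings in each (user-cluster,
# item-cluster) cell, and use that mean to fill missing preferences.
rng = np.random.default_rng(2)
R = rng.integers(1, 6, size=(8, 6)).astype(float)   # ratings 1..5 (synthetic)
mask = rng.random(R.shape) < 0.3                     # True = preference missing
user_cluster = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # assumed precomputed clusters
item_cluster = np.array([0, 0, 0, 1, 1, 1])

def predict(u, i):
    cell = ((user_cluster[:, None] == user_cluster[u])
            & (item_cluster[None, :] == item_cluster[i])
            & ~mask)                                 # known ratings in this cell
    return R[cell].mean() if cell.any() else R[~mask].mean()

# MAE over the held-out (masked) entries, mirroring the paper's evaluation metric.
errs = [abs(predict(u, i) - R[u, i]) for u, i in zip(*np.where(mask))]
mae = float(np.mean(errs))
```

Because the cell means are computed once at model-creation time, a run-time lookup is a constant-time operation, which is the scalability benefit the abstract describes.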

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups to meet the growing data demands of 2G through 5G. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To provide those services, on top of high-speed data, reduced latency and high reliability are critical for real-time operation. Accordingly, 5G targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of $10^6$ devices/㎢. In particular, in intelligent traffic control systems and services using vehicle-based V2X (Vehicle to X), such as traffic control, low delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting indoor use; these constraints are difficult to overcome with existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, basically a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay.
Since SDNs with the usual centralized structure find it difficult to meet the desired delay level, studies on the optimal size of an SDN for information processing are needed. SDNs thus need to be split at a certain scale to construct a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized structure, even in the worst case. In such SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, the RTD is not a significant factor, because the link is fast enough to keep it below 1 ms, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell tier from which the vehicle should request relevant information according to the information flow. For the simulation, because the data rate of 5G is high enough, the information for neighboring-vehicle support was assumed to reach the car without errors. Furthermore, 5G small cells were assumed to have cell radii of 50-250 m, and the maximum speed of the vehicle was taken as 30-200 km/h in order to examine the network architecture that minimizes the delay.
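
The delay decomposition discussed above (information change cycle, RTD, SDN processing time) can be sketched numerically; the figures and the uniform-wait assumption below are illustrative, not results from the paper's simulation:

```python
# Hedged sketch of the end-to-end delay budget: total latency is roughly the
# round-trip delay (RTD) plus the SDN's data processing time plus the wait
# for the next information-update cycle (half the cycle on average, assuming
# requests arrive uniformly within the cycle).
def expected_delay_ms(rtd_ms: float, sdn_proc_ms: float, update_cycle_ms: float) -> float:
    return rtd_ms + sdn_proc_ms + update_cycle_ms / 2

# With an RTD under 1 ms, the update cycle and SDN processing dominate,
# matching the abstract's claim that the RTD is not the significant factor.
d = expected_delay_ms(rtd_ms=1.0, sdn_proc_ms=5.0, update_cycle_ms=100.0)
print(d)   # → 56.0
```

Shrinking the SDN domain reduces `sdn_proc_ms` (fewer nodes per controller), which is exactly the trade-off the split-SDN proposal explores.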