• Title/Summary/Keyword: 임계시간 (critical time)

Search results: 839

Optimization of TDA Recycling Process for TDI Residue using Near-critical Hydrolysis Process (근임계수 가수분해 공정을 이용한 TDI 공정 폐기물로부터 TDA 회수 공정 최적화)

  • Han, Joo Hee;Han, Kee Do;Jeong, Chang Mo;Do, Seung Hoe;Sin, Yeong Ho
    • Korean Chemical Engineering Research
    • /
    • v.44 no.6
    • /
    • pp.650-658
    • /
    • 2006
  • The recovery of TDA from the solid waste of a TDI plant (TDI-R) by near-critical hydrolysis was studied by means of a statistical design of experiments. The main and interaction effects of the process variables were determined from experiments in a batch reactor, and a correlation equation relating TDA yield to the process variables was obtained from experiments in a continuous pilot plant. The effects of reaction temperature, catalyst type and concentration, and the weight ratio of water to TDI-R (WR) on TDA yield were confirmed to be significant. TDA yield decreased with increasing reaction temperature and catalyst concentration, and increased with increasing WR. As a catalyst, NaOH was more effective than Na₂CO₃ for TDA yield. The interaction effects between catalyst concentration and temperature, between WR and temperature, and between catalyst type and reaction time were also significant. Although the effect of catalyst concentration on TDA yield at 300°C (subcritical water) was insignificant, the yield decreased with increasing catalyst concentration at 400°C (supercritical water). Conversely, the yield increased with increasing WR at 300°C but was nearly unaffected by WR at 400°C. The optimization of the process variables for TDA yield was then explored in a pilot plant for scale-up. Catalyst concentration and WR were selected as process variables with respect to economic feasibility and efficiency, and their effects on TDA yield were examined by means of a central composite design. TDA yield increased with increasing catalyst concentration; it reached a maximum at a WR below 2.5 and then decreased as WR increased further, although the WR at which the yield peaked increased with catalyst concentration. A quadratic correlation equation in catalyst concentration and WR was obtained by regression analysis of the pilot-plant results.
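
The final step the abstract describes, regressing TDA yield on catalyst concentration and WR with a quadratic (central composite design) model, can be sketched as follows. This is a minimal illustration in Python; the data points and variable names are hypothetical, not values from the paper.

```python
import numpy as np

# Hypothetical CCD-style data: catalyst concentration (wt%), WR, TDA yield (%)
conc = np.array([1.0, 1.0, 3.0, 3.0, 0.6, 3.4, 2.0, 2.0, 2.0])
wr   = np.array([1.5, 3.5, 1.5, 3.5, 2.5, 2.5, 1.1, 3.9, 2.5])
yld  = np.array([62., 60., 70., 74., 58., 76., 66., 64., 72.])

# Full quadratic model: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
X = np.column_stack([np.ones_like(conc), conc, wr, conc**2, wr**2, conc * wr])
coef, *_ = np.linalg.lstsq(X, yld, rcond=None)

# Stationary point of the fitted surface (candidate optimum for TDA yield)
b1, b2, b3, b4, b5 = coef[1:]
A = np.array([[2 * b3, b5], [b5, 2 * b4]])
opt = np.linalg.solve(A, -np.array([b1, b2]))
print("fitted coefficients:", coef.round(3))
print("stationary point (conc, WR):", opt.round(2))
```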

A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik;Jeong, Ye-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.121-145
    • /
    • 2010
  • University timetabling, which depends on the educational environment of each university, is an NP-hard problem: the amount of computation required to find a solution increases exponentially with problem size. For many years there have been many studies on university timetabling, motivated by the need to generate timetables automatically for students' convenience and effective lessons, and to allocate subjects, lecturers, and classrooms efficiently. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, the course timetable for liberal arts is scheduled by the office of academic affairs, while the course timetable for major subjects is scheduled by each department. We found several problems in how departments currently build course timetables. First, it is time-consuming and inefficient for each department to do the routine, repetitive timetabling work manually. Second, many classes are concentrated into a few time slots, which reduces the effectiveness of students' classes. Third, several major subjects may overlap required liberal-arts subjects in the same time slots, forcing students to choose only one of the overlapping subjects. Fourth, many subjects are taught by the same lecturers every year, and most lecturers prefer the same time slots as in the previous year, which means that reusing previous timetables would be helpful. To solve these problems and support effective course timetabling in each department, this study proposes a two-phase timetabling support system. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning. In the second phase, the department schedules the timetable through an interactive user interface under timetabling criteria, based on a rule-based approach. The study is illustrated with data from Hanshin University. We classified the timetabling criteria into intrinsic and extrinsic criteria. The three intrinsic criteria, related to lecturers, classes, and classrooms, are all hard constraints. The four extrinsic criteria concern the number of lesson hours per lecturer, the prohibition of lecture allocation to specific day-hours for committee members, the number of subjects in the same day-hour, and the use of common classrooms. The number of lesson hours per lecturer comprises three sub-criteria: minimum lesson hours per week, maximum lesson hours per week, and maximum lesson hours per day. The extrinsic criteria are all hard constraints except for minimum lesson hours per week, which is treated as a soft constraint. In addition, we propose two indices: one measuring the similarity between the subjects of the current semester and those of previous timetables, and one evaluating the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes, subject name and lecturer, between the current and a previous semester. The distribution-degree index, based on information entropy, indicates how evenly subjects are spread across the timetable. To show the approach's viability, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity of the most similar cases across all departments was 41.72%, suggesting that a timetable template generated from the most similar case is indeed helpful. Sensitivity analysis showed that the distribution degree increases when 'the number of subjects in the same day-hour' criterion is set to more than 90%.
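
The abstract's distribution-degree index is described only as "based on information entropy." A minimal sketch of one plausible reading, normalized Shannon entropy over the subject counts per day-hour slot, is shown below; the exact definition in the paper may differ, and the slot data here are hypothetical.

```python
import math
from collections import Counter

def distribution_degree(slot_assignments):
    """Normalized Shannon entropy of subject counts per day-hour slot.

    slot_assignments: list of (day, hour) slots, one entry per scheduled class.
    Returns a value in [0, 1]; 1 means classes are spread perfectly evenly.
    """
    counts = Counter(slot_assignments)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

# Hypothetical timetable: (day, hour) of each scheduled class
schedule = [("Mon", 9), ("Mon", 9), ("Tue", 10), ("Wed", 11), ("Thu", 9), ("Fri", 14)]
print(round(distribution_degree(schedule), 3))
```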

Understanding the Response Characteristics of X-ray Verification Film (X-선 Verification 필름의 반응 특성에 관한 연구)

  • Yeo Inhwan;Seong Jinsil;Chu Sung Sil;Kim Gwi Eon;Suh Chang Ok;Burch Sandra E.;Wang Chris K.
    • Radiation Oncology Journal
    • /
    • v.16 no.4
    • /
    • pp.505-515
    • /
    • 1998
  • Purpose: This study is intended to characterize the sensitometric behavior and emulsion properties of the commercially available CEA TVS film in comparison with the Kodak X-Omat V film. Materials and Methods: For this purpose, we formulated an analytic expression for the characteristic curves of x-ray film exposed to mixed radiation of electrons, photons, and visible light. The expression was developed from reaction-rate and target-hit theories. Unlike previous expressions, it relates optical density to emulsion properties such as grain size and silver bromide content. We also developed a quantity that characterizes the film's response to visible light relative to its response to photons and electrons; this quantity can be expressed as a function of grain area. Thus, we developed mathematical expressions and quantities with which the emulsion properties of the films can be inferred from their sensitometric characteristics. To demonstrate the use of this analysis, we exposed CEA and Kodak verification films to mixed radiation of electrons, photons, and visible light and interpreted the experimental results accordingly. Results: We demonstrated that (1) the saturation density increases as the silver bromide content increases, (2) the time required to reach the threshold dose (at which the film begins to respond) under visible-light exposure decreases as the grain size increases, and (3) the CEA film contains more silver bromide, whereas the Kodak film contains larger grains. These findings were later supported by data provided by the manufacturers. Conclusion: This study presented an analytical and experimental basis for understanding the response of x-ray film in terms of its emulsion properties.
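
The abstract's analytic expression is not reproduced here, but single-hit target theory commonly gives a characteristic curve of saturating-exponential form, D(E) = fog + D_sat(1 − e^(−kE)), with D_sat tracking silver bromide content and k tracking grain cross-section. As a hedged sketch only (the paper's full expression also covers mixed electron/photon/light exposure), a least-squares fit of that simple form might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def hd_curve(exposure, d_sat, k, fog):
    # Single-hit target-theory form: saturating exponential plus base fog.
    return fog + d_sat * (1.0 - np.exp(-k * exposure))

# Hypothetical sensitometric readings: exposure (arbitrary units) vs. optical density
exposure = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
density  = np.array([0.2, 0.55, 0.85, 1.3, 1.8, 2.1, 2.2])

params, _ = curve_fit(hd_curve, exposure, density, p0=[2.0, 0.5, 0.2])
d_sat, k, fog = params
print(f"D_sat={d_sat:.2f} (tracks AgBr content), k={k:.2f} (tracks grain size), fog={fog:.2f}")
```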


A Feasibility Study of the K-LandBridge through a Linear Programming Model of Minimum Transport Costs (최소운송비용의 선형계획모형을 통한 K-LandBridge의 타당성 연구)

  • Koh, Yong Ki;Seo, Su Wan;Na, Jung Ho
    • Journal of Korea Port Economic Association
    • /
    • v.32 no.3
    • /
    • pp.95-108
    • /
    • 2016
  • China has recently advocated a national strategy called "One Belt One Road," has moved to refine it into detailed action plans, and continues to supplement them. However, the Korean Peninsula, including North Korea, has not been included at all in this Chinese development framework in terms of international logistics. The idea of a Korea-China rail ferry system is being raised again, and now is the time to develop effective policy on an international multimodal transport system in Northeast Asia. This paper introduces the K-LB (Korea LandBridge) as such a plan and conducts a feasibility study of it. The K-LB consists of a Korea-Russia train ferry system based at Pohang Yeongil New Port (right wing) and a Korea-China train ferry system based at Saemangeum New Port (left wing); the two wings are linked by Korea's existing rail network. This study finds that the K-LB is an effective international logistics system under current terms and conditions and demonstrates that introducing the K-LB on the peninsula is feasible. More precisely, using a linear programming model whose objective function minimizes transport cost, the paper derives the ranges and conditions of transport costs under which the K-LB remains effective. According to the results, if the transport cost of the K-LB is about 34.5% cheaper than that of sea transport (such as container transport), goods will be routed via the K-LB. This means the K-LB has a competitive advantage owing to faster customs clearance and the elimination of loading and unloading procedures relative to container transport; the threshold is also not large. Therefore, the K-LB is competitive enough to justify its introduction into the Northeast Asian logistics system.
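
The feasibility argument rests on a linear program that minimizes total transport cost subject to demand and capacity constraints. A toy sketch of that setup with scipy (route costs, demand, and capacity are invented for illustration; only the ~34.5% cost gap echoes the paper's threshold):

```python
from scipy.optimize import linprog

# Hypothetical unit transport costs (USD/TEU) for two competing routes
cost_sea, cost_klb = 1000.0, 650.0   # K-LB ~34.5% cheaper, echoing the paper's threshold
demand = 500.0                        # TEU to be moved
capacity_klb = 400.0                  # ferry/rail capacity cap on the K-LB leg

# Decision variables: x = [TEU by sea, TEU by K-LB]; minimize total cost
res = linprog(
    c=[cost_sea, cost_klb],
    A_eq=[[1.0, 1.0]], b_eq=[demand],          # all demand must be moved
    A_ub=[[0.0, 1.0]], b_ub=[capacity_klb],    # K-LB capacity limit
    bounds=[(0, None), (0, None)],
)
print("TEU by sea, TEU by K-LB:", res.x)       # the cheaper route fills to capacity
```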

Analyzing the Characteristics of Atmospheric Stability from Radiosonde Observations in the Southern Coastal Region of the Korean Peninsula during the Summer of 2019 (라디오존데 고층관측자료를 활용한 한반도 남해안 지역의 2019년도 여름철 대기 안정도 특성 분석)

  • Shin, Seungsook;Hwang, Sung-Eun;Lee, Young-Tae;Kim, Byung-Taek;Kim, Ki-Hoon
    • Journal of the Korean earth science society
    • /
    • v.42 no.5
    • /
    • pp.496-503
    • /
    • 2021
  • By analyzing the characteristics of atmospheric stability in the southern coastal region of the Korean Peninsula in the summer of 2019, we derived a quantitative threshold of atmospheric instability indices for predicting rainfall events on the Korean Peninsula. For this analysis, we used all 243 radiosonde intensive observations recorded at the Boseong Standard Weather Observatory (BSWO) in the summer of 2019. To analyze atmospheric stability during rain events and mesoscale atmospheric phenomena, convective available potential energy (CAPE) and storm relative helicity (SRH) were calculated and compared. In particular, SRH was analyzed over four layers of the atmosphere (0-1, 0-3, 0-6, and 0-10 km). Rain events were categorized into three cases: no rain, within 12 h before rain, and rain. The results showed that SRH was more suitable than CAPE for predicting the rainfall events in Boseong during the summer of 2019, and that rainfall occurred when the 0-6 km SRH was 150 m² s⁻² or more, the same threshold used for a possible weak tornado. In addition, the stability analysis during the Changma (the summer rainy period on the Korean Peninsula) and typhoon seasons showed that the 0-6 km SRH was larger than the mean 0-10 km SRH, even though SRH generally increases as the depth of the layer increases. Therefore, the 0-6 km SRH was the more effective indicator of rainfall events caused by typhoons in Boseong in the summer of 2019.
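
SRH over a layer is the integral of storm-relative flow against the horizontal vorticity; in discrete form over radiosonde levels it reduces to a sum of cross products of storm-relative winds at adjacent levels. A minimal numpy sketch under that standard definition (the wind profile and storm motion below are made up, not BSWO data):

```python
import numpy as np

def storm_relative_helicity(z, u, v, storm_u, storm_v, depth):
    """Discrete SRH (m^2 s^-2) from the surface up to `depth` (m AGL).

    Standard layer-sum form with storm-relative winds (uu, vv):
      SRH = sum_k [ uu[k+1]*vv[k] - uu[k]*vv[k+1] ]
    """
    mask = z <= depth
    uu, vv = u[mask] - storm_u, v[mask] - storm_v
    return float(np.sum(uu[1:] * vv[:-1] - uu[:-1] * vv[1:]))

# Hypothetical sounding: heights (m AGL) and wind components (m/s)
z = np.array([0, 500, 1000, 2000, 3000, 4500, 6000])
u = np.array([2.0, 6.0, 9.0, 13.0, 16.0, 19.0, 22.0])
v = np.array([4.0, 7.0, 8.0, 7.0, 5.0, 3.0, 1.0])

srh_0_6km = storm_relative_helicity(z, u, v, storm_u=10.0, storm_v=4.0, depth=6000)
print(f"0-6 km SRH: {srh_0_6km:.1f} m2 s-2")  # compare with the 150 m2 s-2 threshold
```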

Prevalence and risk factors of peri-implantitis: A retrospective study (임플란트 주위염의 유병률 및 위험요소분석에 관한 후향적 연구)

  • Lee, Sae-Eun;Kim, Dae-Yeob;Lee, Jong-Bin;Pang, Eun-Kyoung
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.57 no.1
    • /
    • pp.8-17
    • /
    • 2019
  • Purpose: This study analyzed the prevalence of peri-implantitis and the factors that may affect the disease. Materials and methods: The study was based on the medical records and radiographs of 422 patients (853 implants) who visited Ewha Womans University Mokdong Hospital Dental Center from January 1, 2012 to December 31, 2016. Generalized estimating equations (GEE) were used to determine the statistical relationship between peri-implantitis and each factor, and the cumulative prevalence of peri-implantitis during the observation period was obtained with the Kaplan-Meier method. Results: The prevalence of peri-implantitis was 7.3% at the patient level (31 of 422 patients) and 5.5% at the implant level (47 of 853 implants). Sex, guided bone regeneration (GBR), and functional loading period were statistically significantly associated with the occurrence of peri-implantitis. In the analysis of cumulative prevalence over the implant follow-up period, the first case of peri-implantitis occurred 9 months after implant placement, and the prevalence rose non-linearly over time without a clear critical point. Conclusion: The prevalence of peri-implantitis was 7.3% at the patient level and 5.5% at the implant level. Male sex, implants placed with GBR, and longer functional loading periods were associated with the risk of peri-implantitis.
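
The cumulative prevalence over follow-up was obtained with the Kaplan-Meier method. A minimal sketch of that step using the lifelines library (the durations and event flags below are fabricated placeholders, not the study's records):

```python
from lifelines import KaplanMeierFitter

# Hypothetical follow-up data: months until peri-implantitis (or censoring)
durations = [9, 14, 22, 30, 36, 48, 60, 60, 60, 60]   # months observed
events    = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0]            # 1 = peri-implantitis occurred

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="implants")

# 1 - S(t) gives the cumulative prevalence of peri-implantitis over time
print(kmf.survival_function_)           # S(t): probability of remaining disease-free
print(1.0 - kmf.survival_function_)     # cumulative incidence over follow-up
```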

Comparison of Color Stability and Surface Roughness of 3D Printing Resin by Polishing Methods (연마 방법에 따른 3D 프린팅 레진의 색조 안정성과 표면 조도의 비교)

  • Heeju Kim;Yujin Kim;Jongsoo Kim;Joonhaeng Lee;Mi Ran Han;Jisun Shin;Jongbin Kim
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.50 no.2
    • /
    • pp.205-216
    • /
    • 2023
  • This study aimed to compare the color stability and surface roughness of three-dimensional (3D) printing resin according to polishing method. Specimens were printed from TC-80DP resin (Graphy, Seoul, Korea) with a stereolithography 3D printer and divided into three groups: unpolished, polished with Enhance®, and polished with a Sof-Lex™ disc. The CIE L*a*b* values and surface roughness of each group were measured after 0, 1, 7, 30, and 60 days of immersion in artificial saliva or orange juice, and the color difference (ΔE*) was calculated. No noticeable color change was observed in artificial saliva, but a noticeable and statistically significant color change appeared in orange juice after 60 days of immersion. In the Sof-Lex™ group, surface roughness was significantly higher after immersion in orange juice than in artificial saliva. No significant difference in color change was found among the polishing methods, but surface roughness was significantly lower in the Sof-Lex™ group than in both the unpolished and Enhance® groups. Nevertheless, all groups exhibited clinically acceptable properties, even though their surface roughness exceeded the threshold for plaque accumulation. Overall, this study recommends Sof-Lex™ for polishing 3D printing resin used for primary anterior tooth coverage.
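
ΔE* here is the CIELAB color difference; under the common CIE76 definition it is the Euclidean distance between two L*a*b* coordinates. A small sketch with made-up measurements (the study's raw readings are not reported in the abstract):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical L*, a*, b* readings: baseline vs. after 60 days in orange juice
baseline = (78.2, 1.5, 12.0)
day_60   = (74.9, 3.1, 17.4)

de = delta_e_cie76(baseline, day_60)
print(f"dE* = {de:.2f}")   # values above ~3.3 are often taken as clinically noticeable
```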

Case Analysis on Platform Business Models for IT Service Planning (IT서비스 기획을 위한 플랫폼 비즈니스 모델 사례 분석연구)

  • Kim, Hyun Ji;Cha, yun so;Kim, Kyung Hoon
    • Korea Science and Art Forum
    • /
    • v.25
    • /
    • pp.103-118
    • /
    • 2016
  • Due to the rapid development of ICT, corporate business models have changed quickly, and the radical growth of IT technology has made sequential or gradual survival difficult. Internet-based businesses such as IT service companies are seeking new convergence business models, unlike any that existed before, to become more competitive, while the economic efficiency of business models that were successful in the past is wearing off. Moreover, because platforms on the Internet now reach the critical point at which platform value becomes extremely high far faster than before, platformization has become a very important condition for rapid business expansion in all kinds of businesses. This study analyzes the necessity of establishing platform business models in IT service planning and identifies their characteristics through case analyses. Four characteristics were derived. First, a platform must secure sufficient buyers and sellers. Second, it should provide customers with distinctive value that only the platform can generate. Third, common interests must exist between the platform-driving company and its partners and participants. Fourth, it must be continuously scalable and sustainably evolvable by expanding its participant base, upgrading, and extending into adjacent areas. We expect the identified characteristics to have a substantial impact on the establishment of platform business models and on the shaping of service planning, and we hope this study serves as a starting point for theories of profit models for platform businesses, which were not addressed here, so that planners responsible for platform-based IT service planning can spend less time and produce better-conceived planning drafts.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for housing computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only that equipment but also other connected equipment and cause enormous damage. Failures in IT facilities are especially irregular because of interdependence, which makes their causes difficult to identify. Previous studies on failure prediction in data centers treated each server as a single, isolated state and did not assume that devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and the analysis focused on complex failures occurring within servers. Failures outside the server include power, cooling, and user errors; since these can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures inside a server, by contrast, are hard to determine, and adequate prevention has not yet been achieved, chiefly because server failures do not occur in isolation: one server's failure may cause failures in other servers, or be caused by them. In other words, whereas existing studies analyzed failures under the assumption that a single server does not affect other servers, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment were used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. Failures for each device were sorted in chronological order, and when a failure occurred in one piece of equipment, any failure in another piece of equipment within 5 minutes was defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed together within these sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states. In addition, unlike in the single-server setting, a Hierarchical Attention Network structure was used to reflect the fact that each server contributes differently to a complex failure; this method improves prediction accuracy by weighting each server according to its impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were treated as a single-server state and as a multi-server state, and the two were compared. The second experiment improved prediction accuracy for complex failures by optimizing a separate threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another. Overall, prediction performance was superior under the multi-server assumption. In particular, the Hierarchical Attention Network, which assumes that each server's influence differs, improved the analysis, and applying a different threshold to each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that predicts failures occurring in data center servers. Using these results, failures are expected to be preventable in advance.
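
The model combines per-server LSTM encoders with an attention layer that weights each server by its estimated influence on the failure. A heavily simplified PyTorch sketch of that structure (dimensions, names, and the single-layer attention are illustrative assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class ServerHAN(nn.Module):
    """Per-server LSTM encoders + attention over servers + failure classifier."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # scores each server's influence
        self.head = nn.Linear(hidden, 1)          # binary failure prediction

    def forward(self, x):
        # x: (batch, n_servers, timesteps, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.lstm(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)                # last hidden state per server
        w = torch.softmax(self.attn(h), dim=1)     # attention weights over servers
        context = (w * h).sum(dim=1)               # influence-weighted summary
        return torch.sigmoid(self.head(context)), w

# Hypothetical batch: 8 samples, 5 servers, 30 time steps, 12 resource metrics
model = ServerHAN(n_features=12)
prob, weights = model(torch.randn(8, 5, 30, 12))
print(prob.shape, weights.shape)   # (8, 1) failure probability, (8, 5, 1) weights
```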