Investors often prefer to look for trading signals in chart patterns rather than rely on complex analyses such as corporate intrinsic-value analysis or technical auxiliary indicators. Pattern analysis, however, is difficult to apply and has been computerized far less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, from the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze enormous volumes of chart data in search of patterns that may predict stock prices. Although short-term price forecasting performance has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques could not recognize, but this approach can be weak in practice, because whether the patterns found are suitable for trading is a separate question. These studies locate a point that matches a meaningful pattern and then measure performance n days later, assuming a purchase was made at that point. Because this approach computes hypothetical returns, it can diverge considerably from reality. Whereas existing research tries to discover patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because each can be identified by five turning points. Although some of these patterns were reported to have price predictability, no performance in the actual market has been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern recognition accuracy. In this study, the 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be implemented easily in a trading system, and only the one pattern with the highest success rate in each group is selected for trading. Patterns that succeeded with high probability in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement is realistic because it assumes that both the buy and the sell orders were actually executed. We tested three ways of computing turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a given percentage and then locates the vertices. In the second, the high-low line zig-zag method, a high that touches the n-day high line is taken as a peak, and a low that touches the n-day low line is taken as a valley. In the third, the swing wave method, a central high that is higher than the n highs on both its left and right is taken as a peak, and a central low that is lower than the n lows on both sides is taken as a valley. The swing wave method outperformed the other two in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still forming.
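As an illustration, here is a minimal Python sketch of the swing wave rule described above; the function name swing_wave_turning_points and the window parameter n are our own choices for illustration, not identifiers from the study.

```python
def swing_wave_turning_points(highs, lows, n):
    """Swing wave rule: a bar is a peak if its high exceeds the
    n highs on both sides, and a valley if its low is below the
    n lows on both sides. Returns a list of (index, kind)."""
    points = []
    for i in range(n, len(highs) - n):
        left_h, right_h = highs[i - n:i], highs[i + 1:i + n + 1]
        left_l, right_l = lows[i - n:i], lows[i + 1:i + n + 1]
        if highs[i] > max(left_h) and highs[i] > max(right_h):
            points.append((i, "peak"))
        elif lows[i] < min(left_l) and lows[i] < min(right_l):
            points.append((i, "valley"))
    return points

# Example: with n = 2, bar 4 is a peak and bar 8 a valley.
highs = [10, 11, 12, 13, 15, 13, 12, 11, 10, 11, 12]
lows  = [ 9, 10, 11, 12, 14, 12, 11, 10,  8, 10, 11]
print(swing_wave_turning_points(highs, lows, 2))  # [(4, 'peak'), (8, 'valley')]
```

Because a turning point is confirmed only after n further bars have printed, this rule naturally trades on completed patterns, which matches the interpretation above.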
Because the number of possible combinations in this simulation was far too large to search for high-success patterns exhaustively, genetic algorithms (GA) were the most suitable solution. We also ran the simulation with walk-forward analysis (WFA), which separates the optimization (in-sample) section from the application (out-of-sample) section, allowing us to respond appropriately to market changes. We optimized at the portfolio level rather than optimizing the variables for each individual stock, because per-stock optimization carries a risk of over-optimization. We set the number of constituent stocks to 20 to gain the benefit of diversification while still avoiding over-fitting. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio performed best and the high-volatility portfolio second best. This suggests that prices need some volatility for patterns to form, but that the highest volatility is not necessarily best.
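A minimal sketch of the walk-forward splitting idea, assuming a simple rolling scheme; the window lengths and names below are illustrative, not the study's actual settings.

```python
def walk_forward_windows(n_bars, train_len, test_len):
    """Yield (train_range, test_range) index pairs that roll forward
    through the data: optimize on the training section, then apply
    the chosen pattern/parameters to the unseen section that follows."""
    start = 0
    while start + train_len + test_len <= n_bars:
        train = range(start, start + train_len)
        test = range(start + train_len, start + train_len + test_len)
        yield train, test
        start += test_len  # slide forward by one application section

for train, test in walk_forward_windows(n_bars=1000, train_len=250, test_len=50):
    pass  # e.g. run the GA on `train`, trade the surviving pattern on `test`
```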
An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring its functionality, and a multi-mode multi-task embedded system if it additionally supports multiple tasks within a mode. In this paper, we address a HW/SW partitioning problem: the HW/SW partitioning of multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimum-cost allocation and mapping of processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. Success in solving the problem depends closely on how much of the potential parallelism among module executions is exploited. However, because the search space of that parallelism is inherently very large, and to keep schedulability analysis tractable, prior HW/SW partitioning methods have not fully exploited the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocating processing resources, (2) mapping the processing resources to the modules in tasks, and (3) determining an execution schedule of the modules. Specifically, based on a precise measure of the parallel execution and schedulability of modules, we develop a stepwise refinement partitioning technique for single-mode multi-task applications. The proposed technique is then extended to solve the HW/SW partitioning problem of multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single-mode and multi-mode multi-task applications, respectively, compared with the conventional method.
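Purely as an illustration of the cost-versus-deadline tradeoff that the mapping subproblem turns on (this is not the paper's stepwise refinement technique, and it ignores parallel execution entirely), here is a toy greedy HW/SW mapping sketch; all names and numbers are made up.

```python
def greedy_hw_sw_partition(modules, deadline):
    """Toy mapping: start every module in SW; while the purely
    sequential schedule misses the deadline, move the module with
    the best time-saved-per-unit-cost to HW.
    `modules` maps name -> (sw_time, hw_time, hw_cost)."""
    mapping = {m: "SW" for m in modules}

    def length():
        return sum(hw if mapping[m] == "HW" else sw
                   for m, (sw, hw, _) in modules.items())

    while length() > deadline:
        movable = [m for m in modules
                   if mapping[m] == "SW" and modules[m][0] > modules[m][1]]
        if not movable:
            return None  # infeasible even with every module in HW
        best = max(movable,
                   key=lambda m: (modules[m][0] - modules[m][1]) / modules[m][2])
        mapping[best] = "HW"

    cost = sum(c for m, (_, _, c) in modules.items() if mapping[m] == "HW")
    return mapping, cost

mods = {"A": (8, 2, 10), "B": (5, 1, 12), "C": (6, 3, 4)}
print(greedy_hw_sw_partition(mods, deadline=12))
# -> ({'A': 'HW', 'B': 'SW', 'C': 'HW'}, 14)
```

Exploiting parallelism, as the paper does, changes the schedule-length computation and enlarges the search space, which is precisely why the simultaneous treatment of allocation, mapping, and scheduling matters.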
Split-flow thin cell fractionation (SPLITT fractionation, SF) is a particle separation technique that allows continuous (and thus preparative-scale) separation into two subpopulations based on particle size or density. SF has two basic performance parameters. One is the throughput (TP), defined as the amount of sample that can be processed per unit time. The other is the fractionation efficiency (FE), defined as the number percentage of particles whose size matches the theoretical prediction. A full-feed depletion mode channel (FFD-SF) has only one inlet, for the sample feed, and is equipped with a flow stream splitter only at the outlet. In the conventional FFD mode, the splitter inside the channel makes it difficult to enlarge the channel, so we used a large-scale, splitter-less FFD-SF channel to increase TP by increasing the channel scale. In this study, an FFD-SF channel with no flow stream splitters ('splitter-less') was developed for large-scale fractionation and was then tested for optimum TP and FE by varying the sample concentration and the flow rates at the inlet and outlet of the channel. Polyurethane (PU) latex beads with two different size distributions (about 3~7 µm and about 2~30 µm) were used for the test. The sample concentration was varied from 0.2 to 0.8% (wt/vol), and the channel flow rate among 70, 100, 120, and 160 mL/min. The fractionated particles were monitored by optical microscopy (OM), and sample recovery was determined by collecting the particles on a 0.1 µm membrane filter. Accumulation of relatively large, micron-sized particles in the channel could be prevented by feeding carrier liquid. It was found that, to achieve effective TP, the sample concentration should be 0.4% or higher.
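A small sketch of how the two performance parameters could be computed from an experiment's raw numbers; the variable names and the cutoff-based definition of a correctly sorted particle are our assumptions for illustration, not the paper's exact procedure.

```python
def throughput(mass_processed_mg, minutes):
    """TP: amount of sample processed per unit time (mg/min)."""
    return mass_processed_mg / minutes

def fractionation_efficiency(sizes_um, cutoff_um, expect_below=True):
    """FE: number % of collected particles whose size falls on the
    side of the theoretical cutoff predicted for this outlet."""
    ok = sum(1 for s in sizes_um if (s < cutoff_um) == expect_below)
    return 100.0 * ok / len(sizes_um)

print(throughput(400.0, 10.0))                      # 40.0 mg/min
print(fractionation_efficiency([2, 3, 4, 6, 9], 5)) # 60.0 (3 of 5 below cutoff)
```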
Methylobacterium organophilum, a facultative methylotroph, was cultivated on methanol as the sole carbon and energy source. Cell growth was affected by the various components of the minimal synthetic medium, and the medium composition was optimized with 0.5% (v/v) methanol at pH 6.8 and $30^{\circ}C$. The maximum specific growth rate of M. organophilum reached 0.26 $hr^{-1}$ in the optimized medium, which had the following composition: methanol, 0.5% (v/v); (NH$_4$)$_2$SO$_4$, 1.0 g/l; KH$_2$PO$_4$, 2.13 g/l; KH$_2$PO$_4$, 1.305 g/l; MgSO$_4{\cdot}$7H$_2$O, 0.45 g/l; and trace elements (CaCl$_2{\cdot}$2H$_2$O, 3.3 mg; FeSO$_4{\cdot}$7H$_2$O, 1.3 mg; MnSO$_4{\cdot}$4H$_2$O, 130 $\mu\textrm{g}$; ZnSO$_4{\cdot}$5H$_2$O, 40 $\mu\textrm{g}$; Na$_2$MoO$_4{\cdot}$2H$_2$O, 40 $\mu\textrm{g}$; CoCl$_2{\cdot}$6H$_2$O, 40 $\mu\textrm{g}$; H$_3$BO$_3$, 30 $\mu\textrm{g}$ per liter). Cell growth was significantly repressed by nitrogen limitation and by deficiency of Mn$^{2+}$ or Fe$^{2+}$. Methanol strongly inhibited cell growth, and complete inhibition was observed at concentrations above 4% (v/v). To overcome methanol inhibition and to prevent methanol limitation, methanol was fed intermittently using a D.O.-stat technique. PHB production by M. organophilum was stimulated by deficiency of nutrients such as NH$_{4}^{+}$, SO$_{4}^{2-}$, Mg$^{2+}$, K$^{+}$, or PO$_{4}^{3-}$ in the medium. The maximum PHB content, 58% of dry cell weight, was obtained under potassium deficiency in the optimized synthetic medium.
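A minimal sketch of the D.O.-stat idea behind the intermittent methanol feeding: when the substrate is exhausted, respiration slows and dissolved oxygen (DO) spikes upward, which triggers a small feed pulse. The threshold, pulse size, and names here are illustrative assumptions, not the study's actual settings.

```python
def do_stat_step(dissolved_oxygen, feed_pulse, do_threshold=30.0):
    """One step of a D.O.-stat feeding loop: a rise of dissolved
    oxygen (% saturation) above `do_threshold` signals methanol
    depletion, so a pulse is fed; pulses are kept small so the
    methanol concentration stays below inhibitory levels (>4% v/v)."""
    if dissolved_oxygen > do_threshold:
        feed_pulse()

# Toy usage with a stubbed pump action:
do_stat_step(45.0, lambda: print("feed 5 mL methanol"))
```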
A bacterium producing non- or partially digestible dextran was isolated from kimchi broth by an enrichment culture technique and tentatively identified as Leuconostoc sp. strain SKY. We used response surface methodology (a Box-Behnken design) to optimize the principal parameters (culture pH, temperature, and yeast extract concentration) for maximizing dextran production. The parameter ranges were determined from prior screening work in our laboratory and accordingly chosen as pH 5.5, 6.5, and 7.5; temperature 25, 30, and $35^{\circ}C$; and yeast extract 1, 5, and 9 g/l. The initial sucrose concentration was 100 g/l. The mineral medium consisted of 3.0 g $KH_2PO_4$, 0.01 g $FeSO_4{\cdot}H_2O$, 0.01 g $MnSO_4{\cdot}4H_2O$, 0.2 g $MgSO_4{\cdot}7H_2O$, 0.01 g NaCl, and 0.05 g $CaCO_3$ per liter of deionized water. The optima were around pH 7.0, 27 to $28^{\circ}C$, and 6 to 7 g/l yeast extract. The best dextran yield was 60% (g dextran per g sucrose), and the best dextran productivity was 0.8 g/l per hour.
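In the spirit of the response surface analysis, here is a minimal sketch of fitting a two-factor quadratic surface and locating its stationary point; the design points and yields are made-up numbers, and a true Box-Behnken design uses three or more factors.

```python
import numpy as np

# Fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# to coded factor levels (-1, 0, +1) and locate the optimum.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]])
y = np.array([40, 45, 47, 50, 58, 59, 52, 55, 49, 53], float)

x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: solve gradient = 0 for the quadratic part.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
print("optimum (coded units):", np.linalg.solve(H, g))
```

Decoding the stationary point back to natural units (pH, temperature, g/l yeast extract) gives optima of the kind reported above.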
Consecutive brain [Tc-99m] ECD SPECT studies before and after acetazolamide (Diamox) administration have been performed in patients to evaluate cerebrovascular hemodynamic reserve. However, the quantitative potential of SPECT Diamox imaging is limited by degrading factors such as finite detector resolution, attenuation, scatter, poor counting statistics, and the methods of data analysis. Physical measurements in phantoms filled with known amounts of radioactivity can help characterize and potentially quantify the sensitivities, but it is often very difficult to make a realistic phantom that simulates patients in clinical situations. We therefore studied the sensitivities of ECD SPECT before and after Diamox administration by computer simulation. The sensitivity is defined as $(\Delta N/N)/(\Delta S/S)\times100\%$, where $\Delta N$ is the difference in mean counts between post- and pre-Diamox in the measured data, $N$ is the mean counts before Diamox in the measured data, $\Delta S$ is the difference in mean counts between post- and pre-Diamox in the model, and $S$ is the mean counts before Diamox in the model. In clinical Diamox studies, the percentage change of radioactivity can be determined by subtracting pre- from post-Diamox data to measure the change in radioactivity concentration induced by Diamox. However, the amount of subtraction that yields 100% sensitivity is not known without a thorough sensitivity analysis by computer simulation. For the consecutive brain SPECT imaging model before and after Diamox, when a 30% increase in radioactivity concentration was assigned as the Diamox effect in the model, the measured sensitivities were 51.03, 73.4, 94.00, and 130.74% for 0, 100, 150, and 200% subtraction, respectively. The sensitivity analysis indicated that partial-volume effects due to finite detector resolution and statistical noise cause a significant underestimation of radioactivity measurements, and that the amount of underestimation depends on the % increase in radioactivity concentration and the % subtraction of pre- from post-Diamox data. A 150% subtraction appears to be optimal in clinical situations where approximately 30% changes in radioactivity concentration are expected. Computer simulation may be a powerful technique for studying the sensitivities of ECD SPECT before and after Diamox administration.
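A small sketch of the sensitivity definition and the weighted subtraction step, using made-up mean counts purely for illustration (not the paper's simulated values).

```python
def sensitivity(n_pre, n_post, s_pre, s_post):
    """(dN/N) / (dS/S) * 100%: the relative change recovered from
    the measured data versus the change assigned in the model."""
    return (n_post - n_pre) / n_pre / ((s_post - s_pre) / s_pre) * 100.0

def weighted_subtraction(post, pre, percent):
    """Subtract `percent`% of pre-Diamox counts from post-Diamox
    counts (150% subtraction means post - 1.5 * pre)."""
    return post - (percent / 100.0) * pre

# Model assigns a 30% increase, but blurring and noise let only
# half of it through to the measured data (illustrative numbers).
print(sensitivity(n_pre=100.0, n_post=115.0, s_pre=100.0, s_post=130.0))  # 50.0
print(weighted_subtraction(post=115.0, pre=100.0, percent=100.0))         # 15.0
```

Over-subtracting (percent above 100) shrinks the denominator counts attributed to baseline, which is why larger subtraction percentages push the measured sensitivity upward toward, and past, 100%.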
Park, Chun-Soo; Kim, Yong-Jin; Sung, Si-Chan; Park, Ji-Eun; Choi, Sun-Young; Kim, Woong-Han; Kim, Kyung-Hwan. Journal of Chest Surgery, v.41, no.5, pp.550-562, 2008.
Background: We attempted to reproduce a previously reported method known to be effective for decellularization, and we sought the optimal conditions for decellularization by introducing some modifications to this method. Material and Method: Porcine semilunar valves, arterial walls and pericardium were processed for decellularization using a variety of combinations and concentrations of decellularizing agents under different conditions of temperature, osmolarity and incubation time. The degree of decellularization and the preservation of the extracellular matrix were evaluated by staining with hematoxylin and eosin, and with alpha-Gal and DAPI in some of the decellularized tissues. Result: Decellularization was achieved in the specimens treated with sodium deoxycholate, sodium dodecyl sulfate, Triton X-100, and sodium dodecyl sulfate with Triton X-100 as single-step methods, and in the specimens treated with hypotonic solution ${\rightarrow}$ Triton X-100 ${\rightarrow}$ sodium dodecyl sulfate, sodium deoxycholate ${\rightarrow}$ hypotonic solution ${\rightarrow}$ sodium dodecyl sulfate, and hypotonic solution ${\rightarrow}$ sodium dodecyl sulfate as multi-step methods. Conclusion: Considering the number and amount of chemicals used, the incubation time, and the degree of damage to the extracellular matrix, a single-step method with sodium dodecyl sulfate and Triton X-100 and a multi-step method with hypotonic solution followed by sodium dodecyl sulfate were the relatively optimal methods for decellularization in this study.
Journal of Korean Society of Environmental Engineers, v.39, no.8, pp.449-455, 2017.
Process input data, including materials and energy, and process output data, including product, co-products, and environmental emissions, were collected for the reference and target processes and analyzed to evaluate process performance. Environmentally problematic inputs and emissions of the manufacturing processes were identified from these data. The process inputs contributing significantly to each environmental emission were identified using multiple regression analysis between the process inputs and the emissions. The optimum combination of end-of-pipe technologies for treating the emissions, considering economic aspects, was determined using linear programming. Cement manufacturing processes in Korea and the EU producing the same type of cement were chosen for the case study. Environmentally problematic inputs and emissions of the domestic cement manufacturing processes include coal, dust, and $SO_x$. The multiple regression analysis revealed that $CO_2$ emission was influenced most by coal, followed by the input raw materials and gypsum; $SO_x$ emission was influenced by coal, and dust emission by gypsum followed by raw material. Optimization of the end-of-pipe technologies for dust showed that a combination of 100% of the electrostatic precipitator and 2.4% of the fiber filter gives the lowest cost; for $SO_x$, a combination of 100% of the dry addition process and 25.88% of the wet scrubber gives the lowest cost. A salient feature of this research is that it proposes a method for identifying environmentally problematic inputs and emissions of manufacturing processes, in particular the cement manufacturing process; another is that it shows a method for selecting the optimum combination of end-of-pipe treatment technologies.
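A minimal sketch of the end-of-pipe selection step as a linear program, here with scipy; the costs, removal efficiencies, and the additive removal model are made-up illustrations, since the abstract does not give the study's actual coefficients.

```python
from scipy.optimize import linprog

# Fractions of the dust stream treated by an electrostatic
# precipitator (x1) and a fiber filter (x2); minimize treatment
# cost while meeting a required overall removal.
cost = [5.0, 8.0]              # cost per unit of stream treated (made-up)
eff = [0.95, 0.99]             # removal efficiency of each device (made-up)
required = 0.973               # required overall dust removal

res = linprog(c=cost,
              A_ub=[[-eff[0], -eff[1]]],   # eff . x >= required
              b_ub=[-required],
              bounds=[(0, 1), (0, 1)])
print(res.x)  # ~[1.0, 0.023]: 100% precipitator plus ~2.3% fiber filter
```

The cheaper-per-unit-removal device is used fully and the more expensive one only tops up the remainder, which is the same shape as the 100% / 2.4% combination reported above.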
KSCE Journal of Civil and Environmental Engineering Research, v.13, no.2, pp.95-109, 1993.
In this study, the optimal configuration of arch structures was investigated using a decomposition technique. The aim is to provide a method for optimizing the shapes of both two-hinged and fixed arches. The optimal-configuration problem includes interaction formulas and working stress and buckling stress constraints, on the assumption that arch ribs can be approximated by a finite number of straight members. On the first level, buckling loads are calculated from the stiffness matrix and the geometric stiffness matrix using the Rayleigh-Ritz method, and the number of structural analyses is reduced by approximating member forces through sensitivity analysis using the design space approach. The objective function is the total weight of the structure, and the constraints comprise the working stress, the buckling stress, and side limits. On the second level, the nodal point coordinates of the arch structure are used as design variables, with the weight again taken as the objective function. By treating the nodal coordinates as design variables, the optimization can be reduced to an unconstrained optimal design problem, which is easy to solve. Numerical comparisons with results obtained from tests on several arch structures with various shapes and constraints show that the convergence rate is very fast regardless of the constraint types and the configuration of the arch structure, and the optimal configurations obtained in this study are almost identical to those from other results. The total weight could be decreased by 17.7%-91.7% when an optimal configuration was achieved.
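The first-level buckling computation amounts to the generalized eigenproblem $(K - \lambda K_g)\phi = 0$ for the stiffness matrix $K$ and geometric stiffness matrix $K_g$. Below is a minimal sketch with small made-up matrices; scipy's eigh handles the symmetric generalized problem directly.

```python
import numpy as np
from scipy.linalg import eigh

# Buckling as a generalized eigenproblem: K phi = lam * Kg phi,
# where the smallest positive eigenvalue is the critical load
# factor. K and Kg are tiny made-up symmetric matrices.
K = np.array([[ 4.0, -1.0],
              [-1.0,  3.0]])        # elastic stiffness
Kg = np.array([[0.5, 0.1],
               [0.1, 0.4]])         # geometric stiffness (per unit load)

lams, phis = eigh(K, Kg)            # solves K phi = lam * Kg phi
crit = lams[lams > 0].min()         # critical buckling load factor
print("critical load factor:", crit)
```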
In this chapter, we summarize the results on optimal location selection and present the limitations and directions of the research. To reach the objective, this study selected and tested an interaction model that obtains the coordinates of a location through an optimization technique. The study first used the original variables in the model, but the results differed from reality. To overcome this difference, a market survey was performed and new variables were found (primary data such as price, quality, and assortment of goods, and secondary data such as aggregate area, shop area, and the number of cars in the parking lot). An optimal variable was then determined by an empirical analysis comparing the actual market share in 1988 with the market share produced by the model. However, the market share under each variable did not reflect reality, owing to an assumption about the λ value in the model. To improve this, a sensitivity analysis was performed that increments the λ value from 1.0 to 2.9. The analysis showed the highest agreement with the market share ratio in 1998 at a λ of 1.0. Applying weights to the variables from the primary and secondary data showed that more variables from the primary data coincided with the actual sales ranking. Although this study has some limitations and room for improvement, a marketer using this extended model should obtain more significant results.
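A minimal sketch of a Huff-type interaction model of the kind described, where the probability that a consumer zone patronizes a store depends on attractiveness and on distance raised to a λ exponent; the attractiveness values, distances, and the exact functional form are illustrative assumptions, not the study's calibrated model.

```python
def huff_shares(attract, dist, lam):
    """Huff-type market shares for one consumer zone:
    P_j = (A_j / d_j**lam) / sum_k (A_k / d_k**lam)."""
    u = [a / d**lam for a, d in zip(attract, dist)]
    s = sum(u)
    return [x / s for x in u]

attract = [10.0, 6.0, 8.0]   # e.g. floor area or a composite score
dist = [2.0, 1.0, 3.0]       # distance from the consumer zone

# Sensitivity analysis over lambda, as in the study (1.0 to 2.9):
for lam in [1.0, 1.5, 2.0, 2.5, 2.9]:
    print(lam, [round(p, 3) for p in huff_shares(attract, dist, lam)])
```

Raising λ shifts share toward the nearest store, which is why the choice of λ dominates the comparison against the observed market shares.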