• Title/Summary/Keyword: Structure Test Model

Search Result 2,070

Structure-Activity Relationships of Polyhydroxyursane-type Triterpenoids on the Cytoprotective and Anti-inflammatory Effects

  • Jung, Hyun-Ju;Nam, Jung-Hwan;Lee, Kyung-Tae;Lee, Yong-Sup;Choi, Jong-Won;Kim, Won-Bae;Chung, Won-Yoon;Park, Kwang-Kyun;Park, Hee-Juhn
    • Natural Product Sciences
    • /
    • v.13 no.1
    • /
    • pp.33-39
    • /
    • 2007
  • Eleven polyhydroxyursane triterpenoids (PHUTs) were tested to determine their cytoprotective, immunosuppressive and anti-inflammatory effects. To compare the bioactivities of $19{\alpha}$-hydroxyursane-type triterpenoids {23-hydroxytormentic acid (6), its methyl ester (7), tormentic acid (8), niga-ichigoside $F_1$ (9), euscaphic acid (10) and kaji-ichigoside $F_1$ (11)} from the Rosaceae crude drugs (Rubi Fructus and Rosa rugosae Radix) with PHUTs lacking the $19{\alpha}$-hydroxyl group, four PHUTs, asiaticoside (1), madecassoside (2), asiatic acid (3), and madecassic acid (4), were isolated from Centella asiatica (Umbelliferae), and 23-hydroxyursolic acid (5) from Cussonia bancoensis. Cytoprotective effects were assessed by measuring cell viabilities against cisplatin-induced cytotoxicity in $LLC-PK_1$ cells (proximal tubule, pig kidney) to determine whether these agents protect against nephrotoxicity caused by cisplatin. The inhibitory effects of the 11 PHUTs on nitric oxide (NO) and prostaglandin $E_2\;(PGE_2)$ production were evaluated by measuring nitrite accumulation in lipopolysaccharide (LPS)-induced RAW 264.7 macrophage cells, and their anti-inflammatory effects were tested in a 12-O-tetradecanoylphorbol-13-acetate (TPA)-induced mouse ear edema model. Six PHUTs (compounds 1, 2, 4, 6, 10, and 11) exhibited higher cell viabilities in the cisplatin cytotoxicity test, even at a concentration of $200\;{\mu}g/ml$, than the cisplatin-only treated group, suggesting that these compounds have potent cytoprotective effects. Compounds 1 and 3 of C. asiatica and niga-ichigoside $F_1$ exhibited no inhibitory effect on NO and/or $PGE_2$ production, whereas the other PHUTs produced mild to significant inhibition of NO and/or $PGE_2$ production. Four compounds (2, 5, 9, and 10) potently inhibited TPA-induced mouse ear edema, whereas two compounds (1 and 3) showed no activity in this test. These results suggest that many PHUTs are potent chemopreventives. Structure-activity relationships (SAR) are also discussed for each assay with regard to the significant role of the hydroxyl groups at positions 2, 3, 6, 19, and 23 and the glycoside linkage at the 28-carboxyl.

Effect of various casting alloys and abutment composition on the marginal accuracy of bar-type retainer (합금의 종류와 지대주 성분이 바형 유지 장치의 변연 적합도에 미치는 영향)

  • Lee, Yun-Hui;Song, Young-Gyun;Lee, Joon-Seok
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.50 no.2
    • /
    • pp.85-91
    • /
    • 2012
  • Purpose: The objective of this study was to determine whether a low-priced alloy and a metal UCLA abutment are suitable for manufacturing the bar-retained framework of an implant prosthesis. Materials and methods: Bar structures were classified into 4 groups. The specimens of groups 1 and 2 were cast from high noble metal alloys and noble metal alloys, respectively, with gold UCLA abutments. The specimens of groups 3 and 4 were cast from noble metal alloys and base metal alloys, respectively, with metal UCLA abutments. Each cast bar structure was installed in an acrylic resin model and only the screw on the hexed abutment side was tightened to 20 Ncm. On the opposite side, the vertical discrepancy was measured with a stereo microscope from the front, back, and lateral sides of the implant-abutment interface. One-way ANOVA was performed to analyze the marginal fit discrepancy. Results: The one-way ANOVA test showed significant differences among all groups ($P$<.05) except between Groups 1 and 3. Among them, the difference between Groups 1 and 2 was noticeable. Measured vertical discrepancies were all below $70{\mu}m$ except for Group 2. Conclusion: A base metal alloy with a metal UCLA abutment could be used as an alternative to high-priced gold alloy for implant bar-retained frameworks.
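
The one-way ANOVA used above compares between-group to within-group variance through an F statistic. A minimal sketch of that computation, on invented vertical-discrepancy values (not the study's measurements):

```python
def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (weighted by group size)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical discrepancy readings in micrometres, one list per group
groups = [
    [52.0, 55.0, 50.0],   # Group 1
    [88.0, 92.0, 85.0],   # Group 2 (noticeably larger gap)
    [54.0, 51.0, 56.0],   # Group 3
    [63.0, 60.0, 65.0],   # Group 4
]
f_stat = one_way_anova_f(groups)   # a large F indicates the group means differ
```

A large F relative to the critical value at the chosen significance level (here $P$<.05) leads to rejecting the hypothesis that all group means are equal.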

Effectiveness of multi-mode surface wave inversion in shallow engineering site investigations (토목관련 천부층 조사에서 다중 모드 표면파 역산의 효과)

  • Feng Shaokong;Sugiyama Takeshi;Yamanaka Hiroaki
    • Geophysics and Geophysical Exploration
    • /
    • v.8 no.1
    • /
    • pp.26-33
    • /
    • 2005
  • Inversion of multi-mode surface-wave phase velocities for shallow engineering site investigation has received much attention in recent years. A sensitivity analysis and inversions of both synthetic and field data demonstrate the greater effectiveness of this method over employing the fundamental mode alone. Perturbation of thickness and shear-wave velocity parameters in multi-modal Rayleigh wave phase velocities revealed that the sensitivities of higher modes: (a) concentrate in different frequency bands, and (b) are greater than those of the fundamental mode for deeper parameters. These observations suggest that multi-mode phase velocity inversion can provide better parameter discrimination and imaging of deep structure, especially with a velocity reversal, than can inversion of fundamental-mode data alone. An inversion of the theoretical phase velocities in a model with a low-velocity layer at 20 m depth can only image the soft layer when the first higher mode is incorporated. This is especially important when the lowest measurable frequency is only 6 Hz. Field tests were conducted at sites surveyed by borehole and PS logging. At the first site, an array microtremor survey, often used for deep geological surveying in Japan, was used to survey the soil down to 35 m depth. At the second site, linear multichannel spreads with a sledgehammer source were recorded, for an investigation down to 12 m depth. The f-k power spectrum method was applied for dispersion analysis, and velocities up to the second higher mode were observed in each test. The multi-mode inversion results agree well with the PS logs, but models estimated from the fundamental mode alone show a large underestimation of the depth to shallow soft layers below artificial fill.
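
The advantage of joint multi-mode inversion can be seen in the shape of its objective function. The sketch below uses hypothetical phase velocities, not the paper's data or its modal forward model: two candidate models fit the fundamental mode equally well but are separated once the first higher mode enters the misfit.

```python
import math

def rms_misfit(observed, predicted):
    """Root-mean-square misfit between observed and predicted phase velocities (m/s)."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed))

def joint_misfit(obs_modes, pred_modes):
    """Equal-weight combination of per-mode misfits (fundamental + higher modes)."""
    misfits = [rms_misfit(o, p) for o, p in zip(obs_modes, pred_modes)]
    return sum(misfits) / len(misfits)

obs = [[400.0, 380.0, 350.0],          # fundamental-mode phase velocities
       [500.0, 460.0, 420.0]]          # first-higher-mode phase velocities
model_a = [[400.0, 380.0, 350.0],      # fits the fundamental perfectly...
           [520.0, 470.0, 440.0]]      # ...but misses the higher mode
model_b = [[400.0, 380.0, 350.0],      # fits both modes
           [500.0, 460.0, 420.0]]

# A fundamental-only inversion cannot tell the two models apart:
tie = rms_misfit(obs[0], model_a[0]) == rms_misfit(obs[0], model_b[0])
# Adding the first higher mode breaks the tie:
better_b = joint_misfit(obs, model_b) < joint_misfit(obs, model_a)
```

This is the mechanism behind the improved parameter discrimination reported above; the actual inversion also requires a Rayleigh-wave dispersion forward model, which is beyond this sketch.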

Development of Deep Learning Structure to Secure Visibility of Outdoor LED Display Board According to Weather Change (날씨 변화에 따른 실외 LED 전광판의 시인성 확보를 위한 딥러닝 구조 개발)

  • Sun-Gu Lee;Tae-Yoon Lee;Seung-Ho Lee
    • Journal of IKEEE
    • /
    • v.27 no.3
    • /
    • pp.340-344
    • /
    • 2023
  • In this paper, we propose a deep learning structure that secures the visibility of an outdoor LED display board under changing weather. The proposed technique automatically adjusts the LED luminance according to the weather, using deep learning on images from an imaging device. To adjust the luminance automatically, image data of the flattened background region are first preprocessed and then used to train a convolutional network, yielding a deep learning model that can classify the weather. The applied network uses residual learning to reduce the difference between the input and output values, guiding training while preserving the characteristics of the initial input. A controller then recognizes the weather and adjusts the luminance of the outdoor LED display board accordingly: when the surroundings become bright, the luminance is raised so the board remains clearly visible, and when the surroundings become dark, the luminance is lowered, since scattering of light would otherwise degrade visibility. A certified measurement test of the luminance of the LED sign board under changing weather confirmed that the proposed method secures the visibility of the outdoor LED display board across weather changes.
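
The luminance-adjustment step after weather classification can be sketched as a simple controller. The weather classes, baseline luminances, and clamping limits below are illustrative assumptions, not the paper's calibrated values:

```python
# Per-weather baseline luminances in cd/m^2 (illustrative figures only).
LUMINANCE_BY_WEATHER = {
    "sunny": 6000,
    "cloudy": 3500,
    "rainy": 2500,
    "night": 800,
}

def target_luminance(weather, ambient_lux):
    """Scale a per-weather baseline by ambient brightness, clamped to panel limits."""
    base = LUMINANCE_BY_WEATHER[weather]
    # Brighter surroundings -> raise luminance to stay visible;
    # darker surroundings -> lower it to avoid glare from scattered light.
    scale = min(max(ambient_lux / 50000.0, 0.2), 1.5)
    return int(min(max(base * scale, 500), 8000))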

Method of Earthquake Acceleration Estimation for Predicting Damage to Arbitrary Location Structures based on Artificial Intelligence (임의 위치 구조물의 손상예측을 위한 인공지능 기반 지진가속도 추정방법 )

  • Kyeong-Seok Lee;Young-Deuk Seo;Eun-Rim Baek
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.3
    • /
    • pp.71-79
    • /
    • 2023
  • It is not efficient to install a maintenance system that measures seismic acceleration and displacement on every bridge and building in order to evaluate structural safety after an earthquake. Instead, on-site investigations are conducted, which take a long time when the scope of the investigation is wide; secondary damage may occur in the meantime, so the safety of individual structures needs to be predicted quickly. Methods for estimating the earthquake damage of a structure include finite element analysis using measured seismic information together with a structural analysis model. It is therefore necessary to predict the seismic information generated at an arbitrary location in order to determine structural damage quickly. In this study, methods to predict the ground response spectrum and acceleration time history at an arbitrary location, using linear estimation methods and artificial neural network learning based on seismic observation data, were proposed and their applicability was evaluated. For the linear estimation method, the error was small when the nearby observatories were clustered, but increased significantly when they were spread out. The artificial neural network learning method achieved a lower level of error under the same conditions.
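
The abstract does not spell out the linear estimation method; inverse-distance weighting is one common linear scheme for interpolating a ground-motion value (for example, a single response-spectrum ordinate) from nearby observatories, and it illustrates why clustered versus scattered station geometry changes the error:

```python
def idw_estimate(site, stations, power=2.0):
    """Inverse-distance-weighted estimate of a ground-motion value at `site`
    from (x, y, value) tuples of nearby seismic stations.

    This is an illustrative interpolation scheme, not the study's exact method.
    """
    num = den = 0.0
    for x, y, value in stations:
        d2 = (x - site[0]) ** 2 + (y - site[1]) ** 2
        if d2 == 0.0:
            return value            # the site coincides with a station
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den
```

Applied per spectral period, this yields an estimated response spectrum at the arbitrary site; the neural-network approach in the study learns this mapping from observation data instead of fixing the weights by distance.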

A Framework on 3D Object-Based Construction Information Management System for Work Productivity Analysis for Reinforced Concrete Work (철근콘크리트 공사의 작업 생산성 분석을 위한 3차원 객체 활용 정보관리 시스템 구축방안)

  • Kim, Jun;Cha, Heesung
    • Korean Journal of Construction Engineering and Management
    • /
    • v.19 no.2
    • /
    • pp.15-24
    • /
    • 2018
  • Despite the recognized need for productivity information and its importance, feedback of productivity information is not well established in the construction industry. Effective use of productivity information is required to improve the reliability of construction planning. In many cases, however, on-site productivity information is hardly managed effectively; instead, practitioners rely on the experience and/or intuition of project participants. Based on a literature review and expert interviews, the authors concluded that one possible solution is a systematic approach to handling productivity information on construction job sites. The new system should not be burdensome to users, and should provide purpose-oriented information management, an easy-to-follow information structure, real-time information feedback, and recognition of productivity-related factors. Based on these preliminary investigations, this study proposes a framework for a system that facilitates effective management of construction productivity information. The system utilizes SketchUp software, which offers good user accessibility, minimizing additional data input and the related workload. The proposed system is designed to handle the pertinent information through a four-stage process: preparation, input, processing, and output. The inputted construction information is classified into a Task Breakdown Structure (TBS) and a Material Breakdown Structure (MBS), which are constructed by referring to the standard specification of building construction, and converted into productivity information. The converted information is also visualized graphically on screen, allowing users to draw on productivity information from the job site. 
The proposed productivity information management system was pilot-tested for practical applicability and information availability in a real construction project. Very positive results were obtained for the usability and applicability of the system, and further benefits are expected from a validity test. If the proposed system is used in the planning stage of construction, productivity information will accumulate continuously, and the expected effectiveness of this study would be further enhanced.
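
How daily records might roll up to TBS-level productivity figures can be sketched as follows; the codes, field names, and values are hypothetical, not the system's actual schema:

```python
from collections import defaultdict

# Hypothetical daily work records keyed to Task Breakdown Structure (TBS) codes.
records = [
    {"tbs": "RC-FORMWORK", "quantity": 120.0, "work_hours": 48.0},  # m2 of forms
    {"tbs": "RC-FORMWORK", "quantity": 150.0, "work_hours": 50.0},
    {"tbs": "RC-REBAR",    "quantity": 3.2,   "work_hours": 40.0},  # tonnes of rebar
]

def productivity_by_task(records):
    """Aggregate quantity and hours per TBS code; return quantity per work-hour."""
    qty, hrs = defaultdict(float), defaultdict(float)
    for r in records:
        qty[r["tbs"]] += r["quantity"]
        hrs[r["tbs"]] += r["work_hours"]
    return {task: qty[task] / hrs[task] for task in qty}

rates = productivity_by_task(records)   # e.g. m2 of formwork per work-hour
```

In the proposed system these figures would be linked to 3D objects in SketchUp and fed back into planning; the MBS side would aggregate material quantities the same way.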

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.149-161
    • /
    • 2014
  • The export of domestic public services to overseas markets contains many potential obstacles, stemming from different export procedures, the target services, and socio-economic environments. In order to alleviate these problems, the business incubation platform as an open business ecosystem can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for the business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and test of the ontology model, the logical structure is edited using the Prot$\acute{e}$g$\acute{e}$ editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service. 
The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation. Sub-attributes of requirements are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase. The activity category also has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business law, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. 
The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for a certain public service are generated by a matching algorithm. These lists are used as input seeds to simulate the consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered an alternative choice, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
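
A minimal sketch of how two of the five ontologies and the matching step could be represented in code; the attribute names follow the abstract, while the classes, rules, and values are illustrative assumptions rather than the platform's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceOntology:
    objective: str                                  # operational body / user reference
    requirements: list = field(default_factory=list)
    activity: list = field(default_factory=list)
    service: dict = field(default_factory=dict)     # target, time, place

@dataclass
class CountryOntology:
    economy: str
    law: str
    population: int

def match_score(service, country, rules):
    """Toy matching algorithm: count the rule predicates a (service, country) pair satisfies."""
    return sum(1 for rule in rules if rule(service, country))

svc = ServiceOntology(objective="e-government", service={"target": "citizens"})
ctry = CountryOntology(economy="developing", law="civil", population=5_000_000)
rules = [
    lambda s, c: c.population > 1_000_000,          # market is large enough
    lambda s, c: s.objective == "e-government",     # service type is in demand
]
score = match_score(svc, ctry, rules)
```

Ranking candidate countries by such a score gives the kind of priority list the abstract describes; a real implementation would express the rules over OWL ontologies edited in Prot$\acute{e}$g$\acute{e}$ rather than Python predicates.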

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.159-172
    • /
    • 2010
  • The recommender system is one of the possible solutions to assist customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful recommendation techniques is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles and products. CF identifies customers whose tastes are similar to those of a given customer, and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms which combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, much about how CF works remains unknown. Furthermore, the relative performances of CF algorithms are known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predict the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in data for CF recommendations. 
An ANN model is developed through an analysis of network topology, covering network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how much denser the social network is than the minimum needed to keep the group even indirectly connected. We use these social network measures as input variables of the ANN model. As the output variable, we use the recommendation accuracy measured by the F1-measure. In order to evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, and we used 40%, 40%, and 20% of them for training, test, and validation, respectively. A 5-fold cross validation was also conducted to enhance the reliability of our experiments. The input variable measuring process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model achieves an estimated accuracy of 92.61% with an RMSE of 0.0049. Our prediction model can therefore help decide whether CF is useful for a given application with certain data characteristics.
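
Two of the input measures named above, network density and the average clustering coefficient, have standard definitions that can be sketched directly. The paper computed them with NetMiner/UCINET; this is an independent illustration on a toy undirected graph:

```python
def density(n_nodes, edges):
    """Proportion of possible undirected links that are actually present."""
    return 2 * len(edges) / (n_nodes * (n_nodes - 1))

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph
    given as {node: set_of_neighbours}. Nodes with fewer than two
    neighbours are assigned a coefficient of 0 by convention."""
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # Count links among the node's neighbours (each pair once).
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)
```

In the study's setting, nodes are customers and edges are purchase-similarity relationships; these scalar measures then become input features of the ANN.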

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we run the risk of obtaining biased estimates of the cross elasticity between them if we focus on only new cars or on only used cars. Unfortunately, most previous studies on the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are, however, some exceptions. Purohit (1992) and Sullivan (1990) looked into both the new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply the nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model that assumes no decision hierarchy, so that new and used cars of different models are all equally substitutable at the first stage. 
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 - June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified since the dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on estimated parameters was conducted employing the modified Lanczo's iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo. The new car settles down to a lowered market share due to the used car's reaction. 
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used Jetta as a focal brand to see how its new and used cars set prices, rebates or APR interactively, assuming that reactive cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, suggesting a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model. In the second simulation, I used Elantra to reconfirm the result for Jetta and came to the same conclusion. In the third simulation, I had Corolla offer a $1,000 rebate to see what the best response would be for Elantra's new and used cars. Interestingly, Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or to the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships that carry both new and used cars of various models, the NUB model might fit the data as well as the BNU model. 
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
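
The two-stage structure described above (car-model choice at the upper level, new-vs-used choice within each model nest) can be sketched as a nested logit probability calculation. The utilities and the dissimilarity parameter below are illustrative, not the paper's estimates:

```python
import math

def nested_logit_probs(utilities, dissimilarity):
    """utilities: {model: {"new": u, "used": u}}; returns joint choice probabilities.
    A dissimilarity (inclusive value) parameter in (0, 1] keeps the model
    consistent with utility maximization."""
    # Inclusive value of each nest (log-sum over the scaled within-nest utilities)
    inclusive = {m: math.log(sum(math.exp(u / dissimilarity) for u in alts.values()))
                 for m, alts in utilities.items()}
    denom = sum(math.exp(dissimilarity * iv) for iv in inclusive.values())
    probs = {}
    for m, alts in utilities.items():
        p_model = math.exp(dissimilarity * inclusive[m]) / denom      # upper level
        within_denom = sum(math.exp(u / dissimilarity) for u in alts.values())
        for alt, u in alts.items():                                   # lower level
            probs[(m, alt)] = p_model * math.exp(u / dissimilarity) / within_denom
    return probs

probs = nested_logit_probs(
    {"Jetta": {"new": 1.0, "used": 0.5}, "Elantra": {"new": 0.8, "used": 0.7}},
    dissimilarity=0.6,
)
```

Raising the "used" utility of one model (to mimic a price discount) and recomputing the probabilities reproduces the kind of share rebalancing explored in the simulations above.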


Design and Implementation of Asynchronous Memory for Pipelined Bus (파이프라인 방식의 버스를 위한 비 동기식 주 기억장치의 설계 및 구현)

  • Hahn, Woo-Jong;Kim, Soo-Won
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.11
    • /
    • pp.45-52
    • /
    • 1994
  • In recent years, low-cost, high-performance microprocessors have led to the construction of medium-scale shared-memory multiprocessor systems with a shared bus. Such multiprocessor systems are heavily influenced by the structure of their memory systems, and memory systems become a more important design factor as microprocessors get faster. Even though local cache memories are very common in such systems, the latency of access to the shared memory limits throughput and scalability. There has been much research on memory structures for multiprocessor systems. In this paper, an asynchronous memory architecture is proposed to utilize the bandwidth of the system bus effectively as well as to provide implementation flexibility. The effect of the proposed architecture is shown by simulation. As our model of the shared bus we chose HiPi+Bus, which was designed by ETRI to meet the requirements of the High-Speed Midrange Computer System. The simulation was done using the Verilog hardware description language. The simulation showed that the proposed asynchronous memory architecture keeps the utilization of the system bus low enough to provide better throughput and scalability. Implementation trade-offs are also described in this paper. The asynchronous memory was implemented and tested in a prototype testing environment using a test program; this intensive test validated the operation of the proposed architecture.
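
Why an asynchronous (split-transaction) memory keeps bus utilization low can be shown with a toy cycle count: in a locked access the bus is held through the entire memory latency, while in a split transaction the bus is released between the request and the response. The cycle counts below are illustrative assumptions, not HiPi+Bus parameters:

```python
def bus_busy_cycles(n_accesses, request, memory_latency, response, split):
    """Cycles the shared bus is held for n_accesses memory reads.
    If split, the bus is free while the memory module does its work."""
    per_access = request + response if split else request + memory_latency + response
    return n_accesses * per_access

# 100 reads, 2-cycle request, 10-cycle memory latency, 2-cycle response:
locked = bus_busy_cycles(100, request=2, memory_latency=10, response=2, split=False)
split_ = bus_busy_cycles(100, request=2, memory_latency=10, response=2, split=True)
```

The freed cycles are what allow other processors' requests to be pipelined on the bus, which is the source of the throughput and scalability gain reported above.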
