• Title/Summary/Keyword: software system


Medical Information Dynamic Access System in Smart Mobile Environments (스마트 모바일 환경에서 의료정보 동적접근 시스템)

  • Jeong, Chang Won;Kim, Woo Hong;Yoon, Kwon Ha;Joo, Su Chong
    • Journal of Internet Computing and Services / v.16 no.1 / pp.47-55 / 2015
  • Recently, hospital information system environments have tended to combine various SMART technologies. Accordingly, various smart devices, such as smartphones and tablet PCs, are utilized in the medical information system, and these environments consist of various applications executing on heterogeneous sensors, devices, systems, and networks. In such hospital information system environments, applying security services by traditional access control methods causes problems. Most existing security systems use an access control list structure, in which access is permitted only as defined by an access control matrix of, e.g., client name and service object method name. The major problem with this static approach is that it cannot quickly adapt to changed situations. Hence, new security mechanisms are needed that are more flexible and can easily be adapted to various environments with very different security requirements. In addition, research is needed to address changes in the patient's medical treatment. In this paper, we suggest a dynamic approach to medical information systems in smart mobile environments. We focus on how medical information systems are accessed according to dynamic access control methods based on the existing hospital information system environment. The physical environment consists of mobile X-ray imaging devices, dedicated mobile and general smart devices, a PACS, an EMR server, and an authorization server. The synchronization and monitoring services for the mobile X-ray imaging equipment were developed on the .NET Framework under the Windows 7 OS, and the dedicated smart device application implements dynamic access services through JSP and the Java SDK on the Android OS. Medical information exchanged between the hospital's PACS, the mobile X-ray imaging devices, and the dedicated smart devices is based on the DICOM medical image standard, and EMR information is based on HL7. In order to provide the dynamic access control service, we classify the context of patients according to bio-information such as oxygen saturation, heart rate, blood pressure, and body temperature. Event trace diagrams are divided into two parts: the general situation and the emergency situation. We designed dynamic access to the medical care information by authentication method; the authentication information contains ID/password, role, position and working hours, and emergency certification codes for emergency patients. In the general situation, the dynamic access control method grants access to medical information according to the values of the authentication information; in an emergency, access is granted by an emergency code, without the full authentication information. We also constructed an integrated medical information database schema consisting of medical information, patient, medical staff, and medical image information according to medical information standards. Finally, we show the usefulness of the dynamic access application service on smart devices through execution results of the proposed system according to patient contexts such as the general and emergency situations. In particular, the proposed system provides effective medical information services with smart devices in emergency situations through dynamic access control methods. As a result, we expect the proposed system to be useful for u-hospital information systems and services.
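A minimal sketch of the two-path access decision this abstract describes may help: in the general situation, access depends on the authentication attributes (role, position, working hours); in an emergency, detected from bio-signals, an emergency certification code suffices. All thresholds, roles, field names, and the code registry below are hypothetical illustrations, not the paper's implementation.

```python
from datetime import datetime

VALID_EMERGENCY_CODES = {"EMG-2024-001"}   # hypothetical code registry

def is_emergency(vitals):
    # Thresholds are illustrative stand-ins for the paper's context rules.
    return (vitals["spo2"] < 90
            or vitals["heart_rate"] > 140
            or vitals["body_temp"] > 39.5)

def may_access(user, vitals, emergency_code=None):
    if is_emergency(vitals):
        # Emergency path: a valid emergency certification code replaces
        # the full authentication information.
        return emergency_code in VALID_EMERGENCY_CODES
    # General path: identity, role, and working hours are all checked.
    hour = datetime.now().hour
    on_duty = user["shift_start"] <= hour < user["shift_end"]
    return (user["authenticated"]
            and user["role"] in {"doctor", "nurse"}
            and on_duty)

vitals = {"spo2": 85, "heart_rate": 150, "body_temp": 38.0}
user = {"authenticated": False, "role": "doctor", "shift_start": 9, "shift_end": 18}
print(may_access(user, vitals, emergency_code="EMG-2024-001"))  # True: emergency path
```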

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Some of its important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the inception of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition of framework development. Given the trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare some of the deep learning frameworks, so we compare three that can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by utilizing the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. The criterion is simply based on the lengths of the codes; the learning curve and the ease of coding are not the main concern. According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that these frameworks provide more abstraction than Theano. We need to mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method that we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not kept identical: the code written in CNTK had to be run in a PC environment without a GPU, where codes execute as much as 50 times slower than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate each framework. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers also becomes important. And for those learning deep learning, the availability of sufficient examples and references matters as well.
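To make the automatic-differentiation point concrete, here is a minimal sketch using TensorFlow's eager `GradientTape` API (a newer interface than the 2017-era graph API the paper benchmarks): the forward pass records a computational graph, and the gradient of any node with respect to any variable is obtained by applying the chain rule along its edges.

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    # Forward pass: each operation adds nodes/edges to a computational graph.
    y = x**2 + 2.0 * x

# Backward pass: partial derivatives on the recorded edges are combined
# by the chain rule to differentiate y with respect to x.
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # dy/dx = 2x + 2 = 8.0 at x = 3.0
```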

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.39-70 / 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to research the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and then accurately figure out customer preferences. In the main concept of existing data-based survey methods, the sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of the frequently used words in the collected text documents. To research the preference for a particular product, the existing approach (1) collects review posts related to the product from several product review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and removal of stop words; (3) classifies the polarity (positive or negative sense) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. Furthermore, the existing approach automatically finds important sentences (or phrases) carrying positive or negative meaning toward the product; a toy sketch of steps (2)-(4) is given below. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want to see a summary note of the positive points in the 'car design' aspect as well as the negative points in the same aspect. They also want more useful information regarding other aspects such as 'car quality', 'car performance', and 'car service.' Such information will enable customers to make good choices when they attempt to purchase brand-new vehicles. In addition, automobile makers will be able to figure out the preferences and positive/negative points for new models on the market, and in the near future the weak points of the models can be improved based on the sentiment analysis. For this, the existing approach computes the sentiment score of each sentence (or phrase) and then selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings and is hard to apply to real applications. Its main disadvantages are as follows. (1) The main aspects (e.g., car design, quality, performance, and service) of a product (e.g., Hyundai Sonata) are not considered. As a result of sentiment analysis without considering aspects, only a summary note with the positive and negative ratios of the product and the top-k sentences (or phrases) with the highest sentiment scores over the entire corpus is reported to customers and car makers. This is not enough; the main aspects of the target product need to be considered in the sentiment analysis. (2) In general, since the same word has different meanings across different domains, a sentiment lexicon proper to each domain needs to be constructed. An efficient way to construct the sentiment lexicon per domain is required because sentiment lexicon construction is labor intensive and time consuming.
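The following toy sketch illustrates steps (2)-(4) of the existing lexicon-based approach. The lexicon and the review sentences are hypothetical; real systems use domain lexicons built by experts, which is exactly the cost the paper aims to reduce.

```python
# Hypothetical toy lexicon and review sentences (illustration only).
sentiment_lexicon = {"sleek": +1, "quiet": +1, "noisy": -1, "cramped": -1}

sentences = [
    "the design is sleek",
    "the cabin is noisy on the highway",
    "the ride is quiet",
]

def polarity(sentence):
    # Step (3): sum lexicon scores of the words in the sentence.
    score = sum(sentiment_lexicon.get(w, 0) for w in sentence.split())
    return "pos" if score > 0 else "neg" if score < 0 else "neutral"

# Step (4): positive/negative ratios over the whole collection.
labels = [polarity(s) for s in sentences]
pos_ratio = labels.count("pos") / len(labels)
neg_ratio = labels.count("neg") / len(labels)
print(f"positive ratio {pos_ratio:.2f}, negative ratio {neg_ratio:.2f}")
```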
To address the above problems, in this article we propose a novel product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using the aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed for each aspect. Unlike the existing approach, using hidden topics lets experts construct the sentiment lexicon easily and quickly. Furthermore, by reinforcing topic semantics, we can improve the accuracy of product reputation mining well beyond that of the existing approach. In the experiments, we collected large sets of review documents on domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; showed the top-k positive and negative summaries per aspect; and conducted statistical analysis. Our experimental results clearly show the effectiveness of the proposed method compared with the existing method.
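A minimal sketch of the topic-extraction step, using scikit-learn's LDA; the paper's own algorithm and corpus (K5, SM5, Avante reviews) are not reproduced here, and the four toy reviews are hypothetical. The point is that latent topics surface candidate aspects without a hand-built lexicon.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "sleek exterior design and modern lines",
    "engine performance is strong and responsive",
    "dealer service was slow but polite",
    "the design of the dashboard feels premium",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Top words per latent topic; an analyst would map such topics to
# aspects like "design", "performance", or "service".
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```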

The nanoleakage patterns of experimental hydrophobic adhesives after load cycling (Load cycling에 따른 소수성 실험용 상아질 접착제의 nanoleakage 양상)

  • Sohn, Suh-Jin;Chang, Ju-Hae;Kang, Suk-Ho;Yoo, Hyun-Mi;Cho, Byeong-Hoon;Son, Ho-Hyun
    • Restorative Dentistry and Endodontics / v.33 no.1 / pp.9-19 / 2008
  • The purpose of this study was (1) to compare the nanoleakage patterns of a conventional 3-step etch-and-rinse adhesive system and two experimental hydrophobic adhesive systems, and (2) to investigate the change in nanoleakage patterns after load cycling. Two hydrophobic experimental adhesives were prepared: an ethanol-containing adhesive (EA) and a methanol-containing adhesive (MA). Thirty extracted human molars were embedded in resin blocks and the occlusal thirds of the crowns were removed. The polished dentin surfaces were etched with a 35% phosphoric acid etching gel and rinsed with water. Scotchbond Multi-Purpose (MP), EA, and MA were used for the bonding procedure. Z-250 composite resin was built up on the adhesive-treated surfaces. Five teeth of each dentin adhesive group were subjected to mechanical load cycling. The teeth were sectioned into 2 mm thick slabs and then stained with 50% ammoniacal silver nitrate. Ten specimens from each group were examined under a scanning electron microscope in backscattered electron mode. All photographs were analyzed using image analysis software. Three regions of each specimen were used for evaluation of the silver uptake within the hybrid layer. The area of silver deposition was calculated and expressed as a gray value. Data were statistically analyzed by two-way ANOVA, and post-hoc testing of multiple comparisons was done with Scheffe's test. Silver particles were observed in all groups; however, they were more sparsely distributed in the EA and MA groups than in the MP group (p < .0001). There were no changes in nanoleakage patterns after load cycling.
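A hedged sketch of the kind of image analysis the abstract mentions: quantifying silver deposition in a backscattered-electron micrograph as an area fraction and a mean gray value. The file name and threshold are purely illustrative; the study's actual software and calibration are not described in the abstract.

```python
import numpy as np
from PIL import Image

# Load a hypothetical BSE micrograph as an 8-bit grayscale array.
img = np.asarray(Image.open("bse_specimen.tif").convert("L"), dtype=float)

# Silver deposits appear bright in BSE mode; treat pixels above a
# hypothetical threshold as silver uptake within the hybrid layer.
silver_mask = img > 180
silver_area_fraction = silver_mask.mean()
mean_gray_value = img[silver_mask].mean() if silver_mask.any() else 0.0

print(f"silver area fraction: {silver_area_fraction:.3f}")
print(f"mean gray value of deposits: {mean_gray_value:.1f}")
```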

Decreased White Matter Structural Connectivity in Psychotropic Drug-Naïve Adolescent Patients with First Onset Major Depressive Disorder (정신과적 투약력이 없는 초발 주요 우울장애 청소년 환아들에서의 백질 구조적 연결성 감소)

  • Suh, Eunsoo;Kim, Jihyun;Suh, Sangil;Park, Soyoung;Lee, Jeonho;Lee, Jongha;Kim, In-Seong;Lee, Moon-Soo
    • Korean Journal of Psychosomatic Medicine / v.25 no.2 / pp.153-165 / 2017
  • Objectives: Recent neuroimaging studies focus on dysfunctions in the connectivity between cognitive and emotional circuits: the anterior cingulate cortex connects the dorsolateral prefrontal cortex and orbitofrontal cortex to the limbic system. Previous studies on pediatric depression using DTI have reported decreased neural connectivity in several brain regions, including the amygdala, anterior cingulate cortex, and superior longitudinal fasciculus. We compared the neural connectivity of psychotropic drug-naïve adolescent patients with a first onset of major depressive episode with that of healthy controls using DTI. Methods: Adolescent psychotropic drug-naïve patients (n=26; 10 male, 16 female; age range 13-18 years) who visited the Korea University Guro Hospital and were diagnosed with first onset major depressive disorder were enrolled, and healthy controls (n=27; 5 male, 22 female; age range 12-17 years) were recruited. Psychiatric interviews, complete psychometrics including IQ and HAM-D, and MRI including diffusion-weighted image acquisition were conducted prior to antidepressant administration to the patients. Fractional anisotropy (FA) and radial, mean, and axial diffusivity were estimated using DTI, and FMRIB Software Library Tract-Based Spatial Statistics (TBSS) was used for statistical analysis. Results: We did not observe any significant difference in the whole-brain analysis. However, an ROI analysis of the right superior longitudinal fasciculus yielded 3 clusters with a significant decrease of FA in the patient group. Conclusions: Patients with adolescent major depressive disorder showed a statistically significant FA decrease in the DTI-based structure compared with healthy controls. We therefore suppose that DTI can be used as a biomarker in psychotropic drug-naïve adolescent patients with first onset major depressive disorder.
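The DTI scalar measures named in the abstract have standard closed forms in terms of the diffusion tensor eigenvalues (λ1 ≥ λ2 ≥ λ3). A minimal sketch, independent of the FSL-TBSS pipeline the study actually used; the example eigenvalues are illustrative.

```python
import numpy as np

def dti_measures(l1, l2, l3):
    """Standard DTI scalars from sorted tensor eigenvalues l1 >= l2 >= l3."""
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    ad = l1                                        # axial diffusivity
    rd = (l2 + l3) / 2.0                           # radial diffusivity
    fa = np.sqrt(0.5 * ((l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2)
                 / (l1**2 + l2**2 + l3**2))        # fractional anisotropy
    return fa, md, ad, rd

# Example eigenvalues (in 10^-3 mm^2/s) typical of white matter:
fa, md, ad, rd = dti_measures(1.7, 0.4, 0.3)
print(f"FA={fa:.3f}, MD={md:.3f}, AD={ad:.3f}, RD={rd:.3f}")
```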

Development of a Movement Analysis Program and Its Feasibility Test in Stereotactic Body Radiation Therapy (복부부위의 체부정위방사선치료시 호흡에 의한 움직임분석 프로그램 개발 및 유용성 평가)

  • Shin, Eun-Hyuk;Han, Young-Yih;Kim, Jin-Sung;Park, Hee-Chul;Shin, Jung-Suk;Ju, Sang-Gyu;Lee, Ji-Hea;Ahn, Jong-Ho;Lee, Jai-Ki;Choi, Doo-Ho
    • Progress in Medical Physics / v.22 no.3 / pp.107-116 / 2011
  • Respiratory-gated radiation therapy and stereotactic body radiation therapy require that tumor motion during each treatment be identical to the motion detected in the treatment planning CT. Therefore, this study developed a system for monitoring and analyzing tumor motion during treatment, employing RPM data, gated setup OBI images, and data analysis software. A respiratory training and guiding program that improves the regularity of breathing was applied to the patients. The breathing signal was obtained by RPM, and the data recorded in the 4D console was read after treatment. Setup OBI images gated at the 0% and 50% breathing phases were used to detect the tumor motion range in the cranio-caudal direction. By matching the RPM data recorded at the OBI imaging times, a factor converting RPM motion to tumor motion was computed. The RPM data was entered into the in-house data analysis software, and the maximum, minimum, and average of the breathing motion, as well as the standard deviations of the motion amplitude and period, were computed; the results are exported to an Excel file. The conversion factor was applied to the analyzed data to estimate the tumor motion. The accuracy of the developed method was tested using a moving phantom, and its efficacy was evaluated on 10 stereotactic body radiation therapy patients. For a sine-wave phantom motion with a 4 s period and 2 cm peak-to-peak amplitude, the measured period was slightly larger (4.052 s) and the measured amplitude smaller (1.952 cm). In the patient treatments, one patient was evaluated as not qualified for SBRT owing to the irregularity of breathing, and in one case the treatment was changed to respiratory-gated treatment because the motion range of the tumor was larger than the planned motion. The developed method and data analysis program were useful for estimating tumor motion during treatment.
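A hedged sketch of the analysis the in-house program performs on the RPM breathing trace: per-cycle period statistics, peak-to-peak amplitude, and a conversion factor mapping RPM marker motion to tumor motion. The signal, sampling rate, and factor below are illustrative only; the paper derives its factor per patient from the gated OBI images.

```python
import numpy as np

t = np.arange(0.0, 60.0, 1.0 / 30.0)             # ~30 Hz RPM sampling (assumed)
rpm = 1.0 * np.sin(2.0 * np.pi * t / 4.0)        # toy 4 s, 2 cm peak-to-peak trace

# Per-cycle period statistics from successive signal peaks.
peaks = np.where((rpm[1:-1] > rpm[:-2]) & (rpm[1:-1] > rpm[2:]))[0] + 1
periods = np.diff(t[peaks])
amplitude_pp = rpm.max() - rpm.min()             # peak-to-peak RPM amplitude

# Conversion factor from matching RPM data to the gated 0%/50% OBI images
# (the value below is hypothetical).
rpm_to_tumor = 1.8
tumor_motion = amplitude_pp * rpm_to_tumor

print(f"mean period {periods.mean():.2f} s (SD {periods.std():.2f} s)")
print(f"estimated cranio-caudal tumor motion {tumor_motion:.2f} cm")
```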

Development of a Testing Environment for Parallel Programs based on MSC Specifications (MSC 명세를 기반으로 한 병렬 프로그램 테스팅 환경의 개발)

  • Kim, Hyeon-Soo;Bae, Hyun-Seop;Chung, In-Sang;Kwon, Yong-Rae;Chung, Young-Sik;Lee, Byung-Sun;Lee, Dong-Gil
    • Journal of KIISE:Computing Practices and Letters / v.6 no.2 / pp.135-149 / 2000
  • Most prior work on testing parallel programs has concentrated on how to guarantee reproducibility by employing event traces exercised during executions of a program. Consequently, little work has been done on generating test cases, especially from specifications produced during the software development process. In this work, we devise techniques for deriving test cases automatically from specifications written in Message Sequence Charts (MSCs), which are widely used in the telecommunication area, and develop a testing environment for performing module testing of parallel programs with the derived test cases. To derive test cases from MSCs, we have to uncover the causality relations among the events embedded implicitly in the MSCs. For this, we devise methods for adapting vector time stamping to MSCs; valid event sequences satisfying the causality relations are then generated and used as test cases, as sketched below. The generated test cases, written in TTCN, are translated into CHILL source code, which interacts with the target module and tests the validity of its behavior. Since the testing method developed in this work extracts test cases from MSC specifications produced during the telecommunications software development process, it is not necessary to describe auxiliary specifications for testing. In addition, since adapting vector time stamping generates the event sequences automatically, the event sequences generated for the whole system can also be used for individual testing purposes.
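The paper's exact derivation is not reproduced here; this minimal sketch only illustrates vector time stamping, the mechanism the abstract names: each process keeps a vector clock, incremented on local events and merged on message receipt, so the happened-before causality among MSC events can be recovered and used to enumerate valid event sequences. The two-process scenario is hypothetical.

```python
processes = ["P1", "P2"]
idx = {p: i for i, p in enumerate(processes)}
clocks = {p: [0] * len(processes) for p in processes}

def tick(p):
    """A local MSC event on process p advances p's own component."""
    clocks[p][idx[p]] += 1
    return tuple(clocks[p])

def send(sender):
    stamp = tick(sender)                 # the send event itself
    return stamp, list(clocks[sender])   # message carries the sender's clock

def receive(receiver, msg_clock):
    # Merge: the receive event causally follows both the sender's
    # history and the receiver's own history.
    clocks[receiver] = [max(a, b) for a, b in zip(clocks[receiver], msg_clock)]
    return tick(receiver)

e_send, msg = send("P1")
e_recv = receive("P2", msg)
print(e_send, e_recv)   # (1, 0) precedes (1, 1): a causally valid ordering
```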


Beam Shaping by Independent Jaw Closure in Stereotactic Radiotherapy (정위방사선치료 시 독립턱 부분폐쇄를 이용하는 선량분포개선 방법)

  • Ahn Yong Chan;Cho Byung Chul;Choi Dong Rock;Kim Dae Yong;Huh Seung Jae;Oh Do Hoon;Bae Hoonsik;Yeo In Hwan;Ko Young Eun
    • Radiation Oncology Journal / v.18 no.2 / pp.150-156 / 2000
  • Purpose: Stereotactic radiation therapy (SRT) can deliver highly focused radiation to a small, spherical target lesion with a very high degree of mechanical accuracy. For non-spherical and large lesions, however, inclusion of neighboring normal structures within the high-dose radiation volume is inevitable in SRT. This work reports beam shaping using partial closure of the independent jaws in SRT, and the verification of the dose calculation and dose display using home-made software. Materials and Methods: The authors adopted the idea of partially closing one or more independent collimator jaw(s), in addition to the circular collimator cones, to shield neighboring normal structures while keeping the target lesion within the radiation beam field at all angles along the arc trajectory. The output factors (OFs) and tissue-maximum ratios (TMRs) were measured using a micro ion chamber in a water phantom dosimetry system and compared with theoretical calculations. A film dosimetry procedure was performed to obtain the depth dose profiles at 5 cm, and these were also compared with the theoretical calculations, where the radiation dose depends on the actual area of irradiation; a geometry sketch of this area is given below. The authors incorporated this algorithm into the home-made SRT software for isodose calculation and display, and tried it on an example case with a single brain metastasis. The dose-volume histograms (DVHs) of the planning target volume (PTV) and the normal brain derived from the control plan were compared with those derived from a plan using the same arc arrangement plus independent collimator jaw closure. Results: When using the 5.0 cm diameter collimator, the measured OFs and TMRs with one independent jaw set at 30 mm (unblocked), 15.5 mm, 8.6 mm, and 0 mm from the central beam axis showed good agreement with the theoretical calculations, within 0.5% and 0.3% error ranges. The dose profiles at 5 cm depth obtained by film dosimetry also agreed very well with the theoretical calculations. The isodose profiles obtained with the home-made software demonstrated a slightly more conformal dose distribution around the target lesion when using independent jaw closure: the DVHs of the PTV were almost equivalent for the two plans, while the DVHs for the normal brain showed that less normal brain volume received a high radiation dose with this modification than with the control plan employing the circular collimator cone only. Conclusions: With the beam shaping modification using independent jaw closure, the authors have realized wider clinical application of SRT with more conformal dose planning. The authors believe that SRT, with such beam shaping ideas and efforts, should no longer be limited to small spherical lesions, but should be applied more widely to irregularly shaped tumors in the intracranial and head and neck regions.
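A hedged geometry sketch (not the authors' algorithm) of the irradiated area once a jaw partially closes over a circular cone: the open area is the circle minus the circular segment cut off by the jaw edge. The jaw positions are those quoted in the abstract for the 5.0 cm cone.

```python
import math

def open_area(cone_diameter_mm, jaw_offset_mm):
    """Area (mm^2) left open when one jaw edge sits jaw_offset_mm from
    the central axis of a circular cone of the given diameter."""
    r = cone_diameter_mm / 2.0
    d = jaw_offset_mm
    if d >= r:                      # jaw beyond the field edge: unblocked
        return math.pi * r * r
    # Circular segment beyond a chord at distance d from the center.
    segment = r * r * math.acos(d / r) - d * math.sqrt(r * r - d * d)
    return math.pi * r * r - segment

# Jaw positions from the abstract, for the 5.0 cm (50 mm) cone:
for d in (30.0, 15.5, 8.6, 0.0):
    print(f"jaw at {d:4.1f} mm -> open area {open_area(50.0, d):8.1f} mm^2")
```

At 0 mm the jaw bisects the field, leaving exactly half the cone area open, which is why the measured output factors fall as the jaw advances.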


A Study on the Efficiency Enhancement Plan of the Broadcasting: Advertising Industry Infrastructure Construction Direction in Korea (한국 방송광고산업 인프라 구축방향에 관한 효율성 제고방안 연구)

  • Yeom, Sung-Won
    • Korean journal of communication and information / v.22 / pp.131-166 / 2003
  • The opening of the advertising market and the introduction of free competition have made competition among advertising agencies harsher, and agencies do their best to execute their advertising more efficiently and scientifically. In reality, however, the broadcasting advertising industry in Korea has not built enough infrastructure for such systematic activities compared with advanced countries. We therefore need to grasp the present conditions and draw up a timetable for constructing the most necessary infrastructure first. Regarding hardware infrastructure in the advertising industry, the digitalization of broadcasting and the convergence of broadcasting with telecommunications make its construction urgent. But since the ad agencies compete with each other, it is difficult for them to build common hardware infrastructure enthusiastically. Thus, hardware infrastructure for the advertising industry needs to be built as a matter of policy, and its construction should be carried out systematically, not for short-term effects but for long-term objectives. It is also most important that the industry's software infrastructure be trusted by all ad agencies. These days, ad agencies tend not to trust important information related to advertising activities, such as ratings data and advertising transaction information, and they do not share or communicate information on advertising industry trends, research trends, and advertisement-related matters. So it is also urgent to build online and offline database systems. Finally, for the development of brainware infrastructure in the advertising industry, it is most necessary to activate cooperation between universities and advertising agencies: universities need to invite advertising experts to teach students practical knowledge, and ad agencies need to recruit students who want to develop their careers in the advertising industry. In conclusion, the advertising industry in Korea should solve these tasks for the development of its infrastructure through mutual cooperation and harmony, rationally and efficiently.


CIA-Level Driven Secure SDLC Framework for Integrating Security into SDLC Process (CIA-Level 기반 보안내재화 개발 프레임워크)

  • Kang, Sooyoung;Kim, Seungjoo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.5
    • /
    • pp.909-928
    • /
    • 2020
  • From the early 1970s, the US government began to recognize that penetration testing could not assure the security quality of products: the results of penetration testing, such as identified vulnerabilities and faults, can vary depending on the capabilities of the team. In other words, no penetration team can provide assurance, because "no vulnerabilities were found" is not the same as "the product has no vulnerabilities." The U.S. government therefore realized that, in order to improve the security quality of products, the development process itself should be managed systematically and strictly, and from the 1980s it began to publish various standards related to development methodology and evaluation/procurement systems embedding the "security-by-design" concept. Security-by-design means reducing a product's complexity by considering security from the initial phases of the development lifecycle, such as requirements analysis and design, ultimately to achieve product trustworthiness. Since then, the security-by-design concept has spread to the private sector, from 2002 under the name Secure SDLC at Microsoft and IBM, and is currently used in various fields such as automotive and advanced weapon systems. The problem, however, is that it is not easy to implement in the field because the standards and guidelines related to Secure SDLC contain only abstract and declarative content. Therefore, in this paper, we present a new framework for specifying the level of Secure SDLC desired by an enterprise. Our proposed CIA (functional Correctness, safety Integrity, security Assurance) level-based security-by-design framework combines an evidence-based security approach with the existing Secure SDLC. Using our methodology, one can, first, quantitatively show the gap in Secure SDLC process level between a competitor and one's own company. Second, it is very useful when building a Secure SDLC in the field, because detailed activities and documents for building the desired level of Secure SDLC can easily be derived.
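A purely illustrative sketch of the kind of quantitative gap analysis the framework enables: score each CIA dimension (functional Correctness, safety Integrity, security Assurance) on an ordinal level and compare a company against a competitor. The dimension names come from the abstract; the level scale and the scores below are hypothetical, not the paper's scheme.

```python
from dataclasses import dataclass

@dataclass
class CIALevel:
    correctness: int   # hypothetical 0-4 maturity level per dimension
    integrity: int
    assurance: int

    def gap(self, other: "CIALevel") -> dict:
        """Per-dimension level gap versus a reference organization."""
        return {
            "correctness": other.correctness - self.correctness,
            "integrity": other.integrity - self.integrity,
            "assurance": other.assurance - self.assurance,
        }

company = CIALevel(correctness=2, integrity=1, assurance=2)
competitor = CIALevel(correctness=3, integrity=2, assurance=2)
print(company.gap(competitor))  # positive values mark dimensions to improve
```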