• Title/Summary/Keyword: Problem-solving method (문제해결방식)


Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.81-96
    • /
    • 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Content-based video analysis using visual features has been the main source for video information retrieval and browsing: content in video can be represented with content-based analysis techniques, which can extract various features from audio-visual data such as frames, shots, colors, texture, or shape, and similarity between videos can be measured through content-based analysis. However, a movie, one of the typical types of video data, is organized by story as well as by audio-visual data. This causes a semantic gap between the significant information recognized by people and the information resulting from content-based analysis when content-based video analysis using only low-level audio-visual data is applied to movie information retrieval. The reason for this semantic gap is that the story line of a movie is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be executed by content-based analysis techniques alone; a formal model is needed that can determine relationships among movie contents and track meaning changes, in order to accurately retrieve story information. Recently, story-based video analysis techniques using the social network concept have emerged for story information retrieval. These approaches represent a story by using the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in relationships between characters according to story development. Second, they miss profound information, such as emotions indicating the identities and psychological states of the characters. Emotion is essential to understanding a character's motivation, conflict, and resolution. 
Third, they do not take account of the events and background that contribute to the story. As a result, this paper reviews the importance and weaknesses of previous video analysis methods ranging from content-based approaches to story analysis based on social networks. We also suggest necessary elements, such as character, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, an extensive English thesaurus that offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and we present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of the character's dialogue in the script and track the character's inner nature using social network concepts, such as in-degree (incoming links) and out-degree (outgoing links); we propose a method that can track a character's inner nature by tracing indices such as the degree, in-degree, and out-degree of the character network as the movie progresses. Finally, the spatial background where characters meet and where events take place is an important element of the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie. Important places, where main characters first meet or where they stay for long periods of time, can be extracted through this scene map. In view of the aforementioned three elements (character, event, background), we extract a variety of information related to the story and evaluate the performance of the proposed method. 
We can track story information extracted over time and detect a change in the character's emotion or inner nature, spatial movement, and conflicts and resolutions in the story.
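The character-network indices this abstract describes (degree, in-degree, out-degree, traced per scene) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the edge representation as (speaker, listener) pairs, the per-scene windowing, and the character labels are all assumptions.

```python
# Sketch: directed "who speaks to whom" edges per scene, from which
# in-degree and out-degree of each character are computed and tracked
# over time. Edges and labels below are illustrative assumptions.
from collections import Counter


def degrees(edges):
    """edges: list of (speaker, listener) pairs for one scene window."""
    out_deg, in_deg = Counter(), Counter()
    for speaker, listener in edges:
        out_deg[speaker] += 1  # outgoing link: character addresses someone
        in_deg[listener] += 1  # incoming link: character is addressed
    return out_deg, in_deg


def track_over_time(scenes):
    """Per-scene (out_degree, in_degree) counters, so changes in a
    character's network position can be traced as the story progresses."""
    return [degrees(edges) for edges in scenes]
```

A character whose in-degree rises while their out-degree stays flat, for example, would show a shift toward being talked about rather than acting.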

The study of stereoscopic editing process with applying depth information (깊이정보를 활용한 입체 편집 프로세스 연구)

  • Baek, Kwang-Ho;Kim, Min-Seo;Han, Myung-Hee
    • Journal of Digital Contents Society
    • /
    • v.13 no.2
    • /
    • pp.225-233
    • /
    • 2012
  • 3D stereoscopic image contents have been emerging as the blue chip of the next-generation contents market since the . However, the 3D contents created commercially in Korea have all failed at the box office, because the quality of Korean 3D contents is much lower than that of overseas contents and the current 3D post-production process is based on 2D. Considering these facts, the 3D editing process is connected with the quality of the contents. In the current 3D editing process, as seen in the production case of , editors work on a 2D-based system, then check the result on a 3D display system and modify it if there are any problems. To improve this situation, I suggest making the 3D editing process more objective by visualizing the depth data applied in composition work, such as disparity maps and depth maps, within the current 3D editing process. The proposed process was used in the music drama and compared with that of the film . The 3D values could be checked among cuts that had changed considerably compared with those of , while the 3D values of were generally consistent. Since the current process is based on an artist's subjective sense of 3D, results can change with the condition and state of the artist. Furthermore, the positive range cannot be predicted, so the cubic effect of a space may be distorted when cuts of the same or a confined space show different 3D values. On the other hand, objective 3D editing based on visualized depth data can keep the cubic effect consistent within the same space and across the whole content, which will enrich 3D contents and can even solve problems such as distortion of the cubic effect and visual fatigue.
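The case for visualizing depth data rests on the standard stereo relation between disparity and depth, Z = f·B/d: once disparity maps are expressed as metric depth, per-cut 3D values can be compared numerically rather than by an artist's eye. A minimal sketch follows; the rig parameters (focal length, baseline) are illustrative assumptions, not values from the paper.

```python
# Sketch: convert pixel disparity to metric depth (Z = f * B / d) so the
# depth range of each cut can be checked for consistency across cuts.
# focal_px and baseline_m below are illustrative assumptions.

def disparity_to_depth(disparity_px, focal_px=1000.0, baseline_m=0.065):
    """Depth in meters for a pixel disparity, given focal length (px)
    and interaxial baseline (m)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


def depth_range(disparity_map, **rig):
    """Min/max depth over one cut's disparity map, ignoring holes (d <= 0).
    Comparing these ranges across cuts of the same space flags 3D-value jumps."""
    depths = [disparity_to_depth(d, **rig)
              for row in disparity_map for d in row if d > 0]
    return min(depths), max(depths)
```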

Der Vollrauschtatbestand de lege ferenda (완전명정죄 처벌규정의 입법론)

  • Seong, Nak-Hyon
    • Journal of Legislation Research
    • /
    • no.55
    • /
    • pp.137-166
    • /
    • 2018
  • If something criminal happens after heavy drinking, the overall conduct is to be recognized as deserving of punishment and punishable. But under the principle of culpability, a person acts without guilt who, at the time of committing the act, is incapable of appreciating the wrongfulness of the act or of acting in accordance with that appreciation (principle of coincidence). The legal construct of the "actio libera in causa" serves to circumvent this gap, which in many cases is felt to be undesirable from the standpoint of criminal policy. In this context, the offense of full intoxication (Vollrauschtatbestand) also takes on heightened significance in practice. When incorporating the Vollrauschtatbestand into the law, the German legislator was well aware that the provision constituted an exception to the rules on the attribution of guilt. It nevertheless chose the form of an independent offense in order to make the breach of the pure principle of culpability tolerable. The Vollrauschtatbestand is an abstract endangerment offense, so that the unlawful act committed while intoxicated is merely an objective condition of punishability, which in substance contains a rule on the attribution of guilt, namely an exception to the general rules of guilt attribution. Nevertheless, the Vollrauschtatbestand is to be regarded as a legitimate supplement to the principles of guilt attribution described in those rules. It is consistent with the principle of culpability, provided that knowledge of the dangerousness of the state of intoxication with respect to the commission of offenses is required as a subjective element of the offense of full intoxication.

On the Problem of Virtue in Confucian and Neoconfucian Philosophy (유학 및 신유학 철학에서의 덕의 문제)

  • Gabriel, Werner
    • (The)Study of the Eastern Classic
    • /
    • no.50
    • /
    • pp.89-120
    • /
    • 2013
  • The concept of virtue seems to be one of the rare cases where the European and the Chinese traditions coincide. The meaning of the Latin word virtus and of the Greek areté seems to be similar to the Chinese dé 德. Most striking in virtue is that it is a capacity for self-realisation through action which is unique to man. On the other hand, there is something physical about it. It is the strength to do something. This strength overcomes the resistance of what is naturally given; it transforms the world, turning the natural world into a human one. In the Chinese tradition, dé 德, i.e. virtue, is therefore always connected with dào 道, the totality of natural forces. In the Chinese tradition, as opposed to the European one, virtue is itself considered to be a natural force that is present in man. This force sustains man's connectedness, unity and harmony with the surrounding world. Things exist through the unity of principle (理) and ether (氣). But the knowledge of this unity is due to principle. Moral and legal norms are shifted totally to the sphere of principle, and have thereby finally been detached from heroic models. Above all the classical Confucians, but also the other schools, would reply to this that there is nothing more precise than a concrete successful action. Its result fits the world perfectly. The difference is due to the differing interest of ethical thought. In the case of the Confucians the path is more direct. The actor establishes a precise pattern for other actions. Education therefore lies in detailed knowledge about forms of behaviour, not so much in conceptual differentiation. It is quite possible that generalisation may be a methodical prerequisite for success in this endeavour. That problem, too, is discussed. But the success of conceptualisation lies in the successful performance of individual actions, not in shaping actions in accordance with normative concepts.

Suggestion of Community Design for the Efficiency of CPTED - Focused on Community Furniture - (범죄예방환경설계(CPTED)의 효율성 증대를 위한 커뮤니티디자인 제안 - 커뮤니티퍼니쳐를 중심으로 -)

  • Lee, Ho Sang
    • Korea Science and Art Forum
    • /
    • v.29
    • /
    • pp.305-318
    • /
    • 2017
  • The need to recognize crime in urban spaces as a social problem and to find specific approaches, such as space design research and various guidelines for crime prevention, is increasing. In this regard, "Crime Prevention Through Environmental Design" (hereinafter "CPTED") is actively underway. Yeomri-dong Salt Way is the first place to which the Seoul Crime Prevention Design Project was applied. The project's objective of improving the local environment was implemented rationally through cooperation and voluntary participation between the project executives and community members. Since its efficiency has been proven, the sites have since been expanded, and the project has become a benchmarking example for local governments. This kind of problem-solving effort shares the purpose and direction of the 'Maeulmisul Art Project' (Village Art Project), implemented since 2009 with the aim of promoting the culture of underdeveloped areas and encouraging the participation of residents by introducing public art. It is noteworthy that this trend is centered on the characteristics of community functions and values. The purpose of this study is to propose the application of community furniture as a way to increase the efficiency of CPTED and thereby improve the 'quality of life' of residents. To do this, we reviewed CPTED, community design, and public art literature and prior research, and identified problems and implications based on site visits to Yeomri-dong in Seoul and Gamcheon Village in Busan, which are successful models of the "Seoul Root out Crime by Design" project and the 'Maeulmisul Art Project' respectively. The common elements of the two sites identified in this study are as follows: First, the 'lives' of community residents found their place in the center through the activation of community by collaborative activities, in addition to the physical composition of the environment. 
Second, community design and the introduction of public art created new spaces, and thereby many people came to visit the villages, revitalizing the local economy. Third, among the CPTED factors, it strengthened natural surveillance, territoriality and control, and activity support. When the psychological aspect of CPTED and the emotional function of public art are fused in 'community furniture', a vague or grandiose approach to public space can be avoided through a specific local context based on the ways of thinking and emotions of local people, and an environment beneficial for all can be created. In this way, the fusion of CPTED and public art is expected to reduce social costs through the construction of crime prevention infrastructure, such as the expansion of CPTED application spaces, and to suggest a plan to implement visual amenity as a design strategy for urban regeneration.

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, in addition to affecting stakeholders such as managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, where everything collapses in a single moment. The key variables used in corporate default prediction vary over time. Comparing the analyses of Beaver (1967, 1968) and Altman (1968), Deakin's (1972) study shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, the changing importance of predictive variables was also found through Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm reflecting dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years respectively. 
In order to construct a consistent bankruptcy model under temporal change, we first train a deep learning time series model using data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the training results and excellent prediction power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared using test data (2009), based on the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis, the logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data have inherent limitations: nonlinear variables, multi-collinearity among variables, and lack of data. 
While the logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, complements the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling, and its prediction power is better. Through the Fourth Industrial Revolution, the Korean and other governments are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
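The chronological split described above (train on 2000~2006, validate through the crisis window 2007~2008, test on 2009) can be sketched as follows. The record layout is an assumption; the year boundaries come from the abstract.

```python
# Sketch: partition annual corporate records by year so the model is
# tuned on the crisis period and tested strictly out of time.
# Each record is assumed to be a (year, features, defaulted) tuple.

def split_by_year(records):
    """7-year training / 2-year validation / 1-year test split."""
    train = [r for r in records if 2000 <= r[0] <= 2006]  # pre-crisis
    valid = [r for r in records if 2007 <= r[0] <= 2008]  # crisis window
    test = [r for r in records if r[0] == 2009]           # out-of-time test
    return train, valid, test
```

Keeping the test year entirely after the tuning window is what guards against the time-dependent bias the abstract warns about.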

How to improve the accuracy of recommendation systems: Combining ratings and review texts sentiment scores (평점과 리뷰 텍스트 감성분석을 결합한 추천시스템 향상 방안 연구)

  • Hyun, Jiyeon;Ryu, Sangyi;Lee, Sang-Yong Tom
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.219-239
    • /
    • 2019
  • As providing customized services to individuals becomes important, research on personalized recommendation systems is constantly being carried out. Collaborative filtering is one of the most popular approaches in academia and industry. However, it has the limitation that recommendations are based mostly on quantitative information such as users' ratings, which lowers accuracy. To solve this problem, many studies have attempted to improve the performance of recommendation systems by using information beyond the quantitative; good examples are applications of sentiment analysis to customer review texts. Nevertheless, existing research has not directly combined the results of sentiment analysis with quantitative rating scores in the recommendation system. Therefore, this study aims to reflect the sentiments shown in reviews in the rating scores. In other words, we propose a new algorithm that converts a user's own review into quantitative information and reflects it directly in the recommendation system. To do this, we needed to quantify users' reviews, which are originally qualitative information. In this study, sentiment scores were calculated with the sentiment analysis techniques of text mining. The data consist of movie reviews, from which a domain-specific sentiment dictionary was constructed. Regression analysis was used to build the dictionary: positive/negative dictionaries were constructed using Lasso regression, Ridge regression, and ElasticNet. The accuracy of each dictionary was verified through a confusion matrix: 70% for the Lasso-based dictionary, 79% for the Ridge-based dictionary, and 83% for the ElasticNet (α = 0.3) dictionary. 
Therefore, the sentiment score of each review is calculated based on the ElasticNet dictionary and combined with the rating to create a new rating. We show that collaborative filtering reflecting the sentiment scores of user reviews is superior to the traditional method that considers only the existing ratings. To show this, the proposed algorithm is applied to memory-based user collaborative filtering (UBCF), item-based collaborative filtering (IBCF), and model-based matrix factorization (SVD and SVD++). The mean absolute error (MAE) and root mean square error (RMSE) are calculated to compare the system using sentiment-combined scores with the system using ratings only. With MAE as the evaluation index, the improvement was 0.059 for UBCF, 0.0862 for IBCF, 0.1012 for SVD, and 0.188 for SVD++. With RMSE, the improvement was 0.0431 for UBCF, 0.0882 for IBCF, 0.1103 for SVD, and 0.1756 for SVD++. As a result, the prediction performance of the rating that reflects the sentiment score proposed in this paper is superior to that of the conventional rating. In other words, collaborative filtering that reflects the sentiment scores of user reviews shows superior accuracy compared with conventional collaborative filtering that considers only the quantitative score. We then performed a paired t-test to validate that the proposed model is the better approach, and concluded that it is. To overcome the limitation of previous research that judges a user's sentiment only by the quantitative rating score, this study quantified the review text so that the user's opinion is considered in the recommendation system in a more refined way, improving accuracy. 
The findings of this study have managerial implications for recommendation system developers, who are expected to need to consider both quantitative and qualitative information. The way of constructing the combined system in this paper might be used directly by such developers.
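The core idea above, folding a review's sentiment score into the star rating and then scoring predictions with MAE and RMSE, might be sketched as below. The blending weight alpha and the mapping of sentiment to the rating scale are illustrative assumptions, not the paper's actual formula.

```python
# Sketch: blend a 1-5 star rating with a sentiment score in [-1, 1]
# into an adjusted rating, and score predictions with MAE / RMSE.
# alpha and the [-1, 1] -> [0, 5] mapping are assumptions.
import math


def adjusted_rating(rating, sentiment, alpha=0.5, scale=5.0):
    """Weighted blend of the explicit rating and the review sentiment."""
    sentiment_as_rating = (sentiment + 1.0) / 2.0 * scale  # map to [0, 5]
    return alpha * rating + (1 - alpha) * sentiment_as_rating


def mae(pred, true):
    """Mean absolute error of predicted vs. actual ratings."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)


def rmse(pred, true):
    """Root mean square error of predicted vs. actual ratings."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))
```

A 4-star rating paired with a maximally positive review (sentiment 1.0) blends to 4.5 under these assumed weights, nudging the collaborative filter toward the opinion expressed in the text.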

The Research on Recommender for New Customers Using Collaborative Filtering and Social Network Analysis (협력필터링과 사회연결망을 이용한 신규고객 추천방법에 대한 연구)

  • Shin, Chang-Hoon;Lee, Ji-Won;Yang, Han-Na;Choi, Il Young
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.19-42
    • /
    • 2012
  • Consumer consumption patterns are shifting rapidly as buyers migrate from offline markets to e-commerce channels, such as TV shopping channels and internet shopping malls. In offline markets, consumers go shopping, see the items, and choose among them; recently, consumers tend to buy at shopping sites free from the constraints of time and place. However, as e-commerce markets continue to expand, customers complain that shopping online is becoming a bigger hassle: online shoppers have very limited information on the products, and the delivered products can differ from what they wanted, which results in purchase cancellations. Because this happens frequently, consumers are likely to refer to consumer reviews, and companies should pay attention to the consumer's voice. E-commerce is a very important marketing tool for suppliers: it can recommend products to customers and connect them directly with suppliers at the click of a button. Recommender systems have been studied in various ways, the more prominent including recommendation based on best-sellers and demographics, content filtering, and collaborative filtering. However, these systems share two weaknesses: they cannot recommend products to consumers at a personal level, and they cannot recommend products to new consumers with no purchase history. To fix these problems, information collected from questionnaires about demographics and preference ratings can be used; but consumers feel such questionnaires are a burden and are unlikely to provide correct information. This study investigates combining collaborative filtering with the centrality measures of social network analysis. These centrality measures provide the information needed to infer the preferences of new consumers from the shopping history of existing and previous ones. 
While past research focused on existing consumers with similar shopping patterns, this study tried to improve the accuracy of recommendation by using all shopping information, including not only similar but also dissimilar patterns. The data used in this study, the MovieLens data, was created by the GroupLens research project team at the University of Minnesota to study movie recommendation with collaborative filtering. It was built from the questionnaires of 943 respondents who gave preference ratings on 1,684 movies. The total of 100,000 ratings was ordered by time, with the initial 50,000 treated as existing customers and the latter 50,000 as new customers. The proposed recommender system consists of three systems: a [+] group recommender system, a [-] group recommender system, and an integrated recommender system. The [+] group recommender system treats customers with similar buying patterns as 'neighbors', whereas the [-] group recommender system treats customers with opposite buying patterns as 'contraries'. The integrated recommender system uses both of the aforementioned systems and recommends the movies that both pick. Studying the three systems allows us to find the most suitable recommender system, optimizing accuracy and customer satisfaction. Our analysis showed that the integrated recommender system is the best of the three, followed by the [-] group recommender system and the [+] group recommender system. This result conforms to the intuition that the accuracy of recommendation can be improved by using all relevant information. We provide contour maps and graphs to easily compare the accuracy of each recommender system. Although we saw improved accuracy with the integrated recommender system, this research is based on static data with no live customers; in other words, consumers did not actually see the movies recommended by the system. 
Also, this recommendation system may not work well with products other than movies. Thus, it is important to note that recommendation systems need particular calibration for specific product/customer types.
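The [+]/[-] grouping described above can be sketched with Pearson correlation over co-rated items: strongly positive correlation marks a 'neighbor', strongly negative correlation a 'contrary', and both carry information for prediction. The thresholds and data layout here are illustrative assumptions, not the paper's parameters.

```python
# Sketch: split other users into [+] neighbors and [-] contraries by
# the Pearson correlation of their ratings with the target user's.
# The 0.5 threshold is an illustrative assumption.
import math


def pearson(a, b):
    """Pearson correlation of two equal-length rating lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0


def split_groups(target, others, thresh=0.5):
    """Partition other users into [+] and [-] groups relative to target."""
    plus, minus = [], []
    for uid, ratings in others.items():
        r = pearson(target, ratings)
        if r >= thresh:
            plus.append(uid)   # similar rating pattern: a 'neighbor'
        elif r <= -thresh:
            minus.append(uid)  # opposite pattern: a 'contrary', still informative
    return plus, minus
```

An integrated recommender, in this sketch, would consult both lists: items liked by neighbors and disliked by contraries are doubly supported.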

The Role of the Soft Law for Space Debris Mitigation in International Law (국제법상 우주폐기물감축 연성법의 역할에 관한 연구)

  • Kim, Han-Taek
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.30 no.2
    • /
    • pp.469-497
    • /
    • 2015
  • In 2009 Iridium 33, a satellite owned by the American Iridium Communications Inc., and Kosmos-2251, a satellite owned by the Russian Space Forces, collided at a speed of 42,120 km/h at an altitude of 789 kilometers above the Taymyr Peninsula in Siberia. NASA estimated that the satellite collision had created approximately 1,000 pieces of debris larger than 10 centimeters, in addition to many smaller ones. By July 2011, the U.S. Space Surveillance Network (SSN) had catalogued over 2,000 large debris fragments. On January 11, 2007 China conducted a test of its anti-satellite missile: a Chinese weather satellite, the FY-1C polar orbit satellite, was destroyed by a missile launched on a multistage solid-fuel rocket. The test was unprecedented in creating a record amount of debris: at least 2,317 pieces of trackable size (i.e. of golf ball size or larger) and an estimated 150,000 particles were generated. As far as the Space Treaties, namely the 1967 Outer Space Treaty, 1968 Rescue Agreement, 1972 Liability Convention, 1975 Registration Convention and 1979 Moon Agreement, are concerned, few provisions addressing the space environment and debris in space can be found. In the early years of space exploration, dating back to the late 1950s, the focus of international law was on establishing a basic set of rules for the activities undertaken by various states in outer space. Consequently, environmental issues, including those of space debris, did not receive the priority they deserve when international space law was originally drafted. As shown in the 1978 "Cosmos 954 Incident" between Canada and the USSR, the two parties settled the matter through a memorandum between the two nations, not under the Space Treaties to which they are parties. In 1994 the 66th conference of the International Law Association (ILA) adopted the "International Instrument on the Protection of the Environment from Damage Caused by Space Debris". 
The Inter-Agency Space Debris Coordination Committee (IADC) issued guidelines for space debris, which became the basis of the "UN Space Debris Mitigation Guidelines" approved by the Committee on the Peaceful Uses of Outer Space (COPUOS) at its 527th meeting. On December 21, 2007 this guideline was approved by UNGA Resolution 62/217. The EU has proposed an "International Code of Conduct for Outer Space Activities" as a transparency and confidence-building measure. It was only in 2010 that the Scientific and Technical Subcommittee began considering the long-term sustainability of outer space as an agenda item. A Working Group on the Long-term Sustainability of Outer Space Activities was established, whose objectives include identifying areas of concern for the long-term sustainability of outer space activities, proposing measures that could enhance sustainability, and producing voluntary guidelines to reduce risks to long-term sustainability. Through this effort, "Guidelines on the Long-term Sustainability of Outer Space Activities" are under consideration. In the case of the "Declaration of Legal Principles Governing the Activities of States in the Exploration and Use of Outer Space", adopted by UNGA Resolution 1962 (XVIII) of December 13, 1963, the nine principles proclaimed in that Declaration, all of which were later incorporated into the Space Treaties, could be regarded as customary international law binding all states, considering the passage of time and the opinio juris evidenced by the responses of states. Although soft law such as resolutions and guidelines is not binding law, some of its provisions have a fundamentally norm-creating character and can become customary international law. 
On November 12, 1974 the UN General Assembly recalled, through Resolution 3232 (XXIX) "Review of the role of the International Court of Justice", that the development of international law may be reflected, inter alia, in the declarations and resolutions of the General Assembly, which may to that extent be taken into consideration in the judgments of the International Court of Justice. We expect that COPUOS, which gave birth to the five Space Treaties, will in the near future give us binding space debris mitigation measures to be implemented, built upon the existing space debris mitigation soft law.

Medical Information Dynamic Access System in Smart Mobile Environments (스마트 모바일 환경에서 의료정보 동적접근 시스템)

  • Jeong, Chang Won;Kim, Woo Hong;Yoon, Kwon Ha;Joo, Su Chong
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.47-55
    • /
    • 2015
  • Recently, hospital information systems have tended to incorporate various smart technologies. Accordingly, various smart devices, such as smartphones and tablet PCs, are utilized in medical information systems. These environments consist of various applications executing on heterogeneous sensors, devices, systems and networks. In such hospital information system environments, applying security services through traditional access control methods causes problems. Most existing security systems use an access-control-list structure: access is permitted only when it is defined in an access control matrix by, for example, client name and service object method name. The major problem with this static approach is that it cannot quickly adapt to changing situations. Hence, new security mechanisms are needed that are more flexible and can be easily adapted to various environments with very different security requirements. In addition, research is needed to address changes in the medical treatment of the patient. In this paper, we suggest a dynamic approach to medical information systems in smart mobile environments. We focus on how to access medical information systems according to dynamic access control methods based on the hospital's existing information system environment. The physical environment consists of mobile X-ray imaging devices, dedicated mobile/general smart devices, PACS, an EMR server and an authorization server. The software environment for the synchronization and monitoring services of the mobile X-ray imaging equipment was developed on the .NET Framework under Windows 7. The dedicated smart device application, which implements the dynamic access services, was developed with JSP and the Java SDK on the Android OS. Medical information exchanged between PACS, the mobile X-ray imaging devices and the dedicated smart devices in the hospital is based on the DICOM medical image standard, and EMR information is based on HL7. 
In order to provide the dynamic access control service, we classify patient contexts according to bio-information such as oxygen saturation, heart rate, blood pressure and body temperature. Event trace diagrams are presented for the two resulting cases: a general situation and an emergency situation. We designed dynamic access to medical care information through an authentication method. The authentication information contains ID/password, roles, position, working hours, and emergency certification codes for emergency patients. In a general situation, the dynamic access control method grants access to medical information according to the values of the authentication information. In an emergency, access to medical information is granted by an emergency code alone, without the full authentication information. We also constructed an integrated medical information database schema consisting of medical information, patient, medical staff and medical image information according to medical information standards. Finally, we show the usefulness of the dynamic access application service on smart devices through execution results of the proposed system under patient contexts such as general and emergency situations. In particular, the proposed system provides effective medical information services with smart devices in emergency situations through its dynamic access control methods. As a result, we expect the proposed system to be useful for u-hospital information systems and services.
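The context classification and two-mode access decision described in this abstract can be sketched roughly as follows. This is a hedged illustration in Java (the stack the paper mentions for its smart-device application), not the paper's implementation: the vital-sign thresholds, role names, and emergency-code format are assumptions invented for the example.

```java
// Sketch of a dynamic access-control decision: classify the patient context
// from bio-information, then apply either full authentication (general) or
// emergency-code-only access (emergency). All thresholds and names are
// illustrative assumptions, not values from the paper.
import java.util.Set;

public class DynamicAccessSketch {

    enum Context { GENERAL, EMERGENCY }

    // Classify the patient context from bio-information
    // (oxygen saturation, heart rate, systolic BP, body temperature).
    static Context classify(double spo2, int heartRate, int systolicBp, double tempC) {
        // Assumed emergency thresholds for the example.
        if (spo2 < 90.0 || heartRate > 140 || heartRate < 40
                || systolicBp < 80 || tempC > 40.0) {
            return Context.EMERGENCY;
        }
        return Context.GENERAL;
    }

    // Authentication information as described in the abstract: ID/password,
    // role, working hours, and an emergency certification code.
    record Credentials(String id, String password, String role,
                       boolean withinWorkingHours, String emergencyCode) {}

    static final Set<String> VALID_EMERGENCY_CODES = Set.of("EMG-2024-001"); // assumed format
    static final Set<String> CLINICAL_ROLES = Set.of("doctor", "nurse");     // assumed roles

    static boolean mayAccess(Context ctx, Credentials c) {
        if (ctx == Context.EMERGENCY) {
            // Emergency: a valid emergency certification code alone grants access.
            return c.emergencyCode() != null
                    && VALID_EMERGENCY_CODES.contains(c.emergencyCode());
        }
        // General: the full authentication information is evaluated.
        return c.id() != null && c.password() != null
                && CLINICAL_ROLES.contains(c.role())
                && c.withinWorkingHours();
    }

    public static void main(String[] args) {
        Credentials onDutyNurse = new Credentials("n001", "pw", "nurse", true, null);
        Credentials responder = new Credentials(null, null, null, false, "EMG-2024-001");

        System.out.println(classify(85.0, 150, 70, 38.5)); // EMERGENCY
        System.out.println(classify(98.0, 72, 120, 36.6)); // GENERAL
        System.out.println(mayAccess(Context.GENERAL, onDutyNurse));   // true
        System.out.println(mayAccess(Context.EMERGENCY, responder));   // true
    }
}
```

The design choice this illustrates is the one the abstract argues for: the access decision depends on runtime context rather than a static access control list, so an emergency bypass path can coexist with normal role- and schedule-based checks.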