• Title/Summary/Keyword: neuron-computer

Neurotechnologies and civil law issues (뇌신경과학 연구 및 기술에 대한 민사법적 대응)

  • SooJeong Kim
    • The Korean Society of Law and Medicine / v.24 no.2 / pp.147-196 / 2023
  • Advances in brain science have made it possible to stimulate the brain to treat brain disorders or to connect neuronal activity directly to external devices. Non-invasive neurotechnologies already exist, but invasive neurotechnologies can provide more precise stimulation or measure brainwaves more precisely. Deep brain stimulation (DBS) is now recognized as an accepted treatment for Parkinson's disease and essential tremor. In addition, DBS has shown a certain positive effect in patients with Alzheimer's disease and depression. Brain-computer interfaces (BCI) are still at the clinical stage, but they can help patients in a vegetative state communicate and can support rehabilitation for people with nerve damage. The issue is that the people who need these invasive neurotechnologies are those whose capacity to consent is impaired or who are unable to communicate due to disease or nerve damage, while DBS and BCI operations are highly invasive and require the patient's informed consent. Especially in areas where neurotechnology is still in clinical trials, the risks are greater and the benefits are uncertain, so more explanation should be provided so that patients can make an informed decision. If the patient is under guardianship, the guardian is able to substitute for the patient's consent, if necessary with the authorization of a court. If the patient is not under guardianship and the patient's capacity to consent is impaired or the patient is unable to express consent, Korean healthcare institutions tend to rely on a close relative of the patient (a de facto guardian) to give consent. But the concept of a de facto guardian is not provided for in the Korean civil law system. In the long run, it would be more appropriate to provide that a patient's spouse or next of kin may be authorized to give consent for the patient if he or she is neither under guardianship nor has appointed an enduring power of attorney. If the patient was not properly informed of the risks involved in the neurosurgery, he or she may be entitled to compensation for intangible damages. If there is a causal relation between the malpractice and the side effects, the patient may also be able to recover damages for those side effects. In addition, both BCI and DBS involve the implantation of electrodes or microchips in the brain, which are controlled by external devices. Since implantable medical devices are subject to product liability law, the patient may be able to sue the manufacturer for damages if a defect caused the adverse effects. Recently, Korea's medical device regulations mandated a liability insurance system for implantable medical devices to strengthen consumer protection.

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields; as a result, all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, which are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic improvements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, namely vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in the early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
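The abstract above describes backpropagation with gradient descent, the three convolutional ideas (local receptive fields, shared weights, pooling), and LSTM-based recurrent networks in prose only. The following is a minimal, illustrative sketch of those ideas, not code from the paper: it assumes PyTorch is available, and all layer sizes, the random stand-in data, and the hypothetical class name SmallConvNet are choices made purely for illustration.

    # Minimal sketch (not from the paper): a small convolutional network whose
    # Conv2d layers realize local receptive fields with shared weights, followed
    # by pooling layers, trained by backpropagation with gradient descent.
    # All sizes and the random data are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallConvNet(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            # Each 5x5 kernel is a local receptive field; its weights and bias
            # are shared across all spatial positions of the input.
            self.conv1 = nn.Conv2d(1, 8, kernel_size=5)   # 28x28 -> 24x24
            self.pool1 = nn.MaxPool2d(2)                  # 24x24 -> 12x12, simplifies conv output
            self.conv2 = nn.Conv2d(8, 16, kernel_size=5)  # 12x12 -> 8x8
            self.pool2 = nn.MaxPool2d(2)                  # 8x8   -> 4x4
            self.fc = nn.Linear(16 * 4 * 4, num_classes)

        def forward(self, x):
            x = self.pool1(F.relu(self.conv1(x)))
            x = self.pool2(F.relu(self.conv2(x)))
            return self.fc(x.flatten(1))

    model = SmallConvNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    images = torch.randn(32, 1, 28, 28)           # stand-in for a batch of 28x28 images
    labels = torch.randint(0, 10, (32,))          # stand-in for class labels

    for step in range(5):
        logits = model(images)
        loss = F.cross_entropy(logits, labels)    # error function to be minimized
        optimizer.zero_grad()
        loss.backward()                           # backward propagation of errors (gradients)
        optimizer.step()                          # gradient descent weight update
        print(f"step {step}: loss = {loss.item():.4f}")

In the same hedged spirit, a recurrent model can be sketched with an LSTM layer: it carries an internal state across time steps, so it can process input sequences of arbitrary length, and its gating is what makes training manageable where plain RNNs suffer from vanishing or exploding gradients.

    # Minimal LSTM sketch: the recurrent layer keeps an internal (hidden, cell)
    # state that is updated at every time step of the input sequence.
    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
    readout = nn.Linear(32, 10)

    sequences = torch.randn(4, 50, 16)      # 4 sequences, 50 time steps, 16 features each
    outputs, (h_n, c_n) = lstm(sequences)   # outputs: (4, 50, 32); h_n, c_n: final states
    logits = readout(outputs[:, -1, :])     # classify each sequence from its last hidden state
    print(logits.shape)                     # torch.Size([4, 10])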