A mind-reading machine that turns thoughts into complete sentences in real time could revolutionise communication with people who cannot speak or move.

It has an accuracy rate of 97 per cent - more than twice as high as other brain-signal decoding devices.

An algorithm maps the activity of neurons to combinations of vowels, consonants and commands to parts of the mouth.

This enables it to type word sequences on a computer interface in real time, reports Nature Neuroscience.

It could help people who cannot speak or move. Currently they are limited to spelling words out very slowly using residual eye movements or muscle twitches.

But, in many cases, the information needed to produce fluent speech is still there in their brains. It's hoped the technology will enable them to express it.

Corresponding author Dr Joseph Makin, of the University of California, San Francisco, said: "Average word error rates are as low as 3 percent."
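The word error rate quoted here is the standard transcription metric: the minimum number of word substitutions, insertions and deletions needed to turn the decoded sentence into the one actually spoken, divided by the length of the spoken sentence. A minimal Python sketch of the computation (the example sentences are illustrative, not taken from the study):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in an eight-word sentence: 1/8 = 12.5% error.
print(word_error_rate("the ladder was used to rescue the cat",
                      "the ladder was used to rescue a cat"))
```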

The study was carried out on four epilepsy patients who had been fitted with brain implants to monitor their seizures.

Their neural activity was turned "word by word into an English sentence - in real time," said Dr Makin.

Previous techniques have had limited success - with efficiency far below that of natural spoken language.

They could only decode fragments of spoken words - or less than 40 per cent of words in spoken phrases.

So Dr Makin and colleagues used AI (artificial intelligence), or machine learning, to link the behaviour of brain cells directly to sentences.

In the study the participants read up to 50 simple sentences aloud while the electrodes recorded their neural activity.

Examples included "Tina Turner is a pop singer", "the oasis was a mirage", "part of the cake was eaten by the dog", "how did the man get stuck in the tree" and "the ladder was used to rescue the cat and the man."

The brain signals were fed into a computer which created a representation of regularly occurring neural features.

Dr Makin explained: "These are likely to be related to repeated features of speech such as vowels, consonants or commands to parts of the mouth."

Another deep learning technique, known as a recurrent neural network, then decoded them into sentences.
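As a rough illustration of that two-stage design - an encoder that compresses the recording into recurring neural features, and a recurrent decoder that unrolls them into words - here is a minimal sketch in Python with PyTorch. The layer sizes, the choice of GRUs and all the names are assumptions for illustration, not the study's actual architecture:

```python
import torch
import torch.nn as nn

class NeuralSpeechDecoder(nn.Module):
    """Illustrative encoder-decoder: neural-signal time series in, word IDs out."""
    def __init__(self, n_electrodes=128, n_features=256, vocab_size=250):
        super().__init__()
        # Encoder compresses the multi-electrode recording into a summary of
        # recurring "neural features" (the vowel/consonant-like units above).
        self.encoder = nn.GRU(n_electrodes, n_features, batch_first=True)
        # Decoder unrolls that summary into a word sequence, one word per step.
        self.decoder = nn.GRU(n_features, n_features, batch_first=True)
        self.embed = nn.Embedding(vocab_size, n_features)
        self.to_vocab = nn.Linear(n_features, vocab_size)

    def forward(self, signals, prev_words):
        # signals: (batch, time, electrodes); prev_words: (batch, n_words)
        _, state = self.encoder(signals)         # summarise the recording
        word_vecs = self.embed(prev_words)       # teacher-forced word inputs
        out, _ = self.decoder(word_vecs, state)  # unroll into a sentence
        return self.to_vocab(out)                # per-step word logits

model = NeuralSpeechDecoder()
signals = torch.randn(1, 400, 128)                # ~ a few seconds of recording
prev_words = torch.zeros(1, 8, dtype=torch.long)  # start-of-sentence tokens
logits = model(signals, prev_words)               # (1, 8, 250) word scores
print(logits.shape)
```

Decoding a whole sentence at a time, rather than sound by sound, is what lets the network lean on sentence-level regularities, much as machine-translation systems do.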

What's more, the network was able to identify individual words - suggesting that learning novel sentences from expanded vocabularies is possible.

In the last decade brain-machine interfaces (BMIs) have enabled some amount of motor function to be restored to paralysed patients.

Dr Makin said: "Although this type of control can be used in conjunction with a virtual keyboard to produce text, even under ideal cursor control, which is not currently achievable, the word rate would still be limited to that of typing with a single finger.

"The alternative is the decoding of spoken, or attempted, spoken language, but up to now such BMIs have been limited either to isolated sounds or monosyllables - or, in the case of continuous speech on moderately sized vocabularies of about 100 words, to decoding correctly less than twoscore per cent of words.

"In this study, we try to decode a unmarried judgement at a time, as in almost modern machine-translation algorithms, so in fact both tasks map to the same kind of output, a sequence of words corresponding to one judgement.

"The inputs of the ii tasks, on the other hand, are very dissimilar - neural signals and text."

His team also found that brain regions that strongly contributed to speech decoding were also involved in speech production and speech perception.

The approach decoded spoken sentences from one patient's neural activity with an error rate similar to that of professional-level speech transcription, said Dr Makin.

Additionally, when the AI networks were pre-trained on neural activity and speech from one person before training on another participant, decoding results improved.

This suggests the approach may be transferable across people. Further research is needed to fully investigate the potential and to extend decoding beyond the restricted language used, Dr Makin added.
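A sketch of what that pre-train-then-fine-tune recipe looks like in code, reusing the illustrative NeuralSpeechDecoder above (the data loader, batch shapes and training settings are all hypothetical):

```python
import torch
import torch.nn as nn
import torch.optim as optim

def make_batches(n_batches=4, vocab_size=250):
    """Hypothetical stand-in for one participant's (signals, words) recordings."""
    batches = []
    for _ in range(n_batches):
        signals = torch.randn(2, 400, 128)            # fake neural recordings
        words = torch.randint(1, vocab_size, (2, 8))  # fake spoken sentences
        prev = torch.cat([torch.zeros(2, 1, dtype=torch.long),
                          words[:, :-1]], dim=1)      # shifted for teacher forcing
        batches.append((signals, prev, words))
    return batches

def train(model, batches, epochs=3):
    """Plain cross-entropy training over (signals, previous-words, words) triples."""
    opt = optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for signals, prev, target in batches:
            loss = loss_fn(model(signals, prev).transpose(1, 2), target)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Pre-train on one participant, then fine-tune the same weights on another:
# whatever the network learns that is common across brains carries over,
# so the second participant needs less data to reach a given error rate.
model = NeuralSpeechDecoder()      # the sketch class from earlier
train(model, make_batches())       # pre-training on participant A
train(model, make_batches())       # fine-tuning on participant B
```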

While mind-reading technologies are primarily designed to help the sick, ethical concerns have been raised.

Some experts say they have the potential to be misused on the healthy and used to track people's thoughts - and relay them back to governments or companies.