Facebook wants to let you type with your thoughts and "hear with your skin".
The social network is working on brain-computer interface technology that "one day will let you communicate using only your mind", according to CEO Mark Zuckerberg.
The project is part of Facebook's consumer hardware lab known as Building 8, as Business Insider first reported in January.
Little had been publicly revealed about Building 8 until Wednesday, when Facebook took the wraps off the division's first two projects at its annual developer conference in San Jose.
Building 8 chief Regina Dugan revealed that her team of 60 scientists is working on a non-invasive system capable of typing 100 words per minute using only brain waves. An even more futuristic project aims to deliver spoken language through the skin.
The end goal is to eventually be able to think in Mandarin and feel in Spanish, according to Dugan, who joined Facebook last year from Google's advanced projects division.
Silent speech interfaces
The projects are still a long way from becoming actual products, but the company believes they eventually will be.
"Eventually, we want to turn it into a wearable technology that can be manufactured at scale," Zuckerberg said in a blog post on Wednesday.
Dugan, in her own blog post, described both of the projects as "silent speech interfaces", which she said offer the convenience of speaking with your voice but the privacy of sending a text message.
Here's Facebook's full announcement about the crazy tech that Building 8 is working on:
Project: Type with your brain
Recently there has been a lot of hype surrounding brain technology. We have taken a distinctly different, non-invasive and deeply scientific approach to building a brain-computer speech-to-text interface: a silent speech interface with the speed and flexibility of voice and the privacy of text.
We have a goal of creating a system capable of typing 100 words per minute, straight from the speech centre of your brain. That's five times faster than you can type on your smartphone today.
This isn't about decoding random thoughts. This is about decoding the words you've already decided to share by sending them to the speech centre of your brain. Think of it like this: You take many photos and choose to share only some of them. Similarly, you have many thoughts and choose to share only some of them.
We will do this via non-invasive sensors that can be shipped at scale.
We'll need new, non-invasive sensors that can measure brain activity hundreds of times per second, from locations precise to millimetres and without signal distortion. Today there is no non-invasive imaging method that can do this.
Optical imaging is the only non-invasive technique capable of providing both the spatial and temporal resolution we need. And thanks to improvements in performance, cost and miniaturisation from the telecommunications industry, we have a big wave to ride.
Six months ago this project was just an idea.
Today, we have a team of more than 60 scientists, engineers and system integrators from UC San Francisco, UC Berkeley, Johns Hopkins Medicine, Johns Hopkins University's Applied Physics Laboratory and Washington University School of Medicine in St. Louis, specialising in machine learning methods for decoding speech and language, in optical neuroimaging systems that push the limits of spatial resolution, and in the most advanced neural prosthetics in the world.
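Facebook hasn't published how its decoder would work, but the pipeline the announcement describes (non-invasive sensors producing high-rate recordings, fed to machine learning methods that map them to language) resembles a standard supervised decoding setup. Below is a toy Python sketch of that mapping; the simulated sensor data, the four-word vocabulary and the linear classifier are all illustrative assumptions, not Building 8's actual methods.

```python
# Toy illustration of the decoding problem: learn a mapping from windows
# of (simulated) neural activity to intended words. Every detail here is
# an assumption for illustration; Facebook has not published its methods.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
WORDS = ["yes", "no", "hello", "stop"]   # tiny assumed vocabulary
N_CHANNELS, N_SAMPLES = 32, 50           # 32 sensors x 50 time steps

def fake_recording(word_idx: int) -> np.ndarray:
    """Simulate one window of sensor readings for a given intended word."""
    base = np.zeros((N_CHANNELS, N_SAMPLES))
    base[word_idx * 8:(word_idx + 1) * 8] += 1.0   # word-specific pattern
    return (base + rng.normal(0, 0.5, base.shape)).ravel()

# Build a labelled training set and fit a linear decoder.
X = np.array([fake_recording(i % len(WORDS)) for i in range(400)])
y = np.array([i % len(WORDS) for i in range(400)])
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Decode a fresh simulated recording of the word "hello".
print(WORDS[decoder.predict([fake_recording(2)])[0]])  # expected: hello
```

The real problem is vastly harder, of course: actual brain signals are noisy, the vocabulary is open-ended, and the sensors described above don't exist yet. The sketch only shows the shape of the task the team is taking on.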
Project: Hear with your skin
Our second project is directed at allowing you to hear with your skin. We are building the hardware and software necessary to deliver language through the skin. Your skin is a 2 m² network of nerves that transmits information to your brain.
Braille, invented in France in the 19th century, has proven that small bumps on a surface can be interpreted in the brain as words.
We know from the Tadoma method, developed in the early 20th century based on the experience of Helen Keller, that deafblind children could learn to communicate through the slight pressure changes, puffs of air and vibrations they feel when their hands are placed over a speaker's throat and jaw.
From the 1950s to today, what all of these techniques have in common is that they rely on our brain's ability to reconstruct language from its components.
The cochlea in your ear takes in sound and separates it into frequency components that are transmitted to the brain. We can do the same work as the cochlea, but transmit the resulting frequency information via your skin instead.
With this technology, the ability to learn language and vocabulary is just the beginning of what we can 'hear' with our skin.
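Dugan didn't share implementation details, but the cochlea analogy maps naturally onto a standard signal-processing idea: split audio into frequency bands and drive one vibrating skin actuator per band. Here is a minimal Python sketch of that idea; the band edges, frame length and normalisation are illustrative assumptions, not details from Facebook's announcement.

```python
# Hypothetical sketch: a cochlea-style filterbank that splits an audio
# signal into frequency bands and maps each band's energy to the drive
# level of one skin actuator.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 16_000                            # Hz, typical for speech
BAND_EDGES = [100, 300, 700, 1500, 3000, 6000]  # Hz; 5 bands (assumed)
FRAME = 320                                     # 20 ms frames at 16 kHz

def bandpass(low, high, fs, order=4):
    """Design a band-pass filter for one 'cochlear' channel."""
    return butter(order, [low, high], btype="bandpass", fs=fs, output="sos")

FILTERS = [bandpass(lo, hi, SAMPLE_RATE)
           for lo, hi in zip(BAND_EDGES, BAND_EDGES[1:])]

def skin_channels(audio: np.ndarray) -> np.ndarray:
    """Return an (n_frames, n_bands) array of actuator intensities in [0, 1].

    Each column does for one frequency band what the cochlea does:
    it tracks how much energy that band carries over time.
    """
    n_frames = len(audio) // FRAME
    out = np.zeros((n_frames, len(FILTERS)))
    for b, sos in enumerate(FILTERS):
        band = sosfilt(sos, audio)
        for i in range(n_frames):
            frame = band[i * FRAME:(i + 1) * FRAME]
            out[i, b] = np.sqrt(np.mean(frame ** 2))  # RMS energy
    peak = out.max()  # normalise so the loudest frame drives full power
    return out / peak if peak > 0 else out

# Usage: one second of a 440 Hz tone activates only the 300-700 Hz channel.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440 * t)
print(skin_channels(tone).mean(axis=0).round(2))  # per-band drive levels
```

Whether Building 8's system works anything like this is unknown; the point of the sketch is only that the "artificial cochlea" idea is implementable with well-understood tools, which is presumably why Dugan framed the project around it.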
This article was originally published by Business Insider.