Hello, World!
Hello, everyone. I’ve decided to start a blog about my adventures with brain-computer interfaces. Not really sure why (posterity?), but here we go.
This year I’m going to take a neuroscience class at college, which is exciting. I’ve heard that it’s really, really hard, but hopefully I’ll enjoy it.
I guess I want to get ahead of the game before brain-computer interfaces become corporate. I’m not looking forward to when the only way to interact with your Google(tm) Chrometop is via the Google(tm) BCI. Called, I dunno, … something stupid but shiny like Google Link.
God I hate webtops.
Anyways, I’ve done a little research into this already. From what I’ve heard it’s not actually that hard to get your brain hooked up to a computer.
We’ve been able to read general brain activity for forever using fMRIs. Getting more precise readings isn’t that hard either, even non-invasively, if you don’t mind a funny haircut to get the little discs positioned on your scalp. And then there are surgical procedures, of course. But it’s amazing the precision you can get just with the little discs. (I should know the name of those.)
Reading the data isn’t that hard either. The hardest part right now is that there isn’t really a standard protocol, but there’s not much nuance to it (there are only so many ways to transmit “this sensor reads this much”).
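Just to show what I mean by “not much nuance”: here’s a toy sketch of what a reading could look like on the wire. This is entirely my own made-up format, not any real BCI protocol; the channel numbering and microvolt units are assumptions for illustration.

```python
import struct

# Hypothetical wire format (my invention, not a real standard):
# one reading = 1-byte channel ID + 4-byte little-endian float (microvolts).
READING = struct.Struct("<Bf")

def encode_reading(channel: int, microvolts: float) -> bytes:
    """Pack "this sensor reads this much" into 5 bytes."""
    return READING.pack(channel, microvolts)

def decode_reading(data: bytes) -> tuple:
    """Unpack a 5-byte packet back into (channel, microvolts)."""
    return READING.unpack(data)

# A fake sample: channel 3 reading 42.5 µV.
packet = encode_reading(3, 42.5)
channel, uv = decode_reading(packet)
```

Five bytes per reading, stream them as fast as the sensors sample, and you’re basically done. The real engineering is everywhere else.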
For example, I have a friend on Dynamo (@rztd:dynamo.obsolete.tech)
who’s been hacking themself a BCI from scratch. Like, writing it in C and uploading it to microcontrollers. They’re crazy.
Anyways, the hard part is actually getting your brain to do what you want it to. Like, take RZTD. All they can do right now is light two LEDs, picking which ones are on or off. And they have to spend a good 10 minutes focusing and relaxing to manage even that, and it’s not very consistent besides.
So yeah. Stick around if you wanna follow these adventures for whatever reason. Looks like I’ve got my work cut out for me in any case.
Is it work if you enjoy it, though?