Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies: Gallant Lab

Curator: Stewart Smith
Date: November 11, 2011
Categories: Information Design
Tags: Data, Future, Speculation, Typography

The coming revolution is going to favor typographers. (It’s been swell guest posting for the week and I’d like to sign off with a speculative piece, so just ride with me a bit further on this crazy train.) The Gallant Lab at UC Berkeley recently published a scientific paper and video (see above) titled *Reconstructing visual experiences from brain activity evoked by natural movies*. What the paper describes is a process by which a computer can reconstruct what a person is looking at by scanning their brain activity. Let’s play with that idea a little bit.


So you look at an image. A scanner records your brain activity. A computer analyzes the scan and is able to reconstruct the image you were looking at. How exactly does this voodoo work? If you’re familiar with the film *Eternal Sunshine of the Spotless Mind* you may recall that before Lacuna employees could get down to their dirty work the patient had to visit the Lacuna offices for a brain-scanning and mapping session so the computer would understand what points in the brain were of interest. A similar principle applies in real life: Gallant Lab records the brain activity of participants as they watch video, pairing each frame of video with a snapshot of that moment’s brain activity to create a huge library. The goal here is not to note the location of activity per se (Gallant’s staff are already hyper-aware of where the visual cortex sits!) but to map the nature of that activity itself. If the computer records that some guy (let’s call him Berlusconi) shows particular brain activity every time he sees two apples, then the next time the computer picks up that particular brain activity it can be pretty sure Berlusconi is again looking at two apples.
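To make that concrete, here’s a toy sketch of the matching step. This is not the lab’s actual pipeline (which fits statistical encoding models to fMRI voxel responses); the function names, the made-up scan vectors, and the nearest-neighbour shortcut are all my own illustration of the “match new activity against a recorded library” idea.

```python
import numpy as np

# Toy illustration only: pretend each brain scan is a flat vector of voxel
# activations and each frame is identified by a label. The real pipeline is
# far more sophisticated; this just shows the library-matching intuition.

def build_library(scans, frame_labels):
    """Pair each recorded scan with the frame the subject was watching."""
    return list(zip(scans, frame_labels))

def guess_frame(library, new_scan):
    """Return the frame whose paired scan is closest to the new scan."""
    best_label, best_dist = None, float("inf")
    for recorded_scan, label in library:
        dist = np.linalg.norm(recorded_scan - new_scan)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Berlusconi's "two apples" activity, recorded once, is recognised again later.
library = build_library(
    scans=[np.array([0.9, 0.1, 0.4]), np.array([0.2, 0.8, 0.3])],
    frame_labels=["two apples", "a cat"],
)
print(guess_frame(library, np.array([0.85, 0.15, 0.5])))  # -> two apples
```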


Gallant Lab’s result images are overlays of other existing images. (Compare this to artist Jason Salavon’s work.) This is because the software can only imagine new images based on what it’s already recorded. What would you want in your library of images? The more expansive the library the better, right? The point I’m unclear on is how much the act of *remembering* an image resembles actually seeing an image. (Warning: we’re getting speculative here.) What if you could close your eyes and imagine those apples with enough clarity that the software would think you’re actually looking at apples? Keep that on the back burner for a minute while we move on. 
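Here is a minimal sketch of that “overlay” idea, with the same caveats as before: it is my own simplification, assuming each library entry pairs a scan vector with a frame stored as an array of pixel values. It blends the few frames whose recorded activity best matches the new scan, which is why the output can only ever be a ghostly composite of images the software has already seen.

```python
import numpy as np

def reconstruct(library, new_scan, top_k=3):
    """Blend the frames whose recorded scans best match the new scan.

    `library` is a list of (scan_vector, frame_pixels) pairs. The result is a
    weighted average of the top_k closest frames, which is why the output is
    always a blurry overlay of existing footage rather than a fresh image.
    """
    dists = np.array([np.linalg.norm(scan - new_scan) for scan, _ in library])
    order = np.argsort(dists)[:top_k]
    weights = 1.0 / (dists[order] + 1e-6)   # closer matches get more weight
    weights /= weights.sum()
    frames = np.stack([library[i][1] for i in order]).astype(float)
    return np.tensordot(weights, frames, axes=1)  # the composite "memory"
```

The bigger and more varied the library, the less ghostly the blend, which is exactly why an expansive image library matters.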


Presently Gallant Lab is using an fMRI scanner to observe brain activity. This isn’t exactly a mobile device. But suppose you could shrink down an fMRI into something that is mobile—like a special hat that could plug into your smart phone. (Oh, we’re riding far out into the what-ifs now, but just roll with it.) Your smart phone already has a headphone jack, so what we’d have here is a system that could read what you’re seeing, interpret it on your phone (or pass the data over a cell network to a server that would do the hard number crunching, like the Shazam app), and then provide audio feedback. If you wanted to go further you might swap out the headphones for LCD glasses, so your visual input would be matched by visual feedback. And if you wanted to go much further you might wonder: if you can read visual activity from the brain, could you also write to it?


So far we’re riding a wave of techno-romanticism to come up with a fantasy device that reads both real and imagined visual input, processes and “understands” it, and responds in kind. Imagine a really brainy version of Skype where you just imagine the text you want to send, and on the other side of the world your conversation partner receives it as an image overlay on their reality. No screens. No physical input. Well, that seems like an awful lot of time, effort, and money for a slightly more swish hands-free mode. (Or really, what a P3Speller already does. Seriously.) If we’re aiming for visual telepathy we’d better be able to use the medium to its fullest.


Perhaps no one is better suited to do this than typographers. It’s naive to assume that just re-using our existing typographic forms would be the most efficient solution for visual telepathy. Much like ink-traps in lead type, we would need medium-conscious features in our glyphs to improve legibility both for ourselves and for our software interpreters. Looking over Gallant Lab’s result images, it’s clear this is a fuzzy process. How robust would our new typography need to be in order to survive translation from imagination to pixels and back again? And more pressing: whose visual imagination is crisp enough to consistently construct the lines and curves with enough precision to be understood, even when their mind is exhausted? It’s analogous to children practicing their handwriting over and over until finally it’s passable for a broader audience. The hurdle isn’t the pencil or the pen. It’s the child’s ability to imagine and execute the curves. It’s not the technology, but the user.


Imagine a freelance typography gig in this future—meeting a client at a brain recording studio to sketch out the text for a new advertising campaign that will be broadcast straight to other people’s heads. You go through a few iterations and compensate for quirks in the technology—much like you would in a normal broadcast studio. And all of this sketching and finalizing would be infinitely quicker than using a mouse or Wacom tablet to navigate through traditional desktop software. Instead you just imagine the work—you *perform* the work. 



A more in-depth look at how Gallant Lab’s result images are constructed

In this new Gallant Lab-inspired world, would we see a return to a design culture dominated by Paul Rand and Saul Bass types, as simplicity of form allows for higher-fidelity telepathy? Would spies be trained to imagine highly *illegible* typography in order to mask their visual wanderings from some new brain-scanning CCTV? What’s the nature of a “brain-ready” font license anyway? Maybe it’s time for some young intrepid typographers to get on the phone to places like Gallant Lab and start making the future legible.
