Friday, January 25, 2013

iPad VO Controller to control all apps on the iPad?

 

[Image: iPad VoiceOver Controller]
Read all of this. You'll be a more informed person ;-)

My iPad VO (VoiceOver) Controller connects to an i-device (iPad, iPhone, iPod Touch) through Bluetooth (a wireless technology), using an Accessibility feature called VoiceOver that Apple generously included in iOS (the operating system on all iPhones, iPads, and iPod Touches) for blind people. Apple designed VO so that people could move their finger around on their touchscreen and hear what was under their finger. Double-tap anywhere on the screen to activate whatever is highlighted (like pressing Enter on a computer keyboard)...
[Image: iPad VoiceOver Controller with iPad on Tall Tablet Stand]
My Tall Tablet Stand holding my iPad 2, inside my Carry Case/Bumper Case Combo, showing my Point To Pictures app, operated by my iPad VO Controller's arrow and Select buttons.
...Apple soon found out that blind people prefer using a keyboard to control a "screenreader", hearing the screen rather than seeing (or touching) it. They built many VO keyboard shortcuts into iOS so that blind persons could use a Bluetooth (BT) keyboard to control VO. They then added a function within VO called QuickNav, which allows navigating through any VO-compatible app with the left/right arrows, with the up and down arrows pressed simultaneously acting as Enter (like the double-tap described above). There are also keyboard shortcuts for the Home button and for toggling QuickNav mode on/off (that is, between having the arrow keys move a text cursor within a text field and having them move between things on the screen).

So IF you can build a switch interface, or controller/button-box that 'emulates' (pretends to be) a BT keyboard, and 'types' these shortcuts, you can, in theory, 'globally' control any app that is compatible with VO, on an iOS device.
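To make that concrete, here is a minimal sketch (in Python, purely as an illustration and not my actual firmware) of the button-to-keystroke mapping such a controller performs. The send_keys() routine is a hypothetical stand-in for whatever Bluetooth keyboard-emulation layer the hardware provides; the keystrokes themselves are the real QuickNav gestures described above: right/left arrow for next/previous object, up+down together to activate, and left+right together to toggle QuickNav.

    # Illustrative sketch: map controller buttons to VoiceOver/QuickNav keystrokes.
    # send_keys() is a hypothetical stand-in for a Bluetooth HID keyboard layer.

    RIGHT, LEFT, UP, DOWN = "RIGHT_ARROW", "LEFT_ARROW", "UP_ARROW", "DOWN_ARROW"

    def send_keys(*keys):
        """Hypothetical: press the given keys simultaneously, then release."""
        print("pressing:", "+".join(keys))   # placeholder for real HID output

    BUTTON_ACTIONS = {
        "next":      lambda: send_keys(RIGHT),        # highlight the next screen object
        "previous":  lambda: send_keys(LEFT),         # highlight the previous screen object
        "select":    lambda: send_keys(UP, DOWN),     # activate it (like the double-tap)
        "type/move": lambda: send_keys(LEFT, RIGHT),  # toggle QuickNav on/off
    }

    def on_button_press(name):
        BUTTON_ACTIONS[name]()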

And that's what I've done here with my iPad VO Controller. The pictures above clearly show what each button does, and the YouTube video below shows an actual demo of how it works.

This method of 'global' iPad control is how several switch interfaces from other companies work. And I'll be creating my own version of an iPad Switch VO Interface by early 2013.

But in between 'direct select' (directly touching the iPad screen) and switch 'scanning', I created my iPad VO Controller for you! It's a pretty good compromise (nothing is perfect in this world of AT ;-)

Now some caveats:

  • You can only move to the 'next' screen object (button, field, list item, etc.) with the right arrow, or 'back' an object with the left arrow. There is no up/down, so you must go through everything on the screen many times to get to the bottom right of the screen. If you have long lists, it could take a loooonnngg time to make it through with a switch interface. This is especially noticeable with the iOS keyboard.
  • Currently, with my Controller, there is no hold-down mode (auto-repeat). I will be adding this option early in 2013 so the user will be able to hold down one of my buttons to move through multiple screen objects (see the rough sketch after this list).
  • Not all apps are VO-compatible, even Apple's. When other companies claim "all iPad apps", that must be taken 'with a grain of salt' ;-) (No, I can't keep track of all apps that are VO-compatible. That would be in the thousands) Even AAC apps (of which there are now TOO many!) may not be VO-compatible. To find out, simply turn on VoiceOver (Settings, General, Accessibility, VoiceOver, On, and remember that you have to double-tap after you've highlighted something to actually *do* that something). Now go into your app and move your finger around. Do you hear things as you move over them? Do they highlight?
  • If you select a text field, the iOS turns QuickNav off, assuming you want to use your left/right arrows to move around *within* the text field. To get back to moving around between on-screen objects, you must press my Type/Move button to turn QuickNav back on. I don't believe the existing switch interfaces address this very important issue.
  • And see the email I sent about VO efficacy, reproduced below.
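For the technically curious, the planned hold-down (auto-repeat) behavior mentioned above could work roughly like this sketch; the timing values and the button_is_down()/send_keystroke() helpers are assumed placeholders, not my actual firmware.

    import time

    REPEAT_DELAY = 0.5    # seconds held before auto-repeat begins (assumed value)
    REPEAT_PERIOD = 0.25  # seconds between repeated keystrokes (assumed value)

    def hold_to_repeat(button_is_down, send_keystroke):
        """Send one keystroke right away, then keep repeating while the button stays down."""
        send_keystroke()                      # first move happens immediately
        start = time.monotonic()
        while button_is_down():
            if time.monotonic() - start >= REPEAT_DELAY:
                send_keystroke()              # move to the next screen object
                time.sleep(REPEAT_PERIOD)
            else:
                time.sleep(0.01)              # poll until the initial delay expires
        # releasing the button stops the repeat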
And there you have the true story.


Purchase

H-87 - iPad VO Controller.....$199
Order

Hello to my RJ 'fans' (yes, I say that with all false modesty aside! 29 years doing this? I've earned a few 'fans'!)

The truth about 'global' iPad access

There is a myth, or gossip, or misunderstanding about some new devices that claim they can control all, most, or even some iPad apps from special needs devices.  I'm here to tell you that this is just not true.  There are 'considerations' which you *must* understand in order to have people who can NOT *functionally* touch the iPad's screen operate apps that have not been specifically programmed for 'alternative access'.

First, an app programmer *can* program in some special access.  I can email you, upon your request, a list of 'switch-friendly' apps that are compatible with my, and other companies', switch devices.  But this discussion is not for those VERY few apps.

This discussion here is for those apps that do *not* have any special programming.  This discussion is for most apps in the App Store, that is, those apps that don't know anything about 'alternative access' devices.  Here we go...

There are several Bluetooth switch interfaces, an iPad case *with* switch interface, a joystick going through a Bluetooth interface, and even my own VoiceOver Controller...

http://rjcooper.com/ipad-vo-controller

...that claim some sort of 'global' access to *all* iPad apps.  In other words, they claim that apps that have no idea about switches or special needs can be accessed with/through their device.

Don't get me wrong.  Their devices are very nice and in fact, *I* am even reselling one of them:

http://rjcooper.com/switch2scan

But the bottom line is that all of these types of devices attempt to access non-switch apps through a 'back door' that Apple has provided called VoiceOver ("VO," actually meant for blind persons), Apple's only allowance of 'global' access to all apps.  Well, that was the *plan*, but I'm here to tell you that reality is much different.

First, just believe me that all these devices are dependent upon an app being 'VO-compliant,' meaning that the app will highlight and speak aloud anything your finger moves over.  If the app is not VO-compliant, then these devices do nothing.  And I'm here to tell you that even I am surprised how few apps are VO-compliant. 

Test *your* desired app.  Go into Settings, General, Accessibility, and turn on VoiceOver.  Move your finger around slowly.  Note how things get highlighted and spoken.  To activate what is highlighted, double tap *anywhere* on the screen.  Now press your Home button, navigate to your desired app, double tap, and see if moving around within your app highlights and speaks items.

I'm finding it's an even harder 'sell' to approach developers and convince them to make their apps VO-compliant than it has been to get them 'switch-friendly'.  So here I am, struggling for 2 years now to convince special needs app developers to make their apps 'switch-friendly', that is, working with devices and interfaces that 'type' specific keystrokes to the iPad, and getting only a few to add that capability.  In other words, all apps should work with all devices, and vice versa.

But even at a recent special needs touch technology conference in St. Louis last week, where I was a main speaker, as I traveled between app booths, none of the apps I saw was fully VO-compliant.  Darn!

As for 'regular' apps, same deal.  Very few are VO-compliant.  BUT, thankfully, some are; most importantly, Apple's own apps on your i-device:

Camera
Email
Photos (to some extent)
Music
iBooks
All text-oriented apps that can use Apple's built-in keyboard

And as for the rest of the 'regular' (non-Apple) apps out there, 'forget about it'.

OK, let's talk about what this means to you.  If you have someone physically 'involved' enough that *functionally* controlling the iPad is challenging *at all* (and I mean that *literally*), then you should consider alternative control methods.  BUT...Apple doesn't allow for alternative control methods like a PC or Mac does (an example of global alternative control is my CrossScanner at http://rjcooper.com/cross-scanner).  What types of alternative control methods *do* exist?  The only ones that I can *truthfully* recommend are my own!

Capacitive HeadPointer
Conductive HandPointer

That's it!  No eyegaze.  No head tracking.  No trackball or other mouse-type device.  The primary reason?  No cursor!  Secondary reason?  Apple doesn't allow it!

So other than 'direct select' using someone's fine motor finger/knuckle control, or one of my offerings (or something similar from some other source), what would we *want* that even *might* be possible?

Switch scanning (auto or step)
Joystick
Buttons
Keyboard

And now we get to the 'meat of the matter'.  Can these methods *really* 'globally' access all, or most, or even some iPad apps?  The answer is "some."  But even that must be taken with a 'grain of salt'.  I have to go into more detail to properly explain the situation.  If you want to use an iPad with/for someone who has physical limitations, you *should* read on...

Once Apple added VoiceOver to Macs and then i-devices, they found out that, for the most part, blind people don't use 'graphic interfaces' via a mouse-device or their fingertip.  Instead, they use keyboards, via a software package such as the ubiquitous JAWS, a "screenreader" program that reads aloud what people with vision see.  A comprehensive list of keyboard "shortcuts", combinations of keys, gives complete readback and control over PCs.  Similarly, through VO, Apple added keyboard shortcuts to Macs and i-devices.  Apple then added a feature within VO called QuickNav, which lets the left/right arrows of a keyboard move between screen 'objects' (paragraphs, buttons, list items, etc.), but ONLY left/right (previous/next).

And that's how the above devices operate, by 'typing' the appropriate VO keyboard/QuickNav shortcuts.  BUT...keep in mind that if the app is not VO-compliant, that is, does *not* respond to VO keystrokes, then no device/method in the world is going to access that app, unless it has been made switch-friendly, or joystick-friendly, meaning *specifically* programmed to look for these devices.

And even if the app is VO-compliant, keep in mind that VO only goes previous/next, left/right.  This means *no* up/down.  And that means you must sequentially move through all available objects left to right, top to bottom.  OUCH when it comes to any screens that have a lot of things on them, like an AAC board with 16 or more buttons, or multi-paned lists, or web pages (they *all* have *lots* of 'objects' on them).  Worse yet is the i-device's onscreen keyboard.  Want an "N"?  You'd have to turn QuickNav off, and right-arrow (or scan) all the way from the upper left of the on-screen Apple keyboard, across and down, across and down, and across again to get to the "N".  OUCH!

And there is *no* row-column scanning; it's all sequential!  That's because an app can't actually 'tell' if it's being controlled by VO and adjust itself.  Only when the app is specifically programmed for row/column can it occur.
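To put rough numbers on that, here is a back-of-the-envelope sketch (my own illustration, with assumed grid sizes) comparing worst-case sequential next/next/next navigation with what row/column scanning would take if an app were programmed for it.

    # Illustration only: worst-case selections needed to reach an item.
    # Grid sizes below are assumed examples, not measurements of any particular app.

    def sequential_steps(rows, cols):
        return rows * cols        # worst case: step past every object, one at a time

    def row_column_steps(rows, cols):
        return rows + cols        # worst case: scan every row, then every column

    for rows, cols in [(4, 4), (6, 6), (4, 10)]:   # e.g. a 16-button AAC board, a 6x6 grid, a keyboard-like layout
        print(f"{rows}x{cols} grid: sequential = {sequential_steps(rows, cols)}, "
              f"row/column = {row_column_steps(rows, cols)}")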

So VO control is funky!  It's possible, and it does give *some* access to non-alternative-device-friendly apps, but it's what we in engineering would call a "kluge."  It's pretty much a 'let the buyer beware' issue because it seems that several AT companies may not be telling the whole story.  It's up to *you* to understand the above and apply it to your choice of control device and, actually, of a device in general: iPad, Android or PC tablet, or Mac or PC computer.

And don't be misled by *any* company's claim of 'global access'.  However, even with all these limitations, you *must* consider *any* option that Apple allows.  Because of this fact, *I* make a BIG button VO-Controller, and *I* resell a VO-compatible switch interface (*I* will be making my own within a few months), and NEW is my LARGE-key, colored-rows keyboard specifically for iPad (ask for details if interested in any of these).

I do wish I could have given you better news than the above, but the above is the truth, with little or no exception.

I thought this would be good information for you, and create realistic expectations when purchasing one of *my* VO controllers.

RJ :-)

These Pictures Were Drawn (by a person w/ ALS) Using A Human Eyeball. Incredible.


Francis Tsai is a concept artist who has worked for companies like Rockstar, EA and Eidos. Sadly, as we told you last month, Tsai was diagnosed with Lou Gehrig's Disease in 2010, and the condition has slowly taken away his ability to draw.

First he lost the use of his hands, so he learned to draw with his feet; when that was taken away, he promised to learn how to rig up a computer so he could draw with his eyes. Well, Francis' sister emailed us today to let us know these experiments have been a success.

The pictures you're seeing here were drawn by Francis using only his eyeballs. Using Tobii's "eye-gazing" technology, plugged into drawing programs Sketchup and GIMP, Tsai has been able to create these four images using nothing but the motion of his eyeballs. I'm at a loss for words.
You can purchase prints of these from Francis' store, with all proceeds going towards funding his medical care.




Music with the Mind: The Brain-Computer-Music-Interface

http://www.gizmag.com/music-with-the-mind-brain-computer-music-interface/18489/

The BCMI lets you create music using nothing more than eye movement and brainwaves
 
Imagine a Wii that lets you play a musical instrument with your brain without touching strings or a keyboard. That's exactly what this "proof of concept" brain-computer-music-interface (BCMI) is designed to do – it uses brain waves and eye movement to sound musical notes, so even a person with "locked-in-syndrome" could participate in creative activity analogous to learning to play a musical instrument. Developed by a team headed by Eduardo Miranda, a composer and computer music specialist from the UK's University of Plymouth, the BCMI can be set up on a laptop computer for under $3,500 (including the computer). For people who are disabled, assistive technology usually aims at day-to-day functioning and largely ignores the unique aspect of being a human – creativity. This is different.

The Brain Computer Interface as an assistive technology

"Creativity - like human life itself - begins in darkness."– Julia Cameron
No-one wants to even think about it but imagine a car crash or a stroke left you totally paralyzed and your only active movements were eye movements, facial gestures and minimal head movements. If you still retain full cognitive capacity, you would have what is called locked-in syndrome, a fate some might regard worse than death. For any person with a disability, one of the biggest obstacles is that people simply assume that if your body doesn't work, then your brain is probably not capable of much either. How much worse is this for the person isolated by locked-in syndrome?

Historically, assistive technologies have relied on the person being able to maneuver at least one part of their body. For example, an Augmented Communication Device may require them to press buttons on a keyboard that has pre-designated questions, statements or responses. These devices can be adapted in order for the buttons to be pressed with a finger, a toe, or a metal-pointer attached to their head. Pretty impressive. But what about people with locked-in syndrome who aren't capable of such motor function other than eye movements? Most of the technology has been simply passing them by.
Technology in the form of the brain computer interface (BCI) provides hope for these and many other people because we no longer have to imagine being able to use our thoughts to control a wheelchair or a communication device. In the past decade this technology has moved increasingly from fantasy into a reality.

In 2007, Mike Hanlon wrote in Gizmag about "The first commercially available Brain Computer Interface" and pointed out how work in the area was focused on enabling paralyzed humans to communicate far more freely, but noted the potential to enhance everyone was not that far away. He was right. Within the last five years we have moved from the ability to point with the mind to a thought controlled cursor. And we have moved from driving wheelchairs with brainwaves to driving a car controlled by mind power.

The brain-computer-music-interface

This latest development has thrust the BCI into the world of music and creativity where, in this, its first use, the brain computer musical interface promises to enhance life immensely for those with a most severe disability, locked-in syndrome.
This is the brainchild of a team headed up by Eduardo Miranda, and the Plymouth BCMI Project [PDF]. The system is not yet wireless, but uses a laptop computer, related software, 3 electrodes and an EEG amplifier and can be built for under US$3,500.

Using brainwaves a person can almost immediately produce a full range of musical notes from this device by simply looking intently at one of four icons. These four icons are responsible for sounding pitch, rhythm, and controlling the strength and speed of the notes. Like learning to play a musical instrument, playing music with this device requires skill and learning. As the scientists note, however, this can be an attractive attribute.
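Purely as an illustration of the idea (and not the Plymouth team's actual software), you can picture the control loop as a simple dispatch from whichever of the four icons the system detects the user attending to, onto one musical parameter:

    # Illustrative sketch only: route a detected icon to a musical-parameter change.
    state = {"pitch": 60, "rhythm": 1, "strength": 64, "speed": 120}  # assumed starting values

    ICON_ACTIONS = {
        0: ("pitch",    1),   # attending to icon 0 nudges the pitch
        1: ("rhythm",   1),   # icon 1 changes the rhythmic pattern
        2: ("strength", 4),   # icon 2 makes the notes stronger (louder)
        3: ("speed",    5),   # icon 3 makes the notes faster
    }

    def on_icon_detected(icon_index):
        """Called when the EEG/eye-gaze stage reports which icon has the user's attention."""
        parameter, step = ICON_ACTIONS[icon_index]
        state[parameter] += step
        return state

    print(on_icon_detected(0))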

With minimal practice in this proof of concept test, the person with locked-in syndrome rapidly demonstrated skill at playing and found it an enjoyable experience.
Check out what such a device can do when output from it is hooked into a piano keyboard. A practiced person has the potential to play masterful music using nothing but his or her brainwaves.


A whole new medium for creativity

Assistive technologies have made life easier for millions of people with disabilities around the globe. We have technology that can help people at home and at work; help them to communicate; help them with mobility. In fact you could say we've got technology for almost everything important to a person's life, right? But until now, these technologies largely ignored the most unique aspect of being a human – creativity.

In the grand scheme of life, you probably wouldn't say that cooking dinner for yourself or getting yourself out of bed in the morning were the things you were most proud of achieving. People want to be unique, innovative, and admired for their talents. Why else would we write books, design cars, or start our own companies? It's in our nature to create. The BCMI promises to give a whole new medium for creativity because it can be used by anyone almost regardless of any physical disability. Inside each one of us is the untapped potential to be the next Beethoven without the agony of studying music theory or learning the piano. All you need is a brain.

Harnessing Brain Signals for Communication

 

"You survive, but you survive with what is so aptly known as 'locked-in syndrome.' Paralyzed from head to toe, the patient is imprisoned inside his own body, his mind intact, but unable to speak or move. In my case, blinking my left eyelid is my only means of communication..." (from Bauby, "The Diving Bell and the Butterfly," Fourth Estate, London, 1997, p. 12).

[Image: Brainwaves]
These are the words of Jean-Dominique Bauby, a former journalist and editor of the French magazine, Elle, who at the age of 43 suffered a massive brain stem stroke that left him completely paralyzed, unable even to speak. All he retained was his ability to blink the left eye. After seeing he could communicate "yes" and "no" by blinking, Bauby's speech-language pathologist, Henriette Durand, set up a special alphabet he could use to blink the letters of words. Thus, he was able to communicate with Durand, his other health care providers, family, and friends.

Ultimately, Bauby achieved the near-miraculous feat of dictating a 139-page book about his locked-in experience to his assistant, Claude Mendibil. The book, "The Diving Bell and the Butterfly," was published in France shortly before his death in 1997, and 10 years later was made into an Academy Award-nominated movie of the same title. The butterfly symbolized the words and thoughts trapped within the diving bell, Bauby's steel trap of a body. Expressing those thoughts via eye blinks was by no means easy or fast. Mendibil had to sound out the French alphabet in frequency order, and Bauby blinked as soon as she uttered the right letter. The book took about 200,000 blinks to write in four-hour sessions over 10 months. It required the constant presence of Mendibil to produce.

If Bauby had suffered his stroke today, he could have been able to write his book on his own, by using brain-computer interface technology being developed, in part, by communication sciences and disorders professionals. About nine years before Bauby's book was on the shelves, a paper published in 1988 in the journal Psychophysiology signaled the starting point for an alternative, less onerous communication technique for people with similar severe motor impairments. This paper, by cognitive neuroscientist Emanuel Donchin and his former student Lawrence Farwell, first described a brain-computer interface system, called the P300 Speller, which allows people to communicate without moving a muscle by using their brain waves in lieu of fingers. About 10 years later, the BCI-2000 package, which implemented the P300 Speller described by Farwell and Donchin, was developed and distributed to researchers to test with patients with amyotrophic lateral sclerosis (ALS or Lou Gehrig's disease).

The goal of our research at the University of South Florida BCI laboratory and at Northeastern University is to ultimately develop an interface that allows locked-in patients to communicate more quickly and independently, and to control their environment efficiently (for example, to reach for and manipulate objects). Another goal is to adapt the interface to control commercially available augmentative and alternative communication devices.

The brain interface at work

Here is a look at the BCI-2000 in action: Scott Mackler, a professor at the University of Pennsylvania's School of Medicine who has advanced ALS, uses the P300 BCI to communicate with students and others via e-mail and stay actively involved in his research on addiction. During the two and a half years he has used it, the system has maintained a high accuracy level of 83 percent. Mackler is one of several people with ALS who interact with others daily via the P300 BCI as part of the New York Wadsworth Center's effort to develop a home version of the BCI-2000 system. Researchers in the Wadsworth laboratory facilitate these clients' use of the technology via the Internet.
But how does the system actually elicit words from somebody who can't move? By recording electrical brain activity and isolating a particular type of brain wave, the P300. This type of brainwave is recorded in what's known as the "oddball" paradigm, in which a person is presented with a sequence of events (for example, names). Each event should belong to one of two categories (for example, female and male names), one of which should be presented more frequently than the other (for example, an 80 to 20 ratio). It is critical for the person to perform a task that requires categorizing each of the events into one of the two categories. The events in the rare category elicit a P300.
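As a toy illustration of that arrangement (an assumed 80/20 ratio and made-up category labels, not data from any study), an oddball stimulus sequence might be generated like this:

    import random

    def oddball_sequence(n_events=100, rare_prob=0.2,
                         frequent="male name", rare="female name"):
        """Return a sequence in which the rare category appears about 20% of the time."""
        return [rare if random.random() < rare_prob else frequent
                for _ in range(n_events)]

    seq = oddball_sequence()
    print(seq[:10])
    print("rare events:", seq.count("female name"), "out of", len(seq))
    # Silently counting (categorizing) the rare events is the task that makes them elicit a P300.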

The P300 BCI system records people's brain activity via an electro-cap worn on the head, an amplifier to process the incoming signal and computer programs that allow:
  • The presentation of the oddball paradigm to the user through a visual, auditory, or tactile interface. For example, the visual interface is a 6 x 6 matrix with letters and numbers. The size of the matrix can be changed to accommodate the user's needs.
  • The detection of the P300 in real time.
In the original design of the P300-Speller, the "oddball paradigm" was implemented by flashing, successively, and randomly, the six rows and the six columns of the matrix. The participant focuses on a specific character. In the sequence of 12 flashes, there are thus two categories: Two items contain the target character, while the other 10 flashes do not. Thus, as was predicted on the basis of decades of P300 research, the row and the column containing the target would elicit a P300, and the other 10 flashes would not. The system would then "type" the character at the intersection of the row and column that elicited a P300.
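A bare-bones sketch of that selection logic might look like the following; the p300_score() function is a hypothetical stand-in for whatever EEG classifier scores each flash, and the 6 x 6 matrix here is a simplified letters-plus-digits layout.

    import random
    import string

    # Simplified 6 x 6 matrix: 26 letters plus the digits 0-9.
    CHARS = string.ascii_uppercase + "0123456789"
    MATRIX = [list(CHARS[i * 6:(i + 1) * 6]) for i in range(6)]

    def p300_score(flash_contains_target):
        """Hypothetical stand-in for an EEG classifier's P300 score for one flash.
        Simulated here: flashes containing the attended character tend to score higher."""
        return random.gauss(1.0 if flash_contains_target else 0.0, 0.3)

    def spell_one_character(target):
        t_row = next(r for r, row in enumerate(MATRIX) if target in row)
        t_col = MATRIX[t_row].index(target)
        # Flash each of the 6 rows and 6 columns (in practice, repeatedly and in random order).
        row_scores = [p300_score(r == t_row) for r in range(6)]
        col_scores = [p300_score(c == t_col) for c in range(6)]
        # The character at the intersection of the best-scoring row and column gets "typed".
        return MATRIX[row_scores.index(max(row_scores))][col_scores.index(max(col_scores))]

    print(spell_one_character("N"))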

A major advantage of the P300-Speller, compared with other BCI systems, is that it requires virtually no prior training of the user. Other BCIs require extensive training, and users' ability to control brain activity fluctuates due to a lack of clear instructions on how to control this activity. The systems differ in the brain activity they rely on. For example, one that uses the "Mu" rhythms requires a relatively long training period of weeks before users are able to control the system with 80 percent accuracy. Users deploy the mind as a mouse to "control" a cursor to select items on a screen. In these systems, motor imagery (for example, imagining moving a hand) is typically used during training.
Returning to the P300 BCI, data from one recent study, led by Christoph Guger and published in Neuroscience Letters in 2009, suggest that after only five minutes of training on the system, close to three quarters of the 81 participants could spell a five-character word with 100 percent accuracy. Most of the remaining participants achieved an accuracy level above 80 percent.

It should be noted, however, that not everyone is the right candidate for the P300 BCI. To be eligible, a user must be able to:
  • Perform a simple oddball task. To determine this, the tester presents the user with a simple oddball task, typically the letters X and O presented in different probabilities, and sees whether a P300 is elicited by the rare category (for example, the X).
  • Understand and follow the categorization instructions. To produce the P300, cognitive abilities must be intact.

What's next?

Bauby's cognitive abilities were obviously sharp as ever, suggesting he would have benefited from the P300 BCI. It would have freed him from relying on an assistant. And although the differences in spelling rate between the P300 and Bauby's method are only marginal, the benefit of the BCI system lies in its provision of increased independence and an increased range of functions: The system can be much more than just a speller. It allows people to use their brain activity to surf the Internet, write e-mails, and, with some adjustments, control their environment.

That said, increasing the speed and accuracy of the P300-BCI is a major goal of our work at the University of South Florida BCI laboratory. Working with electrical and mechanical engineers, we're fine-tuning an interface to help users reach and manipulate objects. We're also developing a "switch" that would allow users to control augmentative and alternative communication devices with the P300 BCI. Switches to control AAC devices are typically offered when the ability to speak intelligibly has deteriorated and only minimal, residual movement is left, which is typical in later ALS. In many cases, ALS patients use an AAC tied to eye gaze, although even these devices become inefficient or unstable as the disease progresses.

Although the P300 BCI was developed as a speller, it is a communication system that allows people to communicate with the environment at different levels. Not only can they spell words, but they can select common phrases or use the interface as a keyboard to control their computer or surf the Internet. They do this by selecting commands that are communicated to an external device. Given that the P300 can be elicited by visual, auditory or tactile events, is not modality specific, and does not require a motor response, we are hoping to develop a system that presents events in the user's strong modality (for example, a visual display for people with hearing loss and auditory input for people with poor vision) and that is suitable for users who are unable to execute a motor response.
As the P300 BCI develops into a home-based AAC device, its use among "locked-in" patients will likely increase, and SLPs will play an important part in its optimal operation. Some users will benefit from a device that allows them to indicate a simple "yes," "no," "stop" answer, while others will be able to control an 8 x 9 matrix that emulates a computer keyboard. It is the role of the clinician to determine the level of complexity that best suits the user.

What if the P300 BCI had been available to Bauby? Although his spelling time would have been only moderately reduced, he would have been able to work independently, at his own pace. He would have been able not only to write his book, but also to edit it. He could have used the P300 BCI not only for spelling, but also for communicating with his family members, for example by sending and receiving e-mails, and, if he lived today, by using video-phone applications such as Skype. With some adjustments, the P300 BCI may have allowed Bauby to use a television remote and otherwise control his environment, freeing him, at least somewhat, from the diving bell that so constrained him.


Yael Arbel, PhD, CCC-SLP, is a visiting clinical assistant professor in the Department of Speech-Language Pathology and Audiology at Northeastern University in Boston. She is also affiliated with the Departments of Psychology and Communication Sciences and Disorders at the University of South Florida in Tampa.

cite as: Arbel, Y. (2013, January 01). Harnessing Brain Signals for Communication. The ASHA Leader.

Web source: http://www.asha.org/Publications/leader/2013/130101/Harnessing-Brain-Signals-for-Communication.htm

Sources

Donchin, E., & Arbel, Y. (2009). P300 Based Brain Computer Interfaces: A Progress Report. Foundations of Augmented Cognition. Neuroergonomics and Operational Neuroscience Lecture Notes in Computer Science, 724–731.
Farwell, L. A., & Donchin, E. (1988). Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 70(6), 510–523.
Guger, C., Daban, S., Sellers, E., Holzner, C., Krausz, G., Carabalona, R. ...Edlinger, G. (2009). How many people are able to control a P300-based brain-computer interface (BCI)? Neuroscience Letters, 462(1), 94–98.
Mak, J. N., Arbel, Y., Minett, J. W., McCane, L. M., Yuksel, B., Ryan, D. ...Erdogmus, D. (2011). Optimizing the P300-based BCI: current status, limitations and future directions. Journal of Neural Engineering, 8(2), 1–7.
Sellers, E. W., Arbel, Y., & Donchin, E. (2012). P300 Event-Related Potentials and Related Activity in the EEG. In Wolpaw, J. R. & Wolpaw, E. W. Brain-Computer Interfaces: Principles and Practice. Oxford, N.Y.: Oxford University Press.
Sellers, E. W., Vaughan, T. M., & Wolpaw, J. R. (2010). A brain-computer interface for long-term independent home use. Amyotrophic Lateral Sclerosis, 11(5), 449–455.

Wednesday, January 23, 2013

MT woman with ALS speaking with technology's help

http://www.kaj18.com/news/mt-woman-with-als-speaking-with-technology-s-help/

Posted: Jan 22, 2013 8:12 PM by Victoria Fregoso - MTN News
Updated: Jan 23, 2013 7:15 AM

BILLINGS - A computer-generated voice now speaks for Donna Fisher. Just two years ago, she was diagnosed with ALS, or Lou Gehrig's Disease. Over time, she slowly lost her ability to move and speak.

"Donna's only purposeful movement is with her eyes," said Billings Clinic Speech Pathologist Carol Morse. "She has no other purposeful movement. She has minimal head movement, no other purposeful movement."

Thanks to a device called the ECO 2, Donna is able to speak once again. "I want everyone that is in my situation to know about this machine and to know how easy it is to work," she said.

"It certainly allows Donna to be independent," Morse said. "She shops online, she pays her bills online, she signs the time cards for her personal care attendants. It allows her to have independence."
Through "eye-gaze" technology, one by one, Donna picks out letters to form a sentence. Sentences ranging from a simple greeting to getting help in an urgent situation. "Friday I ended up in the emergency room. And until someone brought this up to me, I couldn't answer any questions the doctors and nurses needed to ask me," Donna said.

Dr. Carol Morse says training Donna on the device was a breeze; it was the funding that posed a challenge. After an appeal was filed with the state Medicaid office, Donna was granted the ECO 2 after a six-month wait. Donna's doctors and friends fought for the device because it was something she couldn't live without. "I will quote Donna: if you take away this device, you take away her voice," Morse said.

Donna came forward wanting to let others in her situation know they still have the ability to communicate.

Tuesday, January 22, 2013

Intel Aims To Give Stephen Hawking’s Speech Device Much Needed Upgrade

January 22, 2013


Jedidiah Becker for redOrbit.com – Your Universe Online

For years now, the paralytic conqueror of the cosmos Stephen Hawking has relied on technological gadgetry to serve as the interface between his magnificent mind and the outside world. While a Lou Gehrig-like degenerative disease has slowly eroded his ability to control his own movements for the last five decades, the world-renowned theoretical physicist and pop-science icon has, in a sense, been fortunate. Had he been born a mere twenty years earlier, the brain that first attempted to bridge the cosmological chasm between quantum mechanics and general relativity may have been forever locked away within the confines of a debilitated body.

But as Hawking’s degenerative motor neuron disease has incrementally robbed him of his ability to communicate naturally, the rise of increasingly sophisticated technology has empowered him to continue his work to the benefit of us all. In recent years, however, that technology appears to have plateaued while Professor Hawking’s physical state has all but reached its nadir. With the exception of a few facial muscles that he can still voluntarily twitch, the Cambridge-based cosmologist is now fully paralyzed.

Since the early 2000s, Hawking has relied on an electronic speech-generating device that allows him to use a voluntary cheek twitch to select letters from a screen as a continuously moving cursor scrolls through the alphabet. In this manner, letter by letter, word by word, the profoundest thoughts of his lightning-quick mind drip from this cognitive bottleneck at a grueling rate of about one word per minute.

Recently, however, Intel’s chief technology officer Justin Rattner stated he believes that there may be several technologies floating around that could dramatically increase the speed with which Hawking is able to communicate. At last week’s annual Consumer Electronics Show (CES), Rattner noted that an Intel research team could be on the cusp of a new technology that could improve his word count by up to 5 to 10 words per minute.

A renowned computer scientist in his own right, Rattner says that the technology currently used by Hawking utilizes only one of his voluntary motions; namely, the cheek twitch. But as the Intel CTO points out, Hawking can also generate small movements in his eyebrow as well as his mouth, and the incorporation of just one of these additional voluntary muscle responses into his communication technology could dramatically increase the speed with which he can communicate. For instance, utilizing two inputs – say cheek plus eyebrow twitch – instead of just one would allow Hawking to use Morse code rather than a lazily scrolling cursor to spell words. And while this might still be slow by almost any other standard, it would still represent a "great improvement" over the current exasperatingly slow one-word-per-minute pace.
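To make the two-input idea concrete, here is a small, purely illustrative sketch (not Intel's actual system): one input, say the cheek twitch, produces a dot, the other, say the eyebrow, a dash, and a pause ends the letter.

    # Illustration only: decode a stream of two-input events into letters via Morse code.
    MORSE = {
        ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
        "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
        "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
        "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
        "-.--": "Y", "--..": "Z",
    }

    def decode(events):
        """events: 'dot' (e.g. cheek twitch), 'dash' (e.g. eyebrow), 'pause' (end of letter)."""
        letters, current = [], ""
        for event in events:
            if event == "pause":
                letters.append(MORSE.get(current, "?"))
                current = ""
            else:
                current += "." if event == "dot" else "-"
        if current:
            letters.append(MORSE.get(current, "?"))
        return "".join(letters)

    print(decode(["dot", "dash", "pause", "dash", "pause"]))  # prints "AT"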

Hawking first teamed up with Intel in the late 1990s in the hopes of developing technology that would allow him to overcome his increasingly severe communication impediments. In the past two years, the 71-year-old cosmologist has again actively sought the assistance of the massive multi-national chip maker as his ability to compose text has further diminished.
After an initial meeting with Hawking in early 2012, Rattner says he was unsure whether the current state of the technology was up to snuff for the professor’s expectations. “Up to now, these technologies didn’t work well enough to satisfy someone like Stephen, who wants to produce a lot of information,” he explained.

Currently, Intel is at work on a system that can combine the physicist’s cheek twitch with his mouth and eyebrow movements to generate more complex input signals for his computer. “We’ve built a new, character-driven interface in modern terms that includes a better word predictor,” said Rattner.

THE FUTURE IS INTUITIVE, ENVIRONMENTALLY AWARE TECHNOLOGY

But that’s not the end of the story. Though still in its nascent research and development stage, the world’s largest chip-making company is also tinkering with an entirely new interface that would rely on sophisticated facial recognition software rather than mechanical muscle movements as inputs.
While Hawking is now entirely reliant on Intel's technology to express himself, the relationship between the brilliant theoretical physicist and the cutting-edge microprocessor firm is by no means one-sided. The very special case of Professor Hawking's deteriorating motor skills has provided a salient and urgent catalyst for the company's broader research into smart technology and devices for assisting the elderly and disabled. And Hawking's rigorous and articulate feedback regarding what works, what doesn't and why has undoubtedly provided Intel's R&D department with critical insight into how to move forward with these technologies.

According to Rattner, the key to getting smart technology out of the slump that it’s been in for half a decade is to create gadgets that are able to read the user’s environment in an integrated and intuitive manner. Moving forward, Intel’s work in this field will make use of ‘context aware’ devices that combine a variety of environmental inputs using hardware like cameras, microphones and thermometers. The ambient information gleaned from these devices can then be paired with software that tracks the user’s online activity, personal calendars, social media engagement, etc. to create an intimate, predictive and truly ‘smart’ AI assistant. “We use this [information] to reason your current context and what’s important at any given time [and deliver] pervasive assistance,” Rattner explained.

Mark Weiser – the twentieth-century modern computing trailblazer and long-time chief scientist at Xerox PARC – once said that the best technology "should be invisible, get out of your way, and let you live your life." According to this philosophy, our gadgets should be quiet, invisible servants – a sort of extension of our subconscious. And this is what Intel is currently aiming at with its smart technology research. Rattner hopes to create devices that help not only the physically disabled but eventually all of us by anticipating our needs and desires at the most basic levels. And if Intel has anything to do with it, he says, "we'll be emotionally connected with our devices in a few years."


Source: Jedidiah Becker for redOrbit.com – Your Universe Online

BrainGate: A Breakthrough Study for Healthcare Robotics

From: Robotics Trends - 12/26/2012

Quadriplegic's brain reconnects with a long-lost ability: to reach, to grasp and to lift with robotic limb

Cathy Hutchinson is a quadriplegic patient who is able - for the first time in fifteen years - to reach and grasp with a robotic limb linked to a tiny sensor in her brain. As reported in a recent study in the journal Nature, the device, called BrainGate, bypasses the nerve circuits broken by the brainstem stroke and replaces them with wires that run outside Hutchinson's body. "The implanted sensor is about the size of a baby aspirin."


Read the entire article at:


 

Links:

BrainGate


 

Paralyzed woman gets robotic arm she controls with her mind (with video 6:39) http://today.msnbc.msn.com/id/47447302/ns/today-today_health/t/paralyzed-woman-gets-robotic-arm-she-controls-her-mind/

 

Related:

Paralyzed athlete completes marathon in 16 days with bionic legs (with video 1:42)