Thursday, November 3, 2011

Did you know... there is adaptive clothing for those in wheelchairs?

 
Seven years ago, celebrated Canadian fashion designer Izzy Camilleri was asked to design a custom piece for a successful journalist… who just happened to be a quadriplegic. This influential relationship opened Camilleri's eyes, and new doors, as she had not previously been aware of the challenges people in wheelchairs face when it comes to clothing. After much research and contemplation, she was inspired to launch the most important collection of her career: IZ Adaptive.

Fashion innovator Camilleri broke new ground with what is the world’s first line of everyday adaptive clothing for a “seated” clientele. The IZ Collection features modern and sophisticated pieces for both women and men who use a wheelchair, many of them under the age of 25 and yearning for access to style not readily available in the marketplace, until now. IZ Adaptive offers fashionable casual, professional, and formal wear that makes wheelchair users feel both empowered and proud. The line celebrates both body and spirit, and gives seated clients the freedom to finally define their own personal style.

Each piece is cut to follow the line of the seated body, with strategic zipper placement accommodating specific needs. Izzy brings couture-like workmanship and a keen attention to detail to her collections, qualities that have secured her a place among Canada’s pre-eminent designers and earned her Designer of the Year in 2006.

From runway to behind the scenes of the film and television industry, Izzy has worked extensively with Canadian and U.S. costume designers and stylists on feature films, TV movies, series, as well as music videos. Camilleri has designed for some of the biggest names in Hollywood including Nicole Kidman, Jennifer Lopez, Angelina Jolie, Reese Witherspoon, and Catherine Zeta Jones, to name a few.

The IZ Collection, however, makes a profound impact on another set of VIPs who transcend Hollywood A-listers: the seated clientele, VIPs who now have access to style they’ve never had before.

To learn more about Izzy Camilleri's work, please view her website at: www.izzycamilleri.com




Speech Recognition through the Decades

From: PCWorld - 11/02/2011
By: Melanie Pinola

People are impressed with the iPhone 4S's Siri, but how did such
sophisticated speech recognition technology come to be? It started back in
the 1950s.

Looking back on the development of speech recognition technology is like
watching a child grow up, progressing from the baby-talk level of recognizing
single syllables, to building a vocabulary of thousands of words, to
answering questions with quick, witty replies, as Apple's supersmart virtual
assistant Siri does.

Listening to Siri, with its slightly snarky sense of humor, made us wonder
how far speech recognition has come over the years. Here's a look at the
developments in past decades that have made it possible for people to control
devices using only their voice.

Read the entire article and view a video (1:01) at:
http://www.pcworld.com/article/243060/speech_recognition_through_the_decades_how_we_ended_up_with_siri.html

Wednesday, November 2, 2011

A Boston University scientist uses brain mapping techniques to study how thoughts are translated into speech (for BCIs)

The Mind Reader

How Frank Guenther turns thoughts into words

http://www.bu.edu/today/2011/the-mind-reader/

For thousands of years humans have spoken. Noam Chomsky and many other linguists argue that speech is what sets Homo sapiens apart in the animal kingdom. “Speech,” wrote Aristotle, “is the representation of the mind.”
It is a complex process, the series of lightning-quick steps by which your thoughts form themselves into words and travel from your brain, via the tongue, lips, vocal folds, and jaw (together known as the articulators), to your listeners’ ears—and into their own brains.
Complex, but mappable. Over the course of two decades and countless experiments using functional magnetic resonance imaging (fMRI) and other methods of data collection, neuroscientist Frank Guenther has built a computer model describing just how your brain pulls off the trick of speaking.
And the information isn’t merely fascinating. Guenther (GRS’93), a Sargent College professor of speech, language and hearing sciences, believes his model will help patients suffering from apraxia (where the desire to speak is intact, but speech production is damaged), stuttering, Lou Gehrig’s disease, throat cancer, even paralysis.
“Having a detailed understanding of how a complex system works helps you fix that system when it’s broken,” says Guenther, a former engineer who left Raytheon (“I hated being a corporate cog”) to earn a PhD in cognitive and neural sciences at BU. He now directs that program. “And a model like this is what it takes to really start understanding some of these complicated communication disorders.”
Guenther’s virtual vocal tract, Directions into Velocities of Articulators (DIVA), is the field’s leading model of speech production. It is based on fMRI studies showing what groups of neurons are activated in which regions of the brain when humans speak various phonemes (the mini-syllables that compose all words). The DIVA system imitates the way we speak: moving our articulators and unconsciously listening to ourselves and auto-correcting. When Guenther runs a fresh program, the model even goes through a babbling phase, teaching itself to produce phonemes just as human babies do.
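To make that feedback idea concrete, here is a minimal Python sketch of the kind of loop DIVA embodies: a toy "vocal tract" maps an articulator command to formant frequencies, the system "hears" the mismatch with its auditory target, and an inverse mapping turns that auditory error into a corrective articulator movement. Everything in it (the linear mapping, the learning rate, the target values) is illustrative and not taken from the actual model.

import numpy as np

# Toy "vocal tract": a fixed linear map from a 2-D articulator command to the
# first two formant frequencies (F1, F2). The real articulatory-to-acoustic
# mapping is nonlinear; this matrix just stands in for it.
A = np.array([[600.0, 150.0],
              [200.0, 1800.0]])

def produce(command):
    return A @ command                              # "speak" and return the formants heard

def speak(target, steps=100, rate=0.05):
    """Drive the articulator command until the heard formants match the target."""
    command = np.random.uniform(0.0, 1.0, size=2)   # babbling start: a random command
    inverse_model = np.linalg.pinv(A)               # learned auditory-to-articulator map
    for _ in range(steps):
        heard = produce(command)                    # auditory feedback
        error = target - heard                      # mismatch with the intended sound
        # Directions in auditory space become velocities of the articulators.
        command += rate * inverse_model @ error
    return produce(command)

vowel_ah = np.array([700.0, 1200.0])   # rough (F1, F2) for "ah", in Hz; illustrative
print(np.round(speak(vowel_ah)))       # ends up close to the target formants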
Guenther and colleagues in his lab, which he moved to Sargent from the College of Arts & Sciences when the cognitive and neural sciences department was dissolved this year and its activities distributed to other BU teaching and research units, continue to perfect the model, but primarily, they’re focused on “using insights from the model to help us address disorders like stuttering,” Guenther says. “What we’ll do is modify the model by damaging it to mimic what’s going on in these disorders.” As they learn more about the physiological differences in the brains of stutterers, for example, his team comes closer to “having more precise hypotheses about which receptor systems a drug should target, which should lead us more quickly to a drug that doesn’t cause other behavioral problems.”

Frank Guenther, Sargent College, Boston University neuroscience
Pick a letter and these caps can probably guess which one you’re thinking of. Sensors in the caps—the red one manufactured by Frank Guenther, the gray one modified by his team from an existing product—pick up the brain’s electrical signals and transmit them to a computer screen.

Giving voice to a thought

A large part of Guenther’s work consists of devising “brain-computer interface methods for augmentative communication,” he says. The most dramatic example has been a collaboration with pioneering neuroscientist Phil Kennedy of Neural Signals, Inc., in Georgia, in which software developed by Guenther’s lab helped a paralyzed man articulate vowels with his mind.
Guenther explains the condition of a patient who is physically paralyzed but mentally sound: “In locked-in syndrome, the cortex, the main parts of the brain that the model addresses, are actually intact. What’s messed up is the motor output part of the brain. So the planning of speech goes on fine, but there’s no output.” Guenther had speculated that “if we knew what their neural signals were, how they were representing the speech, then we should be able to decode the speech. And it turned out that Kennedy and his team had implanted somebody with an electrode in that part of the brain—the speech motor cortex—but were unable to decode the signals.”
The volunteer who received the implant was Erik Ramsey, who had suffered a severe stroke following a car crash and could communicate only by answering questions with “yes” or “no” using eye movements. With a grant from the National Institutes of Health, Guenther and colleagues built Ramsey a neural prosthesis in 2008. With his electrodes hooked up to a wireless transmitter, Ramsey imagined speaking vowels, activating neurons that powered a real-time speech synthesizer (emitting a robotic “ahhhhoooooeeee…”) while the researchers watched his progress on a monitor that showed his formant plane, an X-Y axis graph representing “what we call the formant frequencies—where the tongue is, basically,” Guenther says.
“By the end of the experiment,” Guenther says, “he was hitting the auditory targets about 80 percent to 90 percent correctly.”
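As a rough illustration of what "hitting an auditory target" on the formant plane involves, the sketch below maps a decoded two-dimensional position to a pair of formant frequencies and produces a crude vowel-like buzz. It is not the synthesizer used in the study; the frequency ranges, pitch, and mapping are assumptions chosen purely for illustration.

import numpy as np

SAMPLE_RATE = 16000

def decoded_point_to_formants(x, y):
    """Map a decoded (x, y) position on the formant plane to (F1, F2) in Hz.
    The ranges are illustrative, not the ones used in the actual prosthesis."""
    f1 = 300.0 + 600.0 * np.clip(x, 0.0, 1.0)    # roughly 300-900 Hz
    f2 = 900.0 + 1600.0 * np.clip(y, 0.0, 1.0)   # roughly 900-2500 Hz
    return f1, f2

def crude_vowel(f1, f2, pitch=100.0, seconds=0.2):
    """A very crude vowel-like buzz: two sinusoids at the formant frequencies,
    amplitude-modulated at the voice pitch. A real synthesizer would use a
    source-filter model instead."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * pitch * t))
    return envelope * (np.sin(2 * np.pi * f1 * t) + 0.7 * np.sin(2 * np.pi * f2 * t))

# A stream of decoded cursor positions becomes a stream of short audio frames.
frames = [crude_vowel(*decoded_point_to_formants(x, y))
          for x, y in [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]]
audio = np.concatenate(frames)   # could be written to a WAV file or streamed out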

Frank Guenther and telepathic cap, Boston University Sargent College neuroscience
"It won’t cost patients $50,000, and they won’t have to undergo brain surgery," says Guenther. "It’s the kind of off-the-shelf thing that they can buy and use to communicate within a day or two of practicing."

Fuzzy mind reading

There are less invasive neural-prosthetic options, which Guenther’s lab is also pursuing. Electroencephalography, or EEG, involves picking up the brain’s electrical signals through external sensors resting on the subject’s head. Guenther’s colleague Jon Brumberg, a SAR research assistant professor, is testing an EEG system in which one imagines moving one’s left or right hand or foot, thereby moving a cursor on a screen. Another method involves choosing letters by staring at them on an alphabet grid.
These laborious methods have advantages, Guenther says. “First of all, it won’t cost patients $50,000, and they won’t have to undergo brain surgery. It’s the kind of off-the-shelf thing that they can buy and use to communicate”—albeit slowly—“within a day or two of practicing.”
However, because of interference from the skull, EEG signals have limited value. “Imagine an old TV antenna where you get a fuzzy picture,” Guenther says. “That’s what EEG is like. For real-time control of a synthesizer to produce conversational speech, I think the best way is going to be intracortical, intracranial, because you’re always going to get higher-resolution signals.” And Ramsey succeeded in producing vowels with only 2 output channels, while “the next system will have up to 96 channels,” Guenther says.
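For a sense of how the motor-imagery cursor method described above can work, here is a simplified sketch: imagining a hand movement suppresses the mu rhythm (roughly 8 to 12 Hz) over the opposite side's motor cortex, and the power difference between two electrodes is turned into a cursor velocity. The electrode names, sample rate, and band limits are generic assumptions, not details of Brumberg's system.

import numpy as np

SAMPLE_RATE = 256  # Hz; a typical EEG sampling rate, chosen for illustration

def band_power(signal, low, high):
    """Power in a frequency band, via a plain FFT (no windowing, for brevity)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    return spectrum[(freqs >= low) & (freqs < high)].sum()

def cursor_step(c3_window, c4_window, gain=1.0):
    """Imagined right-hand movement suppresses the mu rhythm (8-12 Hz) over the
    left motor cortex (electrode C3), and vice versa. The asymmetry between the
    two electrodes becomes a horizontal cursor velocity."""
    left = band_power(c3_window, 8, 12)
    right = band_power(c4_window, 8, 12)
    return gain * (left - right) / (left + right + 1e-9)

# One second of (simulated) EEG per electrode; in practice this comes from the amplifier.
c3 = np.random.randn(SAMPLE_RATE)
c4 = np.random.randn(SAMPLE_RATE)
print(cursor_step(c3, c4))   # positive moves the cursor one way, negative the other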
He points out that “these are the initial attempts. It’s like the first rockets that went up but didn’t even go into orbit. This is going to get more and more refined over the next decades. But it will happen. I can imagine a day when these surgeries become so routine that it’s not a big deal. Somebody might wear such a device as a necklace with a speaker on it.”
Guenther relishes his work as a pioneer at the nexus of engineering, neuroscience, and now rehabilitation. “Coming to Sargent College has been good timing for me because my earlier career was building up this model of normal human brain function,” he says, “and now that we’re starting to look at the disorders, like stuttering, we’re getting insights by talking to clinicians, and getting access to clinical populations, at Sargent.”
What hasn’t changed is Guenther’s fascination with the human brain. “It’s such an unbelievable machine. I’ve studied computers, and the brain does many things so much better than computers. And if you figure out how the brain works, you understand the mind, and you understand some of life’s great mysteries.”

In the video above, watch Sargent researchers Frank Guenther and Jon Brumberg discuss their work. Video courtesy of the National Science Foundation; view closed captions on NSF.gov.
Patrick L. Kennedy can be reached at plk@bu.edu.
This article originally appeared in the 2011-2012 edition of Inside Sargent.

Sensors for Body-Area Networks Could Enable New Healthcare Applications

From: University of Michigan EECS News - 2010-2011

Prof. David D. Wentzloff is investigating new technology that will exploit the
wireless communication standard being developed by the IEEE for body area
networks. Body area networks involve sensors placed on someone's body that
talk to each other, or that talk to something external or internal to the
body. For example, an individual could have an internal sensor that monitors
blood levels related to their diabetes or seizure disorder and communicates
with a patch on the skin, which, in turn, could communicate with a cell phone
application. Body area sensor networks may also be used to detect falls in
the elderly, or to alert nurses when very ill patients try to get out of bed.

Source:
http://www.eecs.umich.edu/eecs/about/EECSNews/EECSNews10.pdf (page 6)

Thought-Controlled Computers May Soon Be a Reality

From: Computerworld - 10/19/2011
By: Lucas Mearian

Wadsworth Center's Gerwin Schalk, speaking at the Massachusetts Institute of
Technology's Emerging Technology Conference, demonstrated how close real-time
human-computer interface technology is to becoming a reality. Schalk noted
that neurotechnology - a $145 billion market that is growing 9 percent
annually - already has reached several important milestones in human-computer
symbiosis. He said researchers are currently working with the brain's alpha
waves to create systems that can be used to communicate directly with
computers. In one demonstration, Schalk showed video of a patient shooting
monsters in a computer game using nothing but thoughts. Schalk also showed
how a computer can tell the difference between someone thinking of different
sounds, and how a computer can detect the sound level of music a person is
listening to and track it over time. Schalk noted that two major hurdles to
real-time thought-controlled computers are the development of better sensors
to detect alpha waves and better ways to identify the brain's signals.
"Direct computer interaction with the brain has the potential to become a
general purpose technology ... at the same scale as information technology,
computing, and the telephone," he said.

Read the entire article at:
http://www.computerworld.com/s/article/9221007/Thought_controlled_computers_may_soon_be_a_reality?taxonomyId=11

Tuesday, November 1, 2011

Environmental Control Article by Antoinette Verdone

Wonderful newsletter from my colleague Antoinette Verdone, MSBME, ATP
Sign up to receive her emails if you are interested. 

Inspired Solutions

Assistive Technology Newsletter from Antoinette Verdone, MSBME, ATP

Picture of Antoinette Verdone
Environmental Control - How to control just about anything.

In a previous newsletter we discussed options for automating door opening, but in this article, we will go into more depth on how to control items in your environment.

To read the article on door opening, click here.

For people with disabilities, some very simple actions that we take for granted can be difficult or impossible, such as turning lights on/off, changing the channel on the TV, operating the telephone, and more. Did you know that there is a whole industry of devices to help give people access to items in their environment? This article will focus mainly on environmental control. Look for a future article to focus on telephone access.

First, a little lesson in home automation technology.

There are two types of remote controls and three basic types of home automation technology.

For remote controls, the two types are infrared (IR) and radio frequency (RF). An example of IR is your TV remote. In order for the signal to be transmitted, you have to have what is called "line of sight". There is a receiver on the front of your TV that receives the signal from the remote control. If someone stands in front of the TV, you cannot change the channel. RF does not require "line of sight". An example of an RF remote would be a garage door opener. The remote communicates through the door to the receiver inside the garage to allow you to open the door. It is important to understand the difference between these technologies because many of the products discussed later can only learn IR controls.

For home automation, the three most prevalent technologies are X-10, Insteon, and Z-Wave. X-10 is a long-standing home automation technology that is used by most assistive technology devices.
Picture illustrating how X-10 works
The remote activated by the user sends an RF command to a base module called a transceiver module. This module then sends a signal through the house wiring to the module to be operated. With the X-10 system you can have many modules all over your home. Since the signal to the transceiver is sent via RF, line of sight is not required, so you can be anywhere in your home and the signal will still work. There are many different kinds of modules that will allow you to operate almost anything that can be turned on and off.
Picture of an array of X-10 modules
Examples of X-10 Modules
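For readers curious how X-10 addressing works under the hood: every module is set to a house code (A through P) and a unit code (1 through 16), and commands such as ON or OFF are sent to that address, which is why a single remote can address up to 16 devices per house code. The Python sketch below models just that addressing scheme; the transmit() function is only a placeholder for the real path the command takes (RF from the remote to the transceiver, then over the house wiring to the module).

HOUSE_CODES = "ABCDEFGHIJKLMNOP"
COMMANDS = {"ON", "OFF", "DIM", "BRIGHT"}

def transmit(house: str, unit: int, command: str) -> None:
    # Placeholder: a real setup would hand this to a powerline/RF interface.
    print(f"X-10 -> house {house}, unit {unit}: {command}")

def send(house: str, unit: int, command: str) -> None:
    """Validate an X-10 address and command, then pass it to the transmitter."""
    if house not in HOUSE_CODES:
        raise ValueError("house code must be A-P")
    if not 1 <= unit <= 16:
        raise ValueError("unit code must be 1-16")
    if command not in COMMANDS:
        raise ValueError(f"unsupported command: {command}")
    transmit(house, unit, command)

# Example: the lamp module from the starter kit might be set to address A1,
# and the appliance module plugged in elsewhere to A2.
send("A", 1, "ON")    # turn the lamp on
send("A", 2, "OFF")   # turn the other plug-in device off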

Insteon works very much like X-10, but in addition, the modules communicate with each other to make for a more robust system. Most Insteon devices are X-10 compatible. Go to www.insteon.com for more information.

Z-Wave is another home automation technology that uses only RF communication between the modules to send signals; it does not use the house wiring to perform actions.

I could go on for pages about all the pros and cons of each technology, but my purpose here is just to give you a quick overview. All of the devices that are designed for people with disabilities use X-10, so that is the technology that we will mostly be working with.

If you are able to press the buttons on a standard remote, you can control items in your environment VERY inexpensively. For example, here is a great starter kit for $45:
Picture of Three Piece X-10 Starter Kit
Click on the Picture to Purchase

This kit includes a transceiver module, a lamp module, and a remote. With this kit you can control one lamp and one other plug-in device. The remote has the capability to control up to 16 devices.

Another option for a starter kit would be this product, only $75:
Picture of starter kit that includes universal TV remote.
Click on the Picture to Purchase

This kit comes with the transceiver module, which can turn one plug-in item on/off, a key-chain remote for X-10, and a universal remote. The universal remote can be programmed to control up to four IR devices (TV, DVD, cable, etc.), and you can also control X-10 modules directly with it.

This is just the tip of the iceberg of what can be achieved with X-10. Contact us to discuss your needs, and we will help you find the best solution.

Now, if a person is not able to press the buttons on a standard remote, this is where the assistive technology comes in. In order to allow a person with a disability to operate an environmental control device, a switch is needed. There are many different types of switches, allowing a person with a disability to activate one with any part of their body that has reliable movement. Here are some examples of switches:
Picture of different kinds of switches for people with disabilities.

One product that New Life Medical Equipment carries is the Angel ECU.
Picture of the Angel ECU FX System
Click on the picture to go to the company's website.

This is a full featured, stand-alone environmental control system. The system is switch activated and can be highly customized and configured to meet any and all of your environmental control needs. For more information, please contact us.


If you or someone you know needs assistive technology help, please contact me today!


antoinette@newlifehme.com
512-497-6026


Monday, October 31, 2011

Free Webinar for the Tobii Sono Flex iPad App

Webinar Registration

Tobii Sono Flex is an AAC app for the iPad that turns symbols into speech. In this webinar, you will learn how to use and customize Sono Flex. We will teach you customization skills such as how to change the button text and how to add pictures from your iPad library onto existing buttons. You will also learn how to adjust the Sono Flex settings and add new vocabulary. Join Tobii ATI to get all your questions answered in this action-packed, hour-long webinar!


https://www1.gotomeeting.com/register/772743145

Talk to me, with your eyes

Eye tracking technology allows ALS sufferers to express creativity

http://scienceline.org/2011/10/talk-to-me-with-your-eyes/


When Los Angeles graffiti artist Tony Quan, otherwise known as Tempt1, was diagnosed with amyotrophic lateral sclerosis (ALS), he lost all muscle function in his body – except in his eyes. His brain was fully functioning, but he could neither speak nor move a limb. His ideas were as abundant as ever, but he was an artist without any form of expression.
And then came The Eyewriter, a combination eye-tracking and drawing software that enabled Tempt1 to draw with his eyes. From a small hospital bed, Tempt1 drew the images in his mind while artists and hackers from the Free Art & Technology Lab drove around, downloading the images from the trunks of their cars and projecting them onto buildings for all of downtown L.A. to see.
Zach Lieberman, one of the creators of The Eyewriter, spoke about the impact of this technology for artists like Tempt1 at the October 18th “Talk to Me” symposium at the Metropolitan Museum of Art. The auditorium was filled with Radiolab enthusiasts eager to hear the NPR program’s co-hosts Robert Krulwich and Jad Abumrad banter about the relationship between people, objects and design, but it was Lieberman’s video of Tempt1 using The Eyewriter that stunned the audience.
The first half of The Eyewriter can be made from any old pair of glasses retrofitted with some copper wire, a micro-camera and near-infrared LEDs (light-emitting diodes) to illuminate the entire eyeball, making the pupil appear darker in comparison. This contrast makes it easier for the camera to detect and track the position of the pupils, enabling the eye-tracking software to map their coordinates onto a computer screen.
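This is not The Eyewriter's own code (which is available from the project's site), but the dark-pupil idea just described can be illustrated with a few lines of Python and OpenCV: under near-infrared illumination the pupil shows up as the darkest, roughly circular blob in the camera image, so thresholding and taking the largest dark contour gives an approximate pupil position. The threshold and blur values here are guesses that would need tuning for a real camera.

import cv2

def find_pupil(frame):
    """Rough dark-pupil detection on a BGR camera frame of the eye.
    Returns (x, y, radius) in image coordinates, or None if nothing is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (7, 7), 0)
    # Keep only the darkest pixels; under IR illumination these are mostly the pupil.
    _, dark = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(biggest)
    return x, y, radius

# The tracker then maps (x, y) from camera coordinates to screen coordinates,
# typically after a short calibration where the user looks at known screen points.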
The second part of The Eyewriter, the drawing software, uses a time-based interface so that the user can focus on a position on the screen for a specific amount of time to trigger different buttons (such as a color change) or create a new drawing point.
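A time-based interface of this kind is usually implemented as "dwell" selection: a button fires only after the gaze has stayed inside it for a set hold time, so glancing across it does nothing. The small Python class below is a generic sketch of that logic, not The Eyewriter's implementation; the button size and hold time are arbitrary.

import time

class DwellButton:
    """A screen region that triggers once the gaze has dwelt in it long enough."""

    def __init__(self, x, y, w, h, hold_seconds=1.0):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.hold_seconds = hold_seconds
        self.entered_at = None

    def update(self, gaze_x, gaze_y, now=None):
        """Feed the latest gaze point; returns True on the frame the dwell completes."""
        now = time.monotonic() if now is None else now
        inside = (self.x <= gaze_x < self.x + self.w and
                  self.y <= gaze_y < self.y + self.h)
        if not inside:
            self.entered_at = None           # gaze left the button: restart the timer
            return False
        if self.entered_at is None:
            self.entered_at = now            # gaze just arrived: start timing
            return False
        if now - self.entered_at >= self.hold_seconds:
            self.entered_at = None           # fire once, then require a fresh dwell
            return True
        return False

# Example: a "change color" button in the top-left corner of the drawing screen.
color_button = DwellButton(x=0, y=0, w=120, h=80)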
Lieberman and his team wanted The Eyewriter to be available to anyone – they offer the source code for the drawing software for free online, complete with step-by-step instructions on how to set it up. And for around $50 and the time it takes to learn how to engineer the eye-tracker, anyone eager enough can make The Eyewriter.
While not perfect, Lieberman said the software is pretty accurate. He explained that “your input is your output” – in other words, the user sees the dot show up on the point of the screen where their eye is looking.
One audience member asked how Tempt1 was able to take a break, to think about what he wanted to draw without the software still tracking his eyes.
Lieberman explained that it was hugely important to have a safe zone where Tempt1 could look to pause the system and look at what he was doing, so they made sure this was part of The Eyewriter’s design.
“With an eye tracker it’s really hard to use your peripheral vision to get a complete sense of what you’re doing,” said Lieberman. “You never really get the full picture.”
Despite its imperfections, Lieberman said the best moment of the project was watching Tempt1’s face light up seeing the work that he made.
“It was amazing watching him use it,” said Lieberman. “You could really learn what he was thinking, watching him map what he was going to do.”