Still trucking away gathering data for my dissertation. I’m looking for people who have used Particle Laboratory to teach themselves how to work with particles in Second Life.
The survey can be found here.
Thanks in advance!
If you’re like me, you detest most commercials. These days I primarily watch HuluPlus instead of cable television, and the commercials irritate me to no end (usually I lower the volume or simply get up and do something else while they’re on). A few months ago, though, a series of commercials caught my attention. They were from Verizon, promoting an accessibility technology called Velasense that Verizon is integrating into its system.
I have to say, from the videos, I am deeply impressed. Velasense not only uses GPS to guide visually impaired users to their destination, but even to doors and other structural elements. It has built-in facial recognition that not only identifies friends but also tells users what their facial expressions are. It can also read text on objects like cans, money, and newspapers.
I’ve wondered how the visually impaired might use virtual worlds such as Second Life. I realize that screen readers can often pick up text chat, but what about the graphical interface? Seeing technologies like Velasense come to market makes me wonder whether, someday, such technologies will be able to interpret online graphics for their users, painting a picture of what is happening on screen through description. Such a breakthrough would be amazing, not just for virtual worlds/VR, but for online education as we know it.
For more information about Velasense and to see more video of it in action, check out their website and Tumblr.
Originally posted on the Virtual Educator blog.