Some auditory scanning solutions exist, but these are traditionally grid-based systems designed to do many things at once, such as supporting symbol systems and access methods like eyegaze. Often the language is editable, but it is hard to import content from something like a Word document into the software. Few solutions really support quickly reorganising large blocks of language and "trees" of words. For someone with a visual impairment the words are not a block but a list, so why not present them as a list? This is how words and phrases are traditionally organised on a low-tech system. Can we not replicate something similarly simple on a high-tech system to solve some of these issues?
What it will do:
- Have an auditory cue voice and a main voice, which can be different
- Allow both the cue and the main voice to be a recorded message
- Split the cue and main voices between wired headphones and the internal speaker of the iPhone
- Support spelling using purely auditory scanning – speaking each letter as it is written, and then the whole word on a space or finish command
- Let language be edited or created in a simple text file, where each tree is just an indented list of words or phrases
- Support one switch scanning
- Support direct access, for use by a communication partner or by a client who needs only small movements
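The indented-text-file idea above can be sketched in a few lines of code. The following is an illustration only, assuming one word or phrase per line with two-space indentation marking depth; the actual pasco file format may differ:

```python
# Sketch: parse an indented list of words/phrases into a tree.
# Assumes two spaces per level of indentation (an assumption, not
# necessarily pasco's real format).

def parse_tree(text, indent="  "):
    """Return a list of (phrase, children) tuples from indented text."""
    root = ("", [])
    stack = [(-1, root)]  # (depth, node) pairs, root at depth -1
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip())) // len(indent)
        node = (line.strip(), [])
        # Pop back to this item's parent level
        while stack and stack[-1][0] >= depth:
            stack.pop()
        stack[-1][1][1].append(node)  # attach to parent's children
        stack.append((depth, node))
    return root[1]

example = """greetings
  hello
  good morning
needs
  drink
  food"""

tree = parse_tree(example)
# tree[0] → ("greetings", [("hello", []), ("good morning", [])])
```

A format this simple means a low-tech paper list can be copied into any plain text editor and dropped straight into the app, with no special editing software needed.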
For more info see https://acecentre.org.uk/project/pasco/