Progressive Enhancement – the web’s strength

The web is a big beautiful mess and we love it.

We’ve come a long way from the web’s origins as hyperlinked text for scientists to share. In that short time we’ve collectively learned a lot about what makes the web so powerful and how to exploit it. We’ve also made mistakes, swinging off course to unhelpful design extremes, only to swing back and subsequently grow in our understanding as a result.

For example, we escaped the pixel-perfect, positioning, print-media-pretence phase [alliteration apology] and now appear to be in an equally obsessive JavaScript-rules, app-tastic, web-as-platform, native-competitor frenzy. This time it’s being driven by developers rather than designers. And again the wide, inclusive web community will no doubt auto-correct our collective course. The current discussion on the place of Progressive Enhancement (PE), with or without JavaScript, and what makes the web great appears to be heralding the start of that process. I hope so.

Three layers of an M&M – content, presentation, client-side scripting: one view of Progressive Enhancement

Evidence of the energy in JavaScript frameworks can be seen in TodoMVC, which has assembled some 64 example frameworks/libraries, with new ones arriving regularly (and remember MV* is not the only pattern in town). As Allen Pike quipped:

Studies show that a todo list is the most complex JavaScript app you can build before a newer, better framework is invented

This situation led Christian Heilmann to say:

Let’s stop the rat-race and concentrate on building working sturdy solutions

Of all those frameworks, I wondered how many support Progressive Enhancement as a feature. I’ve recently explored several of the latest and greatest for a new project – Angular, Backbone, Ember, Meteor, Polymer, React, Riot and WinJS – and I found they don’t. You might argue there’s a clue in the name “JavaScript frameworks”: they will need, well, er, JavaScript. Good point, but that misses what PE is about. The web page should work in a minimal browser, without CSS and JavaScript, but work better when those technologies are available. To show a blank page without JavaScript is a fail.
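To make that concrete, here’s a minimal, purely illustrative sketch of the idea (the link and element names are made up). The markup works in any browser, even with CSS and JavaScript stripped away; the script layer only upgrades the experience when it actually runs:

```html
<!-- Baseline: a plain link that works everywhere, no CSS or JS required -->
<a href="/search" id="search-link">Search</a>

<script>
  // Enhancement layer (illustrative): this only runs when JavaScript is
  // available, so users without it still get the working link above.
  var link = document.getElementById('search-link');
  link.addEventListener('click', function (e) {
    e.preventDefault();
    // ...open an in-page search panel instead of navigating...
  });
</script>
```

The point is the order of construction: the baseline comes first and is complete on its own; the script is an optional layer, never a prerequisite.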

In fact, I eventually reminded myself of the mature (and so boring) jQuery Mobile and jQuery UI, which both state PE as a design principle but are not MV* frameworks in themselves. T3js also mentions PE, but I’ve not explored how well it is supported. I’m interested to observe that many MV* solutions are not the same as the original Smalltalk MVC, which is where I first experienced the pattern. Some, like the Flux architecture, are much closer, with no two-way binding.

I suspect the reason for this lack of PE is, as others have observed, that with so many developers now coming to front-end web development, many bring experiences of good practice from large software systems and naturally want to apply them. I’m not saying hard-won principles such as modularity, separation of concerns, loose coupling and even MVC itself are bad. Rather, a narrow focus on software engineering with JavaScript means we can easily lose sight of the strengths of the web and code ourselves into a corner.

I believe we need to remind ourselves to develop for the web, not just the browser. Better still, design for users of the web, not browsers. Users are to be found on a range of devices and a variety of browsers, perhaps with assistive technology, and in varied contexts. We can’t control our users’ environments, whether that’s to get a pixel-perfect layout or to create a JavaScript platform.

The recent discussion on PE is distilling the idea that the web has its own strengths, which derive from its heritage of sharing scientific information. These include hassle-free access by disparate people on varied devices. The web can do this like nothing else can, and PE, responsive design and accessibility are key factors in ensuring it delivers on its promise. By supporting a wide, inclusive range of devices and user capabilities we gain incredible reach that benefits us both commercially and individually.

As PPK said:

We’ve lost sight of how to capitalise on that strength, though, and have to find our way back home.

I’m confident we will. My current thought is that PE is a key part of what makes the web work best, encompassing both mobile-first responsive design and accessibility. Design for small screens and enhance for larger ones. Design for basic inclusive access and enhance for an optimal, personalised experience. Taken together, and with other techniques that make few assumptions, these will let us reap the benefits of the web’s strength.

Perhaps someone will write a dissertation explaining this user-focused aspect of the web, to sit alongside Roy Fielding’s “Architectural Styles and the Design of Network-based Software Architectures”. Any volunteers?

Posted in web | Leave a comment

CKEditor Accessibility Checker for content authors

Here’s a mini review after a quick play with a preview of the new CKEditor Accessibility Checker plugin for content creators. The plugin is provided by CKSource, who lead the development of the open source CKEditor and provide additional commercial-grade services.

WYSIWYG JavaScript editors

If your memory is as long as mine you will recall when WYSIWYG JavaScript editors first appeared, back in the days when we spoke excitedly of DHTML (D = Dynamic = scripted). Designed for use in web programs such as Content Management Systems (CMSs, e.g. Drupal), these editors replace a basic HTML <textarea> with a rich editing experience somewhat similar to using a word processor, complete with toolbars. They hide the complexities of creating markup by automatically inserting tags into the generated HTML, which is then persisted in the CMS and displayed as part of viewed webpages.

Two early editors emerged as leaders, at least when I last looked over five years ago: CKEditor and TinyMCE. Both are still going strong and now have many solid features. While these editors provide a familiar experience when creating rich content, there was a problem: accessibility. Or rather, there were two accessibility problems.

Accessibility woes

Firstly, the toolbars were initially implemented as bitmap images and provided no keyboard access. I’m pleased to say that has now been fixed; in CKEditor, for example, you hit Alt+F10 to move focus into the toolbars. There are other accessibility features too, including keyboard shortcuts and even an Accessibility Help screen accessed via Alt+0.

The second accessibility issue is harder to solve: the accessibility of the content created by authors using the editor. While CMS developers like Drupal may make every effort to ensure the end-user experience is fully accessible, they cannot fully control user-generated content. As the editor manages which tags are added it can ensure a certain level of accessibility, including WAI-ARIA, but authors can still make common accessibility errors. For example, it’s easy to create a bad structure by skipping heading levels, or commit the perennial chestnut of forgetting to add an alt attribute to important pictures.
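For illustration, here are those two common author errors in markup (file names and text are made up), with fixes:

```html
<!-- Typical author errors (illustrative) -->
<h2>Recipes</h2>
<h4>Cakes</h4>          <!-- skips the h3 level -->
<img src="cake.jpg">    <!-- missing alt attribute -->

<!-- Fixed -->
<h2>Recipes</h2>
<h3>Cakes</h3>
<img src="cake.jpg" alt="A slice of Victoria sponge cake">
```

Both mistakes are invisible in a WYSIWYG view, which is exactly why a checker inside the editor is needed.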

CKEditor Accessibility Checker

One solution to the problem of catching author errors is to provide a tool that authors can use to check their content before they submit it. This is the approach taken in the CKEditor Accessibility Checker plugin. While there are several HTML validation tools and services that could have been used, the plugin uses the Quail accessibility checker.

To try the Accessibility Checker I first played with the comprehensive sample and then knocked up a little test. The sample provides some Wikipedia-style content with seven errors flagged by Quail, which is used to validate the markup.

CKEditor Accessibility Checker screenshot

As can be seen, a dialog popup provides the user interface, with buttons for the next and previous error and an explanation of the issue, which is also highlighted in the content. A triangle on one edge of the dialog acts as a pointer to the problem. All in all this is a good interactive experience for navigating and fixing issues. In addition, if the editor content is clicked to make a change, the dialog shrinks down out of the way – a nice touch. Quick fixes are provided as extra JavaScript snippets in the sample code and make for an easy user experience. A complication here is that authors using CKEditor are intentionally insulated from the raw markup details, so errors have to make sense with minimal reference to techie details and standards. I think a reasonable job has been done, though some understanding of the underlying markup is still required.

To get a feel for the effort involved in using the editor and Accessibility Checker, I created a simple webpage with some dodgy initial content for the editor. This showed how painless it is: in addition to the CKEditor initialisation and textarea element replacement code, you only need to include jQuery and declare use of the Accessibility Checker plugin.
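The setup looked roughly like the following sketch. The file paths are placeholders, and ‘a11ychecker’ is the plugin name used in the CKSource samples – check the Accessibility Checker documentation for the exact name and options for your CKEditor version:

```html
<!-- Sketch of the test page (paths illustrative) -->
<textarea id="editor">Some dodgy initial content…</textarea>

<script src="jquery.min.js"></script>
<script src="ckeditor/ckeditor.js"></script>
<script>
  // Replace the textarea with CKEditor and enable the checker plugin
  CKEDITOR.replace('editor', {
    extraPlugins: 'a11ychecker'
  });
</script>
```

That really is about all the wiring required; the checker then appears as a toolbar button.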

Note that the checker makes an XHR call to Quail, so the sample had to run from a server, not file:///. An easy way to do that is with Node.js and hapi configured as a simple static server. I also used the new Visual Studio Code editor to edit and run it. All in all that’s a nice, easy and portable way to get a Node.js server up and running.

While the first three accessibility errors in my dodgy markup were found, the contrast fail was not, even though this check is listed in the Quail documentation. I checked with TPG’s WAT to confirm it was indeed a WCAG AA and AAA fail. I tried adding it as a new CKEditor style but that made no difference. Perhaps Quail does not map colour names to values? I assume it works with inline styles. I didn’t spend any more time investigating this.

Some errors are no doubt hard to find because the editor contents are one part of a larger page context. If the wider page context is ignored then some structural errors will not be found, and the edit-time page context is likely to differ from the viewed one. Also, if the same content is used in several page contexts its structural integrity may vary.

Final Thoughts

The Accessibility Checker plugin is a good solution for ensuring user-supplied content is accessible, or otherwise for checking markup edited in the browser. The UX is good, though I did not check the accessibility of the UX itself. Quail is a good open source accessibility validator; it is configurable and supports tests for both WCAG and Section 508. The Accessibility Checker documentation claims it can be used with other checkers too, though that requires a subscription. The Quick fix feature makes it even easier for authors to use.

Currently the Accessibility Checker is a commercial offering from CKSource, but they say they plan to make it open source under the GPL, like CKEditor itself. I see there is a Drupal plugin for using CKEditor, so no doubt the Accessibility Checker could be added as well, making Drupal even more accessible.

Posted in a11y, opensource, web | Tagged , | 2 Comments

On Recovering Windows 8.1 when it goes wrong


After a double disaster with my workhorse Windows 8.1 laptop, I’ve been exploring easy ways to create a usable backup image of my system once it’s set up to my liking. The Windows Recovery Environment (Windows RE) turns out to be well thought out, but a couple of confusing bits of UI and a mass of conflicting advice on the interweb lead to obfuscation. The following is my experience; jump to the end for the steps for creating a custom image for system Refresh.

Blue Screens of Death are still a thing

I was unlucky enough to have two consecutive failures. The first was Windows Update installing an Alps driver for my Dell XPS 12 trackpad. The resulting service caused a Blue Screen of Death and required a lengthy ‘binary chop’ through the services to identify the cause so I could uninstall it. (Basically the steps are: get into Safe Mode, run msconfig, check Hide all Microsoft services on the Services tab and disable half the remaining services, reboot, then add or remove services and reboot again; rinse and repeat till you have figured out which one is causing the problem.)

Without any custom recovery setup, my only solution was a clean install of Windows 8.1 (from an MSDN ISO), followed by reinstalling all my desktop apps, followed by restoring my File History backup of docs/data from my NAS. Annoyingly, I had forgotten to back up a few hidden files in my user root when setting up File History. Anyway, the result was a nice, fast-booting PC.

Oh, but then after a reboot I suddenly got stuck in an unrecoverable BSoD loop (CRITICAL_SERVICE_FAILURE), so I had to do it all again. I’m pretty sure the cause was using Glary Utilities to clean up, not the hardware, but I ran the Dell diagnostics and Seagate’s own SSD tests just in case it was a hardware issue (all fine).

Being sensible and creating a recovery image

So this time I decided to create an image to facilitate easy restoration. While there are classic solutions like trusty old Clonezilla, I felt the built-in Windows solution should be usable. While reading around indicated I might be able to create a recovery partition on the system disk, I wanted a bootable USB stick (the XPS 12 has no optical drive) in case I hit the non-booting BSoD loop again.

While Windows RE supports options for both Refresh (keeps your docs/data) and Reset (full factory reset), both required me to supply recovery media (I think the install image on USB should have worked, but I went for a full reset anyway). My plan was to create a custom image that could be used in this case to reinstall all apps and programs.

Partition or image?

Now this is where the trail gets a bit muddy. While the desktop Control Panel recovery tool can create a backup drive, the useful-sounding checkbox “Copy the recovery partition from the PC to the recovery drive” is disabled, even on a system created from a Windows install image. It appears ‘partition’ actually means image, though an OEM may have put a custom recovery image on a partition (and you can do the same).

The solution to the disabled checkbox appears to be to provide an Install.wim Windows system image and configure Windows to use it with reagentc. Then, if I understand it correctly, the created recovery drive will contain the Windows RE and the Install.wim, enabling you to boot and recover from the drive. However, I have not yet tested this.

It turns out you can create a custom .wim image from a snapshot of your system and leave it on your system disk, where it can be used for a Refresh. Several people claim you can also rename the custom image to Install.wim and register it so you can create a recovery drive (and possibly use it for a full Reset).

The tool used to create and register a custom image is recimg, but its help has a big disclaimer that only documents and apps will be preserved during a Refresh, not desktop programs. This is the second piece of confusing information: it would no doubt be true for the default Install.wim (assuming that actually worked out of the box), but a custom image captures your installed desktop programs too. The recimg help also states it can only be used for a Refresh, not a Reset, but I don’t know whether that still holds after the image is renamed to Install.wim. I’ve yet to try it.

Once you create a CustomImage.wim you can run Refresh from the Windows RE, which will use the new image. Of course, that assumes you can get your PC to boot, which I wasn’t able to. To cover that case I’ve copied my CustomImage.wim to an external drive in the hope I can use it later if required. As mentioned above, you can create a recovery drive from your custom image, but you need to rename (or copy) and register your newly created CustomImage.wim as Install.wim to enable the checkbox.

In the end my plan was to create a CustomImage.wim after installing Office and my dev tools. This weighed in at a hefty 30 GB, so I removed it from my SSD as soon as I had made a copy. I didn’t have a device suitable for a recovery drive.

[Update 2015-05-24: Warning – the recovery drive utility does a FAT format, so your drive will be limited to 32 GB, which really restricts its application; in fact it makes it useless for me. Also, the drive I purchased (Elite) appears as a local drive rather than a portable one, so it is not seen by the utility.]

Creating a custom image for Refresh

  • Get system to a clean, updated and stable state
  • Open a Command Prompt
  • Run ‘mkdir c:\RefreshImage’
  • Run ‘recimg -CreateImage C:\RefreshImage’
  • You will now have a C:\RefreshImage\CustomImage.wim
  • Run ‘reagentc /info’ to check the custom image is registered

Creating a Recovery Drive with your custom image

  • Open a Command Prompt
  • Copy (or rename) CustomImage.wim to Install.wim
  • Run ‘reagentc /SetOSImage /Path C:\RefreshImage\Install.wim /Index 1’
  • Run ‘reagentc /info’ to check the recovery image is registered
  • In Control Panel select ‘Create a recovery drive’ and check ‘Copy the recovery partition from the PC to the recovery drive’

If you want to know more, Steven Sinofsky wrote a post on how the Windows RE system works for Reset and Refresh.

(Note: Microsoft also provides tools for OEMs to create custom recovery images, tools and menus, but they are more complex to use.)

Posted in Uncategorized, Win8, Windows | Leave a comment

Sara Soueidan on Improving SVG on the web

Following on from my last two posts, ‘Symbols for AAC using SVG and a RESTful web API’ and ‘I’m liking Microsoft again’, here’s an excellent video session from the recent Microsoft Edge Web Summit. In ‘On the Edge with SVG’, Sara Soueidan reviews the state of the SVG specification and its implementation in browsers, including Edge. Sara also gives a call to action to vote for these improvements.

Here’s the current list of SVG-related suggestions for Edge.

As always, the Mozilla Developer Network (MDN) has excellent documentation on SVG as it stands today.

Posted in Uncategorized | Leave a comment

I’m liking Microsoft again

After watching the keynotes and selected sessions from Microsoft #Build2015 over on Channel 9, I’m definitely liking Microsoft again. This new-found affection comes largely from my perspective as an HTML developer and, to some extent, as an accessibility practitioner. To be honest, this is a welcome and pleasant surprise.

Logos: OSI with the Windows logo superimposed; Microsoft Enable; the Microsoft Edge browser; Visual Studio

When I started Windows 3.0 development back in the day (with liberal help from Charles Petzold’s excellent book), I soon figured Microsoft treated developers well, even courting them with shiny tech to explore and great communications (like MSDN Magazine and, later, huge stacks of CDs).

This was the early phase of Microsoft corporate culture, when they were still very much developer-led. Admittedly, some of their products’ usability suffered from this bias, and quite rightly they changed structure. Later, I moved on from embedded development using MASM and MS Visual C for PC deployments, and eventually to MS Windows-powered financial products. As a result of tweaking the UK SKU of MS Money, I even managed to work for Microsoft on some MSN back-end code as a ‘contractor’ in Red West B, but that’s another story.

I then started to focus on the intersection of web accessibility, where Microsoft had a limited user story, and open source, where they were largely seen as the face of the proprietary corporate enemy. That view was not helped by Bill Gates’ famous open letter to hobbyists, which wound a lot of people up at the time and since. Boy, have things changed!

Even though I really enjoyed working on Linux and the excellent accessibility features of GNOME 2 and Mozilla Firefox, I must admit I kept using and developing on, and for, Windows. I created a number of small Windows-based assistive technologies, including the popular PowerTalk, which automatically narrates PowerPoint presentations as you operate them. PowerTalk uses Python to access the Office object model and drive SAPI speech synthesis. By the way, SAPI is one of many excellent technologies with powerful accessibility uses from the Microsoft Research stable. Another is Kinect.

Microsoft then entered what I see as the dark Ballmer years, and I largely moved away as a user and developer – even though my then business partner and strong open source community leader, Ross Gardler, was notably not anti-Microsoft. Actually, I eventually found out why Ross held that view when he left to join Microsoft Open Tech. At one point we did look at Windows 8 HTML hybrid development (WinJS, perhaps with Cordova), but in those early days we hit issues that I could not easily resolve from the historically excellent MSDN documentation. Mind you, I had not yet found Kraig Brockschmidt’s excellent free ebook – a “must read” for any HTML development on Windows.

I must admit that at the time I felt the move to Microsoft was going to be important for both Ross and Microsoft. After my experience of Build 2015, I feel that’s been reasonably well validated. Here’s why:

  • The clear overall impression from Build was that Microsoft have now adjusted their culture so that they embrace and engage open source communities. Those of you aware of the more popular old criticisms of Microsoft will know why I highlighted that :). In the process, Microsoft have rediscovered a friendship with developers. The dual C words of Community and Conversation seem to have almost become mantras, at least in the sessions I watched. Sure, they are doing this for business-survival reasons, but it still rocks.
  • New developer tooling, including the excellent new Visual Studio Code, shows energy in supporting popular open source tools and workflows used by those outside the Microsoft way. This is especially true in the web and HTML app space, where I saw demos of Node.js, Bower, Gulp, Cordova, Angular, Backbone and more. Plus Microsoft’s own WinJS framework works with Angular and other frameworks. And, oh, did I really see those Angular devs in a Build presentation!?
  • Edge is so obviously the new browser on the block from a standards point of view. If you recall why Mozilla Firefox was started, you’ll realise that is a most welcome result. We’ve recently seen Microsoft engage with the W3C and other groups to great effect. Even if they sometimes want to push things in different directions from others – for example ORTC rather than WebRTC – they are taking an active part, unlike some others whom I won’t mention. Also, the F12 tooling has some great innovative features, including the ability to attach to embedded webviews. This ‘joining in’ even extends to key bindings – I honestly heard a presenter say “why be different to others for no good reason?”.
  • The new developer paths to and from Android and iOS are also impressive, as is the support for hosted apps and Win32 apps in the app store. Microsoft are obviously keen to get everyone onto their Windows/Azure platforms.

The other reason I find I’m liking Microsoft again is their approach to supporting the plethora of devices, form factors and input modes we now face. Universal Apps, the flexible input platform and Continuum act together to provide the basics of a smooth cross-device and accessible experience for users.

For example, plugging a keyboard into my Android ASUS Transformer tablet just didn’t work well enough, so I gave up. Windows controls support mouse, touch, keyboard and even pen and games controllers. With Continuum you can plug a mouse, keyboard and HDMI monitor into a suitable phone and get a near-desktop experience.

The flip side of this flexibility is a boost in accessibility. The previously mentioned paths between developer platforms, including Microsoft’s own, strengthen this considerably. My strong impression is that Microsoft have the best, if not a unique, story here, and it will only get better.

Definitely not your mom’s Microsoft. I’m looking forward to watching this unfold…

Posted in a11y, web, Windows | Leave a comment

Symbols for AAC using SVG and a RESTful web API

A good few years ago I hooked up with Garry Paxton, who needed some development support for a charity website he had created to give speech and language therapists access to a new and freely available symbol set. Symbols such as the Mulberry symbol set are vitally important for people with communication difficulties – such as an inability to speak. But often, as with much in the AT world, proprietary symbols are expensive and so can be out of reach of many who would really benefit from them.

While this work was carried out a few years ago, I felt it should be documented and will hopefully inspire others to innovate.

Images: a communication device; a communication chart; a man waving hello

Open symbols

Garry’s goals for the symbols and website included:

  • Provide an alternative symbol set for older users, as available sets were largely aimed at children. This required extra ‘adult’ symbols and a more appropriate style.
  • Promote innovation in symbol availability and use by allowing symbols to be freely used, shared, modified and accessed on the web. All symbol sets at the time had proprietary licences and often required a licence per use on a single PC. This was felt to be a serious barrier to users getting free access to the critical communication aids they need. Personally, I think the majority of symbol set owners were behind the times as far as the technical possibilities were concerned, and so users were missing out.
  • Demonstrate how symbols can be accessed in modern web apps through an API. This included providing an API to access symbols, a basic test app and a prototype symbol chart maker app.
  • Provide a strong design workflow for the symbols so they have a consistent style (even though the permissive licence allows derived works).
  • Encourage community suggestions for, and review of, symbols.
  • Allow easy access to the entire symbol set or just a subset based on criteria such as topic (e.g. ‘food’).

I think most of these goals were reached by Garry and the team, despite being a little ahead of the curve technically: API usage and general SVG support were nowhere near as well developed as they are now. Perhaps most importantly, this was disruptive as far as symbol set owners and the developers who used them were concerned. We now see several web uses of symbols (e.g. hover-over words on some sites), and a few free or otherwise better-licensed symbol sets have become available. Currently the Mulberry symbols are used in many apps (native and web), though some people appear to be abusing the very permissive CC BY-SA licence. Unfortunately, Garry’s charity funding dried up and the project was closed at the point the current 3,000-odd symbols were finished. Note to anyone interested in picking this up: Garry would love to see the symbol development work resume.

To see a fairly random selection of symbols, visit the site and click “Search”. You can mouse over the symbols to get a larger preview.


I loved the symbols and Garry’s aims in terms of open accessibility, so I quickly offered to help fix the website problems. I made several suggestions, in particular using SVG as an alternative to WMF (Windows Metafile). WMF made sense for use in installed apps, as most users had Windows PC programs that only support WMF. It’s scalable but rather crude, requiring considerable hand editing of exports from Adobe Designer (used to create the symbols). In addition, due to security problems Microsoft pulled Explorer’s support for showing WMF thumbnails, reducing their general utility.

I was well aware of the advantages of SVG as a mature, scalable format and open standard. I hoped it would soon break into the mainstream – something that is only just happening now, some five years on, largely, I suspect, as a result of the need for responsive images.

Seeing as I was promoting SVG to Garry as the web-friendly format, I needed to prove it could easily be used in web apps for symbols. This proved to be basically the case, though the test app is a little more complex than expected.

Images: Mulberry symbols shown at differing sizes – a man wearing baker’s clothing with a loaf of bread; a person bathing a dog in a tub with a sponge
Fortunately, that has now changed with improved SVG support in browsers. The biggest breakthrough is support for SVG files in <img> tags – hurrah! To prove the point, you should be able to see above the various symbols in differing sizes, each using an <img> tag (and you can click on them for a larger scaled image). These were added to WordPress using the “embed from URL” option without any special effort. Now symbols can easily be displayed in web apps without awkward markup such as <object type=”image/svg+xml” data=”URL”> and feature-testing code.
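For comparison, here’s what that simplification looks like in markup (file names illustrative):

```html
<!-- Now possible: plain img markup, scalable and cacheable -->
<img src="bread.svg" width="120" height="120"
     alt="Man wearing baker’s clothing holding a loaf of bread">

<!-- Previously needed: an object element plus a raster fallback -->
<object type="image/svg+xml" data="bread.svg">
  <img src="bread.png"
       alt="Man wearing baker’s clothing holding a loaf of bread">
</object>
```

Note the alt text in both cases – essential for symbols, whose whole purpose is communication.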

As an aside, for the site itself we didn’t use SVG. Rather, we used two sizes of PNG (originally GIF) for thumbnail and preview images, plus the user can download zips of all the symbols in WMF, SVG and one size of PNG. All rather messy.


For the API design I researched best practice but found little solid wisdom available at the time. Still, I’m reasonably happy with the design, though no doubt it could be done much better now, especially as we have a good body of experience and examples (both good and bad, I must say). At the time, having just read Roy Fielding’s dissertation, I was keen to use a self-describing, discoverable REST style returning JSON with further URLs embedded. This is perhaps closer to the hypermedia controls approach than the alternative metadata formats which currently seem to be slugging it out for dominance (and, interestingly, I see Microsoft have standardised on Swagger metadata for Azure as the most popular option). For our API we only require GET, at least as the API stands, which makes life much simpler in terms of both design and implementation.
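The shape of such a self-describing response might look like the following sketch. All the field names and paths here are hypothetical illustrations of the hypermedia style, not the actual Mulberry API:

```javascript
// Illustrative hypermedia-style response: every resource embeds the URLs a
// client needs next, so clients follow links rather than constructing them.
// (Field names and paths are hypothetical – not the real Mulberry API.)
const response = {
  query: 'sweet',
  results: [
    {
      name: 'sweet',
      tags: ['food'],
      links: {
        svg: '/symbols/sweet.svg',
        detail: '/symbols/sweet'
      }
    }
  ],
  links: {
    self: '/symbols?q=sweet',
    next: null // no further page of results
  }
};

// A client helper that discovers URLs from the representation itself.
function linkFor(resource, rel) {
  return (resource.links && resource.links[rel]) || null;
}
```

The payoff is loose coupling: the server can reorganise its URL space and clients that follow embedded links keep working.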

Here’s an example request to get all symbols with name or tag containing ‘sweet’

The API’s home page provides detailed usage information, and a small interactive test application is also provided.

For the implementation I initially used Python, which was a joy to write (as always), but as our free hosting only provided CGI it was REALLY slow. I therefore reimplemented it in PHP using a rather obscure lightweight MVC framework for routing (DooPHP), which is reasonably fast. When it comes to a rewrite I’d no doubt use Node.js with hapi, or perhaps Python’s Flask. In addition, JSON Hyper-Schema looks like a good spec, with supporting tools, to use.


Despite being a little ahead of the accessibility and web curve, I think we managed a very reasonable first implementation. Web support has moved on so far and so fast that I’ve no doubt the website, API and sample apps could easily be recreated in much better shape.

We’d love for symbol users to get access to these symbols in a wide range of innovative online and web apps. We’d also like to see many more symbols added, making this a comprehensive set with excellent utility. Perhaps most importantly, we’d love to see a community grow around these symbols to ensure sustainability.

We’ve put the Mulberry symbols, the API source and website source on GitHub in the straight-street organisation. I don’t really recommend looking at the website code – it has a strange history 😉

If this resonates with you at all and you have access to funding and/or development time then please do talk to us.

Posted in a11y, Apps, Assistve Technology | Tagged , | 1 Comment


After a five-year gap I returned to the CSUN conference – and for its 30th celebration, no less. I have always enjoyed CSUN as a great place to meet accessibility and Assistive Technology people, as well as to soak in the latest developments and trends, and this year was no exception. Indeed, I had the most fantastic time catching up with old friends and making new ones. My key takeaways: cognitive accessibility is finally on the agenda, plus math on the web, and Braille has arrived at last.

I’d like to extend a huge “thank you” to Gregg Vanderheiden for getting me out to San Diego as part of the Raising the Floor team presenting the GPII. Here are my main recollections.

  • 30th CSUN celebrations. Entertaining, with several excellent professional entertainers (and some not so professional) all having a go, to celebrate and acknowledge the achievements of Harry Murphy, who launched CSUN.

Harry Murphy holding a 30 sign

  • GPII, Cloud4All & Prosperity4All – In addition to our slides we presented the Library demo of two all-in-one devices running the GPII Automatic Personalisation from Preferences. With a photo and card to represent each user, who logged in via an NFC ‘tap’, the device automatically ran the required AT and a11y options from each user’s preferences. Unfortunately not many people attended this, but I did find that most people I spoke to were aware of the GPII, if not that it is now a working prototype.
  • Project:Possibility’s SS12 finals. Having attended two years of the European SS12 (or C4C as it is now known), it was great to be back at the US event, led with amazing efficiency by Sean Goggin (even while he was preparing the CSUN conference itself). This year’s CSUN and USC teams had developed great games, and their presentations were so close I was glad not to be judging. The winning CSUN team‘s steering race game, while simple, was designed for sound-only feedback. A big thanks to our judges Mike Paciello, Peter Korn and Jennison Asuncion, who each gave great advice to the teams before announcing the winners.
  • Presentations – I attended quite a few this year, but as always the real value was in chatting to people. Highlights for me include:
  • Web stuff – as Marco Zehe said in his pre-CSUN blog post, the web is now very much a large part of the conference, though it was a shame it did not have its own track. There was a session on Web Components accessibility, and I went to the WAI Web Accessibility Education & Outreach update, followed by an extra Future of WCAG meeting called by Judy Brewer. Interestingly, Richard Schwerdtfeger said he felt ‘Personalisation’ and ‘Contextualisation’ were the hot topics to follow. Interesting because the GPII addresses both.
  • Awards – Apart from the SS12 awards mentioned above, I witnessed Mick Curran receiving the Deque Amaze award for NVDA from Preety, and the honorees of the Knowbility Community Heroes awards. These included Jared Smith for WebAIM’s excellent learning resources, Molly Holzschlag for Lifetime Achievement and Steve Faulkner for being ever “mighty”.
  • Service dogs – always soooo pet-able but working hard so you mustn’t. This year labs were not so numerous and I saw more GSDs and Collies.

Yellow lab service dog lying patiently under chairs

  • Friends – I had a lovely time chatting to many folks and then hanging out with good friends over the weekend. Happy memories of touring the USS Midway aircraft carrier (and clambering through endless hatches), watching skydiving and reminiscing about bands (and chips) from “up norf”.
  • Exhibition – not much to say really. I walked around it, much like any year. Gregg and Amerish went round talking to many AT suppliers and demoing the GPII Automatic Personalisation from Preferences to see who would be interested in working with it. It seems all were keen, and Ai Squared (of ZoomText and now Window-Eyes) are also on board. Great work. Now the Tiger Team have to get going!
  • Not so good – I left my phone at home so no pictures, and I still haven’t seen the inimitable Viking and the Lumberjack in action! Also, the Days Inn Harbour View breakfast is, with the exception of the orange juice, ultra naff. Finally, BA’s seating policies mean it’s very hard to get seats with extra legroom; I got lucky on the way out but not on the way back.
  • On the way home – Gareth Ford Williams recommended I watch Whiplash. I did, and it is excellent – recommended if you like jazz and don’t mind lots of swearing. Better yet, it’s not yet another Hollywood blockbuster (YAHB).

CSUN – I hope to be there next year.

Posted in a11y | Leave a comment

GPII contributors and Tiger Team enhance work of Cloud4All and Prosperity4All

The development of the Global Public Inclusive Infrastructure (GPII), an international accessibility infrastructure, is progressing apace. Our work in the EC FP7 Cloud4All (C4A) project has just completed its 3rd year with a successful review. The sibling project, Prosperity4All (P4A), is currently preparing for its 1st review. With this solid background, and with funding from the Universal Interface and Information Technology Access RERC (UIITA-RERC) at the Trace Center, we have started work on a couple of cross-project activities, namely Contributor “on-boarding” and the Tiger Team.

The research activity taking place in the Cloud4All and Prosperity4All EC FP7 projects is the critical seed for the wider game plan of making the GPII a solid reality. The vision is a vibrant ecosystem of accessibility users and developers working to co-create resources, tools and services that are effective in getting the best possible solutions into the hands of users.

In order for this to work in a sustainable way we need a healthy community of people who are excited, engaged and contributing to the effort. Some folks may provide funding, others their time, paid or not. We need this to happen in all areas of GPII activity, including those we are currently working on as funded research. For example, the Automatic Personalisation from Preferences being developed in Cloud4All will only be successful when the majority of solutions work with it so users can select and use them. Likewise, the Developer Space in Prosperity4All will be successful when developers regard it as the place to go to make their solutions fully accessible.

So we have two activities within Raising the Floor that are designed to play a part in encouraging an active community.

Contributors
Since Gregg Vanderheiden first started telling people about the GPII, people have become excited about it and have offered their help or asked how their solutions can work with it. In fact, we have already attracted over 130 people who have expressed such an interest. However, we have not been in a position to help them easily get on board and contribute their skills in any organised fashion.

So we are starting to create resources that ease the “on-boarding” process. These are currently on the wiki Contributor’s Emporium page but will move to a website based on the C4A site, with some info on the P4A website. We are also working with the P4A dissemination team to send a newsletter in order to collect contributor interests and provide project news updates. We also have a new Contributors mailing list (a Google group) for discussion.

Naturally we welcome contributions and ideas from everyone involved in the current projects. In particular, we request that during our work we all:

  1. Find ways to incorporate contributions from others into your work
  2. Provide clear documentation suitable for people outside your workgroup
  3. Mark selected issues as ‘Suitable for Contributors’

Tiger Team

The Tiger Team is a small group of Raising the Floor members tasked with shepherding the GPII from research outputs to production infrastructure. Initially we are concentrating on increasing the number of solutions that work with the GPII architecture for Automatic Personalisation from Preferences (from C4A). This architecture has been carefully designed to ease the process of making solutions GPII-enabled; indeed, in many cases the effort required is simply to provide configuration information to the architecture. Still, the work needs to be done, ideally with the solution developer’s support, so the Tiger Team is leading it.

This work, funded by UIITA-RERC, is also one example of where contributors can play a part, and we naturally welcome input from all current project members. For example, you might suggest solutions to work on, provide documentation, or otherwise help the growth of the GPII. We hold an open team meeting every week.

Join us

If any of this interests you then do please get in contact. We look forward to working with you on this.


Posted in Uncategorized | Leave a comment

Cloud4All – providing automatic personalisation of access technologies

The Cloud4All project has just undertaken its important penultimate review with the EC, something of a milestone for any FP7 project. This is a good time to take stock and see where the project is. That is especially true for Cloud4All, which is not an end in itself; rather, it is developing the core infrastructure for the Automatic Personalisation from Preferences feature of the international GPII initiative.

After 3 years of research and development I’m pleased to say we have a working system that not only clearly demonstrates the user experience of using a device with a number of access technologies configured for the best possible experience, but also provides a flexible base for hardening into a widely deployable infrastructure.

In order for the GPII to be a success it must be easy for developers of access technology to get their (or others’) solutions working with the GPII. For example, as part of the Cloud4All project we have already enabled a wide range of tools on several platforms:

  • Android
    • Audio, Accessibility, UI settings
    • Freespeech
    • TalkBack
    • eCTouch
  • Linux
    • magnifier, various UI settings, keyboard settings, volume
    • Orca
    • Web Anywhere
  • Windows
    • NVDA
    • Jaws
    • Read Write Gold
    • built-in magnifier, OSK, high contrast, mouse trailing, cursors
    • Web Anywhere
    • Sociable
  • Web
    • Chrome browser (via a plugin)
    • JME Themes
    • SmartHouses

We are now getting ready to help 3rd-party developers and volunteers adapt their accessibility solutions by providing the information developers will need. We’ve also set up a team to work on adding solutions (operating under the name of the Tiger Team). In addition, the related GPII project, Prosperity4All, will provide more new solutions that work with the GPII.

As an example of why developers will want to integrate solutions, and how easy it is, I’ll relate my experience of getting Maavis to work with the GPII.

Maavis is an installed Windows application that provides a full-screen, ultra-easy-to-use experience for people with dementia in a care environment. It provides access to media and communications. It is not end-user ready; rather, it is both a prototype and a framework requiring configuration.

The benefits I perceived from having Maavis working with the GPII include:

  • Making Maavis more easily available to people who will benefit from it
  • Providing a mechanism to identify users who cannot get on with login screens (user listeners such as NFC)
  • Providing an alternative, automated mechanism for changing user configuration
  • Helping improve accessibility users’ general experience of technology through automatic personalisation

Plus, although it does not apply so much to Maavis, the GPII will also make it easy for users to discover the solutions that work best for them in conjunction with other tools, plus the best configurations of all those tools.

The developer experience of getting a solution working with the GPII is actually quite easy, due to careful design and a preference for declarative syntax. As long as your application can be started and stopped, and provides a way to programmatically change its settings, it is straightforward. Your main work is then to provide details of how to invoke these operations.

Here’s the example for Maavis, which stores its settings in a JSON file and so can use the GPII’s JSON settings converter. The GPII saves the user preferences for Maavis in a solution-specific way, but there are standard terms, and the developer can provide information about how to map between the standard terms and their own settings.
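As an illustration of what such a declarative mapping can achieve, here is a small Python sketch. The term names, setting paths and structure are invented for this example; they do not reflect the actual GPII registry vocabulary or the real Maavis settings file.

```python
import json

# Hypothetical mapping from common preference terms to dotted paths
# in an app's settings file (invented names, for illustration only).
TERM_MAP = {
    "common/speechRate": "speech.rate",
    "common/highContrast": "display.highContrast",
}

def apply_preferences(prefs, settings):
    """Copy mapped preference values into a nested settings dict."""
    for term, value in prefs.items():
        path = TERM_MAP.get(term)
        if path is None:
            continue  # unmapped terms are simply ignored
        node = settings
        keys = path.split(".")
        for key in keys[:-1]:
            node = node.setdefault(key, {})  # create nesting as needed
        node[keys[-1]] = value
    return settings

prefs = {"common/speechRate": 180, "common/highContrast": True}
print(json.dumps(apply_preferences(prefs, {}), indent=2))
```

In the real architecture this kind of transformation is driven by declarative configuration rather than hand-written code, which is precisely what keeps the integration effort for developers so low.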

If you are interested in getting your solution working with the GPII then take a look at this developer information (currently on the wiki, but it will move to our main developer website). You can also drop me a line at: stevelee [at] raisingthefloor [dot] org

Posted in a11y, Assistive Technology | Tagged | 1 Comment

Flashing Firefox OS onto a Flame with Windows

Since being involved in the Mozilla Tablet Contribution Program I’ve often seen community members ask about flashing and building on Windows. This seems to be something of a FAQ for both the ‘flatfish’ tablet and the ‘flame’ reference phone, and I suspect for other devices too. The Firefox OS development community concentrates on using Linux and Mac OS X (another OS with UNIX roots, being based on BSD). Thus support for Windows has been a low priority; why bother when Linux is freely available for any device and works so well?

It turns out the demand for, and suitability of, a Windows host is quite different for flashing than for building Firefox OS. It makes every sense for flashing, but not so much for building. In this post I’ll explore flashing and introduce shallow_flash.bat for the Flame; I’ll leave building Firefox OS on Windows to a later post.


Flashing a phone is something any community member with a phone might want to do, and significant numbers of our wonderful community are not particularly technical and/or have only a Windows machine to hand. While automatic “Over The Air” (OTA) updates are often available directly to the device, either from the device vendor or Mozilla, they may not meet the user’s requirements. Some vendors do provide flashable updates, but often at their own cadence, leaving users a long way behind the latest versions.

There are currently 3 channels of Mozilla-supplied Firefox OS updates, with varying levels of stability. Release is the most stable, and vendors will usually supply and support this. Next is the nightly Aurora (Beta) channel, which is fairly stable and just right for early adopters. Finally, nightly Master is the latest developer build and likely to be buggy or broken (Mozilla devs check in directly to the master branch and don’t use a GitFlow-type workflow). We really want people to be testing and improving the latest, less stable channels if they are willing. However, the only way to switch between these channels, say from the supplied Release to Aurora, is to flash your device.

A flashing wrinkle is that some devices, like the flatfish, currently use a full flash which updates the entire software stack of Gonk, Gecko and Gaia. Other devices, like the flame, take the approach of flashing a base full-stack image and then partially flashing updates to Gecko and Gaia. At some point another full flash will be required, usually when a new Gonk lands.

Part of the reason for this is that some of the vendor-supplied, hardware-specific components in Gonk cannot be freely redistributed in isolation by Mozilla due to licensing. Ideally, for Firefox OS to be fully open, no such restrictions would apply, but the reality is different. For now these proprietary binary blobs are supplied as vendor images that are either combined with Mozilla-generated code in a single flash, or provided as a base version for use with subsequent, so-called shallow flashing of Gecko and Gaia.

Practically, however, flashing is almost entirely a matter of running a program to talk to the device and copying files across. This can be done manually but is usually done with a script that combines all the steps and components. In fact, two programs from Android are used: ADB and Fastboot. They are similar but have different features, and each requires the device to be in a distinct state before it can communicate. Windows versions of both are available, but Windows also often requires device-specific USB drivers, though the standard Android ones often work.
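To make the two-state dance concrete, here is a sketch of the kind of command sequence such a script runs, expressed as Python argv lists. The file names and push destinations are placeholders, not the real shallow_flash steps (the real script unpacks the Gecko and Gaia archives before pushing their contents).

```python
def flash_plan(base_img, gecko_dir, gaia_dir):
    """Illustrative flashing sequence as argv lists. ADB commands need a
    booted device; fastboot commands need the bootloader. Paths and push
    destinations below are placeholders for this sketch."""
    return [
        ["adb", "reboot", "bootloader"],            # hand over to fastboot
        ["fastboot", "flash", "system", base_img],  # full base image flash
        ["fastboot", "reboot"],                     # back to a booted OS
        ["adb", "wait-for-device"],                 # wait until ADB sees it
        ["adb", "remount"],                         # make /system writable
        ["adb", "push", gecko_dir, "/system/b2g"],  # shallow flash: Gecko
        ["adb", "push", gaia_dir, "/data/local"],   # shallow flash: Gaia
        ["adb", "reboot"],                          # restart with new bits
    ]
```

Each command could then be executed in order with `subprocess.run(cmd, check=True)`, which is essentially all the wrapper scripts do, plus unpacking and error checking.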

Flashing on Windows is not only desirable but now also achievable. For the TCP flatfish we’ve been providing Windows scripts to do the flash, and the base versions for the flame also have usable Windows scripts. However, for devices like the Flame, the ‘shallow_flash‘ script provided by the Mozilla Taiwan QA group is written for use on Linux. Now, after a couple of minor tweaks, it also works on Cygwin, a popular Linux-like environment for Windows.

Cygwin is fine for anyone with experience of the Linux command line. However, those less technical are likely to be uncomfortable with its idiosyncratic installer and quite confused by the Linux-style command line, especially as file paths are different from Windows.
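The path difference is mechanical: Cygwin maps a drive-letter path like `C:\Users\me` to `/cygdrive/c/Users/me`, which is what its `cygpath -u` tool prints. A simplified sketch of that conversion (ignoring UNC and relative paths, which the real tool handles):

```python
def to_cygwin_path(win_path):
    """Convert a plain drive-letter Windows path (e.g. 'C:\\gaia.zip')
    to the /cygdrive form Cygwin uses. Simplified sketch: no UNC or
    relative paths -- the real `cygpath -u` handles far more cases."""
    drive, sep, rest = win_path[0], win_path[1], win_path[2:]
    if sep != ":":
        raise ValueError("expected a drive-letter path like C:\\...")
    return "/cygdrive/" + drive.lower() + rest.replace("\\", "/")
```

This is exactly the sort of translation that trips up newcomers when a script asks for the location of the Gecko and Gaia archives.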

I wanted to make flashing much easier on Windows, especially after helping someone update their Flame to 2.1. So now a Windows script, ‘shallow_flash.bat‘, hides much of the complexity of installing and running Cygwin. All that is required is to install Cygwin, copy the required Gecko and Gaia archive files, and double-click the script to run it.

I plan to update the script to make the Cygwin installation a little easier too.

Posted in development | Tagged | Leave a comment