Busy day at the W3C
Wow. It’s a bumper bundle of news at the W3C - no fewer than four new or updated CSS3 Candidate Recommendations were published yesterday (thanks to Tantek for the heads-up). Now, I appreciate that not everyone who has an interest in web accessibility has the same level of interest in W3C documents. However, as I always do when I post something that’s a little technical, I must make my apologies! For anyone still reading, here is the news:
- CSS3 Text Module
- This must have taken a lot of work. The new features here are all about supporting different written languages, beefing up CSS support for the layout and presentation of, say, Arabic, Chinese or Japanese text (there’s a small sample after this list). Read this and you will proudly be able to say “I know what the ‘text-kashida-space’ property does …”
- CSS3 Color Module
- Oooh, there’s an opacity property. This is good! The color properties of previous CSS Recommendations are merged with features found in SVG 1.0 to give us this new recommendation.
- CSS3 Ruby Module
- If you thought the Text Module for CSS3 was quite something, Ruby will knock your socks off. The basic notion is to improve support for annotated text - ruby is the short run of text (a pronunciation guide or translation, for example) that runs alongside the base text, either above, below or to the side, depending on the language orientation.
- CSS TV Profile 1.0
- Not a new recommendation, but updated and covering how TV devices should render CSS (one day … one day)
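For the curious, here is a rough sketch of what a few of these new properties look like in a style sheet. The property names come straight from the modules above, but the exact values are my own reading of the drafts, so treat this as illustrative rather than definitive:

```css
/* CSS3 Text: justify Arabic text using kashida elongation rather than
   word spacing alone (value syntax as I read it in the draft) */
p[lang="ar"] {
  text-align: justify;
  text-justify: kashida;
  text-kashida-space: 100%;
}

/* CSS3 Color: the new opacity property, from 0.0 (fully transparent)
   to 1.0 (fully opaque) */
img.watermark {
  opacity: 0.5;
}

/* CSS3 Ruby: place the ruby annotation before the base text it
   annotates - i.e. above it, in horizontal scripts */
ruby {
  ruby-position: before;
}
```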
Also published are two new working drafts: the CSS3 Generated and Replaced Content Module, which covers such things as “how to insert and move content around a document, in order to create footnotes, endnotes, section notes”, and the CSS3 Speech Module, which is very interesting in terms of what it could offer for improved accessibility.
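Generated content is not entirely new, of course - CSS2 already lets you insert content with :before and :after, and the new module builds on that foundation. Here is a quick sketch of the existing mechanism, using CSS2 counters to number footnote references (the class name is just mine, for illustration):

```css
/* Start a fresh footnote counter for each document */
body {
  counter-reset: footnote;
}

/* Number each footnote reference automatically: [1], [2], ... */
.fnref:after {
  counter-increment: footnote;
  content: "[" counter(footnote) "]";
}
```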
The Speech Module has a number of features that could potentially be very useful - such as ‘voice-volume’ (I can already imagine a CSS class of legaldisclaimers that would use this!), ‘pause-before’ and, possibly most useful of all, ‘interpret-as’. This property aims to provide instruction about what some particular text should be interpreted as (surprise surprise) from the following list, and there’s a short sketch of all three properties after it:
- date
- time
- currency
- measure
- telephone
- address
- name
- net (URL or e-mail address)
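Putting those three together, a style sheet for a speaking browser might look something like this. Again, the property names are from the working draft; the values and class names are my own guesses at sensible usage:

```css
/* Read legal boilerplate quietly - the draft's 'voice-volume' property */
.legaldisclaimers {
  voice-volume: soft;
}

/* Insert a one-second pause before each heading is spoken */
h2 {
  pause-before: 1s;
}

/* Tell the synthesizer this span is a telephone number, not arithmetic */
.phone {
  interpret-as: telephone;
}
```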
The idea that a speech synthesizer could understand from CSS that a telephone number was just that would make a big difference - actually, all of them would help remove ambiguity. My only concern is that, unlike the screen-based CSS recommendations, the audio side of things has been woefully neglected to date, and these new features could take years to be implemented, which is a shame. That’s not to say that we should disregard the new working draft on the basis that it will never happen. A lot of work goes into these recommendations, and they are written for very good reasons - if only the people making the software and hardware that could tap into them would give them a go.