Show/hide div… screen reader not reading divs when I show them


what is the standard method for getting JAWS & Window-Eyes to read a div after I pop it?

(standard stuff: the div is hidden when the page loads, and it pops when the user clicks a link… I just tested on JAWS & Window-Eyes… Window-Eyes reads it only when you mouse over it, and it reads only the paragraph you’re on; if you want the next paragraph read you have to mouse over the next paragraph, which obviously makes no sense for blind users. JAWS doesn’t read the popped div at all…)

is there a specific method I should be using to show/hide the divs so the readers can read them?

thank you…

We would be guessing wildly without seeing HOW you are hiding and showing them, though I suspect you are using the ‘display:none’ approach, the leading cause of that. It’s why absolute positioning with a massive negative “left” value to hide things off screen came into vogue, and why I’ve started using overflow:hidden to chop off divs.
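For reference, a quick sketch of the three hiding techniques being talked about (class names invented for illustration):

```css
/* 1. display:none – removed from layout AND, as the rest of this
      thread covers, generally skipped by screen readers. */
.hidden-a { display: none; }

/* 2. Massive negative left – invisible on screen, but still present
      as far as assistive technology is concerned. */
.hidden-b { position: absolute; left: -999em; }

/* 3. overflow:hidden with zero height – clips the content visually
      without removing it from the flow. */
.hidden-c { height: 0; overflow: hidden; }
```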

Is it a pure CSS hover? Are you using scripting? Is this a content DIV or a menu flyout? The rules/approaches are all different for each.

Of course if you’re talking content that you’re hiding for some form of tabs or similar, that’s falling into the category of “how not to design a website” alongside framesets, ajax loading static content to pretend you have framesets, etc, etc…

But as I said, no code/page – we’re guessing wildly in the dark.

thank you for your response…

yes, I did use the “display:none/block” approach, but: the reader doesn’t detect when JS applies the “block” style? (then it wouldn’t detect when you change the left position from -9999px to 120px either, right?)

at any rate, I have an example here,

with two modals, one done with display:none/block, other one done with positioning the modal on and off the viewport…

I find the behavior a bit peculiar: in the first one (done with display:block/none) JAWS reads the div fine once it’s opened… it’s the other one that JAWS doesn’t read…
Window-Eyes is very weird… when you open the modal it doesn’t start reading it until you mouse over it… (it sometimes reads it while it’s still closed…) also, it reads only one paragraph… to get to the next paragraph I have to mouse over it… this seems like strange behavior for a reader that is supposed to be used by non-sighted users…

would appreciate comments and suggestions about the best approach here… I know this JS-dynamic stuff is not the best for accessibility, but you know how it is: this is what the client wants and we have to try to make it work… we’re always struggling with this stuff at work and would like to finally find a working, practical method…

thank you…

PS: since I’m testing for the reader, I had to use “real” copy instead of greeking… I’m afraid I used copyrighted content, but I will take it down once this is solved / the thread is exhausted… thx

Screen readers as a rule ignore positioning – but obey display and/or visibility… and ignore javascript. It’s why loading content with javascript is considered a steaming pile of /FAIL/ for accessibility reasons.

Think about it, there is no ‘screen’ for it to realize the text has been positioned where you can’t see it, but display:none means ‘don’t show this’ – which they can understand. If there wasn’t a difference between display:none and left:-999em, the latter technique wouldn’t have become popular in the first place!

If you really want to do that, you should NOT set them hidden from the static CSS during load; instead, run a little script right before </body> (so it runs before onload and before CSS is applied for reflow) to set your hidden states. That way scripting-off/blocked users aren’t totally shtupped by being unable to get at your content. You didn’t build it with progressive enhancement, so of course it doesn’t degrade gracefully for search engines, screen readers, handhelds, script-blocking users (several million NOSCRIPT-plugin-for-FF users can’t be wrong), and a dozen other unforeseen potential users.
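A minimal sketch of that approach (the id and class name are invented for illustration): the content is visible in the static markup/CSS, and a script just before </body> applies the hidden state, so users without scripting always get the content:

```html
<div id="panel">
  <p>Content that will be toggled by script.</p>
</div>

<!-- last thing before </body>, so it runs before onload -->
<script>
  // Only script-capable browsers ever hide the panel;
  // everyone else sees it in the normal document flow.
  document.getElementById('panel').className = 'hidden';
</script>
</body>
```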

Though this is EXACTLY what I meant by using script to pretend you have framesets – it’s an accessibility train wreck and stuff that doesn’t belong on a website in the first place – given how it’s broken when scripting is disabled. Another of the ‘not viable for web deployment’ concepts that seems to originate from this oddball paranoia about “pageloads are evil”… If you have enough content to be pulling that type of stunt you probably have enough content that a page-load isn’t going to matter. If it’s not enough content, do the world a favor and just let people scroll up and down.

Oh, and what’s with the H4? It makes no sense, as it doesn’t look like the start of a new subsection – is there some new book or tutorial in circulation promoting that? It’s like the tenth time in as many days I’ve seen that nonsense. (Unless of course you’re actually going to have multiple subsections under each H3 that we’re just not seeing in the placeholder content.)

Bah, the majority of screen reader users have Javascript enabled for the same reason everyone else does: it’s on by default in the browser. And just because you learned to use a screen reader doesn’t mean you know jack about browsers, or what Javascript is, or what HTML/CSS is. This can be Grandma territory.

but: the reader doesn’t detect when JS applies the “block” style? (then it wouldn’t detect when you change the left position from -9999px to 120px either, right?)

Do you have an old version of JAWS and Window-Eyes? They load a page and build a virtual buffer, which is what the user actually interacts with. This is how those readers offer users all sorts of neat ways of navigating, etc. This buffer doesn’t refresh automatically (except in newer versions, and even then not always).
I’m guessing your buffer isn’t refreshing.

For this reason, as Crusty mentioned, you don’t use display: none to hide things. Pull them offscreen with a negative margin or position instead. Display: none and visibility: hidden basically mean “do not copy this stuff to the virtual buffer” (form controls are often an exception, and there are also bugs). Stuff pulled offscreen IS copied to the virtual buffer. This is why it “appears” in a screen reader: it was always there. There are sources that will offer you text you can follow without greeking : ) Though ideally you’d already have real content first.

Hmmm, well. I don’t know why Window-Eyes is being the more problematic of the two, because I don’t test with it, and I don’t have JAWS where I’m sitting now.

But my thinking is that you might need to opt for keeping the content permanently in the DOM and permanently accessible to screen-readers. You can still move it on-screen programmatically when needed. This will, of course, make for content that is announced before any sighted users (be they assisting a blind user or using AT themselves because of limited vision) see it on-screen, but that happens quite a lot and is not normally anything to be too concerned about, I think. Particularly if the alternative risks a failure. It might be better anyway in this specific case. Getting screen-readers to spot changes of content isn’t quite as easy or as reliable as it should be, although you are not doing them any favours if you don’t use ARIA attributes and change their state accordingly. That will help the more recent versions at least.
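A rough sketch of that idea (the ids and the specific ARIA attributes here are my assumptions, not anything from the original page): the content stays in the DOM permanently, is moved on-screen programmatically, and the ARIA state is updated alongside it:

```html
<a href="#details" id="trigger" aria-expanded="false">Show details</a>
<div id="details" class="offscreen" aria-hidden="true">
  <p>The extra content…</p>
</div>
<script>
  document.getElementById('trigger').onclick = function () {
    var panel = document.getElementById('details');
    var open = this.getAttribute('aria-expanded') === 'true';
    this.setAttribute('aria-expanded', open ? 'false' : 'true');
    panel.setAttribute('aria-hidden', open ? 'true' : 'false');
    // Toggle off-screen positioning rather than display:none,
    // so the content is always in the virtual buffer.
    panel.className = open ? 'offscreen' : '';
    return false; // stay on the page when JS handled the click
  };
</script>
```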

Have you tried any other screen-readers, by the way? I’d be interested to know what results you get.

Yeah especially if you’re doing a JS-type tab system, ARIA can really help make those tabs make sense and you can place focus in the appropriate place.

My only concern with this is, there is a population of sighted (but impaired) screen reader users who use the reader along with maybe screen magnification, mouse-following, and mouse/cursor highlighting. For those people, things get a bit confusing…

so having the text in question appear visually when activated is still a good idea. But I too try to leave everything in the DOM and available and use CSS (and/or JS) to do the hide-show of stuff.

The problem then is if you want to, say, have Javascript hide normally-visible things (so only users with JS get the hiding action), you get a FOUC (flash of unstyled content) where things appear pre-JS-load, and that can be very confusing, especially on a slow connection. I still don’t know all the JS tricks to prevent those FOUCs…

You can NEVER eliminate it completely at first-load of a site, but you can often minimize the damage by running your ‘startup’ part of your script manually right before </body> instead of waiting for ONLOAD. That way any class or style changes are applied before the initial page-flow is finished, so when the first CSS reflow occurs your added classes and manually added properties are already applied and in place – as opposed to adding the delay of loading all images and stylesheets BEFORE having the script change things.

By the time the browser is down to </body>, almost all of your markup has been added to the DOM, letting you target it… long before the CSS is even applied. Hitting it then doesn’t eliminate FOUC on first load – but it minimizes how long it’s visible, and once your scripting and CSS are cached, on any subsequent page-loads you generally won’t even see it!

DS’ approach is very good. With that approach, unless the connection is crazy slow, it will almost eliminate it (if anything it would just be a brief flicker).

Another way is simply with your design. If you can manage to get all of the stuff that would change below the fold, 99% of the time nobody would notice even if it is occurring. This lends itself to things like tabbed layouts, where your first “tab” would extend below the fold. All of the others could just be divs that are visible, but since they are below the fold, most people won’t scroll down and spot them. Your Javascript runs, stacking them up or whatever, and nobody is the wiser.

Of course this approach only works with certain design elements.

still struggling w/this issue…

actually what we want is for the reader to read it only AFTER the user pops it (by which time it’s visible…)
also: don’t laugh, but we don’t support noscript at all where I work (it’s what the client wants), I know this sucks big-time, but it’s beyond my control…
so: the solution you propose of showing it onload and hiding it with JavaScript, still applies?

Do you mean the tag or the plugin?

Unless the client can guarantee that all visitors have JS enabled or installed on the device, the everything-there-first approach is better. Having CSS hide stuff by default and needing Javascript to show it is the problem.

Or build it the way the client wants and offer a link to a text version. This is kind of the garbage way of doing it, but deadlines exist, clients can demand things, and sometimes that’s how it is. Text versions are not a good solution, but they’re better than nothing and cost you almost zero time. You just link to a copy of the page that doesn’t link to any CSS that hides stuff.

Though another solution could be, have the things the user clicks to create the popup take the user to another page with just that content. Meaning the divs/content could be hidden onload by CSS, because users still get access whether there’s JS or not.

In this case, you have CSS setting your content to display: none (if really no users should be seeing/reading this content at first – but if it’s just a screen-clutter thing I’d stick to the off-screen positioning technique instead), and what the user clicks is an anchor with an href. This gives you

  • something natively clickable without extra coding
  • naturally leads to the content (the disadvantage with static pages is, now you have more pages and that could mean more maintenance if you’re also styling these to look pretty)

Javascript, if available, does the showing of the content instead, and makes the anchor that got the click event return false, so the user stays on the page.
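Sketched out, with a hypothetical details.html standing in for the fallback page:

```html
<!-- Without JS, the link simply loads a real page with the content. -->
<a href="details.html" id="more">Read more</a>

<div id="extra" class="hidden">
  <p>The content CSS hid with display:none…</p>
</div>

<script>
  document.getElementById('more').onclick = function () {
    // JS is available: reveal the content in place…
    document.getElementById('extra').className = '';
    return false; // …and cancel the navigation to details.html
  };
</script>
```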

An interesting way to do this actually is having static pages as a backup for older browsers, and using a combination of separate content chunks and Ajax and the new History API.
Mark Pilgrim made a little demo on his Dive Into HTML5:
I thought his demo showing how everything works was pretty cool, but since he took his stuff down from Teh Interwebs I’d have to see if there’s a mirror showing it. The demo was called casey.html

What I thought was interesting, and maybe brittle, about Mark’s setup was that he had a very particular directory structure for the pages… but it might just be the way his demo worked, or me missing something. It might not matter much.

Anyway, the History API is there so the browser’s back and forward buttons can move the user between the chunks of content that Javascript added. Since the fallback you build is still a set of static resources linked together, it generally works no matter what. I remember my FF 3.6 having a little bit of an issue with the fallback, but again, that might’ve been the demo.
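A bare-bones sketch of the Hijax + History API idea (URLs, ids, and function names all invented; XMLHttpRequest fits the era of this thread, though modern code would use fetch):

```html
<script>
  // Load a content chunk into the page without a full page-load.
  function loadChunk(url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById('content').innerHTML = xhr.responseText;
      }
    };
    xhr.send(null);
  }

  // When a chunk link is clicked: fetch it and record a history
  // entry, so the URL still reflects where the user is.
  function hijax(link) {
    link.onclick = function () {
      loadChunk(this.href);
      window.history.pushState({ url: this.href }, '', this.href);
      return false; // the plain href remains the no-JS fallback
    };
  }

  // Back/forward buttons: restore the chunk the entry points at.
  window.onpopstate = function (e) {
    if (e.state) loadChunk(e.state.url);
  };
</script>
```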

*Edit: thought of another thing, though it doesn’t work in IE8 and below. The :target pseudo-class is a cute way to show hidden things with CSS and let Javascript just step in for older browsers that don’t understand it.
The problem with :target is it requires the user to explicitly click it closed (by clicking on another link – and it should then be another in-page link if you don’t want a screen refresh or the user brought back to the top of the page), and it muddies URLs by leaving # entries in the history, which isn’t helpful if the user has been clicking around a whole lot.

I’ve been playing with it on a page of mine to show and hide a submenu on click, and it’s fun. Dunno if it can be applied to what you’re working on.
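For anyone curious, a minimal version of the :target trick (ids invented): the submenu sits off-screen until its fragment is targeted, and any other in-page link “closes” it again by changing the fragment:

```html
<style>
  /* off-screen, not display:none, so screen readers still get it */
  #submenu { position: absolute; left: -999em; }
  #submenu:target { position: static; left: auto; }
</style>

<a href="#submenu">Open the submenu</a>

<div id="submenu">
  <!-- clicking any other in-page link un-targets the submenu -->
  <a href="#closed">Close</a>
  <a href="page2.html">A menu item</a>
</div>
```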

About entries in history: I always prefer to have too many than too few. 99% of JS developers seem to think that no part of what they do ever has to go into the history, which constantly catches me out, even on websites that I use every day. Maybe that’s me being dim, or maybe it’s a perfectly normal automatic response. Either way, I often unthinkingly hit the back button under my thumb and find myself taken back to the previous page instead of closing an article in my feed reader or the favourites in BBC iPlayer.


Regarding the :target pseudo-class: although I am generally quite anti-JS when I have my web user’s hat on, I’m not keen on relying on things that don’t work as effectively/reliably as carefully coded and gracefully degrading JS. NetTuts had a tutorial using :target recently, and it just didn’t work as well, for anyone really, as the JS version.

That’s… not how screen readers are even supposed to work… As Stomme Poes just implied, relying on javascript when talking about an accessibility aid is… well… I lack the words in polite company. TARFU, SNAFU, FUBAR…

It sounds like one of those ‘unrealistic expectations’ in ‘design’… which again is why I suggested NOT doing that and just making a bunch of separate pages… you’re basically trying to use scripting to pretend you have framesets – NEITHER approach has ANYTHING to do with accessibility or designing for those types of aids.

If you REALLY have your heart set on that for desktop-resolution screen users (the ONLY target for which it makes ANY sense), apply it with the javascript, including the links, and leave it unrolled/everything showing for everyone else… that, or actually break it into separate pages. Playing silly games of hide-and-seek with scripting is just pissing on the very notion of being accessible.

@ Crusty: I’m assuming Maya is talking about the majority of users (who wouldn’t be using a screen reader). But again, there are sighted people using screen readers and it is indeed unsettling and confusing to hear content you aren’t seeing (screen mag users are probably pretty used to this, but the usual setting is a read-by-line or read-by-mouse anyway). I figured the comment meant the stuff showing up first even for a moment is a problem.

We don’t know why Maya’s client wants this stuff hidden and only appearing when the user takes action.

It might just be screen clutter. That is, the client thinks a cleaner-looking interface and not too much content is beneficial. Sometimes it is. Same reason we use dropdown menus: put a lot of stuff on one page but not show it all at once and overwhelm the user. For these cases, I like the content being there and either only hidden with JS or with CSS off-screen positioning (instead of display: none etc).

It might be a resources thing. For a company who only wants to send data, and as little as possible, and only when the user requests it, then they’re probably using Ajax to update bits of the page instead of whole pages… and the user is initiating this with a click.
For this, I like (if reasonable/possible) a basic form setup available where users who don’t have JS or don’t get server pushes have to activate a typical form control (submit) and they get a new page with the new content. This fixes the how-to-update-the-virtual-buffer problem in some screen readers, works for all devices, blah blah. Layer some Hijax over that (hide the form with JS and then allow the ajaxy stuff to run) and the majority of users who do have JS don’t notice anything. Offering a static button (the old submit) as a just-in-case is nice.

@ Crusty: more and more sites are doing this combination of hide-show or show-as-you-go. Maya’s client has been seeing this on all the popular sites for a long time now. LinkedIn, news sites, anyone who has imitated any aspect of Spacebook… all do it. Usually with JS. Users have come to expect this behaviour by now; even the non-tech-savvy users out there are generally familiar with spacebook/twitter/etc. The separate static pages thing is something I’d only do if it were feasible for me as a developer to get done in time; otherwise I’d be looking at other options.

The only reason we mentioned this is that if you hide by default with CSS, then the user MUST have Javascript to see the content. If you do it the other way around, the content is always available, and most people have JS, so most people will get the experience the client wants. The client doesn’t browse with JS off, right? The client will see what s/he wants. People with JS off will possibly not even notice that the content was meant to be hidden until they clicked on something. That’s why I like this method.

Screen reader users would normally be okay with either method, though, so long as the content isn’t hidden with CSS display: none or visibility: hidden. All Javascript is doing in either case is changing the classes on the elements anyway. So this way you also avoid the problem of virtual buffers not refreshing: they don’t need to, since they always have access to the content. Screen reader users don’t (generally) care about screen clutter or how sleek it looks.

… and they’re all accessibility rubbish, and exactly WHY frames were supposed to have gone the way of the dodo. It’s like TARGET, which people script around not using instead of going “why aren’t we supposed to use it” or more importantly, understanding it.

Though I’ve not encountered that on any news sites – maybe we visit different ones. Honestly, if a major news site I visited pulled that ****, well, it wouldn’t be one I’d visit.