- Wednesday 12 December 2007
- 6pm onwards
- Bar Broadway
- FREE till the bar tab runs out
- FREE finger food provided
- You must register to attend
For any Brisbanites reading this... BarCamp is heading to Brisvegas on the 24th - yep, vote in the morning and barcamp the rest of the day! BarCamp wiki / BarCampBrisbane. Head on over and RSVP for BarCampBrisbane1.
There are still some sponsorship slots available too, if you run a web business in Brisbane please consider investing a few sponsorship dollars. Add your contact details under the sponsor section if you're interested.
[Just to clarify, I'm not an unorganiser and sadly won't be there myself. All I can do is recommend that you go ;)]
Two quick bits of self-promotion...
- I'll be speaking at Open Standards 2007, which is on here in Sydney on November 15 & 16. Yep, that's soon!
- This blog was featured in Virtual Hosting Blog's Top 100 User-Centered Blogs. I'm flattered to say the least, considering the excellent blogs on the list.
At WDS07 people were going nuts over the iPod Touch. Legend has it that the infamous Perth gang roamed the wilds of Sydney, buying every Touch in their path! While those reports might be slightly exaggerated, the buzz was certainly high since we don't officially have the iPhone here in Australia - so the Touch is the next best thing.
I was briefly caught up in the moment and spent a heady 45 seconds or so planning to buy one myself. Then I heard about the disappointing capacity, particularly against the price. Since my current iPod holds my entire music collection, sixteen gigs just isn't going to cut it for me! ...and while the interface is cool, that alone doesn't justify the price.
but add a keyboard...
Still, after playing with one for a minute I realised what would be a killer app for me: an iPod Touch and a portable keyboard. The sort of fold-out arrangement you can get for Palm Pilots would be ideal:
why a keyboard?
Why? Because an iPod with a portable keyboard would be the ultimate conference and meeting note-taker for me. I don't need to do fully-fledged web dev at a conference, I just need to take notes. I generally use Dreamweaver since that means I don't have to reformat my notes later if I want to blog them; but any text editor will do.
So a laptop is overkill and just weighs me down on the way to the pub. An iPod Touch and keyboard would weigh bugger all compared with a laptop. The keyboard would need its own power, so it could easily contain a kickarse auxiliary battery and still be relatively light. The touch screen also means you don't need a mouse.
Of course I could just go get a Palm Pilot and a wireless keyboard... but the Touch does have a damn fine screen (rivalled only by the PSP) and the touch interface means no stylus to lose on the bus. Not to mention that, yes, the Touch is cooler than a Palm Pilot (don't try to tell me geeks aren't into shiny toys ;)).
Adding a keyboard to some of these devices might also have an accessibility benefit. Touch screens currently lack tactile feedback (you can't feel a touch screen "keypress"), so on-screen keyboards are pretty useless to a blind user. Adding a keyboard would also help anyone who sends a lot of text messages - including hearing-impaired people.
Obviously a keyboard alone won't make devices accessible - many of them don't currently vocalise their menus and so on. But a keyboard would certainly help on the input side of things. They might even prevent a bit of SMS-thumb RSI!
It's a case where accessibility moves into the realm of usability. Sure, the little keypad on a phone works just fine to send texts and peck out short emails - but a full size keyboard is more usable. I can also see full-size keyboards being popular as the mobile-savvy baby boomer generation ages, since the tiny buttons on most phones are hard to use if your eyesight is going or you're not quite as dextrous as you once were.
ultraportability (is that even a word?)
At the end of the day I don't specifically have to have an iPod with a keyboard, it could be any small, web-enabled device with a reasonable screen. Data charges aside, a phone would have the significant advantage of being able to connect even when there's no wifi available. Currently in Australia that basically means "anytime you're not at home", although Meraki mobs may change that.
I'm really just looking at the potential of adding a real keyboard to an existing web-enabled mobile device. As John Allsopp recently highlighted, the mobile web suffers from poor text input. Adding a keyboard would take many devices from "annoyingly slow" to "quick and easy", without adding three kilograms of dead weight to your bag.
Here's hoping more devices start getting portable keyboard options.
The source order question came up again on an email list recently - ie. should content or navigation be first in the source order?
This is a "jury is still out" issue since so far nobody has comprehensive data, just studies with a small number of respondents and opinion informed by observation of a relatively small number of users.
The paper Source Order, Skip links and Structural labels is the best research currently available; and its findings suggest either that the order is less important than other factors, or that there's a slight overall preference for navigation first.
However, the small number of participants in the study means we can't be 100% sure if the findings would be the same with a larger sample size. It's likely and it's backed up by anecdotal evidence, but it's not confirmed.
So, what I think we can say for sure about the source order of content and navigation:
- No matter which way you go, be consistent across the site so users can learn how your site works and trust it to work the same way as they move through the site.
- Either way, include skip/jump links; but...
- Include visible skip links where possible, or use invisible-but-accessible skip links (ie. do not use display: none; to hide skip links, as a very large number of users will never be able to access them).
- If they are hidden, try to make them visible on focus so sighted keyboard users can see them.
- Use meaningful link text and a logical heading structure. Not only is this just good practice and good for SEO... the accessibility-oriented reason people say this is that some (many? most?) screen reader users don't actually read a page from top to bottom. They use features which extract all the headings or links into a list; they read just that list, then use it to jump around the content. Once they identify that they're on the page they really need, then and only then will they read the whole page.
Please note that I am not saying all screen reader users navigate by link and/or heading. Screen reader users have habits which are just as varied as other web users. No two people use the web in precisely the same way - but overall trends and common approaches can be identified. Enough disclaimer? :)
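For reference, the invisible-but-accessible approach mentioned above is usually done by moving the link off-screen rather than hiding it with display: none, then bringing it back on focus. A minimal sketch (the class name is just a placeholder, not from any particular site):

```css
/* Move the skip link off-screen instead of using display: none,
   so screen readers still announce it. */
a.skip-link {
  position: absolute;
  left: -999em;
  top: auto;
}

/* Pop it back into the normal flow when it gains keyboard focus,
   so sighted keyboard users can see it too. */
a.skip-link:focus,
a.skip-link:active {
  position: static;
  left: auto;
}
```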
- Source Order, Skip links and Structural labels (study methodology and findings)
- OZeWAI presentation: Source Order, Skip links and Structural labels (presentation slides)
- Max Design - About structural labels (includes a good method for hiding links and supplementary headings)
- SonSpring | Accessible display: none (an alternative method for hiding links; although it has some noted problems, it's still better than display: none)
The Big Stonking Post continues for WDS07, with day two...
I had planned to liveblog these as the day went on, but as is now traditional the laptop refused to connect to the wireless network.
So, here they are in One Big Stonking Post™! :)
Friends, developers, geeks! Lend me your picnic blankets!
We believe that not all events require alcohol, extensive agendas and your entire weekend.
We believe lunch should be more interesting than a quick dash to the food court.
To this end, we present Geeks on the Grass: a chance to gather in the park with like-minded people, throw down the picnic blanket and chill out.
A few weeks back, Ajay and I were thinking about web events and we had a crazy idea. What if we - stay with me here - what if we went outside? Where there's stuff like sun, sky and grass...
The first GotG is scheduled for Friday September 21st, 12-2pm in Hyde Park, Sydney; and has been kindly sponsored by News Digital Media. Further details will be posted at GeekWhisperer as they are finalised, so keep an eye on the feed.
Yes, we realise it's a weekday. Seriously, this is on purpose. GotG is an easy commitment - it's a leisurely lunch break in the park on a Friday, with the added bonus that you'll be chilled and sober when you go back to the office. Or you could bunk off afterwards and go to the pub, whatever mows your lawn :)
I've been tagged! The meme is to find photos of yourself from Web Directions... Although I escaped WE05 relatively unsnapped, there are still a few photos of me from that year. WD06 was a different matter. I usually tend to be a bit camera-shy (I'd rather take a photo than be in one), but frankly I had to get over that or I'd have had a meltdown.
As friends gathered the photos started and Amit used me to test stage lighting as a sort of speaker stunt double. The workshops were a bit of a blur, but I can say for sure that I ate muffins and lunch.
At the speaker dinner we made Molly a little worried but thankfully we didn't scare her away. Recursive photos ensued and apparently I nearly ate the man in blue during yum-cha. Then in the lift we proved that at Web Directions, you can't turn around without seeing a camera.
On Day One the Web2.0MFG levels started to pick up, as did the drinks. On day two, my nerves reached their peak and after some magic spells were cast Cheryl and I finally spoke. Then it was all over and we were at the Pump House. Beers, photos and hand gestures were flying and there was even some chimping to be observed. It appears that eventually my brain started glowing and somehow more yum cha was involved.
...now, I'm supposed to tag three people. However I think it's more fun to say anyone who took or appeared in one of the photos above - particularly if you were drunk - should consider themselves tagged! That includes Andrew K, who's been tagged already but won't get around to it if we don't all hassle him :)
Ahhh, September. Conference season!
For anyone thinking of going to Web Directions South 2007, remember that you've got until midnight tonight to register at the Early Bird price. Coincidentally, McFarlane Prize nominations also close at midnight. There's more info at the site: Discount Pricing and McFarlane Prize deadline.
Not only but also..! Oz-IA is on the weekend of September 22-23... they have one of the coolest (and most appropriate) website designs I've seen for a while. Aesthetics aside, I can't adequately praise the value of a really good Information Architect on your team. We should all be drinking the IA Kool-Aid.
I'll be at WDS07, but it looks like the only way I'll get to Oz-IA is if I win the free pass. Fingers crossed :)
Doing the rounds at the moment is a Five Things I Want In Opera tag-meme. Chaals tagged the world of connected people so that's good enough for me. Besides, I don't really want to wait for this meme to work its way out of Europe ;)
Here are five things I want in Opera (no particular order):
- An auto-update feature so people are less likely to let their version lag. This feature alone determined whether I recommended Opera or Firefox to my mother-in-law.
- Web developer tools similar to Firefox's Web Developer Toolbar. There are some add-ons which are pretty good but I'd really push for Opera to integrate more of them into the base install.
- Slicker handling of Opera mods like user js - they sound too techy so the average user won't try them. I'd also like a neater way to add Bookmarklets to a toolbar, even if the only change was being able to set the favicon permanently.
- Let me synch my Opera preferences with my account at my.opera.
- Default toolbars which are more consistent with competitors. Out of the box, the buttons in Opera don't look like other browsers'. I've long thought this is a reason many users "don't get" Opera, leading them to ditch it in that all-important "just trying it out" phase.
Who am I tagging? Anyone who wants to join in - comment with the URL of your post.
Social networking discussion seems to focus on which service will kill another, which ones are going to be the winners or losers. But as far as I'm concerned, I don't want them to kill each other; I just want them to let me publish once and syndicate wherever I choose.
is that a crowded market i see before me?
Social networking sites and services continue appearing thick and fast. Every service wants to be the one that people use; and every new service is evaluated to see what it's supposed to be killing - right now Google has 19,800 results for the query pownce "twitter killer".
But do users really think this way? Do users want one service to rule them all? Do superior services win? MySpace suggests not. Personally I think people just go where their friends are - it has nothing to do with technical merit or clever hooks. There's no point sitting on a technically superior service with nobody to talk to. So you sign up wherever your friends are.
But the thing is, your friends don't actually move in neat ranks and dutifully sign up to one social network. They join up all over the place, wherever they see a concentration of early-adopting friends. Since your friends are on these other services, you either sign up to those too or you miss out on the interaction.
Sure, some people are fiercely loyal and attempt to convert their friends; but they are competing with both the force of habit and human nature - they're screwed, in other words. Once people are comfortable with a service, they'd kind of like to stay put. It becomes their primary service for that kind of data. They might sign up for some others, but they're secondary to their chosen service.
the joy of repetition
After signing up for a few services even the most casual observer will notice the similarities. In fact what they'll really notice is that the more social networks you sign up for, the more they subject you to dull, repetitive work. We start thinking "hang on, weren't machines meant to be doing the work?".
People have long wished that social networks could share contact details so you don't have to grind through finding your friends on the next service. I've long wished for the next step: syndication of similar information:
- I don't want to write a LiveJournal, I just want it to syndicate my Blogger posts.
- I don't want to create status updates in FaceBook, I want it to import my Twitter statuses and use those.
- I don't want to sign up to Zooomr, but I'd let it import my Flickr stream.
- I'd like del.icio.us to mirror my ma.gnolia bookmarks since most of my friends use del.icio.us.
I have primary services for most of the information social networks are sharing. If something new pops up, say I decide to get into Cork'd or Last.fm, I'll happily add that to the pool. But if something is asking me to do the same thing I'm already doing elsewhere... well, I've already done that. I don't want to do it again. If I have to do it manually, eventually I'll stop doing it in at least one place.
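To make the idea concrete, here's a toy sketch of "publish once, syndicate everywhere". None of the service names here are real APIs - each one is just a callback that records what it receives - but the point is simply that one primary update fans out to every connected service:

```python
# Toy sketch of "publish once, syndicate everywhere": one primary
# status update is pushed to every secondary service the user has
# connected. Each "service" is a stand-in callback, not a real API.

def make_service(name, received):
    """Stand-in for a real service API: records incoming updates."""
    def post(status):
        received.append((name, status))
    return post

def syndicate(status, services):
    """Push one status update to every connected service."""
    for post in services:
        post(status)

received = []
services = [
    make_service("twitter", received),
    make_service("jaiku", received),
    make_service("pownce", received),
]

syndicate("updating fifteen social network sites, can't sleep...", services)
```

In practice the hard part isn't the fan-out, of course - it's getting every service to expose an open way to do it, which is exactly the walled-garden problem.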
There are some distinct patterns in the type of personal data people create:
- Notifications - system alerts/updates, action required, action recorded
- Messages - short, long, statuses, chat, comments
- Files - photos, videos, music, documents, etc
- Data - URLs/bookmarks, events, contacts
Most network communications are variations and combinations of these basics. Yet the networks don't let users share the same stuff between services. Why not? Well filthy lucre of course. Each service attempts to create a walled garden, so they can make some money in some way.
But by creating walled gardens, the services are insisting that someone has to fail. At the very best, they all bleed. It's a war of attrition and users are the pawns. Realistically all of my status updates should be "updating fifteen social network sites, can't sleep, friend requests will eat me..".
they've shared before
Email and chat have both survived competition. Consider all the options for chat and email - there are far too many to list. Despite the direct competition, all the services survive. How? By sharing or allowing users to combine services.
Email doesn't have to be manually resent to every friend on a different ISP or email provider. The system just handles the transfers regardless; and any provider can survive so long as they can reliably send and receive email.
With chat, users can now run multi-protocol clients which take care of all the options. They have multiple accounts but they only run one bit of software to manage them all. No specific chat service had to beat all the others - they all just accepted their fate and let the multi-clients access their APIs.
Wouldn't it be awesome if status updates worked the same way? Write one Twitter status and friends get that status whether they're on Twitter, Jaiku, Pownce or FaceBook.... sweet!
Then there's the next step... comment aggregation. If we can syndicate information, we'll want to be able to collate comments across all these services. No sweat, in your XML feed include a reference to a definitive source for each message - remember how we have primary sources? Services can share comments and display them according to timestamps.
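As a sketch of how that merge could work: every comment carries a reference back to its definitive source message, so a client (or the services themselves) can pool comment lists from different networks and sort them by timestamp. All the field names and sample data here are made up for illustration:

```python
# Sketch of cross-service comment aggregation: each comment holds a
# reference to the definitive source message, so comments made on
# different services about the same syndicated post can be merged.
# Field names and sample feeds are illustrative, not a real API.

def aggregate_comments(feeds, source_id):
    """Collect comments about one source message from several
    services and return them in timestamp order."""
    merged = [
        comment
        for feed in feeds
        for comment in feed
        if comment["source_id"] == source_id
    ]
    return sorted(merged, key=lambda c: c["timestamp"])

twitter_feed = [
    {"source_id": "status-42", "timestamp": 2, "text": "ha, so true"},
]
facebook_feed = [
    {"source_id": "status-42", "timestamp": 1, "text": "first!"},
    {"source_id": "status-99", "timestamp": 3, "text": "unrelated"},
]

comments = aggregate_comments([twitter_feed, facebook_feed], "status-42")
```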
can't we all just get along?
Surely all of this is possible. It's not that long ago people couldn't imagine networking everyone's computer in any meaningful way! Probably the biggest challenge would be to convince all the services to share data openly. They don't want to share, they want to recruit page impression monkeys (that's the humans, you at the back) so they can make money.
But if services were open, users could choose whichever service they wanted and still get updates from all their friends. No particular service would have to live or die. Users wouldn't have to choose, nor would they have to miss out on what their friends are up to in yet another walled garden.
It seems social networks are anything but social with each other. The people running them seem to hope that their competitors can be beaten. But consider MySpace vs. FaceBook - even if that war can be won, at what cost the victory? Isn't it more likely that the whole thing could force a migration to some other service entirely?
By sharing data instead, services could decrease churn rates. Why shift to another service if you get those updates anyway? People could actually choose the service they like the most, not the service their friends liked the most. Surely that would make them more likely to just relax and actually write new updates. More usage, more data, more hits. Then everyone wins.
A key point of ridicule for Web 2.0 is the endless use of non-final version releases - Perpetual Beta. As I've observed before, Flickr is a notable offender.
Now they have either done it again with the most ridiculous version yet; or maybe they've finally realised that they can call it whatever they want - they're taking money so it's bloody well final.
Flickr has gone through alpha, beta, gamma and now the logo just says "loves you" where it used to say gamma. It's certainly an odd version number. What comes next? Flickr "loves you more... no you hang up... no!... ok let's hang up together... 1, 2, 3... you didn't hang up!".
Sure, Flickr's a great service - I paid up after all - but their approach to version numbering is weird. Not to mention the fact that at this point, the numbers are only really useful internally. Marking a site as Beta just alerts people that the service is not finalised. Flickr really can't argue that point any more.
So anyway, Flickr loves us. Does that mean it's out of gamma? :) The logo's file name is flickr_logo_gamma.gif.v1.5.gif so who knows.
Over the years I've noticed there are a few pieces of user feedback you can pretty much count on when you relaunch a website. Most developers are familiar with these comments and have learned to breathe deeply and not freak out.
However many of our clients have not seen it all before and do freak out. They'll read a few comments and start asking to change everything. It takes a lot of reassurance for them to leave the site alone for a while, to let users get a feel for the new site.
In any case, I thought it might be fun to run through the pearls of wisdom you will probably get once you open up for comments... and the people behind them :)
loyal fan: unconditional love
"New site is awesome!"
This is great, enjoy it. Just keep in mind that some people will love the new site purely because it has changed. You could produce something ten times worse and still get this feedback, since a change is as good as a holiday.
Should you pay attention? Well sure, since an improved site should attract nice comments. Just remember that it may not mean you've got everything right :)
curmudgeon: unconditional hate
"New site is shit! Why did you change? Put the old site back!"
No matter what you do, some people will hate the new site purely because it changed. They'll hate the colours, the fonts, the structure and navigation. They'll complain bitterly about people who don't know when to leave things alone and say 'if it ain't broke don't fix it'.
Sometimes they'll have a point about something which has actually changed for the worse. Other times they just fear change.
Should you pay attention? Not unless your entire user base complains. Otherwise just keep an eye out for any reasonable points which slipped in between the bile.
speed cop: has the stopwatch on the new colour scheme
"New site is so much faster!" and "New site is so much slower!"
You could produce a site which loads in precisely the same amount of time as the old one; and some users will still be convinced that it is faster or slower. I kid you not, this effect is amplified if the new site is red.
Should you pay attention? Yes and no. If users are reporting problems, you should investigate. But don't simply take their word for it - you should run some tests before and after launch and get some hard data. Load time can be measured, so measure it!
armchair expert: knows better than project staff
"You should use a serif font to make the body text more readable, duh! Don't you know anything about typography?"
"You are all morons. I could do better than this in my sleep. Send me a copy of the website and I'll rebuild it over dinner tonight."
This sort of feedback is a minefield. Sometimes they're spot on, other times they're miles off. When they're wrong it's really hard for staff to argue against it, since the user can sound so well informed. When they're right it can be embarrassing - someone missed something, or the client vetoed the developers who said the same thing.
Should you pay attention? You should definitely pay attention to the points they raise, but don't automatically assume their conclusions are right or think you are required to reinvestigate things you've already addressed. When it comes to the crunch you should trust the people who were actually hired to do the job. Plus, ideally you should also have user research and other data to back up the decisions that were made.
Remember: it's the net. You don't know who is on the other end. It might be a web professional giving you a little freebie consultation; but it might also be some freak reading Don't Make Me Think and smoking crack.
Comment spotters should keep their eyes peeled for the professor of irrelevant studies variation on this theme. Watch out for "I've been teaching programming/screen printing/comparative religion for fifteen years, so I am an expert on web design".
lazy navigator: can't find page x, didn't actually look
"I can't find the deciduous-tree-praising haiku section, I hate this site, you suck."
This feedback can mean one or more of several things. First, if you intentionally removed the content the user wants it back; no matter how obscure the page there was someone who adored it. Second, if it's still there, it might be hard to find. Third, the user may not have made any attempt to find it; and since it's not on the homepage they chose to complain instead of look. Fourth, the user may have simply looked at the old location; received a 404 and sent you a complaint. The list goes on.
Should you pay attention? Well if you misjudged the popularity of some removed content, you should evaluate the pros and cons of bringing it back. If there's a massive red button on every page, marked "deciduous-tree-praising haiku", then you might conclude the user hasn't really tried to get to grips with the new site. But if you get lots of people saying they can't find anything, you might need to change the site.
Just don't decide the entire navigation approach should be scrapped because one single user couldn't find something.
dismissive nitpicker: hates one detail, declares relaunch a failure
"The footer text should be one point larger. This entire site is terrible, what a waste of time."
Some people will hate your entire site because one specific feature ticks them off. There's not a lot you can do about this since you generally can't prompt them to comment on anything else.
Should you pay attention? Depends on the specific detail. If you only have one feature and everyone hates it, you've got a problem. If the detail is "I'm disabled and can't use this site" then you should find out where you went wrong. However if they're talking about the specific shade of blue you used for the bottom border of the navigation, you can probably let it go. Unless you get heaps of comments about the same thing.
how to deal with feedback
So anyway, you will get feedback. You'll like some comments and hate others; you'll find genuinely good ideas and totally ridiculous suggestions; you'll get warm glows and horrible lows. It is hard to read negative feedback.
Some thoughts to help you cope with user feedback:
- You should seek user feedback! Put a call for feedback front and centre. This is a good thing to do.
- For every person who comments, there are hundreds more who won't. Comments take time and people are busy.
- People who are angry or don't like the site are far more likely to comment than people who are happy or unconcerned about the changes.
- You can use your responses to feedback to gently point people towards the positive features of the site which address their concerns. But don't fall into angry rebuttals - save that for the pub afterwards, when users can't hear you.
- You will get upset when you read the negative feedback. You've invested a lot of time/money/effort and it will feel very personal. So it's often a good idea to read it, say and do nothing, go for a walk and have another look tomorrow.
- Look for common complaints, don't act on lone curmudgeon complaints.
- Don't simply dismiss all complaints. We're not perfect, neither are our sites.
- If you respond, don't do it when you're angry. Be calm and very plainly state your reasons for what you've done. If you do plan to take an idea on board and make a change, gracefully thank the user for their input.
- Enjoy the praise! You've earned it.
We all have to deal with user feedback and the experience is a mixed bag. Take comments with a little dash of humility and humour, and a whole lot of grains of salt.
For a long time I really couldn't get into widgets. I found them too clunky for the functionality they offered - a bad ROI. Konfabulator in particular was not kind to my aging PC's performance.
But then two things happened: I got a second monitor at work and installed some Opera widgets. So now I use a few widgets - some obvious stuff like the weather and the fuzzy clock; plus some very niche stuff like Stay Secure.
The extra monitor was a no-brainer - that just gave me extra room to have widgets in view. The real kicker was having the widgets piggyback my browser - something I'm running all day anyway.
With all the options out there for widgets, Opera has two big advantages going: first, it doesn't require a whole new platform on your computer. If you have Opera, you already have the platform. Second, they're cross-platform and, from what I'm told, will ultimately be cross-device as well courtesy of Opera Mobile.
Opera have put widgets right where you've already installed software - you can just hit widgets.opera.com and away you go. Of course, if you have OSX or Vista you might choose their widget offerings for much the same reason.
spoiled for choice
So anyway, I know I'm late to the widgets party. But there are an awful lot of users out there who haven't even heard about them yet, or just don't really know where to start.
It's a classic barrier to adoption: widgets sound techy and more than a little geeky; there are lots of competing options, with no clear differentiation for the casual observer; and you have to install stuff before you really know whether you want it.
I was trying to give someone a "quick overview" of widgets a couple of weeks back. I dashed off what was supposed to be a quick email. It took a whole screen just to list and describe the widget engines I could remember off the top of my head (don't look at me like that, they actually needed the info - I wasn't doing a misguided geek braindump!).
If you're an average user, you probably don't even modify your browser settings - much less install add-ons, widgets and customisations. There are plenty of users out there who haven't really got their heads around XML feeds. Can you imagine their confusion trying to figure out widgets?
Then from the publisher's perspective, with all these options how do you pick a widget to release?
don't look now, it's another cowpath thing
Realistically you have to offer widgets wherever people are already using them - even if it is still a wide range of options with small user bases. If someone's not using widgets yet, chances are they're not going to be starting just for you - they'll keep using whatever they're already comfortable with, even if that means visiting your site randomly via in-browser bookmarks.
About the best thing you can do is find all the widget engines you think have a decent user base and release a widget for all of them. With all the quirks and weirdness, you could be spending a lot of time making widgets.
it's like we need a standard or something
A standard format would definitely be useful here - something like the Netvibes Universal Widget API (hat tip to Lachlan for that one). It remains to be seen whether UWA will become a standard, or if everyone will just publish their own "standard" leaving us no better off than before (RSS anyone?).
people are tired of beige boxes
Perhaps widgets will encourage more average, non-geek users to customise their computer. I've observed over the years that a lot of people really are nervous about using computers, while others are simply uninterested. I have a vague theory that only fear could drive someone to put doilies on a computer monitor, but that's probably more about my view of doilies ;)
I don't think fear is too strong a word - people are still worried they could "break the computer" by pressing the wrong key. It doesn't help that current affairs TV spots regularly scream about privacy and security online.
So anyway - whether they're afraid or just indifferent, a lot of people don't get much enjoyment out of the computer. Even so there's a good chance they're stuck using it all day at work anyway.
giving the humans a look in
Being able to whack the weather, traffic report and your favourite newsfeed right on the desktop gets people to engage with their computer a bit more. It lets them get useful information (or fun, useless information) and gives a simple opportunity to make the machine a bit less threatening (or boring). The computer might be forcing them through the agonies of spreadsheets, but it can also tell them they need an umbrella at lunchtime and to avoid the freeway on the way home because it's backed up for miles.
Sure you can get that info from websites, but widgets are right there already. They're little bite-sized bits of web.
To think of it another way, widgets are about the user. They're generally free, and most of them are still free of overt advertising. They're just fun, or useful, but most of all they're personal. It's technology which actually does something nice for the humans. Which is a nice change.
So anyway, there's a lot of potential in widgets - if only users can be convinced to use them. Here's hoping a clear standard format emerges to make it easier to give the people what they want...
Most standardistas encounter the "standards and accessibility limit creativity" argument at some stage. Yes, even in 2007. In fact these days it often morphs into "don't criticise AJAX just because it's not accessible", but I'll save that rant for another day.
Personally I don't think standards compliance adds any limitations beyond the natural limitations of the web (all media have their limits). But even if it did, does that prevent creativity?
ANSI art by sq2 of esquemedia.com. Cheers for the permission, Rauri!
I've seen some impressive artistic results from people using limited media. One of the greatest and certainly an influential example in my life was ANSI art. ANSI is a joy I recall from BBSes, back in the day when my internal 14400 modem was hot and my computer's hard drive had less capacity than my current thumb drive.
ANSI was the basis for BBS interfaces, with a whole 16 foreground colours, 8 background colours and 256 characters. Shading was achieved using combinations of foreground and background colours, a very small number of dithered blocks and the four half-filled blocks. That's it.
Big chunky blocks of colour couldn't possibly produce great art, right? Well no, actually there was an entire international art scene devoted to ANSI art. Plus if you ran a BBS, you had to have a great scroller for when you logged in. So people pushed the boundaries far beyond expectation: they took an incredibly limited medium and created rich artwork.
In a way, ANSI artists worked hard to produce great work because the medium was limited. It took skill to create a great ANSI artwork. You really couldn't fake it, although many people tried. So the greater the skill, the greater the kudos for producing an elite ANSI.
back to the present
The medium allows for blobs of colour. That's it. Did that create a limitation which prevented great work? No! Instead artists looked at the possibilities - the potential of the medium.
I think the peacock demonstrates that well-executed artwork uses the given medium to the best advantage. For best results work with the medium, don't struggle against it.
The point? The limits of a medium simply define the creative space. They don't prevent people being creative within that space.
standards aren't limiting
Web standards just don't limit creativity the way people claim they do. You aren't prevented from producing great web pages just because you make them validate. Standards-based pages don't have to look like useit.com. CSS Zen Garden has proved this ad nauseam.
If you work in the web you have to accept the medium for what it is. You need to accept its limits, play to its strengths and try not to bring unrealistic expectations to the table. You have to accept that you need to make things validate and make them accessible, then add the funky design and behaviour over the top.
Sure, the web isn't a perfect medium. There's no such thing as a perfect medium! Print, photography, video, paint, music... they all have problems. Watercolours can run and ruin a wash; photos can get overexposed; printing presses can screw up colours; guitar strings can break during a gig.
Every medium has limitations. Part of creativity is getting around them and coping with the problems.
So anyway... that's my response to the claim that standards and accessibility mean you have to create boring pages. A canned answer to a canned question ;)
Update... A couple of links to examples of creativity with extremely limited tools:
So it has come to pass that the W3C has decided to take the WHATWG's HTML5 on board. It will form the basis of the W3C's HTML5. The goal is to have a public draft by June - yes, this year. Given that the spec now has to endure the full process of the W3C we'll see how that goes.
Anyway, this got me to thinking: what do I really want from a new markup specification? I've talked about this before but I realised that there's a difference between what I want and what I actually hope for :)
Ultimately it comes down to quite a small subset of the overall picture - the things I genuinely wish for in daily life. There are a few elements I'd like to see created or simply supported consistently by browsers.
These are the basics, the minimum additions to fill in some blanks left by HTML 4.01.
- An extensible, contextual heading/section system
- A way to associate a CAPTION (or LABEL) with images and lists
- Footnotes (which are really endnotes on the web)
It's a short list, since the reality is that the lack of decent CSS support impacts on my daily life far more than the limitations of markup. Frankly most developers out there still haven't mastered the semantics of HTML 4.01 so it's not like adding more elements will stop people making tag soup.
Although this is not an addition to a spec... I'd like to see real support for OBJECT so (amongst other things) we can replace images with the complex explanatory content required for complex graphics. Since certain popular browsers can't cope with this element, we still essentially don't have it.
On the topic of headings, HTML5 does not do what I want since it still relies on H1-H6. I gather the HEADER element is meant to do some kind of section marking but frankly on a first reading it doesn't make a heck of a lot of sense. It certainly doesn't introduce any obvious practical benefit.
XHTML2's H and SECTION system is exactly what I want. I regularly wish I could write a code fragment with a heading, without having to know the heading rank. With the H/SECTION system, I could just define the fragment as a section and know that the heading rank will be sorted out in-situ.
If you maintain a small, stable site, headings may not have ever been an issue. But if you have ever maintained the code base for a very large site, you're probably nodding your head ;)
Even for a small blog headings are a problem. In your average blog the top two heading ranks are probably handled by the site template and CMS; but subheadings in actual posts have to be written directly into the content with heading tags. So you're probably inserting H3 tags right into your content. Too bad if you later want to change the post pages to have the post title as the H1 - then you'd have a jump from H1 to H3. You either have to stick with the original structure; or you have what I consider an invalid heading rank jump.
Consider the same blog, with H/SECTION... you can adjust the structure around the post as much as you like - it doesn't matter. The sections and corresponding heading ranks take care of themselves.
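As a rough sketch of how that would look (XHTML2 was only a draft, so treat the element names as illustrative rather than final):

```html
<!-- Heading rank comes from nesting depth, not the tag name -->
<section>
  <h>Post title</h>
  <p>Intro text...</p>
  <section>
    <h>A subheading</h>
    <p>Because rank is implied by nesting, this whole fragment
       can be moved deeper or shallower in the document without
       renumbering any H1-H6 tags.</p>
  </section>
</section>
```

The CMS template and the post content never need to agree on absolute heading numbers - only on nesting.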
Headings aren't glamorous. They're not uber-funky AJAX-friendly form inputs which will sparkle in the sun and inspire dancing in the street. They are bread and butter elements which we use every day. HTML has never made them easy to work with, so like it or not they would be a killer app for a markup spec.
In addition to what I do want, I think it's important to think about what a spec excludes. I think it's high time for specs to stop weakly deprecating things and flat-out remove them. I'd kill off the semantically neutral and visual-design-based elements - FONT, B, I, S, U etc... and definitely no get-out-responsibility-free cards for WYSIWYG editors!
The spec should just have them treated and rendered the same as SPAN. They're all semantically meaningless and can be replaced either with CSS or semantically-meaningful elements.
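As a small illustration (the class name is made up for the example), the visual effects of those elements are easy to recreate with CSS on semantically meaningful or neutral markup:

```html
<!-- Instead of <b>WidgetPro</b> and <i>really</i>... -->
<p>You <em>really</em> should read the
   <strong>important</strong> warning before installing
   <span class="product-name">WidgetPro</span>.</p>

<style type="text/css">
  /* purely visual styling hangs off CSS, not element names */
  .product-name { font-style: italic; }
</style>
```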
I should note that by my reading, WHATWG's HTML5 deals with B and I by creating semantic meaning for them. While that approach has some merit, I doubt the majority of developers will alter their usage according to the new semantics so those elements' usage will just be incorrect for new reasons. If everyone out there was to adopt the new semantics, I'd probably support the approach :)
These are things I want, but in the balance of things they're not the first things I'd argue to have included. That's the basics list :)
- A dedicated caption or group label for sets of radio buttons - FIELDSET and LEGEND don't really work for long descriptions.
- A drag-and-drop form input which is also keyboard accessible - keystroke/click to pick and keystroke/click to drop. Drag and drop is a useful paradigm but the possible solutions at the moment are not much good for keyboard or screen reader users.
- An element to enclose extra info for assistive technology users, something a little like NOSCRIPT. Having to use CSS tricks to hide assistive content creates a clash between content and style; not to mention putting your content at risk of Google blacklisting. An element named something like ASSIST could be ignored by search engines and enabled by assistive tech like screen readers. [Note - this is a pretty sketchy idea, no doubt there are all sorts of practical issues. I'm not saying it's perfect. It's just that we need some legit way to give extra info to users who need it, without getting blacklisted from Google. A dedicated element might be the way to go - although proper support for OBJECT would help an awful lot with accessibility it still won't help the search engine issues.]
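For context, the CSS tricks in question usually look something like the off-screen positioning technique below. The content stays available to screen readers, but the accessibility behaviour now depends entirely on a stylesheet, and the hidden text can look like cloaking to a search engine:

```html
<a class="assistive" href="#content">Skip to main content</a>

<style type="text/css">
  /* Off-screen rather than display: none, which most
     screen readers would also treat as hidden */
  .assistive {
    position: absolute;
    left: -9999px;
  }
</style>
```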
Another short list. I wouldn't say no to specific elements for navigation, but I don't think they would really fix problems. Accessibility basics give way to usability issues - if your navigation is hard to distinguish from content, it's more of a usability issue than a markup issue.
HTML5 has elements for navigation, document content, header, footer etc... I'm not a huge fan of the naming system but I can see the potential benefits. Still, such elements aren't really priorities for me. I'm still going to give users skip links, and Google has no plan to reward semantics. If - and it's an if - screen readers were to make use of these new semantic elements then I'd probably use them. But screen readers lag behind and users often can't afford the latest versions, so we're still going to be using skip links anyway.
all i want for christmas...
So basically what I want from a new spec is a few basics that were missed the last time around. I'm not actually hanging out for bells and whistles, although HTML 5 seems full of them and no doubt we'll happily use them.
Has reality lowered my expectations? Perhaps. Will I be glad of some kind of update - something, anything - after all these years? Almost certainly. Remember it has been more than seven years since XHTML 1.0 became a recommendation. That's 70 web years - a long time between updates.
After all that time it seems that most developers had lost faith in the W3C. Taking on HTML 5 seems like the only rational way forward and it was probably the only thing the W3C could do to regain a little bit of relevance in the world of markup. The browser makers certainly seemed to have jumped ship to WHATWG's HTML 5, or were quietly preparing to do so.
When I first heard of the WHATWG I thought it was unnecessary - maybe even a little irresponsible - to break away from the W3C. Many years later I'm glad they did.
So anyway with a June deadline, here's hoping we have a new HTML spec in time for Christmas. Santa... I'll be a good boy, I promise.
How would you find information if Google was down for a day? What if it was just gone?
"Google? Gone?" you say, "that's crazy talk, it's impossible!"
But it's not impossible. Google will only retain its market dominance until something better comes along. After that, it will be remembered but never used any more. Just ask Altavista, Lycos, or any other pre-Google search engine.
Now I don't for a moment wish bad things on Google, I just think people should be a little more aware of the fact they use Google without thinking. How else could "Google" become a verb?
Google has achieved ubiquity. Good for them! But is it good for us?
there can be...well lots, actually
Google is a brilliant general search engine, but it's not the only one out there. In fact, brute force searching isn't even the only way to look for information. In some cases, it's probably counter-productive.
Why counter-productive? Well for a start, people don't search widely. They just hit the first few results on Google and go with the first answer they find. Too bad if it's wrong, or there's relevant information that wasn't obvious from the first page of search results. So many people are getting bad information because they've developed bad search habits. Google's "I'm Feeling Lucky" button probably didn't help!
You also have to consider that with systems like PageRank you're not necessarily getting accurate information. Millions of people linking to something doesn't mean that it's the best information, it just means it's popular.
You have to ask yourself, do I actually want the masses to filter information for me? What if I'd just prefer to know what my peers think? After all, I know and trust my peers (and their opinion).
trust and information retrieval
You do need to trust the people delivering search results and the people who influence those results. I don't actually know the people at Google so I have no real basis to trust them. I definitely don't know the masses of people whose links are indexed by the Googlebot. So, search in general does not score well on trust - I can only assume I can trust Google, which is a pretty thin level of trust for such a critical tool.
However, I do trust friends and colleagues... and the cool thing is that many of them use social bookmarking systems like del.icio.us and ma.gnolia. So I can actually search the links which they designate "Good Links". I can tap into the tribal hive mind.
Besides questions of trust, accuracy or information gatekeeping... I just think people should ask the question: "why do I always use Google?".
I think that people should be reminded to question things, pure and simple. It's probably a hangover from my philosophy degree. Everyone should prove that they don't exist at least once ;)
I'm not saying you can't continue using Google - frankly I know we all will! But we should do so with the awareness that we are trusting a single pathway to information. We are putting all our eggs into the one proverbial basket.
a day without google
So why not try going a whole day without using Google? Where would you start?
- Maybe you know a search engine or two off the top of your head (no Googling for search engines, you at the back). Try Teoma, Ask, etc.
- Maybe you'd hit Technorati and search tagged info.
- Maybe you'd hit a major single resource like Wikipedia.
- Maybe you'd search Technorati and Wikipedia for search engines.
- Maybe you'd hit del.icio.us or ma.gnolia and go the social bookmarks route.
- Maybe you'd hit email, IM or Twitter and ask your friends for recommendations.
- Maybe some people would be really freaky and go to the library. Before the web, there were libraries; before Google there were librarians...
We don't actually need Google. We're just really used to it. It's good at what it does, but it's not the only one nor is it the only way to access data. Every once in a while, we should try some of the other ways. It's not good to be totally reliant on a single source or pathway to information.
There's every chance that one day, Google will fall. One day, it might not even be there. Stranger things have happened.
Until then, we can gleefully google to our hearts' content... but we shouldn't do it out of ignorance.
It's CSS Naked day... basically a worldwide nudie run for websites! The idea is to show off the underlying structure of the page. This is what you see if you're, say, blind; using a mobile without CSS support; using a text browser or have CSS/images disabled; or if you're the Google bot.
For more info check out the Annual CSS Naked Day website.
User Generated Content (UGC) is a hot topic - it seems everyone wants it. A bit like everyone wanted Blogging. They weren't quite sure what it was but they were pretty much convinced that businesses should be Blogging, so they told their IT section to order in some Blogging.
So anyway UGC is the new black (one of them anyway), which seems to translate to "comments" a lot of the time... but hey, that's a step forward in many cases. It all helps and paves the way for the next thing.
Once in a while though you see something which just sets a different standard. A SXSW attendee - a user, in other words - decided that mere note-taking didn't cut it. So they sketched the speakers: Crazy Characters at SXSW | In The Garage | The nGen Works Blog.
Now that is what user-generated content should be like! Compelling, unusual and link-worthy. It reminds me of John Allsopp's WE05 remixes (admittedly calling John a WE05 'user' is pushing it a bit ;)).
Of course it's not exactly UGC if it's hosted on the user's own blog, rather it's meta content... but I like to point out cool moments of user engagement, even if just to remind us all to keep an open mind. The best ideas are probably things we won't even think of ourselves - the users can and will outdo us.
So if you want UGC as part of your product or business, look to the users for ideas on what form the UGC should take. You may suggest or set directions, but you should be willing to accept direction as well.
Maybe the buzzword should be User Generated Concepts.
This is one for the "if you fix a problem, blog it" files.
I've been having a problem with a particular feature on a page I'm building at the moment. Basically I need to position a small div over the top of an image. The image can then change without needing to edit the content in the div.
To do it, I set the size of the container div then used absolute positioning to place the small div where I wanted it within those boundaries. To get it all to work the container is set to position: relative; and the small div is set to position: absolute; - something which is pretty common.
Screenshot 1: Desired result (and before scrolling in Opera 9).
Screenshot 2: After scrolling down and back in Opera, content disappears. The exact result varies; with other combinations of positioning the example div may also disappear immediately after the page loads (that is, it renders then disappears).
The problem occurs in Opera 9 on both PC and Mac. To see this in action, check out the example page.
The large image and the red example div are enclosed in a container div. Initially, the container div was set to position: relative;, the red example div was set to position: absolute;, and the large image had no specific setting.
To fix the problem in Opera, apply position: relative; to the large image as well. Some more testing is required to be sure, but I think the principle is to make sure there aren't any elements without explicitly handled positioning.
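In CSS terms the before/after looks roughly like this (class names and offsets are illustrative, not copied from the example page):

```css
/* The container establishes the positioning context */
.container {
  position: relative;
}

/* The small red div, placed over the image */
.example {
  position: absolute;
  top: 40px;   /* example offsets only */
  left: 40px;
}

/* The fix: explicitly position the large image too,
   so Opera 9 doesn't lose the overlay after scrolling */
.container img {
  position: relative;
}
```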
Again, see the example page for the real thing.
We've just finished a session at BarCamp Sydney. I rocked up to see an empty board, so I proposed To hack code, first hack people.... The idea is that often it's not the code that creates barriers to success, it's the people involved. Perhaps they're resisting, perhaps they're not engaging with the process or they simply can't express what they want to achieve with technology.
Here's my take on some points the group came up with...
- Don't hack code without a reason - find what the people want first, sort out the goals.
- Be open to being wrong - if you push your agenda, you may discover that there's a good reason not to go down a certain path. Don't be so engrossed in your own agenda that you become inflexible.
- If someone is resisting an idea or change, first understand why they are resisting. They may have an excellent reason, or they may simply be scared, or they may just need to understand the idea better.
- To hack code, you must be able to empathise with people
- Coders/geeks need to take their clients'/users' best interests to heart
- Make people feel safe. Build trust with them, build rapport, then start working. People are often afraid that they're paying for something that won't do what they want.
- Doctors need a good bedside manner, geeks need their own version - webside manner?
- It is our responsibility to make things work, not the user's responsibility.
- We must demystify technology - tell people what it will do, not how it's coded.
- Simplify, don't dumb down.
- Don't build for "everyone", build for your specific target market.
- Geeks... we cannot avoid people! Get over it!
The way you configure your primary browser affects the way you see the web. Using a suite of browsers during your test rounds is one thing, but it's not the same as using a particular browser and configuration on a daily basis.
- Set your system/browser's default background to something other than white
- Disable plugins by default (enable them on demand)
- Disable meta refresh
These three tips will catch a huge number of sites which make assumptions about browsers. The errors you catch with these settings aren't necessarily the end of the world, but they are a sort of litmus test for site build quality. Plus, many sites do in fact fail completely under these simple-sounding conditions.
On a Windows PC (not sure about Macs), a web page with no explicitly stated background colour will be rendered with the Windows default background colour. Since the default Windows background is white, many developers forget to actually set the background when they want it to be white. However, to reduce eye strain I set the Windows background to a light grey or off-white, so I get to see which sites haven't set a background colour.
It really is quite astonishing how many sites don't actually set their white backgrounds! There are some very popular sites (by large companies) and even a couple of A-list bloggers who have forgotten this one.
While it's not the worst error out there, it can certainly be a problem if a user has a black system background and you've got black text. I know someone who has grey on black as their system default; and they regularly have to highlight websites just to read them.
I think Firefox has started overriding the system background and inserting white as a default, but that's not really a solution.
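The rule of thumb that falls out of this: whenever you set one colour, set its partner too. A minimal example:

```css
/* Set background and text colour as a pair - never assume
   the user's default background (or text colour) matches yours */
body {
  background-color: #fff;
  color: #000;
}
```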
To change your Windows background: click Start → Settings → Control Panel → Display → Appearance → Advanced. Click on the diagram or select Window from the Item dropdown. Change that item's colour and apply the change.
[If someone knows the MacOS equivalent to change the default background in web browsers, feel free to comment or let me know :)]
You should regularly browse with plugins disabled. It's interesting to see how many sites use flash for critical content yet have no fallback at all. Many all-Flash sites don't even have a warning message telling you to install Flash - they just load as a blank screen.
I use Opera and go a bit further, disabling animated graphics and Java. It's trivially simple to switch them back on for a site, so why not? With Opera's site preferences I can enable plugins for those few sites where I do want the plugins to work (eg. YouTube). To toggle these settings, use the Quick Preferences menu to disable plugins then use Edit Site Preferences to enable them for chosen sites.
I'm not sure if Firefox can disable plugins once they are installed, however there are various extensions (eg. Flashblock) which can disable Flash. It's buried in IE as well but to be honest I don't recommend using IE as your daily browser anyway - it encourages complacency, since you don't notice all the sites out there that don't work in anything other than IE.
disable meta refresh
Meta refresh is particularly problematic for users with screen readers, since the uncontrolled refreshes create confusing and unpredictable experiences. For this and many other reasons, browsers are now making it a lot easier to block meta refreshes.
The thing is, many sites use these to forward the user from one page to another - but they don't include a manual, clickable link. Many people assumed that browsers would never be able to switch off refreshes I guess!
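A redirect page that survives blocked refreshes just needs an ordinary link as a fallback - a minimal sketch (the URL is a placeholder):

```html
<head>
  <title>Page moved</title>
  <!-- forward after 5 seconds... -->
  <meta http-equiv="refresh" content="5; url=http://example.com/new-page">
</head>
<body>
  <!-- ...but keep a manual link for anyone who blocks refreshes -->
  <p>This page has moved. If you are not redirected automatically,
     <a href="http://example.com/new-page">follow this link</a>.</p>
</body>
```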
To disable meta refresh:
- Opera: browse to opera:config#UserPrefs|ClientRefresh, then deselect the option and restart Opera.
- Firefox: you can wait for version three, or install the Web Developer's Toolbar and click Disable → Disable Meta Redirects.
- In Internet Explorer: go to Tools → Internet options → Security tab → Custom Level button → Miscellaneous category → set "Allow META REFRESH" to Disable.
[Feel free to comment if you know instructions for other browsers.]
...and that's it
These tips should help remind you to provide fallback content if you're using plugins, scripts or modifying standard page load behaviour in any way. If nothing else, it should remind you that white backgrounds don't magically happen :)
Web 2.0 is suffering, friends. There is a tragic shortage of desperately needed "e"s. Through bad planning and massive overuse of "e-" as a prefix, Web 2.0 has had to make do without the letter e and simply contract their names.
You've seen them on the street, with names like Flickr, Zoomr, Tumblr, Frappr, Talkr, Soonr, Rel8r... the list goes on.
Let's not be fooled here, Web 1.0 is to blame for this problem. Web 1.0 used "e" like there was an unlimited supply, with no thought for sustainability or future generations' need for e. The term "email" was an innocuous and relatively logical start, but sadly it was followed by eCommerce, eBook, eCard, eZines, eBusiness and even eEducation. Thankfully "cyber" can be synthesised cleanly in lab conditions and - despite heavy use - web 1.0 did not plunder natural stock for future generations.
But there is hope. Some applications have overcome great challenges and used e for its intended purpose. Feedburner and Twitter are two services which managed to conserve enough e to avoid the terrible fate of unnecessary contraction.
You can help. Sponsor a web 2.0 startup today and spare as much e as you can. Together, we can save the web.
So debate continues about Twitter. Is it great? Is it awful? Is it a chat tool? Is it a status updater and damn it you chatters are ruining it for everyone? :)
I've come to the conclusion that ultimately it's not really the format that gets people hooked. Twitter doesn't do anything that we couldn't already do. We already have chat, we already have blogs, we already have post-by-SMS and so forth. Sure, Twitter rolls it into a neat package but I don't think that's the key factor.
Personally I think the real reason that people love Twitter so much - the Killer App if you will - is that it's ubiquitous. Twitter pretty much goes wherever you want it to go.
If you're on the web, you can be on Twitter. If you're signed into IM, you can be on Twitter. If you've got your phone in your pocket, you can be on Twitter... ok, you get the point! Twitter followed us home and we want to keep it.
So what Twitter really does is put your (Twitter-)friends within easy reach. You can always let them know what you're doing. You can always hear from them. If you're travelling, you can read about the little moments of important people back home. If you find yourself walking home unexpectedly, you can tweet about it.
You don't have to have ground-shaking news. You don't really have to have any justification, Twitter doesn't demand formality or deep and meaningful thoughts. Twitter is basically like hanging out with a big group of friends. You can wander in and out and nobody minds; but they're happy to see you when you are there. You can be pretty sure someone's always around.
I had to mention it. I think Twitter feeds into Mark Pesce's concept of youbiquity. It can provide a timeline - some people tweet when they post a blog article or a particularly interesting photo on Flickr. Twitter can also fill in the gaps on services like Jaiku which attempt to track your 'presence stream'. Blogs, bookmarks and photos still feel a bit disconnected... but Twitter adds a certain je ne sais quoi and the sensation that you really are seeing a picture of what you've been up to online.
ubiquitous or intrusive?
If it starts getting intrusive, you can close the window/app, or text OFF... and that's it. It'll leave you alone until you feel like dropping back in. Whether we have that willpower is not really Twitter's fault, nor is it unique - we don't close our email down much either, and emails usually take a lot longer to read than a tweet.
So, sure... it can be just as intrusive as any of the communications channels it uses. Mobiles, web and IM can all be intrusive. That's not new to Twitter either :)
the medium is not the message
...but the medium shapes the message. I write longer text messages via USB (ie. on a proper keyboard) than direct on my mobile's keypad. There's no reason to expect anything different when I post to Twitter.
Khoi Vinh observes that the different interfaces subtly encourage different writing styles - that the input mode changes content. I do believe that people using mobiles will post less and be more focussed on 'what am I doing?' tweets than people using the web or IM interface. After all, they're out and about doing it.
So there's another theory: Twitter's addictive quality is ubiquity. It uses technology which is mundane in our worlds, but it achieves a sort of magic. We can stay connected even when the computer is off. We can post by mobile and friends get it via chat.
Twitter pulls together three of the most successful communications methods of the modern world. Then it's interested in what you're doing. Then it tells your friends.
Go-anywhere technology that connects people. No wonder it's popular!
n minutes ago from web... ;)
It's very weird to hear your own voice as others hear it, but curiosity won out: webdirections | september 26-29 2006 » Cheryl Lead and Ben Buchanan - Moving your organisation to web standards.
It's funny how I love public speaking, but I feel embarrassed listening to a recording of it later! :) But that's the nature of things I guess... most of us don't hear our own voices played back too often, so it always comes as a bit of a surprise.
Still, people seemed to enjoy the session and I think we covered some good stuff. The questions were great, too! ...and we remembered to repeat nearly all of them for the podcast :)
One visual thing that didn't really come across... when I said "may geek out without warning" I was referring to the shirt I was wearing :)
So anyway, check it out...
It seems that Firefox 3 will include an option to treat meta refreshes much the same way as popups - blocking them and alerting the user what the page wants to do. It's another step forwards in letting the user take control.
Of course, Opera users already have this option via opera:config#UserPrefs|ClientRefresh. Neat, although an alert would be good; as would site-specific settings. Hopefully the feature will be refined in future versions.
Really though, either way is good as it gives the user a little more control over their browser. Automatic refreshes and redirects break accessibility recommendations. They're one of those things which gets written up as "until browsers provide a way to control...".
As these features become more widespread, the importance of fallback options will become even more critical. Just like scripts need a <noscript>, meta refreshes need a link in the document. Many pages don't have them, though; so no accessibility or SEO juice for them!
It serves as a good reminder that we should provide alternatives any time we modify the behaviour of a page. I have had people say in the past that meta refresh was so simple nothing could go wrong. Well, that assumption will bite them on the arse...! :)
We should always assume that somehow, somewhere such features will be disabled. It's not hard to provide an alternative, so it should remain our habit to do so.
how to disable meta-refresh
- In Opera 9 (Win/Mac): browse to opera:config#UserPrefs|ClientRefresh, then deselect the option and restart Opera.
- Firefox 2 (Win/Mac): install the Web Developer's Toolbar and click Disable → Disable Meta Redirects.
- In Internet Explorer 6 and 7: go to Tools → Internet options → Security tab → Custom Level button → Miscellaneous category → set "Allow META REFRESH" to Disable.
- Safari 2: currently I don't know of a way to disable it.
There's an SEO conference coming up shortly which will feature several Google employees. Russ is calling for your input on questions you'd like asked; and Scott has further thoughts:
- Max Design - standards based web design, development and training » Our chance to ask Google
- Signals of Quality | Standardzilla
I've already commented fairly heavily at both sites, so I guess this is a meta post :) For reference, my questions (in no particular order):
- Since 301 redirections get you bombed (and longevity is a big factor in pagerank, so new URLs are effectively bombed), is there a way to move a site without losing your pagerank?
- Will Google ever produce valid pages for their own sites? Many standardistas have produced proof of concept versions of Google search for example - standards compliant AND lightweight. Why not use them?
- Do they think Flash will ever be seriously searchable, in a useful manner? Do they think it will be possible? Would they rank Flash content higher or lower than text content?
- Does Google give equal weight to ABBR contents versus spelled-out terms?
- Does Google give additional weight to tags/tagged pages? (...which leads to the next point...)
- Will Google be indexing/weighting microformatted content? What is Google’s view of microformats and their potential benefits to search? If they did support microformats would that also suggest they’d need to pay more attention to semantics?
- I’d also question their views on whether validation is a "signal of quality". In short, if a page validates surely that is an indication that the author/developer has paid close attention to the construction of the site… which would be a signal of quality in my book!
Then from my comments on Standardzilla:
Google: so few websites validate that it isn’t a signal of quality
...incredibly bad logic there. If a site validates at this point in time, it indicates that someone has paid serious attention to the quality of the page. Surely a signal of quality! Maybe they don’t want to open that door since they’d then be admitting that their own pages suck.
Their interest in accessibility is minimal at best. Accessible search is treated as a bit of a curiosity, as far as I can tell. A neat toy produced by someone’s 20%, but that’s about it.
The thing I’ve come to realise about Google is that they do not consider inaction to be "doing evil". Despite the tremendous influence they have, they don’t use it to "do good". Personally I think their inaction is a form of doing evil, but that’s just me.
Do you have questions of your own? Head over to Max Design - standards based web design, development and training » Our chance to ask Google and make yourself heard!
Back in 2005 I compared the patch rates of IE, Firefox and Opera. In the past few days the subject of browser security has come up a few times, so I thought I'd revisit the topic to see what (if anything) has changed.
I'm using Secunia advisories again, to keep the data source consistent. The product pages are:
Note that Secunia's data starts from February 2003, regardless of each product's release date. You can investigate Secunia's methodologies if you wish; there are some quirks. However, I'm not after a perfectly scientific investigation so much as a broad-strokes impression.
what am i comparing here?
Since each browser has a different release date and lifespan, comparing raw numbers of problems isn't really useful. However, we can compare the percentage of patches/fixes from the vendor - it's not about how many security issues were identified, but about how many were fixed.
I would have added in "time to patch" and "days vulnerable" and so on, but Secunia doesn't currently graph that information (as far as I know).
I thought about sorting out standardised timeframes and so on, but the bottom line here is how secure can a user's browser be today? I say "can" since we can't assume that all browsers are up to date with the latest patch (or even close), but we can at least evaluate the potential for a conscientious user to keep up. After all, we can only apply the patches that are available.
Having discussed the user acceptance issue in the previous article (to patch or not to patch?) I won't rehash it here. However I will mention that according to Secunia Opera users really need to update their browsers.
patch rates - july 2005
First, let's remind ourselves of the data from 2005:
| Browser | IE 6 | Firefox 1 | Opera 7/8** |
|---|---|---|---|
| Number of advisories since Feb 2003* | 83 | 21 | 42 |
* Firefox advisories start from August 2004.
** Opera 7 and 8 are combined to create a better comparison in terms of the number of advisories.
[Note - yes I know it didn't really make sense to combine Opera 7 and 8, but both had a 100% success rate so it didn't really change the outcome.]
patch rates - january 2007
First off, let's compare the patch rates of the same browsers (and we'll add Safari so people don't accuse me of forgetting Macs). Remember that these are all superseded versions now:

| Browser | IE 6 | Firefox 1 | Opera | Safari 1 |
|---|---|---|---|---|
| Number of advisories since Feb 2003 | 110 | 39 | 15 | 15 |
So, no change for the three browsers compared last time. Safari slots in at second, after Opera and before Firefox.
Now let's have a look at the latest versions of the four browsers:
| Browser | IE 7 | Firefox 2 | Opera 9 | Safari 2 |
|---|---|---|---|---|
| Number of advisories since Feb 2003 | 4 | 2 | 3 | 6 |
This produces very clear results, but the low number of advisories exaggerates the margins. The previous versions all have a higher number of advisories, yet the only change in ranking is that Safari drops from second to third. The sharp drop in patch rate between Safari 1.x and 2.x makes it hard to draw any useful conclusions - has Apple really dropped the ball?
For the other three browsers, the rankings remain:
- Opera (100% patched, no change)
- Firefox (50% patched, down from 87%)
- IE (25% patched, down from 67%)
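The arithmetic behind those figures is just patched advisories divided by total advisories. A quick sketch using the current-version counts above (the "patched" numbers are inferred from the stated percentages, not taken from Secunia directly):

```python
# Patch rate = patched advisories / total advisories, per browser.
# Advisory totals come from the Secunia counts above; the "patched"
# figures are back-calculated from the percentages, not Secunia's data.
advisories = {
    "Opera 9":   {"total": 3, "patched": 3},
    "Firefox 2": {"total": 2, "patched": 1},
    "IE 7":      {"total": 4, "patched": 1},
}

def patch_rate(counts):
    """Return the percentage of advisories the vendor has patched."""
    return 100.0 * counts["patched"] / counts["total"]

# Rank browsers from best to worst patch rate
for browser, counts in sorted(advisories.items(),
                              key=lambda kv: patch_rate(kv[1]),
                              reverse=True):
    print(f"{browser}: {patch_rate(counts):.0f}% patched")
```

It's a trivial calculation, but it makes the point: with totals this small, a single unpatched advisory swings the percentage wildly.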
It's worth noting that the patch rate for both Firefox 1.x and IE 6.x improved between 2005 and 2007. However both dropped noticeably between their previous and current versions (same as Safari). The proportion is exaggerated by the low number of advisories for the newest products.
Well, one clear thing is that Opera is the only vendor with a 100% patch record according to Secunia. Opera is also the only vendor that maintained its patch rate between versions - in fact you have to go back to Opera 6 to find an unpatched advisory (and there's only one).
It's also clear that IE has the worst patch rate of all the browsers compared. You could say that's a result of having a much bigger user base and a correspondingly higher incident rate. But then Microsoft has more resources than the other three vendors combined so it's a pretty weak excuse for leaving security issues unpatched.
Meanwhile Firefox does pretty well for an open-source product, consistently beating IE - even if not by much. Apple meanwhile needs to get Safari 2 sorted out; but we'll see what happens as more data becomes available (for all four browsers).
So at this time Opera wins the patch stakes. The argument can be made that Opera attracts fewer attacks due to small marketshare. That could be true - there's no way to truly know, since malicious hackers aren't polled - but when I'm doing my banking I don't care if it's true. I just care that my browser is secure; and Opera currently has the best record for fixing security issues.
A couple of weeks ago, Mike Schinkel was kind enough to let me know that all the archive links on this page were broken (note to self: check error logs more often). Why they broke now is beyond me, although the rollout of the new Blogger might have something to do with it (lots of changes going on).
At any rate the problem was that the first character in the archive path was being dropped, resulting in links to "rchive/" instead of "archive/". The settings panel happened to match Blogger's help page exactly, yet things were still broken.
Blogger support don't really respond personally, they just let you know when a round of fixes go in. However all current work is focussed on the new version of Blogger. That's understandable but it leaves me stranded with an 'old Blogger' site with problems. I can't migrate yet since one of my blogs is "too big" - I guess they're waiting for server load to reduce before they process blogs with 3000+ posts.
Tired of waiting, I tried a few things and found that you now have to include the opening slash as well as the trailing slash: "/archive/". Why? Well someone at Blogger probably knows, but personally I don't really care so long as it works :)
So anyway, the archive links are back in business; and if you are having the same problem try adding the opening slash.
...and before people tell me to migrate to some other blog tool, I still don't want to maintain the blog tool - just the blog!
The first time I heard about Twitter I thought it was a stupid idea, possibly because the name seemed like it would be too accurate for comfort. I also didn't really like the idea of another social network requiring care and feeding. But in the end I got sucked in, and having used it for a couple of weeks I think it has a lot of potential.
I've noticed a few terms floating around, so let's cover a few of them before we get started:
- Twit - a person who uses Twitter ("twit" is tongue in cheek, you at the back).
- Twitterati - a group of Twits, eg. "the Aussie Twitterati" (aka. the Auspack)
- Tweet - a Twitter update. "where's that tweet about the bar meet?"
- Twitterchat/Twitter chat - a stream of Twitter updates that read like IRC/chat rooms; also one term used to describe the short posting style of Twitter.
Not everyone will agree with the terminology of course.
life with twitter
It seems fairly obvious that people use Twitter in ways the creators didn't intend. Although it's a service rather than content per se, this quote from Jeff Veen still springs to mind:
Few use your content the way you intend.
Everything you create online is being ripped apart and recombined with other stuff by thousands of curious geeks. Or at least, it should be.
Twitter was intended to be a status system, with all posts answering the question "what am I doing right now?". However as soon as you do that, other people want to respond - humans are social animals, after all. The result often turns into something like a chat room or IRC - and I pity anyone trying to keep up via a mobile phone. It has been said that Aussies use Twitter in IRC mode more often than other groups.
So at once it creates a tension - the great idea was to know what your friends are doing. The problem is that as soon as people respond, the traffic can easily make it impractical to follow the flow when you're out and about (unless you really are happy to be glued to your phone). If you really just want to know what people are doing, Twitterchat adds noise to the signal.
However if you're at home on the computer, Twitterchat is just dandy. IRC was always plagued with technical issues - you had to get the client installed (or know your way around shell) and find a server for a start; then find a channel and hope you didn't get a netsplit (seemed quite common for Aussies). Twitter avoids those problems - you can just hit a website and away you go.
should it be about status or chat?
Twitters know that they should just be saying what they're doing (posting their status). But they also get sucked into the chats. It's almost like you need a filter - actual status tweets versus general chatter or responses. But then you'd need people to tag their tweets one way or another - and if they followed rules like that you wouldn't have the problem in the first place.
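For what it's worth, a very rough sketch of such a filter - using the community's @username reply convention as the (imperfect) signal. The function names are mine, not anything Twitter offers:

```python
def is_reply(tweet):
    """Heuristic: tweets starting with @username are usually chatter or
    replies, rather than genuine "what am I doing" status updates."""
    return tweet.lstrip().startswith("@")

def split_stream(tweets):
    """Split a list of tweets into (status, chatter) lists."""
    status = [t for t in tweets if not is_reply(t)]
    chatter = [t for t in tweets if is_reply(t)]
    return status, chatter
```

Of course it would miss chatter that doesn't use the @ convention, which is exactly the tagging-discipline problem described above.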
So is Twitter a status stream or a chat tool? It's both, depending on the hive mind's mood. Is that a problem? Depends on what you want out of it at the time. Given that the status aspect is rarely going to be seriously useful to me, I don't mind the chat.
You could perhaps have Twitter proper, then let people spin off to a Twitter chat room for general chatter. If you piped the status posts into the chat room, you'd get the best of both worlds. People who don't want to chat wouldn't have to; those who do would still get status messages.
A Twitter Tour is a string of tweets forming the tale of travelling somewhere. Gian is leading the charge on my contact list with two detailed Twitter Tours so far. I really like the way the string of status messages can tie together a routemap and photos.
The output is not really that different to blogging about going somewhere, except for the fact that you can follow the person's progress and chat to them while they're actually doing it - which is pretty cool, really. Plus you're getting the actual impressions of experiences along the way, rather than the overall/filtered impression after the fact. It feels a bit more "real".
Put people together and sure enough they'll come up with new ways to amuse themselves given the toys...err...tools at hand. Twitter games are a classic case. On my list, Molly is definitely the ringleader for Twitter games. I went to make a list and discovered it was Molly who had suggested all of the following:
Twitter games can be quite fun, although I have to admit it's hard to discuss the finer points of different operating systems using haiku.
Twitter Poll is just a term I use to describe a message along the lines of "should I do x?" or "what do you think about y?". Users regularly solicit thoughts and advice from other users. I've even seen quite effective technical support take place within minutes of someone grumbling that something was broken.
Several flavours of BBC news (eg. twitter.com/bbcnews) are being Twittered thanks to one enterprising user, who used the Twitter API to crunch the Beeb's newsfeeds. No idea what the BBC thinks of it, if anything.
I'm not sure I would really want the news following me on Twitter... I wouldn't mind getting BOM weather alerts though!
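A pipeline like the BBC one only needs two pieces: squeeze each headline into 140 characters, then post it. A hedged sketch in Python - the statuses/update endpoint and basic-auth scheme reflect my understanding of the 2007-era Twitter REST API, and the account details are placeholders:

```python
import urllib.parse
import urllib.request

def to_tweet(title, link, limit=140):
    """Squeeze a headline plus its link into one Twitter-sized update."""
    room = limit - len(link) - 1  # leave a space before the link
    if len(title) > room:
        title = title[:room - 3] + "..."
    return f"{title} {link}"

def post_update(status, username, password):
    """Post a status update over HTTP basic auth. The endpoint is an
    assumption based on the 2007-era Twitter REST API."""
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, "http://twitter.com/", username, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))
    data = urllib.parse.urlencode({"status": status}).encode("utf-8")
    return opener.open("http://twitter.com/statuses/update.xml", data)
```

From there it's just a matter of polling the Beeb's RSS feed and calling post_update() for each new item.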
what sucks about twitter
There are many things that show that Twitter is a new service that really needs some rough corners knocked off. I'll just cover a couple of them, since I'm not aiming for an exhaustive breakdown of its interface and design.
handling of connections
For a social network, Twitter is rather bad at handling friends requests. For a start, you can't import a list of friends - a problem which has already been discussed at length. So you're left trying to work out which of your friends is on Twitter, using which variant on their online identity... which just feels like work. Boring, repetitive work.
Once you do find people and start adding them, you discover some people have opted for a minor layer of privacy - so you have to wait for them to accept your request to add them to your Twitter friends list. If people notice your request and respond quickly, all is peachy. If they don't notice, it can go pear shaped.
The only alert that a request has been made is a text link on the web interface, so a user could literally never notice that someone has made a friends request. If they don't actively accept or reject a request, there's no way to know if they've been online or not - have they even seen your request?
So it can be quite easy for one user to appear to snub or ignore another user - intentionally or not. It can lead to a new variant on social networking anxiety.
Direct messages are like really private tweets - short emails, essentially. They suffer the same problems as friends requests if they're sent while you are not receiving updates via mobile or chat. The web interface just tells you how many direct messages you've received, not how many are new or unread. You can't archive the old ones, so you just sort of have to remember the number you're up to. So you could easily miss a direct message - once again, social network anxiety fodder.
Twitter's interface makes it quite hard to get a view of the tweets posted since you last logged in. I guess the intention was that people would just jump back into the flow - or never leave it ;) Naturally enough, people want to know what they missed, but they can only see the last 24 hours. Which is silly really, since everyone's entire history is stored.
It seems the volume of SMS traffic generated by Twitter is causing trouble in some parts of Australia.
what rocks about twitter
Twitter at its best is a little like a coffee shop full of friends. You can tune in and out, lurk, or start conversations. Because it's not actually IRC people know that you could be dropping past rather than actively watching. So you can check in a couple of times through the day or have a window sitting open all the time. It's your choice.
It also has to be acknowledged that Twitter really can go where you want it to - web, IM or mobile. That's really quite a broad reach. Since it uses SMS rather than WAP or wifi, even Aussies can use it from their mobiles (although beware of pocket posts).
Twitter is a nice, informal way to keep up with people - especially if you're not in regular email contact. Twitter forces quick, easy messages - you can't get stuck writing a 1000 word email which never gets sent. This keeps it "light" - even a busy person can participate.
where's the money?
Sooner or later I think we'll see a business plan emerge. A few obvious ideas:
- premium features requiring paid subscription
- charging to receive updates via mobile
- ads on the site
- direct marketing (twitterspam)
In any case, someone must be paying the bills for all the text messages... sooner or later they'll want to see some return on investment. I'll be curious to see how/if Twitter is changed when that happens.
so what to make of it?
I think the tweet that most accurately sums up my experience so far was when I posted: blogs about twitter, twitters about blogging, then goes to bed. If you take it too seriously, you'll go nuts.
It's faster than email; slower than IRC (in a good way); doesn't demand immediate attention like IM and has a social/group aspect that SMS alone can't touch. It is quite odd, but I can't help thinking this is a sign of things to come. Communications channels that are flexible and quick, personal and tribal... it's approaching what I imagined when cyberpunk authors talked about personal comm units. In fact, more recent reading like Cory Doctorow's Eastern Standard Tribe is a little freaky when I consider the disrupted circadian rhythms of certain Twits.
So what do you make of something like Twitter? Whatever you want to, really - at least for now. No doubt it will evolve as time goes on... it will be interesting to watch.