Off the Top: HTML Entries
24 Ways: A Web Holiday Favorite
Nothing makes me happier than to see the winter holiday season begin and 24 Ways start its annual release of web development and design goodness. Drew McLellan and the 24 Ways crew have done another great job, and I eagerly look forward to each day’s gem as it is released.
To make all of this better, 24 Ways is in its 10th year. Congratulations on all the great content and work, from the very first year to the current offering of the day.
Mind the Construction Dust
I’m in the midst of a restructuring here across all the pieces of vanderwal.net. It started in January with another project, a meet-up hack to dive into Zurb Foundation. Within a couple weeks of starting down that path I decided it would be a fine time to rebuild and redesign vanderwal.net using Foundation. Before I started down that road, leaving the horses behind, I decided I was going to update the structure of the HTML of these pages and bring them into modern times with HTML5 and CSS3.
This thinking and tinkering has finally fixed some of the underlying details that bugged me, but it has also allowed me to set much better and more object-focussed semantics. This shift will enable the content objects to flow better and be a better foundation for a redesign as well.
While I have no idea what the redesign will be, and am not even thinking of that yet, I did find the original photo that I modified for the header image, and I have put that on the pages I have touched. The new image is much wider to allow for a fluid page, and the “vanderwal.net” text is now out of the image and in a truly proper H1, which has eluded me for a long time (and bugged me to no end). The menu of the updated pages has brought back the selected portion of the site with a bleed to the page edge, which was there at the beginning but went away with some shift in the CSS.
I may, quite possibly, shift hosting at some point in the near future, but that may wait until I have some of the underpinnings of the blog tool updated a little. Some of those changes will wait a bit, but they have been brewing a long time. I don’t think I am bringing comments back, but I will likely bring in web mentions (Jeremy Keith has a great explanation). There is a lot going on in the IndieWeb that has been inspiring and may trigger some more changes I have long wanted to finally put in place.
BTW, this is the short version of this post. Two prior attempts at writing up something short both ended up over 2,500 words.
HTML5 Demo Watch
Thanks to the Berg Friday Links I found the Suit up or Die Magazine and Cut the Rope HTML5 demo sites.
Both have me thinking this is really close, and then I remember that one of my favorite periodical apps, the Financial Times, went HTML5 more than a year ago. FT went HTML5 to better manage the multi-platform development process needed for iOS and the multitude of Android versions. While many have said the development is roughly 1.5x what it would take to build for just one platform, it does save an incredible amount of time building an app across all platforms. Since all the major smartphone platforms have their native browsers built on WebKit, there is some smart thinking in that approach.
Personally, my big niggle with the FT app is that while it is browser based it doesn't have Instapaper built in, and it moves me out of the app to send a link to an article (often to myself, because it lacks Instapaper) rather than doing that natively in the app, or exposing browser chrome so that I can do it while still remaining in the app and in their content-reading mode. It would be really smart for FT to sort this out and fix these, as it would keep me in the site and service reading, which I am sure they would love. If they could treat both of those like they do Twitter and Facebook sharing, all within the app, it would be brilliant.
Why Ma.gnolia is One of My Favorite Social Bookmarking Tools
After starting the Portable Social Network Group in Ma.gnolia yesterday I received a few e-mails and IMs regarding my choice. Most of the questions were why not just use tags and del.icio.us. After I posted my Ma.Del Tagging Bookmarklet post I had a lot of questions about Ma.gnolia and my preference as well, and some people thought I was not a fan of it. I have been thinking I would blog about my usage, but given my work advising on social bookmarking and the social web, I shy away from talking about what I use, as what I like is likely not what is going to be a good fit for others. But my work is one of the reasons I want to talk about what I like using, as nearly every customer of mine and many presentation attendees look at del.icio.us first (it kicked the door wide open with a tool that was light years ahead of all others), but it is not for everybody and there are many other options. Much of my work is with enterprises and organizations of various sizes, and del.icio.us is not right for them for privacy reasons. I still add to del.icio.us along with my favorite, as there are many people who have subscribed to that feed and derive value from the subscription, so I take the extra step to keep that feed current.
Ma.gnolia Offers Great Features for Sociality
I have two favorite tools for my own personal social bookmarking: Ma.gnolia and Clipmarks (I don't think I have anything publicly shared in Clipmarks). First the latter: I use Clipmarks primarily when I only want to bookmark a sub-page element out on the web, such as paragraphs, sentences, quotes, images, etc.
I moved to try Ma.gnolia again last Fall when something changed in del.icio.us search and the results were not returning things that were in del.icio.us. Trying Ma.gnolia, by importing all of my 2,200-plus bookmarks, not only allowed me to search and find things I wanted, but I quickly became a fan of their many social features. In the past year or less they have become more social in insanely helpful and kind ways. Not only does Ma.gnolia have groups that you can share bookmarks with, but there is the ability to have discussions around the subject in those groups. Sharing with a group is insanely easy. Groups can be private if the manager wishes, which makes it a good test ground for businesses or other organizations to test the social bookmarking waters. I was not a huge fan of rating bookmarks, as if I bookmarked something I am wanting to refind it, but in a more social context it has value for others to see the strength of my interest (normally 3 to 5 stars). One of my favorite social features is giving "thanks", which is not a trigger for social gaming like Digg, but an interpersonal expression of appreciation that really makes Ma.gnolia a friendly and positive social environment.
Started with Beauty, but Now with Ease
Ma.gnolia started as a beautiful del.icio.us (it was not the first) and the beauty got in the way of usability for many. But Ma.gnolia has kept the beautiful strains and added simple ease of use in a very Apple, delightful-moments sort of way. The thanks are a nice treat, but the latest interactions provide non-disruptive ease of use to accomplish a task without completely taking you away from your previous flow (freaking brilliant in my viewpoint - anything that preserves flow to accomplish a short task is a great step). Another killer feature is Ma.gnolia Roots, a bookmarklet that when clicked hovers a semi-transparent layer over the webpage to show information from Ma.gnolia about that page (who has linked to it, tags, annotations, etc.) and makes it really easy to bookmark that page from that screen. There is also the API (including a replica of the del.icio.us API that nearly all services use as the standard), add-ons, a Creative Commons license for your bookmarks, many bookmarklet options, and feed options. But there are also the little things that are not usually seen or noticed, such as great URLs that can be easily parsed, pages that are all properly marked up semantically, and Microformats broadly and properly used throughout the site (at nearly every pivot).
Intelligently Designed
For me Ma.gnolia is not only a great site to look at and a great social bookmarking site that is really social (as well as polite and respectful of my wishes), but a great example of semantic web mark-up (including microformats). There is so much attention to detail in the page markup that for those of us who care it is amazingly beautiful. The visual layer can be optimized for more white space and detail or for much easier scrolling. The interactions, ease of use, and delightful moments that assist you rather than taking you out of your flow (workflow, taskflow, etc.) make you ask why all applications and social sites are not this wonderful.
Ma.gnolia is not perfect, as it needs some tools to better manage and bulk edit your own bookmarks. It could use a sort on search results (as well as narrowing by date range). Search could use some RedBull at times. It could improve its filtering by using co-occurrence of tag terms, as well as for disambiguation.
Overall for me personally, Ma.gnolia is a tool I absolutely love. It took the basic social bookmarking idea in del.icio.us and really made it social. It has added features and functionality that are very helpful and well executed. It is an utter pleasure to use. I can not only share things easily and get the wonderful effects of social interaction, but I can refind things in my now 2,500 plus bookmarks rather easily.
Let Me Count the 24 Ways
It is that wonderful time of the year for 24 ways, the wonderful 24 gifts from web developers to the rest of us. I deeply enjoyed them last year and am looking forward to the remainder of the gems.
Stikkit Is a Nice Example of a Personal InfoCloud Tool
I have been using the newly launched Stikkit for the last day and am rather enjoying it. Stikkit is a web-based post-it with superpowers: a notepad with bookmarking, a calendar, a lite address book for people, tagging, to dos, and reminders by SMS (in the U.S.) and/or e-mail.
Stikkit is the product of values of n, a start-up founded by Rael Dornfest, formerly of O'Reilly.
This summer I was in Portland and got a preview of Stikkit and was really impressed. It was a slightly different application at that point, but it had the great bones to be a really nice application for one's own Personal InfoCloud. Much of the really good intuitive scripting that turns dates in text into calendar entries, turns text to-do lists into ones that can be checked off, and turns other text into real functionality is in the current version and just sings.
When I used the Stikkit bookmarklet it captured pertinent information from a page that I wanted to track; since the page had date-related information essential to something I have interest in, it made a calendar entry. The focus of the Personal InfoCloud is to have applications and devices that let people hold on to information they have an interest in and move it across devices, as well as add their own context. Stikkit really is a wonderful step toward a rather friction-free approach to the Personal InfoCloud. It puts the focus on the person and their wants and needs for the use of the information in a page. Stikkit can free the information from the confines of the web page and alert the person to important dates. Stikkit also allows the person to share what they find easily.
I think the key to Stikkit is the term "easily", which is the underpinning of the whole application. The only thing I would love to see is Microformats added so that the information in Stikkit could be dropped into my own address book or calendar and synced (if the gods of syncing shine favorably on me that day). Looking at the markup in Stikkit, it seems to be semantically well structured to easily add microformats in the near future.
This has been cross-posted at Stikkit at personalinfocloud.com where there is commenting turned on.
Rebranding and Crossbranding of .net Magazine
From an e-mail chat last week I found out that .net magazine (from the UK) is now on the shelves in the US as "Web Builder". Now that I have this knowledge I found the magazine on my local bookstore shelves with ease. Oddly, when I open the cover it is all ".net".
Rebranding and Crossbranding
In the chat last week I was told the ".net" name had a conflict with a Microsoft product and the magazine is not about the Microsoft product in the slightest, but had a good following before the MS product caught on. Not so surprisingly the ".net" magazine does not have the same confusion in the UK or Europe.
So, the magazine had a choice: not get noticed, or rebrand the US version as "Web Builder" and put up with the crossbranding. This is not optimal, as it adds another layer of confusion for those of us who travel and are used to the normal name of the product and look only for that name. Optimally one magazine name would be used for the English-language web design and development magazine. If this ever happens it will mean breaking a well loved magazine name for its many loving fans in the UK and Europe.
What is Special About ".net" or "Web Builder"?
Why do I care about this magazine? It is one of the few print magazines about web design and web development. Not only is it one of the few, but it flat out rocks! It takes current Web Standards best practices and makes them easy to grasp. It explains all of the solid web development practices and how to not only do them right, but understand whether you should be doing them.
I know, you are saying, "but all of this stuff is already on the web!" Yes, this stuff is on the web, but not every web developer lives their life on the web, and most importantly, many of the bosses and managers who will approve this stuff do not read stuff on the web; they still believe in print. Saying the managers need to grow up and change is short-sighted. One of the best progressive thinkers on technology, Doc Searls, is on the web, but he also has a widely read regular column in Linux Journal. For me the collection of content in ".net" is some of the best stuff out there. I read it on planes and while I am waiting for a meeting or appointment.
I know the other thing many of you are saying: "but it is only content from UK writers!" Yes, so? The world is really flat and where somebody lives makes little difference, as we are all only a mouse click away from each other. We all have the same design and development problems, as we are living with the same browsers and similar people using what we design and build. But it is also amazing that a country a fraction the size of the US has many more killer web designers and developers than the US. There is some killer stuff going on in the UK on the web design and development front. There is great thought, consideration, and research that goes into design and development in the UK and Europe; in the US it is let's try it and see if it works or breaks (this is good too and has its place). It is out of the great thought and consideration that the teaching and guiding can flow. It also leads to killer products. Looking at how far and wide the Yahoo Europe implementations of microformats have gone in their products is telling, when it has happened far slower in the main Yahoo US products.
Now I am just hoping that ".net" will expand their writing to include a broader English-speaking base. There is some killer talent in the US, but as my recent trip to Australia showed, there is killer talent there too. Strong writing skills in English and great talent would make for a great global magazine. It could also make it easier to find on my local bookstore shelves (hopefully for a bit cheaper too).
More XTech 2006
Now that I have had a little time to sit back and think about XTech, I am quite impressed with the conference. The caliber of the presenters and the quality of their presentations were some of the best of any conference I have been to in a while. The presentations got beneath the surface level of the subjects and provided insight that I had not run across elsewhere.
The conference focus on browsers, open data (XML), and high-level presentations was a great mix. There was much cross-over in the presentations, and I soon got the hang that this was not a conference of stuff I already knew (or presented at a more introductory level), but of things I wanted to dig deeper into. I began to realize late into the conference (or after, in many cases) that the presenters were people whose writing and contributions I had followed regularly when I was doing deep development (not managing web development) of web applications. I changed my focus last Fall to get back to developing innovative applications, working on projects that are built around open data and that fill some of the many gaps in the Personal InfoCloud (I also left to write, but that got sidetracked).
As I mentioned before, XTech had the right amount of geek mindset in the presentations. The one that really brought this to the forefront of my mind was on XForms, an Alternative to Ajax by Erik Bruchez. It focussed on using XForms as a means to interact with structured data with Ajax.
Once it dawned on me that this conference was rather killer and I should be paying attention to the content and not just to those in the floating island of friends, the event was nearly two-thirds of the way through. This huge mistake on my part was due to the busy nature of things that led up to XTech, as well as not getting there a day or two earlier to adjust to the time and attend the pre-conference sessions and tutorials on Ajax.
I was thrilled to see the Platial presentation and meet the makers of the service. When I went to attend Simon Willison's presentation rather than the GeoRSS session, I realized there was much good content at XTech, and it is now on my must-attend list.
As the conference was progressing I was thinking of all of the people who would have really benefitted from and enjoyed XTech as well. A conference about open data and about systems for building applications that meet real people's needs is essential for most developers working out on the live web these days.
If XTech sounded good this year in Amsterdam, you may want to note that it will be in Paris next year.
Nick Finck on XHTML Wireframes
Nick does a killer job in a post on XHTML wireframing and the use and reuse of deliverables. This is something I had been doing for years, and I found it made the conception-to-inception process really quick. It also gives you the means to keep your documentation up to date. XHTML wireframes have saved about a quarter to a third of development time.
For those who don't like giving clients clickable wireframes, the pages can be printed or saved out as PDF and annotated.
The other knock is IAs not knowing XHTML or CSS. Somebody working in the practice of web development and web design who does not have an understanding of the handful of elements in XHTML needs to learn them quickly. Go look at CSS Zen Garden to get an idea of what design can be done on top of properly structured XHTML. Lift the hood and look at the mark-up. It is not that difficult.
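To give a flavor of how little is needed, a bare-bones XHTML wireframe page can be as simple as this (the ids and content are placeholders of my own):

    <div id="siteNav">
      <ul>
        <li><a href="products.html">Products</a></li>
        <li><a href="about.html">About</a></li>
      </ul>
    </div>
    <div id="content">
      <h1>Page Title</h1>
      <p>A sentence or two describing what this page will hold.</p>
    </div>

Each named div becomes a content object the CSS can hang presentation on later, so the same file grows from wireframe to functional wireframe to final design.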
In short go read Nick's wonderful piece and give XHTML wireframes a shot.
10 Wonderful Years
Before I forget, as of some point between the 20th and the 30th of November I will have had a personal site on the web for 10 years. All of this started with a few simple pages to say who I am (never very well), post a links page so I could have access to things I have an interest in from any internet access, and play with HTML. Much of the first site was silly with each page having its own look-and-feel (see playing with HTML), but I really wanted to experiment.
The first site was hosted on CompuServe, but within a year it moved to Clark.net hosting (which was bought by Verio and went downhill). In 1997 or 1998 I bought this domain name and soon after had the site properly hosted outside (first with ASP and then ColdFusion). In 2000 I moved the site to PHP, on the same hosting service, but they did not understand open source server hosting. In very late 2000 I started blogging with Blogger, switched in 2001 to hand mark-up, and then by the end of 2001 I implemented my own blogging tool, which still runs the blog (it desperately needs a few hours of attention to get it functioning properly). Work has largely kept me from making other profound changes to the site since then, though there was the redesign to the current presentation in 2002 or so.
These past 10 years have made for a wonderful digital life. If you see me in London or Brighton (given the appropriate venue) let's celebrate this little event of personal expression and personal existence.
Microformats hCard and hCalendar Used for Web 2.0 Conference Speakers
Tantek has posted new microformat favelets (bookmarklets you put in your browser's toolbar). The microformat favelets available are: Copy hCards; Copy hCalendars; Subscribe to hCalendars feed; Copy hCalendars (beta); Subscribe to hCalendars feed (beta). Look at Tantek's Web 2.0 Speakers hCard and hCalendar blog post to understand the power behind this.
Microformats are one of the ways that sites can make their information more usable and reusable to people who have an interest in it. If you have a store and are providing the address, you have a few options to make it easy for people, but a simple option seems to be using the microformat hCard (other options include vCard and links to the common mapping programs with "driving directions").
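As a sketch, a store address marked up as an hCard could look something like this (the store name and address are invented for illustration):

    <div class="vcard">
      <span class="fn org">The Example Store</span>
      <div class="adr">
        <span class="street-address">123 Main Street</span>,
        <span class="locality">Bethesda</span>,
        <abbr class="region" title="Maryland">MD</abbr>
        <span class="postal-code">20814</span>
      </div>
    </div>

Anybody with one of the favelets above can then copy that address straight into their address book rather than retyping it.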
There will be more to come on microformats in the near future here.
Personal InfoCloud at WebVisions 2005
I, Thomas Vander Wal, will be presenting the Personal InfoCloud at the WebVisions 2005 in Portland, Oregon on July 15th. In all it looks to be a killer conference, just as it has been in the past. This year's focus is convergence (it is about time).
WebVisions is one of the best values in the web conference industry these days, as the early bird pricing is just $85 (US). You don't need an excuse, you just go. You spend a Friday bettering yourself and then Saturday in Powell's Books, and the evenings are spent talking the talk over some of the world's best beers served up fresh.
Books Read in 2004
I bought and read one standout book this year, Malcolm McCullough's Digital Ground, mixed in with many more that I enjoyed. Digital Ground stood out as it combined a lot of things I had been thinking about but had not quite pulled together. It brought interaction design front and center in the ubiquitous computing and mobile computing spectrum. I have been working on the Personal InfoCloud for a few years now and this really moved my thinking forward in a great leap. It has me considering better questions and realizing there are many next steps, but few of these next steps the design community (in the broad user experience design sense) seems ready for at this time. One of the key components that has not been thought through is interaction design and the difference place makes in interaction design. It was one book that got my highlighter out and marking it up, which few books have done in the past couple years.
I greatly enjoyed the troika of books on the mind that came out in 2004. The first was Mind Wide Open by Steven Berlin Johnson, which was a relatively easy read and brought to mind much of how we use our minds in our daily lives, but also how we must think of the cognitive processes in our design work. Mind Wide Open focussed on improving one's attention, which is helpful in many situations, but I have had a running question ever since reading the book regarding focus of attention and creative problem resolution (I do not see focus of attention as good for creative problem resolution).
The second book was On Intelligence by Jeff Hawkins. On Intelligence is similar to Mind Wide Open, but with a different frame of reference. Hawkins tries to understand intelligence by refocussing on predictive qualities and not so much on results-based evaluation (the Turing Test). I really like the Hawkins book, which throws some guesses in with the scientifically proven (unfortunately these guesses are not easily flagged), but I value its case for predictive qualities and the need for computing to handle some of those predictive qualities to improve people's ability to handle the flood of information.
Lastly, for the mind book troika, I picked up and have been reading Mind Hacks by Tom Stafford and Matt Webb. This is one of the O'Reilly Hacks series of books, but rather than focussing on software, programming, or hardware solutions these two gents focus on the mind. Mind tricks, games, and wonderful explanations really bring to life the perceptions and capabilities of the grey lump in our heads. I have been really enjoying this as bedtime reading.
Others in related genres that I read this year: Me++: The Cyborg Self in the Networked City by William Mitchell, which was not a soaring book for me, mostly because I had just read Digital Ground and they should have been read in the opposite order, if I had cared to. Linked: How Everything is Connected to Everything Else and What it Means by Albert-Laszlo Barabasi was a wonderful read, once I got through the first 20 pages or so. I had purchased the book in hardback when it first came out and was not taken by it in the first 20 pages. This time I got past those pages and loved every page that followed. Barabasi does a wonderful job explaining and illustrating the network effect and the power curve. This has been incorporated into my regular understanding of how things work on the internet. I have learned not to see the power curve as a bad thing, but as something that has opportunities all throughout the curve, even in the long tail. On the way back from Amsterdam I finally read Invisible Cities by Italo Calvino, which was quite a wonderful end to that trip.
I picked up a few reference books this year that I enjoyed and that have proven to be quite helpful. 250 HTML and Web Design Secrets by Molly Holzschlag. CSS Cookbook by Chris Schmitt. More Eric Meyer on CSS by Eric Meyer.
On the Apple/Mac front the following reference books have been good finds this year. Mac OS X Unwired by Tom Negrino and Dori Smith. Mac OS X Power Hound by Rob Griffiths.
Two very good books for those just starting out with web design (Molly's book above would be a good choice also). Web Design on a Shoestring by Carrie Bickner. Creating a Web Page with HTML: Visual QuickProject Guide by Elizabeth Castro.
The year started and ended with two wonderful Science Fiction romps. Eastern Standard Tribe by Cory Doctorow. Jennifer Government by Max Barry.
Update: I knew I would miss one or more books. I am very happy that 37signals published their Defensive Design for the Web: How To Improve Error Messages, Help, Forms, and Other Crisis Points, as it is one of the best books for applications and web development on how to get the little things right. The tips in the book are essential for getting things right for the people using the site; if these essentials are missed, the site or application borders on poor. Professionally built sites and applications should work toward meeting everything in this book, as it is not rocket science and it makes a huge difference. Every application developer should have this book and read it.
Naked Div and Span Tags Lead to Embarrassment
A word to the wise, don't use naked div or span tags in your markup, as you are asking for trouble. Many validation tools will let you know you have messed up, but you will soon realize this as you start extending your design with CSS.
What is a naked div or span? Look in your markup and if you see <div> or <span> you have naked tags. A div or a span tag should always have an id or class attribute that defines what it is doing. Calling a bare div or span in your CSS is one giant hint things are going wrong. Add CSS modifications to the semantic markup that must be in place, and use an id or class to hook in all other presentation layers.
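A quick before and after, with names that are just my own examples:

    <div><span>Latest entries</span></div>
    <!-- naked: nothing says what this content is -->

    <div id="content"><span class="entryTitle">Latest entries</span></div>
    <!-- identified: the CSS can now hook in cleanly -->

    <!-- and in the style sheet: -->
    #content .entryTitle { font-weight: bold; }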
Sooner or later a class or id attribute will be dropped in the div or span and it may lose the intended value, but since the CSS and markup were not used correctly the headache begins. Naked div and span tags lead to embarrassment at best or headaches and cursing for those that have to clean up the mess.
Web Standards Opening
Are you looking to practice and hone your standards-compliant web design craft? Are you looking for an environment that is Web Standards friendly and want to join a solid Web development team? You now have found a possible match. Does your vernacular include: "Zeldman, Eric, Tantek, Bowman, Christopher, Shea, and/or Molly said..."? Are you looking to get recognized for your Standards work? Can you make Photoshop purr? Do you know the bugs in Dreamweaver's rendering engine? Can you live with just one table in your layout? Are you proud of your craft and want to hone it more?
If you answered yes and are looking for a change of scenery read the following and send me an e-mail (see contact above).
We are looking to hire a strong Web Designer who has strong experience with hand-coding Web Standards (HTML, XHTML, and CSS) that validate. The designer must also have experience with accessibility (Section 508) and have solid web graphic design skills. Experience with information architecture and user-centered design processes are very helpful (wireframes, usability testing, etc). Experience with leading design and redesign processes is very helpful. Strong communication skills, including design documentation is essential. We design with Dreamweaver and Homesite and use Adobe and Macromedia graphics applications. [INDUS Corporation Web Designer Job Listing]
Web Standards and IA Process Married
Nate Koechley posts his WebVisions 2004 presentation on Web Standards and IA. This flat out rocks, as it echoes what I have been doing and refining for the last three years or more. The development team at work has been using this nearly exclusively for about a couple of years now on redesigns and new designs. This process makes things very easy to draft in a simple wireframe, then move to functional wireframes that are clickable and have named content objects in the CSS. The next step is building the visual presentation with colors and images.
This process has eased the lack-of-content problem (no content, no site, no matter how pretty one thinks it is) often held up by the "more purple and make it bigger" contingents. This practice has cut development and design time by more than half and greatly decreases maintenance time. One of the best attributes is the decreased documentation time, as using the Web Developer Extension toolbar in Firefox exposes the class and id attributes that provide semantic structure (among many other things this great tool provides). When the structure is exposed, documentation becomes a breeze. I can not think of how or why we ever did anything differently.
Best Web Development Practices
Those of you looking for a relatively short article or essay on current best Web practices should look no further than the Best Web Development Practices provided by Apple. Yes, this focusses on web standards, but what best practice does not? Web standards are the cornerstone of accessibility and make the same content usable on mobile devices (one caveat: the article will not print on 8.5 by 11 inch paper).
Make My Link the P-link
Simon hit on plinks as an echo to Tim Bray's comments and variation on Purple Numbers (Purple Numbers as a reference). As I have mentioned before, page numbers fail us and these steps are a good means to move forward.
Simon has also posted more on plinks, and in there points to Chris Dent's Big Day for Purple Numbers.
I have been thinking for quite some time about using an id attribute in each paragraph tag that includes the site permalink as well as the paragraph within that entry. This would look like: <p id="1224p7">. This signifies permanent entry 1224 and paragraph 7 within that entry. What I had not sorted out was an unobtrusive means of displaying this. I am now thinking about Simon's JavaScript as a means of doing this. The identifier and plink would be generated by PHP for the paragraph tag, which would be scraped by the JavaScript to generate the plink.
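A rough sketch of the JavaScript side as I picture it (untested, and using the id scheme from my example above):

    <script type="text/javascript">
    window.onload = function () {
      // find every paragraph that carries an id and append a plink to it
      var paras = document.getElementsByTagName('p');
      for (var i = 0; i < paras.length; i++) {
        if (paras[i].id == '') continue;
        var plink = document.createElement('a');
        plink.href = '#' + paras[i].id;
        plink.className = 'plink';
        plink.appendChild(document.createTextNode('#'));
        paras[i].appendChild(plink);
      }
    };
    </script>

The PHP writes the id into each paragraph tag and the script scrapes those ids to write out the plinks, so browsers without JavaScript simply see clean paragraphs.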
The downside I see is that it limits edits to the end of the entry, using the "Update" method of providing edits and editorial comments. The other downside is that the JavaScript is not usable on all mobile devices, nor was the speed of scrolling down Simon's page that fluid in Safari on my TiBook with 16MB of video RAM.
Definition Lists Defined
Definition lists explained, with examples, by Max Design. Definition lists are often used incorrectly, and these examples show how to use definition lists when it is semantically correct to do so. Too often definition lists are used for presentation purposes when the information is actually a list, ordered or not. Using the proper structure for information is critical for those who can not see the presentation (the visually impaired, mobile devices, text readers for those driving, etc.).
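When the content genuinely is a set of terms and their definitions, the markup is simple (terms of my own choosing):

    <dl>
      <dt>blog</dt>
      <dd>a frequently updated site with entries posted in reverse chronological order</dd>
      <dt>wiki</dt>
      <dd>a site whose pages can be edited by its readers</dd>
    </dl>

If the items do not pair a term with its definition, an ul or ol is the honest choice.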
[hat tip Mike]
Doing this how long
I realized today that I have been marking up and posting to my own personal Web pages since November 1995. I have been trying to figure out when all this started. The pages started as "The Growing Place", which included the links page along with a handful of other pages on CompuServe's initial hosting of personal pages. I moved from there to Clark Net in late 1996 so I could get CGI access and have my own e-mail (well, not really my own). In late 1997 I bought vanderwal.net and finally moved it to a couple of hosting homes in 1998 and 1999. The site has been with its current host since 2000, which has provided great service and resources since then. (I actually had another personal site with this host much earlier and ran a not-so-personal site with the host for a short while.)
Why all of this today? I don't know. It could be that I finally found when CompuServe started hosting members' pages. It does not seem like that long ago until I think that I have been building a presence on the Web for coming up on nine years. I have been doing this professionally since 1996. I have been working professionally as a geek since 1988, either as my full-time role or just one of the hats I wore. I have learned a lot about application development and Web development in all these years. It is still about getting the information into the hands of people who are looking for it when they need it.
XFN Social Network
XFN, the XHTML Friends Network, has been developed and is now explained. Tantek, Matt Mullenweg, and Eric Meyer are the force behind this better-than-FoAF implementation of relationships. The key behind XFN is ease.
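The ease is that XFN rides on the rel attribute of links you are already making, something like (name and URL invented):

    <a href="http://example.com/" rel="friend met colleague">A. Friend</a>

Your blogroll becomes a human-readable and machine-readable statement of your relationships with no new file format to learn.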
WaSP interview with Todd Dominey
The Web Standards Project interviews Todd Dominey, who was behind the standards-based PGA redesign. The interview raises the problems Content Management Systems cause with valid markup. Todd also highlights it is much easier to move towards standards when working from scratch than cleaning up previously marked-up content.
Harpers redesigned
Harpers Magazine has been redesigned by Paul Ford. Paul discusses the Harpers redesign on his own site Ftrain.
The site is filled with all the good stuff we love, valid XHTML, CSS, accessible content (meaning well structured content). The site is clean and highlights the content, which is what Harpers is all about - great content. The site is not overfilled with images and items striking out for your attention, it is simply straightforward.
We bow down before Paul and congratulate him on a job very well done.
Interdependence of structure, information, and presentation
Peter J. Bogaards explains The Document Triangle: The interdependence of the structure, information and presentation dimensions. This troika is very important for clear information consumption, but also for information reuse. Structure is extremely important to transmitting information, and equally important to reusing it. Information lacking structure is nearly as reusable as a newspaper article printed on paper.
One great place to explore the ease of information reuse and the effect the presentation layer has is CSS Zen Garden, where nearly all the content is identical across the various layouts and designs. The structure of the content provides a solid framework for reworking the presentation layer. The presentation layer can add to or detract from the clarity of the message as well as the attraction a user may have to the message.
CSS Tabs part 2
Doug Bowman provides Sliding Doors 2 for ALA. Sliding doors are rounded tabs done with CSS, meaning the text is not in a graphic, and the tabs have rollover effects without having to build rollover images and deal with JavaScript. Doug's version 2 of sliding doors provides for those with pages in a CMS or other non-hand-built pages. This beats JavaScript sniffing the URL to set the current tab.
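For those who have not read the original article, the heart of the technique is roughly this (ids and image names are placeholders of mine):

    <ul id="nav">
      <li><a href="home.html">Home</a></li>
      <li><a href="about.html">About</a></li>
    </ul>

    <!-- and in the style sheet: -->
    #nav li { float: left; background: url("tab-right.gif") no-repeat right top; }
    #nav li a { display: block; background: url("tab-left.gif") no-repeat left top; padding: 5px 15px; }

The two background images slide over each other, so the tab stretches gracefully with the length of the text and the user's text size.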
Building Web pages for crippled IE browser
Microsoft and others are posting the workarounds needed for the Web pages you build if they require plug-ins. Java and ActiveX seem to be the focus at this point. Here we go: Microsoft guide for building to the new neutered IE browser, Apple developer guide for post-Eolas development, Real Networks guide for embedding, and Macromedia guide. [hat tip Craig Saila]
Hyper Text 2003 Papers posted
The Hyper Text 2003 Conference has posted the Hyper Text 03 Papers online. There are some great reads in the pile, if you enjoy theoretical and future-current uses of hyper text as a tool and theory.
Kottke and others on standards and semantics
Kottke provides a good overview of Web standards and semantically correct site development. Jason points out, as many have, that just because a site validates with the W3C does not mean that it is semantically correct. Actually, there are those who take umbrage at the use of the term semantic for (X)HTML, as many consider it structural tagging of the content instead, but I digress. A "valid" site could use a div tag where it should not have, for example where a paragraph tag should have been used instead. Proper structural markup is just as important as valid markup. The two are not mutually exclusive; in fact they are very good partners.
One means of marking up a page is to begin with NO tags on the page in a text editor, then mark up the content items based on what type of content they are. A paragraph gets a "p" tag, tabular data is placed in a table, a long quote is put in a "blockquote" tag, an ordered list gets "ol" tags surrounding it with the items in the list wrapped in "li" tags, and so forth. Using list tags to indent content gets avoided with this method. Once the structure has been properly added to the document it is time to work with the CSS to add presentation flair. This is not rocket science and the benefits are very helpful in transitioning the content to handheld devices and other uses. The information can be more easily scraped for automated purposes too, if needed.
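Worked through on a small scale, the result looks something like this (content invented):

    <h1>Trip Report</h1>
    <p>We spent three days testing the new process.</p>
    <blockquote>
      <p>A long quotation from the source document goes here.</p>
    </blockquote>
    <ol>
      <li>Gather the content</li>
      <li>Mark up its structure</li>
      <li>Style it with CSS</li>
    </ol>

Nothing in that markup says how the page looks; the CSS carries all of that separately.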
It is unfortunate that many manufacturers of information tools do not follow this framework when transforming information into HTML from their proprietary mire. An MS Word document creates horrible garbage that is both non-structural and not valid. The Web is a wonderful means to share content, but mangled markup and no structure can render information inconsistent at best, if not useless.
While proper development is not rocket science, it does take somebody who knows what they are doing, and not guessing, to get it right.
Others are posting on Jason's post, like Doug Bowman and Dave Shea and have opened up comments. The feedback in Doug's comments is pretty good.
Understanding the Web Medium
Joe Gillespie has posted a current feature, Factor-X, about understanding the medium of the Web and digital information. Joe explains that many who come from the print world of graphic and information design will create the information in graphics and then slice and post those graphics. The Web is not only for reading information, but also for reusing information. HTML pages can, if marked up properly (which is not difficult at all), be read by audible site readers for those with visual impairments or for those that are doing other activities, like driving. HTML pages, if built to the standards, can also easily be used on mobile devices with nothing more than a browser.
Understanding the medium is where Joe is taking the readers of this article. One of the advantages of the Web is having the ability to structure the information easily and modify the presentation as needed or wanted. There are standard interface conventions that are easily understood with HTML that get broken in Flash (the hand pointer shows for all content, including that which is not clickable). The great advantage of HTML is having access to the information directly, so one can quote and have an easy means of attributing quotes by linking to the source.
Go read Joe's article, actually bookmark Web Page Design for Designers and go read monthly, you will be happy you did.
Blogs get higher Google rankings thanks to proper HTML
Matt points out Google ranks blogs highly. This seems to be the result of Google giving strong preference to titles and other HTML elements. Tools like TypePad help the user properly develop their pages, which Google deems highly credible.
Matt's complaint is that his very helpful PVR blog is turning up top results in searches for Tivo information and other recorder info. Matt's site is relatively new and outranking the information he is discussing.
This is something I personally run into, as things I write about here often get a higher Google ranking than the information I am pointing to, which is the source and focus of the information. I have often had top Google ranks for items that are big news on CNN or the New York Times, which I am pointing to in my posts.
Much of the reason for this seems to be understanding proper HTML use and not putting my branding at the forefront of the message. CNN puts its name first in the title of its pages (not the headers, which also have benefit if they are in heading tags). The tools and people building Web pages with attention to proper naming and labeling will get rewarded for their good work (if a top Google rank is a reward).
I have written on this in the past in Using HTML tags properly to help external search results from April, which mostly focussed on search ignoring Flash except for the few HTML elements on a page wrapping the Flash. Fortunately there have been enough links pointing to the site that was lacking the top rank to raise it to the top Google rank.
Some of the corrected Google ranking will come over time as more sites begin to properly mark up their content. The Google ranks will also shift as more links are processed by Google and their external link weighting assists in correcting the rankings.
Adaptive Path redesign exposed
Doug Bowman discusses the Adaptive Path redesign. Doug provides good insight into the CSS based redesign, which can be seen at the Adaptive Path site.
Bray's escaping for RSS
Tim Bray discusses escaping characters in RSS feeds.
Safari renders pages much better
Yes, I downloaded the new Apple Safari browser and it now nearly perfectly renders one page it has always mangled. I can now read the International Herald Tribune in my favorite browser.
The one downside in the few minutes of playing with Safari is that it still does not let the user tab between all form elements. The user can not tab to select boxes, nor check boxes. This is a major usability problem in my viewpoint, but I may switch and use it for the CMS tools here at vanderwal.net, as I need the spell checking.
Steve Champeon on the Future of Web Design
Steve Champeon on Progressive Enhancement and the Future of Web Design. This is almost like sitting with Steve and getting the background, and how that reflects on the future of markup and Web design, directly from Steve.
Testing HTML validation of output of tools
Knopf offers a comparison of how well Help Authoring Tools create HTML. The testing includes compactness of code, but even better is validating the output against the W3C. Dreamweaver MX does quite well in the testing. It would be good to expand the testing to some of the other tools, like FrontPage and GoLive.
Building with Web Standards or how Zeldman got the future now
I am awaiting Jeffrey Zeldman's Designing with Web Standards, which is available for order from Amazon (Designing with Web Standards). I have been a believer in designing with Web Standards for years, but it was Jeffrey who pushed me over the edge to being an evangelist for Web standards. One of the best things going for Web standards is they make validation of markup easy, which is one of the first steps in making a Web site accessible.
I work in an environment that requires Web standard compliance as it provides information to the public as a public good. Taxpayers have coughed up their hard earned dollars to pay for research and services, which are delivered to them on the Web. The public may access information from a kiosk in an underfunded library with a donated computer on a dial-up connection, but they can get to information that they are seeking. The user may be disabled and relying on assistive technology to read the public information. The user may be tracking down information from a mobile device as they are travelling across country on their family vacation. Each of these users can easily get the public information they are seeking from one source, a standard compliant Web page.
Every new page that is developed by the team I am on validates to HTML 4.01 transitional. Why 4.01 transitional and not XHTML? We support older browsers, and 4.01 transitional seems to provide pretty good access to information no matter the browser or device. We are not on the cutting edge, but we know nearly everybody can get the information their tax dollars have paid for. I dream of a day job building XHTML with full CSS layout, but with the clients I work for we still aim at the public good first.
I am very happy that Jeffrey has his book coming out, as it should bring to light for more developers what it means to build to Web standards. Every contract that is signed by the agency I work for must validate to HTML 4.01 transitional, but very few of the sites do when they come through the door to be posted. We provide a lot of guidance to help other developers understand, but finding a solid foundation to work upon is tough. When hiring, many folks claim to have experience building valid sites, but most soon realize they never have to the degree of getting a W3C validation.
Building our pages to 4.01 does not mean we are going to stick with 4.01 forever. We plan for XHTML by closing all tags and staying away from tags deprecated in 4.01 strict. Much of what we create only needs a few scripts run to convert the pages from HTML to XHTML 1.0 transitional. Having the closing tags makes scripting to find information, and search and replace, much easier. (Enough for now, buy the book, we will have more later.)
Using HTML tags properly to help external search results
There are some essentials to building Web pages that get found with external search engines. Understanding the tags in HTML and how they are (or rather, should be) used is important. The main tags for most popular search engines are the title, heading (h1, h2, etc.), paragraph (p), and anchor (a). Different search engines have given some weight in their ranking to metatags, but most do not use them or have decreased their value.
Google gives a lot of weight to the title tag, which is often what shows as the link Google gives its user to click for the entry. In the title tag the wording is important too, as the most specific information should be toward the front. A user searching for news may find a weblog toward the top of the search results, ahead of CNN, as CNN puts its name ahead of the title of the article. A title should echo the contents of the page, as that will help the ranking of the pages; titles whose terms are not repeated in the page can get flagged for removal from search engines.
The headings help echo what is in the title and provide breaking points in the document. Headings not only help the user scan the page easily, but also are used by search engines to ensure the page is what it states it is. The echoing of terms is used to move an entry toward the top of the rankings, as the mechanical search engines get reinforcement that the information is on target for what their users may be seeking.
The paragraph tags also are used to help reinforce the text within them.
The anchor tags are used for links and this is what the search engines use to scrape and find other Web pages. The text used for the links is used by the search engines to weight their rankings also. If you want users to find information deep in your site put a short clear description between the anchor tags. The W3C standards include the ability to use a title attribute which some search tools also use. The title attribute is also used by some site readers (used by those with visual difficulties and those who want their information read aloud to them, because they may be driving or have their hands otherwise occupied) to replace the information between the anchor tags or to augment that information.
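Pulling these elements together, a page built for findability carries something like this (a made-up example):

    <title>Grilled Salmon Recipe - Example Cooking Site</title>
    <!-- and in the body of the page: -->
    <h1>Grilled Salmon Recipe</h1>
    <p>This grilled salmon recipe takes about twenty minutes on a hot grill.</p>
    <a href="salmon.html" title="Step by step grilled salmon recipe">grilled salmon recipe</a>

The specific words lead the title, the heading echoes the title, the paragraph reinforces both, and the link text describes its target.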
Example
The application I built to manage this weblog section is built to use each of these elements. This often results in high rankings in Google (and relatedly Yahoo), but that is not the intent; I am just a little fussy in that area. It gets to be very odd when my weblog posting reviewing a meal at Ten Penh is at or near the top of a Google Ten Penh search. The link for the Ten Penh restaurant is near the bottom of the first page.
Why is the restaurant not the top link? There are a few possible reasons. The restaurant page has its name as "tenpenh" in the title tag, which is very odd or sloppy. The page does not contain a heading tag nor a paragraph tag, as the site is built with Flash. There is no semantic structure in the Flash for those search engines that scrape Flash. Equally, the internal page links are not read by a search engine, as they are in Flash also. A norm for many sites is having the logo of the site in the upper left corner clickable to the home page of the site, which, with the use of the alt attribute in an image tag within an anchor link, allows each page to add value to the home page rank (if the alt attribute would have "Ten Penh Home" for example).
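That logo link would look something like:

    <a href="/"><img src="logo.gif" alt="Ten Penh Home" /></a>

Every page then quietly reinforces the home page for the terms that matter.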
Not only does Flash hinder the scraping of information, the use of JavaScript links wipes those out as a means to increase search rankings. Pages with dynamic links that are often believed to ease browsing (which may or may not prove the case, depending on the site's users and the site goals, in actual user testing) hurt the findability of the information in the site by external search engines. JavaScript is not scrapable for links, nor is text written out by JavaScript.
HTML Hell and beyond
Eric S. Raymond offers his Welcome to HTML hell page (including content, style, and extension hell). I agree with every bit in this.
Apple Word Replacement Rumor and Information Structure Dreams
Rumor has it Apple is working on an MS Word replacement. This would be a great thing if it would read native Word files seamlessly, but even better would be turning out valid HTML/XHTML. MS Word has always made a huge mess of our information with its conversion to something it "calls" HTML; it is not even passable HTML. One could not get a job using what Microsoft outputs as HTML as a work sample, heck, it would not even pass the laugh test and it may get somebody fired.
One of the downsides of MS Office products is that they are created for styling information, not marking up information with structure to which style can hang. MS Word allows people (if they turn on or keep the options turned on) to create information sculptures with structure and formatting of the information. What Word outputs to non-Word formats is an information blob that has lost nearly all of its structure and functionality; the blob does not really have the structure the Word document had to begin with. What Web developers do is put the structure back into the information blob to recreate an information sculpture again.
You ask why structure is important? Structure provides the insight to know what is a header and what is a sub-header. Structure provides the ability to discern bulleted lists and outlines. Structure makes it script-kiddie easy to create a table of contents. Structure makes micro-content accessible and easier to find with search. Structure provides better context. Structure provides the ability to know what is a quote from an external document and point to it easily. Structure eases information portability and makes mobile access easier. These are just a few uses of structure.
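As an example of that script-kiddie-easy table of contents, a few lines of JavaScript can walk the headings of a structured page and write one out (a sketch of my own, assuming each h2 carries an id and plain text):

    <ul id="toc"></ul>
    <script type="text/javascript">
    function buildToc() {
      // collect every h2 and add a link to it in the toc list
      var toc = document.getElementById('toc');
      var heads = document.getElementsByTagName('h2');
      for (var i = 0; i < heads.length; i++) {
        var item = document.createElement('li');
        var link = document.createElement('a');
        link.href = '#' + heads[i].id;
        link.appendChild(document.createTextNode(heads[i].firstChild.nodeValue));
        item.appendChild(link);
        toc.appendChild(item);
      }
    }
    window.onload = buildToc;
    </script>

Try doing that with an information blob that has lost its structure.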
Does MS Word have this structure capability? Yes. Do people use it? Not really. If people use it, does MS Word keep the structure? Rarely, as it usually turns the structure into style. This is much like somebody who spent months in the gym to build a well defined physique only to have the muscles removed and their shirt stuffed with tissue paper to give it the look of being in shape. Does the person with the tissue paper muscles have the ability to perform the same as the person who is really in shape? Not even close.
Structure is important not only for the attributes listed above, but also for those people who have disabilities and depend on the information being structured to get the same understanding as a person without disabilities. You say MS Word is an accessible application? You are mostly correct. Does it create accessible information documents? Barely, at best. The best format for information structure lies in HTML/XHTML/XML, not in styles.
One current place that structure is greatly valuable is Internet search. Google is the top search engine on the Internet. Google uses the text in hyperlinks, the information in title tags, and information in the heading tags to improve the findability of a Web page. What are these tagged elements? Structure.
One of the nice things about a valid HTML/XHTML Web document is that I can see it and use it on my cell phone or other mobile devices. You can navigate without buttons and read the page in chunks. Some systems pre-parse the pages and offer the ability to jump between headings to more quickly get to the information desired.
These are just a few reasons I am intrigued with the Apple rumor. There is hope for well structured documents that can output information in a structured form that validates to the W3C standards, which browsers now use to properly render the information on the page. I have very little hope in the stories that MS is working toward an XML storage capability for Office documents, because we have heard this same story with the last few Office releases and all were functional lies.
Perl site scraper
Screen scraping with Perl WWW::Mechanize will come in handy for many tasks. The information reuse possibilities are wonderful. This does seem to require somewhat valid HTML/XHTML to function properly.
Understanding Visual Organization
Luke Wroblewski has a must read article, Visible Narratives: Understanding Visual Organization published at Boxes and Arrows. The article shows the importance of and how to visually structure information to assist the user with finding and focussing on content they are interested in. This lesson is one that is often missed in Web site redesigns.
A visual presentation of information is an essential tool to have in your tool belt. Lack of a usable visual structure can hinder your users from finding the information they are seeking. Many users come to a new site and perform a quick scan of the information available, looking for something that attracts their attention as it relates to terms, visual cues, or a vocabulary that will get them to the nuggets they desire.
The user's eye needs resting places to guide them, or to help them jump from topic to topic until one topic or link draws them (as they believe) closer to the information. Visual organization helps facilitate the user's scanning and reading.
If the visual organization uses HTML markup's heading tags and CSS for presentation, the information has an underlying structure. The underlying structure can be used to assist bots (non-human search tools that scrape sites looking for information) in finding information. The automated scraping or searching is augmented by the markup, as the information in the headings is often given greater value, and this can help the information get consumed by users interested in finding and using it. With a little bit of scripting a properly marked-up Web page can generate a table of contents. This visual structuring eases the reuse of information, which is always a benefit.
Zeldman discusses XHTML 2
Zeldman provides insight into XHTML 2, responding to and agreeing with Mark Pilgrim's Semantic Obsolescence rant.
Dumbing down of computer and information design books
My trip to bookstores in Florida had me seeing what the person on the street sees as computer books, "Dummies" guides. There were eight shelves of Dummies computer books with a handful of Microsoft publisher books thrown in for color variation.
When I returned home I took a trip to Barnes and Noble and found the computer Web section filled with GUI tool books (Dreamweaver, FrontPage, GoLive, etc.) where there had been shelves of HTML, DHTML, CSS, Perl, proper design (by Zeldman and Veen), and Information Architecture books. This trend worried me more than what I saw in Florida. The GUI books did not get into proper markup or understanding of information; they were concerned with how to make better use of more bandwidth. Nowhere in the many books I pulled off the shelf did I see any mention of the user or information use (let alone information reuse). The beauty of learning how to develop properly is knowing when the GUI tools are wrong; better still is knowing that what is built properly will work well on broadband and on mobile devices. If the information is important and cared about, it should be made available, accessible, and usable.
More future proofing information
Speaking of future proofing your information, Mark discusses CMS and information reuse. One quote that brings this to light is:
This ties you to your content management system. The further removed your raw data is from your published form, the harder it will be to migrate away from the tools that convert one to the other.
Mark also discusses how, using HTML, he created PDF files of his Dive Into Accessibility essays. HTML has much of the semantic tooling needed and the structure to provide a reusable information repository.
Text to HTML tool
Dean is building a text-to-HTML tool to easily add structural and typographical features. This would provide proper tagging and encoding of typographical characters without the user having to know what these elements are, just what they do.
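A rough guess at the kind of substitutions such a tool performs (these rules are simplified illustrations, not Dean's actual code):

    #!/usr/bin/perl
    # Naive text-to-HTML pass: typographic entities plus paragraph tags.
    use strict;
    use warnings;

    while (my $line = <>) {
        chomp $line;
        next unless $line =~ /\S/;                   # skip blank lines
        $line =~ s/--/&mdash;/g;                     # double hyphen to em dash
        $line =~ s/"([^"]*)"/&#8220;$1&#8221;/g;     # straight to curly quotes
        $line =~ s/'/&#8217;/g;                      # apostrophes
        print "<p>$line</p>\n";                      # wrap in a paragraph
    }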
Accessible persona
I was reminded today of Marcus, a persona in Mark Pilgrim's accessibility tutorial for weblogs (and anybody else interested). Marcus is actually a real person (as pointed out by Mark), which drives the persona home. This may be my favorite current example for accessibility.
At work we constantly get outside developers turning over non-accessible sites or applications. The client I work for is put through the painful task of explaining what needs to be done to meet Section 508 requirements. The teeth pulling the client goes through is shameful, as the outside contractors want every single item spelled out and want to know why. (They usually have built the application or site by reusing a previous product built by somebody who is no longer there, so they can do the job cheaply and make a better profit; had they built it from the beginning knowing and understanding the requirements, it would have been easy and inexpensive to do.) Often I am asked to help define what needs to be done and why something fails compliance, usually as a sanity check (accessibility has been an area of strength for four years or more). The regulations are very broad and do not define the exact actions that should be avoided (this is the correct approach, as it allows for technological improvements).
Marcus is a great example to keep on the shelf, as much of the information I work with during the day is public information that the taxpayers paid for, whether they are sighted, physically able, have their hearing, or not. We know that a decent number of users come to government sites from publicly available systems (like those in libraries) with technology that is nowhere near current. These people should be able to get to the information and use it, and the applications around it, as others can. Marcus is usually what we treat as the worst-case scenario, using Lynx, but also what we think of as our baseline. Knowing Marcus exists and is real helps greatly.
There is also a benefit side to building accessible information: it is future-ready information. Information that is fully accessible is ready to use with no (or in rare cases slight) modification on mobile devices. This is the wonderful thing about building accessible information. One of the first steps is building information that validates to a standard. The next is separating style from content by using style sheets, which make it easy to override any style that is problematic or to allow for scalable styles. This too helps create information that is future compatible. Accessible information can also be easily reused apart from its presentation, as it is built to standards that ease reuse.
Accessible information is also structured properly. Structuring information properly is far more than how it looks; it is how it is marked up. A header on a Web page has an h1, h2, etc. tag around it, which eases the ability to build a table of contents or to use that header as a contextual aid summarizing the information below it (that is, if headers are tagged properly and the content in the header is properly descriptive). Structuring the information helps the information be reusable outside the Web page, as that is what HTML does: provides structural elements in the markup tags. Information to be reused has needs, including structure and context that are easily discernible, which validating HTML provides as a basic foundation; of course there is much that can be improved upon basic HTML markup, but it addresses the information needs. Building accessible information applications (Web sites included) keeps money from being wasted in the future, and it does not require buying a third-party application, which often causes more problems than it solves where accessibility is concerned (this will not always be the case).
As Joe Clark's book, Building Accessible Websites, points out, accessible does not mean ugly or plain. Joe walks the reader through how to make beautiful sites that are also wonderfully usable for folks like Marcus (side note: Mark Pilgrim edited Joe's book). Another excellent book on accessibility, and my favorite as it works very well for Web application developers (and I agree with its approach to information in complex tables more than Joe's), is Accessible Web Sites. These are two great resources for learning how to do things properly. I will be working on longer reviews of each in the near future.
Zeldman uncovers the mess of the Aventis site
Zeldman hits the ugly nail on the head discussing Aventis. Anybody who believes there is no such thing as poor information design, or a site screaming for an Information Architect, has not been to Aventis; there are so many problems, beginning and ending with the drop-down menus that overlap. Zeldman points out, as he always does, the need to understand what the HTML markup and code do in a browser, and to understand not only the browser but the user. The Aventis site fails in many areas, but tucking the product information under "About Aventis" makes it very difficult to find.
Zeldman has also been sharing his wonderful redevelopment pains and discoveries. I may tackle the last couple layout bugs I have left if he cracks the right nut.
Title to the top
One of the modifications here at vanderwal.net was making better use of the title in the HTML header. This is something I preach at work: the title should describe the information, as it is used by search engines. Google uses it in its algorithms and in its hyperlink to the information. I took the category in my homebuilt CMS, placed the category name in the title, and put the same title in the H1 header tag at the top of these pages. After the first Googlebot scrape of this site the incoming Google clicks quadrupled in 24 hours and have stayed rather constant.
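The pattern, using this very category as the example (the markup is a simplified sketch of it):

    <head>
      <title>Off the Top: HTML Entries</title>
    </head>
    <body>
      <h1>Off the Top: HTML Entries</h1>
      ...
    </body>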
I knew something like this would happen, but not to this extent. I guess there are so many poorly formed Web pages out there that a properly formed page sticks out (tongue partially in cheek). The categories are set based on my personal taxonomy, and each entry can be cross-classified, as there are often cross-cutting issues in a post. The things people are seeking when they end up on these pages are extremely broad, much like the topics covered here. Some of the Google queries end up at Off the Top as it is near the top in the search results, though not nearly as on target as others farther down the list that have not structured their information properly.
GUI vs Hand Coding for HTML
Dori posts a matrix explaining how well GUI Web markup tools perform, and it is not surprising that most do poorly. Dreamweaver MX does admirably, but its JavaScript is not up to par. The best coding is still hand coding. If you do not know how to hand code, your job will never be done, and it will not be right either. The tools have come a long way, but they are not there yet.
Redesign explained
You most likely have noticed: there has been a redesign here. This new site is nearly all XHTML and uses the CSS box model. Going through this process introduces one to all the bugs browsers have that you need to work around. I found that IE 5.5 and up on the PC is horribly buggy and does not follow the standard box model well. Netscape 7 on the PC is the best browser. On Mac OS X the best browsers have been Navigator/Chimera and IE 5.2 (through this process Chimera became my favorite browser on almost any platform).
You dare ask why the redesign? Well, it was well past time. The last design had been around for a year or so and the CSS was giving me fits. I really wanted cleaner markup and a font size that scales. I believe the font now scales on all web standards compliant browsers and platforms. It should even scale on the PC's IE 5.5 and 6 browsers (this functionality has been broken there since day one; if you need a browser that scales font sizes properly, get a real browser, one that is Mozilla based will do just fine). I am trying to remove the thin white line under the logo graphic and above the menu bar; it is showing up in IE on the PC and in versions of Mozilla on the Mac (please contact me if you have a solution).
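The scalable type comes down to relative units; a sketch of the approach (the selectors are illustrative, not necessarily this site's style sheet):

    body { font-size: 100%; }   /* start from the user's preferred size */
    p    { font-size: 1em; }    /* relative units scale with the user's setting */
    h1   { font-size: 1.6em; }
    /* absolute units, e.g. font-size: 12px, will not scale in IE 5.x/6 on the PC */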
I also wanted a layout that would permit a cleaner presentation. I moved the global navigation to the top bar; it uses an unordered list and CSS to put the items in line and give them the roll-over (I stole part of the code from Scott and tweaked it). I also moved the local navigation to the left, which has been a joy as it is near the scroll bar and has made life a little easier. The right navigation may also become a place for other goodies. The right navigation has also helped me on the links page, as there are a ton of links and I wanted sub-navigation (yes, the links page is going to get an overhaul in the near future, with some needed integration with other elements of the site). The redesign also gives the opportunity to introduce some small photos or images on the pages and not have other colors overwhelm them.
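The list-based navigation technique, roughly (the id and colors here are made up, and Scott's original differs):

    <ul id="globalnav">
      <li><a href="/">Home</a></li>
      <li><a href="/random/">Off the Top</a></li>
    </ul>

    <style type="text/css">
      #globalnav li { display: inline; list-style: none; }  /* run the list on one line */
      #globalnav a { padding: 2px 8px; text-decoration: none; }
      #globalnav a:hover { background: #369; color: #fff; } /* the CSS roll-over */
    </style>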
The box model drove me crazy, but I created some cheats I hope to share in the near future, once I get some minor tweaks around here done. The redesign was done solely on the TiBook using a combination of Macromedia Studio MX (Dreamweaver MX is a decent text editor, but I could not find a way to have it show a passable rendering of the pages in its own browser) and BBEdit. I started the process with outlines in OmniOutliner (a tool that rocks and is unparalleled) as well as OmniGraffle to put together some wireframes to help me sort out the layout and functionality. This set of tools has been one of the best combinations I have used; I wish I could use this combo at work. I really am missing Adobe Photoshop, which may become my next software purchase, as it is a great tool that saves time.
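For the curious, one widely circulated box model cheat of the era is Tantek Çelik's hack (not necessarily one of mine), which feeds IE 5.x an inflated width and then corrects it for browsers that parse CSS properly:

    div.content {
      border: 2px solid #333;
      padding: 25px;
      width: 400px;            /* IE 5.x treats this as the outer, border-box width */
      voice-family: "\"}\"";   /* IE 5.x stops parsing the rule here */
      voice-family: inherit;
      width: 346px;            /* compliant browsers: 400 minus 2x25 padding and 2x2 border */
    }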
Please, please write with questions or bugs found. Thank you. I did this for me, but I hope you enjoy it.
Zeldman redoes and keeps teaching
Zeldman is redesigning and explaining the whole thing as he moves his site into the present. Many of us learned from Zeldman's A List Apart in the early days of Web development, and he keeps up this wonderful spirit today. Openly sharing and the desire to learn are what the Web was built upon and are what keep it great.
Wired goes X(HTML)
While trying to catch up on friends and knowledge I ran across Zeldman's discussion of HotWired moving fully to valid XHTML and CSS, which is a bold and wonderful move. Zeldman also points to Wired's defense of their move to the new up-to-date site. These are good reads that help us understand why good markup is important, what goes into making these decisions, and the work to make it come to life.
Markup gives structure to information
I have been missing a lot of things on the Web the past few weeks. I just found Steve Champeon's article on the importance of understanding markup over at Webmonkey. HTML markup (some call it HTML code, which is not correct) helps structure information so that it can be used and reused properly in the proper context. This is extremely important when you are trying to add style to the content, such as adding the desired size and weight to a header or modifying the positioning of an unordered list. I see a lot of HTML tags that are not used properly in the work we clean up on a regular basis. Very few applications, MS Word included, come close to using HTML markup properly. Cleaning up application-generated markup is demoralizing, as getting markup right in the first place is easier than cleaning up the mess afterward. Go read Steve's article and anything else of his you can put your hands on, and you will be much better off than before, believe me.
Why is markup important? Many folks and applications try styling information without considering its structure. If you have much of a background in communication, journalism, information science, etc., you understand that information needs structure. There are headers that indicate to the user the content and tone of what follows. There are many elements on a page that need structure: knowing where a paragraph begins and ends, where in the body of text an image should be tied, words that need to stand out (strong), a string of items in a list, or a structured ordered list with sub-elements. Not having this information properly marked up makes it very difficult to understand how to best treat that information. This may seem irrelevant to those who only deal with a Web browser, but if you want to read the information on a PDA, print the information with the best styling for reading, or need a screen reader to vocalize the words on the page and give the words that comprise the information the same understanding, you need structured information. It is like trying to bake a cake without sides on the pan; the cake needs structure to rise and be best consumed. People who guide you away from properly structuring information are, more often than not, not informed on the need for and the benefits of structuring information.
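A small sketch of those structural elements in markup (the content is made up):

    <h2>Packing List</h2>
    <p>Bring <strong>everything</strong> on this list.</p>
    <ol>
      <li>Clothing
        <ul>
          <li>Rain jacket</li>
          <li>Boots</li>
        </ul>
      </li>
      <li>Maps</li>
    </ol>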
Table art
Take your shot at table art, a non-judged competition showcase for those that can have their way with tables.
5k Contest is live again
Yes, it is that time of the year for the 5k Contest. Yes, 5kb of wholesome goodness with which to work. That is graphics, HTML, scripting, and all the ones and zeros you can pack in.
MS Office special character translation
MS Office special character conversion to Unicode, with definitions that help with converting Office documents into HTML.
Yesterday was all about getting the synapses to fire in the right order at SXSWi. I was running on severe sleep deprivation from phones and alarm clocks ringing before I had finished my needed sleep cycle. Nonetheless I had a great time. I greatly enjoyed Steve Champeon's peer panel on Non-Traditional Web Design, as it focussed on the fine art of tagging content, understanding the uses of information, and the true separation of content, presentation, and the application controlling the information. The Web Demo panel I was on seemed to go rather well, as there was a broad spectrum of sites reviewed and the information from the panel to the developers was of great use (I hope); I think we all learned something.
The evening provided good entertainment, including a wonderful gathering at the EFF party. Once again many folks adjourned to the Omni Hotel lobby for the after-hours social gathering. I spent much of the time just listening to conversation and occasionally partaking. Of intrigue were Rusty of Kuro5hin and Adam and Brian of Slashdot discussing the development of site tools that will help a dynamic site fly; keep in mind all these tools are in Perl.
Personally, I would extend the Hillman Curtis quote, "Web designer has to think of every pixel and the role it plays in brand", to the code behind the design. Every choice in the code impacts the display of the information or the way users, particularly those with disabilities, use the information. Sites that are well crafted have more usable information than poorly coded sites. Unfortunately, I have run across a lot of poor code of late, whose developers believe everything is fine as long as it displays properly in their browser. The problem is that not everybody has their browser. The poor coding not only adversely affects the display of every pixel on the page in other browsers, but provides poor usability of the information for the sight impaired. The best step is to learn the standard code, learn to code by hand, learn what every tag and element does, learn to write a page efficiently, and most of all learn how to code for everybody in your user base. Lacking this we are just blindly coding in the dark, wasting our own time, the time of those who thought they could use our information, and the time of those who have to recode the information to make it usable.
Having watched the desktop publishing (DTP) trend "empower" people to design their own newsletters and brochures, I thought the Web would have followed a similar growth path. DTP came into popular use in the late '80s with the advent of Adobe's PageMaker. Having formal training in communication design, I realized the tool was powerful, but also dangerous. Moving into the workforce I watched the folly of the DTP trend. This powerful application, in untrained hands, could create output far from anything a professional organization would want to put out. I heard more than my share of executives screaming down corridors, "What is this cr*p". DTP in the hands of admin staff or interns without design backgrounds or training created about what was expected: garbage. DTP was quickly relegated to the hands of trained graphic artists, who turned out great products from the same application and often the same machine.
What took four or five years with DTP has not yet been realized with Web development. Part of this may be that Web development is more accessible and children can do it from home; the novelty of Web development has not reached the ends of the earth. Another driver that sets the Web apart is the embarrassment of seeing people's children able to build pages, which leads some folks to the belief that Web development and design are not difficult. Much like DTP, it is not difficult to build "something", but it does take a lot of work to build something good that is usable and maintainable. I still hear some executive yelling down the hall about the poor quality of a Web page, but the conversion of those developing sites into knowledgeable developers, or the turning of the site over to experienced expert staff, is still a slow transition. The glamour of the Web has worn thin, which is helping move the development into the hands of craftspeople and those with the passion to learn all the details.
I still have hope; actually, I work in an environment that gives me great hope, as the people with the power to say no do so for all the right reasons, namely development that does not meet the minimum standards of a professional organization. The Web reaches far more people with the messages of our organization than anything prior. The Web imprints users' minds with the impression of a solid organization that cares about the information it handles, or it can do the opposite with equal ease. The experience and impression are in the hands of the professionals to see that these standards are met and adhered to. I am happy to work not only with professionals, but with people who have the passion to understand what is right to get the information to the people, and to get it there properly.
I think a note of clarification is needed regarding the frames comments from the other day. I am a huge fan of the Content Management Bible and have been perusing it for a couple months (or so) now. The use of frames is not all bad, if used in a proper context.
One reason to use frames is using the browser client as an application interface with distinct sections of quasi-interrelated functionality. A mapping application is a good example (select any one of the elements on the page to see the use of frames; keep in mind there is heavy use of JavaScript that requires a version 4.5 browser or higher). The application interface often has command elements that are essentially toolbars, and definition selection elements that set the metadata layers of the information to be displayed. These toolbars direct the actions of the other frames or provide tools to be used in other frames (a zoom tool, etc.). The functionality in a toolbar is not an element of the map display, and it should not be an incorporated element of the map as it has a much different functionality from the map display. Conversely, our users are familiar with navigation being incorporated into the Web page, and that is now a common and preferred construct. But we are looking at an application being displayed in a Web browser, which requires a different mind set.
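A bare sketch of that kind of application frameset (the file and frame names are hypothetical):

    <frameset rows="60,*">
      <!-- the toolbar frame targets and directs the frames below it -->
      <frame src="toolbar.html" name="toolbar">
      <frameset cols="200,*">
        <frame src="layers.html" name="layers">   <!-- metadata layer selection -->
        <frame src="map.html" name="map">         <!-- the map display itself -->
      </frameset>
      <noframes>
        <body><p>This application requires a frames-capable browser.</p></body>
      </noframes>
    </frameset>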
Another use of frames is in a controlled environment that has a plethora of distinct content items within a contiguous text, such as an extensive table of contents. Here the Metatorial CM Bible site is a good example of when to use frames. The table of contents is a helpful tool for quickly scanning through the information to place the reader at a distinct point in a larger body of text. The table of contents is a large (long) element of text that could work as an element of one distinct page, but that would require rebuilding those elements of the page with every snippet of information delivered to the browser.
Frames should be used when the distinct content elements require each other. The table of contents and the page display elements should not work without the other components (if they can, we really have to ask ourselves why we are using frames). If we can enter a page in the CM Bible without the table of contents, the functionality of the site is broken: the navigation is not available, and the assistive information (navigation and/or metadata elements) is not available.
The last item is to ensure that if a frame can stand alone as its own page, the needed navigational elements are on that page. In the example that drove my frames rant (largely because the CM folks understand information and its need to be used, but the site breaks information use constructs we know from experience and research to be proper and needed), the disconcerting thing was that each of the frame elements needed the others to provide complete information for the user. The user needs context. We need to provide the user a means to get to our front page or to other areas within our sites, because if they like our information we should offer them more. If we build a site using framed elements and these elements can be used on their own (no JavaScript sniffers to ensure the other frames are open as a requirement for displaying the content, or a similar technique), the content must have navigation elements (the footer is an unobtrusive placement) and really should have some branding or other statement of ownership.
We know that users of information have varied purposes and methods of using our information. We need to provide users the tools that support those varied uses. We are often proud of our information work, but if a user does not know the work is ours, or we seem unwilling to claim it, our credibility decreases.
We need to embrace functional information architecture to ensure proper information use. This bleeds into user experience design, but understanding how information is used, and how the information interface is used, must be integrated into the IA. Proper functional IA should keep improper use of frames from occurring. Functional IA would walk through a string of questions using a wireframe of a site and ask how the frame sections would interact. We would ask what information is lost if not all the frames function (a surprisingly common occurrence). We would ask if frames maintain context for the information. We would look at methods of ensuring the whole of the frameset remains intact so as to provide proper navigation, proper context, and proper metadata to help understand the information provided. Not asking these questions is not being responsible to the information, to those who collected the metadata and spent time understanding how the information is to be used, or to the consumers of the information.
The solution to the = in the link is to store an encoded stand-in in its place. This may require a solid tweak to my home-rolled weblog application to sniff, parse, and replace the symbol prior to inserting it into the database.
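One stand-in consistent with quoted-printable encoding is storing the literal = as its escape, =3D, so the decoder restores it on output. A minimal sketch (the helper name is mine):

    <?php
    // Make a literal "=" survive quoted_printable_decode() on output.
    function qp_safe($text) {
        return str_replace('=', '=3D', $text);
    }

    $entry  = 'See http://www.example.com/page?id=22';
    $stored = qp_safe($entry);                  // "...id=3D22" goes into the database
    echo quoted_printable_decode($stored);      // prints the original link intact
    ?>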
Arg, running into issues in links when the link has an equal sign followed by a number or two: what should read (=22) comes out as ("). This will take some playing around, as I am using quoted_printable_decode in PHP to print things properly.
Do you build Web pages? Do you have a Mac? Do you have to convert text to HTML/XHTML? If you answered yes (and if you didn't, you should see what you are missing), please go check out Dean Allen's AppleScripts for writing on the Web. These should be wonderful additions to our tool belts.
In a follow-up to the Wall Street Journal site redesign, Webreference explains how WSJ achieves the faster page load (scroll down a bit).
After procrastinating for long enough and reading Nick's review in Digital Web I upgraded to Homesite 5. This is what I used to update and validate sections of this site to XHTML. I have been using Homesite for work projects for a few years, but rarely used it for this site or other personal projects (just a quirk) as I usually do my work handcoding with TextPad. Some of the layout of the tools has changed slightly from Homesite 4.5.2 to 5, but it was not a major difference. I really liked the XHTML elements and collapsing the code, which makes finding non-closed tags an easy task.
I have been able to read through all of this month's Digital Web and can say it is a solid issue from end to end. I really enjoyed the interview with Heather Hesketh. I am also a fan of the two business pieces, Managing the client: A fairy tale and Building the Business Game Plan. I have already pointed others to both of these business articles, as they are great insights from experience.
Moving to XHTML and general updates
There are some changes around here. The links page has been updated with some new links, updated links, and a few removed (ones that I was not visiting for various reasons or had gone dead).
The links and about pages are both converted to XHTML and are validating, for the most part, to XHTML Transitional. The next step will be to get this section, Off the Top, to validate. This will take a little more effort, as it will require making some edits to the templates and internal code validation. Not a monstrous task, but a task nonetheless. A large part of the conversion in this section is creating compliant output from non-standard input. Much of this section does not use opening paragraph tags (<p>), which will take some work to amend.
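A rough sketch of that cleanup (assuming entries are stored with blank lines between paragraphs; the variable names are mine):

    <?php
    // Wrap blank-line-separated chunks of a stored entry in paragraph
    // tags so the output can validate as XHTML.
    $entry  = "First paragraph of an old post.\n\nSecond paragraph, no p tags.";
    $chunks = preg_split('/\n\s*\n/', trim($entry));
    foreach ($chunks as $chunk) {
        echo '<p>' . trim($chunk) . "</p>\n";
    }
    ?>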
All of this means the site is finally moving toward standards compliance, which brings easier display of information across browsers (standards compliant browsers, which most are becoming), easier maintenance, and information reuse.
Champeon interviewed in Pixelview
Steve Champeon is interviewed in Pixelview. Yes, the list-mom for webdesign-L shows his softer side.
Zeldman has been busy while I was away. It is always good to keep an eye on what Jeffrey is up to, particularly when he is talking about the Web and standards.
Those of you that have been following the KPMG linking poor-mindedness would probably enjoy Chris Raettig and his Sunday Stroll (Chris is the chap KPMG was peeved with for linking to their site, particularly their company song).
Foundations of Hypertext Navigation, Part 1.1
Another resource for getting to the foundation of the navigation metaphor: Navigating Hypertext: Visualising Knowledge on the Net. It has a poor interface, as the words on the left are links but are missing any interactive cue to let one know they are links.
Foundations of Hypertext Navigation, Part 1
Another discussion on Peterme has fallen into the discussion of spatial metaphors and the Web. The general feeling is that the spatial metaphor provides a poor descriptive language and metaphorical base for discussing the Web. Finding a replacement seems to be the focus, but there is an embedded base of users who have adopted these analogies. I agree to a great degree that the spatial metaphor is not the best (though agreeing with the negative of a positive superlative is the easy way out, as there is very little room to be wrong, making it a false method of looking smart). There is a chapter, "Navigation through Complex Information Spaces", from Hypertext in Context by Cliff McKnight, Andrew Dillon, and John Richardson, which provides a solid understanding of some of the history of the navigational metaphor in hypertext services.
Nick posts the outline and links for his and Ross Olson's Web Standards discussion, which was delivered in Portland, Oregon at the Portland Multimedia | Internet Developer's Group.
Shirley Kaiser has redesigned her professional site, SK Designs, and provided a fantastic redesign write-up on her personal site, Brainstorms and Raves. The redesign is quite nice and does a nice job of chunking the information with headers and bullets for scanning. The write-up is a very good guide to when and how to go about a redesign.
[hat tip Nick Finck at Digital Web - What is New]