For the past six months my co-worker John Croslin and I have been hammering away at this project, and it’s finally launched: the new University of Texas School of Law Events Calendar. After comparing many popular (and not-so-popular) open source and commercial calendaring projects, we determined that none of them fully met UT Law’s specific needs and infrastructure, so we noted which features worked best in each and started from scratch.
On the surface, the public view has all of the trappings of a fairly generic calendar (grid + list views, date-based navigation, multiple “calendars”, iCalendar downloads), but behind the scenes there’s a fairly impressive feature set. A quick list of what’s going on:
Entire system designed and built from the ground up, using cross-browser-friendly HTML, CSS, and a dash of jQuery
Object-oriented PHP with an Oracle backend (which is what we’re running now, but it could be modified easily to use MySQL or PostgreSQL instead)
Custom workflow routing that hooks into our faculty / staff / student directories and makes efficient use of our special events and media services departments’ resources (if an event requires catering the system notifies our Special Events department for approval, student-submitted events are first screened by the Student Affairs Office, etc.)
Recurring events are possible with more flexibility than what’s found in Outlook: you can edit most of an event’s details without removing the whole series, and you can choose whether changes affect only the single occurrence, ripple forward, or apply to all sibling events in the series
Integration with our Exchange server via Exchange Web Services to provide room availability (free/tentative/busy) info to users when creating new events, to help with room selection
“Pretty” permalinks that are navigable for all calendar views (for example /calendar/today/ lists the current day’s events, /calendar/2010/08/ displays August 2010 in a monthly grid view, /calendar/2010/08/faculty-events/ narrows that further to faculty-specific events, etc., and using the date navigation controls doesn’t kick users out of the specific view)
Coming real soon: iCal/RSS feeds, embeddable calendar widgets, better Exchange integration, mobile views, and more
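To make the edit-scope idea concrete, here’s a rough sketch of the “single / forward / all” logic. This is a minimal illustration in Python, not our actual PHP implementation; the Occurrence fields and scope names are invented for the example:

```python
from dataclasses import dataclass, replace
from datetime import date

@dataclass
class Occurrence:
    starts: date
    title: str
    location: str

def apply_edit(series, index, changes, scope):
    """Apply field changes to a recurring-event series.

    scope: 'single'  -> only the chosen occurrence changes
           'forward' -> the chosen occurrence and all later ones change
           'all'     -> every occurrence in the series changes
    """
    if scope == "single":
        targets = [index]
    elif scope == "forward":
        targets = range(index, len(series))
    elif scope == "all":
        targets = range(len(series))
    else:
        raise ValueError(f"unknown scope: {scope}")
    for i in targets:
        # replace() builds a copy of the occurrence with the changed fields
        series[i] = replace(series[i], **changes)
    return series
```

The real system also has to reconcile this with the workflow approvals above, but the core choice presented to the user boils down to picking one of those three target ranges.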
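The permalink scheme can be sketched as a toy path parser. This is Python rather than the production PHP, and the returned parameter names are made up for the example:

```python
import re

def parse_calendar_path(path):
    """Map a /calendar/... permalink onto view parameters.

    Handles the patterns described above:
      /calendar/today/                  -> today's list view
      /calendar/2010/08/                -> monthly grid for Aug 2010
      /calendar/2010/08/faculty-events/ -> same month, one calendar only
    """
    parts = [p for p in path.strip("/").split("/") if p]
    if parts[:1] != ["calendar"]:
        return None
    rest = parts[1:]
    if rest == ["today"]:
        return {"view": "day", "when": "today"}
    params = {}
    if rest and re.fullmatch(r"\d{4}", rest[0]):
        params["year"] = int(rest.pop(0))
        params["view"] = "year"
    if rest and re.fullmatch(r"\d{2}", rest[0]):
        params["month"] = int(rest.pop(0))
        params["view"] = "month"
    if rest:
        # whatever trails the date segments names a specific calendar
        params["calendar"] = rest.pop(0)
    return params or None
```

Because every view is addressable this way, the date navigation controls can simply link to sibling paths instead of resetting the user’s context.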
One week in, it’s already shaping up to be a very useful resource for our users. We might have the code available as an open source download at some point, especially if there’s interest in adapting or extending it. If you’re looking for something right now, you might be interested in the great work being done with UNL Events Publisher and Bedework, two open source projects I took inspiration from. Otherwise, feel free to take a look at what’s happening at UT Law!
Directory Search — if you’re affiliated with UT Law School you can search our internal phone and email directory by name or department, using the native iPhone apps to place calls and send emails directly,
Event listings and Notices pulled from our existing calendar and Law Mail announcement systems,
RSS feed view of our press releases,
Recent Twitter posts from our Communications office (this will make more sense when/if we have more than one Twitter account posting official news, and can combine them into one stream here),
Maps: detailed building maps, Google maps that use the iPhone location services to guide you to our building, KML-based maps of public parking, nearby hotels, and restaurants,
There are a lot of things already in the works for the next iteration. The number one goal is to support other popular devices, to live up to the ideal of “one web, any browser”. As a developer who has wrestled with the wide range of desktop browsers and all of their HTML and CSS inconsistencies over the years, though, it was really, really nice to work with a single browser that supports HTML5 and CSS3 presentation out of the box. Now I’m spoiled.
In one of the best eye-tracking technology projects I’ve seen, the folks from the Graffiti Research Lab and FAT Lab have teamed up with Theodore Watson, Zachary Lieberman, and Christine Sugrue to tackle a novel accessibility problem: enabling pioneering graffiti artist Tempt, hospitalized for over two years with the muscle atrophy of ALS (Lou Gehrig’s Disease), to tag again. Out of all of the things I heard about at SXSW this year, this project excited me the most — open source hardware + software hacking, vision work, accessibility concerns, graffiti, and a great story!
The system they’re developing uses the excellent openFrameworks library and two small cameras: the left eye works as a “mouse button” (holding that eye closed triggers a click), while the right eye’s pupil is tracked for gesture. The result is a simple hands-free drawing app, which they will connect with the GRL’s laser tag tools, giving Tempt the ability to express himself through graf writing again.
You can check out the rest of their videos under the TEMPT1 tag on fffff.at (“Release early, often, and w/ rap music.”), but here’s a good one to get you started:
Freshened up my personal blog and portfolio site for 2009. While similar to the transitional look and content that you’ve seen for the past couple of years, this theme has been rewritten by hand from scratch and features many advancements over the old style. The entire site is more tightly integrated through WordPress than ever before, using features newly available in WP 2.7.1 (gravatars, per-post styles, threaded comments, etc.), a handful of customized plugins, subtle jQuery enhancements, and Subversion to tie it all together on the backend. I’ve also moved to a new domain after about ten years of being at asnorwood.com. All of the old links should still point to the right place (or get you pretty close), but let me know if you find something missing.
The bulk of the improvements are behind-the-scenes, but I can at least say that the following changes make my life easier and me happier:
Uploading new portfolio work is much more straightforward.
No more need for a separate gallery plugin!
The category and link organization is more sensible! Tags, too!
Better error-handling — hopefully you won’t end up 404 Not Found, but you at least have a sporting chance of getting unstuck now!
The search engine optimization (I hate that term) seems to be working already, too. Thanks, Google!
The search form pulls up better, more accurate results!
All of this tech stuff is secondary, of course, and I’m still trying to decide how best to balance the blog entries between my different interests. Maybe I’ll eventually split off into two or more distinct sites to keep things from rambling together. I’d also like to figure out a better way to incorporate the side-channel links (currently I’m using del.icio.us) and scrap-collecting elements (I love Tumblr for gathering quotes and other detritus, but I’m not sure how best to tie that content in with my main site). With the fifteenth anniversary of my first website approaching, you’d think I’d have this all figured out by now!
Some of the best panels and meetups I attended at this year’s SXSW (the famous technology/music/film/designer eyewear festival) were on accessibility and adaptive technology, a good forum to hear what’s stirring in those fields. In particular, it seems like there’s a growing open source movement to provide tools for users with special needs and to help web designers produce accessible content.
Closed source software like JAWS will face a real challenge as open screen readers like the NVDA project become more mature and build on the popularity of other software like Firefox — while NVDA is certainly lacking the features and polish found in the more widely-used commercial products, the price (free vs. $1000) and ease-of-installation certainly make it compelling.
I also learned about the following accessibility-checking programs and Firefox extensions, immediately adding them to my developer’s toolbox:
Colour Contrast Analyser, a great tool available for Windows and OS X that gives you two color pickers: one to choose a foreground color (probably your main text color) and a second to pick a color from the background to compare it with. It then gives you detailed contrast ratio information for the two colors along with clear indicators as to whether your site or application complies with the suggested contrast needed for visually impaired users and for colorblindness. It’s one of those tools that simply works as advertised.
Fangs, a screen reader emulator built as a Firefox extension. When run on a page, Fangs displays a mashed-together, color-highlighted, text-only version of your content as a screen reader would read it aloud. If you’re a sighted web developer, this is a handy tool for getting a quick impression of how your page will hold up under JAWS or similar. (Bonus points for having an attractive, accessible website)
The Firefox Accessibility Extension from the Illinois Center for Information Technology Accessibility. This tool helps you generate reports on various accessibility issues, can display information about your page’s semantics (headings, list items, links), lets you easily switch into various high contrast modes, etc. It’s a great companion to the awesome Web Developer extension.
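If you’re curious what a tool like the Colour Contrast Analyser is actually computing, it boils down to the WCAG relative-luminance formula. Here’s a quick sketch in Python (an illustration, not any of these tools’ actual code):

```python
def relative_luminance(rgb):
    """WCAG relative luminance for an sRGB color given as 0-255 ints."""
    def channel(c):
        c /= 255.0
        # linearize the gamma-encoded sRGB channel
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; WCAG suggests >= 4.5:1 for body text."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white gives the maximum possible ratio of 21:1, and the “detailed contrast ratio information” these tools report is just this number compared against the WCAG thresholds.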
You should also check out Color Oracle, the cross-platform color blindness simulator. It’s pretty sobering if you have regular vision like I do, and it will make you appreciate that yes, two different hues can be very, very similar-looking to a good portion of your audience, and yes that’s a big problem.
(It goes without saying that these are useful but imperfect tools, never capable of giving you the full insight that would come from actual user testing. The only real way to know what real frustrations an impaired user will have with your new web app or site? Get one to come in and give it a spin!)
These are the kinds of open source projects that I really dig: good for users, good for shaking up the established software licensing model, and good for helping solidify support for web standards. Know of any other good tools?
Today sees a new homepage for the University of Texas School of Law. This iteration is more of a realign than a redesign, as we decided to keep our interior pages intact while we continue a long-term look at our branding and online presence. The biggest design challenge was creating something cleaner and more useful for our visitors while retaining most of the same content and enough of the previous design to tie it in comfortably with our current site’s look-and-feel.
The new version emphasizes our communication pieces, changing the rotating banner graphic into something more dynamic: the accompanying text is now HTML-based and will link to richer features similar to our Clinical Education stories. Our previous 75×75 pixel highlight buttons (which themselves were reduced from the intricate 200×140 highlight graphics of two years ago) have been folded into our general News list to help simplify the page. The navigational links were dramatically reorganized to make the hierarchy clearer and more contextual. Everything’s still there, it’s just been reshuffled.
Make it pretty
Aesthetically, the goal was to reduce the homepage’s clutter and to make the information presented more visually balanced. I designed the old homepage, so I’m to blame! To accommodate the larger banner graphic I increased the width of the site to 840 pixels, and then subdivided that width into a five-column layout. The typography is much more consistent, and care was taken to align the text vertically on a baseline grid. The colors are lifted from the previous version but greatly toned down — far less orange, no more crazy orange-stripe-gradient thing, and a nice white background with some subtle color at the top. Still feels like UT, but doesn’t scream it, and the new design continues to match our internal pages.
Behind the scenes
I’ve shifted the site from Transitional to XHTML 1.0 Strict and have made greater use of XML for the maintenance of the feature stories and news items. The layout and typography are all still handled with plain CSS: if you strip away the stylesheet, you’ll find that the homepage is semantic, streamlined, and very navigable with screenreaders or other assistive technologies. Text can be adjusted in the browser to just about any size without breaking the layout. We’re also sporting a bit of hCard markup so that folks can easily scrape our contact and location info into more useful formats.
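As a rough illustration of the XML-driven maintenance (the format below is invented for the example, not our actual schema), the feature stories and news items can live in a small XML file and be rendered into semantic markup with a few lines of code:

```python
import xml.etree.ElementTree as ET

# Hypothetical news data; the real element names may differ.
NEWS_XML = """
<news>
  <item>
    <title>Clinical Education Spotlight</title>
    <url>/news/clinics/</url>
  </item>
  <item>
    <title>Faculty Colloquium Series</title>
    <url>/news/colloquium/</url>
  </item>
</news>
"""

def render_news(xml_text):
    """Turn the news XML into a semantic HTML list."""
    items = ET.fromstring(xml_text).findall("item")
    links = [
        f'<li><a href="{i.findtext("url")}">{i.findtext("title")}</a></li>'
        for i in items
    ]
    return "<ul>\n" + "\n".join(links) + "\n</ul>"
```

The appeal of this split is that editors only ever touch the data file, while the markup stays consistent and valid.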
Hopefully the refresh is just what we need to help carry us along until the sitewide redesign. I think the updated technology and cleaner look will do a lot for us, and it should help increase our visibility as one of the top-ranked law schools. If you have any comments about the design or about site refreshes, I’d love to hear them.
Looking for an image for last week’s entry on shadows, I turned to the art history databases JSTOR/ArtSTOR, ECCO, and the WGA (academic database people seem to have a penchant for initialisms). After getting badgered with various login options, access restrictions, rules for use, off-campus policies, and so on, I turned to the hoi polloi: between Google Images and Flickr’s Creative Commons search I quickly turned up a worthwhile painting, free to use. Three news items from today confirmed that I’m not alone in thinking that keeping academic publishing behind university paywalls is a bit counterproductive.
The New York Times ran a piece detailing a proposal presented today to the faculty at Harvard. Their scholars’ academic work could soon be automatically published online, publicly available, on a surprisingly opt-out basis. While many professors already publish their work online at one journal repository or another, this could become a compelling centralized resource. This kind of no-cost open access has the journal and database publishers a bit worried, and for good reason. I haven’t been able to turn up any info yet on whether the proposal passed, or when exactly it’s up for a vote (if that’s the way it works). Will developments like this kill niche journals that rely on their sibling publications’ high subscription fees? Will this change the business model of scholarly journals?
Next, Professor Lawrence Lessig of Creative Commons fame writes that the “Legal Commons” project has seen its first release of case data, available as CC0-licensed XML. The organization seeks to bring 1.8 million pages of federal case law into the public domain before the year is out, available for free for any use or purpose. It’s an ambitious goal, especially considering the clout of the expensive subscription-based alternatives, but a worthy one. After all, shouldn’t the word of the law be in the hands of the public?
Finally, I came across a post on the O’Reilly Radar blog about a newly announced non-profit service called CK-12. Their system provides a UI that will allow educators, students, and the public to assemble their own textbooks using open data and resources. Right now it sounds like it’s mostly limited to flat text, but in the future they plan to incorporate more dynamic items like RSS feeds, videos, and widgets. A bit of Web 2.0 for the classroom. I hope that it catches on with at least a few tech-savvy teachers. I’ll have to browse through the other news coming out of O’Reilly’s Tools of Change conference to see what else is going on along these lines in the publishing world.
UPDATE: Looks like Harvard’s faculty overwhelmingly accepted the proposal. There are more open law projects cropping up (like The Public Library of Law, which includes some commercial links) and public.resource.org’s archive is being picked up on legal information sites like Justia.
“If the swift moment I entreat:
Tarry a while! You are so fair!
Then forge the shackles to my feet,
Then I will gladly perish there!
Then let them toll the passing-bell,
Then of your servitude be free,
The clock may stop, its hands fall still,
And time be over then for me!”
— “Faust,” Norton Critical Edition, lines 1699–1706
The above lines are from Goethe’s story about the scholar Dr. Faust and his famous bargain. The scholar promises his soul to the devil in exchange for earthly knowledge and power, on the condition that his life will be forfeit only when he experiences a moment that he wishes would persist. What does this have to do with Internet Explorer 8? It’s a tortured and overblown metaphor to be sure, but for some reason this week’s developments in the world of web development reminded me of this fable.
If you haven’t already, start off with this article from A List Apart and perhaps move along to Eric Meyer’s analysis of the news. These articles appeared almost simultaneously with a post on the IEBlog about the scheme. To grossly summarize, the IE team has worked out a deal with some of the major players in the web standards scene and representatives of the browser makers to introduce a new <meta> tag (X-UA-Compatible) that allows developers to target specific browser implementations. They argue that this move will help prevent complaints of new browser versions “breaking the web” when they are released to the public.
This news seems to have come as quite a surprise, with heated discussion (mostly negative as far as I can tell, and at times sadly mean-spirited) breaking out in the usual forums. Molly Holzschlag provides the most level-headed analysis I’ve read so far, and alludes to the secretive, NDA-protected discussions that led up to this decision. Even Ars Technica and El Reg have weighed in on the issue.
The contemptible part of the new specification is that it’s designed to allow sites to lock into a current implementation, and Microsoft has decided that the default rendering engine for pages lacking this meta tag will be IE7 (not IE8, the browser that’s introducing this feature, or the more sensible default of “latest version”!). The implication is that future versions of IE (and other major browsers?) will contain emulation code allowing them to switch back to a previous engine at will, so that sites will always look and act the same as the designer intended, quirks and all. If you have IE10 and look at a page lacking the proper meta tag, it will use IE7 to display the page. I guess that means IE8 won’t pass the Acid2 test by default? What does that even mean?
In my humble, semi-educated opinion, this could be a major setback to the web standards movement and to the speedier development of better web technologies (and things already move at a glacial pace in the web world). We’ve been taught for years that the road to enlightenment was paved with progressive enhancement and future-proofing, and this goes against that grain. I find the idea disquieting for its more pragmatic implications, too — how will this actually be implemented? I was relieved to find in a post by Robert O’Callahan, a coder who works on Firefox, that he was puzzled by many of the same questions I had. Won’t this dramatically increase the footprint of each successive browser release? I’ve used emulators of all kinds in the past, and they simply aren’t perfect.
Will this end web development as we know it, or kill the open standards movement? No, of course not. But it’s confusing enough and sudden enough that it’s not surprising that more than a few people are upset by the news. Maybe I’ll warm up to it when I hear more specifics about how this will actually work in the real world, but for now I’m highly skeptical.
Update: The debate continues, with two further articles from A List Apart. The first, “They Shoot Browsers, Don’t They?” by Jeremy Keith, makes the case that a good beta version of IE8 would go a long way toward settling the question one way or the other, depending on how its display holds up on the current web. I’d have to agree, and I’m glad to see ALA giving a contrary opinion some space after the articles from last month caused such an uproar. Having said that, Zeldman’s “Version Targeting: Threat or Menace?” fans the flames a bit as he tries to make his case again in favor of the default opt-in method. This time it’s revealed that major DOM scripting changes are the root cause of Microsoft’s concern, which I don’t think was mentioned previously. The argument doesn’t seem to stick, at least based on the response the article’s drawn. I still stand by my view that this is a pretty bad deal, and one that’s only intended to help Microsoft’s short-term financial interests. Check out the commentary on these two articles, though, for a handful of other good opinions. Even though there doesn’t seem to be much traction, public discourse is always welcome.
This summer I had the pleasure of designing and coding one of our largest recent projects at UT Law: the new system that houses the collection of English translations of international law for the Institute for Transnational Law. The previous version of the system was an ungainly assortment of static, invalid .html and .shtml SSI files inherited from another university (meaning no offense), and at a couple of thousand pages deep it was a bear to maintain. The new version of the site is now available for your perusal.
We consulted with main campus ITS to build the Perl scripts that culled the juicy bits from the old HTML; the resultant data was scrubbed a bit and dropped neatly into an Oracle database. After a few strong shots of relational SQL kung fu and a bit of object-oriented PHP, everything is up and running efficiently. The front-end display is now (mostly) valid XHTML, with CSS for the visual styling. Google’s much happier, I’m much happier, and hopefully the refreshed site will help make this important legal resource even more visible and valuable.
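The culling step worked along these lines (the real scripts were Perl; this is an equivalent sketch in Python, run against a made-up page structure rather than the actual legacy markup):

```python
from html.parser import HTMLParser

class JuicyBits(HTMLParser):
    """Pull the title and body text out of an old static page,
    discarding markup so the text can be loaded into a database."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.body_chunks = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif data.strip():
            self.body_chunks.append(data.strip())

def cull(html):
    """Return a record ready to be scrubbed and inserted into the database."""
    parser = JuicyBits()
    parser.feed(html)
    return {"title": parser.title.strip(), "body": " ".join(parser.body_chunks)}
```

Using a forgiving parser matters here: the legacy files were invalid HTML, so anything that insisted on well-formed input would have choked on them.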
Chase Manhattan bank is getting into a bit of trouble for using digital projectors to beam their corporate logo onto the sidewalks of NYC, an act that the Times article construes as guerrilla marketing. Representatives from the neighborhood are describing the logos as visual blight, and officials from the Department of Transportation equate it with defacement. I can’t argue with it being an eyesore, and based on the photos I’ve seen it looks a bit too obvious to be “guerrilla” (the logos are projected directly in front of the Chase outlets, and they’re quite noticeably the Chase octagon logo). What I find interesting is that laws applying to physical defacement of city property are coming into play to stop the projection. This makes it different from previous cases of corporate ad-graffiti (like when IBM tried it with stenciled Linux logos or Sony’s PSP graffiti misfire), and brings it more into the realm of artist-hacker types like fi5e/Graffiti Research Lab and other more established artists who use projection in public spaces as a disruptive technology (along the lines of Krzysztof Wodiczko or Jean-Christian Bourcart). Obviously, the function of art is very different from that of advertising or branding (right? hmm…), but I wonder if this will lead to something of a crackdown against unapproved projected imagery in major cities – will light be considered as infringing or destructive as painted graffiti?