Simon Willison’s Weblog

Items tagged urls in Feb



How did slashes become the standard path separators for URLs?

I’m going to take an educated guess and say it’s because of Unix file system conventions. Early web servers mapped the URL to a path on disk inside the document root—this is still how most static sites work today.

[... 57 words]
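As a rough sketch of that convention, here is how a server might resolve a URL path against a document root (the DOC_ROOT value is made up, and this is an illustration of the idea rather than any particular server’s code):

```python
import os.path

# Hypothetical document root for the sketch
DOC_ROOT = "/var/www/html"

def resolve(url_path: str) -> str:
    """Map a URL path like /blog/2011/feb.html onto a file under DOC_ROOT."""
    # URL separators are always "/", regardless of the host operating system
    candidate = os.path.normpath(os.path.join(DOC_ROOT, url_path.lstrip("/")))
    # Refuse paths that escape the document root (e.g. /../../etc/passwd)
    if not candidate.startswith(DOC_ROOT + os.sep):
        raise ValueError("path escapes document root")
    return candidate

print(resolve("/blog/2011/feb.html"))
# /var/www/html/blog/2011/feb.html
```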

URLs are supposed to represent resources. A web app can be a resource, and there are techniques for managing state within those. Hashbangs might be one of these. But when large web properties are converting all their links to _articles_ and other _bits of text_ (tweets/twits/whatever) into these monstrosities, it’s not innovation. It’s a huge mistake that ought to be regretted now and will certainly be regretted in the future.

Reed Underwood # 10th February 2011, 4:56 pm

Before events took this bad turn, the contract represented by a link was simple: “Here’s a string, send it off to a server and the server will figure out what it identifies and send you back a representation.” Now it’s along the lines of: “Here’s a string, save the hashbang, send the rest to the server, and rely on being able to run the code the server sends you to use the hashbang to generate the representation.” Do I need to explain why this is less robust and flexible? This is what we call “tight coupling” and I thought that anyone with a Computer Science degree ought to have been taught to avoid it.

Tim Bray # 10th February 2011, 6 am
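A minimal sketch of the split Bray describes, using Python’s standard library (the Gawker-style URL is invented):

```python
from urllib.parse import urlsplit

url = "http://gawker.example/#!/5751138/some-article"
parts = urlsplit(url)

# Browsers never send the fragment to the server, so the request is just:
print(parts.scheme + "://" + parts.netloc + (parts.path or "/"))
# -> http://gawker.example/

# The article identifier only exists client-side; JavaScript must read it
# back out of location.hash and fetch the content in a second round trip:
print(parts.fragment)
# -> !/5751138/some-article
```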

Going Postel. Jeremy points out that one of the many disadvantages of publishing JavaScript-dependent content on the Web is that a single typo can render your entire site unusable. # 9th February 2011, 2:18 am

Breaking the Web with hash-bangs. Mike Davies explains why Gawker’s new Ajax fragment-tastic redesign is a web architecture error of colossal proportions. # 9th February 2011, 2:17 am

Specify your canonical. You can now use a link rel=“canonical” to tell Google that a page has a canonical URL elsewhere. I’ve run into this problem a bunch of times—in some sites it really does make sense to have the same content shown in two different places—and this seems like a neat solution that could apply to much more than just metadata for external search engines. # 14th February 2009, 11:28 am
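For reference, the element itself is a one-liner in the head of the duplicate page (the href here is a made-up example):

```html
<link rel="canonical" href="https://example.com/2009/feb/14/canonical/">
```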

A proposal: email to URL mapping. Brad’s just too damn smart. A simple solution to mapping an e-mail address to an OpenID that takes advantage of existing technology (YADIS) and doesn’t adversely affect e-mail privacy. # 8th February 2008, 11:39 am

.php? .cgi? .who-cares? J-P Stacey argues that “URLs need to be hackable by the developer as well as by the user”. There’s certainly room for improvement in keeping complex URL structures maintainable from a server-side developer’s perspective. # 9th February 2007, 1:01 am
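A minimal sketch of the idea, with no particular framework implied: patterns and handler names live in one place on the server, and nothing about the implementation (.php, .cgi) leaks into the public URL. The handler names here are hypothetical:

```python
import re

# Route clean, extension-free URLs to named handlers in one central table
ROUTES = [
    (re.compile(r"^/archive/(?P<year>\d{4})/(?P<month>\w+)/$"), "month_archive"),
    (re.compile(r"^/entry/(?P<slug>[\w-]+)/$"), "entry"),
]

def dispatch(path: str):
    """Return (handler name, captured parameters) for a clean URL path."""
    for pattern, handler in ROUTES:
        match = pattern.match(path)
        if match:
            return handler, match.groupdict()
    return "not_found", {}

print(dispatch("/archive/2007/feb/"))
# ('month_archive', {'year': '2007', 'month': 'feb'})
```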

There’s an unfortunate side-effect of altogether eliminating the sub-domain name from your site URLs [...] Every cookie you may want to set for that site will automatically “bleed” down to *all* sub-domain-based websites you might want to add later.

Már Örlygsson # 6th February 2007, 12:01 am
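A short sketch of the mechanism the quote describes (example.com is a placeholder): a cookie with an explicit Domain attribute is sent to every subdomain as well, whereas in modern browsers omitting Domain scopes the cookie to the exact host that set it.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"

# With an explicit Domain, browsers send this cookie to example.com AND
# every subdomain (static.example.com, api.example.com, ...):
cookie["session"]["domain"] = "example.com"

print(cookie.output())
# Set-Cookie: session=abc123; Domain=example.com
```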

Adam Vandenberg on disambiguated URLs. He was fighting for cache-friendly URLs at Encarta Online way back in 1998. # 4th February 2007, 5:18 pm

www. is deprecated. I wouldn’t go so far as to say avoid www—just as long as you pick one and redirect the other. # 4th February 2007, 2 pm
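Picking one and redirecting the other is a one-line rule in most server configs; as an illustrative sketch of what that redirect does (example.com is a placeholder, and the redirect could equally go the other way), a pure-WSGI version might look like this:

```python
def canonical_host(app, bare_domain="example.com"):
    """WSGI middleware sketch: permanently redirect www.<domain> to <domain>."""
    def middleware(environ, start_response):
        if environ.get("HTTP_HOST", "") == "www." + bare_domain:
            # Preserve the path and query string in the redirect target
            location = "http://" + bare_domain + environ.get("PATH_INFO", "/")
            if environ.get("QUERY_STRING"):
                location += "?" + environ["QUERY_STRING"]
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        return app(environ, start_response)
    return middleware
```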

Why you should be using disambiguated URLs

Good URLs are important. The best URLs are readable, reliable and hackable.

[... 553 words]
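The full post is elided above, but “hackable” has a concrete test: on a well-designed site, chopping segments off the end of a URL should keep yielding working pages. A small sketch with a made-up URL:

```python
from urllib.parse import urlsplit

def hackable_prefixes(url: str):
    """Yield each ancestor of a URL's path; on a 'hackable' site every
    one of these should itself be a working page."""
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s]
    for i in range(len(segments), 0, -1):
        yield f"{parts.scheme}://{parts.netloc}/" + "/".join(segments[:i]) + "/"

for prefix in hackable_prefixes("https://example.com/archive/2007/feb/4/"):
    print(prefix)
# https://example.com/archive/2007/feb/4/
# https://example.com/archive/2007/feb/
# https://example.com/archive/2007/
# https://example.com/archive/
```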