Web technologies are implementation and content-driven

I’ve been teaching some web technologies lately and I found myself telling the exact same story over and over again. Something along the lines of “feature X was initially added in browser Y. Web developers used it, so other browsers copied it after reverse-engineering it, bugs included”. If the feature has bugs, it must be standardized as such, since content in the wild may rely on particular behaviors (bugs included) to run or be rendered properly. Standardizing a feature in a way that is not backward-compatible with existing content is called “breaking the web” on standards mailing lists and obviously, no one ever wants to do that!

The above story is pretty much how everything happened after HTML4, CSS2 and ECMAScript 3 (all released in 1998-1999). HTML5, CSS2.1 (and 3 to some extent) and ECMAScript 5 have been works of understanding and standardizing web technologies as they were actually implemented in browsers and used by web authors… and adding a few features. Such work is still ongoing and isn’t likely to be finished anytime soon.

Examples

JavaScript

A very early example is the inclusion of JScript in Internet Explorer, soon after Netscape shipped JavaScript. The name hardly matters; what mattered at the time was for Microsoft to be able to run the content (and specifically scripts) written by web authors for Netscape. As the rumor goes, typeof null === 'object' was a bug in the original Netscape version that was copied by Microsoft (and later standardized as such, since changing it could have broken scripts).
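To make the bug concrete, here is a minimal sketch of the behavior in question, which every engine now reproduces faithfully:

```js
// The historical Netscape bug, now standardized behavior in
// every JavaScript engine:
console.log(typeof null); // "object", not "null"

// Countless deployed scripts contain checks like this one,
// which is why the behavior could never be changed:
function isObject(value) {
  return typeof value === 'object' && value !== null;
}

console.log(isObject({}));   // true
console.log(isObject(null)); // false
```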

Microsoft innovations

In the opposite direction, Microsoft invented .innerHTML and <iframe>, which were adopted by web authors and very soon implemented in other browsers. Both were later standardized as part of HTML5.
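To see why authors adopted it so quickly, compare a minimal sketch of .innerHTML against the equivalent calls to the DOM API that was standardized at the time (the element IDs are made up for the example):

```js
// With the (then proprietary) innerHTML property, a whole
// fragment of markup is injected in a single assignment:
document.getElementById('menu').innerHTML =
  '<ul><li>Home</li><li>About</li></ul>';

// The equivalent with the standard DOM API of the era takes
// several calls per element:
var menu = document.getElementById('menu');
var list = document.createElement('ul');
var item = document.createElement('li');
item.appendChild(document.createTextNode('Home'));
list.appendChild(item);
menu.appendChild(list);
```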

The day deployed jQuery influenced ECMAScript 5

I’ll let you read this interesting announcement, which describes how some deployed jQuery code forced a change in ECMAScript 5.

Enter the mobile web: Internet Explorer may consider implementing __proto__

It came as a shock when I first read that Microsoft was considering implementing __proto__, but it makes a lot of sense. The mobile world is dominated by browsers implementing __proto__ (Safari, Opera), so naturally (?), people writing mobile libraries or mobile web content take it for granted.
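A sketch of the kind of code such libraries shipped; once content like this is deployed, a browser without __proto__ simply cannot run it (the object names are made up for the example):

```js
// __proto__ was non-standard at the time, but available in the
// dominant mobile browsers, so deployed code took it for granted:
var base = {
  greet: function () { return 'hello'; }
};

var child = { name: 'mobile' };
child.__proto__ = base; // mutate the prototype chain directly

console.log(child.greet());            // "hello"
console.log(child.__proto__ === base); // true
```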

-webkit-what?

There has been a huge debacle about browsers considering implementing -webkit- CSS properties. No one is really happy with the state of things, but the reality is that there is content relying on -webkit- properties. If a new browser wants to enter the mobile field, it has to render existing websites, and unfortunately, it seems unavoidable to implement some -webkit- properties.

Just to clarify, I’m not expressing a judgement or an opinion here, just stating facts and drawing the natural conclusion that comes out of them.

“WTF??!!1! H.264 videos in Firefox?? Is Mozilla forgetting its mission?”

An interesting thread started by Andreas Gal discusses the possibility for B2G to use OS codecs when they are available, even for H.264 (including .flv videos, which are certainly the most widespread video format on the web) and MP3. As one can imagine, support for these patent-encumbered formats in Firefox after “fighting” against them for so long is at the very least surprising. But it makes sense too: since there is no Flash on iPhones, it is very likely that web apps with video elements embed H.264 videos (the only format supported on iPhones, according to caniuse). Android also supports it.

Knowing that there is H.264 video content out there on mobile websites, what choice is left to Mozilla? Not render any video and never enter the mobile market? As hard as it is for me to accept, this is certainly not the right choice. I think supporting H.264 when the OS has a codec for it is a reasonable sacrifice to make in order for Mozilla to reach users and bring its vision of how the web should be with B2G, Persona, apps and the like.

Anyway, back to the main topic…

Last but not least: “Encrypted Media proposal”

This example is not at as mature a stage as the previous ones. No one has implemented it and there is no content relying on it yet, but Microsoft, Google and Netflix are proposing an extension for encrypted media. As far as I’m concerned, just the initial picture seems too complicated to be realistic.

But technical difficulties are not that big of a deal. Some quotes in a discussion thread are more disturbing: “Our business wouldn’t be viable at all without regional restrictions.” or the no less excellent: “[Content Decryption Modules] implementations that may not be [Free and Open Source Software]-implementable *are* at this time (but not necessarily in the future) a business requirement for the commercial video provider members of the W3C”.

The careful reader will make the connection between the “at this time” and “in the future” and realize that once the CDM technology is deployed and has content relying on it, even if the “commercial video provider members of the W3C” change their minds, the content relying on the initial CDM technology will never be readable by FOSS implementations, and that’s a bit annoying, isn’t it?

Counter-examples

ActiveX, VML

No one ever really implemented these besides Microsoft, certainly because there was little to no content to support.

Firefox’s __noSuchMethod__

Hopefully, you don’t even know what that is, and that’s a good thing 🙂

Standards and validators

So I’ve claimed that web technologies are driven by implementation and content, but there are technical standards and validators, right? Software standards used to be this almost sacred thing that implementors had to follow, and it was a working model. That’s how web standards were originally conceived. But that’s not a relevant model anymore. The WHATWG (the group behind HTML5) was founded by browser makers because there was a need for web browsers to agree on what they implemented and to implement it in an interoperable way, with the “don’t break the web” rule as a priority that the W3C apparently had not understood (but Bruce Lawson explains it better than I do).

I really think we should stop thinking of standards as Bible-like texts, but rather as implementation agreements. Obviously, since there is the backward-compatibility constraint, implementors need to hear what web authors (aka “developers”) have to say about how they currently use the technologies, and standards mailing lists are open to this. Finally, web authors can provide feedback and suggestions for new features. That’s a rough and incomplete picture of how web standards currently work, and I’m glad it works like this.

Also, very much like implementations (software) evolve, so must the standards, hence the HTML living standard model.

Finally, there are validators. I have met people who would never ship a website if all pages did not validate. This is a wrong idea. Validators are the continuation of the ideal that standards are sacred texts, and they are just as wrong. They are even more wrong if they are not kept up to date with the latest evolution of web standards (and I am confident they are not). Moreover, the analysis they provide is only partial. Like any piece of software, validators can have bugs. Worse than anything else, a validator does not guarantee that your content actually renders properly in web browsers (which is what you actually care about when writing a page). Anyway, you can use validators, but stay critical of their limitations.

HOWTO: remove annoying technologies

Most of the time, it won’t be possible. However, in some cases, a technology is used only in a certain way, and it is still possible to standardize it with a restriction. Obviously, this requires studying how people actually use the technology. This was not so hard for the __proto__ “property” and led to a very reasonable proposal that’s unlikely to break the web.
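For the record, the shape this compromise eventually took (standardized in ECMAScript 2015, Annex B) is an accessor on Object.prototype rather than a magic property on every object; a quick sketch:

```js
// __proto__ is specified as a getter/setter pair on
// Object.prototype, not as a magic property of every object:
var desc = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__');
console.log(typeof desc.get); // "function"
console.log(typeof desc.set); // "function"

// One consequence of this restriction: objects that do not
// inherit from Object.prototype are not affected by the accessor.
var dict = Object.create(null);
dict.__proto__ = { anything: true }; // a plain data property here
console.log(Object.getPrototypeOf(dict)); // null, unchanged
```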

It is vastly more complicated work when it comes to vendor-specific CSS properties. Here, the outcome would be to decide which properties Mozilla is willing to implement rather than which properties could be removed, but the same result could be used by WebKit to remove the rarely used properties.

Looking forward

For a web technology to be adopted, it takes two ingredients: 1) implement it in a widespread browser; 2) create content using (and relying on) this technology. In retrospect, the whole prefix thing was doomed to fail. It was basically saying “we’ll do 1, but please, developers all around the world, don’t do 2!”. Clearly not a winning strategy. Needless to say, every attempt to evangelize against prefixes is bound to fail, since it’s not possible to change all the content of the web (remember, it’s decentralized and all of that?).

Conclusion

If I were ever asked, my advice to browser makers would be: be careful about what you implement and ship; otherwise, everyone may get stuck!
