On the Mozilla Foundation needing “to evolve into a metrics-driven organization”

So apparently, the Mozilla Foundation needs to evolve into a metrics-driven organization (related video explanations by Ryan Merkley). This is worrisome.

Limitations of metrics as success measurement

I’d like to step back and ask what success means. Let’s take the example of a teacher. By what measure is a teacher considered successful? A good average class grade? Happy students? Whatever idea you come up with, you’ll very soon have to acknowledge that either it cannot be measured quantitatively (student happiness) or, if it can (average grade), it has limitations (grades cannot really be compared across different topics, and teachers could be tempted to bias their grading to appear more successful).

Being a “metrics-driven organization” also means that the mission you’re trying to achieve is reduced to your metrics. I hope I’m not shocking anyone by saying that this is wrong in many ways. Many things cannot be accurately measured. Acknowledge and accept it, or you’re just being intellectually dishonest.

Metrics done right

Metrics in themselves are not an issue. Metrics are necessary for (at least) two reasons. First, they give the outside world insight into what you do and how your activity evolves over time (not only in the numbers themselves but also in the evolution of the metrics you choose). Second, they are good for motivation. Having your success measured and being able to say “we were there, we are here, and we moved in the right direction” is a powerful tool. Also, in periods of doubt, it’s good to have a very concrete answer to “where are we going now?”.

What is an issue, though, is deciding on a metric without being crystal clear about where the numbers come from, what their sources of bias are, and what the limits of what is being measured are. A decent study of these questions prevents (or at least reduces) misinterpretation.

“Contributors”

It has been decided that there will be one metric: contributors. Three categories are being considered: coders, templaters, instructors. For each, a brief description is provided. But I’m sorry, even with my best effort, I can’t derive numbers from such vague definitions.

The question is asked at 26’15”: how do we define a contributor? Unfortunately, it doesn’t find its answer in the video, even though it’s at the core of the issue being discussed. I’ll try to elaborate on the question and get to the specifics: “when do we consider that one person is an additional contributor?”. If “one more contributor” is never defined, then the metric has no meaning and, as a consequence, “being (un)successful” is meaningless as well.

I’m not saying it’s an easy task, but for sure it is a necessary one, so I set up an Etherpad to try to define what a contributor is. Please join the discussion.

Protecting an idea

This post is a response to Séverin Naudet’s article. The first part is a direct response; the second discusses other forms of protection of innovation.

Response

A sterile debate has too often pitted free software against the notion of intellectual property.

=> Going by Wikipedia’s definition of intellectual property, there are two parts:
1) “literary and artistic property, which applies to works of the mind, is composed of author’s rights, copyright and related rights.”
This part is embraced by free software. All free software licenses rely on author’s rights/copyright to say, in essence, “I own the rights to this creation and I decide that the conditions for reusing the source code are…”.

2) industrial property, patents
This part causes a lot more teeth-grinding in free software communities. A patent on software makes about as much sense as a patent on a mathematical proof.

In the rest of my response, I will restrict the definition of “intellectual property” to “industrial property”, since there is consensus on the other part.

But the protection of intellectual capital…

The original idea of “protection” is interesting but unrealistic; I will come back to that. The fact is that, in too many cases, patents are used not to protect innovation but by big American companies to fire at each other with all guns blazing, or defensively. One has to understand that a good part of the American system relies on lawsuits. For instance, doctors’ fees factor in lawsuits from unhappy patients. The same mindset reigns in big companies. The relationship between companies and the courts in French culture is very different.

Unfortunately, while we would like patents to protect innovation, the reality is that innovation is sometimes slowed down by abusive use of software patents.

It is precisely because a company knows it is protected from infringement that it can commit to the investments that will make it grow. It is because an entrepreneur knows his invention is protected by a patent that he can take the leap and raise funds.

To quote the exact words of that sentence: “an entrepreneur knows his invention is protected by a patent”. Just to clarify, a patent protects nothing by itself. A patent is a piece of paper with technical information and signatures, filed away in an office. What it is supposed to provide is a better chance of winning a lawsuit against a potential patent infringer.

I was careful here not to say “what guarantees protection”, because there is no guarantee. One reason is that you have to prove the patent was infringed, which is not always easy. Even when it is possible, it takes time and money. When a big company “steals” an idea from a small one, the small one does not necessarily have the resources to start a lawsuit. Keeping the company alive is already a demanding activity; few are those who also take the time to prepare for lawsuits (or even just to file a patent).

A patent is not a protection. It is a potential protection that requires getting into a lawsuit. I concede that my definition is more depressing than the original vision, but it is surely more realistic.

There are also subtle cases where it is not certain that a patent would really be useful as protection.

How to protect your innovation?

Of all the entrepreneurs I have met, none “knew” that their invention was protected by a patent. They chose means other than the legal route to protect their inventions.

“I too had the idea of a worldwide social network”

XKCD perfectly satirizes the mindset of people who walked out of “The Social Network” saying “hey, I had that idea too”. Yes. Everyone has ideas, every day. But they are worth nothing. You can bury your head in the sand, go file patents and cry “my idea was stolen”, but it doesn’t matter: an idea that is neither developed nor distributed has no value.

A form of protection more effective than any patent is distribution. If I manage to spread my idea through a product, if I have customers, then my protection begins. People start to know my company for what is innovative about it and start to trust the quality of the product or service that embodies this innovation.

Others had the idea of a social network. None managed to spread better than Facebook. Mark Zuckerberg takes it all; put your patents away if you had any.

More theoretically, an idea that is never distributed in one way or another has no value for society. The mere fact of having had the idea should not grant any privilege, even if time was spent filing a patent.

“But if the game is in JavaScript, anyone can steal my code!?”

I recently attended two events about web video games, and this same remark came up each time. The best answer was given by Joe Stagner (who spent 10 years at Microsoft): there is no way to protect the intelligence of your application; just look at the decompiler that ships with Visual Studio.

The raw reality, for anyone with a basic idea of how software technically works, is this: you cannot protect a piece of software that you want the general public to use.

One could sink into depression, but no. Because real software innovation is not in a frozen version of a piece of software. Innovation lies in momentum. A development team knows its product, has a vision and can ship improvements much faster than a new team starting from some decompiled version. Protection of innovation comes from knowledge of the field and from the momentum that comes with knowing the product.

Conclusion

Software patents are a decoy, somewhat disconnected from reality. But innovation is not dead for all that, because innovators have found other, “natural” ways of protecting themselves: by convincing others of the innovative character of their ideas, by acquiring expertise that cannot be copied, by knowing how to adapt to changing needs, and by turning their ideas into quality products and services.

I have a rather negative opinion of software patents, but not so much for what they are. What bothers me more is the false promise of protection they give, and the way they divert the debate from what should really protect an invention: the proof of its quality through confrontation with the public, and the momentum that comes from expertise in the field.

Discriminating those who discriminate

Let’s start with a Mozilla drama

Some time ago, a Mozillian expressed his opinion on gay marriage. The post was relayed on Planet Mozilla, an aggregator of the blogs of many Mozillians. Planet includes technical as well as personal content. It’s not curated, so everything shows up, regardless of what it’s about and regardless of any endorsement by Mozilla or even the community. Anyway. Since the position isn’t the politically correct one, it became a drama.

It generated a lot of comments. I’ve been surprised by some, like: “Gay marriage isn’t far left wing. It’s good for everyone. Looking forward to the day when this is obvious in the Mozilla community.” What does it have to do with the Mozilla community? It was the opinion of one person!

A lot of my thoughts on people’s reaction is well summed up by Kairo:

Being open is to a large degree about accepting that other people have different views, even on controversial topics, tolerating their views, and not requiring other people to change those views to be able to work in the same community with them.
Unfortunately, it is often those who insist most on calling themselves “open” who seem unable to accept that.

I see this very often. Those who call for accepting differences and who are against discrimination are shockingly discriminating towards those who disagree with them on that point, displaying the exact attitude they reproach others for having.

Discrimination and work

Today, a post was published on RudeBaguette about recruiting a developer. Some parts are rather disturbing. When work is involved, the equation of how to handle discrimination becomes harder to solve.

Raw response

People can have gaps in their CV. They will always try to cover these up. The causes can be very different – from drug abuse, unemployment (especially in France), sickness, laziness, failed relationships. Without trying to pry into their personal lives, try to get an idea of these gaps if you see them. Obviously the drug abuse is a red flag…

So now, to be hired somewhere, you must never have touched drugs in your entire life? And maybe have no parking tickets either? How is it an employer’s business if someone did drugs and is now clean?

Obvious Red Flags – Offensive behavior: Any racist remark or female-bashing, is a no-no for me. In theory I don’t care about what people think about this stuff, but if they need to air it during a job interview, they can keep looking for that perfect job position at the Front National.

Ok, so now, expressing a political opinion gets you rejected. That’s plain recruitment discrimination. Aren’t there laws against that?

The grey area

I strongly disagree with the above quotes, especially with how radical (“red flag”) they are. I understand the underlying intention, however. It is definitely easier to work with people you agree with. It probably looks better for the company not to have an ex-junkie as an employee. But are those reasons not to hire someone?

I strongly disagree with all forms of extremism, whether discrimination or aggressive positive action, and I think a middle ground should be sought.

Going back to Mozilla, I do not know Gerv that well, but before his post appeared on Planet, what I knew of him was his work within Mozilla. And as far as I know, he works hard and fights for the same Mozilla mission. I don’t give a shit about his political opinions as long as they are expressed with respect for other opinions and stay within the realm of opinions. Caring about the same mission and working towards it is what made Mozilla the meritocracy it is.

I think what is true for Mozilla is also true for any human group. People are not all the same; they disagree on some things and agree on others. It doesn’t prevent them from working together on the same projects. There is no reason this couldn’t be true in a startup. And there is no reason being an ex-junkie or ex-alcoholic makes someone less reliable or efficient at a developer job.

Web technologies are implementation and content-driven

I’ve been teaching some web technologies lately and I found myself telling the exact same story over and over again, something along the lines of: “feature X was initially added in browser Y. Web developers used it, so other browsers copied it after reverse-engineering it, bugs included.” If the feature has bugs, it must be standardized as such, since content in the wild may rely on those particular behaviors (bugs included) to run or be rendered properly. Standardizing a feature in a way that is not backward-compatible with existing content is called “breaking the web” on standards mailing lists and, obviously, no one ever wants to do that!

The above story is pretty much how everything happened after HTML4, CSS2 and ECMAScript 3 (all released in 1998-1999). HTML5, CSS2.1 (and CSS3 to some extent) and ECMAScript 5 have been works of understanding and standardizing web technologies as they were actually implemented in browsers and used by web authors… and adding a few features. This work is still ongoing and isn’t likely to be finished anytime soon.

Examples

JavaScript

A very early example is the inclusion of JScript in Internet Explorer, soon after Netscape shipped JavaScript. The name doesn’t matter one bit; what mattered at the time was for Microsoft to be able to run the content (and specifically the scripts) written by web authors for Netscape. As the rumor goes, typeof null === 'object' was a bug in the original Netscape implementation that was copied by Microsoft (and later standardized, as it should have been, since changing it could have broken scripts).
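
For the record, the quirk is still observable in any JavaScript engine today; the snippet below is only an illustration of the behavior that had to be preserved:

    // A historical Netscape quirk, copied by JScript and eventually
    // standardized because existing scripts relied on it:
    console.log(typeof null);             // "object"
    console.log(null instanceof Object);  // false: null is not actually an object
    console.log(typeof undefined);        // "undefined", by contrast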

Microsoft innovations

In the opposite direction, Microsoft invented .innerHTML and <iframe>s, which were picked up by web authors and very soon implemented in other browsers, then later standardized as part of HTML5.
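
A minimal illustration of the kind of usage that spread (a sketch, not taken from any particular site):

    // innerHTML: parse an HTML string and replace an element's contents.
    // Introduced by Internet Explorer, later specified in HTML5.
    var container = document.getElementById('comments');
    container.innerHTML = '<p>No comments yet.</p>';

    // <iframe> followed the same path: invented by Microsoft, adopted everywhere.
    var frame = document.createElement('iframe');
    frame.src = 'https://example.com/widget.html';
    document.body.appendChild(frame);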

The day deployed jQuery influenced ECMAScript 5

I’ll let you read this interesting announcement, which describes how some deployed jQuery code forced a change in ECMAScript.

Enter the mobile web: Internet Explorer may consider implementing __proto__

It came as a shock when I first read that Microsoft was considering implementing __proto__, but it makes a lot of sense. The mobile world is dominated by browsers that implement __proto__ (Safari, Opera), so, naturally (?), people writing mobile libraries or mobile web content take it for granted.
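
The pattern such libraries rely on looks roughly like this (an illustrative sketch, not taken from any particular library):

    // Non-standard at the time, but supported by the dominant mobile engines:
    // re-parenting an existing object through the __proto__ property.
    var base = {
      describe: function () { return 'I am ' + this.name; }
    };

    var obj = { name: 'a plain object' };
    obj.__proto__ = base;           // change the prototype after creation
    console.log(obj.describe());    // "I am a plain object"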

-webkit-what?

There has been a huge debacle about browsers considering implementing -webkit- CSS properties. No one is really happy with the state of things, but the reality is that there is content relying on -webkit- properties. If a new browser wants to enter the mobile field, it has to render existing websites, and unfortunately it seems unavoidable to implement some -webkit- properties.

Just to clarify, I’m not expressing a judgment or an opinion here, just stating facts and drawing the natural conclusion that follows from them.

“WTF??!!1! H.264 videos in Firefox?? Is Mozilla forgetting its mission?”

An interesting thread started by Andreas Gal discusses the possibility for B2G to use OS codecs when they are available, even for H.264 (including .flv videos, certainly the most widespread video format on the web) and MP3. As one can imagine, support for these patent-encumbered formats in Firefox after “fighting” against them for so long is, at the very least, surprising. But it makes sense too: since there is no Flash on iPhones, it is very likely that web apps with video elements embed H.264 videos (the only format supported on iPhones according to caniuse). Android supports it as well.

Knowing that there is H.264 video content out there on mobile websites, what choice is left to Mozilla? Not render any video and never enter the mobile market? As hard as it is for me to accept, this is certainly not the right choice. I think supporting H.264 when the OS provides a codec for it is a reasonable sacrifice to make in order for Mozilla to reach users and bring its vision of how the web should be with B2G, Persona, apps and the like.

Anyway, back to the main topic…

Last but not least: “Encrypted Media proposal”

This example is not at as mature a stage as the previous ones. No one has implemented it and there is no content relying on it yet, but Microsoft, Google and Netflix are proposing an extension for encrypted media. As far as I’m concerned, just the initial picture seems too complicated to be realistic.

But technical difficulties are not that big of a deal. Some quotes in a discussion thread are more disturbing: “Our business wouldn’t be viable at all without regional restrictions.” or the no less excellent: “[Content Decryption Modules] implementations that may not be [Free and Open Source Software]-implementable *are* at this time (but not necessarily in the future) a business requirement for the commercial video provider members of the W3C”.

The careful reader will make the connection between the “at this time” and the “in the future” and realize that once the CDM technology is deployed and content relies on it, it doesn’t matter whether the “commercial video provider members of the W3C” change their mind: the content relying on the initial CDM technology will never be readable by FOSS, and that’s a bit annoying, isn’t it?

Counter-examples

ActiveX, VML

No one besides Microsoft ever really implemented these. Certainly because there was little to no content to support.

Firefox’s __noSuchMethod__

Hopefully, you don’t even know what that is, and that’s a good thing :-)

Standards and validators

So I’ve claimed that web technologies are driven by implementations and content, but there are technical standards and validators, right? Software standards used to be an almost sacred thing that implementors had to follow, and it was a working model. That’s how web standards were originally conceived. But that’s not a relevant model anymore. The WHATWG (the group behind HTML5) was founded by browser makers because web browsers needed to agree on what they implemented and to implement it in an interoperable way, with the “don’t break the web” rule as a priority that the W3C apparently had not understood (but Bruce Lawson explains it better than I do).

I really think we should stop thinking of standards as Bible-like texts and see them rather as implementation agreements. Obviously, because of the backward-compatibility constraint, implementors need to hear what web authors (aka “developers”) have to say about how they currently use the technologies, and standards mailing lists are open for that. Finally, web authors can provide feedback and suggestions for new features. That’s a rough and incomplete picture of how web standards currently work, and I’m glad they work like this.

Also, very much like implementations (software) evolve, so must the standards; hence the HTML Living Standard model.

In the end, there are validators. I have met people who would never ship a website if all its pages did not validate. This is a wrong idea. Validators are the continuation of the ideal that standards are sacred texts, and they are just as wrong. They are even more wrong if the validator is not kept up to date with the latest evolution of web standards (and I am confident they are not). Moreover, the analysis they provide is only partial. Like any piece of software, validators can have bugs. Worst of all, a validator does not guarantee that your content actually renders properly in web browsers (which is what you actually care about when writing a page). Anyway, you can use validators, but stay critical of their limitations.

HOWTO: remove annoying technologies

Most of the time, it won’t be possible. However, in some cases, a technology is only used in a certain way, and it is still possible to standardize it with a restriction. Obviously, this requires studying how people use the technology. It was not so hard for the __proto__ “property” and led to a very reasonable proposal that’s unlikely to break the web.
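
To give an idea of what “standardizing with a restriction” can mean here, consider the contrast between the pattern deployed code actually uses and a corner case a restricted specification can leave out (this is only a sketch of the general approach, not the actual proposal):

    // The pattern deployed code actually uses: assigning __proto__ on an
    // ordinary object.
    var child = {};
    child.__proto__ = { greet: function () { return 'hi'; } };
    console.log(child.greet()); // "hi"

    // A restricted standardization can support the pattern above while leaving
    // "prototype-less" objects out: on such objects the assignment creates an
    // ordinary data property, no magic involved.
    var dict = Object.create(null);
    dict.__proto__ = { greet: function () { return 'hi'; } };
    console.log(typeof dict.greet); // "undefined" in engines taking this approach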

It is a vastly more complicated task for vendor-specific CSS properties. Here, the outcome would be to decide which properties Mozilla is willing to implement rather than which properties could be removed, but the same result could be used by WebKit to remove the rarely used properties.

Looking forward

For a web technology to be adopted, it takes two ingredients: 1) implement it in a widespread browser, 2) create content using (and relying on) this technology. In retrospect, the whole prefix thing was doomed to fail. It was basically saying “we’ll do 1), but please, developers all around the world, don’t do 2)!”. Clearly not a winning strategy. Needless to say, every attempt to evangelize against prefixes is bound to fail, since it’s not possible to change all the content of the web (remember, it’s decentralized and all of that?).

Conclusion

If I were ever asked, my advice to browser vendors would be: be careful about what you implement and ship, otherwise everyone may get stuck!

What my PhD would have been on

Last year, in March, I joined the Software Engineering team at LaBRI to start a PhD. In late October, I dropped it. This post will describe what I worked on and the definition of my PhD subject as I left it.

Beginner wanderings

“Design and implementation of mobile applications in a REST ecosystem” was the vague topic I started with. My PhD was funded by a project aiming at studying software engineering practices in the realm of client-side web programming and applying them to a new approach to e-commerce.

From REST…

I remember a project I was involved in three years ago. I was a student and the project was a partnership between my school and a company. Some meetings or discussions involved words like “REST”, “Ajax”, “Drag and Drop”, “an HTML div”… All this jargon was unknown to me at the beginning and understood by the end, with the exception of REST, for which no one really had a definition I found satisfying. And the PhD topic I was given contained this word. So one of the first things I did was read Roy Fielding’s dissertation.

I recommend this reading to anyone who writes web applications. Alongside this recommendation comes a warning: from now on, I’ll slap in the face anyone talking to me about a “REST API” (call them “Web API”, “URL API” or “HTTP API” instead if you wish) or putting “REST” in a list that already includes “XML, JSON”.

Along the way, I had a discussion with my friend Thomas, a PhD student at UCI (where, coincidentally, Roy Fielding works), who mentioned CREST (Computational REST). The idea is interesting, but I didn’t really see how I could use it in practice. Worth mentioning, though, especially the first part, which studies how REST principles have been misapplied, along with some of today’s good practices.

… to cookies …

Part of Fielding’s dissertation discusses cookies and how they violate REST:

Cookies also violate REST because they allow data to be passed without sufficiently identifying its semantics, thus becoming a concern for both security and privacy. The combination of cookies with the Referer [sic] header field makes it possible to track a user as they browse between sites.

As a result, cookie-based applications on the Web will never be reliable. The same functionality should have been accomplished via anonymous authentication and true client-side state. A state mechanism that involves preferences can be more efficiently implemented using judicious use of context-setting URI rather than cookies, where judicious means one URI per state rather than an unbounded number of URI due to the embedding of a user-id. Likewise, the use of cookies to identify a user-specific “shopping basket” within a server-side database could be more efficiently implemented by defining the semantics of shopping items within the hypermedia data formats, allowing the user agent to select and store those items within their own client-side shopping basket, complete with a URI to be used for check-out when the client is ready to purchase.

“Anonymous authentication” is still something I have to figure out. Fortunately, I recently (in December, so after leaving the PhD) came across a different idea that could probably fill the same role.

However, “true client-side state” is a technology that now exists and is widely deployed: local storage.

… to client-side storage …

Cookies have been used as local storage for a long time, so I made a brief study of what was available to replace them. Actually, most of the work had already been done by someone else, so it was quite easy.
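
As an illustration of the “true client-side state” from the quote above, the shopping basket Fielding describes can be sketched with nothing but local storage (a toy sketch, not a complete application):

    // Keep the basket entirely on the client, as Fielding suggests, instead of
    // tying it to a server-side session identified by a cookie.
    function loadBasket() {
      return JSON.parse(localStorage.getItem('basket') || '[]');
    }

    function addToBasket(item) {
      var basket = loadBasket();
      basket.push(item);
      localStorage.setItem('basket', JSON.stringify(basket));
    }

    addToBasket({ id: 42, label: 'paper edition of the dissertation' });
    console.log(loadBasket()); // survives page reloads, no cookie involved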

Worth noting: at a meeting where I presented this, someone asked me “oh, you know HTML5? Have you heard about canvas?”. It was probably the strongest proof that I was in a different world, different from the one I was used to when talking about the web. Before this PhD, my discussions about the technical side of the web were mostly with people on standards mailing lists and at Mozilla (the overlap between the two is big). So yeah, I’ve heard about canvas…

Shift to repository mining and programming languages

While I was working on web architecture, the rest of the software engineering research team was working on… software engineering. They are developing a tool called VPraxis. In a nutshell, this tool lets you query a repository, with questions like “in 2011, who worked on classes that implement the interface X?”. The actual repository (SVN, Git, Mercurial…) is abstracted away and the tool is expected to be language-agnostic (and why not cross-language: imagine queries dealing with HTML classes used in CSS).

Several discussions on this topic with the team and with guests who stayed for a couple of days increased my interest. What I consider the most interesting part of my work was the definition of the “dependency upgrade problem” and ideas to help solve it.

The Dependency Upgrade Problem

The problem

The initial conditions are as follows: a developer (or a team) has a codebase using a dependency (for now, only one dependency is considered, since that is already enough work). A dependency can be a library (the developer writes an application using jQuery) or a platform (the developer writes software on top of Linux, or a Firefox add-on). Over time, this dependency changes (bugfixes, performance improvements, API changes…). The developer wants her code to work with the new dependency version. Most of the time, the developer can do this whenever she wishes, but in the case of a Firefox add-on, for instance, you have to adapt to the platform at a pace you do not decide (because it is “imposed” by the 6-week release schedule).


Here is how adapting code to a changing dependency currently works: the dependency author (or team) writes a changelog, the developer reads this changelog, figures out how the described changes affect her code and starts adapting her code.


This is hugely error-prone, for two reasons. First, the dependency authors are human beings, so the changelog (if it exists at all!) may be incomplete or inaccurate. Second, the developer needs to map what the changelog describes onto the parts of her own code that may be affected. Even if the changelog were perfect, it would still require a lot of work, and error-prone work at that (places can be missed, introducing new bugs).

An error-prone process on top of another error-prone process: no wonder people avoid upgrading as much as they can. Another consequence is what has become a good practice in library authoring: never (or almost never) break an API. The only reason this is a good practice is that breaking APIs requires more (error-prone) work from library clients. The downside is libraries that keep old code around forever, with “deprecated methods” that are never removed, growing in size and becoming harder to read and maintain. The size of a library is a particular problem on the web, to the point that jQuery is considering removing parts of its API.

Towards a partial solution

The potential of mining a repository and extracting fine-grained information about code that is changing gave me two complementary ideas to help solve the aforementioned problem.

First of all, the changelog. Dependency code lives in a repository. All changes between one version and another are stored somewhere in this repository. One idea is to build a tool that reads from the repository all changes that may affect client code and generates a semantic, machine-understandable changelog (with information like “this function has a new argument”, “the implementation of this function changed”, “these classes haven’t been touched at all”), rather than sentences written in a human language. First, this changelog would be complete, by virtue of being read from the repository. Second, the notion of “all changes that may affect client code” is probably undecidable, but conservative assumptions can be made; they would just make the machine-understandable changelog a bit bigger. It is worth noting that closed-source libraries (for which there is no public access to a repository) could release a semantic changelog to their clients without providing access to the repository itself.
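
To make the idea concrete, here is what a few entries of such a machine-understandable changelog could look like (a made-up format and made-up symbol names; the only point is that it is structured data rather than prose):

    // Hypothetical output of the changelog-extraction tool for a library
    // moving from version 1.4 to 1.5.
    var semanticChangelog = [
      { kind: 'signature-change', symbol: 'ajax',
        detail: { addedArgument: 'options.timeout' } },
      { kind: 'implementation-change', symbol: 'extend' },
      { kind: 'removed', symbol: 'browser' },
      { kind: 'untouched', symbols: ['each', 'map', 'trim'] }
    ];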

The second step is another tool that takes the developer’s code and the semantic changelog (hence the need for the changelog not to be written in human sentences) as input and produces suggestions on how to transition the code. Some adaptations could be fully automated (a public method rename), but most cannot, so a recommendation engine is probably the best that can be done. Combined with a decent UI, it would certainly be a big win.
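
A rough sketch of what that second tool could do, assuming the set of dependency symbols used by the client code has already been extracted (again, everything here is hypothetical):

    // Cross the semantic changelog with the symbols the client code actually
    // uses and emit human-readable recommendations.
    function suggestMigrations(changelog, usedSymbols) {
      return changelog
        .filter(function (entry) {
          return entry.symbol && usedSymbols.indexOf(entry.symbol) !== -1;
        })
        .map(function (entry) {
          switch (entry.kind) {
            case 'removed':
              return 'You use "' + entry.symbol + '", which no longer exists.';
            case 'signature-change':
              return 'Check every call to "' + entry.symbol + '": its signature changed.';
            default:
              return 'Re-test the code paths using "' + entry.symbol + '".';
          }
        });
    }

    // Using the semanticChangelog sketched above:
    console.log(suggestMigrations(semanticChangelog, ['ajax', 'each', 'browser']));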

Towards a solution

Of course, the two tools I suggest wouldn’t entirely solve the problem (since it’s almost certainly undecidable). Human beings would still need to do some work, but my intuition is that it could be reduced to the parts that only a human can do. On the bright side is the ability for a program to tell you that some parts of your code are not affected by the change. I intuit that having such information would be a powerful motivation to adapt the code. Imagine a tool telling you “the 80% of your codebase that lives in these files is unaffected by the dependency change”.

The end

I planned to work on these two tools for JavaScript (obviously?). I wrote a JavaScript static analysis tool to retrieve the API exposed by a file. That experience taught me that static analysis isn’t enough, so I gave up on the static analysis tool. Defining what a JavaScript API even is turns out to be difficult in itself, and I’m not sure I have found the perfect answer to that question yet. The only thing I’m sure of is that answering it requires both static AND dynamic analysis.
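
As an illustration of why dynamic analysis is needed, here is the most naive dynamic approach: load the library and enumerate what it actually exposes at runtime (a toy sketch; a real tool would also have to deal with getters, prototypes, lazily defined properties and so on):

    // List the function-valued properties an object exposes at runtime,
    // something purely static analysis cannot always recover when the API
    // is built dynamically.
    function exposedAPI(lib) {
      return Object.getOwnPropertyNames(lib).filter(function (name) {
        return typeof lib[name] === 'function';
      });
    }

    // A library whose API is invisible to naive static analysis:
    var lib = {};
    ['get', 'post', 'ajax'].forEach(function (verb) {
      lib[verb] = function () { /* ... */ };
    });

    console.log(exposedAPI(lib)); // ["get", "post", "ajax"]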

Anyway, I didn’t go further, but thought it was worth sharing where I left my work.

Another way of thinking online payment

I was reading an interesting post about encryption and it made me want to respond to what it says about credit cards.

Capabili-what?

Very soon after I joined es-discuss, I read some messages by Mark S. Miller. Soon enough, I watched his InfoQ talk. This talk introduces the notion of object capabilities. The talk and the concept blew my mind. “Modularity increases my security?” He also shows the problem (and a solution) of distributed secure currency. Any “smart” idea I write in this post is actually, more or less, already in that part of the talk.

Unrelatedly, I watched a talk by Douglas Crockford in which he suggested people go watch “The Lazy Programmer’s Guide to Secure Computing” by Marc Stiegler, which strongly emphasizes POLA, the Principle of Least Authority. I did, and took the same sort of mind-blowing shower. I would later learn that Marc Stiegler and Mark Miller have been working together.

This led me to start reading Mark Miller’s thesis (I haven’t finished yet, but I’m still working on it) and to watch some other talks. It also led me to read about petnames, rich sharing, website passwords, the web introducer and many other interesting things.

There are years of serious research poorly summarized in the links above. I highly encourage you to read and watch all of it, but I admit it takes a lot of time.

Thousands of credit card numbers stolen during the Sony PlayStation Network hack

People had given their credit card numbers to Sony. Sony got hacked. People were annoyed. Who is to blame? Sony, for its flawed security? Let’s take another look at the problem.

I want to pay…

I want to pay online. I want to buy one item once, or pay regularly (like a monthly payment to Sony). What option am I given? Handing over my credit card number. And this is a terrible idea!

…but not to give my credit card number!?

When I send my credit card number and whatever “secret” is written on the card, I do not authorize a one-time (or recurring) payment to one company for an amount of money I chose. Rather, I give anyone who reads this information the authority to make a payment of any amount, to anyone, at any time. And that is a source of insecurity.

Another way of thinking online payment

Here is how payment could happen: I go to my bank’s website, where a form lets me choose the amount I want to pay, who I want to pay and at what frequency (one time, once a month, etc.); the last two fields are optional. In exchange, the bank gives me a secret (a URL, for instance). I share this secret with whoever I want to pay. End of story.

Of course, this is just an example crafted in 2 minutes that could probably be improved.
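
Still, to illustrate the flow from the merchant’s point of view, here is a sketch in which every URL and behavior is invented (no bank exposes such an API today, as the conclusion below points out):

    // The customer created this capability on her bank's website (payee,
    // maximum amount and frequency were chosen there) and handed it to the
    // merchant. Knowing the URL is the only authority needed: no card number
    // ever changes hands.
    var paymentCapability = 'https://bank.example/pay/some-unguessable-token';

    // Charging the customer is then a plain HTTP request to that URL.
    fetch(paymentCapability, { method: 'POST' })
      .then(function (response) {
        if (response.ok) {
          console.log('Payment accepted, within the limits the customer set.');
        } else {
          console.log('Refused: capability revoked or limits exceeded.');
        }
      });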

“Oh fuck! Sony is getting hacked again!”

So, in my imaginary world, Sony (or anyone; it’s not about Sony, as you’ve understood) does not have access to my credit card number, only to a secret allowing payments to it alone, at a frequency I chose and for an amount I chose as well. Sony gets hacked? WHATEVER!

We could imagine extensions where I could tell my bank “this secret has been compromised, please stop paying through it”, “regenerate a secret with the same parameters”, and so on.

Conclusion

As Ben Adida mentions in his blog post, encryption is not the final answer to security. His analysis of how encryption may get in the way of social features is interesting.

I wrote this post to show that security without encryption can exist, even for payments. Object capabilities seem to have a huge, largely unknown and underused potential for achieving this form of security.

In the particular case I described, it doesn’t exist yet because it requires cooperation from banks. I’m looking forward to seeing banks implement this!

A response to “How Google is quietly killing Firefox”

Here is a response to this article.

Article summary

The article explains that browsers (all of them, including Firefox and Chrome, according to the author) crash more frequently because of a lack of memory, since web applications are now more JavaScript-intensive. The author explains that the browser is not always at fault for being memory-hungry; sometimes it is the web developer’s fault. I agree with this part. I would even add that having a garbage collector does not mean JavaScript avoids memory leaks. Some leaks happen at the application level, and I really think a “WebValgrind” should emerge to tell you, at the JavaScript level, where a web app leaks.
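
To give an idea of the kind of application-level leak no garbage collector can fix (a deliberately simplified example):

    // The GC can only collect what the application no longer references.
    // Here, an ever-growing array keeps every notification (and its detached
    // DOM node) alive forever.
    var notificationLog = [];

    function showNotification(message) {
      var node = document.createElement('div');
      node.textContent = message;
      document.body.appendChild(node);

      // "Remember" every notification forever: a leak by design.
      notificationLog.push({ message: message, node: node });

      setTimeout(function () {
        document.body.removeChild(node); // gone from the page...
      }, 3000);                          // ...but still reachable through notificationLog
    }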

Then the paranoia starts:

Mozilla’s greatest revenue source today (accounting for more than 80 percent of annual income) is Google. Mozilla is deeply dependent on Google for operating revenue.

Mozilla is not dependent on Google. Mozilla is dependent on search-related contracts. Asa Dotzler wrote about this 4 years ago. He was right at the time, and what he wrote still stands. Mozilla is not dependent on Google.

And it goes on:

If you buy the theory that most people who abandon Firefox do so because it crashes (runs out of memory) unpredictably, it stands to reason that all Google has to do to pick up market share in the browser world is publish AJAX-intensive web pages (Google Search, Gmail, Google Maps, etc.) of a kind that Firefox’s garbage-collection algorithms choke on — and in the meantime, improve Chrome’s own GC algorithms to better handle just those sorts of pages.

Response on different points

Google creates AJAX-intensive web apps that purposefully leak

This is a ridiculous accusation. Suppose all competitors ended up with better garbage collection: what would Google be left with?

Google makes AJAX-intensive applications for the purpose of improving the user experience. End of story. Memory leaks are the result of the current software attitude, which is to keep adding features without caring about long-term performance.

Memory leaks are made to make Firefox crash

This is ridiculous as well. Does anyone really think that Google web devs wake up in the morning thinking “hey, what if I added a few more memory leaks to make some browser crash?”. If the browser crashes with a given service (Google Maps, for instance), some people will change browsers, others will just change services, and this is not in Google’s interest.

Also, why does Firefox crash in the first place? Maybe Firefox should work on improving its memory management? Oh wait! They are already working on that!! (and these are just a few links). Once these bugs are fixed in Firefox, I guess Google web devs will have to work a lot harder to make Firefox crash. Good luck, guys!

On “Google is a uniform corporation with an evil plan to kill Firefox, booo!”

I recently attended JSConf.eu and had the chance to chat for a few minutes with Erik Corry after his talk on improving V8’s garbage collection. Call me naive or stupid, but the image I kept of him was that of a dedicated (maybe even passionate) engineer working to improve his product. Is he a slave serving an evil master plan to control the universe? I don’t think so.

On hyperbolic blog article titles

“How Google is quietly killing Firefox”, “Is Google Chrome the New IE6?”… What is the next title? “Chrome is bringing back the Nazis”?

Chris Heilmann already warned us about hyperbole (starting at slide 74). I really think we should stop with these titles, because they create more confusion than anything else. No, Chrome’s plan is not to kill Firefox by purposefully introducing memory leaks into its web apps. No, Chrome is not IE6. There are many differences.

I agree that there is some disturbing information about Google and its commitment to openness, but it does not make Chrome IE6.

Side note: it took me some time to understand why my previous post hit 1400 views, and I now get that its title was probably sort of flashy. I regret it.