Monday, June 30, 2014

Evil Hangman and functional helper functions

Evil Hangman looks like an ordinary game of hangman, but even if you cheat by knowing all of the possible words it can still be a challenge. Try out your skill right now at http://icefox.github.io/evilhangman/

The game is evil because it never picks a word; instead, every time the user guesses a letter it finds the largest subset of the current words that satisfies the guesses so far and makes that the new list of current words. For example, if there are only the following six words and the user guesses 'e', the program will divide the words into the following three answer groups and then pick _ee_ because it is the largest group. This continues until the user is out of guesses or the group size is 1, in which case the user has won.

_ee_ : beer been teen
__e_ : then area
____ : rats

A few months ago I saw a version of this written in C that took up hundreds of lines of code; while it was efficient, it was difficult to read and modify.  With only 127,141 words in the entire dictionary file, many of the complex optimizations for memory, data structures, and algorithms were silly when running on any modern hardware (including a smartphone).  The code should instead concentrate on correctness, ease of development, and maintainability.  Using JavaScript primitives combined with the underscorejs library, the main meat of the program fits neatly in just 24 lines, including blank lines.  Using map, groupBy, max, and other similar functional helpers I replaced dozens of lines of code with just a handful of very concise ones.
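To make the evil step concrete, here is a minimal sketch of that grouping logic using underscore.js. It is not the actual 24-line program, and the function name applyGuess is just illustrative:

var _ = require('underscore');

// Given the remaining candidate words and a guessed letter, keep only the
// largest group of words that share the same reveal pattern.
function applyGuess(words, guess) {
    var groups = _.groupBy(words, function (word) {
        // Build the pattern this word would reveal, e.g. "beer" with 'e' -> "_ee_"
        return _.map(word, function (ch) {
            return ch === guess ? ch : '_';
        }).join('');
    });
    // The biggest group becomes the new list of current words.
    return _.max(_.values(groups), function (group) {
        return group.length;
    });
}

applyGuess(['beer', 'been', 'teen', 'then', 'area', 'rats'], 'e');
// -> ['beer', 'been', 'teen'], the _ee_ group from the example above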


For a long time most of my projects were coded in C++ using the STL (or similar libraries) for my collections.  I had a growing sense of unhappiness with how I was writing code.  Between for loops sprinkled everywhere and the way the STL doesn't include convenience functions such as append(), my code might have been familiar to another STL developer, but its intent was always harder to determine.  As I played around with Clojure I understood the value of map/filter/reduce, but didn't make the connection to how I could use it in C++.  It wasn't until I started working on a project that was written in C# and learned about LINQ that it all came together.  So many of the for loops I had written in the past were doing a map/filter/reduce operation, but in many lines compared to the one or two lines of C#.

When codewars.com was launched I tried to solve as many problems as I could using JavaScript's built-in map, filter, and reduce capabilities.  I discovered that I could solve the problems faster and the resulting code was easier to read.  Even limiting yourself to just map, filter, and reduce, and ignoring other functions like range, some, last, and pluck, dramatically changes how easily others can read your code.  The intent of your code is much more visible.  Given the problem of "encrypting" a paragraph of text in pig latin, here are two solutions:

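Roughly, the two approaches look like this (a sketch assuming a simple pig latin rule of moving the first letter of each word to the end and appending "ay"; the function names are illustrative):

// Solution 1: an imperative for loop that accumulates a string.
function pigLatinLoop(paragraph) {
    var words = paragraph.split(' ');
    var result = '';
    for (var i = 0; i < words.length; i++) {
        var word = words[i];
        result += word.slice(1) + word.charAt(0) + 'ay';
        if (i < words.length - 1) {
            result += ' ';
        }
    }
    return result;
}

// Solution 2: chaining split, map, and join.
function pigLatinMap(paragraph) {
    return paragraph.split(' ').map(function (word) {
        return word.slice(1) + word.charAt(0) + 'ay';
    }).join(' ');
}

pigLatinMap('hello world');  // -> "ellohay orldway"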

Using chaining and map it is clear that the second solution does three things: splitting the paragraph into words, doing something with each word, and combining them back together.  A reader doesn't need to understand how each word is being manipulated to understand what the function is doing.  The first solution is more difficult to reason about, leaks variables outside of the for loop scope, and is much easier to have a bug in.  Even if you only think of map, filter, and reduce as specialized for loops, they increase a developer's vocabulary: by seeing a filter() you instantly know what the code will be doing, whereas with a for loop you must parse the entire thing to be sure.  Using these functions removes a whole class of issues where the intent is easily hidden by a for loop that goes from 0 - n, 1 - n, or n - 0 rather than the common case of 0 - (n-1), not to mention bugs stemming from the same variables being used in multiple loops.

Functional style helper functions in non-functional languages are not new, but historically they haven't been the easiest to use, and most developers were taught procedural-style for loops.  It could just be the Baader-Meinhof phenomenon, but it does seem to be a pattern that has been growing over the last decade, from new languages supporting anonymous functions out of the box, to JavaScript getting built-in helper functions, to C++ gaining anonymous functions in C++11.  Given the rise of projects like underscorejs, and the fact that Dollar.swift was created so shortly after Swift was announced, I fully expect that code following this style will continue to grow in the future.

Thursday, March 06, 2014

How to stop leaking your internal sites to Gravatar while still using them.

Gravatar provides the ability for users to link an avatar to one or more email addresses, and any website that wants to display user avatars can use Gravatar. This includes not just public websites but internal corporate websites and other private websites. When viewing a private website, even over SSL, the browser will send a request to Gravatar that includes a Referer header, which can leak information to Gravatar.
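A site that displays Gravatar avatars typically builds the image URL by hashing the user's email address; here is a minimal Node.js sketch of Gravatar's MD5-of-the-lowercased-email scheme (the email address is illustrative):

var crypto = require('crypto');

// Build the avatar URL for an email address: MD5 of the trimmed,
// lowercased address appended to the gravatar.com avatar path.
function gravatarUrl(email) {
    var hash = crypto.createHash('md5')
        .update(email.trim().toLowerCase())
        .digest('hex');
    return 'https://www.gravatar.com/avatar/' + hash;
}

gravatarUrl('someone@example.com');

The browser then fetches that URL as an ordinary image, and the Referer header for the page doing the embedding rides along with the request.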

When you view the commits of a repository on GitHub, such as this one https://github.com/icefox/git-hooks/commits/master, you will see a Gravatar image next to each commit.  In Chrome, if you open up the inspector and view the Network headers for the images, you will see, among other things, that it is sending the header:

  Referer: https://github.com/icefox/git-hooks

Over the past decade URLs have, for the better, gained more meaning, but this can result in insider information leaking through the Referer header to a third party. If you were working for Apple and running a copy of GitHub internally, it might not be so good to be sending https://git.apple.com/icefox/iwatch/browse out to Gravatar. Even private repositories on GitHub.com are leaking information: if your repository is private, but you have ever browsed its files on GitHub, you have leaked the directory structure to Gravatar.

While it seems to be common knowledge that you don't use 3rd party tools like Google Analytics on an internal corporate website, Gravatar images seem to slip by. Besides outright blocking, one simple solution (of many, no doubt) I have found is to make a simple proxy that strips the Referer header and then point Gravatar traffic at that machine. For Apache (with mod_headers and mod_proxy enabled) that would look like the following:

<VirtualHost *:8080>
    RequestHeader unset Referer
    RequestHeader unset User-Agent
    ProxyPass /avatar http://www.gravatar.com/avatar
</VirtualHost>
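With that in place, internal pages would reference avatars through the proxy rather than gravatar.com directly, something like the following (the hostname is illustrative; add SSL to the proxy if the pages themselves are served over SSL):

<img src="https://avatars.internal.example.com/avatar/205e460b479e2e5b48aec07710c08d50">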

Edit: This post has spawned a number of emails, so I want to clarify my setup:

Using Chrome version 33 I browsed to a server running Apache set up with SSL (the URL would look like https://example.com/), and that page had a single image tag like so:

<img src="https://www.gravatar.com/avatar/205e460b479e2e5b48aec07710c08d50">

When fetching the image, Chrome will send a Referer header of https://example.com/ to gravatar.com.

While Chrome's inspector says it sends the header, just to be sure it wasn't stripped right before the fetch I set up a fake Gravatar server with SSL that dumped the headers it received, pointed the page at it, and found that, as expected, the Referer header was indeed being sent.
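Something along these lines is all such a fake server needs to do; this is a minimal Node.js sketch rather than the exact server I used, and the certificate paths are placeholders:

var https = require('https');
var fs = require('fs');

var options = {
    key: fs.readFileSync('server-key.pem'),    // placeholder certificate paths
    cert: fs.readFileSync('server-cert.pem')
};

// Log every header the browser actually sends, including Referer,
// and answer each request with an empty response.
https.createServer(options, function (req, res) {
    console.log(req.url, req.headers);
    res.writeHead(200);
    res.end();
}).listen(443);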

For all of those who told me to go look at the spec, I would recommend that you read it too (rfc2616#section-15.1.3); it only talks about secure to non-secure connections, which is not the case we have here:

Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.

Thursday, January 23, 2014

Ben's Law

When every developer in a company is committing to the same branch, the odds that a commit will break the build increase as more developers are hired.

Sunday, December 15, 2013

Tragedy of the commons in software

Unowned resources that are shared in software seem to inevitably end up disorganized.

A few instances of this I have seen include:
  • Shared libraries
  • Shared revision control repositories
  • Shared database
  • Shared folders
  • Shared log files
When different projects use the same shared resource they often have different needs, goals, rules, and cadences.  The resources themselves usually don't provide a way to split things up cleanly, and one project ends up spilling over into another.  Two simple examples: in a source code repository one group might name their branches release/minor_release while another group follows build/*, and in a shared library one project might declare static objects that eat up RAM, harming another group that is trying to reduce memory usage.

The inevitable cleanup of the shared resource grows into a monumental and bureaucratic task.  Even what seems like the simple question of who maintains or owns something can be a large undertaking, and at least once the answer turns out to be a guy who is no longer with the project (and, by the way, it is no longer used).

Because the resource is shared by many different users it already takes up a decent amount of "stuff" (RAM, disk space, bandwidth).  An admin team is in charge of making sure more "stuff" is added when needed, and the users inevitably take advantage of this; in one extreme example a user decided to check a Visual Studio install into the revision control system (deep within the source, too).  From their perspective they don't really feel the pain of everyone else suddenly being burdened with an additional 10GB+.

Some projects run very lean and clean.  They have very strict rules about how things should work and be stored, but many more don't, and as time marches on and new projects are added they end up with cruft all over the place and dependencies across what should have been the project divides.  This abuse of the shared resource ends up hurting all of the projects.  Rules are put in place, and even when there is a good reason they are difficult to change.

What seems inevitable is that some projects slowly start breaking off and using a different resource, hopefully one that is not shared; in the process the old shared resource gets less attention and is unlikely to ever recover as more and more of it becomes unmaintained.  Sadly it is often not one big thing but many small problems that users put up with until one day they realize that abandoning everything will give them a significant boost.

The only solution to this problem I am aware of is, first, to recognize the problem early and, second, to have a steward: someone whose job it is to rapidly respond to problems and anticipate new ones before users start to leave.  The steward's job includes occasionally striking down long-held rules about the resource when they are found to be harmful, and being the one who causes all the projects pain by forcing a migration.  It is only through these actions that the shared resource can maintain its viability in the long run.

Wednesday, April 03, 2013

Code analysis reporting tools don't work

Code analysis tools are good at highlighting code defects and technical debt, but when and how the issues are presented to the developer determines how effective the tool will be at making the code better.  Tools that only generate reports nightly will be orders of magnitude less effective than tools that inform developers of errors before a change is put into the repository.

A few weeks ago I played with a code analysis tool that generates a website showing errors it found in a codebase.  Like most reporting tools, this one was made to run on a nightly cron job to generate its reports.  Reflecting on my career, I have never seen tools of this type produce more than a small improvement in a project.  After introduction there are a few developers who strive to keep the area they maintain clean, and even smaller pockets of developers who use the tools to raise the quality of their code to the next level, but they are the exception and not the norm.  A scenario I have seen several times over my career was a project that had tools to automatically run unit tests at night.  With this in place you would expect failures to be fixed the next day, but often I saw the failures continue for weeks or months and only get fixed right before a release.  Once a commit is in the repository the developer moves on to another task and considers it done. You could almost call it a law: before a developer gets a commit into the repository they are willing to move the moon to make the patch right, but after it is in the repository the patch will have to destroy the moon before they will think about revisiting it, and even then they will ask if you want to fix it so they don't have to.  This means that code analysis reporting tools are able to make only a small impact, nowhere near the desired result.

After pondering why the reporting tools do so poorly and how they could be improved to make a bigger impact, I finally figured out what was really nagging at me: these tools were created because our existing processes are failing.  If we could catch the issues sooner it would both be cheaper to fix them and eliminate a whole class of time wasters. While you could think about new developer training, better code reviews, mentoring, etc., all of which can be improved, a simpler solution is to move the tools' abilities closer to the time when the change is made.

In 2007 I started a project that included local commit hooks with Git.  Any time I had something that could be automated it was added as a hook: when you modified foo.cpp it would run foo's unit tests, code style checking, project building, XML validation, and more.  This idea was wildly successful, and there were only a few times (~six?) in the lifetime of the project that the main branch failed to build on one of the three OSes or had failing unit tests.  More importantly, the quality of the code was kept extremely high throughout the project's lifetime.   In the much larger WebKit project, when you put up a patch for review on the project's Bugzilla a bot would automatically grab the patch and run a gauntlet of tests against it, adding a comment to the patch when it was done.  Often it finished before the human reviewer even had a chance to look at the patch.  These bots would catch the same technical debt problems as the reporting tools, but because the results were presented at review time they would be cleaned up right then and there, when it was cheap and easy to do. Automatically reviewing patches after they are made but before they go into the main repository is a very successful way to prevent problems from ever appearing in the code base.
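The hook idea is simple enough to sketch; this is an illustrative pre-commit hook written as a Node.js script with hypothetical project scripts, not the original hooks:

#!/usr/bin/env node
// .git/hooks/pre-commit -- run checks only for the files staged in this commit.
var execSync = require('child_process').execSync;

var staged = execSync('git diff --cached --name-only', { encoding: 'utf8' })
    .split('\n')
    .filter(Boolean);

staged.forEach(function (file) {
    if (/\.(cpp|h)$/.test(file)) {
        // Hypothetical project scripts; substitute whatever your project uses.
        execSync('scripts/check_style ' + file, { stdio: 'inherit' });
        execSync('scripts/run_unit_tests ' + file, { stdio: 'inherit' });
    }
});
// Any non-zero exit from execSync throws, which aborts the commit.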

But why stop at commit time?  Many editors have built-in warnings, from code style to verifying that the code parses.  A lot has been written about LLVM's improved compiler warnings, and even John Carmack has written about how powerful turning on /analyze is for providing static code analysis at compile time.  Much more could be done in this area to find and present issues to developers as soon as they create them, or even in real time.

Code analysis reporting tools will always be useful because they can provide a view into legacy code, but for new code, projects using error reporting before commit time with hooks, bots, and editor integration will actually prevent technical debt and do more for quality than nightly reports ever could.

Wednesday, August 22, 2012

The minimal amount of data needed to lock in users

I recently upgraded to OS X Mountain Lion only to find that RSS support wasn't just moved out of Mail, but out of Safari too.  RSS bookmarks were the only reason I was still using Safari on a daily basis, so this removal is forcing me to migrate them somewhere else and, in the process, ending my daily usage of Safari.

Stepping back, I realized how crazy it was that I was using Safari to read RSS.  For the last five years I have been working on WebKit and browsers; three of those years (until RIM legal killed it) were spent making my very own browser called Arora.  And yet through all those years I still kept using Safari, because the switching cost of the RSS feeds was "too high" (I had a Mac around with Safari, so why not just keep using it...).  I even started hacking on a desktop RSS reader at one point.  RSS feeds are not locked into Safari; the Export Bookmarks action is right there in the File menu*, and Safari doesn't keep feed data for more than about a month, so it wasn't even the RSS history I cared about, just the URLs.

Here is a case of the bare minimum of data lock-in, and yet it was able to keep, for years, a user who writes browsers (including RSS feed plugins) and uses a different OS as his primary desktop.  In the past when I thought about data lock-in I thought about databases, custom scripts, iCloud, but with this I realize that the bar is much lower.  It wasn't until they forcefully took away the feature that I sat up in a daze, wondered what I was doing, went looking for an alternative, and in the process am going to abandon the application entirely.

Now imagine you are a Windows user and suddenly none of your apps work on the new Metro ARM-based laptops.  It is probably the needed kick in the pants to sit up and go check out what those Macs, web applications, and iPads are all about.  Scary stuff for Microsoft.

* You would think that with Safari RSS users suddenly losing their RSS feeds, apps like NetNewsWire would provide a bookmarks import, but oddly they don't (as of yesterday, when I checked the current version).

Wednesday, May 23, 2012

When publishing on two platforms, one will end up being the "lesser" of the two.


When a company produces a product for multiple platforms, invariably one of the platforms is the primary platform. This can take a number of forms, such as:

  •  Releasing to one platform first.
  •  Releasing updates only to one platform.
  •  Releasing a reduced feature set for the later platforms.
  •  Releasing a product for a later platform that works, but doesn't fit in with or follow that platform's UI guidelines.
  •  The primary platform is stable while the secondary ones have bugs/crash.

Some big examples:

  • Video drivers: Windows XP vs. Linux
  • Flash: Windows vs. Linux
  • Video games: PS3/360 vs. the Wii
  • Mobile apps: iPhone vs. Android
  • Git: Linux vs. Windows
  • Books: physical vs. ebook
  • DVDs: U.S. vs. Australia

There are many reasons why this happens, such as management believing that the primary platform will make more money, the company (or the developers) having more experience with the primary platform, or something as silly as the CEO getting the primary platform for Christmas and mandating that it be the primary. The secondary platforms are seen as nice to have and a possible extra source of revenue, but it would be foolish to think that they will have the same quality and features as the primary platform.

If the company hits hard times it will kill the secondary platforms first.  If the product is ever killed, it will almost always be killed on the secondary platforms first.

This is often a frustrating thing for the consumer as they typically can't do much about it, but at least realizing that you are on a secondary platform can help you schedule extra testing time and lower your expectations about what you will get.

The one nice thing is that once you realize product X is the future, its primary platform is not your platform of choice, and you believe your platform is the future, then there is an opportunity.  The Wii can't run 360 games, but it does have a set of games that take advantage of its hardware and that can't run on the 360.  Ebooks coming from publishers won't replace traditional books, but a company that creates a reading product targeting tablets first and physical books second will come to dominate ebooks.

Look around at the tools and products that you use.  What is their primary platform?  Is that your platform?
