Tag Archives: programming languages

You were never meant to do that with SQL

There seems to be a lot of hatred for SQL in the world at the moment: I can’t think of any other reason why the term NoSQL would catch on in the way that it has, when the key technological distinction is actually the lack of ACID guarantees. ACID is entirely orthogonal to whether or not SQL is used, as evidenced by non-ACID MySQL and by HiveQL, which offers a pretty familiar SQL-like interface on an entirely non-traditional backend.

I wonder whether one of the unspoken reasons for this hatred is that at one point or another almost everyone has ended up doing this sort of thing:

   builder.Add("SELECT foo FROM bar WHERE id = ");
   builder.Add(id.ToString());  /* unescaped value: an injection waiting to happen */

   if (additionalConstraint)
      builder.Add(" AND frobbable = 1 ");

   /* ... ad nauseam ... */

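For what it’s worth, the value-splicing part of that pattern can at least be tamed with placeholders, even though the conditional string concatenation remains. A minimal Python/sqlite3 sketch (the table, column and function names are hypothetical, borrowed from the snippet above):

```python
import sqlite3

def fetch_foo(conn, record_id, additional_constraint=False):
    # The statement is still assembled conditionally, but values stay out
    # of the string: the driver handles quoting via the ? placeholder.
    sql = "SELECT foo FROM bar WHERE id = ?"
    params = [record_id]
    if additional_constraint:
        sql += " AND frobbable = 1"
    return conn.execute(sql, params).fetchall()

# Demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bar (id INTEGER, foo TEXT, frobbable INTEGER)")
conn.executemany("INSERT INTO bar VALUES (?, ?, ?)",
                 [(1, "a", 1), (1, "b", 0), (2, "c", 1)])
rows = fetch_foo(conn, 1, additional_constraint=True)
print(rows)  # [('a',)]
```

Note that this only fixes escaping; the structural ugliness of building the query text piecemeal is exactly the point being made here.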
SQL is a hard language to like: it’s never been properly standardised (or rather, it has, but the standard has never been implemented), meaning that you spend too much time worrying about compatibility. Its theoretical underpinning is poor, leading to constructions that are hard for the engine to optimise (meaning more manual work).

However, SQL is a language in its own right, and was never intended to be generated programmatically by another programming language. This shouldn’t come as a surprise, as I struggle to think of any programming language that has been designed to work in this way.

Using SQL from a decent command-line environment is a powerful tool and often a pleasure to use. By comparison, generating SQL programmatically is an abomination that would be worthy of The Daily WTF were it not for the fact that nobody’s ever invented an API that offers the same flexibility.

Personally, I blame the vendors. Until RDBMSs can offer the same quality of optimisation that modern compilers can (that is, you write in a high-level language and never even think about micro-optimisation), high-performance relational database access will remain a sea of vendor-specific optimiser hacks. Maybe there’s a theoretical reason why optimisers will never be this good, in which case perhaps we do need to abandon the relational model in practice. But let’s not pretend it has anything to do with SQL.

Pass the Markup Hat

This is a game for ten or more programming languages. The rules are as follows:

  1. Every single non-alphanumeric character in the ASCII set is put into a hat. Players draw from the hat without seeing what they are drawing.
  2. At the beginning of the game, LISP draws two matched parentheses from the 2-card “LISP deck”, which is then exhausted. Both LISP and the LISP deck play no further part in the game.
  3. The players sit in a circle, and play proceeds clockwise. Programming languages choose in a random order, apart from ALGOL, who chooses first.
  4. Each player borrows 3 characters from the player to their right, and draws from the hat as many additional random punctuation characters as they feel they need to create an expressive language and a related documentation format. They must then choose an escape character, which must not be the same as any of their characters, but must duplicate a non-escape character picked by a previous language.
  5. At any point during the 1990s, a player may play the Unicode card. From then on, the original ASCII hat is supplemented by a Unicode bin containing every single character used by a living language, and most dead ones.
    • NB: It is a common house rule that all players ignore the Unicode bin, with the exception of F#, who chooses last and often has to root around in it for some unused characters.
  6. Any player who feels that the game is proceeding too slowly may play the SGML card, and may pick any matched pair of punctuation used by a previous player and redefine it to be the foundation upon which further markup is based. Subsequent players no longer need to define a documentation format, though they must now pick two escape characters.
  7. Once all the players have chosen their punctuation, each must write an operating system. Points will be deducted for any code that is valid in more than one language.

First impressions of F#

I’ve just read through Foundations of F# and written one or two trivial scripts in F#, so it’s too early to make a proper balanced assessment of it as a language. However, I wanted to record my immediate reactions to it so far while they are still fresh in my mind.

I don’t have a strong background in functional programming, but I’ve dabbled with Common Lisp, Haskell, Clojure and Scala. I’m excited by the new trend of functional languages targeting the JVM and the .NET CLR, since this seems to solve the major problem with functional languages of the past: the lack of library code of sufficient quantity and quality.

Targeting either of the widely available virtual machines seems to be a double-edged sword, however. From what I can tell, neither the JVM nor the CLR is well suited to functional languages. Clojure, Scala and F# all seem to have (and I’m being polite here) idiosyncrasies forced upon them by the underlying runtime.

My immediate reaction to F# is that it seems to work hard to make it easy to interact with OO languages on the CLR, at the expense of being a great functional language in its own right. Haskell and Clojure both have strong (and different) concepts of lazy evaluation that permeate the language and let you program in a whole new way. If F# has this, it’s not been obvious to me in the first couple of hours using it.
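To illustrate the kind of pervasive laziness meant here — computation deferred until a value is actually demanded — a sketch using Python generators (an illustrative analogy only, not F#, Haskell or Clojure code):

```python
import itertools

def naturals():
    # An infinite sequence, produced lazily: nothing is computed
    # until a consumer asks for the next element.
    n = 0
    while True:
        yield n
        n += 1

squares = (n * n for n in naturals())            # no work done yet
first_five = list(itertools.islice(squares, 5))  # forces exactly five elements
print(first_five)  # [0, 1, 4, 9, 16]
```

In Haskell this style is the default everywhere; in Clojure it pervades the sequence library. The question raised above is whether F# gives it comparable prominence.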

Nor (as far as I can tell) does F# have particularly good support for parallelisation, which is one of the key advantages of functional programming for me. In my opinion Clojure has the strongest claim here, having been built from the ground up to be parallelised safely. Haskell apparently has very strong tool support in the form of GHC. What does F# offer in this direction? It’s not at all clear to me, but with the ability to create mutable fields with a single keyword, and no support for inferring immutability via the type system, it doesn’t fill me with confidence.
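The link between immutability and safe parallelism can be sketched in a few lines (again a Python analogy, not F# code; the function and data are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def transform(point):
    # A pure function over immutable inputs (tuples): it reads its
    # argument and returns a new value, mutating no shared state,
    # so it can safely run on many threads at once.
    x, y = point
    return (x * 2, y * 2)

points = [(1, 2), (3, 4), (5, 6)]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(transform, points))
print(results)  # [(2, 4), (6, 8), (10, 12)]
```

The moment `transform` starts writing to shared mutable state, that guarantee evaporates — which is why a single `mutable` keyword, with no type-level tracking of it, feels like a weak foundation for parallelism.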