A Few Thoughts on the iPad

Here is a jumble of thoughts about the iPad, after finally getting a chance to watch the Stevenote last night. If you haven’t watched it, I think that there are some parts of it that are worth watching, particularly the app developer parts. When I first saw the online coverage of the iPad announcement, I wasn’t that impressed. On the surface, the iPad is pretty unsurprising. It’s a tablet and it’s based on the iPhone OS (or maybe we should really be calling it OS X Touch).

User Interface

On the one hand, the iPad is the same iPhone OS that is familiar to 70 million iPhone users. On the other hand, some of the keynote demos show that the larger form factor is going to have some interesting UI potential.

I usually try to pay careful attention to presentations by game developers – not because I am a big gamer myself, but because the people doing games are usually doing some of the most insane, crazy, and interesting things in the business, and it’s worth paying attention to what they say is important. Both of the game demos for the iPad had some pretty interesting UI and commentary on the experience of the machine as a whole.

The other really interesting part of the keynote was the iWork demo. I am very impressed with the way that iWork has been adapted to the touch screen. There are a number of really cool multitouch gestures that were demonstrated. This is going to be the beginning of some very interesting user interface stuff.

Integration

I spent most of yesterday watching the Oracle/Sun strategy webcast, and a major theme was the way that Oracle plans to tightly integrate Sun’s hardware, and to optimize the entire hardware and software stack. The Oracle Exadata database machine was repeatedly touted as an example of this kind of integration. If the benchmarks and early customer experiences are indicative, this integration has paid off handsomely, as it also has with the Sun Storage 7000.

The new A4 processor powering the iPad received only brief mention during the keynote, but here too is the same kind of integration. Details on the A4 are very scarce, but speculation is that it was done by the team that Apple acquired from P.A. Semi. It appears to be an ARM-compatible (iPhone apps do run) system-on-a-chip design, and I would bet that it is contributing to the (relatively) low price, long battery life, and high performance (according to Gruber) of the device.

I think that it’s worth noting that companies like Google are also doing this kind of vertical integration, building their own custom PC designs, having custom Linux kernels and other software. Many of us in the “open” world decry vertical integration because it is almost inevitably closed, but the kind of engineering virtuosity that is on display does impress.

Wireless

Apple appears to have gotten iPad users a deal on 3G pricing from AT&T. I am not really sure that this is a step in the right direction. If Apple is to be believed, we are entering a world where a person could have no fewer than three devices (phone, pad, laptop) in need of wireless data (and voice) connectivity. A contract/plan for each device might be great for the carriers, but it is horrible for the users. Since even Apple has backed down in the face of the carriers, it doesn’t look like this is going to change much, but it ought to.

Me

Will I buy one? I’ve been toying with the idea of buying a Kindle for some time now. I wanted the size of the Kindle DX, since I wanted to read PDFs of books and research papers, but I felt that $499 for the DX was too much to pay for a book reader. The iPad is obviously a much more capable device than a Kindle, and I’d expect Amazon to upgrade their Kindle iPhone app to run on the iPad.

I think that the iPad would be vastly superior to my iPhone as a means of showing my photographic portfolio. I can also imagine using an iPad as a tethered shooting target, which would definitely be interesting. The tablet form factor could lead to some pretty interesting photography applications, and the iPad CPU appears to be reasonably capable.

I’ll say this much – I definitely want to play with one.

2009 in Photography

Here’s a roundup of what I saw through the lens in 2009.

January

This year I did a lot more work with local and regional dancers. Here’s a danceseattle rehearsal shot.

danceseattle rehearsal

February

I continued my role as the official photographer for Bainbridge Island Chinese Connection’s Chinese New Year Celebration.

Seattle Chinese Orchestra

March

I caught this shot of Guido van Rossum at PyCon by being in the right place at the right time. My camera was lying on the table next to me when Guido suddenly grabbed the Django Pony and started running down the aisle. He was moving fast enough that I had to snap off a bunch of frames to catch him in focus.

PyCon 2009

April

April was busy dance month. The Olympic Performance Group put on “The Toymaker’s Doll” (also known as Coppelia).

OPG Toymaker's Doll 2009

danceseattle had their first ever performance.

danceseattle Looking Glass Glimpses 2009

For the first time in a long time, I actually was able to show up to a Seattle Flickr Garage shoot.

UW Garage Shoot 9

May

May is when the weather in the Seattle area starts to get decent, so I was able to get some nature subjects in front of the lens.

Focus stacking experiments

Memorial Day Low Tide Beach walk

June

I headed to San Francisco for JavaOne in early June.

CommunityOne 2009

I finished out June with Bainbridge Ballet’s end of year recital.

Bainbridge Ballet Recital 2009

July

The Bainbridge Island Fourth of July Parade is always a family and photographic staple.

Bainbridge Island Fourth of July Parade

Also in July, we had the first guinea pig born in our house.

The latest addition to the family

August

I made it to a second Seattle Flickr garage shoot.

UW Garage Shoot 10

September

Senior pictures for Bainbridge High School are due at the end of September, and I did 4 sessions in the space of 12 days or so.

Blake - Class of 2009

Blake - Class of 2009

Stefan - Class of 2010

Matt - Class of 2010

Michael - Class of 2010

October

School was in full swing in October, and one of the science lessons that Julie did with the girls involved extracting DNA using Bacardi 151 rum.

Homeschool: Extracting DNA with Bacardi 151

November

This year was the 10th Anniversary of the Apache Software Foundation (and my involvement with it). I did take a few shots while I was at ApacheCon.

ApacheCon US 2009

ApacheCon US 2009

I was also fortunate enough to get a slot to J. Mark Wallace’s US Meetup Tour when it hit Seattle.

Images from the Mark Wallace US Meetup Tour

December

Photographically, December is dominated by the Olympic Performance Group’s production of the Nutcracker.

OPG Nutcracker 2009

OPG Nutcracker 2009



ApacheCon US 2009

[This post is late because I came down with the flu right after I got back from ApacheCon. I guess next year I will get a flu shot]

Talks

This year I was unable to attend all of the conference due to some scheduling problems, so I can’t give an in-depth report on the talks. I used some of the time that I might normally have spent in talks to catch up with people that I haven’t seen in a while. I was able to attend a good number of the talks in the Hadoop track. The track was larger than last year’s track (due in part to a larger room), but I felt that last year’s track was stronger. It might also be that I’ve become a bit more familiar with Hadoop, making it harder to make a big impression. It’s definitely the case that there was a lot of interest in Hadoop, and I expect that to continue.

ApacheCon US 2009

Unfortunately, I missed the NoSQL meetup during the Apache BarCamp. I think that there could/should have been an entire NoSQL track, especially given the fact that Cassandra and CouchDB are both frequently mentioned NoSQL technologies, and both are housed at Apache.

One talk that surprised me was Ross Gardler’s talk Teaching and Learning about Open Development. Originally I didn’t think that I would have time to stay for that talk slot, but a rearrangement of my return flight loosened my schedule so that I could stick around. Ross is the chairman of the newly created Community Development PMC at Apache. This is a new effort aimed at improving the experience of contributors and new committers. Some of the people on the PMC have been heavily involved in the ASF’s Google Summer of Code outreach, and will be bringing their experiences over with them. It seems like this PMC will also be a good place for people concerned about diversity issues to dig in and help in a concrete fashion.

Celebration

This year’s ApacheCons have been a celebration of the 10 year anniversary of the founding of the Apache Software Foundation. At Oakland, there was a cake, a proclamation from the Mayor of Oakland, and (I didn’t get to see this) a letter of congratulations from the Governor of California. Rather than try to describe the festivities in prose, I’ll leave you with some photos:

ApacheCon US 2009
ApacheCon US 2009
ApacheCon US 2009
ApacheCon US 2009
ApacheCon US 2009

The entire set of photos is up on Flickr.

10 Years of Apache


November is just around the corner, which means that once again it’s time for ApacheCon US. This year is a special year for the Apache Software Foundation – its 10 year anniversary. Since I got involved with Apache just a few months after the foundation was created, it is also my 10 year anniversary of being involved in open source software.

This year I am going to be speaking twice. On Wednesday I’ll be speaking on the Apache Pioneers Panel, and on Thursday I’ll be giving a talk titled How 10 years of Apache has changed my life. I owe a huge professional debt to the ASF and its members and committers, so in my talk I’ll be interweaving important events in the life of the foundation with my own personal experiences and lessons learned.

Unfortunately, I’m not going to be there for all of the conference this year – I’ll be arriving Tuesday afternoon and flying out on Thursday evening. If you want to meet up, I’m in the ApacheCon Crowdvine, and I’ll be around with camera in hand (and on the LumaLoop).

The LumaLoop

Back in September, my friend James Duncan Davidson stopped to visit me and the family here on Bainbridge Island. Duncan has been working on a new design for a camera strap, and during that visit he showed me one of the prototypes of the LumaLoop. I spent a good portion of our time playing with the strap, and was quite taken with the design. Needless to say, I didn’t really want to give it back to him when it was time for him to go.

The following week at DjangoCon, I lost the strap portion of my Upstrap quick release strap. I liked the Upstrap, but it wasn’t ideal. The Upstrap was great because of the non-slip rubber pad that they use – it really won’t move. But like most other camera straps, I found that I was constantly getting it fouled in my arms, especially when switching between landscape and portrait modes.

Duncan had promised me one of the early prototypes of the LumaLoop, so I put the official black and neon yellow strap on the D3 and waited patiently. Yesterday, my LumaLoop arrived, and I quickly installed it in place of the Nikon strap. The LumaLoop is a “sling strap” similar to the Black Rapid R-Straps that have become popular recently. The Black Rapid straps screw into the tripod socket on your camera, which is a problem if you have any kind of heavy duty tripod plate mounted on your camera, or if you shoot vertically a lot (this is even more of a problem if you have small hands and a camera with a battery grip). The LumaLoop attaches to one of the regular strap mounts on your camera, and once attached, you can slide the camera up and down the strap. The mounting loop is attached with a quick release clip, so swapping cameras/straps is easy as well. Duncan has a series of blog posts that detail the reasoning behind the design.

Here’s a quick snapshot of mine:

My Luma Labs LumaLoop camera strap

You can see the loop part that goes on the camera, as well as the quick release between the loop and the rest of the strap. It’s a bit harder to see the padded non-slip shoulder pad.

The LumaLoop is going to be available from Luma Labs sometime very soon (Duncan gave me permission to talk about the LumaLoop in advance of its general availability). You can follow Luma Labs on Twitter to keep up with all of the news and the official announcement. I’m excited to have a strap that both holds my camera securely and stays out of my way when the action gets going.

Concurrency => Parallelism

I wanted to clarify a point from my post The Cambrian Period of Concurrency.

I made the statement

From where I sit, this is all about exploiting multicore hardware

because I’ve seen a pile of actor and other concurrency libraries which have not taken parallel execution of the concurrent program seriously. If I am going to go to the trouble of writing a concurrent program, then I want that execution to be parallel, especially in a multicore world.

Simon Marlow from the GHC team said that if programming multicore machines is the only goal, we ought to be looking at parallelism first and concurrency only as a last resort. Haskell has some nice features for taking advantage of parallelism. However, I explicitly stated that I was not as interested in highly regular or data parallel computations, which is what Haskell’s parallelism tools are aimed at. These are fine ways to get parallelism, but I am interested in problems which are genuinely concurrent, not just parallel. In a Van Roy hierarchy, these are the problems with observable nondeterminism. I also specifically called out reduction of latency as one of my goals, something which Marlow says is a possible benefit of concurrency. The GHC team is interested in a different mix of problems than I am.
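To illustrate the distinction, here is a minimal Python sketch of what I mean by observable nondeterminism (the example and its names are mine, not Van Roy’s): two producers race to a shared queue, and the interleaving the consumer observes is not determined by the program text, even though each producer’s own messages stay in order.

```python
import queue
import threading

# Two concurrent producers write to one shared queue.
q = queue.Queue()

def producer(name):
    for i in range(3):
        q.put((name, i))

threads = [threading.Thread(target=producer, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The consumer observes *some* interleaving of the two streams; which
# one depends on the scheduler, i.e. it is observably nondeterministic.
order = [q.get() for _ in range(6)]
print(order)
```

A parallel-but-not-concurrent computation (say, a data-parallel map) has no such observable interleaving; the nondeterminism is exactly what makes these problems different in kind.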

Van Roy in short

I also forgot to mention Peter Van Roy’s paper Programming Paradigms for Dummies: What Every Programmer Should Know, which includes an overview of his stratification of concurrency and parallelism (and other stuff). If you don’t have time to read his book, the paper is shorter and more digestible.

The Cambrian Period of Concurrency

Back in July, I gave an OSCON talk that was a survey of language constructs for concurrency. That talk has been making the rounds lately. Jacob Kaplan-Moss referred to it in a major section of his excellent keynote Snakes on the Web, and Tim Bray has cited it as a reference in his Concur.next series. It seems like a good time for me to explain some of the talk content in writing and add my perspective on the current conversations.

The Cambrian

The Cambrian period was marked by a rapid diversification of lifeforms. I think that we are in a similar situation with concurrency today. Although many of the ideas that are being tossed around for concurrency have been around for some time, I don’t think that we really have a broad body of experience with any of them. So I’m less optimistic than Tim and Bruce Tate, at least on time frame. I think that we have a lot of interesting languages, embodying a number of interesting ideas for working with concurrency. I think that some of those languages have gained enough interest/adoption that we are now in a position to get a credible amount of experience so that we can start evaluating these ideas on their merits. But I think that the window for doing that is pretty large, on the order of 5 to 10 years.   

What kinds of problems

The kinds of problems I am interested in are general purpose programming problems. I’m specifically not interested in scientific, numeric, highly regular kinds of computations or data parallel computations. Unlike Tim, I do think that web systems are a valid problem domain. I see this being driven by the need to drive down latency to provide good user response time, not to provide additional scalability (although it probably will).

It’s not like Java

Erik Engbrecht, one of Tim’s commenters said:

To get Java, you basically take Smalltalk and remove all of the powerful concepts from it while leaving in the benign ones that everyday developers use.

I think there’s something to be learned from that.

This presupposes that you know what all the good concepts are and what the benign ones are. It doesn’t seem like we are at that point. When Java was created, both Lisp and Smalltalk had existed for quite some time and it was possible to do this kind of surgery. I don’t have a clear sense of what actually works well, much less what is powerful or benign.

The hardware made me do it

From where I sit, this is all about exploiting multicore hardware, and when I say this I mean machines with more than 4 or 8 hardware threads (I say threads, not cores – actual parallelism is what is important). The Sun T5440 is a 256 thread box. Intel’s Nehalem EX will let you build a 128 thread box later this year. Those are multicore boxes. If you look at experiments, you see that systems that seem to work well at 2 or 4 threads don’t work well at 16 or 64 threads. Since there’s not a huge amount of that kind of hardware around yet, it’s hard for people to run experiments at larger sizes. Experiments being run on 2-thread MacBook Pros are probably not good indicators of what happens at even 8 threads. This is partially because dealing with more hardware threads requires more administrative overhead, and as the functional programming people found out, that overhead is very non-trivial. The point is, you have to run on actual hardware to have believable numbers. This makes it hard for me to take certain kinds of systems seriously, like concurrency solutions running on language implementations with Global Interpreter Locks. See David Beazley’s presentation on Python’s Global Interpreter Lock for an example.
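The GIL point is easy to reproduce yourself. Here is a sketch along the lines of Beazley’s demonstration (my paraphrase, not his exact code): a CPU-bound countdown run sequentially and then split across two threads. On a stock CPython with the GIL, the threaded version yields no speedup and is frequently slower, because the threads are concurrent but never parallel.

```python
import threading
import time

def countdown(n):
    # Pure CPU-bound work; no I/O, so under the GIL the threads
    # cannot actually execute in parallel.
    while n > 0:
        n -= 1

N = 2_000_000

# Run the work twice, sequentially.
start = time.perf_counter()
countdown(N)
countdown(N)
sequential = time.perf_counter() - start

# Run the same total work split across two threads.
start = time.perf_counter()
threads = [threading.Thread(target=countdown, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.3f}s  threaded: {threaded:.3f}s")
```

On a multicore machine you would hope for the threaded time to approach half the sequential time; with the GIL it doesn’t, which is exactly why such systems are hard to take seriously as multicore solutions.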

Comments on specific languages

At this point I am more interested in paradigms and constructs as opposed to particular languages. However, the only way to get real data on those is for them to be realized in language designs and implementations.

  • Haskell – Functional laziness aside, the big concurrency thing in Haskell is Software Transactional Memory (STM). There are other features in Haskell, but STM is the big one. STM is an active research field in computer science, and I’ve read a decent number of papers trying to make heads or tails of them. Among the stack that I have read, it seems to be running about even between the papers touting the benefits of STM and the papers saying that STM cannot scale and will not work in practice. The jury is very much out on this one, at least in my mind.
  • Erlang – I like Erlang. It’s been in production use for a long time, and real systems have been built using it. In addition to writing some small programs and reviewing some papers by Erlang’s designers, I spent a few days at the Erlang Factory earlier this year trying to get a better sense of what was really happening in the Erlang community. While there’s lots of cool stuff happening in Erlang, I observed two things. First, the biggest Erlang systems I heard described (outside of Facebook’s) are pretty small compared to a big system today. Second, and more importantly, SMP support in Erlang is still relatively new. Ulf Wiger’s DAMP09 presentation has a lot of useful information in it. On the other hand, BEAM, the Erlang VM is architected specifically for the Erlang process/actor model. This feels important to me, but we need some experimental evidence.
  • Clojure – Clojure has a ton of interesting ideas in it. Rich Hickey has really done his homework, and I have a lot of respect for the work that he is doing. Still, it’s the early days for Clojure, and I want to see more data. I know Rich has run some stuff on one of those multiple-hundred-core Azul boxes, but as far as I know, there’s not a lot of other data.
  • Scala – The big thing in Scala for concurrency is Actors, but if you compare to Erlang, Actors are the equivalent of Erlang processes. A lot of the leverage that you get in Erlang comes from OTP, and to get that in Scala, you need to look at Jonas Boner’s highly interesting Akka Actor Kernel project. Akka also includes an implementation of dataflow variables, so Akka would give you a system with Actors, supervision, STM, and Dataflow (when it’s done).   
  • libdispatch/Grand Central Dispatch – Several of Tim’s commenters brought up Apple’s Grand Central Dispatch, now open sourced as libdispatch. This is a key technology for taking advantage of multicore in Snow Leopard. GCD relies on programmers to create dispatch queues which are then managed by the operating system. Programmers can send computations to these queues via blocks (closures), which are a new extension to Objective-C. When I look at Apple’s guide to migrating to GCD from threads, I do see a model that I prefer to threads, but it is not as high level as some of the others. Also, the design seems oriented towards very loosely coupled computations. It will be several years before we can really know how well GCD is working. I am typing this post on a 16 thread Nehalem Mac Pro, and I rarely see even half of the CPU meters really light up, even when I am running multiple compute intensive tasks. Clearly more software needs to take advantage of this technology before we have a verdict on its effectiveness in production.
  • .Net stuff like F#/Axum, etc – There is some concurrency work happening over on the CLR, most notably in F# and Axum. I spent some time at Lang.NET earlier this year, and got a chance to learn a bit about these two technologies. If you look at paradigms, the concurrency stuff looks very much like Erlang or Scala, with the notable exception of join patterns, which are on Martin Odersky’s list for Scala. I will admit to not being very up to speed on these, mostly for lack of Windows and the appropriate tools.
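As a footnote to the GCD item above: since libdispatch’s real API is C with blocks, here is a rough Python analogy of the queue model it uses. The class and method names are invented for illustration and are not Apple’s API; the point is just the shape of the model – callers asynchronously submit closures to a queue, and a runtime-managed worker drains it in order.

```python
import queue
import threading

class SerialQueue:
    """Toy analogy of a GCD serial dispatch queue (names invented)."""

    def __init__(self):
        self._q = queue.Queue()
        # In GCD the OS manages the worker threads; here one daemon
        # thread stands in for that runtime.
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def _drain(self):
        while True:
            block = self._q.get()
            block()           # run the submitted closure
            self._q.task_done()

    def dispatch_async(self, block):
        # Analogous to dispatch_async(): enqueue a closure and return
        # immediately, without waiting for it to run.
        self._q.put(block)

    def join(self):
        # Wait for everything submitted so far to finish.
        self._q.join()

results = []
q = SerialQueue()
for i in range(3):
    q.dispatch_async(lambda i=i: results.append(i * i))
q.join()
print(results)  # → [0, 1, 4]
```

Because the queue is serial, the closures run in submission order with no locking in user code – which is a large part of what makes the model easier to reason about than raw threads.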

Other thoughts

Jacob’s takeaway from my talk at OSCON was “we’re screwed”. That’s not what I wanted to convey. I don’t see a clear winner at the moment, and we have a lot of careful experimentation and measuring to do. We are quite firmly in the Cambrian, and I’m not in a hurry to get out – these things need to bake a bit longer, and we need more experimentation.

In addition to my talk, and Tim’s wiki page, if you are really interested in this space, I think that you should read Concepts, Techniques, and Models of Computer Programming by Peter van Roy and Seif Haridi. No book can be up to date with the absolute latest developments, but this book has the best treatment that I’ve seen in terms of trying to stratify the expressiveness of sequential and concurrent programming models.

DjangoCon 2009

Last week I attended DjangoCon 2009 in Portland. Due to scheduling conflicts, I wasn’t able to attend DjangoCon last year, and I was disappointed that I missed that inaugural event. I’ve seen some Django stuff at PyCon, and I’ve written some Django code, but being at a conference like DjangoCon helps me to understand the community and technology in a way that just reading the documentation doesn’t.

Talk Highlights

Here are some of the talks that I found notable:

Shawn Rider and Nowell Strite of PBS gave a talk titled: Pluggable, Reusable Django Apps: A Use Case and Proposed Solution. I think that very few people in the Django community had any idea that Django was being used extensively in PBS. That definitely falls into the category of pleasant surprises. One of the strengths of Django is the focus on building small single purpose applications which can be used to build up larger applications. Doing this is harder than it sounds, and Shawn and Nowell described some of the problems that they ran into, as well as showing some ways of dealing with those issues. There was some crosstalk between the PBS guys and the Pinax developers, who are also doing a lot with reusable apps. I hope that these folks will work to share and combine their knowledge and then disseminate that to the broader Django community.

There were a number of talks which followed the theme of “how not to use parts of Django”. It’s interesting because people like Django, and even if they don’t like some parts, they want to use the rest, and are willing to work to make that possible. Normally you would expect people to just walk away from the framework in cases like these.

Eric Ocean, one of the developers of SproutCore, gave what I considered to be a pretty interesting talk. Unfortunately, his talk didn’t have much of a connection to Django, other than to suggest some things that Django could do to support SproutCore better. I know from watching the IRC and Twitter backchannel that people were put off by the style of presentation (a little too much like a commercial), and the weak connection to Django. SproutCore is interesting to me because it’s at a different level than most of the Javascript frameworks. It’s at a higher level, which I think will be necessary as browser-based applications become more sophisticated. I know that I am going to be taking a closer look at SproutCore, and I hope that a useful Django/SproutCore collaboration will emerge from the sprints.

Simon Willison gave a keynote about cowboy programming. The big piece for me was his description of how the Guardian built an application to help the public scrutinize the expenses of British MPs. There’s something about these situations that appeals to me, against my better judgment about “sound software engineering practices”. I guess it’s a guilty pleasure of sorts.

Ian Bicking gave a keynote which might be described as “a free software programmer’s midlife crisis”. Ian was very philosophical and reminded us that free software (as opposed to open source software) was rooted in a set of moral (not economic or process) imperatives. It was a very thoughtful speech, and I think that it’s worth several reads of his text (something which is hard to do on a train ride with an iPhone) and some additional ponderings.

Avi Bryant’s keynote took its root in his experiences building Trendly. As one might expect, Avi started building Trendly using Seaside. But by the time he finished, he noticed that very little of Seaside was actually being used. He attributed this to the fact that Trendly’s architecture involves loading a single HTML page, with a ton of Javascript. That Javascript then manages all of the interaction with the server, which consists of snippets of JSON data. This rang true to me because we used a similar architecture for Chandler Hub, the web based version of Chandler (our interaction with the server was based on Atom and AtomPub, not JSON), and it’s the kind of architecture that GMail is based on. Avi also treated us to a demonstration of Clamato, his Smalltalk dialect that compiles to Javascript. Again, another attempt to deal with the challenges of engineering large Javascript applications in a web browser.

There were plenty of other good talks, and many of the slides are already available.

My keynote

I’m afraid that I am not equal to the task of writing out my presentation text in full as Ian and Jacob have done, so you will have to settle for the highlights and wait until the video appears.

My keynote was organized around two major sections.

The first section was a look at what I see in the Django community at present. This includes a look at some pseudo statistics around job postings and a poll of web frameworks being used by startups in an effort to get some view into whether and how much adoption of Django is happening. The short answer is that things look promising, but there is still plenty of room to grow. On the technology side, I pointed out the emphasis on combining applications and the work of the Pinax and PBS folks. The other major technology thing that I called out was GeoDjango, which is undoubtedly the most sophisticated GIS functionality in any web framework in any language or platform. This is going to be very attractive to people building location-aware mobile apps, and I showed two examples of augmented reality applications as illustrations. This section ends with some observations about the Django community, using the PyCon sprints as an example. Ok, there are also some lighthearted slides about Django’s mascot, the djangopony.

The remainder of the talk was about the ways that web applications are changing and how Django might adapt to them. There are (at least) three groups of people that will be impacted by these changes. From the viewpoint of users, the two big things are richer, more interactive applications, and access from location-enabled devices. Developers are going to need help in dealing with these new requirements, and the people who operate web applications need much more support than they currently have.

I see several technologies that will be important in facilitating these changes. The first of these is some Rich Internet Application technology. The second is APIs to web applications. A digression on this point. When the iPhone was introduced, the only way to develop applications was using web technologies. This made a lot of people very angry, and Apple followed up with the ability to build native platform applications. It should be possible to build rich web interfaces on the iPhone. My observation is that given the choice between a rich web interface and a native iPhone application, users pick the native application. Look no further than the furor over the native Google Voice application. The native applications are talking to the servers using APIs. Those APIs are not just cool Web 2.0 frosting. The last technology is cloud computing, which started out as a deployment/operations technology and is now moving up to impact application development at many levels.

In light of this, what are framework developers to do? I did a quick survey of several web frameworks which have interesting ideas or approaches in them, so that the Django folks could see what their “competitors” are up to. The frameworks that I included are Rails, Lift (Scala), Webmachine (Erlang), Nitrogen (Erlang), CouchApps (CouchDB + JavaScript – this isn’t quite a framework in the traditional sense, but it met the spirit of my criteria), and Javascript. In the case of Javascript, the observation is that the rapid increase in Javascript performance coupled with a good Javascript framework leads to something which is economically attractive (same technology in the server and client).

The talk finishes with a set of proposals for “science projects” that might be attempted in the context of Django. Some of what I outlined is emerging, and some of it is speculative. Django doesn’t need to blow itself up and start over. Instead, what’s needed is for people with Django sensibilities to look at some of these problems and see if a Django flavored solution can be found. Here’s the list of projects:

  1. Asynchronous Messaging – if there’s any use of messaging, it’s typically to do jobs in the background. What would happen if we made the use of messaging pervasive throughout the framework?
  2. Comet – I think that the Django+orbited approach to Comet is limited in comparison to what you see in Lift or Nitrogen. Can Django do Comet support at the same level (or better) than these frameworks? What would happen if the Comet stuff were hooked up directly to the messaging stuff I just described? Imagine the equivalent of urls.py that routed Comet requests to messaging.
  3. REST – There are several good packages for dealing with REST in Django. It would be nice to have this all packaged up neatly and made available for people.
  4. Deployment – This is really a mess. Are there changes that could be made to Django to make it easier to deploy, or to work better with tools like Puppet, Chef, Fabric, etc?
  5. Monitoring – Typically frameworks provide very little monitoring information. It seems like there is a lot that could be done here.
  6. Analytics – Once you have raw monitoring information, the next step is to do some analytics on it. Django is famous for creating admin UI’s with a very small amount of effort. What if we applied that same thinking to analytics?
  7. Cloud – If you add up the first 6 items, you are well on your way to what might be a cloud friendly framework. There is still some work to do in terms of making applications on the framework adapt to elastic deployment scenarios, but it would be a good step.
  8. Stacks – A very basic step towards cloud stuff would be to build a preconfigured stack of software to run/develop a Django app. This is a controversial idea, because everyone has their own idea of what software should be in such a stack, and how all the configuration switches should be set. I still think that having one (or more) such stacks would help more than it would hurt. In my ideal world this stack would be delivered as a virtual machine image that could also be uploaded to cloud providers.
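To make the “equivalent of urls.py” idea in item 2 a bit more concrete, here is a purely speculative sketch of what routing Comet messages might look like. None of this exists in Django; the `messagepatterns` list, the `dispatch` function, and the handler names are all invented for illustration, deliberately echoing the shape of `urlpatterns`:

```python
import re

# Hypothetical handlers, playing the role that view functions play in urls.py.
def chat_handler(msg, room):
    """Handle a message posted to a chat room channel."""
    return f"[{room}] {msg}"

def presence_handler(msg):
    """Handle a presence update."""
    return f"presence: {msg}"

# Speculative analogue of urlpatterns: regex patterns over Comet "channel"
# names, mapped to message handlers instead of views. Named groups become
# keyword arguments to the handler, just as they do for URL captures.
messagepatterns = [
    (r"^chat/(?P<room>\w+)$", chat_handler),
    (r"^presence$", presence_handler),
]

def dispatch(channel, msg):
    """Route an incoming message to the first handler whose pattern matches."""
    for pattern, handler in messagepatterns:
        m = re.match(pattern, channel)
        if m:
            return handler(msg, **m.groupdict())
    raise LookupError(f"no handler for channel {channel!r}")
```

The point of the sketch is just that the declarative, pattern-to-callable style that Django developers already know could plausibly carry over to message routing; a real implementation would have to deal with long-lived connections, fan-out, and integration with whatever messaging layer item 1 produced.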
Here are the slides:


All of the talks were videotaped, so those of you who were unable to attend will be able to catch up soon.
I had a great time hanging out with the Djangonauts. My thanks to Leah Culver and Robert Lofthouse for inviting me to speak.

Re: Snakes on the Web

Jacob Kaplan-Moss, one of the BDFLs for Django, has published the text and slides for his upcoming talk(s) “Snakes on the Web” at PyCon Argentina and PyCon Brazil. Jacob says that he’s trying to answer three questions:

  1. What sucks, now, about web development?
  2. How will we fix it?
  3. Can we fix it with Python?

There’s some good stuff in here, and it’s definitely a worthwhile read.

Jacob is in South America, which means he won’t be at DjangoCon next week. I was disappointed about that before I saw his talk, and I’m even more disappointed now. I’ll be giving the last keynote at DjangoCon, and I’ll be discussing some thoughts and ideas for where Django (and other web frameworks) might go next. It would have been a great opportunity to carry on the conversation in person. I guess we’ll be doing it by blog instead.

One of the topics that Jacob covered in his talk was concurrency, and he pointed to my OSCON talk on concurrency constructs as something that has influenced his thinking. I do think that he got the wrong idea from my conclusion. At the moment I don’t see a clear solution for concurrency, but I don’t believe that the situation is hopeless, either. I think that we are looking at a period where we have a lot of experimentation, and I think that’s a good thing. It is way premature to say that we have a solution, and I’d rather people keep experimenting.

I have some more thoughts on some of Jacob’s points, but those are in the keynote, so I’ll save them until after I’ve actually given the presentation at DjangoCon on Thursday. I suspect that the other keynote speakers, Avi Bryant and Ian Bicking, have some thoughts in this general direction as well. I think it would be great to have an open space on these topics sometime on Thursday.

Design and Commons Based Peer Production

On Tuesday, Chris Messina wrote a post about open source and design, where he laments that open source (whatever that means nowadays) and design seem to be opposed to each other. The crux of the problem seems to be that good design requires a unity about it and that since in open source all voices are heard, you inevitably end up with something that has been glommed together rather than designed. This is something that Mimi Yin and I discussed in our 2007 OSCON talk about the challenges of the Chandler design process. Chris is gloomy on the prospects of open source design processes, because he doesn’t feel that there are any examples that have succeeded. I think that this is a legitimate place to be. I don’t really see any successful open source desktop application which was designed in the kind of open design process that Chris or we at OSAF had in mind.

Is organization the problem?

On the other hand, I think that I’m slightly more optimistic about the situation than Chris is. Chris holds up the idea that there ought to be a design dictator, who drives the design and preserves the unity of the design. I’d point out that there are some open source communities where there are such people. Perhaps the best examples that I can come up with are the programming languages. A good language is very hard to design. It needs to have the kind of unity that one expects to find in a good design. In some of the language communities, these designers have titles such as “Benevolent Dictator for Life”, and the community as a whole has recognized their giftedness and given them the ability to make final binding decisions about design issues. This isn’t end user facing desktop or web software, but it’s also not bunches of libraries, or implementations of JSRs, IETF RFCs, W3C recommendations, or POSIX standards. These situations are very delicate mixes and their success is highly dependent on the particular individuals who are involved, so they tend to be rare. Nonetheless, I do think that it’s possible for communities to work even if there is a chief designer.

I also don’t think that there needs to be a single chief designer. Chris cited Luke Wroblewski’s description of the design process at Facebook. Very early in that post you read:

The Facebook design team works on product design, marketing, UI patterns, branding, and front-end code. The team consists of 15 product designers, 5 user interface engineers, 5 user experience researchers, 4 communication designers, and 1 content strategist. 25 designers in a company of 1,000.

Design can be done by teams. I think that we all know that, but in many of the discussions that I’ve had on this topic, the focus seems to be on the need for a dictator. The dictator model works, but so does a team model.

I think that the organizational challenges of design (dictator vs team) can be dealt with. If you bootstrap a community with a DNA that is committed to good design, and to the value of a good designer, and if the designer has won the respect of the community, then I can see a path forward on that front.   

The problems that I see

In my mind the problems are:

How do you find a designer or designers who want to work in this kind of environment? We know that not all developers are well suited to distributed development. I’d venture that the same is true for designers. It’s much easier for coders to self-select into a project than it is for all other types of contributors, including designers.

How can a non-coding designer win the respect of a (likely) predominantly coding oriented community? If you believe that open source projects should be organized around some notion of merit, then what are the merit metrics for designers? Who evaluates the designers on these metrics? Are the evaluators even qualified to do so? In my examples of communities with designers, those designers are all coders as well.

Can we actually do design using the commonly accepted tools of e-mail, version control, wikis, and bug trackers? The design process relies very heavily on visual communication. The coding process (including the design and architecture of code) is predominantly a text based process. It is very difficult to do design efficiently in a distributed setting using the existing stable of tools. This is going to be a challenge not just for designers but for many other problem domains that could benefit from commons-based peer production.

What’s with you and that long word?

I prefer to use Yochai Benkler’s term “commons-based peer production” instead of the term open source. The problem with the term open source is that everyone means something different when they use it. Some people just mean licensing. Some people think of a particular community’s set of practices. Others think that it means some kind of fuzzy democracy and mob rule.
One of the reasons that I went to work at OSAF was to see if it was possible to design a good end-user application via some kind of community oriented design process. As far as I am concerned the jury is still out.