Ted Leung on the air: Open Source, Java, Python, and ...
... to John Lam on his new job at Microsoft. Another step forward for dynamic languages. Who knows, maybe now John and I will even get to shoot some photos more than once a year...
In the comments to my post about IronPython and JRuby there were some comments about C libraries for Python or Ruby which would be unusable in a CLR or JVM based implementation. This is correct, of course, and it is a problem for people trying to port software across implementations.
Earlier this week, Joel Spolsky made some comments about Ruby performance which triggered a bunch of posts, including a lesson from Avi on the 20-year-old technique of in-line method caching. David Heinemeier Hansson weighed in with a post titled "Outsourcing the performance-intensive functions", where he argues that one of the benefits of scripting languages is that you can "cheat" by calling functions written in some other language.
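Avi's point is easiest to see in code. Here is a toy sketch of a monomorphic inline cache written in Ruby; real VMs do this inside the interpreter or JIT at each call site, not in user code, and `CallSite` is purely an illustrative name I made up:

```ruby
# A toy monomorphic inline cache: each call site remembers the last
# receiver class it saw and the method it resolved, skipping the full
# method lookup when the class matches. (Illustrative only -- it
# ignores singleton methods and cache invalidation.)
class CallSite
  def initialize(name)
    @name = name
    @cached_class = nil
    @cached_method = nil
  end

  def call(receiver, *args)
    if receiver.class != @cached_class
      # Cache miss: do the expensive lookup and remember the result.
      @cached_class  = receiver.class
      @cached_method = @cached_class.instance_method(@name)
    end
    # Cache hit (or freshly filled cache): reuse the resolved method.
    @cached_method.bind(receiver).call(*args)
  end
end

site = CallSite.new(:upcase)
site.call("hello")  # miss: looks up String#upcase
site.call("world")  # hit: reuses the cached lookup
```

Since most call sites in real programs see the same receiver class over and over, the cached path is taken almost every time, which is why the technique pays off.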
Of course, that capability isn't limited to scripting languages. Other languages like Smalltalk, Lisp, Dylan, and others have foreign function interfaces that let them talk to C code, and SWIG, which is a favorite tool for making it easy to bind C libraries to scripting languages, also works for those languages. Reusing existing C code is a fine and worthwhile thing to do.
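To make the foreign function idea concrete, here's a minimal sketch using Fiddle from modern Ruby's standard library (not one of the tools mentioned above) to call the C library's floor() directly. It assumes floor is visible in the current process's symbol table, which holds on typical Linux and macOS Ruby builds:

```ruby
require "fiddle"

# Open the current process's own symbols (dlopen(NULL)) and look up
# the C math library's floor(), which the Ruby binary already links.
handle = Fiddle.dlopen(nil)
floor = Fiddle::Function.new(
  handle["floor"],
  [Fiddle::TYPE_DOUBLE],   # one double argument
  Fiddle::TYPE_DOUBLE      # returns a double
)

floor.call(2.7)  # => 2.0
```

The same pattern scales up to whole C libraries, which is exactly the "outsourcing" move under discussion.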
I don't agree, however, that users of dynamic languages should just agree to outsource high performance functions to C. The whole point of using a dynamic language is developer productivity, and that should be the case for performance critical code as well. And it's not like this is an impossible task either. There are implementations of dynamic languages which are very efficient, and applying those techniques to "scripting languages" is a worthwhile endeavor, which is being pursued by folks like the PyPy team. As Avi also points out, the StrongTalk VM has now been open sourced, which may make it easier for language implementors to adopt some of the rich body of work that has been done on dynamic language performance.
Will there be cases where even the most advanced implementation techniques won't yield enough performance? Sure. That's why C compilers have a feature called in-line assembly code. But you rarely see it used. Having to rewrite my performance critical dynamic language code in C should be a rarity. The better the VM's get, the more rare those occasions will be, and that's a good thing. Let's not throw up our hands and say "yeah, you're right, we're slow, but it doesn't matter because we can cheat".
The IronPython and JRuby announcements have been making the rounds in the blogosphere, and of course I want to add my 2c.
I think that these announcements are very significant and should be welcomed by people in both the Python and Ruby communities, because I believe that Microsoft and Sun's support of these languages will make it much easier to persuade people to look at Python and Ruby. Today people's biases are still against dynamic languages as a whole, as opposed to particular languages, so I think that getting "corporate legitimacy" for either Ruby or Python helps both.
IronPython is already faster than CPython, and JRuby appears to be headed in a similar direction, although we won't actually know until JRuby beats one of the C-based VM's. There is a huge amount of effort being expended on the performance of the JVM and CLR implementations, and if that effort starts to benefit Ruby and Python users, then I think that is a good thing too.
I've read some postings speculating on Microsoft and Sun anointing either Python or Ruby over the other, and/or over all other dynamic languages. I don't believe that this is the case. At OSCON in 2003, I attended a BOF organized by Microsoft people who were interested in improving support for dynamic languages on the CLR. If I recall, many of the "major" dynamic languages were represented. Also I know that Microsoft has been talking to folks like John Lam and others who are working on getting Ruby onto the CLR. As for Sun, JSR-223 is aimed at all scripting languages, Sun accepted a JSR for Groovy, and Tim Bray (who helped the JRuby thing get done) also helped organize a meeting at Sun for lots of dynamic language folks. I think that in part, IronPython and JRuby got picked up because the people involved were willing to work with the companies involved.
Other commentary has focused on whether or not Sun or Microsoft is ahead of/behind the other in this area. I suppose this makes sense if you are a partisan of one language over another. It's probably more true if you look at Python, since IronPython's baseline for comparisons is Python 2.3, while Jython is still catching up to Python 2.2. Overall, I think that we are still early in this game, and that neither side has an insurmountable lead over the other. If you look at the pace of VM support, I think that it's not so one-sided. Yes, Microsoft has been at this longer, but they also seem to have a longer cycle time to pick stuff up, since the pace of CLR improvements is gated by releases of Windows. Yes, you can download new versions of the CLR, but that makes deployment a harder deal. Sun still has to get its extensions specified, much less implemented in the JVM, and the cycles on the JVM are also long, but I also think that the window for broad adoption of dynamic languages still has not arrived, so both companies still have time, which also blunts the potential advantage of being first.
I'm happy to see all this going on, but the CLR stuff is far away from me, since my primary platform doesn't really have good CLR support. There's Mono, but it doesn't seem to be getting much uptake on OS X. I am basing this on the amount of buzz and/or actual Mac apps being developed on Mono, not on actual statistics, and I am sure that Miguel will be quick to disprove me with facts... I can at least see a world where I might use something like JRuby or Jython, since I have done a bunch of Java in previous lives.
These announcements also create some interesting points for observation. Here are some things that I am going to be keeping an eye on as these projects march forward:
- Community building around the implementation - I will feel most comfortable if these language implementations are community driven, and not vendor driven. I know from listening to Jim at PyCons that this is a goal, and the JRuby guys have been very clear about this as well. The recent buzz about these two projects gives them that PR bump that might allow them to draw more people into their communities. It will be interesting to see if they can convert attention into participation.
- Compatibility - Both Ruby and Python are "open source" languages. By that, I mean that cross platform compatibility has been accomplished by having a single reference language implementation. It will be interesting to see if the JVM and CLR dialects are able to achieve a decent compatibility story or whether they end up essentially forking (or if you are suspicious, embracing and extending) the languages. One possibility is that we end up with some kind of standardization in order to keep this from happening. Of course, standardization doesn't mean compatibility - just look at the situation with Javascript 1.7, where you have a standard, but you have significant uncertainty about whether all browser vendors will implement it, thus reducing its usefulness.
- Performance - The IronPython team has shown that they can beat the performance of CPython. The JRuby folks have yet to do that, and both the Python and Ruby communities have higher performance VM implementations underway. This situation reminds me a lot of the situation with x86, Alpha, Sparc, and PowerPC, where you had different architectural approaches which were supposed to produce performance benefits. But in the end, large amounts of money, process technology and non-architectural considerations produced an outcome that was different from what you might have expected by just analyzing the processor architectures.
- Velocity - Having people who are working full time on these implementations is going to make a difference in the velocity of these projects. The question is how much, and at what expense versus creating a sustainable community?
- Tooling - Much has been made about the JRuby folks being chartered to work on tooling in some way. There's been speculation about NetBeans versus Eclipse, and there are also other Ruby IDE's. I haven't heard much about tooling on the CLR side, but it seems plausible that you could see Visual Studio support for IronPython and/or one of the CLR Ruby's should people at Microsoft decide it was worthwhile.
In the end, I think that having languages like Python and Ruby be "legitimized" by the recognition of big industry players makes things easier for me. It gives me one more argument to use when talking to people, which I hope reduces the amount of time I have to spend trying to convince people of the merits. That leaves me more time to work in a language that I like. Then again, we have Erlang, Scala, and Io just around the corner...
We've been going through the Ruby on Rails book in our Bainbridge Island reading group, so I was interested to see Cedric Beust's post Why Ruby on Rails won't become mainstream. It took me a few moments to get over the sting of being called someone who hasn't learned anything in 30 years (despite the fact that I only learned Lisp/Scheme 22 years ago). That bit of discomfort aside, there are some interesting points in his post. I am looking for a web framework for some experimental projects, and I think that many of the points that Cedric made are just as applicable to Python/Turbogears/Django, etc.
Cedric's definition of mainstream includes being appealing to Visual Basic and PHP programmers. That seems to be the backdrop of his first two points, that Ruby and Rails are too hard for these folks. I can see some of these points - folks in our reading group have been somewhat mind bent by some of the Ruby concepts, and they are Java/C# folks, which would put them higher on the food chain than VB and PHP programmers. I think that some of this is just unfamiliarity as opposed to difficulty, but there's no doubt that there is a learning curve there.
The next major point is about the lack of an IDE. I think that the term IDE has gotten a bad rap due to a class of products whose only real value add was to generate reams of horrible looking boilerplate code. But IDE's should be more than that, and the better ones are. One of the few things I miss from Java is Eclipse (never used the much vaunted IntelliJ). I use WingIDE for Python, which is at about the same level that Symantec Cafe was for Java. I mostly used Cafe for the debugger, which is the big reason I use Wing too. For me, the big jump in Java IDE's came with Eclipse (maybe IntelliJ was there first, I can't remember anymore), which could do refactoring and other semantic operations. This was the point where it was really worthwhile for me to use an IDE in preference to Emacs. Those of us who cut our teeth on the original dynamic languages (Smalltalk and Lisp) did so in the context of the fully integrated and graphical development environments of the Smalltalk and Lisp machines. This is in contrast with the currently popular dynamic languages which originated in the text only world of UNIX/Linux. So the only quibble I'd have with Cedric on this one is that it is 2006, not 1984.
As far as fanaticism, I haven't personally experienced this, but then again, I haven't said anything negative about Ruby on Rails either.
The last two topics, 1) "enterprise" capabilities and scalability and 2) lack of support from Internet Providers, are both related to deployment.
The scalability question came up in our reading group as well, and I'd agree that the jury is still out on whether there are enough large deployments to settle the question conclusively. One area that makes me personally nervous is that a lot more work has been done on improving the VM for Java than on the VM's for Ruby, Python, or Perl. People in the Ruby, Python, and Perl camps seem uncomfortably sanguine about the performance of the VM's for these languages.
The ISP support question goes back to the assumption that the mainstream target audience is the Visual Basic and PHP developers. I know that ISP support for Python was a problem for a number of folks who wanted to use Pyblosxom. In the days of User Mode Linux, Xen and so forth, you'd think that this should be a solved problem, but it doesn't seem to be, and if the target is PHP, then I'd have to agree that this is a problem.
So I'm mostly in agreement with Cedric, if the definition of mainstream is PHP and Visual Basic. But I have to wonder if that's the right definition for mainstream, or whether being mainstream should really be the goal. I know that I would personally like an outcome where Rails (or Turbogears/Django) was an acceptable technology to build a web application. By acceptable, I mean that you wouldn't have to pay a premium for developers familiar with the technology, there was an acceptable quality IDE, and your use of the technology wouldn't be regarded as a risk by investors or prospective investors. I don't know whether taking over the niche occupied by PHP and Visual Basic is necessary for that outcome to happen.
It is very hard for a technology to become mainstream. My own view is that Java succeeded because it arrived at a moment when there were serious problems with web development, and there was a need for a solution which had some of the properties that Java had. As Cedric pointed out, even in that situation Java had a number of problem areas to overcome before it became successful. Java also received a shot of enterprise legitimacy from IBM, something that Ruby has not (I think James Governor is jumping the gun on IBM's support for Ruby). I don't think we're at the point where the problems in web development are so bad that it's very difficult to get the job done. Annoying, perhaps. More complicated and expensive than necessary, perhaps. But it doesn't feel like it's so bad that the majority of people are going to switch over to a very different way of doing things. There's a rule of thumb that says that something new has to be around one order of magnitude better than the thing it's replacing. Regardless of how much I like Rails/Turbogears/whatever, I have to wonder whether that one order of magnitude is clearly there. I personally feel that it's close enough to be worth it for me and the things that I am interested in. But that's different from saying it's there for the entire market.
This year folks from OSAF were around for 2 days worth of sprints. Jeffrey Harris had a very successful (I thought) sprint on vobject. This was a direct result of having the sprints after the conference, because most of the people at the sprint were not existing vobject contributors.
The Chandler sprint broke down into several sub groups. One group worked on a parcel to grab data out of 43 Things and integrate it into Chandler, one group worked on getting a very simple address book going, one group worked on importing and exporting Chandler data to VTODO and vCard (providing a nice tie to the vobject sprint), and there was also some work going on to try to figure out just why it is that Chandler is so slow on the Mac.
I paired with Ralph Green from the Dallas Pythoneers on the address book parcel. We've done a lot of work on the Chandler parcel APIs since last PyCon, and it looked like that made a big difference this year. Whereas last year it took people about a day and a half and a lot of hand-holding to get a parcel running, Ralph was able to get quite far with only minimal help from me. I've wanted to get some Person/Contact/Address Book functionality into Chandler (ahead of schedule) so that I could experiment with some ideas around linking web data to people and working with the social network that is nascent in your address book. So while what we did is pretty simple, I'm happy that there's now a starting point for experimentation.
This year's PyCon experience was different for me. While I did see a number of people that I knew, and had very good conversations with them, I didn't feel as much connectedness to people as I have at previous PyCon's. There are a bunch of reasons for that. A lot of people that I know were not here -- the DivMod guys didn't come, the SubEtha gang was totally not around, and a few other people were also missing.
I think that moving the sprints and the environment of the sprints had a big impact on this as well. By the time I got through the tutorial day and the 3 days of the conference, I was already tired, which made it hard to muster enough personal energy for even 2 days of sprints (not to mention 4). Also, the sprint rooms had sliding dividers, which meant that the various sprints were pretty isolated from each other. So there was almost none of the spontaneous interaction between sprints that happened in previous years, and I missed that.
A final factor is that I spent a lot of time with Chandler people this year. The Chandler contributors are getting more geographically dispersed, so there are fewer opportunities for us to get together face to face. While this is the norm for open source projects, the OSAF staff has been centralized and is becoming more decentralized. This is definitely a good thing, and I see us taking advantage of face to face time just like the other projects.
I was very happy to see that PyCon was going to be in a single hotel this year. It did make it easier to run in to people. In previous years, things pretty much ended after dinner, because people were scattered in various hotels. This year, people came back to the hotel and hung out in the lobby and various public areas.
As far as I could tell there were only two downsides to the conference. The first was that the hotel's wireless network was not up to the task of hosting PyCon. This isn't a surprise to me. Pretty much every time a tech conference goes to a venue for the first time, the venue's wireless networking gets thrashed. The only real question is whether or not the venue recovers. The second issue is that I personally didn't care for Dallas as a location. Then again, I didn't particularly care for Washington D.C. for the last few PyCons, either, so this is really just a minor complaint.
This post and the day 3 post were delayed because the iBook that I borrowed started freaking out the night before the sprints, and became so unreliable that I couldn't even get it to boot. Needless to say, this was pretty frustrating, especially since it hosed things for the sprints. Fortunately, Ralph had a computer so we were able to make plenty of progress. I just couldn't do much with e-mail or write any blog posts. When I got home last night, I plugged the machine in, hoping to coax it to stay up long enough to get a few critical files off of it. Instead, the crazy machine has been running all day.
Regular readers know that I usually write this post on the ferry home after some excruciatingly bad travel experience. If you're waiting for that report, you'll be happy to know that this year I got home from PyCon with absolutely no travel glitches at all.
Day 3 of PyCon was a short day -- things finished up around 3:30pm. I went to a number of solid how-to talks on topics such as Turbogears and Django. Unfortunately, I missed the PyPy status talk, which I later heard had some good content on managing an open source project. I'm sorry that I ended up missing that one. Due to some hallway conversation, I also missed the PyPy architecture session. This year's PyCon had far fewer programming language hacker kinds of talks. I have mixed feelings about this, since I generally find these talks interesting. However, it is probably better for there to be more talks on web stuff and application building.
The most interesting talk of the day was Martin Blais' talk on Nabu, which he calls a publishing system using text files. Martin is using reStructured Text as a way of marking up structured data (events, people, etc) in a reStructured Text document. He gave a timely example of a travel text file, which was to represent his trip to PyCon. In this file he had regular text describing various topics, and embedded in the text was marked up meeting information, contact information and so on. Nabu can process the file, extract all the marked up data, and then do a variety of things with it, including publishing a web page kind of view of all the data. While Martin says it's not a PIM, I think that it does some interesting things that PIM users might be interested in, particularly some of the 43Folders / life hacks crowd. Martin is very clear that this is not a "for your Mom" kind of tool, but if you are a keep it all in text files kind of person, and want to take things up just a little bit, I think it might be worth looking at. Most of the OSAF folks were in Martin's talk and it sparked some interesting discussions about Martin's work. The travel file example that he showed is exactly the kind of thing that we are aiming to be able to support using our collection system, so it was nice to see something that was similar in spirit.
Day 2 started off with Guido's keynote, which was mostly a review of community activity and new stuff in 2.5. Most of the community projects that he called out had to do with stuff being done by the PSF, related to the PSF infrastructure. I was surprised that he didn't call out the large amount of progress on web frameworks, especially given his recent interest in web stuff.
As far as the new features of Python 2.5, Guido said that 2.5 will have the most new stuff in it since 2.2. There's a fair amount of stuff related to expanding the usefulness of generators for coroutines and for managing various kinds of resources. I suppose that I'll have to go look at the PEPs for the new features, but my initial reaction was that it seemed kind of complicated, and that if you weren't scared of this stuff you wouldn't be scared of something like continuations either.
Brian Kirsch's I18N talk went pretty well. There were a good number of questions which were all practically focused, which is a good sign that other people are also struggling with building internationalized applications.
This year's Chandler BOF was one of the better ones. I attribute a lot of that to actually having enough software that questions shifted from "what are you going to do" to "can it do this". We did a brief demo of the latest version of Chandler and after that we had a lively back and forth about lots of topics. I hope that means that some people will be excited enough to join us at the sprints.
The most packed out talk that I saw so far was Ian Bicking's talk on Python Eggs. It was standing (and sitting) room only, which resulted in an encore presentation in one of the large ballrooms. I was really happy to see this, because this sort of thing has been long overdue.
Lightning talks are frequently among the best talks at conferences that have them. Here are the ones that stuck out to me:
- Titus Brown showed some cool web testing / debugging using twill and wsgi_intercept
- David Creemer from MerchantCircle talked about their experiences building a commercial web app using Python. A lot of their components are the same components in Turbogears
- Ben Collins-Sussman gave a very entertaining talk about his experiments in replacing himself with an IRC bot. I wonder if his Werewolf moderator bot will get broader distribution.
- Katie Parlante from OSAF showed off Chandler 0.6.1 and some of our recent parcels, including evdb/eventful and a Mac dashboard widget
Here's a brief report on the first day of PyCon.
Almost none of the usual notetaking suspects made it to PyCon this year, and even if they had, the hotel's network appears to be Bonjour unfriendly. This means that unlike previous years, there aren't any detailed session notes to post.
The Django and Turbogears projects are doing pretty well on the marketing front as well as the technology front. The Turbogears people showed some very nice browser based UI building stuff. I've been looking for a web framework to do some poking around, and I've been looking a lot at Ruby on Rails via our study group on Bainbridge Island. I've been waiting till PyCon to check out the happenings with Django and Turbogears.
The first two OSAF presentations were pretty well received. Actually, there was more interest than I expected, so that was gratifying. A few people asked me why there was no Chandler talk this year. It was fun to watch their faces when I replied that actually there were 3 talks this year. I am glad that people are starting to see that we are contributing stuff back into the Python community.
I was really interested in the presentation on bzr. It was interesting to see some of the work that Canonical has done, not only on bzr, but tools around it, such as the patch queue manager and launchpad.net. I hope that people continue to work on bzr support for trac. Martin Pool rightly pointed out that people who are afraid of decentralized version control systems are afraid of a bunch of bad community things that can happen using these systems. He also rightly pointed out that these are inherently social problems, and I think that he is right. Just because you have a centralized system doesn't mean people are going to work together in a good way.
Perhaps flying overnight isn't the best idea. Between various factors, I didn't end up getting any sleep. Alaska Airlines managed to deliver us to Dallas about 40 minutes early, which put a crimp in my plans a bit.
My flight was originally scheduled to land at 5:45AM, and via sunrisesunset.com, I knew that sunrise in Dallas was around 7am. Factoring in the amount of time needed to deplane and collect my checked bag, I figured that I might be done with all that just in time to catch the golden hour and grab some nice sunrise pictures of Dallas. Things moved a bit faster than I thought, and Dallas turned out to be quite cold (good thing I had my Seattle cold weather gear), so I ended up just heading for the hotel.
I spent the morning helping out with various aspects of the conference, including manning the registration desk. The coffee that I had over breakfast pretty much ensured that I wasn't going to sleep, so finding something productive to do was good. It was a great opportunity to see who was going to be around, and to greet people that I knew as they registered.
The reason that I came on tutorial day was to see the Agile Development and Testing tutorial. Normally I take a pass on technical tutorials, because I can usually learn more by reading the documentation for a project for 3 hours than I can listening to someone talk for 3 hours, unless the topic is difficult or unless the presenter is really good. Titus and Grig's tutorial was different because they have looked at a bunch of tools that are useful for agile development (I was mostly interested in testing), and presented what they felt to be the best tool set. Stuff that stuck out to me was nose, coverage.py, twill, and usage of the statistical profiler. This definitely saved me the time of trying to sift through the piles of projects that are out there.
The rest of the day turned out to be pretty low energy. From our hotel room we could see huge swarms of birds flocking back and forth between trees and buildings. So while Grant went to the panel for authors, I wandered out to the parking lot and tried to grab a decent photo of these bird swarms. I don't think that I got one, and now of course, I wish that I had brought the big lenses with me (I only brought a little bit of camera stuff).
Grant was gracious enough to put up with my skating mania, so I was able to see NBC's coverage of the ladies final. Commentary on that somewhere else, some other time.
This fall in our Bainbridge Island reading group, we are going through the Ruby on Rails and Ruby (Pickaxe) book. At our last meeting, one of the things that we discussed was Ruby closures, and I was trying to help people understand what was going on. Turns out there was one area where I didn't quite understand what was going on: the ability to pass an existing function as a block argument. I thought that you'd be able to do that, but apparently you can't.
A few days afterwards I finally got around to reading this post by Dave Thomas on the Symbol#to_proc hack, which I've excerpted:
The Ruby Extensions Project contains an absolutely wonderful hack. Say you want to convert an array of strings to uppercase. You could write
result = names.map {|name| name.upcase}
Fairly concise, right? Return a new array where each element is the corresponding element in the original, converted to uppercase. But if you include the Symbol extension from the Ruby Extensions Project, you could instead write
result = names.map(&:upcase)
Now that’s concise: apply the upcase method to each element of names.
Ouch. My idea of concise would have been:
result = names.map(upcase)
Or am I missing something here?
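For the record, here is how the variants shake out, run against modern Ruby (where Symbol#to_proc is built in, so the extension library is no longer needed); `shout` is a made-up method just to illustrate passing an existing function as a block:

```ruby
names = ["ted", "avi"]

# Plain block form:
names.map { |name| name.upcase }   # => ["TED", "AVI"]

# Symbol#to_proc: & asks :upcase for a Proc that sends the message
# to each element. (An extension hack in 2006, core Ruby since 1.9.)
names.map(&:upcase)                # => ["TED", "AVI"]

# names.map(upcase) can't work: Ruby would evaluate upcase as a
# method call on the spot, not treat it as a reference. To pass an
# existing function as a block, reify it with method() and use &:
def shout(s)
  s.upcase + "!"
end
names.map(&method(:shout))         # => ["TED!", "AVI!"]
```

So the function-as-block move is possible after all, via the `&method(:name)` spelling, at the cost of a little ceremony that the Symbol hack avoids.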
[via HREF Considered Harmful]:
Best quote I've heard from a Seattle Mind Camp roundup:
my favorite was from Ryan Davis, giving a presentation on Rails, who said something like
I defy anyone to come up here and use any other framework to duplicate what we’re doing in Rails as quickly. Except Avi.
The aggregator is always good for some serendipity. Here's today's.
Sam Ruby wrote about Bruce Tate's new book, Beyond Java:
In particular, I agree that in the next few years we are likely to see a shift from a small number of dominant languages/platforms to a plurality of solutions focusing on approachability, community, and metaprogramming.
It was interesting to me that when Sam analyzed Tate's list of alternative languages, there was a little discussion of metaprogramming/DSL's, but not much on approachability or community. Earlier in the aggregator session I had read Ian Bicking's post Friendship and hand holding, which gets at the community side of approachability (versus the approachability of the language), and adds some thoughts on the Python community, mostly from the point of view of friendliness. But there's more to a community than friendliness, as Ian points out:
Now I'm not saying comp.lang.python is a mean-spirited place. But Python has calcified in certain ways that Ruby has not. Just like a child is more flexible than an adult, the Ruby community is more flexible than the Python community. I think there's more open space in Ruby than Python, there's more openness to some new ideas, there's more acceptance of the opinion of outsiders. The barriers to contribution are smaller. Backward compatibility? Not as big a deal with Ruby. Add new syntax? Suggestions along those lines won't be dismissed for Python, but all new syntax is met with extreme suspicion; all the more so if you aren't aware of past conversations on the matter. And there are lots of past conversations on just about any new syntax you'll think of -- which makes it hard to jump in and contribute ideas on that level. So I suspect you'll get a more friendly reaction from Rubyists on syntax. But then, the cutting edge of Python hasn't been the core language for a long time (by design).
When looking at Python, Bruce and Sam say that it needs a "killer app". Bruce dismisses Smalltalk (and I would assume Lisp, which didn't even get a mention), because they haven't been adopted after 30 years or so, which kind of sounds like it also needs a "killer app". Arguably, Ruby has a "killer app", Ruby on Rails. So my question is: What is it (if anything) about these three communities that results in only one "killer app" amongst the three? Rails could have appeared in Python, Smalltalk, or Lisp. But it didn't. Some people will say it's just the timing, that it's just iteration n+1 of web frameworks. But I'm not so sure. Look at what iteration n+1 of the Java web frameworks look like. The culture of a community is a powerful influence on what it chooses to pursue, and the means by which those pursuits are undertaken.
My drooling over Aperture produced some other thoughts.
In the wake of this month's Web 2.0 conference, there's been yet another round of "all applications are moving to the web". Aperture is an application that would be very hard to do on the web. I'm trying to imagine Aperture's multiple display support in a web app. I suppose that I can, but it's pretty unpalatable. My XT can shoot 3fps in RAW mode. Each one of those RAWs is 7MB. Now imagine any decent-sized gallery of them, including some stacks of bursts. That data is not going to move over the web anywhere near fast enough to be responsive.
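The back-of-the-envelope arithmetic, using the figures above, makes the point concrete:

```python
# Rough bandwidth needed just to move a RAW burst at capture rate
fps = 3                        # Canon XT burst rate, frames per second
raw_mb = 7                     # approximate size of one RAW file, in MB
mb_per_sec = fps * raw_mb      # 21 MB/s of image data
mbit_per_sec = mb_per_sec * 8  # 168 Mbit/s
print(mbit_per_sec)
```

That's 168 Mbit/s sustained, orders of magnitude beyond the broadband connections of the day, before you even consider a whole gallery.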
The attention to UI is one of Apple's strengths (when they remember to do it) and from what I've seen, they pulled out all the stops for Aperture. Good UI does matter. And it takes a certain kind of skill and taste in order to make that happen. It's more than the flashy visuals. Aperture appears to work the way that I wish I could work with my photos. The designers took the time to understand the work needs and patterns of their target audience, and then built that in.
While I was watching the videos, some part of my brain overrode a bunch of things that I've spent a bunch of time writing about on this blog. Did I care that Aperture is written in a low level, mostly static language like Objective-C? Nope. Did it bother me that I couldn't get the source code, or access the community of developers? Nope. I'll probably change my mind about that once I return to my senses. But the visceral reaction that I had to Aperture is a big reminder that the final result matters a lot, not just the process and technology.
As I said, it looks like Aperture will be very fun to use. I can't say that I feel the same way about the tools that I use day to day for writing programs. Where are the incredibly fun programming tools? I've used Emacs, Eclipse, some IntelliJ, WingIDE, and a few more. None of them are really all that fun to use. Many of them take out some of the tedious tasks associated with programming, but none of them give me that feeling that they are enhancing my creativity or thinking. Computers should extend the mind, hands, eyes, and ears.
Recently, I've felt kind of down on Apple, because of some difficulties I've been having with my hardware. I was also quite happy with my Ubuntu experience, and I was sort of wondering if getting off the Mac and onto Ubuntu would be a smart idea. Aperture is not an app for everybody, but it is for me. More importantly, it represents the spirit of the Mac, which is a spirit that I think is still missing from the Linux and Windows communities. It is hard to get me really excited about a piece of software, and my reaction to Aperture is one that I haven't had in a long time.
I am torn between the two cultures: the innovative culture of the Mac that has brought forth apps like Aperture and NetNewsWire, and the culture of open source, which emphasizes participation and liberty. When will an open source project produce an app that's at the level of NetNewsWire or Aperture?
Maybe I've just been drinking too much Kool-Aid.
[via Clever Name TBD ]:
If you are interested in WebDAV in Python, then you might be interested in Fred Sanchez's TwistedDAV:
Yesterday, I got an OK from Apple to release TwistedDAV, a Python WebDAV server add-on to twisted.web2 that I've been working on at Apple, as an open source project under an M.I.T. license.
[via Chris Double's Radio Weblog ]:
Chris Double reports that the Gwydion Dylan hackers have released the first beta for OpenDylan, which is the open sourced version of the Harlequin/Functional Objects Dylan implementation. Since it generates native code, it's only available for x86, meaning Windows and Linux. Good thing Macs are going Intel....
In other Dylan related news, the Dylan Hackers placed 2nd in this year's ICFP contest.
Thanks to Luis Gonzalez for pointing me to Mark Dufour's ShedSkin project. ShedSkin is a Python-to-C++ compiler based on Mark's master's thesis work, and was one of the Python Software Foundation's Google Summer of Code projects. It looks like ShedSkin can compile a subset of Python, and runs on Windows, Linux, and OS X. Unfortunately, there are no published benchmarks, so it's hard to get an idea of how much of an improvement ShedSkin actually produces. Mark is looking for additional help, so you should definitely drop by his blog if you are interested.
Don Box said "Scheme is Love", and supplied a nice reference list.
In the OSAF IRC, PJE said
"I'm beginning to believe that there is No Language But Lisp, and Python Is Its Prohpet. :)"
It's been quite some time since I made it to a SeaJUG meeting. Jayson Raymond, our host, mentioned that SeaJUG's 10 year anniversary is coming up, but I haven't been gone quite that long. It's just that SeaJUG is (apparently -- I'm not up on JUG lifetimes) one of the older JUGs. In fact it's been so long that I didn't even know that we were meeting in a new location. Normally I ride over with a friend who lives in Seattle, but he wasn't available, so a bunch of us from Bainbridge piled into a car and drove over.
Last night's meeting was a presentation by Ramnivas Laddad on "What's new in AOP". This was our yearly preview presentation ahead of the excellent Pacific Northwest Software Symposium. Several years ago I did a SeaJUG presentation on AspectJ, and I was curious to see how much things have changed.
There are now more syntaxes for expressing pointcuts and advice. You can have your choice of AspectJ syntax (nice because the AspectJ compiler can verify your pointcut expressions), Java 1.5 annotations (nice because you aren't learning a special syntax, except that you are because you have to learn the mini-language for specifying pointcuts), and even an XML based syntax (handy for instantiating abstract aspects at deployment time).
Also, you can now write pointcuts that rely on JDK 1.5 metadata. This is controversial. You get more control if you limit your pointcuts to annotations, but then you have to add annotations, which goes against the AOP idea of not having to modify the crosscut code.
The tooling has improved a lot. There's now very decent support in Eclipse (and some other IDE's), and that support does cool stuff like show you all the advised methods and so forth. There's also a history view that lets you see how aspects have been added to or removed from parts of the system over time -- a nice usability improvement would be the ability to do this between revisions in the version control system.
When I gave my presentation, we had a long discussion about applications of AOP. I wish that there had been more discussion about this topic. It's easy to see the application of AOP to systems types of concerns: logging, performance testing, security, transactions, and so forth. It would be nice to see some patterns of aspect use as it relates to problem domains. Of course, the whole field is still pretty new, so it's understandable that there isn't a lot of data about this just yet. One interesting usage that was discussed was to advise constructors or factory methods as a way of injecting mock objects for use during unit testing.
I'm not doing a lot of Java these days, but I like to keep my nose in what's going on. As I was listening, I was thinking about having some of these capabilities in Python. Of course, you can do a lot of AOP style things fairly easily in Python, because various kinds of interception are easy to do. The things that I think are really missing are the pointcut expression language for expressing where interception is to take place, and whatever runtime support is needed to do the weaving. It sure would be handy...
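To give a flavor of the kind of interception that's easy in Python, here is a minimal sketch of AOP-style "advice" applied at runtime. Everything in it (`advise`, the glob-as-pointcut idea, the `Account` class) is invented for illustration; a real pointcut language like AspectJ's is far richer than a name glob, and real weaving happens at compile or load time rather than by monkey-patching:

```python
import fnmatch
import functools

def advise(cls, pattern, before):
    """Wrap every public method of `cls` whose name matches `pattern`
    (a glob standing in for a pointcut expression) so that `before`
    runs first. A toy illustration of interception, not real weaving."""
    for name in list(vars(cls)):
        attr = getattr(cls, name)
        if callable(attr) and not name.startswith("_") \
                and fnmatch.fnmatch(name, pattern):
            def make_wrapper(fn, fname):
                @functools.wraps(fn)
                def wrapper(*args, **kwargs):
                    before(fname)           # the "before advice"
                    return fn(*args, **kwargs)
                return wrapper
            setattr(cls, name, make_wrapper(attr, name))

calls = []

class Account:
    def deposit(self, amount):
        return amount
    def withdraw(self, amount):
        return -amount

# Advise only methods matching the "pointcut" deposit*
advise(Account, "deposit*", calls.append)
Account().deposit(10)
assert calls == ["deposit"]
```

The mechanics are a few lines; what's missing from Python, as noted above, is a declarative pointcut language and proper weaving support rather than ad hoc patching like this.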
Patrick Logan posted some pointers to information about T, including a project to revive that fine implementation of Scheme.
The summer after I graduated from high school, I was fortunate to work at Burroughs' System Development Corporation subsidiary, which did some defense work, and some research. I was working on building a compiler for the functional language Super, which was a derivative of SASL. This was the first time that I saw a functional language (we were doing combinator graph reduction), and the first time that I saw a language where indentation was part of the syntax -- they called it the offside rule. Python had not yet been born, but I suppose it's some strange circle of life that I'm once again working in a language with an offside rule.
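For readers who haven't met combinator reduction: SASL-family compilers translated programs into the S and K combinators and evaluated them by rewriting a graph of applications. A toy model of the combinators themselves (in Python, not the graph machinery) looks like this:

```python
# S and K as curried functions -- the reduction rules are
#   K x y     -> x
#   S f g x   -> (f x) (g x)
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x

# The classic result: S K K behaves as the identity combinator I,
# since S K K x -> (K x)(K x) -> x
I = S(K)(K)
assert I(42) == 42
```

A real implementation reduces a shared graph of these applications lazily, which is what made the technique practical for compiling lazy functional languages.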
Lots of important Lisp and Scheme related research came out of the T project, so I'm glad to see it get a little more time in the sun.
[via hack a day ]:
Apparently Phil Zimmerman's latest project is based on shtoom:
The app is based on the shtoom project (open source VOIP written in Python) and the crypto is strapped on top. A nice feature of the protocol is hashing part of the previous conversation’s key into the current conversation. If you and the other person read the hash aloud and they match it means that this conversation and every previous one has been fully secure.
He’s shopping the project around to venture capital right now to make a commercial product written in C. The source will still be free though.
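The key-continuity idea in the quote can be sketched with a hash chain. This is a generic illustration of the concept, not Zimmermann's actual protocol; the function names and key material are made up:

```python
import hashlib

def next_key(prev_key: bytes, fresh_secret: bytes) -> bytes:
    # Mix the previous session's key into the new one, so tampering
    # with any past session changes every later key in the chain.
    return hashlib.sha256(prev_key + fresh_secret).digest()

def short_auth_string(key: bytes) -> str:
    # A short digest both parties can read aloud and compare.
    return hashlib.sha256(key).hexdigest()[:8]

# Honest chain across two conversations:
k1 = next_key(b"\x00" * 32, b"session-1 secret")
k2 = next_key(k1, b"session-2 secret")

# Chain where conversation 1 was tampered with:
k1_bad = next_key(b"\x00" * 32, b"attacker secret")
k2_bad = next_key(k1_bad, b"session-2 secret")

# The later keys no longer agree, so the strings read aloud differ.
assert k2 != k2_bad
print(short_auth_string(k2), short_auth_string(k2_bad))
```

So a matching read-aloud check in the current call vouches not just for this session but, transitively, for every session folded into the chain before it.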
Good thing Anthony is in town this week...