Wednesday, March 28, 2007

jMock 2 (RC1)

jMock releases are ever so infrequent, which makes each new version that much more exciting. jMock 2 RC1 has been coming for some time, so it's great to see it happen. With new syntax and better support for automated refactoring, I'm expecting this to be an enjoyable upgrade.

If you haven't tried jMock, I'd urge you to give it a look. Although its authors sometimes take their design approach to the point of near-zealotry (which I occasionally find frustrating), it's still the best 'Mock Objects' tool I've used, precisely because it lets you indicate what you want to test and what you don't care about in a way that no other mock framework I've used really matches.
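
For a taste of the new syntax, here's a minimal sketch of a jMock 2 test, stripped of test-framework scaffolding; the Subscriber interface and the names in it are hypothetical, not taken from the jMock documentation:

import org.jmock.Expectations;
import org.jmock.Mockery;

// A minimal sketch of the jMock 2 style; Subscriber is a
// hypothetical stand-in for a real domain type.
public class PublisherTest {

    public interface Subscriber {
        void receive(String message);
    }

    public void testNotifiesSubscriber() {
        Mockery context = new Mockery();
        final Subscriber subscriber = context.mock(Subscriber.class);

        context.checking(new Expectations() {{
            // State exactly what we care about: one call, with this message.
            one(subscriber).receive("hello");
            // Anything we don't care about, we simply don't mention.
        }});

        // Stands in for exercising the real object under test.
        subscriber.receive("hello");
        context.assertIsSatisfied();
    }
}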

Simple is Better: Why Your Complex Requirements Aren't Worth It

Over the last few years, most of my effort has gone into a single software product. This product has a lot going for it, and I've learned a lot in the process. However, the core process of negotiation has grown more complicated with each release. This has been building for several releases now, but in the current release cycle, it has become more evident than ever that there's a price to be paid for this complexity.

The Price to be Paid
In this last release, we've put some effort into rethinking a few of the base principles of negotiation, allowing the customers to make better use of negotiation after having formed an agreement with our clients.

Getting to a clear functional requirement, one that described the business and high-level user-system interactions and was agreed upon by the relevant stakeholders, took nearly a month of effort, including a number of cross-functional discussions.

Perhaps some of you are used to working in environments where going from a request to clearly understood requirements often takes a month or longer; I'm not sure. I'm not used to that, and I'm not convinced that it's healthy, even with a long-term product like ours.

Even with that under our belts, the implementation effort has suffered several setbacks as we've dug in, discovering areas of complexity that weren't anticipated. No matter how complex the business side gets, the technical side will always add another level of complexity, and sometimes that level will bite you.

So, having built a system with this kind of complexity, we've lost some agility and some predictability, because it's difficult to make rapid changes and predict the cost of those changes.

How Did We Get Here?
I can't speak for everyone, but I think that we got here for a few reasons: simplicity is hard, and we value features over simplicity.

Some people think simple is easy; it's what you do before you add the complexities. This usually isn't true. One of the things I've learned from experience in writing, building software, and any number of other life tasks is that simple is much harder than complex; short and concise are more work than long and verbose. If you do something the easy way, it'll have rough edges, edge cases and complexities. This isn't always a problem; it's just the way it is.

Smoothing the rough edges, reducing the edge cases, striving for a simple and complete model, tightening the prose: these are all tasks that take additional effort. That effort is the essence of the trade-off. Do you want five features with a few small edge cases, or two fully polished and reconciled ones? In a business environment, this is a tradeoff that a business manager should be able to make, as long as they're well-informed about the aforementioned costs of complexity: that technical and design debt is paid sooner or later, and that it's often sooner than you think.

And then there are time constraints. Given the cost of complexity, it's not uncommon that a business will make the tradeoff and decide that "more features" is more important than something as abstract as simplicity. For any one feature, this may be the right decision, but if you make this decision for a significant percentage of your application, release after release, you're likely to find that your costs have risen slowly without your noticing, and that over the long term, you may end up paying more for less.

Worse, this is self-reinforcing.

Self-Reinforcing Cycle
If you've racked up technical debt and you've paid the price of complexity, you'll likely discover that it takes you more and more time to analyze and implement the features. This puts more pressure on the time constraints and causes you to sacrifice even more simplicity.

On the other hand, if you've done a good job of paying down any debt you incur, by paying the price of simplicity, you may find that paying a little up-front to avoid paying a lot later means all of your features can be analyzed and developed quickly. This reduces the time-pressure and helps you continue to strive for simple solutions.

How do I Avoid This? How do I Get Out?
The easiest way to avoid this is to not incur design debt and technical debt. Seek simple answers to complex problems, and don't give up until you find them. When you need, as a business, to incur a little debt, make sure you pay it back as soon as you can. Finally, if you have built up significant debt, don't be afraid to take a step back. It's only going to get worse, so you might as well deal with it now.

Are your projects complicated? How do you deal with it?

Tuesday, March 27, 2007

Object-Relational Transparency II: the Designs

I was left feeling a little dissatisfied with this morning's post about the dangers of object-relational transparency. Although I felt I had communicated the high-level point I was trying to make, without the designs, it feels somewhat abstract.

In this instance, I was trying to map an entity, Agreement, to a series of versions. Most of these versions would be effective for a particular timeframe (one month to twelve months), but some would be entirely superseded by the next version.

In the database, it seemed sensible to map this structure as follows:


create table AGREEMENT_VERSION
(
    AGREEMENT_ID number not null,
    EFFECTIVE_DT date,
    -- Other stuff here
)


The first version would have an effective-date in line with the agreement start, while subsequent versions would take on later effective dates. When a later version's effective-date was the same as a previous one's, the previous version's effective-date would be nullified, indicating that it was never in effect.

Simple enough, so on to the object model. I wanted to be able to quickly run through the versions in effective-date order to find the version effective for a given date. Hibernate supports sorted collections via SortedSet and SortedMap. Trying to map the versions into a SortedSet is where it all starts to fall apart.

Set items are unique, requiring a sort order across all items. For superseded versions with no effective date, there's no obvious sort order. I toyed with some silly ideas, including using the hash code, but none felt quite right. I was trying to impose an arbitrary order on unordered things in order to support order for the ordered things, just because both happened to be stored within a single table.

After chatting with a few colleagues, partly design discussion and partly venting, I was struck, as I said this morning, by the realization that I was trying to force-fit my data model design onto the object model. Hibernate supports where clauses on collections, which allowed me to put the superseded versions into one collection and the effective versions in another:


<set name="effectiveVersions" table="AGREEMENT_VERSION" sort="natural" where="EFFECTIVE_DT is not null">...</set>
<bag name="archivedVersions" table="AGREEMENT_VERSION" where="EFFECTIVE_DT is null">...</bag>


Et voila; the object model makes sense, the data model makes sense, and the two meet happily inside a Hibernate mapping file. How sweet.
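
For what it's worth, the object side then looks something like this sketch; the names, the stand-in version class and the lookup method are illustrative, not the production code:

import java.util.ArrayList;
import java.util.Collection;
import java.util.Date;
import java.util.SortedSet;
import java.util.TreeSet;

// Illustrative sketch only, with a minimal stand-in version class.
public class Agreement {

    public static class AgreementVersion implements Comparable<AgreementVersion> {
        private final Date effectiveDate;
        public AgreementVersion(Date effectiveDate) { this.effectiveDate = effectiveDate; }
        public Date getEffectiveDate() { return effectiveDate; }
        // Natural ordering by effective date, matching sort="natural" above.
        public int compareTo(AgreementVersion other) {
            return effectiveDate.compareTo(other.effectiveDate);
        }
    }

    private SortedSet<AgreementVersion> effectiveVersions = new TreeSet<AgreementVersion>();
    private Collection<AgreementVersion> archivedVersions = new ArrayList<AgreementVersion>();

    // Walk the versions in effective-date order; the last version
    // effective on or before the given date is the one in force.
    public AgreementVersion getVersionEffectiveOn(Date date) {
        AgreementVersion result = null;
        for (AgreementVersion version : effectiveVersions) {
            if (!version.getEffectiveDate().after(date)) {
                result = version;
            }
        }
        return result;
    }
}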

Monday, March 26, 2007

The Dangers of Object-Relational Transparency

One of the best qualities of a good abstraction layer is that it doesn't require you to think about that which has been abstracted. When writing C and Java, one doesn't have to know, or care, about the state of the machine's registers.

On the other hand, this can be a danger, particularly when the abstraction has leaks, even small ones. I was reminded of this the other day when, having assembled a design for a problem I was facing in the data model, I was having trouble finding a way to translate it to the object model.

After discussing several approaches with two of my colleagues, I eventually realized that the biggest stumbling block was my desire to translate the data model design directly into the object model, encouraged by the use of Hibernate, which does a great job of mapping objects and tables that are alike.

In this instance, the best solution to my problem was to use a different object design than I was using in the database. As soon as I framed it this way, my challenges dropped away and I quickly found a suitable design for the object model.

This problem occurs at least in part because we are unable to accept that the database is an abstracted layer. There are some good reasons for this:

  • database-application interaction remains one of the primary architecture elements that has a significant impact on scale and performance of almost any web application.
  • database-level integration for two or more applications and/or tools is inexpensive and pervasive.
There is, however, a price -- you need to consider the design of each, and sometimes the best design in one is not the best design for another. I needed a reminder of this the other day. Do you?

Friday, March 23, 2007

Canon High-Definition Camcorder for only $25!


Tiger Direct (Canada) is offering a Canon High-Def Camcorder for $24.95! What a steal!

Strange how it looks like a 2GB SD card, though. I don't even see a lens ...

Wednesday, March 21, 2007

Wireless in Canada

Canada is a country so large, yet not so populous, that its geographically dispersed population has often been quick on the uptake of connecting technologies: broadband internet, wireless phones and devices.

However, when it comes to mobile phones, Canada is not in the forefront, and I blame that firmly on the Carriers: Bell, Telus, Rogers/Fido, etc.

I read about interesting mobile applications daily. Things like GMail and Google Maps on mobile, like Radar or (forgive me) even Twitter. And yet, these don't impact my life, or the life of my friends. Why? Because the Canadian mobile market is all locked up, and those who have the keys are just looking for the leverage to make a little more money.

When infrastructure is new, or difficult to acquire, people seek to control it. Electricity, internet access. These slowly, sometimes very slowly, become commodities, even subsidized, and the community, both business and personal, often benefits from it. While electricity helped to turn on the lights and automate laborious tasks, cheap electricity brought us radio, the telephone, television and the internet. Yes, I admit, it also brought us wall-mounted singing fish and the home shopping channel, but nothing comes without its price.

Similarly, you don't build more bandwidth into the network because people need it for the applications they already have, you build it because they don't even know what they can do with it until they have it: videoconferencing, iptv, voip -- these are technologies that become possible when bandwidth becomes cheap.

This is why I wish that the wireless providers could get it through their dense heads that giving us increased, unfettered access to wireless data and the applications that require that data only makes us more dependent customers; that paying $100/mo for 250MB of data, while limiting access to applications and charging monthly fees for access to your phone's GPS capabilities, is hurting them as much as it's hurting us.

Get over it. Free us from your mindless restrictions and help us help you make wireless applications and data a ubiquitous part of our daily lives. We'll keep paying you for this stuff, just stop getting in the way.

Ad-Hoc Queries, Oracle, Using and Table-Qualified Wildcards

When I'm doing ad-hoc queries in Oracle as part of my development, debugging or research, I'll often make use of the USING clause. This lets me join two tables with a minimum of actual text.

SELECT *
FROM customer
INNER JOIN address USING (address_id);


This is more compact than specifying the id in both tables, given that our corporate nomenclature is <table>_id, so both tables tend to have columns with the same name.

Having done so, if I'm not exceptionally familiar with these tables, and haven't opened the table definition in another window, I may well want to find out the columns of one of the tables before deciding how to nail down the query. For instance, if I know I want some of the address, but I haven't decided which fields, I might do:

SELECT customer_name, address.*
FROM customer
INNER JOIN address USING (address_id);


At which point Oracle throws an ORA-25154: "column part of USING clause cannot have qualifier". This is because address.* implies address.address_id, and Oracle doesn't like the idea that I might be qualifying a field that I've already indicated is the same as another field.

This frustrates me regularly. I'm not entirely sure why this is different from:

SELECT customer_name, address.*
FROM customer
INNER JOIN address ON customer.address_id = address.address_id;


Is there a way around this silliness that I have yet to discover?

Monday, March 19, 2007

Microsoft Developer Tools are Too Easy To Use?

Microsoft does a very good job of making sure that it's easy to get up and running building a program using their tools. Over time, this has shown itself to be both a strength and a weakness.

Skill, Education and Background
With a Microsoft toolset, people with very little skill, education and background in software development and the internals of computers and computing can get started building programs. This is very powerful, and encourages businesses to innovate.

Unfortunately, this also means that people who have learned how to build programs using your tools may have very little skill, education and background in software development. A little knowledge can be a dangerous thing.

Tasks: Simple and Complex
It can be easy to accomplish simple, common tasks. Want to build a form, connect the form fields to a database table and allow people to enter data? You can build that in Visual Studio while hardly touching a line of code.

On the other hand, sometimes when you want to go beyond the simple things to the complex things, there's very little support for you. The tools, the technologies and the documentation focus so strongly on ease of use, simple tasks and beginners that the "step up" is not only difficult for those beginners, but can be difficult even for experienced programmers, who have a hard time finding the resources and information they need.

Recency
I haven't used many of the Microsoft tools of late, so it's interesting to see that trend continue via 'Just another Blog's' hiring experience.

Thursday, March 15, 2007

Advertising Brokers and Censorship

Advertising and censorship have had a long and storied history. If you're an advertising-supported content business, you have to juggle the needs of your readers and the needs of your advertisers.

Your readers want interesting content to read, and some of that interesting content may be controversial. That content, or the controversy therein, can be directly against the interests of your advertisers. For instance, an article arguing there's too much processed corn in what we eat might be a problem if you're getting a lot of money from Doritos. An article about abortion could scare off advertisers of all kinds.

Advertising Brokers
In internet advertising, much of the advertising is placed through brokers, like Google AdSense, where the advertiser and the content site don't interact directly. Under this model, things must be easier for the content creator, who doesn't know which advertising will be placed on their site. For instance, a blogger using Google AdSense isn't told which advertising is coming up, and doesn't interact with the advertiser directly. Further, they will often know that the advertiser hasn't explicitly asked to be placed on their site, and may not monitor the content.

This is especially true of the smaller site. If you're a major internet portal, you may well be managing your own placements and have similar concerns. Anyone work in that space and want to comment?

Tuesday, March 13, 2007

WADL: A description language for the REST of us

If you must make an API available over a network or over the web, you've got a number of ways to do it. Conventional wisdom these days argues for WS-*, SOAP, SOA, and their ilk. These have a lot of power, but, to be honest, I'm much more likely to use something simple: XML over HTTP.

I'm not going to get into a discussion about the pros and cons of a WS-* approach vs. XML/HTTP, nor the theories behind REpresentational State Transfer. There are lots of places you can go for that.

However, if you are building or consuming APIs using XML over HTTP, it may have occurred to you that there's no consistent way of documenting and describing these services. WADL, the Web Application Description Language, attempts to fill that gap, and it seems relatively sensible.

I'm a little wary, because I've had to create and consume WSDL, and I'm not yet convinced that a description language adds enough value to warrant any additional complexity in the XML/HTTP stack. That said, it's interesting to see the applications people have considered for such a thing:

  • Generating Documentation from WADL
  • Generating Client libraries in multiple languages from WADL
  • Generating WADL from Examples (e.g. in Documentation)

Of course, there are alternatives. If you're writing XML over HTTP, are you considering a description language?

Checked Generic Collections Microbenchmark

After skim-reading "How'd this String get into my List?", I idly pondered the cost of using the 'checked' collections when using generic collections in Java.

One little microbenchmark later, and it seems, at least in my environment, that a checked collection takes about twice as long to do an add().

By way of example:
1.95M adds: 582ms unchecked, 1300ms checked
0.95M adds: 475ms unchecked, 616ms checked
450K adds: 239ms unchecked, 348ms checked

The results weren't so consistent that I'd want to lay money on them, but they do, at least, imply that the absolute overhead of a checked collection is fairly low, with the possible exception of extreme cases where you're chasing every last bit of performance.

If you're exceptionally curious about how the microbenchmark was performed, I'm willing to post code, but as with any microbenchmark, you're better taking it with a grain of salt, and verifying your own scenarios if it's vital for your work.
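
In the meantime, the shape of the measurement was roughly this; consider it a sketch under the usual microbenchmark caveats (no warm-up or repeated runs shown), not the exact code I ran:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Rough sketch comparing add() on a plain list vs. a checked list.
public class CheckedAddBenchmark {
    private static final int ADDS = 1000000;

    public static void main(String[] args) {
        List<String> unchecked = new ArrayList<String>();
        List<String> checked =
            Collections.checkedList(new ArrayList<String>(), String.class);

        System.out.println("unchecked: " + time(unchecked) + "ms");
        System.out.println("checked: " + time(checked) + "ms");
    }

    // Times ADDS calls to add(); a serious benchmark would warm up
    // the JVM and average several runs before trusting the numbers.
    private static long time(List<String> list) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < ADDS; i++) {
            list.add("element");
        }
        return System.currentTimeMillis() - start;
    }
}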

Friday, March 9, 2007

Talkers and Doers

Reading LazyCoder's "Do'ers vs. Talkers", about choosing to read the blogs of those actively doing rather than just talking, struck a chord, as I've been leaning that way myself. I'm not sure there are just doers and talkers, though; I think there's a continuous spectrum.

We can talk about some points along the spectrum, but those points don't represent the only choices, they represent some rough clusters that we can talk about, with no hard lines between them. One blogger might fall between these clusters, or represent two or more clusters, depending on the post. There are no hard and fast rules.

Doers
As the title says, these are people who are actively "doing" rather than "talking". Most of their time is focused on getting something (e.g. software development) done. When they're not busy with that, they take the time to share their experience, their frustrations, challenges and successes with us.

These people are valuable because they have first-hand experience with the subject matter, they know the things that you only learn when you stick with it day after day. These insights are hard to get any other way: it's why I really value what they have to say.

Sometimes, these are people who aren't directly involved in the day-to-day doing, but are so closely connected as to be indistinguishable. Look at Martin Fowler: I'm not sure if he spends much time coding, or even directing the architecture of projects these days, but he does a fine job of synthesizing architectural patterns.

Because doers focus on doing, they are usually less prolific than most of the other categories here. They're also fairly focused in theme: a given doer is focused on the things he or she is doing, and there's only so much one person can do in a day.

Conversing Doers
When one doer listens to another doer and joins the conversation, he or she becomes a conversing doer. This conversation can take place in the blogosphere, in person, at a conference, over email -- as long as one of the two shares the results of the conversation. This can be more valuable than the doer alone, because you may get the benefit of two or more different, hard-won insights on one subject.

Conversing doers can be slightly more prolific than doers, because they don't have to have put as much effort into an area to share some insights. For instance, if someone were to share their EJB3 experience, I might not have any of my own, but I can contrast that with how I use Hibernate, or my ancient experience with EJB 1.1.

Doer-Connected Talkers
These are people who are focused on talking, rather than doing, but who are closely (or directly) connected to the doers. They know so many doers that they can share tidbits about what each is working on. They can help direct you to the doers, and to what they're doing.

In some cases, doer-connected talkers will simply be consuming what the doers they know are already sharing, and passing on the pieces they believe are interesting. In other cases, they'll talk to doers directly, and share 'by proxy' what they've learned. This is particularly useful when the doers don't have the time or the interest to share their insights directly.

Some of the information is often lost in translation; because these talkers don't know the subject matter as intimately as the doers, they aren't likely to do a perfect job of sharing what there is to know. However, they can be very prolific, and cover a broad range of topics, because they focus on talking, and on finding interesting things that the doers are doing. This niche is filled by people like Robert Scoble.

Aggregators
Unlike the doer-connected talkers, aggregators aren't directly connected to doers. They're in the business of finding interesting information and passing it on: aggregated communication. It doesn't matter if this information comes from a doer directly, second-hand, third-hand, or somewhere else entirely, as long as they believe it will be interesting and that a significant proportion of their readership has not already read the original source of the information.

There's another spectrum here, one of speed. An aggregator can try and react to information quickly, directing you to a broad range of topics rapidly with a minimum amount of added information. They're trying to stay hours or days behind the information as it arrives. These are the Slashdots, Diggs, Reddits, the linkblogs.

Other aggregators do not focus on speed, but rather on comprehensive coverage of the topic. Some of these make a deliberate choice to 'slow down', while others are slow due to the medium (e.g. print, rather than web). They're likely to be days, weeks, even months behind the day's activities.

This 'slowness' gives you the freedom to do deeper analyses, to commission content rather than simply share what others have created, to talk to an array of sources about one subject and synthesize the informed opinion of many people. This kind of work takes time, the kind of time most fast aggregators don't have. These are the newspapers, books, magazines, conferences, and the websites that share 'articles' rather than 'blog entries'.

Who Do You Listen To?
Although I do read things from each category here, I do so for different reasons. RSS has done a great job of allowing me to stay connected to doers directly. This has reduced my reliance on aggregators.

Doer-connected talkers and aggregators are still useful to help get me to the doers in the first place, much like Radio can help you discover the music you enjoy.

So what about you? Do you find yourself increasingly moving to one end of the spectrum?

Thursday, March 8, 2007

Eclipse 3.3 Plan Updated

The plan for Eclipse 3.3 has finally been updated:

  • Milestone 6: March 23rd
  • Milestone 7: May 4th
  • Complete: Late June
Now if only it were possible to understand and believe the dates for the Europa simultaneous release, which have yet to make any real kind of sense to me (at least in part because they haven't been on target previously).

Plaster Dust

The joys of owning an old home. We're taking down some older plaster walls that don't have enough integrity to bother repairing, and will be putting up drywall in their place. (Yeah, apparently some people think that's a shame, but it's certainly easier to deal with).

We've tarp'd what we can, and closed off some doorways with plastic, but ripping plaster apart creates a lot of dust, and there's very little you can do to keep it fully contained. It takes me nearly twenty-four hours to feel de-dusted, and then I'm back in with the respirator mask and the renovator's bar from Lee Valley, getting re-dusted.

Feel my pain. Better yet, come over and help, bring a truck, and take some of my plaster home with you as a souvenir!

Wednesday, March 7, 2007

Too Many Technologies and Specialization

The project on which I spend most of my time has, over time, developed a fairly heterogeneous architecture, from Java to .Net, from Oracle to SQL Server, from Reporting and Monitoring tools through shell scripts and web frameworks.

Technological Proliferation
As developers, it's important that we're able to make use of new technologies that we believe will improve productivity. An increasing technology mix can be a desirable sign in a project, even a healthy one, showing a kind of living vitality that stagnant projects lack. However, a ballooning technology mix can be a warning sign as well.

Getting that balance right is difficult, and probably warrants a post of its own. I'll come back to that another day. However, I've begun to notice a particular danger of a heterogeneous application mix recently: specialization.

Specialization
Some technologies are easily mixed by a given team. Even in the early stages of the project, it wasn't that hard to keep developers informed about how to use Tapestry, Spring and Hibernate in concert. Java developers are used to using a mix of libraries and frameworks to get their job done, and this project is no different. Although we had a single specialist in the form of a database developer, there was a fair amount of overlap, as all of the Java developers were in and out of the database on a regular basis, making changes, learning the structure of the schema; the specialization was really one of 'ownership', not of knowledge.

As the project grew, a number of external forces caused the technology mix to grow. We added an analytical datastore, reporting technology, an ETL tool, monitoring and scheduling. We mixed in some .NET technology to take advantage of some components that we couldn't easily incorporate otherwise, and web services to bridge the two.

As this happened, the team started to specialize. Reporting developers were added who knew the reporting technology and became familiar with the analytical datastore. The database development team was expanded to handle the analytical datastore and the ETL tooling. Some operational people became experts in the monitoring and scheduling. Some of the developers worked on the .NET technologies while others didn't.

The Impact of Specialization
When your team members specialize, they gain some efficiencies. A developer can focus on a particular technology or set of technologies. He or she can become expert by working in that technology alone. There are also some costs to pay.

Specialization reduces your 'bus number': the number of team members who can be 'hit by a bus' before your project is SOL. It also reduces your resourcing flexibility. Don't have any pressing reports? Too bad your reporting developers can't add a core feature. Having trouble with the ETL? Too bad your Java developers don't know how to install SSIS, let alone use it.

Each project has to make this tradeoff for itself; neither is wrong, it's just a decision. But it may not be a decision you have to make.

Do We Have to Specialize?
Here's the key: specialization starts to be appealing if, and only if, you start to use a large number of disconnected technologies. So when you're adding a technology to your project, ask yourself if you're walking down the wrong path.

For instance, if you're considering writing scripts to interact with the database because you can't afford to build a user interface, ask yourself: Is there another way? Can we fold this into the interface using code generation, or a framework that simplifies CRUD? Is there a way to build this in a technology you're already using?

Does your team specialize? Do you use a wide mix of technologies, or have you found a way to keep it down to a small set?

Tuesday, March 6, 2007

Double-Click Protection in Firefox

A common class of errors in web applications is failing to handle the same request arriving twice, as happens when a user impatiently clicks a link or a button a second time while the first request is taking longer than usual.

Recently, someone reported an issue just like that on an application we develop: If you click an image button twice, the system gets, shall we say, cranky. This morning, I attempted to verify it using Firefox 2.0.X, and couldn't do it.

I tried the same thing in Internet Explorer 7.X, and voila. No problem, error verified.

That seems to imply that Firefox (or possibly one of the extensions I use) is protecting me from a multiple-submit problem. Interesting. I wasn't able to turn up anything in a quick Google search; does anyone know more about this?
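
Whatever the browser is or isn't doing, the server side still needs its own guard. Here's a minimal sketch of the synchronizer-token approach in a hypothetical servlet; the names are mine, and real code would need proper error handling and encoding:

import java.io.IOException;
import java.util.UUID;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class OrderFormServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Stamp the rendered form with a one-time token.
        String token = UUID.randomUUID().toString();
        request.getSession().setAttribute("formToken", token);
        response.getWriter().print(
            "<form method='post'>"
            + "<input type='hidden' name='token' value='" + token + "'/>"
            + "<input type='submit' value='Submit'/></form>");
    }

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        HttpSession session = request.getSession();
        String expected = (String) session.getAttribute("formToken");
        session.removeAttribute("formToken"); // consume it; a duplicate submit won't match
        if (expected == null || !expected.equals(request.getParameter("token"))) {
            response.sendError(HttpServletResponse.SC_CONFLICT, "Duplicate submit");
            return;
        }
        // ... handle the request exactly once ...
    }
}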

Monday, March 5, 2007

Profiling a Build

I've been working on a product for the past few years. When we started, the build was a sub-minute job, but by the time of the first release, it had made its way up to five minutes, mostly in tests.

At the end of the second release, it was twenty minutes, again, mostly tests, although a UI smoke-test and some additional database tooling had been added. That was longer than I'd gotten used to, but livable, if need be.

Now, four or so releases later, we're at forty minutes. With our Continuous Integration server, developers can check in without running the entire cross-project build, as long as they take reasonable steps to ensure that the checkin is not likely to break the build. Still, it has become clear that the "total build" has increased in time, and will continue to do so, unless we take measures to reduce it.

What Can We Do?
Although this is a challenge our project faces, our project is not unique. As a project matures, we add to it, in tests, tools, project structure. These additions cost time in the build process, and that time can hinder the productivity of the team.

As with any kind of performance improvements, it's best to start by measuring the performance and determining bottlenecks: the problem may not be where you think. This means: profiling your project's build.

Since our project is not unique, you'd expect to find information on profiling the build of a project. Surprisingly, in six months of occasional queries, I haven't found much to satisfy my interest in this subject matter. There seems to be a dearth of information on profiling builds.

Most build tools don't have any kind of built-in profiling capabilities. You could use a Java profiler, but these tend to lack the context-sensitivity to be truly valuable, particularly when you're dealing with infrastructure that tends to repeat.

Build Tools
I'm surprised that build tools (sucks-rocks.com) don't seem to come with some basic profiling capabilities, particularly Maven (1/2), which has a standard build process. Maven could easily tell me things like: the time it takes to compile vs. test vs. create the distributable vs. deploy to the repo; what percentage of the multi-project build each project takes. This kind of technology would not be difficult to build, so I'm surprised that no-one has gotten around to doing it. (And, no, I don't plan on spending the required time in Jelly myself. Jelly is evil.)
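
For Ant, at least, a crude version isn't hard to roll yourself with a BuildListener, attached via Ant's -listener flag; this is an unpolished sketch, not a published tool:

import java.util.HashMap;
import java.util.Map;

import org.apache.tools.ant.BuildEvent;
import org.apache.tools.ant.BuildListener;

// Reports per-target timings; compile this onto Ant's classpath and
// run with: ant -listener TimingListener
public class TimingListener implements BuildListener {
    private final Map<String, Long> starts = new HashMap<String, Long>();

    public void targetStarted(BuildEvent event) {
        starts.put(event.getTarget().getName(), System.currentTimeMillis());
    }

    public void targetFinished(BuildEvent event) {
        String name = event.getTarget().getName();
        Long start = starts.get(name);
        if (start != null) {
            System.out.println("[timing] " + name + ": "
                + (System.currentTimeMillis() - start) + "ms");
        }
    }

    // The remaining callbacks aren't needed for a simple timing report.
    public void buildStarted(BuildEvent event) {}
    public void buildFinished(BuildEvent event) {}
    public void taskStarted(BuildEvent event) {}
    public void taskFinished(BuildEvent event) {}
    public void messageLogged(BuildEvent event) {}
}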

With a large multi-project build, it'd be really great to have some sense of where your time is being spent. Does your build tool tell you this kind of information?

Profilers
Profilers are generic, so they tend to lack the context that makes it easy to interpret the results. That's not to say that you can't run your entire build through a profiler, just that the results will require more work to gather and interpret than a simple build-timing report would.

Some profilers attempt to look for context, to give you information about web requests, database transactions and queries, and so on. Some even try to extend that context-finding control to you, the developer (see the interceptor API). But none that I'm aware of comes with any sense of a build context out of the box, which is a shame.

Other Approaches and Closing the Feedback Loop
So what do you do? How do you profile the build in your project? Have you found a way to do this that's more effective than anything I've considered here?

Sunday, March 4, 2007

The Two Sides of Insourced Software Support

It's amusing, the disconnected conversations you can find for yourself in the blogosphere if you look hard enough.

Just the other day, I was reading Joel Spolsky's take on Customer Service: "When we handle a tech support incident with a well-qualified person here in New York, chances are that’s the last time we’re ever going to see that particular incident. So with one $50 incident we’ve eliminated an entire class of problems."

And this afternoon, skimming through the javablogs email, I found arrghh's frustrated entry on the same subject (quoting a response to a desired rewrite): "No thanks, the support team rely upon the money that comes in from all the calls. $100 a call then $50 for each hour they spend on it works up to a good holiday a year for each of them"

So there you go. I guess it depends who you work for.

Chateau: Gate


Chateau: Gate, originally uploaded by diathesis.
As I said on Friday, I have a pile of photos on my laptop to go through, process, post and archive. This photo, of the Chateau Frontenac, was taken on our return trip from Moncton to Toronto after our wedding. It was a foggy night, so there are a few nicely atmospheric shots of the Chateau, whose lights catch the fog in an evocative way.

Friday, March 2, 2007

Musings: Meetings, Lightroom, Netbeans, Ruby

Where does the time go?

Exceptionally busy at work this week, I've been bouncing from meeting to meeting. This is always a tricky balance, as many of these seem to be productive meetings, but at the same time, if all I have time for is meetings, I think the balance has shifted in a bad way.

Spent some more time with Lightroom 1.0 tonight: I've got something like 40GB of RAW photos on the laptop to organize, process, upload and burn, and Lightroom seems better than most at helping me with an efficient workflow.

Lightroom behaved for me tonight. No sign of the flakiness, shutdowns and other nonsense that plagued me the other night. That's a positive sign. I'll keep my fingers crossed. If I can get through the trial without experiencing more of that, I could be persuaded to enjoy Lightroom.

Trying to clear some photo space so I can download/install Netbeans M7; I've got a minor ruby project I'd like to toy with, and I've been meaning to try the Netbeans/Ruby support, so I can kill two birds with one stone.