“Just Ship It.”

In my experience, most software-dependent startups fail because they never actually finish the software. It’s really that simple.

I’m trying to figure out how I feel about Jeff Atwood’s recent proclamation: “Version 1 Sucks, But Ship It Anyway.” As a connoisseur of Software Projects That Suck, I get Atwood’s point: you don’t really know what’s wrong with Version 1.0 until some real customers get to play with it and let you know what they think. I’ve often spent days or weeks perfecting a feature that nobody cared about! Why not save that time by getting feedback first?

Continue reading “Just Ship It.”

Coding sideways

Am I right that some of the hardest programming happens when you’re modifying existing code that isn’t quite well enough documented?

I’m looking at a scientific application in which I’ve struggled through all the code leading up to drawing some graphs, and the graphs are still obviously wrong. Between me and success lie a few layers of trigonometry and matrix algebra.

Obviously the physicist who wrote the original code had some intention for the methods with names like get_x() and InverseTransform(). And I’ve gotten through about half of this stuff with revelations like, “Oh, this converts the photo grid coordinates into screen coordinates!” or “I can cut this scaling out entirely because I already have a transformation matrix on the Graphics context.”

But you know what would really help?

Continue reading Coding sideways

The silver’s all around you

This is a response to Peter Kretzman’s coherent and insightful blog post, “No silver bullets. Really!” You should go read that first. I’ll wait.

Peter’s post, in turn, is a response to the classic paper, “No Silver Bullet: Essence and Accidents of Software Engineering” by Fred Brooks, in which Brooks says that complexity in software development is essential, not accidental. You should read that too.

Now here’s what I think.

Continue reading The silver’s all around you

Small isn’t the new Big; that’s okay.

A while back, I was led to Jason Cohen’s blog post on not trying to look like a big company when you’re not. Jason makes a good point: when Lockheed Martin is ready to order 1,000 copies of new software, they probably won’t buy it from a small company anyway. That’s often true. But I think there’s a more important reason to knock off the high-falutin’ corporate image thing.

Continue reading Small isn’t the new Big; that’s okay.

When your C# has to be fast.

When Microsoft came up with the .NET environment and the snazzy new C# programming language to go with it, one of the design goals was to support code that could be shoved willy-nilly across insecure networks. Thus the Common Language Runtime (CLR), which among other things creates a runtime environment that’s a little like the Java Virtual Machine (JVM) from Sun.

Code for .NET doesn’t execute directly on the CPU, and it doesn’t talk directly to the operating system. Instead, the CLR (part of that big download when you pull “the .NET Framework” from Microsoft’s download site) treats the code as input and does whatever needs to be done at the CPU and operating system level for you.

Safety first, usually

Why would anyone bother with that? Among other things, code running under the CLR can be verified as “safe.” Safe code won’t interfere with processes it’s not supposed to control; it observes security restrictions; and it can’t access code or data through raw pointers. So in theory, and most of the time in practice, CLR code won’t mess up your computer, and therefore it’s safe to run CLR code from untrusted sources.

The downside, again one among many, is that as a programmer you don’t have access to pointers in safe code. Pointers are automatically considered “unsafe.” Any code with pointers in it must be marked “unsafe.” Any assembly containing that code has to be compiled with the /unsafe switch and fails CLR verification. Any application that ships that assembly inherits the taint… blah blah blah.
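To make the mechanics concrete, here’s a minimal toy sketch (my own example, not code from this post) of what the compiler demands: the method that touches pointers carries the unsafe keyword and the assembly needs the /unsafe switch, while the caller stays ordinary C#.

```csharp
using System;

public class UnsafeDemo
{
    // Pointer code must sit in an unsafe context, and the whole assembly
    // must be compiled with the /unsafe switch.
    public static unsafe int SumWithPointers(int[] values)
    {
        int sum = 0;
        fixed (int* p = values)      // pin the array so the GC can't move it
        {
            for (int i = 0; i < values.Length; i++)
            {
                sum += p[i];         // raw pointer read: no bounds check
            }
        }
        return sum;
    }

    public static void Main()
    {
        // The caller needs no unsafe keyword of its own, but it now ships
        // in an assembly that fails CLR verification.
        Console.WriteLine(SumWithPointers(new[] { 1, 2, 3 }));  // prints 6
    }
}
```

Note the `fixed` statement: managed arrays can be moved by the garbage collector, so you have to pin them before taking a pointer into them.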

When it has to be fast

Right now I’m working on a .NET application that needs to pull a few megabytes of data from a legacy device driver and do a lot of math on it. Loading all the data into a big byte[] array and then accessing each element one by one for calculations proved too slow. Instead, I wrote something like this, with the names changed to protect the innocent:

[sourcecode language="csharp"]
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.InteropServices;

public unsafe class BitContainer
{
    private const long FRAME_WIDTH = 1024; // pixels per row; matches the device

    private IntPtr data;                   // unmanaged buffer: no CLR bounds checks

    public static int Size;                // buffer size in bytes, initialized elsewhere

    public static int Count                // however many ushorts can fit into Size
    {
        get { return Size / sizeof(ushort); }
    }

    public BitContainer()
    {
        data = Marshal.AllocHGlobal(Size);
    }

    ~BitContainer()
    {
        Marshal.FreeHGlobal(data);
    }

    public void LoadFromStream(BinaryReader input)
    {
        ushort* p = (ushort*) data.ToPointer();
        for (int i = 0; i < Count; i++)
        {
            *p = input.ReadUInt16();
            p++;
        }
    }

    public void GetItemsOverThreshold(int threshold, int max, ref List<Item> items)
    {
        items.Clear();
        ushort* p = (ushort*) data.ToPointer();
        for (long i = 0; i < Count; i++, p++)
        {
            if (*p >= threshold)
            {
                if (items.Count >= max)
                {
                    throw new ApplicationException("Too many items passing! Turn down the gain?");
                }
                long x = i % FRAME_WIDTH;
                long y = i / FRAME_WIDTH;
                items.Add(new Item(x, y));
            }
        }
    }
}
[/sourcecode]

The alternative to tearing through the data with a C-style pointer would have been to pick through the elements of a .NET array of byte or short values. Every array access goes through CLR bounds-checking and the like, and when you’re hitting literally millions of input values, that’s just too slow. I originally wrote the BitContainer code above that way, and it took hours.
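To give a feel for the two styles side by side, here’s a toy sketch (my own example; ScanSafe, ScanUnsafe, and the sample data are made up, not the application’s code): the safe version indexes a ushort[] with CLR bounds checks on each access, while the unsafe version pins the buffer once and walks it with a pointer. Both find the same indices.

```csharp
using System;
using System.Collections.Generic;

public class ThresholdScan
{
    // Safe version: the CLR bounds-checks buffer[i] on each access
    // (though the JIT can often elide the check in this loop shape).
    public static List<int> ScanSafe(ushort[] buffer, int threshold)
    {
        var hits = new List<int>();
        for (int i = 0; i < buffer.Length; i++)
        {
            if (buffer[i] >= threshold)
            {
                hits.Add(i);
            }
        }
        return hits;
    }

    // Unsafe version: pin the buffer once, then walk it with a raw pointer.
    public static unsafe List<int> ScanUnsafe(ushort[] buffer, int threshold)
    {
        var hits = new List<int>();
        fixed (ushort* start = buffer)
        {
            ushort* p = start;
            for (int i = 0; i < buffer.Length; i++, p++)
            {
                if (*p >= threshold)
                {
                    hits.Add(i);
                }
            }
        }
        return hits;
    }

    public static void Main()
    {
        var data = new ushort[] { 10, 500, 3, 900, 500 };
        Console.WriteLine(string.Join(",", ScanSafe(data, 500)));    // prints 1,3,4
        Console.WriteLine(string.Join(",", ScanUnsafe(data, 500)));  // prints 1,3,4
    }
}
```

In a trivial loop like ScanSafe’s the JIT may optimize the bounds checks away, so as always, benchmark your real workload before reaching for pointers.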

The takeaway

It’s a pain to test, it’s sometimes hazardous, and it contaminates your entire application with the unsafe label, but dropping below the CLR’s safety checks and writing fast code with pointers is sometimes the only way to get acceptable runtime speed.