I’m back from a trip to a customer.
How was it?
Okay. I got more snow than I expected on the way there, so the drive wasn’t much fun. Then again, part of the trip goes through a beautiful forest that made up for everything else.
Also, while showing the customer a new feature, the app crashed.
Typical. Blame it on Murphy!
That’s what I did at first. Then I blamed it on the developer. And then I finally went looking at the C# code to find out why it happened.
What was it?
It turned out to be a rather common but not obvious mistake. See the code below and tell me the value of each of these variables —
float f = 0.0f;
int i = (int)f;
object o = f;
var doubleF = (double) f;
var doubleI = (double) i;
var doubleO = (double) o;
I’m sensing a catch here, but I’ll bite. They’re all cast from the same original variable
f so I’m guessing they’d all end up 0.0…?
You would, wouldn’t you? But you’re wrong.
The final line in that code will throw an
InvalidCastException at you — and crash your app if you don’t catch it, as was the case in our app.
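A minimal standalone sketch of the failure (the try/catch and variable names are mine, added for illustration):

```csharp
using System;

class Program
{
    static void Main()
    {
        float f = 0.0f;
        object o = f;           // the float value is boxed into an object

        try
        {
            var d = (double) o; // attempts to unbox a boxed float as a double
            Console.WriteLine(d);
        }
        catch (InvalidCastException e)
        {
            // this is the branch that actually runs
            Console.WriteLine(e.Message);
        }
    }
}
```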
Wait, what? How? How come you can’t cast 0.0 to double?
Well, you can. For instance, this works perfectly —
var d = (double) 0.0f;
But this doesn’t —
object f = 0.0f;
var d = (double) f;
It makes no sense!
Actually it does. The problem is taking
object to mean “anything.” Which incidentally it does, just not the way most people think. You see,
object is the C# keyword for
Object, the class every other type ultimately derives from. You can store anything in an
object variable: when you assign a value type to it, the value gets boxed inside an Object. The value is stored there, but the compiler no longer knows what type it is.
No no no! I know for a fact that you can too check what type is stored in an object!
You’re right, you can. For instance —
object o = /* something */;
Console.WriteLine(o.GetType());
This will print the type of whatever you put in the variable
o. But this is at run time: the compiler doesn’t know.
That’s why we use casting. If we know for a fact that variable
o will contain a, say,
int, we can help the compiler and tell it about it with a cast. Remember, when you cast something, you are telling the compiler what type will be stored in the variable. The compiler can’t be held responsible if you lie to it.
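When the stored type does match what you told the compiler, the cast works fine. A small sketch (my own example values):

```csharp
using System;

class Program
{
    static void Main()
    {
        object o = 42;          // an int, boxed into an object

        // We promised the compiler an int, and that is what is inside,
        // so the unboxing cast succeeds:
        int i = (int) o;
        Console.WriteLine(i);   // prints 42

        // Promise the wrong type and the unbox fails at run time:
        // long l = (long) o;   // would throw InvalidCastException
    }
}
```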
Let’s get back to the original problem —
object o = 1.0f;
var d = (double) o;
You told the compiler that
o will be a
double, but it isn’t. Remember,
double is shorthand for the struct
Double, just as
float is for
Single. And guess what? A
Double is not a
Single. When you stored a
float in the variable
o of type
object, the
float value was boxed inside an object of type
Object. When you cast, the runtime has to unbox whatever is inside
o, and guess what, the value stored in
o has a different structure, with different methods and storage, than what you told the compiler it was. You could convert between them, but they are not the same.
So at run time the unboxing expects an object of type
Double but finds a
Single, and things fail miserably.
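If you do know the boxed type, one way out is to unbox to the exact stored type first and only then convert; a sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        object o = 1.0f;            // a boxed Single

        // Unbox to the exact stored type (float), then let the
        // compiler apply the normal float-to-double conversion:
        var d = (double)(float) o;
        Console.WriteLine(d);       // prints 1
    }
}
```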
But you just said that we can convert between them! Why doesn’t the compiler do it?
It could. But think of how this would work out in real life. Remember, the compiler doesn’t know what will be inside
o, so it would need to test what the value is at run time. It would need to check whether the type is, say, a
string. If it is, convert it to a
Double. If it isn’t, check whether it is an
Int32. Then an
Int64. Then a
DateTime. The number of possibilities is enormous, and the compiler would have to generate all this code every time it finds a cast. This would be a lot of code. It would be so much code, in fact, that you’d be mad not to put it all in separate methods. It would also be slow, so the compiler doesn’t do this.
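What that generated code would look like, roughly, if we wrote it by hand (a sketch with a made-up helper name; the real list of types would be far longer):

```csharp
using System;

class Program
{
    // One test per possible boxed type — this is the kind of code
    // the compiler would have to emit at every cast site.
    static double ToDoubleByTesting(object o)
    {
        if (o is float f)  return (double) f;
        if (o is int i)    return (double) i;
        if (o is long l)   return (double) l;
        if (o is double d) return d;
        throw new InvalidCastException($"Cannot convert {o.GetType()} to double.");
    }

    static void Main()
    {
        Console.WriteLine(ToDoubleByTesting(1.0f)); // a boxed float works now
    }
}
```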
That’s why we have the
Convert class, which in turn depends on types implementing the
IConvertible interface. Whenever you want to convert a value of TypeA to TypeB, you can use these conversion methods. You can do —
object o = 1.0f;
var d = Convert.ToDouble(o);
The compiler authors had to make a decision: either they’d generate lots of slow code to test for the type and convert the value, or they’d leave the decision for the programmer who can call
Convert.ToSomething when needed.
And they chose the latter.
Exactly. I believe it was reasonable. If you know something will be of a given type at run time, you can still cast it. Otherwise, you should convert it.
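For completeness, Convert.ToDouble inspects the runtime type for you, so it handles any boxed value whose type implements IConvertible; a sketch with a few sample values of my own:

```csharp
using System;

class Program
{
    static void Main()
    {
        object[] values = { 1.0f, 2, 3L, 4.5m };

        foreach (var o in values)
        {
            // Convert checks the runtime type (via IConvertible)
            // and performs the right conversion for each value.
            Console.WriteLine(Convert.ToDouble(o));
        }
    }
}
```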