Thursday, December 5, 2013

Var-ious Thoughts

I started my programming career as a Smalltalker (before Java or C# even existed) and enjoyed its "everything is an object" philosophy and dynamically typed variables. In the late 90's I worked in Java on the VisualAge for Java product (which eventually became the Eclipse platform). Then I switched jobs and ended up using VB6. It wasn't as bad as I feared because VB6 at least had objects, classes, and interfaces (though no inheritance). Still, I was happy when I convinced my bosses that we should switch to C#. It was nice to be back in a fully object-oriented language.

Coming from a Smalltalk background, I had been reluctant to embrace a statically typed language (let alone statically typed collections). After a while, though, I found I liked some of the benefits, such as having the compiler catch type errors and having the type declaration serve as self-documentation. With the introduction of the var keyword in C# 3.0, I was concerned we were taking a step back to the VB days. I understood the anonymous-type scenarios it was added for, but I didn't want to lose the documentation benefits. I've come to realize that when you declare a variable and immediately instantiate and assign an object to it, 'var' can save a lot of typing. Just don't overuse it where the type is not readily apparent (i.e., don't make me think or have to search to figure out the type).

Here are my general guidelines for using 'var':

  1. Don't use 'var' when the type of the variable is not readily apparent.
       var s = SomeMethod();
  2. Use 'var' when declaring a variable and assigning an obvious type to it.
       var collection = new List<string>();
  3. Use 'var' when required to (e.g. with anonymous types).
       var anon = new { Name = "Joe", Age = 34 };
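The guidelines can be sketched in one small program. The BuildAges helper here is hypothetical, just to illustrate a method whose return type isn't obvious at the call site:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class VarGuidelines
{
    static void Main()
    {
        // Guideline 2: the type is obvious from the right-hand side.
        var names = new List<string> { "Ann", "Bob" };

        // Guideline 3: anonymous types require 'var'.
        var person = new { Name = "Joe", Age = 34 };

        // Guideline 1: the return type of BuildAges is not apparent here,
        // so an explicit declaration documents the code better than 'var'.
        Dictionary<string, int> ages = BuildAges(names);

        Console.WriteLine(person.Name + " knows " + ages.Count + " people");
    }

    // Hypothetical helper: maps each name to its length.
    static Dictionary<string, int> BuildAges(List<string> names) =>
        names.ToDictionary(n => n, n => n.Length);
}
```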

Wednesday, December 4, 2013

Fast XML parsing with XmlReader and LINQ to XML

Using XmlDocument to parse large XML strings, as we know, can spike memory usage: the entire document is parsed and turned into an in-memory tree of objects. If we want to parse the document using less memory, there are a couple of alternatives. We could use an XmlReader, but the code can be messy and it’s easy to accidentally read too much (see here). We could use XPath, but that’s designed more for searching sections of XML than for parsing an entire document. Lastly, we could use LINQ to XML, which offers the simplicity of XmlDocument along with LINQ queries, but by default it loads the entire document into memory.

This blog post offered an interesting alternative of combining LINQ to XML with XmlReaders. This hybrid approach seemed to offer the speed of forward parsing XmlReaders with the simplicity and functionality of LINQ objects.

The first step was creating a method in a utility class that abstracts away the reader and returns just the matching elements. The secret sauce is the ‘yield return’ keyword, which I will explain below.

/// <summary>
/// Given an xml string and target element name return an enumerable for fast lightweight 
/// forward reading through the xml document. 
/// NOTE: This function uses an XmlReader to provide forward access to the xml document. 
/// It is meant for serial single-pass looping over the element collection. Calls to functions 
/// like ToList() will defeat the purpose of this function.
/// </summary>
// Requires: System.Collections.Generic, System.IO, System.Xml, System.Xml.Linq
public static IEnumerable<XElement> StreamElement(string xmlString, string elementName) {
    using (var reader = XmlReader.Create(new StringReader(xmlString))) {
        // XNode.ReadFrom leaves the reader positioned on the node *after* the
        // element it just read, so first check whether we are already sitting on
        // the next match before seeking forward with ReadToFollowing.
        while ((reader.NodeType == XmlNodeType.Element && reader.Name == elementName)
               || reader.ReadToFollowing(elementName))
            yield return (XElement)XNode.ReadFrom(reader);
    }
}
Say you have a large CD catalog to read in like:
<Catalog>
  <CD>
    <Title>Stop Making Sense</Title>
    <Band>Talking Heads</Band>
    <Year>1984</Year>
  </CD>
  ...
</Catalog>
If you were using an XmlDocument to read that from an XML string and process each element you might have code like:
XmlDocument xmlDoc = new XmlDocument();
xmlDoc.LoadXml(catalogXml);
XmlNodeList discs = xmlDoc.GetElementsByTagName("CD");
foreach (XmlElement discElement in discs) {
    //... Process each element
}
You can convert that to using the hybrid LINQ/XmlReader approach like the following:
IEnumerable<XElement> discs = from node in XmlUtils.StreamElement(catalogXml, "CD") select node;
foreach (XElement discElement in discs) {
    //... Process each element
}
The one big caveat is that you can’t call anything on the discs collection that requires looping over all of the items to get an answer (e.g. ToList(), Count(), etc.). This is because we are relying on yield to return each element one at a time: we process it and then move on to the next one. That allows the memory associated with individual elements to be garbage collected as we go rather than held en masse. This approach works best when we have an XML document with a set of homogeneous elements that can be processed in a single forward pass.

More on yield:
You consume an iterator method with a foreach statement or a LINQ query. Each iteration of the foreach loop calls the iterator method. When a yield return statement is reached, the expression is returned and the current location in the code is retained. Execution restarts from that location the next time the iterator function is called.
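A minimal, self-contained sketch of that mechanism (names are my own, for illustration): the iterator pauses at yield return and resumes from the same spot on the next iteration, so elements are produced one at a time rather than built up in a collection.

```csharp
using System;
using System.Collections.Generic;

class YieldDemo
{
    // Produces the first 'count' squares one at a time; local state
    // (the loop counter) is retained between calls.
    static IEnumerable<int> FirstSquares(int count)
    {
        for (int i = 1; i <= count; i++)
        {
            // Execution pauses here and resumes on the next iteration.
            yield return i * i;
        }
    }

    static void Main()
    {
        foreach (int square in FirstSquares(3))
            Console.WriteLine(square); // prints 1, 4, 9
    }
}
```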
One thing to stress when making any performance-related change is that you need to establish baseline performance numbers and then verify that the change improves on them. So for each method, record the time and memory use before and after the change. You can use something like the following to determine the baseline and any performance gains.
Stopwatch stopWatch = Stopwatch.StartNew();
long startMem = GC.GetTotalMemory(false);

// Code to benchmark

stopWatch.Stop();
long endMem = GC.GetTotalMemory(false);
Console.WriteLine ("{0} ms", stopWatch.Elapsed.TotalMilliseconds);
Console.WriteLine ("{0} mem", endMem - startMem);

Sunday, December 1, 2013

iPhone photos and an empty DCIM folder

My iPhone was overflowing with photos. I've used iCloud to sync them to the computer in the past, and it has been OK, but the lack of online viewing/sharing is a real negative with iCloud. Recently I tried out the improved SkyDrive app. I like the online viewing of photos, and the sharing options from SkyDrive are great. The app needs to be launched each time to start the upload process; once started it will sync a little over 100 photos at a time, but then it needs to be refreshed to continue. It really should use location services to trigger an upload when I get home. The other downside is that it doesn't upload videos.

Once all of my photos were uploaded to SkyDrive, I wanted to download the videos and remove the photos to clear up some space. I've avoided installing iTunes on my latest computer, but no problem, I figured: I would just plug in the iPhone and access the pictures directly from the DCIM folder. After plugging in the phone, the DCIM folder was empty and Windows photo import listed 0 pictures. Some panicking and searching led me to the solution (one of the more helpful links here): I needed to unlock my phone first and then plug it in. Security is often an annoyance, especially when there's no visible feedback about what to do.