Posts tagged computer science


Never having taken any computer science courses beyond the freshman level, I have some glaring holes in my computer science knowledge. According to Customs and Border Protection, every self-respecting software engineer should be able to balance a binary search tree.

I used a library-provided red-black tree in Policy-Based Data Structures in C++, so I've been aware of the existence of these trees for quite some time. However, I've never rolled my own. I set out to fix this deficiency in my knowledge a few weeks ago.

I've created an augmented AVL tree to re-solve ORDERSET.

The Problem

The issue with vanilla binary search trees is that many of the operations have worst-case $O(N)$ complexity. To see this, consider the case where your data is already sorted, so you insert your keys in ascending order. Then, the binary search tree degenerates into a linked list.
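
To make this concrete, here is a minimal sketch (my own illustration with a stripped-down node struct, separate from the AVL code below) showing that inserting keys in ascending order into a plain, unbalanced binary search tree produces a right-leaning chain of depth $N - 1$.

#include <iostream>

using namespace std;

struct Node {
  int key;
  Node* left;
  Node* right;
};

// Standard, unbalanced BST insertion.
Node* naiveInsert(Node* root, int key) {
  if (root == nullptr) {
    Node* leaf = new Node();
    leaf -> key = key;
    leaf -> left = nullptr;
    leaf -> right = nullptr;
    return leaf;
  }
  if (key < root -> key) {
    root -> left = naiveInsert(root -> left, key);
  } else {
    root -> right = naiveInsert(root -> right, key);
  }
  return root;
}

int main() {
  Node* root = nullptr;
  for (int key = 1; key <= 1000; ++key) root = naiveInsert(root, key);
  // Every left pointer is null, so the tree is a chain of depth N - 1.
  int depth = 0;
  for (Node* node = root; node -> right != nullptr; node = node -> right) ++depth;
  cout << depth << endl; // prints 999
  return 0;
}

Every search, insertion, and deletion on such a chain can take $O(N)$ time, which is exactly what AVL trees prevent.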

Definition of an AVL Tree

AVL trees solve this issue by maintaining the invariant that the height of the left and right subtrees differ by at most 1. The height of a tree is the maximum number of edges between the root and a leaf node. For any node $x$, we denote its height $h_x$. Now, consider a node $x$ with left child $l$ and right child $r$. We define the balance factor of $x$ to be $$b_x = h_l - h_r.$$ In an AVL tree, $b_x \in \{-1,0,1\}$ for all $x$.

Maintaining the Invariant

Insertion

As we insert new values into the tree, our tree may become unbalanced. Since we are only creating one node, if the tree becomes unbalanced, some node has a balance factor of $\pm 2$. Without loss of generality, let us assume that it's $-2$ since the two cases are symmetric.

Now, the only nodes whose heights are affected lie along the path between the root and the new node, so only these nodes, the new node's ancestors, can have their balance factors altered. Thus, we should start at the deepest node with an incorrect balance factor. If we can find a way to rebalance this subtree, we can recursively balance the whole tree by going up to the root and rebalancing on the way.

So, let us assume we have a tree whose root has a balance factor of $-2$. Thus, the height of the right subtree exceeds the height of the left subtree by $2$. We have $2$ cases.

Right-left Rotation: The Right Child Has a Balance Factor of $1$

In these diagrams, circles denote nodes, and rectangles denote subtrees.

In this example, by making $5$ the new right child, we still have an unbalanced tree, where the root has balance factor $-2$, but now, the right child has a right subtree of greater height.

Right-right Rotation: The Right Child Has a Balance Factor of $-2$, $-1$, or $0$

To fix this situation, the right child becomes the root.

We see that after this rotation is finished, we have a balanced tree, so our invariant is restored!
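
Since the diagrams may not load, here is a minimal standalone sketch of the right-right case (a left rotation), reusing the simplified Node struct from the sketch above; the full, templated version with height and subtree-size bookkeeping appears in the implementation below.

// The right child becomes the new root of this subtree. Its old left subtree,
// whose keys lie between the old root's key and the new root's key, becomes
// the right subtree of the old root.
Node* rotateLeft(Node* root) {
  Node* newRoot = root -> right;
  root -> right = newRoot -> left;
  newRoot -> left = root;
  // A real AVL tree would recalculate the heights of root and newRoot here.
  return newRoot;
}

The left-left case is the mirror image, and the right-left case is handled by first rotating the right child in the opposite direction to reduce it to the right-right case.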

Deletion

When we remove a node with two children, we need to replace it with the next largest node in the tree, its in-order successor. Here's how to find this node:

  1. Go right. Now all nodes in this subtree are greater.
  2. Find the smallest node in this subtree by going left until you can't.

We replace the deleted node, say $x$, with the smallest node we found in the right subtree, say $y$. Remember this path. The right subtree of $y$ becomes the new left subtree of the parent of $y$. Starting from the parent of $y$, the balance factor may have been altered, so we start here, go up to the root, and do any rotations necessary to correct the balance factors along the way.
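
Here is a sketch of steps 1 and 2, again using the simplified Node struct from before; the actual erase implementation below also records the path it takes so that it can fix balance factors on the way back up.

// Find the in-order successor of root, that is, the smallest node in its
// right subtree. Assumes root has a right child.
Node* findSuccessor(Node* root) {
  Node* current = root -> right;                                 // 1. Go right.
  while (current -> left != nullptr) current = current -> left;  // 2. Go left until you can't.
  return current;
}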

Implementation

Here is the code for these rotations with templates for reusability and a partial implementation of the STL interface.

#include <cassert>
#include <algorithm>
#include <iostream>
#include <functional>
#include <ostream>
#include <stack>
#include <string>
#include <utility>

using namespace std;

namespace phillypham {
  namespace avl {
    template <typename T>
    struct node {
      T key;
      node<T>* left;
      node<T>* right;
      size_t subtreeSize;
      size_t height;
    };

    template <typename T>
    int height(node<T>* root) {
      if (root == nullptr) return -1;
      return root -> height;
    }

    template <typename T>
    void recalculateHeight(node<T> *root) {
      if (root == nullptr) return;
      root -> height = max(height(root -> left), height(root -> right)) + 1;
    }

    template <typename T>
    size_t subtreeSize(node<T>* root) {
      if (root == nullptr) return 0;
      return root -> subtreeSize;
    }

    template <typename T>
    void recalculateSubtreeSize(node<T> *root) {
      if (root == nullptr) return;
      root -> subtreeSize = subtreeSize(root -> left) + subtreeSize(root -> right) + 1;
    }

    template <typename T>
    void recalculate(node<T> *root) {
      recalculateHeight(root);
      recalculateSubtreeSize(root);
    }

    template <typename T>
    int balanceFactor(node<T>* root) {
      if (root == nullptr) return 0;
      return height(root -> left) - height(root -> right);
    }

    template <typename T>
    node<T>*& getLeftRef(node<T> *root) {
      return root -> left;    
    }

    template <typename T>
    node<T>*& getRightRef(node<T> *root) {
      return root -> right;
    }

    // Performs a single rotation. For a left rotation (the right-right case),
    // newRootGetter is getRightRef and replacedChildGetter is getLeftRef;
    // for a right rotation (the left-left case), the getters are swapped.
    template <typename T>
    node<T>* rotateSimple(node<T> *root,
                          node<T>*& (*newRootGetter)(node<T>*),
                          node<T>*& (*replacedChildGetter)(node<T>*)) {
      node<T>* newRoot = newRootGetter(root);
      newRootGetter(root) = replacedChildGetter(newRoot);
      replacedChildGetter(newRoot) = root;
      recalculate(replacedChildGetter(newRoot));
      recalculate(newRoot);
      return newRoot;
    }

    // Rotates root's child in place to turn the right-left (or left-right)
    // case into the right-right (or left-left) case.
    template <typename T>
    void swapChildren(node<T> *root,
                      node<T>*& (*childGetter)(node<T>*),
                      node<T>*& (*grandChildGetter)(node<T>*)) {
      node<T>* newChild = grandChildGetter(childGetter(root));
      grandChildGetter(childGetter(root)) = childGetter(newChild);
      childGetter(newChild) = childGetter(root);
      childGetter(root) = newChild;
      recalculate(childGetter(newChild));
      recalculate(newChild);
    }

    template <typename T>
    node<T>* rotateRightRight(node<T>* root) {
      return rotateSimple(root, getRightRef, getLeftRef);
    }

    template <typename T>
    node<T>* rotateLeftLeft(node<T>* root) {
      return rotateSimple(root, getLeftRef, getRightRef);    
    }

    // Rebalances the subtree rooted at root (whose balance factor may be
    // off by 2) and returns the new root of this subtree.
    template <typename T>
    node<T>* rotate(node<T>* root)  {
      int bF = balanceFactor(root);
      if (-1 <= bF && bF <= 1) return root;
      if (bF < -1) { // right side is too heavy
        assert(root -> right != nullptr);
        if (balanceFactor(root -> right) != 1) {
          return rotateRightRight(root);
        } else { // right left case
          swapChildren(root, getRightRef, getLeftRef);
          return rotate(root);
        }
      } else { // left side is too heavy
        assert(root -> left != nullptr);
        // left left case
        if (balanceFactor(root -> left) != -1) {
          return rotateLeftLeft(root);
        } else { // left right case
          swapChildren(root, getLeftRef, getRightRef);
          return rotate(root);
        }
      }
    }

    template <typename T, typename cmp_fn = less<T>>
    node<T>* insert(node<T>* root, T key, const cmp_fn &comparator = cmp_fn()) {
      if (root == nullptr) {
        node<T>* newRoot = new node<T>();
        newRoot -> key = key;
        newRoot -> left = nullptr;
        newRoot -> right = nullptr;
        newRoot -> height = 0;
        newRoot -> subtreeSize = 1;
        return newRoot;
      }
      if (comparator(key, root -> key)) { 
        root -> left = insert(root -> left, key, comparator);
      } else if (comparator(root -> key, key)) {
        root -> right = insert(root -> right, key, comparator);
      }
      recalculate(root);
      return rotate(root);
    }

    template <typename T, typename cmp_fn = less<T>>
    node<T>* erase(node<T>* root, T key, const cmp_fn &comparator = cmp_fn()) {
      if (root == nullptr) return root; // nothing to delete
      if (comparator(key, root -> key)) { 
        root -> left = erase(root -> left, key, comparator);
      } else if (comparator(root -> key, key)) {
        root -> right = erase(root -> right, key, comparator);
      } else { // actual work when key == root -> key
        if (root -> right == nullptr) {
          node<T>* newRoot = root -> left;
          delete root;
          return newRoot;
        } else if (root -> left == nullptr) {
          node<T>* newRoot = root -> right;
          delete root;
          return newRoot;
        } else {
          stack<node<T>*> path;
          path.push(root -> right);
          while (path.top() -> left != nullptr) path.push(path.top() -> left);
          // swap with root
          node<T>* newRoot = path.top(); path.pop();
          newRoot -> left = root -> left;
          delete root;

          node<T>* currentNode = newRoot -> right;
          while (!path.empty()) {
            path.top() -> left = currentNode;
            currentNode = path.top(); path.pop();
            recalculate(currentNode);
            currentNode = rotate(currentNode);
          }
          newRoot -> right = currentNode;
          recalculate(newRoot);
          return rotate(newRoot);
        }        
      }
      recalculate(root);
      return rotate(root);
    }

    template <typename T>
    stack<node<T>*> find_by_order(node<T>* root, size_t idx) {
      assert(0 <= idx && idx < subtreeSize(root));
      stack<node<T>*> path;
      path.push(root);
      while (idx != subtreeSize(path.top() -> left)) {
        if (idx < subtreeSize(path.top() -> left)) {
          path.push(path.top() -> left);
        } else {
          idx -= subtreeSize(path.top() -> left) + 1;
          path.push(path.top() -> right);
        }
      }
      return path;
    }

    template <typename T>
    size_t order_of_key(node<T>* root, T key) {
      if (root == nullptr) return 0ULL;
      if (key == root -> key) return subtreeSize(root -> left);
      if (key < root -> key) return order_of_key(root -> left, key);
      return subtreeSize(root -> left) + 1ULL + order_of_key(root -> right, key);
    }

    template <typename T>
    void delete_recursive(node<T>* root) {
      if (root == nullptr) return;
      delete_recursive(root -> left);
      delete_recursive(root -> right);
      delete root;
    }
  }

  template <typename T, typename cmp_fn = less<T>>
  class order_statistic_tree {
  private:
    cmp_fn comparator;
    avl::node<T>* root;
  public:
    class const_iterator: public std::iterator<std::bidirectional_iterator_tag, T> {
      friend const_iterator order_statistic_tree<T, cmp_fn>::cbegin() const;
      friend const_iterator order_statistic_tree<T, cmp_fn>::cend() const;
      friend const_iterator order_statistic_tree<T, cmp_fn>::find_by_order(size_t) const;
    private:
      cmp_fn comparator;
      stack<avl::node<T>*> path;
      avl::node<T>* beginNode;
      avl::node<T>* endNode;
      bool isEnded;

      const_iterator(avl::node<T>* root) {
        setBeginAndEnd(root);
        if (root != nullptr) {
          path.push(root);
        }
      }

      const_iterator(avl::node<T>* root, stack<avl::node<T>*>&& path): path(move(path)) {
        setBeginAndEnd(root);
      }

      void setBeginAndEnd(avl::node<T>* root) {
        beginNode = root;
        while (beginNode != nullptr && beginNode -> left != nullptr)
          beginNode = beginNode -> left;
        endNode = root;        
        while (endNode != nullptr && endNode -> right != nullptr)
          endNode = endNode -> right;
        if (root == nullptr) isEnded = true;
      }

    public:
      bool isBegin() const {
        return path.top() == beginNode;
      }

      bool isEnd() const {
        return isEnded;
      }

      const T& operator*() const {
        return path.top() -> key;
      }

      const bool operator==(const const_iterator &other) const {
        if (path.top() == other.path.top()) {
          return path.top() != endNode || isEnded == other.isEnded;
        }
        return false;
      }

      const bool operator!=(const const_iterator &other) const {
        return !((*this) == other);
      }

      const_iterator& operator--() {
        if (path.empty()) return *this;
        if (path.top() == beginNode) return *this;
        if (path.top() == endNode && isEnded) {
          isEnded = false;
          return *this;
        }
        if (path.top() -> left == nullptr) {
          T& key = path.top() -> key;
          do {
            path.pop();
          } while (comparator(key, path.top() -> key));
        } else {
          path.push(path.top() -> left);
          while (path.top() -> right != nullptr) {
            path.push(path.top() -> right);
          }
        }
        return *this;
      }

      const_iterator& operator++() {
        if (path.empty()) return *this;
        if (path.top() == endNode) {
          isEnded = true;
          return *this;
        }
        if (path.top() -> right == nullptr) {
          T& key = path.top() -> key;
          do {
            path.pop();
          } while (comparator(path.top() -> key, key));
        } else {
          path.push(path.top() -> right);
          while (path.top() -> left != nullptr) {
            path.push(path.top() -> left);
          }
        }
        return *this;
      }
    };

    order_statistic_tree(): root(nullptr) {}

    void insert(T newKey) {
      root = avl::insert(root, newKey, comparator);
    }

    void erase(T key) {
      root = avl::erase(root, key, comparator);
    }

    size_t size() const {
      return subtreeSize(root);
    }

    // 0-based indexing
    const_iterator find_by_order(size_t idx) const {
      return const_iterator(root, move(avl::find_by_order(root, idx)));
    }

    // returns the number of keys strictly less than the given key
    size_t order_of_key(T key) const {
      return avl::order_of_key(root, key);
    }

    ~order_statistic_tree() {
      avl::delete_recursive(root);
    }

    const_iterator cbegin() const{
      const_iterator it = const_iterator(root);
      while (!it.isBegin()) --it;
      return it;
    }

    const_iterator cend() const {
      const_iterator it = const_iterator(root);
      while (!it.isEnd()) ++it;
      return it;
    }
  };
}

Solution

With this data structure, the solution is fairly short and, in fact, identical to my previous solution in Policy-Based Data Structures in C++. Performance-wise, my implementation is quite efficient: according to SPOJ, it differs from the policy-based tree set by only a few hundredths of a second.

int main(int argc, char *argv[]) {
  ios::sync_with_stdio(false); cin.tie(NULL);

  phillypham::order_statistic_tree<int> orderStatisticTree;
  int Q; cin >> Q; // number of queries
  for (int q = 0; q < Q; ++q) {
    char operation; 
    int parameter;
    cin >> operation >> parameter;
    switch (operation) {
      case 'I':
        orderStatisticTree.insert(parameter);
        break;
      case 'D':
        orderStatisticTree.erase(parameter);
        break;
      case 'K':
        if (1 <= parameter && parameter <= orderStatisticTree.size()) {
          cout << *orderStatisticTree.find_by_order(parameter - 1) << '\n';
        } else {
          cout << "invalid\n";
        }
        break;
      case 'C':
        cout << orderStatisticTree.order_of_key(parameter) << '\n';
        break;
    }
  }
  cout << flush;
  return 0;
}


After solving Kay and Snowflake, I wanted to write this post to serve as notes to my future self. There are many approaches to this problem. The official solution describes three different approaches; my approach differs from all of them and has better complexity than all but the last one. My solution also contains a lot of code that I think will be reusable in the future.

This problem involves finding the centroid of a tree, which is a node such that when it is removed, each of the new trees produced has at most half the number of nodes of the original tree. We need to be able to do this for any arbitrary subtree of the tree. Since there are as many as 300,000 subtrees, we need to be a bit clever about how we find the centroid.

First, we can prove that the centroid of any tree is the node with the smallest subtree such that the subtree has size at least half of the original tree. That is, if $T(i)$ is the subtree of node $i$, and $S(i)$ is its size, then the centroid of $T(i)$ is \begin{equation} C(i) = \underset{k \in \{j \in T(i) : 2S(j) - S(i) \geq 0\}}{\arg\min} \left(S(k) - \frac{S(i)}{2}\right). \end{equation}

To see this, first note that the ancestors and descendants of $k = C(i)$ will end up in separate trees. Since $S(k) \geq S(i)/2 \Rightarrow S(i) - S(k) \leq S(i)/2$, all the ancestors will be part of a new tree that is sufficiently small. Now, suppose for a contradiction that some subset of the descendants of $k$ belongs to a tree with more than $S(i)/2$ nodes. This tree is rooted at a child of $k$ and is itself a subtree of $T(i)$, so its root has a subtree of size at least $S(i)/2$ that is strictly smaller than $S(k)$, contradicting the minimality of $S(k)$. As a quick sanity check, consider a path of $5$ nodes rooted at one end: the subtree sizes along the path are $5, 4, 3, 2, 1$, the smallest subtree with size at least $5/2$ has size $3$, and removing that middle node indeed leaves two components of size $2$ each.

Now, that proof wasn't so hard, but actually finding $C(i)$ for each $i$ is tricky. The naive way would be to do an exhaustive search in $T(i)$, but that would give us an $O(NQ)$ solution.

The first observation that we can make to simplify our search is that if we start at $i$ and always follow the child with the largest subtree, the centroid is on that path. Why? Well, at any node, at most one child subtree can have at least $S(i)/2$ nodes; otherwise, the tree would have too many nodes. Since the centroid has a subtree of size at least $S(i)/2$, it must either be the current node or lie in that largest child's subtree.

So to find the centroid, we can start at the leaf of this path and go up until we hit the centroid. That still doesn't sound great though since the path might be very long, and it is still $O(N)$. The key thing to note is that the subtree size is strictly increasing, so we can do a binary search, reducing this step to $O(\log N)$. With a technique called binary lifting described in Least Common Ancestor, we can further reduce this to $O\left(\log\left(\log N\right)\right)$, which is probably not actually necessary, but I did it anyway.

Now, we need to know several statistics for each node. We need the leaf at the end of the path of largest subtrees, and we need the subtree size. Subtree size can be calculated recursively with a depth-first search since the subtree size of $i$ is the sum of the sizes of all the child subtrees plus $1$ for $i$ itself. Thus, we do a post-order traversal to calculate the subtree size and the special leaf. We also need the depth of a node to precompute the ancestors for binary lifting. The depth is computed with a pre-order traversal. In the title picture, the numbers indicate the order of an in-order traversal, the letters correspond to a pre-order traversal, and the roman numerals show the order of a post-order traversal.

Since recursion on large trees leads to stack overflow errors, and usually you can't tell the online judge to increase the stack size, it's always better to use an explicit stack. I quite like my method of doing both the pre-order and post-order traversals with one stack.

/**
 * Does a depth-first search to calculate various statistics.
 * We are given as input a rooted tree in the form each node's
 * children and parent. The statistics returned:
 * ancestors: Each node's ancestors that are a power of 2
 *     edges away.
 * maxPathLeaf: If we continually follow the child that has the
 *     maximum subtree, we'll end up at this leaf.
 * subtreeSize: The size of the node's subtree including itself.
 */
void calculateStatistics(const vector<vector<int>> &children, 
                         const vector<int> &parent,
                         vector<vector<int>> *ancestorsPtr,
                         vector<int> *maxPathLeafPtr,
                         vector<int> *subtreeSizePtr) {
  int N = parent.size();
  if (N == 0) return;
  // depth also serves to keep track of whether we've visited, yet.
  vector<int> depth(N, -1); // -1 indicates that we have not visited.
  vector<int> height(N, 0);
  vector<int> &maxPathLeaf = *maxPathLeafPtr;
  vector<int> &subtreeSize = *subtreeSizePtr;
  stack<int> s; s.push(0); // DFS
  while (!s.empty()) {
    int i = s.top(); s.pop();
    if (depth[i] == -1) {       // pre-order
      s.push(i); // Put it back in the stack so we visit again.
      depth[i] = parent[i] == -1 ? 0: depth[parent[i]] + 1;
      for (int j : children[i]) s.push(j);
    } else {                    // post-order
      int maxSubtreeSize = INT_MIN;
      int maxSubtreeRoot = -1;
      for (int j : children[i]) {
        height[i] = max(height[j] + 1, height[i]);
        subtreeSize[i] += subtreeSize[j];
        if (maxSubtreeSize < subtreeSize[j]) {
          maxSubtreeSize = subtreeSize[j];
          maxSubtreeRoot = j;
        }        
      }
      maxPathLeaf[i] = maxSubtreeRoot == -1 ?
        i : maxPathLeaf[maxSubtreeRoot];
    }   
  }
  // Use binary lifting to calculate a subset of ancestors.
  vector<vector<int>> &ancestors = *ancestorsPtr;
  for (int i = 0; i < N; ++i)
    if (parent[i] != -1) ancestors[i].push_back(parent[i]);
  for (int k = 1; (1 << k) <= height.front(); ++k) {
    for (int i = 0; i < N; ++i) {
      int j = ancestors[i].size();
      if ((1 << j) <= depth[i]) {
        --j;
        ancestors[i].push_back(ancestors[ancestors[i][j]][j]);
      }
    }
  }
}

With these statistics calculated, the rest of our code to answer queries is quite small if we use C++'s built-in implementation of binary search. With upper_bound:

#include <algorithm>
#include <cassert>
#include <climits>
#include <iostream>
#include <iterator>
#include <stack>
#include <vector>

using namespace std;

/**
 * Prints out vector for any type T that supports the << operator.
 */
template<typename T>
void operator<<(ostream &out, const vector<T> &v) {
  copy(v.begin(), v.end(), ostream_iterator<T>(out, "\n"));
}

int findCentroid(int i,
                 const vector<vector<int>> &ancestors,
                 const vector<int> &maxPathLeaf,
                 const vector<int> &subtreeSize) {
  int centroidCandidate = maxPathLeaf[i];
  int maxComponentSize = (subtreeSize[i] + 1)/2;
  while (subtreeSize[centroidCandidate] < maxComponentSize) {
    // Alias the candidate's ancestors.
    const vector<int> &cAncenstors = ancestors[centroidCandidate];
    assert(!cAncenstors.empty());
    // Check the immediate parent first. If this is an exact match, we're done.
    if (subtreeSize[cAncenstors.front()] >= maxComponentSize) {
      centroidCandidate = cAncenstors.front();
    } else {
      // Otherwise, we can approximate the next candidate by searching ancestors that
      // are a power of 2 edges away.
      // Find the index of the first ancestor who has a subtree of size
      // greater than maxComponentSize.
      int centroidIdx =
        upper_bound(cAncenstors.cbegin() + 1, cAncenstors.cend(), maxComponentSize,
                    // If this function evaluates to true, j is an upper bound.
                    [&subtreeSize](int maxComponentSize, int j) -> bool {
                      return maxComponentSize < subtreeSize[j];
                    }) - ancestors[centroidCandidate].cbegin();
      centroidCandidate = cAncenstors[centroidIdx - 1];
    }
  }
  return centroidCandidate;
}

int main(int argc, char *argv[]) {
  ios::sync_with_stdio(false); cin.tie(NULL);
  int N, Q;
  cin >> N >> Q;
  vector<int> parent; parent.reserve(N);
  parent.push_back(-1); // the root has no parent
  vector<vector<int>> children(N);
  for (int i = 1; i < N; ++i) {
    int p; cin >> p; --p; // Convert to 0-indexing.
    children[p].push_back(i);
    parent.push_back(p);
  }
  // ancestors[i][j] will be the (2^j)th ancestor of node i.
  vector<vector<int>> ancestors(N);
  vector<int> maxPathLeaf(N, -1);
  vector<int> subtreeSize(N, 1);
  calculateStatistics(children, parent,
                      &ancestors, &maxPathLeaf, &subtreeSize);

  vector<int> centroids; centroids.reserve(Q);
  for (int q = 0; q < Q; ++q) {
    int v; cin >> v; --v; // Convert to 0-indexing.
    // Convert back to 1-indexing.
    centroids.push_back(findCentroid(v, ancestors, maxPathLeaf, subtreeSize) + 1);
  }
  cout << centroids;
  cout << flush;
  return 0;
}

Of course, we can use lower_bound in findCentroid, too. It's quite baffling to me that the comparison function takes its parameters in the opposite order for lower_bound, though. I always forget how to use these functions, which is why I've decided to write this post.

int findCentroid(int i,
                 const vector<vector<int>> &ancestors,
                 const vector<int> &maxPathLeaf,
                 const vector<int> &subtreeSize) {
  int centroidCandidate = maxPathLeaf[i];
  int maxComponentSize = (subtreeSize[i] + 1)/2;
  while (subtreeSize[centroidCandidate] < maxComponentSize) {
    // Alias the candidate's ancestors.
    const vector<int> &cAncenstors = ancestors[centroidCandidate];
    assert(!cAncenstors.empty());
    // Check the immediate parent first. If this is an exact match, we're done.
    if (subtreeSize[cAncenstors.front()] >= maxComponentSize) {
      centroidCandidate = cAncenstors.front();
    } else {
      // Otherwise, we can approximate the next candidate by searching ancestors that
      // are a power of 2 edges away.
      // Find the index of the first ancestor who has a subtree of size
      // at least maxComponentSize.
      int centroidIdx =
        lower_bound(cAncenstors.cbegin() + 1, cAncenstors.cend(), maxComponentSize,
                    // If this function evaluates to true, j + 1 is a lower bound.
                    [&subtreeSize](int j, int maxComponentSize) -> bool {
                      return subtreeSize[j] < maxComponentSize;
                    }) - ancestors[centroidCandidate].cbegin();      
      if (centroidIdx < cAncenstors.size() &&
          subtreeSize[cAncenstors[centroidIdx]] == maxComponentSize) {
        centroidCandidate = cAncenstors[centroidIdx];
      } else { // We don't have an exact match.
        centroidCandidate = cAncenstors[centroidIdx - 1];
      }
    }
  }
  return centroidCandidate;
}

All in all, we need $O(N\log N)$ time to compute ancestors, and $O\left(Q\log\left(\log N\right)\right)$ time to answer queries, so total complexity is $O\left(N\log N + Q\log\left(\log N\right)\right)$.



Happy New Year, everyone! Let me tell you about the epic New Year's Eve that I had. I got into a fight with the last problem from the December 2016 USA Computing Olympiad contest. I struggled mightily, felt beaten down at times, lost all hope, but I finally overcame. It was a marathon. We started sparring around noon, and I did not vanquish my foe until the final hour of 2017.

Having a long weekend in a strange new city, I've had to get creative with things to do. I decided to tackle some olympiad problems. For those who are not familiar with the competitive math or programming scene, the USAMO and USACO are math and programming contests targeted towards high school students.

So, as an old man, what am I doing whittling away precious hours tackling these problems? I wish that I could say that I was reliving my glory days from high school. But truth be told, I've always been a lackluster student who did the minimal effort necessary. I can't recall ever having written a single line of code in high school, and I maybe solved 2 or 3 AIME problems (10 years later, I can usually do the first 10 with some consistency; the rest are a toss-up). Of course, I never trained for the competitions, so who knows whether I could have done well.

We all have regrets from our youth. For me, I have all the familiar ones: mistreating people, lost friends, not having the best relationship with my parents, losing tennis matches, quitting the violin, and of course, the one girl that got away. However, what I really regret the most was not having pursued math and computer science earlier. I'm not sure why. Even now, 10 years older, it's quite clear that I am not talented enough to have competed in the IMO or IOI: I couldn't really hack it as a mathematician, and you can scroll to the very bottom to see my score of 33.

Despite the lack of talent, I just really love problem solving. In many ways it's become an escape for me when I feel lonely and can't make sense of the world. I can get lost in my own abstract world and forget about my physical needs like sleep or food. On solving a problem, I wake up from my stupor, realize that the world has left me behind, and find everyone suddenly married with kids.

There is such triumph in solving a hard problem. Of course, there are times of struggle and hopelessness. Such is my fledgling sense of self-worth that it depends on my ability to solve abstract problems that have no basis in reality. Thus, I want to write up my solutions to Robotic Cow Herd and 2013 USAMO Problem 2.

Robotic Cow Herd

In the Platinum Division December 2016 contest, there were 3 problems. In contest, I was completely stuck on Lots of Triangles and never had a chance to look at the other 2 problems. This past Friday, I did Team Building in my own time. It took me maybe 3 hours, so I suspect that if I had started with that problem instead, I could have gotten a decent number of points on the board.

Yesterday, I attempted Robotic Cow Herd. I was actually able to solve this problem on my own, but I worked on it on and off over a period of 12 hours, so I definitely wouldn't have scored anything in this case.

My solution is quite different from the given solution, which uses binary search. I did actually consider such a solution but gave it only 5 minutes of thought before abandoning it, far too little time to work out the details. Instead, my solution is quite similar to the one that they describe using a priority queue before saying such a solution wouldn't be feasible. However, if we are careful about how we fill our queue, it can work.

We are charged with assembling $K$ different cows that consist of $N$ components, where each component will have $M$ different types. Each type of component has an associated cost, and cow $A$ is different than cow $B$ if at least one of the components is of a different type.

Of course, we aren't going to try all $M^N$ different cows. It's clear right away that we can take a greedy approach: start with the cheapest cow, and get the next cheapest cow by varying a single component. Since each new cow that we make is based on a previous cow, it's only necessary to store the deltas rather than all $N$ components. Naturally, this gives way to the tree representation shown in the title picture.

Each node is a cow prototype. We start with the cheapest cow as the root, and each child consists of a single delta. The cost of a cow can be had by summing the deltas from the root to the node. Now every prototype gives way to $N$ new possible prototypes. $NK$ is just too much to fit in a priority queue. Hence, the official solution says this approach isn't feasible.

However, if we sort our components in the proper order, we know the next two cheapest cows based off this prototype. Moreover, we have to handle a special case, where instead of a cow just generating children, it also generates a sibling. We sort by increasing deltas. In the given sample data, our base cost is $4$, and our delta matrix (not a true matrix) looks like $$ \begin{pmatrix} 1 & 0 \\ 2 & 1 & 2 & 2\\ 2 & 2 & 5 \end{pmatrix}. $$

Also, we add our microcontrollers in increasing order to avoid double counting. Now, if we have just added microcontroller $(i,j)$, the cheapest thing to do is to change it to $(i + 1, 0)$ or $(i, j + 1)$. But what about the case where we want to skip $(i+1,0)$ and add $(i + 2, 0), (i+3,0),\ldots$? Since we're lazy about pushing into our priority queue and only add one child at a time, when a child is removed, we add its sibling in this special case where $j = 0$.

Parent-child relationships are marked with solid lines. Creation of a node is marked with a red arrow. Nodes still in the queue are blue. The number before the colon denotes the rank of the cow. In this case, the cost for 10 cows is $$4 + 5 + 5 + 6 + 6 + 7 + 7 + 7 + 7 + 7 = 61.$$

Dashed lines represent the special case of creating a sibling. The tuple $(1,-,0)$ means we used microcontrollers $(0,1)$ and $(2,0)$. For component $1$, we decided to just use cheapest one. Here's the code.

import java.io.*;
import java.util.*;

public class roboherd {
    /**
     * Microcontrollers are stored in a matrix-like structure with rows and columns.
     * Use row-first ordering.
     */
    private static class Position implements Comparable<Position> {
        private int row;
        private int column;

        public Position(int row, int column) {
            this.row = row; this.column = column;            
        }

        public int getRow() { return this.row; }

        public int getColumn() { return this.column; }

        public int compareTo(Position other) {
            if (this.getRow() != other.getRow()) return this.getRow() - other.getRow();
            return this.getColumn() - other.getColumn();
        }

        @Override
        public String toString() {
            return "{" + this.getRow() + ", " + this.getColumn() + "}";
        }
    }

    /**
     * Stores the current cost of a cow along with the last microcontroller added. To save space,
     * states only store the last delta and obscures the rest of the state in the cost variable.
     */
    private static class MicrocontrollerState implements Comparable<MicrocontrollerState> {
        private long cost;
        private Position position; // the position of the last microcontroller added

        public MicrocontrollerState(long cost, Position position) {
            this.cost = cost;
            this.position = position;            
        }

        public long getCost() { return this.cost; }

        public Position getPosition() { return this.position; }

        public int compareTo(MicrocontrollerState other) {
            if (this.getCost() != other.getCost()) return (int) Math.signum(this.getCost() - other.getCost());
            return this.position.compareTo(other.position);
        }
    }


    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader("roboherd.in"));
        PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("roboherd.out")));
        StringTokenizer st = new StringTokenizer(in.readLine());
        int N = Integer.parseInt(st.nextToken()); // number of microcontrollers per cow
        int K = Integer.parseInt(st.nextToken()); // number of cows to make
        assert 1 <= N && N <= 100000 : N;
        assert 1 <= K && K <= 100000 : K;
        ArrayList<int[]> P = new ArrayList<int[]>(N); // microcontroller cost deltas
        long minCost = 0; // min cost to make all the cows wanted
        for (int i = 0; i < N; ++i) {
            st = new StringTokenizer(in.readLine());
            int M = Integer.parseInt(st.nextToken());
            assert 1 <= M && M <= 10 : M;
            int[] costs = new int[M];
            for (int j = 0; j < M; ++j) {
                costs[j] = Integer.parseInt(st.nextToken());
                assert 1 <= costs[j] && costs[j] <= 100000000 : costs[j];
            }
            Arrays.sort(costs);
            minCost += costs[0];
            // Store deltas, which will only exist if there is more than one type of microcontroller.
            if (M > 1) {
                int[] costDeltas = new int[M - 1];                
                for (int j = M - 2; j >= 0; --j) costDeltas[j] = costs[j + 1] - costs[j];
                P.add(costDeltas);
            }
        }
        in.close();
        N = P.size(); // redefine N to exclude microcontrollers of only 1 type        
        --K; // we already have our first cow
        // Identify the next best configuration in log(K) time.
        PriorityQueue<MicrocontrollerState> pq = new PriorityQueue<MicrocontrollerState>(3*K);
        // Order the microcontrollers in such a way that if we were to vary the prototype by only 1,
        // the best way to do would be to pick microcontrollers in the order 
        // (0,0), (0,1),...,(0,M_0-2),(1,0),...,(1,M_1-2),...,(N-1,0),...,(N-1,M_{N-1}-2)
        Collections.sort(P, new Comparator<int[]>() {
           @Override
           public int compare(int[] a, int[] b) {
               for (int j = 0; j < Math.min(a.length, b.length); ++j)
                 if (a[j] != b[j]) return a[j] - b[j];
               return a.length - b.length;
           }
        });
        pq.add(new MicrocontrollerState(minCost + P.get(0)[0], new Position(0, 0)));
        // Imagine constructing a tree with K nodes, where the root is the cheapest cow. Each node contains
        // the delta from its parent. The next cheapest cow can always be had by taking an existing node on
        // the tree and varying a single microcontroller.
        for (; K > 0; --K) {
            MicrocontrollerState currentState = pq.remove(); // get the next best cow prototype.
            long currentCost = currentState.getCost();
            minCost += currentCost; 
            int i = currentState.getPosition().getRow();
            int j = currentState.getPosition().getColumn();            
            // Our invariant to avoid double counting is to only add microcontrollers with "greater" position.
            // Given a prototype, from our ordering, the best way to vary a single microcontroller is replace
            // it with (i,j + 1) or add (i + 1, 0). 
            if (j + 1 < P.get(i).length) {
                pq.add(new MicrocontrollerState(currentCost + P.get(i)[j + 1], new Position(i, j + 1)));
            }
            if (i + 1 < N) {
                // Account for the special case, where we just use the cheapest version of type i microcontrollers.
                // Thus, we remove i and add i + 1. This is better than preemptively filling the priority queue.
                if (j == 0) pq.add(new MicrocontrollerState(
                        currentCost - P.get(i)[j] + P.get(i + 1)[0], new Position(i + 1, 0)));
                pq.add(new MicrocontrollerState(currentCost + P.get(i + 1)[0], new Position(i + 1, 0)));
            }
        }        
        out.println(minCost);
        out.close();
    }
}

Sorting is $O(NM\log N)$. Polling from the priority queue is $O(K\log K)$ since each node will at most generate 3 additional nodes to put in the priority queue. So, total running time is $O(NM\log N + K\log K)$.

2013 USAMO Problem 2

Math has become a bit painful for me. While it was my first love, I have to admit that a bachelor's and master's degree later, I'm a failed mathematician. I've recently overcome my disappointment and decided to persist in learning and practicing math despite my lack of talent. This is the first USAMO problem that I've been able to solve, which I did on Friday. Here's the problem.

For a positive integer $n\geq 3$ plot $n$ equally spaced points around a circle. Label one of them $A$, and place a marker at $A$. One may move the marker forward in a clockwise direction to either the next point or the point after that. Hence there are a total of $2n$ distinct moves available; two from each point. Let $a_n$ count the number of ways to advance around the circle exactly twice, beginning and ending at $A$, without repeating a move. Prove that $a_{n-1}+a_n=2^n$ for all $n\geq 4$.

The solution on the AOPS wiki uses tiling. I use a different strategy that leads to the same result.

Let the points on the circle be $P_1,P_2, \ldots,P_n$. First, we prove that each point on the circle is visited either $1$ or $2$ times, except for $A = P_1$, which can be visited $3$ times since it's our starting and ending point. It's clear that $2$ times is an upper bound for the other points. Suppose a point is never visited, though. We can only move in increments of $1$ and $2$, so if $P_k$ was never visited, we must have made a move of $2$ steps from $P_{k-1}$ twice, which is not allowed.

In this way, we can index our different paths by tuples $(m_1,m_2,\ldots,m_n)$, where $m_i$ is which move we make the first time that we visit $P_i$, so $m_i \in \{1,2\}$. Since moves have to be distinct, the second move is determined by the first move. Thus, we have $2^n$ possible paths.

Here are examples of such paths.

Both paths are valid in the sense that no move is repeated. However, we only count the one on the left since after two cycles we must return to $P_1$.

The path on the left is $(1,2,2,2,1)$, which is valid since we end up at $A = P_1$. The path on the right is $(1,1,1,1,1)$, which is invalid since we miss $A = P_1$ the second time. The first step from a point is black, and the second step is blue. The edge labels are the order in which the edges are traversed.

Now, given all the possible paths with distinct moves for a circle with $n - 1$ points, we can generate all the possible paths for a circle with $n$ points by appending a $1$ or a $2$ to the $n - 1$ paths if we consider their representation as a vector of length $n - 1$ of $1$s and $2$s. In this way, the previous $2^{n-1}$ paths become $2^n$ paths.

Now, we can attack the problem in a case-wise manner.

  1. Consider an invalid path, $(m_1,m_2,\ldots,m_{n-1})$. All these paths must land at $P_1$ after the first cycle around the circle. Why? Since the path is invalid, we touch $P_{n-1}$ in the second cycle and jump over $P_1$ by moving $2$ steps. Thus, if we touched $P_{n-1}$ in the first cycle, we moved from it to $P_1$ since moves must be distinct. If the first time that we touch $P_{n-1}$ is in the second cycle, then we jumped over it in the first cycle by moving $P_{n-2} \rightarrow P_1$.
    1. Make it $(m_1,m_2,\ldots,m_{n-1}, 1)$. This path is now valid. $P_n$ is now where $P_1$ would have been in the first cycle, so we hit $P_n$ and move to $P_1$. Then, we continue as we normally did. Instead of ending like $P_{n-1} \rightarrow P_2$ by jumping over $P_1$, we jump over $P_n$ instead, so we end up making the move $P_{n-1} \rightarrow P_1$ at the end.
    2. Make it $(m_1,m_2,\ldots,m_{n-1}, 2)$. This path is now valid. This case is easier. We again touch $P_n$ in the first cycle. Thus, next time we hit $P_n$, we'll make the move $P_n \rightarrow P_1$ since we must make distinct moves. If we don't hit $P_n$ again, that means we jumped $2$ from $P_{n-1}$, which means that we made the move $P_{n - 1} \rightarrow P_1$.
  2. Consider an existing valid path, now, $(m_1,m_2,\ldots,m_{n-1})$. There are $a_{n-1}$ of these.

    1. Let it be a path where we touch $P_1$ $3$ times.
      1. Make it $(m_1,m_2,\ldots,m_{n-1}, 1)$. This path is invalid. $P_n$ will be where $P_1$ was in the first cycle. So, we'll make the move $P_n \rightarrow P_1$ and continue with the same sequence of moves as before. But instead of landing at $P_1$ when the second cycle ends, we'll land at $P_n$, and jump over $P_1$ by making the move $P_n \rightarrow P_2$.
      2. Make it $(m_1,m_2,\ldots,m_{n-1}, 2)$. This path is valid. Again, we'll touch $P_n$ in the first cycle, so the next time that we hit $P_n$, we'll move to $P_1$. If we don't touch $P_n$ again, we jump over it onto $P_1$, anyway, by moving $P_{n-1} \rightarrow P_1$.
    2. Let it be a path where we touch $P_1$ $2$ times.

      1. Make it $(m_1,m_2,\ldots,m_{n-1}, 1)$. This path is valid. Instead of jumping over $P_1$ at the end of the first cycle, we'll be jumping over $P_n$. We must touch $P_n$, eventually, so from there, we'll make the move $P_n \rightarrow P_1$.
      2. Make it $(m_1,m_2,\ldots,m_{n-1}, 2)$. This path is invalid. We have the same situation where we skip $P_n$ the first time. Then, we'll have to end up at $P_n$ the second time and make the move $P_{n} \rightarrow P_2$.

      In either case, old valid paths lead to $1$ new valid path and $1$ new invalid path.

Thus, we have that $a_n = 2^n - a_{n-1} \Rightarrow \boxed{a_{n - 1} + a_n = 2^n}$ for $n \geq 4$ since old invalid paths lead to $2$ new valid paths and old valid paths lead to $1$ new valid path. And actually, this proof works when $n \geq 3$ even though the problem only asks for $n \geq 4$. Since we have $P_{n-2} \rightarrow P_1$ at one point in the proof, anything with smaller $n$ is nonsense.
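
As a sanity check on the identity (my own verification, not part of the proof), here is a quick brute force that counts $a_n$ for small $n$ by trying every sequence of moves directly and then compares $a_{n-1} + a_n$ with $2^n$.

#include <iostream>
#include <vector>

using namespace std;

// Counts paths that go around an n-point circle exactly twice, starting and
// ending at point 0, moving 1 or 2 points clockwise each time, and never
// repeating a move, where a move is a (starting point, step size) pair.
long long countPaths(int n, int traveled, vector<vector<bool>> &usedMoves) {
  if (traveled == 2*n) return 1; // back at A after exactly two cycles
  long long total = 0;
  int p = traveled % n; // current point
  for (int step = 1; step <= 2; ++step) {
    if (traveled + step > 2*n) continue;  // would jump over A at the very end
    if (usedMoves[p][step - 1]) continue; // this move was already made
    usedMoves[p][step - 1] = true;
    total += countPaths(n, traveled + step, usedMoves);
    usedMoves[p][step - 1] = false;
  }
  return total;
}

int main() {
  vector<long long> a(12, 0);
  for (int n = 3; n <= 11; ++n) {
    vector<vector<bool>> usedMoves(n, vector<bool>(2, false));
    a[n] = countPaths(n, 0, usedMoves);
  }
  for (int n = 4; n <= 11; ++n) {
    cout << "n = " << n << ": a_{n-1} + a_n = " << a[n - 1] + a[n]
         << ", 2^n = " << (1LL << n) << '\n';
  }
  return 0;
}

For each $n$ printed, the two values should agree.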

Yay, 2017!



Certain problems in competitive programming call for more advanced data structures than those built into Java's or C++'s standard libraries. Two examples are an order statistic tree and a priority queue that lets you modify priorities. It's questionable whether these implementations are useful outside of competitive programming since you could just use Boost.

Order Statistic Tree

Consider the problem ORDERSET. An order statistic tree trivially solves this problem. And actually, implementing an order statistic tree is not so difficult. You can find the implementation here. Basically, you have a node invariant

operator()(node_iterator node_it, node_const_iterator end_nd_it) const {
  node_iterator l_it = node_it.get_l_child();
  const size_type l_rank = (l_it == end_nd_it) ? 0 : l_it.get_metadata();

  node_iterator r_it = node_it.get_r_child();
  const size_type r_rank = (r_it == end_nd_it) ? 0 : r_it.get_metadata();

  const_cast<metadata_reference>(node_it.get_metadata())= 1 + l_rank + r_rank;
}

where each node contains a count of nodes in its subtree. Every time you insert a new node or delete a node, you can maintain the invariant in $O(\log N)$ time by bubbling up to the root.

With this extra data in each node, we can implement two new methods, (1) find_by_order and (2) order_of_key. find_by_order takes a nonnegative integer as an argument and returns the node corresponding to that index, where our data is kept sorted and we use $0$-based indexing.

find_by_order(size_type order) {
  node_iterator it = node_begin();
  node_iterator end_it = node_end();

  while (it != end_it) {
    node_iterator l_it = it.get_l_child();
    const size_type o = (l_it == end_it)? 0 : l_it.get_metadata();

    if (order == o) {
      return *it;
    } else if (order < o) {
      it = l_it;
    } else {
      order -= o + 1;
      it = it.get_r_child();
    }
  }

  return base_type::end_iterator();
}

It works recursively like this. Call the index we're trying to find $k$. Let $l$ be the number of nodes in the left subtree.

  • $k = l$: If you're trying to find the $k$th-indexed element, then there will be $k$ nodes to your left, so if the left child has $k$ elements in its subtree, you're done.
  • $k < l$: The $k$-indexed element is in the left subtree, so replace the root with the left child.
  • $k > l$: The $k$-indexed element is in the right subtree. It's equivalent to looking for the $(k - l - 1)$-indexed element in the right subtree: we subtract away all the nodes in the left subtree and the root and replace the root with the right child.

order_of_key takes whatever type is stored in the nodes as an argument. These types are comparable, so it will return the index of the smallest element that is greater than or equal to the argument, that is, the least upper bound.

order_of_key(key_const_reference r_key) const {
  node_const_iterator it = node_begin();
  node_const_iterator end_it = node_end();

  const cmp_fn& r_cmp_fn = const_cast<PB_DS_CLASS_C_DEC*>(this)->get_cmp_fn();
  size_type ord = 0;
  while (it != end_it) {
    node_const_iterator l_it = it.get_l_child();

    if (r_cmp_fn(r_key, this->extract_key(*(*it)))) {
      it = l_it;
    } else if (r_cmp_fn(this->extract_key(*(*it)), r_key)) {
      ord += (l_it == end_it)? 1 : 1 + l_it.get_metadata();
      it = it.get_r_child();
    } else {
      ord += (l_it == end_it)? 0 : l_it.get_metadata();
      it = end_it;
    }
  }
  return ord;
}

This is a simple tree traversal, where we keep track of the order as we traverse the tree. Every time we go down the right branch, we add the size of the left subtree plus $1$ for the current node. If we find a node that is equal to our key, we add just the size of the left subtree.

While not entirely trivial, one could write this code during a contest. But what happens when we need a balanced tree? Both Java's TreeSet and C++'s set are typically implemented with red-black trees, but their APIs are such that the trees are not easily extensible. Here's where Policy-Based Data Structures come into play. They have a mechanism to create a node update policy, so we can keep track of metadata like the number of nodes in a subtree. Conveniently, tree_order_statistics_node_update has already been written for us. Now, our problem can be solved quite easily; I only have to make some adjustments for the $0$-indexing. Here's the code.

#include <functional>
#include <iostream>

#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>

using namespace std;

namespace phillypham {
  template<typename T,
           typename cmp_fn = less<T>>
  using order_statistic_tree =
    __gnu_pbds::tree<T, 
                     __gnu_pbds::null_type,
                     cmp_fn,
                     __gnu_pbds::rb_tree_tag,
                     __gnu_pbds::tree_order_statistics_node_update>;
}

int main(int argc, char *argv[]) {
  ios::sync_with_stdio(false); cin.tie(NULL);

  int Q; cin >> Q; // number of queries

  phillypham::order_statistic_tree<int> orderStatisticTree;
  for (int q = 0; q < Q; ++q) {
    char operation; 
    int parameter;
    cin >> operation >> parameter;
    switch (operation) {
      case 'I':
        orderStatisticTree.insert(parameter);
        break;
      case 'D':
        orderStatisticTree.erase(parameter);
        break;
      case 'K':
        if (1 <= parameter && parameter <= orderStatisticTree.size()) {
          cout << *orderStatisticTree.find_by_order(parameter - 1) << '\n';
        } else {
          cout << "invalid\n";
        }
        break;
      case 'C':
        cout << orderStatisticTree.order_of_key(parameter) << '\n';
        break;
    }
  }
  cout << flush;
  return 0;
}

Dijkstra's algorithm and Priority Queues

Consider the problem SHPATH. Shortest path means Dijkstra's algorithm, of course. Optimal versions of Dijkstra's algorithm call for exotic data structures like Fibonacci heaps, which let us achieve a running time of $O(E + V\log V)$, where $E$ is the number of edges and $V$ is the number of vertices. Even for a fairly basic implementation like the one in the classic CLRS, we need more than what the standard priority queues in Java and C++ offer: either we implement our own priority queue or we use a slow $O(V^2)$ version of Dijkstra's algorithm.

Thanks to policy-based data structures, it's easy to use a fancy heap for our priority queue.

#include <algorithm>
#include <climits>
#include <exception>
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

#include <ext/pb_ds/priority_queue.hpp>

using namespace std;

namespace phillypham {
  template<typename T,
           typename cmp_fn = less<T>> // max queue by default
  class priority_queue {
   private:
    struct pq_cmp_fn {
      bool operator()(const pair<size_t, T> &a, const pair<size_t, T> &b) const {
        return cmp_fn()(a.second, b.second);
      }
    };
    typedef typename __gnu_pbds::priority_queue<pair<size_t, T>,
                                                pq_cmp_fn,
                                                __gnu_pbds::pairing_heap_tag> pq_t;
    typedef typename pq_t::point_iterator pq_iterator;
    pq_t pq;
    vector<pq_iterator> map;

   public:
    class entry {
     private:
      size_t _key;
      T _value;      

     public:
      entry(size_t key, T value) : _key(key), _value(value) {}

      size_t key() const { return _key; }

      T value() const { return _value; }
    };

    priority_queue() {}

    priority_queue(int N) : map(N, nullptr) {}

    size_t size() const {
      return pq.size();
    }

    size_t capacity() const {
      return map.size();
    }

    bool empty() const {
      return pq.empty();
    }

    /** 
     * Usually, in C++ this returns a reference that you can modify.
     * I choose not to allow this because it's dangerous, however.
     */
    T operator[](size_t key) const {
      return map[key] -> second;
    }

    T at(size_t key) const {
      if (map.at(key) == nullptr) throw out_of_range("Key does not exist!");
      return map.at(key) -> second;
    }

    entry top() const {
      return entry(pq.top().first, pq.top().second);
    }

    int count(size_t key) const {
      if (key < 0 || key >= map.size() || map[key] == nullptr) return 0;
      return 1;      
    }

    pq_iterator push(size_t key, T value) {
      // could be really inefficient if there's a lot of resizing going on
      if (key >= map.size()) map.resize(key + 1, nullptr); 
      if (key < 0) throw out_of_range("The key must be nonnegative!");
      if (map[key] != nullptr) throw logic_error("There can only be 1 value per key!");
      map[key] = pq.push(make_pair(key, value));
      return map[key];
    }

    void modify(size_t key, T value) {
      pq.modify(map[key], make_pair(key, value));
    }   

    void pop() {
      if (empty()) throw logic_error("The priority queue is empty!");
      map[pq.top().first] = nullptr;
      pq.pop();

    }

    void erase(size_t key) {
      if (map[key] == nullptr) throw out_of_range("Key does not exist!");
      pq.erase(map[key]);
      map[key] = nullptr;      
    }

    void clear() {
      pq.clear();
      fill(map.begin(), map.end(), nullptr);
    }    
  };
}

By replacing __gnu_pbds::pairing_heap_tag with __gnu_pbds::binomial_heap_tag, __gnu_pbds::rc_binomial_heap_tag, or __gnu_pbds::thin_heap_tag, we can easily try different types of heaps. See the priority_queue interface. Unfortunately, we cannot try the binary heap because modifying elements invalidates iterators. Conveniently enough, the library allows us to check this condition dynamically.

#include <iostream>
#include <functional>
#include <ext/pb_ds/priority_queue.hpp>

using namespace std;

int main(int argc, char *argv[]) {
  __gnu_pbds::priority_queue<int, less<int>, __gnu_pbds::binary_heap_tag> pq;
  cout << (typeid(__gnu_pbds::container_traits<decltype(pq)>::invalidation_guarantee) == typeid(__gnu_pbds::basic_invalidation_guarantee)) << endl;
  // prints 1
  cout << (typeid(__gnu_pbds::container_traits<__gnu_pbds::priority_queue<int, less<int>, __gnu_pbds::binary_heap_tag>>::invalidation_guarantee) == typeid(__gnu_pbds::basic_invalidation_guarantee)) << endl;
  // prints 1
  return 0;
}

See the documentation for basic_invalidation_guarantee. We need at least point_invalidation_guarantee for the below code to work since we keep a vector of iterators in our phillypham::priority_queue.

vector<int> findShortestDistance(const vector<vector<pair<int, int>>> &adjacencyList,
                                 int sourceIdx) {
  int N = adjacencyList.size();
  phillypham::priority_queue<int, greater<int>> minDistancePriorityQueue(N);  
  for (int i = 0; i < N; ++i) {
    minDistancePriorityQueue.push(i, i == sourceIdx ? 0 : INT_MAX);
  }
  vector<int> distances(N, INT_MAX);
  while (!minDistancePriorityQueue.empty()) {
    phillypham::priority_queue<int, greater<int>>::entry minDistanceVertex = 
      minDistancePriorityQueue.top();
    minDistancePriorityQueue.pop();
    distances[minDistanceVertex.key()] = minDistanceVertex.value();
    for (pair<int, int> nextVertex : adjacencyList[minDistanceVertex.key()]) {
      int newDistance = minDistanceVertex.value() + nextVertex.second;
      if (minDistancePriorityQueue.count(nextVertex.first) &&
          minDistancePriorityQueue[nextVertex.first] > newDistance) {
        minDistancePriorityQueue.modify(nextVertex.first, newDistance);
      }
    }
  }  
  return distances;
}

Fear not, I ended up using my own binary heap that I wrote from Dijkstra, Paths, Hashing, and the Chinese Remainder Theorem. Now, we can benchmark all these different implementations against each other.

int main(int argc, char *argv[]) {
  ios::sync_with_stdio(false); cin.tie(NULL);
  int T; cin >> T;              // number of tests
  for (int t = 0; t < T; ++t) {
    int N; cin >> N;            // number of nodes
    // read input
    unordered_map<string, int> cityIdx;
    vector<vector<pair<int, int>>> adjacencyList; adjacencyList.reserve(N);
    for (int i = 0; i < N; ++i) {
      string city;
      cin >> city;
      cityIdx[city] = i;
      int M; cin >> M;
      adjacencyList.emplace_back();
      for (int j = 0; j < M; ++j) {
        int neighborIdx, cost;
        cin >> neighborIdx >> cost;
        --neighborIdx; // convert to 0-based indexing
        adjacencyList.back().emplace_back(neighborIdx, cost);
      }
    }
    // compute output
    int R; cin >> R;            // number of subtests
    for (int r = 0; r < R; ++r) {
      string sourceCity, targetCity;
      cin >> sourceCity >> targetCity;
      int sourceIdx = cityIdx[sourceCity];
      int targetIdx = cityIdx[targetCity];
      vector<int> distances = findShortestDistance(adjacencyList, sourceIdx);
      cout << distances[targetIdx] << '\n';
    }  
  }
  cout << flush;
  return 0;
}

I find that the policy-based data structures are much faster than my own hand-written priority queue.

Algorithm                     Time (seconds)
PBDS Pairing Heap, Lazy Push  0.41
PBDS Pairing Heap             0.44
PBDS Binomial Heap            0.48
PBDS Thin Heap                0.54
PBDS RC Binomial Heap         0.60
Personal Binary Heap          0.72

Lazy push is a small optimization, where we only add vertices to the heap as we encounter them. We save a few hundredths of a second at the expense of increased code complexity.

vector<int> findShortestDistance(const vector<vector<pair<int, int>>> &adjacencyList,
                                 int sourceIdx) {
  int N = adjacencyList.size();
  vector<int> distances(N, INT_MAX);
  phillypham::priority_queue<int, greater<int>> minDistancePriorityQueue(N);
  minDistancePriorityQueue.push(sourceIdx, 0);
  while (!minDistancePriorityQueue.empty()) {
    phillypham::priority_queue<int, greater<int>>::entry minDistanceVertex = 
      minDistancePriorityQueue.top();
    minDistancePriorityQueue.pop();
    distances[minDistanceVertex.key()] = minDistanceVertex.value();
    for (pair<int, int> nextVertex : adjacencyList[minDistanceVertex.key()]) {
      int newDistance = minDistanceVertex.value() + nextVertex.second;
      if (distances[nextVertex.first] == INT_MAX) {
        minDistancePriorityQueue.push(nextVertex.first, newDistance);
        distances[nextVertex.first] = newDistance;
      } else if (minDistancePriorityQueue.count(nextVertex.first) &&
                 minDistancePriorityQueue[nextVertex.first] > newDistance) {
        minDistancePriorityQueue.modify(nextVertex.first, newDistance);
        distances[nextVertex.first] = newDistance;
      }
    }
  }  
  return distances;
}

All in all, I found learning to use these data structures quite fun. It's nice to have such easy access to powerful data structures. I also learned a lot about C++ templating on the way.



I rarely apply anything that I've learned from competitive programming to an actual project, but I finally got the chance with Snapstream Searcher. While computing daily correlations between countries (see Country Relationships), we noticed a big spike in Austria and the strength of its relationship with France as seen here. It turns out Wendy's ran an ad with this text.

It's gonna be a tough blow. Don't think about Wendy's spicy chicken. Don't do it. Problem is, not thinking about that spicy goodness makes you think about it even more. So think of something else. Like countries in Europe. France, Austria, hung-a-ry. Hungry for spicy chicken. See, there's no escaping it. Pffft. Who falls for this stuff? And don't forget, kids get hun-gar-y too.

Since commercials are played frequently across a variety of non-related programs, we started seeing some weird results.

My professor Robin Pemantle has this idea of looking at the surrounding text and only counting matches that had different surrounding text. I formalized this notion into something we call contexts. Suppose that we're searching for string $S$. Let $L$ be the $K$ characters to the left and $R$ the $K$ characters to the right. Thus, a match in a program is a 3-tuple $(S,L,R)$. We define the following equivalence relation: given $(S,L,R)$ and $(S^\prime,L^\prime,R^\prime)$, \begin{equation} (S,L,R) \sim (S^\prime,L^\prime,R^\prime) \Leftrightarrow \left(S = S^\prime\right) \wedge \left(\left(L = L^\prime\right) \vee \left(R = R^\prime\right)\right), \end{equation} so we only count a match as new if and only if both the $K$ characters to the left and the $K$ characters to the right of the new match differ from all existing matches.
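As a concrete illustration of this counting rule, here's a minimal sketch that counts contexts for a single search string using plain string comparisons. This is not the Snapstream Searcher code; countContexts and its arguments are names I made up for this example, and I assume we remember the contexts of every match, whether or not it was counted as new.

#include <algorithm>
#include <set>
#include <string>
#include <vector>

using namespace std;

// Counts the contexts of a single search string, given the starting positions
// of its matches in the text. A match is counted only if both its left context
// and its right context differ from those of every match seen so far.
int countContexts(const string &text, const vector<int> &matchPositions,
                  int matchLength, int K) {
  set<string> seenLeft, seenRight;
  int contexts = 0;
  for (int pos : matchPositions) {
    int leftBegin = max(0, pos - K);
    string L = text.substr(leftBegin, pos - leftBegin);
    // substr clamps the length, so a match near the end just gets a shorter R
    string R = text.substr(min(text.size(), static_cast<size_t>(pos + matchLength)), K);
    bool isNew = !seenLeft.count(L) && !seenRight.count(R);
    seenLeft.insert(L);
    seenRight.insert(R);
    if (isNew) ++contexts;
  }
  return contexts;
}

Every lookup and insertion here still reads up to $K$ characters of each string, which is exactly the overhead discussed next.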

Now, consider the case where we're searching for a lot of patterns (200+ countries) and $K$ is large. Then, we will have a lot of matches, and for each match, we'll be looking at the $K$ characters to the left and to the right. Suppose we have $M$ matches. Then, we're looking at at least $O(MK)$ extra computation since comparing a new $L$ and $R$ with the old $L^\prime$ and $R^\prime$ requires iterating through up to $K$ characters per comparison.

One solution to this is to compute string hashes and compare integers instead. But what good is this if we need to iterate through $K$ characters to compute this hash? This is where the Rabin-Karp rolling hash comes into play.

Rabin-Karp Rolling Hash

Fix $M$, which will be the number of buckets. Consider a string of length $K$, $S = s_0s_1s_2\cdots s_{K-1}$. Then, for some $A$ relatively prime to $M$, we define our hash function \begin{equation} H(S) = s_0A^{0} + s_1A^{1} + s_2A^2 + \cdots + s_{K-1}A^{K-1} \pmod M, \end{equation} where $s_i$ is converted to an integer according to ASCII.

Now, suppose we have a text $T$ of length $L$. Define \begin{equation} C_j = \sum_{i=0}^j t_iA^{i} \pmod{M}, \end{equation} and let $T_{i:j}$ be the substring $t_it_{i+1}\cdots t_{j}$, so it's inclusive. Then, $C_j = H(T_{0:j})$, and \begin{equation} C_j - C_{i - 1} \equiv t_iA^{i} + t_{i+1}A^{i+1} + \cdots + t_jA^j \pmod M, \end{equation} so we have that \begin{equation} H(T_{i:j}) = t_iA^{0} + t_{i+1}A^{1} + \cdots + t_jA^{j-i} \equiv A^{-i}\left(C_j - C_{i-1}\right) \pmod M. \end{equation} In this way, once we've precomputed the prefix hashes $C_j$, we can compute the hash of any substring with a few arithmetic operations, and the computation time does not depend on the position or length of the substring. Now, there are actually 3 different versions of this algorithm with different running times.

  1. In the first version, $M^2 < 2^{32}$. This allows us to precompute all the modular inverses, so we have an $O(1)$ computation to find the hash of a substring. Also, if $M$ is this small, we never have to worry about overflow with 32-bit integers.
  2. In the second version, an array of size $M$ fits in memory, so we can still precompute all the modular inverses. Thus, we continue to have an $O(1)$ algorithm. Unfortunately, $M$ is large enough that there may be overflow, so we must use 64-bit integers.
  3. Finally, $M$ becomes so large that we cannot fit an array of size $M$ in memory. Then, we have to compute each modular inverse as needed. One way to do this is the extended Euclidean algorithm. If $M$ is prime, we can also use Fermat's little theorem, which gives us that $A^{i}A^{M-1-i} \equiv A^{M-1} \equiv 1 \pmod M,$ so we can find $A^{M-1-i} \pmod{M}$ quickly with some modular exponentiation. Both of these options are $O(\log M).$

Usually, we want to choose $M$ as large as possible to avoid collisions. In our case, a collision means a genuinely new context gets mistaken for one we've already seen, so we miss a context. That's not actually a big deal, so we may be willing to compromise on accuracy for a faster running time.
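Here's a minimal sketch of the first version above: a prime modulus small enough that $M^2 < 2^{32}$, all powers and modular inverses precomputed, and everything done in 32-bit unsigned arithmetic. The class and the constants are my own choices for illustration, not the project's StringHasher.

#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

using namespace std;

// Small-modulus rolling hash: M is prime with M^2 < 2^32, so every product
// fits in a 32-bit unsigned integer, and every A^{-i} (mod M) is precomputed.
class RollingHash {
  static constexpr uint32_t M = 65521;  // prime modulus
  static constexpr uint32_t A = 257;    // base, relatively prime to M
  vector<uint32_t> pow_, inv_, pref_;   // A^i, A^{-i}, and prefix hashes C
  static uint32_t modpow(uint32_t b, uint32_t e) {
    uint32_t r = 1;
    while (e > 0) {
      if (e & 1) r = r * b % M;
      b = b * b % M;
      e >>= 1;
    }
    return r;
  }
public:
  explicit RollingHash(const string &text)
    : pow_(text.size() + 1), inv_(text.size() + 1), pref_(text.size() + 1, 0) {
    uint32_t invA = modpow(A, M - 2);  // Fermat's little theorem since M is prime
    pow_[0] = inv_[0] = 1;
    for (size_t i = 1; i <= text.size(); ++i) {
      pow_[i] = pow_[i - 1] * A % M;
      inv_[i] = inv_[i - 1] * invA % M;
    }
    // pref_[j + 1] stores C_j = t_0*A^0 + t_1*A^1 + ... + t_j*A^j (mod M)
    for (size_t i = 0; i < text.size(); ++i)
      pref_[i + 1] = (pref_[i] + static_cast<unsigned char>(text[i]) * pow_[i]) % M;
  }
  // hash of the inclusive substring T_{i:j} in O(1): A^{-i} * (C_j - C_{i-1})
  uint32_t hash(size_t i, size_t j) const {
    uint32_t diff = (pref_[j + 1] + M - pref_[i]) % M;
    return diff * inv_[i] % M;
  }
};

int main() {
  RollingHash h("think of something else, like countries in europe");
  // "in" appears at indices 2-3, 15-16, and 40-41; equal substrings hash equally
  cout << (h.hash(2, 3) == h.hash(40, 41)) << endl;  // prints 1
  cout << (h.hash(2, 3) == h.hash(15, 16)) << endl;  // prints 1
  return 0;
}

Swapping in a 64-bit type and a larger prime gives the second version; computing each inverse on the fly with modular exponentiation gives the third.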

Application to Snapstream Reader

Now, every time that we encounter a match, the left and right hashes can be quickly computed and compared with existing hashes. However, which version should we choose? We have 4 versions.

  1. No hashing, so this just returns the raw match count
  2. Large modulus, so we cannot cache the modular inverse
  3. Intermediate modulus, so we can cache the modular inverse, but we need to use 64-bit integers
  4. Small modulus, so we cache the modular inverse and use 32-bit integers

We run these different versions with 3 different queries.

  • Query A: {austria} from 2015-8-1 to 2015-8-31
  • Query B: ({united kingdom} + {scotland} + {wales} + ({england} !@ {new england})) from 2015-7-1 to 2015-7-31
  • Query C: ({united states} + {united states of america} + {usa}) @ {mexico} from 2015-9-1 to 2015-9-30

First, we check for collisions. Here are the number of contexts found for the various hashing algorithms and search queries for $K = 51$.

Hashing Version    A    B    C
1                181  847   75
2                 44  332   30
3                 44  331   30
4                 44  331   30

In version 1, since there's no hashing, we get the raw match count. As we'd expect, deduplicating by context greatly reduces the number of matches. Also, there are no collisions until we have a lot of matches (847 raw matches for query B, in this case). Thus, we might be okay with using a smaller modulus if we get a big speed-up since missing 1 context out of 1,000 won't change trends too much.

Here are the benchmark results.

Obviously, all versions of hashing are slower than no hashing. Using a small modulus approximately doubles the time, which makes sense, for we're essentially reading the text twice: once for searching and another time for hashing. Using an intermediate modulus adds another 3 seconds. Having to perform modular exponentiation to compute the modular inverse adds less than a second in the large modulus version. Thus, using 64-bit integers versus 32-bit integers is the major cause of the slowdown.

For this reason, we went with the small modulus version despite the occasional collisions that we encounter. The code can be found on GitHub in the StringHasher class.



It's currently a rainy day up here in Saratoga Springs, NY, so I've decided to write about a problem that introduced me to Fenwick trees, otherwise known as binary indexed trees. It exploits the binary representation of numbers to speed up computation from $O(N)$ to $O(\log N)$ in a similar way that binary lifting does in the Least Common Ancestor Problem.

Consider a vector of elements $X = (x_1,x_2,\ldots,x_N)$. A common problem is to find the sum of a range of elements. Define $S_{jk} = \sum_{i=j}^kx_i$. We want to compute $S_{jk}$ quickly for any $j$ and $k$. The first thing to do is define $F(k) = \sum_{i=1}^kx_i$, where we let $F(0) = 0$. Then, we rewrite $S_{jk} = F(k) - F(j-1)$, so our problem reduces to finding a way to compute $F$ quickly.

If we compute from $X$ directly, computing $F$ is an $O(N)$ operation, but incrementing some $x_i$ is an $O(1)$ operation. On the other hand, we can precompute $F$ with some dynamic programming since $F(k) = x_k + F(k-1)$. In this way, computing $F$ is an $O(1)$ operation, but if we increment $x_i$, we have to update $F(j)$ for all $j \geq i$, which is an $O(N)$ operation. The Fenwick tree makes both of these operations $O(\log N)$.
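To make the trade-off concrete, here's a minimal sketch of the two naive approaches; the struct names are mine, and indexing is $0$-based like the Fenwick tree implementation below.

#include <vector>

using namespace std;

// Approach 1: keep the raw array. Updates are O(1); prefix sums are O(N).
struct RawArray {
  vector<int> x;
  explicit RawArray(int N) : x(N, 0) {}
  void update(int i, int delta) { x[i] += delta; }  // O(1)
  int prefixSum(int k) {                            // O(N)
    int res = 0;
    for (int i = 0; i <= k; ++i) res += x[i];
    return res;
  }
};

// Approach 2: keep precomputed prefix sums. Prefix sums are O(1); updates are O(N).
struct PrefixSums {
  vector<int> F;  // F[k] = x_0 + ... + x_k
  explicit PrefixSums(int N) : F(N, 0) {}
  int prefixSum(int k) { return F[k]; }             // O(1)
  void update(int i, int delta) {                   // O(N)
    for (int j = i; j < static_cast<int>(F.size()); ++j) F[j] += delta;
  }
};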

Fenwick Tree

The key idea of the Fenwick tree is to cache certain $S_{jk}$. Suppose that we wanted to compute $F(n)$. First, we write $$n = d_02^0 + d_12^1 + \cdots + d_m2^m.$$ If we remove all the terms where $d_i = 0$, we rewrite this as $$ n = 2^{i_1} + 2^{i_2} + \cdots + 2^{i_l},~\text{where}~i_1 < i_2 < \cdots < i_l. $$ Let $n_j = \sum_{k = j}^l 2^{i_k}$, and $n_{l + 1} = 0$. Then, we have that $$ F(n) = \sum_{k=1}^{l} S_{n_{k+1}+1,n_k}. $$ For example, for $n = 12 = 2^3 + 2^2$, we first sum the elements in $(8,12]$ and then the elements in $(0,8]$.

We can represent these intervals and nodes in a tree like this. Like a binary heap, the tree is stored as an array. The number before the colon is the index in the array. The number after the colon is the value, which is the sum of $x_i$ for $i$ in the node's interval $(a,b]$. The interval for node $i$ is $(p_i,i]$, where $p_i$ is the parent of node $i$.

Now, suppose $X = (58,62,96,87,9,46,64,54,87,7,51,84,33,69,43)$. Our tree should look like this.

Calculating $F(n)$

Suppose we wanted to calculate $F(13)$. We start at node $13$, and we walk up towards the root adding the values of all the nodes that we visit. In this case, we find that $F(13) = 738$. Writing the nodes visited in their binary representations reveals a curious thing: \begin{align*} 13 &= \left(1101\right)_2 \\ 12 &= \left(1100\right)_2 \\ 8 &= \left(1000\right)_2 \\ 0 &= \left(0000\right)_2. \end{align*} If you look closely, at each step, we simply remove the rightmost bit, so finding the parent node is easy.

Updating a Fenwick Tree

This is a little trickier, but it uses the same idea. Suppose that we want to increment $x_n$ by $\delta$. First, we increase the value of node $n$ by $\delta$. Recall that we can write $$ n = 2^{i_1} + 2^{i_2} + \cdots + 2^{i_l},~\text{where}~i_1 < i_2 < \cdots < i_l. $$ Now if $j < i_1$, node $n + 2^{j}$ is a descendant of node $n$. Thus, the next node we need to update is $n + 2^{i_1}$. We repeat this process of adding the rightmost bit and updating the value of the node until we exceed the capacity of the tree. For instance, if we add $4$ to $x_5$, we'll update the nodes in blue.

Two's Complement

If you read the above carefully, you'll note that we often need to find the rightmost bit. We subtract it when summing and add it when updating. Using the fact that binary numbers are represented with two's complement, there's an elegant trick we can use to make finding the rightmost bit easy.

Consider a 32-bit signed integer with bits $n = b_{31}b_{30}\cdots b_{0}$. For $0 \leq i \leq 30$, $b_i = 1$ indicates a term of $2^i$. If $b_{31} = 0$, then $n$ is positive and $$ n = \sum_{i \in \left\{0 \leq i \leq 30~:~b_i = 1\right\}}2^i. $$ On the other hand if $b_{31} = 1$, we still have the same terms but we subtract $2^{31}$, so $$ n = -2^{31} + \sum_{i \in \left\{0 \leq i \leq 30~:~b_i = 1\right\}}2^i, $$ which makes $n$ negative.

As an example of the result of flipping $b_{31}$, we have \begin{align*} 49 &= (00000000000000000000000000110001)_{2} \\ -2147483599 &= (10000000000000000000000000110001)_{2}. \end{align*}

Now, consider the operation of negation. Fix $x$ to be a nonnegative integer. Let $y$ be such that $-2^{31} + y = -x$, so solving, we find that $$y = -x + 2^{31} = -x + 1 + \sum_{i=0}^{30}2^i.$$ Therefore, $y$ is the positive integer we get by flipping all the bits of $x$ except $b_{31}$ and adding $1$. Making $x$ negative, $-x = -2^{31} + y$ will have $b_{31}$ flipped, too. Using $49$ as an example again, we see that \begin{align*} 49 &= (00000000000000000000000000110001)_{2} \\ -49 &= (11111111111111111111111111001111)_{2}. \end{align*}

This process looks something like this: $$ x = (\cdots 10\cdots0)_2 \xrightarrow{\text{Flip bits}} (\cdots 01\cdots1)_2 \xrightarrow{+1} (\cdots 10\cdots0)_2 = y. $$ In this way, $x$ and $y$ have the same rightmost set bit. $-x$ has all the same bits as $y$ except for $b_{31}$. Thus, $x \land -x$ gives us the rightmost bit.
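Here's a quick sketch to see the trick in action; the numbers are arbitrary, and I print only the low 8 bits for readability.

#include <bitset>
#include <initializer_list>
#include <iostream>

using namespace std;

int main() {
  // x & -x isolates the rightmost set bit, courtesy of two's complement
  for (int x : {49, 12, 13, 8}) {
    cout << x << ": " << bitset<8>(x) << " & " << bitset<8>(-x)
         << " = " << bitset<8>(x & -x) << " (" << (x & -x) << ")" << endl;
  }
  return 0;
}

This is exactly the i & -i expression that appears in the implementation below.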

Fenwick Tree Implementation

Using this trick, the implementation of the Fenwick tree is just a couple dozen lines. My implementation is adapted for an $X$ that is $0$-indexed.

class FenwickTree {
  vector<int> tree;
public:
  FenwickTree(int N) : tree(N + 1, 0) {}
  // sums the elements from 0 to i inclusive
  int sum(int i) {
    if (i < 0) return 0;
    ++i; // use 1-indexing, we're actually summing first i + 1 elements 
    if (i > tree.size() - 1) i = tree.size() - 1;
    int res = 0;
    while (i > 0) {
      res += tree[i];
      i -= (i & -i); // hack to get least bit based on two's complement
    }
    return res;
  }
  // sums the elements from i to j inclusive
  int sum(int i, int j) {
    return sum(j) - sum(i - 1);
  }
  // update counts
  void update(int i, int delta) {
    ++i;  // convert to 1-indexing  
    while (i < tree.size()) {
      tree[i] += delta;
      i += (i & -i);
    }
  }
};
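
As a quick sanity check, here's how the class above reproduces the numbers from the earlier example; the main function is mine, just for illustration.

#include <iostream>
#include <vector>

using namespace std;

// assumes the FenwickTree class above is in scope

int main() {
  vector<int> X{58, 62, 96, 87, 9, 46, 64, 54, 87, 7, 51, 84, 33, 69, 43};
  FenwickTree tree(X.size());
  for (int i = 0; i < X.size(); ++i) tree.update(i, X[i]);
  // the implementation is 0-indexed, so the first 13 elements are indices 0 through 12
  cout << tree.sum(12) << endl;    // prints 738, matching F(13) above
  // increment x_5 (index 4) by 4, as in the update example
  tree.update(4, 4);
  cout << tree.sum(12) << endl;    // prints 742
  cout << tree.sum(5, 9) << endl;  // sum of x_6 through x_10, prints 258
  return 0;
}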

Vika and Segments

The original motivation for me to learn about Fenwick trees was the problem, Vika and Segments. Here's the problem statement:

Vika has an infinite sheet of squared paper. Initially all squares are white. She introduced a two-dimensional coordinate system on this sheet and drew $n$ black horizontal and vertical segments parallel to the coordinate axes. All segments have width equal to $1$ square, which means every segment occupies some set of neighbouring squares situated in one row or one column.

Your task is to calculate the number of painted cells. If a cell was painted more than once, it should be calculated exactly once.

The first thing to do is join together the horizontal lines that overlap and the vertical lines that overlap. The basic idea is to count all the squares that are painted by the horizontal lines. Then, we sort the vertical lines by their x-coordinates and sweep from left to right.

For each vertical line, we count the squares that it covers and subtract out its intersection with the horizontal lines. This is where the Fenwick tree comes into play.

For each vertical line, it will have endpoints $(x,y_1)$ and $(x,y_2)$, where $y_1 < y_2$. As we sweep from left to right, we keep track of which horizontal lines are active. Let $Y$ be an array of $0$s and $1$s. We set $Y[y] = 1$ when we encounter a horizontal line and $Y[y] = 0$ when the horizontal line ends. Every time that we encounter a vertical line, we'll want to compute $\sum_{y = y_1}^{y_2}Y[y]$, which we can do quickly with the Fenwick tree.

Now, the range of possible coordinates is large, so there are some details with coordinate compression, but I believe the comments in the code are clear enough.

struct LineEndpoint {
  int x, y;
  bool start, end;
  LineEndpoint(int x, int y, bool start) : x(x), y(y), start(start), end(!start)
  {}
};

void joinLines(map<int, vector<pair<int, int>>> &lines) {
  for (map<int, vector<pair<int, int>>>::iterator lineSegments = lines.begin();
       lineSegments != lines.end(); ++lineSegments) {
    sort((lineSegments -> second).begin(), (lineSegments -> second).end());
    vector<pair<int, int>> newLineSegments;
    newLineSegments.push_back((lineSegments -> second).front());
    for (int i = 1; i < (lineSegments -> second).size(); ++i) {
      if (newLineSegments.back().second + 1 >= (lineSegments -> second)[i].first) { // join line segment
        // make line as large as possible
        newLineSegments.back().second = max((lineSegments -> second)[i].second, newLineSegments.back().second);
      } else { // start a new segment
        newLineSegments.push_back((lineSegments -> second)[i]);
      }
    }
    (lineSegments -> second).swap(newLineSegments);
  }
}

int main(int argc, char *argv[]) {
  ios::sync_with_stdio(false); cin.tie(NULL);
  int N; cin >> N; // number of segments
  map<int, vector<pair<int, int>>> horizontalLines; // index by y coordinate
  map<int, vector<pair<int, int>>> verticalLines; // index by x coordinate
  for (int n = 0; n < N; ++n) { // go through segements
    int x1, y1, x2, y2;
    cin >> x1 >> y1 >> x2 >> y2;
    if (y1 == y2) {
      if (x1 > x2) swap(x1, x2);
      horizontalLines[y1].emplace_back(x1, x2);
    } else if (x1 == x2) {
      if (y1 > y2) swap(y1, y2);
      verticalLines[x1].emplace_back(y1, y2);
    }
  }  
  // first join horizontal and vertical segments that coincide  
  joinLines(horizontalLines); joinLines(verticalLines);
  /* now compress coordinates
   * partition range so that queries can be answered exactly
   */
  vector<int> P; 
  for (pair<int, vector<pair<int, int>>> lineSegments : verticalLines) {
    for (pair<int, int> lineSegment : lineSegments.second) {
      P.push_back(lineSegment.first - 1);
      P.push_back(lineSegment.second);
    }
  }
  sort(P.begin(), P.end());
  P.resize(unique(P.begin(), P.end()) - P.begin());
  /* Now let M = P.size(). We have M + 1 partitions.
   * (-INF, P[0]], (P[0], P[1]], (P[1], P[2]], ..., (P[M - 2], P[M-1]], (P[M-1],INF]
   */
  unordered_map<int, int> coordinateBucket;
  for (int i = 0; i < P.size(); ++i) coordinateBucket[P[i]] = i;
  // begin keep track of blackened squares
  long long blackenedSquares = 0;
  // sort the horizontal line endpoints to prepare for a scan
  // each endpoint is (x-coordinate, y-coordinate, flag for start or end)
  vector<LineEndpoint> horizontalLineEndpoints;  
  for (pair<int, vector<pair<int, int>>> lineSegments : horizontalLines) {
    for (pair<int, int> lineSegment : lineSegments.second) {
      horizontalLineEndpoints.emplace_back(lineSegment.first, lineSegments.first, true); // start
      horizontalLineEndpoints.emplace_back(lineSegment.second, lineSegments.first, false); //end
      // horizontal lines don't coincide with one another, count them all
      blackenedSquares += lineSegment.second - lineSegment.first + 1;
    }
  }  
  // now prepare to scan vertical lines from left to right
  sort(horizontalLineEndpoints.begin(), horizontalLineEndpoints.end(), 
       [](LineEndpoint &a, LineEndpoint &b) -> bool {
         if (a.x != b.x) return a.x < b.x;
         if (a.start != b.start) return a.start; // add lines before removing them
         return a.y < b.y;
       });
  FenwickTree horizontalLineState(P.size() + 1);
  vector<LineEndpoint>::iterator lineEndpoint = horizontalLineEndpoints.begin();
  for (pair<int, vector<pair<int, int>>> lineSegments : verticalLines) {
    /* update the horizontal line state
     * process endpoints that occur before vertical line      
     * add line if it occurs at the vertical line
     */
    while (lineEndpoint != horizontalLineEndpoints.end() && 
           (lineEndpoint -> x < lineSegments.first ||
            (lineEndpoint -> x == lineSegments.first && lineEndpoint -> start))) {
      int bucketIdx = lower_bound(P.begin(), P.end(), lineEndpoint -> y) - P.begin();
      if (lineEndpoint -> start) { // add the line
        horizontalLineState.update(bucketIdx, 1);
      } else if (lineEndpoint -> end) { // remove the line
        horizontalLineState.update(bucketIdx, -1);
      }
      ++lineEndpoint;
    }
    for (pair<int, int> lineSegment : lineSegments.second) {
      // count all squares
      blackenedSquares += lineSegment.second - lineSegment.first + 1;
      // subtract away intersections, make sure we start at first bucket that intersects with line
      blackenedSquares -= horizontalLineState.sum(coordinateBucket[lineSegment.first - 1] + 1, 
                                                  coordinateBucket[lineSegment.second]);
    }    
  }
  cout << blackenedSquares << endl;
  return 0;
}