Posts by Philip Pham


I had taken a break from Machine Learning: a Probabilistic Perspective in Python for some time after getting stuck on how to train a neural network. The problem, Nonlinear regression for inverse dynamics, asks us to predict the torques to apply to a robot arm to reach certain points in space. There are 7 joints, but we focus on just 1.

The problem claims that Gaussian process regression can achieve a standardized mean squared error (SMSE) of 0.011. Part (a) asks for a linear regression. This part was easy, but linear regression could only achieve an SMSE of 0.0742260930.
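For reference, the SMSE is the mean squared error normalized by the variance of the test targets, so always predicting the mean gives an SMSE of about 1:

\begin{equation} \text{SMSE} = \frac{\frac{1}{N}\sum_{i=1}^N \left(y_i - \hat{y}_i\right)^2}{\operatorname{Var}\left(y\right)}. \end{equation}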

Part (b) asks for a radial basis function (RBF) network. Basically, we come up with $K$ prototypical points from the training data with $K$-means clustering. $K$ was chosen by looking at the graph of reconstruction error; I chose $K = 100$. Now, each prototype can be thought of as a unit in a neural network, where the activation function is the radial basis function:

\begin{equation} \kappa(\mathbf{x}, \mathbf{x}^\prime) = \exp\left(-\frac{\lVert\mathbf{x}-\mathbf{x}^\prime\rVert^2}{2\sigma^2}\right), \end{equation}

where $\sigma^2$ is the bandwidth, which was chosen with cross-validation. I just tried powers of 2, which gave me $\sigma^2 = 2^8 = 256$.

Now, the outputs of these activation functions form our new dataset, and we just train a linear regression on them. So, a neural network can pretty much be thought of as repeated linear regression after applying a nonlinear transformation. I ended up with an SMSE of 0.042844306703931787.
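As a minimal sketch of this pipeline, here's roughly what it looks like with scikit-learn's KMeans and LinearRegression. The data variables are placeholders, and this is an illustration rather than my actual homework code:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Assume X_train, y_train, X_test, y_test are already loaded.
def rbf_features(X, prototypes, bandwidth):
    # kappa(x, x') = exp(-||x - x'||^2 / (2 * sigma^2)) for each prototype x'.
    sq_dists = ((X[:, np.newaxis, :] - prototypes[np.newaxis, :, :])**2).sum(axis=2)
    return np.exp(-sq_dists/(2.0*bandwidth))

K, bandwidth = 100, 256.0  # K from the elbow plot; bandwidth from cross-validation
kmeans = KMeans(n_clusters=K).fit(X_train)
Phi_train = rbf_features(X_train, kmeans.cluster_centers_, bandwidth)
model = LinearRegression().fit(Phi_train, y_train)  # linear regression on RBF outputs

Phi_test = rbf_features(X_test, kmeans.cluster_centers_, bandwidth)
smse = np.mean((model.predict(Phi_test) - y_test)**2)/np.var(y_test)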

Finally, after months, I trained a neural network and was able to achieve an SMSE of 0.0217773683, which is still a ways off from 0.011, but it's pretty admirable performance for a homework assignment versus a published journal article.

Thoughts on TensorFlow

I had attempted to learn TensorFlow several times but repeatedly gave up. It does a great job of abstracting away building your network with layers of units and training your model with gradient descent. However, getting your data into batches and feeding it into the graph can be quite a challenge. I eventually decided on using queues, which are apparently being deprecated. I thought these were a pretty good abstraction, but since they are tied to your graph, they make evaluating your model a pain. Validation against a different dataset must be done in a separate process, which makes a lot of sense for a production-grade training pipeline but is quite difficult for beginners, who imagine that there should be a model object you can call like model.predict(...). To be fair, this type of API does exist with TFLearn.

It reminds me a lot of learning Java, which has a lot of features for building class hierarchies suited for large projects but confuses beginners. It took me forever to figure out what all that public static stuff meant. After taking the time, I found the checkpointing system with monitored sessions to be pretty neat. It makes it very easy to stop and start training. In my script sarco_train.py, if you quit training with Ctrl + C, you can just start again by restarting the script and pointing it to the same directory.
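The pattern looks roughly like this in TensorFlow 1.x; train_op, loss, and the checkpoint directory here are placeholders rather than the exact contents of sarco_train.py:

import tensorflow as tf

# ... build the graph, producing train_op and loss ...

# Restarting the script with the same checkpoint_dir resumes training from
# the latest checkpoint automatically.
with tf.train.MonitoredTrainingSession(checkpoint_dir='/tmp/sarcos',
                                       save_checkpoint_secs=60) as sess:
    while not sess.should_stop():
        _, batch_loss = sess.run([train_op, loss])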

TensorBoard was pretty cool, especially after I discovered that if you point your processes to the same directory, you can get plots on the same graph. I used this to plot my batch training MSE together with the MSE on the entire training set and test set. All you have to do is evaluate the tf.summary tensor as I did in my evaluation script.
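Sketched out, the evaluation process just runs the summary tensor on its dataset and writes under the same log directory; the names mse, x, y, and step below are placeholders:

# In the evaluation process.
mse_summary = tf.summary.scalar('mse', mse)
writer = tf.summary.FileWriter('/tmp/sarcos/eval')
# Tag the summary with the current training step so the curves line up.
summary_proto = sess.run(mse_summary, feed_dict={x: X_test, y: y_test})
writer.add_summary(summary_proto, global_step=step)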

MSE time series: the orange is batches from the training set. Purple is the whole training set, and blue is the test set.

Of course, you can't see much from this graph. But the cool thing is that these graphs are interactive and update in real time, so you can drill down, remove the noisy MSE from the training batches, and get:

MSE time series that has been filtered.

From here, I was able to see that the model around training step 190,000 performed the best. And thanks to checkpointing, that model is saved so I can restore it as I do in Selecting a model.

I thought one of the cooler features was the ability to visualize your graph. The training graph is shown in the title picture, and it's quite complex with regularization and the Adam optimizer calculating gradients and updating weights. The evaluation graph, on the other hand, is much easier to look at.

Evaluation graph

It's really just a subset of the much larger training graph that stops at calculating the MSE. The training graph just includes extra nodes for regularization and optimization.

All in all, while learning TensorFlow was pretty frustrating, it was rewarding to finally have a model that worked quite well. Honestly, though, there is still much more to explore, like training on GPUs, embeddings, distributed training, and more complicated network layers. I hope to write about that someday.


Despite its being a classic read by high schoolers everywhere, I never did read John Steinbeck's The Grapes of Wrath. Given Donald Trump's ascendancy to the White House, the plight of rural, white Americans has received much attention. I decided to read The Grapes of Wrath because it speaks to a struggling, rural America of a different era.

The novel left me pondering how terribly we can treat our fellow man, and yet, feel just in doing so. The economic hardships that the Okies faced were made more difficult by their being branded with otherness. It also got me thinking about the strong sense of pride that these people have, which informs their reactions to the trials that they face. The virtue of hard work is not lost on them, and they demand a sense of agency.

For the most part, it didn't seem that anyone was truly evil. Everyone was just doing their job or looking out for their families. The police officers are paid to protect the land. The man running a tractor over the homes of his neighbors just wants to feed his family.

Some of the owner men were kind because they hated what they had to do, and some of them were angry because they hated to be cruel, and some of them were cold because they had long ago found that one could not be an owner unless one were cold. And all of them were caught in something larger than themselves. Some of them hated the mathematics that drove them, and some were afraid, and some worshiped the mathematics because it provided a refuge from thought and from feeling.

But the sum of all these mundane jobs and followed orders leads to true horrors like children starving. It reminded me of Hemingway's A Farewell to Arms and the stupidity of war: everyone agrees that the war and killing are wrong, yet everyone feels powerless to stop fighting. In the same way,

It happens that every man in a bank hates what the bank does, and yet the bank does it. The bank is something more than men, I tell you. It’s the monster. Men made it, but they can’t control it.

Indeed, it seems as if man is more than happy to make his own idols and become enslaved to them. This attitude is alive and well today. The financial crisis in 2008 could be attributed to the worship of such mathematics. We can justify the ruthlessness of companies like Uber because, well, it's just business.

Despite being poor and mistreated, the Okies remain a proud and dignified people. In my younger years, I can admit to falling for the Asian model minority myth and using it as a justification for racism against other minorities. But now, seeing the Joads invoke their ancestors has started to make me understand the destruction that slavery wrought on African Americans. Mama Joad recalls her ancestors often, which leads the family to persevere and do right.

I never heerd tell of no Joads or no Hazletts, neither, ever refusin’ food an’ shelter or a lift on the road to anybody that asked. They’s been mean Joads, but never that mean.

And also,

Jus' like I said, they ain't a gonna keep no Joad in jail.

Slavery stripped African Americans of that sense of pride and narrative, however. Because the Joads have so much pride, they won't accept how the police treat them.

“They comes a time when a man gets mad.’’

...

Did you ever see a deputy that didn’ have a fat ass? An’ they waggle their ass an’ flop their gun aroun’. Ma,’’ he said, “if it was the law they was workin’ with, why, we could take it. But it ain’t the law. They’re a-workin’ away at our spirits. They’re a-tryin’ to make us cringe an’ crawl like a whipped bitch. They tryin’ to break us. Why, Jesus Christ, Ma, they comes a time when the on’y way a fella can keep his decency is by takin’ a sock at a cop. They’re workin’ on our decency.

Eventually, Tom sees violence against the police as the only solution, and yet, we are made to feel sympathetic to the Joads' plight. I suppose that this is how many in the Black Lives Matter movement feel, though many condemn their extreme tactics.

Many attribute Trump's rise to the aloofness and condescension of elites. They are not depicted kindly in The Grapes of Wrath, either. In addition to the quote about owners and the worship of mathematics, there are many instances of the so-called Okies being looked down on as backwards. These attitudes are transmitted to the children, for Winfield says:

“That kid says we was Okies,’’ he said in an outraged voice. “He says he wasn’t no Okie ’cause he come from Oregon. Says we was goddamn Okies. I socked him.’’

This is not too dissimilar to today's depictions of the Christian right or Southern rednecks. While the owner class is depicted as cold and detached, we see a human side of the poor. Mama Joad tells us:

“Learnin’ it all a time, ever’ day. If you’re in trouble or hurt or need— go to poor people. They’re the only ones that’ll help— the only ones.’’

And maybe, there's some truth to that. The most striking hypocrisy among elite liberals is their tendency to self-segregate and resist all attempts at integration. Californians tend to live in enclaves and fight against affordable housing. It was true in Steinbeck's time, and it is true today. Embracing other cultures for many of the globally elite rarely goes beyond dining at ethnic restaurants found on Yelp or taking exotic vacations. While many liberals support refugee resettlement, nearly all of that work is done by Evangelical Christians, who are often derided as denying science like evolution and climate change.

Of course, Christians are not without their hypocrisy, either. Casy is an ex-preacher who admits to having slept around. The reason he is an ex-preacher is that he is well aware of how the Church has failed his congregation. Just as then, today there are those who would use the Church to enrich themselves.

All in all, though, the book seems to echo many fundamental biblical teachings. The owner men that oppress the poor are not actually evil. They are simply "caught in something larger than themselves," and so as in Matthew 19:24: "Again I tell you, it is easier for a camel to go through the eye of a needle than for a rich person to enter the kingdom of God." In Casy's last moments, he says, “You don’ know what you’re a-doin’,’’ as in Luke 23:34,

Jesus said, "Father, forgive them, for they don't know what they are doing." And the soldiers gambled for his clothes by throwing dice.

Finally, Tom almost repeats Ecclesiastes 4:9-12 verbatim:

‘Two are better than one, because they have a good reward for their labor. For if they fall, the one will lif’ up his fellow, but woe to him that is alone when he falleth, for he hath not another to help him up.’ That’s part of her.’’ “Go on,’’ Ma said. “Go on, Tom.’’ “Jus’ a little bit more. ‘Again, if two lie together, then they have heat: but how can one be warm alone? And if one prevail against him, two shall withstand him, and a three-fold cord is not quickly broken.’

I wasn't entirely sure what to make of the very end, whose details I won't mention here, lest I ruin the book for any readers. It seemed either to be an ultimate act of desperation or a higher call to look after our fellow man. Maybe it was a bit of both.



Never having taken any computer science courses beyond the freshman level, I have some glaring holes in my computer science knowledge. According to Customs and Border Protection, every self-respecting software engineer should be able to balance a binary search tree.

I used a library-provided red-black tree in Policy-Based Data Structures in C++, so I've been aware of the existence of these trees for quite some time. However, I've never rolled my own. I set out to fix this deficiency in my knowledge a few weeks ago.

I've created an augmented AVL tree to re-solve ORDERSET.

The Problem

The issue with vanilla binary search trees is that many of the operations have worst-case $O(N)$ complexity. To see this, consider the case where your data is sorted, so you insert your data in ascending order. Then, the binary search tree degenerates into a linked list.

Definition of an AVL Tree

AVL trees solve this issue by maintaining the invariant that the height of the left and right subtrees differ by at most 1. The height of a tree is the maximum number of edges between the root and a leaf node. For any node $x$, we denote its height $h_x$. Now, consider a node $x$ with left child $l$ and right child $r$. We define the balance factor of $x$ to be $$b_x = h_l - h_r.$$ In an AVL tree, $b_x \in \{-1,0,1\}$ for all $x$.

Maintaining the Invariant

Insertion

As we insert new values into the tree, our tree may become unbalanced. Since we are only creating one node, if the tree becomes unbalanced some node has a balance factor of $\pm 2$. Without loss of generality, let us assume that it's $-2$ since the two cases are symmetric.

Now, the only nodes whose heights are affected are along the path between the root and the new node. So, only the parents of these nodes, which are also along this path, will have their balance factors altered. Thus, we should start at the deepest node with an incorrect balance factor. If we can find a way to rebalance this subtree, we can recursively balance the whole tree by going up to the root and rebalancing on the way.

So, let us assume we have a tree whose root has a balance factor of $-2$. Thus, the height of the right subtree exceeds the height of the left subtree by $2$. We have $2$ cases.

Right-left Rotation: The Right Child Has a Balance Factor of $1$

In these diagrams, circles denote nodes, and rectangles denote subtrees.
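For example, with keys $3$, $5$, and $7$ and subtrees $A$ through $D$, we rotate $5$ up:

    3                  3
   / \                / \
  A   7       =>     A   5
     / \                / \
    5   D              B   7
   / \                    / \
  B   C                  C   D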

In this example, by making $5$ the new right child, we still have an unbalanced tree, where the root has balance factor $-2$, but now, the right child has a right subtree of greater height.

Right-right Rotation: The Right Child Has a Balance Factor of $-2$, $-1$, or $0$

To fix this situation, the right child becomes the root.
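Continuing the example, $5$ becomes the root:

    3                     5
   / \                  /   \
  A   5                3     7
     / \       =>     / \   / \
    B   7            A   B C   D
       / \
      C   D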

We see that after this rotation is finished, we have a balanced tree, so our invariant is restored!

Deletion

When we remove a node, we need to replace it with the next largest node in the tree. Here's how to find this node:

  1. Go right. Now all nodes in this subtree are greater.
  2. Find the smallest node in this subtree by going left until you can't.

We replace the deleted node, say $x$, with the smallest node we found in the right subtree, say $y$. Remember this path. The right subtree of $y$ becomes the new left subtree of the parent of $y$. Starting from the parent of $y$, balance factors may have been altered, so we start there, go up to the root, and do any rotations necessary to correct the balance factors along the way.

Implementation

Here is the code for these rotations with templates for reusability and a partial implementation of the STL interface.

#include <cassert>
#include <algorithm>
#include <iostream>
#include <functional>
#include <ostream>
#include <stack>
#include <string>
#include <utility>

using namespace std;

namespace phillypham {
  namespace avl {
    template <typename T>
    struct node {
      T key;
      node<T>* left;
      node<T>* right;
      size_t subtreeSize;
      size_t height;
    };

    template <typename T>
    int height(node<T>* root) {
      if (root == nullptr) return -1;
      return root -> height;
    }

    template <typename T>
    void recalculateHeight(node<T> *root) {
      if (root == nullptr) return;
      root -> height = max(height(root -> left), height(root -> right)) + 1;
    }

    template <typename T>
    size_t subtreeSize(node<T>* root) {
      if (root == nullptr) return 0;
      return root -> subtreeSize;
    }

    template <typename T>
    void recalculateSubtreeSize(node<T> *root) {
      if (root == nullptr) return;
      root -> subtreeSize = subtreeSize(root -> left) + subtreeSize(root -> right) + 1;
    }

    template <typename T>
    void recalculate(node<T> *root) {
      recalculateHeight(root);
      recalculateSubtreeSize(root);
    }

    template <typename T>
    int balanceFactor(node<T>* root) {
      if (root == nullptr) return 0;
      return height(root -> left) - height(root -> right);
    }

    template <typename T>
    node<T>*& getLeftRef(node<T> *root) {
      return root -> left;    
    }

    template <typename T>
    node<T>*& getRightRef(node<T> *root) {
      return root -> right;
    }

    // Performs a single rotation: newRootGetter picks the child that becomes
    // the new root, and the old root adopts that child's other subtree.
    template <typename T>
    node<T>* rotateSimple(node<T> *root,
                          node<T>*& (*newRootGetter)(node<T>*),
                          node<T>*& (*replacedChildGetter)(node<T>*)) {
      node<T>* newRoot = newRootGetter(root);
      newRootGetter(root) = replacedChildGetter(newRoot);
      replacedChildGetter(newRoot) = root;
      recalculate(replacedChildGetter(newRoot));
      recalculate(newRoot);
      return newRoot;
    }

    // Rotates the grandchild up to turn the right-left case into the
    // right-right case (or the left-right case into the left-left case).
    template <typename T>
    void swapChildren(node<T> *root,
                      node<T>*& (*childGetter)(node<T>*),
                      node<T>*& (*grandChildGetter)(node<T>*)) {
      node<T>* newChild = grandChildGetter(childGetter(root));
      grandChildGetter(childGetter(root)) = childGetter(newChild);
      childGetter(newChild) = childGetter(root);
      childGetter(root) = newChild;
      recalculate(childGetter(newChild));
      recalculate(newChild);
    }

    template <typename T>
    node<T>* rotateRightRight(node<T>* root) {
      return rotateSimple(root, getRightRef, getLeftRef);
    }

    template <typename T>
    node<T>* rotateLeftLeft(node<T>* root) {
      return rotateSimple(root, getLeftRef, getRightRef);    
    }

    // Restores the AVL invariant at root, assuming both subtrees are AVL trees.
    template <typename T>
    node<T>* rotate(node<T>* root)  {
      int bF = balanceFactor(root);
      if (-1 <= bF && bF <= 1) return root;
      if (bF < -1) { // right side is too heavy
        assert(root -> right != nullptr);
        if (balanceFactor(root -> right) != 1) {
          return rotateRightRight(root);
        } else { // right left case
          swapChildren(root, getRightRef, getLeftRef);
          return rotate(root);
        }
      } else { // left side is too heavy
        assert(root -> left != nullptr);
        // left left case
        if (balanceFactor(root -> left) != -1) {
          return rotateLeftLeft(root);
        } else { // left right case
          swapChildren(root, getLeftRef, getRightRef);
          return rotate(root);
        }
      }
    }

    template <typename T, typename cmp_fn = less<T>>
    node<T>* insert(node<T>* root, T key, const cmp_fn &comparator = cmp_fn()) {
      if (root == nullptr) {
        node<T>* newRoot = new node<T>();
        newRoot -> key = key;
        newRoot -> left = nullptr;
        newRoot -> right = nullptr;
        newRoot -> height = 0;
        newRoot -> subtreeSize = 1;
        return newRoot;
      }
      if (comparator(key, root -> key)) { 
        root -> left = insert(root -> left, key, comparator);
      } else if (comparator(root -> key, key)) {
        root -> right = insert(root -> right, key, comparator);
      }
      recalculate(root);
      return rotate(root);
    }

    template <typename T, typename cmp_fn = less<T>>
    node<T>* erase(node<T>* root, T key, const cmp_fn &comparator = cmp_fn()) {
      if (root == nullptr) return root; // nothing to delete
      if (comparator(key, root -> key)) { 
        root -> left = erase(root -> left, key, comparator);
      } else if (comparator(root -> key, key)) {
        root -> right = erase(root -> right, key, comparator);
      } else { // actual work when key == root -> key
        if (root -> right == nullptr) {
          node<T>* newRoot = root -> left;
          delete root;
          return newRoot;
        } else if (root -> left == nullptr) {
          node<T>* newRoot = root -> right;
          delete root;
          return newRoot;
        } else {
          stack<node<T>*> path;
          path.push(root -> right);
          while (path.top() -> left != nullptr) path.push(path.top() -> left);
          // swap with root
          node<T>* newRoot = path.top(); path.pop();
          newRoot -> left = root -> left;
          delete root;

          node<T>* currentNode = newRoot -> right;
          while (!path.empty()) {
            path.top() -> left = currentNode;
            currentNode = path.top(); path.pop();
            recalculate(currentNode);
            currentNode = rotate(currentNode);
          }
          newRoot -> right = currentNode;
          recalculate(newRoot);
          return rotate(newRoot);
        }        
      }
      recalculate(root);
      return rotate(root);
    }

    template <typename T>
    stack<node<T>*> find_by_order(node<T>* root, size_t idx) {
      assert(0 <= idx && idx < subtreeSize(root));
      stack<node<T>*> path;
      path.push(root);
      while (idx != subtreeSize(path.top() -> left)) {
        if (idx < subtreeSize(path.top() -> left)) {
          path.push(path.top() -> left);
        } else {
          idx -= subtreeSize(path.top() -> left) + 1;
          path.push(path.top() -> right);
        }
      }
      return path;
    }

    template <typename T>
    size_t order_of_key(node<T>* root, T key) {
      if (root == nullptr) return 0ULL;
      if (key == root -> key) return subtreeSize(root -> left);
      if (key < root -> key) return order_of_key(root -> left, key);
      return subtreeSize(root -> left) + 1ULL + order_of_key(root -> right, key);
    }

    template <typename T>
    void delete_recursive(node<T>* root) {
      if (root == nullptr) return;
      delete_recursive(root -> left);
      delete_recursive(root -> right);
      delete root;
    }
  }

  template <typename T, typename cmp_fn = less<T>>
  class order_statistic_tree {
  private:
    cmp_fn comparator;
    avl::node<T>* root;
  public:
    class const_iterator: public std::iterator<std::bidirectional_iterator_tag, T> {
      friend const_iterator order_statistic_tree<T, cmp_fn>::cbegin() const;
      friend const_iterator order_statistic_tree<T, cmp_fn>::cend() const;
      friend const_iterator order_statistic_tree<T, cmp_fn>::find_by_order(size_t) const;
    private:
      cmp_fn comparator;
      stack<avl::node<T>*> path;
      avl::node<T>* beginNode;
      avl::node<T>* endNode;
      bool isEnded;

      const_iterator(avl::node<T>* root) {
        setBeginAndEnd(root);
        if (root != nullptr) {
          path.push(root);
        }
      }

      const_iterator(avl::node<T>* root, stack<avl::node<T>*>&& path): path(move(path)) {
        setBeginAndEnd(root);
      }

      void setBeginAndEnd(avl::node<T>* root) {
        beginNode = root;
        while (beginNode != nullptr && beginNode -> left != nullptr)
          beginNode = beginNode -> left;
        endNode = root;        
        while (endNode != nullptr && endNode -> right != nullptr)
          endNode = endNode -> right;
        if (root == nullptr) isEnded = true;
      }

    public:
      bool isBegin() const {
        return path.top() == beginNode;
      }

      bool isEnd() const {
        return isEnded;
      }

      const T& operator*() const {
        return path.top() -> key;
      }

      const bool operator==(const const_iterator &other) const {
        if (path.top() == other.path.top()) {
          return path.top() != endNode || isEnded == other.isEnded;
        }
        return false;
      }

      const bool operator!=(const const_iterator &other) const {
        return !((*this) == other);
      }

      const_iterator& operator--() {
        if (path.empty()) return *this;
        if (path.top() == beginNode) return *this;
        if (path.top() == endNode && isEnded) {
          isEnded = false;
          return *this;
        }
        if (path.top() -> left == nullptr) {
          T& key = path.top() -> key;
          do {
            path.pop();
          } while (comparator(key, path.top() -> key));
        } else {
          path.push(path.top() -> left);
          while (path.top() -> right != nullptr) {
            path.push(path.top() -> right);
          }
        }
        return *this;
      }

      const_iterator& operator++() {
        if (path.empty()) return *this;
        if (path.top() == endNode) {
          isEnded = true;
          return *this;
        }
        if (path.top() -> right == nullptr) {
          T& key = path.top() -> key;
          do {
            path.pop();
          } while (comparator(path.top() -> key, key));
        } else {
          path.push(path.top() -> right);
          while (path.top() -> left != nullptr) {
            path.push(path.top() -> left);
          }
        }
        return *this;
      }
    };

    order_statistic_tree(): root(nullptr) {}

    void insert(T newKey) {
      root = avl::insert(root, newKey, comparator);
    }

    void erase(T key) {
      root = avl::erase(root, key, comparator);
    }

    size_t size() const {
      return subtreeSize(root);
    }

    // 0-based indexing
    const_iterator find_by_order(size_t idx) const {
      return const_iterator(root, move(avl::find_by_order(root, idx)));
    }

    // returns the number of keys strictly less than the given key
    size_t order_of_key(T key) const {
      return avl::order_of_key(root, key);
    }

    ~order_statistic_tree() {
      avl::delete_recursive(root);
    }

    const_iterator cbegin() const{
      const_iterator it = const_iterator(root);
      while (!it.isBegin()) --it;
      return it;
    }

    const_iterator cend() const {
      const_iterator it = const_iterator(root);
      while (!it.isEnd()) ++it;
      return it;
    }
  };
}

Solution

With this data structure, the solution is fairly short and is, in fact, identical to my previous solution in Policy-Based Data Structures in C++. Performance-wise, my implementation is very efficient. It differs by only a few hundredths of a second from the policy-based tree set according to SPOJ.

int main(int argc, char *argv[]) {
  ios::sync_with_stdio(false); cin.tie(NULL);

  phillypham::order_statistic_tree<int> orderStatisticTree;
  int Q; cin >> Q; // number of queries
  for (int q = 0; q < Q; ++q) {
    char operation; 
    int parameter;
    cin >> operation >> parameter;
    switch (operation) {
      case 'I':
        orderStatisticTree.insert(parameter);
        break;
      case 'D':
        orderStatisticTree.erase(parameter);
        break;
      case 'K':
        if (1 <= parameter && parameter <= orderStatisticTree.size()) {
          cout << *orderStatisticTree.find_by_order(parameter - 1) << '\n';
        } else {
          cout << "invalid\n";
        }
        break;
      case 'C':
        cout << orderStatisticTree.order_of_key(parameter) << '\n';
        break;
    }
  }
  cout << flush;
  return 0;
}


I'll wake you up with some breakfast in bed
I'll bring you coffee
With a kiss on your head
- Excerpt from "Say You Won't Let Go" originally by James Arthur

After a few attempts, I created a Croque Madame that I'm quite happy with. I improved on Croque-Madame by replacing the ham with slices of thinly cut pork belly that I cooked on a skillet. I then used all that pork fat to cook my eggs. For those that prefer exact recipes, I also used sourdough bread, 3/8 tsp of salt, and 1/8 tsp of ground nutmeg.

Also, I think the blog might give people the false idea that I'm pretty much good at everything. It's time to pull back the facade of perfection that social media projects. After many hours of practice on the guitar, I mustered this effort at singing "Say You Won't Let Go". Here's an excerpt. I hope you enjoy my embarrassing myself.

One of the coolest benefits of working for Google is we have guitars lying around the office for impromptu jam sessions. This was filmed in the Seattle office, with a Google guitar, and a Google laptop. Now, I'm done being a shill for Google.



After solving Kay and Snowflake, I wanted to write this post to serve as notes to my future self. There are many approaches to this problem. The official solution contains three different approaches; mine differs from all of them and has better complexity than all but the last. My solution contained a lot of code that I think is reusable in the future.

This problem involves finding the centroid of a tree, which is a node such that when removed, each of the new trees produced has at most half the number of nodes of the original tree. We need to be able to do this for any arbitrary subtree of the tree. Since there are as many as 300,000 subtrees, we need to be a bit clever about how we find the centroid.

First, we can prove that the centroid of any tree is the node with the smallest subtree among those whose subtree contains at least half of the nodes of the original tree. That is, if $T(i)$ is the subtree of node $i$, and $S(i)$ is its size, then the centroid of $T(i)$ is \begin{equation} C(i) = \underset{k \in \{j \in T(i) : 2S(j) \geq S(i)\}}{\arg\min} \left(S(k) - \frac{S(i)}{2}\right). \end{equation}

To see this, first note that the ancestors and descendants of $k = C(i)$ will be in separate subtrees. Since $S(k) \geq S(i)/2$ implies $S(i) - S(k) \leq S(i)/2$, all the ancestors will be part of new trees that are sufficiently small. Now, suppose for a contradiction that some subset of the descendants of $k$ belongs to a tree with more than $S(i)/2$ nodes. That tree is a strictly smaller subtree of $T(i)$ that still contains at least half the nodes, contradicting the minimality of $S(k)$.

Now, that proof wasn't so hard, but actually finding $C(i)$ for each $i$ is tricky. The naive way would be to do an exhaustive search in $T(i)$, but that would give us an $O(NQ)$ solution.

The first observation that we can make to simplify our search is that if we start at $i$ and always follow the child with the largest subtree, the centroid is on that path. Why? Well, at each node there can be at most one child whose subtree has at least $S(i)/2$ nodes; otherwise, the tree would have too many nodes. So every node with a sufficiently large subtree, including the centroid, lies on the path of largest subtrees.

So to find the centroid, we can start at the leaf of this path and go up until we hit the centroid. That still doesn't sound great, though, since the path might be very long, making this step $O(N)$. The key thing to note is that the subtree size is strictly increasing along the path, so we can do a binary search, reducing this step to $O(\log N)$. With a technique called binary lifting, described in Least Common Ancestor, each node stores its $2^j$th ancestors, so we can binary search over those $O(\log N)$ stored ancestors instead, further reducing this step to $O\left(\log\left(\log N\right)\right)$, which is probably not actually necessary, but I did it anyway.

Now, we need to know several statistics for each node: the leaf of the path of largest subtrees and the subtree size. Subtree size can be calculated recursively with depth-first search, since the subtree size of $i$ is the sum of the sizes of the child subtrees plus 1 for $i$ itself. Thus, we do a post-order traversal to calculate subtree size and the special leaf. We also need the depth of each node to precompute the ancestors for binary lifting. The depth is computed with a pre-order traversal.

In the title picture, the numbers indicate the order of an in-order traversal, the letters correspond to a pre-order traversal, and the roman numerals show the order of a post-order traversal. Since recursion on large trees leads to stack overflow errors, and usually you can't tell the online judge to increase the stack size, it's always better to use an explicit stack. I quite like my method of doing both the pre-order and post-order traversal with one stack.

/**
 * Does a depth-first search to calculate various statistics.
 * We are given as input a rooted tree in the form each node's
 * children and parent. The statistics returned:
 * ancestors: Each node's ancestors that are a power of 2
 *     edges away.
 * maxPathLeaf: If we continually follow the child that has the
 *     maximum subtree, we'll end up at this leaf.
 * subtreeSize: The size of the node's subtree including itself.
 */
void calculateStatistics(const vector<vector<int>> &children, 
                         const vector<int> &parent,
                         vector<vector<int>> *ancestorsPtr,
                         vector<int> *maxPathLeafPtr,
                         vector<int> *subtreeSizePtr) {
  int N = parent.size();
  if (N == 0) return;
  // depth also serves to keep track of whether we've visited, yet.
  vector<int> depth(N, -1); // -1 indicates that we have not visited.
  vector<int> height(N, 0);
  vector<int> &maxPathLeaf = *maxPathLeafPtr;
  vector<int> &subtreeSize = *subtreeSizePtr;
  stack<int> s; s.push(0); // DFS
  while (!s.empty()) {
    int i = s.top(); s.pop();
    if (depth[i] == -1) {       // pre-order
      s.push(i); // Put it back in the stack so we visit again.
      depth[i] = parent[i] == -1 ? 0: depth[parent[i]] + 1;
      for (int j : children[i]) s.push(j);
    } else {                    // post-order
      int maxSubtreeSize = INT_MIN;
      int maxSubtreeRoot = -1;
      for (int j : children[i]) {
        height[i] = max(height[j] + 1, height[i]);
        subtreeSize[i] += subtreeSize[j];
        if (maxSubtreeSize < subtreeSize[j]) {
          maxSubtreeSize = subtreeSize[j];
          maxSubtreeRoot = j;
        }        
      }
      maxPathLeaf[i] = maxSubtreeRoot == -1 ?
        i : maxPathLeaf[maxSubtreeRoot];
    }   
  }
  // Use binary lifting to calculate a subset of ancestors.
  vector<vector<int>> &ancestors = *ancestorsPtr;
  for (int i = 0; i < N; ++i)
    if (parent[i] != -1) ancestors[i].push_back(parent[i]);
  for (int k = 1; (1 << k) <= height.front(); ++k) {
    for (int i = 0; i < N; ++i) {
      int j = ancestors[i].size();
      if ((1 << j) <= depth[i]) {
        --j;
        ancestors[i].push_back(ancestors[ancestors[i][j]][j]);
      }
    }
  }
}

With these statistics calculated, the rest of our code to answer queries is quite small if we use C++'s built-in implementation of binary search. With upper_bound:

#include <algorithm>
#include <cassert>
#include <climits>
#include <iostream>
#include <iterator>
#include <stack>
#include <vector>

using namespace std;

/**
 * Prints out vector for any type T that supports the << operator.
 */
template<typename T>
void operator<<(ostream &out, const vector<T> &v) {
  copy(v.begin(), v.end(), ostream_iterator<T>(out, "\n"));
}

int findCentroid(int i,
                 const vector<vector<int>> &ancestors,
                 const vector<int> &maxPathLeaf,
                 const vector<int> &subtreeSize) {
  int centroidCandidate = maxPathLeaf[i];
  int maxComponentSize = (subtreeSize[i] + 1)/2;
  while (subtreeSize[centroidCandidate] < maxComponentSize) {
    // Alias the candidate's ancestors.
    const vector<int> &cAncenstors = ancestors[centroidCandidate];
    assert(!cAncenstors.empty());
    // Check the immediate parent first. If this is an exact match, and we're done.
    if (subtreeSize[cAncenstors.front()] >= maxComponentSize) {
      centroidCandidate = cAncenstors.front();
    } else {
      // Otherwise, we can approximate the next candidate by searching ancestors that
      // are a power of 2 edges away.
      // Find the index of the first ancestor who has a subtree of size
      // greater than maxComponentSize.
      int centroidIdx =
        upper_bound(cAncenstors.cbegin() + 1, cAncenstors.cend(), maxComponentSize,
                    // If this function evaluates to true, j is an upper bound.
                    [&subtreeSize](int maxComponentSize, int j) -> bool {
                      return maxComponentSize < subtreeSize[j];
                    }) - ancestors[centroidCandidate].cbegin();
      centroidCandidate = cAncenstors[centroidIdx - 1];
    }
  }
  return centroidCandidate;
}

int main(int argc, char *argv[]) {
  ios::sync_with_stdio(false); cin.tie(NULL);
  int N, Q;
  cin >> N >> Q;
  vector<int> parent; parent.reserve(N);
  parent.push_back(-1); // the root has no parent
  vector<vector<int>> children(N);
  for (int i = 1; i < N; ++i) {
    int p; cin >> p; --p; // Convert to 0-indexing.
    children[p].push_back(i);
    parent.push_back(p);
  }
  // ancestors[i][j] will be the (2^j)th ancestor of node i.
  vector<vector<int>> ancestors(N);
  vector<int> maxPathLeaf(N, -1);
  vector<int> subtreeSize(N, 1);
  calculateStatistics(children, parent,
                      &ancestors, &maxPathLeaf, &subtreeSize);

  vector<int> centroids; centroids.reserve(Q);
  for (int q = 0; q < Q; ++q) {
    int v; cin >> v; --v; // Convert to 0-indexing.
    // Convert back to 1-indexing.
    centroids.push_back(findCentroid(v, ancestors, maxPathLeaf, subtreeSize) + 1);
  }
  cout << centroids;
  cout << flush;
  return 0;
}

Of course, we can use lower_bound in findCentroid, too. It's quite baffling to me that the comparator's parameters come in a different order for lower_bound, though. I always forget how to use these functions, which is why I've decided to write this post.

int findCentroid(int i,
                 const vector<vector<int>> &ancestors,
                 const vector<int> &maxPathLeaf,
                 const vector<int> &subtreeSize) {
  int centroidCandidate = maxPathLeaf[i];
  int maxComponentSize = (subtreeSize[i] + 1)/2;
  while (subtreeSize[centroidCandidate] < maxComponentSize) {
    // Alias the candidate's ancestors.
    const vector<int> &cAncenstors = ancestors[centroidCandidate];
    assert(!cAncenstors.empty());
    // Check the immediate parent first. If this is an exact match, and we're done.
    if (subtreeSize[cAncenstors.front()] >= maxComponentSize) {
      centroidCandidate = cAncenstors.front();
    } else {
      // Otherwise, we can approximate the next candidate by searching ancestors that
      // are a power of 2 edges away.
      // Find the index of the first ancestor who has a subtree of size
      // at least maxComponentSize.
      int centroidIdx =
        lower_bound(cAncenstors.cbegin() + 1, cAncenstors.cend(), maxComponentSize,
                    // If this function evaluates to true, j + 1 is a lower bound.
                    [&subtreeSize](int j, int maxComponentSize) -> bool {
                      return subtreeSize[j] < maxComponentSize;
                    }) - ancestors[centroidCandidate].cbegin();      
      if (centroidIdx < cAncenstors.size() &&
          subtreeSize[cAncenstors[centroidIdx]] == maxComponentSize) {
        centroidCandidate = cAncenstors[centroidIdx];
      } else { // We don't have an exact match.
        centroidCandidate = cAncenstors[centroidIdx - 1];
      }
    }
  }
  return centroidCandidate;
}

All in all, we need $O(N\log N)$ time to compute the ancestors and $O\left(Q\log\left(\log N\right)\right)$ time to answer queries, so the total complexity is $O\left(N\log N + Q\log\left(\log N\right)\right)$.



One of the better parts about working for Google is the office gym and the community and camaraderie of working out and striving towards fitness goals together with your coworkers. This past year, we had the GFit games. The friendly competition gave me the boost I needed to finally hit two longstanding goals of mine, to deadlift 4 plates (405 pounds) and squat 3 plates (315 pounds). In competition, I was able to hit exactly 405 pounds for the deadlift, and I smashed my goal for the squat with 345 pounds. Yay!

People often wonder why I lift. It's certainly not for the girls, given my dating life. Despite my love of problem solving, which I discussed in my previous article, there is something deeply unsatisfying about urban intellectual work. I'm apparently not alone in this. I recently read a wonderful article about French people leaving the city to farm, Life on the Farm Draws Some French Tired of Urban Rat Race. I often wonder how software engineering became such a male-dominated field, because pressing buttons all day on a keyboard makes nursing look like the epitome of masculinity, since nurses do much more hard labor than we do.

I suppose that through some defect of evolution, there's some primitive vestige left in my brain from our caveman days that associates work with physical labor. I guess a lot of people are of the same mind, given the strong grip that manufacturing industries have on the American psyche, as shown in this past election cycle. For this reason, when I go to work in the morning, my first stop is the gym. It seems dumb and brutish of me, but I find a lot of satisfaction in picking heavy things up and putting them back down.

There is something spiritually fulfilling about working towards fitness goals, too. As I've gotten stronger and tested my limits, I've gained a newfound appreciation for creation. Conquering fear to do a backflip was exhilarating. Working on my balance to do handstand push-ups often helps me relax and focus. I can't really get on board with the fitness industry's claims that being in great shape will radically transform your life. No, despite my abs, girls do not flock to me. However, I do find a lot of joy in pushing myself physically. Hopefully, 2017 will be another great year.