I was recently posed the following puzzle:

Imagine you are offered a choice between two different bets. In (A), you must make 2/3 soccer shots, and in (B), you must make 5/8. In either case, you receive a $100 prize for winning the bet. Which bet should you choose?

Intuitively, a professional soccer player would want to take the second bet, whereas a hopeless case like me would want to take the first. However, suppose you have no idea whether your skill level is closer to Lionel Messi's or to Max Shinn's. The puzzle continues:

You are offered the option to take practice shots to determine your skill level at a cost of $0.01 for each shot. Assuming you and the goalie never fatigue, how do you decide when to stop taking practice shots and choose a bet?

Clearly it is never advisable to take more than $100/$0.01 = 10,000 practice shots, but how many should we take? A key to this question is that you do not have to determine the number of shots to take beforehand. Therefore, rather than determining a fixed number of shots to take, we will instead need to determine a decision procedure for when to stop shooting and choose a bet.

There is no single “correct” answer to this puzzle, so I have documented my approach below.

Approach

To understand my approach, first realize that there are a finite number of potential states that the game can be in, and that each state is fully defined by how many shots you have made and how many you have missed. The sum of these is the total number of shots you have taken, and the order does not matter. Additionally, we consider all states, even those you would never reach under the decision procedure.

An example of a state is taking 31 shots, making 9 of them, and missing 22 of them. Another example is taking 98 shots, making 1 of them and missing 97 of them. Even though we may have already made a decision before taking 98 shots, the concept of a state does not depend on the procedure used to “get there”.

Using this framework, it is sufficient to show which decision we should take given what state we are in. My approach is as follows:

  1. Find a tight upper bound B \ll 10000 on the number of practice shots to take. This limits the number of states to work with.
  2. Determine the optimal choice based on each potential state after taking B total shots. Once B shots have been taken, it is always best to have chosen either bet (A) or bet (B), so choose the best bet without the option of shooting again.
  3. Working backwards, starting with states with B-1 shots and moving down to B-2,...,0, determine the expected value of each of the three choices: select bet (A), select bet (B), or shoot again. Use this to determine the optimal choice to make at that position.

The advantage of this approach is that the primary criterion we work with is the expected value of each decision. This means that if we play the game many times, we will maximize the amount of money we earn. As a convenient consequence, we also know how much money we can expect to earn given our current state.

The only reason this procedure is necessary is that we don't know our skill level. If we could determine with 100% accuracy what our skill level was, we would never need to take any shots at all. Thus, a key part of this procedure is estimating our skill level.

What if you know your skill level?

We define skill level as the probability p_0 that you will make any given shot. If we knew this probability, we could find the expected payoff from each bet. On the plot below, we show the payoff (in dollars) of each bet on the y-axis, and how it changes with skill on the x-axis.

Assuming we have a precise knowledge of your skill level, we can find how much money you can expect to make from each bet.

The first thing to notice is the obvious: as our skill improves, the amount of money we can expect to win increases. Second, we see that there is some point (the “equivalence point”) at which the bets are equal; we compute this numerically to be p_0 = 0.6658. We should choose bet (A) if our skill level is worse than 0.6658, and bet (B) if it is greater than 0.6658.
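As a sanity check, we can reproduce these curves and the equivalence point in a few lines of Python. This is a minimal sketch, assuming (as I do throughout) that "making 2/3" means making at least 2 of 3 shots, and likewise at least 5 of 8 for bet (B):

```python
from scipy.optimize import brentq
from scipy.stats import binom

def ev_A(p):
    # $100 times the probability of making at least 2 of 3 shots
    return 100 * binom.sf(1, 3, p)

def ev_B(p):
    # $100 times the probability of making at least 5 of 8 shots
    return 100 * binom.sf(4, 8, p)

# The curves also meet at p_0 = 0 and p_0 = 1, so bracket the interior crossing
p_eq = brentq(lambda p: ev_A(p) - ev_B(p), 0.5, 0.9)
print(p_eq)  # ~0.6658
```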

But suppose our guess is poor. Notice that the consequence of guessing too high is greater than the consequence of guessing too low: the potential gains from choosing (A) over (B) are larger than the potential gains from choosing (B) over (A). It is therefore better to bias your choice towards (A) unless you obtain substantial evidence that you have a high skill level and (B) would be the better choice.

Finding a tight upper bound

Quantifying this intuition, we compute the maximal possible gain of choosing (A) over (B) and (B) over (A) as the maximum distance between the curves on each side of the equivalence point. In other words, we find the skill level at which the incentive is strongest to choose one bet over the other, and then find what the incentive is at these points.

We see here the locations where the distance between the curves is greatest, showing the skill levels where it is most advantageous to choose (A) or (B).

This turns out to be $4.79 for choosing (B) over (A), and $17.92 for choosing (A) over (B). Since we default to (A) in the absence of evidence, the most that practice shots could ever gain us is the $4.79 advantage of switching to (B). At $0.01 per shot, it is therefore never a good idea to take more than 479 practice shots, giving our upper bound B=479.
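Both maxima can be found with a simple grid search over skill levels (a sketch, using the same at-least-k reading of the bets as above):

```python
import numpy as np
from scipy.stats import binom

p = np.linspace(0, 1, 100001)
gap = 100 * binom.sf(1, 3, p) - 100 * binom.sf(4, 8, p)  # EV(A) - EV(B) in dollars
print(gap.max())     # ~$17.92, at a skill level around p_0 = 0.4
print((-gap).max())  # ~$4.79, at a skill level around p_0 = 0.8
```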

Determining the optimal choice at the upper bound

Because we will never take more than 479 shots, we use this as a cutoff point and force a decision once 479 shots have been taken. So for each possible combination of successes and failures, we must find whether bet (A) or bet (B) is better.

In order to determine this, we need two pieces of information: first, the expected value of bets (A) and (B) given p_0 (i.e. the curves shown above); second, the distribution representing our best estimate of p_0. Remember, it is not enough to simply choose (A) when our predicted skill is less than 0.6658 and (B) when it is greater; since the costs of guessing too high and too low are asymmetric, we need a full probability distribution over the potential values of p_0. Then, we can find the expected value of each bet given the distribution of p_0 (see appendix for more details). This can be computed with a simple integral, and is easy to approximate numerically.
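For concreteness, here is one way that integral might be approximated, using the Beta posterior derived in the appendix (a sketch; the grid resolution is arbitrary):

```python
import numpy as np
from scipy.stats import beta, binom

def expected_bet_values(k, n):
    """Expected payoff of each bet after making k of n practice shots,
    averaging the payoff curves over the Beta(k+1, n-k+1) posterior for p_0."""
    p = np.linspace(0, 1, 10001)
    posterior = beta.pdf(p, k + 1, n - k + 1)
    ev_A = 100 * binom.sf(1, 3, p)  # P(at least 2 of 3) * $100
    ev_B = 100 * binom.sf(4, 8, p)  # P(at least 5 of 8) * $100
    # trapezoidal approximation of the integral over p_0
    return np.trapz(ev_A * posterior, p), np.trapz(ev_B * posterior, p)

# For instance, after making 320 of 479 shots:
print(expected_bet_values(320, 479))
```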

Once we have performed these computations, in addition to having information about whether (A) or (B) was chosen, we also know the expected value of the chosen bet. This will be critical for determining whether it is beneficial to take more shots before we have reached the upper bound.

Determining the optimal choice below the upper bound

We then go down one level: if 478 shots have been taken, with k successes and (478-k) failures, should we choose (A), should we choose (B), or should we take another shot? Remember, we would like to select the choice which will give us the highest expected outcome.

Based on this principle, it is only advisable to take another shot if it would influence the outcome; in other words, if you would choose the same bet no matter what the outcome of your next shot, it does not make sense to take another shot, because you lose $0.01 without gaining any information. It only makes sense to take the shot if the information gained from taking the shot increases the expected value by more than $0.01.

We can compute this by finding the expected value of each of the three options (choose (A), choose (B), or shoot again). Using our previous shots to estimate the probability of making the next one (see appendix), we can find the expected payoff of taking another shot. If it is greater than the expected payoff of choosing (A) or (B) immediately, we take the shot.

Working backwards, we continue until we are on our first shot, where we assume we have a 50% chance of succeeding. Once we reach this point, we have a full decision tree, indicating which action we should take based on the outcome of each shot, and the entire decision process can be considered solved.
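Here is a minimal, unoptimized sketch of the full backward induction, using the Beta posterior and the (k+1)/(n+2) shot-probability estimate derived in the appendix:

```python
import numpy as np
from scipy.stats import beta, binom

B, COST = 479, 0.01
p = np.linspace(0, 1, 2001)
curve_A = 100 * binom.sf(1, 3, p)  # payoff curve of bet (A)
curve_B = 100 * binom.sf(4, 8, p)  # payoff curve of bet (B)

def bet_values(k, n):
    """Expected payoff of each bet under the Beta(k+1, n-k+1) posterior."""
    posterior = beta.pdf(p, k + 1, n - k + 1)
    return np.trapz(curve_A * posterior, p), np.trapz(curve_B * posterior, p)

# value[n][k]: expected future earnings with n shots taken and k made;
# action[n][k]: the optimal choice at that state ('A', 'B', or 'shoot')
value = [np.zeros(n + 1) for n in range(B + 1)]
action = [[None] * (n + 1) for n in range(B + 1)]

for n in range(B, -1, -1):  # work backwards from the upper bound
    for k in range(n + 1):
        best, act = max(zip(bet_values(k, n), 'AB'))
        if n < B:
            p_next = (k + 1) / (n + 2)  # posterior mean chance of making the next shot
            v_shoot = (-COST + p_next * value[n + 1][k + 1]
                       + (1 - p_next) * value[n + 1][k])
            if v_shoot > best:
                best, act = v_shoot, 'shoot'
        value[n][k], action[n][k] = best, act

print(action[0][0], value[0][0])  # optimal first move and expected winnings
```

This recomputes the bet values from scratch at every state, so it takes a minute or two to run; vectorizing `bet_values` over k would speed it up considerably.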

Conclusion

Here is the decision tree, plotted in raster form.

Starting at the point (0,0), go one to the right for every shot that you take, and one up for every shot that you make. Red indicates you should shoot again, blue indicates you should choose (A), and green indicates you should choose (B).

Looking more closely at the beginning, we see that unless you are really good, you should choose (A) rather quickly.

An identical plot to that above, but zoomed in near the beginning.

We can also look at the amount of money you will win on average if you play by this strategy. As expected, the more shots you have made, the more money you can expect to win.

For each state in the previous figures, the expected winnings under the optimal strategy.

We can also look at the zoomed in version.

An identical plot to the one above, but zoomed in near the beginning.

This algorithm grows in memory and computation time like O(B^2), meaning that if we double the size of the upper bound, we quadruple the amount of memory and CPU time we require.

This may not be the best strategy, but it seems to be a principled strategy which works reasonably well with a relatively small runtime.

Appendix: Determining the distribution of p_0

In order to find the distribution for p_0, we first consider what a single shot tells us. The chance that we make a shot is 100% if p_0=1, 0% if p_0=0, 50% if p_0=0.5, and so on. Thus, a single successful trial gives a distribution over p_0 proportional to f(p)=p for 0 ≤ p ≤ 1. Similarly, if we miss the shot, the distribution is proportional to f(p)=(1-p) for 0 ≤ p ≤ 1. Since shots are independent, we can multiply these likelihoods together and find that, for n shots with k successes and (n-k) failures, we have f(p)=p^k (1-p)^{n-k}/c for some normalizing constant c. It turns out that this is exactly the beta distribution, with parameters α=k+1 and β=n-k+1.

However, we need a point estimate of p_0 to compute the expected value of taking another shot. We cannot simply use the ratio k/n for two practical reasons: first, it is undefined when no shots have been taken, and second, after a single shot it assigns a 100% probability to one outcome and a 0% probability to the other. If we want to assume a 50% probability of making the first shot, an easy way to solve this problem is to use the ratio (k+1)/(n+2) instead of k/n to estimate the probability. Interestingly, this quick and dirty solution is equivalent to taking the mean of the beta distribution. When no shots have been taken, k=0 and n=0, so α=1 and β=1, which is the uniform distribution, hence our non-informative prior.
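As a quick check that the shortcut and the beta-distribution mean agree, using the example state from earlier (31 shots, 9 made):

```python
from scipy.stats import beta

k, n = 9, 31  # the example state from earlier: 31 shots, 9 made
print(beta.mean(k + 1, n - k + 1))  # 0.3030... (mean of Beta(10, 23))
print((k + 1) / (n + 2))            # identical: 10/33
```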

Code/Data:


There is a wonderful collection of puzzles by Simon Tatham called the Portable Puzzle Collection which serves as a fun distraction. The game “Towers” is a simple puzzle where you must fill in a Latin square with the numbers 1 \ldots N, using each exactly once per row and column, as if the squares contained towers of these heights. The numbers of towers visible from the edges of rows and columns are given as clues, where taller towers hide any shorter towers behind them. For example,

An example starting board from the Towers game.

Solved, the board would appear as,

The previous example solved.

In more advanced levels, not all of the hints are given. Additionally, in these levels, hints can also be given in the form of the value of particular cells. For example, the initial conditions of the puzzle may be,

A more difficult example board.

These different types of hints raise the question: are some hints better than others?

How will we approach the problem?

We will use an information-theoretic framework to understand how useful different hints are. This allows us to measure the amount of information that a particular hint gives about the solution to a puzzle in bits, a nearly-identical unit to that used by computers to measure file and memory size.

Information theory is based on the idea that random variables (quantities which can take on one of many values probabilistically) are not always independent, so knowledge of the value of one random variable can change the probabilities for another. For instance, one random variable may be a number from 1-10, and a second random variable may be whether that number is even or odd. A bit is the amount of information given by the answer to the best possible yes-or-no question, or (roughly speaking) information that cuts the number of equally likely outcomes in half. Knowing whether the number is even or odd gives us one bit of information, since it specifies that the first random variable can only be one of five numbers instead of one of ten.

Here, we will define a few random variables. Most importantly, we will have the random variable describing the correct solution of the board, which could be any possible board. We will also have random variables which represent hints. There are two types of hints: initial cell value hints (where one of the cells is already filled in) and tower visibility hints (which define how many towers are visible down a row or column).

The number of potential Latin squares of size N grows very fast. For a 5×5 square, there are 161,280 possibilities, and for a 10×10, there are over 10^{47}. Thus, for computational simplicity, we analyze a 4×4 puzzle with a mere 576 possibilities.
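That count is small enough to verify by brute force (a sketch):

```python
from itertools import permutations, product

N = 4
perms = list(permutations(range(1, N + 1)))  # all 24 possible rows

def is_latin(board):
    # rows are permutations by construction; check that every column is too
    return all(len(set(col)) == N for col in zip(*board))

squares = [b for b in product(perms, repeat=N) if is_latin(b)]
print(len(squares))  # 576
```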

How useful are “initial cell value” hints?

First, we measure the entropy, or the maximal information content, that a single cell will give. For the first cell chosen, there is an equal probability that any of the values (1, 2, 3, or 4) will be found in that cell. Since there are four equally likely options, this gives us \log_2 4 = 2 bits of information.

What about the second initial cell value? Interestingly, it depends both on the location and on the value. If the second clue is in the same row or column as the first, it will give less information. If it is the same number as the first, it will also give less information.

Counter-intuitively, in the 4×4 board, this means we can gain more than 2 bits of information from the second hint. This is because, once we reveal the first cell’s value, the possible values of the other cells are no longer equally probable. For a cell not in the same row or column as our first choice, it is more likely to equal the first cell’s value than any other value. Therefore, if we reveal a value which is different, it provides more information.

Intuitively, for the 4×4 board, suppose we reveal the value of a cell and it is 4. There cannot be another 4 in the same column or row, so if we choose a second hint from a different column and row, we are effectively choosing from the remaining 3×3 grid. That grid must contain three 4s, so the probability of selecting a 4 is 1/3. We have an even probability of selecting a 1, 2, or 3, so each of the other values has a probability of 2/9. Being more surprising finds, each of these three values gives us 2.17 bits of information.

Consequently, selecting a cell in the same row or column, or one which has the same value as the first, will give an additional 1.58 bits of information.
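We can verify these numbers by conditioning over all 576 boards. This sketch assumes the first hint revealed a 4 in the top-left corner and the second hint is a cell in a different row and column, here (1,1):

```python
import math
from itertools import permutations, product

N = 4
perms = list(permutations(range(1, N + 1)))
squares = [b for b in product(perms, repeat=N)
           if all(len(set(col)) == N for col in zip(*b))]

# First hint: the top-left cell is a 4; second hint: the cell at (1, 1)
remaining = [b for b in squares if b[0][0] == 4]
for v in range(1, N + 1):
    p = sum(b[1][1] == v for b in remaining) / len(remaining)
    print(v, round(p, 3), round(-math.log2(p), 2))
# values 1-3 each have probability 2/9 (2.17 bits); 4 has probability 1/3 (1.58 bits)
```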

How about “tower visibility” hints?

In a 4×4 puzzle, it is very easy to compute the information gained if the hint is a 1 or a 4. If only one tower can be seen, the tallest tower must come first. So a hint of 1 tells us that the cell on the edge next to the hint must be a 4, and gives no information about the rest of the squares; it gives the same amount of information as a single revealed cell, namely 2 bits.

Additionally, we know that if the hint is equal to 4, the only possible combination for the row is 1, 2, 3, 4. Thus, this gives an amount of information equal to the entropy of a single row, which turns out to be 4.58 bits.

For a hint of 2 or 3, the information content is not as immediately clear, but we can calculate them numerically. For a hint of 2, we have 1.13 bits, and for a hint of 3, we have 2 bits.
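These values follow from counting how many of the 24 possible rows are consistent with each hint (a sketch; as discussed below, the information gained about a row equals the information gained about the whole puzzle):

```python
import math
from itertools import permutations

def visible(row):
    # a tower is visible if it is taller than everything in front of it
    count, tallest = 0, 0
    for height in row:
        if height > tallest:
            count, tallest = count + 1, height
    return count

N = 4
perms = list(permutations(range(1, N + 1)))
for hint in range(1, N + 1):
    consistent = sum(visible(p) == hint for p in perms)
    print(hint, round(-math.log2(consistent / len(perms)), 2))
# 1 -> 2.0 bits, 2 -> 1.13 bits, 3 -> 2.0 bits, 4 -> 4.58 bits
```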

Conveniently, because the reduction of entropy in a single row must equal the reduction of entropy in the entire puzzle, we can compute these values for larger boards. Below, we show the information gained about the solution from each possible hint (indicated by the color). In general, it seems that higher hints are usually better, but a hint of 1 is generally better than a 2 or a 3.

For each board size, the information content of each potential hint is plotted.

Conclusion

In summary:

  • The more information given by a hint for a puzzle, the easier that hint makes it to solve the puzzle.
  • Of the two types of hints, usually the hints about the tower visibility are best.
  • On small boards (of size less than 5), hints about individual cells are very useful.
  • In general, the more towers visible from a row or column, the more information is gained about the puzzle from that hint (with the exception that a hint of 1 beats a 2 or a 3).

Of course, remember that all of the hints of a given puzzle combined must be sufficient to completely solve it (assuming the puzzle is solvable), so together the hints must provide information equal to the entropy of a puzzle of that size. And as we saw with the “initial cell value” hints, hints may become more or less effective in combination, so these information values cannot simply be added to determine which set of hints provides the most information. Nevertheless, this serves as a good starting point for determining which hints are the most useful.

More information

Theoretical note

For initial cell hints, it is possible to compute the information content analytically for a board of any size. For an N×N board with N symbols, the information contained in the first hint is -\log(1/N) bits. Suppose this hint reveals the value X. Now construct a sub-board where the row and column of the first hint are removed, leaving an (N-1)×(N-1) board. If we choose a cell from this sub-board, it has a 1/(N-1) probability of being X, and equal chances of being each of the other values, giving a probability of \frac{N-2}{(N-1)^2} for each of them. Thus, the information gained from the second hint is -\log\left(\frac{N-2}{(N-1)^2}\right) bits if its value differs from the first, and -\log\left(\frac{1}{N-1}\right) bits if they are the same; these expressions are approximately equal for large N. Note how no information is gained from the second square when N=2.

Similarly, when a single row is revealed (for example, by knowing that N towers are visible from the end of that row), the entropy must be reduced by -\sum_{i=1}^{N} \log(1/i) = \log(N!) bits. This is because the first element revealed in the row gives -\log(1/N) bits, the second gives -\log(1/(N-1)) bits, and so on.

Solving a puzzle algorithmically

Most of these puzzles are solvable without backtracking, i.e. the next move can always be logically deduced from the state of the board without trial and error. It turned out to be surprisingly simple to write a quick and dirty algorithm that solves the puzzles by incorporating the information from each hint into row and column states and then propagating this information across rows and columns. This algorithm, while probably not of optimal computational complexity, works reasonably well. Briefly,

  1. Represent the initial state of the board by a length-N list of lists, where each of the N lists represents a row of the board, and each sub-list contains all of the possible combinations of this row (there are N! of them to start). Similarly, define an equivalent (yet redundant) data structure for the columns.
  2. Enforce each condition on the start of the board by eliminating the impossible combinations using the number of towers visible from each row and column, and using the cells given at initialization. Update the row and column lists accordingly.
  3. Now, the possible moves for certain squares will be restricted by the row and column limitations; for instance, if only 1 tower is visible in a row or column, the tallest tower in the row or column must be on the edge of the board. Iterate through the cells, restricting the potential rows by the limitations on the column and vice versa. For example, if we know the position of the tallest tower in a particular column, eliminate the corresponding rows which do not have the tallest tower in this position in the row.
  4. After sufficient iterations of (3), there should only be one possible ordering for each row (assuming it is solvable without backtracking). The puzzle is now solved.

This is not a very efficient algorithm, but it is fast enough and memory-efficient enough for all puzzles which might be fun for a human to solve. This algorithm also does not work with puzzles which require backtracking, but could be easily modified to do so.
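Here is a minimal sketch of this algorithm in Python; the clue layout and function names are my own for illustration, with `None` marking an absent visibility clue:

```python
from itertools import permutations

def visible(seq):
    """Number of towers visible from the front of seq (taller hides shorter)."""
    count, tallest = 0, 0
    for h in seq:
        if h > tallest:
            count, tallest = count + 1, h
    return count

def solve(n, top, bottom, left, right, givens):
    """top/bottom/left/right: per-column/per-row visibility clues (None if absent);
    givens: dict mapping (row, col) to a known cell value."""
    perms = list(permutations(range(1, n + 1)))
    # Steps 1-2: candidate orderings per row and per column, filtered by the clues
    rows = [[p for p in perms
             if (left[r] is None or visible(p) == left[r])
             and (right[r] is None or visible(p[::-1]) == right[r])
             and all(p[c] == v for (rr, c), v in givens.items() if rr == r)]
            for r in range(n)]
    cols = [[p for p in perms
             if (top[c] is None or visible(p) == top[c])
             and (bottom[c] is None or visible(p[::-1]) == bottom[c])
             and all(p[r] == v for (r, cc), v in givens.items() if cc == c)]
            for c in range(n)]
    # Step 3: propagate constraints between rows and columns until a fixed point
    changed = True
    while changed:
        changed = False
        for r in range(n):
            for c in range(n):
                allowed = {p[c] for p in rows[r]} & {p[r] for p in cols[c]}
                new_r = [p for p in rows[r] if p[c] in allowed]
                new_c = [p for p in cols[c] if p[r] in allowed]
                if len(new_r) < len(rows[r]) or len(new_c) < len(cols[c]):
                    rows[r], cols[c], changed = new_r, new_c, True
    # Step 4: solved iff exactly one candidate ordering remains for each row
    return tuple(r[0] for r in rows) if all(len(r) == 1 for r in rows) else None

# A fully-clued 4x4 example (clues read left-to-right / top-to-bottom):
print(solve(4, top=[4, 3, 2, 1], bottom=[1, 2, 2, 2],
            left=[4, 3, 2, 1], right=[1, 2, 2, 2], givens={}))
# -> ((1, 2, 3, 4), (2, 3, 4, 1), (3, 4, 1, 2), (4, 1, 2, 3))
```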

Code/Data:


Anyone who has taken the bus has at one time or another wondered, “When should I plan to be at the bus stop?” or more importantly, “When should I leave if I want to catch the bus?” Many bus companies suggest arriving a few minutes early, but there seem to be no good analyses on when to leave for the bus. I decided to find out.

Finding a cost function

Suppose we have a bus route where a bus runs every I minutes, so if you don’t catch your bus, you can always wait for the next one. However, since more than just your time is at stake when you miss the bus (e.g. missed meetings, stress, etc.), we assume there is a penalty \delta for missing the bus in addition to the extra wait time. \delta here is measured in minutes, i.e. how many minutes of your time you would exchange to be guaranteed not to miss the bus. \delta=0 therefore means that you have no reason to prefer one bus over another, and that you only care about minimizing your lifetime bus wait time.

Assuming we will not be late enough to need to catch the third bus, we can model this with two terms, representing the cost to you (in minutes) of catching each of the next two buses, weighted by the probability that you will catch that bus:

C(t) = \left(E(T_B) - t\right) P\left(T_B > t + L_W\right) + \left(I + E(T_B) - t + \delta\right) P(T_B < t + L_W)

where T_B is the random variable representing the time at which the bus arrives, L_W is the random variable representing the amount of time it takes to walk to the bus stop, and t is the time you leave. (E is the expected value and P is the probability.) We wish to choose a time to leave the office, t, which minimizes the cost function C.

If we assume that T_B and L_W are Gaussian, then it can be shown that the optimal time to leave (which minimizes the above function) is

t = -\mu_W - \sqrt{\left(\sigma_B^2 + \sigma_W^2\right)\left(2\ln\left(\frac{I+\delta}{\sqrt{\sigma_B^2+\sigma_W^2}}\right)-2\ln\left(\sqrt{2\pi}\right)\right)}

where \sigma_B^2 is the variance of the bus arrival time, \sigma_W^2 is the variance of your walk, and \mu_W is expected duration of your walk. In other words, you should plan to arrive at the bus stop on average \sqrt{\left(\sigma_B^2 + \sigma_W^2\right)\left(2\ln\left(\left(I+\delta\right)/\sqrt{\sigma_B^2+\sigma_W^2}\right)-2\ln\left(\sqrt{2\pi}\right)\right)} minutes before your bus arrives.
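To make this concrete, here is a sketch that checks the closed form against a direct numerical minimization of C(t), measuring time relative to the bus’s expected arrival so that E(T_B)=0 (the parameter values are only illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def cost(t, mu_W, sig2_B, sig2_W, I, delta):
    """C(t) from above, with time measured so that E[T_B] = 0."""
    # You miss the bus iff T_B < t + L_W, i.e. T_B - L_W < t,
    # where T_B - L_W ~ Normal(-mu_W, sig2_B + sig2_W)
    p_miss = norm.cdf(t, loc=-mu_W, scale=np.sqrt(sig2_B + sig2_W))
    return -t * (1 - p_miss) + (I - t + delta) * p_miss

def t_optimal(mu_W, sig2_B, sig2_W, I, delta):
    """The closed-form optimum above."""
    s2 = sig2_B + sig2_W
    return -mu_W - np.sqrt(s2 * (2 * np.log((I + delta) / np.sqrt(s2))
                                 - 2 * np.log(np.sqrt(2 * np.pi))))

# Commute home (delta = 0), a steady 5-minute walk (mu_W = 5, sig2_W = 0),
# and Yale-like bus numbers (sig2_B = 5.7, I = 15)
args = (5, 5.7, 0, 15, 0)
print(t_optimal(*args))  # ~-8.2: leave 8.2 min early, arriving ~3.2 min before the bus
print(minimize_scalar(cost, bounds=(-60, 30), args=args, method='bounded').x)
```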

Note that one deliberate oddity of the model is that the cost function does not just measure wait time, but also walking time. I optimized over this total because, in the end, what matters is the total time you spend catching the bus.

What does this mean?

The most important factors in deciding when to leave are the variability in the bus’s arrival time and the variability in the time it takes you to walk to the bus stop. The time you should budget scales approximately linearly with the combined standard deviation of these two.

Additionally, it scales approximately as the square root of the logarithm of (I+\delta), i.e. of your value of time plus the interval between buses. So even a very high value of time or very infrequent buses do not substantially change the time at which you should arrive. For approximation purposes, you might consider replacing this term with a constant, anywhere from 2-5 minutes depending on the frequency of the bus.

Checking the assumption

First, we need to collect some data to assess whether the bus arrival time (T_B) is normally distributed. I wrote scripts to scrape data from Yale University’s Blue Line campus shuttle route. Many bus systems (including Yale’s) now have real-time predictions, so I used many individual predictions from Yale’s real-time arrival prediction system as the expected arrival time, simulating somebody checking it to see when the next bus comes.

For our purposes, the expected arrival time looks close enough to a Gaussian distribution:

It actually looks like a Gaussian!

So what time should I leave?

When estimating the \sigma_B^2 parameter, we only examine bus times which are 10 minutes away or later. This is because a real-time bus system can’t help you plan ahead if the bus is about to arrive, which would defeat the purpose of the present analysis. The variance in arrival time for the Yale buses is \sigma_B^2=5.7.

We use an inter-bus interval of I=15 minutes.

While the variability of the walk to the bus stop \sigma_W^2 is unique to each person, I consider two cases: one where the walk’s variability is negligible (\sigma_W^2=0) compared to the bus’s variability, representing a bus stop located (for instance) right outside one’s office building; and one where the walk’s variability is comparable to that of the bus (\sigma_W^2=5), representing a long walk to the bus stop.

Finally, I consider the case where we strongly prioritize catching the desired bus (\delta=60 corresponding to, e.g., an important meeting) and also the case where we seek to directly minimize the expected wait time (\delta=0 corresponding to, e.g., the commute home).

Even though the shape of the optimization function changes greatly, the optimal arrival time changes very little.

We can also look at a spectrum of different cost tradeoffs for missing the bus (values of \delta) and variance in the walk time (values of \sigma_W^2 = var(W)). Because they appear similarly in the equations, we can also consider these values to be changes in the interval of the bus arrival I and the variance of the bus’ arrival time \sigma_B^2=var(B) respectively.

Across all reasonable values, the optimal time to plan to arrive is between 3.5 and 8 minutes early.

Conclusion

So to summarize:

  • If it always takes you approximately the same amount of time to walk to the bus stop, plan to be there 3-4 minutes early on your commute home, or 5-6 minutes early if it’s the last bus before an important meeting.
  • If you have a long walk to the bus stop which can vary in duration, plan to arrive at the bus stop 4-5 minutes early if you take the bus every day, or around 7-8 minutes early if you need to be somewhere for a special event.
  • These estimations assume that you know how long it takes you on average to walk to the bus stop. As we saw previously, if you need to be somewhere at a certain time, arriving a minute early is much better than arriving a minute late. If you don’t need to be somewhere, just make your best guess.
  • The best way to reduce waiting time is to decrease variability.
  • These estimates also assume that the interval between buses is sufficiently large. If it is small, as in the case of a subway, there are different factors that govern the time you spend waiting.
  • This analysis focuses on buses with an expected arrival time, not with a scheduled arrival time. When buses have schedules, they will usually wait at the stop if they arrive early. This situation would require a different analysis than what was performed here.

Code/Data: