How Far Would You Go To Rig A Coin Flip?
Category : The Riddler
Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. There are two types: Riddler Express for those of you who want something bite-size and Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either, and you may get a shoutout in next week’s column. If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter.
From Justin Bernstein, when all you have is a coin, all you can do is flip:
Coin flips are a handy way to determine a winner — if the goal is to give two people an equal chance of winning. But sometimes it’s not. Suppose Anna and Barry aren’t interested in equity. All they have is a fair coin marked heads and tails. How can they devise a game that gives Anna a 1 in 3 chance of winning? What about a 1 in 4 chance? What about a 1 in 5 chance?
Submit your answer
From Jim Crimmins, a puzzle that may or may not be topical. There’s something happening next week, I think — I just can’t seem to remember what it is …
You are the elections analyst for Riddler Nation’s wildly successful data-driven political blog, OneHundred. The annual election is coming up, and all 100 Riddler Nation Senate seats are up for grabs. There are two parties in Riddler Nation: the Theorists and the Programmers.
Elections in Riddler Nation work a bit differently than those in other countries. Here, a virtual coin is flipped for each seat — the Programmers win on heads, while the Theorists win on tails. Remarkably, the founders of Riddler Nation allowed for a political party to write the program that does the flipping. (Conflicts of interest were not high on the founders’ list of things to worry about.) Although the Programmers do the programming of the virtual coin, the Theorists do have one recourse — they have the right to challenge the results if they think that something is fishy.
The Theorists can’t code and are generally suspicious of the Programmers. So they collect complete data (the flip results) for each seat after every election. Election data and challenges go to the Riddler Nation Statistical Court, composed of learned and honest statisticians. The Theorists’ challenges have no statute of limitations — a 100-year-old election can be part of their case, if need be.
Here are your questions, elections analyst:
If the Programmers don’t cheat and the virtual coin is fair, what is the probability that one party will win a simple majority in the Senate? What is the probability that one party will win a supermajority of 60 seats?
If the Programmers decide to cheat this year by weighting the virtual coin in their favor, what weighting would give them a 50 percent chance of winning a 60-seat supermajority? How often can they do this periodic, annual 50-percent weighting before the Theorists can prove to the Statistical Court that there’s at least a 99 percent chance that the coin wasn’t fair?
If the Programmers decide to cheat by weighting the coin permanently for the next 100 elections, how heavily can they weight it and escape a successful 99 percent challenge by the Theorists and detection by the court? How many 60-seat supermajorities can they expect to win over this 100-year period?
What is the optimal cheating strategy for the Programmers to perpetually escape a successful 99 percent statistical challenge and maximize the number of expected supermajorities? (Assume that if a successful challenge is made, the Programmer party is banished from Riddler Nation.)
Does changing the coin weighting on a seat-by-seat basis in an individual election help improve the Programmers’ odds in any of these scenarios?
Submit your answer
Solution to last week’s Riddler Express
Congratulations to Jon and Lisa of Columbus, Ohio, (and their trap “Boom!”). They’re the new Chief-Arch-Super-Villain Engineers Extraordinaires of Riddler Nation!
Last week brought a participatory, game-theoretic and cinematic engineering game. You and your two roommates were villainous engineers with a specialty in engineering traps to catch James Bond. You each designed and named your own trap, specifically choosing its effectiveness — the percentage of the time the trap would capture Bond if he tried to get past it. Your three traps were then placed in a lair — in increasing order of effectiveness — and your goal was to be the one whose trap captured Bond. So, while a 99 percent effective trap might’ve sounded great, it was also likely to be placed last in line, and Bond might have already been captured by the time he got to it. So the engineering challenge was really a trade-off between getting to Bond as early as possible and having a better chance at nabbing 007.
I received 1,211 trap submissions over the past weekend. I then simulated 1 million groups of villainous roommate trios, properly ordering the three traps in each lair and keeping track of whose trap was the one that captured Bond as he entered and tried to get safely through the lair. Here’s how those trap submissions broke down:
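For those who want to tinker, here is a minimal sketch of that sort of simulation, assuming the rules as described: traps fire in increasing order of effectiveness, and the first trap that fires gets credit for the capture. The function name and the example effectiveness values are my own illustration, not the column's actual code.

```python
import random

def simulate_lair(traps, trials=100_000, seed=0):
    """Estimate each villain's share of captured Bonds for one lair.

    traps: effectiveness values in [0, 1], one per villain.
    Traps are placed in increasing order of effectiveness, and the
    first trap that fires captures Bond.
    """
    rng = random.Random(seed)
    order = sorted(range(len(traps)), key=lambda i: traps[i])
    captures = [0] * len(traps)
    for _ in range(trials):
        for i in order:                   # Bond walks past each trap in turn
            if rng.random() < traps[i]:   # trap i fires
                captures[i] += 1
                break                     # Bond is caught; he goes no further
    return [c / trials for c in captures]

# A hypothetical lair: a 46 percent trap goes first, then 50, then 100.
print(simulate_lair([0.46, 0.50, 1.00]))
```

Analytically, with ordered effectiveness p1 ≤ p2 ≤ p3, the three owners capture Bond with probability p1, (1 − p1)p2 and (1 − p1)(1 − p2)p3, which is why a trap sitting just under the crowd at 50 percent does so well: it gets first crack at Bond while still firing nearly half the time.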
Very roughly speaking, your submissions were normally distributed and centered around 50 percent. But a few round values of effectiveness were especially popular, which you can see in the tall spikes in the distribution above: one-third, one-half, two-thirds and 100 percent.
But that’s just trivia. We have debonair spies to catch. The magic number to do that appeared to be somewhere between 45 percent and 47 percent. Here are the top 10 trappers and their traps:
The top 10 James Bond trappers
Based on 1 million simulations using 1,211 user submissions
[Table: each trapper's name, hometown, trap name and average Bonds trapped per lair. Entries included "The Goldenface 3000," "Mayor Humdinger's Kittens," "Santa's Ultimate Prohibitively Expensive Raptor-Balrog-Orc-Naga-Dragon-Yeti-Impossibly-Evil-Lich Dropper," "Super Dull Somewhat Vaguely Effective Net" and "C's First Feigenbaum Constant * 10 Trap," with hometowns including Brooklyn, New York, and Morristown, New Jersey.]
Brooklyn in the house! And Ohio, too. And Denmark and Finland. Our Ohioan winners explained their trap decision this way: They wanted to “beat 50 percent and really low [submissions], and break even with [those who submitted] 100 percent.”
Indeed, beating the many traps clustered around 50 percent seems to have been the key to successful super-villainous engineering. Traps around 46 percent, for example, were unambitious enough to come before many other traps in the lair’s trap queue, but effective enough to capture Bond pretty often when he passed by.
How do the best levels of effectiveness translate into Bonds-caught-per-lair? If all traps were calibrated to be perfectly effective, every villainous roommate would capture an average of one-third (0.333…) of a James Bond per lair — it’d just be a lottery as to whose net did the capturing.
That number can help us set a baseline for spy-capturing performance. The best-calibrated traps in this game were able to do even better (although not dramatically better) than that, with our winners nabbing about 0.35 of a Bond — or more than their “fair” share — per lair. Here is how a trap’s effectiveness translated into Bonds caught:
All 1,211 of these traps were constructed with the goal of being the one to nab Bond. But this villainous selfishness is very beneficial for Bond himself, of course. If you and your villain roommates were working together, laying down three maximally effective traps, Bond would never escape the lair. But because each of you wanted your own trap, not your roommates', to be the decisive one, and strategized accordingly, Bond escaped your collective lairs unscathed about 11 percent of the time.
Solution to last week’s Riddler Classic
Congratulations to Linus Hamilton of Hyattsville, Maryland, winner of last week’s Riddler Classic!
Last week, you were an avid card player who had developed a very specific organizational tic at the table. Every time you were dealt a hand of cards, you wanted to sort it in a specific way using a single motion. By moving just one block of cards from one position in your randomly dealt hand to another, you wanted your hand organized such that the cards of each suit were all together and that, if possible, no suits of the same color were adjacent to each other. If you’re dealt a hand of six cards, what are the chances that you’ll be actually able to realize your organizational obsession? What if you’re dealt a hand of N cards?
With six cards, the chances are around 61 percent. As the number of cards increases, your chances decrease dramatically.
Given the enormous number of ways hands of cards can be dealt and the gnarly combinatorial math that results, this is a natural programming problem. Our winner, Linus, explained his thinking this way: “It would take forever to iterate through all possible hands of six cards (there are over 14 billion). But it’s fast to iterate through all 4,096 sequences of six suits, compute the probability of receiving that sequence, and then check whether it can be well-scrambled with one move. I used a Python program to do this.”
Solvers Matt Wade and Keith Hudson also programmed their way to 61 percent. Matt was kind enough to share his work and handy interactive animation, and Keith shared his code and explained the intuition behind his programmatic approach like so:
Step 1: Find all the different ways you could permute a hand of cards by grabbing a contiguous block of them and moving them somewhere else in the hand. For six cards, there are 36 ways; for 13 cards, there are 365.
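Those counts are easy to check by brute force. Note that they include the do-nothing move, since a block can be put back where it started: the counts of genuinely new arrangements are 35 and 364. A quick sketch, with a function name of my own choosing:

```python
def block_move_results(n):
    """Count the distinct arrangements reachable from a sorted hand of n
    cards by removing one contiguous block and reinserting it somewhere
    (the do-nothing move is included)."""
    hand = tuple(range(n))
    seen = set()
    for i in range(n):
        for j in range(i + 1, n + 1):          # grab block hand[i:j] ...
            rest, block = hand[:i] + hand[j:], hand[i:j]
            for k in range(len(rest) + 1):     # ... and reinsert it anywhere
                seen.add(rest[:k] + block + rest[k:])
    return len(seen)

print(block_move_results(6), block_move_results(13))   # 36 365
```

These match the closed form C(n+1, 3) + 1, which counts the ways to swap two adjacent blocks plus the identity.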
Step 2: Figure out how to recognize an acceptable hand. I defined a hand’s “fingerprint” as the order of suited groups — for example, a hand with spade, club, diamond, diamond, club and heart in that order would have the fingerprint “scdch.” I figured out an acceptable hand would have one of the following 32 fingerprints: s, h, d, c, sh, sd, sc, hs, hd, hc, ds, dh, dc, cs, ch, cd, shc, sdc, hsd, hcd, dsh, dch, chs, cds, shcd, sdch, hsdc, hcds, dshc, dchs, chsd or cdsh.
Step 3: See if some rearrangement of a hand results in a good fingerprint. For example, for the hand mentioned above — “scddch” — taking the third through fifth cards and moving them after the first card gives one of the good fingerprints: sdch. This step is essentially just looking at each possible rearrangement and seeing if it’s in the list of “good” fingerprints.
Step 4: Figure out how likely it is for a specific arrangement of cards to appear in a deal from a real deck.
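Putting those four steps together, here is a minimal sketch of the whole computation in Python. The function names are my own, and the `good_fingerprint` test simply encodes the 32 acceptable fingerprints listed above: every suit forms a single run, and with three or more suits no two adjacent runs share a color (with exactly two suits, same-color adjacency may be unavoidable, so it is allowed).

```python
from fractions import Fraction
from itertools import product

SUITS = "shdc"
COLOR = {"s": "black", "c": "black", "h": "red", "d": "red"}

def fingerprint(seq):
    """Collapse a suit sequence into its order of suited runs."""
    fp = ""
    for s in seq:
        if not fp or fp[-1] != s:
            fp += s
    return fp

def good_fingerprint(fp):
    if len(set(fp)) != len(fp):   # some suit is split into two runs
        return False
    if len(fp) <= 2:              # two suits: same-color adjacency is unavoidable
        return True
    return all(COLOR[a] != COLOR[b] for a, b in zip(fp, fp[1:]))

def one_move_sortable(seq):
    """Can one block move (including no move) produce a good fingerprint?"""
    n = len(seq)
    for i in range(n):
        for j in range(i + 1, n + 1):          # grab block seq[i:j] ...
            rest, block = seq[:i] + seq[j:], seq[i:j]
            for k in range(len(rest) + 1):     # ... and reinsert it anywhere
                if good_fingerprint(fingerprint(rest[:k] + block + rest[k:])):
                    return True
    return False

def seq_probability(seq):
    """Exact chance of dealing this suit sequence from a 52-card deck."""
    left, deck, p = {s: 13 for s in SUITS}, 52, Fraction(1)
    for s in seq:
        p *= Fraction(left[s], deck)
        left[s] -= 1
        deck -= 1
    return p

total = sum(seq_probability(seq) for seq in product(SUITS, repeat=6)
            if one_move_sortable("".join(seq)))
print(float(total))
```

Weighting each of the 4,096 suit sequences by its dealing probability, rather than treating them as equally likely, matters because suits deplete as cards are dealt; the weighted sum lands at roughly the 61 percent figure above.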
Solver Laurent Lessard graphed these chances for hands of various sizes. With one (obviously), two (obviously), three (sort of obviously) or even four cards, you can always scratch your organizational itch. Beyond that, the odds fall off sharply. If you were playing bridge, say, where you have a hand of 13 cards, you’d be able to accomplish your goal with one move only about 0.3 percent of the time.
But this problem is so contrived, dear Dr. Riddler Editor, an overly arbitrary mathematical exercise meant solely to steal away my Friday. I can practically hear you yelling that at your screen. But au contraire, mon frère ou ma sœur. To wit, solver Daniel Kunigan filed this report:
“I don’t know the answer. I just wanted to say this is word-for-word exactly what I do when I receive my hand in any card game. More than one move just makes it look like I have a lot of variety to organize, and I don’t want to give away anything. But separating by suit is just so satisfying! Anyway, I’ll guess a random number like 39 [percent].”
Not bad, Daniel. Not bad at all.
Want more riddles?
Well, aren’t you lucky? There’s a whole book full of the best of them and some never seen before, called “The Riddler,” and it’s in stores now!
Want to submit a riddle?
Email me at [email protected]