<?xml version="1.0" encoding="utf-8"?>

Dan McKinley urn:uuid:0f5320a2-eb2c-2b09-f70e-ca9845402e07 https://mcfunley.com/assets/images/favicon.png https://mcfunley.com/assets/images/favicon.png 2025-05-01T17:54:33+00:00 On Misery 2025-01-10T00:00:00+00:00 2025-01-10T00:00:00+00:00 urn:uuid:E257E02B-2504-4DFF-95F7-B224ADDC0CF3 <p>Mark Zuckerberg is in the middle of a <a href="https://www.nytimes.com/2025/01/10/technology/meta-mark-zuckerberg-trump.html">coordinated-if-haphazard heel turn</a>, removing tampons from men’s rooms and welcoming slurs back to his platforms. This occurs while the neighborhood next to mine is still on fire, and his behavior stands in stark contrast to the imperative to <em>love one another or die</em> that is all around me. I connected with this take:</p> <blockquote class="quotation"> <p>[I]t is probably worth thinking about what is happening in Silicon Valley as a revolt of the bosses against their workers. none of this is rational. it is simply causing misery for the purpose of causing it.</p> <p class="attribution"> <a href="https://bsky.app/profile/theophite.bsky.social/post/3lffwuakla223">@theophite.bsky.social</a> </p> </blockquote> <p>First of all, fuck anyone who is for what Zuck is up to, and may god give you blood to drink. I find the hateful nihilism in all of this quite depressing. But I guess I am not shocked to see it revealed that many industry leaders have invested in progressive causes transactionally, rather than out of solidarity, basic decency, and an ideological commitment to a brighter future in which <a href="https://en.wikipedia.org/wiki/Categorical_imperative#Second_formulation:_Humanity">the humanity of others is an end rather than a means to an end</a>.</p> <p>That it is <em>right</em> to affirm the humanity of others should be the only reason one needs to do it. But we have just established that many powerful people are transactional, and not moved by morality. 
In the interest of working towards better outcomes, let me take on some of the labor of expanding upon why the boss revolt is not rational. Winning, fun, and positivity are correlated, and spiteful misery as a business strategy is very stupid.</p> <h5 id="stand-by-while-i-turn-my-temperature-down-30-degrees">Stand by while I turn my temperature down 30 degrees</h5> <p>As the tech labor market has cooled off, skepticism about perks and positive vibes at work is a broad trend. Zuck’s latest rollout is definitely on the more bigoted and mean-spirited end of it, but this has been going on for some time now. Let’s play “spot the logical error:”</p> <ul> <li>Some things that feel good and are positive are a distraction from work.</li> <li>This thing feels good and is positive.</li> <li>Therefore it is a distraction from work.</li> </ul> <p>Look, I have responded to production incidents whilst a founder played Guitar Hero (not even the good songs) on a projector directed two feet above my head. At times I have had to share an office with sales bells, round-the-clock ping pong tournaments, mechanical keyboards, kegerators, and a nontrivial fortune in DJ equipment. I have watched anti-footwear coworkers put bare feet on communal tables. One time a guy passed out in a boat that we had in the office for obscure reasons. While many of these things have a time and a place, let’s say that it has not left me as a workplace hedonism maximalist.</p> <p>I am here to tell you that these sorts of workplace culture programs, which are all good and fun, are not that:</p> <ul> <li>Hack weeks, i.e. “go make a cool thing with coworkers.”</li> <li>Bootcamps and rotations, e.g. “go experience what a totally different team is doing.”</li> <li>Official slack time, e.g. 
20% time <a name="ref1" href="#f1" class="footnote">[1]</a>.</li> </ul> <p>These were all practices that got started in the mid-2000s, when startup funding was hard to come by and we needed to stretch our headcount as far as we possibly could. All of these have what might have been called “ulterior motives,” but that would be ahistorical, since the motives were made explicit at the time.</p> <p>Hack weeks and bootcamps create new edges in your relationship graph, and spread knowledge. Slack time provides the fertile soil for those edges and that knowledge to bear fruit: serendipitous product, organizational efficiency, or what have you. All of this makes people want to keep working hard. Not for you, really, but for each other.</p> <p>To the extent we can still stomach identifying as “hackers” in the original sense, we should seek clever, synergistic, joyful, high-leverage, and possibly subversive ways to get the outcomes we want. That’s what all of this was about.</p> <p>Again, I view DE&amp;I <a name="ref2" href="#f2" class="footnote">[2]</a> as a moral imperative. But it can also be understood within the framework of this school of thought. We can find genius where society has overlooked it. 
And we should take seriously the project of creating the incentive and permission structures that allow these talented people we’ve convinced to work here to contribute to their greatest potential.</p> <p>I wish I didn’t feel so insane and frustrated while pointing this out.</p> <h5 id="winning-is-fun-correlated">Winning is fun-correlated</h5> <p>“Winning is fun” is a mantra I have deployed in the past, and I have meant it in the spirit of “the main thing that’s gonna make people happy is if the product is making users happy.” As is the way with these things, magical thinking can certainly take hold and we can get cause and effect mixed up.</p> <figure> <img src="https://datadriven.club/slides/slides.022.jpeg" /> <figcaption> I wrote <a href="https://datadriven.club">a whole talk</a> about exactly this kind of confusion once. </figcaption> </figure> <p>Fun is not a precondition for winning. But if you <em>hope</em> to win, you should <em>expect</em> to be having fun. This is all to say that if you are systematically eradicating fun things, using “anything that feels positive must be wasting time” as a heuristic, you have thoroughly disappeared up your own ass. If you are committing willful acts of harm as Zuckerberg is, may an even darker abyss than that await you.</p> <p>Meta’s size makes it de facto unkillable, and I’m sure it’ll exist in some form for centuries. But it’s my hope that it will exist in the sense that IBM exists today. Theoretically you know it’s out there, but it’s very hard to grasp the point of it and it feels thoroughly irrelevant. Nobody remembers who started it or why.</p> <p>A healthy, happy, positive, spiritually fulfilled workforce is an end unto itself but yes, also a means to an end. 
Eventually, either our industry or an adjacent one will figure this out again <a name="ref3" href="#f3" class="footnote">[3]</a> and the ironic points of light that constitute the historical tech workforce will go congregate there instead.</p> <hr /> <p>Like this? I have a recent related talk called <a href="https://egoless.engineering">Egoless Engineering</a>.</p> <ol class="footnote-list"> <li> <a name="f1"></a> <p> Google&rsquo;s 20% time in popular imagination is conflated with the idea that engineers will ship products on their own. That is typically a bad idea, because successful product launches take a village. I have no idea if that is really what that was about at Google, but unscheduled slack time in my experience has explicitly made that kind of outcome a non-goal. <a href="#ref1">&#x21A5;</a> </p> </li> <li> <a name="f2"></a> <p> Noting that I am saying &ldquo;DE&amp;I&rdquo; here and not &ldquo;DE&amp;I <em>programs</em>&rdquo; is the moral imperative. I&rsquo;d like to defer to the experience of under-represented people in tech, many of whom have experienced the actually-existing programs as a sham. <a href="#ref2">&#x21A5;</a> </p> </li> <li> <a name="f3"></a> <p> Barring positive black swan events that might come from organizing, or negative black swan events such as the return of indentured servitude. <a href="#ref3">&#x21A5;</a> </p> </li> </ol> Dan McKinley https://mcfunley.com/ The Guerilla’s Guide to Influencing Leadership 2024-12-02T00:00:00+00:00 2024-12-02T00:00:00+00:00 urn:uuid:8667A2A3-4AFF-4E4C-B12D-45CBD6B037C5 <p>In my talk <a href="https://egoless.engineering/">Egoless Engineering</a> I make the case that results are better when teams cooperate, that punching down and other forms of brilliant-jerkhood are <em>actually</em> dumb, and that leaders should reward curiosity and generosity. 
I think misery is a dumb strategy and I am encouraged that some folks have found this case compelling.</p> <p>Some of those who find the content compelling wish they could persuade their leaders on these points, and have asked me for advice on how to accomplish this. I address this a little in the talk:</p> <blockquote> <p>I don’t think you can grass roots [it], beyond what I’m trying to do here by making the idea more popular.</p> <p>It’s on leaders to value cooperation and to reward curiosity.</p> </blockquote> <p>I stand by this overall assessment. It’s unrealistic to expect workers to build a culture that executes well without leadership support in the best case, and against leadership’s instincts in the worst. Doing so means taking risks, especially in proximity to down-punchers who will attempt to undermine you.</p> <p>That said, I don’t think there’s <em>nothing</em> you can do to win support. Over the decades I have discovered a few hacks for influencing superiors. I apologize in advance for how dumb some of this sounds. Please know that I mean this pragmatically, and sincerely. None of this advice is sarcastic. It may be darkly funny, but it is not a <em>joke</em>.</p> <h5 id="hack-1-write-about-it-publicly">Hack #1: Write about it Publicly</h5> <p>It could be that you were hired at a company to solve a shiny problem, and came in with some amount of momentum and a mandate. That’s great, but there are only a few possibilities from here:</p> <ul> <li>The shiny problem at the company changes because you solved it (good job)!</li> <li>The shiny problem at the company changes because leadership is fickle or was wrong, or just because the world changed.</li> <li>The shiny problem is really difficult and it was unrealistic to expect anyone was going to ride in on a white horse and fix it in a short amount of time.</li> </ul> <p>All of which is to say that once you work someplace, your cool factor has a brief half-life. Did they hire you to do ML? 
Sorry, they need “AI” now, and they are going to hire someone else on a really fantastic, different white horse for this <a name="ref1" href="#f1" class="footnote">[1]</a>.</p> <p>So let’s suppose you find yourself in this situation and you want to influence leadership, but they now find you boring and don’t hear you. One tactic that has served me very well is to make my agenda shiny (again, possibly) by getting people in the broader industry buzzing about it.</p> <p>Write a talk or a blog post explaining your point of view. If it’s something you’re already doing at work, feel free to imply that it’s how <em>everyone</em> is doing that stuff where you work. Get it on BlueSky, Mastodon, Hacker News, or wherever the conversations are happening about it. If your cause is truly just, people will talk about it and give you positive feedback.</p> <p>Companies and their leaders love this kind of reflected glory. Their peers will congratulate them on being such great leaders that they motivated or inspired this great work you’ve done. In reaction they will support you. Lean into it and you’ll regain your sheen (for a while).</p> <h5 id="hack-2-have-an-outsider-say-it">Hack #2: Have an Outsider Say It</h5> <p>The odds are very good that at least some of you are reading this while facemuted in a <a href="https://businesserotica.com/blog/the-pyramid-principle">McKinsey consultant workshop</a> about how to write email subject lines, or something equally thrilling. If so, another leadership PSYOP tactic is right in front of you: have outsiders say the thing.</p> <figure> <img src="/assets/images/guerilla/someone-else.png" /> </figure> <p>Outsiders are definitionally novel to insiders, and often already have a whole deck full of classic Simpsons references punched up and ready to present. The move here is to volunteer to start an external speaker series, and stack the agenda in your favor. 
Talks are great for morale anyway, so nobody needs to know what your specific ulterior motive is.</p> <p>If you’re trying to persuade leadership that it’s <a href="https://how.complexsystems.fail/">counterproductive to frame risk management as “who’s going to get fired if X happens”</a> (as I have occasionally tried to do) it’s more effective to launder this feedback through an ostensibly neutral outsider who is at no risk of getting fired.</p> <h5 id="a-practical-example">A Practical Example</h5> <p>Much like cards in a <a href="https://www.youtube.com/watch?v=J_t1sjoJufI">really sick Slay the Spire deck</a>, these tactics can synergize with one another and bestow scaling properties upon you <a name="ref2" href="#f2" class="footnote">[2]</a>. A concrete example of this was my talk <a href="https://datadriven.club/">Data Driven Products Now!</a>, wherein I tried to make a case that it was a good idea to do some napkin math before spending a year coding something. (Or, “opportunity sizing for engineers,” as my friend Roberto called it.)</p> <p>Anyway, I wrote this talk after the fourth or fifth year in a row that Etsy’s big product push consumed 75% of the team’s effort for something that didn’t really stand up to such scrutiny. 
The other 25% of us <em>were</em> doing math and it <em>was</em> going pretty well, so I decided to represent the minority approach as “The Etsy Way” in a public talk.</p> <ul> <li>This got positive attention from outsiders, which successfully made the idea of doing napkin math much more popular internally (<strong>hack #1</strong>).</li> <li>A while later I left the company but kept doing this talk in public, where it could still ricochet back into Etsy and make things better (<strong>hack #2</strong> - still good for me since I owned stock at this time).</li> <li>I also delivered the talk as an outside speaker at many companies, including Mailchimp and Mozilla, where I eventually wound up working (<strong>hack #2</strong>).</li> <li>At Mailchimp, after my new acqui-hire sheen had worn off, we brought in an outside consultant to deliver the same material again (<strong>hack #2</strong>). This person had coincidentally been in the audience to see me give this talk at least once.</li> </ul> <figure> <img src="/assets/images/guerilla/hacks.png" /> </figure> <p>By my count, that’s at least six distinct units of influence at companies I either worked at or owned stock in for one act of writing. What a coup!</p> <p>My <a href="https://boringtechnology.club">Boring Technology</a> work was a similar dynamic on an even grander scale.</p> <h5 id="go-forth">Go Forth</h5> <p>I hope this has been helpful. I ask that you only deploy these tactics, which are powerful, for the causes of justice. If I can help you do this as an outsider (<strong>hack #2</strong>), don’t hesitate to <a href="https://bsky.app/profile/mcfunley.com">drop me a line</a>.</p> <hr /> <ol class="footnote-list"> <li> <a name="f1"></a> <p> I know this stings now but stay positive! As long as you don’t overreact to this, trust me, you will be best friends with that person for life once <em>their</em> white horse drops dead. 
<a href="#ref1">&#x21A5;</a> </p> </li> <li> <a name="f2"></a> <p> While writing both <a href="https://boringtechnology.club">Boring Technology</a> and <a href="https://datadriven.club">Data Driven Products Now!</a>, I was talking extensively with my friend Steve about the content. He had a big influence on both pieces. Some of the punchiest (and therefore best) sections of these were spiritually born of me ranting back and forth with Steve. </p> <p> So there’s a third dimension in which my work lives on that I enjoy perhaps most of all: <em>people mansplaining my work to Steve.</em> Steve will suggest something wherever he works now, and a colleague of his will object. “It’s called Boring Technology Steve, ever heard of it?” This is truly the gift that keeps on giving. <a href="#ref2">&#x21A5;</a> </p> </li> </ol> Dan McKinley https://mcfunley.com/ I Tried to use AI to Read an AI Book 2024-06-09T00:00:00+00:00 2024-06-09T00:00:00+00:00 urn:uuid:57CF0F29-3044-4481-8C5D-7C83A3A050C2 <p>I recently read <a href="https://www.vromansbookstore.com/book/9780593716717"><em>Co-Intelligence</em> by Ethan Mollick</a>. It was good! You should read it. I want to say this up front, since after some preamble I’m going to describe a Rube-Goldbergian attempt to poke petty holes in it. I don’t want the reader to lose sight of the big picture, which is that I was trying to do this <em>in the spirit of the book.</em> Which again, is pretty good.</p> <h5 id="the-zeitgeist-is-a-polterin">The Zeitgeist is a-Polterin’</h5> <p>How are we going to know what’s true? How are we going to find information, now? It’s been on my mind lately, as it’s been on everyone else’s mind. The web has been thrown into chaos. 
As of right now if you ask Google <a href="https://www.google.com/search?q=is+there+a+country+in+africa+that+starts+with+k&amp;rlz=1C5CHFA_enUS1097US1099&amp;oq=is+there+a+country+in+af&amp;gs_lcrp=EgZjaHJvbWUqDAgAEAAYFBiHAhiABDIMCAAQABgUGIcCGIAEMgcIARAAGIAEMgYIAhBFGDkyBwgDEAAYgAQyBwgEEAAYgAQyBwgFEAAYgAQyBwgGEAAYgAQyBggHEEUYPKgCALACAQ&amp;sourceid=chrome&amp;ie=UTF-8">if there’s a country in Africa that starts with “k,”</a> you get a confident “no” that cites one post or another lampooning the entire debacle.</p> <figure> <img src="/assets/images/close-reading/k-countries.webp" alt="Google erroneously asserting that no countries in Africa start with the letter K" /> <figcaption>Web publications are presumably competing for the dregs of display ad revenue by seeing who can roast Google for this the hardest. Which (objectively) rules.</figcaption> </figure> <p>Google isn’t shooting itself in the face right before our eyes because they all think these results are good. (I bet the internal conversations are <em>hilarious.</em>) They’re shooting themselves in the face because they’re in a desperate steel cage match with the <a href="https://en.wikipedia.org/wiki/The_Innovator%27s_Dilemma">Innovator’s Dilemma</a>. Our relationship with information retrieval seems like it’s changing, and this will affect Google. But as a participant in this shitshow they are constrained to seek the set of different equilibria that still more or less resemble web search. When they fuck up, they are fairly scrutinized in ways that their competitors are not. 
They have to transmogrify the golden goose into <a href="https://www.reddit.com/r/TheSimpsons/comments/kpsnb9/grand_funk_railroad_paved_the_way_for_jefferson/">some sort of hovercraft</a>, which is a significantly harder task than simply killing it.</p> <p>LLMs are now training on their own hallucinated content in a doom loop, and media companies are too busy dying <a name="ref1" href="#f1" class="footnote">[1]</a> to be plausible as a solution to this. I don’t know what to say about Twitter except “good luck.” You’d be forgiven for hoping for the <a href="https://www.youtube.com/watch?v=2twY8YQYDBE">Nothing but Flowers</a> scenario, in which we all collectively and abruptly decide to go back to the land.</p> <p>But despite all of this I am not an LLM detractor. Whereas the entire web3 era came and went without ever coalescing into a legible concept of any kind, LLMs are very much a non-fake technology. We haven’t figured out the right way to hold them, yet, but that’s no reason to just give up and walk into the sea.</p> <h5 id="idk-lets-all-read-books-instead">Idk, Let’s All Read Books Instead?</h5> <p>When I’m really chewing on something I read books about it, and I recommend the practice. I do not recommend it as a solution to everything, as <a href="https://www.404media.co/ai-generated-kara-swisher-biographies-flood-amazon/">books are not necessarily written by humans</a> and even when they are <a href="https://www.youtube.com/watch?v=biYciU1uiUw">they are not necessarily using their whole ass to do it</a>. But as a way to let ideas really stretch out in your mind and stink up the place, I don’t have a better way. So again, I read <em>Co-Intelligence</em>. And again, it was pretty good.</p> <p>Inspiration struck right on schedule when my friend <a href="https://gigamonkeys.com/">Peter Seibel</a> also read <em>Co-Intelligence</em>. 
Peter noticed a claim about 3/4 of the way through:</p> <blockquote> <p> [R]epeated studies found that differences between the programmers in the top 75th percentile and those in the bottom 25th percentile can be as much as 27 times along some dimensions of programming quality... [b]ut AI may change all that. <a href="#f2" class="footnote" name="ref2">[2]</a> </p> </blockquote> <p>Peter is a <a href="https://gigamonkeys.com/book/">programming book author</a>, and tech industry veteran turned high school CS teacher. You could say he’s dedicated himself to spreading the craft, and so this claim is something of a pet issue of his.</p> <figure> <a href="https://twitter.com/peterseibel/status/512615519934230528"> <img src="/assets/images/close-reading/peter-10x.webp" alt="How to be a 10x programmer, per Peter Seibel: help ten other engineers be twice as good." /> </a> </figure> <p><a href="https://blog.glitch.com/post/the-10x-programmer-and-other-myths/">Indeed the claim is a well-established industry trope at this point</a>. It is widely considered to be thinly sourced at best, and entirely vibes at worst. It usually relies on scant evidence when it’s sourced at all. But in this case, it was sourced! In a paper we hadn’t heard of before! The specific citation was:</p> <blockquote> <p>The gap between the programmers: L. Prechelt, “An Empirical Comparison of Seven Programming Languages,” IEEE Computer 33, no. 10 (2000): 23–29, <a href="https://doi.org/10.1109/2.876288">https://doi.org/10.1109/2.876288</a>.</p> </blockquote> <p>So I decided to dig in and see if the paper supported this <a href="#f3" class="footnote" name="ref3">[3]</a>.</p> <h5 id="no-the-cited-paper-does-not-support-the-existence-of-27x-programmers">No, The Cited Paper does not Support the Existence of 27x Programmers</h5> <p><a href="https://www.cs.tufts.edu/~nr/cs257/archive/lutz-prechelt/comparison.pdf">Prechelt’s paper</a> is a comparison of <em>programming languages</em>, not <em>programmers</em>. 
Its conclusion is closer to “C++ sucks” <a name="ref4" href="#f4" class="footnote">[4]</a> than anything to do with programmer ability. There are two overlapping problems with using it here:</p> <ul> <li>The paper acknowledges weaknesses in its samples, and other reasons we may be looking at biased results. (This is lovely to see.)</li> <li>The paper is not trying to make any points about programmer capabilities.</li> </ul> <p>Hence the conclusions of the paper don’t really support the premise in that part of Mollick’s text.</p> <h5 id="again-i-have-to-stress-that-co-intelligence-is-a-good-book">Again, I Have to Stress that <em>Co-Intelligence</em> is a <em>Good</em> Book</h5> <p><em>Co-Intelligence</em> has sources that we are invited to check, and of course many books do not do this. Mollick is a serious person who is trying to do a good job, in good faith, and many are not. There’s nothing unusual about noticing a problem like this in a book. This is just what it’s like to read something that touches on topics that you happen to know a great deal about.</p> <p>Many of 2024’s gravest epistemological dangers arise when we read things that we <em>don’t</em> know much about <a name="ref5" href="#f5" class="footnote">[5]</a>. In those situations, we’re liable to reinforce our own biases or blithely accept the authority of the text. How can we do better?</p> <p>The answer is probably something like “critical thinking,” or “close reading.” We should be putting more thought into the sources of what we’re consuming. We should be questioning whether those sources support the conclusions drawn, and what problems they may have themselves. By doing so we can form a more nuanced interpretation of what we’re consuming.</p> <p>Of course, the downside is that this all takes a metric shit-ton of time.</p> <h5 id="what-to-do-mechanize">What to do? Mechanize!</h5> <p>It occurred to me at this point that perhaps I could use AI to augment my critical thinking skills. 
It occurred to me because this is the sort of thing the book was constantly encouraging me to do:</p> <blockquote> <p> Research has successfully demonstrated that it is possible to correctly determine the most promising directions in science by analyzing past papers with AI <a name="ref6" class="footnote" href="#f6">[6]</a> </p> </blockquote> <p>What if AI could be an asset in skepticism about itself? Could AI be both the cause of <em>and solution to</em> all of our filter bubble problems? I am not sure. Let’s find out!</p> <p>My first few naive attempts were to simply feed the LLM <a name="ref7" class="footnote" href="#f7">[7]</a> some content by hand. I’d give it a PDF, a passage from the book, and the specific claim that was being supported by the citation. I’d ask it what it thought, either in general or as a two-parter (<em>“How would you rate this paper? Do you think it provides good support for this claim in this text?”</em>).</p> <p>The results of this were disappointing–the LLM universally responded with paragraphs amounting to <em>“yeah, lookin’ good hoss!”</em> Being at least vaguely on top of <a href="https://applied-llms.org/">the conversations around using LLMs in anger</a>, I figured that the problem here was that I was asking it to do far too much at once.</p> <h5 id="revising-the-approach">Revising the Approach</h5> <p>After thinking about it a bit more, I realized that the goal should probably be about prioritization. This is also in line with Mollick’s advice:</p> <blockquote> <p> The closer we move to a world of Cyborgs and Centaurs in which the AI augments our work, the more we need to maintain and nurture human expertise. We need expert humans in the loop. <a name="ref8" href="#f8" class="footnote">[8]</a> </p> </blockquote> <p>I am definitely not going to check the 90+ academic papers cited by this book, let alone the web pages and other books cited. 
And on the basis of its output so far, I am also not going to just trust the LLM to do that for me without help. Instead, the goal would be to use AI to get the drudgery of sifting through references out of my way. I decided that I’d try this instead:</p> <ol> <li>I’ll ask the LLM to give each of the cited papers an overall trustworthiness score.</li> <li>I’ll ask the LLM to rate how well each citation supports the claim in the text.</li> <li>From those two scores, I’ll make a weighted list of things to dig into by hand, leveraging my own abilities better.</li> </ol> <p>I spent a day collecting the papers, and managed to find nearly all of them without having to pay a wall.</p> <h5 id="scoring-papers">Scoring Papers</h5> <p>My first attempt at scoring the papers was direct: I just fed the LLM the paper and asked it to rate its trustworthiness on a scale of one to ten. The LLM scored nearly every paper a 9 or a 10 out of 10. That’s perhaps unsurprising, since there are several sources of selection combining to bias the book towards citing papers that aren’t just total nonsense <a href="#f9" name="ref9" class="footnote">[9]</a>. But unfortunately that’s useless as a means to differentiate. Asking the same question while providing a sample of other papers as a basis for comparison produced the same results <a href="#f10" name="ref10" class="footnote">[10]</a>.</p> <p>I switched to asking the LLM to stack rank a set of papers. I’d give it a paper with nine others, and ask it to give me the ranking of trustworthiness. At first this seemed to produce better results, meaning the LLM would relent and say, “ok, this paper is a four out of ten in this set.” But repeating this a few times showed that the rankings were unstable–the same paper would get a range of rankings between 1 and 10 that seemed quite broad.</p> <p>It occurred to me that the instability might average out in a useful way with repeated trials. 
If we ask the LLM to repeatedly stack rank a paper, it might occasionally rate it as an 8 but ultimately average it as a 3. Like so:</p> <figure> <img src="/assets/images/close-reading/simulated-convergence.webp" alt="Simulating what would happen if LLM-provided paper scores eventually converged." /> <figcaption> Simulating repeatedly scoring papers if it's the case that the LLM can differentiate them unreliably. In the ideal case the average scores (the center lines) will be different from each other and the standard errors (the shaded regions) should get tighter with repeated trials. </figcaption> </figure> <p>But after doing this 20 times for a set of referenced papers, in practice that didn’t work:</p> <figure> <img src="/assets/images/close-reading/convergence-attempt-1.webp" alt="Graph showing that LLM paper rankings are random in practice" /> <figcaption> Asking the LLM to rank papers repeatedly results in every paper converging on a 5/10 with relatively wide distributions, i.e. the LLM's answers seem to be random. </figcaption> </figure> <p>I redid the process one more time by asking the LLM to focus on just critiques of the papers. That produced much more pessimism, but not in a way that would give me a principled smaller set of papers to scrutinize by hand:</p> <figure> <img src="/assets/images/close-reading/critique-convergence.webp" alt="" /> <figcaption> Asking the LLM to focus on ranking critiques of papers results in very pessimistic scores, but doesn't differentiate them. </figcaption> </figure> <p>So progress seemed to stall out here. Having failed to find a way to give a paper a trustworthiness score that made any sense, it didn’t seem worthwhile to work out a way to rate the faithfulness of the citations to paper conclusions.</p> <h5 id="what-did-we-learn">What Did We Learn?</h5> <p>The premise here was that I could write some AI automation to help me walk away from an overall good work with a more nuanced view than I otherwise would have. 
Given the correlated nature of the attempt and the subject matter, that definitely worked! I have a more nuanced view of what someone could reasonably ask a current LLM to do now! But it is not clear that I achieved anything durable yet, beyond building a very inefficient shuffle algorithm that cost me $100 and ate a few days.</p> <p>I approached this with some classical ML system expertise, and therefore applied at least a little statistical thinking to what I did. I think in a lot of applications, folks just wouldn’t do this. The path of least resistance would be to ask the LLM for opinions, observe that it gives them, and plow forward.</p> <figure> <img src="/assets/images/close-reading/nimoy.webp" alt="Leonard Nimoy In Search of Spoof." /> </figure> <p>This could be good or bad! In purely creative scenarios it’s probably a win. But it’d be a convoluted way to reinforce confirmation bias in others, i.e., the exact opposite of what I was trying to accomplish.</p> <p>It seems like you could certainly use a current LLM to distinguish sources that are intrinsically terrible. But they aren’t particularly good at drawing out this kind of nuance right now, at least not in any straightforward way.</p> <hr /> <p class="acknowledgements"> Hi, thank you for reading. If you liked this you might like <a href="/talks">some of my talks</a> such as the notable banger <a href="https://boringtechnology.club/">Choose Boring Technology</a> or maybe some of my <a href="/writing">other writing</a>. To old friends, I apologize for not writing in a while. I assure you I was embroiled in some really baroque psychodrama that seemed important at the time. </p> <p class="acknowledgements"> Thanks to Camille Fournier, Peter Seibel, Lonnen, Moishe Lettvin, et al for help with this! 
</p> <hr /> <ol class="footnote-list"> <li> <a name="f1"></a> <p> Here's where I thought I might link you to a great and relevant <a href="https://searchengine.show">Search Engine</a> episode about the media apocalypse, except as far as I can tell we've all decided to break the ability to link to podcast episodes. Case in point. Regardless, you should consider subscribing to <a href="https://searchengine.show">Search Engine</a>. <a href="#ref1">&#x21A5;</a> </p> </li> <li> <a name="f2"></a> <p> Ethan Mollick, <i>Co-Intelligence</i> (New York: Penguin Random House, 2024), 156. <a href="#ref2">&#x21A5;</a> </p> </li> <li> <a name="f3"></a> <p> Peter would like it to be noted that he reviewed the Prechelt paper himself, quickly concluded that it didn't support the book's claim, and moved on. The reader is encouraged to come to their own conclusions about our differing priority preferences and life choices. <a href="#ref3">&#x21A5;</a> </p> </li> <li> <a name="f4"></a> <p> It's unclear if this <i>needs</i> to be empirically proven, but it is correct. The paper does this: </p> <figure> <img src="/assets/images/close-reading/boxplot.webp" alt="Box plot excerpted from L. Prechelt paper" /> </figure> <p> Prechelt studies a big set of programs written in different languages. He contrives a "bad to good ratio," which is the median of the slowest half of the programs divided by the median of the fastest half. The difference of "27 times" is the spread of outcomes within a language, which the book then conflates with programmer capability. </p> <p> The paper talks a bit about how the programs in different languages are sourced from different places. The C++ programs are from CS master students, the Tcl programs are from open calls, etc. The paper discusses how there will be bias in the outcomes as the result of this. 
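</p>

<p>As a sketch, the metric as described reads like this in code (Prechelt's exact convention for splitting odd-sized groups may differ):</p>

```python
import statistics

def bad_to_good_ratio(runtimes):
    """Median of the slower half of runtimes divided by the
    median of the faster half."""
    ordered = sorted(runtimes)
    half = len(ordered) // 2
    fast, slow = ordered[:half], ordered[half:]
    return statistics.median(slow) / statistics.median(fast)
```

<p>Note that this measures the spread among the submitted programs, not anything about an individual programmer's range.</p>

<p>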
<a href="#ref4">&#x21A5;</a> </p> </li> <li> <a name="f5"></a> <p> An example that's close to home for me is that some of my college friends (who also completed <i>an ivy league engineering program</i>) are now moon landing deniers. This seems to be the result of choosing the Joe Rogan podcast as a source of information and identity. </p> <p> I don't actually know since I won't listen to it, but as far as I can discern this podcast is a decades-long freefall into the bottomless abyss of nonsense that yawns beyond the boundaries of one's own expertise. <a href="https://www.youtube.com/watch?v=lWAyfr3gxMA">He had an actor on who thinks 1&times;1=2</a>, and apparently took it seriously. That may be an extreme example, but we'd be mistaken to believe we're categorically immune from these kinds of errors just because we aren't megadosing shark cartilage and suffering head trauma regularly, or whatever. </p> <p> Incidentally, do not contact me to discuss Joe Rogan. <a href="#ref5">&#x21A5;</a> </p> </li> <li> <a name="f6"></a> <p> Mollick, 202. The <a href="https://arxiv.org/abs/2210.00881">paper he is citing</a> in this case is not using a large language model, but is cool work regardless. They built their own semantic graph of research topics with more mundane extraction techniques, and then tried to predict future edges in it. They found that models with hand-crafted features outperformed unsupervised approaches, including a transformer model. And "[s]urprisingly, using purely network theoretical features without machine learning works competitively." But, this was all in 2022. <a href="#ref6">&#x21A5;</a> </p> </li> <li> <a name="f7"></a> <p>Everything in this writeup was done with GPT-4o. <a href="#ref7">&#x21A5;</a></p> </li> <li> <a name="f8"></a> <p>Mollick, 182. 
<a href="#ref8">&#x21A5;</a></p> </li> <li> <a name="f9"></a> <p> Indeed if you ask it to read <a href="https://www.thelancet.com/action/showPdf?pii=S0140-6736%2897%2911096-0">a very bad paper</a>, it will rate its trustworthiness very low. <a href="#ref9">&#x21A5;</a> </p> </li> <li> <a name="f10"></a> <p> Various attempts at excoriating the LLM to behave differently didn't get me anywhere either. <i>"Your rankings overall should be normally distributed! Your mean ranking should be a five! Don't worry about your rankings getting back to the authors! Nobody is going to judge you for this!"</i> <a href="#ref10">&#x21A5;</a> </p> </li> </ol> Dan McKinley https://mcfunley.com/ Google Reader Killed RSS 2019-12-18T00:00:00+00:00 2019-12-18T00:00:00+00:00 urn:uuid:de2dac6b-5dc0-4097-86d7-466e1210b626 <p>There were rumblings earlier this week that Alphabet executives mused about <a href="https://www.theinformation.com/articles/google-brass-set-2023-as-deadline-to-beat-amazon-microsoft-in-cloud">killing GCP</a>. I think they probably won’t do it <a href="#f1" ref="#f1" class="footnote">[1]</a>. But as a side effect this has provoked yet another round of <a href="https://twitter.com/search?q=reader%20https%3A%2F%2Ftwitter.com%2Fkilledbygoogle%2Fstatus%2F1198773553039962112&amp;src=typed_query">everyone pouring one out</a> for the most beloved Google ex-feature ever, <a href="https://en.wikipedia.org/wiki/Google_Reader">Google Reader</a>.</p> <p>I miss the RSS world of the early 2000’s as much as anyone. I miss it almost as much as I miss McCarren Pool having no water in it and new Spoon albums sounding fresh. This is why I feel compelled to point out that those mourning Google Reader are forgetting that it was actually responsible for ruining the whole thing.</p> <figure> <img src="/assets/images/homer-stands.jpg" /> <figcaption class="text-center">Computer, engage shitpost. 
Attack pattern "digging up graves."</figcaption> </figure> <p>It went like this: Google Reader killed RSS, <em>and then like a decade later</em> Google killed Google Reader. You’re having a funeral for the tame old fox that was mysteriously living in your henhouse.</p> <h5 id="a-bull-moose-stomping-around-the-primordial-tidepool">A Bull Moose Stomping around the Primordial Tidepool</h5> <p>The existence of Google Reader wiped out a generation of attempts at building hosted, social feed readers. I was working on one. We had maybe a thousand users, so I’m not trying to overestimate the cardinality of the set of alternate universes in which ours won. But the survival of any of them as independent actors became untenable once Google Reader came out.</p> <p>Hosted feed aggregation was a relatively expensive product to attempt at the time. There were no clouds yet, and bandwidth pricing on shared hosts was oppressive to those of us just getting by on bootstrapped budgets. Everyone subscribed to less than a hundred feeds, but it was fat-tailed and everyone chose a different set of less than a hundred feeds. Your servers had to download a lot of stuff, and they had to do it as often as you could afford.</p> <p>There was a significant amount of toil involved in maintaining the perception of quality, because blogging software was a much more fragmented space then, and feeds of the era were a <em>mess</em>. Remember <a href="https://en.wikipedia.org/wiki/Cute_Overload">Cute Overload</a>? I do, mainly because it was a freaking frameset around a blogger site. This kind of kluge was typical <a ref="#f2" href="#f2" class="footnote">[2]</a>.</p> <figure> <img src="/assets/images/cute-overload.jpg" /> <figcaption class="text-center">mfw we realized it was a freaking frameset</figcaption> </figure> <p>As long as Google Reader existed, the two available paths out of this were out of reach. 
Anyone with money who believed in RSS as a consumer technology also believed Google would dominate the space <a href="#f3" class="footnote">[3]</a>. The aura of infallibility that Google possessed in this era before laughingstocks like <a href="https://twitter.com/adamlisagor/status/187596931638362114">Glass</a>, <a href="https://www.theverge.com/2019/4/2/18290637/google-plus-shutdown-consumer-personal-account-delete">Google+</a>, <a href="https://techcrunch.com/2009/11/26/why-google-wave-sucks/">Wave</a>, etc., is hard to relate. Picture showing up for an audition and getting in line behind Denzel Washington.</p> <p>And of course you couldn’t charge a fee, because Google Reader was free.</p> <h5 id="google-reader-not-impressive">Google Reader: Not Impressive</h5> <p>This all would have been water under the bridge if Google had followed through with making Reader what it deserved to be. But they did not. They kept it on starvation rations for more than ten years.</p> <p>Reader’s social features, for example, were only slightly less catastrophically haphazard than Buzz.</p> <figure> <img src="/assets/images/google-reader-notes.jpg" /> <figcaption class="text-center">What passed as the social features of Reader ca. 2008</figcaption> </figure> <p>For years and years it wasn’t even obvious how the friend list worked, <em>at all</em>.</p> <blockquote class="quotation"> If you check the associated help page, it turns out that to remove someone, you have to remove them as a Gmail/Google Talk contact. Wow. <p class="attribution"> <a href="https://searchengineland.com/google-reader-gets-social-with-friends-shared-items-12949">Search Engine Land, 2007</a> </p> </blockquote> <p>Despite this, people out there lament the loss of the communities they’d built on Reader. It’s frankly incredible that anyone managed this with tools this bad. 
It validates that there was something there, something that could have been more than what we got.</p> <h5 id="hello-from-a-smoking-crater-inside-the-kill-zone">Hello from a Smoking Crater Inside the Kill Zone</h5> <p>Google Reader reigned for so long that people towards the end of its run weren’t wistful for a return to the old ways. They were wistful for the thing that wrecked the old ways. The old ways were a world not even remembered.</p> <p>Allowing Reader to exist, but not attempting to make it something that could achieve broader adoption–or even just be great inside its niche–was sufficient to doom the medium. Reader was a worse product than Twitter by the time Twitter came around. I don’t think it needed to be that way.</p> <ol class="footnote-list"> <li> <a name="f1"></a> I will say that if killing beloved products is your bag, then building a cloud platform is the smartest strategy because it allows you to shut down products you <em>don't even own</em>. What legends! </li> <li><a name="f2"></a><em>The Daily Kluge</em>, though, ran a tight ship.</li> <li> <a name="f3"></a> This was not wrong at all, but it played out differently than RSS fans expected. Some folks just didn't believe RSS would work at all, which I think is somewhat discredited now with the resurgence of podcasts. </li> </ol> <p class="acknowledgements"> Thanks to <a href="https://programmingisterrible.com/">tef</a>, <a href="https://twitter.com/lxt">Laura Thomson</a>, and <a href="https://www.moishelettvin.com/">Moishe</a>. 
</p> Dan McKinley https://mcfunley.com/ Some Recent Work 2017-05-08T00:00:00+00:00 2017-05-08T00:00:00+00:00 urn:uuid:6d6cb15a-f879-141b-247f-b7971b5bd79f <p>Here are some links to recent work I’ve done elsewhere.</p> <ul> <li><a href="/ship-small-diffs">Ship Small Diffs</a> - I tried to transmute the anguish I feel looking at huge changesets into words.</li> <li><a href="https://hackernoon.com/mistakes-you-apparently-just-have-to-make-yourself-cc2dd2bfc25c">Mistakes You Apparently Just Have to Make Yourself</a> - Getting youngfolk to listen to you is harder than I realized.</li> <li><a href="/fourteen-months-with-clojure">Fourteen Months with Clojure</a> - Going back to my Lisp roots here.</li> <li><a href="http://pushtrain.club/">The Push Train</a> - Trying to frantically document some of the human element of making engineering function at a high level, which for whatever reason didn’t strike me as vital at the time.</li> <li><a href="https://frequentdeploys.club">Deploying Often is a Very Good Idea</a> - Conditional probability is extremely good.</li> <li><a href="/you-cant-have-a-rollback-button">You Can’t Have a Rollback Button</a> - Please engrave “but what if you didn’t?” on my tombstone I guess.</li> <li><a href="https://blog.skyliner.io/a-simple-pattern-for-jobs-and-crons-on-aws-2f965e43932f">A Simple Pattern for Jobs and Crons on AWS</a> - Not only did I stoop to writing a practical post for once, I also wrote <a href="https://medium.com/@mcfunley/at-most-once-vs-at-least-once-f215dafd27e2">a followup</a>.</li> <li><a href="/no-way-out-but-through">No Way Out but Through</a> - More ranting and raving about deploying more than once a year.</li> </ul> Dan McKinley https://mcfunley.com/ Fourteen Months with Clojure 2017-03-30T00:00:00+00:00 2017-03-30T00:00:00+00:00 urn:uuid:7D1CEBFA-D4FF-43BA-A3D4-8B876B961DC6 <p><a href="https://codahale.com/">Coda</a> and I have been using <a href="https://clojure.org">Clojure</a> to build Skyliner for the last fourteen 
months or so. I thought it might be a good idea to write down some of our experiences with this, for the benefit of others considering it for practical work.</p> <figure> <img src="/assets/images/skyliner-state-machine.png" /> <figcaption class="text-center">The beating heart of Skyliner, a deploy encoded as a finite state machine.</figcaption> </figure> <h5 id="learning-languages-is-easy-learning-the-idioms-is-less-easy">Learning languages is easy, learning the idioms is less easy</h5> <p><a href="https://www.cliki.net/CloserLookAtSyntax">“Lisp has no syntax,”</a> or so they say. It does have some, but significantly less than other languages. Clojure has a slightly larger pile of stuff that you could mistake for syntax, but, it’s still compact and simple. The tricky part isn’t the <em>language</em> so much as it is the <em>slang</em>.</p> <p>As a seasoned engineer who theoretically “knows” a few dozen languages, I got productive with Clojure pretty fast. Nevertheless I definitely emitted some crappy code in my first few months. 
Stuff like:</p> <div class="language-clojure highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">(</span><span class="nb">every?</span><span class="w"> </span><span class="o">#</span><span class="p">(</span><span class="nb">=</span><span class="w"> </span><span class="n">%</span><span class="w"> </span><span class="s">"success"</span><span class="p">)</span><span class="w"> </span><span class="p">(</span><span class="nb">map</span><span class="w"> </span><span class="no">:status</span><span class="w"> </span><span class="p">(</span><span class="no">:state</span><span class="w"> </span><span class="n">task</span><span class="p">)))</span><span class="w"> </span></code></pre></div></div> <p>Which I’d write like this today:</p> <div class="language-clojure highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">(</span><span class="nf">-&gt;&gt;</span><span class="w"> </span><span class="n">task</span><span class="w"> </span><span class="no">:state</span><span class="w"> </span><span class="p">(</span><span class="nb">map</span><span class="w"> </span><span class="no">:status</span><span class="p">)</span><span class="w"> </span><span class="p">(</span><span class="nb">every?</span><span class="w"> </span><span class="o">#</span><span class="p">(</span><span class="nb">=</span><span class="w"> </span><span class="n">%</span><span class="w"> </span><span class="s">"success"</span><span class="p">)))</span><span class="w"> </span></code></pre></div></div> <p><a href="https://clojure.org/guides/threading_macros">Threading macros</a> and <a href="https://clojure.org/reference/transducers">transducers</a> specifically took a few months to become second nature.</p> <p>This is the kind of thing that would matter to you if you were going to try to onboard a few new engineers a week. 
I never read a tutorial, because this is a startup, and I did not have time. You’d probably want to rectify that mistake and review their stuff for a while.</p> <h5 id="when-the-going-gets-tough-the-tough-use-maps">When the going gets tough, the tough use maps</h5> <p>If I were going to give you a quick summary of what our codebase is like, I’d say it’s <strong>procedural code that manipulates maps</strong>. That is literally 90% of it. This is a lot less bad than it probably sounds if you’ve never written Clojure, because the entire language is oriented around manipulating maps and lists.</p> <p>We keep the wheels on a few ways.</p> <ul> <li>Schemas for our maps are pretty handy, particularly when they’re of the user-supplied data variety. We’re using <a href="https://github.com/plumatic/schema">prismatic/schema</a> for this, although if we were starting today we might use <a href="https://clojure.org/about/spec">clojure.spec</a>.</li> <li>Our codebase has better test coverage than nearly anything I’ve ever worked on.</li> <li>We use <a href="https://github.com/clj-commons/kibit">Kibit</a> and <a href="https://github.com/jonase/eastwood">Eastwood</a> in our build pipeline for the sake of general cleanliness.</li> </ul> <h5 id="bells-and-whistles-are-very-rare">Bells and whistles are very rare</h5> <p>I kind of assumed writing Clojure professionally would involve communing with the grand harmony of the spheres, or something, but it really doesn’t. And this isn’t bad. 
It is actually extremely good.</p> <figure> <img src="/assets/images/ancient-of-days.png" /> <figcaption class="text-center">So then like McCarthy’s student Russell noticed that EVAL could serve as an interpreter and *<a href="https://twitter.com/dril/status/163500308469792768?lang=en">goes limp &amp; rolls down steep mountainside for 10 minutes or so, banging head on branches and rocks, surely dead</a>.* </figcaption> </figure> <p>In fourteen months I count about six uses of <code class="inline">recur</code>. I think I wrote some code using <code class="inline">trampoline</code> once or twice and then decided against shipping it.</p> <p>We’ve written <code class="inline">defmacro</code> ourselves less than ten times. Most of those are for logging, so that we can grab the caller’s value of <code class="inline">*ns*</code>. Others are setting dynamically scoped variables for the sake of implementing feature flags. They’re all really simple macros.</p> <p>Types of any kind are rare to a degree that astonishes me. We’ve written a handful of protocols, for example our <code class="inline">scm</code> protocol is there to provide a uniform interface for both GitHub and private git repos. We have records representing different kinds of CloudFormation stacks that we create and manipulate. That is pretty much it.</p> <h5 id="multimethods-are-less-rare">Multimethods are less rare</h5> <p>One thing we do use more extensively is <a href="https://clojure.org/about/runtime_polymorphism">multimethods</a>. 
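</p>

<p>For readers who don’t write Clojure: a multimethod dispatches on the value of an arbitrary function of its arguments, rather than on a type. A loose Python analogy (hypothetical, not Skyliner’s actual code) is a handler registry keyed by a dispatch function:</p>

```python
def multimethod(dispatch_fn):
    """Return a callable that routes to handlers by dispatch key."""
    handlers = {}

    def call(*args):
        return handlers[dispatch_fn(*args)](*args)

    def register(key):
        def wrap(fn):
            handlers[key] = fn
            return fn
        return wrap

    call.register = register
    return call

# Dispatch on the job's "type" key, as one might for worker jobs.
run_job = multimethod(lambda job: job["type"])

@run_job.register("deploy")
def _(job):
    return "deploying %s" % job["sha"]

@run_job.register("cleanup")
def _(job):
    return "cleaning up"
```

<p>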
We use this to dispatch asynchronous jobs in our workers, to fan out to different handlers from webhook endpoints, and to make transitions in our deploy finite state machine.</p> <figure> <img src="/assets/images/clojure-multimethods.png" /> <figcaption class="text-center">Using a simple little multimethod to convert java types into primitives that are acceptable to our frontend templates.</figcaption> </figure> <p>In other languages we’d probably want to use some object abstraction or other, but multimethods handle things like this cleanly.</p> <h5 id="clojure-is-not-scala">Clojure is not Scala</h5> <p>I had some anxiety when we were getting started with Clojure, and that was grounded in my years of experience with Scala. Scala has <a href="https://codahale.com/downloads/email-to-donald.txt">scarred</a> both of us for a number of reasons. Scala <a href="https://docs.scala-lang.org/overviews/reflection/typetags-manifests.html">builds on JVM typing</a> to erect additional complexity, and in my opinion the results are <a href="https://docs.scala-lang.org/tour/implicit-parameters.html">mixed</a>.</p> <figure> <img src="/assets/images/contravariance.png" /> <figcaption class="text-center">A cathedral of covariance and contravariance built on the soft sandy base layer of type erasure.</figcaption> </figure> <p>Clojure doesn’t ask you to type anything if you don’t want to. That has its pluses and minuses, but you can write most of your code without getting into any slapfights with the JVM. So as a higher-level abstraction over Java, it works.</p> <p>Building a server application with Clojure is a better experience than with many <a href="https://clojure.org/reference/compilation">compiled languages</a>, because as with any Lisp, you can just hotpatch everything in the REPL as you build it.</p> <p>I’ll grant you that maybe Scala has answers to all of these problems now, as I haven’t had the pleasure of using it in several versions. 
Do not @ me to talk about this.</p> <h5 id="nesting-sucks">Nesting sucks</h5> <p>Although Common Lisp has <a href="https://gigamonkeys.com/book/functions.html#function-return-values">return-from</a>, Clojure has no facility like <code class="inline">return</code> or <code class="inline">goto</code>. This isn’t something you miss writing idiomatic Clojure, but sometimes you find yourself boxed into writing non-idiomatic Clojure. A good example of such a situation is dealing with a morass of heterogeneous functions that can return error codes.</p> <p>Let’s say that you have a list of steps that need to complete in a specific order, and may fail. Conceptually, in Python:</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">x</span> <span class="o">=</span> <span class="n">foo</span><span class="p">()</span> <span class="k">if</span> <span class="ow">not</span> <span class="n">x</span><span class="p">:</span> <span class="k">return</span> <span class="bp">False</span> <span class="n">y</span> <span class="o">=</span> <span class="n">bar</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="k">if</span> <span class="ow">not</span> <span class="n">y</span><span class="p">:</span> <span class="k">return</span> <span class="bp">False</span> <span class="k">return</span> <span class="n">baz</span><span class="p">(</span><span class="n">y</span><span class="p">)</span> </code></pre></div></div> <p>This can be elegantly handled if the methods in the pipeline all return <code class="inline">nil</code> in the failure case, and we don’t care to do much else.</p> <div class="language-clojure highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">(</span><span class="nf">some-&gt;</span><span class="w"> </span><span class="p">(</span><span class="nf">foo</span><span class="p">)</span><span class="w"> </span><span class="n">bar</span><span 
class="w"> </span><span class="n">baz</span><span class="p">)</span><span class="w"> </span></code></pre></div></div> <p>But things start to fall apart as the signatures of the functions in the pipeline vary, or if we want to instrument the pieces with logging.</p> <div class="language-clojure highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">(</span><span class="nb">if-let</span><span class="w"> </span><span class="p">[</span><span class="n">x</span><span class="w"> </span><span class="p">(</span><span class="nf">foo</span><span class="p">)]</span><span class="w"> </span><span class="p">(</span><span class="nb">if-let</span><span class="w"> </span><span class="p">[</span><span class="n">y</span><span class="w"> </span><span class="p">(</span><span class="nf">bar</span><span class="w"> </span><span class="n">x</span><span class="p">)]</span><span class="w"> </span><span class="p">(</span><span class="nb">if-let</span><span class="w"> </span><span class="p">[</span><span class="n">z</span><span class="w"> </span><span class="p">(</span><span class="nf">goo</span><span class="w"> </span><span class="n">x</span><span class="w"> </span><span class="n">y</span><span class="p">)]</span><span class="w"> </span><span class="p">(</span><span class="nf">do</span><span class="w"> </span><span class="p">(</span><span class="nf">qux</span><span class="w"> </span><span class="n">x</span><span class="w"> </span><span class="n">y</span><span class="w"> </span><span class="n">z</span><span class="p">)</span><span class="w"> </span><span class="p">(</span><span class="nf">log</span><span class="w"> </span><span class="s">"it worked"</span><span class="p">)</span><span class="w"> </span><span class="n">true</span><span class="p">)</span><span class="w"> </span><span class="p">(</span><span class="nf">do</span><span class="w"> </span><span class="p">(</span><span class="nf">log</span><span class="w"> </span><span class="s">"goo failed"</span><span 
class="p">)</span><span class="w"> </span><span class="n">false</span><span class="p">))</span><span class="w"> </span><span class="p">(</span><span class="nf">do</span><span class="w"> </span><span class="p">(</span><span class="nf">log</span><span class="w"> </span><span class="s">"bar failed"</span><span class="p">)</span><span class="w"> </span><span class="n">false</span><span class="p">))</span><span class="w"> </span><span class="p">(</span><span class="nf">do</span><span class="w"> </span><span class="p">(</span><span class="nf">log</span><span class="w"> </span><span class="s">"foo failed"</span><span class="p">)</span><span class="w"> </span><span class="n">false</span><span class="p">))</span><span class="w"> </span></code></pre></div></div> <p>We have a decent amount of old code that looks like this. It’s all well tested and in that sense it’s relatively safe, but it’s still craptacular and tricky to modify.</p> <p>Before a throng of enlightened individuals amble up to the mic stand in the aisle to tell us this, I should say that we are wonk as hell and therefore realized we were building a composition of <a href="https://hackage.haskell.org/package/category-extras-0.52.0/docs/Control-Monad-Either.html">either monads</a>.</p> <figure> <img src="/assets/images/profunctor-optics.png" /> <figcaption class="text-center">Could you not</figcaption> </figure> <p>But a highbrow-yet-idiomatic solution to that in a language otherwise devoid of category theory wasn’t immediately obvious. I messed around with the idea of tackling this with specialized macros, but decided this was an unmaintainable tarpit.</p> <p>In the end we decided to just try using a category theory library, <a href="https://github.com/funcool/cats">cats</a>. 
That lets you write something equivalent to the above like so:</p> <div class="language-clojure highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">(</span><span class="nf">require</span><span class="w"> </span><span class="o">'</span><span class="p">[</span><span class="n">cats.core</span><span class="w"> </span><span class="no">:as</span><span class="w"> </span><span class="n">m</span><span class="p">])</span><span class="w"> </span><span class="p">(</span><span class="nf">require</span><span class="w"> </span><span class="o">'</span><span class="p">[</span><span class="n">cats.monad.either</span><span class="w"> </span><span class="no">:as</span><span class="w"> </span><span class="n">either</span><span class="p">])</span><span class="w"> </span><span class="o">@</span><span class="p">(</span><span class="nf">m/mlet</span><span class="w"> </span><span class="p">[</span><span class="n">x</span><span class="w"> </span><span class="p">(</span><span class="nb">if-let</span><span class="w"> </span><span class="p">[</span><span class="n">v</span><span class="w"> </span><span class="p">(</span><span class="nf">foo</span><span class="p">)]</span><span class="w"> </span><span class="p">(</span><span class="nf">either/right</span><span class="w"> </span><span class="n">v</span><span class="p">)</span><span class="w"> </span><span class="p">(</span><span class="nf">either/left</span><span class="p">))</span><span class="w"> </span><span class="n">y</span><span class="w"> </span><span class="p">(</span><span class="nb">if-let</span><span class="w"> </span><span class="p">[</span><span class="n">v</span><span class="w"> </span><span class="p">(</span><span class="nf">bar</span><span class="w"> </span><span class="n">x</span><span class="p">)]</span><span class="w"> </span><span class="p">(</span><span class="nf">either/right</span><span class="w"> </span><span class="n">v</span><span class="p">)</span><span class="w"> </span><span 
class="p">(</span><span class="nf">either/left</span><span class="p">))</span><span class="w"> </span><span class="n">z</span><span class="w"> </span><span class="p">(</span><span class="nb">if-let</span><span class="w"> </span><span class="p">[</span><span class="n">v</span><span class="w"> </span><span class="p">(</span><span class="nf">goo</span><span class="w"> </span><span class="n">x</span><span class="w"> </span><span class="n">y</span><span class="p">)]</span><span class="w"> </span><span class="p">(</span><span class="nf">either/right</span><span class="w"> </span><span class="n">v</span><span class="p">)</span><span class="w"> </span><span class="p">(</span><span class="nf">either/left</span><span class="p">))]</span><span class="w"> </span><span class="p">(</span><span class="nf">m/return</span><span class="w"> </span><span class="p">(</span><span class="nf">qux</span><span class="w"> </span><span class="n">x</span><span class="w"> </span><span class="n">y</span><span class="w"> </span><span class="n">z</span><span class="p">)))</span><span class="w"> </span></code></pre></div></div> <p>Which cuts out the nesting and makes a big difference in sufficiently complicated scenarios.</p> <p>It is unclear to me if the category theory would still be a win on a less experienced team. I have a long history of being skeptical of things like this, but it has improved our lives recently.</p> <h5 id="thanks-for-reading">Thanks for reading!</h5> <p>I hope this helps if you’re considering building something real with Clojure.</p> Dan McKinley https://mcfunley.com/ You Can’t Have a Rollback Button 2017-02-28T00:00:00+00:00 2017-02-28T00:00:00+00:00 urn:uuid:0DFE5A8B-4AAA-4E66-919B-B2AF181A33F5 <p>I’ve worked with deploy systems in the past that have a prominent “rollback” button, or a console incantation with the same effect. 
The presence of one of these is reassuring, in that you can imagine that if something goes wrong you can quickly get back to safety by undoing your last change.</p> <p>But the rollback button is a lie. You can’t have a rollback button that’s safe when you’re deploying a running system.</p> <figure> <img src="/assets/images/no-rollback/buffalo.webp" alt="A buffalo blocking the road in Yellowstone" /> <figcaption> The majestic bison is insouciant when monopolizing the push queue, stuck in a debug loop, to the annoyance of his colleagues. </figcaption> </figure> <h5 id="the-old-version-does-not-exist">The Old Version does not Exist</h5> <p>The fundamental problem with rolling back to an old version is that web applications are not self-contained, and therefore they do not have versions. They have a current state. The state consists of the application code and everything that it interacts with. Databases, caches, browsers, and concurrently-running copies of itself.</p> <figure> <img src="/assets/images/no-rollback/wirth.webp" alt="The cover of Niklaus Wirth's Algorithms + Data Structures = Programs" /> <figcaption> What they don’t tell you in school is the percentage of your life as a working programmer that will be spent dealing with the “plus” sign. </figcaption> </figure> <p>You can roll back the SHA the webservers are running, but you can’t roll back what they’ve inflicted on everything else in the system. Well, not without a time machine. If you have a time machine, please use the time machine. Otherwise, the remediation has to occur in the direction of the future.</p> <h5 id="a-demonstration">A Demonstration</h5> <p>Contriving an example of a fault that can’t be rolled back is trivial. 
We can do this by starting with a python script that emulates a simple read-through cache:</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># version1.py </span><span class="kn">from</span> <span class="nn">pymemcache.client.base</span> <span class="kn">import</span> <span class="n">Client</span> <span class="n">c</span> <span class="o">=</span> <span class="n">Client</span><span class="p">((</span><span class="s">'localhost'</span><span class="p">,</span> <span class="mi">11211</span><span class="p">))</span> <span class="n">db</span> <span class="o">=</span> <span class="p">{</span><span class="s">'a'</span><span class="p">:</span> <span class="mi">1</span><span class="p">}</span> <span class="k">def</span> <span class="nf">read_through</span><span class="p">(</span><span class="n">k</span><span class="p">):</span> <span class="n">v</span> <span class="o">=</span> <span class="n">c</span><span class="p">.</span><span class="n">get</span><span class="p">(</span><span class="n">k</span><span class="p">)</span> <span class="k">if</span> <span class="ow">not</span> <span class="n">v</span><span class="p">:</span> <span class="c1"># let’s pretend this reads from the database. 
</span> <span class="n">v</span> <span class="o">=</span> <span class="n">db</span><span class="p">[</span><span class="n">k</span><span class="p">]</span> <span class="n">c</span><span class="p">.</span><span class="nb">set</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">v</span><span class="p">)</span> <span class="k">return</span> <span class="nb">int</span><span class="p">(</span><span class="n">v</span><span class="p">)</span> <span class="k">print</span><span class="p">(</span><span class="s">'value: %d'</span> <span class="o">%</span> <span class="n">read_through</span><span class="p">(</span><span class="s">'a'</span><span class="p">))</span> </code></pre></div></div> <p>We can verify that this works fine:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python version1.py value: 1 </code></pre></div></div> <p>Now let’s consider the case of pushing some bad code over top of it. 
Here’s an updated version:</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># version2.py </span><span class="kn">from</span> <span class="nn">pymemcache.client.base</span> <span class="kn">import</span> <span class="n">Client</span> <span class="n">c</span> <span class="o">=</span> <span class="n">Client</span><span class="p">((</span><span class="s">'localhost'</span><span class="p">,</span> <span class="mi">11211</span><span class="p">))</span> <span class="n">db</span> <span class="o">=</span> <span class="p">{</span><span class="s">'a'</span><span class="p">:</span> <span class="mi">1</span><span class="p">}</span> <span class="k">def</span> <span class="nf">read_through</span><span class="p">(</span><span class="n">k</span><span class="p">):</span> <span class="n">v</span> <span class="o">=</span> <span class="n">c</span><span class="p">.</span><span class="n">get</span><span class="p">(</span><span class="n">k</span><span class="p">)</span> <span class="k">if</span> <span class="ow">not</span> <span class="n">v</span><span class="p">:</span> <span class="c1"># let’s pretend this reads from the database. 
</span> <span class="n">v</span> <span class="o">=</span> <span class="n">db</span><span class="p">[</span><span class="n">k</span><span class="p">]</span> <span class="n">c</span><span class="p">.</span><span class="nb">set</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">v</span><span class="p">)</span> <span class="k">return</span> <span class="nb">int</span><span class="p">(</span><span class="n">v</span><span class="p">)</span> <span class="k">def</span> <span class="nf">write_through</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">val</span><span class="p">):</span> <span class="n">c</span><span class="p">.</span><span class="nb">set</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">val</span><span class="p">)</span> <span class="n">db</span><span class="p">[</span><span class="n">k</span><span class="p">]</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="n">val</span><span class="p">)</span> <span class="c1"># mess up the cache lol </span><span class="n">write_through</span><span class="p">(</span><span class="s">'a'</span><span class="p">,</span> <span class="s">'x'</span><span class="p">)</span> <span class="k">print</span><span class="p">(</span><span class="s">'value: %d'</span> <span class="o">%</span> <span class="n">read_through</span><span class="p">(</span><span class="s">'a'</span><span class="p">))</span> </code></pre></div></div> <p>That corrupts the cache, and promptly breaks:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python version2.py ValueError: invalid literal for int() with base 10: 'x' </code></pre></div></div> <p>At this point, red sirens are going off all over the office and support reps are sprinting in the direction of our desks. 
So we hit the rollback button, and:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python version1.py ValueError: invalid literal for int() with base 10: b'x' </code></pre></div></div> <p>Oh no! It’s still broken! We can’t resolve this problem by rolling back. We’re lucky that in this case, nothing has been made worse. But that is also a possibility. There’s no guarantee that the path from v1 to v2 and then back to v1 isn’t actively destructive.</p> <p>A working website can eventually be resurrected by writing some new code to cope with the broken data.</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">read_through</span><span class="p">(</span><span class="n">k</span><span class="p">):</span> <span class="n">v</span> <span class="o">=</span> <span class="n">c</span><span class="p">.</span><span class="n">get</span><span class="p">(</span><span class="n">k</span><span class="p">)</span> <span class="k">if</span> <span class="ow">not</span> <span class="n">v</span><span class="p">:</span> <span class="c1"># let’s pretend this reads from the database. </span> <span class="n">v</span> <span class="o">=</span> <span class="n">db</span><span class="p">[</span><span class="n">k</span><span class="p">]</span> <span class="n">c</span><span class="p">.</span><span class="nb">set</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">v</span><span class="p">)</span> <span class="k">try</span><span class="p">:</span> <span class="k">return</span> <span class="nb">int</span><span class="p">(</span><span class="n">v</span><span class="p">)</span> <span class="k">except</span> <span class="nb">ValueError</span><span class="p">:</span> <span class="c1"># n.b. 
we screwed up some of the cached values on $DATE, </span> <span class="c1"># this remediates </span> <span class="n">v</span> <span class="o">=</span> <span class="n">db</span><span class="p">[</span><span class="n">k</span><span class="p">]</span> <span class="n">c</span><span class="p">.</span><span class="nb">set</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">v</span><span class="p">)</span> <span class="k">return</span> <span class="nb">int</span><span class="p">(</span><span class="n">v</span><span class="p">)</span> </code></pre></div></div> <p>You might dispute the plausibility of a mistake as transparently daft as this. But in my career I’ve carried out conceptually similar acts of cache destruction many times. I’m not saying I’m a great programmer. But then again maybe you aren’t, either.</p> <h5 id="a-sharp-knife-whose-handle-is-also-a-knife">A Sharp Knife, Whose Handle is also a Knife</h5> <p>Adding a rollback button is not a neutral design choice. It affects the code that gets pushed. If developers incorrectly believe that their mistakes can be quickly reversed, they will tend to take more foolish risks. It might be hard to <a href="https://medium.com/@mcfunley/mistakes-you-apparently-just-have-to-make-yourself-cc2dd2bfc25c">talk them out of it</a>.</p> <p>Mounting a rollback button within easy reach (as opposed to <code class="language-plaintext highlighter-rouge">git revert</code>, which you <a href="https://twitter.com/simonw/status/835975770740670464">probably have to google</a>) means that it’s more likely to be pressed carelessly in an emergency. <em>Panic buttons are for when you’re panicking.</em></p> <h5 id="practice-small-corrections">Practice Small Corrections</h5> <p>Pushbutton rollback is a bad idea. The only sensible thing to do is change the way we organize our code for deployment.</p> <ul> <li><strong>Push “dark” code</strong>. 
You should be deploying code behind a disabled feature flag that will not be invoked. It’s relatively easy to <a href="/ship-small-diffs">visually inspect an if statement for correctness</a> and check that a flag is disabled.</li> <li><strong>Ramp up invocations of new code</strong>. Breaking requests without a quick rollback path is bad. But it’s much worse to break 100% of requests than it is to break 1% of requests. If we ramp up new code gradually, we can often contain the scope of the damage.</li> <li><strong>Maintain off switches</strong>. In the event that a complicated remediation is required, we’re in a stronger position if we can disable broken features while we work on them in relative calm.</li> <li><strong>Roll forward</strong>. Production pushes will include many commits, all of which need to be evaluated for reversibility when a complete rollback is proposed. Reverting smaller diffs as a roll-forward is <a href="/ship-small-diffs">more verifiable</a>.</li> </ul> <p>Complete deployment rollbacks are high-G maneuvers. The implications of initiating one given a nontrivial set of changes are impossible to reason about. You may decide that one is called for, but you should do this as a last resort.</p> Dan McKinley https://mcfunley.com/ Ship Small Diffs 2017-02-09T00:00:00+00:00 2017-02-09T00:00:00+00:00 urn:uuid:2A742D25-3179-4DE2-B6C8-ABFA36FA2D50 <p>Building a web application is a young and poorly-understood activity. Toolchains for building code in general are widely available, relatively older, and they also happen to be closest at hand when you’re getting started. The tendency, then, is to pick some command line tools and work forwards from their affordances.</p> <p>Git provides methods for coping with every merge problem conceivable. It also gives us support for arbitrarily complicated branching and tagging schemes. 
Many people reasonably conclude that it makes sense to use those features all the time.</p> <figure> <img src="/assets/images/dante.jpg" /> <figcaption class="text-center">I found myself in a dark wood, where the straight way was lost. The good lord would not have given me this 25 ton hydraulic splitter if I weren’t meant to cut up some logs.</figcaption> </figure> <p>This is a mistake. You should start from practices that work operationally, and follow the path backwards to what is done in development. Even <a href="https://datadriven.club">allowing for discardable MVP’s</a>, ultimately in a working business <a href="https://boringtechnology.club">most of the cost of software is in operating it, not building it</a>.</p> <p>I’ll make the case for one practice that works very well operationally: deploying small units of code to production on a regular basis. I think that your deploys should be measured in dozens of lines of code rather than hundreds. You’ll find that taking this as a fixed point requires only relatively simple uses of revision control.</p> <h5 id="ship-small-diffs-and-stand-a-snowballs-chance-of-inspecting-them-for-correctness">Ship small diffs, and stand a snowball’s chance of inspecting them for correctness.</h5> <p>Your last chance to avoid broken code in production is just before you push it, and to that end many teams think it’s a good idea to have standard-ish code reviews. This isn’t wrong, but return on effort diminishes.</p> <p>Submitting hundreds of lines of code for review is a large commitment. It encourages sunk cost thinking and entrenchment. Reviews for large diffs are closed with a single “lgtm,” or miss big-picture problems for the weeds. 
Even the strongest cultures have reviews that devolve into Maoist struggle sessions about whitespace.</p> <figure> <img src="/assets/images/there-are-four-lights.png" /> <figcaption class="text-center">Your tormentors will demand baffling, seemingly-trivial concessions.</figcaption> </figure> <p>Looking at a dozen lines for mistakes is the sort of activity that is reasonably effective without being a burden. This will not prevent all problems, or even fail to create any new ones. But as a process it is a mindful balance between the possible and the practical.</p> <h5 id="ship-small-diffs-because-code-isnt-correct-until-its-running-production">Ship small diffs, because code isn’t correct until it’s running production.</h5> <p>The senior developer’s conditioned emotional response to a large deploy diff is abject terror. This is an instinctive understanding of a simple relationship.</p> <figure> <img src="/assets/images/poppies.jpg" /> <figcaption class="text-center">Quick, find the red one</figcaption> </figure> <p>Every line of code has some probability of having an undetected flaw that will be seen in production. Process can affect that probability, but it cannot make it zero. Large diffs contain many lines, and therefore have a high probability of breaking when given real data and real traffic.</p> <p>In online systems, you have to ship code to prove that it works.</p> <h5 id="ship-small-diffs-because-the-last-thing-you-changed-is-probably-setting-those-fires">Ship small diffs, because the last thing you changed is probably setting those fires.</h5> <p>We cannot prevent all production problems. They will happen. And when they do, we’re better off when we’ve been pushing small changesets.</p> <p>Many serious production bugs will make themselves evident <a href="https://github.com/danluu/post-mortems#config-errors">as soon as they’re pushed out</a>. If a new database query on your biggest page is missing an index, you will probably be alerted quickly. 
When this happens, it is reasonable to assume that the last deploy contains the flaw.</p> <figure> <img src="/assets/images/oops.png" /> <figcaption class="text-center">Oops</figcaption> </figure> <p>At other times, you’ll want to debug a small but persistent problem that’s been going on for a while. The key pieces of information useful to solving such a mystery are when the problem first appeared, and what was changed around that time.</p> <p>In both of these scenarios, the debugger is presented with a diff. Finding the problem in the diff is similar to code review, but worse. It’s a code review performed under duress. So the time to recover from problems in production will tend to be proportional to the size of the diffs that you’re releasing.</p> <h5 id="taking-small-diffs-seriously">Taking Small Diffs Seriously</h5> <p>Human frailty limits the efficacy of code review for prophylactic purposes. Problems in releases are inevitable, and scale with the amount of code released. The time to debug problems is a function of (among other things) the volume of stuff to debug.</p> <p>This isn’t a complicated list of precepts. But taking them to heart leads you to some interesting conclusions.</p> <ul> <li><strong>Branches have inertia, and this is bad</strong>. I tell people that it’s fine with me if working in a branch helps them, as long as I’m not ever able to tell for sure that they’re doing it. It’s easier to double down on a branch than it is to merge and deploy, and developers fall into this tiger trap all the time.</li> <li><strong>Lightweight manipulation of source code <em>is fine</em></strong>. PRs of GitHub branches are great. But <code class="inline">git diff | gist -optdiff</code> also works reasonably if we are talking about a dozen lines of code.</li> <li><strong>You don’t need elaborate Git release rituals</strong>. 
Ceremony such as tagging releases gets to feel like a waste of time once you are releasing many times per day.</li> <li><strong>Your real problem is releasing frequently</strong>. Limiting the amount of code you push is going to block progress, unless you can simultaneously increase the rate of pushing code. This is not as easy as it sounds, and it will shift the focus of your developer tooling budget in the direction of software built with this goal in mind.</li> </ul> <p>That is not an exhaustive list. Starting from operations and working backwards has led us to critically examine what we do in development, and this is a good thing.</p> Dan McKinley https://mcfunley.com/ No Way Out But Through 2016-08-25T00:00:00+00:00 2016-08-25T00:00:00+00:00 urn:uuid:23F1A7A3-7E3B-40CC-B9F5-296BD4E95129 <p><em><strong>Note</strong>: This was a post for Skyliner, which was a startup I co-founded in 2016. The post is recreated here since it makes some good points and was reasonably popular. But be advised the startup it describes is now defunct (we sold ourselves to Mailchimp in 2017).</em></p> <hr /> <p>I’ve been around long enough to see production releases done a few different ways.</p> <p>My first tech job began back when delivering software over the internet wasn’t quite normal yet. Deployments happened roughly every 12 to 18 months, and they were unmitigated disasters that stretched out for weeks.</p> <p>When I got to <a href="https://etsy.com">Etsy</a> in 2007, deploys happened a bit more often. But they were still arcane and high-stress affairs. An empowered employee typing commands manually pushed weeks of other people’s work, and often it <em>Did Not Go Well</em>.</p> <p>But by the time I left Etsy in 2014, we were pushing code live to production <a href="https://www.youtube.com/watch?v=AwOG65UGAH4">dozens of times per day, with minimal drama</a>. 
This experience has convinced me of a few things.</p> <ol> <li>Changing code is risky.</li> <li>Unfortunately, achieving business goals generally involves changing code.</li> <li>The best coping strategy I’m aware of is to change code as frequently as possible.</li> </ol> <p>I believe deploys should be <em>routine, cheap, and safe</em>. That is the philosophy we’ve used to build Skyliner, and we built Skyliner with the intent of sharing this philosophy with other teams.</p> <h5 id="routine">Routine</h5> <p>In deployment, the <a href="https://frequentdeploys.club">path of least resistance should also be the right way to do it</a>. It should be easier and quicker to deploy the right way than to circumvent the process. Making “proper” deploys more complex, slower, or riddled with manual steps backfires. Human nature will lead to chaotic evil, like hand-edited files on production machines.</p> <p>I’ve been there. I have debugged more than one outage precipitated by live edits to php.ini. Our team worked hard in the years following those incidents to build a deployment system that was too easy and joyful to evade.</p> <h5 id="cheap">Cheap</h5> <p>Deploys can only be routine if they’re relatively quick. If it takes you hours to deploy your code, obviously this imposes a natural limit on how often deploys can be done. But the secondary effects of the latency are worse.</p> <p>Rare, expensive deploys bundle many changes; quick, cheap deploys can bundle just a few. This becomes important when things don’t go as planned. 
The most plausible answer to “what went wrong” is usually “the last thing we changed.” So when debugging a problem in production, it matters a great deal whether the release diff is a handful of lines or thousands.</p> <figure> <img src="/assets/images/noway/forum-posts.webp" alt="A graph of forum posts spiking up after a deploy, which is indicated on the graph with a vertical red line" /> <figcaption> Many interesting things in the field of web operations immediately follow a code deploy. Here’s the record of me causing mass hysteria with several pushes, <a href="https://www.etsy.com/codeascraft/track-every-release/">back in 2010</a>. </figcaption> </figure> <p>Infrequent deploys also create natural deadlines. Engineers will tend to rush to get their changes in for a weekly push, and rushing leads to mistakes. If pushes happen hourly, the penalty for waiting for the next one to write a few more unit tests is much less severe.</p> <h5 id="safe">Safe</h5> <p>Total safety in deploying code is not possible, and the deployment engine is only one part of the operational puzzle. Striving for a purely technical solution to deploy-driven outages is bound to lead to complexity that will have the opposite effect. As I’ve explained, I think that routine and cheap deploys are inherently safer, and these are cultural choices as much as they are a set of technical solutions.</p> <p>But, mechanics are still important. Early versions of <a href="https://github.com/etsy/deployinator">Etsy’s Deployinator</a> stopped pushing code if the browser of the person performing the deploy disconnected. That was a bad choice, and that became evident immediately the first time I tried to deploy from an airplane somewhere over Kansas. 
That’s ridiculous, but many teams use a single machine to orchestrate deployments and just hope that it never dies in the act.</p> <figure> <img src="/assets/images/noway/deployinator.webp" alt="Screenshot of Etsy's deployinator tool" /> <figcaption> Etsy's Deployinator, an inspiration for much of the Skyliner deployment experience. </figcaption> </figure> <p>It is also nontrivial to replace code as it’s running. In the bad old days we’d just do deploys during maintenance windows, but that’s become untenable. In the 21st century we have to make changes to sites while they’re live, and getting this right is a challenge.</p> <h5 id="baking-hard-lessons-into-skyliner">Baking Hard Lessons Into Skyliner</h5> <p>Skyliner deploys are easy to use: you just wait for the build to finish and press the button. They’re all logged and recorded, and it’d take significantly more effort to do anything less safe.</p> <figure> <img src="/assets/images/noway/skyliner.webp" alt="Screenshot of an application in Skyliner (RIP)" /> <figcaption> The deployment view in a Skyliner application. </figcaption> </figure> <p>We value simplicity, and are believers in <a href="http://www.paulhammond.org/2010/06/trunk/">Paul Hammond’s advice that you should always ship trunk</a>. Skyliner affords you a single deployment branch. You’re free to act out baroque git contortions if you wish, but we suggest that you keep your release process simple and just deploy a master branch.</p> <p>We’ve worked hard to make Skyliner deploys as fast as possible. The speed of deploys is decoupled from the instance count, so pushes to small clusters as well as large clusters can both be expected to finish in two or three minutes.</p> <p>That’s not quite as fast as might be possible with a system that just copied files, but Skyliner deploys are much more than this. We think that the benefits are worth a minor amount of extra waiting.</p> <p>Our engine models each deploy as a finite state machine. 
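To make the state-machine idea concrete, here is a minimal sketch; the state names, task bodies, and dict shape are invented for illustration and are not Skyliner’s actual engine:

```python
# A deploy as a finite state machine. Each task is idempotent: running it
# twice leaves the deploy unchanged, so a retry after a crash is harmless.

STATES = ["building", "provisioning", "health_checking", "switching", "done"]

TASKS = {
    "building": lambda d: d.setdefault("artifact", "build-1"),
    "provisioning": lambda d: d.setdefault("cluster", "cluster-2"),
    "health_checking": lambda d: d.setdefault("healthy", True),
    "switching": lambda d: d.setdefault("live", d["cluster"]),
}

def advance(deploy):
    """Run the idempotent task for the current state, then move to the next.

    Any worker can call this at any time; if a worker dies mid-task,
    another can simply retry without corrupting the deploy.
    """
    state = deploy["state"]
    if state == "done":
        return deploy
    TASKS[state](deploy)
    deploy["state"] = STATES[STATES.index(state) + 1]
    return deploy

deploy = {"state": "building"}
while deploy["state"] != "done":
    advance(deploy)  # in practice, competing workers would each call this
```

Because every step is a no-op on retry, losing the orchestrating machine mid-deploy costs nothing but a retry.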
Workers cooperate to complete (idempotent) tasks to advance the deploy state, which means that our instances can die without breaking running deploys.</p> <figure> <img src="/assets/images/noway/skyliner-arch.webp" alt="Skyliner (RIP) deployment architecture diagram" /> <figcaption> The coordination of Skyliner deploys is distributed. Deploy workers advance a finite state machine, and can safely be killed without breaking a running deploy. </figcaption> </figure> <p>Every Skyliner deploy is a <a href="http://martinfowler.com/bliki/BlueGreenDeployment.html">blue/green deploy</a>. We spin up an entirely new cluster with the new code, make sure it’s healthy, and then make it live as an atomic switch at the load balancer level. This has a few notable advantages over deploying files in place:</p> <ul> <li>Given a sufficiently good healthcheck, the system never makes a totally-broken version live. (Application bugs, regrettably, are still possible.)</li> <li>By routinely destroying the entire cluster, we eliminate the possibility that the <a href="http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html">application has inadvertently become reliant upon local machine state</a>.</li> </ul> <h5 id="my-gray-hairs-grown-for-you">My Gray Hairs, Grown For You</h5> <p>Deployment is tricky business. We wanted to give Skyliner users a system informed by several decades of our own mistakes. 
“Well, that sucked,” I said to myself, “but there’s no reason that the rest of the world needs to trip over the same cord.”</p> Dan McKinley https://mcfunley.com/ The Unreasonable Effectiveness of Mathematics in Planning 2016-02-03T00:00:00+00:00 2016-02-03T00:00:00+00:00 urn:uuid:0db0d98a-fb9b-cc6a-94a8-c4506f128ce4 <p>I was speaking on a panel the other day that was handed the topic, “the challenges of balancing data-light product bets vs purely data driven incremental improvements.” <a href="https://twitter.com/skamille">Camille Fournier</a> was also a panelist and wrote up her thoughts <a href="http://whilefalse.blogspot.com/2016/01/qualitative-or-quantitative-but-always.html">here</a>. Camille’s take (which I think is right) is that even if you don’t have data to work from, you can still approach projects analytically.</p> <p>For me, the process of behaving analytically incorporates mathematical reasoning but not necessarily <em>data</em>. And I think this kind of spitballing is a useful activity, even if the numbers are made up. The reason for this is that human brains were forged on the African savanna where nothing is very fast, very large, or very small, cosmically speaking, and we are laughably equipped for coping with orders of magnitude.</p> <figure> <img src="http://i.imgur.com/bIMKyZl.jpg" /> <figcaption>That is also why you think this looks awesome, but don't let that spoil it for you.</figcaption> </figure> <p>The kind of thinking I’m describing works like this: <em>“ok that’s a thing measured in thousands multiplied by a thing measured in tens of thousands, and then filtered through a rate of a few percent, are we even close?”</em>. When permitted to skip this check on deficient intuition, most humans will sense their way to the wrong answers.</p> <p>But on the panel and in subsequent discussions, it’s been easy to run with the dichotomy that you’ve either got data to work from, or you’ve got nothing at all. 
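The spitball check described above costs almost nothing to write down. A sketch with entirely made-up, order-of-magnitude numbers (every figure here is invented):

```python
# Back-of-envelope check with invented inputs.
audience = 5_000             # a thing measured in thousands
actions_per_person = 20_000  # a thing measured in tens of thousands
rate = 0.02                  # filtered through a rate of a few percent

outcomes = audience * actions_per_person * rate
print(f"roughly {outcomes:,.0f} outcomes")  # are we even close?
```

The point is not the precision of the inputs, but that multiplying them out at all catches order-of-magnitude delusions that intuition sails right past.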
The temptation is to jump into philosophical takes given examples of products or entire markets that could not have been calculated with foresight before they existed. While that’s valid, I think it doesn’t describe most of the situations that you encounter in the wild.</p> <h5 id="data-exists-and-we-dont-want-to-look">Data Exists, and We Don’t Want to Look</h5> <p>The daily grind at a company consists of building in proximity to a thing that’s satisfying some definition of “working.” Yes, there’s always the innovator’s dilemma to worry about and the prospect of weird new platforms that will enable use cases you don’t understand yet. But the degree to which we’re striking out into the <em>undiscovered country</em> is overstated.</p> <p>Companies release products that you’d figure shouldn’t have survived opportunity analysis all the time. They just don’t pitch them that way:</p> <blockquote> <p>This feature notifies pairs of individuals that have arranged an unlikely relationship on the internet beforehand. The notifications are delivered two or three times a year, and only if the parties are in close geographic proximity. And they both have an optional iOS app installed. And in this scenario one of the people is known to be in a cohort that tends to not have that iOS app installed. And then at the end of this funnel we’re hoping that some small percentage of these folks will wind up showing up online and buying a thing. <em>Later.</em></p> </blockquote> <p>I have a real launch in mind with that, but I’ve rendered it unrecognizable and absurd by describing it accurately. This isn’t a situation where the volume couldn’t be estimated. If it were, I’d have a harder time lampooning it. This is the neglected scenario: we have all the data we need, but instead of deploying it we shipped something doomed.</p> <p>When you hear people speak in defense of such things, they act out the same misdirection and head straight for the words we use when we’re discussing the iPod. 
<em>You can’t, like, quantify vision, man.</em> What they’re really espousing is the idea that product success obeys an uncertainty principle. If we look at things too closely, the magic disappears. And of course the good vibes would sublimate in this case, because the magic is nonsense.</p> <h5 id="the-hazards-of-narrative-arc">The Hazards of Narrative Arc</h5> <p>Of course, this is not what anyone is actually thinking. Nobody sets out to ignore data on purpose, hoping to improve their chances of failing. You just watched me retcon an ethos onto feral behavior. And in doing so, I am part of the problem.</p> <p>Everyone’s the hero of the novel they’re writing in their heads. That is the human condition. And having <a href="https://en.wikipedia.org/wiki/List_of_artistic_depictions_of_Steve_Jobs">saved a company by inventing a new market</a> is a great narrative arc, which is why we reach for it when we’re actually engaged in something mundane. <a href="/effective-web-experimentation-as-a-homo-narrans">We just systematically find stories too compelling</a>.</p> <p>It is rarely the case that vision can’t be at least sketched using arithmetic. Mathematics is the language we use to describe reality, and vision is generally assumed to have effects <em>in reality.</em> That’s what makes numeric methods more powerful than they should reasonably be. We’re constantly engaged in the art of self-deception, and they force you to snap out of it.</p> Dan McKinley https://mcfunley.com/ Do You Work at Amazon? 2016-01-26T00:00:00+00:00 2016-01-26T00:00:00+00:00 urn:uuid:95ebbab7-0de7-4d46-f064-940561c3ec29 <p><span class="coauthor">Please note that <a href="http://twitter.com/paradosso">Roberto Medri</a> is a coauthor on this post.</span></p> <p><a href="http://continuations.com">Albert Wenger</a> has been one of the VCs I most admire for a long time. 
He was very present in the early days at Etsy, and sat in giving counsel on some, uh, <em>significantly astray</em> engineering team meetings. Albert is a smart, data-driven guy whose values roughly align with my own.</p> <p>That said, I have an axe to grind with his latest post, <a href="http://continuations.com/post/138017572565/dont-mind-the-share-price-hint-it-fluctuates">Don’t Mind the Share Price</a>. In it, Albert deploys the story of Amazon as a warning against focusing too much on how the market values a company. This is the story of Amazon:</p> <p><img src="http://i.imgur.com/PdcjSCu.png" alt="Amazon's historical stock price" /></p> <p>Amazon was riding high in the late 90s, then felt the DotCom burst roughly along with the rest of the tech sector. Albert points out that history has shamed anyone that might’ve judged Amazon on its share price fifteen years ago, since it’s returned north of 2000% in the years since.</p> <blockquote> <p>So whether you are running a tech company, working for one, or investing in one I highly recommend not reading too much into changes in share price. Focus instead on whether your company is making real progress.</p> </blockquote> <p>Albert is careful to stress that you should focus on fundamentals over fluctuations in the price, which is generally good advice. But I think the subtext is clear: <em>don’t be discouraged by even large declines in price, because you might be working at the next Amazon.</em></p> <p>This is a premise that we can investigate quantitatively.</p> <h5 id="the-odds-of-being-an-amazon">The Odds of Being an Amazon</h5> <p>Suppose that we’re working at a public company that’s experienced a decline in its share price of at least 50%, relative to a recent high price. 
We’d like to approximate the odds that this company is going to recover <a ref="#f1" href="#f1" class="footnote">[1]</a>.</p> <p>It turns out that since 2002, there have been <a href="https://github.com/mcfunley/shaken-stocks/blob/master/shaken-stocks.csv">2,132 companies traded on the NASDAQ</a> that fit this description. One of these is indeed Amazon. But how many others are like it?</p> <p>We can take this set of companies and categorize them. Let’s identify companies that wound up being completely wiped out—losing 90% of their remaining value or more—and then all other companies that declined in value. For companies that increased in value, we’ll differentiate those that beat the market (defined as the S&amp;P 500 Index) from those that didn’t. The idea being that you would have been better off just buying an index fund with your cash surplus from working for Google in a parallel universe. And finally, we’ll identify the <a href="https://www.youtube.com/watch?v=zbQTXFJL8lo">miraculous</a>: those companies that return 1000% or more, of which Amazon is one example.</p> <p>If we do that, it looks like this:</p> <table> <thead> <tr> <th>Category</th> <th>Count</th> <th>Percent</th> <th>Cumulative Percent</th> </tr> </thead> <tr class="negative"> <td><strong>Wiped Out</strong></td> <td>239</td> <td>11.21%</td> <td>11.21%</td> </tr> <tr class="negative"> <td><strong>Declined</strong></td> <td>794</td> <td>37.24%</td> <td>48.45%</td> </tr> <tr class="negative"> <td><strong>Beaten by Market</strong></td> <td>344</td> <td>16.14%</td> <td>64.59%</td> </tr> <tr class="positive"> <td><strong>Beat Market</strong></td> <td>661</td> <td>31.00%</td> <td>95.59%</td> </tr> <tr class="positive"> <td><strong>Miracle</strong></td> <td>94</td> <td>4.41%</td> <td>100.00%</td> </tr> </table> <p>Here we can see that about 65% of public companies that find themselves in this situation don’t recover. But 35% of companies do.
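</p> <p>The bucketing is simple enough to sketch. This is not the exact logic in the linked repository, just the category boundaries as defined above (returns are fractional, so 0.10 means +10%):</p>

```python
def categorize(company_return, market_return):
    # Bucket a stock's subsequent return, per the definitions above.
    if company_return <= -0.90:
        return "Wiped Out"         # lost 90%+ of its remaining value
    if company_return < 0:
        return "Declined"
    if company_return <= market_return:
        return "Beaten by Market"  # positive, but an index fund did better
    if company_return >= 10.0:
        return "Miracle"           # returned 1000% or more
    return "Beat Market"
```

<p>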
These are tough odds, but definitely not impossible odds, right?</p> <h5 id="recovery-is-not-good-enough">Recovery is not Good Enough</h5> <p>Albert asks us to consider investors, officers, and employees of the company as having roughly identical situations. This is a mistake. Things are significantly worse in the case of employees <a ref="#f2" href="#f2" class="footnote">[2]</a> at a public company who have been issued options. In these cases, the company may very well recover, but we have to contemplate several other horrifying possibilities.</p> <ul> <li>Employees may have already exercised options at a strike price higher than the current market price. If so, they’re screwed if the company never recovers above that price. Even if the company beats the market from here out.</li> <li>The strike price may be below the current market price, meaning that the options are worth something. But employees may owe taxes (or AMT), forcing them to sell before the recovery.</li> <li>Options may be underwater and worthless. At least in this scenario, there is clarity.</li> </ul> <p>From these situations we can see that as an employee <a ref="#f3" href="#f3" class="footnote">[3]</a>, it makes sense to consider the odds that the company will not just recover, but will ultimately get back to where it was.
That looks like this:</p> <table class="table table-striped"> <thead> <tr> <th>Category</th> <th>Count</th> <th>Percent</th> <th>Cumulative Percent</th> </tr> </thead> <tr class="negative"> <td><strong>Wiped Out</strong></td> <td>239</td> <td>11.21%</td> <td>11.21%</td> </tr> <tr class="negative"> <td><strong>Declined</strong></td> <td>794</td> <td>37.24%</td> <td>48.45%</td> </tr> <tr class="negative"> <td><strong>Beaten by Market</strong></td> <td>344</td> <td>16.14%</td> <td>64.59%</td> </tr> <tr class="negative"> <td><strong>Recovered Below High Price</strong></td> <td>210</td> <td>9.84%</td> <td>75.04%</td> </tr> <tr class="positive"> <td><strong>Beat Market</strong></td> <td>441</td> <td>20.66%</td> <td>95.69%</td> </tr> <tr class="positive"> <td><strong>Miracle</strong></td> <td>92</td> <td>4.31%</td> <td>100.00%</td> </tr> </table> <p>This makes it worse: <strong>75% of companies won’t recover using this definition</strong>. And only about 4% will make miraculous comebacks of Amazon’s order of magnitude.</p> <h5 id="are-you-making-progress">Are You Making Progress?</h5> <p>Remember that Albert provides us with an important caveat: we should “[f]ocus … on whether the company is making real progress.” But this can be tricky to surmise as an employee, for several reasons:</p> <ul> <li>You are in unavoidably close proximity to a coordinated propaganda campaign. It’s called <em>the company’s internal communications and morale efforts.</em> You may find yourself thinking unreasonably positively about these things.</li> <li>You are putting in hours at this company, and human nature compels us to confuse effort with progress.</li> <li>Remember that we’re talking about a public company. So unless you’re an officer, you’ll have a difficult time getting detailed information about how much progress the company is really making.
And of course timing trades on such information would be <em>illegal</em>.</li> </ul> <p>We should agree that the outlook here is going to be hazy at best, and self-deception is a hazard.</p> <h5>The Base Rate Fallacy&rsquo;s Perverse Tyranny Over the American <a ref="#f4" href="#f4" class="footnote">[4]</a> Mind</h5> <p>If there is any line of reasoning that really drives me crazy, it’s the following:</p> <ul> <li>A series of cosmically unlikely events has unfolded.</li> <li>This is submitted as evidence that <em>it can happen to anyone.</em></li> </ul> <p>Examples of this are everywhere. Someone is going to win Powerball, therefore it makes sense to buy tickets. Barack Obama was elected president, therefore systematic racism is toothless. Mark Zuckerberg struck it rich, so you’ve just gotta have faith.</p> <figure> <img src="http://i.imgur.com/0Jb88Db.png" alt="By the way this guy also thinks that picking your own numbers gives you a higher chance of winning." /> <figcaption>By the way this guy also thinks that picking your own numbers gives you a higher chance of winning.</figcaption> </figure> <p>In looking to Amazon (or Google, Facebook, Netflix, or dear god <em>Apple</em>) as consolation in the event that a company has experienced a decline in share price, we make the following mistake. <strong>The probability that successful companies have stumbled in their past is not the probability that a company will succeed, having stumbled.</strong></p> <p>This isn’t a call for nihilism if you find yourself in such a situation. Far from it—it’s a call to realize that the odds are now against you, and to behave proactively.</p> <hr /> <p><em>The code and data for this article are available <a href="https://github.com/mcfunley/shaken-stocks">here, on Github</a>. It’s a bit sloppy and hastily written, sorry. We started from a dataset of companies traded on the NASDAQ that experienced a decline of 50% or more off of a previous high.
Our dataset started around the year 2000.</em></p> <hr /> <ol class="foot-note-list"> <li> <a name="f1"></a> You may notice that I've switched questions, from "are you working at Amazon" to "is the company Amazon." Calculating the odds that you are working at Amazon would of course require a richer dataset that includes company headcounts, and I am a lazy man. </li> <li> <a name="f2"></a> Investors can more easily scale their commitment to the company by having a diverse portfolio. Employees and officers, however, give 100% of their labor to the company. And in the event that things go well, a large percentage of their net worth derives from the value of the company. Officers have a high floor on their returns, via guaranteed bonuses, parachute provisions, accelerated vesting schedules in the event of termination, and so on. Employees, on the other hand, are screwed. </li> <li> <a name="f3"></a> This refinement doesn't apply to all employees. Early employees probably have strike prices that are very low, and can make money despite a large drop in the share price. But at a newly-minted public company, <em>most</em> employees are probably new, and <em>most</em> employees are therefore affected. </li> <li><a name="f4"></a>I know that Albert Wenger is German.</li> </ol> Dan McKinley https://mcfunley.com/ Are My Push Notifications Driving Users Away?
2015-11-24T00:00:00+00:00 2015-11-24T00:00:00+00:00 urn:uuid:a59ff9ac-0706-8ec9-179c-c94d942094ad <p>In response to <a href="https://twitter.com/kellan">Kellan’s</a> musing about push notifications on twitter, <a href="http://twitter.com/mccue">Adam McCue</a> asked an interesting question:</p> <blockquote align="center" class="twitter-tweet" lang="en"><p lang="en" dir="ltr"><a href="https://twitter.com/kellan">@kellan</a> <a href="https://twitter.com/mcfunley">@mcfunley</a> what's the best way to do this?</p>&mdash; Adam McCue (@mccue) <a href="https://twitter.com/mccue/status/669386580059099136">November 25, 2015</a></blockquote> <p>I quickly realized that fitting an answer into tweets was hopeless, so here’s a stab at it in longform.</p> <h5 id="how-would-we-do-this">How would we do this?</h5> <p>Let’s come up with a really simple way to figure this out for the case of a single irritating notification. This is limited, but the procedure described ought to be possible for anyone with a web-enabled mobile app. We need:</p> <ol> <li>A way to divide the user population into two groups: a treatment group that will see the ad notification, and a control group that won’t.</li> <li>A way to decide if users have disappeared or not.</li> </ol> <p>To make the stats as simple as possible, we need (1) to be random and we need (2) to be a <a href="http://homepages.wmich.edu/~bwagner/StatReview/Binomial/binomial%20probabilities.htm">binomial measure</a> (i.e. “yes or no,” “true or false,” “heads or tails,” etc).</p> <p>To do valid (simple) stats, we also want our trials to be <em>independent</em> of each other. If we send the same users the notifications over and over, we can’t consider each of those to be independent trials. It’s easy to intuit why that might be: I’m more likely to uninstall your app after the fifth time you’ve bugged me <a href="#f0" ref="#f0" class="footnote">[1]</a>. 
So we need to consider disjoint sets of users on every day of the experiment.</p> <figure> <img src="http://i.imgur.com/Dy6loZn.png" /> <figcaption>Does this hurt us or help us? <a href="http://store-xkcd-com.myshopify.com/products/try-science">Let's try science.</a></figcaption> </figure> <p>How to randomly select users to receive the treatment under these conditions is up to you, but one simple way that should be broadly applicable is just hashing the user ID. Say we need 100 groups of users: both a treatment and control group for 50 days. We can hash the space of all user IDs down to 100 buckets <a ref="#f1" href="#f1" class="footnote">[2]</a>.</p> <p>So how do we decide if users have disappeared? Well, most mobile apps make HTTP requests to a server somewhere. Let’s say that we’ll consider a user to be “bounced” if they don’t make a request to us again within some interval.</p> <p>Some people will probably look at the notification we sent (resulting in a request or two), but be annoyed and subsequently uninstall. We wouldn’t want to count such a user as happy. So let’s say we’ll look for usage between one day after the notification and six days after the notification. Users who send us a request during that interval will be considered “retained.”</p> <figure> <img class="max-width-75 mb-max-width-100" src="http://i.imgur.com/b7Nl6Ve.png" /> <figcaption>Some examples of our binomial model. We'll call a user retained if they request data from us on any of days two through seven counting from the time of the notification. User 4 in this example is not retained because (s)he only requests data on the day the notification was sent.</figcaption> </figure> <p>To run the experiment properly you need to know how long to run it. That depends a lot on the particulars of your situation: how many people use your app, how often they use it, how valuable the ad notification is, and how severe uninstalls are for you.
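</p> <p>Both the bucketing scheme above and the sample-size arithmetic are easy to sketch. The experiment-name salt comes from footnote [2]; the choice of MD5 and the textbook two-proportion formula are my own, so expect the numbers to differ a little from any particular calculator’s:</p>

```python
import hashlib
from statistics import NormalDist

NUM_BUCKETS = 100  # a treatment and a control cohort for each of 50 days

def bucket(user_id, experiment="ad-notification"):
    # Deterministically assign a user to one of 100 buckets. Salting the
    # hash with the experiment name keeps assignments independent across
    # experiments (see footnote [2]).
    key = f"{experiment}:{user_id}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % NUM_BUCKETS

def sample_size_per_group(p_base, lift, alpha=0.05, power=0.80):
    # Standard two-proportion approximation: users needed per group to
    # detect a move from p_base to p_base + lift with the given power.
    p_alt = p_base + lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return (z_alpha + z_power) ** 2 * variance / lift ** 2
```

<p>Treating the 1% as a relative change (60% to 60.6%), this works out to roughly 104,000 users per group; at 5,000 treatment users a day, that is about 21 days of data.</p> <p>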
For the sake of argument, let’s say:</p> <ul> <li>We can find disjoint sets of 10,000 users making requests to us on any given day, daily, for a long time.</li> <li>(As discussed) we’ll put 50% of them in the treatment group.</li> <li>Currently, 60% of the people active on a given day will be active again between one and six days after that.</li> <li>We want to be 80% sure that if we move that figure by plus or minus 1%, we’ll know about it.</li> <li>We want to be 95% sure that if we measure a deviation of plus or minus 1%, it’s for real.</li> </ul> <p><a href="http://www.experimentcalculator.com/#lift=1&amp;conversion=60&amp;visits=10000&amp;percentage=50">If you plug all of that into experiment calculator</a> <a href="#f2" ref="#f2" class="footnote">[3]</a> it will tell you that you need 21 days of data to satisfy those conditions. But since we use a trailing time interval in our measurement, we need to wait 28 days.</p> <h5 id="an-example-result">An example result</h5> <p>OK, so let’s say we’ve run that experiment and we have some results. And suppose that they look like this:</p> <table> <thead> <tr> <th>Group</th> <th>Users</th> <th>Retained users</th> <th>Bounced users</th> </tr> </thead> <tr> <td>Treatment</td> <td>210,000</td> <td>110,144</td> <td>99,856</td> </tr> <tr> <td>Control</td> <td>210,000</td> <td>126,033</td> <td>83,967</td> </tr> </table> <p>Using these figures we can see that we’ve apparently decreased retention by 12.6%, and a <a href="https://gist.github.com/mcfunley/b7b9320e7f0bafcbaab2">test of proportions</a> confirms that this difference is statistically significant. Oops!</p> <h5 id="ive-run-the-experiment-now-what">I’ve run the experiment, now what?</h5> <p>You most likely have created the ad notification because you had some positive goal in mind. Maybe the intent was to get people to buy something.
If that’s the case, then you should do an additional computation to see if what you gained in positive engagement outweighs what you’ve lost in users.</p> <h5 id="i-dont-think-i-have-enough-data">I don’t think I have enough data.</h5> <p>You might not have 420,000 users to play with, but that doesn’t mean that the experiment is necessarily pointless. In our example we were trying to detect changes of <em>plus or minus one percent.</em> You can detect more dramatic changes in behavior with smaller sets of users. Good luck!</p> <h5 id="im-sending-reactivation-notifications-to-inactive-users-can-i-still-measure-uninstalls">I’m sending reactivation notifications to inactive users. Can I still measure uninstalls?</h5> <p>In our thought experiment, we took it as a given that users were likely to use your app. Then we considered the effect of push notifications on that behavior. But one reason you might be contemplating sending the notifications is that they’re <em>not</em> using it, and you are trying to reactivate them.</p> <p>If that’s the case, you might want to just measure reactivations instead. After all, the difference between a user who has your app installed but never opens it and a user who has uninstalled your app is mostly philosophical. But you may also be able to design an experiment to detect uninstalls. And that might be sensible if very, very infrequent use of your app can still be valuable.</p> <p>A procedure that might work for you here is to send two notifications. You could then use delivery failures of secondary notifications as a proxy metric for uninstalls.</p> <h5 id="i-want-to-learn-more-about-this-stuff">I want to learn more about this stuff.</h5> <p>As it happens, I recorded <a href="http://shop.oreilly.com/product/0636920040149.do">a video with O’Reilly</a> that covers things like this in more detail.
You might also like <a href="http://www.evanmiller.org/">Evan Miller’s blog</a> and <a href="http://ai.stanford.edu/~ronnyk/ronnyk-bib.html">Ron Kohavi’s publications</a>.</p> <hr /> <ol class="footnote-list"> <li><a name="f0"></a><em>"How many notifications are too many?"</em> is a separate question, not considered here.</li> <li><a name="f1"></a>If you do many experiments, you want to avoid using the <em>same</em> sets of people as control and treatment. So include something based on the name of the experiment in the hash: if user 12345 is in the treatment for 50/50 experiment X, she should be only 50% likely (not 100% likely) to be in the treatment for some other 50/50 experiment Y.</li> <li><a name="f2"></a>The labeling on the tool is for experiments on a website. The math is the same though.</li> </ol> Dan McKinley https://mcfunley.com/ Choose Boring Technology (Expanded, Slide-Based Edition) 2015-07-27T00:00:00+00:00 2015-07-27T00:00:00+00:00 urn:uuid:1432e359-01f1-977f-dd5a-0da6a2c55d5c <p>I gave a spoken word version of <a href="/choose-boring-technology">Choose Boring Technology</a> at OSCON in Portland last week. Here are the slides:</p> <div class="speakerdeck-container"> <div class="speakerdeck-loading"></div> <script id="choose-boring-technology-deck" async="" class="speakerdeck-embed" data-id="454e3843ac184d3f8bcb0e4a50d3811a" data-ratio="1.31113956466069" src="//speakerdeck.com/assets/embed.js"></script> <script>$('#choose-boring-technology-deck').speakerdeck();</script> </div> Dan McKinley https://mcfunley.com/ Choose Boring Technology 2015-03-30T00:00:00+00:00 2015-03-30T00:00:00+00:00 urn:uuid:d62993ee-047e-c4a5-1b11-e986b22566b8 <p>Probably the single best thing to happen to me in my career was having had <a href="http://laughingmeme.org/">Kellan</a> placed in charge of me. I stuck around long enough to see Kellan’s technical decisionmaking start to bear fruit.
I learned a great deal <em>from</em> this, but I also learned a great deal as a <em>result</em> of this. I would not have been free to become the engineer that wrote <a href="/data-driven-products-lean-startup-2014">Data Driven Products Now!</a> if Kellan had not been there to so thoroughly stick the landing on technology choices.</p> <figure> <img src="http://i.imgur.com/FRQKLCy.jpg" /> <figcaption>Being inspirational as always.</figcaption> </figure> <p>In the year since leaving Etsy, I’ve resurrected my ability to care about technology. And my thoughts have crystallized to the point where I can write them down coherently. What follows is a distillation of the Kellan gestalt, which will hopefully serve to horrify him only slightly.</p> <h5 id="embrace-boredom">Embrace Boredom.</h5> <p>Let’s say every company gets about three innovation tokens. You can spend these however you want, but the supply is fixed for a long while. You might get a few more <em>after</em> you achieve a <a href="http://rc3.org/2015/03/24/the-pleasure-of-building-big-things/">certain level of stability and maturity</a>, but the general tendency is to overestimate the contents of your wallet. Clearly this model is approximate, but I think it helps.</p> <p>If you choose to write your website in NodeJS, you just spent one of your innovation tokens. If you choose to use <a href="/why-mongodb-never-worked-out-at-etsy">MongoDB</a>, you just spent one of your innovation tokens. If you choose to use <a href="https://consul.io/">service discovery tech that’s existed for a year or less</a>, you just spent one of your innovation tokens. If you choose to write your own database, oh god, you’re in trouble.</p> <p>Any of those choices might be sensible if you’re a javascript consultancy, or a database company. But you’re probably not. 
You’re probably working for a company that is at least ostensibly <a href="https://www.etsy.com">rethinking global commerce</a> or <a href="https://stripe.com">reinventing payments on the web</a> or pursuing some other suitably epic mission. In that context, devoting any of your limited attention to innovating ssh is an excellent way to fail. Or at best, delay success <a ref="#f1" href="#f1" class="footnote">[1]</a>.</p> <p>What counts as boring? That’s a little tricky. “Boring” should not be conflated with “bad.” There is technology out there that is both boring and bad <a ref="#f2" href="#f2">[2]</a>. You should not use any of that. But there are many choices of technology that are boring and good, or at least good enough. MySQL is boring. Postgres is boring. PHP is boring. Python is boring. Memcached is boring. Squid is boring. Cron is boring.</p> <p>The nice thing about boringness (so constrained) is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood. Anyone who knows me well will understand that it’s only with an overwhelming sense of malaise that I now invoke the spectre of Don Rumsfeld, but I must.</p> <figure> <img src="http://i.imgur.com/n8ElWr3.jpg" /> <figcaption>To be clear, fuck this guy.</figcaption> </figure> <p>When choosing technology, you have both known unknowns and unknown unknowns <a ref="#f3" href="#f3" class="footnote">[3]</a>.</p> <ul> <li>A known unknown is something like: <em>we don’t know what happens when this database hits 100% CPU.</em></li> <li>An unknown unknown is something like: <em>geez it didn’t even occur to us that <a href="http://www.evanjones.ca/jvm-mmap-pause.html">writing stats would cause GC pauses</a>.</em></li> </ul> <p>Both sets are typically non-empty, even for tech that’s existed for decades.
But for shiny new technology the magnitude of unknown unknowns is significantly larger, and this is important.</p> <h5 id="optimize-globally">Optimize Globally.</h5> <p>I unapologetically think a bias in favor of boring technology is a good thing, but it’s not the only factor that needs to be considered. Technology choices don’t happen in isolation. They have a scope that touches your entire team, organization, and the system that emerges from the sum total of your choices.</p> <p>Adding technology to your company comes with a cost. As an abstract statement this is obvious: if we’re already using Ruby, adding Python to the mix doesn’t feel sensible because the resulting complexity would outweigh Python’s marginal utility. But somehow when we’re talking about Python and Scala or MySQL and Redis people <a href="http://martinfowler.com/bliki/PolyglotPersistence.html">lose their minds</a>, discard all constraints, and start raving about using the best tool for the job.</p> <p><a href="https://twitter.com/coda/status/580531932393504768">Your function in a nutshell</a> is to map business problems onto a solution space that involves choices of software. 
If the choices of software were truly without baggage, you could indeed pick a whole mess of locally-the-best tools for your assortment of problems.</p> <figure> <svg width="423px" height="420px" viewBox="0 0 423 420" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns"> <!-- Generator: Sketch 3.2.2 (9983) - http://www.bohemiancoding.com/sketch --> <title>Crazy</title> <desc>Created with Sketch.</desc> <defs></defs> <g id="Page-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd" sketch:type="MSPage"> <g id="Crazy" sketch:type="MSLayerGroup" transform="translate(1.000000, -4.000000)"> <ellipse id="Solutions" stroke="#979797" sketch:type="MSShapeGroup" cx="341.5" cy="229.5" rx="79.5" ry="193.5"></ellipse> <ellipse id="Problems" stroke="#979797" sketch:type="MSShapeGroup" cx="79.5" cy="229.5" rx="79.5" ry="193.5"></ellipse> <g id="arrows" transform="translate(45.000000, 77.000000)" stroke="#D0011B" stroke-width="3" fill="#D0011B" stroke-linecap="square"> <path d="M19.5,26.5 L255.502121,26.5" id="Line" sketch:type="MSShapeGroup"></path> <path id="Line-decoration-1" d="M255.5,26.5 C251.72,25.45 248.48,24.55 244.7,23.5 C244.7,25.6 244.7,27.4 244.7,29.5 C248.48,28.45 251.72,27.55 255.5,26.5 C255.5,26.5 255.5,26.5 255.5,26.5 Z"></path> <path d="M19.5,26.5 L245.5,84.5" id="Line-2" sketch:type="MSShapeGroup"></path> <path id="Line-2-decoration-1" d="M245.186355,84.419507 C241.786016,82.4628271 238.87144,80.7856729 235.471101,78.8289931 C234.94908,80.8630761 234.501633,82.6065758 233.979612,84.6406589 C237.901972,84.5632557 241.263995,84.4969101 245.186355,84.419507 C245.186355,84.419507 245.186355,84.419507 245.186355,84.419507 Z"></path> <path d="M19.5,26.5 L299.5,0.5" id="Line-3" sketch:type="MSShapeGroup"></path> <path id="Line-3-decoration-1" d="M299.296324,0.518912741 C295.435434,-0.177093062 292.126099,-0.773669465 288.265208,-1.46967527 C288.459373,0.621329291 
288.6258,2.41361891 288.819965,4.50462347 C292.486691,3.10962472 295.629598,1.9139115 299.296324,0.518912741 C299.296324,0.518912741 299.296324,0.518912741 299.296324,0.518912741 Z"></path> <path d="M19.5,26.5 L255.502121,26.5" id="Line-4" sketch:type="MSShapeGroup"></path> <path id="Line-4-decoration-1" d="M255.5,26.5 C251.72,25.45 248.48,24.55 244.7,23.5 C244.7,25.6 244.7,27.4 244.7,29.5 C248.48,28.45 251.72,27.55 255.5,26.5 C255.5,26.5 255.5,26.5 255.5,26.5 Z"></path> <path d="M63.5,79.5 L256.5,34.5" id="Line-5" sketch:type="MSShapeGroup"></path> <path id="Line-5-decoration-1" d="M256.327927,34.5401208 C252.408243,34.3758734 249.048513,34.2350899 245.128829,34.0708426 C245.605677,36.1159872 246.014403,37.8689684 246.49125,39.9141131 C249.934087,38.0332157 252.88509,36.4210181 256.327927,34.5401208 C256.327927,34.5401208 256.327927,34.5401208 256.327927,34.5401208 Z"></path> <path d="M63.5,79.5 L301.5,116.5" id="Line-6" sketch:type="MSShapeGroup"></path> <path id="Line-6-decoration-1" d="M300.651315,116.368062 C297.077479,114.749853 294.014192,113.362816 290.440356,111.744607 C290.117761,113.819681 289.84125,115.598316 289.518655,117.67339 C293.415086,117.216525 296.754884,116.824927 300.651315,116.368062 C300.651315,116.368062 300.651315,116.368062 300.651315,116.368062 Z"></path> <path d="M63.5,79.5 L254.5,209.5" id="Line-7" sketch:type="MSShapeGroup"></path> <path id="Line-7-decoration-1" d="M254.464216,209.475644 C251.930146,206.480751 249.758085,203.9137 247.224014,200.918806 C246.042418,202.654845 245.02962,204.142878 243.848024,205.878916 C247.563691,207.137771 250.748549,208.216789 254.464216,209.475644 C254.464216,209.475644 254.464216,209.475644 254.464216,209.475644 Z"></path> <path d="M0.5,115.5 L251.5,216.5" id="Line-8" sketch:type="MSShapeGroup"></path> <path id="Line-8-decoration-1" d="M250.981706,216.291443 C247.866929,213.906268 245.19712,211.861831 242.082342,209.476656 C241.298409,211.424847 240.626466,213.094725 239.842533,215.042916 
C243.741243,215.4799 247.082995,215.854459 250.981706,216.291443 C250.981706,216.291443 250.981706,216.291443 250.981706,216.291443 Z"></path> <path d="M54.5,176.5 L300.5,193.5" id="Line-10" sketch:type="MSShapeGroup"></path> <path id="Line-10-decoration-1" d="M299.914697,193.459552 C296.216079,192.151452 293.045835,191.030224 289.347217,189.722124 C289.202441,191.817128 289.078346,193.612845 288.93357,195.707849 C292.776964,194.920945 296.071303,194.246456 299.914697,193.459552 C299.914697,193.459552 299.914697,193.459552 299.914697,193.459552 Z"></path> <path d="M54.5,176.5 L288.5,273.5" id="Line-11" sketch:type="MSShapeGroup"></path> <path id="Line-11-decoration-1" d="M288.215373,273.382013 C285.125578,270.964562 282.477183,268.892461 279.387389,266.47501 C278.58323,268.41494 277.89395,270.077737 277.089791,272.017667 C280.983745,272.495188 284.321419,272.904492 288.215373,273.382013 C288.215373,273.382013 288.215373,273.382013 288.215373,273.382013 Z"></path> <path d="M11.5,231.5 L287.5,283.5" id="Line-12" sketch:type="MSShapeGroup"></path> <path id="Line-12-decoration-1" d="M286.658962,283.341544 C283.138722,281.609837 280.121373,280.125516 276.601133,278.393809 C276.212321,280.457502 275.879054,282.226381 275.490243,284.290073 C279.399294,283.958088 282.74991,283.673529 286.658962,283.341544 C286.658962,283.341544 286.658962,283.341544 286.658962,283.341544 Z"></path> <path d="M11.5,231.5 L249.5,223.5" id="Line-13" sketch:type="MSShapeGroup"></path> <path id="Line-13-decoration-1" d="M249.36566,223.504516 C245.552519,222.582095 242.284113,221.79145 238.470973,220.869029 C238.541521,222.967844 238.601991,224.766828 238.67254,226.865643 C242.415132,225.689248 245.623068,224.68091 249.36566,223.504516 C249.36566,223.504516 249.36566,223.504516 249.36566,223.504516 Z"></path> <path d="M0.5,115.5 L248.5,156.5" id="Line-9" sketch:type="MSShapeGroup"></path> <path id="Line-9-decoration-1" d="M248.138638,156.440259 C244.580524,154.78777 241.530711,153.371351 
237.972596,151.718862 C237.630068,153.790739 237.336473,155.566633 236.993945,157.63851 C240.894588,157.219122 244.237996,156.859647 248.138638,156.440259 C248.138638,156.440259 248.138638,156.440259 248.138638,156.440259 Z"></path> </g> <g id="problems" transform="translate(33.000000, 91.000000)" stroke="#979797" fill="#4990E2" sketch:type="MSShapeGroup"> <circle id="Oval-3" cx="30" cy="14" r="14"></circle> <circle id="Oval-4" cx="74" cy="66" r="14"></circle> <circle id="Oval-5" cx="14" cy="103" r="14"></circle> <circle id="Oval-6" cx="64" cy="163" r="14"></circle> <circle id="Oval-7" cx="23" cy="219" r="14"></circle> </g> <g id="Solutions" transform="translate(293.000000, 68.000000)" stroke="#979797" fill="#7ED321" sketch:type="MSShapeGroup"> <circle id="Oval-8" cx="26" cy="37" r="14"></circle> <circle id="Oval-9" cx="74" cy="69" r="14"></circle> <circle id="Oval-10" cx="14" cy="99" r="14"></circle> <circle id="Oval-11" cx="71" cy="129" r="14"></circle> <circle id="Oval-12" cx="18" cy="168" r="14"></circle> <circle id="Oval-13" cx="71" cy="205" r="14"></circle> <circle id="Oval-14" cx="22" cy="229" r="14"></circle> <circle id="Oval-15" cx="66" cy="14" r="14"></circle> <circle id="Oval-16" cx="58" cy="289" r="14"></circle> </g> <text id="Problems" sketch:type="MSTextLayer" font-family="Lato" font-size="18" font-weight="normal" fill="#000000"> <tspan x="43" y="18">Problems</tspan> </text> <text id="Technical-Solutions" sketch:type="MSTextLayer" font-family="Lato" font-size="18" font-weight="normal" fill="#000000"> <tspan x="262" y="18">Technical Solutions</tspan> </text> </g> </g> </svg> <figcaption class="text-center">The way you might choose technology in a world where choices are cheap: "pick the right tool for the job."</figcaption> </figure> <p>But of course, the baggage exists. We call the baggage “operations” and to a lesser extent “cognitive overhead.” You have to monitor the thing. You have to figure out unit tests. 
You need to know the first thing about it to hack on it. You need an init script. I could go on for days here, and all of this adds up fast.</p> <figure> <svg width="423px" height="420px" viewBox="0 0 423 420" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns"> <!-- Generator: Sketch 3.2.2 (9983) - http://www.bohemiancoding.com/sketch --> <title>Sane</title> <desc>Created with Sketch.</desc> <defs></defs> <g id="Page-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd" sketch:type="MSPage"> <g id="Sane" sketch:type="MSLayerGroup" transform="translate(1.000000, -4.000000)"> <ellipse id="Solutions-3" stroke="#979797" sketch:type="MSShapeGroup" cx="341.5" cy="229.5" rx="79.5" ry="193.5"></ellipse> <ellipse id="Problems-2" stroke="#979797" sketch:type="MSShapeGroup" cx="79.5" cy="229.5" rx="79.5" ry="193.5"></ellipse> <g id="arrows" transform="translate(51.000000, 102.000000)" stroke="#D0011B" stroke-width="3" fill="#D0011B" stroke-linecap="square"> <path d="M13.5,1.5 L249.5,1.5" id="Line-14" sketch:type="MSShapeGroup"></path> <path id="Line-14-decoration-1" d="M249.5,1.5 C245.72,0.45 242.48,-0.45 238.7,-1.5 C238.7,0.6 238.7,2.4 238.7,4.5 C242.48,3.45 245.72,2.55 249.5,1.5 C249.5,1.5 249.5,1.5 249.5,1.5 Z"></path> <path d="M13.5,1.5 L248.5,120.5" id="Line-15" sketch:type="MSShapeGroup"></path> <path id="Line-15-decoration-1" d="M248.132239,120.313772 C245.23431,117.669362 242.75037,115.402724 239.852441,112.758314 C238.903738,114.631803 238.090564,116.237651 237.141861,118.111141 C240.988493,118.882062 244.285607,119.542851 248.132239,120.313772 C248.132239,120.313772 248.132239,120.313772 248.132239,120.313772 Z"></path> <path d="M57.5,54.5 L249.5,8.5" id="Line-17" sketch:type="MSShapeGroup"></path> <path id="Line-17-decoration-1" d="M249.078398,8.6010088 C245.157787,8.46060711 241.797264,8.34026282 237.876654,8.19986114 C238.365932,10.2420674 
238.785314,11.9925299 239.274592,14.0347362 C242.705924,12.1329316 245.647066,10.5028134 249.078398,8.6010088 C249.078398,8.6010088 249.078398,8.6010088 249.078398,8.6010088 Z"></path> <path d="M0.5,92.5 L240.5,137.5" id="Line-20" sketch:type="MSShapeGroup"></path> <path id="Line-20-decoration-1" d="M240.320814,137.466403 C236.79906,135.737776 233.780414,134.256096 230.25866,132.52747 C229.871654,134.591501 229.539934,136.360671 229.152928,138.424703 C233.061688,138.089298 236.412054,137.801808 240.320814,137.466403 C240.320814,137.466403 240.320814,137.466403 240.320814,137.466403 Z"></path> <path d="M57.5,52.5 L242.5,129.5" id="Line-18" sketch:type="MSShapeGroup"></path> <path id="Line-18-decoration-1" d="M242.1449,129.352202 C239.058585,126.930309 236.413173,124.854402 233.326858,122.432509 C232.51991,124.371281 231.828241,126.033085 231.021292,127.971856 C234.914555,128.454977 238.251637,128.869081 242.1449,129.352202 C242.1449,129.352202 242.1449,129.352202 242.1449,129.352202 Z"></path> <path d="M13.5,1.5 L248.5,183.5" id="Line-16" sketch:type="MSShapeGroup"></path> <path id="Line-16-decoration-1" d="M248.313733,183.355742 C245.968119,180.211065 243.957592,177.515627 241.611978,174.37095 C240.32613,176.031249 239.223974,177.454363 237.938125,179.114662 C241.569588,180.59904 244.68227,181.871364 248.313733,183.355742 C248.313733,183.355742 248.313733,183.355742 248.313733,183.355742 Z"></path> <path d="M0.5,92.5 L253.5,15.5" id="Line-19" sketch:type="MSShapeGroup"></path> <path id="Line-19-decoration-1" d="M253.061904,15.6333334 C249.139957,15.7294168 245.778289,15.8117739 241.856342,15.9078572 C242.467781,17.9168724 242.991872,19.6388854 243.603311,21.6479005 C246.913819,19.542802 249.751397,17.7384319 253.061904,15.6333334 C253.061904,15.6333334 253.061904,15.6333334 253.061904,15.6333334 Z"></path> <path d="M0.5,92.5 L244.5,191.5" id="Line-21" sketch:type="MSShapeGroup"></path> <path id="Line-21-decoration-1" d="M244.204221,191.379991 C241.09632,188.985863 
238.432405,186.933753 235.324504,184.539624 C234.534968,186.485551 233.858223,188.153489 233.068687,190.099416 C236.966124,190.547618 240.306784,190.93179 244.204221,191.379991 C244.204221,191.379991 244.204221,191.379991 244.204221,191.379991 Z"></path> <path d="M49.5,150.5 L258.5,19.5" id="Line-22" sketch:type="MSShapeGroup"></path> <path id="Line-22-decoration-1" d="M257.939322,19.8514296 C254.178828,20.9692764 250.955547,21.9274308 247.195052,23.0452775 C248.310345,24.8246376 249.26631,26.3498034 250.381603,28.1291635 C253.026805,25.2319566 255.29412,22.7486365 257.939322,19.8514296 C257.939322,19.8514296 257.939322,19.8514296 257.939322,19.8514296 Z"></path> <path d="M3.5,207.5 L265.5,22.5" id="Line-23" sketch:type="MSShapeGroup"></path> <path id="Line-23-decoration-1" d="M264.902063,22.9222075 C261.208605,24.2448071 258.042784,25.378464 254.349327,26.7010636 C255.560618,28.4165147 256.598868,29.8869013 257.81016,31.6023523 C260.292326,28.5643016 262.419897,25.9602582 264.902063,22.9222075 C264.902063,22.9222075 264.902063,22.9222075 264.902063,22.9222075 Z"></path> <path d="M3.5,207.5 L243.5,147.5" id="Line-24" sketch:type="MSShapeGroup"></path> <path id="Line-24-decoration-1" d="M243.125198,147.593701 C239.203396,147.491836 235.841853,147.404523 231.920052,147.302658 C232.429376,149.339957 232.865941,151.086214 233.375265,153.123513 C236.787742,151.188079 239.712721,149.529135 243.125198,147.593701 C243.125198,147.593701 243.125198,147.593701 243.125198,147.593701 Z"></path> <path d="M3.5,207.5 L244.5,201.5" id="Line-25" sketch:type="MSShapeGroup"></path> <path id="Line-25-decoration-1" d="M244.425346,201.501859 C240.620384,200.546263 237.358988,199.72718 233.554026,198.771584 C233.606292,200.870934 233.651091,202.670376 233.703357,204.769726 C237.456053,203.625972 240.67265,202.645612 244.425346,201.501859 C244.425346,201.501859 244.425346,201.501859 244.425346,201.501859 Z"></path> </g> <g id="problems-2" transform="translate(33.000000, 91.000000)" 
stroke="#979797" fill="#4990E2" sketch:type="MSShapeGroup"> <circle id="Oval-3" cx="30" cy="14" r="14"></circle> <circle id="Oval-4" cx="74" cy="66" r="14"></circle> <circle id="Oval-5" cx="14" cy="103" r="14"></circle> <circle id="Oval-6" cx="64" cy="163" r="14"></circle> <circle id="Oval-7" cx="23" cy="219" r="14"></circle> </g> <g id="Solutions-2" transform="translate(293.000000, 68.000000)" stroke="#979797" fill="#7ED321" sketch:type="MSShapeGroup"> <circle id="Oval-8" cx="26" cy="37" r="14"></circle> <circle id="Oval-9" cx="74" cy="69" r="14"></circle> <circle id="Oval-10" cx="14" cy="99" r="14"></circle> <circle id="Oval-11" cx="71" cy="129" r="14"></circle> <circle id="Oval-12" cx="18" cy="168" r="14"></circle> <circle id="Oval-13" cx="71" cy="205" r="14"></circle> <circle id="Oval-14" cx="22" cy="229" r="14"></circle> <circle id="Oval-15" cx="66" cy="14" r="14"></circle> <circle id="Oval-16" cx="58" cy="289" r="14"></circle> </g> <text id="Problems-3" sketch:type="MSTextLayer" font-family="Lato" font-size="18" font-weight="normal" fill="#000000"> <tspan x="43" y="18">Problems</tspan> </text> <text id="Technical-Solutions-2" sketch:type="MSTextLayer" font-family="Lato" font-size="18" font-weight="normal" fill="#000000"> <tspan x="262" y="18">Technical Solutions</tspan> </text> </g> </g> </svg> <figcaption class="text-center">The way you choose technology in the world where operations are a serious concern (i.e., "reality"). </figcaption> </figure> <p>The problem with “best tool for the job” thinking is that it takes a myopic view of the words “best” and “job.” Your job is keeping the company in business, god damn it. And the “best” tool is the one that occupies the “least worst” position for as many of your problems as possible.</p> <p>It is basically always the case that the long-term costs of keeping a system working reliably vastly exceed any inconveniences you encounter while building it. 
Mature and productive developers understand this.</p> <h5 id="choose-new-technology-sometimes">Choose New Technology, Sometimes.</h5> <p>Taking this reasoning to its <em>reductio ad absurdum</em> would mean picking Java, and then trying to implement a website without using anything else at all. And that would be crazy. You need some means to add things to your toolbox.</p> <p>An important first step is to acknowledge that this is a process, and a conversation. New tech eventually has company-wide effects, so adding tech is a decision that requires company-wide visibility. Your organizational specifics may force the conversation, or <a href="https://twitter.com/mcfunley/status/578603932949164032">they may facilitate developers adding new databases and queues without talking to anyone</a>. One way or another you have to set cultural expectations that <strong>this is something we all talk about</strong>.</p> <p>One of the most worthwhile exercises I recommend here is to <strong>consider how you would solve your immediate problem without adding anything new</strong>. First, posing this question should detect the situation where the “problem” is that someone really wants to use the technology. If that is the case, you should immediately abort.</p> <figure> <img src="http://i.imgur.com/rmdSx.gif" /> <figcaption>I just watched a webinar about this graph database, we should try it out.</figcaption> </figure> <p>It can be amazing how far a small set of technology choices can go. The answer to this question in practice is almost never “we can’t do it,” it’s usually just somewhere on the spectrum of “well, we could do it, but it would be too hard” <a href="#f4" class="footnote">[4]</a>.
If you think you can’t accomplish your goals with what you’ve got now, you are probably just not thinking creatively enough.</p> <p>It’s helpful to <strong>write down exactly what it is about the current stack that makes solving the problem prohibitively expensive and difficult.</strong> This is related to the previous exercise, but it’s subtly different.</p> <p>New technology choices might be purely additive (for example: “we don’t have caching yet, so let’s add memcached”). But they might also overlap or replace things you are already using. If that’s the case, you should <strong>set clear expectations about migrating old functionality to the new system.</strong> The policy should typically be “we’re committed to migrating,” with a proposed timeline. The intention of this step is to keep wreckage at manageable levels, and to avoid proliferating locally-optimal solutions.</p> <p>This process is not daunting, and it’s not much of a hassle. It’s a handful of questions to fill out as homework, followed by a meeting to talk about it. I think that if a new technology (or a new service to be created on your infrastructure) can pass through this gauntlet unscathed, adding it is fine.</p> <h5 id="just-ship">Just Ship.</h5> <p>Polyglot programming is sold with the promise that letting developers choose their own tools with complete freedom will make them more effective at solving problems. This is a naive definition of the problems at best, and motivated reasoning at worst. The weight of day-to-day operational <a href="https://twitter.com/handler">toil</a> this creates crushes you to death.</p> <p>Mindful choice of technology gives engineering minds real freedom: the freedom to <a href="/effective-web-experimentation-as-a-homo-narrans">contemplate bigger questions</a>. Technology for its own sake is snake oil.</p> <p><em>Update, July 27th 2015: I wrote a talk based on this article. 
You can see it <a href="http://boringtechnology.club">here</a>.</em></p> <hr /> <ol class="footnote-list"> <li> <a name="f1"></a> Etsy in its early years suffered from this pretty badly. We hired a bunch of Python programmers and decided that we needed to find something for them to do in Python, and the only thing that came to mind was creating a pointless middle layer that <a href="https://www.youtube.com/watch?v=eenrfm50mXw">required years of effort to amputate</a>. Meanwhile, the 90th percentile search latency was about two minutes. <a href="http://www.sec.gov/Archives/edgar/data/1370637/000119312515077045/d806992ds1.htm">Etsy didn't fail</a>, but it went several years without shipping anything at all. So it took longer to succeed than it needed to. </li> <li> <a name="f2"></a> We often casually refer to the boring/bad intersection of doom as &ldquo;enterprise software,&rdquo; but that terminology may be imprecise. </li> <li> <a name="f3"></a> In saying this Rumsfeld was either intentionally or unintentionally alluding to <a href="http://en.wikipedia.org/wiki/I_know_that_I_know_nothing">the Socratic Paradox</a>. Socrates was by all accounts a thoughtful individual in a number of ways that Rumsfeld is not. </li> <li> <a name="f4"></a> <p>A good example of this from my experience is <a href="https://speakerdeck.com/mcfunley/etsy-activity-feed-architecture">Etsy&rsquo;s activity feeds</a>. When we built this feature, we were working pretty hard to consolidate most of Etsy onto PHP, MySQL, Memcached, and Gearman (a PHP job server). It was much more complicated to implement the feature on that stack than it might have been with something like Redis (or <a href="https://aphyr.com/posts/283-call-me-maybe-redis">maybe not</a>). But it is absolutely possible to build activity feeds on that stack. </p> <p>An amazing thing happened with that project: our attention turned elsewhere for several years. 
During that time, activity feeds scaled up 20x while <em>nobody was watching it at all.</em> We made no changes whatsoever specifically targeted at activity feeds, but everything worked out fine as usage exploded because we were using a shared platform. This is the long-term benefit of restraint in technology choices in a nutshell. </p> <p>This isn&rsquo;t an absolutist position--while activity feeds stored in memcached was judged to be practical, implementing full text search with faceting in raw PHP wasn't. So Etsy used Solr. </p> </li> </ol> Dan McKinley https://mcfunley.com/ Data Driven Products: Lean Startup 2014 2015-01-27T00:00:00+00:00 2015-01-27T00:00:00+00:00 urn:uuid:f99e4b0b-e3c6-2adc-4d38-eccef199f91a <p>Here’s a video of me doing a slightly-amended version of my <a href="/data-driven-products-now">Data Driven Products</a> talk at the <a href="http://leanstartup.co/">Lean Startup Conference</a> back in December.</p> <iframe class="video" src="//www.youtube.com/embed/SZOeV-S-2co?list=PL1M9pu1POlelJcmYWGv_Oq5FPr0J1XKa5" frameborder="0" allowfullscreen=""></iframe> <p>I am told I <a href="http://en.wikipedia.org/wiki/High_rising_terminal">upspeak</a>? You be the judge.</p> Dan McKinley https://mcfunley.com/ Thoughts on the Technical Track 2014-12-09T00:00:00+00:00 2014-12-09T00:00:00+00:00 urn:uuid:4468bf6c-533e-e431-97ad-16ad3a6bad8b <p>I saw <a href="http://lizthedeveloper.com/how-to-reward-skilled-coders-with-something-other-than-people-management">lizTheDeveloper’s post</a> about technical leadership at Simple and I realized that I’ve been meaning to write about this for a while. I hope to persuade you that there are a number of systemic biases working against a healthy technical career path. I don’t think that they’re insurmountable, and I don’t disagree with Liz’s post. 
But I’ve never heard of a company clearing all of these hurdles at once.</p> <p>I was the first person at Etsy with the title of “Principal Engineer,” which was the technical equivalent to a directorship (i.e., one level below CTO). I’m not saying this to toot my own horn, but rather so that it’s understood that the following comes from someone that was the beneficiary of an existing system.</p> <p>(Incidentally, I think Etsy is an example of a company whose heart is in the right place, and it’s not my intention to single them out.)</p> <h5 id="to-review-management-is-a-job">To Review, Management is a Job</h5> <p>My views on the merits of having a technical track align with those of many people in our industry. Management is a different job, with different skills. They’re not necessarily more <em>difficult</em> skills, they’re just <em>different</em>. By and large they’re unrelated to the day-to-day labor of the people who build technology products.</p> <p>It doesn’t make any sense to divert your technical talent into a discipline where they will need to stop doing technical work. (That’s in the event that they intend to be effective managers, which I concede might be an unrealistic expectation.)</p> <p>Other people have made this case, so I’ll just proceed as if we agree that there must be a way forward for people that are great programmers other than to simply graduate into not programming at all.</p> <p>Having that way forward is an ideal. There is always a gap between our ideals and reality, and we cannot act as though we’ve solved a problem simply by articulating it.</p> <h4 id="fundamental-asymmetries">Fundamental Asymmetries</h4> <h5 id="management-just-happens">Management Just Happens</h5> <p>I have had management responsibility thrust upon me at least four times over the course of my career, and at no point has that been my goal. It just happens. Do you want to be a manager? 
I will now tell you the secret to becoming a manager in a growing company: <em>just wait.</em></p> <p>You have a manager. Eventually, your manager will accrue too many responsibilities, and they will freak out. They will need somebody to take over some of their reports, and that lucky warm body is you.</p> <figure> <img src="/assets/images/homer-manager.png" /> <figcaption class="text-center">Good hair: also helpful.</figcaption> </figure> <p>It is entirely plausible to become a manager accidentally. It might even be the norm.</p> <h5 id="technical-track-promotions-are-post-hoc">Technical Track Promotions are Post-Hoc</h5> <p>The process for minting a new manager is: <em>crap, we need another manager</em>. There’s no symmetrical forcing function pushing people into the upper ranks of technical leadership.</p> <p>Mentorship and technical feedback are things everyone does on a functioning engineering team. A technical track “promotion” is merely additional recognition given to someone who is already performing that role notably well.</p> <p>If the job is already getting done, then filling the job is clearly not a pressing need. Technical promotions are something that happen when it’s convenient, which is generally never.</p> <h5 id="stumping">Stumping</h5> <p>Between the founding of the United States and the end of the 19th century, it was considered tacky for presidential candidates to personally campaign for the job. Instead, they staged an elaborate farce in which they reluctantly answered the call of the nation to serve. Trying to intentionally get a promotion into the technical track is pretty much just like this.</p> <figure> <img src="/assets/images/garfield.jpg" /> <figcaption class="text-center">Getting promoted in the technical track is kind of like being James Garfield.</figcaption> </figure> <p>Your work must be recognized, and this is the rub. 
Let me rephrase: “someone with the power to bestow promotions has to be your fan.” To be promoted you have to be a good mentor, but you also have to worry about playing to an audience. That may be executives, or it may be your peers (and potential competitors). Regardless, you’re running a weird campaign in which actually saying anything directly about wanting the job would be gauche.</p> <p>The most qualified individual contributors may become <em>known</em> without ever really doing this on purpose, but that doesn’t say much for this as a tenable career goal of the sort that can be counted on.</p> <h4 id="the-problem-of-credibility">The Problem of Credibility</h4> <h5 id="society-applies-to-idealistic-tech-companies-too">Society Applies to Idealistic Tech Companies, Too</h5> <p>American society is not a classless oasis. That’s a lie we tell ourselves. And the person who knows what everyone else gets paid and can fire you is not in your class.</p> <p>A technical job does not have equivalent prestige to a management position with an equivalent salary just because you say it does. Even if you conquer this within your own company, it’s not true in the rest of the industry, and it’s not true in the world at large. In the world our parents live in, it’s a big deal to be somebody else’s boss.</p> <p>You’re hiring people from the world at large all the time. Without continuous effort a technical track decays to its ground state, where the jobs are second class.</p> <h5 id="halfhearted-managers-are-the-worst">Halfhearted Managers are The Worst</h5> <p>The natural result of a system in which technical promotions can’t be counted on and are viewed as suspiciously-maybe-second-class anyway is that people who don’t really give a shit about management wind up going into management. 
Given the choice of waiting for a technical promotion that may never arrive and taking an offer to manage others, almost everyone is going to take the bird in the hand.</p> <figure> <img src="/assets/images/lumberg.jpg" /> <figcaption>Once you let the soulless suspendered lizard in the building, you are screwed.</figcaption> </figure> <p>Managers that have no passion for management are a blight on society. I can say this because I have been one of them. I was never a good manager, and for that I apologize to anyone that ever had to report to me.</p> <p>I am not an isolated case. Many people in management are frankly terrible at it. And they would rather have technical track jobs anyway, but they have no idea how to make the switch. A credible technical track is a great way to ensure a higher level of satisfaction and competency among the <em>managers</em>.</p> <h5 id="ratios-observed-in-the-wild-make-no-sense">Ratios Observed in the Wild Make No Sense</h5> <p>You don’t need to take my reasoning about the intrinsic pressure favoring management bloat at face value. You can actually look at the ratio of managers to technical employees at your company.</p> <p>At one point, I was alone at my level. There were five theoretically-equivalent directors at the time. The ratio was at least that bad on the lower rungs. (I have no idea if this is still true at that company, and it might not be.)</p> <p>For that to make sense, we’d have to believe a few things that don’t stand up to scrutiny. First, we’d have to believe in a very high proclivity among engineers to manage, and I think that betrays our expectations. 
Not very many of us got into this business with the hope of not actually building things.</p> <p>Second, we’d have to believe that although it took five directors to effectively manage the organization, only one technical leader was required to advise the same group on the details of the work they do every day.</p> <h4 id="what-might-help">What Might Help?</h4> <h5 id="promotions-should-not-be-miraculous-and-rare">Promotions Should Not Be Miraculous and Rare</h5> <p>Of course, it wouldn’t make logical sense to say that the ratio of individual contributors to managers at a given level must be 1:1. I honestly don’t know if 1:2 or 2:1 is closer to correct. The answer is probably contingent, and the relationship might not be linear.</p> <p>But I think it’s important for any company that takes the ideal of having a tenable technical track seriously to put a stake in the ground on this question. It’s hard to build a credible technical track, and we need a baseline to grade ourselves against.</p> <p>I don’t think that proceeding with the assumption that leaders will just naturally emerge produces the best results. Adding a self-imposed quota achieves accountability. It acknowledges the possibility that problems can lie in the system of recognition, and not only in the talents of the people in the pool for promotions.</p> <p><em>“Do we think that we hire smart people here? Yes? Then we should be able to find N of them worthy of promotion for every manager. If we can’t then the problem is most likely to be found in how we’re recognizing people for their work.”</em></p> <p>I know that the word “quota” is <em>verboten</em> for many, and I gleefully await your flames.</p> <h5 id="address-prestige-with-superpowers">Address Prestige with Superpowers</h5> <p>If we think about why managers and technical employees on even salary footing may be perceived to not truly be equals, it comes down to superpowers. 
The managers have special capabilities that the technical employees don’t: hiring, firing, compensation, and the like. Is it possible to give technical employees a different set of superpowers, to address the prestige problem?</p> <p>Maybe. I don’t think that I have seen this done correctly yet. If I had superpowers, they were:</p> <ul> <li>The ability to work on whatever I wanted.</li> <li>The ability to talk to anyone I wanted.</li> </ul> <p>These were indeed powerful, but using them to create positive action was difficult. It would have been easy for me to opt out of projects that I didn’t believe in and to do my own thing. I did often do my own thing. But I also worked on projects that I didn’t believe in, because I knew that opting out was a selfish act. One of my friends would just be forced to work on it in my place, and sometimes leadership is about jumping on grenades.</p> <figure> <img src="/assets/images/dark-knight.jpg" /> <figcaption>I guess there are worse superpowers. For example, the ability to allow oneself to be framed for the good of the city.</figcaption> </figure> <p>Talking to other teams made it possible for me to point out places where resources weren’t intelligently allocated. But this also begat mostly negative actions. “Hey, this isn’t the best way to use these folks,” I’d find myself saying all the time. It was draining, and a bummer.</p> <p>Giving the technical leadership deeper involvement in the planning process could address this. Of course that would involve dragging the technical leadership to meetings, which I admit is tricky.</p> <h5 id="in-closing">In Closing</h5> <p>I hope I’ve demonstrated that creating a career path outside of management for technical employees is only the beginning of your problems. It’s a good and necessary step, but it’s not an achievement by itself.</p> <p>I’d love to hear from anyone with better ideas. 
These issues are difficult and I don’t claim to have all of the right answers.</p> Dan McKinley https://mcfunley.com/ Data Driven Products Now! 2014-09-18T00:00:00+00:00 2014-09-18T00:00:00+00:00 urn:uuid:e89c5588-5740-8e4f-715e-0cc2377e0fa9 <p>Back when I was at Etsy, I did a presentation internally about the craft of sizing opportunities. I finally got around to writing a public incarnation of that talk. Here it is:</p> <div class="speakerdeck-container"> <div class="speakerdeck-loading"></div> <script id="data-driven-products-now-deck" async="" class="speakerdeck-embed" data-id="13b6d210211a01327085562b5da4981b" data-ratio="1.0" src="//speakerdeck.com/assets/embed.js"></script> <script>$('#data-driven-products-now-deck').speakerdeck();</script> </div> Dan McKinley https://mcfunley.com/ Manual Delivery 2014-03-10T00:00:00+00:00 2014-03-10T00:00:00+00:00 urn:uuid:c153dff4-755b-8a55-4e30-3150a8fba544 <p>The person on build rotation, or the nightly <em>schlimazel</em> I suppose, went into a hot 5’x8’ closet containing an ancient computer. This happened after everyone else had left, so around 8:30PM. Although in crunch time that was more like 11:30PM. And we were in crunch time at one point for a stretch of a year and a half. “That release left a mark,” my friend Matt used to say. In a halfhearted attempt at fairness to those who will take this post as a grave insult, I’ll concede that my remembrance of these details is the work of The Mark.</p> <p>Anyway, the build happened after quitting time. This guaranteed that if anything went wrong, you were on your own. 
Failure in giving birth to the test build implied that the 20 people in Gurgaon comprising the QA department would show up for work in a matter of hours having nothing to do.</p> <p>You used a tool called “VBBuild.” This was a GUI tool, rumored to be written by Russians:</p> <p><img src="/assets/images/vbbuild.gif" alt="VBBuild" /></p> <p>VBBuild did mysterious COM stuff to create the DLLs that nobody at the time understood properly. It presented you with dozens of popups even when it was working perfectly, and you had to be present to dismiss each of them. The production of executable binary code was all smoke and lasers. And, apparently, popups.</p> <p>Developers wrote code using the more familiar VB6 IDE. The IDE could run interpreted code as an interactive debugger, but it could not produce finished libraries in a particularly repeatable or practical way. So the release compilation was different in many respects from what programmers were doing at their desks. Were there problems that existed in one of these environments but not the other? Yes, sometimes. I recall that we had a single function that weighed in at around 70,000 lines. The IDE would give up and execute this function even if it contained clear syntax errors. That was the kind of discovery which, while exciting, was wasted in solitude somewhere past midnight as you attempted to lex and parse the code for keeps.</p> <figure> <img src="/assets/images/vb6.jpg" alt="VB6" /> <figcaption>Isaiah 2:4: "And he shall displace VB6 in search engine results with a book written by vegans."</figcaption> </figure> <p>Developers weren’t really in the habit of doing complete pulls from source control. And who could blame them, since doing this whitescreened your machine for half an hour. They were also never in any particular hurry to commit, at least until it was time to do the test build. 
As there was no continuous integration at the time, this was the first time that all of the code was compiled in several days.</p> <p>Often <em>[ed: always]</em> there were compilation errors to be resolved. We were using Visual SourceSafe, so people could be holding an exclusive lock on files containing the errors. Typically, this problem was addressed by walking around the office an hour before build time and reminding everyone to check their files in. In the event that someone forgot <em>[ed: every time]</em>, there was an administrative process for unlocking locked files. Not everyone had the necessary rights to do this, but happily, I did.</p> <p>By design, the build tried to assume an exclusive lock on all of the code. As a result, nobody could work while the build was in progress. Sometimes, the person performing the build would check all of the files out and not check them back in. So your first act the morning after a build might be to walk over to the build closet and release the source files from their chains.</p> <figure> <img src="/assets/images/vss.gif" alt="Visual SourceSafe" /> <figcaption>The Visual SourceSafe documentation strongly advised against its use on a team of more than four programmers, and apparently this was not a joke.</figcaption> </figure> <p>Deployment required dozens of manual steps that I will never be able to remember. When the build was done, you copied DLLs over to the test machines and registered them there. By “copied” I mean that you selected them in an explorer window, pressed “Ctrl-C,” and then pressed “Ctrl-V” to paste them into another. There was no batch script worked out to do this more efficiently. Ok, this is a slight lie. There had <em>been</em> a script, but it was put out to pasture on account of a history of hideous malfunction. And popups.
On remote machines sometimes, where they could only be dismissed by wind and ghosts.</p> <p>Registration involved connecting to each machine with Remote Desktop and right-clicking all the DLLs. You could skip a machine or just one library, and things would be very screwy indeed.</p> <p>The production release, which happened roughly twice a year under ideal conditions, was identical to this but with the added complexity of about eight more servers receiving the build. And we might take the opportunity to add completely new machines, which would not necessarily have the same patch levels for, oh, like 700,000 Windows components that were relied upon.</p> <p>Given eight or ten machines, the probability of a mistake on at least one of the servers approached unity. (With dozens of manual steps per server, even 99% per-step reliability means ten servers all come out clean only about 0.99<sup>300</sup> ≈ 5% of the time.) So the days and weeks following a production release were generally spent sussing out all of the minute differences and misconfigurations on the production machines. There would be catastrophic bugs that affected a tiny sliver of requests, under highly specific server conditions, and <em>only if executed on one server out of eight</em>. I was an expert at debugging in disassembly at the time. Upon leaving the job, I thought that this was pretty badass. But in the seven years since–do you know what? It’s never come up.</p> <figure> <img src="/assets/images/sandp.jpg" alt="Nonstandard &amp; poorly reproducible builds is more like it am I right" /> <figcaption>"The code could be <a href="http://www.bloomberg.com/news/2013-02-05/s-p-analyst-joked-of-bringing-down-the-house-ahead-of-collapse.html">structured by cows</a> and we would build it by hand."</figcaption> </figure> <p>At one point I wrote a new script to perform the deployment. It was an abomination of XML to be sure, but it got the job done without all of the popups. I started doing the test build with this with some success and suggested that we use it for the production release.
This was out of the question, I was told by one of my closer allies in the place. The production release was “too important to use a script.”</p> <p>The operating systems and supporting libraries on the machines were also set up by hand, by a separate team, working from printed notes. The results were similar. This is kind of another story.</p> <p>This all happened in 2003.</p> Dan McKinley https://mcfunley.com/ Scalding at Etsy 2014-03-02T00:00:00+00:00 2014-03-02T00:00:00+00:00 urn:uuid:b2555cf4-db74-983b-cde4-1da747c34460 <p>Here’s a presentation I gave about how Etsy wound up using <a href="https://github.com/twitter/scalding">Scalding</a> for analysis. Given at the <a href="http://www.meetup.com/cascading/">San Francisco Cascading Meetup</a>.</p> <div class="speakerdeck-container"> <div class="speakerdeck-loading"></div> <script id="scalding-at-etsy-deck" async="" class="speakerdeck-embed" data-id="309f7f7083c90131707926064ba69595" data-ratio="1.0" src="//speakerdeck.com/assets/embed.js"></script> <script>$('#scalding-at-etsy-deck').speakerdeck();</script> </div> Dan McKinley https://mcfunley.com/ The Case for Secrecy in Web Experiments 2014-01-16T00:00:00+00:00 2014-01-16T00:00:00+00:00 urn:uuid:b055f44c-2b32-1e5b-a566-4e79beea5e83 <p>For four months ending in early 2011, I worked on a team of six to redesign Etsy’s homepage. I don’t want to overstate the weight of this in the grand scheme of things, but hopes flew high. The new version was to look something like this:</p> <figure> <a href="/assets/images/nhp2010-big.png"> <img src="/assets/images/nhp2010-big.png" class="max-width-50 mb-max-width-75" /> </a> </figure> <p>There were a number of methodological problems with this, one of our very first web experiments.
Our statistics muscles were out of practice, and we had a very difficult time <a href="/whom-the-gods-would-destroy-they-first-give-real-time-analytics">fighting the forces of darkness who wanted to enact radical redesigns after five minutes of real-time data</a>. We had no toolchain for running experiments to speak of. The nascent analytics pipeline jobs failed every single night.</p> <p>But perhaps worst of all, we publicized the experiment. Well, “publicized” does not accurately convey the magnitude of what we did. We allowed visitors to join the treatment group using a magic URL. We proactively told our most engaged users about this. We tweeted the magic URL from the <a href="http://www.twitter.com/etsy">@Etsy account</a>, which at that point had well over a million followers.</p> <figure> <a href="http://www.etsy.com/teams/7718/questions/discuss/6848711/page/3?post_id=60817018"><img src="/assets/images/nhp-forum-post.png" alt="The magic URL was chosen to celebrate the CEO&apos;s 31st birthday." /></a> <figcaption>The magic URL was chosen to celebrate the CEO's 31st birthday. None of this was Juliet's fault.</figcaption> </figure> <p>This project was a disaster for many reasons. Nearly all of the core hypotheses turned out to be completely wrong. The work was thrown out as a total loss. Everyone involved learned valuable life lessons. I am here today to elaborate on one of these: <em>telling users about the experiment as it was running was a big mistake.</em></p> <h5 id="the-diamond-forging-pressure-to-disclose-experiments">The Diamond-Forging Pressure to Disclose Experiments</h5> <p>If you operate a website with an active community, and you do A/B testing, you might feel some pressure to disclose your work. And this seems like a proper thing to do, if your users are invested in your site in any serious way. 
They may notice anyway, and the <a href="http://instagram.com/p/f3HLODBQdH/">most common reaction to change on a beloved site</a> tends to be varying degrees of panic.</p> <figure> <a href="http://www.businessinsider.com/mark-zuckerberg-joins-facebook-group-i-automatically-hate-the-new-facebook-home-page-2009-10"><img alt="If you can&apos;t beat &apos;em, join &apos;em" class="thinborder" src="/assets/images/mz-story.png" /></a> <figcaption>"If you can't beat 'em, join 'em."</figcaption> </figure> <p>As an honest administrator, your wish is to reassure your community that you have their best interest at heart. Transparency is the best policy!</p> <p>Except in this case. I think there’s a strong argument to be made against announcing the details of active experiments. It turns out to be easier for motivated users to overturn your experiment than you may believe. And disclosing experiments is work, and work that comes before real data should be minimized.</p> <h5 id="online-protests-not-necessarily-a-waste-of-time">Online Protests: Not Necessarily A Waste of Time</h5> <p>A fundamental reason that you should not publicize your A/B tests is that this can introduce <a href="http://en.wikipedia.org/wiki/Bias_(statistics)">bias</a> that can affect your measurements. This can even overturn your results. There are many different ways for this to play out.</p> <p>Most directly, motivated users can just perform positive actions on the site if they believe that they are in their preferred experiment bucket. Even if the control and treatment groups are very large, the number of people completing a goal metric (such as purchasing) may be just a fraction of that. And the anticipated difference between any two treatments might be slight. 
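To make that concrete, here is a small sketch, with invented numbers mirroring Figure 1 below, of how a handful of gamed conversions can flip a comparison even between large buckets:

```python
# Sketch with invented numbers: when conversions are a tiny fraction of
# visits, a few motivated users can flip which bucket appears to win.
def conversion_rate(visits, organic, gamed=0):
    """Observed conversion proportion, including any gamed conversions."""
    return (organic + gamed) / visits

# Fans who know the magic URL convert deliberately in their preferred bucket.
control = conversion_rate(10_000, organic=50, gamed=10)
new = conversion_rate(10_000, organic=55)  # genuinely the better treatment

print(f"control: {control:.2%}, new: {new:.2%}")
# Without the 10 gamed conversions the new treatment wins, 0.55% to 0.50%.
# With them, the control appears to win, 0.60% to 0.55%.
assert control > new
```

Ten determined people out of twenty thousand visits are enough to reverse the result.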
It’s not hard to imagine how a small group of people could determine an outcome if they knew exactly what to do.</p> <figure> <table> <thead> <tr> <th>Group</th> <th>Visits</th> <th>Conversions (organic)</th> <th>Conversions (gamed)</th> <th>Proportion</th> </tr> </thead> <tr> <td>Control</td> <td>10000</td> <td>50</td> <td class="negative">10</td> <td class="positive">0.0060</td> </tr> <tr> <td>New</td> <td>10000</td> <td>55</td> <td>0</td> <td class="negative">0.0055</td> </tr> </table> <table class="max-width-50 mb-max-width-100"> <thead> <tr> <th>Control</th> <th>New</th> </tr> </thead> <tr> <td>10000 visits</td> <td>10000 visits</td> </tr> <tr> <td>50 organic conversions</td> <td>55 organic conversions</td> </tr> <tr> <td class="negative attention">10 gamed conversions</td> <td>0 gamed conversions</td> </tr> <tr> <td class="positive">0.60% converted</td> <td class="negative">0.55% converted</td> </tr> </table> <figcaption>Figure 1: In some cases a small group of motivated users can change an outcome, even if the sample sizes are large.</figcaption> </figure> <p>As the scope and details of an experiment become more fully understood, this gets easier to accomplish. But intentional, organized action is not the only possible source of bias.</p> <p>Even if users have no preference as to which version of a feature wins, some will still be curious. If you announce an experiment, visitors who would otherwise have stayed away will engage with the feature immediately. This well-intentioned interest could ironically make a winning feature appear to be a loss.
Here’s an illustration of what that looks like.</p> <figure> <table> <thead> <tr> <th>Group</th> <th>Visits (oblivious)</th> <th>Visits (rubbernecking)</th> <th>Visits (total)</th> <th>Conversions</th> <th>Proportion</th> </tr> </thead> <tr> <td>Control</td> <td>500</td> <td>50</td> <td>550</td> <td>30</td> <td class="positive">0.055</td> </tr> <tr> <td>New</td> <td>500</td> <td class="negative">250</td> <td>750</td> <td>35</td> <td class="negative">0.047</td> </tr> </table> <table class="max-width-50 mb-max-width-100"> <thead> <tr> <th>Control</th> <th>New</th> </tr> </thead> <tr> <td>500 oblivious visits</td> <td>500 oblivious visits</td> </tr> <tr> <td>50 rubbernecking visits</td> <td class="negative">250 rubbernecking visits</td> </tr> <tr> <td>550 total visits</td> <td>750 total visits</td> </tr> <tr> <td>30 conversions</td> <td>35 conversions</td> </tr> <tr> <td class="positive">5.5% converted</td> <td class="negative">4.7% converted</td> </tr> </table> <figcaption>Figure 2: An example in which 100 engaged users are told about a new experiment. They are all curious and seek out the feature. Those seeing the new treatment visit the new feature more often just to look at it, skewing measurement.</figcaption> </figure> <p>These examples both involve the distortion of numbers on one side of an experiment, but <a href="http://en.wikipedia.org/wiki/Novelty_effect">many other scenarios</a> are possible. Users may change their behavior in either group for <a href="http://en.wikipedia.org/wiki/Hawthorne_effect">no reason other than that they believe they are being measured</a>.</p> <p>Good experimental practice requires that you isolate the intended change as the sole variable being tested. To accomplish this, you randomly assign visitors the new treatment or the old, controlling for all other factors. 
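In practice, that random assignment is often implemented as a deterministic hash of a visitor id, so a returning visitor always lands in the same bucket. A minimal sketch (the function and naming here are illustrative, not any particular site's implementation):

```python
import hashlib

def bucket(visitor_id: str, experiment: str,
           treatments=("control", "new")) -> str:
    """Deterministically assign a visitor to a treatment group.

    Hashing (experiment, visitor_id) together keeps each visitor's
    assignment stable across visits, and keeps assignments independent
    across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return treatments[int(digest, 16) % len(treatments)]

# The same visitor sees the same treatment on every visit:
assert bucket("visitor-123", "homepage-redesign") == \
       bucket("visitor-123", "homepage-redesign")
```

The point of the determinism is exactly the isolation described above: nothing but the coin flip distinguishes the groups.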
Informing visitors that they’re part of an experiment places this central assumption in considerable jeopardy.</p> <h5 id="predicting-bias-is-hard">Predicting Bias is Hard</h5> <p>“But,” you might say, “most users aren’t paying attention to our communiqués.” You may think that you can announce experiments, and only a small group of the most engaged people will notice. This is very likely true. But as I have already shown, the behavior of a small group cannot be dismissed out of hand.</p> <p>Obviously, this varies. There <em>are</em> experiments in which a vocal minority cannot possibly bias results. But determining if this is true for any given experiment in advance is a difficult task. There is roughly one way for an experiment to be conducted correctly, and there are an infinite number of ways for it to be screwed.</p> <p>A/B tests are already complicated: bucketing, data collection, experimental design, <a href="http://www.experimentcalculator.com">experimental power</a>, and analysis are all vulnerable to mistakes. From this point of view, <em>“is it safe to talk about this?”</em> is just another brittle moving part.</p> <h5 id="communication-plans-are-real-work">Communication Plans are Real Work</h5> <p>Something I have come to appreciate over the years is the role of product marketing. I have been involved in many releases for which the act of explaining and gaining acceptance for a new feature constituted the <em>majority</em> of the effort. Launches involve a lot more than pressing a deploy button. This is a big deal.</p> <figure> <iframe class="video" src="//player.vimeo.com/video/27836540?title=0&amp;byline=0&amp;portrait=0&amp;color=ffffff" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe> <figcaption>Product marketing: this is serious business.</figcaption> </figure> <p>It also seems to be true that <a href="https://twitter.com/Nat_S">people who are skilled at this kind of work</a> are hard to come by. 
You will be lucky to have a few of them, and this imposes limits on the number of major changes that you can make in any given year.</p> <p>It makes excellent sense to avoid wasting this resource on quite-possibly-fleeting experiments. It will delay their deployment, steal cycles from launches for finished features, and it will do these things in the service of work that may never see the light of day!</p> <p>Users will tend to view any experiment as presaging an imminent release, regardless of your intentions. Therefore, you will need to put together a relatively complete narrative explaining why the changes are positive at the outset. A “minimum viable announcement” probably won’t do. And you will need to execute this without the benefit of quantitative results to bolster your case.</p> <h5 id="your-daily-reminder-that-experiments-fail">Your Daily Reminder that Experiments Fail</h5> <p>Doing data-driven product work really does imply that you will not release changes that don’t meet some quantitative standard. In such an event you might tweak things and start over, or you might give up altogether. Announcing your running experiments is problematic given this reality.</p> <p>Obviously, product costs will be compounded by communication costs. Every time you retool an experiment, you will have to bear the additional weight of updating your community. Adding marginal effort makes it more difficult for humans to behave rationally and objectively. We have a name for this well-known pathology: <a href="http://en.wikipedia.org/wiki/Sunk_costs">the sunk cost fallacy</a>. <em>We’ve put so much into this feature, we can’t just give up on it now.</em></p> <figure> <img src="/assets/images/pillory.jpg" /> <figcaption>The fear of admitting mistakes in public can be motivating.</figcaption> </figure> <p>Announcing experiments also has a way of raising the stakes. 
The prospect of backtracking with your users (and being perceived as admitting a mistake) only makes killing a bad feature less palatable. The last thing you need is additional temptation to delude yourself. You have plenty of this already. The danger of living in public is that it will turn a bad release that should be discarded into an inevitability.</p> <h5 id="consistency-and-expectations">Consistency and Expectations</h5> <p>Let’s say you’ve figured out workarounds for every issue I’ve raised so far. You are still going to want to run experiments that are not publicly declared.</p> <p>Some experiments are inherently controversial or exploratory. It may be perfectly legitimate to try changes that you would never release to learn more about your site. Removing a dearly beloved feature temporarily for half of new registrations is a good example of this. By doing so, you can measure the effect of that feature on lifetime value, and make better decisions with your marketing budget.</p> <p>Other experiments work only when they’re difficult to detect. Search ranking is a high-stakes arms race, and complete transparency can just make it easier for malicious users to gain unfair advantages. It’s likely you’re going to want to run experiments on search ranking without disclosing them.</p> <p>It would be malpractice to give users the expectation that they will always know the state of running experiments. They will not have the complete picture. Leading them to believe otherwise can do more harm to your relationship than just having a consistent policy of remaining silent until features are ready for release.</p> <h5 id="what-can-you-share">What can you share?</h5> <p>Sharing too much too soon can doom your A/B tests. But this doesn’t mean that you are doomed to be locked in a steel cage match with your user base over them.</p> <figure> <img src="/assets/images/cagematch.jpg" alt="Forum moderators of the world: good luck."
/> <figcaption>Forum moderators of the world: good luck.</figcaption> </figure> <p>You can do rigorous, well-controlled experiments and also announce features in advance of their release. You can give people time to acclimate to them. You can let users preview new functionality, and enable it at a slower pace. These practices all relate to <em>how</em> a feature is released, and they are not necessarily in conflict with how you decide <em>which</em> features should be released. It is important to decouple these concerns.</p> <p>You can and should share information about completed experiments. “What happened in the A/B test” should be a regular feature of your release notes. If you really have determined that your new functionality performs better than what it replaces, your users should have this data.</p> <figure> <a href="https://www.etsy.com/teams/7716/announcements/discuss/12732278/page/1"><img src="/assets/images/nlp-announce.png" /></a> <figcaption>Plain-language A/B test results can ease user anxiety in launches.</figcaption> </figure> <p>Counterintuitively, perhaps, trust is also improved by sharing the details of failed experiments. If you only tell users about your victories, they have no reason to believe that you are behaving objectively. Who’s to say that you aren’t just making up your numbers? Showing your scars (as I tried to do with my homepage story above) can serve as a powerful declaration against interest.</p> <h5 id="successful-testing-is-good-stewardship">Successful Testing is Good Stewardship</h5> <p>Your job in product development, very broadly, is to make progress while striking a balance between short- and long-term concerns.</p> <ul> <li>Users should be as happy as possible in the short term.</li> <li>Your site should continue to exist in the long term.</li> </ul> <p>The best interest of your users is ultimately served by making the correct changes to your product.
Talking about experiments can break them, leading to both quantitative errors and mistakes of judgment.</p> <p>I firmly believe that A/B tests in any organization should be as free, easy, and cheap as humanly possible. After all, <a href="/testing-to-cull-the-living-flower">running A/B tests is perhaps the only way to know that you’re making the right changes</a>. Disclosing experiments as they are running is a policy that can alleviate some discontent in the short term. But the price of this is making experiments harder to run in the long term, and ultimately making it less likely that measurement will be done at all.</p> <p class="acknowledgements"> Thanks to <a href="http://twitter.com/nellwyn">Nell Thomas</a>, <a href="http://twitter.com/stevemardenfeld">Steve Mardenfeld</a>, and <a href="http://hilaryparker.com/">Dr. Parker</a> for their help on this. </p> Dan McKinley https://mcfunley.com/