Quotes


You were hired to write code. Many developers make the mistake of thinking that their job stops there. That’s not true. In fact, you have two jobs:

  1. Write good code.
  2. Be easy to work with.

The first part tends to be the easy part.

[…]

Being easy to work with doesn’t mean being a sycophant or a pushover. […] Mostly, it means that you understand the value of relationships, and build them as carefully and intentionally as you build frameworks and libraries.

[…]

Most of the hardest problems we face have both technical and human components, and the best engineers know how to solve both.

Jacob Kaplan-Moss: You have two jobs


I only really give two pieces of career advice. The first is that you should be super easy to work with, the second is that you should get really good at something you also find as easy as possible.

Ed Zitron on Bluesky


There are two kinds of creative advice that I think you can get from creative people. The first is how to buy more lottery tickets, and the second is how to win the lottery. And I think the former can be extremely useful, and I think the latter is nonsense.

Darius Kazemi: How I Won The Lottery


I admit that I am perhaps missing part of a larger picture that encourages or even requires the use of cop shit in the classroom. Yet when my students come to me with reams of documentation for a simple cold, or fearful that I will dole out unexcused absences for them taking job interviews, I have to wonder what that larger picture could be. I’ve only taught for a few semesters and I don’t pretend that my experiences are generalizable to the level of empirical data, but my students seem to do perfectly well in the absence—as much as I can make—of cop shit.

So why do we have cop shit in our classrooms?

One provisional answer is that the people who sell cop shit are very good at selling cop shit, whether that cop shit takes the form of a learning management system or a new pedagogical technique. Like any product, cop shit claims to solve a problem. We might express that problem like this: the work of managing a classroom, at all its levels, is increasingly complex and fraught, full of poorly defined standards, distractions to our students’ attentions, and new opportunities for grift. Cop shit, so cop shit argues, solves these problems by bringing order to the classroom. Cop shit defines parameters. Cop shit ensures compliance. Cop shit gives students and teachers alike instant feedback in the form of legible metrics.

In short, cop shit operates according to the logic of datafication.

[…]

Cop shit is seductive. It makes metrics transparent. It allows for the clear progress toward learning objectives. It also subsumes education within a market logic. “Here,” cop shit says, “you will learn how to do this thing. We will know you learned it by the acquisition of this gold star. But in order for me to award you this gold star, I must parse you, sense you, track you, collect you, and—” here’s the key, “I will presume that you will attempt to flout me at every turn. We are both scamming each other, you and I, and I intend to win.” When a classroom becomes adversarial, of course, as cop shit presumes, then there must be a clear winner and loser. The student’s education then becomes not a victory for their own self-improvement or -enrichment, but rather that the teacher conquered the student’s presumed inherent laziness, shiftiness, etc. to instill some kernel of a lesson.

Jeffrey Moro: Against Cop Shit


Engineers working for Evil, Inc. are like engineers anywhere; they include a mix of people inclined to doing quick hardcoded scripts or perhaps writing Fully General Frameworks For Exploitation Of The Global Economy. The first group ends up doing far more damage because the second group, predictably, doesn’t ship much usable software.

Patrick McKenzie: Credit cards as a legacy system


1957 - John Backus and IBM create FORTRAN. There’s nothing funny about IBM or FORTRAN. It is a syntax error to write FORTRAN while not wearing a blue tie.

[…]

1972 - Dennis Ritchie invents a powerful gun that shoots both forward and backward simultaneously. Not satisfied with the number of deaths and permanent maimings from that invention he invents C and Unix.

[…]

1983 - In honor of Ada Lovelace’s ability to create programs that never ran, Jean Ichbiah and the US Department of Defense create the Ada programming language. In spite of the lack of evidence that any significant Ada program is ever completed historians believe Ada to be a successful public works project that keeps several thousand roving defense contractors out of gangs.

[…]

1990 - A committee formed by Simon Peyton-Jones, Paul Hudak, Philip Wadler, Ashton Kutcher, and People for the Ethical Treatment of Animals creates Haskell, a pure, non-strict, functional language. Haskell gets some resistance due to the complexity of using monads to control side effects. Wadler tries to appease critics by explaining that “a monad is a monoid in the category of endofunctors, what’s the problem?”

[…]

1996 - James Gosling invents Java. Java is a relatively verbose, garbage collected, class based, statically typed, single dispatch, object oriented language with single implementation inheritance and multiple interface inheritance. Sun loudly heralds Java’s novelty.

2001 - Anders Hejlsberg invents C#. C# is a relatively verbose, garbage collected, class based, statically typed, single dispatch, object oriented language with single implementation inheritance and multiple interface inheritance. Microsoft loudly heralds C#’s novelty.

James Iry: A Brief, Incomplete, and Mostly Wrong History of Programming Languages


Climate is what on average we may expect; weather is what we actually get.

— Andrew John Herbertson, Outlines of Physiography (1901)


We are as gods. We may as well get good at it.

— Stewart Brand


Without data, you’re just another person with an opinion.

— W. Edwards Deming


Without data, you’re just another person with an opinion. […] With data, you’re still just another person with an opinion. Expert analysts understand this in their very bones.

— Cassie Kozyrkov


The three golden rules to ensure computer security are: do not own a computer; do not power it on; and do not use it.

— Robert Morris


My second meta-principle of statistics is the methodological attribution problem, which is that the many useful contributions of a good statistical consultant, or collaborator, will often be attributed to the statistician’s methods or philosophy rather than to the artful efforts of the statistician himself or herself. Don Rubin has told me that scientists are fundamentally Bayesian (even if they do not realize it), in that they interpret uncertainty intervals Bayesianly. Brad Efron has talked vividly about how his scientific collaborators find permutation tests and p-values to be the most convincing form of evidence. Judea Pearl assures me that graphical models describe how people really think about causality. And so on. I am sure that all these accomplished researchers, and many more, are describing their experiences accurately. Rubin wielding a posterior distribution is a powerful thing, as is Efron with a permutation test or Pearl with a graphical model, and I believe that (a) all three can be helping people solve real scientific problems, and (b) it is natural for their collaborators to attribute some of these researchers’ creativity to their methods.

The result is that each of us tends to come away from a collaboration or consulting experience with the warm feeling that our methods really work, and that they represent how scientists really think. In stating this, I am not trying to espouse some sort of empty pluralism — the claim that, for example, we would be doing just as well if we were all using fuzzy sets, or correspondence analysis, or some other obscure statistical method. There is certainly a reason that methodological advances are made, and this reason is typically that existing methods have their failings. Nonetheless, I think we all have to be careful about attributing too much from our collaborators’ and clients’ satisfaction with our methods.

Andrew Gelman: Bayesian Statistics Then and Now


Even one of the most elementary things taught on a statistics course, the standard deviation, is more complex than it need be, and is considered here as an example of how convenience for mathematical manipulation often over-rides pragmatism in research methods.

[…]

In essence, the claim made for the standard deviation is that we can compute a number (SD) from our observations that has a relatively consistent relationship with a number computed in the same way from the population figures. This claim, in itself, is of no great value. Reliability alone does not make that number of any valid use. For example, if the computation led to a constant whatever figures were used then there would be a perfectly consistent relationship between the parameters for the sample and population. But to what end? Surely the key issue is not how stable the statistic is but whether it encapsulates what we want it to.

[…]

Of course, much of the rest of traditional statistics is now based on the standard deviation, but it is important to realise that it need not be.

Stephen Gorard: Revisiting a 90-Year-Old Debate: The Advantages of the Mean Deviation
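
A minimal simulation sketch in R of the 90-year-old debate Gorard revisits: with clean normal data the standard deviation is the more stable of the two estimators, but mild contamination reverses the ranking. The sample size, contamination rate, and distributions are illustrative assumptions, not taken from the paper.

```r
## Compare sampling stability of the standard deviation (SD) and the
## mean absolute deviation (MD), with and without mild contamination.
set.seed(1)

md <- function(x) mean(abs(x - mean(x)))

draw <- function(contaminated) {
  x <- rnorm(100)
  if (contaminated) {
    x[sample(100, 5)] <- rnorm(5, sd = 5)  # ~5% wild observations
  }
  c(sd = sd(x), md = md(x))
}

clean <- replicate(10000, draw(FALSE))
dirty <- replicate(10000, draw(TRUE))

## Coefficient of variation of each estimator across replications:
## smaller means more stable. SD wins on clean data, MD on dirty data.
apply(clean, 1, function(v) sd(v) / mean(v))
apply(dirty, 1, function(v) sd(v) / mean(v))
```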


Opinionated software is only cool if you have cool opinions.

Tom MacWright: soapbox: longitude, latitude is the right way


There was some discussion in the comments thread here about axis labels and zero. Sometimes zero has no particular meaning (for example when graphing degrees Fahrenheit), but usually it has a clear interpretation, in which case it can be pleasant to include it on the graph. On the other hand, if you’re plotting something that varies from, say, 184 to 197, it would typically be a bad idea to extend the axis all the way to zero, as this would destroy your visual resolution.

The advice we usually give is: If zero is in the neighborhood, invite it in.

Andrew Gelman: Graphing advice
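
Gelman’s rule, illustrated with ggplot2 and made-up data in his 184-to-197 range (the data frame and values are invented):

```r
## When the data live far from zero, forcing zero onto the axis
## flattens the series; letting the limits hug the data preserves
## visual resolution.
library(ggplot2)

d <- data.frame(year = 2001:2010,
                value = c(184, 188, 186, 191, 190, 195, 193, 197, 196, 194))

p <- ggplot(d, aes(year, value)) + geom_line() + geom_point()

p                         # limits hug the data: variation is visible
p + expand_limits(y = 0)  # zero invited in from far away: a flat line
```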


Young man, in mathematics you don’t understand things. You just get used to them.

— John von Neumann


To deal with hyper-planes in a 14-dimensional space, visualize a 3-D space and say ‘fourteen’ to yourself very loudly. Everyone does it.

Geoffrey Hinton: A geometrical view of perceptrons


Dependencies are an open invitation for other people to break your code.

— Nathan Eastwood: useR 2021 talk on poorman


RAM is cheap and thinking hurts.

Uwe Ligges: R-Help


Software people are not alone in facing complexity. Physics deals with terribly complex objects even at the “fundamental” particle level. The physicist labors on, however, in a firm faith that there are unifying principles to be found, whether in quarks or in unified field theories. Einstein repeatedly argued that there must be simplified explanations of nature, because God is not capricious or arbitrary.

No such faith comforts the software engineer. Much of the complexity he must master is arbitrary complexity, forced without rhyme or reason by the many human institutions and systems to which his interfaces must conform. These differ from interface to interface, and from time to time, not because of necessity but only because they were designed by different people, rather than by God.

Frederick P. Brooks, Jr.: No Silver Bullet — Essence and Accident in Software Engineering


On-prem is a lock-in. Cloud is a lock-in. Every single language you program in is a type of lock-in. Python is easy to get started with, but soon you run into packaging issues and are optimizing the garbage collector. Scala is great, but everyone winds up migrating away from it. And on and on.

Every piece of code written in a given language or framework is a step away from any other language, and five more minutes you’ll have to spend migrating it to something else. That’s fine. You just have to decide what you’re willing to be locked into.

Vicki Boykis: Commit to your lock-in


I believe we need a ‘Digital Earth’. A multi-resolution, three-dimensional representation of the planet, into which we can embed vast quantities of georeferenced data.

[…]

Imagine, for example, a young child going to a Digital Earth exhibit at a local museum. After donning a head-mounted display, she sees Earth as it appears from space. Using a data glove, she zooms in, using higher and higher levels of resolution, to see continents, then regions, countries, cities, and finally individual houses, trees, and other natural and man-made objects. Having found an area of the planet she is interested in exploring, she takes the equivalent of a ‘magic carpet ride’ through a 3-D visualization of the terrain. Of course, terrain is only one of the many kinds of data with which she can interact.

[…]

She is able to request information on land cover, distribution of plant and animal species, real-time weather, roads, political boundaries, and population.

— Al Gore: The Digital Earth: Understanding our Planet in the 21st Century, 1998


To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.

— R.A. Fisher


Hiawatha, mighty hunter,
He could shoot ten arrows upward,
Shoot them with such strength and swiftness
That the last had left the bow-string
Ere the first to earth descended.
This was commonly regarded
As a feat of skill and cunning.
Several sarcastic spirits
Pointed out to him, however,
That it might be much more useful
If he sometimes hit the target.
“Why not shoot a little straighter
And employ a smaller sample?”
Hiawatha, who at college
Majored in applied statistics,
Consequently felt entitled
To instruct his fellow man
In any subject whatsoever

[…]

Hiawatha, in a temper,
Quoted parts of R. A. Fisher,
Quoted Yates and quoted Finney,
Quoted reams of Oscar Kempthorne,
Quoted Anderson and Bancroft
(practically in extenso)
Trying to impress upon them
That what actually mattered
Was to estimate the error.
Several of them admitted:
“Such a thing might have its uses;
Still,” they said, “he would do better
If he shot a little straighter.”

[…]

In a corner of the forest
Sits alone my Hiawatha
Permanently cogitating
On the normal law of errors.
Wondering in idle moments
If perhaps increased precision
Might perhaps be sometimes better
Even at the cost of bias,
If one could thereby now and then
Register upon a target.

W. E. Mientka, “Professor Leo Moser - Reflections of a Visit”


For in much wisdom is much grief: and he that increaseth knowledge increaseth sorrow.

— Ecclesiastes 1:18


An extra year of experience has not changed my belief in a disconnect between traditional statistical regression methods and the pure prediction algorithms. Section 8 of the paper, featuring Table 5, makes the case directly in terms of six criteria. Five of the six emerged more or less unscathed from the discussion. Criterion 2, long-time scientific truth versus possibly short-term prediction accuracy, was doubted by FHT and received vigorous push-back from Yu/Barter:

…but in our experience the “truth” that traditional regression methods supposedly represent is rarely justified or validated…

This is a hard-line Breimanian point of view. That “rarely” is over the top. The truism that begins “all models are wrong” ends with “but some are useful.” Traditional models tend to err on the side of over-simplicity (not enough interactions, etc.) but still manage to capture at least some aspect of the underlying mechanism. “Eternal truth” is a little too much to ask for, but in the Neonate example we did wind up believing that respiratory strength had something lasting to do with the babies’ survival.

[…]

The fathers of statistical theory — Pearson, Fisher, Neyman, Hotelling, Wald — forgot to provide us with a comprehensive theory of optimal prediction. We will have to count on the current generation of young statisticians to fill the gap and put prediction on a principled foundation.

Bradley Efron: Rejoinder to Prediction, Estimation, and Attribution


It’s important to be realistic: most people don’t care about program performance most of the time. Modern computers are so fast that most programs run fast enough even with very slow language implementations. In that sense, I agree with Daniel’s premise: optimising compilers are often unimportant. But “often” is often unsatisfying, as it is here. Users find themselves transitioning from not caring at all about performance to suddenly really caring, often in the space of a single day.

This, to me, is where optimising compilers come into their own: they mean that even fewer people need care about program performance. And I don’t mean that they get us from, say, 98 to 99 people out of 100 not needing to care: it’s probably more like going from 80 to 99 people out of 100 not needing to care. This is, I suspect, more significant than it seems: it means that many people can go through an entire career without worrying about performance. Martin Berger reminded me of A N Whitehead’s wonderful line that “civilization advances by extending the number of important operations which we can perform without thinking about them” and this seems a classic example of that at work.

Laurence Tratt: What Challenges and Trade-Offs do Optimising Compilers Face?


Pay attention to unexpected data that has no natural constituency, and to lack of data that are in high demand.

Whitney R. Robinson: More on meta-epistemology: an epidemiologist’s perspective


Half of what you’ll learn in medical school will be shown to be either dead wrong or out of date within five years of your graduation; the trouble is that nobody can tell you which half–so the most important thing to learn is how to learn on your own.

David Sackett (playing off a common phrase)


File organization and naming are powerful weapons against chaos. (Jenny Bryan)

Your closest collaborator is you six months ago, but you don’t reply to emails. (Mark Holder)

I will let the data speak for itself when it cleans itself. (Allison Reichel)

Working with data is not about rules to follow but about decisions to make. (Naupaka Zimmerman)

I’m not worried about being scooped, I’m worried about being ignored. (Magnus Nordborg)

Teach stats as you would cake baking: make a few before you delve into the theory of leavening agents. (Jenny Bryan)

The opposite of “open” isn’t “closed”. The opposite of “open” is “broken”. (John Wilbanks)

Collected by Karl Broman


This was not meant to be an “emperor has no clothes” kind of story, rather “the emperor has nice clothes but they’re not suitable for every occasion.” Where they are suitable, the pure prediction algorithms can be stunningly successful. When one reads an enthusiastic AI-related story in the press, there’s usually one of these algorithms, operating in enormous scale, doing the heavy lifting. Regression methods have come a long and big way since the time of Gauss.

Bradley Efron: Prediction, Estimation, and Attribution.


PS Please excuse my not mailing this — but I don’t know your new address.

Richard Feynman: Letter to his dead wife, Arline, 17 Oct 1946


The more instructions something has, the worse its design. It’s cheaper to add instructions later than to design something well.

Scott Berkun: How Design Makes the World


We prove that last digits are approximately uniform for distributions with an absolutely continuous distribution function. From a practical perspective, that result, of course, is only moderately interesting. For that reason, we derive a result for ‘certain’ sums of lattice-variables as well. That justification is provided in terms of stationary distributions.

Stephan Dlugosz and Ulrich Müller-Funk: The value of the last digit: statistical fraud detection with digit analysis
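
The premise is easy to check by simulation. A sketch in R, where the lognormal “invoice totals” and the sample size are invented for illustration:

```r
## Last digits of draws from a smooth continuous distribution are
## approximately uniform, so a marked departure from uniformity is a
## red flag for fabricated figures.
set.seed(1)

amounts <- round(rlnorm(10000, meanlog = 5, sdlog = 1))  # fake invoice totals
last_digit <- amounts %% 10

table(last_digit)              # counts should be roughly equal
chisq.test(table(last_digit))  # chi-squared test against the uniform
```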


The first of McIlroy’s dicta is often paraphrased as “do one thing and do it well”, which is shortened from “Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new ‘features.’”

McIlroy’s example of this dictum is:

Surprising to outsiders is the fact that UNIX compilers produce no listings: printing can be done better and more flexibly by a separate program.

[…]

McIlroy implies that the problem is that people didn’t think hard enough, the old school UNIX mavens would have sat down in the same room and thought longer and harder until they came up with a set of consistent tools that has “unusual simplicity”. But that was never going to scale, the philosophy made the mess we’re in inevitable. It’s not a matter of not thinking longer or harder; it’s a matter of having a philosophy that cannot scale unless you have a relatively small team with a shared cultural understanding, able to sit down in the same room.

If anyone can write a tool and the main instruction comes from “the unix philosophy”, people will have different opinions about what “simplicity” or “doing one thing” means, what the right way to do things is, and inconsistency will bloom, resulting in the kind of complexity you get when dealing with a wildly inconsistent language, like PHP. People make fun of PHP and javascript for having all sorts of warts and weird inconsistencies, but as a language and a standard library, any commonly used shell plus the collection of widely used *nix tools taken together is much worse and contains much more accidental complexity due to inconsistency even within a single Linux distro and there’s no other way it could have turned out. If you compare across Linux distros, BSDs, Solaris, AIX, etc., the amount of accidental complexity that users have to hold in their heads when switching systems dwarfs PHP or javascript’s incoherence. The most widely mocked programming languages are paragons of great design by comparison.

To be clear, I’m not saying that I or anyone else could have done better with the knowledge available in the 70s in terms of making a system that was practically useful at the time that would be elegant today. It’s easy to look back and find issues with the benefit of hindsight. What I disagree with are comments from Unix mavens speaking today; comments like McIlroy’s, which imply that we just forgot or don’t understand the value of simplicity, or Ken Thompson saying that C is as safe a language as any and if we don’t want bugs we should just write bug-free code. These kinds of comments imply that there’s not much to learn from hindsight; in the 70s, we were building systems as effectively as anyone can today; five decades of collective experience, tens of millions of person-years, have taught us nothing; if we just go back to building systems like the original Unix mavens did, all will be well. I respectfully disagree.

Dan Luu: The growth of command line options, 1979-Present


This isn’t to say there’s no cost to adding options – more options means more maintenance burden, but that’s a cost that maintainers pay to benefit users, which isn’t obviously unreasonable considering the ratio of maintainers to users. This is analogous to Gary Bernhardt’s comment that it’s reasonable to practice a talk fifty times since, if there’s a three hundred person audience, the ratio of time spent practicing to time spent watching the talk will still only be 1:6. In general, this ratio will be even more extreme with commonly used command line tools.

Dan Luu: The growth of command line options, 1979-Present


objects: Everything that exists in R is an object.
functions: Everything that happens in R is a function call.
interfaces: Interfaces to other languages are a part of R.

John Chambers - S, R, and Data Science
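
The first two slogans are easy to see at the R console; a small demonstration (the examples are ours, not from the talk):

```r
## Everything that exists is an object, including functions and code:
typeof(sum)                # "builtin": a function is an ordinary object
body(function(x) x + 1)    # the body of a function is an object too

## Everything that happens is a function call, even arithmetic and
## assignment:
`+`(1, 2)                  # 1 + 2 is sugar for calling the function `+`
`<-`(y, 10)                # assignment is a call as well; y is now 10
sapply(1:3, `+`, 10)       # so operators can be passed around like data
```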


PHP is an embarrassment, a blight upon my craft. It’s so broken, but so lauded by every empowered amateur who’s yet to learn anything else, as to be maddening. It has paltry few redeeming qualities and I would prefer to forget it exists at all.

[…]

Do not tell me that “good developers can write good code in any language”, or bad developers blah blah. That doesn’t mean anything. A good carpenter can drive in a nail with either a rock or a hammer, but how many carpenters do you see bashing stuff with rocks? Part of what makes a good developer is the ability to choose the tools that work best.

[…]

PHP is built to keep chugging along at all costs. When faced with either doing something nonsensical or aborting with an error, it will do something nonsensical. Anything is better than nothing.

PHP: A fractal of bad design


The joy in mathematics is often the receding of the pain.

[…]

Claim: You can’t control the use of knowledge.

The choice is not between “peaceful” and “warlike” science. For most of science, the only choice is “relevant” and “irrelevant”.

The only way to make sure your scientific output will not be used by the military for war: Make sure it is irrelevant, or wrong, or better: Both.

Almost anything that will give one side an advantage will eventually be used in warfare.

Nonetheless: The individual still has the responsibility to judge what they are working on. This is often not easy.

[…]

Are better weapons bad?

People are inclined to believe that better weapons are (morally) bad.

I visited Japan in early 2016 - both Hiroshima and Tanegashima. I will not talk about Hiroshima…. Tanegashima is an island in the south of Japan.

1467: Japan’s Feudal system collapses. Sengoku period starts, permanent internal warfare. No side could get an upper hand.

76 years later, the first musket arrives in Japan (1543). Full adoption into battle around the 1570s; decisive in battle from 1575.

From 1615 onward: Japan unified, century of peace. 108 years of war, ended in 40 years.

Prolonged warfare is terrible.

Superior weaponry does not always mean more civilian casualties.

Some people argue that the terrible power of nuclear weapons is what enabled the last 75 years without World Wars.

Are better weapons bad?

I do not have an answer. I think about the history of Tanegashima, with little result.

[…]

Bob Morris Sr. asked me when we met what I do.
“I study math.”
“For whom?”

[…]

It is difficult to get a man to understand something when his salary depends upon his not understanding it. People tend to pick their ideologies by function.

Halvar Flake - OffensiveCon 2020 Keynote


Data analysis is hard, and part of the problem is that few people can explain how to do it. It’s not that there aren’t any people doing data analysis on a regular basis. It’s that the people who are really good at it have yet to enlighten us about the thought process that goes on in their heads.

Imagine you were to ask a songwriter how she writes her songs. There are many tools upon which she can draw. We have a general understanding of how a good song should be structured: how long it should be, how many verses, maybe there’s a verse followed by a chorus, etc. In other words, there’s an abstract framework for songs in general. Similarly, we have music theory that tells us that certain combinations of notes and chords work well together and other combinations don’t sound good. As good as these tools might be, ultimately, knowledge of song structure and music theory alone doesn’t make for a good song. Something else is needed.

In Donald Knuth’s legendary 1974 essay Computer Programming as an Art, Knuth talks about the difference between art and science. In that essay, he was trying to get across the idea that although computer programming involved complex machines and very technical knowledge, the act of writing a computer program had an artistic component. In this essay, he says that

Science is knowledge which we understand so well that we can teach it to a computer.

Everything else is art.

At some point, the songwriter must inject a creative spark into the process to bring all the songwriting tools together to make something that people want to listen to. This is a key part of the art of songwriting. That creative spark is difficult to describe, much less write down, but it’s clearly essential to writing good songs. If it weren’t, then we’d have computer programs regularly writing hit songs. For better or for worse, that hasn’t happened yet.

Much like songwriting (and computer programming, for that matter), it’s important to realize that data analysis is an art. It is not something yet that we can teach to a computer.

Roger D. Peng and Elizabeth Matsui: The Art of Data Science


There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems.

Leo Breiman: Statistical Modeling: The Two Cultures


At first glance Leo Breiman’s stimulating paper looks like an argument against parsimony and scientific insight, and in favor of black boxes with lots of knobs to twiddle. At second glance it still looks that way, but the paper is stimulating, and Leo has some important points to hammer home.

[…]

Rule 1. New methods always look better than old ones. Neural nets are better than logistic regression, support vector machines are better than neural nets, etc. In fact it is very difficult to run an honest simulation comparison, and easy to inadvertently cheat by choosing favorable examples, or by not putting as much effort into optimizing the dull old standard as the exciting new challenger.

Rule 2. Complicated methods are harder to criticize than simple ones. By now it is easy to check the efficiency of a logistic regression, but it is no small matter to analyze the limitations of a support vector machine.

Brad Efron: Comment on Statistical Modeling: The Two Cultures


Occasionally, one sees frequentism defined in careerist terms, e.g., “A statistician who always rejects null hypotheses at the 95% level will over time make only 5% errors of the first kind.” This is not a comforting criterion for the statistician’s clients.

[…]

Something important changed in the world of statistics in the new millennium. Twentieth-century statistics, even after the heated expansion of its late period, could still be contained within the classic Bayesian–frequentist–Fisherian inferential triangle (Figure 14.1). This is not so in the twenty-first century. Some of the topics discussed in Part III—false-discovery rates, post-selection inference, empirical Bayes modeling, the lasso—fit within the triangle but others seem to have escaped, heading south from the frequentist corner, perhaps in the direction of computer science.

Bradley Efron & Trevor Hastie: Computer Age Statistical Inference


People sometimes think (or complain) that working with quantitative data like this inures you to the reality of the human lives that lie behind the numbers. Numbers and measures are crude; they pick up the wrong things; they strip out the meaning of what’s happening to real people; they make it easy to ignore what can’t be counted. There’s something to those complaints. But it’s mostly a lazy critique. In practice, I find that far from distancing you from questions of meaning, quantitative data forces you to confront them. The numbers draw you in. Working with data like this is an unending exercise in humility, a constant compulsion to think through what you can and cannot see, and a standing invitation to understand what the measures really capture—what they mean, and for whom. Those regular spikes in the driving data are the pulse of everyday life as people go out to have a good time at the weekend. That peak there is the Mardi Gras parade in New Orleans. That bump in Detroit was a Garth Brooks concert. Right across the country, that is the sudden shock of the shutdown the second weekend in March. It was a huge collective effort to buy time that, as it turns out, the federal government has more or less entirely wasted. And now through May here comes the gradual return to something like the baseline level of activity from January, proceeding much more quickly in some cities than in others.

I sit at my kitchen-counter observatory and look at the numbers. Before my coffee is ready, I can quickly pull down a few million rows of data courtesy of a national computer network originally designed by the government to be disaggregated and robust, because they were convinced that was what it would take for communication to survive a nuclear war. I can process it using software originally written by academics in their spare time, because they were convinced that sophisticated tools should be available to everyone for free. Through this observatory I can look out without even looking up, surveying the scale and scope of the country’s ongoing, huge, avoidable failure. Everything about this is absurd.

Kieran Healy: The Kitchen Counter Observatory


Surgisphere appears to be the Theranos, or possibly the Cornell Food and Brand Lab, of medical research, and Lancet is a serial enabler of research fraud (see this news article by Michael Hiltzik), and it’s easy to focus on that. But remember all the crappy papers these journals publish that don’t get retracted, cos they’re not fraudulent, they’re just crappy. Retracting papers just cos they’re crappy—no fraud, they’re just bad science—I think that’s never ever ever gonna happen. Retraction is taken as some kind of personal punishment meted out to an author and a journal. This frustrates me to no end. What’s important is the science, not the author. But it’s not happening. So, when we hear about glamorous/seedy stories of fraud, remember the bad research, the research that’s not evilicious but just incompetent, maybe never even had a chance of working. That stuff will stay in the published literature forever, and journals love publishing it.

As we say in statistics, the shitty is the enemy of the good.

  1. Open code, open data, open review . . .

So, you knew I’d get to this…

Just remember, honesty and transparency are not enuf. Open data and code don’t mean your work is any good. A preregistered study can be a waste of time. The point of open data and code is that it makes it easier to do post-publication review. If you’re open, it makes it easier for other people to find flaws in your work. And that’s a good thing.

An egg is just a chicken’s way of making another egg.

And the point of science and policy analysis is not to build beautiful careers. The purpose is to learn about and improve the world.

Andrew Gelman: bla bla bla PEER REVIEW bla bla bla


Sometimes somebody says something to me, like a whisper of a hint of an echo of something half-forgotten, and it lands on me like an invocation. The mania sets in, and it isn’t enough to believe; I have to know.

[…]

So: the technical reason we started counting arrays at zero is that in the mid-1960’s, you could shave a few cycles off of a program’s compilation time on an IBM 7094. The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.

There are a few points I want to make here.

The first thing is that as far as I can tell nobody has ever actually looked this up.

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing, the more pathetic and irresponsible that sounds.

mhoye: Citation Needed


Of course, someone has to write for loops. It doesn’t have to be you.

Jenny Bryan: Row Oriented Workflows in R With the Tidyverse
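
In the spirit of the talk, a sketch of handing the loop to a higher-order function from purrr; the data frame and the row-wise operation are invented for illustration:

```r
library(purrr)

df <- data.frame(a = 1:3, b = c(10, 20, 30))

## Someone writes the loop explicitly...
out <- numeric(nrow(df))
for (i in seq_len(nrow(df))) {
  out[i] <- df$a[i] + df$b[i]
}

## ...or delegates it: pmap_dbl() maps row-wise over the columns.
out2 <- pmap_dbl(df, function(a, b) a + b)
identical(out, out2)  # TRUE
```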


An over-simplified and dangerously reductive diagram of a data system might look like this:

Collection → Computation → Representation

Whenever you look at data — as a spreadsheet or database view or a visualization — you are looking at an artifact of such a system. What this diagram doesn’t capture is the immense branching of choice that happens at each step along the way. As you make each decision — to omit a row of data, or to implement a particular database structure, or to use a specific colour palette — you are treading down a path through this wild, tall grass of possibility. It will be tempting to look back and see your trail as the only one that you could have taken, but in reality a slightly divergent you who’d made slightly divergent choices might have ended up somewhere altogether different. To think in data systems is to consider all three of these stages at once.

Jer Thorp: You Say Data, I Say System


The thing that is most alluring about the Gini coefficient also turns out to be its greatest shortcoming. By collapsing the whole rainbow of the income distribution into a single statistical point of white light, it necessarily conceals much of great interest. That is of course true of any single summary measure… The best measures are those that match our purpose, or pick up on the places where important changes are happening. We should pick and mix with that in mind.

Angus Deaton and Anne Case


After describing the perverse interaction between wealth gaps, education, mortality trends and political economy, Case and Deaton note that “You could crunch the Gini coefficient to as many decimal places as you like, and you’d learn next to nothing about what’s really going on here.” Quite so, but surely it is unreasonable to demand of one measure of inequality along one dimension that it shed light on complex social interactions or detect causal relationships.

That would be like urging us to abandon the Centigrade scale as a measure of temperature because it so badly fails to inform us about how climate change plays havoc with rainfall patterns, the incidence of extreme weather events, or the extent of sea level rises. You could calculate average global temperature increases to as many decimal places in degrees Celsius as you liked, and you would be none the wiser about the diversity of consequences of climate change around the world.

Francisco Ferreira
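
A small R sketch of the point at issue: two income distributions with the same theoretical Gini coefficient (an exponential and a Pareto with shape 1.5, each with Gini 0.5) but very different top tails. The distributions, scale parameters, and sample size are illustrative assumptions.

```r
## Sample Gini coefficient via the standard sorted-data formula.
gini <- function(x) {
  x <- sort(x)
  n <- length(x)
  2 * sum(seq_len(n) * x) / (n * sum(x)) - (n + 1) / n
}

set.seed(1)
exp_inc    <- rexp(1e5, rate = 1 / 40000)    # thin-tailed incomes
pareto_inc <- 20000 / runif(1e5)^(1 / 1.5)   # heavy-tailed incomes

gini(exp_inc)      # close to 0.5
gini(pareto_inc)   # also close to 0.5, despite a far heavier top tail

## What the single number conceals: the very top of each distribution.
quantile(exp_inc, 0.999) / median(exp_inc)
quantile(pareto_inc, 0.999) / median(pareto_inc)
```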


Software has been around since the 1940s. Which means that people have been faking their way through meetings about software, and the code that builds it, for generations. Now that software lives in our pockets, runs our cars and homes, and dominates our waking lives, ignorance is no longer acceptable. The world belongs to people who code. Those who don’t understand will be left behind. (Josh Tyrangiel, header)

[…]

A computer is a clock with benefits. They all work the same, doing second-grade math, one step at a time: Tick, take a number and put it in box one. Tick, take another number, put it in box two. Tick, operate (an operation might be addition or subtraction) on those two numbers and put the resulting number in box one. Tick, check if the result is zero, and if it is, go to some other box and follow a new set of instructions.

[…]

It’s a good and healthy exercise to ponder what your computer is doing right now. Maybe you’re reading this on a laptop: What are the steps and layers between what you’re doing and the Lilliputian mechanisms within? When you double-click an icon to open a program such as a word processor, the computer must know where that program is on the disk. It has some sort of accounting process to do that. And then it loads that program into its memory—which means that it loads an enormous to-do list into its memory and starts to step through it. What does that list look like?

Maybe you’re reading this in print. No shame in that. In fact, thank you. The paper is the artifact of digital processes. Remember how we put that “a” on screen? See if you can get from some sleepy writer typing that letter on a keyboard in Brooklyn, N.Y., to the paper under your thumb. What framed that fearful symmetry?

Thinking this way will teach you two things about computers:
One, there’s no magic, no matter how much it looks like there is. There’s just work to make things look like magic.
And two, it’s crazy in there.

[…]

You can tell how well code is organized from across the room. Or by squinting or zooming out. The shape of code from 20 feet away is incredibly informative. Clean code is idiomatic, as brief as possible, obvious even if it’s not heavily documented. Colloquial and friendly. As was written in Structure and Interpretation of Computer Programs (aka SICP), the seminal textbook of programming taught for years at MIT, “A computer language is not just a way of getting a computer to perform operations – it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.” A great program is a letter from current you to future you or to the person who inherits your code. A generous humanistic document.

Of course all of this is nice and flowery; it needs to work, too.

Paul Ford: What is Code?


The Postulates of Mathematics Were Not on the Stone Tablets that Moses Brought Down from Mt. Sinai. It is necessary to emphasize this. We begin with a vague concept in our minds, then we create various sets of postulates, and gradually we settle down to one particular set. In the rigorous postulational approach, the original concept is now replaced by what the postulates define. This makes further evolution of the concept rather difficult and as a result tends to slow down the evolution of mathematics. It is not that the postulation approach is wrong, only that its arbitrariness should be clearly recognized, and we should be prepared to change postulates when the need becomes apparent.

— Richard Hamming


Teachers should prepare the student for the student’s future, not for the teacher’s past.

[…]

Education is what, when, and why to do things. Training is how to do it.

[…]

In science, if you know what you are doing, you should not be doing it. In engineering, if you do not know what you are doing, you should not be doing it. Of course, you seldom, if ever, see either pure state.

Richard Hamming: The Art of Doing Science and Engineering


I note with fear and horror that even in 1980, language designers and users have not learned this lesson [mandatory run-time checking of array bounds]. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.

[…]

I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature. It also requires a willingness to accept objectives which are limited by physical, logical, and technological constraints, and to accept a compromise when conflicting objectives cannot be met. No committee will ever do this until it is too late.

C.A.R. Hoare: The Emperor’s Old Clothes


I like this definition: A tool addresses human needs by amplifying human capabilities. That is, a tool converts what we can do into what we want to do. A great tool is designed to fit both sides.

Bret Victor: A Brief Rant on the Future of Interaction Design


Discoverability is often cited as npm’s biggest flaw. Many blog posts – scratch that, entire websites – have been created to try and mitigate the difficulty of finding what you need on npm. Everyone has an idea about how to make it easier to find needles in the haystack, but no-one bothers to ask what all this hay is doing here in the first place.

Rich Harris: Small modules: it’s not quite that simple


Brave to write it, all by itself, and then brave to show it. It is like opening your ribcage, and letting someone see the little bird you have inside. What if they don’t love the bird? It’s not like you can change it. I mean.. That’s your bird.

Tycho: Fan Fiction


Look, you have two choices. You can say, “I’m a pessimist, nothing’s gonna work, I’m giving up, I’ll help ensure that the worst will happen.” Or you can grasp onto the opportunities that do exist, the rays of hope that exist, and say, “Well, maybe we can make it a better world.” It’s not much of a choice.

Noam Chomsky: Interview


There’s really only one important question worth asking, which is: what is a life well-lived?… That’s a question that can’t be answered, but one thing we can say, with a lot of certainty, is that a life well-lived is not going to be a life in which every moment is scrutinized.

Frank Lantz: Hearts and Minds


I hate abstraction. Here are some examples.

Adam Cadre: Fatal abstraction


I find that the single thing which inhibits young professionals, new students most severely, is their acceptance of standards that are too low. If I ask a student whether her design is as good as Chartres, she often smiles tolerantly at me as if to say, “Of course not, that isn’t what I am trying to do…. I could never do that.”

Then, I express my disagreement, and tell her: “That standard must be our standard. If you are going to be a builder, no other standard is worthwhile. That is what I expect of myself in my own buildings, and it is what I expect of my students.” Gradually, I show the students that they have a right to ask this of themselves, and must ask this of themselves. Once that level of standard is in their minds, they will be able to figure out, for themselves, how to do better, how to make something that is as profound as that.

Christopher Alexander: Foreword to Patterns of Software


The Stone Age didn’t end for lack of stone, and the Oil Age will end long before the world runs out of oil.

— Sheikh Ahmed Zaki Yamani


[I told the fourth-graders] I was thinking of a number between 1 and 10,000. … They still cling stubbornly to the idea that the only good answer is a yes answer. This, of course, is the result of miseducation in which “right answers” are the only ones that pay off. They have not learned how to learn from a mistake, or even that learning from mistakes is possible. If they say, “Is the number between 5,000 and 10,000?” and I say yes, they cheer; if I say no, they groan, even though they get exactly the same amount of information in either case. The more anxious ones will, over and over again, ask questions that have already been answered, just for the satisfaction of hearing a yes.

— John Holt: How Children Fail
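
Holt’s aside that a yes and a no carry “exactly the same amount of information” holds because that question splits the remaining range evenly; a back-of-envelope check in R (the bits() helper is ours):

```r
## Shannon information content of an answer with probability p.
bits <- function(p) -log2(p)

## "Is it between 5,000 and 10,000?" splits 10,000 numbers in half:
bits(5000 / 10000)  # yes: 1 bit
bits(5000 / 10000)  # no: also 1 bit; cheering for yes buys nothing

## An uneven question rewards one answer over the other:
bits(1000 / 10000)  # yes to "between 9,000 and 10,000?": ~3.32 bits
bits(9000 / 10000)  # no: ~0.15 bits, but that's the likely outcome
```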


Ian is a game design teacher and a professional skeptic. People call him a “curmudgeon”, but they don’t really understand how much love, how much actual faith, that kind of skepticism takes. On a pretty regular basis one of us will IM the other something like “help” or “fuck” or “people are terrible”.

Only when you fully believe in how wonderful something is supposed to be does every little daily indignity start to feel like some claw of malaise. At least, that’s how I explain Ian to other people.

— Leigh Alexander: The Unearthing


Q: So it’s fine to say, everybody should learn a little bit about how to program and this way of thinking because it’s valuable and important. But then maybe that’s just not realistic. Donald Knuth told me that he thinks two percent of the population have brains wired the right way to think about programming.

A: That same logic would lead you to say that one percent of the US’s population is wired to understand Mandarin. The reasoning there is equivalent.

— Hal Abelson: Interview


Many professions require some form of programming. Accountants program spreadsheets; musicians program synthesizers; authors program word processors; and web designers program style sheets.

[…]

The typical course on programming teaches a “tinker until it works” approach. When it works, students exclaim “It works!” and move on. Sadly, this phrase is also the shortest lie in computing, and it has cost many people many hours of their lives.

[…]

By “good programming,” we mean an approach to the creation of software that relies on systematic thought, planning, and understanding from the very beginning, at every stage, and for every step. To emphasize the point, we speak of systematic program design and systematically designed programs. Critically, the latter articulates the rationale of the desired functionality. Good programming also satisfies an aesthetic sense of accomplishment; the elegance of a good program is comparable to time-tested poems or the black-and-white photographs of a bygone era. In short, programming differs from good programming like crayon sketches in a diner from oil paintings in a museum. No, this book won’t turn anyone into a master painter. But, we would not have spent fifteen years writing this edition if we didn’t believe that
everyone can design programs
and
everyone can experience the satisfaction that comes with creative design.

[…]

When you were a small child, your parents taught you to count and perform simple calculations with your fingers: “1 + 1 is 2”; “1 + 2 is 3”; and so on. Then they would ask “what’s 3 + 2?” and you would count off the fingers of one hand. They programmed, and you computed. And in some way, that’s really all there is to programming and computing. Now it is time to switch roles.

How to Design Programs


You can’t sell someone the solution before they’ve bought the problem.

— Chip Morningstar: Smart people can rationalize anything


when you don’t create things, you become defined by your tastes rather than ability. your tastes only narrow & exclude people. so create

— why the lucky stiff


When you believe you have a future, you think in terms of generations and years. When you do not, you live not just by the day – but by the minute.

— Iris Chang – Suicide Note


It seems like most people ask: “How can I throw my life away in the least unhappy way?”

— Stewart Brand


Q: Are you ever afraid of someone stealing your thunder?

A: I think anybody really good is going to want to do their own thing. Anybody who’s not really good, you don’t have to worry too much about.

Chris Hecker: Interview


Living organisms are shaped by evolution to survive, not necessarily to get a clear picture of the universe. For example, frogs’ brains are set up to recognize food as moving objects that are oblong in shape. So if we take a frog’s normal food – flies – paralyze them with a little chloroform and put them in front of the frog, it will not notice them or try to eat them.

It will starve in front of its food! But if we throw little rectangular pieces of cardboard at the frog it will eat them until it is stuffed! The frog only sees a little of the world we see, but it still thinks it perceives the whole world.

Now, of course, we are not like frogs! Or are we?

Alan Kay: The Center of “Why?”


Pick a plane or a cave wall to project the shadow of the Real World onto, and tell a story about the outlines it makes. The trick is to shrug and smile and pick another plane and do it all again to get a completely different shadow, until you find the one most useful for the day. It’s a magic trick for most folks.

Down is just the most common way out


To an architect, imagination is mostly about the future. To invent the future, one must live in it, which means living (at least partly) in a world that does not yet exist. Just as a driver whizzing along a highway pays more attention to the front window than the rear, the architect steers by looking ahead. This can sometimes make them seem aloof or absent-minded, as if they are someplace else. In fact, they are. For them, the past is a lesson, the present is fleeting; but the future is real. It is infinite and malleable, brimming with possibility.

Danny Hillis: The Power of Conviction


If the first line of your [R] script is setwd("C:\Users\jenny\path\that\only\I\have"), I will come into your lab and SET YOUR COMPUTER ON FIRE.

Jenny Bryan: here, here
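
The fire-proof alternative Bryan advocates is to build paths from the project root with the here package; a minimal sketch (“data/raw.csv” is a hypothetical file):

```r
library(here)

## here() resolves against the project root, so the script runs on any
## machine that has a copy of the project, not just one C: drive.
df <- read.csv(here("data", "raw.csv"))
```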


It becomes important in such a climate of opinion to emphasize that books do not store knowledge. They contain symbolic codes that can serve us as external mnemonics for knowledge. Knowledge can exist only in living human minds.

— Kieran Egan: The Educated Mind: How Cognitive Tools Shape Our Understanding


People are much smarter when they can use their full intellect and when they can relate what they are learning to situations or phenomena which are real to them. The natural reaction, when someone is having trouble understanding what you are explaining, is to break up the explanation into smaller pieces and explain the pieces one by one. This tends not to work, so you back up even further and fill in even more details.

But human minds do not work like computers: it is harder, not easier, to understand something broken down into all the precise little rules than to grasp it as a whole. It is very hard for a person to read a computer assembly language program and figure out what it is about…

Studying mathematics one rule at a time is like studying a language by first memorizing the vocabulary and the detailed linguistic rules, then building phrases and sentences, and only afterwards learning to read, write, and converse. Native speakers of a language are not aware of the linguistic rules: they assimilate the language by focusing on a higher level, and absorbing the rules and patterns subconsciously. The rules and patterns are much harder for people to learn explicitly than is the language itself.

William Thurston: Mathematical Education


A lot of the stuff going on [in AI] is not very ambitious. In machine learning, one of the big steps that happened in the mid-’80s was to say, “Look, here’s some real data – can I get my program to predict accurately on parts of the data that I haven’t yet provided to it?” What you see now in machine learning is that people see that as the only task.

Stuart Russell


Like most readers, I had functionally consigned [our game] to the furnace. I had let it float away on one of those little lantern boats in a way that brought me closure, if no one else. Insufficient.
Fucking insufficient.

You have to get back on the horse. Somehow, and I don’t know how this kind of thing starts, we have started to lionize horseback-not-getting-on: these casual, a priori assertions of inevitable failure, which is nothing more than a gauze draped over your own pulsing terror. Every creative act is open war against The Way It Is. What you are saying when you make something is that the universe is not sufficient, and what it really needs is more you. And it does, actually; it does. Go look outside. You can’t tell me that we are done making the world.

Tycho: A Matter of Scale


Q: At the [NYU] Game Center, we’re interested in the role of the university as an alternate place for thinking about games… What in your opinion are some of the big interesting problems that students should be working on?

A: My advice for students is… I question the question. I don’t think there are problems that students should be working on. I think students should be making games that are interesting and push the boundaries, and those will generate the problems.

— Chris Hecker


A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

— John Gall: Systemantics


We have this limited bubble of experience. We can only have so many experiences in our lifetime to build models from, and we’re abstracting from that data. We’ve found, through evolution, two ways to get more data, to build more elaborate models of the world. One is to have toy experiences, little counterfeit experiences. The other one is to learn from the experience of others. When somebody tells you a story, you can actually learn from that story, incorporate it into your model of the world to make your model more accurate based upon that data that you got from somebody else. So over time, we have come to call one of these things “play” and the other one “storytelling”. These are both fundamentally educational technologies that allow us to build more elaborate models of the world around us, by supplanting our limited experience with other experiences.

Will Wright: Gaming Reality


John [McCarthy]’s world is a world of ideas, a world in which ideas don’t belong to anyone, and when an idea is wrong, just the idea - not the person - is wrong. A world in which ideas are like young birds, and we catch them and proudly show them to our friends. The bird’s beauty and the hunter’s are distinct….

Some people won’t show you the birds they’ve caught until they are sure, certain, positive that they - the birds, or themselves - are gorgeous, or rare, or remarkable. When your mind can separate yourself from your bird, you will share it sooner, and the beauty of the bird will be sooner enjoyed. And what is a bird but for being enjoyed?

Richard Gabriel: The Design of Parallel Programming Languages