A few big ideas

Notes (August 2021)

I wrote this a few years ago in preparation for a very open-ended brief for a presentation. A better title than the one I had might be:

Some basic concepts from academia which should be much better known

Diminishing Marginal Returns

Returns to time/money/effort probably increase at the margin, then decrease, then go negative. Figuring out where you are on the curve, and when to stop, is damned useful.
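As a toy illustration (the payoff function here is entirely hypothetical), marginal returns can first rise, then fall, then turn negative:

```python
# Toy illustration of diminishing marginal returns.
# total_return is a hypothetical payoff from investing `effort` units:
# it rises steeply at first, flattens, then declines.

def total_return(effort):
    # Made-up curve: quadratic gains eroded by a cubic cost term.
    return 3 * effort**2 - 0.1 * effort**3

def marginal_return(effort):
    # Extra payoff from investing one more unit of effort.
    return total_return(effort + 1) - total_return(effort)

margins = [marginal_return(e) for e in range(30)]

assert margins[1] > margins[0]    # increasing returns early on...
assert margins[15] < margins[10]  # ...then diminishing returns...
assert margins[25] < 0            # ...eventually negative: stop before here.
```

The useful exercise is locating yourself on such a curve; the exact functional form matters much less than the sign and trend of the marginal return.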

Evolutionarily Stable State (ESS)

Societies/economies/ecosystems may evolve to a state where no individual has an incentive to defect; but this state can be bad for everyone (a suboptimal ESS).
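The prisoner's dilemma is the classic instance of this (the payoff numbers below are the usual textbook choices, not from the original text): mutual defection is stable, yet worse for everyone than mutual cooperation.

```python
# Prisoner's dilemma: a stable-but-bad state.
# payoff[(my_move, their_move)] = my payoff.
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(their_move):
    # The move that maximises my payoff given what the other player does.
    return max(("cooperate", "defect"), key=lambda m: payoff[(m, their_move)])

# "Everyone defects" is stable: defecting is the best response to defection...
assert best_response("defect") == "defect"
# ...yet that stable state is worse for both than mutual cooperation.
assert payoff[("defect", "defect")] < payoff[("cooperate", "cooperate")]
```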

Don't compete, cooperate

According to Homer Simpson, communism is nice in theory. So is capitalism, except for the competing agent, whose marginal return gets driven to zero. Don't be a commodity, don't compete... this may often mean cooperating. It may even be better for everyone (see ESS).

Complexity as Risk

Systems can quickly become fragile and impossible to understand/maintain. If isolated, complexity is probably benign.

Chaitin-Kolmogorov complexity

The minimal length of a program that generates something is precisely how random/complex that thing is. A beautiful definition, but also suggestive in practice: am I configuring things such that descriptions/processes are efficient?
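Kolmogorov complexity itself is uncomputable, but compressed size gives a crude practical upper bound on it, which is enough to make the intuition concrete: structured data has a short description, random data does not.

```python
import random
import zlib

# Compressed length as a rough proxy for Kolmogorov complexity:
# the compressor finds a shorter "program" for structured data.
def complexity_proxy(data: bytes) -> int:
    return len(zlib.compress(data, 9))

structured = b"ab" * 500  # 1000 bytes, describable as "'ab' repeated 500 times"

random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # 1000 random bytes

# The repetitive string compresses far better than the random one.
assert complexity_proxy(structured) < complexity_proxy(noisy)
```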

Pareto Improvement

A change which makes everyone (weakly) better off. Changes don't have to make someone worse off: we are not typically cutting up a pizza. Conversely, the demand that policy changes always be Pareto improving is absurd. If making Donald Trump 90% poorer makes a million people significantly better off, then that is almost certainly a good idea.
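The definition is mechanical enough to write down directly (the utility numbers below are invented for illustration):

```python
# A change is a Pareto improvement if nobody is worse off and at least
# one person is strictly better off.
def is_pareto_improvement(before, after):
    return (all(a >= b for b, a in zip(before, after))
            and any(a > b for b, a in zip(before, after)))

# Hypothetical utilities for three people, before and after a change.
assert is_pareto_improvement([2, 5, 1], [2, 6, 1])      # one gains, none lose
assert not is_pareto_improvement([2, 5, 1], [3, 4, 1])  # someone loses
assert not is_pareto_improvement([2, 5, 1], [2, 5, 1])  # nobody gains
```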

Statistical Significance

Roughly: how likely is it that an effect at least this large would show up by chance alone, if there were no real change? If that probability (the p-value) is more than 5%, conventionally assume no real change.
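A permutation test makes this concrete with no statistical library at all (the measurements below are made up): shuffle the group labels many times and see how often chance alone produces a difference as large as the one observed.

```python
import random

# Two small groups of measurements (hypothetical data).
group_a = [5.1, 4.8, 5.5, 5.2, 4.9, 5.3]
group_b = [5.9, 6.1, 5.7, 6.0, 5.8, 6.2]

mean = lambda xs: sum(xs) / len(xs)
observed = abs(mean(group_a) - mean(group_b))

# If the groups were really interchangeable, relabelling shouldn't matter:
# count how often a random relabelling gives a difference this extreme.
random.seed(1)
pooled = group_a + group_b
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    if abs(mean(pooled[:6]) - mean(pooled[6:])) >= observed:
        extreme += 1

p_value = extreme / trials
# These groups barely overlap, so chance almost never reproduces the gap.
assert p_value < 0.05
```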

Multiple Hypothesis testing

But you can't apply the above naively... run many statistical tests and something will pass the threshold by chance (1 in 20 times per test, assuming your assumptions are valid). You should decide what to test ahead of time and correct for multiple comparisons as much as possible.
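A quick simulation shows the trap. Under a true null hypothesis, p-values are uniform on [0, 1], so testing many true nulls at the 5% level guarantees spurious "discoveries"; a Bonferroni-style correction (dividing the threshold by the number of tests) removes almost all of them:

```python
import random

random.seed(42)
n_tests = 1000
# Simulate p-values for 1000 experiments where NOTHING real is going on:
# under the null, each p-value is uniform on [0, 1].
p_values = [random.random() for _ in range(n_tests)]

naive_hits = sum(p < 0.05 for p in p_values)            # ~5% false positives
bonferroni_hits = sum(p < 0.05 / n_tests for p in p_values)

assert naive_hits > 10          # many spurious "significant" results
assert bonferroni_hits < naive_hits  # correction removes most of them
```

Bonferroni is the bluntest correction; the point here is only that the uncorrected procedure manufactures findings out of pure noise.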

Automate

Humans are fallible, distractible and limited. Automate where possible.

Noise and Feedback

Get feedback, but be aware that much of it will be noise/idiosyncratic. Ideally you would test quantitatively on huge numbers of people (with good scientific controls), but this is basically never possible. There are some simple heuristics that seem sensible, e.g. Basecamp's "we don't record feedback; the important stuff will recur".