Book group rules
I'm in a book group that has a number of rules. These rules govern the following aspects of the group:
- Who can propose a book (we take it in turns)
- What types of book can be proposed (fiction or poetry)
- The length of proposed books (max 200 pages)
- How a book gets chosen (complex voting system)
- How long the group has to read a book (3 months)
Sometimes the group considers changes or additions to the rules. For instance:
- Setting a minimum page length in addition to the maximum page length
- Varying the time allotted to read a book (e.g. by making it proportional to the length of the book)
- Allowing non-fiction books or short stories
- Allowing books that members of the group have already read
Why do we have these rules? In theory, they exist to help the members of the group derive as much benefit from participating as possible. These benefits broadly fall under two categories: intellectual benefits and social benefits. Within each category, the benefits essentially boil down to "more", "different" and "better".
1. Intellectual benefits
- You read more books.
- You read a greater variety of books.
- You engage more deeply with the books you read for the group than with those you read on your own.
2. Social benefits
- You see the members of the group more often or more regularly than you otherwise would have done.
- You discuss topics that you might otherwise not have discussed with members of the group.
- You have richer or more meaningful conversations with the members of the group than you might otherwise have done.
3. Potential risks
- You end up reading more books that you don't enjoy.
- You end up spending more time reading books at the expense of things you would rather have done, e.g. exercise.
- Participation in the book group puts a strain on your relationships with other members of the group.
Conclusion
For me, the most attractive benefits of a book group are 1.3 and 2.3, and the greatest risk is 3.1. Given this, if I were to devise a set of rules for a book group, they should probably have the following characteristics:
- The rules should seek to maximise the chance that the group picks a book that will lead to an interesting discussion.
- The rules should seek to minimise the chance that the group picks a book that is unenjoyable to read.
- There should be less emphasis on "process" rules that seek to minimise social friction and more emphasis on "outcome" rules that seek to increase the quality of the books that are chosen.
A critique of Longtermism
According to the EA website, there are four values that "unite effective altruism":
- Prioritization
- Impartial altruism
- Open truthseeking
- Collaborative spirit
In this post, I'll critique value 2, "impartial altruism". I will then argue that this value underpins the most common EA argument in favour of longtermism and that therefore, if a commitment to impartial altruism is weakened, EA's commitment to longtermism should be similarly weakened.
Impartial altruism
The value of impartial altruism is described as follows:
Impartial altruism: We believe that all people count equally. Of course it's reasonable to have special concern for one's own family, friends and life. But, when trying to do as much good as possible, we aim to give everyone's interests equal weight, no matter where or when they live. This means focusing on the groups who are most neglected, which usually means focusing on those who don’t have as much power to protect their own interests.
It's interesting that this description explicitly acknowledges the reasonableness of having "special concern" for one's own family, friends and life. Unfortunately, however, it does not explain why it is reasonable to have special concern for certain people. Nor does it explain why, if such special concern is reasonable, we should "give everyone's interests equal weight". After all, if it is in fact reasonable to have special concern for some people, surely it might be reasonable to give those people's interests greater weight?
In my opinion, the most compelling reason to have special concern for certain people is that we have special responsibilities towards those people. This seems particularly compelling in the case of parent-child relationships. I believe that most people would accept that parents have an especially strong duty of care to their children, and that this duty often makes it morally permissible for parents to prioritise their own children's wellbeing over that of others, even if doing so fails to maximise overall wellbeing. Special responsibilities seem to exist in other relationships too, like those between partners, friends and other family members. Many feel that they also exist, albeit to a lesser extent, towards one's peers, colleagues and local communities.
It's beyond the scope of this post to put forward a comprehensive account of how special responsibilities might arise, though plausible explanations often appeal to concepts of love, kindness and reciprocity. Nevertheless, a belief in the existence of these special responsibilities is widespread, and on the whole, acting in accordance with them is considered morally virtuous. I therefore believe they are moral commitments that we should not give up easily.
Longtermism
According to the EA website's "Introduction to Longtermism", longtermism is "the view that positively influencing the long-term future is a key moral priority of our time." Specifically, the "long-term future" means "something like the period from now to many thousands of years in the future, or much further still."
As I understand it, the value of impartial altruism underpins the most common argument for longtermism. As a reminder, impartial altruism requires that:
when trying to do as much good as possible, we aim to give everyone's interests equal weight, no matter where or when they live. This means focusing on the groups who are most neglected
Accordingly, the most common justification for longtermism holds that the interests of people in the future matter just as much as those of people in the present, and their interests are currently being neglected. Therefore, when trying to do as much good as possible, we should focus on the interests of people in the future.
However, if EAs accept that it is reasonable to have special concern for certain people over others, this leaves open the possibility that it is reasonable to have special concern for currently existing people over future people. And indeed, that is what many people strongly believe. I therefore think that the onus is on EAs to explain why it's reasonable for people to have special concern for friends, family and oneself over others, but not reasonable to have special concern for currently existing people over future people.
Typically, EAs criticise future discounting in order to defend the idea that future people matter just as much as currently existing people. However, an argument for special concern need not be based on any notion of future discounting. As discussed above, a compelling argument for the reasonableness of special concern is one based on the existence of special responsibilities to certain people. I personally find it extremely plausible that we have special responsibilities to currently existing people that we do not have to future people. Therefore, in order for EAs to make a more compelling case for longtermism, I believe they need to articulate convincingly why it is reasonable to have special concern for some people over others in a way that does not imply it is reasonable to have special concern for existing people over future people.
Quantity over quality
This year, we started a daily writing exercise. The terms of engagement were:
- write for 1 hour a day (usually between 8pm and 9pm)
- share what we've written at the end of each week (Sunday at 9pm)
This week, we've mixed it up a bit. We're still writing for 1 hour a day, but now we're also sharing what we've written at the end of every day.
This shift is supposed to encourage more actual writing. Whereas previously we might have spent most of the week thinking about a topic and only put pen to paper at the weekend, the new sharing schedule forces us to produce written words every single day.
Removing time to think may well result in lower quality writing. But I think that's ok for our (or at least my) purposes. For me, the primary goal of this exercise is to become a faster, more fluent writer.
When I was at university, I had to write 3 essays a week. By the end of the first term, I'd become a writing machine. Were the essays all of a high quality? Absolutely not. But they definitely improved over time. I'd like to recreate that sense of progression now.
Writing is effortful and currently feels painfully slow. When there's little pressure to actually produce something, it's very easy to get distracted. Earlier I said that in previous weeks I might have spent most of the week "thinking about a topic", but that's a generous characterisation of how I spent the time. Mostly it was procrastination until the deadline was close enough to be motivating.
In addition to the effort, there's the fear of judgement. Despite knowing all the clichés - "perfect is the enemy of good", "done is better than perfect" - the fear of producing dross leads to complete self-censorship.
So for now, I'm going to prioritise quantity over quality and hope that, as in the parable of the pottery class, the quality will inevitably follow.
The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the "quantity" group: fifty pounds of pots rated an "A", forty pounds a "B", and so on. Those being graded on "quality", however, needed to produce only one pot - albeit a perfect one - to get an "A".
Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the "quantity" group was busily churning out piles of work - and learning from their mistakes - the "quality" group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.
5-star Open Data
For decades now, Tim Berners-Lee has been pushing the idea of "The Semantic Web".
The Semantic Web isn't just about putting data on the web. It is about making links, so that a person or machine can explore the web of data. With linked data, when you have some of it, you can find other, related, data.
Tim Berners-Lee, Linked Data
To promote this vision, he developed a 5-star rating system for data on the web:
★ make your stuff available on the Web (whatever format) under an open license
★★ make it available as structured data (e.g., Excel instead of image scan of a table)
★★★ make it available in a non-proprietary open format (e.g., CSV instead of Excel)
★★★★ use URIs to denote things, so that people can point at your stuff
★★★★★ link your data to other data to provide context
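To see what the top of the ladder means in practice, here's a minimal sketch of the jump from three stars to five, written in Python with the rdflib library. This is my own illustration, not an official W3C recipe: the DBpedia and FOAF identifiers are real, but the example itself is assumed for the purpose of the sketch.

```python
# A 3-star version of the same fact: open and non-proprietary (CSV),
# but "Tim Berners-Lee" and "W3C" are just strings - nothing to point at.
# Tim Berners-Lee,World_Wide_Web_Consortium

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

DBR = Namespace("http://dbpedia.org/resource/")   # things, denoted by URIs
DBO = Namespace("http://dbpedia.org/ontology/")   # properties

g = Graph()
tbl = DBR["Tim_Berners-Lee"]

# 4 stars: the subject is a URI, so other people can point at your stuff.
g.add((tbl, RDF.type, FOAF.Person))
g.add((tbl, FOAF.name, Literal("Tim Berners-Lee")))

# 5 stars: a link out to somebody else's data, providing context.
g.add((tbl, DBO.employer, DBR["World_Wide_Web_Consortium"]))

print(g.serialize(format="turtle"))
```

The last triple is what earns the fifth star: it points out into DBpedia's dataset, which a person or machine can follow to find other, related, data.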
Today, most data only gets as far as star 2 or 3, so there's still quite a way to go...
But in the meantime, you can buy 5-star data mugs and other merch from the W3C shop should you wish to support the cause.
Personalisation
The dominance of the Tech Giants is often attributed to their use of data to personalise their products. Be it ads, media, or product recommendations, tech companies' ability to tailor their content to our individual preferences is arguably what gives them such an edge over non-tech companies.
This post is about personalised content and why I think it's a bad thing. By "content", I mean media and entertainment: news, music, films, tv shows, books, articles, games etc. I'm not anti-personalisation in all contexts - personalised medicine, for instance, would be transformative - but when it comes to content, I am.
Why do I think that personalised content is a bad thing? For two reasons.
First, because personalisation is constraining: it limits your exposure to ideas and experiences. Platforms that serve personalised content typically try to predict what you'll like based on what you've enjoyed in the past, and then serve you more of the same. But there is inherent value in being exposed to a wide variety of content. We may discover that we enjoy music quite unlike anything we've listened to before; we may come to see an issue from a perspective entirely different from those we've considered before; we may stumble across a fascinating idea in a domain we'd never even heard of. A varied diet of content is more interesting and produces more interesting individuals.
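To see why "more of the same" is self-reinforcing, here's a toy sketch of the logic - hypothetical, not any real platform's algorithm. Items are tagged with genres, and the recommender simply favours whatever overlaps most with what you've already liked:

```python
from collections import Counter

# A hypothetical catalogue: each item tagged with genres.
CATALOGUE = {
    "Blue Train":     {"jazz"},
    "Kind of Blue":   {"jazz"},
    "A Love Supreme": {"jazz"},
    "Nevermind":      {"rock", "grunge"},
    "Homogenic":      {"electronic", "art-pop"},
}

def recommend(history: list[str]) -> str:
    """Score every unseen item by its tag overlap with past likes."""
    liked = Counter(tag for item in history for tag in CATALOGUE[item])
    unseen = [item for item in CATALOGUE if item not in history]
    return max(unseen, key=lambda item: sum(liked[t] for t in CATALOGUE[item]))

# A listener with two jazz records is recommended... a third jazz record.
# Each acceptance strengthens the jazz signal, so the loop never widens.
print(recommend(["Blue Train", "Kind of Blue"]))  # -> A Love Supreme
```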
The second reason I think personalised content is a bad thing is because it reduces the potential for shared experiences. People are different, so when content is personalised, each person experiences different things. But if we're not watching the same tv shows as our friends, how can we discuss the latest plot twists? If we're not listening to the same music as our friends, how can we all sing along when our favourite song plays at a party? We can't. Without shared experiences, there can be no culture. We won't have the same reference points; we'll be disconnected. And that's pretty sad.
Money and therapy
One of the biggest factors determining the efficacy of therapy is the quality of the therapeutic relationship. It seems that how you feel about your therapist is just as important as, if not more important than, the particular type of therapy you receive.
This makes some intuitive sense. In order to get value from therapy, you typically have to attend sessions over an extended period of time, share your innermost thoughts and feelings with your therapist, and follow any exercises or advice they might give you. All of these things are unlikely to happen if you don't actually like or trust your therapist.
But one thing that might undermine your positive feelings towards your therapist is the fact that you're paying them for their services. Money inevitably affects the nature of relationships, and it may spark thoughts like:
- if my therapist really cared about me, they wouldn't charge me to see them.
- it's in my therapist's financial interests to keep me as their patient for as long as possible, so are they really doing all they can to help me?
- my therapist is charging me £X / hour - are they really worth it or should I see a cheaper therapist?
Unfortunately, once you've entertained these kinds of thoughts, it's hard to dismiss them wholeheartedly.
And it's not just patients' feelings that are affected by money. Many therapists experience anxiety, or even guilt, over charging a fee. The idea of refusing to help people in need because they're too poor to afford a fee makes many therapists deeply uncomfortable.
All of this raises two questions: if a therapist is paid by someone other than the patient, does this allow for a better therapeutic relationship? And if so, does this mean that therapy is more likely to be effective when the patient is not directly paying the therapist for their services?
It might seem unlikely that the structural issue of how therapists are paid would have much bearing on therapeutic outcomes. But given the importance of the therapeutic relationship, I would have thought it's worth exploring whether eliminating tensions relating to money could improve outcomes.
My instinct is that to see any effect, there would need to be a total disconnect between the therapist getting paid and the patient receiving therapy. If a person's therapist were being paid by a friend or family member, for instance, the tensions around money would likely remain. You would need to eliminate the transactional element of therapy entirely: the therapist's payment should not be tied to seeing any particular patient. The obvious way to achieve this would be to have therapists paid a salary by the state or a charity (the other benefits of which would clearly far outweigh anything being discussed here).
We might be tempted to look for answers to these questions by comparing therapeutic outcomes between patients receiving therapy on the NHS and those paying privately. Unfortunately this isn't a good comparison because the NHS has such long waiting times (6-12 months) and offers such short treatment courses (typically one block of 6 or 12 sessions) compared to the private sector. But if we could control for these factors and just vary the party paying the therapist, I would be interested to see if this had any bearing on therapeutic outcomes.
What is interoperability?
Interoperability is about communication.
Imagine needing a Gmail account to email people with Gmail addresses; a Hotmail account to email people with Hotmail addresses; a Yahoo account to email people with Yahoo addresses. And so on and so forth.
Luckily, that's not the world we live in; email accounts are interoperable. You can send emails from a Gmail address to any other email address. Contrast this with messaging apps, which are not interoperable. You do need a WhatsApp account to message people on WhatsApp; a Signal account to message people on Signal; a Telegram account to message people on Telegram.
This might feel like a minor inconvenience, and in the case of messaging apps, it is. But in other domains, it's a major problem. Take healthcare, for example. Hospitals, GP surgeries and clinics all use different systems, but it's critically important that they be able to communicate and share medical records with one another.
One solution to the problem of interoperability would be to force everyone to use exactly the same system. But this kind of forced monopoly risks all the typical issues associated with monopolies: higher prices, lower quality and lack of innovation. And more importantly, different groups have different needs; the needs of a GP surgery are not the same as those of a hospital, so it's highly unlikely that a single system could serve both effectively.
A much better solution to the problem of interoperability would be to let different organisations choose their own systems, so long as those systems are all interoperable. This is what we should be aiming for in healthcare and in many other domains.
One of the major features of interoperable systems, however, also produces a barrier to adoption. Interoperability creates network effects: the more systems there are communicating via a particular standard, the more valuable it becomes for your system to adopt that same standard. But until a critical mass of systems has adopted a standard, there is much less value to you in adopting it.
Perhaps the best way to avoid this kind of "cold start" problem is to get an authority to dictate that a particular standard must be used. For instance, the NHS might refuse to buy any systems that do not conform to the FHIR specification. This then creates an immediate incentive for developers to build systems adhering to FHIR, regardless of the number of existing systems that already adhere to it.
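As a sketch of what conformance buys you in practice: FHIR is a RESTful standard, so fetching a patient record looks the same whichever vendor built the server. The example below points at HAPI's public FHIR test server (assumed here to be available), and the patient id is purely illustrative:

```python
import requests

BASE = "https://hapi.fhir.org/baseR4"  # a public FHIR test server

def fetch_patient(patient_id: str) -> dict:
    """Fetch a Patient resource; any FHIR-conformant server answers alike."""
    resp = requests.get(
        f"{BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Swap BASE for any other conformant system - a hospital's, a GP surgery's -
# and this code doesn't change. That is what interoperability means.
record = fetch_patient("example")  # illustrative id
print(record["resourceType"])      # -> "Patient"
```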
Equity as a disincentive
In the world of startups, there's a commonly held view that equity is an important mechanism for aligning the interests of individuals with those of the company they work for. If you work hard, the company will become more valuable, and so too will your shares in that company. Therefore, giving shares (or at least stock options) to employees should motivate them to work harder for that company.
But as we all know by now, people are not ideal rational actors who work only to maximise their self-interest. There are other factors that drive us, sometimes to our own detriment. One such factor is how we feel about other people: if you like someone, you're more inclined to help them; if you don't like them, you're more inclined to hurt them. And if you really don't like someone, you may be prepared to hurt them even if doing so also hurts yourself.
Consider now this scenario: an employee owns 0.1% of a company, a founder owns 10%. If the value of the company goes up from £1 million to £10 million, the value of the employee's shares will rise from £1,000 to £10,000, and the value of the founder's shares will rise from £100,000 to £1,000,000. A rational employee may be motivated to work hard by the prospect of increasing the value of their own shares. But if the employee dislikes the founder, they may also feel motivated to suppress the value of the founder's shares. These two motivations are in conflict, but it's not implausible that the latter would win out. Sure, the employee would be missing out on £9,000, but the founder would be missing out on £900,000, and that's gotta hurt!
Even if the employee actually quite likes the founder, they might feel that it's unfair that their work will contribute to the founder becoming £900,000 richer whilst they only become £9,000 better off. This feeling of unfairness may well be sufficient to motivate the employee to slack off and forego their own potential gain (just as people reject "unfair" offers in the ultimatum game).
Does this kind of phenomenon have any impact on the performance of real-world startups? I don't know, but I'd be interested to find out. Employee motivation is a hot topic and it's not obvious to me that typical employee stock options provide effective incentives.
Related links
- Herbert Gintis on Game Theory. He goes so far as to describe vengeance or retribution as "one of the basic human behaviours", arguing that it was essential in the development of cooperative societies.
- Handcuffed to Uber. Perhaps evidence that employee stock options are effective? Though I'm taking this with a large pinch of salt: the article was written in 2016, and Travis Kalanick left in 2017. You do the math.
Despite employees’ immobility, morale inside Uber remains high, according to our sources, a sentiment that the jobs site Glassdoor seems to confirm. Roughly 1,600 people have reviewed Uber on the platform; the 490 who’ve rated CEO Travis Kalanick collectively award him a 91 percent approval rating.
The indignity of internal interviews
A friend recently got a promotion at work. I say "promotion", but really it was just a confirmation that she could continue to do the job she'd already been doing for the past year on an "interim" basis.
What surprised me was that she'd had to go through a formal interview process in order to get this promotion. How insulting! What better indication that she could do the job well than the fact she'd already been doing it well for the past year? The idea that you could get better evidence of her suitability from an interview was absurd.
Interviews are a notoriously bad way of predicting how people will perform in a job. Even if you aspire to run an exemplary, Kahneman-inspired "structured interview" process, time constraints, imperfect interviewers, and the difficulty of designing relevant questions mean that interviews are rarely reliable predictors of success. Nevertheless, when you're assessing candidates that are completely unknown to you, interviews at least provide some information about their suitability for the job.
But when a candidate has been working in your organisation for a while already, there is much more information available to you than could possibly be gained from an interview. Why bother asking them to "describe a time they did X", when you (or others in your organisation) already know what happened the last time they did X? Good managers should have these examples to hand and be able to provide accurate assessments of their reports' strengths and weaknesses. So if you feel that you don't have enough information about a person's suitability for a job despite them having worked in your organisation for some time, that suggests a serious failure of management.
Some people argue that interviews are necessary to ensure that promotions are "fair and transparent". All candidates get asked the same questions and are judged according to the same criteria, so there's no room for favouritism. But interviews are not the only way to run fair and transparent processes. So long as candidates all get judged according to the same criteria - and know what those criteria are - there's no reason to require that the supporting evidence comes from answering interview questions rather than real-life work performance.
But it's not just that interviews are a poorer source of information than real-life work performance. Putting internal candidates through interviews is disrespectful. If someone has already demonstrated their abilities (or lack thereof) through their sustained performance at work, why make them go through a contrived interview process? To do so is essentially to say, "We've not been paying enough attention to your performance at work, so we're going to have to judge you on the basis of what you say in the next hour instead". Surely managers should do better than that?